Llama - dirk_y's results

A governance system for onchain organizations.

General Information

Platform: Code4rena

Start Date: 06/06/2023

Pot Size: $60,500 USDC

Total HM: 5

Participants: 50

Period: 8 days

Judge: gzeon

Id: 246

League: ETH


Findings Distribution

Researcher Performance

Rank: 2/50

Findings: 3

Award: $8,787.84

Analysis: grade-b

🌟 Selected for report: 1

🚀 Solo Findings: 0

Findings Information

🌟 Selected for report: ktg

Also found by: auditor0517, dirk_y

Labels

bug
3 (High Risk)
satisfactory
upgraded by judge
edited-by-warden
duplicate-62

Awards

5296.7399 USDC - $5,296.74

External Links

Lines of code

https://github.com/code-423n4/2023-06-llama/blob/main/src/strategies/LlamaRelativeQuorum.sol#L199-L210

Vulnerability details

Impact

A malicious user with sufficient permissions can manipulate approvals and disapprovals of actions that use the relative quorum strategy. They can effectively ensure that any action has a 100% chance of being approved or disapproved, even when the ActionInfo object appears perfectly legitimate. In the worst case, the attacker could craft an action that moves funds from an Account contract and ensure the action is approved and cannot be disapproved.

Proof of Concept

This vulnerability requires the malicious user to hold a few key permissions, or to be able to queue actions in a way that obfuscates their intentions. In the best-case scenario for the attacker, they can create actions that call setRoleHolder in LlamaPolicy.sol. The malicious user is therefore likely to be a trusted member of a Llama deployment who has decided to turn malicious and execute, for example, a rug pull.

When an action is created, there is a call during _createAction to the strategy:

strategy.validateActionCreation(actionInfo);

For the relative quorum strategy, this method gets the total role supply as the number of holders for both the approval and disapproval roles, and sets the storage slots that are later used to calculate whether an action is approved or disapproved.

The problem with this method is that it has no guard preventing it from being called at any other time. As a result, a user can change the number of role holders after an action has been created and then call validateActionCreation again to adjust the storage slots, making the action easier or harder to approve/disapprove.
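To make the issue concrete, here is a simplified sketch of the vulnerable pattern (the names approximate the Llama contracts, but this is illustrative rather than the exact source): the function snapshots the live role supply into storage, and nothing restricts who can call it or when.

    // Illustrative sketch only; approximates LlamaRelativeQuorum.validateActionCreation.
    function validateActionCreation(ActionInfo calldata actionInfo) external {
      // Reads the *current* role supply rather than a value fixed at creation time.
      uint256 approvalPolicySupply = policy.getRoleSupplyAsNumberOfHolders(approvalRole);
      uint256 disapprovalPolicySupply = policy.getRoleSupplyAsNumberOfHolders(disapprovalRole);

      // These values are later used by isActionApproved/isActionDisapproved. With no
      // caller or timing guard, anyone can re-run this after changing role supplies
      // and overwrite them, moving the quorum goalposts for an existing action.
      actionApprovalSupply[actionInfo.id] = approvalPolicySupply;
      actionDisapprovalSupply[actionInfo.id] = disapprovalPolicySupply;
    }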

Below is a small diff to the existing test suite that highlights the issue. In this diff the number of role holders is massively inflated after the action is created; the action that was previously approved is no longer approved with the same number of approvals.

diff --git a/test/LlamaStrategy.t.sol b/test/LlamaStrategy.t.sol
index 136b00f..2f9a221 100644
--- a/test/LlamaStrategy.t.sol
+++ b/test/LlamaStrategy.t.sol
@@ -696,11 +696,15 @@ contract IsActionApproved is LlamaStrategyTest {
 
     ActionInfo memory actionInfo = createAction(testStrategy);
 
+    generateAndSetRoleHolders(_numberOfPolicies * 10);
+
+    testStrategy.validateActionCreation(actionInfo);
+
     approveAction(_actionApprovals, actionInfo);
 
     bool _isActionApproved = testStrategy.isActionApproved(actionInfo);
 
-    assertEq(_isActionApproved, true);
+    assertEq(_isActionApproved, false);
   }
 
   function testFuzz_AbsolutePeerReview_ReturnsTrueForPassedActions(uint256 _actionApprovals, uint256 _numberOfPolicies)

Tools Used

Manual review

Recommended Mitigation Steps

The validateActionCreation method could have a guard that only allows calls from the relevant LlamaCore contract for this Llama deployment.

Alternatively, the method could follow an initialiser pattern that allows only a single call, made during action creation.
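A minimal sketch combining both suggestions, assuming the strategy already stores a reference to its LlamaCore (the llamaCore variable and error names below are illustrative):

    // Illustrative guard: only the deployment's LlamaCore may call, and only once per action.
    error OnlyLlamaCore();
    error AlreadyValidated();

    function validateActionCreation(ActionInfo calldata actionInfo) external {
      if (msg.sender != address(llamaCore)) revert OnlyLlamaCore();
      if (actionApprovalSupply[actionInfo.id] != 0) revert AlreadyValidated();
      // ... snapshot the role supplies as before ...
    }

Note that the second check doubles as the initialiser-style "single call" restriction, since the supply snapshot is only ever zero before the action has been validated.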

Assessed type

Invalid Validation

#0 - c4-pre-sort

2023-06-19T12:01:13Z

0xSorryNotSorry marked the issue as duplicate of #62

#1 - c4-judge

2023-07-02T15:24:03Z

gzeon-c4 changed the severity to 3 (High Risk)

#2 - c4-judge

2023-07-02T15:24:07Z

gzeon-c4 marked the issue as satisfactory

Findings Information

🌟 Selected for report: dirk_y

Also found by: rvierdiiev

Labels

bug
2 (Med Risk)
primary issue
selected for report
sponsor confirmed
M-02

Awards

3442.8809 USDC - $3,442.88

External Links

Lines of code

https://github.com/code-423n4/2023-06-llama/blob/main/src/LlamaCore.sol#L576-L584

Vulnerability details

Impact

Because disapprovals can still be cast after the minimum execution time has been reached (i.e. the action is now executable), a user with the disapproval role can front-run any executeAction call to push the action into the disapproved state and cause the execution to revert, gas-griefing the caller. This is particularly easy to achieve if a user has a force-disapproval role.

Proof of Concept

During calls to castDisapproval there is a call to _preCastAssertions which checks that the action is in a queued state. The purpose of this check is to ensure that disapprovals can only be cast after the action was first approved and then queued for execution.

However, the issue is that the action remains in the Queued state even after minExecutionTime has passed. The result is that a malicious user can disapprove an action once it is ready to execute.
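For illustration, here is a simplified sketch of the relevant state logic, assuming behaviour equivalent to the Llama contracts (not the exact source): nothing moves the action out of the Queued state merely because minExecutionTime has passed, so the Queued check in _preCastAssertions still succeeds.

    // Illustrative only: once queued, an action stays Queued until it is executed,
    // canceled or expired; reaching minExecutionTime does not change its state.
    function getActionState(ActionInfo calldata actionInfo) public view returns (ActionState) {
      Action storage action = actions[actionInfo.id];
      if (action.executed) return ActionState.Executed;
      if (action.canceled) return ActionState.Canceled;
      // ... Active / Failed / Expired branches omitted ...
      // Nothing here flips the state just because block.timestamp >= minExecutionTime,
      // so a disapproval cast after the action has become executable still sees
      // ActionState.Queued and is accepted.
      return ActionState.Queued;
    }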

Below is a diff to the existing test suite that shows how an action that is ready to be executed could be disapproved just before execution. This isn't demonstrated with a force disapproval role, but that case would be the most harmful in terms of gas griefing.

diff --git a/test/LlamaCore.t.sol b/test/LlamaCore.t.sol
index 8135c93..34fd630 100644
--- a/test/LlamaCore.t.sol
+++ b/test/LlamaCore.t.sol
@@ -1015,8 +1015,12 @@ contract ExecuteAction is LlamaCoreTest {
     mpCore.queueAction(actionInfo);
 
     vm.warp(block.timestamp + 6 days);
 
-    vm.expectEmit();
-    emit ActionExecuted(0, address(this), mpStrategy1, actionCreatorAaron, bytes(""));
+    vm.prank(disapproverDave);
+    mpCore.castDisapproval(uint8(Roles.Disapprover), actionInfo, "");
+    vm.prank(disapproverDrake);
+    mpCore.castDisapproval(uint8(Roles.Disapprover), actionInfo, "");
+
+    vm.expectRevert();
     mpCore.executeAction(actionInfo);
   }

Tools Used

Manual review

Recommended Mitigation Steps

I suggest that disapprovals should only be allowed to be cast while the block timestamp is still less than the minExecutionTime of the action, effectively creating a defined disapproval window. The following check could be added to _preCastAssertions:

if (!isApproval) {
  require(block.timestamp < action.minExecutionTime, "Missed disapproval window");
}

Assessed type

Invalid Validation

#0 - c4-pre-sort

2023-06-19T11:42:52Z

0xSorryNotSorry marked the issue as primary issue

#1 - c4-sponsor

2023-06-21T21:22:51Z

AustinGreen marked the issue as sponsor acknowledged

#2 - c4-sponsor

2023-06-21T21:22:55Z

AustinGreen marked the issue as sponsor confirmed

#3 - AustinGreen

2023-06-21T21:23:52Z

We confirm this and are working on a fix. It is a duplicate of https://github.com/code-423n4/2023-06-llama-findings/issues/80

Not sure if it should be medium or not but don't feel strongly. Llama is a trusted system so this would require malicious user intent or user error.

#4 - c4-judge

2023-07-02T10:59:05Z

gzeon-c4 marked the issue as selected for report

#5 - Co0nan

2023-07-05T21:54:00Z

This is more of an improved design than a security issue. disapproval role is a highly privileged role as per the design of the system.

The minExecutionTime is meant to prevent someone from executing the action early but is not designed to prevent the disApproval role. Either he disapproved early or after minExecutionTime passed this doesn't break the logic of the function at all, it will be excepted to cancel the action in this case. I believe this is a valid QA.

#6 - AustinGreen

2023-07-21T22:03:32Z

We removed the ability to disapprove after minExecutionTime to address this finding.

Findings Information

Labels

grade-b
analysis-advanced
A-02

Awards

48.2154 USDC - $48.22

External Links

Quality Summary

Overall I would consider the quality of this codebase "high", especially considering the quality of the comments. Key trade-offs and technical decisions are well highlighted with inline comments, which makes it easy to determine which risks are acceptable and which are not. I would say the trade-offs strike a good balance between gas usage and complexity on one hand and protocol health on the other. Functions are generally short and perform one action, and internal methods are well named and arranged thoughtfully in the source code.

Auditing Approach

I approached this audit similarly to other audits: I started by reading all the documentation and watching the provided video on the high-level architecture. I then focused on the Core contract and branched out to the periphery contracts over time. Interesting thoughts and potential edge cases were noted down and revisited once I had a better understanding of the whole protocol. Complex edge cases that still appeared valid after another code/logic review were then confirmed by modifying the existing test suite to induce the bug or unexpected behaviour.

Architecture Feedback

Generally I feel that the architecture of Llama is well laid out. Core functionality is split from Policy functionality, and Strategies are also self-contained. Having each component logically separate not only makes for easier reading as an auditor, but also keeps the purpose of each contract clear and minimises the number of entry points for users. Speaking of entry points, having all the key user entry points in the Core contract is ideal from a wallet-security point of view; a user who interacts with a single known, trusted address is usually happier than one who has to interact with three different contract addresses to perform the same actions. From a high-level architecture point of view, I don't feel there is anything that drastically needs changing.

Mechanism Review

I can understand the reasoning for splitting policy holders from roles and roles from permissions; however, the trade-off between power and complexity is a difficult topic to navigate. On one hand, having these concepts nested as they are allows for the most powerful and complex use cases and doesn't restrict Llama instances to only basic ones. On the other hand, the complexity introduces risk, which is discussed further below. Assuming the risk is well understood and tolerated by the Llama org, I would say the mechanism design is smart and clearly very powerful. My counterpoint would be: does anyone actually need this kind of complexity today, or is it being built for a future that may or may not materialise?

Risks

The most obvious risk to me is the centralisation risk of having the LlamaFactory contract governed by the Llama organisation. This centralisation means that no new Llama instances can be deployed without the permission of the Llama org. In the short term this is probably an acceptable risk; in the long term it becomes more of an issue. What if the Llama org becomes insolvent or disbands? If an existing deployment becomes tainted or exploited due to bad permission/role management, how can a protocol that depends on Llama spin up a new instance?

The design of the protocol also introduces a tonne of risk vectors given the number of permutations of policy holders, role holders and permissions. Managing all of these permutations and keeping a high-level overview of who has which permissions is a dangerous game. Permission management is incredibly important for a product like Llama, so I would like to see a dapp or some other off-chain portal that gives a high-level view of role allocations and permissions across an instance. This would help instances stay healthy long term as more complexity is introduced over time (more holders, more roles and more permissions).

Finally, as detailed in one of my findings, I believe allowing delegatecalls to non-whitelisted contracts is extremely dangerous and should be avoided. I can understand the motivation for delegatecalls to self-contained functionality in scripts like LlamaGovernanceScript; however, such a script should be whitelisted, with the Llama org as the whitelister. By whitelisting allowed scripts, the org remains in control of the perception of the Llama product: if a Llama instance were exploited through a malicious delegatecall, it would damage not only the instance itself but also the product's standing with existing and potential users.
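As a rough, purely hypothetical sketch of this whitelisting approach (none of the contract, function or variable names below come from the Llama codebase), delegatecall targets could be gated behind an allowlist controlled by a designated whitelister such as the Llama org:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.19;

    // Hypothetical sketch: delegatecall targets gated behind an org-controlled allowlist.
    contract ScriptAllowlist {
      address public immutable whitelister; // e.g. the Llama org
      mapping(address => bool) public authorizedScripts;

      constructor(address _whitelister) {
        whitelister = _whitelister;
      }

      // Only the whitelister can add or remove scripts from the allowlist.
      function setScriptAuthorization(address script, bool authorized) external {
        require(msg.sender == whitelister, "Only whitelister");
        authorizedScripts[script] = authorized;
      }

      // Delegatecalls are refused for any target that has not been whitelisted.
      function _executeScript(address target, bytes memory data) internal returns (bytes memory) {
        require(authorizedScripts[target], "Script not whitelisted");
        (bool success, bytes memory result) = target.delegatecall(data);
        require(success, "Script call failed");
        return result;
      }
    }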

Time spent:

12 hours

#0 - c4-judge

2023-07-02T15:52:51Z

gzeon-c4 marked the issue as grade-b
