Platform: Code4rena
Start Date: 06/09/2022
Pot Size: $90,000 USDC
Total HM: 33
Participants: 168
Period: 9 days
Judge: GalloDaSballo
Total Solo HM: 10
Id: 157
League: ETH
Rank: 1/168
Findings: 6
Award: $12,228.02
🌟 Selected for report: 2
🚀 Solo Findings: 2
🌟 Selected for report: hyh
9009.7912 USDC - $9,009.79
If Alice the NFT owner first delegates her votes to herself, then delegates to anyone else with delegate() or delegateBySig(), all her NFT ids will become stuck: their transfers and burning will be disabled.
The issue is the _afterTokenTransfer() callback running _moveDelegateVotes() with the owner instead of her delegate. As Alice's votes in the checkpoint are zero after she delegated them, the subtraction _moveDelegateVotes() tries to perform during the move of the votes will revert.
As ERC721Votes is a parent of Token and delegation is a common and frequent operation, the impact is governance token moves being frozen in a variety of use cases, which interferes with the governance voting process and can be critical for the project.
Suppose Alice delegated all her votes to herself and then decided to delegate them to someone else with either delegate() or delegateBySig(), both of which call _delegate():
```solidity
function _delegate(address _from, address _to) internal {
    // Get the previous delegate
    address prevDelegate = delegation[_from];

    // Store the new delegate
    delegation[_from] = _to;

    emit DelegateChanged(_from, prevDelegate, _to);

    // Transfer voting weight from the previous delegate to the new delegate
    _moveDelegateVotes(prevDelegate, _to, balanceOf(_from));
}
```
_moveDelegateVotes() will set her votes to 0, as `_from == Alice` and `prevTotalVotes = _amount = balanceOf(Alice)` (since _afterTokenTransfer() incremented Alice's vote balance on each mint to her):
```solidity
function _moveDelegateVotes(
    address _from,
    address _to,
    uint256 _amount
) internal {
    unchecked {
        // If voting weight is being transferred:
        if (_from != _to && _amount > 0) {
            // If this isn't a token mint:
            if (_from != address(0)) {
                // Get the sender's number of checkpoints
                uint256 nCheckpoints = numCheckpoints[_from]++;

                // Used to store the sender's previous voting weight
                uint256 prevTotalVotes;

                // If this isn't the sender's first checkpoint: Get their previous voting weight
                if (nCheckpoints != 0) prevTotalVotes = checkpoints[_from][nCheckpoints - 1].votes;

                // Update their voting weight
                _writeCheckpoint(_from, nCheckpoints, prevTotalVotes, prevTotalVotes - _amount);
            }
```
After that her votes in the checkpoint become zero. She will not be able to transfer the NFT, as _afterTokenTransfer() will revert on _moveDelegateVotes()'s attempt to move `1` vote from Alice to `_to` while `checkpoints[Alice][nCheckpoints - 1].votes` is `0`:
```solidity
function _afterTokenTransfer(
    address _from,
    address _to,
    uint256 _tokenId
) internal override {
    // Transfer 1 vote from the sender to the recipient
    _moveDelegateVotes(_from, _to, 1);
```
The root issue is _afterTokenTransfer() dealing with Alice instead of Alice's delegate.
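The revert condition can be illustrated with a small model (a hedged Python sketch of the vote-move accounting, not the contract itself; the explicit underflow check and the `None`-for-`address(0)` convention are simplifications of the checkpoint logic):

```python
# Toy model: subtracting more votes than an account's checkpoint holds reverts,
# which is what bricks a transfer once the owner's own checkpoint is drained.
votes = {"alice": 0, "bob": 2}     # the state the scenario above produces:
                                   # alice's votes drained, her delegate holds them

def move_delegate_votes(frm, to, amount):
    if frm != to and amount > 0:
        if frm is not None:                        # None models address(0)
            if votes.get(frm, 0) < amount:
                raise RuntimeError("revert: checkpoint underflow")
            votes[frm] -= amount
        if to is not None:
            votes[to] = votes.get(to, 0) + amount

def after_token_transfer(frm, to):
    # Buggy hook: moves 1 vote between the raw owners, ignoring their delegates
    move_delegate_votes(frm, to, 1)

try:
    after_token_transfer("alice", "carol")         # alice tries to transfer an NFT
except RuntimeError as e:
    print(e)                                       # revert: checkpoint underflow
```

With the recommended fix, the hook would move the vote between `delegates(_from)` and `delegates(_to)`, whose checkpoints actually hold the weight, so the subtraction succeeds.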
Consider including a delegates() call as a fix:

```diff
function _afterTokenTransfer(
    address _from,
    address _to,
    uint256 _tokenId
) internal override {
    // Transfer 1 vote from the sender to the recipient
-   _moveDelegateVotes(_from, _to, 1);
+   _moveDelegateVotes(delegates(_from), delegates(_to), 1);
```
As `delegates(address(0)) == address(0)`, the burning/minting flow will persist:

```solidity
/// @notice The delegate for an account
/// @param _account The account address
function delegates(address _account) external view returns (address) {
    address current = delegation[_account];
    return current == address(0) ? _account : current;
}
```
#0 - GalloDaSballo
2022-09-26T22:19:36Z
The Warden has shown how, due to the overlapping system handling delegation and balances, it is possible for a user to brick the transferability of their own tokens.
This POC shows that any delegation will cause the issue: when handling a transfer, the owner's now-zero vote balance is deducted from instead of the delegated votes they hold.
Because the finding showed a broken invariant (any delegation will brick transfers, since the guarantees offered by ERC721Votes have been broken), I believe High Severity to be appropriate.
🌟 Selected for report: zkhorse
Also found by: MEP, Picodes, Solimander, berndartmueller, hxzy, hyh, pcarranzav, pfapostol
349.0578 USDC - $349.06
When Mike the NFT owner delegates for the first time and does so to himself, his votes double: _delegate() reuses the minting workflow by reading the previous delegate from the uninitialized `delegation` mapping and running _moveDelegateVotes() with `_from` being the zero address, which corresponds to minting and is used for vote creation. As Mike already has votes equal to his balance (accrued on each mint by the same _moveDelegateVotes() invoked by _afterTokenTransfer()), he obtains 2x votes this way.
Placing severity at high as ERC721Votes is a parent of Token and this allows the governance voting process to be substantially biased.
_delegate() uses the `delegation` mapping even if it's not initialized:

```solidity
function _delegate(address _from, address _to) internal {
    // Get the previous delegate
    address prevDelegate = delegation[_from];

    // Store the new delegate
    delegation[_from] = _to;

    emit DelegateChanged(_from, prevDelegate, _to);

    // Transfer voting weight from the previous delegate to the new delegate
    _moveDelegateVotes(prevDelegate, _to, balanceOf(_from));
}
```
_moveDelegateVotes() doesn't track the zero address's vote balance, treating it as a mint every time, i.e. when `_from == address(0)` the votes are created out of thin air:

```solidity
function _moveDelegateVotes(
    address _from,
    address _to,
    uint256 _amount
) internal {
    unchecked {
        // If voting weight is being transferred:
        if (_from != _to && _amount > 0) {
            // If this isn't a token mint:
            if (_from != address(0)) {
                // Get the sender's number of checkpoints
                uint256 nCheckpoints = numCheckpoints[_from]++;

                // Used to store the sender's previous voting weight
                uint256 prevTotalVotes;

                // If this isn't the sender's first checkpoint: Get their previous voting weight
                if (nCheckpoints != 0) prevTotalVotes = checkpoints[_from][nCheckpoints - 1].votes;

                // Update their voting weight
                _writeCheckpoint(_from, nCheckpoints, prevTotalVotes, prevTotalVotes - _amount);
            }

            // If this isn't a token burn:
            if (_to != address(0)) {
                // Get the recipients's number of checkpoints
                uint256 nCheckpoints = numCheckpoints[_to]++;

                // Used to store the recipient's previous voting weight
                uint256 prevTotalVotes;

                // If this isn't the recipient's first checkpoint: Get their previous voting weight
                if (nCheckpoints != 0) prevTotalVotes = checkpoints[_to][nCheckpoints - 1].votes;

                // Update their voting weight
                _writeCheckpoint(_to, nCheckpoints, prevTotalVotes, prevTotalVotes + _amount);
            }
        }
```
This way, on the first _delegate() call with `_to` being the owner, it will add `balanceOf(owner)` to the owner's vote balance without subtracting it from anywhere.
Now suppose Mike knows that and:

1. Obtains `100` NFTs. His `votes` in the latest checkpoint are now `100`, as he got `1` vote on each mint to him via the _afterTokenTransfer() -> _moveDelegateVotes() logic
2. Delegates votes to himself with delegate(Mike)
3. His votes are now `200`: `delegation[Mike]` was still `address(0)`, so _moveDelegateVotes() adds his balance to his votes again, with `_from = 0`, `_to = Mike`, and `_amount = 100` in (2).
Consider controlling for the uninitialized `delegation` mapping case with the delegates() function:

```solidity
/// @notice The delegate for an account
/// @param _account The account address
function delegates(address _account) external view returns (address) {
    address current = delegation[_account];
    return current == address(0) ? _account : current;
}
```
For example, use it in _delegate() directly:

```diff
function _delegate(address _from, address _to) internal {
    // Get the previous delegate
-   address prevDelegate = delegation[_from];
+   address prevDelegate = delegates(_from);

    // Store the new delegate
    delegation[_from] = _to;

    emit DelegateChanged(_from, prevDelegate, _to);

    // Transfer voting weight from the previous delegate to the new delegate
    _moveDelegateVotes(prevDelegate, _to, balanceOf(_from));
}
```
#0 - GalloDaSballo
2022-09-20T19:29:22Z
70.6842 USDC - $70.68
https://github.com/code-423n4/2022-09-nouns-builder/blob/7e9fddbbacdd7d7812e912a369cfd862ee67dc03/src/governance/treasury/Treasury.sol#L208-L230
https://github.com/code-423n4/2022-09-nouns-builder/blob/7e9fddbbacdd7d7812e912a369cfd862ee67dc03/src/governance/treasury/Treasury.sol#L43-L54
Due to the lack of control over the proposal governance configuration, the proposal execution process can be sabotaged:

- On the one hand, `delay` can be set to zero, a setting that disables the execution control the delaying provides
- On the other hand, `delay` can be set many magnitudes higher than it's normally used, say `10 years`, which will effectively forbid proposal execution

Also, setting `gracePeriod` to zero will reduce the execution window to exactly one moment, `timestamps[_proposalId]`, which might never be hit as `block.timestamp` might not coincide with it.
Such manipulations are conditional on passing a corresponding vote. However, given the Treasury-blocking impact, the lack of contract-level control here is dangerous.
Suppose the vote quorum is thin for any reason and a malicious party passes a vote that blocks any future proposals as described. This cannot be undone, as all future proposals are now disabled.
The logic behind key parameter checks at the contract level is that some values shouldn't be set under any circumstances, so if such a vote somehow passed, it has to be malicious; the checks aim to limit the impact of that.
Delay and grace period can be set to any arbitrary values, i.e. zero or enormous:
```solidity
/// @notice Updates the transaction delay
/// @param _newDelay The new time delay
function updateDelay(uint256 _newDelay) external {
    // Ensure the caller is the treasury itself
    if (msg.sender != address(this)) revert ONLY_TREASURY();

    emit DelayUpdated(settings.delay, _newDelay);

    // Update the delay
    settings.delay = SafeCast.toUint128(_newDelay);
}

/// @notice Updates the execution grace period
/// @param _newGracePeriod The new grace period
function updateGracePeriod(uint256 _newGracePeriod) external {
    // Ensure the caller is the treasury itself
    if (msg.sender != address(this)) revert ONLY_TREASURY();

    emit GracePeriodUpdated(settings.gracePeriod, _newGracePeriod);

    // Update the grace period
    settings.gracePeriod = SafeCast.toUint128(_newGracePeriod);
}
```
```solidity
function initialize(address _governor, uint256 _delay) external initializer {
    // Ensure the caller is the contract manager
    if (msg.sender != address(manager)) revert ONLY_MANAGER();

    // Ensure a governor address was provided
    if (_governor == address(0)) revert ADDRESS_ZERO();

    // Grant ownership to the governor
    __Ownable_init(_governor);

    // Store the time delay
    settings.delay = SafeCast.toUint128(_delay);
```
Consider enforcing a desired range for `delay` and `gracePeriod` in the initialize(), updateDelay(), and updateGracePeriod() functions. As an example: from `10 minutes` to `100 days` for `delay`, and from `1 day` to `1 year` for `gracePeriod`.
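A minimal sketch of the recommended contract-level guard, in Python for brevity. The bounds follow the examples above; the error names (`DELAY_OUT_OF_RANGE`, `GRACE_PERIOD_OUT_OF_RANGE`) are hypothetical, not part of the codebase:

```python
# Example bounds from the recommendation above, in seconds
MIN_DELAY, MAX_DELAY = 10 * 60, 100 * 24 * 3600        # 10 minutes .. 100 days
MIN_GRACE, MAX_GRACE = 24 * 3600, 365 * 24 * 3600      # 1 day .. 1 year

def validate_delay(new_delay):
    # Mirrors a check that would sit in initialize()/updateDelay()
    if not (MIN_DELAY <= new_delay <= MAX_DELAY):
        raise ValueError("revert: DELAY_OUT_OF_RANGE")
    return new_delay

def validate_grace_period(new_grace):
    # Mirrors a check that would sit in initialize()/updateGracePeriod()
    if not (MIN_GRACE <= new_grace <= MAX_GRACE):
        raise ValueError("revert: GRACE_PERIOD_OUT_OF_RANGE")
    return new_grace

print(validate_delay(2 * 24 * 3600))   # a 2-day delay passes
try:
    validate_delay(0)                  # a zero delay is rejected
except ValueError as e:
    print(e)
```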
#0 - GalloDaSballo
2022-09-25T20:31:42Z
Dup of #482
🌟 Selected for report: hyh
2702.9374 USDC - $2,702.94
ERC721Votes's delegate() and delegateBySig() allow delegation to the zero address, which results in the owner's votes being eliminated in the checkpoint: the votes are subtracted from the owner but aren't added anywhere. _moveDelegateVotes(), invoked by _delegate(), treats the corresponding call as a burn, erasing the votes.
The impact is that further transfer and burn attempts for the owner's ids will revert, because the _afterTokenTransfer() callback will try to reduce the owner's votes, which are already zero, reverting the calls due to a subtraction failure.
As ERC721Votes is a parent of Token, the overall impact is that governance token burning and transfers are disabled whenever the owner delegated to the zero address. This can be done deliberately, i.e. any owner can disable burning and transfer of the owned ids at any moment, which can interfere with the governance voting process.
User-facing delegate() and delegateBySig() allow for zero address delegation:

```solidity
/// @notice Delegates votes to an account
/// @param _to The address delegating votes to
function delegate(address _to) external {
    _delegate(msg.sender, _to);
}
```

```solidity
function delegateBySig(
    address _from,
    address _to,
    uint256 _deadline,
    uint8 _v,
    bytes32 _r,
    bytes32 _s
) external {
    // Ensure the signature has not expired
    if (block.timestamp > _deadline) revert EXPIRED_SIGNATURE();

    // Used to store the digest
    bytes32 digest;

    // Cannot realistically overflow
    unchecked {
        // Compute the hash of the domain seperator with the typed delegation data
        digest = keccak256(
            abi.encodePacked("\x19\x01", DOMAIN_SEPARATOR(), keccak256(abi.encode(DELEGATION_TYPEHASH, _from, _to, nonces[_from]++, _deadline)))
        );
    }

    // Recover the message signer
    address recoveredAddress = ecrecover(digest, _v, _r, _s);

    // Ensure the recovered signer is the voter
    if (recoveredAddress == address(0) || recoveredAddress != _from) revert INVALID_SIGNATURE();

    // Update the delegate
    _delegate(_from, _to);
}
```
And they pass the zero address to _delegate(), where it is stored:

```solidity
function _delegate(address _from, address _to) internal {
    // Get the previous delegate
    address prevDelegate = delegation[_from];

    // Store the new delegate
    delegation[_from] = _to;

    emit DelegateChanged(_from, prevDelegate, _to);

    // Transfer voting weight from the previous delegate to the new delegate
    _moveDelegateVotes(prevDelegate, _to, balanceOf(_from));
}
```
In this case _moveDelegateVotes() will reduce the owner's votes without adding them anywhere, as `_from` is the owner while `_to` is the zero address:

```solidity
if (_from != _to && _amount > 0) {
    // If this isn't a token mint:
    if (_from != address(0)) {
        // Get the sender's number of checkpoints
        uint256 nCheckpoints = numCheckpoints[_from]++;

        // Used to store the sender's previous voting weight
        uint256 prevTotalVotes;

        // If this isn't the sender's first checkpoint: Get their previous voting weight
        if (nCheckpoints != 0) prevTotalVotes = checkpoints[_from][nCheckpoints - 1].votes;

        // Update their voting weight
        _writeCheckpoint(_from, nCheckpoints, prevTotalVotes, prevTotalVotes - _amount);
    }

    // If this isn't a token burn:
    if (_to != address(0)) { // @audit here we add the votes to the target, but only if it's not zero address
```
The owner might know that and can use such a delegation to interfere with the system by prohibiting the transfer/burning of his ids.
This happens via _afterTokenTransfer() reverting, as it becomes impossible to reduce the owner's vote balance by `1`:

```solidity
function _afterTokenTransfer(
    address _from,
    address _to,
    uint256 _tokenId
) internal override {
    // Transfer 1 vote from the sender to the recipient
    _moveDelegateVotes(_from, _to, 1);

    super._afterTokenTransfer(_from, _to, _tokenId);
}
```
Consider prohibiting the zero address as a delegation destination:

```diff
function _delegate(address _from, address _to) internal {
+   if (_to == address(0)) revert INVALID_SIGNATURE();
+
    // Get the previous delegate
    address prevDelegate = delegation[_from];

    // Store the new delegate
    delegation[_from] = _to;

    emit DelegateChanged(_from, prevDelegate, _to);

    // Transfer voting weight from the previous delegate to the new delegate
    _moveDelegateVotes(prevDelegate, _to, balanceOf(_from));
}
```
When `_to` isn't zero, there will always be a matching addition in _moveDelegateVotes(), so the system's vote balance will be sustained.
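A small model of the zero-address delegation path (a hedged Python sketch, not the contract: `None` models `address(0)`, and the sketch assumes the owner's delegation entry already points at herself, so the move debits her checkpoint):

```python
# Toy model: delegating to the zero address follows the "burn" branch of
# _moveDelegateVotes -- votes are subtracted from the previous delegate and
# added nowhere, after which any 1-vote transfer move reverts.
votes = {"alice": 1}
delegation = {"alice": "alice"}    # assumption: alice previously self-delegated

def move_delegate_votes(frm, to, amount):
    if frm != to and amount > 0:
        if frm is not None:                        # subtract unless "minting"
            if votes.get(frm, 0) < amount:
                raise RuntimeError("revert: checkpoint underflow")
            votes[frm] -= amount
        if to is not None:                         # add unless "burning"
            votes[to] = votes.get(to, 0) + amount

def delegate(owner, to, balance):
    prev = delegation.get(owner)
    delegation[owner] = to
    move_delegate_votes(prev, to, balance)

delegate("alice", None, 1)     # votes erased: subtracted from alice, added nowhere
print(votes["alice"])          # 0

# Any later _afterTokenTransfer-style move now reverts, bricking alice's token:
try:
    move_delegate_votes("alice", "bob", 1)
except RuntimeError as e:
    print(e)                   # revert: checkpoint underflow
```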
#0 - GalloDaSballo
2022-09-25T21:04:11Z
From Auction we know that token burning will happen only on failure to complete the auction.
While a new mechanism for burning could be added, it would require changing the code via an upgrade (conditional)
The Warden has proven how underflow will cause the inability to transfer the token permanently and irreversibly.
Delegating to 0 will cause delegation to no longer be available (see other reports); however, this finding also shows how delegating to address(0) causes the token to no longer be transferable.
I think the submission is well developed; however, it requires the owner to delegate to address(0).
For that reason I think the finding is of Medium Severity
#1 - GalloDaSballo
2022-09-25T21:05:21Z
Because this is the only report that shows loss of Token, I am not going to bulk it with the others, which instead show how Voting Power is lost irreversibly
#2 - iainnash
2022-09-26T20:48:56Z
Logical duplicate of #478 but agree with @GalloDaSballo about the reference being to lost token(s) being valid.
34.7819 USDC - $34.78
The total number of properties is limited to 15 or less, hard-coded at the storage structure level.
At the same time, it is possible to add an unlimited number of properties with MetadataRenderer's addProperties().
If this happens, with malicious intent or as the result of an operational mistake, the whole auctioning will become permanently frozen.
The `attributes` mapping has a fixed length of `16` for each token's array of attributes:

```solidity
/// @notice The attributes generated for a token
mapping(uint256 => uint16[16]) internal attributes;
```
The first element, `tokenAttributes[0]`, is dedicated to the currently existing property count:

```solidity
function onMinted(uint256 _tokenId) external returns (bool) {
    // Ensure the caller is the token contract
    if (msg.sender != settings.token) revert ONLY_TOKEN();

    // Compute some randomness for the token id
    uint256 seed = _generateSeed(_tokenId);

    // Get the pointer to store generated attributes
    uint16[16] storage tokenAttributes = attributes[_tokenId];

    // Cache the total number of properties available
    uint256 numProperties = properties.length;

    // Store the total as reference in the first slot of the token's array of attributes
    tokenAttributes[0] = uint16(numProperties);
```
So an NFT can have up to 15 properties set.
However, the number of properties addProperties() introduces isn't controlled in any way; it is possible to add an arbitrarily large `numNewProperties`:
```solidity
// Cache the number of existing properties
uint256 numStoredProperties = properties.length;

// Cache the number of new properties
uint256 numNewProperties = _names.length;

// Cache the number of new items
uint256 numNewItems = _items.length;

unchecked {
    // For each new property:
    for (uint256 i = 0; i < numNewProperties; ++i) {
        // Append storage space
        properties.push();

        // Get the new property id
        uint256 propertyId = numStoredProperties + i;

        // Store the property name
        properties[propertyId].name = _names[i];

        emit PropertyAdded(propertyId, _names[i]);
    }
```
Once `numStoredProperties + numNewProperties > 15`, all token mints will fail:

```solidity
function onMinted(uint256 _tokenId) external returns (bool) {
    // Ensure the caller is the token contract
    if (msg.sender != settings.token) revert ONLY_TOKEN();

    // Compute some randomness for the token id
    uint256 seed = _generateSeed(_tokenId);

    // Get the pointer to store generated attributes
    uint16[16] storage tokenAttributes = attributes[_tokenId];

    // Cache the total number of properties available
    uint256 numProperties = properties.length;

    // Store the total as reference in the first slot of the token's array of attributes
    tokenAttributes[0] = uint16(numProperties);

    unchecked {
        // For each property:
        for (uint256 i = 0; i < numProperties; ++i) {
            // Get the number of items to choose from
            uint256 numItems = properties[i].items.length;

            // Use the token's seed to select an item
            tokenAttributes[i + 1] = uint16(seed % numItems);
```
```solidity
function _mint(address _to, uint256 _tokenId) internal override {
    // Mint the token
    super._mint(_to, _tokenId);

    // Generate the token attributes
    if (!settings.metadataRenderer.onMinted(_tokenId)) revert NO_METADATA_GENERATED();
```
The Auction pauses as a result:

```solidity
function _createAuction() private {
    // Get the next token available for bidding
    try token.mint() returns (uint256 tokenId) {
        ...
    } catch Error(string memory) {
        _pause();
    }
}
```
But there will be no fix, as there is no way to remove a property; MetadataRenderer only has property-addition functionality. I.e. the auctioning system will become permanently frozen.
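The freeze can be reproduced with a short model of onMinted()'s attribute loop (a hedged Python sketch: the fixed-size list stands in for `uint16[16]`, an `IndexError` stands in for the out-of-bounds revert, and the one-item-per-property assumption is only for illustration):

```python
def on_minted(num_properties, seed=0):
    # Mirrors onMinted(): slot 0 stores the property count, slots 1..15
    # store one selected item per property.
    token_attributes = [0] * 16              # models uint16[16]
    token_attributes[0] = num_properties
    for i in range(num_properties):
        num_items = 1                        # assumption: one item per property
        token_attributes[i + 1] = seed % num_items   # writes slot i + 1
    return token_attributes

on_minted(15)                                # fine: the highest write is slot 15
try:
    on_minted(16)                            # slot 16 doesn't exist -> revert
except IndexError:
    print("mint reverts; _createAuction() catches the failure and pauses")
```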
To align the property management functionality with the storage, consider introducing the same limit in addProperties():

```diff
function addProperties(
    string[] calldata _names,
    ItemParam[] calldata _items,
    IPFSGroup calldata _ipfsGroup
) external onlyOwner {
    // Cache the existing amount of IPFS data stored
    uint256 dataLength = ipfsData.length;

    // If this is the first time adding properties and/or items:
    if (dataLength == 0) {
        // Transfer ownership to the DAO treasury
        transferOwnership(settings.treasury);
    }

    // Add the IPFS group information
    ipfsData.push(_ipfsGroup);

    // Cache the number of existing properties
    uint256 numStoredProperties = properties.length;

    // Cache the number of new properties
    uint256 numNewProperties = _names.length;

+   if (numStoredProperties + numNewProperties > 15) revert PROPERTY_LIMIT_BREACHED();
```
#0 - GalloDaSballo
2022-09-20T18:43:51Z
🌟 Selected for report: Lambda
Also found by: 0x1337, 0x1f8b, 0x4non, 0x85102, 0xA5DF, 0xNazgul, 0xSmartContract, 0xbepresent, 0xc0ffEE, 8olidity, Aymen0909, B2, Bnke0x0, CRYP70, Captainkay, CertoraInc, Ch_301, Chom, ChristianKuri, CodingNameKiki, Deivitto, Diana, DimitarDimitrov, ElKu, EthLedger, Franfran, Funen, GimelSec, JansenC, Jeiwan, Jujic, Lead_Belly, MEP, MasterCookie, MiloTruck, Noah3o6, PPrieditis, PaludoX0, Picodes, PwnPatrol, R2, Randyyy, RaymondFam, Respx, ReyAdmirado, Rolezn, Samatak, Tointer, Tomo, V_B, Waze, _Adam, __141345__, a12jmx, ak1, asutorufos, azephiar, ballx, bharg4v, bin2chen, bobirichman, brgltd, bulej93, c3phas, cccz, ch0bu, cloudjunky, cryptonue, cryptostellar5, cryptphi, csanuragjain, d3e4, datapunk, davidbrai, delfin454000, dharma09, dic0de, dipp, djxploit, eierina, erictee, fatherOfBlocks, gogo, hansfriese, hyh, imare, indijanc, izhuer, jonatascm, ladboy233, leosathya, lucacez, lukris02, m9800, martin, minhtrng, ne0n, neumo, oyc_109, p_crypt0, pashov, pauliax, pcarranzav, pedr02b2, peritoflores, pfapostol, rbserver, ret2basic, robee, rvierdiiev, sach1r0, sahar, scaraven, sikorico, simon135, slowmoses, sorrynotsorry, tnevler, tonisives, volky, yixxas, zkhorse, zzzitron
60.7742 USDC - $60.77
Treasury's `settings.gracePeriod` is set on construction and can be changed, but has no material effect on the Treasury workflow, as the execute() logic doesn't depend on it.
I.e. it can be set to any value and execution will continue regardless. This is a substantial deviation from the expected core logic, so setting the severity to high.
execute() now only checks that the proposal is ready via `isReady(proposalId)`, i.e. that `timestamps[_proposalId] != 0 && block.timestamp >= timestamps[_proposalId]`:

```solidity
/// @notice Executes a queued proposal
/// @param _targets The target addresses to call
/// @param _values The ETH values of each call
/// @param _calldatas The calldata of each call
/// @param _descriptionHash The hash of the description
function execute(
    address[] calldata _targets,
    uint256[] calldata _values,
    bytes[] calldata _calldatas,
    bytes32 _descriptionHash
) external payable onlyOwner {
    // Get the proposal id
    bytes32 proposalId = hashProposal(_targets, _values, _calldatas, _descriptionHash);

    // Ensure the proposal is ready to execute
    if (!isReady(proposalId)) revert EXECUTION_NOT_READY(proposalId);

    ... // @audit execution goes on
```
However, there is also expiration logic that should prohibit execution once the grace period has passed:

```solidity
/// @notice If a queued proposal can no longer be executed
/// @param _proposalId The proposal id
function isExpired(bytes32 _proposalId) external view returns (bool) {
    unchecked {
        return block.timestamp > (timestamps[_proposalId] + settings.gracePeriod);
    }
}
```
Notice there is no test coverage for the `gracePeriod` effect on execution; only the ability to set a new `gracePeriod` is tested:

```solidity
assertEq(treasury.gracePeriod(), 2 weeks);

governor.execute(targets, values, calldatas, keccak256(bytes("")));

assertEq(treasury.gracePeriod(), _newGracePeriod);
```
Consider checking for expiration as well (the `EXECUTION_EXPIRED` error is already present in ITreasury):

```diff
/// @notice Executes a queued proposal
/// @param _targets The target addresses to call
/// @param _values The ETH values of each call
/// @param _calldatas The calldata of each call
/// @param _descriptionHash The hash of the description
function execute(
    address[] calldata _targets,
    uint256[] calldata _values,
    bytes[] calldata _calldatas,
    bytes32 _descriptionHash
) external payable onlyOwner {
    // Get the proposal id
    bytes32 proposalId = hashProposal(_targets, _values, _calldatas, _descriptionHash);

    // Ensure the proposal is ready to execute
    if (!isReady(proposalId)) revert EXECUTION_NOT_READY(proposalId);
+   if (isExpired(proposalId)) revert EXECUTION_EXPIRED(proposalId);
```
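The missing window check can be sketched as a small timeline model (a hedged Python sketch; `check_expiry` toggles between the current behavior and the recommended one, and `ts` stands in for `timestamps[_proposalId]`):

```python
# Model of the treasury's execution window; timestamps are plain integers.
def is_ready(ts, now):
    return ts != 0 and now >= ts

def is_expired(ts, now, grace_period):
    return now > ts + grace_period

def execute(ts, now, grace_period, check_expiry):
    if not is_ready(ts, now):
        raise RuntimeError("revert: EXECUTION_NOT_READY")
    if check_expiry and is_expired(ts, now, grace_period):
        raise RuntimeError("revert: EXECUTION_EXPIRED")
    return "executed"

eta, grace = 100, 50

# Current behavior: long past eta + gracePeriod, execution still goes through
print(execute(eta, 200, grace, check_expiry=False))   # executed

# With the recommended check the same call reverts
try:
    execute(eta, 200, grace, check_expiry=True)
except RuntimeError as e:
    print(e)                                          # revert: EXECUTION_EXPIRED
```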
#0 - GalloDaSballo
2022-09-26T13:04:35Z
The Warden has shown how, most likely due to refactoring, a key piece of code checking for the proposal grace period has been omitted from the current `execute` check.
The current code will allow expired proposals to be executed.
This breaks the expectations of the timelock, and while there is no clear loss of funds nor denial of service, the lack of the check enables an expired (potentially dangerous, socially agreed on to skip, etc..) proposal to be executed, in a time in which it shouldn't, breaking the invariants of the timelock.
Because of the above I agree with High Severity
#1 - kulkarohan
2022-09-26T23:17:14Z
This is not an issue. The function is called via `Governor.execute()`, where the proposal is ensured to be queued (i.e. it would revert if the proposal expired).
#2 - dmitriia
2022-09-27T18:43:08Z
I agree that if the contracts are initialized with the particular implementation it's not an issue, i.e. Treasury's `execute` is called only from Governor's `execute`, which checks the expiration.
However, the Manager contract, which is called on any deployment, can be initialized with an arbitrary implementation, which defines who the Treasury owner is: https://github.com/code-423n4/2022-09-nouns-builder/blob/7e9fddbbacdd7d7812e912a369cfd862ee67dc03/src/manager/Manager.sol#L55-L135
This way the owner of the Treasury can be set to a contract that does not ensure `isExpired` is always checked before `execute`. I.e. architecture-wise these checks have to be performed at the same level, in the Treasury contract. Otherwise a theoretical attack surface is created, which can become a practical loophole on a change in a long sequence of dependencies.
#3 - kulkarohan
2022-09-27T19:39:38Z
Theoretically, sure, I'd agree with that. And I'll also say it's on us for not doing a better job documenting the E2E deploy process.
But this would require an egregious error on our end, which would be resolved by a redeploy with the correct implementation, therefore making this an impractical issue IMO.
#4 - GalloDaSballo
2022-09-28T20:20:06Z
Taking from the POC the Sponsor offered in #723, the following test shows that an expired proposal will not execute:

```solidity
function test_ProposalExpirationRevert() public {
    // propose
    mintVoter1();

    (address[] memory targets, uint256[] memory values, bytes[] memory calldatas) = mockProposal();
    bytes32 descriptionHash = keccak256(bytes("test"));

    vm.warp(1 days);

    vm.prank(voter1);
    governor.propose(targets, values, calldatas, "test");

    bytes32 proposalId = governor.hashProposal(targets, values, calldatas, descriptionHash);

    // vote
    vm.warp(block.timestamp + governor.votingDelay());
    vm.prank(voter1);
    governor.castVote(proposalId, 1);

    // queue
    vm.warp(block.timestamp + governor.votingPeriod());
    vm.prank(voter1);
    governor.queue(proposalId);

    // execute
    // Instead wait too long
    vm.warp(block.timestamp + 50 days);

    assert(treasury.isQueued(proposalId));
    assert(treasury.isExpired(proposalId));

    try governor.execute(targets, values, calldatas, descriptionHash) {
        revert();
    } catch {}
}
```
I believe that a couple of observations made in this finding are still valid; however, per the discussion on #723 and the POC, the High Severity finding is not valid.
We could argue (as seen above) that, due to a misconfiguration, the `Treasury` can be made to execute an expired proposal.
However, this would require delegating ownership to an EOA, which, per how the contest was run, is not normal functionality and would resemble an act of dissolution of the DAO or an act of "self-destruction". For those reasons I don't think this finding can be categorized as Medium, as doing so would be logically equivalent to saying "The DAO can dissolve" or "The DAO can burn their own tokens and lose them all"
However, I think there's a specific observation from the finding above that is worth keeping: the code does contain the custom error `EXECUTION_EXPIRED`, meaning it is within reason to suggest a refactoring to add the check recommended by the Warden.
Because of the specific conditionality of the "attack", and given the observation that additional validation could be added, I think QA (Low Severity) to be more appropriate.
Specifically, the `EXECUTION_EXPIRED` error is unused, and the Warden has shown a specific way to use it and add further security guarantees to the specific contract, which in the broader context of the contest doesn't actualize in an exploit