Platform: Code4rena
Start Date: 31/10/2023
Pot Size: $60,500 USDC
Total HM: 9
Participants: 65
Period: 10 days
Judge: gzeon
Total Solo HM: 2
Id: 301
League: ETH
Rank: 7/65
Findings: 4
Award: $944.92
🌟 Selected for report: 0
🚀 Solo Findings: 0
🌟 Selected for report: 3docSec
Also found by: Bauchibred, lsaudit, pep7siup
716.7564 USDC - $716.76
The contract is not in compliance with the EIP. The docs and guides for this contest state that a few contracts are to comply with specific EIPs, with one of the explicit mentions being:
PartyGovernance: Should comply with ERC4906
But the above does not hold true, due to the way the contract integrates the two metadata-update events supported in the EIP.
Quoting the EIP:
The `MetadataUpdate` or `BatchMetadataUpdate` event MUST be emitted when the JSON metadata of a token, or a consecutive range of tokens, is changed.
Currently, `PartyGovernanceNFT.sol` does not even emit the `MetadataUpdate` event, though it's present in the interface. Using this search command: https://github.com/search?q=repo%3Acode-423n4%2F2023-10-party+BatchMetadataUpdate+path%3Acontracts%2Fparty&type=code we can see that there are 6 instances of the `BatchMetadataUpdate` event being emitted in the contract. This by itself is not an issue, since either of the events can be used; the main thing is to emit the correct event, i.e. if the metadata being updated is for a range of tokens then it should be `BatchMetadataUpdate`, otherwise it should be `MetadataUpdate`, since only one token is being updated.
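A minimal sketch of the intended usage (hypothetical `_updateTokenMetadata`/`_updateRangeMetadata` helpers, not from the codebase, shown only to illustrate which event fits which case):

```solidity
// Single token's JSON metadata changed -> emit MetadataUpdate for that one id.
function _updateTokenMetadata(uint256 tokenId) internal {
    // ...update the storage backing tokenURI(tokenId)...
    emit MetadataUpdate(tokenId);
}

// A consecutive range of tokens changed -> emit BatchMetadataUpdate with the
// actual bounds of the affected range, not a hardcoded 0..type(uint256).max.
function _updateRangeMetadata(uint256 fromTokenId, uint256 toTokenId) internal {
    // ...update storage for fromTokenId..toTokenId...
    emit BatchMetadataUpdate(fromTokenId, toTokenId);
}
```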
Now, going back to the context in which `BatchMetadataUpdate` is being emitted: all 6 instances are the same, e.g. PartyGovernance.sol#L688:

```solidity
emit BatchMetadataUpdate(0, type(uint256).max);
```

What this translates to is that for all six functions (`distribute()`, `propose()`, `accept()`, `veto()`, `cancel()` & `_executeProposal()` via `execute()`), third-party platforms such as NFT markets are notified that the JSON metadata of all tokenIds has been updated and that a timely update of the images and related attributes of the NFTs needs to be done.
Link: https://github.com/search?q=repo%3Acode-423n4%2F2023-05-party%20BatchMetadataUpdate&type=code
```solidity
/// @title EIP-721 Metadata Update Extension
interface IERC4906 is IERC165, IERC721 {
    /// @dev This event emits when the metadata of a token is changed.
    /// So that the third-party platforms such as NFT market could
    /// timely update the images and related attributes of the NFT.
    event MetadataUpdate(uint256 _tokenId);

    /// @dev This event emits when the metadata of a range of tokens is changed.
    /// So that the third-party platforms such as NFT market could
    /// timely update the images and related attributes of the NFTs.
    event BatchMetadataUpdate(uint256 _fromTokenId, uint256 _toTokenId);
}
```
But the current implementation is wrong because the range of tokenIds is hardcoded, i.e. `_fromTokenId == 0` & `_toTokenId == type(uint256).max`. This causes compatibility issues with third-party NFT platforms, as they cannot pinpoint the correct range of token IDs that got updated, since nowhere near `type(uint256).max` tokens will exist.
If one were to argue that third-party platforms should interpret this as "all tokens", that is also wrong, because the EIP does not state that passing `type(uint256).max` as `_toTokenId` indicates that all available tokenIds should be considered in the range. This breaks the functionality for third-party platforms that want to track these updates.
Correctly integrate the EIP; ensure that third-party platforms can correctly track updates made to the metadata of the right range of token IDs.
Taking a short glance at all 6 functions that emit the `BatchMetadataUpdate` event, we can see that not all of them update the JSON metadata of the NFTs, or update more than one tokenId. Looking at `distribute()`, one can see that whereas the `tokenId` parameter is currently unused in the function, it could be used in the future, and in that instance, if only that tokenId's metadata is to be updated, then `MetadataUpdate` should be used instead of `BatchMetadataUpdate`.
Context
#0 - ydspa
2023-11-12T01:39:16Z
QA: L
#1 - c4-pre-sort
2023-11-12T01:39:21Z
ydspa marked the issue as insufficient quality report
#2 - gzeon-c4
2023-11-19T17:11:37Z
Inefficient, but the event is emitted which make it technically complying to the standard. QA @0xble
#3 - c4-judge
2023-11-19T17:11:42Z
gzeon-c4 changed the severity to QA (Quality Assurance)
#4 - c4-judge
2023-11-19T17:23:19Z
This previously downgraded issue has been upgraded by gzeon-c4
#5 - c4-judge
2023-11-19T17:23:20Z
This previously downgraded issue has been upgraded by gzeon-c4
#6 - c4-judge
2023-11-19T17:23:40Z
gzeon-c4 marked the issue as duplicate of #340
#7 - c4-judge
2023-11-19T17:24:16Z
gzeon-c4 marked the issue as satisfactory
🌟 Selected for report: TresDelinquentes
Also found by: 3docSec, Arz, Bauchibred, D1r3Wolf, J4X, Neon2835, Pechenite, adeolu, chainsnake
152.3655 USDC - $152.37
https://github.com/code-423n4/2023-10-party/blob/b23c65d62a20921c709582b0b76b387f2bb9ebb5/contracts/crowdfund/ETHCrowdfundBase.sol#L140-L172 https://github.com/code-423n4/2023-10-party/blob/b23c65d62a20921c709582b0b76b387f2bb9ebb5/contracts/crowdfund/ETHCrowdfundBase.sol#L196-L273
The bug case has already been explained here, with the protocol's workaround being that a party host can just call `finalize()` after contributions exceed `minTotalContributions`. The issue is that this workaround will not always work and could cause some crowdfunds to be stuck in the `active` state until they expire and end up being `lost`.
First, take a look at ETHCrowdfundBase.sol#L196-L273:

```solidity
function _processContribution(
    address payable contributor,
    address delegate,
    uint96 amount
) internal returns (uint96 votingPower) {
    address oldDelegate = delegationsByContributor[contributor];
    if (msg.sender == contributor || oldDelegate == address(0)) {
        // Update delegate.
        delegationsByContributor[contributor] = delegate;
    } else {
        // Prevent changing another's delegate if already delegated.
        delegate = oldDelegate;
    }

    emit Contributed(msg.sender, contributor, amount, delegate);

    // OK to contribute with zero just to update delegate.
    if (amount == 0) return 0;

    // Only allow contributions while the crowdfund is active.
    CrowdfundLifecycle lc = getCrowdfundLifecycle();
    if (lc != CrowdfundLifecycle.Active) {
        revert WrongLifecycleError(lc);
    }

    // ...omitted for brevity

    //@audit
    // Check that the contribution amount is at or above the minimum. This
    // is done after `amount` is potentially reduced if refunding excess
    // contribution. There is a case where this prevents a crowdfund from
    // reaching `maxTotalContributions` if the `minContribution` is greater
    // than the difference between `maxTotalContributions` and the current
    // `totalContributions`. In this scenario, users will have to wait until
    // the crowdfund expires or a host finalizes after
    // `minTotalContribution` has been reached by calling `finalize()`.
    uint96 minContribution_ = minContribution;
    if (amount < minContribution_) {
        revert BelowMinimumContributionsError(amount, minContribution_);
    }

    // ...omitted for brevity
}
```
In short, as referenced in the code block and comments above, the current implementation of `ETHCrowdfundBase._processContribution()` inadvertently leads to a scenario where the contract fails to reach `maxTotalContributions` due to a high `minContribution`; specifically, this occurs when the remaining balance needed to reach `maxTotalContributions` is less than `minContribution`.
In such a case, new contributions cannot be made due to a reversion in `_processContribution()`, effectively stalling the crowdfund, which can then only be finalized if the party host calls `finalize()`. But that can only happen if the total contribution is already `minTotalContributions` or more, and there is no assurance that this will always be the case, because during initialization `minTotalContributions` is allowed to be set equal to `maxTotalContributions`.
The initialization logic can be seen here:
```solidity
function _initialize(ETHCrowdfundOptions memory opts) internal {
    // Set the minimum and maximum contribution amounts.
    if (opts.minContribution > opts.maxContribution) {
        revert MinGreaterThanMaxError(opts.minContribution, opts.maxContribution);
    }
    minContribution = opts.minContribution;
    maxContribution = opts.maxContribution;

    // Set the min total contributions.
    //@audit
    if (opts.minTotalContributions > opts.maxTotalContributions) {
        revert MinGreaterThanMaxError(opts.minTotalContributions, opts.maxTotalContributions);
    }
    minTotalContributions = opts.minTotalContributions;

    // Set the max total contributions.
    // ...omitted for brevity
}
```
The above check only prevents `minTotalContributions` from being greater than `maxTotalContributions`; it does not consider the scenario where `minTotalContributions` equals `maxTotalContributions`, which leads to the deadlock in the crowdfund.
With this logic, if the remaining contribution it takes to reach `minTotalContributions` (in this case `maxTotalContributions` too, since they are equal) is below the `minContribution` value, then `_processContribution()` will always revert with the `BelowMinimumContributionsError()` error, failing to achieve the `maxTotalContributions` goal and halting the crowdfunding process when it is close to its goal.
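To illustrate with hypothetical numbers (not from the codebase): say `minTotalContributions = maxTotalContributions = 10 ether` and `minContribution = 1 ether`. Once `totalContributions` reaches `9.5 ether`, the remaining gap is below `minContribution`, so any contribution large enough to finalize is first clipped to the gap and then reverts. A hedged sketch of the failing path, mirroring the checks quoted above:

```solidity
// Assumed state: totalContributions = 9.5 ether, maxTotalContributions = 10 ether,
// minContribution = 1 ether (illustrative values only).
uint96 amount = 1 ether;                                  // contributor sends a valid-looking amount
uint96 gap = maxTotalContributions - totalContributions;  // 0.5 ether left to the cap
if (amount > gap) amount = gap;                           // excess refunded, amount clipped to 0.5 ether
if (amount < minContribution) {                           // 0.5 ether < 1 ether -> always reverts
    revert BelowMinimumContributionsError(amount, minContribution);
}
```

Since no contribution can close the gap, and `totalContributions` is still below `minTotalContributions`, the host cannot `finalize()` either.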
Manual Review
As long as `minContribution` is going to be used, `minTotalContributions != maxTotalContributions` should be an invariant that's never broken, i.e., the initialization logic should be replaced with this:

```diff
function _initialize(ETHCrowdfundOptions memory opts) internal {
    // Set the minimum and maximum contribution amounts.
    if (opts.minContribution > opts.maxContribution) {
        revert MinGreaterThanMaxError(opts.minContribution, opts.maxContribution);
    }
    minContribution = opts.minContribution;
    maxContribution = opts.maxContribution;

    // Set the min total contributions.
    //@audit
-   if (opts.minTotalContributions > opts.maxTotalContributions) {
+   if (opts.minTotalContributions >= opts.maxTotalContributions) {
        revert MinGreaterThanMaxError(opts.minTotalContributions, opts.maxTotalContributions);
    }
    minTotalContributions = opts.minTotalContributions;

    // Set the max total contributions.
    // ...omitted for brevity
}
```
Another subtle hint: ensure that the difference between `minTotalContributions` & `maxTotalContributions` is always greater than `minContribution`; that way we close all the edge cases where this could happen.
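A hedged sketch of that stricter invariant, assuming the option field names match the initialization snippet above:

```solidity
// Require a gap larger than minContribution so the final contribution that
// reaches maxTotalContributions can never be clipped below minContribution.
if (opts.maxTotalContributions - opts.minTotalContributions <= opts.minContribution) {
    revert MinGreaterThanMaxError(opts.minTotalContributions, opts.maxTotalContributions);
}
```

Reusing `MinGreaterThanMaxError` here is only for illustration; a dedicated error would be clearer.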
Context
#0 - c4-pre-sort
2023-11-12T07:42:04Z
ydspa marked the issue as duplicate of #552
#1 - c4-pre-sort
2023-11-12T07:42:09Z
ydspa marked the issue as insufficient quality report
#2 - c4-judge
2023-11-19T14:33:03Z
gzeon-c4 marked the issue as unsatisfactory: Invalid
#3 - c4-judge
2023-11-19T14:40:13Z
gzeon-c4 marked the issue as unsatisfactory: Out of scope
#4 - Bauchibred
2023-11-22T16:58:47Z
HI @gzeon-c4.
Thanks for judging, but I'd really appreciate you having another review on the validity of this report. Unlike other duplicates of #522, this report explains the same issue as #119, which was deemed satisfactory; both talk about how crowdfunds would be stuck since there'd be a DoS on an attempt to finalize them, due to how `minTotalContributions` & `maxTotalContributions` are initialized. I.e. this report explains the case where `minTotalContributions == maxTotalContributions` is true, as explained in the referenced report (#119). Whereas it'd be fair to note that this report does not talk about the `exchangeRateBps` being less than 1e4, the core issue is the same, and quoting the Code4rena docs, section on duplication, they should be considered duplicates:
All issues which identify the same functional vulnerability will be considered duplicates regardless of effective rationalization of severity or exploit path.
The fact of this being a valid duplicate of #119, can be seen right from the first section of this report, i.e impact section of this report.
Issue is that this workaround would not always work and could cause some crowdfunds to be stalled in the `active` state until it expires and ends up being `lost`.
This report also referenced the initialization logic and talked about the bug since they (min/maxContributions) can be set the same, with a code snippet attached.
... because while initializing we can see that `minTotalContributions` is allowed to be set equal to `maxTotalContributions`.
The recommended mitigation step of this report is exactly the same as the one referenced above (#119), stating that `minTotalContributions != maxTotalContributions` should be an invariant that's never broken, and even adding a subtle hint.
Note that the warden who submitted #119 also submitted #127 (which is a valid duplicate of #522), because whereas the bug cases are very similar, the former has an edge case attached to it and should stand on its own, unlike the latter, which imo should be valid but OOS; reason below.
About #522 and its dups: the bug is valid and exists, but I believe they are OOS, since that particular bug case is considered a known issue due to the comments in the code, and the batch of reports rightfully duplicated to it (#522) only provide a fix to the known issue, which, as far as I know on Code4rena, does not count as a valid H/M finding but could be considered QA. Also, I believe multiple other wardens (myself included) saw the bug case but didn't submit a report because it's known, which is why I decided to attach a subtle hint in the Recommended Mitigation Steps of this report to help sponsors cover this edge case.
Coming back to this issue & #119: breezing through the pre-sorted batch of #522, I assume only this report and #539 talk about this specific bug case and hint at making `minTotalContributions != maxTotalContributions` an invariant, which should warrant being duplicates of #119.
Thank you for your time.
#5 - 0xdeth
2023-11-23T13:18:08Z
Hey @Bauchibred
Your issue reverts because of the `minContribution` check and doesn't use the `exchangeRate` in the attack, while https://github.com/code-423n4/2023-10-party-findings/issues/119 reverts by rounding down the `votingPower` to 0 to force a revert.
The comment at https://github.com/code-423n4/2023-10-party-findings/issues/119#issuecomment-1817893274 states that the attack still works when `minContribution = 0`, while this issue does not.
This issue is more similar to #453 than #119
Cheers.
#6 - c4-judge
2023-11-23T14:16:10Z
gzeon-c4 marked the issue as unsatisfactory: Out of scope
#7 - c4-judge
2023-11-23T14:21:29Z
gzeon-c4 marked the issue as satisfactory
🌟 Selected for report: 3docSec
Also found by: 0xMosh, 0xadrii, 0xmystery, Bauchibred, Ch_301, Emmanuel, J4X, Madalad, Pechenite, Shaheen, Topmark, TresDelinquentes, Walter, ZanyBonzy, adeolu, adriro, chainsnake, joaovwfreire, lsaudit, osmanozdemir1, sin1st3r__
15.7808 USDC - $15.78
| Issue ID | Description |
|---|---|
| QA-01 | There are no assurances that `emergencyExecute()` would ever fulfill its purpose |
| QA-02 | Misuse of `_rageQuit()` could heavily impact proposal outcomes |
| QA-03 | Remove redundant naming convention for custom errors |
| QA-04 | `rageQuit()`'s execution window could be inaccurate |
| QA-05 | Some safety measures should be implemented before deployment on BASE |
| QA-06 | Inconsistency in sister function implementations |
| QA-07 | `signingThersholdBps` should be `signingThresholdBps` instead |
| QA-08 | `burn` should include a similar check like the `prevToken >= withdrawTokens` from `rageQuit()` |
| QA-09 | Self-delegation mechanism seems to be flawed |
| QA-10 | Fix contextual errors in docs |
QA-01 There are no assurances that `emergencyExecute()` would ever fulfill its purpose
Low, since this relies on having a restricted host.
The intention of the emergency execution mechanism is to allow the DAO to execute any arbitrary function from the contract, but the current implementation indirectly gives the host the final say, which is unintended, or at least not in the implemented context. The issue is that a host can preempt the DAO's decision to execute an emergency action, due to the ability of a host to disable the `emergencyExecute` function, potentially compromising the DAO's ability to respond in critical situations.
Take a look at `emergencyExecute()`; the function is intended for the DAO to perform emergency actions:

```solidity
function emergencyExecute(
    address targetAddress,
    bytes calldata targetCallData,
    uint256 amountEth
) external payable {
    // Must be called by the DAO.
    if (_GLOBALS.getAddress(LibGlobals.GLOBAL_DAO_WALLET) != msg.sender) {
        revert OnlyPartyDaoError(msg.sender);
    }
    // Must not be disabled by DAO or host.
    if (emergencyExecuteDisabled) {
        revert OnlyWhenEmergencyActionsAllowedError();
    }
    // ...omitted for brevity
}
```
The function is used to execute arbitrary functions for the DAO; the issue is that `disableEmergencyExecute()` also exists, as can be seen below:

```solidity
function disableEmergencyExecute() external {
    // Only the DAO or a host can call this.
    if (
        !party.isHost(msg.sender) &&
        _GLOBALS.getAddress(LibGlobals.GLOBAL_DAO_WALLET) != msg.sender
    ) {
        revert OnlyPartyDaoOrHostError(msg.sender);
    }
    emergencyExecuteDisabled = true;
    emit EmergencyExecuteDisabled();
}
```
This means that the host effectively decides whether to allow an emergency execution, since the host can just listen to the mempool and call `disableEmergencyExecute` to preemptively block any of the DAO's emergency actions.
Add a time lock to the disabling function. This fix is apt because `emergencyExecute()` is meant for critical situations, so a host shouldn't be able to front-run the execution.
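A minimal sketch of such a delay (the `EMERGENCY_DISABLE_DELAY` constant and `emergencyExecuteDisabledAt` variable are hypothetical names, not from the codebase):

```solidity
uint40 public emergencyExecuteDisabledAt; // 0 while no disable is scheduled
uint40 internal constant EMERGENCY_DISABLE_DELAY = 7 days;

function disableEmergencyExecute() external {
    // ...same DAO/host access check as in the original function...
    // Schedule the disable instead of applying it immediately, so a pending
    // DAO emergencyExecute() in the mempool cannot be front-run by a host.
    emergencyExecuteDisabledAt = uint40(block.timestamp) + EMERGENCY_DISABLE_DELAY;
    emit EmergencyExecuteDisabled();
}

function _isEmergencyExecuteDisabled() internal view returns (bool) {
    uint40 disabledAt = emergencyExecuteDisabledAt;
    return disabledAt != 0 && block.timestamp >= disabledAt;
}
```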
QA-02 Misuse of `_rageQuit()` could heavily impact proposal outcomes
Someone with a large enough holding calling `rageQuit()` could potentially decide the outcome of votes, and this could impact more than just the votes of the caller, because, as is human nature, a user who might have voted could see that enough votes are present and decide not to vote, just for someone else to call `rageQuit()` in the late stages of the voting process.
The `_rageQuit` function, intended as a safeguard against 51% attacks, may be exploited by party members who choose to exit the party, thereby impacting the outcomes of ongoing proposals. Members holding significant voting power can sway decision-making processes by participating in a vote and then withdrawing their share, possibly after deducing that their vote decisively influenced the outcome. This undermines the integrity of the voting process, as members may have been influenced by the weight of votes at a given time (human nature), which could be misleading if `rageQuit` is executed subsequently.
Take a look at: PartyGovernanceNFT.sol#L344-L448
The `rageQuit` function allows a party member to burn their NFT in exchange for their share of the party's funds, corresponding to their voting power. The problematic scenario unfolds as follows: a member votes on a proposal, then calls `rageQuit` right at the brim of the voting process, removing their funds and voting power from the party.
Have a time buffer, i.e. do not accept a `rageQuit()` around the end of a proposal's voting process.
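A hedged sketch of such a buffer (the `RAGEQUIT_VOTING_BUFFER` constant and `_latestActiveProposalVoteEnd()` helper are hypothetical, not from the codebase):

```solidity
uint40 internal constant RAGEQUIT_VOTING_BUFFER = 1 days;

// Reject rage quits that land too close to the end of an active vote,
// so late exits cannot flip an almost-settled outcome.
function _assertRageQuitNotNearVoteEnd() internal view {
    uint40 voteEnd = _latestActiveProposalVoteEnd(); // 0 if no active proposal
    if (voteEnd != 0 && block.timestamp + RAGEQUIT_VOTING_BUFFER >= voteEnd) {
        revert CannotRageQuitError(voteEnd);
    }
}
```

The exact buffer length is a design choice for the team; the point is only that exits inside the buffer window are rejected.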
QA-03 Remove redundant naming convention for custom errors
The use of the term "Error" appended to custom error names in the `PartyGovernanceNFT` smart contract is redundant. Solidity's revert statements already indicate that an error condition has been met, making the additional "Error" suffix unnecessary. This redundancy does not impact the functionality or security of the contract, but it does affect the readability and conciseness of the code.
Here are the custom errors defined in the contract, source.
```solidity
error FixedRageQuitTimestampError(uint40 rageQuitTimestamp);
error CannotRageQuitError(uint40 rageQuitTimestamp);
error CannotDisableRageQuitAfterInitializationError();
error InvalidTokenOrderError();
error BelowMinWithdrawAmountError(uint256 amount, uint256 minAmount);
error NothingToBurnError();
```
Solidity's error handling mechanism is explicit in signifying that an error condition has been triggered. Custom errors serve as a mechanism for signalling specific conditions without necessarily needing to state they are errors. The term "Error" is therefore extraneous, as demonstrated by the standard naming practices in widely-accepted EIPs and the broader Solidity community.
The contract should remove the "Error" suffix from custom error names to improve clarity. The updated error names should succinctly describe the condition that triggers them without the redundancy of the "Error" term. Here are the revised custom error names:
```solidity
error FixedRageQuitTimestamp(uint40 rageQuitTimestamp);
error CannotRageQuit(uint40 rageQuitTimestamp);
error CannotDisableRageQuitAfterInitialization();
error InvalidTokenOrder();
error BelowMinWithdrawAmount(uint256 amount, uint256 minAmount);
error NothingToBurn();
```
Refactoring the error names as suggested will make the code cleaner and align with common Solidity naming conventions. It's a small change that contributes to better code quality and maintainability.
QA-04 `rageQuit()`'s execution window could be inaccurate
The `rageQuit` function in the smart contract allows users to burn governance NFTs and withdraw their share of fungible tokens. However, the current implementation permits a user to execute `rageQuit` at the exact `block.timestamp` that marks the end of the permitted window, which is inconsistent with the expected behaviour. This may allow an action to be taken when it should theoretically be disallowed, potentially leading to governance issues or disputes among token holders.
Take a look at `rageQuit()`:

```solidity
function rageQuit(
    uint256[] calldata tokenIds,
    IERC20[] calldata withdrawTokens,
    uint256[] calldata minWithdrawAmounts,
    address receiver
) external {
    if (tokenIds.length == 0) revert NothingToBurnError();

    // Check if called by an authority.
    bool isAuthority_ = isAuthority[msg.sender];

    // Check if ragequit is allowed.
    uint40 currentRageQuitTimestamp = rageQuitTimestamp;
    if (!isAuthority_) {
        if (currentRageQuitTimestamp != ENABLE_RAGEQUIT_PERMANENTLY) {
            if (
                currentRageQuitTimestamp == DISABLE_RAGEQUIT_PERMANENTLY ||
                currentRageQuitTimestamp < block.timestamp
            ) {
                //@audit would pass when currentRageQuitTimestamp == block.timestamp
                revert CannotRageQuitError(currentRageQuitTimestamp);
            }
        }
    }
    // ...omitted for brevity
}
```
As seen, the code block includes a conditional that is intended to prevent the function from executing based on `currentRageQuitTimestamp`. However, as per the current code, users can still perform a rage quit at the exact `block.timestamp` matching `currentRageQuitTimestamp`, which is not the desired logic.
Easy fix: make the conditional check inclusive.
```diff
if (
    currentRageQuitTimestamp == DISABLE_RAGEQUIT_PERMANENTLY ||
-   currentRageQuitTimestamp < block.timestamp
+   currentRageQuitTimestamp <= block.timestamp // Corrected condition
) {
    revert CannotRageQuitError(currentRageQuitTimestamp);
}
```
By making this change, the rageQuit
function will properly prevent users from executing the function at or after the exact timestamp that marks the cutoff, ensuring the smart contract behaves as intended and documented. Furthermore, the documentation should be revised to accurately describe the behaviour and restrictions of the rageQuit
function.
NB: Submitted this as QA as it depends on how the team really views the invariant around deadlines and not accepting `rageQuit` at `currentRageQuitTimestamp`. If the protocol MUST NOT accept `rageQuit()` when `currentRageQuitTimestamp == block.timestamp`, then this imo should be upgraded.
Unrelated to this, do note that `PartyGovernance.cancel()` is also vulnerable in the same context:
```solidity
uint256 cancelTime = values.executedTime + cancelDelay;
// Must not be too early.
if (block.timestamp < cancelTime) {
    revert ProposalCannotBeCancelledYetError(
        uint40(block.timestamp),
        uint40(cancelTime)
    );
}
```
The check should be inclusive to ensure that the cancel time is not too early, i.e. the changes below should be applied:
```diff
uint256 cancelTime = values.executedTime + cancelDelay;
// Must not be too early.
- if (block.timestamp < cancelTime) {
+ if (block.timestamp <= cancelTime) {
    revert ProposalCannotBeCancelledYetError(
        uint40(block.timestamp),
        uint40(cancelTime)
    );
}
```
QA-05 Some safety measures should be implemented before deployment on BASE
The impact is currently categorized as low, but this is a significant issue on Base, where not all opcodes available on mainnet are supported; a key example is the lack of the PUSH0 opcode.
Note: This issue is erroneously listed in the disputed section of the bot report, but it remains valid and unresolved.
The official BASE documentation explicitly states the discrepancy:
"Base Mainnet and Testnet do not yet support the new PUSH0 opcode introduced in Shanghai. The Solidity compiler defaults to this opcode if version 0.8.20 or later is used. It is recommended to use version 0.8.19 or earlier until an upgrade occurs."
Despite this clear guideline, a review of contracts both in and out of scope reveals that many are compiled using version 0.8.20
, which directly contravenes the recommended safety measures.
To ensure safety and compatibility, it's crucial to adhere strictly to the recommendations provided by the blockchain platforms intended for deployment. In this case, contracts should be compiled using Solidity version 0.8.19
or lower until BASE updates its support for the PUSH0 opcode.
QA-06 Inconsistency in sister function implementations
Both `batchContribute` and `batchContributeFor` could be considered sister functions. The issue is that while `batchContribute` handles the difference between the total ETH sent and the actual ETH used in contributions by refunding the unused portion, `batchContributeFor` does not follow the same logic and instead reverts if the ETH sent does not match the exact sum of the contributions. This inconsistency makes executing `batchContributeFor` considerably harder; a more relaxed conditional, with a refund of the extra, would be best.
The `batchContribute()` function allows a user to send more ETH than needed and refunds the unused portion:

```solidity
function batchContribute(
    BatchContributeArgs calldata args
) external payable onlyDelegateCall returns (uint96[] memory votingPowers) {
    // ...
    if (ethAvailable > 0) payable(msg.sender).transfer(ethAvailable);
}
```
However, the `batchContributeFor()` function requires the exact ETH amount to match the sum of the contributions:

```solidity
function batchContributeFor(
    BatchContributeForArgs calldata args
) external payable onlyDelegateCall returns (uint96[] memory votingPowers) {
    // ...
    if (msg.value != valuesSum) {
        revert InvalidMessageValue();
    }
}
```
The discrepancy between these two functions not only leads to odd behaviour for users who are accustomed to the refund logic in `batchContribute` and attempt to use `batchContributeFor` in a similar manner, but also makes the whole process of executing `batchContributeFor` harder.
Align the implementation of `batchContributeFor` with `batchContribute` by including logic to handle and refund any excess ETH sent.
QA-07 `signingThersholdBps` should be `signingThresholdBps` instead
Low/info. The typographical error in the variable name `signingThersholdBps` used in the mapping declaration can lead to confusion and potential errors during future maintenance or upgrades of the contract. The incorrect spelling is inconsistent with standard naming conventions and may cause misunderstandings or miscommunication between developers.
NB: This variable name is not a breaking issue for the current implementation, but it is considered poor practice and goes against the clarity and readability of the code.
The mapping declaration is intended to track the signing threshold basis points (BPS) for each party. The current spelling in the code is `signingThersholdBps`, where it should be `signingThresholdBps`; this is a clear typographical error.
Using the `OffChainSignatureValidator` contract as a case study, the incorrect variable name is present in two places:

```solidity
mapping(Party => uint96) public signingThersholdBps;
```

And in the `setSigningThresholdBps` function:

```solidity
emit SigningThresholdBpsSet(party, signingThersholdBps[party], thresholdBps);
signingThersholdBps[party] = thresholdBps;
```
Easy fix: rename the mapping from `signingThersholdBps` to `signingThresholdBps` throughout the project's scope.
For `OffChainSignatureValidator.sol`:

```solidity
mapping(Party => uint96) public signingThresholdBps;
...
emit SigningThresholdBpsSet(party, signingThresholdBps[party], thresholdBps);
signingThresholdBps[party] = thresholdBps;
```
QA-08 `burn` should include a similar check like the `prevToken >= withdrawTokens` from `rageQuit()`
Low
The `burn()` function allows for the burning of NFTs. If the same `tokenId` is included more than once in the `tokenIds` array, the function will revert upon the second attempt to burn the already deleted token. This is because the `_burn()` internal function, which is called within `_burnAndUpdateVotingPower`, deletes the token's data, and subsequent calls to `ownerOf(tokenId)` will fail as the token no longer exists. This makes attempts to burn considerably harder.
Calling the `burn` function with an array of `tokenIds` that includes duplicates will result in a successful burn of the token on the first iteration. Keep `_burn()` in mind:

```solidity
function _burn(uint256 id) internal virtual {
    address owner = _ownerOf[id];

    require(owner != address(0), "NOT_MINTED");

    // Ownership check above ensures no underflow.
    unchecked {
        _balanceOf[owner]--;
    }

    delete _ownerOf[id];
    delete getApproved[id];

    emit Transfer(owner, address(0), id);
}
```
On the second iteration, when trying to burn the same `tokenId` again, the call to `ownerOf(tokenId)` will revert, since the token has already been burned and its ownership information deleted. The smart contract code does not currently check for duplicate `tokenIds` in the input array before attempting to burn them.
The potential for reversion is due to these lines:

```solidity
function _burnAndUpdateVotingPower(
    uint256[] memory tokenIds,
    bool checkIfAuthorizedToBurn
) private returns (uint96 totalVotingPowerBurned) {
    for (uint256 i; i < tokenIds.length; ++i) {
        uint256 tokenId = tokenIds[i];
        address owner = ownerOf(tokenId); // <- Reverts here if tokenId was already burned
        ...
        _burn(tokenId); // <- Burns the token and deletes ownerOf mapping
        ...
    }
    ...
}
```
An easy fix would be to introduce a check similar to the one present in `rageQuit()`, i.e.:

```solidity
if (prevToken >= withdrawTokens[i]) revert InvalidTokenOrderError();
```
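Applied to `tokenIds`, a hedged sketch of that strictly-ascending check (the `prevTokenId` local is a hypothetical name, not from the codebase) could look like:

```solidity
// Require strictly ascending tokenIds so duplicates are rejected up front
// instead of reverting deep inside _burn() on the second occurrence.
uint256 prevTokenId;
for (uint256 i; i < tokenIds.length; ++i) {
    if (i != 0 && tokenIds[i] <= prevTokenId) revert InvalidTokenOrderError();
    prevTokenId = tokenIds[i];
    // ...existing burn logic for tokenIds[i]...
}
```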
QA-09 Self-delegation mechanism seems to be flawed
There is potential hijacking of the self-delegation process in the context of crowdfund contributions, i.e. an attacker can effectively redirect the voting power of NFTs, minted post-crowdfund, to themselves, which undermines the integrity of the voting process, potentially leading to a significant shift in control and decision-making within the governance structure.
NB: This issue was previously submitted here, but the recommended fix was never applied. During the course of the contest, sponsors did not clarify whether this is an accepted risk, but since previous reports were linked, it's unclear if this is to be considered a `Known Issue`.
This is hinted at by the interaction between the self-delegation logic in `PartyGovernance.sol` and the contribution processing in `ETHCrowdfundBase.sol`.
In `PartyGovernance.sol`, self-delegation is intended to occur when a user's delegate is `address(0)`. This logic assumes that a user delegating to `address(0)` implies a desire for self-delegation.
However, in `ETHCrowdfundBase.sol`, during contribution processing (`_processContribution()`), a contributor can specify a delegate. If a user has not previously delegated their vote, or if the contribution is made by the contributor themselves, the specified delegate is set.
Modify the logic in `ETHCrowdfundBase.sol` to ensure that if a contribution is made by a user for themselves with a delegate of `address(0)`, it automatically results in self-delegation. This can be done by adding a check: if `msg.sender` is the same as `contributor` and `delegate` is `address(0)`, set the delegate to the `contributor`.
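A minimal sketch of that check, placed near the top of `_processContribution()` (a sketch under the assumption that the variable names match the snippet quoted earlier in this report):

```solidity
// Treat delegate == address(0) on a self-contribution as explicit
// self-delegation, so a later contributor cannot hijack the pending delegation.
if (delegate == address(0) && msg.sender == contributor) {
    delegate = contributor;
}
```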
QA-10 Fix contextual errors in docs
Low, info on better code documentation.
Take a look at `contribute()`:

```solidity
function contribute(
    address delegate,
    bytes memory gateData
) public payable onlyDelegateCall returns (uint96 votingPower) {
    return
        _contribute(
            payable(msg.sender),
            delegate,
            msg.value.safeCastUint256ToUint96(),
            0, // Mint a new party card for the contributor.
            gateData
        );
}
```
The docs for `InitialETHCrowdfund.sol` describe it as:

> contribute: Contribute ETH to this crowdfund on behalf of a contributor.

But this is wrong, because that is actually the description of `contributeFor()`.
Ensure that the documentation really reflects how the code is intended to be used.
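A possible corrected NatSpec comment for `contribute()` (the wording below is my suggestion, not text from the repository):

```solidity
/// @notice Contribute ETH to this crowdfund for yourself (msg.sender),
///         minting a new party card for the caller.
/// @param delegate The address the resulting voting power is delegated to.
/// @param gateData Data to pass to the gatekeeper, if one is set.
```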
#0 - c4-pre-sort
2023-11-13T04:17:13Z
ydspa marked the issue as insufficient quality report
#1 - c4-judge
2023-11-19T18:14:08Z
gzeon-c4 marked the issue as grade-b
#2 - Bauchibred
2023-11-22T16:58:41Z
Hi @gzeon-c4.
Thanks for judging, but I'd really appreciate another review of the grading of this QA report, taking into account my downgraded issues #292 and #557: if both are to be finalised as QA reports, doesn't that give valid grounds for the overall grade of this QA report to be A?
Now talking about #557: imo the report proves flawed logic and a break in the protocol's contextual functionality, since in some cases crowdfunds could get finalised without the party hosts even being sent `minTotalContributions` (let alone `maxTotalContributions`). Coupling this with the facts below:
`ETHCrowdfundBase.sol`'s current funding-split logic is a one-way street: since the crowdfund ends up being won, even if the party host decides to refund all participants their contributions due to a lack of funds to do what was originally planned, the host would have to skip a few contributors, since there wouldn't be enough funds to cover everyone. I would still argue that the issue should be a medium as I originally submitted; if you still think otherwise, do clarify, thanks.
#3 - gzeon-c4
2023-11-23T14:45:49Z
I will review, re: #557 it is QA because it depends on how the party creator should think about minTotalContributions (e.g. if they should consider the fee when they set it).
🌟 Selected for report: catellatech
Also found by: 0xSmartContract, 0xbrett8571, Bauchibred, K42, Myd, SAAJ, ZanyBonzy, clara, foxb868, hunter_w3b, kaveyjoe, pavankv
60.0107 USDC - $60.01
The auditing process began even before the contest started, with a high-level but brief overview of the entire Party DAO ecosystem, i.e. glancing through both the docs and the official repo of the Party Protocol.
Once the contest began, the first day was dedicated to examining the smart contracts in scope. After this review, approximately 6 hours were spent on publicly accessible reports from previous audit contests (links are attached in the Security Researcher Logistics section). This was done to examine findings, draw insights from them, and see whether each recommended fix really mitigates the issue or was never correctly deployed.
This was then followed by a meticulous, line-by-line security review of the sLOC in scope, beginning with the smaller contracts like `OffChainSignatureValidator.sol` and ascending to the bigger ones like `PartyGovernance.sol`. The concluding phase of this process entailed interactive sessions with the developers on Discord. This fostered an understanding of unique implementations and facilitated discussions about potential vulnerabilities and observations that I deemed significant and requiring validation from the team.
- `InitialETHCrowdfund.sol`, as understood, is the gateway for initiating a new Ethereum party. This contract oversees the crowdfunding process, enabling a collective of participants to pool their ETH to form a new party.
- `ETHCrowdfundBase.sol` establishes the core functionalities of the ETH crowdfund mechanism. This contract is the base for `InitialETHCrowdfund.sol` and encapsulates the common logic utilized by the various types of crowdfunding activities within the ecosystem.
- `PartyGovernance.sol` is a comprehensive contract that imbues the `Party` contract with its governance capabilities, and is the most substantial contract in terms of source lines of code in this contest. It lets members vote on and execute proposals and interact with the broader protocol. It includes the core governance logic and is inherited by the `Party` contract; note that this contract is to comply with two different EIPs: EIP-4906 and EIP-165.
- `PartyGovernanceNFT.sol` is an extension of the `Party` contract that incorporates NFT (non-fungible token) functionalities into the party's governance structure. It inherits NFT functionality from solmate's `ERC721` and adheres to the royalty standard specified by `openzeppelin/contracts/interfaces/IERC2981.sol`; this contract is also set up to comply with three different EIPs: EIP-721, EIP-165 & EIP-2981.
- `ProposalExecutionEngine.sol` serves as a dynamic executor of proposals within the party's ecosystem. It is designed to be delegatecalled from parties; parties use a fallback to perform a static delegatecall to this contract, possibly for read-only proposal-related operations. This contract is also set up to comply with ERC-1271.
- `ProposalStorage.sol` acts as a shared storage module for other contracts in the system. With a relatively small codebase, it is focused on providing a consistent and secure storage pattern that other contracts can access and manipulate, due to its implementation of a bucket-like storage system.
- `SetGovernanceParameterProposal.sol` is just a new proposal type specifically geared towards the modification of governance parameters within a party.
- `SetSignatureValidatorProposal.sol` introduces a proposal type that enables setting or validating signature validators within the ecosystem. A great initiative, because it allows parties to establish trust with external entities or validate certain operations through signature validation.
- `OffChainSignatureValidator.sol` is a specialized contract dedicated to verifying off-chain signatures. It reconstitutes messages to ensure they originate from a trusted source and confirms their validity if the signer possesses enough voting power or is an acknowledged member of the party. This contract is also set up to comply with EIP-1271.
- `Implementation.sol` is a utility contract that provides essential helper functions for implementation contracts used from a proxy.

For a deeper dive into the docs, I would recommend going through the following sections of the docs:
The system's testability is pretty much average, if not below average; current coverage is just 62%, which in my honest opinion is far less than what is expected of a project of this magnitude. Another thing to note is the occasionally intricate, lengthy tests that are a bit hard to follow. This shouldn't necessarily be considered a flaw, though, because once familiarized with the setup, devising tests and creating PoCs becomes a smoother endeavour. In that sense I'd commend the team for having provided a high-quality testing sandbox for implementing, testing, and expanding ideas.
Rage quitting is only allowed before `rageQuitTimestamp`, or if permanently enabled; but note that the host can change `rageQuitTimestamp` at will. This enables the host to front-run a rage quitter by resetting `rageQuitTimestamp`, reverting their rage quit. Therefore a user can never be certain that they can rage quit. A user might have entered the party with the expectation, or even on the condition, that they have the choice to rage quit (for the time being). There should be some guarantee that this functionality is available when it says it is.
The `ETHCrowdfundBase` `emergencyExecute()` function, designed for DAO-controlled emergency actions, is vulnerable to host preemption due to the `disableEmergencyExecute()` feature. Hosts currently possess the ability to immediately disable emergency actions, potentially undermining the DAO's responsive capacity in critical situations. Since malicious hosts can simply watch the mempool and front-run calls to `emergencyExecute()` with `disableEmergencyExecute()`, one idea for mitigating this would be to use a timelock.
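A rough sketch of what such a timelock could look like (illustrative only; the delay value, state variable, and function shapes are assumptions, not the project's code):

```solidity
// Illustrative timelocked disable — not the protocol's actual implementation.
uint40 public emergencyDisableTimestamp;
uint40 private constant DISABLE_DELAY = 2 days; // assumed delay

function disableEmergencyExecute() external /* onlyHost */ {
    // Schedule the disable instead of applying it immediately, so the
    // DAO cannot be front-run out of an in-flight emergencyExecute().
    emergencyDisableTimestamp = uint40(block.timestamp) + DISABLE_DELAY;
}

function _isEmergencyExecuteDisabled() internal view returns (bool) {
    return
        emergencyDisableTimestamp != 0 &&
        block.timestamp >= emergencyDisableTimestamp;
}
```

With this pattern, any `disableEmergencyExecute()` call is observable on-chain well before it takes effect, giving the DAO a window to act.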
The `cancelProposal()` function can put a party's system in a completely broken state, so it should be used carefully.
The constructor of `PartyGovernanceNFT` is marked as `payable`. This could be a decision to save gas by avoiding the check, but it is important to note that the savings are extremely marginal compared to the risk of potentially sending ETH to the implementation contract, since it sits behind a proxy.
The `_executeUpgradeProposalsImplementation()` function is a bit risky, since it always upgrades to the latest implementation stored in `_GLOBALS`; if the new implementation is faulty, the admin must still upgrade to it and cannot provide a different implementation to pass the call to.
The `_rageQuit()` feature seems a bit flawed, because someone with a large enough holding calling `rageQuit()` could potentially decide the outcome of votes, and this could impact more than the votes of the caller: as is human nature, a user who might have voted could see that enough votes are present and decide not to vote, only for someone else to call `rageQuit()` in the late stages of the voting process, effectively leaving the proposal's decision unrepresentative of the majority's will.
During my security review, I identified several medium-severity findings impacting the smart contract functionalities, each presenting unique challenges and potential risks to the system's overall integrity and compliance.
A loophole was identified in the contribution process, particularly in the `contributeFor()` function. This allows users who are not permitted by gatekeepers to still partake in contributions through a third party. The root of the issue lies in the gatekeeper check focusing on `msg.sender` instead of the actual contributor. To rectify this, the recommendation is to modify the logic so that the actual contributor is always verified against the gatekeeper's allowlist, thereby maintaining the integrity of private crowdfunds and adhering to legal compliance.
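A sketch of the suggested direction (the `gateKeeper`/`gateKeeperId` names and `isAllowed` shape follow patterns used in the protocol, but the exact signatures here are assumptions):

```solidity
// Sketch: gate on the actual contributor rather than msg.sender.
if (gateKeeper != IGateKeeper(address(0))) {
    // Pass `contributor` (the beneficiary of the contribution), not
    // msg.sender, so third parties cannot bypass the allowlist.
    if (!gateKeeper.isAllowed(contributor, gateKeeperId, gateData)) {
        revert NotAllowedByGateKeeperError(
            contributor,
            gateKeeper,
            gateKeeperId,
            gateData
        );
    }
}
```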
Another significant issue is observed in the crowdfund finalization process. The current setup could lead to crowdfunds being stuck in an 'active' state and eventually being classified as 'lost' due to a deadlock in reaching `maxTotalContributions`. This situation occurs when `minTotalContributions` is set equal to `maxTotalContributions` and the remaining contribution required is less than `minContribution`. The proposed solution involves adjusting the initialization logic to ensure that `minTotalContributions` is never equal to or greater than `maxTotalContributions`.
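One way to enforce this at initialization (a sketch; the `opts` struct field names and the error name are assumptions for illustration):

```solidity
// Sketch: reject configurations that can deadlock the crowdfund.
// Using >= (rather than >) also excludes min == max, the case where the
// gap to maxTotalContributions can fall below minContribution.
if (opts.minTotalContributions >= opts.maxTotalContributions) {
    revert MinGreaterThanMaxError(
        opts.minTotalContributions,
        opts.maxTotalContributions
    );
}
```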
Two notable observations were in regard to the compliance of some contracts with certain EIPs, namely the non-compliance of `PartyGovernanceNFT.sol` with EIP-2981. The contract's `royaltyInfo` function fails to adhere to the EIP's stipulation that royalty payments be calculated based on the sale price, instead returning a fixed royalty amount. Similarly, the `PartyGovernance.sol` contract's adherence to EIP-4906 is also flawed, given the incorrect implementation of the `MetadataUpdate` and `BatchMetadataUpdate` events. These issues not only breach standard protocols but also potentially mislead third-party platforms relying on accurate metadata updates.
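For reference, a sale-price-proportional `royaltyInfo` as EIP-2981 intends could look like the sketch below (the receiver and basis-points value are hypothetical; only the function signature comes from the EIP):

```solidity
uint256 private constant ROYALTY_BPS = 250; // hypothetical 2.5% royalty

function royaltyInfo(
    uint256, /* tokenId */
    uint256 salePrice
) external view returns (address receiver, uint256 royaltyAmount) {
    receiver = address(this); // assumed receiver for illustration
    // Proportional to salePrice, as EIP-2981 requires — not a fixed amount.
    royaltyAmount = (salePrice * ROYALTY_BPS) / 10_000;
}
```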
An interesting one relates to the potential difficulty of achieving unanimous votes for certain proposals. This arises from a precision-loss issue in the `convertVotingPowerToContribution()` function, leading to a discrepancy between the calculated and actual voting power, similar to a previous finding. The problem is exacerbated by the fact that it could hinder the special features that rely on unanimous consensus.
Another one regards the `cancelProposal()` function, where canceling a proposal inadvertently resets the current in-progress proposal ID and next progress data to zero. This disrupts the proposal execution flow in `executeProposal()`, as each proposal's execution is somewhat chained: after a proposal is canceled, any subsequent attempt to execute another proposal would fail, because the reset `nextProgressDataHash` and `currentInProgressProposalId` values conflict with the expected state for executing new proposals, leading to either an assertion failure or invalid proposal progress data errors.
In `ETHCrowdfundBase.sol`, a contextual flaw is present in the crowdfunding process, particularly when funding splits are activated. The current logic deducts a funding split from total contributions, leading to scenarios where a crowdfund's lifecycle is finalized without the party receiving the full intended funding. This is especially problematic when `minTotalContributions` is set equal to or near `maxTotalContributions`, leaving the party underfunded and potentially jeopardizing planned activities or objectives.
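A worked numeric illustration of the shortfall (all values hypothetical, chosen only to show the arithmetic):

```solidity
// Hypothetical configuration, not taken from the codebase:
//   minTotalContributions = maxTotalContributions = 100 ether
//   fundingSplitBps       = 1000 (a 10% funding split)
//
// totalContributions reaches 100 ether -> crowdfund finalizes as Won.
// Amount actually forwarded to the party:
//   100 ether - (100 ether * 1000 / 10_000) = 90 ether
// The party ends up below its own minTotalContributions target.
```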
Lastly, the `accept()` function in `PartyGovernance.sol` exhibits a significant vulnerability to multi-block and uncle-block attacks. The current safeguard noted in the codebase, which checks `lastRageQuitTimestamp` against `block.timestamp`, does not account for such sophisticated attack vectors. This oversight could enable attackers to exploit the voting process, particularly to manipulate the outcomes of proposals, thereby undermining the democratic integrity of the governance mechanism.
The first one I'd like to note revolves around the `emergencyExecute()` function. Its effectiveness is under scrutiny due to the possibility of the host overriding the DAO's decisions in crucial situations. This undermines the purpose of the function, which is intended for emergency actions by the DAO. A proposed solution introduces a time lock on the disabling function, ensuring the host cannot immediately obstruct the DAO's emergency executions.
Another significant issue is the potential misuse of the `_rageQuit()` function, which could disproportionately influence the outcomes of proposals. Members holding substantial voting power might sway the voting process and then abruptly withdraw, compromising the integrity of decision-making. A time buffer restricting `rageQuit()` execution towards the end of the voting process could effectively mitigate this risk.
A separate observation pertains to the redundant naming convention of custom errors within the `PartyGovernanceNFT` smart contract. It is suggested that the superfluous "Error" suffix in custom error names be removed to enhance clarity and align with standard Solidity conventions.
There's also a concern regarding the execution window of the `rageQuit()` function. The current logic allows execution at the precise `block.timestamp` marking the deadline, potentially leading to governance issues. Adjusting the conditional check to account for this deadline moment could address this inaccuracy.
The compatibility of contracts compiled with Solidity `0.8.20` on BASE is questioned due to the lack of support for certain opcodes, such as `PUSH0`. It is recommended to follow BASE's guideline of using Solidity `0.8.19` or earlier until BASE updates its opcode support.
An inconsistency is identified between the implementations of `batchContribute` and `batchContributeFor`. The latter's lack of a mechanism to refund excess ETH, unlike the former, creates an operational inconsistency. Harmonizing `batchContributeFor` with `batchContribute` by including logic for handling and refunding excess ETH is advised.
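A sketch of the refund logic described above, as it might be added to `batchContributeFor()` (the `args` field names and loop structure are assumptions mirroring the described behavior of `batchContribute()`):

```solidity
// Sketch: track and refund unused ETH, mirroring batchContribute().
uint256 ethAvailable = msg.value;
for (uint256 i; i < args.recipients.length; ++i) {
    // Deduct each per-contribution amount from the ETH sent with the call.
    ethAvailable -= args.values[i];
    // ... perform the individual contribution for args.recipients[i] ...
}
// Return whatever was not consumed to the caller.
if (ethAvailable > 0) {
    payable(msg.sender).transfer(ethAvailable);
}
```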
A typographical error in the variable name `signingThersholdBps` is highlighted, with a correction to `signingThresholdBps` recommended for improved clarity and consistency.
The `burn()` function's vulnerability to reversion due to duplicate `tokenIds` in its input array is a technical oversight. Introducing a check to prevent processing of duplicate `tokenIds`, akin to the logic in `rageQuit()`, is suggested.
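A sketch of such a guard, mirroring the strictly-ascending ordering check `rageQuit()` applies to its `withdrawTokens` array (the loop body and error name here are illustrative):

```solidity
// Sketch: enforce strictly ascending tokenIds so duplicates are
// impossible, as rageQuit() does for withdrawTokens.
uint256 prevTokenId;
for (uint256 i; i < tokenIds.length; ++i) {
    if (i > 0 && tokenIds[i] <= prevTokenId) {
        revert InvalidTokenOrderError();
    }
    prevTokenId = tokenIds[i];
    // ... burn tokenIds[i] ...
}
```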
The self-delegation mechanism within the context of crowdfund contributions is flawed. This loophole allows an attacker to redirect the voting power of NFTs minted post-crowdfund to themselves, undermining the governance structure. Adjusting the contribution logic to automatically set self-delegation when contributing with `delegate` as `address(0)` is proposed.
Lastly, there are contextual errors in the documentation, specifically in the description of the `contribute` function, which inaccurately mirrors that of `contributeFor()`. Revising the documentation to accurately reflect the functionality of `contribute` is recommended to ensure clarity and proper understanding.
During the course of my security review, I had to research several topics to understand the context in which they are currently used; some led to learning new things, and others served as refreshers on already-familiar topics. Some of these topics are listed below:
- EIP-4906
- EIP-2981
- EIP-165 & EIP-1271

I'd like to start by commending the quality of the docs. They are generally well crafted. However, there are some gaps concerning what's within scope. It's important to remember that code comments also count as documentation, and there have been some inconsistencies in this area. Comprehensive documentation not only provides clarity but also facilitates the onboarding process for new developers. For instance, the `PartyGovernanceNFT` smart contract unnecessarily appends the term "Error" to its custom error names, such as `FixedRageQuitTimestampError`, `CannotRageQuitError`, and others, which is redundant since Solidity's reverts clearly indicate an error condition. This redundancy does not affect the contract's functionality or security but impacts code readability and brevity. It is recommended to remove the "Error" suffix to align with common Solidity practices, improving code clarity and maintainability. For example, `FixedRageQuitTimestampError` should be simplified to `FixedRageQuitTimestamp`, making the code cleaner and more in line with widely accepted naming conventions.
Having multiple eyes scrutinizing a protocol can be invaluable, as more contributors significantly reduce potential risks and oversights. Evidence of where this could come in handy can be gleaned from codebase typos, e.g. how `thershold` is used instead of `threshold` in multiple instances, even in mappings. Another reason is the limited availability of the developers during the auditing process, i.e. how hard it was to get hold of developers on Discord to talk about potential bug cases. Other cases are simple contextual errors in the docs. I could keep going on, but the main aim of this is to suggest that having more people on the team would help by a large margin to resolve this.
Many security experts prefer using Visual Studio Code augmented with specific plugins. While the popular Solidity Visual Developer has already been integrated with the protocol, there's room for incorporating other beneficial tools.
The current implementation subtly suggests that event tracking isn't maximized. Instances where events are missing or seem arbitrarily embedded have been observed; a shift towards more purposeful utilization of events is recommended.
As already stated in this report regarding the current level of testing, there's very big room for further refinement and improvement. The main goal should be 95%+ code coverage.
There's a need to improve the naming conventions for contracts, functions, and variables. In several instances, the names don't resonate with their respective functionalities, leading to potential confusion; in other cases, clear typos have been made in names, like using `thershold` instead of `threshold`. Another one to note is the redundant suffixing of `Error` to error names: Solidity's revert statements already indicate that an error condition has been met, making the additional "Error" suffix unnecessary.
My review of the Party Protocol spanned around ~35-40 hours distributed over 5 days:
1.5 hours dedicated to writing this analysis.
6 hours exploring all previous contests on Code4rena, viewable from the links below:
This was done to see whether issues were correctly mitigated and whether other instances of previous bugs exist in the current codebase.
2 hours were allocated for discussions with sponsors on the private discord group regarding potential vulnerabilities.
2 hours over the course of the 4 days (~30 minutes per day) were spent lurking in the public discord group to pick up more ideas based on the explanations sponsors provided to questions other security researchers asked.
The remaining time was spent on finding issues and writing the report for each of them on the spot, later on editing a few with more knowledge gained on the protocol as a whole, or downgrading them to QA reports.
The codebase was a great learning experience, though a pretty hard nut to crack: since this is essentially an update contest on largely the same codebase, most easy-to-catch bugs had already been spotted and mitigated in previous contests/audits. This is not surprising, considering there have been three distinct reports on this from Code4rena, and multiple others from independent security researchers and even other auditing firms.
35 hours
#0 - c4-pre-sort
2023-11-13T10:41:23Z
ydspa marked the issue as sufficient quality report
#1 - c4-judge
2023-11-20T18:40:14Z
gzeon-c4 marked the issue as grade-b