Platform: Code4rena
Start Date: 13/12/2023
Pot Size: $36,500 USDC
Total HM: 18
Participants: 110
Period: 8 days
Judge: 0xTheC0der
Id: 311
League: ETH
Rank: 24/110
Findings: 3
Award: $276.12
🌟 Selected for report: 1
🚀 Solo Findings: 0
151.988 USDC - $151.99
The VerbsToken contract deviates from the ERC-721 standard, specifically in its `tokenURI` implementation. According to the standard, the `tokenURI` method must revert if a non-existent `tokenId` is passed. The VerbsToken contract omits this check, violating the EIP-721 specification and breaking the invariants declared in the protocol's README.
One could argue that the responsibility for checking whether a token exists lies with the `descriptor`. However, the core VerbsToken contract, which is expected to uphold the invariant stated in the protocol's README, is the component that fails to follow the specification.
```md
// File: README.md
414:## EIP conformity
415:
416:- [VerbsToken](https://github.com/code-423n4/2023-12-revolutionprotocol/blob/main/packages/revolution/src/VerbsToken.sol): Should comply with `ERC721`
```
Note: the original NounsToken contract, from which VerbsToken was forked, does implement the `tokenURI` function properly.
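The compliant and non-compliant behaviours can be sketched with a small Python stand-in (purely illustrative: the names `pieces` and `token_uri_*` are invented here, and a Solidity revert is modelled as a raised exception):

```python
# Hypothetical Python stand-in; names are invented for illustration,
# and a Solidity revert is modelled as a raised exception.
pieces = {1: "ipfs://piece-1"}  # minted tokenIds -> metadata URIs

def token_uri_noncompliant(token_id: int) -> str:
    # VerbsToken-style: forwards the query without an existence check,
    # so a non-existent id returns empty data instead of reverting.
    return pieces.get(token_id, "")

def token_uri_compliant(token_id: int) -> str:
    # NounsToken-style: EIP-721 requires the call to revert ("throw")
    # when the tokenId is not a valid NFT.
    if token_id not in pieces:
        raise ValueError("URI query for nonexistent token")
    return pieces[token_id]

assert token_uri_noncompliant(999) == ""  # silently succeeds
try:
    token_uri_compliant(999)
    raise AssertionError("should have raised")
except ValueError:
    pass  # reverts, as the standard demands
```

The point is only the control flow: an off-chain indexer calling the non-compliant version cannot distinguish a burned or unminted id from a real token with empty metadata.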
Manual Review
It is recommended to strictly adopt the implementation from the original NounsToken contract to ensure compliance with the ERC-721 standard.
```diff
 function tokenURI(uint256 tokenId) public view override returns (string memory) {
+    require(_exists(tokenId));
     return descriptor.tokenURI(tokenId, artPieces[tokenId].metadata);
 }
```
ERC721
#0 - c4-pre-sort
2023-12-22T20:03:59Z
raymondfam marked the issue as insufficient quality report
#1 - c4-pre-sort
2023-12-22T20:04:07Z
raymondfam marked the issue as duplicate of #110
#2 - c4-judge
2024-01-06T20:52:24Z
MarioPoneder changed the severity to QA (Quality Assurance)
#3 - c4-judge
2024-01-06T20:56:37Z
MarioPoneder marked the issue as grade-b
#4 - c4-judge
2024-01-09T21:39:45Z
This previously downgraded issue has been upgraded by MarioPoneder
#5 - c4-judge
2024-01-09T21:44:09Z
MarioPoneder marked the issue as satisfactory
#6 - c4-judge
2024-01-09T21:45:43Z
MarioPoneder marked the issue as selected for report
#7 - osmanozdemir1
2024-01-11T09:06:12Z
Hi @MarioPoneder
I assume this issue was accepted based on historical C4 judging, even though there is no real impact and it is just a view function. I would like to point out that just a few weeks ago a similar issue was judged as QA due to not having a real impact, even though it breaks two different "MUST" rules of the EIP. Ref: https://github.com/code-423n4/2023-11-panoptic-findings/issues/473

I acknowledge it is a thin line, since there is no definite rule in the org repo regarding EIPs. Maybe C4 should have a rule for EIPs (at least for the "MUST" rules), but of course here is not the place. I just wanted to point out the downgraded issue.
Thanks.
#8 - MarioPoneder
2024-01-11T16:16:40Z
Thank you for your comment!
I agree from a personal point of view.
However, I felt obliged to award with Medium severity due to precedent EIP-721 tokenUri cases (see https://github.com/code-423n4/2023-12-revolutionprotocol-findings/issues/511#issuecomment-1883512625, one judged by Alex).
This should be discussed during the next SC round.
🌟 Selected for report: deth
Also found by: 00xSEV, 0xCiphky, 0xHelium, ABAIKUNANBAEV, Aamir, AkshaySrivastav, King_, Pechenite, SpicyMeatball, Tricko, ast3ros, ayden, bart1e, deth, fnanni, ke1caM, mahdirostami, peanuts, pep7siup, pontifex, ptsanev, roland, rvierdiiev, y4y
7.2215 USDC - $7.22
The vulnerability caps the influence of early-minted art pieces at the snapshot weight of their voters. In scenarios where a substantial number of votes are cast and new tokens are minted before the early pieces reach quorum, those pieces become unable to compete effectively against newly minted ones, reducing their competitive viability.

The root cause is that each art piece's attainable votes are bounded by the voters' token weight at the piece's creation snapshot, leaving early pieces with diminished influence compared to newly minted ones. As a result, despite potential artistic merit, early pieces struggle to garner enough influence in the face of increased competition from newer submissions.
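The capping effect can be sketched in a simplified Python model (a deliberate simplification: the balances, block numbers, and weight rule below are illustrative and mirror the PoC scenario, not the actual CultureIndex implementation):

```python
# Simplified model: a piece's attainable vote weight is read from the
# token-balance snapshot taken at the piece's creation block.
balances_at_block = {
    1: {"voter1": 100, "voter2": 200},     # supply when pieces 1 and 2 are created
    2: {"voter1": 100, "voter2": 21_200},  # voter2 receives 21_000 more before piece 3
}
snapshot_block = {"firstPieceId": 1, "secondPieceId": 1, "thirdPieceId": 2}

def max_total_vote_weight(piece: str) -> int:
    # Even if every holder votes, a piece can never collect more weight
    # than the total supply captured at its snapshot block.
    return sum(balances_at_block[snapshot_block[piece]].values())

assert max_total_vote_weight("firstPieceId") == 300   # capped at the old supply
assert max_total_vote_weight("thirdPieceId") == 21_300
```

The asserted values match the `totalVoteWeights` logged by the Foundry test below: the early pieces top out at 300 while the third piece can reach 21,300.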
```solidity
// File: packages/revolution/test/culture-index/TopArtPiece.t.sol
function testEarlyMintedArtPieceCannotCompete() public {
    // @audit-issue M-earlyMintedArtPieceCannotCompete
    // Create first 2 Art pieces when erc20Token total supply is low (100 for voter1, 200 for voter2)
    uint256 firstPieceId = voter1Test.createDefaultArtPiece();
    uint256 secondPieceId = voter2Test.createDefaultArtPiece();

    // Mint tokens to the test contracts (acting as voters)
    vm.stopPrank();
    vm.startPrank(address(erc20TokenEmitter));
    erc20Token.mint(address(voter1Test), 100);
    erc20Token.mint(address(voter2Test), 200);
    vm.roll(block.number + 1); // roll block number to enable voting

    // Create 3rd Art piece when erc20Token total supply increases (100 for voter1, 200 + 21_000 for voter2)
    uint256 thirdPieceId = voter2Test.createDefaultArtPiece();
    erc20Token.mint(address(voter2Test), 21_000);
    vm.roll(block.number + 1); // roll block number to enable voting snapshot

    // Votes for all Art Pieces, max totalVoteWeights for each piece should be filled
    voter1Test.voteForPiece(firstPieceId);
    voter1Test.voteForPiece(secondPieceId);
    voter1Test.voteForPiece(thirdPieceId);
    voter2Test.voteForPiece(firstPieceId);
    voter2Test.voteForPiece(secondPieceId);
    voter2Test.voteForPiece(thirdPieceId);
    assertEq(cultureIndex.topVotedPieceId(), thirdPieceId, "Third piece should now be top-voted");

    // Assess max totalVoteWeights capacity for each Art Piece
    emit log_string("max votes for firstPieceId");
    emit log_uint(cultureIndex.totalVoteWeights(firstPieceId));
    emit log_string("max votes for secondPieceId");
    emit log_uint(cultureIndex.totalVoteWeights(secondPieceId));
    emit log_string("max votes for thirdPieceId");
    emit log_uint(cultureIndex.totalVoteWeights(thirdPieceId));
}
```
The following test results show that the early-created art pieces can only reach up to 300 votes due to the snapshot weight limitation of their voters. That forces these art pieces out of the contest for any incoming auctions, potentially compromising the fairness of the AuctionHouse.
```sh
FOUNDRY_PROFILE=dev forge test -vv --mt testEarlyMintedArtPieceCannotCompete
```
```
Running 1 test for test/culture-index/TopArtPiece.t.sol:CultureIndexArtPieceTest
[PASS] testEarlyMintedArtPieceCannotCompete() (gas: 1900697)
Logs:
  max votes for firstPieceId
  300
  max votes for secondPieceId
  300
  max votes for thirdPieceId
  21300

Test result: ok. 1 passed; 0 failed; 0 skipped; finished in 7.38ms
```
Foundry test
It is recommended to reevaluate the snapshot weight mechanism for early minted art pieces. Adjustments should be made to ensure that these pieces can maintain competitive viability, even in the presence of a large number of votes and newly minted tokens. Implementing a dynamic snapshot weight calculation that considers both the historical and current voting landscape could address this vulnerability.
Error
#0 - c4-pre-sort
2023-12-22T21:42:37Z
raymondfam marked the issue as sufficient quality report
#1 - c4-pre-sort
2023-12-22T21:42:45Z
raymondfam marked the issue as duplicate of #16
#2 - c4-pre-sort
2023-12-24T15:11:18Z
raymondfam marked the issue as duplicate of #449
#3 - c4-judge
2024-01-06T16:02:49Z
MarioPoneder marked the issue as satisfactory
🌟 Selected for report: DanielArmstrong
Also found by: 0xDING99YA, MrPotatoMagic, SpicyMeatball, bart1e, mojito_auditor, nmirchev8, pep7siup
116.9139 USDC - $116.91
The vulnerability introduces a risk of simultaneous item removal and resurrection in the heap, because the last element is not properly reset after `extractMax`. Should a future update introduce downvoting or unvoting, this unreset last element could break the max-heap property, effectively compromising the AuctionHouse's core functionality.
According to the Severity Standardization's "Speculation on Future Code" section, this issue's validity can reasonably be considered.
The root cause of the vulnerability is that the heap's last element is not cleared after `extractMax` is called, allowing the `rightValue` variable in subsequent calls to `maxHeapify` to read the value at the uncleared position.
```solidity
// File: packages/revolution/src/MaxHeap.sol
 94:    function maxHeapify(uint256 pos) internal {
 95:        uint256 left = 2 * pos + 1;
 96:        uint256 right = 2 * pos + 2; // <= FOUND
 97:
 98:        uint256 posValue = valueMapping[heap[pos]];
 99:        uint256 leftValue = valueMapping[heap[left]];
100:        uint256 rightValue = valueMapping[heap[right]]; // <= FOUND
101:
102:        if (pos >= (size / 2) && pos <= size) return; // <= FOUND: if size is even, the value of the right leaf can still be read despite it being out of range
            ...
113:    }
```
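The out-of-range read is easiest to see from the child-index arithmetic (a quick sanity check, using the same 0-based indexing as the contract):

```python
# With an even heap size, the deepest internal node's right child index
# equals `size`, i.e. one slot past the valid range [0, size - 1].
size = 6                 # valid heap positions are 0..5
pos = size // 2 - 1      # deepest internal node, here pos = 2
right = 2 * pos + 2      # right child index = 6
assert right == size     # maxHeapify reads the stale heap[size] slot
assert not (pos >= size // 2 and pos <= size)  # the early return does not trigger
```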
If a call to `updateValue` on the item at position `parent(size)` reduces its value below the out-of-range `valueMapping[heap[size]]`, the updated item will be heapified downward to the invalid `size` position instead of the expected `size - 1` position. This can resurrect the stale item into the heap, potentially causing two versions of the same `itemID` to exist in the heap.
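The failure mode can also be reproduced outside Foundry with a self-contained Python model of the heap (an illustrative reconstruction, not the audited contract: `defaultdict`s mimic Solidity mappings that default to zero and never shrink, and `update_value` assumes the hypothetical future version that allows lowering a value):

```python
from collections import defaultdict

class MaxHeapModel:
    """Illustrative mirror of MaxHeap.sol's storage: mappings never shrink."""

    def __init__(self):
        self.heap = defaultdict(int)   # position -> itemId (stale slots linger)
        self.value = defaultdict(int)  # itemId -> vote weight (valueMapping)
        self.pos = {}                  # itemId -> current position
        self.size = 0

    def _swap(self, a, b):
        self.heap[a], self.heap[b] = self.heap[b], self.heap[a]
        self.pos[self.heap[a]], self.pos[self.heap[b]] = a, b

    def max_heapify(self, p):
        left, right = 2 * p + 1, 2 * p + 2
        # As in the Solidity code, child slots are read BEFORE the range
        # check, so a stale heap[size] entry is still visible here.
        pv = self.value[self.heap[p]]
        lv = self.value[self.heap[left]]
        rv = self.value[self.heap[right]]
        if p >= self.size // 2 and p <= self.size:
            return
        if pv < lv or pv < rv:
            child = left if lv > rv else right
            self._swap(p, child)
            self.max_heapify(child)

    def insert(self, item, val):
        self.heap[self.size] = item
        self.value[item] = val
        self.pos[item] = self.size
        p = self.size
        while p > 0 and self.value[self.heap[(p - 1) // 2]] < self.value[self.heap[p]]:
            self._swap(p, (p - 1) // 2)
            p = (p - 1) // 2
        self.size += 1

    def extract_max(self):
        popped = self.heap[0]
        self.size -= 1
        self.heap[0] = self.heap[self.size]  # heap[size] is NOT reset afterwards
        self.pos[self.heap[0]] = 0
        self.max_heapify(0)
        return popped, self.value[popped]

    def update_value(self, item, new_val):
        # Hypothetical future path: lowering a value triggers down-heapify.
        p = self.pos[item]
        old = self.value[item]
        self.value[item] = new_val
        if new_val < old:
            self.max_heapify(p)

h = MaxHeapModel()
for item, val in [(0, 80), (1, 50), (2, 40), (3, 35), (4, 20), (5, 10), (6, 11)]:
    h.insert(item, val)
assert h.extract_max() == (0, 80)
assert h.heap[6] == 6               # stale last element is still readable
h.update_value(2, 10)
assert h.heap[6] == 2               # itemId 2 pushed out of the valid range
assert h.heap[2] == h.heap[3] == 6  # itemId 6 now exists at two positions
```

The final assertions reproduce both impacts from the Foundry PoC: itemId 2 lands at the invalid position 6, and itemId 6 is duplicated inside the live heap.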
Run the PoC with:

```sh
cd packages/revolution && FOUNDRY_PROFILE=dev forge test -vvv --mc MaxHeapUpdateTestSuite --mt testResurectHeap
```

The PASS result confirms that if a future `updateValue` to a lower vote count is realized, the resurrection issue will occur, causing undesired impacts such as (1) a targeted art piece being forced out of the heap, or (2) a duplication of the art piece previously located at the end of the heap.
```solidity
// File: packages/revolution/test/max-heap/Updates.t.sol
function testResurectHeap() public {
    // @audit-issue M-unresetLastElementId
    maxHeap.insert(0, 80);
    maxHeap.insert(1, 50);
    maxHeap.insert(2, 40);
    maxHeap.insert(3, 35);
    maxHeap.insert(4, 20);
    maxHeap.insert(5, 10);
    maxHeap.insert(6, 11);

    (uint256 maxItemId, uint256 maxValue) = maxHeap.extractMax();
    assertEq(maxItemId, 0, "Item ID should be 0 after heapify");
    assertEq(maxValue, 80, "Value should be 80 after heapify");

    (maxItemId, maxValue) = maxHeap.getMax();
    assertEq(maxItemId, 1, "New max piece ID should be 1");
    assertEq(maxValue, 50, "New max value should be 50");

    // Verify the last item before extractMax is not cleared, itemId 6 can be read
    assertEq(maxHeap.heap(6), 6, "Heap item 6 cleared");

    // Lower the value of itemId 2 (parent of 6) to 10 (less than 11), effectively
    // down-heapify it to position 6 which is out of the heap's valid range
    maxHeap.updateValue(2, 10);

    // Impact 1: itemId 2 effectively removed from the heap, since it was swapped to position 6
    assertEq(maxHeap.heap(6), 2, "Heap itemId 2 not moved down to pos 6");
    assertEq(maxHeap.valueMapping(maxHeap.heap(6)), 10, "Heap item 2 not moved down to pos 6");

    // Impact 2: Heap itemId 6 swapped to pos 2. More severely, itemId 6 exists at 2 positions 2 and 3
    assertEq(maxHeap.heap(2), 6, "Heap itemId 6 not resurect to pos 2");
    assertEq(maxHeap.valueMapping(maxHeap.heap(2)), 11, "Heap item 6 not moved up to pos 2");
    assertEq(maxHeap.heap(2), maxHeap.heap(3), "Heap itemId 6 not duplicated at pos[2, 3]");
}
```
Before `extractMax`:

```
        80
      /    \
    50      40
   /  \    /  \
  35  20  10  11
```
This is the initial state of the max heap, represented as a tree. The root holds the value `80`, and the children are arranged such that each node is greater than or equal to its children.
Now, let's go through the steps of the `extractMax` function:

1. The root, holding the maximum value (`80`), is removed.
2. The last element (`11`) is moved to the root, `size` is decremented, and `maxHeapify(0)` restores the heap property. The slot at the old last position, however, is not cleared.

After `extractMax`:
```
        50
      /    \
    35      40
   /  \    /  \
  11  20  10  11 (uncleared)
```
After `updateValue(2, 10)`:
```
        50
      /    \
    35      11   (itemId 6 resurrected here, duplication occurred)
   /  \    /  \
  11  20  10  10 (itemId 2 was swapped to here)
```
Foundry test
It is recommended to add a line resetting `heap[size]` to an out-of-range ID after the extraction. This ensures that the last element is properly cleared, mitigating the risk of future item resurrection in the heap.
```diff
 function extractMax() external onlyAdmin returns (uint256, uint256) {
     require(size > 0, "Heap is empty");
     uint256 popped = heap[0];
     heap[0] = heap[--size];
+    heap[size] = type(uint256).max;
     maxHeapify(0);
     return (popped, valueMapping[popped]);
 }
```
Other
#0 - c4-pre-sort
2023-12-22T22:23:19Z
raymondfam marked the issue as sufficient quality report
#1 - c4-pre-sort
2023-12-22T22:23:34Z
raymondfam marked the issue as duplicate of #266
#2 - c4-judge
2024-01-06T12:28:13Z
MarioPoneder changed the severity to QA (Quality Assurance)
#3 - MarioPoneder
2024-01-06T12:31:42Z
See comment on primary issue: https://github.com/code-423n4/2023-12-revolutionprotocol-findings/issues/266#issuecomment-1879666356
#4 - c4-judge
2024-01-07T16:24:16Z
MarioPoneder marked the issue as grade-b
#5 - hungdoo
2024-01-10T08:36:14Z
Hi @MarioPoneder ,
Thanks for the judging effort.
This comment is more for clarity gaining than escalation.
According to the "Speculation on future code" section, validity can be granted based on the likelihood of a future code change that would give the bug real impact, is that correct? One thing we know for sure is that if down-voting or un-voting is introduced, this bug would become reality. My question is: what likelihood would be required for it to be considered valid? Even if the Sponsor confirms that down-voting will be implemented in the next version, is that enough for consideration?
Speculation on future code
Any issue that is not exploitable within the scope of the contest is defined as speculating on future code. Any such speculation only has the potential to be valid if the root cause is demonstrated to be in the contest scope. Warden may make an argument on why a future code change that would make the bug manifest is reasonably likely. Based on likelihood considerations, the Judge may assign a severity rating in any way they see fit.
If the exploitability relies on a particular 3rd party integration, the likelihood must factor in a competent integrator who has done due diligence.
#6 - c4-judge
2024-01-10T23:49:24Z
This previously downgraded issue has been upgraded by MarioPoneder
#7 - c4-judge
2024-01-10T23:53:28Z
MarioPoneder marked the issue as satisfactory