Platform: Code4rena
Start Date: 02/10/2023
Pot Size: $1,100,000 USDC
Total HM: 28
Participants: 64
Period: 21 days
Judge: GalloDaSballo
Total Solo HM: 13
Id: 292
League: ETH
Rank: 13/64
Findings: 3
Award: $7,263.66
🌟 Selected for report: 0
🚀 Solo Findings: 0
🌟 Selected for report: HE1M
Also found by: 0xsomeone, AkshaySrivastav, Aymen0909, J4de, Koolex, Mohandes, bin2chen, brgltd, cccz, hals, ladboy233, max10afternoon, peanuts, rvierdiiev, shealtielanz, tsvetanovv, zzzitron
273.5673 USDC - $273.57
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L342
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L344-L349
The deposit limitations meant to be enforced by the `L1ERC20Bridge` on the Ethereum mainnet are enforced incorrectly. In detail, the deposit limitation enforcement function `L1ERC20Bridge::_verifyDepositLimit` will enforce the deposit limit solely when the `depositLimitation` flag is `true`.
Per the `AllowList::setDepositLimit` implementation, the `depositLimitation` of a token can be toggled between the `false` and `true` states. This is a valid use case; for example, a token may become compromised and the zkSync Era team may wish to restrict migrations of it to small amounts.
The problem lies in the accounting system of the `L1ERC20Bridge`. Due to the "disconnect" between the deposit limitation enforcement and the user's transactions, the two scenarios described in the PoC can occur.
The `L1ERC20Bridge::claimFailedDeposit` function will attempt to decrement the deposit limit of the user when reverting their failed deposit. To do this, it performs a subtraction on the `totalDepositedAmountPerUser` if the `depositLimitation` is `true`. In the following scenario, the code will prevent the user from claiming their failed deposit:

- The user invokes `L1ERC20Bridge::deposit` while the deposit limitations for a token do not exist, and the deposit fails on L2
- The `depositLimitation` for the token is subsequently enabled
- The user invokes `L1ERC20Bridge::claimFailedDeposit` to recover the funds of their failed deposit
At this point, the subtraction performed by `L1ERC20Bridge::_verifyDepositLimit` will underflow, as the `totalDepositedAmountPerUser` entry was never incremented, causing the transaction to fail and the funds to be irrecoverable until an upgrade.
The issue is applicable in the other direction as well; a user may detect an on-chain transaction that is about to enforce a deposit limitation for a token. In that case, the user may wish to circumvent this restriction by migrating a significant amount of funds before the deposit limitation is set.
The user will then be able to migrate more funds after the limitation is set as their significant migration was not tracked by the system.
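Both failure modes follow from the same root cause: the per-user total is only touched while the flag is on. The first scenario can be reproduced with a minimal Python model (a hypothetical sketch with illustrative names, not the actual Solidity code):

```python
# Hypothetical Python model of the flawed accounting (illustrative names,
# not the actual Solidity). Deposits are only tracked while the
# depositLimitation flag is on, mirroring L1ERC20Bridge::_verifyDepositLimit.
class BuggyBridge:
    def __init__(self):
        self.deposit_limitation = False  # AllowList::setDepositLimit toggle
        self.deposit_cap = 0
        self.total_deposited = {}        # totalDepositedAmountPerUser

    def _verify_deposit_limit(self, user, amount, claiming):
        if not self.deposit_limitation:
            return  # no accounting is performed at all
        deposited = self.total_deposited.get(user, 0)
        if claiming:
            if deposited < amount:
                # Solidity 0.8 checked arithmetic would revert here
                raise ArithmeticError("underflow: deposit was never tracked")
            self.total_deposited[user] = deposited - amount
        else:
            if deposited + amount > self.deposit_cap:
                raise ValueError("deposit cap exceeded")
            self.total_deposited[user] = deposited + amount

bridge = BuggyBridge()
# Deposit while no limitation exists: nothing is recorded
bridge._verify_deposit_limit("alice", 100, claiming=False)
# The limitation is enabled afterwards
bridge.deposit_limitation, bridge.deposit_cap = True, 50
try:
    # claimFailedDeposit now underflows; the funds are stuck
    bridge._verify_deposit_limit("alice", 100, claiming=True)
except ArithmeticError as exc:
    print(exc)
```

The second scenario is the mirror image: the 100-unit deposit above is never counted against the user, so a later cap of 50 does not account for it.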
Manual Review.
We advise the `totalDepositedAmountPerUser` limitation to be tracked regardless of whether the `depositLimitation` for the token is set, and the `require` check's actual limitation to be evaluated solely when the `depositLimitation` is `true`.
This will ensure that the system properly tracks deposit limits per user and does not cause unexpected failures when migrating from one state to the other.
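A minimal, hypothetical Python sketch of the recommended pattern (illustrative names, not the actual Solidity): the per-user total is updated unconditionally, while the cap comparison alone is gated on the flag.

```python
# Hypothetical sketch of the recommendation: totalDepositedAmountPerUser is
# updated unconditionally, and only the cap comparison is gated on the
# depositLimitation flag.
class FixedBridge:
    def __init__(self):
        self.deposit_limitation = False
        self.deposit_cap = 0
        self.total_deposited = {}

    def _verify_deposit_limit(self, user, amount, claiming):
        deposited = self.total_deposited.get(user, 0)
        if claiming:
            # always tracked, so the subtraction can no longer underflow
            self.total_deposited[user] = deposited - amount
        else:
            # the cap itself is enforced solely while the flag is set
            if self.deposit_limitation and deposited + amount > self.deposit_cap:
                raise ValueError("deposit cap exceeded")
            self.total_deposited[user] = deposited + amount

bridge = FixedBridge()
bridge._verify_deposit_limit("alice", 100, claiming=False)  # tracked even with flag off
bridge.deposit_limitation, bridge.deposit_cap = True, 50
bridge._verify_deposit_limit("alice", 100, claiming=True)   # claim now succeeds
print(bridge.total_deposited["alice"])  # → 0
```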
Math
#0 - c4-pre-sort
2023-11-02T14:08:22Z
141345 marked the issue as duplicate of #425
#1 - c4-judge
2023-11-24T20:01:09Z
GalloDaSballo marked the issue as satisfactory
6776.3633 USDC - $6,776.36
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L303-L306
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L409-L411
The `Executor::revertBatches` function will improperly revert batches with an upgrade, as it permits batches before the upgrade to also be reverted.
As such, all batch commitments and validations that have been reverted will be processed as if they were submitted after the upgrade even though they may have been submitted before it.
This will cause a fatal failure in the processing of batches, as the `Executor::_commitBatchesWithSystemContractsUpgrade` function would be executed in an earlier batch than intended, causing the `Executor::_processL2Logs` function to fail.
At a minimum, this will cause the executor to halt its relayed batches (which are expected to be processed in sequence) until a smart contract upgrade addresses this issue. I consider this to be of major impact as it effectively leads to a disconnection between the state of the zkSync Era blockchain and the Ethereum blockchain.
The steps outlined below describe a high-level, step-by-step flow due to the complexity of creating mock batches for submission to the `Executor` contract:
- Batches are committed via `Executor::commitBatches`
- The committed batches are proven via `Executor::proveBatches`
- An upgrade is scheduled; `BaseZkSyncUpgrade::_setL2SystemContractUpgrade` specifies that the hash is meant for the next batch to be committed
- The upgrade batch is committed via the `Executor::commitBatches` function
- The committed batches, including the upgrade batch, are reverted via the `Executor::revertBatches` function

At this point, the system will have reverted the upgrade batch as well as several batches in the past that have yet to be finalized via the `Executor::executeBatches` function. Attempting to re-submit the same batches would fail, as the batches committed before the upgrade will not have the relevant `EXPECTED_SYSTEM_CONTRACT_UPGRADE_TX_HASH_KEY` log value.
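The interaction can be reduced to a toy Python model (entirely hypothetical structure; it captures only the invariant that, once an upgrade is scheduled, the next committed batch must carry the upgrade-transaction log):

```python
# Toy, hypothetical model of the commit/revert interaction. Once an upgrade is
# scheduled, every newly committed batch must carry the upgrade-tx log, so
# pre-upgrade batches can no longer be re-committed as originally built.
class ToyExecutor:
    def __init__(self):
        self.committed = []              # batches committed so far
        self.upgrade_scheduled = False   # set when an upgrade tx is queued
        self.upgrade_batch = None        # index of the batch carrying the upgrade

    def commit(self, batch):
        if self.upgrade_scheduled and self.upgrade_batch is None:
            # the next committed batch MUST carry the upgrade-tx log
            if not batch.get("has_upgrade_log"):
                raise ValueError("missing EXPECTED_SYSTEM_CONTRACT_UPGRADE_TX_HASH_KEY log")
            self.upgrade_batch = len(self.committed)
        self.committed.append(batch)

    def revert_batches(self, new_last):
        # flaw: also rolls back batches committed BEFORE the upgrade batch
        self.committed = self.committed[:new_last]
        if self.upgrade_batch is not None and self.upgrade_batch >= new_last:
            self.upgrade_batch = None    # upgrade must be re-committed

ex = ToyExecutor()
ex.commit({"id": 1})                           # pre-upgrade batch
ex.upgrade_scheduled = True
ex.commit({"id": 2, "has_upgrade_log": True})  # upgrade batch
ex.revert_batches(0)                           # reverts BOTH batches
try:
    ex.commit({"id": 1})                       # re-submit the pre-upgrade batch
except ValueError as exc:
    print(exc)                                 # fails: now expected to carry the log
```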
Manual Review.
There are multiple mitigation approaches that can be adopted for this particular issue. The system already prevents batches that have been executed from being reverted; this restriction can be expanded to upgrade batches. For example, any batches before an upgrade batch could be made irrevocable, permitting the system to be reverted only up to and inclusive of the `l2SystemContractsUpgradeBatchNumber`.
Another potential mitigation involves tracking the number of batches that were submitted before the upgrade batch and have been reverted. The system could then permit up to that number of batches to be submitted before mandating that an upgrade batch is submitted. This mitigation is slightly more complex but would allow batches before an upgrade to be reverted.
DoS
#0 - c4-pre-sort
2023-11-02T14:08:40Z
141345 marked the issue as sufficient quality report
#1 - miladpiri
2023-11-06T11:06:11Z
The old batches will have to be reexecuted. So, this is by design.
#2 - c4-sponsor
2023-11-06T11:06:18Z
miladpiri (sponsor) disputed
#3 - c4-judge
2023-11-25T20:11:35Z
GalloDaSballo changed the severity to QA (Quality Assurance)
#4 - GalloDaSballo
2023-11-25T20:11:50Z
Downgrading to QA as Operative Risk, consistent with other reports
#5 - alex-ppg
2023-11-30T22:59:34Z
I would like to add some additional background information to the exhibit as it appears to not have been properly consumed by the sponsor.
Let us consider a batch that is composed of X transactions pre-upgrade and Y transactions post-upgrade. This batch will be committed for validation and be validated as part of the executor's normal operation.
If such a batch is reverted after it has been committed, all X transactions pre-upgrade will fail when re-submitted. This is because the system mandates an upgrade transaction to be submitted as the first transaction of an ensuing batch even if it originally was introduced in between a batch.
This can have significant implications and does not align with the "transactions can be simply re-submitted" notion as the system incorrectly mandates an upgrade transaction to ensue. Additionally, if any of these X transactions rely on a pre-upgrade state, they will never be able to be re-submitted. This may cause synchronization issues off-chain as well as overall incorrect executor-related behaviour.
I believe there is merit in revisiting this exhibit by both the sponsor and the judge in light of these clarifications, if not as a major at least as a medium severity submission.
#6 - GalloDaSballo
2023-12-02T10:16:33Z
Will double check for dups from another issue due to Reverted batches + Batch upgrade Prio
#7 - c4-judge
2023-12-07T18:54:35Z
This previously downgraded issue has been upgraded by GalloDaSballo
#9 - c4-judge
2023-12-07T18:54:58Z
GalloDaSballo marked the issue as duplicate of #214
#10 - c4-judge
2023-12-07T18:55:05Z
GalloDaSballo marked the issue as satisfactory
🌟 Selected for report: lsaudit
Also found by: 0xAnah, 0xhacksmithh, 0xsomeone, SM3_SS, Sathish9098, albahaca, c3phas, hunter_w3b
213.7261 USDC - $213.73
This submission serves as a gas optimization report of the zkSync Era codebase. All optimizations relate to contracts meant to be deployed to the Ethereum mainnet per the contest guidelines. The ways the codebase can be optimized are as follows:
UnsafeBytes
The `UnsafeBytes` library is utilized across the `L1ERC20Bridge`, `L1WethBridge`, `Executor`, and `Mailbox` contracts to parse significantly large payloads of data sequentially in an optimized way.
All utilization points of the library make use of a `memory` argument when a `calldata` argument could be passed in (`L1ERC20Bridge::_parseL2WithdrawalMessage`, `L1WethBridge::_parseL2EthWithdrawalMessage`, `Executor::_processL2Logs`, `Mailbox::_parseL2WithdrawalMessage`). If the library was updated to work on `calldata` arguments, its gas cost would be significantly reduced as it would utilize `calldataload`, would not require copying the full `calldata` payload to `memory`, and would not incur any hidden memory expansion gas costs.
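As a loose analogy in Python (not Solidity or EVM semantics): parsing through a zero-copy view of the original buffer avoids materializing a copy, much as reading via `calldataload` avoids copying the payload into `memory`.

```python
# Illustrative payload: a 2-byte tag, a 32-byte big-endian length, then data.
payload = b"\x00\x01" + (20).to_bytes(32, "big") + b"\xaa" * 20

# memory-argument analogue: each bytes slice materializes a fresh copy
selector_copy = payload[0:2]

# calldata-argument analogue: a memoryview reads from the original buffer
view = memoryview(payload)
selector = bytes(view[0:2])                 # only the consumed bytes are materialized
length = int.from_bytes(view[2:34], "big")  # parsed straight from the view
print(selector.hex(), length)  # → 0001 20
```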
L2ContractHelper
Lines Affected | Gas Optimization |
---|---|
L27 | Minimal |
The referenced modulo operation (`%`) is inefficient, as its purpose is to identify whether the last bit of `bytecodeLenInWords` is `1`.
We advise an `AND` operation to be utilized instead, reducing the required gas cost from `5` to `3` per operation.
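The equivalence behind this substitution can be checked directly (on the EVM, `MOD` costs 5 gas and `AND` costs 3, as the report notes):

```python
# x % 2 and x & 1 agree for all non-negative integers: the low bit of x is
# exactly its parity, so the cheaper AND can replace the MOD.
for x in range(4096):
    assert x % 2 == (x & 1)
print("x % 2 == x & 1 verified for 0..4095")
```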
Lines Affected | Gas Optimization |
---|---|
L50 | Minimal |
The referenced multiplication statement (`*`) is inefficient, as its purpose is to "shift" the `_bytecodeHash[2]` value to the left, reconstructing the `uint16` variable representing the bytecode length in words.
We advise a bitwise shift operation to be utilized instead, reducing the gas cost required by the operation from `5` to `3` for each invocation.
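The underlying identity: multiplying a byte by `256` is the same as shifting it left by 8 bits (on the EVM, `MUL` costs 5 gas and `SHL` costs 3):

```python
# Reconstructing a uint16 from a high and a low byte: b * 256 == b << 8.
hi, lo = 0x12, 0x34
assert hi * 256 + lo == ((hi << 8) | lo) == 0x1234
for b in range(256):
    assert b * 256 == (b << 8)
print("b * 256 == b << 8 verified for all byte values")
```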
AllowList
The referenced statements will re-calculate the `keccak256` hash of the `hasSpecialAccessToCall` mapping slots twice without being optimized.
While the compiler does optimize consecutive `keccak256` calculations of the same value, `hasSpecialAccessToCall` is accessed as a nested mapping, causing three unique `keccak256` calculations per access and thus six `keccak256` operations across the two accesses, several of which are redundant.
We advise the `hasSpecialAccessToCall[_caller][_target]` lookup to be stored in a local `mapping(bytes4 => bool) storage` variable, permitting its pre-calculated offset to be re-used when assigning the `_enable` flag within the ensuing `if` block.
Diamond
Lines Affected | Gas Optimization |
---|---|
L101-L104 | Observable |
The referenced variables are declared locally within the `for` loop of `Diamond::diamondCut` whilst they are already present in `memory` as the function's argument.
We advise the code to instead utilize a single local variable within the `for` loop, a `FacetCut memory` variable whose members are accessed wherever the previous declarations were.
This optimization will result in a median decrease of `174` gas units per iteration with optimizations turned on for a `FacetCut` payload with `3` selectors. The savings will scale with the number of selectors present in a cut.
The referenced statements will re-calculate the `keccak256` hash of the `facetToSelectors` and `selectorToFacet` mapping slots multiple times without being optimized.
While the compiler does optimize consecutive `keccak256` calculations of the same value, the referenced statements are not performed in sequence and would benefit from an optimization that caches the `mapping` lookup in a local variable.
The total lookups that will be optimized by a local variable are `3`.
LibMap
Lines Affected | Gas Optimization |
---|---|
L6 | Moderate |
The `LibMap` in use by the codebase is currently sub-optimal, as an implementation is present in the official Solady repository that contains a greater degree of optimization. In detail, lookups are optimized by using bitwise operators instead of divisions and multiplications (resulting in a `2` gas reduction per operation) whilst sets are entirely performed in `assembly` blocks, further benefitting from optimized gas costs.
We advise the latest implementation by Solady to be re-imported in the codebase, optimizing its gas cost across the board.
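The packing scheme such a map relies on can be sketched in Python using only shifts and masks (an illustrative model of the layout; Solady's Solidity implementation differs in detail):

```python
# Minimal sketch of a LibMap-style packed uint32 map: eight 32-bit values per
# 256-bit word, addressed with shifts and masks instead of DIV/MUL.
class Uint32Map:
    def __init__(self):
        self.slots = {}  # word index -> 256-bit word

    def get(self, index: int) -> int:
        word = self.slots.get(index >> 3, 0)          # index / 8 via shift
        return (word >> ((index & 7) << 5)) & 0xFFFFFFFF

    def set(self, index: int, value: int) -> None:
        bit = (index & 7) << 5                        # (index % 8) * 32 via mask/shift
        word = self.slots.get(index >> 3, 0)
        word &= ~(0xFFFFFFFF << bit)                  # clear the 32-bit lane
        word |= (value & 0xFFFFFFFF) << bit
        self.slots[index >> 3] = word

m = Uint32Map()
m.set(0, 123); m.set(7, 456); m.set(8, 789)
print(m.get(0), m.get(7), m.get(8))  # → 123 456 789
```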
Merkle
Lines Affected | Gas Optimization |
---|---|
L30 | Minimal |
The referenced modulo operation (`%`) is inefficient, as its purpose is to identify whether the last bit of `_index` is `1`.
We advise an `AND` operation to be utilized instead, reducing the required gas cost from `5` to `3` per operation.
Lines Affected | Gas Optimization |
---|---|
L33 | Minimal |
The referenced division operation (`/`) is inefficient, as its purpose is to divide `_index` by `2` on each iteration, effectively shifting its bits.
We advise a bitwise shift operation to be utilized instead, optimizing the gas cost of each iteration by `2` gas units.
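Both substitutions appear together in a typical Merkle-proof walk; the sketch below shows them in Python (`sha256` stands in for the contract's hash function, and the structure is illustrative rather than the audited code):

```python
# Minimal Merkle-proof verification showing the bitwise forms:
# index & 1 replaces index % 2, and index >>= 1 replaces index /= 2.
import hashlib

def h(a, b):
    return hashlib.sha256(a + b).digest()

def process_proof(leaf, index, proof):
    node = leaf
    for sibling in proof:
        if index & 1:      # was: index % 2 == 1 (is the node a right child?)
            node = h(sibling, node)
        else:
            node = h(node, sibling)
        index >>= 1        # was: index /= 2 (ascend one level)
    return node

# Build a 4-leaf tree and verify leaf 2 against its root.
leaves = [hashlib.sha256(bytes([i])).digest() for i in range(4)]
l01, l23 = h(leaves[0], leaves[1]), h(leaves[2], leaves[3])
root = h(l01, l23)
assert process_proof(leaves[2], 2, [leaves[3], l01]) == root
print("proof verified")
```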
Executor
Lines Affected | Gas Optimization |
---|---|
L243 | Observable |
The `Executor::_commitBatchesWithSystemContractsUpgrade` function will execute special logic for the first batch of the total batches being committed inefficiently within the `for` loop.
We advise the system upgrade batch to be committed outside the `for` loop and the `for` loop body to be identical to that of `Executor::_commitBatchesWithoutSystemContractsUpgrade`, albeit starting the iterator at `1` instead of `0`.
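The restructure amounts to hoisting the first-iteration special case out of the loop; a Python sketch with hypothetical helper names:

```python
# Hypothetical sketch of the suggested restructure: hoist the i == 0 special
# case out of the loop instead of branching on it every iteration.
def commit_with_upgrade_branchy(batches, commit, commit_upgrade):
    for i, batch in enumerate(batches):
        if i == 0:                  # branch evaluated on EVERY iteration
            commit_upgrade(batch)
        else:
            commit(batch)

def commit_with_upgrade_hoisted(batches, commit, commit_upgrade):
    commit_upgrade(batches[0])      # special case handled once, outside the loop
    for batch in batches[1:]:       # iterator effectively starts at 1
        commit(batch)
```

Both variants produce the same sequence of commits; the hoisted form simply avoids re-evaluating the first-batch branch on every pass.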
Mailbox
The referenced statements will re-calculate the `keccak256` hash of the `isEthWithdrawalFinalized` mapping slots two times redundantly in the same codeblock.
We advise the `s.isEthWithdrawalFinalized[_l2BatchNumber]` lookup to be cached to a local `mapping(uint256 => bool) storage` variable that is consequently utilized for the `true` assignment statement, optimizing the gas cost of the function by one less redundant `keccak256` operation.
L1ERC20Bridge
The referenced statements will re-calculate the `keccak256` hash of the `isWithdrawalFinalized` and `totalDepositedAmountPerUser` mapping slots two times redundantly in each codeblock.
We advise the `isWithdrawalFinalized[_l2BatchNumber]` and `totalDepositedAmountPerUser[_l1Token]` lookups to be cached to local `mapping(uint256 => bool) storage` and `mapping(address => uint256) storage` variables respectively, optimizing the gas cost of each function by one less redundant `keccak256` operation.
L1WethBridge
The referenced statements will re-calculate the `keccak256` hash of the `isWithdrawalFinalized` mapping slots two times redundantly in the same codeblock.
We advise the `isWithdrawalFinalized[_l2BatchNumber]` lookup to be cached to a local `mapping(uint256 => bool) storage` variable that is consequently utilized for the `true` assignment statement, optimizing the gas cost of the function by one less redundant `keccak256` operation.
Governance
Lines Affected | Gas Optimization |
---|---|
L225 | Minimal |
The referenced iterator increment statement, unlike the rest of the codebase, does not make use of the `UncheckedMath` contract.
We advise it to be utilized and the increment statement to be performed unsafely, optimizing the code's gas cost.
BaseZkSyncUpgrade
Lines Affected | Gas Optimization |
---|---|
L204 | Minimal |
The referenced iterator increment statement, unlike the rest of the codebase, does not make use of the `UncheckedMath` contract.
We advise it to be utilized and the increment statement to be performed unsafely, optimizing the code's gas cost.
#0 - 141345
2023-10-26T04:34:41Z
4 r 4 nc
UBS-01G: Significant Optimization of Overall Transaction Parsing r
LCH-01G: Inefficient Evaluation of Modulo nc
LCH-02G: Inefficient Evaluation of Multiplication nc
ALS-01G: Inefficient Mapping Lookups r
DDN-01G: Inefficient Memory Expansion r
DDN-02G: Inefficient Mapping Lookups d
LMP-01G: Sub-Optimal Fork of Solady r
MEL-01G: Inefficient Evaluation of Modulo d
MEL-02G: Inefficient Evaluation of Division d
ESR-01G: Inefficient Evaluation of Ternary Operator nc
MXO-01G: Inefficient Mapping Lookups d
LER-01G: Inefficient Mapping Lookups d
LWB-01G: Inefficient Mapping Lookups d
GSE-01G: Inefficient Iterator Increment Statement nc
BZS-01G: Inefficient Iterator Increment Statement d
#1 - c4-pre-sort
2023-11-02T16:10:01Z
141345 marked the issue as sufficient quality report
#2 - c4-judge
2023-11-15T13:15:39Z
GalloDaSballo marked the issue as grade-b
#3 - alex-ppg
2023-11-30T23:13:44Z
Hey @141345, I would be keen to know why the following optimizations are marked as "d":
For example, DDN-02G refers to the following lookups:
- `Diamond::_addOneFunction`: will optimize at best 2 lookups, at worst 1 lookup
- `Diamond::_removeOneFunction`: will optimize at best 3 lookups, at worst 1 lookup

Please keep in mind that a `mapping` entry lookup performs a `keccak256` instruction under the hood (as well as potential `abi.encode` etc. methods) to construct the "key" (i.e. storage offset) used to access an entry. I am more than happy to produce supplemental test cases showcasing the gas savings of each exhibit.
Additionally, `BZS-01G` is marked as `d` while `GSE-01G` is marked as `nc`, which does not make sense as they relate to the exact same type of optimization.
#4 - GalloDaSballo
2023-12-10T10:04:00Z
I have checked adding further refactorings to this report score and don't believe it would qualify for A at this time
I also agree with the Presorter Heuristic that a specific type of optimization (lookup) should be awarded only once
I recommend listing the instances in sequence instead of sending them file by file