Platform: Code4rena
Start Date: 02/10/2023
Pot Size: $1,100,000 USDC
Total HM: 28
Participants: 64
Period: 21 days
Judge: GalloDaSballo
Total Solo HM: 13
Id: 292
League: ETH
Rank: 64/64
Findings: 17
Award: $0.00
🌟 Selected for report: 9
🚀 Solo Findings: 3
🌟 Selected for report: chainlight
0 USDC - $0.00
The 'shr' instruction in zkSync Era's 'shift' opcode lacks a constraint on the remainder, potentially allowing malicious operators to generate fake proofs for contracts.
In the 'shift' opcode implementation, the 'shr' instruction in 'main_vm' divides the source register 'reg' by a constant shift value, using the 'allocate_div_result_unchecked' function.
```rust
let reg = &common_opcode_state.src0_view.u32x8_view;
let full_shift_limbs = get_shift_constant(cs, full_shift);
let (rshift_q, _rshift_r) = allocate_div_result_unchecked(cs, &reg, &full_shift_limbs);
```
https://github.com/code-423n4/2023-10-zksync/blob/main/code/era-zkevm_circuits/src/main_vm/opcodes/shifts.rs#L56
https://github.com/code-423n4/2023-10-zksync/blob/main/code/era-zkevm_circuits/src/main_vm/opcodes/shifts.rs#L76
https://github.com/code-423n4/2023-10-zksync/blob/main/code/era-zkevm_circuits/src/main_vm/opcodes/shifts.rs#L82
The 'rshift_q' represents the quotient, while '_rshift_r' represents the remainder of the division. The issue is that no constraint ensures the remainder is smaller than the divisor. Without it, a malicious operator can create a fraudulent proof by assigning an arbitrary quotient/remainder pair to the 'shr' instruction, impacting any contract that relies on this instruction for its functionality. For instance, the two precompile contracts 'EcMul' and 'EcAdd' use 'shr' frequently.
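The danger of leaving the remainder unconstrained can be illustrated outside the circuit with a small sketch (plain Python integers standing in for field elements; function names are illustrative, not from the codebase). The only equation the circuit checks is `x = q * d + r`; without `r < d`, infinitely many `(q, r)` pairs satisfy it:

```python
def honest_shr(x: int, shift: int) -> int:
    # shr modeled as division by 2**shift: x = q * d + r with 0 <= r < d
    d = 2 ** shift
    q, r = divmod(x, d)
    assert 0 <= r < d          # the constraint the circuit is missing
    return q

def malicious_shr(x: int, shift: int, fake_q: int) -> int:
    # Without r < d, the operator may pick ANY quotient and solve for r:
    # r = x - fake_q * d still satisfies x = fake_q * d + r.
    d = 2 ** shift
    r = x - fake_q * d
    assert x == fake_q * d + r  # the only equation that is enforced
    return fake_q

x = 0xDEADBEEF
assert honest_shr(x, 4) == x >> 4
# The operator "proves" that x >> 4 equals 12345 instead:
assert malicious_shr(x, 4, 12345) == 12345
```

In the real circuit the remainder is a field element, so the missing check amounts to a range constraint relating `_rshift_r` to the shift constant.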
A constraint should be enforced to ensure that the remainder is less than the divisor.
Context
#0 - c4-pre-sort
2023-11-01T09:45:47Z
bytes032 marked the issue as primary issue
#1 - c4-pre-sort
2023-11-01T09:45:52Z
bytes032 marked the issue as sufficient quality report
#2 - miladpiri
2023-11-06T14:57:35Z
#3 - c4-sponsor
2023-11-06T14:57:48Z
miladpiri (sponsor) confirmed
#4 - c4-judge
2023-11-23T19:14:22Z
GalloDaSballo marked the issue as duplicate of #697
#5 - c4-judge
2023-11-23T19:14:46Z
GalloDaSballo changed the severity to 3 (High Risk)
#6 - c4-judge
2023-11-23T19:15:03Z
GalloDaSballo marked the issue as partial-25
#7 - c4-judge
2023-11-23T19:15:21Z
GalloDaSballo marked the issue as partial-50
#8 - GalloDaSballo
2023-11-23T19:15:32Z
Was not weaponized in contrast to primary
🌟 Selected for report: minhtrng
Also found by: BARW, HE1M, Koolex, rvierdiiev
0 USDC - $0.00
This finding highlights the discrepancy in gas limit handling between L1 and L2 processing, suggesting that adjustments may be needed to ensure consistent and correct gas limit calculations across both layers.
During the initiation of an L1 transaction, the `_l2GasLimit` provided by the user is subject to checks to ensure it covers both overhead and processing costs. The call flow for this process is as follows:
1. `Mailbox::requestL2Transaction` calls `Mailbox::_requestL2Transaction`.
2. `Mailbox::_requestL2Transaction` calls `Mailbox::_writePriorityOp`.
3. `Mailbox::_writePriorityOp` invokes `TransactionValidator::validateL1ToL2Transaction`.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Mailbox.sol#L240
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/libraries/TransactionValidator.sol#L18
In the `validateL1ToL2Transaction` function, the internal function `getTransactionBodyGasLimit` is used to calculate overhead costs, which are then checked against the provided `_l2GasLimit`. Similarly, in the same function, the internal function `getMinimalPriorityTransactionGasLimit` is called to compute processing costs, which are also compared against `_l2GasLimit`.
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/contracts/ethereum/contracts/zksync/libraries/TransactionValidator.sol#L115-L120
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/contracts/ethereum/contracts/zksync/libraries/TransactionValidator.sol#L36-L43
These calculations reveal that `_l2GasLimit` should ideally be greater than the sum of overhead and processing costs; however, the current implementation only checks each cost individually. A similar set of calculations is performed on L2 (bootloader) during the processing of L1 transactions, in `getGasLimitForTx`: the overhead costs are deducted from `totalGasLimit`, yielding `gasLimitForTx`, and `intrinsicOverhead` is subsequently subtracted from it.
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/bootloader/bootloader.yul#L1151
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/bootloader/bootloader.yul#L1157C17-L1157C31
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/bootloader/bootloader.yul#L1171
`gasUsedOnPreparation` is deducted from `gasLimitForTx`, and the remainder is used for the execution of the transaction.
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/bootloader/bootloader.yul#L911
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/bootloader/bootloader.yul#L986
In essence, during the processing of L1 transactions on L2, three parameters, namely `operatorOverheadForTransaction`, `intrinsicOverhead`, and `gasUsedOnPreparation`, are subtracted from the gas limit before the remaining gas is forwarded to the destination address. However, on L1, `_l2GasLimit` is only enforced to be greater than the overhead and processing costs individually.
To clarify further:
- The overhead cost enforced on L1 corresponds to `operatorOverheadForTransaction` on L2.
- The processing cost enforced on L1 corresponds to `intrinsicOverhead + gasUsedOnPreparation` on L2.

While they are not exactly equal due to challenges in maintaining equivalence, the practical implication is that on L1, the variable `l2GasForTxBody` should be used instead of `_transaction.gasLimit` in the code. This adjustment is necessary because, starting with the user-provided gas limit, the gas overhead is subtracted first; after this deduction, the remaining gas should be sufficient to cover the processing costs. However, the current implementation only requires that the user's provided gas limit exceed the overhead and the processing costs individually.
This results in a situation where the user's gas limit may exceed the overhead and processing costs separately but fall short of covering their sum. In such cases, in the bootloader, `gasLimitForTx` may be less than `gasUsedOnPreparation`, which could lead to the if-statement body not being executed.
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/bootloader/bootloader.yul#L908-L919
Consider this example with the following parameters:
- `totalGasLimit` = 300_000
- `numberOfFactoryDependencies` = 0
- `txEncodeLen` = 1000
- `gasPerPubdata` = 800

The calculated values on L2 are as follows:
- `intrinsicOverhead` = 237_557
- `requiredOverhead` (for zero `operatorOverheadForTransaction`) = 180_970

On L1, we have:
- `getMinimalPriorityTransactionGasLimit` = 243_884
- `batchOverheadForTransaction` = 176_686

Although `totalGasLimit` is greater than each of the four parameters individually, it is less than the sum of `getMinimalPriorityTransactionGasLimit` and `batchOverheadForTransaction` calculated on L1:

300_000 < 243_884 + 176_686
In the event that the operator's requested overhead matches the required overhead, the remaining gas will be insufficient to cover `intrinsicOverhead`. This discrepancy arises because, on L1, `_l2GasLimit` is not enforced to cover `getMinimalPriorityTransactionGasLimit + batchOverheadForTransaction`. In this example, if the operator specifies an overhead of 176_686, which matches the required overhead, `requiredOverhead` will also be 176_686 (it decreases when the operator requests a non-zero overhead). After subtracting this amount from `totalGasLimit`, only 123_314 remains, which is less than `intrinsicOverhead`.
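The arithmetic of the example can be checked directly. The sketch below (hypothetical variable names mirroring the report's identifiers) shows that each individual L1 bound passes while the sum they should jointly cover does not:

```python
total_gas_limit = 300_000
# L1-side values from the example
minimal_priority_tx_gas = 243_884   # getMinimalPriorityTransactionGasLimit
batch_overhead = 176_686            # batchOverheadForTransaction
# L2-side value from the example
intrinsic_overhead = 237_557

# The current L1 checks pass, because each bound is tested individually:
assert total_gas_limit > minimal_priority_tx_gas
assert total_gas_limit > batch_overhead
# ...but the limit cannot cover both at once:
assert total_gas_limit < minimal_priority_tx_gas + batch_overhead

# On L2, after the operator takes an overhead equal to batch_overhead,
# the remainder cannot pay the intrinsic overhead:
remaining = total_gas_limit - batch_overhead   # 123_314
assert remaining < intrinsic_overhead
```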
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/bootloader/bootloader.yul#L1604
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/bootloader/bootloader.yul#L1168
On line 41 of TransactionValidator.sol, it is recommended to use `l2GasForTxBody` instead of `_transaction.gasLimit`:
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/contracts/ethereum/contracts/zksync/libraries/TransactionValidator.sol#L41
Context
#0 - c4-pre-sort
2023-11-01T18:12:09Z
bytes032 marked the issue as duplicate of #975
#1 - c4-judge
2023-11-28T15:54:14Z
GalloDaSballo marked the issue as satisfactory
🌟 Selected for report: zero-idea
Also found by: 0x1337, 0xTheC0der, 0xstochasticparrot, Audittens, HE1M, Jeiwan, erebus, gumgumzum, leviticus1129, lsaudit, quarkslab, rvierdiiev
0 USDC - $0.00
Protocols relying on zkSync Era's accurate replication of EVM opcodes like `extcodehash` and `extcodesize` could encounter unexpected outcomes within their smart contracts. This is due to the omission of an update to the `CURRENT_MAX_PRECOMPILE_ADDRESS` constant, which results in the two newly added precompiled contracts not being recognized as such. Consequently, the behavior of these contracts in terms of code size and code hash will not align with the expected precompile contract behavior.
Within the `Constants` contract, the constant `CURRENT_MAX_PRECOMPILE_ADDRESS` serves the purpose of tracking the current maximum precompile address.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/Constants.sol#L35
Previously, there were only two precompile contracts implemented, and as a result, this constant was set to the address of the last precompile contract, namely `SHA256_SYSTEM_CONTRACT = address(0x02)`.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/Constants.sol#L26C18-L26C41
However, with the introduction of two additional precompile contracts, it has become necessary to update this constant to the address of the last precompile contract, which is `ECMUL_SYSTEM_CONTRACT = address(0x07)`, as indicated in the comment: "Important! So the constant should be updated if more precompiles are deployed."
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/Constants.sol#L34C5-L34C83
The lack of an update to this constant affects two functions, namely `getCodeHash` and `getCodeSize`, which simulate the behavior of the EVM opcodes `extcodehash` and `extcodesize`. The issues manifest in the following manner:
In the case of `getCodeHash`, the `extcodehash` of the precompile contract `ECMUL_SYSTEM_CONTRACT` incorrectly evaluates to `getRawCodeHash(address(0x07))`, which is not the expected result. It should return `keccak("")`.
In the case of `getCodeSize`, the `extcodesize` of the precompile contract `ECMUL_SYSTEM_CONTRACT` is inaccurately computed as `Utils.bytecodeLenInBytes(getRawCodeHash(address(0x07)))`. This, too, deviates from the intended behavior, which should yield a result of zero.
These same issues apply to the precompile contract `ECADD_SYSTEM_CONTRACT = address(0x06)` as well.
It's important to note that these discrepancies can impact protocols that rely on the accurate simulation of `extcodehash` and `extcodesize` as stated by the zkSync protocol. Failure to update this constant may lead to unanticipated behavior in smart contracts and could result in security and operational challenges for protocols.
The code should be revised as:

```solidity
uint256 constant CURRENT_MAX_PRECOMPILE_ADDRESS = uint256(uint160(ECMUL_SYSTEM_CONTRACT));
```
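A minimal model of the dispatch inside `getCodeHash` can show how the stale constant misclassifies the new precompiles (the addresses and the `keccak256("")` value are real; the helper and its fall-through value are illustrative stand-ins, not the contract's actual code path):

```python
EMPTY_CODE_HASH = bytes.fromhex(
    "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470"
)  # keccak256("")

SHA256 = 0x02        # old CURRENT_MAX_PRECOMPILE_ADDRESS
ECADD, ECMUL = 0x06, 0x07

def code_hash(addr: int, max_precompile: int) -> bytes:
    # Precompiles are treated as accounts with empty code.
    if addr <= max_precompile:
        return EMPTY_CODE_HASH
    # Otherwise the raw hash from AccountCodeStorage is used (stand-in value).
    return b"<raw code hash>"

# With the stale constant, the new precompiles fall through to the raw hash:
assert code_hash(ECMUL, max_precompile=SHA256) != EMPTY_CODE_HASH
# After updating the constant to ECMUL (0x07), both behave as precompiles:
assert code_hash(ECADD, max_precompile=ECMUL) == EMPTY_CODE_HASH
assert code_hash(ECMUL, max_precompile=ECMUL) == EMPTY_CODE_HASH
```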
Context
#0 - c4-pre-sort
2023-10-31T06:50:45Z
bytes032 marked the issue as primary issue
#1 - c4-pre-sort
2023-11-02T16:25:31Z
141345 marked the issue as sufficient quality report
#2 - miladpiri
2023-11-06T14:27:33Z
Valid finding. Medium is fair.
#3 - c4-sponsor
2023-11-06T14:27:40Z
miladpiri (sponsor) confirmed
#4 - c4-judge
2023-11-23T19:32:14Z
GalloDaSballo marked issue #772 as primary and marked this issue as a duplicate of 772
#5 - c4-judge
2023-11-28T15:58:32Z
GalloDaSballo marked the issue as satisfactory
🌟 Selected for report: OffsideLabs
0 USDC - $0.00
The presence of legacy transactions in zkSync Era, which are not EIP-155 compatible (not including the chain ID), introduces a potential replay attack vulnerability. Malicious users could exploit this vulnerability by replaying legacy transactions executed on Ethereum, leading to unauthorized fund transfers.
During the processing of L2 transactions, zkSync Era saves both the transaction hash and the suggested signed transaction hash in memory as part of the validation process. This is accomplished by invoking the `saveTxHashes` function, which, in turn, calls the `getTransactionHashes` function within `BootloaderUtilities`.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/bootloader/bootloader.yul#L1190
https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/bootloader/bootloader.yul#L2056
https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/contracts/BootloaderUtilities.sol#L24
Depending on the type of transaction, zkSync Era calculates the transaction hash, which supports various transaction types, as detailed in the project documentation. https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/System%20contracts%20bootloader%20description.md#transaction-types--their-validation
One noteworthy concern is that zkSync Era also accommodates legacy transactions (type 0) that are not EIP-155 compatible. In these transactions, if the `reserved[0]` value is zero, it signifies that the chain ID is not included.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/contracts/libraries/TransactionHelper.sol#L57
This distinction becomes evident when encoding the legacy transaction hash, where a zero `reserved[0]` results in `vInt` not including `block.chainid`:

```solidity
if (_transaction.reserved[0] != 0) {
    vInt += 8 + block.chainid * 2;
}
```
This compatibility gap opens up the possibility of a replay attack for legacy transactions that occurred before the introduction of EIP-155, at block number 2,675,000. https://eips.ethereum.org/EIPS/eip-155
This vulnerability can lead to different scenarios:
In practice, a malicious actor could replicate these legacy transactions, originally executed on the Ethereum network, within zkSync Era. If the `Transaction.from` account maintains any funds on zkSync Era, these funds would be transferred to the destination address specified in `Transaction.to` on behalf of `Transaction.from`.
It is advisable to require that legacy transactions have a non-zero value for `reserved[0]`.
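The role of `reserved[0]` can be sketched with the `v`-value arithmetic the snippet above implements: pre-EIP-155 signatures use `v = 27 + parity`, and EIP-155 adds `8 + 2 * chain_id`, giving `v = 35 + 2 * chain_id + parity`. A toy model (illustrative, not the contract code):

```python
def legacy_v(parity: int, chain_id: int, eip155: bool) -> int:
    # Mirrors the TransactionHelper logic: start from the pre-EIP-155 value...
    v = 27 + parity
    # ...and only mix in the chain ID when reserved[0] is non-zero.
    if eip155:
        v += 8 + chain_id * 2
    return v

# EIP-155 binds the signature to one chain (324 is zkSync Era's chain ID):
assert legacy_v(0, chain_id=324, eip155=True) == 35 + 2 * 324
# Without EIP-155, v carries no chain information, so a signature made for
# Ethereum mainnet (chain 1) is byte-identical for zkSync Era (chain 324):
assert legacy_v(0, chain_id=1, eip155=False) == legacy_v(0, chain_id=324, eip155=False)
```

Because the signed payload of a pre-EIP-155 transaction is identical on every chain, any such transaction mined on Ethereum can be resubmitted verbatim on zkSync Era.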
Context
#0 - c4-pre-sort
2023-11-01T05:35:05Z
bytes032 marked the issue as sufficient quality report
#1 - c4-pre-sort
2023-11-01T05:35:11Z
bytes032 marked the issue as duplicate of #882
#2 - c4-judge
2023-11-23T19:22:44Z
GalloDaSballo changed the severity to QA (Quality Assurance)
#3 - HE1M
2023-12-02T12:03:18Z
@GalloDaSballo Thanks for your effort.
I believe the severity of this issue should be classified as high due to the potential replay attack on pre-EIP-155 transactions, leading to the following scenarios:
Addresses on any EVM chains with transactions before block 2,675,000 can have those transactions replayed on zkSync Era. If these addresses happen to have a non-zero balance on zkSync Era, there is a possibility of unintentional fund transfers to other addresses. This impact is significant as it results in fund loss.
If an address has transactions before block 2,675,000 on any EVM chains and intends to initiate a transaction on zkSync Era, an attacker could replay those transactions on zkSync Era, essentially front-running them and causing an increase in their nonce. Consequently, their transactions on zkSync Era will fail due to non-matching nonces. This impact is considered low, resembling a grieving attack.
For addresses on any EVM chains with transactions before block 2,675,000 that call a specific contract function on an address, applying a replay attack can unintentionally execute those operations again (depending on the callee address implementation). The impact can be significant, contingent on the context of the transaction.
In summary, the impact of this issue should not be relegated to a mere quality assurance concern, given the potential risks and consequences outlined above.
#4 - c4-judge
2023-12-05T12:51:23Z
This previously downgraded issue has been upgraded by GalloDaSballo
#5 - GalloDaSballo
2023-12-05T13:19:40Z
I agree
#6 - c4-judge
2023-12-05T13:19:45Z
GalloDaSballo marked the issue as satisfactory
#7 - c4-judge
2024-01-11T15:36:45Z
GalloDaSballo changed the severity to 2 (Med Risk)
🌟 Selected for report: HE1M
Also found by: 0xsomeone, AkshaySrivastav, Aymen0909, J4de, Koolex, Mohandes, bin2chen, brgltd, cccz, hals, ladboy233, max10afternoon, peanuts, rvierdiiev, shealtielanz, tsvetanovv, zzzitron
0 USDC - $0.00
Users' Failed Deposits: If a token has no deposit limitation initially and a user's deposit transaction fails, they may not be able to claim their failed deposits later if a deposit limit is imposed on that token.
Deposit Limit Bypass: Malicious users can exploit the deposit limit system by making failed deposits before the deposit limit is introduced. By claiming these failed deposits, they can reset their total deposited amount, allowing them to exceed the deposit cap once it is enforced.
When depositing an ERC20 token into the `L1ERC20Bridge`, the `_verifyDepositLimit` function plays a pivotal role. Its primary purpose is to check whether a deposit limit is in place for the given token and, if such a cap exists, whether the user's cumulative deposit amount surpasses it. The user's total deposited amount is tracked in the `totalDepositedAmountPerUser` mapping.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L188
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L348
This same mechanism is applied when a user seeks to reclaim a failed deposit. In this scenario, the `claimFailedDeposit` function is invoked, which once again triggers `_verifyDepositLimit`; in this context, however, the objective is to decrease the total deposited amount in the `totalDepositedAmountPerUser` mapping to account for the claimed failed transaction.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L278
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L345
This implementation can give rise to significant issues, which can be illustrated through the following scenarios:
Scenario 1: Suppose there is no deposit limitation imposed on tokenX, indicated by `limitData.depositLimitation = false`. Alice (an honest user) initiates a deposit of 100 tokenX, which unfortunately fails. At a later point, the owner decides to enforce a deposit limit on this token via the `setDepositLimit` function in the `AllowList` contract. Now, when Alice seeks to claim her previously failed deposit, the `_verifyDepositLimit` function is invoked and its if-condition is met; however, deducting the amount 100 from `totalDepositedAmountPerUser[tokenX][Alice]` (which equals 0) causes an underflow and the transaction reverts. Alice therefore cannot claim her failed deposit in this scenario.
In summary, if there was no deposit limit set for a token initially, and users experienced failed deposit transactions for that token, they encounter difficulties in claiming those failed deposits when a deposit limit is subsequently enforced.
Scenario 2: Consider a scenario in which there is no deposit limitation for tokenY initially. However, Bob (a malicious user) monitors the owner's transaction that is going to set a deposit limit for tokenY, specifying `tokenDeposit[tokenY].depositCap = 100`. Immediately, Bob deposits 100 tokenY three times, a total of 300 tokenY, deliberately ensuring that these transactions fail on L2 due to a low `_l2TxGasLimit`. Subsequently, the owner's transaction to establish a deposit limit of 100 for tokenY is executed. Now, Bob deposits another 100 tokenY, resulting in `totalDepositedAmountPerUser[tokenY][Bob]` being set to 100.
However, Bob intends to deposit yet another 100 tokenY. The condition `require(totalDepositedAmountPerUser[_l1Token][_depositor] + _amount <= limitData.depositCap, "d1");` stipulates that `totalDepositedAmountPerUser[tokenY][Bob] + 100` must not exceed the deposit cap of 100, so Bob's attempt to deposit an additional 100 tokenY is denied.
Nonetheless, Bob realizes he can exploit a potential vulnerability. He decides to claim one of his three failed transactions that occurred prior to the deposit cap set-up. As a result, `totalDepositedAmountPerUser[tokenY][Bob]` decreases by 100, effectively resetting it to 0. Now Bob can once again deposit 100 tokenY, despite having previously reached the imposed deposit limit. Bob continues this process, claiming a failed deposit after each attempt, thus resetting `totalDepositedAmountPerUser[tokenY][Bob]` and allowing him to deposit 400 tokenY in total, exceeding the deposit limit of 100.
In summary, this implementation opens the door for malicious users to circumvent deposit limits by arranging failed deposits prior to the imposition of such limits. Subsequently, by claiming these failed deposits, they can effectively increase their deposit limit by resetting `totalDepositedAmountPerUser`.
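Scenario 2 can be simulated with a toy model of the current accounting (only the bookkeeping, not the bridge itself; the variables mirror the contract's names, the rest is illustrative):

```python
deposit_cap = 100
total_deposited = 0       # totalDepositedAmountPerUser[tokenY][Bob]
bridged = 0               # tokens Bob actually moved through the bridge

# Before the cap exists, _verifyDepositLimit records nothing, so Bob's
# three deliberately-failed 100-token deposits leave total_deposited at 0
# while remaining claimable later.
claimable_failed = [100, 100, 100]

def deposit(amount):      # cap is now active
    global total_deposited, bridged
    assert total_deposited + amount <= deposit_cap, "d1"
    total_deposited += amount
    bridged += amount

def claim_failed(amount):
    global total_deposited
    # Decrements unconditionally; for Alice (total = 0) this is the
    # underflow revert of Scenario 1.
    total_deposited -= amount

deposit(100)                              # cap reached
for _ in range(3):
    claim_failed(claimable_failed.pop())  # resets the counter to 0...
    deposit(100)                          # ...so another 100 goes through

assert bridged == 400                     # 4x the 100-token cap
assert total_deposited <= deposit_cap     # yet the check never fired
```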
The function _verifyDepositLimit
should be revised such that it tracks the deposited amount of users even if there is no deposit limitation in place.
function _verifyDepositLimit(address _l1Token, address _depositor, uint256 _amount, bool _claiming) internal { IAllowList.Deposit memory limitData = IAllowList(allowList).getTokenDepositLimitData(_l1Token); if (_claiming) { totalDepositedAmountPerUser[_l1Token][_depositor] -= _amount; } else { totalDepositedAmountPerUser[_l1Token][_depositor] += _amount; if(limitData.depositLimitation){ require(totalDepositedAmountPerUser[_l1Token][_depositor] <= limitData.depositCap, "d1"); } } }
Context
#0 - c4-pre-sort
2023-10-31T14:55:19Z
bytes032 marked the issue as primary issue
#1 - c4-pre-sort
2023-10-31T14:55:24Z
bytes032 marked the issue as sufficient quality report
#2 - c4-sponsor
2023-11-06T10:23:49Z
miladpiri marked the issue as disagree with severity
#3 - miladpiri
2023-11-06T10:28:16Z
This is an issue if we switch the deposit limitation feature on/off. But, this is not going to happen as the deposit limitation is only active temporarily during alpha version, later it will be deactivated (removed fully). So, medium severity is fair.
#4 - c4-sponsor
2023-11-06T11:40:51Z
miladpiri (sponsor) confirmed
#5 - c4-judge
2023-11-24T19:55:33Z
GalloDaSballo changed the severity to 2 (Med Risk)
#6 - c4-judge
2023-11-24T20:00:42Z
GalloDaSballo marked the issue as selected for report
#7 - GalloDaSballo
2023-11-24T20:01:38Z
Primary because, while other findings have a Coded POC, this one captures an additional risk
🌟 Selected for report: HE1M
0 USDC - $0.00
The timestamp constraints and batch creation limitations in zkSync have several significant impacts on the platform's functionality and operations:
Limited Block Inclusion: The constraints on timestamp differences between batches and their respective blocks restrict the number of blocks that can be included in each batch. This leads to smaller batch sizes, making it challenging to efficiently process transactions and utilize the available block space.
Responsiveness Issue: Due to these constraints, if two batches are intended to be committed on L1 at the same time (in the same Ethereum block), it's not allowed. This can create bottlenecks during batch commitments, especially when zkSync experiences high transaction volumes leading to an increased number of pending transactions and potentially longer finality times, affecting zkSync's overall responsiveness.
Operator-Induced Limitations: These constraints can be exploited by the operator by setting timestamps much further in the future to intentionally limit the number of blocks in batches. This could lead to various operational issues and inefficiencies.
Upon initiating transaction processing in the bootloader, the first step involves creating a new batch using the `setNewBatch` function in the `SystemContext` contract.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/bootloader/bootloader.yul#L3675
https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/contracts/SystemContext.sol#L416
This operation enforces that the timestamp of the new batch must be greater than the timestamp of the previous batch and the timestamp of the last block in the previous batch. https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/contracts/SystemContext.sol#L423 https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/SystemContext.sol#L402
Subsequently, when processing a specific transaction, an L2 block is designated by invoking the `setL2Block` function within the `SystemContext` contract. This action ensures that the timestamp of the block is not lower than the timestamp of the current batch.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/bootloader/bootloader.yul#L559
https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/contracts/SystemContext.sol#L323
Once the processing of all transactions is completed, an additional fictive block is generated, serving as the final block within the batch. https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/bootloader/bootloader.yul#L3812
Finally, the timestamp data is disclosed to L1 for verification. https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/bootloader/bootloader.yul#L3816
On the L1 side, during batch commitment, the `_verifyBatchTimestamp` function is called to confirm the accuracy of the timestamps. It enforces that the batch's timestamp is later than that of the previous batch and not less than `block.timestamp - COMMIT_TIMESTAMP_NOT_OLDER`. Additionally, it ensures that the timestamp of the last block in the batch is not greater than `block.timestamp + COMMIT_TIMESTAMP_APPROXIMATION_DELTA`.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L85
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L93
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L94
A critical concern arises when the operator, during the creation of a new batch on L2, sets its timestamp in close proximity to `block.timestamp + COMMIT_TIMESTAMP_APPROXIMATION_DELTA`. To illustrate this point, consider the following example:
Imagine a new batch, numbered 1000, is created with a timestamp of `X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA - 1`. This batch includes a block with a timestamp of `X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA - 1` and a fictive block with a timestamp of `X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA`. Importantly, these values adhere to all the timestamp requirements outlined in the `SystemContext` contract, as explained earlier.
When batch 1000 is committed on L1 at a specific time `blockTimestamp1000` (the time at which batch 1000 is committed on L1), the following requirement is met during timestamp verification (assuming `X <= blockTimestamp1000`):

```solidity
require(lastL2BlockTimestamp <= block.timestamp + COMMIT_TIMESTAMP_APPROXIMATION_DELTA, "h2");
```

Because:

X <= blockTimestamp1000 ==> X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA <= blockTimestamp1000 + COMMIT_TIMESTAMP_APPROXIMATION_DELTA
This results in a successful proof and execution of batch 1000. Now, when a new batch, 1001, is to be created on L2, its timestamp must exceed `X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA`, because a batch's timestamp must surpass the timestamp of the last block in the previous batch.
Suppose the timestamp of batch 1001 is `X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA + Y`, and the last block within this batch carries a timestamp of `X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA + Y + K`. To summarize:
Batch 1000:
- Batch timestamp: `X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA - 1`
- Last (fictive) block timestamp: `X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA`
- Committed on L1 at: `blockTimestamp1000`, with `X <= blockTimestamp1000`

Batch 1001:
- Batch timestamp: `X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA + Y`
- Last (fictive) block timestamp: `X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA + Y + K`
- Committed on L1 at: `blockTimestamp1001`
During the timestamp verification of batch 1001 on L1, to meet the requirement:

```solidity
require(lastL2BlockTimestamp <= block.timestamp + COMMIT_TIMESTAMP_APPROXIMATION_DELTA, "h2");
```

We must have:

X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA + Y + K <= blockTimestamp1001 + COMMIT_TIMESTAMP_APPROXIMATION_DELTA

Simplifying the condition yields:

Y + K <= blockTimestamp1001 - X
Here, `X` represents the time set by the operator, where `X <= blockTimestamp1000` as explained earlier. In the worst case, `X = blockTimestamp1000`, and the condition becomes:

Y + K <= blockTimestamp1001 - blockTimestamp1000

The variable `Y` signifies the amount by which the timestamp of batch 1001 must exceed the timestamp of the last block in batch 1000; its minimum value is 1 second. Assuming `Y` equals 1 second (note that if it is higher, the situation becomes even worse), the condition simplifies to:

K <= blockTimestamp1001 - blockTimestamp1000 - 1
This condition imposes a critical limitation on the number of blocks that can be included in a batch. As a reminder, the timestamp of the first block in batch 1001 is `X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA + Y`, while the timestamp of the last (fictive) block in this batch is `X + COMMIT_TIMESTAMP_APPROXIMATION_DELTA + Y + K`. The difference between these two timestamps equals `K`, and since each block's timestamp must be greater than the previous block's, `K` defines the maximum number of blocks allowed in a batch.
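The bound `K <= blockTimestamp1001 - blockTimestamp1000 - 1` translates directly into a cap on blocks per batch: each block needs a strictly larger timestamp, so a batch spanning `K` seconds fits at most `K` blocks after the first. A sketch of the arithmetic under the report's worst-case assumptions (`X = blockTimestamp1000`, `Y = 1`; the function name is illustrative):

```python
def max_blocks_in_batch(commit_interval_seconds: int, y: int = 1) -> int:
    # K <= blockTimestamp1001 - blockTimestamp1000 - Y, with the worst case
    # X = blockTimestamp1000. Timestamps strictly increase block to block,
    # so K bounds the number of additional blocks in the batch.
    return commit_interval_seconds - y

# Batches committed 30 seconds apart admit at most 29 blocks (the report's figure):
assert max_blocks_in_batch(30) == 29
# Batches committed in the same Ethereum block are impossible: K <= -1.
assert max_blocks_in_batch(0) < 0
```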
This condition has several implications. If both batches 1000 and 1001 are to be committed on L1 at the same time, i.e. `blockTimestamp1000 = blockTimestamp1001` (in the same Ethereum block), it is not allowed, as `K <= -1`.
Examining the zkSync Era Explorer, it is evident that batches are frequently committed in the same Ethereum block. For example, in Ethereum block number 18383261, the function `commitBlocks` is called 19 times, with block positions ranging from 0 to 18. The following two transactions show the first and the last `commitBlocks` calls in this Ethereum block.
https://etherscan.io/tx/0xbfec43599bb73af95eaf4ac9ff83a62cdbe084382dd6f5d12bc8e817ce3574e5
https://etherscan.io/tx/0x1826d459ce7f2ab233374595569a13c4098e8e1eeb26517a98e42d9b5aab7374
The explorer demonstrates that the interval between committing batches is relatively short. For instance, if the interval is 30 seconds, a maximum of 29 blocks can be included in a batch.
It's important to emphasize that batch 1000 has already undergone commitment, proof, and execution processes, and once these steps are completed, they are irreversible. Therefore, this issue will persist within the system.
As mentioned in the document, L2 blocks are employed to reduce the time required for soft confirmation. However, due to the vulnerability explained earlier, there exists a stringent constraint on the number of blocks, resulting in an extended soft confirmation duration.
L2 blocks were created for fast soft confirmation on wallets and block explorer. For example, MetaMask shows transactions as confirmed only after the block in which transaction execution was mined. So if the user needs to wait for the batch confirmation it would take at least minutes (for soft confirmation) and hours for full confirmation which is very bad UX. But API could return soft confirmation much earlier through L2 blocks. https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/Batches%20%26%20L2%20blocks%20on%20zkSync.md#motivation
One potential solution is to commit batches on L1 with a longer delay to allow for more blocks to be included in batches. However, this approach may lead to other issues, such as an increased number of pending transactions and significantly extended finality.
Addressing this issue is not straightforward, and it would necessitate an upgrade of the Executor
facet and a redesign of the timestamp verification process.
In summary, if the operator sets the timestamp of a batch too far into the future while still complying with L1 requirements, it can restrict the number of blocks that can be included in batches, resulting in a variety of challenges.
It is recommended that the stricter timestamp constraint currently enforced in the _verifyBatchTimestamp
function on L1 also be applied on L2. This will help prevent the creation and submission to L1 of batches with timestamps set in the distant future.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L94
Context
#0 - c4-pre-sort
2023-10-31T11:56:05Z
bytes032 marked the issue as duplicate of #600
#1 - c4-pre-sort
2023-10-31T11:56:09Z
bytes032 marked the issue as sufficient quality report
#2 - c4-judge
2023-11-26T09:39:44Z
GalloDaSballo changed the severity to 2 (Med Risk)
#3 - c4-judge
2023-11-28T15:50:25Z
GalloDaSballo marked the issue as satisfactory
#4 - HE1M
2023-12-03T13:02:04Z
@GalloDaSballo thank you for your judging,
This report is not a complete duplication of issue-600; only the root cause is similar, but the described impacts differ. Concerning this issue:
If the timestamp of a batch is set to its maximum valid value (which is block.timestamp + COMMIT_TIMESTAMP_APPROXIMATION_DELTA
, in this case, block.timestamp + 365 days
), it limits the speed of block and batch creation. Given that L2_TX_MAX_GAS_LIMIT
is equal to 80,000,000, and the maximum computation gas per batch is equal to 2^32 (explained here and set here), roughly 53 transactions (calculated as 2^32 / 80,000,000
), each consuming the maximum 80,000,000 gas, will exhaust the gas capacity of a batch. Consequently, the batch must be sealed, and another batch needs to be created to include additional transactions. If there is a high volume of transactions on the network, numerous batches must be created and committed on L1. Since the timestamp of the batch is set to the current timestamp plus 365 days, committing batches in a single L1 transaction becomes impossible. This is due to each batch's timestamp needing to be higher than the previous batch's timestamp, requiring the validator to wait between each batch commitment for the timestamp on L1 to elapse, thus passing this check. This scenario can lead to several issues:
Firstly, as indicated in the documentation, the value of 2^32 is arbitrary to prevent the state keeper from taking too much time.
The maximum number of computation gas per batch. This is the maximal number of gas that can be spent within a batch. This constant is rather arbitrary and is needed to prevent transactions from taking too much time from the state keeper. It can not be larger than the hard limit of 2^32 of gas for VM.
The operator may also seal the batch earlier as mentioned here. While this speeds up batch commitment on L1, the timestamp of the batch being set to the far future necessitates a delay between each commitment for the timestamp on L1 to elapse.
Secondly, the assumption so far was that each batch consists of one block containing all transactions. However, the primary objective of having multiple blocks in a batch is to enhance the speed of soft confirmation, as explained here:
L2 blocks were created for fast soft confirmation on wallets and block explorer. For example, MetaMask shows transactions as confirmed only after the block in which transaction execution was mined. So if the user needs to wait for the batch confirmation it would take at least minutes (for soft confirmation) and hours for full confirmation which is very bad UX. But API could return soft confirmation much earlier through L2 blocks.
If the operator includes multiple blocks in a batch, it becomes limited on the L1 side during timestamp verification. Suppose batch 100 has a timestamp of timestamp of the to-be-committed batch 100 + 365 days
and has 20 blocks. The 20th block must have a timestamp of at least timestamp of the to-be-committed batch 100 + 365 days + 20 seconds
. When the validator commits this batch, it will be reverted here because block.timestamp + 365 days + 20 seconds > block.timestamp + 365 days
. This indicates that having multiple blocks in a batch while the timestamp of the batch is set to the far future (to the edge of acceptable timestamp) will limit the frequency of batch commitment on L1 and restrict the number of blocks in a batch. In such a case, the validator should wait between each batch commitment on L1, potentially causing slow finality, impacting third parties dependent on that.
Summary:
Setting the timestamp of a batch to the far future will limit the speed of block and batch creation:
- Including multiple blocks in a batch will be restricted during timestamp verification on L1 since the timestamp of the batch is already at the edge of the accepted timestamp.
- Having only one block in a batch (to bypass the limitation mentioned above) contradicts the goal of having a block for fast finality.
Furthermore, in the case of a high volume of transactions on the network, this issue will be a significant limiting factor. If such a batch with a timestamp in the far future is executed, this limitation will persist in the protocol, as an executed batch cannot be reverted.
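The capacity arithmetic in the comment above can be checked directly (a quick sanity check; the constants are those quoted in the report):

```python
# Constants as quoted above (illustrative; see the linked docs for the sources).
MAX_COMPUTATION_GAS_PER_BATCH = 2**32   # hard VM limit on computation gas per batch
L2_TX_MAX_GAS_LIMIT = 80_000_000        # maximum gas a single L2 transaction may use

# Number of maximum-gas transactions that fit in one batch before it must be sealed.
txs_per_batch = MAX_COMPUTATION_GAS_PER_BATCH // L2_TX_MAX_GAS_LIMIT
print(txs_per_batch)  # 53
```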
#5 - 0xunforgiven
2023-12-03T14:36:38Z
hey @HE1M, @GalloDaSballo. Thanks for judging.
I want to add that the underlying cause of Issue #316 is a security check that restricts the operator from submitting batches with timestamps far in the future (ZKbatch.timestamp < Ethereum.timestamp + DELTA
). This issue persists regardless of the DELTA value, even if set to zero. While this vulnerability exists, there are mitigating factors that diminish its severity:
- The upper bound, Ethereum.timestamp + DELTA
, increases by 12 seconds with each Ethereum block.
- Batches can be created with timestamps Ethereum.timestamp + DELTA
, Ethereum.timestamp + DELTA + 1
, Ethereum.timestamp + DELTA + 2
, ... Ethereum.timestamp + DELTA + 11
and submitted to Ethereum. These timestamps will pass the batch.timestamp < Ethereum.timestamp + DELTA
check in the next Ethereum blocks due to the 12-second increase in Ethereum's timestamp.
In conclusion, the security check ZKbatch.timestamp < Ethereum.timestamp + DELTA
does not impact the average of 12 batch submissions per Ethereum block, even if the operator sets batch timestamps to Ethereum.timestamp + DELTA
.
#6 - miladpiri
2023-12-04T10:43:02Z
@GalloDaSballo thank you for your judging,
This report is not a complete duplication of issue-600; only the root cause is similar, but the described impacts differ. Concerning this issue:
If the timestamp of a batch is set to its maximum valid value (which is
block.timestamp + COMMIT_TIMESTAMP_APPROXIMATION_DELTA
, in this case,block.timestamp + 365 days
), it results in limitations on the speed of blocks and batches creation. Given thatL2_TX_MAX_GAS_LIMIT
is equal to 80,000,000, and the maximum computation gas per batch is equal to 2^32 (explained here and set here), roughly 53 transactions (calculated as2^32 / 80,000,000
), each consuming the maximum 80,000,000 gas, will exhaust the gas capacity of a batch. Consequently, the batch must be sealed, and another batch needs to be created to include additional transactions. If there is a high volume of transactions on the network, numerous batches must be created and committed on L1. Since the timestamp of the batch is set to the current timestamp plus 365 days, committing batches in a single L1 transaction becomes impossible. This is due to each batch's timestamp needing to be higher than the previous batch's timestamp, requiring the validator to wait between each batch commitment for the timestamp on L1 to elapse, thus passing this check. This scenario can lead to several issues:
Firstly, as indicated in the documentation, the value of 2^32 is arbitrary to prevent the state keeper from taking too much time.
The maximum number of computation gas per batch. This is the maximal number of gas that can be spent within a batch. This constant is rather arbitrary and is needed to prevent transactions from taking too much time from the state keeper. It can not be larger than the hard limit of 2^32 of gas for VM.
The operator may also seal the batch earlier as mentioned here. While this speeds up batch commitment on L1, the timestamp of the batch being set to the far future necessitates a delay between each commitment for the timestamp on L1 to elapse.
Secondly, the assumption so far was that each batch consists of one block containing all transactions. However, the primary objective of having multiple blocks in a batch is to enhance the speed of soft confirmation, as explained here:
L2 blocks were created for fast soft confirmation on wallets and block explorer. For example, MetaMask shows transactions as confirmed only after the block in which transaction execution was mined. So if the user needs to wait for the batch confirmation it would take at least minutes (for soft confirmation) and hours for full confirmation which is very bad UX. But API could return soft confirmation much earlier through L2 blocks.
If the operator includes multiple blocks in a batch, it becomes limited on the L1 side during timestamp verification. Suppose batch 100 has a timestamp of
timestamp of the to-be-committed batch 100 + 365 days
and has 20 blocks. The 20th block must have a timestamp of at leasttimestamp of the to-be-committed batch 100 + 365 days + 20 seconds
. When the validator commits this batch, it will be reverted here becauseblock.timestamp + 365 days + 20 seconds > block.timestamp + 365 days
. This indicates that having multiple blocks in a batch while the timestamp of the batch is set to the far future (to the edge of acceptable timestamp) will limit the frequency of batch commitment on L1 and restrict the number of blocks in a batch. In such a case, the validator should wait between each batch commitment on L1, potentially causing slow finality, impacting third parties dependent on that.Summary:
Setting the timestamp of a batch to the far future will limit the speed of blocks and batches creation:
- Including multiple blocks in a batch will be restricted during timestamp verification on L1 since the timestamp of the batch is already at the edge of the accepted timestamp.
- Having only one block in a batch (to bypass the limitation mentioned above) contradicts the goal of having a block for fast finality.
Furthermore, in the case of a high volume of transactions on the network, this issue will be a significant limiting factor. If such a batch with a timestamp in the far future is executed, this limitation will persist in the protocol, as an executed batch cannot be reverted.
Agree.
#7 - c4-judge
2023-12-10T08:48:52Z
GalloDaSballo marked the issue as selected for report
#8 - GalloDaSballo
2023-12-10T08:49:40Z
The finding highlights a way for batch creation to be interrupted; this pertains to any configuration and is something that the operator could do under specific circumstances, either willingly or by mistake.
Due to the above, I believe this report should be considered primary and of Medium Severity
#9 - c4-sponsor
2024-02-26T14:50:19Z
vladbochok marked the issue as disagree with severity
#10 - vladbochok
2024-02-26T16:56:02Z
The issue is highly theoretical and has a low impact in principle.
Firstly, the validator would have to submit a batch with a timestamp in the future (+365 days). Even though the validator is not trusted, this already has a huge impact on L2 protocols (all time-sensitive projects such as DeFi or oracles).
And then the only impact is that the validator can't commit
batches during times of high load. Please note that the validator can still create batches and generate zk proofs for them; it just can't submit the commitBatches
transaction at that moment. So the validator can wait a couple of minutes and then submit all batches. Moreover, there is currently a 21-hour delay between batch commits and batch execution. Due to this delay, minutes of delay in commits don't affect the time of execution, not to mention that proof generation takes minutes and sequencing under such high load is questionable.
#11 - thebrittfactor
2024-02-28T16:30:55Z
For transparency, the sponsor has notated via private discord channel that they are disputing this finding.
0 USDC - $0.00
The Governance contract's exclusive reliance on the owner's authority to schedule upgrades, combined with the requirement for the security council to execute them, presents significant challenges when the owner becomes compromised, inactive, or behaves maliciously. This structure hampers the ability to address critical protocol bugs promptly, change ownership, designate a new governor, and secure scheduled upgrades against potential cancellation.
The structure of the Governance
contract is designed to ensure that only the owner can initiate the scheduling of an upgrade (whether transparent or shadow type), and that only the security council has the authority to execute such an upgrade instantly. Consequently, if the owner is compromised or acts maliciously, they cannot schedule and execute an upgrade. Similarly, if the security council is compromised or malicious, they too lack the capability to initiate and execute an upgrade.
The challenge arises when the owner is compromised, inactive, or behaves maliciously, leading to potential scenarios:
In the event of a critical protocol bug that requires an urgent fix, the inability to schedule an upgrade presents an obstacle to implementing a swift solution, given that the owner's authorization is compromised.
The process of changing the owner is feasible, but it relies on the owner's own action, and if the owner is compromised, this approach becomes impractical. https://github.com/OpenZeppelin/openzeppelin-contracts/blob/v4.9.2/contracts/access/Ownable2Step.sol#L35
The option to change the governor or freezing the Diamond
is also unattainable because this action takes place within the Admin
contract, and only the current governor possesses the authority to designate a new one. However, executing a transaction from the governor's side necessitates scheduling an upgrade, a task rendered impossible when the owner is compromised.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Admin.sol#L20
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Admin.sol#L109
Furthermore, the owner and the security council have the capability to cancel any previously scheduled upgrades, permitting a compromised owner to nullify any previously arranged patches.
This challenge stems from the fact that only the owner is authorized to schedule an upgrade.
It's important to note that if the security council were to act maliciously while the owner remains honest, a similar issue would arise. This is due to the security council's capacity to cancel any scheduled upgrades, which would prevent the owner from autonomously conducting upgrades.
One potential solution involves granting the security council the ability to schedule an upgrade. However, they should not have the authority to instantly execute an upgrade that they scheduled themselves. This could be achieved through the following code:
```solidity
function scheduleUpgradeBySecurityCouncil(Operation calldata _operation, uint256 _delay)
    external
    onlySecurityCouncil
{
    //...
}

function executeInstant(Operation calldata _operation) external onlySecurityCouncil {
    require(
        !isOperationScheduledBySecurityCouncil(_operation),
        "The operation was scheduled by the security council, so only delayed execution is allowed"
    );
    //...
}
```
It's essential to note that in this scenario, a malicious owner could still cancel the scheduled operation. Therefore, it appears that the governance structure should be refactored to introduce a third authority. In such a setup, two out of the three authorities would be required to cancel a scheduled operation, providing an additional layer of security and mitigating the risk of a compromised owner or security council.
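A rough sketch of the suggested three-authority design (hypothetical names and logic, modeled in Python rather than Solidity): cancelling a scheduled operation takes approval from two of the three authorities, so a single compromised party cannot nullify a patch.

```python
AUTHORITIES = {"owner", "security_council", "third_authority"}
cancel_votes = {}  # operation id -> set of authorities voting to cancel

def vote_cancel(operation_id: str, authority: str) -> bool:
    """Record a cancel vote; returns True once 2-of-3 authorities agree."""
    if authority not in AUTHORITIES:
        raise ValueError("unknown authority")
    votes = cancel_votes.setdefault(operation_id, set())
    votes.add(authority)
    return len(votes) >= 2

# A lone compromised owner cannot cancel a scheduled patch...
print(vote_cancel("op-1", "owner"))             # False
# ...but two authorities acting together can.
print(vote_cancel("op-1", "security_council"))  # True
```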
Context
#0 - c4-pre-sort
2023-10-31T07:01:21Z
bytes032 marked the issue as low quality report
#1 - miladpiri
2023-11-09T08:57:05Z
Medium. Duplicate https://github.com/code-423n4/2023-10-zksync-findings/issues/260
#2 - c4-judge
2023-11-26T18:02:15Z
GalloDaSballo marked the issue as duplicate of #260
#3 - c4-judge
2023-11-28T15:53:08Z
GalloDaSballo marked the issue as satisfactory
0 USDC - $0.00
This vulnerability allows malicious operators to potentially exploit the gas refund handling mechanism, resulting in the theft of user refunds.
In the event that a user provides a gasLimit
value exceeding both the MAX_GAS_PER_TRANSACTION
and the operatorTrustedGasLimit
while processing the L1Tx, any surplus gas will be stored in a variable known as reservedGas
. This surplus gas is intended to be refunded to the user at the end of transaction execution.
https://github.com/code-423n4/2023-10-zksync/blob/7ed3944429f437a611c32e782a12b320f6a67c17/code/system-contracts/bootloader/bootloader.yul#L1137
https://github.com/code-423n4/2023-10-zksync/blob/7ed3944429f437a611c32e782a12b320f6a67c17/code/system-contracts/bootloader/bootloader.yul#L1145
When the user's transaction is executed, the refundGas
is determined by taking the maximum value between potentialRefund
and the refund provided by the operator, as invoked through the getOperatorRefundForTx(transactionIndex)
function.
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/bootloader/bootloader.yul#L918C38-L918C78
Subsequently, in the calculation of the final refundGas
, the reservedGas
is added to this value. This ensures that any extra gas supplied by the user is included in the final refund calculation.
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/bootloader/bootloader.yul#L921
However, it's important to note that if a malicious operator deliberately provides an exceedingly large value as a refund, such as type(uint256).max
, the refundGas
will become equal to this large value. In cases where the reservedGas
is nonzero, an overflow may occur when it is added to this inflated refundGas
, causing the refundGas
to wrap around to a small value such as zero.
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/bootloader/bootloader.yul#L921
Consequently, having a refundGas
of zero implies that the entire gasLimit
will be directed towards the operator, contrary to the intended scenario where only gasLimit - refundGas
should be allocated to the operator and the remaining refundGas
designated for the user.
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/bootloader/bootloader.yul#L927
This manipulation by a malicious operator effectively results in the theft of the refund gas.
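The wraparound can be reproduced with 256-bit modular arithmetic (a Python model of an unchecked add over uint256 words, with illustrative values, not the Yul code itself):

```python
U256 = 2**256

def add_unchecked(a: int, b: int) -> int:
    """Models an unchecked 256-bit add: the result wraps instead of reverting."""
    return (a + b) % U256

operator_refund = U256 - 1    # malicious operator reports type(uint256).max
potential_refund = 1_000_000  # honest refund computed during execution
reserved_gas = 1              # surplus gas from an oversized gasLimit

refund_gas = max(potential_refund, operator_refund)  # operator's inflated value wins
final_refund = add_unchecked(refund_gas, reserved_gas)
print(final_refund)  # 0 -> the user's refund silently vanishes
```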
To prevent the possibility of overflow, it is advisable to replace the use of the add
function with the safeAdd
function. This will ensure that the gas refund handling mechanism operates securely against potential vulnerabilities associated with integer overflow.
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/bootloader/bootloader.yul#L3249
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/bootloader/bootloader.yul#L921
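For comparison, a checked addition in the spirit of safeAdd (a sketch of the semantics only; the actual bootloader implements this in Yul) rejects the wraparound instead of returning a wrapped value:

```python
U256 = 2**256

def safe_add(a: int, b: int) -> int:
    """Checked 256-bit add: raises on overflow instead of wrapping."""
    result = (a + b) % U256
    if result < a:  # wraparound happened
        raise OverflowError("ADDITION_OVERFLOW")
    return result

print(safe_add(10, 20))  # 30
# safe_add(2**256 - 1, 1) raises OverflowError rather than returning 0.
```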
Under/Overflow
#0 - c4-pre-sort
2023-10-31T09:39:57Z
bytes032 marked the issue as sufficient quality report
#1 - c4-pre-sort
2023-10-31T09:40:38Z
bytes032 marked the issue as primary issue
#2 - miladpiri
2023-11-06T14:56:34Z
Valid finding. Medium is fair.
#3 - c4-sponsor
2023-11-06T14:56:40Z
miladpiri (sponsor) confirmed
#4 - c4-judge
2023-11-25T19:13:23Z
GalloDaSballo marked the issue as satisfactory
#5 - c4-judge
2023-11-25T19:14:17Z
GalloDaSballo marked issue #255 as primary and marked this issue as a duplicate of 255
0 USDC - $0.00
Inconsistencies in the deposit limitation mechanism between the L1WethBridge
contract and regular user deposits can lead to issues affecting the management of WETH and ETH bridging.
The function _verifyDepositLimit
within Mailbox::requestL2Transaction
is responsible for tracking the amount of ETH that each user moves from L1 to L2 and ensuring that it does not exceed the deposit cap. This measure is in place to prevent users from going over their specific deposit limit.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Mailbox.sol#L261
However, the L1WethBridge
contract also utilizes the Mailbox::requestL2Transaction
function for transferring WETH from L1 to L2. Consequently, the same deposit limit mechanism is applied to the L1WethBridge
. In other words, the amount of ETH bridged by the L1WethBridge
is constrained by the same ETH cap that applies to regular users. This is despite the fact that the role and purpose of the L1WethBridge
differ from that of ordinary users.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/bridge/L1WethBridge.sol#L185
This situation may pose an issue: when the deposit limitation is active, the total WETH bridged by all users through the L1WethBridge
is collectively restricted to limitData.depositCap
, whereas ETH bridging is limited to limitData.depositCap
on a per-user basis.
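A toy model of this asymmetry (hypothetical names, in Python rather than Solidity) shows how a uniform per-address cap behaves differently for a bridge contract that aggregates many users:

```python
DEPOSIT_CAP = 100  # illustrative cap, in ether
total_deposited = {}  # cumulative deposits per L1 sender address

def verify_deposit_limit(depositor: str, amount: int) -> bool:
    """Per-address cap, applied uniformly to every caller."""
    new_total = total_deposited.get(depositor, 0) + amount
    if new_total > DEPOSIT_CAP:
        return False
    total_deposited[depositor] = new_total
    return True

# Each ordinary user gets the full cap for their own ETH deposits...
print(verify_deposit_limit("alice", 100))  # True
print(verify_deposit_limit("bob", 100))    # True
# ...but all WETH flows through the single bridge address, sharing one cap.
print(verify_deposit_limit("l1_weth_bridge", 60))  # True  (user A's 60 WETH)
print(verify_deposit_limit("l1_weth_bridge", 60))  # False (user B is blocked)
```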
The deposit limit for the L1WethBridge
contract should not be identical to that of regular users.
Context
#0 - c4-pre-sort
2023-11-02T15:01:53Z
141345 marked the issue as duplicate of #246
#1 - c4-judge
2023-11-24T19:24:38Z
GalloDaSballo marked the issue as satisfactory
0 USDC - $0.00
Protocol Version Discrepancy: In cases where an L2 upgrade fails but is processed on L1 without concern for its execution result, the protocol version is advanced while the actual system remains unaltered on L2. This results in a discrepancy between the recorded protocol version and the operational state of the system.
Unique Transaction Hash Requirement: The protocol typically requires that the transaction hashes of L2 system upgrades be unique, with the nonce
of the L2 upgrade transaction equal to the new protocol version. When an L2 upgrade fails but is not appropriately recognized on L1, this requirement is breached, necessitating a refactor of the contracts involved.
In the course of an upgrade, the sequence of function calls is as follows:
Governance::execute >> Governance::_execute >> DiamondProxy::fallback >> Admin::executeUpgrade >> Diamond::diamondCut >> Diamond::_initializeDiamondCut >> DefaultUpgrade::upgrade
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/contracts/ethereum/contracts/upgrades/DefaultUpgrade.sol#L25
During an upgrade, a new protocol version is established, and it should be numerically higher than the previous version. https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/contracts/ethereum/contracts/upgrades/DefaultUpgrade.sol#L28
Let's examine two scenarios:
In the first case, if the upgrade includes an L1 upgrade and it fails to execute successfully, it will result in a rollback that reverses the entire upgrade transaction. https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/contracts/ethereum/contracts/governance/Governance.sol#L230
In the second case, if the upgrade includes an L2 upgrade and it fails to execute successfully on L2, there's a challenge because the execution should take place on L2. As a result, it is unclear immediately whether the upgrade was successful.
The issue lies in the second case. Let's assume that the upgrade exclusively involves an L2 upgrade, and the upgrade transaction is executed on L1 such as the following flow of function calls:
DefaultUpgrade::upgrade >> BaseZkSyncUpgrade::_setL2SystemContractUpgrade
Crucially, within this function, the transaction hash of the upgrade transaction is stored in s.l2SystemContractsUpgradeTxHash
.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgrade.sol#L193
Subsequently, this upgrade transaction should be executed on L2 with _l2ProtocolUpgradeTx.txType = 254
. In the bootloader, when processing such a transaction type, the canonicalL1TxHash
is sent natively to L1.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/bootloader/bootloader.yul#L587
Following that, the processL1Tx
function is invoked, in which the transaction is prepared and subsequently executed.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/bootloader/bootloader.yul#L589
During the execution, if, for any reason, such as running out of gas, the execution fails, the entire program does not revert. Instead, an L2 to L1 log is sent with success = false
.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/bootloader/bootloader.yul#L908-L919
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/bootloader/bootloader.yul#L966
On the L1 side, during the batch commitment, the function _processL2Logs
is called. Within this function, a check is made to ensure that _expectedSystemContractUpgradeTxHash
is equal to the canonicalL1TxHash
associated with the L2 upgrade transaction.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L157
The issue stems from the fact that, during the execution of the batch, the s.l2SystemContractsUpgradeTxHash
is cleared regardless of the outcome of the L2 upgrade transaction. To put it differently, if the upgrade transaction executes on L2 and returns a 'false' result, the upgrade was not correctly applied on L2. The L1 side, however, does not consider the outcome of the L2 upgrade transaction; it only checks that the transaction was executed, regardless of its success or failure.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/contracts/ethereum/contracts/zksync/facets/Executor.sol#L304
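A toy state model of the behavior described above (hypothetical names, mirroring the effect of batch execution on the pending upgrade hash, not the actual contract):

```python
class DiamondState:
    """Minimal model of the L1 state touched by an L2 system upgrade."""
    def __init__(self):
        self.protocol_version = 1
        self.l2_upgrade_tx_hash = None

    def schedule_l2_upgrade(self, tx_hash, new_version):
        assert new_version > self.protocol_version  # new version must be higher
        self.l2_upgrade_tx_hash = tx_hash
        self.protocol_version = new_version  # bumped when the upgrade is set on L1

    def execute_batch(self, l2_upgrade_succeeded):
        # The success flag is never consulted -- the pending hash is cleared
        # whether or not the upgrade actually took effect on L2.
        self.l2_upgrade_tx_hash = None

state = DiamondState()
state.schedule_l2_upgrade("0xabc", new_version=2)
state.execute_batch(l2_upgrade_succeeded=False)  # the upgrade failed on L2
print(state.protocol_version, state.l2_upgrade_tx_hash)  # 2 None
```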
The implications of this issue are as follows:
On L1, s.l2SystemContractsUpgradeTxHash
becomes zero, indicating that the previous L2 upgrade has been completed. However, it's important to note that the protocol version was incremented despite the absence of an actual upgrade on L2 due to the unsuccessful execution. Consequently, the protocol version is raised without a corresponding system upgrade.
A discrepancy arises between s.protocolVersion
and the new upgrade transaction. This discrepancy occurs because the protocol aims to have unique hashes for L2 system upgrade transactions, requiring that the nonce
of the L2 upgrade transaction be equal to the new protocol version. Moreover, the new protocol version should be numerically higher than the previous one. Therefore, it appears that the contracts DefaultUpgrade
and BaseZkSyncUpgrade
require a revision to address this scenario.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgrade.sol#L217
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgrade.sol#L186
The owner is required to schedule another upgrade, and this necessitates the passage of time for the delay between proposal and execution to elapse (in the case of a delayed upgrade).
One viable approach is to address this issue by incorporating the result of the L2 upgrade transaction into the batch execution process. If the L2 upgrade fails, this solution entails resetting the protocol version to its previous state. However, when an upgrade involves both L1 and L2 components, a straightforward rollback of the protocol version to its former state is not feasible, as the L1 upgrade has succeeded while the L2 counterpart has encountered difficulties.
```solidity
function executeBatches(StoredBatchInfo[] calldata _batchesData) external nonReentrant onlyValidator {
    //...
    uint256 batchWhenUpgradeHappened = s.l2SystemContractsUpgradeBatchNumber;
    if (batchWhenUpgradeHappened != 0 && batchWhenUpgradeHappened <= newTotalBatchesExecuted) {
        delete s.l2SystemContractsUpgradeTxHash;
        delete s.l2SystemContractsUpgradeBatchNumber;
        if (!proveL1ToL2TransactionStatus(...)) {
            // checking whether the L2 upgrade tx was successful or not
            s.protocolVersion = s.OldProtocolVersion; // assuming the old protocol version is stored
        }
    }
}
```
Context
#0 - c4-pre-sort
2023-10-31T15:00:21Z
bytes032 marked the issue as primary issue
#1 - c4-pre-sort
2023-11-02T16:24:06Z
141345 marked the issue as sufficient quality report
#2 - miladpiri
2023-11-06T14:59:40Z
Medium is fair. We can make the bootloader fail if the upgrade transaction is unsuccessful. In general, it is a good idea that needs to be implemented.
#3 - c4-sponsor
2023-11-06T14:59:46Z
miladpiri (sponsor) confirmed
#4 - miladpiri
2023-11-06T15:02:39Z
The duplicates indicated by the pre-sorter are irrelevant. They should be judged separately.
#5 - GalloDaSballo
2023-11-28T15:32:38Z
The Warden has shown how, due to operative risks, a failed upgrade transaction can create a desynchronization between the upgraded version and the version reported
I believe the finding leans between Operative Mistake (QA) and risk of protocol DOS (Med) and am leaning towards Med due to the Sponsors comment
#6 - c4-judge
2023-11-28T15:32:43Z
GalloDaSballo marked the issue as selected for report
#7 - 0xunforgiven
2023-12-03T13:12:35Z
Hi @GalloDaSballo, @miladpiri. Thanks for reviewing my comment.
I believe Issue #214 requires a lot of hypothesizing, conditions, and operational mistakes. We have multiple similar issues that were invalidated; I have listed some of them in issue #221's comment.
This comment explains that there is no real impact mentioned in issue #214 except a burned version number, which can be replaced by an L1 upgrade tx.
Duplicate Issue:
Furthermore, I believe issue #748 is an exact match to #214:
In cases where an L2 upgrade fails but is processed on L1 without concern for its execution result, the protocol version is advanced while the actual system remains unaltered on L2.
So the system update transaction, whether it succeeds or fails, is treated as it had already been executed, by going through steps 1-4. and clearing the l2SystemContractsUpgradeTxHash
System update execution failures are ignored, and if they are security patches, they are ignored, which may cause other transactions to continue to run normally causing security risks.
So both issues #214 and #748 show concerns regarding the L1 contract accepting a reverted L2 upgrade transaction as success, but the sponsor's comments regarding them are contradictory:
Sponsor Comment:
The sponsor response for issue #214 is (link):
Medium is fair. We can make the bootloader fail if the upgrade transaction is unsuccessful. In general, it is a good idea that needs to be implemented.
While the sponsor response for issue #748 is (link):
We should verify the behavior, when upgrade transaction fails. The suggested fix is probably not correct (failing full bootloader, when update fails, means that the whole chain is stuck). The chain should not process if the upgrade transaction fails. However, it is more of an operational issue, so it is Low. Not duplicate.
These obviously contradict each other, so I believe there is a misunderstanding or miscommunication here. Based on the above facts, the fact that the upgrade txs are carefully created by admins and reviewed by the security council, the fact that no Medium/High impact is mentioned here, and the fact that governance can perform an upgrade/revert via an L1 upgrade, I suggest low severity for this issue.
0 USDC - $0.00
The behavioral discrepancy in zkSync Era when using `delegatecall` with the SHA256 precompile contract can have significant impacts: `delegatecall` may not produce the expected results, affecting the integrity of data and contracts.

It is observed that in zkSync Era, there is a distinct inconsistency in the behavior of the SHA256 precompile contract (located at address 0x02) when accessed through a `delegatecall`. This behavior deviates from the standard Ethereum Virtual Machine (EVM) behavior, where the results across `call`, `staticcall`, and `delegatecall` are consistent.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/precompiles/SHA256.yul#L9
Within zkSync Era, invoking the SHA256 precompile contract using a `delegatecall` takes a different path from the usual behavior. Instead of executing within the isolated context of the called contract, it delegates the call to the contract itself and runs its code within the caller's context. Consequently, the returned value is not in line with the expected outcome of a `precompileCall`. Instead, it consistently yields `bytes32(0)`.
For clarity, when executing the provided code below in the EVM, the returned `bytes32` value remains constant at `0x66687aadf862bd776c8fc18b8e9f8e20089714856ee233b3902a591d0d5f2925` for all three scenarios: `sha256Staticcall`, `sha256Call`, and `sha256Delegatecall`. However, in zkSync Era, while `sha256Staticcall` and `sha256Call` deliver results consistent with the EVM, `sha256Delegatecall` produces an inaccurate outcome.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.20;

contract PoC {
    function sha256Staticcall() public returns (bytes32) {
        assembly {
            pop(staticcall(gas(), 0x02, 0, 0x20, 0, 0x20))
            return(0, 0x20)
        }
    }

    function sha256Call() public returns (bytes32) {
        assembly {
            pop(call(gas(), 0x02, 0, 0, 0x20, 0, 0x20))
            return(0, 0x20)
        }
    }

    function sha256Delegatecall() public returns (bytes32) {
        assembly {
            pop(delegatecall(gas(), 0x02, 0, 0x20, 0, 0x20))
            return(0, 0x20)
        }
    }
}
```
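As a sanity check on the expected constant above: the PoC hashes the 32 bytes of untouched (zeroed) scratch memory at `[0, 0x20)`, so the EVM result is simply SHA-256 over 32 zero bytes, which can be reproduced in a few lines of Python:

```python
import hashlib

# The EVM PoC hashes 32 bytes of zeroed memory, so the expected
# return value is sha256 over 32 zero bytes.
digest = hashlib.sha256(b"\x00" * 32).hexdigest()
assert digest == (
    "66687aadf862bd776c8fc18b8e9f8e20089714856ee233b3902a591d0d5f2925"
)
```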
This discrepancy is critical in its impact because it introduces a divergence from the expected EVM response. The likelihood of encountering this issue is not high, however, as precompile contracts are typically invoked through `staticcall` rather than `delegatecall`.
It is advisable to substitute a `delegatecall` with a `staticcall` whenever the target address for the operation is the SHA256 precompiled contract.
```solidity
function delegateCall(
    uint256 _gas,
    address _address,
    bytes calldata _data
) internal returns (bytes memory returnData) {
    bool success;
    if (_address == address(0x02)) {
        success = rawStaticCall(_gas, _address, _data);
    } else {
        success = rawDelegateCall(_gas, _address, _data);
    }
    returnData = _verifyCallResult(success);
}
```
Context
#0 - c4-pre-sort
2023-11-01T18:43:19Z
bytes032 marked the issue as primary issue
#1 - c4-pre-sort
2023-11-02T16:25:15Z
141345 marked the issue as sufficient quality report
#2 - miladpiri
2023-11-06T14:33:51Z
High impact, low probability ⇒ medium
#3 - c4-sponsor
2023-11-06T14:33:58Z
miladpiri (sponsor) confirmed
#4 - GalloDaSballo
2023-11-26T19:42:45Z
The Warden has shown an inconsistency in the behaviour of sha256 when using delegatecall
Because the goal of the zkSync EVM is to be the EVM compatible, Medium Severity seems appropriate
#5 - c4-judge
2023-11-26T19:42:50Z
GalloDaSballo marked the issue as selected for report
#6 - GalloDaSballo
2023-12-10T16:53:58Z
I have verified my statement through testing via the zksync-foundry code repo, as well as by reviewing both the Precompile and the Rust code. (Screenshot of the verification attached in the original comment.)
#7 - GalloDaSballo
2023-12-10T16:54:23Z
This finding is valid and the behaviour is inconsistent, Medium Severity is appropriate
#8 - c4-judge
2024-01-05T08:23:37Z
GalloDaSballo marked the issue as duplicate of #175
#9 - c4-judge
2024-01-05T14:16:04Z
GalloDaSballo marked the issue as not selected for report
#10 - c4-judge
2024-01-12T08:16:17Z
GalloDaSballo marked the issue as satisfactory
0 USDC - $0.00
The discrepancy in `delegatecall` behavior with the ECRECOVER precompile contract in zkSync Era can have a significant impact, leading to incorrect signature validation and potentially compromising data integrity and user funds.

In the context of zkSync Era, there exists a noticeable inconsistency in how the ECRECOVER precompile contract (located at address 0x01) behaves when accessed via `delegatecall`. This behavior differs from the standard Ethereum Virtual Machine (EVM) behavior, where the outcomes remain uniform across `call`, `staticcall`, and `delegatecall`.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/precompiles/Ecrecover.yul#L7
In zkSync Era, when the ECRECOVER precompile contract is invoked using `delegatecall`, it diverges from the usual behavior by delegating the call to the contract itself and executing its code within the caller's context. This results in a returned value that does not align with the anticipated outcome of a `precompileCall`. Instead, it yields `bytes32(0)`.
To illustrate, when executing the provided code below in the EVM, the returned `bytes32` value consistently appears as `0x000000000000000000000000759389e8e5c1aa1f511e9ea98b6caedd262bff0b` for all three scenarios: `ecrecoverStaticcall`, `ecrecoverCall`, and `ecrecoverDelegatecall`. However, in zkSync Era, while `ecrecoverStaticcall` and `ecrecoverCall` maintain the same results as in the EVM, `ecrecoverDelegatecall` produces an incorrect outcome.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.20;

contract PoC {
    bytes32 h = keccak256("");
    uint8 v = 27;
    bytes32 r = bytes32(uint256(1));
    bytes32 s = bytes32(uint256(2));

    function ecrecoverStaticcall() public returns (bytes32) {
        bytes memory data = abi.encode(h, v, r, s);
        assembly {
            pop(staticcall(gas(), 0x01, add(data, 0x20), mload(data), 0, 0x20))
            return(0, 0x20)
        }
    }

    function ecrecoverCall() public returns (bytes32) {
        bytes memory data = abi.encode(h, v, r, s);
        assembly {
            pop(call(gas(), 0x01, 0x00, add(data, 0x20), mload(data), 0, 0x20))
            return(0, 0x20)
        }
    }

    function ecrecoverDelegatecall() public returns (bytes32) {
        bytes memory data = abi.encode(h, v, r, s);
        assembly {
            pop(delegatecall(gas(), 0x01, add(data, 0x20), mload(data), 0, 0x20))
            return(0, 0x20)
        }
    }
}
```
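For readers unfamiliar with the precompile's input format: `abi.encode(h, v, r, s)` in the PoC packs four 32-byte words (hash, `v` left-padded to a full word, `r`, `s`), which is exactly the 128-byte input ECRECOVER expects. A minimal Python sketch of that packing (`ecrecover_input` is an illustrative helper, not part of any codebase; the value of `keccak256("")` is hard-coded since Python's stdlib lacks keccak):

```python
# keccak256("") -- hard-coded because hashlib has no keccak variant.
KECCAK_EMPTY = bytes.fromhex(
    "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470"
)

def ecrecover_input(h: bytes, v: int, r: int, s: int) -> bytes:
    # Each value is ABI-encoded as a 32-byte big-endian word: hash | v | r | s.
    return h + v.to_bytes(32, "big") + r.to_bytes(32, "big") + s.to_bytes(32, "big")

data = ecrecover_input(KECCAK_EMPTY, 27, 1, 2)
assert len(data) == 128        # mload(data) in the PoC evaluates to 128
assert data[:32] == KECCAK_EMPTY
assert data[63] == 27          # v occupies the last byte of its padded word
```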
This discrepancy is critical in its impact because it introduces a divergence from the expected EVM response. The likelihood of encountering this issue is not high, however, as precompile contracts are typically invoked through `staticcall` rather than `delegatecall`.
The following revised code is recommended:
```solidity
function delegateCall(
    uint256 _gas,
    address _address,
    bytes calldata _data
) internal returns (bytes memory returnData) {
    bool success;
    if (_address == address(0x01)) {
        success = rawStaticCall(_gas, _address, _data);
    } else {
        success = rawDelegateCall(_gas, _address, _data);
    }
    returnData = _verifyCallResult(success);
}
```
Context
#0 - c4-pre-sort
2023-11-01T18:43:18Z
bytes032 marked the issue as primary issue
#1 - c4-pre-sort
2023-11-02T16:24:29Z
141345 marked the issue as sufficient quality report
#2 - miladpiri
2023-11-06T14:34:46Z
It has high impact (we do not know in which context it will be used, so its wrong result can have critical impact), but low probability. So, medium is fair.
#3 - c4-sponsor
2023-11-06T14:34:53Z
miladpiri (sponsor) confirmed
#4 - GalloDaSballo
2023-11-26T19:42:13Z
The Warden has shown an inconsistency in the behaviour of ecrecover when using delegatecall
Because the goal of the zkSync EVM is to be the EVM compatible, Medium Severity seems appropriate
#5 - c4-judge
2023-11-26T19:42:18Z
GalloDaSballo marked the issue as selected for report
#6 - GalloDaSballo
2023-12-10T16:53:55Z
I have verified my statement through testing via the zksync-foundry code repo, as well as by reviewing both the Precompile and the Rust code. (Screenshot of the verification attached in the original comment.)
#7 - GalloDaSballo
2023-12-10T16:54:21Z
This finding is valid and the behaviour is inconsistent, Medium Severity is appropriate
#8 - GalloDaSballo
2024-01-05T08:23:20Z
After discussing with 3 judges (2 from SC) and the sponsor, I am grouping the 2 delegatecall findings
I am not convinced that autonomously grouping instances is better for protocol safety, however I seem to be in the minority in this case
0 USDC - $0.00
Reverts in default account fallback function (when custom accounts delegate calls to a default account) may introduce security risks as protocols might not anticipate these reverts, resulting in vulnerabilities.
As stated in the documentation regarding the `DefaultAccount`:
The main purpose of this contract is to provide EOA-like experience for both wallet users and contracts that call it, i.e. it should not be distinguishable (apart of spent gas) from EOA accounts on Ethereum. https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/System%20contracts%20bootloader%20description.md#defaultaccount
To achieve this EOA-like behavior, the fallback and receive functions of the `DefaultAccount` contract are designed in such a way that, when called or delegate-called, they mimic the actions of an EOA:
```solidity
fallback() external payable {
    // The fallback function of the default account should never be called by the bootloader
    assert(msg.sender != BOOTLOADER_FORMAL_ADDRESS);
    // If the contract is called directly, it should behave like an EOA.
}

receive() external payable {
    // If the contract is called directly, it should behave like an EOA.
}
```
However, there's an exception to this EOA-like behavior when a custom account delegate-calls a default account. Suppose the custom account execution function is implemented as follows:
```solidity
function _execute(Transaction calldata _transaction) internal {
    address to = address(uint160(_transaction.to));
    (bool success, ) = address(to).delegatecall("0x1234");
    require(success, "call was not successful");
}
```
In this scenario, if the `to` address is a default account, the fallback function of the default account will be invoked. Since the `msg.sender` is the bootloader address, it will trigger a revert due to `assert(msg.sender != BOOTLOADER_FORMAL_ADDRESS)`, and `success` will be `false`. This is not the expected behavior, as `to` should act like an EOA and `success` should be `true`.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/DefaultAccount.sol#L224
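The control flow above can be modeled in a few lines of Python (a toy model, not zkSync code: `delegatecall` is represented by passing the caller's `msg.sender` through unchanged, and the `assert` in the fallback by an `AssertionError`):

```python
BOOTLOADER = "bootloader"

class DefaultAccount:
    def fallback(self, msg_sender: str) -> None:
        # Mirrors assert(msg.sender != BOOTLOADER_FORMAL_ADDRESS)
        assert msg_sender != BOOTLOADER

def custom_account_execute(msg_sender: str, target: DefaultAccount) -> bool:
    # delegatecall preserves the caller's msg.sender; when the bootloader
    # invoked the custom account, that sender is the bootloader itself.
    try:
        target.fallback(msg_sender)
        return True
    except AssertionError:
        return False

# Delegate-calling a default account from a bootloader-invoked custom
# account reverts, instead of behaving like an EOA call (which succeeds).
assert custom_account_execute(BOOTLOADER, DefaultAccount()) is False
assert custom_account_execute("alice", DefaultAccount()) is True
```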
This situation can potentially pose a significant issue for several reasons.
One potential solution is to add the `ignoreInDelegateCall` modifier to the fallback function as well:
```solidity
fallback() external payable ignoreInDelegateCall {
    assert(msg.sender != BOOTLOADER_FORMAL_ADDRESS);
}
```
Context
#0 - c4-pre-sort
2023-10-31T14:40:09Z
bytes032 marked the issue as primary issue
#1 - c4-pre-sort
2023-10-31T14:40:13Z
bytes032 marked the issue as sufficient quality report
#2 - miladpiri
2023-11-06T14:32:44Z
It is showing that the default account is not purely simulating an EOA. So, medium is fair.
#3 - c4-sponsor
2023-11-06T14:32:50Z
miladpiri (sponsor) confirmed
#4 - GalloDaSballo
2023-11-28T12:48:01Z
The Warden has shown an edge case for Default Accounts, due to this being a discrepancy in intended behaviour, Medium Severity seems most appropriate
#5 - c4-judge
2023-11-28T12:48:08Z
GalloDaSballo marked the issue as selected for report
🌟 Selected for report: HE1M
Also found by: 0xstochasticparrot, erebus, quarkslab, rvierdiiev
0 USDC - $0.00
The divergences in the emulation of the `extcodehash` EVM opcode within zkSync Era carry several potential impacts, especially for developers relying on zkSync Era's assurance that it emulates the `extcodehash` opcode as per EIP-1052, who might encounter unexpected behavior in their smart contracts.
The `getCodeHash` function within zkSync Era is designed to emulate the functionality of the `extcodehash` Ethereum Virtual Machine (EVM) opcode, as laid out in Ethereum Improvement Proposal 1052 (EIP-1052).
https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/contracts/AccountCodeStorage.sol#L89
However, it's important to recognize that this function does not exhibit precisely identical behavior to the EVM's `extcodehash` opcode. There are specific discrepancies:
EIP-161 defines an account as "empty" when it satisfies certain criteria: no code, a nonce value of zero, and a balance of zero. According to EIP-1052, the `extcodehash` of an empty account should evaluate to `bytes32(0)`. In zkSync Era, the behavior aligns with this definition when `codeHash` is `0x00`, `getRawNonce(account)` is `0`, and `isContractConstructing(codeHash)` is `false`. If an account has a nonzero balance, then based on the definition of EIP-161, it is no longer considered an empty account; instead, it is considered an account with no code, so the `extcodehash` of such an account will be `keccak256("")` in the EVM.

The issue is that zkSync Era returns `bytes32(0)` regardless of the balance of the account. It only cares about the nonce and the code.
Based on EIP-1052, the `extcodehash` of a precompile contract is either `keccak256("")` or `bytes32(0)`. For instance, the `extcodehash` of `address(0x02)` (representing the SHA-256 precompile contract in the EVM) should be `keccak256("")` if it has no code, a nonce of zero, and a nonzero balance. In contrast, zkSync Era consistently returns `keccak256("")` for precompile contracts, regardless of their balances. zkSync Era's behavior is based solely on whether the address is lower than or equal to `CURRENT_MAX_PRECOMPILE_ADDRESS`.
These observed inconsistencies could potentially raise concerns for developers who rely on zkSync Era's assertion that it accurately simulates the `extcodehash` EVM opcode as dictated by the EIP-1052 specification.
The following code is recommended to simulate the `extcodehash` EVM opcode precisely, based on EIP-1052:
```solidity
function getCodeHash(uint256 _input) external view override returns (bytes32) {
    address account = address(uint160(_input));

    if (uint160(account) <= CURRENT_MAX_PRECOMPILE_ADDRESS && account.balance != 0) {
        return EMPTY_STRING_KECCAK;
    } else if (uint160(account) <= CURRENT_MAX_PRECOMPILE_ADDRESS && address(account).balance == 0) {
        return bytes32(0);
    }

    bytes32 codeHash = getRawCodeHash(account);

    if (codeHash == 0x00 && NONCE_HOLDER_SYSTEM_CONTRACT.getRawNonce(account) > 0) {
        codeHash = EMPTY_STRING_KECCAK;
    } else if (Utils.isContractConstructing(codeHash)) {
        codeHash = EMPTY_STRING_KECCAK;
    } else if (
        codeHash == 0x00 &&
        NONCE_HOLDER_SYSTEM_CONTRACT.getRawNonce(account) == 0 &&
        address(account).balance != 0
    ) {
        codeHash = EMPTY_STRING_KECCAK;
    }

    return codeHash;
}
```
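The EIP-1052/EIP-161 rules this fix encodes can be summarized as a small decision table. A hypothetical Python helper (names are illustrative, not part of the codebase) showing which hash an EVM-equivalent `extcodehash` should return:

```python
EMPTY_STRING_KECCAK = "keccak256('')"
ZERO = "bytes32(0)"

def expected_extcodehash(has_code: bool, nonce: int, balance: int) -> str:
    """EIP-1052 result for an account, per the EIP-161 definition of 'empty'."""
    if has_code:
        return "keccak256(code)"
    # No code: the account is 'empty' only if nonce and balance are both zero.
    if nonce == 0 and balance == 0:
        return ZERO              # empty account -> bytes32(0)
    return EMPTY_STRING_KECCAK   # existing but codeless -> keccak256("")

# The case zkSync Era gets wrong: a codeless account with nonzero balance.
assert expected_extcodehash(False, 0, 1) == EMPTY_STRING_KECCAK
assert expected_extcodehash(False, 0, 0) == ZERO
assert expected_extcodehash(False, 5, 0) == EMPTY_STRING_KECCAK
```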
Context
#0 - c4-pre-sort
2023-10-31T13:26:07Z
bytes032 marked the issue as primary issue
#1 - c4-pre-sort
2023-10-31T13:26:48Z
bytes032 marked the issue as sufficient quality report
#2 - miladpiri
2023-11-06T14:26:41Z
The impact is medium, the probability is low.
So, low severity can be fair.
#3 - c4-sponsor
2023-11-06T14:26:48Z
miladpiri (sponsor) confirmed
#4 - c4-sponsor
2023-11-06T14:26:54Z
miladpiri marked the issue as disagree with severity
#5 - GalloDaSballo
2023-11-24T19:39:36Z
While I agree with low impact, this seem to be an inconsistency in the implementation of the EVM, which makes the finding notable
#6 - GalloDaSballo
2023-11-24T19:40:16Z
I would have a different perspective for an ERC or some implementation, however in this case this is an opcode that behaves differently (although in a edge case)
#7 - GalloDaSballo
2023-11-26T15:05:20Z
At this time, I believe Medium Severity is most appropriate as the finding demonstrates an inconsistent behaviour of the EVM with the zkSyncVM
I agree with the impact, but believe that such finding belongs to the Medium Severity classification as it breaks a coding convention that goes beyond a best-practice
#8 - c4-judge
2023-11-28T12:24:57Z
GalloDaSballo marked the issue as selected for report
🌟 Selected for report: HE1M
0 USDC - $0.00
The discrepancy in deployment nonce behavior between zkSync Era and EVM can cause problems for contract factories and developers. zkSync Era starts the deployment nonce at zero, unlike the EVM, where it starts at one. This difference may lead to incorrect predictions of child contract addresses.
As per EIP-161, it's specified that account creation transactions and the CREATE operation should increase the nonce beyond its initial value by one. https://github.com/ethereum/EIPs/blob/master/EIPS/eip-161.md#specification
Account creation transactions and the CREATE operation SHALL, prior to the execution of the initialisation code, increment the nonce over and above its normal starting value by one (for normal networks, this will be simply 1, however test-nets with non-zero default starting nonces will be different).
In other words, when an EOA (for example, with nonce 100) deploys a contract (named "contract X"), the nonces will be as follows (please note that the nonce of the newly-deployed contract X is also incremented by one):

nonce(EOA): 100 -> 101
nonce(contract X): 0 -> 1
And when, in another transaction, contract X deploys another contract (named "contract Y"), the nonces will be (again, please note that the nonce of the newly-deployed contract Y is also incremented by one):

nonce(EOA): 101 -> 102
nonce(contract X): 1 -> 2
nonce(contract Y): 0 -> 1
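This bookkeeping can be captured in a tiny Python model (a sketch of the EIP-161 rules only; no addresses or real EVM state), which shows why a freshly created contract performs its own first deployment with nonce 1, not 0:

```python
# Minimal model of EIP-161 nonce rules: a transaction bumps the sender's
# nonce, a CREATE bumps the deployer's nonce, and a newly created account
# starts at nonce 1 (one above its normal starting value of zero).
nonces = {"EOA": 100}

def bump(account: str) -> None:
    nonces[account] += 1

def spawn(new_contract: str) -> None:
    nonces[new_contract] = 1   # EIP-161: new account starts at 1

# Tx 1: EOA sends a contract-creation transaction deploying X.
bump("EOA"); spawn("X")
assert nonces == {"EOA": 101, "X": 1}

# Tx 2: EOA calls X, and X deploys Y via CREATE.
bump("EOA")                    # sender's transaction nonce
bump("X"); spawn("Y")          # CREATE increments X; Y starts at 1
assert nonces == {"EOA": 102, "X": 2, "Y": 1}
```

On zkSync Era, `spawn` would instead set the new contract's deployment nonce to 0, which is exactly the discrepancy described here.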
However, in zkSync Era there is a divergence from the Ethereum standard: the deployment nonce for a newly created contract starts at zero. This deviation from the EVM standard can impact factories that anticipate the addresses of child contracts and make decisions based on those predictions. Such factories might mistakenly assume that their nonce starts at 1, mirroring the EVM, leading to discrepancies between anticipated and actual addresses of the deployed child contracts.
It is advisable to increment the deployment nonce of a contract by one before invoking its constructor. Moreover, this contrast should be documented to provide clarity for developers.
```solidity
function _constructContract(
    address _sender,
    address _newAddress,
    bytes32 _bytecodeHash,
    bytes calldata _input,
    bool _isSystem,
    bool _callConstructor
) internal {
    NONCE_HOLDER_SYSTEM_CONTRACT.incrementDeploymentNonce(_newAddress);
    //...
}
```
Context
#0 - c4-pre-sort
2023-11-01T14:10:40Z
bytes032 marked the issue as low quality report
#1 - c4-pre-sort
2023-11-01T18:45:55Z
bytes032 marked the issue as primary issue
#2 - miladpiri
2023-11-09T11:28:04Z
Medium/Low. The impact can be high (depending on the context), and the probability is medium to high (any factory can be affected). Consider the case where a factory creates a child, the child also creates a second contract, and the factory has already transferred ETH to the second contract's address (assuming the child's nonce is one, leading it to predict the second contract's address wrongly). Funds are lost.
#3 - c4-sponsor
2023-11-09T11:28:19Z
miladpiri (sponsor) acknowledged
#4 - GalloDaSballo
2023-11-15T12:36:37Z
We have a historical record of awarding non-EVM equivalence that can cause damage as Med, so I'm inclined to maintain the severity. Will double check.
#5 - GalloDaSballo
2023-11-26T19:39:55Z
The Warden has shown a discrepancy in the CREATE opcode for Contracts deployed on the zkEVM
While impact should be low in many scenarios, this highlights a discrepancy between zkEVM and EVM, which is notable, for this reason Medium Severity seems most appropriate
#6 - c4-judge
2023-11-28T15:56:18Z
GalloDaSballo marked the issue as selected for report
#7 - HE1M
2023-12-01T12:55:00Z
@GalloDaSballo Thank you for your effort.
The identified issue could potentially affect numerous widely-used projects and libraries. As an illustration:
The create3 library facilitates EVM contract creation, resembling `CREATE2` but differing in that it excludes the contract `initCode` from the address derivation formula. The process involves employing the `CREATE2` method to deploy a new proxy contract, which subsequently deploys the child contract using `CREATE`. This keyless deployment approach removes reliance on the account owner who deployed the factory contract.

Examining the `CREATE3` library in solmate or solady, the child contract's address is computed based on the address of the proxy contract and its nonce. Notably, the nonce of the proxy contract is hardcoded as `hex"01"`. This choice aligns with EIP-161, which specifies that account creation transactions and the `CREATE` operation should increment the nonce beyond its initial value by one. However, in zkSync Era the nonce does not increase by one, so this mechanism unexpectedly does not work on zkSync Era as it does on the EVM.
Given that this library or a similar mechanism is widely used (as seen in AxelarNetwork and MeanFinance), any deviation from the expected behavior could impact numerous contracts dependent on the correct address of the child contract.
Consequently, I assert that the significance of this bug should be classified as high.
#8 - miladpiri
2023-12-04T16:22:36Z
@HE1M thanks.
Agree. This is an issue.
#9 - GalloDaSballo
2023-12-07T11:48:28Z
After reviewing the issue and consulting with another Judge
While the issue is notable, I believe that the finding is limited in its impact in the sense that it shows a discrepancy against the EVM
Integrators would be subject to an incorrect functionality which would be found rationally through the first usage or an integration test
In the case in which said factory was in scope, then a High Severity would have been appropriate
But in lack of it, the finding impacts the compatibility of zkSync with the EVM, meaning Medium Severity is most appropriate
#10 - vladbochok
2023-12-14T07:43:04Z
`1`, not `0`. I do think that the maximum impact is low, for the reason that only infra is affected; I would like to see a real contract deployed on Ethereum that would be affected, rather than very hypothetical assumptions about developers and users. The likelihood is also very low. All in all, I don't think this issue has been proven to have some severity.
🌟 Selected for report: HE1M
0 USDC - $0.00
There is an inconsistency between zkSync Era and the Ethereum Virtual Machine (EVM) in handling the deployment nonce, which can lead to significant consequences. In the EVM, the factory's nonce is increased at the beginning of child contract creation, and if the child contract's creation fails, the nonce is not rolled back. In zkSync Era, the deployment nonce increases during the deployment process, and if the child contract's constructor reverts, the entire transaction, including the increased deployment nonce, also reverts.
In zkSync Era, when a factory contract deploys another contract using low-level creation operations (i.e. using `CREATE` or `CREATE2` in assembly, instead of using `new`), it is expected that the deployment nonce of the factory will increment by one, akin to the behavior in the Ethereum Virtual Machine (EVM). However, a notable issue arises when the constructor of the child contract encounters a revert.
In the EVM, the nonce of the factory increases as soon as the `CREATE` operation begins, regardless of whether the child contract creation is ultimately successful. This can be seen in the `GETH` codebase, where the nonce of the factory is increased at line 431. The EVM then captures a snapshot at line 443 to manage any potential failures, allowing for a graceful revert at line 492 in case of unsuccessful child contract creation. In such cases, the returned address is `address(0)`, indicating a failed contract creation attempt.
https://github.com/ethereum/go-ethereum/blob/dc34fe8291bfcaefbce97f559e9610beffb2e470/core/vm/evm.go#L431
https://github.com/ethereum/go-ethereum/blob/dc34fe8291bfcaefbce97f559e9610beffb2e470/core/vm/evm.go#L443
https://github.com/ethereum/go-ethereum/blob/dc34fe8291bfcaefbce97f559e9610beffb2e470/core/vm/evm.go#L492
However, zkSync Era differs in its approach. When the `CREATE` opcode is employed, it invokes the `create` or `create2` function within the `ContractDeployer` contract.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/ContractDeployer.sol#L146
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/ContractDeployer.sol#L130
This action results in the incremented deployment nonce of the factory, and, in the final stages of contract creation, it calls the constructor of the child contract.
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/ContractDeployer.sol#L189
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/ContractDeployer.sol#L344
In the event of a revert in the child contract's constructor, the entire transaction in the `ContractDeployer` is reverted. Consequently, the incremented deployment nonce of the factory is also rolled back, and the returned address signifies an unsuccessful contract creation attempt by being set to `address(0)`.
This divergence in nonce management between zkSync Era and the EVM could have far-reaching implications, especially for factories that are deployed on zkSync Era and assume that the deployment nonce behaves in a manner consistent with the EVM.
Calling the function `test` triggers the internal function `deploy` three times. In each call, the child contract is deployed, and two values are returned: the predicted address of the to-be-deployed child and the address of the deployed child. The first and third deploy calls provide `false` as an argument to ensure that the constructor of the child doesn't revert. In the second deploy call, `true` is used as an argument, causing the constructor of the child to revert.
Comparing the predicted addresses with the deployed addresses reveals the following:
It's worth noting that the initial value of the variable `nonce` is set to one for the EVM PoC, while it is set to zero for the zkSync Era PoC. This difference is unrelated to this report and has been explained in a separate report.
In EVM:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.17;

contract Factory {
    uint256 nonce = 1;
    address public predictedAddress1;
    address public predictedAddress2;
    address public predictedAddress3;
    address public deployedAddress1;
    address public deployedAddress2;
    address public deployedAddress3;

    function test() public {
        (predictedAddress1, deployedAddress1) = deploy(false);
        (predictedAddress2, deployedAddress2) = deploy(true);
        (predictedAddress3, deployedAddress3) = deploy(false);
    }

    function deploy(bool _shouldRevert) internal returns (address, address) {
        bytes memory bytecode = type(Child).creationCode;
        bytecode = abi.encodePacked(bytecode, abi.encode(_shouldRevert));
        address addr;
        assembly {
            addr := create(0, add(bytecode, 0x20), mload(bytecode))
        }
        return (predict(), addr);
    }

    function predict() public returns (address) {
        address predicted = address(
            uint160(
                uint256(
                    keccak256(
                        abi.encodePacked(
                            bytes1(0xd6),
                            bytes1(0x94),
                            address(this),
                            uint8(nonce++)
                        )
                    )
                )
            )
        );
        return predicted;
    }
}

contract Child {
    constructor(bool _shouldRevert) {
        if (_shouldRevert) {
            revert();
        }
    }
}
```
The result for EVM is:
predictedAddress1: 0xbd5b354220B250DF257ed5e988Fe8FE81CdB6235
deployedAddress1: 0xbd5b354220B250DF257ed5e988Fe8FE81CdB6235
predictedAddress2: 0x02180dD815cA64898F6126f3911515B06D17acaD
deployedAddress2: 0x0000000000000000000000000000000000000000
predictedAddress3: 0x0A9C6D9d0AF27FC9F3F96196c3F8c89C79Df287D
deployedAddress3: 0x0A9C6D9d0AF27FC9F3F96196c3F8c89C79Df287D
In zkSync Era:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.17;

contract Factory {
    uint256 nonce = 0;
    address public predictedAddress1;
    address public predictedAddress2;
    address public predictedAddress3;
    address public deployedAddress1;
    address public deployedAddress2;
    address public deployedAddress3;

    function test() public {
        (predictedAddress1, deployedAddress1) = deploy(false);
        (predictedAddress2, deployedAddress2) = deploy(true);
        (predictedAddress3, deployedAddress3) = deploy(false);
    }

    function deploy(bool _shouldRevert) internal returns (address, address) {
        bytes memory bytecode = type(Child).creationCode;
        bytecode = abi.encodePacked(bytecode, abi.encode(_shouldRevert));
        address addr;
        assembly {
            addr := create(0, add(bytecode, 0x20), mload(bytecode))
        }
        return (predict(), addr);
    }

    function predict() public returns (address newAddress) {
        bytes32 hash = keccak256(
            bytes.concat(
                keccak256("zksyncCreate"),
                bytes32(uint256(uint160(address(this)))),
                bytes32(nonce++)
            )
        );
        newAddress = address(uint160(uint256(hash)));
    }
}

contract Child {
    constructor(bool _shouldRevert) {
        if (_shouldRevert) {
            revert();
        }
    }
}
```
The result for zkSync Era is:
predictedAddress1: 0xBADCfBd5E90eA6558580B8C6220b750E9438df7c
deployedAddress1: 0xBADCfBd5E90eA6558580B8C6220b750E9438df7c
predictedAddress2: 0xB7E8A1191E6Ef0F3A752eCaa050F0FF7BcaEAF51
deployedAddress2: 0x0000000000000000000000000000000000000000
predictedAddress3: 0xfBAE6B995fe81b48A5e389A81E8Af0ee2AaF7302
deployedAddress3: 0xB7E8A1191E6Ef0F3A752eCaa050F0FF7BcaEAF51
As you can see, `deployedAddress3` is equal to `predictedAddress2`, which is not correct.
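The root cause can be modeled without an EVM at all. The Python sketch below (a toy model: `toy_address` uses `sha256` as a stand-in for the real keccak-based CREATE address formulas) simulates both nonce policies, showing that when the failed nonce is rolled back, the next successful deployment lands on the address that was predicted for the failed one:

```python
import hashlib

def toy_address(deployer, nonce):
    # Stand-in for the real CREATE address derivation (which uses keccak).
    return hashlib.sha256(f"{deployer}:{nonce}".encode()).hexdigest()[:8]

def run(rollback_on_revert):
    """Deploy three children; the second constructor reverts."""
    nonce = 0
    deployed = []
    for should_revert in (False, True, False):
        addr = toy_address("factory", nonce)
        nonce += 1
        if should_revert:
            deployed.append(None)      # creation failed -> address(0)
            if rollback_on_revert:
                nonce -= 1             # zkSync Era: nonce bump reverts too
        else:
            deployed.append(addr)
    return deployed

evm = run(rollback_on_revert=False)
era = run(rollback_on_revert=True)
assert evm[2] != toy_address("factory", 1)   # EVM: nonce 1 stays burned
assert era[2] == toy_address("factory", 1)   # Era: reuses the failed nonce
```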
A potential solution might involve invoking the `_constructContract` function externally within a try/catch block. By doing so, if `_constructContract` reverts (as occurred with the child contract), the deployment nonce won't be rolled back. However, implementing this kind of modification would necessitate reviewing and changing other parts of the code as well.
```solidity
function _performDeployOnAddress(
    bytes32 _bytecodeHash,
    address _newAddress,
    AccountAbstractionVersion _aaVersion,
    bytes calldata _input
) internal {
    // ...
    try this.constructContract(msg.sender, _newAddress, _bytecodeHash, _input, false, true) {
        //...
    } catch {
        //...
    }
}
```
Context
#0 - c4-pre-sort
2023-11-02T15:42:35Z
141345 marked the issue as sufficient quality report
#1 - c4-sponsor
2023-11-06T10:59:38Z
miladpiri (sponsor) acknowledged
#2 - c4-sponsor
2023-11-06T10:59:47Z
miladpiri marked the issue as disagree with severity
#3 - miladpiri
2023-11-06T11:01:17Z
This can have high impact (it depends on the context), and the probability is medium to high (factory contracts are implemented frequently). Medium severity can be fair.
#4 - c4-sponsor
2023-11-08T11:52:22Z
miladpiri (sponsor) disputed
#5 - c4-sponsor
2023-11-08T11:52:30Z
miladpiri (sponsor) confirmed
#6 - c4-sponsor
2023-11-08T11:52:43Z
miladpiri (sponsor) acknowledged
#7 - c4-judge
2023-11-28T12:52:35Z
GalloDaSballo changed the severity to 2 (Med Risk)
#8 - GalloDaSballo
2023-11-28T12:53:51Z
At this time, I am marking the finding as Med for EVM non equivalence, please do raise concerns if you can show a specific way to cause loss of funds or broken behaviour for contracts that are widely used
#9 - c4-judge
2023-11-28T12:53:57Z
GalloDaSballo marked the issue as selected for report
#10 - HE1M
2023-12-01T13:07:58Z
@GalloDaSballo Thanks for your effort
This issue may pose a challenge when a factory contract, sharing the same address, is deployed on different chains and is intended to generate multiple proxy contracts with identical addresses (because the address of the created proxy contract relies on the factory's address and its nonce).
Consider a situation where a factory is deployed with the same address on both Ethereum and zkSync Era (its forks, or other chains). The address of the proxy contract generated by this factory is dependent on the factory's address and its nonce. If, for any reason, a proxy creation fails (for instance, encountering a REVERT during the execution of its `initcode` or running out of gas), the nonce of the factory will increase on Ethereum but not on zkSync Era. Consequently, the nonces of the factory on both chains will no longer be synchronized, leading to divergent addresses for proxies created by this factory on the two chains going forward.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.20;

contract Factory {
    function deploy(bytes[] memory _proxy) external {
        bytes memory bytecode;
        for (uint256 i = 0; i < _proxy.length; ++i) {
            bytecode = _proxy[i];
            assembly {
                pop(create(0, add(bytecode, 0x20), mload(bytecode)))
            }
        }
    }

    function proxyAddress(uint256 _nonce) public view returns (address) {
        address predicted = address(
            uint160(
                uint256(
                    keccak256(
                        abi.encodePacked(
                            bytes1(0xd6),
                            bytes1(0x94),
                            address(this),
                            uint8(_nonce)
                        )
                    )
                )
            )
        );
        return predicted;
    }
}
```
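The divergence can be sketched with a hypothetical model (illustrative Python, not zkSync code): once one chain rolls the deployment nonce back on a failed CREATE and the other does not, the factories' nonces, and hence all later proxy addresses derived from `(factory, nonce)`, never line up again.

```python
# Models the factory nonce on two chains after the same sequence of
# deployments, where one proxy creation fails (REVERT in initcode or
# out-of-gas). Ethereum keeps the incremented nonce after the failure;
# a chain that rolls the deployment nonce back ends up one nonce behind.

def final_nonce(deploy_results, rollback_on_failure):
    nonce = 0
    for ok in deploy_results:
        if ok or not rollback_on_failure:
            nonce += 1
    return nonce

results = [True, False, True, True]  # second creation fails on both chains

eth_nonce = final_nonce(results, rollback_on_failure=False)
era_nonce = final_nonce(results, rollback_on_failure=True)

assert eth_nonce == 4 and era_nonce == 3
# The next proxy is derived from (factory, 4) on Ethereum but from
# (factory, 3) on the rollback chain: the addresses diverge permanently.
```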
Given the frequent implementation of factory contracts (whether for ensuring the consistency of proxy contract addresses across different chains or for other use cases), the impact classification of this issue can be high.
#11 - 0xunforgiven
2023-12-03T13:50:56Z
Hey @HE1M, Beautiful finding as always.
Regarding your above comment, which says:
Consider a situation where a factory is deployed with the same address on both Ethereum and zkSync Era (its forks, or other chains).
I want to mention that, according to the docs, it's currently impossible for an Ethereum contract and a zkSync contract to have the same address. docs:
For most of the rollups the address aliasing needs to prevent cross-chain exploits that would otherwise be possible if we simply reused the same L1 addresses as the L2 sender. In zkSync Era address derivation rule is different from the Ethereum, so cross-chain exploits are already impossible. However, zkSync Era may add full EVM support in the future, so applying address aliasing leaves room for future EVM compatibility.
#12 - HE1M
2023-12-03T21:28:17Z
Hey @0xunforgiven , appreciate your point.
You are correct that the address derivation in zkSync Era differs from Ethereum. However, my key point is that developers considering the idea of using a factory to generate proxies with identical addresses (on zkSync Era, its forks, etc.) will encounter this nonce desynchronization issue.
#13 - miladpiri
2023-12-04T16:26:48Z
Agree. This can be an issue.
#14 - GalloDaSballo
2023-12-05T13:17:40Z
I believe that the finding is valid and Medium. It can cause issues in some specific scenarios, which are possible but not as common, and they are implementation-reliant. The issues are not guaranteed; the behaviour is "sometimes" wrong, meaning that Medium severity is the most appropriate.
If a contract factory was in scope, I can see the argument for raising the severity, but a create2 or create3 deployer, as well as common factories with mappings for children, would work as intended, leading me to confirm Medium Severity as the most appropriate
🌟 Selected for report: HE1M
Also found by: Audittens, OffsideLabs, zkrunner
0 USDC - $0.00
Malicious operators could exploit this issue to overcharge users by artificially increasing the length of the dictionary without any benefit to the encoding process. As a result, users may end up paying higher gas costs for message publication in L1, leading to an adverse financial impact. This issue undermines the intended efficiency and cost-effectiveness of the compression mechanism.
When processing L2 transactions, it is essential to mark the user's provided `factoryDeps` on L2 and subsequently publish them to L1.
bootloader::processL2Tx >> bootloader::l2TxExecution >> bootloader::ZKSYNC_NEAR_CALL_markFactoryDepsL2 >> bootloader::sendCompressedBytecode >> Compressor::publishCompressedBytecode
https://github.com/code-423n4/2023-10-zksync/blob/main/code/system-contracts/contracts/Compressor.sol#L54
This process involves compression to reduce gas consumption during publishing to L1. The sequence of actions is as follows:

The remaining gas allocated by the user is utilized to initiate the publication of `factoryDeps`.
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/bootloader/bootloader.yul#L1242
Compressed data is sent to L1. https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/contracts/Compressor.sol#L79
Gas consumption is determined by the length of the transmitted message. https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/contracts/L1Messenger.sol#L153
However, this process could be exploited by a malicious operator to overcharge users. Through manipulation of the compression method, the operator could inflate gas costs. In this context, not only may the compressed bytecode fail to be shorter than the original, but it could even become longer.
To illustrate, consider an example where the bytecode to be compressed and published is `ABCB` (with each character representing 8 bytes). Notably, the second and fourth 8-byte segments are identical.

In an ideal compression scenario, the `_rawCompressedData` would appear as: `0x0003ABC0000000100020001`. Here, `ABC` forms the `dictionary`, and `encodedData` is `0x0000000100020001`. The prefix `0x0003` indicates the dictionary's length in 8-byte segments, while the `encodedData` references the dictionary's segments in this order: `0, 1, 2, 1`, which corresponds to `A, B, C, B`, respectively.
However, a malicious operator could artificially extend the dictionary length. They might modify `_rawCompressedData` to be: `0x0004ABCX0000000100020001`. In this scenario, `ABCX` constitutes the `dictionary`, while `encodedData` remains `0x0000000100020001`. This essentially introduces an extra 8-byte `X` to the dictionary, which serves no functional purpose and just increases the dictionary length. The `encodedData` still references the same segments, `0, 1, 2, 1`, without employing the added `X`.
In summary, this manipulation increases the dictionary's length by appending an unnecessary chunk. While not functional, this lengthening of the dictionary results in higher charges for users. Importantly, the dictionary remains valid, as it is still possible to decode the original bytecode from `_rawCompressedData` using the `encodedData`.
The function `publishCompressedBytecode` should be revised as follows, where an array named `usedDictionaryIndex` is introduced to monitor the utilization of dictionary chunks. Subsequently, it validates whether all chunks in the dictionary have been utilized.
```solidity
function publishCompressedBytecode(
    bytes calldata _bytecode,
    bytes calldata _rawCompressedData
) external payable onlyCallFromBootloader returns (bytes32 bytecodeHash) {
    unchecked {
        (bytes calldata dictionary, bytes calldata encodedData) = _decodeRawBytecode(_rawCompressedData);

        require(dictionary.length % 8 == 0, "Dictionary length should be a multiple of 8");
        require(dictionary.length <= 2 ** 16 * 8, "Dictionary is too big");
        require(
            encodedData.length * 4 == _bytecode.length,
            "Encoded data length should be 4 times shorter than the original bytecode"
        );

        // This code is added
        bool[] memory usedDictionaryIndex = new bool[](dictionary.length / 8);
        //////////////////////

        for (uint256 encodedDataPointer = 0; encodedDataPointer < encodedData.length; encodedDataPointer += 2) {
            uint256 indexOfEncodedChunk = uint256(encodedData.readUint16(encodedDataPointer)) * 8;
            require(indexOfEncodedChunk < dictionary.length, "Encoded chunk index is out of bounds");

            // This code is added
            // indexOfEncodedChunk is a byte offset, so divide by 8 to get the chunk index
            usedDictionaryIndex[indexOfEncodedChunk / 8] = true;
            //////////////////////

            uint64 encodedChunk = dictionary.readUint64(indexOfEncodedChunk);
            uint64 realChunk = _bytecode.readUint64(encodedDataPointer * 4);
            require(encodedChunk == realChunk, "Encoded chunk does not match the original bytecode");
        }

        // This code is added
        for (uint256 i = 0; i < usedDictionaryIndex.length; ++i) {
            require(usedDictionaryIndex[i], "the dictionary includes chunks that are useless");
        }
        //////////////////////
    }

    bytecodeHash = Utils.hashL2Bytecode(_bytecode);

    L1_MESSENGER_CONTRACT.sendToL1(_rawCompressedData);
    KNOWN_CODE_STORAGE_CONTRACT.markBytecodeAsPublished(bytecodeHash);
}
```
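The used-chunk check can be sketched in Python (hypothetical `validate_dictionary` helper mirroring the proposed `usedDictionaryIndex` logic, not the actual contract):

```python
# Rejects dictionaries that contain chunks never referenced by the
# encoded data, so a padded dictionary cannot inflate the payload.

def validate_dictionary(dictionary, encoded_indices):
    used = [False] * len(dictionary)
    for i in encoded_indices:
        if i >= len(dictionary):
            raise ValueError("encoded chunk index is out of bounds")
        used[i] = True
    if not all(used):
        raise ValueError("the dictionary includes chunks that are useless")

A, B, C, X = (bytes([c]) * 8 for c in b"ABCX")

# Honest payload: every dictionary chunk is referenced -> passes.
validate_dictionary([A, B, C], [0, 1, 2, 1])

# Padded payload: chunk X is never referenced -> rejected.
try:
    validate_dictionary([A, B, C, X], [0, 1, 2, 1])
    padded_accepted = True
except ValueError:
    padded_accepted = False
assert not padded_accepted
```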
Context
#0 - c4-pre-sort
2023-10-31T09:39:53Z
bytes032 marked the issue as sufficient quality report
#1 - miladpiri
2023-11-06T12:50:38Z
Valid report. This report is about providing uselessly long compressed data, leading to consuming more of the user's gas (if it consumes too much gas, nothing is left for the execution, so it will fail).
#2 - c4-sponsor
2023-11-06T12:50:44Z
miladpiri (sponsor) confirmed
#3 - c4-judge
2023-11-25T10:16:54Z
GalloDaSballo marked the issue as primary issue
#4 - c4-judge
2023-11-28T12:48:49Z
GalloDaSballo marked the issue as selected for report
#5 - GalloDaSballo
2023-11-28T12:49:18Z
The Warden has shown a way for the Operator to burn excess gas as a strategy to maximize profits; since this directly conflicts with the refund system, Medium seems appropriate.
🌟 Selected for report: erebus
Also found by: 0xTheC0der, Bauchibred, HE1M, Jorgect, Udsen, alexfilippov314, chaduke, evmboi32, hash, ladboy233, lsaudit, oakcobalt, rvierdiiev, ustas, wangxx2026, xuwinnie, zero-idea, zkrunner
0 USDC - $0.00
Better to rename `newBlockInfo` to `newBatchInfo`.
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/contracts/SystemContext.sol#L431C26-L431C39
The comment should change to `COMPRESSED_BYTECODES_BEGIN_SLOT + 32`:
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/bootloader/bootloader.yul#L292
The document is not reflecting the same structure defined in Bootloader: https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/bootloader/bootloader.yul#L367 https://github.com/code-423n4/2023-10-zksync/blob/main/docs/Smart%20contract%20Section/System%20contracts%20bootloader%20description.md#transactions-meta-descriptions
This finding indicates that when `currentL2BlockNumber` is equal to 0 and `currentL2BlockTimestamp` is also 0, two functions, namely `_upgradeL2Blocks` and `_setNewL2BlockData`, invoke the `_setL2BlockHash` function.
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/contracts/SystemContext.sol#L331
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/contracts/SystemContext.sol#L226C13-L226C28
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/contracts/SystemContext.sol#L290
This redundancy can be optimized by eliminating the `_setL2BlockHash` function call within the `_upgradeL2Blocks` function.
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/contracts/SystemContext.sol#L226
The comment concerning the `isETHCall` parameter is currently positioned below the `gasPerPubdata` section. To enhance clarity and avoid confusion, it is advisable to move line 550 to line 549.
https://github.com/code-423n4/2023-10-zksync/blob/24b4b0c1ea553106a194ef36ad4eb05b3b50275c/code/system-contracts/bootloader/bootloader.yul#L550
To be more efficient, the following change is recommended, because the initial value of `ret` is zero.
ret := overheadForCircuits
The comment needs correction. It should be revised to the following:
However, the default account **cannot** initiate a transaction ...
The comment requires an update. It should be corrected to the following format, as the comment currently suggests that the length of the compressed data is stored in 2 bytes, whereas the code implementation actually uses 3 bytes.

`3 bytes total len of compressed`
https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/contracts/L1Messenger.sol#L281C37-L281C68 https://github.com/code-423n4/2023-10-zksync/blob/72f5f16ed4ba94c7689fe38fcb0b7d27d2a3f135/code/system-contracts/contracts/L1Messenger.sol#L291
The modifier `onlySystemCall` is redundant, as `msg.sender` is enforced to be `DEPLOYER_SYSTEM_CONTRACT`:
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/NonceHolder.sol#L135-L136
Renaming from `Block` to `Batch` is recommended:
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/L1Messenger.sol#L131
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/L1Messenger.sol#L83
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/SystemContext.sol#L373
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/SystemContext.sol#L455
The unused variable `logIdInMerkleTree` is better deleted:
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/L1Messenger.sol#L112
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/L1Messenger.sol#L88C9-L88C27
When requesting an L2 transaction, if `msg.sender != tx.origin`, then `sender = aliased(msg.sender)`.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Mailbox.sol#L249
If `_refundRecipient == address(0)`, then `_refundRecipient == sender`.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Mailbox.sol#L310
In this case, it is not necessary to recheck whether `_refundRecipient` (which is equal to `sender`) is a contract, because it is highly improbable that `sender` is a contract, since it has already been aliased (the probability is on the order of a hash collision).
It's better to revise the code as follows:
```solidity
if (refundRecipient != sender && refundRecipient.code.length > 0) {
    refundRecipient = AddressAliasHelper.applyL1ToL2Alias(refundRecipient);
}
```
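For reference, L1-to-L2 address aliasing adds a fixed offset modulo 2^160 (a minimal Python sketch; the offset constant is the one used by AddressAliasHelper-style aliasing and should be double-checked against the actual contract). Because `sender` was already aliased when `msg.sender != tx.origin`, it is not expected to coincide with any deployed contract's address, which is why rechecking its code size adds nothing.

```python
# AddressAliasHelper-style aliasing: shift an L1 address by a fixed
# offset in the 160-bit address space (and back).

OFFSET = 0x1111000000000000000000000000000000001111

def apply_l1_to_l2_alias(l1_address: int) -> int:
    return (l1_address + OFFSET) % 2**160

def undo_l1_to_l2_alias(l2_address: int) -> int:
    return (l2_address - OFFSET) % 2**160

l1_contract = 0xB7E8A1191E6EF0F3A752ECAA050F0FF7BCAEAF51
aliased = apply_l1_to_l2_alias(l1_contract)

# The aliased address differs from the original, and the mapping is
# invertible, so an already-aliased sender is effectively disjoint from
# the set of real contract addresses.
assert aliased != l1_contract
assert undo_l1_to_l2_alias(aliased) == l1_contract
```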
Wrong comment. It should be 4 + 20 + 32 + 20 + _additionalData.length >= 76 (bytes)
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Mailbox.sol#L418
The event name should be revised from `Block` to `Batch`.
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/interfaces/IExecutor.sol#L94-L106
Missing interface `extendedAccountVersion` in `IContractDeployer`:
https://github.com/matter-labs/era-system-contracts/blob/main/contracts/ContractDeployer.sol#L39
#0 - bytes032
2023-10-30T09:16:35Z
NC
NC
NC
NC
NC
NC
NC
NC
NC
NC
NC
NC
NC
NC
NC
#1 - c4-judge
2023-12-10T20:18:29Z
GalloDaSballo marked the issue as grade-a