Platform: Code4rena
Start Date: 02/10/2023
Pot Size: $1,100,000 USDC
Total HM: 28
Participants: 64
Period: 21 days
Judge: GalloDaSballo
Total Solo HM: 13
Id: 292
League: ETH
Rank: 27/64
Findings: 3
Award: $1,604.77
🌟 Selected for report: 1
🚀 Solo Findings: 0
🌟 Selected for report: zero-idea
Also found by: 0x1337, 0xTheC0der, 0xstochasticparrot, Audittens, HE1M, Jeiwan, erebus, gumgumzum, leviticus1129, lsaudit, quarkslab, rvierdiiev
853.2232 USDC - $853.22
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/Constants.sol#L27-L28 https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/Constants.sol#L35 https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/system-contracts/contracts/AccountCodeStorage.sol#L89-L95
Two new precompiles have been introduced (`ECADD` and `ECMUL`) without updating the pointer that indicates the current maximum precompile address (`CURRENT_MAX_PRECOMPILE_ADDRESS`), causing these new precompiles to not be considered as precompiles in the system.
The updated revision of zkSync Era has introduced two new precompiles, `ECADD` and `ECMUL`, at addresses `0x06` and `0x07`, respectively.
```solidity
27: address constant ECADD_SYSTEM_CONTRACT = address(0x06);
28: address constant ECMUL_SYSTEM_CONTRACT = address(0x07);
```
Their implementations are available in `code/system-contracts/contracts/precompiles/EcAdd.yul` and `code/system-contracts/contracts/precompiles/EcMul.yul`.
Precompile addresses are tracked by a constant named `CURRENT_MAX_PRECOMPILE_ADDRESS`, which points to the maximum address that is a precompile. The intention here is to have a way to determine whether an address is a precompile, i.e. precompile <=> address <= `CURRENT_MAX_PRECOMPILE_ADDRESS`.
```solidity
35: uint256 constant CURRENT_MAX_PRECOMPILE_ADDRESS = uint256(uint160(SHA256_SYSTEM_CONTRACT));
```
Note that the implementation still points to the old maximum precompile, which was `SHA256` at address `0x02`.
This means that both `ECADD` and `ECMUL` won't be considered precompiles, because their addresses are greater than `CURRENT_MAX_PRECOMPILE_ADDRESS`, breaking the invariant.
In particular, this impacts the `getCodeHash()` function in AccountCodeStorage. The function relies on the assumption that a precompile address is at most the `CURRENT_MAX_PRECOMPILE_ADDRESS` constant in order to return the hash of the empty string (`EMPTY_STRING_KECCAK`).
```solidity
89:     function getCodeHash(uint256 _input) external view override returns (bytes32) {
90:         // We consider the account bytecode hash of the last 20 bytes of the input, because
91:         // according to the spec "If EXTCODEHASH of A is X, then EXTCODEHASH of A + 2**160 is X".
92:         address account = address(uint160(_input));
93:         if (uint160(account) <= CURRENT_MAX_PRECOMPILE_ADDRESS) {
94:             return EMPTY_STRING_KECCAK;
95:         }
        ...
```
As both `ECADD` and `ECMUL` addresses are greater than `CURRENT_MAX_PRECOMPILE_ADDRESS`, the implementation of `getCodeHash()` will return zero for them instead of the expected `EMPTY_STRING_KECCAK` constant returned for other precompiles.
The following test asks the AccountCodeStorage contract for the code hash of the new precompiles. The expected value should be the keccak hash of the empty string.
```typescript
describe('AccountCodeStorage', function () {
  it('fails to return correct hash for ECADD precompile', async () => {
    expect(await accountCodeStorage.getCodeHash('0x0000000000000000000000000000000000000006')).to.be.eq(
      EMPTY_STRING_KECCAK
    );
  });

  it('fails to return correct hash for ECMUL precompile', async () => {
    expect(await accountCodeStorage.getCodeHash('0x0000000000000000000000000000000000000007')).to.be.eq(
      EMPTY_STRING_KECCAK
    );
  });
});
```
Running the tests shows both expectations failing:
```
Audit tests
  AccountCodeStorage
    1) fails to return correct hash for ECADD precompile
    2) fails to return correct hash for ECMUL precompile

  0 passing (256ms)
  2 failing

  1) Audit tests
       AccountCodeStorage
         fails to return correct hash for ECADD precompile:

      AssertionError: expected '0x00000000000000000000000000000000000…' to equal '0xc5d2460186f7233c927e7db2dcc703c0e50…'
      + expected - actual

      -0x0000000000000000000000000000000000000000000000000000000000000000
      +0xc5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470

      at Context.<anonymous> (test/Audit.spec.ts:33:110)
      at processTicksAndRejections (node:internal/process/task_queues:96:5)

  2) Audit tests
       AccountCodeStorage
         fails to return correct hash for ECMUL precompile:

      AssertionError: expected '0x00000000000000000000000000000000000…' to equal '0xc5d2460186f7233c927e7db2dcc703c0e50…'
      + expected - actual

      -0x0000000000000000000000000000000000000000000000000000000000000000
      +0xc5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470

      at Context.<anonymous> (test/Audit.spec.ts:39:110)
      at processTicksAndRejections (node:internal/process/task_queues:96:5)
```
Update the `CURRENT_MAX_PRECOMPILE_ADDRESS` pointer to include the new precompile addresses.
```diff
- uint256 constant CURRENT_MAX_PRECOMPILE_ADDRESS = uint256(uint160(SHA256_SYSTEM_CONTRACT));
+ uint256 constant CURRENT_MAX_PRECOMPILE_ADDRESS = uint256(uint160(ECMUL_SYSTEM_CONTRACT));
```
Other
#0 - c4-pre-sort
2023-10-31T11:36:33Z
bytes032 marked the issue as duplicate of #142
#1 - c4-pre-sort
2023-10-31T11:36:45Z
bytes032 marked the issue as sufficient quality report
#2 - c4-judge
2023-11-23T19:30:57Z
GalloDaSballo marked the issue as satisfactory
#3 - c4-judge
2023-11-23T19:38:50Z
GalloDaSballo marked the issue as selected for report
#4 - GalloDaSballo
2023-11-23T19:39:07Z
After more thinking, this finding has a test that is useful + mitigation suggestion
656.3255 USDC - $656.33
https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Mailbox.sol#L276-L279 https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/bridge/L1WethBridge.sol#L170-L173 https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/bridge/L1WethBridge.sol#L185 https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Mailbox.sol#L279 https://github.com/code-423n4/2023-10-zksync/blob/main/code/contracts/ethereum/contracts/zksync/facets/Mailbox.sol#L261
The `AllowList` establishes token limits for deposits.
It is expected that an ETH deposit limit can be set at any time, as the Mailbox verifies this ETH deposit limit whenever an L1->L2 transaction is requested.
The limit is set per `address` that requests an L1->L2 transaction, and is designed to impose limits on external actors.
The problem is that zkSync protocol bridges have the same limits as any other external user, and will quickly reach the maximum value through normal usage, or intentionally, by an adversary trying to exploit this. ETH is used on L1->L2 txs by the bridges to pay fees, but it is most notable in the case of the `L1WethBridge`, where ETH is also converted from WETH to send to L2.
Affected bridges will be in a DOS state where no bridge deposit from L1 to L2 will be possible for any user.
The only quick mitigation by the protocol would be to remove the ETH deposit limit (nullifying its functionality), until an upgrade is done to actually fix the issue.
In the meantime the bridge functionality to deposit assets will be out of service.
Assessing the severity as Medium considering "function of the protocol or its availability could be impacted". The likelihood is high, as the issue can be triggered at any time; the code already accounts for this limit.
zkSync bridges deposit assets to L2 via an L1->L2 transaction.
In the case of the `L1WethBridge`, for example, it first converts WETH to ETH and then requests an L1->L2 transaction via the `Mailbox`, with the `msg.value` being the sum of the converted ETH + fees.
The `Mailbox` then checks for ETH limits and reverts the transaction if the depositor reaches the limit.
The `Mailbox` considers this `depositor` to be the `msg.sender` of the `requestL2Transaction()` call, which in the case of bridge deposits is the bridge itself.
This means that the WETH bridge, whose main function is to convert WETH to ETH and bridge it to L2 for all users, will have the same ETH limit as any normal user that sends ETH directly via the `Mailbox` to pay fees for L1->L2 transactions.
Once the limit is reached by the bridge, the deposit functionality will enter a DoS state.
Add this test to `code/contracts/ethereum/test/foundry/unit/concrete/Bridge/L1WethBridge/Deposit.t.sol` and run `forge test --mt "test_DepositDOS"` to verify how bridges are affected by deposits from different users and enter a DoS state for everyone:
```solidity
function test_DepositDOS() public {
    // Set the deposit limit to 10 ETH
    // This will affect all addresses, including the bridge!
    vm.prank(owner);
    allowList.setDepositLimit(address(0), true, 10 ether);

    uint256 fees = 0.01 ether;

    // The first depositor deposits 9 ETH and the tx succeeds
    address depositor1 = makeAddr("depositor1");
    deal(depositor1, 10 ether);
    vm.startPrank(depositor1);

    uint256 amount1 = 9 ether;
    l1Weth.deposit{value: amount1}();
    l1Weth.approve(address(bridgeProxy), amount1);

    bytes memory depositCallData1 = abi.encodeWithSelector(
        bridgeProxy.deposit.selector,
        depositor1,
        bridgeProxy.l1WethAddress(),
        amount1,
        1000000,
        REQUIRED_L2_GAS_PRICE_PER_PUBDATA,
        depositor1
    );
    (bool success1, ) = address(bridgeProxy).call{value: fees}(depositCallData1);
    assertEq(success1, true);

    // The second depositor tries to deposit only 2 ETH and the tx reverts
    // The bridge is now in a DOS state, not allowing any additional deposits
    address depositor2 = makeAddr("depositor2");
    deal(depositor2, 10 ether);
    changePrank(depositor2);

    uint256 amount2 = 2 ether;
    l1Weth.deposit{value: amount2}();
    l1Weth.approve(address(bridgeProxy), amount2);

    bytes memory depositCallData2 = abi.encodeWithSelector(
        bridgeProxy.deposit.selector,
        depositor2,
        bridgeProxy.l1WethAddress(),
        amount2,
        1000000,
        REQUIRED_L2_GAS_PRICE_PER_PUBDATA,
        depositor2
    );
    (bool success2, ) = address(bridgeProxy).call{value: fees}(depositCallData2);
    assertEq(success2, false);
}
```
Depending on protocol needs, limits for bridges may be removed completely, or special limits may be applied.
Here's a suggested implementation if the decision is to completely remove ETH limits for bridges:
```diff
 function _verifyDepositLimit(address _depositor, uint256 _amount) internal {
+    if (msg.sender == address(wethBridge) || msg.sender == address(erc20Bridge)) return;
     IAllowList.Deposit memory limitData = IAllowList(s.allowList).getTokenDepositLimitData(address(0)); // address(0) denotes the ETH
     if (!limitData.depositLimitation) return; // no deposit limitation is placed for ETH
     require(s.totalDepositedAmountPerUser[_depositor] + _amount <= limitData.depositCap, "d2");
     s.totalDepositedAmountPerUser[_depositor] += _amount;
 }
```
DoS
#0 - c4-pre-sort
2023-10-31T14:30:43Z
bytes032 marked the issue as duplicate of #246
#1 - c4-pre-sort
2023-10-31T14:30:49Z
bytes032 marked the issue as sufficient quality report
#2 - c4-judge
2023-11-24T19:26:14Z
GalloDaSballo marked the issue as satisfactory
🌟 Selected for report: erebus
Also found by: 0xTheC0der, Bauchibred, HE1M, Jorgect, Udsen, alexfilippov314, chaduke, evmboi32, hash, ladboy233, lsaudit, oakcobalt, rvierdiiev, ustas, wangxx2026, xuwinnie, zero-idea, zkrunner
95.2225 USDC - $95.22
Total of 18 issues:
ID | Issue |
---|---|
L-1 | Extra notifyExecutionResult gas cost is charged to users on L1->L2 transactions processing |
L-2 | if DUMMY_VERIFIER == true block in Executor introduces risk of halting batch proving in production |
L-3 | Refund recipient addresses not aliased if transactions are requested on constructor |
L-4 | Double alias conversion in L1 bridge deposits |
L-5 | Wrong validation on max number of numberOfL2ToL1Logs |
L-6 | isFacetFreezable() returns false for non-existing facets instead of reverting |
L-7 | Sending many logs via a single L1->L2 transactions can halt the publishing of pubdata |
L-8 | Generating a large pubdata via a single L1->L2 transaction can halt the publishing to L1 operation |
L-9 | Upgrade OpenZeppelin dependencies |
L-10 | Missing check for address(0) for the allowList in DiamondInit::initialize() |
L-11 | Missing getters in Getters.sol |
L-12 | Prevent WETH deposits in L1ERC20Bridge |
L-13 | Facet state is not entirely cleared when last selector is removed |
L-14 | Shadow operations will ultimately need to be revealed on-chain |
L-15 | Potential gas bomb in Governance execution |
L-16 | Verifier params can be accidentally skipped during upgrades |
L-17 | Governance contract does not implement ERC721 or ERC1155 receivers |
L-18 | Potential missing validation in SystemContext::publishTimestampDataToL1() |
Total of 5 issues:
ID | Issue |
---|---|
NC-1 | Compressor uses hardcoded values instead of designated constants |
NC-2 | Wrong documentation in _parseL2WithdrawalMessage |
NC-3 | Inconsistencies in "State Diffs Header" Docs |
NC-4 | Rename block -> batch as part of the codebase refactor |
NC-5 | Update forceDeployOnAddresses() NatSpec to reflect its current permissions |
### L-1 Extra `notifyExecutionResult` gas cost is charged to users on L1->L2 transactions processing

`getExecuteL1TxAndGetRefund()` in the `Bootloader` calculates the gas costs of an L1->L2 transaction execution to set a `potentialRefund`.
Note how `notifyExecutionResult()` is executed in between the `gasBeforeExecution` snapshot and the `gasSpentOnExecution` calculation:
```yul
let gasBeforeExecution := gas()
success := ZKSYNC_NEAR_CALL_executeL1Tx(
    callAbi,
    txDataOffset
)
@> notifyExecutionResult(success)
let gasSpentOnExecution := sub(gasBeforeExecution, gas())
potentialRefund := sub(gasForExecution, gasSpentOnExecution)
```
Now note how this value is not used to calculate the L2 transaction execution cost (`gasSpentOnExecute`) in `l2TxExecution()`:
```yul
let gasBeforeExecute := gas()
// for this one, we don't care whether or not it fails.
success := ZKSYNC_NEAR_CALL_executeL2Tx(
    executeABI,
    txDataOffset
)
gasSpentOnExecute := add(gasSpentOnFactoryDeps, sub(gasBeforeExecute, gas()))
}
@> notifyExecutionResult(success)
```
Gas costs of `notifyExecutionResult()` might not be significant for the refund on a single user's transaction, but they do make a difference, in the form of smaller refunds, for an operator that processes a huge number of transactions.
Be consistent in the way gas costs are charged for executions of L2 and L1->L2 transactions. Here's a recommended diff for omitting `notifyExecutionResult()` gas costs in L1->L2 transactions:
```diff
 let gasBeforeExecution := gas()
 success := ZKSYNC_NEAR_CALL_executeL1Tx(
     callAbi,
     txDataOffset
 )
-notifyExecutionResult(success)
 let gasSpentOnExecution := sub(gasBeforeExecution, gas())
+notifyExecutionResult(success)
 potentialRefund := sub(gasForExecution, gasSpentOnExecution)
```
### L-2 `if DUMMY_VERIFIER == true` block in `Executor` introduces risk of halting batch proving in production

The `proveBatches()` function in the `Executor` has a conditional block that is added at build time depending on the Hardhat configuration.
The conditional inclusion is driven by comments, meaning that if the `DUMMY_VERIFIER` condition is not processed, all of the following code will be included in the build. In the case of the `Executor`, it will then assert a wrong `chainId` and revert on any attempt to prove batches:
```solidity
// #if DUMMY_VERIFIER

// Additional level of protection for the mainnet
assert(block.chainid != 1);
/// ...
```
Note how this was handled differently in the codebase of the previous C4 audit. There, the condition is `#if DUMMY_VERIFIER == false`, meaning that in case of any error the production code will be injected, but no test/playground code, making it safer and preventing any production issue:
```solidity
// #if DUMMY_VERIFIER == false
bool successVerifyProof = s.verifier.verify_serialized_proof(proofPublicInput, _proof.serializedProof);
require(successVerifyProof, "p"); // Proof verification fail

// Verify the recursive part that was given to us through the public input
bool successProofAggregation = _verifyRecursivePartOfProof(_proof.recurisiveAggregationInput);
require(successProofAggregation, "hh"); // Proof aggregation must be valid
// #endif
```
Given the high severity of halting batch proving in production, it is recommended to only use `#if DUMMY_VERIFIER == false` conditionals, so that only code that is safe for a production environment is included by default.
### L-3 Refund recipient addresses not aliased if transactions are requested on constructor

The `Mailbox` does the following check to alias a `refundRecipient` address during `requestL2Transaction()`.
`code.length` is `0` during the `constructor` execution of a smart contract. So, if a contract requests an L2 transaction during its constructor execution and sets itself as the `refundRecipient`, its address will not be aliased.
```solidity
// If the `_refundRecipient` is not provided, we use the `_sender` as the recipient.
address refundRecipient = _refundRecipient == address(0) ? _sender : _refundRecipient;
// If the `_refundRecipient` is a smart contract, we apply the L1 to L2 alias to prevent foot guns.
if (refundRecipient.code.length > 0) {
    refundRecipient = AddressAliasHelper.applyL1ToL2Alias(refundRecipient);
}
```
The same happens with the bridges:
```solidity
address refundRecipient = _refundRecipient;
if (_refundRecipient == address(0)) {
    refundRecipient = msg.sender != tx.origin ? AddressAliasHelper.applyL1ToL2Alias(msg.sender) : msg.sender;
}
```
Requesting L2 executions without the corresponding aliasing can lead to loss of funds, due to the impossibility of receiving the expected refunds or of performing an L1->L2 transaction to withdraw them.
Provide consistent aliasing, regardless of `code.length`. Consider providing an option to inform the transaction processing function whether the address should be aliased, as sketched below.
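A minimal sketch of that option in `requestL2Transaction()`; the `_aliasRefundRecipient` parameter and its wiring are assumptions, not part of the project's API:

```diff
 function requestL2Transaction(
     ...
-    address _refundRecipient
+    address _refundRecipient,
+    bool _aliasRefundRecipient // hypothetical: caller states that the recipient must be aliased
 ) external payable returns (bytes32 canonicalTxHash) {
     ...
     // If the `_refundRecipient` is not provided, we use the `_sender` as the recipient.
     address refundRecipient = _refundRecipient == address(0) ? _sender : _refundRecipient;
-    // If the `_refundRecipient` is a smart contract, we apply the L1 to L2 alias to prevent foot guns.
-    if (refundRecipient.code.length > 0) {
+    // Honor the caller's explicit intent in addition to code.length, which is
+    // 0 while a contract's constructor is executing.
+    if (_aliasRefundRecipient || refundRecipient.code.length > 0) {
         refundRecipient = AddressAliasHelper.applyL1ToL2Alias(refundRecipient);
     }
```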
### L-4 Double alias conversion in L1 bridge deposits

The `refundRecipient` is aliased on the bridges when it is not provided and a smart contract is the one calling `deposit()`:
```solidity
if (_refundRecipient == address(0)) {
    refundRecipient = msg.sender != tx.origin ? AddressAliasHelper.applyL1ToL2Alias(msg.sender) : msg.sender;
}
```
The `Mailbox` can then alias the address again, in case the newly aliased `refundRecipient` also corresponds to a smart contract on L1.
Although the probability of this event is very low for any specific contract, it is still possible for some addresses if we consider the whole spectrum of deployed and to-be-deployed contracts on Ethereum. In such an event, refunds that are a product of bridge transactions would be lost.
```solidity
if (refundRecipient.code.length > 0) {
    refundRecipient = AddressAliasHelper.applyL1ToL2Alias(refundRecipient);
}
```
Consider informing the `Mailbox` whether the refund recipient was already aliased, and prevent the possibility of double aliasing.
### L-5 Wrong validation on max number of `numberOfL2ToL1Logs`

The `publishPubdataAndClearState()` function in `L1Messenger` has an incorrect validation where both terms of an inequality comparison are the same. This makes the requirement always true, meaning that the parameter is actually not being validated.
`numberOfL2ToL1Logs` defines the number of logs that will be included in the logs Merkle tree.
```solidity
require(numberOfL2ToL1Logs <= numberOfL2ToL1Logs, "Too many L2->L1 logs");
```
If the number of logs is greater than `L2_TO_L1_LOGS_MERKLE_TREE_LEAVES`, the operation will revert with an index out of bounds error.
If the number is large enough but lower than `L2_TO_L1_LOGS_MERKLE_TREE_LEAVES`, it will successfully publish the pubdata, but may generate an invalid batch with more logs than expected by the zk-EVM or the Executor on L1, or a transaction too expensive for the Verifier to process. This could lead to a higher impact: a DoS of the affected components and of L1->L2 batch execution.
Set the `numberOfL2ToL1Logs` limit according to a proper constant, with a value lower than or equal to `L2_TO_L1_LOGS_MERKLE_TREE_LEAVES` and considering the limits of the rest of the system components, as sketched below.
### L-6 `isFacetFreezable()` returns false for non-existing facets instead of reverting

The function is responsible for informing "whether the facet can be frozen by the governor or always accessible".
If it returns `false`, it means that the facet is not freezable. But in the case that the facet doesn't exist, it will still return `false`. This breaks trust assumptions and can break integrations, as it will bypass validations on non-existing facets.
```solidity
/// @return isFreezable Whether the facet can be frozen by the governor or always accessible
function isFacetFreezable(address _facet) external view returns (bool isFreezable) {
    Diamond.DiamondStorage storage ds = Diamond.getDiamondStorage();

    // There is no direct way to get whether the facet address is freezable,
    // so we get it from one of the selectors that are associated with the facet.
    uint256 selectorsArrayLen = ds.facetToSelectors[_facet].selectors.length;
    if (selectorsArrayLen != 0) {
        bytes4 selector0 = ds.facetToSelectors[_facet].selectors[0];
        isFreezable = ds.selectorToFacet[selector0].isFreezable;
    }
}
```
Revert the transaction in case the facet doesn't exist, as in the sketch below.
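A possible shape for the fix (the `"g1"` error code is only illustrative):

```diff
 uint256 selectorsArrayLen = ds.facetToSelectors[_facet].selectors.length;
-if (selectorsArrayLen != 0) {
-    bytes4 selector0 = ds.facetToSelectors[_facet].selectors[0];
-    isFreezable = ds.selectorToFacet[selector0].isFreezable;
-}
+require(selectorsArrayLen != 0, "g1"); // facet does not exist
+bytes4 selector0 = ds.facetToSelectors[_facet].selectors[0];
+isFreezable = ds.selectorToFacet[selector0].isFreezable;
```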
### L-7 Sending many logs via a single L1->L2 transaction can halt the publishing of pubdata

Users can send messages from L2 to L1 via the `sendToL1()` function of the `L1Messenger`. Each of these function calls generates a log that is hashed and rolled into the `chainedLogsHash` storage variable.
This rolling hash is recalculated over all the logs the operator decides to publish with the pubdata, and if the logs don't match, the whole publishing operation will be reverted.
The number of logs is limited by the `L2_TO_L1_LOGS_MERKLE_TREE_LEAVES` constant, which is equal to 2048. This means that the rolling hash can be calculated over a maximum of 2048 logs. If more logs were sent, the resulting hash will be different and the publishing operation will revert.
In the case of L2 transactions, the operator may decide to not include them.
In the case of L1->L2 transactions, the operator can't omit any transaction, and has to include all of them in order as they are checked in order on L1 against the priority queue.
The operator can still decide to include fewer transactions, so that this number is not reached.
But there can still be the case that a single L1->L2 transaction generates enough logs to make the publishing operation always revert (because of the mismatching rolling hash).
In this event, the operator will be forced to not include any L1->L2 transactions, as they have to be in order, and the malicious one would make the publishing operation always revert.
L1->L2 transaction confirmations on L1 will be halted as no batch would be generated including them.
Given that the number of logs has to be > 2048 and has to be generated by a single L1->L2 transaction, the attack vector is constrained by gas limitations on L2, but feasible in the event of config or gas changes.
Limit the number of logs that can be generated in a single transaction to a number lower than the Merkle tree leaves; see the sketch below. There already exists a `numberOfLogsToProcess` variable that may also be useful to achieve this solution.
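A hedged sketch of such a bound at the top of `sendToL1()`; the exact counter wiring is an assumption:

```diff
 function sendToL1(bytes calldata _message) external override returns (bytes32 hash) {
+    // Assumed bound: keep the per-batch log count below the Merkle tree capacity,
+    // so a single L1->L2 transaction cannot force publishing to always revert.
+    require(numberOfLogsToProcess < L2_TO_L1_LOGS_MERKLE_TREE_LEAVES, "Too many L2->L1 logs");
     ...
```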
### L-8 Generating a large pubdata via a single L1->L2 transaction can halt the publishing to L1 operation

Users can send messages from L2 to L1 via the `sendToL1()` function of the `L1Messenger`.
Messages are included in the pubdata that is processed by the `publishPubdataAndClearState()` function. Each time a message is processed, an internal pointer is incremented.
When the pointer grows too large, it means that the pubdata is too long and the whole publishing operation will revert.
If an L1->L2 transaction is capable of generating such a message, the operator won't be able to continue processing any other L1->L2 transactions, as they are checked in order on L1 against the priority queue.
The L1 transaction itself does not need to send the big chunk of data; it may execute a contract on L2 that calls `L1Messenger::sendToL1()` with it. The current value that makes this attack possible is 520_000 bytes.
It is also worth noting that the pubdata includes other data apart from messages, such as the state diffs, that contribute to reaching this limit.
Bytecode publication is another way a malicious actor can contribute to generating a large pubdata, via `requestBytecodeL1Publication()` on contract deployments. In this case, large contract bytecodes (up to 2**16 bytes) can be submitted via `factoryDeps` (limited to 32 entries) to perform the attack.
In this event, the operator will be forced to not include any L1->L2 transactions, as they have to be in order, and the malicious one would make the publishing operation always revert.
L1->L2 transaction confirmations on L1 will be halted as no batch would be generated including them.
The attack vector is constrained by gas costs on L2, but can lead to a critical scenario where committing of L1->L2 txs is halted.
Limit the maximum length of combined messages that can be sent in a single transaction via `sendToL1()` or any other function that affects these logs. Limit the maximum size of combined bytecodes that can be requested for L1 publication in a single transaction. One possible shape for such accounting is sketched below.
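A hypothetical sketch; the `totalMessageBytesInBatch` counter and the `MAX_PUBDATA_BYTES_PER_BATCH` constant are assumptions, not existing identifiers:

```diff
 function sendToL1(bytes calldata _message) external override returns (bytes32 hash) {
+    // Hypothetical accounting: track the message bytes queued for publication and
+    // reject messages that would push the batch pubdata past a safe cap.
+    totalMessageBytesInBatch += _message.length;
+    require(totalMessageBytesInBatch <= MAX_PUBDATA_BYTES_PER_BATCH, "Pubdata too large");
     ...
```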
### L-9 Upgrade OpenZeppelin dependencies

The current codebase is using OpenZeppelin v4.9.2, which contains published vulnerabilities (with no direct impact on the current codebase). It is still recommended to upgrade dependencies to the latest version, v4.9.3, to avoid any bugs that are already fixed there.
### L-10 Missing check for `address(0)` for the `allowList` in `DiamondInit::initialize()`

This is a missing instance from the bot report.

The `DiamondInit` performs `address(0)` checks for all of its parameters except for the `allowList`. Please consider adding it:
```solidity
function initialize(InitializeData calldata _initalizeData) external reentrancyGuardInitializer returns (bytes32) {
    require(address(_initalizeData.verifier) != address(0), "vt");
    require(_initalizeData.governor != address(0), "vy");
    require(_initalizeData.admin != address(0), "hc");
    require(_initalizeData.priorityTxMaxGasLimit <= L2_TX_MAX_GAS_LIMIT, "vu");
```
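A possible diff, keeping the existing two-letter error-code style (the `"va"` code is a placeholder):

```diff
 require(address(_initalizeData.verifier) != address(0), "vt");
 require(_initalizeData.governor != address(0), "vy");
 require(_initalizeData.admin != address(0), "hc");
+require(address(_initalizeData.allowList) != address(0), "va"); // placeholder error code
 require(_initalizeData.priorityTxMaxGasLimit <= L2_TX_MAX_GAS_LIMIT, "vu");
```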
### L-11 Missing getters in Getters.sol

`Getters` is missing some getter functions to obtain the values of private variables. Please consider adding getter functions for the variables `zkPorterIsAvailable`, `totalDepositedAmountPerUser`, `admin`, and `pendingAdmin`, as sketched below.
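A sketch of possible getters (the function names and NatSpec are suggestions; `s` refers to the same diamond storage the existing getters use):

```solidity
/// @return Whether the zkPorter is available
function isZkPorterAvailable() external view returns (bool) {
    return s.zkPorterIsAvailable;
}

/// @return The total amount of ETH deposited by a given user
function getTotalDepositedAmountPerUser(address _user) external view returns (uint256) {
    return s.totalDepositedAmountPerUser[_user];
}

/// @return The current admin address
function getAdmin() external view returns (address) {
    return s.admin;
}

/// @return The pending admin address
function getPendingAdmin() external view returns (address) {
    return s.pendingAdmin;
}
```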
### L-12 Prevent WETH deposits in L1ERC20Bridge

The L1ERC20Bridge contract can be used to bridge any ERC20 token from L1 to L2. This also includes the reference WETH implementation in L1, as this is just another ERC20 token.
The protocol already includes a special bridge for WETH, the L1WethBridge contract. This contract provides a dedicated implementation that, in conjunction with L2WethBridge, bridges L1 WETH assets into the L2Weth reference implementation on L2. WETH bridged from L1 using L1ERC20Bridge will instead deploy a new L2 token representation for WETH (L2StandardERC20), different from the intended L2Weth contract used by L2WethBridge.
In `deposit()`, check that `_l1Token` is different from the WETH address in L1:
```diff
 function deposit(
     address _l2Receiver,
     address _l1Token,
     uint256 _amount,
     uint256 _l2TxGasLimit,
     uint256 _l2TxGasPerPubdataByte,
     address _refundRecipient
 ) public payable nonReentrant senderCanCallFunction(allowList) returns (bytes32 l2TxHash) {
     require(_amount != 0, "2T"); // empty deposit amount
     uint256 amount = _depositFunds(msg.sender, IERC20(_l1Token), _amount);
     require(amount == _amount, "1T"); // The token has non-standard transfer logic
+    require(_l1Token != l1WethAddress);
```
### L-13 Facet state is not entirely cleared when last selector is removed

Facet state in the Diamond library is maintained by keeping track of the selectors associated with the facet (an address).
When the first selector is added to a new facet, its state is initialized in `_saveFacetIfNew()`, and when the last selector is removed, its state is cleared in `_removeFacet()`:
```solidity
function _removeFacet(address _facet) private {
    DiamondStorage storage ds = getDiamondStorage();

    // Get index of `DiamondStorage.facets` of the facet and last element of array
    uint256 facetPosition = ds.facetToSelectors[_facet].facetPosition;
    uint256 lastFacetPosition = ds.facets.length - 1;

    // If the facet is not at the end of the array then move the last element to the facet position
    if (facetPosition != lastFacetPosition) {
        address lastFacet = ds.facets[lastFacetPosition];

        ds.facets[facetPosition] = lastFacet;
        ds.facetToSelectors[lastFacet].facetPosition = facetPosition.toUint16();
    }

    // Remove last element from the facets array
    ds.facets.pop();
}
```
In the previous snippet, we can see that the facet is removed from the `ds.facets` array, but the corresponding entry in `ds.facetToSelectors[_facet]` is not cleared.
In `_removeFacet()`, clear the associated state in the `facetToSelectors` mapping:
```diff
 function _removeFacet(address _facet) private {
     DiamondStorage storage ds = getDiamondStorage();

     // Get index of `DiamondStorage.facets` of the facet and last element of array
     uint256 facetPosition = ds.facetToSelectors[_facet].facetPosition;
     uint256 lastFacetPosition = ds.facets.length - 1;

     // If the facet is not at the end of the array then move the last element to the facet position
     if (facetPosition != lastFacetPosition) {
         address lastFacet = ds.facets[lastFacetPosition];

         ds.facets[facetPosition] = lastFacet;
         ds.facetToSelectors[lastFacet].facetPosition = facetPosition.toUint16();
     }

     // Remove last element from the facets array
     ds.facets.pop();
+
+    // Remove entry in facetToSelectors
+    delete ds.facetToSelectors[_facet];
 }
```
### L-14 Shadow operations will ultimately need to be revealed on-chain

The Governance contract supports the scheduling of shadow operations. These operations are groups of on-chain actions that, due to their sensitive nature, are scheduled using their hash rather than the actions themselves.
```solidity
function scheduleShadow(bytes32 _id, uint256 _delay) external onlyOwner {
    _schedule(_id, _delay);
    emit ShadowOperationScheduled(_id, _delay);
}
```
However, the operation still needs to be revealed when it is executed using `execute()` or `executeInstant()`. This not only allows front-runners to anticipate the change and carry out a potential attack before the scheduled operation is finally executed, but also opens the possibility that the operation itself fails, reverting the transaction and revealing the operation data without any concrete effect.
Utmost precaution should be taken when executing shadow operations. It is recommended to use private pools or flashbots to execute these types of operations, in order to both prevent front-running and also ensure successful execution of the transaction when being included in the blockchain.
### L-15 Potential gas bomb in Governance execution

Unsuccessful governance actions are reverted by bubbling the error and copying the response returned by the call:
```solidity
function _execute(Call[] calldata _calls) internal {
    for (uint256 i = 0; i < _calls.length; ++i) {
        (bool success, bytes memory returnData) = _calls[i].target.call{value: _calls[i].value}(_calls[i].data);
        if (!success) {
            // Propage an error if the call fails.
            assembly {
                revert(add(returnData, 0x20), mload(returnData))
            }
        }
    }
}
```
This can lead to a gas bomb, in which the callee intentionally returns a large response in order to inflict artificial gas costs on the caller.
Bound the size of the returned data (`returndatasize`) to a maximum constant, and copy the response to memory only up to the capped size, as in the sketch below.
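A minimal sketch of a capped copy; the 256-byte cap and the helper name are assumptions:

```solidity
/// @dev Assumed cap on copied returndata; not an existing project constant.
uint256 constant MAX_RETURN_BYTES = 256;

/// @dev Sketch: performs a low-level call and copies at most MAX_RETURN_BYTES
/// of the response, so a malicious callee cannot inflate memory expansion costs.
function _callWithCappedReturn(
    address _target,
    uint256 _value,
    bytes memory _data
) returns (bool success, bytes memory returnData) {
    uint256 cap = MAX_RETURN_BYTES;
    assembly {
        success := call(gas(), _target, _value, add(_data, 0x20), mload(_data), 0, 0)
        // Copy at most `cap` bytes of the response.
        let len := returndatasize()
        if gt(len, cap) { len := cap }
        returnData := mload(0x40)
        mstore(returnData, len)
        returndatacopy(add(returnData, 0x20), 0, len)
        // Advance the free memory pointer past the copied, word-aligned data.
        mstore(0x40, add(returnData, add(0x20, and(add(len, 31), not(31)))))
    }
}
```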
### L-16 Verifier params can be accidentally skipped during upgrades

Verifier params stored in the zkSync diamond can be updated by executing an upgrade. The implementation can be found in the `_setVerifierParams()` function in the BaseZkSyncUpgrade contract:
```solidity
127:     function _setVerifierParams(VerifierParams calldata _newVerifierParams) private {
128:         if (
129:             _newVerifierParams.recursionNodeLevelVkHash == bytes32(0) ||
130:             _newVerifierParams.recursionLeafLevelVkHash == bytes32(0) ||
131:             _newVerifierParams.recursionCircuitsSetVksHash == bytes32(0)
132:         ) {
133:             return;
134:         }
135:
136:         VerifierParams memory oldVerifierParams = s.verifierParams;
137:         s.verifierParams = _newVerifierParams;
138:         emit NewVerifierParams(oldVerifierParams, _newVerifierParams);
139:     }
```
As we can see in lines 128-134, if any of the three values is zero, the implementation will simply skip the update of all parameters. This could lead to an accidental scenario in which one or two of the parameters are scheduled to be updated while at least one other is accidentally set to zero. The update would then be skipped silently, without alerting about the potentially inconsistent intention.
Check that either all parameters are zero, or that all parameters are not zero.
```solidity
if (_newVerifierParams.recursionNodeLevelVkHash == bytes32(0)) {
    require(
        _newVerifierParams.recursionLeafLevelVkHash == bytes32(0) &&
            _newVerifierParams.recursionCircuitsSetVksHash == bytes32(0)
    );
    return;
} else {
    require(
        _newVerifierParams.recursionLeafLevelVkHash != bytes32(0) &&
            _newVerifierParams.recursionCircuitsSetVksHash != bytes32(0)
    );
}
...
```
### L-17 Governance contract does not implement ERC721 or ERC1155 receivers

The Governance contract is an adaptation of the TimelockController contract present in the OpenZeppelin library.

One of the differences from the OpenZeppelin implementation is that the Governance contract doesn't implement the IERC721Receiver and IERC1155Receiver interfaces. These provide the callbacks required by the ERC721 and ERC1155 standards to receive these kinds of tokens when the receiver is a contract.

As the Governance contract is essentially a timelocked wrapper around a multisig account, it is recommended to implement the IERC721Receiver and IERC1155Receiver interfaces, in order to support these types of tokens and have a future-proof implementation in case the contract needs to handle them. One possible approach is sketched below.
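A sketch using OpenZeppelin's holder helpers, which implement the required callbacks (the base-contract list shown here is an assumption and abbreviated):

```solidity
import {ERC721Holder} from "@openzeppelin/contracts/token/ERC721/utils/ERC721Holder.sol";
import {ERC1155Holder} from "@openzeppelin/contracts/token/ERC1155/utils/ERC1155Holder.sol";

// ERC721Holder implements IERC721Receiver and ERC1155Holder implements
// IERC1155Receiver, so Governance can safely receive both token types.
contract Governance is IGovernance, Ownable2Step, ERC721Holder, ERC1155Holder {
    // ...
}
```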
### L-18 Potential missing validation in `SystemContext::publishTimestampDataToL1()`

The `publishTimestampDataToL1()` function is used by the bootloader on L2 to publish timestamp data to L1 by sending an L2->L1 system log.
```solidity
379:     function publishTimestampDataToL1() external onlyCallFromBootloader {
380:         (uint128 currentBatchNumber, uint128 currentBatchTimestamp) = getBatchNumberAndTimestamp();
381:         (, uint128 currentL2BlockTimestamp) = getL2BlockNumberAndTimestamp();
382:
383:         // The structure of the "setNewBatch" implies that currentBatchNumber > 0, but we still double check it
384:         require(currentBatchNumber > 0, "The current batch number must be greater than 0");
385:         bytes32 prevBatchHash = batchHash[currentBatchNumber - 1];
386:
387:         // In order to spend less pubdata, the packed version is published
388:         uint256 packedTimestamps = (uint256(currentBatchTimestamp) << 128) | currentL2BlockTimestamp;
389:
390:         SystemContractHelper.toL1(
391:             false,
392:             bytes32(uint256(SystemLogKey.PACKED_BATCH_AND_L2_BLOCK_TIMESTAMP_KEY)),
393:             bytes32(packedTimestamps)
394:         );
395:     }
```
Line 385 fetches the previous batch hash from the `batchHash` storage mapping, but as we can see in the implementation, the variable `prevBatchHash` is not used at all.
It can be suspected that `prevBatchHash` is part of a missing validation, such as checking that this value is present, i.e. `require(prevBatchHash != bytes32(0))`, or checking that the stored previous batch hash matches an expected value, i.e. `require(prevBatchHash == expectedPrevBatchHash)`.
Ensure there is no missing validation in relation to the unused variable `prevBatchHash`; if there is none, delete line 385 to avoid any potential confusion.
### NC-1 `Compressor` uses hardcoded values instead of designated constants

Several constants are declared in the `Constants` file but are not used. The `Compressor` currently hardcodes these values, as in the cases below, and imports a constant that it doesn't use.
```solidity
/// @dev The length of the derived key in bytes inside compressed state diffs.
uint256 constant DERIVED_KEY_LENGTH = 32;
/// @dev The length of the enum index in bytes inside compressed state diffs.
uint256 constant ENUM_INDEX_LENGTH = 8;
/// @dev The length of value in bytes inside compressed state diffs.
uint256 constant VALUE_LENGTH = 32;
/// @dev The length of the compressed initial storage write in bytes.
uint256 constant COMPRESSED_INITIAL_WRITE_SIZE = DERIVED_KEY_LENGTH + VALUE_LENGTH;
/// @dev The length of the compressed repeated storage write in bytes.
uint256 constant COMPRESSED_REPEATED_WRITE_SIZE = ENUM_INDEX_LENGTH + VALUE_LENGTH;
/// @dev The position from which the initial writes start in the compressed state diffs.
uint256 constant INITIAL_WRITE_STARTING_POSITION = 4;
/// @dev Each storage diffs consists of the following elements:
/// [20bytes address][32bytes key][32bytes derived key][8bytes enum index][32bytes initial value][32bytes final value]
/// @dev The offset of the deriived key in a storage diff.
uint256 constant STATE_DIFF_DERIVED_KEY_OFFSET = 52;
/// @dev The offset of the enum index in a storage diff.
uint256 constant STATE_DIFF_ENUM_INDEX_OFFSET = 84;
/// @dev The offset of the final value in a storage diff.
uint256 constant STATE_DIFF_FINAL_VALUE_OFFSET = 124;
```
It is recommended that the `Compressor` use the corresponding values from `Constants` to prevent any integration issue with other components that use them.
### NC-2 Wrong documentation in `_parseL2WithdrawalMessage`

The `_parseL2WithdrawalMessage()` function in the `Mailbox` wrongly documents the possible `length` of an extended withdraw message, which comes from L2EthToken.
Note that `address l2Sender` is wrongly noted as `32` bytes. Also note that the resulting `68` is wrongly calculated.
```solidity
// 2. The message that is sent by `withdrawWithMessage(address _l1Receiver, bytes calldata _additionalData)`
// It should be equal to the length of the following:
// bytes4 function signature + address l1Receiver + uint256 amount + address l2Sender + bytes _additionalData =
// = 4 + 20 + 32 + 32 + _additionalData.length >= 68 (bytes).
```
This extended message is the same as the one expected by the `_parseL2EthWithdrawalMessage()` function in the `L1WethBridge`. Note how it is correctly calculated there:
```solidity
// Check that the message length is correct.
// additionalData (WETH withdrawal data): l2 sender address + weth receiver address = 20 + 20 = 40 (bytes)
// It should be equal to the length of the function signature + eth receiver address + uint256 amount +
// additionalData = 4 + 20 + 32 + 40 = 96 (bytes).
require(_message.length == 96, "Incorrect ETH message with additional data length");
```
Fix the `_parseL2WithdrawalMessage()` documentation:
```diff
 // 2. The message that is sent by `withdrawWithMessage(address _l1Receiver, bytes calldata _additionalData)`
 // It should be equal to the length of the following:
 // bytes4 function signature + address l1Receiver + uint256 amount + address l2Sender + bytes _additionalData =
-// = 4 + 20 + 32 + 32 + _additionalData.length >= 68 (bytes).
+// = 4 + 20 + 32 + 20 + _additionalData.length >= 76 (bytes).
```
The "State Diffs Header" comment on the publishPubdataAndClearState()
function in the L1Messenger
does not match with what the implementation does.
Upon further checking of the Statediff Compression Spec, we can see that the code implementation is correct, but the comment on the code is wrong:
```solidity
/// header (1 byte version, 2 bytes total len of compressed, 1 byte enumeration index size, 2 bytes number of initial writes)
```
```solidity
require(
    uint256(uint8(bytes1(_totalL2ToL1PubdataAndStateDiffs[calldataPtr]))) ==
        STATE_DIFF_COMPRESSION_VERSION_NUMBER,
    "state diff compression version mismatch"
);
calldataPtr++; // @audit "version" -> 1 byte OK

uint24 compressedStateDiffSize = uint24(bytes3(_totalL2ToL1PubdataAndStateDiffs[calldataPtr:calldataPtr + 3]));
calldataPtr += 3; // @audit "total len of compressed" -> 3 bytes vs 2 bytes in comment

uint8 enumerationIndexSize = uint8(bytes1(_totalL2ToL1PubdataAndStateDiffs[calldataPtr]));
calldataPtr++; // @audit "enumeration index size" -> 1 byte OK

// @audit-info missing 2 bytes of "number of initial writes" from comment
```
Fix the comments according to the actual implementation.
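Based on the `@audit` notes above, a corrected header comment could read (suggested wording, assuming the "number of initial writes" belongs to the compressed state diffs rather than the header):

```diff
-/// header (1 byte version, 2 bytes total len of compressed, 1 byte enumeration index size, 2 bytes number of initial writes)
+/// header (1 byte version, 3 bytes total len of compressed, 1 byte enumeration index size)
```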
### NC-4 Rename block -> batch as part of the codebase refactor

The codebase has been refactored to use the term `batch` instead of `block` in many parts.
Please consider renaming these instances as well:
```
callSystemContext({{RIGHT_PADDED_INCREMENT_TX_NUMBER_IN_BLOCK_SELECTOR}})
callSystemContext({{RIGHT_PADDED_RESET_TX_NUMBER_IN_BLOCK_SELECTOR}})
```
The intention can be inferred from the `process.ts` file:
```typescript
RIGHT_PADDED_INCREMENT_TX_NUMBER_IN_BLOCK_SELECTOR: getPaddedSelector('SystemContext', 'incrementTxNumberInBatch'),
RIGHT_PADDED_RESET_TX_NUMBER_IN_BLOCK_SELECTOR: getPaddedSelector('SystemContext', 'resetTxNumberInBatch'),
```
### NC-5 Update `forceDeployOnAddresses()` NatSpec to reflect its current permissions

`forceDeployOnAddresses()` has been updated to also allow the `COMPLEX_UPGRADER_CONTRACT` to execute it.
Please consider adding this to the `@dev` section of the NatSpec:
```solidity
/// @notice This method is to be used only during an upgrade to set bytecodes on specific addresses.
/// @dev We do not require `onlySystemCall` here, since the method is accessible only
/// by `FORCE_DEPLOYER`.
function forceDeployOnAddresses(ForceDeployment[] calldata _deployments) external payable {
    require(
        msg.sender == FORCE_DEPLOYER || msg.sender == address(COMPLEX_UPGRADER_CONTRACT),
        "Can only be called by FORCE_DEPLOYER or COMPLEX_UPGRADER_CONTRACT"
    );
```
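For instance, mirroring the `require` statement above:

```diff
 /// @dev We do not require `onlySystemCall` here, since the method is accessible only
-/// by `FORCE_DEPLOYER`.
+/// by `FORCE_DEPLOYER` or the `COMPLEX_UPGRADER_CONTRACT`.
```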
#0 - bytes032
2023-10-30T09:10:29Z
(l-1-extra-notifyexecutionresult-gas-cost-is-charged-to-users-on-l1-l2-transactions-processing) Extra notifyExecutionResult gas cost is charged to users on L1->L2 transactions processing L
(l-2-if-dummy-verifier-true-block-in-executor-introduces-risk-of-halting-batch-proving-in-production) if DUMMY_VERIFIER == true block in Executor introduces risk of halting batch proving in production NC
(l-3-refund-recipient-addresses-not-aliased-if-transactions-are-requested-on-constructor) Refund recipient addresses not aliased if transactions are requested on constructor NC
(l-4-double-alias-conversion-in-l1-bridge-deposits) Double alias conversion in L1 bridge deposits L
(l-5-wrong-validation-on-max-number-of-numberofl2tol1logs) Wrong validation on max number of numberOfL2ToL1Logs L
(l-6-isfacetfreezable-returns-false-for-non-existing-facets-instead-of-reverting) isFacetFreezable() returns false for non-existing facets instead of reverting NC
(l-7-sending-many-logs-via-a-single-l1-l2-transactions-can-halt-the-publishing-of-pubdata) Sending many logs via a single L1->L2 transactions can halt the publishing of pubdata L
(l-8-generating-a-large-pubdata-via-a-single-l1-l2-transaction-can-halt-the-publishing-to-l1-operation) Generating a large pubdata via a single L1->L2 transaction can halt the publishing to L1 operation L
(l-9-upgrade-openzeppelin-dependencies) Upgrade OpenZeppelin dependencies NC
(l-10-missing-check-for-address0-for-the-allowlist-in-diamondinitinitialize) Missing check for address(0) for the allowList in DiamondInit::initialize() NC
(l-11-missing-getters-in-getters-sol) Missing getters in Getters.sol NC
(l-12-prevent-weth-deposits-in-l1erc20bridge) Prevent WETH deposits in L1ERC20Bridge L
(l-13-facet-state-is-not-entirely-cleared-when-last-selector-is-removed) Facet state is not entirely cleared when last selector is removed L
(l-14-shadow-operations-will-ultimately-need-to-be-revealed-on-chain) Shadow operations will ultimately need to be revealed on-chain L
(l-15-potential-gas-bomb-in-governance-execution) Potential gas bomb in Governance execution -3
(l-16-verifier-params-can-be-accidentally-skipped-during-upgrades) Verifier params can be accidentally skipped during upgrades NC
(l-17-governance-contract-does-not-implement-erc721-or-erc1155-receivers) Governance contract does not implement ERC721 or ERC1155 receivers -3
(l-18-potential-missing-validation-in-systemcontextpublishtimestampdatatol1) Potential missing validation in SystemContext::publishTimestampDataToL1() NC
(nc-1-compressor-uses-hardcoded-values-instead-of-designated-constants) Compressor uses hardcoded values instead of designated constants NC
(nc-2-wrong-documentation-in-parsel2withdrawalmessage) Wrong documentation in _parseL2WithdrawalMessage NC
(nc-3-inconsistencies-in-state-diffs-header-docs) Inconsistencies in "State Diffs Header" Docs NC
(nc-4-rename-block-batch-as-part-of-the-codebase-refactor) Rename block -> batch as part of the codebase refactor NC
(nc-5-update-forcedeployonaddresses-natspec-to-reflect-its-current-permissions) Update forceDeployOnAddresses() NatSpec to reflect its current permissions NC
#1 - miladpiri
2023-11-30T14:48:26Z
Regarding L-7 and L-8 https://github.com/code-423n4/2023-10-zksync-findings/blob/main/data/zero-idea-Q.md#l-7-sending-many-logs-via-a-si[…]-halt-the-publishing-of-pubdata
It is not possible (implicitly). For L1->L2 transactions, the limit is 72kk, while gas per pubdata is 800. It means that 90000 bytes can be sent. With each L2->L1 message the users pay for 88 bytes of pubdata. 88 * 2048 = 180224
So yeah, the bound is implicit, but it is in the place.
#2 - c4-judge
2023-12-10T18:47:12Z
GalloDaSballo marked the issue as grade-b
#3 - c4-judge
2023-12-10T20:21:37Z
GalloDaSballo marked the issue as grade-c
#4 - c4-judge
2023-12-13T18:09:20Z
GalloDaSballo marked the issue as grade-b