Platform: Code4rena
Start Date: 07/03/2024
Pot Size: $250,000 USDC
Total HM: 5
Participants: 24
Period: 21 days
Judge: 0xsomeone
Total Solo HM: 3
Id: 347
League: ETH
Rank: 4/24
Findings: 3
Award: $2,432.76
🌟 Selected for report: 1
🚀 Solo Findings: 0
🌟 Selected for report: 0x11singh99
Also found by: Bauchibred, Dup1337, Topmark, XDZIBECX, bctester, bin2chen, erebus, forgebyola, oakcobalt, rvierdiiev, yashar, zhanmingjing
565.1582 USDC - $565.16
Judge has assessed an item in Issue #122 as 2 risk. The relevant finding follows:
Take a look at `StateTransitionManager.sol`:

```solidity
function freezeChain(uint256 _chainId) external onlyOwner {
    IZkSyncStateTransition(stateTransition[_chainId]).freezeDiamond();
}
```
This is a special function used by the admin to freeze a specific chain ID; it comes in handy in different contexts and is placed as a safety measure to help ensure a chain can be frozen when needed.
The protocol logic also includes an unfreezing functionality, but this has been wrongly implemented, see https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/StateTransitionManager.sol#L164-L168
```solidity
function unfreezeChain(uint256 _chainId) external onlyOwner {
    IZkSyncStateTransition(stateTransition[_chainId]).freezeDiamond();
}
```
Whereas this function is meant to be used for "unfreezing" the specified chain, it instead attempts to call `freezeDiamond()`. This means that an attempt to call `unfreezeChain()` by the owner of the `StateTransitionManager.sol` contract would always revert, since `freezeDiamond()` checks whether the chain ID is already frozen and reverts if so (`require(!diamondStorage.isFrozen, "a9");`), see https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Admin.sol#L133-L150
```solidity
function freezeDiamond() external onlyAdminOrStateTransitionManager {
    Diamond.DiamondStorage storage diamondStorage = Diamond.getDiamondStorage();

    require(!diamondStorage.isFrozen, "a9"); // diamond proxy is frozen already
    diamondStorage.isFrozen = true;

    emit Freeze();
}

/// @inheritdoc IAdmin
function unfreezeDiamond() external onlyAdminOrStateTransitionManager {
    Diamond.DiamondStorage storage diamondStorage = Diamond.getDiamondStorage();

    require(diamondStorage.isFrozen, "a7"); // diamond proxy is not frozen
    diamondStorage.isFrozen = false;

    emit Unfreeze();
}
```
Evidently, this bug is likely due to a copy-paste error (i.e. copying the implementation of `freezeChain()` for `unfreezeChain()`), as the dev comment is identical in both instances. So whereas the owner/admin can currently freeze a specified chain, the unfreezing logic is flawed, and any chain that gets frozen can't get unfrozen by the `StateTransitionManager`.
This was submitted as QA because, even though this breaks important functionality, if we navigate to these instances in `Admin.sol` we can see that having `onlyAdminOrStateTransitionManager` on `unfreezeDiamond()` means that whenever a chain ID gets frozen and the state transition manager can't unfreeze it, the admin can still directly call `Admin::unfreezeDiamond()`, since the modifier allows it.
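To illustrate the failure mode, here is a minimal standalone Foundry-style sketch; the mock below is hypothetical and only mirrors the relevant `Admin` checks, it is not protocol code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.20;

import {Test} from "forge-std/Test.sol";

// Hypothetical mock: both freezeChain() and the buggy unfreezeChain() end up
// calling freezeDiamond(), so the second call always hits require(!isFrozen, "a9").
contract DiamondMock {
    bool public isFrozen;

    function freezeDiamond() external {
        require(!isFrozen, "a9"); // diamond proxy is frozen already
        isFrozen = true;
    }

    function unfreezeDiamond() external {
        require(isFrozen, "a7"); // diamond proxy is not frozen
        isFrozen = false;
    }
}

contract UnfreezeBugTest is Test {
    DiamondMock diamond = new DiamondMock();

    function test_unfreezeChainAlwaysReverts() public {
        // freezeChain(): forwards to freezeDiamond() and succeeds.
        diamond.freezeDiamond();
        // unfreezeChain() as currently written: forwards to freezeDiamond()
        // again and reverts with "a9", so the chain can never be unfrozen
        // through the StateTransitionManager.
        vm.expectRevert(bytes("a9"));
        diamond.freezeDiamond();
    }
}
```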
Consider applying these changes to https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/StateTransitionManager.sol#L164-L168
```diff
- /// @dev freezes the specified chain
+ /// @dev unfreezes the specified chain
  function unfreezeChain(uint256 _chainId) external onlyOwner {
-     IZkSyncStateTransition(stateTransition[_chainId]).freezeDiamond();
+     IZkSyncStateTransition(stateTransition[_chainId]).unfreezeDiamond();
  }
```
#0 - c4-judge
2024-05-03T19:01:53Z
alex-ppg marked the issue as duplicate of #97
#1 - c4-judge
2024-05-03T19:02:00Z
alex-ppg marked the issue as satisfactory
🌟 Selected for report: Bauchibred
Also found by: 0x11singh99, ChaseTheLight, Dup1337, hihen, lsaudit, oakcobalt
1782.0996 USDC - $1,782.10
While we've tried our best to maintain readability, for brevity and focus we have selectively truncated the code snippets to spotlight the relevant sections. Certain excerpts may be abbreviated, and some references are provided via search commands due to the extensive nature of the project and time constraints.
As a request to the judge: in order not to spam the judging repo, we've attached some findings here that are subjective in nature, being borderline medium/low findings. We've hinted at these instances in the reports, so if the judge sees fit we'd be expecting upgrades of these findings.
Finally, it's important to mention that a previous Code4rena audit, conducted a few months back on an older commit of the contracts in scope, included a bot race and as such yielded a bot report with multiple QA suggestions; that report had over 4,000 QA instances (both L's and NC's). Our quick review of some of those bot findings revealed that only a few of the suggestions have been acted upon. We recommend that, alongside this report, the team also revisit the bot report to explore further QA suggestions to improve code structure; we have deliberately tried our best not to duplicate points from that previous analysis in this report.
Issue ID | Description |
---|---|
QA-01 | A chain that gets frozen by the state transition manager can never be removed from the frozen state in StateTransitionManager.sol |
QA-02 | Genesis upgrades currently do not fully work as intended |
QA-03 | Bootloader wrongly identifies the RIGHT_PADDED_GET_ACCOUNT_VERSION_SELECTOR as not attached with the Contract Deployer deployment execution |
QA-04 | The new require(dictionary.length / 8 <= encodedData.length / 2) check allows malicious operators to always pass in malformed data to the bootloader to ensure that all users' gas gets burnt without executing the tx |
QA-05 | Users' tokens could get lost indefinitely |
QA-06 | A large amount withdrawal could break the minting logic of L2BaseToken effectively causing attempts of consequent mints to fail |
QA-07 | Fix typos & documentations (Multiple instances) |
QA-08 | There are currently no assurances that funds specified for operation are rightly matched |
QA-09 | Protocol is susceptible to synchronization issues because reverted system contract upgrade batches must still be committed between L1 and L2 upgrades, due to wrongly assuming that just reverting in the bootloader is enough |
QA-10 | Shadow upgrades would currently have their execution data leaked if they fail |
QA-11 | Fix bugs present in the precompiles or clearly document the bugs for users/developers |
QA-12 | Consider renaming CURRENT_MAX_PRECOMPILE_ADDRESS to MAX_POSSIBLE_PRECOMPILE_ADDRESS |
QA-13 | Users funds could still be stuck due to lack of access to ETH on the L2 through L1->L2 transactions so this should be clearly documented |
QA-14 | If the pending admin becomes malicious there is no specific way to stop them from accepting the admin |
QA-15 | Protocol does not consider EIP-4844 and still assumes more than one batch can be committed |
QA-16 | Missing getter functions for key internal ZkSyncStateTransitionStorage variables should be introduced |
QA-17 | Upgrades take effect immediately instead of supporting a kick-in time in the future |
QA-18 | The placeholder _postUpgrade() has no implementation and is not meant to be used but is being queried twice via both the genesis and normal upgrade pattern |
QA-19 | New instances of missing natspec comments |
QA-20 | Setters do not have equality/zero checkers |
QA-21 | Multiple instances where messages within require are not descriptive |
QA-22 | Deviation from Solidity best styling practices in scoped contracts |
QA-23 | Remove unnecessary code irrelevant to final production |
QA-24 | Alpha period code is still in production despite the period ending in April 2023 |
QA-25 | Redeployment considerations for L2ContractHelper/Config.sol in case of updates |
QA-26 | L1SharedBridge.sol might encounter accounting errors |
QA-27 | The Ecrecover gas cost has been massively increased without a significant increase in execution cost |
QA-28 | zkEVM's CodeOracle Precompile somewhat deviates from EVM's extcodecopy behavior |
QA-29 | Protocol currently does not consider all windows in regards to minDelay |
QA-30 | BaseZkSyncUpgradeGenesis::_setNewProtocolVersion() should not allow the protocol version difference to be up to MAX_ALLOWED_PROTOCOL_VERSION_DELTA |
QA-31 | Users' failed txs could be unclaimable from the shared bridge |
QA-32 | Apply fixes to commented out bug cases |
QA-33 | Unnecessary duplication of checks |
QA-34 | Non-existent facets would be assumed to be unfreezable due to the wrong conditional keyword in isFacetFreezable() |
QA-35 | Remove the trailing comma after queries to isNotEnoughGasForPubdata in the bootloader to enforce no error in the Yul Syntax |
QA-36 | Consider rightly passing the elements of the BridgehubL2TransactionRequest in _requestL2Transaction() |
QA-37 | Resolve newly introduced TODOs |
QA-38 | OperationState.Waiting should be considered the case even when timestamp == block.timestamp |
QA-39 | Redundant return of the nonce deployed |
QA-40 | Chunking the pubdata should be made more efficient |
QA-41 | Owner can still access renounceOwnership() |
QA-42 | There exists a slight deviation between the Ethereum VM and zkSync's in regards to ETH deposits |
QA-43 | 0.8.20 is vulnerable to some bugs and compiler version should be updated to be on the safe side |
QA-44 | execute() & executeInstant() could use msg.value in a better manner |
QA-45 | StateTransitionManager.sol::createNewChain() has some nuances with its initData |
QA-46 | Double addressed tokens can be stolen from the bridge |
QA-47 | Fix bootloader's documentation in regards to unread memory points |
QA-48 | Protocol should consider supporting deposits of pure erc20s |
QA-49 | Remove instances of unnecessary casting |
QA-50 | Always update comments in code if it's already the deadline |
QA-51 | Fix documentation in regards to timing requirements so code doesn't deviate from docs |
QA-52 | Airdrops are automatically lost for the L1ERC20Bridge |
QA-53 | TransactionHelper::isEthToken() is not used anywhere, so it should be removed |
QA-54 | The storedBatchHash() fails to distinguish between reverted and unreverted batches |
QA-55 | Users would assume proof verification is performed twice in proveBatches() due to a wrongly commented-out else keyword and lack of documentation |
QA-56 | Wrong error message in BaseZkSyncUpgradeGenesis::_setNewProtocolVersion |
Take a look at `StateTransitionManager.sol`:

```solidity
function freezeChain(uint256 _chainId) external onlyOwner {
    IZkSyncStateTransition(stateTransition[_chainId]).freezeDiamond();
}
```
This is a special function used by the admin to freeze a specific chain ID; it comes in handy in different contexts and is placed as a safety measure to help ensure a chain can be frozen when needed.
The protocol logic also includes an unfreezing functionality, but this has been wrongly implemented, see https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/StateTransitionManager.sol#L164-L168
```solidity
function unfreezeChain(uint256 _chainId) external onlyOwner {
    IZkSyncStateTransition(stateTransition[_chainId]).freezeDiamond();
}
```
Whereas this function is meant to be used for "unfreezing" the specified chain, it instead attempts to call `freezeDiamond()`. This means that an attempt to call `unfreezeChain()` by the owner of the `StateTransitionManager.sol` contract would always revert, since `freezeDiamond()` checks whether the chain ID is already frozen and reverts if so (`require(!diamondStorage.isFrozen, "a9");`), see https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Admin.sol#L133-L150
```solidity
function freezeDiamond() external onlyAdminOrStateTransitionManager {
    Diamond.DiamondStorage storage diamondStorage = Diamond.getDiamondStorage();

    require(!diamondStorage.isFrozen, "a9"); // diamond proxy is frozen already
    diamondStorage.isFrozen = true;

    emit Freeze();
}

/// @inheritdoc IAdmin
function unfreezeDiamond() external onlyAdminOrStateTransitionManager {
    Diamond.DiamondStorage storage diamondStorage = Diamond.getDiamondStorage();

    require(diamondStorage.isFrozen, "a7"); // diamond proxy is not frozen
    diamondStorage.isFrozen = false;

    emit Unfreeze();
}
```
Evidently, this bug is likely due to a copy-paste error (i.e. copying the implementation of `freezeChain()` for `unfreezeChain()`), as the dev comment is identical in both instances. So whereas the owner/admin can currently freeze a specified chain, the unfreezing logic is flawed, and any chain that gets frozen can't get unfrozen by the `StateTransitionManager`.
This was submitted as QA because, even though this breaks important functionality, if we navigate to these instances in `Admin.sol` we can see that having `onlyAdminOrStateTransitionManager` on `unfreezeDiamond()` means that whenever a chain ID gets frozen and the state transition manager can't unfreeze it, the admin can still directly call `Admin::unfreezeDiamond()`, since the modifier allows it.
Consider applying these changes to https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/StateTransitionManager.sol#L164-L168
```diff
- /// @dev freezes the specified chain
+ /// @dev unfreezes the specified chain
  function unfreezeChain(uint256 _chainId) external onlyOwner {
-     IZkSyncStateTransition(stateTransition[_chainId]).freezeDiamond();
+     IZkSyncStateTransition(stateTransition[_chainId]).unfreezeDiamond();
  }
```
Based on documentation of how genesis upgrades should work, the transaction is to return the hash of the L2 system contract upgrade transaction after its execution.
Now, take a look at https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgradeGenesis.sol#L46-L68
```solidity
function upgrade(ProposedUpgrade calldata _proposedUpgrade) public virtual override returns (bytes32) {
    // Note that due to commitment delay, the timestamp of the L2 upgrade batch may be earlier than the timestamp
    // of the L1 block at which the upgrade occurred. This means that using timestamp as a signifier of "upgraded"
    // on the L2 side would be inaccurate. The effects of this "back-dating" of L2 upgrade batches will be reduced
    // as the permitted delay window is reduced in the future.
    require(block.timestamp >= _proposedUpgrade.upgradeTimestamp, "Upgrade is not ready yet");

    _setNewProtocolVersion(_proposedUpgrade.newProtocolVersion);
    _upgradeL1Contract(_proposedUpgrade.l1ContractsUpgradeCalldata);
    _upgradeVerifier(_proposedUpgrade.verifier, _proposedUpgrade.verifierParams);
    _setBaseSystemContracts(_proposedUpgrade.bootloaderHash, _proposedUpgrade.defaultAccountHash);

    bytes32 txHash = _setL2SystemContractUpgrade(
        _proposedUpgrade.l2ProtocolUpgradeTx,
        _proposedUpgrade.factoryDeps,
        _proposedUpgrade.newProtocolVersion
    );

    _postUpgrade(_proposedUpgrade.postUpgradeCalldata);

    emit UpgradeComplete(_proposedUpgrade.newProtocolVersion, txHash, _proposedUpgrade);
}
```
This function overrides the inherited `BaseZkSyncUpgrade.upgrade()`. The catch is that, unlike the inherited version, where the bytes32 value to be returned is explicitly named `txHash`, this function never names (or assigns) the variable to be returned, so every call to it returns `0`.
Deploying the following contracts in Remix proves this bug case:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.20;

contract BaseContract {
    function upgrade(bytes memory _proposedUpgrade) public virtual pure returns (bytes32 txHash) {
        txHash = bytes32(abi.encodePacked(_proposedUpgrade));
    }
}

contract BaseContractGenesis is BaseContract {
    function upgrade(bytes memory _proposedUpgrade) public override pure returns (bytes32) {
        bytes32 txHash = bytes32(abi.encodePacked(_proposedUpgrade));
    }
}
```
Passing the value `0x212226`, we can see that `BaseContract.upgrade()` returns `0: bytes32: txHash 0x2122260000000000000000000000000000000000000000000000000000000000`, while `BaseContractGenesis.upgrade()` returns `0: bytes32: txHash 0x0000000000000000000000000000000000000000000000000000000000000000`, and this happens despite both functions having the same body.
Borderline low/medium: whereas genesis upgrades do not work as expected, we could not find a direct instance in the protocol where the returned hash is actually consumed.
Name the variable to be returned, as is done in [BaseZkSyncUpgrade.upgrade()](https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgrade.sol#L67), or attach a `return` at the end of the function's execution, i.e. apply these changes to https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgradeGenesis.sol#L46-L68
```diff
- function upgrade(ProposedUpgrade calldata _proposedUpgrade) public virtual override returns (bytes32) {
+ function upgrade(ProposedUpgrade calldata _proposedUpgrade) public virtual override returns (bytes32 txHash) {
      // Note that due to commitment delay, the timestamp of the L2 upgrade batch may be earlier than the timestamp
      // of the L1 block at which the upgrade occurred. This means that using timestamp as a signifier of "upgraded"
      // on the L2 side would be inaccurate. The effects of this "back-dating" of L2 upgrade batches will be reduced
      // as the permitted delay window is reduced in the future.
      require(block.timestamp >= _proposedUpgrade.upgradeTimestamp, "Upgrade is not ready yet");

      _setNewProtocolVersion(_proposedUpgrade.newProtocolVersion);
      _upgradeL1Contract(_proposedUpgrade.l1ContractsUpgradeCalldata);
      _upgradeVerifier(_proposedUpgrade.verifier, _proposedUpgrade.verifierParams);
      _setBaseSystemContracts(_proposedUpgrade.bootloaderHash, _proposedUpgrade.defaultAccountHash);

-     bytes32 txHash = _setL2SystemContractUpgrade(
+     txHash = _setL2SystemContractUpgrade(
          _proposedUpgrade.l2ProtocolUpgradeTx,
          _proposedUpgrade.factoryDeps,
          _proposedUpgrade.newProtocolVersion
      );

      _postUpgrade(_proposedUpgrade.postUpgradeCalldata);

      emit UpgradeComplete(_proposedUpgrade.newProtocolVersion, txHash, _proposedUpgrade);
  }
```

(Note that the inner declaration must also drop its `bytes32` type, otherwise the named return variable would be shadowed and the code would not compile.)
Depending on whether the judge views this as "breaking core functionality" or as an ignorable miss in code/docs, the final validity differs; we assume the former would put this report on the grounds of a medium.
## QA-03 Bootloader wrongly identifies the `RIGHT_PADDED_GET_ACCOUNT_VERSION_SELECTOR` as not attached with the Contract Deployer deployment execution

```yul
function shouldMsgValueMimicCallBeSystem(to, dataPtr) -> ret {
    let dataLen := mload(dataPtr)
    // Note, that this point it is not fully known whether it is indeed the selector
    // of the calldata (it might not be the case if the `dataLen` < 4), but it will be checked later on
    let selector := shr(224, mload(add(dataPtr, 32)))

    let isSelectorCreate := or(
        eq(selector, {{CREATE_SELECTOR}}),
        eq(selector, {{CREATE_ACCOUNT_SELECTOR}})
    )
    let isSelectorCreate2 := or(
        eq(selector, {{CREATE2_SELECTOR}}),
        eq(selector, {{CREATE2_ACCOUNT_SELECTOR}})
    )

    // Firstly, ensure that the selector is a valid deployment function
    ret := or(
        isSelectorCreate,
        isSelectorCreate2
    )
    // Secondly, ensure that the callee is ContractDeployer
    ret := and(ret, eq(to, CONTRACT_DEPLOYER_ADDR()))
    // Thirdly, ensure that the calldata is long enough to contain the selector
    ret := and(ret, gt(dataLen, 3))
}
```
This function returns whether the mimicCall should use the `isSystem` flag; note that this flag should only be used for contract deployments and nothing else. Navigating to the bootloader's preprocessing script, https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/scripts/preprocess-bootloader.ts, we can see that not all contract deployment selectors get tagged with the flag; this is the intended behaviour, since calls to the deployer system contract are the only ones allowed to be system calls:
```ts
...
RIGHT_PADDED_GET_ACCOUNT_VERSION_SELECTOR: getPaddedSelector("ContractDeployer", "extendedAccountVersion"),
...
CREATE_SELECTOR: getSelector("ContractDeployer", "create"),
CREATE2_SELECTOR: getSelector("ContractDeployer", "create2"),
CREATE_ACCOUNT_SELECTOR: getSelector("ContractDeployer", "createAccount"),
CREATE2_ACCOUNT_SELECTOR: getSelector("ContractDeployer", "create2Account"),
...
```
Evidently there are 5 contract deployment functionalities attached to the bootloader's preprocessor, but only 4 of them get tagged with the `isSystem` flag in the bootloader, since `extendedAccountVersion` is not included. The current implementation of `bootloader::shouldMsgValueMimicCallBeSystem` would therefore wrongly assume that a call to `extendedAccountVersion` on the Contract Deployer is not a system call, making the ABI returned by `getFarCallABI()` for low-level invocations of calls and mimicCalls wrong.
The route for this logic is: `executeL1Tx` -> `msgValueSimulatorMimicCall` -> `shouldMsgValueMimicCallBeSystem`/`mimicCallOnlyResult` -> `getFarCallABI`.
Submitting this as QA and leaving it to the judge to upgrade if they see fit.
Include the `RIGHT_PADDED_GET_ACCOUNT_VERSION_SELECTOR` in `bootloader::shouldMsgValueMimicCallBeSystem` as an `isSystem` call.
## QA-04 The new `require(dictionary.length / 8 <= encodedData.length / 2)` check allows malicious operators to always pass in malformed data to the bootloader to ensure that all users' gas gets burnt without executing the tx

```solidity
function publishCompressedBytecode(
    bytes calldata _bytecode,
    bytes calldata _rawCompressedData
) external payable onlyCallFromBootloader returns (bytes32 bytecodeHash) {
    unchecked {
        (bytes calldata dictionary, bytes calldata encodedData) = _decodeRawBytecode(_rawCompressedData);

        require(
            encodedData.length * 4 == _bytecode.length,
            "Encoded data length should be 4 times shorter than the original bytecode"
        );

        require(
            dictionary.length / 8 <= encodedData.length / 2,
            "Dictionary should have at most the same number of entries as the encoded data"
        );

        for (uint256 encodedDataPointer = 0; encodedDataPointer < encodedData.length; encodedDataPointer += 2) {
            uint256 indexOfEncodedChunk = uint256(encodedData.readUint16(encodedDataPointer)) * 8;
            require(indexOfEncodedChunk < dictionary.length, "Encoded chunk index is out of bounds");

            uint64 encodedChunk = dictionary.readUint64(indexOfEncodedChunk);
            uint64 realChunk = _bytecode.readUint64(encodedDataPointer * 4);

            require(encodedChunk == realChunk, "Encoded chunk does not match the original bytecode");
        }
    }
}
```
The `publishCompressedBytecode()` function has been updated; the new implementation is here: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/Compressor.sol#L44-L70. In comparison to the previous implementation, it no longer enforces several checks, i.e. `dictionary.length % 8 == 0` and the check that the dictionary's length is `< 2**16`; this was done so the protocol can more reliably ensure that a failing call to this function fails due to the gas provided by the bootloader. But there is a new check that faults this logic, i.e. `require(dictionary.length / 8 <= encodedData.length / 2)`. The issue is that this check does not allow the bootloader to correctly enforce that calls to this function fail mostly because the provided gas is not enough, and it might still cause the attempt to forcefully revert (https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/bootloader/bootloader.yul#L1740-L1745): an operator can pass in data that does not fall as "correct" within the checks. For example, if the new `dictionary.length / 8 <= encodedData.length / 2` check is false, the near call to `sendCompressedBytecode()` in the bootloader ends up burning all the gas.
Keeping in mind that the `sendCompressedBytecode()` function is called by the bootloader whenever an L2 transaction needs to use packed bytecode (see https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/bootloader/bootloader.yul#L1506), this gives the operator the opportunity to burn all of the user-provided gas of any L2 transaction that needs packed/published bytecodes, potentially doing so constantly if the operator is malicious: all they need to do is tamper with the data they pass to the bootloader so that the compression always ends up malformed against the new `dictionary.length / 8` check, and every affected user attempt just burns its gas.
Since the `dictionary.length / 8 <= encodedData.length / 2` enforcement still needs to stay in place so the operator cannot pass in an excessive length, the near call should be scrapped from the bootloader when an attempt to `sendCompressedBytecode()` is made; i.e., if the compression is malformed, the attempt should revert with a reason rather than burning all the gas.
NB: This wasn't intended to be a QA submission, but these lines exist in the readMe: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/README.md#L31-L34
> ### Validator can provide inefficient bytecode compression
>
> In the current implementation, the mechanism for bytecode compression is not strictly unambiguous. That means the validator has the flexibility to choose a less efficient compression for the bytecode to increase the deployment cost for the end user. Besides that, there is another non-fixed [issue](https://github.com/code-423n4/2023-10-zksync-findings/issues/805), that gives a way for the operator to forces the user to pay more for the bytecode compression or even burn all the transaction gas during bytecode compression verification.
We assume the above snippet makes this issue somewhat subjective, as one could argue it stands on the line of being OOS; if all parties think otherwise, then this should be upgraded.
This is a "bug case" that's present in two different identified instances by us
```solidity
function depositLegacyErc20Bridge(
    address _prevMsgSender,
    address _l2Receiver,
    address _l1Token,
    uint256 _amount,
    uint256 _l2TxGasLimit,
    uint256 _l2TxGasPerPubdataByte,
    address _refundRecipient
) external payable override onlyLegacyBridge nonReentrant returns (bytes32 l2TxHash) {
    require(l2BridgeAddress[ERA_CHAIN_ID] != address(0), "ShB b. n dep");
    require(_l1Token != l1WethAddress, "ShB: WETH deposit not supported 2");

    // Note that funds have been transferred to this contract in the legacy ERC20 bridge.
    if (!hyperbridgingEnabled[ERA_CHAIN_ID]) {
        chainBalance[ERA_CHAIN_ID][_l1Token] += _amount;
    }

    bytes memory l2TxCalldata = _getDepositL2Calldata(_prevMsgSender, _l2Receiver, _l1Token, _amount);

    {
        //@audit
        // If the refund recipient is not specified, the refund will be sent to the sender of the transaction.
        // Otherwise, the refund will be sent to the specified address.
        // If the recipient is a contract on L1, the address alias will be applied.
        address refundRecipient = _refundRecipient;
        if (_refundRecipient == address(0)) {
            refundRecipient = _prevMsgSender != tx.origin
                ? AddressAliasHelper.applyL1ToL2Alias(_prevMsgSender)
                : _prevMsgSender;
        }

        L2TransactionRequestDirect memory request = L2TransactionRequestDirect({
            chainId: ERA_CHAIN_ID,
            l2Contract: l2BridgeAddress[ERA_CHAIN_ID],
            mintValue: msg.value, // l2 gas + l2 msg.Value the bridgehub will withdraw the mintValue from the base token bridge for gas
            l2Value: 0, // L2 msg.value, this contract doesn't support base token deposits or wrapping functionality, for direct deposits use bridgehub
            l2Calldata: l2TxCalldata,
            l2GasLimit: _l2TxGasLimit,
            l2GasPerPubdataByteLimit: _l2TxGasPerPubdataByte,
            factoryDeps: new bytes[](0),
            refundRecipient: refundRecipient
        });
        l2TxHash = bridgehub.requestL2TransactionDirect{value: msg.value}(request);
    }

    bytes32 txDataHash = keccak256(abi.encode(_prevMsgSender, _l1Token, _amount));
    // Save the deposited amount to claim funds on L1 if the deposit failed on L2
    depositHappened[ERA_CHAIN_ID][l2TxHash] = txDataHash;

    emit LegacyDepositInitiated(ERA_CHAIN_ID, l2TxHash, _prevMsgSender, _l2Receiver, _l1Token, _amount);
}
```
This function initiates a deposit by locking funds in the contract and then sending the request. The issue arises whenever the refund recipient is not specified: the protocol then assumes the refund recipient to be the aliased address of the sender if the sender is a contract. Key to note is that while this function is being called from the `L1ERC20Bridge`, `msg.sender` is passed as `_prevMsgSender`. The main case here is the aliasing logic and the fact that there is a high chance the user has no control over the contract or the private key for the `AddressAliasHelper.applyL1ToL2Alias(_prevMsgSender)` L2 address, which would lead to their funds being stuck in many cases.
```solidity
function _requestL2Transaction(
    uint256 _mintValue,
    WritePriorityOpParams memory _params,
    bytes memory _calldata,
    bytes[] memory _factoryDeps
) internal returns (bytes32 canonicalTxHash) {
    require(_factoryDeps.length <= MAX_NEW_FACTORY_DEPS, "uj");
    _params.txId = s.priorityQueue.getTotalPriorityTxs();

    // Checking that the user provided enough ether to pay for the transaction.
    // Using a new scope to prevent "stack too deep" error
    _params.l2GasPrice = _deriveL2GasPrice(tx.gasprice, _params.l2GasPricePerPubdata);
    uint256 baseCost = _params.l2GasPrice * _params.l2GasLimit;
    require(_mintValue >= baseCost + _params.l2Value, "mv"); // The `msg.value` doesn't cover the transaction cost

    // If the `_refundRecipient` is not provided, we use the `_sender` as the recipient.
    //@audit
    address refundRecipient = _params.refundRecipient == address(0) ? _params.sender : _params.refundRecipient;
    // If the `_refundRecipient` is a smart contract, we apply the L1 to L2 alias to prevent foot guns.
    if (refundRecipient.code.length > 0) {
        refundRecipient = AddressAliasHelper.applyL1ToL2Alias(refundRecipient);
    }
    _params.refundRecipient = refundRecipient;

    // populate missing fields
    _params.expirationTimestamp = uint64(block.timestamp + PRIORITY_EXPIRATION); // Safe to cast
    _params.valueToMint = _mintValue;

    canonicalTxHash = _writePriorityOp(_params, _calldata, _factoryDeps);
}
```
We can see that a similar logic applies here, because the contract during its execution includes a refund recipient. If it's set to `address(0)` (assuming this to be the default behavior) and the transaction on L2 fails, a refund should be issued. However, because the refund recipient is the aliased sender, the refund goes to the alias of the L1 contract's address; the original sender from L1 does not receive the refund, resulting in a potential loss of funds in this case too.
Users' funds would be inaccessible/stuck since they are not in control of the aliased address.
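For reference, the alias is a fixed arithmetic offset over the L1 address; a minimal sketch mirroring `AddressAliasHelper`'s derivation shows why the resulting L2 address is generally outside the depositor's control:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.20;

library AliasSketch {
    // Same offset constant used by AddressAliasHelper.
    uint160 constant OFFSET = uint160(0x1111000000000000000000000000000000001111);

    // The L2 "alias" of an L1 contract is simply its address plus a fixed offset.
    // Unless someone controls code or keys at this exact L2 address, anything
    // refunded there is unrecoverable.
    function applyL1ToL2Alias(address l1Address) internal pure returns (address l2Address) {
        unchecked {
            l2Address = address(uint160(l1Address) + OFFSET);
        }
    }
}
```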
This is a pretty old bug case, where protocols assume that `msg.sender` is in control of their address, or its alias, on another chain; this assumption led to the famous 20M OP Wintermute hack from 2022. The best way, in our opinion, to fix this: while depositing via `deposit(...)`, if `_prevMsgSender` is a contract and no explicit `refundRecipient` is specified, just revert the call.
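A minimal sketch of that guard, assuming it is placed at the top of the deposit flow (the helper name and revert string are illustrative, not protocol code):

```solidity
// Hypothetical pre-check for depositLegacyErc20Bridge()/deposit(): a contract
// caller that did not explicitly pick a refund recipient would otherwise have
// refunds aliased to an L2 address it may not control, so fail fast instead.
function _checkRefundRecipient(address _refundRecipient, address _prevMsgSender) internal view {
    if (_refundRecipient == address(0) && _prevMsgSender != tx.origin) {
        revert("ShB: contract depositor must set refundRecipient");
    }
}
```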
## QA-06 A large amount withdrawal could break the minting logic of `L2BaseToken`, effectively causing attempts of consequent mints to fail

First, take a look at both ways to initiate withdrawals, https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/L2BaseToken.sol#L72-L93
```solidity
function withdraw(address _l1Receiver) external payable override {
    uint256 amount = _burnMsgValue();

    // Send the L2 log, a user could use it as proof of the withdrawal
    bytes memory message = _getL1WithdrawMessage(_l1Receiver, amount);
    L1_MESSENGER_CONTRACT.sendToL1(message);

    emit Withdrawal(msg.sender, _l1Receiver, amount);
}

function withdrawWithMessage(address _l1Receiver, bytes memory _additionalData) external payable override {
    uint256 amount = _burnMsgValue();

    // Send the L2 log, a user could use it as proof of the withdrawal
    bytes memory message = _getExtendedWithdrawMessage(_l1Receiver, amount, msg.sender, _additionalData);
    L1_MESSENGER_CONTRACT.sendToL1(message);

    emit WithdrawalWithMessage(msg.sender, _l1Receiver, amount, _additionalData);
}
```
It's evident that both functions initiate withdrawals to be claimed on L1, the only difference being that the former purely withdraws whereas the latter attaches a message. Both call `_burnMsgValue()` while calculating the amount to be withdrawn, whose implementation is defined here: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/L2BaseToken.sol#L99-L109
```solidity
function _burnMsgValue() internal returns (uint256 amount) {
    amount = msg.value;

    // Silent burning of the ether
    unchecked {
        // This is safe, since this contract holds the ether balances, and if user
        // send a `msg.value` it will be added to the contract (`this`) balance.
        balance[address(this)] -= amount;
        totalSupply -= amount;
    }
}
```
The case here is that the protocol wrongly assumes that having the subtractions in the unchecked block is "completely" safe. That holds only for `balance[address(this)] -= amount`: whatever the value of `amount`, it equals `msg.value` and as such has already been added to the contract's balance, i.e. `address(this)`. The same argument can't be made for `totalSupply -= amount`, because no addition of `msg.value` to `totalSupply` is enforced before this subtraction. Going through the contract, the only instance where `totalSupply` gets increased is when `mint()` is called from the bootloader, https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/L2BaseToken.sol#L64-L68
```solidity
function mint(address _account, uint256 _amount) external override onlyCallFromBootloader {
    totalSupply += _amount;
    balance[_account] += _amount;
    emit Mint(_account, _amount);
}
```
This means that whenever `totalSupply < msg.value` evaluates to true while executing `_burnMsgValue()`, the `totalSupply` value wraps around to very close to `type(uint256).max`, since the subtraction is done in an unchecked block.
Say multiple black swan events occur on zkSync, users lose interest in the rollup, and they take their funds back to L1 en masse: since all these withdrawals are already depleting `totalSupply`, there is a high chance that one large withdrawal comes in and causes this to underflow.
The above is not the only route to this state: if the total minted supply is not too large, then multiple withdrawal attempts, or even a single one, could break the accounting logic of `totalSupply`. Besides breaking burning, this goes on to break the minting logic, since we would now have a massive value for `totalSupply` and any small addition would cause it to overflow; minting has no unchecked block, so the default Solidity 0.8.x overflow/underflow protection causes it to revert.
NB: This has a very high impact but a very low probability of happening, so it's borderline QA/Medium.
Consider explicitly checking that the total supply is more than the requested amount to be burnt, i.e apply these changes to https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/L2BaseToken.sol#L99-L109
```diff
  function _burnMsgValue() internal returns (uint256 amount) {
      amount = msg.value;
+     require(totalSupply >= amount, "Transfer amount exceeds total minted supply");

      // Silent burning of the ether
      unchecked {
          // This is safe, since this contract holds the ether balances, and if user
          // send a `msg.value` it will be added to the contract (`this`) balance.
          balance[address(this)] -= amount;
          totalSupply -= amount;
      }
  }
```
```solidity
{
    // Deposits that happened before the upgrade cannot be checked here, they have to be claimed and checked in the legacyBridge
    bool weCanCheckDepositHere = !_isEraLegacyWithdrawal(_chainId, _l2BatchNumber);
    //@audit
    // Double claims are not possible, as we this check except for legacy bridge withdrawals
    // Funds claimed before the update will still be recorded in the legacy bridge
    // Note we double check NEW deposits if they are called from the legacy bridge
    notCheckedInLegacyBridgeOrWeCanCheckDeposit = (!_checkedInLegacyBridge) || weCanCheckDepositHere;
}
```
Evidently the statement "Double claims are not possible, as we this check except for legacy bridge withdrawals" is garbled and should be fixed.
```solidity
/// @dev tranfer tokens from legacy erc20 bridge or mailbox and set chainBalance as part of migration process
function transferFundsFromLegacy(address _token, address _target, uint256 _targetChainId) external onlyOwner {
    if (_token == ETH_TOKEN_ADDRESS) {
        uint256 balanceBefore = address(this).balance;
        IMailbox(_target).transferEthToSharedBridge();
        uint256 balanceAfter = address(this).balance;
        require(balanceAfter > balanceBefore, "ShB: 0 eth transferred");
        chainBalance[_targetChainId][ETH_TOKEN_ADDRESS] =
            chainBalance[_targetChainId][ETH_TOKEN_ADDRESS] + balanceAfter - balanceBefore;
    } else {
        uint256 balanceBefore = IERC20(_token).balanceOf(address(this));
        uint256 amount = IERC20(_token).balanceOf(address(legacyBridge));
        require(amount > 0, "ShB: 0 amount to transfer");
        IL1ERC20Bridge(_target).tranferTokenToSharedBridge(_token, amount);
        uint256 balanceAfter = IERC20(_token).balanceOf(address(this));
        require(balanceAfter - balanceBefore == amount, "ShB: wrong amount transferred");
        chainBalance[_targetChainId][_token] = chainBalance[_targetChainId][_token] + amount;
    }
}
```
This: " /// @dev tranfer tokens from legacy erc20 bridge or mailbox and set chainBalance as part of migration process"
Should be changed to: " /// @dev transfer tokens from legacy erc20 bridge or mailbox and set chainBalance as part of migration process"
```solidity
// We only require 9 logs to be checked, the 10th is if we are expecting a protocol upgrade
// Without the protocol upgrade we expect 9 logs: 2^9 - 1 = 511
// With the protocol upgrade we expect 8 logs: 2^10 - 1 = 1023
if (_expectedSystemContractUpgradeTxHash == bytes32(0)) {
    require(processedLogs == 511, "b7");
} else {
    require(processedLogs == 1023, "b8");
}
```
This line, `// With the protocol upgrade we expect 8 logs: 2^10 - 1 = 1023`, should obviously be `// With the protocol upgrade we expect 10 logs: 2^10 - 1 = 1023`.
```yul
// Here we are making sure that the bytecode is indeed not yet know and needs to be published,
// preveting users from being overcharged by the operator.
let marker := getCodeMarker(bytecodeHash)
```
It should instead read "Here we are making sure that the bytecode is indeed not yet known and needs to be published" (note also `preveting` -> `preventing` on the next line).
```solidity
/// @dev The maximal possible address of an L1-like precompie. These precompiles maintain the following properties:
/// - Their extcodehash is EMPTY_STRING_KECCAK
/// - Their extcodesize is 0 despite having a bytecode formally deployed there.
uint256 constant CURRENT_MAX_PRECOMPILE_ADDRESS = 0xff;
```
Evidently the comment means to say "/// @dev The maximal possible address of an L1-like precompile. These precompiles maintain the following properties:" and not `precompie`.
```yul
function decommit(versionedHash, lenInWords) {
    // The operation below are never expected to overflow since the `lenInWords` is a most 2 bytes long.
    let gasCost := mul(decommmitCostPerWord(), lenInWords)
    //..snip
}
```
This line, "The operation below are never expected to overflow since the `lenInWords` is a most 2 bytes long.", should read "The operations below are never expected to overflow since `lenInWords` is at most 2 bytes long."
`* @notice The contract that emulates RIP-7212's P256VERIFY precompile.`

`RIP` should instead be `EIP`.
`* methods won't work for non-system contracts and also breaking changes at short notice are possilbe.`

`possilbe` should be `possible`.
Bad documentation/code structure, making it hard for users/developers to understand the code.

Fix the typos in these instances.
```solidity
function execute(Operation calldata _operation) external payable onlyOwnerOrSecurityCouncil {
    bytes32 id = hashOperation(_operation);
    // Check if the predecessor operation is completed.
    _checkPredecessorDone(_operation.predecessor);
    // Ensure that the operation is ready to proceed.
    require(isOperationReady(id), "Operation must be ready before execution");
    // Execute operation.
    _execute(_operation.calls);
    // Reconfirming that the operation is still ready after execution.
    // This is needed to avoid unexpected reentrancy attacks of re-executing the same operation.
    require(isOperationReady(id), "Operation must be ready after execution");
    // Set operation to be done
    timestamps[id] = EXECUTED_PROPOSAL_TIMESTAMP;
    emit OperationExecuted(id);
}

function executeInstant(Operation calldata _operation) external payable onlySecurityCouncil {
    bytes32 id = hashOperation(_operation);
    // Check if the predecessor operation is completed.
    _checkPredecessorDone(_operation.predecessor);
    // Ensure that the operation is in a pending state before proceeding.
    require(isOperationPending(id), "Operation must be pending before execution");
    // Execute operation.
    _execute(_operation.calls);
    // Reconfirming that the operation is still pending before execution.
    // This is needed to avoid unexpected reentrancy attacks of re-executing the same operation.
    require(isOperationPending(id), "Operation must be pending after execution");
    // Set operation to be done
    timestamps[id] = EXECUTED_PROPOSAL_TIMESTAMP;
    emit OperationExecuted(id);
}
```
These functions ultimately call the one below, https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/governance/Governance.sol#L225-L235
```solidity
function _execute(Call[] calldata _calls) internal {
    for (uint256 i = 0; i < _calls.length; ++i) {
        (bool success, bytes memory returnData) = _calls[i].target.call{value: _calls[i].value}(_calls[i].data);
        if (!success) {
            // Propagate an error if the call fails.
            assembly {
                revert(add(returnData, 0x20), mload(returnData))
            }
        }
    }
}
```
These functions are used to execute scheduled operations, either instantly or after their respective delays have passed. The issue is that the public functions are marked as `payable`, which suggests that `msg.value` may be passed along with these calls, but in the internal `_execute()` there is no validation that the `msg.value` provided equals the total `_calls[i].value` needed in the loop.
Potentially, excess funds accumulate in `Governance.sol` (when `msg.value` is greater than the total call value needed for the transaction), or not enough funds are available for the execution, which then eats into funds that were sent to the contract for different operations.
Check that `msg.value` equals the total call value needed; if the protocol plans on routing the logic via the contract's balance too, then check that `msg.value` plus the contract's ETH balance is enough for the transaction.
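A minimal sketch of the first option (a hypothetical helper; the name and revert string are illustrative):

```solidity
// Hypothetical check to run before dispatching the calls in _execute():
// ensure the caller funded exactly what the batch of calls will forward.
function _checkValue(Call[] calldata _calls) internal view {
    uint256 totalValue;
    for (uint256 i = 0; i < _calls.length; ++i) {
        totalValue += _calls[i].value;
    }
    require(msg.value == totalValue, "Governance: msg.value mismatch");
}
```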
Currently, either the `upgrade()` function of `BaseZkSyncUpgrade` or that of `BaseZkSyncUpgradeGenesis` is called, depending on the context of the current upgrade. Considering `BaseZkSyncUpgrade`, we can see that `BaseZkSyncUpgrade::upgrade()` eventually calls `_setL2SystemContractUpgrade()` to pass on the data for the proposed upgrade:
```solidity
txHash = _setL2SystemContractUpgrade(
    _proposedUpgrade.l2ProtocolUpgradeTx,
    _proposedUpgrade.factoryDeps,
    _proposedUpgrade.newProtocolVersion
);
```
Considering it's an upgrade transaction, it should be executed on L2 with `_l2ProtocolUpgradeTx.txType = 254`; in the bootloader, i.e. https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/bootloader/bootloader.yul#L594-L612
```yul
case 254 {
    // This is an upgrade transaction.
    // Protocol upgrade transactions are processed totally in the same manner as the normal L1->L2 transactions,
    // the only difference are:
    // - They must be the first one in the batch
    // - They have a different type to prevent tx hash collisions and preserve the expectation that the
    // L1->L2 transactions have priorityTxId inside them.
    if transactionIndex {
        assertionError("Protocol upgrade tx not first")
    }

    // This is to be called in the event that the L1 Transaction is a protocol upgrade txn.
    // Since this is upgrade transactions, we are okay that the gasUsed by the transaction will
    // not cover this additional hash computation
    let canonicalL1TxHash := getCanonicalL1TxHash(txDataOffset)
    sendToL1Native(true, protocolUpgradeTxHashKey(), canonicalL1TxHash)

    //@audit
    processL1Tx(txDataOffset, resultPtr, transactionIndex, userProvidedPubdataPrice, false)
}
```
Now, during the current execution, if this attempt fails for any reason, say the execution runs out of gas, the whole execution reverts; this is a direct fix to the primary issue from the previous audit contest, i.e. https://github.com/code-423n4/2023-10-zksync-findings/issues/214#issuecomment-1795027489, but it is not enough of a fix to ensure that synchronization always holds.
Consider the issues that were duplicated to the aforementioned one due to having similar logic, most especially 1 & 2.
In short, for 1 the bug report is about the following: whenever an upgrade has a critical vulnerability, the current implementation provides no possibility of preventing the upgrade from going through, because the upgrade must be carried out and cannot be changed even if batches get reverted.
```solidity
function revertBatches(uint256 _newLastBatch) external nonReentrant onlyValidatorOrStateTransitionManager {
    _revertBatches(_newLastBatch);
}

/// @inheritdoc IExecutor
function revertBatchesSharedBridge(uint256, uint256 _newLastBatch) external nonReentrant onlyValidator {
    _revertBatches(_newLastBatch);
}

function _revertBatches(uint256 _newLastBatch) internal {
    require(s.totalBatchesCommitted > _newLastBatch, "v1"); // The last committed batch is less than new last batch
    require(_newLastBatch >= s.totalBatchesExecuted, "v2"); // Already executed batches cannot be reverted

    if (_newLastBatch < s.totalBatchesVerified) {
        s.totalBatchesVerified = _newLastBatch;
    }
    s.totalBatchesCommitted = _newLastBatch;

    // Reset the batch number of the executed system contracts upgrade transaction if the batch
    // where the system contracts upgrade was committed is among the reverted batches.
    if (s.l2SystemContractsUpgradeBatchNumber > _newLastBatch) {
        delete s.l2SystemContractsUpgradeBatchNumber;
    }

    emit BlocksRevert(s.totalBatchesCommitted, s.totalBatchesVerified, s.totalBatchesExecuted);
}
```
Evidently, the suggested fix from that report has not been implemented: while reverting batches, `s.l2SystemContractsUpgradeTxHash` is not cleared, which would have increased the robustness of the system. That is to say, in cases like this the team has no way to stop the upgrade from happening, fix the error, and make a new upgrade tx, rather than leaving the vulnerable upgrade live in the wild.
The bug flow can be seen in the attached report. The protocol's plan to solve this is to override `s.l2SystemContractsUpgradeTxHash` in order to cancel the buggy upgrade, but this wouldn't work: that upgrade must be carried out in the next batch, so it leaves the protocol zero time to code and deploy the new upgrader, and within this window an attacker aware of the bugs in the implementation can exploit the system.
For case 2, the core of the issue is the same: it lies in the protocol's failure to delete the transaction hash associated with a system contract upgrade when reverting a batch. This leads to a scenario where subsequent batches, even those not intended to include a system contract upgrade, are processed as if they do, due to the presence of the leftover upgrade transaction hash. This misprocessing causes unintended execution of system contract upgrades, or reverts due to mismatched expectations about the batch's content.
Now, the protocol's design allows reverting batches to undo changes not yet executed. However, as previously highlighted, it inadequately addresses the cleanup of system contract upgrade markers, specifically the transaction hash (`s.l2SystemContractsUpgradeTxHash`). While it resets the batch number indicating an upgrade (`s.l2SystemContractsUpgradeBatchNumber`), it neglects the corresponding transaction hash. This leads to confusion in the `_commitBatches()` function; take a look at https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Executor.sol#L215-L248
```solidity
function _commitBatches(
    StoredBatchInfo memory _lastCommittedBatchData,
    CommitBatchInfo[] calldata _newBatchesData
) internal {
    // check that we have the right protocol version
    // three comments:
    // 1. A chain has to keep their protocol version up to date, as processing a block requires the latest or previous protocol version
    // to solve this we will need to add the feature to create batches with only the protocol upgrade tx, without any other txs.
    // 2. A chain might become out of sync if it launches while we are in the middle of a protocol upgrade. This would mean they cannot process their genesis upgrade
    // as thier protocolversion would be outdated, and they also cannot process the protocol upgrade tx as they have a pending upgrade.
    // 3. The protocol upgrade is increased in the BaseZkSyncUpgrade, in the executor only the systemContractsUpgradeTxHash is checked
    require(
        IStateTransitionManager(s.stateTransitionManager).protocolVersion() == s.protocolVersion,
        "Executor facet: wrong protocol version"
    );
    // With the new changes for EIP-4844, namely the restriction on number of blobs per block, we only allow for a single batch to be committed at a time.
    require(_newBatchesData.length == 1, "e4");
    // Check that we commit batches after last committed batch
    require(s.storedBatchHashes[s.totalBatchesCommitted] == _hashStoredBatchInfo(_lastCommittedBatchData), "i"); // incorrect previous batch data

    bytes32 systemContractsUpgradeTxHash = s.l2SystemContractsUpgradeTxHash;
    // Upgrades are rarely done so we optimize a case with no active system contracts upgrade.
    //@audit
    if (systemContractsUpgradeTxHash == bytes32(0) || s.l2SystemContractsUpgradeBatchNumber != 0) {
        _commitBatchesWithoutSystemContractsUpgrade(_lastCommittedBatchData, _newBatchesData);
    } else {
        _commitBatchesWithSystemContractsUpgrade(
            _lastCommittedBatchData,
            _newBatchesData,
            systemContractsUpgradeTxHash
        );
    }

    s.totalBatchesCommitted = s.totalBatchesCommitted + _newBatchesData.length;
}
```
Here we see that the new logic for EIP-4844 requires only one batch to be committed at a time due to the restriction on blobs per block, but this doesn't solve the root cause: regardless, if `(systemContractsUpgradeTxHash == bytes32(0) || s.l2SystemContractsUpgradeBatchNumber != 0)` is `false`, which would be the case for our reverted batch, then execution always routes to committing with system contract upgrades.
TLDR: `revertBatches()` deletes the upgrade batch number but not the upgrade tx hash, making reverts of batches with a system upgrade non-functional, since the system tx hash may still be executed with new batches.
Medium (the reason for submitting as QA is attached in the Additional Note section). For the first case, we show how the protocol has timelocked itself out of being able to stop an upgrade to a buggy implementation, one that might even contain critical bugs an attacker could exploit while the team codes and attempts to deploy a non-buggy implementation. For the second case, similarly, if for whatever reason `l2SystemContractsUpgradeTxHash` needs to be stopped from executing with a batch, it is not possible to do so, because `s.l2SystemContractsUpgradeTxHash` is not cleared in `revertBatches()`.
Modify the `revertBatches()` function to ensure comprehensive cleanup after a batch revert. This includes not only resetting the batch number associated with a system contract upgrade but also clearing the related transaction hash (`s.l2SystemContractsUpgradeTxHash`).
While reverting batches, both `l2SystemContractsUpgradeBatchNumber` and `s.l2SystemContractsUpgradeTxHash` should be deleted.
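A sketch of the extra cleanup inside `_revertBatches()` (the exact placement may differ; the hash must only be cleared for reverted-but-not-executed upgrades):

```diff
  // Reset the batch number of the executed system contracts upgrade transaction if the batch
  // where the system contracts upgrade was committed is among the reverted batches.
  if (s.l2SystemContractsUpgradeBatchNumber > _newLastBatch) {
      delete s.l2SystemContractsUpgradeBatchNumber;
+     delete s.l2SystemContractsUpgradeTxHash;
  }
```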
NB: This wasn't intended to be a QA submission, but these lines exist in the readMe: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/README.md#L35-L38
> ### Aknowledged issues from the previous audits
>
> All unfixed issues from the previous audits are considered out of scope.
>
> - https://era.zksync.io/docs/reference/troubleshooting/audit-bug-bounty.html
Now, whereas when navigating to the link attached in the readMe we can't find the audit where the idea of this bug was coined, we still assume it is part of the previous findings. We just don't know whether it was "acknowledged" or "confirmed", since the team clearly confirmed the bug case and applied a fix to it in the commit provided for this contest, albeit an insufficient one. Due to all this, we consider this issue somewhat subjective and leave it to the judge if they deem it fit to be upgraded.
```solidity
struct Call {
    address target;
    uint256 value;
    bytes data;
}

struct Operation {
    Call[] calls;
    bytes32 predecessor;
    bytes32 salt;
}
```
The above shows both the struct used for calls made during an operation and how an operation is defined.
From here, we can see that the protocol implements logic for shadow upgrades. Note that these types of upgrades are designed to keep upgrade details confidential until execution. However, if an upgrade attempt fails, the information within the `Operation calldata _operation` parameter, which is bundled with the call specifics via `execute()`, becomes public.
A failed shadow upgrade unintentionally exposes critical upgrade details to the public, which can include sensitive calldata and operational parameters. Malicious actors could exploit this information to discover system vulnerabilities and mount attacks before the deployment of a security patch; note that the current upgrade to `Governance.sol` even marks these functions (i.e. `execute()` & `executeInstant()`) as `payable`, which suggests that native token value could now be attached within this window.
Enforce a protective mechanism that automatically puts the system in a freeze mode upon failure.
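One possible shape for this, as a rough sketch only: `_freezeSystem()` is a hypothetical placeholder, and deliberately not reverting on failure is the design choice here, since a reverted tx would still expose the shadow-upgrade calldata on-chain while leaving the system live and unpatched:

```solidity
// Hypothetical variant of Governance._execute(): on a failed call, record
// the failure and freeze instead of reverting, so the revealed payload
// cannot be exploited before a patched upgrade is scheduled.
function _executeOrFreeze(Call[] calldata _calls) internal {
    for (uint256 i = 0; i < _calls.length; ++i) {
        (bool success, ) = _calls[i].target.call{value: _calls[i].value}(_calls[i].data);
        if (!success) {
            _freezeSystem(); // placeholder: put the affected chains in freeze mode
            return;
        }
    }
}
```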
Take a look at https://github.com/code-423n4/2023-10-zksync-findings/issues?q=175
Focus on the primary issue and its duplicates: they all talk about behavioral inconsistencies in precompiles when they are delegate-called into, suggesting that the `delegateCall()` function of `EfficientCall` should be reimplemented to check whether a precompile is being called and, if so, pass the call in as a staticcall. The issue is that, as shown in the snippet below, no change has been made to this function, and no documentation about this bug case has been attached to the natspec, i.e. https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/libraries/EfficientCall.sol#L83-L95
```solidity
/// @notice Perform a `delegateCall` without copying calldata to memory.
/// @param _gas The gas to use for the call.
/// @param _address The address to call.
/// @param _data The calldata to use for the call.
/// @return returnData The copied to memory return data.
function delegateCall(
    uint256 _gas,
    address _address,
    bytes calldata _data
) internal returns (bytes memory returnData) {
    bool success = rawDelegateCall(_gas, _address, _data);
    returnData = _verifyCallResult(success);
}
```
Deviation from the EVM's precompile behavior.
Consider clearly documenting this bug case and informing users not to attempt calling precompiles via delegatecall, or apply the suggested fix of first checking whether the contracts to be called are precompiles and then `staticcall`ing them.
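A minimal sketch of the suggested guard, reusing the library's existing helpers and the `CURRENT_MAX_PRECOMPILE_ADDRESS` constant discussed below (this mirrors the fix proposed in the linked findings, not an implemented change):

```solidity
// Hypothetical hardened variant of EfficientCall.delegateCall():
// precompile-range addresses behave inconsistently under delegatecall,
// so dispatch them as a staticcall (or revert, per the team's preference).
function delegateCall(
    uint256 _gas,
    address _address,
    bytes calldata _data
) internal returns (bytes memory returnData) {
    if (uint160(_address) <= CURRENT_MAX_PRECOMPILE_ADDRESS) {
        return staticCall(_gas, _address, _data);
    }
    bool success = rawDelegateCall(_gas, _address, _data);
    returnData = _verifyCallResult(success);
}
```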
## QA-12 Consider renaming `CURRENT_MAX_PRECOMPILE_ADDRESS` to `MAX_POSSIBLE_PRECOMPILE_ADDRESS`

Take a look at:

```solidity
/// @dev The maximal possible address of an L1-like precompie. These precompiles maintain the following properties:
/// - Their extcodehash is EMPTY_STRING_KECCAK
/// - Their extcodesize is 0 despite having a bytecode formally deployed there.
uint256 constant CURRENT_MAX_PRECOMPILE_ADDRESS = 0xff;
```
Note that this constant is used in the `AccountCodeStorage.getCodeHash` function, i.e.
```solidity
address account = address(uint160(_input));
if (uint160(account) <= CURRENT_MAX_PRECOMPILE_ADDRESS) {
    return EMPTY_STRING_KECCAK;
}
```
The issue is that the current name suggests there are already 255 precompile addresses, whereas it should convey that the maximal possible address of an L1-like precompile is 255, as this comment hints.
Confusing code; makes the protocol harder to understand.
Consider changing `CURRENT_MAX_PRECOMPILE_ADDRESS` to `MAX_POSSIBLE_PRECOMPILE_ADDRESS`.
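A possible diff (the rename would also need to be applied at every use site, e.g. `AccountCodeStorage.getCodeHash`):

```diff
- uint256 constant CURRENT_MAX_PRECOMPILE_ADDRESS = 0xff;
+ uint256 constant MAX_POSSIBLE_PRECOMPILE_ADDRESS = 0xff;
```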
Users may be unable to access their ETH on the L2 through L1->L2 transactions, so this should be clearly documented

```solidity
function _requestL2Transaction(
    uint256 _mintValue,
    WritePriorityOpParams memory _params,
    bytes memory _calldata,
    bytes[] memory _factoryDeps
) internal returns (bytes32 canonicalTxHash) {
    require(_factoryDeps.length <= MAX_NEW_FACTORY_DEPS, "uj");
    _params.txId = s.priorityQueue.getTotalPriorityTxs();

    // Checking that the user provided enough ether to pay for the transaction.
    // Using a new scope to prevent "stack too deep" error
    _params.l2GasPrice = _deriveL2GasPrice(tx.gasprice, _params.l2GasPricePerPubdata);
    uint256 baseCost = _params.l2GasPrice * _params.l2GasLimit;
    //@audit
    require(_mintValue >= baseCost + _params.l2Value, "mv"); // The `msg.value` doesn't cover the transaction cost

    // If the `_refundRecipient` is not provided, we use the `_sender` as the recipient.
    address refundRecipient = _params.refundRecipient == address(0) ? _params.sender : _params.refundRecipient;
    // If the `_refundRecipient` is a smart contract, we apply the L1 to L2 alias to prevent foot guns.
    if (refundRecipient.code.length > 0) {
        refundRecipient = AddressAliasHelper.applyL1ToL2Alias(refundRecipient);
    }
    _params.refundRecipient = refundRecipient;

    // populate missing fields
    _params.expirationTimestamp = uint64(block.timestamp + PRIORITY_EXPIRATION); // Safe to cast
    _params.valueToMint = _mintValue;

    canonicalTxHash = _writePriorityOp(_params, _calldata, _factoryDeps);
}
```
This function eventually gets called when requesting L2 transactions from L1. Note that there is an enforcement that `_mintValue` is always at least `baseCost + _params.l2Value`; from here we can see that the msg.value of the L1->L2 transaction must only be part of the ETH to be minted inside of transaction processing on the L2, and the same thing can be confirmed in the bootloader.
Now, `DefaultAccount.sol` mistakenly assumes that L1->L2 transactions initiated by externally owned accounts (EOAs) provide all necessary functionality, leading to its `executeTransactionFromOutside()` function being left unimplemented.
This oversight means users cannot access their ETH on L2 if a malicious operator selectively processes only L1->L2 transactions. This scenario not only restricts access to ETH on L2 but also breaches the security principle that guarantees users can initiate transactions from L1 to L2, enabling them to withdraw their assets in anticipation of a potentially harmful upgrade.
NB: Submitting as QA because, whereas sponsors "confirmed" this bug from the previous contest, they tagged it as a "disagree with severity"; we are still of the opinion that this is a borderline medium-severity bug and leave it to the discretion of the judge.
Enable the creation of L1->L2 transactions that can draw upon ETH in the L2 balance as part of the transaction value. Alternatively, implement functionality in the `DefaultAccount::executeTransactionFromOutside` function to ensure it operates as initially intended, or clearly document this limitation.
```solidity
/// @inheritdoc IAdmin
function setPendingAdmin(address _newPendingAdmin) external onlyAdmin {
    // Save previous value into the stack to put it into the event later
    address oldPendingAdmin = s.pendingAdmin;
    // Change pending admin
    s.pendingAdmin = _newPendingAdmin;
    emit NewPendingAdmin(oldPendingAdmin, _newPendingAdmin);
}

/// @inheritdoc IAdmin
function acceptAdmin() external {
    address pendingAdmin = s.pendingAdmin;
    require(msg.sender == pendingAdmin, "n4"); // Only proposed by current admin address can claim the admin rights

    address previousAdmin = s.admin;
    s.admin = pendingAdmin;
    delete s.pendingAdmin;

    emit NewPendingAdmin(pendingAdmin, address(0));
    emit NewAdmin(previousAdmin, pendingAdmin);
}
```
Both functions are used in the logic for updating the admin.
Now consider the scenario where the current admin sets an address as the pending admin that later becomes malicious (or is simply wrong); since the acceptance never expires, that address can claim the admin rights at any point in the future.
Low, since this overall depends on the current admin setting such an address as the pending admin.
Consider implementing timelocks in the whole logic, i.e. if the pending admin does not accept their admin status within 1 day, the proposal should be automatically revoked.
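A minimal sketch of such an expiry, assuming a new `pendingAdminSetAt` field is added to storage and a 1-day TTL (both hypothetical):

```solidity
uint256 constant PENDING_ADMIN_TTL = 1 days; // hypothetical constant

function setPendingAdmin(address _newPendingAdmin) external onlyAdmin {
    address oldPendingAdmin = s.pendingAdmin;
    s.pendingAdmin = _newPendingAdmin;
    s.pendingAdminSetAt = block.timestamp; // hypothetical new storage field
    emit NewPendingAdmin(oldPendingAdmin, _newPendingAdmin);
}

function acceptAdmin() external {
    address pendingAdmin = s.pendingAdmin;
    require(msg.sender == pendingAdmin, "n4");
    // Auto-revoke: the proposal can no longer be accepted once the TTL elapses.
    require(block.timestamp <= s.pendingAdminSetAt + PENDING_ADMIN_TTL, "pending admin expired");
    // ... rest unchanged
}
```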
```solidity
function _commitBatches(
    StoredBatchInfo memory _lastCommittedBatchData,
    CommitBatchInfo[] calldata _newBatchesData
) internal {
    //omitted for brevity
    //@audit
    // With the new changes for EIP-4844, namely the restriction on number of blobs per block, we only allow for a single batch to be committed at a time.
    require(_newBatchesData.length == 1, "e4");
    // Check that we commit batches after last committed batch
    require(s.storedBatchHashes[s.totalBatchesCommitted] == _hashStoredBatchInfo(_lastCommittedBatchData), "i"); // incorrect previous batch data

    bytes32 systemContractsUpgradeTxHash = s.l2SystemContractsUpgradeTxHash;
    // Upgrades are rarely done so we optimize a case with no active system contracts upgrade.
    if (systemContractsUpgradeTxHash == bytes32(0) || s.l2SystemContractsUpgradeBatchNumber != 0) {
        _commitBatchesWithoutSystemContractsUpgrade(_lastCommittedBatchData, _newBatchesData);
    } else {
        _commitBatchesWithSystemContractsUpgrade(
            _lastCommittedBatchData,
            _newBatchesData,
            systemContractsUpgradeTxHash
        );
    }

    s.totalBatchesCommitted = s.totalBatchesCommitted + _newBatchesData.length;
}

function _commitBatchesWithoutSystemContractsUpgrade(
    StoredBatchInfo memory _lastCommittedBatchData,
    CommitBatchInfo[] calldata _newBatchesData
) internal {
    //omitted for brevity
    //@audit
    for (uint256 i = 0; i < _newBatchesData.length; i = i.uncheckedInc()) {
        _lastCommittedBatchData = _commitOneBatch(_lastCommittedBatchData, _newBatchesData[i], bytes32(0));

        s.storedBatchHashes[_lastCommittedBatchData.batchNumber] = _hashStoredBatchInfo(_lastCommittedBatchData);
        emit BlockCommit(
            _lastCommittedBatchData.batchNumber,
            _lastCommittedBatchData.batchHash,
            _lastCommittedBatchData.commitment
        );
    }
}

function _commitBatchesWithSystemContractsUpgrade(
    StoredBatchInfo memory _lastCommittedBatchData,
    CommitBatchInfo[] calldata _newBatchesData,
    bytes32 _systemContractUpgradeTxHash
) internal {
    //omitted for brevity
    //@audit
    for (uint256 i = 0; i < _newBatchesData.length; i = i.uncheckedInc()) {
        // The upgrade transaction must only be included in the first batch.
        bytes32 expectedUpgradeTxHash = i == 0 ? _systemContractUpgradeTxHash : bytes32(0);
        _lastCommittedBatchData = _commitOneBatch(
            _lastCommittedBatchData,
            _newBatchesData[i],
            expectedUpgradeTxHash
        );

        s.storedBatchHashes[_lastCommittedBatchData.batchNumber] = _hashStoredBatchInfo(_lastCommittedBatchData);
        emit BlockCommit(
            _lastCommittedBatchData.batchNumber,
            _lastCommittedBatchData.batchHash,
            _lastCommittedBatchData.commitment
        );
    }
}
```
Now, these functions are called whenever batches are to be committed. Post the EIP-4844 implementation, due to the restriction on the number of blobs per block, the protocol only allows a single batch to be committed at a time, which is clearly noted in `_commitBatches()`. The case here is that both `_commitBatchesWithoutSystemContractsUpgrade()` and `_commitBatchesWithSystemContractsUpgrade()` still attempt to loop through `_newBatchesData.length`, which is already hardcoded and can only be one.
Confusing code implementation; residue code from the pre-EIP-4844 implementation is still present in the code, post EIP-4844.
Remove the for loops in both functions; see the sketch after the `ValidatorTimelock` excerpts below.
This bug is also present in this instance below: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/ValidatorTimelock.sol#L107-L142
```solidity
function commitBatches(
    StoredBatchInfo calldata,
    CommitBatchInfo[] calldata _newBatchesData
) external onlyValidator(ERA_CHAIN_ID) {
    unchecked {
        // This contract is only a temporary solution, that hopefully will be disabled until 2106 year, so...
        // It is safe to cast.
        uint32 timestamp = uint32(block.timestamp);
        for (uint256 i = 0; i < _newBatchesData.length; ++i) {
            committedBatchTimestamp[ERA_CHAIN_ID].set(_newBatchesData[i].batchNumber, timestamp);
        }
    }

    _propagateToZkSyncStateTransition(ERA_CHAIN_ID);
}

/// @dev Records the timestamp for all provided committed batches and make
/// a call to the hyperchain diamond contract with the same calldata.
function commitBatchesSharedBridge(
    uint256 _chainId,
    StoredBatchInfo calldata,
    CommitBatchInfo[] calldata _newBatchesData
) external onlyValidator(_chainId) {
    unchecked {
        // This contract is only a temporary solution, that hopefully will be disabled until 2106 year, so...
        // It is safe to cast.
        uint32 timestamp = uint32(block.timestamp);
        for (uint256 i = 0; i < _newBatchesData.length; ++i) {
            committedBatchTimestamp[_chainId].set(_newBatchesData[i].batchNumber, timestamp);
        }
    }

    _propagateToZkSyncStateTransition(_chainId);
}
```
```solidity
/// @dev Check that batches were committed at least X time ago and
/// make a call to the hyperchain diamond contract with the same calldata.
function executeBatches(StoredBatchInfo[] calldata _newBatchesData) external onlyValidator(ERA_CHAIN_ID) {
    uint256 delay = executionDelay; // uint32
    unchecked {
        for (uint256 i = 0; i < _newBatchesData.length; ++i) {
            uint256 commitBatchTimestamp = committedBatchTimestamp[ERA_CHAIN_ID].get(
                _newBatchesData[i].batchNumber
            );

            // Note: if the `commitBatchTimestamp` is zero, that means either:
            // * The batch was committed, but not through this contract.
            // * The batch wasn't committed at all, so execution will fail in the zkSync contract.
            // We allow executing such batches.

            require(block.timestamp >= commitBatchTimestamp + delay, "5c"); // The delay is not passed
        }
    }
    _propagateToZkSyncStateTransition(ERA_CHAIN_ID);
}

/// @dev Check that batches were committed at least X time ago and
/// make a call to the hyperchain diamond contract with the same calldata.
function executeBatchesSharedBridge(
    uint256 _chainId,
    StoredBatchInfo[] calldata _newBatchesData
) external onlyValidator(_chainId) {
    uint256 delay = executionDelay; // uint32
    unchecked {
        for (uint256 i = 0; i < _newBatchesData.length; ++i) {
            uint256 commitBatchTimestamp = committedBatchTimestamp[_chainId].get(_newBatchesData[i].batchNumber);

            // Note: if the `commitBatchTimestamp` is zero, that means either:
            // * The batch was committed, but not through this contract.
            // * The batch wasn't committed at all, so execution will fail in the zkSync contract.
            // We allow executing such batches.

            require(block.timestamp >= commitBatchTimestamp + delay, "5c"); // The delay is not passed
        }
    }
    _propagateToZkSyncStateTransition(_chainId);
}
```
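As referenced above, a sketch of the loop removal for `_commitBatchesWithoutSystemContractsUpgrade()` (the `WithSystemContractsUpgrade` variant would change analogously), assuming the `length == 1` check in `_commitBatches()` stays in place:

```diff
function _commitBatchesWithoutSystemContractsUpgrade(
    StoredBatchInfo memory _lastCommittedBatchData,
    CommitBatchInfo[] calldata _newBatchesData
) internal {
-   for (uint256 i = 0; i < _newBatchesData.length; i = i.uncheckedInc()) {
-       _lastCommittedBatchData = _commitOneBatch(_lastCommittedBatchData, _newBatchesData[i], bytes32(0));
-       ...
-   }
+   // `_commitBatches` already enforces `_newBatchesData.length == 1`
+   _lastCommittedBatchData = _commitOneBatch(_lastCommittedBatchData, _newBatchesData[0], bytes32(0));
+   s.storedBatchHashes[_lastCommittedBatchData.batchNumber] = _hashStoredBatchInfo(_lastCommittedBatchData);
+   emit BlockCommit(
+       _lastCommittedBatchData.batchNumber,
+       _lastCommittedBatchData.batchHash,
+       _lastCommittedBatchData.commitment
+   );
}
```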
Getters for some `ZkSyncStateTransitionStorage` variables should be introduced. Take a look at the getter facet here: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Getters.sol
Now, consider the `ZkSyncStateTransitionStorage` struct here: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/ZkSyncStateTransitionStorage.sol#L66-L153
Going through both code snippets, one can see that, whereas there are multiple getter functions, a few are missing; to list them:
- `__DEPRECATED_diamondCutStorage`
- `__DEPRECATED_governor`
- `__DEPRECATED_pendingGovernor`
- `__DEPRECATED_allowList`
- `UpgradeStorage __DEPRECATED_upgrades`
- `__DEPRECATED_lastWithdrawalLimitReset`
- `__DEPRECATED_withdrawnAmountInWindow`
- `mapping(address => uint256) __DEPRECATED_totalDepositedAmountPerUser`
- `bool zkPorterIsAvailable`
- `address blobVersionedHashRetriever`
- `uint256 chainId`
Now, whereas one could argue against attaching getters for the deprecated variables, we can't say the same for the non-deprecated ones.
Users can't directly query important state data.
Consider adding corresponding getter functions in `Getters.sol` for the non-deprecated storage data.
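A minimal sketch of what such getters could look like (function names are suggestions, following the existing `Getters.sol` style of reading from the `s` storage pointer):

```solidity
function isZkPorterAvailable() external view returns (bool) {
    return s.zkPorterIsAvailable;
}

function getBlobVersionedHashRetriever() external view returns (address) {
    return s.blobVersionedHashRetriever;
}

function getChainId() external view returns (uint256) {
    return s.chainId;
}
```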
```solidity
function _setChainIdUpgrade(uint256 _chainId, address _chainContract) internal {
    bytes memory systemContextCalldata = abi.encodeCall(ISystemContext.setChainId, (_chainId));
    uint256[] memory uintEmptyArray;
    bytes[] memory bytesEmptyArray;

    L2CanonicalTransaction memory l2ProtocolUpgradeTx = L2CanonicalTransaction({
        txType: SYSTEM_UPGRADE_L2_TX_TYPE,
        from: uint256(uint160(L2_FORCE_DEPLOYER_ADDR)),
        to: uint256(uint160(L2_SYSTEM_CONTEXT_SYSTEM_CONTRACT_ADDR)),
        gasLimit: $(PRIORITY_TX_MAX_GAS_LIMIT),
        gasPerPubdataByteLimit: REQUIRED_L2_GAS_PRICE_PER_PUBDATA,
        maxFeePerGas: uint256(0),
        maxPriorityFeePerGas: uint256(0),
        paymaster: uint256(0),
        // Note, that the priority operation id is used as "nonce" for L1->L2 transactions
        nonce: protocolVersion,
        value: 0,
        reserved: [uint256(0), 0, 0, 0],
        data: systemContextCalldata,
        signature: new bytes(0),
        factoryDeps: uintEmptyArray,
        paymasterInput: new bytes(0),
        reservedDynamic: new bytes(0)
    });

    ProposedUpgrade memory proposedUpgrade = ProposedUpgrade({
        l2ProtocolUpgradeTx: l2ProtocolUpgradeTx,
        factoryDeps: bytesEmptyArray,
        bootloaderHash: bytes32(0),
        defaultAccountHash: bytes32(0),
        verifier: address(0),
        verifierParams: VerifierParams({
            recursionNodeLevelVkHash: bytes32(0),
            recursionLeafLevelVkHash: bytes32(0),
            recursionCircuitsSetVksHash: bytes32(0)
        }),
        l1ContractsUpgradeCalldata: new bytes(0),
        postUpgradeCalldata: new bytes(0),
        upgradeTimestamp: 0,
        newProtocolVersion: protocolVersion
    });

    Diamond.FacetCut[] memory emptyArray;
    Diamond.DiamondCutData memory cutData = Diamond.DiamondCutData({
        facetCuts: emptyArray,
        initAddress: genesisUpgrade,
        initCalldata: abi.encodeCall(IDefaultUpgrade.upgrade, (proposedUpgrade))
    });

    IAdmin(_chainContract).executeUpgrade(cutData);
    emit SetChainIdUpgrade(_chainContract, l2ProtocolUpgradeTx, protocolVersion);
}
```
Considering this is the logic for setting the chain-id upgrade, what to note here is the values applied to the `ProposedUpgrade` struct. Navigating to https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgrade.sol#L14-L40, we can see all the necessities attached to this struct, i.e. most members of the struct are to be kept as `0` in order not to be updated, which is right, but the logic for `upgradeTimestamp` and `newProtocolVersion` is wrong.
For `upgradeTimestamp`, the value should be set as the timestamp after which the upgrade can be executed, i.e. in the future, or at least it shouldn't be hardcoded to `0`.
For `newProtocolVersion`, the version needs to be greater than the previous protocol version, but execution instead just queries the current version, which is wrong, since the attempt to upgrade the version from `BaseZkSyncUpgrade.sol` would revert if the version is not greater than the current one.
Additionally, note that the comment "Note, that the priority operation id is used as "nonce" for L1->L2 transactions" is invalid in this case, as the operation is not a `PRIORITY_OPERATION_L2_TX_TYPE` but rather a `SYSTEM_UPGRADE_L2_TX_TYPE`.
The logic of having an upgrade scheduled in the future is non-existent; due to the hardcoded values, all upgrades are immediately executable, contrary to the protocol's intention. This also heavily depends on the documentation around it: for `GenesisUpgrade` specifically, an `upgradeTimestamp` of 0 is an acceptable value, as it can and should be executed immediately after chain genesis, but that's not the case for all instances.
Consider allowing `upgradeTimestamp` to be set to a value other than `0`; additionally, consider allowing `protocolVersion` to be a passed-in value.
`_postUpgrade()` has no implementation and is not meant to be used but is being queried twice via both the genesis and normal upgrade pattern

```solidity
/// @notice placeholder function for custom logic for post-upgrade logic.
/// Typically this function will never be used.
/// @param _customCallDataForUpgrade Custom data for an upgrade, which may be interpreted differently for each
/// upgrade.
function _postUpgrade(bytes calldata _customCallDataForUpgrade) internal virtual {}
```
Evidently, we can see that this function is not meant to be used, as it currently has no implementation and an empty block, yet it is being queried twice via both the genesis and normal upgrade pattern; consider these 2 instances: 1 - https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgradeGenesis.sol#L65-L66 & 2 - https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgrade.sol#L85-L86
Now, it is key to note that whereas `postUpgradeCalldata` would usually be empty, that's not always the case; it is actually the custom calldata for the post-upgrade hook, which would be interpreted differently depending on the upgrade.
Inability to act on any post-upgrade calldata whenever `postUpgradeCalldata` is non-empty, as the `_postUpgrade()` function lacks an implementation.
Submitting as QA due to the "Typically this function will never be used" comment
Introduce functionality to process post-upgrade custom calldata, as suggested by the docs; this should be via an implementation of `_postUpgrade()`.
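For illustration, a sketch of how a concrete upgrade contract could override the empty hook; the calldata layout decoded here is purely hypothetical:

```solidity
function _postUpgrade(bytes calldata _customCallDataForUpgrade) internal override {
    if (_customCallDataForUpgrade.length == 0) {
        return;
    }
    // Hypothetical encoding: a single (target, data) pair to call after the upgrade.
    (address target, bytes memory data) = abi.decode(_customCallDataForUpgrade, (address, bytes));
    (bool success, ) = target.call(data);
    require(success, "post-upgrade call failed");
}
```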
Incomplete natspec; there are multiple instances of this:
```solidity
/*//////////////////////////////////////////////////////////////
                    ERA LEGACY FUNCTIONS
//////////////////////////////////////////////////////////////*/

/// @notice Initiates a deposit by locking funds on the contract and sending the request
/// of processing an L2 transaction where tokens would be minted.
/// @dev If the token is bridged for the first time, the L2 token contract will be deployed. Note however, that the
/// newly-deployed token does not support any custom logic, i.e. rebase tokens' functionality is not supported.
/// @param _l2Receiver The account address that should receive funds on L2
/// @param _l1Token The L1 token address which is deposited
/// @param _amount The total amount of tokens to be bridged
/// @param _l2TxGasLimit The L2 gas limit to be used in the corresponding L2 transaction
/// @param _l2TxGasPerPubdataByte The gasPerPubdataByteLimit to be used in the corresponding L2 transaction
/// @param _refundRecipient The address on L2 that will receive the refund for the transaction.
/// @dev If the L2 deposit finalization transaction fails, the `_refundRecipient` will receive the `_l2Value`.
/// Please note, the contract may change the refund recipient's address to eliminate sending funds to addresses
/// out of control.
/// - If `_refundRecipient` is a contract on L1, the refund will be sent to the aliased `_refundRecipient`.
/// - If `_refundRecipient` is set to `address(0)` and the sender has NO deployed bytecode on L1, the refund will
/// be sent to the `msg.sender` address.
/// - If `_refundRecipient` is set to `address(0)` and the sender has deployed bytecode on L1, the refund will be
/// sent to the aliased `msg.sender` address.
/// @dev The address aliasing of L1 contracts as refund recipient on L2 is necessary to guarantee that the funds
/// are controllable through the Mailbox, since the Mailbox applies address aliasing to the from address for the
/// L2 tx if the L1 msg.sender is a contract. Without address aliasing for L1 contracts as refund recipients they
/// would not be able to make proper L2 tx requests through the Mailbox to use or withdraw the funds from L2, and
/// the funds would be lost.
/// @return l2TxHash The L2 transaction hash of deposit finalization.
function depositLegacyErc20Bridge(
    address _prevMsgSender,
    address _l2Receiver,
    address _l1Token,
    uint256 _amount,
    uint256 _l2TxGasLimit,
    uint256 _l2TxGasPerPubdataByte,
    address _refundRecipient
) external payable override onlyLegacyBridge nonReentrant returns (bytes32 l2TxHash) {
    require(l2BridgeAddress[ERA_CHAIN_ID] != address(0), "ShB b. n dep");
    require(_l1Token != l1WethAddress, "ShB: WETH deposit not supported 2");

    // Note that funds have been transferred to this contract in the legacy ERC20 bridge.
    if (!hyperbridgingEnabled[ERA_CHAIN_ID]) {
        chainBalance[ERA_CHAIN_ID][_l1Token] += _amount;
    }

    bytes memory l2TxCalldata = _getDepositL2Calldata(_prevMsgSender, _l2Receiver, _l1Token, _amount);

    {
        // If the refund recipient is not specified, the refund will be sent to the sender of the transaction.
        // Otherwise, the refund will be sent to the specified address.
        // If the recipient is a contract on L1, the address alias will be applied.
        address refundRecipient = _refundRecipient;
        if (_refundRecipient == address(0)) {
            refundRecipient = _prevMsgSender != tx.origin
                ? AddressAliasHelper.applyL1ToL2Alias(_prevMsgSender)
                : _prevMsgSender;
        }

        L2TransactionRequestDirect memory request = L2TransactionRequestDirect({
            chainId: ERA_CHAIN_ID,
            l2Contract: l2BridgeAddress[ERA_CHAIN_ID],
            mintValue: msg.value, // l2 gas + l2 msg.Value the bridgehub will withdraw the mintValue from the base token bridge for gas
            l2Value: 0, // L2 msg.value, this contract doesn't support base token deposits or wrapping functionality, for direct deposits use bridgehub
            l2Calldata: l2TxCalldata,
            l2GasLimit: _l2TxGasLimit,
            l2GasPerPubdataByteLimit: _l2TxGasPerPubdataByte,
            factoryDeps: new bytes[](0),
            refundRecipient: refundRecipient
        });
        l2TxHash = bridgehub.requestL2TransactionDirect{value: msg.value}(request);
    }

    bytes32 txDataHash = keccak256(abi.encode(_prevMsgSender, _l1Token, _amount));
    // Save the deposited amount to claim funds on L1 if the deposit failed on L2
    depositHappened[ERA_CHAIN_ID][l2TxHash] = txDataHash;

    emit LegacyDepositInitiated(ERA_CHAIN_ID, l2TxHash, _prevMsgSender, _l2Receiver, _l1Token, _amount);
}
```
Evidently, we can see that nothing is explained for the `@param _prevMsgSender`.
```solidity
/// @notice Allows bridgehub to acquire mintValue for L1->L2 transactions.
/// @dev If the corresponding L2 transaction fails, refunds are issued to a refund recipient on L2.
function bridgehubDepositBaseToken(
    uint256 _chainId,
    address _prevMsgSender,
    address _l1Token,
    uint256 _amount
) external payable virtual onlyBridgehubOrEra(_chainId) {
    if (_l1Token == ETH_TOKEN_ADDRESS) {
        require(msg.value == _amount, "L1SharedBridge: msg.value not equal to amount");
    } else {
        // The Bridgehub also checks this, but we want to be sure
        require(msg.value == 0, "ShB m.v > 0 b d.it");
        uint256 amount = _depositFunds(_prevMsgSender, IERC20(_l1Token), _amount); // note if _prevMsgSender is this contract, this will return 0. This does not happen.
        require(amount == _amount, "3T"); // The token has non-standard transfer logic
    }

    if (!hyperbridgingEnabled[_chainId]) {
        chainBalance[_chainId][_l1Token] += _amount;
    }
    // Note that we don't save the deposited amount, as this is for the base token, which gets sent to the refundRecipient if the tx fails
    emit BridgehubDepositBaseTokenInitiated(_chainId, _prevMsgSender, _l1Token, _amount);
}
```
The `onlyBridgehubOrEra` modifier is used here, so the documentation should state that it "Allows bridgehub or era to acquire mintValue for L1->L2 transactions".
```solidity
function _parseL2WithdrawalMessage(
    uint256 _chainId,
    bytes memory _l2ToL1message
) internal view returns (address l1Receiver, address l1Token, uint256 amount) {
    // We check that the message is long enough to read the data.
    // Please note that there are two versions of the message:
    // 1. The message that is sent by `withdraw(address _l1Receiver)`
    // It should be equal to the length of the bytes4 function signature + address l1Receiver + uint256 amount = 4 + 20 + 32 = 56 (bytes).
    // 2. The message that is sent by `withdrawWithMessage(address _l1Receiver, bytes calldata _additionalData)`
    // It should be equal to the length of the following:
    //@audit
    // bytes4 function signature + address l1Receiver + uint256 amount + address l2Sender + bytes _additionalData =
    // = 4 + 20 + 32 + 32 + _additionalData.length >= 68 (bytes).

    // So the data is expected to be at least 56 bytes long.
    require(_l2ToL1message.length >= 56, "ShB wrong msg len"); // wrong messsage length

    (uint32 functionSignature, uint256 offset) = UnsafeBytes.readUint32(_l2ToL1message, 0);
    if (bytes4(functionSignature) == IMailbox.finalizeEthWithdrawal.selector) {
        // this message is a base token withdrawal
        (l1Receiver, offset) = UnsafeBytes.readAddress(_l2ToL1message, offset);
        (amount, offset) = UnsafeBytes.readUint256(_l2ToL1message, offset);
        l1Token = bridgehub.baseToken(_chainId);
    } else if (bytes4(functionSignature) == IL1ERC20Bridge.finalizeWithdrawal.selector) {
        // We use the IL1ERC20Bridge for backward compatibility with old withdrawals.
        // this message is a token withdrawal

        // Check that the message length is correct.
        // It should be equal to the length of the function signature + address + address + uint256 = 4 + 20 + 20 + 32 =
        // 76 (bytes).
        require(_l2ToL1message.length == 76, "ShB wrong msg len 2");
        (l1Receiver, offset) = UnsafeBytes.readAddress(_l2ToL1message, offset);
        (l1Token, offset) = UnsafeBytes.readAddress(_l2ToL1message, offset);
        (amount, offset) = UnsafeBytes.readUint256(_l2ToL1message, offset);
    } else {
        revert("ShB Incorrect message function selector");
    }
}
```
Evidently, in this case, we can see that the math in `// bytes4 function signature + address l1Receiver + uint256 amount + address l2Sender + bytes _additionalData = 4 + 20 + 32 + 32 + _additionalData.length >= 68 (bytes)` is wrong, and `68` should instead be `88` (4 + 20 + 32 + 32 = 88).
Incomplete code documentation makes it harder to understand the code.
Apply sufficient natspec explanations where necessary.
Setters lack equality/zero checkers. There are multiple instances where the protocol does not validate input data; to list a few, take a look at https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/governance/Governance.sol#L248-L260
```solidity
/// @dev Changes the minimum timelock duration for future operations.
/// @param _newDelay The new minimum delay time (in seconds) for future operations.
function updateDelay(uint256 _newDelay) external onlySelf {
    emit ChangeMinDelay(minDelay, _newDelay);
    minDelay = _newDelay;
}

/// @dev Updates the address of the security council.
/// @param _newSecurityCouncil The address of the new security council.
function updateSecurityCouncil(address _newSecurityCouncil) external onlySelf {
    emit ChangeSecurityCouncil(securityCouncil, _newSecurityCouncil);
    securityCouncil = _newSecurityCouncil;
}
```
Evidently, these functions are used for updating governance values, but no checks exist to ensure that the values being passed are not the same as the stored values.
```solidity
/// @notice Update the used version of the account.
/// @param _version The new version of the AA protocol to use.
/// @dev Note that it allows changes from account to non-account and vice versa.
function updateAccountVersion(AccountAbstractionVersion _version) external onlySystemCall {
    accountInfo[msg.sender].supportedAAVersion = _version;

    emit AccountVersionUpdated(msg.sender, _version);
}
```
As we can see, there's no verification that `_version` is different from `accountInfo[msg.sender].supportedAAVersion`, or that it is not `0`.
No input validation in this case leads to unnecessary execution of code when the value is the same, or even when it is zero.
Implement equality checkers in setter functions.
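A sketch of the suggested checks on the Governance setters (error strings are illustrative only):

```diff
function updateDelay(uint256 _newDelay) external onlySelf {
+   require(_newDelay != minDelay, "Governance: delay unchanged");
    emit ChangeMinDelay(minDelay, _newDelay);
    minDelay = _newDelay;
}

function updateSecurityCouncil(address _newSecurityCouncil) external onlySelf {
+   require(_newSecurityCouncil != address(0), "Governance: zero security council");
+   require(_newSecurityCouncil != securityCouncil, "Governance: council unchanged");
    emit ChangeSecurityCouncil(securityCouncil, _newSecurityCouncil);
    securityCouncil = _newSecurityCouncil;
}
```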
Multiple error messages attached to `require` are not descriptive. This is fairly rampant in the code; taking `Admin.sol` as the primary case to prove this bug:
Take a look at these instances
```solidity
require(msg.sender == pendingAdmin, "n4"); // Only proposed by current admin address can claim the admin rights
require(_newPriorityTxMaxGasLimit <= MAX_GAS_PER_TRANSACTION, "n5");
require(_newFeeParams.maxPubdataPerBatch >= _newFeeParams.priorityTxMaxPubdata, "n6");
require(!diamondStorage.isFrozen, "a9"); // diamond proxy is frozen already
```
Other instances can be scoped out by using this search keyword: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+require%28+NOT+language%3ATypeScript+NOT+language%3AMarkdown&type=code
Whereas some instances have the error cases explained in comments, this still does not provide a descriptive message for the `require()` itself, and as such it hampers the understanding of the protocol.
Consider adding descriptive messages after each require()
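For example (the replacement messages are suggestions only):

```diff
- require(msg.sender == pendingAdmin, "n4"); // Only proposed by current admin address can claim the admin rights
+ require(msg.sender == pendingAdmin, "Admin: only the pending admin can accept the admin rights");
- require(_newPriorityTxMaxGasLimit <= MAX_GAS_PER_TRANSACTION, "n5");
+ require(_newPriorityTxMaxGasLimit <= MAX_GAS_PER_TRANSACTION, "Admin: gas limit exceeds MAX_GAS_PER_TRANSACTION");
```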
There are multiple instances of going against best styling practices; one of them can be seen in `Executor.sol`. Take a look at https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Executor.sol#L6
```solidity
import {COMMIT_TIMESTAMP_NOT_OLDER, COMMIT_TIMESTAMP_APPROXIMATION_DELTA, EMPTY_STRING_KECCAK, L2_TO_L1_LOG_SERIALIZE_SIZE, MAX_L2_TO_L1_LOGS_COMMITMENT_BYTES, PACKED_L2_BLOCK_TIMESTAMP_MASK, PUBLIC_INPUT_SHIFT, POINT_EVALUATION_PRECOMPILE_ADDR} from "../../../common/Config.sol";
```
It covers the whole screen; it should instead include line breaks for better readability.
Difficulties while reading through code
Follow best styling practices
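For instance, the same import broken across lines:

```solidity
import {
    COMMIT_TIMESTAMP_NOT_OLDER,
    COMMIT_TIMESTAMP_APPROXIMATION_DELTA,
    EMPTY_STRING_KECCAK,
    L2_TO_L1_LOG_SERIALIZE_SIZE,
    MAX_L2_TO_L1_LOGS_COMMITMENT_BYTES,
    PACKED_L2_BLOCK_TIMESTAMP_MASK,
    PUBLIC_INPUT_SHIFT,
    POINT_EVALUATION_PRECOMPILE_ADDR
} from "../../../common/Config.sol";
```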
```solidity
// While this does not provide a protection in the production, it is needed for local testing
// Length of the L2Log encoding should not be equal to the length of other L2Logs' tree nodes preimages
assert(L2_TO_L1_LOG_SERIALIZE_SIZE != 2 * 32);
```
This check is used to ensure that the length of the L2Log encoding is not equal to the length of other L2Logs' tree-node preimages, but according to the code comment, the check is only in place for local testing; also, from here we can see that `L2_TO_L1_LOG_SERIALIZE_SIZE` is a constant set to `88`.
Bad code structure; unnecessary code irrelevant to production is being left in the protocol.
Remove this assertion from the final released code.
```solidity
/// @dev The amount of time in seconds the validator has to process the priority transaction
/// NOTE: The constant is set to zero for the Alpha release period
//@audit this is still 0?
uint256 constant PRIORITY_EXPIRATION = 0 days;
```
Evidently, `PRIORITY_EXPIRATION` is set to zero, but this was meant for the Alpha release period, which ended in April. Now consider https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Mailbox.sol#L258-L287
```solidity
function _requestL2Transaction(
    uint256 _mintValue,
    WritePriorityOpParams memory _params,
    bytes memory _calldata,
    bytes[] memory _factoryDeps
) internal returns (bytes32 canonicalTxHash) {
    require(_factoryDeps.length <= MAX_NEW_FACTORY_DEPS, "uj");
    _params.txId = s.priorityQueue.getTotalPriorityTxs();

    // Checking that the user provided enough ether to pay for the transaction.
    // Using a new scope to prevent "stack too deep" error
    _params.l2GasPrice = _deriveL2GasPrice(tx.gasprice, _params.l2GasPricePerPubdata);
    uint256 baseCost = _params.l2GasPrice * _params.l2GasLimit;
    require(_mintValue >= baseCost + _params.l2Value, "mv"); // The `msg.value` doesn't cover the transaction cost

    // If the `_refundRecipient` is not provided, we use the `_sender` as the recipient.
    address refundRecipient = _params.refundRecipient == address(0) ? _params.sender : _params.refundRecipient;
    // If the `_refundRecipient` is a smart contract, we apply the L1 to L2 alias to prevent foot guns.
    if (refundRecipient.code.length > 0) {
        refundRecipient = AddressAliasHelper.applyL1ToL2Alias(refundRecipient);
    }
    _params.refundRecipient = refundRecipient;

    // populate missing fields
    //@audit
    _params.expirationTimestamp = uint64(block.timestamp + PRIORITY_EXPIRATION); // Safe to cast
    _params.valueToMint = _mintValue;

    canonicalTxHash = _writePriorityOp(_params, _calldata, _factoryDeps);
}
```
We can see that the expiration parameter uses `PRIORITY_EXPIRATION` to determine the deadline for validators to process this transaction, but the value being 0 means every transaction's expiration would equal the timestamp at which the transaction was requested, which is logically flawed.
Outdated code still in production.
Since the Alpha release period has passed, the necessary value for `PRIORITY_EXPIRATION` should be set and stored in `Config.sol`.
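For instance (the concrete duration is for the team to decide; 3 days is purely illustrative):

```diff
/// @dev The amount of time in seconds the validator has to process the priority transaction
- /// NOTE: The constant is set to zero for the Alpha release period
- uint256 constant PRIORITY_EXPIRATION = 0 days;
+ uint256 constant PRIORITY_EXPIRATION = 3 days;
```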
Multiple facets would need to be redeployed for changes to `L2ContractHelper`/`Config.sol` in case of updates. Several functions and constants within `L2ContractHelper`/`Config.sol` are utilized by multiple Diamond facets across the smart contract system. Modifications to these shared elements necessitate Governor intervention to update all relevant facets; non-compliance can lead to inconsistencies and potentially critical issues due to mismatched functionality across facets.
A search using the following query confirms this concern: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+L2ContractHelper.hashL2Bytecode&type=code
The search results demonstrate that the `hashL2Bytecode` function is employed within various facets, including `BaseZkSyncUpgrade.sol`, `Mailbox.sol`, `L1ERC20Bridge.sol`, and `L1EthBridge.sol`. Consequently, a change to `hashL2Bytecode` would necessitate redeploying all four of these facets.
Low
To eliminate the need for widespread redeployments upon updates to `L2ContractHelper` and `Config.sol`, we propose integrating them as Diamond facets themselves. This approach enables other facets to access the functions and constants through cross-facet calls, while external contracts can interact with them via the Diamond. With this structure, modifications only require redeploying and replacing the specific facet containing the updated elements.
This mitigation strategy streamlines the update process for shared functionalities, minimizing the need for extensive redeployment across multiple facets.
`L1SharedBridge.sol` might encounter accounting errors. Take a look at the new implementation of the Shared Bridge and how tokens are obtained from the legacy bridge: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L115-L136
```solidity
/// @dev tranfer tokens from legacy erc20 bridge or mailbox and set chainBalance as part of migration process
function transferFundsFromLegacy(address _token, address _target, uint256 _targetChainId) external onlyOwner {
    if (_token == ETH_TOKEN_ADDRESS) {
        uint256 balanceBefore = address(this).balance;
        IMailbox(_target).transferEthToSharedBridge();
        uint256 balanceAfter = address(this).balance;
        require(balanceAfter > balanceBefore, "ShB: 0 eth transferred");
        chainBalance[_targetChainId][ETH_TOKEN_ADDRESS] =
            chainBalance[_targetChainId][ETH_TOKEN_ADDRESS] +
            balanceAfter -
            balanceBefore;
    } else {
        uint256 balanceBefore = IERC20(_token).balanceOf(address(this));
        uint256 amount = IERC20(_token).balanceOf(address(legacyBridge));
        require(amount > 0, "ShB: 0 amount to transfer");
        IL1ERC20Bridge(_target).tranferTokenToSharedBridge(_token, amount);
        uint256 balanceAfter = IERC20(_token).balanceOf(address(this));
        require(balanceAfter - balanceBefore == amount, "ShB: wrong amount transferred");
        chainBalance[_targetChainId][_token] = chainBalance[_targetChainId][_token] + amount;
    }
}
```
Now this function calls https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L63-L69
```solidity
function tranferTokenToSharedBridge(address _token, uint256 _amount) external {
    require(msg.sender == address(sharedBridge), "Not shared bridge");
    uint256 amount = IERC20(_token).balanceOf(address(this));
    require(amount == _amount, "Incorrect amount");
    IERC20(_token).safeTransfer(address(sharedBridge), amount);
}
```
The issue with this implementation is the line `require(balanceAfter - balanceBefore == amount, "ShB: wrong amount transferred");`. This automatically means that fee-on-transfer tokens are not supported: once a fee is charged while transferring the funds, the difference between the former balance and the new balance will not equal `amount`, so the attempt to transfer the funds would always revert. Note that the protocol also can't specify a lower amount to be sent, due to the check that ensures the whole balance is sent.
Now, whereas the protocol does not currently work with FOT tokens, there are existing tokens whose implementations could be updated to charge fees, effectively breaking the protocol if that were to happen.
Funds are potentially stuck in the ERC20 bridge; also, core functionality is broken, given that `transferFundsFromLegacy()` won't work for these tokens.
Inasmuch as the aim of transferring these tokens from the ERC20 bridge is to clear the bridge of the tokens in it, the finalization check could be changed from `balanceAfter - balanceBefore == amount` to a check that ensures `IERC20(_token).balanceOf(address(legacyBridge)) == 0` after the transaction.
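A sketch of that change; crediting only what was actually received also keeps the accounting honest for non-standard tokens:

```diff
    IL1ERC20Bridge(_target).tranferTokenToSharedBridge(_token, amount);
    uint256 balanceAfter = IERC20(_token).balanceOf(address(this));
-   require(balanceAfter - balanceBefore == amount, "ShB: wrong amount transferred");
-   chainBalance[_targetChainId][_token] = chainBalance[_targetChainId][_token] + amount;
+   require(IERC20(_token).balanceOf(address(legacyBridge)) == 0, "ShB: legacy bridge not drained");
+   chainBalance[_targetChainId][_token] =
+       chainBalance[_targetChainId][_token] +
+       (balanceAfter - balanceBefore);
```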
`Ecrecover` gas cost has been massively increased without a significant increase in execution cost. Take a look at the gas cost for ecrecover: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/precompiles/Ecrecover.yul#L22-L25
```yul
/// @dev The gas cost of processing ecrecover circuit precompile.
function ECRECOVER_GAS_COST() -> ret {
    ret := 7000
}
```
It's been massively increased to more than 6x its former cost of 1112; it is key to note that the execution of this contract is still the same, without any excessive additional hashing/compiling that justifies this addition.
Users are now charged more than 6x the former cost for `ECRECOVER`, whereas there is no real standout addition in execution cost.
Consider reducing this cost, or always attach extensive documentation whenever an update happens to a variable, especially ones attached to costs.
The `CodeOracle` precompile somewhat deviates from EVM's `extcodecopy` behavior

```solidity
* @notice The contract used to emulate EVM's `extcodecopy` behavior.
```
We can see that this contract is meant to emulate the EVM's `extcodecopy` behavior, but it contains multiple deviations from Ethereum's native implementation; to list a few:
- `extcodecopy` allows specifying the destination memory location; `CodeOracle` relies on zkEVM's internal memory management, potentially reusing the same memory page for subsequent decommits, leading to caching issues.
- Unlike `extcodecopy`, which copies a specific size of code, `CodeOracle` decommits the entire code based on a versioned hash stored in a separate contract.
- `extcodecopy` handles out-of-gas situations during memory access; `CodeOracle` might not handle such cases, potentially leading to unexpected behavior.

Borderline medium/low, since this is a deviation from Ethereum's specific implementation and would lead to confusion.
Consider making it very similar to Ethereum's implementation or clearly document the deviation.
No minimum threshold is enforced on `minDelay`

```solidity
function updateDelay(uint256 _newDelay) external onlySelf {
    emit ChangeMinDelay(minDelay, _newDelay);
    minDelay = _newDelay;
}
```
The `updateDelay` function within the Governance contract lacks a minimum threshold for the `_newDelay` parameter, which controls the `minDelay` value. If `_newDelay` gets set to, say, zero, this effectively removes the security council's oversight.
This lack of synchronization between the different time delays used in the upgrade-scheduling process creates the possibility of conflicting scenarios. These delays are defined in separate contracts, i.e. `minDelay` and `UPGRADE_NOTICE_PERIOD` in `Governance.sol` & `executionDelay` in `ValidatorTimelock.sol`.
If `executionDelay` exceeds `minDelay` or `UPGRADE_NOTICE_PERIOD`, the upgrade might be executed on L1 before users can withdraw their funds. Even if `executionDelay` is shorter than `minDelay`, a malicious validator could intentionally delay the process, causing a similar issue.
These loopholes in the zkSync governance contracts could be exploited to manipulate the upgrade process and potentially harm users.
One window allows the owner to bypass the security council's oversight by setting the minimum approval delay for operations to zero. This timeframe, known as `minDelay`, is crucial because it gives the security council time to review and potentially cancel operations proposed by the admin. With `minDelay` at zero, the security council's approval becomes irrelevant, and the admin can execute operations unilaterally. This undermines the two-actor security model intended for governance.
NB: It might as well not be zero but a very short time frame; the idea is still the same.
Another window stems from inconsistencies between the different time delays used during upgrade scheduling. These delays include `minDelay` (time for security council review), `UPGRADE_NOTICE_PERIOD` (user notification window), and `executionDelay` (delay before upgrade execution on L1). If `minDelay` or `UPGRADE_NOTICE_PERIOD` are shorter than `executionDelay`, it can lead to unpredictable outcomes for users: withdrawal requests might be stuck in limbo or executed before the upgrade, causing users to miss out on crucial information or protections.
For instance, the time-delay inconsistencies could be problematic to the upgrade process in a way that impacts users attempting to withdraw ETH before an upgrade; say `finalizeEthWithdrawal()` gets upgraded to function a bit differently, users with in-flight transactions would be directly impacted.
Enforce a minimum `minDelay`: the `updateDelay` function should be modified to require that `_newDelay` be greater than `ValidatorTimelock.executionDelay()`, with an additional buffer of at least 2 days to account for potential validator delays. This ensures the security council has sufficient time to review operations.
Also, consider establishing a dependency between `minDelay` (or `UPGRADE_NOTICE_PERIOD`) and `executionDelay`. This could involve enforcing `minDelay` to always be greater than `executionDelay`, with a reasonable buffer.
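A sketch of the first suggestion, assuming `Governance` is given a reference to the ValidatorTimelock (the `validatorTimelock` variable below is a hypothetical new addition):

```diff
function updateDelay(uint256 _newDelay) external onlySelf {
+   // Keep minDelay above the validator execution delay plus a review buffer,
+   // so the security council always has time to veto scheduled operations.
+   require(
+       _newDelay >= validatorTimelock.executionDelay() + 2 days,
+       "Governance: minDelay too short"
+   );
    emit ChangeMinDelay(minDelay, _newDelay);
    minDelay = _newDelay;
}
```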
`BaseZkSyncUpgradeGenesis::_setNewProtocolVersion()` should not allow the protocol version difference to be up to `MAX_ALLOWED_PROTOCOL_VERSION_DELTA`

```solidity
function _setNewProtocolVersion(uint256 _newProtocolVersion) internal override {
    uint256 previousProtocolVersion = s.protocolVersion;
    require(
        // Genesis Upgrade difference: Note this is the only thing change > to >=
        _newProtocolVersion >= previousProtocolVersion,
        "New protocol version is not greater than the current one"
    );
    require(
        _newProtocolVersion - previousProtocolVersion <= MAX_ALLOWED_PROTOCOL_VERSION_DELTA,
        "Too big protocol version difference"
    );

    // If the previous upgrade had an L2 system upgrade transaction, we require that it is finalized.
    require(s.l2SystemContractsUpgradeTxHash == bytes32(0), "Previous upgrade has not been finalized");
    require(
        s.l2SystemContractsUpgradeBatchNumber == 0,
        "The batch number of the previous upgrade has not been cleaned"
    );

    s.protocolVersion = _newProtocolVersion;
    emit NewProtocolVersion(previousProtocolVersion, _newProtocolVersion);
}
```
Considering the only change between this and the non-Genesis `BaseZkSyncUpgrade` is that the protocol version is allowed to be set to the current one (i.e. the "Genesis Upgrade difference": the `>` became `>=` in `_newProtocolVersion >= previousProtocolVersion`), then, to stay in alignment with `Config.sol` allowing `100` as the delta, the allowed difference needs to be dropped by 1, since the lower end of the possible values has been extended to accept `previousProtocolVersion`.
Non-critical; just to be more in alignment with `Config.sol`.
Consider reimplementing the logic for the genesis upgrade.
```solidity
/// @dev Processes claims of failed deposit, whether they originated from the legacy bridge or the current system.
function _claimFailedDeposit(
    bool _checkedInLegacyBridge,
    uint256 _chainId,
    address _depositSender,
    address _l1Token,
    uint256 _amount,
    bytes32 _l2TxHash,
    uint256 _l2BatchNumber,
    uint256 _l2MessageIndex,
    uint16 _l2TxNumberInBatch,
    bytes32[] calldata _merkleProof
) internal nonReentrant {
    {
        bool proofValid = bridgehub.proveL1ToL2TransactionStatus(
            _chainId,
            _l2TxHash,
            _l2BatchNumber,
            _l2MessageIndex,
            _l2TxNumberInBatch,
            _merkleProof,
            TxStatus.Failure
        );
        require(proofValid, "yn");
    }
    require(_amount > 0, "y1");

    {
        bool notCheckedInLegacyBridgeOrWeCanCheckDeposit;
        {
            // Deposits that happened before the upgrade cannot be checked here, they have to be claimed and checked in the legacyBridge
            bool weCanCheckDepositHere = !_isEraLegacyWithdrawal(_chainId, _l2BatchNumber);
            // Double claims are not possible, as we this check except for legacy bridge withdrawals
            // Funds claimed before the update will still be recorded in the legacy bridge
            // Note we double check NEW deposits if they are called from the legacy bridge
            notCheckedInLegacyBridgeOrWeCanCheckDeposit = (!_checkedInLegacyBridge) || weCanCheckDepositHere;
        }
        if (notCheckedInLegacyBridgeOrWeCanCheckDeposit) {
            bytes32 dataHash = depositHappened[_chainId][_l2TxHash];
            bytes32 txDataHash = keccak256(abi.encode(_depositSender, _l1Token, _amount));
            require(dataHash == txDataHash, "ShB: d.it not hap");
            delete depositHappened[_chainId][_l2TxHash];
        }
    }

    if (!hyperbridgingEnabled[_chainId]) {
        // check that the chain has sufficient balance
        require(chainBalance[_chainId][_l1Token] >= _amount, "ShB n funds");
        chainBalance[_chainId][_l1Token] -= _amount;
    }

    // Withdraw funds
    if (_l1Token == ETH_TOKEN_ADDRESS) {
        bool callSuccess;
        // Low-level assembly call, to avoid any memory copying (save gas)
        assembly {
            callSuccess := call(gas(), _depositSender, _amount, 0, 0, 0, 0)
        }
        require(callSuccess, "ShB: claimFailedDeposit failed");
    } else {
        IERC20(_l1Token).safeTransfer(_depositSender, _amount);
        // Note we don't allow weth deposits anymore, but there might be legacy weth deposits.
        // until we add Weth bridging capabilities, we don't wrap/unwrap weth to ether.
    }

    emit ClaimedFailedDepositSharedBridge(_chainId, _depositSender, _l1Token, _amount);
}
```
This function is inevitably called when claiming failed deposits. The case here is that the logic hardcodes the receiver and expects it to be able to deal with the tokens; different things could cause the address to be unable to handle them: for example, say the `_depositSender` can't handle ETH or has no receive function, or, for the case of a normal token, assume it gets blacklisted or runs into some other sort of issue; the tokens are then indefinitely locked in the bridge.
Low, since this somewhat relies on the user having used an address that cannot accept their failed deposits from the get-go.
As a popular recommendation, it's advisable to implement a pull method for withdrawals, i.e. users should be allowed to provide a fresh deposit address rather than being forced to use the one that made the deposit.
NB: Whereas acknowledged findings have been said to be OOS, I'm considering a fresh case of the bug and submitting since this is a new contract and hasn't been reported before.
```solidity
function upgrade(ProposedUpgrade calldata _proposedUpgrade) public virtual returns (bytes32 txHash) {
    // Note that due to commitment delay, the timestamp of the L2 upgrade batch may be earlier than the timestamp
    // of the L1 block at which the upgrade occurred. This means that using timestamp as a signifier of "upgraded"
    // on the L2 side would be inaccurate. The effects of this "back-dating" of L2 upgrade batches will be reduced
    // as the permitted delay window is reduced in the future.
    require(block.timestamp >= _proposedUpgrade.upgradeTimestamp, "Upgrade is not ready yet");

    _setNewProtocolVersion(_proposedUpgrade.newProtocolVersion);
    _upgradeL1Contract(_proposedUpgrade.l1ContractsUpgradeCalldata);
    _upgradeVerifier(_proposedUpgrade.verifier, _proposedUpgrade.verifierParams);
    _setBaseSystemContracts(_proposedUpgrade.bootloaderHash, _proposedUpgrade.defaultAccountHash);

    txHash = _setL2SystemContractUpgrade(
        _proposedUpgrade.l2ProtocolUpgradeTx,
        _proposedUpgrade.factoryDeps,
        _proposedUpgrade.newProtocolVersion
    );

    _postUpgrade(_proposedUpgrade.postUpgradeCalldata);

    emit UpgradeComplete(_proposedUpgrade.newProtocolVersion, txHash, _proposedUpgrade);
}
```
We can see that while upgrading, the call gets routed through `_upgradeVerifier() -> _setVerifier() -> _setVerifierParams()`. The issue here is that, as commented, an upgrade must be carefully done to ensure there aren't batches in the committed state during the transition, so that the new verifier is not immediately used to prove already-committed batches. But this can easily be enforced by applying an equivalence check while upgrading, i.e. `require(s.totalBatchesCommitted == s.totalBatchesVerified);`.
Admin error, but the wrong verifier could be used to prove batches committed during the transition state.
Introduce the check `require(s.totalBatchesCommitted == s.totalBatchesVerified);` to ensure this never happens.
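A sketch of where the check could live in `BaseZkSyncUpgrade.upgrade()`:

```diff
function upgrade(ProposedUpgrade calldata _proposedUpgrade) public virtual returns (bytes32 txHash) {
    require(block.timestamp >= _proposedUpgrade.upgradeTimestamp, "Upgrade is not ready yet");
+   // Ensure the new verifier can never be asked to prove batches that were
+   // committed against the old verifier.
+   require(s.totalBatchesCommitted == s.totalBatchesVerified, "Committed batches pending verification");

    _setNewProtocolVersion(_proposedUpgrade.newProtocolVersion);
```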
```solidity
function finalizeWithdrawal() external override {
    // To avoid rewithdrawing txs that have already happened on the legacy bridge.
    // Note: new withdraws are all recorded here, so double withdrawing them is not possible.
    //@audit
    if (_isEraLegacyWithdrawal(_chainId, _l2BatchNumber)) {
        require(!legacyBridge.isWithdrawalFinalized(_l2BatchNumber, _l2MessageIndex), "ShB: legacy withdrawal");
    }
    _finalizeWithdrawal(_chainId, _l2BatchNumber, _l2MessageIndex, _l2TxNumberInBatch, _message, _merkleProof);
}

/// @dev Internal function that handles the logic for finalizing withdrawals,
/// serving both the current bridge system and the legacy ERC20 bridge.
function _finalizeWithdrawal() internal nonReentrant returns (address l1Receiver, address l1Token, uint256 amount) {
    require(!isWithdrawalFinalized[_chainId][_l2BatchNumber][_l2MessageIndex], "Withdrawal is already finalized");
    isWithdrawalFinalized[_chainId][_l2BatchNumber][_l2MessageIndex] = true;

    // Handling special case for withdrawal from zkSync Era initiated before Shared Bridge.
    //@audit
    if (_isEraLegacyWithdrawal(_chainId, _l2BatchNumber)) {
        // Checks that the withdrawal wasn't finalized already.
        bool alreadyFinalized = IGetters(ERA_DIAMOND_PROXY).isEthWithdrawalFinalized(
            _l2BatchNumber,
            _l2MessageIndex
        );
        require(!alreadyFinalized, "Withdrawal is already finalized 2");
    }

    // omitted for brevity
}
```
These functions are both used for finalizing withdrawals, with multiple checks. The issue here is that the check for whether the transaction is an Era legacy withdrawal is implemented twice, which is unnecessary.
NC - Overcomplication of code.
The check can be left just to the internal function while finalizing the withdrawal.
Wrong implementation of `isFacetFreezable()`

```solidity
/// @inheritdoc IGetters
function isFacetFreezable(address _facet) external view returns (bool isFreezable) {
    Diamond.DiamondStorage storage ds = Diamond.getDiamondStorage();

    // There is no direct way to get whether the facet address is freezable,
    // so we get it from one of the selectors that are associated with the facet.
    uint256 selectorsArrayLen = ds.facetToSelectors[_facet].selectors.length;
    //@audit
    if (selectorsArrayLen != 0) {
        bytes4 selector0 = ds.facetToSelectors[_facet].selectors[0];
        isFreezable = ds.selectorToFacet[selector0].isFreezable;
    }
}
```
This function is used to determine whether a facet is freezable. The issue here is that it wrongly uses `if()` instead of `require()`: since `if` is used, in the case where the facet is non-existent, `selectorsArrayLen == 0` would hold and the function would return `false`, whereas it should revert since the facet is non-existent.
Users would be confused by this, since there is no difference between unfreezable facets and non-existent ones under the current implementation of `isFacetFreezable()`.
Consider changing the `if()` condition to a `require()`; that way, only real unfreezable facets would return `false`.
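A sketch of the change (the error string is illustrative):

```diff
function isFacetFreezable(address _facet) external view returns (bool isFreezable) {
    Diamond.DiamondStorage storage ds = Diamond.getDiamondStorage();

    uint256 selectorsArrayLen = ds.facetToSelectors[_facet].selectors.length;
-   if (selectorsArrayLen != 0) {
-       bytes4 selector0 = ds.facetToSelectors[_facet].selectors[0];
-       isFreezable = ds.selectorToFacet[selector0].isFreezable;
-   }
+   require(selectorsArrayLen != 0, "facet does not exist");
+   bytes4 selector0 = ds.facetToSelectors[_facet].selectors[0];
+   isFreezable = ds.selectorToFacet[selector0].isFreezable;
}
```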
Remove the trailing comma in calls to `isNotEnoughGasForPubdata` in the bootloader to enforce no error in the Yul syntax

```yul
if isNotEnoughGasForPubdata(
    basePubdataSpent,
    gas(),
    reservedGas,
    gasPerPubdata,
) {
```

```yul
if isNotEnoughGasForPubdata(
    basePubdataSpent,
    gas(),
    // Note, that for L1->L2 transactions the reserved gas is used to protect the operator from
    // transactions that might accidentally cause to publish too many pubdata.
    // Thus, even if there is some accidental `reservedGas` left, it should not be used to publish pubdata.
    0,
    gasPerPubdata,
) {
```
We can see that there is a trailing comma in the function call, but this is invalid syntax according to Yul's formal specification: https://docs.soliditylang.org/en/latest/yul.html#specification-of-yul
Another instance, not related to `isNotEnoughGasForPubdata`, can be seen here: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/bootloader/bootloader.yul#L972-L977
Borderline low; this is because the `Bootloader.yul` code would not compile with a strict Yul compiler. Even if it compiles with the current zkEVM implementation, it potentially necessitates an update to the `Bootloader.yul` code and its hash on Layer 1; such an update would require a system upgrade.
Consider removing the trailing comma in both cases.
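For the first instance, the fix is simply:

```diff
if isNotEnoughGasForPubdata(
    basePubdataSpent,
    gas(),
    reservedGas,
-   gasPerPubdata,
+   gasPerPubdata
) {
```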
Wrong ordering of fields when constructing `BridgehubL2TransactionRequest` in `_requestL2Transaction()`

```solidity
function requestL2Transaction(
    address _contractL2,
    uint256 _l2Value,
    bytes calldata _calldata,
    uint256 _l2GasLimit,
    uint256 _l2GasPerPubdataByteLimit,
    bytes[] calldata _factoryDeps,
    address _refundRecipient
) external payable returns (bytes32 canonicalTxHash) {
    require(s.chainId == ERA_CHAIN_ID, "legacy interface only available for era token");
    canonicalTxHash = _requestL2TransactionSender(
        BridgehubL2TransactionRequest({
            sender: msg.sender,
            contractL2: _contractL2,
            mintValue: msg.value,
            l2Value: _l2Value,
            //@audit l2Calldata should come before gas limit
            l2GasLimit: _l2GasLimit,
            l2Calldata: _calldata,
            l2GasPerPubdataByteLimit: _l2GasPerPubdataByteLimit,
            factoryDeps: _factoryDeps,
            refundRecipient: _refundRecipient
        })
    );
    IL1SharedBridge(s.baseTokenBridge).bridgehubDepositBaseToken{value: msg.value}(
        s.chainId,
        msg.sender,
        ETH_TOKEN_ADDRESS,
        msg.value
    );
}
```
Now take a look at the definition of the `BridgehubL2TransactionRequest` struct, from https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/common/Messaging.sol#L127-L137
```solidity
struct BridgehubL2TransactionRequest {
    address sender;
    address contractL2;
    uint256 mintValue;
    uint256 l2Value;
    bytes l2Calldata;
    uint256 l2GasLimit;
    uint256 l2GasPerPubdataByteLimit;
    bytes[] factoryDeps;
    address refundRecipient;
}
```
One can see that, within the `requestL2Transaction()` execution, the positions of `l2GasLimit` and the `l2Calldata` bytes are swapped relative to the struct declaration. Since Solidity matches named struct arguments by name, the encoding itself is unaffected, but the mismatch with the declaration order is error-prone and hurts readability.
Better code structure.
Consider passing the struct fields in their declaration order.
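A possible diff:

```diff
    BridgehubL2TransactionRequest({
        sender: msg.sender,
        contractL2: _contractL2,
        mintValue: msg.value,
        l2Value: _l2Value,
-       l2GasLimit: _l2GasLimit,
-       l2Calldata: _calldata,
+       l2Calldata: _calldata,
+       l2GasLimit: _l2GasLimit,
        l2GasPerPubdataByteLimit: _l2GasPerPubdataByteLimit,
        factoryDeps: _factoryDeps,
        refundRecipient: _refundRecipient
    })
```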
Using this search command: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync%20TODO&type=code, we can see that more than 22 code files are left with multiple TODOs (notably, some contracts have only one; nonetheless, this should be fixed).
Open TODOs often hint that the code is not ready for final production/deployment.
Fix the open TODOs.
`OperationState.Waiting` should be considered the case even when `timestamp == block.timestamp`

```solidity
function getOperationState(bytes32 _id) public view returns (OperationState) {
    uint256 timestamp = timestamps[_id];
    if (timestamp == 0) {
        return OperationState.Unset;
    } else if (timestamp == EXECUTED_PROPOSAL_TIMESTAMP) {
        return OperationState.Done;
    } else if (timestamp > block.timestamp) {
        return OperationState.Waiting;
    } else {
        return OperationState.Ready;
    }
}
```
As seen, this function is used to get the operation state of any operation. One thing to note is that the function is supposed to consider the operation waiting until the timestamp has passed `block.timestamp`, but in the current implementation, even when `timestamp == block.timestamp`, the state would be `Ready` instead of `Waiting`.
Low, info on better code structure
Make the `OperationState.Waiting` check inclusive.
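i.e.:

```diff
-   } else if (timestamp > block.timestamp) {
+   } else if (timestamp >= block.timestamp) {
        return OperationState.Waiting;
```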
```solidity
function getDeploymentNonce(address _address) external view returns (uint256 deploymentNonce) {
    uint256 addressAsKey = uint256(uint160(_address));
    (deploymentNonce, ) = _splitRawNonce(rawNonces[addressAsKey]);

    return deploymentNonce;
}
```
The function returns the deployment nonce for accounts used with the CREATE opcode. The issue is that the return variable is already named in the signature, yet the execution still ends with an explicit `return`.
Redundant code, bad structure
Always remove redundant code from production code.
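i.e.:

```diff
function getDeploymentNonce(address _address) external view returns (uint256 deploymentNonce) {
    uint256 addressAsKey = uint256(uint160(_address));
    (deploymentNonce, ) = _splitRawNonce(rawNonces[addressAsKey]);
-
-   return deploymentNonce;
}
```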
```solidity
function chunkAndPublishPubdata(bytes calldata _pubdata) external onlyCallFrom(address(L1_MESSENGER_CONTRACT)) {
    require(_pubdata.length <= BLOB_SIZE_BYTES * MAX_NUMBER_OF_BLOBS, "pubdata should fit in 2 blobs");

    bytes32[] memory blobHashes = new bytes32[](MAX_NUMBER_OF_BLOBS);

    // We allocate to the full size of MAX_NUMBER_OF_BLOBS * BLOB_SIZE_BYTES because we need to pad
    // the data on the right with 0s if it doesn't take up the full blob
    bytes memory totalBlobs = new bytes(BLOB_SIZE_BYTES * MAX_NUMBER_OF_BLOBS);

    assembly {
        // The pointer to the allocated memory above. We skip 32 bytes to avoid overwriting the length.
        let ptr := add(totalBlobs, 0x20)
        calldatacopy(ptr, _pubdata.offset, _pubdata.length)
    }

    for (uint256 i = 0; i < MAX_NUMBER_OF_BLOBS; i++) {
        uint256 start = BLOB_SIZE_BYTES * i;

        // We break if the pubdata isn't enough to cover 2 blobs. On L1 it is expected that the hash
        // will be bytes32(0) if a blob isn't going to be used.
        if (start >= _pubdata.length) {
            break;
        }

        bytes32 blobHash;
        assembly {
            // The pointer to the allocated memory above skipping the length.
            let ptr := add(totalBlobs, 0x20)
            blobHash := keccak256(add(ptr, start), BLOB_SIZE_BYTES)
        }

        blobHashes[i] = blobHash;
    }

    SystemContractHelper.toL1(true, bytes32(uint256(SystemLogKey.BLOB_ONE_HASH_KEY)), blobHashes[0]);
    SystemContractHelper.toL1(true, bytes32(uint256(SystemLogKey.BLOB_TWO_HASH_KEY)), blobHashes[1]);
}
```
The function is used to chunk pubdata into pieces that can fit into blobs. The issue is that it queries two constant variables and then multiplies them at runtime, when the product can just be hardcoded into the codebase, i.e. this line: `require(_pubdata.length <= BLOB_SIZE_BYTES * MAX_NUMBER_OF_BLOBS, "pubdata should fit in 2 blobs");`. Both `BLOB_SIZE_BYTES` and `MAX_NUMBER_OF_BLOBS` have fixed values of `126976` and `2` respectively, as gotten from https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/Constants.sol
This means that the check could be changed to `require(_pubdata.length <= 253_952, "pubdata should fit in 2 blobs");`
Wastage of gas
Consider making the change, inasmuch as the values are constants.
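A sketch of the hardcoded check (the product is pre-computed from the constant values noted above, since `126_976 * 2 = 253_952`):

```solidity
// Pre-computed product of BLOB_SIZE_BYTES (126_976) and MAX_NUMBER_OF_BLOBS (2)
require(_pubdata.length <= 253_952, "pubdata should fit in 2 blobs");
```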
renounceOwnership()
The `Governance.sol` contract, which plays a pivotal role in managing governance operations within the zkSync protocol, is designed to inherit both the `IGovernance` interface and the `Ownable2Step` contract. During the contract's deployment, the initial ownership is assigned to the `_admin` address through the `Governance.constructor`:
_transferOwnership(_admin);
This design ensures that critical functions, such as `scheduleTransparent` and `scheduleShadow`, are exclusively accessible by the `_admin` (owner) of the contract. However, the incorporation of the `Ownable2Step` contract, which extends the `Ownable` abstract contract, introduces a significant risk. The `Ownable` contract includes a `renounceOwnership` function allowing the current owner to relinquish ownership rights, effectively setting the owner to the zero address:
function renounceOwnership() public virtual onlyOwner { _transferOwnership(address(0)); }
If the `_admin` executes the `renounceOwnership` function, it would render all owner-exclusive functionalities, including upgrade scheduling, inoperative, thereby crippling the entire zkSync protocol.
By renouncing ownership, the `_admin` could unilaterally disable key governance functionalities. Such an action would not only undermine the protocol's integrity but also eliminate any possibility of future upgrades, posing a severe threat to the system's sustainability and security.
To safeguard the protocol against potential sabotage through ownership renunciation, it is advisable to override the `renounceOwnership` function within the `Governance.sol` contract. By explicitly reverting any attempts to renounce ownership, this modification ensures the continuity and inviolability of governance operations:
function renounceOwnership() public override onlyOwner { revert("_admin cannot renounce ownership"); }
function requestL2Transaction( //@audit address _contractL2, uint256 _l2Value, bytes calldata _calldata, uint256 _l2GasLimit, uint256 _l2GasPerPubdataByteLimit, bytes[] calldata _factoryDeps, //@audit address _refundRecipient ) external payable returns (bytes32 canonicalTxHash) { require(s.chainId == ERA_CHAIN_ID, "legacy interface only available for era token"); canonicalTxHash = _requestL2TransactionSender( BridgehubL2TransactionRequest({ sender: msg.sender, contractL2: _contractL2, mintValue: msg.value, l2Value: _l2Value, l2GasLimit: _l2GasLimit, l2Calldata: _calldata, l2GasPerPubdataByteLimit: _l2GasPerPubdataByteLimit, factoryDeps: _factoryDeps, refundRecipient: _refundRecipient }) ); IL1SharedBridge(s.baseTokenBridge).bridgehubDepositBaseToken{value: msg.value}( s.chainId, msg.sender, ETH_TOKEN_ADDRESS, msg.value ); }
This mechanism allows users to deposit ETH from L1 to L2 into two separate addresses within zkSync Era, which introduces a unique feature not found in the traditional Ethereum Virtual Machine (EVM) environment.
Users can deposit ETH to a contract address (`_contractL2`) and simultaneously direct a refund to a different recipient address by specifying an address different from `_contractL2`. This dual-address deposit capability deviates from standard EVM practice, where transfers from an externally owned account (EOA) to two addresses simultaneously are not possible. This feature, specific to zkSync Era, enables such transactions.
Undocumented deviation from Ethereum's VM
To mitigate any potential confusion and enhance security awareness, it's crucial to thoroughly document this dual-address deposit process.
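A hedged example of the kind of NatSpec note that could be added to `requestL2Transaction()` (the wording is illustrative, not the project's):

```solidity
/// @dev Note: unlike a plain EVM transfer, this call can move value toward two distinct
/// addresses at once: `_contractL2` receives `_l2Value`, while any unspent fee is
/// refunded to `_refundRecipient`, which may intentionally differ from `_contractL2`.
```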
`0.8.20` is vulnerable to some bugs and the compiler version should be updated to be on the safe side
From https://github.com/ethereum/solidity/blob/afda6984723fca99e82ebf34d0aec1804f1f3ce6/docs/bugs_by_version.json#L1865-L1871 we can see that this compiler version is vulnerable to:
- The `.selector()` query: it would be key to note that, as explained in this blog, there are multiple side effects of accessing `.selector()` under the current compiler version, since it could cause potentially incorrect behavior of contracts compiled using the legacy pipeline.
- Considering the protocol's heavy use of `yul`, a different compiler version should be used, because this version is vulnerable to the FullInliner non-expression-split argument evaluation order bug, explained here. Do note that this could cause reordering of reverts in the bootloader, or its return may lead to storage writes, memory writes, or event emissions not being performed. It may also lead to the contract not reverting (and therefore not rolling back some operations) when it should, or vice-versa; as such, an updated version should be used.
- `verbatim` is also used heavily in the protocol, for example in the blob versioned hash retriever https://github.com/code-423n4/2024-03-zksync/blob/main/code/contracts/ethereum/contracts/state-transition/utils/BlobVersionedHashRetriever.yul, and this is also somewhat affected by this compiler version being vulnerable to the verbatim invalid deduplication bug. The TL;DR of this bug, as understood, is that in the Block Deduplicator optimizer step, where equivalent assembly blocks are identified and merged, `verbatim` assembly items surrounded by identical opcodes were incorrectly considered identical and unified. Currently, `BlobVersionedHashRetriever.yul` queries `verbatim` just once, so the bug's unification of similar blocks does not yet come into play.
Consider updating the Solidity compiler version.
`execute()` & `executeInstant()` could use `msg.value` in a better manner
function execute(Operation calldata _operation) external payable onlyOwnerOrSecurityCouncil { bytes32 id = hashOperation(_operation); // Check if the predecessor operation is completed. _checkPredecessorDone(_operation.predecessor); // Ensure that the operation is ready to proceed. require(isOperationReady(id), "Operation must be ready before execution"); // Execute operation. _execute(_operation.calls); // Reconfirming that the operation is still ready after execution. // This is needed to avoid unexpected reentrancy attacks of re-executing the same operation. require(isOperationReady(id), "Operation must be ready after execution"); // Set operation to be done timestamps[id] = EXECUTED_PROPOSAL_TIMESTAMP; emit OperationExecuted(id); } /// @notice Executes the scheduled operation with the security council instantly. /// @dev Only the security council may execute an operation instantly. /// @param _operation The operation parameters will be executed with the upgrade. function executeInstant(Operation calldata _operation) external payable onlySecurityCouncil { bytes32 id = hashOperation(_operation); // Check if the predecessor operation is completed. _checkPredecessorDone(_operation.predecessor); // Ensure that the operation is in a pending state before proceeding. require(isOperationPending(id), "Operation must be pending before execution"); // Execute operation. _execute(_operation.calls); // Reconfirming that the operation is still pending before execution. // This is needed to avoid unexpected reentrancy attacks of re-executing the same operation. require(isOperationPending(id), "Operation must be pending after execution"); // Set operation to be done timestamps[id] = EXECUTED_PROPOSAL_TIMESTAMP; emit OperationExecuted(id); } function _execute(Call[] calldata _calls) internal { for (uint256 i = 0; i < _calls.length; ++i) { (bool success, bytes memory returnData) = _calls[i].target.call{value: _calls[i].value}(_calls[i].data); if (!success) { // Propagate an error if the call fails. assembly { revert(add(returnData, 0x20), mload(returnData)) } } } }
We can see that both functions are marked as payable and then call into the internal `_execute()`, but the code flow does not assert that the provided `msg.value` is equal to the amount expended in `_execute()`.
Misappropriation of funds across operations: if the `msg.value` is less than the total `call.value`, then funds held in the contract for other operations would be used for the current one.
Consider requiring that the `msg.value` provided equals the total `call.value` needed.
`StateTransitionManager.sol::createNewChain()` has some nuances with its `initData`
/// @notice called by Bridgehub when a chain registers function createNewChain( uint256 _chainId, address _baseToken, address _sharedBridge, address _admin, bytes calldata _diamondCut ) external onlyBridgehub { if (stateTransition[_chainId] != address(0)) { // StateTransition chain already registered return; } // check not registered Diamond.DiamondCutData memory diamondCut = abi.decode(_diamondCut, (Diamond.DiamondCutData)); // check input bytes32 cutHashInput = keccak256(_diamondCut); require(cutHashInput == initialCutHash, "StateTransition: initial cutHash mismatch"); // construct init data bytes memory initData; /// all together 4+9*32=292 bytes initData = bytes.concat( IDiamondInit.initialize.selector, bytes32(_chainId), bytes32(uint256(uint160(bridgehub))), bytes32(uint256(uint160(address(this)))), bytes32(uint256(protocolVersion)), bytes32(uint256(uint160(_admin))), bytes32(uint256(uint160(validatorTimelock))), bytes32(uint256(uint160(_baseToken))), bytes32(uint256(uint160(_sharedBridge))), bytes32(storedBatchZero), diamondCut.initCalldata ); diamondCut.initCalldata = initData; // deploy stateTransitionContract DiamondProxy stateTransitionContract = new DiamondProxy{salt: bytes32(0)}(block.chainid, diamondCut); // save data address stateTransitionAddress = address(stateTransitionContract); stateTransition[_chainId] = stateTransitionAddress; // set chainId in VM _setChainIdUpgrade(_chainId, stateTransitionAddress); emit StateTransitionNewChain(_chainId, stateTransitionAddress); }
Evidently, the logic for `initData` seems to expect 292 bytes, hence the comment `/// all together 4+9*32=292 bytes`. But going through the logic, we can see that after packing the `292` bytes, i.e. the selector and all 9 `bytes32` elements, the function still concatenates `diamondCut.initCalldata` on top, which seems like flawed logic, as the whole concatenated value is later on set as the same `initData`.
Bad code structure: `initData` is expected to be `292` bytes but is accepted to be longer.
Consider clearly limiting this to `292` bytes, or clearly document that it is meant to be extended in the future.
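If the strict 292-byte layout is the intended behavior, a hedged sketch of the guard could be:

```solidity
// Sketch: reject any extra calldata appended beyond the selector + 9 words,
// i.e. 4 + 9 * 32 = 292 bytes; drop this check if extension is intended.
require(initData.length == 292, "initData: expected 4 + 9 * 32 = 292 bytes");
```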
While depositing, the contract checks token addresses to see if the deposits have been cleared; the issue here is that most of the logic doesn't consider that, for tokens with more than one entry-point address, a user can just specify the second address if the first one doesn't go through.
Consider not supporting these types of tokens
function shouldMsgValueMimicCallBeSystem(to, dataPtr) -> ret { let dataLen := mload(dataPtr) // Note, that this point it is not fully known whether it is indeed the selector // of the calldata (it might not be the case if the `dataLen` < 4), but it will be checked later on let selector := shr(224, mload(add(dataPtr, 32))) let isSelectorCreate := or( eq(selector, {{CREATE_SELECTOR}}), eq(selector, {{CREATE_ACCOUNT_SELECTOR}}) ) let isSelectorCreate2 := or( eq(selector, {{CREATE2_SELECTOR}}), eq(selector, {{CREATE2_ACCOUNT_SELECTOR}}) ) // Firstly, ensure that the selector is a valid deployment function ret := or( isSelectorCreate, isSelectorCreate2 ) // Secondly, ensure that the callee is ContractDeployer ret := and(ret, eq(to, CONTRACT_DEPLOYER_ADDR())) // Thirdly, ensure that the calldata is long enough to contain the selector ret := and(ret, gt(dataLen, 3)) }
This function returns whether the mimicCall should use the `isSystem` flag. The issue here is that the comment `// Note, that this point it is not fully known whether it is indeed the selector...` should instead read `// Note that, at this point, it is not fully known whether it is indeed the selector...`.
Bad code structure; the code is harder to understand, since integrators could assume the comment is talking about an unread memory point at the end of `dataLen`.
Apply the fix.
Here is how a token's deposit gets finalized: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/zksync/contracts/bridge/L2SharedBridge.sol#L97
That is, the contract creates the token address if a deposit for this token has never been made. But before that, the user needs to deposit their tokens via the L1 shared bridge.
function _getDepositL2Calldata( address _l1Sender, address _l2Receiver, address _l1Token, uint256 _amount ) internal view returns (bytes memory) { bytes memory gettersData = _getERC20Getters(_l1Token); return abi.encodeCall(IL2Bridge.finalizeDeposit, (_l1Sender, _l2Receiver, _l1Token, _amount, gettersData)); } /// @dev Receives and parses (name, symbol, decimals) from the token contract function _getERC20Getters(address _token) internal view returns (bytes memory) { if (_token == ETH_TOKEN_ADDRESS) { bytes memory name = bytes("Ether"); bytes memory symbol = bytes("ETH"); bytes memory decimals = abi.encode(uint8(18)); return abi.encode(name, symbol, decimals); // when depositing eth to a non-eth based chain it is an ERC20 } //@audit using staticcall for a call that is going to revert? Not all tokens support the name, symbol and decimal getters (, bytes memory data1) = _token.staticcall(abi.encodeCall(IERC20Metadata.name, ())); (, bytes memory data2) = _token.staticcall(abi.encodeCall(IERC20Metadata.symbol, ())); (, bytes memory data3) = _token.staticcall(abi.encodeCall(IERC20Metadata.decimals, ())); return abi.encode(data1, data2, data3); }
This function is inevitably called whenever executing `depositLegacyErc20Bridge()` & `bridgehubDeposit()`, and is used to generate the calldata for the deposit finalization call. The issue is that when the call is routed to `_getERC20Getters()` and the token is not ETH, the protocol assumes the token implements the `name()`, `symbol()` & `decimals()` functions; contrary to this assumption, these getters are not part of the original EIP-20 specification and as such not all tokens support them, causing the attempt to get this deposit calldata to revert for pure EIP-20 tokens.
Pure EIP-20 tokens are not supported, since neither `depositLegacyErc20Bridge()` nor `bridgehubDeposit()` can be executed for these tokens.
Reimplement the logic to allow support for pure EIP-20 tokens.
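A sketch of a more tolerant getter that falls back to defaults when a token omits the optional metadata functions (the function name and fallback values are illustrative assumptions, not the project's code):

```solidity
/// @dev name(), symbol() and decimals() are OPTIONAL in EIP-20, so tolerate tokens missing them
function _tryGetERC20Getters(address _token) internal view returns (bytes memory) {
    (bool ok1, bytes memory data1) = _token.staticcall(abi.encodeCall(IERC20Metadata.name, ()));
    if (!ok1) data1 = abi.encode(""); // fallback name for pure EIP-20 tokens
    (bool ok2, bytes memory data2) = _token.staticcall(abi.encodeCall(IERC20Metadata.symbol, ()));
    if (!ok2) data2 = abi.encode(""); // fallback symbol
    (bool ok3, bytes memory data3) = _token.staticcall(abi.encodeCall(IERC20Metadata.decimals, ()));
    if (!ok3) data3 = abi.encode(uint8(18)); // assumed default of 18 decimals
    return abi.encode(data1, data2, data3);
}
```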
/// @return The total number of unprocessed priority operations in a priority queue function getSize(Queue storage _queue) internal view returns (uint256) { return uint256(_queue.tail - _queue.head); }
But from here we can see that both `_queue.tail` & `_queue.head` are of type `uint256` already, so there's no need for this casting.
Remove the unnecessary casting
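A minimal sketch of the cleanup:

```solidity
/// @return The total number of unprocessed priority operations in a priority queue
function getSize(Queue storage _queue) internal view returns (uint256) {
    // Both fields are already uint256, so no cast is needed
    return _queue.tail - _queue.head;
}
```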
/// @notice NOTE: The `getZkSyncMeta` that is used to obtain this struct will experience a breaking change in 2024. struct ZkSyncMeta { uint32 pubdataPublished; uint32 heapSize; uint32 auxHeapSize; uint8 shardId; uint8 callerShardId; uint8 codeShardId; }
The attached snippet suggests that there will be a breaking change to the struct in 2024, but we are already in 2024, and the comment needs to be updated to either another year or at least a more specific time, even if not a month; it could be anything like "Q3/Q4 2024".
Confusing code: applications hoping to build around this wouldn't know whether to go on with their development/deployment, since they don't know if the breaking change is going to affect their to-be-deployed logic, and they also don't have a definitive date for when this breaking change would occur.
Consider giving a more specific time for when the breaking change would occur.
function _ensureBatchConsistentWithL2Block(uint128 _newTimestamp) internal view { uint128 currentBlockTimestamp = currentL2BlockInfo.timestamp; require( _newTimestamp > currentBlockTimestamp, "The timestamp of the batch must be greater than the timestamp of the previous block" ); }
Now look at this section of the docs https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/docs/Smart%20contract%20Section/Batches%20%26%20L2%20blocks%20on%20zkSync.md#timing-invariants
We can see that it is wrongly stated that "For each L2 block its timestamp should be ≥ timestamp of the batch it belongs to", whereas in the code implementation the check is strictly greater than.
Bad code structure, making it harder to understand implementation.
Fix the docs, change https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/docs/Smart%20contract%20Section/Batches%20%26%20L2%20blocks%20on%20zkSync.md#timing-invariants to For each L2 block its timestamp should be **>** timestamp of the batch it belongs to
The contract is obviously a bridge that allows for depositing ERC20 tokens to hyperchains. This means the contract would at some point hold a lot of tokens and could be eligible for some airdrops, since eligibility is most likely going to be based on a snapshot of the token balance. Clearly, this contract does not contain any sweeping functions, and as such all airdrops sent to this contract are effectively stuck.
Leak of value since there are no methods to get hold of the airdrops
Introduce a sweeper functionality for these cases.
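A hedged sketch of such a sweeper (the function name and `onlyOwner` guard are illustrative assumptions; a real implementation must exclude bridged balances so user funds cannot be drained):

```solidity
import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

/// @notice Sweeps airdropped tokens that are not part of the bridge's accounting.
/// @dev Sketch only: must be gated against sweeping tokens/amounts users have deposited.
function sweepAirdrop(IERC20 _token, address _to, uint256 _amount) external onlyOwner {
    SafeERC20.safeTransfer(_token, _to, _amount);
}
```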
`TransactionHelper::isEthToken()` is not used anywhere, so it should be removed
function isEthToken(uint256 _addr) internal pure returns (bool) { return _addr == uint256(uint160(address(BASE_TOKEN_SYSTEM_CONTRACT))) || _addr == 0; }
This function is used to know whether the `address` passed is the `Ether` address, and it also returns true if the address passed is `0x0` (for convenience), as stated here. The issue with this is that, using this search command: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+isEthToken&type=code, we can see that this function is not used anywhere.
Remove redundant code, as it most of the time hints at a flawed implementation.
Remove the `isEthToken()` function since it's not being used anywhere.
`storedBatchHash()` fails to distinguish between reverted and unreverted batches
/// @inheritdoc IGetters function storedBatchHash(uint256 _batchNumber) external view returns (bytes32) { return s.storedBatchHashes[_batchNumber]; }
Now consider this previously confirmed issue: https://github.com/code-423n4/2023-10-zksync-findings/issues?q=is%3Aissue+is%3Aopen+Inaccurate+Batch+Stored+Hash+Retrieval+in+Getters+Contract
The issue with the function's implementation is that it returns zero for uncommitted batch numbers, but it lacks the capability to distinguish between a reverted and an unreverted batch. For example, if batch number 500 has been reverted and a user queries storedBatchHash(500), it returns a non-zero value, which may create the impression that the batch is unreverted. However, it should ideally return zero in such scenarios.
Now, evidently, this is from a past report, but whereas the protocol pronounced "acknowledged" findings to be OOS, this one was confirmed and as such must simply have been forgotten to be fixed.
Users would be misled about the status of batches in zkSync.
As previously recommended, revise the function as below:
function storedBatchHash(uint256 _batchNumber) external view returns (bytes32) { if(_batchNumber > s.totalBatchesCommitted){ return bytes32(0); } return s.storedBatchHashes[_batchNumber]; }
Potential confusion in `proveBatches()` due to wrongly commenting out the `else` keyword and lack of documentation
function _proveBatches( StoredBatchInfo calldata _prevBatch, StoredBatchInfo[] calldata _committedBatches, ProofInput calldata _proof ) internal { //omitted for brevity //@audit if (_proof.serializedProof.length > 0) { _verifyProof(proofPublicInput, _proof); } // #else _verifyProof(proofPublicInput, _proof); // #endif emit BlocksVerification(s.totalBatchesVerified, currentTotalBatchesVerified); s.totalBatchesVerified = currentTotalBatchesVerified; }
This function eventually gets called when proving the batches. Now, as tagged by "@audit", evidently, if the proof is not empty it gets verified, due to the `if` block.
Whereas one might look at this and assume there is a logical error (naturally, in an if/else conditional arrangement, the else block should entail a different execution), that's not the case here, because the preprocessor is going to remove one of the two `_verifyProof(proofPublicInput, _proof);` calls.
Bad code structure, unclear documentation
Properly document the preprocessor directives attached to this.
BaseZkSyncUpgradeGenesis::_setNewProtocolVersion
function _setNewProtocolVersion(uint256 _newProtocolVersion) internal override { uint256 previousProtocolVersion = s.protocolVersion; require( // Genesis Upgrade difference: Note this is the only thing change > to >= _newProtocolVersion >= previousProtocolVersion, "New protocol version is not greater than the current one" ); require( _newProtocolVersion - previousProtocolVersion <= MAX_ALLOWED_PROTOCOL_VERSION_DELTA, "Too big protocol version difference" ); // If the previous upgrade had an L2 system upgrade transaction, we require that it is finalized. require(s.l2SystemContractsUpgradeTxHash == bytes32(0), "Previous upgrade has not been finalized"); require( s.l2SystemContractsUpgradeBatchNumber == 0, "The batch number of the previous upgrade has not been cleaned" ); s.protocolVersion = _newProtocolVersion; emit NewProtocolVersion(previousProtocolVersion, _newProtocolVersion); }
This function is used to change the protocol version. The only issue attached here is the fact that if `_newProtocolVersion >= previousProtocolVersion` does not hold true, the execution errors out with "New protocol version is not greater than the current one" instead of "New protocol version is not greater than or equal to the current one".
Inaccurate error messages, code confusion
Apply these changes to https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgradeGenesis.sol#L20-L41
function _setNewProtocolVersion(uint256 _newProtocolVersion) internal override { uint256 previousProtocolVersion = s.protocolVersion; require( // Genesis Upgrade difference: Note this is the only thing change > to >= _newProtocolVersion >= previousProtocolVersion, "New protocol version is not greater than or equal to the current one" ); require( _newProtocolVersion - previousProtocolVersion <= MAX_ALLOWED_PROTOCOL_VERSION_DELTA, "Too big protocol version difference" ); // If the previous upgrade had an L2 system upgrade transaction, we require that it is finalized. require(s.l2SystemContractsUpgradeTxHash == bytes32(0), "Previous upgrade has not been finalized"); require( s.l2SystemContractsUpgradeBatchNumber == 0, "The batch number of the previous upgrade has not been cleaned" ); s.protocolVersion = _newProtocolVersion; emit NewProtocolVersion(previousProtocolVersion, _newProtocolVersion); }
#0 - razzorsec
2024-04-18T09:52:40Z
The first “QA” was considered medium by us!
#1 - c4-judge
2024-04-29T13:36:41Z
alex-ppg marked the issue as grade-a
#2 - c4-judge
2024-04-29T13:36:44Z
alex-ppg marked the issue as selected for report
#3 - alex-ppg
2024-04-29T13:38:18Z
This submission was graded as the best due to exceeding the 50% accuracy threshold whilst containing valid and thoroughly elaborated findings in an easily digestible format. To note, the first QA finding will be upgraded accordingly once judging concludes and this exhibit is split into a second one as a duplicate of #97.
#4 - Bauchibred
2024-05-01T20:00:30Z
Hi @alex-ppg, thanks for judging the contest, I'd like to ask if it's possible you have any comments in regards to the upgradability of some of the listed borderline low/medium issues, a couple were attached here in the QA report as an attempt on not spamming the judging repo with reports that could end up being finalised as QA, I'd appreciate a quick glance on this borderline issues to see if any could be upgraded.
To ease the re-review, I believe grepping the markdown file with the word `medium` would pinpoint most of these issues, however I'd appreciate re-review as not all have been linked with the `medium` word, thanks once again for your time.
#5 - alex-ppg
2024-05-02T14:28:46Z
Hey @Bauchibred, appreciate the in-depth analysis of the QA report and your contribution to the PJQA process! I have evaluated all findings that infer a medium upgrade as follows:
- The `balance` values of the contract are at least equal to the `totalSupply` due to the token being a closed-circuit system (i.e. `balances` cannot increase from `0` without `totalSupply` being increased as well)
- The `Additional Note` item is indeed correct in the sense that regardless of whether this finding is valid, the exhibit cannot be considered as a valid HM due to being out-of-scope. One of the duplicates from that contest is actually mine, and I understand the implications fully, however, I believe they have been adequately covered in the previous contest and this item is out-of-scope.
- The `extcodecopy` instruction's operation due to zkSync Era's internal structure.
I believe that the present ruling is fair, but will make sure to notify the sponsor for a re-evaluation of QA-02 in case it merits an upgrade. It definitely is a mistake in the code, but I do not believe its ramifications to be impactful as nodes utilize events as highlighted in other exhibits such as #112.
#6 - DecentralDisco
2024-05-11T00:11:57Z
Regarding validity of items in this QA report, per conversation with the judge @alex-ppg:
The only clear mistake is QA-06, and the rest are passable NC / L / I recommendations. As such, I confirm that all QA items except for QA-06 are valid.
As such, QA-06 will be excluded from the final audit report.
🌟 Selected for report: lsaudit
Also found by: Bauchibred, ChaseTheLight, DadeKuma, K42, Pechenite, Sathish9098, aua_oo7, hihen, oualidpro, rjs, slvDev
85.5029 USDC - $85.50
While we've tried our best to maintain readability, for brevity and focus we have selectively truncated the code snippets to spotlight the relevant sections. Certain excerpts may be abbreviated, and some references are provided via search commands due to the extensive nature of the project and time constraints.
We urge developers to proceed with caution when implementing recommended changes to ensure no new vulnerabilities are introduced. Although we have pre-tested a fair amount of suggested optimizations logically, the onus of comprehensive validation remains with the development team. Rigorous code review and further testing are imperative to mitigate potential risks during the refactoring process.
Finally, it's important to mention that a previous Code4rena audit, conducted a few months back on an older commit of the contracts in scope, included a bot race and as such yielded a bot report with multiple gas optimization findings across `1014` instances. Our quick review of those optimizations revealed that only a few of the suggestions have been acted upon. We recommend that, alongside this report, the team also revisits the bot report to explore further opportunities for gas savings; we have deliberately tried our best not to duplicate points from that previous analysis in this report.
Issue ID | Description |
---|---|
G-01 | Multiple instances in scope where re-ordering and packing could save multiple storage slots in scope |
G-02 | Consider removing unnecessary variables (one time usage) in different files in scope being cached during executions |
G-03 | Consider using storage instead of memory for structs or arrays inorder to save gas |
G-04 | We can avoid multiple SSTOREs in case of reverts |
G-05 | Non-gas efficient lookup of mappings in Diamond.sol & L1ERC20Bridge.sol |
G-06 | Do not update storage when the value hasn't changed |
G-07 | More than one address/ID mappings can be combined into a single mapping of an address/ID to a struct, where appropriate |
G-08 | Some events in Admin.sol should be emitted one step above |
G-09 | L1SharedBridge::deposit()' s current implementation would cost users unnecessary gas while depositing |
G-10 | Initializing uint256 variable to default value of 0 can be omitted |
G-11 | Diamond.sol: Non-gas efficient expansion of memory |
G-12 | Remove the addition if we are always adding 0 |
G-13 | Consider utilizing uint256 for Boolean state management in order to save gas |
G-14 | Use assembly to revert with an error message to save gas |
G-15 | Make finalizeDeposit() more efficient |
G-16 | Consider not using the nonReentrant modifier for Admin functions |
G-17 | _commitBatches() should be made more efficient (2 modifications) |
G-18 | Chosing internal functions over modifiers |
G-19 | Protocol does not consider EIP-4844 and still assumes more than one batch can be committed |
G-20 | DiamondProxy.sol 's fallback could be made more efficient |
G-21 | Consider using assembly for loops |
G-22 | Use assembly to write address storage values |
G-23 | Executions we are sure would never underflow/overflow should be wrapped in an unchecked |
G-24 | Remove instances of unnecessary casting |
G-25 | It is cheaper to use vanity addresses with leading zeros |
G-26 | In some instances we can declare the variables outside the loop so as to save gas |
G-27 | Always use unsafe increment for for loops if possible when being used as ternaries |
G-28 | Consider inverting if-else statements that have a negation and also reducing negations when possible |
G-29 | Usage of ternary operators in scope could be better optimized |
G-30 | Replace variable + 1 with value++ |
G-31 | Consider caching bytecode.length rather than querying it twice |
G-32 | Remove the last offset from the SharedBridge.sol's and other contract's query to UnsafeBytes.readAddress |
G-33 | Removing code for testing before final production |
G-34 | Private functions used once can be inlined |
G-35 | Unnecessary duplication of checks should be removed |
G-36 | No need of require(s.l2SystemContractsUpgradeBatchNumber == 0, "ik"); while commiting batches with system upgrades |
G-37 | Consider importing gas optimised Libmap |
G-38 | Instead of arrays we can use mappings |
G-39 | Caching global variables is more expensive than using the actual variable |
G-40 | Divisions by the powers of two should use bit shifting |
G-41 | Import libraries only when multiple functions are going to be implemented |
G-42 | Sequential for loops should be merged |
G-43 | Consider using the expectedBool == 1 syntax for booleans |
G-44 | Shift Left instead of multiplication saves gas |
G-45 | memory should be used instead of state variables while emitting events |
G-46 | " `` " is cheaper than new bytes(0) |
G-47 | Always use named returns for more efficiency |
G-48 | Check the last bit of a number to know if it's even/odd instead of the modulo operator |
G-49 | Remove unused mapping and import variables |
G-50 | Assembly checks for address(0) are more gas efficient |
G-51 | To extract calldata values more efficiently we should use assembly instead of abi.decode |
G-52 | Use modifiers instead of functions to save gas |
G-53 | It is sometimes cheaper to cache calldata |
G-54 | Admin functions can be made payable |
G-55 | In some cases using bytes32 instead of string saves gas |
G-56 | Redundant check in DefaultAccount.sol |
G-57 | Chunking the pubdata should be made more efficient |
G-58 | TransactionHelper::isEthToken() is not used anywhere so it should be removed |
G-59 | Count up instead of counting down in for statements |
G-60 | Simplifying conditional checks with Bitwise Operators |
G-61 | Consider writing gas-optimal for-loops |
G-62 | UnsafeBytes.sol could be better optimized while parsing bytes for transactions |
G-63 | Do-while loops are cheaper than for loops and should be considered |
G-64 | Consider Short-circuit booleans |
G-65 | Shortening an array rather than copying to a new one saves gas |
G-66 | Use assembly to perform efficient back-to-back calls |
G-67 | Using assembly to validate msg.sender is more gas efficient |
G-68 | Proof verification is performed twice in proveBatches() causing double expenditure of gas |
G-69 | Use assembly to hash instead of solidity |
G-70 | Openzeppelin's SafeCast is not gas optimized, consider using Solady's safeCast library |
We can see that via re-ordering state variables (in this case the `zkPorterIsAvailable` bool) an extra storage slot can be saved; also, if we pack both `totalBatchesVerified` and `totalBatchesCommitted` together, we save another extra storage slot.
struct ZkSyncStateTransitionStorage { /// @dev Storage of variables needed for deprecated diamond cut facet uint256[7] __DEPRECATED_diamondCutStorage; /// @notice Address which will exercise critical changes to the Diamond Proxy (upgrades, freezing & unfreezing). Replaced by STM address __DEPRECATED_governor; + bool zkPorterIsAvailable; // @audit reordered here /// @notice Address that the governor proposed as one that will replace it address __DEPRECATED_pendingGovernor; /// @notice List of permitted validators mapping(address validatorAddress => bool isValidator) validators; /// @dev Verifier contract. Used to verify aggregated proof for batches IVerifier verifier; /// @notice Total number of executed batches i.e. batches[totalBatchesExecuted] points at the latest executed batch /// (batch 0 is genesis) - uint256 totalBatchesExecuted; /// @notice Total number of proved batches i.e. batches[totalBatchesProved] points at the latest proved batch - uint256 totalBatchesVerified; + uint256 totalBatchesExecuted; + uint256 totalBatchesVerified; /// @notice Total number of committed batches i.e. batches[totalBatchesCommitted] points at the latest committed /// batch uint256 totalBatchesCommitted; /// @dev Stored hashed StoredBatch for batch number mapping(uint256 batchNumber => bytes32 batchHash) storedBatchHashes; /// @dev Stored root hashes of L2 -> L1 logs mapping(uint256 batchNumber => bytes32 l2LogsRootHash) l2LogsRootHashes; //..snip - uint256 protocolVersion; + uint96 protocolVersion; + uint256 l2SystemContractsUpgradeBatchNumber; /// @dev Hash of the system contract upgrade transaction. If 0, then no upgrade transaction needs to be done. bytes32 l2SystemContractsUpgradeTxHash; /// @dev Batch number where the upgrade transaction has happened. If 0, then no upgrade transaction has happened /// yet. - uint256 l2SystemContractsUpgradeBatchNumber; //..snip
Consider reordering the `bool` and packing the variables.
The `protocolVersion` value is increased by one per upgrade, so the next version number is always one greater than the current one. This means the value should in no case exceed the maximum range of `uint96`; this is a very large number, and it is unlikely that any protocol would ever need to be upgraded more than this many times.
Also, `l2SystemContractsUpgradeBatchNumber` is the batch number of the latest upgrade transaction. If its value is 0, it signifies that no upgrade transactions have occurred; this should also fit with no problems into a `uint96`.
struct StoredBatchInfo { uint64 batchNumber; + uint64 indexRepeatedStorageChanges; bytes32 batchHash; - uint64 indexRepeatedStorageChanges; uint256 numberOfLayer1Txs; bytes32 priorityOperationsHash; bytes32 l2LogsTreeRoot; uint256 timestamp; bytes32 commitment; }
`txId` can be made a `uint64` to reduce a slot
struct WritePriorityOpParams { address sender; + uint64 txId; - uint256 txId; uint256 l2Value; address contractAddressL2; uint64 expirationTimestamp; uint256 l2GasLimit; uint256 l2GasPrice; uint256 l2GasPricePerPubdata; uint256 valueToMint; address refundRecipient; }
contract Governance is IGovernance, Ownable2Step { /// @notice A constant representing the timestamp for completed operations. uint256 internal constant EXECUTED_PROPOSAL_TIMESTAMP = uint256(1); /// @notice The address of the security council. /// @dev It is supposed to be multisig contract. address public securityCouncil; + uint64 public minDelay; /// @notice A mapping to store timestamps when each operation will be ready for execution. /// @dev - 0 means the operation is not created. /// @dev - 1 (EXECUTED_PROPOSAL_TIMESTAMP) means the operation is already executed. /// @dev - any other value means timestamp in seconds when the operation will be ready for execution. mapping(bytes32 operationId => uint256 executionTimestamp) public timestamps; /// @notice The minimum delay in seconds for operations to be ready for execution. - uint256 public minDelay; ..//snip
Re-ordering a struct's variables allows it to be packed more efficiently
struct StateTransitionManagerInitializeData { address governor; address validatorTimelock; + uint64 genesisIndexRepeatedStorageChanges; address genesisUpgrade; bytes32 genesisBatchHash; - uint64 genesisIndexRepeatedStorageChanges; bytes32 genesisBatchCommitment; Diamond.DiamondCutData diamondCut; uint256 protocolVersion; }
As seen, `genesisIndexRepeatedStorageChanges` consumes 8 bytes, so by moving it up an extra storage slot can be saved.
Apply the suggested diff changes and rearrange where possible to save even more gas.
Multiple instances
`Mailbox.sol` https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Mailbox.sol
/// @inheritdoc IMailbox function l2TransactionBaseCost( uint256 _gasPrice, uint256 _l2GasLimit, uint256 _l2GasPerPubdataByteLimit ) public view returns (uint256) { uint256 l2GasPrice = _deriveL2GasPrice(_gasPrice, _l2GasPerPubdataByteLimit); return l2GasPrice * _l2GasLimit; }
Consider applying these changes, since `l2GasPrice` is used only once.
/// @inheritdoc IMailbox function l2TransactionBaseCost( uint256 _gasPrice, uint256 _l2GasLimit, uint256 _l2GasPerPubdataByteLimit ) public view returns (uint256) { - uint256 l2GasPrice = _deriveL2GasPrice(_gasPrice, _l2GasPerPubdataByteLimit); - return l2GasPrice * _l2GasLimit; + return _deriveL2GasPrice(_gasPrice, _l2GasPerPubdataByteLimit) * _l2GasLimit; }
function _deriveL2GasPrice(uint256 _l1GasPrice, uint256 _gasPerPubdata) internal view returns (uint256) { FeeParams memory feeParams = s.feeParams; require(s.baseTokenGasPriceMultiplierDenominator > 0, "Mailbox: baseTokenGasPriceDenominator not set"); uint256 l1GasPriceConverted = (_l1GasPrice * s.baseTokenGasPriceMultiplierNominator) / s.baseTokenGasPriceMultiplierDenominator; uint256 pubdataPriceBaseToken; if (feeParams.pubdataPricingMode == PubdataPricingMode.Rollup) { pubdataPriceBaseToken = L1_GAS_PER_PUBDATA_BYTE * l1GasPriceConverted; } uint256 batchOverheadBaseToken = uint256(feeParams.batchOverheadL1Gas) * l1GasPriceConverted; uint256 fullPubdataPriceBaseToken = pubdataPriceBaseToken + batchOverheadBaseToken / uint256(feeParams.maxPubdataPerBatch); uint256 l2GasPrice = feeParams.minimalL2GasPrice + batchOverheadBaseToken / uint256(feeParams.maxL2GasPerBatch); uint256 minL2GasPriceBaseToken = (fullPubdataPriceBaseToken + _gasPerPubdata - 1) / _gasPerPubdata; return Math.max(l2GasPrice, minL2GasPriceBaseToken); }
Here we have `pubdataPriceBaseToken` and `minL2GasPriceBaseToken`, which are used only once, so we don't need to declare them.
`L1ERC20Bridge.sol` https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L158C1-L167C1
/// @dev Transfers tokens from the depositor address to the shared bridge address. /// @return The difference between the contract balance before and after the transferring of funds. function _depositFundsToSharedBridge(address _from, IERC20 _token, uint256 _amount) internal returns (uint256) { uint256 balanceBefore = _token.balanceOf(address(sharedBridge)); _token.safeTransferFrom(_from, address(sharedBridge), _amount); uint256 balanceAfter = _token.balanceOf(address(sharedBridge)); return balanceAfter - balanceBefore; }
Here we can see that `balanceAfter` is only used once, so there is no need for us to cache it; we can just `return _token.balanceOf(address(sharedBridge)) - balanceBefore;` as the last step.
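As a sketch, the suggested change:

```solidity
function _depositFundsToSharedBridge(address _from, IERC20 _token, uint256 _amount) internal returns (uint256) {
    uint256 balanceBefore = _token.balanceOf(address(sharedBridge));
    _token.safeTransferFrom(_from, address(sharedBridge), _amount);
    // Return the delta directly instead of caching `balanceAfter`
    return _token.balanceOf(address(sharedBridge)) - balanceBefore;
}
```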
function l2TokenAddress(address _l1Token) external view returns (address) { bytes32 constructorInputHash = keccak256(abi.encode(l2TokenBeacon, "")); bytes32 salt = bytes32(uint256(uint160(_l1Token))); return L2ContractHelper.computeCreate2Address(l2Bridge, salt, l2TokenProxyBytecodeHash, constructorInputHash); }
`constructorInputHash` and `salt` are used only once, so they don't need to be declared at all; consider just changing the function implementation to only include this line: `return L2ContractHelper.computeCreate2Address(l2Bridge, bytes32(uint256(uint160(_l1Token))), l2TokenProxyBytecodeHash, keccak256(abi.encode(address(l2TokenBeacon), "")));`
`L1SharedBridge.sol` contract https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol
316 bool proofValid = bridgehub.proveL1ToL2TransactionStatus( _chainId, _l2TxHash, _l2BatchNumber, _l2MessageIndex, _l2TxNumberInBatch, _merkleProof, TxStatus.Failure ); require(proofValid, "yn");
Here we can just pass the whole call into the `require()` instead of dividing it into two steps, since `proofValid` is used only once and so doesn't need to be declared.
bool success = bridgehub.proveL2MessageInclusion( _chainId, _messageParams.l2BatchNumber, _messageParams.l2MessageIndex, l2ToL1Message, _merkleProof ); require(success, "ShB withd w proof"); // withdrawal wrong proof
Here we can also just pass the whole call into the `require()` instead of dividing it into two steps, since `success` is used only once and so doesn't need to be declared.
`BaseZkSyncUpgrade.sol` contract https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgrade.sol#L222C1-L232C6
function _verifyFactoryDeps(bytes[] calldata _factoryDeps, uint256[] calldata _expectedHashes) private pure { require(_factoryDeps.length == _expectedHashes.length, "Wrong number of factory deps"); require(_factoryDeps.length <= MAX_NEW_FACTORY_DEPS, "Factory deps can be at most 32"); for (uint256 i = 0; i < _factoryDeps.length; ++i) { require( L2ContractHelper.hashL2Bytecode(_factoryDeps[i]) == bytes32(_expectedHashes[i]), "Wrong factory dep hash" ); } }
Here `_factoryDeps.length` is queried 3 times.
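A sketch of the cached-length variant:

```solidity
function _verifyFactoryDeps(bytes[] calldata _factoryDeps, uint256[] calldata _expectedHashes) private pure {
    uint256 factoryDepsLength = _factoryDeps.length; // cached once instead of 3 length reads
    require(factoryDepsLength == _expectedHashes.length, "Wrong number of factory deps");
    require(factoryDepsLength <= MAX_NEW_FACTORY_DEPS, "Factory deps can be at most 32");
    for (uint256 i = 0; i < factoryDepsLength; ++i) {
        require(
            L2ContractHelper.hashL2Bytecode(_factoryDeps[i]) == bytes32(_expectedHashes[i]),
            "Wrong factory dep hash"
        );
    }
}
```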
`TransactionValidator.sol` contract, https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/libraries/TransactionValidator.sol
function getOverheadForTransaction( uint256 _encodingLength ) internal pure returns (uint256 batchOverheadForTransaction) { // The overhead from taking up the transaction's slot batchOverheadForTransaction = TX_SLOT_OVERHEAD_L2_GAS; // The overhead for occupying the bootloader memory can be derived from encoded_len uint256 overheadForLength = MEMORY_OVERHEAD_GAS * _encodingLength; batchOverheadForTransaction = Math.max(batchOverheadForTransaction, overheadForLength); }
Here `batchOverheadForTransaction` and `overheadForLength` are only used once, so they don't need to be declared; we can just have the function be the one line `batchOverheadForTransaction = Math.max(TX_SLOT_OVERHEAD_L2_GAS, (MEMORY_OVERHEAD_GAS * _encodingLength))`.
`LibMap.sol` contract https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/libraries/LibMap.sol
For the `get` function, we can see that the variables `mapValue` and `bitOffset` are used only once, so they don't need to be declared at all.
Also, for the `set` function, the variables `oldValue` & `newValueXorOldValue` are only used once too; they also should not be declared.
`Getters.sol` https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Getters.sol
function isFacetFreezable(address _facet) external view returns (bool isFreezable) { Diamond.DiamondStorage storage ds = Diamond.getDiamondStorage(); // There is no direct way to get whether the facet address is freezable, // so we get it from one of the selectors that are associated with the facet. uint256 selectorsArrayLen = ds.facetToSelectors[_facet].selectors.length; if (selectorsArrayLen != 0) { bytes4 selector0 = ds.facetToSelectors[_facet].selectors[0]; isFreezable = ds.selectorToFacet[selector0].isFreezable; } }
Here `selectorsArrayLen` and `selector0` are used only once, so we shouldn't declare them.
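A sketch of the inlined version the recommendation describes:

```solidity
function isFacetFreezable(address _facet) external view returns (bool isFreezable) {
    Diamond.DiamondStorage storage ds = Diamond.getDiamondStorage();
    // Read the selectors length and the first selector inline instead of caching them
    if (ds.facetToSelectors[_facet].selectors.length != 0) {
        isFreezable = ds.selectorToFacet[ds.facetToSelectors[_facet].selectors[0]].isFreezable;
    }
}
```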
`Diamond.sol` https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/libraries/Diamond.sol
function _saveFacetIfNew(address _facet) private { DiamondStorage storage ds = getDiamondStorage(); uint256 selectorsLength = ds.facetToSelectors[_facet].selectors.length; // If there are no selectors associated with facet then save facet as new one if (selectorsLength == 0) { ds.facetToSelectors[_facet].facetPosition = ds.facets.length.toUint16(); ds.facets.push(_facet); } }
Here `selectorsLength` is used only once.
Also in the same contract,
function diamondCut(DiamondCutData memory _diamondCut) internal { FacetCut[] memory facetCuts = _diamondCut.facetCuts; address initAddress = _diamondCut.initAddress; bytes memory initCalldata = _diamondCut.initCalldata; uint256 facetCutsLength = facetCuts.length; for (uint256 i = 0; i < facetCutsLength; i = i.uncheckedInc()) { Action action = facetCuts[i].action; address facet = facetCuts[i].facet; bool isFacetFreezable = facetCuts[i].isFreezable; bytes4[] memory selectors = facetCuts[i].selectors; require(selectors.length > 0, "B"); // no functions for diamond cut if (action == Action.Add) { _addFunctions(facet, selectors, isFacetFreezable); } else if (action == Action.Replace) { _replaceFunctions(facet, selectors, isFacetFreezable); } else if (action == Action.Remove) { _removeFunctions(facet, selectors); } else { revert("C"); // undefined diamond cut action } } _initializeDiamondCut(initAddress, initCalldata); emit DiamondCut(facetCuts, initAddress, initCalldata); }
Here `facet` and `isFacetFreezable` are used only once and should not be declared.
Another instance would be in `_addOneFunction`, https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/libraries/Diamond.sol#L213-L215, where `selector0` is being used only once.
Another instance here would be https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/libraries/Diamond.sol#L85-L92
function getDiamondStorage() internal pure returns (DiamondStorage storage diamondStorage) { bytes32 position = DIAMOND_STORAGE_POSITION; assembly { diamondStorage.slot := position } }
function getDiamondStorage() internal pure returns (DiamondStorage storage diamondStorage) { - bytes32 position = DIAMOND_STORAGE_POSITION; assembly { - diamondStorage.slot := position + diamondStorage.slot := DIAMOND_STORAGE_POSITION } }
`ValidatorTimelock.sol` https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/ValidatorTimelock.sol
function _propagateToZkSyncStateTransition(uint256 _chainId) internal { address contractAddress = stateTransitionManager.stateTransition(_chainId); assembly { // Copy function signature and arguments from calldata at zero position into memory at pointer position calldatacopy(0, 0, calldatasize()) // Call method of the hyperchain diamond contract returns 0 on error let result := call(gas(), contractAddress, 0, 0, calldatasize(), 0, 0) // Get the size of the last return data let size := returndatasize() // Copy the size length of bytes from return data at zero position to pointer position returndatacopy(0, 0, size) // Depending on the result value switch result case 0 { // End execution and revert state changes revert(0, size) } default { // Return data with length of size at pointers position return(0, size) } } }
function _propagateToZkSyncStateTransition(uint256 _chainId) internal { - address contractAddress = stateTransitionManager.stateTransition(_chainId); assembly { // Copy function signature and arguments from calldata at zero position into memory at pointer position calldatacopy(0, 0, calldatasize()) // Call method of the hyperchain diamond contract returns 0 on error - let result := call(gas(), contractAddress, 0, 0, calldatasize(), 0, 0) + let result := call(gas(), stateTransitionManager.stateTransition(_chainId), 0, 0, calldatasize(), 0, 0) // Get the size of the last return data let size := returndatasize() // Copy the size length of bytes from return data at zero position to pointer position returndatacopy(0, 0, size) // Depending on the result value switch result case 0 { // End execution and revert state changes revert(0, size) } default { // Return data with length of size at pointers position return(0, size) } } }
`SystemContext.sol` https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/SystemContext.sol
uint128 currentBatchTimestamp = currentBatchInfo.timestamp; require( _l2BlockTimestamp >= currentBatchTimestamp, "The timestamp of the L2 block must be greater than or equal to the timestamp of the current batch" );
- uint128 currentBatchTimestamp = currentBatchInfo.timestamp; require( - _l2BlockTimestamp >= currentBatchTimestamp, + _l2BlockTimestamp >= currentBatchInfo.timestamp, "The timestamp of the L2 block must be greater than or equal to the timestamp of the current batch" );
uint128 currentBlockTimestamp = currentL2BlockInfo.timestamp; require( _newTimestamp > currentBlockTimestamp, "The timestamp of the batch must be greater than the timestamp of the previous block"
- uint128 currentBlockTimestamp = currentL2BlockInfo.timestamp; require( - _newTimestamp > currentBlockTimestamp, + _newTimestamp > currentL2BlockInfo.timestamp, "The timestamp of the batch must be greater than the timestamp of the previous block"
Consider applying the suggested diff changes and not declaring variables that are only used once.
When accessing data from storage, assigning this data to a variable in memory triggers the entire struct or array to be read from storage. This operation costs a significant amount of gas (2100 gas per field in the struct or array). Accessing these fields from the memory variable later adds extra cost due to the need for an MLOAD operation, as opposed to a less expensive stack read.
246 WritePriorityOpParams memory params; 321 L2CanonicalTransaction memory transaction = _serializeL2Transaction(_priorityOpParams, _calldata, _factoryDeps);
396 VerifierParams memory verifierParams = s.verifierParams; 339924 uint256[] memory proofPublicInput = new uint256[](committedBatchesLength);
27 Diamond.SelectorToFacet memory facet = diamondStorage.selectorToFacet[msg.sig];
136 VerifierParams memory oldVerifierParams = s.verifierParams;
97 FacetCut[] memory facetCuts = _diamondCut.facetCuts; 105 bytes4[] memory selectors = facetCuts[i].selectors;
To dive deeper, still in this same contract, we can see the excessive copies via `SelectorToFacet memory oldFacet = ds.selectorToFacet[selector]`; i.e. take a look at https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/libraries/Diamond.sol#L124-L185
function _addFunctions(address _facet, bytes4[] memory _selectors, bool _isFacetFreezable) private { DiamondStorage storage ds = getDiamondStorage(); // Facet with no code cannot be added. // This check also verifies that the facet does not have zero address, since it is the // address with which 0x00000000 selector is associated. require(_facet.code.length > 0, "G"); // Add facet to the list of facets if the facet address is new one _saveFacetIfNew(_facet); uint256 selectorsLength = _selectors.length; for (uint256 i = 0; i < selectorsLength; i = i.uncheckedInc()) { bytes4 selector = _selectors[i]; SelectorToFacet memory oldFacet = ds.selectorToFacet[selector]; require(oldFacet.facetAddress == address(0), "J"); // facet for this selector already exists _addOneFunction(_facet, selector, _isFacetFreezable); } } function _replaceFunctions(address _facet, bytes4[] memory _selectors, bool _isFacetFreezable) private { DiamondStorage storage ds = getDiamondStorage(); // Facet with no code cannot be added. // This check also verifies that the facet does not have zero address, since it is the // address with which 0x00000000 selector is associated. require(_facet.code.length > 0, "K"); uint256 selectorsLength = _selectors.length; for (uint256 i = 0; i < selectorsLength; i = i.uncheckedInc()) { bytes4 selector = _selectors[i]; SelectorToFacet memory oldFacet = ds.selectorToFacet[selector]; require(oldFacet.facetAddress != address(0), "L"); // it is impossible to replace the facet with zero address _removeOneFunction(oldFacet.facetAddress, selector); // Add facet to the list of facets if the facet address is a new one _saveFacetIfNew(_facet); _addOneFunction(_facet, selector, _isFacetFreezable); } } function _removeFunctions(address _facet, bytes4[] memory _selectors) private { DiamondStorage storage ds = getDiamondStorage(); require(_facet == address(0), "a1"); // facet address must be zero uint256 selectorsLength = _selectors.length; for (uint256 i = 0; i < selectorsLength; i = i.uncheckedInc()) { bytes4 selector = _selectors[i]; SelectorToFacet memory oldFacet = ds.selectorToFacet[selector]; require(oldFacet.facetAddress != address(0), "a2"); // Can't delete a non-existent facet _removeOneFunction(oldFacet.facetAddress, selector); } }
These three functions are used to manage the facets and the functions associated with them. The issue is that in all three functions, `SelectorToFacet memory oldFacet = ds.selectorToFacet[selector]` is first queried and then checked against the old facet; these instances each copy a complete struct to memory and then only use one of its attributes. Keep in mind that this is done in a for-loop, making it even more expensive.
Since it's more gas-efficient to declare the variable as `storage` and use stack variables to cache any frequently accessed fields, this should be done, because this approach limits gas consumption to only those fields that are actually accessed.
To suggest a fix for the instances of `SelectorToFacet memory oldFacet = ds.selectorToFacet[selector]`: instead, pass the lookup directly into the require check, i.e. the changes in all three functions should look like this:
- SelectorToFacet memory oldFacet = ds.selectorToFacet[selector]; - require(oldFacet.facetAddress == address(0), "J"); // facet for this selector already exists + require(ds.selectorToFacet[selector].facetAddress == address(0), "J"); // @audit attach the necessary error message for each function
function _executeBatches(StoredBatchInfo[] calldata _batchesData) internal { uint256 nBatches = _batchesData.length; for (uint256 i = 0; i < nBatches; i = i.uncheckedInc()) { _executeOneBatch(_batchesData[i], i); emit BlockExecution(_batchesData[i].batchNumber, _batchesData[i].batchHash, _batchesData[i].commitment); } uint256 newTotalBatchesExecuted = s.totalBatchesExecuted + nBatches; //@audit s.totalBatchesExecuted = newTotalBatchesExecuted; require(newTotalBatchesExecuted <= s.totalBatchesVerified, "n"); // Can't execute batches more than committed and proven currently. uint256 batchWhenUpgradeHappened = s.l2SystemContractsUpgradeBatchNumber; if (batchWhenUpgradeHappened != 0 && batchWhenUpgradeHappened <= newTotalBatchesExecuted) { delete s.l2SystemContractsUpgradeTxHash; delete s.l2SystemContractsUpgradeBatchNumber; } }
Consider this instance too https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Executor.sol#L402
function _proveBatches( StoredBatchInfo calldata _prevBatch, StoredBatchInfo[] calldata _committedBatches, ProofInput calldata _proof ) internal { // Save the variables into the stack to save gas on reading them later uint256 currentTotalBatchesVerified = s.totalBatchesVerified; uint256 committedBatchesLength = _committedBatches.length; // Save the variable from the storage to memory to save gas VerifierParams memory verifierParams = s.verifierParams; // Initialize the array, that will be used as public input to the ZKP uint256[] memory proofPublicInput = new uint256[](committedBatchesLength); // Check that the batch passed by the validator is indeed the first unverified batch //@audit require(_hashStoredBatchInfo(_prevBatch) == s.storedBatchHashes[currentTotalBatchesVerified], "t1");
// Save the variables into the stack to save gas on reading them later uint256 currentTotalBatchesVerified = s.totalBatchesVerified; + + require(_hashStoredBatchInfo(_prevBatch) == s.storedBatchHashes[currentTotalBatchesVerified], "t1"); + uint256 committedBatchesLength = _committedBatches.length; // Save the variable from the storage to memory to save gas VerifierParams memory verifierParams = s.verifierParams; // Initialize the array, that will be used as public input to the ZKP uint256[] memory proofPublicInput = new uint256[](committedBatchesLength); // Check that the batch passed by the validator is indeed the first unverified batch - require(_hashStoredBatchInfo(_prevBatch) == s.storedBatchHashes[currentTotalBatchesVerified], "t1");
For the first case, we can save an SSTORE whenever this attempt reverts if we apply these changes:
function _executeBatches(StoredBatchInfo[] calldata _batchesData) internal { uint256 nBatches = _batchesData.length; for (uint256 i = 0; i < nBatches; i = i.uncheckedInc()) { _executeOneBatch(_batchesData[i], i); emit BlockExecution(_batchesData[i].batchNumber, _batchesData[i].batchHash, _batchesData[i].commitment); } uint256 newTotalBatchesExecuted = s.totalBatchesExecuted + nBatches; //@audit + require(newTotalBatchesExecuted <= s.totalBatchesVerified, "n"); // Can't execute batches more than committed and proven currently. s.totalBatchesExecuted = newTotalBatchesExecuted; - require(newTotalBatchesExecuted <= s.totalBatchesVerified, "n"); // Can't execute batches more than committed and proven currently. uint256 batchWhenUpgradeHappened = s.l2SystemContractsUpgradeBatchNumber; if (batchWhenUpgradeHappened != 0 && batchWhenUpgradeHappened <= newTotalBatchesExecuted) { delete s.l2SystemContractsUpgradeTxHash; delete s.l2SystemContractsUpgradeBatchNumber; } }
function _proveBatches( StoredBatchInfo calldata _prevBatch, StoredBatchInfo[] calldata _committedBatches, ProofInput calldata _proof ) internal { // Save the variables into the stack to save gas on reading them later uint256 currentTotalBatchesVerified = s.totalBatchesVerified; + // Check that the batch passed by the validator is indeed the first unverified batch + require(_hashStoredBatchInfo(_prevBatch) == s.storedBatchHashes[currentTotalBatchesVerified], "t1"); + uint256 committedBatchesLength = _committedBatches.length; // Save the variable from the storage to memory to save gas VerifierParams memory verifierParams = s.verifierParams; // Initialize the array, that will be used as public input to the ZKP uint256[] memory proofPublicInput = new uint256[](committedBatchesLength); // Check that the batch passed by the validator is indeed the first unverified batch - // Check that the batch passed by the validator is indeed the first unverified batch - require(_hashStoredBatchInfo(_prevBatch) == s.storedBatchHashes[currentTotalBatchesVerified], "t1");
For the second case, apply the suggested diff change in the report.
Diamond.sol & L1ERC20Bridge.sol
Multiple instances within the Diamond.sol contract: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/libraries/Diamond.sol
208 uint16 selectorPosition = (ds.facetToSelectors[_facet].selectors.length).toUint16(); 214 bytes4 selector0 = ds.facetToSelectors[_facet].selectors[0]; 223 ds.facetToSelectors[_facet].selectors.push(_selector); 233 uint256 lastSelectorPosition = ds.facetToSelectors[_facet].selectors.length - 1; 237 bytes4 lastSelector = ds.facetToSelectors[_facet].selectors[lastSelectorPosition]; 244 ds.facetToSelectors[_facet].selectors.pop(); 245 ds.selectorToFacet[lastSelector].selectorPosition = selectorPosition.toUint16();
All the above instances re-calculate the keccak256 hash of the facetToSelectors and selectorToFacet mapping slots multiple times and are not optimized.
Now within the L1ERC20Bridge: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L215
215 require(!isWithdrawalFinalized[_l2BatchNumber][_l2MessageIndex], "pw");
Though the compiler does optimize consecutive keccak256 calculations of the same value, the referenced code snippets are not performed in sequence and could benefit from an optimization where the mapping lookup is cached in a local variable.
Multiple instances, consider:
154 depositAmount[msg.sender][_l1Token][l2TxHash] = _amount;
59 _queue.data[tail] = _operation;
If the old value is equal to the new value, not re-storing the value will avoid a Gsreset (2900 gas), potentially at the expense of a Gcoldsload (2100 gas) or a Gwarmaccess (100 gas)
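As a hedged sketch of the check-before-store pattern, using the depositAmount instance above (whether the extra read pays off depends on how often the old and new values match):

if (depositAmount[msg.sender][_l1Token][l2TxHash] != _amount) {
    // only pay for the SSTORE when the value actually changes
    depositAmount[msg.sender][_l1Token][l2TxHash] = _amount;
}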
This saves a storage slot for the mapping. Depending on the circumstances and the sizes of the types, we avoid a Gsset (20000 gas) per mapping combined.
Note that reads and subsequent writes can also be cheaper when a function requires both values and they both fit in the same storage slot. Finally, if both fields are accessed in the same function, it can save ~42 gas per access due to not recalculating the key's keccak256 hash (Gkeccak256 - 30 gas) and that calculation's associated stack operations.
mapping(address => uint256) public __DEPRECATED_lastWithdrawalLimitReset; /// @dev A mapping L1 token address => the accumulated withdrawn amount during the withdrawal limit window mapping(address => uint256) public __DEPRECATED_withdrawnAmountInWindow; /// @dev The accumulated deposited amount per user. /// @dev A mapping L1 token address => user address => the total deposited amount by the user mapping(address => mapping(address => uint256)) public __DEPRECATED_totalDepositedAmountPerUser;
Events in Admin.sol should be emitted one step above
Some instances of setting variables in Admin.sol could benefit from having the event emitted one step earlier, saving the gas cost of an extra memory variable.
function setPendingAdmin(address _newPendingAdmin) external onlyGovernorOrAdmin { // Save previous value into the stack to put it into the event later - address oldPendingAdmin = s.pendingAdmin; + emit NewPendingGovernor(s.pendingAdmin, _newPendingAdmin); // Change pending admin s.pendingAdmin = _newPendingAdmin; - emit NewPendingGovernor(oldPendingAdmin, _newPendingAdmin); }
function acceptAdmin() external { address pendingAdmin = s.pendingAdmin; require(msg.sender == pendingAdmin, "n4"); - address previousAdmin = s.admin; + emit NewAdmin(s.admin, pendingAdmin); s.admin = pendingAdmin; delete s.pendingAdmin; emit NewPendingAdmin(pendingAdmin, address(0)); - emit NewAdmin(previousAdmin, pendingAdmin);
- uint256 oldPriorityTxMaxGasLimit = s.priorityTxMaxGasLimit; + emit NewPriorityTxMaxGasLimit(s.priorityTxMaxGasLimit, _newPriorityTxMaxGasLimit); s.priorityTxMaxGasLimit = _newPriorityTxMaxGasLimit; - emit NewPriorityTxMaxGasLimit(oldPriorityTxMaxGasLimit, _newPriorityTxMaxGasLimit);
Apply the suggested diff changes.
L1SharedBridge::deposit()'s current implementation would cost users unnecessary gas while depositing
Take a look at the whole shared bridge contract (both L1 and L2): https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol & https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/zksync/contracts/bridge/L2SharedBridge.sol, most especially the depositing logic.
The current implementation follows these steps:
- A deposit is initiated on the L1 side (L1SharedBridge.sol).
- On L2, the _deployL2Token function deploys a new L2 token using the provided data. This data includes the token name, symbol, and decimals, source: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/zksync/contracts/bridge/L2SharedBridge.sol#L97-L101
- The _deployL2Token function parses the provided data to extract the token name, symbol, and decimals, source: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/zksync/contracts/bridge/L2SharedBridge.sol#L110-L117
- That data is fetched and encoded on L1 by the _getERC20Getters function, source: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L254

At a scale of (600,000 transactions * 0.0001 ETH) ... (current value of that is ~220,000 USD).

The deposit function in L1ERC20Bridge.sol incurs unnecessary gas costs by fetching the ERC20 token's name, symbol, and decimals on every deposit transaction, even when this information has already been retrieved for the same token in previous deposits. This repetitive fetching wastes gas and increases transaction fees for users.
Try only querying the ERC20 token for its name, symbol, and decimals during the first successful deposit for a particular token. Subsequent deposits for the same token can reuse the cached information, eliminating redundant calls and reducing transaction fees for users.
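A minimal sketch of the caching idea, assuming a hypothetical _cachedERC20Getters mapping added to L1SharedBridge (the mapping and helper names are illustrative, not the actual API):

// hypothetical cache: token => abi-encoded (name, symbol, decimals)
mapping(address => bytes) internal _cachedERC20Getters;

function _getERC20GettersCached(address _token) internal returns (bytes memory data) {
    data = _cachedERC20Getters[_token];
    if (data.length == 0) {
        // first deposit for this token: query the metadata once and cache it
        data = _getERC20Getters(_token); // existing helper in L1SharedBridge
        _cachedERC20Getters[_token] = data;
    }
}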
Setting a uint256 variable to its default value of 0 can be omitted
Multiple instances of this in scope, from uint256 variables to bool variables being initialized to 0.
We can see this from a suggested search command: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+%3D+0+NOT+language%3ATypeScript+NOT+language%3AMarkdown+NOT+language%3AYul+NOT+language%3AJSON+NOT+language%3ADotenv+NOT+language%3ARust+NOT+language%3ATOML&type=code (this search command still needs to be heavily pinpointed, though).
A direct example could be this in Config.sol:
uint256 constant PRIORITY_EXPIRATION = 0 days;
We can see that there is no need to initialize variables to their default values during declaration, since they are initialized to their default value anyway once declared.
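As a generic, hedged illustration (hypothetical local variable, not code from the repository):

- uint256 i = 0; // explicit zero-initialization costs extra bytecode
+ uint256 i; // defaults to 0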
Diamond.sol: Non-gas efficient expansion of memory
Action action = facetCuts[i].action; address facet = facetCuts[i].facet; bool isFacetFreezable = facetCuts[i].isFreezable; bytes4[] memory selectors = facetCuts[i].selectors;
These referenced variables are declared locally within the for loop of Diamond::diamondCut even though they are already present in memory as the function's argument.
Consider utilizing a single local variable within the for loop, i.e. a FacetCut memory variable whose members are accessed wherever the previous declarations were.
This optimization results in an observable decrease in gas per iteration, with optimizations turned on, for a FacetCut payload with 3 selectors, and the savings scale with the number of selectors present in a cut.
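A hedged sketch of the suggested shape inside Diamond::diamondCut's loop:

- Action action = facetCuts[i].action; address facet = facetCuts[i].facet; bool isFacetFreezable = facetCuts[i].isFreezable; bytes4[] memory selectors = facetCuts[i].selectors;
+ FacetCut memory facetCut = facetCuts[i]; // single copy; read facetCut.action, facetCut.facet, facetCut.isFreezable and facetCut.selectors below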
No need to add PRIORITY_EXPIRATION since its value is currently 0
uint64 expirationTimestamp = uint64(block.timestamp + PRIORITY_EXPIRATION); // Safe to cast
Note that PRIORITY_EXPIRATION is defined in ethereum/contracts/zksync/Config.sol as:
/// NOTE: The constant is set to zero for the Alpha release period uint256 constant PRIORITY_EXPIRATION = 0 days;
The above makes this addition redundant (since we're adding 0).
Since PRIORITY_EXPIRATION is a constant value known at compile time, block.timestamp + PRIORITY_EXPIRATION will never overflow: the max value of uint64 converted into a timestamp lands in the year 2554, so we can either remove the addition or have it unchecked.
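A hedged sketch of the unchecked variant:

uint64 expirationTimestamp;
unchecked {
    // PRIORITY_EXPIRATION is a compile-time constant (currently 0), so this cannot overflow
    expirationTimestamp = uint64(block.timestamp + PRIORITY_EXPIRATION);
}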
Use uint256 for Boolean state management in order to save gas
Quoting this article: https://detectors.auditbase.com/int-instead-of-bool-gas-optimization
"If you don't use boolean for storage, you can avoid Gwarmaccess 100 gas. In addition, state changes from true to false can cost up to ~20000 gas, whereas using uint256(2) to represent false and uint256(1) for true would cost significantly less."
Now, if we transition to a convention where uint256(1) represents 'true' and uint256(2) signifies 'false', we can achieve substantial gas savings: this method avoids the Gwarmaccess cost of 100 gas and circumvents the 20,000 gas Gsset expense associated with switching a boolean from 'false' back to 'true', which is particularly beneficial if the initial state was 'true'.
31 bool approvedBySecurityCouncil; 74 mapping(address validatorAddress => bool isValidator) validators; 103 bool zkPorterIsAvailable; 114 mapping(uint256 l2BatchNumber => mapping(uint256 l2ToL1MessageNumber => bool isFinalized)) isEthWithdrawalFinalized;
Other instances to be considered https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/libraries/Diamond.sol
33 bool isFreezable; 54 bool isFrozen; 65 bool isFreezable;
Consider this gist to see how to apply this idea in scope to achieve more efficient gas usage, particularly in operations involving frequent state toggles between true and false values.
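A minimal, self-contained sketch of the convention (illustrative names, not the codebase's actual storage layout):

contract FlagExample {
    uint256 private constant _TRUE = 1;
    uint256 private constant _FALSE = 2;
    // slot stays non-zero, so toggles cost Gsreset instead of Gsset
    uint256 private _isFrozenFlag = _FALSE;

    function _setFrozen(bool _frozen) internal {
        _isFrozenFlag = _frozen ? _TRUE : _FALSE;
    }

    function _isFrozen() internal view returns (bool) {
        return _isFrozenFlag == _TRUE;
    }
}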
Use assembly to revert with an error message
When reverting in Solidity code with a require or revert statement carrying an error message, we can in most cases further optimize the code by using assembly to revert with the error message. Consider an example:
/// calling restrictedAction(2) with a non-owner address: 24042 contract SolidityRevert { address owner; uint256 specialNumber = 1; constructor() { owner = msg.sender; } function restrictedAction(uint256 num) external { require(owner == msg.sender, "caller is not owner"); specialNumber = num; } } /// calling restrictedAction(2) with a non-owner address: 23734 contract AssemblyRevert { address owner; uint256 specialNumber = 1; constructor() { owner = msg.sender; } function restrictedAction(uint256 num) external { assembly { if sub(caller(), sload(owner.slot)) { mstore(0x00, 0x20) // store offset to where length of revert message is stored mstore(0x20, 0x13) // store length (19) mstore(0x40, 0x63616c6c6572206973206e6f74206f776e657200000000000000000000000000) // store hex representation of message revert(0x00, 0x60) // revert with data } } specialNumber = num; } }
Deploying this on Remix, we can see a gas saving in the hundreds (~300 gas) when reverting with the same error message via assembly versus doing so in Solidity. These savings come from the memory expansion costs and extra type checks the Solidity compiler performs under the hood.
Consider using assembly to revert with an error message when possible.
Making finalizeDeposit() more efficient
function finalizeDeposit( address _l1Sender, address _l2Receiver, address _l1Token, uint256 _amount, bytes calldata _data ) external payable override { // Only the L1 bridge counterpart can initiate and finalize the deposit. require( AddressAliasHelper.undoL1ToL2Alias(msg.sender) == l1Bridge || AddressAliasHelper.undoL1ToL2Alias(msg.sender) == l1LegacyBridge, "mq" ); require(msg.value == 0, "Value should be 0 for ERC20 bridge"); address expectedL2Token = l2TokenAddress(_l1Token); address currentL1Token = l1TokenAddress[expectedL2Token]; if (currentL1Token == address(0)) { address deployedToken = _deployL2Token(_l1Token, _data); require(deployedToken == expectedL2Token, "mt"); l1TokenAddress[expectedL2Token] = _l1Token; } else { require(currentL1Token == _l1Token, "gg"); // Double check that the expected value equal to real one } IL2StandardToken(expectedL2Token).bridgeMint(_l2Receiver, _amount); emit FinalizeDeposit(_l1Sender, _l2Receiver, expectedL2Token, _amount); }
This function is used to finalize deposits, but it's marked as payable and then attaches the check require(msg.value == 0, "Value should be 0 for ERC20 bridge"). This check is unnecessary: not having payable attached would mean the compiler's native check that no msg.value was passed to the function applies, but this implementation forgoes that and attaches a custom error message, making it more costly than the native check.
More gas is spent whenever finalizeDeposit() is called, due to forgoing the native implementation and having a custom one with an error message.
Consider applying these changes:
function finalizeDeposit( address _l1Sender, address _l2Receiver, address _l1Token, uint256 _amount, bytes calldata _data - ) external payable override { + ) external override { // Only the L1 bridge counterpart can initiate and finalize the deposit. require( AddressAliasHelper.undoL1ToL2Alias(msg.sender) == l1Bridge || AddressAliasHelper.undoL1ToL2Alias(msg.sender) == l1LegacyBridge, "mq" ); - require(msg.value == 0, "Value should be 0 for ERC20 bridge"); address expectedL2Token = l2TokenAddress(_l1Token); address currentL1Token = l1TokenAddress[expectedL2Token]; if (currentL1Token == address(0)) { address deployedToken = _deployL2Token(_l1Token, _data); require(deployedToken == expectedL2Token, "mt"); l1TokenAddress[expectedL2Token] = _l1Token; } else { require(currentL1Token == _l1Token, "gg"); // Double check that the expected value equal to real one } IL2StandardToken(expectedL2Token).bridgeMint(_l2Receiver, _amount); emit FinalizeDeposit(_l1Sender, _l2Receiver, expectedL2Token, _amount); }
Remove the nonReentrant modifier for Admin functions
Since admin functions are trusted in most instances, removing the nonReentrant modifier from them saves a significant amount of gas.
Multiple instances in code; they could be grepped by considering functions restricted to admins. This search command can help pinpoint the relevant modifiers: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync++modifier+only+NOT+language%3AYul+NOT+language%3AMarkdown+NOT+language%3ATypeScript&type=code. The functions where most of these modifiers are used can then drop the nonReentrant modifier. Keep in mind that some other instances restrict calls to admins not via modifiers but via require msg.sender == expectedAdmin checks, so these should be checked too.
Consider not using the nonReentrant modifier for Admin functions, but keep in mind the level of trust placed in the admin; if the protocol plans on making such entry points decentralised in the future, they should be returned to being nonReentrant-protected at that time.
_commitBatches() should be made more efficient (2 modifications)
function _commitBatches( StoredBatchInfo memory _lastCommittedBatchData, CommitBatchInfo[] calldata _newBatchesData ) internal { // check that we have the right protocol version // three comments: // 1. A chain has to keep their protocol version up to date, as processing a block requires the latest or previous protocol version // to solve this we will need to add the feature to create batches with only the protocol upgrade tx, without any other txs. // 2. A chain might become out of sync if it launches while we are in the middle of a protocol upgrade. This would mean they cannot process their genesis upgrade // as thier protocolversion would be outdated, and they also cannot process the protocol upgrade tx as they have a pending upgrade. // 3. The protocol upgrade is increased in the BaseZkSyncUpgrade, in the executor only the systemContractsUpgradeTxHash is checked require( IStateTransitionManager(s.stateTransitionManager).protocolVersion() == s.protocolVersion, "Executor facet: wrong protocol version" ); // With the new changes for EIP-4844, namely the restriction on number of blobs per block, we only allow for a single batch to be committed at a time. require(_newBatchesData.length == 1, "e4"); // Check that we commit batches after last committed batch //@audit require(s.storedBatchHashes[s.totalBatchesCommitted] == _hashStoredBatchInfo(_lastCommittedBatchData), "i"); // incorrect previous batch data bytes32 systemContractsUpgradeTxHash = s.l2SystemContractsUpgradeTxHash; // Upgrades are rarely done so we optimize a case with no active system contracts upgrade. if (systemContractsUpgradeTxHash == bytes32(0) || s.l2SystemContractsUpgradeBatchNumber != 0) { _commitBatchesWithoutSystemContractsUpgrade(_lastCommittedBatchData, _newBatchesData); } else { _commitBatchesWithSystemContractsUpgrade( _lastCommittedBatchData, _newBatchesData, systemContractsUpgradeTxHash ); } //@audit s.totalBatchesCommitted = s.totalBatchesCommitted + _newBatchesData.length; }
Keep in mind that with the current implementation of EIP-4844, _newBatchesData.length is always going to be 1; also, totalBatchesCommitted is read twice, leading to more gas expenditure due to the double SLOADs.
Reimplement the function, i.e. store the first storage access in a local variable, and then consider applying these changes:
//..snip + s.totalBatchesCommitted = s.totalBatchesCommitted + 1; - s.totalBatchesCommitted = s.totalBatchesCommitted + _newBatchesData.length; }
Modifiers inject their implementation bytecode where they are used, while internal functions jump to the location in the runtime code where their implementation is. This brings certain trade-offs to both options.
In short, using modifiers "more than once" means repetitiveness and increase in size of the runtime code but reduces gas cost because of the absence of jumping to the internal function execution offset and jumping back to continue. So if we want to consider runtime gas cost on a primary basis, then modifiers should be our choice but if deployment gas cost and/or reducing the size of the creation code is most important to you then using internal functions will be best.
This can be proven by the test in this gist, where around 35k gas is saved by using modifiers in just 3 functions.
Consider implementing this where possible, but keep in mind that modifiers have the trade-off that they can only be executed at the start or end of a function, meaning executing one in the middle of a function wouldn't be directly possible.
There are two instances of this, i.e. both in _commitBatchesWithoutSystemContractsUpgrade() and _commitBatchesWithSystemContractsUpgrade().
function _commitBatches( StoredBatchInfo memory _lastCommittedBatchData, CommitBatchInfo[] calldata _newBatchesData ) internal { //omitted for brevity //@audit // With the new changes for EIP-4844, namely the restriction on number of blobs per block, we only allow for a single batch to be committed at a time. require(_newBatchesData.length == 1, "e4"); // Check that we commit batches after last committed batch require(s.storedBatchHashes[s.totalBatchesCommitted] == _hashStoredBatchInfo(_lastCommittedBatchData), "i"); // incorrect previous batch data bytes32 systemContractsUpgradeTxHash = s.l2SystemContractsUpgradeTxHash; // Upgrades are rarely done so we optimize a case with no active system contracts upgrade. if (systemContractsUpgradeTxHash == bytes32(0) || s.l2SystemContractsUpgradeBatchNumber != 0) { _commitBatchesWithoutSystemContractsUpgrade(_lastCommittedBatchData, _newBatchesData); } else { _commitBatchesWithSystemContractsUpgrade( _lastCommittedBatchData, _newBatchesData, systemContractsUpgradeTxHash ); } s.totalBatchesCommitted = s.totalBatchesCommitted + _newBatchesData.length; } function _commitBatchesWithoutSystemContractsUpgrade( StoredBatchInfo memory _lastCommittedBatchData, CommitBatchInfo[] calldata _newBatchesData ) internal { //omitted for brevity //@audit for (uint256 i = 0; i < _newBatchesData.length; i = i.uncheckedInc()) { _lastCommittedBatchData = _commitOneBatch(_lastCommittedBatchData, _newBatchesData[i], bytes32(0)); s.storedBatchHashes[_lastCommittedBatchData.batchNumber] = _hashStoredBatchInfo(_lastCommittedBatchData); emit BlockCommit( _lastCommittedBatchData.batchNumber, _lastCommittedBatchData.batchHash, _lastCommittedBatchData.commitment ); } } function _commitBatchesWithSystemContractsUpgrade( StoredBatchInfo memory _lastCommittedBatchData, CommitBatchInfo[] calldata _newBatchesData, bytes32 _systemContractUpgradeTxHash ) internal { //omitted for brevity //@audit for (uint256 i = 0; i < _newBatchesData.length; i = i.uncheckedInc()) { // The upgrade transaction must only be included in the first batch. bytes32 expectedUpgradeTxHash = i == 0 ? _systemContractUpgradeTxHash : bytes32(0); _lastCommittedBatchData = _commitOneBatch( _lastCommittedBatchData, _newBatchesData[i], expectedUpgradeTxHash ); s.storedBatchHashes[_lastCommittedBatchData.batchNumber] = _hashStoredBatchInfo(_lastCommittedBatchData); emit BlockCommit( _lastCommittedBatchData.batchNumber, _lastCommittedBatchData.batchHash, _lastCommittedBatchData.commitment ); } }
These functions are called whenever batches are to be committed. Post the EIP-4844 implementation, due to the restriction on the number of blobs per block, the protocol only allows a single batch to be committed at a time, as clearly noted in _commitBatches(); yet navigating to both _commitBatchesWithoutSystemContractsUpgrade() and _commitBatchesWithSystemContractsUpgrade(), we can still see an unnecessary loop over _newBatchesData.length, which is already enforced to be 1.
Remove the unnecessary for loops in both the _commitBatchesWithoutSystemContractsUpgrade() and _commitBatchesWithSystemContractsUpgrade() functions, as sketched below.
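A hedged sketch for _commitBatchesWithoutSystemContractsUpgrade(), relying on the require(_newBatchesData.length == 1, "e4") enforced by the caller:

- for (uint256 i = 0; i < _newBatchesData.length; i = i.uncheckedInc()) {
-     _lastCommittedBatchData = _commitOneBatch(_lastCommittedBatchData, _newBatchesData[i], bytes32(0));
-     s.storedBatchHashes[_lastCommittedBatchData.batchNumber] = _hashStoredBatchInfo(_lastCommittedBatchData);
-     emit BlockCommit(
-         _lastCommittedBatchData.batchNumber,
-         _lastCommittedBatchData.batchHash,
-         _lastCommittedBatchData.commitment
-     );
- }
+ // the caller guarantees exactly one batch, so index 0 is the only element
+ _lastCommittedBatchData = _commitOneBatch(_lastCommittedBatchData, _newBatchesData[0], bytes32(0));
+ s.storedBatchHashes[_lastCommittedBatchData.batchNumber] = _hashStoredBatchInfo(_lastCommittedBatchData);
+ emit BlockCommit(
+     _lastCommittedBatchData.batchNumber,
+     _lastCommittedBatchData.batchHash,
+     _lastCommittedBatchData.commitment
+ );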
This bug is also present in this instance below: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/ValidatorTimelock.sol#L107-L142
function commitBatches( StoredBatchInfo calldata, CommitBatchInfo[] calldata _newBatchesData ) external onlyValidator(ERA_CHAIN_ID) { unchecked { // This contract is only a temporary solution, that hopefully will be disabled until 2106 year, so... // It is safe to cast. uint32 timestamp = uint32(block.timestamp); for (uint256 i = 0; i < _newBatchesData.length; ++i) { committedBatchTimestamp[ERA_CHAIN_ID].set(_newBatchesData[i].batchNumber, timestamp); } } _propagateToZkSyncStateTransition(ERA_CHAIN_ID); } /// @dev Records the timestamp for all provided committed batches and make /// a call to the hyperchain diamond contract with the same calldata. function commitBatchesSharedBridge( uint256 _chainId, StoredBatchInfo calldata, CommitBatchInfo[] calldata _newBatchesData ) external onlyValidator(_chainId) { unchecked { // This contract is only a temporary solution, that hopefully will be disabled until 2106 year, so... // It is safe to cast. uint32 timestamp = uint32(block.timestamp); for (uint256 i = 0; i < _newBatchesData.length; ++i) { committedBatchTimestamp[_chainId].set(_newBatchesData[i].batchNumber, timestamp); } } _propagateToZkSyncStateTransition(_chainId); }
/// @dev Check that batches were committed at least X time ago and /// make a call to the hyperchain diamond contract with the same calldata. function executeBatches(StoredBatchInfo[] calldata _newBatchesData) external onlyValidator(ERA_CHAIN_ID) { uint256 delay = executionDelay; // uint32 unchecked { for (uint256 i = 0; i < _newBatchesData.length; ++i) { uint256 commitBatchTimestamp = committedBatchTimestamp[ERA_CHAIN_ID].get( _newBatchesData[i].batchNumber ); // Note: if the `commitBatchTimestamp` is zero, that means either: // * The batch was committed, but not through this contract. // * The batch wasn't committed at all, so execution will fail in the zkSync contract. // We allow executing such batches. require(block.timestamp >= commitBatchTimestamp + delay, "5c"); // The delay is not passed } } _propagateToZkSyncStateTransition(ERA_CHAIN_ID); } /// @dev Check that batches were committed at least X time ago and /// make a call to the hyperchain diamond contract with the same calldata. function executeBatchesSharedBridge( uint256 _chainId, StoredBatchInfo[] calldata _newBatchesData ) external onlyValidator(_chainId) { uint256 delay = executionDelay; // uint32 unchecked { for (uint256 i = 0; i < _newBatchesData.length; ++i) { uint256 commitBatchTimestamp = committedBatchTimestamp[_chainId].get(_newBatchesData[i].batchNumber); // Note: if the `commitBatchTimestamp` is zero, that means either: // * The batch was committed, but not through this contract. // * The batch wasn't committed at all, so execution will fail in the zkSync contract. // We allow executing such batches. require(block.timestamp >= commitBatchTimestamp + delay, "5c"); // The delay is not passed } } _propagateToZkSyncStateTransition(_chainId); }
Unnecessary querying of _newBatchesData.length, since it is always 1
function _commitBatches( StoredBatchInfo memory _lastCommittedBatchData, CommitBatchInfo[] calldata _newBatchesData ) internal { // check that we have the right protocol version // three comments: // 1. A chain has to keep their protocol version up to date, as processing a block requires the latest or previous protocol version // to solve this we will need to add the feature to create batches with only the protocol upgrade tx, without any other txs. // 2. A chain might become out of sync if it launches while we are in the middle of a protocol upgrade. This would mean they cannot process their genesis upgrade // as thier protocolversion would be outdated, and they also cannot process the protocol upgrade tx as they have a pending upgrade. // 3. The protocol upgrade is increased in the BaseZkSyncUpgrade, in the executor only the systemContractsUpgradeTxHash is checked require( IStateTransitionManager(s.stateTransitionManager).protocolVersion() == s.protocolVersion, "Executor facet: wrong protocol version" ); // With the new changes for EIP-4844, namely the restriction on number of blobs per block, we only allow for a single batch to be committed at a time. //@audit require(_newBatchesData.length == 1, "e4"); // Check that we commit batches after last committed batch require(s.storedBatchHashes[s.totalBatchesCommitted] == _hashStoredBatchInfo(_lastCommittedBatchData), "i"); // incorrect previous batch data bytes32 systemContractsUpgradeTxHash = s.l2SystemContractsUpgradeTxHash; // Upgrades are rarely done so we optimize a case with no active system contracts upgrade. if (systemContractsUpgradeTxHash == bytes32(0) || s.l2SystemContractsUpgradeBatchNumber != 0) { _commitBatchesWithoutSystemContractsUpgrade(_lastCommittedBatchData, _newBatchesData); } else { _commitBatchesWithSystemContractsUpgrade( _lastCommittedBatchData, _newBatchesData, systemContractsUpgradeTxHash ); } //@audit s.totalBatchesCommitted = s.totalBatchesCommitted + _newBatchesData.length; }
This function is used to commit batches, but its last step queries _newBatchesData.length even though the line require(_newBatchesData.length == 1, "e4"); guarantees that _newBatchesData.length is always 1, so querying the length again is unnecessary.
Apply these changes:
function _commitBatches( StoredBatchInfo memory _lastCommittedBatchData, CommitBatchInfo[] calldata _newBatchesData ) internal { // check that we have the right protocol version // three comments: // 1. A chain has to keep their protocol version up to date, as processing a block requires the latest or previous protocol version // to solve this we will need to add the feature to create batches with only the protocol upgrade tx, without any other txs. // 2. A chain might become out of sync if it launches while we are in the middle of a protocol upgrade. This would mean they cannot process their genesis upgrade // as thier protocolversion would be outdated, and they also cannot process the protocol upgrade tx as they have a pending upgrade. // 3. The protocol upgrade is increased in the BaseZkSyncUpgrade, in the executor only the systemContractsUpgradeTxHash is checked require( IStateTransitionManager(s.stateTransitionManager).protocolVersion() == s.protocolVersion, "Executor facet: wrong protocol version" ); // With the new changes for EIP-4844, namely the restriction on number of blobs per block, we only allow for a single batch to be committed at a time. require(_newBatchesData.length == 1, "e4"); // Check that we commit batches after last committed batch require(s.storedBatchHashes[s.totalBatchesCommitted] == _hashStoredBatchInfo(_lastCommittedBatchData), "i"); // incorrect previous batch data bytes32 systemContractsUpgradeTxHash = s.l2SystemContractsUpgradeTxHash; // Upgrades are rarely done so we optimize a case with no active system contracts upgrade. if (systemContractsUpgradeTxHash == bytes32(0) || s.l2SystemContractsUpgradeBatchNumber != 0) { _commitBatchesWithoutSystemContractsUpgrade(_lastCommittedBatchData, _newBatchesData); } else { _commitBatchesWithSystemContractsUpgrade( _lastCommittedBatchData, _newBatchesData, systemContractsUpgradeTxHash ); } - s.totalBatchesCommitted = s.totalBatchesCommitted + _newBatchesData.length; + s.totalBatchesCommitted = s.totalBatchesCommitted + 1; }
DiamondProxy.sol's fallback could be made more efficient
NB: A similar logic can be applied to Mailbox's _proveL2LogInclusion(), snippet attached.
fallback() external payable { Diamond.DiamondStorage storage diamondStorage = Diamond.getDiamondStorage(); // Check whether the data contains a "full" selector or it is empty. // Required because Diamond proxy finds a facet by function signature, // which is not defined for data length in range [1, 3]. require(msg.data.length >= 4 || msg.data.length == 0, "Ut"); // Get facet from function selector Diamond.SelectorToFacet memory facet = diamondStorage.selectorToFacet[msg.sig]; address facetAddress = facet.facetAddress;
The fallback function should validate msg.data.length first, which could help avoid making a state load.
function _proveL2LogInclusion( uint256 _batchNumber, uint256 _index, L2Log memory _log, bytes32[] calldata _proof ) internal view returns (bool) { require(_batchNumber <= s.totalBatchesExecuted, "xx"); bytes32 hashedLog = keccak256( abi.encodePacked(_log.l2ShardId, _log.isService, _log.txNumberInBatch, _log.sender, _log.key, _log.value) ); // Check that hashed log is not the default one, // otherwise it means that the value is out of range of sent L2 -> L1 logs require(hashedLog != L2_L1_LOGS_TREE_DEFAULT_LEAF_HASH, "tw"); // It is ok to not check length of `_proof` array, as length // of leaf preimage (which is `L2_TO_L1_LOG_SERIALIZE_SIZE`) is not // equal to the length of other nodes preimages (which are `2 * 32`) bytes32 calculatedRootHash = Merkle.calculateRoot(_proof, _index, hashedLog); bytes32 actualRootHash = s.l2LogsRootHashes[_batchNumber]; return actualRootHash == calculatedRootHash; }
Consider moving the require(_batchNumber <= s.totalBatchesExecuted, "xx"); check lower in the function's execution to avoid paying for the storage read in case another check reverts; a sketch follows.
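A hedged sketch of the reordering in _proveL2LogInclusion():

- require(_batchNumber <= s.totalBatchesExecuted, "xx");
  bytes32 hashedLog = keccak256(
      abi.encodePacked(_log.l2ShardId, _log.isService, _log.txNumberInBatch, _log.sender, _log.key, _log.value)
  );
  require(hashedLog != L2_L1_LOGS_TREE_DEFAULT_LEAF_HASH, "tw");
+ require(_batchNumber <= s.totalBatchesExecuted, "xx"); // storage is only touched once the cheap checks pass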
To optimize the code, we shouldn't load storage if we might revert on another check; validate msg.data first before reading from storage:
fallback() external payable { + require(msg.data.length >= 4 || msg.data.length == 0, "Ut"); Diamond.DiamondStorage storage diamondStorage = Diamond.getDiamondStorage(); // Check whether the data contains a "full" selector or it is empty. // Required because Diamond proxy finds a facet by function signature, // which is not defined for data length in range [1, 3]. - require(msg.data.length >= 4 || msg.data.length == 0, "Ut"); // Get facet from function selector Diamond.SelectorToFacet memory facet = diamondStorage.selectorToFacet[msg.sig]; address facetAddress = facet.facetAddress;
In Solidity, assembly for loops save gas by avoiding the overhead of stack operations: the compiler generates assembly code that includes stack operations to store and retrieve variables, which can lead to high gas costs, especially if the for loop iterates over a large number of elements. Using assembly is more gas-efficient since we avoid this unnecessary overhead.
Using this simple contract in the Remix IDE, we can see savings in the hundreds of gas per iteration; the savings may be higher depending on the complexity.
Multiple instances, we can see that by using this search command: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+for+%28+NOT+language%3AMarkdown+NOT+language%3ATypeScript+NOT+language%3AYul+NOT+language%3ARust&type=code
Consider swapping the normal for loops for assembly-implemented ones, or at least in instances where the logic is simple and mistakes would be hard to make; a sketch follows.
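A minimal, self-contained sketch (a hypothetical sum function, not code from the repository) of a calldata loop written in assembly:

// Hypothetical example: summing a calldata array without Solidity's per-iteration overhead
function sumAssembly(uint256[] calldata _values) external pure returns (uint256 total) {
    assembly {
        let len := _values.length
        for { let i := 0 } lt(i, len) { i := add(i, 1) } {
            // each element is a 32-byte word at _values.offset + i * 32
            total := add(total, calldataload(add(_values.offset, mul(i, 0x20))))
        }
    }
}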
Use assembly to write address storage values
An instance can be seen in ValidatorTimelock.sol:
validator = _validator; validator = _newValidator;
Consider using assembly to write address storage values to save gas, as sketched below.
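A hedged sketch, assuming validator is a plain address state variable as shown above:

function _setValidatorAssembly(address _newValidator) internal {
    assembly {
        // write directly to the variable's storage slot
        sstore(validator.slot, _newValidator)
    }
}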
Take a look at L1ERC20Bridge.sol https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1ERC20Bridge.sol#L158C1-L167C1
/// @dev Transfers tokens from the depositor address to the shared bridge address. /// @return The difference between the contract balance before and after the transferring of funds. function _depositFundsToSharedBridge(address _from, IERC20 _token, uint256 _amount) internal returns (uint256) { uint256 balanceBefore = _token.balanceOf(address(sharedBridge)); _token.safeTransferFrom(_from, address(sharedBridge), _amount); uint256 balanceAfter = _token.balanceOf(address(sharedBridge)); return balanceAfter - balanceBefore; }
Here we are very certain that balanceAfter < balanceBefore is never going to be true, since the protocol does not support "funny" tokens, so we can wrap this subtraction in an unchecked block.
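A hedged sketch of the change in _depositFundsToSharedBridge():

- return balanceAfter - balanceBefore;
+ unchecked {
+     // balanceAfter >= balanceBefore holds for standard tokens, so the subtraction cannot underflow
+     return balanceAfter - balanceBefore;
+ }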
Take a look at Executor.sol
require(block.timestamp - COMMIT_TIMESTAMP_NOT_OLDER <= batchTimestamp, "h1"); // New batch timestamp is too small require(lastL2BlockTimestamp <= block.timestamp + COMMIT_TIMESTAMP_APPROXIMATION_DELTA, "h2"); // The last L2 block timestamp is too big
Note that COMMIT_TIMESTAMP_NOT_OLDER and COMMIT_TIMESTAMP_APPROXIMATION_DELTA are constant values known before compilation; in our case, from https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/common/Config.sol#L44 we can see that COMMIT_TIMESTAMP_APPROXIMATION_DELTA is just 1 hour, so block.timestamp + COMMIT_TIMESTAMP_APPROXIMATION_DELTA won't overflow and the two lines can be unchecked.
Consider wrapping the mathematical operations in an unchecked block.
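A hedged sketch, assuming block.timestamp always exceeds COMMIT_TIMESTAMP_NOT_OLDER (which holds on any live chain):

unchecked {
    // both constants are small compile-time values, so neither expression can realistically overflow/underflow
    require(block.timestamp - COMMIT_TIMESTAMP_NOT_OLDER <= batchTimestamp, "h1"); // New batch timestamp is too small
    require(lastL2BlockTimestamp <= block.timestamp + COMMIT_TIMESTAMP_APPROXIMATION_DELTA, "h2"); // The last L2 block timestamp is too big
}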
Take a look at PriorityQueue's getSize():
/// @return The total number of unprocessed priority operations in a priority queue function getSize(Queue storage _queue) internal view returns (uint256) { return uint256(_queue.tail - _queue.head); }
But from here we can see that both _queue.tail & _queue.head are of type uint256 already, so there's no need for this casting.
Remove the unnecessary casting, as sketched below.
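A hedged sketch of the change in getSize():

- return uint256(_queue.tail - _queue.head);
+ return _queue.tail - _queue.head; // both fields are already uint256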
First take a look at https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/vendor/AddressAliasHelper.sol#L22
uint160 constant offset = uint160(0x1111000000000000000000000000000000001111);
Now consider the OpenSea Seaport contract with this address: 0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC. Keep in mind that this will not save gas when calling the address directly; but if that contract's address is used as an argument to a function, that function call will cost far less gas due to having more zeros in the calldata.
Consider implementing vanity addresses. Just be aware that there have been hacks from generating vanity addresses for wallets with insufficiently random private keys; this is not a concern for smart contract vanity addresses created by finding a salt for create2, because smart contracts do not have private keys.
Declare variables outside the loop so as to save gas
When we declare a variable inside a loop, Solidity creates a new instance of the variable for each iteration. This can lead to unnecessary gas costs, especially when the loop executes frequently or iterates over a large number of elements. By declaring the variable outside the loop, you avoid the creation of multiple instances and reduce the gas cost of the contract.
function _execute(Call[] calldata _calls) internal { for (uint256 i = 0; i < _calls.length; ++i) { (bool success, bytes memory returnData) = _calls[i].target.call{value: _calls[i].value}(_calls[i].data); if (!success) { // Propage an error if the call fails. assembly { revert(add(returnData, 0x20), mload(returnData)) } } } }
File : contracts/ethereum/contracts/governance/Governance.sol function _execute(Call[] calldata _calls) internal { + + bool success; + bytes memory returnData; + for (uint256 i = 0; i < _calls.length; ++i) { - (bool success, bytes memory returnData) = _calls[i].target.call{value: _calls[i].value}(_calls[i].data); + (success, returnData) = _calls[i].target.call{value: _calls[i].value}(_calls[i].data); if (!success) { // Propage an error if the call fails. assembly { revert(add(returnData, 0x20), mload(returnData)) } } } }
// linear traversal of the logs for (uint256 i = 0; i < emittedL2Logs.length; i = i.uncheckedAdd(L2_TO_L1_LOG_SERIALIZE_SIZE)) { // Extract the values to be compared to/used such as the log sender, key, and value (address logSender, ) = UnsafeBytes.readAddress(emittedL2Logs, i + L2_LOG_ADDRESS_OFFSET); (uint256 logKey, ) = UnsafeBytes.readUint256(emittedL2Logs, i + L2_LOG_KEY_OFFSET); (bytes32 logValue, ) = UnsafeBytes.readBytes32(emittedL2Logs, i + L2_LOG_VALUE_OFFSET);
+ + address logSender; + uint256 logKey; + bytes32 logValue; // linear traversal of the logs for (uint256 i = 0; i < emittedL2Logs.length; i = i.uncheckedAdd(L2_TO_L1_LOG_SERIALIZE_SIZE)) { // Extract the values to be compared to/used such as the log sender, key, and value - (address logSender, ) = UnsafeBytes.readAddress(emittedL2Logs, i + L2_LOG_ADDRESS_OFFSET); + (logSender, ) = UnsafeBytes.readAddress(emittedL2Logs, i + L2_LOG_ADDRESS_OFFSET); - (uint256 logKey, ) = UnsafeBytes.readUint256(emittedL2Logs, i + L2_LOG_KEY_OFFSET); + (logKey, ) = UnsafeBytes.readUint256(emittedL2Logs, i + L2_LOG_KEY_OFFSET); - (bytes32 logValue, ) = UnsafeBytes.readBytes32(emittedL2Logs, i + L2_LOG_VALUE_OFFSET); + (logValue, ) = UnsafeBytes.readBytes32(emittedL2Logs, i + L2_LOG_VALUE_OFFSET);
function _collectOperationsFromPriorityQueue(uint256 _nPriorityOps) internal returns (bytes32 concatHash) { concatHash = EMPTY_STRING_KECCAK; for (uint256 i = 0; i < _nPriorityOps; i = i.uncheckedInc()) { PriorityOperation memory priorityOp = s.priorityQueue.popFront(); concatHash = keccak256(abi.encode(concatHash, priorityOp.canonicalTxHash)); } }
+ PriorityOperation memory priorityOp; for (uint256 i = 0; i < _nPriorityOps; i = i.uncheckedInc()) { - PriorityOperation memory priorityOp = s.priorityQueue.popFront(); + priorityOp = s.priorityQueue.popFront();
bytes32 prevBatchCommitment = _prevBatch.commitment; for (uint256 i = 0; i < committedBatchesLength; i = i.uncheckedInc()) { currentTotalBatchesVerified = currentTotalBatchesVerified.uncheckedInc(); require( _hashStoredBatchInfo(_committedBatches[i]) == s.storedBatchHashes[currentTotalBatchesVerified], "o1" ); bytes32 currentBatchCommitment = _committedBatches[i].commitment;
bytes32 prevBatchCommitment = _prevBatch.commitment; + bytes32 currentBatchCommitment; for (uint256 i = 0; i < committedBatchesLength; i = i.uncheckedInc()) { currentTotalBatchesVerified = currentTotalBatchesVerified.uncheckedInc(); require( _hashStoredBatchInfo(_committedBatches[i]) == s.storedBatchHashes[currentTotalBatchesVerified], "o1" ); - bytes32 currentBatchCommitment = _committedBatches[i].commitment; + currentBatchCommitment = _committedBatches[i].commitment;
for (uint256 i = 0; i < facetsLen; i = i.uncheckedInc()) { address facetAddr = ds.facets[i]; Diamond.FacetToSelectors memory facetToSelectors = ds.facetToSelectors[facetAddr]; result[i] = Facet(facetAddr, facetToSelectors.selectors); }
+ address facetAddr; + Diamond.FacetToSelectors memory facetToSelectors; for (uint256 i = 0; i < facetsLen; i = i.uncheckedInc()) { - address facetAddr = ds.facets[i]; + facetAddr = ds.facets[i]; - Diamond.FacetToSelectors memory facetToSelectors = ds.facetToSelectors[facetAddr]; + facetToSelectors = ds.facetToSelectors[facetAddr]; result[i] = Facet(facetAddr, facetToSelectors.selectors); }
uint256 factoryDepsLen = _factoryDeps.length; hashedFactoryDeps = new uint256[](factoryDepsLen); for (uint256 i = 0; i < factoryDepsLen; i = i.uncheckedInc()) { bytes32 hashedBytecode = L2ContractHelper.hashL2Bytecode(_factoryDeps[i]);
File : contracts/ethereum/contracts/zksync/facets/Mailbox.sol uint256 factoryDepsLen = _factoryDeps.length; hashedFactoryDeps = new uint256[](factoryDepsLen); + bytes32 hashedBytecode; for (uint256 i = 0; i < factoryDepsLen; i = i.uncheckedInc()) { - bytes32 hashedBytecode = L2ContractHelper.hashL2Bytecode(_factoryDeps[i]); + hashedBytecode = L2ContractHelper.hashL2Bytecode(_factoryDeps[i]);
Apply the diff changes
Uncheck for loops if possible, even when being used with ternaries
Take a look at https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgrade.sol#L226C8-L226C58; unlike most instances of for loops in the codebase, this one does not use an unchecked method to increment i:
for (uint256 i = 0; i < _factoryDeps.length; ++i) {
Another instance can be seen in https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/governance/Governance.sol#L225
for (uint256 i = 0; i < _calls.length; ++i) {
Other instances in scope, can be pinpointed, by using this search command: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+length%3B+%2B%2Bi+NOT+language%3AMarkdown+NOT+language%3ATypeScript&type=code
Consider having the increment statement performed unchecked, optimizing the code's gas cost.
Invert if-else statements that have a negation, and reduce negations when possible
Multiple instances in code; use this search command to pinpoint interesting areas: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+if+%28%21+NOT+language%3AMarkdown+NOT+language%3ATypeScript&type=code
Use the code snippet below as an example: the second function avoids an unnecessary negation, whereas the extra ! increases the computational cost. One should benchmark both methods, because the compiler can sometimes optimize this.
function withNegation() public { if (!condition) { action1(); } else { action2(); } } function withoutNegation() public { if (condition) { action2(); } else { action1(); } }
require(!diamondStorage.isFrozen || !facet.isFreezable, "q1");
Here we can save one negation by rewriting the statement to:
require(!(diamondStorage.isFrozen && facet.isFreezable), "q1");
Consider inverting, while not over-complicating code.
For example, consider this instance in the state-transition facet: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Executor.sol#L291
bytes32 expectedUpgradeTxHash = i == 0 ? _systemContractUpgradeTxHash : bytes32(0)
It's implemented in Executor::_commitBatchesWithSystemContractsUpgrade, where the special logic for the first batch of the total batches being committed is executed inefficiently within the for loop.
Consider executing the system upgrade batch outside the for loop; the for loop body could then be identical to Executor::_commitBatchesWithoutSystemContractsUpgrade, but with the iterator starting at 1 instead of 0. A sketch follows.
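A hedged sketch of the restructuring (abridged; the bookkeeping lines mirror the existing loop body):

// handle the first batch, which carries the upgrade tx, outside the loop
_lastCommittedBatchData = _commitOneBatch(_lastCommittedBatchData, _newBatchesData[0], _systemContractUpgradeTxHash);
s.storedBatchHashes[_lastCommittedBatchData.batchNumber] = _hashStoredBatchInfo(_lastCommittedBatchData);
emit BlockCommit(_lastCommittedBatchData.batchNumber, _lastCommittedBatchData.batchHash, _lastCommittedBatchData.commitment);

// remaining batches need no upgrade tx hash, so the per-iteration ternary disappears
for (uint256 i = 1; i < _newBatchesData.length; i = i.uncheckedInc()) {
    _lastCommittedBatchData = _commitOneBatch(_lastCommittedBatchData, _newBatchesData[i], bytes32(0));
    s.storedBatchHashes[_lastCommittedBatchData.batchNumber] = _hashStoredBatchInfo(_lastCommittedBatchData);
    emit BlockCommit(_lastCommittedBatchData.batchNumber, _lastCommittedBatchData.batchHash, _lastCommittedBatchData.commitment);
}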
Replace variable + 1 with ++variable
NB: In some instances unchecked { ++value } will be preferable.
require(currentBatchNumber == s.totalBatchesExecuted + _executedBatchIdx + 1, "k");
- require(_newBatch.batchNumber == _previousBatch.batchNumber + 1, "f"); + require(_newBatch.batchNumber == ++_previousBatch.batchNumber, "f");
Apply the diff changes
Cache bytecode.length rather than querying it twice
function hashL2Bytecode(bytes memory _bytecode) internal pure returns (bytes32 hashedBytecode) { // Note that the length of the bytecode must be provided in 32-byte words. require(_bytecode.length % 32 == 0, "pq"); uint256 bytecodeLenInWords = _bytecode.length / 32; require(bytecodeLenInWords < 2 ** 16, "pp"); // bytecode length must be less than 2^16 words require(bytecodeLenInWords % 2 == 1, "ps"); // bytecode length in words must be odd hashedBytecode = sha256(_bytecode) & 0x00000000FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF; // Setting the version of the hash hashedBytecode = (hashedBytecode | bytes32(uint256(1 << 248))); // Setting the length hashedBytecode = hashedBytecode | bytes32(bytecodeLenInWords << 224); }
Evidently, _bytecode.length is queried twice, expending double the gas cost. Consider adding uint256 _bytecodeLength = _bytecode.length to the function block and querying _bytecodeLength instead of _bytecode.length in both instances, as sketched below.
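A hedged sketch of the change:

function hashL2Bytecode(bytes memory _bytecode) internal pure returns (bytes32 hashedBytecode) {
+   uint256 _bytecodeLength = _bytecode.length; // single read of the length word
    // Note that the length of the bytecode must be provided in 32-byte words.
-   require(_bytecode.length % 32 == 0, "pq");
+   require(_bytecodeLength % 32 == 0, "pq");
-   uint256 bytecodeLenInWords = _bytecode.length / 32;
+   uint256 bytecodeLenInWords = _bytecodeLength / 32;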
Unnecessary assignment of offset in the last call to UnsafeBytes.readAddress
function _parseL2WithdrawalMessage( uint256 _chainId, bytes memory _l2ToL1message ) internal view returns (address l1Receiver, address l1Token, uint256 amount) { // We check that the message is long enough to read the data. // Please note that there are two versions of the message: // 1. The message that is sent by `withdraw(address _l1Receiver)` // It should be equal to the length of the bytes4 function signature + address l1Receiver + uint256 amount = 4 + 20 + 32 = 56 (bytes). // 2. The message that is sent by `withdrawWithMessage(address _l1Receiver, bytes calldata _additionalData)` // It should be equal to the length of the following: // bytes4 function signature + address l1Receiver + uint256 amount + address l2Sender + bytes _additionalData = // = 4 + 20 + 32 + 32 + _additionalData.length >= 68 (bytes). // ..snip //@audit (l1Receiver, offset) = UnsafeBytes.readAddress(_l2ToL1message, offset); (l1Token, offset) = UnsafeBytes.readAddress(_l2ToL1message, offset); (amount, offset) = UnsafeBytes.readUint256(_l2ToL1message, offset); } else { revert("ShB Incorrect message function selector"); } }
In the last call to UnsafeBytes.readAddress, we can skip assigning the returned offset to save gas.
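A hedged sketch of the change on the final read in _parseL2WithdrawalMessage():

- (amount, offset) = UnsafeBytes.readUint256(_l2ToL1message, offset);
+ (amount, ) = UnsafeBytes.readUint256(_l2ToL1message, offset); // offset is never read again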
Consider removing it in the last call; a simple PoC to test this on Remix could be considered below, where we save ~8/10 gas (depending on complexity):
function getUints() public pure returns (uint, uint) {return (1, 2);} function FirstAll() public view { uint a; uint b; (a, b) = getUints(); uint g = gasleft(); (a, b) = getUints(); console.log(g - gasleft()); } function FirstOne() public view { uint a; uint b; (a, b) = getUints(); uint g = gasleft(); (a, ) = getUints(); console.log(g - gasleft()); }
FirstOne() costs 8 gas less than FirstAll(), so we can decide not to assign every parameter from a function call if it's not needed.
// While this does not provide a protection in the production, it is needed for local testing // Length of the L2Log encoding should not be equal to the length of other L2Logs' tree nodes preimages assert(L2_TO_L1_LOG_SERIALIZE_SIZE != 2 * 32); }
Here we know that assert(L2_TO_L1_LOG_SERIALIZE_SIZE != 2 * 32); is redundant and only needed for testing purposes.
Consider removing code not ready for final production.
92: function _setL2DefaultAccountBytecodeHash(bytes32 _l2DefaultAccountBytecodeHash) private { 109: function _setL2BootloaderBytecodeHash(bytes32 _l2BootloaderBytecodeHash) private { 126 function _setVerifier(IVerifier _newVerifier) private { 142 function _setVerifierParams(VerifierParams calldata _newVerifierParams) private { 222 function _verifyFactoryDeps(bytes[] calldata _factoryDeps, uint256[] calldata _expectedHashes) private pure {
46 function _initializeReentrancyGuard() private {
126 function _addFunctions(address _facet, bytes4[] memory _selectors, bool _isFacetFreezable) private { 149 function _replaceFunctions(address _facet, bytes4[] memory _selectors, bool _isFacetFreezable) private { 172 function _removeFunctions(address _facet, bytes4[] memory _selectors) private { 257 function _removeFacet(address _facet) private { 278 function _initializeDiamondCut(address _init, bytes memory _calldata) private {
Consider inlining the private functions to save gas
function finalizeWithdrawal() external override { // To avoid rewithdrawing txs that have already happened on the legacy bridge. // Note: new withdraws are all recorded here, so double withdrawing them is not possible. //@audit if (_isEraLegacyWithdrawal(_chainId, _l2BatchNumber)) { require(!legacyBridge.isWithdrawalFinalized(_l2BatchNumber, _l2MessageIndex), "ShB: legacy withdrawal"); } _finalizeWithdrawal(_chainId, _l2BatchNumber, _l2MessageIndex, _l2TxNumberInBatch, _message, _merkleProof); } function _finalizeWithdrawal() internal nonReentrant returns (address l1Receiver, address l1Token, uint256 amount) { require(!isWithdrawalFinalized[_chainId][_l2BatchNumber][_l2MessageIndex], "Withdrawal is already finalized"); isWithdrawalFinalized[_chainId][_l2BatchNumber][_l2MessageIndex] = true; // Handling special case for withdrawal from zkSync Era initiated before Shared Bridge. //@audit if (_isEraLegacyWithdrawal(_chainId, _l2BatchNumber)) { // Checks that the withdrawal wasn't finalized already. bool alreadyFinalized = IGetters(ERA_DIAMOND_PROXY).isEthWithdrawalFinalized( _l2BatchNumber, _l2MessageIndex ); require(!alreadyFinalized, "Withdrawal is already finalized 2"); } //ommited for brevity }
These functions are both used for finalizing withdrawals, with multiple checks. The issue is that the check for whether the transaction is from the Era legacy bridge is implemented twice, which is unnecessary.
Leftover PREPROCESSOR directives in proveBatches() causing double expenditure of gas
function _proveBatches( StoredBatchInfo calldata _prevBatch, StoredBatchInfo[] calldata _committedBatches, ProofInput calldata _proof ) internal { //omitted for brevity //@audit if (_proof.serializedProof.length > 0) { _verifyProof(proofPublicInput, _proof); } // #else _verifyProof(proofPublicInput, _proof); // #endif emit BlocksVerification(s.totalBatchesVerified, currentTotalBatchesVerified); s.totalBatchesVerified = currentTotalBatchesVerified; }
Evidently, if the proof is not empty it gets verified due to the if block; the issue is that the #else preprocessor branch has been commented out, which means a non-empty proof always undergoes double verification, causing double the gas expenditure of calling this: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Executor.sol#L441-L452
function _verifyProof(uint256[] memory proofPublicInput, ProofInput calldata _proof) internal view { // We can only process 1 batch proof at a time. require(proofPublicInput.length == 1, "t4"); bool successVerifyProof = s.verifier.verify( proofPublicInput, _proof.serializedProof, _proof.recursiveAggregationInput ); require(successVerifyProof, "p"); // Proof verification fail }
For the first case, the check can be left to just the internal function while finalizing the withdrawal; for the second, consider verifying only once when _proof.serializedProof.length > 0 is true.
Redundant require(s.l2SystemContractsUpgradeBatchNumber == 0, "ik"); while committing batches with system upgrades
function _commitBatchesWithSystemContractsUpgrade( StoredBatchInfo memory _lastCommittedBatchData, CommitBatchInfo[] calldata _newBatchesData, bytes32 _systemContractUpgradeTxHash ) internal { //omitted for brevity //@audit require(s.l2SystemContractsUpgradeBatchNumber == 0, "ik"); //omitted for brevity s.l2SystemContractsUpgradeBatchNumber = _newBatchesData[0].batchNumber; } }
This function is used to commit new batches with a system contracts upgrade transaction. As hinted by the @audit tag, the execution tries to ensure that l2SystemContractsUpgradeBatchNumber is 0, but that check has already been made by the primary internal function that calls this one, i.e. here: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Executor.sol#L237-L245
if (systemContractsUpgradeTxHash == bytes32(0) || s.l2SystemContractsUpgradeBatchNumber != 0) { _commitBatchesWithoutSystemContractsUpgrade(_lastCommittedBatchData, _newBatchesData); } else { _commitBatchesWithSystemContractsUpgrade( _lastCommittedBatchData, _newBatchesData, systemContractsUpgradeTxHash ); }
So there is already an assurance that s.l2SystemContractsUpgradeBatchNumber is always going to be 0 when this function is called.
Remove this line: require(s.l2SystemContractsUpgradeBatchNumber == 0, "ik");
Import an up-to-date Libmap
/// @dev This library is an adaptation of the corresponding Solady library (https://github.com/vectorized/solady/blob/main/src/utils/LibMap.sol) /// @custom:security-contact security@matterlabs.dev library LibMap {
We can see that the protocol adapts Solady's library. The issue is that the `LibMap` currently used by the codebase is sub-optimal, since the implementation in the official Solady repository contains a better level of optimization. To list a few optimizations: lookups use bitwise operators instead of divisions and multiplications (resulting in a 2 gas cost reduction per operation), and sets are performed entirely in assembly blocks, further benefitting from optimized gas costs.
Consider importing the latest gas optimised Libmap
from Solady
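To illustrate the kind of optimization meant here, a minimal sketch (illustrative index math only, not Solady's actual code) for a map that packs eight `uint32` values into one 256-bit storage slot:

```solidity
// illustrative only: computing the slot index and bit offset for element `index`
function indexMath(uint256 index) pure returns (uint256 slotIndex, uint256 bitOffset) {
    // division/multiplication form (DIV, MOD and MUL cost 5 gas each):
    //   slotIndex = index / 8;
    //   bitOffset = (index % 8) * 32;

    // bitwise form (SHR, AND and SHL cost 3 gas each):
    slotIndex = index >> 3;        // divide by 8
    bitOffset = (index & 7) << 5;  // (index % 8) * 32
}
```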
Implementing mappings instead of arrays to avoid length checks

When storing a list or group of items that you wish to organize in a specific order and fetch with a fixed key/index, it's common practice to use an array data structure. There is no caveat to this and it works well; it is just not the cheapest option gas-wise.
Consider this contract POC:
```solidity
/// fetchValueAtIndex(0) gas cost: 4860
contract SequentialStorage {
    uint256[] sequence;

    constructor() {
        sequence.push() = 1;
        sequence.push() = 2;
        sequence.push() = 3;
    }

    function fetchValueAtIndex(uint256 idx) external view returns (uint256) {
        return sequence[idx];
    }
}

/// fetchValueAtIndex(0) gas cost: 2758
contract KeyedStorage {
    mapping(uint256 => uint256) dataMap;

    constructor() {
        dataMap[0] = 1;
        dataMap[1] = 2;
        dataMap[2] = 3;
    }

    function fetchValueAtIndex(uint256 idx) external view returns (uint256) {
        return dataMap[idx];
    }
}
```
So by using a mapping, we get a gas saving of over 2100 gas. This is because when we read an index of an array, Solidity adds bytecode that checks that we are reading from a valid index (i.e. an index strictly less than the length of the array) and reverts with a panic error otherwise. This prevents reading unallocated or, worse, allocated storage/memory locations.
Due to the way mappings work (simply a `key => value` pair), no such check exists, and we are able to read from the storage slot directly.
Multiple instances in code; they can be grepped by searching for `[]` since it mostly denotes an array, i.e. https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+%5B%5D+NOT+language%3ATypeScript+NOT+language%3AMarkdown+NOT+language%3AYul+NOT+language%3AJSON+NOT+language%3ARust&type=code, and from there we can pinpoint the instances where this can be applied.
Consider using mappings over arrays in as many cases as possible, gas-wise; albeit it's important to note that when using mappings in this manner, the code should ensure it never reads an out-of-bounds index of the canonical array.
Take a look at this instance, which can be seen in https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Mailbox.sol#L04-L127:

```solidity
address sender = msg.sender;
```

Consider directly using `msg.sender` (or `block.timestamp` in analogous cases) instead of caching it.
Multiple instances in code; they can be pinpointed with this search command: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+++%2F+2+NOT+language%3ATypeScript+NOT+language%3AMarkdown+NOT+language%3AYul+NOT+language%3AJSON+NOT+language%3ARust+NOT+language%3ADotenv&type=code

Key to note is that we are not only considering instances of `X / 2`, but also instances that look like `X /= 2`. For example, consider this from `Merkle.sol`:
https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/libraries/Merkle.sol#L33
```solidity
_index /= 2;
```

It could be changed to:

```solidity
_index = _index >> 1;
```
Use bitshifting for the divisions instead
Take a look at https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Mailbox.sol#L174; here `Math` is imported but only one function is used.

Whereas here, https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/libraries/Diamond.sol#L5, `SafeCast` is imported but only one function is used.
If only one function is going to be used from an imported library, it is better to copy that function into the contract's own source code.
Use this search command: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+for+%28+NOT+language%3AMarkdown+NOT+language%3ATypeScript+NOT+language%3AYul+NOT+language%3ARust&type=code; these are the instances where for loops are implemented. The issue is with how some of these for loops are implemented, in this case the ones that run back-to-back. Take a look at https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/ContractDeployer.sol#L238-L256
```solidity
function forceDeployOnAddresses(ForceDeployment[] calldata _deployments) external payable {
    require(
        msg.sender == FORCE_DEPLOYER || msg.sender == address(COMPLEX_UPGRADER_CONTRACT),
        "Can only be called by FORCE_DEPLOYER or COMPLEX_UPGRADER_CONTRACT"
    );

    uint256 deploymentsLength = _deployments.length;
    // We need to ensure that the `value` provided by the call is enough to provide `value`
    // for all of the deployments
    uint256 sumOfValues = 0;
    for (uint256 i = 0; i < deploymentsLength; ++i) {
        sumOfValues += _deployments[i].value;
    }
    require(msg.value == sumOfValues, "`value` provided is not equal to the combined `value`s of deployments");

    for (uint256 i = 0; i < deploymentsLength; ++i) {
        this.forceDeployOnAddress{value: _deployments[i].value}(_deployments[i], msg.sender);
    }
}
```
Evidently, the two `for (uint256 i = 0; i < deploymentsLength; ++i)` loops could be merged to reduce cost. In Solidity, merging multiple for loops within a function can enhance efficiency and reduce gas costs, especially when they share a common iterating variable, since this minimizes redundant iterations over the same data and makes execution more cost-effective.
Consider merging similar loops, however, while merging can optimize gas usage and simplify logic, it may also increase code complexity. Therefore there should be a balance between optimization and maintainability.
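A minimal sketch of what the merge could look like here, assuming it is acceptable to move the `msg.value` check after the deployments (a mismatch still reverts the whole transaction, rolling back every deployment made in the loop):

```diff
     uint256 sumOfValues = 0;
     for (uint256 i = 0; i < deploymentsLength; ++i) {
         sumOfValues += _deployments[i].value;
-    }
-    require(msg.value == sumOfValues, "`value` provided is not equal to the combined `value`s of deployments");
-
-    for (uint256 i = 0; i < deploymentsLength; ++i) {
         this.forceDeployOnAddress{value: _deployments[i].value}(_deployments[i], msg.sender);
     }
+    // checked once after the merged loop; a revert here undoes the deployments above
+    require(msg.value == sumOfValues, "`value` provided is not equal to the combined `value`s of deployments");
```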
Use `expectedBool == 1` syntax for booleans

Take a look at `isDiamondStorageFrozen()`:
```solidity
function isDiamondStorageFrozen() external view returns (bool) {
    Diamond.DiamondStorage storage ds = Diamond.getDiamondStorage();
    return ds.isFrozen;
}
```
This function is used to check whether the diamond storage is frozen, and it returns a bool; however, implementing the flag as an integer compared with bitwise-style operations would be cheaper. This report focuses on using integer flags for booleans, i.e. having the implemented changes be:
```diff
function isDiamondStorageFrozen() external view returns (bool) {
    Diamond.DiamondStorage storage ds = Diamond.getDiamondStorage();
-   return ds.isFrozen;
+   // assumes `isFrozen` is refactored from `bool` to a `uint256` flag (0 = false, 1 = true)
+   return ds.isFrozen == 1;
}
```
Apply the diff changes
Whenever possible, use shift-left instead of multiplication to save gas. Instances are numerous in scope, but use this search command to pinpoint some: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync++*++NOT+language%3AMarkdown+NOT+language%3AJSON+NOT+language%3AYul+NOT+language%3ATypeScript+NOT+language%3ARust&type=code
Consider the instances that could be reimplemented to shift left instead of multiplying, for example as shown below.
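A hypothetical snippet of the transformation (valid for unsigned values, where `SHL` at 3 gas replaces `MUL` at 5 gas):

```diff
- uint256 doubled = x * 2;
+ uint256 doubled = x << 1;
```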
`memory` should be used instead of state variables while emitting events

```diff
function _revertBatches(uint256 _newLastBatch) internal {
    require(s.totalBatchesCommitted > _newLastBatch, "v1"); // The last committed batch is less than new last batch
    require(_newLastBatch >= s.totalBatchesExecuted, "v2"); // Already executed batches cannot be reverted

    if (_newLastBatch < s.totalBatchesVerified) {
        s.totalBatchesVerified = _newLastBatch;
    }
    s.totalBatchesCommitted = _newLastBatch;

    // Reset the batch number of the executed system contracts upgrade transaction if the batch
    // where the system contracts upgrade was committed is among the reverted batches.
    if (s.l2SystemContractsUpgradeBatchNumber > _newLastBatch) {
        delete s.l2SystemContractsUpgradeBatchNumber;
    }

-   emit BlocksRevert(s.totalBatchesCommitted, s.totalBatchesVerified, s.totalBatchesExecuted);
+   emit BlocksRevert(_newLastBatch, s.totalBatchesVerified, s.totalBatchesExecuted);
}
```
Apply suggested diff
Use this search command for instances: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+new+bytes%280%29+NOT+language%3AMarkdown+NOT+language%3ATypeScript+NOT+language%3AYul+NOT+language%3ARust&type=code

A more concrete example can be seen at https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/state-transition/chain-deps/facets/Mailbox.sol#L308-L311:

```solidity
signature: new bytes(0),
factoryDeps: _hashFactoryDeps(_factoryDeps),
paymasterInput: new bytes(0),
reservedDynamic: new bytes(0)
```
Consider replacing `new bytes(0)` with `""`.
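A sketch of the change on the snippet above:

```diff
- signature: new bytes(0),
+ signature: "",
```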
The solidity compiler outputs more efficient code when the variable is declared in the return statement. There seem to be very few exceptions to this in practice, so if you see an anonymous return, you should test it with a named return instead to determine which case is most efficient.
For example, take a look at https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgradeGenesis.sol#L46-L68
```solidity
function upgrade(ProposedUpgrade calldata _proposedUpgrade) public virtual override returns (bytes32) {
    // Note that due to commitment delay, the timestamp of the L2 upgrade batch may be earlier than the timestamp
    // of the L1 block at which the upgrade occurred. This means that using timestamp as a signifier of "upgraded"
    // on the L2 side would be inaccurate. The effects of this "back-dating" of L2 upgrade batches will be reduced
    // as the permitted delay window is reduced in the future.
    require(block.timestamp >= _proposedUpgrade.upgradeTimestamp, "Upgrade is not ready yet");

    _setNewProtocolVersion(_proposedUpgrade.newProtocolVersion);
    _upgradeL1Contract(_proposedUpgrade.l1ContractsUpgradeCalldata);
    _upgradeVerifier(_proposedUpgrade.verifier, _proposedUpgrade.verifierParams);
    _setBaseSystemContracts(_proposedUpgrade.bootloaderHash, _proposedUpgrade.defaultAccountHash);

    bytes32 txHash = _setL2SystemContractUpgrade(
        _proposedUpgrade.l2ProtocolUpgradeTx,
        _proposedUpgrade.factoryDeps,
        _proposedUpgrade.newProtocolVersion
    );

    _postUpgrade(_proposedUpgrade.postUpgradeCalldata);

    emit UpgradeComplete(_proposedUpgrade.newProtocolVersion, txHash, _proposedUpgrade);
}
```
The return variable is not named, which costs more gas to return. Consider specifically naming the variables to be returned in each instance.
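A sketch of the change for the `upgrade()` instance above:

```diff
- function upgrade(ProposedUpgrade calldata _proposedUpgrade) public virtual override returns (bytes32) {
+ function upgrade(ProposedUpgrade calldata _proposedUpgrade) public virtual override returns (bytes32 txHash) {
      // ...
-     bytes32 txHash = _setL2SystemContractUpgrade(
+     txHash = _setL2SystemContractUpgrade(
```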
The conventional way to check if a number is even or odd is to do `x % 2 == 0`, where `x` is the number in question. We can instead check if `x & uint256(1) == 0`, where `x` is assumed to be a `uint256`. A bitwise AND is cheaper than the modulo opcode. In binary, the rightmost bit represents "1", whereas all the bits to its left represent multiples of 2, which are even; adding "1" to an even number makes it odd.
Use this search command to find out instances where the %
is used: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync++%25+NOT+language%3AYul+NOT+language%3AMarkdown+NOT+language%3ATypeScript&type=code
Consider testing whether a number is even or odd by checking its last bit, as in the sketch below.
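A minimal example of the two forms (parentheses added for clarity):

```solidity
function isEven(uint256 x) pure returns (bool) {
    // conventional form: MOD costs 5 gas
    // return x % 2 == 0;

    // bitwise form: AND costs 3 gas; the lowest bit of every even number is 0
    return (x & uint256(1)) == 0;
}
```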
Deprecated `mapping` and `import` variables

```solidity
/// @dev Deprecated storage variable related to withdrawal limitations.
mapping(address => uint256) private __DEPRECATED_lastWithdrawalLimitReset;

/// @dev Deprecated storage variable related to withdrawal limitations.
mapping(address => uint256) private __DEPRECATED_withdrawnAmountInWindow;

/// @dev Deprecated storage variable related to deposit limitations.
mapping(address => mapping(address => uint256)) private __DEPRECATED_totalDepositedAmountPerUser;
```

If these deprecated variables are no longer read anywhere, consider removing them, keeping in mind that the storage layout of upgradeable contracts must be preserved.
Assembly checks for `address(0)` are more gas efficient

It's generally more gas-efficient to use assembly to check for the zero address (`address(0)`) than to use the `==` operator. The reason is that the `==` operator generates additional instructions in the EVM bytecode, which increases the gas cost of the contract.
Use this search command to find instances https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync++%3D%3D+address%280%29+NOT+language%3AMarkdown+NOT+language%3ATypeScript&type=code
By using assembly to perform the zero address check, we can make our code more gas-efficient and reduce the overall cost of our contract, keep in mind that assembly can be more difficult to read and maintain than Solidity code, so it should be used with caution and only when necessary.
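A minimal sketch of such a helper (the name is ours, not from the codebase):

```solidity
// reverts when `_addr` is the zero address, using `iszero` directly
// instead of the `==` comparison the compiler would otherwise generate
function _requireNonZeroAddress(address _addr) internal pure {
    assembly {
        if iszero(_addr) {
            revert(0, 0)
        }
    }
}
```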
Use assembly in place of `abi.decode` to extract calldata values

We can use assembly to decode our desired calldata values directly instead of using `abi.decode`; this allows us to avoid decoding calldata values that we will not use.
Use this search command to find instances https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync++abi.decode%28+NOT+language%3AMarkdown+NOT+language%3ATypeScript&type=code
Consider applying this suggestion.
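A minimal sketch, assuming the payload is the standard ABI encoding of `(uint256, address)` and only those two head values are needed:

```solidity
// reads the two head words straight from calldata instead of
// abi.decode-ing the entire payload into memory
function _decodePair(bytes calldata _data) internal pure returns (uint256 a, address b) {
    assembly {
        a := calldataload(_data.offset)
        // the address occupies the low 160 bits of the second word
        b := and(calldataload(add(_data.offset, 0x20)), 0xffffffffffffffffffffffffffffffffffffffff)
    }
}
```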
```solidity
/// @dev Returns whether an id corresponds to a registered operation. This
/// includes Waiting, Ready, and Done operations.
function isOperation(bytes32 _id) public view returns (bool) {
    return getOperationState(_id) != OperationState.Unset;
}

/// @dev Returns whether an operation is pending or not. Note that a "pending" operation may also be "ready".
function isOperationPending(bytes32 _id) public view returns (bool) {
    OperationState state = getOperationState(_id);
    return state == OperationState.Waiting || state == OperationState.Ready;
}

/// @dev Returns whether an operation is ready for execution. Note that a "ready" operation is also "pending".
function isOperationReady(bytes32 _id) public view returns (bool) {
    return getOperationState(_id) == OperationState.Ready;
}

/// @dev Returns whether an operation is done or not.
function isOperationDone(bytes32 _id) public view returns (bool) {
    return getOperationState(_id) == OperationState.Done;
}
```
These instances showcase how functions are used to get the operation state.
Another instance:

```solidity
function isEmpty(Queue storage _queue) internal view returns (bool) {
```
Now consider two contracts with modifiers and internal view function:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.9;

contract Inlined {
    function isNotExpired(bool _true) internal view {
        require(_true == true, "Exchange: EXPIRED");
    }

    function foo(bool _test) public returns (uint) {
        isNotExpired(_test);
        return 1;
    }
}

// SPDX-License-Identifier: MIT
pragma solidity 0.8.9;

contract Modifier {
    modifier isNotExpired(bool _true) {
        require(_true == true, "Exchange: EXPIRED");
        _;
    }

    function foo(bool _test) public isNotExpired(_test) returns (uint) {
        return 1;
    }
}
```
Deploying these on Remix shows that the modifier option is cheaper. Consider using modifiers in place of public functions. In some cases, especially with internal functions, using the internal function in place of a modifier is better, as explained in another report below.
The calldataload
instruction is a cheap opcode, but the solidity compiler will sometimes output cheaper code if we cache calldataload.
Multiple instances where this could be applied in scope, use this search command to pinpoint areas: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+calldata++NOT+language%3ATypeScript+NOT+language%3AMarkdown+NOT+language%3AYul+NOT+language%3AJSON&type=code
```solidity
contract LoopSum {
    function sumArr(uint256[] calldata arr) public pure returns (uint256 sum) {
        uint256 len = arr.length;
        for (uint256 i = 0; i < len; ) {
            sum += arr[i];
            unchecked {
                ++i;
            }
        }
    }
}
```
Consider caching the `calldataload`; keep in mind that this will not always be cheaper, so you should first test both possibilities.
We can make admin-specific functions payable to save gas, because the compiler then won't check the callvalue of the function. Unlike end users, admins are trusted and as such won't unnecessarily attach `msg.value` to their calls.
Multiple instances in code; they can be grepped by considering functions restricted to admins. This search command can help pinpoint the relevant modifiers: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync++modifier+only+NOT+language%3AYul+NOT+language%3AMarkdown+NOT+language%3ATypeScript&type=code. The functions where most of these modifiers are used can then be marked payable. Keep in mind that other instances exist where no modifier restricts calls to admins but `require(msg.sender == expectedAdmin)` checks are made instead, so these should be checked too.
Consider making admin functions payable
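For example, a hypothetical diff on an admin-gated setter (illustrative; any admin-only function applies):

```diff
- function setImportantParam(uint256 _newValue) external onlyOwner {
+ function setImportantParam(uint256 _newValue) external payable onlyOwner {
```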
NB: This should only be done if we can limit the string's length to a certain number of bytes; whenever the value fits, we should always use one of `bytes1` to `bytes32` because they are even cheaper.
Multiple instances, to list a few:
```solidity
string public constant override getName = "AdminFacet";
string public constant override getName = "ExecutorFacet";
string public constant override getName = "GettersFacet";
string public constant override getName = "MailboxFacet";
```
Apply diffs to these instances and change `string` to a fixed-size `bytes` type, i.e.:

```diff
- string public constant override getName = "AdminFacet";
+ bytes32 public constant override getName = "AdminFacet";
```

(assuming the corresponding interface declarations are updated to return `bytes32` as well).
Unnecessary checks while validating signatures

```solidity
function _isValidSignature(bytes32 _hash, bytes memory _signature) internal view returns (bool) {
    require(_signature.length == 65, "Signature length is incorrect");
    uint8 v;
    bytes32 r;
    bytes32 s;
    // Signature loading code
    // we jump 32 (0x20) as the first slot of bytes contains the length
    // we jump 65 (0x41) per signature
    // for v we load 32 bytes ending with v (the first 31 come from s) then apply a mask
    assembly {
        r := mload(add(_signature, 0x20))
        s := mload(add(_signature, 0x40))
        v := and(mload(add(_signature, 0x41)), 0xff)
    }
    //@audit
    require(v == 27 || v == 28, "v is neither 27 nor 28");

    // EIP-2 still allows signature malleability for ecrecover(). Remove this possibility and make the signature
    // unique. Appendix F in the Ethereum Yellow paper (https://ethereum.github.io/yellowpaper/paper.pdf), defines
    // the valid range for s in (301): 0 < s < secp256k1n ÷ 2 + 1, and for v in (302): v ∈ {27, 28}. Most
    // signatures from current libraries generate a unique signature with an s-value in the lower half order.
    //
    // If your library generates malleable signatures, such as s-values in the upper range, calculate a new s-value
    // with 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141 - s1 and flip v from 27 to 28 or
    // vice versa. If your library also generates signatures with 0/1 for v instead 27/28, add 27 to v to accept
    // these malleable signatures as well.
    require(uint256(s) <= 0x7FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF5D576E7357A4501DDFE92F46681B20A0, "Invalid s");

    address recoveredAddress = ecrecover(_hash, v, r, s);

    return recoveredAddress == address(this) && recoveredAddress != address(0);
}
```
As seen, the check `require(v == 27 || v == 28, "v is neither 27 nor 28");` is employed, but it is not needed since it is effectively always true; for more info, refer to this tweet.

Remove the check: `require(v == 27 || v == 28, "v is neither 27 nor 28");`
```solidity
function chunkAndPublishPubdata(bytes calldata _pubdata) external onlyCallFrom(address(L1_MESSENGER_CONTRACT)) {
    require(_pubdata.length <= BLOB_SIZE_BYTES * MAX_NUMBER_OF_BLOBS, "pubdata should fit in 2 blobs");

    bytes32[] memory blobHashes = new bytes32[](MAX_NUMBER_OF_BLOBS);

    // We allocate to the full size of MAX_NUMBER_OF_BLOBS * BLOB_SIZE_BYTES because we need to pad
    // the data on the right with 0s if it doesn't take up the full blob
    bytes memory totalBlobs = new bytes(BLOB_SIZE_BYTES * MAX_NUMBER_OF_BLOBS);

    assembly {
        // The pointer to the allocated memory above. We skip 32 bytes to avoid overwriting the length.
        let ptr := add(totalBlobs, 0x20)
        calldatacopy(ptr, _pubdata.offset, _pubdata.length)
    }

    for (uint256 i = 0; i < MAX_NUMBER_OF_BLOBS; i++) {
        uint256 start = BLOB_SIZE_BYTES * i;

        // We break if the pubdata isn't enough to cover 2 blobs. On L1 it is expected that the hash
        // will be bytes32(0) if a blob isn't going to be used.
        if (start >= _pubdata.length) {
            break;
        }

        bytes32 blobHash;
        assembly {
            // The pointer to the allocated memory above skipping the length.
            let ptr := add(totalBlobs, 0x20)
            blobHash := keccak256(add(ptr, start), BLOB_SIZE_BYTES)
        }

        blobHashes[i] = blobHash;
    }

    SystemContractHelper.toL1(true, bytes32(uint256(SystemLogKey.BLOB_ONE_HASH_KEY)), blobHashes[0]);
    SystemContractHelper.toL1(true, bytes32(uint256(SystemLogKey.BLOB_TWO_HASH_KEY)), blobHashes[1]);
}
```
This function is used to chunk pubdata into pieces that can fit into blobs. The issue is that it queries two constant variables and then multiplies them, when the product could simply be hardcoded into the codebase, i.e. this line:

```solidity
require(_pubdata.length <= BLOB_SIZE_BYTES * MAX_NUMBER_OF_BLOBS, "pubdata should fit in 2 blobs");
```

Both `BLOB_SIZE_BYTES` and `MAX_NUMBER_OF_BLOBS` have fixed values of `126976` and `2` respectively, as obtained from https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/system-contracts/contracts/Constants.sol

This means that the check could be changed to:

```solidity
require(_pubdata.length <= 253_952, "pubdata should fit in 2 blobs");
```
Consider making the change, as long as the values remain constants.
`TransactionHelper::isEthToken()` is not used anywhere so it should be removed

```solidity
function isEthToken(uint256 _addr) internal pure returns (bool) {
    return _addr == uint256(uint160(address(BASE_TOKEN_SYSTEM_CONTRACT))) || _addr == 0;
}
```
This function is used to know whether the passed address is the ether address; it also returns true if the passed address is `0x0` (for convenience), as stated in its docs. The issue is that, using this search command, https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+isEthToken&type=code, we can see that this function is not used anywhere, so it is just an unnecessary addition to the bytecode that gets deployed for the contract, costing more gas while deploying it.

Remove the `isEthToken()` function from `TransactionHelper.sol` since it is not used anywhere.
Count down in `for` statements

Counting down is more gas efficient than counting up, because we get a gas refund in the last iteration when a non-zero variable is set to zero.
For example take a look at https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/upgrades/BaseZkSyncUpgrade.sol#L226
```solidity
for (uint256 i = 0; i < _factoryDeps.length; ++i) {
```

```diff
- for (uint256 i = 0; i < _factoryDeps.length; ++i) {
+ // count down; read elements as _factoryDeps[i - 1]. Note that `i >= 0`
+ // cannot be used as the loop condition for an unsigned counter, since
+ // decrementing past zero would revert on underflow
+ for (uint256 i = _factoryDeps.length; i > 0; --i) {
```
Apply the diff changes
This report explores using bitwise operators (`^` - XOR and `&` - AND) as an alternative to standard equality checks (`==`) in Solidity contracts.
Optimizing Conditions:

Checking Equality:

- `a == b` checks that all bits in `a` and `b` are the same.
- A bitwise alternative is `(a ^ b) == 0`: since XOR returns 0 for equal bits, the whole expression is zero only when the values are equal. (Note that Solidity's logical NOT `!` only applies to booleans, so the often-quoted `!(a ^ b)` form has to be written as an explicit comparison against zero.)

Checking for Any of Several Matches:

- `a == b || a == c` checks whether `a` equals either candidate value.
- A bitwise alternative is `((a ^ b) * (a ^ c)) == 0`: each XOR is zero only on a match, so the product is zero if and only if at least one comparison matches (safe for small values such as enum members, where the multiplication cannot overflow).

Both approaches achieve the same outcome, but bitwise operations can be slightly more gas-efficient than standard comparisons.
An instance in scope:
```solidity
return state == OperationState.Waiting || state == OperationState.Ready;
```
Checks if the state is either Waiting or Ready.
Optimized code could look like this: `return (uint256(state) ^ uint256(OperationState.Waiting)) * (uint256(state) ^ uint256(OperationState.Ready)) == 0;` (the enum values must be cast to an integer type before bitwise operators can be applied). It uses XOR to detect any difference between the state and each desired state, and the multiplication yields zero only when at least one of the XORs is zero.
Consider applying the fix; the downside is that while bitwise operators can be efficient, code readability is slightly affected, so use them strategically.
Using this search command, https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+for+%28+NOT+language%3AMarkdown+NOT+language%3ATypeScript+NOT+language%3AYul+NOT+language%3ARust&type=code, we can see the instances of for loops, but most of them are not gas-optimal. For example, this is what a gas-optimal for loop looks like when you combine the two tricks below:
```solidity
for (uint256 i; i < limit; ) {
    // inside the loop
    unchecked {
        ++i;
    }
}
```
The main differences from a conventional for loop are that `i++` becomes `++i` and that the increment is unchecked, because the `limit` variable ensures it won't overflow.
Consider making all loops with the above format
Take a look at all the read functions in https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/common/libraries/UnsafeBytes.sol#L19-L45.

One can see that the `UnsafeBytes` library is utilized across the `L1ERC20Bridge`, `L1WethBridge`, `Executor`, and `Mailbox` contracts to parse significantly large payloads of data sequentially in an optimized way (this can be seen by grepping these read functions across the entire scope).
Now, each and every utilization point of the library passes a `memory` argument even though a `calldata` argument could be passed in, i.e. `L1ERC20Bridge::_parseL2WithdrawalMessage`, `L1WethBridge::_parseL2EthWithdrawalMessage`, `Executor::_processL2Logs`, and `Mailbox::_parseL2WithdrawalMessage`.
Consider updating the library to work on `calldata` arguments instead; this way the gas cost would be significantly reduced, as it would utilize `calldataload` and would not require copying the full `calldata` payload to `memory`, therefore not incurring any hidden memory-expansion gas costs.
Consider this article from RareSkills for a simple POC showing that do-while loops are cheaper than for loops: https://www.rareskills.io/post/gas-optimization

Now, as already hinted in this report, there are multiple for loops in scope, as can be seen by using this search command: https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+for+%28+NOT+language%3AMarkdown+NOT+language%3ATypeScript+NOT+language%3AYul+NOT+language%3ARust&type=code
Consider using do-while loops instead of for loops where possible, as in the sketch below.
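A minimal sketch of the pattern; since a do-while always runs its body once, an explicit length guard is needed for empty input:

```solidity
contract DoWhileSum {
    // do-while skips the first condition check that a for loop would perform
    function sumArr(uint256[] calldata arr) public pure returns (uint256 sum) {
        uint256 len = arr.length;
        if (len == 0) return 0; // guard: the body below would otherwise run once
        uint256 i;
        do {
            sum += arr[i];
            unchecked {
                ++i;
            }
        } while (i < len);
    }
}
```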
In Solidity, short-circuiting behavior in boolean expressions with ||
(logical OR) and &&
(logical AND) operators optimizes gas usage by evaluating expressions conditionally. For ||
, if the first condition evaluates to true, the second is not evaluated, saving gas in scenarios where the first condition is likely to succeed. Conversely, for &&
, if the first condition evaluates to false, the second condition is skipped, which is useful for saving gas when the first condition is likely to fail. Placing the less expensive or more likely-to-succeed condition first leverages short-circuiting for efficient gas consumption.
Utilizing short-circuiting effectively requires strategic placement of conditions within require
statements or similar constructs. For instance, require(msg.sender == owner || msg.sender == manager)
will not evaluate the second condition if the first is true, making it gas-efficient for frequent success scenarios. Similarly, in expressions like require(msg.sender == owner && msg.sender == manager)
, failing the first condition skips the second, optimizing gas for scenarios where reverts are common. Thus, arranging conditions based on their gas costs and likelihood of passing can significantly reduce overall gas consumption.
This needs more logic to pinpoint instances, but we can use these two search commands to find out where we can apply short-circuiting:
- `||` - https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+%7C%7C++NOT+language%3ATypeScript+NOT+language%3AMarkdown+NOT+language%3AYul+NOT+language%3AJSON+NOT+language%3ARust&type=code
- `&&` - https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+%26%26+NOT+language%3ATypeScript+NOT+language%3AMarkdown+NOT+language%3AYul+NOT+language%3AJSON+NOT+language%3ARust&type=code

As hinted, this needs careful logic, but consider short-circuiting while not overcomplicating the readability of the code.
```solidity
uint256[] memory proofPublicInput = new uint256[](committedBatchesLength);
```

```solidity
result = new Facet[](facetsLen);
```
Other instances could be pinpointed using https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync+++%3D+new+NOT+language%3ATypeScript+NOT+language%3AMarkdown+NOT+language%3AYul+NOT+language%3AJSON+NOT+language%3ARust&type=code, taking the instances where a new array is being created even though an existing one could be shortened instead.
If possible, shorten the arrays instead of creating new ones.
Use assembly to reuse memory space when making more than one external call. Making external calls is an operation that causes the Solidity compiler to expand memory: the compiler has to encode the function selector of the function it wishes to call on the external contract, alongside its arguments, in memory. Since Solidity does not clear or reuse memory, it has to store this data at the next free memory pointer, which expands memory further.

With inline assembly, we can use the scratch space and the free-memory-pointer offset to store this data (as above) if the function arguments do not take up more than 96 bytes in memory. Better still, if we are making more than one external call, we can reuse the same memory space as the first call to store the new arguments, without expanding memory unnecessarily. Solidity in this scenario would expand memory by as much as the returned data length, because the returned data is stored in memory (in most cases). If the return data is less than 96 bytes, we can use the scratch space to store it and thereby prevent expanding memory.
Take a look at this gist; by deploying it, we can see that we save approximately 2,000 gas by using the scratch space to store the function selector and its arguments, and by reusing the same memory space for the second call while storing the returned data in the zero slot, thus not expanding memory.
TLDR: If the arguments of the external function we wish to call take up more than 64 bytes and we are making only one external call, writing it in assembly wouldn't save any significant gas. However, if we are making more than one call, we can still save gas by reusing the same memory slot for the two calls using inline assembly.
NB: Always remember to update the free memory pointer if the offset it points to is already used, to avoid solidity overriding the data stored there or using the value stored there in an unexpected way.
Instances of this in scope are mostly places where more than one external call happens in a sequence; take this code snippet for example: https://github.com/code-423n4/2024-03-zksync/blob/4f0ba34f34a864c354c7e8c47643ed8f4a250e13/code/contracts/ethereum/contracts/bridge/L1SharedBridge.sol#L171-L179
```solidity
/// @dev Transfers tokens from the depositor address to the smart contract address.
/// @return The difference between the contract balance before and after the transferring of funds.
function _depositFunds(address _from, IERC20 _token, uint256 _amount) internal returns (uint256) {
    uint256 balanceBefore = _token.balanceOf(address(this));
    _token.safeTransferFrom(_from, address(this), _amount);
    uint256 balanceAfter = _token.balanceOf(address(this));

    return balanceAfter - balanceBefore;
}
```
Note to avoid overwriting the zero slot (0x60 memory offset) if you have undefined dynamic memory values within that call stack. An alternative is to explicitly define dynamic memory values or if used, to set the slot back to 0x00 before exiting the assembly block.
Assembly checks for `msg.sender` are more gas efficient

It's generally more gas-efficient to use assembly to check `msg.sender` than to use the `==` operator.
Use this search command to find instances https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync++%3D%3D+msg.sender+only+NOT+language%3AYul+NOT+language%3AMarkdown+NOT+language%3ATypeScript&type=code
By using assembly to perform the msg.sender check, we can make our code more gas-efficient and reduce the overall cost of the contract, keep in mind that assembly can be more difficult to read and maintain than Solidity code, so it should be used with caution and only when necessary.
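A minimal sketch of such a check (the name is ours):

```solidity
// reverts when the caller is not `_expected`, comparing via `caller()`
// instead of the Solidity-level `msg.sender ==` comparison
function _requireSender(address _expected) internal view {
    assembly {
        if iszero(eq(caller(), _expected)) {
            revert(0, 0)
        }
    }
}
```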
Using assembly for small `keccak256` hashes saves gas. There are multiple instances of this, but we can use this search command to grep most of them:
```solidity
canonicalTxHash = keccak256(transactionEncoding);
```
Consider using assembly for the instances that include small keccak256
hashes
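A minimal sketch, assuming the value being hashed is a `bytes memory` buffer:

```solidity
function _hash(bytes memory transactionEncoding) internal pure returns (bytes32 canonicalTxHash) {
    assembly {
        // skip the 32-byte length prefix and hash `mload(...)` bytes of content
        canonicalTxHash := keccak256(add(transactionEncoding, 0x20), mload(transactionEncoding))
    }
}
```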
Solady’s SafeCast implementation is more gas efficient than OZ's version. We recommend you switch to that library instead.
Using this search command, https://github.com/search?q=repo%3Acode-423n4%2F2024-03-zksync%20openzeppelin%2Fcontracts%2Futils%2Fmath%2FSafeCast.sol&type=code, we can see that OZ's implementation of SafeCast is being used, e.g. in `Diamond.sol`.
Consider using Solady's SafeCast
instead.
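A sketch of the swap, assuming Solady is added as a dependency (Solady names its library `SafeCastLib`, so call sites would need updating as well):

```diff
- import {SafeCast} from "@openzeppelin/contracts/utils/math/SafeCast.sol";
+ import {SafeCastLib} from "solady/src/utils/SafeCastLib.sol";
```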
#0 - c4-sponsor
2024-04-18T02:16:07Z
saxenism (sponsor) acknowledged
#1 - saxenism
2024-04-18T02:16:08Z
Ok Report.
#2 - c4-judge
2024-04-29T13:47:33Z
alex-ppg marked the issue as grade-c
#3 - c4-judge
2024-04-29T13:47:43Z
alex-ppg marked the issue as grade-b