Platform: Code4rena
Start Date: 04/03/2024
Pot Size: $140,000 USDC
Total HM: 19
Participants: 69
Period: 21 days
Judge: 0xean
Total Solo HM: 4
Id: 343
League: ETH
Rank: 18/69
Findings: 3
Award: $1,185.32
🌟 Selected for report: 0
🚀 Solo Findings: 0
🌟 Selected for report: MrPotatoMagic
Also found by: 0x11singh99, DadeKuma, Fassi_Security, JCK, Kalyan-Singh, Masamune, Myd, Pechenite, Sathish9098, Shield, albahaca, alexfilippov314, cheatc0d3, clara, foxb868, grearlake, hihen, imare, joaovwfreire, josephdara, ladboy233, monrel, n1punp, oualidpro, pa6kuda, pfapostol, rjs, slvDev, sxima, t0x1c, t4sk, zabihullahazadzoi
573.8716 USDC - $573.87
In Ethereum, an account's state comprises four fields: nonce, balance, storageHash, and codeHash, which are RLP-encoded for storage efficiency. When a contract such as LibTrieProof decodes this information, it expects an array of exactly four elements. The contract currently assumes the decoding yields the correct format without performing an explicit check.

If rlpAccount is malformed, contains additional data, or has been tampered with, this assumption can lead to significant errors. Accessing an index that does not exist (because the array has fewer elements) causes a runtime revert, resulting in a denial of service for any functionality relying on this logic.
FILE: 2024-03-taiko/packages/protocol/contracts/libs/LibTrieProof.sol

```solidity
52: RLPReader.RLPItem[] memory accountState = RLPReader.readList(rlpAccount);
```
Implement an immediate check after decoding the RLP-encoded data to ensure it contains exactly four elements, reflecting the expected Ethereum account structure.
```solidity
if (accountState.length != 4) {
    revert("Invalid RLP account state length");
}
```
The hardcoding of the quorum fraction to a value of '4' (potentially representing 4%) lacks flexibility and adaptability.
A fixed quorum fraction can lead to governance paralysis (if set too high relative to participation) or weakened security (if set too low). The inability to adjust this parameter in response to changes in token distribution, community engagement, or external threats could significantly impair the governance system's efficacy and responsiveness.
The inability to adjust the quorum fraction could result in governance gridlock or reduce security against potential governance attacks.
The set quorum might no longer reflect the current state or sentiment of the community, leading to decreased participation and engagement.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/gov/TaikoGovernor.sol

```solidity
43: __GovernorVotesQuorumFraction_init(4);
```
Adding functionality to update the quorum fraction:
```solidity
// Sketch: OpenZeppelin's GovernorVotesQuorumFractionUpgradeable already
// provides updateQuorumNumerator() (gated by onlyGovernance) and emits
// QuorumNumeratorUpdated. If a dedicated wrapper is preferred, it could
// look like the following; the access control shown is illustrative.
function updateQuorumFraction(uint256 newQuorumFraction) external onlyGovernance {
    _updateQuorumNumerator(newQuorumFraction); // OZ internal updater, emits QuorumNumeratorUpdated
}
```
The L1_INVALID_ETH_DEPOSIT error is employed in different contexts within the LibDepositing library: it is used both for validating deposit conditions (such as incorrect deposit amounts) and for encoding deposit data. This reuse leads to ambiguity since the same error does not indicate the specific condition that failed. Consequently, developers or users encountering this error would have difficulty determining whether the issue lies with the deposit's size, the deposit's encoding, or another deposit-related condition, hindering effective debugging and resolution.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibDepositing.sol

```solidity
if (!canDepositEthToL2(_state, _config, msg.value)) {
    revert L1_INVALID_ETH_DEPOSIT();
}

if (_amount > type(uint96).max) revert L1_INVALID_ETH_DEPOSIT();
```
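A minimal sketch of how the two failure modes could be distinguished (the error names below are illustrative, not from the codebase):

```solidity
// Hypothetical, more specific errors so each revert pinpoints its cause.
error L1_ETH_DEPOSIT_NOT_ALLOWED();      // canDepositEthToL2 returned false
error L1_ETH_DEPOSIT_AMOUNT_TOO_LARGE(); // amount exceeds the uint96 range

if (!canDepositEthToL2(_state, _config, msg.value)) {
    revert L1_ETH_DEPOSIT_NOT_ALLOWED();
}

if (_amount > type(uint96).max) revert L1_ETH_DEPOSIT_AMOUNT_TOO_LARGE();
```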
The LibDepositing library, specifically within the depositEtherToL2 function, sends Ether to a bridge address obtained from an external address resolver. If the bridge contract is malicious or has vulnerabilities, it could potentially exploit the interaction to reenter the calling contract, leading to unexpected behaviors such as state corruption, additional unauthorized actions, or Ether theft. This risk, while reduced due to the internal nature of the function and the sequence of actions, is not entirely negated and could lead to significant security issues.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibDepositing.sol

```solidity
_resolver.resolve("bridge", false).sendEther(msg.value);
```
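One hedged mitigation sketch, following checks-effects-interactions and assuming a reentrancy guard (e.g. OpenZeppelin's ReentrancyGuardUpgradeable) is an acceptable dependency; the function shape below is illustrative, not the project's actual signature:

```solidity
// Sketch: resolve the bridge once, finish all state updates first, and make
// the external call last, behind a guard at the external entry point.
function depositEtherToL2(/* ... */) internal {
    address bridge = _resolver.resolve("bridge", false);

    // ... all state updates (queue append, event emission) happen here,
    // before any external interaction ...

    // External call last, so a reentrant call sees fully-updated state.
    bridge.sendEther(msg.value);
}
```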
The processDeposits function in the LibDepositing library behaves differently when the number of pending deposits (numPending) is less than the configured minimum threshold (_config.ethDepositMinCountPerBlock). In that scenario, it merely initializes an empty array (`deposits_ = new TaikoData.EthDeposit[](0);`) and returns without processing any deposits.
- **Lack of clarity**: The decision to not process deposits when below a certain count lacks explicit rationale or documentation, potentially confusing maintainers or users regarding its intent.
- **Dynamic configuration changes**: If `_config.ethDepositMinCountPerBlock` is subject to change (e.g., through governance actions or dynamic adjustments), the system may exhibit unpredictable behavior, such as sudden halts in deposit processing without clear notice or understanding from users.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibDepositing.sol

```solidity
78: if (numPending < _config.ethDepositMinCountPerBlock) {
```
This logic could reject valid tier upgrades or accept invalid ones due to not properly contextualizing the tier levels, especially when tier configurations change or when differentiating between initial submission and updates.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibProving.sol

```solidity
// The new proof must meet or exceed the minimum tier required by the
// block or the previous proof; it cannot be on a lower tier.
if (_proof.tier == 0 || _proof.tier < _meta.minTier || _proof.tier < ts.tier) {
    revert L1_INVALID_TIER();
}
```
The code does not protect against rapid toggling or unclear status communication, which can disrupt the proving process, lead to unintended pauses, or confuse participants about the current operational status.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibProving.sol

```solidity
if (_state.slotB.provingPaused == _pause) revert L1_INVALID_PAUSE_STATUS();
_state.slotB.provingPaused = _pause;
```
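A sketch of one possible mitigation, enforcing a minimum delay between toggles and emitting an explicit status event; `PAUSE_COOLDOWN`, `lastPauseChange`, and the error/event names are illustrative additions, not existing fields:

```solidity
// Hypothetical cooldown between pause state changes.
uint64 constant PAUSE_COOLDOWN = 1 hours;
uint64 lastPauseChange;

if (_state.slotB.provingPaused == _pause) revert L1_INVALID_PAUSE_STATUS();
if (block.timestamp < lastPauseChange + PAUSE_COOLDOWN) revert L1_PAUSE_TOO_SOON();

lastPauseChange = uint64(block.timestamp);
_state.slotB.provingPaused = _pause;
emit ProvingPaused(_pause); // clear signal for off-chain observers
```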
The contract assumes that it will be deployed either at the genesis block or the subsequent block, with no flexibility for later deployments. This rigid setup could limit the contract's deployment and testing scenarios, making it less adaptable to evolving blockchain environments.
FILE: 2024-03-taiko/packages/protocol/contracts/L2/TaikoL2.sol

```solidity
if (block.number == 0) {
    // This is the case in real L2 genesis
} else if (block.number == 1) {
    // This is the case in tests
    uint256 parentHeight = block.number - 1;
    l2Hashes[parentHeight] = blockhash(parentHeight);
} else {
    revert L2_TOO_LATE();
}
```
Allow contract initialization to handle a broader range of block numbers and scenarios while ensuring integrity and security.
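A sketch of a more flexible check that accepts any early deployment height; `MAX_INIT_HEIGHT` is an illustrative parameter, not an existing constant:

```solidity
// Hypothetical: allow initialization at any height up to a configured bound,
// still recording the parent hash when one exists.
uint256 constant MAX_INIT_HEIGHT = 16;

if (block.number > 0 && block.number <= MAX_INIT_HEIGHT) {
    uint256 parentHeight = block.number - 1;
    l2Hashes[parentHeight] = blockhash(parentHeight);
} else if (block.number != 0) {
    revert L2_TOO_LATE();
}
```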
This enforces a strict check on the base fee, which must match exactly with the calculated value. This could cause issues, especially in testing environments or in cases where slight discrepancies might arise due to block-to-block variances or initialization timing.
FILE: 2024-03-taiko/packages/protocol/contracts/L2/TaikoL2.sol

```solidity
141: if (!skipFeeCheck() && block.basefee != basefee) {
    revert L2_BASEFEE_MISMATCH();
}
```
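If a small deviation is acceptable, the exact-match check could be relaxed to a tolerance; `BASEFEE_TOLERANCE_BPS` is an illustrative parameter (e.g. 100 = 1%), not an existing constant:

```solidity
// Sketch: tolerate a bounded relative deviation instead of exact equality.
uint256 constant BASEFEE_TOLERANCE_BPS = 100; // hypothetical 1% tolerance

uint256 diff =
    block.basefee > basefee ? block.basefee - basefee : basefee - block.basefee;

if (!skipFeeCheck() && diff * 10_000 > basefee * BASEFEE_TOLERANCE_BPS) {
    revert L2_BASEFEE_MISMATCH();
}
```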
This check ensures the integrity of public inputs between consecutive L2 blocks. A mismatch could indicate data corruption or synchronization issues between L1 and L2. However, reliance on correct previous hash values without proper error handling or recovery mechanisms can stall the system.
FILE: 2024-03-taiko/packages/protocol/contracts/L2/TaikoL2.sol

```solidity
if (publicInputHash != publicInputHashOld) {
    revert L2_PUBLIC_INPUT_HASH_MISMATCH();
}
```
The validation checks are crucial for maintaining data integrity and security, especially for cross-layer operations. However, the condition (block.number != 1 && _parentGasUsed == 0) seems arbitrary and could block legitimate anchoring in certain edge cases or during initial testing and setup phases.
FILE: 2024-03-taiko/packages/protocol/contracts/L2/TaikoL2.sol

```solidity
if (
    _l1BlockHash == 0 || _l1StateRoot == 0 || _l1BlockId == 0
        || (block.number != 1 && _parentGasUsed == 0)
) {
    revert L2_INVALID_PARAM();
}
```
The contract uses a sequential approach to assign message IDs, which inherently limits each message to a unique instance without considering batch processing or parallelism. This could limit scalability and introduce delays, especially when the network is congested or when operating at high throughput.
FILE: 2024-03-taiko/packages/protocol/contracts/bridge/Bridge.sol

```solidity
// Configure message details and send signal to indicate message sending.
message_.id = nextMessageId++;
```
Implement a more robust ID management system that can handle concurrent messages more efficiently, such as using a combination of timestamp and a counter to ensure uniqueness and support higher throughput.
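A sketch of one such scheme, deriving IDs from the chain ID, sender, and a per-sender counter so independent senders never contend on a single global slot; the field and function names are illustrative:

```solidity
// Hypothetical per-sender nonce; uniqueness comes from hashing
// (chainId, sender, nonce), so no global counter is shared.
mapping(address => uint64) private _senderNonce;

function _nextMessageId(address sender) private returns (bytes32) {
    return keccak256(
        abi.encodePacked(block.chainid, sender, _senderNonce[sender]++)
    );
}
```

Note this changes the ID type from a sequential integer to a hash, so any consumer that relies on ordering by ID would need adjustment.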
The contract allows for banning addresses, but the logic seems arbitrary without clear criteria for what constitutes a ban-worthy offense. This could lead to misuse or misunderstandings, affecting legitimate users' operations or leaving malicious actors unchecked due to unclear policies.
FILE: 2024-03-taiko/packages/protocol/contracts/bridge/Bridge.sol

```solidity
if (addressBanned[_addr] == _ban) revert B_INVALID_STATUS();
addressBanned[_addr] = _ban;
emit AddressBanned(_addr, _ban);
```
The contract has rigid conditions for recalling or retrying messages, based solely on their status. This could prevent legitimate retries or recalls due to transient network issues or minor mistakes, leading to unnecessary message failures.
FILE: 2024-03-taiko/packages/protocol/contracts/bridge/Bridge.sol

```solidity
if (messageStatus[msgHash] != Status.RETRIABLE) {
    revert B_NON_RETRIABLE();
}
```
This logic prevents an SGX instance from being re-registered once it has been used, but does not account for legitimate updates or re-attestations of the same SGX instance, potentially limiting flexibility in maintaining SGX enclave authenticity over time.
FILE: 2024-03-taiko/packages/protocol/contracts/verifiers/SgxVerifier.sol

```solidity
if (addressRegistered[_instances[i]]) revert SGX_ALREADY_ATTESTED();
addressRegistered[_instances[i]] = true;
```
The strict check on the proof data length can limit the flexibility and future adaptability of proof structures. If proof requirements change or additional information needs to be included, this fixed length check could prevent valid proofs from being processed.
FILE: 2024-03-taiko/packages/protocol/contracts/verifiers/SgxVerifier.sol

```solidity
152: if (_proof.data.length != 89) revert SGX_INVALID_PROOF();
```
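A sketch of a more forward-compatible check, rejecting proofs that are too short while leaving room for future fields; `MIN_PROOF_LENGTH` is an illustrative constant:

```solidity
// Hypothetical: enforce only a minimum length, so extended proof formats
// with trailing fields remain valid.
uint256 constant MIN_PROOF_LENGTH = 89;

if (_proof.data.length < MIN_PROOF_LENGTH) revert SGX_INVALID_PROOF();
```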
The `MAX_TOKEN_PER_TXN` constant is hardcoded to a low value. Users who wish to bridge more than 10 tokens must perform multiple transactions. For users with large collections, this could lead to increased transaction costs (gas fees) and a more time-consuming process.
Implementing a governance mechanism to adjust the maximum tokens per transaction could provide flexibility while ensuring that changes are made responsibly and with community consensus.
FILE: 2024-03-taiko/packages/protocol/contracts/tokenvault/BaseNFTVault.sol

```solidity
/// @notice Maximum number of tokens that can be transferred per transaction.
uint256 public constant MAX_TOKEN_PER_TXN = 10;
```
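A sketch of a governance-settable cap instead of a constant; `onlyOwner` and the event name stand in for whatever governance control the protocol actually uses:

```solidity
// Hypothetical mutable cap, adjustable through governance.
uint256 public maxTokenPerTxn = 10;

event MaxTokenPerTxnUpdated(uint256 newMax); // illustrative event

function setMaxTokenPerTxn(uint256 _max) external onlyOwner {
    require(_max > 0, "max must be positive");
    maxTokenPerTxn = _max;
    emit MaxTokenPerTxnUpdated(_max);
}
```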
Casting a `uint256` value down to a narrower type is unsafe downcasting: `uint256` can represent much larger numbers than the target type. If the value exceeds the maximum the narrower type can represent (here `type(uint96).max` for the deposit amount), the cast silently truncates, resulting in data loss and incorrect behavior.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibDepositing.sol

```solidity
90: amount: uint96(data),
```
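A sketch of a safer cast using OpenZeppelin's SafeCast library, which reverts on overflow instead of truncating (assuming the library is an acceptable dependency here):

```solidity
import { SafeCast } from "@openzeppelin/contracts/utils/math/SafeCast.sol";

// Reverts if `data` does not fit in 96 bits, instead of silently
// truncating as a bare uint96(data) cast would.
uint96 amount = SafeCast.toUint96(data);
```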
Addresses upcast and compared to values larger than a `uint160` may result in collisions. If the value is being compared to an input value in order to reject it, rather than being converted to an address, the check will pass when the value is larger than `type(uint160).max`, even if, when cast, it matches the gating address.

FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibDepositing.sol

```solidity
151: return (uint256(uint160(_addr)) << 96) | _amount;
```
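A sketch of an explicit range check before the cast, so oversized inputs are rejected rather than silently colliding; the error name is illustrative:

```solidity
// Hypothetical guard: refuse values that do not fit in 160 bits.
error INVALID_ADDRESS_VALUE();

if (_value > type(uint160).max) revert INVALID_ADDRESS_VALUE();
address addr = address(uint160(_value));
```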
A `receive()`/payable `fallback()` function that does not authorize requests: having no access control on the function (e.g. `require(msg.sender == address(weth))`) means that someone may send Ether to the contract and have no way to get anything back out, which is a loss of funds. If the concern is having to spend a small amount of gas to check the sender against an immutable address, the code should at least have a function to rescue mistakenly-sent Ether.
FILE: 2024-03-taiko/packages/protocol/contracts/bridge/Bridge.sol

```solidity
70: receive() external payable { }
```
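A sketch combining both mitigations mentioned above; `weth`, the error name, and `onlyOwner` are illustrative, since the Bridge contract does not define them as shown:

```solidity
// Hypothetical: only accept plain Ether from a known source...
receive() external payable {
    if (msg.sender != address(weth)) revert B_UNAUTHORIZED_DEPOSIT();
}

// ...and/or provide a rescue path for mistakenly-sent Ether.
function rescueEther(address payable _to, uint256 _amount) external onlyOwner {
    _to.transfer(_amount);
}
```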
Code should follow the best-practice of check-effects-interaction, where state variables are updated before any external calls are made. Doing so prevents a large class of reentrancy bugs.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibProving.sol

```solidity
196: if (returnLivenessBond) {
    tko.transfer(blk.assignedProver, blk.livenessBond);
    blk.livenessBond = 0;
}

242: tko.transferFrom(msg.sender, address(this), tier.contestBond);

// We retain the contest bond within the transition, just in
// case this configuration is altered to a different value
// before the contest is resolved.
//
// It's worth noting that the previous value of ts.contestBond
// doesn't have any significance.
ts.contestBond = tier.contestBond;
```
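A sketch of the first instance reordered to follow checks-effects-interactions, zeroing the bond before the external token transfer:

```solidity
if (returnLivenessBond) {
    uint256 bond = blk.livenessBond;
    blk.livenessBond = 0;                    // effect: clear state first
    tko.transfer(blk.assignedProver, bond);  // interaction: external call last
}
```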
A copy-paste error or a typo may end up bricking protocol functionality, or sending tokens to an address with no known private key. Consider implementing a two-step procedure for updating protocol addresses, where the recipient is set as pending, and must 'accept' the assignment by making an affirmative call. A straightforward way of doing this would be to have the target contracts implement EIP-165, and to have the 'set' functions ensure that the recipient is of the right interface type.
FILE: 2024-03-taiko/packages/protocol/contracts/tokenvault/BridgedERC20.sol

```solidity
80: function setSnapshoter(address _snapshooter) external onlyOwner {
    snapshooter = _snapshooter;
}
```
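A sketch of the two-step variant; `pendingSnapshooter` is an illustrative addition, not an existing field:

```solidity
// Hypothetical pending slot for the two-step handover.
address public pendingSnapshooter;

function setSnapshoter(address _snapshooter) external onlyOwner {
    pendingSnapshooter = _snapshooter; // step 1: propose
}

function acceptSnapshoter() external {
    require(msg.sender == pendingSnapshooter, "not pending snapshooter");
    snapshooter = msg.sender;          // step 2: recipient confirms
    pendingSnapshooter = address(0);
}
```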
Consider limiting the number of iterations in for-loops that make external calls
FILE: 2024-03-taiko/packages/protocol/contracts/tokenvault/ERC1155Vault.sol

```solidity
269: for (uint256 i; i < _op.tokenIds.length; ++i) {
    IERC1155(_op.token).safeTransferFrom({
        from: msg.sender,
        to: address(this),
        id: _op.tokenIds[i],
        amount: _op.amounts[i],
        data: ""
    });
}
```
FILE: 2024-03-taiko/packages/protocol/contracts/tokenvault/ERC721Vault.sol

```solidity
170: for (uint256 i; i < _tokenIds.length; ++i) {
    IERC721(token_).safeTransferFrom(address(this), _to, _tokenIds[i]);
}

210: for (uint256 i; i < _op.tokenIds.length; ++i) {
    t.safeTransferFrom(_user, address(this), _op.tokenIds[i]);
}
```
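A sketch of a bounded batch; `MAX_BATCH` and the error name are illustrative additions:

```solidity
// Hypothetical cap on iterations that perform external calls.
uint256 constant MAX_BATCH = 50;

error BATCH_TOO_LARGE();

if (_tokenIds.length > MAX_BATCH) revert BATCH_TOO_LARGE();
for (uint256 i; i < _tokenIds.length; ++i) {
    IERC721(token_).safeTransferFrom(address(this), _to, _tokenIds[i]);
}
```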
#0 - c4-judge
2024-04-10T10:46:37Z
0xean marked the issue as grade-a
#1 - c4-sponsor
2024-04-11T10:06:24Z
dantaik (sponsor) acknowledged
🌟 Selected for report: DadeKuma
Also found by: 0x11singh99, 0xAnah, 0xhacksmithh, Auditor2947, IllIllI, K42, MrPotatoMagic, Pechenite, SAQ, SM3_SS, SY_S, Sathish9098, albahaca, caglankaan, cheatc0d3, clara, dharma09, hihen, hunter_w3b, oualidpro, pavankv, pfapostol, rjs, slvDev, sxima, unique, zabihullahazadzoi
187.8678 USDC - $187.87
Number | Issues | Gas Saved |
---|---|---|
G-1 | Optimize State Variables to Fit Fewer Storage Slots | 8000 |
G-2 | State variables only set in the constructor should be declared immutable | 2000 |
G-3 | Optimizing gas usage by caching state variables in local memory variables | 2200 |
G-4 | Consolidate Multiple Address/ID Mappings into Single Struct-Based Mapping | 2000 |
G-5 | Using storage instead of memory for state variables saves gas | 4000 |
G-6 | Cache the state variables outside the loop | 120 |
G-7 | Don't calculate array lengths multiple times | 20-40 |
G-8 | Using calldata instead of memory for read-only arguments in external functions saves gas | 600 |
G-9 | Replace Function Calls with Constants | 4000 |
G-10 | Remove nonReentrant modifier from admin only functions to save gas | 5,000 - 20,000 |
G-11 | Use assembly to validate msg.sender | 120 |
G-12 | Don't cache global variable msg.sender | 40 |
G-13 | Invert if-else statements that have a negation | - |
G-14 | Assigning state variables directly with named struct constructors wastes gas | - |
G-15 | Consider using alternatives to OpenZeppelin | - |
G-16 | Using assembly to revert with an error message | Over 300 |
G-17 | Do-While loops are cheaper than for loops | - |
G-18 | Short-circuit Booleans | 20 |
G-19 | Use assembly in place of abi.decode to extract calldata values more efficiently | - |
G-20 | Make constructors payable | 400 |
The EVM works with 32 byte words. Variables less than 32 bytes can be declared next to each other in storage and this will pack the values together into a single 32 byte storage slot (if the values combined are <= 32 bytes). If the variables packed together are retrieved together in functions we will effectively save ~2000 gas with every subsequent SLOAD for that storage slot. This is due to us incurring a Gwarmaccess (100 gas) versus a Gcoldsload (2100 gas).
`srcChainId` and `snapshooter` can be packed into the same slot: saves 2000 GAS, 1 SLOT
Other contracts in the codebase already use `uint64` for chain IDs:

```solidity
uint64 chainId;
uint64 destChainId;
```
There are numerous blockchain platforms, but the number currently active and recognized is nowhere near the range a `uint96` can represent: a `uint96` can store values up to 79,228,162,514,264,337,593,543,950,335, while the number of significant blockchain networks in use is only in the dozens or perhaps hundreds. Even with the projected growth of blockchain technology and the creation of new chains, it is highly unlikely that the total will approach the `uint96` limit in any foreseeable future.
FILE: 2024-03-taiko/packages/protocol/contracts/tokenvault/BridgedERC20.sol

```solidity
/// @dev Slot 1.
address public srcToken;
uint8 private __srcDecimals;

/// @dev Slot 2.
- uint256 public srcChainId;
+ uint96 public srcChainId;

/// @dev Slot 3.
address public snapshooter;
```
`srcToken` and `srcChainId` can be packed into the same slot: saves 4000 GAS, 2 SLOTs
FILE: 2024-03-taiko/packages/protocol/contracts/tokenvault/BridgedERC721.sol

```solidity
/// @notice Address of the source token contract.
address public srcToken;

/// @notice Source chain ID where the token originates.
- uint256 public srcChainId;
+ uint96 public srcChainId;
```

FILE: 2024-03-taiko/packages/protocol/contracts/tokenvault/BridgedERC1155.sol

```solidity
/// @notice Address of the source token contract.
address public srcToken;

/// @notice Source chain ID where the token originates.
- uint256 public srcChainId;
+ uint96 public srcChainId;
```
`_checkLocalEnclaveReport` and `owner` can be packed into the same slot: saves 2000 GAS, 1 SLOT
FILE: 2024-03-taiko/packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol

```solidity
bool private _checkLocalEnclaveReport;
+ address public owner;

mapping(bytes32 enclave => bool trusted) private _trustedUserMrEnclave;
mapping(bytes32 signer => bool trusted) private _trustedUserMrSigner;

// Quote Collateral Configuration
// Index definition:
// 0 = Quote PCKCrl
// 1 = RootCrl
mapping(uint256 idx => mapping(bytes serialNum => bool revoked)) private _serialNumIsRevoked;
// fmspc => tcbInfo
mapping(string fmspc => TCBInfoStruct.TCBInfo tcbInfo) public tcbInfo;
EnclaveIdStruct.EnclaveId public qeIdentity;

- address public owner;
```
Avoids a Gsset (20000 gas) in the constructor, and replaces the first access in each transaction (Gcoldsload - 2100 gas) and each access thereafter (Gwarmacces - 100 gas) with a PUSH32 (3 gas).
While strings are not value types, and therefore cannot be immutable/constant if not hard-coded outside of the constructor, the same behavior can be achieved by making the current contract abstract with virtual functions for the string accessors, and having a child contract override the functions with the hard-coded implementation-specific values.
FILE: 2024-03-taiko/packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol

```solidity
52: address public owner;

54: constructor(address sigVerifyLibAddr, address pemCertLibAddr) {
    sigVerifyLib = ISigVerifyLib(sigVerifyLibAddr);
    pemCertLib = PEMCertChainLib(pemCertLibAddr);
    owner = msg.sender;
}
```
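A sketch of the change, assuming `owner` is never reassigned after construction (if it were, `immutable` would not apply):

```solidity
// Declared immutable: the value is embedded in the runtime bytecode at
// deployment, so each read costs a PUSH instead of an SLOAD.
address public immutable owner;

constructor(address sigVerifyLibAddr, address pemCertLibAddr) {
    sigVerifyLib = ISigVerifyLib(sigVerifyLibAddr);
    pemCertLib = PEMCertChainLib(pemCertLibAddr);
    owner = msg.sender;
}
```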
The instances below point to the second-plus access of a state variable within a function. Caching a state variable replaces each Gwarmaccess (100 gas) with a much cheaper stack read. Other less obvious optimizations include keeping local memory caches of state-variable structs, or local caches of state-variable contracts/addresses. In most of these instances the cached value is read again, saving 100 gas, against a small possibility of a 3-gas loss when it is not.
`addressManager` can be cached: saves 100 GAS, 1 SLOAD
FILE: 2024-03-taiko/packages/protocol/contracts/common/AddressResolver.sol

```solidity
+ address addressManager_ = addressManager;

- if (addressManager == address(0)) revert RESOLVER_INVALID_MANAGER();
+ if (addressManager_ == address(0)) revert RESOLVER_INVALID_MANAGER();

- addr_ = payable(IAddressManager(addressManager).getAddress(_chainId, _name));
+ addr_ = payable(IAddressManager(addressManager_).getAddress(_chainId, _name));

if (!_allowZeroAddress && addr_ == address(0)) {
    revert RESOLVER_ZERO_ADDR(_chainId, _name);
}
```
`_state.slotA.numEthDeposits` can be cached: saves 100 GAS, 1 SLOAD
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibDepositing.sol

```solidity
// Append the deposit to the queue.
+ uint64 numEthDeposits_ = _state.slotA.numEthDeposits;
address recipient_ = _recipient == address(0) ? msg.sender : _recipient;

- uint256 slot = _state.slotA.numEthDeposits % _config.ethDepositRingBufferSize;
+ uint256 slot = numEthDeposits_ % _config.ethDepositRingBufferSize;

// range of msg.value is checked by next line.
_state.ethDeposits[slot] = _encodeEthDeposit(recipient_, msg.value);

emit EthDeposited(
    TaikoData.EthDeposit({
        recipient: recipient_,
        amount: uint96(msg.value),
-       id: _state.slotA.numEthDeposits
+       id: numEthDeposits_
    })
);
```
`blk.blockId`, `blk.metaHash`, `blk.livenessBond`, `_ts.contester` can be cached: saves 700 GAS, 7 SLOADs
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibProving.sol

```solidity
/// @dev Proves or contests a block transition.
/// @param _state Current TaikoData.State.
/// @param _config Actual TaikoData.Config.
/// @param _resolver Address resolver interface.
/// @param _meta The block's metadata.
/// @param _tran The transition data.
/// @param _proof The proof.
/// @param maxBlocksToVerify_ The number of blocks to be verified with this transaction.
function proveBlock(
    TaikoData.State storage _state,
    TaikoData.Config memory _config,
    IAddressResolver _resolver,
    TaikoData.BlockMetadata memory _meta,
    TaikoData.Transition memory _tran,
    TaikoData.TierProof memory _proof
)
    internal
    returns (uint8 maxBlocksToVerify_)
{
    // Make sure parentHash is not zero
    // To contest an existing transition, simply use any non-zero value as
    // the blockHash and stateRoot.
    if (_tran.parentHash == 0 || _tran.blockHash == 0 || _tran.stateRoot == 0) {
        revert L1_INVALID_TRANSITION();
    }

    // Check that the block has been proposed but has not yet been verified.
    TaikoData.SlotB memory b = _state.slotB;
    if (_meta.id <= b.lastVerifiedBlockId || _meta.id >= b.numBlocks) {
        revert L1_INVALID_BLOCK_ID();
    }

    uint64 slot = _meta.id % _config.blockRingBufferSize;
    TaikoData.Block storage blk = _state.blocks[slot];

    // Check the integrity of the block data. It's worth noting that in
    // theory, this check may be skipped, but it's included for added
    // caution.
+   uint64 blockId_ = blk.blockId;
+   bytes32 metaHash_ = blk.metaHash;
-   if (blk.blockId != _meta.id || blk.metaHash != keccak256(abi.encode(_meta))) {
+   if (blockId_ != _meta.id || metaHash_ != keccak256(abi.encode(_meta))) {
        revert L1_BLOCK_MISMATCH();
    }

    // Each transition is uniquely identified by the parentHash, with the
    // blockHash and stateRoot open for later updates as higher-tier proofs
    // become available. In cases where a transition with the specified
    // parentHash does not exist, the transition ID (tid) will be set to 0.
    (uint32 tid, TaikoData.TransitionState storage ts) =
        _createTransition(_state, blk, _tran, slot);

    // The new proof must meet or exceed the minimum tier required by the
    // block or the previous proof; it cannot be on a lower tier.
    if (_proof.tier == 0 || _proof.tier < _meta.minTier || _proof.tier < ts.tier) {
        revert L1_INVALID_TIER();
    }

    // Retrieve the tier configurations. If the tier is not supported, the
    // subsequent action will result in a revert.
    ITierProvider.Tier memory tier =
        ITierProvider(_resolver.resolve("tier_provider", false)).getTier(_proof.tier);

    // Check if this prover is allowed to submit a proof for this block
    _checkProverPermission(_state, blk, ts, tid, tier);

    // We must verify the proof, and any failure in proof verification will
    // result in a revert.
    //
    // It's crucial to emphasize that the proof can be assessed in two
    // potential modes: "proving mode" and "contesting mode." However, the
    // precise verification logic is defined within each tier's IVerifier
    // contract implementation. We simply specify to the verifier contract
    // which mode it should utilize - if the new tier is higher than the
    // previous tier, we employ the proving mode; otherwise, we employ the
    // contesting mode (the new tier cannot be lower than the previous tier,
    // this has been checked above).
    //
    // It's obvious that proof verification is entirely decoupled from
    // Taiko's core protocol.
    {
        address verifier = _resolver.resolve(tier.verifierName, true);

        if (verifier != address(0)) {
            bool isContesting = _proof.tier == ts.tier && tier.contestBond != 0;

            IVerifier.Context memory ctx = IVerifier.Context({
-               metaHash: blk.metaHash,
+               metaHash: metaHash_,
                blobHash: _meta.blobHash,
                // Separate msgSender to allow the prover to be any address in the future.
                prover: msg.sender,
                msgSender: msg.sender,
-               blockId: blk.blockId,
+               blockId: blockId_,
                isContesting: isContesting,
                blobUsed: _meta.blobUsed
            });

            IVerifier(verifier).verifyProof(ctx, _tran, _proof);
        } else if (tier.verifierName != TIER_OP) {
            // The verifier can be address-zero, signifying that there are no
            // proof checks for the tier. In practice, this only applies to
            // optimistic proofs.
            revert L1_MISSING_VERIFIER();
        }
    }

    bool isTopTier = tier.contestBond == 0;
    IERC20 tko = IERC20(_resolver.resolve("taiko_token", false));

    if (isTopTier) {
        // A special return value from the top tier prover can signal this
        // contract to return all liveness bond.
+       uint96 livenessBond_ = blk.livenessBond;
-       bool returnLivenessBond = blk.livenessBond > 0 && _proof.data.length == 32
-           && bytes32(_proof.data) == RETURN_LIVENESS_BOND;
+       bool returnLivenessBond = livenessBond_ > 0 && _proof.data.length == 32
+           && bytes32(_proof.data) == RETURN_LIVENESS_BOND;

        if (returnLivenessBond) {
-           tko.transfer(blk.assignedProver, blk.livenessBond);
+           tko.transfer(blk.assignedProver, livenessBond_);
            blk.livenessBond = 0;
        }
    }

    bool sameTransition = _tran.blockHash == ts.blockHash && _tran.stateRoot == ts.stateRoot;

    if (_proof.tier > ts.tier) {
        // Handles the case when an incoming tier is higher than the current transition's tier.
        // Reverts when the incoming proof tries to prove the same transition
        // (L1_ALREADY_PROVED).
        _overrideWithHigherProof(ts, _tran, _proof, tier, tko, sameTransition);

        emit TransitionProved({
-           blockId: blk.blockId,
+           blockId: blockId_,
            tran: _tran,
            prover: msg.sender,
            validityBond: tier.validityBond,
            tier: _proof.tier
        });
    } else {
        // New transition and old transition on the same tier - and if this transaction tries to
        // prove the same, it reverts
        if (sameTransition) revert L1_ALREADY_PROVED();

        if (isTopTier) {
            // The top tier prover re-proves.
            assert(tier.validityBond == 0);
            assert(ts.validityBond == 0 && ts.contestBond == 0 && ts.contester == address(0));

            ts.prover = msg.sender;
            ts.blockHash = _tran.blockHash;
            ts.stateRoot = _tran.stateRoot;

            emit TransitionProved({
-               blockId: blk.blockId,
+               blockId: blockId_,
                tran: _tran,
                prover: msg.sender,
                validityBond: 0,
                tier: _proof.tier
            });
        } else {
            // Contesting but not on the highest tier
            if (ts.contester != address(0)) revert L1_ALREADY_CONTESTED();

            // Burn the contest bond from the prover.
            tko.transferFrom(msg.sender, address(this), tier.contestBond);

            // We retain the contest bond within the transition, just in
            // case this configuration is altered to a different value
            // before the contest is resolved.
            //
            // It's worth noting that the previous value of ts.contestBond
            // doesn't have any significance.
            ts.contestBond = tier.contestBond;
            ts.contester = msg.sender;
            ts.contestations += 1;

            emit TransitionContested({
-               blockId: blk.blockId,
+               blockId: blockId_,
                tran: _tran,
                contester: msg.sender,
                contestBond: tier.contestBond,
                tier: _proof.tier
            });
        }
    }

    ts.timestamp = uint64(block.timestamp);
    return tier.maxBlocksToVerifyPerProof;
}

+ address contester_ = _ts.contester;
- if (_ts.contester != address(0)) {
+ if (contester_ != address(0)) {
    if (_sameTransition) {
        // The contested transition is proven to be valid, contestor loses the game
        reward = _ts.contestBond >> 2;
        _tko.transfer(_ts.prover, _ts.validityBond + reward);
    } else {
        // The contested transition is proven to be invalid, contestor wins the game
        reward = _ts.validityBond >> 2;
-       _tko.transfer(_ts.contester, _ts.contestBond + reward);
+       _tko.transfer(contester_, _ts.contestBond + reward);
    }
```
`ts.prover` can be cached: saves 200 GAS, 2 SLOADs
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibVerifying.sol

```solidity
// Nevertheless, it's possible for the actual prover to be the
// same individual or entity as the block's assigned prover.
// Consequently, we have chosen to grant the actual prover only
// half of the liveness bond as a reward.
+ address prover_ = ts.prover;
- if (ts.prover != blk.assignedProver) {
+ if (prover_ != blk.assignedProver) {
    bondToReturn -= blk.livenessBond >> 1;
}

IERC20 tko = IERC20(_resolver.resolve("taiko_token", false));
- tko.transfer(ts.prover, bondToReturn);
+ tko.transfer(prover_, bondToReturn);

// Note: We exclusively address the bonds linked to the
// transition used for verification. While there may exist
// other transitions for this block, we disregard them entirely.
// The bonds for these other transitions are burned either when
// the transitions are generated or proven. In such cases, both
// the provers and contesters of those transitions forfeit their bonds.

emit BlockVerified({
    blockId: blockId,
    assignedProver: blk.assignedProver,
-   prover: ts.prover,
+   prover: prover_,
    blockHash: blockHash,
    stateRoot: stateRoot,
    tier: ts.tier,
    contestations: ts.contestations
});
```
`version` can be cached: saves 100 GAS, 1 SLOAD
FILE: 2024-03-taiko/packages/protocol/contracts/L1/provers/Guardians.sol

```solidity
+ uint32 version_ = version;

unchecked {
-   _approvals[version][_hash] |= 1 << (id - 1);
+   _approvals[version_][_hash] |= 1 << (id - 1);
}

- uint256 _approval = _approvals[version][_hash];
+ uint256 _approval = _approvals[version_][_hash];
approved_ = isApproved(_approval);
emit Approved(_operationId, _approval, approved_);
```
`gasExcess` and `lastSyncedBlock` can be cached: saves 300 GAS, 3 SLOADs
FILE: 2024-03-taiko/packages/protocol/contracts/L2/TaikoL2.sol

```solidity
{
    // gasExcess being 0 indicate the dynamic 1559 base fee is disabled.
+   uint64 gasExcess_ = gasExcess;
-   if (gasExcess > 0) {
+   if (gasExcess_ > 0) {
        // We always add the gas used by parent block to the gas excess
        // value as this has already happened
-       uint256 excess = uint256(gasExcess) + _parentGasUsed;
+       uint256 excess = uint256(gasExcess_) + _parentGasUsed;

        // Calculate how much more gas to issue to offset gas excess.
        // after each L1 block time, config.gasTarget more gas is issued,
        // the gas excess will be reduced accordingly.
        // Note that when lastSyncedBlock is zero, we skip this step
        // because that means this is the first time calculating the basefee
        // and the difference between the L1 height would be extremely big,
        // reverting the initial gas excess value back to 0.
        uint256 numL1Blocks;
+       uint64 lastSyncedBlock_ = lastSyncedBlock;
-       if (lastSyncedBlock > 0 && _l1BlockId > lastSyncedBlock) {
+       if (lastSyncedBlock_ > 0 && _l1BlockId > lastSyncedBlock_) {
-           numL1Blocks = _l1BlockId - lastSyncedBlock;
+           numL1Blocks = _l1BlockId - lastSyncedBlock_;
        }
```
migratingAddress can be cached: Saves 300 GAS, 3 SLODs
FILE: 2024-03-taiko/packages/protocol/contracts/tokenvault/BridgedERC20Base.sol

+       address migratingAddress_ = migratingAddress;
-       if (msg.sender == migratingAddress) {
+       if (msg.sender == migratingAddress_) {
            // Inbound migration
-           emit MigratedTo(migratingAddress, _account, _amount);
+           emit MigratedTo(migratingAddress_, _account, _amount);

            // Outbound migration
-           emit MigratedTo(migratingAddress, _account, _amount);
+           emit MigratedTo(migratingAddress_, _account, _amount);
            // Ask the new bridged token to mint token for the user.
-           IBridgedERC20(migratingAddress).mint(_account, _amount);
+           IBridgedERC20(migratingAddress_).mint(_account, _amount);
instances[idx].addr, nextInstanceId, instances[id].validSince can be cached: Saves 400 GAS, 4 SLODs
FILE: 2024-03-taiko/packages/protocol/contracts/verifiers/SgxVerifier.sol

+       address addr_ = instances[idx].addr;
-       if (instances[idx].addr == address(0)) revert SGX_INVALID_INSTANCE();
+       if (addr_ == address(0)) revert SGX_INVALID_INSTANCE();

-       emit InstanceDeleted(idx, instances[idx].addr);
+       emit InstanceDeleted(idx, addr_);

        for (uint256 i; i < _instances.length; ++i) {
            if (addressRegistered[_instances[i]]) revert SGX_ALREADY_ATTESTED();
            addressRegistered[_instances[i]] = true;

            if (_instances[i] == address(0)) revert SGX_INVALID_INSTANCE();

+           uint256 nextInstanceId_ = nextInstanceId;
-           instances[nextInstanceId] = Instance(_instances[i], validSince);
+           instances[nextInstanceId_] = Instance(_instances[i], validSince);
-           ids[i] = nextInstanceId;
+           ids[i] = nextInstanceId_;

-           emit InstanceAdded(nextInstanceId, _instances[i], address(0), validSince);
+           emit InstanceAdded(nextInstanceId_, _instances[i], address(0), validSince);

            nextInstanceId++;
        }

+       uint64 validSince_ = instances[id].validSince;
-       return instances[id].validSince <= block.timestamp
-           && block.timestamp <= instances[id].validSince + INSTANCE_EXPIRY;
+       return validSince_ <= block.timestamp
+           && block.timestamp <= validSince_ + INSTANCE_EXPIRY;
Saves a storage slot for the mapping. Depending on the circumstances and sizes of types, can avoid a Gsset (20000 gas) per mapping combined. Reads and subsequent writes can also be cheaper when a function requires both values and they both fit in the same storage slot. Finally, if both fields are accessed in the same function, can save ~42 gas per access due to not having to recalculate the key's keccak256 hash (Gkeccak256 - 30 gas) and that calculation's associated stack operations.
FILE: 2024-03-taiko/packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol

+   struct TrustStatus {
+       bool byEnclave;
+       bool bySigner;
+   }
+   mapping(bytes32 => TrustStatus) private _trustedStatus;
-   mapping(bytes32 enclave => bool trusted) private _trustedUserMrEnclave;
-   mapping(bytes32 signer => bool trusted) private _trustedUserMrSigner;
When fetching data from a storage location, assigning the data to a memory variable causes all fields of the struct/array to be read from storage, which incurs a Gcoldsload (2100 gas) for each field of the struct/array. If the fields are read from the new memory variable, they incur an additional MLOAD rather than a cheap stack read. Instead of declaring the variable with the memory keyword, declare the variable with the storage keyword and cache any fields that need to be re-read in stack variables; this is much cheaper, incurring the Gcoldsload only for the fields actually read. The only time it makes sense to read the whole struct/array into a memory variable is if the full struct/array is being returned by the function, is being passed to a function that requires memory, or is being read from another memory array/struct.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibProving.sol 110: TaikoData.SlotB memory b = _state.slotB;
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibUtils.sol 33: TaikoData.SlotB memory b = _state.slotB;
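A minimal sketch of the pattern (hypothetical contract and field names, not taken from the Taiko codebase; the commented-out line shows the costlier memory copy):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

contract SlotExample {
    struct SlotB {
        uint64 numBlocks;
        uint64 lastVerifiedBlockId;
        bool provingPaused;
    }

    SlotB private slotB;

    function hasUnverifiedBlocks() external view returns (bool) {
        // Expensive: copies every field of the struct from storage to memory.
        // SlotB memory b = slotB;

        // Cheap: a storage pointer costs nothing by itself; only the two
        // fields actually read below incur a storage load.
        SlotB storage b = slotB;
        return b.lastVerifiedBlockId + 1 < b.numBlocks;
    }
}
```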
Accessing state variables (like minGuardians) is more expensive in terms of gas than accessing local variables. By reading minGuardians from storage once and storing it in a local variable (cachedMinGuardians), you reduce the cost associated with repeatedly reading this state variable inside the loop. Since minGuardians does not change within the function's scope, this optimization is safe and effective.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/provers/Guardians.sol

+       uint32 cachedMinGuardians = minGuardians;
        unchecked {
            for (uint256 i; i < guardiansLength; ++i) {
                if (bits & 1 == 1) ++count;
-               if (count == minGuardians) return true;
+               if (count == cachedMinGuardians) return true;
                bits >>= 1;
            }
        }
_newGuardians.length is calculated multiple times. Reading an array's length repeatedly costs gas; the better solution is to calculate the length once and save it in a local variable.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/provers/Guardians.sol

        // We need at least MIN_NUM_GUARDIANS and at most 255 guardians (so the approval bits fit in
        // a uint256)
        if (_newGuardians.length < MIN_NUM_GUARDIANS || _newGuardians.length > type(uint8).max) {
            revert INVALID_GUARDIAN_SET();
        }
        // Minimum number of guardians to approve is at least equal or greater than half the
        // guardians (rounded up) and less or equal than the total number of guardians
        if (_minGuardians < (_newGuardians.length + 1) >> 1 || _minGuardians > _newGuardians.length) {
Using calldata instead of memory for read-only arguments in external functions saves gas. calldata must be used when declaring an external function's dynamic parameters.
When a function with a memory array is called externally, the abi.decode() step has to use a for-loop to copy each index of the calldata to the memory index. Each iteration of this for-loop costs at least 60 gas (i.e. 60 * <mem_array>.length). Using calldata directly obviates the need for such a loop in the contract code and runtime execution.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibUtils.sol

23: function getTransition(
        TaikoData.State storage _state,
-       TaikoData.Config memory _config,
+       TaikoData.Config calldata _config,
        uint64 _blockId,
        bytes32 _parentHash
    )
        external
        view
        returns (TaikoData.TransitionState storage)

52: function getBlock(
        TaikoData.State storage _state,
-       TaikoData.Config memory _config,
+       TaikoData.Config calldata _config,
        uint64 _blockId
    )
        external
        view
        returns (TaikoData.Block storage blk_, uint64 slot_)
FILE: 2024-03-taiko/packages/protocol/contracts/L1/provers/Guardians.sol

53: function setGuardians(
-       address[] memory _newGuardians,
+       address[] calldata _newGuardians,
        uint8 _minGuardians
    )
        external
        onlyOwner
        nonReentrant
Functions like votingDelay, votingPeriod, and proposalThreshold are marked pure because they return fixed values without interacting with contract state. However, each function call consumes gas. Replacing these with direct values or constants in the code eliminates function execution overhead, saving gas.
While a pure function call might cost around 700 to 800 gas per call due to execution and computational overhead, accessing a constant directly is significantly cheaper, often costing around 3 to 5 gas. Therefore, replacing a function call with a direct value could save approximately 695 to 797 gas per call.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/gov/TaikoGovernor.sol

    /// @notice How long after a proposal is created should voting power be fixed. A
    /// large voting delay gives users time to unstake tokens if necessary.
    /// @return The duration of the voting delay.
    function votingDelay() public pure override returns (uint256) {
        return 7200; // 1 day
    }

    /// @notice How long does a proposal remain open to votes.
    /// @return The duration of the voting period.
    function votingPeriod() public pure override returns (uint256) {
        return 50_400; // 1 week
    }

    /// @notice The number of votes required in order for a voter to become a proposer.
    /// @return The number of votes required.
    function proposalThreshold() public pure override returns (uint256) {
        return 1_000_000_000 ether / 10_000; // 0.01% of Taiko Token
    }
Remove the nonReentrant modifier from admin-only functions to save gas. Removing the nonReentrant modifier from functions that are only accessible by the contract's owner or administrators can save gas, as these functions are far less exposed to reentrancy attacks due to the controlled access.
Given this, removing the nonReentrant modifier could theoretically save around 5,000 to 20,000 gas per call, depending on the original and new states of the involved storage slot.
FILE:
53: function setGuardians(
        address[] memory _newGuardians,
        uint8 _minGuardians
    )
        external
        onlyOwner
        nonReentrant
    {
Use assembly to load the msg.sender value (via the caller() opcode) directly, which can be more gas-efficient than using the msg.sender global variable.
FILE: 2024-03-taiko/packages/protocol/contracts/common/AddressResolver.sol 25: if (msg.sender != resolve(_name, true)) revert RESOLVER_DENIED();
FILE: 2024-03-taiko/packages/protocol/contracts/L2/TaikoL2.sol 123: if (msg.sender != GOLDEN_TOUCH_ADDRESS) revert L2_INVALID_SENDER();
Using global variables like msg.sender directly is more gas-efficient than caching them in a local variable.
FILE: 2024-03-taiko/packages/protocol/contracts/L1/hooks/AssignmentHook.sol 93: address taikoL1Address = msg.sender;
The extra ! increases the computational cost. The compiler can sometimes optimize this.
FILE: 2024-03-taiko/packages/protocol/contracts/bridge/Bridge.sol

132: if (!destChainEnabled) revert B_INVALID_CHAINID();
171: if (!isMessageProven) {
174: if (!ISignalService(signalService).isSignalSent(address(this), msgHash)) {
179: if (!_proveSignalReceived(signalService, failureSignal, _message.destChainId, _proof)) {
209: } else if (!isMessageProven) {
235: if (!isMessageProven) {
302: } else if (!isMessageProven) {
FILE: 2024-03-taiko/packages/protocol/contracts/L1/libs/LibProving.sol

77: if (!_pause) {
420: if (!isAssignedPover) revert L1_NOT_ASSIGNED_PROVER();
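Where both branches exist, the negation can often be dropped by simply swapping the branches. A minimal sketch (hypothetical helper functions; actual savings depend on what the optimizer already does):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

abstract contract BranchOrderExample {
    function _handleProven() internal virtual;
    function _handleUnproven() internal virtual;

    // Before: the flag is negated (an extra ISZERO) on every evaluation.
    function processNegated(bool isMessageProven) internal {
        if (!isMessageProven) {
            _handleUnproven();
        } else {
            _handleProven();
        }
    }

    // After: branches swapped so the flag is tested directly.
    function processDirect(bool isMessageProven) internal {
        if (isMessageProven) {
            _handleProven();
        } else {
            _handleUnproven();
        }
    }
}
```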
Using named arguments for a struct means that the compiler needs to organize the fields in memory before doing the assignment, which wastes gas. Set each field directly in storage (use dot-notation), or use the unnamed version of the constructor.
FILE: 2024-03-taiko/packages/protocol/contracts/bridge/Bridge.sol

243: proofReceipt[msgHash] = ProofReceipt({
         receivedAt: receivedAt,
         preferredExecutor: _message.gasLimit == 0 ? _message.destOwner : msg.sender
     });
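Both cheaper forms sketched below (struct fields mirror Bridge's ProofReceipt; the wrapper contract and function are hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

contract ProofReceiptExample {
    struct ProofReceipt {
        uint64 receivedAt;
        address preferredExecutor;
    }

    mapping(bytes32 => ProofReceipt) public proofReceipt;

    function store(bytes32 msgHash, uint64 receivedAt, address executor) external {
        // (a) Positional (unnamed) constructor: fields are passed in
        //     declaration order, avoiding the named-argument reshuffling.
        proofReceipt[msgHash] = ProofReceipt(receivedAt, executor);

        // (b) Or write each field directly to storage via dot-notation:
        // ProofReceipt storage r = proofReceipt[msgHash];
        // r.receivedAt = receivedAt;
        // r.preferredExecutor = executor;
    }
}
```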
OpenZeppelin is a great and popular smart contract library, but there are alternatives worth considering that offer better gas efficiency and have been tested and recommended by developers. Two such alternatives are Solmate and Solady.
Solmate is a library that provides a number of gas-efficient implementations of common smart contract patterns. Solady is another gas-efficient library that places a strong emphasis on using assembly.
"@openzeppelin/contracts": "4.8.2", "@openzeppelin/contracts-upgradeable": "4.8.2",
When reverting in Solidity code, it is common practice to use a require or revert statement to revert execution with an error message. In most cases this can be further optimized by using assembly to revert with the error message, saving over 300 gas per revert.
File: packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol

61: require(msg.sender == owner, "onlyOwner");
File: packages/protocol/contracts/thirdparty/optimism/trie/MerkleTrie.sol

77: require(_key.length > 0, "MerkleTrie: empty key");
89: require(currentKeyIndex <= key.length, "MerkleTrie: key index exceeds total key length");
191: revert("MerkleTrie: received a node with an unknown prefix");
194: revert("MerkleTrie: received an unparseable node");
198: revert("MerkleTrie: ran out of proof elements");
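A sketch of the assembly variant for the onlyOwner check above (the memory layout follows the standard `Error(string)` ABI encoding; the wrapper contract is hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

contract AssemblyRevertExample {
    address public owner = msg.sender;

    function onlyOwnerCheck() external view {
        // Equivalent to: require(msg.sender == owner, "onlyOwner");
        assembly {
            if iszero(eq(caller(), sload(owner.slot))) {
                // Hand-encode Error(string) and revert directly.
                mstore(0x00, 0x08c379a000000000000000000000000000000000000000000000000000000000) // Error(string) selector
                mstore(0x04, 0x20)        // offset of the string in the payload
                mstore(0x24, 9)           // length of "onlyOwner"
                mstore(0x44, "onlyOwner") // the message, left-aligned
                revert(0x00, 0x64)
            }
        }
    }
}
```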
In Solidity, do-while loops are more gas-efficient than for loops, because the loop condition is not checked before the first iteration.
FILE: 2024-03-taiko/packages/protocol/contracts/bridge/Bridge.sol 90: for (uint256 i; i < _msgHashes.length; ++i) {
FILE: 2024-03-taiko/packages/protocol/contracts/L2/TaikoL2.sol 234: for (uint256 i; i < 255 && _blockId >= i + 1; ++i) {
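A sketch of the do-while rewrite for the Bridge loop (hypothetical wrapper contract and `_process` helper; the empty-array guard is required because do-while always runs the body once):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

abstract contract DoWhileExample {
    function _process(bytes32 msgHash) internal virtual;

    // do-while moves the condition check to the end of each iteration,
    // skipping the pre-loop check that a for-loop performs.
    function processAll(bytes32[] calldata _msgHashes) external {
        uint256 len = _msgHashes.length;
        if (len == 0) return; // guard: do-while would otherwise run once

        uint256 i;
        do {
            _process(_msgHashes[i]);
            unchecked { ++i; }
        } while (i < len);
    }
}
```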
In Solidity, when you evaluate a boolean expression (e.g. with the || (logical or) or && (logical and) operators), the second operand of || is only evaluated if the first evaluates to false, and the second operand of && is only evaluated if the first evaluates to true. This is called short-circuiting.
Short-circuiting is useful, and it's recommended to place the less expensive expression first, as the more costly one might then be bypassed. If the cheaper expression currently appears second, it may be worth reversing the order so that it gets evaluated first.
FILE: 2024-03-taiko/packages/protocol/contracts/L2/TaikoL2.sol 141: if (!skipFeeCheck() && block.basefee != basefee) {
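The ordering effect can be illustrated with a minimal sketch (hypothetical contract; the cheap check is a pure function, the expensive one an SLOAD):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

contract ShortCircuitExample {
    bool private flagInStorage; // reading this costs an SLOAD (expensive)

    // Hypothetical cheap check: no storage access.
    function isCheap() internal pure returns (bool) {
        return false;
    }

    // Costly order: the SLOAD executes on every call, even though the
    // cheap check alone could already decide the outcome.
    function checkSlow() external view returns (bool) {
        return flagInStorage && isCheap();
    }

    // Cheaper order: when isCheap() returns false, && short-circuits
    // and the SLOAD never happens.
    function checkFast() external view returns (bool) {
        return isCheap() && flagInStorage;
    }
}
```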
[G-19] Use assembly in place of abi.decode to extract calldata values more efficiently
Instead of using abi.decode, we can use assembly to decode our desired calldata values directly. This will allow us to avoid decoding calldata values that we will not use.
FILE: 2024-03-taiko/packages/protocol/contracts/tokenvault/ERC1155Vault.sol 140: (bytes memory data) = abi.decode(message.data[4:], (bytes));
FILE: 2024-03-taiko/packages/protocol/contracts/tokenvault/ERC721Vault.sol 123: (bytes memory data) = abi.decode(_message.data[4:], (bytes));
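For fixed-size values the pattern looks like the sketch below (hypothetical contract and function; dynamic types such as the `bytes` in the Vault snippets above need additional offset/length handling on top of this):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

contract CalldataDecodeExample {
    // Equivalent to: amount = abi.decode(_data, (uint256));
    // but loads the word straight from calldata, skipping the memory
    // copy and bookkeeping that abi.decode performs.
    function decodeAmount(bytes calldata _data) external pure returns (uint256 amount) {
        assembly {
            amount := calldataload(_data.offset)
        }
    }
}
```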
Making the constructor payable saves around 200 gas on deployment. This is because non-payable functions have an implicit require(msg.value == 0) inserted in them. Additionally, less bytecode at deploy time means a lower gas cost due to smaller calldata.
FILE: 2024-03-taiko/packages/protocol/contracts/common/EssentialContract.sol 64: constructor() {
FILE: 2024-03-taiko/packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol 54: constructor(address sigVerifyLibAddr, address pemCertLibAddr) {
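A minimal sketch of the fix (hypothetical contract; the `payable` keyword is the only change relative to a default constructor):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

contract PayableConstructorExample {
    address public immutable deployer;

    // `payable` removes the implicit `require(msg.value == 0)` check the
    // compiler inserts into non-payable constructors, slightly shrinking
    // deployment bytecode and cost.
    constructor() payable {
        deployer = msg.sender;
    }
}
```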
#0 - c4-judge
2024-04-10T10:13:52Z
0xean marked the issue as grade-b
#1 - sathishpic22
2024-04-11T05:27:42Z
Hi @0xean
Thank you for the judging and grades.
I have reviewed all the gas reports that are marked as Grade A. I am quite sure that my report saves more gas than some reports with a Grade A rating. Yes, some of the Grade A reports have a higher count of findings than mine, but my submission contains findings that save more gas and are therefore of higher value. I have not submitted any false findings, and my report includes some unique findings that were not identified by other wardens, whereas some reports marked Grade A contain many false findings.
One such report was nevertheless marked as Grade A.
Based on my analysis and the substantial gas savings demonstrated by my findings, I am very confident that my report merits a higher grade than the one currently assigned.
[G-1.1] srcChainId and snapshooter can be packed in the same slot: Saves 2000 GAS, 1 SLOT
[G-1.2] srcToken and srcChainId can be packed in the same slot: Saves 4000 GAS, 2 SLOTs
[G-1.3] _checkLocalEnclaveReport and owner can be packed in the same slot: Saves 2000 GAS, 1 SLOT
[G-2] State variables only set in the constructor should be declared immutable - Saves 2000 GAS
[G-3.1] addressManager can be cached: Saves 100 GAS, 1 SLOD
[G-3.2] _state.slotA.numEthDeposits can be cached: Saves 100 GAS, 1 SLOD
[G-3.3] blk.blockId, blk.metaHash, blk.livenessBond, _ts.contester can be cached: 700 GAS, 7 SLODs
[G-3.4] ts.prover can be cached: Saves 200 GAS, 2 SLODs
[G-3.5] version can be cached: Saves 100 GAS, 1 SLOD
[G-3.6] gasExcess, lastSyncedBlock can be cached: Saves 300 GAS, 3 SLODs
[G-3.7] migratingAddress can be cached: Saves 300 GAS, 3 SLODs
[G-3.8] instances[idx].addr, nextInstanceId, instances[id].validSince can be cached: Saves 400 GAS, 4 SLODs
[G-4] Consolidate Multiple Address/ID Mappings into Single Struct-Based Mapping - Saves 20000 GAS
[G-5] Using storage instead of memory for state variables saves gas - Saves 4200 GAS
[G-8] Using calldata instead of memory for read-only arguments in external functions saves gas - 740 GAS
[G-9] Replace Function Calls with Constants - Saves around 3000 GAS
[G-10] Remove nonReentrant modifier from admin only functions to save gas - 5000 GAS
This strategic approach ensures that my submissions are both impactful and aligned with the goal of maximizing efficiency.
Thank you for the opportunity to express my perspective. Should there be any discrepancies or errors in my understanding, I welcome and appreciate correction.
Thank you
#2 - 0xean
2024-04-11T10:51:00Z
Thanks for reviewing, and I agree this should be an A
#3 - c4-judge
2024-04-11T10:51:04Z
0xean marked the issue as grade-a
🌟 Selected for report: kaveyjoe
Also found by: 0xbrett8571, 0xepley, JCK, LinKenji, MrPotatoMagic, Myd, Sathish9098, aariiif, albahaca, cheatc0d3, clara, emerald7017, fouzantanveer, foxb868, hassanshakeel13, hunter_w3b, joaovwfreire, pavankv, popeye, roguereggiant, yongskiws
423.5827 USDC - $423.58
The Taiko Protocol is an advanced layer-2 scaling solution designed for the Ethereum blockchain, aiming to improve transaction efficiency, reduce costs, and enhance scalability. Key components include LibVerifying for secure block validation, Lib1559Math for dynamic fee adjustments, and TaikoL2, which facilitates cross-layer communication and gas pricing. The protocol also introduces Bridged Tokens (BridgedERC20, BridgedERC721, BridgedERC1155) to seamlessly transfer assets between chains while maintaining their integrity. Additionally, the BaseVault contracts (ERC20Vault, ERC721Vault, ERC1155Vault) securely manage token deposits, withdrawals, and bridging. Overall, Taiko stands out for its robust security measures, innovative economic model, and ability to provide seamless cross-chain interactions within the DeFi ecosystem.
Conducted a detailed technical analysis of contracts designated as HIGH priority according to the scope document, focusing on their critical roles within the system architecture and potential security risks.
This contract inherits from OpenZeppelin's UUPS (Universal Upgradeable Proxy Standard) and Ownable2StepUpgradeable contracts, indicating it is part of a system designed for upgradeability and ownership management. Additionally, it integrates an AddressResolver for dependency management.
pause(): Enforces contract pausing, emitting a Paused event, with a whenNotPaused guard.
unpause(): Lifts the contract pause state, emitting an Unpaused event, with a whenPaused guard.
paused(): Returns the contract's paused status as a boolean from the internal __paused.
__Essential_init(address _owner, address _addressManager): Initializes the contract's owner and integrates the address manager; checks for a non-zero address manager.
__Essential_init(address _owner): Sets the initial contract owner, defaulting to the message sender if the zero address is given.
_authorizeUpgrade(address): Enforces owner-only access for contract upgrades in the UUPS pattern.
_authorizePause(address): Restricts pause/unpause actions to the contract owner only.
_storeReentryLock(uint8 _reentry): Manages the reentrancy lock status, adapting for network-specific storage mechanisms.
_loadReentryLock(): Retrieves and returns the state of the reentrancy lock.
_inNonReentrant(): Provides the boolean status of the contract's reentrancy lock for the current operation context.
Owner: Central role, typically involved in critical functionalities like contract upgrades, pausing, and unpausing the contract. Has exclusive rights to authorize upgrades (via the _authorizeUpgrade function) and change the contract's paused state (pause and unpause functions). Involved in the initial setting or transferring of ownership through the __Essential_init functions and the ownership-transfer mechanisms inherited from Ownable2StepUpgradeable.
Named addresses (via onlyFromOwnerOrNamed): Secondary role defined by specific names resolved through the AddressResolver, used in the onlyFromOwnerOrNamed modifier. Allows specific functions to be executed not just by the contract owner but also by addresses that are resolved (and thus authorized) through the contract's address resolution system.
By varying behavior with chainid, the contract could perform differently on various Ethereum networks (mainnet vs. testnets or layer-2 networks). This divergence can lead to a lack of uniformity in how reentrancy protection behaves, making it difficult to ensure the same level of security across environments. While intended to prevent reentrant attacks, the custom implementation based on chain ID could harbor unseen vulnerabilities, especially under different network conditions or unexpected interactions.
The system's reliance on the AddressResolver for identifying roles and permissions could lead to integration issues if the resolver contains incorrect addresses or becomes compromised.
Utilizing the UUPS upgradeable framework, the contract grants the owner unilateral authority to deploy new logic. This can centralize power, enabling the owner to modify contract behaviors or insert vulnerabilities without external validation or consensus, potentially compromising transparency and user trust.
The contract's reentrancy lock varies with the network (mainnet vs. others), managed solely by the administrator. This can create unequal security postures across different environments, leading to potential inconsistencies in threat mitigation and favoritism in network-specific defenses, undermining homogeneous security standards.
The LibTrieProof library in Solidity is designed for verifying Merkle proofs against the Ethereum state or account storage. This is particularly relevant for systems interacting with Ethereum's state trie, where verifiability of on-chain data without direct access is necessary.
verifyMerkleProof(): Confirms whether a specific storage slot value (_value) of an Ethereum account (_addr) matches what's recorded on the blockchain, based on a provided state or storage root (_rootHash).
If an account proof (_accountProof) is provided, the function first checks whether this proof correctly leads from the state root (_rootHash) to the specified account. It verifies the account's existence and extracts the account's storage root. Using the obtained or directly provided storage root, it then validates the storage proof (_storageProof) to ensure the given value (_value) is indeed at the specified storage slot (_slot).
Scenarios in which verifyMerkleProof() might yield false or incorrect information:
State Root Mismatch: If the provided state root does not match the actual root of the data being proven (due to a fork, update, or error), the function will fail to correctly verify the proof against this incorrect root.
Chain Reorganizations: On blockchains, especially Ethereum, chain reorganizations can change the state root unexpectedly. If a proof was generated just before a reorganization, it might become invalid shortly afterward.
Incorrect Assumptions: If the function makes incorrect assumptions about input formats, trie structure, or Ethereum state conventions, it might misinterpret valid proofs or validate invalid ones.
In the LibTrieProof implementation, a replay attack can occur when an adversary reuses valid Merkle proofs from past transactions or states to perform unauthorized actions or validate incorrect states as current. This can lead to the system accepting outdated or incorrect information as valid, causing various security issues.
Outdated State Proofs: An actor could use a Merkle proof from an old state that is no longer accurate. For example, if a user had a large balance at a previous point in time but then spent most of it, they could try to use the old proof to claim they still have a large balance.
Cross-Context Misuse: A valid proof from one context (e.g., a transaction proving fund ownership in one contract) is used in another context where it should not be valid, exploiting the system's inability to distinguish between the original and intended use cases.
Timestamp or Block Height Validation: Implementations should include the verification of timestamps or block heights within the proof to ensure they reflect the most recent state, preventing the use of outdated proofs.
Unique Identifiers and Nonces: Use unique identifiers or nonces associated with each proof or transaction, ensuring a proof cannot be validly submitted more than once.
Contextual Verification: Ensure that proofs are checked not only for their cryptographic validity but also for their relevance and appropriateness in the current context.
Proof Expiry: Implement an expiry mechanism for proofs so that they are considered valid only for a certain period or up to a certain block height after their generation.
Resource Limitations: Intensive computational requirements for proof verification might lead to out-of-gas errors or make the function prohibitively expensive to use, particularly during network congestion.
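The nonce-and-expiry replay mitigations described above can be sketched as follows (hypothetical contract, names, and TTL value; not Taiko code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

contract ProofGuard {
    // Hypothetical validity window, in blocks.
    uint256 public constant PROOF_TTL = 256;

    // Each proof identifier may be consumed exactly once.
    mapping(bytes32 => bool) public consumed;

    function _checkFreshness(bytes32 proofId, uint256 provenAtBlock) internal {
        require(!consumed[proofId], "proof already used");
        require(block.number <= provenAtBlock + PROOF_TTL, "proof expired");
        consumed[proofId] = true; // nonce-style single use
    }
}
```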
The LibDepositing library in the Taiko protocol is designed to manage Ether deposits, specifically facilitating the transfer of ETH to a Layer 2 solution.
function depositEtherToL2(TaikoData.State storage _state, TaikoData.Config memory _config, IAddressResolver _resolver, address _recipient) internal
Handles the deposit of Ether from Layer 1 to Layer 2. It verifies the deposit amount is within set limits, sends the ETH to a bridge address, logs the deposit, and updates the state to reflect the new deposit.
function processDeposits(TaikoData.State storage _state, TaikoData.Config memory _config, address _feeRecipient) internal returns (TaikoData.EthDeposit[] memory deposits_)
Processes a batch of ETH deposits based on current protocol settings and the number of pending deposits. It applies processing fees, updates the state for each processed deposit, and ensures the fee for processing is allocated correctly.
function canDepositEthToL2(TaikoData.State storage _state, TaikoData.Config memory _config, uint256 _amount) internal view returns (bool)
Determines whether a new ETH deposit is permissible based on the protocol's current state and configuration, such as checking if the amount falls within the allowed range and ensuring there's room in the deposit queue.
function _encodeEthDeposit(address _addr, uint256 _amount) private pure returns (uint256)
Encodes the recipient's address and the deposit amount into a single uint256 for efficient storage and handling within the smart contract, ensuring the amount does not exceed predefined limits.
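The packing of recipient and amount into one word can be sketched as follows (a hypothetical re-derivation, assuming the address occupies the top 160 bits and the amount the low 96 bits):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

library EthDepositCodec {
    // Pack: address in the high 160 bits, amount in the low 96 bits.
    function encode(address _addr, uint256 _amount) internal pure returns (uint256) {
        require(_amount <= type(uint96).max, "amount exceeds 96 bits");
        return (uint256(uint160(_addr)) << 96) | _amount;
    }

    // Unpack the two fields again.
    function decode(uint256 _encoded) internal pure returns (address addr, uint96 amount) {
        addr = address(uint160(_encoded >> 96));
        amount = uint96(_encoded);
    }
}
```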
Bridge: Implied by the bridge address obtained from _resolver.resolve("bridge", false). Responsible for facilitating the actual transfer of Ether from Layer 1 to Layer 2. This role involves ensuring the bridge operates correctly and securely.
Fee recipient: Specified by address _feeRecipient in the processDeposits function. This role involves receiving the processing fees collected from batched Ether deposits; likely a protocol treasury or maintenance entity.
Reliance on an external bridge for Layer 2 deposits introduces risk; if the bridge has downtime or is compromised, it could halt transfers or lead to loss of assets.
Nature of Bridge Failures: downtime, security compromises.
Data Synchronization: Discrepancies in data format or synchronization between Layer 1 and Layer 2 systems could lead to inconsistencies in user balances or deposit records.
Upgradability and Compatibility: If LibDepositing or related contracts are upgradable, there's a risk that updates may introduce incompatibilities or disrupt ongoing deposit processes.
The LibProposing library is part of the Taiko protocol, designed for handling block proposals within its Layer 2 (L2) framework. This library focuses on managing the submission, validation, and processing of proposed blocks, integrating with the broader ecosystem of the Taiko protocol.
proposeBlock(TaikoData.State storage _state, TaikoData.Config memory _config, IAddressResolver _resolver, bytes calldata _data, bytes calldata _txList) internal returns (TaikoData.BlockMetadata memory meta_, TaikoData.EthDeposit[] memory deposits_): Allows a participant (typically a block proposer or validator) to propose a new block for the Taiko L2 chain. This is integral for the progression and updating of the blockchain's state. It relies on _isProposerPermitted for validating whether the caller can propose a block; if this internal validation relies solely on address checking without additional security measures (e.g., signatures or multi-factor authentication), it might be susceptible to address spoofing or impersonation attacks. It also relies on params.blobHash; if the logic for determining blob reusability (isBlobReusable) is flawed, or if the reuse conditions are too lenient, it could lead to the reuse of outdated or incorrect blob data, affecting data integrity. Finally, it depends on the protocol state (_state) and configuration (_config); incorrect or outdated configuration values can lead to improper block proposals, such as exceeding the allowed block size or gas limits.
isBlobReusable(TaikoData.State storage _state, TaikoData.Config memory _config, bytes32 _blobHash) internal view returns (bool): Checks if a data blob is reusable based on expiration and protocol configuration to optimize data storage and cost.
_isProposerPermitted(TaikoData.SlotB memory _slotB, IAddressResolver _resolver) private view returns (bool): Determines if the current sender is authorized to propose a new block, based on protocol rules and configurations.
Block Proposer: Represents the entity (typically an externally owned account, EOA) responsible for calling the proposeBlock function to propose new blocks to the Taiko Layer 2 system. This role involves compiling block data, including transactions and deposit information, and submitting this data to the network.
Assigned Prover: The address designated within a block proposal responsible for providing subsequent proofs or validations for the block. The prover's role is crucial for the integrity and security of the block validation process within Taiko's architecture.
Chain Integrity: Errors or vulnerabilities in the block proposal process can compromise the integrity of the entire Layer 2 chain, leading to incorrect state transitions or consensus failures.
Protocol Reliability: Dependence on accurate blob handling and proper block sequencing means that systemic failures (like incorrect parent block references or mishandling of state changes) can disrupt the operational flow of the entire protocol.
Resource Exhaustion: The function involves multiple state updates and external calls, which could lead to high gas consumption, potentially causing out-of-gas errors or making block proposals prohibitively expensive.
Configuration Management: Misconfiguration in the TaikoData.Config or the address resolver could lead to incorrect behavior, such as invalid block size limits or incorrect fee parameters.
LibProving serves as a crucial mechanism for ensuring the integrity and validity of block transitions within the Taiko protocol. It handles the submission and verification of proofs associated with block transitions, enabling the contestation of incorrect transitions and reinforcing the security and accuracy of the blockchain's state.
pauseProving(TaikoData.State storage _state, bool _pause): Toggles the pausing status for the block proving process within the Taiko protocol. If _pause is true, new proofs cannot be submitted, effectively pausing the proving operations; if false, the proving operations are resumed. This is critical for maintenance or in response to detected issues.
proveBlock(TaikoData.State storage _state, TaikoData.Config memory _config, IAddressResolver _resolver, TaikoData.BlockMetadata memory _meta, TaikoData.Transition memory _tran, TaikoData.TierProof memory _proof)
: Processes proofs for block transitions within the Taiko protocol. It validates and records the proof against the specified transition, handles transitions between different proof tiers, enforces proof validation rules based on the current protocol configuration, and updates the protocol state to reflect the new proof. This function is essential for the integrity and security of block transitions in the network.
_createTransition(TaikoData.State storage _state, TaikoData.Block storage _blk, TaikoData.Transition memory _tran, uint64 slot)
: Internal helper function that ensures the existence and proper initialization of a block transition in the protocol's state. If a transition corresponding to a given parent hash does not exist, it creates one; otherwise, it retrieves the existing transition. This function is crucial for maintaining the continuity and consistency of block transitions within the protocol.
_overrideWithHigherProof(TaikoData.TransitionState storage _ts, TaikoData.Transition memory _tran, TaikoData.TierProof memory _proof, ITierProvider.Tier memory _tier, IERC20 _tko, bool _sameTransition)
: Internal function that manages the logic for updating an existing transition with a new proof of a higher tier. It adjusts the transition's records and handles the transfer of bonds and rewards according to the outcome of the proof submission. This function ensures the protocol adapts to new, more reliable proofs while appropriately rewarding or penalizing the involved parties.
_checkProverPermission(TaikoData.State storage _state, TaikoData.Block storage _blk, TaikoData.TransitionState storage _ts, uint32 _tid, ITierProvider.Tier memory _tier)
: Internal function that verifies whether the sender (prover) is authorized to submit a proof for a particular block transition based on various conditions, such as the timing window and the prover's identity. This function is key to enforcing proof submission policies and preventing unauthorized or premature submissions.
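The timing-window idea behind the prover-permission check can be sketched in a few lines. This is a hypothetical Python model of the windowing logic only; the actual _checkProverPermission also accounts for tiers, contested transitions, and other conditions, and all names here are assumptions:

```python
def can_prove(sender: str, assigned_prover: str,
              proposed_at: int, proving_window: int, now: int) -> bool:
    """Inside the exclusivity window, only the assigned prover may submit a
    proof; once the window has elapsed, proving is open to anyone."""
    if now <= proposed_at + proving_window:
        return sender == assigned_prover
    return True

assert can_prove("0xA", "0xA", 0, 100, 50)       # assigned prover, in window
assert not can_prove("0xB", "0xA", 0, 100, 50)   # outsider, in window
assert can_prove("0xB", "0xA", 0, 100, 150)      # window elapsed: open proving
```

A gate of this shape is what prevents premature or unauthorized submissions while still guaranteeing liveness after the window closes.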
Provers
: Entities responsible for submitting proofs to verify the correctness of block transitions. They provide necessary evidence supporting the validity of the transactions and state transitions within a block.
Contesters
: Participants who challenge the validity of a submitted proof. They play a critical role in maintaining the integrity of the network by identifying and disputing incorrect or malicious proofs.
Protocol Administrators
: Individuals or entities with the authority to pause and unpause the proving process, typically for maintenance or in response to detected vulnerabilities.
Tier Providers
: They define the different tiers of proofs allowed within the system, setting the standards and requirements for each proof level, affecting the security and efficiency of the proving process.
Verifiers
: Smart contracts or entities tasked with validating the submitted proofs according to the protocol's rules and the specific tier's requirements.
Chain Integrity Failure
: Flaws in the proving mechanism can lead to incorrect block transitions being accepted, compromising the entire chain's integrity.
Protocol Stagnation
: The inability to update or pause proving processes in response to emerging threats could result in systemic failures or persistent vulnerabilities.
Incorrect Proof
: Flaws in proof generation or verification logic can result in valid transitions being rejected or invalid ones accepted.
Data Handling Errors
: Mismanagement of transitions, proof data, or bond information can lead to inconsistencies, loss of funds, or incorrect state updates.
Configuration Sync
: Ensuring that configurations (e.g., proof tiers, bond amounts) remain synchronized across different contracts and protocol layers is crucial for consistent operation and security.
Unauthorized Pausing
: If the pausing functionality is abused by protocol administrators, it could lead to unnecessary disruptions in the proving process or be used to censor specific provers or contesters.
Manipulation of Proofs and Tiers
: Administrators with the ability to alter proof requirements or tier parameters could unfairly influence the proving process, benefiting certain parties over others or compromising the network's security.
Improper Bond Management
: Misuse of admin privileges could lead to inappropriate handling of validity and contest bonds, potentially resulting in unjust enrichment or unwarranted penalties.
The LibVerifying library is part of the Taiko protocol and is designed for handling the verification of block transitions in the protocol's Layer 2 solution. This library includes mechanisms for initializing the protocol state, verifying blocks, and ensuring the continuity and integrity of the chain.
init(TaikoData.State storage _state, TaikoData.Config memory _config, bytes32 _genesisBlockHash)
: Sets up initial protocol state using specified configuration and genesis block hash, ensuring the protocol is ready for operation from a clearly defined starting point.
verifyBlocks(TaikoData.State storage _state, TaikoData.Config memory _config, IAddressResolver _resolver, uint64 _maxBlocksToVerify)
: Processes and verifies up to _maxBlocksToVerify blocks based on established transition rules and updates their state as verified, maintaining blockchain integrity.
_syncChainData(TaikoData.Config memory _config, IAddressResolver _resolver, uint64 _lastVerifiedBlockId, bytes32 _stateRoot)
: Internally updates external systems with the latest verified blockchain data, ensuring consistency across the protocol and external references.
_isConfigValid(TaikoData.Config memory _config)
: Performs checks on protocol configuration parameters to ensure they fall within acceptable ranges and meet operational requirements, guarding against misconfigurations.
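As an illustration of the kind of range checks _isConfigValid performs, here is a minimal Python sketch; the parameter names and bounds are assumptions for illustration, not the actual Taiko configuration fields or limits:

```python
def is_config_valid(cfg: dict) -> bool:
    """Reject configurations with out-of-range parameters. Field names
    (chain_id, block_max_gas_limit, ...) are illustrative assumptions."""
    return (
        cfg.get("chain_id", 0) > 0                      # must target a real chain
        and cfg.get("block_max_gas_limit", 0) > 0       # blocks must allow gas
        and cfg.get("block_ring_buffer_size", 0)        # buffer must outsize the
            > cfg.get("max_blocks_to_verify", 0)        # verification batch
    )

assert is_config_valid({"chain_id": 167, "block_max_gas_limit": 15_000_000,
                        "block_ring_buffer_size": 864_000,
                        "max_blocks_to_verify": 10})
assert not is_config_valid({"chain_id": 0})  # misconfiguration rejected
```

Centralizing such checks at initialization is what guards the rest of the protocol against silent misconfigurations.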
The Lib1559Math
library is designed to implement a bonding curve based on the exponential function (e^x
) for the Ethereum fee market mechanism, as proposed by EIP-1559
.
basefee(uint256 _gasExcess, uint256 _adjustmentFactor)
: Validates input parameters to avoid division by zero or other invalid operations. It then calculates the new base fee using the provided formula and adjustments based on EIP-1559 guidelines.
_ethQty(uint256 _gasExcess, uint256 _adjustmentFactor)
: Performs safety checks and scales the input to prevent overflow issues. It uses a fixed-point math library (LibFixedPointMath) to handle the exponential function calculation, which is not natively supported in Solidity with high precision.
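The exponential bonding curve can be modeled in a few lines of Python. This floating-point sketch mirrors the structure of basefee/_ethQty described above; on-chain, LibFixedPointMath performs the same computation in fixed point, and the scaling details here are simplified assumptions:

```python
import math

def eth_qty(gas_excess: int, adjustment_factor: int) -> float:
    """e^(gasExcess / adjustmentFactor): the bonding-curve term."""
    if adjustment_factor == 0:
        raise ValueError("adjustment factor must be non-zero")  # avoid div-by-zero
    return math.exp(gas_excess / adjustment_factor)

def basefee(gas_excess: int, adjustment_factor: int) -> float:
    """Base fee grows exponentially with the accumulated excess gas."""
    return eth_qty(gas_excess, adjustment_factor) / adjustment_factor

# Accumulating more excess gas strictly increases the base fee.
assert basefee(4_000_000, 8_000_000) > basefee(1_000_000, 8_000_000)
```

The exponential shape is what makes sustained over-target gas usage progressively more expensive, the core feedback loop of EIP-1559-style pricing.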
Fixed-Point Precision
: Due to Solidity's lack of native floating-point support, fixed-point arithmetic might introduce rounding errors, affecting the precision of fee calculations.
The TaikoL2 contract is part of a Layer 2 solution that manages cross-layer message verification and implements EIP-1559 gas pricing mechanisms for L2 operations.
Anchoring
: Relays L1 block information to L2, enabling cross-layer communication and verification.
Gas Pricing
: Adjusts base fees on L2 based on L1 congestion levels, aligning with EIP-1559 mechanisms.
Block History
: Records L1 block information to maintain a history of state transitions between layers.
init(address _owner, address _addressManager, uint64 _l1ChainId, uint64 _gasExcess)
: Initializes the TaikoL2 contract with basic setup, including ownership, address resolution, and initial gas excess values for EIP-1559 calculations.
anchor(bytes32 _l1BlockHash, bytes32 _l1StateRoot, uint64 _l1BlockId, uint32 _parentGasUsed)
: Anchors the latest L1
block details to L2
, updating the contract with the most recent block hash
, state root
, block height
, and gas
usage. This function is critical for maintaining L1-L2
consistency and is restricted to specific authorized addresses.
withdraw(address _token, address _to)
: Enables the withdrawal of tokens or Ether from the contract, typically reserved for the contract owner or a designated withdrawer, adding a layer of operational flexibility and security.
getBasefee(uint64 _l1BlockId, uint32 _parentGasUsed)
: Provides the calculated base fee per gas for L2 transactions based on L1 congestion metrics, applying the EIP-1559 model to L2 operations. This function is crucial for gas pricing and network economics.
getBlockHash(uint64 _blockId)
: Retrieves the hash for a specified L2 block number, aiding in data verification and block tracking within the L2 environment.
getConfig()
: Returns the current EIP-1559 configuration parameters used for gas pricing on L2, including the target gas per block and the base fee adjustment quotient.
skipFeeCheck()
: A function potentially used for simulations or testing environments where base fee mismatch checks can be bypassed, offering flexibility in non-production environments.
_calcPublicInputHash(uint256 _blockId)
: Calculates the hash of public inputs, aiding in the verification and integrity checks of L2 blocks, particularly important for ensuring consistency and security in cross-layer communications.
_calc1559BaseFee(Config memory _config, uint64 _l1BlockId, uint32 _parentGasUsed)
: Internal function that calculates the dynamic base fee for L2 transactions, inspired by EIP-1559's algorithm, considering the excess gas and adjusting for L1 block intervals to manage network congestion effectively.
BLOCK_SYNC_THRESHOLD
: uint8 public constant BLOCK_SYNC_THRESHOLD = 5;
The BLOCK_SYNC_THRESHOLD is hardcoded, limiting flexibility and adaptability to changing network conditions.
Base Fee Calculation and Validation
: if (!skipFeeCheck() && block.basefee != basefee) { revert L2_BASEFEE_MISMATCH(); }
This assumes basefee calculations always align with L1 expectations, without accommodating potential variances or future updates in gas pricing.
Gas Excess Handling
: gasExcess = _gasExcess;
State changes such as updates to gasExcess do not trigger event emissions, reducing transparency and traceability.
l2Hashes[parentId] = blockhash(parentId);
This assumes the immutability of L1 block hashes stored on L2, without mechanisms to address possible L1 chain reorganizations affecting these references.
Contract Owner/Administrator
: Manages contract settings, including EIP-1559 parameters and access controls. They are responsible for the initial setup and ongoing adjustments based on network conditions.
Golden Touch Address
: Authorized entity allowed to perform the anchor operation, updating L2 with the latest L1 block details. This role is crucial for maintaining L1-L2 consistency.
Token Withdrawers
: Specific addresses with permission to withdraw tokens or ETH from the contract. This role typically involves managing contract funds and ensuring liquidity.
L1-L2 Desynchronization
: Failure to regularly update L2 with L1 block details can lead to inconsistencies between layers, affecting cross-layer operations and communications.
Misalignment of Economic Models
: Incorrect implementation or management of EIP-1559 features on L2 could lead to economic imbalances, affecting user transaction costs and network congestion.
Fee Instability
: Poor calibration of EIP-1559 parameters could result in unpredictable gas fees and network congestion, deteriorating the user experience and L2 operational efficiency.
Cross-Chain Communication Failures
: Errors in cross-layer message verification or disruptions in L1-L2 communications could impede essential contract functionalities.
Incompatibility with Existing Protocols
: Updates or changes in L1 mechanisms, including EIP-1559 adjustments, require timely updates on L2; failure to do so may lead to integration issues.
Centralized Control Over Anchoring
: Excessive control by the Golden Touch Address over the anchoring process could be abused, impacting the L2's alignment with L1.
The Bridge contract serves as a vital component within a cross-chain communication framework, enabling the transmission, management, and execution of messages between different blockchain networks. It supports EIP-1559 gas pricing adjustments for Layer 2 (L2) operations and ensures secure cross-layer message verification.
init(address _owner, address _addressManager)
: Sets up the contract with an owner and links it to an address manager for other contract references.
suspendMessages(bytes32[] calldata _msgHashes, bool _suspend)
: Allows toggling the processing state of messages (suspend or unsuspend) based on their hashes.
banAddress(address _addr, bool _ban)
: Enables or disables the ability for a specific address to participate in message sending or receiving.
sendMessage(Message calldata _message)
: Facilitates sending a cross-chain message, recording its details and emitting an event.
recallMessage(Message calldata _message, bytes calldata _proof)
: Allows the sender to recall a message before it's processed on the destination chain.
processMessage(Message calldata _message, bytes calldata _proof)
: Processes an incoming message if validated, performing the instructed action.
retryMessage(Message calldata _message, bool _isLastAttempt)
: Offers a sender another attempt to execute a previously failed message.
isMessageSent(Message calldata _message)
: Checks if a message has already been sent, based on its content.
proveMessageFailed(Message calldata _message, bytes calldata _proof)
: Asserts a message has been marked as failed on its destination chain.
proveMessageReceived(Message calldata _message, bytes calldata _proof)
: Verifies that a message has been received on the destination chain.
isDestChainEnabled(uint64 _chainId)
: Determines if the contract is set up to send messages to a specific chain.
context()
: Retrieves the current operational context of the bridge, used for tracking and validation purposes.
getInvocationDelays()
: Provides the time delays enforced before a message can be executed, important for security and order.
hashMessage(Message memory _message)
: Generates a unique identifier for a message based on its content.
signalForFailedMessage(bytes32 _msgHash)
: Creates a unique identifier for failed messages to help manage message lifecycles.
_authorizePause(address)
: Internal function to check if the calling address has permission to pause or unpause the bridge.
_invokeMessageCall(Message calldata _message, bytes32 _msgHash, uint256 _gasLimit)
: Executes the message call with specified parameters.
_updateMessageStatus(bytes32 _msgHash, Status _status)
: Changes the status of a message, ensuring its lifecycle is accurately tracked.
_resetContext()
: Clears the current operational context after a message has been processed.
_storeContext(bytes32 _msgHash, address _from, uint64 _srcChainId)
: Sets the operational context for a message being processed.
_loadContext()
: Fetches the current operational context from storage.
_proveSignalReceived(address _signalService, bytes32 _signal, uint64 _chainId, bytes calldata _proof)
: Validates that a specific signal (indicative of a message's status) has been correctly received and recorded.
Weak Spot
: The _invokeMessageCall method decides on a gas limit for executing a message based on whether the sender is the destOwner. This can lead to unpredictable execution outcomes if not enough gas is provided.
Improvement
: Introduce a gas estimation mechanism for cross-chain calls to dynamically adjust gas limits based on the payload's complexity. Implement a safety margin to cover unexpected gas usage.
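The safety-margin idea is language-agnostic and can be sketched in Python; the 25% margin and hard cap are chosen purely for illustration:

```python
def capped_gas_limit(estimated: int, margin_pct: int = 25,
                     hard_cap: int = 1_000_000) -> int:
    """Estimated gas plus a safety margin, bounded by a hard cap so that a
    bad estimate cannot demand unbounded gas."""
    with_margin = estimated + estimated * margin_pct // 100
    return min(with_margin, hard_cap)

assert capped_gas_limit(200_000) == 250_000    # margin applied
assert capped_gas_limit(900_000) == 1_000_000  # hard cap kicks in
```

Bounding the margined value keeps the mechanism safe even when the estimator itself misbehaves.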
Weak Spot
: The _updateMessageStatus method updates the status without considering the full lifecycle or potential race conditions of message processing.
Improvement
: Implement state machine logic that enforces strict transitions between message statuses to prevent invalid state changes. Use events to log all status transitions for transparency.
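Such a state machine could look like the following Python sketch. The status names mirror the Bridge's message lifecycle (NEW, RETRIABLE, DONE, RECALLED, FAILED); the exact set of allowed transitions shown here is an assumption for illustration:

```python
# Map each status to the set of statuses it may legally move to.
ALLOWED = {
    "NEW": {"RECALLED", "DONE", "RETRIABLE"},
    "RETRIABLE": {"DONE", "FAILED"},
    "DONE": set(),      # terminal
    "RECALLED": set(),  # terminal
    "FAILED": set(),    # terminal
}

def transition(current: str, new: str) -> str:
    """Reject any status change not whitelisted for the current state."""
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

state = "NEW"
state = transition(state, "RETRIABLE")
state = transition(state, "DONE")
# transition(state, "NEW") would now raise: DONE is terminal.
```

In Solidity the same whitelist would live in _updateMessageStatus, with an event emitted on every accepted transition.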
Weak Spot
: The _proveSignalReceived function relies heavily on external SignalService responses without additional validation layers, which could be a single point of failure or exploitation.
Improvement
: Enhance cross-chain message validation by introducing layered checks, such as multi-sourced proof aggregation or implementing zero-knowledge proofs for more secure and decentralized validation processes.
Weak Spot
: The banAddress function switches the ban status without context or granularity. Arbitrary banning could disrupt operations and affect user trust.
Improvement
: Implement time-bound or context-sensitive banning, allowing temporary restrictions based on specific behaviors. Provide a transparent process and criteria for banning and unbanning addresses.
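A time-bound ban could replace the boolean flag with an expiry timestamp, so that bans lapse automatically. This Python sketch models the idea; the storage layout and duration are illustrative assumptions:

```python
import time

# Instead of mapping(address => bool), store the ban's expiry timestamp.
ban_expiry: dict = {}

def ban(addr: str, duration_s: float) -> None:
    """Ban an address for a fixed duration rather than indefinitely."""
    ban_expiry[addr] = time.time() + duration_s

def is_banned(addr: str) -> bool:
    """An address is banned only while its expiry lies in the future."""
    return ban_expiry.get(addr, 0.0) > time.time()

ban("0xabc", 3600)
assert is_banned("0xabc")
ban_expiry["0xabc"] = time.time() - 1  # simulate the ban expiring
assert not is_banned("0xabc")
```

On-chain the same pattern is a mapping(address => uint64) checked against block.timestamp, which removes the need for an explicit unban transaction in the common case.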
Bridge Watchdog
: A specialized role, typically automated or part of a security protocol, responsible for suspending faulty messages and banning malicious addresses.
Signal Service
: External system or service that verifies the sending and receipt of cross-chain messages, ensuring message integrity across chains.
Chain Synchronization Failures
: Discrepancies between L1 and L2 states due to failed anchor operations can lead to systemic inconsistencies, affecting message validity and execution.
Gas Pricing Anomalies
: Incorrect management or calculation of EIP-1559 gas parameters could lead to inflated transaction costs or network congestion.
Cross-Chain Communication Breakdown
: Failures in message verification or delivery could disrupt the interoperability and functionality of connected blockchain ecosystems.
Interface Mismatches
: Inconsistencies between expected and actual behaviors of interconnected systems or changes in external contract interfaces could lead to integration issues.
Message Replay or Loss
: Without proper nonce management or message tracking, messages could be replayed or lost, leading to double spending or information loss.
The contracts BridgedERC20, BridgedERC20Base, BridgedERC721, and BridgedERC1155 are part of a system designed for bridging tokens (ERC20, ERC721, and ERC1155 standards) across different blockchain networks. Each serves a different purpose within the context of token bridging.
The BridgedERC20Base contract serves as a base for bridged ERC20 tokens, focusing primarily on the migration aspect.
changeMigrationStatus
: Enables starting or stopping migration to or from a specific contract.
mint
: Mints new tokens, typically called by an authorized bridge contract, especially during inbound migration.
burn
: Burns tokens, used during outbound migration or when removing tokens from circulation on the current chain.
owner
: Overrides the owner function to maintain compatibility with the IBridgedERC20 interface.
The BridgedERC20 contract is the concrete implementation for bridged ERC20 tokens.
setSnapshoter
: Sets the address authorized to take snapshots of the token's state.
snapshot
: Allows the snapshooter or contract owner to create a snapshot of token balances.
name, symbol, decimals
: Override standard ERC20 functions to provide names, symbols, and decimal counts that may include bridging-specific details.
canonical
: Returns the original token's address and chain ID.
_mintToken, _burnToken
: Internal functions to handle minting and burning of tokens as part of the bridging process.
The BridgedERC721 contract is designed for ERC721 tokens.
mint, burn
: Functions allowing minting new tokens or burning existing ones, usually controlled by a bridge entity to facilitate cross-chain movements.
name, symbol
: Provide metadata for the bridged tokens, potentially incorporating cross-chain information.
tokenURI
: Generates the URI for token metadata, which might include cross-chain details or reference the original token's metadata.
source
: Returns the source token's address and source chain ID, identifying the original token and its native blockchain.
The BridgedERC1155 contract is for bridging ERC1155 tokens.
mint, mintBatch
: Allow for minting single or multiple types of tokens to an address, controlled by an authorized entity for bridging purposes.
burn
: Enables burning tokens from an address, used typically in token bridging scenarios to signify moving tokens out of the current chain.
name, symbol
: Return the token's name and symbol with potential modifications to indicate their bridged status.
_beforeTokenTransfer
: Implements checks before token transfers, similar to BridgedERC721, ensuring that transfers comply with bridging rules and contract status.
The owner is typically the primary authority in the contract, capable of performing critical actions such as initializing the contract, changing migration statuses, and updating critical contract parameters.
In the BridgedERC20 contract, a snapshooter role is defined. This role is allowed to create snapshots of the token state at specific block numbers. Snapshots can be important for various reasons, such as governance decisions or verifying token distributions at a certain point in time.
Ownable is used for the owner role, and AccessControl for managing roles like the snapshooter or specific vault access. Modifiers (onlyOwner, onlyFromNamed("erc20_vault"), onlyOwnerOrSnapshooter) restrict function execution to certain roles.
Cross-Chain Consistency
: Ensuring consistent state and tokenomics across chains is challenging. Discrepancies can lead to arbitrage opportunities that might be exploited unfairly.
Data Availability and Validity
: The bridge relies on the availability and accuracy of data from both the source and destination chains. Issues such as data unavailability, latency, or incorrect data can lead to erroneous bridging operations.
Token Standards Compatibility
: Bridged tokens must adhere
to the standards of their respective blockchains. Any deviation or incompatibility, especially during upgrades or when integrating with new chains, can lead to loss of funds or broken functionalities.
Rate Limiting
: Implement rate-limiting for minting and burning actions to prevent potential abuse or drastic token supply changes.
URI Management
: Implement a flexible mechanism for managing token URIs, especially if they need to represent cross-chain metadata accurately.
Token Recovery
: Implement a secure method to allow recovery of ERC721 tokens sent by mistake.
Customizable Token Metadata
: Provide functions to adjust token metadata dynamically to better reflect its cross-chain nature.
Pausing Mechanism
: Implement a pausing mechanism specific to bridging actions while allowing other ERC1155 actions, providing more granular control during emergencies.
The BaseVault.sol, ERC1155Vault.sol, ERC20Vault.sol, and ERC721Vault.sol contracts form an integral part of a cross-chain bridging solution, allowing for the secure, controlled, and verified transfer of different types of tokens (fungible, non-fungible, and semi-fungible) across blockchain networks. They ensure that assets moving between chains are properly locked, transferred, and unlocked (or minted) following the protocols and security standards necessary for cross-chain interoperability.
supportsInterface
: Implements the ERC165 standard by indicating whether the contract implements a specific interface, enhancing interoperability and type recognition.
checkProcessMessageContext and checkRecallMessageContext
: These functions validate that message processing or recalling occurs in a legitimate context, specifically verifying that the caller is the bridge and the operation conforms to expected parameters.
sendToken
: Handles the deposit of ERC1155 tokens into the vault and initiates their cross-chain transfer by crafting and sending a bridge message.
onMessageInvocation
: Processes incoming bridge messages to either mint new bridged tokens or release previously locked tokens, depending on the message content.
onMessageRecalled
: Reacts to bridge messages being recalled, typically resulting in the return of tokens to their original depositor if a cross-chain transfer is cancelled or reverted.
changeBridgedToken
: Allows the management of bridged token representations, enabling updates to the token mapping as necessary.
_handleMessage
: Prepares and validates data for cross-chain communication, ensuring that token transfers are correctly represented and authorized.
_getOrDeployBridgedToken and _deployBridgedToken
: Manage the lifecycle of bridged tokens, including their creation when first encountered.
The SgxVerifier contract provides functionalities related to SGX (Software Guard Extensions) attestation and verification within a blockchain environment.
addInstances(address[] calldata _instances)
: Allows the owner to add new SGX instances to the registry. Each instance represents an SGX enclave identified by its Ethereum address. This function emits an InstanceAdded event for each new instance.
deleteInstances(uint256[] calldata _ids)
: Enables removal of SGX instances from the registry, typically invoked by the contract owner or a specific authorized entity (like a watchdog). It emits an InstanceDeleted event for each instance removed.
registerInstance(V3Struct.ParsedV3QuoteStruct calldata _attestation)
: Registers a new SGX instance after verifying its remote attestation quote. This function is designed to work with an attestation service that confirms the integrity and authenticity of an SGX enclave.
verifyProof(Context calldata _ctx, TaikoData.Transition calldata _tran, TaikoData.TierProof calldata _proof)
: Verifies a cryptographic proof provided by an SGX instance. It's used to ensure that data or a computation (represented by _tran) was correctly processed by an SGX enclave. This function is central to trust and security, especially in cross-chain or L2 scenarios.
getSignedHash(TaikoData.Transition memory _tran, address _newInstance, address _prover, bytes32 _metaHash)
: Constructs a hash intended to be signed by an SGX instance. This forms the basis of verifying the legitimacy and integrity of data processed by the SGX enclave.
_addInstances(address[] memory _instances, bool instantValid)
: A private function to add SGX instances to the registry. It handles the logic for assigning instance IDs and setting validity times.
_replaceInstance(uint256 id, address oldInstance, address newInstance)
: Replaces an existing SGX instance with a new one in the registry. This might be needed if the SGX enclave's keys are rotated or if the enclave needs to be updated.
_isInstanceValid(uint256 id, address instance)
: Checks if an SGX instance is currently valid based on its ID and address. This includes checking whether the instance is within its valid time frame.
Watchdog
: A specific role or entity authorized to remove SGX instances from the registry, likely for security or operational reasons.
SGX Enclave (Instance)
: Represents an operational SGX enclave that performs computations or data processing securely.
A systemic risk for SgxVerifier would be a widespread vulnerability or flaw in the SGX technology itself, such as a side-channel attack that compromises all SGX enclaves globally. Another example is reliance on a single attestation service that, if compromised, could invalidate the trustworthiness of all instances.
Message ID Invariant
: The nextMessageId must only increase over time, ensuring that each outgoing message has a unique identifier.
uint128 public nextMessageId;
Invariant: For any two messages, if message1 was sent before message2, then message1.id < message2.id.
Message Status Invariant
: Each message identified by its hash (msgHash) should have a status that accurately reflects its current state and should transition between states according to the contract's logic.
mapping(bytes32 => Status) public messageStatus;
Invariant: messageStatus can transition from NEW -> RECALLED, NEW -> DONE, or NEW -> RETRIABLE, but once it moves to DONE, RECALLED, or FAILED, it cannot change.
Address Ban Invariant
: If an address is banned, it cannot be used for invoking message calls.
mapping(address => bool) public addressBanned;
Invariant: If addressBanned[addr] is true, then addr should not successfully invoke message calls.
Invocation Delay Invariant
: Messages must respect the invocation delay, ensuring that they are processed only after a specified time since their reception.
function getInvocationDelays() public view returns (uint256, uint256);
Invariant: A message can only be processed after invocationDelay seconds have passed since it was received, as recorded in proofReceipt.
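The delay gate reduces to a single comparison; this Python sketch uses assumed field names:

```python
def can_process(received_at: int, invocation_delay: int, now: int) -> bool:
    """A message becomes processable only once the delay has fully elapsed."""
    return now >= received_at + invocation_delay

assert not can_process(1000, 300, 1100)  # too early: only 100s have passed
assert can_process(1000, 300, 1300)      # delay elapsed
```

The delay gives watchers (e.g., the bridge watchdog) a reaction window to suspend a fraudulent message before it executes.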
Value Transfer Invariant
: The value (Ether) sent with a message must match the expected value defined in the message structure.
uint256 expectedAmount = _message.value + _message.fee;
Invariant: The sum of _message.value and _message.fee must equal msg.value when sending or recalling a message.
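This invariant is simple to state as an executable check; in the contract the failing branch would be a revert (Python sketch, with the helper name as an assumption):

```python
def check_funds(value: int, fee: int, msg_value: int) -> None:
    """Reject any call whose attached Ether differs from value + fee."""
    if value + fee != msg_value:
        raise ValueError("msg.value must equal message.value + message.fee")

check_funds(10, 2, 12)  # exact match: accepted
try:
    check_funds(10, 2, 11)  # underfunded: rejected
    raise AssertionError("should have been rejected")
except ValueError:
    pass
```

Enforcing strict equality (rather than >=) prevents Ether from accumulating unaccounted in the bridge.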
Migration Status Invariance
: If migration is inbound, no new tokens should be minted. If migration is outbound, tokens can only be minted by the migration target.
Message Processing Invariance
: When a message is being processed from the bridge, it should follow the proper authentication and execution flow without state inconsistencies.
Ownership Tracking Invariant
: Each NFT must be associated with one owner at a time as tracked by the contract.
mapping(uint256 => address) public nftOwners;
Invariant: nftOwners[tokenId]
must match the current owner of the NFT for all tokenId.
Access Control Invariant
: Only authorized users (like the contract owner or designated roles) can perform critical functions like minting or burning tokens.
mapping(address => bool) public isAuthorized;
Invariant: Functions like mint() or burn() can only be called by addresses where isAuthorized[caller] == true.
Timestamp Invariant
: The timestamp for the last price update must always be less than or equal to the current block time.
mapping(address => uint256) public lastUpdateTime;
Invariant: lastUpdateTime[asset] <= now for all assets.
Proposal State Invariant
: A proposal's state must follow the correct lifecycle transitions.
enum ProposalState { Pending, Active, Defeated, Succeeded, Executed }
mapping(uint256 => ProposalState) public state;
Invariant: State transitions must follow logical order, e.g., Pending -> Active -> (Defeated | Succeeded) -> Executed.
I analyzed the following high-priority contracts:
EssentialContract.sol, LibTrieProof.sol, LibDepositing.sol, LibProposing.sol, LibProving.sol, LibVerifying.sol, Lib1559Math.sol, TaikoL2.sol, SignalService.sol, Bridge.sol, BridgedERC20.sol, BridgedERC20Base.sol, BridgedERC721.sol, BridgedERC1155.sol, BaseVault.sol, ERC1155Vault.sol, ERC20Vault.sol, ERC721Vault.sol, SgxVerifier.sol
- **Contractual Relationships**: Identify the relationships between Taiko's core contracts, such as Omnipool, cross-chain bridges, and liquidity provision mechanisms.
- **Flow of Assets**: Trace how assets move within the system, focusing on token wrapping, unwrapping, and the impact of these movements on liquidity and trading.
- **Cross-Chain Security**: Assess the integrity and security of the cross-chain messaging and bridge mechanisms, crucial for Taiko's interoperability features.
- **Smart Contract Vulnerabilities**: Beyond standard checks, focus on issues prevalent in DeFi protocols, such as flash loan attacks, price manipulation, and oracle failure.
- **Fee Structures and Incentives**: Delve into Taiko's fee structures, reward systems, and their alignment with user and protocol incentives.
- **Liquidity and Slippage Models**: Analyze the mathematical models underpinning liquidity provision, pricing, and slippage.
- **Tokenomics**: Review the tokenomics specific to Taiko, considering burn mechanisms, staking rewards, and governance features.
- **Gas Optimization**: Given the complex interactions within DeFi contracts, identify gas-intensive code paths and propose optimizations.
- **Contract Efficiency**: Focus on the efficiency of algorithms particular to Taiko, such as those used in the Omnipool for asset rebalancing and price calculation.
- **Against DeFi Standards**: Compare Taiko's approaches, particularly the Omnipool, with industry standards and leading protocols in similar spaces.
- **Innovations and Distinctions**: Highlight and evaluate Taiko's novel contributions to the DeFi space, ensuring they contribute positively to security, user experience, and financial fairness.
- **Critical Findings on Taiko's Uniqueness**: Summarize findings with a focus on aspects unique to the Taiko Protocol, providing a clear picture of its standing in the DeFi space.
- **Targeted Recommendations**: Offer recommendations that respect Taiko's unique mechanisms and market position, ensuring advice is actionable and directly relevant.
- **Enhancement Proposals**: Propose enhancements based on Taiko's long-term vision and specific technical and financial frameworks, fostering innovation while ensuring security and stability.

Codebase Quality
Based on the contracts and discussions related to the Taiko protocol, here's an in-depth code quality analysis.

The Taiko protocol employs a clear modular architecture, dividing functionalities into distinct contracts like `LibProving`, `LibVerifying`, bridged tokens (`ERC20`, `ERC721`, `ERC1155`), and vaults. This division enhances the clarity and maintainability of the code. Libraries and modular components, such as `SgxVerifier`, are used strategically to encapsulate complex logic, ensuring scalability and reducing gas costs.
**Suggestions**: Continue emphasizing modularity and separation of concerns in future developments. Consider abstracting common patterns into libraries for reuse across contracts.

Taiko's contracts are largely static, with minimal emphasis on upgradeability patterns. While this approach might contribute to security, it could limit flexibility and adaptability for protocol upgrades or bug fixes.

**Suggestions**: Explore and possibly integrate upgradeable contract patterns, such as Proxy or Beacon, ensuring that upgrade governance is transparent and secure.

Taiko includes mechanisms like `SgxVerifier` for decentralized verification, indicating steps toward community-driven governance. However, detailed mechanisms or DAO structures for wider community participation and governance might not be fully fleshed out.

**Suggestions**: Develop and document clear governance models enabling token-holder proposals, voting, and implementation processes. Enhance community interaction tools and platforms.

Functions throughout the Taiko codebase implement rigorous condition checks and validate inputs effectively, minimizing the risk of erroneous or malicious transactions.

**Suggestions**: Ensure comprehensive input validation, particularly for cross-contract calls and interactions with external tokens and data. Consider edge cases and adversarial inputs consistently.

Taiko contracts are well-documented, and each serves a clearly defined role within the ecosystem. Usage of Solidity best practices and adherence to security standards indicates a strong foundation for future reliability.

**Suggestions**: Introduce mechanisms to reduce centralized control, such as multi-sig or timelocked admin actions. This would enhance trust and decentralization.

Extensive commenting throughout the Taiko codebase facilitates understanding and auditability. Complex operations, especially in `LibVerifying` and the cryptographic parts, are well explained.

**Suggestions**: Continue maintaining high-quality comments, especially when introducing new complex mechanisms or when modifying existing ones. Ensure comments remain updated with code changes.

The codebase demonstrates consistent formatting and structuring, adhering to Solidity best practices, which improves readability and code management.

**Suggestions**: Where possible, further refine code modularization. Document and enforce coding standards for future contributions.

The protocol's innovative approach, particularly in integrating cross-chain functionalities and SGX verification mechanisms, stands out. The implementation of bridged assets and vault strategies showcases a forward-thinking approach to DeFi solutions.
Inline code documentation is thorough, aiding immediate comprehension. However, external documentation might lag behind the latest codebase developments.
In evaluating the 79% test coverage for Taiko, it's essential to consider the following aspects:

- **Critical Paths Coverage**: Examine whether the tests adequately cover the critical paths of the Taiko protocol, especially core functionalities like transaction processing, smart contract interactions, and security mechanisms. High-risk areas should ideally have near-100% coverage to ensure stability and security.
- **Integration and End-to-End Tests**: Check whether the 79% coverage mainly comes from unit tests, or whether it also includes integration and end-to-end tests. Integration tests are crucial for protocols like Taiko, where different components and smart contracts must interact correctly.
- **Areas for Improvement**: Based on the uncovered areas and critical functionalities, identify where adding tests could be most beneficial. Focus on parts of the code that are prone to changes, have had historical bugs, or involve complex logic.
- **Coverage Goals**: Set realistic goals for improving test coverage. While 100% coverage is often impractical, identify key areas where increased coverage could reduce risk and improve confidence in the code.
- **Algorithmic Stablecoins**: Explore the integration or development of algorithmic stablecoins to offer users stable value transfer mechanisms within the Taiko ecosystem.
- **Interoperable Token Standards**: Explore and adopt interoperable token standards that facilitate cross-chain interactions and improve compatibility with other protocols and blockchain ecosystems. This can enhance liquidity and user reach.
- **Layer 2 and Cross-Chain Solutions**: Explore and integrate Layer 2 solutions or cross-chain interoperability features to improve transaction speeds, reduce costs, and expand the user base. This could involve leveraging existing bridges, rollups, or custom solutions tailored to Taiko's needs.
- **Dynamic Fee Structure**: Implement a dynamic fee structure based on network congestion, transaction size, or market conditions. This could help optimize costs for users while ensuring the protocol remains financially sustainable. Additionally, consider introducing fee discounts or rebates for frequent users or large liquidity providers.
- **Protocol-Owned Liquidity**: Explore the concept of protocol-owned liquidity to reduce dependency on external liquidity providers and improve the protocol's self-sustainability and control over its market operations.
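The dynamic-fee recommendation can be illustrated with an EIP-1559-style base-fee update, where the fee rises when blocks are fuller than a gas target and falls when they are emptier. The sketch below is a minimal Python model of that idea — the function name and constants are hypothetical, not Taiko's actual `Lib1559Math` logic.

```python
def next_base_fee(base_fee: int, gas_used: int, gas_target: int,
                  max_change_denominator: int = 8) -> int:
    """EIP-1559-style update: adjust the base fee proportionally to how far
    the last block's gas usage deviated from the target."""
    if gas_used == gas_target:
        return base_fee
    delta = abs(gas_used - gas_target)
    adjustment = base_fee * delta // (gas_target * max_change_denominator)
    if gas_used > gas_target:
        return base_fee + max(adjustment, 1)   # congestion: fee goes up
    return max(base_fee - adjustment, 0)       # slack: fee goes down

fee = 1_000_000_000  # 1 gwei
# A block 50% over target raises the fee; one 50% under target lowers it.
print(next_base_fee(fee, gas_used=15_000_000, gas_target=10_000_000))
print(next_base_fee(fee, gas_used=5_000_000, gas_target=10_000_000))
```

Because the adjustment is bounded by `1 / max_change_denominator` per block, the fee tracks congestion without abrupt jumps — the same design choice EIP-1559 makes on Ethereum L1.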
- **Merkle Proof Verification** (`LibVerifying.sol`): Incorrect implementation or manipulation of Merkle tree proofs could result in invalid transactions being accepted or valid transactions being rejected.
- **Block Production and Verification** (`LibVerifying.sol`): Vulnerabilities in block production and verification could lead to blockchain integrity issues, such as double-spending or block withholding attacks.
- **Token Bridging and Minting** (`BridgedERC20.sol`, `BridgedERC721.sol`, `BridgedERC1155.sol`): Exploitation of the token bridging logic may lead to unauthorized minting or burning of tokens, impacting asset integrity across chains.
- **Liquidity Management** (`TaikoL2.sol`, `BaseVault.sol`): Insufficient validation and control in liquidity addition or removal could lead to market manipulation or pool imbalances.
- **Smart Contract Upgradeability and Governance** (`EssentialContract.sol`): Centralized control or flawed governance mechanisms could lead to unauthorized protocol changes or exploitation.
- (`Lib1559Math.sol`): Dependence on external oracles for price feeds may lead to price manipulation or oracle failure, impacting system operations.
- **Cross-Chain Communication and Security** (`Bridge.sol`, `BaseVault.sol`): Inadequate security in cross-chain communication could lead to replay attacks or message forgery.
- **Asset Decimal Handling and Conversion** (`LibMath.sol`, `Lib1559Math.sol`): Incorrect handling of asset decimals could lead to rounding errors or imbalances in asset valuation.
- **Token Management and Security** (`BridgedERC20.sol`, `BridgedERC721.sol`, `BridgedERC1155.sol`): Flaws in token management functions (e.g., mint, burn) could result in unauthorized token creation or destruction.
Time spent: 50 hours
#0 - c4-pre-sort
2024-04-01T05:32:03Z
minhquanym marked the issue as high quality report
#1 - dantaik
2024-04-04T08:49:04Z
Thank you for the analysis, but there seems no bugs identified in the report.
#2 - c4-sponsor
2024-04-05T07:38:13Z
dantaik (sponsor) acknowledged
#3 - c4-judge
2024-04-10T10:02:06Z
0xean marked the issue as grade-a