Taiko - cheatc0d3's results

A based rollup -- inspired, secured, and sequenced by Ethereum.

General Information

Platform: Code4rena

Start Date: 04/03/2024

Pot Size: $140,000 USDC

Total HM: 19

Participants: 69

Period: 21 days

Judge: 0xean

Total Solo HM: 4

Id: 343

League: ETH

Taiko

Findings Distribution

Researcher Performance

Rank: 47/69

Findings: 3

Award: $119.29

🌟 Selected for report: 0

🚀 Solo Findings: 0

Awards

33.5408 USDC - $33.54

Labels

bug
grade-b
QA (Quality Assurance)
Q-04

External Links

QA Report: Taiko

L-01 Use MerkleProofUpgradeable.sol instead of MerkleProof.sol

Upgradeable contracts allow code to change after deployment, which can be essential for fixing bugs or adding features without redeploying and migrating state. Since the airdrop contracts follow the project's upgradeable pattern, importing MerkleProofUpgradeable.sol keeps the library choice consistent with that design.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/airdrop/MerkleClaimable.sol#L4

import "@openzeppelin/contracts/utils/cryptography/MerkleProof.sol";

L-02 Implement an Emergency Stop Mechanism for Token Withdrawals

An emergency stop mechanism for sensitive operations such as token withdrawals lets the protocol halt outflows when a bug or compromise is discovered, strengthening the security, reliability, and trustworthiness of the contract and the overall system.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L2/TaikoL2.sol#L163C5-L178C6

function withdraw(
        address _token,
        address _to
    )
        external
        onlyFromOwnerOrNamed("withdrawer")
        nonReentrant
        whenNotPaused
    {
        if (_to == address(0)) revert L2_INVALID_PARAM();
        if (_token == address(0)) {
            _to.sendEther(address(this).balance);
        } else {
            IERC20(_token).safeTransfer(_to, IERC20(_token).balanceOf(address(this)));
        }
    }

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/TimelockTokenPool.sol#L168

function withdraw(address _to, bytes memory _sig) external {
        if (_to == address(0)) revert INVALID_PARAM();
        bytes32 hash = keccak256(abi.encodePacked("Withdraw unlocked Taiko token to: ", _to));
        address recipient = ECDSA.recover(hash, _sig);
        _withdraw(recipient, _to);
    }
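A hedged sketch of one possible emergency-stop pattern, applied here to TimelockTokenPool.withdraw. It assumes the contract can inherit OpenZeppelin's Pausable (or the project's own pausing base), and pauseWithdrawals/unpauseWithdrawals are hypothetical owner-only functions:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/security/Pausable.sol";
import "@openzeppelin/contracts/access/Ownable.sol";

// Sketch only: the real contract would keep its existing signature-recovery logic.
abstract contract TimelockTokenPoolPausable is Pausable, Ownable {
    error INVALID_PARAM();

    // Owner can halt withdrawals in an emergency...
    function pauseWithdrawals() external onlyOwner {
        _pause();
    }

    // ...and resume them once the issue is resolved.
    function unpauseWithdrawals() external onlyOwner {
        _unpause();
    }

    function withdraw(address _to, bytes memory _sig) external whenNotPaused {
        if (_to == address(0)) revert INVALID_PARAM();
        // existing ECDSA.recover and _withdraw logic would follow here
    }
}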

L-03 Use Indexed Parameters to Improve Event Filtering

Marking parameters as indexed allows them to be efficiently filtered by Ethereum nodes. This is particularly useful for off-chain applications or services that listen for specific events. You might want to index parameters that are commonly queried or filtered, such as the app address.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/signal/SignalService.sol#L268

emit SignalSent(_app, _signal, slot_, _value);

Change to:

event SignalSent(
    address indexed app,
    bytes32 indexed signal,
    bytes32 slot,
    bytes32 value
);

L-04 Division Before Multiplication Leads to Precision Loss

To reduce the risk of precision loss in calculations involving token amounts and costs, consider adjusting the order of operations to perform multiplications before divisions whenever possible and appropriate.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/TimelockTokenPool.sol#L197

 uint128 _amountUnlocked = amountUnlocked / 1e18; // divide first
        costToWithdraw = _amountUnlocked * r.grant.costPerToken - r.costPaid;
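A hedged sketch of the reordering, written as a hypothetical helper. It multiplies before dividing using a 256-bit intermediate, and assumes (as the original code does) that the final cost fits in uint128:

function _calcCostToWithdraw(
    uint128 amountUnlocked,
    uint128 costPerToken,
    uint128 costPaid
)
    private
    pure
    returns (uint128)
{
    // Multiply first, divide last; the 256-bit intermediate avoids overflow
    // and keeps sub-1e18 precision until the final division.
    uint256 fullCost = (uint256(amountUnlocked) * costPerToken) / 1e18;
    // Assumes fullCost fits in uint128, as in the original implementation.
    return uint128(fullCost) - costPaid;
}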

L-05 Events Emitted After Changes Should Include Old Values for Transparency

The GuardiansUpdated event should include the previous set of guardian addresses alongside the new one.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/L1/provers/Guardians.sol#L37
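A short sketch of what such an event could look like; the extra oldGuardians parameter is hypothetical and would need to be captured before the current set is deleted:

// Sketch: carry both the outgoing and incoming guardian sets.
event GuardiansUpdated(uint32 version, address[] oldGuardians, address[] newGuardians);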

L-06 Users Unable to Claim Airdrops due to Faulty Configurations

The setConfig and __MerkleClaimable_init functions do not explicitly validate that _claimStart is before _claimEnd. This allows a configuration in which the claim start time is after the claim end time, making it impossible to claim the airdrop.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/airdrop/MerkleClaimable.sol#L45

function setConfig(
        uint64 _claimStart,
        uint64 _claimEnd,
        bytes32 _merkleRoot
    )
        external
        onlyOwner
    {
        _setConfig(_claimStart, _claimEnd, _merkleRoot);
    }
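A minimal sketch of the missing validation, assuming it is added wherever _setConfig writes the values and that the existing claimStart/claimEnd/merkleRoot storage variables are used; the function and error names are hypothetical:

error INVALID_CLAIM_WINDOW();

function _setConfigChecked(uint64 _claimStart, uint64 _claimEnd, bytes32 _merkleRoot) private {
    // Reject windows that can never be claimable.
    if (_claimStart >= _claimEnd) revert INVALID_CLAIM_WINDOW();
    claimStart = _claimStart;
    claimEnd = _claimEnd;
    merkleRoot = _merkleRoot;
}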

L-07 Merkle Root Updates Could Invalidate Valid Claims

The contract allows updating the Merkle root, claim start, and claim end through setConfig, but it doesn’t handle potential edge cases related to ongoing claims. For instance, if the Merkle root is updated during an active claim period, it could invalidate previously valid claims or enable new ones unexpectedly.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/airdrop/MerkleClaimable.sol#L45

function setConfig(
        uint64 _claimStart,
        uint64 _claimEnd,
        bytes32 _merkleRoot
    )
        external
        onlyOwner
    {
        _setConfig(_claimStart, _claimEnd, _merkleRoot);
    }
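One possible mitigation, sketched under the assumption that configuration changes should only happen outside an active claim window; the modifier and error names are hypothetical:

error CONFIG_CHANGE_DURING_CLAIM();

modifier outsideClaimWindow() {
    // Disallow reconfiguration while claims are ongoing.
    if (block.timestamp >= claimStart && block.timestamp <= claimEnd) {
        revert CONFIG_CHANGE_DURING_CLAIM();
    }
    _;
}

function setConfig(
    uint64 _claimStart,
    uint64 _claimEnd,
    bytes32 _merkleRoot
)
    external
    onlyOwner
    outsideClaimWindow
{
    _setConfig(_claimStart, _claimEnd, _merkleRoot);
}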

L-08 Denial of Service Risks when verifying Airdrop Claims due to Maliciously Crafted Proofs

A Merkle proof allows one to prove that a specific piece of data is included in a set without having to provide the entire dataset. For a Merkle proof to be verified, the smart contract needs to perform a series of hash computations. Starting from the specific leaf node, the contract hashes it together with a provided piece of the proof, then moves up the tree, hashing combined nodes, until it reaches the top. If the computed hash matches the known Merkle root, the proof is valid.

The complexity of verifying a Merkle proof depends on the number of hash operations that need to be processed to reach the root. In a balanced binary tree, this number is logarithmic in the number of leaves or data pieces. However, certain factors can increase the complexity. A deeper tree requires more hash operations to reach the root, increasing gas costs. While most Merkle trees are binary and balanced, it's possible to construct them differently, potentially leading to less efficient proofs.

Proof of Concept

A bad actor could, for instance, influence the structure of the proofs being generated by crafting proofs that are valid but require more hash operations than necessary.

An attacker could flood the contract with claims that are expensive to process, consuming block gas and making it costly or impossible for legitimate users to submit their own claims.

Github:

function claim(address user, uint256 amount, bytes32[] calldata proof) external nonReentrant {
        // Check if this can be claimed
        _verifyClaim(abi.encode(user, amount), proof);

        // Assign the tokens
        claimedAmount[user] += amount;
    }

Github:

function _verifyClaim(bytes memory data, bytes32[] calldata proof) internal ongoingClaim {
        bytes32 hash = keccak256(abi.encode("CLAIM_TAIKO_AIRDROP", data));

        if (isClaimed[hash]) revert CLAIMED_ALREADY();
        if (!_verifyMerkleProof(proof, merkleRoot, hash)) revert INVALID_PROOF();

        isClaimed[hash] = true;
        emit Claimed(hash);
    }

Github:

function claim(
        address user,
        uint256[] calldata tokenIds,
        bytes32[] calldata proof
    )
        external
        nonReentrant
    {
        // Check if this can be claimed
        _verifyClaim(abi.encode(user, tokenIds), proof);

        // Transfer the tokens
        for (uint256 i; i < tokenIds.length; ++i) {
            IERC721(token).safeTransferFrom(vault, user, tokenIds[i]);
        }
    }

Explicitly impose limits on the proof size or depth that can be processed, rejecting claims with proofs exceeding these limits.
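A hedged sketch of such a limit; MAX_PROOF_LENGTH is a hypothetical constant that would be sized to the expected tree depth:

uint256 private constant MAX_PROOF_LENGTH = 32; // hypothetical bound on tree depth

error PROOF_TOO_LONG();

function _verifyClaim(bytes memory data, bytes32[] calldata proof) internal ongoingClaim {
    // Reject proofs deeper than any legitimately generated proof can be.
    if (proof.length > MAX_PROOF_LENGTH) revert PROOF_TOO_LONG();

    bytes32 hash = keccak256(abi.encode("CLAIM_TAIKO_AIRDROP", data));
    if (isClaimed[hash]) revert CLAIMED_ALREADY();
    if (!_verifyMerkleProof(proof, merkleRoot, hash)) revert INVALID_PROOF();

    isClaimed[hash] = true;
    emit Claimed(hash);
}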

L-09 Add Pausing Mechanisms to Airdrop Claims

The contract lacks mechanisms to pause or stop claims in case of an emergency or discovery of a critical bug. Incorporating such functionality could enhance control and safety.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/airdrop/ERC20Airdrop2.sol#L78

function claim(address user, uint256 amount, bytes32[] calldata proof) external nonReentrant {
        // Check if this can be claimed
        _verifyClaim(abi.encode(user, amount), proof);

        // Assign the tokens
        claimedAmount[user] += amount;
    }

L-10 Rate Limit Missing on Airdrop Claims

The current implementation does not limit the frequency of claims. In specific scenarios, it might be beneficial to limit claims to prevent abuse or manage resource allocation more effectively.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/airdrop/ERC20Airdrop2.sol#L78

function claim(address user, uint256 amount, bytes32[] calldata proof) external nonReentrant {
        // Check if this can be claimed
        _verifyClaim(abi.encode(user, amount), proof);

        // Assign the tokens
        claimedAmount[user] += amount;
    }
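If a cooldown were desired, it could look roughly like the sketch below; the lastClaimAt mapping, CLAIM_COOLDOWN constant, and CLAIM_TOO_SOON error are hypothetical additions:

uint256 private constant CLAIM_COOLDOWN = 1 days; // hypothetical cooldown
mapping(address => uint256) private lastClaimAt;

error CLAIM_TOO_SOON();

function claim(address user, uint256 amount, bytes32[] calldata proof) external nonReentrant {
    // Enforce a minimum interval between claims for the same user.
    if (block.timestamp < lastClaimAt[user] + CLAIM_COOLDOWN) revert CLAIM_TOO_SOON();
    lastClaimAt[user] = block.timestamp;

    _verifyClaim(abi.encode(user, amount), proof);
    claimedAmount[user] += amount;
}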

L-11 Implement Batch Claiming Functions

Introduce a function that allows multiple claims to be processed in a single transaction. This batch processing should have a limit on the number of claims per transaction to prevent excessive gas costs and potential denial of service attacks.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/airdrop/ERC20Airdrop2.sol#L78

function claim(address user, uint256 amount, bytes32[] calldata proof) external nonReentrant {
        // Check if this can be claimed
        _verifyClaim(abi.encode(user, amount), proof);

        // Assign the tokens
        claimedAmount[user] += amount;
    }
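A sketch of what a capped batch entry point could look like on top of the existing single-claim flow; claimBatch, MAX_BATCH, and the error names are hypothetical:

uint256 private constant MAX_BATCH = 50; // hypothetical cap per transaction

error BATCH_TOO_LARGE();
error BATCH_LENGTH_MISMATCH();

function claimBatch(
    address[] calldata users,
    uint256[] calldata amounts,
    bytes32[][] calldata proofs
)
    external
    nonReentrant
{
    if (users.length != amounts.length || users.length != proofs.length) {
        revert BATCH_LENGTH_MISMATCH();
    }
    // Cap the batch size to keep gas usage bounded.
    if (users.length > MAX_BATCH) revert BATCH_TOO_LARGE();

    for (uint256 i; i < users.length; ++i) {
        _verifyClaim(abi.encode(users[i], amounts[i]), proofs[i]);
        claimedAmount[users[i]] += amounts[i];
    }
}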

L-12 Users are unable to claim Airdrops

Eligible users are unable to access and claim their airdropped tokens beyond their initial claim, which could lead to dissatisfaction and diminish trust in the project. This is especially impactful if users encounter issues or errors during their first claim attempt that prevent them from successfully claiming.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/airdrop/ERC20Airdrop2.sol#L78

function claim(address user, uint256 amount, bytes32[] calldata proof) external nonReentrant {
        // Check if this can be claimed
        _verifyClaim(abi.encode(user, amount), proof);

        // Assign the tokens
        claimedAmount[user] += amount;
    }

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/airdrop/MerkleClaimable.sol#L67

function _verifyClaim(bytes memory data, bytes32[] calldata proof) internal ongoingClaim {
        bytes32 hash = keccak256(abi.encode("CLAIM_TAIKO_AIRDROP", data));

        if (isClaimed[hash]) revert CLAIMED_ALREADY();
        if (!_verifyMerkleProof(proof, merkleRoot, hash)) revert INVALID_PROOF();

        isClaimed[hash] = true;
        emit Claimed(hash);
    }
  1. First Claim:

    • The user calls the claim function with their address and the amount of 50 tokens.
    • Inside the claim function, the _verifyClaim function is called to verify the claim.
    • The _verifyClaim function generates a hash based on the provided data and checks if it has already been claimed. Since this is the first claim, it hasn't been claimed before, so it proceeds.
    • The Merkle proof is then verified. If the proof is valid, the claim is processed, and the isClaimed mapping is updated to mark this claim as already claimed.
  2. Second Claim:

    • The same user attempts to claim 50 tokens again, using the same address.
    • Inside the claim function, the _verifyClaim function is called again.
    • Now, when _verifyClaim checks whether the claim has already been made, it finds that the hash was marked as claimed during the first attempt, so execution reverts with the error CLAIMED_ALREADY.

L-13 OpenZeppelin Version Used Does Not Support Compact Signatures

Signatures with a length of 64 bytes (EIP-2098 compact signatures) are no longer supported by ECDSA.recover(bytes32,bytes) in OpenZeppelin versions above 4.7.3, so functions invoking this method will revert when given a 64-byte signature. Support for compact signatures remains available through recover(bytes32,bytes32,bytes32).

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/libs/LibAddress.sol#L61

    "@openzeppelin/contracts": "4.8.2",
    "@openzeppelin/contracts-upgradeable": "4.8.2",
function isValidSignature(
        address _addr,
        bytes32 _hash,
        bytes memory _sig
    )
        internal
        view
        returns (bool)
    {
        if (Address.isContract(_addr)) {
            return IERC1271(_addr).isValidSignature(_hash, _sig) == _EIP1271_MAGICVALUE;
        } else {
            return ECDSA.recover(_hash, _sig) == _addr;
        }
    }

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/TimelockTokenPool.sol#L168

function withdraw(address _to, bytes memory _sig) external {
        if (_to == address(0)) revert INVALID_PARAM();
        bytes32 hash = keccak256(abi.encodePacked("Withdraw unlocked Taiko token to: ", _to));
        address recipient = ECDSA.recover(hash, _sig);
        _withdraw(recipient, _to);
    }

L-14 Missing Event Emissions For Critical Functions

Emitting an event after adding or removing a certificate serial number, or after updating the trusted MRSIGNER/MRENCLAVE values, would provide a clear record of the action, allowing easier tracking and auditing of changes to these lists. This is useful for compliance purposes and for detecting any unauthorized modifications.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol#L88

function removeRevokedCertSerialNum(
        uint256 index,
        bytes[] calldata serialNumBatch
    )
        external
        onlyOwner
    {
        for (uint256 i; i < serialNumBatch.length; ++i) {
            if (!_serialNumIsRevoked[index][serialNumBatch[i]]) {
                continue;
            }
            delete _serialNumIsRevoked[index][serialNumBatch[i]];
        }
    }

Github:

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol#L73

function addRevokedCertSerialNum(
        uint256 index,
        bytes[] calldata serialNumBatch
    )
        external
        onlyOwner
    {
        for (uint256 i; i < serialNumBatch.length; ++i) {
            if (_serialNumIsRevoked[index][serialNumBatch[i]]) {
                continue;
            }
            _serialNumIsRevoked[index][serialNumBatch[i]] = true;
        }
    }
    

Github:

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol#L69

function setMrEnclave(bytes32 _mrEnclave, bool _trusted) external onlyOwner {
        _trustedUserMrEnclave[_mrEnclave] = _trusted;
    }
    

Github:

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol#L65

function setMrSigner(bytes32 _mrSigner, bool _trusted) external onlyOwner {
        _trustedUserMrSigner[_mrSigner] = _trusted;
    }
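A minimal sketch of the kind of events that could be added to the two setters above; the event names are hypothetical:

event MrSignerUpdated(bytes32 indexed mrSigner, bool trusted);
event MrEnclaveUpdated(bytes32 indexed mrEnclave, bool trusted);

function setMrSigner(bytes32 _mrSigner, bool _trusted) external onlyOwner {
    _trustedUserMrSigner[_mrSigner] = _trusted;
    // Record the change on-chain for auditing.
    emit MrSignerUpdated(_mrSigner, _trusted);
}

function setMrEnclave(bytes32 _mrEnclave, bool _trusted) external onlyOwner {
    _trustedUserMrEnclave[_mrEnclave] = _trusted;
    emit MrEnclaveUpdated(_mrEnclave, _trusted);
}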

L-15 Implement Batch Approvals

A batch approval function would allow a guardian to approve several proofs in one transaction. This could be particularly useful in scenarios where a series of block validations or transitions needs to be approved quickly.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/provers/Guardians.sol#L111

function approveBatch(
    TaikoData.BlockMetadata[] calldata _metas,
    TaikoData.Transition[] calldata _trans,
    TaikoData.TierProof[] calldata _proofs
)
    external
    whenNotPaused
    nonReentrant
    returns (bool[] memory approved_)
{
    require(_metas.length == _trans.length && _trans.length == _proofs.length, "Mismatched array lengths");
    approved_ = new bool[](_metas.length);

    for (uint256 i = 0; i < _metas.length; i++) {
        if (_proofs[i].tier != LibTiers.TIER_GUARDIAN) {
            revert INVALID_PROOF();
        }
        bytes32 hash = keccak256(abi.encode(_metas[i], _trans[i]));
        bool approved = approve(_metas[i].id, hash);

        if (approved) {
            deleteApproval(hash);
            ITaikoL1(resolve("taiko", false)).proveBlock(_metas[i].id, abi.encode(_metas[i], _trans[i], _proofs[i]));
            approved_[i] = true;
        }

        emit GuardianApproval(msg.sender, _metas[i].id, _trans[i].blockHash, approved);
    }
    return approved_;
}

L-16 Insufficient validation for EIP1559 Configurations

Setting gasTargetPerL1Block without an upper limit in TaikoL2EIP1559Configurable.setConfigAndExcess could allow configurations that exceed the processing capabilities of the L2 or its validators. This could result in delayed transaction processing, increased latency, or even network congestion, affecting overall performance and reliability.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L2/TaikoL2EIP1559Configurable.sol#L25

function setConfigAndExcess(
        Config memory _newConfig,
        uint64 _newGasExcess
    )
        external
        virtual
        onlyOwner
    {
        if (_newConfig.gasTargetPerL1Block == 0) revert L2_INVALID_CONFIG();
        if (_newConfig.basefeeAdjustmentQuotient == 0) revert L2_INVALID_CONFIG();

        customConfig = _newConfig;
        gasExcess = _newGasExcess;

        emit ConfigAndExcessChanged(_newConfig, _newGasExcess);
    }
    

L-17 Validate gasTargetPerL1Block Against a Maximum in setConfigAndExcess

Adding a requirement check to validate the _newConfig.gasTargetPerL1Block against a predefined MAX_GAS_TARGET ensures that the gas target for each L1 block does not exceed a maximum threshold, potentially preventing inefficient gas usage or other undesirable effects.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L2/TaikoL2EIP1559Configurable.sol#L25

function setConfigAndExcess(
        Config memory _newConfig,
        uint64 _newGasExcess
    )
        external
        virtual
        onlyOwner
    {
        if (_newConfig.gasTargetPerL1Block == 0) revert L2_INVALID_CONFIG();
        if (_newConfig.basefeeAdjustmentQuotient == 0) revert L2_INVALID_CONFIG();

        customConfig = _newConfig;
        gasExcess = _newGasExcess;

        emit ConfigAndExcessChanged(_newConfig, _newGasExcess);
    }

Add this check to the function:

require(_newConfig.gasTargetPerL1Block <= MAX_GAS_TARGET, "Gas target exceeds maximum allowed");

L-18 Missing array length mismatch checks

If the lengths of _targets, _values, and _calldatas do not match, it could result in logical inconsistencies when executing proposals. For example, a mismatch might lead to executing a call to a target with the wrong calldata or transaction value, potentially resulting in failed transactions or unintended side effects. This could disrupt the intended governance actions and undermine the contract's integrity.

An attacker might exploit the lack of validation to craft proposals that are ambiguous or misleading. For instance, by carefully crafting the lengths of these arrays, an attacker could potentially cause the execution of unintended actions.

Validate that array lengths match.

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/L1/gov/TaikoGovernor.sol#L48

function propose(
        address[] memory _targets,
        uint256[] memory _values,
        bytes[] memory _calldatas,
        string memory _description
    )

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/L1/gov/TaikoGovernor.sol#L69

function propose(
        address[] memory _targets,
        uint256[] memory _values,
        string[] memory _signatures,
        bytes[] memory _calldatas,
        string memory _description
    )

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/libs/LibTrieProof.sol#L34C5-L41C6

function verifyMerkleProof(
        bytes32 _rootHash,
        address _addr,
        bytes32 _slot,
        bytes32 _value,
        bytes[] memory _accountProof,
        bytes[] memory _storageProof
    )
function _transferTokens(
        CanonicalNFT memory ctoken,
        address to,
        uint256[] memory tokenIds,
        uint256[] memory amounts
    )  
function _execute(
        uint256 _proposalId,
        address[] memory _targets,
        uint256[] memory _values,
        bytes[] memory _calldatas,
        bytes32 _descriptionHash
    )
function _cancel(
        address[] memory _targets,
        uint256[] memory _values,
        bytes[] memory _calldatas,
        bytes32 _descriptionHash
    )
function mintBatch(
        address _to,
        uint256[] memory _tokenIds,
        uint256[] memory _amounts
    )
function _beforeTokenTransfer(
        address, /*_operator*/
        address, /*_from*/
        address _to,
        uint256[] memory, /*_ids*/
        uint256[] memory, /*_amounts*/
        bytes memory /*_data*/
    )
function onERC1155BatchReceived(
        address,
        address,
        uint256[] calldata,
        uint256[] calldata,
        bytes calldata
    )
function _transferTokens(
        CanonicalNFT memory ctoken,
        address to,
        uint256[] memory tokenIds,
        uint256[] memory amounts
    )
function claim(
        address user,
        uint256[] calldata tokenIds,
        bytes32[] calldata proof
    )
function _isCpuSvnHigherOrGreater(
        uint256[] memory pckCpuSvns,
        uint8[] memory tcbCpuSvns
    )

L-19 Avoid Empty Names when Setting Addresses

In the setAddress function of the AddressManager contract, there is no explicit check to ensure that the _name parameter is not empty. The function updates the address for a given chain ID and name pair without validating the content of _name. This allows for setting an address with an empty name, which could lead to unintended consequences, such as difficulties in retrieving or managing these addresses later on.

Github:

function setAddress(
        uint64 _chainId,
        bytes32 _name,
        address _newAddress
    )
        external
        virtual
        onlyOwner
    {
    
        if (_name == bytes32(0)) revert AM_INVALID_PARAMS(); // Check if _name is empty
        address oldAddress = __addresses[_chainId][_name];
        if (_newAddress == oldAddress) revert AM_INVALID_PARAMS();
        __addresses[_chainId][_name] = _newAddress;
        emit AddressSet(_chainId, _name, _newAddress, oldAddress);
    }

Add a validation step in the setAddress function to ensure that the _name parameter is non-empty, reverting the transaction when it is, as shown in the snippet above.

L-20 Add Timelocks to Sensitive Functions

Adding timelocks to sensitive functions in a smart contract is a powerful security mechanism. It prevents immediate changes to critical parameters, giving stakeholders time to react if an unauthorized or harmful update is attempted.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol#L65

function setMrSigner(bytes32 _mrSigner, bool _trusted) external onlyOwner {
        _trustedUserMrSigner[_mrSigner] = _trusted;
    }

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol#L69

function setMrEnclave(bytes32 _mrEnclave, bool _trusted) external onlyOwner {
        _trustedUserMrEnclave[_mrEnclave] = _trusted;
    }

L-21 Add checks for Empty Arrays

Adding checks for empty arrays is a simple way to avoid unnecessary gas expenditure and potential logical errors. In addRevokedCertSerialNum, a check at the beginning of the function can ensure that the serialNumBatch array is not empty, as sketched below.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol#L73

function addRevokedCertSerialNum(
        uint256 index,
        bytes[] calldata serialNumBatch
    )

L-22 Implement a Two Step Guardian Confirmation Process

Require that newly added guardians confirm their acceptance of the role before being fully integrated into the guardians array. Similarly, implement a mechanism for guardians to voluntarily resign.

This ensures that guardians are willing and aware participants, reducing the risk of inactive or unwilling guardians being added maliciously or by mistake.

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/L1/provers/Guardians.sol#L53

L-23 Enforce Optimal Hard Limits on the Guardian Set to Prevent Malicious DoS Attempts by a Compromised Owner

The setGuardians function leverages the data type's maximum value to impose a limit, serving as an implicit hard cap on the array's size. However, this cap reflects the technical limitations of the data type rather than an explicit design choice about the optimal number of guardians for operational efficiency and gas cost.

The function iterates over the current list of guardians, deleting each one's entry in the guardianIds mapping. If there's a significant number of existing guardians, this loop itself consumes gas for each iteration due to the delete operation, which contributes to the total.

The deletion of the array is a single operation, but it's a state change that affects the entire storage structure of the array. The gas cost depends on the size of the array being cleared.

The combined gas costs of checking conditions, adding new guardians, deleting existing ones, and emitting events can accumulate quickly. While optimizations like gas refunds for deleting storage slots can reduce the net cost, the total required gas can still be substantial.

Attempting to add 255 guardians in one transaction could stress the Ethereum network by consuming a large amount of gas, potentially approaching or exceeding block gas limits and incurring significant transaction costs.

Define a more practical upper limit on the number of guardians based on typical governance needs and operational considerations.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/L1/provers/Guardians.sol#L53

function setGuardians(
        address[] memory _newGuardians,
        uint8 _minGuardians
    )
        external
        onlyOwner
        nonReentrant
    {
        // We need at least MIN_NUM_GUARDIANS and at most 255 guardians (so the approval bits fit in
        // a uint256)
        if (_newGuardians.length < MIN_NUM_GUARDIANS || _newGuardians.length > type(uint8).max) {
            revert INVALID_GUARDIAN_SET();
        }
        // Minimum number of guardians to approve is at least equal or greater than half the
        // guardians (rounded up) and less or equal than the total number of guardians
        if (_minGuardians < (_newGuardians.length + 1) >> 1 || _minGuardians > _newGuardians.length)
        {
            revert INVALID_MIN_GUARDIANS();
        }

        // Delete the current guardians
        for (uint256 i; i < guardians.length; ++i) {
            delete guardianIds[guardians[i]];
        }
        delete guardians;

        // Set the new guardians
        for (uint256 i = 0; i < _newGuardians.length; ++i) {
            address guardian = _newGuardians[i];
            if (guardian == address(0)) revert INVALID_GUARDIAN();
            // This makes sure there are not duplicate addresses
            if (guardianIds[guardian] != 0) revert INVALID_GUARDIAN_SET();

            // Save and index the guardian
            guardians.push(guardian);
            guardianIds[guardian] = guardians.length;
        }

        // Bump the version so previous approvals get invalidated
        ++version;

        minGuardians = _minGuardians;
        emit GuardiansUpdated(version, _newGuardians);
    }

L-24 Missing Checks for Zero Amounts in Some Functions

Without these checks, an attacker might abuse the function, for example by spamming it with zero-amount claims that still consume resources, or by spamming the Claimed event and hindering effective analytics of claims.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/airdrop/ERC20Airdrop.sol#L52

function claimAndDelegate(
        address user,
        uint256 amount,
        bytes32[] calldata proof,
        bytes calldata delegationData
    )
        external
        nonReentrant
    {
        // Check if this can be claimed
        _verifyClaim(abi.encode(user, amount), proof);

        // Transfer the tokens
        IERC20(token).transferFrom(vault, user, amount);

        // Delegate the voting power to delegatee.
        // Note that the signature (v,r,s) may not correspond to the user address,
        // but since the data is provided by Taiko backend, it's not an issue even if
        // client can change the data to call delegateBySig for another user.
        (address delegatee, uint256 nonce, uint256 expiry, uint8 v, bytes32 r, bytes32 s) =
            abi.decode(delegationData, (address, uint256, uint256, uint8, bytes32, bytes32));
        IVotes(token).delegateBySig(delegatee, nonce, expiry, v, r, s);
    }

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/airdrop/ERC20Airdrop2.sol#L78C5-L84C6

function claim(address user, uint256 amount, bytes32[] calldata proof) external nonReentrant {
        // Check if this can be claimed
        _verifyClaim(abi.encode(user, amount), proof);

        // Assign the tokens
        claimedAmount[user] += amount;
    }
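A minimal sketch of the zero-amount guard for the ERC20Airdrop2 claim path; the error name is hypothetical:

error ZERO_AMOUNT(); // hypothetical error

function claim(address user, uint256 amount, bytes32[] calldata proof) external nonReentrant {
    // Skip claims that would change nothing but still emit events and cost gas.
    if (amount == 0) revert ZERO_AMOUNT();

    _verifyClaim(abi.encode(user, amount), proof);
    claimedAmount[user] += amount;
}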

NC-01 Add a Comment to the uint128 amount Struct Member

In the Grant struct (TimelockTokenPool.sol), amount is the only member without an explanatory comment; adding one keeps the struct documentation consistent.

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/team/TimelockTokenPool.sol#L29

NC-02 Function Violating Solidity Naming Conventions

The function name basefee violates the naming convention typically used in Solidity, where function names follow the camelCase style.

Github: https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L2/Lib1559Math.sol#L16

function basefee(
        uint256 _gasExcess,
        uint256 _adjustmentFactor
    )
        internal
        pure
        returns (uint256)
    {
        if (_adjustmentFactor == 0) {
            revert EIP1559_INVALID_PARAMS();
        }

        return _ethQty(_gasExcess, _adjustmentFactor) / LibFixedPointMath.SCALING_FACTOR
            / _adjustmentFactor;
    }

The correct way to name this function, following the camelCase convention, would be baseFee.

#0 - c4-judge

2024-04-10T10:37:01Z

0xean marked the issue as grade-b

Awards

26.7572 USDC - $26.76

Labels

bug
G (Gas Optimization)
grade-b
G-11

External Links

Gas Report: Taiko

G-01 Optimize Structs to Utilize Solidity Struct Packing

Solidity lays out storage in 32-byte (256-bit) slots, and struct packing is an optimization technique that minimizes the storage a struct uses by grouping smaller data types into shared slots, reducing the gas cost of storage operations. In the Grant struct, reordering the fields so that smaller types sit next to each other can potentially fit the data into fewer storage slots.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/TimelockTokenPool.sol#L28

struct Grant {
        uint128 amount;
        // If non-zero, each TKO (1E18) will need some USD stable to purchase.
        uint128 costPerToken;
        // If non-zero, indicates the start time for the recipient to receive
        // tokens, subject to an unlocking schedule.
        uint64 grantStart;
        // If non-zero, indicates the time after which the token to be received
        // will be actually non-zero
        uint64 grantCliff;
        // If non-zero, specifies the total seconds required for the recipient
        // to fully own all granted tokens.
        uint32 grantPeriod;
        // If non-zero, indicates the start time for the recipient to unlock
        // tokens.
        uint64 unlockStart;
        // If non-zero, indicates the time after which the unlock will be
        // actually non-zero
        uint64 unlockCliff;
        // If non-zero, specifies the total seconds required for the recipient
        // to fully unlock all owned tokens.
        uint32 unlockPeriod;
    }
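The general idea is illustrated below with a generic, hypothetical struct rather than Grant itself; whether a particular reordering of Grant actually reduces its slot count should be confirmed against the compiler's storage layout output.

// Generic illustration: the same members, two orderings.

// Unpacked: uint128 / uint256 / uint128 occupies three slots because the
// uint256 in the middle forces each uint128 into its own slot.
struct Unpacked {
    uint128 a;
    uint256 b;
    uint128 c;
}

// Packed: grouping the two uint128 values lets them share one slot,
// so the struct fits in two slots instead of three.
struct Packed {
    uint128 a;
    uint128 c;
    uint256 b;
}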

G-02 Data Types Smaller than 32 Bytes Can Be Expensive in Structs in Some Cases

In this case it may be cheaper for the struct's value-type members to be uint256, since the EVM operates on 32-byte words and smaller types can require additional masking operations when read or written.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/automata-attestation/lib/TCBInfoStruct.sol#L13C5-L17C6

struct TCBLevelObj {
        uint256 pcesvn;
        uint8[] sgxTcbCompSvnArr;
        TCBStatus status;
    }

G-03 Inline numGuardians() function to save gas

When numGuardians() is inlined, every call site is replaced directly with guardians.length, saving the overhead of the function call.

Github:

function numGuardians() public view returns (uint256) {
        return guardians.length;
    }

G-04 Fail Fast Approach in _resolve function can save gas

Implement a chain ID validation within the getAddress function to indirectly benefit the _resolve function by preventing unnecessary operations. If a chain ID is unsupported, the system can halt the resolution process early, saving computational resources and reducing transaction costs associated with on-chain operations.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/common/AddressResolver.sol#L72C5-L88C6

function _resolve(
        uint64 _chainId,
        bytes32 _name,
        bool _allowZeroAddress
    )
        private
        view
        returns (address payable addr_)
    {
        if (addressManager == address(0)) revert RESOLVER_INVALID_MANAGER();

        addr_ = payable(IAddressManager(addressManager).getAddress(_chainId, _name));

        if (!_allowZeroAddress && addr_ == address(0)) {
            revert RESOLVER_ZERO_ADDR(_chainId, _name);
        }
    }

Github:

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/common/AddressManager.sol#L54

function getAddress(uint64 _chainId, bytes32 _name) public view override returns (address) {
        return __addresses[_chainId][_name];
    }

G-05 Use Short-Circuiting in Return Statement of _verifyCertChain to Save Gas

The final return statement checks all flags (!certRevoked && certNotExpired && verified && certChainCanBeTrusted). If the function is structured so that it only proceeds when all conditions are met, some of these checks are redundant. If the logic instead exits at the first failure, simplifying this condition can save some gas.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/main/packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol#L248

return !certRevoked && certNotExpired && verified && certChainCanBeTrusted;

To address the issue of lack of short-circuiting in the return statement and potentially redundant checks, we can modify the function to immediately return false upon encountering any condition that invalidates the certificate chain. Since the loop breaks on the first failure condition, there's no need to check all conditions again at the end. Instead, you can return true at the end of the function if none of the break conditions are met, indicating that all certificates in the chain are valid, not revoked, not expired, and the chain can be trusted. Here's how you could restructure the function:

function _verifyCertChain(IPEMCertChainLib.ECSha256Certificate[] memory certs)
    private
    view
    returns (bool)
{
    uint256 n = certs.length;

    for (uint256 i = 0; i < n; ++i) {
        IPEMCertChainLib.ECSha256Certificate memory issuer;
        if (i == n - 1) {
            issuer = certs[i];
        } else {
            issuer = certs[i + 1];
        }

        // Check if the certificate is revoked
        if ((i == n - 2 && _serialNumIsRevoked[uint256(IPEMCertChainLib.CRL.ROOT)][certs[i].serialNumber])
            || (certs[i].isPck && _serialNumIsRevoked[uint256(IPEMCertChainLib.CRL.PCK)][certs[i].serialNumber])) {
            return false; // Certificate is revoked
        }

        // Check if the certificate is expired
        if (block.timestamp <= certs[i].notBefore || block.timestamp >= certs[i].notAfter) {
            return false; // Certificate is expired
        }

        // Verify the certificate's signature
        bool verified = sigVerifyLib.verifyES256Signature(
            certs[i].tbsCertificate, certs[i].signature, issuer.pubKey
        );
        if (!verified) {
            return false; // Signature verification failed
        }

        // Check if the issuer's public key hash matches the trusted root CA's public key hash
        if (i == n - 1 && keccak256(issuer.pubKey) != ROOTCA_PUBKEY_HASH) {
            return false; // The chain cannot be trusted
        }
    }

    // If all checks passed, the certificate chain can be trusted
    return true;
}

This revised version simplifies the function by removing the boolean flags (certRevoked, certNotExpired, verified, certChainCanBeTrusted) and directly returning false upon detecting any issue with the certificate chain. It concludes by returning true only if the function has not exited by any of the previous conditions, implying that the certificate chain is valid and can be trusted. This change makes the function more gas-efficient by avoiding unnecessary checks after a failure condition has already determined the outcome.

G-06 Avoid No-op in claimAndDelegate function to save gas

If the amount is zero, the function will still execute, consuming gas without performing any meaningful action. This results in a waste of resources for the user calling the function.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/team/airdrop/ERC20Airdrop.sol#L52

function claimAndDelegate(
        address user,
        uint256 amount,
        bytes32[] calldata proof,
        bytes calldata delegationData
    )

G-07 Avoid Proceeding with proof verification if empty to save on gas

Calling the function with an empty proof array, when the context expects a non-empty proof, results in unnecessary gas consumption for a verification that is guaranteed to fail against any well-formed Merkle tree with more than one leaf.

Github:

function _verifyMerkleProof(
    bytes32[] calldata _proof,
    bytes32 _merkleRoot,
    bytes32 _value
)
    internal
    pure
    virtual
    returns (bool)
{
    // Check for non-empty proof
    require(_proof.length > 0, "Merkle proof must not be empty");

    return MerkleProof.verify(_proof, _merkleRoot, _value);
}

G-08 Use abi.encodePacked instead of abi.encode to save gas in some cases

When using abi.encodePacked instead of abi.encode, you can save some gas because abi.encodePacked removes all padding while encoding the data.

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/provers/GuardianProver.sol#L46

 if (_proof.tier != LibTiers.TIER_GUARDIAN) revert INVALID_PROOF();
        bytes32 hash = keccak256(abi.encode(_meta, _tran));

Github:

https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/signal/SignalService.sol#L186

return keccak256(abi.encode(_chainId, _kind, _blockId));
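For these two call sites the arguments are fixed-size, so abi.encodePacked does not introduce the hash-collision ambiguity it can cause with adjacent dynamic types; note that any off-chain or cross-contract consumer recomputing the hash would need to switch encodings as well. A hedged sketch of the change:

// Sketch: same inputs, tighter encoding (safe here because _chainId,
// _kind and _blockId are all fixed-size types).
return keccak256(abi.encodePacked(_chainId, _kind, _blockId));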

#0 - c4-judge

2024-04-10T10:09:36Z

0xean marked the issue as grade-b

Awards

58.9933 USDC - $58.99

Labels

analysis-advanced
grade-b
A-03

External Links


Taiko Security Analysis Report

1. Overview

Taiko, a Type 1 (Ethereum-equivalent) ZK-EVM, aims for maximum compatibility with Ethereum, providing a seamless experience for developers. Its design prioritizes decentralization, permissionlessness, and security, allowing anyone to participate as nodes, proposers, and provers. Taiko's code is open source, fostering a community-driven development process.

From a security perspective, Taiko emphasizes several core principles to maintain its robustness and reliability. The project places security at the forefront, ensuring that all security assumptions are directly or indirectly enforced by Ethereum itself. This includes maintaining a minimal and robust protocol design that avoids unnecessary changes that could jeopardize Ethereum-equivalence. It also ensures decentralization by having decentralized proposers and provers, so that no single party can control transaction ordering or be solely responsible for proving blocks, and the system operates reliably even in adversarial conditions. Moreover, Taiko is committed to permissionlessness, enabling anyone to join or leave the network at any time without causing significant disturbance. By adhering to Ethereum-equivalence, Taiko makes no changes to how Ethereum clients execute bytecode and store data, staying aligned with Ethereum's upgrades and contributing to Ethereum L1 improvements.

A detailed code review by Token Metrics reveals Taiko as a highly innovative project. It stands out for its approach to providing a ZK-EVM compatible blockchain that enables Ethereum smart contract code to run without modifications. This innovation addresses the limitations of current ZK-Rollups that are mostly application-specific. Taiko's architecture is robust, supporting the core principles of security, decentralization, and permissionlessness. The code quality is rated highly for its comprehensiveness, maintainability, and test coverage. The project's roadmap indicates a clear path to the mainnet launch expected in early 2024, with several testnets and upgrades planned in the interim. The team behind Taiko comprises active and experienced developers, showcasing a solid commitment to the project's ongoing development.

1.1 Token Metrics Research

Category     | Percentage Score
Innovation   | 18.18%
Architecture | 20.00%
Code Quality | 23.64%
Mainnet      | 9.09%
Usability    | 9.09%
Team         | 10.91%
Total        | 90.91%

Source: https://research.tokenmetrics.com/taiko-code-review/

The evaluation of Taiko, a type-1 ZK-EVM blockchain, based on the review by Token Metrics, breaks down its performance across various dimensions using a percentage score. Here's an overview of how Taiko scored in each category:

  • Innovation: 18.18%. This high score reflects Taiko's approach to providing an Ethereum-equivalent ZK-EVM, enabling smart contracts to run without any modifications. It underscores the project's unique solution to scalability while maintaining compatibility with Ethereum's ecosystem.

  • Architecture: 20.00%. The architecture score highlights the project's solid foundation in terms of security, decentralization, and permissionlessness. The detailed and well-explained architecture in the whitepaper demonstrates the project's robustness and commitment to its core principles.

  • Code Quality: 23.64%. This score indicates the high quality of Taiko's code, characterized by regular updates, extensive testing, and maintainability. The open-source nature of the project and the active involvement of experienced developers contribute to this positive assessment.

  • Mainnet: 9.09%. Reflecting the project's stage in its development lifecycle, this score indicates progress toward launching the mainnet. With several testnets planned and underway, the project is methodically moving towards its mainnet launch, expected in early 2024.

  • Usability: 9.09%. Taiko's usability score is tied to its infrastructure compatibility, making it easily integrable with existing EVM-compatible tools and wallets. This ensures a seamless user experience for end customers.

  • Team: 10.91%. The team score highlights the expertise and active involvement of Taiko's developers. With a solid coding style and senior Git backgrounds, the team's capabilities are well-regarded.

  • Total: 90.91%. The overall score is a testament to Taiko's promising potential within the blockchain space, indicating a strong performance across innovation, architecture, code quality, and other key metrics.

These insights highlight Taiko's potential to significantly contribute to the Ethereum ecosystem by enhancing scalability, security, and developer experience without compromising the foundational principles that have made Ethereum the leading smart contract platform.

2. Scope of Audit

2.1 In Scope

packages/protocol/contracts/common/IAddressManager.sol
packages/protocol/contracts/common/AddressManager.sol
packages/protocol/contracts/common/IAddressR.sol
packages/protocol/contracts/common/AddressR.sol
packages/protocol/contracts/common/EssentialContract.sol
packages/protocol/contracts/libs/Lib4844.sol
packages/protocol/contracts/libs/LibAddress.sol
packages/protocol/contracts/libs/LibMath.sol
packages/protocol/contracts/libs/LibTrieProof.sol
packages/protocol/contracts/L1/gov/TaikoGovernor.sol
packages/protocol/contracts/L1/gov/TaikoTimelockController.sol
packages/protocol/contracts/L1/hooks/IHook.sol
packages/protocol/contracts/L1/hooks/AssignmentHook.sol
packages/protocol/contracts/L1/ITaikoL1.sol
packages/protocol/contracts/L1/libs/LibDepositing.sol
packages/protocol/contracts/L1/libs/LibProposing.sol
packages/protocol/contracts/L1/libs/LibProving.sol
packages/protocol/contracts/L1/libs/LibUtils.sol
packages/protocol/contracts/L1/libs/LibVerifying.sol
packages/protocol/contracts/L1/provers/GuardianProver.sol
packages/protocol/contracts/L1/provers/Guardians.sol
packages/protocol/contracts/L1/TaikoData.sol
packages/protocol/contracts/L1/TaikoErrors.sol
packages/protocol/contracts/L1/TaikoEvents.sol
packages/protocol/contracts/L1/TaikoL1.sol
packages/protocol/contracts/L1/TaikoToken.sol
packages/protocol/contracts/L1/tiers/ITierProvider.sol
packages/protocol/contracts/L1/tiers/DevnetTierProvider.sol
packages/protocol/contracts/L1/tiers/MainnetTierProvider.sol
packages/protocol/contracts/L1/tiers/TestnetTierProvider.sol
packages/protocol/contracts/L2/CrossChainOwned.sol
packages/protocol/contracts/L2/Lib1559Math.sol
packages/protocol/contracts/L2/TaikoL2.sol
packages/protocol/contracts/L2/TaikoL2EIP1559Configurable.sol
packages/protocol/contracts/signal/ISignalService.sol
packages/protocol/contracts/signal/LibSignals.sol
packages/protocol/contracts/signal/SignalService.sol
packages/protocol/contracts/bridge/IBridge.sol
packages/protocol/contracts/bridge/Bridge.sol
packages/protocol/contracts/tokenvault/adapters/USDCAdapter.sol
packages/protocol/contracts/tokenvault/BridgedERC20.sol
packages/protocol/contracts/tokenvault/BridgedERC20Base.sol
packages/protocol/contracts/tokenvault/BridgedERC721.sol
packages/protocol/contracts/tokenvault/BridgedERC1155.sol
packages/protocol/contracts/tokenvault/BaseNFTVault.sol
packages/protocol/contracts/tokenvault/BaseVault.sol
packages/protocol/contracts/tokenvault/ERC1155Vault.sol
packages/protocol/contracts/tokenvault/ERC20Vault.sol
packages/protocol/contracts/tokenvault/ERC721Vault.sol
packages/protocol/contracts/tokenvault/IBridgedERC20.sol
packages/protocol/contracts/tokenvault/LibBridgedToken.sol
packages/protocol/contracts/thirdparty/nomad-xyz/ExcessivelySafeCall.sol
packages/protocol/contracts/thirdparty/optimism/Bytes.sol
packages/protocol/contracts/thirdparty/optimism/rlp/RLPReader.sol
packages/protocol/contracts/thirdparty/optimism/rlp/RLPWriter.sol
packages/protocol/contracts/thirdparty/optimism/trie/MerkleTrie.sol
packages/protocol/contracts/thirdparty/optimism/trie/SecureMerkleTrie.sol
packages/protocol/contracts/thirdparty.sol
packages/protocol/contracts/verifiers/IVerifier.sol
packages/protocol/contracts/verifiers/GuardianVerifier.sol
packages/protocol/contracts/verifiers/SgxVerifier.sol
packages/protocol/contracts/team/airdrop/ERC20Airdrop.sol
packages/protocol/contracts/team/airdrop/ERC20Airdrop2.sol
packages/protocol/contracts/team/airdrop/ERC721Airdrop.sol
packages/protocol/contracts/team/airdrop/MerkleClaimable.sol
packages/protocol/contracts/team/TimelockTokenPool.sol
packages/protocol/contracts/automata-attestation/AutomataDcapV3Attestation.sol
packages/protocol/contracts/automata-attestation/interfaces/IAttestation.sol
packages/protocol/contracts/automata-attestation/interfaces/ISigVerifyLib.sol
packages/protocol/contracts/automata-attestation/lib/EnclaveIdStruct.sol
packages/protocol/contracts/automata-attestation/lib/interfaces/IPEMCertChainLib.sol
packages/protocol/contracts/automata-attestation/lib/PEMCertChainLib.sol
packages/protocol/contracts/automata-attestation/lib/QuoteV3Auth/V3Parser.sol
packages/protocol/contracts/automata-attestation/lib/QuoteV3Auth/V3Struct.sol
packages/protocol/contracts/automata-attestation/lib/TCBInfoStruct.sol
packages/protocol/contracts/automata-attestation/utils/Asn1Decode.sol
packages/protocol/contracts/automata-attestation/utils/BytesUtils.sol
packages/protocol/contracts/automata-attestation/utils/RsaVerify.sol
packages/protocol/contracts/automata-attestation/utils/SHA1.sol
packages/protocol/contracts/automata-attestation/utils/SigVerifyLib.sol
packages/protocol/contracts/automata-attestation/utils/X509DateUtils.sol

2.2. Out of Scope

All files outside of packages/protocol/contracts are out of scope.

3. Architecture

The architecture of Taiko, as a Type 1 (Ethereum-equivalent) ZK-EVM, is meticulously designed to align with Ethereum's principles while enhancing its scalability, security, and developer experience. At the core, Taiko integrates several key components, principles, and strategies to achieve its goals, which are outlined below:

3.1. L1 Protocol Connections

The Layer 1 (L1) protocol connections facilitate communication between the L2 rollup node and Ethereum's main chain. This includes submitting proofs generated by the L2 rollup node to Ethereum for verification and finality, ensuring the security and integrity of transactions processed on Taiko.

3.2. L2 Rollup Node

The Layer 2 (L2) rollup node in Taiko manages the rollup chain, which aggregates transactions into batches, generating proofs of their correctness. This mechanism significantly reduces the on-chain footprint by compressing multiple transactions into a single proof, enhancing scalability.

3.3. Decentralized and Permissionless Participation

In keeping with Ethereum's decentralized ethos, Taiko allows for decentralized and permissionless participation. Nodes, proposers, and provers can participate without centralized control, ensuring a wide and resilient network.

3.4. Security and Ethereum Equivalence

Security is a paramount concern in Taiko's architecture. The platform ensures that all security assumptions are either directly or indirectly enforced by Ethereum's own security mechanisms. By adhering strictly to Ethereum equivalence, Taiko ensures that it inherits Ethereum's security properties and upgrades.

3.5. Minimal and Robust Protocol Design

Taiko's protocol design is both minimal and robust, avoiding unnecessary changes that could compromise compatibility with Ethereum. This design philosophy ensures that Taiko remains closely aligned with Ethereum's upgrades and ecosystem.

3.6. Continual Ethereum Equivalence

Taiko's commitment to Ethereum equivalence is not just a static goal but a continual process. This means that Taiko not only matches Ethereum's current state but also aims to adopt future Ethereum upgrades, ensuring long-term compatibility and co-evolution.

3.7. Scalability Without Compromise

Through its ZK-EVM and rollup-based architecture, Taiko addresses the classic blockchain trilemma, offering scalability without sacrificing decentralization or security. This allows Ethereum to scale effectively, accommodating more transactions and reducing costs for users.

Taiko's architecture represents a thoughtful blend of innovation, security, and Ethereum compatibility, aiming to enhance the scalability and efficiency of the Ethereum ecosystem while preserving its foundational principles.

4. Approach

I did a manual, line-by-line review of the Taiko protocol's smart contracts, relying solely on manual inspection to identify vulnerabilities and areas for improvement. Beyond helping improve the security and efficiency of the protocol, this hands-on approach deepened my understanding of smart contract best practices and complex blockchain interactions. A high-level analysis of my findings follows below.

5. Type 1 zkEVM Inherent Risks

5.6 Proof Generation Cost

The complexity and computational intensity required to generate proofs, especially in a multi-proof system where different types of proofs are integrated, can lead to significant delays. This not only affects the speed of transaction verification but also impacts the overall throughput of the blockchain.

For example, proposers can submit maliciously crafted blocks that are cheap in gas but very expensive to prove, effectively leading to a DoS of the system for other users.

5.7 Upgradability Concerns

Taiko being a Type 1 based rollup has both upsides and downsides. It enjoys the security and finality of Ethereum, but it is harder to add features that could otherwise benefit the protocol in serving its purpose.

5.8 Scalability Concerns

The scalability benefits Taiko can offer are inherently tied to Ethereum's throughput capabilities. If Ethereum faces scalability issues, it directly impacts Taiko’s performance.

5.9 Proof Submission and Verification Delays

Fluctuations in Ethereum gas prices and network congestion can cause delays in the submission and verification of proofs on the Ethereum blockchain. High gas prices may disincentivize provers from submitting proofs promptly, leading to delays in block verification and finality.

6. SGX Risks and Side-Channel Attacks

Dependence on Intel's SGX for a portion of the proof generation introduces some risks. If vulnerabilities in SGX are exploited or if there are hardware failures, it could lead to a temporary unavailability of this proof mechanism.

6.1. Cryptographic Primitives and Protocols

Exploitation of cryptographic vulnerabilities can undermine the security of communication and data within Taiko, leading to the exposure of sensitive information or enabling unauthorized transactions. Local attackers might exploit enclave state retention to brute-force encryption keys or misuse cryptographic protocols to compromise data integrity and confidentiality.

Use up-to-date, well-researched cryptographic primitives and avoid custom or non-standard implementations.

6.2. Page Table Attacks

By manipulating page tables to cause or observe page faults, an attacker could infer the pattern of memory accesses made by the SGX enclave running Taiko's proof generation code. This could potentially allow them to deduce sensitive information, such as private keys or the details of transactions before they are publicly committed to the blockchain. Even if the specific content cannot be directly accessed due to SGX's encryption, the access patterns alone might reveal enough information to compromise privacy or security.

6.3. Segmentation Attacks

If Taiko's SGX enclave is deployed on systems that still use x86 segmentation, attackers could exploit this to gain even finer-grained insight into the enclave's memory access patterns, further compromising the security of the data processed within it.

6.4. DRAM Vulnerabilities

DRAM-based attacks could similarly allow attackers to observe and infer patterns of memory access outside of the CPU cache, albeit with less precision. This could compromise the integrity of the proof generation process by leaking information about which parts of memory are accessed more frequently, potentially revealing sensitive computation patterns or data.

6.5. Cache Timing Attacks

An attacker sharing hardware with the enclave can measure cache hit and miss timings to infer which memory locations the enclave accesses. Implement cache partitioning or cache randomization techniques to reduce the predictability of cache access patterns, making it more difficult for attackers to perform this timing analysis.

Where possible, use algorithms designed to execute in constant time, regardless of input data. This approach minimizes the data-dependent variations in execution time that can be exploited through cache timing attacks.

6.6. Branch Prediction Attacks / Speculative Execution

Speculative execution vulnerabilities can allow attackers to bypass security boundaries within the CPU, potentially leading to unauthorized access to sensitive data processed within the enclave.

These sophisticated attacks can reveal detailed execution patterns within the SGX enclave by exploiting the processor's branch prediction mechanism. By monitoring mispredicted branches or the execution of speculative instructions, an attacker could infer sensitive details about the operations being performed within the enclave, such as the processing of transactions or the generation of cryptographic proofs.

This could not only compromise transaction privacy but also enable more sophisticated attacks that influence the execution of code within the enclave to the attacker's benefit, for example by causing certain transactions to be prioritized or manipulated before they are committed to the blockchain.

Leverage CPU microcode updates and SGX SDK features designed to limit the impact of speculative execution. This could involve disabling or restricting speculative execution for sensitive code paths.

7. Centralization Risks

7.1. Centralization Risks in Prover Markets

If a small number of provers or prover pools come to dominate the market, it could lead to periods where proof generation is effectively controlled by these entities. This centralization risk might result in unavailability or delays in proof generation during critical periods.

A Taiko prover currently needs to be able to generate both SGX and ZK proofs, and the team is not distributing TTKOk while the SGX registration process is reworked to occur fully onchain.

7.2. Centralization Risks In Inception Layers

The reliance on specific technologies or infrastructures for managing the inception layers could lead to centralization risks, particularly if certain aspects of the system are controlled by a limited set of actors or if there is a heavy reliance on specific service providers for cross-layer communication.

7.3. Financial Barrier to Entry Due to High Stakes

The requirement to post substantial bonds may deter smaller participants from engaging in the proof and contestation process, potentially limiting the system to wealthier participants and centralizing validation power.

8. Cross Chain Replay Attacks

There's a risk that signals or transactions valid on one chain could be replayed on another, especially if nonce or identifier mechanisms aren't properly implemented or synchronized between chains.
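
A common defensive pattern is to bind every cross-chain message to its source and destination chain ids plus a per-message nonce, and to mark each message hash as consumed on delivery. The sketch below is a minimal illustration under hypothetical names; it is not Taiko's actual SignalService or Bridge.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Minimal illustration of replay protection: the message hash commits to the
// source chain, destination chain, sender, nonce and payload, and can only
// be consumed once, on the chain it was destined for.
contract ReplayProtectedRelay {
    mapping(bytes32 => bool) public consumed;

    event MessageConsumed(bytes32 indexed msgHash);

    function consume(
        uint64 srcChainId,
        uint64 destChainId,
        address sender,
        uint256 nonce,
        bytes calldata payload
    )
        external
    {
        // The destination chain id is part of the hash, so the same message
        // cannot also be accepted on a different chain.
        require(destChainId == block.chainid, "wrong destination chain");

        bytes32 msgHash =
            keccak256(abi.encode(srcChainId, destChainId, sender, nonce, payload));
        require(!consumed[msgHash], "already consumed");
        consumed[msgHash] = true;

        emit MessageConsumed(msgHash);
        // Proof verification and payload execution would follow here.
    }
}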

9. Liquidity Risks and Pool Exploitation

Bridging mechanisms that rely on liquidity pools (for token swaps or asset bridging) could be susceptible to liquidity depletion, manipulation, or exploitation through complex attack vectors.

Mitigations include maintaining robust liquidity management practices, such as dynamic fee adjustments and liquidity provider incentives, and implementing security measures to prevent pool-draining attacks.

10. Governance and Upgrade Risks

The processes for upgrading smart contracts or changing bridge parameters could be exploited to introduce vulnerabilities or malicious functionality. Mitigations include requiring multi-signature or DAO approval for critical changes and time-locking upgrades to allow community review.
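
One way to enforce that review window is to route every upgrade and parameter change through a timelock controlled by a multisig or DAO. The sketch below assumes OpenZeppelin's TimelockController (the four-argument constructor introduced in v4.8); the contract name and delay are illustrative.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/governance/TimelockController.sol";

// Hypothetical governance wrapper: upgrade and parameter calls are scheduled
// through this timelock, giving the community a mandatory review window
// before they can be executed.
contract BridgeTimelock is TimelockController {
    constructor(
        uint256 minDelay,           // e.g. 2 days of mandatory review time
        address[] memory proposers, // multisig or DAO addresses allowed to schedule
        address[] memory executors  // addresses allowed to execute after the delay
    )
        TimelockController(minDelay, proposers, executors, msg.sender) // msg.sender as optional admin
    {}
}

The proxy admin of the upgradeable contracts would then be transferred to this timelock, so every upgrade is publicly scheduled before it takes effect.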

11. Cross-Chain Data Consistency

Ensuring data consistency and state synchronization across chains, especially in the face of reorgs or chain splits, presents a significant challenge. It calls for robust chain reorganization handling logic, checkpointing, and cross-chain state validation mechanisms to ensure consistency.
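
One lightweight building block is a checkpoint store that only accepts monotonically advancing state roots per source chain, which makes stale or reorged checkpoints easy to reject. This is a sketch under assumed names and a single trusted syncer, not Taiko's actual signal or state sync implementation.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative checkpoint store: state roots are keyed by (chainId, blockId)
// and may only advance forward, which simplifies rejecting checkpoints that
// belong to a reorged-out branch of the source chain.
contract CheckpointStore {
    struct Checkpoint {
        bytes32 stateRoot;
        uint64 blockId;
    }

    address public immutable syncer; // trusted relayer/bridge in this sketch

    // chainId => latest checkpoint
    mapping(uint64 => Checkpoint) public latest;
    // chainId => blockId => stateRoot
    mapping(uint64 => mapping(uint64 => bytes32)) public roots;

    event CheckpointSaved(uint64 indexed chainId, uint64 indexed blockId, bytes32 stateRoot);

    constructor(address _syncer) {
        syncer = _syncer;
    }

    function saveCheckpoint(uint64 chainId, uint64 blockId, bytes32 stateRoot) external {
        require(msg.sender == syncer, "unauthorized");
        require(blockId > latest[chainId].blockId, "stale or reorged checkpoint");

        roots[chainId][blockId] = stateRoot;
        latest[chainId] = Checkpoint(stateRoot, blockId);
        emit CheckpointSaved(chainId, blockId, stateRoot);
    }
}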

12. Cost Implications

12.1. Layered Rollups

Operating and maintaining multiple rollups could lead to increased costs, both in terms of transaction fees for users as they move assets or messages across layers and operational expenses for maintaining the infrastructure of each layer.

12.2. Variable Costs

Ethereum’s gas fees can be highly volatile, subject to market demand. Since Taiko relies on Ethereum for sequencing, the costs of using Taiko can fluctuate significantly, which might deter users during peak periods.

12.3. Economic Dependence

The economic model of Taiko is closely linked to Ethereum's, making it vulnerable to changes in Ethereum's fee market or gas pricing mechanisms.

13. Latency in Cross-Layer Transactions

While arbitrary message passing enables communication across layers, it also introduces latency. Transactions involving multiple layers might face delays as they wait for confirmation across the stack, potentially impacting applications requiring high throughput or real-time interactions.

A table summarizing the latency, cost, and delivery methods for L1 to L2 and L2 to L1 transactions:

Direction | Latency    | Cost                              | Delivery Method
L1 → L2   | ~3 minutes | ~110,000 L1 gas + ~440,000 L2 gas | Manual and automatic delivery supported
L2 → L1   | ~4 minutes | ~129,000 L2 gas + ~440,000 L1 gas | Manual and automatic delivery supported

Source: https://www.rollup.codes/taiko#opcodes

Note: these figures may have changed since the source was published.

14. Griefing Attack Risks

14.1 Griefing Attacks Via Contests

A bad actor could intentionally contest blocks without valid reasons, forcing honest provers to expend additional resources to re-prove blocks at a higher tier. This not only wastes computational resources but also creates unnecessary delays in the validation process. Such griefing attacks could be aimed at increasing the operational costs for competitors or simply to disrupt the normal operation of the blockchain.

14.2 Computational Resource Depletion

Repeatedly forcing blocks to be re-proven at higher tiers could lead to significant computational resource depletion. Malicious actors might target this aspect to drain the resources of honest provers, particularly in systems where computational resources are a limiting factor. This could result in honest provers withdrawing from the process, reducing the network’s security and resilience.

15. Economic Exploitation

If the economic incentives are not carefully balanced, bad actors might find ways to exploit the proof tier selection process for profit. For example, if the rewards for proving higher-tier blocks are disproportionately high, provers might be incentivized to contest blocks without valid grounds, aiming to re-prove them at a higher tier for greater rewards. This could lead to a scenario where the system is gamed for financial gain at the expense of network efficiency and trust.

Ensure that the economic incentives for contesting and re-proving blocks are balanced to discourage exploitation.
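
One concrete lever is to require contesters to lock a bond that scales with the tier being challenged and that is forfeited if the contest fails. The sketch below is purely illustrative; the tier schedule, bond token, and resolution flow are hypothetical and are not taken from the Taiko codebase.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

// Sketch of a contestation bond: challenging a proof locks a bond that grows
// with the contested tier, making frivolous contests increasingly expensive.
contract ContestBondSketch {
    using SafeERC20 for IERC20;

    IERC20 public immutable bondToken; // assumed 18-decimals bond token

    // blockId => contester => locked bond
    mapping(uint64 => mapping(address => uint256)) public lockedBond;

    constructor(IERC20 _bondToken) {
        bondToken = _bondToken;
    }

    function requiredBond(uint16 tier) public pure returns (uint256) {
        // Example schedule: 10,000 tokens at the base tier, doubling per tier.
        require(tier <= 16, "tier out of range");
        uint256 baseBond = 10_000e18;
        return baseBond << tier;
    }

    function contest(uint64 blockId, uint16 tier) external {
        uint256 bond = requiredBond(tier);
        bondToken.safeTransferFrom(msg.sender, address(this), bond);
        lockedBond[blockId][msg.sender] += bond;
        // A real system would record the contest and later either refund the
        // bond (contest upheld) or forfeit it to the prover (contest lost).
    }
}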

15.1. Influencing Tier Skipping/Delay for Potentially Malicious Blocks

The significant financial risk associated with contesting proofs might discourage participants from challenging proofs even when a challenge is warranted, promoting a more conservative approach that could allow incorrect proofs to remain unchallenged.

15.2. Dependency on Monetary Incentives

The system heavily relies on financial incentives, which might not always align with the broader goals of truth and integrity within the blockchain, potentially leading to decisions driven more by profit than accuracy.

16. Denial of Service Risks

16.1. Complexity in Verifying Merkle Proofs

The process of verifying Merkle proofs requires hashing operations that could become computationally expensive if the tree depth is large or if proofs are crafted to be inefficient on purpose. An attacker could exploit this by submitting claims with such proofs, potentially leading to high gas costs or even exceeding block gas limits, making it difficult or impossible for legitimate claims to be processed.

16.2. Maliciously Crafted Proofs

If an attacker can influence the structure of the proofs or flood the contract with expensive-to-verify claims, it might deplete the gas limit of blocks or significantly increase the cost for legitimate users to submit their claims. This can be mitigated by imposing limits on the proof size or depth and rejecting claims with proofs exceeding these limits.
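
Concretely, a claiming contract can reject proofs above a fixed depth before performing any hashing. The sketch below reuses OpenZeppelin's MerkleProof library (which the codebase already imports); the contract, the MAX_PROOF_LENGTH bound, and the claim flow are assumptions for illustration, not the actual MerkleClaimable implementation.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts/utils/cryptography/MerkleProof.sol";

// Illustrative claim contract that bounds proof length before verification,
// so deliberately oversized proofs are rejected without wasting gas on hashing.
contract BoundedMerkleClaim {
    // A depth of 32 supports up to 2**32 leaves, far more than any realistic claim list.
    uint256 public constant MAX_PROOF_LENGTH = 32;

    bytes32 public immutable merkleRoot;
    mapping(bytes32 => bool) public claimed;

    constructor(bytes32 _merkleRoot) {
        merkleRoot = _merkleRoot;
    }

    function claim(bytes32 leaf, bytes32[] calldata proof) external {
        require(proof.length <= MAX_PROOF_LENGTH, "proof too long");
        require(!claimed[leaf], "already claimed");
        require(MerkleProof.verify(proof, merkleRoot, leaf), "invalid proof");

        claimed[leaf] = true;
        // Token transfer or other claim effects would go here.
    }
}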

Findings

Severity | Findings
Critical | 0
High     | 1
Medium   | 4
Low      | 24
Gas      | 8
NC       | 2

Time Spent: 120 hours


#0 - c4-judge

2024-04-10T11:57:40Z

0xean marked the issue as grade-b
