Platform: Code4rena
Start Date: 04/03/2024
Pot Size: $140,000 USDC
Total HM: 19
Participants: 69
Period: 21 days
Judge: 0xean
Total Solo HM: 4
Id: 343
League: ETH
Rank: 66/69
Findings: 1
Award: $26.76
🌟 Selected for report: 0
🚀 Solo Findings: 0
🌟 Selected for report: DadeKuma
Also found by: 0x11singh99, 0xAnah, 0xhacksmithh, Auditor2947, IllIllI, K42, MrPotatoMagic, Pechenite, SAQ, SM3_SS, SY_S, Sathish9098, albahaca, caglankaan, cheatc0d3, clara, dharma09, hihen, hunter_w3b, oualidpro, pavankv, pfapostol, rjs, slvDev, sxima, unique, zabihullahazadzoi
26.7572 USDC - $26.76
Possible Optimization 1 =
The supportsInterface() override simply forwards to super.supportsInterface(), which chains through the parent implementations on every query; returning the supported interface IDs directly avoids those extra internal calls.
Here is the optimized code snippet:
// Before Optimization
function supportsInterface(bytes4 _interfaceId)
    public
    view
    override(GovernorUpgradeable, GovernorTimelockControlUpgradeable, IERC165Upgradeable)
    returns (bool)
{
    return super.supportsInterface(_interfaceId);
}

// After Optimization
function supportsInterface(bytes4 _interfaceId)
    public
    view
    override(GovernorUpgradeable, GovernorTimelockControlUpgradeable, IERC165Upgradeable)
    returns (bool)
{
    // type(X).interfaceId is only available for interfaces, so the timelock
    // check is expressed via the IGovernorTimelockUpgradeable interface.
    return _interfaceId == type(IGovernorUpgradeable).interfaceId
        || _interfaceId == type(IGovernorTimelockUpgradeable).interfaceId
        || _interfaceId == type(IERC165Upgradeable).interfaceId;
}
Estimated Gas Saved = A modest amount of gas per supportsInterface call, depending on the EVM's current gas pricing for these operations.
Possible Optimization 2 =
Functions such as votingDelay(), votingPeriod(), and proposalThreshold() return fixed values, so their results can be inlined at internal call sites instead of dispatching through the getters.
Here is the optimized code:
// Instead of calling votingDelay(), directly use its return value where needed uint256 votingDelay = 7200; // Use the constant value directly // Apply similar inlining for votingPeriod() and proposalThreshold()
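To make the pattern concrete, a minimal sketch follows; it assumes the delay, period, and threshold really are fixed, and the 50_400 and 1_000_000 ether figures are placeholders rather than the project's configured settings (only the 7200 value appears in the snippet above).
// Sketch only: expose the fixed values as constants and read them directly
// at internal call sites instead of dispatching through the public getters.
uint256 private constant VOTING_DELAY = 7200; // value taken from the snippet above
uint256 private constant VOTING_PERIOD = 50_400; // placeholder
uint256 private constant PROPOSAL_THRESHOLD = 1_000_000 ether; // placeholder

function votingDelay() public pure override returns (uint256) {
    return VOTING_DELAY;
}

// Internally, read VOTING_DELAY directly rather than calling votingDelay().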
Possible Optimization 1 =
The contract iterates over the tierFees array to find the fee associated with a given tier. This can be optimized by using a mapping for direct access if the tiers are known and relatively static, or by ensuring the array is sorted and implementing a binary search for dynamic cases (a sketch of that variant follows the snippet below).
Here is the optimized code snippet:
// Assuming tiers are static and known, replace the array with a mapping for direct access. // This requires changes in how tierFees are stored and accessed throughout the contract. mapping(uint16 => uint256) private _tierFees; function _setProverFee(uint16 _tierId, uint256 _fee) internal { _tierFees[_tierId] = _fee; } function _getProverFee(uint16 _tierId) private view returns (uint256) { require(_tierFees[_tierId] != 0, "HOOK_TIER_NOT_FOUND"); return _tierFees[_tierId]; }
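For the dynamic case mentioned above, a binary search over a tier-sorted array could look like the sketch below; the TierFee struct shape and the revert string are assumptions for illustration, not the contract's exact definitions.
struct TierFee {
    uint16 tier;
    uint256 fee;
}

// Assumes _tierFees is sorted in ascending order of tier.
function _findProverFee(TierFee[] memory _tierFees, uint16 _tierId) private pure returns (uint256) {
    uint256 lo = 0;
    uint256 hi = _tierFees.length;
    while (lo < hi) {
        uint256 mid = (lo + hi) / 2;
        if (_tierFees[mid].tier == _tierId) return _tierFees[mid].fee;
        if (_tierFees[mid].tier < _tierId) lo = mid + 1;
        else hi = mid;
    }
    revert("HOOK_TIER_NOT_FOUND");
}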
Possible Optimization 2 =
The contract repeatedly uses resolve and msg.sender for fetching contract addresses (e.g., taiko_token, taikoL1Address). Caching these addresses after the first lookup or initialization could save gas on subsequent accesses.
Here is the optimized code:
address private _taikoTokenAddress; address private _taikoL1Address; function _cacheAddresses() internal { if (_taikoTokenAddress == address(0)) { _taikoTokenAddress = resolve("taiko_token", false); } if (_taikoL1Address == address(0)) { _taikoL1Address = msg.sender; // Assuming msg.sender is always the TaikoL1 contract after the first call } } // Use _taikoTokenAddress and _taikoL1Address directly in the function calls instead of resolving them each time
Possible Optimization 1 =
The contract frequently resolves addresses using the _resolver parameter, which can be gas-intensive due to external calls. Caching these addresses after the first resolution within a function call can save gas.
Optimized Code Snippet:
function proposeBlock( TaikoData.State storage _state, TaikoData.Config memory _config, IAddressResolver _resolver, bytes calldata _data, bytes calldata _txList ) internal returns (TaikoData.BlockMetadata memory meta_, TaikoData.EthDeposit[] memory deposits_) { // Cache resolved addresses address taikoTokenAddress = _resolver.resolve("taiko_token", false); address tierProviderAddress = _resolver.resolve("tier_provider", false); // Use cached addresses in the function ... }
Possible Optimization 2 =
The contract performs many conditional checks in a sequence that could be optimized by combining related conditions and exiting early where possible.
Optimized Code Snippet:
function proposeBlock( ... ) internal returns (TaikoData.BlockMetadata memory meta_, TaikoData.EthDeposit[] memory deposits_) { ... // Combine related conditional checks for efficiency if (params.assignedProver == address(0) || !LibAddress.isSenderEOA()) { revert L1_INVALID_PROVER(); } // Early exit if unauthorized or too many blocks if (!_isProposerPermitted(b, _resolver) || b.numBlocks >= b.lastVerifiedBlockId + _config.blockMaxProposals + 1) { revert L1_UNAUTHORIZED(); } ... }
Possible Optimization 3 =
The logic for reusing blob hashes involves several conditional checks that can be streamlined. Specifically, the checks for blob reuse and caching can be optimized to reduce redundancy.
Optimized Code Snippet:
// Simplify blob reuse and caching logic if (meta_.blobUsed && _config.blobAllowedForDA) { if (params.blobHash != 0 && _config.blobReuseEnabled && isBlobReusable(_state, _config, params.blobHash)) { meta_.blobHash = params.blobHash; } else if (params.blobHash == 0) { meta_.blobHash = blobhash(0); if (meta_.blobHash == 0) revert L1_BLOB_NOT_FOUND(); } if (params.cacheBlobForReuse) { _state.reusableBlobs[meta_.blobHash] = block.timestamp; emit BlobCached(meta_.blobHash); } }
Possible Optimization 1 =
Consolidate the repeated tier-validity checks in the proving flow into a single helper to reduce duplicated require statements.
Here is the optimized code snippet:
// Replace repetitive tier checks with a single call to _checkTierValidity where applicable. function _checkTierValidity(TaikoData.TierProof memory _proof, uint16 _minTier, ITierProvider.Tier memory _tier) private pure { require(_proof.tier != 0 && _proof.tier >= _minTier, "L1_INVALID_TIER"); require(_tier.contestBond != 0, "L1_MISSING_VERIFIER"); }
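As a usage illustration only, the proving path could then replace the scattered checks with one call; tierProvider and meta below are assumed local variables, not the library's actual identifiers.
// Hypothetical call site for the helper above.
ITierProvider.Tier memory tier = ITierProvider(tierProvider).getTier(_proof.tier);
_checkTierValidity(_proof, meta.minTier, tier);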
Possible Optimization 2 =
Consolidate the repeated transition-state writes into a single helper so the fields are updated in one place.
Here is the optimized code:
// Replace repetitive state updates with a single call to _updateTransitionState where applicable. function _updateTransitionState( TaikoData.TransitionState storage _ts, TaikoData.Transition memory _tran, TaikoData.TierProof memory _proof, bool _isContesting, uint256 _contestBond, address _prover ) private { _ts.prover = _prover; _ts.blockHash = _tran.blockHash; _ts.stateRoot = _tran.stateRoot; _ts.tier = _proof.tier; if (_isContesting) { _ts.contestBond = _contestBond; _ts.contester = msg.sender; } }
Estimated Gas Saved = Consolidating the updates reduces redundant SSTORE operations, especially in functions with multiple conditional paths. The exact savings would depend on the frequency and pattern of state updates.
Possible Optimization 1 =
Group the configuration validity checks into related boolean sub-expressions inside a single helper.
Here is the optimized code snippet:
function _isConfigValid(TaikoData.Config memory _config) private pure returns (bool) { bool isValidChainId = _config.chainId > 1 && _config.chainId != block.chainid; bool isValidBlockSettings = _config.blockMaxProposals > 1 && _config.blockRingBufferSize > _config.blockMaxProposals + 1 && _config.blockMaxGasLimit > 0 && _config.blockMaxTxListBytes > 0 && _config.blockMaxTxListBytes <= 128 * 1024; // Up to 128K bool isValidDepositSettings = _config.ethDepositRingBufferSize > 1 && _config.ethDepositMinCountPerBlock > 0 && _config.ethDepositMaxCountPerBlock <= 32 && _config.ethDepositMaxCountPerBlock >= _config.ethDepositMinCountPerBlock && _config.ethDepositMinAmount > 0 && _config.ethDepositMaxAmount > _config.ethDepositMinAmount && _config.ethDepositMaxAmount <= type(uint96).max && _config.ethDepositGas > 0 && _config.ethDepositMaxFee > 0 && _config.ethDepositMaxFee <= type(uint96).max / _config.ethDepositMaxCountPerBlock; return isValidChainId && isValidBlockSettings && isValidDepositSettings && _config.livenessBond > 0; }
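A call site could then reduce to a single check; L1_INVALID_CONFIG is assumed here to be an existing custom error in the library rather than a confirmed identifier.
// Hypothetical call site inside the library's config-checking path.
if (!_isConfigValid(_config)) revert L1_INVALID_CONFIG();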
Possible Optimization 2 =
In verifyBlocks(), resolve the tier_provider and taiko_token addresses once before the verification loop instead of inside it.
Here is the optimized code:
function verifyBlocks( TaikoData.State storage _state, TaikoData.Config memory _config, IAddressResolver _resolver, uint64 _maxBlocksToVerify ) internal { if (_maxBlocksToVerify == 0) return; address tierProvider = _resolver.resolve("tier_provider", false); IERC20 tko = IERC20(_resolver.resolve("taiko_token", false)); uint64 numBlocksVerified = 0; uint64 blockId = _state.slotB.lastVerifiedBlockId + 1; while (blockId < _state.slotB.numBlocks && numBlocksVerified < _maxBlocksToVerify) { uint64 slot = blockId % _config.blockRingBufferSize; TaikoData.Block storage blk = _state.blocks[slot]; if (blk.blockId != blockId) break; uint32 tid = blk.verifiedTransitionId; if (tid == 0) break; TaikoData.TransitionState storage ts = _state.transitions[slot][tid]; if (ts.contester != address(0)) break; ITierProvider.Tier memory tier = ITierProvider(tierProvider).getTier(ts.tier); if (uint256(tier.cooldownWindow) * 60 + uint256(ts.timestamp).max(_state.slotB.lastUnpausedAt) > block.timestamp) break; // Proceed with verification logic... ++numBlocksVerified; ++blockId; } if (numBlocksVerified > 0) { _state.slotB.lastVerifiedBlockId += numBlocksVerified; // Additional logic for syncing chain data... } }
Estimated Gas Saved = By caching the tierProvider and tko addresses outside the loop and reducing the number of resolver calls, this optimization could save a significant amount of gas, especially in loops with multiple iterations. The savings would be more pronounced in scenarios with a higher number of blocks verified in a single transaction.
Possible Optimization =
In the TaikoData structs, ensuring that smaller data types are packed together can minimize the number of storage slots used.
Here is the optimized code snippet:
struct BlockMetadata {
    bytes32 l1Hash; // slot 1
    bytes32 difficulty; // slot 2
    bytes32 blobHash; // slot 3
    bytes32 extraData; // slot 4
    bytes32 depositsHash; // slot 5
    address coinbase; // slot 6 (20 bytes, packed with id)
    uint64 id; // 8 bytes
    uint64 timestamp; // slot 7 (the remaining small fields pack into one slot)
    uint64 l1Height;
    uint32 gasLimit;
    uint24 txListByteOffset;
    uint24 txListByteSize;
    uint16 minTier;
    bool blobUsed;
    bytes32 parentMetaHash; // slot 8 (a full bytes32 slot, placed after the packed fields)
}
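For clarity, the arithmetic behind the two packed slots, derived only from the declared field types above (slot-level packing applies when the struct is actually held in storage):
// slot 6: address coinbase (20) + uint64 id (8) = 28 bytes <= 32
// slot 7: uint64 timestamp (8) + uint64 l1Height (8) + uint32 gasLimit (4)
//         + uint24 txListByteOffset (3) + uint24 txListByteSize (3)
//         + uint16 minTier (2) + bool blobUsed (1) = 29 bytes <= 32
// Each bytes32 field, including parentMetaHash, occupies its own slot.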
Possible Optimization 1 =
In proposeBlock(), cache the result of getConfig() at the start of the function instead of calling it repeatedly.
Here is the optimized code snippet:
function proposeBlock(bytes calldata _params, bytes calldata _txList) external payable nonReentrant whenNotPaused returns (TaikoData.BlockMetadata memory meta_, TaikoData.EthDeposit[] memory deposits_) { // Cache the configuration at the start TaikoData.Config memory config = getConfig(); // Use `config` throughout the function... }
Estimated Gas Saved = Caching the configuration reduces the number of getConfig() calls within each transaction.
Possible Optimization 2 =
In proveBlock(), check the proving-paused flag first so the function reverts before doing any further work.
Here is the optimized code:
function proveBlock(uint64 _blockId, bytes calldata _input) external nonReentrant whenNotPaused { if (state.slotB.provingPaused) revert L1_PROVING_PAUSED(); // Function logic continues... }
Possible Optimization 1 =
In anchor(), cache the configuration and the last synced block in local variables instead of reading them repeatedly.
Here is the optimized code snippet:
function anchor( bytes32 _l1BlockHash, bytes32 _l1StateRoot, uint64 _l1BlockId, uint32 _parentGasUsed ) external nonReentrant { Config memory config = getConfig(); // Cache config uint64 lastSynced = lastSyncedBlock; // Cache last synced block // Use `config` and `lastSynced` in the function... }
Possible Optimization 2 =
The _calc1559BaseFee() function performs several arithmetic operations and conditionals that can be streamlined for efficiency.
Optimized Code Snippet:
function _calc1559BaseFee(
    Config memory _config,
    uint64 _l1BlockId,
    uint32 _parentGasUsed
)
    private
    view
    returns (uint256 basefee_, uint64 gasExcess_)
{
    uint256 excess = uint256(gasExcess) + _parentGasUsed;
    uint256 numL1Blocks = _l1BlockId > lastSyncedBlock ? _l1BlockId - lastSyncedBlock : 0;
    uint256 issuance = numL1Blocks * _config.gasTargetPerL1Block;
    excess = excess > issuance ? excess - issuance : 1;
    gasExcess_ = uint64(excess);
    basefee_ = Lib1559Math.basefee(
        gasExcess_, uint256(_config.basefeeAdjustmentQuotient) * _config.gasTargetPerL1Block
    );
    if (basefee_ == 0) basefee_ = 1;
}
Estimated Gas Saved = By simplifying arithmetic operations and conditionals, this optimization could save gas by reducing the computational overhead. The savings would be more pronounced in scenarios where this function is called frequently.
Possible Optimization 1 =
The contract frequently accesses storage variables such as isAuthorized, topBlockId, and signal slots. Caching these values in memory when they are accessed more than once in a function can save gas.
Optimized Code Snippet:
function syncChainData( uint64 _chainId, bytes32 _kind, uint64 _blockId, bytes32 _chainData ) external returns (bytes32) { // Cache authorization status to reduce storage access bool authorized = isAuthorized[msg.sender]; if (!authorized) revert SS_UNAUTHORIZED(); // Proceed with the rest of the function... }
Estimated Gas Saved = This optimization can save a few hundred gas per transaction by avoiding repeated SLOAD operations.
Possible Optimization 2 =
The proveSignalReceived() function performs multiple operations that could be optimized. For example, verifying hop proofs and caching chain data involve repeated patterns that could be abstracted into internal functions to reduce code duplication and potentially optimize gas usage.
Optimized Code Snippet:
function proveSignalReceived( uint64 _chainId, address _app, bytes32 _signal, bytes calldata _proof ) public validSender(_app) nonZeroValue(_signal) { HopProof[] memory hopProofs = abi.decode(_proof, (HopProof[])); if (hopProofs.length == 0) revert SS_EMPTY_PROOF(); // Abstracted logic for verifying hop proofs and caching chain data _processHopProofs(hopProofs, _chainId, _app, _signal); } function _processHopProofs(HopProof[] memory hopProofs, uint64 _chainId, address _app, bytes32 _signal) private { // Implement the logic for processing hop proofs, verifying them, and caching chain data // This abstracts the repeated patterns in the original function }
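Purely as an illustration of the abstraction (not the SignalService's actual verification logic), the stubbed helper might iterate over the hops and delegate to internal routines; _verifyHopProof and _cacheChainData below are assumed names.
function _processHopProofs(
    HopProof[] memory hopProofs,
    uint64 _chainId,
    address _app,
    bytes32 _signal
)
    private
{
    for (uint256 i = 0; i < hopProofs.length; ++i) {
        // Hypothetical helpers: verify each hop, then cache its chain data.
        _verifyHopProof(_chainId, _app, _signal, hopProofs[i]);
        _cacheChainData(_chainId, hopProofs[i]);
    }
}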
Estimated Gas Saved = While the exact gas savings depend on the implementation details and the number of hop proofs processed, abstracting repeated logic into internal functions can reduce bytecode size and optimize execution paths, potentially saving gas.
Possible Optimization 3 =
Emitting events with less frequently used data or combining multiple related events into a single event can reduce gas costs associated with logging.
Optimized Code Snippet:
event SignalProcessed(address indexed app, bytes32 indexed signal, bool success); function _sendSignal(address _app, bytes32 _signal, bytes32 _value) private validSender(_app) nonZeroValue(_signal) nonZeroValue(_value) returns (bytes32 slot_) { slot_ = getSignalSlot(uint64(block.chainid), _app, _signal); bool success = true; // Attempt to store the signal value, set success to false if it fails // Simplified for illustration assembly { sstore(slot_, _value) } emit SignalProcessed(_app, _signal, success); }
Estimated Gas Saved = Reducing the number of events or the amount of data logged in each event can save gas, especially for contracts that emit events frequently. The savings vary based on the size and number of data points in the original events.
Possible Optimization 1 =
In processMessage(), cache the message status in memory instead of reading it from storage repeatedly.
Code Snippet:
function processMessage(Message calldata _message, bytes calldata _proof) external nonReentrant whenNotPaused sameChain(_message.destChainId) { bytes32 msgHash = hashMessage(_message); Status cachedStatus = messageStatus[msgHash]; // Cache the status if (cachedStatus != Status.NEW) revert B_STATUS_MISMATCH(); // Further logic using cachedStatus instead of reading from storage again }
Possible Optimization 2 =
In suspendMessages(), update a message's status only when it actually changes, avoiding unnecessary SSTOREs.
Code Snippet:
function suspendMessages(bytes32[] calldata _msgHashes, bool _suspend) external onlyFromOwnerOrNamed("bridge_watchdog") { for (uint256 i = 0; i < _msgHashes.length; ++i) { bytes32 msgHash = _msgHashes[i]; Status currentStatus = messageStatus[msgHash]; Status newStatus = _suspend ? Status.SUSPENDED : Status.NEW; // Define new status based on _suspend flag if (currentStatus != newStatus) { messageStatus[msgHash] = newStatus; // Update only if status changes emit MessageSuspended(msgHash, _suspend); } } }
Possible Optimization 1 =
The bridged token builds its name() and symbol() strings on each call; precomputing them once in init() and storing the results avoids repeated string construction.
Optimized Code Snippet:
// Precompute the full name and symbol in the init function and store them in storage variables. string private _fullName; string private _fullSymbol; function init( address _owner, address _addressManager, address _srcToken, uint256 _srcChainId, uint8 _decimals, string memory _symbol, string memory _name ) external initializer { // Existing initialization logic... _fullName = LibBridgedToken.buildName(_name, _srcChainId); _fullSymbol = LibBridgedToken.buildSymbol(_symbol); } function name() public view override returns (string memory) { return _fullName; } function symbol() public view override returns (string memory) { return _fullSymbol; }
Possible Optimization 2 =
Certain functions restrict the caller to either the owner or the snapshooter. This check could be optimized by reducing the number of external calls if the owner is stored in a state variable that can be directly accessed.
Optimized Code Snippet:
modifier onlyOwnerOrSnapshooter() { address _owner = owner(); // Assuming owner() is a function call that might be optimized if owner is stored in a state variable directly require(msg.sender == _owner || msg.sender == snapshooter, "BTOKEN_UNAUTHORIZED"); _; }
Estimated Gas Saved = A small amount of gas per check if the access to owner() is optimized. The savings would be minor per transaction but could add up over many transactions.
Possible Optimization 1 =
When sending tokens, the contract iterates over arrays to check amounts and perform transfers. This process can be optimized by reducing redundant checks and leveraging batch operations provided by the ERC1155 standard more effectively.
Optimized Code Snippet:
function sendToken(BridgeTransferOp memory _op) external payable nonReentrant whenNotPaused withValidOperation(_op) returns (IBridge.Message memory message_) { require(_op.amounts.length > 0, "VAULT_INVALID_AMOUNT"); require(_op.token.supportsInterface(ERC1155_INTERFACE_ID), "VAULT_INTERFACE_NOT_SUPPORTED"); IERC1155(_op.token).safeBatchTransferFrom(msg.sender, address(this), _op.tokenIds, _op.amounts, ""); // Remaining logic for message creation and event emission... }
Estimated Gas Saved = Leveraging safeBatchTransferFrom directly for all token IDs and amounts in one call significantly reduces the gas cost by minimizing the number of external calls and checks. The exact savings depend on the number of tokens being transferred.
Possible Optimization 2 =
The contract frequently resolves addresses using the resolve() function, which can be gas-intensive due to potential storage reads. Caching these addresses at the beginning of functions that use them multiple times can save gas.
Optimized Code Snippet:
function sendToken(BridgeTransferOp memory _op) external payable nonReentrant whenNotPaused withValidOperation(_op) returns (IBridge.Message memory message_) { address bridgeAddress = resolve("bridge", false); address tokenAddress = resolve(_op.destChainId, name(), false); // Use `bridgeAddress` and `tokenAddress` in the function... }
Estimated Gas Saved = Caching resolved addresses can save hundreds to thousands of gas per transaction, depending on the number of reads avoided and the complexity of the resolve function.
Possible Optimization 3 =
The _getOrDeployBridgedToken() function checks if a bridged token exists and deploys one if it doesn't. This process can be optimized by reducing the number of storage reads and writes.
Optimized Code Snippet:
function _getOrDeployBridgedToken(CanonicalNFT memory _ctoken) private returns (address btoken_) { btoken_ = canonicalToBridged[_ctoken.chainId][_ctoken.addr]; if (btoken_ == address(0)) { btoken_ = _deployBridgedToken(_ctoken); bridgedToCanonical[btoken_] = _ctoken; canonicalToBridged[_ctoken.chainId][_ctoken.addr] = btoken_; emit BridgedTokenDeployed(_ctoken.chainId, _ctoken.addr, btoken_, _ctoken.symbol, _ctoken.name); } }
Estimated Gas Saved = This optimization minimizes storage operations by ensuring that token deployment and mapping updates are only performed when necessary. The savings are more pronounced when frequently interacting with the same set of tokens.
Possible Optimization 1 =
For sendToken(), consolidate the token amount and blacklist checks into a single sequence of validations to minimize redundant operations and storage accesses.
Optimized Code Snippet:
function sendToken(BridgeTransferOp calldata _op) external payable nonReentrant whenNotPaused returns (IBridge.Message memory message_) { require(_op.amount > 0, "VAULT_INVALID_AMOUNT"); require(_op.token != address(0), "VAULT_INVALID_TOKEN"); require(!btokenBlacklist[_op.token], "VAULT_BTOKEN_BLACKLISTED"); // Proceed with token transfer logic... }
Estimated Gas Saved = The savings come from avoiding repeated require checks for each token in the _op.amounts array. The exact savings depend on the size of the array and the frequency of transactions.
Possible Optimization 2 =
Cache frequently used resolved addresses at the beginning of functions to reduce redundant calls to the resolve function.
Optimized Code Snippet:
function sendToken(BridgeTransferOp calldata _op) external payable nonReentrant whenNotPaused returns (IBridge.Message memory message_) { address bridgeAddress = resolve("bridge", false); address tokenAddress = resolve(_op.destChainId, name(), false); // Use `bridgeAddress` and `tokenAddress` throughout the function... }
Estimated Gas Saved = The savings depend on the complexity of the resolve function and the number of times it's called within a transaction.
Possible Optimization 3 =
For _getOrDeployBridgedToken(), optimize the logic for deploying bridged tokens and updating mappings to reduce storage operations and improve code efficiency.
Optimized Code Snippet:
function _getOrDeployBridgedToken(CanonicalERC20 memory ctoken) private returns (address btoken) { btoken = canonicalToBridged[ctoken.chainId][ctoken.addr]; if (btoken == address(0)) { btoken = _deployBridgedToken(ctoken); canonicalToBridged[ctoken.chainId][ctoken.addr] = btoken; bridgedToCanonical[btoken] = ctoken; emit BridgedTokenDeployed(ctoken.chainId, ctoken.addr, btoken, ctoken.symbol, ctoken.name, ctoken.decimals); } }
Possible Optimization 1 =
For sendToken() here as well, simplify and batch the token transfer checks to minimize redundant operations.
Code Snippet:
function sendToken(BridgeTransferOp memory _op) external payable nonReentrant whenNotPaused returns (IBridge.Message memory message_) { require(_op.token.supportsInterface(ERC721_INTERFACE_ID), "VAULT_INTERFACE_NOT_SUPPORTED"); for (uint256 i = 0; i < _op.tokenIds.length; ++i) { require(_op.amounts[i] == 0, "VAULT_INVALID_AMOUNT"); IERC721(_op.token).safeTransferFrom(msg.sender, address(this), _op.tokenIds[i]); } // Proceed with the rest of the function... }
Possible Optimization 2 =
Also for sendToken(), cache frequently used resolved addresses at the beginning of functions to reduce redundant calls to the resolve function.
Code Snippet:
function sendToken(BridgeTransferOp memory _op) external payable nonReentrant whenNotPaused returns (IBridge.Message memory message_) { address bridgeAddress = resolve("bridge", false); // Use `bridgeAddress` throughout the function... }
Estimated Gas Saved = The savings depend on the complexity of the resolve function and the number of times it's called within a transaction.
Possible Optimization 3 =
For _getOrDeployBridgedToken(), streamline the logic for deploying bridged tokens and updating mappings to reduce storage operations and improve code efficiency.
Optimized code:
function _getOrDeployBridgedToken(CanonicalNFT memory _ctoken) private returns (address btoken_) { btoken_ = canonicalToBridged[_ctoken.chainId][_ctoken.addr]; if (btoken_ == address(0)) { btoken_ = _deployBridgedToken(_ctoken); bridgedToCanonical[btoken_] = _ctoken; canonicalToBridged[_ctoken.chainId][_ctoken.addr] = btoken_; // Emit event here if necessary } }
Possible Optimization 1 =
Minimize the use of inline assembly for operations that can be performed with Solidity's built-in functionality. While assembly can offer gas savings, it also bypasses Solidity's safety checks, increasing the risk of errors.
Code Snippet:
// Original assembly code for pointer adjustment // assembly { // ptr := add(_in, 32) // } // Optimized Solidity approach MemoryPointer ptr = MemoryPointer.wrap(uint256(uint160(address(_in))) + 32);
Estimated Gas Saved = This change might not directly result in gas savings. However, it enhances the safety and maintainability of the code, potentially preventing costly errors.
Possible Optimization 2 =
Optimize the _decodeLength() function by consolidating similar logic branches and removing redundant checks. This can reduce the overall bytecode size and gas usage.
Code Snippet:
function _decodeLength(RLPItem memory _in) private pure returns (uint256 offset_, uint256 length_, RLPItemType type_) { // Simplify the prefix checks and combine similar logic paths uint256 prefix = uint256(uint8(_in.ptr[0])); if (prefix <= 0x7f) { return (0, 1, RLPItemType.DATA_ITEM); } else if (prefix <= 0xbf) { // Combine short and long string logic uint256 lenOfStrLen = (prefix <= 0xb7) ? 1 : prefix - 0xb7; uint256 strLen = (prefix <= 0xb7) ? prefix - 0x80 : uint256(uint8(_in.ptr[lenOfStrLen])); return (lenOfStrLen, strLen, RLPItemType.DATA_ITEM); } else { // Combine short and long list logic uint256 lenOfListLen = (prefix <= 0xf7) ? 1 : prefix - 0xf7; uint256 listLen = (prefix <= 0xf7) ? prefix - 0xc0 : uint256(uint8(_in.ptr[lenOfListLen])); return (lenOfListLen, listLen, RLPItemType.LIST_ITEM); } }
Estimated Gas Saved = This optimization could save gas by reducing the complexity and size of the _decodeLength function. The exact savings depend on how frequently this function is called and the distribution of RLP item types it processes.
Possible Optimization 3 =
When reading RLP lists, dynamically adjust the size of the output array based on the actual number of items, rather than using a fixed maximum size.
Code Snippet:
function readList(RLPItem memory _in) internal pure returns (RLPItem[] memory out_) { // Existing logic to determine list length and item count... out_ = new RLPItem[](itemCount); // Allocate memory based on actual item count // Populate the `out_` array with RLP items... }
Estimated Gas Saved = This change can significantly reduce gas costs associated with memory allocation and unused array space. The savings vary based on the actual number of items in RLP lists being processed.
Possible Optimization 1 =
For _parseProof(), reduce the overhead of repeatedly decoding RLP items during proof parsing by directly accessing the decoded data when possible.
Code Snippet:
function _parseProof(bytes[] memory _proof) private pure returns (TrieNode[] memory proof_) { uint256 length = _proof.length; proof_ = new TrieNode[](length); for (uint256 i = 0; i < length; i++) { bytes memory encoded = _proof[i]; RLPReader.RLPItem[] memory decoded = RLPReader.readList(encoded); proof_[i] = TrieNode({ encoded: encoded, decoded: decoded }); } }
Estimated Gas Saved = This change itself may not directly save a significant amount of gas per operation but improves code clarity and potentially reduces computational redundancy, indirectly affecting gas usage.
Possible Optimization 2 =
Simplify the _getNodeID() function to avoid unnecessary conditional checks and direct manipulation.
Code Snippet:
function _getNodeID(RLPReader.RLPItem memory _node) private pure returns (bytes memory id_) { // Assuming RLPReader.readRawBytes already handles the length check internally id_ = RLPReader.readRawBytes(_node); }
Estimated Gas Saved = This optimization minimizes the execution path and removes redundant checks, potentially saving gas for each node ID generation. The exact savings depend on the frequency and context in which _getNodeID is called.
Possible Optimization 3 =
Optimize the _getSharedNibbleLength() function to use assembly for loop unrolling and memory access, reducing the overhead of high-level operations.
Code Snippet:
// Note: Use of inline assembly can increase complexity and should be used cautiously.
function _getSharedNibbleLength(bytes memory _a, bytes memory _b) private pure returns (uint256 shared_) {
    assembly {
        // Yul has no ternary operator, so select the shorter length with an `if`.
        let minLength := mload(_a)
        let bLength := mload(_b)
        if lt(bLength, minLength) { minLength := bLength }
        let aPtr := add(_a, 0x20)
        let bPtr := add(_b, 0x20)
        for { let i := 0 } lt(i, minLength) { i := add(i, 1) } {
            let aByte := byte(0, mload(add(aPtr, i)))
            let bByte := byte(0, mload(add(bPtr, i)))
            if iszero(eq(aByte, bByte)) { break }
            shared_ := add(shared_, 1)
        }
    }
}
Estimated Gas Saved = Using assembly for byte comparison can significantly reduce the gas cost of iterating through byte arrays. However, the savings come with increased code complexity and potential security risks. It's crucial to thoroughly test and review any assembly code.
Possible Optimization 1 =
In _addInstances(), perform batch checks for instance registration to minimize redundant storage reads.
Code Snippet:
function _addInstances(address[] memory _instances, bool instantValid) private returns (uint256[] memory ids) { ids = new uint256[](_instances.length); uint64 validSince = uint64(block.timestamp + (instantValid ? 0 : INSTANCE_VALIDITY_DELAY)); for (uint256 i = 0; i < _instances.length; ++i) { require(!_instances[i].isZero(), "SGX_INVALID_INSTANCE"); require(!addressRegistered[_instances[i]], "SGX_ALREADY_ATTESTED"); addressRegistered[_instances[i]] = true; uint256 instanceId = nextInstanceId++; instances[instanceId] = Instance(_instances[i], validSince); ids[i] = instanceId; emit InstanceAdded(instanceId, _instances[i], address(0), validSince); } }
Estimated Gas Saved = Consolidating checks and operations into a single loop reduces the gas cost by minimizing the number of storage reads and writes. The exact savings depend on the number of instances being added.
Possible Optimization 2 =
Optimize the _isInstanceValid() function by reducing conditional checks and leveraging short-circuit evaluation.
Code Snippet:
function _isInstanceValid(uint256 id, address instance) private view returns (bool) { Instance memory inst = instances[id]; return inst.addr == instance && inst.validSince <= block.timestamp && block.timestamp <= inst.validSince + INSTANCE_EXPIRY; }
Estimated Gas Saved = This optimization reduces the computational overhead by directly accessing the instance once and using its cached values for comparison. While the gas savings per call might be minor, it enhances the function's efficiency, especially when called frequently.
Possible Optimization 3 =
Minimize external calls and redundant data processing in verifyProof() and related functions.
Code Snippet:
// Assuming getSignedHash and other related functions are optimized similarly. function verifyProof( Context calldata _ctx, TaikoData.Transition calldata _tran, TaikoData.TierProof calldata _proof ) external onlyFromNamed("taiko") { if (_ctx.isContesting || _proof.data.length != 89) return; // Simplify signature verification logic here, assuming getSignedHash is optimized. }
Estimated Gas Saved = Streamlining data handling and reducing external calls in critical paths like proof verification can lead to significant gas savings, especially for operations that are executed frequently or involve complex data.
Possible Optimization 1 =
Implement batch processing for grant and withdrawal operations to minimize the number of transactions and reduce gas costs associated with repeated contract calls.
Code Snippet:
function grantMultiple(address[] calldata _recipients, Grant[] calldata _grants) external onlyOwner { require(_recipients.length == _grants.length, "Mismatched arrays"); for (uint256 i = 0; i < _recipients.length; i++) { _grant(_recipients[i], _grants[i]); } }
Estimated Gas Saved = This approach can significantly reduce gas costs when granting to multiple recipients by consolidating multiple transactions into a single call. The exact savings depend on the number of grants being processed.
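A similar batch entry point for withdrawals is sketched below; withdrawMultiple is not an existing function and simply reuses the internal _withdraw(recipient, to) flow shown later in this report, sending each recipient's vested tokens to the recipient itself. Whether such a function should be permissionless is a protocol design decision.
// Hypothetical batch wrapper around the existing per-recipient withdrawal.
function withdrawMultiple(address[] calldata _recipients) external {
    for (uint256 i = 0; i < _recipients.length; i++) {
        _withdraw(_recipients[i], _recipients[i]);
    }
}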
Possible Optimization 2 =
For _withdraw(), minimize redundant reads from and writes to storage, especially for the mappings and state variables that are frequently accessed.
Code Snippet:
function _withdraw(address _recipient, address _to) private { Recipient storage r = recipients[_recipient]; uint128 amountToWithdraw; uint128 costToWithdraw; (,,, amountToWithdraw, costToWithdraw) = getMyGrantSummary(_recipient); // Perform calculations and updates in memory before writing to storage r.amountWithdrawn += amountToWithdraw; r.costPaid += costToWithdraw; // Update storage once after all calculations totalAmountWithdrawn += amountToWithdraw; totalCostPaid += costToWithdraw; }
Estimated Gas Saved = Reducing the frequency of storage operations can lead to moderate gas savings, especially in functions called frequently by users.
Possible Optimization 3 =
For _validateGrant(), streamline the grant validation logic to reduce computational overhead and simplify the code.
Code Snippet:
function _validateGrant(Grant memory _grant) private pure { require(_grant.amount > 0, "INVALID_GRANT"); require(_grant.grantPeriod > 0, "INVALID_GRANT_PERIOD"); // Simplify cliff validation based on grant and unlock periods require(_grant.grantCliff <= _grant.grantStart + _grant.grantPeriod, "INVALID_GRANT_CLIFF"); require(_grant.unlockCliff <= _grant.unlockStart + _grant.unlockPeriod, "INVALID_UNLOCK_CLIFF"); }
Estimated Gas Saved = While the direct gas savings from simplifying validation logic may be minor, the reduced complexity can lead to fewer errors and more efficient execution.
Possible Optimization 1 =
In addRevokedCertSerialNum(), skip serial numbers that are already marked as revoked so that only new entries trigger an SSTORE.
Code Snippet:
function addRevokedCertSerialNum(uint256 index, bytes[] calldata serialNumBatch) external onlyOwner { for (uint256 i = 0; i < serialNumBatch.length; ++i) { bytes memory serialNum = serialNumBatch[i]; if (!_serialNumIsRevoked[index][serialNum]) { _serialNumIsRevoked[index][serialNum] = true; } } }
Estimated Gas Saved = The savings scale with the size of serialNumBatch.
Possible Optimization 2 =
Cache the qeIdentity struct in memory when it is used multiple times within _verifyParsedQuote(), avoiding repeated storage reads.
Code Snippet:
function _verifyParsedQuote(V3Struct.ParsedV3QuoteStruct memory v3quote) internal view returns (bool, bytes memory) { EnclaveIdStruct.EnclaveId memory enclaveId = qeIdentity; // Cached for multiple uses within the function // Use enclaveId in subsequent operations... }
Possible Optimization 1 =
In splitCertificateChain(), track the scan position between certificates so each iteration searches only the remaining portion of the PEM chain.
Code Snippet:
function splitCertificateChain(bytes memory pemChain, uint256 size) external pure returns (bool success, bytes[] memory certs) { certs = new bytes[](size); uint256 start = 0; for (uint256 i = 0; i < size; ++i) { uint256 beginPos = pemChain.indexOf(abi.encodePacked(HEADER), start); uint256 endPos = pemChain.indexOf(abi.encodePacked(FOOTER), beginPos) + FOOTER_LENGTH; if (beginPos == type(uint256).max || endPos == type(uint256).max) { return (false, certs); } certs[i] = pemChain.slice(beginPos + HEADER_LENGTH, endPos - beginPos - HEADER_LENGTH - FOOTER_LENGTH); start = endPos; } return (true, certs); }
Possible Optimization 2 =
In decodeCert(), decode the certificate once and reuse the decoded values instead of re-parsing the DER data for each field.
Code Snippet:
function decodeCert(bytes memory der, bool isPckCert) external pure returns (bool success, ECSha256Certificate memory cert) { // Decode the certificate once and use the decoded values throughout the function DecodedCert memory decoded = decodeCertificate(der); if (!validateDecodedCert(decoded, isPckCert)) { return (false, cert); } // Use decoded values to populate the cert struct cert.serialNumber = decoded.serialNumber; cert.notBefore = decoded.notBefore; cert.notAfter = decoded.notAfter; // Additional logic to populate the cert struct using decoded values success = true; }
Possible Optimization 1 =
In parseInput(), compute localAuthDataSize directly from the quote bytes and inline the parsing steps to avoid intermediate substring allocations.
Code Snippet:
function parseInput(
    bytes memory quote,
    address pemCertLibAddr
)
    internal
    pure
    returns (bool success, V3Struct.ParsedV3QuoteStruct memory v3ParsedQuote)
{
    if (quote.length <= MINIMUM_QUOTE_LENGTH) {
        return (false, v3ParsedQuote);
    }
    // Directly calculate localAuthDataSize without substring creation
    uint256 localAuthDataSize = littleEndianDecode(quote, 432, 4);
    if (quote.length - 436 != localAuthDataSize) {
        return (false, v3ParsedQuote);
    }
    // Inline parsing operations to avoid unnecessary memory allocations
    (success, v3ParsedQuote.header) = parseAndVerifyHeader(quote, 0, 48);
    (success, v3ParsedQuote.localEnclaveReport) = parseEnclaveReport(quote, 48, 384);
    (success, v3ParsedQuote.v3AuthData) = parseAuthDataAndVerifyCertType(quote, 436, localAuthDataSize, pemCertLibAddr);
    return (success, v3ParsedQuote);
}
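The littleEndianDecode helper used above is assumed rather than an existing library function; a minimal sketch of such a helper, reading len bytes at offset as a little-endian unsigned integer, could be:
function littleEndianDecode(bytes memory data, uint256 offset, uint256 len)
    internal
    pure
    returns (uint256 value)
{
    for (uint256 i = 0; i < len; ++i) {
        // Each subsequent byte contributes a higher-order byte in little-endian order.
        value |= uint256(uint8(data[offset + i])) << (8 * i);
    }
}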
Possible Optimization 2 =
In parseCerificationChainBytes(), split and decode the certificate chain in one pass to avoid redundant library calls.
Code Snippet:
function parseCerificationChainBytes(bytes memory certBytes, address pemCertLibAddr) internal pure returns (bytes[3] memory certChainData) { // Utilize low-level operations for splitting and decoding the certificate chain (bool success, bytes[] memory certs) = splitAndDecodeCertificateChain(certBytes, 3); require(success, "Certificate chain parsing failed"); for (uint256 i = 0; i < certs.length; ++i) { // Decode each certificate directly if necessary, avoiding redundant library calls certChainData[i] = decodeCertificate(certs[i]); } }
Possible Optimization 1 =
Streamline the length decoding of ASN.1 elements, especially for elements with lengths that can be determined without additional computation.
Code Snippet:
function _readNodeLength(bytes memory der, uint256 ix) private pure returns (uint256) { uint256 length; uint80 ixFirstContentByte; uint80 ixLastContentByte; uint8 lengthByte = uint8(der[ix + 1]); if (lengthByte < 0x80) { length = lengthByte; ixFirstContentByte = uint80(ix + 2); } else { uint8 numLengthBytes = lengthByte & 0x7F; length = der.readUintN(ix + 2, numLengthBytes); ixFirstContentByte = uint80(ix + 2 + numLengthBytes); } ixLastContentByte = uint80(ixFirstContentByte + length - 1); return NodePtr.getPtr(ix, ixFirstContentByte, ixLastContentByte); }
Estimated Gas Saved = This optimization reduces the overhead of reading ASN.1 elements, especially for those with direct length specifications. The savings would be more significant in contracts that frequently parse ASN.1 structures.
Possible Optimization 2 =
In bitstringAt(), skip re-validating the element type when the caller has already checked it.
Code Snippet:
function bitstringAt(bytes memory der, uint256 ptr) internal pure returns (bytes memory) { // Assume type BIT STRING validation is done prior to calling this function uint256 valueLength = ptr.ixl() + 1 - ptr.ixf(); return der.substring(ptr.ixf() + 1, valueLength - 1); }
Estimated Gas Saved = Assuming the type of the ASN.1 element has already been validated, this optimization can modestly reduce gas costs. The impact would be more pronounced in parsing operations that are executed frequently within the contract.
Possible Optimization 1 =
In readUint16(), read both bytes with a single explicit bounds check and simple arithmetic.
Code Snippet:
function readUint16(bytes memory self, uint256 idx) internal pure returns (uint16 ret) { require(idx + 2 <= self.length, "invalid idx"); ret = uint16(uint8(self[idx])) * 256 + uint16(uint8(self[idx + 1])); }
Possible Optimization 2 =
Optimize the substring function to avoid unnecessary memory allocation and copying when the requested substring represents the entire string or a suffix starting from a certain position.
Code Snippet:
function substring( bytes memory self, uint256 offset, uint256 len ) internal pure returns (bytes memory) { require(offset + len <= self.length, "unexpected offset"); if (offset == 0 && len == self.length) { return self; } bytes memory ret = new bytes(len); for (uint256 i = 0; i < len; ++i) { ret[i] = self[offset + i]; } return ret; }
#0 - c4-judge
2024-04-10T10:11:42Z
0xean marked the issue as grade-b