Platform: Code4rena
Start Date: 28/09/2023
Pot Size: $36,500 USDC
Total HM: 5
Participants: 115
Period: 6 days
Judge: 0xDjango
Total Solo HM: 1
Id: 290
League: ETH
Rank: 18/115
Findings: 3
Award: $235.12
🌟 Selected for report: 0
🚀 Solo Findings: 0
🌟 Selected for report: 0xDetermination
Also found by: DeFiHackLabs, Norah, Pessimistic, PwnStars, SpicyMeatball, Testerbot, ThreeSigma, bin2chen, blutorque, deadrxsezzz, dirk_y, ether_sky, hals, neumo, rokinot, said, seerether, turvy_fuzz
198.4834 USDC - $198.48
https://github.com/code-423n4/2023-09-venus/blob/main/contracts/Tokens/Prime/Prime.sol#L221
If a user token is minted in the middle of a round (via a call to `claim` or `issue`), `pendingScoreUpdates` is not updated, so there would be more users with pending score updates than the value stored in the `pendingScoreUpdates` variable. This could cause a DoS in calls to the `updateScores` function.
Now say the system has 10 users pending a score update, and a new user claims a Prime token by calling the `claim` function. After some time, `updateScores` is called for that user to accrue interest and update their score, so `pendingScoreUpdates` is decremented (to 9) in line 221. We still have 10 users pending a score update, but the `pendingScoreUpdates` value is 9. So the last call to `updateScores`, trying to update the scores of those 10 users, will revert due to an underflow in line 221.
All this could happen accidentally, or a malicious user could frontrun calls to `updateScores` and make them revert, provided they have one or more Prime tokens to claim.
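The counter mismatch can be illustrated with a minimal TypeScript sketch (a hypothetical model of the bookkeeping, not the actual Venus contracts): if a mint during a round does not increment the pending counter, the final batch of updates underflows, while incrementing it in the mint path keeps the counter consistent.

```typescript
// Hypothetical model of Prime's pending-score bookkeeping (not the real contract).
class ScoreRound {
  pendingScoreUpdates = 0;
  private holders = new Set<string>();
  private updated = new Set<string>();

  // Starting a round marks every current holder as pending.
  startRound() {
    this.updated.clear();
    this.pendingScoreUpdates = this.holders.size;
  }

  // Models `claim`/`issue`; the buggy variant omits the counter increment.
  mint(user: string, incrementPending: boolean) {
    this.holders.add(user);
    if (incrementPending) this.pendingScoreUpdates++;
  }

  // Models one decrement at line 221; throws on underflow like checked arithmetic.
  updateScore(user: string) {
    if (this.updated.has(user)) return;
    if (this.pendingScoreUpdates === 0) throw new Error("underflow");
    this.pendingScoreUpdates--;
    this.updated.add(user);
  }
}
```

In the buggy variant, updating the mid-round minter first consumes one decrement that was never accounted for, so the last pre-round user triggers the underflow, mirroring the revert in line 221.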
The following test can be added in file Prime.ts after line 801:
```typescript
it("Cannot update all pending scores if a user is minted while in a round", async () => {
  // We issue 2 tokens
  await prime.issue(true, [user1.getAddress(), user2.getAddress()]);
  // Update alpha to force a new round of score updates
  await prime.updateAlpha(4, 5); // 0.80
  // We issue 1 more token (but the round has already started)
  await prime.issue(true, [user3.getAddress()]);
  // Update the score of the last added user
  await prime.updateScores([user3.getAddress()]);
  // Update the score of both initial users. There is an underflow when trying to decrement pendingScoreUpdates
  await expect(prime.updateScores([user1.getAddress(), user2.getAddress()])).to.be.revertedWithPanic(0x11);
});
```
Manual review and Hardhat tests.
The `_mint` function should increment the value of `pendingScoreUpdates`.
Under/Overflow
#0 - c4-pre-sort
2023-10-04T22:42:19Z
0xRobocop marked the issue as duplicate of #555
#1 - c4-judge
2023-11-01T19:52:18Z
fatherGoose1 marked the issue as satisfactory
#2 - c4-judge
2023-11-05T00:52:22Z
fatherGoose1 changed the severity to 3 (High Risk)
🌟 Selected for report: 0xDetermination
Also found by: 0xblackskull, 0xweb3boy, ADM, Breeje, Pessimistic, PwnStars, SBSecurity, Satyam_Sharma, ThreeSigma, al88nsk, blutorque, debo, dethera, maanas, neumo, oakcobalt, pina, said, sces60107, tapir, tsvetanovv, xAriextz
32.2731 USDC - $32.27
https://github.com/code-423n4/2023-09-venus/blob/main/contracts/Tokens/Prime/Prime.sol#L208
The `updateScores` function of the `Prime` contract uses a gas optimization (`unchecked` blocks) when incrementing its loop counters.
```solidity
200     function updateScores(address[] memory users) external {
201         if (pendingScoreUpdates == 0) revert NoScoreUpdatesRequired();
202         if (nextScoreUpdateRoundId == 0) revert NoScoreUpdatesRequired();
203
204         for (uint256 i = 0; i < users.length; ) {
205             address user = users[i];
206
207             if (!tokens[user].exists) revert UserHasNoPrimeToken();
208             if (isScoreUpdated[nextScoreUpdateRoundId][user]) continue;
209
210             address[] storage _allMarkets = allMarkets;
211             for (uint256 j = 0; j < _allMarkets.length; ) {
212                 address market = _allMarkets[j];
213                 _executeBoost(user, market);
214                 _updateScore(user, market);
215
216                 unchecked {
217                     j++;
218                 }
219             }
220
221             pendingScoreUpdates--;
222             isScoreUpdated[nextScoreUpdateRoundId][user] = true;
223
224             unchecked {
225                 i++;
226             }
227
228             emit UserScoreUpdated(user);
229         }
230     }
```
As we can see, there is an `unchecked` block in lines 224-226 which increments the outer loop variable `i`. But line 208 contains a condition that skips the rest of the iteration and re-enters the loop. The issue is that `i` is only incremented at the end of the loop body, so if the condition in line 208 is true, the loop spins on the same user until all the gas of the transaction is consumed.
So if the array of users passed to the function contains at least one user whose score has already been updated for the current `nextScoreUpdateRoundId`, the call will run out of gas. The function will presumably be called with a large array of users, so the gas sent will presumably be a considerable amount, which will be lost if this situation happens.
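The non-termination can be reproduced in a small gas-metered TypeScript model (hypothetical, not the EVM): the buggy skip path never advances `i`, while advancing it before `continue`, as in the recommended fix, lets the loop terminate.

```typescript
// Hypothetical gas-metered model of the updateScores outer loop (not the real EVM).
function runLoop(users: string[], isUpdated: Set<string>, applyFix: boolean): "ok" | "out of gas" {
  let gas = 10_000; // arbitrary budget; each iteration burns one unit
  for (let i = 0; i < users.length; ) {
    if (--gas === 0) return "out of gas"; // models the transaction reverting OOG
    if (isUpdated.has(users[i])) {
      if (applyFix) i++; // recommended fix: advance the counter before skipping
      continue; // buggy path: `i` is never incremented, so this spins forever
    }
    isUpdated.add(users[i]);
    i++;
  }
  return "ok";
}
```

With one already-updated user in the array, the unfixed loop exhausts the whole budget; the fixed loop skips that user and finishes.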
The following test can be added in file Prime.ts after line 801:
```typescript
it("infinite loop OOG revert", async () => {
  // We issue 2 tokens
  await prime.issue(true, [user1.getAddress(), user2.getAddress()]);
  // Update alpha to force a new round of score updates
  await prime.updateAlpha(4, 5); // 0.80
  // Update the score of one of them
  await prime.updateScores([user1.getAddress()]);
  // Update the score of both users. As the first one is already updated, we enter the infinite loop
  await expect(prime.updateScores([user1.getAddress(), user2.getAddress()])).to.be.revertedWithoutReason();
});
```
Manual review and Hardhat tests.
This line of the `updateScores` function:

```solidity
208             if (isScoreUpdated[nextScoreUpdateRoundId][user]) continue;
```

could be changed to:

```solidity
208             if (isScoreUpdated[nextScoreUpdateRoundId][user]) {
209                 unchecked { i++; }
210                 continue;
211             }
```
DoS
#0 - c4-pre-sort
2023-10-04T22:45:56Z
0xRobocop marked the issue as duplicate of #556
#1 - c4-judge
2023-11-01T19:51:45Z
fatherGoose1 marked the issue as satisfactory
🌟 Selected for report: Bauchibred
Also found by: 0x3b, 0xDetermination, 0xMosh, 0xScourgedev, 0xTheC0der, 0xTiwa, 0xWaitress, 0xdice91, 0xfusion, 0xpiken, 0xprinc, 0xweb3boy, ArmedGoose, Aymen0909, Breeje, Brenzee, Daniel526, DavidGiladi, DeFiHackLabs, Flora, Fulum, HChang26, Hama, IceBear, J4X, Krace, KrisApostolov, Maroutis, Mirror, MohammedRizwan, Norah, PwnStars, SPYBOY, TangYuanShen, Testerbot, ThreeSigma, Tricko, al88nsk, alexweb3, ast3ros, berlin-101, bin2chen, blutorque, btk, d3e4, deth, e0d1n, ether_sky, ge6a, gkrastenov, glcanvas, hals, imare, inzinko, jkoppel, jnforja, joaovwfreire, josephdara, kutugu, lotux, lsaudit, mahdirostami, merlin, n1punp, nadin, neumo, nisedo, nobody2018, oakcobalt, orion, peanuts, pep7siup, pina, ptsanev, rokinot, rvierdiiev, said, santipu_, sashik_eth, seerether, squeaky_cactus, terrancrypt, tonisives, twicek, vagrant, xAriextz, y4y
4.3669 USDC - $4.37
In the `PrimeLiquidityProvider` contract, the constant `MAX_DISTRIBUTION_SPEED` is set to 1e18. This constant caps the distribution speed of tokens, i.e. the number of tokens that can be accrued per block.
```solidity
function _setTokenDistributionSpeed(address token_, uint256 distributionSpeed_) internal {
    if (distributionSpeed_ > MAX_DISTRIBUTION_SPEED) {
        revert InvalidDistributionSpeed(distributionSpeed_, MAX_DISTRIBUTION_SPEED);
    }

    if (tokenDistributionSpeeds[token_] != distributionSpeed_) {
        // Distribution speed updated so let's update distribution state to ensure that
        // 1. Token accrued properly for the old speed, and
        // 2. Token accrued at the new speed starts after this block.
        accrueTokens(token_);

        // Update speed
        tokenDistributionSpeeds[token_] = distributionSpeed_;

        emit TokenDistributionSpeedUpdated(token_, distributionSpeed_);
    }
}
```
This makes it impossible to use low-valued tokens such as SHIB.
SHIB is a popular ERC20 token with 18 decimals and a current value of 0.00000723 USD. This means that, using a distribution speed equal to `MAX_DISTRIBUTION_SPEED`, the total value accrued after 1,000,000 blocks would be 7.23 USD, which makes the use of such a token impractical.
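The arithmetic of the example can be checked with a short TypeScript snippet (the price and block count are the example figures above, not live data):

```typescript
// Back-of-the-envelope check of the SHIB example (figures from the text, not live data).
const MAX_DISTRIBUTION_SPEED = 10n ** 18n; // wei-denominated tokens per block
const DECIMALS = 10n ** 18n;               // SHIB has 18 decimals
const BLOCKS = 1_000_000n;
const SHIB_PRICE_USD = 0.00000723;         // example price from the report

// Whole tokens accrued at the maximum speed over BLOCKS blocks.
const tokensAccrued = Number((MAX_DISTRIBUTION_SPEED * BLOCKS) / DECIMALS);
const usdValue = tokensAccrued * SHIB_PRICE_USD;
console.log(usdValue.toFixed(2)); // ≈ 7.23
```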
Manual review.
Don't make the distribution speed cap a constant value; make it editable by governance.
Another option would be to create a mapping that stores a maximum distribution speed per initialized token. This would be more flexible, but also more gas intensive.
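As a sketch of the second option, the following TypeScript models a governance-settable per-token cap (all names are illustrative; this is not the `PrimeLiquidityProvider` API):

```typescript
// Illustrative per-token distribution-speed cap, settable only by a governance address.
// Names and structure are hypothetical, not the actual Venus contracts.
class DistributionConfig {
  private caps = new Map<string, bigint>();
  constructor(private governance: string) {}

  // Only governance may change a token's cap.
  setMaxDistributionSpeed(caller: string, token: string, cap: bigint) {
    if (caller !== this.governance) throw new Error("unauthorized");
    this.caps.set(token, cap);
  }

  // Mirrors the check in _setTokenDistributionSpeed, but against the per-token cap.
  validateSpeed(token: string, speed: bigint) {
    const cap = this.caps.get(token) ?? 10n ** 18n; // default to the old constant
    if (speed > cap) throw new Error("InvalidDistributionSpeed");
  }
}
```

A low-valued token like SHIB could then be given a higher cap by governance, while unconfigured tokens keep the current 1e18 behavior.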
Other
#0 - c4-pre-sort
2023-10-06T00:31:09Z
0xRobocop marked the issue as duplicate of #414
#1 - c4-judge
2023-10-31T17:15:22Z
fatherGoose1 marked the issue as unsatisfactory: Invalid
#2 - neumoxx
2023-11-07T07:47:21Z
My report is marked as a duplicate of #414, but here I talk about the practical impossibility of using certain tokens such as SHIB, not any problem related to token decimals (as stated in #414). The use of a hardcoded value of MAX_DISTRIBUTION_SPEED makes it impossible to use SHIB and many other tokens, and this cannot be solved by setting the distribution speed to any value (as you suggest there), because it is always capped by the max.
#3 - c4-judge
2023-11-08T02:11:59Z
fatherGoose1 marked the issue as not a duplicate
#4 - fatherGoose1
2023-11-08T02:16:40Z
Agree this is not a duplicate. I find this to be QA. This issue only fits a very small selection of tokens. SHIB isn't present in Venus's current markets.
#5 - c4-judge
2023-11-08T02:16:48Z
fatherGoose1 changed the severity to QA (Quality Assurance)
#6 - c4-judge
2023-11-08T02:16:53Z
fatherGoose1 marked the issue as grade-b
#7 - neumoxx
2023-11-08T06:07:27Z
> Agree this is not a duplicate. I find this to be QA. This issue only fits a very small selection of tokens. SHIB isn't present in Venus's current markets.
SHIB is not, but for instance BTT is. You can see here https://www.coingecko.com/en/coins/bittorrent that its value is even lower than SHIB's, and it also has 18 decimals. BTT is one of the tokens used in Venus here https://app.venus.io/#/isolated-pools/pool/0x23b4404E4E5eC5FF5a6FFb70B7d14E3FabF237B0.
So please consider medium severity here, given that at least one token used by Venus could suffer the same impact as my SHIB example.
#8 - fatherGoose1
2023-11-08T20:03:36Z
@neumoxx you have referenced an isolated pool, while only core pool tokens are a part of Prime.
"Interest reserves (part of the protocol income) from core pool markets are sent to the PSR (Protocol Share Reserve) contract."
#9 - neumoxx
2023-11-08T21:07:45Z
Sorry then, my bad. This report can then only show a future issue, in case a token such as SHIB or BTT is added to the core pool.
Thanks for your work @fatherGoose1