Platform: Code4rena
Start Date: 01/08/2023
Pot Size: $91,500 USDC
Total HM: 14
Participants: 80
Period: 6 days
Judge: gzeon
Total Solo HM: 6
Id: 269
League: ETH
Rank: 15/80
Findings: 2
Award: $758.30
🌟 Selected for report: 1
🚀 Solo Findings: 0
🌟 Selected for report: said
Also found by: 0xmuxyz, LokiThe5th, Satyam_Sharma, T1MOH, Team_FliBit, radev_sw
482.4792 USDC - $482.48
The `poolMatchesOracle` function will mistakenly always return `false` (indicating that the Uniswap and oracle prices differ so much that the price must have been manipulated, causing calling functions to fail and revert) for most common pool pairs, making it impossible to use most token pairs for trading.
This issue happens inside the GeVault.sol contract, in the `poolMatchesOracle` function. This is the function code:
```solidity
function poolMatchesOracle() public view returns (bool matches){
  (uint160 sqrtPriceX96,,,,,,) = uniswapPool.slot0();

  uint decimals0 = token0.decimals();
  uint decimals1 = token1.decimals();
  uint priceX8 = 10**decimals0;
  // Overflow if dont scale down the sqrtPrice before div 2*192
  priceX8 = priceX8 * uint(sqrtPriceX96 / 2 ** 12) ** 2 * 1e8 / 2**168;
  priceX8 = priceX8 / 10**decimals1;
  uint oraclePrice = 1e8 * oracle.getAssetPrice(address(token0)) / oracle.getAssetPrice(address(token1));
  if (oraclePrice < priceX8 * 101 / 100 && oraclePrice > priceX8 * 99 / 100) matches = true;
}
```
In the project test files, this function is tested against the Uniswap V3 WETH-USDC 0.05% fee pool (https://etherscan.io/address/0x88e6a0c2ddd26feeb64f039a2c41296fcb3f5640#readContract) at block number 16360000 (although the issue happens at any block number, including the current one).
At that block, the `sqrtPriceX96` value returned by calling the `slot0` function of the linked pool contract was `1844325381659723668085984578168907`; the `token0` in that pool contract is USDC, with 6 decimals (https://etherscan.io/address/0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48#readProxyContract).
Let's focus on the math in the function:
```solidity
uint priceX8 = 10**decimals0;
priceX8 = priceX8 * uint(sqrtPriceX96 / 2 ** 12) ** 2 * 1e8 / 2**168;
priceX8 = priceX8 / 10**decimals1;
```
Using the parameters from the tests (`decimals0 = 6`, `decimals1 = 18` and `sqrtPriceX96 = 1844325381659723668085984578168907`), everything works and the tests pass. But is that enough?
Well, many pairs do not have a `token0` with 6 decimals and a `token1` with 18 decimals. For example, here is a very common trading pool: WETH-USDT (0.05%) on Uniswap V3 (https://etherscan.io/address/0x11b815efb8f581194ae79006d24e0d814b7697f6#readContract).
This pair has WETH, with 18 decimals, as `token0` and USDT, with 6 decimals, as `token1`.
Why does this matter? Any pair where `token0.decimals()` is greater than `token1.decimals()` will cause the resulting `priceX8` to be 0 (due to the final division in the code: https://github.com/code-423n4/2023-08-goodentry/blob/4b785d455fff04629d8675f21ef1d1632749b252/contracts/GeVault.sol#L375), so the subsequent check concludes the price was manipulated and returns `false`, breaking every dependent function even though the Uniswap price actually matched the oracle price and only the code failed.
Basically, this happens because the dividend ends up smaller than the divisor, and since Solidity does not support decimal numbers, the division rounds the result down to 0.
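This truncation is easy to reproduce with plain integer division; Python's `//` behaves like Solidity's division for non-negative values. The values below are illustrative, not taken from any specific pool:

```python
# Illustrative values: when the accumulated numerator is smaller than the
# final divisor, integer division truncates the result to 0.
decimals1 = 18
small_intermediate = 5 * 10 ** 6   # hypothetical small intermediate priceX8

final_priceX8 = small_intermediate // 10 ** decimals1
print(final_priceX8)  # 0 -> the "within 1%" comparison against the oracle
                      # can never pass with a zero price

# A dividend larger than the divisor survives the division:
print((5 * 10 ** 20) // 10 ** decimals1)  # 500
```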
All the functions that revert when the price is manipulated will also revert in this case, even though the price was not actually manipulated. This makes it impossible to rebalance the `GeVault`, since the `rebalance` function will always revert (https://github.com/code-423n4/2023-08-goodentry/blob/4b785d455fff04629d8675f21ef1d1632749b252/contracts/GeVault.sol#L203). The `withdraw` function won't work (https://github.com/code-423n4/2023-08-goodentry/blob/4b785d455fff04629d8675f21ef1d1632749b252/contracts/GeVault.sol#L215), and the `deposit` function won't work either (https://github.com/code-423n4/2023-08-goodentry/blob/4b785d455fff04629d8675f21ef1d1632749b252/contracts/GeVault.sol#L250).
I have created a standalone Foundry POC for you to test here: https://gist.github.com/TheNaubit/c049517c72128b2927b584951356283e
To run this POC, put it in a Foundry project (it is a standalone POC, so there is no need to install any other dependency or use any other contract). Put the content of the file inside a file called `POCMathError.t.sol` in the `test` folder and run it with the following command:
```shell
forge test --fork-url <RPC_MAINNET_URL> --fork-block-number 16360000 -vvv --match-contract POCMathError
```
We use that block number because it is the same one the protocol uses for its tests, but the issue occurs at any block number.
Remember to replace `<RPC_MAINNET_URL>` with your own mainnet RPC URL. You can get one for free from Alchemy, Infura, etc.
Manual review and Foundry.
There are multiple possible solutions, such as using `token1.decimals()` in place of `token0Decimals` when it is smaller than `token0.decimals()`. Another solution is normalizing each step of the math so it works with any number of decimals (for example, normalizing both token values to 18 decimals).
Math
#0 - c4-pre-sort
2023-08-09T12:55:15Z
141345 marked the issue as duplicate of #140
#1 - c4-judge
2023-08-20T17:32:24Z
gzeon-c4 marked the issue as satisfactory
🌟 Selected for report: Team_FliBit
Also found by: 0x70C9, 3docSec, 8olidity, DavidGiladi, Krace, LokiThe5th, Rolezn, Sathish9098, UniversalCrypto, banpaleo5, catellatech, digitizeworx, fatherOfBlocks, hpsb, j4ld1na, josephdara, kutugu, niser93, nonseodion, oakcobalt, osmanozdemir1, pep7siup, ravikiranweb3, said, sivanesh_808
275.8159 USDC - $275.82
In the GeVault.sol contract, it is impossible to remove a tick once it has been added to the `ticks[]` array.
There are methods to add, shift and modify ticks but none to remove ticks from the ticks array.
But if a faulty tick is added due to human error, there will be no way to remove it. It can only be overridden to 0 and shifted left to avoid breaking other functions, which is not an ideal solution.
Therefore, it is advisable to implement another function to allow removing a tick.
Faulty ticks cannot be removed once added; the only workaround is to override them to 0 and shift them to the left so that other functions do not break.
- `pushTick` function: https://github.com/code-423n4/2023-08-goodentry/blob/4b785d455fff04629d8675f21ef1d1632749b252/contracts/GeVault.sol#L116
- `shiftTick` function: https://github.com/code-423n4/2023-08-goodentry/blob/4b785d455fff04629d8675f21ef1d1632749b252/contracts/GeVault.sol#L137
- `modifyTick` function: https://github.com/code-423n4/2023-08-goodentry/blob/4b785d455fff04629d8675f21ef1d1632749b252/contracts/GeVault.sol#L167

Manual review.
Add a function to remove a faulty tick from the ticks array.
In the `addDust` function of the OptionsPositionManager.sol file, if `token0` or `token1` has more than 20 decimals, the transaction will always revert.
The function executes some code to scale the token values:
```solidity
uint scale0 = 10**(20 - ERC20(token0).decimals()) * oracle.getAssetPrice(token0) / 1e8;
uint scale1 = 10**(20 - ERC20(token1).decimals()) * oracle.getAssetPrice(token1) / 1e8;
```
But since the code computes `20 - token0Decimals` and `20 - token1Decimals`, if either token has more than 20 decimals the subtraction underflows, reverting the entire transaction and making it impossible to call this function for that token pair.
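The underflow can be sketched by emulating Solidity 0.8's checked subtraction with a hypothetical helper (Solidity reverts automatically; plain Python would just produce a negative number):

```python
def checked_sub(a: int, b: int) -> int:
    """Emulates Solidity >=0.8 checked subtraction on unsigned integers."""
    if b > a:
        raise OverflowError("arithmetic underflow: the transaction reverts")
    return a - b

# A standard 18-decimals token is fine: 10**(20 - 18) is the scale exponent
print(10 ** checked_sub(20, 18))  # 100

# A token with, say, 24 decimals makes 20 - decimals underflow:
try:
    checked_sub(20, 24)
except OverflowError as err:
    print(err)
```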
Token pairs in which at least one token has more than 20 decimals cannot be used in `addDust`, since those parameters will always cause the function to revert.
Manual review.
Since the function already contains logic to support different token decimals, a conditional could check whether a token has more than 20 decimals, together with additional logic to handle that case explicitly.
In the `initProxy` function of the TokenisableRange.sol contract, if `TOKEN0` or `TOKEN1` has 39 or more decimals, the math will overflow, causing the entire function to revert.
In the `initProxy` function there is some math to calculate the upper and lower ticks (https://github.com/code-423n4/2023-08-goodentry/blob/4b785d455fff04629d8675f21ef1d1632749b252/contracts/TokenisableRange.sol#L99-L100):
```solidity
int24 _upperTick = TickMath.getTickAtSqrtRatio( uint160( 2**48 * sqrt( (2 ** 96 * (10 ** TOKEN1.decimals)) * 1e10 / (uint256(startX10) * 10 ** TOKEN0.decimals) ) ) );
int24 _lowerTick = TickMath.getTickAtSqrtRatio( uint160( 2**48 * sqrt( (2 ** 96 * (10 ** TOKEN1.decimals)) * 1e10 / (uint256(endX10 ) * 10 ** TOKEN0.decimals) ) ) );
```
But if `TOKEN0.decimals` or `TOKEN1.decimals` is 39 or more, the math will overflow, causing the entire transaction to revert and making it impossible to initialize the contract for that token pair.
This makes it impossible to use this protocol feature with pairs where at least one token has 39 or more decimals. Not being able to use a protocol feature would normally be a high-severity issue, but since few tokens use 39 or more decimals, we set the impact to low.
Just try running the following math in Solidity with these parameters:

- `TOKEN1.decimals = 39`
- `TOKEN0.decimals = 18`
- `startX10 = 1e10` (comments in the code explain that this value is scaled by 1e10, so we can use 1e10 as a demo value)

You can try the following contract:
```solidity
// SPDX-License-Identifier: GPL-3.0
pragma solidity >=0.8.2 <0.9.0;

contract TestContract {
    function sqrt(uint x) internal pure returns (uint y) {
        uint z = (x + 1) / 2;
        y = x;
        while (z < y) {
            y = z;
            z = (x / z + z) / 2;
        }
    }

    function testThatReverts() public pure {
        uint256 TOKEN1Decimals = 39;
        uint256 TOKEN0Decimals = 18;
        uint256 startX10 = 1e10;
        uint160( 2**48 * sqrt( (2 ** 96 * (10 ** TOKEN1Decimals)) * 1e10 / (uint256(startX10) * 10 ** TOKEN0Decimals) ) );
    }

    function testThatWorks() public pure {
        uint256 TOKEN1Decimals = 38;
        uint256 TOKEN0Decimals = 18;
        uint256 startX10 = 1e10;
        uint160( 2**48 * sqrt( (2 ** 96 * (10 ** TOKEN1Decimals)) * 1e10 / (uint256(startX10) * 10 ** TOKEN0Decimals) ) );
    }
}
```
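Because Python integers are unbounded, the overflow boundary can also be verified directly by comparing the intermediate product against `2**256`, without deploying anything:

```python
# Check where the initProxy numerator crosses the uint256 limit.
UINT256_MAX = 2 ** 256 - 1

def numerator(token1_decimals: int) -> int:
    # The sub-expression (2**96 * 10**TOKEN1.decimals) * 1e10. In Solidity it
    # is evaluated before the division by (startX10 * 10**TOKEN0.decimals),
    # so a large divisor cannot prevent the overflow.
    return 2 ** 96 * 10 ** token1_decimals * 10 ** 10

print(numerator(38) <= UINT256_MAX)  # True: 38 decimals still fits
print(numerator(39) <= UINT256_MAX)  # False: 39 decimals overflows uint256
```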
Manual review, Foundry fuzzing, and Remix.
Restrict the supported decimals, change the way the math is done, or use a library like PRBMath to support bigger numbers without overflowing (https://github.com/PaulRBerg/prb-math).
The `sqrt` function used in the project overflows in some cases, making it suboptimal to use.
The `sqrt` function defined here and also here (which, by the way, should be a single shared implementation instead of duplicated code across contracts) does not work for every number in the uint256 range.
After some fuzz testing, we found it fails when computing the square root of the maximum uint256 value (`2**256 - 1`). The overflow happens because the `sqrt` function adds 1 to the input argument, and since the input is already the maximum uint256, adding 1 to it overflows.
This is clearly an edge case, but a proper `sqrt` function that accepts a `uint256` input should support every possible value of that type. In this case, `sqrt` should support any value from `0` to `2**256 - 1`, which it does not.
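The failing first step can be emulated with a hypothetical checked-add helper mirroring Solidity 0.8's overflow checks:

```python
UINT256_MAX = 2 ** 256 - 1

def checked_add(a: int, b: int) -> int:
    """Emulates Solidity >=0.8 checked addition on uint256."""
    if a + b > UINT256_MAX:
        raise OverflowError("arithmetic overflow: sqrt reverts on this input")
    return a + b

# First line of the project's sqrt: z = (x + 1) / 2
try:
    z = checked_add(UINT256_MAX, 1) // 2
except OverflowError as err:
    print(err)

# One less than the maximum works fine:
print(checked_add(UINT256_MAX - 1, 1) // 2)  # 2**255 - 1
```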
Suboptimal input-type support can cause unexpected reverts when computing the `sqrt` of some inputs.
`2**256 - 1`: https://github.com/code-423n4/2023-08-goodentry/blob/4b785d455fff04629d8675f21ef1d1632749b252/contracts/lib/Sqrt.sol#L9

Manual review and fuzz testing with Foundry.
Our recommendation is to use another `sqrt` function. There are two battle-tested options that are market standards:

- A `sqrt` implementation that supports `2**256 - 1` without reverting: https://ethereum.stackexchange.com/a/87713
- The PRBMath library, which has a `sqrt` method that also supports every input value: https://github.com/PaulRBerg/prb-math

In the LPOracle.sol and RoeRouter.sol files, there are functions that don't verify that the tokens sent in the params belong to the oracle/pool address also sent in the params. That allows human mistakes that lead to wrong price calculations.
There is no check to make sure that the tokens constituting the LP/oracle are the same as the Chainlink price tokens. Due to human error, you could end up with the price of token0 paired with the decimals of token1, which would completely destroy any price calculations.
Potential wrong price calculations if the wrong tokens, or the right tokens in the wrong order, are submitted when deploying the oracle.
Manual review.
Implement a check to verify that the tokens passed in the params are the same (and in the same order) as those used in the LP/oracle also passed in the params.
Unexpected return range based on the specified natspec docs.
In this case, the NatSpec function comment says its return value follows a linear model: from `baseFeeX4 / 2` to `baseFeeX4 * 2`.
But in the code, if `adjustedBaseFeeX4` is greater than `baseFeeX4 * 3/2` (which is smaller than `baseFeeX4 * 2`), the code caps `adjustedBaseFeeX4` at `baseFeeX4 * 3/2`, making it impossible to ever reach a value of `baseFeeX4 * 2` despite the NatSpec stating otherwise.
Above that function there is a comment saying `// Adjust from -50% to +50%`, but again, that does not match the function's NatSpec, which is the authoritative documentation for the function.
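To make the mismatch concrete, here is the capping step reduced to a few lines of Python (a simplified sketch of only the upper cap, not the full fee computation; names mirror the contract variables):

```python
def cap_adjusted_fee(adjusted_base_fee_x4: int, base_fee_x4: int) -> int:
    # The contract caps the adjusted fee at baseFeeX4 * 3/2,
    # not at the documented baseFeeX4 * 2.
    cap = base_fee_x4 * 3 // 2
    return cap if adjusted_base_fee_x4 > cap else adjusted_base_fee_x4

base_fee_x4 = 1000
# Even a candidate at the documented maximum (baseFeeX4 * 2 = 2000)
# is cut down to 1500:
print(cap_adjusted_fee(2 * base_fee_x4, base_fee_x4))  # 1500
# Values below the cap pass through unchanged:
print(cap_adjusted_fee(1200, base_fee_x4))  # 1200
```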
Since NatSpec documentation is considered part of the contract's public API, NatSpec-related issues are assigned a higher severity than other code-comment issues, and the code must follow the specified NatSpec, which is not happening here.
Manual review.
There are two options:

Option 1: fix the NatSpec comment so it matches the code:

```diff
- /// @dev Simple linear model: from baseFeeX4 / 2 to baseFeeX4 * 2
+ /// @dev Simple linear model: from baseFeeX4 / 2 to baseFeeX4 * 3 / 2
```

Option 2: fix the code so it matches the NatSpec:

```diff
- if (adjustedBaseFeeX4 > baseFeeX4 * 3 / 2) adjustedBaseFeeX4 = baseFeeX4 * 3 / 2;
+ if (adjustedBaseFeeX4 > baseFeeX4 * 2) adjustedBaseFeeX4 = baseFeeX4 * 2;
```
A very old version of OpenZeppelin is being used (`4.4.1`).

As said in the summary, the project uses a really old OpenZeppelin version (`4.4.1`), while the latest stable one is `4.9.3`.
Using old versions can lead to worse gas performance and security issues. We checked for known issues and, although the version in use has some, none of the audited contracts use the affected features. In any case, using an old version is not recommended, so we suggest upgrading to the latest stable version.
Using an old version can lead to worse gas performance, known contract issues, and potential 0-day issues.
Manual review and Snyk.io.
Upgrade to the latest stable OpenZeppelin version. At the time of the audit, that is `4.9.3`.
There are some occurrences of unused (dead) code in some contracts.
Avoiding unused code in the contracts is a good practice, since it helps keep the code clean and readable.
In RangeManager.sol, the `POS_MGR` constant is declared but never used, so it could be removed: https://github.com/code-423n4/2023-08-goodentry/blob/4b785d455fff04629d8675f21ef1d1632749b252/contracts/RangeManager.sol#L36C47-L36C55
In GeVault.sol, `rangeManager` is declared but never used, so it could also be removed. If this variable is removed, the `import "./RangeManager.sol";` line can be deleted as well.
As said before, removing unused code can keep the code clean and more readable.
Affected lines:
Manual review.
Remove the unused code.
Inconsistent names between the contract file name and the contract name.
As per the official Solidity documentation, it is recommended to have the same name for the file and the contract contained in it: https://docs.soliditylang.org/en/v0.8.10/style-guide.html#contract-and-library-names
> Contract and library names should also match their filenames.
The current name structure can be misleading.
Affected contract:
Manual review.
Rename the interface or change the file name.
In the RangeManager.sol file, `SafeERC20` is imported twice.

As said in the summary, `SafeERC20` is imported twice.
Importing the same file twice in a contract is useless and misleading.
Manual review.
Remove one of the duplicated imports:
```diff
  import "./openzeppelin-solidity/contracts/token/ERC20/ERC20.sol";
  import "./openzeppelin-solidity/contracts/token/ERC20/utils/SafeERC20.sol";
  import "./openzeppelin-solidity/contracts/utils/cryptography/ECDSA.sol";
- import "./openzeppelin-solidity/contracts/token/ERC20/utils/SafeERC20.sol";
  import "./openzeppelin-solidity/contracts/security/ReentrancyGuard.sol";
  import "../interfaces/AggregatorV3Interface.sol";
  import "../interfaces/ILendingPoolAddressesProvider.sol";
  import "../interfaces/IAaveLendingPoolV2.sol";
  import "../interfaces/IAaveOracle.sol";
  import "../interfaces/IUniswapV2Pair.sol";
  import "../interfaces/IUniswapV2Factory.sol";
  import "../interfaces/IUniswapV2Router01.sol";
  import "../interfaces/ISwapRouter.sol";
  import "../interfaces/INonfungiblePositionManager.sol";
  import "./TokenisableRange.sol";
  import "./openzeppelin-solidity/contracts/proxy/beacon/BeaconProxy.sol";
  import "./openzeppelin-solidity/contracts/access/Ownable.sol";
  import {IPriceOracle} from "../interfaces/IPriceOracle.sol";
```
In the project, several interfaces are only used in the tests. They should not live alongside the interfaces used by the protocol contracts, since that can be misleading.
In the contracts/interfaces
folder there are interfaces only used in tests and also interfaces used in the protocol contracts. Having them mixed together can be misleading and confusing so the ideal thing should be to separate them in different folders.
This can cause confusion and even lead to using the wrong interfaces in tests or core protocol contracts.
Interfaces only used in tests that should be moved to another folder:
Manual review.
Separate the test interfaces and the protocol interfaces in different folders.
#0 - 141345
2023-08-10T09:43:48Z
L-1 is dup of https://github.com/code-423n4/2023-08-goodentry-findings/issues/506
L-2 is dup of https://github.com/code-423n4/2023-08-goodentry-findings/issues/412
L-4 is dup of https://github.com/code-423n4/2023-08-goodentry-findings/issues/351
L-5 is dup of https://github.com/code-423n4/2023-08-goodentry-findings/issues/339
#1 - c4-judge
2023-08-20T16:31:50Z
gzeon-c4 marked the issue as grade-a
#2 - c4-judge
2023-08-20T16:48:56Z
gzeon-c4 marked the issue as selected for report
#3 - gzeon-c4
2023-08-20T16:52:36Z
Low
- No function to remove a tick
- Transactions calling addDust revert if token has more than 20 decimals
- Math always reverts for tokens with 39 or more decimals

#180 #183 #184
NC
- Sqrt function does not work in every case
- Missing check to make sure added tokens correspond to the oracle/pool
- Math not following natspec
- Very old OpenZeppelin version being used
- Dead code in some contracts
- Inconsistent contract and file names
- Double import
- Interfaces only used in tests should be separated from core interfaces

#187
#4 - Keref
2023-08-23T13:28:53Z
Removed dead code and correct natspec
#5 - c4-sponsor
2023-08-23T13:29:00Z
Keref marked the issue as sponsor acknowledged