Platform: Code4rena
Start Date: 02/08/2023
Pot Size: $42,000 USDC
Total HM: 13
Participants: 45
Period: 5 days
Judge: hickuphh3
Total Solo HM: 5
Id: 271
League: ETH
Rank: 4/45
Findings: 3
Award: $2,760.39
🌟 Selected for report: 2
🚀 Solo Findings: 1
🌟 Selected for report: 0xmystery
2291.9874 USDC - $2,291.99
https://github.com/GenerationSoftware/pt-v5-cgda-liquidator/blob/7f95bcacd4a566c2becb98d55c1886cadbaa8897/src/LiquidationPair.sol#L211-L226 https://github.com/GenerationSoftware/pt-v5-cgda-liquidator/blob/7f95bcacd4a566c2becb98d55c1886cadbaa8897/src/LiquidationPair.sol#L294-L319 https://github.com/GenerationSoftware/pt-v5-cgda-liquidator/blob/7f95bcacd4a566c2becb98d55c1886cadbaa8897/src/libraries/ContinuousGDA.sol#L16-L44
The Continuous Gradual Dutch Auction (CGDA) model has potential scenarios where the `purchasePrice` for an amount of tokens could approach near-zero values. This is influenced mainly by two factors: `_emissionRate` and `_timeSinceLastAuctionStart`. If either one or both of these factors (`_emissionRate` specifically, more likely) are significantly large, the `purchasePrice` could drop drastically.
This condition could cause undesired economic effects in the auction process. Under this context, participants may acquire tokens (an amount of Vault shares) at an extremely low price (a very low amount of POOL indeed), giving them an outsized chance of winning at negligible cost.
Here is the `purchasePrice` function within the `ContinuousGDA` library, which computes the purchase price of tokens based on various parameters.
```solidity
function purchasePrice(
  SD59x18 _amount,
  SD59x18 _emissionRate,
  SD59x18 _k,
  SD59x18 _decayConstant,
  SD59x18 _timeSinceLastAuctionStart
) internal pure returns (SD59x18) {
  if (_amount.unwrap() == 0) {
    return SD59x18.wrap(0);
  }
  SD59x18 topE = _decayConstant.mul(_amount).div(_emissionRate);
  topE = topE.exp().sub(ONE);
  SD59x18 bottomE = _decayConstant.mul(_timeSinceLastAuctionStart);
  bottomE = bottomE.exp();
  SD59x18 result;
  if (_emissionRate.unwrap() > 1e18) {
    result = _k.div(_emissionRate).mul(topE).div(bottomE);
  } else {
    result = _k.mul(topE.div(_emissionRate.mul(bottomE)));
  }
  return result;
}
```
One possible scenario where `purchasePrice` could approach near-zero values is:
- `_k = 1e18` (the initial price of the CGDA)
- `_decayConstant = 0.00001` (a very small decay constant)
- `_amount = 1e18` (the amount of tokens to purchase, which has reportedly become a rare commodity)
- `_emissionRate = 1e20` (a significantly large, but more practical, emission rate)
- `_timeSinceLastAuctionStart = 3600` (equivalent to 1 hour)
The small `_decayConstant`, along with the relatively short `_timeSinceLastAuctionStart`, results in `bottomE` being close to 1. `_emissionRate` is significantly larger than `_amount`, making the exponent in `topE` tiny, so `topE = e^x - 1` ends up close to zero. As a result, this combination of factors drives the `purchasePrice` towards near-zero values.
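As a back-of-the-envelope check (an illustrative calculation based on the parameters above, treating the SD59x18 fixed-point values as plain decimals, so `_emissionRate = 1e20` means 100 tokens per second and `_amount = 1e18` means 1 token):

```
topE    = e^(0.00001 * 1 / 100) - 1  ≈ 1e-7
bottomE = e^(0.00001 * 3600)         ≈ 1.036
price   = k / emissionRate * topE / bottomE
        ≈ (1 / 100) * 1e-7 / 1.036  ≈ 1e-9 POOL
```

That is, roughly a billionth of a POOL token for a full Vault share under these assumed parameters.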
Please note that this is only one of many possible scenarios where the purchase price could approach near-zero values. Other combinations of parameter tweaks could lead to similar or even more extreme outcomes.
Manual review
Consider introducing a check in the `_computeExactAmountIn` function to ensure that `_emissionRate` doesn't exceed a certain practical limit, and a check in `swapExactAmountOut` that the scaled ratio of `_amountInForPeriod` to `_amountOutForPeriod` does not fall below a certain threshold.
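A minimal sketch of what such guards could look like, assuming prb-math's `SD59x18` type (the import path, the `MAX_EMISSION_RATE` / `MIN_SCALED_RATIO` limits, and the error names are all hypothetical placeholders, not part of the codebase):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import { SD59x18 } from "prb-math/SD59x18.sol";

/// @dev Hypothetical guards illustrating the mitigation; the limit values are
///      placeholders that would need to be tuned for each liquidation pair.
abstract contract LiquidationPairGuards {
  error EmissionRateTooHigh(int256 emissionRate);
  error SwapRatioTooLow(uint256 amountIn, uint256 amountOut);

  /// @dev Assumed practical ceiling on the emission rate (placeholder: 1,000 tokens/s).
  int256 internal constant MAX_EMISSION_RATE = 1_000e18;

  /// @dev Assumed floor on the 1e18-scaled amountIn:amountOut ratio (placeholder).
  uint256 internal constant MIN_SCALED_RATIO = 1e12;

  /// @dev Would be called from _computeExactAmountIn.
  function _checkEmissionRate(SD59x18 _emissionRate) internal pure {
    if (_emissionRate.unwrap() > MAX_EMISSION_RATE) {
      revert EmissionRateTooHigh(_emissionRate.unwrap());
    }
  }

  /// @dev Would be called from swapExactAmountOut after updating the period totals.
  function _checkSwapRatio(uint256 _amountInForPeriod, uint256 _amountOutForPeriod) internal pure {
    if (_amountOutForPeriod == 0) return;
    if ((_amountInForPeriod * 1e18) / _amountOutForPeriod < MIN_SCALED_RATIO) {
      revert SwapRatioTooLow(_amountInForPeriod, _amountOutForPeriod);
    }
  }
}
```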
#0 - c4-pre-sort
2023-08-08T03:21:45Z
raymondfam marked the issue as primary issue
#1 - c4-pre-sort
2023-08-08T05:29:45Z
raymondfam marked the issue as high quality report
#2 - asselstine
2023-08-10T19:47:41Z
I'd like to see a more concrete example, as some of these parameters have been chosen arbitrarily. What is the asset:prize exchange rate? What should the value of _k be after being tuned correctly with computeK? This boundary condition feels a little contrived.
Regardless of the details, I think it's good that it points out the possibility of a zero-value swap.
We should assert that `swapAmountIn > 0` in the `swapExactAmountOut` function.
#3 - c4-sponsor
2023-08-10T19:47:50Z
asselstine marked the issue as sponsor confirmed
#4 - HickupHH3
2023-08-14T09:28:20Z
This issue is affected by #24: the near-zero condition presented no longer holds true when the formula has been modified.
Nevertheless, looking at this issue in isolation, the argument is valid: near-zero / zero price scenarios should be blocked.
#5 - c4-judge
2023-08-14T09:28:37Z
HickupHH3 marked the issue as selected for report
#6 - asselstine
2023-08-18T21:17:19Z
🌟 Selected for report: 0xmystery
Also found by: 0xbepresent, MohammedRizwan, Rolezn, bin2chen, rvierdiiev
369.2124 USDC - $369.21
`_emissionRate` is zero if the minimum fund is not met, as shown in the code logic below.
```solidity
function _computeEmissionRate() internal returns (SD59x18) {
  uint256 amount = source.liquidatableBalanceOf(tokenOut);
  // console2.log("_computeEmissionRate amount", amount);
  if (amount < minimumAuctionAmount) {
    // do not release funds if the minimum is not met
    amount = 0;
    // console2.log("AMOUNT IS ZERO");
  }
  return convert(int256(amount)).div(convert(int32(int(periodLength))));
}
```
As such, consider introducing checks on function calls dependent on it when `_emissionRate == 0`. For example, a check could be implemented in the `swapExactAmountOut` function to prevent division by zero in its logic.
```diff
  function swapExactAmountOut(
    address _account,
    uint256 _amountOut,
    uint256 _amountInMax
  ) external returns (uint256) {
    _checkUpdateAuction();
    uint swapAmountIn = _computeExactAmountIn(_amountOut);
    if (swapAmountIn > _amountInMax) {
      revert SwapExceedsMax(_amountInMax, swapAmountIn);
    }
+   if (_emissionRate.unwrap() == 0) {
+     revert EmissionIsZero(_emissionRate);
+   }
    _amountInForPeriod += uint96(swapAmountIn);
    _amountOutForPeriod += uint96(_amountOut);
    _lastAuctionTime += uint48(uint256(convert(convert(int256(_amountOut)).div(_emissionRate))));
    source.liquidate(_account, tokenIn, swapAmountIn, tokenOut, _amountOut);
    return swapAmountIn;
  }
```
The following custom error should be renamed as follows to better reflect its intended purpose:
```diff
- error TargetFirstSaleTimeLtPeriodLength(uint passedTargetSaleTime, uint periodLength);
+ error TargetFirstSaleTimeGtPeriodLength(uint passedTargetSaleTime, uint periodLength);
```

```diff
  if (targetFirstSaleTime >= periodLength) {
-   revert TargetFirstSaleTimeLtPeriodLength(targetFirstSaleTime, periodLength);
+   revert TargetFirstSaleTimeGtPeriodLength(targetFirstSaleTime, periodLength);
  }
```
The typecast below should be corrected as follows, since `Conversions.convert` takes an `int256` parameter. Additionally, down-casting `periodLength` from `int`/`int256` to `int32` could lead to an overflow issue.
```diff
- return convert(int256(amount)).div(convert(int32(int(periodLength))));
+ return convert(int256(amount)).div(convert(int256(periodLength)));
```
There is no gas benefit to caching a global variable like `msg.sender` or `block.timestamp`. Moreover, the cached `_timestamp` is only used once, in either the `if` block or the final `return` statement, which further defeats the purpose of variable caching.
```diff
  /// @notice Computes the current auction period
  /// @return the current period
  function _computePeriod() internal view returns (uint256) {
-   uint256 _timestamp = block.timestamp;
-   if (_timestamp < periodOffset) {
+   if (block.timestamp < periodOffset) {
      return 0;
    }
-   return (_timestamp - periodOffset) / periodLength;
+   return (block.timestamp - periodOffset) / periodLength;
  }
```
`AuctionDurationGteSequencePeriod()`: the error `RngAuction.AuctionDurationGteSequencePeriod` has been declared but never used. It should be utilized to implement the following missing check.
```diff
  constructor(
    RNGInterface rng_,
    address owner_,
    uint64 sequencePeriod_,
    uint64 sequenceOffset_,
    uint64 auctionDurationSeconds_,
    uint64 auctionTargetTime_
  ) Ownable(owner_) {
    if (sequencePeriod_ == 0) revert SequencePeriodZero();
    if (auctionTargetTime_ > auctionDurationSeconds_) revert AuctionTargetTimeExceedsDuration(uint64(auctionTargetTime_), uint64(auctionDurationSeconds_));
+   if (auctionDurationSeconds_ > sequencePeriod_) revert AuctionDurationGteSequencePeriod(uint64(auctionDurationSeconds_), uint64(sequencePeriod_));
    sequencePeriod = sequencePeriod_;
    sequenceOffset = sequenceOffset_;
    auctionDuration = auctionDurationSeconds_;
    auctionTargetTime = auctionTargetTime_;
    _auctionTargetTimeFraction = intoUD2x18(convert(uint(auctionTargetTime_)).div(convert(uint(auctionDurationSeconds_))));
    _setNextRngService(rng_);
  }
```
```solidity
// @audit `exactly` --> `exact`
/// @param _amountOut The `exactly` amount of output tokens expected

// @audit `wil` --> `will`
/// @notice The PrizePool whose draw `wil` be closed.

// @audit `Not` --> `Note`
/// @dev `Not` that the reward fractions compound
```
`if else` logic: the order of the `if else` branches seems to have been reversed. Depending on the value of `_emissionRate`, this could be crucial enough to make the function truncate to zero when `_emissionRate` was more than one (1e18) and used as the divisor first. On the other hand, the function could also overflow when `_emissionRate` was less than one and used as the divisor. If that's the case, I suggest removing the `if` clause and keeping only the `else` clause, which caters to both circumstances.
```diff
- if (_emissionRate.unwrap() > 1e18) {
-   result = _k.div(_emissionRate).mul(topE).div(bottomE);
- } else {
-   result = _k.mul(topE.div(_emissionRate.mul(bottomE)));
- }
+ result = _k.mul(topE.div(_emissionRate.mul(bottomE)));
```
Caching state variables that will only be used once incurs more gas and is not recommended.
```diff
  function getLastAuctionResult() external view returns (AuctionResult memory) {
-   address recipient = _lastAuction.recipient;
-   UD2x18 rewardFraction = _lastAuction.rewardFraction;
    return AuctionResult({
-     recipient: recipient,
+     recipient: _lastAuction.recipient,
-     rewardFraction: rewardFraction
+     rewardFraction: _lastAuction.rewardFraction
    });
  }
```
Passing zero as a function argument can sometimes result in a security issue (e.g. passing zero as the slippage parameter, fees, token amounts ...). Consider using a constant variable with a descriptive name, so it's clear that the argument is intentionally being used, and for the right reasons.
Here is one specific instance found.
```solidity
RemoteOwnerCallEncoder.encodeCalldata(address(_remoteRngAuctionRelayListener), 0, listenerCalldata)
```
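A minimal illustration of the suggestion applied to this instance, assuming the second argument is the call value (the `NO_ETHER_VALUE` name is a hypothetical placeholder):

```diff
+ /// @dev Hypothetical constant making the intent of the zero argument explicit.
+ uint256 internal constant NO_ETHER_VALUE = 0;

- RemoteOwnerCallEncoder.encodeCalldata(address(_remoteRngAuctionRelayListener), 0, listenerCalldata)
+ RemoteOwnerCallEncoder.encodeCalldata(address(_remoteRngAuctionRelayListener), NO_ETHER_VALUE, listenerCalldata)
```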
The `rewards` function in the provided `RewardLib` library calculates the rewards for a series of auctions based on a given reserve. A potential issue is that midway through the calculations, the `remainingReserve` might become insufficient to provide the required reward for an auction, potentially causing underflows or incorrect behavior. To address this concern, it's recommended to implement a check to ensure that each reward doesn't exceed the `remainingReserve`, and if it does, the function should gracefully exit the loop to avoid unexpected results.
Here's a suggested fix:
```diff
  function rewards(
    AuctionResult[] memory _auctionResults,
    uint256 _reserve
  ) internal pure returns (uint256[] memory) {
    uint256 remainingReserve = _reserve;
    uint256 _auctionResultsLength = _auctionResults.length;
    uint256[] memory _rewards = new uint256[](_auctionResultsLength);
    for (uint256 i; i < _auctionResultsLength; i++) {
-     _rewards[i] = reward(_auctionResults[i], remainingReserve);
-     remainingReserve = remainingReserve - _rewards[i];
+     uint256 calculatedReward = reward(_auctionResults[i], remainingReserve);
+     if (calculatedReward > remainingReserve) {
+       break; // Stop if there's not enough remaining reserve.
+     }
+     _rewards[i] = calculatedReward;
+     remainingReserve = remainingReserve - calculatedReward;
    }
    return _rewards;
  }
```
In `LiquidationRouter.swapExactAmountOut`, the `IERC20(_liquidationPair.tokenIn()).safeTransferFrom()` call should be moved to `LiquidationPair.swapExactAmountOut` to avoid calling `computeExactAmountIn()` twice.
```diff
  function swapExactAmountOut(
    LiquidationPair _liquidationPair,
    address _receiver,
    uint256 _amountOut,
    uint256 _amountInMax
  ) external onlyTrustedLiquidationPair(_liquidationPair) returns (uint256) {
-   IERC20(_liquidationPair.tokenIn()).safeTransferFrom(
-     msg.sender,
-     _liquidationPair.target(),
-     _liquidationPair.computeExactAmountIn(_amountOut)
-   );
    uint256 amountIn = _liquidationPair.swapExactAmountOut(_receiver, _amountOut, _amountInMax);
    emit SwappedExactAmountOut(_liquidationPair, _receiver, _amountOut, _amountInMax, amountIn);
    return amountIn;
  }
```
```diff
  function swapExactAmountOut(
    address _account,
    uint256 _amountOut,
    uint256 _amountInMax
  ) external returns (uint256) {
    _checkUpdateAuction();
    uint swapAmountIn = _computeExactAmountIn(_amountOut);
    if (swapAmountIn > _amountInMax) {
      revert SwapExceedsMax(_amountInMax, swapAmountIn);
    }
+   tokenIn.safeTransferFrom(tx.origin, target(), swapAmountIn);
    _amountInForPeriod += uint96(swapAmountIn);
    _amountOutForPeriod += uint96(_amountOut);
    _lastAuctionTime += uint48(uint256(convert(convert(int256(_amountOut)).div(_emissionRate))));
    source.liquidate(_account, tokenIn, swapAmountIn, tokenOut, _amountOut);
    return swapAmountIn;
  }
```
Before deploying your contract, activate the optimizer when compiling using `solc --optimize --bin sourceFile.sol`. By default, the optimizer will optimize the contract assuming it is called 200 times across its lifetime. If you want the initial contract deployment to be cheaper and the later function executions to be more expensive, set it to `--optimize-runs=1`. Conversely, if you expect many transactions and do not care about higher deployment cost and output size, set `--optimize-runs` to a high number.
```js
module.exports = {
  solidity: {
    version: "0.8.19",
    settings: {
      optimizer: {
        enabled: true,
        runs: 1000,
      },
    },
  },
};
```
Kindly visit the following site for further information:
https://docs.soliditylang.org/en/v0.5.4/using-the-compiler.html#using-the-commandline-compiler
Here is one particular example of an opcode comparison that delineates the gas-saving mechanism:
```
for !=0, before optimization:
PUSH1 0x00
DUP2
EQ
ISZERO
PUSH1 [cont offset]
JUMPI

after optimization:
DUP1
PUSH1 [revert offset]
JUMPI
```
Disclaimer: There have been several bugs with security implications related to optimizations. For this reason, Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. Please measure the gas savings from optimizations, and carefully weigh them against the possibility of an optimization-related bug. Also, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.
#0 - c4-pre-sort
2023-08-08T23:42:59Z
raymondfam marked the issue as high quality report
#1 - c4-sponsor
2023-08-09T19:31:53Z
asselstine marked the issue as sponsor confirmed
#2 - c4-judge
2023-08-14T10:10:42Z
HickupHH3 marked the issue as selected for report
#3 - c4-judge
2023-08-14T10:22:52Z
HickupHH3 marked the issue as grade-a
#4 - HickupHH3
2023-08-15T02:41:24Z
There are some gas optimization suggestions thrown into the report, but overall, decent recommendations and issues raised. None in particular that I disagree with.
🌟 Selected for report: 3agle
Also found by: 0xSmartContract, 0xmystery, DedOhWale, K42, cholakov, hunter_w3b
This report presents a comprehensive and impartial analysis of PoolTogether's proposed Version 5 update, conducted by me as an independent Blockchain Security Researcher. Incorporating a deep-dive into the codebase and some liaison with the PoolTogether team, I aimed to assess the quality and architecture of the codebase, as well as evaluate the centralization risks, mechanism complexities, and potential systemic risks. The goal of this succinct report is to elucidate my findings in key categories, highlight the strengths and opportunities for improvement within the protocol, and aid in risk mitigation, thus fostering a deeper understanding of PoolTogether's current protocol update.
I submitted one medium vulnerability and a detailed QA report. Here's the condensed breakdown of the submitted finding:
I started off by reading through the recommended links and docs to get an overall understanding of the intended business logic. I wish I had participated in Part I of V5 to better audit this part of the protocol. Nevertheless, checking through discussions in the public channel and direct-messaging the sponsor with some specific questions further helped me identify a medium finding and polish the QA report.
I would say the PoolTogether protocol showcases a highly commendable codebase, characterized by its exceptional readability, comprehensive documentation, and an emphasis on modularity. Its code is clearly articulated, adopting standardized naming conventions for easy comprehension and efficient debugging. An in-depth explanation is provided for all classes, methods, and data structures, aiding developers in understanding the protocol's intricacies. The codebase is segmented into distinct modules, each responsible for a specific functionality, promoting reusability and scalability. A robust testing suite coupled with a well-configured continuous integration pipeline ensures code correctness and timely bug detection. The rigorous code review process and regular security audits add another layer of reliability, while the detailed audit trails offer a transparent view of all modifications, ensuring a high-quality, secure, and efficient protocol.
The PoolTogether protocol, while possessing a robust codebase, could further enhance its architecture by incorporating comprehensive flowcharts. These would serve as visual guides, breaking down complex sequences of essential flows or calls into more understandable segments, thereby easing the onboarding process for new developers and expediting codebase navigation. Annotations within these flowcharts could provide valuable context and details about each step, assisting in better understanding. Regular updates to these flowcharts are essential, reflecting the current state of the codebase, facilitating smoother code modifications, and encouraging constant review and refactoring, thereby contributing to continuous system design and architectural improvements.
The upgrade to Version 5 of the PoolTogether protocol represents a significant leap forward in decentralization, offering both full autonomy and a permissionless feature. This shift ensures that no central authority controls the protocol and invites a broad range of assets and yield sources, thus encouraging an enriched community involvement. However, these exciting changes also call for a heightened level of community responsibility and meticulous scrutiny to ensure the protocol's integrity and resilience against potential vulnerabilities. This further underscores the importance of strong community engagement and the adherence to comprehensive security measures.
The PoolTogether protocol, with its complex design and interconnected mechanisms, has shown a commendable commitment to stability and security, evident in its diligent response to previous concerns. Despite this, the inherent complexity of the protocol necessitates continued vigilance and an unwavering focus on security measures to mitigate potential risks. The adoption of regular audits, high coding standards, and a proactive stance on bug detection helps ensure the system's robustness. Nevertheless, it's essential to acknowledge that the protocol's intricacies may expose it to inherent risks that require proactive management through constant monitoring, regular updates, and an active community.
Inherent systemic risks cannot be completely eliminated in the current setup, particularly considering that a significant portion of the proposed designs are still in the experimental phase and haven't yet been deployed live. The undeployed designs could bring unforeseen challenges and issues to light upon activation, potentially affecting the stability and security of the protocol. One possible measure to mitigate these risks could be the implementation of distributed governance structures, such as a Decentralized Autonomous Organization (DAO), which can bring a layer of resilience and adaptive management to the system. DAOs have the potential to diffuse systemic risks by providing a platform for collective decision-making and fast, effective responses to unexpected threats or issues. However, the transition to DAO must be carefully managed, as it introduces its own set of complexities and demands strong community engagement and active participation.
30 hours
#0 - c4-judge
2023-08-14T11:03:13Z
HickupHH3 marked the issue as grade-b