Platform: Code4rena
Start Date: 09/07/2021
Pot Size: $25,000 USDC
Total HM: 7
Participants: 10
Period: 3 days
Judge: ghoulsol
Total Solo HM: 2
Id: 19
League: ETH
Rank: 3/10
Findings: 6
Award: $4,824.40
🌟 Selected for report: 5
🚀 Solo Findings: 0
pauliax
In the addLiquidity function, the router is passed as the sender to LibERC20.transferFrom instead of msg.sender, so the assets are effectively pulled from the router rather than from the caller. The transfer should instead pull from msg.sender:
require(LibERC20.transferFrom(assetId, msg.sender, address(this), amount), "addLiquidity: ERC20_TRANSFER_FAILED");
#0 - LayneHaber
2021-07-12T20:00:09Z
#48
1103.2637 USDC - $1,103.26
pauliax
There is a potential issue in the removeUserActiveBlocks function and the for loop inside it. Block gas limits (less relevant on some chains, but they still need to be accounted for) apply here: since there is no limit on activeTransactionBlocks, the array may grow so large that the loop can never finish within a block. Consider introducing an upper limit for activeTransactionBlocks. Also, a malicious actor can block any account (DoS) by repeatedly calling prepare with a 0 amount while acting as a router. This keeps pushing to the specified user's activeTransactionBlocks until entries can no longer be removed from the array.
This is also a gas issue, as removeUserActiveBlocks iterates over and reassigns large dynamic arrays, which is very gas-consuming. Consider optimizing the algorithm, e.g. find the first occurrence, swap it with the last item, pop the array, and break (a sketch follows below). Or even use an EnumerableMap so you can find and remove elements in O(1): https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/utils/structs/EnumerableMap.sol It depends on the typical size of activeTransactionBlocks: if it is expected to stay small (e.g. fewer than 5 entries), the current approach will work, but for larger arrays I expect an EnumerableMap to be more efficient.
An upper limit alone will not fully mitigate this issue, as a malicious actor can still DoS the user by pushing useless txs until the limit is reached, after which a valid router may be unable to submit new txs. Since both the security and the performance of removeUserActiveBlocks need to improve, an EnumerableMap may be the go-to solution.
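For illustration, a minimal sketch of the swap-and-pop variant mentioned above; the contract name and storage layout are hypothetical stand-ins for the real TransactionManager state:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

// Hypothetical sketch: removes the first matching entry by swapping it with the
// last element and popping, instead of shifting or rebuilding the whole array.
// Note that this does not preserve the order of the remaining entries.
contract ActiveBlocksSketch {
  mapping(address => uint256[]) internal activeTransactionBlocks;

  function removeUserActiveBlock(address user, uint256 blockNumber) internal {
    uint256[] storage blocks = activeTransactionBlocks[user];
    uint256 length = blocks.length;
    for (uint256 i = 0; i < length; i++) {
      if (blocks[i] == blockNumber) {
        blocks[i] = blocks[length - 1]; // overwrite the match with the last element
        blocks.pop();                   // shrink the array by one
        break;                          // stop after the first occurrence
      }
    }
  }
}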
#0 - LayneHaber
2021-07-14T01:21:55Z
402.1396 USDC - $402.14
pauliax
Both calls to IFulfillHelper (addFunds and execute) are wrapped in separate try/catch statements, so if addFunds succeeds but execute fails, or both of these functions fail, the catch block will still send assets to the receivingAddress. These should be wrapped in a single try/catch to avoid a potential double spend. It also seems unnecessary to have two separate calls (addFunds and execute), as they are made one immediately after the other and pass essentially the same parameters; a single call to IFulfillHelper would be enough. Eliminating this extra external call will reduce gas costs and simplify the code.
Make execute payable, send the value, and get rid of addFunds.
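A minimal sketch of the single-call pattern, simplified to the native-asset case; the execute signature below is hypothetical and does not match the project's actual IFulfillHelper interface:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

// Hypothetical single-call helper interface (the real IFulfillHelper splits this
// into addFunds + execute).
interface IFulfillHelperSketch {
  function execute(address user, bytes32 transactionId, uint256 amount) external payable;
}

contract FulfillSketch {
  // One external call, one try/catch, one fallback path: either the helper
  // receives the value and executes, or the funds go to the fallback address.
  function _callHelper(
    address callTo,
    address user,
    bytes32 transactionId,
    address payable receivingAddress,
    uint256 toSend
  ) internal {
    try IFulfillHelperSketch(callTo).execute{ value: toSend }(user, transactionId, toSend) {
      // Helper accepted the funds and executed -- nothing more to do.
    } catch {
      // Helper reverted: send the value to the predetermined fallback address instead.
      (bool success, ) = receivingAddress.call{ value: toSend }("");
      require(success, "fulfill: TRANSFER_FAILED");
    }
  }
}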
#0 - LayneHaber
2021-07-12T19:42:12Z
#46
744.703 USDC - $744.70
pauliax
Function fulfill first approves the callTo to transfer an amount of toSend tokens and tries to call IFulfillHelper, but if the call fails it transfers these assets directly. However, in that case the approval is not reset, so a malicious callTo can pull these tokens later:

// First, approve the funds to the helper if needed
if (!LibAsset.isEther(txData.receivingAssetId) && toSend > 0) {
  require(LibERC20.approve(txData.receivingAssetId, txData.callTo, toSend), "fulfill: APPROVAL_FAILED");
}

// Next, call `addFunds` on the helper. Helpers should internally
// track funds to make sure no one user is able to take all funds
// for tx
if (toSend > 0) {
  try
    IFulfillHelper(txData.callTo).addFunds{ value: LibAsset.isEther(txData.receivingAssetId) ? toSend : 0}(
      txData.user,
      txData.transactionId,
      txData.receivingAssetId,
      toSend
    )
  {} catch {
    // Regardless of error within the callData execution, send funds
    // to the predetermined fallback address
    require(
      LibAsset.transferAsset(txData.receivingAssetId, payable(txData.receivingAddress), toSend),
      "fulfill: TRANSFER_FAILED"
    );
  }
}
Either place the approve inside the try/catch flow, or reset the approval if the call fails.
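A minimal sketch of resetting the approval in the catch block; the interface, parameter list, and revert strings are illustrative and simplified from the actual fulfill code:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

// Hypothetical helper interface, mirroring the addFunds call quoted above.
interface IFulfillHelperSketch {
  function addFunds(address user, bytes32 transactionId, address assetId, uint256 amount) external payable;
}

contract ApprovalResetSketch {
  function _fundHelper(
    address assetId,
    address callTo,
    address user,
    bytes32 transactionId,
    uint256 toSend,
    address payable receivingAddress
  ) internal {
    require(IERC20(assetId).approve(callTo, toSend), "fulfill: APPROVAL_FAILED");
    try IFulfillHelperSketch(callTo).addFunds(user, transactionId, assetId, toSend) {
      // Helper pulled (or can pull) the approved funds.
    } catch {
      // Reset the allowance so callTo cannot pull the tokens later,
      // then send them directly to the fallback address.
      require(IERC20(assetId).approve(callTo, 0), "fulfill: APPROVAL_RESET_FAILED");
      require(IERC20(assetId).transfer(receivingAddress, toSend), "fulfill: TRANSFER_FAILED");
    }
  }
}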
#0 - LayneHaber
2021-07-14T18:02:09Z
pauliax
As this is all experimental, you should consider introducing initial, temporary restrictions, for example: only whitelisted tokens can be transferred (e.g. the most popular ones), amounts are capped per token / account, or only whitelisted routers can participate (as anyone can currently spin up a router, a malicious actor may try to break the system). These restrictions could be lifted over time as the robustness of the system is proven.
Consider safe launch options that may limit the initial functionality and gradually unlock more capabilities.
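A rough sketch of what such guardrails could look like; none of these names exist in the audited contracts, and the access-control choice (a simple owner) is only a placeholder:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

// Hypothetical launch guardrails: an owner-managed router whitelist and per-asset caps.
contract LaunchGuardrailsSketch {
  address public owner;
  mapping(address => bool) public approvedRouters;
  mapping(address => uint256) public assetCaps;   // max total liquidity per asset
  mapping(address => uint256) public assetTotals; // liquidity currently added per asset

  constructor() {
    owner = msg.sender;
  }

  modifier onlyOwner() {
    require(msg.sender == owner, "!owner");
    _;
  }

  function setRouterApproval(address router, bool approved) external onlyOwner {
    approvedRouters[router] = approved;
  }

  function setAssetCap(address assetId, uint256 cap) external onlyOwner {
    assetCaps[assetId] = cap;
  }

  // Would be called from addLiquidity before accepting any funds.
  function _checkGuardrails(address router, address assetId, uint256 amount) internal {
    require(approvedRouters[router], "addLiquidity: ROUTER_NOT_APPROVED");
    assetTotals[assetId] += amount;
    require(assetTotals[assetId] <= assetCaps[assetId], "addLiquidity: CAP_EXCEEDED");
  }
}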
#0 - LayneHaber
2021-07-12T20:24:44Z
#49
pauliax
You should consider whether you want to support weird ERC20s, e.g. deflationary (fee-on-transfer) tokens that could break the routerBalances accounting: for example, addLiquidity trusts the amount param but does not verify how many tokens were actually transferred (balance before/after). You can see some examples of weird token behavior here: https://github.com/xwvvvvwx/weird-erc20
Either support deflationary / rebasing / similar tokens, or inform your users not to use such tokens if they do not want to lose them.
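A minimal sketch of the balance-before/after approach for addLiquidity; the function here is heavily simplified (no router param, no native-asset branch) and only illustrates crediting what actually arrived:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

contract BalanceDiffSketch {
  // router => assetId => balance
  mapping(address => mapping(address => uint256)) public routerBalances;

  function addLiquidity(uint256 amount, address assetId) external {
    uint256 balanceBefore = IERC20(assetId).balanceOf(address(this));
    require(
      IERC20(assetId).transferFrom(msg.sender, address(this), amount),
      "addLiquidity: ERC20_TRANSFER_FAILED"
    );
    // Credit only what actually arrived, so fee-on-transfer tokens cannot
    // desynchronise routerBalances from the real token balance.
    uint256 received = IERC20(assetId).balanceOf(address(this)) - balanceBefore;
    routerBalances[msg.sender][assetId] += received;
  }
}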
#0 - LayneHaber
2021-07-12T20:06:57Z
#68
pauliax
Some validations are missing. Function addLiquidity should validate that router is not the zero address (router != address(0)) to prevent assets from effectively being burned; the same applies to the recipient parameter of removeLiquidity. Also, either validate that sendingChainFallback is not empty when preparing a tx, or do not send assets to it if this value was not set (unless the burn is intentional).
Require these addresses not to be empty.
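For illustration, the checks could look like the following (revert strings and exact parameter names are placeholders):

require(router != address(0), "addLiquidity: EMPTY_ROUTER");
require(recipient != address(0), "removeLiquidity: EMPTY_RECIPIENT");
require(txData.sendingChainFallback != address(0), "prepare: EMPTY_FALLBACK");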
#0 - LayneHaber
2021-07-12T20:51:50Z
#50
110.3264 USDC - $110.33
pauliax
Function fulfill treats txData.expiry == block.timestamp as an expired tx:

// Make sure the expiry has not elapsed
require(txData.expiry > block.timestamp, "fulfill: EXPIRED");

However, function cancel uses an inclusive check for the same condition:

if (txData.expiry >= block.timestamp) {
  // Timeout has not expired and tx may only be cancelled by router
Unify these checks to keep the code coherent; txData.expiry == block.timestamp should probably be treated as expired everywhere.
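For illustration, the unified convention (the boundary case treated as expired in both places) could look like this; the cancel line is an assumed adjustment, not the current code:

// fulfill
require(txData.expiry > block.timestamp, "fulfill: EXPIRED");
// cancel -- strict comparison as well, so expiry == block.timestamp is expired here too
if (txData.expiry > block.timestamp) {
  // Timeout has not expired and tx may only be cancelled by router
}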
#0 - sanchaymittal
2021-07-14T08:00:05Z
🌟 Selected for report: pauliax
408.6162 USDC - $408.62
pauliax
There is a MIN_TIMEOUT for the expiry, but a MAX_TIMEOUT should also be introduced to avoid a scenario where, for example, the expiry is set far in the future (e.g. 100 years) and one malicious side refuses to fulfill or cancel the tx, forcing the other side to leave the funds locked for that entire period.
Introduce a reasonable MAX_TIMEOUT.
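For illustration, with placeholder constants and revert strings (the MAX_TIMEOUT value, and the exact form of the existing MIN_TIMEOUT check, are assumptions):

uint256 public constant MAX_TIMEOUT = 30 days; // placeholder value

// in prepare, alongside the existing minimum-timeout check:
require(txData.expiry >= block.timestamp + MIN_TIMEOUT, "prepare: TIMEOUT_TOO_LOW");
require(txData.expiry <= block.timestamp + MAX_TIMEOUT, "prepare: TIMEOUT_TOO_HIGH");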
#0 - sanchaymittal
2021-07-14T08:43:10Z
🌟 Selected for report: pauliax
408.6162 USDC - $408.62
pauliax
There is no relayer address param, only relayerFee, so technically anyone can front-run a profitable tx. The scenario might be as follows: a relayer submits a tx; a frontrunner sees it in the mempool and calculates that the relayerFee is profitable enough (it could even be instantly sold on an AMM for the native asset), so they copy and submit the same tx with a higher gas price. The frontrunner's tx goes through and the relayer's tx reverts afterwards, so the relayer only incurs losses in such a case.
Consider introducing a relayer address param, or reduce the probability of this scenario in some other meaningful way (e.g. blacklist frontrunners).
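A hedged illustration of binding the fee to a chosen relayer; a relayer field does not exist in the current transaction data, so both the field and the revert string below are hypothetical:

// e.g. an optional relayer address committed to by the user's signature
if (txData.relayer != address(0)) {
  require(msg.sender == txData.relayer, "fulfill: INVALID_RELAYER");
}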
#0 - LayneHaber
2021-07-12T20:48:34Z
This does technically introduce some frontrun-ability for the relayer fee on the onchain transactions, but relegating the responsibility to a single relayer within the network could compromise the overall network security.
Consider the following case where a relayer is selected:
1. A user generates a fulfill transaction they would like to be submitted on the receiving chain
2. The user selects a relayer who will submit a tx for a fee
3. The user sends the relayer the transaction data to submit the tx, including the signature on the receiving chain
4. The relayer can see the router on the transaction, and they collude to submit the signature on the sending chain, wait to cancel the transaction on the receiving chain, and split the profits.
While the relayer fees are frontrunnable, and this will drive up the costs of the relayer fees for all users, switching away from this pattern will force an "all honest relayers" assumption instead of a "one honest relayer" assumption.
#1 - ghoul-sol
2021-08-02T01:08:58Z
Making this a low risk as the front running doesn't affect users and it actually forces the whole system to use the most optimal fees.
101.2401 USDC - $101.24
pauliax
I see that every call to LibERC20 or LibAsset in TransactionManager is wrapped in a require statement. However, as far as I understand, the LibERC20 calls already go through wrapCall, which itself checks that the call did not revert (LibUtils.revertIfCallFailed(success, returnData)), so it is unclear why these extra checks are needed in TransactionManager if they do not add any safety; at the very least, gas usage could be reduced. Overall, consider using the SafeERC20 library, as it is now the de facto standard and is already battle-tested: https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/token/ERC20/utils/SafeERC20.sol
Consider removing unnecessary require statements to save some gas. Also, consider using the SafeERC20 library from OpenZeppelin.
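A minimal sketch of the SafeERC20 pattern; the contract and function names are illustrative:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

contract SafeTransferSketch {
  using SafeERC20 for IERC20;

  // SafeERC20 reverts on failure and also handles non-standard tokens that
  // return no value, so no surrounding require is needed.
  function _pullFunds(address assetId, uint256 amount) internal {
    IERC20(assetId).safeTransferFrom(msg.sender, address(this), amount);
  }
}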
#0 - LayneHaber
2021-07-13T17:36:36Z
#41