Platform: Code4rena
Start Date: 08/06/2022
Pot Size: $115,000 USDC
Total HM: 26
Participants: 72
Period: 11 days
Judge: leastwood
Total Solo HM: 14
Id: 132
League: ETH
Rank: 1/72
Findings: 11
Award: $36,502.73
🌟 Selected for report: 9
🚀 Solo Findings: 5
🌟 Selected for report: xiaoming90
8660.4234 USDC - $8,660.42
https://github.com/code-423n4/2022-06-connext/blob/4dd6149748b635f95460d4c3924c7e3fb6716967/contracts/contracts/core/connext/facets/BridgeFacet.sol#L541 https://github.com/code-423n4/2022-06-connext/blob/4dd6149748b635f95460d4c3924c7e3fb6716967/contracts/contracts/core/connext/helpers/SponsorVault.sol#L196
Assume the following:
Attacker's Router A liquidity = 1,000,000 oUSDC on the Optimism domain/chain
Router B liquidity = 100 oUSDC on the Optimism domain/chain
At this point, attacker balances are as follows (2,000,000 USDC in total):
Attacker's wallet in Ethereum = 1,000,000 USDC
Attacker's wallet in Optimism = 0 oUSDC
Attacker's router in Optimism = 1,000,000 oUSDC
First, the attacker attempts to transfer an extremely large amount - 1,000,000 USDC - from the attacker's address on Ethereum to the attacker's address on the Optimism chain. The transfer amount should be much larger than the rest of the routers' liquidity so that only the attacker's router is capable of providing the liquidity.
In our example, since Router B does not have sufficient liquidity to facilitate the fast transfer, Router B will not be selected by the Sequencer. Since only Router A has sufficient liquidity, Router A, which is owned by the attacker, will be selected by the Sequencer to facilitate the fast transfer.
Since the liquidity fee is 5%, Router A only needs to supply 950,000 oUSDC on execute. The Sponsor will then reimburse 50% of the liquidity fee, which is 25,000 oUSDC in total. The final amount of oUSDC sent to the attacker's wallet address in Optimism will be 975,000 oUSDC.
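The fee math of this example can be sketched as a small pure function. This is illustrative only: the 5% liquidity fee and 50% sponsor reimbursement rate are the assumptions of this walkthrough, and SponsorVaultMathSketch is a hypothetical contract, not part of Connext.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch of the fee math used in the example above.
contract SponsorVaultMathSketch {
    uint256 constant BPS = 10_000;
    uint256 constant LIQUIDITY_FEE_BPS = 500;       // 5% liquidity fee (assumption of this example)
    uint256 constant SPONSOR_REIMBURSE_BPS = 5_000; // sponsor covers 50% of the fee (assumption)

    // For a 1,000,000 transfer: the router supplies 950,000, the sponsor adds 25,000,
    // so the recipient receives 975,000 while the full fee was paid to the attacker's own router.
    function recipientAmount(uint256 transferAmount)
        external
        pure
        returns (uint256 routerSupplied, uint256 sponsored, uint256 received)
    {
        uint256 liquidityFee = (transferAmount * LIQUIDITY_FEE_BPS) / BPS; // 50,000
        routerSupplied = transferAmount - liquidityFee;                    // 950,000
        sponsored = (liquidityFee * SPONSOR_REIMBURSE_BPS) / BPS;          // 25,000
        received = routerSupplied + sponsored;                             // 975,000
    }
}
```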
At this point, attacker balances are as follows (1,025,000 USDC in total):
Attacker's wallet in Ethereum = 0 USDC
Attacker's wallet in Optimism = 975,000 oUSDC
Attacker's router in Optimism = 50,000 oUSDC
When the nomad message arrives, the attacker's router will be reimbursed 1,000,000 oUSDC when BridgeFacet._reconcile is triggered.
At this point, attacker balances are as follows (2,025,000 USDC in total):
Attacker's wallet in Ethereum = 0 USDC
Attacker's wallet in Optimism = 975,000 oUSDC
Attacker's router in Optimism = 50,000 + 1,000,000 oUSDC
The attacker earned 25,000 USDC, and the SponsorVault lost 25,000 USDC.
A router owner can intentionally perform many large transfers between their own wallets on two different domains to siphon all the funds from the SponsorVault, and then proceed to withdraw all liquidity from their router.
Although having a sponsor subsidize the liquidity fee encourages users to use the sponsor's chain, this subsidy can be gamed by malicious actors for their own benefit. It is recommended to reconsider the need for a sponsor in Connext, as extreme care has to be taken in its design to ensure that it cannot be exploited.
#0 - LayneHaber
2022-06-25T18:55:15Z
The SponsorVault
is not mandatory for the bridge flow, and the entire point of the vault option is to allow domains to subsidize fees for users transferring funds there. This is incredibly useful for new domains, that have no default bridge and want to remove any friction for users to get to their chain. Sponsor vault funders should be informed there is no way to enforce only legitimate users get the funds and it is inherently vulnerable to sybil attacks. In our conversations with potential sponsors, they are aware of these issues and are still willing to fund sponsor vaults to a limited capacity.
#1 - 0xleastwood
2022-08-13T21:09:16Z
It seems that it would be easy for routers to sybil attack the protocol and continuously drain the sponsor vault of all its funds. While I understand this might not be an issue when the set of routers is trusted, however, as the protocol continues to become more decentralized, this would be a likely path of attack. I also agree with the current risk even though users' funds aren't at direct risk, the functionality of the sponsor vault is rendered useless and the router profits from this attack.
🌟 Selected for report: xiaoming90
8660.4234 USDC - $8,660.42
AAVE Portal provides a trusted credit line that allows bridges to take on an unbacked position, and Connext intends to use this credit line to provide fast liquidity for its users in the event the routers do not have sufficient liquidity.
Connext will assign one (1) router to be responsible for taking on the credit risk of borrowing an unbacked position from AAVE Portal, as per the Source Code.
Under normal circumstances, the BridgeFacet._reconcile function will automatically repay the loan to AAVE Portal when the nomad message arrives. However, if the repayment fails for some reason, Connext expects the router to use the repayAavePortal function out-of-band to help Connext repay the loan.
Ultimately, it is Connext that takes on the credit risk, because AAVE Portal only provides a trusted credit line to Connext, not to the individual routers.
When the nomad message arrives, it will call the BridgeFacet.handle function, which will in turn trigger the internal _reconcile function. Note that the handle and _reconcile functions cannot revert under any circumstances, because the nomad message cannot be reprocessed on the Nomad side.
Alice transfers 1,000,000
DAI from Ethereum domain to Polygon domain
None of the existing routers have sufficient liquidity, thus the sequencer decided that AAVE Portal should be used
Bob's router has been selected to take on the credit risk for the unbacked position, and Connext proceeds to borrow 1,000,000
DAI from AAVE Portal and send the 1,000,000
DAI to Alice's wallet on Polygon domain
When the slow nomad message arrives, the BridgeFacet._reconcile function is triggered to attempt to repay the loan to AAVE Portal. This function will in turn trigger the BridgeFacet._reconcileProcessPortal function, where the portal repayment logic resides.
Within the BridgeFacet._reconcileProcessPortal
, notice that if the AssetLogic.swapFromLocalAssetIfNeededForExactOut
swap fails, it will return _amount
.
Within the BridgeFacet._reconcileProcessPortal, notice that if the AavePool.backUnbacked external repayment call fails, it will set amountIn = 0 and then [return (_amount - amountIn)](https://github.com/code-423n4/2022-06-connext/blob/b4532655071566b33c41eac46e75be29b4a381ed/contracts/contracts/core/connext/facets/BridgeFacet.sol#L1061), which is basically the same as _amount.
When the _reconcileProcessPortal function call returns at Line 603, toDistribute will be set to amount, which in this example is 1,000,000 DAI.
Next, at Line 611, the contract will increase Bob's router balance by 1,000,000 DAI.
Bob notices that his router balance has increased by 1,000,000 DAI, and he cannot resist the temptation of 1,000,000 DAI. Therefore, instead of helping Connext repay the loan via the repayAavePortal function out-of-band, he decides to quickly withdraw all his liquidity from his router.
Bob gained 1,000,000 DAI, while Connext still owes AAVE Portal 1,000,000 DAI.
If routers decide not to repay the loan, Connext will incur a large amount of debt to AAVE Portal.
It is understood that there is a whitelist of routers that can use portals, to ensure that only trusted routers could use this feature. In this case, the trust depends entirely on the integrity of the router owner and the assumption that the owner will not act against Connext and its community. However, as seen in many past security incidents, a trusted actor or even a protocol's own team member might turn rogue when a significant gain is at stake; in the above example, 1,000,000 DAI. It is common to see even larger amounts of funds transferred across the bridge.
Therefore, to overcome the above-mentioned risk, some protocols implement m-of-n multisig or validation, which helps mitigate the risk of a single trusted actor turning rogue and performing malicious actions.
Therefore, it is recommended to reconsider such design and explore other alternatives. One such alternative would be as follows:
Assuming that the AAVE Portal interest rate is fixed, the amount of repayment is deterministic, and Connext can compute the amount that needs to be repaid at any point in time.
When the BridgeFacet._reconcileProcessPortal swap or AavePool.backUnbacked call fails, do not immediately credit Bob's router balance. Instead, escrow the amount (1,000,000 DAI) received from nomad in an Escrow contract.
Implement a function called settleAAVEPortalLoan within the Escrow contract, which contains the logic to perform the necessary actions to repay the AAVE Portal loan. In this case, Bob is responsible for triggering Escrow.settleAAVEPortalLoan to kick-start the out-of-band repayment process. If the repayment is successful, Bob's router will be credited with the earnings for taking on credit risk.
One positive side effect of this approach is that Bob will be incentivized to make the repayment as fast as possible, because the longer he delays, the more interest accrues, and thus the lower his earnings.
This approach is quite similar to the withdrawal pattern.
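A minimal sketch of the suggested escrow, assuming a single router, a single asset, and an Aave-style backUnbacked entry point. All names here (AavePortalEscrow, IAavePortalLike, deposit) are illustrative and not part of the Connext codebase.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

// Illustrative stand-in for the Aave Portal repayment entry point.
interface IAavePortalLike {
    function backUnbacked(address asset, uint256 amount, uint256 fee) external;
}

// Sketch: funds from the failed reconcile are parked here instead of being credited to the
// router, and are only released to the router once the Aave Portal loan is actually repaid.
contract AavePortalEscrow {
    using SafeERC20 for IERC20;

    IAavePortalLike public immutable aavePool;
    address public immutable router; // router that took on the credit risk (Bob)
    IERC20 public immutable asset;
    uint256 public escrowed;         // amount received from nomad and held here

    constructor(IAavePortalLike _aavePool, address _router, IERC20 _asset) {
        aavePool = _aavePool;
        router = _router;
        asset = _asset;
    }

    function deposit(uint256 amount) external {
        asset.safeTransferFrom(msg.sender, address(this), amount);
        escrowed += amount;
    }

    // Bob triggers this to kick off the out-of-band repayment; the longer he waits,
    // the more interest accrues and the smaller his leftover earnings.
    function settleAAVEPortalLoan(uint256 backingAmount, uint256 fee) external {
        require(msg.sender == router, "only router");
        require(backingAmount + fee <= escrowed, "insufficient escrow");
        escrowed -= (backingAmount + fee);

        asset.safeApprove(address(aavePool), 0);
        asset.safeApprove(address(aavePool), backingAmount + fee);
        aavePool.backUnbacked(address(asset), backingAmount, fee);

        // Whatever remains after repayment is the router's earnings for taking on credit risk.
        asset.safeTransfer(router, escrowed);
        escrowed = 0;
    }
}
```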
#0 - LayneHaber
2022-06-27T14:58:34Z
This is correct, and using an escrow contract would be helpful, but in general the router has no incentive ever to repay aave loans (even with this fix). This eliminates the possibility of a router profiting from the mishandling of reconcile
, but doesn't address the root of the trustedness, which is embedded at the aave layer (by being able to take out unbacked loans)
🌟 Selected for report: xiaoming90
8660.4234 USDC - $8,660.42
This issue is only applicable for fast-transfer. Slow transfer would not have this issue because of the built-in fraud-proof mechanism in Nomad.
First, the attacker will attempt to use Connext to send 1000 USDC
from Ethereum domain to Optimism domain.
Assume that the attacker happens to be a relayer on the relayer network utilised by Connext, and the attacker's relayer happens to be tasked to relay the above execute calldata to the Optimism's Connext BridgeFacet.execute
function.
Optimism's Connext BridgeFacet.execute
received the execute calldata and observed within the calldata that it is a fast-transfer and Router A is responsible for providing the liquidity. It will then check that the router signature is valid, and proceed to transfer 1000 oUSDC
to attacker wallet (0x123456) in Optimism.
Next, the attacker will update the ExecuteArgs.local within the execute calldata to a valid local representation of the canonical token (USDC) used within Polygon. The attacker will then send the modified execute calldata to Polygon's Connext BridgeFacet.execute function. Assume that the same Router A is also providing liquidity in Polygon. The BridgeFacet.execute function checks that the router signature is valid, and proceeds to transfer 1000 POS-USDC to the attacker's wallet (0x123456) in Polygon.
At this point, the attacker has 1000 oUSDC
and 1000 POS-USDC
in his wallets. When the nomad message arrives at Optimism, Router A can claim the 1000 oUSDC
back from Connext. However, Router A is not able to claim back any funds in Polygon.
Note that the same wallet address exists on different chains. For instance, the wallet address on Ethereum and Polygon is the same.
Why does ExecuteArgs.local not affect the router signature verification? This is because the router signature is generated from the transferId + pathLength only, and these are derived from the CallParams params within the ExecuteArgs struct.
```solidity
struct ExecuteArgs {
  CallParams params;
  address local; // local representation of canonical token
  address[] routers;
  bytes[] routerSignatures;
  uint256 amount;
  uint256 nonce;
  address originSender;
}
```
Within the BridgeFacet._executeSanityChecks function, it will attempt to rebuild the transferId by calling the following code:
```solidity
// Derive transfer ID based on given arguments.
bytes32 transferId = _getTransferId(_args);
```
Within the BridgeFacet._getTransferId function, we can see that s.tokenRegistry.getTokenId(_args.local) will always return the canonical tokenDomain and tokenId. In our example, these will be Ethereum and USDC. Therefore, as long as the attacker specifies a valid local representation of the canonical token on a chain, the derived transferId will always be the same across all domains. Thus, the attacker can modify ExecuteArgs.local and still pass the router signature check.
```solidity
function _getTransferId(ExecuteArgs calldata _args) private view returns (bytes32) {
  (uint32 tokenDomain, bytes32 tokenId) = s.tokenRegistry.getTokenId(_args.local);

  return _calculateTransferId(_args.params, _args.amount, _args.nonce, tokenId, tokenDomain, _args.originSender);
}
```
Router liquidity would be drained by the attacker, and the affected router owners could not claim back their liquidity.
The security of the current Connext design depends on how secure and reliable the relayer is. If the relayer turns rogue or acts against Connext, many serious consequences can follow.
The root cause is that the current design places enormous trust on the relayers to accurately and reliably deliver calldata to the bridge on various domains, for instance, the delivery of execute calldata to the execute function. There is an attempt to prevent message replay on a single domain; however, it does not prevent message replay across multiple domains. Most importantly, Connext's bridge appears to fully trust the calldata delivered by the relayer, yet that calldata can always be altered by the relayer.
Consider a classic 0x off-chain order book protocol. A user signs his order with his private key, attaches the signature to the order, and sends the order (with signature) to the relayer network. If the relayer attempts to tamper with the order message or signature, the decoded address will differ from the signer's address, and this will be detected by 0x's smart contract on-chain when processing the order. This ensures that the integrity of the message and signer can be enforced.
Per good security practice, the relayer network should always be treated as a hostile environment. Therefore, it is recommended that a similar approach be taken with regard to passing execute calldata across domains/chains.
For instance, at a high level, the sequencer should sign the execute calldata with its private key, attach the signature to the execute calldata, and then submit the execute calldata (with signature) to the relayer network. When the bridge receives the execute calldata (with signature), it can verify that the recovered address matches the sequencer address to ensure that the calldata has not been altered. This will ensure the integrity of the execute calldata and prevent any issues that arise from unauthorised modification of the calldata.
Additionally, the execute calldata should also have a field that corresponds to the destination domain. The bridge that receives the execute calldata must verify that the calldata is intended for its domain and reject it if it belongs to another domain. This also helps to prevent the attack mentioned earlier, where the same execute calldata can be accepted on different domains.
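A minimal sketch of both suggested checks, assuming a configured sequencer address and a destinationDomain carried in the signed payload; the contract, names, and layout here are illustrative, not Connext's actual storage or structs.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {ECDSA} from "@openzeppelin/contracts/utils/cryptography/ECDSA.sol";

contract ExecuteCalldataChecks {
    address public immutable sequencer; // trusted sequencer address
    uint32 public immutable domain;     // this chain's domain identifier

    error WrongDestinationDomain();
    error InvalidSequencerSignature();

    constructor(address _sequencer, uint32 _domain) {
        sequencer = _sequencer;
        domain = _domain;
    }

    // Verify the execute payload was produced by the sequencer and is meant for this domain.
    function _verifyExecutePayload(
        bytes32 transferId,
        uint32 destinationDomain,
        bytes calldata sequencerSignature
    ) internal view {
        // 1. Bind the payload to this domain so it cannot be replayed on another chain.
        if (destinationDomain != domain) revert WrongDestinationDomain();

        // 2. Require a valid sequencer signature so relayers cannot alter the payload.
        bytes32 digest = ECDSA.toEthSignedMessageHash(
            keccak256(abi.encode(transferId, destinationDomain))
        );
        if (ECDSA.recover(digest, sequencerSignature) != sequencer) revert InvalidSequencerSignature();
    }
}
```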
#0 - LayneHaber
2022-06-25T19:03:09Z
Agree that this is an issue, but disagree with the framing and mitigation.
The calldata
is included in the generation of the transferId
via the CallParams
, so it cannot be easily manipulated by the relayer network once signed by routers. However, because you are not validating the s.domain
against the CallParams.destinationDomain
you can use the same transfer data across multiple chains, which is a big problem.
#1 - LayneHaber
2022-06-27T15:28:45Z
#2 - 0xleastwood
2022-08-15T09:27:05Z
This seems like the most severe finding of the entire contest. Kudos to the warden on a great find!
Because transfer data is replicated across multiple chains, relayers are also able to execute data on each chain. If _executeSanityChecks
does not check that the message's destination chain matches s.domain
, then transfers could be spent on all available chains.
#3 - 0xleastwood
2022-08-15T09:31:10Z
Interestingly, because the remote router is included in the message, only the correct destination chain will be able to reconcile the transfer and reimburse routers for providing liquidity. Hence, the issue is only prevalent on other chains if routers readily bid on incoming transfers, which seems possible because signatures can be replayed on other chains. So if the same set of routers have sufficient liquidity on another chain, the relayer can execute this message again to create a double spend issue.
#4 - 0xleastwood
2022-08-15T09:40:13Z
Another point to add, this issue would only be prevalent on chains which have its local asset pointing to the same address as this is what the bridge will attempt to transfer to the recipient. Additionally, in order for the relayer to replay a router's signature, the transferId
must exactly match the transferId
on the intended destination chain. This is only possible if TokenRegistry.getTokenId
returns the same canonical domain and ID values.
#5 - 0xleastwood
2022-08-15T09:46:42Z
It would be good to confirm this. Is it possible for a local asset to be registered to the same canonical domain and ID on multiple chains?
#6 - LayneHaber
2022-08-17T17:30:16Z
Is it possible for a local asset to be registered to the same canonical domain and ID on multiple chains?
Yes, that is actually the purpose of the canonicalId
and canonicalDomain
-- there should only be one canonical (locked) token that maps to any number of local (minted) instances.
This issue is valid, and enforcing in _executeSanityChecks
it is only executed on the destination domain should prevent this attack, correct?
#7 - 0xleastwood
2022-08-17T18:19:31Z
Is it possible for a local asset to be registered to the same canonical domain and ID on multiple chains?
Yes, that is actually the purpose of the canonicalId and canonicalDomain -- there should only be one canonical (locked) token that maps to any number of local (minted) instances.
This issue is valid, and enforcing in _executeSanityChecks it is only executed on the destination domain should prevent this attack, correct?
Okay great! Because local assets map to the same canonical domain and ID on each chain, I think this issue is most definitely valid. _args.local
is not used to calculate transferId
, hence the representation for each asset may differ on each chain but the TokenRegistry.getTokenId
should return the correct information.
I can confirm that the enforcing check in _executeSanityChecks
should ensure that transfer data is only executed on the intended destination domain.
🌟 Selected for report: xiaoming90
2598.127 USDC - $2,598.13
Connext relies on the relayer to trigger the BridgeFacet.execute
function on the destination domain to initiate the token transfer and calldata execution processes. Relayers pay for the gas cost to trigger the execute
function, and in return for their effort, they are reimbursed with the relayer fee.
However, it is possible that the BridgeFacet.execute
function will revert under certain circumstances when triggered by the relayers. For instance, when the BridgeFacet.execute
function is triggered, it will call the BridgeFacet._handleExecuteLiquidity
function. Within the BridgeFacet._handleExecuteLiquidity
function, it will attempt to perform a token swap using a StablePool. If the slippage during the swap exceeds the user-defined value, the swap will revert, and subsequently the execute call will revert too.
When the BridgeFacet.execute function reverts, the relayers will not receive any relayer fee.
The following code shows that the relayer who can claim the relayer fee is set within the BridgeFacet.execute function at Line 415. Therefore, if this function reverts, the relayer will not be able to claim the fee.
```solidity
function execute(ExecuteArgs calldata _args) external whenNotPaused nonReentrant returns (bytes32) {
  (bytes32 transferId, bool reconciled) = _executeSanityChecks(_args);

  // Set the relayer for this transaction to allow for future claim
  s.transferRelayer[transferId] = msg.sender;

  // execute router liquidity when this is a fast transfer
  // asset will be adopted unless specified to be local in params
  (uint256 amount, address asset) = _handleExecuteLiquidity(transferId, !reconciled, _args);

  // execute the transaction
  uint256 amountWithSponsors = _handleExecuteTransaction(_args, amount, asset, transferId, reconciled);

  // emit event
  emit Executed(transferId, _args.params.to, _args, asset, amountWithSponsors, msg.sender);

  return transferId;
}
```
Loss of funds for the relayers, as they pay the gas cost to trigger the function but do not receive any relayer fee in return.
Update the implementation of BridgeFacet.execute so that it fails gracefully and does not revert when the swap or other functions fail. Relayers should be entitled to the relayer fee for their effort regardless of the outcome of the BridgeFacet.execute call.
#0 - jakekidd
2022-06-26T22:31:50Z
It's quite possible that the slippage changes while the relayer's tx is in the mempool - which I think is the valid concern here. A relayer can't know for a fact that another existing tx won't change the current slippage (assuming there are multiple approved relayers).
Relayers should be entitled to relayer fee regardless of the outcome of the BridgeFacet.execute call for their effort.
Worth noting that the mitigation step here ^ is tricky -- it can incentivize relayers to submit transactions that fail quickly to maximize profit. For example, if we pay relayers even in the event of failure when slippage is too high, they are incentivized to submit txs when the slippage is too high (because they will fail more cheaply, as opposed to fully executing). Relayers could potentially profit by pushing the slippage over a large user transfer's limit via the stableswap themselves.
I'm uncertain of whether I should acknowledge or dispute this issue because, while it is valid that relayers will lose funds in the case that the slippage changes while their tx is in the mempool, this may just be considered a core design property and a risk that relayers must factor into their executions.
(Leaving as acknowledged for now.)
#1 - LayneHaber
2022-06-30T11:54:45Z
While not paying out for every relayed transaction (i.e. forcing relayers to incur/budget for some loss if the transaction fails) is a valid concern, and makes the system less appealing to relay for, I think it is the nature of relay networks to deal with these kinds of problems.
For example, even without the slippage, imagine there are two networks competing to relay for the same transaction. In this case, execution would fail for the second relayer, and there would be no fees remaining to pay them for their efforts. If you want to continue adding fees for the same transaction, you would be passing this failure cost onto the user (with a more stringent liveness condition).
This is a valid issue, but the fixes would introduce more complexity and edge cases than make sense to handle at this level.
#2 - 0xleastwood
2022-08-15T08:42:32Z
I'd say this is part of the risk of being a relayer but definitely worth noting so keeping it as is.
🌟 Selected for report: xiaoming90
340.9262 USDC - $340.93
https://github.com/code-423n4/2022-06-connext/blob/4dd6149748b635f95460d4c3924c7e3fb6716967/contracts/contracts/core/connext/facets/BridgeFacet.sol#L984 https://github.com/code-423n4/2022-06-connext/blob/4dd6149748b635f95460d4c3924c7e3fb6716967/contracts/contracts/core/connext/libraries/AssetLogic.sol#L347
Some tokens (like USDT) do not work when changing the allowance from an existing non-zero allowance value. For example Tether (USDT)'s approve()
function will revert if the current approval is not zero, to protect against front-running changes of approvals.
BridgeFacet._reconcileProcessPortal
In the following function, the allowance must first be set to zero before SafeERC20.safeIncreaseAllowance can be called. Otherwise, the _reconcileProcessPortal function will revert every time it handles such tokens. It is understood from the comment that, after the backUnbacked call, there could be a remaining allowance.
```solidity
function _reconcileProcessPortal(
  uint256 _amount,
  address _local,
  address _router,
  bytes32 _transferId
) private returns (uint256) {
  ..SNIP..
  SafeERC20.safeIncreaseAllowance(IERC20(adopted), s.aavePool, totalRepayAmount);

  (bool success, ) = s.aavePool.call(
    abi.encodeWithSelector(IAavePool.backUnbacked.selector, adopted, backUnbackedAmount, portalFee)
  );
  ..SNIP..
}
```
AssetLogic._swapAssetOut
In the following function, the allowance must first be set to zero before the actual allowance is approved. Otherwise, the _swapAssetOut function will revert every time it handles such tokens.
function _swapAssetOut( bytes32 _canonicalId, address _assetIn, address _assetOut, uint256 _amountOut, uint256 _maxIn ) internal returns ( bool, uint256, address ) { AppStorage storage s = LibConnextStorage.connextStorage(); bool success; uint256 amountIn; // Swap the asset to the proper local asset if (stableSwapPoolExist(_canonicalId)) { // get internal swap pool SwapUtils.Swap storage ipool = s.swapStorages[_canonicalId]; // if internal swap pool exists uint8 tokenIndexIn = getTokenIndexFromStableSwapPool(_canonicalId, _assetIn); uint8 tokenIndexOut = getTokenIndexFromStableSwapPool(_canonicalId, _assetOut); // calculate slippage before performing swap // NOTE: this is less efficient then relying on the `swapInternalOut` revert, but makes it easier // to handle slippage failures (this can be called during reconcile, so must not fail) if (_maxIn >= ipool.calculateSwapInv(tokenIndexIn, tokenIndexOut, _amountOut)) { success = true; amountIn = ipool.swapInternalOut(tokenIndexIn, tokenIndexOut, _amountOut, _maxIn); } // slippage is too high to perform swap: success = false, amountIn = 0 } else { // Otherwise, swap via stable swap pool IStableSwap pool = s.adoptedToLocalPools[_canonicalId]; uint256 _amountIn = pool.calculateSwapOutFromAddress(_assetIn, _assetOut, _amountOut); if (_amountIn <= _maxIn) { // set the success success = true; // perform the swap SafeERC20.safeApprove(IERC20(_assetIn), address(pool), _amountIn); amountIn = pool.swapExactOut(_amountOut, _assetIn, _assetOut, _maxIn); } // slippage is too high to perform swap: success = false, amountIn = 0 } return (success, amountIn, _assetOut); }
Both the _reconcileProcessPortal and _swapAssetOut functions are called during repayment to Aave Portal if the fast transfer was executed using portal liquidity. Thus, they are a core part of the token transfer process within Connext, and failure of either function would disrupt the AAVE repayment process.
Since both functions affect the AAVE repayment process, I'm grouping them as one issue.
As Connext bridges/routers deal with all sorts of tokens that exist on various domains/chains, the protocol should implement measures to ensure that it is compatible with as many tokens as possible for future growth and availability of the protocol.
BridgeFacet._reconcileProcessPortal
It is recommended to set the allowance to zero before increasing the allowance:
```solidity
SafeERC20.safeApprove(IERC20(adopted), s.aavePool, 0);
SafeERC20.safeIncreaseAllowance(IERC20(adopted), s.aavePool, totalRepayAmount);
```
AssetLogic._swapAssetOut
It is recommended to set the allowance to zero before each approve call.
```solidity
SafeERC20.safeApprove(IERC20(_assetIn), address(pool), 0);
SafeERC20.safeApprove(IERC20(_assetIn), address(pool), _amountIn);
```
#0 - ecmendenhall
2022-06-20T15:35:29Z
I gave this one a :heart: for noting the special case of USDT. Since USDT's approve
reverts if current allowance is nonzero, even calls to safeIncreaseAllowance
must be zeroed first. Many related findings identify the same issue but recommend using safeIncreaseAllowance
only, which would still fail in the case of USDT.
#1 - LayneHaber
2022-06-26T01:05:55Z
This should be higher severity as the funds could get stuck.
#2 - jakekidd
2022-06-27T02:01:07Z
#3 - 0xleastwood
2022-08-12T19:53:29Z
I'm actually going to disagree with the request to raise the severity of this issue because of a few main reasons:
1. executeBackUnbacked transfers exactly backUnbackedAmount + portalFee, which is equal to totalRepayAmount. Hence, the approval is always fully utilised unless the Aave pool contract fails to execute as intended.
2. _swapAssetOut explicitly approves _amountIn, which is determined by _calculateSwapInv. This same function is also used to actually perform the swap, hence, this value will remain the same unless the user is able to make state changes in between these calls. It does not look like this is possible unless safeApprove gives the user control over contract execution.
I think based on this, it is safe to say that certain assumptions about the behaviour of these contracts must be broken in order for funds to be at risk. Due to this, I think medium severity makes more sense.
🌟 Selected for report: xiaoming90
2598.127 USDC - $2,598.13
Assume this is a fast-transfer path and the sequencer has a good reason (e.g. some sophisticated liquidity load balancing algorithm) to assign 3 routers to provide liquidity for a transfer of 90 DAI. Therefore, each of them will provide 30 DAI equally.
_args.routers[] array = [Router A, Router B, Router C]
However, a malicious relayer could rearrange the _args.routers[] array
to any of the following and still pass the sanity checks within BridgeFacet._executeSanityChecks
_args.routers[] array = [Router A, Router A, Router A]
_args.routers[] array = [Router A, Router A, Router B]
_args.routers[] array = [Router A, Router A, Router C]
_args.routers[] array = [Router C, Router A, Router C]
The point is that as long as the attacker ensures that the pathLength
is correct, he will be able to pass the router check within BridgeFacet._executeSanityChecks
.
Assume that the malicious relayer decides to rearrange the _args.routers[] array as follows. This will cause Router A to provide more liquidity than it should and override the sequencer's decision. In this case, Router A will be forced to provide 90 DAI.
_args.routers[] array = [Router A, Router A, Router A]
This is possible because the routerHash
used to generate the router signature only consists of two items (transferId
and pathLength
)
function _executeSanityChecks(ExecuteArgs calldata _args) private view returns (bytes32, bool) { ..SNIP.. // Derive transfer ID based on given arguments. bytes32 transferId = _getTransferId(_args); // Retrieve the reconciled record. If the transfer is `forceSlow` then it must be reconciled first // before it's executed. bool reconciled = s.reconciledTransfers[transferId]; if (_args.params.forceSlow && !reconciled) revert BridgeFacet__execute_notReconciled(); // Hash the payload for which each router should have produced a signature. // Each router should have signed the `transferId` (which implicitly signs call params, // amount, and tokenId) as well as the `pathLength`, or the number of routers with which // they are splitting liquidity provision. bytes32 routerHash = keccak256(abi.encode(transferId, pathLength)); // check the reconciled status is correct // (i.e. if there are routers provided, the transfer must *not* be reconciled) if (pathLength > 0) // make sure routers are all approved if needed { if (reconciled) revert BridgeFacet__execute_alreadyReconciled(); for (uint256 i; i < pathLength; ) { // Make sure the router is approved, if applicable. // If router ownership is renounced (_RouterOwnershipRenounced() is true), then the router whitelist // no longer applies and we can skip this approval step. if (!_isRouterOwnershipRenounced() && !s.routerPermissionInfo.approvedRouters[_args.routers[i]]) { revert BridgeFacet__execute_notSupportedRouter(); } // Validate the signature. We'll recover the signer's address using the expected payload and basic ECDSA // signature scheme recovery. The address for each signature must match the router's address. if (_args.routers[i] != _recoverSignature(routerHash, _args.routerSignatures[i])) { revert BridgeFacet__execute_invalidRouterSignature(); } unchecked { i++; } } ..SNIP.. }
A malicious relayer could override the sequencer's decision, or cause a particular router's liquidity to be drained more than intended.
Generating the routerHash from the following items should help to prevent this attack:
transferId
pathLength
_args.routers[] array
In this case, if the attacker attempts to rearrange the _args.routers[] array, the routerHash generated on the bridge will be different. Thus, it will fail the router signature verification within the _executeSanityChecks function and revert.
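A sketch of signature verification over the extended payload, assuming routers could sign after the sequencer has selected the path (the sponsor notes below that this ordering does not hold in the current design); RouterHashSketch and its function are illustrative only.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {ECDSA} from "@openzeppelin/contracts/utils/cryptography/ECDSA.sol";

contract RouterHashSketch {
    error InvalidRouterSignature();

    // Sketch: bind the signed payload to the exact router set, so a relayer cannot
    // rearrange or duplicate entries in routers[] without invalidating the signatures.
    function verifyRouterSignatures(
        bytes32 transferId,
        address[] calldata routers,
        bytes[] calldata routerSignatures
    ) external pure {
        uint256 pathLength = routers.length;
        bytes32 routerHash = ECDSA.toEthSignedMessageHash(
            keccak256(abi.encode(transferId, pathLength, routers))
        );
        for (uint256 i; i < pathLength; ) {
            if (ECDSA.recover(routerHash, routerSignatures[i]) != routers[i]) {
                revert InvalidRouterSignature();
            }
            unchecked { ++i; }
        }
    }
}
```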
#0 - jakekidd
2022-06-25T00:32:33Z
The relayer doing this would actually not be an attack on the router providing liquidity; it would actually benefit that router, giving it the full bulk of the transfer - assuming the router can afford it - as that router will claims fees for the whole thing. However, it would grief other routers from getting access to this transfer in the process. This isn't great, but keep in mind relayers are currently a permissioned role (whitelisted) because the trust vectors there have not been abstracted entirely (among other reasons). As such, I am suggesting we de-escalate this issue to QA/Low Risk.
Additionally, the suggested mitigation step is invalid within current design - routerHash
for the router's signature is generated prior to knowing which routers will be selected by sequencer. The only way to prevent this would be to check to make sure there are no duplicates in the routers array manually (which would incur not-so-great gas costs). Acknowledging the issue as we may want to implement that fix in the future.
EDIT: Removed disagree with severity tag. Seems appropriate in hindsight as it would be subverting the functionality of the protocol.
#1 - 0xleastwood
2022-08-13T23:10:53Z
I agree with the validity of this finding. Keeping it as is.
🌟 Selected for report: xiaoming90
Also found by: csanuragjain
Assume that a malicious relayer operates a router in Connext providing fast-liquidity service. A malicious relayer could always swap the router(s) within the execute calldata with the router(s) owned by malicious relayer, and submit it to the chain for execution.
This example assumes a fast-liquidity path. When the relayer posts execute calldata to the destination domain's BridgeFacet.execute function, this function will trigger the BridgeFacet._executeSanityChecks function to perform a sanity check.
Assume that the execute calldata only has one router selected. A malicious relayer could perform the following actions:
1. Remove the original router from the _args.routers[] array, and remove the original router's signature from the _args.routerSignatures[] array in the execute calldata.
2. Generate a signature from his own router over the payload (routerHash = keccak256(abi.encode(transferId, pathLength)); pathLength = 1 in this example).
3. Insert his own router into the _args.routers[] array, and insert the router signature obtained from the previous step into the _args.routerSignatures[] array.
4. Submit the modified execute calldata to the BridgeFacet.execute function; it will pass the BridgeFacet._executeSanityChecks function since the router signature is valid.
function is not aware of the fact that the router within the execute calldata has been changed because it will only check if the router specified within the _args.routers[]
array matches with the router signature provided. Once the sanity check passes, it will store the attacker's router within s.routedTransfers[_transferId]
and proceed with providing fast-liquidity service for the users.
The existing mechanism is useful in preventing a malicious relayer from specifying routers belonging to someone else, because the malicious relayer is not capable of generating a valid router signature on behalf of other routers without owning their private keys. However, this mechanism does not guard against a malicious relayer specifying their own router, because in that case they can generate a valid router signature as they own the private key.
/** * @notice Performs some sanity checks for `execute` * @dev Need this to prevent stack too deep */ function _executeSanityChecks(ExecuteArgs calldata _args) private view returns (bytes32, bool) { // If the sender is not approved relayer, revert if (!s.approvedRelayers[msg.sender] && msg.sender != _args.params.agent) { revert BridgeFacet__execute_unapprovedSender(); } // Path length refers to the number of facilitating routers. A transfer is considered 'multipath' // if multiple routers provide liquidity (in even 'shares') for it. uint256 pathLength = _args.routers.length; // Make sure number of routers is below the configured maximum. if (pathLength > s.maxRoutersPerTransfer) revert BridgeFacet__execute_maxRoutersExceeded(); // Derive transfer ID based on given arguments. bytes32 transferId = _getTransferId(_args); // Retrieve the reconciled record. If the transfer is `forceSlow` then it must be reconciled first // before it's executed. bool reconciled = s.reconciledTransfers[transferId]; if (_args.params.forceSlow && !reconciled) revert BridgeFacet__execute_notReconciled(); // Hash the payload for which each router should have produced a signature. // Each router should have signed the `transferId` (which implicitly signs call params, // amount, and tokenId) as well as the `pathLength`, or the number of routers with which // they are splitting liquidity provision. bytes32 routerHash = keccak256(abi.encode(transferId, pathLength)); // check the reconciled status is correct // (i.e. if there are routers provided, the transfer must *not* be reconciled) if (pathLength > 0) // make sure routers are all approved if needed { if (reconciled) revert BridgeFacet__execute_alreadyReconciled(); for (uint256 i; i < pathLength; ) { // Make sure the router is approved, if applicable. // If router ownership is renounced (_RouterOwnershipRenounced() is true), then the router whitelist // no longer applies and we can skip this approval step. if (!_isRouterOwnershipRenounced() && !s.routerPermissionInfo.approvedRouters[_args.routers[i]]) { revert BridgeFacet__execute_notSupportedRouter(); } // Validate the signature. We'll recover the signer's address using the expected payload and basic ECDSA // signature scheme recovery. The address for each signature must match the router's address. if (_args.routers[i] != _recoverSignature(routerHash, _args.routerSignatures[i])) { revert BridgeFacet__execute_invalidRouterSignature(); } unchecked { i++; } } ..SNIP.. }
When the nomad message eventually reaches the destination domain and triggers BridgeFacet._reconcile, the attacker's router will be able to claim back the assets provided in the execute step as per normal.
A malicious relayer could force Connext to use routers owned by them to earn the liquidity fee, and at the same time cause the original routers chosen by the sequencer to lose the opportunity to earn the liquidity fee. This disrupts the balance and fairness of the protocol.
In bridge or cross-chain communication design, it is good security practice to minimize the trust that Connext places on external protocols (e.g. the relayer network) wherever possible, so that if the external protocol is compromised or acts maliciously against Connext, the impact or damage is reduced.
It is recommended to devise a way for Connext's destination bridge to verify that the execute calldata received from the relayer is valid and has not been altered. Ideally, the hash of the original execute calldata sent by the sequencer should be compared with the hash of the execute calldata received from the relayer, so that a mismatch indicates the calldata has been modified along the way and appropriate action should be taken.
For instance, consider a classic 0x off-chain order book protocol. A user signs his order with his private key, attaches the signature to the order, and sends the order (with signature) to the relayer network. If the relayer attempts to tamper with the order message or signature, the decoded address will differ from the signer's address, and this will be detected by 0x's smart contract on-chain when processing the order. This ensures that the integrity of the message and signer can be enforced.
Per good security practice, the relayer network should always be treated as a hostile environment. Therefore, it is recommended that a similar approach be taken with regard to passing execute calldata across domains/chains.
For instance, at a high level, the sequencer should sign the execute calldata with its private key, attach the signature to the execute calldata, and then submit the execute calldata (with signature) to the relayer network. When the bridge receives the execute calldata (with signature), it can verify that the recovered address matches the sequencer address to ensure that the calldata has not been altered. This will ensure the integrity of the execute calldata and prevent any issues that arise from unauthorised modification of the calldata.
Alternatively, the following method could also be adopted to prevent this issue:
Assume that it is possible to embed the selected router(s) within the slow nomad message, and append the selected router(s) to the slow nomad message.
Within the BridgeFacet.execute function, instead of using only the transfer ID as the mapping key (s.routedTransfers[_transferId] = _args.routers;), use both the transfer ID and the selected routers as the key (s.routedTransfers[hash(_transferId+routers)] = _args.routers;).
When the slow nomad message arrives and triggers BridgeFacet._reconcile, this function will find the routers that provided the fast liquidity based on the information within the nomad message only. It will look up s.routedTransfers[hash(_transferId+routers)], which should return nothing, as there is a mismatch between the attacker's router and the router within the nomad message.
In this case, the attacker will not be able to claim back any of the funds he provided earlier. This will deter anyone from attempting to swap the router within the execute calldata sent to the destination domain's BridgeFacet.execute function, because they will not be able to claim back the funds.
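A minimal sketch of that alternative keying scheme; RoutedTransfersSketch and its helpers are illustrative and not the actual Connext storage layout.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch: key the routed-transfer record by both the transfer ID and the exact router set,
// so a router set swapped in by the relayer during execute cannot be reimbursed on reconcile.
contract RoutedTransfersSketch {
    mapping(bytes32 => address[]) internal routedTransfers;

    function _routedKey(bytes32 transferId, address[] memory routers) internal pure returns (bytes32) {
        return keccak256(abi.encode(transferId, routers));
    }

    // Called on execute: record the routers under the combined key.
    function _recordRouted(bytes32 transferId, address[] memory routers) internal {
        routedTransfers[_routedKey(transferId, routers)] = routers;
    }

    // Called on reconcile: look up using the router set embedded in the nomad message.
    // If the relayer substituted routers during execute, this lookup returns an empty
    // array and no reimbursement is paid out.
    function _routedForReconcile(bytes32 transferId, address[] memory routersFromMessage)
        internal
        view
        returns (address[] memory)
    {
        return routedTransfers[_routedKey(transferId, routersFromMessage)];
    }
}
```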
#0 - jakekidd
2022-06-25T01:22:10Z
Duplicate of #149
#1 - 0xleastwood
2022-08-13T23:13:49Z
I believe this finding is distinct from #149 because this outlines how relayers could collude with routers to earn most of the fees for providing liquidity to bridge users. #149 describes how a relayer may force any arbitrary router to supply more liquidity than they originally intended. The later is bad UX for routers providing liquidity even if they do earn more in fees.
🌟 Selected for report: xiaoming90
Also found by: unforgiven
1169.1572 USDC - $1,169.16
https://github.com/code-423n4/2022-06-connext/blob/4dd6149748b635f95460d4c3924c7e3fb6716967/contracts/contracts/core/connext/facets/RoutersFacet.sol#L293 https://github.com/code-423n4/2022-06-connext/blob/4dd6149748b635f95460d4c3924c7e3fb6716967/contracts/contracts/core/connext/facets/RoutersFacet.sol#L490 https://github.com/code-423n4/2022-06-connext/blob/4dd6149748b635f95460d4c3924c7e3fb6716967/contracts/contracts/core/connext/facets/RoutersFacet.sol#L212
Assume that Alice's router has a large amount of liquidity in it.
Assume that the Connext Admin decided to remove a router owned by Alice. The Connext Admin will call the RoutersFacet.removeRouter
function, and all information related to Alice's router will be erased (set to 0x0) from the s.routerPermissionInfo
.
function removeRouter(address router) external onlyOwner { // Sanity check: not empty if (router == address(0)) revert RoutersFacet__removeRouter_routerEmpty(); // Sanity check: needs removal if (!s.routerPermissionInfo.approvedRouters[router]) revert RoutersFacet__removeRouter_notAdded(); // Update mapping s.routerPermissionInfo.approvedRouters[router] = false; // Emit event emit RouterRemoved(router, msg.sender); // Remove router owner address _owner = s.routerPermissionInfo.routerOwners[router]; if (_owner != address(0)) { emit RouterOwnerAccepted(router, _owner, address(0)); // delete routerOwners[router]; s.routerPermissionInfo.routerOwners[router] = address(0); } // Remove router recipient address _recipient = s.routerPermissionInfo.routerRecipients[router]; if (_recipient != address(0)) { emit RouterRecipientSet(router, _recipient, address(0)); // delete routerRecipients[router]; s.routerPermissionInfo.routerRecipients[router] = address(0); } // Clear any proposed ownership changes s.routerPermissionInfo.proposedRouterOwners[router] = address(0); s.routerPermissionInfo.proposedRouterTimestamp[router] = 0; }
Alice is aware that her router has been removed by the Connext Admin, so she decides to withdraw the liquidity from her removed router by calling RoutersFacet.removeRouterLiquidityFor.
However, when Alice calls the RoutersFacet.removeRouterLiquidityFor function, it will revert every single time. This is because the check msg.sender != getRouterOwner(_router) will always be true, causing a revert.
```solidity
/**
 * @notice This is used by any router owner to decrease their available liquidity for a given asset.
 * @param _amount - The amount of liquidity to remove for the router
 * @param _local - The address of the asset you're removing liquidity from. If removing liquidity of the
 * native asset, routers may use `address(0)` or the wrapped asset
 * @param _to The address that will receive the liquidity being removed
 * @param _router The address of the router
 */
function removeRouterLiquidityFor(
  uint256 _amount,
  address _local,
  address payable _to,
  address _router
) external nonReentrant whenNotPaused {
  // Caller must be the router owner
  if (msg.sender != getRouterOwner(_router)) revert RoutersFacet__removeRouterLiquidityFor_notOwner();

  // Remove liquidity
  _removeLiquidityForRouter(_amount, _local, _to, _router);
}
```
Since the RoutersFacet.removeRouter
function has earlier erased all information related to Alice's router within s.routerPermissionInfo
, the getRouterOwner
function will always return the router address.
In this case, the router address will not match msg.sender (Alice's address); thus Alice's attempts to call removeRouterLiquidityFor will always revert.
```solidity
function getRouterOwner(address _router) public view returns (address) {
  address _owner = s.routerPermissionInfo.routerOwners[_router];
  return _owner == address(0) ? _router : _owner;
}
```
A router owner who provides liquidity could be rugged by the Connext admin. When this happens, the router owner's funds will be stuck within the RoutersFacet contract, and there is no way for the router owner to retrieve their liquidity.
In the worst-case scenario, a compromised Connext admin could remove all routers, causing all liquidity to be stuck within RoutersFacet with no router owner able to withdraw their liquidity from the contract. The RoutersFacet contract could then be upgraded to include an additional function that withdraws all liquidity from the contract to an arbitrary wallet address.
The router owner is still entitled to their own liquidity even though their router has been removed by the Connext Admin. Thus, they should be given the right to take back their liquidity when such an event happens, and the contract implementation should be updated to support this. This will give more assurance to router owners.
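A sketch of one possible fix along the lines the sponsor suggests below (falling back to the recorded recipient when the router has been removed). This is a fragment that reuses the storage names and helpers shown in the excerpts above; the exact wiring is illustrative only.

```solidity
// Sketch only: keep routerRecipients populated on removal and, for removed routers,
// allow liquidity withdrawal to that recipient regardless of the caller.
function removeRouterLiquidityFor(
    uint256 _amount,
    address _local,
    address payable _to,
    address _router
) external {
    if (!s.routerPermissionInfo.approvedRouters[_router]) {
        // Router was removed: funds may only go to the recipient recorded at registration.
        address recipient = s.routerPermissionInfo.routerRecipients[_router];
        require(recipient != address(0), "no recipient recorded");
        _removeLiquidityForRouter(_amount, _local, payable(recipient), _router);
        return;
    }
    // Normal path: only the router owner may withdraw.
    require(msg.sender == getRouterOwner(_router), "not owner");
    _removeLiquidityForRouter(_amount, _local, _to, _router);
}
```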
#0 - jakekidd
2022-06-26T18:55:36Z
The language here is slightly exaggerated - router funds could not be stolen by the Owner; the Owner can only prevent the router from accessing them by revoking their approval.
However, it's still a valid concern, and one that will ultimately be addressed once router permissioning/whitelisting is removed permanently or the Owner role will be delegated to governance.
The problem with the mitigation step proposed here is that the owner might not be set in some cases, so it's not a complete solution. So even if router approval is revoked, but we leave the assignment in the ownership mapping, in cases where the router did not assign an owner they won't be able to withdraw.
There might be a better solution here by leaving the recipient in the corresponding mapping instead. In the removeLiquidityFor
function, we can handle the case where the router's approval/ownership has been revoked by sending the funds to the recipient regardless of the caller. Unlike the owner, the recipient is always set on router registration.
#1 - jakekidd
2022-06-26T18:56:45Z
Changing this to be confirmed - would like to resolve this issue by carrying out the solution described in above comment ^
#2 - 0xleastwood
2022-08-13T23:19:03Z
I think this is only a valid medium
because the admin is an EOA and not delegated to a governance contract. Keeping it as is.
🌟 Selected for report: xiaoming90
Also found by: shenwilly
1169.1572 USDC - $1,169.16
https://github.com/code-423n4/2022-06-connext/blob/4dd6149748b635f95460d4c3924c7e3fb6716967/contracts/contracts/core/connext/facets/BridgeFacet.sol#L819
A third party sponsor would need to implement a SponsorVault
contract that is aligned with the ISponsorVault
interface.
Assume that a SponsorVault
contract has been defined on Optimism chain. All cross-chain communications are required to call the BridgeFacet.execute
, which in turn will trigger the BridgeFacet._handleExecuteTransaction
internal function.
However, if there is an error within the SponsorVault contract on Optimism causing a revert when s.sponsorVault.reimburseLiquidityFees or s.sponsorVault.reimburseRelayerFees is called, the entire execute transaction will revert. Since the execute transaction always reverts, any cross-chain communication between Optimism and other domains will fail.
/** * @notice Process the transfer, and calldata if needed, when calling `execute` * @dev Need this to prevent stack too deep */ function _handleExecuteTransaction( ExecuteArgs calldata _args, uint256 _amount, address _asset, // adopted (or local if specified) bytes32 _transferId, bool _reconciled ) private returns (uint256) { // If the domain if sponsored if (address(s.sponsorVault) != address(0)) { // fast liquidity path if (!_reconciled) { // Vault will return the amount of the fee they sponsored in the native fee // NOTE: some considerations here around fee on transfer tokens and ensuring // there are no malicious `Vaults` that do not transfer the correct amount. Should likely do a // balance read about it uint256 starting = IERC20(_asset).balanceOf(address(this)); uint256 sponsored = s.sponsorVault.reimburseLiquidityFees(_asset, _args.amount, _args.params.to); // Validate correct amounts are transferred if (IERC20(_asset).balanceOf(address(this)) != starting + sponsored) { revert BridgeFacet__handleExecuteTransaction_invalidSponsoredAmount(); } _amount = _amount + sponsored; } // Should dust the recipient with the lesser of a vault-defined cap or the converted relayer fee // If there is no conversion available (i.e. no oracles for origin domain asset <> dest asset pair), // then the vault should just pay out the configured constant s.sponsorVault.reimburseRelayerFees(_args.params.originDomain, payable(_args.params.to), _args.params.relayerFee); } ..SNIP..
It will result in denial of service. The SponsorVault
contract, which belongs to a third-party, is a single point of failure for a domain.
This is a problem commonly encountered whenever a method of a smart contract calls another contract: we cannot rely on the other contract to work 100% of the time, and it is dangerous to assume that the external call will always be successful. Additionally, the external smart contract might be vulnerable and compromised by an attacker. Even if the team has audited or reviewed the SponsorVault before whitelisting it, some risk might still exist.
Therefore, it is recommended to implement a fail-safe design where failure of an external call to SponsorVault will not disrupt the cross-chain communication. Consider implementing a try-catch block as shown below. If there is any issue with the external SponsorVault
contract, no funds are reimbursed to the users in the worst case scenario, but the issue will not cause any impact to the cross-chain communication.
function _handleExecuteTransaction( ExecuteArgs calldata _args, uint256 _amount, address _asset, // adopted (or local if specified) bytes32 _transferId, bool _reconciled ) private returns (uint256) { // If the domain if sponsored if (address(s.sponsorVault) != address(0)) { // fast liquidity path if (!_reconciled) { // Vault will return the amount of the fee they sponsored in the native fee // NOTE: some considerations here around fee on transfer tokens and ensuring // there are no malicious `Vaults` that do not transfer the correct amount. Should likely do a // balance read about it uint256 starting = IERC20(_asset).balanceOf(address(this)); + try s.sponsorVault.reimburseLiquidityFees(_asset, _args.amount, _args.params.to) returns (uint256 sponsored) { + // Validate correct amounts are transferred + if (IERC20(_asset).balanceOf(address(this)) != starting + sponsored) { + revert BridgeFacet__handleExecuteTransaction_invalidSponsoredAmount(); + } + + _amount = _amount + sponsored; + } catch {} } // Should dust the recipient with the lesser of a vault-defined cap or the converted relayer fee // If there is no conversion available (i.e. no oracles for origin domain asset <> dest asset pair), // then the vault should just pay out the configured constant + try s.sponsorVault.reimburseRelayerFees(_args.params.originDomain, payable(_args.params.to), _args.params.relayerFee) {} catch {} ..SNIP..
#0 - LayneHaber
2022-06-27T15:27:45Z
#1 - 0xleastwood
2022-08-15T09:10:49Z
I believe this to only be a temporary DoS as the broken state can be recovered by calling setSponsorVault
. Downgrading to medium
risk.
🌟 Selected for report: unforgiven
Also found by: xiaoming90
1169.1572 USDC - $1,169.16
https://github.com/code-423n4/2022-06-connext/blob/4dd6149748b635f95460d4c3924c7e3fb6716967/contracts/contracts/core/connext/facets/BridgeFacet.sol#L757 https://github.com/code-423n4/2022-06-connext/blob/4dd6149748b635f95460d4c3924c7e3fb6716967/contracts/contracts/core/connext/facets/BridgeFacet.sol#L541
Assume that toSwap = 10 DAI
and pathLen = 3
in this example.
Therefore, the bridge will pull 10 DAI from the RoutersFacet contract. However, within the BridgeFacet._handleExecuteLiquidity function, due to precision loss when Solidity performs integer division, the following code will result in routerAmount being evaluated to 3:
```solidity
// @audit-issue (10 / 3 = 3)
uint256 routerAmount = toSwap / pathLen;
```
The code will then proceed to deduct the liquidity from all the 3 routers equally:
Router A = -3 DAI
Router B = -3 DAI
Router C = -3 DAI
A total of -9 DAI
is deducted from the 3 routers. However, this value does not match the amount of DAI that was pulled from the RoutersFacet contract: 10 DAI was pulled from the RoutersFacet contract, but only 9 DAI was deducted from the 3 routers. Thus, there is a discrepancy here.
```solidity
function _handleExecuteLiquidity(
  bytes32 _transferId,
  bool _isFast,
  ExecuteArgs calldata _args
) private returns (uint256, address) {
  uint256 toSwap = _args.amount;
  ..SNIP..
  // for each router, assert they are approved, and deduct liquidity
  uint256 routerAmount = toSwap / pathLen;
  for (uint256 i; i < pathLen; ) {
    // decrement routers liquidity
    s.routerBalances[_args.routers[i]][_args.local] -= routerAmount;

    unchecked {
      i++;
    }
  }
  ..SNIP..
```
The following code within the BridgeFacet._reconcile function exhibits the same problem and also demonstrates that only 9 DAI will be reimbursed to the 3 routers.
```solidity
function _reconcile(uint32 _origin, bytes memory _message) internal {
  ..SNIP..
  uint256 toDistribute = amount;
  uint256 pathLen = routers.length;
  if (portalTransferAmount != 0) {
    // ensure a router took on credit risk
    if (pathLen != 1) revert BridgeFacet__reconcile_noPortalRouter();
    toDistribute = _reconcileProcessPortal(amount, token, routers[0], transferId);
  }

  if (pathLen != 0) {
    // fast liquidity path
    // Credit each router that provided liquidity their due 'share' of the asset.
    uint256 routerAmt = toDistribute / pathLen;
    for (uint256 i; i < pathLen; ) {
      s.routerBalances[routers[i]][token] += routerAmt;
      unchecked {
        i++;
      }
    }
  }

  emit Reconciled(transferId, _origin, routers, token, amount, msg.sender);
}
```
It might cause some internal accounting issue within the protocol due to the discrepancy.
It is recommended to update the code so that the amount pulled from the RoutersFacet contract is equal to the amount deducted from the routers. Consider having the last router "sweep" the remaining balance as shown below:
```solidity
function _handleExecuteLiquidity(
  bytes32 _transferId,
  bool _isFast,
  ExecuteArgs calldata _args
) private returns (uint256, address) {
  ..SNIP..
  uint256 routerAmount = toSwap / pathLen;
  uint256 remainingBalance = toSwap;
  for (uint256 i; i < pathLen - 1; ) {
    s.routerBalances[_args.routers[i]][_args.local] -= routerAmount;
    remainingBalance -= routerAmount;

    unchecked {
      i++;
    }
  }
  // the last router "sweeps" whatever is left, so the total deducted always equals toSwap
  s.routerBalances[_args.routers[pathLen - 1]][_args.local] -= remainingBalance;
  ..SNIP..
```
The above code will deduct the liquidity from the routers as follows, which adds up to the full 10 DAI:
Router A = -3 DAI
Router B = -3 DAI
Router C = -4 DAI
#0 - jakekidd
2022-06-25T00:21:41Z
Why this is a valid issue and not just a "dusting problem": In the example, the contract would be sending 10 DAI, debiting 3 DAI from each router... essentially, this extra 1 DAI (the "dust") would create an imbalance/deficiency between the liquidity tracked for all routers and the actual liquidity that the contract has.
This deficiency would be resolved when the reimbursement happens, of course. However, the more concerning problem here: if the extra 1 DAI isn't available (as in, there is literally exactly 9 DAI in the contract), this call will revert. Obviously it wouldn't be sent to chain in that event, but still unideal.
#1 - jakekidd
2022-06-27T03:12:40Z
#2 - 0xleastwood
2022-08-15T08:20:17Z
Duplicate of #213.
🌟 Selected for report: BowTiedWardens
Also found by: 0x1f8b, 0x29A, 0x52, 0xNazgul, 0xNineDec, 0xf15ers, 0xkatana, 0xmint, Chom, ElKu, Funen, IllIllI, JMukesh, Jujic, Kaiziron, Lambda, MiloTruck, Ruhum, SmartSek, SooYa, TerrierLover, TomJ, WatchPug, Waze, _Adam, asutorufos, auditor0517, bardamu, c3phas, catchup, cccz, ch13fd357r0y3r, cloudjunky, cmichel, cryptphi, csanuragjain, defsec, fatherOfBlocks, hansfriese, hyh, jayjonah8, joestakey, k, kenta, obtarian, oyc_109, robee, sach1r0, shenwilly, simon135, slywaters, sorrynotsorry, tintin, unforgiven, xiaoming90, zzzitron
307.6356 USDC - $307.64
The following function is not used anywhere in the contracts.
```solidity
function _reconcileProcessMessage(bytes memory _message)
  internal
  returns (
    uint256,
    address,
    bytes32
  )
{
```
The `_router` parameter is not used within the `repayAavePortalFor` function. Consider removing it if it is not required.
```solidity
function repayAavePortalFor(
  address _router,
  address _adopted,
  uint256 _backingAmount,
  uint256 _feeAmount,
  bytes32 _transferId
) external payable {
```
Consider removing it if it is not required.
The current admin transfer process involves the current admin calling the `ConnextPriceOracle.setAdmin` function. If the nominated EOA account is not a valid account, it is entirely possible that the admin role is accidentally transferred to an uncontrolled account.
```solidity
function setAdmin(address newAdmin) external onlyAdmin {
  address oldAdmin = admin;
  admin = newAdmin;

  emit NewAdmin(oldAdmin, newAdmin);
}
```
Consider implementing a two step process where the admin nominates an account and the nominated account needs to call an acceptOwnership() function for the transfer of admin to fully succeed. This ensures the nominated EOA account is a valid and active account.
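A minimal sketch of such a two-step handover (the `proposeAdmin`/`acceptAdmin` names and events are illustrative only, not part of the existing `ConnextPriceOracle` interface):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.14;

contract TwoStepAdminSketch {
  address public admin;
  address public pendingAdmin;

  event NewPendingAdmin(address oldPendingAdmin, address newPendingAdmin);
  event NewAdmin(address oldAdmin, address newAdmin);

  modifier onlyAdmin() {
    require(msg.sender == admin, "caller is not the admin");
    _;
  }

  constructor() {
    admin = msg.sender;
  }

  // Step 1: the current admin nominates a successor. Nothing changes yet.
  function proposeAdmin(address newAdmin) external onlyAdmin {
    emit NewPendingAdmin(pendingAdmin, newAdmin);
    pendingAdmin = newAdmin;
  }

  // Step 2: the nominated account proves it is live and in control by accepting.
  function acceptAdmin() external {
    require(msg.sender == pendingAdmin, "caller is not the pending admin");
    emit NewAdmin(admin, pendingAdmin);
    admin = pendingAdmin;
    pendingAdmin = address(0);
  }
}
```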
The OpenZeppelin SafeERC20 `safeApprove()` function was found to be used in a number of contracts. This function has been deprecated. Refer to the comments in SafeERC20's source code (https://github.com/OpenZeppelin/openzeppelin-contracts/blob/828fe365eeff13e7aa188e449005ad81f7222189/contracts/token/ERC20/utils/SafeERC20.sol#L39-L44).

An example of the usage of `safeApprove()` is shown below:
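One such usage, taken from `AssetLogic._swapAssetOut` (the full function is reproduced in the residual-allowance item later in this report), is:

```solidity
SafeERC20.safeApprove(IERC20(_assetIn), address(pool), _amountIn);
```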
Use `safeIncreaseAllowance()` instead of `safeApprove()`.
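A minimal sketch of the replacement, reusing the variable names from the snippet above (not a verified drop-in change to the Connext codebase):

```solidity
// Grant the pool an allowance without using the deprecated safeApprove().
SafeERC20.safeIncreaseAllowance(IERC20(_assetIn), address(pool), _amountIn);
```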
A number of TODOs are found in the comments within the source code.
```solidity
// TODO: upgrade to `ProposedOwnable`
contract UpgradeBeaconController is Ownable {
```
```solidity
// TODO: do we want to store a mapping of custodied token balances here?
// Token is local for this domain. We should custody the token here.
// Query token contract for details and calculate detailsHash.
detailsHash = ConnextMessage.formatDetailsHash(token.name(), token.symbol(), token.decimals());
```
It is recommended to review them to ensure that they are resolved before deploying the contracts to the production environment (e.g. Mainnet).
It is a good practice to add an empty constructor with the `initializer` modifier to the `RelayerFeeRouter` and `PromiseRouter` contracts, so that the implementation contract gets initialized automatically upon deployment. This prevents attacks similar to https://forum.openzeppelin.com/t/uupsupgradeable-vulnerability-post-mortem/15680/6
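A minimal sketch of the pattern (hypothetical contract name; not the actual `RelayerFeeRouter`/`PromiseRouter` code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.14;

import {Initializable} from "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";

// Sketch only: the empty constructor runs the `initializer` modifier on the
// implementation contract at deployment time, so the implementation itself can
// never be initialized later by an attacker.
contract RouterImplementationSketch is Initializable {
  constructor() initializer {}

  // Proxy-side initialization still works, because the proxy has its own storage.
  function initialize() external initializer {
    // initialization logic for the proxy goes here
  }
}
```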
The `AssetLogic._swapAssetOut` function authorises the StableSwap pool contract to spend/draw `_amountIn` tokens from the bridge for the swap.
```solidity
SafeERC20.safeApprove(IERC20(_assetIn), address(pool), _amountIn);
```
This function will attempt to swap exactly `_amountOut` tokens, spending up to `_maxIn`.
The allowance granted to the pool is `_amountIn`. However, during the swap, the amount of tokens actually utilised is likely to be less than `_amountIn`. Thus, there will be an unnecessary residual allowance left after the swap.
```solidity
function _swapAssetOut(
  bytes32 _canonicalId,
  address _assetIn,
  address _assetOut,
  uint256 _amountOut,
  uint256 _maxIn
)
  internal
  returns (
    bool,
    uint256,
    address
  )
{
  AppStorage storage s = LibConnextStorage.connextStorage();

  bool success;
  uint256 amountIn;

  // Swap the asset to the proper local asset
  if (stableSwapPoolExist(_canonicalId)) {
    // get internal swap pool
    SwapUtils.Swap storage ipool = s.swapStorages[_canonicalId];
    // if internal swap pool exists
    uint8 tokenIndexIn = getTokenIndexFromStableSwapPool(_canonicalId, _assetIn);
    uint8 tokenIndexOut = getTokenIndexFromStableSwapPool(_canonicalId, _assetOut);

    // calculate slippage before performing swap
    // NOTE: this is less efficient then relying on the `swapInternalOut` revert, but makes it easier
    // to handle slippage failures (this can be called during reconcile, so must not fail)
    if (_maxIn >= ipool.calculateSwapInv(tokenIndexIn, tokenIndexOut, _amountOut)) {
      success = true;
      amountIn = ipool.swapInternalOut(tokenIndexIn, tokenIndexOut, _amountOut, _maxIn);
    }
    // slippage is too high to perform swap: success = false, amountIn = 0
  } else {
    // Otherwise, swap via stable swap pool
    IStableSwap pool = s.adoptedToLocalPools[_canonicalId];
    uint256 _amountIn = pool.calculateSwapOutFromAddress(_assetIn, _assetOut, _amountOut);
    if (_amountIn <= _maxIn) {
      // set the success
      success = true;

      // perform the swap
      SafeERC20.safeApprove(IERC20(_assetIn), address(pool), _amountIn);
      amountIn = pool.swapExactOut(_amountOut, _assetIn, _assetOut, _maxIn);
    }
    // slippage is too high to perform swap: success = false, amountIn = 0
  }
```
Reset the allowance to zero after the swap. This will ensure that the StablePool contract will not be able to withdraw any residual allowance/funds from the bridge under any circumstances.
```solidity
SafeERC20.safeApprove(IERC20(_assetIn), address(pool), 0); // Reset allowance to zero.
```
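A sketch of where the reset could sit inside the else-branch shown above (only the final line is new):

```solidity
// perform the swap (existing code from _swapAssetOut)
SafeERC20.safeApprove(IERC20(_assetIn), address(pool), _amountIn);
amountIn = pool.swapExactOut(_amountOut, _assetIn, _assetOut, _maxIn);
// new: clear any residual allowance so the pool cannot pull further funds later
SafeERC20.safeApprove(IERC20(_assetIn), address(pool), 0);
```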
The StableSwap pool contract is owned by Connext; therefore, this issue is marked as low.
The `ConnextPriceOracle.setAdmin` function does not check if the new admin address provided is `address(0)`. Thus, the ownership of the contract might be accidentally transferred to `address(0)`.
```solidity
function setAdmin(address newAdmin) external onlyAdmin {
  address oldAdmin = admin;
  admin = newAdmin;

  emit NewAdmin(oldAdmin, newAdmin);
}
```
Implement a `require` check to ensure that the new admin address provided is not `address(0)`.
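A minimal sketch of the check (the revert message is illustrative):

```solidity
function setAdmin(address newAdmin) external onlyAdmin {
  // reject the zero address so ownership cannot be burned by mistake
  require(newAdmin != address(0), "ConnextPriceOracle: new admin is the zero address");

  address oldAdmin = admin;
  admin = newAdmin;

  emit NewAdmin(oldAdmin, newAdmin);
}
```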
#0 - jakekidd
2022-07-02T00:18:02Z
Unused Function and Parameter: resolved
No two-step process for ownership transfer: acknowledged, any/all time delay features should come from dao implementation (governor = owner) in the future
SafeApprove Has Been Deprecated: approval needs to be reset to 0 and then increased, so we are stuck using the `safeApprove` method in order to do so (as noted in another issue in this very same report, Residual Allowance Left!)