Maia DAO - Ulysses - Bauchibred's results

Harnessing the power of Arbitrum, Ulysses Omnichain specializes in Virtualized Liquidity Management.

General Information

Platform: Code4rena

Start Date: 22/09/2023

Pot Size: $100,000 USDC

Total HM: 15

Participants: 175

Period: 14 days

Judge: alcueca

Total Solo HM: 4

Id: 287

League: ETH

Maia DAO

Findings Distribution

Researcher Performance

Rank: 10/175

Findings: 4

Award: $2,232.29

QA: grade-a
Analysis: grade-a

🌟 Selected for report: 0

🚀 Solo Findings: 0

Findings Information

🌟 Selected for report: 0xStalin

Also found by: Bauchibred

Labels

bug
2 (Med Risk)
disagree with severity
downgraded by judge
satisfactory
sponsor confirmed
sufficient quality report
duplicate-423

Awards

1943.2693 USDC - $1,943.27

External Links

Lines of code

https://github.com/code-423n4/2023-09-maia/blob/f5ba4de628836b2a29f9b5fff59499690008c463/src/BranchBridgeAgent.sol#L433-L457

Vulnerability details

Impact

Users' deposits awaiting redemption could get stuck in the protocol, leading to loss of funds / stuck funds across multiple other tokens.

NB: Exacerbating this is the fact that not only does the affected token's deposit get stuck in the protocol, but so do all other token deposits attached to that specific depositNonce.

Proof of Concept

Take a look at BranchBridgeAgent.sol#L433-L457

    function redeemDeposit(uint32 _depositNonce) external override lock {
        // Get storage reference
        Deposit storage deposit = getDeposit[_depositNonce];

        // Check Deposit
        if (deposit.status == STATUS_SUCCESS) revert DepositRedeemUnavailable();
        if (deposit.owner == address(0)) revert DepositRedeemUnavailable();
        if (deposit.owner != msg.sender) revert NotDepositOwner();

        // Zero out owner
        deposit.owner = address(0);

        // Transfer token to depositor / user
        //@audit
        for (uint256 i = 0; i < deposit.tokens.length;) {
            _clearToken(msg.sender, deposit.hTokens[i], deposit.tokens[i], deposit.amounts[i], deposit.deposits[i]);

            unchecked {
                ++i;
            }
        }

        // Delete Failed Deposit Token Info
        delete getDeposit[_depositNonce];
    }

Step-by-step POC:

  1. A user initiates a withdrawal via the redeemDeposit() function, providing a depositNonce to identify their deposit data.
  2. The function fetches the deposit data using getDeposit[_depositNonce].
  3. A few checks are performed to validate the deposit.
  4. The deposit's owner is then zeroed out.
  5. An iteration is then done to clear the user's tokens.
  6. Here's the problem:
    • Say the data stored at getDeposit[_depositNonce] has 5 different token deposits that the user wants to redeem.
    • One of these tokens is USDC.
    • Note that from here: https://github.com/d-xo/weird-erc20#tokens-with-blocklists we can see that USDC employs a blocklisting feature.
    • Circle, the USDC issuer, blacklists the user for reasons best known to them.
    • The attempt to redeem the deposit using redeemDeposit() will then fail since the user is blacklisted: using USDC as a case study, it has a notBlacklisted() modifier that is required on functions such as transfer(), transferFrom() and approve().
  7. As explained under step 6, when the function attempts to clear and transfer this blacklisted token to msg.sender, the token contract will revert the transaction. Since the redeemDeposit() function clears tokens in a loop, and _clearToken() simply transfers the tokens, a revert on one token stops the entire process.
  8. This means not only is the blacklisted token locked, but all other tokens associated with that depositNonce are effectively locked within the protocol as well, preventing the user from accessing any of them.

Key Issue: The vulnerability stems from the hardcoded recipient address (msg.sender) in the redemption process. If one of the token transfers fails (due to reasons like blacklisting), all subsequent transfers in that loop will not be executed.

Coded POC

// SPDX-License-Identifier: AGPL-3.0-only
pragma solidity ^0.8.0;

// Basic ERC20 contract implementation
contract ERC20 {
    mapping(address => uint) public balanceOf;
    mapping(address => mapping(address => uint)) public allowance;

    string public name = "ERC20Basic";
    string public symbol = "ERC";
    uint8 public decimals = 18;
    uint public totalSupply;

    event Approval(address indexed owner, address indexed spender, uint value);
    event Transfer(address indexed from, address indexed to, uint value);

    constructor(uint _totalSupply)  {
        totalSupply = _totalSupply;
        balanceOf[msg.sender] = _totalSupply;
    }

    function approve(address spender, uint value) public returns (bool) {
        allowance[msg.sender][spender] = value;
        emit Approval(msg.sender, spender, value);
        return true;
    }

    function transfer(address to, uint value) public virtual returns (bool) {
        require(balanceOf[msg.sender] >= value, "Insufficient balance");
        balanceOf[msg.sender] -= value;
        balanceOf[to] += value;
        emit Transfer(msg.sender, to, value);
        return true;
    }

    function transferFrom(address from, address to, uint value) public virtual returns (bool) {
        // Needed by createDeposit() below; the original snippet omitted it
        require(allowance[from][msg.sender] >= value, "Insufficient allowance");
        require(balanceOf[from] >= value, "Insufficient balance");
        allowance[from][msg.sender] -= value;
        balanceOf[from] -= value;
        balanceOf[to] += value;
        emit Transfer(from, to, value);
        return true;
    }
}

contract BlockableToken is ERC20 {
    address owner;
    modifier auth() { require(msg.sender == owner, "unauthorised"); _; }

    mapping(address => bool) blocked;

    function block(address usr) auth public { blocked[usr] = true; }
    function allow(address usr) auth public { blocked[usr] = false; }

    constructor(uint _totalSupply) ERC20(_totalSupply)  {
        // tx.origin so the Remix account (not the deploying protocol contract)
        // controls the blocklist in this POC
        owner = tx.origin;
    }

    function transfer(address dst, uint wad) override public returns (bool) {
        require(!blocked[msg.sender], "blocked");
        return super.transfer(dst, wad);
    }
}

// Minimalized contract to showcase the issue.
contract BauchibredDepositProtocol {
    struct Deposit {
        address owner;
        address[] tokens;
        uint256[] amounts;
        bool status;
    }

    mapping(uint32 => Deposit) public deposits;
    uint32 public nonceCounter = 1;

    BlockableToken public blacklistedToken; // USDC for this example
    ERC20 public token2;
    ERC20 public token3;
    ERC20 public token4;
    ERC20 public token5;

    constructor()  {
        blacklistedToken = new BlockableToken(1000000);
        token2 = new ERC20(1000000);
        token3 = new ERC20(1000000);
        token4 = new ERC20(1000000);
        token5 = new ERC20(1000000);
        // Hand the initial supplies to the deployer so they can approve and deposit
        blacklistedToken.transfer(msg.sender, 1000000);
        token2.transfer(msg.sender, 1000000);
        token3.transfer(msg.sender, 1000000);
        token4.transfer(msg.sender, 1000000);
        token5.transfer(msg.sender, 1000000);
    }

    function createDeposit(address[] memory tokens, uint256[] memory amounts) public returns (uint32) {
        require(tokens.length == amounts.length, "Mismatched arrays");
        uint32 currentNonce = nonceCounter++;

        for (uint256 i = 0; i < tokens.length; i++) {
            // Transfer tokens from user to the contract
            require(ERC20(tokens[i]).transferFrom(msg.sender, address(this), amounts[i]), "Transfer failed");
        }

        deposits[currentNonce] = Deposit({
            owner: msg.sender,
            tokens: tokens,
            amounts: amounts,
            status: false
        });

        return currentNonce;
    }

    // With this adjustment, when users create a deposit, they also transfer the
    // tokens to the contract, ensuring the tokens are held by the
    // BauchibredDepositProtocol until the user decides to redeem them.

    function _clearToken(address recipient, address tokenAddress, uint256 amount) internal {
        ERC20(tokenAddress).transfer(recipient, amount);
    }

    function redeemDeposit(uint32 _depositNonce) public {
        Deposit storage deposit = deposits[_depositNonce];
        require(deposit.owner != address(0), "DepositRedeemUnavailable");
        require(deposit.owner == msg.sender, "NotDepositOwner");

        deposit.owner = address(0);

        for (uint256 i = 0; i < deposit.tokens.length; i++) {
            _clearToken(msg.sender, deposit.tokens[i], deposit.amounts[i]);
        }

        delete deposits[_depositNonce];
    }
}
Guide on running POC
  1. Setting up Remix:

    • Open Remix IDE.
    • Create a new file and paste the provided Solidity code above
  2. Deploying the Contract:

    • Navigate to the "Deploy & Run Transactions" tab.
    • Deploy BauchibredDepositProtocol. It will auto-deploy related contracts.
  3. Demonstration of the Vulnerability:

    a. Create a Deposit:

    • In the BauchibredDepositProtocol functions, select createDeposit.
    • First approve the BauchibredDepositProtocol address on each token you will deposit. Then, for tokens, enter the blacklistedToken address plus one or more of the other token addresses, and pass matching amounts; for two tokens you can pass [1000, 2000].
    • Execute and note down the depositNonce.

    b. Simulate a Blacklist:

    • In the BlockableToken functions, call block() with the default account address.
    • Executing this blacklists that account.

    c. Attempt the Redemption:

    • In BauchibredDepositProtocol, call redeemDeposit().
    • Enter the noted depositNonce and execute.
    • Observe the error, highlighting the vulnerability.

By following these steps, you can see that a blacklisted token prevents the redemption of all tokens linked to a specific depositNonce.

Tool Used

  • Manual Review
  • Remix

Implement a mechanism that allows users to specify an alternate recipient address during the withdrawal process.

If the above can't be done, then consider breaking the token clearing loop into individual token redemption requests, ensuring that the failure of one token transfer doesn't affect the others; this way only the blacklisted token would get stuck in the protocol.
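The second suggestion can be illustrated with a minimal Python model (illustrative only; redeem_deposit and transfer here are hypothetical stand-ins for the Solidity logic, which would need per-token external calls with try/catch):

```python
# Illustrative model: clear each token independently and record failures,
# instead of letting one failing transfer revert the whole redemption.
def redeem_deposit(deposit, transfer):
    """deposit: list of (token, amount); transfer: callable that may raise."""
    still_stuck = []
    for token, amount in deposit:
        try:
            transfer(token, amount)
        except Exception:
            # e.g. the USDC blocklist revert; only this token stays stuck
            still_stuck.append((token, amount))
    return still_stuck

def transfer(token, amount):
    # Simulate a blacklisted recipient for one token only
    if token == "USDC":
        raise Exception("Blacklistable: account is blacklisted")

deposit = [("USDC", 100), ("DAI", 200), ("WETH", 3)]
print(redeem_deposit(deposit, transfer))  # [('USDC', 100)]
```

Only the blocked token remains stuck; the other deposits attached to the same depositNonce are still recoverable.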

Assessed type

Context

#0 - c4-pre-sort

2023-10-13T10:40:19Z

0xA5DF marked the issue as sufficient quality report

#1 - c4-pre-sort

2023-10-13T10:40:25Z

0xA5DF marked the issue as primary issue

#2 - 0xA5DF

2023-10-13T10:40:51Z

The attempt to redeem the deposit using redeemDeposit() will fail since user is blacklisted, using USDC as a case study, its got a notBlacklisted() modifier that's required for functions such as transfer(), transferFrom() and approve().

Seems like the user's fault, but will leave open for sponsor to comment

#3 - c4-sponsor

2023-10-16T16:18:46Z

0xLightt marked the issue as disagree with severity

#4 - 0xLightt

2023-10-16T16:24:50Z

For the described issue to happen, a user would need to first interact with a poison ERC20 token. Following this, they would need to either use a fallback toggled call or call retrieve deposit after the original transaction fails. The likelihood of this sequence of events occurring in the real world is minimal, making the potential impact of this finding limited.

#5 - c4-sponsor

2023-10-16T16:36:16Z

0xLightt (sponsor) confirmed

#6 - alcueca

2023-10-24T14:47:10Z

The vulnerability is not on poison ERC20s, but on ERC20s where the transfer can revert. USDC and USDT are notorious for that. The fact that tokens of different types can be mixed in a deposit would make that a user loses an amount of DAI because he got blacklisted by USDC. While these are user losses, they are not common or large enough to warrant a High severity.

#7 - c4-judge

2023-10-24T14:47:18Z

alcueca changed the severity to 2 (Med Risk)

#8 - c4-judge

2023-10-24T14:47:25Z

alcueca marked the issue as satisfactory

#9 - c4-judge

2023-10-24T14:48:00Z

alcueca marked issue #423 as primary and marked this issue as a duplicate of 423

Findings Information

Awards

20.0051 USDC - $20.01

Labels

bug
2 (Med Risk)
partial-50
sufficient quality report
duplicate-399

External Links

Lines of code

https://github.com/code-423n4/2023-09-maia/blob/f5ba4de628836b2a29f9b5fff59499690008c463/src/RootBridgeAgent.sol#L160-L226

Vulnerability details

Impact

This issue affects all the contracts that perform calls to the LayerZero endpoint, and could essentially block the channel for cross-chain messaging.

Attack Base

Note that this report is effectively a 2-in-1 issue, since it pertains both to the missing access control and to how specifying a low amount of gas could block a channel. Note that in the updated RootBridgeAgent.sol contract, only the callOut() function includes the requiresRouter() modifier, which means that anyone could call any of the callOut-prefixed functions in order to reach a _performCall().

The above claim can be seen here

<details> <summary> Click to see full referenced code base </summary>
    function callOut(
        address payable _refundee,
        address _recipient,
        uint16 _dstChainId,
        bytes calldata _params,
        GasParams calldata _gParams
    ) external payable override lock requiresRouter {
        //Encode Data for call.
        bytes memory payload = abi.encodePacked(bytes1(0x00), _recipient, settlementNonce++, _params);


        //Perform Call to clear hToken balance on destination branch chain.
        _performCall(_dstChainId, _refundee, payload, _gParams);
    }


    /// @inheritdoc IRootBridgeAgent
    function callOutAndBridge(
        address payable _refundee,
        address _recipient,
        uint16 _dstChainId,
        bytes calldata _params,
        SettlementInput calldata _sParams,
        GasParams calldata _gParams,
        bool _hasFallbackToggled
    ) external payable override lock requiresRouter {
        // Create Settlement and Perform call
        bytes memory payload = _createSettlement(
            settlementNonce,
            _refundee,
            _recipient,
            _dstChainId,
            _params,
            _sParams.globalAddress,
            _sParams.amount,
            _sParams.deposit,
            _hasFallbackToggled
        );


        //Perform Call.
        _performCall(_dstChainId, _refundee, payload, _gParams);
    }


    /// @inheritdoc IRootBridgeAgent
    function callOutAndBridgeMultiple(
        address payable _refundee,
        address _recipient,
        uint16 _dstChainId,
        bytes calldata _params,
        SettlementMultipleInput calldata _sParams,
        GasParams calldata _gParams,
        bool _hasFallbackToggled
    ) external payable override lock requiresRouter {
        // Create Settlement and Perform call
        bytes memory payload = _createSettlementMultiple(
            settlementNonce,
            _refundee,
            _recipient,
            _dstChainId,
            _sParams.globalAddresses,
            _sParams.amounts,
            _sParams.deposits,
            _params,
            _hasFallbackToggled
        );


        // Perform Call to destination Branch Chain.
        _performCall(_dstChainId, _refundee, payload, _gParams);
    }
</details>

Proof of Concept

LayerZero enforces a minimum gas requirement for message delivery.

This means that while sending messages through LayerZero, the sender can specify how much gas they are willing to give to the Relayer to deliver the payload to the destination chain. This configuration is specified in the relayer adapter params.

Now take a look at RootBridgeAgent.sol#L808-L839


    function _performCall(
        uint16 _dstChainId,
        address payable _refundee,
        bytes memory _payload,
        GasParams calldata _gParams
    ) internal {
        // Get destination Branch Bridge Agent
        address callee = getBranchBridgeAgent[_dstChainId];


        // Check if valid destination
        if (callee == address(0)) revert UnrecognizedBridgeAgent();


        // Check if call to remote chain
        if (_dstChainId != localChainId) {
            //Sends message to Layerzero Enpoint
            //@audit
            ILayerZeroEndpoint(lzEndpointAddress).send{value: msg.value}(
                _dstChainId,
                getBranchBridgeAgentPath[_dstChainId],
                _payload,
                _refundee,
                address(0),
                abi.encodePacked(uint16(2), _gParams.gasLimit, _gParams.remoteBranchExecutionGas, callee)
            );


            // Check if call to local chain
        } else {
            //Send Gas to Local Branch Bridge Agent
            callee.call{value: msg.value}("");
            //Execute locally
            IBranchBridgeAgent(callee).lzReceive(0, "", 0, _payload);
        }
    }

As seen, this function performs a call to the LayerZero endpoint for cross-chain messaging. The issue is that _gParams, i.e. the gas parameters passed, are not validated, which incentivizes an attacker to call any of the callOut-prefixed functions and specify an amount of gas that would cause the tx to revert and block the channel. The line where this happens inside the LayerZero contract can be seen here; as previously explained, note that {gas: _gasLimit} is the gas the sender has paid for. The objective is that, due to this small amount of gas, the transaction reverts somewhere inside the lzReceive() function and the message pathway is blocked, resulting in a StoredPayload.
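For reference, the adapter params packed by _performCall can be sketched in Python (a sketch assuming, as in the audited GasParams struct, that both gas fields are uint256; encode_adapter_params_v2 is a hypothetical helper mirroring the abi.encodePacked call):

```python
# Mirrors abi.encodePacked(uint16(2), _gParams.gasLimit,
#                          _gParams.remoteBranchExecutionGas, callee).
# The attacker fully controls gas_limit via the unvalidated _gParams.
def encode_adapter_params_v2(gas_limit: int, native_for_dst: int, callee: bytes) -> bytes:
    assert len(callee) == 20
    return (
        (2).to_bytes(2, "big")                # uint16 adapter params version = 2
        + gas_limit.to_bytes(32, "big")       # uint256 gas for lzReceive on destination
        + native_for_dst.to_bytes(32, "big")  # uint256 native amount to airdrop
        + callee                              # destination address
    )

# Nothing stops a caller from specifying a uselessly small gas limit:
params = encode_adapter_params_v2(1, 0, b"\x11" * 20)
print(len(params))  # 2 + 32 + 32 + 20 = 86 bytes
```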

Tool used

Manual Review

Since _performCall() can be reached by anyone, a minimum gas value needs to be defined for each type of callOut-prefixed function, and every call should be checked to ensure the gas supplied is more than enough for the transaction to be executed.

Alternatively, since calls can be passed a refundee address for the case where too much gas is supplied, enforcement could be added so that every transaction must cover the worst-case scenario, with the excess sent back to the message executor; or the estimateFees() endpoint could be used, with the requirement that the gas passed is not less than this estimate.

Assessed type

Access Control

#0 - c4-pre-sort

2023-10-12T07:34:07Z

0xA5DF marked the issue as duplicate of #785

#1 - c4-pre-sort

2023-10-12T07:34:12Z

0xA5DF marked the issue as sufficient quality report

#2 - c4-judge

2023-10-22T05:19:02Z

alcueca marked the issue as satisfactory

#3 - c4-judge

2023-10-22T05:19:50Z

alcueca marked the issue as partial-50

#4 - alcueca

2023-10-22T05:20:08Z

50% due to comparative quality with satisfactory reports

QA Report

Table of Contents

Issue
QA-01 Multiple issues with how chain Ids are being used in protocol
QA-02 Replenishing reserves should be made to vet strategies easier
QA-03 Using a nonce of uint32 limits the transactions possible in the protocol to 4294967295
QA-04 The internal function used to check if a Port Strategy has reached its daily management limit could be gamed
QA-05 Toggled bridge agent/agent factory could get added for a second time in bridgeAgent/AgentFactories array
QA-06 Anyone can create a RootBridgeAgent which is risky since they can directly interact with RootPort
QA-07 Use safeTransferFrom instead of transferFrom for ERC721 transfers
QA-08 block.number will not be a reliable source of timing on Optimism
QA-09 More checks against 0 inputs should be implemented
QA-10 Inconsistencies between sister function implementations
QA-11 Multiple contracts either use Ownable but don't need to use it or import it without using it
QA-12 Unlike the MulticallRootRouterLibZip the MultiCallRootRouter does not correctly decode data
QA-13 ERC20hTokenBranch & ERC20hTokenRoot could have compatibility issues with tokens that purely comply with EIP-20
QA-14 Owner currently can't execute any extra calldata that's needed in the case where _payload.length > PARAMS_SETTLEMENT_OFFSET

QA-01 Multiple issues with how chain Ids are being used in protocol

Impact

Low to medium, but heavy on the low side since chances are too slim for this to be the case.

Proof of Concept

Issue 1

Multiple instances currently have the localChainId stored as a uint16 variable, e.g. in the ArbitrumBranchPort.sol contract, while some have it stored as uint256, probably done in order to save gas.

Take a look at the BranchBridgeAgentFactory.sol constructor for example:

    constructor(
        uint16 _localChainId,
        uint16 _rootChainId,
        address _rootBridgeAgentFactoryAddress,
        address _lzEndpointAddress,
        address _localCoreBranchRouterAddress,
        address _localPortAddress,
        address _owner
    ) {
        require(_rootBridgeAgentFactoryAddress != address(0), "Root Bridge Agent Factory Address cannot be 0");
        require(
            _lzEndpointAddress != address(0) || _rootChainId == _localChainId,
            "Layerzero Endpoint Address cannot be the zero address."
        );
        require(_localCoreBranchRouterAddress != address(0), "Core Branch Router Address cannot be 0");
        require(_localPortAddress != address(0), "Port Address cannot be 0");
        require(_owner != address(0), "Owner cannot be 0");

        localChainId = _localChainId;
        rootChainId = _rootChainId;
        rootBridgeAgentFactoryAddress = _rootBridgeAgentFactoryAddress;
        lzEndpointAddress = _lzEndpointAddress;
        localCoreBranchRouterAddress = _localCoreBranchRouterAddress;
        localPortAddress = _localPortAddress;
        _initializeOwner(_owner);
    }

From the above, one can see that both _localChainId and _rootChainId have been changed to uint16, unlike the former contracts that store them as uint256; this means that any chain with an Id above 65535 can't be used.

Issue 2

Layer Zero chain Ids could be changed in the future

The impact of this is also low/informational, since it would only mean a DoS (reversion of any cross-chain messaging to/from the deprecated chain Id) until the chain Id gets updated via migrating the Root or BranchBridgeAgentFactory.

Nevertheless, multiple contracts within the protocol are hardcoded with their respective chain Ids, stored as immutable variables, for example see here:

The Ethereum mainnet's LayerZero chain Id used to be 1 and is now 101.

Chain Ids associated with LayerZero have been changed in the past, in one instance due to a security upgrade; these contracts would then be non-functional, since any attempt at a cross-chain message to/from them would target the wrong chain Id.

See this, from the official LayerZero Discord group (note that 10Xkelly is part of the LayerZero team):

hey

For the first issue, store chain Ids generically; it is advisable to just use a uint256, since if chain Ids can change, nothing is stopping LayerZero from deciding to use uint256 to store their Ids in the future.

For the second case, follow the recommendations made by the LZ team and ensure a fast response whenever a chain Id changes, i.e. monitor the chain Ids used for transactions, and in the case where they have changed, migrate immediately.
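The first issue can be made concrete with a short Python sketch (store_as_uint16 is a hypothetical helper used purely for illustration):

```python
# Sketch: storing a LayerZero chain Id as uint16 caps representable Ids at 65535.
UINT16_MAX = 2**16 - 1
print(UINT16_MAX)  # 65535

def store_as_uint16(chain_id: int) -> int:
    # An explicit uint16() downcast in Solidity silently truncates; either way,
    # Ids above 65535 cannot be represented faithfully.
    if chain_id > UINT16_MAX:
        raise OverflowError("chain id does not fit in uint16")
    return chain_id

store_as_uint16(110)        # a small LayerZero-style Id fits fine
try:
    store_as_uint16(70000)  # an Id above the uint16 ceiling does not
except OverflowError as e:
    print(e)
```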

QA-02 Replenishing reserves should be made to vet strategies easier

Impact

Low, suggestion on how to easier vet strategies

Proof of Concept

Take a look at replenishReserves()

    function replenishReserves(address _strategy, address _token) external override lock {
        // Cache Strategy Token Global Debt
        uint256 strategyTokenDebt = getStrategyTokenDebt[_token];

        // Get current balance of _token
        uint256 currBalance = ERC20(_token).balanceOf(address(this));

        // Get reserves lacking
        uint256 reservesLacking = _reservesLacking(strategyTokenDebt, _token, currBalance);

        // Cache Port Strategy Token Debt
        uint256 portStrategyTokenDebt = getPortStrategyTokenDebt[_strategy][_token];

        // Calculate amount to withdraw. The lesser of reserves lacking or Strategy Token Global Debt.
        uint256 amountToWithdraw = portStrategyTokenDebt < reservesLacking ? portStrategyTokenDebt : reservesLacking;

        // Update Port Strategy Token Debt
        getPortStrategyTokenDebt[_strategy][_token] = portStrategyTokenDebt - amountToWithdraw;
        // Update Strategy Token Global Debt
        getStrategyTokenDebt[_token] = strategyTokenDebt - amountToWithdraw;

        // Withdraw tokens from strategy
        IPortStrategy(_strategy).withdraw(address(this), _token, amountToWithdraw);

        //@audit-issue
        // Check if _token balance has increased by _amount
        require(
            ERC20(_token).balanceOf(address(this)) - currBalance == amountToWithdraw,
            "Port Strategy Withdraw Failed"
        );

        // Emit DebtRepaid event
        emit DebtRepaid(_strategy, _token, amountToWithdraw);
    }

As seen, whereas the above process of replenishing reserves is not vulnerable to the classic 1 wei attack, due to its current implementation of querying the balance twice, the attack could only be reached by breaking in during the withdrawal itself.

The check that has been implemented, require( ERC20(_token).balanceOf(address(this)) - currBalance == amountToWithdraw, "Port Strategy Withdraw Failed" );, makes it harder to vet strategies since the condition is too strict.

Change the check to ERC20(_token).balanceOf(address(this)) - currBalance >= amountToWithdraw instead; this way vetting strategies would be easier.
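The strictness of the == check can be seen with illustrative numbers (a Python sketch, not the protocol's code; the 1 wei surplus is an assumed scenario):

```python
# Illustrative numbers: the strict equality check fails whenever the balance
# grows by anything other than exactly amountToWithdraw.
curr_balance = 1_000
amount_to_withdraw = 100

# The strategy sends the requested amount, but a tiny surplus (or a 1 wei
# donation by a third party) lands before the check runs:
new_balance = curr_balance + amount_to_withdraw + 1

strict_check = (new_balance - curr_balance) == amount_to_withdraw   # current code
lenient_check = (new_balance - curr_balance) >= amount_to_withdraw  # suggested
print(strict_check, lenient_check)  # False True
```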

QA-03 Using a nonce of uint32 limits the transactions possible in the protocol to 4294967295

Impact

Currently, when a total of 4294967295 interactions (including non-bridging interactions) have been made on, say, a BranchBridgeAgent, the contract becomes unusable.

This should be a Medium given the high chance of this happening (though it may take a long period), the impact on the protocol when it happens (the protocol will be totally unusable), the difficulty of fixing the issue when it happens (it cannot be fixed through redeployment, and the contract is not upgradeable), and the ability of a malicious user to catalyze this on the ArbitrumBranch without losing anything (apart from Arbitrum gas fees). I believe this qualifies as at least a medium impact issue, but I'm submitting it as QA, hoping the judge upgrades it if he deems fit.

Proof of Concept

In BranchBridgeAgent.sol, the nonces used have type uint32, and almost all external functions that users would interact with increment them, because they call one of the createDeposit-prefixed functions downstream.

With this in mind, it's key to note that it's no over-inflation to say that the uint32 max value of 4294967295 will definitely be reached, even though it may take a long period.

NB: If this is done on, say, the ArbitrumBranchBridgeAgent.sol contract, it would cost approximately nothing, since the malicious user would only need to pay the gas fees, which can be considered very cheap too.

Change the nonce to be of type uint256 instead. Comparing the suggested uint256 max value against the current uint32 max value:

115792089237316195423570985008687907853269984665640564039457584007913129639935 / 4294967295 ≈ 2.696 × 10^67

One can see that not only does this make the attack roughly 2.7 × 10^67 times harder, it also ensures that the contracts never become unusable due to the number of transactions.
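The arithmetic above can be checked with a few lines of Python (the 10 tx/s rate is an assumed, deliberately generous figure):

```python
# uint32 vs uint256 nonce capacity.
UINT32_MAX = 2**32 - 1
UINT256_MAX = 2**256 - 1
print(UINT32_MAX)  # 4294967295 possible nonces

# (2^256 - 1) is exactly divisible by (2^32 - 1), so the ratio is an integer:
ratio = UINT256_MAX // UINT32_MAX
assert ratio * UINT32_MAX == UINT256_MAX
print(f"{ratio:.4e}")  # ~2.696e+67 times more nonces

# Back-of-the-envelope exhaustion time at an assumed 10 tx/s:
years = UINT32_MAX / 10 / (365 * 24 * 3600)
print(round(years, 1), "years")  # ~13.6 years
```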

QA-04 The internal function used to check if a Port Strategy has reached its daily management limit could be gamed

Impact

The internal function used to check if a Port Strategy has reached its daily management limit could be gamed, allowing a strategy to spend even 2x its daily limit in less than 24 hours; the window could be as little as 1-2 hours.

Proof of Concept

The logic used in the modified _checkTimeLimit() function in BranchPort.sol can be exploited by a strategy to potentially double its withdrawal limit in a short time span. This loophole arises from the logic that resets the lastManaged[msg.sender][_token] timestamp to the beginning of the current day every time the limit is reset, due to this:

        if (block.timestamp - lastManaged[msg.sender][_token] >= 1 days) {
            dailyLimit = strategyDailyLimitAmount[msg.sender][_token];
            unchecked {
                lastManaged[msg.sender][_token] = (block.timestamp / 1 days) * 1 days;
            }
        }

This line, (block.timestamp / 1 days) * 1 days, essentially rounds the timestamp down to 00:00 of the current day. In practice, this means a strategy can use up its daily limit just before midnight and then immediately use its entire limit again right after midnight.

This flaw poses a risk to the protocol's assets and could lead to unexpected capital utilization and potential losses. If multiple strategies exploit this vulnerability simultaneously, the negative impact on the protocol could be amplified.

Step-by-Step POC

  1. Initial Setup: A daily limit has been set which is monitored and controlled by the _checkTimeLimit function.
  2. First Utilization: A call to the manage() is made (which internally calls _checkTimeLimit) at say 11:30 PM on Day 1 and consumes its entire daily limit. At this time, the lastManaged[msg.sender][_token] is set to the start of Day 1 due to (block.timestamp / 1 days) * 1 days logic, i.e 12:00 AM of Day 1
  3. Limit Reset Exploit: By 12:30 AM of Day 2, just after midnight, another call is made to the manage() function. Since 24 hours have passed since the start of Day 1 (which is the value stored in lastManaged[msg.sender][_token]), the daily limit is reset, this can be confirmed since the below check passes.
        if (block.timestamp - lastManaged[msg.sender][_token] >= 1 days)
  4. Double Consumption: Now, the _amount specified could even be the daily limit again, effectively doubling the allowed withdrawal in a very short time span; in this case just north of 1 hour (taking into account transaction delays and all).

Using this method, the strategy can consistently exploit the time-based logic, leading to higher withdrawals than intended by the protocol.
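The timeline above can be reproduced with a small Python model of the contested logic (a minimal sketch assuming a 1-day window and a 100-unit limit; Solidity's underflow revert is modeled with an assert):

```python
# Sketch of the contested _checkTimeLimit logic: rounding lastManaged down to
# the start of the day lets a strategy spend 2x its limit within ~1 hour.
DAY = 86400

class Port:
    def __init__(self, daily_limit):
        self.daily_limit = daily_limit
        self.remaining = daily_limit
        self.last_managed = 0

    def check_time_limit(self, now, amount):
        # Mirrors the Solidity: lastManaged is rounded down to 00:00 today.
        if now - self.last_managed >= DAY:
            self.remaining = self.daily_limit
            self.last_managed = (now // DAY) * DAY
        self.remaining -= amount
        assert self.remaining >= 0  # Solidity would revert on underflow

port = Port(daily_limit=100)

# Day 1, 11:30 PM: consume the entire daily limit.
t1 = 1 * DAY + 23 * 3600 + 30 * 60
port.check_time_limit(t1, 100)

# Day 2, 12:30 AM (one hour later): the limit resets, so a full spend passes.
t2 = 2 * DAY + 30 * 60
port.check_time_limit(t2, 100)

print("spent 200 within", (t2 - t1) / 3600, "hours")
```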

Do not round block.timestamp down to the start of the current day once 24 hours have passed. Instead, set the lastManaged timestamp to the current timestamp; this way the dailyLimit is only reset if a full 24 hours have passed since the last reset. Below is the change to apply to the _checkTimeLimit() function.

 function _checkTimeLimit(address _token, uint256 _amount) internal {
     uint256 dailyLimit = strategyDailyLimitRemaining[msg.sender][_token];
     if (block.timestamp - lastManaged[msg.sender][_token] >= 1 days) {
         dailyLimit = strategyDailyLimitAmount[msg.sender][_token];
         unchecked {
-                lastManaged[msg.sender][_token] = (block.timestamp / 1 days) * 1 days;
+                lastManaged[msg.sender][_token] = block.timestamp;
         }
     }
     strategyDailyLimitRemaining[msg.sender][_token] = dailyLimit - _amount;
 }

This was submitted as QA after extensive discussion with the sponsors and their argument that this is an intended design (a max per calendar day, not per 24 hours). In any case, if there is no problem with having a strategy use 2x its daily limit in less than 24 hours, this should be clearly documented; and if the additional intention is to always have lastManaged[msg.sender][_token] date back to 00:00 of every day, then a different approach should be considered due to the issue with leap seconds. Otherwise I see no reason against an upgrade, and would argue that the judge upgrades this to a Medium severity finding, as having strategies 2x their daily limit in less than 24 hours could lead to other issues as well.

QA-05 Toggled bridge agent/agent factory could get added for a second time in bridgeAgent/AgentFactories array

Impact

Low, this leads to clashing/duplicating records of bridge agents or their factories.

Proof of Concept

Take a look at

    function addBridgeAgent(address _manager, address _bridgeAgent) external override requiresBridgeAgentFactory {
        if (isBridgeAgent[_bridgeAgent]) revert AlreadyAddedBridgeAgent();

        bridgeAgents.push(_bridgeAgent);
        getBridgeAgentManager[_bridgeAgent] = _manager;
        isBridgeAgent[_bridgeAgent] = true;

        emit BridgeAgentAdded(_bridgeAgent, _manager);
    }

    function toggleBridgeAgent(address _bridgeAgent) external override onlyOwner {
        isBridgeAgent[_bridgeAgent] = !isBridgeAgent[_bridgeAgent];

        emit BridgeAgentToggled(_bridgeAgent);
    }

    //... code blocks omitted for brevity

    function addBridgeAgentFactory(address _bridgeAgentFactory) external override onlyOwner {
        if (isBridgeAgentFactory[_bridgeAgentFactory]) revert AlreadyAddedBridgeAgentFactory();

        bridgeAgentFactories.push(_bridgeAgentFactory);
        isBridgeAgentFactory[_bridgeAgentFactory] = true;

        emit BridgeAgentFactoryAdded(_bridgeAgentFactory);
    }

    function toggleBridgeAgentFactory(address _bridgeAgentFactory) external override onlyOwner {
        isBridgeAgentFactory[_bridgeAgentFactory] = !isBridgeAgentFactory[_bridgeAgentFactory];

        emit BridgeAgentFactoryToggled(_bridgeAgentFactory);
    }

As seen above, whenever an agent or a factory gets toggled, its flag is flipped to the opposite value rather than the entry being removed from the array. Since only this flag is checked when adding an agent or an agent factory, an agent that has been toggled off can be pushed into the array a second time.

Even though this can lead to duplicate entries of agents or agent factories in their respective arrays, these arrays are only used for record keeping and are not read elsewhere, which justifies the low severity.

When adding, do not only check whether the agent/agent factory is currently toggled off, but also whether it has previously been added to the array.
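One way to implement that recommendation is to track "ever added" membership separately from the active/inactive toggle. This is a hypothetical sketch: the `everAddedBridgeAgent` mapping is an assumed new storage variable, while the rest mirrors the existing `addBridgeAgent` shown above.

```solidity
// Hypothetical mitigation: record whether an agent was ever pushed to the
// array, independently of its active/inactive toggle state.
mapping(address => bool) public everAddedBridgeAgent;

function addBridgeAgent(address _manager, address _bridgeAgent) external override requiresBridgeAgentFactory {
    // Reject both currently-active agents and agents that were added before
    // and merely toggled off, preventing duplicate array entries.
    if (isBridgeAgent[_bridgeAgent] || everAddedBridgeAgent[_bridgeAgent]) {
        revert AlreadyAddedBridgeAgent();
    }

    everAddedBridgeAgent[_bridgeAgent] = true;
    bridgeAgents.push(_bridgeAgent);
    getBridgeAgentManager[_bridgeAgent] = _manager;
    isBridgeAgent[_bridgeAgent] = true;

    emit BridgeAgentAdded(_bridgeAgent, _manager);
}
```

The same pattern would apply to `addBridgeAgentFactory`/`toggleBridgeAgentFactory`.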

QA-06 Anyone can create a RootBridgeAgent, which is risky since it can directly interact with RootPort

Impact

Low, since this seems to be intended design, but a RootBridgeAgent created by a malicious user, with, say, a malicious router, will be allowed to access some of RootPort's critical functions.

Proof of Concept

Take a look at RootBridgeAgentFactory.sol#L48-L59

    function createBridgeAgent(address _newRootRouterAddress) external returns (address newBridgeAgent) {
        newBridgeAgent = address(
            new RootBridgeAgent(
                rootChainId,
                lzEndpointAddress,
                rootPortAddress,
                _newRootRouterAddress
            )
        );


        IRootPort(rootPortAddress).addBridgeAgent(msg.sender, newBridgeAgent);
    }

Anyone can call this function, passing in an invalid (or malicious) _newRootRouterAddress, after which RootPort#addBridgeAgent is called, immediately granting the deployed address the isBridgeAgent role.

NB: RootPort is the core of Ulysses, where critical functionality is handled.

So now a rogue BridgeAgent, created by a malicious user with a malicious router, can call RootPort functions guarded by the requiresBridgeAgent modifier, such as bridgeToRoot, burn, etc.

Apply access control to this function; advisably, only the admin should be allowed to call it.
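A minimal sketch of the recommendation, assuming the factory inherits an `Ownable`-style contract exposing an `onlyOwner` modifier (an assumption; the current factory has no such restriction):

```solidity
// Hypothetical mitigation: restrict bridge agent creation to the factory owner
// so arbitrary users cannot register agents against RootPort.
function createBridgeAgent(address _newRootRouterAddress) external onlyOwner returns (address newBridgeAgent) {
    newBridgeAgent = address(
        new RootBridgeAgent(
            rootChainId,
            lzEndpointAddress,
            rootPortAddress,
            _newRootRouterAddress
        )
    );

    IRootPort(rootPortAddress).addBridgeAgent(msg.sender, newBridgeAgent);
}
```

An alternative, if permissionless creation is truly intended, is to vet `_newRootRouterAddress` against an allowlist before granting the isBridgeAgent role.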

QA-07 Use safeTransferFrom instead of transferFrom for ERC721 transfers

Impact

VirtualAccount's withdrawERC721() is used to transfer ERC721 tokens, but it uses transferFrom instead of safeTransferFrom, which is bad practice: if the recipient is a contract that is not capable of handling ERC721 tokens, the sent tokens could be locked.

Proof Of Concept

Take a look at VirtualAccount.sol#L61-L63

Note that when transferFrom() is used for an ERC721 transfer, there is no check that the recipient address is capable of handling the incoming token. Without that check, the transaction will go through and the ERC721 token can end up locked in the recipient contract.

Use safeTransferFrom() instead of transferFrom() for NFT transfers. This call reverts if onERC721Received is not implemented by the receiving contract, preventing an incompatible contract from receiving the token.
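A minimal sketch of the fix, assuming the Solmate-style `ERC721` interface already imported by VirtualAccount:

```solidity
// Hypothetical fix: safeTransferFrom reverts if a contract recipient does not
// implement onERC721Received, preventing the NFT from being locked.
function withdrawERC721(address _token, uint256 _tokenId) external override requiresApprovedCaller {
    ERC721(_token).safeTransferFrom(address(this), msg.sender, _tokenId);
}
```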

QA-08 block.timestamp will not be a reliable source of timing on Optimism

Impact

Low/info. Optimism's block.timestamp has some nuances compared to mainnet, which can be seen here

Proof of Concept

The Ulysses omnichain contracts are to be deployed on different chains, most especially the branch contracts, but some of the logic around this is tied to block.timestamp, which could lead to issues on chains such as Optimism.

This should be taken into account, and it should be ensured that no core functionality relying on it can be broken.

QA-09 More checks against 0 inputs should be implemented

Impact

Low/info on having better code structure to prevent unnecessary/costly errors.

Proof Of Concept

This issue can be seen in multiple files; consider copy-pasting the search command below to see the instances.

Take a look at VirtualAccount.sol's withdrawERC20() as an example:

    function withdrawERC20(address _token, uint256 _amount) external override requiresApprovedCaller {
        _token.safeTransfer(msg.sender, _amount);
    }

According to https://github.com/d-xo/weird-erc20#revert-on-zero-value-transfers

The above would revert if the token does not support 0-value transfers. Another related case is minting: using this search command: https://github.com/search?q=repo%3Acode-423n4%2F2023-09-maia+mint%28&type=code, one can see that in the current implementation there is a whopping total of 14 instances (a few of which are out of scope, being in test contracts) where tokens could get minted to the zero address.

Introduce a requirement that the amount should not be 0

NB: The above are just two instances of how applying 0 checks could help improve the code.
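The first instance above could be guarded as follows. This is a sketch; the `ZeroAmount` custom error is an assumed addition, not an existing error in the codebase.

```solidity
// Hypothetical sketch: reject 0-value withdrawals up front so tokens that
// revert on zero-value transfers cannot cause confusing failures.
error ZeroAmount();

function withdrawERC20(address _token, uint256 _amount) external override requiresApprovedCaller {
    if (_amount == 0) revert ZeroAmount();
    _token.safeTransfer(msg.sender, _amount);
}
```

An analogous `if (_to == address(0)) revert` guard would cover the mint-to-zero-address instances.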

QA-10 Inconsistencies between sister function implementations

Impact

Low. Inconsistency in implementing sister functions; in this case a security measure applied in one is missing in the other, which could lead to issues.

Proof Of Concept

Take a look at bridgeInMultiple()

  function bridgeInMultiple(
      address _recipient,
      address[] memory _localAddresses,
      address[] memory _underlyingAddresses,
      uint256[] memory _amounts,
      uint256[] memory _deposits
  ) external override requiresBridgeAgent {
      // Cache Length
      uint256 length = _localAddresses.length;
//@audit Question: why are the sanity checks applied in `bridgeOutMultiple()` not applied here, and why isn't this function locked?
      // Loop through token inputs
      for (uint256 i = 0; i < length;) {
          // Check if hTokens are being bridged in
          if (_amounts[i] - _deposits[i] > 0) {
              unchecked {
                  _bridgeIn(_recipient, _localAddresses[i], _amounts[i] - _deposits[i]);
              }
          }

          // Check if underlying tokens are being cleared
          if (_deposits[i] > 0) {
              withdraw(_recipient, _underlyingAddresses[i], _deposits[i]);
          }

          unchecked {
              ++i;
          }
      }
  }

From the above one can see that while bridging in multiple tokens, none of the sanity checks are applied. Looking below at the bridgeOutMultiple() function, one can see that multiple checks are applied, from validating array lengths onwards; their absence in bridgeInMultiple() could lead to unforeseen issues.

    /// @inheritdoc IBranchPort
    function bridgeOutMultiple(
        address _depositor,
        address[] memory _localAddresses,
        address[] memory _underlyingAddresses,
        uint256[] memory _amounts,
        uint256[] memory _deposits
    ) external override lock requiresBridgeAgent {
        // Cache Length
        uint256 length = _localAddresses.length;


        // Sanity Check input arrays
        if (length > 255) revert InvalidInputArrays();
        if (length != _underlyingAddresses.length) revert InvalidInputArrays();
        if (_underlyingAddresses.length != _amounts.length) revert InvalidInputArrays();
        if (_amounts.length != _deposits.length) revert InvalidInputArrays();


        // Loop through token inputs and bridge out
        for (uint256 i = 0; i < length;) {
            _bridgeOut(_depositor, _localAddresses[i], _underlyingAddresses[i], _amounts[i], _deposits[i]);


            unchecked {
                i++;
            }
        }
    }

Ensure the same security checks are implemented in sister functions.
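Concretely, mirroring bridgeOutMultiple()'s checks (and its `lock` modifier) in bridgeInMultiple() could look like the sketch below; the loop body is unchanged and elided for brevity.

```solidity
// Hypothetical sketch: apply the same sanity checks and reentrancy lock that
// bridgeOutMultiple() already uses.
function bridgeInMultiple(
    address _recipient,
    address[] memory _localAddresses,
    address[] memory _underlyingAddresses,
    uint256[] memory _amounts,
    uint256[] memory _deposits
) external override lock requiresBridgeAgent {
    // Cache Length
    uint256 length = _localAddresses.length;

    // Sanity Check input arrays, as done in bridgeOutMultiple()
    if (length > 255) revert InvalidInputArrays();
    if (length != _underlyingAddresses.length) revert InvalidInputArrays();
    if (_underlyingAddresses.length != _amounts.length) revert InvalidInputArrays();
    if (_amounts.length != _deposits.length) revert InvalidInputArrays();

    // ... existing loop over token inputs unchanged
}
```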

QA-11 Multiple contracts either use Ownable but don't need to use it or import it without using it

Impact

Low. Just info on how to better structure the code.

Proof of Concept

  1. Contracts that don't need to use it: ERC20hTokenBranchFactory.sol
  2. Contracts that import Ownable but don't use it: ArbitrumBranchPort.sol, CoreBranchRouter.sol, RootBridgeAgent.sol, RootBridgeAgentFactory.sol, ERC20hTokenRoot.sol

More efforts should be placed on reducing bulkiness of code as much as possible

QA-12 Unlike the MulticallRootRouterLibZip, the MulticallRootRouter does not actually decode data

Impact

Low, since this seems to be intended behaviour, but at present the docs do not clearly cover it.

Proof of Concept

Take a look at _decode()

    function _decode(bytes calldata data) internal pure virtual returns (bytes memory) {
        return data;
    }

Now here is the implementation from MulticallRootRouterLibZip.sol

    function _decode(bytes calldata data) internal pure override returns (bytes memory) {
        return data.cdDecompress();
    }

As seen, the latter actually decompresses the data. But all 11 attempts to decode data in the MulticallRootRouter.sol contract just return the data exactly as it was passed in, which is unnecessary behaviour and wastes gas querying the internal _decode() function.

Correctly document this behaviour if intended or use the right implementation to decode data

QA-13 ERC20hTokenBranch & ERC20hTokenRoot could have compatibility issues with tokens that purely comply with EIP-20

Impact

Low, potential incompatibility with some ERC20 tokens which is not protocol's intended behaviour as specified in the ReadME.

Proof Of Concept

Some extra functions are queried on the ERC20 tokens to be integrated that are not part of the ERC-20 standard and were added later as an optional extension. As such, it is unsafe to blindly cast all tokens to this interface and then call these functions.

Using these 3 search commands:

From the above we can see that there are 12 instances of attempts to query properties that are not part of the generic EIP-20 specification.

Additionally, note that this section of the contest's ReadME states that some contracts must comply with EIP-20 (ERC20hTokenBranch & ERC20hTokenRoot), but as explained above this is not the case.

  • ERC20hTokenBranch: Should comply with ERC20/EIP20
  • ERC20hTokenRoot: Should comply with ERC20/EIP20

Either correctly mitigate this, or clearly document that tokens purely following the standard might have compatibility issues when integrating with the system if they do not include these optional extensions.
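One defensive pattern for the optional getters is a low-level `staticcall` with a fallback, rather than a blind interface cast. This is a hypothetical sketch (`_tryDecimals` and the fallback value of 18 are assumptions, not the protocol's code):

```solidity
// Hypothetical sketch: query the optional decimals() getter defensively,
// falling back to 18 when a token only implements the core EIP-20 interface.
function _tryDecimals(address _token) internal view returns (uint8) {
    (bool success, bytes memory data) =
        _token.staticcall(abi.encodeWithSignature("decimals()"));

    // Only trust a well-formed 32-byte return value.
    if (success && data.length == 32) {
        return abi.decode(data, (uint8));
    }
    return 18; // assumed default for tokens omitting the optional getter
}
```

The same shape applies to the optional `name()` and `symbol()` getters.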

QA-14 Owner currently can't execute any extra calldata that's needed in the case where _payload.length > PARAMS_SETTLEMENT_OFFSET

Impact

Low. The contract is currently not working as specified.

Proof Of Concept

Take a look at executeWithSettlement()

    /**
     * @notice Function to execute a cross-chain request with a single settlement.
     * @param _recipient Address of the recipient of the settlement.
     * @param _router Address of the router contract to execute the request.
     * @param _payload Data received from the messaging layer.
     * @dev Router is responsible for managing the msg.value either using it for more remote calls or sending to user.
     * @dev SETTLEMENT FLAG: 1 (Single Settlement)
     */
    function executeWithSettlement(address _recipient, address _router, bytes calldata _payload)
        external
        payable
        onlyOwner
    {
        // Clear Token / Execute Settlement
        SettlementParams memory sParams = SettlementParams({
            settlementNonce: uint32(bytes4(_payload[PARAMS_START_SIGNED:PARAMS_TKN_START_SIGNED])),
            recipient: _recipient,
            hToken: address(uint160(bytes20(_payload[PARAMS_TKN_START_SIGNED:45]))),
            token: address(uint160(bytes20(_payload[45:65]))),
            amount: uint256(bytes32(_payload[65:97])),
            deposit: uint256(bytes32(_payload[97:PARAMS_SETTLEMENT_OFFSET]))
        });

        // Bridge In Assets
        BranchBridgeAgent(payable(msg.sender)).clearToken(
            sParams.recipient, sParams.hToken, sParams.token, sParams.amount, sParams.deposit
        );

        // Execute Calldata if there is any
        if (_payload.length > PARAMS_SETTLEMENT_OFFSET) {
            // Execute remote request
     //@audit-issue owner can't execute any extra calldata needed in the case where `_payload.length > PARAMS_SETTLEMENT_OFFSET`; this is because the router's `executeSettlement()` function currently has no implementation and will only revert
            IRouter(_router).executeSettlement{value: msg.value}(_payload[PARAMS_SETTLEMENT_OFFSET:], sParams);
        } else {
            // Send remaining native / gas token to recipient
            _recipient.safeTransferETH(address(this).balance);
        }
    }

As seen at the end of the function, when _payload.length > PARAMS_SETTLEMENT_OFFSET, a call is made to the executeSettlement() function, but this is currently unimplemented and instead reverts any call made to it, as can be seen here:

    /// @inheritdoc IBranchRouter
    function executeSettlement(bytes calldata, SettlementParams memory)
        external
        payable
        virtual
        override
        requiresAgentExecutor
    {
        revert UnrecognizedFunctionId();
    }

Implement the executeSettlement() function, or clearly document that for this path to be usable it must first be properly implemented.
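A sketch of what an overriding router could look like so that settlements carrying extra calldata do not revert. The `_executeSettlementLogic` hook is a hypothetical name for whatever settlement handling a concrete router needs.

```solidity
// Hypothetical sketch: a concrete router overriding the default reverting
// implementation so the extra-calldata path in executeWithSettlement() works.
function executeSettlement(bytes calldata _data, SettlementParams memory _sParams)
    external
    payable
    override
    requiresAgentExecutor
{
    // Route the settlement payload to router-specific handling instead of
    // unconditionally reverting with UnrecognizedFunctionId().
    _executeSettlementLogic(_data, _sParams);
}
```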

#0 - c4-pre-sort

2023-10-15T13:05:05Z

0xA5DF marked the issue as sufficient quality report

#1 - 0xA5DF

2023-10-15T13:05:24Z

QA3 is dupe of #834

#2 - c4-judge

2023-10-21T12:40:15Z

alcueca marked the issue as grade-a

Awards

243.335 USDC - $243.33

Labels

analysis-advanced
grade-a
high quality report
edited-by-warden
A-25

External Links

Analysis of the Maia DAO - Ulysses Codebase

Approach

The auditing process commenced with a high level but brief overview of the entire Maia DAO ecosystem. This wasn't restricted merely to the recent updates in the Ulysses scope. Subsequent to this, several hours were dedicated to examining findings from previous audits and drawing insights from them. This was followed by a meticulous, line-by-line security review of the sLOC, beginning from the Root contracts and extending down to the Branch contracts and even the Arbitrum-specific contracts. The concluding phase of this process entailed interactive sessions with the developers on Discord. This fostered an understanding of unique implementations and facilitated discussions about potential vulnerabilities and observations that I deemed significant, requiring validation from the team.

Architecture Feedback

  • While the code documentation currently sits at a medium, acceptable level, there's potential for enhancement. Likewise, a test coverage of 69% is not really wide coverage and leaves room for improvement.
  • Extensive usage of factories also adds an extra layer of complexity, but that is unfortunately unavoidable in the web3 space, especially since the omnichain system aims to implement cross-chain messaging and asset transfers, and with this update it now uses LayerZero for that, which is naturally a complex challenge and inherently requires complex code.
  • It is also important to note that the Solmate ERC20 implementation is being used, and this implementation allows transfers to the 0x0 address. This could have negative repercussions in the future. As a suggestion, add checks that no transfers can go to the zero address, or use OpenZeppelin's ERC20 implementation.
  • The current implementation utilizes Balancer's composable stable pools for hTokens and their underlyings to address discrepancies in token decimal counts compared to the prior codebase. While this resolves certain challenges, it also introduces third-party risks from Balancer. MAIA lacks control over Balancer pool actions, leaving them vulnerable to pauses or potential reactionary measures like Balancer's recent activation of the proportional-exit feature following a security breach. This underscores the inherent risks associated with increased external integrations and the need for caution.
  • Some contracts, functions, and variables need better naming, as in some cases the name has little to do with the functionality. Take the internal function used to check whether a Port Strategy has reached its daily management limit, _checkTimeLimit(), as an example: the current implementation is not actually a 24-hour limit, which in a way contradicts the naming, so better documentation/naming here would help. Regarding _checkTimeLimit(), its current implementation is meant to always count from a specific time each day, say 00:00 for this analysis's sake, but it does not take into account the issue of leap seconds/years, which could become a problem if that exact timing must be followed.
  • Some of the provided Router contracts had functions that weren't implemented (reverting by default), making it difficult to assess whether potential issues could arise from them. Since these functions could be extended by others in the future, it would be key to make integrators aware that they should be careful with the type of integration being added.
  • It wouldn't be fair if I didn't comment on the ASCII art in the contracts; it's a nice job :)

Centralization Risks

  • Upholding trust assumptions is pivotal for the project's longevity. Malicious or compromised ownership can drain liquidity from a vast portion of the project. This can be orchestrated by embedding malicious strategies (Gauges) within existing project reward components or by introducing harmful reward tokens.
  • The above also applies to Bridge Agents: currently, while adding/toggling them, there are absolutely no checks. Once activated, these agents possess unchecked authority to mint or burn hTokens as they see fit. A safeguard here could be to empower the admin to exclusively invoke the toggleBridgeAgent() function on a Bridge Agent address recognized by the BranchPort. Another alternative might be to limit admin capabilities to only removing Bridge Agents, though I lean towards the former.

Systemic Risks

  • Maia DAO has one of the most prolific development teams I've seen; I would put them at an A- if Chainlink is an A+. A major decider in this grading is the fact that the core team is relatively small, which results in a very low project bus factor. If anything happens to one of the team members, development will be drastically stalled, or the project may even be closed down. This is corroborated in my Other Recommendations section: during the contest I could only get hold of 0xbuzzlightyear, which means other security researchers most probably faced the same issue, and had something happened to 0xbuzzlightyear, the whole experience would have been massively affected. Getting more team members should be prioritised.
  • There is a possibility of completely unexpected results if an L2 chain goes down.
  • Each chain has some differences from one another, and it may be possible that some chains do not synchronize well with others due to differences in say block timing for example, this may be very problematic when managing funds across chains.
  • From a token/liquidity point of view, I believe the protocol does not have any more systemic risk than other DeFi products. Its spread across integrating Balancer composable pools, liquidity mining, staking, and strategies gives the project a better chance of surviving any black-swan-type events that may occur.

Testability

The system's testability is commendable. The test suite has imperfections, like a 69% coverage rate and occasionally intricate, lengthy tests that are a bit hard to follow; the latter shouldn't be considered a flaw though, because once familiarized with the setup, devising tests and creating PoCs becomes a smoother endeavour. The developers have provided a high-quality testing sandbox for implementing, testing, and expanding ideas.

Other Recommendations

Protocol Requires Additional Developers

While the MaiaDAO developers have showcased commendable prowess, the rapid evolution of the system – notably the shift from anyCall to LayerZero for cross-chain messaging – with a limited workforce has inevitably led to oversight in certain aspects. Evidence of this can be gleaned from codebase typos. Drawing from the broken windows theory, such lapses may signify underlying code imperfections.

Another evident example is the fact that tests only cover 69% of the whole codebase, among others. Not to forget that during the contest most of the questions in the public contest Discord chat were answered by the OG 0xbuzzlightyear, and privately I only received answers from him too. All of this reinforces the idea that the team needs more developers and eyes on the project.

Improve Testability

As already stated in this report, whereas 69% is not a bad amount of coverage, since it allows security researchers to build on the already available test suite, it still isn't enough, and the protocol should be aiming for nothing less than 95% test coverage.

Leverage Additional Auditing Tools

The majority of security researchers use Visual Studio Code with some specific plugins. A very popular one is Solidity Visual Developer, which does not currently integrate with this protocol; there are others, but that one, I believe, is at least a very good tool that should be supported.

Enhance Event Monitoring

The present implementation subtly suggests that event tracking isn't maximized. Instances where events are missing or seem arbitrarily embedded have been observed. A shift towards a more purposeful utilization of events is recommended.

Security Researcher Logistics

My review of the updated Maia DAO - Ulysses codebase spanned 45 hours distributed over a week:

  • 2 hours dedicated to writing this analysis.
  • 5 hours exploring the previous MAIA contest on Code4rena, viewable here.
  • 3 hours were allocated for discussions with sponsors on the private discord group regarding potential vulnerabilities.
  • The remaining time on finding issues and writing the report for each of them on the spot, later on editing a few with more knowledge gained on the protocol as a whole, or downgrading them to QA reports

Conclusion

The codebase was a great learning experience, though it was a pretty hard nut to crack, given that this is effectively an update contest and most easy-to-catch bugs had already been spotted and mitigated in the previous contest.

During the course of my security review, I encountered only one High severity finding related to how all deposits attached to a specific nonce could become stuck in the protocol for a user. The cause of this bug, in reality, would likely originate from one of the tokens whose deposits need to be redeemed. However, this would affect all the deposits attached to that specific nonce since the redemption is executed in a loop method; a reversion in one translates to a reversion in all.

Additionally, I submitted two potentially Medium-worthy findings:

  • One involves the current MulticallRootRouter, which could revert when a correct integration is made by a 3rd party dApp passing the supported function IDs, namely 0x04, 0x05, 0x06.
  • The second pertains to the potential for an attacker to block the LayerZero channel. This vulnerability arises from missing access control on the callOut prefixed functions and an absence of a minimum gas pass check when the execution reaches performCall().

Time spent:

42 hours

#0 - c4-pre-sort

2023-10-15T14:31:07Z

0xA5DF marked the issue as sufficient quality report

#1 - c4-pre-sort

2023-10-15T14:39:17Z

0xA5DF marked the issue as high quality report

#2 - alcueca

2023-10-20T05:31:39Z

Best report in terms of original content. It would be best overall if it would also contain an architectural description of Ulysses for context.

#3 - c4-judge

2023-10-20T05:31:44Z

alcueca marked the issue as grade-a
