Platform: Code4rena
Start Date: 02/10/2023
Pot Size: $1,100,000 USDC
Total HM: 28
Participants: 64
Period: 21 days
Judge: GalloDaSballo
Total Solo HM: 13
Id: 292
League: ETH
Rank: 1/64
Findings: 8
Award: $502,041.99
🏆 Selected for report: 6
🏆 Solo Findings: 5
🏆 Selected for report: xuwinnie
108756.4488 USDC - $108,756.45
Attacker can manipulate the sorted queue in the log sorter; the constraints are not strong enough, and reverted L1 logs and events can still be emitted.
Let's see what is enforced in this circuit. For a unique timestamp, either there is only a write log, in which case we should add it to the result queue; or there is both a write log and a rollback log, which means a revert took place and we should ignore it.
// We compare timestamps, and then resolve logic over rollbacks, so the only way when
// keys are equal can be when we do rollback
let sorting_key = sorted_item.timestamp;

// ensure sorting for uniqueness timestamp and rollback flag
// We know that timestamps are unique accross logs, and are also the same between write and rollback
let (keys_are_equal, new_key_is_smaller) =
    unpacked_long_comparison(cs, &[previous_key], &[sorting_key]);

// keys are always ordered no matter what, and are never equal unless it's padding
new_key_is_smaller.conditionally_enforce_false(cs, should_pop);
First, we enforce that the timestamps in the sorted queue are in ascending order, which means the write log and the rollback log of the same timestamp must be adjacent.
// there are only two cases when keys are equal:
// - it's a padding element
// - it's a rollback
// it's enough to compare timestamps as VM circuit guarantees uniqueness of the if it's not a padding
let previous_is_not_rollback = previous_item.rollback.negated(cs);
let enforce_sequential_rollback = Boolean::multi_and(
    cs,
    &[previous_is_not_rollback, sorted_item.rollback, should_pop],
);
keys_are_equal.conditionally_enforce_true(cs, enforce_sequential_rollback);
Here, for two consecutive elements A, B in the queue, if A is not a rollback and B is a rollback, we enforce that A and B share the same timestamp.
let same_log = UInt32::equals(cs, &sorted_item.timestamp, &previous_item.timestamp);

let values_are_equal =
    UInt256::equals(cs, &sorted_item.written_value, &previous_item.written_value);

let negate_previous_is_trivial = previous_is_trivial.negated(cs);
let should_enforce = Boolean::multi_and(cs, &[same_log, negate_previous_is_trivial]);

values_are_equal.conditionally_enforce_true(cs, should_enforce);
Here, for two consecutive elements A, B in the queue, if they share the same timestamp, we enforce that they have the same written value. (This is already guaranteed by the earlier circuits.)
let this_item_is_non_trivial_rollback =
    Boolean::multi_and(cs, &[sorted_item.rollback, should_pop]);
let negate_previous_item_rollback = previous_item.rollback.negated(cs);
let prevous_item_is_non_trivial_write = Boolean::multi_and(
    cs,
    &[negate_previous_item_rollback, negate_previous_is_trivial],
);
let is_sequential_rollback = Boolean::multi_and(
    cs,
    &[
        this_item_is_non_trivial_rollback,
        prevous_item_is_non_trivial_write,
    ],
);
same_log.conditionally_enforce_true(cs, is_sequential_rollback);
This is almost the same as the second constraint.
// decide if we should add the PREVIOUS into the queue
// We add only if previous one is not trivial,
// and it had a different key, and it wasn't rolled back
let negate_same_log = same_log.and(cs, should_pop).negated(cs);
let add_to_the_queue = Boolean::multi_and(
    cs,
    &[
        negate_previous_is_trivial,
        negate_same_log,
        negate_previous_item_rollback,
    ],
);
Finally, for two consecutive elements A, B in the queue, if A is a write and A, B belong to different logs, we add A to the result queue.
We use w to denote a write and r to denote a rollback; two adjacent letters share the same timestamp. An ideal sorted queue would look like wr wr w w w wr, and the system works well in that case. However, what if someone submits wr rw wr rw as the sorted queue? All four logs here are reverted, so no log should be added to the result queue. Yet this sorted queue satisfies all the constraints above, and it will add the second and the fourth log to the result queue. (Try it yourself, or see the sketch below.)
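To make this concrete, below is a plain-Rust model (ours, not circuit code; the LogItem struct and helper logic are illustrative) of just the ordering, adjacency, and add-to-queue rules described above. It accepts the wr rw wr rw queue and leaks the two reverted writes:

#[derive(Clone, Copy)]
struct LogItem { timestamp: u32, rollback: bool }

fn main() {
    // "wr rw wr rw": every log here is reverted, so nothing should survive.
    let queue = [
        LogItem { timestamp: 1, rollback: false }, // w
        LogItem { timestamp: 1, rollback: true },  // r
        LogItem { timestamp: 2, rollback: true },  // r  (rollback placed first!)
        LogItem { timestamp: 2, rollback: false }, // w
        LogItem { timestamp: 3, rollback: false }, // w
        LogItem { timestamp: 3, rollback: true },  // r
        LogItem { timestamp: 4, rollback: true },  // r
        LogItem { timestamp: 4, rollback: false }, // w
    ];

    let mut result = Vec::new();
    for i in 1..=queue.len() {
        let prev = queue[i - 1];
        let cur = queue.get(i).copied();

        if let Some(cur) = cur {
            // constraint: sorting keys are ascending
            assert!(prev.timestamp <= cur.timestamp);
            // constraint: a non-rollback followed by a rollback must share its timestamp
            if !prev.rollback && cur.rollback {
                assert_eq!(prev.timestamp, cur.timestamp);
            }
        }
        // decide whether the PREVIOUS item goes into the result queue
        let same_log = cur.map_or(false, |c| c.timestamp == prev.timestamp);
        if !same_log && !prev.rollback {
            result.push(prev.timestamp);
        }
    }
    // All constraints passed, yet two reverted logs leaked through:
    assert_eq!(result, vec![2, 4]);
    println!("emitted reverted logs at timestamps {:?}", result);
}

The rollbacks at timestamps 2 and 4 come before their writes, so the "previous is not a rollback, current is a rollback" rule never fires for them, and the write halves slip through as if they had never been reverted.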
To conclude, the constraints are not strong enough, and an attacker can manipulate the sorted queue to emit already reverted L1 logs and events.
Manual
Enforce that the first popped element is a write and that there are no two consecutive rollbacks in the sorted queue.
Context
#0 - c4-pre-sort
2023-11-01T19:00:25Z
bytes032 marked the issue as primary issue
#1 - c4-pre-sort
2023-11-01T19:00:31Z
bytes032 marked the issue as sufficient quality report
#2 - c4-sponsor
2023-11-06T11:07:15Z
miladpiri (sponsor) confirmed
#3 - GalloDaSballo
2023-11-28T12:35:18Z
The Warden has demonstrated a lack of constraints that would allow, per their own words, to:
manipulate the sorted queue to emit already reverted l1 logs and events.
This allows for undefined behaviour, which may lead to exploits, leading me to believe that High Severity is appropriate
#4 - c4-judge
2023-11-28T12:35:22Z
GalloDaSballo marked the issue as selected for report
🏆 Selected for report: xuwinnie
108756.4488 USDC - $108,756.45
Attempting to read a word at start + offset that spans across the pointer bound should return zero bytes for the addresses beyond the bound. When offset >= length, the result should be all zeros, so we can skip the read entirely. However, in the current implementation this case is not handled properly, so an attacker can forge an arbitrary read result.
Let's see what happens when skip_if_legitimate_fat_ptr is set.
let (_, offset_is_strictly_in_slice) = offset.overflowing_sub(cs, length);
let offset_is_beyond_the_slice = offset_is_strictly_in_slice.negated(cs);
let skip_if_legitimate_fat_ptr =
    Boolean::multi_and(cs, &[offset_is_beyond_the_slice, is_fat_ptr]);
......
let skip_memory_access = Boolean::multi_or(
    cs,
    &[
        already_panicked,
        skip_if_legitimate_fat_ptr,
        is_non_addressable,
    ],
);
We skip memory access if offset >= length.
bytes_out_of_bound = bytes_out_of_bound.mask_negated(cs, skip_memory_access);
bytes_out_of_bound = bytes_out_of_bound.mask_negated(cs, uf);

let (_, bytes_out_of_bound) = bytes_out_of_bound.div_by_constant(cs, 32);
// remainder fits into 8 bits too
let bytes_to_cleanup_out_of_bounds =
    unsafe { UInt8::from_variable_unchecked(bytes_out_of_bound.get_variable()) };

let new = Self {
    absolute_address,
    page_candidate: page,
    incremented_offset,
    heap_deref_out_of_bounds: is_non_addressable,
    skip_memory_access: skip_memory_access,
    should_set_panic,
    bytes_to_cleanup_out_of_bounds,
};
bytes_out_of_bound will be masked to zero, so bytes_to_cleanup_out_of_bounds will also be zero.
let apply_any = Boolean::multi_and(cs, &[should_apply, no_panic]);
let update_dst0 = Boolean::multi_or(cs, &[is_read_access, is_write_access_and_increment]);
let should_update_dst0 = Boolean::multi_and(cs, &[apply_any, update_dst0]);
diffs_accumulator
    .dst_0_values
    .push((can_write_into_memory, should_update_dst0, dst0_value));
This case is not treated specially and does not panic, so we end up pushing the value to dst0. (We should push zeros!)
// implement shift register
let zero_u8 = UInt8::zero(cs);
let mut bytes_array = [zero_u8; 64];
let memory_value_a_bytes = memory_value_a.value.to_be_bytes(cs);
bytes_array[..32].copy_from_slice(&memory_value_a_bytes);
let memory_value_b_bytes = memory_value_b.value.to_be_bytes(cs);
bytes_array[32..].copy_from_slice(&memory_value_b_bytes);

// now mask-shift
let mut selected_word = [zero_u8; 32];

// idx 0 is unalignment of 0 (aligned), idx 31 is unalignment of 31
for (idx, mask_bit) in unalignment_bit_mask.iter().enumerate() {
    let src = &bytes_array[idx..(idx + 32)]; // source
    debug_assert_eq!(src.len(), selected_word.len());
    for (dst, src) in selected_word
        .array_chunks_mut::<4>()
        .zip(src.array_chunks::<4>())
    {
        *dst = UInt8::parallel_select(cs, *mask_bit, src, &*dst);
    }
}

use crate::tables::uma_ptr_read_cleanup::UMAPtrReadCleanupTable;
let table_id = cs
    .get_table_id_for_marker::<UMAPtrReadCleanupTable>()
    .expect("table must exist");
let bytes_to_cleanup_out_of_bound = quasi_fat_ptr.bytes_to_cleanup_out_of_bounds;
let bytes_to_cleanup_out_of_bound_if_ptr_read =
    bytes_to_cleanup_out_of_bound.mask(cs, is_uma_fat_ptr_read);
let [uma_cleanup_bitspread, _] = cs.perform_lookup::<1, 2>(
    table_id,
    &[bytes_to_cleanup_out_of_bound_if_ptr_read.get_variable()],
);
let uma_ptr_read_cleanup_mask =
    Num::from_variable(uma_cleanup_bitspread).spread_into_bits::<_, 32>(cs);

for (dst, masking_bit) in selected_word
    .iter_mut()
    .zip(uma_ptr_read_cleanup_mask.iter().rev())
{
    *dst = dst.mask(cs, *masking_bit);
}
.......
let dst0_value = VMRegister::conditionally_select(
    cs,
    is_write_access_and_increment,
    &incremented_src0_register,
    &read_value_as_register,
);
Above are the main steps to obtain dst0_value. First, we read two consecutive words from memory. Then we select the addressed word inside them. Finally, we mask it by uma_cleanup_bitspread. The problem is that, in the skipped case, we neither actually read from memory nor mask the value.
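For intuition, here is a plain-Rust sketch (ours; the function name and plain u8 types are illustrative) of the two steps the circuit is supposed to pin down, selection and cleanup:

// Select 32 bytes at `unalignment` out of two adjacent words, then zero the
// tail that lies beyond the fat pointer's bound.
fn read_ptr_word(
    word_a: [u8; 32],
    word_b: [u8; 32],
    unalignment: usize,
    bytes_to_cleanup: usize,
) -> [u8; 32] {
    let mut bytes = [0u8; 64];
    bytes[..32].copy_from_slice(&word_a);
    bytes[32..].copy_from_slice(&word_b);

    let mut selected = [0u8; 32];
    selected.copy_from_slice(&bytes[unalignment..unalignment + 32]);

    // the cleanup mask zeroes the last `bytes_to_cleanup` bytes
    for b in &mut selected[32 - bytes_to_cleanup..] {
        *b = 0;
    }
    selected
}

fn main() {
    let (a, b) = ([0xAA; 32], [0xBB; 32]);
    // 10 bytes of the word stick out past the bound: they must come back as zeros
    let word = read_ptr_word(a, b, 5, 10);
    assert!(word[..22].iter().all(|&x| x == 0xAA || x == 0xBB));
    assert!(word[22..].iter().all(|&x| x == 0));
    // In the offset >= length case the circuit skips both the read and this
    // cleanup (bytes_to_cleanup == 0), so nothing at all constrains the word.
}

The suggested fix, setting bytes_to_cleanup_out_of_bounds to 32 whenever the access is skipped, corresponds to zeroing the entire selected word here.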
let should_read_a_cell = Boolean::multi_and(cs, &[should_apply, do_not_skip_memory_access]);
let should_read_b_cell = is_unaligned_read;
We do not read from memory, which means the memory sponge will not be enforced.
let table_id = cs
    .get_table_id_for_marker::<UMAPtrReadCleanupTable>()
    .expect("table must exist");
let bytes_to_cleanup_out_of_bound = quasi_fat_ptr.bytes_to_cleanup_out_of_bounds;
let bytes_to_cleanup_out_of_bound_if_ptr_read =
    bytes_to_cleanup_out_of_bound.mask(cs, is_uma_fat_ptr_read);
let [uma_cleanup_bitspread, _] = cs.perform_lookup::<1, 2>(
    table_id,
    &[bytes_to_cleanup_out_of_bound_if_ptr_read.get_variable()],
);
let uma_ptr_read_cleanup_mask =
    Num::from_variable(uma_cleanup_bitspread).spread_into_bits::<_, 32>(cs);
We don't mask either, since bytes_to_cleanup_out_of_bounds is zero.
These two facts mean that an attacker can put whatever value they like into dst0_value. The principle that a memory value must not be used when the memory state is not enforced is violated here.
Manual
When skip_if_legitimate_fat_ptr is set, bytes_to_cleanup_out_of_bounds should be set to 32.
Context
#0 - c4-pre-sort
2023-11-01T18:56:31Z
bytes032 marked the issue as primary issue
#1 - c4-pre-sort
2023-11-02T16:23:04Z
141345 marked the issue as sufficient quality report
#2 - c4-sponsor
2023-11-06T11:48:56Z
miladpiri (sponsor) confirmed
#3 - GalloDaSballo
2023-11-28T12:33:49Z
The Warden has shown how, due to an unnecessary read, an attacker can perform an arbitrary read from memory, manipulating the value of dst0_value
#4 - c4-judge
2023-11-28T12:33:54Z
GalloDaSballo marked the issue as selected for report
🏆 Selected for report: chainlight
18823.2315 USDC - $18,823.23
That the remainder is less than the divisor is not enforced in shift operations. An attacker can forge an incorrect result for the opcodes rol and ror.
let uint256_zero = UInt256::zero(cs);
let rem_to_enforce =
    UInt32::parallel_select(cs, apply_left_shift, &uint256_zero.inner, &_rshift_r);
let a_to_enforce = UInt32::parallel_select(cs, apply_left_shift, reg, &rshift_q);
let b_to_enforce = full_shift_limbs;
let mul_low_to_enforce = UInt32::parallel_select(cs, apply_left_shift, &lshift_low, reg);
let mul_high_to_enforce =
    UInt32::parallel_select(cs, apply_left_shift, &lshift_high, &uint256_zero.inner);

let mul_relation = MulDivRelation {
    a: a_to_enforce,
    b: b_to_enforce,
    rem: rem_to_enforce,
    mul_low: mul_low_to_enforce,
    mul_high: mul_high_to_enforce,
};
Here, full_shift is equal to 2^n (where n is the number of bits to shift). For a left shift, a = reg, b = full_shift, rem = 0, mul_high = lshift_high, mul_low = lshift_low. For a right shift, a = rshift_q, b = full_shift, rem = rshift_r, mul_high = 0, mul_low = reg. What we enforce is a * b + rem = mul_low + mul_high * 2^256.
However, to ensure the result is indeed what we expect, we also need to enforce that the remainder is less than b. For example, when we apply 100 shr 3, the expected result is 12, since 12 * 8 + 4 = 100. However, 11 * 8 + 12 = 100 holds as well; since we don't enforce that rshift_r is less than full_shift, an attacker can supply any (rshift_q, rshift_r) pair satisfying the relation, which means they can forge an incorrect outcome for the opcodes rol and ror.
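A plain-Rust sanity check (ours) makes the ambiguity explicit: once rem < b is dropped, many (rshift_q, rshift_r) witnesses satisfy the enforced relation for the same input:

fn main() {
    let (value, full_shift) = (100u32, 1u32 << 3); // 100 >> 3, so b = 8

    // the honest witness and a few forged ones all satisfy q * 8 + rem == 100
    let witnesses = [(12u32, 4u32), (11, 12), (10, 20), (0, 100)];
    for (q, rem) in witnesses {
        assert_eq!(q * full_shift + rem, value);
    }
    // only the extra constraint rem < full_shift pins down q = 12, rem = 4
    let honest: Vec<_> = witnesses
        .iter()
        .filter(|(_, rem)| *rem < full_shift)
        .collect();
    assert_eq!(honest, vec![&(12, 4)]);
    println!("relation alone admits {} witnesses", witnesses.len());
}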
Manual
Besides the mul/div relation, we also need to enforce an add/sub relation to ensure that the remainder is less than the divisor, just as is done for the opcodes mul and div.
Math
#0 - c4-pre-sort
2023-11-01T07:29:49Z
bytes032 marked the issue as primary issue
#1 - c4-pre-sort
2023-11-01T07:29:53Z
bytes032 marked the issue as sufficient quality report
#2 - miladpiri
2023-11-06T10:40:16Z
This is duplicate of https://github.com/code-423n4/2023-10-zksync-findings/issues/697
#3 - c4-sponsor
2023-11-06T11:44:29Z
miladpiri (sponsor) confirmed
#4 - c4-judge
2023-11-26T13:17:47Z
GalloDaSballo marked the issue as duplicate of #697
#5 - c4-judge
2023-11-26T13:18:41Z
GalloDaSballo marked the issue as partial-50
#6 - GalloDaSballo
2023-11-26T13:19:29Z
The primary shows a POC in code that shows how to pass an invalid constraint, for this reason, in this case, I think 50% is correct due to the discrepancy of Information and Impact demonstrated by the 2 reports which both share the same Root Cause
🏆 Selected for report: xuwinnie
108756.4488 USDC - $108,756.45
The range check in the binop's reduction gate is insufficient. After decomposing the composite_result, and_result and or_result can be arbitrary integers less than 128, and xor_result can overflow. An attacker can forge an arbitrary result for the opcodes and and or; as for xor, an overflowed UInt8<F> will stay in the circuit, which can lead to unexpected behavior.
There are three main steps in fn get_binop_subresults.
let mut composite_result = [Variable::placeholder(); 32];
for ((a, b), dst) in a.iter().zip(b.iter()).zip(composite_result.iter_mut()) {
    let [result] =
        cs.perform_lookup::<2, 1>(table_id, &[a.get_variable(), b.get_variable()]);
    *dst = result;
}
At first, we perform a lookup to get the composite result for and, or and xor.
for (src, decomposition) in composite_result
    .iter()
    .zip(all_results.array_chunks::<3>())
{
    if cs.gate_is_allowed::<ReductionGate<F, 4>>() {
        let mut gate = ReductionGate::<F, 4>::empty();
        gate.params = ReductionGateParams {
            reduction_constants: [F::SHIFTS[0], F::SHIFTS[16], F::SHIFTS[32], F::ZERO],
        };
        gate.reduction_result = *src;
        gate.terms = [
            decomposition[0],
            decomposition[1],
            decomposition[2],
            zero_var,
        ];
        gate.add_to_cs(cs);
    }
}
Then, we decompose it into a larger array all_results.
for (((and, or), xor), src) in and_results
    .iter_mut()
    .zip(or_results.iter_mut())
    .zip(xor_results.iter_mut())
    .zip(all_results.array_chunks::<3>())
{
    *and = src[0];
    *or = src[1];
    *xor = src[2];
}

let and_results = and_results.map(|el| unsafe { UInt8::from_variable_unchecked(el) });
let or_results = or_results.map(|el| unsafe { UInt8::from_variable_unchecked(el) });
let xor_results = xor_results.map(|el| unsafe { UInt8::from_variable_unchecked(el) });
Finally, we get three separate results from all_results.
In the reduction gate, the type we are handling is Variable, which means the values can be any element of the prime field. We enforce that composite_value = (xor_result as u64) << 32 | (or_result as u64) << 16 | (and_result as u64). To ensure the decomposed result is indeed what we expect, we also need to make sure all three parts are less than 128. So we do a range check here.
for source_set in all_results.array_chunks::<3>() {
    // value is irrelevant, it's just a range check
    let _: [Variable; 1] =
        cs.perform_lookup::<2, 1>(table_id, &[source_set[0], source_set[1]]);
}
However, the check only ensures that and_result and or_result are less than 128. In a prime field, for any given and_result and or_result, there always exists a xor_result such that composite_value = (xor_result as u64) << 32 | (or_result as u64) << 16 | (and_result as u64), even though that xor_result may overflow a uint8.
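To see this concretely, here is a small standalone demo (ours), assuming Boojum's Goldilocks prime p = 2^64 - 2^32 + 1: given arbitrary range-checked and'/or' values, one can always solve for a field element xor' that satisfies the gate equation:

const P: u128 = 0xFFFF_FFFF_0000_0001; // Goldilocks prime, 2^64 - 2^32 + 1

fn mul(a: u128, b: u128) -> u128 { a * b % P }

// square-and-multiply exponentiation mod P
fn pow(mut base: u128, mut exp: u128) -> u128 {
    let mut acc = 1u128;
    while exp > 0 {
        if exp & 1 == 1 { acc = mul(acc, base); }
        base = mul(base, base);
        exp >>= 1;
    }
    acc
}

fn main() {
    // honest results for the byte pair (0x41, 0x41): and = 0x41, or = 0x41, xor = 0
    let (and_honest, or_honest, xor_honest) = (0x41u128, 0x41u128, 0u128);
    let composite = and_honest + (or_honest << 16) + (xor_honest << 32);

    // attacker picks arbitrary (range-checked!) and'/or' ...
    let (and_forged, or_forged) = (0u128, 127u128);
    // ... and solves xor' = (composite - and' - or' * 2^16) / 2^32 in the field
    let inv_2_32 = pow(1u128 << 32, P - 2); // Fermat inverse, P is prime
    let diff = (composite + 2 * P - and_forged - (or_forged << 16)) % P;
    let xor_forged = mul(diff, inv_2_32);

    // the reduction-gate relation still holds over the field ...
    let lhs = (and_forged + (or_forged << 16) + mul(xor_forged, 1 << 32)) % P;
    assert_eq!(lhs, composite % P);
    // ... but xor' is nowhere near a byte
    assert!(xor_forged > 0xFF);
    println!("forged xor' = {:#x} (not a uint8)", xor_forged);
}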
let xor_results = xor_results.map(|el| unsafe { UInt8::from_variable_unchecked(el) });
In the last step, when we wrap the variable into a UInt8<F>, we use from_variable_unchecked, which means there is no overflow check. As a result, if an attacker provides incorrect results for and_result and or_result, an overflowed UInt8<F> xor_result will remain in the circuit, and unexpected behavior may happen in the future.
Manual
// check all three
for source_set in all_results.array_chunks::<3>() {
    // value is irrelevant, it's just a range check
    let _: [Variable; 1] =
        cs.perform_lookup::<2, 1>(table_id, &[source_set[0], source_set[1]]);
    let _: [Variable; 1] =
        cs.perform_lookup::<2, 1>(table_id, &[source_set[1], source_set[2]]);
}
Math
#0 - c4-pre-sort
2023-11-01T14:50:01Z
bytes032 marked the issue as sufficient quality report
#1 - c4-pre-sort
2023-11-01T14:50:07Z
bytes032 marked the issue as primary issue
#2 - c4-sponsor
2023-11-06T10:55:59Z
miladpiri (sponsor) confirmed
#3 - GalloDaSballo
2023-11-26T19:12:33Z
The Warden has found a missing constraint for binop; due to a lack of an overflow check, malicious parameters could be passed in the circuit
#4 - c4-judge
2023-11-26T19:12:39Z
GalloDaSballo marked the issue as selected for report
🏆 Selected for report: xuwinnie
108756.4488 USDC - $108,756.45
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/era-zkevm_circuits/src/main_vm/opcodes/mul_div.rs#L286-L292
https://github.com/code-423n4/2023-10-zksync/blob/1fb4649b612fac7b4ee613df6f6b7d921ddd6b0d/code/era-zkevm_circuits/src/main_vm/opcodes/mul_div.rs#L358
When applying the opcode div, if the dividend is nonzero and the divisor is zero, the resulting quotient and remainder are both zero, and the enforced mul/div relation does not hold. An unprovable transaction will DoS the priority queue.
quotient_is_zero.conditionally_enforce_true(cs, divisor_is_zero);
remainder_is_zero.conditionally_enforce_true(cs, divisor_is_zero);
According to the EraVM spec, if the divisor is zero, the quotient and remainder must also be zero.
let uint256_zero = UInt256::zero(cs);

let rem_to_enforce = UInt32::parallel_select(
    cs,
    should_apply_mul,
    &uint256_zero.inner,
    &remainder_unchecked,
);
let a_to_enforce =
    UInt32::parallel_select(cs, should_apply_mul, src0_view, &quotient_unchecked);
let b_to_enforce = src1_view.clone();
let mul_low_to_enforce =
    UInt32::parallel_select(cs, should_apply_mul, &mul_low_unchecked, &src0_view);
let mul_high_to_enforce = UInt32::parallel_select(
    cs,
    should_apply_mul,
    &mul_high_unchecked,
    &uint256_zero.inner,
);

let mul_relation = MulDivRelation {
    a: a_to_enforce,
    b: b_to_enforce,
    rem: rem_to_enforce,
    mul_low: mul_low_to_enforce,
    mul_high: mul_high_to_enforce,
};
When dividing, the relation we need to enforce is src0 = q * src1 + rem. However, if src0 (the dividend) is nonzero and src1 (the divisor) is zero, both q and rem will be zero, and the relation does not hold.
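A one-line check (ours) of the enforced relation in the zero-divisor case:

fn main() {
    let (src0, src1) = (100u64, 0u64); // nonzero dividend, zero divisor
    let (q, rem) = (0u64, 0u64);       // forced to zero by the EraVM spec constraints
    assert_ne!(q * src1 + rem, src0);  // 0 * 0 + 0 == 0 != 100: no witness exists
    println!("relation violated for {} / {}", src0, src1);
}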
let apply_any = Boolean::multi_or(cs, &[should_apply_mul, should_apply_div]);
......
diffs_accumulator
    .mul_div_relations
    .push((apply_any, mul_div_relations));
In fact, this relation is enforced whenever div is applied, which makes the operation unprovable.
Manual
Don't enforce the mul/div relation when the divisor is zero.
Context
#0 - c4-pre-sort
2023-11-01T07:29:43Z
bytes032 marked the issue as primary issue
#1 - c4-pre-sort
2023-11-01T07:29:48Z
bytes032 marked the issue as sufficient quality report
#2 - miladpiri
2023-11-06T10:37:24Z
Overconstrained.
#3 - c4-sponsor
2023-11-06T10:37:34Z
miladpiri (sponsor) confirmed
#4 - miladpiri
2023-11-06T19:22:27Z
Duplicate #100.
#5 - GalloDaSballo
2023-11-26T19:17:11Z
The Warden has shown a case in which the mul/div relation doesn't require an additional constraint in the case of a zero divisor
#6 - c4-judge
2023-11-26T19:17:16Z
GalloDaSballo marked the issue as selected for report
🏆 Selected for report: xuwinnie
32626.9346 USDC - $32,626.93
The nonce order of an EOA should always be sequential; otherwise the transaction cannot be validated. The default account does not allow the user to call the function updateNonceOrdering of ContractDeployer. However, the restriction can be bypassed if the user calls updateNonceOrdering through an L1 priority transaction. As a result, the user's account will be permanently frozen.
function _validateTransaction(
    bytes32 _suggestedSignedHash,
    Transaction calldata _transaction
) internal returns (bytes4 magic) {
    // Note, that nonce holder can only be called with "isSystem" flag.
    SystemContractsCaller.systemCallWithPropagatedRevert(
        uint32(gasleft()),
        address(NONCE_HOLDER_SYSTEM_CONTRACT),
        0,
        abi.encodeCall(INonceHolder.incrementMinNonceIfEquals, (_transaction.nonce))
    );

function incrementMinNonceIfEquals(uint256 _expectedNonce) external onlySystemCall {
    uint256 addressAsKey = uint256(uint160(msg.sender));
    uint256 oldRawNonce = rawNonces[addressAsKey];

    (, uint256 oldMinNonce) = _splitRawNonce(oldRawNonce);
    require(oldMinNonce == _expectedNonce, "Incorrect nonce");

    unchecked {
        rawNonces[addressAsKey] = oldRawNonce + 1;
    }
}
incrementMinNonceIfEquals is used in _validateTransaction; it always uses a sequential nonce.
// Checks whether the nonce `nonce` have been already used for
// account `from`. Reverts if the nonce has not been used properly.
function ensureNonceUsage(from, nonce, shouldNonceBeUsed) {
    // INonceHolder.validateNonceUsage selector
    mstore(0, {{RIGHT_PADDED_VALIDATE_NONCE_USAGE_SELECTOR}})
    mstore(4, from)
    mstore(36, nonce)
    mstore(68, shouldNonceBeUsed)
    let success := call(
        gas(),
        NONCE_HOLDER_ADDR(),
        0,
        0,
        100,
        0,
        0
    )

    if iszero(success) {
        revertWithReason(
            ACCOUNT_TX_VALIDATION_ERR_CODE(),
            1
        )
    }
}

function validateNonceUsage(address _address, uint256 _key, bool _shouldBeUsed) external view {
    bool isUsed = isNonceUsed(_address, _key);

    if (isUsed && !_shouldBeUsed) {
        revert("Reusing the same nonce twice");
    } else if (!isUsed && _shouldBeUsed) {
        revert("The nonce was not set as used");
    }
}

function isNonceUsed(address _address, uint256 _nonce) public view returns (bool) {
    uint256 addressAsKey = uint256(uint160(_address));
    return (_nonce < getMinNonce(_address) || nonceValues[addressAsKey][_nonce] > 0);
}
The bootloader calls validateNonceUsage too, which takes arbitrary ordering into account.
We need to prevent the default account from changing its nonce ordering type; otherwise these two methods conflict. If the user's current minNonce is 13, the nonce of the next tx also needs to be 13; but if some value is already stored under nonce 14 (which arbitrary ordering allows), isNonceUsed will return true for 14 while incrementMinNonceIfEquals demands exactly 14 next. This means the user will no longer be able to initiate new transactions.
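Below is a minimal model (ours, not the system-contract code) of the two checks described above, showing the account freezing one nonce after the switch:

use std::collections::HashMap;

struct Account {
    min_nonce: u64,
    nonce_values: HashMap<u64, u64>, // nonces marked used under arbitrary ordering
}

impl Account {
    // models NonceHolder.isNonceUsed
    fn is_nonce_used(&self, nonce: u64) -> bool {
        nonce < self.min_nonce || self.nonce_values.get(&nonce).copied().unwrap_or(0) > 0
    }
    // models bootloader ensureNonceUsage(from, nonce, false) followed by the
    // default account's incrementMinNonceIfEquals(nonce)
    fn validate_tx(&mut self, nonce: u64) -> Result<(), &'static str> {
        if self.is_nonce_used(nonce) {
            return Err("Reusing the same nonce twice");
        }
        if nonce != self.min_nonce {
            return Err("Incorrect nonce");
        }
        self.min_nonce += 1;
        Ok(())
    }
}

fn main() {
    let mut acc = Account { min_nonce: 13, nonce_values: HashMap::new() };
    // after updateNonceOrdering, the account marks nonce 14 as used:
    acc.nonce_values.insert(14, 1);

    assert!(acc.validate_tx(13).is_ok()); // nonce 13 still passes
    // nonce 14 is rejected by the bootloader's usage check ...
    assert_eq!(acc.validate_tx(14), Err("Reusing the same nonce twice"));
    // ... and every other nonce is rejected by the sequential check:
    assert_eq!(acc.validate_tx(15), Err("Incorrect nonce"));
    println!("account frozen at min_nonce = {}", acc.min_nonce);
}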
function _execute(Transaction calldata _transaction) internal {
    address to = address(uint160(_transaction.to));
    uint128 value = Utils.safeCastToU128(_transaction.value);
    bytes calldata data = _transaction.data;
    uint32 gas = Utils.safeCastToU32(gasleft());

    // Note, that the deployment method from the deployer contract can only be called with a "systemCall" flag.
    bool isSystemCall;
    if (to == address(DEPLOYER_SYSTEM_CONTRACT) && data.length >= 4) {
        bytes4 selector = bytes4(data[:4]);
        // Check that called function is the deployment method,
        // the others deployer method is not supposed to be called from the default account.
        isSystemCall =
            selector == DEPLOYER_SYSTEM_CONTRACT.create.selector ||
            selector == DEPLOYER_SYSTEM_CONTRACT.create2.selector ||
            selector == DEPLOYER_SYSTEM_CONTRACT.createAccount.selector ||
            selector == DEPLOYER_SYSTEM_CONTRACT.create2Account.selector;
    }
The flag isSystemCall is only set to true when calling a deployment method of ContractDeployer, so if the user tries to call the function updateNonceOrdering of ContractDeployer, isSystemCall will be false and the call will fail.
function msgValueSimulatorMimicCall(to, from, value, dataPtr) -> success {
    // Only calls to the deployer system contract are allowed to be system
    let isSystem := eq(to, CONTRACT_DEPLOYER_ADDR())

    success := mimicCallOnlyResult(
        MSG_VALUE_SIMULATOR_ADDR(),
        from,
        dataPtr,
        0,
        1,
        value,
        to,
        isSystem
    )
}
However, if the transaction is initiated on L1, the selector is not checked: the flag isSystem is set to true as long as the target is ContractDeployer. The transaction to updateNonceOrdering will therefore succeed.
function updateNonceOrdering(AccountNonceOrdering _nonceOrdering) external onlySystemCall {
    AccountInfo memory currentInfo = accountInfo[msg.sender];

    require(
        _nonceOrdering == AccountNonceOrdering.Arbitrary &&
            currentInfo.nonceOrdering == AccountNonceOrdering.Sequential,
        "It is only possible to change from sequential to arbitrary ordering"
    );

    currentInfo.nonceOrdering = _nonceOrdering;
    _storeAccountInfo(msg.sender, currentInfo);

    emit AccountNonceOrderingUpdated(msg.sender, _nonceOrdering);
}
Once updated, the change is permanent. The account can no longer initiate any new transactions.
Manual
Don't allow an EOA to call updateNonceOrdering; perform the check on the L1 side.
Context
#0 - c4-pre-sort
2023-11-02T13:58:17Z
141345 marked the issue as sufficient quality report
#1 - miladpiri
2023-11-06T18:05:17Z
Not possible to make a tx from L1 to update the nonce ordering of the sender due to onlySystemCall.
#2 - c4-sponsor
2023-11-06T18:05:22Z
miladpiri (sponsor) disputed
#3 - c4-judge
2023-11-26T15:06:20Z
GalloDaSballo marked the issue as unsatisfactory: Insufficient proof
#4 - GalloDaSballo
2023-11-26T15:07:19Z
We'd need a POC for: isSystem (sponsor dispute)
#5 - miladpiri
2023-11-30T09:00:42Z
Agree with the warden, this is duplicate of https://github.com/code-423n4/2023-10-zksync-findings/issues/512, pointing at the same root cause and issue.
#6 - c4-sponsor
2023-11-30T09:08:08Z
miladpiri (sponsor) confirmed
#7 - c4-sponsor
2023-11-30T09:08:13Z
miladpiri marked the issue as disagree with severity
#8 - c4-judge
2023-12-05T11:37:31Z
GalloDaSballo marked the issue as duplicate of #512
#9 - c4-judge
2023-12-05T11:37:50Z
GalloDaSballo marked the issue as satisfactory
#10 - c4-judge
2023-12-13T16:01:57Z
GalloDaSballo marked the issue as selected for report
#11 - vladbochok
2023-12-14T07:58:14Z
The highest impact is self-harm, moreover the claim that no more transactions are possible to send is wrong. Even after changing nonce ordering account may still use incremental nonces. Low or QA most tbh
#12 - GalloDaSballo
2024-01-11T15:36:33Z
A dispute regarding usage of future code was made, which is rational.
Personally I believe that being able to access the isSystem flag could be a precursor to higher impact findings, and that for this reason the finding warrants medium severity
I understand that as of now, the system is safe, however this broken invariant could be a root cause for drastic issues in the future if zkSync Era functionality is further expanded
14682.1206 USDC - $14,682.12
In the code unpacker, the version hash is not correctly enforced.
In the code unpacker, we intend to enforce that the version hash is correct.
let version_hash_matches = UInt16::equals(cs, &chunks[1], &versioned_hash_top_16_bits);
state.state_get_from_queue.conditionally_enforce_true(cs, version_hash_matches);
The second line means: if the version hash matches, state_get_from_queue must be true. But this is not what we want. The logic should be: if we are getting from the queue, the version hash must match. As written, even when the version hash is incorrect, this line passes, since no constraint is actually enforced.
Let's take a closer look at fn conditionally_enforce_true, where should_enforce is the third parameter.
pub fn conditionally_enforce_true<CS: ConstraintSystem<F>>(
    &self,
    cs: &mut CS,
    should_enforce: Self,
) {
    // this is equal to having !self && should_enforce == false;
    // so (1 - self) * should_enforce == 0
    if cs.gate_is_allowed::<FmaGateInBaseFieldWithoutConstant<F>>() {
        let zero_var = cs.allocate_constant(F::ZERO);

        let gate = FmaGateInBaseFieldWithoutConstant {
            params: FmaGateInBaseWithoutConstantParams {
                coeff_for_quadtaric_part: F::MINUS_ONE,
                linear_term_coeff: F::ONE,
            },
            quadratic_part: (self.variable, should_enforce.variable),
            linear_part: should_enforce.variable,
            rhs_part: zero_var,
        };

        gate.add_to_cs(cs);
    } else {
        unimplemented!()
    }
}
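The gate encodes (1 - self) * should_enforce == 0 over booleans, so it only constrains anything when should_enforce is true. A tiny plain-Rust check (ours) shows why only the swapped argument order rejects a mismatched hash:

fn constraint_holds(self_: bool, should_enforce: bool) -> bool {
    let (s, e) = (self_ as i32, should_enforce as i32);
    (1 - s) * e == 0
}

fn main() {
    // As written: self = state_get_from_queue, should_enforce = version_hash_matches.
    // A wrong hash makes should_enforce false, so every state is accepted:
    assert!(constraint_holds(true, false));
    assert!(constraint_holds(false, false)); // bad hash while getting from queue: accepted!

    // The fix swaps the roles: self = version_hash_matches, should_enforce = state_get_from_queue.
    // Now getting from the queue with a wrong hash violates the constraint:
    assert!(!constraint_holds(false, true));
    println!("only the swapped form rejects a mismatched version hash");
}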
Manual
version_hash_matches.conditionally_enforce_true(cs, state.state_get_from_queue);
Context
#0 - shamatar
2023-10-23T23:36:18Z
Good catch
#1 - c4-pre-sort
2023-10-31T11:41:57Z
bytes032 marked the issue as primary issue
#2 - c4-pre-sort
2023-11-02T16:22:26Z
141345 marked the issue as sufficient quality report
#3 - c4-sponsor
2023-11-06T16:10:56Z
miladpiri (sponsor) confirmed
#4 - GalloDaSballo
2023-11-26T19:37:14Z
The Warden has shown how, due to a constant enforcement on version hashes, future hash versions would fail the check
#5 - c4-judge
2023-11-26T19:37:19Z
GalloDaSballo marked the issue as selected for report
🏆 Selected for report: erebus
Also found by: 0xTheC0der, Bauchibred, HE1M, Jorgect, Udsen, alexfilippov314, chaduke, evmboi32, hash, ladboy233, lsaudit, oakcobalt, rvierdiiev, ustas, wangxx2026, xuwinnie, zero-idea, zkrunner
883.9114 USDC - $883.91
In Boojum we'll have to maintain this fact. Having the priority operations' statuses is important to enable proving failed deposits for bridges.
However, the check for numberOfL2ToL1Logs is incorrect. As a result, a malicious operator can append transactions even after the Merkle tree is full. Users will not be able to redeem failed deposits, since the proof for the failed transaction is not included in the Merkle tree.
Let's focus on function publishPubdataAndClearState of L1 messenger.
require(numberOfL2ToL1Logs <= numberOfL2ToL1Logs, "Too many L2->L1 logs");
We are requiring nothing here, so numberOfL2ToL1Logs can be larger than 2048.
uint256 nodesOnCurrentLevel = L2_TO_L1_LOGS_MERKLE_TREE_LEAVES;
while (nodesOnCurrentLevel > 1) {
    nodesOnCurrentLevel /= 2;
    for (uint256 i = 0; i < nodesOnCurrentLevel; ++i) {
        l2ToL1LogsTreeArray[i] = keccak256(
            abi.encode(l2ToL1LogsTreeArray[2 * i], l2ToL1LogsTreeArray[2 * i + 1])
        );
    }
}
bytes32 l2ToL1LogsTreeRoot = l2ToL1LogsTreeArray[0];
Later, we simply ignore the logs from the 2049th onward; they will not be reflected in the root hash.
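A small sketch (ours; std's DefaultHasher stands in for keccak256 and 8 leaves for the real 2048) of why a log appended past the leaf limit can never be proven:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const LEAVES: usize = 8; // stands in for L2_TO_L1_LOGS_MERKLE_TREE_LEAVES = 2048

fn node(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (a, b).hash(&mut h);
    h.finish()
}

fn root(logs: &[u64]) -> u64 {
    // only the first LEAVES logs ever make it into the tree array
    let mut level: Vec<u64> = (0..LEAVES).map(|i| logs.get(i).copied().unwrap_or(0)).collect();
    while level.len() > 1 {
        level = level.chunks(2).map(|p| node(p[0], p[1])).collect();
    }
    level[0]
}

fn main() {
    let full: Vec<u64> = (1..=LEAVES as u64).collect();
    let mut overfull = full.clone();
    overfull.push(999); // the "failed deposit" log, appended past the limit

    // the extra log changes nothing: its inclusion can never be proven
    assert_eq!(root(&full), root(&overfull));
    println!("root unchanged; log #{} is unprovable", LEAVES + 1);
}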
Each time the function processL1Tx is called in the bootloader, a log indicating the transaction result is sent.
// Sending the L2->L1 log so users will be able to prove transaction execution result on L1.
sendL2LogUsingL1Messenger(true, canonicalL1TxHash, success)
A malicious operator can wait patiently until the current pending priority transactions emit more than 2048 logs in total (the operator can also produce the logs herself!). After that, if an ERC20 deposit fails, the operator concludes the batch, and the failed deposit can never be redeemed, since its log is not included in the Merkle tree.
Manual
require(numberOfL2ToL1Logs <= 2048, "Too many L2->L1 logs");
(It's better to have a dynamic tree here)
Context
#0 - c4-pre-sort
2023-10-31T08:53:25Z
bytes032 marked the issue as low quality report
#1 - c4-pre-sort
2023-11-02T14:59:08Z
141345 marked the issue as duplicate of #853
#2 - c4-judge
2023-11-25T09:54:14Z
GalloDaSballo changed the severity to 2 (Med Risk)
#3 - c4-judge
2023-11-25T20:03:06Z
GalloDaSballo changed the severity to QA (Quality Assurance)
#4 - miladpiri
2023-11-27T10:38:18Z
While the attack scenario may not be fully correct (indeed more than 2048 is not possible to send on L1), the fix / the general issue is right. Low is fair.
#5 - c4-sponsor
2023-11-27T10:38:45Z
miladpiri marked the issue as disagree with severity
#6 - GalloDaSballo
2023-11-27T15:43:52Z
Maintaining QA, and I believe that a checksum on hashes would prevent this from being exploited. If the warden wishes to have this looked at again they will have to send a coded POC of bypassing all security checks