
zkSync Era - Matter Labs


Prepared by:


HALBORN

Last Updated 04/26/2024

Date of Engagement: January 9th, 2023 - March 8th, 2023

Summary

100% of all REPORTED Findings have been addressed

All findings: 5
Critical: 1
High: 0
Medium: 0
Low: 1
Informational: 3


1. INTRODUCTION

MatterLabs zkSync Era is a Layer 2 blockchain protocol that eliminates Ethereum’s inherent congestion with zero knowledge proofs. Matter Labs' creation is on a mission to accelerate the mass adoption of crypto for personal sovereignty. It is designed to unlock the full potential of trustless blockchain technology while scaling the core values of Ethereum.

MatterLabs engaged Halborn to conduct a security audit on their zero knowledge circuits and the verifier, beginning on 2023-01-09 and ending on 2023-03-08. The security assessment was scoped to the circuits provided to the Halborn team.

2. AUDIT SUMMARY

The team at Halborn was provided two months for the engagement and assigned a full-time security engineer to audit the security of the zero knowledge circuits and the verifier. The security engineer is a blockchain, smart-contract, and ZK security expert with advanced penetration testing and smart-contract hacking skills, and deep knowledge of multiple blockchain protocols.

The purpose of this audit is to achieve the following:

    • Ensure that the circuits operate as intended.

    • Identify potential security issues within the circuits.

    • Ensure that the verifier operates as intended.

    • Identify potential security issues within the verifier contracts.

In summary, Halborn identified some security risks that were mostly addressed by the MatterLabs team.

3. TEST APPROACH & METHODOLOGY

Halborn performed a combination of manual and automated security testing to balance efficiency, timeliness, practicality, and accuracy with regard to the scope of this audit. While manual testing is recommended to uncover flaws in logic, process, and implementation, automated testing techniques help enhance coverage of the code in scope and can quickly identify items that do not follow security best practices. The following phases and associated tools were used throughout the term of the audit:

    • Research into architecture and purpose.

    • Smart contract manual code review and walkthrough.

    • Circuit manual code review and walkthrough.

    • Graphing out functionality and circuit logic/connectivity/functions. (cargo-deps)

    • Manual code review of common Rust security vulnerabilities.

    • Manual code review of specific zero knowledge security vulnerabilities.

    • Scanning of circuits files for unsafe Rust code usage. (cargo-geiger)

    • Static Analysis of security for scoped circuits, and imported functions. (cargo-audit)

4. SCOPE

1. IN-SCOPE:

The security assessment was scoped to the following zero knowledge circuits:

    • /src/vm/*

    • /src/glue/sort_decommitments_requests/*

    • /src/glue/code_unpacker_sha256/*

    • /src/glue/demux_log_queue/*

    • /src/precompiles/keccak256.rs

    • /src/precompiles/sha256.rs

    • /src/glue/ecrecover_circuit/*

    • /src/glue/ram_permutation/*

    • /src/glue/storage_validity_by_grand_product/*

    • /src/glue/storage_application/*

    • /src/glue/pub_data_hasher/*

    • /src/glue/log_sorter/*

    • /src/glue/merkleize_l1_messages/*

    • /src/scheduler/*

    • /src/circuit_structures/*

    • /src/data_structures/*

    • /src/inputs/*

    • /src/recursion/*

    • /src/traits/*

    • /src/secp256k1/*

    • /src/utils.rs

Commit ID : 014e674916058e31725d6e92439fa5ff14e6677e

And the verifier:

    • /zksync/Verifier.sol

    • /zksync/Plonk4VerifierWithAccessToDNext.sol

    • /zksync/libraries/PairingsBn254.sol

    • /zksync/libraries/TranscriptLib.sol

Commit ID : fc7e86a3df404acb88d86502c944c0630a7ed288

2. REMEDIATION PR/COMMITS:

5. RISK METHODOLOGY

Every vulnerability and issue observed by Halborn is ranked based on two sets of Metrics and a Severity Coefficient. This system is inspired by the industry standard Common Vulnerability Scoring System.
The two Metric sets are: Exploitability and Impact. Exploitability captures the ease and technical means by which vulnerabilities can be exploited and Impact describes the consequences of a successful exploit.
The Severity Coefficient is designed to further refine the accuracy of the ranking with two factors: Reversibility and Scope. These capture the impact of the vulnerability on the environment as well as the number of users and smart contracts affected.
The final score is a value between 0 and 10, rounded up to 1 decimal place, with 10 corresponding to the highest security risk. This provides an objective and accurate rating of the severity of security vulnerabilities in smart contracts.
The system is designed to assist in identifying and prioritizing vulnerabilities based on their level of risk to address the most critical issues in a timely manner.

5.1 EXPLOITABILITY

Attack Origin (AO):
Captures whether the attack requires compromising a specific account.
Attack Cost (AC):
Captures the cost of exploiting the vulnerability incurred by the attacker relative to sending a single transaction on the relevant blockchain. Includes but is not limited to financial and computational cost.
Attack Complexity (AX):
Describes the conditions beyond the attacker’s control that must exist in order to exploit the vulnerability. Includes but is not limited to macro situation, available third-party liquidity and regulatory challenges.
Metrics:
EXPLOITABILITY METRIC (m_e) | METRIC VALUE      | NUMERICAL VALUE
Attack Origin (AO)          | Arbitrary (AO:A)  | 1
                            | Specific (AO:S)   | 0.2
Attack Cost (AC)            | Low (AC:L)        | 1
                            | Medium (AC:M)     | 0.67
                            | High (AC:H)       | 0.33
Attack Complexity (AX)      | Low (AX:L)        | 1
                            | Medium (AX:M)     | 0.67
                            | High (AX:H)       | 0.33
Exploitability E is calculated using the following formula:

E = \prod m_e

5.2 IMPACT

Confidentiality (C):
Measures the impact to the confidentiality of the information resources managed by the contract due to a successfully exploited vulnerability. Confidentiality refers to limiting access to authorized users only.
Integrity (I):
Measures the impact to integrity of a successfully exploited vulnerability. Integrity refers to the trustworthiness and veracity of data stored and/or processed on-chain. Integrity impact directly affecting Deposit or Yield records is excluded.
Availability (A):
Measures the impact to the availability of the impacted component resulting from a successfully exploited vulnerability. This metric refers to smart contract features and functionality, not state. Availability impact directly affecting Deposit or Yield is excluded.
Deposit (D):
Measures the impact to the deposits made to the contract by either users or owners.
Yield (Y):
Measures the impact to the yield generated by the contract for either users or owners.
Metrics:
IMPACT METRIC (m_I)  | METRIC VALUE    | NUMERICAL VALUE
Confidentiality (C)  | None (C:N)      | 0
                     | Low (C:L)       | 0.25
                     | Medium (C:M)    | 0.5
                     | High (C:H)      | 0.75
                     | Critical (C:C)  | 1
Integrity (I)        | None (I:N)      | 0
                     | Low (I:L)       | 0.25
                     | Medium (I:M)    | 0.5
                     | High (I:H)      | 0.75
                     | Critical (I:C)  | 1
Availability (A)     | None (A:N)      | 0
                     | Low (A:L)       | 0.25
                     | Medium (A:M)    | 0.5
                     | High (A:H)      | 0.75
                     | Critical (A:C)  | 1
Deposit (D)          | None (D:N)      | 0
                     | Low (D:L)       | 0.25
                     | Medium (D:M)    | 0.5
                     | High (D:H)      | 0.75
                     | Critical (D:C)  | 1
Yield (Y)            | None (Y:N)      | 0
                     | Low (Y:L)       | 0.25
                     | Medium (Y:M)    | 0.5
                     | High (Y:H)      | 0.75
                     | Critical (Y:C)  | 1
Impact I is calculated using the following formula:

I = max(m_I) + \frac{\sum m_I - max(m_I)}{4}

5.3 SEVERITY COEFFICIENT

Reversibility (R):
Describes the share of the exploited vulnerability effects that can be reversed. For upgradeable contracts, assume the contract private key is available.
Scope (S):
Captures whether a vulnerability in one vulnerable contract impacts resources in other contracts.
Metrics:
SEVERITY COEFFICIENT (C) | COEFFICIENT VALUE | NUMERICAL VALUE
Reversibility (r)        | None (R:N)        | 1
                         | Partial (R:P)     | 0.5
                         | Full (R:F)        | 0.25
Scope (s)                | Changed (S:C)     | 1.25
                         | Unchanged (S:U)   | 1
Severity Coefficient C is obtained by the following product:

C = r \cdot s

The Vulnerability Severity Score S is obtained by:

S = min(10, E \cdot I \cdot C \cdot 10)

The score is rounded up to 1 decimal place.
Severity      | Score Value Range
Critical      | 9 - 10
High          | 7 - 8.9
Medium        | 4.5 - 6.9
Low           | 2 - 4.4
Informational | 0 - 1.9
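
To illustrate how the metrics, coefficients and formulas above combine, consider a purely hypothetical finding (not one of the findings in this report) with exploitability AO:A/AC:L/AX:L, impact C:N/I:H/A:N/D:N/Y:N, and coefficients R:P/S:U:

E = 1 \cdot 1 \cdot 1 = 1
I = 0.75 + \frac{0.75 - 0.75}{4} = 0.75
C = 0.5 \cdot 1 = 0.5
S = min(10, 1 \cdot 0.75 \cdot 0.5 \cdot 10) = 3.75

Rounded up to one decimal place, the score is 3.8, which falls into the Low band (2 - 4.4).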

6. SCOPE

Out-of-Scope: New features/implementations after the remediation commit IDs.

7. Assessment Summary & Findings Overview

Critical: 1
High: 0
Medium: 0
Low: 1
Informational: 3

Security analysis | Risk level | Remediation Date
CIRCUIT NOT PROPERLY WORKING WHEN USING SHARD ID > 0 | Critical | Solved - 04/04/2023
HEAD STATE NOT ENFORCED TO BE ZERO | Low | Solved - 04/04/2023
UNUSED CIRCUIT FUNCTIONALITY | Informational | Solved - 04/04/2023
QUEUE NOT ENFORCED TO BE EMPTY RIGHT AFTER POPPING ALL ELEMENTS | Informational | Acknowledged - 04/04/2023
UNNEEDED INITIALIZATION OF UINT256 VARIABLES | Informational | Acknowledged - 04/04/2023

8. Findings & Tech Details

8.1 CIRCUIT NOT PROPERLY WORKING WHEN USING SHARD ID > 0

// Critical

Description

The STORAGE QUERIES FILTER circuit (storage_validity_by_grand_product) does not produce the intended output when the shard ID is greater than 0. When the keys of the elements of the initial queue are packed during the sorting of the queues, the second linear combination is built incorrectly: an incorrect coefficient is used, so the shard ID overlaps the bits of the address variable.

Because the pack_key function returns an incorrect packed key to the sorting functionality, the final sorted queue of the circuit is not generated properly, leading to further issues downstream. This can be seen as a completeness bug; the scaled-down sketch after the code excerpt and step-by-step trace below illustrates the effect.

Code Location

storage_validity_by_grand_product/mod.rs

const PACKED_WIDTHS: [usize; 2] = [192, 232];
// now resolve a logic
for (item, is_trivial) in it {
    // check if keys are equal and check a value
    let TimestampedStorageLogRecord { record, timestamp } = item;

    let packed_key = pack_key(
        cs,
        (record.shard_id.clone(), record.address.clone(), record.key),
    )?;

    // ensure sorting
    let (keys_are_equal, previous_key_is_greater) =
        prepacked_long_comparison(cs, &previous_packed_key, &packed_key, &PACKED_WIDTHS)?;

    can_not_be_false_if_flagged(cs, &previous_key_is_greater.not(), &is_trivial.not())?;

    // if keys are the same then timestamps are sorted
    let (_, previous_timestamp_is_less) = previous_timestamp.sub(cs, &timestamp)?;
    // enforce if keys are the same and not trivial
    let must_enforce = smart_and(cs, &[keys_are_equal, is_trivial.not()])?;
    can_not_be_false_if_flagged(cs, &previous_timestamp_is_less, &must_enforce)?;

    // we follow the procedure:
    // if keys are different then we finish with a previous one and update parameters
    // else we just update parameters

storage_validity_by_grand_product/mod.rs

pub fn pack_key<E: Engine, CS: ConstraintSystem<E>>(
    cs: &mut CS,
    key_tuple: (Byte<E>, UInt160<E>, UInt256<E>),
) -> Result<[Num<E>; 2], SynthesisError> {
    let shifts = compute_shifts::<E::Fr>();

    // LE packing

    let (shard_id, address, key) = key_tuple;
    let mut lc_0 = LinearCombination::zero();
    lc_0.add_assign_number_with_coeff(&key.inner[0].inner, shifts[0]);
    lc_0.add_assign_number_with_coeff(&key.inner[1].inner, shifts[64]);
    lc_0.add_assign_number_with_coeff(&key.inner[2].inner, shifts[128]);
    // 192 in total
    let value_0 = lc_0.into_num(cs)?;

    let mut lc_1 = LinearCombination::zero();
    lc_1.add_assign_number_with_coeff(&key.inner[3].inner, shifts[0]);
    lc_1.add_assign_number_with_coeff(&address.inner, shifts[64]);
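    // NOTE (finding 8.1): `address` is 160 bits wide and occupies shifts[64..224]
    // of this linear combination, so adding `shard_id` at shifts[160] overlaps the
    // upper bits of the address; the intended offset is shifts[224] (64 + 160).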
    lc_1.add_assign_number_with_coeff(&shard_id.inner, shifts[160]);
    let value_1 = lc_1.into_num(cs)?;
    // 64 + 160 + 8 = 232 in total

    Ok([value_0, value_1])
}
  1. The STORAGE QUERIES FILTER circuit receives as input witness the storage access requests queue.
  2. This circuit aims to order the resulting queue of all the elements in the initial queue by a generated key.
  3. The circuit calls the pack_key function to generate the key for the current element of the queue.
  4. A shard_id different from 0 is being used.
  5. Within the generated key, the bits of the address parameter get overlapped by the bits of the shard_id.
  6. An incorrect key is returned to the main function of the circuit, thus breaking the overall functionality of the circuit.
  7. The resulting queue is not ordered as expected, leading to even more issues afterward.
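
The effect can be reproduced outside the circuit with a scaled-down model. The sketch below is illustrative only: it uses plain u32 arithmetic and shrunken bit widths (a 4-bit key limb, an 8-bit address, and a shard ID) instead of the circuit's 64/160/8-bit layout, and the pack_buggy / pack_fixed helpers are hypothetical names, not functions from the codebase. It shows how placing the shard ID at an offset that falls inside the address range lets two distinct (address, shard_id) pairs pack to the same key:

// Scaled-down model of the pack_key overlap: key limb at shift 0 (4 bits),
// address at shifts 4..12 (8 bits), shard_id appended afterwards.
fn pack_buggy(key: u32, address: u32, shard_id: u32) -> u32 {
    // shard_id wrongly placed at shift 8, which lies inside the address range,
    // mirroring shifts[160] falling inside shifts[64..224] in the real circuit.
    key + (address << 4) + (shard_id << 8)
}

fn pack_fixed(key: u32, address: u32, shard_id: u32) -> u32 {
    // shard_id placed after the full address width (shift 4 + 8 = 12).
    key + (address << 4) + (shard_id << 12)
}

fn main() {
    // Two different storage slots: same key limb, different (address, shard_id).
    let a = (0x5u32, 0b0001_0000u32, 0u32); // address bit 4 set, shard 0
    let b = (0x5u32, 0u32, 1u32);           // address zero, shard 1

    // Under the buggy packing the shard bit lands on an address bit,
    // so the two distinct slots become indistinguishable to the sorter.
    assert_eq!(pack_buggy(a.0, a.1, a.2), pack_buggy(b.0, b.1, b.2));

    // With the corrected offset the packed keys differ, as intended.
    assert_ne!(pack_fixed(a.0, a.1, a.2), pack_fixed(b.0, b.1, b.2));

    println!("buggy packing collapses two distinct keys into one value");
}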
Recommendation

SOLVED: The MatterLabs team solved the issue by fixing the offset and adding assertion.

Commit ID : 5109e0768c7de799f87ec67bf40b6a544cca4e4e

8.2 HEAD STATE NOT ENFORCED TO BE ZERO

// Low

Description

The MESSAGES EVENTS FILTER circuit (log_sorter) does not enforce the head state of the initial queue received as input to be zero. Even though the LOG DEMULTIPLEXER circuit outputs queues that are all initially empty, with the head state already enforced to be zero, it is essential that this is checked atomically in each circuit, regardless of where the input comes from (a conceptual sketch follows the code excerpt below).

Code Location

log_sorter/mod.rs

let structured_input_witness = project_ref!(witness, closed_form_input).cloned();
let initial_queue_witness = project_ref!(witness, initial_queue_witness).cloned();
let intermediate_sorted_queue_state =
    project_ref!(witness, intermediate_sorted_queue_state).cloned();
let sorted_queue_witness = project_ref!(witness, sorted_queue_witness).cloned();

let mut structured_input = EventsDeduplicatorInputOutput::alloc_ignoring_outputs(
    cs,
    structured_input_witness.clone(),
)?;

Boolean::enforce_equal(cs, &structured_input.start_flag, &Boolean::constant(true))?;

let mut initial_queue = StorageLogQueue::from_raw_parts(
    cs,
    structured_input
        .observable_input
        .initial_log_queue_state
        .head_state,
    structured_input
        .observable_input
        .initial_log_queue_state
        .tail_state,
    structured_input
        .observable_input
        .initial_log_queue_state
        .num_items,
)?;

// dbg!(initial_queue.clone().into_state().create_witness());

if let Some(wit) = initial_queue_witness {
    initial_queue.witness = wit;
}
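
A minimal, non-circuit sketch of the missing constraint (plain Rust, purely illustrative; QueueState, from_raw_parts_unchecked, and from_raw_parts_checked are hypothetical stand-ins, not the actual circuit API):

// Models why accepting an unconstrained head state from the witness is
// dangerous, and what the remediation enforces.
#[derive(Debug)]
struct QueueState {
    head: u64, // stand-in for the head commitment (a field element in the circuit)
    tail: u64,
    num_items: u32,
}

// Buggy variant: trusts whatever head state the witness provides.
fn from_raw_parts_unchecked(head: u64, tail: u64, num_items: u32) -> QueueState {
    QueueState { head, tail, num_items }
}

// Remediated variant: additionally enforces the head state to be zero,
// mirroring the constraint added in the fix.
fn from_raw_parts_checked(head: u64, tail: u64, num_items: u32) -> Result<QueueState, &'static str> {
    if head != 0 {
        return Err("initial queue head state must be zero");
    }
    Ok(QueueState { head, tail, num_items })
}

fn main() {
    // An honest witness: head is zero.
    assert!(from_raw_parts_checked(0, 42, 3).is_ok());
    // A malicious witness: a nonzero head is accepted by the unchecked variant
    // but rejected once the constraint is enforced.
    let forged = from_raw_parts_unchecked(7, 42, 3);
    println!("unchecked constructor accepted head = {}", forged.head);
    assert!(from_raw_parts_checked(7, 42, 3).is_err());
}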
Recommendation

SOLVED: The MatterLabs team solved the issue by enforcing the initial queue head state to zero.

Commit ID : b0a79356613655bddccaab3b89dbf1142b5483fb

8.3 UNUSED CIRCUIT FUNCTIONALITY

// Informational

Description

Within the LOG DEMULTIPLEXER circuit (demux_log_queue) the demultiplex_storage_logs_inner function is declared but never used. The demultiplex_storage_logs_inner_optimized function is used instead.

Code Location

demux_log_queue/mod.rs

pub fn demultiplex_storage_logs_inner<
    E: Engine,
    CS: ConstraintSystem<E>,
    R: CircuitArithmeticRoundFunction<E, 2, 3, StateElement = Num<E>>,
>(
    cs: &mut CS,
    mut storage_log_queue: StorageLogQueue<E>,
    round_function: &R,
    limit: usize,
) -> Result<[StorageLogQueue<E>; NUM_SEPARATE_QUEUES], SynthesisError> {
    assert!(limit <= u32::MAX as usize);

    let mut optimizer = SpongeOptimizer::new(round_function.clone(), 3);

    let mut rollup_storage_queue = StorageLogQueue::empty();
    // let mut porter_storage_queue = StorageLogQueue::empty();
    let mut events_queue = StorageLogQueue::empty();
    let mut l1_messages_queue = StorageLogQueue::empty();
    let mut keccak_calls_queue = StorageLogQueue::empty();
    let mut sha256_calls_queue = StorageLogQueue::empty();
    let mut ecdsa_calls_queue = StorageLogQueue::empty();

    const SYSTEM_CONTRACTS_OFFSET_ADDRESS: u16 = 1 << 15;

    const KECCAK256_ROUND_FUNCTION_PRECOMPILE_ADDRESS: u16 = SYSTEM_CONTRACTS_OFFSET_ADDRESS + 0x10;
    const SHA256_ROUND_FUNCTION_PRECOMPILE_ADDRESS: u16 = 0x02; // as in Ethereum
    const ECRECOVER_INNER_FUNCTION_PRECOMPILE_ADDRESS: u16 = 0x01; // as in Ethereum

    let keccak_precompile_address = UInt160::from_uint(u160::from_u64(
        KECCAK256_ROUND_FUNCTION_PRECOMPILE_ADDRESS as u64,
    ));
    let sha256_precompile_address = UInt160::from_uint(u160::from_u64(
        SHA256_ROUND_FUNCTION_PRECOMPILE_ADDRESS as u64,
    ));
    let ecrecover_precompile_address = UInt160::from_uint(u160::from_u64(
        ECRECOVER_INNER_FUNCTION_PRECOMPILE_ADDRESS as u64,
    ));

    for _ in 0..limit {
        let execute = storage_log_queue.is_empty(cs)?.not();

        // let n = cs.get_current_step_number();
        let popped = storage_log_queue.pop_first(cs, &execute, round_function)?;
        // dbg!(cs.get_current_step_number() - n); // 291

        let is_storage_aux_byte =
            Num::equals(cs, &aux_byte_for_storage().inner, &popped.aux_byte.inner)?;
        let is_event_aux_byte =
            Num::equals(cs, &aux_byte_for_event().inner, &popped.aux_byte.inner)?;
        let is_l1_message_aux_byte =
            Num::equals(cs, &aux_byte_for_l1_message().inner, &popped.aux_byte.inner)?;
        let is_precompile_aux_byte = Num::equals(
            cs,
            &aux_byte_for_precompile_call().inner,
            &popped.aux_byte.inner,
        )?;

        let is_keccak_address = UInt160::equals(cs, &keccak_precompile_address, &popped.address)?;
        let is_sha256_address = UInt160::equals(cs, &sha256_precompile_address, &popped.address)?;
        let is_ecrecover_address =
            UInt160::equals(cs, &ecrecover_precompile_address, &popped.address)?;

        let is_rollup_shard = popped.shard_id.inner.is_zero(cs)?;

        let execute_rollup_storage =
            smart_and(cs, &[is_storage_aux_byte, is_rollup_shard, execute])?;
        let execute_porter_storage =
            smart_and(cs, &[is_storage_aux_byte, is_rollup_shard.not(), execute])?;
        Boolean::enforce_equal(cs, &execute_porter_storage, &Boolean::constant(false))?;
        let execute_event = smart_and(cs, &[is_event_aux_byte, execute])?;
        let execute_l1_message = smart_and(cs, &[is_l1_message_aux_byte, execute])?;
        let execute_keccak_call =
            smart_and(cs, &[is_precompile_aux_byte, is_keccak_address, execute])?;
        let execute_sha256_call =
            smart_and(cs, &[is_precompile_aux_byte, is_sha256_address, execute])?;
        let execute_ecrecover_call =
            smart_and(cs, &[is_precompile_aux_byte, is_ecrecover_address, execute])?;

        // let n = cs.get_current_step_number();
        rollup_storage_queue.push_with_optimizer(
            cs,
            LogType::RollupStorage as u64,
            &popped,
            &execute_rollup_storage,
            &mut optimizer,
        )?;
        // porter_storage_queue.push_with_optimizer(cs, LogType::PorterStorage as u64, &popped, &execute_porter_storage, &mut optimizer)?;
        events_queue.push_with_optimizer(
            cs,
            LogType::Events as u64,
            &popped,
            &execute_event,
            &mut optimizer,
        )?;
        l1_messages_queue.push_with_optimizer(
            cs,
            LogType::L1Messages as u64,
            &popped,
            &execute_l1_message,
            &mut optimizer,
        )?;
        keccak_calls_queue.push_with_optimizer(
            cs,
            LogType::KeccakCalls as u64,
            &popped,
            &execute_keccak_call,
            &mut optimizer,
        )?;
        sha256_calls_queue.push_with_optimizer(
            cs,
            LogType::Sha256Calls as u64,
            &popped,
            &execute_sha256_call,
            &mut optimizer,
        )?;
        ecdsa_calls_queue.push_with_optimizer(
            cs,
            LogType::ECRecoverCalls as u64,
            &popped,
            &execute_ecrecover_call,
            &mut optimizer,
        )?;
        // dbg!(cs.get_current_step_number() - n); // 96

        // let n = cs.get_current_step_number();
        optimizer.enforce(cs)?;
        // dbg!(cs.get_current_step_number() - n); // 338

        let expected_bitmask_bits = [
            is_storage_aux_byte,
            is_event_aux_byte,
            is_l1_message_aux_byte,
            is_precompile_aux_byte,
        ];

        let (is_bitmask, all_flags_are_false) =
            check_if_bitmask_and_if_empty(cs, &expected_bitmask_bits)?;
        can_not_be_false_if_flagged(cs, &is_bitmask, &Boolean::Constant(true))?;
        can_not_be_false_if_flagged(cs, &all_flags_are_false.not(), &execute)?;
    }

    storage_log_queue.enforce_to_be_empty(cs)?;

    let all_queues = [
        rollup_storage_queue,
        events_queue,
        l1_messages_queue,
        keccak_calls_queue,
        sha256_calls_queue,
        ecdsa_calls_queue,
    ];

    Ok(all_queues)
}
Recommendation

SOLVED: The MatterLabs team solved the issue by removing the unused function.

Commit ID : 06c2e76546369fb112d8ac14fb5388154857435b

8.4 QUEUE NOT ENFORCED TO BE EMPTY RIGHT AFTER POPPING ALL ELEMENTS

// Informational

Description

The MERKLEIZER circuit (merkleize_l1_messages) does not enforce the initial queue to be empty immediately after popping all elements. Although this check is performed at the end of the circuit, and given that the circuit is resource-intensive while computing the linear hash and the Merkle tree hash, enforcing emptiness before the hashing logic would improve both the readability of the code and its efficiency.

Code Location

merkleize_l1_messages/merkleize.rs

for chunk in linear_hash_input[4..].chunks_exact_mut(MESSAGE_SERIALIZATION_BYTES) {
    let can_pop = initial_queue.is_empty(cs)?.not();
    let item = initial_queue.pop_first(cs, &can_pop, round_function)?;
    let serialized = item.serialize(cs)?;
    assert_eq!(chunk.len(), serialized.len());
    for (dst, src) in chunk.iter_mut().zip(serialized.iter()) {
        *dst = Byte::conditionally_select(cs, &can_pop, src, dst)?;
    }
}
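// Audit note (finding 8.4): the `initial_queue.enforce_to_be_empty(cs)?` check
// performed at the very end of this function could instead be placed here,
// immediately after the last element has been popped and before the linear-hash
// and Merkle-tree computations below.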

let linear_hash = if output_linear_hash {
    println!(
        "Computing linear hash over {} bytes",
        linear_hash_input.len()
    );
    let pubdata_hash = tree_hasher.hash(cs, &linear_hash_input)?;
    let pubdata_hash_as_bytes32 = Bytes32::from_bytes_array(&pubdata_hash);

    pubdata_hash_as_bytes32
} else {
    Bytes32::empty()
};

// a little bit tricky: unsafe cast, but we checked the length, and ABI wise it's guaranteed
// later on we can use split_array_ref

let leafs_only_bytes = &linear_hash_input[4..];
assert!(leafs_only_bytes.len() % MESSAGE_SERIALIZATION_BYTES == 0);

let mut leafs = vec![];
for chunk in leafs_only_bytes.chunks_exact(MESSAGE_SERIALIZATION_BYTES) {
    let leaf_encoding: [_; MESSAGE_SERIALIZATION_BYTES] = chunk.to_vec().try_into().unwrap();
    leafs.push(leaf_encoding);
}

println!("Computing tree over {} leafs", leafs.len());

let calculated_merkle_root =
    circuit_compute_merkle_root_from_leafs_generic::<_, _, H, ARITY>(cs, &leafs, tree_hasher)?;

initial_queue.enforce_to_be_empty(cs)?;
Recommendation

ACKNOWLEDGED: The MatterLabs team acknowledged this issue. It will be addressed while moving to the new proof system.

8.5 UNNEEDED INITIALIZATION OF UINT256 VARIABLES

// Informational

Description

As i is a uint256, it is already initialized to 0 by default. Writing uint256 i = 0 explicitly reassigns 0 to i, which wastes gas.

Code Location

Verifier.sol

  • Line 134: for (uint256 i = 0; i < public_inputs.length; i = i.uncheckedInc()) {
  • Line 139: for (uint256 i = 0; i < STATE_WIDTH; i = i.uncheckedInc()) {
  • Line 164: for (uint256 i = 0; i < proof.quotient_poly_parts_commitments.length; i = i.uncheckedInc()) {
  • Line 172: for (uint256 i = 0; i < proof.state_polys_openings_at_z.length; i = i.uncheckedInc()) {
  • Line 178: for (uint256 i = 0; i < proof.state_polys_openings_at_z_omega.length; i = i.uncheckedInc()) {
  • Line 183: for (uint256 i = 0; i < proof.gate_selectors_openings_at_z.length; i = i.uncheckedInc()) {
  • Line 188: for (uint256 i = 0; i < proof.copy_permutation_polys_openings_at_z.length; i = i.uncheckedInc()) {

Plonk4VerifierWithAccessToDNext.sol

  • Line 144: for (uint256 i = 0; i < vk.num_inputs; i = i.uncheckedInc()) {
  • Line 148: for (uint256 i = 0; i < STATE_WIDTH; i = i.uncheckedInc()) {
  • Line 164: for (uint256 i = 0; i < proof.quotient_poly_parts_commitments.length; i = i.uncheckedInc()) {
  • Line 171: for (uint256 i = 0; i < proof.state_polys_openings_at_z.length; i = i.uncheckedInc()) {
  • Line 175: for (uint256 i = 0; i < proof.state_polys_openings_at_z_omega.length; i = i.uncheckedInc()) {
  • Line 178: for (uint256 i = 0; i < proof.gate_selectors_openings_at_z.length; i = i.uncheckedInc()) {
  • Line 181: for (uint256 i = 0; i < proof.copy_permutation_polys_openings_at_z.length; i = i.uncheckedInc()) {
  • Line 301: for (uint256 i = 0; i < lagrange_poly_numbers.length; i = i.uncheckedInc()) {
  • Line 307: for (uint256 i = 0; i < vk.num_inputs; i = i.uncheckedInc()) {
  • Line 324: for (uint256 i = 0; i < proof.copy_permutation_polys_openings_at_z.length; i = i.uncheckedInc()) {
  • Line 448: for (uint256 i = 0; i < proof.state_polys_openings_at_z.length; ) {
  • Line 472: for (uint256 i = 0; i < STATE_WIDTH - 1; i = i.uncheckedInc()) {
  • Line 558: for (uint256 i = 0; i < STATE_WIDTH - 1; i = i.uncheckedInc()) {
  • Line 613: for (uint256 i = 0; i < STATE_WIDTH; i = i.uncheckedInc()) {
  • Line 622: for (uint256 i = 0; i < STATE_WIDTH - 1; i = i.uncheckedInc()) {

PairingsBn254.sol

  • Line 240: for (uint256 i = 0; i < elements; ) {
Recommendation

ACKNOWLEDGED: The MatterLabs team acknowledged this issue.

9. Automated Testing

STATIC ANALYSIS REPORT

Description

Halborn used automated testing techniques to enhance the coverage of certain areas of the scoped circuits. Among the tools used was cargo geiger, a tool that lists statistics related to the usage of unsafe Rust code in a Rust crate and all its dependencies. After Halborn verified that all the circuits in the repository compiled correctly, cargo geiger was run on all the in-scope circuits.

cargo geiger results

  • The results of the cargo geiger scan were reviewed by Halborn. Based on this review, all reported warnings were determined not to pose a security issue.

AUTOMATED SECURITY SCAN

Description

Halborn used automated security scanners to assist with detection of well-known security issues and to identify low-hanging fruit on the targets for this engagement. Among the tools used was cargo audit, a command-line utility that inspects Cargo.lock files and compares them against the RustSec Advisory Database, a community database of security vulnerabilities maintained by the Rust Secure Code Working Group. cargo audit was run on all the in-scope circuits.

cargo audit results

  • No major issues found by cargo audit.

Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.

© Halborn 2024. All rights reserved.