
Protocol MultiversX - Hatom


Prepared by:


HALBORN

Last Updated 04/23/2024

Date of Engagement: March 7th, 2023 - April 10th, 2023

Summary

100% of all REPORTED Findings have been addressed

All findings: 7

- Critical: 0
- High: 0
- Medium: 1
- Low: 2
- Informational: 4


1. Introduction

Hatom Protocol is a decentralized lending protocol developed on the MultiversX (Elrond) blockchain, which aims to provide a comprehensive money market ecosystem to users. The platform enables users to lend, borrow, and stake various cryptocurrencies at interest rates determined by supply and demand dynamics in the market. The protocol offers depositors the opportunity to earn passive income by providing liquidity, while borrowers can access loans by supplying collateral in an over-collateralized fashion, which reduces the risk of default.

Hatom Protocol's ecosystem is designed to support eGold (EGLD), the native token of MultiversX, along with other major cryptocurrencies, facilitating seamless integration and interaction within the MultiversX ecosystem. This compatibility with MultiversX's native token ensures that users can access a wide range of digital assets within the lending protocol.

The platform may also implement a governance model to maintain decentralization and encourage community involvement. This governance system would allow token holders to participate in decision-making processes, such as proposing and voting on protocol updates or modifications.

In summary, Hatom Protocol is a decentralized lending protocol on the MultiversX (Elrond) blockchain that offers users the ability to lend, borrow, and stake cryptocurrencies at market-driven interest rates. The platform seeks to provide a reliable and efficient DeFi solution by leveraging the advanced features of the MultiversX (Elrond) network, while promoting financial inclusion and accessibility in the digital asset space.

Initially, Hatom Protocol engaged Halborn to conduct a security assessment on their smart contracts beginning on 03/07/2023 and ending on 04/10/2023. Before launching the protocol, some changes and functionalities were added to the code, and Halborn conducted a review of the new code beginning on 06/26/2023 and ending on 06/30/2023.

2. Assessment Summary

The team at Halborn was provided four weeks for the engagement and assigned two full-time security engineers to verify the security of the smart contracts. The security engineers are blockchain and smart-contract security experts with advanced penetration testing, smart-contract hacking, and deep knowledge of multiple blockchain protocols.

The purpose of this assessment is to achieve the following:

- Ensure that smart contract functions operate as intended: Verify that each function performs its intended purpose and adheres to the specified requirements.

- Identify potential security issues with the smart contracts: Detect vulnerabilities, bugs, and weaknesses that could be exploited by malicious actors or result in unintended consequences.

- Verify compliance with best practices and coding standards: Check that the smart contract code follows established best practices, such as clear documentation, proper error handling, and efficient gas usage.

- Evaluate the overall design and architecture: Assess the smart contract's design and structure to ensure it is robust, modular, and maintainable.

- Validate access controls and permissions: Ensure that access controls are correctly implemented and that only authorized parties can perform sensitive operations.

- Confirm proper handling of user funds and assets: Verify that the smart contract securely manages user funds and assets without risk of loss or unauthorized access.

- Test for potential race conditions and reentrancy attacks: Identify scenarios where concurrent transactions or external calls could lead to unexpected behavior or security vulnerabilities.

- Examine the smart contract's upgradeability and extensibility: Ensure that the contract can be upgraded or extended in a secure and controlled manner, if necessary.

- Review the contract's interactions with other contracts and external systems: Analyze how the smart contract interacts with other components in the ecosystem, including other smart contracts and external services, to identify potential issues or points of failure.

- Validate the smart contract's performance and resource usage: Analyze the contract's gas consumption and resource usage to ensure it operates efficiently and does not impose unnecessary costs on users or the network.

In summary, Halborn identified several improvements to reduce the likelihood and impact of multiple issues, which were successfully addressed by the Hatom team. The main ones were the following:


***(HAL-01) HARDCODED USDC DOLLAR PEG IN CHAINLINK PRICE CALCULATION***

The get_chainlink_price_in_egld function assumes that the USDC stablecoin is always pegged 1:1 to USD, which may not always be true. This assumption can lead to inaccurate price calculations, potentially impacting users' collateral values, borrowing capacity, and liquidations.

***(HAL-02) REWARDS PERIOD SHOULD BE GREATER THAN ZERO***

In the function update_rewards_batch_speed, there is a possibility that the calculated rewards batch period dt is zero. This may lead to unexpected behavior, such as the rewards batch being stuck in the current state and preventing further rewards distribution.

3. Test Approach and Methodology

Halborn performed a combination of manual review of the code and automated security testing to balance efficiency, timeliness, practicality, and accuracy regarding the scope of the smart contract assessment. While manual testing is recommended to uncover flaws in logic, process, and implementation, automated testing techniques help enhance the coverage of smart contracts. They can quickly identify items that do not follow security best practices. The following phases and associated tools were used throughout the term of the assessment:

- Research into architecture, purpose, and use of the platform.

- Manual code read and walk through.

- Manual assessment of the use and safety of the critical Rust variables and functions in scope, to identify any arithmetic-related vulnerability classes.

- Race condition tests.

- Cross contract call controls.

- Architecture related logical controls.

- Fuzz testing. (`cargo fuzz`)

- Checking the unsafe code usage. (`cargo-geiger`)

- Scanning of Rust files for vulnerabilities. (`cargo audit`)



3.1 Manual Testing

INTEREST RATE MODEL

- The interest rate calculation is accurate under normal conditions. - PASSED

- The interest rate calculation handles edge cases correctly, such as zero or extremely high utilization rates - PASSED

- The interest rate model can be updated by an authorized admin or governance action - PASSED

- Unauthorized users cannot update the interest rate model - PASSED

MONEY MARKET

- Users can borrow and repay loans according to the platform’s rules - PASSED

- Accrual of interest for borrowers and suppliers is updated with frequency - PASSED

- Liquidations work correctly when collateral value falls below the required threshold - PASSED

- Users cannot manipulate the system to obtain loans exceeding their allowed borrowing capacity - PASSED

ORACLE

- The oracle returns accurate price data for supported assets - PASSED

- Oracle’s response to extreme price fluctuations or unexpected asset price data cannot be abused to steal funds - PASSED

CONTROLLER

- Controller enforces proper collateralization and borrowing limits for users - PASSED

- Controller accurately calculates interest rates and updates user balances accordingly - PASSED

- Controller’s handling of user actions, such as depositing collateral, borrowing, repaying loans, and withdrawing collateral, cannot be abused to obtain profits - PASSED

- Controller responds correctly to external factors, such as changes in the interest rate model or oracle price data - PASSED

ADMIN

- Admin actions are restricted to authorized admin accounts - PASSED

- Admin actions, such as updating interest rate models or adding new collateral types, function as expected - PASSED

- Unauthorized users cannot perform admin actions - PASSED

GOVERNANCE

- Governance system enforces proper voting weights and quorum requirements - PASSED

- Governance proposals result in the desired changes to the protocol - PASSED

- Users cannot manipulate the governance process or bypass voting requirements - PASSED

4. RISK METHODOLOGY

Every vulnerability and issue observed by Halborn is ranked based on two sets of metrics and a Severity Coefficient. This system is inspired by the industry-standard Common Vulnerability Scoring System.
The two metric sets are Exploitability and Impact. Exploitability captures the ease and technical means by which a vulnerability can be exploited, and Impact describes the consequences of a successful exploit.
The Severity Coefficient is designed to further refine the accuracy of the ranking with two factors: Reversibility and Scope. These capture the impact of the vulnerability on the environment as well as the number of users and smart contracts affected.
The final score is a value between 0 and 10, rounded up to 1 decimal place, with 10 corresponding to the highest security risk. This provides an objective and accurate rating of the severity of security vulnerabilities in smart contracts.
The system is designed to assist in identifying and prioritizing vulnerabilities based on their level of risk, so that the most critical issues can be addressed in a timely manner.

4.1 EXPLOITABILITY

Attack Origin (AO):
Captures whether the attack requires compromising a specific account.
Attack Cost (AC):
Captures the cost of exploiting the vulnerability incurred by the attacker relative to sending a single transaction on the relevant blockchain. Includes but is not limited to financial and computational cost.
Attack Complexity (AX):
Describes the conditions beyond the attacker’s control that must exist in order to exploit the vulnerability. Includes but is not limited to macro situation, available third-party liquidity and regulatory challenges.
Metrics (exploitability metric $m_e$, metric value, numerical value):

- Attack Origin (AO): Arbitrary (AO:A) = 1; Specific (AO:S) = 0.2
- Attack Cost (AC): Low (AC:L) = 1; Medium (AC:M) = 0.67; High (AC:H) = 0.33
- Attack Complexity (AX): Low (AX:L) = 1; Medium (AX:M) = 0.67; High (AX:H) = 0.33
Exploitability $E$ is calculated using the following formula:

$$E = \prod m_e$$

4.2 IMPACT

Confidentiality (C):
Measures the impact to the confidentiality of the information resources managed by the contract due to a successfully exploited vulnerability. Confidentiality refers to limiting access to authorized users only.
Integrity (I):
Measures the impact to integrity of a successfully exploited vulnerability. Integrity refers to the trustworthiness and veracity of data stored and/or processed on-chain. Integrity impact directly affecting Deposit or Yield records is excluded.
Availability (A):
Measures the impact to the availability of the impacted component resulting from a successfully exploited vulnerability. This metric refers to smart contract features and functionality, not state. Availability impact directly affecting Deposit or Yield is excluded.
Deposit (D):
Measures the impact to the deposits made to the contract by either users or owners.
Yield (Y):
Measures the impact to the yield generated by the contract for either users or owners.
Metrics (impact metric $m_I$, metric value, numerical value):

- Confidentiality (C): None (C:N) = 0; Low (C:L) = 0.25; Medium (C:M) = 0.5; High (C:H) = 0.75; Critical (C:C) = 1
- Integrity (I): None (I:N) = 0; Low (I:L) = 0.25; Medium (I:M) = 0.5; High (I:H) = 0.75; Critical (I:C) = 1
- Availability (A): None (A:N) = 0; Low (A:L) = 0.25; Medium (A:M) = 0.5; High (A:H) = 0.75; Critical (A:C) = 1
- Deposit (D): None (D:N) = 0; Low (D:L) = 0.25; Medium (D:M) = 0.5; High (D:H) = 0.75; Critical (D:C) = 1
- Yield (Y): None (Y:N) = 0; Low (Y:L) = 0.25; Medium (Y:M) = 0.5; High (Y:H) = 0.75; Critical (Y:C) = 1
Impact $I$ is calculated using the following formula:

$$I = \max(m_I) + \frac{\sum m_I - \max(m_I)}{4}$$

4.3 SEVERITY COEFFICIENT

Reversibility (R):
Describes the share of the exploited vulnerability effects that can be reversed. For upgradeable contracts, assume the contract private key is available.
Scope (S):
Captures whether a vulnerability in one vulnerable contract impacts resources in other contracts.
Metrics (severity coefficient $C$, coefficient value, numerical value):

- Reversibility (r): None (R:N) = 1; Partial (R:P) = 0.5; Full (R:F) = 0.25
- Scope (s): Changed (S:C) = 1.25; Unchanged (S:U) = 1
Severity Coefficient $C$ is obtained by the following product:

$$C = r \cdot s$$

The Vulnerability Severity Score $S$ is obtained by:

$$S = \min(10, E \cdot I \cdot C \cdot 10)$$

The score is rounded up to 1 decimal place.

Severity and score value ranges:

- Critical: 9 - 10
- High: 7 - 8.9
- Medium: 4.5 - 6.9
- Low: 2 - 4.4
- Informational: 0 - 1.9
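Taken together, the scoring formulas above can be sketched in plain Rust. This is an illustrative reimplementation of the methodology, not Halborn's actual tooling, and the metric values used in the usage note below are examples rather than any finding's real vector:

```rust
/// Round a score up to one decimal place, as the methodology specifies.
fn round_up_1dp(x: f64) -> f64 {
    (x * 10.0).ceil() / 10.0
}

/// Severity score per the formulas above:
/// E = prod(m_e), I = max(m_I) + (sum(m_I) - max(m_I)) / 4,
/// C = r * s, S = min(10, E * I * C * 10).
fn severity_score(exploitability: &[f64], impact: &[f64], r: f64, s: f64) -> f64 {
    let e: f64 = exploitability.iter().product();
    let max_i = impact.iter().cloned().fold(0.0_f64, f64::max);
    let sum_i: f64 = impact.iter().sum();
    let i = max_i + (sum_i - max_i) / 4.0;
    let c = r * s;
    round_up_1dp((e * i * c * 10.0).min(10.0))
}
```

For example, a vector of AO:A, AC:L, AX:M (E = 0.67), a single Medium Deposit impact (I = 0.5), and R:P, S:U (C = 0.5) scores 1.7, landing in the Informational band.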

5. SCOPE

Files and Repository

(a) Repository: hatom-protocol
(b) Assessed Commit IDs: 44bd30a, eaaf9d6, 4fd4eac
(c) Items in scope:
  - Interest Rate Model
  - Money Market
  - Oracle
Out-of-Scope: Staking, Safety Module

(a) Repository: hatom-governance
(b) Assessed Commit ID: 9d95873
(c) Items in scope:
  - Governance

Remediation Commit ID:
Out-of-Scope: New features/implementations after the remediation commit IDs.

6. Assessment Summary & Findings Overview

- Critical: 0
- High: 0
- Medium: 1
- Low: 2
- Informational: 4

Security Analysis | Risk Level | Remediation Date

- HARDCODED USDC DOLLAR PEG IN CHAINLINK PRICE CALCULATION | Medium | Solved - 03/28/2023
- REWARDS PERIOD SHOULD BE GREATER THAN ZERO | Low | Solved - 03/29/2023
- SINGLE STEP OWNERSHIP TRANSFER PROCESS | Low | Solved - 05/26/2023
- ANCHOR TOLERANCE DOES NOT HAVE A THRESHOLD IN THE ORACLE CONSTRUCTOR | Informational | Solved - 04/15/2023
- REDUNDANT CHECK FOR UNDERLYING AMOUNT GREATER THAN ZERO | Informational | Solved - 04/15/2023
- MISSING INTEREST RATE MODEL VALIDATION | Informational | Solved - 03/29/2023
- SUPPORTED PAIRS CANNOT BE MADE UNSUPPORTED | Informational | Solved - 05/25/2024

7. Findings & Tech Details

7.1 HARDCODED USDC DOLLAR PEG IN CHAINLINK PRICE CALCULATION

// Medium

Description

The get_chainlink_price_in_egld function assumes that the USDC stablecoin is always pegged to USD, which may not always be true. This assumption can lead to inaccurate price calculations, potentially impacting users' collateral values, borrowing capacity, and liquidations; particularly during periods of high market volatility or external factors causing deviations from the 1:1 peg. The inaccurate price calculations can have consequences such as:

- Unfair treatment of users

- Incorrect interest rates

- Unjust liquidations


Code Location


#[view(getChainlinkPriceInEgld)]
fn get_chainlink_price_in_egld(&self, token_id: &TokenIdentifier) -> BigUint {
    let optional_pair = self.get_chainlink_pair(token_id);
    match optional_pair {
        None => sc_panic!(ERROR_UNSUPPORTED_TOKEN),
        Some(pair) => {
            let wad = BigUint::from(WAD);
            let (_, _, _, _, egld_in_usd, decimals0) = self.get_chainlink_latest_price_feed(
                &ManagedBuffer::from(CHAINLINK_EGLD_SYMBOL),
                &ManagedBuffer::from(CHAINLINK_USD_SYMBOL),
            );
            // chainlink is not pushing the USDC price, assume USDC is always pegged to USD
            let (token_in_usd, decimals1) = if pair.chainlink_symbol.to_boxed_bytes().as_slice() == b"USDC" {
                (wad.clone(), 18u8)
            } else {
                let (_, _, _, _, token_in_usd, decimals1) = self.get_chainlink_latest_price_feed(
                    &pair.chainlink_symbol,
                    &ManagedBuffer::from(CHAINLINK_USD_SYMBOL),
                );
                (token_in_usd, decimals1)
            };
            ...snipped...
            self.chainlink_price_fetched_event(token_id, &price);
            price
        },
    }
}
Recommendation

To mitigate the potential impact of peg deviation, consider fetching the USDC price directly from a trusted oracle, rather than assuming a constant 1:1 peg. This ensures that the lending platform utilizes accurate price data, accounting for any deviation from the peg in its calculations. Implementing monitoring and alerting mechanisms to detect substantial deviations from the expected peg can also help take corrective actions in a timely manner. Lastly, incorporating additional checks and safety measures in the lending platform's logic, such as collateral buffer requirements, can provide further protection against the risks posed by peg deviation.
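As a sketch of this recommendation, the special case for USDC can be dropped so that every symbol, stablecoin or not, must resolve through a live feed. The map-based feed below is a stand-in for the Chainlink integration, and all names are illustrative assumptions, not the protocol's actual API:

```rust
use std::collections::HashMap;

/// Hypothetical feed lookup with no hardcoded 1:1 assumption for USDC.
/// Prices are in USD with 18 decimals (wad), mirroring the report's convention.
fn token_price_in_usd(feeds: &HashMap<&str, u128>, symbol: &str) -> Result<u128, &'static str> {
    // Every token, including stablecoins, must have a live feed entry.
    feeds.get(symbol).copied().ok_or("unsupported token: no live price feed")
}
```

With this shape, a USDC depeg (e.g. a feed reading of 0.98 USD) propagates into collateral and liquidation math instead of being silently ignored.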


Remediation Plan

SOLVED: The issue was solved in commit 58c1c by removing the hardcoded price and requesting the price from Chainlink instead.

Remediation Hash

7.2 REWARDS PERIOD SHOULD BE GREATER THAN ZERO

// Low

Description

In the function update_rewards_batch_speed, there is a possibility that the calculated rewards batch period dt could be zero. This may lead to unexpected behavior, such as the rewards batch being stuck in the current state and preventing further rewards distribution.

The problem arises from the fact that there is no explicit check for whether dt is zero, and the code proceeds to update the rewards_batch.end_time with the potentially zero value.

Consider the following scenario:

A rewards batch has an initial speed of 100, old_dt of 100, and new_speed of 10,000.

The calculation for new_dt would then be: $$(100 \cdot 100 \cdot wad) / (10{,}000 \cdot wad) = 1$$

In this case, the value of dt is 1, which is greater than zero and proceeds normally.

However, if we change the initial speed to 1 and the new_speed to 10,000, the calculation for new_dt would result in:

$$(1 \cdot 100 \cdot wad) / (10{,}000 \cdot wad) = 0.01$$

Since the dt is being converted to u64, the fractional part is truncated, resulting in a value of 0 for dt.
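The truncation can be reproduced in isolation. The sketch below uses u128 in place of BigUint and illustrative names; it mirrors the arithmetic described above, not the contract's exact code:

```rust
const WAD: u128 = 1_000_000_000_000_000_000; // 10^18

/// new_dt = (speed * old_dt * wad) / (new_speed * wad), truncated to u64.
/// Integer division discards the fractional part, so results below 1 become 0.
fn new_rewards_period(speed: u128, old_dt: u128, new_speed: u128) -> u64 {
    ((speed * old_dt * WAD) / (new_speed * WAD)) as u64
}
```

Here `new_rewards_period(100, 100, 10_000)` yields 1, while `new_rewards_period(1, 100, 10_000)` truncates 0.01 down to 0, the stuck-batch case described above.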


Code Location

Down below is the code snippet from the update_rewards_batch_speed function:


let dt = match BigUint::to_u64(&new_dt) {
    None => sc_panic!(ERROR_UNEXPECTED_REWARDS_BATCH_PERIOD),
    Some(dt) => dt,
};
Recommendation

To mitigate this issue, it is recommended to add an explicit check for a zero value of dt and handle it appropriately. This might include returning an error or requiring a minimum rewards batch period.
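A hedged sketch of such a guard follows; the function and error strings are illustrative, not the wording of the actual fix:

```rust
/// Reject a zero period instead of storing it, per the recommendation above.
/// Also rejects values that do not fit in u64, as the original code does.
fn checked_rewards_period(new_dt: u128) -> Result<u64, &'static str> {
    let dt = u64::try_from(new_dt).map_err(|_| "unexpected rewards batch period")?;
    if dt == 0 {
        return Err("rewards batch period must be greater than zero");
    }
    Ok(dt)
}
```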


Remediation Plan

SOLVED: The issue was solved in commit a76fa by adding a check to verify the reward's period is greater than zero when updating rewards speed.

Remediation Hash

7.3 SINGLE STEP OWNERSHIP TRANSFER PROCESS

// Low

Description

The current ownership transfer process involves the approval of a proposal. When the proposal is successful, the admin is set to the proposed address in the try_change_admin function. This function checks that the new owner is not the zero address and proceeds to write the new admin. If the submitted account is not a valid account, it is entirely possible that the owner may accidentally transfer ownership to an uncontrolled account, breaking the access control in the execute endpoint and in the proposals' creation process. The lack of a two-step procedure for critical operations leaves them error-prone: if the address is incorrect, the new address will take over the administration of the contract.


Code Location

Down below is the code snippet from the try_change_admin function:


fn try_change_admin(&self, address: ManagedAddress) {
    require!(!address.is_zero(), ERROR_ADDRESS_ZERO);
    self.admin().set(&address);
}
Recommendation

It is recommended to introduce a validation step to confirm the new administrator address before updating the admin field, for example, requiring the new administrator to accept the role explicitly, potentially through a function call from the new admin address.
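The accept-role pattern can be sketched in plain Rust. Struct fields stand in for the contract's storage mappers, and all names are illustrative assumptions rather than the governance contract's actual endpoints:

```rust
struct Governance {
    admin: String,
    pending_admin: Option<String>,
}

impl Governance {
    /// Step 1: the current admin nominates a successor; nothing changes yet.
    fn propose_admin(&mut self, caller: &str, new_admin: &str) -> Result<(), &'static str> {
        if caller != self.admin {
            return Err("only the current admin may propose");
        }
        self.pending_admin = Some(new_admin.to_string());
        Ok(())
    }

    /// Step 2: the nominee must explicitly accept, proving the address is controlled.
    fn accept_admin(&mut self, caller: &str) -> Result<(), &'static str> {
        match self.pending_admin.as_deref() {
            Some(pending) if pending == caller => {
                self.admin = caller.to_string();
                self.pending_admin = None;
                Ok(())
            }
            _ => Err("caller is not the pending admin"),
        }
    }
}
```

A typo in the proposed address now leaves the current admin in place, since an uncontrolled address can never call the accept step.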


Remediation Plan

SOLVED: This issue was solved in commit 1804e by requiring the new admin to accept their role to finalize the role transfer. It is important to note that the governance contract was moved to a new repository.

Remediation Hash

7.4 ANCHOR TOLERANCE DOES NOT HAVE A THRESHOLD IN THE ORACLE CONSTRUCTOR

// Informational

Description

The anchor tolerance defines the acceptable deviation between the oracle's reported price and the actual market price of the asset. In the oracle's init function, this deviation is set using the anchor_tolerance parameter. The function set_anchor_tolerance_internal checks that the lower_bound is at least 1, but the upper bound is 1 plus the submitted anchor tolerance, so the upper bound can take any value.

Big upper bounds could make the oracle vulnerable to price manipulations and/or high volatility.


Code Location

Down below is the code snippet from the set_anchor_tolerance_internal function:

The upper bound is calculated here:


fn set_anchor_tolerance_internal(&self, anchor_tolerance: &BigUint) {
    let wad = BigUint::from(WAD);
    let upper_bound = &wad + anchor_tolerance;
    let lower_bound = if anchor_tolerance < &wad {
        &wad - anchor_tolerance
    } else {
        // avoid prices being zero
        BigUint::from(1u32)
    };
    self.upper_bound_ratio().set(upper_bound);
    self.lower_bound_ratio().set(lower_bound);
}
Recommendation

It is recommended to set a maximum value for the upper bound, so that the anchor tolerance always stays within a reasonable range.
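A sketch of the bounded version follows; MAX_ANCHOR_TOLERANCE is a hypothetical constant chosen for illustration, not the threshold used in the actual fix:

```rust
const WAD: u128 = 1_000_000_000_000_000_000; // 10^18

/// Assumed cap: at most 50% deviation from the anchor price (illustrative).
const MAX_ANCHOR_TOLERANCE: u128 = WAD / 2;

/// Returns (upper_bound, lower_bound), rejecting oversized tolerances.
fn anchor_bounds(anchor_tolerance: u128) -> Result<(u128, u128), &'static str> {
    if anchor_tolerance > MAX_ANCHOR_TOLERANCE {
        return Err("anchor tolerance above maximum");
    }
    // tolerance <= WAD / 2, so the subtraction can never underflow or reach zero
    Ok((WAD + anchor_tolerance, WAD - anchor_tolerance))
}
```

Capping the tolerance bounds both ratios at once, so the lower-bound fallback to 1 in the original code becomes unreachable.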



Remediation Plan

SOLVED: The issue was solved in commit 035ec by setting a max_anchor_tolerance value as a threshold.


Remediation Hash

7.5 REDUNDANT CHECK FOR UNDERLYING AMOUNT GREATER THAN ZERO

// Informational

Description

In the borrow_internal function, there is a redundant check for underlying_amount > BigUint::zero(). This check is unnecessary as the borrow function, which calls borrow_internal, has already ensured that the underlying_amount is greater than zero. Removing this redundant check can improve code efficiency.


Code Location

Down below is the code snippet from the borrow_internal function:


if borrower_current_borrow_amount == BigUint::zero() && underlying_amount > BigUint::zero() {
    self.borrower_count().update(|_count| *_count += 1u64);
}
Recommendation

To address this issue, it is recommended to remove the unnecessary check for underlying_amount > BigUint::zero() in the borrow_internal function.


Remediation Plan

SOLVED: The issue was solved in commit 47658 by removing the underlying_amount > BigUint::zero() check.

Remediation Hash

7.6 MISSING INTEREST RATE MODEL VALIDATION

// Informational

Description

The get_interest_rate_model_proxy function does not validate if the interest rate model address is defined before accessing it, which may lead to unexpected behavior if the address is not set, or has been accidentally removed or invalidated.


Code Location

Down below is the code snippet from the get_interest_rate_model_proxy function:


fn get_interest_rate_model_proxy(&self, sc_address: Option<ManagedAddress>) -> interest_rate_model::Proxy<Self::Api> {
    match sc_address {
        Some(interest_rate_model_address) => self.interest_rate_model_proxy(interest_rate_model_address),
        None => {
            let interest_rate_model_address = self.interest_rate_model().get();
            self.interest_rate_model_proxy(interest_rate_model_address)
        }
    }
}
Recommendation

To mitigate this vulnerability, it is recommended to add a check before accessing the interest rate model to ensure that it is defined and valid.



Remediation Plan

SOLVED: The issue was solved in commit ac0fc by adding a check to verify that the interest rate model contract has been defined.

Remediation Hash

7.7 SUPPORTED PAIRS CANNOT BE MADE UNSUPPORTED

// Informational

Description

It was observed that it is not possible to remove a supported pair from either whitelisted_maiar_pairs or whitelisted_chainlink_pairs. The only way to remove those pairs is to overwrite their values or to pause the market.

Recommendation

It is recommended to implement a function that removes entries from whitelisted_maiar_pairs, or to pause the use of supported pairs.


Remediation Plan


SOLVED: The issue was solved in commit 1eacfc by introducing the removeToken endpoint, which deletes all information related to a previously supported token. The Hatom Protocol now places more emphasis on the token itself, rather than on pairs.

Remediation Hash

8. Automated Testing

Introduction

Halborn used automated security scanners to assist with detection of well-known security issues and vulnerabilities. Among the tools used was cargo audit, a scanner for vulnerabilities reported to the RustSec Advisory Database, which covers crates published on https://crates.io. cargo audit scans Cargo.lock and presents the advisory data in human-readable form. Only security detections are in scope. To better assist the developers maintaining this code, the cargo audit output includes the dependency tree, showing which dependencies are affected by unmaintained or vulnerable crates.


Results

Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.

© Halborn 2024. All rights reserved.