Prepared by:
HALBORN
Last Updated 04/23/2024
Date of Engagement: March 7th, 2023 - April 10th, 2023
100% of all REPORTED Findings have been addressed
All findings | Critical | High | Medium | Low | Informational |
---|---|---|---|---|---|
7 | 0 | 0 | 1 | 2 | 4 |
Hatom Protocol is a decentralized lending protocol developed on the MultiversX (Elrond) blockchain, which aims to provide a comprehensive money market ecosystem to users. The platform enables users to lend, borrow, and stake various cryptocurrencies at interest rates determined by supply and demand dynamics in the market. The protocol offers depositors the opportunity to earn passive income by providing liquidity, while borrowers can access loans by supplying collateral in an over-collateralized fashion, which reduces the risk of default.
Hatom Protocol's ecosystem is designed to support eGold (EGLD), the native token of MultiversX, along with other major cryptocurrencies, facilitating seamless integration and interaction between the protocol and the MultiversX network. This compatibility with MultiversX's native token ensures that users can access a wide range of digital assets within the lending protocol.
The platform may also implement a governance model to maintain decentralization and encourage community involvement. This governance system would allow token holders to participate in decision-making processes, such as proposing and voting on protocol updates or modifications.
In summary, Hatom Protocol is a decentralized lending protocol on the MultiversX (Elrond) blockchain that offers users the ability to lend, borrow, and stake cryptocurrencies at market-driven interest rates. The platform seeks to provide a reliable and efficient DeFi solution by leveraging the advanced features of the MultiversX (Elrond) network, while promoting financial inclusion and accessibility in the digital asset space.
Initially, Hatom Protocol engaged Halborn to conduct a security assessment on their smart contracts beginning on 03/07/2023 and ending on 04/10/2023. Before launching the protocol, some changes and functionalities were added to the code, and Halborn conducted a review of the new code beginning on 06/26/2023 and ending on 06/30/2023.
The team at Halborn was provided four weeks for the engagement and assigned two full-time security engineers to verify the security of the smart contracts. The security engineers are blockchain and smart-contract security experts with advanced penetration testing and smart-contract hacking skills, and deep knowledge of multiple blockchain protocols.
The purpose of this assessment is to achieve the following:
- Ensure that smart contract functions operate as intended: Verify that each function performs its intended purpose and adheres to the specified requirements.
- Identify potential security issues with the smart contracts: Detect vulnerabilities, bugs, and weaknesses that could be exploited by malicious actors or result in unintended consequences.
- Verify compliance with best practices and coding standards: Check that the smart contract code follows established best practices, such as clear documentation, proper error handling, and efficient gas usage.
- Evaluate the overall design and architecture: Assess the smart contract's design and structure to ensure it is robust, modular, and maintainable.
- Validate access controls and permissions: Ensure that access controls are correctly implemented and that only authorized parties can perform sensitive operations.
- Confirm proper handling of user funds and assets: Verify that the smart contract securely manages user funds and assets without risk of loss or unauthorized access.
- Test for potential race conditions and reentrancy attacks: Identify scenarios where concurrent transactions or external calls could lead to unexpected behavior or security vulnerabilities.
- Examine the smart contract's upgradeability and extensibility: Ensure that the contract can be upgraded or extended in a secure and controlled manner, if necessary.
- Review the contract's interactions with other contracts and external systems: Analyze how the smart contract interacts with other components in the ecosystem, including other smart contracts and external services, to identify potential issues or points of failure.
- Validate the smart contract's performance and resource usage: Analyze the contract's gas consumption and resource usage to ensure it operates efficiently and does not impose unnecessary costs on users or the network.
In summary, Halborn identified several improvements to reduce the likelihood and impact of multiple issues, all of which were successfully addressed by the Hatom team. The main ones were the following:
***(HAL-01) HARDCODED USDC DOLLAR PEG IN CHAINLINK PRICE CALCULATION***
The `get_chainlink_price_in_egld` function assumes that the USDC stablecoin is always pegged 1:1 to USD, which may not always be true. This assumption can lead to inaccurate price calculations, potentially impacting users' collateral values, borrowing capacity, and liquidations.
***(HAL-02) REWARDS PERIOD SHOULD BE GREATER THAN ZERO***
In the function `update_rewards_batch_speed`, there is a possibility that the calculated rewards batch period `dt` is zero. This may lead to unexpected behavior, such as the rewards batch being stuck in the current state and preventing further rewards distribution.
Halborn performed a combination of manual review of the code and automated security testing to balance efficiency, timeliness, practicality, and accuracy in regard to the scope of the smart contract assessment. While manual testing is recommended to uncover flaws in logic, process, and implementation, automated testing techniques help enhance coverage of the smart contracts and can quickly identify items that do not follow security best practices. The following phases and associated tools were used throughout the term of the assessment:
- Research into architecture, purpose, and use of the platform.
- Manual code read and walk through.
- Manual assessment of the use and safety of critical Rust variables and functions in scope to identify arithmetic-related vulnerability classes.
- Race condition tests.
- Cross contract call controls.
- Architecture related logical controls.
- Fuzz testing. (`cargo fuzz`)
- Checking the unsafe code usage. (`cargo-geiger`)
- Scanning of Rust files for vulnerabilities. (`cargo audit`)
INTEREST RATE MODEL
- The interest rate calculation is accurate under normal conditions. - PASSED
- The interest rate calculation handles edge cases correctly, such as zero or extremely high utilization rates - PASSED
- The interest rate model can be updated by an authorized admin or governance action - PASSED
- Unauthorized users cannot update the interest rate model - PASSED
MONEY MARKET
- Users can borrow and repay loans according to the platform’s rules - PASSED
- Accrual of interest for borrowers and suppliers is updated with frequency - PASSED
- Liquidations work correctly when collateral value falls below the required threshold - PASSED
- Users cannot manipulate the system to obtain loans exceeding their allowed borrowing capacity - PASSED
ORACLE
- The oracle returns accurate price data for supported assets - PASSED
- Oracle's response to extreme price fluctuations or unexpected asset price data cannot be abused to steal funds - PASSED
CONTROLLER
- Controller enforces proper collateralization and borrowing limits for users - PASSED
- Controller accurately calculates interest rates and updates user balances accordingly - PASSED
- Controller's handling of user actions, such as depositing collateral, borrowing, repaying loans, and withdrawing collateral, cannot be abused to obtain profits - PASSED
- Controller responds correctly to external factors, such as changes in the interest rate model or oracle price data - PASSED
ADMIN
- Admin actions are restricted to authorized admin accounts - PASSED
- Admin actions, such as updating interest rate models or adding new collateral types, function as expected - PASSED
- Unauthorized users cannot perform admin actions - PASSED
GOVERNANCE
- Governance system enforces proper voting weights and quorum requirements - PASSED
- Governance proposals result in the desired changes to the protocol - PASSED
- Users cannot manipulate the governance process or bypass voting requirements - PASSED
EXPLOITABILITY METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Attack Origin (AO) | Arbitrary (AO:A) / Specific (AO:S) | 1 / 0.2 |
Attack Cost (AC) | Low (AC:L) / Medium (AC:M) / High (AC:H) | 1 / 0.67 / 0.33 |
Attack Complexity (AX) | Low (AX:L) / Medium (AX:M) / High (AX:H) | 1 / 0.67 / 0.33 |

IMPACT METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Confidentiality (C) | None (C:N) / Low (C:L) / Medium (C:M) / High (C:H) / Critical (C:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Integrity (I) | None (I:N) / Low (I:L) / Medium (I:M) / High (I:H) / Critical (I:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Availability (A) | None (A:N) / Low (A:L) / Medium (A:M) / High (A:H) / Critical (A:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Deposit (D) | None (D:N) / Low (D:L) / Medium (D:M) / High (D:H) / Critical (D:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Yield (Y) | None (Y:N) / Low (Y:L) / Medium (Y:M) / High (Y:H) / Critical (Y:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |

SEVERITY COEFFICIENT | COEFFICIENT VALUE | NUMERICAL VALUE |
---|---|---|
Reversibility (R) | None (R:N) / Partial (R:P) / Full (R:F) | 1 / 0.5 / 0.25 |
Scope (S) | Changed (S:C) / Unchanged (S:U) | 1.25 / 1 |
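For context, the individual metric values above are combined into a single numerical score. The formulas below follow Halborn's standard scoring methodology and are given here as a reference sketch rather than a definitive restatement of it:

$$ E = \prod m_e \qquad I = \max(m_I) + \frac{\sum m_I - \max(m_I)}{4} \qquad C = r \cdot s \qquad S = \min(10,\ 10 \cdot E \cdot I \cdot C) $$

where $E$ is the exploitability (the product of the exploitability metric values $m_e$), $I$ the impact (weighted towards the largest impact metric value $m_I$), $C$ the severity coefficient (reversibility $r$ multiplied by scope $s$), and $S$ the final score, which maps to the severity ranges below.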
Severity | Score Value Range |
---|---|
Critical | 9 - 10 |
High | 7 - 8.9 |
Medium | 4.5 - 6.9 |
Low | 2 - 4.4 |
Informational | 0 - 1.9 |
Security analysis | Risk level | Remediation Date |
---|---|---|
HARDCODED USDC DOLLAR PEG IN CHAINLINK PRICE CALCULATION | Medium | Solved - 03/28/2023 |
REWARDS PERIOD SHOULD BE GREATER THAN ZERO | Low | Solved - 03/29/2023 |
SINGLE STEP OWNERSHIP TRANSFER PROCESS | Low | Solved - 05/26/2023 |
ANCHOR TOLERANCE DOES NOT HAVE A THRESHOLD IN THE ORACLE CONSTRUCTOR | Informational | Solved - 04/15/2023 |
REDUNDANT CHECK FOR UNDERLYING AMOUNT GREATER THAN ZERO | Informational | Solved - 04/15/2023 |
MISSING INTEREST RATE MODEL VALIDATION | Informational | Solved - 03/29/2023 |
SUPPORTED PAIRS CANNOT BE MADE UNSUPPORTED | Informational | Solved - 05/25/2024 |
// (HAL-01) HARDCODED USDC DOLLAR PEG IN CHAINLINK PRICE CALCULATION - Medium
The `get_chainlink_price_in_egld` function assumes that the USDC stablecoin is always pegged to USD, which may not always be true. This assumption can lead to inaccurate price calculations, potentially impacting users' collateral values, borrowing capacity, and liquidations, particularly during periods of high market volatility or external factors causing deviations from the 1:1 peg. The inaccurate price calculations can have consequences such as:
- Unfair treatment of users
- Incorrect interest rates
- Unjust liquidations
oracle/src/prices.rs
#[view(getChainlinkPriceInEgld)]
fn get_chainlink_price_in_egld(&self, token_id: &TokenIdentifier) -> BigUint {
    let optional_pair = self.get_chainlink_pair(token_id);
    match optional_pair {
        None => sc_panic!(ERROR_UNSUPPORTED_TOKEN),
        Some(pair) => {
            let wad = BigUint::from(WAD);
            let (_, _, _, _, egld_in_usd, decimals0) = self.get_chainlink_latest_price_feed(
                &ManagedBuffer::from(CHAINLINK_EGLD_SYMBOL),
                &ManagedBuffer::from(CHAINLINK_USD_SYMBOL),
            );
            // chainlink is not pushing the USDC price, assume USDC is always pegged to USD
            let (token_in_usd, decimals1) = if pair.chainlink_symbol.to_boxed_bytes().as_slice() == b"USDC" {
                (wad.clone(), 18u8)
            } else {
                let (_, _, _, _, token_in_usd, decimals1) = self.get_chainlink_latest_price_feed(
                    &pair.chainlink_symbol,
                    &ManagedBuffer::from(CHAINLINK_USD_SYMBOL),
                );
                (token_in_usd, decimals1)
            };
            ...snipped...
            self.chainlink_price_fetched_event(token_id, &price);
            price
        },
    }
}
To mitigate the potential impact of peg deviation, consider fetching the USDC price directly from a trusted oracle, rather than assuming a constant 1:1 peg. This ensures that the lending platform utilizes accurate price data, accounting for any deviation from the peg in its calculations. Implementing monitoring and alerting mechanisms to detect substantial deviations from the expected peg can also help take corrective actions in a timely manner. Lastly, incorporating additional checks and safety measures in the lending platform's logic, such as collateral buffer requirements, can provide further protection against the risks posed by peg deviation.
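As a rough illustration of this recommendation (and not Hatom's actual remediation), the USDC special case could simply be dropped, so that every symbol, including USDC, is resolved through the same Chainlink feed lookup already used for other tokens, assuming a live USDC/USD feed is available:

```rust
// Illustrative fragment reusing the identifiers from the snippet above: resolve USDC
// through the regular feed lookup instead of hardcoding a 1:1 peg (wad).
let (_, _, _, _, token_in_usd, decimals1) = self.get_chainlink_latest_price_feed(
    &pair.chainlink_symbol,
    &ManagedBuffer::from(CHAINLINK_USD_SYMBOL),
);
```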
SOLVED: The issue was solved in commit `58c1c` by removing the hardcoded price and requesting the price from Chainlink instead.
// (HAL-02) REWARDS PERIOD SHOULD BE GREATER THAN ZERO - Low
In the function `update_rewards_batch_speed`, there is a possibility that the calculated rewards batch period `dt` could be zero. This may lead to unexpected behavior, such as the rewards batch being stuck in the current state and preventing further rewards distribution.
The problem arises from the fact that there is no explicit check for whether `dt` is zero, and the code proceeds to update the `rewards_batch.end_time` with the potentially zero value.
Consider the following scenario: a rewards batch has an initial speed of 100, an `old_dt` of 100, and a `new_speed` of 10,000. The calculation for `new_dt` would then be:
$$ \frac{100 \times 100 \times wad}{10{,}000 \times wad} = 1 $$
In this case, the value of `dt` is 1, which is greater than zero, and execution proceeds normally. However, if the initial speed is changed to 1 while `new_speed` stays at 10,000, the calculation for `new_dt` results in:
$$ \frac{1 \times 100 \times wad}{10{,}000 \times wad} = 0.01 $$
Since `dt` is converted to `u64`, the fractional part is truncated, resulting in a value of 0 for `dt`.
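The truncation can be reproduced with plain integer arithmetic. The following standalone snippet uses `u128` in place of the contract's `BigUint`, purely for illustration:

```rust
const WAD: u128 = 1_000_000_000_000_000_000; // 1e18 fixed-point scale

fn main() {
    // speed = 100, old_dt = 100, new_speed = 10,000 -> new_dt = 1
    let ok_dt = (100u128 * 100 * WAD) / (10_000 * WAD);
    assert_eq!(ok_dt, 1);

    // speed = 1, old_dt = 100, new_speed = 10,000 -> 0.01, truncated to 0
    let zero_dt = (1u128 * 100 * WAD) / (10_000 * WAD);
    assert_eq!(zero_dt, 0);
}
```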
Down below is the code snippet from the `update_rewards_batch_speed` function:
controller/src/governance.rs
let dt = match BigUint::to_u64(&new_dt) {
    None => sc_panic!(ERROR_UNEXPECTED_REWARDS_BATCH_PERIOD),
    Some(dt) => dt,
};
To mitigate this issue, it is recommended to add an explicit check for a zero value of `dt` and handle it appropriately. This might include returning an error or requiring a minimum rewards batch period.
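A minimal sketch of such a check, appended after the conversion shown above (the revert message is illustrative, not Hatom's):

```rust
// Convert new_dt to u64 as before, then explicitly reject a zero rewards period.
let dt = match BigUint::to_u64(&new_dt) {
    None => sc_panic!(ERROR_UNEXPECTED_REWARDS_BATCH_PERIOD),
    Some(dt) => dt,
};
require!(dt > 0, "rewards batch period must be greater than zero");
```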
SOLVED: The issue was solved in commit `a76fa` by adding a check to verify that the rewards period is greater than zero when updating the rewards speed.
// (HAL-03) SINGLE STEP OWNERSHIP TRANSFER PROCESS - Low
The current ownership transfer process involves the approval of a proposal. When the proposal is successful, the `admin` is set to the proposed address in the `try_change_admin` function. This function checks that the new owner is not the zero address and then writes the new admin. If the submitted account is not a valid account, it is entirely possible that the owner may accidentally transfer ownership to an uncontrolled account, breaking the access control in the `execute` endpoint and in the proposals' creation process. The lack of a two-step procedure for critical operations leaves them error-prone: if the address is incorrect, the new address will take over administration of the contract.
Down below is the code snippet from the `try_change_admin` function:
governance/src/config.rs
fn try_change_admin(&self, address: ManagedAddress) {
    require!(!address.is_zero(), ERROR_ADDRESS_ZERO);
    self.admin().set(&address);
}
It is recommended to introduce a validation step to confirm the new administrator address before updating the `admin` field, for example, by requiring the new administrator to explicitly accept the role through a function call from the new admin address.
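A minimal sketch of such a two-step transfer, written as a standalone MultiversX module; the storage mapper and endpoint names (`pending_admin`, `proposeAdmin`, `acceptAdmin`) are assumptions for illustration and do not reflect Hatom's implementation:

```rust
multiversx_sc::imports!();

// Hypothetical two-step admin transfer: the role only changes after the proposed
// address explicitly accepts it in a second transaction.
#[multiversx_sc::module]
pub trait TwoStepAdminModule {
    #[storage_mapper("admin")]
    fn admin(&self) -> SingleValueMapper<ManagedAddress>;

    #[storage_mapper("pendingAdmin")]
    fn pending_admin(&self) -> SingleValueMapper<ManagedAddress>;

    /// Step 1: the current admin proposes a new admin address.
    #[endpoint(proposeAdmin)]
    fn propose_admin(&self, new_admin: ManagedAddress) {
        require!(
            self.blockchain().get_caller() == self.admin().get(),
            "only the current admin can propose a new admin"
        );
        require!(!new_admin.is_zero(), "new admin cannot be the zero address");
        self.pending_admin().set(&new_admin);
    }

    /// Step 2: the proposed address must explicitly accept the role,
    /// proving it is a controlled, valid account.
    #[endpoint(acceptAdmin)]
    fn accept_admin(&self) {
        let caller = self.blockchain().get_caller();
        require!(
            caller == self.pending_admin().get(),
            "only the pending admin can accept the role"
        );
        self.admin().set(&caller);
        self.pending_admin().clear();
    }
}
```

With this pattern, an incorrect or uncontrolled address can never silently become admin, because the transfer only completes after a transaction signed by the proposed address.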
SOLVED: The issue was solved in commit `1804e` by requiring the new admin to accept the role to finalize the transfer. It is important to note that the governance contract was moved to a new repository.
// (HAL-04) ANCHOR TOLERANCE DOES NOT HAVE A THRESHOLD IN THE ORACLE CONSTRUCTOR - Informational
The anchor tolerance defines the acceptable deviation between the oracle's reported price and the actual market price of an asset. In the oracle's `init` function, this deviation is set through the `anchor_tolerance` parameter. The function `set_anchor_tolerance_internal` checks that the lower bound is at least 1, but the upper bound is computed as 1 plus the submitted anchor tolerance (in wad units), so the upper bound can take any value.
Large upper bounds could make the oracle vulnerable to price manipulation and/or high volatility.
Down below is the code snippet from the `set_anchor_tolerance_internal` function, where the upper bound is calculated:
oracle/src/common.rs
fn set_anchor_tolerance_internal(&self, anchor_tolerance: &BigUint) {
    let wad = BigUint::from(WAD);
    let upper_bound = &wad + anchor_tolerance;
    let lower_bound = if anchor_tolerance < &wad {
        &wad - anchor_tolerance
    } else {
        // avoid prices being zero
        BigUint::from(1u32)
    };
    self.upper_bound_ratio().set(upper_bound);
    self.lower_bound_ratio().set(lower_bound);
}
It is recommended to set a maximum value for the upper bound, so that the anchor tolerance always stays within a reasonable range.
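A minimal sketch of such a cap, using plain `u128` arithmetic in place of `BigUint` and an assumed 20% maximum (the actual threshold chosen by Hatom may differ):

```rust
// Standalone illustration of capping the anchor tolerance in WAD-scaled fixed point.
const WAD: u128 = 1_000_000_000_000_000_000;
const MAX_ANCHOR_TOLERANCE: u128 = WAD / 5; // assumed cap of 20%

fn anchor_bounds(anchor_tolerance: u128) -> Result<(u128, u128), &'static str> {
    if anchor_tolerance > MAX_ANCHOR_TOLERANCE {
        return Err("anchor tolerance exceeds the maximum allowed value");
    }
    // The cap guarantees anchor_tolerance < WAD, so the lower bound stays positive.
    let lower_bound = WAD - anchor_tolerance;
    let upper_bound = WAD + anchor_tolerance;
    Ok((lower_bound, upper_bound))
}

fn main() {
    assert!(anchor_bounds(WAD / 10).is_ok()); // 10% tolerance accepted
    assert!(anchor_bounds(WAD).is_err()); // 100% tolerance rejected
}
```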
SOLVED: The issue was solved in commit `035ec` by setting a `max_anchor_tolerance` value as a threshold.
// (HAL-05) REDUNDANT CHECK FOR UNDERLYING AMOUNT GREATER THAN ZERO - Informational
In the `borrow_internal` function, there is a redundant check for `underlying_amount > BigUint::zero()`. This check is unnecessary as the `borrow` function, which calls `borrow_internal`, has already ensured that `underlying_amount` is greater than zero. Removing this redundant check can improve code efficiency.
Down below is the code snippet from the `borrow_internal` function:
/money-market/src/borrow.rs
if borrower_current_borrow_amount == BigUint::zero() && underlying_amount > BigUint::zero() {
    self.borrower_count().update(|_count| *_count += 1u64);
}
To address this issue, it is recommended to remove the unnecessary check for `underlying_amount > BigUint::zero()` in the `borrow_internal` function.
SOLVED: The issue was solved in commit `47658` by removing the `underlying_amount > BigUint::zero()` check.
// (HAL-06) MISSING INTEREST RATE MODEL VALIDATION - Informational
The `get_interest_rate_model_proxy` function does not validate whether the interest rate model address is defined before accessing it, which may lead to unexpected behavior if the address is not set, or has been accidentally removed or invalidated.
Down below is the code snippet from the `get_interest_rate_model_proxy` function:
/money-market/src/proxies.rs
fn get_interest_rate_model_proxy(&self, sc_address: Option<ManagedAddress>) -> interest_rate_model::Proxy<Self::Api> {
    match sc_address {
        Some(interest_rate_model_address) => self.interest_rate_model_proxy(interest_rate_model_address),
        None => {
            let interest_rate_model_address = self.interest_rate_model().get();
            self.interest_rate_model_proxy(interest_rate_model_address)
        }
    }
}
To mitigate this vulnerability, it is recommended to add a check before accessing the interest rate model to ensure that it is defined and valid.
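A minimal sketch of the recommended guard, assuming `interest_rate_model()` is a `SingleValueMapper<ManagedAddress>` (the error message is illustrative):

```rust
// Sketch of the validation added to the same function shown above.
fn get_interest_rate_model_proxy(
    &self,
    sc_address: Option<ManagedAddress>,
) -> interest_rate_model::Proxy<Self::Api> {
    let interest_rate_model_address = match sc_address {
        Some(address) => address,
        None => {
            // Fail early when no interest rate model contract has been configured.
            require!(
                !self.interest_rate_model().is_empty(),
                "interest rate model address is not set"
            );
            self.interest_rate_model().get()
        }
    };
    self.interest_rate_model_proxy(interest_rate_model_address)
}
```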
SOLVED: The issue was solved in commit `ac0fc` by adding a check to verify that the interest rate model contract has been defined.
// (HAL-07) SUPPORTED PAIRS CANNOT BE MADE UNSUPPORTED - Informational
It was observed that it is not possible to remove a supported pair from `whitelisted_maiar_pairs` or `whitelisted_chainlink_pairs`. The only way to remove those pairs is to overwrite their values or pause the market.
It is recommended to write a function to remove values from `whitelisted_maiar_pairs` and `whitelisted_chainlink_pairs`, or to pause the use of supported pairs.
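A hypothetical shape for such an endpoint; the storage mapper signature and the endpoint name are assumptions for illustration and do not reflect Hatom's code:

```rust
// Hypothetical admin-only endpoint that removes a previously whitelisted Maiar pair,
// assuming whitelisted_maiar_pairs is keyed by token identifier.
#[only_owner]
#[endpoint(removeMaiarPair)]
fn remove_maiar_pair(&self, token_id: TokenIdentifier) {
    let pair_mapper = self.whitelisted_maiar_pairs(&token_id);
    require!(!pair_mapper.is_empty(), "pair is not whitelisted");
    pair_mapper.clear();
}
```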
SOLVED: The issue was solved in commit `1eacfc` by introducing the `removeToken` endpoint, which deletes all information related to a previously supported token. The Hatom Protocol now places more emphasis on the token itself, rather than on pairs.
Halborn used automated security scanners to assist with the detection of well-known security issues and vulnerabilities. Among the tools used was `cargo audit`, a security scanner for vulnerabilities reported to the RustSec Advisory Database, which tracks advisories affecting crates published on https://crates.io. `cargo audit` scans the project's Cargo.lock against this advisory database and presents the results in a human-readable form. Only security detections are in scope. To better assist the developers maintaining this code, the auditors included the dependency tree in the `cargo audit` output, so that the dependencies affected by unmaintained or vulnerable crates can be easily identified.
Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.