Layer 1 Assessment - Story


Prepared by:

HALBORN

Last Updated 12/12/2024

Dates of Engagement: November 7th, 2024 - December 6th, 2024

Summary

100% of all REPORTED Findings have been addressed

All findings: 8

Critical: 1
High: 2
Medium: 2
Low: 0
Informational: 3


1. Introduction

Story engaged Halborn to conduct a security assessment on their Cosmos project, beginning on November 07th, 2024, and ending on December 06th, 2024. The security assessment was scoped to the smart contracts provided to Halborn. Commit hashes and further details can be found in the Scope section of this report.

2. Assessment Summary

The team at Halborn assigned one full-time security engineer to assess the security of the merge requests. The security engineer is a blockchain and smart-contract security expert with advanced penetration testing and smart-contract hacking skills, and deep knowledge of multiple blockchain protocols.

The purpose of this assessment is to:

    • Ensure that the Golang components and Smart Contracts operate as intended.

    • Identify potential security issues with the Cosmos application and Smart Contracts in scope.


In summary, Halborn identified some improvements to reduce the likelihood and impact of risks, which were mostly addressed by the Story team. The main recommendations were the following:

    • Implement compatibility with common Geth features such as blobs.

    • Charge fees appropriately to prevent DoS attacks and spam transactions.

    • Implement proper error handling.

    • Use the algorithms applied in CometBFT.

3. Test Approach and Methodology

Halborn performed a combination of manual and automated security testing to balance efficiency, timeliness, practicality, and accuracy in regard to the scope of the custom modules. While manual testing is recommended to uncover flaws in logic, process, and implementation, automated testing techniques help enhance coverage of structures and can quickly identify items that do not follow security best practices. The following phases and associated tools were used throughout the term of the assessment:

    • Research into architecture and purpose.

    • Static analysis of the scoped repository and imported functions (e.g., staticcheck, gosec, unconvert, CodeQL, ineffassign and Semgrep).

    • Manual assessment to discover security vulnerabilities in the codebase.

    • Ensuring the correctness of the codebase.

    • Dynamic analysis of files and modules in scope.

4. RISK METHODOLOGY

Every vulnerability and issue observed by Halborn is ranked based on two sets of Metrics and a Severity Coefficient. This system is inspired by the industry standard Common Vulnerability Scoring System.
The two Metric sets are: Exploitability and Impact. Exploitability captures the ease and technical means by which vulnerabilities can be exploited and Impact describes the consequences of a successful exploit.
The Severity Coefficient is designed to further refine the accuracy of the ranking with two factors: Reversibility and Scope. These capture the impact of the vulnerability on the environment as well as the number of users and smart contracts affected.
The final score is a value between 0 and 10, rounded up to 1 decimal place, with 10 corresponding to the highest security risk. This provides an objective and accurate rating of the severity of security vulnerabilities in smart contracts.
The system is designed to assist in identifying and prioritizing vulnerabilities based on their level of risk to address the most critical issues in a timely manner.

4.1 EXPLOITABILITY

Attack Origin (AO):
Captures whether the attack requires compromising a specific account.
Attack Cost (AC):
Captures the cost of exploiting the vulnerability incurred by the attacker relative to sending a single transaction on the relevant blockchain. Includes but is not limited to financial and computational cost.
Attack Complexity (AX):
Describes the conditions beyond the attacker’s control that must exist in order to exploit the vulnerability. Includes but is not limited to macro situation, available third-party liquidity and regulatory challenges.
Metrics:

EXPLOITABILITY METRIC (m_e)  | METRIC VALUE      | NUMERICAL VALUE
Attack Origin (AO)           | Arbitrary (AO:A)  | 1
                             | Specific (AO:S)   | 0.2
Attack Cost (AC)             | Low (AC:L)        | 1
                             | Medium (AC:M)     | 0.67
                             | High (AC:H)       | 0.33
Attack Complexity (AX)       | Low (AX:L)        | 1
                             | Medium (AX:M)     | 0.67
                             | High (AX:H)       | 0.33

Exploitability E is calculated using the following formula:

E = ∏ m_e

4.2 IMPACT

Confidentiality (C):
Measures the impact to the confidentiality of the information resources managed by the contract due to a successfully exploited vulnerability. Confidentiality refers to limiting access to authorized users only.
Integrity (I):
Measures the impact to integrity of a successfully exploited vulnerability. Integrity refers to the trustworthiness and veracity of data stored and/or processed on-chain. Integrity impact directly affecting Deposit or Yield records is excluded.
Availability (A):
Measures the impact to the availability of the impacted component resulting from a successfully exploited vulnerability. This metric refers to smart contract features and functionality, not state. Availability impact directly affecting Deposit or Yield is excluded.
Deposit (D):
Measures the impact to the deposits made to the contract by either users or owners.
Yield (Y):
Measures the impact to the yield generated by the contract for either users or owners.
Metrics:

IMPACT METRIC (m_I)   | METRIC VALUE     | NUMERICAL VALUE
Confidentiality (C)   | None (C:N)       | 0
                      | Low (C:L)        | 0.25
                      | Medium (C:M)     | 0.5
                      | High (C:H)       | 0.75
                      | Critical (C:C)   | 1
Integrity (I)         | None (I:N)       | 0
                      | Low (I:L)        | 0.25
                      | Medium (I:M)     | 0.5
                      | High (I:H)       | 0.75
                      | Critical (I:C)   | 1
Availability (A)      | None (A:N)       | 0
                      | Low (A:L)        | 0.25
                      | Medium (A:M)     | 0.5
                      | High (A:H)       | 0.75
                      | Critical (A:C)   | 1
Deposit (D)           | None (D:N)       | 0
                      | Low (D:L)        | 0.25
                      | Medium (D:M)     | 0.5
                      | High (D:H)       | 0.75
                      | Critical (D:C)   | 1
Yield (Y)             | None (Y:N)       | 0
                      | Low (Y:L)        | 0.25
                      | Medium (Y:M)     | 0.5
                      | High (Y:H)       | 0.75
                      | Critical (Y:C)   | 1

Impact I is calculated using the following formula:

I = max(m_I) + (∑ m_I − max(m_I)) / 4

4.3 SEVERITY COEFFICIENT

Reversibility (R):
Describes the share of the exploited vulnerability effects that can be reversed. For upgradeable contracts, assume the contract private key is available.
Scope (S):
Captures whether a vulnerability in one vulnerable contract impacts resources in other contracts.
Metrics:

SEVERITY COEFFICIENT (C)  | COEFFICIENT VALUE  | NUMERICAL VALUE
Reversibility (r)         | None (R:N)         | 1
                          | Partial (R:P)      | 0.5
                          | Full (R:F)         | 0.25
Scope (s)                 | Changed (S:C)      | 1.25
                          | Unchanged (S:U)    | 1

Severity Coefficient C is obtained by the following product:

C = r · s

The Vulnerability Severity Score S is obtained by:

S = min(10, E · I · C · 10)

The score is rounded up to 1 decimal place.
Severity       | Score Value Range
Critical       | 9 - 10
High           | 7 - 8.9
Medium         | 4.5 - 6.9
Low            | 2 - 4.4
Informational  | 0 - 1.9

5. SCOPE

Files and Repository
(a) Repository: story
(b) Assessed Commit ID: e6d2d51
(c) Items in scope:
  • /evmengine/keeper/abci.go
  • /evmengine/keeper/db.go
  • /evmengine/keeper/evmmsgs.go
Out-of-Scope: Third party dependencies and economic attacks.
Files and Repository
(a) Repository: story
(b) Assessed Commit ID: e6d2d51
(c) Items in scope:
  • /IPTokenStaking.sol
  • /PubKeyVerifier.sol
  • /UBIPool.sol
Out-of-Scope: Third party dependencies and economic attacks.
Files and Repository
(a) Repository: story
(b) Assessed Commit ID: e6d2d51
(c) Items in scope:
  • /abci.go
  • /app_config.go
  • /app.go
Out-of-Scope: Third party dependencies and economic attacks.
Files and Repository
(a) Repository: story
(b) Assessed Commit ID: e6d2d51
(c) Items in scope:
  • /queue.go
Out-of-Scope: Third party dependencies and economic attacks.
Remediation Commit ID:
Out-of-Scope: New features/implementations after the remediation commit IDs.

6. Assessment Summary & Findings Overview

Critical: 1
High: 2
Medium: 2
Low: 0
Informational: 3

Security analysis                                             | Risk level    | Remediation Date
Incompatibility with blobs leads to halt of block processing  | Critical      | Future Release - 12/11/2024
Missing chargeFee modifier allows spamming the CL             | High          | Solved - 11/23/2024
Mismatch between proposer selection algorithms                | High          | Solved - 12/10/2024
Potential Non-Determinism Issue In FinalizeBlock              | Medium        | Solved - 11/25/2024
Lack of pubkey integrity checks                               | Medium        | Solved - 12/04/2024
Floating Pragma                                               | Informational | Solved - 12/11/2024
Use Of Custom Errors                                          | Informational | Acknowledged - 12/09/2024
Contracts Pause Feature Missing                               | Informational | Acknowledged - 12/10/2024

7. Findings & Tech Details

7.1 Incompatibility with blobs leads to halt of block processing

// Critical

Description

The evmengine module is responsible for interacting with Geth's APIs (forkchoiceUpdatedV3 and getPayloadV3) to construct optimistic blocks based on transactions occurring in Geth. These APIs, in their version V3, include compatibility with Ethereum blobs introduced by the relevant EIPs (e.g., EIP-4844). However, there is a critical issue in the implementation of the pushPayload function, which always sets emptyVersionHashes to an empty array of hashes when calling Geth's NewPayloadV3 API.


https://github.com/piplabs/story/blob/e6d2d51550c3eff3f561f8d1b860888ea2bf8060/client/x/evmengine/keeper/proposal_server.go#L33-L53

err = retryForever(ctx, func(ctx context.Context) (bool, error) {
	status, err := pushPayload(ctx, s.engineCl, payload)
	if err != nil || isUnknown(status) {
		// We need to retry forever on networking errors, but can't easily identify them, so retry all errors.
		log.Warn(ctx, "Verifying proposal failed: push new payload to evm (will retry)", err,
			"status", status.Status)

		return false, nil // Retry
	} else if invalid, err := isInvalid(status); invalid {
		return false, errors.Wrap(err, "invalid payload, rejecting proposal") // Don't retry
	} else if isSyncing(status) {
		// If this is initial sync, we need to continue and set a target head to sync to, so don't retry.
		log.Warn(ctx, "Can't properly verifying proposal: evm syncing", err,
			"payload_height", payload.Number)
	}

	return true, nil // We are done, don't retry.
})
if err != nil {
	return nil, err
}

The pushPayload function sets emptyVersionHashes to an empty array (make([]common.Hash, 0)), regardless of whether the payload includes blob transactions. This causes a mismatch between the blob hashes in the payload and the expected versioned hashes during validation in Geth's ExecutableDataToBlockNoHash.


https://github.com/piplabs/story/blob/e6d2d51550c3eff3f561f8d1b860888ea2bf8060/client/x/evmengine/keeper/msg_server.go#L173-L189

func pushPayload(ctx context.Context, engineCl ethclient.EngineClient, payload engine.ExecutableData) (engine.PayloadStatusV1, error) {
	sdkCtx := sdk.UnwrapSDKContext(ctx)
	appHash := common.BytesToHash(sdkCtx.BlockHeader().AppHash)
	if appHash == (common.Hash{}) {
		return engine.PayloadStatusV1{}, errors.New("app hash is empty")
	}

	emptyVersionHashes := make([]common.Hash, 0) // Cannot use nil.

	// Push it back to the execution client (mark it as possible new head).
	status, err := engineCl.NewPayloadV3(ctx, payload, emptyVersionHashes, &appHash)
	if err != nil {
		return engine.PayloadStatusV1{}, errors.Wrap(err, "new payload")
	}

	return status, nil
}

This behavior leads to a failure when the ExecutableDataToBlockNoHash function is invoked within Geth. Specifically, this function validates that the blob hashes used in the payload match the versionedHashes provided. Since emptyVersionHashes is always empty, any transaction containing blobs will fail validation, resulting in an error. Consequently, the Cosmos evmengine cannot process such transactions, preventing block verification and generation on the chain.


https://github.com/ethereum/go-ethereum/blob/e6d2d51550c3eff3f561f8d1b860888ea2bf8060/beacon/engine/types.go#L245-L251

var blobHashes = make([]common.Hash, 0, len(txs))
for _, tx := range txs {
	blobHashes = append(blobHashes, tx.BlobHashes()...)
}
if len(blobHashes) != len(versionedHashes) {
	return nil, fmt.Errorf("invalid number of versionedHashes: %v blobHashes: %v", versionedHashes, blobHashes)
}

Proof of Concept

After sending a transaction with a blob attached, the chain halts and is thus unable to process further blocks.

BVSS
Recommendation

The following is recommended:

  • Modify the pushPayload function to correctly populate the versionedHashes parameter with the blob hashes associated with the transactions in the payload.

  • This requires extracting blob hashes from the payload's transactions and ensuring they are passed to Geth's NewPayloadV3 API.
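The mismatch and the recommended fix can be sketched in a self-contained form. The types below are simplified stand-ins for go-ethereum's (the real fix must decode the payload's raw transactions, e.g. via types.Transaction.UnmarshalBinary, and gather tx.BlobHashes()):

```go
package main

import "fmt"

// Hash stands in for go-ethereum's common.Hash ([32]byte).
type Hash [32]byte

// tx is a minimal stand-in for a decoded payload transaction.
type tx struct{ blobHashes []Hash }

// collectBlobHashes gathers the versioned blob hashes from the payload's
// transactions; this is the slice that should be passed to NewPayloadV3
// instead of an always-empty one.
func collectBlobHashes(txs []tx) []Hash {
	hashes := make([]Hash, 0) // NewPayloadV3 rejects nil.
	for _, t := range txs {
		hashes = append(hashes, t.blobHashes...)
	}
	return hashes
}

// validate mirrors the length check in Geth's ExecutableDataToBlockNoHash.
func validate(txs []tx, versionedHashes []Hash) error {
	want := 0
	for _, t := range txs {
		want += len(t.blobHashes)
	}
	if want != len(versionedHashes) {
		return fmt.Errorf("invalid number of versionedHashes: got %d, want %d", len(versionedHashes), want)
	}
	return nil
}

func main() {
	txs := []tx{{blobHashes: []Hash{{0x01}}}}
	fmt.Println(validate(txs, nil))                    // current behavior: 0 versioned hashes vs 1 blob hash, fails
	fmt.Println(validate(txs, collectBlobHashes(txs))) // recommended behavior: hashes match, passes
}
```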

Remediation

FUTURE RELEASE: The Story team states that this issue can be fixed in future releases as it does not pose an immediate risk for the chain since the EL (Execution Layer) will continue disabling blobs by default.

7.2 Missing chargeFee modifier allows spamming the CL

// High

Description

The removeOperator, redelegate and redelegateOnBehalf functions do not have the chargeFee modifier. As none of them requires the user to send funds, a malicious user can spam the CL (Consensus Layer) by triggering these events, each of which consumes resources on the CL.


Code Location

https://github.com/piplabs/story/blob/e6d2d51550c3eff3f561f8d1b860888ea2bf8060/contracts/src/protocol/IPTokenStaking.sol#L162

    function removeOperator(
        bytes calldata uncmpPubkey,
        address operator
    ) external verifyUncmpPubkeyWithExpectedAddress(uncmpPubkey, msg.sender) 

https://github.com/piplabs/story/blob/e6d2d51550c3eff3f561f8d1b860888ea2bf8060/contracts/src/protocol/IPTokenStaking.sol#L423

    function redelegate(
        bytes calldata delegatorUncmpPubkey,
        bytes calldata validatorUncmpSrcPubkey,
        bytes calldata validatorUncmpDstPubkey,
        uint256 delegationId,
        uint256 amount
    )
        external
        verifyUncmpPubkeyWithExpectedAddress(delegatorUncmpPubkey, msg.sender)
        verifyUncmpPubkey(validatorUncmpSrcPubkey)
        verifyUncmpPubkey(validatorUncmpDstPubkey)

https://github.com/piplabs/story/blob/e6d2d51550c3eff3f561f8d1b860888ea2bf8060/contracts/src/protocol/IPTokenStaking.sol#L446

    function redelegateOnBehalf(
        bytes calldata delegatorUncmpPubkey,
        bytes calldata validatorUncmpSrcPubkey,
        bytes calldata validatorUncmpDstPubkey,
        uint256 delegationId,
        uint256 amount
    )
        external
        verifyUncmpPubkey(delegatorUncmpPubkey)
        verifyUncmpPubkey(validatorUncmpSrcPubkey)
        verifyUncmpPubkey(validatorUncmpDstPubkey)
BVSS
Recommendation

It's recommended to add the chargeFee modifier to them.

Remediation

SOLVED: The Story team fixed this issue as recommended.

Remediation Hash

7.3 Mismatch between proposer selection algorithms

// High

Description

The function isNextProposer attempts to determine whether the local node is the next proposer for the upcoming block in a CometBFT consensus system. However, there is a fundamental issue with the logic: it incorrectly applies a simple round-robin algorithm to select the next proposer, while CometBFT uses a weighted round-robin algorithm based on validator voting power. This discrepancy can lead to incorrect proposer selection, which may cause the node to behave incorrectly in the consensus process, potentially leading to missed blocks or other consensus failures:


// isNextProposer returns true if the local node is the proposer
// for the next block. It also returns the next block height.
//
// Note that the validator set can change, so this is an optimistic check.
func (k *Keeper) isNextProposer(ctx context.Context, currentProposer []byte, currentHeight int64) (bool, error) {
	// PostFinalize can be called during block replay (performed in newCometNode),
	// but cmtAPI is set only after newCometNode completes (see app.SetCometAPI), so a nil check is necessary.
	if k.cmtAPI == nil {
		return false, nil
	}

	valset, ok, err := k.cmtAPI.Validators(ctx, currentHeight)
	if err != nil {
		return false, err
	} else if !ok || len(valset.Validators) == 0 {
		return false, errors.New("validators not available")
	}

	idx, _ := valset.GetByAddress(currentProposer)
	if idx < 0 {
		return false, errors.New("proposer not in validator set")
	}

	nextIdx := int(idx+1) % len(valset.Validators)
	nextProposer := valset.Validators[nextIdx]
	nextAddr, err := k1util.PubKeyToAddress(nextProposer.PubKey)
	if err != nil {
		return false, err
	}

	isNextProposer := nextAddr == k.validatorAddr

	return isNextProposer, nil
}

The code uses a modulo operation (nextIdx := int(idx+1) % len(valset.Validators)) to select the next proposer in a round-robin fashion. However, CometBFT uses a weighted round-robin algorithm that takes into account the voting power of each validator when selecting proposers. This oversight means that validators with higher voting power will not be appropriately prioritized, resulting in an incorrect proposer selection.

BVSS
Recommendation

Modify the proposer selection logic to account for validator voting power, in line with CometBFT's weighted round-robin algorithm as explained in CometBFT's documentation.
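For illustration, a minimal sketch of CometBFT's weighted round-robin: each round, every validator's priority grows by its voting power, the highest-priority validator proposes, and that validator's priority is reduced by the total power. The centering and scaling steps of the real algorithm are omitted here, and the type is a stand-in:

```go
package main

import "fmt"

// validator is a minimal stand-in; CometBFT tracks ProposerPriority
// alongside VotingPower on each validator.
type validator struct {
	addr     string
	power    int64
	priority int64
}

// nextProposer advances one round of the weighted round-robin and
// returns the proposer for that round.
func nextProposer(vals []*validator) *validator {
	var total int64
	for _, v := range vals {
		total += v.power
		v.priority += v.power
	}
	proposer := vals[0]
	for _, v := range vals[1:] {
		if v.priority > proposer.priority {
			proposer = v
		}
	}
	proposer.priority -= total
	return proposer
}

func main() {
	vals := []*validator{
		{addr: "A", power: 3},
		{addr: "B", power: 1},
	}
	// Over 4 rounds, A (75% of the power) proposes 3 times and B once,
	// unlike a plain round-robin which would alternate A, B, A, B.
	for i := 0; i < 4; i++ {
		fmt.Print(nextProposer(vals).addr)
	}
	fmt.Println() // prints "AABA"
}
```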

Remediation

SOLVED: The Story team fixed this issue by using the exposed validators functions to check if the local node is the next (predicted) proposer.

Remediation Hash
References

7.4 Potential Non-Determinism Issue In FinalizeBlock

// Medium

Description

There is a possible non-determinism issue in FinalizeBlock. That function must be deterministic per the CometBFT specs. However, there is a call path postFinalize -> isNextProposer -> Validators -> cmtAPI that can result in a non-deterministic scenario:


https://github.com/piplabs/story/blob/e6d2d51550c3eff3f561f8d1b860888ea2bf8060/client/app/abci.go#L114

// FinalizeBlock calls BeginBlock -> DeliverTx (for all txs) -> EndBlock.
func (l abciWrapper) FinalizeBlock(ctx context.Context, req *abci.RequestFinalizeBlock) (*abci.ResponseFinalizeBlock, error) {

	...

	if err := l.postFinalize(sdkCtx); err != nil {
		log.Error(ctx, "PostFinalize callback failed [BUG]", err, "height", req.Height)
		return resp, err
	}

        ...

https://github.com/piplabs/story/blob/e6d2d51550c3eff3f561f8d1b860888ea2bf8060/client/x/evmengine/keeper/abci.go#L189

// PostFinalize is called by our custom ABCI wrapper after a block is finalized.
// It starts an optimistic build if enabled and if we are the next proposer.
//
// This custom ABCI callback is used since we need to trigger optimistic builds
// immediately after FinalizeBlock with the latest app hash
// which isn't available from cosmosSDK otherwise.
func (k *Keeper) PostFinalize(ctx sdk.Context) error {

	...

	// Maybe start building the next block if we are the next proposer.
	isNext, err := k.isNextProposer(ctx, proposer, height)
	if err != nil {
		return errors.Wrap(err, "next proposer")
	}

        ... 

https://github.com/piplabs/story/blob/e6d2d51550c3eff3f561f8d1b860888ea2bf8060/client/x/evmengine/keeper/keeper.go#L191

// isNextProposer returns true if the local node is the proposer
// for the next block. It also returns the next block height.
//
// Note that the validator set can change, so this is an optimistic check.
func (k *Keeper) isNextProposer(ctx context.Context, currentProposer []byte, currentHeight int64) (bool, error) {

	...

	valset, ok, err := k.cmtAPI.Validators(ctx, currentHeight)
	if err != nil {
		return false, err
	}

        ...

This can lead to non-determinism because the call to the CometBFT API is performed over RPC, a network connection that is by definition non-deterministic across nodes (some calls may fail due to timeouts or network errors while others succeed). Under the same input, some nodes would therefore proceed while others would crash, since the errors are not handled in the FinalizeBlock context but bubbled up to the caller, leading to a chain split and broken consensus.

BVSS
Recommendation

Ignore any error returned from PostFinalize in FinalizeBlock.
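A self-contained sketch of the recommended handling, with stand-in functions replacing the real keeper callback and ABCI wrapper:

```go
package main

import (
	"errors"
	"fmt"
)

// postFinalize stands in for the keeper callback; here it fails the way a
// flaky CometBFT RPC might on some nodes but not others.
func postFinalize() error { return errors.New("rpc timeout") }

// finalizeBlock demonstrates the recommended handling: the callback error
// is logged and dropped, so every node returns the same result regardless
// of local networking conditions. Skipping one optimistic build only costs
// performance, never consensus.
func finalizeBlock() error {
	if err := postFinalize(); err != nil {
		fmt.Println("PostFinalize failed, skipping optimistic build:", err)
	}
	return nil // deterministic: the RPC error is never bubbled up
}

func main() {
	fmt.Println("FinalizeBlock error:", finalizeBlock())
}
```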

Remediation

SOLVED: The Story team fixed this issue by skipping optimistic builds if any non-determinism error was encountered.

Remediation Hash

7.5 Lack of pubkey integrity checks

// Medium

Description

The function UncmpPubKeyToCmpPubKey is responsible for converting uncompressed public keys into their compressed format. While it correctly handles the conversion by extracting the x and y coordinates and determining the prefix based on the parity of y, it does not validate whether the provided public key lies on the elliptic curve. This omission can lead to unexpected errors or security vulnerabilities when other modules, such as evmstaking, attempt to use these keys.


https://github.com/piplabs/story/blob/e6d2d51550c3eff3f561f8d1b860888ea2bf8060/client/x/evmstaking/keeper/keys.go#L14-L35

func UncmpPubKeyToCmpPubKey(uncmpPubKey []byte) ([]byte, error) {
	if len(uncmpPubKey) != 65 || uncmpPubKey[0] != 0x04 {
		return nil, errors.New("invalid uncompressed public key length or format")
	}

	// Extract the x and y coordinates
	x := new(big.Int).SetBytes(uncmpPubKey[1:33])
	y := new(big.Int).SetBytes(uncmpPubKey[33:])

	// Determine the prefix based on the parity of y
	prefix := byte(0x02) // Even y
	if y.Bit(0) == 1 {
		prefix = 0x03 // Odd y
	}

	// Construct the compressed public key
	compressedPubKey := make([]byte, 33)
	compressedPubKey[0] = prefix
	copy(compressedPubKey[1:], x.Bytes())

	return compressedPubKey, nil
}

As can be seen above, the function does not verify whether the x and y coordinates of the uncompressed public key satisfy the elliptic curve equation. Public keys that do not lie on the curve are invalid and can cause cryptographic operations to fail unexpectedly in downstream modules. Since this function is used by other modules, any invalid key processed here could lead to cascading failures or unexpected behavior in those modules.

BVSS
Recommendation

It is recommended to validate that the provided points lie on the curve in order to discard invalid keys.

Remediation

SOLVED: The Story team solved this issue as recommended.

Remediation Hash

7.6 Floating Pragma

// Informational

Description

Smart contracts in the scoped folder use the floating pragma >=0.8.23. Contracts should be deployed with the same compiler version and flags with which they have been thoroughly tested. Locking the pragma helps ensure that contracts are not accidentally deployed with, for example, an outdated compiler version that might introduce bugs affecting the contract system, or a too-recent pragma version that has not been extensively tested.

Score
Recommendation

Consider locking the pragma to a specific compiler version with no known bugs by removing the >= symbol. When possible, do not use a floating pragma in the final live deployment. Specifying a fixed compiler version ensures that the bytecode produced does not vary between builds. This is especially important if you rely on bytecode-level verification of the code.

Remediation

SOLVED: The Story team fixed this issue by setting 0.8.23 as the fixed pragma version.

Remediation Hash

7.7 Use Of Custom Errors

// Informational

Description

Failed operations in several contracts are reverted with an accompanying message selected from a set of hard-coded strings.


In the EVM, reverting with a hard-coded string costs roughly 50 more gas than reverting with a custom error. Additionally, hard-coded strings increase the gas required to deploy the contract.

Score
Recommendation

It's recommended to use custom errors instead of hardcoded strings.

Remediation

ACKNOWLEDGED: The Story team acknowledged this issue by stating the following:

Acknowledged, deployment costs are not an issue due to those contracts being predeploys. We will consider using custom errors for mainnet.

7.8 Contracts Pause Feature Missing

// Informational

Description

It was identified that no high-privileged user can pause any of the scoped contracts. In the event of a security incident, the owner would not be able to stop any plausible malicious actions. Pausing the contract would also allow time for more considered decisions.

Score
Recommendation

It is recommended to include a pause feature in the contracts.

Remediation

ACKNOWLEDGED: The Story team acknowledged this issue.

Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.

© Halborn 2024. All rights reserved.