Prepared by:
HALBORN
Last Updated 08/20/2024
Date of Engagement: June 17th, 2024 - August 13th, 2024
96% of all REPORTED Findings have been addressed
All findings: 27
Critical: 2
High: 5
Medium: 8
Low: 8
Informational: 4
Lombard engaged Halborn to conduct a security assessment on their app chain module beginning on 2024-06-17 and ending on 2024-08-13. The security assessment was scoped to the consortium-node Golang application provided to the Halborn team.
The team at Halborn was provided eight weeks for the engagement and assigned one full-time security engineer to assess the security of the merge requests. The assigned security engineers are blockchain and smart-contract security experts with advanced penetration testing and smart-contract hacking skills, and deep knowledge of multiple blockchain protocols.
The purpose of this assessment is to:
Ensure that the Golang components operate as intended.
Identify potential security issues with the Golang application.
In summary, Halborn identified several relevant issues, most of which were addressed by the Lombard team. Most of the issues that were not fixed are expected to be fixed in future releases.
Halborn performed a combination of manual and automated security testing to balance efficiency, timeliness, practicality, and accuracy in regard to the scope of the custom modules. While manual testing is recommended to uncover flaws in logic, process, and implementation; automated testing techniques help enhance coverage of structures and can quickly identify items that do not follow security best practices. The following phases and associated tools were used throughout the term of the assessment:
Research into architecture and purpose.
Static Analysis of security for scoped repository and imported functions (e.g., staticcheck, gosec, unconvert, codeql, ineffassign and semgrep).
Manual Assessment for discovering security vulnerabilities on the codebase.
Ensuring the correctness of the codebase.
Dynamic Analysis of files and modules in scope.
EXPLOITABILITY METRIC () | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Attack Origin (AO) | Arbitrary (AO:A) Specific (AO:S) | 1 0.2 |
Attack Cost (AC) | Low (AC:L) Medium (AC:M) High (AC:H) | 1 0.67 0.33 |
Attack Complexity (AX) | Low (AX:L) Medium (AX:M) High (AX:H) | 1 0.67 0.33 |
IMPACT METRIC () | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Confidentiality (C) | None (C:N) Low (C:L) Medium (C:M) High (C:H) Critical (C:C) | 0 0.25 0.5 0.75 1 |
Integrity (I) | None (I:N) Low (I:L) Medium (I:M) High (I:H) Critical (I:C) | 0 0.25 0.5 0.75 1 |
Availability (A) | None (A:N) Low (A:L) Medium (A:M) High (A:H) Critical (A:C) | 0 0.25 0.5 0.75 1 |
Deposit (D) | None (D:N) Low (D:L) Medium (D:M) High (D:H) Critical (D:C) | 0 0.25 0.5 0.75 1 |
Yield (Y) | None (Y:N) Low (Y:L) Medium (Y:M) High (Y:H) Critical (Y:C) | 0 0.25 0.5 0.75 1 |
SEVERITY COEFFICIENT () | COEFFICIENT VALUE | NUMERICAL VALUE |
---|---|---|
Reversibility () | None (R:N) Partial (R:P) Full (R:F) | 1 0.5 0.25 |
Scope () | Changed (S:C) Unchanged (S:U) | 1.25 1 |
Severity | Score Value Range |
---|---|
Critical | 9 - 10 |
High | 7 - 8.9 |
Medium | 4.5 - 6.9 |
Low | 2 - 4.4 |
Informational | 0 - 1.9 |
Critical: 2
High: 5
Medium: 8
Low: 8
Informational: 4
Security analysis | Risk level | Remediation Date |
---|---|---|
Node Allows To Notarize Reverted Transactions | Critical | Solved - 08/14/2024 |
Nodes Can Panic While Handling Errors | Critical | Solved - 08/14/2024 |
Incorrectly Formed Messages Can Disrupt Consensus Reply Processing | High | Risk Accepted |
Node Can Be Flooded With Proposal Requests | High | Future Release |
Lack Of Slashing/Penalty Mechanism | High | Risk Accepted |
Parallel Operations Over Proposals Lead To Race Conditions | High | Solved - 08/14/2024 |
Reply Can Get Stuck In Consensus | High | Solved - 08/14/2024 |
Malleability In Message’s Signature Check | Medium | Solved - 08/14/2024 |
Consensus Routines Do Not Check Threshold Key Role | Medium | Solved - 08/14/2024 |
TSS Does Not Validate Players Uniqueness | Medium | Solved - 08/14/2024 |
HandleReply Lacks Of Player Authentication Checks | Medium | Solved - 08/14/2024 |
Consensus Does Not Check Current Proposal | Medium | Solved - 08/14/2024 |
Empty Proposals Bypass Proposal Validations | Medium | Solved - 08/14/2024 |
Multiple Missing Field Validations | Medium | Solved - 08/14/2024 |
Possible Key Collisions In Packed Data | Medium | Solved - 08/14/2024 |
Some Chain IDs Are Not Initialized | Low | Solved - 08/14/2024 |
Infinite Loop Should Timeout | Low | Solved - 08/15/2024 |
Lack Of Validations For Generated TSS Key | Low | Solved - 08/15/2024 |
Incomplete DragonBoat Integration | Low | Solved - 08/15/2024 |
Lack Of Validations For Incoming Requests | Low | Solved - 08/15/2024 |
Node Manager Can Be Initialized Multiple Times | Low | Solved - 08/15/2024 |
Lack Of Validations In Submitted Events | Low | Solved - 08/15/2024 |
Inaccurate Bitcoin Fee Calculation | Low | Solved - 08/15/2024 |
Unused Arguments In Functions | Informational | Solved - 08/15/2024 |
Open TO-DOs | Informational | Partially Solved - 08/15/2024 |
Use Class Method Instead Of Native Comparers | Informational | Solved - 08/15/2024 |
Input Amount Must Match With Input UTXOs | Informational | Solved - 08/15/2024 |
// Critical
During the notarizeTransactionController class execution, several checks are run in the preHandle function to validate whether a transaction is correct, covering situations such as whether the transaction has been mined, whether it has enough confirmations to guarantee it will not be removed from the chain, and whether the transaction resulted in a revert.
Most of these checks are implemented to make sure the transaction requested to be notarized was mined and properly stored in the chain.
However, although the function in charge of running all these verifications, named VerifyTransaction, queries the status of the transaction’s receipt to check whether the transaction reverted and sets the NotarizedTransaction object accordingly, no further action is taken to prevent reverted transactions from being processed. Therefore, any user could make the consensus process a reverted transaction, even though this transaction will have no impact on the targeted chain.
tk = c.databaseService.GetThresholdKeyOrNil(uuid.MustParse(req2.ThresholdKey))
if tk == nil {
err = fmt.Errorf("there is no threshold key with id (%s)", req2.ThresholdKey)
return
}
tx, err = c.blockchainService.VerifyTransaction(context.TODO(), req2.ChainId, req2.TransactionHash)
if err != nil {
return
}
func (s *Service) VerifyTransaction(ctx context.Context, chainId uint64, hash []byte) (tx *types.NotarizedTransaction, err error) {
var nm nodeManager
if nm, err = s.nodeManager(chainId); err != nil {
return nil, err
}
tx, err = nm.verifyTransaction(ctx, hash)
if err != nil {
return nil, errors.Wrapf(err, "failed to verify transaction for blockchain (%d)", chainId)
}
return tx, nil
}
var status types.NotarizedTransactionStatus
if receipt.Status == types2.ReceiptStatusSuccessful {
status = types.NotarizedTransactionStatus_NOTARIZED_TRANSACTION_STATUS_CONFIRMED
} else {
status = types.NotarizedTransactionStatus_NOTARIZED_TRANSACTION_STATUS_REVERTED
}
// ...
// if everything is fine return tx object
result := &types.NotarizedTransaction{
Status: status,
Blockchain: nm.config.Blockchain,
TransactionHash: hash,
BlockNumber: receipt.BlockNumber.Uint64(),
BlockHash: receipt.BlockHash.Bytes(),
TransactionIndex: uint64(receipt.TransactionIndex),
ReceiptHash: crypto.Keccak256(rawReceiptForStorage),
TransferredAmount: tx.Value().String(),
ChainId: nm.config.ChainID.Uint64(),
}
It is recommended to filter out any NotarizedTransaction object that has been set with the NotarizedTransactionStatus_NOTARIZED_TRANSACTION_STATUS_REVERTED status, and panic or return an error in that case to prevent these kinds of transactions from being processed.
SOLVED: The Lombard team solved this issue by adding the following check:
if receipt.Status != ethtypes.ReceiptStatusSuccessful {
return nil, ErrReverted
}
// Critical
The suggestNewPendingWork function is responsible for proposing new work for the consensus process by fetching pending proposals from the database, unmarshalling them, and then proposing them for further processing. However, the current implementation lacks robust error handling, leading to a critical vulnerability. Specifically, if an attacker were to introduce a malicious proposal into the system, one that causes an error during the unmarshalling process (req.Unmarshal(p.Message)) or during the proposal submission (s.Propose(req)), the function would trigger a log.Panicf. This panic would cause the service to crash, halting the consensus process across the nodes.
func (s *Service) suggestNewPendingWork() {
_ = s.WaitForLeader(context.TODO())
if s.proposal != nil {
return
}
proposals := s.databaseService.GetProposalsByStatusOrderByIndex(types.ProposalStatus_PROPOSAL_STATUS_QUEUED, 0, 1)
if len(proposals) == 0 {
return
}
p := proposals[0]
req := &types.RequestProposal{}
if err := req.Unmarshal(p.Message); err != nil {
log.Panicf("failed to parse proposal message: %+v", err)
}
// make this job forcibly pending to let other workers know that we're working on this job
req.Queued = true
err := s.Propose(req)
if err != nil {
log.Panicf("failed to propose message: %+v", err)
}
}
Such a crash could be exploited to disrupt the consensus mechanism, leading to a denial-of-service (DoS) attack on the system.
To mitigate this vulnerability, replace the log.Panicf calls with proper error handling mechanisms that safely log the error and allow the service to continue operating. Specifically, instead of panicking, the function should log the error, skip the malicious proposal, and continue processing other proposals. Additionally, consider implementing validation and sanitization checks on incoming proposals to detect and reject malicious or malformed proposals before they reach critical points in the code.
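As a minimal sketch of that recommendation (the `proposal` type and `unmarshalAndPropose` helper below are hypothetical stand-ins for the real `req.Unmarshal`/`s.Propose` calls), a failed proposal is logged and skipped instead of panicking, so one malformed record cannot halt the loop:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// proposal is a minimal stand-in for the queued proposal record.
type proposal struct {
	ID      string
	Message []byte
}

// unmarshalAndPropose is a hypothetical stand-in for req.Unmarshal + s.Propose.
func unmarshalAndPropose(p proposal) error {
	if len(p.Message) == 0 {
		return errors.New("empty proposal message")
	}
	return nil
}

// suggestNewPendingWork sketch: log and skip a bad proposal instead of
// panicking, so a single malformed record cannot crash the service.
func suggestNewPendingWork(queued []proposal) (processed int) {
	for _, p := range queued {
		if err := unmarshalAndPropose(p); err != nil {
			// Previously: log.Panicf(...) — now record the failure and move on.
			log.Printf("skipping malformed proposal %s: %v", p.ID, err)
			continue
		}
		processed++
	}
	return processed
}

func main() {
	queued := []proposal{
		{ID: "a", Message: []byte("ok")},
		{ID: "b", Message: nil}, // malformed: would have panicked before
		{ID: "c", Message: []byte("ok")},
	}
	fmt.Println(suggestNewPendingWork(queued)) // prints 2
}
```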
SOLVED: The Lombard team solved this issue by following the aforementioned recommendation and removing panics.
// High
The handleJobReply function in the Service struct processes reply messages in the consensus mechanism by invoking the appropriate handler for the proposal type. If an error occurs in the HandleReply function of the handler, the current context is cleared by calling h.Clear(). While this might seem like a logical way to reset the state, it introduces a vulnerability: an attacker could craft a malicious message that triggers an error in HandleReply, causing the node to clear its current task context prematurely. This could disrupt the consensus process, as the node would lose its ability to process further legitimate replies, effectively stalling its participation in the ongoing consensus.
func (s *Service) handleJobReply(proposal *types.Proposal, reply *types.RequestProposal) error {
h := s.controllers[proposal.Type]
if h == nil {
return fmt.Errorf("there is no handler for proposal type (%s)", proposal.Type.String())
}
s.logger.Infof("handle reply (%s)", proposal.Id)
result, err := h.HandleReply(reply)
if err != nil {
_ = h.Clear()
return err
}
if result != nil {
return s.propose(result)
}
return nil
}
For example, in the case of the signEarlyUnbondController's HandleReply function, several errors could be triggered by malicious messages, such as invalid signatures or malformed transaction data. This would result in the context being cleared, leaving the node unable to continue processing the consensus task.
func (c *signEarlyUnbondController) HandleReply(reply *types.RequestProposal) (*types.RequestProposal, error) {
if c.signingData == nil {
log.Tracef("player (%s) state is clean", c.consensusFeed.HostID())
return nil, nil
}
reply2 := reply.GetSignEarlyUnbondReply()
if reply2 == nil {
return nil, errors.Errorf("unable to handle incompatible reply message")
} else if err := validation.Validate(reply2); err != nil {
return nil, err
}
// ...
There are two possible approaches to address the issue:
Identify recoverable and non-recoverable errors: Evaluate which errors represent a non-recoverable state in the consensus, requiring the process to be restarted, and which do not pose a significant risk, allowing the current task to continue processing. This approach involves distinguishing between errors that should clear the consensus state and those that should not, though it may be less scalable and effective.
Reject external messages and penalize malicious behavior: Identify and validate consensus participants, rejecting messages from external agents. Additionally, implement a penalty mechanism to report and sanction malicious behavior among nodes. While more complex, this approach is more effective and decentralized for managing issues and misbehavior within the consensus process.
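The first approach could be sketched as follows. The sentinel errors and the `shouldClearContext` helper are hypothetical names, not part of the audited codebase; the point is that only errors classified as non-recoverable should reset the node's consensus context:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical sentinel errors for the two classes discussed above.
var (
	ErrMalformedReply = errors.New("malformed reply")      // recoverable: ignore sender
	ErrStateCorrupted = errors.New("consensus state lost") // non-recoverable: restart task
)

// shouldClearContext classifies an error returned by HandleReply:
// only non-recoverable errors should trigger h.Clear().
func shouldClearContext(err error) bool {
	if err == nil {
		return false
	}
	// Recoverable errors (e.g. a spam or malformed reply) keep the context alive.
	if errors.Is(err, ErrMalformedReply) {
		return false
	}
	return true
}

func main() {
	fmt.Println(shouldClearContext(fmt.Errorf("reply: %w", ErrMalformedReply))) // false
	fmt.Println(shouldClearContext(ErrStateCorrupted))                          // true
}
```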
RISK ACCEPTED: The Lombard team accepted the risk of this issue and states:
All errors in the application-specific handler are treated as non-recoverable. Errors that are typically considered recoverable, such as those caused by validators sending spam or malformed requests, are not a concern in this context. This is because the consortium operates within a permissioned network where only vetted validators are allowed. Additionally, all requests are routed through the Lombard backend API, which enforces rigorous validation. As a result, such erroneous requests are not expected to occur.
// High
When a new request is received through any of the available gateways, it is processed and verified by running several checks over the request depending on its type. After all validations and checks pass successfully, the request is proposed to the consensus so that it can be handled by all the players involved in the threshold key. This proposal is made by an initial node, which executes the ProposeAndWait function.
This function is in charge of generating a new random message ID for the proposal and calling the DragonBoat API to add a new proposal to the process. Since this is an asynchronous process, the proposer has to wait until the proposal has been enqueued in the database, checking for it every 100 milliseconds; if the proposal has not been included in the database within 5 minutes, the function returns an error.
Since this function is open to be executed through any gateway and it blocks its thread's execution until the request is enqueued or the 5-minute timeout expires, its implementation is highly inefficient and could even lead to a denial of service if the node is flooded with multiple proposal requests.
func (s *Service) ProposeAndWait(ctx context.Context, message *types.RequestProposal) (*types.Proposal, error) {
ctx, cancel := context.WithTimeout(context.TODO(), 5*time.Minute)
defer cancel()
if message.Id == "" {
message.Id = uuid.New().String()
}
err := s.propose(message)
if err != nil {
return nil, err
}
id, _ := uuid.Parse(message.Id)
timer := time.Tick(100 * time.Millisecond)
for {
select {
case <-timer:
case <-ctx.Done():
return nil, ctx.Err()
}
p := s.databaseService.GetProposalOrNil(id)
if p == nil {
continue
}
if p.Status != types.ProposalStatus_PROPOSAL_STATUS_QUEUED {
return p, nil
}
}
}
It is recommended to rewrite the logic of this function to keep it asynchronous, or to adjust the timeout, to prevent denial-of-service attacks.
PENDING: The Lombard team will fix this issue in future releases and states:
The consortium API supports both asynchronous and synchronous workflows. The asynchronous flow is the preferred method and is currently implemented across several API handlers, with plans to extend its use to all handlers in future updates. The synchronous flow, which utilizes the ProposeAndWait method, is reserved exclusively for API calls with deterministic logic. In cases where a request has already been processed, the synchronous approach returns the result of the previous execution to maintain consistency.
While the synchronous ProposeAndWait approach remains in place for backward compatibility, it is now considered deprecated and will be phased out in future releases. Transitioning away from this method is necessary to standardize on the more efficient and scalable asynchronous workflow.
// High
In the current implementation of a Threshold Signature Scheme (TSS) that communicates through a consensus mechanism (Dragonboat), there is a notable absence of slashing or penalty mechanisms for nodes that act maliciously within the network. This gap is particularly concerning given the vulnerabilities identified in previous issues, such as "Malicious Messages Can Clear Consensus State". In that issue, it was highlighted that malicious nodes could exploit the error handling in consensus processes to force nodes to clear their current consensus state, effectively causing a denial-of-service (DoS) attack on the network.
The TSS relies on a subset of nodes (signers/players) to collaboratively generate signatures on transactions or other data, making it critical that all participating nodes adhere strictly to the protocol. However, without a mechanism to penalize nodes for behaviors that deviate from the expected protocol (such as sending incorrect or malicious data that leads to consensus failure), there is little deterrent against such actions. This lack of penalties could lead to scenarios where malicious or faulty nodes disrupt the TSS process, causing delays, incorrect signing, or complete failure of critical operations.
It is recommended to implement a penalty and slashing mechanism based on collective node reporting in a decentralized manner. Nodes that receive erroneous or incorrect messages from peers participating in the consensus process should have the capability to report such behaviors to the broader network. If a consensus majority confirms the reported misbehavior, the mechanism should then enact penalties such as temporary suspension or permanent expulsion of the offending node from the consensus process. This system of checks and balances will ensure that all participating nodes adhere strictly to protocol rules, thereby enhancing the integrity and reliability of the entire network. Bear in mind that, the implementation of this mechanism would require careful design to ensure that it cannot be misused for competitive or malicious purposes against honest nodes.
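The majority-reporting part of that recommendation could look like the following sketch (entirely hypothetical types and names, shown only to make the mechanism concrete): each peer's accusation is recorded once, and the accused node is suspended when a strict majority of the player set agrees.

```go
package main

import "fmt"

// reportTally is a hypothetical sketch of the recommended collective
// reporting scheme: peers report a misbehaving node, and once a strict
// majority of the player set agrees, the node is suspended.
type reportTally struct {
	players   int
	reporters map[string]map[string]bool // accused -> set of reporters
}

func newReportTally(players int) *reportTally {
	return &reportTally{players: players, reporters: map[string]map[string]bool{}}
}

// Report records one peer's accusation and returns true once a strict
// majority of players has reported the same node.
func (t *reportTally) Report(accused, reporter string) (suspended bool) {
	if t.reporters[accused] == nil {
		t.reporters[accused] = map[string]bool{}
	}
	t.reporters[accused][reporter] = true // duplicate reports count once
	return len(t.reporters[accused]) > t.players/2
}

func main() {
	t := newReportTally(4) // majority is 3 of 4
	fmt.Println(t.Report("nodeX", "a")) // false
	fmt.Println(t.Report("nodeX", "a")) // false (duplicate)
	fmt.Println(t.Report("nodeX", "b")) // false
	fmt.Println(t.Report("nodeX", "c")) // true
}
```

As the report notes, a production design would also need safeguards so this cannot be weaponized against honest nodes, e.g. by making false accusations themselves reportable.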
RISK ACCEPTED: The Lombard team accepted the risk of this issue with the following statement:
While we currently do not have a technical slashing mechanism in place, our network operates within a permissioned environment with carefully vetted validators. Any malicious activity would be addressed through the enforcement of our contractual agreements, ensuring the immediate removal of any offending validators. This contractual framework provides a strong layer of security, maintaining the integrity and trustworthiness of the consortium.
// High
The functions suggestNewPendingWork and handleJobRequest within the Service class are integral to handling proposals within a consensus-based system. Both functions operate concurrently and access shared resources, such as s.proposal, which introduces a high risk of race conditions. These race conditions can occur because both functions check, modify, or set the state of s.proposal without any form of synchronization or locking mechanism. In suggestNewPendingWork, if a proposal is not currently set, the function fetches a queued proposal, unmarshals it, and proposes it to the network.
func (s *Service) suggestNewPendingWork() {
_ = s.WaitForLeader(context.TODO())
if s.proposal != nil {
return
}
proposals := s.databaseService.GetProposalsByStatusOrderByIndex(types.ProposalStatus_PROPOSAL_STATUS_QUEUED, 0, 1)
if len(proposals) == 0 {
return
}
p := proposals[0]
req := &types.RequestProposal{}
if err := req.Unmarshal(p.Message); err != nil {
log.Panicf("failed to parse proposal message: %+v", err)
}
// make this job forcibly pending to let other workers know that we're working on this job
req.Queued = true
err := s.Propose(req)
if err != nil {
log.Panicf("failed to propose message: %+v", err)
}
}
However, during this process, handleJobRequest might simultaneously check or modify s.proposal based on another incoming request. This lack of coordinated access control can lead to inconsistent system states or errors, such as duplicate proposal processing or incorrect proposal handling.
func (s *Service) handleJobRequest(proposal *types.Proposal, req *types.RequestProposal) error {
// don't execute same job twice
if s.proposal != nil {
if s.proposal.Id == proposal.Id {
return nil
}
log.Warnf("there is already pending job (%s), next one (%s) can't be proceed", s.proposal.Id, proposal.Id)
return nil
}
// check proposal status
{
p := s.databaseService.GetProposalOrNil(uuid.MustParse(proposal.Id))
if p != nil && p.Status != types.ProposalStatus_PROPOSAL_STATUS_QUEUED {
log.Warnf("proposal (%s) is already finished (%s), skipping request", p.Id, p.Status)
return nil
}
}
if err := s.databaseService.SaveProposal(proposal); err != nil {
return err
}
h := s.controllers[proposal.Type]
if h == nil {
return fmt.Errorf("there is no handler for proposal type (%s)", proposal.Type.String())
}
result, err := h.HandleRequest(req)
if err != nil {
_ = h.Clear()
return err
}
s.proposal = proposal
log.Infof("handled proposal (%s) start", proposal.Id)
if result != nil {
return s.propose(result)
}
return nil
}
Furthermore, the error handling in suggestNewPendingWork uses log.Panicf, which can crash the service when an error occurs during proposal unmarshalling or submission, exacerbating the impact of these race conditions.
To mitigate these risks and ensure robust operation, implementing a proper synchronization mechanism is essential. Use mutexes or other concurrency control structures to manage access to shared resources, such as s.proposal.
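A minimal sketch of that synchronization, under assumptions (the trimmed `service` type and `trySetProposal` helper are illustrative, not the project's code): a mutex serializes the check-then-set on the shared proposal pointer so exactly one caller can claim the pending-job slot.

```go
package main

import (
	"fmt"
	"sync"
)

// service sketch: a mutex serializes every read-modify-write of the
// shared proposal pointer, closing the race between the worker that
// suggests new work and the handler that accepts incoming requests.
type service struct {
	mu       sync.Mutex
	proposal *string
}

// trySetProposal atomically claims the slot if it is free; both
// suggestNewPendingWork and handleJobRequest would go through it.
func (s *service) trySetProposal(id string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.proposal != nil {
		return false // another pending job already holds the slot
	}
	s.proposal = &id
	return true
}

func main() {
	var s service
	var wg sync.WaitGroup
	won := make(chan int, 100)
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			if s.trySetProposal(fmt.Sprintf("job-%d", i)) {
				won <- i
			}
		}(i)
	}
	wg.Wait()
	close(won)
	count := 0
	for range won {
		count++
	}
	fmt.Println(count) // exactly one goroutine claims the slot: 1
}
```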
SOLVED: The Lombard team solved this issue by implementing locks/mutexes.
// High
The HandleReply functions in both the signEarlyUnbondController and signTimelockWithdrawController handle incoming reply messages related to the early unbonding and timelock withdrawal processes, respectively. In these functions, there are several scenarios where the createReply method is called to generate a new reply message, even when an error or an incompatible situation occurs. This approach can lead to an issue where the consensus mechanism might enter a recursive loop, repeatedly sending reply messages. This is problematic because these functions are intended to handle a reply, not generate a new one, and returning a new reply could result in potential race conditions and incorrect consensus behavior.
mfa, err2 := c.cubistService.GetMfa(c.earlyUnbondingParams.MfaId)
if err2 != nil {
// NOTE: could be race when mfa already aproved by another node
if strings.Contains(err2.Error(), "not found") {
return c.createReply(reply, nil, nil), nil
}
return nil, errors.Wrapf(err2, "get mfa %s", c.earlyUnbondingParams.GetMfaId())
}
// if mfa approved just wait for signature in reply, because if we go to approve it in this node we can get race condition
if len(mfa.GetStatus().GetApprovedBy()) < int(mfa.GetStatus().GetCount()) {
if !cubistcommon.IsAllowedToApprove(mfa, c.signingData.userId) {
return nil, errors.Errorf("user (%s) is not allowed to approve transactions", c.signingData.userId)
}
// NOTE: it's can't happen, because if tx already processed we found it at beginning
if cubistcommon.IsApprovedByUser(mfa, c.signingData.userId) {
return c.createReply(reply, nil, nil), nil
}
// NOTE: it's can't happen, because if tx already processed we found it at beginning
if cubistcommon.IsApprovedByUser(mfa, c.signingData.userId) {
return c.createReply(reply, rawTx), nil
}
// NOTE: it's can't happen, because if tx already processed we found it at beginning
if cubistcommon.IsApprovedByUser(mfa, c.signingData.userId) {
return c.createReply(reply, rawTx), nil
}
Modify the HandleReply functions to return nil, nil instead of calling createReply in situations where an error occurs or when the conditions are not met for processing the reply message.
SOLVED: The Lombard team solved this issue by returning nil, nil as it was recommended.
// Medium
During the generateDepositAddressController class execution, several checks are executed in order to guarantee that the request is safe to be processed. One of those checks is run in the verifyDestinationSignature function, which is in charge of validating several parameters related to the input signature.
chainId, ok := c.blockchainService.ChainIds[req2.GetToChain()]
if !ok {
err = errors.Errorf("unsupported chain (%s)", chainId)
}
if err = verifyDestinationSignature(req2, chainId); err != nil {
return
}
Apart from checking the destination chain against the specified chainId, it also tries to retrieve the public key associated with the signature used in the request by using the blockchain.VerifyMessage() function.
func verifyDestinationSignature(req *types.GenerateDepositAddressRequest, chainID *big.Int) error {
switch req.GetToChain() {
case types.Blockchain_BLOCKCHAIN_ETHEREUM, types.Blockchain_BLOCKCHAIN_MANTA, types.Blockchain_BLOCKCHAIN_MANTLE, types.Blockchain_BLOCKCHAIN_XLAYER, types.Blockchain_BLOCKCHAIN_SCROLL:
// verify eth
thresholdKey := uuid.MustParse(req.GetThresholdKey())
address, err := blockchain.VerifyMessage(getTemplatedMessage(chainID, thresholdKey), req.GetToAddressSignature())
if err != nil {
return errors.Wrap(err, "verify message")
}
The VerifyMessage function checks some parameters of the signed message, such as the length and the EIP-191 format. After all those checks pass, the node tries to retrieve the public key from the signature by executing the crypto.SigToPub function.
func VerifyMessage(message string, signedMessage []byte) ([]byte, error) {
if len(signedMessage) < 64 {
return nil, errors.Errorf("wrong signature length (%d < 64)", len(signedMessage))
}
// Hash the unsigned message using EIP-191
hashedMessage := []byte("\x19Ethereum Signed Message:\n" + strconv.Itoa(len(message)) + message)
hash := crypto.Keccak256Hash(hashedMessage)
// Handles cases where EIP-155 is not implemented (most wallets don't implement it)
if signedMessage[64] == 27 || signedMessage[64] == 28 {
signedMessage[64] -= 27
}
// Recover a public key from the signed message
sigPublicKeyECDSA, err := crypto.SigToPub(hash.Bytes(), signedMessage)
if sigPublicKeyECDSA == nil {
err = errors.New("Could not get a public key from the message signature")
}
if err != nil {
return nil, err
}
return crypto.PubkeyToAddress(*sigPublicKeyECDSA).Bytes(), nil
}
The crypto.SigToPub function relies on Ecrecover, which is well known for accepting malleable signatures. This issue could lead to the reuse of signatures in the protocol.
func SigToPub(hash, sig []byte) (*ecdsa.PublicKey, error) {
s, err := Ecrecover(hash, sig)
if err != nil {
return nil, err
}
return UnmarshalPubkey(s)
}
It is recommended to use the VerifySignature function prior to retrieving the public key in order to verify that the signature follows the minimal standards and is not a malleable signature.
SOLVED: The Lombard team solved this issue by following the aforementioned recommendation.
// Medium
When a threshold key is proposed to be created, it is necessary to specify a role in its request in order to indicate what the key's purpose is. This role is checked against a list of available roles inside the protocol in the generateThresholdKeyController.preHandleRequest() method by using the isValidThresholdKeyRole() function.
func (c *generateThresholdKeyController) preHandleRequest(req *types.RequestProposal) (result *types.RequestProposal, tk *types.ThresholdKey, err error) {
// get request and validate it
req2 := req.GetGenerateThresholdKeyRequest()
if req2 == nil {
err = fmt.Errorf("failed to handle incompatible message format")
return
} else if err = validation.Validate(req2); err != nil {
return
}
// validate the roles
for _, role := range req2.Roles {
if !isValidThresholdKeyRole(role) {
err = fmt.Errorf("invalid role: %v", role)
return
}
}
As can be seen below, the available roles serve different purposes inside the node, as it is possible, following the node's logic, to generate multiple threshold keys depending on the action the node will execute.
func isValidThresholdKeyRole(role types.ThresholdKeyRole) bool {
switch role {
case types.ThresholdKeyRole_THRESHOLD_KEY_ROLE_NOTARIZE,
types.ThresholdKeyRole_THRESHOLD_KEY_ROLE_SIGN,
types.ThresholdKeyRole_THRESHOLD_KEY_ROLE_TRANSACT,
types.ThresholdKeyRole_THRESHOLD_KEY_ROLE_ISSUE,
types.ThresholdKeyRole_THRESHOLD_KEY_ROLE_AUTHORITY,
types.ThresholdKeyRole_THRESHOLD_KEY_ROLE_CA_CERT,
types.ThresholdKeyRole_THRESHOLD_KEY_ROLE_PLAYER_CERT:
return true
default:
return false
}
}
On the other hand, these roles should be checked whenever an action is going to be executed in the node. For example, when the CA generation process is about to start, the node should verify that the threshold key specified in the request has the types.ThresholdKeyRole_THRESHOLD_KEY_ROLE_CA_CERT role to be eligible to participate in the process. The same applies when a CSR issuance process starts.
tk = c.databaseService.GetThresholdKeyOrNil(uuid.MustParse(req2.ThresholdKey))
if tk == nil {
err = fmt.Errorf("no threshold key found with id (%s)", req2.ThresholdKey)
return
}
requiredRole := types.ThresholdKeyRole_THRESHOLD_KEY_ROLE_CA_CERT
if !internal.HasRequiredRole(tk, requiredRole) {
err = fmt.Errorf("the threshold key does not have the required role: %s", requiredRole)
return
}
tk = c.databaseService.GetThresholdKeyOrNil(uuid.MustParse(req2.ThresholdKey))
if tk == nil {
err = fmt.Errorf("no threshold key found with id (%s)", req2.ThresholdKey)
return
}
requiredRole := types.ThresholdKeyRole_THRESHOLD_KEY_ROLE_PLAYER_CERT
if !internal.HasRequiredRole(tk, requiredRole) {
err = fmt.Errorf("the threshold key does not have the required role: %s", requiredRole)
return
}
However, in classes such as notarizeTransactionController, notarizeTransactionOutputController and generateDepositAddressController, this condition is not verified, and any threshold key can be used in their processes no matter what its roles or purposes are.
It is recommended to check the threshold key role in each request in order to guarantee that the correct threshold key is being used in each case.
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Medium
In the node, the tss-lib from BNB Chain is used to sign the multiple actions that nodes execute over the protocol, following a consensus mechanism in charge of the communication between peers, or players, as they are named in the node's implementation.
These players are identified in the protocol by certificates following a CA-CSR model. As this is controlled by the node, tss-lib just receives the parameters it requires to work, such as an array of players, which are the peers that will take part in the keygen and signature processes of the TSS protocol.
Therefore, the node is in charge of applying the necessary sanity checks for the TSS to work; in this case, the node initializes keygen and signature processes without verifying whether each player is unique in the array. This could lead to further issues once any of the aforementioned processes starts, taking into account that repeated players in the array will have more weight in the process depending on the specified threshold.
func NewTEddsaCryptoKeygen(players []*types.Player, privateKey *ecdsa.PrivateKey, hostId uuid.UUID, crypto types.Crypto) (internal.ICryptoKeygen, error) {
var curve elliptic.Curve
if crypto == types.Crypto_CRYPTO_EDDSA_ED25519 {
curve = tss.Edwards()
} else {
return nil, fmt.Errorf("there is no keygen for crypto (%s)", crypto.String())
}
return &teddsaCryptoKeygen{
players: players,
privateKey: privateKey,
hostId: hostId,
curve: curve,
}, nil
}
func NewTEddsaCryptoSigner(players []*types.Player, privateKey *ecdsa.PrivateKey, hostId uuid.UUID, privateShare []byte, crypto types.Crypto) (internal.ICryptoSigner, error) {
var curve elliptic.Curve
if crypto == types.Crypto_CRYPTO_EDDSA_ED25519 {
curve = tss.Edwards()
} else {
return nil, fmt.Errorf("there is no signer for crypto (%s)", crypto.String())
}
return &teddsaCryptoSigner{players: players, privateKey: privateKey, hostId: hostId, privateShare: privateShare, curve: curve}, nil
}
func NewTssCryptoKeygen(players []*types.Player, privateKey *ecdsa.PrivateKey, hostId uuid.UUID, crypto types.Crypto) (internal.ICryptoKeygen, error) {
var curve elliptic.Curve
if crypto == types.Crypto_CRYPTO_ECDSA_SECP256K1 {
curve = btcec.S256()
} else if crypto == types.Crypto_CRYPTO_ECDSA_SECP256R1 {
curve = elliptic.P256()
} else {
return nil, fmt.Errorf("there is no keygen for crypto (%s)", crypto.String())
}
return &TssCryptoKeygen{
players: players,
privateKey: privateKey,
hostId: hostId,
curve: curve,
}, nil
}
func NewTssCryptoSigner(players []*types.Player, privateKey *ecdsa.PrivateKey, hostId uuid.UUID, privateShare []byte, curveType types.Crypto) (internal.ICryptoSigner, error) {
var curve elliptic.Curve
if curveType == types.Crypto_CRYPTO_ECDSA_SECP256K1 {
curve = btcec.S256()
} else if curveType == types.Crypto_CRYPTO_ECDSA_SECP256R1 {
curve = elliptic.P256()
} else {
return nil, fmt.Errorf("there is no keygen for crypto (%s)", curveType.String())
}
return &TssCryptoSigner{
players: players,
privateKey: privateKey,
hostId: hostId,
privateShare: privateShare,
curve: curve,
}, nil
}
During the initialization of each TSS process, the uniqueness of each player should be verified to prevent repeated entries.
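A minimal sketch of such a uniqueness check, which each of the constructors above could call before starting a ceremony. The Player type and its Id field are assumptions standing in for types.Player:

```go
package main

import "fmt"

// Player is a minimal stand-in for types.Player; in the node each player
// carries an identity derived from its certificate.
type Player struct{ Id string }

// validateUniquePlayers is the sanity check the keygen and signing
// constructors could run before starting a TSS ceremony.
func validateUniquePlayers(players []*Player) error {
	seen := make(map[string]struct{}, len(players))
	for _, p := range players {
		if _, ok := seen[p.Id]; ok {
			return fmt.Errorf("repeated player in party: %s", p.Id)
		}
		seen[p.Id] = struct{}{}
	}
	return nil
}

func main() {
	fmt.Println(validateUniquePlayers([]*Player{{"a"}, {"b"}, {"c"}})) // <nil>
	fmt.Println(validateUniquePlayers([]*Player{{"a"}, {"b"}, {"a"}})) // repeated player in party: a
}
```

Rejecting duplicates up front prevents a single peer from occupying several threshold slots in the same ceremony.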
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Medium
The controller functions for handling replies in various consensus ceremonies — such as generating CA certificates, deposit addresses, issuing CSR certificates, notarizing transactions, and notarizing transaction outputs — are crucial for maintaining the integrity and security of these processes. These functions currently validate the structure and content of incoming replies and progress the consensus state based on these messages. However, they do not verify whether the incoming reply message originates from a participant associated with the current threshold key used in the cryptographic ceremony, nor do they check if the threshold key specified in the reply matches the one in the current consensus task's context (reply.GetThresholdKey() == c.thresholdKey.Id).
func (c *generateCaCertificateController) HandleReply(reply *types.RequestProposal) (result *types.RequestProposal, err error) {
// Validate the reply message
reply2 := reply.GetGenerateCaCertificateReply()
if reply2 == nil {
return nil, fmt.Errorf("unable to handle incompatible reply message")
}
// Continue the signing ceremony with the received messages
if c.cryptoSigner == nil {
return nil, fmt.Errorf("CA signing ceremony hasn't started yet")
}
newReplies, res, err := c.cryptoSigner.Upgrade(reply2.Messages)
if err != nil {
return nil, err
} else if res == nil {
return c.createReplyResult(c.thresholdKey, reply, newReplies), nil
}
c.result = res
c.caCert.Status = types.CertificateAuthorityStatus_CA_STATUS_GENERATED
c.caCert.SerializedCert += "." + base64.RawURLEncoding.EncodeToString(res)
func (c *generateDepositAddressController) HandleReply(reply *types.RequestProposal) (result *types.RequestProposal, err error) {
// do upgrade current keygen ceremony
reply2 := reply.GetGenerateDepositAddressReply()
if reply2 == nil {
return nil, errors.New("unable to handle incompatible reply message")
} else if err := validation.Validate(reply2); err != nil {
return nil, err
}
if c.cryptoSigner == nil {
log.Tracef("player (%s) is not in the party, skipping signing", c.consensusFeed.HostID())
return nil, nil
}
func (c *issueCsrCertificateController) HandleReply(reply *types.RequestProposal) (result *types.RequestProposal, err error) {
// Validate the reply message
reply2 := reply.GetIssueCsrCertificateReply()
if reply2 == nil {
return nil, fmt.Errorf("unable to handle incompatible reply message")
} else if err = validation.Validate(reply2); err != nil {
errors.Wrapf(err, "CSR HandleReply validation error!")
return
}
// Continue the signing ceremony with the received messages
if c.cryptoSigner == nil {
return nil, fmt.Errorf("CSR signing ceremony hasn't started yet, , skipping reply")
}
newReplies, res, err := c.cryptoSigner.Upgrade(reply2.Messages) // process replies and update the signing state
if err != nil {
return nil, err
} else if res == nil {
return c.createReplyResult(c.thresholdKey, reply, newReplies), nil
}
func (c *notarizeTransactionController) HandleReply(reply *types.RequestProposal) (result *types.RequestProposal, err error) {
// do upgrade current keygen ceremony
reply2 := reply.GetNotarizeTransactionReply()
if reply2 == nil {
return nil, fmt.Errorf("unable to handle incompatible reply message")
} else if err := validation.Validate(reply2); err != nil {
return nil, err
}
if c.cryptoSigner == nil {
log.Tracef("player (%s) is not in the party, skipping signing", c.consensusFeed.HostID())
return nil, nil
}
newReplies, res, err := c.cryptoSigner.Upgrade(reply2.Messages)
if err != nil {
return nil, err
}
func (c *notarizeTransactionOutputController) HandleReply(reply *types.RequestProposal) (result *types.RequestProposal, err error) {
// do upgrade current keygen ceremony
reply2 := reply.GetNotarizeTransactionOutputReply()
if reply2 == nil {
return nil, errors.Errorf("unable to handle incompatible reply message")
} else if err := validation.Validate(reply2); err != nil {
return nil, err
}
if c.cryptoSigner == nil {
log.Tracef("player (%s) is not in the party, skipping signing", c.consensusFeed.HostID())
return nil, nil
}
It is recommended to verify that the reply sender is part of the current consensus task, and that the threshold key specified in the reply matches the one used in the current context.
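The two missing validations can be sketched as a single pre-check run at the top of each HandleReply. All type and field names here are assumptions modeled on the snippets above, not the codebase's actual definitions:

```go
package main

import "fmt"

// Minimal stand-ins for the node's types.
type Player struct{ Id string }

type Reply struct {
	Sender       string
	ThresholdKey string
}

type controller struct {
	thresholdKeyId string
	players        []*Player
}

// checkReply sketches the two validations the HandleReply functions lack:
// the reply must reference the threshold key of the current ceremony, and
// the sender must be a member of the party.
func (c *controller) checkReply(r *Reply) error {
	if r.ThresholdKey != c.thresholdKeyId {
		return fmt.Errorf("reply references threshold key (%s), expected (%s)", r.ThresholdKey, c.thresholdKeyId)
	}
	for _, p := range c.players {
		if p.Id == r.Sender {
			return nil
		}
	}
	return fmt.Errorf("reply sender (%s) is not part of the current ceremony", r.Sender)
}

func main() {
	c := &controller{thresholdKeyId: "tk-1", players: []*Player{{"p1"}, {"p2"}}}
	fmt.Println(c.checkReply(&Reply{Sender: "p1", ThresholdKey: "tk-1"})) // <nil>
	fmt.Println(c.checkReply(&Reply{Sender: "p9", ThresholdKey: "tk-1"})) // unknown sender rejected
}
```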
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Medium
The update function in the Service struct processes proposal requests, replies, and results within a consensus mechanism. However, it does not verify whether the proposal associated with a request, reply, or result is the same as the one currently being processed in the consensus. This oversight can lead to inconsistencies and errors in the consensus process. Specifically, the function parses incoming messages and handles them without ensuring that the proposals are matched to the correct context.
This can result in processing outdated or irrelevant proposals, leading to potential consensus failures or data integrity issues.
To ensure reliable and accurate consensus handling, the function should be updated to verify that each incoming request, reply, or result corresponds to the currently active proposal.
} else if req.Reply != nil {
var proposal *types.Proposal
id, err := uuid.Parse(req.Id)
if err != nil {
log.Warnf("failed to parse proposal id (%s)", req.Id)
return nil
}
if proposal = s.databaseService.GetProposalOrNil(id); proposal == nil {
log.Warnf("there is no proposal (%s) to process reply, skipping", id)
return nil
}
if err := s.handleJobReply(proposal, req); err != nil {
log.Warnf("failed to handle proposal reply: %+v", err)
return s.propose(common.RequestProposalError(proposal, err))
}
}
if req.Result != nil || req.Error != "" {
var proposal *types.Proposal
id, err := uuid.Parse(req.Id)
if err != nil {
log.Warnf("failed to parse proposal id (%s)", req.Id)
return nil
}
if proposal = s.databaseService.GetProposalOrNil(id); proposal == nil {
log.Warnf("there is no proposal (%s) to process result, skipping", id)
return nil
}
err = s.handleJobResult(proposal, req, entry)
if err != nil {
log.Warnf("failed to handle proposal result: %+v", err)
return s.propose(common.RequestProposalError(proposal, err))
}
}
As mentioned above, the update function should verify that the proposal referenced by the request is the same as the one being processed by the current consensus process; otherwise, the request should be rejected to prevent further issues.
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Medium
The checkProposalIsWellFormed function is intended to validate proposals by ensuring that exactly one non-nil value is present among the provided values. The function's logic is flawed, as it incorrectly returns true for empty proposals, thereby treating them as well-formed. This behavior occurs because the function does not explicitly check for the presence of at least one non-nil value before returning true.
func checkProposalIsWellFormed(values ...interface{}) bool {
hasValue := false
for _, v := range values {
if v == nil {
continue
} else if hasValue {
return false
}
hasValue = true
}
return true
}
In this implementation, if an empty list of values is passed, the loop is skipped entirely and the function returns true, indicating that the empty proposal is well-formed. This logical oversight allows empty proposals to bypass validation checks, potentially leading to significant security vulnerabilities.
To address the issue in the checkProposalIsWellFormed function, it is recommended that it be modified to ensure that at least one non-nil value is present before returning true.
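A corrected version needs only a one-line change: return the hasValue flag instead of the constant true, so an empty or all-nil argument list is no longer considered well-formed while the "exactly one non-nil value" semantics are preserved.

```go
package main

import "fmt"

// Fixed version of the function quoted above: returning hasValue instead
// of true rejects empty and all-nil inputs.
func checkProposalIsWellFormed(values ...interface{}) bool {
	hasValue := false
	for _, v := range values {
		if v == nil {
			continue
		} else if hasValue {
			return false // more than one non-nil value
		}
		hasValue = true
	}
	return hasValue // false for empty / all-nil input
}

func main() {
	fmt.Println(checkProposalIsWellFormed())         // false (was true before the fix)
	fmt.Println(checkProposalIsWellFormed(nil, "x")) // true
	fmt.Println(checkProposalIsWellFormed("x", "y")) // false
}
```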
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Medium
In the current implementation of the project, data structures used throughout the system are supposed to undergo validation checks as defined in validator.go to ensure they are well-formed before processing. This validation is crucial to maintaining the integrity and stability of the system, as it prevents malformed data from causing errors, inconsistencies, or security vulnerabilities during runtime. However, there appears to be an inconsistency in how and where these validations are applied, with some data structures not being validated at all before use.
This inconsistency arises because the validation routines are not uniformly integrated across all points where data structures are received, created, or modified. For instance, structures might be validated in one part of the application (such as at the entry point of data) but are assumed to be valid in subsequent operations without re-validation. This assumption can lead to serious issues if the data is inadvertently corrupted or modified between processes or if the initial validation fails to catch certain edge cases.
The list of unimplemented validations is the following:
NotarizedTransactionOutput
ThresholdKey
GenerateThresholdKeyReply
GenerateThresholdKeyResult
Also, the NotarizedTransaction object lacks checks on some crucial struct members:
func validateNotarizedTransaction(err *Error, tx *types.NotarizedTransaction) {
validateField(err, "id", tx.Id, validation.Required, is.UUIDv4)
validateEnum(err, "status", tx.Status, types.NotarizedTransactionStatus_value)
validateEnum(err, "blockchain", tx.Blockchain, types.Blockchain_value)
validateField(err, "transaction_hash", tx.TransactionHash, validation.Required, validation.Length(32, 32))
validateField(err, "block_number", tx.BlockNumber, validation.Required, validation.Min(0))
validateField(err, "block_hash", tx.BlockHash, validation.Required, validation.Length(32, 32))
validateField(err, "receipt_hash", tx.ReceiptHash, validation.Required, validation.Length(32, 32))
validateField(err, "transferred_amount", tx.TransferredAmount, validation.Required)
validateField(err, "chain_id", tx.ChainId, validation.Required, validation.Length(1, 32))
validateField(err, "proposal", tx.Proposal, validation.Required, is.UUIDv4)
}
type NotarizedTransaction struct {
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
Status NotarizedTransactionStatus `protobuf:"varint,2,opt,name=status,proto3,enum=finance.lombard.node.NotarizedTransactionStatus" json:"status,omitempty"`
Blockchain BlockchainType `protobuf:"varint,3,opt,name=blockchain,proto3,enum=finance.lombard.node.BlockchainType" json:"blockchain,omitempty"`
TransactionHash []byte `protobuf:"bytes,4,opt,name=transaction_hash,json=transactionHash,proto3" json:"transaction_hash,omitempty"`
BlockNumber uint64 `protobuf:"varint,5,opt,name=block_number,json=blockNumber,proto3" json:"block_number,omitempty"`
BlockHash []byte `protobuf:"bytes,6,opt,name=block_hash,json=blockHash,proto3" json:"block_hash,omitempty"`
TransactionIndex uint64 `protobuf:"varint,7,opt,name=transaction_index,json=transactionIndex,proto3" json:"transaction_index,omitempty"`
ReceiptHash []byte `protobuf:"bytes,8,opt,name=receipt_hash,json=receiptHash,proto3" json:"receipt_hash,omitempty"`
TransferredAmount string `protobuf:"bytes,9,opt,name=transferred_amount,json=transferredAmount,proto3" json:"transferred_amount,omitempty"`
ChainId uint64 `protobuf:"varint,10,opt,name=chain_id,json=chainId,proto3" json:"chain_id,omitempty"`
ThresholdKey string `protobuf:"bytes,11,opt,name=threshold_key,json=thresholdKey,proto3" json:"threshold_key,omitempty"`
Proposal string `protobuf:"bytes,12,opt,name=proposal,proto3" json:"proposal,omitempty"`
Payload []byte `protobuf:"bytes,13,opt,name=payload,proto3" json:"payload,omitempty"`
Signature []byte `protobuf:"bytes,14,opt,name=signature,proto3" json:"signature,omitempty"`
RawPayload []byte `protobuf:"bytes,15,opt,name=raw_payload,json=rawPayload,proto3" json:"raw_payload,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
It is recommended to add validation for each object and each object's members used by the program in order to prevent further issues and inconsistencies in its execution.
SOLVED: The Lombard team solved this issue following the aforementioned recommendation, stating:
Partially deprecated since NotarizeTransaction was changed to NotarizeBridgeDeposit.
// Medium
The calcDestinationId function is designed to generate a UUID based on a destination address and a blockchain identifier, aiming to ensure a unique identifier for different blockchain interactions. The function appends a separator byte to the destination address (to), then appends the bytes representing the blockchain identifier, and finally applies a Keccak256 hash to the concatenated data. While this method initially seems reliable, the deterministic nature of the hash function combined with the modification of certain bytes to conform to UUID version 4 specifications introduces the possibility of collisions. Specifically, the adjustments made to set the UUID version and variant reduce the entropy of the resulting UUID.
func calcDestinationId(to []byte, chain types.Blockchain) uuid.UUID {
data := append(to, byte(1)) // separator byte
data = append(data, new(big.Int).SetInt64(int64(chain)).Bytes()...)
data = gethcrypto.Keccak256(data) // hash
data[6] = (data[6] & 0x0f) | 0x40 // version is 4
data[8] = (data[8] & 0x3f) | 0x80 // variant is 10
res, _ := uuid.FromBytes(data[:16]) // use first 16 bytes for uuid
return res
}
Moreover, using only the first 16 bytes of the hash limits the space of possible outcomes, which could lead to different inputs (different combinations of to and chain) producing the same UUID.
To address the potential for UUID collisions, consider implementing a more robust method of UUID generation that either increases the entropy available for each UUID or incorporates a wider range of input variations.
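One sturdier derivation, sketched below under the assumption that the identifier stays UUID-shaped: length-prefix each field so the address bytes can never be confused with the chain-id bytes (under the original single-separator scheme, to = {0xaa, 0x01} with chain 2 and to = {0xaa} with chain 0x0102 concatenate to the same input), hash with SHA-256, and build the hash into a fresh buffer rather than appending to the caller's slice (append(to, ...) can mutate the caller's backing array). The 122 usable bits of a UUID remain an inherent cap, so callers needing a larger identifier space should store the full 32-byte digest instead.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// calcDestinationId derives a UUID-shaped identifier from an unambiguous,
// length-prefixed encoding of (to, chain). This is a sketch, not the
// codebase's actual replacement.
func calcDestinationId(to []byte, chain int64) [16]byte {
	buf := make([]byte, 0, 8+len(to)+8)
	buf = binary.BigEndian.AppendUint64(buf, uint64(len(to))) // length prefix removes ambiguity
	buf = append(buf, to...)
	buf = binary.BigEndian.AppendUint64(buf, uint64(chain)) // fixed-width chain id
	sum := sha256.Sum256(buf)

	var id [16]byte
	copy(id[:], sum[:16])
	id[6] = (id[6] & 0x0f) | 0x40 // version 4 layout
	id[8] = (id[8] & 0x3f) | 0x80 // RFC 4122 variant
	return id
}

func main() {
	a := calcDestinationId([]byte{0xaa, 0x01}, 2)
	b := calcDestinationId([]byte{0xaa}, 0x0102) // collides under naive concatenation
	fmt.Println(a != b)                          // true
}
```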
SOLVED: The Lombard team solved this issue following the aforementioned recommendation, stating:
This part of logic was deprecated and rewrote.
// Low
The Start function in the Service struct is responsible for initializing node managers for various blockchain configurations. However, it does not initialize some chainId values in the ChainIds mapping, leading to potential inconsistencies and errors. The function parses the configuration and initializes the Bitcoin node manager correctly. For EVM-based configurations, it iterates through config.EvmConfigs, creating node managers and logging their initialization. However, the ChainIds mapping is only updated for specific keys ("0x4268" and "0x01"), which are hardcoded to map to types.Blockchain_BLOCKCHAIN_ETHEREUM. This approach overlooks other potential blockchain configurations, resulting in incomplete mapping initialization. Consequently, the system may fail to recognize or correctly manage blockchain nodes with uninitialized chainId values, leading to operational failures.
func (s *Service) Start(cp shared.IConfigProvider) error {
config := &Config{}
if err := cp.Parse(config); err != nil {
return err
}
btc, err := newBtcNodeManager(config.BitcoinConfig)
if err != nil {
return errors.Wrap(err, "new btc node manager")
}
s.bitcoin = btc
for key, nc := range config.EvmConfigs {
nm, err := s.factoryNodeManager(nc.Blockchain, nc)
if err != nil {
return errors.Wrapf(err, "unable to factory node manager")
}
log.Infof("found blockchain node config (%s) with chain id (%d)", nc.Name, nc.ChainID.Uint64())
s.nms[nc.ChainID.Uint64()] = nm
if key == "0x4268" || key == "0x01" {
s.ChainIds[types.Blockchain_BLOCKCHAIN_ETHEREUM] = nc.ChainID
}
}
return nil
}
To resolve this, the function should be updated to ensure all relevant chainId values are initialized in the ChainIds mapping.
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Low
The following code snippet, located in the GenerateThresholdKey function, is designed to retrieve a ThresholdKey from the database, continuously looping until the key is obtained:
var tk *types.ThresholdKey
for tk == nil {
tk = s.databaseService.GetThresholdKeyOrNil(uuid.MustParse(p.Id))
if tk != nil {
break
}
time.Sleep(100 * time.Millisecond)
}
However, this implementation is flawed, as it can lead to an infinite loop. If the GetThresholdKeyOrNil method continuously returns nil because the key is not present in the database, the loop will never terminate. This can cause the system to hang indefinitely, consuming resources and potentially leading to application crashes or degraded performance.
It is recommended to use a timeout or restrict the number of iterations.
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Low
The provided code snippet is part of a loop handling responses from a Threshold Signature Scheme (TSS) and assembling a digital signature once all required parts are received. The loop listens for messages on two channels: s.messageC for intermediate TSS messages, which are then encrypted by s.encryptReplies, and s.signC for the final signature components. However, the current implementation lacks critical validation checks to ensure the integrity and validity of the data received through these channels. This omission could lead to further issues where invalid or corrupted data could be processed and used to construct a digital signature. For instance, there is no verification that the signature components R, S, and SignatureRecovery received from s.signC are valid. If these components are malformed or incorrect, the final signature would be invalid, potentially leading to incorrect verification or authorization of transactions.
for {
select {
case msg := <-s.messageC:
replies, err = s.encryptReplies(replies, msg)
if err != nil {
err = errors.Wrapf(err, "failed to handle tss message for ECDSA signing")
return
}
continue
case signatureData := <-s.signC:
sigBuf := bytes.NewBuffer([]byte{})
sigBuf.Write(signatureData.R)
sigBuf.Write(signatureData.S)
sigBuf.Write(signatureData.SignatureRecovery)
result = sigBuf.Bytes()
return
default:
return
}
}
To enhance the robustness and security of the signature generation process, it is recommended to implement checks for all data received from TSS messages. Specifically:
Validate TSS Messages: Before processing messages from s.messageC, validate their format and integrity. Ensure that they conform to the expected structures and contain all necessary data fields without corruption.
Verify Signature Components: Before assembling the final signature from the data received on s.signC, validate the signature components (R, S, SignatureRecovery). Check that these components are within the correct cryptographic bounds and are not corrupted or malformed.
Error Handling: Enhance error handling mechanisms to manage scenarios where invalid data is detected. Instead of merely logging the error, implement strategies to retry the signature process or abort it gracefully, notifying the relevant systems or users of the failure.
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Low
The Service struct contains several placeholder functions intended for DragonBoat consensus integration, including Lookup, Sync, PrepareSnapshot, SaveSnapshot, RecoverFromSnapshot, and Close. These functions currently return nil or perform no operations, leaving the consensus mechanism incomplete. This can lead to issues such as consensus failures, data integrity problems, and operational instability due to the lack of proper state management and synchronization wherever these functions are involved.
func (s *Service) Lookup(interface{}) (interface{}, error) {
return nil, nil
}
func (s *Service) Sync() error {
return nil
}
func (s *Service) PrepareSnapshot() (interface{}, error) {
return nil, nil
//return s.databaseService.NewSnapshot(), nil
}
func (s *Service) SaveSnapshot(snapshot interface{}, writer io.Writer, _ <-chan struct{}) error {
return nil
//ss := snapshot.(database.IDatabaseSnapshot)
//if err := ss.Stream(writer); err != nil {
// return err
//}
//return ss.Close()
}
func (s *Service) RecoverFromSnapshot(reader io.Reader, _ <-chan struct{}) error {
return nil
//return s.databaseService.RecoverFromSnapshot(reader)
}
func (s *Service) Close() error {
return nil
}
To address this, each function must be fully implemented to handle its respective DragonBoat functionality.
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Low
The update function in the Service struct is a critical component responsible for processing various types of requests within a consensus mechanism, including proposals, replies, and results. This function parses and handles different types of messages based on the structure and content of the request. However, a notable omission in the implementation is the lack of validation checks using the checkProposalIsWellFormed function, which is utilized in other parts of the consensus system to ensure that proposals are well-formed and meet the required criteria before proceeding.
func checkProposalIsWellFormed(values ...interface{}) bool {
hasValue := false
for _, v := range values {
if v == nil {
continue
} else if hasValue {
return false
}
hasValue = true
}
return true
}
It is recommended to integrate the checkProposalIsWellFormed function into the update function. This integration should occur early in the message handling process to ensure that all incoming proposals are scrutinized for completeness and validity before any processing takes place.
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Low
The Start function in the Service struct initializes node managers for blockchain configurations. However, if the configuration contains duplicate entries with the same ChainID, the function may initialize multiple instances of the same node manager. This leads to resource wastage and potential overwriting of node managers, causing operational inconsistencies. Specifically, the loop in the function does not check whether a node manager for a given ChainID already exists before creating a new one. This oversight can result in multiple initializations.
for key, nc := range config.EvmConfigs {
nm, err := s.factoryNodeManager(nc.Blockchain, nc)
if err != nil {
return errors.Wrapf(err, "unable to factory node manager")
}
log.Infof("found blockchain node config (%s) with chain id (%d)", nc.Name, nc.ChainID.Uint64())
s.nms[nc.ChainID.Uint64()] = nm
if key == "0x4268" || key == "0x01" {
s.ChainIds[types.Blockchain_BLOCKCHAIN_ETHEREUM] = nc.ChainID
}
}
To prevent multiple initializations, the function should check whether a node manager for a given ChainID already exists before creating a new one.
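The guard can be sketched as follows. EvmConfig, nodeManager and registerNodeManagers are simplified stand-ins for the node's types; the point is the exists check before insertion into the nms map:

```go
package main

import "fmt"

// EvmConfig is a minimal stand-in for the node's EVM config entries.
type EvmConfig struct {
	Name    string
	ChainID uint64
}

type nodeManager struct{ name string }

// registerNodeManagers rejects a second config entry with the same ChainID
// instead of silently overwriting the manager created for the first entry.
func registerNodeManagers(configs []EvmConfig) (map[uint64]*nodeManager, error) {
	nms := make(map[uint64]*nodeManager)
	for _, nc := range configs {
		if _, exists := nms[nc.ChainID]; exists {
			return nil, fmt.Errorf("duplicate node manager for chain id (%d)", nc.ChainID)
		}
		nms[nc.ChainID] = &nodeManager{name: nc.Name}
	}
	return nms, nil
}

func main() {
	_, err := registerNodeManagers([]EvmConfig{{"mainnet", 1}, {"holesky", 17000}})
	fmt.Println(err) // <nil>
	_, err = registerNodeManagers([]EvmConfig{{"mainnet", 1}, {"mainnet-copy", 1}})
	fmt.Println(err) // duplicate node manager for chain id (1)
}
```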
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Low
The functions verifyBridgeDeposit and verifyUnstakeRequest in the web3NodeManager handle events emitted by a smart contract. In both functions, data is parsed from the event logs to extract important information such as toAddress, toContract, toChainId, amount, and other fields. While these inputs are retrieved from a trusted smart contract, there is currently no validation to ensure that the extracted data follows the correct format and contains valid values. This lack of validation could lead to further issues if the data is malformed or unexpected, potentially causing errors downstream or leading to incorrect processing.
// parse data
// DepositToBridge (index_topic_1 address fromAddress, index_topic_2 bytes32 toAddress, bytes32 toContract, bytes32 chainId, uint64 amount)
toAddress := eventLog.Topics[2]
toContract := eventLog.Data[:32]
toChainId := eventLog.Data[32:64]
amount := new(big.Int).SetBytes(eventLog.Data[64:96])
// parse data
amount := new(big.Int).SetBytes(eventLog.Data[32:64])
length := new(big.Int).SetBytes(eventLog.Data[64:96])
hexOutput := eventLog.Data[96 : 96+length.Int64()]
Implement input validation checks for the parsed data fields in both functions. This includes verifying that the toAddress, toContract, toChainId, amount, length, and hexOutput fields conform to the expected formats and value ranges. By adding these checks, the functions will be more robust, ensuring that only correctly formatted and valid data is processed, thus minimizing the risk of unexpected behavior or errors.
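For the DepositToBridge case, the most important check is on the payload length before slicing: indexing eventLog.Data[64:96] on a short payload panics at runtime. A sketch of the missing guard, with parseDepositToBridge as a hypothetical helper (topic-count checks for the indexed fields would be analogous):

```go
package main

import (
	"fmt"
	"math/big"
)

// parseDepositToBridge validates the event payload before slicing it:
// the DepositToBridge data section must contain exactly the three 32-byte
// words the original code indexes.
func parseDepositToBridge(data []byte) (toContract, toChainId []byte, amount *big.Int, err error) {
	if len(data) != 96 {
		return nil, nil, nil, fmt.Errorf("unexpected event data length (%d), want 96", len(data))
	}
	toContract = data[:32]
	toChainId = data[32:64]
	amount = new(big.Int).SetBytes(data[64:96])
	if amount.Sign() <= 0 {
		return nil, nil, nil, fmt.Errorf("deposit amount must be positive")
	}
	return toContract, toChainId, amount, nil
}

func main() {
	_, _, _, err := parseDepositToBridge(make([]byte, 64)) // too short
	fmt.Println(err != nil)                                // true

	data := make([]byte, 96)
	data[95] = 0x2a // amount = 42
	_, _, amount, err := parseDepositToBridge(data)
	fmt.Println(amount, err) // 42 <nil>
}
```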
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Low
The SponsorTx function adjusts the change output of a Bitcoin transaction by deducting a calculated fee. However, the fee calculation within CalculateFee is currently flawed. The method doubles the estimated transaction size (tx.SerializeSize() * 2), leading to an imprecise fee estimation. This can result in an overestimation or underestimation of the required fee, which could cause the transaction to either be rejected by the network or overpay in fees, wasting resources. Additionally, the function checks that the remaining change output is above a certain threshold, but this logic might also be affected by the inaccurate fee calculation. If the calculated fee is too high, it may trigger unexpected errors stating that the change output is too small when, in reality, it is sufficient.
func SponsorTx(tx *wire.MsgTx, changeOutput *wire.TxOut, feePerVB int64) error {
fee := CalculateFee(tx, feePerVB)
if btcutil.Amount(changeOutput.Value) <= fee {
return errors.Errorf("change output (%d) is too small (expectedFee=%d)", changeOutput.Value, fee)
}
changeOutput.Value = changeOutput.Value - int64(fee)
dustLimit := GetDustLimitForOutput(changeOutput)
if btcutil.Amount(changeOutput.Value) < dustLimit {
return errors.WithMessagef(ErrTooSmallOutput, "%d", changeOutput.Value)
}
return nil
}
func CalculateFee(tx *wire.MsgTx, feePerVB int64) btcutil.Amount {
// TODO: tx.SerializeSize() doesn't return real size of transaction
// TODO: we need to reimplement this method to estimate real vB size, for now just multiply by 2
b := tx.SerializeSize()
fee := feePerVB * int64(b) * 2
return btcutil.Amount(fee)
}
Refactor the CalculateFee function to accurately estimate the transaction's virtual size (vB) rather than relying on simply doubling the serialized size.
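The BIP 141 virtual size can be computed from the transaction's base size (serialized without witness data) and total size (with witness data). A sketch of the arithmetic, assuming btcd is in use (where baseSize would be tx.SerializeSizeStripped() and totalSize tx.SerializeSize(); btcd's mempool package also exposes a GetTxVirtualSize helper):

```go
package main

import "fmt"

// virtualSize implements the BIP 141 definition:
// weight = base*3 + total, vsize = ceil(weight/4).
func virtualSize(baseSize, totalSize int64) int64 {
	weight := baseSize*3 + totalSize
	return (weight + 3) / 4 // round up
}

// calculateFee sketches the corrected estimator: the fee scales with the
// real virtual size rather than with 2x the serialized size.
func calculateFee(baseSize, totalSize, feePerVB int64) int64 {
	return feePerVB * virtualSize(baseSize, totalSize)
}

func main() {
	// Non-segwit tx: base == total, so vsize equals the serialized size.
	fmt.Println(virtualSize(250, 250)) // 250
	// Segwit tx with 100 bytes of witness data: fee well below the 2x estimate.
	fmt.Println(calculateFee(250, 350, 10)) // 2750
}
```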
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Informational
It has been identified that some functions implement extra arguments that are not used within those functions, thus unnecessarily increasing the complexity of the code.
func (s *Service) GetProposalsByStatusOrderByIndex(status types.ProposalStatus, offset, size uint32) []*types.Proposal {
result := s.d.FindProposalsByStatusOrderByIndex(status)
if result == nil {
result = []*types.Proposal{}
}
return result
}
It is recommended to remove those extra parameters if they are not going to be used in the function's implementation.
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Informational
It has been identified that there are open TO-DOs in the scoped files. Open TO-DOs can point to architecture or programming issues that still need to be resolved. Often, these kinds of comments indicate areas of complexity or confusion for developers. This provides value and insight to an attacker who aims to cause damage to the protocol.
Consider resolving the TO-DOs before deploying code to a production context. Use an independent issue tracker or other project management software to track development tasks.
PARTIALLY SOLVED: The Lombard team partially solved this issue by resolving some TO-DO comments, stating:
We did review of all TO-DOs and added them to our backlog. 2 TO-DOs we going to fix in scope of audit, others is not critical for release.
// Informational
In the verifyBridgeDeposit function of the web3NodeManager struct, there is a comparison between eventLog.Address and nm.config.LBTCAddress using the native comparison operator (!=). This approach is not optimal because Ethereum addresses should be compared using the dedicated methods that are specifically designed to handle such comparisons correctly.
if eventLog.Address != ethcommon.BytesToAddress(nm.config.LBTCAddress) {
return nil, errors.Errorf("wrong contract address (%s)", eventLog.Address.String())
}
Replace the native comparison with a method specifically designed for Ethereum address comparison. The common.Address type provides a Cmp method that should be used for this purpose.
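The recommended shape of the check is sketched below. Address here is a stand-in for go-ethereum's common.Address (a 20-byte array), whose Cmp method follows the bytes.Compare contract; the sameContract helper name is an assumption:

```go
package main

import (
	"bytes"
	"fmt"
)

// Address stands in for go-ethereum's common.Address ([20]byte).
type Address [20]byte

// Cmp mirrors the contract of common.Address.Cmp: negative, zero, or
// positive depending on byte-wise ordering.
func (a Address) Cmp(b Address) int {
	return bytes.Compare(a[:], b[:])
}

// sameContract is the recommended form of the check in verifyBridgeDeposit:
// Cmp returns 0 when the two addresses are identical.
func sameContract(eventAddr, configured Address) bool {
	return eventAddr.Cmp(configured) == 0
}

func main() {
	var lbtc, other Address
	lbtc[19] = 0x01
	fmt.Println(sameContract(lbtc, lbtc))  // true
	fmt.Println(sameContract(lbtc, other)) // false
}
```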
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
// Informational
The preHandle function within the signDepositController struct is tasked with preparing the necessary components for a Bitcoin Partially Signed Bitcoin Transaction (PSBT) by iterating through the UTXOs provided in a request. This function extracts and decodes UTXO details such as the outpoint, script, and amount to build the transaction inputs for the PSBT. However, a significant oversight in the implementation is the lack of a check to ensure that the total sum of the UTXOs equals the amount specified in the signDepositSigningData structure, inclusive of transaction fees. This total should accurately reflect the total outputs of the transaction, ensuring that there is no discrepancy between the funds being signed over and the funds available in the provided UTXOs.
signingData := &signDepositSigningData{
    amount:        amount,
    stakerPk:      strings.TrimPrefix(stakerPk.PublicKey, "0x"),
    fpPk:          req2.GetFinalityProviderPk(),
    feeRate:       btcutil.Amount(req2.FeeRate),
    stakingTime:   uint16(req2.GetStakingTime()),
    userId:        userId,
    psbt:          nil,
    changeScript:  changeScript,
    babylonParams: babylonParams,
}
// iterate through the UTXOs, checking that we can interpret them and
// recording useful information for the PSBT
for i, u := range req2.GetUtxos() {
    // input seqnum
    psbtSeqnums[i] = wire.MaxTxInSequenceNum
    outpoint, err2 := wire.NewOutPointFromString(u)
    if err2 != nil {
        return nil, signingData, errors.Wrapf(err2, "parse utxo %d to outpoint (%s)", i, u)
    }
    psbtTxIns[i] = outpoint
    utxo, err2 := c.blockchainService.GetUTXO(outpoint)
    if err2 != nil {
        return nil, signingData, errors.Wrapf(err2, "get utxo %d", i)
    }
    // compute the WitnessUtxo field for the PSBT
    scriptBytes, err2 := hex.DecodeString(utxo.ScriptPubKey.Hex)
    if err2 != nil {
        return nil, signingData, errors.Wrapf(err2, "Decoding ScriptPubKey %d '%v'", i, utxo.ScriptPubKey.Hex)
    }
    txAmount, err2 := btcutil.NewAmount(utxo.Value)
    if err2 != nil {
        return nil, signingData, errors.Wrapf(err2, "UTXO %d amount %f", i, utxo.Value)
    }
    psbtInputs[i].WitnessUtxo = wire.NewTxOut(int64(txAmount), scriptBytes)
While the consensus mechanism might eventually reject a transaction where the input sums do not match the expected outputs (including fees), relying solely on this downstream validation introduces unnecessary inefficiency. It could lead to situations where invalid transactions are only caught after significant processing, wasting resources and potentially delaying other tasks’ finalization.
It is recommended to add a check verifying that the sum of the input UTXOs is greater than or equal to the specified amount
parameter, including fees.
SOLVED: The Lombard team solved this issue following the aforementioned recommendation.
Halborn used automated testing techniques to enhance coverage of certain areas of the scoped component. Among the tools used were staticcheck
, gosec
, semgrep
, CodeQL
and Nancy
. After Halborn verified that all files and scoped structures in the repository compiled correctly, these tools were run against the scoped code. With these tools, Halborn can statically verify security-related issues across the entire codebase.
The results of these automated tools were analyzed, and none of the highlighted findings pose any risk to the application.
Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.