Prepared by:
HALBORN
Last Updated 02/19/2025
Date of Engagement: February 5th, 2025 - February 7th, 2025
100% of all reported findings have been addressed
All findings: 12
Critical: 1
High: 0
Medium: 5
Low: 3
Informational: 3
BlockDAG engaged Halborn to conduct a security assessment on smart contracts beginning on February 5th, 2025 and ending on February 7th, 2025. The security assessment was scoped to the smart contracts provided to the Halborn team. Commit hashes and further details can be found in the Scope section of this report.
The team at Halborn dedicated 3 days to the engagement and assigned one full-time security engineer to evaluate the security of the smart contracts.
The security engineer is a blockchain and smart-contract security expert with advanced penetration testing and smart-contract hacking skills, and deep knowledge of multiple blockchain protocols.
The purpose of this assessment is to:
Ensure that smart contract functions operate as intended.
Identify potential security issues with the smart contracts.
In summary, Halborn identified some improvements to reduce the likelihood and impact of risks, which were all addressed by the BlockDAG team. The main ones were the following:
Implement correct logic for token distribution.
Ensure accurate accounting for multi-sig approvals.
Strengthen validation during token allocation.
Implement proper access control on release functions.
Add an expiry mechanism to time-lock operations.
Add explicit category validation at the start of the executeAddCategory function.
Disable the initializer in the implementation contract.
Halborn performed a combination of manual, semi-automated, and automated security testing to balance efficiency, timeliness, practicality, and accuracy regarding the scope of this assessment. While manual testing is recommended to uncover flaws in logic, process, and implementation, automated testing techniques help enhance coverage of the code and can quickly identify items that do not follow security best practices. The following phases and associated tools were used throughout the term of the assessment:
Research into architecture and purpose.
Smart contract manual code review and walk-through.
Manual assessment of use and safety for the critical Solidity variables and functions in scope to identify any vulnerability classes.
Manual testing with custom scripts.
Static analysis of the scoped contracts and imported functions (Slither).
Local deployment and testing (Foundry).
EXPLOITABILITY METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Attack Origin (AO) | Arbitrary (AO:A) / Specific (AO:S) | 1 / 0.2 |
Attack Cost (AC) | Low (AC:L) / Medium (AC:M) / High (AC:H) | 1 / 0.67 / 0.33 |
Attack Complexity (AX) | Low (AX:L) / Medium (AX:M) / High (AX:H) | 1 / 0.67 / 0.33 |
IMPACT METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Confidentiality (C) | None (C:N) / Low (C:L) / Medium (C:M) / High (C:H) / Critical (C:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Integrity (I) | None (I:N) / Low (I:L) / Medium (I:M) / High (I:H) / Critical (I:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Availability (A) | None (A:N) / Low (A:L) / Medium (A:M) / High (A:H) / Critical (A:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Deposit (D) | None (D:N) / Low (D:L) / Medium (D:M) / High (D:H) / Critical (D:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Yield (Y) | None (Y:N) / Low (Y:L) / Medium (Y:M) / High (Y:H) / Critical (Y:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
SEVERITY COEFFICIENT | COEFFICIENT VALUE | NUMERICAL VALUE |
---|---|---|
Reversibility (R) | None (R:N) / Partial (R:P) / Full (R:F) | 1 / 0.5 / 0.25 |
Scope (S) | Changed (S:C) / Unchanged (S:U) | 1.25 / 1 |
Severity | Score Value Range |
---|---|
Critical | 9 - 10 |
High | 7 - 8.9 |
Medium | 4.5 - 6.9 |
Low | 2 - 4.4 |
Informational | 0 - 1.9 |
Critical: 1
High: 0
Medium: 5
Low: 3
Informational: 3
Security analysis | Risk level | Remediation Date |
---|---|---|
No Token Distribution in BatchRelease Due to Premature State Updates | Critical | Solved - 02/19/2025 |
Total Amount Limit Can Be Bypassed During Allocations | Medium | Solved - 02/13/2025 |
Missing Access Control on Token Release Functions | Medium | Solved - 02/13/2025 |
Cleanup Reverts for Pause/Unpause Approvals | Medium | Solved - 02/19/2025 |
Double-Counting in Multi-Signature Approval for Users with Multiple Roles | Medium | Solved - 02/19/2025 |
Operator Approvals Equal to Admin Approvals | Medium | Solved - 02/19/2025 |
Timelock Operations Without Expiry | Low | Solved - 02/13/2025 |
Missing Category Validation | Low | Solved - 02/13/2025 |
Initializer Not Disabled | Low | Solved - 02/13/2025 |
Insufficient Validation Of Vesting Duration | Informational | Solved - 02/13/2025 |
Centralization Risks | Informational | Solved - 02/13/2025 |
Redundant Constants | Informational | Solved - 02/19/2025 |
// Critical
The batchRelease() function of TreasuryVesting currently separates state updates and token transfers into two separate loops, following the checks-effects-interactions pattern.
// Current implementation
function batchRelease(bytes32 category, address[] calldata users) external {
//...
// First loop: Updates state
for (uint256 i = 0; i < users.length; i++) {
uint256 releasable = getReleasableAmount(users[i], category); // Returns X tokens
if (releasable > 0) {
userReleased[users[i]][category] += releasable; // Updates state
categoryVestings[category].released += releasable;
totalReleased += releasable;
processed++;
}
}
// Second loop: Attempts transfers
for (uint256 i = 0; i < users.length; i++) {
uint256 releasable = getReleasableAmount(users[i], category); // Returns 0 because state was updated
if (releasable > 0) { // This condition is never true
bdagToken.safeTransferFrom(msg.sender, users[i], releasable);
}
}
//...
}
However, this implementation causes the second loop's getReleasableAmount() calls to return 0, since the state has already been updated in the first loop, resulting in no tokens being transferred to users.
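The failure mode can be reproduced outside Solidity. The following Python sketch is a simplified model of the vesting state (not the actual contract); it shows that once the first loop records the release, the second loop computes zero for every user:

```python
# Simplified model of the flawed two-loop batchRelease pattern.
allocations = {"user1": 1000, "user2": 2000}
released = {"user1": 0, "user2": 0}

def get_releasable(user):
    # Fully vested in this model: releasable = allocation - released
    return allocations[user] - released[user]

transferred = {}

# First loop: update state (effects)
for user in allocations:
    amount = get_releasable(user)
    if amount > 0:
        released[user] += amount

# Second loop: attempt transfers (interactions)
for user in allocations:
    amount = get_releasable(user)  # now 0 -- state was already updated
    if amount > 0:                 # never true
        transferred[user] = amount

print(transferred)  # {} -- no tokens ever move
```

The model makes the root cause visible: the second loop re-derives the releasable amount from state that the first loop already mutated, so the transfer branch is unreachable.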
Add the following test function in the foundry test file:
function test_BatchRelease() public {
test_AddEarlyBirdCategory();
// Setup allocations
vm.startPrank(operator);
treasuryVesting.allocateTokens(user1, treasuryVesting.EARLY_BIRD_CATEGORY(), 1000e18);
treasuryVesting.allocateTokens(user2, treasuryVesting.EARLY_BIRD_CATEGORY(), 2000e18);
vm.stopPrank();
address[] memory users = new address[](2);
users[0] = user1;
users[1] = user2;
vm.startPrank(admin);
treasuryVesting.batchRelease(treasuryVesting.EARLY_BIRD_CATEGORY(), users);
vm.stopPrank();
assertEq(token.balanceOf(user1), 1000e18);
assertEq(token.balanceOf(user2), 2000e18);
}
Output: the assertions fail, as no tokens are transferred to either user.
Combine the state updates and token transfers into a single loop to ensure that tokens are properly distributed:
function batchRelease(bytes32 category, address[] calldata users)
external
onlyRole(ADMIN_ROLE)
nonReentrant
whenNotPaused
{
require(users.length > 0, "Empty users array");
require(users.length <= MAX_BATCH_SIZE, "Batch too large");
emit BatchReleaseStarted(category, users.length, block.timestamp);
uint256 totalReleased;
uint256 processed;
for (uint256 i = 0; i < users.length; i++) {
uint256 releasable = getReleasableAmount(users[i], category);
if (releasable > 0) {
userReleased[users[i]][category] += releasable;
categoryVestings[category].released += releasable;
totalReleased += releasable;
processed++;
bdagToken.safeTransferFrom(msg.sender, users[i], releasable);
}
}
emit BatchReleaseCompleted(category, totalReleased, processed, block.timestamp);
}
SOLVED: The suggested mitigation was implemented by the BlockDAG team.
// Medium
The allocateTokens function in the TreasuryVesting contract contains a vulnerability that allows operators to allocate more tokens than the category's total amount limit. This occurs because the check that is meant to enforce the category limit uses vesting.released, which is never updated during token allocation or release, making the limit check ineffective.
function allocateTokens(address user, bytes32 category, uint256 amount) external {
// ...
require(vesting.released + amount <= vesting.totalAmount, "Exceeds category limit");
// vesting.released is never updated, always remains 0
// ...
}
The vulnerability allows unlimited token allocations within a category, bypassing the intended total amount limit. This could lead to:
More tokens being allocated than intended by the protocol
Potential insolvency if more tokens are promised than available
Breaking of tokenomics and vesting schedules
Loss of funds if the contract doesn't have enough tokens to cover all allocations
Here's a test function proving that token allocations can exceed total amount:
function test_ExceedCategoryLimit() public {
// Setup category with 1000e18 total amount limit
uint256[] memory releaseSteps = new uint256[](1);
uint256[] memory timeSteps = new uint256[](1);
releaseSteps[0] = 10000;
timeSteps[0] = 0;
vm.startPrank(admin);
bytes32 operationId = keccak256(
abi.encode(
treasuryVesting.OPERATION_ADD_CATEGORY(),
treasuryVesting.EARLY_BIRD_CATEGORY(),
block.timestamp,
0,
1000e18,
releaseSteps,
timeSteps
)
);
treasuryVesting.scheduleAddCategory(
treasuryVesting.EARLY_BIRD_CATEGORY(),
block.timestamp,
0,
1000e18,
releaseSteps,
timeSteps
);
vm.warp(block.timestamp + treasuryVesting.TIMELOCK_DURATION_ADD_CATEGORY());
treasuryVesting.executeAddCategory(operationId);
vm.stopPrank();
vm.startPrank(operator);
// Allocate maximum amount to first user
treasuryVesting.allocateTokens(user1, treasuryVesting.EARLY_BIRD_CATEGORY(), 1000e18);
// Should fail but succeeds: allocate same amount to second user
treasuryVesting.allocateTokens(user2, treasuryVesting.EARLY_BIRD_CATEGORY(), 1000e18);
vm.stopPrank();
// Verify total allocated amount exceeds limit
uint256 totalAllocated = treasuryVesting.getAllocation(user1, treasuryVesting.EARLY_BIRD_CATEGORY()) +
treasuryVesting.getAllocation(user2, treasuryVesting.EARLY_BIRD_CATEGORY());
assertEq(totalAllocated, 2000e18); // Double the intended limit
}
Replace the current check with a tracking mechanism for total allocations.
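To picture the intended fix, the Python sketch below models a per-category running total that every new allocation is checked against. This is an illustrative model of the tracking logic, not the contract code:

```python
class CategoryVesting:
    """Minimal model of per-category allocation tracking."""
    def __init__(self, total_amount):
        self.total_amount = total_amount
        self.total_allocated = 0   # running total, updated on every allocation
        self.allocations = {}

    def allocate(self, user, amount):
        # Check against allocations made so far, not against released tokens
        if self.total_allocated + amount > self.total_amount:
            raise ValueError("Exceeds category limit")
        self.total_allocated += amount
        self.allocations[user] = self.allocations.get(user, 0) + amount

category = CategoryVesting(total_amount=1000)
category.allocate("user1", 1000)   # fills the category
try:
    category.allocate("user2", 1000)   # would exceed the limit
    exceeded = True
except ValueError:
    exceeded = False
```

With the running total, the second allocation reverts instead of silently doubling the category's commitments.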
SOLVED: The BlockDAG team solved this issue as follows:
Adds proper tracking of total allocations via totalAllocated
Changes the limit check to use total allocations instead of released amounts
Updates the running total when new allocations are made
Maintains the same check in both single and batch allocation functions
// Medium
The releaseTokens and batchRelease functions lack access control, allowing anyone to release tokens:
function releaseTokens(
address user,
bytes32 category
) external nonReentrant whenNotPaused returns (uint256)
function batchRelease(
bytes32 category,
address[] calldata users
) external nonReentrant whenNotPaused
This implementation does not follow the intention as stated in the provided documentation:
3.3 Admin-Only Release
The admin triggers batch release for each category after verifying the correct vesting stage has arrived.
Add appropriate access control to both release functions.
SOLVED: The BlockDAG team solved this issue as follows:
Adds the onlyRole(ADMIN_ROLE) modifier to both functions
Ensures only authorized admins can trigger token releases
Maintains existing nonReentrant and pause checks
Aligns with the requirement for admin-only releases
// Medium
The TreasuryVesting contract implements two different patterns for handling operations. While standard operations (like adding categories) use a structured TimelockOperation system with proper expiry and cleanup mechanisms, emergency operations (pause/unpause) use a simplified direct hashing approach.
// Emergency operations use direct hashing
bytes32 pauseOperationId = keccak256("EMERGENCY_PAUSE");
// While standard operations use TimelockOperation struct
struct TimelockOperation {
bytes32 operationId;
uint256 executeTime;
uint256 expiryTime;
bool executed;
bytes encodedParams;
}
This inconsistency prevents the cleanup function from working properly for emergency operations, as they do not exist in the timelockOperations mapping.
Impact:
Stale pause/unpause approvals remain in the system
The approval state doesn't get reset after execution
This could lead to confusion and potential security issues if old approvals are reused
Standardize the operation handling:
Use the same timelock and operation ID pattern for all operations including pause/unpause
Include pause/unpause in the standard operation types
Reset approval states after execution:
Clear approvals after successful pause/unpause
Implement proper cleanup for all operation types
Add proper expiry mechanism for pause/unpause operations
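The recommendation can be sketched with a small Python model of an operation registry (illustrative only; names and structure are assumptions, not the contract code). Emergency operations that bypass registration cannot be cleaned up; registering them like any other operation fixes both cleanup and approval reset:

```python
import time

operations = {}   # operation_id -> {"expiry": ..., "executed": ...}
approvals = {}    # operation_id -> set of approvers

def schedule(op_id, lifetime):
    # Registering an operation also resets its approval set
    operations[op_id] = {"expiry": time.time() + lifetime, "executed": False}
    approvals[op_id] = set()

def cleanup(op_id):
    # Works only for operations that were actually registered
    if op_id not in operations:
        raise KeyError("unknown operation")
    del operations[op_id]
    approvals.pop(op_id, None)

# Emergency pause modeled as a bare hash that skips schedule():
pause_id = "EMERGENCY_PAUSE"
approvals[pause_id] = {"admin1", "admin2"}   # stale approvals accumulate

try:
    cleanup(pause_id)          # fails: never registered
    pause_cleaned = True
except KeyError:
    pause_cleaned = False

# Unified pattern: register the pause like any other operation
schedule(pause_id, lifetime=3600)
cleanup(pause_id)              # now succeeds and clears approvals too
```

The model shows why the inconsistency matters: approvals keyed by an unregistered hash are invisible to cleanup and persist indefinitely.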
SOLVED: The suggested mitigation was implemented by the BlockDAG team - Use the same timelock and operation ID pattern for all operations including pause/unpause
// Medium
In the TreasuryVesting::approveOperation() function, when a user with both ADMIN_ROLE and OPERATOR_ROLE approves an operation, both approval counters are incremented, effectively giving them two votes from a single address.
function approveOperation(bytes32 operationId) external {
require(hasRole(ADMIN_ROLE, msg.sender) || hasRole(OPERATOR_ROLE, msg.sender), "Not authorized");
MultiSigApproval storage approval = multiSigApprovals[operationId];
require(!approval.hasApproved[msg.sender], "Already approved");
require(!approval.executed, "Operation already executed");
approval.hasApproved[msg.sender] = true;
// Issue: Both counters increment if user has both roles
if (hasRole(ADMIN_ROLE, msg.sender)) {
approval.adminApprovalCount++;
}
if (hasRole(OPERATOR_ROLE, msg.sender)) {
approval.operatorApprovalCount++;
}
emit MultiSigApprovalSubmitted(
operationId, msg.sender, approval.adminApprovalCount, approval.operatorApprovalCount
);
}
This undermines the security of the multi-signature system by allowing users with multiple roles to have disproportionate voting power.
Add the following to the foundry test file:
// Test multi-sig approval tracking
function test_MultiSigApprovalTracking() public {
uint256[] memory releaseSteps = new uint256[](1);
uint256[] memory timeSteps = new uint256[](1);
releaseSteps[0] = 10000;
timeSteps[0] = 0;
vm.startPrank(admin);
bytes32 operationId = keccak256(
abi.encode(
treasuryVesting.OPERATION_ADD_CATEGORY(),
treasuryVesting.EARLY_BIRD_CATEGORY(),
block.timestamp,
100000e18,
releaseSteps,
timeSteps
)
);
treasuryVesting.scheduleAddCategory(
treasuryVesting.EARLY_BIRD_CATEGORY(), block.timestamp, 100000e18, releaseSteps, timeSteps
);
// Track admin approvals
treasuryVesting.approveOperation(operationId);
(uint256 adminCount, uint256 operatorCount,,) = treasuryVesting.multiSigApprovals(operationId);
assertEq(adminCount, 1);
assertEq(operatorCount, 0);
vm.stopPrank();
// Track operator approvals
vm.prank(operator);
treasuryVesting.approveOperation(operationId);
(adminCount, operatorCount,,) = treasuryVesting.multiSigApprovals(operationId);
assertEq(adminCount, 1);
assertEq(operatorCount, 1);
}
Output: the assertions fail, showing that the approval counts are inflated when a single address holds both roles.
Modify the approval logic to use mutually exclusive conditions, ensuring a user can only increment one counter even if they have multiple roles:
function approveOperation(bytes32 operationId) external {
require(hasRole(ADMIN_ROLE, msg.sender) || hasRole(OPERATOR_ROLE, msg.sender), "Not authorized");
MultiSigApproval storage approval = multiSigApprovals[operationId];
require(!approval.hasApproved[msg.sender], "Already approved");
require(!approval.executed, "Operation already executed");
approval.hasApproved[msg.sender] = true;
// Fix: Mutually exclusive conditions prevent double counting
if (hasRole(ADMIN_ROLE, msg.sender)) {
approval.adminApprovalCount++;
} else if (hasRole(OPERATOR_ROLE, msg.sender)) {
approval.operatorApprovalCount++;
}
emit MultiSigApprovalSubmitted(
operationId, msg.sender, approval.adminApprovalCount, approval.operatorApprovalCount
);
}
SOLVED: The suggested mitigation was implemented by BlockDAG team.
// Medium
The current implementation doesn't distinguish between operator and admin approvals in the approveOperation function. Both operators and admins contribute to the same approval count, which could lead to security risks.
function approveOperation(bytes32 operationId) external {
require(hasRole(ADMIN_ROLE, msg.sender) || hasRole(OPERATOR_ROLE, msg.sender), "Not authorized");
MultiSigApproval storage approval = multiSigApprovals[operationId];
require(!approval.hasApproved[msg.sender], "Already approved");
require(!approval.executed, "Operation already executed");
approval.hasApproved[msg.sender] = true;
approval.approvalCount++; // Both operator and admin approvals increment the same counter
}
Impact:
Operators could potentially contribute to admin-level operations
The system doesn't properly enforce the separation between REQUIRED_ADMIN_SIGNATURES (3) and REQUIRED_OPERATOR_SIGNATURES (2)
This could lead to unauthorized execution of sensitive operations
Separate the approval tracking for operators and admins:
Add separate counters for admin and operator approvals
Validate the appropriate threshold based on the operation type
Implement proper role-based approval counting
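The recommended separation can be modeled in a few lines of Python. This is a sketch of the counting logic under the thresholds stated above (3 admin, 2 operator signatures), not the contract implementation:

```python
REQUIRED_ADMIN_SIGNATURES = 3
REQUIRED_OPERATOR_SIGNATURES = 2

class Approval:
    """Model of role-separated multi-sig approval counting."""
    def __init__(self):
        self.admin_count = 0
        self.operator_count = 0
        self.approvers = set()

    def approve(self, who, roles):
        if who in self.approvers:
            raise ValueError("Already approved")
        self.approvers.add(who)
        # Mutually exclusive: one vote per address, admin role takes priority
        if "ADMIN" in roles:
            self.admin_count += 1
        elif "OPERATOR" in roles:
            self.operator_count += 1

    def can_execute(self, admin_level):
        # Validate against the threshold matching the operation type
        if admin_level:
            return self.admin_count >= REQUIRED_ADMIN_SIGNATURES
        return self.operator_count >= REQUIRED_OPERATOR_SIGNATURES

approval = Approval()
approval.approve("op1", {"OPERATOR"})
approval.approve("op2", {"OPERATOR"})
# Two operators suffice for operator-level work...
ok_operator = approval.can_execute(admin_level=False)
# ...but must not unlock an admin-level operation
ok_admin = approval.can_execute(admin_level=True)
```

Keeping the counters and thresholds per role ensures operator signatures can never substitute for the admin quorum on sensitive operations.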
SOLVED: Separate counters for admin and operator approvals were implemented by the BlockDAG team.
// Low
Timelock operations don't have an expiration time, allowing them to be executed indefinitely after the timelock period:
struct TimelockOperation {
bytes32 operationId; // Unique identifier for the operation
uint256 executeTime; // When the operation can be executed
bool executed; // Whether operation has been completed
bytes encodedParams; // Encoded function parameters
}
This means old operations could be executed long after they become irrelevant.
Add an expiration time to the TimelockOperation struct:
struct TimelockOperation {
bytes32 operationId;
uint256 executeTime;
uint256 expiryTime;
bool executed;
bytes encodedParams;
}
SOLVED: Added expiration time to TimelockOperation struct.
// Low
The executeAddCategory function in the TreasuryVesting contract allows the creation of arbitrary categories without validating that they match one of the predefined category types (EARLY_BIRD_CATEGORY, PRESALE_CATEGORY, or TEAM_CATEGORY). While the function includes specific validation logic for known categories, it doesn't prevent the creation of undefined categories:
function executeAddCategory(bytes32 operationId) external onlyRole(ADMIN_ROLE) {
// ... decode parameters ...
// Category-specific validations
if (category == EARLY_BIRD_CATEGORY) {
// Early Bird validations
}
else if (category == PRESALE_CATEGORY) {
// Presale validations
}
else if (category == TEAM_CATEGORY) {
// Team validations
}
// No else clause to prevent undefined categories
}
Impact:
Allows creation of non-standard vesting categories
Could lead to confusion in token distribution
Inconsistent vesting rules across the protocol
Add explicit category validation at the start of the function:
function executeAddCategory(bytes32 operationId) external onlyRole(ADMIN_ROLE) {
TimelockOperation storage operation = timelockOperations[operationId];
require(operation.operationId != bytes32(0), "Operation doesn't exist");
require(!operation.executed, "Already executed");
require(block.timestamp >= operation.executeTime, "Timelock not expired");
operation.executed = true;
// Decode operation parameters
(
bytes32 category,
uint256 start,
uint256 duration,
uint256 totalAmount,
uint256[] memory releaseSteps,
uint256[] memory timeSteps
) = abi.decode(
operation.encodedParams,
(bytes32, uint256, uint256, uint256, uint256[], uint256[])
);
// Add explicit category validation
require(
category == EARLY_BIRD_CATEGORY ||
category == PRESALE_CATEGORY ||
category == TEAM_CATEGORY,
"Invalid category type"
);
// Rest of the function...
}
SOLVED: The BlockDAG team solved this issue as follows:
Added explicit validation of category types at the start
Maintains all existing category-specific validations
Ensures only predefined categories can be created
Keeps the error message clear and descriptive
// Low
The TreasuryVesting contract is designed to be used with the proxy pattern, as evidenced by its inheritance of Initializable and use of the initialize() function. However, the contract does not disable the initializer in its constructor, leaving it vulnerable to potential initialization attacks:
contract TreasuryVesting is
Initializable,
AccessControlUpgradeable,
ReentrancyGuardUpgradeable,
PausableUpgradeable
{
function initialize(
address admin,
address _bdagToken
) public initializer {
// ... initialization logic ...
}
}
Add a constructor that disables initializers:
contract TreasuryVesting is
Initializable,
AccessControlUpgradeable,
ReentrancyGuardUpgradeable,
PausableUpgradeable
{
/// @custom:oz-upgrades-unsafe-allow constructor
constructor() {
_disableInitializers();
}
function initialize(
address admin,
address _bdagToken
) public initializer {
// ... rest of initialization logic ...
}
}
This ensures that the implementation contract cannot be initialized directly, following the proper proxy pattern implementation.
SOLVED: The BlockDAG team solved this issue as follows:
Adds the constructor with _disableInitializers()
Includes the OpenZeppelin annotation for upgrades safety
Maintains all existing initialization logic
Prevents potential reinitialization attacks
Follows best practices for upgradeable contracts
// Informational
The getReleasableAmount function in the TreasuryVesting contract appears to lack validation for the duration field. However, this is not a security concern, as the token release schedule is effectively controlled by the releaseSteps and timeSteps arrays, which are properly validated during category creation:
function getReleasableAmount(address user, bytes32 category) public view returns (uint256) {
CategoryVesting storage vesting = categoryVestings[category];
if (block.timestamp < vesting.start) return 0;
uint256 allocation = userAllocations[user][category];
uint256 totalReleased = userReleased[user][category];
uint256 releasable = 0;
// Release schedule is controlled by timeSteps
for (uint256 i = 0; i < vesting.timeSteps.length; i++) {
if (block.timestamp >= vesting.start + vesting.timeSteps[i]) {
releasable += (allocation * vesting.releaseSteps[i]) / 10000;
}
}
return releasable > totalReleased ? releasable - totalReleased : 0;
}
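The step-based schedule above reduces to simple arithmetic. The following Python model mirrors the contract's loop (release steps expressed in basis points of the allocation, time steps as offsets from the vesting start):

```python
def releasable_amount(now, start, allocation, already_released,
                      release_steps, time_steps):
    """Sum every vested step (basis points of the allocation), then
    subtract what was already released. Mirrors the contract's loop."""
    if now < start:
        return 0
    releasable = 0
    for step_bps, offset in zip(release_steps, time_steps):
        if now >= start + offset:
            releasable += allocation * step_bps // 10_000
    return max(releasable - already_released, 0)

# Hypothetical schedule: 40% at start, 30% after 30 days, 30% after 60 days
steps = [4000, 3000, 3000]
times = [0, 30 * 86400, 60 * 86400]

day0 = releasable_amount(0, 0, 1000, 0, steps, times)                 # 400
day45 = releasable_amount(45 * 86400, 0, 1000, 0, steps, times)       # 700
day45_after_claim = releasable_amount(45 * 86400, 0, 1000, 400, steps, times)  # 300
```

Note that `duration` appears nowhere in this computation, which is why the field is redundant: the arrays alone fully determine the schedule.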
Impact:
Code readability and maintainability could be improved
The duration field in the CategoryVesting struct is effectively redundant
Minor gas cost for storing unused duration value
Consider removing the redundant duration field or using it for validation.
SOLVED: Removed the redundant duration field.
// Informational
The TreasuryVesting contract relies heavily on admin and operator roles for critical functions, creating single points of failure if admin or operator keys are compromised.
It is recommended to implement multi-signature requirements for admin/operator actions.
SOLVED: The BlockDAG team solved this issue as follows:
Requires multiple signatures for admin and operator actions
Tracks approvals per operation
Clears approvals after execution
Emits events for transparency
Maintains existing role-based access control
// Informational
The following constants in the TreasuryVesting contract are not used:
bytes32 public constant OPERATION_UPDATE_SCHEDULE = keccak256("UPDATE_SCHEDULE");
bytes32 public constant OPERATION_PAUSE = keccak256("OPERATION_PAUSE");
bytes32 public constant OPERATION_UNPAUSE = keccak256("OPERATION_UNPAUSE");
It is recommended to remove them for code clarity and maintainability.
SOLVED: The suggested mitigation was implemented.
Halborn used automated testing techniques to enhance the coverage of certain areas of the smart contracts in scope. Among the tools used was Slither, a Solidity static analysis framework. After Halborn verified the smart contracts in the repository and was able to compile them correctly into their ABIs and binary format, Slither was run against the contracts. This tool can statically verify mathematical relationships between Solidity variables to detect invalid or inconsistent usage of the contracts' APIs across the entire code-base.
The security team conducted a comprehensive review of findings generated by the Slither static analysis tool. No major issues were found. The issue related to reentrancy is a false positive as the external call is made to a trusted contract.
Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.