Rob Behnke
May 10th, 2023
The average developer introduces at least 10 bugs per 1,000 lines of code, so no amount of pre-launch testing and auditing guarantees error-free software. For Web3 projects with user funds at stake, the question is: “How can we detect and fix bugs that surface after deploying publicly?”
Besides a vulnerability disclosure program (VDP), a bug bounty is a popular approach to managing vulnerabilities in blockchain applications. Bug bounties use financial rewards to incentivize members of the security community to disclose flaws in a company’s software, making it easier for developers to fix critical errors in dApps before malicious actors exploit them.
This guide explains bug bounties, including how they work and what benefits your Web3 project can gain from running a bug bounty program. We’ll also provide some real-world examples of successful bug bounties.
A bug bounty is a financial reward paid out to ethical hackers and security researchers for finding and disclosing vulnerabilities in software systems. Bug bounties usually reflect the severity of vulnerabilities discovered and provide the opportunity to patch software bugs before a malicious exploit occurs.
Bug bounties can be considered a form of ethical hacking because hackers generally have the organization’s permission to probe for weaknesses in its systems. Importantly, bug bounties complement other Web3 cybersecurity activities such as pentesting, smart contract auditing, and vulnerability disclosure programs (VDPs).
Bug bounty programs can be private or public, and managed in-house or coordinated through a third-party platform. We’ll describe these options after highlighting the general framework used by all bug bounty programs (check Binance’s bug bounty if you need an example to grasp these concepts):
Since crypto bug bounty programs attract many participants, you may want to define a scope specifying which systems are eligible for testing. This improves productivity since hackers know where to direct their efforts, and reduces costs for companies (you don’t pay for bugs discovered in out-of-scope assets). It also ensures that bug hunters don’t accidentally break production systems outside the scope of the engagement.
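In practice, a scope boils down to a simple eligibility check against lists of in-scope and out-of-scope assets. The sketch below illustrates the idea; the asset names and categories are hypothetical, not from any real program.

```python
# Hypothetical scope definition for a Web3 bug bounty program.
# Asset names below are illustrative placeholders.

IN_SCOPE = {
    "staking-contract": "smart_contract",
    "bridge-contract": "smart_contract",
    "api.example.xyz": "web_api",
}

OUT_OF_SCOPE = {
    "blog.example.xyz",  # marketing site: no user funds at risk
    "testnet-faucet",    # throwaway infrastructure
}

def is_eligible(asset: str) -> bool:
    """A submission only qualifies for a payout if the asset is in scope."""
    return asset in IN_SCOPE and asset not in OUT_OF_SCOPE

print(is_eligible("bridge-contract"))   # True
print(is_eligible("blog.example.xyz"))  # False
```

Anything not explicitly listed is treated as out of scope by default, which mirrors the cost-control point above: no payout for bugs in unlisted assets.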
Alternatively, a project may declare “open season on bugs,” which authorizes hackers to test all parts of a protocol’s attack surface. This would naturally increase the number of bug submissions and result in higher bounty payouts (unless the attack surface is limited).
Every Web3 bug bounty program has an established process for hackers to submit bugs. Hackers are typically asked to provide details of the bug—including the severity and location—and a “proof of concept” showing how to exploit the vulnerability. The submission procedure may also outline the organization’s approach toward evaluating vulnerability disclosures and providing feedback on submissions.
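A submission can be modeled as a structured record capturing the details described above. The fields below are a hypothetical sketch, not any platform’s actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

@dataclass
class BugReport:
    title: str
    severity: Severity
    affected_asset: str        # e.g., a contract name or endpoint
    description: str           # where the bug is and why it matters
    proof_of_concept: str      # steps or code demonstrating the exploit
    status: str = "submitted"  # updated as the team triages and responds

# Example submission (contents are illustrative only)
report = BugReport(
    title="Reentrancy in withdraw()",
    severity=Severity.CRITICAL,
    affected_asset="VaultContract",
    description="withdraw() sends ETH before updating balances.",
    proof_of_concept="Attacker contract re-enters withdraw() from its fallback.",
)
print(report.severity.value)  # critical
```

Requiring a proof of concept up front makes triage faster: the team can reproduce the issue instead of debating whether it is exploitable.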
Unlike contracted pen testers, bug bounty hunters have few legal obligations, making it important to publish a policy for ethical hacking/bug hunting. Guidelines may cover issues like accepted types of vulnerability submissions and approved methods for finding bugs. For example, companies may exclude testing methods that could affect day-to-day operations (e.g., DDoS attacks).
You’ll want to define the “bounty pool,” or total amount dedicated to rewarding vulnerability disclosures. It’s also common to set payout tiers for different classes of vulnerabilities and a maximum amount a bounty hunter can receive for disclosing a single bug. This promotes financial transparency and helps businesses avoid overspending on bug bounties.
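Put together, a reward schedule is a severity-to-amount mapping with two caps: a per-bug maximum and the remaining pool. The amounts below are illustrative, not recommendations.

```python
# Hypothetical payout schedule; all amounts are illustrative USD figures.
BOUNTY_POOL = 500_000   # total budget dedicated to the program
MAX_PER_BUG = 100_000   # cap on any single disclosure

PAYOUT_BY_SEVERITY = {
    "low": 1_000,
    "medium": 5_000,
    "high": 25_000,
    "critical": 100_000,
}

def payout(severity: str, pool_remaining: float) -> float:
    """Reward for a validated bug: severity tier, capped per-bug and by the pool."""
    amount = min(PAYOUT_BY_SEVERITY[severity], MAX_PER_BUG)
    return min(amount, pool_remaining)

print(payout("critical", BOUNTY_POOL))  # 100000
print(payout("high", 10_000))           # 10000 (pool nearly exhausted)
```

Capping by the remaining pool is what keeps total spend predictable even if an unexpected number of valid bugs come in.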
Typically, payouts occur after the client evaluates and validates a bug report submitted by a hacker. This pay-per-vulnerability model means companies only pay if a bug is found, unlike traditional testing approaches where companies pay even if no weaknesses are discovered.
Participation in a private bug bounty program is “invite-only” and limited to security researchers pre-approved by an organization (i.e., the client). Private bug bounties are useful for organizations experimenting with crowdsourced security solutions. For example, your company can invite ethical hackers based on background, KYC compliance, skills, and geographical location.
A public bug bounty has no restrictions on who can find and disclose bugs. That said, some third-party platforms like HackerOne or Bugcrowd may require ethical hackers to sign up for membership before joining bounties hosted on the platform. Even so, running your bug bounty there gives access to all hackers registered on the platform.
An in-house bug bounty requires members of the project team to coordinate all aspects of the program. For example, you’ll need someone to organize and review bug submissions (triaging). Other responsibilities include collaborating with ethical hackers on software patching and retesting, and coordinating payouts for approved bug submissions.
A managed approach to bug bounties outsources the administrative tasks of running a Web3 bug bounty program to a third party (e.g., HackerOne, Immunefi, or HackenProof). These third parties operate as software-as-a-service (SaaS) platforms that connect ethical hackers (aka “bounty hunters”) to companies. They may also assist in setting up the bug bounty program—for example, by providing advice related to project guidelines, budgets/payouts, and relationships with ethical hackers.
Other features of a managed bug bounty program include:
Triaging: Third-party bug bounty platforms will typically evaluate incoming vulnerability reports on behalf of clients and filter out trivial or unacceptable submissions.
Communication: Bug bounty platforms act as intermediaries between organizations and ethical hackers and manage communications between both parties.
Compensation: Clients delegate responsibility for coordinating bounty payouts to the platform.
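The triaging step above can be sketched as a filter that drops duplicates and reports below a client-chosen severity threshold. The function and thresholds are a hypothetical illustration of what a platform might run before forwarding reports.

```python
# Hypothetical triage filter a bug bounty platform might apply before
# forwarding reports to the client; thresholds are illustrative.

SEVERITY_RANK = {"informational": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def triage(reports: list, min_severity: str = "low") -> list:
    """Drop duplicates and reports below the client's minimum severity."""
    seen_titles = set()
    accepted = []
    # Process highest-severity reports first so the original (non-duplicate)
    # credited report is the most severe version of each finding.
    for r in sorted(reports, key=lambda r: -SEVERITY_RANK[r["severity"]]):
        if SEVERITY_RANK[r["severity"]] < SEVERITY_RANK[min_severity]:
            continue  # trivial: filtered out
        if r["title"] in seen_titles:
            continue  # duplicate submission
        seen_titles.add(r["title"])
        accepted.append(r)
    return accepted

reports = [
    {"title": "Reentrancy in withdraw()", "severity": "critical"},
    {"title": "Reentrancy in withdraw()", "severity": "critical"},  # duplicate
    {"title": "Verbose error message", "severity": "informational"},
]
print(len(triage(reports)))  # 1
```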
Managed bug bounties are the standard, particularly because project teams can save time and effort and focus on patching known vulnerabilities. Nonetheless, you can still run your bug bounty in-house if you have enough resources and manpower available.
1. Access to diverse talent: Crypto bug bounty programs attract a diverse group of security experts with different skills and professional backgrounds. This is unlike traditional approaches, such as pen testing, which may rely on small teams with narrow specialties.
2. Continuous testing: Bug bounty programs can be active year-round and allow organizations to stay ahead of emerging threats. This is particularly important in Web3 where malicious hackers are always devising new means of exploiting hidden vulnerabilities in smart contracts to steal funds.
3. Flexibility: Crypto bug bounties provide flexibility in terms of how much to spend on security and what assets to secure, which is useful for early-stage projects unable to hire full-time pen testers/auditors. And since bounties pay for valid vulnerabilities, your cybersecurity budget can be flexible—as opposed to paying a fixed fee upfront for testing/auditing services.
4. Incentivizing responsible disclosures: With bug bounties, ethical hackers have a financial motivation to discover and disclose vulnerabilities in smart contracts. Bug bounties also have an element of competition that encourages participation—even if just to gain recognition in the industry.
5. Realistic threat assessment: Ethical hackers operate with an attacker’s mindset, making bug bounties effective for realistic testing. Attaching financial rewards to responsible disclosures also reduces the possibility of hackers exploiting errors in your code for personal gain.
Bug bounties have a long and illustrious history—from Netscape offering one of the first bounties in the 1990s to modern-day companies like Google and Meta running million-dollar bug bounty programs. Bug bounties have also made their way to Web3 where protocols, DAOs, and other projects offer large sums in exchange for responsibly disclosing vulnerabilities.
In many cases, like the Port Finance bug, the amount offered as a bounty is a fraction of the funds at risk. This, coupled with Web3’s emphasis on community participation and open-source development, makes bug bounties valuable for crypto-native organizations.
Bug bounties also have drawbacks—for example, they aren’t useful for proving compliance or coverage to regulators and users and may incur significant overhead (especially with in-house programs). This is why you should consider making bug bounties part of a more comprehensive cybersecurity strategy that includes auditing and pentesting.