Rob Behnke
July 18th, 2024
At Halborn, we bring an exacting, old-school cybersecurity perspective to the world of blockchain and Web3. Unfortunately, we often find that the approach to security in this new blockchain world is far too relaxed and haphazard: Web3 projects can feel pressured to prioritize speedy deployments over rigorous review.
One of the most worrying things we see is the use of a limited security review as a marketing angle touting a project’s safety. For instance, a blockchain project may declare that it has been “audited” by a security firm, as an assurance to users.
But if this “audit” refers only to an automated vulnerability scan, or even a limited round of penetration testing, it may not be a particularly strong guarantee of user safety. Instead, blockchain projects (and users) need to think of security more comprehensively – particularly, by considering not just code, but also internal culture and practices.
Different types of security audits and tests each have a place in a full security review: they are best deployed at different stages of a project’s development cycle, test for different types of vulnerability, and work for companies of varying sizes. Having a well-planned, mutually reinforcing set of project-appropriate reviews is just as vital for improving security as making any single security evaluation as rigorous as possible.
The following is a high-level guide to the purposes and goals of various types of security reviews; how they work together; which might work best for certain types of organizations; and areas of specific focus for Web3 and blockchain projects.
While the different types of security audits can seem confusing at first, one simple way to think of them is as a progressively more rigorous series of reviews.
Broadly, these start with more automated and procedural reviews, intended to catch the most obvious flaws, such as outdated server patches. These include vulnerability scans and penetration testing. Because they are less reliant on human labor, or are more routine, they can also be more affordable and appropriate for early-stage or smaller projects.
As an organization or project’s security posture matures, these “checklist” reviews should be superseded by more nuanced approaches that simulate the human creativity and determination of a real malicious attack. These include “red teaming” and bug bounties, as well as the blockchain-specific field of smart contract review. Because they rely on human ingenuity to explore novel threats and weaknesses, these can also be more costly, making them most appropriate for larger projects.
Remember that these are more of a sequence to be moved through than a menu of discrete choices. There is no single type of cybersecurity audit that can be considered comprehensive in isolation: teams most committed to security will make use of many different approaches.
Many factors determine whether it makes the most sense for security audits to be conducted by internal vs. external teams, including the skills of existing staff and the available budget.
All else being equal, though, an outside security team can be extremely valuable, simply because they bring a fresh perspective to a project. That makes it easier for them to see problems that an in-house team might overlook out of familiarity – the same way a writer needs an editor to catch their own typos.
Particularly for lean organizations, an outside security review is important because security is a specialized discipline, separate from development. The same people who wrote a program or contract are unlikely to be truly qualified to vet it for security. This goes doubly for the more comprehensive question of organizational security culture and infrastructure – you “don’t know what you don’t know,” and only outside experts can check your organizational blind spots.
Internal teams can productively handle the more standardized, automated, and broad stages of the audit chain, such as deploying vulnerability scanning software. More advanced and specialized tasks, though, may be more effective when conducted by either an outside team or a specialist unit within a large organization. Red teaming in particular relies on both expertise and secrecy (defenders shouldn’t know an attack is a “test”), making it nearly impossible to conduct effectively with an internal team.
At Halborn, our lengthy experience in both cybersecurity generally and blockchain security specifically has made clear that most projects badly underfund their security efforts. That puts founders, employees, and users at more risk than they generally understand. In short, we encourage projects and teams of any size to maximize the rigor and depth of their security process.
However, it’s a simple fact that resources are sometimes limited, forcing small projects to seek the maximum return on their security investment. This makes procedural reviews and automated tools that internal teams can deploy themselves – such as vulnerability scanning and penetration testing – more appealing.
At the same time, small teams stand to benefit more than large organizations from outside perspectives and specialized expertise that is less likely to exist in-house. For some types of products, this may make crowdsourced bug bounties a good fit. That said, a small team building smart contracts specifically is still well advised to engage outside auditors, because the risk of deploying flawed code on-chain is so immense.
Decentralized applications (dApps) and smart contracts have unique risks and affordances that are understood by only a small subset of security professionals. These include not just unusual coding languages such as Solidity, but specific risks related to the automated and hard-to-update nature of decentralized services.
Security review of blockchain-based projects should be entrusted only to specialist firms like Halborn.
Vulnerability scanning is an automated process that uses software agents to search for potential cybersecurity vulnerabilities. Often, a vulnerability scan is the first step of a tiered approach to security.
Vulnerability scanning comes in several varieties based on specific needs. Scans can be “credentialed,” conducted from the viewpoint of a logged-in user, or “non-credentialed,” taking an outsider’s view; generally, a credentialed scan is more thorough. A scan can also be either “passive” or “active”: a passive scan merely catalogs known vulnerabilities, while an active scan attempts to exploit them. The active stance produces more useful insights, but may be operationally disruptive.
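To make these distinctions concrete, here is a minimal, illustrative Python sketch of the kind of probe a non-credentialed scan builds on: connect to a handful of ports from the outside and record any service banners, which a real scanner would then match against a catalog of known vulnerabilities. The hostname and port list below are placeholders, and production tools such as OpenVAS or Nessus do far more than this.

```python
import socket

# Hypothetical target and ports; a production scanner would cover far
# more ports and match results against a database of known CVEs.
TARGET = "scanme.example.com"  # placeholder hostname
PORTS = [22, 80, 443, 3306]

def grab_banner(host: str, port: int, timeout: float = 2.0):
    """Connect from an outsider's (non-credentialed) viewpoint and
    read whatever banner the service volunteers."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                return sock.recv(1024).decode(errors="replace").strip()
            except socket.timeout:
                return ""  # port open, but the service sent no banner
    except OSError:
        return None  # closed, filtered, or unreachable

for port in PORTS:
    banner = grab_banner(TARGET, port)
    if banner is None:
        print(f"{port}: closed or filtered")
    else:
        # A passive-style scan stops at cataloguing what it sees;
        # an active scan would go on to probe the service itself.
        print(f"{port}: open ({banner or 'no banner'})")
```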
Because it is automated, vulnerability scanning is often the least expensive step in the security auditing process. But it is also the least comprehensive, and is intended to spot only the most obvious, known, and pre-catalogued flaws in code, hardware settings, or network architecture.
Penetration testing, or ‘pen testing,’ is a methodical, planned assessment of a body of code or computing infrastructure. It often involves coordination and information sharing between a security team and the testers, is conducted transparently, and is generally focused specifically on code operations. When surveying an organization’s infrastructure, pen testing often includes methodically checking servers for the correct patch updates and access settings.
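As one hedged illustration of that kind of patch check, the Python sketch below reads each server’s SSH banner and compares the reported OpenSSH version against a minimum version set by policy. The hostnames and the version floor are hypothetical, and a real pen test would cover many more services and settings.

```python
import re
import socket

# Hypothetical inventory: host -> minimum acceptable OpenSSH version,
# as set by the organization's patch-management policy.
MIN_VERSIONS = {
    "app1.example.com": (9, 6),
    "db1.example.com": (9, 6),
}

def ssh_version(host: str, port: int = 22, timeout: float = 3.0):
    """Read the SSH banner (e.g. 'SSH-2.0-OpenSSH_9.6p1') and parse
    the OpenSSH major/minor version, or return None if unrecognized."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        banner = sock.recv(256).decode(errors="replace")
    match = re.search(r"OpenSSH_(\d+)\.(\d+)", banner)
    if match is None:
        return None
    return int(match.group(1)), int(match.group(2))

for host, required in MIN_VERSIONS.items():
    try:
        found = ssh_version(host)
    except OSError:
        print(f"{host}: unreachable")
        continue
    if found is None:
        print(f"{host}: non-OpenSSH or unrecognized banner")
    elif found < required:
        print(f"{host}: OUTDATED {found} < required {required}")
    else:
        print(f"{host}: OK, running {found}")
```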
For these reasons, pen testing can often be conducted adequately by an internal team.
However, penetration testing is in many respects not a ‘real world’ scenario, and doesn’t generally entail a 360-degree evaluation of an entire organization’s security culture and response posture.
For that, organizations should consider another option, described below: red teaming.
Bug bounties could be loosely considered the “Uber of cybersecurity.” By publicly offering a reward for anyone who discovers a bug in your code, firms can ‘crowdsource’ hacking talent from around the world, rather than relying on in-house talent or specific contractors.
Bug bounties can serve another purpose as well – diverting what could have been black-hat attacks. An intellectually curious outside hacker who discovers a vulnerability has a choice: exploit it themselves, sell it on the black market, or report it. Even the most ethical hacker will find that choice easier knowing they’ll get a meaningful reward, rather than nothing, for doing the right thing, while also avoiding the risk of pursuit and prosecution.
Offering bug bounties can be particularly powerful in blockchain settings. It is increasingly common for the perpetrators of major crypto hacks to return funds in exchange for consideration from victims. In some cases, that’s because hackers genuinely didn’t understand or predict the scale and severity of their own attack.
Bug bounties can also be a very cost-effective way for a lean project to get some approximation of the fresh, outside perspective provided by third-party red teaming. However, the quality of this “crowdsourced” review process is far less predictable or consistent than full-service red-teaming: As with Uber, you’re effectively dealing with freelancers, and you might not know their reputations or track record.
There may also be reputational risks to using bug bounty programs. Managed bug bounty platforms often impose nondisclosure agreements on participants. This has been controversial, since these NDAs can short-circuit the responsible disclosure practices at the heart of the white-hat ethos.
Some bug bounty programs don’t let bug hunters work on actual production systems, while programs that place certain attack vectors “out of scope” won’t necessarily reflect real-world risks. For all of these reasons, bug bounties are increasingly viewed by experts as a supplemental “nice to have” when it comes to security, rather than a central tool in the arsenal.
Finally, bug bounties may also create a certain degree of risk by their existence, since a genuinely malicious attacker may be able to fall back on bounty-hunting as a defense if caught in the act. Open-invite bug bounties may also be specifically inappropriate for smart contract or decentralized environments, because successful hacks on production systems, even by white-hat researchers, can have unpredictable systemic impacts.
A smart contract audit is a kind of security assessment unique to Web3 and blockchain projects. Smart contracts are automated “vending machines” that execute specific, generally financial functions for users.
Smart contract audits should be trusted only to Web3 and blockchain specialists, for a number of reasons. First, because smart contracts are automated and generally financial in nature, any flaws can (and will) be exploited repeatedly and at high speed, with serious consequences. Second, smart contracts are difficult to update after they are deployed, making it especially vital to thoroughly audit them before they go live. Finally, smart contracts’ interactions with other smart contracts, and their exposure to financial dynamics, can create threat vectors that aren’t seen in more conventional code.
Just a few of the vulnerabilities unique to smart contracts are re-entrancy attacks, in which an external contract calls back into a function before its state updates complete; integer over/underflows, which break contract logic when a value exceeds its variable’s numeric range and wraps around; and frontrunning, in which pending transactions visible in the public mempool give exploiters advance notice of users’ activity, leading to financial losses.
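Re-entrancy in particular is easiest to see in miniature. The toy Python sketch below – deliberately simplified, and not Solidity – mimics the classic flaw: a withdraw routine that pays out through an external callback before updating its own books, so a malicious callback can re-enter and pass the balance check repeatedly.

```python
# Toy model of a re-entrancy flaw; illustrative only. Real attacks
# exploit external-call semantics in on-chain contracts.

class VulnerableVault:
    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, amount, send_callback):
        # BUG: the external call runs BEFORE the balance is updated,
        # so attacker-controlled code can re-enter withdraw() while
        # the balance check still passes.
        if self.balances.get(user, 0) >= amount:
            send_callback(amount)          # attacker-controlled "transfer"
            self.balances[user] -= amount  # state update comes too late

vault = VulnerableVault()
vault.deposit("attacker", 1)
vault.deposit("victim", 9)   # the vault now holds 10 units in total

stolen = 0
calls = 0

def malicious_send(amount):
    """The attacker's payment hook: pocket the funds, then re-enter."""
    global stolen, calls
    stolen += amount
    calls += 1
    if calls < 10:
        vault.withdraw("attacker", 1, malicious_send)

vault.withdraw("attacker", 1, malicious_send)
print(f"stolen: {stolen}")   # prints 10, drained from a 1-unit deposit
```

The standard fix is to update state before making any external call (the “checks-effects-interactions” pattern), or to use a re-entrancy guard.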
Smart contract auditing is still an emerging art form. While every form of security assessment relies heavily on the talent and experience of the team conducting it, this is especially true for smart contract auditing. There is simply less seasoned talent available to meet demand – especially as the current crypto bull market continues to heat up.
Red-teaming is like penetration testing, but with far fewer boundaries. While penetration testing is often focused on code and infrastructure vulnerabilities, red teams can and will exploit social engineering attacks (phishing), hardware trojan (supply chain) attacks, and other full-spectrum approaches to test overall security hygiene. In principle, this can make it much more like a real-world attack.
Compared to pen testing, red teaming is also often more focused, with attackers given a specific file or system to try and compromise. This can allow for more focus on the most critical elements of a system.
For maximum effectiveness, red-teaming is also often conducted secretly – that is, without the advance knowledge of the security team being attacked. This is one reason it may be best conducted by an outside team. A third-party red team will also better simulate a real attack because they won’t have inside knowledge of systems, and because they may think “outside the box” of what an internal team might consider.
Conventional red teaming may not be a fit for all blockchain projects, however, because of the public, immutable nature of on-chain activity. A red team testing custody, for instance, might aim to actually “steal” assets in a way that would be visible on-chain and wouldn’t be obviously a white-hat effort. This could lead to unintended public relations consequences.
In theory, every kind of security review outlined here can add assurance about the safety of nearly any system. That is, if you have the time and money to do it all, there’s very little downside.
We live in the real world, though – a world of constraints and compromises. In that context, security reviews can be prioritized according to stages of development, with more advanced analysis later in the product pipeline; according to organizational capacity; and according to specific applications, particularly when it comes to high-risk, specialized blockchain applications.
If you need help getting the most from your security spend, Halborn can help develop a cybersecurity package that works for you. Get in touch and find out more about how we keep crypto and Web3 projects safe.