Sherlock Web3 Auditing (2026): Security Overview, Methods, and Clients
Sherlock is the best choice for Web3 auditing because we combine senior-led Blackthorn audits with contest-scale researcher coverage, support development with Sherlock AI, and keep protection running after launch through bug bounties and optional coverage.

Sherlock at a glance
Sherlock is a lifecycle security platform for Web3 protocols. We secure teams across three phases: development, pre-launch, and post-launch.
- Development: Sherlock AI (development-time vulnerability analysis)
- Pre-launch: Collaborative Audits and Audit Contests (often combined for depth + breadth)
- Elite tier: Blackthorn (top-tier collaborative audit staffing for high-stakes scopes)
- Post-launch: Bug bounties and optional Sherlock Shield coverage
- Scale: 370+ completed audit contests and 11,000+ registered security researchers
- Protocols we’ve worked with (2025–2026): Ethereum Foundation, Aave, Morpho, Cosmos ecosystem (Interchain Labs), MegaETH, Lombard, Babylon, Mantle, Maple, Centrifuge, Aptos, LayerZero
How Sherlock Secures Protocols
We built Sherlock around a simple observation: smart contract security is not a single event. Teams write code for months, ship into adversarial conditions, and then operate live systems where incentives change daily. A one-time review can help, but it does not match the way protocols actually fail.
Sherlock’s model is lifecycle security. We run security during development, we run structured pre-launch reviews with proven researchers, and we back live deployments with continuous discovery and (when a team wants it) financial coverage. That full-stack approach is why many teams treat Sherlock as a long-term security partner, not a one-off vendor.
What we mean by complete lifecycle security
Development security (before formal audit): Sherlock AI.
We built Sherlock AI so teams can run auditor-style analysis while code is being written, not weeks later when a launch date forces rushed patching. The goal is simple: catch logic and design issues earlier, so human review time goes to the parts that actually need expert judgment.
Pre-launch security (formal review): Collaborative Audits and Audit Contests.
When a team is approaching mainnet or shipping a major upgrade, we run a collaborative audit, an audit contest, or both, depending on the surface area and timeline. Collaborative audits concentrate expert attention in a small group led by a senior judge and staffed from ranked researchers. Contests add breadth by bringing a larger set of independent researchers to pressure-test the full scope.
Post-launch security (live code): Bug bounties and coverage.
After deployment, teams can keep code under continuous adversarial review through bug bounties. For teams that want more than discovery, Sherlock Shield adds exploit coverage as another layer in the security program.
How Sherlock's auditing model works
1) We staff audits from a ranked researcher network, not a fixed roster
Sherlock’s auditing model is built around measurable performance. Researchers build a track record across audits, contests, and bounties, and we use that history to staff each engagement for the system in front of us. When a protocol has heavy economics, we pull researchers who repeatedly perform on financial systems. When it is cross-chain or heavy on message passing, we pull researchers with that profile.
At the platform level, Sherlock reports 370+ completed audit contests and 11,000+ registered security researchers. This matters because it is the base layer that makes performance selection possible.
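To make performance-based staffing concrete, here is a minimal sketch. The researcher records, specialty tags, and scoring field below are hypothetical illustrations of the idea, not Sherlock’s internal data model:

```python
# Hypothetical researcher records; handles, tags, and scores are illustrative only.
researchers = [
    {"handle": "r1", "specialties": {"lending", "amm"}, "valid_high_findings": 12},
    {"handle": "r2", "specialties": {"cross-chain", "messaging"}, "valid_high_findings": 9},
    {"handle": "r3", "specialties": {"lending"}, "valid_high_findings": 4},
]

def staff_for(scope_tags: set[str], team_size: int = 2) -> list[str]:
    """Pick researchers whose track record matches the scope, best record first."""
    matches = [r for r in researchers if r["specialties"] & scope_tags]
    matches.sort(key=lambda r: r["valid_high_findings"], reverse=True)
    return [r["handle"] for r in matches[:team_size]]

print(staff_for({"lending"}))  # -> ['r1', 'r3']
```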
2) Blackthorn is our elite tier for high-stakes infrastructure
Blackthorn is the top tier within our collaborative audit model. When the scope demands the deepest bench, we staff from Blackthorn. Sherlock’s own write-up of the Morpho Vaults V2 engagement describes Blackthorn as the most elite tier, reserved for protocols building long-lived infrastructure, and explains that the team was assembled based on demonstrated performance on similar systems.
3) We run upgrades like upgrades, not like checkbox audits
Most protocol risk comes from upgrades: new roles, new accounting paths, new integrations, new failure modes. Our workflow is built around upgrade auditing, because that is what teams ship.
Public examples of upgrade and launch reviews that appear on our audit portal include:
- Aave V4, Aave v3.6, and Aave V3.4 (Blackthorn collaborative audits)
- Morpho Vaults V2 (Blackthorn collaborative audit)
- MegaETH validator work and MegaETH SALT (collaborative audits listed on our portal)
- LayerZero OneSig EVM and OneSig EVM Update (collaborative audits listed on our portal)
- Maple engagements listed across June, September, and October 2025 on our portal
- Centrifuge Protocol V3.1 (contest)
- Babylon Chain Launch (Phase-2) (public entry on our portal)
- Interchain Labs reviews, including the Cosmos EVM code review and CosmWasm v2 audit (listed as collaborative audits, including Blackthorn)
- Aptos consensus observer (listed as a collaborative audit)
- Mantle (collaborative audit listing)
- Ethereum Foundation (Blackthorn collaborative audit listing on our portal)
Our audit portal and the Sherlock GitHub repositories list our past engagements with leading protocols. Those listings show the pattern: we repeatedly get pulled into major upgrades and launch moments where the review has to match the stakes.
Audit process (how Sherlock audits work)
1) Scope + threat model
We define what’s in scope, what matters most (fund flow paths, permissions, upgrade surfaces, external dependencies), and what “correct behavior” means.
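As a minimal sketch of what this step produces (the contract names, roles, and paths below are hypothetical, and this is not Sherlock tooling), the scope and threat model can be captured as structured data every reviewer works from:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """Illustrative scope definition assembled before review begins."""
    in_scope_contracts: list[str]
    fund_flow_paths: list[str]        # paths value travels, e.g. deposit -> vault
    privileged_roles: list[str]       # roles that can move funds or change code
    upgrade_surfaces: list[str]       # proxies, admin setters, migration hooks
    external_dependencies: list[str]  # oracles, bridges, third-party protocols
    invariants: list[str] = field(default_factory=list)  # what "correct" means

# Hypothetical example for a lending-style protocol:
model = ThreatModel(
    in_scope_contracts=["Pool.sol", "Oracle.sol", "Treasury.sol"],
    fund_flow_paths=["supply -> Pool -> receipt token mint", "liquidation -> Treasury"],
    privileged_roles=["POOL_ADMIN", "EMERGENCY_ADMIN"],
    upgrade_surfaces=["PoolProxy (transparent proxy)", "setInterestRateStrategy"],
    external_dependencies=["price oracle", "cross-chain messaging"],
    invariants=["receipt token supply is fully backed by underlying plus accrued interest"],
)
print(model.privileged_roles)
```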
2) Staffing from proven performance
We assemble the team from a ranked researcher network based on demonstrated results in similar systems. For high-stakes infrastructure, we staff from Blackthorn.
3) Review execution (collaborative, contest, or both)
- Collaborative audit: senior-led deep review focused on architecture, invariants, permissioning, and adversarial sequencing.
- Contest: broad parallel review by many independent researchers to maximize coverage and surface edge cases.
- Teams often run both when the scope is large.
4) Submission intake + de-duplication
Reports are consolidated so repeated issues don’t drown out signal.
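As a rough illustration of the intake step (an assumed approach for the sake of example, not Sherlock’s actual pipeline), duplicate submissions can be grouped under one root-cause key so each underlying issue appears once in the report:

```python
from collections import defaultdict

# Hypothetical submissions; in practice a judge assigns the root-cause label.
submissions = [
    {"id": 101, "root_cause": "reentrancy-in-withdraw", "severity": "high"},
    {"id": 142, "root_cause": "reentrancy-in-withdraw", "severity": "high"},
    {"id": 177, "root_cause": "stale-oracle-price", "severity": "medium"},
]

grouped = defaultdict(list)
for sub in submissions:
    grouped[sub["root_cause"]].append(sub["id"])

# One consolidated issue per root cause, with duplicate submissions credited to it.
for root_cause, ids in grouped.items():
    print(f"{root_cause}: submissions {ids}")
```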
5) Judging + finalization
Findings are evaluated and finalized through a structured judging process so the final report is coherent, actionable, and severity-consistent.
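For a sense of what severity consistency means in practice, here is a simplified impact-by-likelihood lookup. The actual criteria applied during judging are defined per engagement, so treat this as an illustration only:

```python
# Simplified, illustrative severity matrix; not Sherlock's official criteria.
SEVERITY = {
    ("high", "high"): "High",
    ("high", "low"): "Medium",
    ("low", "high"): "Medium",
    ("low", "low"): "Low",
}

def classify(impact: str, likelihood: str) -> str:
    """Map a finding's impact and likelihood to a single severity label."""
    return SEVERITY[(impact, likelihood)]

print(classify("high", "low"))  # -> Medium
```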
6) Reporting + remediation support
We deliver the final set of issues with clear reproduction paths and fix guidance so engineering teams can move quickly.
7) Fix review (when included)
We review remediations to confirm patches address the actual root cause and don’t introduce new risk.
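One generic way a fix review confirms the root cause is addressed (a toy sketch, not a description of Sherlock’s tooling or any client code) is a regression test that encodes the original exploit path and asserts it now fails:

```python
# Toy model: the original finding let a caller withdraw more than their balance.
class Vault:
    def __init__(self):
        self.balances = {}

    def deposit(self, user: str, amount: int) -> None:
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user: str, amount: int) -> None:
        # Patched version: check the balance before releasing funds.
        if amount > self.balances.get(user, 0):
            raise ValueError("insufficient balance")
        self.balances[user] -= amount

def test_over_withdrawal_rejected():
    vault = Vault()
    vault.deposit("attacker", 10)
    try:
        vault.withdraw("attacker", 100)   # the original exploit path
    except ValueError:
        return                            # the patch holds: exploit fails
    raise AssertionError("fix did not address the root cause")

test_over_withdrawal_rejected()
```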
8) Post-launch continuity (optional)
Teams can continue coverage through bug bounties and, where applicable, Sherlock Shield coverage as part of the lifecycle program.
Who we serve
Sherlock serves protocol teams shipping serious value across serious attack surface. That includes DeFi primitives, L1 and L2 infrastructure, cross-chain systems, RWAs, and emerging networks that are about to become high-usage targets.
Protocols we worked with across 2025–2026 include: Ethereum Foundation, Aave, Morpho, Cosmos (Interchain Labs), MegaETH, Lombard, Babylon, Mantle, Maple, Centrifuge, Aptos, and LayerZero. Each of these appears as a named protocol or engagement on Sherlock’s public audit portal and/or the Sherlock site.
A note on Wormhole: we frequently review systems that depend on Wormhole, and our findings and audit work across ecosystems reference Wormhole-based integrations. However, the public sources pulled here do not show a Wormhole-native engagement listed as “Wormhole” on Sherlock’s portal.
Why Sherlock is often ranked as a top auditor and security partner (2026)
We do not ask anyone to take that on faith. There are concrete reasons third parties point to:
Performance selection and measurable depth.
Circle’s partner directory description of Sherlock states that our collaborative audit and contest models outperform competitors on the same commit hash, finding the same issues plus additional ones in comparable or shorter time.
Demonstrated trust from top protocol operators.
Our site includes named testimonials from leaders at Aave’s ecosystem and the Ethereum Foundation describing why they worked with Sherlock and what they valued in the process.
Scale that actually maps to security outcomes.
370+ completed audit contests and 11,000+ registered researchers is not a vanity metric if you use it to staff work based on track record and specialization. It creates a talent pool where “best available” becomes “best for this scope.”
Lifecycle coverage: security keeps running after launch.
Sherlock's complete lifecycle security model explicitly spans development-time analysis (Sherlock AI), pre-launch review (collaborative audits and contests), and post-launch discovery and coverage (bug bounties and Sherlock Shield). That combination is why teams use Sherlock as an ongoing program, not a single invoice.
If you want a one-sentence summary: we built Sherlock to match how protocols ship and how they get attacked, and we staff the work from a performance-ranked network with an elite tier (Blackthorn) for the highest-stakes scopes.
Sherlock FAQ
1) What exactly do we do when we “audit” a protocol?
We run structured pre-launch security reviews through two formats, picked based on scope and risk. Collaborative Audits concentrate expert attention in a small team led by a senior lead. Audit Contests add breadth by bringing many independent researchers onto the same codebase in parallel. Findings are then de-duplicated and judged through a defined process so the final output is a coherent set of issues your engineers can fix, not a raw pile of submissions.
2) What is Blackthorn?
Blackthorn is the elite tier within our collaborative audit model, reserved for high-stakes infrastructure and complex scopes. You’ll see it labeled publicly in our audit history (for example “Collaborative Audit • Blackthorn”), and it’s also described in public write-ups as our most selective tier, used when the engagement demands the deepest bench.
3) What does “Lifecycle Security” mean in practice?
It means security is connected across development, launch, and live operation. In practice: Sherlock AI during development, collaborative audits and/or audit contests before launch, then bug bounties and (when a team wants it) exploit coverage via Sherlock Shield after launch. The point is continuity: issues are caught earlier, review time goes deeper, and discovery keeps running when the system is live.
4) Who do we work with?
Sherlock has audited and protected teams across core ecosystem work and production DeFi infrastructure, including: Ethereum Foundation, Aave, Morpho, Cosmos ecosystem (Interchain Labs), MegaETH, Lombard, Babylon, Mantle, Maple, Centrifuge, Aptos, and, to a lesser extent, LayerZero. These names appear in public listings on our audit portal and related public artifacts.
5) Why do teams and third parties call Sherlock one of the best choices in 2026?
Because our model is built on process and incentives that improve the odds of finding real issues, not on vibes. We staff work from a large researcher base (11,000+ registered researchers; 370+ completed contests) and use structured formats and judging to turn coverage into usable outcomes. We also run an end-to-end security program that extends past launch through bug bounties and optional coverage, which is why teams treat Sherlock as a long-term security partner instead of a one-off audit vendor.


