FEATURED POST
January 5, 2026
Web3 Security in 2026: Lessons From 2025, Projections Ahead
A data-backed look at what 2025 exposed in Web3 security: how losses happened, why existing practices fell short, and what changes are likely to define security programs in 2026. Sherlock's projections for the year ahead.

Key Summary - 2025 losses were driven by concentration: a small number of incidents did outsized damage, including the ~$1.5B Bybit breach, pushing total stolen value to roughly $3.4B. Security effort didn’t prevent losses because many failures sat outside late-stage code review: privileged access, signing infrastructure, third-party dependencies, and upgrade pathways. In 2026, the winning teams will be those that can demonstrate a full security program: continuous validation of system behavior, operational controls that reduce blast radius, and AI-assisted detection tied to disciplined triage and remediation.
2025 ended with roughly $3.4 billion stolen from Web3 protocols. That puts it among the most expensive years on record, and the losses were not spread evenly. More than $2.1 billion had already been taken by mid-year, driven by a handful of massive incidents rather than a steady accumulation of smaller bugs. The Bybit breach alone accounted for roughly $1.5 billion of that total.
From where we sit at Sherlock, the pattern is unmistakable. Risk concentrates fast. When it shows up, it shows up at scale. And the teams getting hit were not ignoring security: they were investing heavily in it. The problem was not effort. It was that attackers kept finding leverage in places traditional security programs were not built to cover.
The Real Story: Concentration, Not Volume
CertiK's 2025 reporting tells the story clearly: $1.537 billion lost in February alone across 58 incidents, with Q1 totaling $1.67 billion across roughly 200 incidents. Chainalysis attributes about $2.02 billion of the year's theft to DPRK-linked actors, which matters because it shows sophisticated operators targeting the highest-value systems with precision.
This is not a story about "more hacks." It is a story about a small number of incidents doing catastrophic damage. When concentration drives the loss curve, it changes how you should think about security spend. You can audit every line of code and still get destroyed if the real failure mode sits in operational access, privileged signing keys, or an upstream dependency that nobody reviewed.
What 2025 made clear is that the industry kept treating security like a checkpoint (something you pass through before launch) when the actual risk lived in continuous operational reality. Code audits matter, but they were never going to stop a compromised multisig or a supply chain insertion.
AI Moved From Demo to Workflow
2025 was also the year AI-based security analysis stopped being a novelty. Teams started using it as a normal part of development because the economics forced the shift. Protocols ship faster, codebases get more interconnected, and waiting for a late-stage audit keeps getting more expensive. Tooling that surfaces issues earlier became valuable even when imperfect.
But the same year also exposed the limits of tools-only thinking. When a single incident can cost billions, "we ran the scanner" is not a security posture. The question is whether your process catches the right classes of risk before they become irreversible, and whether you can prove the work was done in a way that stands up to external scrutiny.
The gap was not tooling. It was process. Teams that treated AI as part of a larger detection and response loop got value. Teams that treated it as a replacement for human judgment found out what that judgment was worth when things broke.
Security Programs Started Looking Like Programs
By spring, the heat was visible across every tracker. Industry reports put year-to-date losses at $1.742 billion before the year reached its second half. That pressure forced a shift in how the industry talked about security. Less focus on "we did an audit" as a credential, more focus on whether you could show a coherent program that reduces the probability of catastrophic failure.
We saw it in real engagements. Teams would arrive with serious code review investment, then the actual conversation would move to privileged access controls, deployment workflows, signing permissions, third-party integrations, and blast radius planning. That is not abstract. It is teams learning that the highest-leverage compromises kept happening outside the code they reviewed most carefully.
Where This Goes in 2026
1. Security Buyers Will Stop Accepting Partial Answers
The old boundary between code review and everything around it was always artificial, and 2025 proved it. In 2026, "we did a review" will keep losing credibility unless it sits inside a broader set of controls that teams can explain, test, and keep consistent as the protocol evolves.
What changes: Buyers will start asking harder questions about operational security, not just code quality. Who can sign transactions? What happens during an upgrade? Which third parties can touch production systems? What is the blast radius if one control fails? Teams that cannot answer those questions clearly will lose deals to teams that can.
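The first of those questions is checkable on-chain today. Here is a minimal sketch, assuming the protocol's privileged operations run through a standard Safe multisig (the RPC URL and Safe address below are placeholders, not values from any incident discussed here):

```typescript
// Sketch: verify "who can sign transactions?" on-chain for a Safe multisig.
// SAFE_ADDRESS and RPC_URL are placeholders, not values from this post.
import { ethers } from "ethers";

const RPC_URL = "https://eth.example-rpc.com"; // placeholder
const SAFE_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

// Minimal ABI fragment for the standard Safe owner-management view functions.
const SAFE_ABI = [
  "function getOwners() view returns (address[])",
  "function getThreshold() view returns (uint256)",
];

async function auditSigners(): Promise<void> {
  const provider = new ethers.JsonRpcProvider(RPC_URL);
  const safe = new ethers.Contract(SAFE_ADDRESS, SAFE_ABI, provider);

  const owners: string[] = await safe.getOwners();
  const threshold: bigint = await safe.getThreshold();

  console.log(`Signers (${owners.length}):`, owners);
  console.log(`Threshold: ${threshold} of ${owners.length}`);

  // Example control: flag a 1-of-N setup, where one compromised key
  // is a full compromise of every privileged action the Safe can take.
  if (threshold === 1n) {
    console.warn("Blast radius warning: a single key controls this Safe.");
  }
}

auditSigners().catch(console.error);
```

The other questions yield to the same treatment: upgrade admins, allowlisted integrations, and pause authorities are all readable state, and buyers increasingly know it.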
What this means for protocols: Security becomes a program you can demonstrate, not a badge you display. That means documented controls, repeatable processes, and proof that the work is ongoing. It also means being able to show what changed when new code shipped, new integrations went live, or new team members got access.
2. The Technical Failure Modes Keep Moving Toward System Behavior
The hard problems in 2025 were not isolated bugs. They were interaction risks: value flows across multiple contracts, shared state assumptions, external call vulnerabilities, upgrade path failures, and invariants that broke when the full system went live under load.
What changes: In 2026, the separating line will be whether your security process can reason about system behavior continuously as you ship. That is different from hunting for bugs in static snapshots. It means understanding how components compose, how permissions propagate, and how assumptions hold or break when contracts interact in production.
What this means for protocols: Security has to move left in the development cycle, but also has to stay active in production. Pre-launch reviews still matter, but they are not sufficient. The teams that win will be the ones that can validate system behavior in staging environments that mirror production risk, not just in isolated test cases.
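To make "validate system behavior" concrete, here is a minimal monitoring sketch, assuming a hypothetical ERC-4626-style vault (the address, RPC endpoint, and the solvency invariant itself are illustrative stand-ins, not any specific protocol). The point is that the invariant is asserted against live, composed state on every block rather than only in an isolated test case:

```typescript
// Sketch: continuous validation of a system-level invariant against live state.
// VAULT_ADDRESS, RPC_URL, and the invariant itself are hypothetical examples.
import { ethers } from "ethers";

const RPC_URL = "https://eth.example-rpc.com"; // placeholder
const VAULT_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

// ERC-4626 view functions: assets actually held vs. shares issued.
const VAULT_ABI = [
  "function totalAssets() view returns (uint256)",
  "function totalSupply() view returns (uint256)",
  "function convertToAssets(uint256 shares) view returns (uint256)",
];

async function main(): Promise<void> {
  const provider = new ethers.JsonRpcProvider(RPC_URL);
  const vault = new ethers.Contract(VAULT_ADDRESS, VAULT_ABI, provider);

  provider.on("block", async (blockNumber: number) => {
    try {
      const totalAssets: bigint = await vault.totalAssets();
      const totalSupply: bigint = await vault.totalSupply();
      // Invariant (example): all outstanding shares must remain redeemable,
      // i.e. the assets owed to shareholders never exceed assets held.
      const owed: bigint = await vault.convertToAssets(totalSupply);
      if (owed > totalAssets) {
        // In a real program this would page an on-call owner, not just log.
        console.error(
          `Invariant broken at block ${blockNumber}: owed=${owed} > held=${totalAssets}`
        );
      }
    } catch (err) {
      console.error("Invariant check failed:", err);
    }
  });
}

main().catch(console.error);
```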
3. AI Becomes Table Stakes, But Process Determines Outcomes
We are already seeing AI-based analysis become routine during development. It surfaces issues earlier and reduces the dead time between writing code and learning what broke. In 2026, the gap will not be "who used AI." It will be who built the loop around it.
What changes: The differentiator will be triage discipline, repeatable methodologies, clear ownership of findings, and proof that issues got resolved before they became public risk. AI changes the timing of detection. Process determines whether that timing translates into fewer real losses.
What this means for protocols: If you are using AI tools but do not have a system for turning findings into fixes, you are generating noise without reducing risk. The value is in the workflow: how findings get prioritized, who owns remediation, how you verify the fix, and how you prevent the same class of issue from reappearing in future code.
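One way to picture that workflow is as data. The sketch below is illustrative, not a standard: the field names are ours, but any loop that actually closes needs roughly these facts recorded for every finding:

```typescript
// Sketch: the minimum a finding record needs for disciplined triage.
// Field names and states are illustrative, not any particular standard.

type Severity = "critical" | "high" | "medium" | "low" | "informational";

// Every state transition should be cheap to audit after the fact.
type FindingStatus =
  | "reported"    // surfaced by a tool, reviewer, or researcher
  | "triaged"     // severity confirmed, owner assigned
  | "fix-open"    // remediation in progress
  | "fix-merged"  // change landed, not yet verified
  | "verified"    // fix independently confirmed (retest, not just review)
  | "rejected";   // false positive, with a recorded justification

interface Finding {
  id: string;
  source: "ai-analysis" | "audit" | "contest" | "internal" | "bug-bounty";
  severity: Severity;
  status: FindingStatus;
  owner: string;              // a named person, not a team alias
  affectedContracts: string[];
  regressionTest?: string;    // test preventing the same class from recurring
  verifiedBy?: string;        // should differ from owner for critical issues
}

// Example gate: nothing ships while a critical finding lacks verification.
function blocksRelease(findings: Finding[]): Finding[] {
  return findings.filter(
    (f) =>
      f.severity === "critical" &&
      f.status !== "verified" &&
      f.status !== "rejected"
  );
}
```

A gate like blocksRelease is the difference between AI output as noise and AI output as risk reduction: findings either reach a verified state or they block the ship.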
2026: The Year of Complete Lifecycle Security
If 2025 exposed anything clearly, it was that security breaks where ownership breaks. Code was reviewed. Tools were run. Budgets were spent. Yet the highest-impact failures happened across boundaries that were never treated as one system: development, deployment, access, upgrades, integrations, and response.
2026 is the year those boundaries collapse.
Complete lifecycle security means treating protocol risk as continuous, not episodic. It starts before code is written, continues through development and launch, and stays active as systems evolve in production. The objective is not to eliminate bugs in isolation. It is to reduce the probability that any single failure mode can cascade into catastrophic loss.
Practically, this shows up as security work that persists across phases. Design assumptions are documented and tested early. Development includes continuous analysis that reasons about system behavior, not just individual contracts. Deployment workflows are reviewed with the same scrutiny as Solidity code. Privileged access, signing infrastructure, and third-party dependencies are treated as first-class risk surfaces. When incidents happen elsewhere in the ecosystem, teams reassess whether the same class of failure could exist in their own system.
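On the upgrade-pathway point specifically, the watching can start simple. Here is a minimal sketch, assuming the protocol uses EIP-1967 proxies (the proxy address and endpoint are placeholders; the two storage slots are fixed by the EIP itself):

```typescript
// Sketch: watch who can upgrade a proxy, and what it currently points at.
// PROXY_ADDRESS and RPC_URL are placeholders; the slots come from EIP-1967.
import { ethers } from "ethers";

const RPC_URL = "https://eth.example-rpc.com"; // placeholder
const PROXY_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

// Storage slots defined by EIP-1967.
const IMPLEMENTATION_SLOT =
  "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc";
const ADMIN_SLOT =
  "0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103";

// Read an address out of a raw 32-byte storage word (last 20 bytes).
function slotToAddress(word: string): string {
  return ethers.getAddress(ethers.dataSlice(ethers.zeroPadValue(word, 32), 12));
}

async function checkUpgradeSurface(): Promise<void> {
  const provider = new ethers.JsonRpcProvider(RPC_URL);

  const implementation = slotToAddress(
    await provider.getStorage(PROXY_ADDRESS, IMPLEMENTATION_SLOT)
  );
  const admin = slotToAddress(
    await provider.getStorage(PROXY_ADDRESS, ADMIN_SLOT)
  );

  console.log(`Implementation: ${implementation}`);
  console.log(`Upgrade admin:  ${admin}`);
  // A real program would diff these against expected values continuously
  // and alert when either changes outside a planned upgrade window.
}

checkUpgradeSurface().catch(console.error);
```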
Lifecycle security also changes how proof works. “We ran an audit” stops being a sufficient answer. Teams are expected to show how risk is identified, how findings are triaged, how fixes are verified, and how the same failure class is prevented from reappearing as the protocol changes.
This is not about doing more security work. It is about doing security work that compounds instead of resetting every time code ships.
FAQ: Web3 Security in 2026
What actually changed in Web3 security after 2025?
The scale of losses shifted how risk is understood. A small number of incidents caused most of the damage, and many of those incidents were rooted in operational control, privileged access, or system-level behavior rather than isolated code bugs. That forced teams to rethink security as an ongoing program instead of a pre-launch step.
Are smart contract audits still worth doing in 2026?
Yes, but only as part of a broader system. Audits reduce specific classes of code risk, but they do not protect against compromised keys, unsafe upgrades, misconfigured permissions, or unreviewed dependencies. In 2026, audits are necessary inputs, not a complete security posture.
What does “system behavior” mean in a security context?
System behavior refers to how value, permissions, and state interact across multiple contracts, external integrations, and operational controls once everything is live. Many real failures emerge only when components compose under real conditions, even if each part looked safe in isolation.
How should teams be using AI security tools going forward?
AI is most useful when it shortens the time between writing code and identifying risk. The value comes from integrating findings into a workflow with clear ownership, prioritization, verification, and follow-up. Without that loop, AI generates output without reducing real-world risk.
What will security buyers ask for that they didn’t before?
Buyers will ask how protocols manage access, upgrades, and third-party exposure, not just how they review code. They will want to see documentation, repeatable processes, and evidence that controls stay consistent as systems change. The focus moves from credentials to demonstrable risk reduction.
What should protocols prioritize if they want to be secure in 2026?
Protocols should prioritize continuity. Security work should carry forward across releases, integrations, and team changes. The goal is not to pass a review, but to maintain control over how failures can occur and how much damage any single failure can cause.