A Complete Guide to Solidity Security Audits for Web3 Protocols
In this guide, we’ll break down how to prepare for a Solidity audit, what the code review and post-audit processes look like, the most common vulnerabilities found in Web3 projects, and what you can expect in terms of cost.

By the Sherlock Team · Updated March 25, 2026
Q1 2026 opened with $414 million lost to exploits in January alone, and most of those losses trace back to patterns that a rigorous review process would have caught. This guide is not about why audits matter. It's a walkthrough of how to actually do one: the methodology, the tooling, and the specific vulnerability patterns you should be hunting for when you sit down with a Solidity codebase.
Step 1: Scoping and Architecture Review
Before you read a single line of code, build a mental model of the system. Identify every external entry point (external and public functions), map out the trust boundaries (who can call what, and under which conditions), and diagram the token flow. You are looking for the attack surface, and that means understanding which contracts hold value, which ones have privileged roles, and how upgrades or migrations work.
Run slither . --print human-summary to generate a quick overview of contract complexity, inheritance trees, and state variable counts. Cross-reference this against the protocol's documentation. Gaps between documented behavior and actual code are where most critical bugs live.

Step 2: Static Analysis (and Why It's Not Enough)
Run Slither first. It catches low-hanging fruit fast: unused return values, missing zero-address checks, dangerous delegatecalls, unprotected selfdestruct calls, and basic reentrancy patterns. A clean Slither run doesn't mean your code is safe, but a noisy one tells you a lot about code hygiene. Focus on the high and medium detectors first, then work through informational findings for patterns that hint at deeper issues.
The real work is triaging detectors. Slither's reentrancy-eth and reentrancy-no-eth detectors produce false positives in codebases that use ReentrancyGuard correctly. Learn to read the detector output rather than blindly trusting or dismissing it. Pair Slither with Aderyn or Wake for a second opinion, since different tools catch different patterns.
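To make the triage concrete, here is a hedged sketch (the contract and names are hypothetical) of the kind of withdrawal function that Slither's reentrancy detectors will often flag even though the mutex and the checks-effects-interactions ordering make the finding a documentable false positive:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical vault: Slither's reentrancy-eth detector may still flag the
// external call in withdraw(), but the mutex plus effects-before-interaction
// ordering makes this a false positive to document, not a bug to "fix".
contract GuardedVault {
    uint256 private locked = 1; // 1 = unlocked, 2 = locked (cheap mutex)
    mapping(address => uint256) public balances;

    modifier nonReentrant() {
        require(locked == 1, "reentrant call");
        locked = 2;
        _;
        locked = 1;
    }

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external nonReentrant {
        require(balances[msg.sender] >= amount, "insufficient balance");
        balances[msg.sender] -= amount;                   // effects first
        (bool ok, ) = msg.sender.call{value: amount}(""); // interaction last
        require(ok, "transfer failed");
    }
}
```

Reading the detector output against a pattern like this is the skill: confirm the guard actually covers every function that touches the shared state before dismissing the finding.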
Step 3: Manual Review Patterns
This is where audits are won or lost. Static analysis catches maybe 15–20% of real vulnerabilities. The rest require a human reading code line by line with an adversarial mindset. Here are the patterns that matter most in 2026:
Access control. This category accounted for roughly 59% of all funds stolen across Web3 in the past year according to Chainalysis. Check every external function. Ask: who should be able to call this, and what happens if someone else does? Look for missing modifiers, overly permissive role assignments, and initializer functions that can be called twice.
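As a hedged illustration of the double-initialization pattern (contract and names are hypothetical), the single `initialized` check below is exactly the line auditors verify exists:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical upgradeable-style module: without the `initialized` flag,
// anyone could call initialize() a second time and seize ownership.
contract OwnedModule {
    address public owner;
    bool private initialized;

    function initialize(address _owner) external {
        require(!initialized, "already initialized"); // the missing check auditors hunt for
        initialized = true;
        owner = _owner;
    }

    modifier onlyOwner() {
        require(msg.sender == owner, "not owner");
        _;
    }

    function setFee(uint256) external onlyOwner {
        // privileged logic...
    }
}
```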

Reentrancy. The classic checks-effects-interactions pattern still applies, but modern reentrancy is subtler. Cross-function reentrancy happens when two functions share state but only one has a guard. Cross-contract reentrancy occurs when Protocol A calls Protocol B, which calls back into Protocol A before the first call resolves. Read-only reentrancy through view functions that read stale state mid-callback is increasingly common in DeFi protocols that integrate with lending markets or AMMs. If you see an external call followed by a state read in a separate function, that's a red flag.
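A minimal sketch of cross-function reentrancy, with hypothetical names: withdraw() updates state after the external call, and the unguarded transfer() is reachable from the attacker's receive() callback while the stale balance is still live:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical vulnerable vault. During withdraw()'s external call, the
// attacker's receive() can call transfer() and move a balance that
// withdraw() is about to zero out, effectively double-spending it.
contract CrossFunctionVault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}(""); // attacker reenters here
        require(ok, "transfer failed");
        balances[msg.sender] = 0; // state update lands too late
    }

    // No guard: shares state with withdraw(), so it is the reentry vector.
    function transfer(address to, uint256 amount) external {
        require(balances[msg.sender] >= amount, "insufficient");
        balances[msg.sender] -= amount;
        balances[to] += amount;
    }
}
```

Note that adding a guard to withdraw() alone would not help; both functions touching `balances` need to share the same mutex.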
Oracle and price manipulation. If the protocol reads prices from an external source, check what happens when that price is zero, stale, or manipulable within a single transaction. Spot-price reads from AMM pools are almost always exploitable via flash loans. Look for missing staleness checks on Chainlink feeds and verify that the protocol handles L2 sequencer downtime.
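A hedged sketch of the staleness and zero-price checks, using the standard Chainlink latestRoundData() shape (the consumer contract and MAX_STALENESS value are hypothetical and should be tuned to the feed's heartbeat):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Standard Chainlink aggregator read interface.
interface AggregatorV3Interface {
    function latestRoundData()
        external
        view
        returns (
            uint80 roundId,
            int256 answer,
            uint256 startedAt,
            uint256 updatedAt,
            uint80 answeredInRound
        );
}

// Hypothetical consumer: rejects zero/negative answers and stale updates,
// the two failure modes the section above tells you to probe for.
contract PriceConsumer {
    AggregatorV3Interface public immutable feed;
    uint256 public constant MAX_STALENESS = 1 hours; // assumption: tune per feed

    constructor(AggregatorV3Interface _feed) {
        feed = _feed;
    }

    function safePrice() external view returns (uint256) {
        (, int256 answer, , uint256 updatedAt, ) = feed.latestRoundData();
        require(answer > 0, "invalid price");
        require(block.timestamp - updatedAt <= MAX_STALENESS, "stale price");
        return uint256(answer);
    }
}
```

On L2s, the same function would additionally consult the sequencer uptime feed before trusting `updatedAt`.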
Rounding and precision loss. Integer division in Solidity truncates toward zero. In token calculations, this creates opportunities to extract value over many small transactions or to grief other users by making their shares worth zero. AI-assisted review tools are getting better at flagging these, but the exploit potential depends entirely on context that still requires human judgment.
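The fix is usually to choose the rounding direction deliberately. A hedged sketch (library name hypothetical) of protocol-favoring division helpers:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical helpers: pick the rounding direction so truncation always
// favors the protocol (shares minted round down, assets owed round up).
library RoundingMath {
    function divDown(uint256 a, uint256 b) internal pure returns (uint256) {
        return a / b; // truncates toward zero: 1 * 100 / 101 == 0
    }

    function divUp(uint256 a, uint256 b) internal pure returns (uint256) {
        // ceil(a / b) without the overflow risk of computing a + b - 1
        return a == 0 ? 0 : (a - 1) / b + 1;
    }
}
```

During review, check each division site and ask which party benefits from the truncation; any site where the caller benefits is a candidate dust-extraction or zero-share griefing vector.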
Step 4: Invariant Testing with Foundry
Once you have hypotheses about what should always be true (total shares should never exceed total assets, user balance should never go negative, the sum of all deposits should equal the contract's token balance), encode them as invariant tests and let a fuzzer try to break them.
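The solvency hypothesis, for instance, can be encoded as a minimal Foundry sketch. The `Vault` here is a hypothetical stand-in; in a real audit you would target the protocol's actual contracts:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Test} from "forge-std/Test.sol";

// Hypothetical minimal vault used as the fuzz target; replace with the
// real contract under audit.
contract Vault {
    uint256 public totalAssets;
    uint256 public totalShares;

    function deposit(uint256 assets) external {
        totalAssets += assets;
        totalShares += totalAssets == assets
            ? assets // first deposit mints 1:1
            : assets * totalShares / (totalAssets - assets); // rounds down
    }
}

contract VaultInvariantTest is Test {
    Vault vault;

    function setUp() public {
        vault = new Vault();
        targetContract(address(vault)); // fuzzer calls vault functions in random sequences
    }

    // Solvency: total shares should never exceed the assets backing them.
    function invariant_solvency() public view {
        assertLe(vault.totalShares(), vault.totalAssets());
    }
}
```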

Run this with forge test --mt invariant_solvency --fuzz-runs 50000. If the fuzzer breaks your invariant, you have a real bug. If it doesn't, you have higher confidence but not a guarantee. Pair Foundry's fuzzer with Echidna for property-based testing on critical paths, since Echidna's coverage-guided approach finds edge cases that random fuzzing misses.
Sherlock's internal research consistently shows that the highest-value findings come from researchers who write custom invariant tests for protocol-specific logic rather than running generic checklists. The methodology matters more than the tooling.
Step 5: DeFi-Specific Economic Review
If the protocol touches token prices, lending rates, or liquidity pools, you need to think like an arbitrageur. Can an attacker manipulate the price feed within a single transaction to extract value? Can they inflate share prices via direct token transfers (the classic ERC-4626 donation attack)? Can they sandwich a governance vote by flash-loaning voting tokens?
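A hedged sketch of the donation-attack arithmetic (contract name and numbers are hypothetical, chosen only to show the truncation):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical ERC-4626-style conversion. Donation-attack walkthrough:
//   1. attacker deposits 1 wei        -> totalShares = 1, totalAssets = 1
//   2. attacker donates 10 ether
//      directly to the vault          -> totalShares = 1, totalAssets = 10e18 + 1
//   3. victim deposits 5 ether        -> 5e18 * 1 / (10e18 + 1) = 0 shares
// The victim's assets are absorbed by the attacker's single share.
contract DonationMath {
    function convertToShares(
        uint256 assets,
        uint256 totalShares,
        uint256 totalAssets
    ) public pure returns (uint256) {
        return totalShares == 0 ? assets : assets * totalShares / totalAssets;
    }
}
```

Common mitigations include virtual share/asset offsets or enforcing a minimum initial deposit, but verifying that the chosen mitigation actually closes the attack path is part of the economic review.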
This kind of analysis requires understanding the protocol's economic model in full, not just its code. Sherlock's 2026 security retrospective found that the most damaging exploits increasingly sit at the intersection of correct code and broken assumptions: individually safe functions that become exploitable when composed in sequences the developers didn't model. Checking for these requires mapping every multi-step path through the protocol, which is where collaborative review models have an advantage over solo auditors.
Step 6: Post-Audit is Not Post-Security
After remediation and re-review, the protocol goes live and the threat model changes. Code that was safe in isolation may become exploitable once real liquidity arrives or once other protocols start composing with it. Maintain a bug bounty with payouts that reflect actual risk. The OWASP Smart Contract Top 10 is a useful reference for structuring ongoing monitoring around known vulnerability classes. Run on-chain monitoring for anomalous transactions, and have a pause mechanism that trusted multisig holders can trigger if something goes wrong.
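A minimal sketch of such a pause switch, with a hypothetical `guardian` standing in for the trusted multisig (production code would more likely inherit OpenZeppelin's Pausable):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical minimal pause mechanism; `guardian` would be the multisig
// address authorized to halt value-moving functions in an incident.
contract GuardianPausable {
    address public immutable guardian;
    bool public paused;

    constructor(address _guardian) {
        guardian = _guardian;
    }

    modifier whenNotPaused() {
        require(!paused, "paused");
        _;
    }

    function setPaused(bool _paused) external {
        require(msg.sender == guardian, "not guardian");
        paused = _paused;
    }

    function withdraw(uint256 amount) external whenNotPaused {
        // value-moving logic gated behind the pause switch...
    }
}
```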
The most secure protocols treat the audit as the starting point of their security program, not the end of it. The cost of a thorough audit is a rounding error compared to the cost of an exploit.
Talk to the Sherlock team if you want experienced eyes on your codebase alongside your own review process.


