Smart Contract Audit: The Complete Process from Scoping to Secure Deployment
What a smart contract audit actually covers, how the process works from scoping to fix verification, and how to know when your code is ready to deploy.

Executive Summary: A smart contract audit is a structured security review of blockchain code performed by independent researchers to identify vulnerabilities, logic errors, and design flaws before deployment. The process follows five stages: scoping and commit-pinning, architecture and threat model review, deep code review combining manual analysis with automated and AI-assisted tooling, reporting with severity-classified findings, and fix verification to confirm the remediated code matches what gets deployed.
Introduction
A smart contract audit is a security review of blockchain-based code, typically written in Solidity, Rust, or Move, performed by independent researchers to identify vulnerabilities, logic errors, and design flaws before the contracts go live. The goal is to validate that the code behaves as intended, that privileged roles are properly constrained, and that assets governed by the contract are protected against known attack vectors and economic exploits.
That definition captures the core of what an audit is. But the teams we see get burned after an audit don't fail because the definition was wrong. They fail because they treated the audit as a checkbox rather than a process with real requirements: a pinned code version, a shared understanding of the threat model, and a verified final state that matches what actually gets deployed.
We have run over 370 audit contests and collaborative engagements at Sherlock and worked with teams across every stage of protocol maturity. What follows is the audit process as we understand it from that vantage point: what actually happens during an engagement, what deliverables you should expect, and a practical framework for knowing when you are actually done.
Why smart contract audits matter
Smart contracts are effectively immutable once deployed: unless the team has built in an explicit upgrade path, there is no patch, no rollback, and no support line to call if vulnerable code reaches mainnet. The OWASP Smart Contract Top 10, the most widely referenced vulnerability framework in this space, analyzed over 120 smart contract incidents in 2025, with total losses in the hundreds of millions. Access control failures led the list for the second consecutive year, followed by price oracle manipulation and reentrancy. These are not exotic attack types. They are well-documented, well-understood failure modes that competent audits are specifically designed to catch.
For protocol teams, the economics are clear. According to Immunefi's historical reporting, the average loss per smart contract exploit typically runs well into seven figures. Against that, the cost of a thorough audit, typically $25,000 to $100,000 for most DeFi protocols, represents the most efficient form of risk reduction available before launch. Where a specific engagement lands within that range depends on the number of contracts and total lines of code under review, the complexity of the protocol's logic (a simple token versus a lending protocol with liquidation mechanics and oracle integrations), the smart contract language (Rust and Move audits carry a premium over Solidity because the reviewer pool is smaller), whether the timeline is compressed, and how many remediation review rounds are included. Most quotes also exclude post-launch services like bug bounty setup or continuous monitoring, so teams should budget for those separately.
Beyond direct financial protection, audits have become a prerequisite for fundraising, exchange listings, integrations, and user trust. In an ecosystem built on open-source code and trustless execution, an independent security review is one of the few credible signals that a team takes operational maturity seriously.
How the audit process works
Most audits break down into five stages: scoping, architecture review, deep code review, reporting, and fix verification. The quality of the engagement depends heavily on how the first two stages are handled, because scope and context failures account for most of the wasted time and missed findings we observe across real-world engagements.

Scoping and preparation
A strong engagement begins by locking the target. That means selecting a specific repository, branch, and commit hash, then agreeing on what is in scope (contracts, libraries, upgrade scripts, configuration) and what is out of scope (frontends, off-chain indexers, third-party dependencies the team is not modifying). This commit-pinning step is non-negotiable. If the code changes mid-review, auditors lose the ability to reason about the system as a whole, and the timeline expands as they re-trace the impact of each modification.
Preparation also means having documentation ready. A technical spec that describes the system's intended behavior, covering trust boundaries, privileged roles, invariants, and expected token flows, gives auditors the context they need to distinguish "working as designed" from "working as coded but not as intended." Chainlink's education hub describes this as providing a high-level guide covering what the code aims to achieve, its scope, and the exact implementation. Without that context, auditors spend review hours reverse-engineering design intent instead of pressure-testing security properties.
The practical advice here is simple: freeze your code, write down what it is supposed to do, include any known sharp edges or prior incidents, and have a senior developer available throughout the engagement to clarify questions as they arise.
Architecture and threat model review
Before going line-by-line, good auditors spend time understanding the system at a higher level. This means mapping contract interactions, tracing value flows, identifying external dependencies (oracles, bridges, governance modules), and understanding the protocol's economic model.
This matters because the highest-impact bugs in production DeFi systems are rarely isolated "Solidity gotchas." They are failures of assumptions: who is allowed to call what, what happens when an external call reverts in an unexpected state, what invariants must hold for a vault or lending pool to remain solvent, and where value can leak through edge conditions that the developer considered unlikely. Access control and logic errors, the top two categories in the OWASP Smart Contract Top 10, are both fundamentally about assumptions that did not survive contact with adversarial conditions.
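To make the assumption-failure pattern concrete, here is a minimal Python sketch, not real contract code: a privileged setter whose author assumed only the owner would ever call it, but never enforced that assumption. The `Vault` class and its fee parameter are hypothetical illustrations.

```python
class Vault:
    """Toy model of a contract with a privileged fee parameter."""

    def __init__(self, owner: str):
        self.owner = owner
        self.fee_bps = 30  # 0.30% fee, in basis points

    def set_fee(self, caller: str, new_fee_bps: int) -> None:
        # BUG: no access check -- the developer assumed only the owner
        # would ever call this, but nothing enforces that assumption.
        self.fee_bps = new_fee_bps


class GuardedVault(Vault):
    def set_fee(self, caller: str, new_fee_bps: int) -> None:
        # FIX: turn the implicit trust assumption into an explicit check,
        # the moral equivalent of Solidity's onlyOwner modifier.
        if caller != self.owner:
            raise PermissionError("caller is not the owner")
        super().set_fee(caller, new_fee_bps)
```

An auditor reviewing the first version would flag it as an access control finding: any caller can push the fee to 100% and redirect value, even though nothing in the code is "broken" in the compiler's eyes.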
Deep code review
This is the stage people picture when they hear "audit," and it is where the bulk of the engagement time goes. Modern audits combine manual expert review with automated tooling and, increasingly, AI-assisted analysis. Each approach catches a different class of issue.
Manual review is essential for logic errors, economic attack paths, and design-level flaws that require understanding business context. Automated tools excel at systematically checking for known vulnerability patterns: static analyzers flag suspicious constructs, fuzzers hammer functions with randomized inputs, and formal verification engines can prove properties across every reachable state of a contract. AI-assisted tools extend coverage further by scanning for anomalies and recurring patterns at scale, though expert validation remains essential for anything the model flags.
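The division of labor is easiest to see with a sketch of what a fuzzer actually does: generate randomized inputs against a property that should always hold, and report a counterexample when it breaks. The unchecked-subtraction model and the property below are illustrative assumptions in plain Python, not any specific tool's API.

```python
import random

UINT256_MAX = 2**256 - 1


def unchecked_withdraw(balance: int, amount: int) -> int:
    """Models unchecked unsigned subtraction: underflow wraps around."""
    return (balance - amount) % (UINT256_MAX + 1)


def fuzz_withdraw(trials: int = 10_000, seed: int = 42):
    """Property: a withdrawal must never leave more than the starting balance.

    Returns a (balance, amount) counterexample if the property breaks,
    or None if every trial passed.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        balance = rng.randrange(0, 1_000)
        amount = rng.randrange(0, 2_000)
        remaining = unchecked_withdraw(balance, amount)
        if remaining > balance:  # property violated: underflow detected
            return balance, amount  # counterexample for the report
    return None
```

Real fuzzing frameworks such as Foundry's invariant testing or Echidna automate exactly this loop against the compiled contract, with far smarter input generation than uniform random draws.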
The strongest audit methodologies ensure multiple independent reviewers examine the same code, because a single reviewer's blind spots are a real and well-documented source of missed findings. In our experience, the collaborative audit model addresses this directly: each engagement is staffed from a ranked network of researchers, with team composition matched to the specific architecture under review. Contest-based approaches take this principle further by putting 100+ researchers on the same scope simultaneously, which tends to surface issues that smaller dedicated teams miss through sheer volume of parallel coverage.
Throughout the review, communication should stay active. Auditors who surface high-severity issues early, rather than holding everything for a final report, give the development team time to begin remediation in parallel and prevent last-minute surprises that compress fix timelines.
Reporting
The core deliverable is the audit report. A useful report tells you what was reviewed (repo, commit hash, scope boundaries), how it was reviewed (methodology, tooling, reviewer count), what was found (each issue with severity, impact explanation, and a reproducible description of the exploit path), and what to do about it (actionable remediation guidance per finding).
Severity classifications typically follow a structure like critical, high, medium, low, and informational. The important thing is that each finding clearly explains why it matters. A report that says "reentrancy detected" without describing the specific conditions under which an attacker could exploit it, what they would gain, and what the protocol would lose is hard to act on. This is especially true for economic attacks where the exploit path is a sequence of coordinated actions rather than a single line mistake.
The report should also make clear what was not reviewed. Misinterpreting audit coverage is one of the most common mistakes teams make after receiving a report. If off-chain components, admin key management, or deployment parameters were excluded from scope, the report should say so explicitly.
Fix verification
This is where an audit turns from a document into an outcome. After the team remediates findings, auditors verify that patches actually resolve the reported issues, that fixes do not introduce new problems, and that the final report reflects the code version the team plans to deploy.
Fix verification matters because patches commonly change assumptions, shift trust boundaries, or create fresh edge cases. A team that receives a report and ships fixes without re-verification is introducing unreviewed code into a security-critical system. The best audit workflows include at least one formal fix review round, and many include multiple passes for complex remediations.
The final deliverable should be a closing report or updated version that clearly documents which issues were fixed, which were mitigated with tradeoffs acknowledged, and which were accepted as known risk with rationale. Without that closing snapshot, you are left guessing which version of the code is actually safe to ship.
Common vulnerabilities audits catch
Understanding what auditors look for helps teams write more secure code before the engagement starts. Based on OWASP's 2025/2026 incident data and patterns we see repeatedly across our own engagements, the most frequently identified vulnerability classes are:

- Access control failures: improperly implemented permissions that let unauthorized callers reach privileged functions.
- Reentrancy: external calls that allow an attacker to re-enter a function before the first execution completes.
- Price oracle manipulation: exploiting the gap between a contract's price feed and actual market conditions.
- Logic errors in business-critical flows: code that compiles and runs but produces incorrect outcomes under specific conditions.
- Unchecked external calls: return values from cross-contract interactions that are ignored, causing silent failures.
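Reentrancy in particular is easiest to see in miniature. The following Python model is an illustrative sketch, not real contract code: `withdraw` makes its "external call" (the `on_receive` callback) before zeroing the caller's balance, so a malicious receiver can re-enter and be paid twice. The `Bank`, the actors, and the amounts are all hypothetical.

```python
class Bank:
    """Toy model of a contract that pays out before updating state."""

    def __init__(self):
        self.balances: dict[str, int] = {}
        self.reserves = 0  # funds the contract actually holds

    def deposit(self, who: str, amount: int) -> None:
        self.balances[who] = self.balances.get(who, 0) + amount
        self.reserves += amount

    def withdraw(self, who: str, on_receive) -> None:
        amount = self.balances.get(who, 0)
        if amount == 0:
            return
        self.reserves -= amount   # pay out...
        on_receive()              # ...via an external call that can re-enter
        self.balances[who] = 0    # BUG: state update happens after the call


bank = Bank()
bank.deposit("alice", 200)    # honest user
bank.deposit("mallory", 100)  # attacker

def reenter():
    # Attacker's receive hook: call withdraw again while the first
    # call is still mid-flight and the balance is not yet zeroed.
    if bank.reserves >= 100:
        bank.withdraw("mallory", lambda: None)

bank.withdraw("mallory", reenter)
# Mallory deposited 100 but extracted 200: Alice's 200 claim is now
# backed by only 100 in reserves.
```

The standard fix is the checks-effects-interactions pattern: zero the balance before making the external call (or add a reentrancy guard), so the second entry sees an empty balance and pays out nothing.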
These are not obscure edge cases. They are the categories that account for the majority of real-world exploit losses, and they recur because they emerge from common development patterns rather than unusual code.
What "done" looks like
"Done" means more than "we got audited." Done is a state you can describe precisely.
A reasonable definition: the scope is pinned to a specific commit, identified issues are triaged with severity and impact explained, patches are applied as targeted changes against those findings, and fixes are verified so the final report reflects the code version being deployed. If any findings were accepted as risk rather than fixed, the rationale is documented and the team has made a conscious, informed decision to proceed.
From a protocol operator's perspective, done also includes launch readiness. You should be able to explain, in plain language, which privileged roles exist, what your upgrade process is, what monitoring is in place, and what the incident response plan looks like if something goes wrong after deployment. The security story does not end when code hits mainnet. It extends through the full lifecycle of development, launch, and live operation.
If you cannot explain your own protocol's trust model, an audit can still find bugs, but you are more likely to ship fragile governance, unsafe operational procedures, or upgrades that bypass your intended security controls.
How to get more value from your audit
The teams that get the best results treat audit as a collaboration between engineers and auditors, not a handoff. Having a senior developer available to discuss and review auditor feedback reduces friction and speeds up the path from finding to fix.
Audit quality also improves dramatically when the code is stable, tested, and documented before the engagement begins. OpenZeppelin's audit readiness guide emphasizes that audits are most productive when the project is prepared, because auditors can focus on real security analysis rather than reverse-engineering design intent from code alone.
Pragmatically, the goal is to remove ambiguity. Give auditors a clear spec, include invariant statements ("this must always be true"), document trusted roles and upgrade paths, and keep the code from changing mid-engagement unless changes are explicitly reviewed and re-scoped.
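Invariant statements are most useful when they are written as executable checks rather than prose. The snippet below is a hypothetical Python sketch of a solvency invariant for a vault-like system; in a Solidity codebase the same property would typically live in a Foundry or Echidna invariant test that the auditors can run directly.

```python
def solvent(reserves: int, balances: dict[str, int]) -> bool:
    """Invariant: the system must always hold enough to honor every claim."""
    return reserves >= sum(balances.values())


# Exercising the invariant around a state transition:
balances = {"alice": 50, "bob": 25}
reserves = 80
assert solvent(reserves, balances)  # holds before the operation

# A buggy operation that credits a user without backing funds...
balances["mallory"] = 10
# ...is caught immediately by re-checking the invariant.
assert not solvent(reserves, balances)
```

Even two or three invariants of this shape ("total shares times share price never exceeds reserves", "only the owner can change the fee") give auditors a precise statement of intent to test the code against.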
Teams increasingly use AI-assisted development tools to run continuous security checks during development, catching structural issues and common vulnerability patterns before the formal audit begins. This reduces the volume of shallow findings that reach human reviewers and lets them focus review time on the complex, context-dependent issues that actually require expert judgment.
Limits you should understand before you buy
Even a thorough audit does not mean "no exploits." Audits are point-in-time reviews of a scoped codebase. If you ship different code, deploy with different parameters, integrate new dependencies, or change admin processes after the audit ends, you can create new failure modes that the review never examined.
The right mental model is that audits reduce unknowns and surface high-impact issues early. They do not replace good engineering practices, runtime monitoring, bug bounty programs, or operational discipline around key management and deployment procedures. The most resilient protocols treat audit as one component of an ongoing security program that runs across the entire lifecycle of the system.
If your team is preparing for an audit or looking to build a security program that covers development through post-launch, reach out to Sherlock. We will scope the engagement to where you are and help you figure out the right path forward.
FAQ
How long does a smart contract audit take?
Timeline depends on scope, complexity, and code stability. Simple contracts may take one to two weeks. Complex DeFi protocols with multiple integrations typically require three to six weeks. The single biggest factor in timeline accuracy is whether the target commit stays stable, because when scope shifts mid-engagement, the timeline expands as reviewers re-trace impact. Teams should also budget time for fix verification, since the engagement does not end when the first report arrives.
What should be included in an audit report?
At minimum: the scope (including commit hash), methodology at a high level, a findings section with severity classifications and impact explanations, and actionable remediation guidance per finding. Strong audit processes also include a fix verification phase and produce a final report that reflects the remediated state of the code.
How much does a smart contract audit cost?
Costs range from roughly $5,000 for a simple token contract to $250,000 or more for complex multi-chain systems. Most standard DeFi protocol audits fall between $25,000 and $100,000, depending on codebase size, logic complexity, the chain and language involved, and timeline urgency. Rush fees, non-EVM language premiums, and remediation review rounds all factor into the final number.
When should you schedule an audit?
When the design is largely settled, the contracts are tested, and you can commit to a stable code version for the duration of the review. Scheduling an audit while code is still in active flux leads to wasted cycles, re-scoping, and findings against code that no longer exists by the time the report arrives. The best outcomes come from teams that treat audit preparation as a first-class engineering milestone.
Do audits prevent exploits?
They significantly reduce risk, but they do not guarantee safety. A smart contract audit is scoped and time-bounded. Post-audit code changes, upgrade processes, operational mistakes, and newly discovered vulnerability classes can all introduce risk after the engagement ends. The safest teams treat audit as one part of a security lifecycle that includes development-time tooling, pre-launch review, and post-launch monitoring and bounty programs.


