Auto Finance: Stress-Testing Autonomous Liquidity Allocation at the Protocol Layer

Sherlock stress-tested Auto Finance’s autonomous liquidity infrastructure, surfacing interaction-driven risks before the protocol scaled its automated capital allocation.

Auto Finance is a DeFi liquidity protocol designed to automate how capital is deployed across decentralized exchanges and lending markets. At its core, Auto Finance’s Autopools aim to remove the manual burden liquidity providers face when navigating yield variance, AMM mechanics, fees, slippage, and rebalancing costs.

Rather than asking LPs to constantly reposition capital, Auto Finance centralizes those decisions into protocol-level logic. Capital is pooled, strategies rebalance autonomously, and allocation adjusts as conditions change across venues like DEXs and lending markets.
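
To make that mechanism concrete, here is a minimal, hypothetical sketch of protocol-level allocation logic of this kind. It is not Auto Finance’s code; the venue names, the move-cost parameter, and the greedy single-step policy are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Venue:
    name: str
    apr: float       # currently observed yield at this venue
    deployed: float  # pooled capital currently allocated here

def rebalance(venues: list[Venue], move_cost_bps: float = 5.0) -> list[tuple[str, str, float]]:
    """Greedy, single-step sketch: shift capital from the lowest-yielding
    venue to the highest-yielding one, but only when the yield spread is
    expected to outweigh the cost of moving funds."""
    moves = []
    src = min(venues, key=lambda v: v.apr)
    dst = max(venues, key=lambda v: v.apr)
    cost = move_cost_bps / 10_000
    if dst.apr - src.apr > cost and src.deployed > 0:
        amount = src.deployed
        src.deployed = 0.0
        dst.deployed += amount * (1 - cost)  # fees/slippage paid on the move
        moves.append((src.name, dst.name, amount))
    return moves

# Example: two DEX pools and a lending market competing for the same capital.
venues = [Venue("dex_a", 0.08, 1_000.0), Venue("dex_b", 0.11, 500.0), Venue("lend", 0.05, 750.0)]
print(rebalance(venues))
```

Even in this toy form, the questions an auditor cares about are visible: which yield inputs are trusted, who bears the move cost, and what happens when several such moves execute in quick succession.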

That design makes liquidity provision more accessible and more capital-efficient, but it also concentrates risk. When rebalancing decisions, accounting, and incentives are enforced by shared contracts, any flaw at the core layer affects every participant relying on the system.

As Auto Finance’s Autopools matured, the team engaged Sherlock to evaluate whether the protocol could hold up under adversarial conditions before scaling further.

The Challenge: When Optimization Logic Becomes the Attack Surface

Autonomous liquidity systems introduce a distinct class of security risk. Auto Finance’s contracts don’t just custody assets—they actively move capital, rebalance exposure, and respond to changing yield conditions across multiple AMM models and markets.

In systems like this, issues rarely present as obvious bugs. Risk emerges from how assumptions interact over time: how rebalancing logic behaves when yields diverge, how accounting responds when costs compound, or how incentives shift when liquidity moves rapidly between venues.
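
As a purely illustrative example of that kind of interaction effect (synthetic numbers, not a finding from this audit), the toy model below shows a naive yield-chasing rebalancer churning between two venues whose yields oscillate. Every individual move looks rational, yet per-move fees compound and leave LPs worse off than simply staying put.

```python
def chase_yield(periods: int = 12, fee: float = 0.003) -> float:
    """Toy model: two venues whose yields flip each period; the strategy
    always moves to the higher yield and pays a fee on every move."""
    capital, position = 1_000.0, 0             # start fully deployed in venue A
    yields = [(0.010, 0.006), (0.006, 0.010)]  # (venue_a, venue_b), alternating
    for t in range(periods):
        ya, yb = yields[t % 2]
        best = 0 if ya >= yb else 1
        if best != position:
            capital *= 1 - fee                 # fees/slippage charged on the move
            position = best
        capital *= 1 + (ya if position == 0 else yb)
    return capital

print(round(chase_yield(), 2))                   # chases yield, churns almost every period
print(round(1_000.0 * (1.010 * 1.006) ** 6, 2))  # stays in venue A, no churn
```

The point is not the specific numbers but the shape of the failure: each decision is locally optimal, and the loss only appears once the costs of many decisions are accounted for together.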

Manual review struggles here because the danger isn’t confined to any one line of code; it lies in how multiple components behave together under stress. To evaluate that properly, the protocol needed adversarial pressure from many angles at once.

Sherlock’s Approach: Parallel Review of System Behavior

Sherlock conducted a crowdsourced audit of Auto Finance’s core contracts over several weeks. A broad set of independent security researchers reviewed the same codebase in parallel, each applying different threat models, heuristics, and areas of focus.

This structure allowed the audit to explore rebalancing paths, accounting flows, and incentive dynamics simultaneously rather than sequentially. Findings were gathered, deduplicated, and judged through a structured process that emphasized real-world impact over theoretical concern.

Only issues that represented credible risk to protocol behavior were included in the final results. The goal was clarity, not noise.

What Sherlock Found

Across the engagement, Sherlock identified thirty-five valid vulnerabilities in Auto Finance’s contracts. Sixteen were categorized as high-severity risks with the potential to materially impact protocol safety. Nineteen additional medium-severity issues surfaced behaviors that could degrade performance or create adverse outcomes under specific conditions.

Many findings centered on how liquidity accounting and rebalancing logic behaved during rapid shifts in market conditions. Others examined how incentive mechanisms interacted across strategies and venues when assumptions broke down.

In several cases, the same issue was identified independently by multiple researchers approaching the system from different directions. That convergence reinforced the seriousness of the findings and reduced the likelihood of misclassification.

Just as important, Sherlock filtered out findings that lacked credible exploit paths or conflicted with Auto Finance’s intended design. This kept the focus on issues that actually mattered to LPs and the protocol’s long-term health.

Why It Mattered for Auto Finance

For Auto Finance, the audit delivered visibility into systemic risk rather than isolated defects. The breadth of findings highlighted how autonomous rebalancing logic could behave under edge conditions that are difficult to simulate internally.

Because Autopools act as shared infrastructure for liquidity providers, addressing these issues early reduced risk across the entire pool of participants rather than protecting a single strategy or market.

The review also validated core design choices. Where assumptions were challenged but ultimately held, the process reinforced confidence that the system behaved as intended even under adversarial scrutiny.

By the end of the engagement, Auto Finance emerged with a clearer understanding of its risk surface and a stronger foundation for scaling automated liquidity provision.

The Takeaway

As DeFi moves toward autonomous capital allocation, security failures are increasingly driven by interaction effects rather than simple implementation errors.

For Auto Finance, Sherlock’s collective auditing model exposed those interaction-driven risks before they could manifest in production. The engagement showed how broad adversarial review, paired with disciplined judgment, can harden systems where optimization logic is inseparable from security.

When liquidity decisions live at the protocol layer, security has to operate at that same level.