January 29, 2026 | The Security Table Podcast

Beyond Audits: Why the Future of Smart Contract Security Requires Both LLMs and Formal Verification

A conversation with David Schwed on The Security Table reveals why Web3's dominant security model is fundamentally broken and what needs to replace it.

The $121 Million Problem

When the Balancer protocol lost $121 million in a single exploit, the post-mortem revealed something uncomfortable: the vulnerable contract had been audited. When Abracadabra Money lost $1.8 million, audited. When BetterBank lost $5 million, audited.

The pattern is undeniable: 90% of exploited smart contracts were previously audited.

In a recent episode of The Security Table, cybersecurity expert David Schwed (COO of SVRN and former CISO at Robinhood) sat down with our team to dissect why the audit-only security model is broken and what needs to replace it.

His insights reveal a fundamental mismatch between how Web3 approaches security and what actually works.

The Three Fatal Flaws of Audit-Only Security

David breaks down the audit-only model into three critical failures:

1. Audits Are Point-in-Time Snapshots

"Something that was audited in January, there may be a zero-day vulnerability that is exploited the day after your audit is performed, and there might not be any change to the code base," David explains.

A clean audit report doesn't protect you from vulnerabilities discovered after the audit completes. The codebase didn't change, but the threat landscape did. New attack vectors emerge. Novel exploits get published. The security posture you had in January is not the security posture you have in February.

This is the nature of zero-day vulnerabilities. By definition, they haven't been seen yet. An audit can only catch known vulnerability classes and patterns auditors have encountered before.

2. Audit Reports Expire

David regularly sees teams waving around 12, 18, even 24-month-old audit reports like they're still valid credentials.

They're not.

"These things should be constantly refreshed," David emphasizes. Security isn't a certificate you earn once and forget about. It's a continuous process that requires ongoing validation as both your codebase and the threat landscape evolve.

Yet many protocols treat audits like compliance checkboxes. Get the report, put it on the website, move on. The reality is that an audit from 18 months ago tells you almost nothing about your current security posture.

3. Audits Should Be Your Last Line of Defense, Not Your First

This might be David's most counterintuitive insight: "When I was working for an auditing firm, a lot of organizations would get upset if there was nothing found or there was no criticals or high founds. But that should be the goal."

Teams get nervous when audits find nothing. They paid $50,000+ for a security review that returned zero findings. The question that keeps them up at night: is the code actually safe, or did the auditor miss something critical?

This anxiety reveals the fundamental paradox of audits. They don't prove security. They're unverifiable opinions at a point in time. You can't distinguish between "your code is secure" and "we didn't find the vulnerability." There's no mathematical proof. No guarantee. Just a report that says "we looked and didn't see anything."

As David explains, if your auditor finds critical vulnerabilities, your internal development process failed. The audit didn't succeed; it caught what should never have made it that far.

What Actually Works: Borrowing from 20 Years of InfoSec

The solution isn't revolutionary. It's borrowed from two decades of information security frameworks that already exist outside Web3.

"You should have such a great internal program, which includes CI/CD, internal SAST scanning, DAST scanning, everything that you can possibly do in order to identify those vulnerabilities before it hits the auditor," David explains. "And if the auditor gives you a thumbs up, we found nothing, you should be happy that your internal security program and processes are working."

The model that works: preventative controls plus detective controls, layered throughout the entire development process.

Shift left. Catch vulnerabilities before they hit the auditor. Build circuit breakers for what you miss. Borrow from the decades of security engineering that Web2 already figured out.

Outsourcing your security responsibility to a third party isn't a strategy. It's a gamble.

Start With Threat Modeling, Not Tool Shopping

When asked what an ideal security stack looks like, David's answer wasn't a list of tools.

It was a single word: threat modeling.

"As engineers, we tend to think we know the answer to the problem before actually doing our own investigation," David observes. "But I would say it really is advantageous to stop and really understand from a threat modeling perspective, what are the bad things that can happen and how can they happen? And then you develop your controls."

David's threat modeling framework for Web3:

  • Who has the keys to update contracts?
  • Where are those keys stored? Self-custody? Custodian? Multi-sig?
  • What controls exist to review code before production?
  • What prevents unauthorized code from hitting production?
  • If an employee gets compromised, what's the blast radius?

The point isn't that YubiKeys and data loss prevention and infrastructure security don't matter. They absolutely do. But you can't build the right controls until you understand what you're actually defending against.

This is where most teams go wrong: they copy-paste a "best practices" security stack without asking what specific threats their protocol faces.

A lending protocol's threat model looks nothing like a DEX's threat model. An L2's key management risks are different from a DAO treasury's. Your controls should reflect that.

Treating Smart Contracts Like Employees

One of David's most actionable insights centers on how teams think about non-human identities.

"Most teams secure their employees better than they secure their smart contracts. And the smart contracts are the ones moving the money," David points out.

Smart contracts, bots, and automated agents aren't infrastructure. They're non-human identities. They execute tasks. They access sensitive systems. They move assets. Functionally, they're doing the same work as a full-time employee.

But most organizations treat them like second-class citizens when it comes to security controls.

"You should be building and thinking about the same proper guardrails that you put in place for your employees that you put in place for these non-human identities," David explains.

His framework is straightforward:

  • Apply zero-standing privileges to all identities, human and non-human
  • No access until it's needed, evaluated contextually in real-time
  • Include non-human entities in your insider threat program from day one

We run entire insider threat programs focused on humans while the automated systems that actually execute transactions operate with standing privileges, permanent access, and minimal oversight.
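The zero-standing-privileges model above can be sketched in a few lines. This is a minimal illustration, not a production access broker: the identities, scopes, and TTLs are hypothetical, and a real system would evaluate richer context (device posture, transaction value, time of day) before granting access. The point is that bots and contract-deployer keys go through the same broker as humans, and nothing retains access after its grant expires.

```python
import time
from dataclasses import dataclass


@dataclass
class AccessGrant:
    identity: str      # human or non-human (bot, deployer key, agent)
    scope: str         # the single action this grant covers
    expires_at: float  # absolute epoch time; access lapses automatically


class JITAccessBroker:
    """Zero-standing-privilege broker: no identity has access until a
    scoped, time-boxed grant is issued, and every check re-evaluates."""

    def __init__(self):
        self._grants: list[AccessGrant] = []

    def grant(self, identity: str, scope: str, ttl_seconds: float) -> AccessGrant:
        g = AccessGrant(identity, scope, time.time() + ttl_seconds)
        self._grants.append(g)
        return g

    def is_allowed(self, identity: str, scope: str) -> bool:
        now = time.time()
        # Drop expired grants so nothing accumulates standing access.
        self._grants = [g for g in self._grants if g.expires_at > now]
        return any(g.identity == identity and g.scope == scope
                   for g in self._grants)


broker = JITAccessBroker()
broker.grant("deploy-bot", "upgrade:lending-pool", ttl_seconds=300)

assert broker.is_allowed("deploy-bot", "upgrade:lending-pool")    # in scope, in TTL
assert not broker.is_allowed("deploy-bot", "withdraw:treasury")   # wrong scope
assert not broker.is_allowed("intern-laptop", "upgrade:lending-pool")  # no grant
```

Because denial is the default, an over-privileged contract or a compromised key has a bounded blast radius: whatever scopes happen to be live, for however long their TTLs run.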

When you map that against how major Web3 exploits actually happen (compromised keys, over-privileged contracts, access that never expired), the gap becomes obvious.

This is the security posture institutions expect when they're evaluating whether to deploy on-chain. And it's the gap that's costing teams millions when things break.

The Future: LLMs Plus Formal Verification

David's framework for modern security tools is nuanced. He doesn't believe in either/or thinking. He believes in both.

LLMs as Force Multipliers

"LLMs can effectively be almost, I don't want to say your entire security team, but it can really take a team of three to five and make them appear to be a team of 20 to 30 folks," David explains.

But there's a catch. LLMs only deliver this kind of force multiplication if you treat them like employees, not magic boxes.

"I wouldn't just YOLO my instructions into a large model and just say, go do blank. I would purposely build a model or an agent that's specifically a pen tester. I'd build an agent that's specifically looking at source code, an agent that's specifically doing this, and have them all function independently of each other with the proper guardrails, with the proper access into the right information."

Just like you wouldn't hire a generalist and expect them to do everything, don't build a single LLM agent and expect comprehensive security coverage.

Build specialized agents:

  • One agent for pen testing
  • One agent for source code analysis
  • One agent for specific vulnerability patterns

Each with proper guardrails, proper access controls, and proper knowledge sets.

But LLMs Are Probabilistic

Here's what LLMs can't do: prove correctness.

LLM-based tools are incredible for pattern matching at scale, finding known vulnerability classes quickly, augmenting small security teams, and catching common mistakes during development.

But they're probabilistic. They find what looks like past vulnerabilities. They don't prove your contract works correctly.

Formal Verification Provides Mathematical Guarantees

"At a very high level, formal verification is a mathematical guarantee or certainty that your application is running as you have designed it," David explains. "And as we know within Web3, due to the immutability of transactions and irreversibility and the fact that these things are bearer assets, it is so critically important that we get these things right when we are launching contracts."

David uses a highway analogy: "We're developing these guardrails to say, when I say the highway, this is what we're expecting it to behave like. We're not expecting the car to flip over the rails and start going downhill, doing something that was completely unexpected, that we don't have any controls over or any oversight of."

Example: "This balance should never be lower than that balance." Formal verification proves mathematically that the code you've written will never create an instance where that truth is violated.
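As an illustration only (a formal verifier proves the property over every reachable state; the assertion below merely checks the states one run happens to visit), here is that invariant expressed on a hypothetical two-balance vault. The class names and amounts are made up for the sketch.

```python
class ToyVault:
    """Toy two-balance model: 'reserve' must never drop below
    'obligations' -- the invariant 'this balance should never be
    lower than that balance'."""

    def __init__(self, reserve: int):
        self.reserve = reserve   # assets actually held
        self.obligations = 0     # what depositors are owed

    def _check_invariant(self):
        # A formal verifier proves this holds in ALL states;
        # here it is only a runtime check on visited states.
        assert self.reserve >= self.obligations, "invariant violated"

    def deposit(self, amount: int):
        self.reserve += amount
        self.obligations += amount
        self._check_invariant()

    def withdraw(self, amount: int):
        if amount > self.obligations:
            raise ValueError("withdrawal exceeds obligations")
        self.obligations -= amount
        self.reserve -= amount
        self._check_invariant()


vault = ToyVault(reserve=100)
vault.deposit(50)
vault.withdraw(30)
# reserve = 120, obligations = 20: the invariant held after every step
```

The gap between this sketch and real formal verification is exactly the gap David describes: the assertion catches a violation only if a test happens to reach it, while a proof rules the violation out for every input and every ordering of calls.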

Two Types of Exploits Require Two Types of Tools

David identifies a critical distinction most teams miss:

Type 1: Technical vulnerabilities

  • Reentrancy
  • Integer overflow
  • Access control bugs

Type 2: Business logic exploits

  • Behavioral deviations
  • Unexpected state transitions
  • Economic design flaws

LLMs excel at pattern matching for Type 1. They've seen reentrancy bugs thousands of times and can spot them instantly.
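To make "pattern matching" concrete, here is a deliberately crude sketch of the idea: flag a function in which an external call appears before a storage write, the classic checks-effects-interactions violation behind reentrancy. Real LLM-based scanners are vastly more capable than two regular expressions; the Solidity snippets and the `balances[...]` pattern are assumptions made up for the example.

```python
import re

# Toy reentrancy heuristic: an external call (.call / .send / .transfer)
# that appears BEFORE a storage write to balances[...] inside a function
# body is the classic checks-effects-interactions violation.
EXTERNAL_CALL = re.compile(r"\.(call|send|transfer)\b")
STATE_WRITE = re.compile(r"balances\[[^\]]+\]\s*[-+]?=")


def flags_reentrancy(function_body: str) -> bool:
    call = EXTERNAL_CALL.search(function_body)
    write = STATE_WRITE.search(function_body)
    return bool(call and write and call.start() < write.start())


vulnerable = """
    function withdraw(uint amount) public {
        (bool ok, ) = msg.sender.call{value: amount}("");
        balances[msg.sender] -= amount;
    }
"""
fixed = """
    function withdraw(uint amount) public {
        balances[msg.sender] -= amount;
        (bool ok, ) = msg.sender.call{value: amount}("");
    }
"""

assert flags_reentrancy(vulnerable)   # call before state write: flagged
assert not flags_reentrancy(fixed)    # effects before interactions: clean
```

Pattern matchers, whether regexes or LLMs, recognize shapes they have seen before. That is precisely why they are strong on Type 1 and weak on Type 2: a novel business logic exploit has no prior shape to match.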

Formal verification catches Type 2 by proving invariants hold under all conditions. Mathematical guarantees, not pattern recognition.

"There's let me hack and exploit a vulnerability. But then there's the behavioral things. Is it behaving the way that it should? Has somebody found an exploit from a business logic perspective?" David explains. "And that's where formal verification and unit testing and those things can really help organizations really think through, have I properly thought through how can somebody exploit from a business logic perspective?"

At Olympix, we see this constantly. Protocols that rely purely on LLM-based scanning get exploited through business logic that no pattern matcher would catch. The vulnerability wasn't in the code style. It was in the economic design.

The Evolution: Making Formal Verification Accessible

David acknowledges the historical barrier: "I think things like formal verification, while understood, are so difficult to implement for many organizations because it touches on not just security, but it touches on math."

Traditional formal verification requires deep security knowledge AND mathematical expertise. Most teams have one or the other, not both. Implementation has been prohibitively difficult.

"But there are certain non-negotiables that we should be able to prove when it comes to formal verification," David emphasizes.

Critical invariants like:

  • Token balances should never go negative
  • Total supply should equal sum of all balances
  • Access controls should prevent unauthorized upgrades
  • Withdrawal amounts should never exceed deposits

These aren't edge cases. They're invariants that must hold under all conditions.
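A randomized check makes the contrast with formal verification tangible. The toy token below exercises three of the four invariants listed above (non-negative balances, supply equals the sum of balances, withdrawals never exceed deposits) across ten thousand random operations. Every name here is invented for the sketch, and the key caveat is in the comment: fuzzing samples states, while formal verification proves the invariants for all states.

```python
import random


class ToyToken:
    """Toy token model for checking the invariants listed above.
    Randomized testing SAMPLES states; formal verification would
    PROVE the invariants over all states, not just sampled ones."""

    def __init__(self):
        self.balances: dict[str, int] = {}
        self.total_supply = 0
        self.deposited: dict[str, int] = {}
        self.withdrawn: dict[str, int] = {}

    def deposit(self, who: str, amount: int):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.total_supply += amount
        self.deposited[who] = self.deposited.get(who, 0) + amount

    def withdraw(self, who: str, amount: int):
        if amount > self.balances.get(who, 0):
            return  # reject: would break the non-negative invariant
        self.balances[who] -= amount
        self.total_supply -= amount
        self.withdrawn[who] = self.withdrawn.get(who, 0) + amount

    def check_invariants(self):
        assert all(b >= 0 for b in self.balances.values())
        assert self.total_supply == sum(self.balances.values())
        assert all(self.withdrawn.get(w, 0) <= self.deposited.get(w, 0)
                   for w in self.withdrawn)


random.seed(0)
token = ToyToken()
for _ in range(10_000):
    who = random.choice(["alice", "bob", "carol"])
    op = random.choice([token.deposit, token.withdraw])
    op(who, random.randint(0, 100))
    token.check_invariants()  # every sampled state satisfies all three
```

Ten thousand green runs build confidence but prove nothing about the states the fuzzer never reached; that residual uncertainty is what formal verification eliminates for the invariants it covers.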

The evolution David describes: "There are many tools today that are helping organizations realize the benefit of formal verification."

Tools that make formal verification accessible without requiring PhDs in formal methods. Automated formal verification at scale that helps teams define and prove their critical invariants without manual mathematical proofs.

The Future Must Be Proactive, Automated, and Verifiable

The future of smart contract security will marry both approaches. LLMs for surfacing vulnerability patterns quicker. Formal verification to mathematically prove security.

As David explains: "For organizations, the best place to catch things is before you implement. And those things are done through things like LLM probabilistic testing, as well as formal verification guardrails."

Finding bugs post-deployment means emergency response protocols, pausing live contracts (downtime equals lost revenue), patching production code (new attack surface), and potential exploit windows before fixes deploy.

All because the vulnerability wasn't caught during development.

The future has to be:

  • Proactive: Catch vulnerabilities during development, not after deployment
  • Automated: Integrate into CI/CD so security becomes part of the workflow
  • Verifiable: Mathematical proof for critical invariants, not unverifiable opinions

LLMs provide speed and scale on known patterns. Formal verification provides guarantees on critical invariants.

If your security stack only has one, you're leaving half the threat landscape unaddressed.

You Can't Fix Security Later

David's final hot take cuts to the heart of why so many Web3 projects struggle with security:

"I think the answer of, or the thought of I'll fix it later, I think is still very much prevalent in Web3 security."

He doesn't blame projects for this thinking. Most organizations start bootstrapped or seed-funded. They're not hiring CISOs. They're not hiring security engineers. They're taking their existing engineers and saying, "well, you're technical, so therefore, build out security."

"And that's what I think is probably one of the biggest dangers," David explains. "Engineers are not security people and security people are not engineers. We all do things differently."

Ask a security engineer to build production-quality applications and you'll get subpar code. Ask a software engineer to build comprehensive security and they won't think through the contingencies that security professionals live and breathe.

It's not a skill issue. It's a perspective issue. Security engineers have seen things break in ways developers never consider. They've lived through incidents that shape how they evaluate risk.

David's analogy brings it home: "You don't build a house and then say, I'm going to worry about building a bathroom later. You have to build the pipes first before you put up the drywall so that way you can understand later I may want to build a bathroom here. It's the same thing with security. You can't just start ripping things open and start laying down a new foundation for certain things."

Security is foundational infrastructure. Not a feature you bolt on post-launch.

Once deployed, contracts are immutable or require complex upgrade mechanisms. Every architectural decision you make without security input becomes technical debt that's difficult or impossible to fix later.

Building Security Right From Day One

The path forward is clear:

1. Start with threat modeling. Understand what you're defending against before you choose tools. Map your specific threats. A lending protocol faces different risks than a DEX. Your controls should reflect that.

2. Treat smart contracts like employees. Apply zero-standing privileges to all identities. No access until needed, evaluated contextually. Include non-human entities in your insider threat program.

3. Build preventative controls into development. Static analysis catching vulnerabilities as you code. Mutation testing validating test coverage. Automated formal verification proving invariants hold. Fuzzing discovering edge cases before deployment.

4. Layer detective controls in production. Circuit breakers. Behavioral monitoring. Anomaly detection. Assume your preventative controls might fail and build detection for when they do.

5. Use audits as final validation. Not as primary security. When your contract hits the auditor, they should find nothing. That's proof your internal program works.

6. Marry LLMs with formal verification. LLMs for surfacing patterns quicker during development. Formal verification to mathematically prove security of critical invariants. Both integrated into CI/CD. Both running before deployment.

Even bootstrapped teams can't afford to "fix security later." Because later means after exploit, after audit findings, after architectural decisions are locked in.

Install the pipes before you put up the drywall.

About Olympix

Olympix provides proactive security tools for smart contract development, including automated formal verification, static analysis, mutation testing, and fuzzing. Our platform integrates into CI/CD pipelines to catch vulnerabilities during development, not after deployment.

Founded by security engineers who lived through the limitations of audit-only security, Olympix is trusted by leading protocols including Circle, Uniswap Foundation, and Cork Protocol.

Learn more at olympix.security




Ready to Shift Security Assurance In-House? Talk to Our Security Experts Today.