Security Is Not a Checklist: Lessons from Agora's CTO on Building Defense-in-Depth Into DeFi Culture
In a space where hundreds of millions of dollars have been lost to vulnerabilities in audited, reviewed, and widely used code, the security conversation in DeFi keeps circling back to the same flawed premise: that audits are the finish line. Drake Evans, CTO and co-founder of Agora and architect of some of the most battle-tested lending infrastructure in DeFi, takes a different view. In his conversation on The Security Table with Sarah Hicks, Olympix's co-founder, he argues that the real threat is not a missing audit report; it is an organizational culture that outsources security thinking to someone else.
What follows is a breakdown of the most consequential ideas from that conversation, expanded with context on why they matter and how they map to the broader challenge of securing the next generation of on-chain financial infrastructure.
The foundational assumption: every external interaction is potentially adversarial
Evans's security philosophy begins with a single premise that sounds obvious but is systematically violated in practice: every external interaction should be treated as malicious, or at minimum, as something that could become malicious at some point in the future.
Your system should not rely on any security guarantees you cannot build yourself. Every external interaction should be considered malicious. - Drake Evans, CTO & Co-Founder, Agora
This is not paranoia. It is a structural acknowledgment of how DeFi actually works. Protocols are composable by design. A contract that behaves safely today can be called by a malicious contract tomorrow. A bridge integration that is benign at launch can become an attack vector after a governance change in a dependency protocol. The smart contract environment is adversarial by nature, not by exception.
His metric for evaluating protocol maturity reflects this: the "dollar day," a measure of how much value has been secured for how long without incident. The bigger the number, the more evidence exists that the code has survived real-world adversarial conditions, not just theoretical review.
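The dollar-day metric is simple enough to compute directly. A minimal sketch in Python (the function name and the sample figures are illustrative, not taken from the conversation):

```python
def dollar_days(tvl_history):
    """Sum of (value secured x days held) across a protocol's history.

    tvl_history: list of (tvl_usd, days_at_that_tvl) tuples.
    A larger total means more accumulated evidence that the code
    has survived real adversarial conditions.
    """
    return sum(tvl * days for tvl, days in tvl_history)

# Illustrative: $50M secured for 300 days, then $200M for 100 days
score = dollar_days([(50_000_000, 300), (200_000_000, 100)])
print(score)  # 35_000_000_000 dollar days
```

The appeal of the metric is that it rewards time under fire, not just peak TVL: a protocol that briefly held a large balance scores lower than one that held a moderate balance for years.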
For security tooling, this framing has direct implications. Static analysis tools like Slither catch a meaningful subset of vulnerabilities but operate on code in isolation. The most dangerous attack classes, including reentrancy under novel compositions, flash loan interactions, and cross-protocol state manipulation, only become visible when you stress-test the system against adversarial external inputs. This is exactly what tools like Olympix BugPOCer are designed to surface: proof-of-concept exploits that demonstrate real exploitability under real conditions, not just theoretical flagging.
Complexity is the root cause, not the symptom
Evans's most consistent throughline across the conversation is a deep skepticism of complexity. He argues that the instinct to build sprawling multi-contract architectures is an engineering habit that creates more attack surface than it solves problems.
Keeping your system simple, owning the security of every piece yourself. That is how I think about it the most. - Drake Evans, CTO & Co-Founder, Agora
His take on contract size limits is deliberately provocative: if you cannot fit your logic into a single contract, the system is too complex. The point is not that monolithic contracts are always preferable. It is that complexity is a cost, and most teams underestimate how much of that cost is paid in security debt rather than engineering velocity.
This mirrors a pattern visible in post-mortem data across major DeFi exploits. The Venus Protocol incident involved an exchange rate manipulation technique that was flagged in a Code4rena audit and left unremediated. The Resupply exploit used an ERC-4626 donation attack vector. The Abracadabra Money hack exploited a solvency check bypass. In each case, the vulnerability was not in an exotic new attack class. It was in the gap between system complexity and the team's ability to reason about all interactions. More contracts, more interfaces, and more composability touchpoints mean more places for something to fall through.
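The ERC-4626 donation vector mentioned above is easy to show in miniature. The following is a simplified Python model of vault share accounting (not any protocol's actual code), illustrating how "donating" assets directly to a vault inflates the share price so that a later depositor's shares round down to zero:

```python
class ToyVault:
    """Simplified ERC-4626-style share accounting (illustrative only)."""

    def __init__(self):
        self.total_assets = 0
        self.total_shares = 0

    def deposit(self, assets):
        if self.total_shares == 0:
            shares = assets  # first depositor sets the initial exchange rate
        else:
            # integer division mirrors on-chain rounding behavior
            shares = assets * self.total_shares // self.total_assets
        self.total_assets += assets
        self.total_shares += shares
        return shares

    def donate(self, assets):
        # assets transferred directly to the vault, bypassing deposit()
        self.total_assets += assets


vault = ToyVault()
vault.deposit(1)              # attacker mints 1 share for 1 unit
vault.donate(10_000)          # attacker donates, inflating price per share
victim_shares = vault.deposit(9_999)
print(victim_shares)          # 0 -- the victim's deposit mints no shares
```

The attacker's single share now redeems against the victim's assets as well. Mitigations such as virtual shares or minimum initial deposits exist precisely to close this rounding gap.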
Olympix data across 107,000+ scanned vulnerabilities shows that constructor validation failures, reentrancy events, and uninitialized state variables are consistently among the top findings. These are not advanced attack classes. They are complexity hygiene failures that slip through because systems are too large to reason about holistically.
Game theory: the market thinks past second-order effects
On economic protocol risk, Evans offers a framing that separates good security thinkers from great ones. Most engineers think in second-order effects. The market thinks in ninth-order effects. The gap between those two levels of analysis is where exploits live.
You have to take it to the end. Everything, the market takes it to the end. They go to the ninth order, the tenth order, the eleventh. It goes forever. And the whole point is that you have to do that too. - Drake Evans, CTO & Co-Founder, Agora
The concrete examples he gives are instructive. On governance token economics: if the market cap of the governance token is smaller than the balance sheet it controls, the protocol is economically attackable through open-market token acquisition. This is not a code vulnerability. It is a mechanism design failure that no audit will catch, because it only becomes visible when you model attacker incentives at scale.
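That economic check reduces to a back-of-the-envelope inequality. A hedged sketch (the control threshold and function name are illustrative, and the model deliberately ignores slippage and defensive responses, which a fuller higher-order analysis would have to include):

```python
def governance_attack_profitable(token_mcap_usd, balance_sheet_usd,
                                 control_fraction=0.51):
    """Rough first-order check: can an attacker profitably buy voting control?

    If acquiring a controlling stake in the governance token costs less
    than the balance sheet that token governs, the protocol is
    economically attackable on the open market.
    """
    attack_cost = token_mcap_usd * control_fraction
    return balance_sheet_usd > attack_cost


# A $40M governance token controlling a $100M balance sheet
print(governance_attack_profitable(40_000_000, 100_000_000))   # True

# A $300M governance token controlling the same $100M balance sheet
print(governance_attack_profitable(300_000_000, 100_000_000))  # False
```

No static analyzer flags this, because nothing in the code is wrong; the vulnerability lives entirely in the numbers.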
Oracle design is his other example, and his position here has shifted in ways that are worth examining closely. He was historically skeptical of fixed price oracles because the instinct toward real-time spot pricing feels more correct. But the empirical record suggests otherwise: flash crashes are far more likely to produce erroneous price signals than genuine oracle manipulation. The practical conclusion is that guardrails on price deviation, even if they feel like centralization trade-offs, are statistically more likely to prevent harm than they are to cause it.
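The guardrail Evans describes amounts to clamping how far a reported price may move per update. A minimal sketch, where the 2% bound is an illustrative parameter rather than any protocol's actual configuration:

```python
def clamp_price(reported, last_accepted, max_deviation=0.02):
    """Bound each oracle update to +/- max_deviation of the last price.

    A flash crash that prints -40% in a single update is clamped to -2%,
    trading some responsiveness for resistance to erroneous signals.
    """
    upper = last_accepted * (1 + max_deviation)
    lower = last_accepted * (1 - max_deviation)
    return min(max(reported, lower), upper)


print(clamp_price(60.0, 100.0))   # clamped near 98.0 rather than 60.0
print(clamp_price(100.5, 100.0))  # a normal move passes through unchanged
```

The centralization trade-off is real: a guardrail this tight lags a genuine rapid repricing. The empirical argument is that erroneous signals are the more common failure mode, so the clamp prevents harm more often than it causes it.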
The elegance trap
Evans names something that many engineers recognize but rarely articulate: the brain reaches for the elegant solution, but the pragmatic solution is often more important. Elegance is an aesthetic preference. Security is an empirical outcome. When the data suggests that a less elegant approach produces better security outcomes in practice, the obligation is to follow the data, not the instinct.
This is a useful corrective to a pattern that shows up in post-mortems repeatedly. Teams implement sophisticated oracle designs because they feel more rigorous, when simpler designs with hard-coded deviation limits would have prevented the attack that eventually occurred.
Security culture starts at the top and requires constant repetition
Evans's description of how to build a security culture at Agora is practical to the point of being uncomfortable. He requires two layers of verification before signing anything. He shares examples of real attack techniques, including prompt injection and social engineering vectors, with his team on an ongoing basis. He hired a dedicated CISO at a stage where most companies would consider that premature. He provides YubiKeys to every employee.
The number one thing is setting an example. And then just saying it again and again. We live it. - Drake Evans, CTO & Co-Founder, Agora
None of these are novel ideas. The point is that most organizations know these practices exist and implement them inconsistently. Evans's argument is that the gap between knowing and living a security culture is closed only by leadership behavior, not by policy documents.
His stance on enforcement is unambiguous. Developers using AI coding assistants with permissive flag configurations that bypass security guardrails face termination. This is not presented as a threat. It is presented as a logical consequence of the fact that security is not negotiable when the alternative is losing the entire business to an exploit.
The security culture Evans describes at Agora maps directly to what Olympix is built to support: security integrated into the development workflow before code reaches audit, so that the team's security culture has tooling that matches its ambition. Continuous scanning, pre-commit vulnerability detection, and proof-of-concept exploit generation all exist to extend the discipline of a security-first culture into the parts of the workflow where human vigilance alone is insufficient.
Owning security is not the same as outsourcing it to auditors
Evans's evolution across Frax and Agora reveals a consistent pattern: teams that mature their security stack do not simply add more audits. They internalize security ownership. At Frax, there was never a cost objection to additional audits. But more importantly, the team operated under the principle that a high-critical finding in an audit represented a failure of internal process, not a routine discovery. Security was owned by the engineers, not delegated to external reviewers.
If there is a high critical in an audit, it is a big fuck up. We own security. Security is our responsibility. - Drake Evans, CTO & Co-Founder, Agora
This reframes what audits are for. An audit is a second opinion, not a first line of defense. If a critical vulnerability makes it to audit without being caught internally, the internal process has failed, regardless of whether the audit catches it. The goal is to arrive at audit with zero high criticals, not to rely on auditors to find them.
This is the precise gap that Olympix is designed to close. BugPOCer surfaces vulnerabilities during development, before they reach audit, before they reach deployment, and before they become exploits. The 65% reduction in audit findings that Olympix clients observe is a direct consequence of shifting security work left, so that the audit functions as verification of a secure system rather than discovery of an insecure one.
The hot take: AI can find real bugs, and too much trust in auditors is a structural problem
Evans's closing take cuts through two persistent myths in DeFi security. The first is that AI-based security tooling cannot find meaningful vulnerabilities. He dismisses this directly, noting that he has seen AI tools surface genuinely interesting findings and that the dismissiveness toward AI security review reflects a conservatism the data does not support.
The second myth is that audit coverage is sufficient protection. Protocols that have been audited multiple times have been hacked. In some cases the audit was completed but the finding was never remediated; in others, the attack surface emerged from an interaction the auditors were not asked to review. Auditors are human, and their coverage is bounded by the scope they are given and the time they have. If the codebase is large, complex, and moving fast, something will fall through.
There is way too much complexity in smart contract development and probably way too much speed as well. Auditors are human too, AI or otherwise. If you embed so much surface area and complexity into what you are building, something will fall through. - Drake Evans, CTO & Co-Founder, Agora
The prescription is not to audit more. It is to build less complexity, move more deliberately, and own security at every layer rather than trusting any single external review process to catch what the internal process missed.
What this conversation means for how teams should be building
Evans's framework, taken in full, produces a clear set of operating principles for any team building on-chain infrastructure that handles real value. Treat every external interaction as adversarial. Keep systems simple enough to reason about completely. Model attacker incentives to the ninth order, not just the second. Build a security culture enforced by leadership behavior, not policy. Own security internally before relying on external audit. Use tooling that finds real vulnerabilities before deployment, not after.
The teams that get hacked are rarely the ones that skipped security entirely. They are the ones that did most things right and trusted that "most things right" was enough. In a dark forest environment where attackers have unlimited time, unlimited compute, and direct economic incentive, most things right is not a sufficient margin. Every layer of defense matters, and the teams that understand that are the ones still standing after the next exploit cycle.
About Olympix
Olympix builds proactive smart contract security tooling for Web3 development teams, combining symbolic execution, static analysis, mutation testing, and fuzzing to surface exploitable vulnerabilities before audit and before deployment, so security is owned by the team, not delegated to the last line of defense. Request a demo.