How Story Built a Multi-Layer Defense for Mainnet

Story

06 November 2025

This post kicks off a new multi-part series exploring how we approach security at Story: the principles that guide us, the systems we’ve built, and what we’ve learned along the way.

A Year in Security at Story

Security is a core part of Story’s DNA. It is a constant, deliberate practice that protects everything from our network to the community members and ecosystems that depend on it.

This first installment of the series looks back on the work we did to secure Story’s codebase and infrastructure in the lead-up to mainnet and the token launch. Our goal is to share those lessons so builders in the Story ecosystem and beyond can benefit from the team's experience.

This blog series will cover:

  1. Securing our code and our ecosystem’s code
  2. Securing our governance processes
  3. Securing our infrastructure, people, and community (OpSec, InfraSec, brand protection)
  4. Preparing for the worst (monitoring, SOPs, incident response)

Securing Our Code

Exploits of flaws in smart contract logic, or even in blockchain client code, are among the most notorious ways a web3 project can be compromised.

As Zellic put it so eloquently in their blog post, finding a bug becomes more and more expensive the closer you are to deploying your code publicly onchain.

To minimize the chance of an exploit, we built a layered review process during development:

  • A minimum of two approvals required to merge a PR into the main branch for critical code (smart contracts, L1)
  • Strive for 100% unit test coverage
  • Review calls for new features or big PRs, so the developer can add context for the reviewers
  • The security team conducts an internal review before external audits

[Diagram: Story's layered code review process]

Finding the Right Auditors

The number and variety of security service providers have grown significantly since 2017, when teams could end up on month-long waitlists for the few auditing companies available.

Today, there’s a much broader ecosystem of options (though top-tier firms can still have a waitlist, so start looking early!). We’ve classified the ecosystem into several categories.

Auditing Companies

These are established firms with dedicated teams, brand reputation, and standardized processes. They typically provide detailed reports and a structured workflow managed by a dedicated project lead. A recognized name also adds credibility with investors and partners.

Here's what to look for:

  • Review their previously published reports for quality, depth, and project complexity.
  • Match auditor experience to your project; firms that only audit ERC-20s would be a risky choice for complex protocols.
  • Ask which specific auditors will be assigned to your code. Previous firm experience doesn't transfer automatically, and obscuring this might indicate high auditor churn, which can be a red flag.
  • Clarify whether you’ll get direct access to auditors or only to a manager.
  • At the time of writing, AI auditors are improving, but are better suited to run during development. Beware of firms offering cheap, fully automated audits for final code.
  • The experience of trusted peers is very valuable, so ask around for references.

Independents

With the rise of bug bounty and competition platforms like Immunefi, Code4rena, Cantina, Sherlock, and others, some solo auditors have formed strong reputations within these platforms' communities.

The great thing about solo auditors is that they have public portfolios on these platforms, where you can directly verify their performance and any previous experience that matches your project.

For small codebases, hiring an independent auditor directly can make sense, but for medium-to-large projects you need a team. Fortunately, independents sometimes form collectives, and the platforms offer private audits with these groups.

Pricing directly correlates with experience and portfolio, but you might save on overhead. Bandwidth will be more limited than if you’re hiring a full firm.

Specialists

These are firms that focus on specific methodologies or technologies rather than broad manual auditing. Think formal verification (Certora, Runtime Verification), or fuzz and invariant testing and tooling (FuzzingLabs, Fuzzland). Hiring one of these teams can be expensive, but their work becomes part of your long-term tooling and test suites.

Multi-Approach (Audit + Fuzzing)

Some firms, like Guardian Audits and Enigma Dark, offer a fuzzing test suite plus a manual audit, which is a very good option to cover your project from many angles in one engagement.

[Diagram: categories of security service providers]

Our Approach

Our strategy was to have our code covered from different angles, so we wanted to get audits from:

  • A traditional auditing firm.
  • A group of independents.
  • One fuzzing/invariant-testing provider.

At Story’s first stage, we focused on the Proof of Creativity (PoC) protocol, the smart contract protocol for IP registration.

In the first round of audits we enlisted:

  • SlowMist as the traditional auditing firm.
  • Fuzzland for fuzz/invariant testing.
  • Trust Security as a group of top-level independent auditors.

After completing the first round, we realized we needed blockspace and a way to express the derivation data structure as a graph. Because of this we pivoted to an L1 (a modified Cosmos SDK with CometBFT for consensus, Geth for execution), tightly integrating PoC with our execution layer through a stateful precompile called IPGraph.
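For readers unfamiliar with the pattern, the sketch below shows roughly how a stateful precompile can be wired into a Geth fork. Stock go-ethereum precompiles only implement RequiredGas and Run over raw input; giving a precompile access to the StateDB requires extending that interface in the fork. Everything here beyond the IPGraph name (the interface extension, storage layout, and reserved address) is an illustrative assumption, not Story's actual implementation.

```go
// This sketch assumes it lives in core/vm of a go-ethereum fork, so EVM and
// ErrExecutionReverted are in scope.
package vm

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// StatefulPrecompile is a hypothetical extension of the PrecompiledContract
// interface that also receives the EVM (and thus the StateDB).
type StatefulPrecompile interface {
	RequiredGas(input []byte) uint64
	RunStateful(evm *EVM, caller common.Address, input []byte) ([]byte, error)
}

// ipGraphAddress is an illustrative reserved address for the precompile.
var ipGraphAddress = common.BytesToAddress([]byte{0x01, 0x01})

// ipGraph stores parent/child edges between IP assets directly in EVM state,
// so derivation lookups don't require walking unbounded Solidity mappings.
type ipGraph struct{}

func (g *ipGraph) RequiredGas(input []byte) uint64 {
	// Flat base cost for the sketch; a real implementation would price the
	// operation by the number of graph edges touched.
	return 3000
}

func (g *ipGraph) RunStateful(evm *EVM, caller common.Address, input []byte) ([]byte, error) {
	// Expect two ABI-encoded 32-byte words: child address, parent address.
	if len(input) < 64 {
		return nil, ErrExecutionReverted
	}
	child := common.BytesToAddress(input[12:32])
	parent := common.BytesToAddress(input[44:64])

	// Record the edge in the precompile's own storage via the StateDB.
	slot := crypto.Keccak256Hash(child.Bytes(), parent.Bytes())
	evm.StateDB.SetState(ipGraphAddress, slot, common.BytesToHash([]byte{1}))
	return nil, nil
}
```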

Our scope is wide, and, very importantly, PIP Labs ships fast (we shipped the L1 in 5 months!). To honor our deadlines and cover the maximum possible surface, we split the audit scope into related areas, so different teams could look at different areas in parallel while focusing on the critical interactions between parts of our system.

We split the audits around two critical interactions between components:

  • One team audited the Story client, specifically our modifications to cosmos-sdk where it interacts with predeployed contracts in the execution layer: IPTokenStaking, UBIPool, and UpgradesEntryPoint. The events of these contracts drive staking and other functionality in our Cosmos modules (see the sketch after this list).
  • The other team did a differential audit for the new features and refactors in PoC, mainly the integration with IPGraph, and the actual inner workings of the precompile.
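To make that event-driven pattern concrete, here is a minimal sketch of how a consensus client can translate execution-layer logs into Cosmos-side state changes. The event signature, keeper methods, and decoding below are illustrative assumptions, not Story's actual module code.

```go
package keeper

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/ethereum/go-ethereum/common"
	ethtypes "github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

// Keeper wires the module's dependencies (elided for brevity in this sketch).
type Keeper struct{}

// depositTopic is the topic hash of a hypothetical
// Deposit(bytes,uint256) event emitted by the staking contract.
var depositTopic = crypto.Keccak256Hash([]byte("Deposit(bytes,uint256)"))

// ProcessExecutionLogs runs once per block over the logs emitted by the
// predeployed staking contract and mirrors them into Cosmos staking state.
func (k Keeper) ProcessExecutionLogs(ctx sdk.Context, stakingContract common.Address, logs []*ethtypes.Log) error {
	for _, log := range logs {
		if log.Address != stakingContract || len(log.Topics) == 0 {
			continue
		}
		if log.Topics[0] == depositTopic {
			if err := k.handleDeposit(ctx, log.Data); err != nil {
				return err
			}
		}
	}
	return nil
}

// handleDeposit would ABI-decode the payload and credit the delegation;
// the decoding and the staking call are elided in this sketch.
func (k Keeper) handleDeposit(ctx sdk.Context, data []byte) error {
	return nil
}
```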

The selected vendors were:

  • Trust Security as independents, returning.
  • Halborn as the auditing firm.
  • FuzzingLabs, who created a custom fuzzer for our Cosmos modules, a version of Attacknet for Story, and a custom Echidna fork that would enable us to fuzz PoC and hit the precompile as well.

FuzzingLabs’ work was particularly helpful for testing gas pricing and stressing the stateful precompile.
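FuzzingLabs' custom tooling isn't reproduced here; the snippet below is just a minimal sketch of the fuzz/invariant idea using Go's built-in fuzzing, with a hypothetical parsing helper standing in for real handler logic.

```go
package keeper_test

import (
	"testing"
)

// parseDepositInput stands in for whatever decoding a staking handler does on
// raw execution-layer data (hypothetical helper for this sketch).
func parseDepositInput(data []byte) (amount uint64, ok bool) {
	if len(data) < 8 {
		return 0, false
	}
	for i := 0; i < 8; i++ {
		amount = amount<<8 | uint64(data[i])
	}
	return amount, true
}

// FuzzDepositParsing throws arbitrary byte strings at the parser and checks
// an invariant: malformed (too short) inputs must never be accepted.
func FuzzDepositParsing(f *testing.F) {
	f.Add([]byte{0, 0, 0, 0, 0, 0, 0, 1}) // seed corpus entry
	f.Fuzz(func(t *testing.T, data []byte) {
		amount, ok := parseDepositInput(data)
		if len(data) < 8 && ok {
			t.Fatalf("accepted malformed input of length %d", len(data))
		}
		_ = amount
	})
}
```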

Audit Contests

These focused, formal audits might still have missed system-level vulnerabilities that only emerge when all the pieces of a codebase work together. To make sure every possible attack vector and surface area was examined, we ran a month-long audit contest before our token launch.

Audit contests are open or invite-only events where many researchers review your codebase within a limited timeframe, typically on platforms like Code4rena, Sherlock, Immunefi, Cantina, or CodeHawks by Cyfrin. Projects offer a prize pool that is distributed based on the severity and number of valid findings.

What to Consider When Running an Audit Contest:

  • You’re competing for researcher attention, so budget and time accordingly.
  • Be careful with prize structures that may disincentivize participation (e.g., conditional pools are controversial, as is not rewarding Mediums and Lows).
  • On the flip side, make sure to run contests on mature, previously audited code, or you can bet your conditionals will trigger.
  • Ensure strong judging and deduplication processes. Our contest drew 977 submissions, so the triage stage was critical.
  • Allow ample time post-contest for judging, remediation, and fix verification.

Story’s Audit Contest

Our audit contest ran from Dec. 14, 2024 to Jan. 17, 2025 and offered a prize pool of up to $1 million.

The pool size depended on the severity of the discoveries made:

  • Medium-severity findings: $300k
  • High-severity findings: $1 million
  • $25k reserved for low-severity findings, where the top 5 submissions received tiered rewards ($10k → $1.25k)

We received 977 submissions, with a total of 19 Highs, 44 Mediums, 51 Lows, and 76 Informationals, which we proceeded to fix as soon as we confirmed them.

We consider the audit contest highly productive. Researchers tried every interaction between our components, and improved our security posture considerably.

We created a public utility repo to help participants spin up localnets and submit PoCs quickly, and to stay on schedule we began triaging and fixing “live” during the competition. Usually you wait for the end of a competition to start fixes, but it was a good thing we got a head start, as we still needed considerable resources for the final surge of reports after the contest wrapped.

For projects scoping an audit contest, we recommend setting aside enough time (at least a month, depending on the scope of your audit) to properly address findings, review fixes, and handle the escalation process that follows judging. Our audit covered 4 repositories, and one of our key learnings was to set aside more post-contest time as a buffer.

Bug Bounty

After Mainnet and TGE, we launched a bug bounty program offering up to $600k for critical loss-of-funds exploits.

The Scope Includes:

  • Consensus client and staking contracts
  • Cosmos SDK fork
  • Geth precompile
  • PoC and peripheral smart contracts
  • APIs, SDKs, and web2 assets

The bounty continues to attract new researchers and veterans from our audit contest. Reported issues have ranged from misconfigured web2 infra to potential chain-halt scenarios, all resolved quickly and safely.

For an example of a reported issue that was resolved without incident, check out this published post-mortem.

If you are a security researcher reading this, come break our code!

After TGE: Internal Reviews and Differential Audits

We want to live by the phrase “don’t deploy unaudited code.” Our post-launch workflow ensures new features remain under scrutiny:

  • Security intake process: Dev teams submit design docs and completion timelines early, so internal and external audits can be scheduled ahead of delivery.
  • In-house researchers: Our growing security team now includes a dedicated web3 researcher who manually reviews new features and PRs as the first line of defense.
  • External partners: Trust Security has been a partner on retainer for priority scheduling. We will soon announce new partners on retainer.

[Diagram: post-launch security review workflow]

Securing Our Ecosystem’s Code

To strengthen the Story ecosystem and protect the network from vulnerabilities in external projects, we provide security guidance to selected ecosystem teams. Our goal is to help promising builders launch safely, while ensuring that no insecure code puts the broader network at risk.

During the TGE phase, we built a structured security pipeline for top Story Academy projects such as PiperX, UnleashProtocol, MetaPool, Verio, Color, and StoryHunt. The process included:

  • Completing security questionnaires so our team could perform risk profiling and help teams think critically about security.
  • Connecting projects looking for audits with our Ecosystem Auditor Program, which can sometimes offer discounted services from select partners.

After Story Academy ended, we continued supporting key and incubated projects in several ways:

  • Ongoing educational initiatives.
  • Pentesting efforts, such as for Poseidon’s DePIN app, and support for anti-farming research during its campaign.
  • Security posture review for IP Strategy, the first publicly listed treasury vehicle centered on the $IP token.

We selectively extend this kind of support to projects that are strategically important to Story’s ecosystem as a way to ensure that the most impactful builders on Story can launch with strong security foundations.

Closing Thoughts

For us, security isn’t a single milestone. It’s a continuous process baked in throughout protocol development at Story. From code reviews to bounties, our focus has been to create overlapping layers of defense that evolve with our technology and expanding ecosystem.

This post is the first in our series on security at Story. Upcoming posts will dive into how we secure governance, infrastructure, and community operations: the human side of building a resilient network.
