Recent Network Patches

Story

20 February 2026

Tech

In recent months, the PIP Labs team has patched issues on Story that could have affected its liveness. As part of the process of fixing a live issue without telegraphing its nature to potential attackers, PIP Labs distributed private binaries to validators.

We are sharing this explainer in the spirit of transparency for validators and the broader community. Below is a summary of the issues and the measures taken to fix them.

Cantina Issue #131: CometBFT issue (patched on Story v1.3.3)

Root Cause Analysis

The root cause of this bug lies in CometBFT itself, but Story, like other chains built on Cosmos technology, was vulnerable.

By broadcasting a block message with empty BlockParts.Elems (the lines marked with + below show the proof-of-concept modification):

func (conR *Reactor) broadcastNewValidBlockMessage(rs *cstypes.RoundState) {
    psh := rs.ProposalBlockParts.Header()
    csMsg := &cmtcons.NewValidBlock{
        Height:             rs.Height,
        Round:              rs.Round,
        BlockPartSetHeader: psh.ToProto(),
        BlockParts:         rs.ProposalBlockParts.BitArray().ToProto(),
        IsCommit:           rs.Step == cstypes.RoundStepCommit,
    }
+    nodeIDEnv := os.Getenv("ID")
+    if nodeIDEnv == "1" && rs.Height == 5  {
+        fmt.Println("mxuse")
+        csMsg.BlockParts.Elems = nil
+    }
    conR.Switch.Broadcast(p2p.Envelope{
        ChannelID: StateChannel,
        Message:   csMsg,
    })
}

An attacker could trigger a panic in CometBFT:

25-10-03 07:44:18.917 INFO 👾 ABCI call: FinalizeBlock height=5 proposer=3dd8cba
25-10-03 07:44:18.917 DEBU Skip minting during singularity

panic: runtime error: index out of range [0] with length 0

goroutine 78 [running]:
github.com/cometbft/cometbft/libs/bits.(*BitArray).setIndex(...)
        /story/vendor/github.com/cometbft/cometbft/libs/bits/bit_array.go:97
github.com/cometbft/cometbft/libs/bits.(*BitArray).SetIndex(0x400083f2b0?, 0x40006aa021?, 0xc0?)
        /story/vendor/github.com/cometbft/cometbft/libs/bits/bit_array.go:89 +0x1a8
github.com/cometbft/cometbft/consensus.(*PeerState).SetHasProposalBlockPart(0x4003a829f0?, 0x0?, 0x0?, 0x2fda1c0?)
        /story/vendor/github.com/cometbft/cometbft/consensus/reactor.go:1148 +0x110
github.com/cometbft/cometbft/consensus.(*Reactor).gossipDataRoutine(0x4001146ea0, {0x3020ed8, 0x400083f2b0}, 0x400083f380)
        /story/vendor/github.com/cometbft/cometbft/consensus/reactor.go:574 +0xb04
created by github.com/cometbft/cometbft/consensus.(*Reactor).AddPeer in goroutine 205
        /story/vendor/github.com/cometbft/cometbft/consensus/reactor.go:202 +0xf4
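To illustrate the failure mode, here is a minimal, self-contained sketch using a simplified stand-in for CometBFT's bits.BitArray (illustration only, not the real implementation): the bit array decoded from the malicious message ends up with a positive Bits count but an empty backing slice, so the next write to it indexes past the end of the slice.

package main

import "fmt"

// BitArray is a simplified stand-in: Bits is the logical length,
// Elems holds the backing 64-bit words.
type BitArray struct {
    Bits  int
    Elems []uint64
}

// setIndex mirrors the vulnerable pattern: the index is checked against Bits,
// but when the array carries Bits > 0 with an empty Elems slice, Elems[i/64]
// indexes past the end of the slice and panics.
func (b *BitArray) setIndex(i int, v bool) bool {
    if i < 0 || i >= b.Bits {
        return false
    }
    if v {
        b.Elems[i/64] |= 1 << uint(i%64) // panic: index out of range [0] with length 0
    } else {
        b.Elems[i/64] &^= 1 << uint(i%64)
    }
    return true
}

func main() {
    // The shape produced by a NewValidBlock message with empty BlockParts.Elems.
    bad := &BitArray{Bits: 10, Elems: nil}
    defer func() { fmt.Println("recovered:", recover()) }()
    bad.setIndex(0, true)
}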

Mitigation

We communicated with the CometBFT team and directed the researcher to their bug bounty, where the issue scored as High severity.

Process Improvement

Base technology used by many projects often has bug bounty rewards that are low relative to the criticality of the bugs found, and we believe it’s important for the space to boost those rewards. We’d therefore like to spin up a collective bug bounty pool for CometBFT and Cosmos SDK that any Cosmos project can top up alongside the existing Cosmos bounty. If a vulnerability has downstream, ecosystem-wide impact, the pooled funds would boost the reward accordingly. This will help:

  • Align incentives across teams for shared-surface issues
  • Strengthen researcher participation (bigger, clearer upside)
  • Enable an ecosystem-first response when bugs cross module and app boundaries

If you are interested in contributing, please fill out this form.

Action Items

We rolled out a private release with the patch before the official one, since the CometBFT team planned to ship their fix as part of a public, routine upgrade. That meant there would be a window of around a week during which black hats monitoring the CometBFT repository could spot the fix and attack chains that had not yet upgraded.

Once the public version of CometBFT was released, we upgraded to it.

Issue #133 (patched on Story v1.3.3)

Root Cause Analysis

In our TGE audits (Story Protocol v1.2), one of the issues (TRST-R-2) notes that the Story cosmos-sdk fork was not patched for ASA-2024-0012 and ASA-2024-0013, but incorrectly claims these vulnerabilities “do not directly impact the chain”.

Our dev team also marked the issue as “Risk was tolerable for the project” in GitHub’s Dependabot warning.

The issue remained unresolved for some time, but thankfully we received a submission through our bug bounty that proved a single malicious validator could trigger a network shutdown.

The ModeInfo_Multi structure defined in cosmos-sdk/types/tx/tx.pb.go contains an array of ModeInfo structures, which can also be ModeInfo_Multi.

This allows deep nesting within a containing Tx object: deep enough that, with an 18MB transaction, the recursive call to RejectUnknownFields within the standard transaction decoder overflows the stack and crashes the process with a fatal runtime error.

As the single Tx allowed within a block is decoded before checks are performed, a malicious validator can crash every other node by proposing a block with such a malicious transaction contained within. Note that the 18MB required fits within the maximum 20MB Story mainnet block size, and there is no limit enforced on Tx size during the ProcessProposal stage.

This issue was patched by cosmos-sdk in v0.50.11, one minor release later than the fork of v0.50.10 used by Story. The patch introduces depth counters for both RejectUnknownFields and UnpackAny.
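As a rough illustration of the depth-counter idea (a simplified stand-in, not the actual cosmos-sdk v0.50.11 code), a recursive walk over a ModeInfo-like structure can carry an explicit depth and bail out early instead of recursing until the goroutine stack is exhausted:

package main

import (
    "errors"
    "fmt"
)

// ModeInfo mirrors the recursive shape of ModeInfo/ModeInfo_Multi (simplified;
// not the generated tx.pb.go type): a node can contain further ModeInfo nodes,
// so attacker-controlled input can nest arbitrarily deep.
type ModeInfo struct {
    Multi []*ModeInfo
}

// maxDepth is a hypothetical limit chosen for illustration.
const maxDepth = 100

// walk carries an explicit depth counter so that maliciously deep nesting is
// rejected with an error instead of overflowing the stack.
func walk(m *ModeInfo, depth int) error {
    if depth > maxDepth {
        return errors.New("mode info nesting exceeds maximum depth")
    }
    for _, child := range m.Multi {
        if err := walk(child, depth+1); err != nil {
            return err
        }
    }
    return nil
}

func main() {
    // Build a pathologically deep chain, similar in spirit to the 18MB PoC transaction.
    root := &ModeInfo{}
    cur := root
    for i := 0; i < 1_000_000; i++ {
        next := &ModeInfo{}
        cur.Multi = []*ModeInfo{next}
        cur = next
    }
    fmt.Println(walk(root, 0)) // rejected by the depth counter, no stack overflow
}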

Mitigation

Bumped the cosmos-sdk version in the Story v1.3.3 binary.

Thanks to MajorExcitement for the submission; although it was out of scope, it still received a reward.

Action Items

  • Review audit claims more thoroughly
  • Monitor dependencies more actively

Internal Issue #1 (patched on Story v1.3.3)

While reviewing Issue #133, the PIP Labs team found an additional issue that could crash validators.

If a proposer includes an empty transaction in PrepareProposal, other validators panic with an "invalid memory address or nil pointer dereference" error. This is due to the transaction being empty, which leaves some of its fields nil.

func validateTx(tx sdk.Tx) error {
    // Reject invalid protobuf transactions
    protoTx, ok := tx.(protoTxProvider)
    if !ok {
        return errors.New("invalid proto tx")
    }
    // Reject transactions carrying signatures
    signatures := protoTx.GetProtoTx().Signatures
    if len(signatures) != 0 {
        return errors.New("disallowed signatures in tx")
    }

    standardTx, ok := tx.(signing.Tx)
    //...
    // Verify the fee is empty
    // NOTE: for an empty transaction AuthInfo can be nil, so AuthInfo.Fee
    // dereferences a nil pointer here and the validator panics.
    if protoTx.GetProtoTx().AuthInfo.Fee == nil || standardTx.GetFee() != nil {
        return errors.New("invalid fee in tx")
    }

    //...
}

The fix rejects such empty transactions while processing the block.
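A minimal sketch of that kind of guard, using simplified stand-in types rather than the actual cosmos-sdk transaction types: the nested fields are checked for nil before being dereferenced, so an empty transaction is rejected cleanly instead of panicking.

package main

import (
    "errors"
    "fmt"
)

// Simplified stand-ins for the transaction types involved (illustration only).
type Fee struct{ Amount int64 }
type AuthInfo struct{ Fee *Fee }
type Tx struct{ AuthInfo *AuthInfo }

// validateTx rejects transactions whose nested fields are missing before any
// of those fields are dereferenced.
func validateTx(tx *Tx) error {
    if tx == nil || tx.AuthInfo == nil {
        return errors.New("empty tx")
    }
    if tx.AuthInfo.Fee != nil && tx.AuthInfo.Fee.Amount != 0 {
        return errors.New("disallowed fee in tx")
    }
    return nil
}

func main() {
    fmt.Println(validateTx(&Tx{}))                      // "empty tx", no panic
    fmt.Println(validateTx(&Tx{AuthInfo: &AuthInfo{}})) // <nil>: passes the guards
}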

Mitigation

  • Included the above fix in Story v1.3.3

Action Items

  • Always verify every possible parameter for invalid or empty states.
  • Improve unit test coverage and fuzz testing.

Internal Issue #2 (patched on Story v1.4.2)

Root Cause Analysis

While testing a reduction of the minimum delegation (staking) amount requested by the community in the Forum, PIP Labs' L1 dev team discovered a bug.

This bug had been present since the genesis block, and it was not discovered through three parallel audits, a $1M audit competition, or the current live bug bounty of up to $600k.

There was a typo in the ValidateUnbondAmount method of Story’s fork of cosmos-sdk. For validators supporting delegations of locked tokens, a partial unbonding (unstaking) could cause:

  • an accounting error in the rewards calculation
  • the possibility of a chain halt, if the partial undelegation was performed by the validator itself
func (k Keeper) ValidateUnbondAmount(
    ctx context.Context, delAddr sdk.AccAddress, valAddr sdk.ValAddress, periodDelegationID string, amt math.Int,
) (shares, rewardsShares math.LegacyDec, err error) {
    // ...

    rewardsShares = (shares.Mul(periodDelegation.RewardsShares)).Quo(periodDelegation.Shares)
    /////// NOTE: below shares.GT should be rewardsShares.GT
    if shares.GT(periodDelegation.RewardsShares) {
        rewardsShares = periodDelegation.RewardsShares
    }

    return shares, rewardsShares, nil
}

Delegation in cosmos-sdk is based on the concept of shares, which represent the portion of stake a delegator holds for a given validator. Rewards are distributed proportionally according to these shares. The “reward shares” shown in the code above are the weighted shares we introduced because period delegations and locked tokens have different weights.

Looking at the code: it caps the undelegated amount to the delegation’s current shares and reward shares when the user tries to undelegate more than they actually have. But as you can see, the comparison for the reward shares was wrong. This leads to an inconsistency between the remaining shares and reward shares after undelegation.

This happens when the undelegated share amount is smaller than the delegation’s normal shares but larger than the delegation’s reward shares. Since period delegations always use weights greater than 1, normal undelegations should never hit this condition. Reward shares will always be weighted above 1 in the normal case.

The problem appears only for delegations of locked tokens, since locked tokens use a weight of 0.5, which is less than 1.

This means that even with a normal undelegation, the undelegated share amount can exceed the delegation’s reward shares (which have been weighted down by the 0.5 factor). As a result, the if condition incorrectly triggers and wipes out all of the delegation’s reward shares.

The panic occurs when a validator performs self-delegation and then undelegates. After undelegation, the reward shares become zero, and during reward withdrawal and initialization, it tries to divide by the reward token amount (now zero), causing a panic.
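As a rough numeric illustration (made-up numbers, plain floats instead of math.LegacyDec): a locked-token delegation with 100 shares carries 50 reward shares at weight 0.5. Undelegating 60 shares should remove 30 reward shares, but the buggy comparison of 60 against 50 fires the cap and removes all 50.

package main

import "fmt"

// capRewardShares computes how many reward shares are removed for an
// undelegation, with the buggy and the fixed cap side by side.
func capRewardShares(undelegated, totalShares, totalRewardShares float64, buggy bool) float64 {
    rewards := undelegated * totalRewardShares / totalShares // proportional amount: 30
    if buggy {
        // Buggy comparison: undelegated shares (60) vs. total reward shares (50).
        if undelegated > totalRewardShares {
            rewards = totalRewardShares // wipes out all 50 reward shares
        }
    } else {
        // Fixed comparison: computed reward shares (30) vs. total reward shares (50).
        if rewards > totalRewardShares {
            rewards = totalRewardShares
        }
    }
    return rewards
}

func main() {
    fmt.Println("buggy:", capRewardShares(60, 100, 50, true))  // 50: all reward shares removed
    fmt.Println("fixed:", capRewardShares(60, 100, 50, false)) // 30: the proportional amount
}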

Impact

Fortunately, the affected delegations were constrained to a small subset of the validator set (locked-token validators), with only 3 delegators suffering losses totaling less than 3.5k $IP. The rest of the delegators experienced small gains.

Mitigation

  1. Privately release security patch v1.4.2 to Story validators, fixing the bug and removing the possibility of a network crash. The check changed from:
if shares.GT(periodDelegation.RewardsShares) {
    rewardsShares = periodDelegation.RewardsShares
}

To the correct implementation:

if rewardsShares.GT(periodDelegation.RewardsShares) {
    rewardsShares = periodDelegation.RewardsShares
}
  2. In the next network upgrade, fix the affected reward shares to stop the small accounting differences from compounding further.

Action Items

  • Increase unit testing for all paths in L1 code, especially the economic logic.
  • Increase fuzz testing coverage.
  • Allocate time for internal reviews.

Comet BFT Vulnerability (Patched on Story v1.4.3 / v1.5.0)

On January 13th, the Comet BFT team privately contacted PIP Labs to inform us of an upcoming private release patching a High-severity bug discovered through their bug bounty.

Exploitation was described as allowing permanent fund loss and chain halts if abused by rogue validators. The issue was at the consensus layer and affected all versions of CometBFT (it had been present since 2015).

PIP Labs distributed the private CometBFT source patch in Story private release v1.4.3, delivered to trusted validators during the maintenance window the Story network went through.

Comet BFT released the public patched version for the following supported versions:

  • CometBFT v0.38.x
  • CometBFT v0.37.x

This was included in Story binary v1.5.0.
