# Giga SS Store Migration Guide
Giga SS Store is the next step in Sei’s storage evolution on top of SeiDB. It splits the hot EVM state into its own dedicated state-store (SS) database so the node can scale toward the ~150k TPS throughput target, and so non-EVM modules stop paying write amplification for EVM state.
After migration the SS layer is repartitioned into two cooperating stores:
| Layer | Cosmos backend | EVM backend |
|---|---|---|
| SC (State Commit, app hash) | memiavl | FlatKV |
| SS (State Store, historical queries) | single MVCC DB (PebbleDB or RocksDB) | dedicated EVM SS MVCC DB(s) under data/evm_ss/ |
Only the SS layer changes for this migration. SC layer config is untouched
and memiavl remains the authoritative source for the app hash, so this is
invisible to the network.
This guide is maintained at `docs/migration/giga_store_migration.md` inside sei-chain. Open an issue there if anything here drifts.

## Prerequisites
- A `seid` build with the `evm-ss-split` flag wired in (Sei v6.5 or later). Older releases used per-key `evm-ss-write-mode`/`evm-ss-read-mode` toggles; if your `app.toml` still has those keys, upgrade `seid` before continuing.
- `sc-enable = true` and `ss-enable = true` in `app.toml`. Both must stay enabled.
- A trusted RPC endpoint to state-sync from (chain ID and trust-height source).
- Disk headroom for two SS databases. The EVM split does not duplicate data, but during migration both the old and the new layouts may briefly coexist on disk.
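A quick way to gauge headroom before starting. This is a generic sketch, assuming the default `~/.sei` home; it only compares the existing `data/` size against free space on the same filesystem:

```shell
# Compare current SS data size against free space on the same filesystem.
SEID_HOME="${SEID_HOME:-$HOME/.sei}"
used_kib=$(du -sk "$SEID_HOME/data" 2>/dev/null | awk '{print $1}')
used_kib=${used_kib:-0}   # 0 if the data directory does not exist yet
free_kib=$(df -Pk "$(dirname "$SEID_HOME")" | awk 'NR==2 {print $4}')
echo "data uses ${used_kib} KiB; ${free_kib} KiB free on the same filesystem"
```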
The migration requires a full state sync. There is no in-place migration path and no live “dual-write then split” workflow — the state sync wipes the local data directory and imports a fresh snapshot into the new layout.
## Benefits
- EVM reads are served exclusively from a dedicated EVM SS database.
- Non-EVM modules no longer pay write amplification for EVM state.
- A backend change (PebbleDB ↔ RocksDB) can be combined with the same state
  sync, since `ss-backend` drives both the Cosmos SS MVCC DB and every EVM SS
  sub-DB.
## What’s different about EVM SS
EVM SS is point-query only by design (`Get` / `Has`). Iteration is
explicitly disabled on the EVM backend for performance: the hot EVM read path
is tuned for direct key lookups, and cross-bucket scans would defeat the
per-type sub-DB layout. Any EVM read that needs iteration must stay on the
Cosmos SS side.
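For intuition, a single-slot read such as `eth_getStorageAt` is exactly the kind of point lookup the split routes to the EVM SS backend. The endpoint, address, and slot below are illustrative placeholders:

```shell
# A single-slot storage read is a pure point query (Get), so the EVM SS
# backend can serve it directly. Address and slot are placeholders.
REQ='{"jsonrpc":"2.0","method":"eth_getStorageAt","params":["0x0000000000000000000000000000000000000000","0x0","latest"],"id":1}'
echo "$REQ"
# Against a live node:
# curl -s -X POST http://127.0.0.1:8545 -H 'Content-Type: application/json' --data "$REQ"
```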
## Migration Steps

### Step 1: Update `app.toml`

Apply the following settings in `~/.sei/config/app.toml`:
```toml
[state-commit]
# State commit is untouched by this migration.
sc-enable = true

[state-store]
ss-enable = true

# DB backend for the Cosmos SS MVCC DB and for every EVM SS sub-DB.
# Supported: pebbledb, rocksdb. Default: pebbledb.
ss-backend = "pebbledb"

# Route EVM state to the dedicated EVM SS backend.
# When false (default), EVM state lives in the Cosmos SS backend alongside
# everything else. When true, EVM data is routed exclusively to the EVM SS
# backend; non-EVM data stays in Cosmos SS. No fallback between backends.
evm-ss-split = true
```

If you want to switch SS backend in the same step:
- PebbleDB → RocksDB: set `ss-backend = "rocksdb"`, build `seid` with
  `-tags rocksdbBackend`, and install RocksDB per the RocksDB Backend Guide.
  `ss-backend` drives both the Cosmos SS MVCC DB and every EVM SS sub-DB, so a
  single setting flips both.
- No data migration tool is needed across backends; the state sync populates
  the new layout.
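Before moving on, it can help to confirm the Step 1 keys actually landed in `app.toml`. A minimal sketch; it checks a throwaway file seeded with the expected contents, and on a real node you would point `APP_TOML` at `~/.sei/config/app.toml` instead:

```shell
# Throwaway app.toml standing in for ~/.sei/config/app.toml on a real node.
APP_TOML=$(mktemp)
cat > "$APP_TOML" <<'EOF'
[state-commit]
sc-enable = true
[state-store]
ss-enable = true
ss-backend = "pebbledb"
evm-ss-split = true
EOF

# All three toggles must be present and enabled for the migration.
for key in 'sc-enable = true' 'ss-enable = true' 'evm-ss-split = true'; do
  grep -qF "$key" "$APP_TOML" || { echo "missing: $key"; exit 1; }
done
echo "app.toml carries the Step 1 settings"
```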
### Step 2: State sync into the new layout
Giga SS Store is fully compatible with the existing state-snapshot format. On
import, the composite state store routes each snapshot node based on the
importing node’s `evm-ss-split` setting:
- With `evm-ss-split = true`, EVM snapshot nodes go only into EVM SS and
  non-EVM nodes go only into Cosmos SS.
- The import path normalizes legacy `evm_flatkv` snapshot nodes to `evm`, so
  snapshots produced by either the old or new FlatKV module are accepted.
Both stores end up fully populated at the snapshot height, so the node can start serving reads immediately.
The full state-sync flow is documented in the Statesync guide. The minimal shape for this migration:
```bash
export TRUST_HEIGHT_DELTA=10000
export MONIKER="<moniker>"
export CHAIN_ID="<chain_id>"
export PRIMARY_ENDPOINT="<rpc_endpoint>"
export SEID_HOME="$HOME/.sei"

# 1. Stop seid
sudo systemctl stop seid

# 2. Back up files you need to preserve, then wipe local state
cp "$SEID_HOME/data/priv_validator_state.json" /tmp/priv_validator_state.json
cp "$SEID_HOME/config/priv_validator_key.json" /tmp/priv_validator_key.json
cp "$SEID_HOME/config/genesis.json" /tmp/genesis.json
rm -rf "$SEID_HOME/data/"*
rm -rf "$SEID_HOME/wasm"
rm -f "$SEID_HOME/config/priv_validator_key.json"
rm -f "$SEID_HOME/config/genesis.json"
rm -f "$SEID_HOME/config/config.toml"

# 3. Re-init, then re-apply config.toml and app.toml (set Step 1 values again)
seid init --chain-id "$CHAIN_ID" "$MONIKER"

# 4. Resolve trust height/hash and persistent peers against PRIMARY_ENDPOINT,
#    then update config.toml. See /node/statesync for the full snippet.

# 5. Restore the backed-up files
cp /tmp/priv_validator_state.json "$SEID_HOME/data/priv_validator_state.json"
cp /tmp/priv_validator_key.json "$SEID_HOME/config/priv_validator_key.json"
cp /tmp/genesis.json "$SEID_HOME/config/genesis.json"

# 6. Start seid
sudo systemctl restart seid
```

Make sure `priv_validator_key.json` is in safe storage before deleting it from
the config directory. Loss of this key is unrecoverable for a validator; it is
not needed on RPC-only nodes.

### Step 3: Verify the new layout
Once the state sync completes and the node starts producing blocks, confirm Giga SS Store is active in two places.
**Startup logs.** All three lines should appear:

```text
"SeiDB SS is enabled"                            # with the configured `backend`
"SeiDB EVM StateStore optimization is enabled"   # with the `separateDBs` label
"EVM state store enabled"                        # with `dir` and `separateDBs` labels
```

**EVM RPC.** `debug_traceBlockByNumber` is the cleanest end-to-end check:
it forces the node to read EVM state out of the new EVM SS backend:
```bash
curl -s -X POST http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["latest",{}],"id":1}'
```

The response should contain a `"result"` field rather than an RPC error.
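When scripting this check, success reduces to the response body carrying a top-level `"result"` member. A small sketch using canned sample responses in place of a live node:

```shell
# Distinguish a successful JSON-RPC response from an error response.
# The samples below stand in for the body returned by the curl above.
is_trace_ok() {
  # Success responses carry "result"; failures carry "error".
  printf '%s' "$1" | grep -q '"result"'
}

sample_ok='{"jsonrpc":"2.0","id":1,"result":[]}'
sample_err='{"jsonrpc":"2.0","id":1,"error":{"code":-32000,"message":"trace failed"}}'

is_trace_ok "$sample_ok"  && echo "trace OK"
is_trace_ok "$sample_err" || echo "trace failed"
```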
## Safety checks

`seid` runs three DB-state checks at startup and refuses to launch if the EVM
SS and Cosmos SS DBs are inconsistent. They specifically catch the footgun of
flipping `evm-ss-split` from `false` to `true` without state syncing.
- **EVM SS directory missing or empty (before the EVM SS is opened).** When
  `evm-ss-split = true`, the composite state store refuses to proceed if
  Cosmos SS already has committed history but the EVM SS directory
  (`data/evm_ss/` by default) does not exist or is empty. Failing before the
  sub-DBs are opened means a rejected config does not leave a confusing empty
  `data/evm_ss/` behind.
- **EVM SS DB empty (post-open, pre-recovery).** Belt-and-suspenders for (1)
  when the directory exists but its DBs are empty. The WAL only covers the
  last `KeepRecent` blocks, so replay cannot rebuild a fresh EVM SS from
  scratch.
- **Mismatched earliest versions (post-recovery).** If the two DBs were
  populated from different snapshots (or pruned independently), historical
  reads would be inconsistent. A non-zero earliest-version divergence aborts
  startup.
If any check fires, the correct fix is either (a) complete the state sync
described above, or (b) set `evm-ss-split = false` and restart. If
`data/evm_ss/` is stale from a failed attempt, remove it before state syncing.
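For intuition, check (1) reduces to a directory-state comparison before the sub-DBs are opened. A standalone sketch of that logic with illustrative paths (the real check runs inside `seid`):

```shell
# Simulate the rejected state: Cosmos SS has committed history, but
# data/evm_ss/ is absent. Paths are throwaway stand-ins.
home=$(mktemp -d)
mkdir -p "$home/data/pebbledb"
touch "$home/data/pebbledb/000001.sst"   # stand-in for committed Cosmos SS history

cosmos_dir="$home/data/pebbledb"
evm_dir="$home/data/evm_ss"
reject=0
if [ -n "$(ls -A "$cosmos_dir" 2>/dev/null)" ] \
   && [ -z "$(ls -A "$evm_dir" 2>/dev/null)" ]; then
  reject=1
  echo "refusing to start: evm-ss-split=true but $evm_dir is missing or empty"
fi
rm -rf "$home"
```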
## Rollback

To roll back:
- Set `evm-ss-split = false` in `app.toml`.
- Restart the node. The EVM SS DB under `data/evm_ss/` is no longer opened but
  stays on disk until manually removed.
To fully reclaim the disk used by EVM SS, stop the node and delete
`data/evm_ss/` after reverting the setting.
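A sketch of that cleanup, assuming a systemd-managed node and the default home; the service commands are commented out so the snippet is safe to dry-run:

```shell
# Remove the now-unused EVM SS directory after reverting evm-ss-split.
SEID_HOME="${SEID_HOME:-$HOME/.sei}"
# sudo systemctl stop seid              # stop the node first on a real host
rm -rf "$SEID_HOME/data/evm_ss"         # safe no-op if the directory is absent
# sudo systemctl start seid
```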
Cleanly reverting to `evm-ss-split = false` requires another state sync. Under
`evm-ss-split = true`, EVM writes go only to the EVM SS DB, so Cosmos SS will
not have those writes. Restarting with `evm-ss-split = false` stops opening
the EVM SS DB, but EVM-state queries will miss anything written after the Giga
state sync until you re-state-sync without the split.

## FAQ
### Where do the data files live after migrating?

- Cosmos SS data lives under the same directory as before, typically
  `data/pebbledb/` for the default `pebbledb` backend.
- EVM SS data lives under `data/evm_ss/`.
- SC data (`memiavl` + FlatKV) is untouched by this migration.
### Does Giga SS Store change the app hash or consensus?
No. The SC layer is unchanged, so memiavl remains the authoritative source
for the app hash. Giga SS Store is a per-node SS change that is invisible to
the network.
### Can I migrate a validator node with this guide?
Not yet. This migration guide is for RPC nodes only.
### Can I migrate an archive node with this guide?
Not yet. Archive-node migration is out of scope for this guide.
### Can I toggle back to `evm-ss-split = false` after enabling it?

Yes, but cleanly rolling back requires another state sync; see the Rollback section above.
### Why can’t I just flip `evm-ss-split = true` on a running node?
Because evm-ss-split = true requires the EVM SS DB to already contain the
full history that Cosmos SS has. A live flip would leave the EVM SS DB empty
while the composite store refuses to fall back to Cosmos SS, which would
translate into missing EVM state at query time. The safety checks above
block this scenario at startup.
### Does Giga SS Store support historical proofs?
No, same as SeiDB. SS stores raw KVs and does not reconstruct IAVL-style proofs.