How to build a replayable event-checkpoint indexer in Move
How do you build a replayable event-checkpoint indexer in Move that supports parallel reprocessing and deterministic checkpoints?
- Move
- Smart Contract
- Move Script
Answers
On high-volume chains you want an indexer that can reprocess events (e.g., for analytics, cross-chain relayers, or fraud detection). You need deterministic checkpoints so reprocessing is parallelizable and resumable. The design must make it cheap to resume from the last checkpoint and support parallel workers that process disjoint event ranges.
Strategies you can use
- Emit a compact `Event` object for each important occurrence; store only pointers/hashes, not huge payloads.
- Implement an on-chain `Checkpoint` resource that stores an ordered list of `CheckpointChunk` objects; each chunk records a contiguous range `[start_seq, end_seq]`, a Merkle root (or hash) of the events in that chunk, and a processing state.
- Off-chain workers claim a chunk (move a `ChunkClaim` resource to themselves), process the events in that chunk in parallel, and then submit an on-chain `ChunkResult` with proofs (hashes) for verification.
- On-chain verification is cheap: check that the chunk result hash equals the chunk root. After verification, update the chunk state to `Processed` and advance the global checkpoint pointer.
Why this works
- Checkpoint chunks partition event space so workers only read disjoint ranges — no contention.
- On-chain chunk roots provide deterministic proof for what must be processed (workers download event payloads off-chain and compute root; the chain only stores the compact root).
- Reprocessing is resumable by scanning chunk states.
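The determinism argument rests on every worker computing the same root over the same event range. A minimal off-chain sketch of that computation, assuming SHA-256 leaves and a duplicate-last-node rule for odd levels (the exact encoding is a design choice that must match what the chain committed to):

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Binary Merkle root over event-header bytes.

    Illustrative helper: the leaf encoding and odd-level pairing rule
    here are assumptions and must match the on-chain commitment.
    """
    assert leaves, "a chunk must contain at least one event"
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            for i in range(0, len(level), 2)
        ]
    return level[0]

# Any two workers hashing the same ordered headers agree on the root:
headers = [b"event-1", b"event-2", b"event-3"]
assert merkle_root(headers) == merkle_root(list(headers))
```

Because the root depends on event order and content only, the chain can store just this 32-byte value per chunk and still pin down exactly what each worker must process.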
Working-style Move sketch
```move
module 0xA::indexer {
    use std::signer;
    use std::vector;

    // Compact event header stored on-chain
    struct EventHeader has store, drop {
        seq: u64,                 // monotonic sequence number
        emitter: address,
        payload_hash: vector<u8>  // hash pointer to off-chain payload
    }

    // Checkpoint chunk describes a contiguous range of events
    struct CheckpointChunk has store, drop {
        start_seq: u64,
        end_seq: u64,
        root_hash: vector<u8>,    // Merkle/aggregate hash of headers in range
        state: u8                 // 0 = Unprocessed, 1 = Claimed, 2 = Processed
    }

    struct Checkpoint has key {
        chunks: vector<CheckpointChunk>,
        global_seq: u64           // highest event seq assigned so far
    }

    public fun init(admin: &signer) {
        move_to(admin, Checkpoint { chunks: vector::empty(), global_seq: 0 });
    }

    // Emit an event header (called by other modules); returns the assigned sequence number
    public fun emit_event(caller: &signer, _payload_hash: vector<u8>): u64 acquires Checkpoint {
        let cp = borrow_global_mut<Checkpoint>(signer::address_of(caller));
        let seq = cp.global_seq + 1;
        cp.global_seq = seq;
        // A full implementation would store the EventHeader on-chain, or push
        // the payload to an external store and keep only its hash; chunk
        // aggregation happens off-chain and is submitted via create_chunk.
        seq
    }

    // Admin creates checkpoint chunks (an off-chain worker prepares ranges and roots, then submits)
    public fun create_chunk(admin: &signer, start_seq: u64, end_seq: u64, root_hash: vector<u8>) acquires Checkpoint {
        let cp = borrow_global_mut<Checkpoint>(signer::address_of(admin));
        assert!(start_seq <= end_seq && end_seq <= cp.global_seq, 1);
        vector::push_back(&mut cp.chunks, CheckpointChunk { start_seq, end_seq, root_hash, state: 0 });
    }

    // Worker claims a chunk for processing
    public fun claim_chunk(_worker: &signer, admin_addr: address, chunk_index: u64) acquires Checkpoint {
        let cp = borrow_global_mut<Checkpoint>(admin_addr);
        let chunk = vector::borrow_mut(&mut cp.chunks, chunk_index);
        assert!(chunk.state == 0, 2);
        chunk.state = 1; // Claimed
        // worker identity can be recorded in an extra field or emitted as an event
    }

    // After processing, the worker submits the computed hash; the chain verifies
    // equality, then marks the chunk Processed.
    public fun submit_chunk_result(_worker: &signer, admin_addr: address, chunk_index: u64, result_hash: vector<u8>) acquires Checkpoint {
        let cp = borrow_global_mut<Checkpoint>(admin_addr);
        let chunk = vector::borrow_mut(&mut cp.chunks, chunk_index);
        assert!(chunk.state == 1, 3);
        assert!(*&chunk.root_hash == result_hash, 4); // verify result matches the chunk root
        chunk.state = 2; // Processed
    }

    // Query helpers: get chunk state and ranges for worker orchestration (not shown)
}
```
Worker flow (off-chain)
- Fetch unprocessed chunks by reading `Checkpoint.chunks`.
- For each chunk: download the event headers/payloads for `start_seq..end_seq` from an archival node or blob store.
- Compute `root_hash` (Merkle root or compact aggregate).
- Call `claim_chunk` (on-chain), process the events (index to DB), then call `submit_chunk_result` with the computed `root_hash`.
- On success the chunk is marked `Processed`; if the submitted hash doesn't match, the chain rejects it.
Advantages & considerations
- Parallel: chunks can be processed concurrently with no on-chain contention.
- Deterministic: chunk roots ensure workers and chain agree on the canonical data to process.
- Resumable: chunk states persist; crashed workers can be retried by re-claiming after timeout (implement timeouts with additional fields).
- Security: the chain verifies only compact hashes, so a worker submitting incorrect data is simply rejected. For stricter guarantees, add staking/slashing for malicious workers.
Move models capabilities as unforgeable, key-like resources that authorize privileged actions. A capability is a resource granted at runtime, and only its holder can invoke the operations it guards. Unlike role-based access control (RBAC), which hardcodes rules into contracts, capability-based security gives fine-grained, transferable permissions.
For example:
```move
module 0xB::vault {
    use std::signer;

    // Unforgeable token: only code holding this resource may withdraw
    struct WithdrawCap has key, store {}

    struct Vault has key {
        balance: u64,
    }

    public fun init(owner: &signer): WithdrawCap {
        move_to(owner, Vault { balance: 1000 });
        WithdrawCap {}
    }

    // Authorization comes from holding the capability, not from the signer
    public fun withdraw(_cap: &WithdrawCap, amount: u64, owner_addr: address) acquires Vault {
        let vault = borrow_global_mut<Vault>(owner_addr);
        assert!(vault.balance >= amount, 1001);
        vault.balance = vault.balance - amount;
    }
}
```
Only users holding a `WithdrawCap` can withdraw.
This avoids pitfalls like hardcoded admin addresses, supports delegation, and makes multi-party custody safer by distributing capabilities.
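The delegation point is language-agnostic: a capability is just an unforgeable value, and handing it to another party *is* the delegation. A rough Python analogy (Python cannot enforce unforgeability the way Move's type system does, so the identity check below only approximates it):

```python
class WithdrawCap:
    """Token object: only code handed this exact instance may withdraw."""
    __slots__ = ()

class Vault:
    def __init__(self, balance):
        self.balance = balance
        self._cap = WithdrawCap()  # minted once, at init

    def init_cap(self):
        return self._cap

    def withdraw(self, cap, amount):
        if cap is not self._cap:           # identity check: forged caps fail
            raise PermissionError("invalid capability")
        assert self.balance >= amount
        self.balance -= amount

vault = Vault(1000)
cap = vault.init_cap()
vault.withdraw(cap, 100)   # holder can withdraw
delegate = cap             # delegation = handing over the token
vault.withdraw(delegate, 50)
assert vault.balance == 850
```

In Move the equivalent guarantee is structural: no module other than the one defining `WithdrawCap` can construct one, so possession of the resource is proof of authorization.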
3. What strategies can a Move developer use to prevent state bloat in long-living applications like DeFi or GameFi?
Answer: State bloat occurs when smart contracts accumulate unused objects, slowing down execution and increasing storage costs. In Move, developers can mitigate this with:
- Ephemeral resources → design temporary objects that are discarded after use, e.g. `struct TempTicket has drop { id: u64 }`. The `drop` ability lets a `TempTicket` be discarded at the end of its scope instead of lingering in storage; conversely, omitting `drop` forces the holder to destroy the value explicitly.
- Event-driven tracking instead of persistent storage → rather than storing all trades or game actions on-chain, emit events and let off-chain indexers (like Sui's indexer or GraphQL APIs) track history.
- Pruning via admin or community functions → build functions that allow deletion of stale state (e.g., expired game assets, closed liquidity pools).
- Composable object design → split large objects into smaller composable ones, reducing costly updates to monolithic global state.
By combining these patterns, developers prevent unbounded growth, making Move applications scalable and cost-efficient.