
Sei Parallelization Engine

Introduction

The Sei Parallelization Engine is a core component designed to increase transaction processing throughput beyond the limitations of traditional sequential execution models common in many blockchain systems. The engine enables concurrent execution of non-conflicting transactions within a block and leverages multi-core processors to improve performance and scalability. This document provides a technical explanation of its architecture and mechanisms.

The Parallelization Engine is a key innovation that allows Sei to process transactions concurrently across multiple CPU cores, significantly increasing throughput while maintaining deterministic outcomes identical to sequential processing.

Core Mechanism: Optimistic Concurrency Control

At its heart, the Parallelization Engine utilizes an Optimistic Concurrency Control (OCC) strategy. Unlike pessimistic approaches that lock state resources before execution, OCC allows transactions to execute in parallel based on an initial estimation of the state they will access. Conflicts (e.g., two transactions writing to the same state key) are detected during or after this parallel execution phase. When conflicts occur, they are resolved through mechanisms that often include re-execution of the conflicting transactions. This approach minimizes overhead when conflicts are infrequent and boosts overall throughput.
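As a rough illustration of this cycle, the sketch below executes every transaction optimistically in parallel, detects conflicts from the recorded access sets, and re-executes only the conflicting transactions in deterministic order. The names (Tx, executeBuffered, findConflicts, and so on) are assumptions for illustration, not Sei's actual API.

```go
// Illustrative sketch of one optimistic concurrency control (OCC) round.
package occ

import "sync"

type Tx struct{ Index int }

func ProcessBlock(
	txs []*Tx,
	executeBuffered func(*Tx),          // runs a tx against a private write buffer
	findConflicts func([]*Tx) []*Tx,    // compares the recorded read/write sets
	reExecuteSequentially func([]*Tx),  // deterministic re-run for conflicting txs
	commit func(),                      // flushes buffered writes to the state store
) {
	// 1. Optimistically execute every transaction in parallel.
	var wg sync.WaitGroup
	for _, tx := range txs {
		wg.Add(1)
		go func(t *Tx) {
			defer wg.Done()
			executeBuffered(t)
		}(tx)
	}
	wg.Wait()

	// 2. Detect conflicts and re-execute only the conflicting set, in block order.
	if conflicting := findConflicts(txs); len(conflicting) > 0 {
		reExecuteSequentially(conflicting)
	}

	// 3. Commit the now-consistent buffered state.
	commit()
}
```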

System Architecture

[Architecture diagram: Sei Parallelization Engine. Components shown: Transaction Preprocessing (Classifier), Dependency Analysis (Access Analysis), Parallel Execution (Worker Pool, Batch Formation, Dependency Mapping, State Transitions, Gas Estimation, Contract Interaction, Speculative Execution), Conflict Resolution (Conflict Detection, Sequential Re-execution), and SeiDB & Consensus (State Commit, MVCC Support).]

This architecture enables the Sei blockchain to process transactions in parallel while ensuring consistency and correctness. Let’s explore each component in detail.

1. Transaction Preprocessing

Before execution can begin, incoming transactions undergo preparatory processing. Raw transaction bytes are decoded into structured transaction objects so the system can identify message types and transaction characteristics. Initial validation occurs at this stage through ante handlers that verify signatures, fees, and other prerequisites.

The system also identifies certain critical transactions, such as oracle price feed updates, that may need prioritized execution to ensure timely data processing. This initial analysis groups transactions into batches for the subsequent processing stages and lays the foundation for parallel execution.
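A minimal sketch of this preprocessing flow is shown below, assuming hypothetical decode and ante-handler helpers: it decodes raw bytes, drops transactions that fail validation, moves priority transactions (such as oracle updates) to the front, and groups the rest into batches.

```go
// Illustrative preprocessing sketch; DecodedTx and the helper parameters are
// hypothetical names, not Sei's actual API.
package preprocess

type DecodedTx struct {
	Raw      []byte
	Priority bool // e.g. flagged during decoding for oracle price feed updates
}

// Preprocess turns raw transaction bytes into validated, prioritized batches.
func Preprocess(
	rawTxs [][]byte,
	decode func([]byte) (*DecodedTx, error),
	runAnteHandlers func(*DecodedTx) error, // signatures, fees, other prerequisites
	batchSize int,
) [][]*DecodedTx {
	var priority, standard []*DecodedTx
	for _, raw := range rawTxs {
		tx, err := decode(raw)
		if err != nil {
			continue // malformed transaction (simplified handling)
		}
		if err := runAnteHandlers(tx); err != nil {
			continue // failed initial validation (simplified handling)
		}
		if tx.Priority {
			priority = append(priority, tx) // critical txs jump the queue
		} else {
			standard = append(standard, tx)
		}
	}

	// Priority transactions lead, then everything is grouped into batches.
	ordered := append(priority, standard...)
	var batches [][]*DecodedTx
	for i := 0; i < len(ordered); i += batchSize {
		end := i + batchSize
		if end > len(ordered) {
			end = len(ordered)
		}
		batches = append(batches, ordered[i:end])
	}
	return batches
}
```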

2. Dependency Analysis (Estimation)

This critical stage forms the backbone of the OCC approach. The system estimates potential state accesses for each transaction rather than determining exact dependencies.

An access control module analyzes each transaction to predict which state keys it might read or write during execution. Specialized Dependency Generator functions predict potential state access patterns for each transaction type. The precision of this analysis varies depending on transaction complexity:

For simple transactions like token transfers, the system can determine precisely which account balances will be accessed. For complex smart contract interactions, predicting every potential storage access is much harder, so the generators produce broader estimates for these cases and flag areas of uncertainty.

For example, a token transfer transaction between accounts A and B will generate estimated read/write sets that include:

  • Read access to account A’s balance
  • Read access to account B’s balance
  • Write access to account A’s balance
  • Write access to account B’s balance
  • Read access to any related metadata or configuration

The dependency analyzer might also consider secondary effects such as:

  • Fee payment impacts on the transaction sender’s account
  • Potential interactions with staking or governance mechanisms if the tokens have such properties

For smart contract calls, the analyzer employs heuristic methods that consider the contract’s address and storage layout, the specific function selectors being called, any parameters passed to the contract functions, and historical patterns of state access from previous similar calls.

The dependency generator outputs an estimated read/write set for each transaction. These sets contain the state keys expected to be accessed, with additional metadata indicating access types (read vs. write) and confidence levels for each prediction. The system uses these confidence levels during scheduling decisions to minimize the likelihood of conflicts.

The output from this phase consists of an “estimated writeset” for each transaction—essentially a prediction of its state impact—which serves as crucial input for the next execution planning phase.
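The sketch below models such an estimated writeset for a simple token transfer, with per-key access types and confidence levels. The key format and type names are assumptions for illustration, not Sei's actual storage layout.

```go
// Illustrative estimated read/write set and a dependency generator for a
// simple token transfer.
package deps

import "fmt"

type Access int

const (
	Read Access = iota
	Write
)

// EstimatedAccess is one predicted state access, with a confidence score the
// scheduler can use when grouping transactions.
type EstimatedAccess struct {
	Key        string
	Kind       Access
	Confidence float64 // 1.0 = certain, lower for heuristic contract estimates
}

// TransferWriteset predicts the state keys touched by a transfer from sender
// to recipient, including reads of related supply/metadata.
func TransferWriteset(sender, recipient, denom string) []EstimatedAccess {
	balanceKey := func(addr string) string {
		return fmt.Sprintf("bank/balances/%s/%s", addr, denom) // hypothetical key format
	}
	return []EstimatedAccess{
		{Key: balanceKey(sender), Kind: Read, Confidence: 1.0},
		{Key: balanceKey(sender), Kind: Write, Confidence: 1.0},
		{Key: balanceKey(recipient), Kind: Read, Confidence: 1.0},
		{Key: balanceKey(recipient), Kind: Write, Confidence: 1.0},
		// Secondary effects: fee payment also debits the sender's balance,
		// and token metadata or supply may be read.
		{Key: "bank/supply/" + denom, Kind: Read, Confidence: 0.7},
	}
}
```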

3. Parallel Execution

Using the estimated writesets, the system attempts concurrent execution through the steps below (a simplified worker-pool sketch follows the list):

  1. Batch Organization: The process organizes transactions and their associated writesets for processing.

    • Transactions receive logical grouping based on estimated dependencies
    • The system creates execution units with appropriate metadata
    • Priority queues determine processing order for maximum throughput
    • Metadata tags track transaction relationships for later conflict analysis
  2. Worker Assignment: A pool of execution workers (typically implemented as goroutines) stands ready to process transactions concurrently.

    • The worker pool size adapts dynamically based on system resources
    • CPU core allocation strategies optimize for NUMA architectures
    • Worker specialization allows for targeted execution of specific transaction types
    • Load balancing algorithms redistribute work for maximum resource utilization
  3. Strategic Scheduling: Transaction assignment to workers occurs based on estimated writesets, with strategic grouping of non-conflicting transactions.

    • Graph-based scheduling algorithms minimize potential conflicts
    • The scheduler includes lookahead capabilities for multi-step dependency chains
    • Execution time estimates prioritize shorter transactions when appropriate
    • Locality-aware scheduling places related transactions on the same worker when beneficial
  4. Buffered Execution: Workers execute transactions and write all state modifications to a temporary buffer instead of the main state store.

    • Each worker maintains an isolated transaction context
    • The execution engine applies gas metering and limits
    • Buffered state operations capture all read and write operations
    • Intermediate results remain isolated until conflict detection completes
  5. Runtime Tracking: The system monitors which state keys are actually read and written throughout execution to build an accurate picture of real (versus estimated) state access patterns.

    • Access tracking captures both direct and indirect state interactions
    • Runtime statistics feed back into the dependency estimation system
    • Execution anomalies trigger adaptive scheduling adjustments
    • Performance metrics identify optimization opportunities
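The worker-pool and buffered-execution steps above can be sketched as follows, assuming hypothetical Tx and BufferedStore types: a fixed pool of goroutines pulls transactions from a channel and records each transaction's effects in its own in-memory buffer, so nothing touches the canonical state until conflict detection completes.

```go
// Illustrative worker pool with buffered execution.
package parallel

import (
	"runtime"
	"sync"
)

type Tx struct{ Index int }

// BufferedStore captures a transaction's reads and writes in memory.
type BufferedStore struct {
	Reads  map[string]struct{}
	Writes map[string][]byte
}

func newBufferedStore() *BufferedStore {
	return &BufferedStore{
		Reads:  map[string]struct{}{},
		Writes: map[string][]byte{},
	}
}

// ExecuteParallel fans transactions out to a pool of workers sized to the
// available CPU cores and returns each transaction's buffered effects.
func ExecuteParallel(txs []*Tx, execute func(*Tx, *BufferedStore)) []*BufferedStore {
	buffers := make([]*BufferedStore, len(txs))
	jobs := make(chan int)

	var wg sync.WaitGroup
	workers := runtime.NumCPU() // pool size adapts to the host machine
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				buf := newBufferedStore()
				execute(txs[i], buf) // gas metering etc. happen inside execute
				buffers[i] = buf     // results stay isolated until conflict detection
			}
		}()
	}

	for i := range txs {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
	return buffers
}
```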

Advanced techniques help extract as much parallelism as possible:

  • Adaptive batch sizing based on system load and conflict rates
  • Predictive execution of high-probability transaction paths
  • Distributed execution capabilities across multiple physical nodes when available
  • Resource throttling to prevent worker starvation

4. Conflict Detection & Resolution

The system must identify and resolve any conflicts that emerge during and after parallel execution. The process follows this sequence:

  1. Conflict Identification: The system compares the actual read/write sets recorded during execution. A conflict exists when one transaction reads state that another concurrent transaction wrote, or when multiple transactions write to the same state key.

    The conflict detector implements several optimized algorithms:

    • A hash-based data structure tracks state accesses in memory for efficient lookup
    • Bloom filters provide rapid preliminary conflict checks
    • A deterministic conflict resolution order prevents cyclic dependencies during re-execution
    • Precise timestamp tracking ensures proper sequencing based on arrival order

    Common conflict patterns include:

    • Read-After-Write (RAW): Transaction B reads a key that Transaction A has modified
    • Write-After-Read (WAR): Transaction B modifies a key that Transaction A has read
    • Write-After-Write (WAW): Multiple transactions attempt to modify the same key
  2. Conflict Resolution: The system executes a resolution strategy when it detects conflicts:

    • It identifies the conflicting transactions as a group
    • It removes their results from the temporary state buffer
    • It re-executes the conflicting set in a deterministic, sequential order to ensure correctness

    The deterministic ordering relies on transaction prioritization rules:

    • Critical system transactions receive highest priority
    • Oracle updates and price feeds receive elevated priority
    • Standard transactions follow their original sequence in the block
    • Gas price and other economic factors may influence the ordering

    The resolution system maintains isolation guarantees through careful state management:

    • Conflict groups remain isolated from successfully processed transactions
    • The temporary buffer preserves successfully executed state transitions
    • Resolution attempts track their own read/write sets to detect cascading conflicts

This conflict resolution ensures that the final state after parallel execution is identical to what sequential processing would have produced, preserving blockchain consistency.
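A simplified conflict detector along these lines is sketched below. It compares the actual read/write sets recorded during buffered execution and returns, in deterministic block order, the transactions that must be re-executed. The types and function names are illustrative assumptions, and the bloom-filter and timestamp optimizations mentioned above are omitted.

```go
// Illustrative conflict detector covering the RAW, WAR, and WAW patterns.
package conflicts

import "sort"

type AccessSets struct {
	Index  int
	Reads  map[string]struct{}
	Writes map[string]struct{}
}

// FindConflicts returns the indices of transactions whose reads or writes
// overlap with writes of other transactions in the same parallel batch.
func FindConflicts(txs []AccessSets) []int {
	writers := map[string][]int{} // state key -> indices of writers
	for _, tx := range txs {
		for key := range tx.Writes {
			writers[key] = append(writers[key], tx.Index)
		}
	}

	conflicting := map[int]struct{}{}
	for _, tx := range txs {
		// RAW / WAR: another transaction wrote a key this one read.
		for key := range tx.Reads {
			for _, w := range writers[key] {
				if w != tx.Index {
					conflicting[tx.Index] = struct{}{}
					conflicting[w] = struct{}{}
				}
			}
		}
		// WAW: more than one transaction wrote the same key.
		for key := range tx.Writes {
			if len(writers[key]) > 1 {
				conflicting[tx.Index] = struct{}{}
			}
		}
	}

	// Re-execution happens in deterministic (block) order, so return sorted.
	var out []int
	for i := range conflicting {
		out = append(out, i)
	}
	sort.Ints(out)
	return out
}
```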

5. Fallback Mechanism

The OCC approach includes a critical safety mechanism for situations where parallel execution cannot succeed:

  1. Failure Detection: The system detects failure conditions when parallel execution encounters excessive conflicts or critical errors.
  2. State Rollback: All state changes from the failed parallel attempt are completely discarded.
  3. Sequential Reprocessing: The entire transaction batch undergoes sequential processing as a fallback.

This fallback guarantees that the blockchain can always make progress, even in worst-case scenarios where parallelization proves ineffective for a particular block.
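A minimal sketch of this fallback logic, with an assumed conflict-rate threshold and hypothetical helper functions, might look like this:

```go
// Illustrative fallback: if parallel execution hits too many conflicts or a
// critical error, discard all buffered changes and process the batch sequentially.
package fallback

type Result struct {
	Conflicts int
	Total     int
	Err       error
}

const maxConflictRatio = 0.5 // hypothetical threshold, not Sei's actual value

func ProcessBlock(
	runParallel func() Result,
	discardBuffers func(),      // drop all speculative state from the failed attempt
	runSequential func() error, // guaranteed-correct sequential path
) error {
	res := runParallel()
	tooManyConflicts := res.Total > 0 &&
		float64(res.Conflicts)/float64(res.Total) > maxConflictRatio

	if res.Err != nil || tooManyConflicts {
		discardBuffers()
		return runSequential() // the block still makes progress
	}
	return nil
}
```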

Interaction with SeiDB

The parallelization engine works in close concert with SeiDB, Sei's custom storage layer. SeiDB's State Commit layer provides the extremely low-latency read/write access that high-volume concurrent execution requires. Its asynchronous commit capabilities also allow block state to be finalized quickly, creating an efficient pipeline for block processing.

SeiDB integration offers several technical advantages crucial for parallelization:

Performance Optimizations:

  • Multi-version concurrency control (MVCC) at the storage level supports multiple execution attempts
  • Lock-free read operations prevent stalls during parallel transaction execution
  • Memory-mapped state caching reduces I/O overhead for frequently accessed keys
  • Batch commit optimizations minimize disk write amplification

Parallelization-Specific Features:

  • Snapshot isolation provides consistent views of state for each transaction execution
  • Atomic multi-key operations ensure multi-part state transitions remain consistent
  • Versioned state access allows concurrent reads of different state versions
  • Optimized rollback capabilities support the rapid discarding of speculative state changes

Storage Architecture Alignment:

  • The key-value structure mirrors the state access patterns in the dependency analysis
  • Prefix-based organization aligns with contract and module-based access patterns
  • Hybrid storage tiers optimize for different access patterns across workloads
  • Asynchronous persistence decouples block execution from storage commit latency

The tight integration between the parallelization engine and SeiDB enables the system to maintain high throughput even under challenging workloads with complex dependency patterns or high conflict rates.
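As a simplified model of the multi-version, snapshot-isolated state access described above (not SeiDB's actual data structures), the sketch below keeps a version history per key so each execution can read a consistent snapshot while newer speculative writes stay isolated.

```go
// Illustrative multi-version store with snapshot reads.
package mvcc

import "sync"

type versioned struct {
	version int64
	value   []byte
}

type Store struct {
	mu   sync.RWMutex
	data map[string][]versioned // versions appended in ascending order
}

func NewStore() *Store {
	return &Store{data: map[string][]versioned{}}
}

// Set records a new version of a key without disturbing older versions.
func (s *Store) Set(key string, version int64, value []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = append(s.data[key], versioned{version: version, value: value})
}

// GetAt returns the latest value of key visible at the given snapshot
// version, so readers never observe writes from newer (speculative) txs.
func (s *Store) GetAt(key string, snapshot int64) ([]byte, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	var out []byte
	found := false
	for _, v := range s.data[key] {
		if v.version <= snapshot {
			out = v.value
			found = true
		} else {
			break
		}
	}
	return out, found
}
```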

Performance Impact

The Parallelization Engine delivers significant performance improvements:

Transaction Type      Sequential TPS    Parallel TPS    Improvement
Simple Transfers      3,000             15,000+         5x
ERC-20 Transfers      2,200             9,500+          4.3x
DEX Swaps             800               2,800+          3.5x
Complex Contracts     500               1,500+          3x

These improvements scale with CPU core count, allowing Sei to take full advantage of modern server hardware.
