Distributed, shard-per-core database with plugins in Rust

An open-source, PostgreSQL-compatible SQL database with shard-per-core scaling. The same plugin model already speaks the Redis and Cassandra wire protocols.

Open source · shard-per-core · up to 10 000-core clusters · predictable p99 · no GC · Rust + C
[Diagram: Picodata cluster deployed active-active across three data centers. Clients connect over three wire protocols — PostgreSQL, Redis, and Cassandra. Each DC runs its own leader and two followers with shard-per-core distribution; Raft forms a consensus ring across the leaders for schema and topology, scaling to 10 000 cores.]

Architecture

Picodata is a distributed SQL database with a shard-per-core architecture. Each process owns its scheduler, memory, files, and write-ahead log — no contention with neighbors. Two optimizers handle queries: a distributed planner for cluster-wide SQL and a local engine for single-node work. Plugins add wire compatibility for Redis, Cassandra, and PostgreSQL.

Shared-nothing architecture of Picodata
01

Cooperative multitasking, no shared resources

Each Picodata process runs on a single CPU core and owns its data shard. Replication and sharding work together: processes form master + replica groups, and sharding splits the dataset across those groups, one slice per group. Writes hit a local write-ahead log; replication can stay inside one data center or span several.

Co-located computation on a Picodata cluster
02

Tunable data placement and SQL transpilation

Picodata gives you placement control over SQL data. Related rows — a customer and all their orders — live on the same node. Global lookup tables, like the list of stores, replicate everywhere. Queries ship less data across the network and run more in parallel — for OLAP and OLTP alike. The distributed planner splits each query into local blocks, then compiles each block to per-node bytecode.
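The placement controls above map to DDL. A sketch of what that looks like — clause spellings follow the Picodata SQL reference, but verify against your version; the table and column names are illustrative:

```sql
-- Sharded table: rows with the same customer_id land in the same
-- replication group, co-located with that customer's other rows.
CREATE TABLE orders (
    id          INTEGER NOT NULL,
    customer_id INTEGER NOT NULL,
    amount      DECIMAL,
    PRIMARY KEY (id)
)
DISTRIBUTED BY (customer_id);

-- Global lookup table: fully replicated to every node, so joins
-- against it never need to cross the network.
CREATE TABLE stores (
    id   INTEGER NOT NULL,
    name TEXT,
    PRIMARY KEY (id)
)
DISTRIBUTED GLOBALLY;
```

Joining orders to stores stays local on every node; joining orders to customers stays local when both are distributed by the same key.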

Plugin model of Picodata
03

Distributed compute layer

The Picodata core is open-source. Rust plugins run against the shared SQL catalog and extend the engine in place. The Picodata team ships four commercial plugins as proof of the model: Radix (Redis protocol), Sirin (Cassandra CQL), Ouroboros (cross-cluster async replication, like Oracle GoldenGate), and Franz (Kafka reader). The SDK is open — write your own.
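The plugin surface can be pictured as a lifecycle the host drives: the engine loads a Rust service and calls its hooks as the instance starts and stops. A minimal, hypothetical sketch — the trait and type names below are illustrative, not the actual picodata-plugin SDK API:

```rust
// Hypothetical sketch of the plugin shape; not the real SDK surface.
// The host instance drives the lifecycle of a registered service.

/// Lifecycle a wire-protocol plugin might expose (names are illustrative).
trait Service {
    /// Called once when the instance enables the plugin.
    fn on_start(&mut self) -> Result<(), String>;
    /// Called when the plugin is disabled or the instance shuts down.
    fn on_stop(&mut self) -> Result<(), String>;
}

/// Toy "Redis-like" listener that only tracks whether it is running.
struct RadixLike {
    running: bool,
}

impl Service for RadixLike {
    fn on_start(&mut self) -> Result<(), String> {
        // Real code would bind a socket here and speak RESP,
        // reading and writing rows through the shared SQL catalog.
        self.running = true;
        Ok(())
    }

    fn on_stop(&mut self) -> Result<(), String> {
        self.running = false;
        Ok(())
    }
}

fn main() {
    let mut svc = RadixLike { running: false };
    svc.on_start().unwrap();
    assert!(svc.running);
    svc.on_stop().unwrap();
    assert!(!svc.running);
    println!("lifecycle ok");
}
```

Because every process owns a single core and shard, a plugin instance runs co-located with the data it serves — the host calls the hooks on each process, not once per cluster.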

Where Picodata fits

Picodata is not a universal database. The matrix below shows where it fits — and where it doesn't.

| Situation | Picodata | PostgreSQL | Cassandra | Redis |
| --- | --- | --- | --- | --- |
| Standard CRUD up to 10 TB, local transactions | Overkill* | Optimal | Bad fit | Bad fit |
| Bulk time-series writes (IIoT, telemetry) | Works (via Sirin) | Bad fit | Optimal | Bad fit |
| Hot-data cache (key-value without ACID) | Works (via Radix) | Overkill | Bad fit | Optimal |
| High-throughput queues, session store, real-time OLTP | Optimal | Works (up to ~10K RPS) | Works | Works |
| Distributed in-memory analytical SQL | Optimal | Bad fit | Bad fit | Bad fit |
| Sharded ACID transactions across many nodes | Optimal | Works (with Citus / 2PC overhead) | Bad fit | Bad fit |

* Picodata does not replace general-purpose databases. On workloads that fit a single server, shared-memory engines like PostgreSQL are more efficient and easier to run than a sharded cluster — network coordination has a cost that doesn't pay off below a certain scale.

Get started

The Picodata core is open-source. Three ways to start: download a build, read the docs, or browse the source.