TETRA

The proof.

172.2M
edges · 27.2M nodes
LDBC Social Network Benchmark
14/14
passed · 405ms avg
Interactive Complex suite
37–19
vs Neo4j · same hardware
105 queries · Neo4j OOMs on writes
1,611
1,611 / 1,611 · 100%
Full openCypher compliance
See the Pricing

We used the math behind AI to build a database.

The matrix operations behind every large language model — the mathematical engine driving the AI revolution — turn out to be a radically better way to store and query data. Not to generate it. Not to predict it. To retrieve exactly what’s there, with perfect accuracy, faster and cheaper than anything else on the market.

The Problem

The cost of data infrastructure is becoming unsustainable.

Behind every enterprise data budget is a deeper crisis — power, compute, and complexity compounding faster than the value they deliver.

$29M
Avg. enterprise data
spend per year1
9%
US electricity consumed
by data centers by 20302
$2.2M
Annual cost just to
maintain data pipelines1
73%
Enterprise data initiatives
that fail expectations1

Enterprise data spending has reached $29 million per year on average, with cloud compute and ingestion alone running over $500K per month at many organizations. Engineering teams spend $2.2 million a year just maintaining data pipelines. And despite all of that, 73% of enterprise data initiatives fail to meet expectations.1

Behind those budgets is a deeper problem: power. Data centers are on track to consume 9% of all US electricity by 2030 — triple what they use today.2 Power demand is growing at 15% per year.3 The industry’s own new benchmark for infrastructure value is “tokens per watt per dollar”4 — a metric that Jensen Huang formally introduced at GTC 2026.5

Every watt spent on database overhead is a watt not available for the workloads that actually generate value.

Traditional databases are a major part of this waste. They store your data and then build an equally large shadow of infrastructure around it — indexes, logs, caches, replication layers — until the system is 5 to 10 times larger than the data itself. All of that overhead consumes compute, memory, storage, and electricity. It exists because the old approach to databases requires it.

TETRA doesn’t.

The Solution

One engine. A fraction of the footprint.

TETRA files compress to ~38% of the initial raw data size — and that includes everything needed to search and query it (what other databases call “indexes”). There is no separate index layer. The compressed, post-quantum encrypted file is the query engine. By formal proof, zero data resolution is lost.

Where a traditional system wraps your data in layers of infrastructure — indexes, logs, caches, replication layers — TETRA’s structure is the query engine. Nothing is wasted.
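As a back-of-the-envelope sketch of what those two figures imply, here is the footprint math for a hypothetical 100 GB dataset (the 7× multiplier is just an assumed midpoint of the 5–10× overhead range quoted above):

```python
raw_gb = 100                      # hypothetical raw dataset size

# TETRA claim: the file compresses to ~38% of raw, query structures included.
tetra_gb = raw_gb * 0.38

# Traditional stack: data plus indexes/logs/caches/replication at 5-10x raw.
# 7x is an assumed midpoint, for illustration only.
traditional_gb = raw_gb * 7

print(tetra_gb)                   # compressed, queryable footprint in GB
print(traditional_gb / tetra_gb)  # footprint ratio between the two approaches
```

On these assumptions the traditional stack occupies roughly 18× the storage of the compressed, queryable file.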

That efficiency cascades into every cost line:

Smaller data

Smaller servers. Lower cloud bills. Lower electricity. A multi-server cluster workload runs on a single standard machine.

Single engine

Replaces separate systems for structured records, relationships, search, and analytics. Fewer systems, fewer teams, fewer integration points.

Lower ops burden

No indexes, logs, caches, or replication layers to maintain. The savings compound from the storage layer through the power grid.

Runs anywhere

Datacenter, edge, or a device in the field. Intelligence moves to where the data lives, without a round trip to a central server.

A workload that required a multi-server cluster — with all the licensing, staffing, cooling, and power that implies — runs on a single standard machine.

Everyone else used this math to build language models. We used it to build a database.

Schedule a Consultation See what TETRA can replace

AI + Data

It makes AI accurate instead of approximate.

The biggest unsolved problem in enterprise AI is accuracy. Models hallucinate. The proven fix is giving them structured knowledge to reason over — not documents to guess from.

3.4×

better accuracy on complex business questions when LLMs reason over structured knowledge graphs vs. vector-only retrieval.6

The bottleneck has been the database underneath. Existing options are too slow to keep up with a model during a live query, and too heavy to deploy alongside one.

TETRA is fast enough and small enough to sit inside the AI stack itself — no separate infrastructure, no additional power draw, no added complexity. This turns AI from a tool that sometimes gets it right into a system that reasons from your actual data, every time.

And because the engine is so compact, it runs wherever the AI runs — in a datacenter, at the edge, on a device in the field. Intelligence moves to where the data lives, without a round trip to a central server and without spinning up another rack.

Timing

Why now.

The data infrastructure industry is hitting a wall. Power constraints are dictating where data centers can be built, what workloads they can run, and how much they can grow.7 Every efficiency gain at the database layer translates directly into capacity freed up for everything else.

175%
Power demand surge
projected by 20307
15%
Annual growth in
data center power3
3×
Current electricity
use by 20302

TETRA doesn’t just reduce cost. It reduces the physical resources required to do the same work — at a moment when those resources are the binding constraint on the entire technology industry.

The result changes the economics of enterprise data infrastructure at exactly the moment those economics are breaking.

Schedule a Consultation Talk to us about TETRA

Sources

1
Fivetran — “The Enterprise Data Infrastructure Benchmark Report 2026.” Survey of 500+ senior data leaders at organizations with 5,000+ employees. fivetran.com
2
Electric Power Research Institute (EPRI) — Data center electricity consumption forecast. epri.com
3
Goldman Sachs — “AI to Drive 165% Increase in Data Center Power Demand by 2030.” 15% CAGR projection for US data center power demand 2023–2030. goldmansachs.com
4
Data Center Knowledge — “2026 Predictions: AI Sparks Data Center Power Revolution.” datacenterknowledge.com
5
CIQ — “Tokens Per Watt is the New CEO Metric.” Jensen Huang’s introduction of tokens-per-watt at GTC 2026. ciq.com
6
Diffbot KG-LM Accuracy Benchmark — GraphRAG improved LLM accuracy 3.4× across 43 business questions vs. vector-only retrieval. falkordb.com
7
Goldman Sachs — “Data Center Power Demand: The 6 Ps Driving Growth and Constraints.” Power demand projected to surge 175% by 2030. goldmansachs.com

See TETRA in action.

Explore the technical benchmarks or talk to us about what TETRA can do for your infrastructure.

Schedule a Consultation Talk to us about TETRA

A graph data system
that runs anywhere.

We didn’t improve the graph database. We reinvented it. Sub-millisecond queries. No specialized hardware. Tiny RAM footprint.

Benchmark Report

LDBC Social Network Benchmark

Interactive Complex Reads · IC1–IC14

SF10 · 27.2M nodes · 172.2M edges · single file · single process

What this proves

TETRA executes all 14 of the LDBC Interactive Complex reads — the hardest standardized benchmark for graph databases — on a single laptop, from a single file. These are not micro-benchmarks. Each query is a real analytical question that traverses variable-length friendship paths, scans millions of messages, runs correlated subqueries, and finds shortest paths across 27 million nodes and 172 million edges. Average latency: 405 ms. Six queries finish under 100 ms. Two finish under 1 ms.

27.2M
Nodes
172.2M
Edges
14/14
IC queries passed
405ms
Avg latency
Shortest path (IC13/IC14) · 0.5ms
Bidirectional BFS across 68K persons · ~16K edges
Median query (IC9/IC10 boundary) · 286ms
2-hop friend messages · 2.1M edges scanned
Slowest (IC5) · forum memberships · 1.49s
VLP 1..2 → 55K memberships → forum post counts · 4 pipeline stages

Latency distribution · 14 Interactive Complex queries

< 100ms
6 queries
100–500ms
3 queries
500ms+
5 queries

All 14 Interactive Complex queries

Every query returns correct, non-zero results. Parameters sampled live from the graph (top persons by KNOWS degree — maximum fan-out at every hop).

ID · Query · Rows · Latency
IC1 · Transitive friends by name · 20 · 74ms
IC2 · Recent friend messages · 20 · 47ms
IC3 · Friends in countries · 20 · 1.12s
IC4 · New tags in friend posts · 10 · 883ms
IC5 · New forum memberships · 20 · 1.49s
IC6 · Tag co-occurrence · 10 · 514ms
IC7 · Recent likers · 20 · 14ms
IC8 · Recent replies · 20 · 89ms
IC9 · Recent FoF messages · 20 · 357ms
IC10 · FoF birthday + interests · 10 · 237ms
IC11 · Friends work in country · 10 · 247ms
IC12 · Expert replies · 20 · 602ms
IC13 · Shortest path · 1 · 0.5ms
IC14 · All shortest paths · 15 · 0.5ms
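The headline figures can be re-derived from the per-query latencies; a quick sanity-check sketch (values transcribed from the results above):

```python
# Per-query latencies in ms, transcribed from the IC results table.
latencies = {
    "IC1": 74, "IC2": 47, "IC3": 1120, "IC4": 883, "IC5": 1490,
    "IC6": 514, "IC7": 14, "IC8": 89, "IC9": 357, "IC10": 237,
    "IC11": 247, "IC12": 602, "IC13": 0.5, "IC14": 0.5,
}

avg_ms = sum(latencies.values()) / len(latencies)            # headline average
under_100ms = sum(1 for v in latencies.values() if v < 100)  # fast bucket
under_1ms = sum(1 for v in latencies.values() if v < 1)      # sub-millisecond

print(round(avg_ms))   # 405
print(under_100ms)     # 6
print(under_1ms)       # 2
```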

Query work — what each query actually does

Click any query to see the traversal shape and the factual work involved. Edge counts are estimated from the LDBC SF10 degree distributions for high-degree seed persons.

IC1 Transitive friends by name 74ms
graph LR
  P((Person)):::p -->|"KNOWS*1..3"| F((friend)):::p
  F -->|IS_LOCATED_IN| C[City]:::loc
  F -.->|STUDY_AT| U[Uni]:::org
  F -.->|WORK_AT| W[Comp]:::org
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef loc fill:#59a14f,stroke:none,color:#F5F0E6
  classDef org fill:#f28e2b,stroke:none,color:#F5F0E6

Explore all persons within 3 friendship hops who share a given first name. Visits ~80,000 persons across the 3-hop KNOWS frontier, reads the firstName property on each, deduplicates. After filtering and ranking, enriches the top 20 with their city of residence, universities attended (with enrollment year), and employers (with start year).

~1.4M edges · 20 rows · 3-hop VLP + enrichment joins
IC2 Recent friend messages 47ms
graph LR
  P((Person)):::p -->|KNOWS| F((friend)):::p
  M["Post | Comment"]:::m -->|HAS_CREATOR| F
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef m fill:#76b7b2,stroke:none,color:#F5F0E6

Retrieve the 20 most recent messages from direct friends. Finds ~200 friends, examines ~77,000 messages (Posts and Comments) authored by those friends, applies a date ceiling, and selects the top 20 newest by creation date using a heap.

~77K edges · 20 rows · 1-hop + top-K selection
IC3 Friends in countries 1.12s
graph LR
  P((Person)):::p -->|"KNOWS*1..2"| F((friend)):::p
  F -->|located| Co[Country]:::loc
  M["message"]:::m -->|HAS_CREATOR| F
  M -->|located| Co2["Country X/Y"]:::loc
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef loc fill:#59a14f,stroke:none,color:#F5F0E6
  classDef m fill:#76b7b2,stroke:none,color:#F5F0E6

Count messages by 2-hop friends in two specific countries, excluding friends who live in either country. Explores ~5,500 persons across 2 hops, resolves each person’s location through a City→Country chain, applies the exclusion filter, then scans ~1.9M messages and joins each to its country of origin. Splits counts into two country-specific buckets using conditional aggregation.

~4M edges · 20 rows · 2-hop VLP + NOT pattern + 5-hop chain
IC4 New tags in friend posts 883ms
graph LR
  P((Person)):::p -->|KNOWS| F((friend)):::p
  Po[Post]:::m -->|HAS_CREATOR| F
  Po -->|HAS_TAG| T[Tag]:::org
  X["NOT EXISTS: same tag before date"]:::res
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef m fill:#76b7b2,stroke:none,color:#F5F0E6
  classDef org fill:#f28e2b,stroke:none,color:#F5F0E6
  classDef res fill:#e15759,stroke:none,color:#F5F0E6

Find tags that appear on recent friend posts but never on older friend posts — newly trending topics. Scans ~200 friends’ posts within a date window, collects their tags (~36,000 post-tag pairs), then for each candidate tag verifies it has no occurrence before the start date. This is a correlated anti-join: the engine must cross-reference each tag against the entire pre-window post history of the same friend set.

~300K–500K edges · 10 rows · NOT EXISTS correlated subquery
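The anti-join pattern IC4 describes — keep tags seen inside the window, drop any tag with a pre-window occurrence in the same friend set — reduces to a set difference. A toy sketch with invented friends, tags, and dates (none of this is LDBC data):

```python
# posts: (friend, tag, day). A tag qualifies if it appears on a friend post
# inside the date window but never on any friend post before the window start.
posts = [
    ("alice", "rust", 5), ("alice", "graphs", 9),
    ("bob",   "rust", 1),            # "rust" already existed before the window
    ("bob",   "wasm", 8),
]
window_start, window_end = 4, 10

in_window = {tag for _, tag, day in posts if window_start <= day <= window_end}
before    = {tag for _, tag, day in posts if day < window_start}
new_tags  = sorted(in_window - before)   # the anti-join: drop pre-existing tags

print(new_tags)   # ['graphs', 'wasm']
```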
IC5 New forum memberships 1.49s
graph LR
  P((Person)):::p -->|"KNOWS*1..2"| F((friend)):::p
  Fo[Forum]:::org -->|HAS_MEMBER| F
  Fo -->|CONTAINER_OF| Po[Post]:::m
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef m fill:#76b7b2,stroke:none,color:#F5F0E6
  classDef org fill:#f28e2b,stroke:none,color:#F5F0E6

Rank forums by how many 2-hop friends joined recently, then count each forum’s total posts. Deduplicates ~5,500 persons from the 2-hop frontier, scans ~55,000 forum memberships with a date filter on the membership edge, groups by forum, sorts by friend count. For the top 20 forums, counts all contained posts. Four sequential pipeline stages, each requiring full materialization before the next begins.

~85K–130K edges · 20 rows · 2-hop VLP + 4 WITH barriers
IC6 Tag co-occurrence 514ms
graph LR
  P((Person)):::p -->|"KNOWS*1..2"| F((friend)):::p
  Po[Post]:::m -->|HAS_CREATOR| F
  Po -->|HAS_TAG| T1["known Tag"]:::org
  Po -->|HAS_TAG| T2["co-occurring"]:::res
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef m fill:#76b7b2,stroke:none,color:#F5F0E6
  classDef org fill:#f28e2b,stroke:none,color:#F5F0E6
  classDef res fill:#b07aa1,stroke:none,color:#F5F0E6

Find tags that co-occur with a named tag on posts by 2-hop friends. Scans ~660,000 posts from ~5,500 friends and checks ~2.6M tag edges to find the ~16,500 posts carrying the target tag. Then reads ~66,000 co-occurring tags on those posts. Groups by tag name with distinct post count.

~3.3M edges · 10 rows · 2-hop VLP + full post×tag scan
IC7 Recent likers 14ms
graph LR
  P((Person)):::p -->|created| M["messages"]:::m
  M -->|LIKES| L((likers)):::res
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef m fill:#76b7b2,stroke:none,color:#F5F0E6
  classDef res fill:#b07aa1,stroke:none,color:#F5F0E6

Find who most recently liked the seed person’s content. Scans ~385 messages, follows ~3,850 LIKES edges, sorts by like timestamp, groups by liker keeping only the most recent like per person. Checks whether each liker is already a friend (boolean NOT EXISTS). Computes the time gap between the like and the original message creation.

~4.3K edges · 20 rows · reversed chain + head(collect())
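The per-liker grouping in IC7 — keep only each person's most recent like — is a single-pass dictionary reduction. A toy sketch with invented likers and timestamps:

```python
# likes: (liker, timestamp). Keep the newest like per person, then sort
# newest-first — the same shape as IC7's group-by + head(collect()).
likes = [("ana", 100), ("ben", 140), ("ana", 170), ("ben", 90)]

latest = {}
for person, ts in likes:
    if ts > latest.get(person, -1):   # overwrite only with a newer like
        latest[person] = ts

recent_likers = sorted(latest.items(), key=lambda kv: -kv[1])
print(recent_likers)   # [('ana', 170), ('ben', 140)]
```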
IC8 Recent replies 89ms
graph LR
  P((Person)):::p -->|created| M["messages"]:::m
  C[Comment]:::m2 -->|REPLY_OF| M
  C -->|HAS_CREATOR| A((author)):::res
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef m fill:#76b7b2,stroke:none,color:#F5F0E6
  classDef m2 fill:#76b7b2,stroke:none,color:#F5F0E6
  classDef res fill:#b07aa1,stroke:none,color:#F5F0E6

Find the most recent replies to the seed person’s content. Scans ~385 messages, follows ~3,080 REPLY_OF edges to find comments, resolves each comment’s author. Returns the 20 newest replies sorted by creation date. A straight 4-hop chain with no aggregation or VLP.

~6.5K edges · 20 rows · 4-hop chain + top-K
IC9 Recent FoF messages 357ms
graph LR
  P((Person)):::p -->|"KNOWS*1..2"| F((friend)):::p
  M["Post | Comment"]:::m -->|HAS_CREATOR| F
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef m fill:#76b7b2,stroke:none,color:#F5F0E6

Retrieve the 20 most recent messages from 2-hop friends. Deduplicates ~5,500 persons from the 2-hop frontier, then scans all of their authored content — approximately 2.1 million messages. Applies a date filter and multi-label check (Post or Comment), then selects the top 20 by creation date via a 20-element heap over the full 2.1M candidates.

~2.1M edges · 20 rows · 2-hop VLP + bulk scan + top-K heap
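The top-K selection IC9 relies on can be sketched with a fixed-size heap: one pass over all candidates, never a full sort. Synthetic timestamps stand in for real message dates here:

```python
import heapq

# (timestamp, id) pairs — synthetic stand-ins for the ~2.1M candidate messages.
messages = [(i * 7919 % 100000, f"msg-{i}") for i in range(10000)]

# O(n log k): a single pass maintaining a 20-element heap, vs O(n log n)
# for sorting everything just to keep the newest 20.
top20 = heapq.nlargest(20, messages)

print(len(top20))                     # 20
print(top20[0][0] >= top20[-1][0])    # True — results come back newest-first
```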
IC10 FoF birthday + interests 237ms
graph LR
  P((Person)):::p -->|"KNOWS*2"| FoF((fof)):::p
  FoF -->|IS_LOCATED_IN| C[City]:::loc
  Po[Post]:::m -->|HAS_CREATOR| FoF
  Po -->|HAS_TAG| T[Tag]:::org
  T -->|HAS_INTEREST| P
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef loc fill:#59a14f,stroke:none,color:#F5F0E6
  classDef m fill:#76b7b2,stroke:none,color:#F5F0E6
  classDef org fill:#f28e2b,stroke:none,color:#F5F0E6

Score friend-of-friends by shared interests. Takes the exactly-2-hop KNOWS frontier (~8,000 persons), excludes direct friends and the seed, filters by birthday month (~833 candidates remaining). For each candidate, runs two passes: one counting posts whose tags match the seed’s declared interests, one counting posts that don’t match. Computes a similarity score as the difference.

~500K–600K edges · 10 rows · fixed 2-hop VLP + double OPTIONAL MATCH
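The matching-minus-non-matching similarity score described for IC10 fits in a few lines; the interests and posts below are invented for illustration:

```python
# Seed person's declared interests, and one candidate's posts (as tag sets).
interests = {"rust", "graphs"}
candidate_posts = [{"rust"}, {"wasm"}, {"graphs", "wasm"}, {"cooking"}]

# Posts sharing at least one interest tag count +1; the rest count -1.
common = sum(1 for tags in candidate_posts if tags & interests)
score = common - (len(candidate_posts) - common)

print(score)   # 0 — two matching posts minus two non-matching posts
```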
IC11 Friends work in country 247ms
graph LR
  P((Person)):::p -->|"KNOWS*1..2"| F((friend)):::p
  F -->|WORK_AT| O[Org]:::org
  O -->|IS_LOCATED_IN| Co[Country]:::loc
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef org fill:#f28e2b,stroke:none,color:#F5F0E6
  classDef loc fill:#59a14f,stroke:none,color:#F5F0E6

Find 2-hop friends who work at companies in a specific country before a given year. Explores ~5,500 persons, follows ~8,250 WORK_AT edges (most people have 1–2 jobs), resolves each company’s country, filters by country name and employment start year. Sorts by work-from year with tiebreakers on person ID and company name (mixed ASC/DESC).

~33K edges · 10 rows · 2-hop VLP + 4-hop chain + property filters
IC12 Expert replies 602ms
graph LR
  P((Person)):::p -->|KNOWS| F((friend)):::p
  C[Comment]:::m -->|HAS_CREATOR| F
  C -->|REPLY_OF| Po[Post]:::m
  Po -->|HAS_TAG| T[Tag]:::org
  T -->|HAS_TYPE| TC[TagClass]:::org
  TC -->|"SUBCLASS*0.."| B["base class"]:::res
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef m fill:#76b7b2,stroke:none,color:#F5F0E6
  classDef org fill:#f28e2b,stroke:none,color:#F5F0E6
  classDef res fill:#b07aa1,stroke:none,color:#F5F0E6

Find friends whose comments reply to posts tagged under a given tag class hierarchy — the deepest chain at 7+ hops. For ~200 friends, examines ~53,000 comments. Each comment is chased through a mandatory 7-hop path: Comment→REPLY_OF→Post→HAS_TAG→Tag→HAS_TYPE→TagClass, then up the IS_SUBCLASS_OF tree (variable depth, min 0) to test against a named base class. Every comment must complete the full chain before the engine knows if it qualifies.

~530K edges · 20 rows · 7-hop chain + VLP on class hierarchy
IC13 Shortest path 0.5ms
graph LR
  P1((Person 1)):::p ---|"KNOWS* — BFS"| P2((Person 2)):::res
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef res fill:#b07aa1,stroke:none,color:#F5F0E6

Find the shortest friendship path between two people. Launches a bidirectional breadth-first search from both endpoints through the KNOWS graph (68,673 persons). Social network path lengths average ~4 hops, so each direction expands only ~2 frontiers before they meet. No property reads, no filters — pure adjacency-list iteration on the compact KNOWS subgraph.

~16K edges · 1 row · bidirectional BFS
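A minimal sketch of the bidirectional-BFS idea behind IC13, on a toy KNOWS graph. This illustrates the technique, not TETRA's implementation — a production version would also settle a whole frontier level before committing to the minimum meeting distance:

```python
from collections import deque

def bidirectional_bfs(adj, src, dst):
    """Shortest path length by searching from both endpoints at once."""
    if src == dst:
        return 0
    dist_s, dist_t = {src: 0}, {dst: 0}
    q_s, q_t = deque([src]), deque([dst])
    while q_s and q_t:
        # Always expand the smaller frontier, one full level at a time.
        if len(q_s) <= len(q_t):
            q, dist, other = q_s, dist_s, dist_t
        else:
            q, dist, other = q_t, dist_t, dist_s
        for _ in range(len(q)):
            node = q.popleft()
            for nb in adj.get(node, ()):
                if nb in other:                  # the two searches met
                    return dist[node] + 1 + other[nb]
                if nb not in dist:
                    dist[nb] = dist[node] + 1
                    q.append(nb)
    return -1  # disconnected

# Tiny KNOWS graph: chain 1-2-3-4-5 plus shortcut 2-5.
adj = {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3, 5], 5: [2, 4]}
print(bidirectional_bfs(adj, 1, 4))   # 3
```

Because social-network path lengths average ~4 hops, each side only expands about two frontiers before meeting — which is why IC13 touches so few edges.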
IC14 All shortest paths 0.5ms
graph LR
  P1((Person 1)):::p ---|"KNOWS* — all paths"| P2((Person 2)):::res
  classDef p fill:#4e79a7,stroke:none,color:#F5F0E6
  classDef res fill:#b07aa1,stroke:none,color:#F5F0E6

Enumerate all shortest friendship paths between two people. Same BFS as IC13, but tracks every equal-length route by recording multiple parent pointers when a node is reached by different shortest paths simultaneously. After BFS completes, reconstructs all 15 paths by backtracking through the parent DAG. Returns each path as a list of person IDs. Same ~16K edge reads as IC13; the multi-parent bookkeeping and path enumeration (15 paths × ~4 nodes) add negligible cost.

~16K edges · 15 rows · bidirectional BFS + multi-parent tracking
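The multi-parent bookkeeping IC14 describes can be sketched with a plain single-direction BFS (TETRA's version is bidirectional; this simplified sketch only shows the parent-DAG recording and backtracking):

```python
from collections import deque

def all_shortest_paths(adj, src, dst):
    """Enumerate every shortest path by recording all equal-depth parents."""
    dist, parents = {src: 0}, {src: []}
    q = deque([src])
    while q:
        node = q.popleft()
        for nb in adj.get(node, ()):
            if nb not in dist:                  # first discovery: new frontier node
                dist[nb] = dist[node] + 1
                parents[nb] = [node]
                q.append(nb)
            elif dist[nb] == dist[node] + 1:    # reached again at the same depth:
                parents[nb].append(node)        # another shortest route, extra parent
    if dst not in dist:
        return []
    paths = []
    def backtrack(node, acc):                   # walk the parent DAG back to src
        if node == src:
            paths.append([src] + acc)
            return
        for p in parents[node]:
            backtrack(p, [node] + acc)
    backtrack(dst, [])
    return paths

# Same toy graph as the IC13 sketch: two equal-length routes from 1 to 4.
adj = {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3, 5], 5: [2, 4]}
print(sorted(all_shortest_paths(adj, 1, 4)))   # [[1, 2, 3, 4], [1, 2, 5, 4]]
```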

Dataset: LDBC SNB SF10

The LDBC Social Network Benchmark is the industry-standard benchmark for graph databases, maintained by an independent consortium. SF10 represents a social network at 10× base scale with realistic power-law degree distributions, temporal properties, and correlated data generation.

Entity · Count
Persons · 68,673
Posts · 8,273,491
Comments · 18,196,074
Forums · 667,545
Tags · 16,080
Total nodes · 27,231,349
Total edges · 172,183,299
Edge types · 16
Node labels · 13

Environment

engine: Tetra / EdgeGlider · native arm64 binary · single process · in-process (no network hop)

storage: Single file · mmap’d · LDBC SNB SF10

hardware: Apple M-series · single machine · CPU only — no GPU, no cluster

protocol: Bolt v4.4 (Neo4j wire-compatible) · openCypher

parameters: Sampled live from graph · top persons by KNOWS degree

date: 2026-04-14

Tetra / EdgeGlider · LDBC SNB SF10 · 14/14 Interactive Complex queries passed

For context

TigerGraph’s audited LDBC SNB BI result at SF1000 was run on a Dell PowerEdge R7725 bare-metal server (AMD EPYC). GraphScope Flex holds throughput records (130K+ ops/s at SF100/SF300) on full server infrastructure. Neo4j failed to complete several IC queries at SF10 in the 2019 Rusu & Huang study. Amazon Neptune has no published LDBC results and its openCypher implementation does not support shortestPath() or allShortestPaths(). TETRA passes 14/14 on a single process from a single file.

37–19
Tetra vs Neo4j wins
66 MB
RAM used (vs 710 MB)
485/s
Peak throughput
100%
Cypher TCK
477ms
Startup (vs ~8 sec)
11/11
Write ops (Neo4j: OOM)
63×
Best aggregation win
290×
Best overall win

What’s included

Everything. No add-on licensing.

Visualization, algorithms, encryption, compliance. It’s all in the box. Other vendors charge separately for each of these.

Retina — 3D/2D Graph Data Viewer & Explorer

Layouts

Force-directed, spherical, and radial — switch on the fly

2D overhead or full 3D orbit

Node Shapes

Six distinct platonic solids. Each node type gets its own shape and color automatically.

Explore

Click to focus — see N hops deep

Fly to any node in the graph

Pin and inspect multiple nodes at once

WebGPU-driven rendering (2D and 3D modes). Server-side and client-side analytics built in (shortest path, centrality, community detection). Query UI for exploring and verifying data or validating migrations. Export results as CSV or JSON.

Free and ungated — part of the TETRA suite, no add-on licensing. Neo4j charges $1,200–$2,500/user/year for Bloom (self-hosted Enterprise). TigerGraph charges 10% compute surcharge for Insights. Neptune has only the open-source graph-explorer.

30+ Graph Algorithms

Built in. No add-on licensing. Neo4j charges $10K–$25K+/yr for Graph Data Science. TigerGraph charges 10% compute surcharge for Insights.

Post-Quantum Encryption

Encrypted at rest with post-quantum resistant cryptography. The encrypted, compressed file is the queryable database — no decryption step.

~38% Compression

Raw data compresses to ~38% including all query structures. No separate indexes. By formal proof, zero data resolution lost.

100% Cypher TCK

1,611/1,611 openCypher scenarios passed. Full compliance over Bolt v4.4. Neptune can’t run shortestPath(). TigerGraph uses proprietary GSQL.

Co-location Ready

66 MB RAM for 166K edges. Native binary in a 24 MB Alpine container. Runs alongside your app — no cluster, no network hop.

Ingestion

10 GB of CSV ingested, converted, and loaded in under 7 minutes on 5 GB of RAM. Single-process. No ETL cluster.

Performance

78 queries. 300 iterations each. Same hardware.

Recommendations dataset — 28,863 nodes, 166,261 edges. Both containerized on Apple M4 Pro, 16GB, CPU only. Native binary shown for reference.

54
Tetra wins
24
Neo4j wins
0
Tied (5% buffer)
300
Iterations each
TETRA container
Neo4j container
Bar shows container head-to-head · p50 median latency

Container Benchmark

Head-to-head. Same hardware. Same queries. No tricks.

105 Cypher queries, 30 iterations each. Both databases containerized with hard resource limits. Neo4j gets 2× the RAM budget and still uses 10.8× more memory than Tetra.

Tetra · Neo4j
Image · Alpine 3.21 (24 MB) · neo4j:5-community (JVM)
CPU Limit · 1 core · 1 core
RAM Limit · 512 MB · 1 GB
RAM Used · 66 MB · 710 MB
Startup · 477 ms · ~8 seconds
Data Load · Instant (mmap) · 18 seconds (Bolt import)
37
Tetra wins
19
Neo4j wins
49
Native fastest

105 queries · Dataset: Recommendations — 28,863 nodes, 166,261 edges · p99 determines winner · 5% tie threshold

TETRA, 2.4× faster
count all nodes
TETRA 325µs
764µs Neo4j
TETRA, 3.9× faster
count all edges
TETRA 167µs
652µs Neo4j
TETRA, 3.1× faster
count RATED edges
TETRA 198µs
608µs Neo4j
TETRA, 1.9× faster
lookup by title
TETRA 282µs
527µs Neo4j
TETRA, 1.1× faster
lookup by name
TETRA 502µs
533µs Neo4j
TETRA, 1.4× faster
1-hop: actors in movie
TETRA 392µs
551µs Neo4j
TETRA, 1.3× faster
1-hop: movies by actor
TETRA 641µs
809µs Neo4j
TETRA, 1.5× faster
2-hop: co-actors
TETRA 677µs
1.0ms Neo4j
TETRA, 1.4× faster
2-hop: directors of movies
TETRA 615µs
842µs Neo4j
TETRA, 1.8× faster
3-hop: user→movie→actor→movie
TETRA 431µs
771µs Neo4j
Neo4j, 1.04×
3-hop: actor chain
Neo4j 923µs
958µs TETRA
Neo4j, 3.6× faster
3-hop: actor→movie→genre→movie
Neo4j 732µs
2.6ms TETRA
TETRA, 16× faster
movies per genre
TETRA 220µs
3.5ms Neo4j
TETRA, 14.5× faster
top rated movies
TETRA 3.9ms
56.7ms Neo4j
TETRA, 2.9× faster
prolific actors
TETRA 7.7ms
22.6ms Neo4j
TETRA, 4.7× faster
prolific directors
TETRA 1.8ms
8.5ms Neo4j
TETRA, 7.5× faster
avg rating per genre
TETRA 8.7ms
65.7ms Neo4j
TETRA, 63× faster
genre popularity
TETRA 565µs
35.7ms Neo4j
TETRA, 1.1× faster
WITH + filter
TETRA 569µs
637µs Neo4j
TETRA, 22× faster
WITH agg + filter
TETRA 1.7ms
38.2ms Neo4j
TETRA, 2.2× faster
director’s other movies
TETRA 353µs
777µs Neo4j
TETRA, 2.8× faster
MATCH+MATCH shared var
TETRA 607µs
1.7ms Neo4j
TETRA, 2.1× faster
MATCH+MATCH chain
TETRA 625µs
1.3ms Neo4j
TETRA, 2.5× faster
3-match chain
TETRA 718µs
1.8ms Neo4j
TETRA, 2.3× faster
similar users
TETRA 443µs
1.0ms Neo4j
TETRA, 1.4× faster
actor filmography
TETRA 598µs
850µs Neo4j
TETRA, 3.5× faster
genre browse
TETRA 573µs
2.0ms Neo4j
TETRA, 1.7× faster
search by title prefix
TETRA 398µs
681µs Neo4j
TETRA, 7.6× faster
yearly movie count
TETRA 433µs
3.3ms Neo4j
TETRA, 5.7× faster
top genres by avg rating
TETRA 11.3ms
64.6ms Neo4j
TETRA, 290× faster
most connected actors
TETRA 334µs
96.8ms Neo4j
TETRA, 1.1× faster
4-hop actor chain
TETRA 991µs
1.1ms Neo4j
TETRA, 1.8× faster
Kevin Bacon 2-degree
TETRA 676µs
1.2ms Neo4j
TETRA, 1.6× faster
Kevin Bacon 3-degree
TETRA 1.1ms
1.8ms Neo4j
Neo4j, 1.3× faster
Kevin Bacon 4-degree
Neo4j 2.0ms
2.5ms TETRA
TETRA, 3.7× faster
VLP 1..4 from Keanu
TETRA 591µs
2.2ms Neo4j

Neo4j’s wins are concentrated in multi-variable RETURN projections and OPTIONAL MATCH chains — areas where Tetra’s query planner has known optimization opportunities.

These represent active query planner optimization targets for TETRA. We expect these gaps to close as the planner matures. We ship what’s real, including what’s not done yet.

Neo4j, 53× faster
multi-var RETURN (3 vars)
Neo4j 762µs
40.8ms TETRA
Neo4j, 28× faster
OPTIONAL MATCH chain
Neo4j 924µs
26.0ms TETRA
Neo4j, 14× faster
multi-var RETURN (2 vars)
Neo4j 897µs
12.8ms TETRA
Neo4j, 2.4× faster
actors same genre as Matrix
Neo4j 879µs
2.1ms TETRA
Neo4j, 1.9× faster
shortestPath
Neo4j 574µs
1.1ms TETRA
Neo4j, 3.6× faster
3-hop genre fan-out
Neo4j 732µs
2.6ms TETRA

Write Operations

11 write tests. Tetra passes all. Neo4j: OOM.

CREATE, SET, REMOVE, DELETE, DETACH DELETE, MERGE (ON CREATE, ON MATCH). Neo4j could not complete the write verification suite under 1 GB with 1 CPU. Every write attempt resulted in an OOM crash.

11/11
Tetra (512 MB)
0/11
Neo4j (1 GB) — OOM

Throughput

Concurrent scaling. Same hardware. Fair fight.

Mixed workload — queries per second as concurrent Bolt clients increase. Tetra: 1 CPU / 512 MB. Neo4j: 1 CPU / 1 GB. At 64 clients, Neo4j runs out of memory and crashes. Tetra keeps serving.

Queries / second by concurrency level

Higher is better
TETRA (1 CPU / 512 MB)
Neo4j (1 CPU / 1 GB)
1
391/s
90/s
4.3×
2
456/s
82/s
5.6×
4
457/s
37/s
12.4×
8
471/s
29/s
16.2×
16
464/s
49/s
9.5×
32
468/s
54/s
8.7×
64
485/s
0/s — OOM CRASH
485/s
TETRA peak
90/s
Neo4j peak
16.2×
TETRA @ 8 clients
OOM
Neo4j @ 64 clients

Neo4j’s throughput drops as concurrency increases — from 90 q/s at 1 client to 29 q/s at 8 clients — as the JVM garbage collector fights the 1 GB memory limit. Tetra’s throughput climbs with added clients and then holds steady near 470 q/s. No GC, no heap pressure.

Tetra: 1 CPU / 512 MB · Neo4j: 1 CPU / 1 GB · Bolt v4.4 · Mixed read workload · Recommendations dataset

Economics

What it actually costs.

Every graph database prices differently. We did the math so you don’t have to. All prices verified from vendor sites, April 2026.

Provider · Config · Monthly · What’s missing
TETRA · Flat rate · $299 · Nothing. Retina, 30+ algorithms, full Cypher included.
Neptune · db.r5.large, single · $297 · No HA/failover. +replicas (2–4× cost), +storage, +I/O. No shortestPath(). No built-in visualization.
Neo4j AuraDB Pro · 16 GB / 3 CPU · $1,051 · Bloom included in cloud. Self-hosted GDS: $10K–$25K+/yr extra.
Neo4j AuraDB BC · 8 GB / 2 CPU · $1,168 · SLAs, RBAC, SSO. Self-hosted Enterprise: $20K–$200K+/yr.
TigerGraph Savanna · TG-00 (2 vCPU, 16 GB) · $720 · Compute only. +HA (2.8× → ~$2,016/mo), +storage, +Insights (10%). Proprietary GSQL.

Cumulative annual TCO

Provider · Year 1 · Year 3 · Year 5
Neo4j Pro · $12,614 · $37,843 · $63,072
Neo4j BC · $14,016 · $42,048 · $70,080
TigerGraph · $8,640 · $25,920 · $43,200
Neptune · $3,559 · $10,678 · $17,796
TETRA + migration · $14,452 · $21,628 · $28,804
TETRA only · $3,588 · $10,764 · $17,940

Important caveats

Neptune $297/mo is single instance, no HA. Production requires writer + replicas (2–4× instance cost).

TigerGraph $720/mo is compute only. With HA (2.8×) it’s ~$2,016/mo before storage or add-ons.

Neo4j self-hosted Bloom ($1,200–$2,500/user/yr) and GDS ($10K–$25K+/yr) pricing is from Vendr third-party transaction data, not Neo4j-published.

Neo4j Pro 16GB: $1,051.20 × 12 = $12,614/yr

Neo4j BC 8GB: $1,168.00 × 12 = $14,016/yr

TigerGraph TG-00: $720 × 12 = $8,640/yr (compute only)

Neptune db.r5.large: $296.61 × 12 = $3,559/yr (single instance)

TETRA w/ migration: $10,864 + ($299 × 12) = $14,452 yr 1; $3,588/yr after

TETRA product: $299 × 12 = $3,588/yr
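The per-year figures follow directly from the monthly prices; a quick sketch reproducing the arithmetic:

```python
# Monthly prices from the pricing table above (USD).
monthly = {
    "Neo4j Pro": 1051.20, "Neo4j BC": 1168.00, "TigerGraph": 720.00,
    "Neptune": 296.61, "TETRA": 299.00,
}
migration = 10_864   # optional one-time TETRA professional-services fee

def tco(provider, years, one_time=0):
    """Cumulative total cost of ownership over N years."""
    return round(one_time + monthly[provider] * 12 * years)

print(tco("Neo4j Pro", 1))           # 12614
print(tco("Neptune", 3))             # 10678
print(tco("TETRA", 5, migration))    # 28804
print(tco("TETRA", 1))               # 3588
```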

Pricing sources: neo4j.com/pricing · TigerGraph Savanna · AWS Neptune · Vendr (self-hosted estimates)

All prices verified April 2026. Smallest production-viable configuration for each provider.

Methodology

hardware: Apple M-series silicon · single machine · CPU only

containers: Podman 5.x · resource limits via cpus + mem_limit

tetra: 1 CPU / 512 MB RAM · Alpine 3.21 · single Go binary · no JVM

neo4j: 1 CPU / 1 GB RAM · neo4j:5-community · JVM heap 256–512 MB · page cache 128 MB

dataset: Recommendations — 28,863 nodes, 166,261 edges · same JSONL source

queries: 105 Cypher queries across 12 categories · 30 iterations each

metric: p50 median latency (reported) · p99 determines winner · 5% tie threshold

driver: neo4j-go-driver/v5 · same Bolt client for all targets

throughput: 4 representative queries · 5-second wall clock per level · 1–64 clients

writes: 11 operations: CREATE, SET, REMOVE, DELETE, DETACH DELETE, MERGE

cypher_tck: 1,611 / 1,611 scenarios (100%)

What we’re NOT doing

No cherry-picking queries. All 105 run on all engines. Neo4j wins are reported alongside Tetra wins.

No warm-up discarding. First-query latency counts.

No query hints or engine-specific tuning. Identical Cypher strings over the same Bolt protocol.

No pre-warming caches. Both containers start fresh, load data, and run the benchmark.

See it for yourself.

Explore the recommendations dataset in 3D — 2,330 nodes, 3,506 edges, running in your browser.

WebGPU-powered Retina viewer. Click a node, follow the connections, see what’s actually there. No install. No account. Free.

Launch Demo →

Introductory Pricing

$299/mo
The database. Flat rate. No per-GB scaling. No add-on tiers.
$10,864
One-time professional services — data migration, ingestion, conversion & validation.

Includes Retina visualization, 30+ graph algorithms, full openCypher, and operational support. The one-time fee is optional — for existing data migrations only. Not a license fee.

Get TETRA →

Questions? Schedule a call or reach out.
