ArcadeData/ldbc_graphalytics_platforms_arcadedb

LDBC Graphalytics ArcadeDB Platform Driver

Platform driver implementation for the LDBC Graphalytics benchmark using ArcadeDB.

Uses ArcadeDB in embedded mode with the Graph Analytical View (GAV) engine, which builds a CSR (Compressed Sparse Row) adjacency index for high-performance graph algorithm execution with zero GC pressure.
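For intuition, a CSR index packs every vertex's neighbor list into one flat array, indexed by a per-vertex offset array. A toy sketch in Python (illustrative layout only, not ArcadeDB's actual API):

```python
# Toy CSR layout for a directed graph with edges 0->1, 0->2, 1->2, 2->0.
# offsets has V+1 entries; vertex v's neighbors live at
# neighbors[offsets[v] : offsets[v+1]] -- no per-vertex objects, no GC churn.
offsets = [0, 2, 3, 4]
neighbors = [1, 2, 2, 0]

def out_neighbors(v):
    """Neighbors of v are one contiguous slice of the packed array."""
    return neighbors[offsets[v]:offsets[v + 1]]
```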

This repository contains three benchmark modes:

  1. Official LDBC Graphalytics — standardized framework with per-algorithm isolation, validation, and reporting
  2. Native multi-vendor comparison — load once, run all algorithms, compare ArcadeDB vs Kuzu vs DuckPGQ vs Memgraph vs Neo4j vs FalkorDB vs HugeGraph
  3. LSQB (Labelled Subgraph Query Benchmark) — 9 subgraph pattern matching queries on the LDBC SNB social network, comparing ArcadeDB (Cypher) vs DuckDB (SQL)

Supported Algorithms

| Algorithm | Implementation | Complexity |
|---|---|---|
| BFS (Breadth-First Search) | Parallel frontier expansion with bitmap visited set and push/pull direction optimization | O(V + E) |
| PR (PageRank) | Pull-based parallel iteration via backward CSR | O(iterations * E) |
| WCC (Weakly Connected Components) | Synchronous parallel min-label propagation | O(diameter * E) |
| CDLP (Community Detection Label Propagation) | Synchronous parallel label propagation with sort-based mode finding | O(iterations * E * log(d)) |
| LCC (Local Clustering Coefficient) | Parallel sorted-merge triangle counting | O(E * sqrt(E)) |
| SSSP (Single Source Shortest Paths) | Dijkstra with binary min-heap on CSR + columnar weights | O((V + E) * log(V)) |
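The pull-based PageRank scheme from the table can be sketched as follows. This is a minimal single-threaded Python analogue (the real implementation is parallel Java over packed arrays), shown only to illustrate why pulling over the backward CSR makes per-vertex writes conflict-free:

```python
def pagerank_pull(n, rev_offsets, rev_neighbors, out_degree,
                  iterations=10, d=0.85):
    """Pull-based PageRank sketch: each vertex v reads the ranks of its
    in-neighbors via the backward CSR, so each vertex writes only its own
    slot (trivially parallelizable). Dangling vertices are ignored here."""
    rank = [1.0 / n] * n
    for _ in range(iterations):
        new_rank = [0.0] * n
        for v in range(n):
            s = 0.0
            for i in range(rev_offsets[v], rev_offsets[v + 1]):
                u = rev_neighbors[i]          # u has an edge u -> v
                s += rank[u] / out_degree[u]
            new_rank[v] = (1 - d) / n + d * s
        rank = new_rank
    return rank
```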

Prerequisites

  • Java 21 or later (required for jdk.incubator.vector SIMD support)
  • Maven 3.x
  • ArcadeDB engine built locally

Build

# 1. Build ArcadeDB engine
cd /path/to/arcadedb
mvn install -DskipTests -pl engine -am -q

# 2. Build the LDBC platform driver
cd /path/to/ldbc_graphalytics_platforms_arcadedb
mvn package -DskipTests

The build produces a self-contained distribution in graphalytics-1.3.0-arcadedb-0.1-SNAPSHOT/.

Dataset

Download datasets from the LDBC Graphalytics data repository. For example, datagen-7_5-fb (633K vertices, 34M edges):

/path/to/graphs/
  datagen-7_5-fb.v              # vertex file (one ID per line)
  datagen-7_5-fb.e              # edge file (src dst weight, space-separated)
  datagen-7_5-fb.properties     # graph metadata
  datagen-7_5-fb-BFS/           # validation data per algorithm
  datagen-7_5-fb-WCC/
  datagen-7_5-fb-PR/
  datagen-7_5-fb-CDLP/
  datagen-7_5-fb-LCC/
  datagen-7_5-fb-SSSP/
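For reference, the .v/.e formats above are simple line-oriented text; a hypothetical Python loader sketch (the actual driver does this in Java, with the same dense-ID remapping):

```python
def load_graph(v_path, e_path):
    """Read Graphalytics vertex/edge files into a dense-ID edge list.
    Vertex IDs in the files are arbitrary longs; remap them to 0..V-1
    so they can index packed arrays directly."""
    with open(v_path) as f:
        ids = [int(line) for line in f if line.strip()]
    dense = {vid: i for i, vid in enumerate(ids)}
    edges = []
    with open(e_path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            src, dst = dense[int(parts[0])], dense[int(parts[1])]
            weight = float(parts[2]) if len(parts) > 2 else 1.0
            edges.append((src, dst, weight))
    return len(ids), edges
```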

Mode 1: Official LDBC Graphalytics Benchmark

Uses the official LDBC Graphalytics framework with ArcadeDB's platform driver. Produces standardized results with separate load_time, processing_time, and makespan measurements. The framework reloads the graph for each algorithm to ensure isolated measurements.

Configuration

Edit files in graphalytics-1.3.0-arcadedb-0.1-SNAPSHOT/config/:

benchmark.properties:

graphs.root-directory = /path/to/graphs
graphs.validation-directory = /path/to/graphs
benchmark.runner.max-memory = 16384

benchmarks/custom.properties:

benchmark.custom.graphs = datagen-7_5-fb
benchmark.custom.algorithms = BFS, WCC, PR, CDLP, LCC, SSSP
benchmark.custom.timeout = 7200
benchmark.custom.output-required = true
benchmark.custom.validation-required = true
benchmark.custom.repetitions = 1

platform.properties:

platform.arcadedb.olap = true

Run

cd graphalytics-1.3.0-arcadedb-0.1-SNAPSHOT
bash bin/sh/run-benchmark.sh

Results are written to report/<timestamp>-ARCADEDB-report-CUSTOM/json/results.json.

Extract Results

LATEST=$(ls -td report/*ARCADEDB* | head -1)
python3 -c "
import json
with open('$LATEST/json/results.json') as f:
    data = json.load(f)
result = data.get('result', data.get('experiments', {}))
runs = result.get('runs', {})
jobs = result.get('jobs', {})
for rid, r in sorted(runs.items(), key=lambda x: x[1]['timestamp']):
    algo = next(j['algorithm'] for j in jobs.values() if rid in j['runs'])
    print(f\"{algo:6} proc={r['processing_time']:>8}s  load={r['load_time']:>8}s\")
"

Mode 2: Native Multi-Vendor Comparison

Located in native-benchmark/. Loads the graph once and runs all algorithms sequentially on the same in-memory structure. This provides a fair apples-to-apples comparison: every system follows the same protocol, so each timing covers pure algorithm execution on an already-loaded graph, with load time excluded.
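The load-once protocol amounts to a small harness like the following sketch (function names are hypothetical, not the repository's actual code):

```python
import time

def run_suite(load_fn, algorithms):
    """Load the graph once, then time each algorithm against the same
    in-memory structure -- the protocol used by the native comparison.
    `algorithms` maps a name to a callable taking the loaded graph."""
    t0 = time.perf_counter()
    graph = load_fn()
    load_time = time.perf_counter() - t0
    results = {}
    for name, algo in algorithms.items():
        t0 = time.perf_counter()
        algo(graph)
        results[name] = time.perf_counter() - t0
    return load_time, results
```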

Systems tested: ArcadeDB, Kuzu, DuckPGQ, Memgraph, Neo4j, ArangoDB, FalkorDB, HugeGraph

ArcadeDB (Java)

# Compile (use the LDBC platform fat JAR for dependencies)
LDBC_JAR=graphalytics-1.3.0-arcadedb-0.1-SNAPSHOT/lib/graphalytics-platforms-arcadedb-0.1-SNAPSHOT-default.jar
cd native-benchmark
javac --add-modules jdk.incubator.vector -cp "../$LDBC_JAR" ArcadeDBBenchmark.java

# Run
java --add-modules jdk.incubator.vector -Xms8g -Xmx8g -cp ".:../$LDBC_JAR" ArcadeDBBenchmark

Kuzu, DuckPGQ, Memgraph, Neo4j, ArangoDB (Python)

# Create virtual environment and install dependencies
cd native-benchmark
python3 -m venv .venv
source .venv/bin/activate
pip install kuzu duckdb pymgclient neo4j python-arango

# Run all available benchmarks
python3 benchmark.py

For Memgraph, start Docker first:

docker run -d --name memgraph -p 7687:7687 memgraph/memgraph-mage

For Neo4j, start Docker with GDS plugin:

docker run -d --name neo4j -p 7474:7474 -p 7688:7687 \
  -e NEO4J_AUTH=neo4j/benchmark123 \
  -e NEO4J_PLUGINS='["graph-data-science"]' \
  neo4j:2026-community

For ArangoDB, start Docker (use 3.11 — Pregel was removed in 3.12):

docker run -d --name arangodb -p 8529:8529 -e ARANGO_ROOT_PASSWORD=benchmark arangodb:3.11

For HugeGraph (Vermeer OLAP engine):

docker network create hugegraph-net
docker run -d --name vermeer-master --network hugegraph-net \
  -p 6688:6688 -p 6689:6689 hugegraph/vermeer --env=master
docker run -d --name vermeer-worker --network hugegraph-net \
  -p 6788:6788 -p 6789:6789 \
  -v /path/to/graphs:/data/graphs:ro \
  hugegraph/vermeer --env=worker --master_peer=vermeer-master:6689
# Assign worker to common pool:
WORKER=$(curl -s http://localhost:6688/api/v1/workers | python3 -c "import sys,json; print(json.load(sys.stdin)['workers'][0]['name'])")
curl -X POST "http://localhost:6688/api/v1/admin/workers/group/\$/${WORKER}"

Benchmark Results

Dataset: datagen-7_5-fb (633,432 vertices, 34,185,747 edges, undirected, weighted)

Benchmarks run on a MacBook Pro 16" (2019), Intel Core i9-9880H 8-core @ 2.3GHz, 32GB RAM, macOS.

Official LDBC Graphalytics Results (ArcadeDB)

Using the LDBC Graphalytics framework (graph reloaded per algorithm):

| Algorithm | processing_time | load_time | makespan |
|---|---|---|---|
| PR | 16.12s | 95.04s | 48.80s |
| WCC | 8.36s | 95.04s | 37.67s |
| BFS | 22.81s | 95.04s | 57.52s |
| CDLP | 30.38s | 95.04s | 56.81s |
| LCC | 43.75s | 95.04s | 73.76s |
| SSSP | 28.72s | 115.50s | 144.84s |

All 6 algorithms passed with validation.

Native Comparison (load once, run all algorithms)

| System | Version | Edition | License | Mode | Overhead |
|---|---|---|---|---|---|
| ArcadeDB (embedded) | 26.4.1 | Open Source | Apache 2.0 | Embedded (in-process, Java 21) | None |
| ArcadeDB (Docker) | 26.4.1 | Open Source | Apache 2.0 | Server (Docker, HTTP API) | Network + Docker |
| Neo4j | 2026 | Community | GPL 3.0 | Server (Docker, Bolt protocol) | Network + Docker |
| Kuzu | 0.11.3 | Open Source | MIT | Embedded (in-process, C++ via Python) | None |
| DuckPGQ | DuckDB 1.5.0 | Open Source | MIT | Embedded (in-process, C++ via Python) | None |
| Memgraph | 3.8.1 | Community | BSL 1.1 | Server (Docker, Bolt protocol) | Network + Docker |
| ArangoDB | 3.11.14 | Community | Apache 2.0 | Server (Docker, HTTP API) | Network + Docker |
| FalkorDB | 4.16.6 | Open Source | Source Available | Server (Docker, Redis protocol) | Network + Docker |
| HugeGraph Vermeer | latest | Open Source | Apache 2.0 | Server (Docker, HTTP API) | Network + Docker |

ArcadeDB is tested in two modes: embedded (in-process Java, zero overhead) and Docker (same HTTP/network overhead as the other Docker-based systems). Kuzu and DuckPGQ run embedded. Neo4j, Memgraph, ArangoDB, FalkorDB, and HugeGraph run as Docker containers.

ArcadeDB Embedded vs Docker

| Algorithm | ArcadeDB Embedded | ArcadeDB Docker |
|---|---|---|
| PageRank | 0.48s | 0.83s |
| WCC | 0.30s | 0.22s |
| BFS | 0.13s | 0.07s |
| LCC | 27.41s | 34.98s |
| SSSP | 3.53s | 0.97s |
| CDLP | 3.67s | 3.35s |

Docker results are measured warm (JIT-compiled), matching how production servers run. WCC, BFS, and SSSP are faster in Docker because the server JVM (16GB heap) has more room for JIT optimization than the embedded benchmark (8GB heap). LCC is about 28% slower in Docker due to Docker Desktop's Linux VM overhead on macOS.
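"Warm" here means the hot loops were already JIT-compiled before timing started; conceptually the measurement looks like this sketch (not the benchmark's actual code):

```python
import time

def timed(fn):
    """Wall-clock time of a single invocation."""
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def warm_time(fn, warmup=3, runs=5):
    """Discard warm-up iterations so the JIT has compiled the hot paths,
    then report the best of the remaining runs (steady-state speed)."""
    for _ in range(warmup):
        fn()
    return min(timed(fn) for _ in range(runs))
```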

All Systems Comparison

| Algorithm | ArcadeDB | ArcadeDB Docker | Neo4j 2026 | Kuzu | DuckPGQ | Memgraph | ArangoDB | FalkorDB | HugeGraph |
|---|---|---|---|---|---|---|---|---|---|
| PageRank | 0.48s | 0.83s | 11.15s | 4.30s | 6.14s | 16.90s | 157.01s | 1.67s | 4.01s |
| WCC | 0.30s | 0.22s | 0.75s | 0.43s | 13.93s | crash | 78.03s | 0.85s | 6.71s |
| BFS | 0.13s | 0.07s | 1.91s | 0.86s | 2,754s | 11.72s | 511.55s | 0.20s | 0.54s |
| LCC | 27.41s | 34.98s | 45.78s | N/A | 38.59s | N/A | N/A | N/A | 272.04s |
| SSSP | 3.53s | 0.97s | N/A | N/A | N/A | N/A | 301.93s | N/A | N/A |
| CDLP | 3.67s | 3.35s | 6.43s | N/A | N/A | N/A | 407.41s | 5.38s | 62.70s |

Memgraph crashes with segfault (exit 139) during edge loading at ~18-20M of 34M edges.

ArcadeDB is the fastest on every comparable algorithm and the only system that successfully runs all 6 LDBC Graphalytics algorithms. Even when running as a Docker container (same conditions as Neo4j, Memgraph, FalkorDB, and HugeGraph), ArcadeDB leads on every algorithm.

ArcadeDB Embedded vs other systems:

  • vs Neo4j 2026 GDS: PageRank 23x faster, WCC 2.5x faster, BFS 15x faster, LCC 1.7x faster, CDLP 1.8x faster
  • vs Kuzu: PageRank 9x faster, WCC 1.4x faster, BFS 6.6x faster
  • vs DuckPGQ: PageRank 13x faster, WCC 46x faster, BFS 21,185x faster, LCC 1.4x faster
  • vs Memgraph: PageRank 35x faster, BFS 90x faster (WCC/LCC/SSSP/CDLP: crash or unavailable)
  • vs ArangoDB: PageRank 327x faster, WCC 260x faster, BFS 3,935x faster, SSSP 86x faster, CDLP 111x faster
  • vs FalkorDB: PageRank 3.5x faster, WCC 2.8x faster, BFS 1.5x faster, CDLP 1.5x faster (LCC/SSSP: not available)
  • vs HugeGraph: PageRank 8.4x faster, WCC 22x faster, BFS 4.2x faster, LCC 9.9x faster, CDLP 17x faster (SSSP: not available)

ArcadeDB Docker vs other Docker systems (apples-to-apples):

  • vs Neo4j 2026 GDS: PageRank 13.4x faster, WCC 3.4x faster, BFS 27x faster, LCC 1.3x faster, CDLP 1.9x faster
  • vs FalkorDB: PageRank 2x faster, WCC 3.9x faster, BFS 2.9x faster, CDLP 1.6x faster (LCC/SSSP: not available in FalkorDB)
  • vs HugeGraph: PageRank 4.8x faster, WCC 30x faster, BFS 7.7x faster, LCC 7.8x faster, CDLP 18.7x faster

Notes:

  • Memgraph 3.8.1 crashes with segfault (exit 139) during edge loading at ~18-20M edges. WCC previously failed with OOM at 7.6GB.
  • ArangoDB 3.11 uses Pregel for PageRank/WCC/SSSP/CDLP and AQL traversal for BFS. Pregel was removed in ArangoDB 3.12.
  • Kuzu and DuckPGQ lack native implementations for most algorithms beyond PageRank, WCC, and BFS.
  • FalkorDB (RedisGraph fork) has no built-in LCC or full SSSP algorithm. Its algo.SSpaths is pair-oriented, not a full single-source Dijkstra.
  • HugeGraph/Vermeer's SSSP is unweighted (hop-count only), so weighted SSSP is not available. Uses the Vermeer Go-based OLAP engine.
  • ArcadeDB Docker results measured warm (JIT-compiled) to match how production servers run. All Docker systems run on Docker Desktop for macOS with 16 CPUs and 24GB RAM.
  • None of the competing systems have official LDBC Graphalytics platform drivers. Only ArcadeDB has an official LDBC Graphalytics platform implementation.

File Structure

native-benchmark/
  ArcadeDBBenchmark.java    # ArcadeDB Graphalytics benchmark (Java, embedded)
  ArcadeDBLSQB.java         # ArcadeDB LSQB benchmark (Java, embedded, Cypher)
  benchmark.py              # Kuzu, DuckPGQ, Memgraph, Neo4j, ArangoDB Graphalytics benchmarks (Python)
  lsqb_benchmark.py         # Kuzu, DuckDB, Neo4j LSQB benchmarks (Python)
  bench_common.py           # Shared benchmark infrastructure

Mode 3: LSQB (Labelled Subgraph Query Benchmark)

The LSQB benchmark is a lightweight microbenchmark from the LDBC council that focuses on subgraph pattern matching — counting how many times a given labelled graph pattern appears in the dataset. It tests the query optimizer's ability to handle multi-way joins, anti-patterns (NOT EXISTS), and type hierarchy (Message supertype with Post/Comment subtypes).

The benchmark uses the LDBC SNB social network dataset (SF1: ~3.9M vertices, ~17.9M edges) and runs 9 queries (Q1–Q9), expressed in Cypher for the graph systems and in SQL for DuckDB and PostgreSQL, covering patterns from simple 2-hop paths to complex 8-hop chains and triangles.

Dataset

Download the LDBC SNB SF1 dataset (merged-fk format for ArcadeDB/DuckDB, projected-fk for Kuzu):

# Merged-fk (for ArcadeDB and DuckDB)
curl -L -o /path/to/graphs/lsqb-sf1-merged.tar.zst \
  https://datasets.ldbcouncil.org/lsqb/social-network-sf1-merged-fk.tar.zst
cd /path/to/graphs && tar --use-compress-program=unzstd -xf lsqb-sf1-merged.tar.zst

Update DATA_DIR in ArcadeDBLSQB.java to point to the extracted directory.

Run ArcadeDB (Java, embedded)

cd native-benchmark
LDBC_JAR=../target/graphalytics-platforms-arcadedb-0.1-SNAPSHOT-default.jar

# Compile
javac -cp "$LDBC_JAR" ArcadeDBLSQB.java

# Run (first run loads data, subsequent runs reuse the database)
java -Xms4g -Xmx4g --add-modules jdk.incubator.vector -cp ".:$LDBC_JAR" ArcadeDBLSQB

# Force reload from scratch
java -Xms4g -Xmx4g --add-modules jdk.incubator.vector -cp ".:$LDBC_JAR" ArcadeDBLSQB --reset

Run DuckDB (Python)

cd native-benchmark
pip install duckdb
python3 lsqb_benchmark.py duckdb

Run All Systems (Kuzu, DuckDB, Neo4j)

python3 lsqb_benchmark.py              # Run all systems
python3 lsqb_benchmark.py --reset      # Delete all data and reload
python3 lsqb_benchmark.py kuzu duckdb  # Run specific systems only

LSQB Queries

| Query | Pattern | Description |
|---|---|---|
| Q1 | 8-hop chain | Country←City←Person←Forum→Post←Comment→Tag→TagClass |
| Q2 | Diamond | Person-KNOWS-Person with Comment→Post creator path |
| Q3 | Triangle | 3 Persons in same Country, all connected by KNOWS |
| Q4 | Star | Message with Tag, Creator, Likes, and Replies (inner join) |
| Q5 | Fork | Message←Reply with different Tags |
| Q6 | 2-hop + interest | Person-KNOWS-Person-KNOWS-Person→Tag |
| Q7 | Star (optional) | Same as Q4 but with OPTIONAL MATCH for Likes and Replies |
| Q8 | Anti-pattern | Like Q5 but Comment must NOT have the parent's Tag |
| Q9 | Anti-pattern | Like Q6 but Person1 must NOT know Person3 |
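The anti-pattern queries (Q8/Q9) require an edge to be absent, i.e. an anti-join. Q9's shape can be illustrated with plain Python sets on toy data (directed toy edges, not the real SNB schema):

```python
def count_q9_like(knows, interest):
    """Toy version of Q9: count matches of p1-KNOWS-p2-KNOWS-p3 where
    p3 has an interest tag and p1 does NOT know p3 (the anti-join)."""
    knows_set = set(knows)
    total = 0
    for p1, p2 in knows:
        for q, p3 in knows:
            if q != p2 or p3 == p1:
                continue
            if (p1, p3) in knows_set:   # anti-pattern: edge must be absent
                continue
            total += sum(1 for person, tag in interest if person == p3)
    return total
```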

LSQB Results

Dataset: LDBC SNB SF1 (3,947,829 vertices, 17,882,623 edges)

Benchmarks run on a MacBook Pro 16" (2019), Intel Core i9-9880H 8-core @ 2.3GHz, 32GB RAM, macOS.

| System | Version | Mode | Language |
|---|---|---|---|
| ArcadeDB | 26.4.1 | Embedded (Java 21) | Cypher |
| DuckDB | 1.4.4 | Embedded (C++ via Python) | SQL |
| Kuzu | 0.11.3 | Embedded (C++ via Python) | Cypher |
| Neo4j | 2025 Community | Docker | Cypher |
| PostgreSQL | 17 | Docker | SQL |
| Memgraph | latest | Docker | Cypher |

| Query | Expected Count | ArcadeDB | DuckDB | Kuzu | Neo4j | PostgreSQL | Memgraph |
|---|---|---|---|---|---|---|---|
| Q1 | 221,636,419 | 0.23s | 0.15s | 5.83s | 8.25s | 6.56s | 60.45s |
| Q2 | 1,085,627 | 0.20s | 0.02s | 0.14s | 2.06s | 0.34s | timeout |
| Q3 | 753,570 | 0.10s | 0.05s | 2.44s | 14.31s | 2.12s | timeout |
| Q4 | 14,836,038 | 0.02s | 0.08s | N/A | 7.82s | 6.86s | 4.50s |
| Q5 | 13,824,510 | 0.31s | 0.04s | N/A | 6.72s | 0.69s | 3.86s |
| Q6 | 1,668,134,320 | 0.75s | 2.18s | 1.41s | 52.06s | 17.72s | 148.14s |
| Q7 | 26,190,133 | 0.03s | 0.08s | N/A | 10.45s | 11.22s | 5.59s |
| Q8 | 6,907,213 | 0.58s | 0.07s | N/A | 12.91s | 1.31s | 3.37s |
| Q9 | 1,596,153,418 | 1.17s | 7.77s | 6.15s | 59.09s | 22.25s | timeout |

All 9 queries produce correct results matching the official LSQB expected output. Kuzu skips Q4/Q5/Q7/Q8 (no :Message supertype support). Memgraph times out on Q2/Q3/Q9 (600s limit).

Analysis:

  • ArcadeDB is the fastest on 4 out of 9 queries (Q4, Q6, Q7, Q9), DuckDB on the other 5.
  • Q4 and Q7 — star-shaped joins centered on Message (Tag, Creator, Likes, Replies). With the GAV's CSR acceleration, ArcadeDB completes these in 20–30ms, 2.7–3.5x faster than DuckDB, and 200–400x faster than Neo4j/PostgreSQL. The benchmark uses GraphTraversalProviderRegistry.awaitAll() to ensure the GAV is fully registered with the query optimizer before timing queries.
  • Q6 and Q9 — multi-hop path traversals (Person-KNOWS-Person-KNOWS-Person) where graph adjacency lists outperform relational self-joins. These are the two heaviest queries with billion-scale result counts. ArcadeDB is 3–7x faster than DuckDB, 20–50x faster than Neo4j, and 19–24x faster than PostgreSQL.
  • DuckDB wins on remaining queries — Q1 (long chain), Q2 (diamond), Q3 (triangle), Q5 (fork), Q8 (anti-pattern) are join-intensive patterns where DuckDB's columnar vectorized execution excels.
  • Neo4j and Memgraph are significantly slower across the board. Memgraph times out on 3 of 9 queries. Neo4j completes all queries but is 10–140x slower than ArcadeDB on every query.
  • PostgreSQL is a solid middle ground for a traditional RDBMS — faster than Neo4j/Memgraph but significantly slower than both ArcadeDB and DuckDB.

Architecture

Graph Analytical View (GAV)

The GAV engine builds a CSR adjacency index from ArcadeDB's OLTP storage:

  1. Pass 1: Scans all vertices, assigns dense integer IDs, collects edge pairs
  2. Pass 2: Computes prefix sums from degree arrays, fills CSR neighbor arrays
  3. Result: Packed int[] arrays for forward/backward offsets and neighbors, plus columnar edge property storage

All graph algorithms operate directly on these packed arrays with zero object allocation in hot loops.
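The two passes map to degree counting, a prefix sum to produce offsets, and a fill of the neighbor array. A compact Python sketch of pass 2 (a stand-in for the packed int[] arrays, not ArcadeDB's actual code):

```python
def build_csr(num_vertices, edges):
    """Pass 1 has already produced dense IDs and (src, dst) pairs.
    Pass 2: degree counts -> prefix-sum offsets -> filled neighbor array."""
    degree = [0] * num_vertices
    for src, _ in edges:
        degree[src] += 1
    offsets = [0] * (num_vertices + 1)
    for v in range(num_vertices):
        offsets[v + 1] = offsets[v] + degree[v]   # prefix sum
    neighbors = [0] * len(edges)
    cursor = offsets[:-1].copy()                  # next write slot per vertex
    for src, dst in edges:
        neighbors[cursor[src]] = dst
        cursor[src] += 1
    return offsets, neighbors
```

The backward CSR is built the same way with src and dst swapped; edge weights go into a parallel columnar array filled with the same cursors.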

Algorithm Execution Modes

  • CSR-accelerated (default when OLAP enabled): Algorithms run on the GAV's CSR arrays via GraphAlgorithms.* methods
  • OLTP fallback: If GAV is unavailable, algorithms fall back to ArcadeDB's built-in graph traversal procedures

JVM Flags

The benchmark runner uses:

-Xms16g -Xmx16g --add-modules jdk.incubator.vector

The jdk.incubator.vector module enables SIMD-accelerated operations in the GAV engine.

License

Apache License, Version 2.0
