- Tensorlake wins all three modes, consistently the fastest across every dataset size and configuration.
- Vercel is a solid second overall, with tight competition from E2B in fsync mode.
- Daytona's concurrent reads are 3-4x slower than every other provider's across all modes. This is the single biggest performance differentiator: Daytona's single-threaded operations are competitive, but multi-threaded reads expose a bottleneck.
- Modal's large-mode performance collapsed, running 6.7x slower than Tensorlake (63.55s vs 9.45s). LIKE queries took 38s (vs 6.5s on Tensorlake) and concurrent reads reached only 101 q/s. Modal ships SQLite 3.40.1, the oldest in the set, which may explain the degradation at scale. Its default and fsync modes were mid-pack.
- LIKE queries (full table scans) dominate total time in all modes, especially at the large scale, where they account for 60-70% of runtime.
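The benchmark's exact queries are not shown here, but a leading-wildcard LIKE is the canonical way to force a full table scan in SQLite: the pattern starts with `%`, so no index can be used and every row must be examined. A minimal sketch of that workload shape; the table name, row count, and pattern are invented for illustration:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT INTO docs (body) VALUES (?)",
    [(f"row {i} payload",) for i in range(10_000)],
)
conn.commit()

# The leading wildcard defeats any index on body, forcing a full table scan.
start = time.perf_counter()
matches = conn.execute(
    "SELECT COUNT(*) FROM docs WHERE body LIKE '%payload%'"
).fetchone()[0]
elapsed = time.perf_counter() - start
print(matches, f"{elapsed:.4f}s")
```

At small scale this finishes in milliseconds; the 60-70% share of runtime only emerges once the table is large enough that the scan dominates.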
## Environment

|                   | Tensorlake | Vercel | Daytona         | E2B     | Modal          |
|-------------------|------------|--------|-----------------|---------|----------------|
| Python            | 3.12.3     | 3.13.1 | 3.13.12         | 3.13.12 | 3.13.3         |
| SQLite            | 3.45.1     | 3.51.1 | 3.46.1          | 3.46.1  | 3.40.1         |
| vCPUs (verified)  | 2          | 2      | 2 (cgroup)      | 2       | 2              |
| Memory (verified) | 3.9 GB     | 4.3 GB | 4.0 GB (cgroup) | 3.9 GB  | ~1 TB (host)   |
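The "(verified)" figures were presumably probed from inside each sandbox, where `nproc` can report host CPUs even though the cgroup enforces a smaller quota (as the Daytona provider note mentions). A minimal sketch of reading an effective CPU count from a cgroup v2 quota; the helper name and fallback are illustrative, not part of the benchmark harness:

```python
import os

def effective_cpus(cpu_max: str) -> float:
    # cgroup v2 exposes the CPU quota in /sys/fs/cgroup/cpu.max as "<quota> <period>";
    # "max" means unlimited, so fall back to the visible CPU count.
    quota, period = cpu_max.split()
    if quota == "max":
        return float(os.cpu_count() or 1)
    return int(quota) / int(period)

# Inside a sandbox one would read the file directly:
#   with open("/sys/fs/cgroup/cpu.max") as f:
#       print(effective_cpus(f.read()))
print(effective_cpus("200000 100000"))  # a 2-vCPU quota
```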
## Running the Benchmarks

### Prerequisites

Install and authenticate each provider's CLI:

| Provider   | Install                                    | Auth               |
|------------|--------------------------------------------|--------------------|
| Tensorlake | `pip install tensorlake` (into `/tmp/venv`) | `tensorlake login` |
| Vercel     | `npm i -g sandbox`                         | `sandbox login`    |
| Daytona    | `brew install daytonaio/cli/daytona`       | `daytona login`    |
| E2B        | `npm i -g e2b`                             | `e2b auth login`   |
| Modal      | `pip install modal`                        | `modal token set`  |
E2B requires building a template to configure CPU and memory:
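Per the E2B provider note below, the template is built with `e2b template create`; the CPU and memory values here are placeholders matching the benchmark's 2 vCPU / 4 GB configuration:

```shell
# Build a custom E2B template with fixed CPU and memory (values are placeholders)
e2b template create --cpu-count 2 --memory-mb 4096
```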
```bash
# Run all providers, all modes (default + fsync + large)
python run_benchmarks.py

# Run specific providers
python run_benchmarks.py tensorlake vercel

# Use a custom E2B template
python run_benchmarks.py e2b --e2b-template bench-2cpu-4gb
```
## Provider Notes

- **Vercel:** The Python runtime does not include the native `_sqlite3` C extension. The runner installs `pysqlite3-binary` automatically.
- **Daytona:** Using `--class small` locks you to a snapshot with a fixed 1 vCPU. To get custom resources, use `--cpu`/`--memory` with `-f Dockerfile` instead. `nproc` reports host CPUs, but the cgroup enforces the requested limit.
- **E2B:** CPU and memory cannot be set at sandbox creation time. You must build a custom template with `e2b template create --cpu-count N --memory-mb N`.
- **Modal:** Python SDK (not CLI). Sandboxes are created via `modal.Sandbox.create()`. Requires `pip install modal` and `modal token set`. The memory probe reports ~1 TB because it sees the host total; the actual sandbox allocation may be smaller.
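The concurrent-read numbers cited in the results (and the bottleneck attributed to Daytona) presumably come from many threads issuing reads against one database file. A hedged sketch of that workload shape; the schema, worker count, and query are invented, and the real harness surely measures queries per second rather than just collecting results:

```python
import os
import sqlite3
import tempfile
from concurrent.futures import ThreadPoolExecutor

path = os.path.join(tempfile.mkdtemp(), "bench.db")

# Seed a small table; WAL mode lets readers proceed alongside writers.
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO kv VALUES (?, ?)", [(i, str(i)) for i in range(1_000)])
conn.commit()
conn.close()

def read_batch(limit: int) -> int:
    # Each worker opens its own connection; SQLite allows many concurrent readers.
    c = sqlite3.connect(path)
    try:
        return sum(1 for _ in c.execute("SELECT v FROM kv WHERE k < ?", (limit,)))
    finally:
        c.close()

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(read_batch, [100] * 8))
print(results)
```

Under this pattern, per-thread filesystem read latency in the sandbox dominates throughput, which is where a provider-level bottleneck like Daytona's would show up.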