The Neuracore Data Daemon is a small background service that runs on your machine and takes care of storing recordings locally and uploading them.
You can use it in two ways:
- CLI first: launch the daemon, then run your scripts
- Script first: run your script and let it start the daemon automatically
Profiles are optional. If you do not use a named profile, the daemon uses the default profile (and any environment variable overrides you set).
This page covers:
- How to run the daemon (CLI or from a script)
- How profiles work (optional) and where they are stored
- The configuration fields you can set
- Environment variables that control DB path, recordings root, and upload concurrency
- The order of precedence (defaults, profile, environment variables, CLI)
- What happens to old daemon databases at startup (automatic schema migration)
- A full CLI reference for the commands currently in use
It does not explain internal implementation details.
Install the package:
```
pip install -e .
```
Optional, but recommended for video performance:
```
sudo apt-get update && sudo apt-get install -y ffmpeg
```
The data daemon prefers the ffmpeg CLI encoder for recording. If the binary is not installed or encoder init fails, it automatically falls back to PyAV.
With the default profile:
```
neuracore data-daemon launch
```
With a named profile:
```
neuracore data-daemon profile create recording
neuracore data-daemon profile update recording --storage-limit 2gb --bandwidth-limit 50mb --storage-path /data/records --num-threads 4
neuracore data-daemon launch --profile recording
```
Background (runs quietly):
```
neuracore data-daemon launch --profile recording --background
```
Check status and stop it:
```
neuracore data-daemon status
neuracore data-daemon stop
```
You do not have to use `neuracore data-daemon launch` beforehand. The daemon will automatically start in the background if it is not already running when your script needs it.
It will:
- check if the daemon is already running
- start it in the background if it is not running
- wait until it is ready before continuing
Example:
```python
import neuracore as nc

def main():
    nc.login()
    # The daemon starts automatically when needed
    nc.start_recording()
    # ...
    nc.stop_recording()
```
Choosing a profile when using auto-start:
```
export NEURACORE_DAEMON_PROFILE=recording
python your_script.py --record
```
When to use which approach:
- Use CLI launch if you want to start the daemon once and then run many scripts.
- Use auto-start if you want each script to be self-contained.
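The auto-start steps above (check, spawn in background, wait) can be sketched roughly as follows, assuming the documented PID-file default of `~/.neuracore/daemon.pid`; `daemon_is_running` and `ensure_daemon` are illustrative names, not the real neuracore API:

```python
import os
import subprocess
import sys
from pathlib import Path

def daemon_is_running(pid_path: Path) -> bool:
    """Return True if the PID file points at a live process."""
    try:
        pid = int(pid_path.read_text().strip())
    except (FileNotFoundError, ValueError):
        return False
    try:
        os.kill(pid, 0)  # signal 0 checks existence without sending anything
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists but belongs to another user
    return True

def ensure_daemon(pid_path: Path) -> None:
    """Spawn the daemon entry point in the background if it is not running."""
    if not daemon_is_running(pid_path):
        subprocess.Popen([sys.executable, "-m", "neuracore.data_daemon.runner_entry"])
        # ...a real client would now poll until the daemon reports ready
```

A stale PID file (left over from a crash) fails the `os.kill` probe, so the daemon is restarted rather than skipped.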
When you run:
```
neuracore data-daemon launch
```
the CLI starts the daemon as a separate Python process by running:
```
python -m neuracore.data_daemon.runner_entry
```
That daemon process:
- boots the internal components it needs
- starts its main loop
- stays running until you stop it (or the machine shuts down)
You may see simple messages when it stops:
- Daemon exited.
- Daemon stopped.
On startup, the daemon initializes the SQLite store and ensures schema compatibility.
If an older single-table schema is detected (legacy traces.status format), the daemon
automatically migrates data to the current schema:
- `traces` rows are transformed into lifecycle fields: `write_status`, `registration_status`, `upload_status`
- `recordings` rows are generated per unique `recording_id`
- Existing trace metadata/bytes/error fields are preserved
- Migration runs before normal startup reconciliation
Migration runs once per DB file. After a successful migration, startup continues normally.
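The shape of this single-table migration can be sketched with `sqlite3`. The column mapping below (for example, treating a legacy `uploaded` status as a completed upload) is an assumption for illustration, not the daemon's actual logic; only the table and column names come from the text above:

```python
import sqlite3

def migrate_legacy_schema(conn: sqlite3.Connection) -> None:
    """One-shot migration from the legacy traces.status schema (sketch)."""
    cols = {row[1] for row in conn.execute("PRAGMA table_info(traces)")}
    if "status" not in cols:
        return  # already on the current schema: migration runs once per DB
    conn.executescript("""
        ALTER TABLE traces ADD COLUMN write_status TEXT;
        ALTER TABLE traces ADD COLUMN registration_status TEXT;
        ALTER TABLE traces ADD COLUMN upload_status TEXT;
        CREATE TABLE IF NOT EXISTS recordings (recording_id TEXT PRIMARY KEY);
        INSERT OR IGNORE INTO recordings (recording_id)
            SELECT DISTINCT recording_id FROM traces;
    """)
    # Assumed mapping: a legacy 'uploaded' status means every phase finished
    conn.execute("""
        UPDATE traces SET
            write_status = 'done',
            registration_status = 'done',
            upload_status = CASE status WHEN 'uploaded' THEN 'done' ELSE 'pending' END
    """)
    conn.commit()
```

The `status` column check is what makes the migration idempotent: once the lifecycle columns exist and the legacy marker is gone, startup skips straight to normal reconciliation.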
A profile is a YAML file that stores daemon settings you want to reuse.
Profiles are stored here:
```
~/.neuracore/data_daemon/profiles/<name>.yaml
```
Manage profiles with:
```
neuracore data-daemon profile create <name>
neuracore data-daemon profile update [profile_name] [options...]
neuracore data-daemon profile get [profile_name]
neuracore data-daemon profile list
```
Notes:
- Profile names are positional arguments, not `--name` flags.
- `profile update` can be run without a profile name to update the default profile.
- `profile get` can be run without a profile name to read the default profile.
- The default profile is protected and cannot be deleted.

Delete a named profile with:
```
neuracore data-daemon profile delete <name>
```
If you do not use a named profile, the daemon uses the default profile.
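Resolving the on-disk path for a profile is a simple join on the directory shown above. This sketch assumes the default profile is stored as `default.yaml` (the text does not state its filename), and `resolve_profile_path` is an illustrative helper, not the real API:

```python
from pathlib import Path

# Profiles live under ~/.neuracore/data_daemon/profiles/<name>.yaml
PROFILES_DIR = Path.home() / ".neuracore" / "data_daemon" / "profiles"

def resolve_profile_path(name: str = "default") -> Path:
    # Each profile is a YAML file named after the profile
    return PROFILES_DIR / f"{name}.yaml"
```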
These are the supported settings:
| Field | What it controls |
|---|---|
| `storage_limit` | Maximum local disk space the daemon should use for recordings (bytes). |
| `bandwidth_limit` | Maximum upload speed the daemon should use (bytes per second). |
| `path_to_store_record` | Folder where recordings are stored. |
| `num_threads` | Number of worker threads used by the daemon. |
| `keep_wakelock_while_upload` | Whether to keep the machine awake during uploads (where supported). |
| `offline` | If enabled, uploading is disabled and data is only stored locally. |
| `api_key` | API key used for authenticating the daemon. |
| `current_org_id` | Which organisation the daemon should operate under. |
For storage_limit and bandwidth_limit, you can pass a raw number (bytes) or a unit suffixed value.
Supported units:
- `b`
- `k` or `kb`
- `m` or `mb`
- `g` or `gb`
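The unit rules above can be parsed with a small sketch like this. Whether the units are binary multiples (1024-based) or decimal is not specified in the text; binary is an assumption here, and `parse_size` is an illustrative helper, not the CLI's own parser:

```python
# Suffix factors for b, k/kb, m/mb, g/gb (binary multiples assumed)
_UNITS = {"b": 1, "k": 1024, "kb": 1024, "m": 1024**2, "mb": 1024**2,
          "g": 1024**3, "gb": 1024**3}

def parse_size(value: str) -> int:
    """Parse '2gb', '50mb', or a raw byte count like '500000000'."""
    text = value.strip().lower()
    # Try longer suffixes first so 'gb' matches before 'b'
    for suffix, factor in sorted(_UNITS.items(), key=lambda kv: -len(kv[0])):
        if text.endswith(suffix):
            return int(float(text[: -len(suffix)]) * factor)
    return int(text)  # no suffix: raw byte count
```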
Examples:
```
--storage-limit 500000000
--storage-limit 2gb
--bandwidth-limit 50mb
```
When the daemon resolves its configuration, this is the order:
1. Built-in defaults (used if nothing is provided)
2. Profile YAML (if you choose a profile)
3. Environment variables (optional overrides)
4. CLI values (explicit values you pass on the command line)
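A minimal sketch of this layering, assuming later sources override earlier ones; the keys and default values here are illustrative, not the daemon's own:

```python
import os

# Illustrative precedence sketch: defaults < profile < environment < CLI
DEFAULTS = {"num_threads": 1, "offline": "false"}

def resolve_config(profile: dict, cli: dict, env=None) -> dict:
    env = os.environ if env is None else env
    # Pick up NCD_* overrides for any known setting
    env_layer = {k: env[f"NCD_{k.upper()}"]
                 for k in DEFAULTS if f"NCD_{k.upper()}" in env}
    # Later dicts win in a merge, matching the precedence order
    return {**DEFAULTS, **profile, **env_layer, **cli}
```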
You can override settings using environment variables. This is useful in CI, containers, or when you do not want to edit a profile file.
Supported environment variables:
| Setting | Environment variable |
|---|---|
| `storage_limit` | `NCD_STORAGE_LIMIT` |
| `bandwidth_limit` | `NCD_BANDWIDTH_LIMIT` |
| `path_to_store_record` | `NCD_PATH_TO_STORE_RECORD` |
| `num_threads` | `NCD_NUM_THREADS` |
| `keep_wakelock_while_upload` | `NCD_KEEP_WAKELOCK_WHILE_UPLOAD` |
| `offline` | `NCD_OFFLINE` |
| `api_key` | `NCD_API_KEY` |
| `current_org_id` | `NCD_CURRENT_ORG_ID` |
Boolean variables treat these values as true: `1`, `true`, `yes`, `y`
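A one-line sketch of that truthy check; case-insensitive matching is an assumption, and `env_bool` is an illustrative name:

```python
def env_bool(value: str) -> bool:
    # Truthy values per the list above: 1, true, yes, y
    return value.strip().lower() in {"1", "true", "yes", "y"}
```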
Examples:
```
export NCD_STORAGE_LIMIT=3gb
export NCD_OFFLINE=true
neuracore data-daemon launch
```
```
export NCD_PATH_TO_STORE_RECORD=/mnt/data/records
export NCD_NUM_THREADS=4
neuracore data-daemon launch --background
```
These variables control where the daemon runtime artifacts live:
| Purpose | Environment variable | Default |
|---|---|---|
| PID file path | `NEURACORE_DAEMON_PID_PATH` | `~/.neuracore/daemon.pid` |
| SQLite DB path | `NEURACORE_DAEMON_DB_PATH` | `~/.neuracore/data_daemon/state.db` |
| Recordings root | `NEURACORE_DAEMON_RECORDINGS_ROOT` | sibling of DB path (`<db_dir>/recordings`) |
| Profile for launch/auto-start | `NEURACORE_DAEMON_PROFILE` | unset |
| Enable debug mode | `NDD_DEBUG` | `false` |
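The sibling-of-DB default for the recordings root can be sketched as follows; `resolve_recordings_root` is an illustrative helper, not the daemon's real function:

```python
import os
from pathlib import Path

def resolve_recordings_root(env=None) -> Path:
    """Recordings root: explicit env var, else <db_dir>/recordings."""
    env = os.environ if env is None else env
    db_path = Path(env.get("NEURACORE_DAEMON_DB_PATH",
                           Path.home() / ".neuracore" / "data_daemon" / "state.db"))
    explicit = env.get("NEURACORE_DAEMON_RECORDINGS_ROOT")
    return Path(explicit) if explicit else db_path.parent / "recordings"
```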
Recommended for containers/dev environments:
```
export NEURACORE_DAEMON_DB_PATH=/workspaces/neuracore/data_daemon_state.db
export NEURACORE_DAEMON_RECORDINGS_ROOT=/workspaces/neuracore/recordings
```
Recommended upload concurrency:
- Most machines: `5-10`
- Start at `5`, increase only if CPU/network/disk are stable
- Very high values can increase retries, memory pressure, and shutdown latency
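To see why a bounded value matters, here is a small sketch that caps in-flight uploads with a semaphore at the suggested starting value of 5; the upload body is a stand-in for a real network transfer, not the daemon's implementation:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_UPLOADS = 5  # the suggested starting point
_slots = threading.Semaphore(MAX_CONCURRENT_UPLOADS)
_lock = threading.Lock()
active = peak = 0

def upload(item: int) -> None:
    global active, peak
    with _slots:  # blocks once 5 uploads are in flight
        with _lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.001)  # stand-in for the network transfer
        with _lock:
            active -= 1

with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(upload, range(50)))

# peak never exceeds MAX_CONCURRENT_UPLOADS, even with 16 worker threads
```

Raising the cap trades memory and retry pressure for throughput, which is why starting low and increasing gradually is the safer path.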
```
neuracore data-daemon profile create <name>
```
Example:
```
neuracore data-daemon profile create laptop
```
Update a named profile:
```
neuracore data-daemon profile update <name> [--storage-limit <bytes|unit>] [--bandwidth-limit <bytes|unit>] [--storage-path <path>] [--num-threads <n>] [--max-concurrent-uploads <n>] [--wakelock|--no-wakelock] [--offline|--online] [--api-key <key>] [--current-org-id <org_id>]
```
Update the default profile:
```
neuracore data-daemon profile update [--storage-limit <bytes|unit>] [--bandwidth-limit <bytes|unit>] [--storage-path <path>] [--num-threads <n>] [--max-concurrent-uploads <n>] [--wakelock|--no-wakelock] [--offline|--online] [--api-key <key>] [--current-org-id <org_id>]
```
Example:
```
neuracore data-daemon profile update laptop --storage-limit 2gb --offline
```
Describe a profile:
```
neuracore data-daemon profile get [profile_name]
```
Examples:
```
neuracore data-daemon profile get high-bandwidth
neuracore data-daemon profile get low-bandwidth
neuracore data-daemon profile get
```
List profiles:
```
neuracore data-daemon profile list
```
Delete a profile:
```
neuracore data-daemon profile delete <name>
```
Notes:
- The profile name is required.
- The default profile cannot be deleted.
```
neuracore data-daemon launch [--profile <name>] [--background]
```
Examples:
```
neuracore data-daemon launch
neuracore data-daemon launch --profile laptop
neuracore data-daemon launch --profile laptop --background
```
Check status:
```
neuracore data-daemon status
```
Stop the daemon:
```
neuracore data-daemon stop
```
Set `offline: true` in your profile, then launch the daemon with that profile as usual. Record normally; all data is stored locally. When you have internet access again, relaunch the daemon without offline mode and it will automatically upload your recordings to Neuracore.
```
# Set offline mode
neuracore data-daemon profile update my_profile --offline

# Record offline
neuracore data-daemon launch --profile my_profile

# Back online, disable offline mode and relaunch
neuracore data-daemon profile update my_profile --online
neuracore data-daemon launch --profile my_profile
```
For multi-node offline setups, collect your data using a data distribution system like ROS across multiple nodes, then use a single node to import your collected data into Neuracore.
You tried to launch it while it is already running.
Try:
```
neuracore data-daemon status
neuracore data-daemon stop
neuracore data-daemon launch
```
Run it in the foreground so you can see the output:
```
neuracore data-daemon launch
```
If it still fails, check your profiles:
```
neuracore data-daemon profile list
neuracore data-daemon profile get
neuracore data-daemon profile get <name>
```
A common cause is trying to launch with `offline: false` and no valid `api_key`.
`neuracore data-daemon launch --background` currently confirms that the subprocess started, but it may still exit shortly afterward during bootstrap, for example if authentication fails.
If background launch appears successful but status later shows the daemon is not running, rerun in the foreground:
```
neuracore data-daemon launch
```
The recording encoder selects its backend at runtime:
- Uses the `ffmpeg` CLI when `ffmpeg` is available on `PATH`
- Falls back to PyAV when `ffmpeg` is unavailable or fails to initialize

Quick check:
```
ffmpeg -version
```
If this command succeeds, the daemon will use the FFmpeg backend for new recordings.
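That availability check is essentially a `PATH` lookup. A sketch with an injectable `which` function so the choice is testable; `choose_backend` is an illustrative helper, not the daemon's real selection code:

```python
import shutil

def choose_backend(which=shutil.which) -> str:
    """Prefer the ffmpeg CLI when it is on PATH, else fall back to PyAV."""
    return "ffmpeg" if which("ffmpeg") else "pyav"
```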
If startup logs mention migration failures:
- Verify the daemon is using the DB you expect: `echo "$NEURACORE_DAEMON_DB_PATH"`
- Ensure the process has write permission to the DB directory and recordings root.
- Start in the foreground and read the migration logs: `neuracore data-daemon launch`
- If migration fails repeatedly, stop the daemon and keep a backup copy of the DB before retrying.
Repeated Ctrl+C while shutdown is already in progress can interrupt cleanup.
Recommended:
- Press `Ctrl+C` once, then wait for shutdown to complete
- For normal operation, use `neuracore data-daemon stop`