Thanks for taking the time to contribute to TerminusDB!
Before submitting a PR, please run make pr to make sure
linting and all tests pass. Failure results in a big fail message; success
ends with a final true. API tests require that the admin
password is root or that the environment variable
TERMINUS_ADMIN_PASSWD is set prior to invocation of terminusdb.
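For example, the variable can be exported before running the suite (a sketch; "my-secret" is a placeholder password, and make pr is commented out so the snippet stands alone):

```shell
# Export the admin password so the API tests can authenticate.
# "my-secret" is a placeholder; substitute your actual admin password.
export TERMINUS_ADMIN_PASSWD="my-secret"
echo "TERMINUS_ADMIN_PASSWD is set to: $TERMINUS_ADMIN_PASSWD"
# make pr   # run the full lint + test suite in a real checkout
```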
It is preferred that the integration tests are run with the test server
script ./tests/terminusdb-test-server.sh start. Starting the server
with --clean will wipe the storage directory. Run the tests with
# run plunit and mocha integration tests
make test
npx mocha tests/test/*.js
# run json plunit test suite only
make test SUITE='[json]'
Note about running tests from different locations:
Integration tests can be run from both the repository root and the tests directory:
# From repo root
npx mocha tests/test/*.js
CLI Test Wrapper: The CLI test files use helper functions (util.terminusdbScript() and util.servedPath()) that automatically detect the correct paths based on your current working directory. The underlying tests/terminusdb.sh wrapper script can run tests against either:
- A locally built executable (default: auto-detected relative to script location)
- A Docker container (set the TERMINUSDB_DOCKER_CONTAINER env var)
Note: Some tests verify that the git hash of the repository matches the git hash of the binary. If you get info_ok failures, rebuild with make dev first.
For rapid iteration during development, use the test server script:
# Start test server (builds if needed, reuses existing storage)
./tests/terminusdb-test-server.sh start
# Start with fresh storage (wipes previous data)
./tests/terminusdb-test-server.sh start --clean
# Check server status
./tests/terminusdb-test-server.sh status
# View logs
./tests/terminusdb-test-server.sh logs
# Quick restart (keeps storage, for code changes)
./tests/terminusdb-test-server.sh restart
# Restart with fresh storage
./tests/terminusdb-test-server.sh restart --clean
# Stop server
./tests/terminusdb-test-server.sh stop
# Stop server and remove all test data
./tests/terminusdb-test-server.sh clean
Benefits:
- Fast rebuild cycle: Only rebuilds Rust if sources changed with make dev
- Safe by default: Preserves storage unless the --clean flag is used
- Port conflict detection: Checks for existing processes on port 6363
- Quick readiness check: Server ready in < 10s
- Consistent: Same setup across all developers
- Background process: Runs in background with PID tracking
- Default credentials: admin/root (configurable with TERMINUSDB_ADMIN_PASS)
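The port-conflict check can be approximated with a few lines of shell (an illustrative sketch assuming lsof is available; the script's actual detection mechanism may differ):

```shell
# Check whether anything is already listening on the test server port.
# Assumption: lsof is installed; the real script may detect conflicts differently.
PORT=6363
if lsof -i ":$PORT" >/dev/null 2>&1; then
    PORT_STATUS="in use"
else
    PORT_STATUS="free"
fi
echo "port $PORT is $PORT_STATUS"
```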
Server Details:
- URL: http://127.0.0.1:6363
- User: admin
- Pass: root
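With those details you can probe the server from the command line (a sketch assuming the /api/info endpoint; adjust if your setup differs):

```shell
# Ask the test server for its info document using the default credentials above.
# Assumption: the /api/info endpoint exists on this build.
if curl -fsS -u admin:root http://127.0.0.1:6363/api/info >/dev/null 2>&1; then
    SERVER_STATUS="up"
else
    SERVER_STATUS="down"
fi
echo "test server is $SERVER_STATUS"
```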
If you need more control:
# Build Rust library and create development binary
# (see specific instructions for macOS further down)
make rust
# Start server manually with custom storage
./terminusdb serve --storage /tmp/my-test-db
Note: The test server script is preferred as it ensures consistency and prevents storage conflicts between test runs.
For quick Prolog-only tests without Rust:
swipl src/start.pl serve --memory
docker build -t terminusdb-dev .
# Prefer adding the --rm flag too for container clean up after it exits!
docker run -p 6363:6363 --name terminusdb-test terminusdb-dev
For faster development iterations, you can skip both Rust and Prolog tests:
docker build --build-arg SKIP_TESTS=true -t terminusdb-dev .
docker run -p 6363:6363 --name terminusdb-test terminusdb-dev
This significantly reduces build time but should only be used for development. Always run the full test suite (without SKIP_TESTS) before creating pull requests or deploying to production.
For local development on macOS, use the development build target which avoids code signing issues:
make dev
When changing Rust code, you must clean the Rust library first:
rm src/rust/librust.{dylib,so}
make dev
Why? The Rust library is cached and make dev won't automatically rebuild it if source files change. Removing the library forces a clean rebuild.
This creates a terminusdb binary that:
- Works immediately without Gatekeeper/code signing issues
- Dynamically links to SWI-Prolog libraries (no stripping)
- Rebuilds faster during development
- Requires SWI-Prolog to be installed (via Homebrew recommended)
Run the development binary:
./terminusdb help
./terminusdb serve
./terminusdb store init
For production or distribution, build the standalone binary:
make
This creates a standalone terminusdb binary (strips and embeds libraries), but requires one-time Gatekeeper approval:
- In Finder, navigate to the project directory
- Double-click the terminusdb file
- Click "OK" when macOS shows the security warning
- Right-click on terminusdb → "Open"
- Click "Open" to approve
After this one-time step, the binary will work from the command line.
Why? The standalone build strips and embeds SWI-Prolog libraries, invalidating their code signatures. macOS security requires manual approval for such binaries.
Alternatively, use the shell wrapper for development:
./terminusdb.sh help
This bypasses binary compilation entirely and runs via swipl directly.
- Understand in detail what the issue or enhancement is about
- If needed, build a small throw-away demo code to understand the logic clearly
- Create new PLUnit tests in the same file for swipl unit tests
- Run make dev to rebuild the binary and include the autoloaded modules
- Run the swipl unit tests using the commands below, so the modules load correctly
- Once the plunit tests pass, create integration tests for the parts that now work
- Run the integration tests with the mocha commands below
- Run the full test suite and fix lint errors
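The steps above can be collected into one helper function built from commands that appear elsewhere in this document (a sketch only; the dev_cycle name is invented for illustration):

```shell
# dev_cycle: one pass through the workflow above, using commands from this document.
# The function name is made up; it is not a project script.
dev_cycle() {
    make dev                                         # rebuild the binary with autoloaded modules
    swipl -g run_tests -t halt src/interactive.pl    # run the plunit unit tests
    ./tests/terminusdb-test-server.sh restart        # refresh the integration test server
    npx mocha tests/test/*.js --timeout 10000        # run the mocha integration tests
    make lint                                        # fix lint errors before pushing
}
```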
Run JavaScript tests from the repository root; do NOT cd into the tests directory, as that becomes a nuisance!
# Run all JavaScript tests
npx mocha tests/test/*.js --timeout 10000
# Run specific test file
npx mocha tests/test/graphql.js --timeout 10000
# Run two specific test files
npx mocha tests/test/graphql.js tests/test/decimal-precision.js
# Run with coverage
npm run test:coverage
Always run npx mocha from the repository root directory, not from within the tests/ directory. Using cd tests && in commands can cause terminal hanging issues in some workflows.
# Run all Prolog tests
swipl -g run_tests -t halt src/interactive.pl
# Run specific Prolog test module
swipl -g "run_tests(graphql_numeric_serialization)" -t halt src/interactive.pl
# Run a specific test, in a specific module
swipl -g "run_tests(woql:group_by_single_element_list_template)" -t halt src/interactive.pl
Before running JavaScript tests, ensure the test server is running:
# Start test server
./tests/terminusdb-test-server.sh start
# Check if server is ready
./tests/terminusdb-test-server.sh status
# Restart after code changes
./tests/terminusdb-test-server.sh restart
Typical development cycle:
# 1. Make code changes
# 2. Rebuild (clean Rust library if you changed Rust code)
# For Prolog-only changes:
make dev
# For Rust changes:
rm src/rust/librust.{dylib,so}; make dev
# 3. Restart test server
./tests/terminusdb-test-server.sh restart
# 4. Run relevant tests (from repo root!)
npx mocha tests/test/graphql.js --timeout 10000
TerminusDB uses a structured JSON logging system defined in src/core/util/json_log.pl. Use these functions for consistent logging:
Logging Functions:
:- use_module(core(util)). % Imports json_log module
% Basic logging (simple message)
json_log_debug(Message).
json_log_info(Message).
json_log_warning(Message).
json_log_error(Message).
% Formatted logging (with format string)
json_log_debug_formatted('Variable value: ~q', [Value]).
json_log_info_formatted('Processing ~w items', [Count]).
json_log_warning_formatted('Deprecated function ~w called', [FuncName]).
json_log_error_formatted('Failed at ~w with ~q', [Location, Error]).
Direct stderr output:
% For quick debug output (not structured)
format(user_error, 'Debug: ~q~n', [SomeValue]).
Log Levels:
- debug - Development debugging (disabled by default in production)
- info - General informational messages
- warning - Warning conditions
- error - Error conditions
Viewing Logs:
# View test server logs
./tests/terminusdb-test-server.sh logs
# Or directly
tail -f tests/.terminusdb-test.log
TerminusDB provides built-in logging functions in Rust that integrate with the server's structured JSON logging system.
Built-in Logging Functions:
The logging module (src/rust/terminusdb-community/src/log.rs) provides five severity levels:
use crate::log::{log_debug, log_info, log_notice, log_warning, log_error};
// Inside a predicate function with a context parameter
predicates! {
#[module("$my_module")]
semidet fn my_predicate(context, input_term, output_term) {
// Simple logging (pass context and message)
log_debug!(context, "Starting processing");
log_info!(context, "Processing item");
log_warning!(context, "Potential issue detected");
log_error!(context, "Operation failed");
// Formatted logging (like Rust's format! macro)
let value = input_term.get::<i64>()?;
log_debug!(context, "Input value: {}", value);
log_info!(context, "Processing {} items", count);
log_warning!(context, "Value {} exceeds threshold", value);
log_error!(context, "Failed to process {} with error: {:?}", id, error);
output_term.unify("result")
}
}
How It Works:
- The Rust logging functions call Prolog's json_log:json_log/2 predicate
- Messages appear in the server log with proper timestamps, severity, and metadata
- Log output respects the TERMINUSDB_LOG_LEVEL environment variable
- Log format respects TERMINUSDB_LOG_FORMAT (text or json)
Logging Levels:
- log_debug! - Development debugging (only shown when TERMINUSDB_LOG_LEVEL=DEBUG)
- log_info! - General informational messages (default level)
- log_notice! - Notable but normal conditions
- log_warning! - Warning conditions
- log_error! - Error conditions
Controlling Log Output:
# Set log level (DEBUG, INFO, NOTICE, WARNING, ERROR)
export TERMINUSDB_LOG_LEVEL=DEBUG
# Set log format (text or json)
export TERMINUSDB_LOG_FORMAT=text
# Start server with debug logging
TERMINUSDB_LOG_LEVEL=DEBUG ./tests/terminusdb-test-server.sh restart
# View logs
./tests/terminusdb-test-server.sh logs
Example Usage in Context:
predicates! {
#[module("$json_preserve")]
semidet fn json_read_string_preserving_numbers(context, json_string_term, result_term) {
log_debug!(context, "json_read_string_preserving_numbers called");
let json_string: PrologText = json_string_term.get()?;
let json_str = json_string.to_string();
log_debug!(context, "Parsing JSON string of length: {}", json_str.len());
match serde_json::from_str::<Value>(&json_str) {
Ok(parsed) => {
log_info!(context, "Successfully parsed JSON");
// ... process result
result_term.unify(dict)
}
Err(e) => {
log_error!(context, "JSON parse error: {:?}", e);
Err(PrologError::Exception)
}
}
}
}
Alternative: File-Based Logging (When Built-in Logging Isn't Available):
For Rust code that doesn't have access to a Prolog context (e.g., standalone functions), use temporary file logging:
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new()
.create(true)
.append(true)
.open("/tmp/debug_output.log")
{
let _ = writeln!(f, "Debug message: {:?}", some_value);
}
// View output
// tail -f /tmp/debug_output.log
Best Practices:
- Always use built-in logging when you have a context parameter
- Use appropriate severity levels - avoid log_error! for non-errors
- Include context in messages - function name, key identifiers
- Remove debug logging before committing (or use INFO+ level for permanent logs)
- File-based logging should only be used when context isn't available
GraphQL queries execute in Rust and can be debugged using the file logging pattern above. Key files:
- src/rust/terminusdb-community/src/graphql/schema.rs - Schema resolution and field extraction
- src/rust/terminusdb-community/src/graphql/mod.rs - Query execution
- src/rust/terminusdb-community/src/graphql/filter.rs - Filter logic
Tracing function execution:
my_function(Input, Output) :-
json_log_debug_formatted('my_function called with: ~q', [Input]),
do_something(Input, Intermediate),
json_log_debug_formatted('Intermediate result: ~q', [Intermediate]),
final_step(Intermediate, Output),
json_log_debug_formatted('Final output: ~q', [Output]).
Debugging data transformations in Rust:
With context (preferred):
predicates! {
semidet fn process_data(context, input_term, output_term) {
let input: Data = input_term.get()?;
log_debug!(context, "Processing input: {:?}", input);
let result = transform(&input)?;
log_debug!(context, "Transform result: {:?}", result);
output_term.unify(result)
}
}
Without context (fallback):
fn helper_function(input: &Data) -> Result<Output> {
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new()
.create(true)
.append(true)
.open("/tmp/process_data.log")
{
let _ = writeln!(f, "INPUT: {:?}", input);
}
let result = transform(input)?;
if let Ok(mut f) = std::fs::OpenOptions::new()
.create(true)
.append(true)
.open("/tmp/process_data.log")
{
let _ = writeln!(f, "OUTPUT: {:?}", result);
}
Ok(result)
}
✅ Use built-in logging: Always prefer log_debug!, log_info!, etc. macros in Rust when you have a context parameter. They integrate seamlessly with the server's logging system.
Set TERMINUSDB_LOG_LEVEL=DEBUG to see debug messages. The default level is INFO.
Before submitting a change, please run make && ./terminusdb test to make sure that all tests pass. Failure should result in a big fail message, and success with a final true. API tests will require that the admin password is root or that the environment variable TERMINUSDB_ADMIN_PASS is set prior to invocation of terminusdb.
Please send a GitHub Pull Request to the main branch.
Please write clear log messages with your commits. Small changes can be a one line message, but big changes should have a descriptive paragraph with a newline after the title in the message.
It should preferably look something like this:
$ git commit -m "My change title
This is a paragraph describing my change."
One of the easier ways to set up a development environment is by forking the git repository, cloning it, and checking out the dev branch. Docker is a prerequisite for setting it up this way; an alternative is following the instructions in BUILD.md.
- Make a fork on GitHub
- Clone the repository with git clone git@github.com:[your_username]/terminusdb.git
- Go to the directory with cd terminusdb
- Run docker run -it --mount type=bind,source="$(pwd)",target=/app/terminusdb -p 6363:6363 --rm terminusdb/terminusdb:dev inside the terminusdb directory. It will mount the current sources to the Docker container.
- Run make. inside the swipl console after you change the code.
make clean && make dev && tests/terminusdb-test-server.sh restart --clean && make test-int && swipl -g run_tests -t halt src/interactive.pl
We have a house style for Prolog, especially for conditionals. Try to copy what you see.
When handling input from external sources (client libraries, user queries, API requests), use explicit type checking rather than wildcards or permissive pattern matching. This prevents type confusion bugs and potential security vulnerabilities.
For example, when matching typed literals, be explicit about the expected type:
% Good - explicit type checking
Key = Value^^'xsd:string'
% Bad - accepts any type, potential security issue
Key = Value^^_
Always validate assumptions about data types and provide clear error messages when validation fails. This makes bugs easier to diagnose and prevents silent failures that can lead to incorrect results or security issues downstream.
- Leverage TDD where relevant: add failing unit and integration tests first to verify assumptions of how the thing should work
- Fix the failing tests, and ensure no regressions in other tests
- Complete the PR, write an informative PR summary to help the reviewer
- Run the full unit test suites
- Run the integration test suites
- Run the lint tool
rm src/rust/librust.*
make dev
./tests/terminusdb-test-server.sh restart
make lint && make lint-mocha && make test && make test-int
No tabs please! 4 spaces for indentation. This is non-negotiable.