Deeper coverage of selected modules and patterns.
```javascript
ow.loadServer();
var sch = new ow.server.scheduler();
var id = sch.addEntry("*/10 * * * * *", () => log('tick'), true); // cron expression, function, waitForFinish

// Modify the entry later
id = sch.modifyEntry(id, "*/30 * * * * *", () => log('slower'), true);
```
Used internally by periodic oJobs with cron expressions.
```javascript
ow.loadFormat();
var info = ow.format.cron.howManyAgo("*/5 * * * * *", Date.now() - 20000);
// info.isDelayed, info.missedExecutions
```
Useful for custom reliability checks beyond the built-in cronCheck.
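For fixed-period expressions, the missed-execution arithmetic reduces to integer division. A plain-JavaScript sketch of the idea (`missedExecutions` is a hypothetical helper, not the ow.format.cron implementation):

```javascript
// How many fixed-period executions were missed since a last-run timestamp.
// periodMs is the cron period in milliseconds (e.g. 5000 for "*/5 * * * * *").
function missedExecutions(lastRunMs, nowMs, periodMs) {
  var elapsed = nowMs - lastRunMs;
  if (elapsed < periodMs) return 0;       // still within one period: nothing missed
  return Math.floor(elapsed / periodMs);  // whole periods elapsed since the last run
}

missedExecutions(0, 20000, 5000);  // 20s ago with a 5s period → 4 missed ticks
```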
- `mem` — in-memory
- `mvs` — persistent (MapDB storage engine)
- `big` — peer-synchronizable
Pattern: create, expose, peer, cluster.
```javascript
$ch('events').create();
$ch('events').set({ id: 1 }, { msg: 'hello' });
$ch('events').subscribe((ch, op, k, v) => log(op + ':' + stringify(k)));
```
Declared via ojob.channels.clusters, enabling node discovery and replication. Each cluster entry initializes an ow.server.cluster with periodic verification; combine with persistent channel types for fault tolerance.
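The set/subscribe contract above can be mimicked in plain JavaScript to show the (channel, op, key, value) callback shape — a sketch of the semantics only, not the ow.ch implementation:

```javascript
// Minimal in-memory store with ow.ch-style (channel, op, key, value) callbacks.
function MiniChannel(name) {
  this.name = name;
  this.data = new Map();  // keys are stringified, since channels use object keys
  this.subs = [];
}
MiniChannel.prototype.subscribe = function(fn) { this.subs.push(fn); };
MiniChannel.prototype.set = function(k, v) {
  this.data.set(JSON.stringify(k), v);
  this.subs.forEach(fn => fn(this.name, "set", k, v));
};
MiniChannel.prototype.unset = function(k) {
  this.data.delete(JSON.stringify(k));
  this.subs.forEach(fn => fn(this.name, "unset", k, undefined));
};

var events = new MiniChannel("events");
var seen = [];
events.subscribe((ch, op, k, v) => seen.push(op + ":" + JSON.stringify(k)));
events.set({ id: 1 }, { msg: "hello" });  // seen → ['set:{"id":1}']
```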
```javascript
ow.loadSec();
ow.sec.openMainSBuckets('masterSecret');
var bucket = $sec('repo', 'bucket', 'lock', 'masterSecret');
bucket.setSecret('bucket', 'lock', 'apiKey', { key: 'abc', created: new Date() });
var s = bucket.getSecret('bucket', 'lock', 'apiKey');
```
Supports multiple repos, lock-based access and encryption at rest.
```javascript
ow.loadAI();
var lr = ow.ai.regression().linear([[0,1],[1,3],[2,5]]);
log(lr.string);
```
Combine with the (llm) built-in job for integrated prompt workflows; set OAF_MODEL or use provider-specific configs.
See also: ow-ai-gpttypes.md for provider wrappers, standardized interfaces and implementation notes.
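Linear fitting of this kind is ordinary least squares. A plain-JavaScript sketch of the math (an illustration only, not the ow.ai implementation):

```javascript
// Fit y = m*x + b to [x, y] pairs by ordinary least squares.
function leastSquares(points) {
  var n = points.length, sx = 0, sy = 0, sxy = 0, sxx = 0;
  points.forEach(p => { sx += p[0]; sy += p[1]; sxy += p[0] * p[1]; sxx += p[0] * p[0]; });
  var m = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // slope
  var b = (sy - m * sx) / n;                          // intercept
  return { m: m, b: b };
}

leastSquares([[0, 1], [1, 3], [2, 5]]);  // { m: 2, b: 1 } → y = 2x + 1
```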
Use server mode for throughput:
```javascript
ow.loadPython();
ow.python.startServer();
for (var i = 0; i < 100; i++) ow.python.exec("x = " + i + "*2", {}, []);
ow.python.stopServer();
```
Maintain state across calls with the execPM persistent map.
Blend $tb with external cancel signals:
```javascript
var cancelled = false;
setTimeout(() => cancelled = true, 3000);

var result = $tb(() => {
  while (!cancelled) { /* work */ }
}).timeout(10000).stopWhen(() => cancelled).exec();
```
Add namespaced helpers:
```javascript
ow.loadTemplate();
ow.template.addHelpers('x', { upper: s => s.toUpperCase() });
print(templify('Hi {{x_upper name}}', { name: 'dev' }));
```
Set `ojob.log.format: json` or the environment variable OJOB_JSONLOG=true. Combine with an external log forwarder reading stderr. Include correlation IDs by inserting them into args early and referencing them in templates.
`_$(value, 'label').toNumber().isNumber().between(1, 100).default(10)` provides readable, declarative constraints. Attach to job check.in and check.out for consistent boundaries.
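The fluent semantics can be mimicked in plain JavaScript — a sketch of the contract only (`check` is a hypothetical stand-in, not the actual _$ implementation):

```javascript
// Minimal fluent checker mirroring the _$ chain shape: each step either
// passes the value along, throws with the label, or supplies a default.
function check(value, label) {
  label = label || "value";
  return {
    toNumber: function() { if (value !== undefined) value = Number(value); return this; },
    isNumber: function() {
      if (value !== undefined && isNaN(value)) throw new Error(label + " is not a number");
      return this;
    },
    between: function(min, max) {
      if (value !== undefined && (value < min || value > max))
        throw new Error(label + " must be between " + min + " and " + max);
      return this;
    },
    default: function(d) { return value === undefined ? d : value; }
  };
}

check("42", "port").toNumber().isNumber().between(1, 100).default(10);      // → 42
check(undefined, "port").toNumber().isNumber().between(1, 100).default(10); // → 10
```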
Favor built-in job shortcuts (if), (repeat), (each), (parallel) to minimize JS code, enabling easier reasoning by humans & LLMs while improving change auditability.
- Reduce startup: set `ojob.conAnsi: false` (skip terminal probing)
- Limit thread count: `ojob.numThreads`
- Adjust parallel heuristics: `ojob.flags.PFOREACH.*`
- Prefer `execRequire` for large reusable code blocks vs inline duplicates
- `$do` – queue work on the standard ForkJoin-backed pool and receive an `oPromise` for fluent `.then`/`.catch` composition. The resolver passed into your function can resolve with returned values or explicit `resolve()` calls, while thrown errors or `reject()` calls route to the rejection chain.
- `$doV` – same contract as `$do` but targets a virtual-thread-per-task executor, so launching many concurrent tasks will not consume native threads when the JVM supports Project Loom virtual threads.
- Coordination helpers – mix `$doAll`/`$doFirst` (wrappers over `oPromise.all()`/`.race()`) to wait for all tasks or the first completion, enabling fan-out/fan-in patterns without manual synchronization primitives.
- Cancellation – call `.cancel()` on any `$do`/`$doV` promise to interrupt the associated thread (mirroring the Threads plugin) and drive the chain into the rejection path for cleanup.
Example fan-out flow using virtual threads:
```javascript
var tasks = [url1, url2, url3].map(url =>
  $doV(() => httpGet(url))
);

$doAll(tasks)
  .then(results => log("Fetched: " + results.length))
  .catch(err => logErr("HTTP error: " + err));
```
Combine with $tb or custom cancellation logic for cooperative shutdown when outstanding promises should be abandoned.
OpenAF provides comprehensive metrics capabilities through ow.metrics and integrated oJob telemetry:
```javascript
// Load metrics module
ow.loadMetrics();

// Add custom metrics
ow.metrics.add("customGauge", () => ({ value: getCustomValue() }));
ow.metrics.add("requestCounter", () => ({ value: $get("requestCount") || 0 }));

// Start built-in collectors (CPU, memory, etc.)
ow.metrics.startCollecting();

// Get all metrics
var allMetrics = ow.metrics.getAll();

// Get specific metrics
var someMetrics = ow.metrics.getSome(["customGauge", "mem"]);
```

```yaml
ojob:
  metrics:
    # Passive metrics (HTTP endpoint)
    passive: true
    port   : 9101
    uri    : "/metrics"
    host   : "0.0.0.0"
    # Active metrics (push to external systems)
    add:
      customMetric: | #js
        return { value: Math.random() * 100 }
      processedItems: | #js
        return { value: $get("itemsProcessed") || 0 }
    # Push to OpenMetrics/Prometheus
    openmetrics:
      url    : "http://pushgateway:9091/metrics/job/myapp"
      period : 30000
      metrics: ["customMetric", "mem", "cpu"]
    # Push to nAttrMon
    nattrmon:
      url    : "http://nattrmon:8080/cvals"
      period : 60000
      metrics: ["customMetric", "processedItems"]
    # Collect to channel for historical analysis
    collect:
      active: true
      ch    : "metricsHistory"
      period: 10000
```

```javascript
// Load server module
ow.loadServer();

// Passive telemetry endpoint
ow.server.telemetry.passive(9102, "/health", true, "myapp", {
  "requests_total": {
    text: "Total requests",
    help: "Number of requests processed",
    type: "counter"
  }
});

// Active telemetry pushing
ow.server.telemetry.active(function() {
  // Custom telemetry sender
  var metrics = ow.metrics.getAll();
  // Send to your monitoring system
  log("Sending metrics: " + JSON.stringify(metrics));
}, 30000);

// Send to nAttrMon
var sender = ow.server.telemetry.send2nAttrMon(
  "http://nattrmon:8080/cvals",
  "myapp",
  ["cpu", "mem", "customMetric"]
);
ow.server.telemetry.active(sender, 60000);
```

```javascript
// Convert metrics to OpenMetrics format
var openMetrics = ow.metrics.fromObj2OpenMetrics(
  ow.metrics.getAll(),  // metrics object
  "myapp",              // prefix
  new Date(),           // timestamp
  {                     // help text mapping
    "cpu": { text: "CPU usage percentage", type: "gauge" },
    "mem": { text: "Memory usage bytes", type: "gauge" }
  }
);

// Expose via HTTP server
var httpd = ow.server.httpd.start(9090);
ow.server.httpd.route(httpd, "/metrics", function(req) {
  return httpd.replyOKText(openMetrics);
});
```

```yaml
ojob:
  channels:
    create:
      - name: metricsHistory
        type: mvs
        options:
          file: metrics.db
  metrics:
    collect:
      active: true
      ch    : metricsHistory
      period: 5000                            # Collect every 5 seconds
      some  : ["mem", "cpu", "customMetric"]  # Only specific metrics
```

```yaml
# Prometheus integration
ojob:
  metrics:
    passive: true
    port   : 9101
    uri    : "/metrics"
    openmetrics: true
    prefix : "myapp"

# Custom alerting
jobs:
  - name: "Health Check"
    type: periodic
    typeArgs:
      cron: "*/30 * * * * *"   # Every 30 seconds
    exec: |
      var metrics = ow.metrics.getAll();
      if (metrics.mem.value > 1000000000) { // 1GB
        log("HIGH MEMORY USAGE: " + ow.format.toBytesAbbreviation(metrics.mem.value));
      }
```
OpenAF provides comprehensive AI capabilities through ow.ai for both traditional ML and modern LLM integration.
```javascript
// Load AI module
ow.loadAI();

// Create a neural network
var nn = new ow.ai.network({
  type: "perceptron",
  args: [2, 3, 1]   // 2 inputs, 3 hidden, 1 output
});

// Train the network (XOR)
nn.train([
  { input: [0,0], output: [0] },
  { input: [0,1], output: [1] },
  { input: [1,0], output: [1] },
  { input: [1,1], output: [0] }
]);

// Use the network
var result = nn.get([1,0]);  // Returns ~1 for XOR
```

```javascript
// Regression analysis
var regression = ow.ai.regression();
var data = [[0,1],[1,3],[2,5],[3,7]];  // x,y pairs

var linear = regression.linear(data);
log("Equation: " + linear.string);  // y = 2x + 1
log("R²: " + linear.r2);            // Coefficient of determination

// Other regression types
var polynomial = regression.polynomial(data, { order: 2 });
var exponential = regression.exponential(data);
```

```javascript
// Create statistical tracker
var stats = ow.ai.valuesArray(100);  // Keep last 100 values

// Add values
stats.push(85.2);
stats.push(87.1);
stats.push(92.3);

// Get statistics
log("Average: " + stats.movingAverage());
log("Deviation: " + stats.deviation());
log("Variance: " + stats.variance());
```

```javascript
// Create LLM client (OpenAI example)
var llm = ow.ai.gpt({
  type : "openai",
  key  : "your-api-key",
  url  : "https://api.openai.com/v1",
  model: "gpt-3.5-turbo"
});

// Simple prompt
var response = llm.prompt("Explain quantum computing in simple terms");
log(response);

// Conversation
llm.addSystemPrompt("You are a helpful coding assistant");
llm.addUserPrompt("How do I sort an array in JavaScript?");
var answer = llm.prompt();
```

```javascript
// OpenAI
var openai = ow.ai.gpt({
  type : "openai",
  key  : getEnv("OPENAI_KEY"),
  model: "gpt-4"
});

// Anthropic Claude
var claude = ow.ai.gpt({
  type : "anthropic",
  key  : getEnv("ANTHROPIC_KEY"),
  model: "claude-3-sonnet-20240229"
});

// Local Ollama
var ollama = ow.ai.gpt({
  type : "ollama",
  url  : "http://localhost:11434",
  model: "llama2"
});

// Google Gemini
var gemini = ow.ai.gpt({
  type : "gemini",
  key  : getEnv("GEMINI_KEY"),
  model: "gemini-2.5-flash"
});
```

```javascript
var llm = ow.ai.gpt({ type: "openai", key: "...", model: "gpt-3.5-turbo" });

// Register a tool
llm.setTool("getCurrentWeather",
  "Get the current weather for a location",
  {
    type: "object",
    properties: {
      location: { type: "string", description: "City name" },
      units   : { type: "string", enum: ["celsius", "fahrenheit"] }
    },
    required: ["location"]
  },
  function(params) {
    // Your weather API call here
    return { temperature: 22, condition: "sunny" };
  }
);

// Use the tool in conversation
var response = llm.prompt("What's the weather like in London?");
```

```javascript
var llm = ow.ai.gpt({ type: "openai", key: "...", model: "gpt-4-vision-preview" });

// Analyze an image
var description = llm.promptImage(
  "Describe what you see in this image",
  "/path/to/image.jpg",
  "high"   // detail level
);

// Generate images (OpenAI only)
var imageData = llm.promptImgGen("A sunset over mountains");
io.writeFileBytes("generated.png", imageData[0]);
```

```yaml
# Built-in LLM job
jobs:
  - name: "AI Analysis"
    from: "ojob llm"
    args:
      __llmPrompt: "Analyze this data and provide insights"
      __llmInPath: "data"
      __llmEnv   : "OPENAI_API_KEY"
      __llmOptions:
        type: "openai"
        model: "gpt-4"
        temperature: 0.3

  - name: "Data Processing"
    exec: |
      args.data = { sales: [100, 200, 150], region: "North" };
    to: ["AI Analysis"]
```

```javascript
// Conversation management
var llm = ow.ai.gpt({...});

// Save/restore conversations
var conversation = llm.getConversation();
$set("chatHistory", conversation);
// Later...
llm.setConversation($get("chatHistory"));

// JSON structured responses
var jsonResponse = llm.prompt(
  "Return product info as JSON",
  "gpt-3.5-turbo",
  0.1,   // low temperature for consistency
  true   // JSON flag
);
var product = JSON.parse(jsonResponse);

// Batch processing
var results = [];
["item1", "item2", "item3"].forEach(item => {
  results.push(llm.prompt("Analyze: " + item));
});
```

- API Key Management: use environment variables and `ow.sec` for secure key storage
- Error Handling: LLM calls can fail; wrap them in try/catch blocks
- Rate Limiting: implement delays between calls to respect API limits
- Cost Control: monitor usage and implement budget controls
- Prompt Engineering: use clear, specific prompts for better results
- Conversation Memory: manage conversation history to stay within token limits
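The conversation-memory point can be sketched in plain JavaScript: keep system messages, drop the oldest exchanges until a size budget is met. `trimConversation` and its character budget are hypothetical simplifications of real token counting:

```javascript
// Trim a chat history to a rough size budget, preserving system messages.
// maxChars approximates a token budget (real clients should count tokens).
function trimConversation(messages, maxChars) {
  var system = messages.filter(m => m.role === "system");
  var rest = messages.filter(m => m.role !== "system");
  var size = function(list) { return list.reduce((a, m) => a + m.content.length, 0); };
  while (rest.length > 0 && size(system) + size(rest) > maxChars) {
    rest.shift();  // drop the oldest non-system message first
  }
  return system.concat(rest);
}

var trimmed = trimConversation([
  { role: "system", content: "You are a helpful coding assistant" },
  { role: "user", content: "first question ..." },
  { role: "user", content: "second question" }
], 60);
// The system prompt always survives; the oldest user turns are dropped first.
```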
```javascript
// Example with error handling and retry
function robustLLMCall(prompt, maxRetries) {
  maxRetries = maxRetries || 3;
  var llm = ow.ai.gpt({...});
  for (var i = 0; i < maxRetries; i++) {
    try {
      return llm.prompt(prompt);
    } catch (e) {
      logWarn("LLM call failed (attempt " + (i + 1) + "): " + e.message);
      if (i < maxRetries - 1) sleep(2000);  // Wait before retry
    }
  }
  throw "LLM call failed after " + maxRetries + " attempts";
}
```
Combine integrity hashes, authorized domains and change-auditing flags. For local development, disable them with environment-variable toggles but keep production strict.
OpenAF's embedded HTTP server supports serving all resources and routes under a configurable path prefix. This is useful when deploying behind a reverse proxy that forwards requests under a subpath (e.g. /app).
Set the HTTPD_PREFIX flag before starting any server. Key "0" is the global default for all ports; use a specific port number string to override per port:
```javascript
// Set global prefix /app for all HTTP servers
__flags.HTTPD_PREFIX = { "0": "/app" };

// Or different prefixes per port
__flags.HTTPD_PREFIX = { "0": "", "8080": "/api", "9090": "/gui" };
```
Via oJob YAML:
```yaml
ojob:
  flags:
    HTTPD_PREFIX:
      "0": /app
```
ow.server.httpd provides helper functions to work with prefixes:
| Function | Description |
|---|---|
| `ow.server.httpd.getPrefix(httpdOrPort)` | Returns the normalized prefix for the given server or port |
| `ow.server.httpd.withPrefix(httpdOrPort, uri)` | Prepends the prefix to uri (skips absolute URLs) |
| `ow.server.httpd.stripPrefix(httpdOrPort, uri)` | Removes the prefix from a URI for internal routing |
| `ow.server.httpd.normalizePrefix(aPrefix)` | Normalizes a raw prefix string (ensures a leading /, no trailing /) |
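The normalization rules in the last row can be illustrated in plain JavaScript — a sketch of the documented behavior only; the exact edge cases of ow.server.httpd.normalizePrefix may differ:

```javascript
// Normalize a prefix: ensure a leading "/", strip trailing "/", empty stays empty.
function normalizePrefix(p) {
  if (p === undefined || p === null || p === "") return "";
  if (p.charAt(0) !== "/") p = "/" + p;                                       // ensure leading slash
  while (p.length > 1 && p.charAt(p.length - 1) === "/") p = p.slice(0, -1);  // no trailing slash
  return p === "/" ? "" : p;
}

normalizePrefix("app/");  // "/app"
```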
```javascript
ow.loadServer();
__flags.HTTPD_PREFIX = { "0": "/app" };
var httpd = ow.server.httpd.start(8080);

// Build a prefixed URL
var link = ow.server.httpd.withPrefix(httpd, "/about");  // "/app/about"

// Strip prefix for internal route matching
ow.server.httpd.route(httpd, ow.server.httpd.mapWithExistingFn(httpd, {
  "/about": function(req) {
    return httpd.replyOKText("About page");
  }
}), function(req) {
  // internalURI can drive custom matching before falling back
  var internalURI = ow.server.httpd.stripPrefix(httpd, req.uri);
  return httpd.replyNotFound();
});
```
All built-in GUI pages and static-file handlers automatically respect the configured prefix.
$mcp(aOptions) creates a Model Context Protocol (MCP) client for communicating with LLM tool servers.
| type | Description |
|---|---|
| `stdio` (default) | Spawn a local process; communicate over stdin/stdout |
| `remote` / `http` | HTTP JSON-RPC endpoint |
| `sse` | HTTP endpoint with Server-Sent Events responses |
| `ojob` | In-process oJob jobs exposed as MCP tools |
| `dummy` | Local in-memory stub for testing |
```javascript
// stdio MCP server
var client = $mcp({ type: "stdio", cmd: "my-mcp-server" });
client.initialize();
var tools = client.listTools();
var result = client.callTool("myTool", { param: "value" });

// Remote HTTP MCP server
var remote = $mcp({ type: "remote", url: "https://mcp.example.com/mcp" });
remote.initialize();
```
For remote/http/sse connections:
```javascript
// Static bearer token
var client = $mcp({
  type: "remote",
  url : "https://mcp.example.com/mcp",
  auth: { type: "bearer", token: "my-token" }
});

// OAuth2 client credentials
var client = $mcp({
  type: "remote",
  url : "https://mcp.example.com/mcp",
  auth: {
    type        : "oauth2",
    tokenURL    : "https://auth.example.com/oauth/token",
    clientId    : "my-client",
    clientSecret: "my-secret",
    scope       : "mcp:read mcp:write"
  }
});

// OAuth2 authorization_code (opens browser)
var client = $mcp({
  type: "remote",
  url : "https://mcp.example.com/mcp",
  auth: {
    type              : "oauth2",
    grantType         : "authorization_code",
    authURL           : "https://auth.example.com/authorize",
    tokenURL          : "https://auth.example.com/oauth/token",
    clientId          : "my-client",
    redirectURI       : "http://localhost:8080/callback",
    disableOpenBrowser: false   // set true to suppress browser launch
  }
});
```
OAuth2 token URLs can also be auto-discovered from the MCP server's OAuth 2.0 Protected Resource Metadata when tokenURL/authURL are omitted.
Prevent specific tools from appearing in listTools() or being called via callTool():
```javascript
var client = $mcp({
  type: "stdio",
  cmd : "my-mcp-server",
  blacklist: ["dangerousTool", "internalTool"]
});
client.initialize();
// listTools() will not include blacklisted tools
// callTool("dangerousTool", {}) throws an error
```
See also: ojob-security.md, openaf-flags.md, and the main references.