The server module provides the backend infrastructure for api-ape's WebSocket-based Remote Procedure Events (RPE) system. It transforms a standard Node.js or Bun HTTP server into a real-time API server where client function calls are automatically routed to controller files.
Key capabilities:
- Auto-routing — Drop JavaScript files in a folder, they become API endpoints automatically
- Real-time broadcasts — Built-in `broadcast()` and `broadcastOthers()` for pushing events to clients
- Connection lifecycle — Hooks for `onConnect`, `onDisconnect`, `onReceive`, `onSend`, `onError`
- Binary transfers — Transparent file upload/download with streaming support
- HTTP fallback — Long-polling transport when WebSocket is blocked
- Multi-runtime — Works on Node.js, Bun, and Deno
- Zero dependencies — Built-in RFC 6455 WebSocket implementation (or uses native when available)
- 🌲 Forest — Distributed mesh for horizontal scaling across multiple servers
The server integrates with Express.js, raw Node.js HTTP servers, and Bun's native server.
Contributing? See `files.md` for directory structure and file descriptions.
```bash
npm i api-ape
```

```js
// CommonJS
const api = require('api-ape') // Client proxy (default)
const { ape } = require('api-ape') // Server initializer

// ESM
import api, { ape } from 'api-ape'
```

```js
const { createServer } = require('http')
const { ape } = require('api-ape')

const server = createServer()
ape(server, {
  where: 'api', // Controller directory
  onConnect: (socket, req, send) => ({
    embed: { userId: req.session?.userId },
    onDisconnect: () => console.log('Client left')
  })
})
server.listen(3000)
```

Your server can connect to another api-ape server as a client. The API is 100% identical to browser usage:
```js
const api = require('api-ape')
const { ape } = require('api-ape')

// Start your own server
ape(server, { where: 'api' })

// Connect to another api-ape server
api.connect('other-server', 3000) // → ws://other-server:3000/api/ape

// Now use it exactly like browser code!
const result = await api.hello('World')
api.on('message', ({ data }) => console.log(data))
```

Or set the connection URL via environment variable:

```bash
APE_SERVER=ws://other-server:3000/api/ape node app.js
```

This enables server-side microservice patterns while keeping the familiar api-ape interface.
| Option | Type | Description |
|---|---|---|
| `where` | `string` | Directory containing controller files |
| `onConnect` | `function` | Connection lifecycle hook |
| `fileTransferOptions` | `object` | Binary transfer settings (see below) |
| `authFramework` | `object` | Authentication framework instance (see below) |
| `authMiddleware` | `object` | Authorization middleware instance (see below) |
```js
ape(app, {
  where: 'api',
  fileTransferOptions: {
    startTimeout: 60000,    // Time to wait for transfer start (ms)
    completeTimeout: 60000  // Time after start before cleanup (ms)
  }
})
```

| Property | Description |
|---|---|
| `this.broadcast(type, data)` | Send to ALL connected clients |
| `this.broadcastOthers(type, data)` | Send to all EXCEPT the caller |
| `this.publish(channel, data)` | Send to all subscribers of a channel |
| `this.clientId` | Unique ID of the calling client (generated by api-ape) |
| `this.sessionId` | Session ID from cookie (set by outer framework, may be null) |
| `this.req` | Original HTTP request |
| `this.socket` | WebSocket instance |
| `this.agent` | Parsed user-agent |
| `this.isAuthenticated` | Whether socket is authenticated (requires auth config) |
| `this.authTier` | Current authentication tier 0-3 (requires auth config) |
| `this.principal` | User info: `{ userId, roles, permissions }` (requires auth config) |
| `this.requiresTier(n)` | Check if socket meets minimum tier (requires auth config) |
```js
onConnect(socket, req, send) {
  return {
    embed: { ... }, // Values available as this.* in controllers
    onReceive: (queryId, data, type) => afterFn,
    onSend: (data, type) => afterFn,
    onError: (errStr) => { ... },
    onDisconnect: () => { ... }
  }
}
```

Drop JS files in your `where` directory:
```
api/
├── hello.js          → api.hello(data)
├── users.js          → api.users(data)
├── posts/
│   ├── index.js      → api.posts(data)        # index.js maps to parent folder
│   ├── list.js       → api.posts.list(data)
│   └── create.js     → api.posts.create(data)
```
Note: Both `api/users.js` and `api/users/index.js` map to the same endpoint `api.users(data)`. Use `index.js` when you want to group related files in a folder.
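The path-to-endpoint mapping above can be sketched as a pure function. This is an illustrative helper, not api-ape's actual internals:

```javascript
// Sketch: convert a controller file path (relative to the `where`
// directory) into its dotted endpoint name. Hypothetical helper for
// illustration only — not api-ape's real implementation.
function endpointFor(relPath) {
  const parts = relPath.replace(/\.js$/, '').split('/')
  // index.js maps to its parent folder
  if (parts[parts.length - 1] === 'index') parts.pop()
  return parts.join('.')
}

console.log(endpointFor('hello.js'))       // hello
console.log(endpointFor('posts/list.js'))  // posts.list
console.log(endpointFor('posts/index.js')) // posts
```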
```
🦍 Duplicate endpoint detected: "users"
   - /users/index.js
   - /users.js
   Remove one of these files to fix this conflict.
```
Controllers are automatically hot-reloaded when files are added or changed. No server restart required during development:
```
🦍 Hot-loaded: users/profile   # New file added
🦍 Reloaded: users/list        # Existing file changed
```
This works for both new controllers and updates to existing ones. The file watcher monitors the `where` directory recursively.
api-ape includes a built-in pub/sub system for channel-based messaging. Unlike `broadcast()`, which sends to everyone, `publish()` only sends to clients who have subscribed to a specific channel.

Use the chained `ape.publish.channel.name(data)` syntax from anywhere on the server:
```js
const { ape } = require('api-ape')

// Publish from a controller
module.exports = function(data) {
  this.publish('/health', { status: 'ok', uptime: process.uptime() })
  return { published: true }
}

// Chained publish syntax (recommended)
ape.publish.stock.AAPL({ price: 185.50, change: 2.3 })
ape.publish.notifications({ message: 'System update!' })
ape.publish.news.banking({ headline: 'Market Update' })

// Legacy syntax (still supported)
ape.publish('/stock/AAPL', { price: 185.50, change: 2.3 })
```

Clients subscribe using the same chaining syntax. Pass a callback function to subscribe:
```js
// Subscribe to channels (pass a callback function)
const unsub1 = api.health(data => {
  console.log('Health update:', data)
})

const unsub2 = api.stock.AAPL(data => {
  console.log('AAPL:', data.price)
})

// Unsubscribe when done
unsub1()
unsub2()
```

Key insight: The same chaining syntax is used for both RPC calls and subscriptions. The difference is what you pass:
- Data → RPC call (returns Promise)
- Callback function → Subscription (returns unsubscribe function)
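That dispatch rule can be sketched as a tiny function. This is a simplified illustration of the idea, not the real client proxy code:

```javascript
// Sketch: the argument type decides whether a chained call is an RPC
// or a subscription. Illustration only — not api-ape's proxy code.
function dispatch(arg, { rpc, subscribe }) {
  if (typeof arg === 'function') {
    return subscribe(arg) // subscription → returns unsubscribe fn
  }
  return rpc(arg)         // RPC call → returns a Promise
}

// Stand-in handlers for demonstration
const handlers = {
  rpc: data => Promise.resolve({ echoed: data }),
  subscribe: cb => () => { /* unsubscribe */ }
}

const unsub = dispatch(() => {}, handlers)
console.log(typeof unsub) // function
dispatch({ msg: 'hi' }, handlers).then(r => console.log(r.echoed.msg)) // hi
```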
| Feature | Description |
|---|---|
| Last message cache | New subscribers receive the last published message immediately |
| Channel names | Any string (e.g., `/health`, `/chat/room/123`, `/stock/AAPL`) |
| Auto-cleanup | Subscriptions are removed when client disconnects |
| Message format | Same as `broadcast()`: `{ type: channel, data: payload }` |
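The last-message-cache behavior from the table can be sketched with a minimal in-memory pub/sub core. This is an illustration of the assumed behavior, not api-ape's source:

```javascript
// Sketch: channel pub/sub with a last-message cache, so late
// subscribers immediately receive the most recent publish.
// Illustration only — not api-ape's implementation.
class PubSub {
  constructor() {
    this.subs = new Map() // channel → Set<callback>
    this.last = new Map() // channel → last published payload
  }
  publish(channel, data) {
    this.last.set(channel, data)
    for (const cb of this.subs.get(channel) || []) cb(data)
  }
  subscribe(channel, cb) {
    if (!this.subs.has(channel)) this.subs.set(channel, new Set())
    this.subs.get(channel).add(cb)
    // Late joiners get the cached message right away
    if (this.last.has(channel)) cb(this.last.get(channel))
    return () => this.subs.get(channel).delete(cb) // unsubscribe fn
  }
}

const bus = new PubSub()
bus.publish('/health', { status: 'ok' })
bus.subscribe('/health', d => console.log(d.status)) // ok (from cache)
```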
- Health monitoring — Clients subscribe to `/health`, server publishes status periodically
- Stock tickers — Subscribe to `/stock/AAPL`, receive price updates
- Chat rooms — Subscribe to `/chat/room/123`, receive messages for that room only
- User-specific updates — Subscribe to `/user/123/notifications`
| Method | Sends To | Use Case |
|---|---|---|
| `broadcast(type, data)` | ALL connected clients | Server announcements, global events |
| `broadcastOthers(type, data)` | All EXCEPT caller | Chat messages (don't echo back) |
| `publish(channel, data)` | Only subscribers of that channel | Targeted updates, topics |
Access connected clients via `ape.clients` to send messages to specific clients.
```js
const { ape } = require('api-ape')

// Iterate all connected clients
for (const [clientId, client] of ape.clients) {
  console.log(`Client ${clientId} connected`)
}

// Get a specific client
const client = ape.clients.get(clientId)

// Check client count
console.log(`${ape.clients.size} clients connected`)
```

Each client wrapper has a `send` function that supports both direct and chained syntax:
```js
const client = ape.clients.get(clientId)

// Direct syntax
client.send('news/banking', { headline: 'Market Update' })

// Chained syntax (same result)
client.send.news.banking({ headline: 'Market Update' })

// Deep nesting works too
client.send.stocks.nasdaq.tech({ price: 100 })
```

| Property | Type | Description |
|---|---|---|
| `clientId` | `string` | Unique client identifier |
| `sessionId` | `string \| null` | Session ID from cookie |
| `embed` | `object` | Values from onConnect's embed return |
| `agent` | `object` | Parsed user-agent (browser, os, device) |
| `isAuthenticated` | `boolean` | Whether client is authenticated |
| `authTier` | `number` | Authentication tier (0-3) |
| `send` | `function` | Send message to this client |
api-ape includes a tiered authentication system with OPAQUE/PAKE support (server never learns raw passwords).
```js
const { createAuthFramework } = require('api-ape/server/security/auth');
const { createAuthMiddleware } = require('api-ape/server/socket/authMiddleware');

const authFramework = createAuthFramework({
  opaque: {
    getUser: async (username) => db.users.findOne({ username }),
    saveUser: async (username, data) => db.users.insertOne({ username, ...data })
  }
});

const authMiddleware = createAuthMiddleware({
  requirements: {
    'admin/*': { tier: 2 },  // Admin requires MFA
    'user/*': { tier: 1 },   // User requires auth
    'public/*': { tier: 0 }  // Public allows guests
  }
});

ape(server, { where: 'api', authFramework, authMiddleware });
```

| Tier | Name | Description |
|---|---|---|
| 0 | GUEST | Unauthenticated, public endpoints only |
| 1 | BASIC | Identity verified via OPAQUE or enterprise SSO |
| 2 | ELEVATED | Tier 1 + MFA (WebAuthn or TOTP) |
| 3 | HIGH_SECURITY | Full 2-of-3 scheme for client-side key reconstruction |
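The wildcard `requirements` matching can be sketched as follows. This is hypothetical logic consistent with the config shown above; the real middleware may resolve patterns differently:

```javascript
// Sketch: resolve the minimum tier for an endpoint from wildcard
// rules like 'admin/*'. Hypothetical — the real authMiddleware
// may implement matching differently.
function requiredTier(endpoint, requirements) {
  for (const [pattern, { tier }] of Object.entries(requirements)) {
    const prefix = pattern.replace(/\*$/, '')
    if (endpoint.startsWith(prefix)) return tier
  }
  return 0 // no rule matched: treat as public
}

const requirements = {
  'admin/*': { tier: 2 },
  'user/*': { tier: 1 },
  'public/*': { tier: 0 }
}

console.log(requiredTier('admin/users', requirements))  // 2
console.log(requiredTier('user/profile', requirements)) // 1
console.log(requiredTier('hello', requirements))        // 0
```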
```js
// api/protected/data.js
module.exports = function(query) {
  if (!this.isAuthenticated) {
    throw new Error('Authentication required');
  }
  console.log('User:', this.principal.userId);
  console.log('Tier:', this.authTier);
  return { data: 'sensitive info' };
};
```

See `security/auth/README.md` for full documentation.
Controllers can return Buffer data directly. The framework handles conversion:
```js
// api/files/download.js
const fs = require('fs')

module.exports = function(filename) {
  return {
    name: filename,
    data: fs.readFileSync(`./uploads/${filename}`)
  }
}
```

For uploads, the controller receives Buffer data:
```js
// api/files/upload.js
const fs = require('fs')

module.exports = function({ name, data }) {
  // data is a Buffer
  fs.writeFileSync(`./uploads/${name}`, data)
  return { success: true }
}
```

Binary data is transferred via `/api/ape/data/:hash` with session verification and HTTPS enforcement (localhost exempt).
For sharing files between clients (broadcasts), use the `<!F>` marker. Messages route immediately; file data transfers asynchronously with true streaming support.
```
Client A → Server:   { msg: "here's a file", file<!F>: "hash123" }  + HTTP upload
Server → Client B:   { msg: "here's a file", file<!F>: "hash123" }  (immediate)
Client B → Server:   GET /api/ape/data/hash123  (streams available bytes)
```
Key differences from regular file transfer (`<!A>`/`<!B>`):

| Feature | Regular (`<!A>`/`<!B>`) | Shared (`<!F>`) |
|---|---|---|
| Session check | Required | Skipped |
| Blocking | Waits for upload | Non-blocking |
| Partial download | No | Yes (stream what's uploaded) |
| Use case | Client → Server | Client → Client via broadcast |
Server-side flow:

- Message with `<!F>` received → streaming file registered
- Controller invoked immediately (non-blocking)
- When broadcast, `<!F>` tags pass through unchanged
- HTTP upload completes streaming file
- Other clients fetch from `/api/ape/data/:hash` (no session check)
Response headers:
| Header | Description |
|---|---|
| `X-Ape-Complete` | `1` if upload finished, `0` if still streaming |
| `X-Ape-Total-Received` | Bytes received so far |
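A downloading client can use these headers to decide whether to keep polling for more bytes. A sketch (header names lowercased, as Node's `http` module delivers them; the helper name is ours, not api-ape's):

```javascript
// Sketch: interpret the streaming-download response headers described
// above. Hypothetical helper for illustration — header keys are
// lowercased the way Node's http module exposes them.
function downloadState(headers) {
  const complete = headers['x-ape-complete'] === '1'
  const received = parseInt(headers['x-ape-total-received'], 10)
  return { complete, received, keepPolling: !complete }
}

console.log(downloadState({
  'x-ape-complete': '0',
  'x-ape-total-received': '1024'
}))
// { complete: false, received: 1024, keepPolling: true }
```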
api-ape automatically provides HTTP streaming endpoints as a fallback when WebSockets are blocked:
Long-lived HTTP streaming connection for receiving server messages.
- Session: Cookie-based (`apeClientId`)
- Response: Streaming JSON messages
- Heartbeat: Every 20 seconds
- Auto-reconnect: Client reconnects after 25 seconds
Send messages to server when using HTTP streaming transport.
- Session: Cookie-based (`apeClientId`)
- Body: JSS-encoded message
- Response: JSS-encoded result
- Client attempts WebSocket connection first
- On failure (firewall/proxy blocking), falls back to HTTP streaming
- Background WebSocket retry every 30 seconds
- Automatically upgrades back to WebSocket when available
The fallback is completely transparent to your controllers: they work identically with both transports.
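The try-WebSocket-first, fall-back-to-HTTP behavior can be sketched as a small async decision. This is an illustration of the strategy only; `tryWebSocket` is a stand-in for a real connection attempt, not an api-ape function:

```javascript
// Sketch: transport selection with WebSocket-first and HTTP-streaming
// fallback. Illustration only — tryWebSocket is a hypothetical
// stand-in for a real connection attempt.
async function pickTransport(tryWebSocket) {
  try {
    await tryWebSocket()
    return 'websocket'
  } catch {
    return 'http-streaming' // firewall/proxy blocked the upgrade
  }
}

pickTransport(async () => { throw new Error('blocked') })
  .then(t => console.log(t)) // http-streaming
```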
api-ape includes its own RFC 6455 WebSocket implementation with zero npm dependencies.
The server automatically detects and uses the best available WebSocket implementation:
- Deno: Uses native `Deno.upgradeWebSocket()` API
- Bun: Uses native `Bun.serve()` WebSocket handlers
- Node.js 24+ (stable): Uses native `node:ws` module
- Earlier Node.js: Uses built-in RFC 6455 polyfill
```js
// Automatic - no configuration needed
ape(server, { where: 'api' })
```

The built-in polyfill implements:
- Full RFC 6455 handshake (SHA-1 + GUID)
- Text and binary frames
- Frame fragmentation
- Ping/pong heartbeats
- Proper close handshake
- Masking (client→server)
Forest is api-ape's distributed coordination system for horizontal scaling. It routes messages between servers via a shared database, enabling you to run multiple api-ape instances behind a load balancer.
```js
const { ape } = require('api-ape');
const { createClient } = require('redis');

const redis = createClient();
await redis.connect();

// Join the mesh — pass any supported database client
ape.joinVia(redis);

// Graceful shutdown
process.on('SIGINT', async () => {
  await ape.leaveCluster();
  process.exit(0);
});
```

Without coordination, each server only knows about its own connected clients:
```
              Load Balancer
                   │
      ┌────────────┼────────────┐
      │            │            │
  Server A     Server B     Server C
  client-1     client-2     client-3
```
If Server A wants to send a message to client-2, it doesn't know where client-2 is connected.
Naive solutions:
- Broadcast to all servers — O(n) messages, doesn't scale
- Sticky sessions — Complex LB config, no failover
Forest's solution:
- Direct routing — Lookup `clientId → serverId`, push only to that server. O(1).
Forest uses two database primitives:
| Primitive | Purpose | Example |
|---|---|---|
| Lookup Table | Maps `clientId → serverId` | Redis key, Postgres row |
| Channels | Real-time message push | Redis PUB/SUB, Postgres NOTIFY |
```
Server A: "Send message to client-2"
     │
     ▼
1. Check local clients → not found
     │
     ▼
2. lookup.read("client-2") → "srv-B"
     │
     ▼
3. channels.push("srv-B", { destClientId: "client-2", ... })
     │
     ▼
Database (Redis/Postgres/Mongo/etc)
     │
     ▼
Server B: Receives message, delivers to client-2
```
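The flow above can be simulated with in-memory primitives. This is an illustrative sketch; real adapters back the lookup table and channels with Redis, Postgres, and so on:

```javascript
// Sketch: Forest-style direct routing with in-memory stand-ins for
// the lookup table and channels. Illustration only — real adapters
// back these with a shared database.
const lookup = new Map()   // clientId → serverId
const channels = new Map() // serverId → message handler

function registerServer(serverId, handler) { channels.set(serverId, handler) }
function connectClient(clientId, serverId) { lookup.set(clientId, serverId) }

function sendTo(clientId, message) {
  const serverId = lookup.get(clientId) // O(1) lookup, no broadcast
  if (!serverId) return false           // unknown client
  channels.get(serverId)(message)       // push to that one server only
  return true
}

const delivered = []
registerServer('srv-B', msg => delivered.push(msg))
connectClient('client-2', 'srv-B')
sendTo('client-2', { destClientId: 'client-2', data: 'hi' })
console.log(delivered.length) // 1
```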
| Backend | How to Connect | Channels | Lookup | Ideal For |
|---|---|---|---|---|
| Redis | `createClient()` | PUB/SUB | Key-value | Most deployments; fastest |
| MongoDB | `new MongoClient()` | Change Streams | Collection | Mongo-native stacks |
| PostgreSQL | `new pg.Pool()` | LISTEN/NOTIFY | Table | SQL shops |
| Supabase | `createClient()` | Realtime | Table | Supabase users |
| Firebase | `getDatabase()` | Native push | JSON tree | Serverless/edge |
Join the distributed mesh.
```js
ape.joinVia(redis);

ape.joinVia(redis, {
  namespace: 'myapp',    // Key/table prefix (default: 'apes')
  serverId: 'srv-west-1' // Custom server ID (default: auto-generated)
});
```

| Option | Type | Default | Description |
|---|---|---|---|
| `namespace` | `string` | `'apes'` | Prefix for all keys/tables |
| `serverId` | `string` | Auto-generated | Unique ID for this server instance |
Gracefully leave the mesh. Removes client mappings and unsubscribes from channels.
```js
await ape.leaveCluster();
```

Forest creates its own database objects with your namespace prefix:
| Backend | Created Objects |
|---|---|
| Redis | `apes:client:{id}`, `apes:channel:{serverId}`, `apes:channel:ALL` |
| MongoDB | Database: `apes_cluster`, Collections: `clients`, `events` |
| PostgreSQL | Tables: `apes_clients`, Channel: `apes_events` |
| Supabase | Table: `apes_clients` (must create), Realtime channels |
| Firebase | Paths: `/apes/clients/*`, `/apes/channels/*` |
For unsupported databases or testing, implement the adapter interface:
```js
ape.joinVia({
  async join(serverId) {
    // Subscribe to channels, register this server
  },
  async leave() {
    // Unsubscribe, cleanup client mappings
  },
  lookup: {
    async add(clientId) {
      // Map clientId → this server
    },
    async read(clientId) {
      // Return serverId or null
    },
    async remove(clientId) {
      // Delete mapping (must own it)
    }
  },
  channels: {
    async push(serverId, message) {
      // Send to server's channel ("" = broadcast)
    },
    async pull(serverId, handler) {
      // Subscribe to channel
      // handler(message, senderServerId)
      return async () => { /* unsubscribe */ };
    }
  }
});
```

| Event | What Happens |
|---|---|
| Server joins | `join(serverId)` — subscribe to channels |
| Client connects | `lookup.add(clientId)` — register mapping |
| Message to remote client | `lookup.read()` → `channels.push()` |
| Broadcast | `channels.push('')` — to ALL channel |
| Client disconnects | `lookup.remove(clientId)` |
| Server shuts down | `leave()` — cleanup everything |
`clientId` is ephemeral — generated fresh on each connection. If a server crashes:

- Orphaned client mappings remain (stale)
- Clients reconnect with a new `clientId` to another server
- New mappings are created; old ones are harmless
- Optional: Use Redis `EXPIRE` or DB TTL indexes for cleanup
Server A (port 3001):

```js
const { ape } = require('api-ape');
const redis = createClient();
await redis.connect();

ape(server, { where: 'api' });
ape.joinVia(redis, { serverId: 'srv-a' });
server.listen(3001);
```

Server B (port 3002):
```js
const { ape } = require('api-ape');
const redis = createClient();
await redis.connect();

ape(server, { where: 'api' });
ape.joinVia(redis, { serverId: 'srv-b' });
server.listen(3002);
```

Controller (`api/chat.js`):
```js
module.exports = function(message) {
  // Broadcasts across ALL servers automatically
  this.broadcastOthers('chat', {
    from: this.clientId,
    message
  });
  return { sent: true };
};
```

Now clients connected to different servers can chat with each other seamlessly.
| Concern | Recommendation |
|---|---|
| Lookup latency | Use Redis for sub-ms lookups |
| Message throughput | Redis PUB/SUB handles millions/sec |
| Stale mappings | Set TTL/EXPIRE on client keys |
| Large payloads | Postgres NOTIFY has 8KB limit |
| Change Stream lag | MongoDB may have slight delay |
Forest logs key operations:
```
🔌 APE: Detected redis adapter (serverId: X7K9MWPA)
✅ Redis adapter: joined as X7K9MWPA
📍 Redis adapter: registered client abc123 -> X7K9MWPA
📤 Redis adapter: pushed to server Y8M2ZPQR
📢 Redis adapter: broadcast to all servers
🔴 Redis adapter: leaving, cleaning up 3 clients
```
See detailed adapter implementations in `server/adapters/`:
| File | Description |
|---|---|
| `index.js` | Auto-detects database type, creates adapter |
| `redis.js` | Redis PUB/SUB adapter |
| `mongo.js` | MongoDB Change Streams adapter |
| `postgres.js` | PostgreSQL LISTEN/NOTIFY adapter |
| `supabase.js` | Supabase Realtime adapter |
| `firebase.js` | Firebase RTDB adapter |
| `README.md` | Quick reference for all adapters |
- Check that your controller file is in the `where` directory (default: `api/`)
- Ensure the file exports a function: `module.exports = function(...) { ... }`
- File paths map directly: `api/users/list.js` → `api.users.list()`
The client automatically reconnects with exponential backoff. If connections drop often:
- Check server WebSocket timeout settings
- Verify network stability
- Check server logs for errors
Return Buffer data from controllers:

```js
// api/files/download.js
const fs = require('fs')

module.exports = function(filename) {
  return {
    name: filename,
    data: fs.readFileSync(`./uploads/${filename}`) // Buffer
  }
}
```

Client receives ArrayBuffer:
```js
const result = await api.files.download('image.png')
const blob = new Blob([result.data])
img.src = URL.createObjectURL(blob)
```

Type definitions are included (`index.d.ts`). For full type safety:
- Define interfaces for your controller parameters and return types
- Use type assertions when calling `api.<path>.<method>()`