Hi! 👋
I'm experiencing what seems like unexpected timeout behavior and wanted to understand if this is intended.
The Scenario
- Step 1: Run `import.js` to enqueue ~1000 jobs with `.timeout(3_000)` (3 seconds)
- Step 2: Jobs get stored in the database, backend is NOT running
- Step 3: Start the backend with `node app.js`
- Step 4: Jobs start processing successfully
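For concreteness, step 1 (`import.js`) does roughly the following; the `domains` array and the loop are illustrative, and the exact builder call is shown in the Test Case below:

```js
// import.js (sketch): enqueue ~1000 jobs, each with a 3-second timeout.
// `domains` is an illustrative array of ~1000 entries, not literal code.
for (const domain of domains) {
  await Sidequest.build(ScanJob)
    .timeout(3_000) // 3 seconds, intended per job
    .enqueue(domain);
}
```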
The Weird Behavior
After exactly 3 seconds of backend runtime (not 3 seconds per job), ALL subsequent jobs start timing out - even jobs that just started executing.
Timeline:
- T+0s: Backend starts, first jobs begin processing
- T+0s-T+3s: Jobs complete successfully
- T+3s+: ALL jobs timeout immediately, including newly started ones
The Confusion
This suggests the timeout is measured from backend start time, not from:
- Job enqueue time (jobs can sit in DB for hours before backend starts)
- Individual job execution time (would expect each job to have its own 3s window)
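To make the expectation concrete, here is a minimal sketch (plain Node, not Sidequest internals) of the per-job semantics I assumed; `runWithTimeout` and `executeJob` are hypothetical names, not part of the Sidequest API:

```js
// What I expected: each job's timeout window starts when that job starts executing.
// `executeJob` is a hypothetical stand-in for whatever actually runs the job.
async function runWithTimeout(job, ms) {
  let timer;
  const timedOut = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`job timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([executeJob(job), timedOut]);
  } finally {
    clearTimeout(timer);
  }
}

// What I appear to be seeing instead: a single 3-second timer measured from
// backend start, after which every job fails, even ones that just began running.
```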
Test Case
```js
// import.js - enqueues jobs
const jobPromise = Sidequest.build(ScanJob)
  .timeout(3 * 1_000) // 3 seconds
  .enqueue(domain);
```

```js
// app.js - starts backend later
await Sidequest.start(config);
```

Config:

```json
{
  "backend": {
    "driver": "@sidequest/sqlite-backend",
    "config": {
      "client": "better-sqlite3",
      "connection": {
        "filename": "./data/out/database.sqlite"
      }
    }
  },
  "maxConcurrentJobs": 1,
  "minThreads": 1,
  "maxThreads": 1,
  "dashboard": {
    "enabled": false
  }
}
```

Expected: Each job gets 3 seconds to execute when it starts
Actual: All jobs timeout 3 seconds after backend starts, regardless of individual execution time
Workaround: Removing `.timeout(3 * 1_000)` completely allows all jobs to process successfully without any timeout issues
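For completeness, the enqueue call I'm running with as a workaround (the only change from the Test Case above is dropping `.timeout`):

```js
// Workaround currently in use: identical enqueue, just without .timeout().
const jobPromise = Sidequest.build(ScanJob)
  .enqueue(domain); // all ~1000 jobs complete fine this way
```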
Is this the intended behavior? It makes timeouts unusable for any real workload since you can't predict when jobs will start executing relative to backend startup.
Thanks for any clarification! 🙏