The INSTRUMENT sequencer timing overhaul replaced a 500Hz polling architecture with event-scheduled Routines and latency-timestamped OSC bundles, eliminating the fundamental source of the 2-7ms jitter that made transients and hi-hats sound "stiff." The original system used a Tdef running an inf.do loop that ticked every 2ms ((1/500).wait), checking every parameter track on every tick via two binary searches per track through getCurrentEventNew(), a function that converted ticks to beat positions using floating-point modulo arithmetic (patternPosition % sequenceDuration), tracked played/unplayed state with boolean flags, and fired triggers through server.bind, whose bundle timestamps are derived from whenever sclang happens to execute the call rather than from the musical time the event was due. This meant that even when an event was correctly identified, the 2ms quantization grid plus sclang's single-threaded scheduling jitter (GC pauses, GUI updates, and OSC construction all competing for the same thread) added 2-50ms+ of imprecision that server.bind could not recover.
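A rough sketch of that polling shape, reconstructed from the description above rather than taken from the original source, shows why the 2ms grid and the busy loop dominate the timing:

```supercollider
// Illustrative reconstruction of the old tick loop (not the original code).
Tdef(\sequencerTick, {
	var tick = 0;
	inf.do({
		sequencerTracks.do({ |track|
			// Two binary searches plus a floating-point modulo, every tick, every track.
			var event = track.getCurrentEventNew(tick);
			if(event.notNil and: { event.played.not }, {
				// Bundle reference time = whenever this line happens to run in sclang.
				server.bind({ track.instrument.trigger(track.name, event) });
				event.played = true;
			});
		});
		tick = tick + 1;
		(1/500).wait;  // nominal 2 ms polling grid
	});
}).play;
```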
The refactor replaces all of this: each ParameterTrack now runs its own Routine that iterates through newSequenceInfo (the Order timeline built by updateSequenceInfo) using delta-based wait calls, (beatPosition - previousBeatPos) / speed, on the shared main.clock TempoClock, with quant: main.sequencer.timeSignature.beats ensuring all tracks align to bar boundaries automatically. Each trigger call is wrapped in main.server.makeBundle(main.server.latency, { track.instrument.trigger(name, event) }), which captures all OSC messages (Synth.new, synth.set, etc.) into a single bundle timestamped 50ms in the future, giving scsynth time to queue and execute them cleanly. Note that the bundle timestamp itself carries ~0.2-1ms of TempoClock jitter: this is sub-millisecond precision, not true sample accuracy (which would require computing timestamps from clock.beats2secs or server-side sequencing), but ~0.2-1ms is below the human perception threshold for onset timing.

The Sequencer itself was simplified from the monolithic tick loop to two lightweight clock-scheduled callbacks: a bar-boundary callback that fires every timeSignature.beats for queue processing and looper state-machine transitions, and a beat-boundary callback that fires every beat for singleFunctions/repeatFunctions. The .collect -> .do cleanup across 11 call sites eliminated ~3000 garbage arrays per second that were contributing to GC pressure spikes, and the cosmetic Task.new({0.1.wait; ...}).play in addPattern was removed so it no longer competes for clock time.

Net result: zero polling, zero binary searches in the hot path, zero floating-point beat detection, and three cleanly separated timing layers (TempoClock scheduling at ~0.2-1ms -> server.latency absorption -> scsynth bundle execution).
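A minimal sketch of one per-track playback Routine, assuming newSequenceInfo can be read as entries with beatPosition and event fields and that sequenceDuration is the pattern length in beats (those field names, ~playTrack, and the argument list are illustrative; makeBundle, latency, Routine, and quant are the standard sclang pieces the refactor relies on):

```supercollider
// Sketch only: per-track Routine with delta waits and latency-timestamped bundles.
~playTrack = { |track, name, clock, server, barBeats = 4, speed = 1|
	Routine({
		inf.do({
			var previousBeatPos = 0;
			track.newSequenceInfo.do({ |entry|
				// Sleep exactly until the next event, measured in beats on the shared clock.
				((entry.beatPosition - previousBeatPos) / speed).wait;
				previousBeatPos = entry.beatPosition;
				// Capture every OSC message the trigger generates into one bundle,
				// timestamped server.latency (~50 ms) ahead so scsynth plays it on time.
				server.makeBundle(server.latency, {
					track.instrument.trigger(name, entry.event);
				});
			});
			// Pad out the remainder of the pattern before looping back to beat 0.
			((track.sequenceDuration - previousBeatPos) / speed).wait;
		});
	}).play(clock, quant: barBeats);  // quant aligns the start to the next bar boundary
};
```

Between events the Routine is simply asleep on the TempoClock's queue, which is where the "zero work between events" property comes from.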
To verify that the timing improvements are real and not just theoretical, add SendReply.kr(Impulse.kr(0), '/noteOn', [timer.value]) to a test SynthDef and collect the timestamps of the server's replies via OSCdef(\jitterTest, {|msg, time| times.add(time) }, '/noteOn'), then analyze the deltas with times.differentiate.drop(1) to get the mean, standard deviation, and maximum jitter; a minimal harness is sketched below. Run this with a simple 16th-note pattern at 120 BPM (expected delta: 0.125s) and compare before and after: the old system should show a standard deviation of 2-5ms with occasional 10-50ms outliers during GC, while the new system should show a standard deviation under 0.5ms with no outliers exceeding server.latency (50ms).
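A minimal harness along those lines; ~times, \clickTest, \jitterTest, and ~report are illustrative names, and the analysis assumes 16th notes at 120 BPM (0.125s deltas):

```supercollider
(
~times = List.new;

SynthDef(\clickTest, {
	// Ask the server to report the moment this synth actually starts.
	SendReply.kr(Impulse.kr(0), '/noteOn');
	Out.ar(0, Impulse.ar(0) * 0.2);      // short audible click
	Line.kr(dur: 0.05, doneAction: 2);   // free the synth after 50 ms
}).add;

OSCdef(\jitterTest, { |msg, time|
	~times.add(time);                    // arrival time of the server's reply
}, '/noteOn');

// After a run, inspect the inter-onset deltas.
~report = {
	var deltas = ~times.asArray.differentiate.drop(1);
	var mean = deltas.mean;
	var stddev = deltas.collect({ |d| (d - mean).squared }).mean.sqrt;
	"mean: %  stddev: %  max deviation from 0.125: %".format(
		mean, stddev, (deltas - 0.125).abs.maxItem
	).postln;
};
)
```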
For edge cases:
- Non-round tempos: Test at 130 BPM, where the old (60/main.tempo)*tickTime formula produced a fractional tick count per beat (230.769...), causing beat drift and double-fires; the new system uses TempoClock's internal beat-to-seconds conversion, which handles this natively.
- Pattern hot-swapping: Call .seq() mid-playback and verify the new pattern starts on the next bar boundary (the Routine restart with quant guarantees this).
- Rewind: Test go(0) during playback; the Routine stops and restarts cleanly.
- Stress load: Run 8+ instruments with complex patterns to verify there is no CPU-induced jitter under load; the old system ran 4000+ binary-search pairs per second that scaled with instrument count, while the new system does zero work between events.
- Long-running stability: Play for 10+ minutes and compare the first and last minute's jitter distributions; the old system accumulated floating-point drift in its tick counter, while the new system has no cumulative state.
- Tempo changes: Change tempo mid-playback with main.tempo_(140); all Routines share the same TempoClock, so all scheduled events stretch uniformly with zero re-synchronization needed (see the sketch after this list).
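The tempo-change case can be seen in isolation with plain sclang (this is not the project API; ~clock and ~mkRoutine are illustrative): two Routines on one TempoClock stay locked to each other when the tempo moves.

```supercollider
(
~clock = TempoClock(120/60);   // 120 BPM
~mkRoutine = { |label|
	Routine({
		inf.do({
			"% at beat %".format(label, ~clock.beats.round(0.01)).postln;
			1.wait;            // one beat, at whatever tempo the clock currently has
		});
	}).play(~clock, quant: 4);
};
~mkRoutine.("left");
~mkRoutine.("right");
)

// Mid-playback: both Routines stretch together, no re-synchronization needed.
~clock.tempo_(140/60);
```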
The most revealing audible test is a tight hi-hat pattern (16th notes at 140+ BPM) alongside a kick on quarter notes — jitter is most perceptible on short transient sounds at fast subdivisions. With the old system, hi-hats should sound subtly uneven or "drunk," especially after the system has been running for several minutes and GC pressure builds. With the new system, they should lock to a rigid grid indistinguishable from a hardware sequencer.
A second audible test: program two instruments with the exact same pattern and pan them hard left and hard right — timing differences manifest as stereo "flamming" where attacks don't line up. The old tick loop processed tracks sequentially within each tick (sequencerTracks.do), so Track B always fired slightly after Track A; the new system schedules both via independent Routines on the same TempoClock beat, and server.makeBundle ensures both synths are created at the exact same sample.
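The same-sample claim can be checked outside the project with a plain two-synth demo (assumes the server s is booted; \click is an illustrative SynthDef): sending both Synth messages inside one makeBundle gives them a single shared timestamp, while sending them outside a bundle lets each land wherever sclang happens to execute that call.

```supercollider
(
SynthDef(\click, { |pan = 0|
	Out.ar(0, Pan2.ar(Impulse.ar(0) * 0.3, pan));  // single transient click
	Line.kr(dur: 0.05, doneAction: 2);             // free after 50 ms
}).add;
)

// Bundled: both onsets share one timestamp, so left and right attack together.
s.makeBundle(s.latency, {
	Synth(\click, [\pan, -1]);
	Synth(\click, [\pan, 1]);
});

// Un-bundled: each Synth message is sent as it executes and can flam audibly.
Synth(\click, [\pan, -1]); Synth(\click, [\pan, 1]);
```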
A third test: record the output and zoom into the waveform in a DAW, measuring the actual onset times of triggered synths against the expected grid positions. The old system's onsets should cluster in 2ms bands (the tick quantization floor) with occasional outliers; the new system's onsets should fall within one control block (64 samples, ~1.5ms at 44.1kHz) of the exact grid position, limited only by scsynth's block size rather than by sclang scheduling.