16 changes: 16 additions & 0 deletions src/audio_worklet.js
@@ -33,6 +33,8 @@ function createWasmAudioWorkletProcessor() {
assert(opts.callback)
assert(opts.samplesPerChannel)
#endif
this.stopped = false;
this.port.onmessage = this.onmessage.bind(this);
this.callback = {{{ makeDynCall('iipipipp', 'opts.callback') }}};
this.userData = opts.userData;
// Then the samples per channel to process, fixed for the lifetime of the
@@ -86,6 +88,18 @@ function createWasmAudioWorkletProcessor() {
}
#endif

onmessage(msg) {
var data = msg.data;
if (data['stop']) {
this.stopped = true;
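// From this point on, process() will return early without calling the user's callback.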
if (data['cb']) {
// Send the same message back so that the main thread can verify that
// the Worklet has stopped
this.port.postMessage(data);
}
}
}

/**
* Marshals all inputs and parameters to the Wasm memory on the thread's
* stack, then performs the wasm audio worklet call, and finally marshals
@@ -100,6 +114,8 @@
process(inputList, outputList) {
#endif

if (this.stopped) return false;

#if ALLOW_MEMORY_GROWTH
// Recreate the output views if the heap has changed
// TODO: add support for GROWABLE_ARRAYBUFFERS
1 change: 1 addition & 0 deletions src/lib/libsigs.js
@@ -628,6 +628,7 @@ sigs = {
emscripten_debugger__sig: 'v',
emscripten_destroy_audio_context__sig: 'vi',
emscripten_destroy_web_audio_node__sig: 'vi',
emscripten_destroy_web_audio_node_async__sig: 'vipp',
emscripten_destroy_worker__sig: 'vi',
emscripten_enter_soft_fullscreen__sig: 'ipp',
emscripten_err__sig: 'vp',
28 changes: 27 additions & 1 deletion src/lib/libwebaudio.js
@@ -160,6 +160,22 @@ var LibraryWebAudio = {
#endif
// Explicitly disconnect the node from Web Audio graph before letting it GC,
// to work around browser bugs such as https://webkit.org/b/222098#c23
EmAudio[objectHandle].port.postMessage({'stop': 1});
EmAudio[objectHandle].disconnect();
Collaborator:

Can we not use a shared memory location here to guarantee that the callback will never fire again once this function returns?

Then we would not need emscripten_destroy_web_audio_node_async at all I think?

Author:

But wouldn't that shared location then be leaked memory? We can never tell when the last process callback will run (at least according to the Web Audio spec).

Collaborator:

Hmm, interesting yes. What about this sequence of operations:

  1. Main thread sets the "shutdown" bit.
  2. Main thread blocks/spins until the worklet's next "process" callback, which consumes the "shutdown" bit and sets a JS flag preventing any future "process" callbacks. The worklet then sets the "shutdown_complete" bit, unblocking the main thread.
  3. Main thread is now free to release all shared memory resources.

Assuming the audio worklet always makes progress during emscripten_destroy_web_audio_node, I think it should be OK to block like this. WDYT @cwoffenden ?
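
For illustration, a rough sketch of that sequence (the flag indices and the HEAP32 view here are illustrative only, not part of this PR):

  // Main thread, e.g. inside emscripten_destroy_web_audio_node:
  Atomics.store(HEAP32, flagShutdown, 1);       // 1. set the "shutdown" bit
  while (!Atomics.load(HEAP32, flagDone)) {}    // 2. spin until the worklet acknowledges
  // 3. shared resources tied to the node can now be released

  // Audio worklet thread, at the top of the process() wrapper:
  if (Atomics.load(HEAP32, flagShutdown)) {
    this.stopped = true;                        // never call user code again
    Atomics.store(HEAP32, flagDone, 1);         // "shutdown_complete": unblocks the main thread
    return false;
  }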

Collaborator:

This type of blocking in the main thread should be fine, since we'd already be in the main thread, waiting on the AW, so at most it should be spinning for 3ms with the default quantum size of 128 samples (as this becomes adjustable this blocking time will increase).

It would need some testing to prove that spinning for 3ms doesn't cause Chrome to start delaying timeouts or frames, so I think in general the async approach is cleaner.

lindell (Author), Dec 2, 2025:

I would very much prefer if we could avoid the callback here.

Assuming process will always be called afterwards, that should work. But is that really guaranteed to happen? It should at least not be guaranteed when using an OfflineAudioContext?

Assuming we can guarantee it: do we even need the spinlock? If we set the shutdown bit, we can already guarantee at that point that the process callback won't be called, so there is no need to wait?
As soon as the shutdown bit is set, we should be free to release any resource that is connected to the process callback?

cwoffenden (Collaborator), Dec 3, 2025:

@lindell I've never run timings between process callbacks so will take your word for it that they're unevenly spaced (compounding that the audio worklet's design is terrible 😀). I guess it depends on how often and where closing the audio device is expected, whether a 10ms spin is acceptable.

@sbc100 It's not something I'd think of using, nothing is ever released after acquiring by design (except clean-up of WebGL contexts on page unload, but that's more to do with a long standing issue of browsers keeping them around if the debugger is open). If I look at cross platform shipping code going back 20+ years, some of the close() implementations are empty (which I guess makes me a cowboy, so ye-ha?!).

lindell (Author), Dec 4, 2025:

@cwoffenden https://ui.perfetto.dev is really useful for understanding how the code is running.

Here are some Chrome traces of a (non-Emscripten) audio worklet:

Demo page here: https://lindell.me/audio-context-demos/noise-generator.html

lindell (Author), Dec 4, 2025:

@sbc100 Your suggestion would work from a freeing perspective (being sure you can free once the shutdown_complete bit is set). I was referring to my modified suggestion, which will not work, since the process callback might already be running.

While a predictable 3ms spin might be acceptable, we can see from the traces that this might take hundreds of milliseconds.

Furthermore, if the AudioContext is suspended, the process callback may never fire, causing the main thread to spinlock indefinitely (deadlock), or, if we add a maximum timeout, causing a use-after-free when the context resumes. A suspended audio context simply no longer requests audio through the audio graph and thus does not call the process callback on worklet nodes, while MessagePorts are still handled:
https://lindell.me/audio-context-demos/suspended-messages.html

Similarly with OfflineAudioContexts: process is only called when a chunk of audio is actively being requested, while MessagePorts are still processed: https://lindell.me/audio-context-demos/offline-messages.html

Because we cannot guarantee the process scheduler's behavior, we cannot safely block the main thread waiting for it.

Reusing the existing API function without adding any new functions is definitely preferable if it can be done without other consequences, which I unfortunately do not think is the case. We do not know:

  1. How long the scheduling might take (even if we could build the sync API on top of MessagePorts).
  2. How performance-sensitive users are.
  3. How many worklet nodes users might need to destroy at once. Since Web Audio is designed around a graph of nodes, there could definitely be scenarios with a lot of different worklet nodes.

If we implement this on top of the existing API, hoping that the spinlocking will be fine and that users can rely on userData not being used after the destroy, and then some time later realise that this is unacceptable for some new user, we will not be able to reverse it without breaking existing uses.

Collaborator:

Re "if the Audio Context is suspended": a very valid point, suspended either explicitly or simply because the tab is backgrounded.

Author:

Is there more information needed here?

delete EmAudio[objectHandle];
},

emscripten_destroy_web_audio_node_async: (objectHandle, callback, userData) => {
#if ASSERTIONS || WEBAUDIO_DEBUG
emAudioExpectNode(objectHandle, 'emscripten_destroy_web_audio_node_async');
#endif
// Explicitly disconnect the node from Web Audio graph before letting it GC,
// to work around browser bugs such as https://webkit.org/b/222098#c23
EmAudio[objectHandle].port.postMessage({
'stop': 1,
'cb': callback,
'ud': userData,
});
EmAudio[objectHandle].disconnect();
delete EmAudio[objectHandle];
},
@@ -352,7 +368,17 @@ var LibraryWebAudio = {
dbg(`Creating AudioWorkletNode "${UTF8ToString(name)}" on context=${contextHandle} with options:`);
console.dir(opts);
#endif
return emscriptenRegisterAudioObject(new AudioWorkletNode(EmAudio[contextHandle], UTF8ToString(name), opts));

const node = new AudioWorkletNode(EmAudio[contextHandle], UTF8ToString(name), opts);
node.port.onmessage = (msg) => {
var data = msg.data;
if (data['stop']) {
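// This is the 'stop' message echoed back by the worklet (see onmessage in
// audio_worklet.js): the worklet has already set its stopped flag, so the
// user's process callback will not run again after this point.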
var cb = data['cb'];
callUserCallback(() => {{{ makeDynCall('vp', 'cb') }}}(data['ud']));
}
};

return emscriptenRegisterAudioObject(node);
},
#endif // ~AUDIO_WORKLET

15 changes: 14 additions & 1 deletion system/include/emscripten/webaudio.h
@@ -60,11 +60,24 @@ typedef void (*EmscriptenStartWebAudioWorkletCallback)(EMSCRIPTEN_WEBAUDIO_T aud
// after calling this function.
void emscripten_destroy_audio_context(EMSCRIPTEN_WEBAUDIO_T audioContext);

// Disconnects the given audio node from its audio graph, and then releases
// Disconnects the given audio node from its audio graph, asks the node to stop
// calling its process callback, and then releases the JS object table reference
// to the given audio node. The specified handle is invalid after calling this
// function.
// Note that the stop request is asynchronous, so the process callback may still
// be called a few more times after this function returns. If you need to know
// when the process callback will no longer be called, use
// emscripten_destroy_web_audio_node_async() instead.
void emscripten_destroy_web_audio_node(EMSCRIPTEN_WEBAUDIO_T objectHandle);

typedef void (*EmscriptenDestroyWebAudioNodeCallback)(void *userData3);

// Disconnects the given audio node from its audio graph, makes sure the process
// callback will no longer be called, and then releases the JS object table
// reference to the given audio node. The specified handle is invalid after
// calling this function.
// Once the node has been verified to have stopped, the given callback is called
// with userData3.
void emscripten_destroy_web_audio_node_async(EMSCRIPTEN_WEBAUDIO_T objectHandle, EmscriptenDestroyWebAudioNodeCallback callback, void *userData3);

// Create Wasm AudioWorklet thread. Call this function once at application startup to establish an AudioWorkletGlobalScope for your app.
// After the scope has been initialized, the given callback will fire.
// audioContext: The Web Audio context object to initialize the Wasm AudioWorklet thread on. Each AudioContext can have only one AudioWorklet
3 changes: 3 additions & 0 deletions test/test_interactive.py
@@ -298,6 +298,9 @@ def test_audio_worklet(self):
self.btest('webaudio/audioworklet.c', expected='0', cflags=['-sAUDIO_WORKLET', '-sWASM_WORKERS', '--preload-file', test_file('hello_world.c') + '@/'])
self.btest('webaudio/audioworklet.c', expected='0', cflags=['-sAUDIO_WORKLET', '-sWASM_WORKERS', '-pthread'])

def test_audio_worklet_destroy_async(self):
self.btest('webaudio/audioworklet_destroy_async.c', expected='0', cflags=['-sAUDIO_WORKLET', '-sWASM_WORKERS', '-pthread', '-lwebaudio.js'])

# Tests a second AudioWorklet example: sine wave tone generator.
def test_audio_worklet_tone_generator(self):
self.btest('webaudio/audio_worklet_tone_generator.c', expected='0', cflags=['-sAUDIO_WORKLET', '-sWASM_WORKERS'])
107 changes: 107 additions & 0 deletions test/webaudio/audioworklet_destroy_async.c
@@ -0,0 +1,107 @@
#include <stdio.h>
#include <stdlib.h>
#include <emscripten.h>
#include <emscripten/webaudio.h>
#include <emscripten/threading.h>
#include <stdatomic.h>

#ifdef REPORT_RESULT
volatile int audioProcessedCount = 0;
int valueAfterDestroy;
#endif

EMSCRIPTEN_AUDIO_WORKLET_NODE_T node_id;

bool ProcessAudio(int numInputs, const AudioSampleFrame *inputs, int numOutputs, AudioSampleFrame *outputs, int numParams, const AudioParamFrame *params, void *userData) {
#ifdef REPORT_RESULT
++audioProcessedCount;
#endif

// Produce noise in all output channels.
for(int i = 0; i < numOutputs; ++i)
for(int j = 0; j < outputs[i].samplesPerChannel*outputs[i].numberOfChannels; ++j)
outputs[i].data[j] = (rand() / (float)RAND_MAX * 2.0f - 1.0f) * 0.3f;

return true;
}

// Runs 1s after the node was destroyed; verifies that ProcessAudio has not been
// called again since the destroy callback fired.
void observe_after_destroy(void * userData) {
#ifdef REPORT_RESULT
printf("Expected processed count to be %d, was %d\n", valueAfterDestroy, audioProcessedCount);

if (audioProcessedCount == valueAfterDestroy) {
printf("Test PASSED!\n");
REPORT_RESULT(0);
} else {
printf("Test FAILED!\n");
REPORT_RESULT(1);
}
#endif
}

void AudioWorkletDestroyed(void* userData) {
emscripten_out("AudioWorkletDestroyed");
#ifdef REPORT_RESULT
valueAfterDestroy = audioProcessedCount;
#endif
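// Wait another second, then check that no further process callbacks arrived.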
emscripten_set_timeout(observe_after_destroy, 1000, 0);
}

void observe_after_start(void *userData) {
#ifdef REPORT_RESULT
if (audioProcessedCount == 0) {
printf("Test FAILED!\n");
REPORT_RESULT(1);
}
#endif

emscripten_destroy_web_audio_node_async(node_id, &AudioWorkletDestroyed, 0);
}

// This callback will fire after the Audio Worklet Processor has finished being
// added to the Worklet global scope.
void AudioWorkletProcessorCreated(EMSCRIPTEN_WEBAUDIO_T audioContext, bool success, void *userData) {
if (!success) return;

emscripten_out("AudioWorkletProcessorCreated");

// Specify the input and output node configurations for the Wasm Audio
// Worklet. A simple setup with single mono output channel here, and no
// inputs.
int outputChannelCounts[1] = { 1 };

EmscriptenAudioWorkletNodeCreateOptions options = {
.numberOfInputs = 0,
.numberOfOutputs = 1,
.outputChannelCounts = outputChannelCounts
};

// Instantiate the counter-incrementer Audio Worklet Processor.
node_id = emscripten_create_wasm_audio_worklet_node(audioContext, "counter-incrementer", &options, &ProcessAudio, 0);
emscripten_audio_node_connect(node_id, audioContext, 0, 0);

// Wait 1s to check that the counter has started incrementing
emscripten_set_timeout(observe_after_start, 1000, 0);
}

// This callback will fire when the audio worklet thread has been initialized.
void WebAudioWorkletThreadInitialized(EMSCRIPTEN_WEBAUDIO_T audioContext, bool success, void *userData) {
if (!success) return;

emscripten_out("WebAudioWorkletThreadInitialized");

WebAudioWorkletProcessorCreateOptions opts = {
.name = "counter-incrementer",
};
emscripten_create_wasm_audio_worklet_processor_async(audioContext, &opts, AudioWorkletProcessorCreated, 0);
}

uint8_t wasmAudioWorkletStack[4096];

int main() {
EMSCRIPTEN_WEBAUDIO_T context = emscripten_create_audio_context(NULL);

emscripten_start_wasm_audio_worklet_thread_async(context, wasmAudioWorkletStack, sizeof(wasmAudioWorkletStack), WebAudioWorkletThreadInitialized, 0);

return 0;
}