Lesson 23 of 28
Advanced API
Audio Processing
Introduction
Audio processing is a natural fit for Wasm: it demands low-latency, real-time computation. Rust compiled to Wasm can process audio buffers faster and more predictably than JavaScript, while the Web Audio API handles capture and playback.
Architecture
┌──────────────────┐ ┌──────────────────┐
│ Web Audio API │ buffer │ Rust/Wasm │
│ (AudioContext) │────────▶│ (process audio) │
│ │◀────────│ │
│ 🔊 Speakers │ buffer │ DSP, filters, │
│ │ │ synthesis │
└──────────────────┘ └──────────────────┘

Setup
[dependencies]
wasm-bindgen = "0.2"
[dependencies.web-sys]
version = "0.3"
features = [
"AudioContext",
"AudioBuffer",
"AudioBufferSourceNode",
"GainNode",
"AudioDestinationNode",
]

Playing Generated Audio
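The playback snippet in this section calls a `generate_sine` export that the lesson doesn't list. Here is a minimal sketch of what it might look like, assuming the signature implied by the call site (sample rate, frequency in Hz, duration in seconds); the `#[wasm_bindgen]` attribute is left as a comment so the sketch compiles standalone:

```rust
use std::f32::consts::PI;

// #[wasm_bindgen]  // enable when building the actual Wasm crate
pub fn generate_sine(sample_rate: f32, freq: f32, duration: f32) -> Vec<f32> {
    let n = (sample_rate * duration) as usize;
    // One sample per frame: sin(2*pi*f*t) with t = i / sample_rate.
    (0..n)
        .map(|i| (2.0 * PI * freq * i as f32 / sample_rate).sin())
        .collect()
}
```

Because the return type is `Vec<f32>`, wasm-bindgen hands it to JavaScript as a `Float32Array`, which is exactly what `copyToChannel` expects below.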
import init, { generate_sine } from './pkg/audio_wasm.js';
await init();
const ctx = new AudioContext();
const samples = generate_sine(ctx.sampleRate, 440, 1.0); // A4 note, 1 second
// Create AudioBuffer from Wasm-generated data
const buffer = ctx.createBuffer(1, samples.length, ctx.sampleRate);
buffer.copyToChannel(samples, 0);
// Play it
const source = ctx.createBufferSource();
source.buffer = buffer;
source.connect(ctx.destination);
source.start();

Real-time Processing with AudioWorklet
For low-latency audio processing, use AudioWorklet:
// audio-processor.js (runs in the audio thread)
class WasmProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.wasmReady = false;
    this.port.onmessage = async (e) => {
      if (e.data.type === 'init') {
        // Load the Wasm module in the audio thread. Caveat: dynamic
        // import() is not supported in AudioWorkletGlobalScope in all
        // browsers; a more portable pattern is to compile the module on
        // the main thread and transfer it here through this port.
        const { default: init, low_pass_filter } = await import('./pkg/audio_wasm.js');
        await init();
        this.filter = low_pass_filter;
        this.wasmReady = true;
      }
    };
  }

  process(inputs, outputs) {
    if (!this.wasmReady || !inputs[0].length) return true;
    const input = inputs[0][0];
    const output = outputs[0][0];
    // Copy the input frame, then filter it in place with Rust/Wasm.
    output.set(input);
    this.filter(output, 1000, sampleRate); // 1000 Hz cutoff
    return true;
  }
}

registerProcessor('wasm-processor', WasmProcessor);

Common Audio DSP Functions
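The worklet above calls a `low_pass_filter` export that the lesson never defines. One plausible implementation is a one-pole RC low-pass; the signature here is an assumption inferred from the call site (samples, cutoff in Hz, sample rate), with the attribute commented out so the sketch compiles standalone:

```rust
use std::f32::consts::PI;

// #[wasm_bindgen]  // enable when building the actual Wasm crate
pub fn low_pass_filter(samples: &mut [f32], cutoff_hz: f32, sample_rate: f32) {
    // One-pole RC filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1]),
    // where alpha comes from the RC time constant of an analog low-pass.
    let rc = 1.0 / (2.0 * PI * cutoff_hz);
    let dt = 1.0 / sample_rate;
    let alpha = dt / (rc + dt);
    let mut y = 0.0f32;
    for s in samples.iter_mut() {
        y += alpha * (*s - y);
        *s = y;
    }
}
```

Note that wasm-bindgen supports `&mut [f32]` by copying the `Float32Array` into Wasm memory and writing the result back, which is why the worklet can pass `output` directly.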
Gain (volume)
#[wasm_bindgen]
pub fn apply_gain(samples: &mut [f32], gain: f32) {
    for sample in samples.iter_mut() {
        *sample *= gain;
        *sample = sample.clamp(-1.0, 1.0);
    }
}

Distortion
#[wasm_bindgen]
pub fn distortion(samples: &mut [f32], amount: f32) {
    for sample in samples.iter_mut() {
        let x = *sample * amount;
        // Soft clipping: x / (1 + |x|) maps any input into (-1, 1).
        *sample = x / (1.0 + x.abs());
    }
}

Reverb (simple feedback delay)
#[wasm_bindgen]
pub fn delay(samples: &mut [f32], delay_samples: usize, feedback: f32) {
    let len = samples.len();
    for i in delay_samples..len {
        // Mix each sample with a delayed, attenuated copy of the signal.
        samples[i] += samples[i - delay_samples] * feedback;
        samples[i] = samples[i].clamp(-1.0, 1.0);
    }
}

Performance: Why Wasm for Audio?
| Operation | JavaScript | Rust/Wasm |
|---|---|---|
| FFT (1024 samples) | ~0.5ms | ~0.08ms |
| Low-pass filter (44100 samples) | ~2ms | ~0.3ms |
| Waveform generation (1 sec) | ~5ms | ~0.8ms |
Audio has a hard real-time deadline: at 44.1 kHz, a 256-sample buffer lasts only about 5.8 ms, so each buffer must be fully processed within that window (an AudioWorklet's 128-frame render quantum leaves roughly 2.9 ms). JavaScript can miss this deadline under GC pauses or JIT deoptimization, which is heard as clicks and glitches; Rust/Wasm's steadier performance meets it consistently.
Try It
The starter code generates a sine wave, applies a low-pass filter, and measures volume. In a real project, these functions would process live audio buffers from a microphone or audio file.
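The "measures volume" step can be sketched as a root-mean-square (RMS) level meter. The name `rms` and its signature are illustrative, not taken from the starter code:

```rust
// #[wasm_bindgen]  // enable when building the actual Wasm crate
pub fn rms(samples: &[f32]) -> f32 {
    if samples.is_empty() {
        return 0.0;
    }
    // Root-mean-square: the square root of the average squared sample,
    // ranging from 0.0 (silence) up to 1.0 for a full-scale square wave.
    let sum_sq: f32 = samples.iter().map(|s| s * s).sum();
    (sum_sq / samples.len() as f32).sqrt()
}
```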