FAQ

Can the audio start playing before the waveform is drawn?

Yes, if you use the backend: 'MediaElement' option. See here: http://wavesurfer-js.org/example/audio-element/. The audio will start playing as soon as you press play. A thin line will be displayed until the whole audio file has been downloaded and decoded, at which point the full waveform is drawn.
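
For example, a minimal setup might look like this (the container selector and audio URL here are placeholders):

// Create an instance backed by an HTML5 media element, so playback
// can start before the whole file has been downloaded and decoded.
var wavesurfer = WaveSurfer.create({
    container: '#waveform',
    backend: 'MediaElement'
});
wavesurfer.load('audio.mp3');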

Can drawing be done as file loads?

No. Web Audio needs the whole file before it can decode it in the browser. You can, however, load pre-decoded waveform data to draw the waveform immediately. See here (the "Pre-recorded Peaks" section), and the last answer in this FAQ for a complete walkthrough.
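
The call itself is a one-liner. A minimal sketch, assuming peaks is an already-available array of normalized peak values:

// Passing pre-decoded peaks as the second argument lets wavesurfer.js
// draw the waveform without decoding the audio first.
wavesurfer.load('audio.mp3', peaks);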

Can I make the audio start playing automatically on iOS?

Nope. It's a known limitation: iOS won't let you start audio playback programmatically. The audio won't play until the user interacts with the page. It's a power- and bandwidth-saving feature of iOS Safari.
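
The usual workaround is to start playback from a user gesture handler. A minimal sketch (the button selector is a placeholder and wavesurfer is assumed to be an existing instance):

document.querySelector('#play-button').addEventListener('click', function () {
    // This call is allowed because it happens inside a user gesture.
    wavesurfer.play();
});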

How do I generate waveform data on the server?

You can use the audiowaveform program. For example, let's generate peaks for an MP3 file called 'long_clip.mp3'.

Generate JSON-formatted peaks data from the file long_clip.mp3:

audiowaveform -i long_clip.mp3 -o long_clip.json --pixels-per-second 20 --bits 8

To generate waveforms for each audio channel separately, add the '--split-channels' flag:

audiowaveform -i long_clip.mp3 -o long_clip.json --pixels-per-second 20 --bits 8 --split-channels
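
For reference, the resulting JSON file has roughly the following shape (the values here are illustrative, not real output). The "data" array holds interleaved min/max peak pairs, which is what the script below expects:

{
  "version": 2,
  "channels": 2,
  "sample_rate": 44100,
  "samples_per_pixel": 2205,
  "bits": 8,
  "length": 800,
  "data": [-65, 63, -40, 41, -66, 64, ...]
}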

Then normalize the peak data so the values stay between -1 and 1 using the Python script below:

python scale_json.py long_clip.json

import sys
import json


def scale_json(filename):
    with open(filename, "r") as f:
        file_content = f.read()

    json_content = json.loads(file_content)
    data = json_content["data"]
    # version-1 files (no --split-channels) have no "channels" field
    channels = json_content.get("channels", 1)
    # number of decimals to use when rounding the peak value
    digits = 2

    max_val = float(max(data))
    new_data = []
    for x in data:
        new_data.append(round(x / max_val, digits))
    # audiowaveform generates interleaved peak data when the --split-channels
    # flag is used, so we have to deinterleave it
    if channels > 1:
        deinterleaved_data = deinterleave(new_data, channels)
        json_content["data"] = deinterleaved_data
    else:
        json_content["data"] = new_data
    file_content = json.dumps(json_content, separators=(',', ':'))

    with open(filename, "w") as f:
        f.write(file_content)


def deinterleave(data, channelCount):
    # First, separate the values for each audio channel and min/max value pair,
    # giving an array with channelCount * 2 arrays
    deinterleaved = [data[idx::channelCount * 2] for idx in range(channelCount * 2)]
    new_data = []

    # Then combine each min and max value again into one array, so we end up
    # with one array per channel
    for ch in range(channelCount):
        idx1 = 2 * ch
        idx2 = 2 * ch + 1
        ch_data = [None] * (len(deinterleaved[idx1]) + len(deinterleaved[idx2]))
        ch_data[::2] = deinterleaved[idx1]
        ch_data[1::2] = deinterleaved[idx2]
        new_data.append(ch_data)
    return new_data


if __name__ == '__main__':
    if len(sys.argv) < 2:
        print("Usage: python scale_json.py file.json")
        sys.exit(1)
    filename = sys.argv[1]
    scale_json(filename)

You can now fetch the long_clip.json file and pass the peaks data to wavesurfer.js:

fetch('../long_clip.json')
.then(response => {
    if (!response.ok) {
        throw new Error("HTTP error " + response.status);
    }
    return response.json();
})
.then(peaks => {
    console.log('loaded peaks! sample_rate: ' + peaks.sample_rate);

    // load the audio file together with the pre-computed peaks
    wavesurfer.load('../long_clip.mp3', peaks.data);
})
.catch((e) => {
    console.error('error', e);
});
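
If you generated the peaks with --split-channels, peaks.data will contain one peaks array per channel. To render the channels separately, you can enable the splitChannels option when creating the instance (again with a placeholder container):

var wavesurfer = WaveSurfer.create({
    container: '#waveform',
    // draw each audio channel as its own waveform
    splitChannels: true
});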
