Introduction to web APIs - Learn web development

Your code interacts with APIs using one or more JavaScript objects, which serve as containers for the data the API uses (contained in object properties), and the functionality the API makes available (contained in object methods).

Note: If you are not already familiar with how objects work, you should go back and work through our JavaScript objects module before continuing.
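
For example, here's a quick sketch (you can try it in any browser's JavaScript console) using the document object, which represents the page itself:

console.log(document.title); // data: a property containing the page's title
const para = document.querySelector('p'); // functionality: a method that returns another object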

Let's return to the example of the Web Audio API — this is a fairly complex API, which consists of a number of objects. The most obvious ones are:

  • AudioContext, which represents an audio graph that can be used to manipulate audio playing inside the browser, and has a number of methods and properties available to manipulate that audio.
  • MediaElementAudioSourceNode, which represents an <audio> element containing sound you want to play and manipulate inside the audio context.
  • AudioDestinationNode, which represents the destination of the audio, i.e. the device on your computer that will actually output it — usually your speakers or headphones.
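
To see how these objects fit together in the simplest possible case, here is a minimal sketch (it uses an OscillatorNode as the source rather than an <audio> element, purely to keep it short — and note that browsers require a user interaction before a context will produce sound):

const ctx = new AudioContext();
const oscillator = ctx.createOscillator(); // a source node that generates a tone
oscillator.connect(ctx.destination); // wire the source straight to the speakers
oscillator.start(); // begin producing sound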

So how do these objects interact? If you look at our simple web audio example (see it live also), you'll first see the following HTML:

<audio src="outfoxing.mp3"></audio>
<button class="paused">Play</button>
<br />
<input type="range" min="0" max="1" step="0.01" value="1" class="volume" />

First of all, we include an <audio> element with which we embed an MP3 into the page. We don't include any default browser controls. Next, we include a <button> that we'll use to play and stop the music, and an <input> element of type range, which we'll use to adjust the volume of the track while it's playing.

Next, let's look at the JavaScript for this example.

We start by creating an AudioContext instance inside which to manipulate our track:

const AudioContext = window.AudioContext || window.webkitAudioContext;
const audioCtx = new AudioContext();
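
The window.webkitAudioContext fallback on the first line is there for older browsers — notably older versions of Safari — that only implement the vendor-prefixed form of the constructor.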

Next, we create constants that store references to our <audio>, <button>, and <input> elements, and use the AudioContext.createMediaElementSource() method to create a MediaElementAudioSourceNode representing the source of our audio — the <audio> element the audio will be played from:

const audioElement = document.querySelector('audio');
const playBtn = document.querySelector('button');
const volumeSlider = document.querySelector('.volume');

const audioSource = audioCtx.createMediaElementSource(audioElement);

Next up, we include a couple of event handlers that serve to toggle between play and pause when the button is pressed, and reset the display back to the beginning when the song has finished playing:

playBtn.addEventListener('click', () => {
  // check whether the context is in a suspended state (autoplay policy)
  if (audioCtx.state === 'suspended') {
    audioCtx.resume();
  }

  // if the track is stopped, play it
  if (playBtn.getAttribute('class') === 'paused') {
    audioElement.play();
    playBtn.setAttribute('class', 'playing');
    playBtn.textContent = 'Pause';
  // if the track is playing, pause it
  } else if (playBtn.getAttribute('class') === 'playing') {
    audioElement.pause();
    playBtn.setAttribute('class', 'paused');
    playBtn.textContent = 'Play';
  }
});

// when the track ends, revert the button to its "paused" state
audioElement.addEventListener('ended', () => {
  playBtn.setAttribute('class', 'paused');
  playBtn.textContent = 'Play';
});
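
One detail worth calling out: the audioCtx.state check at the top of the click handler exists because of browser autoplay policies. Browsers typically create an AudioContext in the 'suspended' state until the user interacts with the page, so we call resume() before trying to play anything.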

Note: Some of you may notice that the play() and pause() methods being used to play and pause the track are not part of the Web Audio API; they are part of the HTMLMediaElement API, which is different but closely related.
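
To make the dividing line between the two APIs concrete, here's a quick illustration (these lines aren't part of the example) of members that belong to each:

audioElement.currentTime = 0; // HTMLMediaElement: rewind the track to the start
console.log(audioElement.paused); // HTMLMediaElement: true while playback is paused
console.log(audioCtx.state); // Web Audio API: 'running', 'suspended', or 'closed'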

Next, we create a GainNode object using the AudioContext.createGain() method, which can be used to adjust the volume of audio fed through it, and create another event handler that changes the value of the audio graph's gain (volume) whenever the slider value is changed:

const gainNode = audioCtx.createGain();

volumeSlider.addEventListener('input', () => {
  gainNode.gain.value = volumeSlider.value;
});
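
To give a feel for the numbers involved (an illustration, not part of the example): gain is a linear multiplier applied to the signal's amplitude, which is why the slider's 0–1 range maps onto it directly:

gainNode.gain.value = 0; // silence
gainNode.gain.value = 0.5; // halve the signal's amplitude
gainNode.gain.value = 1; // pass the audio through unchanged
gainNode.gain.value = 2; // values above 1 amplify the signal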

The final thing to do to get this to work is to connect the different nodes in the audio graph up, which is done using the AudioNode.connect() method available on every node type:

audioSource.connect(gainNode).connect(audioCtx.destination);

The audio starts in the source, which is then connected to the gain node so the audio's volume can be adjusted. The gain node is then connected to the destination node so the sound can be played on your computer (the AudioContext.destination property represents whatever is the default AudioDestinationNode available on your computer's hardware, e.g. your speakers).
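
Incidentally, the chained call works because connect() returns the node you pass to it. The same graph could be wired up one connection at a time:

audioSource.connect(gainNode); // the source feeds into the gain node
gainNode.connect(audioCtx.destination); // the gain node feeds the speakers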