Procedural Audio GAN’18 Talk

For the last few months, I’ve been doing loads of research into procedural audio, investigating whether it’s yet accessible enough for the average sound designer to get started with. I’ve concluded that it definitely is: if you’re tech-savvy enough to operate a synthesiser, you can get to grips with procedural audio in no time.

During this year’s round of Game Audio North talks, I gave a brief overview of procedural audio and had a look at where it’s being used. I ran through the two primary approaches (top-down and bottom-up) and showed how you might tackle a project with a top-down mindset. To demonstrate this, I created a Pure Data patch that simulated a recording I took of my convection fan. Without delving into the operational mechanics of the fan, I tried to approximate the features of the sound with simple signal generators and some filters.

Looking at a spectrogram of the recording, I determined that I could simply use some band-passed noise and a harmonic signal generator.

Sinesum sets the amplitudes of the harmonics, which I set roughly to mimic the spectrogram. Phasor then reads through the table 500 times per second, creating the 500Hz fundamental. The noise is band-passed around 500Hz to give it more energy in that region, and further low-passed to approximate a pink-noise distribution.
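If you don’t have Pure Data to hand, the same signal chain can be sketched offline in Python. This is a rough approximation, assuming NumPy and SciPy; the harmonic amplitudes, filter orders, and cutoffs here are illustrative guesses, not the values from the patch:

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100   # sample rate
F0 = 500.0   # fundamental frequency (Hz)
n = SR       # one second of output

# Wavetable of summed harmonics, analogous to Pd's sinesum message:
# amplitudes chosen to roughly mimic a decaying harmonic spectrum.
harmonic_amps = [1.0, 0.5, 0.3, 0.15, 0.08]
table_len = 2048
phase = np.linspace(0, 2 * np.pi, table_len, endpoint=False)
table = sum(a * np.sin((k + 1) * phase) for k, a in enumerate(harmonic_amps))

# A phasor reads through the table F0 times per second -> 500Hz fundamental.
idx = (np.arange(n) * F0 * table_len / SR).astype(int) % table_len
tonal = table[idx]

# White noise band-passed around 500Hz, then low-passed to tilt the
# spectrum towards a pink-noise-like distribution.
noise = np.random.randn(n)
b, a = butter(2, [400 / (SR / 2), 600 / (SR / 2)], btype="band")
banded = lfilter(b, a, noise)
b2, a2 = butter(1, 1000 / (SR / 2))
fan = 0.7 * tonal + 0.3 * lfilter(b2, a2, banded)
```

The wavetable-plus-phasor structure is the key idea: a single table lookup gives you the whole harmonic series at once, just as it does in the patch.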

Finally, to make it interactive, a recording of the click is stored in a table and played back every time the switch is toggled. A lowpass filter is swept logarithmically, which mimics the non-linear build-up and decay of the harmonics.
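The sweep itself is easy to approximate offline. Below is a rough Python sketch of a one-pole lowpass whose cutoff is swept logarithmically over a harmonic-rich tone; the `sweep_lowpass` helper, the 500Hz square wave, and the 50Hz–8kHz range are illustrative stand-ins, not the values from the patch:

```python
import numpy as np

SR = 44100

def sweep_lowpass(x, f_start, f_end, sr=SR):
    """One-pole lowpass whose cutoff sweeps logarithmically
    from f_start to f_end over the length of x."""
    cutoffs = np.geomspace(f_start, f_end, len(x))     # logarithmic sweep
    alphas = 1.0 - np.exp(-2 * np.pi * cutoffs / sr)   # per-sample coefficient
    y = np.empty(len(x))
    state = 0.0
    for i, a in enumerate(alphas):
        state += a * (x[i] - state)
        y[i] = state
    return y

t = np.arange(SR) / SR
tone = np.sign(np.sin(2 * np.pi * 500 * t))  # harmonic-rich 500Hz square wave

# "Switch on": the cutoff sweeps up, so upper harmonics fade in
# non-linearly; reversing the sweep gives the decay on toggle-off.
fade_in = sweep_lowpass(tone, 50.0, 8000.0)
fade_out = sweep_lowpass(tone, 8000.0, 50.0)
```

Because the cutoff moves geometrically rather than linearly, the harmonics arrive (and leave) at an accelerating rate, which is what gives the on/off transition its non-linear feel.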

The patches and recordings are freely available to download below, as well as the slides for the presentation, which have loads of great resources for getting started with procedural audio.

Procedural Audio GAN’18 PowerPoint Slides

3D Audio Desktop

For my 2nd year Creative Tech module, I created a virtual 3D desktop listening environment using convolution. I captured 5.1 surround impulse responses with a binaural dummy head, and convolved discrete channels of internally routed audio through a modular patching environment. Check out the video below for a demo of what it can do!
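In outline, the convolution stage works like this rough Python sketch. The channel layout, IR lengths, and the placeholder decaying-noise IRs are illustrative assumptions; the real version used the captured binaural impulse responses:

```python
import numpy as np
from scipy.signal import fftconvolve

SR = 48000
n_ir = SR // 2  # half-second impulse responses (illustrative length)
channels = ["L", "R", "C", "LFE", "Ls", "Rs"]

# One stereo (left ear / right ear) binaural IR per surround channel,
# as captured with the dummy head. Placeholder decaying noise here.
decay = np.exp(-np.linspace(0, 6, n_ir))[:, None]
irs = {ch: np.random.randn(n_ir, 2) * decay for ch in channels}

def binauralise(mix, irs):
    """Convolve each discrete surround channel with its binaural IR
    and sum the results into a two-channel (L/R) output."""
    out = np.zeros((mix.shape[0] + n_ir - 1, 2))
    for i, ch in enumerate(channels):
        for ear in range(2):
            out[:, ear] += fftconvolve(mix[:, i], irs[ch][:, ear])
    return out

surround = np.random.randn(SR, len(channels)) * 0.1  # placeholder 5.1 feed
stereo = binauralise(surround, irs)
```

The core trick is that each source channel carries its own pair of room-plus-head impulse responses, so summing the twelve convolutions reconstructs what the dummy head would have heard from the full speaker array.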

Signal : GGJ ’17

For this year’s Global Game Jam I chose Birmingham City University as my jam site. I decided to go without a premade team, hoping to meet some talented developers. Fortunately, that’s exactly what happened, and we were even lucky enough to win the award sponsored by BCU. With the theme of ‘Waves’, we decided on a horror-themed game that requires using an echolocation ability to navigate. Click the link below to check it out!

We created the game in Unity using its new collaboration feature, which made the whole process incredibly streamlined. I was surprised that it worked as well as it did, especially regarding FMOD (the middleware I used for implementation). New assets were easily uploaded and quickly implemented, providing instant gratification that kept the team inspired. Until then, I’d gotten used to working on games that had placeholder assets (or no sound at all), with any new assets uploaded in batches, or at the very end. This system made it quick and easy to make changes, and FMOD allowed me to mix and tweak the audio throughout the project, rather than leaving it as a mad rush at the end.