Blog

Creating an Electric Fan in Pure Data – Procedural Audio

This is a quick overview of how I went about synthetically creating the sound of an electric fan in Pure Data. For reference, I have outlined my approach, the model adopted, and the type of synthesis in the video’s description. I am aiming to create more content in the future that will elaborate on what these aspects mean in practice.

This electric fan prototype was made without a specific context in mind, but as I get to grips with current game engines and programming, I hope to place more emphasis on the practical benefits of using procedural methods. I am still new to the fields of procedural audio and audio programming, so I’ll likely revisit this work in the future, aiming for better sound quality and greater efficiency.

You can play around with the patch below: tick the start/stop box to engage the patch, and toggle the fan on and off with the Toggle_Switch:


Procedural Audio GAN’18 Talk

During this year’s round of Game Audio North talks, I gave a brief overview of procedural audio and looked at where it’s being used. I ran through the two primary approaches (top-down and bottom-up) and showed how you might tackle a project with a top-down mindset.

To demonstrate this, I created a Pure Data patch that simulated a recording I took of my convection fan. Without delving into the operational mechanics of the fan, I tried to approximate the features of the sound with simple signal generators and some filters.
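The actual patch is a visual Pure Data program, but the same top-down idea can be sketched offline in a few lines of numpy. The recipe below is an illustration of the general approach, not the patch itself, and every number in it (hum frequency, blade-pass rate, filter coefficient) is an assumption chosen for plausibility: a low sine pair stands in for the motor hum, and low-pass-filtered noise, amplitude-modulated at the blade-pass rate, stands in for the whoosh of the blades.

```python
import numpy as np

SR = 44100          # sample rate in Hz
DUR = 2.0           # seconds of audio to render
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

# Motor hum: a low sine plus a quieter harmonic (frequencies are illustrative).
hum = 0.3 * np.sin(2 * np.pi * 100 * t) + 0.1 * np.sin(2 * np.pi * 200 * t)

# Blade "whoosh": white noise through a simple one-pole low-pass filter.
noise = np.random.uniform(-1, 1, t.size)
lp = np.empty_like(noise)
alpha = 0.05        # smoothing coefficient: lower = darker noise
acc = 0.0
for i, x in enumerate(noise):
    acc += alpha * (x - acc)
    lp[i] = acc

# Amplitude-modulate the noise at an assumed blade-pass rate
# (e.g. 4 blades at roughly 12 revolutions per second).
blade_rate = 48.0
mod = 0.5 * (1 + np.sin(2 * np.pi * blade_rate * t))
fan = hum + 0.4 * mod * lp

# Normalise to [-1, 1] so the mix could be written to a sound file.
fan /= np.max(np.abs(fan))
```

In Pure Data the same structure maps onto [osc~] objects for the hum, [noise~] into [lop~] for the filtered noise, and a slow [osc~] driving a [*~] for the blade modulation.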

I also briefly covered some of the research I undertook for my final year tech project, which culminated in an attempt at procedurally generating footfalls using granular synthesis. The resultant footfall synthesis still leaves a lot to be desired, but it did highlight the level of depth and nuance involved in simple interactions. These sorts of insights will become even more important as video games try to become more immersive and interactive, especially within media such as VR.
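My footfall work was built in its own patching environment, but the core granular mechanism is easy to sketch: short windowed grains are read from random positions in a source recording and overlap-added into an output stream. The snippet below is a minimal, hypothetical illustration of that mechanism, with noise standing in for a real footstep recording and all grain parameters chosen arbitrarily.

```python
import numpy as np

SR = 44100

# Stand-in "recording": plain noise; in practice this would be a footstep sample.
source = np.random.uniform(-1, 1, SR)

grain_len = int(0.03 * SR)          # 30 ms grains
window = np.hanning(grain_len)      # fade each grain in and out to avoid clicks
hop = grain_len // 2                # 50% overlap between successive grains
n_grains = 20

out = np.zeros(hop * n_grains + grain_len)
rng = np.random.default_rng(0)
for g in range(n_grains):
    # Pick a random read position in the source for each grain.
    start = int(rng.integers(0, source.size - grain_len))
    grain = source[start:start + grain_len] * window
    pos = g * hop
    out[pos:pos + grain_len] += grain   # overlap-add into the output

out /= np.max(np.abs(out))
```

Varying the grain length, density, and read positions per step is what gives granular footfalls their variation; getting those mappings to feel natural is exactly where the depth and nuance mentioned above comes in.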

Click the image below to access the slides for the presentation:


Convolution for Creative Sound Design

During my second year at university, I became really interested in digitally processing and manipulating audio. Endless debates rage online about whether analog is better than digital, but I feel it comes down to which is better suited to the task at hand.

Convolution is an example of a type of processing that can only happen in the digital domain, and with some experimentation it can lead to amazing and pleasingly unpredictable sounds. Inspired by sound designers such as Diego Stocco, my GAN talk gives an overview of some creative ways to use convolution, outside of its more traditional use as a method for creating reverb. I cover rhythmic convolution, emulating a guitar cab, and creating dub-style delays, but many possibilities still exist beyond these.
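To make the rhythmic-convolution idea concrete, here is a minimal numpy sketch, under assumed parameters (tempo, click spacing, burst length are all illustrative): convolving a pattern of clicks with a short texture re-triggers that texture on every click, which is the essence of the technique.

```python
import numpy as np

SR = 44100

# A rhythmic "impulse" pattern: one second containing four clicks, 250 ms apart.
pattern = np.zeros(SR)
for beat in range(4):
    pattern[int(beat * SR * 0.25)] = 1.0

# A short decaying noise burst standing in for any source texture
# (an impulse response, a recorded hit, a sustained tone, etc.).
burst_len = int(0.1 * SR)
burst = np.random.uniform(-1, 1, burst_len) * np.exp(-np.linspace(0, 6, burst_len))

# Convolution superimposes a copy of the burst at every click in the pattern.
out = np.convolve(pattern, burst)
out /= np.max(np.abs(out))
```

Swapping either input for richer material (a drum loop as the pattern, a long pad as the texture) is where the unpredictable results start to appear; for long files an FFT-based convolution would be used instead of direct `np.convolve`.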


Signal : GGJ ’17

For this year’s Global Game Jam (2017) I chose Birmingham City University as my jam site. I decided to go without a premade team, as I was hoping to meet some talented developers. Fortunately, that’s exactly what happened! And we were even fortunate enough to win the award sponsored by BCU. With the theme of ‘Waves’, we decided on a horror-themed game in which you navigate using an echolocation ability.

Click the link below to check it out:

https://abercromby3.itch.io/signal

We created the game in Unity using its new collaboration feature, which made the whole process very streamlined. I was surprised that it worked as well as it did, especially regarding FMOD (the middleware I used for implementation).

New assets were easily uploaded and quickly implemented, providing instant gratification that kept the team inspired. Until then, I’d been used to working on games that had placeholder assets (or no sound at all), with any new assets uploaded in batches, or at the very end. This system made it quick and easy to make changes, and FMOD allowed me to mix and tweak the audio throughout the project, rather than doing it all at the end.