New software launched by researchers at Birmingham City University aims to cut the lengthy training and expensive equipment traditionally required to make music, while also giving musicians more intuitive control over the music they produce.
The software, showcased at the British Science Festival, trains computers to understand the language musicians use when applying effects to their music.
The software, developed under the SAFE project, uses artificial intelligence to allow a computer to perceive sounds in much the way a human listener does. Its development was motivated by the lack of statistically defined, transferable semantic terms (meaningful descriptive words) in music production.
Rather than adjusting technical parameters, users process sounds with keywords such as ‘warm’, ‘crunchy’ or ‘dreamy’. Users can also label the sounds they create with keywords of their own; over time this groups whole families of sounds together and makes it easier for musicians to search for a specific type of sound.
By collecting these labels, the software builds a computational definition of words such as ‘dreamy’, allowing a computer to retrieve the sound, or set of sounds, that listeners would consider ‘dreamy’.
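As a rough sketch of the idea, rather than the SAFE project’s actual implementation, the mapping from keywords to effect settings could work along these lines: each user label contributes one set of effect parameters, and the system averages them into a profile per keyword. All class, function and parameter names below are illustrative assumptions.

```python
from collections import defaultdict
from statistics import mean

# Illustrative sketch only: aggregate crowdsourced descriptor labels into
# averaged effect settings, so a keyword can stand in for raw parameters.
# Names and values are hypothetical, not the SAFE project's API.

class SemanticEffectMap:
    def __init__(self):
        # keyword -> list of parameter dicts contributed by users
        self._labels = defaultdict(list)

    def add_label(self, keyword, params):
        """Store one user's effect settings under a descriptive keyword."""
        self._labels[keyword.lower()].append(params)

    def settings_for(self, keyword):
        """Average all contributed settings for a keyword into one profile."""
        entries = self._labels[keyword.lower()]
        if not entries:
            raise KeyError(f"no sounds labelled {keyword!r} yet")
        param_names = entries[0].keys()
        return {name: mean(e[name] for e in entries) for name in param_names}

# Usage: two users label their EQ settings as 'warm'; a third can then
# simply ask for a 'warm' sound instead of dialling in numbers.
fx = SemanticEffectMap()
fx.add_label("warm", {"low_gain_db": 4.0, "high_gain_db": -3.0})
fx.add_label("warm", {"low_gain_db": 6.0, "high_gain_db": -1.0})
print(fx.settings_for("warm"))  # {'low_gain_db': 5.0, 'high_gain_db': -2.0}
```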
Dr. Ryan Stables, lecturer in audio engineering and acoustics at Birmingham City University and lead researcher on the SAFE project, said: “When we started the project, we were really keen to try and simplify the whole process of music production for those who were untrained in the area.
“Musicians can often spend their whole lives mastering their instrument, but then when they come to the production stage, it’s very difficult for them to produce a well-recorded piece of music. The SAFE project aims to overcome this and gives musicians and music production novices the ability to be creative with their music.”