Sound quality on phones, video recorders and dictaphones is often poor: distorted or noisy, with garbled speech or indistinct music. Acoustic scientists at the University of Salford have developed an algorithm to improve user-generated recordings, after tests revealed the extent to which consumers are struggling to control quality.
A team led by Professor Trevor Cox at the University of Salford asked thousands of volunteers to explain what they thought was interfering with the quality of sound on clips recorded in living rooms, on the street and at gigs, including at the Glastonbury Festival.
“People are often disappointed when they play their recordings back, after a concert or a party, but there is a real lack of understanding as to why,” explains Cox, professor of acoustic engineering and author of Sonic Wonderland.
Tag sound quality
“It could be microphone handling noise, distortion, wind noise or a range of other conditions. What we have worked out is a way of automatically assessing the relative impact of these sound errors.”
The algorithm, which makes it possible to tag content and quality, has already been applied to an app for assessing wind noise, which alerts the user when there is significant risk of the sound being affected.
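The article does not describe the detection method itself, but a common way to flag wind noise is to measure how much of a recording's energy sits at low frequencies, where wind rumble concentrates. The sketch below illustrates that idea; the function name, cutoff and threshold are illustrative assumptions, not the project's actual algorithm.

```python
import numpy as np

def wind_noise_risk(samples, sample_rate, cutoff_hz=200.0, threshold=0.6):
    """Rough wind-noise risk score: fraction of spectral energy below cutoff_hz.

    Illustrative sketch only -- the Good Recording Project's published
    algorithm is more sophisticated than this single-feature heuristic.
    """
    spectrum = np.abs(np.fft.rfft(samples)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return 0.0, False
    low_fraction = spectrum[freqs < cutoff_hz].sum() / total
    return low_fraction, low_fraction > threshold          # score, alert flag

# Synthetic check: 50 Hz "wind-like" rumble vs. a clean 1 kHz tone
rate = 8000
t = np.arange(rate) / rate
rumble = np.sin(2 * np.pi * 50 * t)
tone = np.sin(2 * np.pi * 1000 * t)

score_rumble, risky_rumble = wind_noise_risk(rumble, rate)
score_tone, risky_tone = wind_noise_risk(tone, rate)
```

On the synthetic signals, nearly all of the rumble's energy falls below the cutoff, so it triggers the alert, while the 1 kHz tone does not.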
The three-year Good Recording Project, led by Salford University, is a response to increasing demand from consumers and from broadcasters, who often use amateur footage that is compromised by poor sound quality.
Lagging behind cameras
“We’re used to visual processing improving our photos, such as the camera that spots faces and changes exposure, but we have not had the same tools to do the audio equivalent,” added Cox.
Rapid quality assessment could determine whether the sound is of broadcast quality without time-consuming manual auditioning.
The £0.5 million project was funded by the Engineering and Physical Sciences Research Council and run in collaboration with BBC R&D and The British Sound Archive. The research is published in the journal PLOS ONE and in the Journal of the Audio Engineering Society. It will be presented at the Audio Engineering Society Convention in New York (October 29 – November 1, 2015).
The research project was carried out by Prof Trevor Cox, Dr Bruno Fazenda, Dr Iain Jackson, Dr Francis Li and Paul Kendrick.