Researchers at MIT have demonstrated a deep-learning algorithm that can predict the sounds objects make when struck. Over several months, the researchers recorded roughly 1,000 videos containing an estimated 46,000 sounds of various objects being hit, scraped, and prodded with a drumstick. The team then fed those videos to a deep-learning algorithm that deconstructed the sounds and analyzed their pitch, loudness, and other features. To predict the sound of a new video, the algorithm estimates the sound properties for each frame of that video and matches them to the most similar sounds in the database. The advance could lead to better sound effects for film and television, and to robots with an improved understanding of objects in their surroundings.
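The matching step described above can be sketched as a simple nearest-neighbor lookup: given sound features estimated for a video frame, find the most similar sound in a database. This is an illustrative sketch only, not the researchers' actual implementation; the feature choices (pitch, loudness), sound names, and function names are all assumptions.

```python
import math

def feature_distance(a, b):
    # Euclidean distance between two small feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_sound(predicted_features, sound_database):
    # sound_database: list of (name, feature_vector) pairs.
    # Returns the entry whose features are closest to the prediction.
    return min(sound_database,
               key=lambda item: feature_distance(predicted_features, item[1]))

# Toy database: each sound summarized as (pitch in Hz, loudness 0-1).
# These entries are made up for illustration.
database = [
    ("wood_tap",    [220.0, 0.4]),
    ("metal_clang", [880.0, 0.9]),
    ("leaf_rustle", [110.0, 0.2]),
]

# Features a model might predict for one frame of a new, silent video.
predicted = [240.0, 0.5]
name, _ = match_sound(predicted, database)
print(name)  # wood_tap
```

In the real system, a learned model produces far richer audio features per frame; the retrieval idea, however, is essentially this lookup.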
Join the conversation with David Schatsky by subscribing to Exponentials.