A picturesque beach with gently lapping waves, a boat ride through a swamp, some shots of marine animals at an aquarium. Google software engineer Vinay Bettadapura captured approximately 26.5 hours of footage on a two-week, coast-to-coast road trip across the southern United States.
Rather than manually edit his footage together, Bettadapura teamed up with Georgia Institute of Technology (Georgia Tech) PhD student Daniel Castro to create an algorithm that would do the work for him.
The result: a 38-second highlight video compiled in three hours.
“We can tweak the weights in our algorithm based on the user’s aesthetic preferences,” Bettadapura said in a statement. He completed his PhD at Georgia Tech in the fall.
“By incorporating facial recognition, we can further adapt the system to generate highlights that include people the user cares about,” he added.
Bettadapura’s footage was recorded on a Contour Action Camera, which captured GPS data for each shot. Using the geolocation data, the algorithm reduced the footage to 16 hours. Next, shot boundary detection cut the video further, to 10.2 hours. Finally, artistic qualities such as color vibrancy, composition, and symmetry were taken into consideration.
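The staged filtering described above could be sketched roughly as follows. This is a hypothetical illustration, not the authors' actual implementation: the clip fields, thresholds, and aesthetic weights are all assumptions made for the example.

```python
# Hypothetical sketch of the multi-stage highlight pipeline described above.
# All metadata fields, thresholds, and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Clip:
    start: float             # seconds into the full footage
    duration: float          # clip length in seconds
    gps_displacement: float  # metres the camera moved during the clip
    frame_diff: float        # mean inter-frame difference (shot-boundary cue)
    vibrancy: float          # aesthetic scores, each in [0, 1]
    composition: float
    symmetry: float

def aesthetic_score(c, w=(0.4, 0.3, 0.3)):
    """Weighted aesthetic score; the weights could be tuned per user."""
    return w[0] * c.vibrancy + w[1] * c.composition + w[2] * c.symmetry

def select_highlights(clips, target_seconds):
    # Stage 1: geolocation filter -- drop clips where the camera barely moved.
    moving = [c for c in clips if c.gps_displacement > 5.0]
    # Stage 2: shot-boundary cue -- keep clips that open a new shot.
    shots = [c for c in moving if c.frame_diff > 0.3]
    # Stage 3: rank surviving shots by aesthetic quality.
    ranked = sorted(shots, key=aesthetic_score, reverse=True)
    # Greedily fill the highlight reel up to the target length.
    reel, total = [], 0.0
    for c in ranked:
        if total + c.duration <= target_seconds:
            reel.append(c)
            total += c.duration
    return sorted(reel, key=lambda c: c.start)  # back to chronological order
```

In this sketch, each stage discards clips rather than re-scoring them, which mirrors the article's successive reductions (26.5 hours to 16 to 10.2) before aesthetic ranking produces the short final reel.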
The duo presented the algorithm at WACV 2016, the IEEE Winter Conference on Applications of Computer Vision, and plan to continue developing it.