The effectiveness of highway billboards and posters may soon be better understood, as researchers have developed a method that measures gaze direction using ordinary cameras.
Researchers from Osaka University, the Excellence Cluster at Saarland University and the Max Planck Institute for Informatics have developed a new method based on next-generation algorithms for estimating gaze direction.
“Until now, if you were to hang an advertising poster in the pedestrian zone and wanted to know how many people actually looked at it, you would not have had a chance,” Andreas Bulling, who leads the independent research group “Perceptual User Interfaces” at the Excellence Cluster at Saarland University and the Max Planck Institute for Informatics, said in a statement.
The researchers used a machine learning technique known as “deep learning,” based on multi-layer neural networks, that is currently being used in several areas of industry and business.
The method relies on a so-called “clustering” of the estimated gaze directions. It is the same strategy used when distinguishing apples from pears according to various characteristics, without having to explicitly specify how the two differ.
The most likely clusters are identified and the gaze direction estimates they contain are used for the training of a target-object-specific eye contact detector.
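The clustering-and-training idea described above can be illustrated with a minimal sketch. The data, the grid-based clustering, and the threshold detector here are all simplified stand-ins (the researchers' actual pipeline uses learned gaze estimators, not these hypothetical numbers):

```python
import random
import math

random.seed(0)

# Hypothetical 2-D gaze direction estimates (yaw, pitch in degrees).
# Glances at the target object form a dense cluster; the rest are scattered.
target = [(random.gauss(0.0, 1.5), random.gauss(-5.0, 1.5)) for _ in range(80)]
noise = [(random.uniform(-40, 40), random.uniform(-30, 30)) for _ in range(40)]
samples = target + noise

# Simple grid-based clustering: bucket estimates into 5-degree cells and
# treat the most populated cell as the most likely cluster.
CELL = 5.0
buckets = {}
for yaw, pitch in samples:
    key = (round(yaw / CELL), round(pitch / CELL))
    buckets.setdefault(key, []).append((yaw, pitch))

best_key = max(buckets, key=lambda k: len(buckets[k]))
cluster = buckets[best_key]

# Use the cluster as positive training data for a trivial detector:
# the mean gaze direction of the cluster plus a radius threshold.
mean_yaw = sum(p[0] for p in cluster) / len(cluster)
mean_pitch = sum(p[1] for p in cluster) / len(cluster)
RADIUS = 7.5

def looks_at_target(yaw, pitch):
    """Hypothetical eye contact detector trained from the cluster."""
    return math.hypot(yaw - mean_yaw, pitch - mean_pitch) <= RADIUS

print(looks_at_target(0.5, -4.0))   # a glance near the cluster centre
print(looks_at_target(30.0, 20.0))  # a glance far from the target
```

A real system would replace the synthetic samples with per-frame gaze estimates from a neural network and the radius test with a trained classifier, but the overall flow, cluster first, then train a target-specific detector from the cluster, is the same.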
This procedure requires no involvement from the user, and it can improve further the longer the camera remains next to the target object and records data.
“In this way, our method turns normal cameras into eye contact detectors, without the size or position of the target object having to be known or specified in advance,” Bulling said. “Our method currently assumes that the nearest cluster belongs to the target object and ignores the other clusters.
“This limitation is what we will tackle next,” he added. “It paves the way not only for new user interfaces that automatically recognize eye contact and react to it but also for measurements of eye contact in everyday situations, such as outdoor advertising, that were previously impossible.”
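Bulling's nearest-cluster assumption can be sketched in a few lines. Because the camera is mounted on the target object, gaze toward the target appears roughly as gaze toward the camera, i.e. near (0, 0) in gaze coordinates; the cluster centres below are hypothetical:

```python
import math

# Hypothetical cluster centres (yaw, pitch in degrees) found among the
# gaze direction estimates.
clusters = [(1.2, -3.8), (25.0, 10.0), (-30.0, 5.0)]

# Assumption from the article: keep only the cluster nearest the camera,
# at gaze direction (0, 0), and ignore the other clusters.
nearest = min(clusters, key=lambda c: math.hypot(c[0], c[1]))
print(nearest)
```

The limitation Bulling mentions follows directly: with several target objects in view, this rule keeps only one cluster and cannot say which object the discarded clusters belong to.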
The researchers tested the method in a workspace with a camera mounted on a target object, and in an everyday situation where a user wore an on-body camera so that it captured a first-person perspective.
“We can in principle identify eye contact clusters on multiple target objects with only one camera, but the assignment of these clusters to the various objects is not yet possible,” Bulling said.
Previously, such information could only be captured by measuring gaze direction with special eye-tracking equipment that needed minutes-long calibration and required every participant to wear a tracker.
Real-world studies with people in a pedestrian zone, or with multiple people at once, can therefore be very complicated and are often impossible.
Even when a camera was placed at the target object and machine learning was used, the computer had to be trained with a sufficient quantity of sample data, and even then only glances at the camera itself could be recognized.
However, the difference between the training data and the data in the target environment was often too great, so more research is still needed.