The combination of thick fog and autonomous vehicles could be dangerous and potentially deadly.
That is why a team from the Massachusetts Institute of Technology (MIT) has created a new computational-photography system for self-driving navigation. It can produce images of objects shrouded in fog so thick that human vision cannot penetrate it, while also gauging each object’s distance from the vehicle.
“I decided to take on the challenge of developing a system that can see through actual fog,” Guy Satat, a graduate student in the MIT Media Lab, who led the research, said in a statement. “We’re dealing with realistic fog, which is dense, dynamic, and heterogeneous. It is constantly moving and changing, with patches of denser or less-dense fog. Other methods are not designed to cope with such realistic scenarios.”
To test the system, the researchers used a small tank of water with a vibrating motor from a humidifier immersed in it. The system was able to resolve images of objects and gauge their depth at a range of 57 centimeters in fog so dense that human vision could only penetrate 36 centimeters.
The team believes that in the real world, a typical fog would provide visibility of about 30 to 50 meters.
The new system uses a time-of-flight camera that fires ultrashort bursts of laser light into a scene and measures the time it takes their reflections to return.
On a clear day, the light’s return time faithfully indicates the distances of the objects that reflected it. However, fog causes light to scatter and bounce in random ways. In foggy weather, most of the light that reaches the camera’s sensor will have been reflected by airborne water droplets, not by the kinds of objects autonomous vehicles need to avoid.
Even the light that does reflect from potential obstacles arrives at different times, having been deflected by water droplets on both the way out and the way back.
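In the clear-day case, the principle reduces to a one-line formula: distance is the speed of light times the round-trip time, halved. A minimal sketch (the 200-nanosecond example value is illustrative, not from the article):

```python
C = 299_792_458  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to a reflector, given the round-trip travel time of a light pulse."""
    return C * round_trip_seconds / 2

# A pulse returning after 200 nanoseconds indicates an object about 30 m away.
print(tof_distance(200e-9))  # ≈ 29.98 m
```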
To work around this problem, the new system uses the patterns produced by fog-reflected light, which vary based on the fog’s density. The researchers showed that regardless of the density of the fog, the arrival times of the reflected light adhere to a statistical pattern called a gamma distribution.
Gamma distributions can be asymmetrical and can take a wide variety of shapes, and each is described by two variables. The MIT system estimates the values of those variables on the fly and uses the resulting distribution to filter fog reflection out of the light signal that reaches the time-of-flight camera’s sensor.
The system calculates a different gamma distribution for each of the 1,024 pixels in the sensor, which enables it to handle the variations in fog density.
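Because fog density varies across the scene, fitting one distribution per pixel lets each part of the image track its local conditions. A hedged sketch of that idea on synthetic data, with invented per-pixel fog parameters and SciPy’s generic gamma fitter standing in for whatever estimator the real system uses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical setup: 1,024 sensor pixels (as in the article), each recording
# photon arrival times drawn from its own fog distribution, since density
# varies from pixel to pixel.
n_pixels = 1024
per_pixel_params = []
for px in range(n_pixels):
    fog_scale = rng.uniform(10e-9, 40e-9)  # denser or thinner fog per pixel
    arrivals = rng.gamma(shape=2.0, scale=fog_scale, size=300)
    # Fit a gamma distribution to this pixel's arrival times (location fixed at 0).
    k, loc, theta = stats.gamma.fit(arrivals, floc=0)
    per_pixel_params.append((k, theta))
```

Each pixel ends up with its own fitted (shape, scale) pair, which is what lets the filtering step adapt to patchy, heterogeneous fog.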
The camera counts the number of light particles, or photons, that reach it every 56 picoseconds, or trillionths of a second, which the system uses to produce a histogram indicating the photon counts for each interval. It then finds the gamma distribution that best fits the shape of the histogram and subtracts the associated photon counts from the measured totals.
What remain are slight spikes at the distances that correlate with physical obstacles.
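The steps above can be sketched end to end on synthetic data. Everything numerical here is invented for illustration: the fog parameters, photon counts, and obstacle distance are assumptions, and SciPy’s gamma fitter stands in for the system’s actual estimator.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
C = 299_792_458   # speed of light, m/s
BIN = 56e-12      # 56-picosecond counting intervals, as in the article

# Hypothetical scene: an obstacle 10 m away behind heavy fog.
true_dist = 10.0
t_obstacle = 2 * true_dist / C  # round-trip time ≈ 66.7 ns

# Fog photons: arrival times follow a gamma distribution (invented parameters).
fog_times = rng.gamma(shape=2.0, scale=20e-9, size=50_000)
# Obstacle photons: a narrow pulse around the true round-trip time.
obj_times = rng.normal(t_obstacle, 0.2e-9, size=2_000)
times = np.concatenate([fog_times, obj_times])

# Histogram of photon counts per 56 ps interval.
edges = np.arange(0, 200e-9, BIN)
counts, _ = np.histogram(times, bins=edges)
centers = edges[:-1] + BIN / 2

# Fit a gamma distribution to the measured arrival times; the fog dominates,
# so the fit tracks the background reflections.
shape, loc, scale = stats.gamma.fit(times, floc=0)
background = stats.gamma.pdf(centers, shape, loc=loc, scale=scale) * len(times) * BIN

# Subtract the fitted background; the residual spike marks the obstacle.
residual = counts - background
t_peak = centers[np.argmax(residual)]
est_dist = C * t_peak / 2
print(f"estimated distance: {est_dist:.2f} m")
```

The residual spike sits at the bin whose round-trip time matches the obstacle, so converting that time back to distance recovers roughly the 10 m we put in.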
“What’s nice about this is that it’s pretty simple,” Satat said. “If you look at the computation and the method, it’s surprisingly not complex. We also don’t need any prior knowledge about the fog and its density, which helps it to work in a wide range of fog conditions.”