Researchers may have found a new way to give autonomous vehicles the ‘eyesight’ they need to see objects through thick layers of fog.
A research team from the Massachusetts Institute of Technology has developed a sub-terahertz radiation receiving system that could aid autonomous cars in driving through low-visibility conditions like fog, when traditional methods fail.
Sub-terahertz wavelengths sit between microwave and infrared radiation on the electromagnetic spectrum. They can be detected through fog and dust clouds, whereas the infrared-based LiDAR imaging systems commonly used in autonomous vehicles struggle to see through the haze.
Sub-terahertz imaging systems send an initial signal through a transmitter toward an object; a receiver then measures the absorption and reflection of the rebounding sub-terahertz wavelengths and sends a signal to a processor, which recreates an image of the object.
However, sub-terahertz sensors have yet to be implemented in driverless vehicles, because producing a strong output baseband signal from the receiver to the processor has required devices that are either large and expensive, or small but too weak to generate usable signals.
In the new MIT system, a two-dimensional sub-terahertz receiving array on a chip is orders of magnitude more sensitive than existing designs, allowing it to better capture and interpret sub-terahertz wavelengths in the presence of signal noise. It relies on a scheme of independent signal-mixing pixels, dubbed heterodyne detectors, which are generally difficult to integrate densely into chips at their current size.
To overcome this design issue, the researchers shrunk the heterodyne detectors so that several can fit onto a chip, creating a compact, multipurpose component that can simultaneously down-mix input signals, synchronize the pixel array and produce strong output baseband signals.
The team built a prototype system with a 32-pixel array integrated on a 1.2-square-millimeter device. These pixels are roughly 4,300 times more sensitive than the pixels currently used in sub-terahertz array sensors.
“A big motivation for this work is having better ‘electric eyes’ for autonomous vehicles and drones,” co-author Ruonan Han, an associate professor of electrical engineering and computer science, and director of the Terahertz Integrated Electronics Group in the MIT Microsystems Technology Laboratories (MTL), said in a statement. “Our low-cost, on-chip sub-terahertz sensors will play a complementary role to LiDAR for when the environment is rough.”
In the new design, a single pixel generates both the frequency beat (the difference in frequency between two incoming sub-terahertz signals) and the local oscillation (an electrical signal that shifts the frequency of an input signal), producing an output in the megahertz range that a baseband processor can interpret.
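The down-mixing step described above can be simulated in a few lines. This is a minimal sketch of heterodyne detection in general, not the MIT chip's circuitry, and it uses scaled-down kHz-range tones (actual sub-terahertz signals cannot be sampled this way); all frequencies below are hypothetical.

```python
import numpy as np

# Scaled-down heterodyne down-mixing sketch (hypothetical frequencies):
# multiplying an incoming signal with a local oscillation produces components
# at the sum and difference frequencies; the low "beat" is what a baseband
# processor can easily interpret.
fs = 1_000_000                      # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)     # 10 ms of samples
f_rf = 240_000                     # stand-in for the incoming signal, Hz
f_lo = 235_000                     # stand-in for the pixel's local oscillation, Hz

mixed = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

# Find the low-frequency beat by locating the spectral peak below 50 kHz.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
low = freqs < 50_000
beat = freqs[low][np.argmax(spectrum[low])]
print(round(beat))  # ~5000 Hz: the difference of the two input frequencies
```

The beat lands at the difference of the two tones (here 5 kHz), illustrating how a sub-terahertz input can be shifted into a megahertz-range signal without a processor ever handling the original high frequency.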
The output signal can be used to calculate the distance to an object. Combining the output signals of the full pixel array, and steering the pixels in a specific direction, can enable high-resolution images and the recognition of specific objects.
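The article does not specify how the chip's output is converted into distance, so the following is an illustrative time-of-flight sketch under the standard radar assumption that range is recovered from the round-trip delay of the reflected signal; the delay value is made up.

```python
# Minimal time-of-flight ranging sketch (assumed method, hypothetical delay).
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_delay(round_trip_s: float) -> float:
    """Distance to a reflecting object, from the round-trip echo delay.

    The signal travels out and back, so the one-way distance is half
    the total path length.
    """
    return C * round_trip_s / 2.0

# A 200 ns round-trip delay corresponds to roughly 30 m:
d = distance_from_delay(200e-9)
print(round(d, 1))  # 30.0
```

Real systems typically estimate this delay indirectly (for example from the beat frequency of a frequency-modulated signal), but the distance arithmetic is the same.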
The MIT design also allows each pixel to generate its own local oscillation signal, which is used for receiving and down-mixing the incoming signal. An integrated coupler synchronizes each pixel's local oscillation signal with that of its neighbor, giving every pixel more output power.
“We designed a multifunctional component for a [decentralized] design on a chip and combine a few discrete structures to shrink the size of each pixel,” first author Zhi Hu, a PhD student in the Department of Electrical Engineering and Computer Science, said in a statement. “Even though each pixel performs complicated operations, it keeps its compactness, so we can still have a large-scale dense array.”
The researchers also ensured that the frequencies of the local oscillation signals are stable by incorporating the chip into a phase-locked loop, which locks the sub-terahertz frequencies of all 32 local oscillation signals to a stable, low-frequency reference.
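The frequency relationship a phase-locked loop enforces can be sketched as simple arithmetic: the high-frequency oscillation settles to a fixed multiple of the stable reference, so any drift in the output is corrected against the reference. The reference frequency and division ratio below are assumptions for illustration, not the MIT chip's actual values.

```python
# Sketch of the steady-state condition a phase-locked loop enforces
# (hypothetical numbers; the actual chip's frequencies are not stated here).
def locked_frequency(f_ref_hz: float, divider_n: int) -> float:
    """Frequency a locked PLL settles to: the division ratio N times
    the low-frequency reference."""
    return divider_n * f_ref_hz

f_ref = 100e6                       # assumed 100 MHz reference
n = 2400                            # assumed integer division ratio
f_lo = locked_frequency(f_ref, n)
print(f_lo / 1e9)                   # 240.0 (GHz)
```

Because every one of the 32 local oscillation signals is locked to the same reference in this way, they stay mutually coherent, which is what lets the array's outputs be combined.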
“In summary, we achieve a coherent array, at the same time with very high local oscillation power for each pixel, so each pixel achieves high sensitivity,” Hu said.