What you will learn:
- What sensor fusion is and why it is the future of autonomous vehicles.
- Which sensors are needed for sensor fusion.
- Different sensor architectures used by car manufacturers.
Sensor fusion is the future of autonomous vehicles (AVs), allowing them to replicate the way human senses work together to provide spatial awareness. So what is sensor fusion?
It involves harnessing data from multiple sensors to become aware of events surrounding the vehicle, allowing it to process what is happening and then take appropriate action. The most discussed sensors in AVs are LiDAR, radar, and cameras, and when combined through sensor fusion, they complement each other very well.
LiDAR technology has a scanning effect that allows the vehicle to detect objects at both high resolution and long range, which can help prevent accidents. LiDAR mimics our depth perception by providing 3D information about nearby objects, but it doesn’t match the resolution of a camera. Additionally, LiDAR is sensitive to weather conditions such as dense fog or heavy rain.
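To make the 3D aspect concrete, a single LiDAR return can be converted from its measured range and beam angles into a Cartesian point. A minimal sketch (the axis convention and function name are illustrative, not taken from any particular sensor SDK):

```python
import math

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one LiDAR return (range plus beam angles) into a
    Cartesian point in the sensor frame. Conventions vary by sensor;
    this assumes x forward, y left, z up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A return at 10 m, straight ahead, with a level beam lands on the x axis.
print(lidar_return_to_xyz(10.0, 0.0, 0.0))  # → (10.0, 0.0, 0.0)
```

Repeating this over every beam in a scan is what produces the familiar 3D point cloud.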
Radar is used to detect the speed and distance of objects near the car. Consisting of both a transmitter and a receiver, it sends out radio waves that bounce and reflect off objects until they are picked up by the receiver. Through this echolocation-like process, the radar can determine the distance, speed, and direction of nearby objects. Although the radar does not have a high resolution at range, it has the added advantage of being able to detect well in bad weather.
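The math behind this echolocation-like process is simple enough to sketch: range follows from the round-trip time of the radio wave, and radial speed from the Doppler shift between the transmitted and received frequency. The numbers below are illustrative, not from any specific radar unit:

```python
C = 299_792_458.0  # speed of light in m/s

def radar_range(round_trip_s):
    """Target distance from the echo's round-trip time:
    the wave covers the distance twice, so divide by 2."""
    return C * round_trip_s / 2.0

def radar_radial_speed(f_tx_hz, f_rx_hz):
    """Radial speed of the target from the Doppler shift between
    transmitted and received frequency (positive = approaching)."""
    return C * (f_rx_hz - f_tx_hz) / (2.0 * f_tx_hz)

# An echo arriving ~667 ns after transmission puts the target ~100 m away.
print(radar_range(667e-9))
```

Direction comes from the antenna geometry (e.g., comparing phase across multiple receive antennas), which is beyond this sketch.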
Finally, cameras, as I’m sure one would assume, mimic our sense of sight by creating an image from reflected and refracted light rays. Although cameras have exceptional resolution, they are unable to provide distance and depth detail about what they are imaging. And like LiDAR, cameras don’t work well in certain weather conditions and can struggle to detect objects at night.
Combination of sensor data
With sensor fusion, the input data from several or all of these sensors (LiDAR, radar, cameras, etc.) is brought together. The data from each sensor is combined to form a picture of the events surrounding the vehicle, allowing it to take the appropriate measures. As with the human senses, the combination of various sensory inputs creates highly detailed visuo-spatial awareness.
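One simple way to picture this combination: if each sensor reports a distance to the same object along with its own uncertainty, the estimates can be fused with inverse-variance weighting, so that more trustworthy sensors count more. A toy sketch (the sensor readings and variances are made up for illustration):

```python
def fuse_estimates(estimates):
    """Fuse per-sensor distance estimates with inverse-variance
    weighting: lower reported uncertainty means a larger weight.
    `estimates` maps sensor name -> (distance_m, variance)."""
    weights = {name: 1.0 / var for name, (_, var) in estimates.items()}
    total = sum(weights.values())
    return sum(w * estimates[name][0] for name, w in weights.items()) / total

readings = {
    "lidar":  (24.9, 0.05),   # precise range
    "radar":  (25.3, 0.20),   # coarser, but weather-robust
    "camera": (26.0, 1.00),   # rough monocular depth estimate
}
print(round(fuse_estimates(readings), 2))  # → 25.02
```

Note how the fused value sits closest to the LiDAR reading, which reported the smallest variance. Production systems use far richer techniques (Kalman filters, learned fusion networks), but the weighting intuition is the same.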
The three types of sensors can be combined and implemented in the different design architectures used by car manufacturers. One such example is the “zonal” architecture, which can be thought of as an evolved domain architecture: an intermediate advancement over the original flat electronic control unit (ECU) architecture of older vehicles.
With zonal, the focus is on certain physical areas of the vehicle like the front, rear, central core or sides. All ECUs located in a given physical zone connect to the same zone controller or gateway, regardless of the exact function of the ECU.
By using a zonal design, the gateways can be much closer to the sensors themselves. This means that cabling between hosts and gateways is greatly simplified, allowing for better connectivity. Such an approach offers enormous advantages in terms of scalability and functionality associated with the use of high-speed Ethernet and various other computing resources. Thus, it is reliable for both vehicle decision making and data processing. Zonal, however, comes at the cost of greater complexity for gateways that must manage and route traffic to and from computers with very different functions.
The zonal architecture also offers a notable advantage specifically applicable to sensor fusion. The vehicle may have multiple zone controllers that collect sensor data and sometimes even compensate the sensors if they lack performance. Therefore, the area controller can apply any local processing that the sensors themselves haven’t done, such as signal cleaning or local machine learning (ML).
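As a sketch of such local processing, a hypothetical zone controller might run a short median filter over raw samples to knock out noise spikes before forwarding values upstream. The class and method names here are invented for illustration:

```python
from collections import deque
from statistics import median

class ZoneController:
    """Hypothetical zone controller that cleans raw sensor samples
    with a sliding median filter before forwarding them upstream,
    standing in for the 'signal cleaning' a zone gateway might do."""
    def __init__(self, window=4):
        self.buf = deque(maxlen=window)  # last `window` raw samples

    def ingest(self, sample):
        self.buf.append(sample)
        return median(self.buf)  # cleaned value sent to the central unit

zc = ZoneController(window=4)
for raw in [10.0, 10.2, 30.0, 10.1]:  # 30.0 is a noise spike
    cleaned = zc.ingest(raw)
print(round(cleaned, 2))  # → 10.15, the spike is suppressed
```

A median was chosen over a plain average precisely because a single outlier barely moves it, which is the behavior you want from a gateway compensating for an underperforming sensor.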
Another type of architecture envisaged for modern vehicles implements the fusion of sensors with centralized processing. Instead of having multiple ECUs spread throughout the car, all domains are merged into a centralized domain control system, leading to the name “central processing”. Central processing is the preferred design of high-tech vehicle manufacturers, such as Tesla and Waymo, as they strive to achieve Level 5 (full) autonomy. Today, this type of architecture leads to three or more centralized controllers rather than the ideal single controller.
While it sounds simpler to have all domains located in centralized domain control, it actually makes things more complicated from a processing perspective. In this type of architecture, most of the processing is done by the central unit. However, the immense amount of data generated by the sensors means that the central processor can become a performance bottleneck.
One solution is to rely on state-of-the-art processing units with high-end processing capabilities, specifically designed to process automotive sensor data. Another approach is to rely on distributed processing to optimize the system, reduce the workload on the centralized controller, and enable high computational speeds.
Different manufacturers will select what is most appropriate for their business type and history. New entrants may favor the high-end, compute-intensive centralized approach, while legacy manufacturers may prefer the distributed solution to better leverage contributions from their vendors.
With the distributed approach, it is essential to have multiple modules pushed to the periphery to handle the abundance of data generated by the multitude of different sensors: sensor modules, brake control modules, dimmer and lighting modules, and so on. Highly sophisticated sensors are often implemented in such a way that they can perform certain data processing functions to reduce the bandwidth of information sent back to the CPU or to reduce noise in the received signal, forming the basis of sensor fusion.
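A hypothetical example of this kind of sensor-side preprocessing: dropping returns beyond a range of interest and decimating the remainder, so far less data crosses the network to the central controller. The function name and thresholds are illustrative:

```python
def preprocess_at_sensor(points, max_range=80.0, keep_every=4):
    """Hypothetical edge preprocessing inside a smart sensor module:
    discard returns beyond the range of interest, then keep only
    every Nth remaining point, shrinking the stream the central
    controller must absorb. `points` is a list of (range_m, angle)."""
    in_range = [p for p in points if p[0] <= max_range]
    return in_range[::keep_every]

raw = [(r, 0.0) for r in range(200)]   # 200 fake (range, angle) returns
reduced = preprocess_at_sensor(raw)
print(len(raw), "->", len(reduced))    # → 200 -> 21
```

Even this crude filter cuts the payload by roughly 90%, which is the point: bandwidth and central compute are spent only on data that matters.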
For vehicles to become fully autonomous, they need perfect spatial awareness to properly identify and react to the ever-changing environment on the road. It’s hard to say whether zonal or central processing will be the most common type of automotive architecture in the future, but what we do know is that sensor fusion will be key to reproducing human senses in AVs and eventually reaching Level 5.
Regardless of the architecture used, ensuring that it is properly networked is key for AVs. We will explore this topic further in the next article.