What is Sensor Fusion?

Sensor Fusion Shapes the Future of Connected Devices

Sensors are used in almost every industry now: they’re found in our cars, in our factories, and even in our smartphones. While an individual sensor may provide useful data on its own, imagine the information that could be extracted by combining the output of multiple sensors at once. Assuming the whole is greater than the sum of its parts, this would give us a much better model of the world around us. Sensor fusion is the process that makes this possible.

Specifically, sensor fusion is the merging of data from multiple sensors to create a more accurate representation of the target scene or object. The idea behind it is that each individual sensor has both strengths and weaknesses; the goal is to leverage the strengths of each and reduce overall uncertainty to obtain a precise model of the environment being studied.


What are the Different Types of Sensors?

First, let’s define the various types of sensors used in a sensor fusion process. An autonomous car, for example, uses many different sensors to navigate its environment. Here are several of the most common sensor technologies:

Camera

A camera captures images of a scene, which can then be used to identify the objects within it. Its weakness is that the image can easily be obscured by darkness, poor weather, dirt, and other obstructions.

Radar

A radar sensor emits radio waves to detect objects and precisely estimate their speeds. By measuring the Doppler shift, the change in frequency of the reflected waves, it determines an object’s radial velocity, that is, how fast the object is moving toward or away from the sensor. Unlike a camera, however, radar can’t tell you what kind of object it is detecting.
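As a concrete illustration, here is a minimal sketch of the radial-velocity relationship for a monostatic radar; the 77 GHz carrier frequency and the sample Doppler shift are illustrative values, not figures from this article.

```python
C = 3.0e8    # speed of light, m/s
F0 = 77e9    # carrier frequency, Hz (77 GHz is common in automotive radar)

def radial_velocity(doppler_shift_hz):
    # For a monostatic radar the echo is shifted by f_d = 2 * v_r * f0 / c,
    # so the radial velocity can be recovered from the measured shift:
    return doppler_shift_hz * C / (2 * F0)

print(radial_velocity(5133))   # ≈ 10 m/s toward the sensor
```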

LiDAR

Standing for light detection and ranging, LiDAR measures the distance between the sensor and a target object by emitting laser pulses (typically in the near-infrared) and timing how long they take to bounce off the object and return. This data is then used to build a 3D point cloud of the environment. The disadvantages of LiDAR are its relatively limited range and its cost, which is considerably higher than that of cameras or radar.
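The ranging calculation itself is simple; a minimal sketch, with an invented round-trip time purely for illustration:

```python
C = 3.0e8   # speed of light, m/s

def distance_from_round_trip(t_seconds):
    # The pulse travels to the object and back, hence the division by two.
    return C * t_seconds / 2.0

print(distance_from_round_trip(200e-9))   # a 200 ns round trip ≈ 30 m away
```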

Ultrasonic Sensor

Ultrasonic sensors emit high-frequency sound waves and time the returning echo to estimate the position of a target object. Their effective range is short, typically within a few meters, which is why they are often used for close-range tasks such as parking assistance.

Odometric Sensor

By measuring wheel speeds, this sensor helps estimate a vehicle’s position and orientation relative to a known starting point, a technique often referred to as dead reckoning.
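To make the idea concrete, here is a minimal, hypothetical sketch of wheel odometry for a differential-drive vehicle; the wheel base and speeds are invented for illustration, and a real car would use a more elaborate motion model.

```python
import math

WHEEL_BASE = 1.6   # distance between left and right wheels, meters (illustrative)

def update_pose(x, y, theta, v_left, v_right, dt):
    """Advance the pose (x, y, heading) given left/right wheel speeds in m/s."""
    v = (v_left + v_right) / 2.0              # forward speed
    omega = (v_right - v_left) / WHEEL_BASE   # turn rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Drive forward while turning slightly for one second, in 10 ms steps.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = update_pose(*pose, v_left=4.9, v_right=5.1, dt=0.01)
print(pose)   # estimated position and heading relative to the start
```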

The Three Categories of Sensor Fusion

Generally speaking, there are three different approaches, or types, of sensor fusion.

Complementary

This type of sensor fusion combines independent sensors that each observe a different part of the scene, so that their combined output forms a more complete image. For instance, several cameras placed around a room and focused on different areas can collectively provide a picture of what the whole room looks like. The advantage of this type is that it typically offers the greatest level of completeness.

Competitive/Redundant

When sensor fusion is set up in a competitive (or redundant) arrangement, sensors provide independent measurements of the same target object. There are two configurations: the fusion of data from separate sensors, and the fusion of data from a single sensor taken at different instants. Because the redundant measurements can be cross-checked against one another, this category offers the greatest robustness and reliability of the three types.

Cooperative

The third type of sensor fusion is called cooperative. It involves two independent sensors providing data that, when taken together, yields information that wouldn’t be available from either sensor alone. For instance, in stereoscopic vision, two cameras at slightly different viewpoints provide data that can be combined into a 3D image of the target object. This is the most difficult of the three categories to use, as the result is especially sensitive to errors from the individual sensors, but its advantage is its ability to provide a unique model of a scene or target object.

Note that many applications of sensor fusion combine more than one of these three types to create the most accurate result possible. There are also three communication schemes used in sensor fusion:
  • Decentralized: No communication occurs between the sensor nodes.
  • Centralized: Sensors communicate to a central node. 
  • Distributed: Sensor nodes communicate at set intervals.  
The communication scheme used depends on the use case and the requirements of the project.

Algorithms in Sensor Fusion

To merge the data in sensor fusion applications, data scientists need to use an algorithm. Perhaps the most popular algorithm in sensor fusion is known as the Kalman filter. In general, though, there are three levels of sensor fusion that data scientists explore.

Three Levels of Sensor Fusion

The output of the sensors requires post-processing, the level of which will vary. The level chosen impacts data storage needs as well as accuracy of the model. Here’s a summary of the three levels:
  • Low-Level: Low-level fusion works directly on the raw sensor output, which ensures we’re not unintentionally introducing noise into the data during transformation. The disadvantage of this method is that it requires processing a large amount of data.
  • Mid-Level: Instead of using raw data, mid-level sensor fusion uses data that has already been interpreted by either the individual sensor or a separate processor. It leverages hypotheses about the object’s position, weighting the various interpretations to arrive at a single answer (a minimal sketch of this weighting appears after this list).
  • High-Level: Similar to mid-level, high-level sensor fusion weights hypotheses to achieve an answer on the object’s position. However, it also uses this method to identify an object’s trajectory.
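As promised, here is a minimal, hypothetical sketch of that weighting step, assuming each sensor reports a position hypothesis together with a variance; the function name and the numbers are invented for illustration. Each hypothesis is weighted by the inverse of its variance, so more confident sensors count for more.

```python
import numpy as np

def fuse_positions(estimates, variances):
    """Fuse independent position hypotheses by inverse-variance weighting."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused = np.sum(weights * estimates) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)   # the fused estimate is more certain
    return fused, fused_var

# Example: a camera hypothesis of 12.1 m (noisy) and a radar hypothesis of
# 11.8 m (more precise) fuse to a value much closer to the radar reading.
print(fuse_positions([12.1, 11.8], [0.6, 0.1]))
```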

The Kalman Filter

Invented as far back as 1960, the Kalman filter is present in our smartphones and satellites, and is commonly used for navigation. Its purpose is to estimate the current state of a dynamic system (although it can also estimate the past, known as smoothing, and the future, known as prediction). It’s especially helpful when parsing noisy data; for instance, sensors on autonomous vehicles sometimes capture incomplete or noisy data that can then be corrected by the Kalman algorithm.

The Kalman filter is a form of Bayesian filter. In Bayesian filtering, the algorithm alternates between the prediction (an estimate of what the current state is) and the update (the observations from the sensors). Essentially, the algorithm takes the prediction and corrects it according to the update, cycling through these two steps as new measurements arrive.

The Kalman filter makes predictions in real time using a mathematical model of the state (which includes position and speed) and its uncertainty. It assumes linear relationships, but some sensor output, such as radar measurements, is nonlinear. In that case, data scientists rely on two approaches to handle the nonlinearity:
  • Extended Kalman Filter: Uses Jacobian and Taylor series to linearize.
  • Unscented Kalman Filter: Avoids explicit linearization by propagating a set of carefully chosen sample points (sigma points) through the nonlinear model, which generally gives a more precise approximation.
In any case, the Kalman Filter is one of the most common algorithmic approaches to sensor fusion, helping to merge multiple data sources into one clear current state. 
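To make the predict/update cycle concrete, here is a minimal sketch of a one-dimensional Kalman filter, assuming a static state and illustrative noise values; it is not tied to any particular sensor or vehicle.

```python
import numpy as np

def kalman_1d(measurements, process_var=1e-3, meas_var=0.25):
    """Estimate a scalar state (e.g. a position) from noisy measurements."""
    x_est, p_est = 0.0, 1.0          # initial state estimate and its variance
    for z in measurements:
        # Predict: propagate the previous estimate (static model here);
        # uncertainty grows by the process noise.
        x_pred = x_est
        p_pred = p_est + process_var

        # Update: blend prediction and measurement, weighted by uncertainty.
        k = p_pred / (p_pred + meas_var)       # Kalman gain
        x_est = x_pred + k * (z - x_pred)
        p_est = (1.0 - k) * p_pred
    return x_est, p_est

# Example: fuse 50 noisy range readings scattered around a true value of 10 m.
np.random.seed(0)
noisy = 10 + np.random.normal(0, 0.5, size=50)
print(kalman_1d(noisy))   # the estimate converges close to 10
```

In a real tracker the state would be a vector (position and speed, for example) and the prediction step would apply a motion model, but the alternation between prediction and update is exactly the same.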

Other Algorithms in Sensor Fusion

In addition to the Kalman Filter, data scientists may choose to apply other algorithms to sensor fusion. They include:
  • Neural Networks: A neural network is used in deep learning to fuse image data from multiple sensors in order to classify results.
  • Central Limit Theorem (CLT): Using a bell-curve representation in most cases, a CLT-based approach averages data from multiple sensors.
  • Bayesian Algorithms: We mentioned that the Kalman Filter is one type of Bayesian filter, but there are others. For example, the Dempster-Shafer algorithm uses measurements of uncertainty and inference to mimic human reasoning.

Sensor Fusion and Autonomous Vehicles

Sensor fusion is critically important to the field of autonomous vehicles. Fusion algorithms help vehicles navigate around obstacles by understanding precisely where they are, how fast they’re moving, and in what direction. In other words, sensor fusion maximizes the safety of autonomous vehicles. Let’s talk about how it works.

A vehicle takes in input from cameras, radar, and LiDAR sensors to form a single model of its surrounding environment; sensor fusion is the act of merging that data into the singular model. We described earlier how these different sensors work, as well as their strengths and weaknesses. Combined, their strengths can paint a highly accurate picture of a vehicle’s environment, which can then inform navigation decisions about where to go and how fast.

Sometimes, autonomous vehicles also leverage data pulled from inside the car, in what’s known as interior and exterior sensor fusion. The more sensor inputs the vehicle collects, the more complicated the fusion becomes; the tradeoff is that, done correctly, the resulting model generally becomes more accurate as more sensor data is used.

One example of the power of sensor fusion in automotive is preventing skidding. By combining sensor inputs from the steering wheel’s orientation, a gyroscope, and an accelerometer, a sensor fusion algorithm can determine whether the car is skidding and in what direction the driver intends the vehicle to go. In this scenario, the vehicle could then automatically apply the brakes to prevent further skidding.
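As a rough illustration of that skid logic, here is a minimal, hypothetical sketch that compares the yaw rate the driver is requesting (estimated from steering angle and speed via a simple bicycle model) with the yaw rate a gyroscope actually measures; the wheelbase, threshold, and readings are invented for illustration and are not from any production system.

```python
import math

WHEELBASE = 2.7              # meters (illustrative)
YAW_ERROR_THRESHOLD = 0.15   # rad/s of disagreement before we flag a skid

def intended_yaw_rate(speed_mps, steering_angle_rad):
    # Simple kinematic bicycle model for the yaw rate the driver is asking for.
    return speed_mps * math.tan(steering_angle_rad) / WHEELBASE

def is_skidding(speed_mps, steering_angle_rad, gyro_yaw_rate):
    error = intended_yaw_rate(speed_mps, steering_angle_rad) - gyro_yaw_rate
    return abs(error) > YAW_ERROR_THRESHOLD

# Driver steers into a turn at 20 m/s, but the gyro shows almost no rotation:
# the wheels are sliding, so the fusion logic flags a skid.
print(is_skidding(20.0, 0.05, gyro_yaw_rate=0.02))   # True
```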

A Case Study: Appen’s Work with Autonomous Vehicles

Appen works with seven out of ten of the largest automotive manufacturers in the world, supplying them with high-quality training data. These complex, multi-modal projects must achieve as close to 100% accuracy as possible to ensure the vehicle can operate under any number of conditions.
“It isn’t enough for vehicles to perform well in simulated or good weather conditions in one type of topography. They must perform flawlessly in all weather conditions in every imaginable road scenario they will encounter in real world deployments. This means that teams working on the machine learning model for the vehicle’s AI must focus on getting training data with the highest possible accuracy before being able to deploy on the road.” – Wilson Pang, CTO, Appen
In Appen’s work on autonomous vehicle projects, we’ve seen that there are considerable quality challenges when sourcing and aggregating data from multiple sensor types. If poor-quality training data is caught early, much time is still wasted determining which components of the dataset need improvement. If it’s caught late, the results show up as poor-performing models and a self-driving car that cannot complete testing at the needed levels of accuracy (and then it’s back to the drawing board). As a result, Appen has built extensive auditing tools to monitor annotated data and help auto manufacturers get as close to 100% accuracy as possible for the future safety of their customers.

As a specific example of sensor fusion, some auto manufacturers that Appen works with often need to merge two datasets with different dimensions. This is a difficult task, and not one that is easily done without the right technology. To solve it, Appen’s platform provides 3D point cloud annotation with object tracking at over 99% accuracy at the cuboid level. In practice, this means our clients can bind 2D image annotations to 3D point cloud annotations, mapping objects across multiple dimensions. Our technology serves as an example of how leveraging the right tools and third-party expertise can help teams accomplish complex, multi-modal AI endeavors.

Other Applications of Sensor Fusion

Autonomous vehicles may be the most obvious application of sensor fusion, but it can benefit any area where multiple data sources need to be aggregated. Researchers are also considering its future usage in new areas like public safety, wearables, and more.

Navigation

You already experience sensor fusion whenever you use the maps app on your smartphone. Google Maps or Apple Maps combines GPS data with phone sensor data (including your compass, gyroscope, and accelerometer) to determine not only where you are, but also which direction you’re facing and how fast you’re moving. This is probably one of the best examples of how sensor fusion impacts our everyday lives.
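As a simplified illustration of this kind of fusion, here is a minimal, hypothetical complementary-filter sketch for heading: a gyroscope is smooth but drifts over time, a compass is noisy but doesn’t drift, so the two are blended. The blend factor and sample readings are invented, and a real phone uses considerably more sophisticated filtering.

```python
ALPHA = 0.98   # trust the gyro for short-term changes, the compass long-term

def fuse_heading(prev_heading, gyro_rate, compass_heading, dt):
    gyro_heading = prev_heading + gyro_rate * dt      # integrate the gyro
    return ALPHA * gyro_heading + (1 - ALPHA) * compass_heading

heading = 90.0   # degrees, initial guess
for _ in range(200):
    # Pretend the gyro reports a slow 0.5°/s drift while the compass says 90°.
    heading = fuse_heading(heading, gyro_rate=0.5, compass_heading=90.0, dt=0.02)
print(round(heading, 1))   # stays near 90°; gyro alone would have drifted to 92°
```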

Internet of Things

The Internet of Things (IoT) offers a multitude of opportunities for sensor fusion. The IoT refers to the idea of interconnected devices that share and aggregate data, ultimately in service of making life easier for us humans. These devices include sensors, so it makes sense to apply sensor fusion to them for greater efficiency.

The specific application of sensor fusion will depend on the devices in question, of course. It could help with monitoring smart water meters, managing indoor temperatures in your home, or even providing safety in the event of an attempted home intrusion. The possibilities here seem nearly endless.

Law Enforcement

On behalf of the National Institute of Justice and the Priority Criminal Justice Needs Initiative, the RAND Corporation hosted a workshop to explore how sensor fusion could be applied to law enforcement. As it stands, law enforcement increasingly relies on video technology to do police work, including video feeds from vehicles as well as body cameras. Because this data is already being collected by video cameras and other sensors, there’s an opportunity to leverage sensor fusion for added efficiencies.

The workshop found that sensor fusion could be “extremely promising” for improving public safety. Participants determined that it could be useful in detecting crimes in progress as well as in investigating crimes. Additionally, the technology could better monitor officer behavior and offer greater protection for officer health and safety. Of course, the workshop noted that security and civil-rights protections would need to be in place to ensure these technologies aren’t misused.

Wearables Technology

The wearables market is thriving, with popular products such as Fitbit and the Apple Watch. Researchers conducted a case study to explore a possible use of sensor fusion in wearables. Specifically, the case study focused on monitoring eating habits as a stand-in for gaining insight into human activity in general. The researchers used data from the wearable devices to create a 2D space in which eating episodes could be classified. By merging the data, they were able to identify patterns in the activity, in this case eating, that wouldn’t have been visible without sensor fusion. They hope these results point to the possibility of monitoring more human activities over time. While applications of sensor fusion outside of autonomous vehicles may often still be in the exploratory phase, keep an eye on this space for new developments as the availability of data of all types grows.

The Future of Sensor Fusion

For sensor fusion to work well, there need to be advances in both hardware and software. On the hardware side, sensing technology continues to improve; more accurate, widely available data naturally supports higher performance in sensor fusion models. On the software side, digital signal processing provides significant advantages in fusing huge amounts of data quickly. Data storage options, such as the cloud, are also critical to the advancement of sensor fusion techniques as computing power continues its rapid growth.

What’s the ultimate goal of sensor fusion? Ultimately, it’s to mimic the sensing abilities of the human body. Our brain takes in input from various systems, such as the nervous system, the muscular system, and more, to create a picture of the world around us. It’s a truly impressive feat, and one that researchers aren’t necessarily close to imitating. But as more software and hardware developments supporting sensor fusion arrive, we’re getting closer to achieving it.

Expert Insight from Kuo-Chin Lien – Head of Computer Vision

The intuition behind sensor fusion is that the more information we can collect, the better our understanding of the world. It’s very important to organize the different pieces of information about our surroundings before we can make use of them in ML models. Two essential steps in organizing the data are temporal synchronization and spatial calibration. Only with accurate synchronization and calibration can you reconstruct the temporal causality and spatial geometry correctly, and only then can you feed the output into sensor fusion algorithms.

Both the temporal synchronization and spatial calibration steps are non-trivial and typically need well-designed hardware as well as software. Interestingly enough, when looking at our own human capabilities, we seem to have a good ability to adapt to offsets in time and space when working with imperfect multi-sensor data. This is why data annotation can still be performed to create multi-sensor datasets even if the sensor hardware and/or software registration has not happened in an ideal environment; you can still save your dataset with the right annotation provider.

In practice, when you are creating a new dataset, the available sensor calibration often has a certain degree of inaccuracy. When annotating, we have to present this mis-calibrated sensor information to annotators in a way they can easily interpret so they can fix the idiosyncrasies. We found that well-designed layouts for displaying different sensors, insertion of static visual hints, and dynamic rendering of necessary information all help human annotators deliver results faster and with fewer errors compared to manual processes. Even more interestingly, designing a multi-step annotation process can benefit the final result, as previously annotated data may make an annotator’s job much easier in the next annotation stage. This shows that we need to spend time thinking about how to divide a sensor fusion annotation job into subtasks, as well as which tools and platforms to use to produce the best possible output.
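As a small, hypothetical illustration of the temporal synchronization step, the sketch below pairs each camera frame with the closest lidar scan by timestamp, rejecting pairs whose offset exceeds a tolerance; the timestamps and the tolerance are invented for illustration, and real pipelines typically also interpolate and compensate for latency.

```python
import bisect

TOLERANCE_S = 0.05   # maximum allowed offset between paired samples (illustrative)

def pair_by_timestamp(camera_ts, lidar_ts):
    """For each camera timestamp, find the nearest lidar timestamp within tolerance."""
    pairs = []
    for t in camera_ts:
        i = bisect.bisect_left(lidar_ts, t)
        # Candidate neighbors: the scan just before and just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(lidar_ts)]
        best = min(candidates, key=lambda j: abs(lidar_ts[j] - t))
        if abs(lidar_ts[best] - t) <= TOLERANCE_S:
            pairs.append((t, lidar_ts[best]))
    return pairs

camera = [0.00, 0.10, 0.20, 0.30]
lidar = [0.02, 0.12, 0.27]
print(pair_by_timestamp(camera, lidar))
# [(0.0, 0.02), (0.1, 0.12), (0.3, 0.27)] — the 0.20 frame has no close enough scan
```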

What We Can Do For You

Appen collects and labels images, text, speech, audio, and video used to build and continuously improve the world’s most innovative and complex artificial intelligence systems. With over 25 years of expertise in more than 235 languages, a global crowd of over 1 million skilled contractors, and the industry’s most advanced AI-assisted data annotation platform, Appen solutions provide the quality, security, and speed required by leaders in technology, automotive, financial services, retail, manufacturing, and governments worldwide. As evidenced in our work with automotive manufacturers, Appen is able to deliver over 99% accuracy on multimodal AI projects, and our ability to combine 2D and 3D datasets enables us to support highly complex machine learning training data requirements.

To learn more about how we can help you with your next AI project, contact us.