
Interior Vehicle Sensing AI Improves Safety


Source: Mouser Electronics

Today we hear that self-driving vehicles are on the horizon, but true autonomous driving in varied real-world conditions is still many years away. Human drivers must remain attentive to the situation at hand, and the interior of a vehicle appears to offer a relatively static, lab-like environment in which to observe them. Eyeris, founded in 2013 as a human-centric artificial intelligence (AI) company, aims to make driving safer and more comfortable by monitoring interior conditions, verifying that a human is in control, and confirming that the cabin environment itself is up to this critical task.

The Challenge: Varied Occupants and Sensing Conditions

While relatively static compared to the outside world, vehicle interiors still present a variety of challenges. A person might be driving alone, or the car might carry several additional occupants, male or female, ranging in size from small children to adults of 100 kilograms and beyond. Add to this the fact that humans have a wide range of skin tones and may wear different clothing and accessories under varying lighting conditions and temperatures, and suddenly this "lab environment" becomes a rather complicated experiment. That is before considering a family pet or two along for the ride, the hamburger wrapper in the back seat that wasn't cleaned up yesterday, or a phone or two dropped on the passenger seat.

The Solution: Sensor Fusion and Data Abundance

While one sensor system might boast the best eye tracking or other technical merits, Eyeris, as an AI software company, instead focuses on fusing a variety of hardware sensing elements. It partners with a wide range of hardware manufacturers for sensing technologies, including traditional infrared (IR) sensors, modern red, green, blue plus infrared (RGBIR) sensors, thermal imagers, and even radar, to build an overall view of the situation, and collaborates with a wide range of processor manufacturers to run its AI routines. This sensor fusion, combined with an extremely large training dataset, means the interior space of a vehicle can be accurately interpreted in the same way a human amalgamates sight, hearing, touch, smell, and possibly even taste to perform a complicated task.
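Conceptually, the simplest form of such fusion is a late fusion of per-sensor confidence scores. The sketch below is purely illustrative and is not Eyeris's actual method; the sensor names, weights, and the `fuse_confidences` helper are all assumptions made for the example.

```python
# Minimal late-fusion sketch (illustrative only): combine per-sensor
# occupancy confidences into one estimate, weighting each sensor by an
# assumed reliability.

def fuse_confidences(readings, weights):
    """Weighted average of per-sensor confidence scores in [0, 1].

    readings: dict of sensor name -> confidence in [0, 1]
    weights:  dict of sensor name -> reliability weight (>= 0)
    Sensors with no configured weight are simply skipped.
    """
    num = 0.0
    den = 0.0
    for name, conf in readings.items():
        w = weights.get(name, 0.0)
        num += w * conf
        den += w
    return num / den if den else 0.0

# Example: the RGBIR camera is unsure in low light, but radar and
# thermal still report strong presence, so the fused score stays high.
readings = {"rgbir": 0.40, "radar": 0.95, "thermal": 0.90}
weights = {"rgbir": 1.0, "radar": 2.0, "thermal": 1.5}
print(round(fuse_confidences(readings, weights), 3))  # 0.811
```

A real system would learn such weightings (or fuse features much earlier in the network) rather than hand-tune them, but the principle is the same: no single modality has to carry the decision alone.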

In addition to the raw computing power needed to run an AI system, connections among camera hardware, sensor processing modules, and the other processing hardware of an automobile must also be considered. For instance, Eyeris has successfully used Maxim's MAX96706 deserializer in some of its reference designs to connect mobile industry processor interface (MIPI)-based image sensors and camera modules to the AI processing board. As automotive electronics become ever more integrated, reliable methods of handling and abstracting this data transfer are important to consider.

Given the wide range of automobiles being manufactured, a well-organized system that can be easily integrated into vehicle X, Y, or Z can significantly reduce development costs and time to market.

Hardware Innovation: Facilitating Software Innovation

We have seen an incredible explosion of computing power and hardware innovation over the past decades. That being said, innovation cycles for software naturally move at a much faster pace than those for hardware, and manufacturers often find themselves in a “catch-up” mode in relation to their software counterparts. It is one reason Tesla, Apple, and others make their own AI hardware to cater specifically to software improvements that are on the horizon.

For smaller software/AI companies, which partner with a wide range of existing hardware manufacturers, it is important to have mature software stacks and software development kits (SDKs) available that are compatible with the latest AI frameworks, such as TensorFlow, PyTorch, and ONNX, in addition to adequate raw computing power. Available compilers should support modern neural network layers, with mature software emulators, simulation engines, and related tools for AI model parsing, pruning, quantization, and other tasks. Finally, features that enable sensor fusion, such as built-in 3D disparity engines, multi-camera streaming capabilities, and rich input/output (IO) interfaces, are also incredibly helpful. These let AI, and those who set up AI systems, work with a broad array of data while cutting through the noise.
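To make the quantization step concrete, here is a toy sketch of symmetric post-training quantization: float weights are mapped to int8 with a single scale factor, shrinking storage roughly fourfold at a small cost in precision. The `quantize_int8` helper and its numbers are illustrative assumptions; production toolchains in TensorFlow, PyTorch, or ONNX handle calibration, per-channel scales, and far more.

```python
# Toy post-training quantization sketch (illustrative only): map float
# weights to int8 with one scale factor, then dequantize. Real quantizer
# tools are far more sophisticated; this only shows the core idea.

def quantize_int8(weights):
    """Symmetric per-tensor quantization of a list of floats to int8."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored value is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

Pruning plays a complementary role, removing near-zero weights entirely; together these tools shrink models enough to fit the embedded compute budgets typical of automotive hardware.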

AI Sensor Fusion: Automotive Safety and More

While this blog focuses on interior automotive sensing, there is, more generally, a range of applications in which a traditional vision-only AI setup might seem like the logical choice yet proves insufficient for a particular use case. Especially in safety-critical applications, a vision system that works most of the time, given proper lighting and other conditions, may be far from sufficient. In these situations, adding further sensing capabilities, whether a second RGB visible-light device, an IR sensor, radar, or even something like a thermal sensor for enhanced presence detection, may enable AI to sufficiently monitor and control an environment.
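As a sketch of how a secondary sensor can backstop vision in a safety-critical presence check, the illustrative rule below errs on the side of reporting an occupant whenever any sensor fires. The `occupant_present` function, its parameters, and its thresholds are assumptions made for this example, not any vendor's actual logic.

```python
# Illustrative fail-safe presence check (not any product's real logic).
# The vision score counts only when both its confidence and an estimate
# of its own input quality (lighting, glare) clear a threshold; thermal
# and radar hits always count, so the check errs toward "present".

def occupant_present(vision_conf, vision_quality, thermal_hit, radar_hit,
                     conf_threshold=0.7, quality_threshold=0.5):
    vision_says_present = (vision_quality >= quality_threshold
                           and vision_conf >= conf_threshold)
    return vision_says_present or thermal_hit or radar_hit

# Dark cabin: the camera is unreliable, but a thermal hit still flags
# the occupant.
print(occupant_present(0.2, 0.1, thermal_hit=True, radar_hit=False))  # True
```

The asymmetry is deliberate: in presence detection, a false "occupied" costs a warning chime, while a false "empty" can cost a life, so redundant modalities are combined with OR logic rather than averaged away.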

Multibillion-dollar companies may have the resources to develop their own chips in-house, but in other situations, a smaller, more flexible AI company can be the right fit for the job. Here the proper hardware partners must be identified, developed, and integrated to produce an all-in-one product for automotive and other industries. The better the available hardware and software interfacing tools, the easier it is to set up AI software, and the faster an excellent product can be produced. With the proper data, tools, and AI training, we can make our world safer and better for the users of such systems and for society as a whole.

To learn more, visit www.mouser.com
