
Fusing Sensors to Make a Smarter Driving Experience

Last April, a RAND report suggested that it would be nearly impossible to drive enough miles to prove the safety benefits of autonomous vehicles. It suggested that alternatives, such as simulation, would be needed to supplement real-world testing. The answer to part of this challenge may lie with video games.

It may not be actual video games that automakers end up using, but, as Nvidia’s Senior Director of Automotive, Danny Shapiro, describes, the GPUs and the same software elements needed to render realistic video-game graphics can also be used to simulate virtual wind tunnels, driving conditions, and crash tests. From an engineering standpoint, these realistic simulations allow a design to be tested and tweaked prior to fabrication, saving both money and time.

Simulation may be most powerful when it comes to autonomous vehicle testing. These simulations do more than test: they allow deep learning algorithms to “teach” the car how to be a better driver. Shapiro points out that this learning is fed back to actual cars to improve their performance in real-world conditions.
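To make the simulate-and-teach loop concrete, here is a deliberately toy sketch: a fake “simulator” scores candidate braking thresholds, and the loop keeps whichever candidate performs best across many simulated runs. All names, the braking model, and the numbers are illustrative assumptions, not Nvidia’s actual pipeline, which trains deep neural networks rather than a single parameter.

```python
import random

def simulate_stop(threshold_m, speed_mps):
    """Toy scenario: does braking at threshold_m stop the car in time?

    Uses a crude stopping-distance model (v^2 / 16); real simulators model
    physics, road surface, and sensor noise in far more detail.
    """
    stopping_distance = speed_mps ** 2 / 16.0
    return threshold_m >= stopping_distance  # True = safe stop

def learn_threshold(candidates, trials=200, seed=0):
    """Pick the candidate threshold with the most safe stops in simulation."""
    rng = random.Random(seed)  # seeded for repeatable "driving conditions"
    def score(threshold):
        return sum(simulate_stop(threshold, rng.uniform(5, 30))
                   for _ in range(trials))
    return max(candidates, key=score)

best = learn_threshold([10.0, 30.0, 60.0])
```

The feedback shape is the point: simulated miles are cheap, so the learner can be evaluated against thousands of scenarios before the result is ever deployed to a physical car.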

Shapiro talks about the idea of fusing different sensors (e.g., cameras, LiDAR, radar, acoustic sensors, gyros) to create a 360-degree, three-dimensional picture of the environment that is richer than anything a human could ever see. Unlike a human, who is easily distracted by external inputs like crying babies, a blaring radio, and incessant text messages, Nvidia’s car-based supercomputer focuses on the input it receives and makes decisions (at up to 24 trillion operations per second) that control the car’s steering, acceleration, and braking.
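The core idea behind sensor fusion can be sketched with inverse-variance weighting: two independent, noisy estimates of the same quantity (say, a camera’s and a radar’s distance to an obstacle) combine into one estimate that is more certain than either alone. This is a minimal sketch of the principle, not Nvidia’s implementation; production systems run Kalman-style filters over many sensors and over time.

```python
def fuse(estimates):
    """Fuse (value, variance) pairs via inverse-variance weighting.

    More-certain sensors (lower variance) get more weight; the fused
    variance is always lower than any single input's variance.
    """
    total_precision = sum(1.0 / var for _, var in estimates)
    fused_value = sum(val / var for val, var in estimates) / total_precision
    return fused_value, 1.0 / total_precision

camera = (42.0, 4.0)  # distance in metres; noisier at range (assumed numbers)
radar = (40.0, 1.0)   # radar measures range more precisely
value, variance = fuse([camera, radar])
```

Note how the fused value lands much closer to the radar’s reading, since the radar reports a quarter of the camera’s variance, while the fused variance drops below even the radar’s.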

It is the power of the network that will accelerate the accuracy of the decisions made by the GPU brains of the autonomous vehicle. Each vehicle, whether real or simulated, is part of a bigger network that provides inputs to, and is trained by, a cloud-based neural network that is much more powerful than the individual vehicle. Shapiro points out that the full 360-degree situational awareness of the autonomous vehicle provides “super-human levels of perception and tracking.”
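One way to picture this fleet-to-cloud loop is weight averaging, as in federated learning: each vehicle uploads its locally updated model weights, the cloud averages them into a global model, and that model is pushed back to every car. The function and data below are hypothetical illustrations of the shape of the idea, not Nvidia’s API.

```python
def fleet_average(weight_sets):
    """Average per-position model weights across a fleet of vehicles.

    Each element of weight_sets is one vehicle's weight vector; all
    vectors are assumed to have the same length.
    """
    n = len(weight_sets)
    length = len(weight_sets[0])
    return [sum(w[i] for w in weight_sets) / n for i in range(length)]

vehicle_updates = [
    [0.2, 0.8],  # vehicle A's locally trained weights (made-up values)
    [0.4, 0.6],  # vehicle B
    [0.6, 0.4],  # simulated vehicle C contributes on equal footing
]
global_weights = fleet_average(vehicle_updates)
```

The design point is that every mile driven, real or simulated, improves a single shared model, so no individual car ever has to relearn what another has already encountered.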


By Ken Pyle, Managing Editor

Ken Pyle is Marketing Director for the Broadband Forum. The mission of this 25+-year-old non-profit “is to unlock the potential for new markets and profitable revenue growth by leveraging new technologies and standards in the home, intelligent small business, and multi-user infrastructure of the broadband network.”

He is also co-founder of Viodi, LLC and Managing Editor of the Viodi View, a publication focused on the rural broadband ecosystem, autonomous vehicles, and electric aviation. He has edited and produced numerous multimedia projects for NTCA, US Telecom and Viodi. Pyle is the producer of Viodi’s Local Content Workshop, the Video Production Crash Course at NAB, as well as ViodiTV. He has been intimately involved in Viodi’s consulting projects and has created processes for clients to use for their PPV and VOD operations, as well as authored reports on the independent telco market.

