For consumers, SAE Level 3 automation could be like “having their cake and eating it too,” says Princeton’s Dr. Alain Kornhauser. Simply put, Level 3 promises either a traditional hands-on driving experience or a robo-chauffeured journey. At the same time, visions of new revenue opportunities associated with Level 3 are prodding car manufacturers to develop the associated hardware and software. With that explanation of the potential benefits, Kornhauser set the stage for the latest Smart Driving Car Summit session, Can Level 3 Be Delivered?
Talking About the Vehicles People Can Buy #
The scope of the panel is bound by those things consumers can buy, not things like robo-taxis. Or, as panel moderator Russell Shields, President and CEO of RoadDB, calls them, SPVs (Series Production Vehicles) with Highly Automated Driving (HAD). These vehicles have eyes-off and hands-off capabilities.

[dropshadowbox align="center" effect="lifted-both" width="auto" height="" background_color="#ffffff" border_width="1" border_color="#dddddd"]Added 03/01/21 – In his opening comments, Shields pointed out that the SAE, and the
rest of the consumer vehicle industry, treats automation features as a product separate from the vehicle [see SAE J3016 for an explanation of the various automation levels].
In a subsequent email exchange, he made it clear that the level of driving automation is dependent upon the Operational Design Domain (ODD). For instance, a vehicle with a driver automation product that is capable of Level 4 (no human intervention) operation on some highways in good weather (its ODD) will only operate in an automated mode when the driver requests it and the vehicle software determines that the vehicle is in the ODD and that all of the vehicle’s sensors are operating properly. Otherwise, the driver will have to control the vehicle.
In both the Level 3 and Level 4 cases, UN Regulation No. 157 on Automated Lane Keeping Systems (ALKS) specifies that the driver automation product will initiate a Minimum Risk Manoeuvre (MRM) in the event that the human does not take over control when requested (a Transition Demand, as defined by ALKS).[/dropshadowbox]
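The activation and fallback flow described above can be sketched as a simple state machine: automation engages only when the driver requests it, the vehicle is in its ODD, and all sensors are healthy; leaving the ODD raises a Transition Demand; and an unanswered demand triggers the MRM. This is a minimal illustration, not any manufacturer's implementation — the `AlksController` class, the state names, and the 10-second takeover deadline are assumptions (R157 specifies its own timing requirements).

```python
from enum import Enum, auto

class Mode(Enum):
    MANUAL = auto()             # driver controls the vehicle
    AUTOMATED = auto()          # automation active inside its ODD
    TRANSITION_DEMAND = auto()  # driver asked to take back control
    MRM = auto()                # Minimum Risk Manoeuvre in progress

class AlksController:
    """Toy sketch of the activation/fallback flow described above."""
    TAKEOVER_DEADLINE_S = 10.0  # illustrative value, not from R157

    def __init__(self):
        self.mode = Mode.MANUAL
        self.demand_elapsed = 0.0

    def request_automation(self, in_odd: bool, sensors_ok: bool) -> Mode:
        # Engage only if the driver requests it AND the vehicle is in
        # the ODD AND all sensors are operating properly.
        if self.mode is Mode.MANUAL and in_odd and sensors_ok:
            self.mode = Mode.AUTOMATED
        return self.mode

    def leave_odd(self) -> Mode:
        # Leaving the ODD (or a sensor fault) raises a Transition Demand.
        if self.mode is Mode.AUTOMATED:
            self.mode = Mode.TRANSITION_DEMAND
            self.demand_elapsed = 0.0
        return self.mode

    def tick(self, dt: float, driver_took_over: bool) -> Mode:
        # While the demand is pending, either the driver resumes control
        # or, past the deadline, the system falls back to the MRM.
        if self.mode is Mode.TRANSITION_DEMAND:
            if driver_took_over:
                self.mode = Mode.MANUAL
            else:
                self.demand_elapsed += dt
                if self.demand_elapsed >= self.TAKEOVER_DEADLINE_S:
                    self.mode = Mode.MRM
        return self.mode
```

The key property the sketch captures is that the vehicle never simply hands control back: an ignored Transition Demand always resolves into a defined Minimum Risk Manoeuvre rather than an uncontrolled state.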

“Do we need it?” asks Gerry Conover, Managing Director, PRC Associates, regarding Level 3. A world-renowned expert on intelligent transport systems and services, Conover notes that in 2014, Level 3 was introduced as a bridge between Level 2 (human-assist technologies) and Level 4 (driverless). The intent of Level 3 is for the vehicle to handle situations requiring an immediate response, while drivers must still be prepared to intervene.
But will the driver be ready to regain control if they are texting, napping, or doing other things that passengers do? It may also not be obvious to the driver what they have to do when they need to take back control (what he calls mode confusion). Mix in standards that are still developing and manufacturers’ proprietary ways of interacting, and confusion seems to be the operative word.
A Mixed View from Europe #
Bringing the European perspective to the discussion, Joost Vantomme, Smart Mobility Director, European Automobile Manufacturers’ Association, says European truck manufacturers see Level 3 as a transition to Level 4 and view it as an aid to professional drivers. European manufacturers of passenger vehicles are split on the need for Level 3, and there is not yet an EU-wide Level 3 type approval.
Part of the issue is whether society will trust and be ready for Level 3. Vantomme astutely points out that, like broadband, there are two core issues: accessibility to and affordability of mobility. It is unclear how Level 3 addresses those issues. Then there is the question of whether drivers will be ready to adapt to a new role as back-up to an automation system.

The digital twinning of the physical infrastructure, whereby there is a virtual representation through maps and telecommunications, is a huge deal in Europe, according to Vantomme. Adapting the physical infrastructure to accommodate such a network, and keeping it updated, are among the challenges.
Other challenges include traffic rules and regulations. When a car can drive itself in a Level 3 configuration, what can the human driver now legally do (e.g., eat, text, etc.)? Vantomme indicates that there are national and regional differences among the EU’s 27 countries that have a potential impact when crossing borders.
He questions whether the policy framework is ready. He points to UN Regulation No. 157, Automated Lane Keeping Systems (ALKS), as a major milestone in creating a uniform way to perform a specific automation task. Still, this is just one policy element; a broader framework will need to include agreement on things such as:
- Hands off warnings (an adaptation of UN-ECE R79)
- Speed limitations
- Lane changes
- Enforcement of liability rules
- Ethical rules
Underlying the above challenges is the forward and backward compatibility of the technology, particularly around communications.
Technology as Easy as 1/2, 2+, 4+ #

Qualcomm Technologies’ Jeremiah Golston simplifies the different SAE automation levels into three groups:
- Active safety – Levels 1 & 2 – includes features like forward collision, blind spot & lane departure warnings, automatic emergency braking, and automatic cruise control.
- Convenience – Level 2+ – includes autonomous parking, automated lane change, hands-off highway autopilot, and adaptive cruise control with lane-keeping.
- Self-driving – Level 4+ – includes robotaxis, robo-logistics, long-haul trucking, parking lot to parking lot.
This definition is a little more nuanced than the two-category self-driving/driverless definition provided in the 2/18/21 Smart Driving Car Summit session. Still, it is much simpler than SAE’s J3016, six-level automation definition. Interestingly, Qualcomm doesn’t explicitly refer to Level 3 in its categorization of automation.
With that as a framework, the amount of automation is directly proportional to the hardware installed on the car:
- Active safety – cameras and radar begin to provide a level of perception
- Convenience – adds surround cameras and Driver Monitoring System (DMS), a sort of monitor of the human who is supposed to be monitoring the automation.
- Self-Driving – cameras that see near and far and additional perception through LIDAR.
Camera resolution is increasing as well, moving from the one-megapixel range to 8 megapixels, allowing up to 300 meters of vision. At the same time, 10 cm accuracy is the objective for lane keeping.
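A back-of-the-envelope calculation shows why pixel count matters at range. The article cites only the 8 MP figure, the 300 m range, and the 10 cm lane-keeping objective; the 30-degree horizontal field of view and 3840-pixel sensor width below are assumptions for illustration (roughly an 8 MP sensor at a 16:9 aspect ratio), not specifications from any vendor.

```python
import math

def lateral_resolution_m(range_m: float, hfov_deg: float = 30.0,
                         width_px: int = 3840) -> float:
    """Width of ground covered by a single pixel at a given range.

    hfov_deg and width_px are illustrative assumptions, not figures
    from the article, which cites only 8 MP / 300 m / 10 cm.
    """
    rad_per_px = math.radians(hfov_deg) / width_px  # angle per pixel
    return range_m * math.tan(rad_per_px)           # meters per pixel

res_300m = lateral_resolution_m(300.0)
print(f"~{res_300m * 100:.1f} cm per pixel at 300 m")
```

Under these assumed optics, one pixel covers roughly 4 cm of lateral ground at 300 m, inside the 10 cm objective; a lower-resolution sensor with the same lens widens that per-pixel footprint proportionally, which is the pressure behind the move to 8 MP cameras.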
Both the increasing resolution and hardware requirements are driving the need for additional computation power. As such, Qualcomm’s road map shows formerly discrete ECUs (Electronic Control Units) integrated into a consolidated centralized ECU to both reduce cost and improve reliability by reducing the number of parts.
Another key to scaling is to use the same software architecture regardless of where an OEM is on the automation scale. Still, the software is limited by the input it receives from the hardware on a given vehicle. Golston observes that most EV start-ups are installing the hardware for full automation, while incumbent OEMs are only installing the full hardware packages on select vehicles.
The Balance Between Selling the Future and Over Promising Today’s Features #
Consumer confusion seems to increase in direct proportion to the level of automation. This starts with the general public’s perception of a car’s capabilities. For the driver, the confusion lies in the human-machine interface and in understanding how a particular feature works. The driver may put too much trust in the technology, and some will even defeat the features intended to make sure the driver is paying attention.
Hence, Audi’s Alison Pascale emphasizes that
- A really good HMI (Human-Machine Interface) is important so the driver remains engaged and understands what is going on.
- There needs to be good education of the purchaser/driver.
- Education needs to extend to the general public so they understand and embrace, instead of fear, the technology. Pascale lauds the efforts of PAVE to educate the public.¹

Media, whether paid advertising or news coverage, is probably the predominant source of the general public’s knowledge about autonomy. Ironically, the company with the loudest voice, Tesla, spends virtually nothing on media and yet receives non-stop coverage. Shields’ assessment is that “Tesla risks ruining it for everyone by the way it is marketing its automation systems.”
Amplifying Shields’ point, Conover bluntly states that,
“Tesla is being totally irresponsible in what it is doing. Got to get him [Elon Musk] under control. The media loves Elon, Wall Street loves him, Courts love him.”
For some, the Tesla product names “Full Self-Driving” and “Autopilot” connote a hands-off/eyes-off experience, even though Tesla’s current description (png) says one must be an attentive driver to use either of these packages. That perception may even lead to misuse/hacks of the technology to perform a form of unsafe self-driving. Adding to the confusion, Tesla’s product definition changes with each new software update, and the perception is further shaped by Elon Musk’s tweets and promises of the technology’s potential.
Russ Mitchell, the highly respected automotive reporter for the LA Times, points out that Tesla is pulling the rest of the industry along, as evidenced by recent Super Bowl ads that prominently featured “hands-free” driving experiences beyond the capabilities of today’s vehicles (e.g., the Cadillac LYRIQ ScissorHandsFree video, added 3/1/21).
¹ Tesla is notably absent from the Partners for Automated Vehicle Education.
The Balance Between Innovation and Safety #
Automakers have simply gotten ahead of NHTSA, according to Conover. That said, both Conover and Shields are pessimistic about the ability of Congress to craft legislation creating clear definitions for the industry, and they question whether Congress can even craft relatively simple targeted legislation, such as increasing the exemptions for driverless vehicles.
Former Assistant Secretary of the Department of Transportation Diana Furchtgott-Roth believes that the incentives of OEMs and regulators diverge: OEMs want to sell cars, while regulators want to make them safe, she opines. In such a dynamic market, litigation may be a better regulator than the regulators, Furchtgott-Roth suggests.

A standardized Data Storage System for Automated Driving / Event Data Recorder (DSSAD/EDR) aims to give a clear picture of the interactions between the driver and the automated driving system, prior to a crash or incident. Shields’ estimation is that cars might have this capability in 10 years.
In the end, the intent of the technology should be to make driving safer for the human driver and, by extension, the surrounding people, property, and vehicles. As several of the panelists pointed out, Level 3 adds convenience for the driver. Echoing last week’s Smart Driving Car Summit panel, the LA Times’ Mitchell proposes that industry and media use a simpler two-level definition for automation: the human-driven vehicle (self-driving) and the driverless version, where there is no driver.
Finally, Dr. Kornhauser summarizes the panel by suggesting that “The technology is here, but the problem is us.” There are issues, which mostly have to do with humans and how they interact with technology. Kornhauser holds out hope for Level 3 in the trucking industry as the drivers are professionals. Level 3 for these mobile workers is about more than convenience, as it offers the potential to improve the work environment.
The question for the insurance industry is whether Level 3 will improve its bottom line. The answer to that question will determine the adoption of Level 3.
Stay tuned for next week’s Smart Driving Car Summit, as insurance as a tool for wider deployment of self-driving and driverless will be the topic of discussion.