Self-driving vehicles won’t be widely viable commercially until their AI guidance systems are better than human drivers and can adjust to unpredictable road circumstances.
By Esther Ajao
When electric car giant Tesla recently introduced a beta of its latest full self-driving software and then withdrew it, the abrupt reversal encapsulated the stop-start state of autonomous vehicle technology.
In a tweet on Oct. 23 about the withdrawal of the system, Tesla CEO Elon Musk said it’s impossible at this point to test all configurations of a fully self-driving vehicle.
That dilemma is at the heart of the fitful progress on autonomous vehicle technology by major automakers such as Ford, General Motors and Hyundai, and tech giants including Apple, Google’s Waymo subsidiary and Amazon’s Zoox robotaxi subsidiary.
The industry must also overcome daunting challenges in ensuring safety.
While Tesla cars today have only limited autonomous capabilities, the company has made significant advances, including putting cars with partial self-driving features on the road.
But some experts say Tesla’s chief limitations stem from a sensor suite that is not quite diverse enough. And different iterations of that problem also afflict other developers of autonomous vehicle technology.
Tesla relies mainly on cameras and software to sense the environment around the car, leaving its vehicles at what the industry calls Level 2 autonomy, meaning the driver’s hands must always be on the wheel. Others in the industry developing Level 3 or 4 technology (vehicles that need only limited driver intervention, or none at all within a defined operating domain) are using a combination of cameras, software, lidar, radar and updateable HD maps.
“The advantage to adding these additional sensor modalities is that it provides redundancy across environmental conditions … and road types, and provide[s] an alternative method to distinguish certain roadway elements and actors,” said Matt Arcaro, an analyst at IDC.
Advances in autonomous vehicle technology
A prominent vendor in this area is Waymo. The Google subsidiary runs a self-driving taxi service in a section of Phoenix.
The vendor is also working on applying the technology in other areas such as trucking, logistics and personal vehicles. Recently, Waymo and GM’s Cruise division became the first autonomous vehicle technology vendors to obtain permits in California allowing them to transport passengers.
Other promising projects involve traditional and new automotive manufacturers working on autonomous technology for robotaxis and personal vehicles.
For example, in 2019 Volkswagen said it would join Ford in investing in AI vendor Argo AI to introduce autonomous vehicle technology in the U.S. and Europe. The companies committed to spending more than $4 billion through 2023 to develop their self-driving service.
Another project involves Intel subsidiary Mobileye and SIXT, a provider of mobility services in Germany; the vendors plan to offer robotaxis in Munich next year.
Amazon’s Zoox is designing autonomous vehicles from the ground up. The Zoox strategy is distinguished by its unique design, which doesn’t resemble a car: the vehicle has no distinct front or back, and no steering wheel.
The safety issue with autonomous vehicles
Despite the advances in the technology and interest in it from the public and enterprises, the autonomous vehicle industry is still up against many challenges, chief among them safety.
Although the industry is making progress in trying to make sure the technology is safe, regulation that could enforce safety requirements is largely absent, and it is mostly unclear still how local authorities will administer laws and regulations governing the use of autonomous vehicles.
Some states have enacted legislation for autonomous vehicles. Nevada requires manufacturers to meet detailed testing and safety rules; maintain a $5 million insurance policy to test vehicles on public roads; and, as of now, place a safety driver on board.
“If it’s vague out there that you’re relying on each individual provider to come up with their own definition of safety, that’s not the right way to do it,” Arcaro said.
And if autonomous vehicles are not at least as safe as human drivers, then there’s no point to them, said Sam Abuelsamid, an analyst at Guidehouse.
Abuelsamid noted that while human drivers are blamed for causing most crashes, people are generally good at driving; most of the time they’re behind the wheel, they don’t crash.
“We have not yet proven that autonomous vehicles can do the same, especially in the more challenging conditions,” he said. “We will undoubtedly have more crashes and more fatalities with autonomous vehicles because it’s impossible to make a system that complex perfect. But if we make it better than humans, then we’ll make progress.”
The uses of autonomous vehicle technology
Another challenge with the technology is figuring out where it’s useful.
Vendors such as Waymo, Uber and Motional — a joint venture between Hyundai and autonomous vehicle technology vendor Aptiv — are using the technology to develop robotaxis.
However, the goal and challenge for businesses is to find wide applications for the technology that can help them make money.
One industry that’s crying out for autonomous vehicles is trucking, which is suffering from a widespread driver shortage. Guiding a vehicle autonomously on the highway is also much simpler than doing so in the city.
Waymo’s Via system uses cameras, software, lidar and radar configured to the specific requirements of truck driving. Startup vendor Aurora has put autonomous trucks on the road with backup human drivers. However, Aurora hopes to remove the drivers by 2023, when it fully launches its trucking business.
While there appears to be a real need for autonomous vehicles in the trucking business, Arcaro said other sectors are interested too, but self-driving is not like “turning on a light switch.”
“A lot of enterprises are a little skeptical because of initial timelines, but they’re also open to the technology,” Arcaro said.
Challenges for automated driving systems
Part of the reason enterprises remain skeptical is that, although some models are on the road, autonomous vehicle technology is still firmly in the testing phase.
Driving is an extremely difficult task for AI technology to understand, according to Abuelsamid.
“It takes a lot of nuance, and nuance is something that computers and software are actually not good at,” he said.
One reason is that AI is not as adaptable as the human brain and can fail in unpredictable ways.
The task of an automated driving system is broken down into four major steps:
The first step is sensing and perception, which uses software running on top of a sensor array that includes cameras, radar, lidar and ultrasonic sensors. The sensors scan the vehicle’s surroundings to detect what’s around it, in much the same way humans use their senses to recognize objects and distinguish them from one another.
The next step is to make sense of what all the different objects around the vehicle are. After that, the technology must be able to predict what the objects — including pedestrians, bicyclists, other vehicles and roadway items — might do within three to five seconds.
After the prediction, the next step is to plan a path for a vehicle to drive through that environment.
The final step is control. This is where the system sends signals to the steering and brakes and accelerator to make the car or truck go where it needs to.
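The four-step loop described above — perceive, predict, plan, control — can be sketched in heavily simplified form. Everything below is illustrative: the class names, the 30-meter lookahead, the lane half-width and the constant-velocity prediction are assumptions made to keep the sketch self-contained, not any vendor’s actual stack.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class TrackedObject:
    kind: str                      # e.g. "pedestrian", "vehicle", "cyclist"
    position: Tuple[float, float]  # (x, y) in meters, relative to the ego vehicle
    velocity: Tuple[float, float]  # (vx, vy) in meters per second

def perceive(detections: List[dict]) -> List[TrackedObject]:
    # Steps 1-2: turn fused sensor detections into classified, tracked objects.
    return [TrackedObject(d["kind"], d["pos"], d["vel"]) for d in detections]

def predict(objects: List[TrackedObject],
            horizon_s: float = 3.0) -> List[Tuple[float, float]]:
    # Step 3: extrapolate each object 3-5 seconds ahead
    # (a naive constant-velocity model stands in for real prediction).
    return [(o.position[0] + o.velocity[0] * horizon_s,
             o.position[1] + o.velocity[1] * horizon_s) for o in objects]

def plan(predicted: List[Tuple[float, float]]) -> str:
    # Step 4 (path planning): brake if anything is projected into our lane ahead.
    for x, y in predicted:
        if 0.0 < x < 30.0 and abs(y) < 1.5:
            return "brake"
    return "keep_lane"

def control(decision: str) -> Dict[str, float]:
    # Final step: translate the plan into actuator commands.
    return ({"brake": 0.8, "throttle": 0.0} if decision == "brake"
            else {"brake": 0.0, "throttle": 0.2})
```

For example, a pedestrian detected 20 meters ahead and drifting toward the lane at 1.5 m/s ends up within the lane after 3 seconds, so `plan(predict(perceive(...)))` returns `"brake"` even though the pedestrian is not in the lane yet — which is the point of the prediction step.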
“Perception is really hard for software,” Abuelsamid said. “AI systems can actually be fooled by subtle changes.”
One of the thorniest problems for autonomous vehicle technology is the prediction phase. While it might be easy for an autonomous vehicle to anticipate what other vehicles could do, pedestrians are relatively unpredictable and can change their minds at any moment.
“So, it has taken longer to create autonomous vehicles that can operate reliably in the kinds of environments with which humans have few problems,” Abuelsamid said.
For engineers, it’s about being able to manage and deploy a system that can support not only 99% of what is predictable, but also the small fraction that is unpredictable, Arcaro said.
“That last one percent is very difficult,” he continued. “That’s what’s keeping cars off the road today.”
The possible workaround
Some vendors are already finding ways around these problems and taking multiple pathways, Abuelsamid said.
They are deploying multiple algorithms for perception instead of just one. Rather than having software rely on cameras, lidar or radar alone, developers are using them all together. In some cases, vendors use both a deterministic approach and a probabilistic approach in their development of autonomous vehicles.
The deterministic approach is traditional and rules-based, Abuelsamid said. This is when an input signal to the algorithm will always yield the same result and the algorithm’s path is fully traceable.
“This is a well understood and reliable but sometimes limited approach,” Abuelsamid said.
The probabilistic or AI approach uses complex models with many parameters that are often determined by the algorithm itself with a training process in which known, labeled data is fed to the algorithm.
Subsets within this category include machine learning and neural networks. The algorithms usually produce a probable answer about the likelihood that an object is, say, a pedestrian or a vehicle. According to Abuelsamid, these systems tend to be brittle and can give wrong responses in unexpected ways.
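The contrast between the two approaches can be sketched with toy examples. In the deterministic check, the rule is fixed and fully traceable; in the probabilistic one, the weight and bias would normally be learned from labeled training data, but here they are hand-set (an assumption) so the sketch is self-contained, and the single bounding-box aspect-ratio feature is likewise invented for illustration.

```python
import math

# Deterministic, rules-based check: the same input always yields the
# same result, and the reasoning is fully auditable.
def object_in_crosswalk(x: float, start: float, end: float) -> bool:
    return start <= x <= end

# Probabilistic (AI-style) check: a one-feature logistic model.
# W and B stand in for parameters a training process would determine.
W, B = 2.0, -4.0  # illustrative "trained" parameters, hand-set here

def p_pedestrian(aspect_ratio: float) -> float:
    # Tall, narrow bounding boxes (high height/width ratio) score as
    # likely pedestrians; the output is a likelihood, not a yes/no answer.
    return 1.0 / (1.0 + math.exp(-(W * aspect_ratio + B)))
```

The deterministic function can be traced step by step when it misbehaves; the probabilistic one only reports confidence, which is exactly the traceability trade-off the two approaches present.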
Making algorithms reliable
Most developers are now combining both approaches and breaking their software down into smaller chunks so that even if a specific algorithm can’t be traced, the problem can be isolated.
“By using all of those together, you can mix and match them in different ways so that you can verify what you think the system can verify … and have a more accurate result,” Abuelsamid continued. “Having that redundancy built in really turned out to be really important to have a safe and reliable system.”
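One minimal way to picture the redundancy Abuelsamid describes is cross-checking independent per-sensor classifications. The majority-vote scheme and the `"unknown"` fallback below are illustrative assumptions, not any vendor’s fusion logic.

```python
from collections import Counter

def fuse_labels(camera: str, lidar: str, radar: str) -> str:
    """Cross-check three independently produced classifications with a
    simple majority vote; on full disagreement, fall back to a
    conservative label the planner can treat as an obstacle to avoid."""
    label, count = Counter([camera, lidar, radar]).most_common(1)[0]
    return label if count >= 2 else "unknown"
```

The value of the redundancy shows up when one sensor is fooled: if the camera misreads a pedestrian as a vehicle but lidar and radar agree, the fused result is still correct.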
Another way vendors are trying to attack this problem is by focusing on one industry. While some vendors zero in on the robotaxi industry, others are looking at the delivery industry.
While some focus on highway trucking, which is simpler than city driving and involves few intersections, others are reining in the scope of their autonomous vehicle technology by geography, location or weather conditions.
“What we’re seeing now are companies that are limiting the problem set in some way, and having a limited design domain,” Abuelsamid said.
One autonomous vehicle vendor that has been somewhat successful in constraining the problem set is Motional. Motional transports passengers in Las Vegas through a partnership with Lyft. The company said it has conducted more than 100,000 public rides with two safety drivers in the front seat.
Earlier this year, Motional began testing driverless vehicles in Las Vegas after a two-year safety evaluation period and testing self-driving taxis with a safety driver. The company says it will launch a driverless service in 2023 through its partnership with Lyft, without a safety driver.
The Motional method
Meanwhile, Motional is still working to ensure its technology can handle unpredictable situations without a safety driver.
The vendor is using a machine learning-first approach and a continuous learning framework, said Sammy Omari, vice president of engineering and head of autonomy.
The continuous learning framework automatically detects unusual situations or objects, automatically produces training data within the Motional system, and lets engineers and data scientists quickly retrain and redeploy a model if its performance in simulation is good enough.
“This automatic loop allows us to get better, build a better autonomy and handle these kinds of rare situations better with every single mile we drive,” Omari said.
Motional also uses the continuous learning framework to mine for unexpected situations and determine if its prediction system is responding correctly.
If the prediction system does not respond correctly, Omari said, that data is put into the training data that is used to train the model or system. In some cases, the data may also be placed into the model validation set and used to re-create simulation scenarios.
One challenge Motional faces with the continuous learning approach is that it’s working with a large volume of data, as well as hundreds of cars that are out on the road every day.
Processing all that data, pushing it into the cloud and mining for relevant scenarios to create the training and retraining data is also challenging. Another hurdle to overcome is to keep the costs of doing all that for each vehicle under control.
“If you want to pull the safety driver out, you need to have the confidence that you have the right behavior of hundreds or thousands and millions of files,” Omari said.