Por Carlos Gonzalez
John Ludwig is an electrical engineer and the president of Xevo’s Artificial Intelligence (AI) Group. Xevo is a tier-one OEM software company, located in Seattle, that manages automotive software for driver assistance, engagement, and in-vehicle entertainment.
Its main product is the Xevo Market, a merchant-to-driver commerce platform that uses a vehicle’s infotainment screen to make purchases and transactions from inside the car. Xevo Market launched at the end of 2017 and is already available in millions of vehicles.
Prior to working with Xevo, Ludwig was a software manager with Microsoft, overseeing operating systems and online service projects. Afterwards, he spent 12 years as a venture capitalist, leading to the formation of his own company, Surround IO, which was then acquired by Xevo in 2017. Ludwig is now the head of the AI department at Xevo, and we spoke with him about the future of AI in cars, how AI will enhance the driving experience, and the future of vehicle autonomy.
John Ludwig is the president of the Artificial Intelligence division at Xevo.
What is Xevo’s goal in the car marketplace? Are you looking to be the go-to company for automotive AI?
We take two different approaches toward software inside the vehicle. First, there’s the low-level software that manages the engine and the driving autonomy of the car, handling those millisecond-level decisions. That software domain is one we don’t participate in directly, but we work with the manufacturers to leverage its data.
Creating a better and safer experience: It’s all about making data streams better, and artificial intelligence helps us accomplish this in a couple different ways. First, there’s a large amount of information the user needs to know about their driving in order to be a safer and better driver.
By using AI, we can provide the driver with information on how to be a safer driver. AI can determine that you tend to be good at driving in the rain but not so good in highway situations, or that you’re not very attentive when you get into downtown city areas, and so on. Because we are making people safer drivers, we can help them obtain lower insurance rates.
Using AI, we can also enhance a driver’s journey. By knowing their preferences, like their favorite coffee spot, or the food and goods they typically purchase, the AI system can offer suggestions on the best way to fill their time at their destination or along the way.
For example, say I didn’t have time to grab my morning coffee: The car’s AI can figure out where my favorite coffee is on the way, the best time to buy it, and the best spot to stop based on my driving trends.
When you talk about learning from user data, where is the data stored and where is the computing taking place? Is the computing done offsite via the cloud or is the computing being performed at the edge, which in this instance would be the car?
Before we were acquired by Xevo, our company was called Surround IO, and we were 100% focused on computing at the edge. We saw the massive growth in the amount of sensor data becoming available and how affordable it was becoming to collect that data at the edge.
Computing at the edge is a more efficient method because you can’t possibly send all the data to the cloud. Even if you had all the privacy and security issues handled, computational access to the cloud will not be as fast as at the edge. We are very focused on pushing the computing power out to the edge, making full use of the car’s processing capabilities.
The AI software in the car can help find your favorite coffee spot along your route autonomously.
We are seeing quite a bit of that in automation. While the cloud is useful for connecting networks, especially those that are located far apart and have dedicated internet access, several companies have focused on the edge due to its faster processing capabilities.
Correct. There are certain instances where edge computing is the only answer. If a pedestrian isn’t paying attention, the car needs to decide at the edge and apply the right action then and there.
Currently, car manufacturers are devising their own standards for AI deployment. Do you think there needs to be a single standard for AI implementation? Would government regulation help drive the market?
I think it is a matter of volume scaling. I don’t think it matters so much if you have different standards in Europe vs. North America vs. Asian markets.
My experience from many years in the software industry is that eventually there will be a de facto standard that will arrive and push things forward. If you look to the early days of cell phones, when we had all different kinds of phones and services, it all got washed away, eventually, by IP networking. Standards will emerge and the marketplace will slowly push other things out.
Does Xevo have a preference towards a particular type of sensor?
Our focus is on the software and on collecting the data from all different kinds of sensors. We are fairly flexible, but how we obtain the data depends on the car manufacturer. I think there are a few frameworks that are going to dominate the field. Google, Microsoft, Facebook, and Amazon are all pushing very hard in this area. Those are going to dominate, and we tend to use a lot of Google’s framework, TensorFlow, for our technology. It has a huge community and activity around it.
In any case, we work with all of the frameworks and try to make it easy to move data to and from their systems and their competitors’ systems. This helps our customers avoid having to overcommit right now to any one of those frameworks. There’s a lot of portability of models and data back and forth between them, and that’s good for the industry.
Xevo also helps design car manufacturers’ mobile apps as another way for drivers to stay in touch with their cars.
In which vehicles is your software currently featured?
We are in every Toyota or Lexus vehicle. The backend services behind the infotainment experience in those vehicles are provided by Xevo. The service may be invisible to you, but if you use their mobile app or any of the infotainment services in the car, you are leveraging our services.
Any late-model GM vehicle uses our service. Xevo Market was launched last year, which allows you to make in-car payments for services such as fuel or food ordering. It is currently available in two million of their vehicles, with plans to roll it into future GM lines.
We recently announced a new partnership with Hyundai, which will allow customers to find and pay for coffee, gas, and parking using their car’s infotainment screen. The new services will work with the Hyundai Blue Link connected-car system.
One of the big things we are focused on is figuring out how to monetize all this data and turn it into actual revenue for merchants and the OEMs. In our business, that is the aspect that is missing. Several manufacturers are talking about how to collect data and share it, but it’s really important for the OEMs, and for all the people providing the data, that there is a monetization model.
The first step is to demonstrate how our technology can generate new revenue transactions, to help defray the manufacturer’s cost of investing in more sensor equipment. We also have to assure the end user that the data will be secure and private before it is sent to the cloud. Ultimately, the end user has to feel good and have confidence about how their data is being used.
I definitely see the potential benefits and concerns for end users. On the one hand, the AI assistance can be immensely useful, from preventing accidents and correcting bad driving habits to assisting with day-to-day tasks. On the other hand, you have privacy concerns and the risk of users’ information and data being misused.
We feel this will first be implemented in fleet use cases, where someone is driving the vehicle as a service. The fleet company is paying for the insurance and operation of the vehicle. The company can then deploy our software to track driver performance and provide the company with insights. This can help with insurance rates by proving the reliability of the fleet drivers.
What is your personal take on the future of self-driving cars? Where do you think we will be five to 10 years from now?
Xevo doesn’t directly invest in autonomy, but I certainly have my own opinions on how it will unfold. I suspect we’re going to have some situational autonomy sooner rather than later, but I think full autonomy will take a while. For example, take the carpool lane on the highway: If it were reclassified as only for autonomous vehicles, you could control the vehicles on the highway and help streamline traffic.
Downtown areas, where people are pulling cars in and out of different parking situations and garages while dealing with heavy foot and bike traffic, are a more complicated situation. It strikes me that it may take us a while before we are ready to take all that on.
What do you see as the future of Xevo? Is the goal to be in every car manufacturer on the market?
I think you’ll see Xevo software in every car manufacturer in the next five years. We have engagements with every significant carmaker right now, and we have contracts with Asian car manufacturers as well. With autonomous vehicles approaching the market, people will be spending more time in their cars and looking to enhance their driving experience. That’s great for us because we can offer them solutions and services to take advantage of that time.