Cruise and the lies about autonomous vehicles

Autonomous vehicles have been front-page news for years, even decades, with countless articles claiming they will replace vehicles as we know them. The truth is that although their arrival has long been heralded, the reality is very different. Not only have they not arrived, they are still a long way off: the technology is extremely complex, the amount of data required is monstrous, and the systems need years of training.

In the U.S., one autonomous vehicle company offering a robotaxi service, Cruise, is in the news because the California Department of Motor Vehicles has decided to take its cars off the road, and the reasons are compelling: people's safety is at stake.

Specifically, in San Francisco, a Cruise vehicle crashed into a fire truck; on another occasion one ran over a pedestrian, dragging her for several meters and seriously injuring her; and in another accident one killed a four-year-old girl.

Kyle Vogt, the CEO of Cruise, a company owned by General Motors, claimed that autonomous vehicles do not get drunk, take drugs, get tired, or get lost. But even if they do not suffer these mishaps, they have endangered people's safety and in some cases caused deaths.

The point is that we cannot afford to add deaths caused by autonomous vehicles on top of the deaths already caused by traffic accidents. That alone is more than enough reason to take them off the road.

But let’s talk a little bit about the levels of vehicle autonomy and what they mean:

Level 0. No automation. The driver has to do everything.

Level 1. Driver assistance. The driver still does almost everything, but the car is equipped with some basic automation, such as lane departure warning or cruise control.

Level 2. Hands off. The car can take over driving in certain situations. It is still quite limited, and the driver must be ready to intervene within seconds of receiving a warning from the car.

Level 3. Eyes off. The car can take over the driving function for longer periods, for example on highways. In less complicated situations it offers a kind of autonomous driving. The driver is still there and can intervene, but the autonomous driving function lasts much longer and covers much more ground.

Levels 4 and 5. These are considered true autonomous driving and the goal we would like to reach as a society. Level 4 is called mind off and level 5 person off.

At level 4, human intervention is still expected in some situations. There will still be a cockpit, a steering wheel, and pedals, and in very exceptional situations a human will still drive, although the car remains autonomous.

At level 5 there is no driver. Everyone in the car is a passenger; there is no steering wheel, no pedals, no way for a human to intervene even if they wanted to. Level 5 is the societal goal where the real benefits of autonomous driving would appear: accident rates projected to fall below 1% of today's, lower carbon emissions, and so on.
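The taxonomy above can be summarized in a small sketch. This is my own illustrative encoding of the SAE levels as described in this article; the nicknames ("hands off", "eyes off", "mind off", "person off") are the informal labels used here, not official SAE terminology.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Levels of vehicle autonomy, as described above (informal labels)."""
    NO_AUTOMATION = 0      # driver does everything
    DRIVER_ASSISTANCE = 1  # basic aids: cruise control, lane departure warning
    HANDS_OFF = 2          # car drives in limited situations, driver supervises
    EYES_OFF = 3           # car drives for longer stretches, e.g. highways
    MIND_OFF = 4           # cockpit and pedals remain for exceptional cases
    PERSON_OFF = 5         # no steering wheel, no pedals, passengers only

def human_may_need_to_intervene(level: SAELevel) -> bool:
    """At levels 0 through 4, a human can still be asked to take over."""
    return level < SAELevel.PERSON_OFF

print(human_may_need_to_intervene(SAELevel.HANDS_OFF))   # a human still supervises
print(human_may_need_to_intervene(SAELevel.PERSON_OFF))  # no intervention possible
```

The key structural point the sketch captures is that only at level 5 does the human disappear entirely from the control loop.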

I have participated in workshops on autonomous vehicles, I have worked with engineers who are still developing them, and I have contributed a chapter on privacy and ethical issues to a book on autonomous vehicles. It is not a foreign subject to me, and in this article I touch on only about 20% of what I have studied and researched, but it is enough to show where we are.

The key point is that the benefits of autonomous driving would only be achieved, and felt by society, once we reach at least 90% adoption of autonomous cars.

Now comes the million-dollar question: what is the most advanced technology on the market today? Society is at level 2, with a little bit of level 3, and there is even debate about whether we are really at level 3. That is the reality, nothing more, nothing less. So imagine what it will take to reach the level of autonomy Elon Musk has been promising for the last nine years.

With autonomous cars, the focus shifts from the driver to the passengers and to the people around the car: the car talks to other cars and carries cameras not only inside but also outside.

In fact, at the Mobile World Congress a couple of years ago, some of the big car manufacturers were present not only as car makers but also as data companies.

Going back to the case at hand, what led a company to put autonomous vehicles in a city without being 100% sure that they would not cause any harm? Nothing more or less than the culture of the technology industry and its fierce competition, in this case with Waymo, Google’s company.

In fact, Kyle Vogt, Cruise's CEO, focused on rapid growth and forgot about safety. Fixated on the competitive race with Waymo, Vogt wanted to dominate the market the way Uber did against its competitor Lyft.

Let’s think about what we’ve just read: he put company growth ahead of safety, and that has caused a huge crisis of public confidence, perhaps the most difficult crisis to overcome. Every time one of the cars in the Cruise fleet had an accident, such as when it collided with a Toyota Prius in the bus lane, the company focused on changing its software to prevent that type of accident from happening again.

Later, another Cruise vehicle collided with a fire truck responding to an emergency, and the company focused on changing its software to detect sirens. But these were not the only problems Cruise cars caused in the city: there were many traffic violations that did not cause accidents, but which the company should presumably have worked to prevent.

I would like to stop here and reflect on what regulation takes into account. Regulation should not focus only on safety; it should also regulate corporate culture. Business models are a product of corporate culture, and they are not regulated.

The risks of AI are regulated, but if we focus only on risks, we miss the harm. The harm itself, the negative social impact, is what tells us about the context in which the technology has been applied, and this is where regulation, alongside corporate culture, must become firm.

In fact, corporate culture spans everything from the ex-ante design of the technology to the ex-post stage: its implementation and social impact.

And I am sorry to tell you that self-regulation and guidelines full of principles are just "pamphlets" of good intentions, reflecting a naive, or even arrogant, attitude: the belief that these principles will be applied simply because a group of experts drew them up. And believe me, I know what I am talking about.

Well, having said that, let's move on to another big (loud) scandal in the autonomous vehicle industry: autonomous vehicles are like Santa Claus. Or, to put it another way, they DON'T EXIST.

We saw it clearly with Cruise. The NYT revealed that Cruise employed 1.5 staff per AV, who assisted the vehicles remotely every 2.5 to 5 miles. In other words, humans frequently had to take remote control after the vehicle signaled it was in trouble. The AVs are not that autonomous. PERIOD.
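A quick back-of-envelope calculation shows what those NYT figures mean in practice. The 100-mile shift below is an assumed figure for illustration, not a number from the article.

```python
# One remote intervention every 2.5 to 5 miles, per the NYT figures cited above.
def interventions(miles_driven: float, miles_per_intervention: float) -> float:
    """Expected number of remote interventions over a given distance."""
    return miles_driven / miles_per_intervention

shift_miles = 100  # assumed daily distance, for illustration only
low = interventions(shift_miles, 5.0)    # best case: one every 5 miles
high = interventions(shift_miles, 2.5)   # worst case: one every 2.5 miles

print(f"{low:.0f} to {high:.0f} remote interventions per {shift_miles} miles")
```

Twenty to forty human interventions over a single 100-mile day is hard to square with the word "autonomous".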

Because of incidents caused by Cruise AVs, the California Department of Motor Vehicles (DMV) suspended Cruise's permits to operate its autonomous taxis commercially and to conduct driverless road tests. The decisive incident came on October 2, when a Cruise AV ran over a passerby, dragging her for yards and causing her serious injuries.

But this was not the only serious incident caused by Cruise AVs. On August 14 of last year, a Cruise vehicle blocked the path of an ambulance carrying a hit-and-run victim, and the victim died because of the delayed response.

In fact, at a meeting on August 7 to discuss AV safety, San Francisco Fire Chief Jeanine Nicholson told the investigating committee that her department had logged 55 reports of AVs driving recklessly near emergency vehicles, obstructing or blocking their passage.

Nicholson said that 55 alerts may not sound like a lot if it is not your family on the line, but protecting the family of everyone in San Francisco is her department's priority.

But something has added fuel to the fire in the Cruise case: through previously undisclosed internal material, it emerged that Cruise was aware of two very serious safety issues: (i) the cars had trouble detecting large holes in the road, and (ii) depending on the scenario, they had such difficulty detecting children that in certain situations they risked running them over.

Despite this, the company decided to keep its fleet of driverless taxis active while insisting that the usual safety guarantees were in place.

This has cost General Motors dearly. The company invested $588 million, and each AV costs between $150,000 and $200,000, a price that reflects the sheer quantity of sensors, cameras, software, and operating systems an AV needs to function.
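The figures above imply a rough fleet size. This is my own arithmetic from the numbers in the text, not a figure reported anywhere in the article.

```python
# Implied fleet size from the figures above: a $588 million investment
# and a unit cost of $150,000 to $200,000 per AV.
investment = 588_000_000
unit_cost_low, unit_cost_high = 150_000, 200_000

max_fleet = investment // unit_cost_low   # cheaper cars -> more vehicles
min_fleet = investment // unit_cost_high  # pricier cars -> fewer vehicles

print(f"The investment would cover roughly {min_fleet} to {max_fleet} vehicles")
```

In other words, even on generous assumptions the investment buys only a few thousand vehicles, each burning money every mile it drives.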

But the losses have been even greater: General Motors says it lost $1.9 billion from March to September.

What is my conclusion in this case? Well, that the potential of AI to solve problems is relative. And that is something we should take to heart. The big problem is the hype around this technology, fueled by Silicon Valley CEOs and entrepreneurs, who are making governments, institutions, other entrepreneurs and even regulators believe that we are at a very advanced stage and that in a few years a huge percentage of jobs will become obsolete or disappear.

How long have we been hearing that? What I mean is not that AI is useless, far from it. I am a big fan of this technology, but not of unscrupulous lies to increase profits.

AI is not a silver bullet; governments should recognize this and understand that this technology is not the solution to our social problems. For years I have been repeating, without success, that the solutions lie in social technologies.

It is not about developing more technology to solve our social problems. It is about increasing human interaction; changing business models; creating new narratives, adapted to our realities, to replace the current ones; regulation tailored to social impact and to corporate culture; and renewing the institutions that protect us collectively against the negative effects of AI, so that we are not forced to go to court individually.

All these examples of solutions belong to social technologies, and that is what we really need. Are we ready to embrace them?
