Autonomous Car Progress

Missy Cummings in IEEE:

WHAT SELF-DRIVING CARS TELL US ABOUT AI RISKS

5 conclusions from an automation expert fresh off a stint with the U.S. highway safety agency


I don't trust this woman to have an unbiased opinion. She was ordered by NHTSA to recuse herself from matters relating to Tesla after a series of unprofessional tweets, as well as an undisclosed $400,000 salary from "Veoneer," a company that is developing ADAS.

 
I don't trust this woman to have an unbiased opinion. She was ordered by NHTSA to recuse herself from matters relating to Tesla after a series of unprofessional tweets, as well as an undisclosed $400,000 salary from "Veoneer," a company that is developing ADAS.
“Unbiased opinion”, “unprofessional tweets”, “40 billion in stock comp”. Funny, since you seem to have blind faith in Elon and Tesla.

Cummings, a longtime safety and robotics researcher, was on the board of Veoneer in an advisory capacity. They make car safety tech. I don’t think they have made and shipped an ADAS. They certainly don’t make lidars. Her comp was hardly “undisclosed”; it was on their website. Veoneer is not a competitor to Tesla in any capacity.

Regarding the tweets, she wasn’t afraid to call out Elon on his lies. Kudos to her.

If you would read the article instead of going after Dr Cummings, we can discuss what she says that’s so horribly wrong. It’s not really about Tesla.

Her conclusions:

1. Human errors in operation get replaced by human errors in coding

2. AI failure modes are hard to predict

3. Probabilistic estimates do not approximate judgment under uncertainty

4. Maintaining AI is just as important as creating AI

5. AI has system-level implications that can’t be ignored

 

1. Human errors in operation get replaced by human errors in coding

2. AI failure modes are hard to predict

3. Probabilistic estimates do not approximate judgment under uncertainty

4. Maintaining AI is just as important as creating AI

5. AI has system-level implications that can’t be ignored


Literally all of these are also true of non-AI computer software. The entire op-ed is fearmongering about Tesla, Waymo, and Cruise, and it's obvious she's doing it with an agenda, hiding behind her association with NHTSA to grant herself legitimacy.
 
Literally all of these are also true of non-AI computer software. The entire op-ed is fearmongering about Tesla, Waymo, and Cruise, and it's obvious she's doing it with an agenda, hiding behind her association with NHTSA to grant herself legitimacy.
First of all, none of that is true.

Secondly, what’s her “obvious” agenda? I don’t see any obvious benefits for her writing it. She’s conservative, sure. Perhaps for a reason.

But again, what specifically do you think is wrong? To me these are all relevant points to bring up when discussing safety- and time-critical systems.
 
I read it as "I understand AI. It has inherent technical problems. There have been practical problems in deploying it. The government should regulate it. To do that, the government should be developing expertise in it."

Given her expertise, this article is essentially a job application.

That said, I don't see a problem with her arguments. After all, how can government have any idea of what to do about a new technology if it doesn't have any technical experts about it? It's why we have all our regulatory bodies - to have experts independent of the industry that they regulate.
 
First of all, none of that is true.

Tell me you've never worked in software without telling me you've never worked in software. Just take "Human errors in operation get replaced by human errors in coding," for example. There is no such thing as bug-free code. Human errors in coding will occur whether or not AI is involved.
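
To make the point concrete, here's a contrived sketch (my own example, not from the article) of the kind of plain, non-AI coding error that actually ships: a unit mix-up in ordinary control code.

# Contrived sketch: a human coding error in plain, non-AI code.
# The function expects m/s, but the caller passes km/h, so the
# computed stopping distance is off by more than 10x. No ML involved.

def stopping_distance_m(speed_mps, decel_mps2=6.0):
    """Stopping distance in meters for a speed given in m/s."""
    return speed_mps ** 2 / (2 * decel_mps2)

speed_kmh = 100.0                              # value read off the bus, in km/h
print(stopping_distance_m(speed_kmh))          # ~833 m -- bug: km/h passed as m/s
print(stopping_distance_m(speed_kmh / 3.6))    # ~64 m  -- correct conversion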
Secondly, what’s her “obvious” agenda? I don’t see any obvious benefits for her writing it. She’s conservative, sure. Perhaps for a reason.

She's a Professor with Duke University's Pratt School of Engineering, and a content-expert consultant for NHTSA. She's not stupid. But the article is full of false analogies and misrepresentations of software and technology. So the best explanation is that she is deliberately drawing false equivalences between the machine learning techniques used in computer vision and those used in large-language models in order to scare people that are unfamiliar with both.

Look what her former employer is selling: Learning Intelligent Vehicle

[screenshot: Veoneer's “Learning Intelligent Vehicle” product page]


Pretty convenient that she's writing op-eds saying that autonomous vehicles cannot be trusted while the company that was previously cutting her checks is selling a software solution meant to create trust in AVs.
 
Tell me you've never worked in software without telling me you've never worked in software. Just take "Human errors in operation get replaced by human errors in coding," for example. There is no such thing as bug-free code. Human errors in coding will occur whether or not AI is involved.
I’ve worked with software construction for almost 40 years. I own a SaaS business...

Sure, software may have bugs. Sometimes you write software to protect other software from malfunction… Her statement is still true. The oft-cited 94% of crashes involving human error is not going to 0% just because we remove the human driver.
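
And that protective layer can be dead simple. A minimal sketch of what I mean, with invented names and limits: a dumb, easily verified guard that bounds whatever a complex (possibly buggy) planner commands.

# Minimal sketch of "software protecting software": a trivially
# auditable guard clamps the output of a complex planner.
# The limits and names are invented for illustration.
import math

MAX_ACCEL = 3.0   # m/s^2, hypothetical limit
MAX_DECEL = 8.0   # m/s^2

def guard(commanded_accel):
    """Clamp the planner's command to a sane envelope; fail safe on garbage."""
    if math.isnan(commanded_accel):
        return 0.0                                  # coast rather than act on garbage
    return max(-MAX_DECEL, min(MAX_ACCEL, commanded_accel))

print(guard(-42.0))          # runaway planner output -> clamped to -8.0
print(guard(float("nan")))   # garbage in -> 0.0, not a slammed brake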
She's a Professor with Duke University's Pratt School of Engineering, and a content-expert consultant for NHTSA. She's not stupid. But the article is full of false analogies and misrepresentations of software and technology. So the best explanation is that she is deliberately drawing false equivalences between the machine learning techniques used in computer vision and those used in large-language models in order to scare people that are unfamiliar with both.
She's with neither Duke nor NHTSA anymore.

Again, what’s the factual thing she gets wrong in the article? What‘s wrong with the LLM analogy?
 
I'm telling you that every fault she's just attributed to AI is generally true of all software, and therefore it's wrong to ascribe worry to AI in particular for those same problems. It's not factually wrong, just misleading to imply it's a new problem from a new technology.
So there are no specific problems or risks with using AI in safety-critical applications compared to traditional software?
 
First of all, none of that is true.

Secondly, what’s her “obvious” agenda? I don’t see any obvious benefits for her writing it. She’s conservative, sure. Perhaps for a reason.

But again, what specifically do you think is wrong? To me these are all relevant points to bring up when discussing safety- and time-critical systems.
Don't you get it? It's an agenda because it includes Tesla. If Tesla wasn't mentioned, then it wouldn't be an agenda. It would just be regulators having concerns. If Ms. Cummings' comments were against Waymo and Cruise, do you think Tesla fans would be in an uproar about it? Of course not. This is all about Tesla for them. It always has been. Anyone who makes a statement that can be perceived as being remotely critical of Tesla automatically has an agenda and is being paid by Big Oil, Big Donors, and Special Interests to take down Tesla.
 
Cummings, a longtime safety and robotics researcher, was on the board of Veoneer in an advisory capacity. They make car safety tech. I don’t think they have made and shipped an ADAS. They certainly don’t make lidars.
Their site seems to disagree with you on that:


It is unclear if they manufacture it, or if they just helped with the design and will integrate it and sell it with their solutions.
 
Their site seems to disagree with you on that.
I'm trying to discuss the CONTENTS of the Cummings article because I found it interesting, but fine, let's discuss whether Veoneer makes lidars. No, they do not. They own a small stake in a partner, Baraja, that is trying to make a solid-state lidar.

Veoneer has no meaningful market share, if any, in vehicle automation, and Cummings hasn't advised them for the last two years.
 
Don't you get it? It's an agenda because it includes Tesla. If Tesla wasn't mentioned, then it wouldn't be an agenda. It would just be regulators having concerns. If Ms. Cummings' comments were against Waymo and Cruise, do you think Tesla fans would be in an uproar about it? Of course not. This is all about Tesla for them.

Are you aware that you're posting this message on TeslaMotorsClub.com?

Shock and awe: people here are interested in talking about Tesla.
 
Shock and awe: people here are interested in talking about Tesla.
You made this whole discussion about Tesla because you tried to discredit Cummings, as Elon, Technoking, the first of his name, commanded.

The damn article is only marginally about Tesla, and so should this thread be.

Let's try this again:
I'm telling you that every fault she's just attributed to AI is generally true of all software, and therefore it's wrong to ascribe worry to AI in particular for those same problems. It's not factually wrong, just misleading to imply it's a new problem from a new technology.
So there are no specific problems or increased risks with using AI in safety-critical applications compared to traditional software?
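
Here's one concrete difference, as a contrived sketch (hardcoded numbers standing in for a real network): a learned classifier has no built-in way to say “I don't know”, so an input unlike its training data still gets a confident answer, whereas traditional code tends to fail a check loudly.

# Contrived sketch of an AI-specific failure mode: softmax must put
# probability mass somewhere, so an out-of-distribution input can
# still yield a confident (and wrong) answer.
import math

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical sign-classifier output on an out-of-distribution image,
# say a stop sign with a sticker on it: [speed limit, stop, yield]
probs = softmax([4.1, 0.3, -1.2])
print(f"'speed limit', p={probs[0]:.2f}")      # ~0.97: confident and wrong

# Traditional code, by contrast, fails loudly on an unknown input:
try:
    limits = {1: 25, 2: 35, 3: 55}
    print(limits[99])                          # unknown sign code
except KeyError:
    print("traditional code fails loudly")     # an error, not a confident guess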
 
Interview with Dr Matt Markle (ex-Waymo radar guy):
 
Missy Cummings in IEEE:

WHAT SELF-DRIVING CARS TELL US ABOUT AI RISKS

5 conclusions from an automation expert fresh off a stint with the U.S. highway safety agency

A lot of odd stuff in this. She jumbles self-driving cars and ADAS together, especially with phantom braking. Then she says: "The cause of such events is still a mystery. Experts initially attributed it to human drivers following the self-driving car too closely". Seriously? Spooky name aside, there is no mystery about phantom braking. And I've never heard an expert, or even an amateur, blame a phantom braking event on a car to the rear. The car to the rear may be judged at fault for rear-ending the phantom braker, of course, but that's a completely different matter.

She misstates the Cruise bug that caused it to rear-end a city bus. That's not a big deal, but it shows she either didn't read the Cruise description she linked or didn't understand it. I'd expect the latter from a bureaucrat, but not a so-called expert.

"Self-driving cars rely on wireless connectivity to maintain their road awareness."
This is also untrue. Cruise does seem to stop as soon as they lose connection, but the car doesn't suddenly go blind or become lost. They simply don't want a car to drive around while the remote monitor is unable to monitor it. Waymo, on the other hand, seems to operate through momentary connection interruptions. Or maybe they just have a much more reliable cell phone provider, ha.
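
In other words, the observed difference looks like a watchdog policy choice, not a perception dependency. A minimal heartbeat sketch of the assumed design (my guess at the architecture, not either company's actual code):

# Heartbeat-watchdog sketch (assumed design): perception and planning
# run on the car either way; the remote link only feeds a policy
# decision about what to do on dropout.
import time

LINK_TIMEOUT_S = 5.0   # hypothetical dropout tolerance; a stop-immediately policy would use ~0

class LinkWatchdog:
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):                  # called whenever a packet arrives
        self.last_heartbeat = time.monotonic()

    def link_ok(self):
        return time.monotonic() - self.last_heartbeat < LINK_TIMEOUT_S

def control_step(watchdog):
    # The car doesn't go blind on dropout; it just changes behavior.
    return "drive" if watchdog.link_ok() else "pull_over"

wd = LinkWatchdog()
print(control_step(wd))                      # "drive" while heartbeats are fresh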

She ends by calling for more regulation (like Europe, the leader in autonomy, lol) and better pay / more prestige for bureaucrats and academics (like her!).
 
A lot of odd stuff in this. She jumbles self-driving cars and ADAS together, especially with phantom braking. Then she says: "The cause of such events is still a mystery. Experts initially attributed it to human drivers following the self-driving car too closely". Seriously? Spooky name aside, there is no mystery about phantom braking. And I've never heard an expert, or even an amateur, blame a phantom braking event on a car to the rear. The car to the rear may be judged at fault for rear-ending the phantom braker, of course, but that's a completely different matter.

She misstates the Cruise bug that caused it to rear-end a city bus. That's not a big deal, but it shows she either didn't read the Cruise description she linked or didn't understand it. I'd expect the latter from a bureaucrat, but not a so-called expert.

"Self-driving cars rely on wireless connectivity to maintain their road awareness."
This is also untrue. Cruise does seem to stop as soon as they lose connection, but the car doesn't suddenly go blind or become lost. They simply don't want a car to drive around while the remote monitor is unable to monitor it. Waymo, on the other hand, seems to operate through momentary connection interruptions. Or maybe they just have a much more reliable cell phone provider, ha.

She ends by calling for more regulation (like Europe, the leader in autonomy, lol) and better pay / more prestige for bureaucrats and academics (like her!).
I generally agree that the stuff you pointed out is sloppy/odd. As a whole, though, I think most of her points are valid.

#1: AVs will have different failure modes than humans.
#2/3: AV failures are hard to predict.
#4: You are never done, and you sometimes have a hard time generalising long-tail events.
#5: AI still has a long way to go in cars and trucks. The EU has regulated; perhaps the US should too?

"We need less hysteria and more education so that people can understand the promises but also the realities of AI."

Many in the Tesla crowd seem so upset that she criticised Elon and Tesla publicly that they stopped listening, but I think she has some interesting points on the space. I think this interview with her on The Robot Brains from two years ago is pretty good. I am more optimistic than her about the field in general, but I don't know half as much about this area, so perhaps that's why. I broadly agree with her on where computer vision is at. "It's still research, and that's the problem."
 
Amnon Shashua -- Mobileye Founder and Chief Executive Officer:

"I think that Tesla has mentioned several times in the past about licensing their FSD. So, it's not really a new concept. It's not new to have competitive noise in the market. I would say that we have lots of respect to what Tesla has accomplished with FSD.

In fact, we see the rapid development as a significant positive for us that pushes the market to move faster to implement advanced solutions like SuperVision. Now, on the specific question of Tesla working with OEMs, I think there is one argument that really clarifies the matter. I would put it as performance versus cost of the system. If you look at SuperVision, it's an FSD-like category: 11 cameras and a radar or a few radars.

SuperVision also has REM, the high-definition mapping, in addition to what FSD can offer. Today, we have 120,000 SuperVision-enabled vehicles in China, more than 1,000 beta testers. And the response in terms of comparative analysis is very, very good. It's, you know, on par with or superior to FSD, as measured by the rate of intervention and ability to handle complex maneuvers.

REM is a stronger differentiation. But now, let's look at the cost. The price of a SuperVision subsystem, including the cameras and radars, you know, the ECU, software, the REM, is approximately somewhere in the $2,500 range. Now, if Tesla matches that system price, then OEMs will be able to offer, you know, SuperVision or FSD at less than half the price that FSD is offered to Tesla car owners.

Now, this would immediately cannibalize Tesla, whose strategy appears to be to reduce gross margins on the vehicle and rely almost solely on the value of FSD for creating growth. Now, I would also mention, and this bodes well for our OEM customers, there are now 400,000 FSDs on the road since 2019. And Mobileye already has 120,000. And in approximately two years, we'll surpass the 1 million bar.

And from there, we'll grow much faster. There are also important differences with respect to access to data, something that Tesla has very often highlighted as an advantage. And that's another key advantage that OEMs recognize. So, for example, at their March investor day, Tesla noted they had a video cache of 30 petabytes and were intending to grow to 200 petabytes.

Our video database is 400 petabytes. Not to mention all the data that we collect for the program, the high-definition mapping. We collected almost nine billion miles of this type of data in 2022 alone. Tesla talks about 300 million miles driven to date.

So, I think, overall, when you look at what Tesla has accomplished, it's a very, very big positive for us. We believe that SuperVision is a much more optimal solution for our customers, both in terms of cost and performance and on a customization basis. And all of Tesla's accomplishments actually create very positive momentum for other OEMs wanting to have this type of -- this category of solution in their own cars."
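
Back-of-envelope on the “less than half the price” claim (my arithmetic: the $2,500 system cost is from the transcript, while FSD's roughly $15,000 US price at the time is my assumption):

supervision_system_cost = 2_500    # $, per the transcript
fsd_retail = 15_000                # $, assumed US FSD price, mid-2023

oem_retail = supervision_system_cost * 2.5       # even with a generous markup
print(oem_retail, oem_retail < fsd_retail / 2)   # 6250.0 True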
 
Missy Cummings in IEEE:

WHAT SELF-DRIVING CARS TELL US ABOUT AI RISKS

5 conclusions from an automation expert fresh off a stint with the U.S. highway safety agency

From the description, I had a suspicion who it was, and sure enough, it was Missy Cummings. She seems to be going around the media name-dropping her former NHTSA role.

The most recent time I read about her was here, where the author claimed:
"NHTSA also found that Tesla drivers were having “more severe — and fatal — crashes than people in a normal data set.”
Except for the fact that NHTSA never claimed that.

Instead, the claim was from here:
Former NHTSA senior safety adviser Missy Cummings, a professor at George Mason University’s College of Engineering and Computing, said the surge in Tesla crashes is troubling.

“Tesla is having more severe — and fatal — crashes than people in a normal data set,” she said in response to the figures analyzed by The Post.”

NHTSA says explicitly that the data gathered can't be used for such comparative analysis, due to differences in manufacturers' ability to gather data, and as such, it would never make such a claim.
"Due to variation in data recording and telemetry capabilities, the summary incident report data should not be assumed to be statistically representative of all crashes."

I found with some research, as others did, that she was ordered by NHTSA to recuse herself from all things Tesla after it was found she had deleted tweets that were obviously biased against Tesla. And she has since been fully removed from all roles at NHTSA.

In that case, it was the typical journalist telephone game where they embellish things when paraphrasing, but she also shares some of the fault for her continual name-dropping of NHTSA and her sloppy usage of data against explicit NHTSA guidelines.

Edit:
@Doggydogworld did a great analysis of how sloppy she was in this article as well. I guess this is typical of her work, and journalists are not equipped to vet things further, especially when she name-drops her former NHTSA role for legitimacy.
 