Open Letter to Elon Musk

Interesting; one could make the argument that this strategy from Audi is irresponsible. It could have the effect of downplaying the importance of monitoring an Audi car because, hey, Audi is going to pay for it if it wrecks! I am kind of joking, but not really.

Or perhaps this just means Audi's cars will come with much more intrusive limits on when the self-driving features can be activated...

Either way, this is most likely careful PR and branding by Audi (something Tesla has struggled with at times)... I imagine Audi will defend itself legally in a different way than what it insinuates in that article... And once they have real skin in the game, and owners start blaming Audi for things that were not the fault of the system, you will see a clarification of the "full responsibility" statement from Audi.

I imagine Audi will likely take full responsibility only for its aspect of the code. Tesla has considered that approach, but decided on a better idea, which is to call it beta and use at your own risk... which I think is the way to go.

The perception that Tesla throws its owners under the bus after a software malfunction is not accurate. They defend themselves publicly when an owner makes inconsistent accusations regarding their software.

It's been interesting to see the situations where an owner may think they have hit the jackpot with Tesla by ramming it into a brick wall, only to have Tesla say, hmmmmm let's look at that... in slow motion... and see when you touched the wheel, etc.
 
I figured this thread would be coming, given that his previous thread died out and no one cared when he posted his new articles there. A lot of the arguments against his points were already made in that thread, so I don't think they need rehashing.

It's quite obvious we are being used as a platform to promote his content (does this not violate the TOS here? not sure). Anyway, will this head toward becoming the most-disagreed thread, like the other one?
 
I don't think the OP understands it - if he believed shadow driving takes place during manual driving as well as Autopilot driving, he wouldn't have an argument.

I agree with your definition of shadow driving. Put it this way: if the OP does not understand shadow driving, then of course that part of his position does not make sense. That said, I did get the sense that he was advocating a more rigorous approach to non-public testing across the industry.
 
This claim has been made dozens of times on TMC with no evidence whatsoever to back it up. The only evidence we do have is that there has been at least one confirmed death in the U.S. while using Autopilot, and yet Autopilot continues, development continues, other cars are becoming more automated, etc.

I think that one death did cause fairly significant repercussions, though: AP1 changed a lot afterwards, and Mobileye even said it was part of their thinking in ending their relationship with Tesla. These were not non-events. It is often argued that AP2 is in the state it is because Mobileye refused to allow Tesla to put the EyeQ3 on the AP2 board for the transition.

That said, the victim in this case was one middle-aged person speeding on a mostly empty, straight road - and an acknowledged tech-nerd into pushing the envelope of this stuff.

Might the PR result have been different if the victims had been a car full of children and, say, the incident had started with an AP2 car swerving into the front of a semi because of a tar line?
 
Or perhaps this just means Audi's cars will come with much more intrusive limits on when the self-driving features can be activated...

Either way, this is most likely careful PR and branding by Audi (something Tesla has struggled with at times)... I imagine Audi will defend itself legally in a different way than what it insinuates in that article... And once they have real skin in the game, and owners start blaming Audi for things that were not the fault of the system, you will see a clarification of the "full responsibility" statement from Audi.

Actually, the way Audi and others (though not Tesla, I believe) are working toward this is through a standardized, auditable black box. The black box will "assign" blame based on who is listed as driving, the car or the person. I'm sure some disagreements are bound to emerge, but the idea seems as solid as it can be, IMO.
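Just to make that concrete, here is a toy sketch of what such a log could look like. Everything below is my own invention (the field names, the timestamps, the lookup), not anything Audi or a standards body has actually published.

# Hypothetical "auditable black box": a log of who was listed as driving,
# queried after an incident. Purely illustrative, not any real spec.
from dataclasses import dataclass
from bisect import bisect_right

@dataclass
class ModeChange:
    timestamp: float   # seconds since start of trip
    driver: str        # "human" or "system"

log = [
    ModeChange(0.0, "human"),
    ModeChange(120.5, "system"),   # automation engaged
    ModeChange(300.2, "human"),    # handover back to the person
]

def responsible_party(log, crash_time):
    """Return who was listed as driving at crash_time, per the last mode change."""
    times = [e.timestamp for e in log]
    i = bisect_right(times, crash_time) - 1
    return log[i].driver if i >= 0 else "unknown"

print(responsible_party(log, 250.0))   # -> "system"

Disagreements would then be about whether the handover itself was reasonable, not about who was nominally in control.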
 
I figured this thread would be coming, given that his previous thread died out and no one cared when he posted his new articles there. A lot of the arguments against his points were already made in that thread, so I don't think they need rehashing.

It's quite obvious we are being used as a platform to promote his content (does this not violate the TOS here? not sure). Anyway, will this head toward becoming the most-disagreed thread, like the other one?

What does it matter? Let's discuss the topic.
 
Gao Yaning's family is suing Tesla, claiming AP1 was active at the time.

Tesla has been working with the family to access the car's log, but the family has been uncooperative.
Thus, this case continues to be unverifiable, and it still cannot be confirmed as the first DOCUMENTED Autopilot death.
I think that's a standard excuse Tesla has been putting out in such cases. I have heard otherwise: owners couldn't get Tesla to release the logs when they suspected SUA. There were a couple of threads here on TMC where Tesla never released the logs to the owners.

The video is pretty clear proof that it was on AP: driving straight, keeping the lane, but not seeing the truck. AP1 had a few crashes in similar situations, like the one below. I think the crash log is redundant here. Maybe AP gave up a fraction of a second before the crash; that shouldn't exonerate the manufacturer from responsibility, since the driver doesn't have time to react. Tesla said exactly that in the case below to shirk liability. Strangely enough, that video was also made private shortly after it gained publicity.
Tesla Model S on Autopilot crashes into van parked on highway - Roadshow
 
...I think that's a standard excuse Tesla has been putting out in such cases...

This is quoted from Gao's lawyer:

"During the trial's opening session in Beijing Tuesday, Gao applied for an investigation into whether the autopilot was activated when the accident occurred..."

Which is consistent with Tesla's claim:

1) The car was totaled, so Tesla couldn't get the log remotely.
2) Tesla needs physical access to the car to retrieve the log, but the family refuses.

My understanding is: the lawyer does not trust Tesla to retrieve the log and wants the court to retrieve it instead.

Anyhow, until the court appoints an investigation team to retrieve the log, the Autopilot involvement remains undocumented.

The video does not show when Autopilot was engaged, how many error messages appeared, whether the driver responded, and so on. The video does not show the dashboard as proof. It does not show the driver manually engaging the Autopilot stalk.

It does show an accident.

But even Gao's lawyer admits that she does not know whether Autopilot was involved and wants the court to help her establish Tesla's guilt!
 
...AP1 had a few crashes in similar situations...

There is no question that AP1 has been involved in crashes, including the blue van one in Switzerland.

In that case, Tesla did not give out the standard line that it cannot retrieve the vehicle log.

The owner has already confessed, so why bother with the vehicle log:

“Yes, I could have reacted sooner, but when the car slows down correctly 1’000 times, you trust it to do it the next time to. My bad..”

The video is still there, but the author keeps it copyrighted, so you need permission to use it.

For other cases that I have heard of, Tesla's standard line has been: "The system works as designed" or "Nothing's wrong with it."

Remember: it is designed to be watched by a human. As it stands, it works in many scenarios but not others. So you can pay good money to babysit it, or you can skip paying so you don't have to monitor the system.
 
There is no question that AP1 has been involved in crashes, including the blue van one in Switzerland.

In that case, Tesla did not give out the standard line that it cannot retrieve the vehicle log.

The owner has already confessed, so why bother with the vehicle log:

“Yes, I could have reacted sooner, but when the car slows down correctly 1’000 times, you trust it to do it the next time to. My bad..”

The video is still there, but the author keeps it copyrighted, so you need permission to use it.

For other cases that I have heard of, Tesla's standard line has been: "The system works as designed" or "Nothing's wrong with it."

Remember: it is designed to be watched by a human. As it stands, it works in many scenarios but not others. So you can pay good money to babysit it, or you can skip paying so you don't have to monitor the system.

FWIW: I believe in the DARPA Urban Challenge they had blue opposition vehicles and obstacles because of the issues sensors have differentiating them from asphalt/background.
 
...

Also if you have not seen aerospace level simulation, especially FAA Level D simulation, with networked simulations in complex war games you have no idea what that tech gap is.
OK, I'll bite. I have seen both of your examples. I have earned type ratings in a Level D simulator.

A Level D is expensive to build but is a vastly simpler problem than autonomous highway driving. Simulators for war games, aircraft, ships and other aerospace applications all have in common an environment more controlled than that of road vehicles.

The war game analogy is pretty weak, actually, because fidelity to reality is not so much the point as teaching tactics, strategy and novel technologies.

If you actually experience a Level D, and then the aircraft the Level D represents, you might well agree that the learning is fantastic but it "just ain't real". The simulators are, in my limited experience, subtly harder to operate than the actual aircraft. One primary reason for those simulators is to experience systems failures and weather conditions that would be lethal were they to be done in real life. Exact fidelity is not necessary.

I think you may not really understand the problem. Nobody suggests Elon is infallible, nor that he is eternally optimistic. You are one of a tiny group who seem to think NASA somehow 'saved' SpaceX. Most informed people that I know think the NASA regulatory role is positive, just as is the diagnostic role of the NHTSB. Neither of these actually innovates, nor should they. Neither is invariably correct in its views.

Bluntly, despite your best efforts to defame people who disagree with you, your own statements make it quite clear that you do not possess the qualifications to make the assertions you are making. Being annoying is not usually a productive way to solve problems. You can use your single success in whistleblowing to claim understanding and even use short term project manager positions to proclaim technical expertise. People who understand project managers tend to think they are not usually technical wizards.

Finally, posting here will not help you in your goal to make money out of the Tesla pursuit of autonomous vehicles. For that matter, making misstatements about NHTSB findings won't help your quest either.

Possibly you might consider going back to school and actually studying the subjects about which you claim such expertise.
 
OK, I'll bite. I have seen both of your examples. I have earned type ratings in a Level D simulator.

A Level D is expensive to build but is a vastly simpler problem than autonomous highway driving. Simulators for war games, aircraft, ships and other aerospace applications all have in common an environment more controlled than that of road vehicles.

The war game analogy is pretty weak, actually, because fidelity to reality is not so much the point as teaching tactics, strategy and novel technologies.

If you actually experience a Level D, and then the aircraft the Level D represents, you might well agree that the learning is fantastic but it "just ain't real". The simulators are, in my limited experience, subtly harder to operate than the actual aircraft. One primary reason for those simulators is to experience systems failures and weather conditions that would be lethal were they to be done in real life. Exact fidelity is not necessary.

I think you may not really understand the problem. Nobody suggests Elon is infallible, nor that he is eternally optimistic. You are one of a tiny group who seem to think NASA somehow 'saved' SpaceX. Most informed people that I know think the NASA regulatory role is positive, just as is the diagnostic role of the NHTSB. Neither of these actually innovates, nor should they. Neither is invariably correct in its views.

Bluntly, despite your best efforts to defame people who disagree with you, your own statements make it quite clear that you do not possess the qualifications to make the assertions you are making. Being annoying is not usually a productive way to solve problems. You can use your single success in whistleblowing to claim understanding and even use short term project manager positions to proclaim technical expertise. People who understand project managers tend to think they are not usually technical wizards.

Finally, posting here will not help you in your goal to make money out of the Tesla pursuit of autonomous vehicles. For that matter, making misstatements about NHTSB findings won't help your quest either.

Possibly you might consider going back to school and actually studying the subjects about which you claim such expertise.

Interestingly, though, one of the things Tesla IS hiring people for is simulators for autonomous driving...

I wonder if both you and the OP aren't wrong: the OP in his haste to dismiss Tesla's efforts, you in dismissing the role of simulation in autonomous driving validation...
 
Interestingly, though, one of the things Tesla IS hiring people for is simulators for autonomous driving...

I wonder if both you and the OP aren't wrong: the OP in his haste to dismiss Tesla's efforts, you in dismissing the role of simulation in autonomous driving validation...
I don't want to negate the value of simulation in autonomous driving at all, but to emphasize that the primary source of the information used to develop the required data should be, and probably must be, real-world driving data. In my very humble opinion, the role of simulation will be to develop and test specific solutions to clearly defined problems, so the number of decisions that must be made by the AI process can be reduced. In reality, any time there are specific decision rules that can apply, they should be used. Simulation is the only relatively easy way to test and perfect such rules.

My perspective on this is clearly humble because I have zero direct knowledge of driving automation. The opinions I hold are inferred from work developing product-failure prediction and financial-industry predictive modeling using AI.

Frankly, heuristics are perfectly appropriate for almost all such problems. Driving automation needs absolute certainty for staying on the road, avoiding collisions, and not much else. Even position in a lane can use fuzzy logic, as it does with aircraft autopilots. Navigation rules, including turns and on/off public-road rules, can all be evaluated conventionally and can be modeled. Tesla almost certainly has enough data now to make huge strides, and whenever there is enough data, simulation works very well to determine what is really needed. Otherwise data crunching will become more and more demanding, with less and less ability to differentiate between essential and useless information. One major issue for Levels 4 and 5 will be weather-related; that will require, I think, cheaper and better sensors such as lidar. My guess is that even Elon thinks that now.
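To show what I mean by the fuzzy-logic point, here is a toy lane-position rule. The membership ranges, gains and function names are entirely made up by me; it is only a sketch of the technique, not anyone's production controller.

# Toy fuzzy-logic lane keeping: lateral offset from lane centre -> steering correction.
# All numbers are invented for illustration.
def tri(x, left, peak, right):
    """Triangular membership: 0 outside [left, right], 1 at peak."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

def steering_correction(offset_m):
    """offset_m > 0 means drifted right; returned value > 0 means steer left (radians)."""
    # Fuzzify: how strongly is the car left of, near, or right of centre?
    left_of  = tri(offset_m, -1.5, -0.75, 0.0)
    centred  = tri(offset_m, -0.5,  0.0,  0.5)
    right_of = tri(offset_m,  0.0,  0.75, 1.5)
    # Rules: left of centre -> steer right, centred -> hold, right of centre -> steer left.
    rules = [(left_of, -0.05), (centred, 0.0), (right_of, +0.05)]
    # Defuzzify with a weighted average of the rule outputs.
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

print(steering_correction(0.4))   # small positive value: gently steer back left

A handful of rules like this are easy to test exhaustively in simulation, which is exactly where I think simulation earns its keep.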

If I implied otherwise in my posts I apologize. I'm fairly confident of these views, but acutely aware of my own limited knowledge of the exact problem.
 
Interesting; one could make the argument that this strategy from Audi is irresponsible. It could have the effect of downplaying the importance of monitoring an Audi car because, hey, Audi is going to pay for it if it wrecks! I am kind of joking, but not really.

But isn't this what L3 is supposed to offer - that you don't have to monitor the car until it tells you to take over? And isn't that part of the ultimate goal of autonomous driving?
 
OK, I'll bite. I have seen both of your examples. I have earned type ratings in a Level D simulator.

A Level D is expensive to build but is a vastly simpler problem than autonomous highway driving. Simulators for war games, aircraft, ships and other aerospace applications all have in common an environment more controlled than that of road vehicles.

The war game analogy is pretty weak, actually, because fidelity to reality is not so much the point as teaching tactics, strategy and novel technologies.

If you actually experience a Level D, and then the aircraft the Level D represents, you might well agree that the learning is fantastic but it "just ain't real". The simulators are, in my limited experience, subtly harder to operate than the actual aircraft. One primary reason for those simulators is to experience systems failures and weather conditions that would be lethal were they to be done in real life. Exact fidelity is not necessary.

I think you may not really understand the problem. Nobody suggests Elon is infallible, nor that he is eternally optimistic. You are one of a tiny group who seem to think NASA somehow 'saved' SpaceX. Most informed people that I know think the NASA regulatory role is positive, just as is the diagnostic role of the NHTSB. Neither of these actually innovates, nor should they. Neither is invariably correct in its views.

Bluntly, despite your best efforts to defame people who disagree with you, your own statements make it quite clear that you do not possess the qualifications to make the assertions you are making. Being annoying is not usually a productive way to solve problems. You can use your single success in whistleblowing to claim understanding and even use short term project manager positions to proclaim technical expertise. People who understand project managers tend to think they are not usually technical wizards.

Finally, posting here will not help you in your goal to make money out of the Tesla pursuit of autonomous vehicles. For that matter, making misstatements about NHTSB findings won't help your quest either.

Possibly you might consider going back to school and actually studying the subjects about which you claim such expertise.

Yes, apples and oranges.

While automotive traffic density, flow control, and traffic type are an order of magnitude more complex than in air traffic control, the same is not true of vehicle dynamics.

Cars driven legally are a 2D problem; going 3D usually means a crash. Calculations for rotational inertia, acceleration, deceleration, and slip/traction are wildly simpler for cars than for aircraft. Wind, baro readings, and instrument failure seldom cause car crashes.
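As a rough illustration of how simple the 2D case can be, here is a bare-bones kinematic bicycle model. The parameter values and function name are mine, chosen only to show the idea; real automotive simulators add tire slip, suspension and much more.

# Minimal 2D kinematic bicycle model: position and heading only, no 3D terms.
import math

def step(x, y, heading, speed, steering_angle, dt=0.05, wheelbase=2.9):
    """Advance the car's 2D state by one time step (SI units, angles in radians)."""
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += (speed / wheelbase) * math.tan(steering_angle) * dt
    return x, y, heading

# Drive 5 seconds in a gentle left turn at 20 m/s.
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = step(*state, speed=20.0, steering_angle=0.03)
print(state)   # final x, y, heading

An aircraft model needs far more than this (3D attitude, air data, engine and control-surface dynamics), which is rather the point.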

So a flight simulator struggles to mimic vehicle dynamics accurately, while an automotive simulator struggles to mimic the surrounding environment accurately.

The OP's concept of using 'aerospace level simulation' to replace road testing assumes the OP can procure some system that can accurately mimic the surrounding environment, i.e., other people's actions. Good luck with that. The best you can do is record and reverse-engineer video footage of hundreds of thousands of different situations. Skip any, and simulation-based AV code results in a crash.
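To put the "skip any and you crash" point another way, here is a toy sketch of a scenario bank built from recorded situations, with a coverage check against it. The class, the tags and the "required" list are all hypothetical, not any vendor's tooling.

# Toy scenario bank: situations harvested from recorded drives, replayed in sim.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str    # e.g. "cut-in at 70 mph, wet road"
    tags: set    # situation categories this recording covers

bank = [
    Scenario("cut-in, dry highway", {"cut-in", "highway"}),
    Scenario("pedestrian at crosswalk, dusk", {"pedestrian", "low-light"}),
]

required = {"cut-in", "highway", "pedestrian", "low-light", "construction-zone"}
covered = set().union(*(s.tags for s in bank))
print("situation types never simulated:", required - covered)   # -> {'construction-zone'}

Whatever never makes it into the bank is exactly what the simulated car has never seen.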

Waymo is heavily invested in simulation technology. Until 2017 it was obvious Waymo was going to win the race to AV deployment. Now it's not so clear, as GM has put a larger number of AVs in the real world than Waymo and seems to be making great strides in a short time. Waymo has had at-fault incidents on the road per the DMV, and GM has not.
 
The OP's concept of using 'aerospace level simulation' to replace road testing assumes the OP can procure some system that can accurately mimic the surrounding environment, i.e., other people's actions. Good luck with that. The best you can do is record and reverse-engineer video footage of hundreds of thousands of different situations. Skip any, and simulation-based AV code results in a crash.

Waymo is heavily invested in simulation technology. Until 2017 it was obvious Waymo was going to win the race to AV deployment. Now it's not so clear, as GM has put a larger number of AVs in the real world than Waymo and seems to be making great strides in a short time. Waymo has had at-fault incidents on the road per the DMV, and GM has not.

Yet - as we speak - Tesla is hiring people for Unreal Engine 4-based simulation... for Autopilot.