Open Letter to Elon Musk

Again those are two completely different points, which he tries to conflate together:
"Here is a report I actually just became aware of that combines many other reports on the dangers of handover or public shadow driving."
"You are incorrect about public shadow driving and handover not being the same from the standpoint of situational awareness"

He is drawing a false equivalence between the two. The report he linked from Venturer had absolutely nothing to do with shadow driving, even though he claimed it did.

IMO that seems very minor; there is certainly the same idea behind both: full self-driving developed in the lab before deployment to consumers, with no steps in between. That is a pretty clear contrast being drawn to, say, Tesla/Comma.AI, or even the likes of Audi's Level 3, no? Maybe we'd be better off discussing the position more than the representation?

These are not my views. But it seemed obvious to me what the views were that you seemed to be missing...

You can argue handover is dangerous (although every level 2+ system does it), but public shadow driving has nothing to do with it because it does not involve handover at all. Also, Tesla doing public shadow driving does not mean they don't also do simulation. We discussed Tesla's simulation related hirings elsewhere already:
Autopilot simulation!

And the reason I am not forgiving about wrong usage of terminology is that he had previously appealed to authority and claimed he was an expert. It's forgivable for a general member to be wrong on terminology, but for an expert, I'm going to take issue with it.

As I said:

Now, none of this is black and white, of course. We know Tesla is hiring simulation people, and Waymo drives its cars on public roads too to validate them. But the companies have different emphases. It will be interesting to see how it plays out.

As for handover being dangerous, that is not my position. But it is a position.

It is not an unreasonable position to say the fleet-learning focus of Tesla (or e.g. Comma AI) and the use of Level 2, or even the Level 3 handover approach of Audi, is insufficient, and that some other primary approach would bring better results.

It is just one view.
 

The similarity is allowing the vehicle to steer with your hands off for any amount of time.

It is not only unsafe and cannot be made safe, but using that practice for AI, engineering, and testing on public roads will never result in L4. You cannot drive the one trillion miles needed. This is why Waymo evolved to replace most of it.
 
It seems you still don't understand what shadow driving is. During shadow driving, the human is driving the car the entire time. The system is only passively observing; it is not providing any assistive action. The vehicle is not doing any steering.

Again:
Public shadow driving:
"New Tesla Model S and Model X automobiles will run Autopilot in “shadow mode” and collect driving data that pits a human versus computer. Autopilot vehicles running in shadow mode will not take any driving-assist or self-driving actions. Rather they will only log instances when Autopilot would have taken action and compare those results to the real life actions taken by human drivers."
 
First - there is a bit of a terminology issue with just Tesla. Their "shadow mode", at least in their literature, is not shadow driving. However...

I understand what the literature says. It is not what people do nor what is actually expected. Tesla uses that language for legal CYA while promoting and expecting folks to let the vehicle drive. Should I post videos of Elon doing this long ago? A plethora of customer videos of them doing this? How about reviewer or press videos and articles saying the same thing? This is an Electrek review of 2.0 Autosteer from 6 months ago - Tesla Autopilot 2.0: Watch the latest and ‘greatly improved’ version of Autopilot at work

And from a pure common-sense POV, you have to let the vehicle drive or you have no way of knowing if the AI is working right. (Unless you are using simulation. If Tesla were doing that well or often, they would be much further along and would not allow folks to take their hands off the wheel.)
 
OK @stopcrazypp fair enough, @imispgh does not know what he is talking about.

The points in the links are interesting, but this is just gibberish.
I'm glad you finally saw what I saw since his first post on this forum. Your previous post tried to reinterpret what he actually meant into a somewhat reasonable (but completely different) argument, but with his recent comments I believe it's quite clear he doesn't know what he is talking about. That's why a lot of people have just put him on ignore.
 

I'm all for admitting when I was wrong. I was wrong about the poster.

Now, the points I did extrapolate from there IMO are still interesting. :)
 
I am on record as saying that full level 5 is impossible and that in the near future we will only see highly restricted domain level 4. (Vetted expressways only.). So if anything I think Mr DeKort is too optimistic about "self driving". That said, almost anything is better than the maniacs on the road now.
 
But the new Teslas are 'fully self driving capable'.
We are almost there. They couldn't sell such a possibility if it wasn't.
 
I registered an account just to reply to this thread, because it matters.

I am a flight software engineer working on autonomous spacecraft. Before that, I worked for one of the top 3 defense contractors, writing software for their large military UAV (fully autonomous).

I interviewed with Tesla (I think 5-6 years ago) when Elon expressed the desire to hire autonomous software engineers. At the interview, there was this young, energized Stanford software engineer, who I believe was working on the software (they call it firmware, btw, which is beyond my understanding; you don't call an advanced piece of software that handles your driving firmware). Anyway, I told him that all self-driving vehicles require at least dual-redundancy control, so that if one computer dies, the other can be hot-swapped. He did not take that well, as redundancy, to them, is... not needed nor understood. I was shocked by their approach at the time. That is when I realized this group of people is a bunch of Silicon Valley application-level software engineers with zero background in reliable software. They had no clue about the importance of safety, reliability, and the need for rigor.
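(For anyone unfamiliar with the pattern, here is a toy Python illustration of what I mean by a hot-swappable backup. This is purely illustrative, nothing like real flight code:)

import time

HEARTBEAT_TIMEOUT_S = 0.1  # deadline for the primary to check in

class Controller:
    """Toy controller that reports health via a heartbeat timestamp."""
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def healthy(self):
        return (time.monotonic() - self.last_heartbeat) < HEARTBEAT_TIMEOUT_S

def active_controller(primary, backup):
    """Hot swap: the backup takes over the moment the primary goes quiet."""
    return primary if primary.healthy() else backup

primary, backup = Controller("primary"), Controller("backup")
time.sleep(HEARTBEAT_TIMEOUT_S + 0.01)  # primary "dies": stops heartbeating
backup.heartbeat()                      # backup is still alive
print(active_controller(primary, backup).name)  # -> backup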

During lunch, the senior manager at the time (I think his name is David) asked me: what do you think about completely automating the driving experience?

I was chewing at the time and almost spat out my food. I had to hold it in, swallow, and tell him "not any time soon". The best you can do is maybe automate freeway driving first (exactly what Tesla did, kudos to them, at least they figured this out), then large urban roads, but even that is going to be a challenge. Time, money, and most importantly, mistakes must be made before you can succeed. This is not a website like google.com; if your car fails, people die.

Needless to say, I did not get the job, as they did not like my way of thinking.

Today, as I decided to cancel my Model 3 due to the elimination of the federal credit (I only wanted the base model), I looked up David's profile. Holy *sugar*, he advanced fast. It is truly scary that someone who has zero aerospace or safety/mission/flight-critical background can be the VP of their driving software.

5 years ago, I just thought Elon failed to understand autonomous driving; now I think he is actively cheating and lying so that he can make money off of us.

Except this time, human lives are involved.
 

So, you're a spurned potential employee who has just canceled his Model 3 reservation over something that may not happen, and you're calling Musk a fraudster?

Well, thanks for taking the time to set up an account to let us all know.
 
Appreciate your insight. I think your views on how difficult it is to get full self-driving, and on the redundancy and sensors required to get there, are spot on - IMHO. I also believe Musk is blowing hot air on this subject and getting away with it.

I also get a chuckle when folks use the term 'firmware' for any software. I think if it is running on a device other than your PC or laptop or phone, they end up calling it firmware.
 
If you're talking about David Nister, he's been gone from Tesla for over eight months. I think he went to Nvidia.
 
The cancellation reason stood out to me too. The tax credit has not been eliminated yet, so it's definitely not a reason to cancel.
 
Tesla has been using simulation since the previous generation of Autopilot. This is a slide from a talk given by Sterling Anderson (Tesla’s former Director of Autopilot) in May 2016:

[slide image]


Tesla has posted public job listings for an “Autopilot Simulation Engineer” and a “Software Engineer, Autopilot Simulation”.

Based on what Waymo has recently chosen to reveal to the public about its use of simulation, real world data plays a crucial role in generating simulation scenarios and environments. Tesla can passively collect real world data from its fleet of Hardware 2 cars, which is 2,000x larger than Waymo’s fleet of test cars, and generate more simulation scenarios and environments than Waymo can.

The inert installation described on the slide above is what Elon Musk has referred to as “shadow mode”. (Note: These are transcribed quotes that sound normal when you listen to them. They come off as a bit garbled in writing, but you can understand Elon’s meaning.)

...the system will always be operating in shadow mode so we can gather a large volume of statistical data to show the false positives and false negatives, when would the computer acted and with that have prevented an accident or if the computers would have acted and that would have produced an accident.

We think that operating in shadow mode so we can send say when would it have incorrectly acted or not acted and compare that to what should’ve been done in the real world, that point which it is a statistically significant result that shows material improvements of the accident rates for many of the driven cars.

And:

Shadow mode actually means that the car is not actually not taking any action but it’s registering when it would take an action and when it would not take an action and when you compare that to cases where let’s say somebody had an accident but you look at the vehicle logs and say well if the car had been in autonomous mode that accident would’ve been avoided, okay obviously is a plus and you can also say okay the car would have send something that would have resulted in an accident in that case it’s in the shadow mode and then that’s an issue that needs to be corrected.

So, a car operating in what Elon calls “shadow mode” is still 100% manually controlled by a human driver. The driver doesn’t even know the car is in shadow mode.
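Read literally, what Elon describes is simple bookkeeping over those shadow-mode logs: a confusion matrix of would-have-acted versus should-have-acted. A rough Python sketch (the event fields here are my assumption, not Tesla's actual schema):

def tally(events):
    """Count false positives/negatives from shadow-mode comparison events."""
    counts = {"true_positive": 0, "false_positive": 0,
              "true_negative": 0, "false_negative": 0}
    for e in events:
        would_act = e["system_would_have_acted"]
        should_act = e["action_was_actually_needed"]  # e.g. a crash occurred
        if would_act and should_act:
            counts["true_positive"] += 1   # would have prevented an accident
        elif would_act and not should_act:
            counts["false_positive"] += 1  # would have acted needlessly
        elif not would_act and should_act:
            counts["false_negative"] += 1  # missed a case needing action
        else:
            counts["true_negative"] += 1
    return counts

print(tally([
    {"system_would_have_acted": True,  "action_was_actually_needed": True},
    {"system_would_have_acted": True,  "action_was_actually_needed": False},
    {"system_would_have_acted": False, "action_was_actually_needed": True},
]))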

Shadow mode is, in fact, just a kind of simulation that uses the real world as its environment, but does not take real action. As Rodney Brooks says, “The world is its own best model.” What is a better simulation of the world than the world itself?

Michael, I hope you are satisfied now that Tesla uses simulation to develop its cars’ autonomous capabilities. It not only uses computer simulation, but also “shadow mode”, which is a form of simulation that uses real-world data as its input.

You may not yet be satisfied that Tesla uses “aerospace-level” simulation. There is no way to know this for sure, since Tesla hasn’t revealed much detail about the simulation platform it uses. I’m also not sure what exactly you mean by the term “aerospace-level”. However, it’s now common among companies working on self-driving cars to run highly detailed simulations with realistic physics. These simulations even include simulated versions of the sensor hardware (e.g. the cameras, radar, and lidar unit) on companies’ real cars.

Since Elon Musk also runs SpaceX, I’m sure he is familiar with the state of the art in spacecraft simulation. That knowledge would, it stands to reason, inform his management of Tesla’s simulation efforts.
 
...I told him that all self-driving vehicles require at least dual-redundancy control, so that if one computer dies, the other can be hot-swapped.

My thought: is the hardware failure rate of a single computer higher or lower than the failure rate of a human brain? At what rate do humans fall asleep at the wheel, get dangerously distracted, crash, etc. versus the rate that computers die?

More redundancy is always better. Two redundant computers is better than one, ten computers is better than two, fifty is better than ten, a hundred is better than fifty, a thousand is better than a hundred... But how do we decide how much redundancy is enough? I think human driving has to be the benchmark.
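As a back-of-the-envelope illustration of the diminishing returns: assuming independent failures, n redundant computers all die together with probability p^n. (The failure rate below is invented for the example.)

# Assumed: each computer independently fails on a given trip with
# probability p. Then all n fail together with probability p**n.
p = 1e-4  # invented single-computer failure probability
for n in (1, 2, 3, 10):
    print(f"{n:>2} computer(s): P(all fail) = {p**n:.1e}")
# Past two or three units, the hardware is no longer the weakest link.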

Human driving is not a safe default option: it kills as many people as HIV/AIDS. Autonomous driving only needs to be, say, 20% safer than that to be safe enough to deploy, in my opinion. Tesla is aiming for at least 2x safer (ideally 10x safer) and will be collecting billions of miles of shadow mode data to confirm that statistically before it deploys fully autonomous driving.
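For a sense of what confirming that statistically demands, here is a crude estimate of the miles needed to distinguish a 2x-safer system from the human baseline, treating crashes as a Poisson process (the rates are illustrative, not Tesla's or NHTSA's figures):

# Crude power calculation: miles m needed so that the gap between expected
# crash counts exceeds z standard deviations (normal approx. to Poisson):
#   m*(r1 - r2) > z * sqrt(m*r1 + m*r2)  =>  m > z^2*(r1 + r2)/(r1 - r2)^2
r1 = 1 / 100e6  # illustrative baseline: ~1 fatal crash per 100M miles
r2 = r1 / 2     # the hypothetical "2x safer" system
z = 1.96        # ~95% confidence
m = z**2 * (r1 + r2) / (r1 - r2) ** 2
print(f"~{m / 1e9:.1f} billion miles")  # ~2.3 billion miles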

Related: Tesla mentioned that Hardware 2.5 adds some computing redundancy, but I am not aware of the details beyond that.
 

imispgh: You really need to understand what shadow mode is as it applies to Tesla in order for anyone to have an intelligent conversation with you on this subject. Shadow mode is not legalese CYA Tesla is using. Shadow mode is a method of data collection. This is the reason Tesla puts the hardware on all of their cars, not just on those where you pay for the option. Shadow mode collects data while a human is driving the car and compares the actions the human took to the actions the software wanted to take. This is being used to collect the data needed for the simulator.

Shadow mode has nothing to do with handover: in shadow mode the human is in full control already, so there is nothing to hand over.

Shadow mode is looking for situations where the software wanted to take an action but the human didn't take action and did not have an accident. It can also be used in cases where there was an accident; however, since there is no proof that what the software wanted to do would have prevented the accident, it is less useful.

Simulation will have to be used; however, Tesla is the only company with the ability to collect enough real-world data to feed the simulators with the data that will be needed. However, they may not even be collecting the right data: if they are wrong about what sensors are required, then the data they are collecting will be incomplete. Will Tesla be the company that finally makes FSD? Probably not. I think they will eventually sell the data they collect to another company that will combine it with data from other companies to finally make it work.
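To make that triage concrete, here is a small Python sketch of the filtering described above; the field names are hypothetical, not Tesla's actual schema:

def triage(events):
    """Split shadow-mode events per the logic above."""
    useful, less_useful = [], []
    for e in events:
        if e["software_wanted_action"] and not e["human_acted"]:
            if e["accident_occurred"]:
                # No proof the software's action would have prevented it.
                less_useful.append(e)
            else:
                # Human did nothing and was fine: the software's urge to
                # act is suspect, so replay this scene in the simulator.
                useful.append(e)
    return useful, less_useful

useful, less_useful = triage([
    {"software_wanted_action": True, "human_acted": False, "accident_occurred": False},
    {"software_wanted_action": True, "human_acted": False, "accident_occurred": True},
])
print(len(useful), len(less_useful))  # -> 1 1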
 
I know how it works and why it does what it does. Data collected while the driver maintains steering control is a must. The issue is ceding control of steering to the vehicle.

I am now part of the SAE V&V Task Force. They would not have extended that invitation for no reason. Also please read the articles on Waymo's recent paradigm shift on this.