
FSD rewrite will go out on Oct 20 to limited beta

Of course they are not the same things, as I made clear, but they are similar in the sense that the item under test (a) has the potential to do harm if misused and (b) is hard to test in a situation where it cannot actually do harm. That is, testing for efficacy implies a certain level of risk. I was addressing the idea that FSD should not be released until it has been shown to be "safe" (whatever that means), which is essentially impossible.
Other companies have already proven that it can be safe to test autonomous vehicles on public roads (though Uber proved how it can be unsafe!). I'm hoping that Tesla is doing a good enough job vetting and monitoring their beta testers so that their testing is also safe. The way you determine if it's safe enough for release is to look at all the disengagement data and simulate what would have happened if the safety driver had not disengaged the system. Tesla has stated their goal is 2x better than the average human, which should give them plenty of margin in this calculation. Remember, they have a huge amount of data about how well humans drive.
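To make that concrete, here's a minimal sketch of what such a disengagement-based check might look like. All the counts in it are made up for illustration (Tesla hasn't published them); the ~2 million mile human baseline is just the figure cited later in this thread, and passes_release_bar is a name I invented.

```python
# Hypothetical sketch of a disengagement-based safety check.
# All numbers are invented for illustration; Tesla has not published them.

HUMAN_MILES_PER_ACCIDENT = 2_000_000   # rough human baseline cited later in this thread
SAFETY_FACTOR = 2.0                    # Tesla's stated goal: 2x better than the average human

def passes_release_bar(beta_miles: float, simulated_collisions: int) -> bool:
    """Counterfactual check: replay every disengagement in simulation without
    the safety driver; does the resulting collision rate still beat the
    2x-human target?"""
    if simulated_collisions == 0:
        return True  # no simulated collisions clears the bar trivially
    miles_per_collision = beta_miles / simulated_collisions
    return miles_per_collision >= SAFETY_FACTOR * HUMAN_MILES_PER_ACCIDENT

# Example with invented numbers: 10 million beta miles, 2 simulated collisions
print(passes_release_bar(10_000_000, 2))  # True: 5M mi/collision > 4M mi target
```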
 
Someone posted earlier that they “hate having to spoon feed”
I frame it as I “hate having to dumb things down”
You’re more patient than me

It's not really that I hate spoon feeding; I don't mind some good spoon feeding.

If one makes an assertion or some claim, back it up with evidence. There are far too many people presenting opinion as fact on these forums, and for those of us who aren't subject matter experts, it's hard to know what to believe. Sure, maybe my question was a little dumb, but if one is asserting that Musk said this, or that x is worse than y, show me evidence of why (pun intended).
 
If anyone was wondering what it’s like to be a developer for a corporation, this thread is a fairly accurate representation.

A bunch of business people making wild assertions of the how justifying their own why, without any understanding of the system. While the people with the understanding just wait quietly for the seagulls to quit squawking so they may try to figure something real out.

10/10 can’t wait to read another 70 pages.
 
Other companies have already proven that it can be safe to test autonomous vehicles on public roads (though Uber proved how it can be unsafe!). I'm hoping that Tesla is doing a good enough job vetting and monitoring their beta testers so that their testing is also safe. The way you determine if it's safe enough for release is to look at all the disengagement data and simulate what would have happened if the safety driver had not disengaged the system. Tesla has stated their goal is 2x better than the average human which should give them plenty of margin in this calculation. Remember they have a huge amount of data about how well humans drive.
Which, according to the data, they don't do very well, even though when polled 80% think they are above average.
 
If anyone was wondering what it’s like to be a developer for a corporation, this thread is a fairly accurate representation.

A bunch of business people making wild assertions of the how justifying their own why, without any understanding of the system. While the people with the understanding just wait quietly for the seagulls to quit squawking so they may try to figure something real out.

10/10 can’t wait to read another 70 pages.

Damn, this is too much like real life. I hate listening to the managers and “tech leads” spout off about what they think is going on and how the system “works” while having no clue what they’re talking about.
 
Tesla's data shows about 1 severe accident (> ~12 mph) every 2 million miles (Older models with zero computer assistance). I think that's harder for a computer alone to achieve than most people think. I think humans are pretty amazing!
Now that we know how you got the 2 million mile figure (for all unaided Tesla driving)
Can I burst your bubble?

Tesla already achieved that with computers.
4.59 million miles between accidents using Autopilot and safety features (all computer)
1.79 million miles between accidents, pure human driving in Teslas

4.59 - 1.79 = a 2.80 million mile difference between computer help and zero computer help.

Computers have beaten your metric by 40%. (At this point Tesla Vision computers are about 40% better than humans.) << Oh, I am going to regret this little comment.

Bring it ;)

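For what it's worth, here's the raw arithmetic on those quoted figures, written out as a sketch. It only reproduces the naive headline comparison, before any of the confounders raised in the replies below are considered.

```python
# Naive comparison of the miles-per-accident figures quoted above.
# These do not control for road type, driver demographics, vehicle age, or
# any other confounder discussed later in the thread.

autopilot_miles_per_accident = 4.59e6   # Autopilot + safety features engaged
human_miles_per_accident = 1.79e6       # no Autopilot, no active safety features

difference = autopilot_miles_per_accident - human_miles_per_accident
ratio = autopilot_miles_per_accident / human_miles_per_accident

print(f"Difference: {difference / 1e6:.2f} million miles")  # 2.80 million miles
print(f"Ratio: {ratio:.2f}x")                               # ~2.56x on the raw numbers
```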
 
Tesla already achieved that with computers.
4.59 million miles between accidents using Autopilot and safety features (all computer)
1.79 million miles between accidents, pure human driving in Teslas

4.59 - 1.79 = a 2.80 million mile difference between computer help and zero computer help.

Computers have beaten your metric by 40%.

How many times do we need to go through this and explain that that's not what this data means? At all.

Please compare the miles between accidents for humans driving Teslas, under the exact same traffic conditions/speed/time of day as they are when using Autopilot, and with the exact same driver demographics, with the same vehicle age, etc., etc. Please make sure to account for any confounding variables I have not listed here.

I just want to see a side-by-side comparison with all the confounding variables and selection bias removed. Can you point me to that?

And to be clear, I've already said that I believe that freeway driving with AP is likely safer than without AP. I just don't know how much safer. And we don't have the data!
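To make the point concrete, here's a rough sketch of what an apples-to-apples version could look like: stratify miles and accidents by road type (and ideally the other variables above) before comparing rates. Every number in it is invented; as far as I know, no such breakdown has been published.

```python
# Hypothetical stratified comparison. All per-stratum figures are invented;
# the point is only the structure: compare Autopilot vs. manual rates within
# the same conditions rather than in aggregate.

strata = {
    # road_type: (ap_miles, ap_accidents, manual_miles, manual_accidents)
    "freeway": (3.0e9, 600, 1.0e9, 300),
    "city":    (0.1e9,  50, 2.0e9, 1500),
}

for road_type, (ap_mi, ap_acc, man_mi, man_acc) in strata.items():
    ap_mpa = ap_mi / ap_acc     # miles per accident with Autopilot, this stratum
    man_mpa = man_mi / man_acc  # miles per accident without Autopilot, this stratum
    print(f"{road_type}: AP {ap_mpa / 1e6:.2f}M mi/accident vs "
          f"manual {man_mpa / 1e6:.2f}M mi/accident")
```

Even that only handles one confounder; a real analysis would also need matched driver demographics, vehicle age, time of day, and so on.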
 
Now that we know how you got the 2 million mile figure (for all unaided Tesla driving)
Can I burst your bubble?
FSD is supposed to work without a human.
There is no doubt in my mind that humans assisted by computers are superior to humans alone. In fact, I think computer-assisted humans can be so good that they'll be nearly impossible for computers to beat. I actually support releasing computer-only FSD even if it's not as safe as computer-assisted humans, because I think that's an acceptable amount of risk for the benefit.
 
Now that we know how you got the 2 million mile figure (for all unaided Tesla driving)
Can I burst your bubble?

Tesla already achieved that with computers.
4.59 million miles between accidents using Autopilot and safety features (all computer)
1.79 million miles between accidents, pure human driving in Teslas

4.59 - 1.79 = a 2.80 million mile difference between computer help and zero computer help.

Computers have beaten your metric by 40%. (At this point Tesla Vision computers are about 40% better than humans.) << Oh, I am going to regret this little comment.

Bring it ;)

View attachment 602979

Hi. I’m sorry to jump in so late in this conversation. But has someone mentioned the delta between the types of miles probably driven in these cases? That the Autopilot miles are almost certainly driven mostly on highways, whereas the NHTSA numbers almost certainly include (probably more accident-prone, per mile) city miles? And that Tesla drivers are (by income bracket alone) probably already among the safest drivers around, regardless (as insurance actuarial rates would imply)?

Don’t get me wrong, I’m a big AP and FSD believer/owner. But there’s a little sleight of hand going on in Tesla’s marketing...
 
En route to L4, we don't know yet if Tesla will ever give drivers permission to stop paying attention. They may decide L3 is too risky for them.

There's always that possibility, but Elon doesn't seem intent on staying at L2 if he's still talking about robotaxis. And it's unlikely we will ever see a change of wording in the manual or their website regarding driver attentiveness in the near future, regardless of how capable the system becomes.


I guess I am trying to envision a situation in city driving where L3 would work. Can you describe it?

I think there is probably a reason why people don’t talk much about L3 in this use case.

I don’t want to derail this thread by talking about the differences between L3 and L4, though, since it is EXTREMELY boring. Maybe you meant we are getting into L4 territory at the point you were describing?

Maybe I can better answer your question if I know what your idea of L3 is. To me, L3 is the necessary precursor to L4.
 
Maybe I can better answer your question if I know what your idea of L3 is. To me, L3 is the necessary precursor to L4.

L3 requires notification of the driver (who doesn't have to be paying attention at all!) prior to disengagement, with "sufficient time." I've seen as long as 45 seconds suggested! The actual amount of time is not defined.

That makes it really tricky...many situations could arise where it seems like the system could not provide enough warning for the driver - and the driver may be completely spaced out and lacking context so needs a LOT of time for taking over the driving task. So it seems many self-driving companies skip L3 and go straight to L4.

So, given the amount of warning time for unmanageable situations that has to be provided with an L3 system, I just have a hard time seeing it work in city driving scenarios. I don't really see it working on freeways either, but at least there it seems more likely.
 
L3 requires notification of the driver (who doesn't have to be paying attention at all!) prior to disengagement, with "sufficient time." I've seen as long as 45 seconds suggested! The actual amount of time is not defined.

That makes it really tricky...many situations could arise where it seems like the system could not provide enough warning for the driver - and the driver may be completely spaced out and lacking context so needs a LOT of time for taking over the driving task. So it seems many self-driving companies skip L3 and go straight to L4.

So, given the amount of warning time for unmanageable situations that has to be provided with an L3 system, I just have a hard time seeing it work in city driving scenarios. I don't really see it working on freeways either, but at least there it seems more likely.
That was my thought about Level 3 in the city until I saw this video recently with the CTO of Cruise. He said their vehicles, which they're about to deploy driverless in San Francisco, actually ask for human assistance every 5-10 miles. The assistance is remote, and they don't override the controls of the car; they just give it a higher-level command (the idea being that one remote assistant services many vehicles). Here is the example from the video: the car is blocked by construction, so it stops and waits for a command. The white truck is also parked with hazard lights flashing, and the car goes around.

This seems like it would really annoy other drivers though unless the remote assistance is very quick.
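As a rough illustration of that division of responsibility, here's a hypothetical sketch of what such an exchange might look like: the vehicle keeps the actual driving task, and the remote operator only returns a high-level directive. The message names and commands are invented, not Cruise's actual API.

```python
# Hypothetical remote-assistance exchange, loosely modeled on the Cruise
# example described above. All names and commands are invented.
from dataclasses import dataclass
from enum import Enum, auto

class Directive(Enum):
    WAIT = auto()             # hold position
    PROCEED_AROUND = auto()   # treat the blockage as static and route around it
    REROUTE = auto()          # pick a new route entirely

@dataclass
class AssistanceRequest:
    vehicle_id: str
    reason: str               # e.g. "lane blocked by construction"
    camera_snapshot: bytes    # context for the remote operator

def handle_request(req: AssistanceRequest) -> Directive:
    """A remote operator reviews the snapshot and returns a high-level
    directive; the vehicle's own planner stays responsible for steering,
    braking, and obstacle avoidance while executing it."""
    # In the construction example above, the answer would be PROCEED_AROUND.
    return Directive.PROCEED_AROUND
```

The latency question is the same one raised above: if the operator takes more than a few seconds to answer, the car is just sitting there blocking traffic.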
 
L3 requires notification of the driver prior to disengagement, with "sufficient time." I've seen as long as 45 seconds suggested! The actual amount of time is not defined.

That makes it really tricky...many situations could arise where it seems like the system could not provide enough warning for the driver - and the driver may be completely spaced out and lacking context so needs a LOT of time for taking over the driving task. So it seems many self-driving companies skip L3 and go straight to L4.

So, given the amount of warning time for unmanageable situations that has to be provided with an L3 system, I just have a hard time seeing it work in city driving scenarios. I don't really see it working on freeways either, but at least there it seems more likely.

Many self-driving companies don't have regular everyday drivers all over the country, in all types of environments, like Tesla has. Tesla has to deal with the average Joe behind the wheel. Waymo had their own test drivers overseeing the system before they offered their services to the public. They also had full control of their cars. And they operate in mapped-out, geofenced cities. You are absolutely right that it is tricky in Tesla's case. There has to be a test phase. You don't just release a piece of software and L4 magically happens. There is a lot of testing involved up until that point. Google did a lot of that before Waymo came to be. And they had their own drivers as well.

And if you think L3 is difficult, L4 is on a different level. There is no human fallback with L4, which means the car has to do everything, and when it can't, it has to deal with the situation in such a way as to not require a driver. That's a lot harder to implement than having a human fallback like Level 3.
 
That was my thought about Level 3 in the city until I saw this video recently with the CTO of Cruise. He said their vehicles, which they're about to deploy driverless in San Francisco, actually ask for human assistance every 5-10 miles. The assistance is remote, and they don't override the controls of the car; they just give it a higher-level command (the idea being that one remote assistant services many vehicles). Here is the example from the video: the car is blocked by construction, so it stops and waits for a command. The white truck is also parked with hazard lights flashing, and the car goes around.
View attachment 603012
This seems like it would really annoy other drivers though unless the remote assistance is very quick.

Kind of seems like high-functioning L3. I don't know. I guess these distinctions do blend together. Technically, the assistance is not taking over the driving task, so I don't know what to call that.