FUD I believe in. There is an enormous leap from a safe Level 2 system to a safe Level 3-5 system.

- The ones who have Autopilot are those who can afford it.
- They follow the owner's manual and use it only on highways, not on city streets or in construction zones, and they never misuse or abuse the system (and never sleep behind the wheel)...
- They are more safety-conscious; that's why they bought it.

Not sure about that second point. And not sure what the first point has to do with driving ability.

What makes it problematic is that it compares miles driven without AP to miles driven with AP, rather than miles driven in AP-equipped cars versus miles driven in non-AP-equipped cars. AP tends to be used consistently on the safest roads, such as freeways, and less frequently on city streets, where accidents are more common. That skews the statistics badly enough that you can't derive anything meaningful from them, beyond that AP and Tesla's other safety features make the cars a lot safer than cars that lack those features.
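
To make the skew concrete, here is a toy sketch in Python. Every number in it is invented; it only demonstrates how a system used mostly on freeways can look much safer overall even if it is no safer on any individual road type.

```python
# Toy illustration of the road-mix confound (all numbers invented).
# AP miles are concentrated on freeways, which are safer per mile anyway.

miles = {                      # (freeway miles, city miles)
    "AP on":  (9_000_000, 1_000_000),
    "AP off": (2_000_000, 8_000_000),
}
crashes = {                    # (freeway crashes, city crashes)
    "AP on":  (9, 4),
    "AP off": (2, 32),
}

for mode in miles:
    fwy_mi, city_mi = miles[mode]
    fwy_cr, city_cr = crashes[mode]
    overall = (fwy_cr + city_cr) / (fwy_mi + city_mi) * 1e6
    print(f"{mode}: {overall:.1f} crashes per M miles overall, "
          f"{fwy_cr / fwy_mi * 1e6:.1f} on freeways, "
          f"{city_cr / city_mi * 1e6:.1f} in the city")

# Per-road rates are identical in both modes (1.0 freeway, 4.0 city),
# yet "AP on" looks ~2.6x safer overall purely because of where it is used.
```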
 
If you go down the Elon Musk, Lex Fridman, George Hotz rabbit hole on this, I would say read the Fridman paper first, check out some of the citations, check out Fridman's podcast with Hotz next, and finish up with Fridman's podcast with Musk.

Musk's conclusion is fairly straightforward. Driver attentiveness only matters while a human intervention can still do better than Autopilot would on its own. Musk thinks the time frame within which human drivers will be better than the computer is so short that this issue is irrelevant.

His example is that when elevators came out, people did not trust them unless there was an elevator operator. Now, says Musk, it would be insane to have a guy with a handle who could stop the elevator anywhere, even between floors, or could start it accidentally with a person halfway out.

It's impossible to say if Musk is right, because the rate of improvement is a matter of opinion at this point. But, carrying on with his example, Musk has also said that at some point people will not believe that we ever allowed humans to actually drive cars.

This issue may not be solvable by data. The Fridman paper showed that, in its statistical sample, over 90% of the human overrides of the AP system were "anticipational" -- but because the humans took over, we don't know how Autopilot would have done.

I think one problem with Musk’s conclusions and statements is that some of them are easier to agree with than others.

I don't know if manually driving a car is comparable to manually operating an elevator, given that driving cars (and wagons before them) has a much longer history and greater cultural significance than operating elevators, but the logic is otherwise easy to agree with: at some point automated cars will surely be good enough that we'll look back at manual driving much the way we now look at horse riding. A leisure activity at best.

The complexity starts when you realize that accepting someone's vision does not automatically translate into believing in their execution of it. Musk could well be right about everything theoretical (the vision, the pointlessness of Lidar, the pointlessness of maps beyond navigation), for argument's sake, and Tesla could still fail to deliver with the current sensor suite, within the current timelines. In the end, Musk cannot control his teams' ultimate ability to deliver on what remains an unsolved research problem. And it only gets more complex in reality, because in reality Musk is probably not right about everything either, and is likely wrong on some portions of the equation.

So what to believe and whom? Not so simple.
 
Tesla’s Driver Fatality Rate is more than Triple that of Luxury Cars (and likely even higher)

"Tesla’s mortality rate (41 deaths per million vehicle years) is so much higher than the average luxury car (13 deaths per million vehicle years) that when comparing the two, the difference is hugely statistically significant. The difference is 28 additional deaths per million vehicle years, with a confidence interval of 11 to 63, and a p-value of 0.0001."

That's not really surprising. Notice how many of those luxury cars have physical, tactile controls for everything important. Without Autopilot, touchscreen interfaces in your car are an accident waiting to happen, and unfortunately, not all Tesla cars have AP capabilities.
 
I think one problem with Musk’s conclusions and statements is that some of them are easier to agree with than others.

I don't know if manually driving a car is comparable to manually operating an elevator, given that driving cars (and wagons before them) has a much longer history and greater cultural significance than operating elevators, but the logic is otherwise easy to agree with: at some point automated cars will surely be good enough that we'll look back at manual driving much the way we now look at horse riding. A leisure activity at best.

The complexity starts when you realize that accepting someone's vision does not automatically translate into believing in their execution of it. Musk could well be right about everything theoretical (the vision, the pointlessness of Lidar, the pointlessness of maps beyond navigation), for argument's sake, and Tesla could still fail to deliver with the current sensor suite, within the current timelines. In the end, Musk cannot control his teams' ultimate ability to deliver on what remains an unsolved research problem. And it only gets more complex in reality, because in reality Musk is probably not right about everything either, and is likely wrong on some portions of the equation.

So what to believe and whom? Not so simple.

That's what makes this so much fun! Think about it: we WILL be the generation that witnesses this whole AI, self-driving EV transition, which is, in my opinion, probably as important historically as the internet, possibly more, and we ALSO saw the internet. Once we solve self-driving with AI, other, simpler problems will fall like dominoes, because we'll have the tools and knowledge that will accelerate us toward whatever weird AI future we are heading for. Maybe nothing much will change, or maybe we'll merge into a singularity two decades from now... I want to be on the front lines watching this, seeing it develop, seeing the forward and backward steps. I could do without all the rude negativity from some on here and most outside here. My grandkids will ask me about it like I asked my grandparents about their first car and TV ;) Worth some patience and a couple grand in my opinion, but not for everyone. Not for you? Don't spend the cash...

As for more on-topic stuff:
1) I think yes, there is a HUGE jump to true Level 4, and again to Level 5. I believe we'll have Level 5 'functionality' for several years at least before they remove the human and go Level 4. You almost need Level 5 functionality in order to get enough confidence that you've found enough edge cases to go Level 3. The first Level 4 experiments will be acceptably safe, but will probably require remote human operators often at first. Level 3 seems like what FSD will actually end up being for my car in the next couple of years: if it gets confused, it just has to be smart enough to pull off, stop, and ask for help from me (a minimal sketch of that fallback logic follows after this list). That will be a fundamentally safer system, because it won't be relying on me monitoring it. By my definition, which I think aligns with SAE, we don't have to worry about how long it takes a human to take over: the car will be safe, it just may get stuck if you ignore it. By that definition I don't see it as a huge step to Level 3. Hwy NoA is actually pretty close: we need more accurate maps or more intelligence to avoid most of the dumb stuff it fails at (being in the wrong lane, speed limits, etc.) and the ability to ask for help when there is a cop behind you or on the side of the road. The actual driving is pretty close to acceptably safe, and it is getting better so quickly that I have no doubt we are getting close to, or are already, safer than a human on a highway. Simple things like not being aggressive, not tailgating, and not wandering out of lanes automatically make an accident around you less likely.

2) I know 1) above will make some say 'NoA is nowhere close'. Well, my experience is that if you let the car do what it wants, it does weird stuff, is kinda rude, not as smooth as the smoothest drivers, and can frustrate drivers around you... BUT it almost never hits things. These days most accidents come from lack of attention and lane changes, two things computers excel at. Some of that can be solved with HW3 speeds (faster decision making) and some tweaked algorithms, but MOSTLY it's about just accepting that what it is doing is safe enough, even if it's not what I would do. Still Level 3, just not as you envision it. Tesla has chosen not to spend time making existing features more robust, and this makes sense if they believe they are close to turning it all 'on' and would rather work on big-picture stuff. For instance: why solve the dancing-cars issue in the first releases if the fundamentally new AI algorithms in V10 were going to fix it as a matter of course? I think a lot of the shortcomings fall into that camp. 8 signal ticks before changing a lane? Come on!

3) Which leads me to a very fundamental question: are there two forks of the code, FSD and EAP? Will they suddenly switch HW3 people with FSD to a completely different code base, or will they keep merging pieces of their FSD code into the existing one? You saw what their FSD cars were capable of: obviously a V9/V10 interface, but with a lot more showing on the display and a lot more capability. When the first FSD features come out, we'll know one way or the other pretty quickly, imho.
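
Here is the minimal fallback sketch promised in 1): a toy Level 3 hand-off state machine. This is not Tesla's design; the states, the grace period, and all names are invented purely to illustrate "ask the driver, and if ignored, pull off and stop".

```python
from enum import Enum, auto

class Mode(Enum):
    ENGAGED = auto()             # system drives; no constant monitoring needed
    TAKEOVER_REQUESTED = auto()  # system is out of its depth; asking the human
    DRIVER_DRIVING = auto()      # human took over
    MINIMAL_RISK = auto()        # request ignored: pull off, stop, hazards on

TAKEOVER_GRACE_S = 30.0  # invented grace period before the car gives up

def step(mode: Mode, confident: bool, driver_took_over: bool,
         seconds_waiting: float) -> Mode:
    """One tick of a toy Level 3 fallback state machine."""
    if mode is Mode.ENGAGED:
        return Mode.ENGAGED if confident else Mode.TAKEOVER_REQUESTED
    if mode is Mode.TAKEOVER_REQUESTED:
        if driver_took_over:
            return Mode.DRIVER_DRIVING
        if seconds_waiting > TAKEOVER_GRACE_S:
            return Mode.MINIMAL_RISK  # safe even if the human never responds
        return Mode.TAKEOVER_REQUESTED
    return mode  # DRIVER_DRIVING and MINIMAL_RISK are sticky here

# The point: the system stays safe whether or not the driver answers,
# which is what removes the "how fast can a human take over" problem.
```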

TLDR; read it or move on, nothing really added!
 
That's not really surprising. Notice how many of those luxury cars have physical, tactile controls for everything important. Without Autopilot, touchscreen interfaces in your car are an accident waiting to happen, and unfortunately, not all Tesla cars have AP capabilities.

Regarding this, I agree with many above. Demographics, driving style, and where you are driving all make the existing data we have access to kinda irrelevant. Cars as fast as a Model S or a 3P in the hands of your average driver are probably not a good idea safety-wise, lol (watch people on their first couple of autocrosses to see how poor humans are when hitting the go pedal). What we do know is that, OBJECTIVELY, Teslas are as safe as or safer than any other vehicle in an accident. I, personally, am much safer on AP after driving 3.5 hrs in traffic at night to the cottage, thanks to the attention it frees up and the radar's night vision. In busy traffic with people all over, auto lane changes, etc., I think it is currently debatable, but in theory only, since we just don't have enough RELEVANT information. I bet Tesla has better information that they use for validating their own software, but it would be very difficult to put that in a form fit for public consumption, so don't expect it any time soon.

What would fill in the gaps is if other car manufacturers started posting the same stats that Tesla publishes, so we could compare the different autopilot-type systems. That would also add some transparency, but I am pretty sure companies won't want to do that until they have to.
 
2) I know 1) above will make some say 'NoA is nowhere close'. Well, my experience is that if you let the car do what it wants, it does weird stuff, is kinda rude, not as smooth as the smoothest drivers, and can frustrate drivers around you... BUT it almost never hits things.

This is a dangerous position to take. It is well documented that, left unattended, AP/NoA will hit stationary cars, can exit a poorly marked lane and hit something on the curb, and can hit cross-traffic trucks. These were in fact the first three Autopilot fatalities, and there are plenty of similar examples that luckily ended better.
 
This is a dangerous position to take. It is well documented that, left unattended, AP/NoA will hit stationary cars, can exit a poorly marked lane and hit something on the curb, and can hit cross-traffic trucks. These were in fact the first three Autopilot fatalities, and there are plenty of similar examples that luckily ended better.

Agreed on all fronts, but I think they've made headway on these, and the gaps will get filled in pretty quickly. Basically, what I was saying is that, as things stand, as long as something doesn't appear in the lane, they are pretty close. Maybe a year of edge cases and a couple of fundamentally new abilities, like stopping for stuff in the lane. So it may be feasible to have Level 3 highway by the end of next year? Just a guess though, so feel free to roast me on that.
 
Agreed on all fronts, but I think they've made headway on these, and the gaps will get filled in pretty quickly. Basically, what I was saying is that, as things stand, as long as something doesn't appear in the lane, they are pretty close. Maybe a year of edge cases and a couple of fundamentally new abilities, like stopping for stuff in the lane. So it may be feasible to have Level 3 highway by the end of next year? Just a guess though, so feel free to roast me on that.

Given that Tesla says they expect to be 'Level 5, no geofence' feature complete by the end of 2019, they should indeed have autonomy-capable recognition and stopping abilities by then.

That is, if we believe what they say.
 
My interpretation of feature complete is that it can do the basics, but not 100%. What exactly it can and can't do at that level is open to much debate.

I agree we don't know how far along Tesla is. "Feature complete" is itself a fairly well-established term that means basically a release-candidate level of function, though not yet reliability, so they can't be missing much in terms of features if they really expect to be "feature complete" at "Level 5, no geofence" by the end of 2019, as Elon said on Autonomy Investor Day.

Of course we don’t know if Elon or Tesla was truthful, accurate or knowledgeable enough to state this. But he did say it.