Welcome to Tesla Motors Club

It's happening....FSD v9 to be released (to current beta testers)

What are we comparing human drivers to? If the bar were actually low we'd have autonomous vehicles after the billions of dollars that have already been invested.
We don't have to compare human drivers to anything. I fail to follow your logic about investment, since you seem to be implying that there was some "expected" sum to develop autonomous driving. Expected by whom? This sounds suspiciously like the old argument, "This can't be a good idea, because if it were, someone else would already have thought of it."
 
No, it's the exact opposite, making a self-driving car is a very good idea, that's why an obscene amount of money has been spent trying to realize it.
When we're talking about autonomous vehicles, we have to compare them to humans, because humans are the benchmark. Saying humans are "bad" at driving is a very popular sentiment, but I contend that humans are actually very good at driving. I'm saying that if hundreds of billions of dollars have been spent trying to make machines drive as well as humans, with minimal success, then maybe humans aren't so bad at driving after all.
As you said, all that really matters are the statistics. How close is unsupervised beta FSD to human safety? Can you give an estimate? What is the bar?
 
It's very difficult for anyone to predict how this will work itself out. First, imho we already have driver-inattention issues even with no driver assist at all: texting, fiddling with the audio system, daydreaming. In fact, it's ironic, but with AP engaged and the car's nag screens, drivers are possibly paying more attention while on AP than while manually driving down the road.

There will of course always be the idiots who game the system .. the guy who watched a DVD and got killed etc. The real question comes down to simple statistics .. on aggregate, does a given system make the roads safer or not? When anti-lock brakes were being tested, some argued (quite seriously) that it would encourage people to drive closer and faster, and make accidents more probable. Didn't happen. Similar arguments have been used against most new safety systems over the years.

Yes, as the car takes a more and more active role in driving, drivers are going to deteriorate in both ability and responsibility. But I think you are way over-estimating the current abilities of most drivers today. How many drivers can really respond well in an emergency? Know exactly how much braking to apply? Know if it's possible to swerve safely given the road conditions and the other cars around them? Of course, the answer is almost no one.

The fact is, the bar is pretty low for a car being safer than a human driver. Sure, it will mess up occasionally, sometimes with catastrophic consequences. But each such incident can be analyzed, rectified, and a fix deployed to an entire fleet of cars in a short time. It's leverage like this that will tip the statistics toward semi-autonomous systems, regardless of how fast human abilities (such as they are) might atrophy.

Of course, we are all speculating until we start seeing some serious deployment. The current FSD beta really doesn't tell us much, since the drivers are selected to remain alert at all times, and are disengaging any time something looks risky. But regardless of opinions on pros/cons, it's going to be an interesting time ahead as the technology evolves.
The story about the DVD watching was apocryphal and clearly debunked.
 
First, I'm in general agreement overall. This is an interesting discussion, with a lot of competing concerns and challenges. Not really pushing back on the general idea, but I do have some different observations on some details:

The fact is, the bar is pretty low for a car being safer than a human driver.

1:2M miles for an accident and 1:100M for a fatality is not *that low*. It's pretty remarkable how good humans are at such a complex task, actually. In fact, they are so good at the task that it may be decades before computers can do it as well. If 1:2M is a "low bar", what do you expect an acceptable bar for a computer is, and why are computers still very far away from it? If humans are so crappy at driving cars, why are we all willing to hop in an Uber or a co-worker's car without a second thought?
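As a rough sanity check, the per-mile rates quoted above can be turned into per-driver frequencies. This is a sketch, not data: the annual-mileage figure is an assumption for illustration; only the two rates come from the post.

```python
# Back-of-envelope: how often would one driver expect a collision or
# a fatality at the human per-mile rates quoted in this thread?
miles_per_year = 13_000          # ASSUMED average annual mileage
accident_rate = 1 / 2_000_000    # collisions (>12 mph) per mile, from the post
fatality_rate = 1 / 100_000_000  # fatalities per mile, from the post

years_per_accident = 1 / (accident_rate * miles_per_year)
years_per_fatality = 1 / (fatality_rate * miles_per_year)

print(f"~{years_per_accident:.0f} years of driving per collision")
print(f"~{years_per_fatality:.0f} years of driving per fatality")
```

Under those assumptions, a typical driver sees a serious collision only once every ~150 years of driving, which is why "the bar is low" is worth questioning.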

But I think you are way over-estimating the current abilities of most drivers today. How many drivers can really respond well in an emergency? Know exactly how much braking to apply? Know if it's possible to swerve safely given the road conditions and the other cars around them? Of course, the answer is almost no one.
What percentage of accidents could be solved with these skills? These are skills needed *after* something else has gone wrong, and aren't generally needed by attentive drivers. The majority of accidents in the USA happen at low speed, in urban environments. Giving everyone great emergency braking or swerving skills won't make all that much difference if attention deficit is the issue. And, like I explained before, these are things tech already is solving- ABS, stability control, and panic braking assist do all help here even without full FSD. We can (and are) making humans superhuman in these areas with much simpler tech than FSD.

Sure, it will mess up occasionally, sometimes with catastrophic consequences. But each such incident can be analyzed, rectified, and a fix deployed to an entire fleet of cars in a short time. It's leverage like this that will tip the statistics toward semi-autonomous systems, regardless of how fast human abilities (such as they are) might atrophy.

Maybe. But today, we're nowhere near being able to do this. We struggle to get a system to not drive into cones, or to not think a picture of a car is a real car. So it's not as though, just because we suddenly have data for an "edge case", we could instantly program it in for next time. Sure, this is a hopeful end point, but it's really, really far away. Many of these edge cases will require a complex, complete understanding of the world with context, and saying it will be trivial to do this completely ignores how hard it's been to even get good image recognition working today, while just assuming we'll be so much better in the future. If the car has this complete understanding of the world like a human does, why did it crash in the first place?

Plus, that's the thing with edge cases- you may never see it again, so fixing it *after* it happens is too late. Your next case will be different. This is why safety critical systems are not designed by sending them out into the field, waiting until they hurt someone, fixing that failure, and then waiting for the next one.

However, people are *not* rational, and even if there is proof FSD is, say, 10 times safer, a single accident will be splashed across all the headlines and news, and probably be discussed in politics.
Be aware, Tesla itself says that an accident occurs on AP once every ~4M miles today. That's multiple accidents a day on AP already. These do not all make the news, for the very reason that they are so common.

If we get to a provable 1:20M-mile accident rate, and 1:1B miles for a fatality, then an accident will be news, because it will be so rare, like an airline accident is today. But below that, there will be a reasonable discussion as to whether the system is actually safer than a human. It will also matter what mistake the vehicle made, and whether it was one a reasonable human driver would have made or not.
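The "multiple accidents a day" claim follows directly from the rates above once you assume a daily fleet mileage. A minimal sketch, where the fleet-miles-per-day figure is purely a hypothetical assumption and only the per-mile rates come from the posts:

```python
# Fleet-level accident frequency at the per-mile rates discussed above.
fleet_miles_per_day = 10_000_000  # ASSUMED daily AP fleet miles (hypothetical)

rate_today = 1 / 4_000_000    # accidents per mile, Tesla's quoted AP figure
rate_target = 1 / 20_000_000  # the "newsworthy rarity" threshold above

print(f"today:  ~{fleet_miles_per_day * rate_today:.1f} accidents/day")
print(f"target: ~{fleet_miles_per_day * rate_target:.1f} accidents/day")
```

The point the example illustrates: at fleet scale, even a 5x safety improvement still produces a steady drumbeat of incidents, so "is each one news?" depends on the rate, not on any single crash.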

This is exactly the issue with airplanes today: they are so safe that any accident is a news story, which, as you say, makes people bad at statistics, as they dramatically over-estimate the crash rate of aircraft, ironically pushing some toward more dangerous forms of transport. But it also overlaps because airplanes are things you *ride* in and have no control over, and that is a powerful mental shift vs. a car you drive yourself. Self-driving is going to have that same weird issue of handing your fate over to a machine, as we do as passengers in an airliner.
 
The point was that if something as simple as rain-sensing wipers is still in beta after several years, it doesn't bode well for FSD _ever_ being out of beta.

Tesla introduced the first version of autopilot, the one based on the MobileEye hardware, in September 2014, almost 7 years ago. It's been in beta ever since.

And don't think I didn't see your sneaky insertion of "L4"! Musk has always said the goal is Level 5. Many many times. Most recently he promised it by the end of this year, a possibility I rate slightly below the Second Coming of Christ.

I think Tesla abuses the label while at the same time other companies don't honestly admit how many bugs their systems have. Other companies are especially problematic with their media systems, and charging systems. Just look how long it took Ford to sort out issues with the Mach-E. Heck in the early days some people couldn't even start their vehicles.

Another example is the Subaru Eyesight system. The first few generations were problematic, but it never had the beta tag. In fact I'd say that the first few generations of most ADAS systems left a lot to be desired.

The industry as a whole is moving to OTA SW updates, which is a double-edged sword. On the one hand, it allows easy fixing of issues and adding additional functionality (if the HW supports it). But it creates major SW headaches, because you end up with a mix of hardware variations in the field where it's hard to test all the possible scenarios.

Rain-sensing wipers are a great example of something that really should be given the freedom to evolve, if it's truly there as a convenience to a human driver. I can't say I particularly liked the AP1 rain sensor, despite Tesla using a proven one back then, and I also don't particularly like the Sprinter van rain sensor. Sometimes it just gets confused and will aggressively start wiping. Do I like the one in my Model 3? Sometimes. It's better than it was.

As to L4/L5 I'm not sure why I even mentioned L5. Sure I know L5 is the goal for Musk, but I firmly believe he picked the impossible goal because it's impossible. If he had picked some limited L4 functionality a lot more people would be livid at the lack of progress to it.

The problem with the Second Coming of Christ is that it's about as real as hostile aliens. If you were a hostile alien looking to take over the planet, your best bet would be to disguise it as the Second Coming of Christ. You could wipe out entire areas without anyone putting up a fight. But I'm pretty sure even aliens couldn't figure out how to get L5 to work safely alongside human drivers. So maybe L5 after the Second Coming of Christ.
 
Folks, I bought my car with FSD (not a subscription), but still today I have no FSD features at all (my software isn't even the FSD version). I am new to Tesla, and what is this? If it's not ready, don't sell it (or the full subscription) and return the money. Charging people for features that are not provided is PURE FRAUD.
 
Where is Kelowna?
Check the furthest right column on the above table that says FSD / EA for FSD features.
 
Kelowna is in British Columbia, Canada. OK, if it's only for BETA testers, then sell the upgrade ONLY to testers, and if it's not ready for the public, don't sell it TO THE PUBLIC. Otherwise, charging for a feature that isn't provided could be considered fraud. I'd understand if this were a very NEW feature and I could wait a couple of months, but not 6 months to a year. Moreover, I called Customer Service and they said to book a Mobile Service appointment so they can enable it for me, because in my case it's PART of the vehicle.
 
It has been in development and for sale since 2016. In 2019 it was coming "later this (i.e. 2019) year".
 
I found my answer directly from Tesla's service tech department. I need a service appointment with a Mobile Service tech, who can configure and enable my FSD. I've already passed 1,000 km, so my cameras must be calibrated. None of this is about BETA TESTERS. I have full FSD and Premium Connectivity included, and it MUST work (probably a bug). The beta testers are a very limited group of Tesla owners using completely different software available only to them.
 
Wow, pretty vitriolic. I'm sure that you'll be courteous to Tesla's employees, because if you come in with an attitude like "FRAUD!" you will be treated in kind.

Off topic: you live in a beautiful area. We took our Model 3 on a road trip to Banff, and stopped in Kelowna for a B&B and to check out the wineries.
 
I am very new on this forum, and the word "fraud" is not mine. In many topics, people (people with Tesla experience, not like me) have been complaining about this system since 2018, 2019... using even harsher words! With my BMW, I pick up the phone and in 5 minutes I have all my answers, but with Tesla it's not so easy. I love my Model 3, it's a very good vehicle, and it's up to me whether I use FSD or not, but if I bought this feature I want it to work. Very simple.
 
And you will indeed get it to work after service does their thing. Your area has a few divided highways (trans-Canada Rt 1 and Rt 5) and the current functionality of FSD will work very nicely on those. City or town FSD is the one in very limited beta testing.

(In 2019, TACC/FSD speed limits were based on Google maps and were frequently 5km/hr lower than posted; now, since our cars read signs, I expect that minor frustration will be gone.)
 
Maybe I am very naïve, but I believe Tesla and Elon. I will be patient and wait until cities and streets are added. Tesla says the configuration I have right now in my Model 3 must work. 👍
 
Dude, you nailed my thoughts exactly! The train of thought that, since humans drive with just two eyes, surely a car with 8+ cameras and a powerful computer must be better, is extremely flawed. There are multiple flaws in this thinking, but the biggest is what you said: a machine that can make the decisions needed to actually autonomously drive in even the most benign real-world situations would have to be essentially sentient (replicate a human brain). Pretty sure we are a LONG way off from that. I'm also positive cars wouldn't even make the top-ten list of optimal use cases for a self-aware, sentient machine.

That's why I scoff at those who believe Level 3 and above is in any way possible in the next 5 to 10 years. It defies common sense.
While you are right to be skeptical, just arguing from "common sense" is rather meaningless. Virtually every major scientific discovery in history has gone against the "common sense" of its time. Your argument is that an autonomous vehicle requires sentience and that we don't know how to create this in a computer. The latter, for the foreseeable future, is certainly true, but not the former. On what do you base your argument that an autonomous car requires sentience?

Certainly some exceptional circumstances require reasoning, but most day-to-day driving is mechanical and based on following simple rules, which is all that is required for the autonomy Tesla is currently aiming for (I won't comment on their earlier, more ambitious goals). Sure, there are a lot of rules, some conflicting, but that again is just a scale issue rather than one requiring sentience (and no, I'm not going to debate endlessly what sentience means).

I think Tesla (or at least Elon) have been way too optimistic with timelines, but I don’t think the problem is technically impossible, nor is a solution improbable.
 
1:2M miles for an accident and 1:100M for a fatality is not *that low*. It's pretty remarkable how good humans are at such a complex task, actually. In fact, they are so good at the task that it may be decades before computers can do it as well. If 1:2M is a "low bar", what do you expect an acceptable bar for a computer is, and why are computers still very far away from it? If humans are so crappy at driving cars, why are we all willing to hop in an Uber or a co-worker's car without a second thought?

What percentage of accidents could be solved with these skills? These are skills needed *after* something else has gone wrong, and aren't generally needed by attentive drivers. The majority of accidents in the USA happen at low speed, in urban environments. Giving everyone great emergency braking or swerving skills won't make all that much difference if attention deficit is the issue. And, like I explained before, these are things tech already is solving- ABS, stability control, and panic braking assist do all help here even without full FSD. We can (and are) making humans superhuman in these areas with much simpler tech than FSD.
You are muddling things up. First, the low fatality rate is mostly related to the vastly safer cars driven today. Crumple zones, seat belts, air bags, etc. all make accidents more and more survivable. It's important to distinguish accident avoidance from accident mitigation.

Let's look at serious accidents. What are the causes? Some are mechanical, to be sure, but most (look up the stats) are human error. Someone doesn't look in their mirror before a lane change, etc. These are lapses in concentration (or just bad driving). How would an autonomous vehicle reduce these serious accidents? First, by avoiding causing the accident in the first place, since the car isn't going to stop paying attention while it checks a text message (or get drunk, etc.). Second, by reacting faster and more safely when it does find itself faced with an emergency. Avoidance and mitigation. Sure, numerically there are far fewer of these serious accidents, but so what? They are the significant ones precisely because they are serious. Arguing that they are not relevant because they are numerically infrequent compared to fender-benders is an invalid argument. I'm also unclear as to why you argue we don't need FSD because we have airbags. Really?

What about those minor accidents? As you note, the majority of accidents fall into this category, mostly rear-ending someone at a stop light and similar. Again, the cause here is a lapse in concentration. And again, an autonomous car won't suffer from that.

The fact is, both humans and autonomous cars are going to get things wrong. Sometimes badly. Sometimes fatally. But the nature of those errors will be fundamentally different. Any decent, responsible human driver knows what they should do when driving, but lapses, distraction, laziness and fatigue mean they often do not do those things. We've more or less given up trying to make humans better drivers (don't drink and drive? don't text while driving?) .. as I noted above, most improvements over the past decades have been mechanical mitigations, not driver education.

The problem facing autonomous car development isn't solving lapses in concentration; that comes essentially free of charge by the very nature of the system. Instead, it's advancing the car's ability to understand what it should do. We are essentially seeing scientists and engineers teaching a car to drive. If/when that teaching reaches a level comparable to an average human driver, then it's inevitable that the autonomous car will be safer, because it never has lapses of concentration.

Of course, the big "if" here is whether that autonomous driving level can be achieved. And no one knows this .. sure, we can all speculate and argue about it (it's part of the fun). But that's all it is .. speculation. People here have used peculiar arguments about Lidar, Radar, sentience and who knows what else. But until we get hard numbers, no one knows. And if it is solved, then humans are definitely going to be in the back seat .. both figuratively and literally.
 
You: Humans are bad drivers, the bar is low.
Us: Humans get into collisions (>12mph) every 2 million miles. It doesn't look like FSD beta is very close to that.
You: Humans are bad in these ways, machines are good in these ways. Autonomous vehicles will replace human drivers when they achieve greater than human safety.
Me: Yes. Are we arguing about something?