
Post Elon Update Poll on FSD

I believe that Level 4 autonomous driving will be a reality with FSD before December 31, 2022

  • Yes, I believe this

  • No, I do not believe this


AP is not supposed to do that. And FSD is still in development. Anything else?
AP is still in development:

[screenshot: Autopilot feature settings, showing BETA disclaimers]


Interestingly, Autopark is the only component of Autopilot that doesn't have a BETA disclaimer.
 
One gets the impression that Tesla's FSD team is a talented bunch of people with limited experience - so they try an approach and, if it doesn't work, they are not afraid to change it.

There are pros and cons to having people who are not AV-industry veterans. Veterans tend to be dogmatic, and their first approach would likely be better ... but if they get stuck they will keep patching rather than rewrite.
One might get that impression, but it would be wrong. People like Andrej Karpathy are experts and innovators in the machine learning field.

I don't think at this point in history that there's any such thing as an "autonomous vehicle veteran", simply because there are as yet few if any real autonomous vehicles.
 
One might get that impression, but it would be wrong. People like Andrej Karpathy are experts and innovators in the machine learning field.

I don't think at this point in history that there's any such thing as an "autonomous vehicle veteran", simply because there are as yet few if any real autonomous vehicles.
See my other post on this. Anyone from the DARPA challenge is an FSD "veteran" ...

BTW, Machine Learning <> Self-Driving. CNNs are a specialized field within Machine Learning, and FSD is a specialized field within CNNs.
 
Placing this here for when the inevitable happens later this year and FSD is pronounced to be safer than humans by Elon.

This is based on the "analysis" by one Edward Niedermeyer.

Let me explain who Niedermeyer is. He is a notorious anti-Tesla "journalist" and an important cheerleader for TSLAQ.
- He ran a blog called "TeslaDeathMarch" starting in 2008 (!), which eventually died before Tesla did.
- He has written books "exposing" Tesla & Elon Musk.
- He was fired by his employer. In an industry that still employs Lora Kolodny, that's something.


And you claim to not be "anti-Tesla"? With "friends" like you, Tesla doesn't need enemies.
 
This is based on the "analysis" by one Edward Niedermeyer.

Let me explain who Niedermeyer is. He is a notorious anti-Tesla "journalist" and an important cheerleader for TSLAQ.
- He ran a blog called "TeslaDeathMarch" starting in 2008 (!), which eventually died before Tesla did.
- He has written books "exposing" Tesla & Elon Musk.
- He was fired by his employer. In an industry that still employs Lora Kolodny, that's something.


And you claim to not be "anti-Tesla"? With "friends" like you, Tesla doesn't need enemies.
Looks like this Niedermeyer was one of the references in the study.

Do the contents of that paper not seem entirely reasonable? Tesla's Autopilot crash stats are all but meaningless without granularity in the data, and I think most reasonable people would draw that conclusion with a bit of critical thinking. The author of that paper is someone who does research in automotive safety and clearly knows that the numbers provided say nothing of real value without appreciation for the plethora of factors involved.

People operating on either extreme end of the spectrum tend to exhibit extreme opinions and behaviors. Shareholders and those with long positions are more inclined to put on the blinders, and shorts are hypercritical -- as usual, the truth lies somewhere in the middle.

But there seems to be no shortage of people who love Tesla vehicles and what they've done there while having a very different opinion of FSD.
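Purely as an illustration of the granularity point (every number below is invented, nothing here is Tesla data), here's how an aggregate miles-per-crash comparison can look lopsided simply because of where the miles were driven:

```python
# Illustration only: all numbers are made up to show why aggregate crash
# rates say little without controlling for road type (weather, driver age,
# time of day, etc. would matter just as much).

fleet = {
    #               (miles,     crashes)
    "highway_ap":   (4_000_000, 2),    # Autopilot-engaged miles, mostly highway
    "city_ap":      (  500_000, 2),
    "highway_man":  (2_000_000, 1),    # manual (human-only) miles
    "city_man":     (2_500_000, 10),
}

def miles_per_crash(miles, crashes):
    return miles / crashes

# Naive aggregate comparison: Autopilot looks roughly 2.75x "safer".
ap  = [fleet["highway_ap"], fleet["city_ap"]]
man = [fleet["highway_man"], fleet["city_man"]]
print(miles_per_crash(sum(m for m, _ in ap),  sum(c for _, c in ap)))    # 1,125,000
print(miles_per_crash(sum(m for m, _ in man), sum(c for _, c in man)))   # ~409,000

# Stratified by road type, the gap disappears: within each road type the
# rates are identical, so the headline difference comes entirely from
# Autopilot miles being concentrated on the easier highway driving.
for road_type, (m, c) in fleet.items():
    print(road_type, miles_per_crash(m, c))
```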
 
See my other post on this. Anyone from the DARPA challenge is an FSD "veteran" ...

Hm. But the DARPA Challenge involved vehicles trying to find a path across unmarked desert, bereft of traffic rules, lights, signs, pedestrians, and lane markings, and filled with things like ditches, cliffs, large cacti, etc. How much of that would actually apply to what companies are trying to do now?

Still, it’s a good point. I’d assume anyone who’d worked on that would at least have a running start on an FSD implementation.
BTW, Machine Learning <> Self-Driving. CNNs are a specialized field within Machine Learning, and FSD is a specialized field within CNNs.
I'm so old, all I've done is procedural and object-oriented programming. And a fair bit of assembly, although I don't think anyone does that anymore. Anyway, I've no expertise at all in NN programming, other than what you could use to fill the first page of "Neural Nets for Beginners." Although I'm too lazy to look it up, I'm pretty sure Tesla's tossed everything and started essentially from scratch at least 3 times.
 
Hm. But the DARPA Challenge involved vehicles trying to find a path across unmarked desert, bereft of traffic rules, lights, signs, pedestrians, and lane markings, and filled with things like ditches, cliffs, large cacti, etc. How much of that would actually apply to what companies are trying to do now?
That was the 2004/2005 Grand Challenges. The DARPA Urban Challenge was in 2007 and was held on a city-street test course.

Here's the paper on Stanford's entry: http://robots.stanford.edu/papers/junior08.pdf
 
As far as "FSD safer than humans," I will remind everybody that there are absolutely no statistics on the performance of the driving task by FSD that can be compared to the performance of the driving task by humans in order to calculate any "safer" metric. This is because there are no cars being driven by FSD alone. The only conclusion that could be reached from the statistics that are available today is that FSD in conjunction with a human is safer than human alone, and this statistic is about as meaningful as saying anti-lock brakes in conjunction with a human are safer than the human braking alone.
 
The only conclusion that could be reached from the statistics that are available today is that FSD in conjunction with a human is safer than human alone
Yeah, so what could Elon Musk be referring to when he said, "My personal guess is that we'll achieve Full Self-Driving this year at a safety level significantly greater than a person"? Like you suggest, FSD Beta as a driver assist should be safer than a human alone, and he has said elsewhere there have been "no accidents", presumably measured similarly to how they report vehicle safety quarterly.

I suppose the interpretation should be that FSD Beta is already safer but not significantly so, and the goal for this year is still FSD Beta as driver assist, since removing the driver is something not directly measured (or at least not reported). This intermediate goal could still be a reasonable step toward Tesla not requiring a driver for FSD: if it's not even significantly safer with a human, it's probably much harder to reach "safer without a human."

For example, if we say humans crash on average every 500k miles on all types of roads, FSD Beta + human safety should be 10x for "significantly greater", so on average 5M miles per crash. After achieving that, Tesla could test something similar to FSD Beta capability but designed for robotaxi evaluation (i.e., not driver assist) and try to achieve, say, 2-4x safer-than-human (1M-2M miles per crash).
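Spelling out that arithmetic (all of these are the hypothetical figures above, not Tesla numbers):

```python
# Hypothetical figures from the paragraph above, not Tesla data.
HUMAN_MILES_PER_CRASH = 500_000   # assumed human average across all road types

def target_miles_per_crash(multiple):
    """Miles-per-crash needed to claim `multiple`x the assumed human baseline."""
    return HUMAN_MILES_PER_CRASH * multiple

print(target_miles_per_crash(10))  # 5,000,000 -> "significantly greater", FSD Beta + human
print(target_miles_per_crash(2))   # 1,000,000 -> low end of a robotaxi-style target
print(target_miles_per_crash(4))   # 2,000,000 -> high end of a robotaxi-style target
```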
 
As far as "FSD safer than humans," I will remind everybody that there are absolutely no statistics on the performance of the driving task by FSD that can be compared to the performance of the driving task by humans in order to calculate any "safer" metric. This is because there are no cars being driven by FSD alone. The only conclusion that could be reached from the statistics that are available today is that FSD in conjunction with a human is safer than human alone, and this statistic is about as meaningful as saying anti-lock brakes in conjunction with a human are safer than the human braking alone.
Totally agree. But trust and believe that sometime this year there will be some sort of statement from a certain CEO stating that FSD is safer than humans, and the statement will be supported by metrics slanted wildly to support it. ("See... no fatalities while the car was on FSD, compared to fatalities of cars where humans were in full control. See? I was correct. FSD is safer.") And persons who lack critical thinking skills will believe him.
 
The only way the FSD miles can be considered for that type of approval is if they are driven with zero disengagements or interventions.

Even then, there might be questions around which roads the system is being used on. If a Beta tester finds a route the system can handle and does that loop over and over again, it could easily rack up disengagement-free miles. Try to do that route in reverse or change the route a bit and it might introduce new disengagements, so I imagine there's some type of uniqueness metric or they'd need data that is applicable to a certain ratio of public roads or something for approval as anything beyond Level 2.
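Nobody outside Tesla (or the regulators) knows whether such a uniqueness metric exists; purely as a sketch, one crude way to discount repeated-loop mileage would be to weight each repeat of a road segment less than the first pass. The segment IDs and the halving scheme below are invented for illustration and don't reflect anything Tesla actually does:

```python
from collections import Counter

def weighted_unique_miles(trips):
    """trips: (segment_id, miles) pairs from disengagement-free driving.
    Each repeat of a segment counts half as much as the previous pass --
    an invented weighting, just to show the idea of a uniqueness discount."""
    passes = Counter()
    total = 0.0
    for segment_id, miles in trips:
        total += miles * (0.5 ** passes[segment_id])  # 1.0, 0.5, 0.25, ... per repeat
        passes[segment_id] += 1
    return total

# Ten laps of one easy loop vs. ten distinct routes, 5 miles each.
print(weighted_unique_miles([("loop_A", 5.0)] * 10))                    # ~10 effective miles out of 50
print(weighted_unique_miles([(f"route_{i}", 5.0) for i in range(10)]))  # 50.0 effective miles
```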

There's a lot of nuance here that I'm sure the regulators are thinking about.


But let's just throw this out there: Autopilot is supposedly much, much safer than the average driver in terms of accidents per mile according to Tesla's published data, but it is still a Level 2 ADAS requiring hands on the wheel and eyes forward at all times. Why would FSD be any different? Tesla will use this data to get approval to roll out FSD Beta to the wider fleet as a Level 2 ADAS; anything beyond that will be another iterative process similar to what we're seeing right now. This is exactly the sequence of events Tesla described to the California DMV.

Elon thinks getting to that point is the easy part; getting to 1000% better than a human driver is what will be difficult.
 