FSD Beta Videos (and questions for FSD Beta drivers)

In general (No route limitations):

If you define an accident as hitting another car or pedestrian,

If I didn't intervene at all, most of my risk would be from pissed-off people behind me, or someone rear-ending me.

Considering all of this, probably around 500 miles.

If I could choose a 5-mile route, probably 10k miles or more.
It's not even close to 500 miles unless you were on the interstate for 500 miles driving straight, and even then it's doubtful.

I feel like it's 10-15 miles on well-marked roads with stoplights and 0-5 miles on other roads.
 
Yea, but the cars aren't operating on Autopilot in the tunnel…

Ya, I previously posted that I think FSD should be safe and reliable enough to do some geofenced routes within 6 months. People think I'm crazy, and perhaps I am, lol. But I think the current approach will allow for reliable driving soon. There are a lot of little issues, but most of them should be fixed "soon," and FSD should be ready to perform some routes safely and consistently in 6 months.

Robotaxis not only require safety and reliability though. There's an entire ecosystem that needs to be built for robotaxis, like support infrastructure, software, testing, etc.

What I'm noticing while using 10.2 is a set of very consistent issues, not random ones. Based on progress we've seen in the past, Tesla can and will fix consistent issues.

As one example, straight unprotected left turns made huge progress between 10.1 and 10.2 (~2 weeks)...
 
So do I… are you trolling? There's no way it can do 500 miles on average without an accident.

I pulled out of my house yesterday. At the first light, it stopped to make a right; when the light turned green, it started to proceed straight into a woman and her stroller. Had to disengage there. Half a mile later, it tried to merge into another car, so I had to slam on the brakes there. That was basically two accidents in one mile. Now, granted, sometimes it can go a few miles, or even 10 or 20, without a dangerous situation/disengagement in low-traffic scenarios, but if you were to let it loose on the street, I'd bet on average it would be one accident every 5-10 miles. Maybe even more often during rush-hour traffic.
I think you're underestimating other drivers' ability to dodge you. :p
Think about how many collisions you've avoided that would have been caused by other drivers.
@powertoold used to think it was 1 per 4 million+ miles, so he's definitely getting closer to reality! To be fair, that was a more severe collision definition...

 
In general (No route limitations):

If you define an accident as hitting another car or pedestrian,

If I didn't intervene at all, most of my risk would be from pissed-off people behind me, or someone rear-ending me.

Considering all of this, probably around 500 miles.

If I could choose a 5-mile route, probably 10k miles or more.
Using your 500-mile estimate, that puts you at about 0.3% of your prediction for July 2021 of 1 accident every 150,000 miles (500 / 150,000 ≈ 0.33%).

Just something to keep in mind when you make these predictions. ;)
 
Using your 500-mile estimate, that puts you at about 0.3% of your prediction for July 2021 of 1 accident every 150,000 miles (500 / 150,000 ≈ 0.33%).

Just something to keep in mind when you make these predictions. ;)

Yup, my posts back in Oct 2020 were about geofenced routes (if you look at the 3 relevant posts I made that day). I obviously didn't mean somewhere like Manhattan or any location of your choosing. I even prefaced my prediction with how crazy I thought it was, so I didn't think it was likely. Anyway, I want to put this to rest. I've already admitted defeat on some of my prior predictions.


[Attached: screenshots of the three Oct 2020 posts referenced above]
 
I need to bookmark this one. haha.
How exactly will you determine that it's safe enough? Is this claim falsifiable?
It isn't. @powertoold has made a dozen predictions and claims in the last year alone, and none of them (absolutely zero, zip, nada) has come true. Yet he keeps trucking on and just moves the goalposts or tries to revise the history of what he said.
 
[Attachment 723303: plot of an exponential curve]

If x is time, we are probably around the middle point. Level 5 autonomy is somewhere above 2 on the y-axis.
Musk says the improvements are exponential. I'm just saying we may be way out on the negative x-axis, where the change with time is slow (small slope). So it looks like not much is happening to get us to Level 3 (time = 0) any time soon. Exponential doesn't necessarily mean what Musk implied: fast change with time.
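To illustrate the point, a minimal sketch (pure math, nothing Tesla-specific): the slope of eˣ equals eˣ itself, so far out on the negative x-axis an exponential is nearly flat.

```python
import math

# Slope of e^x is e^x itself: nearly flat far out on the
# negative x-axis, steep only once x gets near and above 0.
for x in (-6, -3, 0, 3):
    print(f"x = {x:+d}: value = slope = {math.exp(x):.4f}")
```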
 
Musk says the improvements are exponential. I'm just saying we may be way out on the negative x-axis, where the change with time is slow (small slope). So it looks like not much is happening to get us to Level 3 (time = 0) any time soon. Exponential doesn't necessarily mean what Musk implied: fast change with time.

It may be exponential. For example:

- Have to disengage for the same or a similar issue every 10 miles
- Tesla fixes that issue; now have to disengage for another issue every 100 miles
- Tesla fixes another issue ... every 1,000 miles
- ... every 10,000 miles, and so on
There's definitely a pattern of issues that lead to my disengagements. Hopefully the repeating issues get fixed, leading to "exponentially" fewer disengagements.
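A minimal sketch of that multiplicative model (the 10x-per-fix factor is an illustrative assumption, not measured data):

```python
# Hypothetical model: each class of issue Tesla fixes multiplies
# the miles between disengagements by ~10 (assumption for illustration).
miles_per_disengagement = 10
for fix in range(1, 5):
    miles_per_disengagement *= 10
    print(f"After fix {fix}: ~{miles_per_disengagement:,} miles per disengagement")
```

Four fixes of that kind would take the rate from 10 to 100,000 miles per disengagement, which is the "exponential" shape being claimed.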
 
Fair enough, you did say accidents.

Let me ask you this, if you were not to intervene on the Beta software currently in your car, how many miles do you think it would drive before it got into an accident?
If @powertoold's post was strictly about accidents, then guess what? It was already fulfilled in the very first month. The beta testers went over 150k miles without an accident, and there still hasn't been any accident (*according to Elon).

Clearly, the only way to compare an SDC that is still in testing against average human reliability (accident rate) is not by counting accidents that happen while humans are literally taking over and preventing them. There won't be any accidents, because the drivers are preventing them. Duhhhhhhhhh.

It's by counting the accidents that WOULD HAVE occurred if the human driver hadn't taken over.
Hence they are called safety-related disengagements.

The context of @powertoold's statement is that Tesla is done, it's game over, and Tesla is 5+ years ahead: they already won, it will be ready in 6 months, and it will have human reliability (accident rate). Someone then looked up the stats for human reliability, and @powertoold said it would easily match that in 6 months.

@powertoold's post prompted a 3-page discussion on 150k miles per disengagement. At no point in those 3 pages did he correct anyone. He joined the discussion and doubled down, directly linking disengagements to his statement.

Here was my question to @powertoold, directly quoting his prediction:
So, based on your timeline, it will be at "no accidents on average every 150K miles" (no disengagements) in "6 months," according to you.
Seeing as one month will be up in 5 days, fact-checking you means the next update should bring it to one disengagement every 25,000 miles (150,000 / 6).
Here is his response; notice he again doesn't clarify that he isn't talking about disengagements, but rather links the two:
Sorry, but this is the wrong logic to apply. Disengagement or development improvement doesn't have to be linear. For example, let's say drivers often have to disengage every 10 miles because the car doesn't get into the correct left turn lane. If Tesla fixes that one problem, it's possible drivers will only have to disengage every 100 miles. I think disengagement improvement can be exponential.

Many of the disengagements we've seen are mostly:
- Gets into the wrong turn lane
- Moves into a different lane over complicated intersections
- Difficulty turning onto narrow roads with cars

As for my estimate of 6-9 months, that's unbelievable to me. But based on what we're seeing, it's possible.

It's okay to be wrong; it's not okay to be wrong, then try to revise history and conjure up another prediction while spewing misinformation.
At another point, @powertoold also said that Tesla would have L5 that is better than humans, behind closed doors, by the end of 2021, or something similar.
 
It may be exponential. For example:

- Have to disengage for the same or a similar issue every 10 miles
- Tesla fixes that issue; now have to disengage for another issue every 100 miles
- Tesla fixes another issue ... every 1,000 miles
- ... every 10,000 miles, and so on

There's definitely a pattern of issues that lead to my disengagements. Hopefully the repeating issues get fixed, leading to "exponentially" fewer disengagements.
We talked about this "exponential" thing a lot in 2019 when Elon first brought it up. Here was my take on how that could happen ...

I think we are saying Tesla can solve x scenarios per year on a flat budget. Maybe more than x (say 2x), but not 10x.

So, what does that mean in terms of the march of 9s? It depends on the distribution of edge cases by their probability of occurring. In some cases this leads to exponentially better FSD, in others not.

Assuming Tesla can solve 100 scenarios in a year, in this ideal case it takes 100 scenarios to go from 90% quality to 99%, as each of them has a 0.09% probability of occurring. The next year, Tesla again solves 100 scenarios, each with a 0.009% probability of occurring (i.e., 1/10 of the first-year scenarios). So FSD goes from 99% to 99.9%.

[Chart: FSD quality by year when per-scenario probability drops 10x each year]


But if the edge cases are such that the second-year ones each have a 0.005% probability of occurring, FSD only goes from 99% to 99.5% (100 × 0.005% = 0.5 points of failure removed). So, as I said, whether solving a certain number of edge cases a year results in exponentially better quality depends on the probability distribution.

[Chart: FSD quality by year with a slower-decaying edge-case distribution]


BUT, since we are talking about the long tail, and the total of all the probabilities needs to sum to 100%, it is somewhat reasonable to assume that the probability goes down exponentially, i.e., asymptotically approaches zero. So Tesla FSD can get exponentially better if (see the sketch below):
- Tesla figures out a way to solve the most probable edge cases first, i.e., prioritization is very important
- Tesla doesn't start running out of training data / NN nodes
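A minimal sketch of that math, using the illustrative numbers from the posts above (100 scenarios solved per year; not real FSD data):

```python
# Quality = 1 - total probability of unsolved edge cases.
# Each scenario solved in year i removes p_per_year[i] of
# failure probability (illustrative numbers from the example).
def quality_by_year(start_quality, scenarios_per_year, p_per_year):
    failure = 1 - start_quality
    quality = []
    for p in p_per_year:
        failure -= scenarios_per_year * p
        quality.append(1 - failure)
    return quality

# Ideal case: year-2 scenarios are 1/10 as likely (0.009% each).
print(quality_by_year(0.90, 100, [0.0009, 0.00009]))  # ~[0.99, 0.999]

# Slower-decaying tail: year-2 scenarios at 0.005% each.
print(quality_by_year(0.90, 100, [0.0009, 0.00005]))  # ~[0.99, 0.995]
```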
 