Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Autonomous Car Progress

You understand that L4 autonomy is a driverless robotaxi, right? Send your kids to practice alone in the car and have the car return, then sit in the back seat and sip champagne while it drives you and the missus to a restaurant? That's what you envision Tesla getting to in 12 months?

Just explain to me how you get from 5 to 15 miles per disengagement in 36 months, and then magically from 15 miles to 40,000 miles per disengagement in 12 months? Try a graphing tool?

Each "nine" (order of magnitude improvement) is harder, not easier, you know... 🙃
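To put rough numbers on that (using the miles-per-disengagement figures quoted above, which are informal estimates, nothing official):

```python
import math

def nines_between(current_mpd, target_mpd):
    """Orders of magnitude ('nines') between two miles-per-disengagement rates."""
    return math.log10(target_mpd / current_mpd)

# Going from ~5 to ~15 mi/disengagement reportedly took ~36 months:
recent = nines_between(5, 15)        # ≈ 0.48 orders of magnitude
# Getting from 15 to 40,000 in 12 months would need ~3.4 MORE orders of magnitude:
needed = nines_between(15, 40_000)   # ≈ 3.4 orders of magnitude
```

So the claim requires roughly seven times the total improvement of the last three years, compressed into one year, while each successive "nine" is supposed to get harder.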
Right. Makes 0 sense.

Also, there are plenty of examples from Chuck Cook and others where the car has to creep way too far to see because of the camera locations.
 
You understand that L4 autonomy is a driverless robotaxi, right? Send your kids to practice alone in the car and have the car return, then jump into the back seat and sip champagne while it drives you and the missus to a restaurant? That's what you envision Tesla getting to in 12 months?

Technically speaking, an L4 vehicle is still allowed to make mistakes. It just needs to recognize situations in which it cannot drive, and enter a minimal risk condition (pull over, hazards on), as long as it never asks for a human to take over to achieve that minimal risk condition.

Don't let the perfect be the enemy of the good. It's still possible to have a safe L4 vehicle that is capable of making mistakes.
 
Technically speaking, an L4 vehicle is still allowed to make mistakes. It just needs to recognize situations in which it cannot drive, and enter a minimal risk condition (pull over, hazards on), as long as it never asks for a human to take over to achieve that minimal risk condition.

Don't let the perfect be the enemy of the good. It's still possible to have a safe L4 vehicle that is capable of making mistakes.
In the same vein, L4 vehicles are also allowed to stop operating due to weather (including fog, rain, snow). Waymo vehicles have been recorded stopping for fog, most L4 fleets also avoid driving in heavy rain, and almost none operate in snow.
 
11.4.2 is just incredible. I don't know how anyone can use it in the many locales where it works well and not see L4 being close. Basically, the only thing left is emergency vehicle response and better parking lot navigation, along with some typical frequency of Tesla polishing updates.

I'm posting this after my first-ever disengagement-free drive with it from work to home during rush hour, with two minor accelerator pushes: one to overtake a backed-up left-turn lane, and another to add 5 mph when it was going too slow changing into a backed-up highway lane.
 
Right. Makes 0 sense.

Also, there are plenty of examples from Chuck Cook and others where the car has to creep way too far to see because of the camera locations.
I think the argument from the other side is that, aside from some examples which are clearly perception-related (like the Chuck Cook one you brought up) and which can be addressed by routing away from such turns (a strategy ironically common among L4 operators like Waymo, which actively route to avoid unprotected left turns, but which Tesla's system doesn't know to do), there's not a whole lot of evidence that the remaining FSD Beta issues are perception-related.

Most of them seem to be dumb decisions made due to poor planning (which made some sense given my impression that the planner was largely hand-coded, although some argued things like lane planning actually got worse when Tesla started using NNs), and that is an issue more sensors won't solve; a better planner would (which is all software).

On the flip side, inaccuracies in the sensor suite may contribute to some of those poor decisions (especially for path prediction of other vehicles/objects), but in the later updates it isn't apparent that this is the case. It seems that as FSD Beta got better, the largest remaining issue became planning, not perception.
 
What deficiencies in vision-only do you see vs LIDAR, please give specific categories / examples of failures or deficiencies?

Optical illusions will be an issue with any neural-net-based system,* which a second source of measurements can help disambiguate.

* Either you require very high threshold for classification and fail to classify many objects, or you accept incorrect classification of some objects will happen. Additional measures don't completely eliminate this risk but they can reduce the errors quite a bit. Using parallax over some time helps add measurements too. Another tool to improve classification would be using non-visible frequencies like infrared.
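That threshold tradeoff can be sketched with toy numbers (the score distributions below are entirely made up for illustration, not from any real perception stack):

```python
import random

random.seed(0)

# Toy classifier confidence scores: real objects tend to score high,
# but optical illusions occasionally score high too.
true_objects = [random.gauss(0.85, 0.08) for _ in range(1000)]
illusions = [random.gauss(0.55, 0.15) for _ in range(1000)]

def tradeoff(threshold):
    missed = sum(s < threshold for s in true_objects)   # real objects left unclassified
    fooled = sum(s >= threshold for s in illusions)     # illusions accepted as real
    return missed, fooled

# A strict threshold misses many real objects; a lax one admits illusions.
strict = tradeoff(0.95)
lax = tradeoff(0.60)
```

Adding an independent measurement source effectively gives you a second, uncorrelated score to AND against the first, which is why it can shrink both error counts at once.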

I'd really like to see a system which can determine density of an object in addition to size/distance/colour. Is that snow coming off the truck in front of the vehicle or a sheet of drywall? Is there anything solid in the pile of leaves covering the road? AFAIK no companies have touched this type of thing yet. IMO, human-equivalent driving is a terrible threshold to stop at; automation should be able to achieve much higher levels of safety.
 
Optical illusions will be an issue with any neural net based system* which a second source of measurement can disambiguate.

* Either you require extreme probabilities and fail to classify some objects, or you accept incorrect classification of some objects.

I'd really like to see a system which can determine density of an object in addition to size/distance/colour. Is that snow coming off the truck in front of the vehicle or a sheet of drywall? Is there anything solid in the pile of leaves covering the road? IMO, human-equivalent driving is a terrible threshold to stop at: I'd like death by car accident to be national news because of how rare it is.

This discussion started when I mentioned that CV has won because you don't need LIDAR to achieve Cruise / Waymo levels of L4 performance / safety in certain geofenced areas. FSDb version 11.4.2 proves that for me.

As for your examples of pile of leaves, FSDb currently avoids those. FSDb also avoids road debris on highway and city streets. It's not perfect, but I have yet to see an example of a failure on 11.4.2 (where it runs into an obstacle or is about to). The risk of running into obstacles is less and less with every update.
 
Are there examples of Chuck fsd UPL creeping too far on 11.4.2? Can we conclude it's a sensor placement issue vs software issue?
Chuck often states that he feels the car is creeping too far, but every time he says that, the drone footage shows otherwise. The car is usually at least a foot outside the cross lane. It's hard to judge from the driver's seat because the curbs flare out so much.
 
Are there examples of Chuck fsd UPL creeping too far on 11.4.2? Can we conclude it's a sensor placement issue vs software issue?
I haven't been following, but he did post UPLs on 11.4.2. He did mention it was only light to medium traffic due to the holidays (so perhaps more testing needed in heavier traffic), but seems like no problems through all the turns. Way better performance than when I last viewed his videos, but it has been a while.
He also did a B-pillar range test and got 120-190 m as the maximum usable range, significantly further than the quoted 80 m.
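For a rough sense of what that extra range buys in an unprotected left, here's the time budget before cross traffic arrives (the cross-traffic speed is an assumed number for illustration, not from the video):

```python
def time_budget(range_m, cross_speed_kmh):
    """Seconds until cross traffic first visible at range_m reaches the intersection."""
    return range_m / (cross_speed_kmh / 3.6)

# With cross traffic at an assumed 70 km/h:
#   80 m of camera range buys ~4.1 s to clear the intersection,
#   120-190 m buys ~6.2 to ~9.8 s.
t_quoted = time_budget(80, 70)
t_measured_low = time_budget(120, 70)
t_measured_high = time_budget(190, 70)
```

Whether ~4 s versus ~6-10 s is enough obviously depends on how fast the car commits to and completes the turn.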
Chuck often states that he feels the car is creeping too far, but every time he says that, the drone footage shows otherwise. The car is usually at least a foot outside the cross lane. It's hard to judge from the driver's seat because the curbs flare out so much.
Yeah, I observed that in the past also. There was a previous thread that discussed the stalkless Teslas and the need to reverse quickly to avoid traffic hitting them when creeping forward at a blind intersection (this is manual driving, not related to FSD), and I suspect the people who find that necessary likely do so because they feel they have already crept into traffic, when that isn't necessarily the case.
 
He did mention it was only light to medium traffic due to the holidays (so perhaps more testing needed in heavier traffic)

I think light to medium traffic would be best to demonstrate that the sensor placement is adequate to conquer obstructed UPLs.

In heavy traffic, traffic might actually be slower or backed up, so it's more of a planner issue vs sensor placement.
 
Do you remember what video / videos you're referring to wrt interventions? In the past, there wasn't a consensus wrt intervention vs disengagement. I've been watching his videos since the very beginning.
No, I can't find a bookmark. I can picture the intersection, which was in an open area (no buildings) and on a bit of a curve. Something caught my eye in the sped-up version so I went back. At 0.25x speed I could barely see his hands quickly move to cancel and restart FSD. In freeze frame mode you could see the planned path go awry and the blue steering wheel blink off for a couple frames when he intervened. I then slowed down some other areas which looked a bit tricky and saw him do the accelerator thing. At that point I stopped watching and just wrote him off.

As for conegate, that was a fail of epic proportions, so it makes sense for the wide publicity.
Ironically, it was the human remote monitor who screwed up and triggered the whole debacle.
 
This discussion started when I mentioned that CV has won because you don't need LIDAR to achieve Cruise / Waymo levels of L4 performance / safety in certain geofenced areas. FSDb version 11.4.2 proves that for me.
How does FSDb "prove that"? 🤣 Functionality isn't the same as reliability. FSDb is about the same quality as where Waymo was 6-7 years ago. It's hard work getting reliability up. Given the breakthroughs (that Google has made) in ML, Tesla might get there in 3-4 years, but not on vision alone in a comparable ODD.

I'm posting this after my first-ever disengagement-free drive with it from work to home during rush hour, two minor accel pushes, one to overtake a backed up left turn lane, and another to add 5mph going too slow changing into a backed up highway lane.
You realize you need thousands of drives IN A ROW without interventions/disengagements, plus good ride quality, to have a viable product? Just getting functional summon and reverse summon to/from drop-off will take Tesla a year, then we have ingress/egress. Then we have hand signals, emergency vehicles, etc. etc. etc.
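To make the "thousands in a row" point concrete (the per-drive reliability numbers below are made up for illustration):

```python
def p_streak(per_drive_success, n_drives):
    """Probability of n consecutive drives with no intervention,
    assuming independent drives with the same success rate."""
    return per_drive_success ** n_drives

# Even a system that completes 99% of drives cleanly almost never
# strings together 1,000 in a row:
p = p_streak(0.99, 1000)       # ≈ 4.3e-5
# You need ~99.999% per-drive reliability before a 1,000-drive
# clean streak becomes the expected outcome:
q = p_streak(0.99999, 1000)    # ≈ 0.99
```

This is why a handful of disengagement-free drives says very little: the gap between "usually works" and "product" is several orders of magnitude of reliability.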

L4 in 12 months, you say? I lean towards 12 years for vision-only systems to get to L4, and with highly limited ODDs, given where things currently are and the rate of progress. A few breakthroughs in research might change that, of course.

But then the question remains: if you can achieve higher reliability and a wider ODD with more cheap sensors, why wouldn't you? The only thing that is certain is that hardware cost will keep dropping like a stone.
 
So are you saying Tesla's functionality is similar to Waymo, but Tesla isn't as reliable?
That is clearly not what they are saying. A Tesla cannot pick you up, take you to your destination, and drive away after you get out. The two might have similar functionality in terms of software controlling steering and braking, but the reliability to do so without a driver in the driver's seat, which is what defines an L4 ADS, is not the same. Tesla has not achieved that yet; they are still at L2.
 
So are you saying Tesla's functionality is similar to Waymo, but Tesla isn't as reliable?
No. But Tesla is a helluva lot closer in functionality than reliability, and Waymo has spent the last 6 years mostly improving reliability and specific scenarios such as construction zones. And reliability is gained through redundant systems, multiple sensing modalities, HD maps, sim, and lots and lots of software development solely focused on that.

Reliability is the only metric that gets you from "a neat demo" or "decent driver assist" to L3+ autonomy.

Tesla is building a wide-ODD Level 2 system. It's unclear to me if it's useful. Personally I'd pick highway-only L3 any day of the week over what Tesla offers as L2 if I have to babysit it all the time. L3+ frees up time. It's the killer app.
 
No. But Tesla is a helluva lot closer in functionality than reliability, and Waymo has spent the last 6 years mostly improving reliability and specific scenarios such as construction zones. And reliability is gained through redundant systems, multiple sensing modalities, HD maps, sim, and lots and lots of software development solely focused on that.

Reliability is the only metric that gets you from "a neat demo" or "decent driver assist" to L3+ autonomy.

Tesla is building a wide-ODD Level 2 system. It's unclear to me if it's useful. Personally I'd pick highway-only L3 any day of the week over what Tesla offers as L2 if I have to babysit it all the time. L3+ frees up time. It's the killer app.

What specific reliability issues do you notice with 11.4.2? (Preferably related to deficiencies in camera vision)