
Tesla FSD Beta Release 10.5 2021.36.8.8 - 10.5.1?

I did Waymo in AZ a few months back. It uses radar and lidar. It works. It doesn't solve Tesla's problem, but it does solve a real-world autonomy problem very well, without a driver.

You can drive a Tesla everywhere in the US with FSD beta, but always with a driver.
These arguments are never ending - we are comparing two separate dimensions - geography and features. They can’t be compared to figure out who is ahead or better now.
 
These arguments are never ending - we are comparing two separate dimensions - geography and features. They can’t be compared to figure out who is ahead or better now.
Not comparing who's ahead or not. Just pointing out that it clearly works. Only the Tesla religion says radar and lidar are wrong.

The reality is radar and lidar are both perfectly serviceable for certain purposes. For stuff like phantom braking, lidar > radar > vision all the time. Using 2 or more discrete data sources is better than 1.
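Rough illustration of the idea (a toy sketch with made-up numbers and sensor names, not anyone's actual fusion code): combine two independent range estimates by inverse-variance weighting, and flag when they disagree badly enough that one of them is probably wrong.

Code:
# Toy sensor fusion: blend a radar range and a vision range by how noisy
# each is assumed to be, and flag disagreement. All values are hypothetical.
def fuse_ranges(r_radar, var_radar, r_vision, var_vision):
    w_radar = 1.0 / var_radar
    w_vision = 1.0 / var_vision
    fused = (w_radar * r_radar + w_vision * r_vision) / (w_radar + w_vision)
    # Disagreement check: gap bigger than ~3 sigma of the combined noise
    disagree = abs(r_radar - r_vision) > 3.0 * (var_radar + var_vision) ** 0.5
    return fused, disagree

# 52 m from radar (low noise) vs 55 m from vision (noisier) -> ~52.2 m, no flag
print(fuse_ranges(52.0, 0.25, 55.0, 4.0))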
 
Oddly convenient that the removal of radar and the push for pure vision came at a time when there was a radar shortage. Meanwhile at that time S/X cars continued to get radar installed.

FWIW, Elon had mentioned the system would be vision based years before they actually removed radar.

It's possible they accelerated the removal somewhat based on parts availability, but it's not like they just decided earlier this year, "Hey, we can't get enough radar, guess we'll just decide vision is fine."




The more points of truth that exist for the vehicle to make a decision, the more confident it will be.

Unless those "truths" disagree of course.


This is also why there are redundant systems internally in HW3, to basically seek some level of agreement or failover to whatever the tiebreaker needs to be.

No such thing exists in HW3.

It's physically redundant, in that there are 2 largely identical SoCs, so in theory you COULD run software on one that can fail over to the other.

But the software doesn't actually work that way.

Nor is there any sort of "agreement" between cores, or tiebreaking, because again they're not running anything redundantly in software.


It's one of the reasons we can be pretty sure that we'll need HW4 (at a minimum) before we see anything from Tesla released at >L2.



Lidar and radar are superior to vision-only for obstacle detection in 2021. Prove me wrong.

They're superior in a way that's not at all needed for self driving. Ever. (And COULD be counter-productive, an argument Andrej Karpathy makes in some detail, with data to demonstrate it, during AI Day.) ...Prove me wrong :)

Ultimately you don't need mm precise measurements to drive a car. Humans sure don't have that kind of precision when they drive.

Even if all you have is vision-as-good-as-a-human but able to look in all directions at the same time, you ought to be able to achieve better-than-human self driving once you get the SW up to speed.

And it's a helluva lot easier and cheaper to scale that too.


Not comparing who's ahead or not. Just pointing out that it clearly works.


As long as you never need to go anywhere outside the tiny Waymo service area in a tiny suburb in AZ.

Which is a somewhat narrow definition of "works"
 
And back to FSD 10.5!
My experience with FSD 10.3, .4 and .5 has me believing that each version is showing improvement.
Yes, some phantom braking is still in 10.5, but not nearly as bad as 10.4. My experience is that 10.5 no longer unexpectedly "dives" into right or left turn lanes as they appear along the two-lane highways, which my wife greatly appreciates.
Since I drive many rural unmarked roads, I also appreciate 10.5 not driving down the middle. Sure, driving down the middle is safer from brush overgrowing the side of the road, and gives more reaction time for the wildlife darting across the road, but oncoming traffic gets a little uneasy at my car moving over at the last minute.
 
No such thing exists in HW3.

It's physically redundant, in that there are 2 largely identical SoCs, so in theory you COULD run software on one that can fail over to the other.

But the software doesn't actually work that way.

Nor is there any sort of "agreement" between cores, or tiebreaking, because again they're not running anything redundantly in software.
Thanks... this is interesting to me. Typically you see redundant architectures work in parallel, and sometimes by consensus. You see that esp. at the chip level, as EPEs become more relevant with increasing chip densities. It also seems odd that the B side is asleep until the A side fails, esp. since A and B are both on the same board. I'd want them separated. Will read up more... I had made an incorrect assumption here.

I wonder if not having A and B powered simultaneously can be allowed for L4/L5.
 
They're superior in a way that's not at all needed for self driving. Ever. (And COULD be counter-productive, an argument Andrej Karpathy makes in some detail, with data to demonstrate it, during AI Day.) ...Prove me wrong :)

Ultimately you don't need mm precise measurements to drive a car. Humans sure don't have that kind of precision when they drive.

Even if all you have is vision-as-good-as-a-human but able to look in all directions at the same time, you ought to be able to achieve better-than-human self driving once you get the SW up to speed.

And it's a helluva lot easier and cheaper to scale that too.
I agree with this long term, but it assumes near and far depth can both be computed well by vision. Karpathy demonstrates some very nice depth mapping of a car's immediate surroundings, but I remain unconvinced for long range, where there is only 1 camera, which previously could lean on the point cloud from radar.

What I do know first hand is vision trips out on shadows often where radar did not... there has to be some sort of ground truth from radar that vision is still incapable of providing (today). Obviously the NNs get better, and shadows become de-risked.

2021 is almost over... I hope 2022 is the year of vision. I just don't think vision is there yet in 2021 to declare victory.
 
Thanks... this is interesting to me. Typically you see redundant architectures work in parallel, and sometimes by consensus. You see that esp. at the chip level, as EPEs become more relevant with increasing chip densities. It also seems odd that the B side is asleep until the A side fails, esp. since A and B are both on the same board. I'd want them separated. Will read up more... I had made an incorrect assumption here.

I wonder if not having A and B powered simultaneously can be allowed for L4/L5.


FWIW, at autonomy day when touting the redundancy Tesla certainly made it SOUND like they intended to do this- running each node independently for redundancy... (remarks about comparison/tie breaking never made much sense since there's only 2 nodes, you can't really "break" a 1-1 tie with 2 nodes).

But in reality they never did much with that.

Initially node B did nothing at all. It was powered but running nothing.

In late 2019 it began to run a copy of the AP stack, but it just sat there.

Not too long after, and through today, all the actively-running code is split across both nodes, because they long since (early/mid 2020 I think?) ran out of enough compute on a single node to run everything needed. So even though the system was never designed/intended to share compute between nodes, they're doing it anyway because they have no choice.


Some folks think Tesla will magically get everything back into a single node by "optimization", even though the existing code keeps getting bigger and more complex as they add/improve it, and they clearly still have significant additional capability yet to be added. I don't think that's at all realistic: the performance penalty for crossing nodes is significant, so if there were any obvious way to NOT do it they'd have done that in the first place. Which is why I think we'll need (at least) HW4 to have enough compute to redundantly operate a software stack Tesla is willing to certify as >L2.


The bigger issue is: nobody knows how much compute is actually needed to do autonomy. Tesla once thought HW2 was enough. Then they thought 2.5 was. Then they thought 3 was. They've no idea, and neither does anyone else, since nobody has done it yet.

HW4 could be enough. Or it might not. They won't know till they get there. And when they do they need to hope "enough" is contained in a single node so they can have failover.
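For reference, the failover pattern people assumed HW3 would use is just classic active/standby: both nodes run the full stack, only one drives, and the standby takes over if the active one dies. A minimal sketch of that idea (hypothetical, illustrative only, not Tesla's actual software):

Code:
# Active/standby failover sketch. Everything here is a stand-in.
class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def compute_controls(self, sensor_frame):
        # Stand-in for running the ENTIRE driving stack on this one node.
        return {"steer": 0.0, "accel": 0.0, "source": self.name}

def step(active, standby, sensor_frame):
    # This only works if one node alone can run everything; if the stack
    # must be split across both nodes, there is nothing left to fail over to.
    if not active.healthy:
        active, standby = standby, active  # promote the standby
    return active.compute_controls(sensor_frame), active, standby

node_a, node_b = Node("A"), Node("B")
node_a.healthy = False
controls, node_a, node_b = step(node_a, node_b, sensor_frame={})
print(controls)  # {'steer': 0.0, 'accel': 0.0, 'source': 'B'}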
 
I agree with this long term, but it assumes near and far depth can both be computed well by vision. Karpathy demonstrates some very nice depth mapping of a car's immediate surroundings, but I remain unconvinced for long range, where there is only 1 camera, which previously could lean on the point cloud from radar.


FWIW the stuff from @verygreen showed the vision only depth mapping ahead only operated out to about 60 meters.

Which coincidentally is roughly the range of the shortest forward cam. So if his idea is right that they're specifically using that and the main cam for this (and not the longest-range narrow cam), it would imply they're using 2 cameras for accuracy. (And IIRC he compared some speed/distance values to radar data too, finding that the vision results didn't match radar 100%, but were pretty close; certainly close enough for driving, where you shouldn't ever need mm-level accuracy anyway since you should never get that near another vehicle.)
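If you want a feel for why two overlapping cameras help and why accuracy falls off with range, the textbook two-view triangulation model is enough (this is NOT how Tesla's NN depth actually works, and the camera numbers below are invented): depth is focal_length * baseline / disparity, and a fixed pixel error in disparity turns into a depth error that grows with the square of the distance.

Code:
# Textbook stereo/triangulation model, purely illustrative.
# focal_px, baseline_m and the 0.5 px disparity error are made-up numbers,
# not the real Autopilot camera parameters.
def stereo_depth(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px  # Z = f * B / d

def depth_error(focal_px, baseline_m, depth_m, disparity_err_px=0.5):
    # dZ ~ Z^2 / (f * B) * d(disparity): error grows with distance squared
    return (depth_m ** 2) / (focal_px * baseline_m) * disparity_err_px

for z in (10, 30, 60, 100):
    err = depth_error(focal_px=1000, baseline_m=0.15, depth_m=z)
    print(f"{z} m -> ~{err:.1f} m uncertainty")

With those made-up numbers the estimate is decent at 10-30 m and basically useless past 60 m or so, which at least rhymes with the ~60 m limit verygreen observed.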
 
FWIW the stuff from @verygreen showed the vision only depth mapping ahead only operated out to about 60 meters.

Which coincidentally is roughly the range of the shortest forward cam. So if his idea is right that they're specifically using that and the main cam for this (and not the longest-range narrow cam), it would imply they're using 2 cameras for accuracy. (And IIRC he compared some speed/distance values to radar data too, finding that the vision results didn't match radar 100%, but were pretty close; certainly close enough for driving, where you shouldn't ever need mm-level accuracy anyway since you should never get that near another vehicle.)
Wasn't that the voxel stuff? That's obtaining 3D coordinates for unclassified pixels, which would be good for avoiding obstructions, and for that 60 meters is probably adequate. It says nothing about the measurements for classified objects.
 
FWIW, at autonomy day when touting the redundancy Tesla certainly made it SOUND like they intended to do this- running each node independently for redundancy... (remarks about comparison/tie breaking never made much sense since there's only 2 nodes, you can't really "break" a 1-1 tie with 2 nodes).
They provided a little more detail on how the two nodes are used in the 2021 Autonomy day:


Stating that only one SOC has control of the car at a time, with the other providing an extension of compute, but that it can switch which one is controlling the car if necessary:

[Attached: two slides from the presentation]
 
(remarks about comparison/tie breaking never made much sense since there's only 2 nodes, you can't really "break" a 1-1 tie with 2 nodes)


It's almost like you need an odd number of nodes (>1) to have consensus in highly available systems or something...

TBH though for "L2" systems where a human can take over, non-agreement just means AP could freak out (like it already does all too frequently) and let the human break the tie.
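To make the odd-number point concrete (generic HA pattern, nothing Tesla-specific), a toy majority vote looks like:

Code:
# Toy majority vote: with 2 nodes any disagreement is a 1-1 tie;
# with 3 nodes a single faulty node gets out-voted. Generic illustration only.
from collections import Counter

def majority(outputs):
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None  # None = no consensus

print(majority(["brake", "go"]))            # None: 1-1 tie, no winner
print(majority(["brake", "go", "brake"]))   # 'brake': the odd node out loses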

Not too long after, and through today, all the actively-running code is split across both nodes, because they long since (early/mid 2020 I think?) ran out of enough compute on a single node to run everything needed.

Any links to this? I found this tweet from green pointing to the primary/failover pattern being used, but not anything pointing at them being used as a 2-node cluster...
 
It's almost like you need an odd number of nodes (>1) to have consensus in highly available systems or something...

TBH though for "L2" systems where a human can take over, non-agreement just means AP could freak out (like it already does all too frequently) and let the human break the tie.



Any links to this? I found this tweet from green pointing to the primary/failover pattern being used, but not anything pointing at them being used as a 2-node cluster...


Tons and tons of stuff about them needing to steal compute from node B, going back to only a few months after they first began using node B for anything at all.

examples:

 
They provided a little more detail on how the two nodes are used in the 2021 Autonomy day:


Stating that only one SOC has control of the car at a time, with the other providing an extension of compute, but that it can switch which one is controlling the car if necessary:


The explanation doesn't resolve anything though... if you need compute running on both nodes to operate, one node failing means your system can't operate.

Even if only 1 node is sending commands to the car, it can't keep operating without enough compute to run everything.

It could of course have some super-dumb fail mode with vastly limited functionality for when the other node crashes, but it couldn't keep operating normally, which is what was ORIGINALLY intended with the dual-node design and would absolutely be needed for robotaxis.
 
I've seen no evidence it has anything to do with HW3, and more likely has to do with the main forward camera having less range than radar does.... (150m vs 160m in this case).
The evidence I experienced many times now in testing, and witnessed in many YT videos from other beta testers across the nation, is that when FSD decides it's OK to enter an intersection to make a left turn or go straight through, it will stop even when there is no traffic anywhere. Sometimes it stops for up to 15 seconds, other times it stops for 3 seconds for no apparent reason, like it is crunching lots of data from the cameras and running through a long list of scenarios before proceeding. Sometimes I witness it not proceeding until the car is in danger of a T-bone collision; other times I have intervened because the FSD computer failed to make the decision to just go through the intersection as any normal driver would.
Obviously no one here knows what is going on inside that processor; we just observe and act to avoid an accident. But I, like many other beta testers, have an opinion based on observation. Mine is that the FSD computer we have is about tapped out and can't make decisions fast enough to perform as needed.

My second reason is that if Tesla engineers really believed HW3.0 is all they need to do FSD as everyone expects, then why would they be spending so much time and money developing a faster HW4.0 hardware set?
The bigger issue is: nobody knows how much compute is actually needed to do autonomy. Tesla once thought HW2 was enough. Then they thought 2.5 was. Then they thought 3 was. They've no idea, and neither does anyone else, since nobody has done it yet.
Exactly!
 
The evidence I experienced many times now in testing, and witnessed in many YT videos from other beta testers across the nation, is that when FSD decides it's OK to enter an intersection to make a left turn or go straight through, it will stop even when there is no traffic anywhere. Sometimes it stops for up to 15 seconds, other times it stops for 3 seconds for no apparent reason, like it is crunching lots of data from the cameras and running through a long list of scenarios before proceeding.
The funny thing is, when it's already moving, it doesn't slow down all that much when taking right turns.
 
The funny thing is, when it's already moving, it doesn't slow down all that much when taking right turns.
Often that is true, and in some cases it has not steered hard enough on a 90° turn and ended up on the wrong side of the road. If a car were coming in the opposite direction, that would result in a head-on collision. I have two hard right turns where the road is 45 mph and the turn is clearly marked as 20 mph, but FSD has ignored that every time. Bends in the road are handled much better. In a couple of test runs the car slammed on the brakes when it couldn't see the road continuing ahead. This one is a blind hard right turn.
 
The evidence I experienced many times now in testing, and witnessed in many YT videos from other beta testers across the nation, is that when FSD decides it's OK to enter an intersection to make a left turn or go straight through, it will stop even when there is no traffic anywhere. Sometimes it stops for up to 15 seconds, other times it stops for 3 seconds for no apparent reason, like it is crunching lots of data from the cameras and running through a long list of scenarios before proceeding. Sometimes I witness it not proceeding until the car is in danger of a T-bone collision; other times I have intervened because the FSD computer failed to make the decision to just go through the intersection as any normal driver would.
Obviously no one here knows what is going on inside that processor; we just observe and act to avoid an accident. But I, like many other beta testers, have an opinion based on observation. Mine is that the FSD computer we have is about tapped out and can't make decisions fast enough to perform as needed.
There's a great song lyric by Rush: "If you choose not to decide, you still have made a choice!" The FSD computer (or the engineers who programmed it) doesn't seem to have quite internalized this. The car seems to have a "safe mode", which is usually to act timidly and do nothing, or brake to a halt, without taking into account that in many contexts this "safe mode" is actively _more_ dangerous than continuing with less than perfect certainty, e.g. stopping in the middle of an intersection when the light is red, or hard phantom braking when another car is on your tail. I strongly wish that the car would go into "safe mode" less often, and/or just beep to alert the driver when there's too much uncertainty, rather than safing. (The beeped-at driver can do the safing, and Tesla gets a lot more useful training data this way.) Perhaps 10.6+ will go more in this direction.
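The behavior I'm asking for is roughly this kind of policy (just a sketch of the idea; the single scalar "confidence" and the thresholds are hypothetical, and nobody outside Tesla knows how the planner actually gates on uncertainty):

Code:
# Sketch of "beep instead of safing". Thresholds and a single confidence
# number are hypothetical simplifications, not Tesla's actual logic.
def choose_action(confidence, stopping_here_is_risky):
    if confidence > 0.9:
        return "proceed"
    if confidence > 0.6 or stopping_here_is_risky:
        return "proceed + beep for driver attention"  # let the human break the tie
    return "come to a safe stop"

print(choose_action(0.7, stopping_here_is_risky=False))
print(choose_action(0.4, stopping_here_is_risky=True))  # e.g. mid-intersection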

In the medium term, the true tipping point will be when FSD is able to reduce the driver's cognitive load for the same level of safety. This is already quite true on the highway for Autopilot/NOA, but I expect it will be several more years before FSD reaches this point in the city for the typical driver. 10.x dramatically increases my cognitive load relative to driving manually; I have to watch it like a hawk! It may also reach an "uncanny valley" where it's consistent enough to lull the driver into a false sense of complacency, but inconsistent enough for that complacency to be dangerous. (This will probably and unfortunately have to be overcome with nags.) Autopilot/NOA is squarely in this uncanny valley right now, but I expect NOA may reach L4 consistency on the highway in a few years; 2025 or so. City FSD may reach the crucial tipping point of L2 usefulness by 2025 as well, but I don't expect it to reach L4 caliber til 2030 at the earliest. City L5, non-geofenced, I'm not expecting til 2035 or so. I hope I'm wrong!
 