The only time I get PBs galore is on hot days, on long stretches of road with little to no cloud cover... that's when mirages appear over the asphalt. Unfortunately, those conditions are common in half of the US....

It works better to follow another car fairly closely, which anchors the perception. I have no PB problems, but I'm using the FSDb stack and driving on crowded SoCal urban freeways.

Of course it should have radar for these situations (just as it should have a rain sensor).
 
100% agree on radar. There's little to no traffic on wide-open highways in Texas outside of major cities… so there's no “reference point” for the vision-only system, like a car to follow. Radar obviously isn't fooled by mirages/reflections; that's why planes use weather radar and military jets use radar… to detect objects rather than relying on cameras alone…
 
Given that the FSDb stack is now neural-net based rather than hard-coded, what's lacking isn't radar; it's video frames of similar environments/circumstances.

The large majority of footage for FSDb to learn from comes from where lots of Tesla cars are in operation.
That's NOT the open plains of Texas. It's largely metropolitan areas.

Radar doesn't matter as much anymore; the neural net learns from the previous experience of Tesla cars.

According to the recently published book about Elon:
the FSD development engineering team perceives that the NN really only starts to exhibit results once 1-1.5 million miles of footage have been absorbed.
I doubt that much has been available from geographies like west Texas.
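For a sense of scale, here's a quick back-of-envelope on what 1-1.5 million miles of footage represents (the average speed and hours-per-day figures are my own assumptions, purely illustrative):

```python
# Back-of-envelope: how much driving 1-1.5 million miles of footage is.
# Assumptions (mine, not from the book): 65 mph average highway speed,
# ~1.5 hours of driving per car per day.

for miles in (1_000_000, 1_500_000):
    hours = miles / 65                    # hours of driving at 65 mph
    car_years = hours / (1.5 * 365)       # car-years at 1.5 h/day per car
    print(f"{miles:>9,} miles ≈ {hours:,.0f} hours of video "
          f"≈ {car_years:,.0f} car-years")
```

That works out to roughly 28-42 car-years of driving, so a road type only crosses the threshold once a decent fleet regularly passes through it and its footage actually gets uploaded.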
 
100% agree on radar. There's little to no traffic on wide-open highways in Texas outside of major cities… so there's no “reference point” for the vision-only system, like a car to follow. Radar obviously isn't fooled by mirages/reflections; that's why planes use weather radar and military jets use radar… to detect objects rather than relying on cameras alone…
Ummm… you need to educate yourself on how and why radar is used in the applications you listed, as well as its limitations.
 
There's little to no traffic on wide-open highways in Texas outside of major cities… so there's no “reference point” for the vision-only system, like a car to follow.

Reminds me of a Canadian stand-up comedian's joke: "How flat and unchanging are the prairies? I was driving across Saskatchewan and I ran out of gas. It took me 20 minutes to notice my car wasn't moving!"
 
Given that the FSDb stack is now neural-net based rather than hard-coded, what's lacking isn't radar; it's video frames of similar environments/circumstances.

The large majority of footage for FSDb to learn from comes from where lots of Tesla cars are in operation.
That's NOT the open plains of Texas. It's largely metropolitan areas.

Radar doesn't matter as much anymore; the neural net learns from the previous experience of Tesla cars.

According to the recently published book about Elon:
the FSD development engineering team perceives that the NN really only starts to exhibit results once 1-1.5 million miles of footage have been absorbed.
I doubt that much has been available from geographies like west Texas.
Dallas - Ft. Worth - Amarillo - Clayton is the *main* corridor from Dallas - Ft. Worth to Denver. Hundreds of Teslas drive that stretch weekly. Millions of people on each end. If that doesn't provide sufficient data... then the NN won't work.
 
1-1.5 million miles of video footage are needed before real progress, according to the FSD development engineering team, as reported in Walter Isaacson's book on Elon.
 
So how is this different, then, from the pre-mapped interstates/highways in BlueCruise/SuperCruise? Aka... it only applies to very commonly traveled interstates/highways?
The difference is that BlueCruise is a hard-coded stack requiring significantly larger compute power, an approach Tesla abandoned in favor of NN learning.
In the short term, a meaningless difference.
Over the longer term it will be increasingly significant, both in terms of driving competence and user confidence, and in terms of compute cost per vehicle.

Of course, both products depend on competent execution by GM and Tesla. A bumpy road for sure.
 
Dallas - Ft. Worth - Amarillo - Clayton is the *main* corridor from Dallas - Ft. Worth to Denver. Hundreds of Teslas drive that stretch weekly. Millions of people on each end. If that doesn't provide sufficient data... then the NN won't work.
Part of the problem is that, at this point, computers can't infer. A human driving that stretch of road can see the mirage and recognize it as such, overriding visual input with cognitive processing, whereas a computer takes it at face value. I'm sure people have looked at this issue (and not just at Tesla), but I have no idea what the current best approach to dealing with it is.

The other problem for a carmaker is outcome severity. Phantom braking is bad, but not braking and having an accident is worse. If a human gets in an accident because of a mirage, people say 'oh, yeah, I can see how that could happen.' If FSD (or Blue Cruise, or…) gets in an accident, people say "how can that happen?!?! Tesla is recklessly putting cars on the road! They need to be recalled NOW!" Not to mention that any nuance that may have been happening 'under the hood' as the code tried to distinguish mirage from reality never gets reported in the press.
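For what it's worth, one naive way to encode the inference a human makes is a temporal consistency check: a real stopped car closes distance as you approach, while a mirage keeps receding. A toy sketch of that idea (entirely hypothetical, not anything Tesla is known to do):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float    # estimated range to the "object"
    timestamp_s: float

def looks_like_mirage(track: list, ego_speed_mps: float) -> bool:
    """Toy heuristic: if we covered a lot of road but the 'object' is
    still about as far away as before, it is receding along with us --
    the signature of a mirage, not of a stopped car."""
    if len(track) < 2:
        return False                      # not enough history to judge
    first, last = track[0], track[-1]
    road_covered = ego_speed_mps * (last.timestamp_s - first.timestamp_s)
    closure = first.distance_m - last.distance_m
    # A real stationary obstacle should close at roughly ego speed.
    return road_covered > 20 and closure < 0.5 * road_covered

# 3 seconds at 30 m/s covers 90 m, yet the "object" is still ~150 m out:
track = [Detection(150.0, 0.0), Detection(148.0, 3.0)]
print(looks_like_mirage(track, 30.0))     # True -> candidate to suppress
```

Of course, the hard part is that the same logic must never suppress a real car pulling away from you at nearly your own speed, which is exactly the severity trade-off described above.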
 
Maybe don't disable/remove the radar sensors.... Again, radar and lidar aren't fooled by optical illusions. The amount of excuses folks make here for Tesla going "vision only" is stunning...
 
Not making excuses for vision only - I've read that part of the problem was difficulty fusing the two sensing modalities. I've also read that that shouldn't be an issue. I'm not qualified to say which is correct, but I can say that radar wouldn't necessarily fix the problem, because they'd still have to decide whether the camera had a false positive or the radar had a false negative.
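To make that dilemma concrete, here's a toy decision table (made-up logic, no relation to any real stack). The point is that fusion doesn't remove the judgment call; it just relocates it to the disagreement cases:

```python
def brake_decision(camera_sees_obstacle: bool, radar_sees_obstacle: bool) -> str:
    """Toy arbitration between camera and radar; illustrative only."""
    if camera_sees_obstacle and radar_sees_obstacle:
        return "brake"            # agreement: easy case
    if not camera_sees_obstacle and not radar_sees_obstacle:
        return "continue"         # agreement: easy case
    if camera_sees_obstacle:      # camera yes, radar no
        # Mirage? Or a real obstacle radar returns poorly (low, flat debris)?
        return "trust radar? -> risk missing a real obstacle"
    # Radar yes, camera no: overhead sign? Stopped car the camera missed?
    return "trust camera? -> risk ignoring a real radar return"

for cam in (True, False):
    for rad in (True, False):
        print(f"camera={cam!s:<5} radar={rad!s:<5} -> {brake_decision(cam, rad)}")
```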
What? A mirage is a visual phenomenon. Train the network to ignore that visual and Bob's Your Uncle.
Train the camera to ignore what it sees... got it. Except I don't have an uncle named Bob.
 
Train the camera to ignore what it sees... got it.
This sounds like a troll, but I can't guess at your tone, so I'll try again. Trees are ignored. Buildings are ignored. Marks on the road are ignored. Ideally, leaves blowing across the road should be ignored. Heck, even potholes are ignored. I see no reason why something else that is of no consequence shouldn't be ignored.

Except I don't have an uncle named Bob.
As soon as you understand the point, you will.
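Concretely, "training it to ignore" something isn't a rule anyone writes; it just means the training labels mark those frames as nothing-to-brake-for. A made-up miniature of the labeling idea (classes and filenames are hypothetical):

```python
from collections import Counter

# Hypothetical label set: "ignoring" a mirage just means enough mirage
# frames are labeled as plain drivable road. No explicit rule exists.
LABELS = {0: "drivable", 1: "vehicle", 2: "pedestrian"}

training_set = [
    ("frame_tree_beside_road.png", 0),   # tree: labeled drivable (ignored)
    ("frame_heat_mirage.png",      0),   # mirage: labeled drivable (ignored)
    ("frame_stopped_truck.png",    1),   # real obstacle: labeled vehicle
]

print(Counter(LABELS[y] for _, y in training_set))
# Counter({'drivable': 2, 'vehicle': 1})
```

Which loops back to the earlier point about training data: the net can only stop reacting to mirages if enough footage of them from hot, empty highways exists in the first place.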
 
Reminds me of a Canadian stand-up comedian's joke: "How flat and unchanging are the prairies? I was driving across Saskatchewan and I ran out of gas. It took me 20 minutes to notice my car wasn't moving!"
Funny, but that is exactly the situation that triggers phantom braking, as if the car were checking that it is actually moving and not just stopped somewhere.
 
Given that the FSDb stack is now neural-net based rather than hard-coded, what's lacking isn't radar; it's video frames of similar environments/circumstances.
Potentially, but radar is still better at eliminating these false positives.

Also, the existing HW3 cameras are not very good: human eyes at the fovea are significantly better, with better autofocus and color perception and less glare sensitivity.

Straight, vertically undulating roads with no cars also trigger the errors, just like mirages do. The camera resolution isn't good enough far down the road; humans can see there's nothing there, and use their brains to know there is no hidden car stopped in the dip.

The large majority of footage for FSDb to learn from comes from where lots of Tesla cars are in operation.
That's NOT the open plains of Texas. It's largely metropolitan areas.
And this is why even ML-heavy autonomous-driving companies still want additional sensors. HW4 has better cameras too, but more bits from the cameras means a much higher compute burden, requiring more expensive chips and higher power consumption.
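Rough numbers on that compute burden (camera count, resolutions, and frame rate below are assumptions based on public teardown reports, not official specs):

```python
# Rough pixel-throughput comparison of the two camera suites.
# Assumed: 8 exterior cameras, ~36 fps, ~1.2 MP (HW3) vs ~5 MP (HW4).
CAMERAS = 8
FPS = 36

for hw, megapixels in (("HW3", 1.2), ("HW4", 5.0)):
    gigapixels_per_s = megapixels * 1e6 * CAMERAS * FPS / 1e9
    print(f"{hw}: {gigapixels_per_s:.2f} Gpix/s into the vision stack")
# HW3: 0.35 Gpix/s vs HW4: 1.44 Gpix/s -- roughly 4x the raw pixels,
# hence the more expensive chips and higher power draw.
```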
 