HW3 can make Chuck's UPL. That means the camera sensors and resolution are adequate for a significant majority of US roads imo.

In the cases where it fails Chuck's UPL, you can still see the oncoming cars on the visualization. This means the NN can see the cars but perhaps can't figure out how they relate to the road geometry given their velocity.
 
  • Like
Reactions: APotatoGod
HW3 can make Chuck's UPL. That means the camera sensors and resolution are adequate for a significant majority of US roads imo.
No it can’t. It’s extremely well documented in the other thread that it has never been able to.

Obviously, being able to do the turn one or many times is not sufficient for determining adequacy of anything.
In the cases where it fails Chuck's UPL
This contradicts the first sentence in your post.
 
But that was never the problem anyway. It was figuring out if the stationary object it thought you were coming toward was relevant to your path. See phantom braking for overpasses and signs as but one example. Most makers had to semi-manually set lots of objects to "ignore" to mitigate this and AFAIK nobody has really "solved" it.
Are you arguing that CV has the same levels of recall and precision as Lidar? That Lidar or imaging mmW radar has a problem with stationary objects? Either way I call BS on both.
 
Last edited:
  • Like
Reactions: beachmiles
No it can’t. It’s extremely well documented in the other thread that it has never been able to.

Obviously, being able to do the turn one or many times is not sufficient for determining adequacy of anything.

This contradicts the first sentence in your post.

Yes, it can and has. Nothing in my post contradicts.

It seems people don't understand that FSDb's NN has to reason about the entirety of the road structure and semantics of NA. Tesla hit a limit on how much of that autolabeled training information V11's NN can store (on HW3). It seems it's not possible to have V11's NN generalize the entire road structure and semantics of NA. This, along with all the human-programmed heuristics, can't account for all the different behaviors across locales.
 
Last edited:
  • Disagree
Reactions: AlanSubie4Life
Yes, it can and has. Nothing in my post contradicts.

It seems people don't understand that FSDb's NN has to reason about the entirety of the road structure and semantics of NA. Tesla hit a limit on how much of that autolabeled training information V11's NN can store. It seems it's not possible to have V11's NN generalize the entire road structure and semantics of NA. This, along with all the human-programmed heuristics, can't account for all the different behaviors across locales.
It's complete BS that the car can reliably make that turn even 95% of the time, even after Tesla specifically trained and tested for that specific turn. For autonomy you need at least six orders of magnitude more reliability.
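To put the "six orders of magnitude" in concrete terms (this is just arithmetic on the quoted figures, not measured failure data):

```python
# Just arithmetic on the "95%" and "six orders of magnitude" figures above,
# not measured failure data.
current_failure_rate = 1 - 0.95                    # 1 blown attempt in 20
needed_failure_rate = current_failure_rate / 1e6   # six orders of magnitude better

print(f"today:  1 failure per {1 / current_failure_rate:.0f} attempts")
print(f"needed: 1 failure per {1 / needed_failure_rate:,.0f} attempts")
# roughly one blown turn in 20,000,000 tries instead of one in 20
```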
 
Given that I don't mention LIDAR or CV at all in my post, and all the things I DO mention reference problems Tesla had using radar, I'm not sure why you think I'm arguing anything at all about those two things I don't discuss even slightly there.
You started this exchange by quoting me on why I think additional sensors like Lidar and Radar are needed... I was discussing both. :rolleyes:
I don't care what performance Tesla got from their use of the Conti radar. It was never meant to be used in the way Tesla was trying to use it.

A low-grade cruise control radar is not viable for autonomy. Who would have known?
 
Last edited:
  • Like
Reactions: diplomat33
Further on in this thread was a mention of using synthetic aperture antennas for better radial resolution. Nice.. I think. If the cost can be kept down. (For those of you who aren't aware, those 1.5' x 1.5' antennas used by SpaceX for their Starlink sets consist of a lot of individual sensors whose received signals are electronically summed, with phase shifts, to create a steerable receiver pattern that can track satellites as they pass by. Much bigger versions of this thing are used by the militaries of the world to create multiple receive beams, the better to track multiple targets, all at the same time. But, once again, these are usually meant to track aerial targets without a pile of clutter, not a target sitting in and amongst other non-moving objects like bridge abutments, signs, and stopped cars.)
Fellow radar person!

I think you mixed terms here. Synthetic Aperture Radar (SAR) uses post-processing of a set of radar data taken along a path to create a virtual antenna whose width is the length of the path taken in the data set. This would only increase resolution tangential to the vehicle path.
Phased arrays like on Starlink use multiple elements to steer a beam whose beamwidth is set by the size of the array.
The Tesla Phoenix radar uses a phased array combined with multiple transmit and receive antennas to create a denser virtual phased array. They seem to be using TI chips, and TI has great app notes/background tutorials for anyone interested.
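For anyone who wants to see how the MIMO trick buys a denser virtual array, here's a rough sketch. The antenna spacings below are hypothetical (my own illustration, not the Phoenix radar's actual layout or anything from the TI app notes): each Tx/Rx pair behaves like one virtual receive element located at the sum of the two antenna positions, so 3 Tx x 4 Rx gives a 12-element virtual aperture.

```python
# Rough MIMO virtual-array sketch with hypothetical spacings (not the actual
# Phoenix layout): each Tx/Rx pair acts as a virtual element at tx_pos + rx_pos.
import numpy as np

wavelength = 3e8 / 77e9            # ~3.9 mm at 77 GHz
d = wavelength / 2                 # half-wavelength receive spacing

rx = np.arange(4) * d              # 4 receive antennas: 0, d, 2d, 3d
tx = np.arange(3) * 4 * d          # 3 transmit antennas: 0, 4d, 8d

virtual = np.sort((tx[:, None] + rx[None, :]).ravel())
print(f"{tx.size} Tx x {rx.size} Rx -> {virtual.size} virtual elements")

# Angular resolution scales with the virtual aperture (~wavelength / aperture).
aperture = virtual.max() - virtual.min()
print(f"approx. beamwidth: {np.degrees(wavelength / aperture):.1f} deg")
```

That's the sense in which a handful of physical antennas can act like a much wider receive array without the power cost of adding more transmitters.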
 
  • Informative
Reactions: beachmiles
Fellow radar person!

I think you mixed terms here. Synthetic Aperture Radar (SAR) uses post-processing of a set of radar data taken along a path to create a virtual antenna whose width is the length of the path taken in the data set. This would only increase resolution tangential to the vehicle path.
Phased arrays like on Starlink use multiple elements to steer a beam whose beamwidth is set by the size of the array.
The Tesla Phoenix radar uses a phased array combined with multiple transmit and receive antennas to create a denser virtual phased array. They seem to be using TI chips, and TI has great app notes/background tutorials for anyone interested.
Good enough; yeah, I worked on RADARs as a techie back in the (serious) long-ago days, and took a RADAR course in grad school, more or less as a fun, easy-A elective, but really haven't touched any of it since then.

Still: Synthetic Aperture Radar or not, or phased array or not: Wherever that beam gets pointed, unless said beam is exceedingly narrow, there's going to be clutter picked up in the return, right along with desired reflections from other cars, stopped or not. Which, and we have to be clear here, the RADAR has to find, as in, sweep around and look for.

So, for an argument, say we have infinite time. We take this steerable RADAR antenna and sweep it back and forth, multiple times, so we can get the vertical as well as the horizontal (we got hills, attitude of the car, etc.). Store all the reflected pulses in an array of data, then do, pretty much, image recognition on the fuzzy results. Then do it again, and again, repetitively, and, from those multiple images, do shape recognition (stopped car, right? It's got a shape..), fixed item recognition, that whole occupancy network stuff Tesla goes on about, and, from that, make driving decisions.

Never mind the processing time of the image thus constructed; we're doing that already with vision. It's the acquisition time that gives me serious pause. While what we're looking for isn't exactly pixels in the true image sense, it sure looks like that, at the maximum resolution of this steerable antenna. So, lots and lots and lots of sweeps.. that adds up. In the end, we'd like to match the 40-50 image captures per second that a vision camera can do; I fear that we're talking two or three captures per second.. and that, if true (and I'm a tyro at the modern stuff) is just Too Darned Slow.
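To put hypothetical numbers on that acquisition-time worry (these are made-up figures for a one-beam-at-a-time scan, not any particular product's specs):

```python
# Back-of-envelope frame time for a radar that dwells on one beam position at a
# time. All numbers are hypothetical, just to show how the sweeps add up.
az_beams = 60          # ~120 deg of azimuth in ~2 deg steps
el_beams = 5           # a few elevation rows for hills and vehicle pitch
pulses_per_beam = 16   # pulses integrated per pointing for usable SNR
pri = 50e-6            # pulse repetition interval, seconds

frame_time = az_beams * el_beams * pulses_per_beam * pri
print(f"frame time: {frame_time * 1e3:.0f} ms -> {1 / frame_time:.1f} frames/s")
# ~240 ms per full sweep, i.e. roughly 4 frames/s versus the 40-50 captures/s
# quoted above for the cameras.
```

The next reply's point about doing receive beam steering in post-processing is exactly what attacks this math, since one transmitted pulse can be reused for every receive angle.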

There's lots more. Many of these kinds of antennas can be rigged to have multiple beams at once; but then the power per beam goes down, and questions of "is it all going to fit?" come to mind.

Admittedly, modern ICs can have zillions of transistors on them, leading to better, faster processing capabilities that don't break the power consumption bank. Heck, I've designed some custom multi-million gate ICs before now; but even those suckers have limits.

Got a reference for somebody doing more than tracking a target, a bit better than, say, the systems that were emplaced in Teslas back in 2018 or so? Because this idea, kind of an improved vision system that can see through fog despite clutter, sounds like several orders of magnitude more complex, power hungry, and physically large. And if said "modern" system is using Doppler shift to get rid of the clutter at a distance, well, we're right back where we started from. Better resolution, maybe, but an inability to see stopped objects in the roadway.
 
Good enough; yeah, I worked on RADARs as a techie back in the (serious) long-ago days, and took a RADAR course in grad school, more or less as a fun, easy-A elective, but really haven't touched any of it since then.

Still: Synthetic Aperture Radar or not, or phased array or not: Wherever that beam gets pointed, unless said beam is exceedingly narrow, there's going to be clutter picked up in the return, right along with desired reflections from other cars, stopped or not. Which, and we have to be clear here, the RADAR has to find, as in, sweep around and look for.

So, for an argument, say we have infinite time. We take this steerable RADAR antenna and sweep it back and forth, multiple times, so we can get the vertical as well as the horizontal (we got hills, attitude of the car, etc.). Store all the reflected pulses in an array of data, then do, pretty much, image recognition on the fuzzy results. Then do it again, and again, repetitively, and, from those multiple images, do shape recognition (stopped car, right? It's got a shape..), fixed item recognition, that whole occupancy network stuff Tesla goes on about, and, from that, make driving decisions.

Never mind the processing time of the image thus constructed; we're doing that already with vision. It's the acquisition time that gives me serious pause. While what we're looking for isn't exactly pixels in the true image sense, it sure looks like that, at the maximum resolution of this steerable antenna. So, lots and lots and lots of sweeps.. that adds up. In the end, we'd like to match the 40-50 image captures per second that a vision camera can do; I fear that we're talking two or three captures per second.. and that, if true (and I'm a tyro at the modern stuff) is just Too Darned Slow.

There's lots more. Many of these kinds of antennas can be rigged to have multiple beams at once; but then the power per beam goes down, and questions of "is it all going to fit?" come to mind.

Admittedly, modern ICs can have zillions of transistors on them, leading to better, faster processing capabilities that don't break the power consumption bank. Heck, I've designed some custom multi-million gate ICs before now; but even those suckers have limits.

Got a reference for somebody doing more than tracking a target, a bit better than, say, the systems that were emplaced in Teslas back in 2018 or so? Because this idea, kind of an improved vision system that can see through fog despite clutter, sounds like several orders of magnitude more complex, power hungry, and physically large. And if said "modern" system is using Doppler shift to get rid of the clutter at a distance, well, we're right back where we started from. Better resolution, maybe, but an inability to see stopped objects in the roadway.

Objects, even stopped ones, have different velocity profiles due to observer movement. The radar pulses at higher than single-Hz rates (16, I think). Multiple Tx, multiple receive (MIMO) leverages single pulses across multiple antennas, so not as much power as you might think. The receive beam steering is done in post-processing, so it doesn't need one pulse per angle. Transmit doesn't use beamforming in MIMO mode.
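A minimal sketch of the "stopped objects still have a velocity profile" point, with made-up numbers: for an ego vehicle moving at speed v, a stationary reflector at bearing θ shows a radial (Doppler) velocity of about -v·cos(θ), so static clutter falls on a predictable curve and anything off that curve is moving over the ground.

```python
# Toy numbers, not from the TI material: radial (Doppler) velocity that a
# *stationary* reflector presents to a radar on a moving car.
import numpy as np

v_ego = 25.0                                   # ego speed in m/s (~56 mph)
bearings_deg = [0, 15, 30, 45, 60]             # reflector bearing off boresight
bearings = np.radians(bearings_deg)

v_radial_static = -v_ego * np.cos(bearings)    # negative = closing on us

for deg, vr in zip(bearings_deg, v_radial_static):
    print(f"stationary object at {deg:2d} deg -> expected Doppler {vr:6.1f} m/s")

# A detection whose measured Doppler sits well off this curve for its bearing
# is moving relative to the ground (e.g. a car crossing or pulling out).
```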

Some technical resources
mmWave radar sensors | TI.com
 
Yes, it can and has. Nothing in my post contradicts.

It seems people don't understand that FSDb's NN has to reason about the entirety of the road structure and semantics of NA. Tesla hit a limit on how much of that autolabeled training information V11's NN can store (on HW3). It seems it's not possible to have V11's NN generalize the entire road structure and semantics of NA. This, along with all the human-programmed heuristics, can't account for all the different behaviors across locales.

Well then that's one reason why other autonomous driving companies use direct sensors to reduce perception errors, and devote more machine learning and computation to policy.
 
  • Like
Reactions: ZeApelido
You started this exchange by quoting me on why I think additional sensors like Lidar and Radar are needed... I was discussing both. :rolleyes:
I don't care what performance Tesla got from their use of the Conti radar. It was never meant to be used in the way Tesla was trying to use it.

A low-grade cruise control radar is not viable for autonomy. Who would have known?

Right now no Tesla being sold has anything close to autonomy; it's all L2+ driver assist, and for the product Tesla is currently selling, a cruise control radar could help. Elon's complaint about sensor mismatches is tendentious bullshit: you do the best statistical sensor fusion you can, and you test and tune the **** out of it. It's a problem everyone else has (like Mobileye), and yet they power through it and make a consumer L2 product that's more reliable and pleasant to use. It was all about making more money by taking out the radar and then lying about it, just like the ultrasonic sensors and the rain sensor.
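For what "the best statistical sensor fusion you can" means in its simplest textbook form (a generic sketch with hypothetical readings, not Mobileye's or anyone else's actual pipeline): weight each sensor's estimate by its inverse variance instead of throwing one away when they disagree.

```python
# Generic inverse-variance fusion of two range estimates (textbook sketch, not
# any particular vendor's stack). The noisier sensor is down-weighted, not dropped.
def fuse(range_a, var_a, range_b, var_b):
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused_range = (w_a * range_a + w_b * range_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_range, fused_var

# Hypothetical readings: vision says 52 m with large depth uncertainty,
# radar says 48 m with a tight one.
r, v = fuse(52.0, 9.0, 48.0, 0.25)
print(f"fused range: {r:.1f} m (variance {v:.2f} m^2)")   # lands near the radar
```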

The Tesla system is pretty good for an L2 vision system and works well on the highway most of the time, but with some occasional big problems that radar helps with, as evidenced by the people who experienced worse performance once their radar was turned off.

And yes I do want an imaging radar.
 
Got a reference for somebody doing more than tracking a target, a bit better than, say, the systems that were emplaced in Teslas back in 2018 or so? Because this idea, kind of an improved vision system that can see through fog despite clutter, sounds like several orders of magnitude more complex, power hungry, and physically large.
Complex, yes, but not remotely several orders of magnitude more power hungry or large.

There are many such startups and mainstream electronics companies.

Tesla isn't the only high performance tech company in automotive. Legacy OEMs were easy to beat.

And if said "modern" system is using Doppler shift to get rid of the clutter at a distance, well, we're right back where we started from. Better resolution, maybe, but an inability to see stopped objects in the roadway.
 
Last edited:
Right now no Tesla being ~~sold~~ delivered has anything close to autonomy.
Fixed that for you. Elon's been selling and promising robotaxi FSD since 2016. Rewatch Autonomy Day. It's pretty cringe. "We expect to have the first operating robotaxis next year (2020)... With no one in them. Next year." "Any customer will be able to add their car to the Tesla Network".

Elon is the PT Barnum (or Elizabeth Holmes) of AVs. The only justification for the share price being over $120 is FSD robotaxi.

Funny how it used to be "exponential" and now it's "stacked logarithmic". No one could have known, right?

 
Last edited:
  • Like
Reactions: beachmiles
Objects, even stopped ones, have different velocity profiles due to observer movement. The radar pulses at higher than single-Hz rates (16, I think). Multiple Tx, multiple receive (MIMO) leverages single pulses across multiple antennas, so not as much power as you might think. The receive beam steering is done in post-processing, so it doesn't need one pulse per angle. Transmit doesn't use beamforming in MIMO mode.

Some technical resources
mmWave radar sensors | TI.com
Although I'm not a radar aficionado, it's interesting to see in the TI literature you cite a reference to radar-assisted child/animal detection for car interiors. I suppose Tesla is using vision + neural nets and microphones to meet those NCAP standards.
 
Although I'm not a radar aficionado, it's interesting to see in the TI literature you cite a reference to radar-assisted child/animal detection for car interiors. I suppose Tesla is using vision + neural nets and microphones to meet those NCAP standards.
Tesla has previously filed with the FCC for an interior radar in the 60-something GHz band.
 
  • Informative
Reactions: loquitur
Fixed that for you. Elon's been selling and promising robotaxi FSD since 2016. Rewatch Autonomy Day. It's pretty cringe. "We expect to have the first operating robotaxis next year (2020)... With no one in them. Next year." "Any customer will be able to add their car to the Tesla Network".

Elon is the PT Barnum (or Elizabeth Holmes) of AVs. The only justification for the share price being over $120 is FSD robotaxi.

Funny how it used to be "exponential" and now it's "stacked logarithmic". No one could have known, right?


Strong or very confident expectations != promise