Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Fixed object detection in Autopilot/FSD (out of main)

It seems that someone in a Model 3 hit a fire truck and the passenger died

TESLA STRIKES FIRE APPARATUS AT CRASH SCENE – 2 CIVILIANS CRITICALLY INJURED, FIREFIGHTERS OK

Sad. :(

A reminder to investors, one of the key challenges (probably the single most important) for FSD and "March of 9s" is static object detection and classification. Is the fleet data enough for training and does HW3.0 allow for a big enough network to make a difference here? Remains to be seen
 
Sad. :(

A reminder to investors, one of the key challenges (probably the single most important) for FSD and "March of 9s" is static object detection and classification. Is the fleet data enough for training and does HW3.0 allow for a big enough network to make a difference here? Remains to be seen
Yes this is really awful and sad. God willing Tesla or someone else will soon make tragedies like this a thing of the past.

My understanding is that FSD is not “better Autopilot”. It’s a fundamentally different approach that allows the vehicles to “understand” their environment better. It’s great FSD can recognize traffic cones, pedestrians, trash cans, and traffic lights. But can it recognize that the car is about to drive straight into a brick wall? It should be possible for the vehicles to use stereo imaging to see an imminent impact. I’ve never seen that discussed by Tesla reps.
 
Sad. :(

A reminder to investors, one of the key challenges (probably the single most important) for FSD and "March of 9s" is static object detection and classification. Is the fleet data enough for training and does HW3.0 allow for a big enough network to make a difference here? Remains to be seen
I'm totally with you. All emergency braking systems have the same problem (excluding, apparently, the unfeasible lidar), and here is the technical limit of existing camera systems (and hardware). It's a huge problem. This is largely where the future of fully autonomous driving will be decided.
I'm wondering why there's no dedicated discussion. Honestly, I can't believe in FSD and robotaxi feasibility without a demo of how Tesla overcomes this limit. I hope they succeed, but I don't feel current camera resolution and hardware can deal with it. I hope Musk will surprise the world again.
 
Yes this is really awful and sad. God willing Tesla or someone else will soon make tragedies like this a thing of the past.

My understanding is that FSD is not “better Autopilot”. It’s a fundamentally different approach that allows the vehicles to “understand” their environment better. It’s great FSD can recognize traffic cones, pedestrians, trash cans, and traffic lights. But can it recognize that the car is about to drive straight into a brick wall? It should be possible for the vehicles to use stereo imaging to see an imminent impact. I’ve never seen that discussed by Tesla reps.
Well Autopilot is well known to plow straight into non-moving objects because of how it fundamentally works. Teslas on Autopilot have hit concrete crash barriers, fire trucks, police cars, semis, etc. because Autopilot simply cannot be made to detect non-moving objects without also creating equally hazardous false positives which would cause the car to slam on the brakes randomly all the time and cause rear-end collisions instead.

So if FSD can at least detect non-moving objects and not crash into them, while also not creating false positives, which is necessary for no human supervision whatsoever, then it's a huge step up from Autopilot. And yes, FSD is fundamentally different from Autopilot which is just TACC (Traffic-Aware Cruise Control), FSD means the car literally drives itself from point A to B while the human does nothing and could maybe even be asleep. They are completely different.
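To make the "plows into non-moving objects" behavior concrete, here's a toy sketch of the commonly cited explanation: a radar-based follow system filters out returns that look stationary relative to the ground, because they're indistinguishable from overhead signs and roadside clutter. The filtering rule and all numbers below are my assumptions for illustration, not Tesla's actual logic.

```python
# Hypothetical sketch of why a radar-based follow system can ignore
# stopped objects: a return moving at exactly -ego_speed relative to
# the car looks like stationary clutter (overpasses, signs, manhole
# covers) and is dropped to avoid constant false braking.

def filter_radar_targets(returns, ego_speed, tol=1.0):
    """Keep only returns that appear to be moving relative to the road.

    returns: list of (range_m, relative_speed_mps) tuples
    ego_speed: our speed in m/s
    tol: how close to 'stationary' a return must be to get dropped
    """
    tracked = []
    for rng, rel_speed in returns:
        ground_speed = ego_speed + rel_speed  # speed over the ground
        if abs(ground_speed) > tol:
            tracked.append((rng, rel_speed))  # a moving vehicle: track it
        # else: indistinguishable from roadside clutter, so it's ignored,
        # which is exactly how a stopped fire truck gets missed
    return tracked

# Car doing 30 m/s; a lead car doing 25 m/s plus a stopped truck ahead.
targets = [(80.0, -5.0), (120.0, -30.0)]
print(filter_radar_targets(targets, ego_speed=30.0))
# -> [(80.0, -5.0)]  the stopped truck at 120 m is filtered out
```

The trade-off in the post above falls straight out of `tol`: shrink it and the stopped truck gets tracked, but so does every overhead gantry, and the car brakes randomly.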
 
I'm totally with you. All emergency braking systems have the same problem (excluding, apparently, the unfeasible lidar), and here is the technical limit of existing camera systems (and hardware). It's a huge problem. This is largely where the future of fully autonomous driving will be decided.
I'm wondering why there's no dedicated discussion. Honestly, I can't believe in FSD and robotaxi feasibility without a demo of how Tesla overcomes this limit. I hope they succeed, but I don't feel current camera resolution and hardware can deal with it. I hope Musk will surprise the world again.
Camera resolution isn’t the limit here. House flies get by with about 1,000 retinal “pixels”. It’s about high-speed image processing. Next time one of you guys sees Elon or Karpathy, ask them about this, please.
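The point that frame rate and processing matter more than pixel count can be illustrated with the classic monocular "looming" estimate of time-to-contact, which works even at fly-grade resolution. All numbers here are made up.

```python
# Sketch of monocular time-to-contact ("looming"): an object's image
# width grows as you approach it, and width divided by its rate of
# growth gives the seconds until impact. No high resolution needed,
# just fast frames and fast processing.

def time_to_contact(width_now_px, width_prev_px, dt):
    """Estimate seconds to impact from the expansion of an object
    between two frames taken dt seconds apart."""
    growth = (width_now_px - width_prev_px) / dt  # pixels per second
    if growth <= 0:
        return float("inf")  # not closing in on it
    return width_now_px / growth

# Object grew from 100 px to 104 px in one 30 fps frame (dt = 1/30 s):
tau = time_to_contact(104.0, 100.0, 1.0 / 30.0)
print(round(tau, 2))  # ~0.87 s to impact
```

Note the catch: the estimate depends on measuring a 4-pixel change reliably between consecutive frames, which is exactly the high-speed processing problem the post describes.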
 
I'm totally with you. All emergency braking systems have the same problem (excluding, apparently, the unfeasible lidar), and here is the technical limit of existing camera systems (and hardware). It's a huge problem. This is largely where the future of fully autonomous driving will be decided.
I'm wondering why there's no dedicated discussion. Honestly, I can't believe in FSD and robotaxi feasibility without a demo of how Tesla overcomes this limit. I hope they succeed, but I don't feel current camera resolution and hardware can deal with it. I hope Musk will surprise the world again.

I don't have a car, and obviously no Autopilot, so as a non-user: is it correct to assume the problem is that a car would stop for a pedestrian or animal in the middle of a highway, but not for a car, since a car is 'supposed' to be there? Only it can't detect that the car is stationary?

Wouldn't the solution be to have all cars "talk" to all nearby cars, so each would know that one is stopped?
 
I don't have a car, and obviously no Autopilot, so as a non-user: is it correct to assume the problem is that a car would stop for a pedestrian or animal in the middle of a highway, but not for a car, since a car is 'supposed' to be there? Only it can't detect that the car is stationary?
Automatic braking systems work only up to a maximum speed, different for each maker, because of the risk of false positives when distinguishing between different stationary objects. So, above certain speeds, the system does not work at all, no matter what object is in your lane.
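A back-of-envelope sketch of why such a speed ceiling exists: above some speed, the stopping distance exceeds the range at which the system can confidently classify a stationary obstacle, so intervention is disabled rather than risk false emergency braking. The detection range, deceleration, and reaction numbers below are illustrative assumptions, not any maker's spec.

```python
# Toy model: the car can stop for a stationary object only if
# reaction distance + braking distance fit inside the range at which
# the obstacle can be confidently detected. All numbers are made up.

def aeb_can_stop(speed_mps, detect_range_m=60.0, decel_mps2=8.0,
                 reaction_s=0.5):
    """True if the car can brake to a stop within detection range."""
    reaction_dist = speed_mps * reaction_s            # before brakes bite
    braking_dist = speed_mps ** 2 / (2 * decel_mps2)  # v^2 / (2a)
    return reaction_dist + braking_dist <= detect_range_m

print(aeb_can_stop(20.0))  # 72 km/h: 10 + 25 = 35 m needed -> True
print(aeb_can_stop(30.0))  # 108 km/h: 15 + 56.25 = 71.25 m -> False
```

Because braking distance grows with the square of speed, the ceiling arrives quickly: a 50% speed increase roughly doubles the distance needed.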
 
Sad. :(

A reminder to investors, one of the key challenges (probably the single most important) for FSD and "March of 9s" is static object detection and classification. Is the fleet data enough for training and does HW3.0 allow for a big enough network to make a difference here? Remains to be seen

?

How is someone who was clearly not paying attention to the road a reminder of an FSD challenge?

Tesla will have solved that challenge before FSD is turned on without driver oversight or regulators won’t allow it.
 
I don't have a car, and obviously no Autopilot, so as a non-user: is it correct to assume the problem is that a car would stop for a pedestrian or animal in the middle of a highway, but not for a car, since a car is 'supposed' to be there? Only it can't detect that the car is stationary?

Wouldn't the solution be to have all cars "talk" to all nearby cars, so each would know that one is stopped?
The first problem is recognition of stationary objects without false positives. AP can't do that, but FSD will. Total difference in ability.

The second problem is that talking to other cars won't help unless every vehicle is equipped with the ability. It will still be up to the driver because there will always be those vehicles that aren't equipped. For years now, emergency vehicles have had the means to broadcast that they are coming (not sure if this is 100% of emergency vehicles or not), but car manufacturers haven't put in the receivers--even though doing so would help speed the emergency vehicle and wouldn't cost much.
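To make the "talking cars" idea concrete, here's a toy sketch: each equipped vehicle periodically broadcasts its position and speed, and a receiver flags any nearby broadcaster reporting itself as stopped. The message fields here are invented for illustration; real V2V standards (e.g. the SAE J2735 Basic Safety Message) carry far more, and, as the post notes, unequipped vehicles stay invisible to the scheme.

```python
# Toy vehicle-to-vehicle broadcast: a receiver scans recent messages
# for any vehicle that reports itself as (nearly) stopped. Field names
# and values are hypothetical illustrations, not a real V2V standard.

from dataclasses import dataclass

@dataclass
class V2VMessage:
    vehicle_id: str
    lat: float
    lon: float
    speed_mps: float

def stopped_vehicles_nearby(messages, max_speed=0.5):
    """Return IDs of broadcasters reporting themselves as stopped."""
    return [m.vehicle_id for m in messages if m.speed_mps <= max_speed]

msgs = [
    V2VMessage("truck-1", 34.05, -118.24, 0.0),  # stopped fire truck
    V2VMessage("car-7", 34.06, -118.25, 28.0),   # ordinary moving traffic
]
print(stopped_vehicles_nearby(msgs))  # ['truck-1']
```

The scheme fails open, exactly as described: an unequipped stopped car sends no message, so the absence of a warning proves nothing and the driver (or vision system) still carries the load.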
 
The first problem is recognition of stationary objects without false positives. AP can't do that, but FSD will. Total difference in ability.

The second problem is that talking to other cars won't help unless every vehicle is equipped with the ability. It will still be up to the driver because there will always be those vehicles that aren't equipped. For years now, emergency vehicles have had the means to broadcast that they are coming (not sure if this is 100% of emergency vehicles or not), but car manufacturers haven't put in the receivers--even though doing so would help speed the emergency vehicle and wouldn't cost much.

The problem of stationary objects above a certain speed always gets brought up as a way of saying FSD is ages away because this limitation can't be solved. People seem to forget that hardware processing power, speed, latency, and response time play big roles in how quickly the car can detect an object, identify it, and then perform the appropriate action. They also seem to forget that Hardware 3.0-specific code/software has not yet been active in any Tesla (except for the dev team, who are actually running that Hardware 3.0-specific code).

The state of how a Tesla reacts to stationary objects says absolutely nothing about the current state of their actual FSD software, because the retail code on active Teslas is specific to Hardware 2.5 or below.
 
?

How is someone who was clearly not paying attention to the road a reminder for a FSD challenge?

Tesla will have solved that challenge before FSD is turned on without driver oversight or regulators won’t allow it.

Of course they won't. But it's a reminder that this problem needs to still be "solved".

Simply having HW3.0 does not guarantee this success.
 
The problem of stationary objects above a certain speed always gets brought up as a way of saying FSD is ages away because this limitation can't be solved. People seem to forget that hardware processing power, speed, latency, and response time play big roles in how quickly the car can detect an object, identify it, and then perform the appropriate action. They also seem to forget that Hardware 3.0-specific code/software has not yet been active in any Tesla (except for the dev team, who are actually running that Hardware 3.0-specific code).

The state of how a Tesla reacts to stationary objects says absolutely nothing about the current state of their actual FSD software, because the retail code on active Teslas is specific to Hardware 2.5 or below.

I mean, I agree with you. We have no idea how well HW3 will do. This is the crux of the problem that others argue requires too much compute with cameras (thus lidar). So it's just a reminder of this issue.
 
I mean, I agree with you. We have no idea how well HW3 will do. This is the crux of the problem that others argue requires too much compute with cameras (thus lidar). So it's just a reminder of this issue.
Lidar requires a lot of compute, too. It returns a bunch of dots as distances. The computer still needs to correlate all those dots and determine that a group of them represents a stopped object. I suspect that this was a case of a lead car changing lanes and the fire truck coming as a surprise.
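To put the "correlate all those dots" point in code, here is a naive clustering sketch over 2D range points. The points are made up, and real lidar stacks do much more (ground removal, tracking over time, classification); this only shows that raw dots need grouping before anyone can say "stopped object in my lane".

```python
# Naive single-linkage clustering of 2D lidar returns (x, y in metres):
# a point joins a cluster if it lies within max_gap of any member.
# Illustrative only; production perception pipelines are far richer.

def cluster_points(points, max_gap=1.0):
    """Group points whose nearest cluster member is within max_gap."""
    clusters = []
    for p in points:
        for c in clusters:
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= max_gap ** 2
                   for q in c):
                c.append(p)
                break
        else:  # no existing cluster was close enough
            clusters.append([p])
    return clusters

# A tight clump 40 m ahead (a stopped truck?) plus two stray returns.
scan = [(40.0, 0.0), (40.3, 0.2), (40.1, -0.3), (55.0, 6.0), (20.0, -8.0)]
clusters = cluster_points(scan)
print(len(clusters))                  # 3 groups: the clump and two loners
print(max(len(c) for c in clusters))  # 3 points in the biggest cluster
```

Even this toy version is O(n^2) per scan, and a real lidar delivers hundreds of thousands of points per second, which is the compute cost the post is pointing at.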
 
How do you know it hasn’t been solved? FSD is apparently complete per Tesla.

Considering there is an entire industry working on this problem, I will assume it hasn't until proven otherwise.

Tesla has never indicated they have "solved" perception yet. They have indicated they are confident they will.

I believe they will, because they have the data. But when do they have enough training compute, and then enough inference compute to fit in a car?

Keep in mind the best deep learning algorithms are feeding in a few camera images at a time.

These things ideally need to be fed 3 to 5 seconds of images at 30 fps. Video.

But they aren't yet because the training would be massive. Tesla agrees and that's why they are working on Project Dojo.
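A quick back-of-envelope on why video input is so much heavier than a few frames. The camera count and resolution below are my assumptions for illustration, not confirmed specs.

```python
# Rough arithmetic: frames and raw pixels per training sample when
# feeding video clips instead of a handful of still frames.
# Camera count and resolution are assumed values for illustration.

def input_pixels(seconds, fps, cameras, width, height):
    frames = seconds * fps * cameras
    return frames, frames * width * height

# A couple of frames per camera vs. 5 s of 30 fps video, 8 cameras,
# 1280x960 each:
few = input_pixels(seconds=1, fps=2, cameras=8, width=1280, height=960)
video = input_pixels(seconds=5, fps=30, cameras=8, width=1280, height=960)
print(few[0], video[0])    # 16 vs 1200 frames per sample
print(video[1] // few[1])  # 75x more raw pixels per training sample
```

Multiply that 75x by millions of training samples and the motivation for purpose-built training hardware like Project Dojo is clear.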
 
In a Reddit thread, the need for stereoscopic vision with cameras was brought up, so it may not only be a processing problem.
It would be good to separate the two issues when discussing the topic. I have no idea about stereoscopy, so I'm not stating anything.
 
Maybe not topical, but IMO important and definitely related to Tesla's competitors. I have a strong belief that LIDAR should be banned due to the risk it poses to vision -- both of humans and animals.

Roughly speaking, the argument in favor of LIDAR safety goes that the lasers are kept to low enough levels to avoid problems. However, I have not found anything convincing that this has been thoroughly studied or even takes into account individual variance. For example, what is the expected rate for eye damage with currently approved LIDAR systems? Does it assume an ideal state with only one LIDAR unit in use? How rigorous are the protocols for determining whether or not eye damage has occurred?

My searching has not been particularly fruitful so maybe I have missed something and LIDAR has been shown to be safe. One article[1] I did find indicates that studies are "on-going" and that there is substantial risk to overlooking eye damage due to compensation concealing blind spots until the damage is severe. I am concerned that, unless checked, use of LIDAR on public roads will not only have an undesirable impact on animals, but also result in a wave of severe eye damage.

Am I wrong? I'd love to be, and I admit I don't have much data to work with in drawing my conclusions. But as it stands, I think automotive LIDAR should be banned.

1) Lasers and Eye Safety: Are Lasers Dangerous? | SemiNex