
Profound progress towards FSD

And the work being done at Tesla is to solve THAT problem.

Yes, I get that. And when do you think Tesla will solve it? Other companies that use lidar have already solved it.

[Image: human field of view for both eyes, showing different levels of peripheral vision]

Stationary pedestrian in a tunnel that jumps in front of the car = TSLAQ

I am not sure what you are trying to say. Are you saying that human far peripheral vision is not as good as computer vision?

Also the stationary pedestrian in a tunnel is a real case. It's not some prank by TSLAQ. It happened in China. I showed the video.

 
  • Disagree
Reactions: mikes_fsd
The problem is that "camera and computer combo" is not as good as "human vision and brain combo". They are not equal. And radar would not help in those cases I mentioned. If the pedestrian were stationary, the radar would probably ignore it.
In theory, the camera should be able to recognize a deer or other animal versus a person. A person by the side of the road could be counted on to stay out of the way of the car if they're not at a crosswalk, and the car could slam on the brakes if the pedestrian decides to commit suicide at the last minute. If it sees an animal, other than a dog on a leash, it should slow down. Not sure how well the cameras see at night. Can the cameras see into the infrared, thus distinguishing a live animal from a dead animal by the side of the road?
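In rough code terms, that rule set might look like the sketch below. The labels, fields, and reactions here are purely illustrative assumptions, not Tesla's actual perception output:

```python
# A rough sketch of the rules described above. Labels, fields, and
# reactions are illustrative assumptions, not Tesla's perception API.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str            # e.g. "pedestrian", "deer", "dog_on_leash"
    in_travel_lane: bool  # currently in the car's path
    at_crosswalk: bool

def plan_reaction(det: Detection) -> str:
    """Map a single detection to a driving reaction."""
    if det.label == "pedestrian":
        if det.in_travel_lane:
            return "brake"        # last-minute movement into the path
        if det.at_crosswalk:
            return "yield"
        return "monitor"          # count on them staying put, but keep watching
    if det.label == "dog_on_leash":
        return "monitor"          # presumably under the handler's control
    if det.label in {"deer", "animal"}:
        return "slow_down"        # animals are unpredictable
    return "continue"
```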
 
In theory, the camera should be able to recognize a deer or other animal versus a person. A person by the side of the road could be counted on to stay out of the way of the car if they're not at a crosswalk, and the car could slam on the brakes if the pedestrian decides to commit suicide at the last minute. If it sees an animal, other than a dog on a leash, it should slow down. Not sure how well the cameras see at night. Can the cameras see into the infrared, thus distinguishing a live animal from a dead animal by the side of the road?

No, Tesla cameras cannot see in the infrared.
 
At least for the backup camera, Tesla has been able to adjust camera exposure with software updates, e.g., Backup Cam doesn't adjust exposure for night?

[Image: backup camera and main front camera output at night, vehicle lights off]


The image from that thread also shows what appear to be the backup camera and main front camera, both with the vehicle's lights off, to give a sense of how much each can see in the dark. For the tunnel situation, it's unclear whether the front cameras would have seen the pedestrian anyway without exposure differences; and if a different exposure were needed, would the system be able to adjust dynamically, and would it know to do so?
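For illustration, a minimal auto-exposure loop of the kind the question is pointing at could compute a correction from frame brightness. The target values and clamping below are made-up assumptions, not Tesla's actual camera firmware:

```python
import numpy as np

def exposure_adjustment(frame: np.ndarray,
                        target_mean: float = 110.0,
                        deadband: float = 15.0) -> float:
    """Return a multiplicative exposure correction from mean luminance.

    frame: 8-bit grayscale image as an (H, W) array.
    """
    mean = float(frame.mean())
    if abs(mean - target_mean) <= deadband:
        return 1.0  # close enough; leave exposure alone
    # Clamp so one dark frame (e.g. a tunnel mouth) can't swing exposure wildly.
    return float(np.clip(target_mean / max(mean, 1.0), 0.5, 2.0))
```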
 
  • Informative
Reactions: diplomat33
But what probably worries me the most are cases where cameras are likely to fail.

For example:
- Cameras being blinded (a simple blinded-camera check is sketched below the list).
- Cameras not seeing a pedestrian in dark clothes or a dark animal at night quickly enough.
- Cameras not seeing a pedestrian hiding in a shadow quickly enough, like my dark tunnel example.
- Cameras not recognizing sharp glass on the road because glass is see-through.
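As a sketch of the first failure mode: a blinded (or fully dark) camera is at least detectable from its own output. The thresholds here are arbitrary assumptions for illustration:

```python
import numpy as np

def camera_blinded(frame: np.ndarray,
                   sat_frac: float = 0.6,
                   dark_frac: float = 0.9) -> bool:
    """Heuristic blinded-camera check: most pixels saturated (sun/glare)
    or nearly all pixels black (obstruction, unlit tunnel).

    frame: 8-bit grayscale image as an (H, W) array.
    """
    saturated = np.mean(frame >= 250)
    dark = np.mean(frame <= 5)
    return saturated > sat_frac or dark > dark_frac
```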

I would never expect L5-type functionality with existing technology (even for Waymo, tbh), but humans can also easily fail in many such scenarios, including potholes, being blinded by sun or glare, hitting random unrecognized debris, barely missing pedestrians in poor visibility, etc. Whether driving functionality needs to get to "far in excess" of human ability with software and cameras before its usefulness outweighs the risks is another question, though.
 
  • Like
Reactions: ZsoZso
I would never expect L5-type functionality with existing technology (even for Waymo, tbh), but humans can also easily fail in many such scenarios, including potholes, being blinded by sun or glare, hitting random unrecognized debris, barely missing pedestrians in poor visibility, etc. Whether driving functionality needs to get to "far in excess" of human ability with software and cameras before its usefulness outweighs the risks is another question, though.

I think that is precisely why Waymo has so many sensors, including lidar. The sensor suite is designed to give them the maximum chance of handling these cases. I read that Waymo has 29 cameras, precisely so that no matter what direction a camera gets blinded from, it will not cripple vision. And of course multiple lidars to also detect objects around the car, like road hazards that camera vision might miss. In fact, with so many sensors, I suspect that Waymo cars probably don't fail these cases, and if they do, it is probably extremely rare. The whole point is that if you really want "driverless L5" then your car needs to be able to handle these cases extremely reliably. You can't do a cross-country drive while you sleep in the back seat if the car might crash when the cameras get blinded or when it hits a road hazard. The FSD needs to be safer than that to achieve "driverless L5".

And there is no reason why autonomous cars should have the same limitations as humans. If we can make autonomous cars that don't fail in cases where humans might fail, why wouldn't we do that?
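A toy version of that redundancy argument: with overlapping cameras plus lidar, no single blinded sensor should be able to veto (or fabricate) an obstacle. This is an illustrative voting rule, not Waymo's actual fusion stack:

```python
# Illustrative redundancy logic, not Waymo's actual stack: lidar is
# lighting-independent, and overlapping cameras vote among themselves.
def obstacle_confirmed(camera_votes: list[bool], lidar_hit: bool) -> bool:
    """camera_votes holds one detection vote per *healthy* camera
    covering the region; blinded cameras are excluded upstream."""
    if lidar_hit:
        return True   # lidar sees geometry regardless of lighting
    if not camera_votes:
        return False  # no usable cameras and no lidar return
    return sum(camera_votes) / len(camera_votes) >= 0.5
```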
 
I thought FSD would handle it because a fatality from just this kind of thing made the national news over a year ago, and recently a Tesla that crashed into an overturned trailer in the left lane of a limited-access highway got lots of views on YouTube. I thought the Tesla neural network was supposed to learn from experience. Apparently that's too much to expect.

Neural networks don't learn from experience. They are "trained" at the factory and then deployed. Perhaps you should try reading the manual rather than relying on arbitrary YouTube channels for your "knowledge"?
 
  • Disagree
Reactions: DanCar
Neural networks don't learn from experience. They are "trained" at the factory and then deployed. Perhaps you should try reading the manual rather than relying on arbitrary YouTube channels for your "knowledge"?
Don't be so pedantic. Yes, it learns from scenarios that are gathered from the experience of the car being driven. If you want to talk about non-Tesla neural networks, there are networks that are updated continuously. Perhaps you should try reading a manual rather than relying on arbitrary YouTube channels for your "knowledge"?
 
Don't be so pedantic. Yes, it learns from scenarios that are gathered from the experience of the car being driven. If you want to talk about non-Tesla neural networks, there are networks that are updated continuously. Perhaps you should try reading a manual rather than relying on arbitrary YouTube channels for your "knowledge"?

The neural network in the car does not learn. It processes data based on fixed information provided during training by Tesla before software updates are distributed. After that, its behavior is fixed until the next update. That's not being pedantic; it's being accurate. If you don't like that, it's not my problem.
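The distinction is easy to show in code. In a typical deep-learning framework (PyTorch, used here purely as an illustration of the general pattern, not of Tesla's stack), deployment means inference with frozen weights; weight updates only ever happen offline during training:

```python
import torch

model = torch.nn.Linear(8, 2)  # stand-in for a deployed perception network

# Deployed behavior: weights are frozen; the car only runs inference.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 8))  # no gradients, no weight updates

# Training (done offline, before an update ships): weights change here.
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss = model(torch.randn(1, 8)).sum()
loss.backward()
optimizer.step()  # this step never runs in the car between updates
```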

While several people have speculated that Tesla somehow gathers feedback from all the cars to train the network, there is no actual evidence for this, apart from some uploads when the car is on AP and involved in an accident.

As for reading the manual, the prior post indicated that he "expected" (his words) the car to brake on a non-divided road when a truck made a turn at an intersection, based on viewing YouTube. Presumably we should also "expect" the world to be flat, and drinking bleach to protect us from Covid-19.
 
Yes, I get that. And when do you think Tesla will solve it? Other companies that use lidar have already solved it.



I am not sure what you are trying to say. Are you saying that human far peripheral vision is not as good as computer vision?

Also the stationary pedestrian in a tunnel is a real case. It's not some prank by TSLAQ. It happened in China. I showed the video.

Yep - computer vision is like a human glancing directly in every direction all the time. No one can do that, because of peripheral vision limits, plus they tend to look at the speedometer, etc.

Couldn't run the dodgy video. A man driving into a tunnel where a woman is pushing a stroller. Are you serious about this? Is this why regulators will say that humans, who are 100 times less safe, must continue looking down the road instead of watching Netflix?

Eventually, governments will lose control - the world will be watching Netflix behind the wheel.
 
  • Like
Reactions: mikes_fsd
Yep - computer vision is like a human glancing directly in every direction all the time. No one can do that, because of peripheral vision limits, plus they tend to look at the speedometer, etc.

I am not talking about direction but quality. Of course, cameras arranged around the car will have greater coverage than human vision. That's why every autonomous car has multiple cameras arranged to provide 360-degree coverage. I am saying central vision for computers is not as good as central vision for humans. The human brain is more powerful than computers.

Plus, there are cases where human eyes would also fail to see something. How do you propose autonomous cars handle situations where the human eye can't see? Autonomous cars need to be able to see stuff even when human vision can't, like the example below.

Couldn't run the dodgy video. A man driving into a tunnel where a woman is pushing a stroller. Are you serious about this? Is this why regulators will say that humans, who are 100 times less safe, must continue looking down the road instead of watching Netflix?

No, that is an example of both human vision and camera vision failing. It illustrates why autonomous cars with lidar are better because they will be safer. FSD cars with lidar will detect the woman pushing the stroller in the tunnel where humans and cameras would not.
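That is the core argument for lidar in this scenario: it measures geometry with its own illumination, so an in-path check like the toy one below works identically in a dark tunnel and in daylight. The lane geometry and thresholds are invented for illustration:

```python
import numpy as np

def lidar_obstacle_in_path(points: np.ndarray,
                           lane_half_width: float = 1.5,
                           max_range: float = 60.0,
                           min_points: int = 10) -> bool:
    """Check for an obstacle in the travel lane from raw lidar returns.

    points: (N, 3) array of x (forward), y (lateral), z (up) in meters.
    Lighting is irrelevant: lidar provides its own illumination.
    """
    in_lane = points[(np.abs(points[:, 1]) < lane_half_width)
                     & (points[:, 0] > 0.0)
                     & (points[:, 0] < max_range)
                     & (points[:, 2] > 0.2)]  # ignore ground returns
    return len(in_lane) >= min_points
```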
 
  • Disagree
Reactions: mikes_fsd
I am not talking about direction but quality. Of course, cameras arranged around the car will have greater coverage than human vision. That's why every autonomous car has multiple cameras arranged to provide 360-degree coverage. I am saying central vision for computers is not as good as central vision for humans. The human brain is more powerful than computers.

Plus, there are cases where human eyes would also fail to see something. How do you propose autonomous cars handle situations where the human eye can't see? Autonomous cars need to be able to see stuff even when human vision can't, like the example below.



No, that is an example of both human vision and camera vision failing. It illustrates why autonomous cars with lidar are better because they will be safer. FSD cars with lidar will detect the woman pushing the stroller in the tunnel where humans and cameras would not.
It doesn't need to be better than central human vision to be way better than a human driver. Sounds like you are expecting regulators to permit use only if it's nothing shy of perfect. The average human driver is a low bar. China will regulate early; they desperately want the economy to be more efficient. The US will need to follow suit fairly quickly. The EU will be years later, no doubt.
 
But there's also radar in front of the Tesla. That should be able to distinguish a stopped object not in the travel lane from a stopped object in the travel lane. If it can't, then a higher-definition radar should be installed.

Just for kicks, a couple of weeks back I was driving along a main road, 60+MPH with AP engaged (dual carriageway / central divide / UK) and there was a large van facing me unloading goods in my lane. I ignored it until the last possible moment then swerved to avoid. The car hadn't even blinked! I don't quite get the issue, but I gather that this is a situation that is hard for AP to detect. Sounds like an area needing major improvement!
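What the quoted post asks for amounts to filtering stationary radar returns by lateral position instead of discarding them wholesale. A toy sketch, assuming a radar with usable azimuth (lateral) resolution; the fields and thresholds are made up:

```python
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float    # distance ahead
    lateral_m: float  # offset from lane center
    speed_mps: float  # speed relative to the road

def stopped_object_in_lane(returns: list[RadarReturn],
                           lane_half_width: float = 1.8,
                           stopped_speed: float = 0.5) -> bool:
    """Flag stationary returns only when they sit inside the travel lane.

    Classic radars often drop all stationary returns to avoid braking for
    overpasses and signs; a higher-resolution radar with good lateral
    accuracy can keep the in-lane ones instead.
    """
    return any(abs(r.speed_mps) < stopped_speed
               and abs(r.lateral_m) < lane_half_width
               for r in returns)
```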

Another FSD issue is potholes. Right now, I understand that FSD can't distinguish between an actual pothole and a patched pothole, so it ignores potholes. FSD is going to have to be able to make that distinction.

I agree, but I doubt it will happen. And what about potholes when it rains?! The whole area of both radar and visible-light reflections seems potentially problematic. I have seen all sorts of reflections on wet, rain-drenched roads being visualized as cones right in the middle of the road.
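The pothole-vs-patch distinction is fundamentally a depth question, which is exactly what is hard for cameras alone. A toy check, assuming some depth estimate of the road surface is available (stereo, lidar, or similar):

```python
import numpy as np

def is_real_pothole(depth_patch: np.ndarray, min_depth_m: float = 0.03) -> bool:
    """Distinguish an actual pothole from a patched (flat but dark) one.

    depth_patch: (H, W) array of road-surface height in meters relative to
    the surrounding pavement. A patch looks like a pothole to a color
    camera but is flat in depth; a real pothole is measurably below grade.
    """
    return float(np.median(depth_patch)) < -min_depth_m
```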
 
It doesn't need to be better than central human vision to be way better than a human driver. Sounds like you are expecting regulators to permit use only if it's nothing shy of perfect. The average human driver is a low bar. China will regulate early; they desperately want the economy to be more efficient. The US will need to follow suit fairly quickly. The EU will be years later, no doubt.

Computer vision needs to be at least as good at seeing stuff as human central vision. Just having some advantages over human vision, like being 360-degree or not getting tired or distracted, is not enough. What good is it if your robotaxi can see 360 degrees and never gets tired, if it hits a parked car on the side of the road or a white truck crossing in front because the camera vision did not correctly recognize the objects? So yes, camera vision needs to be at least as good as human vision at detecting stuff.

No, I am not expecting perfection. But I am expecting autonomous cars to be much safer than human drivers.
 
It's certainly true that without a human driver, the Tesla would crash nearly every time. At least, that's been my experience in the month and a half that I've owned my M3. And that's during a pandemic, when I'm not driving much and there's not much traffic. It's the exception rather than the rule that I can do an entire drive on NoA without having to intervene to prevent an accident. I can't figure out how people got enthusiastic about FSD in the past, when FSD was presumably not as capable as it is now.

lol dude, you had to see when Tesla sales reps were demoing AP1 as a hands-off system... and then they said, but wait, we've got AP2 now and eventually you'll sleep in your car!

sign me up, I said!

AP2 didn't have AP1's reliability/functionality for at least 18 months thereafter...

AND STILL, to this day, after driving back and forth between 15 different AP1 loaners and my AP2 vehicle... I could absolutely swear that whatever creates the lane path in AP1 still predicts curves better than AP2 or even AP3. There is just something different about it. AP1 is still smoother.