
Autonomous Car Progress

In the screenshot, the front of the Tesla is in contact with the Camry. It would be interesting to know exactly what the blind spots of the car are, though. I bet it's pretty close. I guess you could put your head on the ground and pretend to be a curb and look up at the camera and housing. haha.
It has a much better vantage point than the driver.
I updated my post with screenshots from earlier in the video, when the curb on the left is still visible and immediately after it becomes invisible. The Camry is much further away. That seems to confirm the curb would be invisible to this camera as you are making a right turn (or a left turn from a one-way street).
 
The Camry isn't that far from the curb! Anyway, you need to be able to detect the positions of objects this close to the car, otherwise you're going to have problems.
 
I mean the Camry is far away from the Tesla (not that it is far away from the curb). That means the curb becomes invisible to this camera well before the Tesla crosses into the intersection, so if it were making a turn, the curb would also be invisible.

Again, here is a picture of when the curb disappears from the camera view. I think it should be fairly obvious that to make the turn, the car would still have needed to travel even further forward, so it would not be directly seeing the curb while turning.
[Screenshot: wide_camera2.jpg]


Note I'm not talking about "objects" like cars and pedestrians not being visible; I'm talking purely about very low things like curbs.

If there were a wide-camera video of the car turning, it would probably be clearer.
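
As a rough sketch of the geometry (the camera height and cutoff angle below are guesses, not measurements): the blind distance in front of the car is just the camera height divided by the tangent of the steepest downward angle the housing allows, so a low curb drops out of view well before the bumper reaches it.

```python
import math

def nearest_visible_ground(camera_height_m: float, max_depression_deg: float) -> float:
    """Distance ahead at which the ground first becomes visible, given the
    steepest downward viewing angle the camera shroud/hood allows."""
    return camera_height_m / math.tan(math.radians(max_depression_deg))

# Assumed numbers: a camera ~1.3 m up that can look at most ~30 degrees
# below horizontal leaves roughly 2.25 m (~7.4 ft) of blind pavement.
print(nearest_visible_ground(1.3, 30.0))
```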
 
How many feet in front of the car does the camera suite need to be able to see the curb, in your estimation?
@drtimhill is asking why people are skeptical of computer vision, and you're saying that it's not reasonable to expect it to determine the position of a curb 20 ft away. You're making me more skeptical of computer vision than I already was. haha.
 
If something like a curb is obscured by objects (cars, etc.) then I'm not sure what any sensor is supposed to do ... they are all going to be blinded. At that point you are relying on the NN being able to extrapolate the curb from the bits it can see (which it can do) and/or handle object continuity as the curb becomes obscured (which Tesla is working on for cars, and I would assume will also apply to other objects).

Of course, this all applies to humans too, though we are very good at assuming curbs and/or providing object continuity (until we aren't).
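
For the "extrapolate from the bits it can see" part, a minimal illustration (synthetic points, nothing Tesla-specific) is just fitting a curve to the visible stretch of curb and extending it into the occluded region:

```python
import numpy as np

# Synthetic curb points: x = metres ahead, y = lateral offset. The stretch
# nearest the bumper is occluded, so only the farther points are "visible".
visible_x = np.array([3.0, 4.0, 5.0, 6.0, 8.0])
visible_y = np.array([1.52, 1.50, 1.49, 1.51, 1.50])

# Fit a low-order curve to what the camera can see...
coeffs = np.polyfit(visible_x, visible_y, deg=1)

# ...and extrapolate into the blind zone in front of the car.
occluded_x = np.array([0.5, 1.0, 2.0])
print(np.polyval(coeffs, occluded_x))  # estimated curb offsets, ~1.5 m
```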
 
At CVPR 2022, Argo shared 7 research breakthroughs:
  1. Using lidar data to do end-to-end path prediction
  2. Balancing AI training to improve long-tail recognition
  3. An algorithm for efficiently purging old map data
  4. Turning large images into 3D maps
  5. Estimating paths of moving objects smoothly with lidar
  6. Creating more accurate 3D scenes
  7. Recognizing new objects not previously trained on
 
So does Waymo not count because Arizona does not require government approval or because Chandler is not a major city?
76th most populous (Chandler) vs. 17th (San Francisco)
Because Chandler is not a major city. They're just playing word games. Kyle Vogt is even worse. In the interview, Phil LeBeau says it's "definitely" the first time in the US anyone has done paid rides with no driver, and Kyle just rolls with it. Never mind Waymo, which has done it for two years. Kyle later says "no major city has had driverless cars running around" even though Waymo is doing driverless in downtown Phoenix (just not paid rides). And Yandex has done driverless in Las Vegas, also a major city. Of course it's not Kyle's job to do Waymo's marketing for them, but that doesn't mean he has to lie.

AZ does have some gov't requirements for self-driving cars, btw. They suspended Uber's approval after the Elaine Herzberg killing. It's much less bureaucratic than CA, but then again, what isn't?
 
Note I'm not talking about "objects" like cars and pedestrians not being visible; I'm talking purely about very low things like curbs.

If there were a wide-camera video of the car turning, it would probably be clearer.
Ok so I realized that measuring this is actually quite simple, since we know the view is masked by the shroud. I used a flashlight and positioned it on the ground as close as possible to the car while still illuminating the fisheye lens. I got a range of about 7 feet directly in front of the car and 8 feet a bit to the side (as a curb would be). It can actually see a foot or two closer than I can sitting in the driver's seat.

[Photos: flashlight on the ground in front of the car]
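Running those numbers backwards gives the cutoff angle implied by the shroud (the ~1.3 m camera height here is a guess, not a spec):

```python
import math

def depression_angle_deg(camera_height_m: float, blind_distance_m: float) -> float:
    """Steepest downward viewing angle implied by a measured blind distance."""
    return math.degrees(math.atan2(camera_height_m, blind_distance_m))

# 7 ft straight ahead and 8 ft to the side, converted to metres.
print(depression_angle_deg(1.3, 7 * 0.3048))  # ~31 degrees
print(depression_angle_deg(1.3, 8 * 0.3048))  # ~28 degrees
```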

If something like a curb is obscured by objects (cars, etc.) then I'm not sure what any sensor is supposed to do ... they are all going to be blinded. At that point you are relying on the NN being able to extrapolate the curb from the bits it can see (which it can do) and/or handle object continuity as the curb becomes obscured (which Tesla is working on for cars, and I would assume will also apply to other objects).
I haven't seen this case in any FSD Beta videos where people hit curbs. Does it really follow cars that closely? Seems too close.
 
Ok so I realized that measuring this is actually quite simple, since we know the view is masked by the shroud. I used a flashlight and positioned it on the ground as close as possible to the car while still illuminating the fisheye lens. I got a range of about 7 feet directly in front of the car and 8 feet a bit to the side (as a curb would be). It can actually see a foot or two closer than I can sitting in the driver's seat.
I'm not sure I'm understanding correctly, but 8 ft to the side is pretty far away from the curb to be making a tight right/left turn. That would guarantee that while the car is making the turn, that camera can't see the curb unless you turn really wide.

And humans also can't see the curb while they are turning, so only seeing 1-2 feet closer wouldn't really change things much.

Not sure if we are talking past each other, but I'm talking about the visibility of the curb immediately as the car is turning in. As the view currently is, Tesla has to extrapolate like a human does (recall from memory the curb position while the car was approaching, without directly seeing it while turning). I don't see how a low parking camera wouldn't be a big help in this case, avoiding the need for any extrapolation (since the curb would be completely visible even as the car is turning and immediately before).
 
No, I'm saying it can see the position of the curb better than someone sitting in the driver's seat, about 8 feet ahead and 2 feet to the side. Software is already far superior to humans at doing the "extrapolating" you're talking about. The car knows the exact wheel speed and steering position, so it could easily navigate ten feet of travel by dead reckoning, no AI involved. But it also has all the cameras that it can use for localization, and an accelerometer.
I'm just skeptical that adding additional cameras would help if your computer vision system can't even tell the position of a curb ten feet away. It just doesn't make sense to me that being able to see fixed objects up until the moment of impact would make any significant difference in the progress towards FSD. I do think having zero blind spots will be necessary for L5, though. Humans sometimes have to get out of the car and survey their surroundings, though that could also be solved by my frunk Optimus idea.
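
For the dead-reckoning point, a minimal kinematic bicycle-model sketch (the wheelbase and inputs are assumed numbers, not anything from Tesla's stack) shows how the pose can be propagated from wheel speed and steering angle alone:

```python
import math

WHEELBASE_M = 2.875  # roughly a Model 3 wheelbase; treat as an assumption

def dead_reckon(x, y, heading, speed_mps, steer_rad, dt):
    """One step of a kinematic bicycle model: advance the pose estimate from
    wheel speed and steering angle alone, with no camera or GPS input."""
    x += speed_mps * math.cos(heading) * dt
    y += speed_mps * math.sin(heading) * dt
    heading += (speed_mps / WHEELBASE_M) * math.tan(steer_rad) * dt
    return x, y, heading

# Ten feet (~3 m) of travel: 1 s at 3 m/s with a fixed 20-degree steer.
pose = (0.0, 0.0, 0.0)
for _ in range(100):  # 100 steps of 10 ms
    pose = dead_reckon(*pose, speed_mps=3.0, steer_rad=math.radians(20.0), dt=0.01)
print(pose)  # (x, y, heading) after ~3 m of blind travel
```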
 
could also be solved by my frunk Optimus idea.
Yep, he might not even have to get out in all cases. Could just pop the button, poke his head out, give the thumbs up, then close things back up again. Might be able to latch onto that hood somehow to pull it closed from the inside. Probably will be some aftermarket kits.

Tesla will need to think ahead and make sure Optimus is very skinny and flexible, though optional dismemberment could also be employed.
 
Frunk Optimus also solves Chuck’s left turn without the need for a camera retrofit.
 
No, I'm saying it can see the position of the curb better than someone sitting in the driver's seat, about 8 feet ahead and 2 feet to the side. Software is already far superior to humans at doing the "extrapolating" you're talking about.
Except it's not in this case, given Teslas are curbing wheels in FSD mode. I'm not saying it's impossible to fix this in software, but parking cameras are a trivial way to address it.
The car knows the exact wheel speed and steering position, so it could easily navigate ten feet of travel by dead reckoning, no AI involved. But it also has all the cameras that it can use for localization, and an accelerometer.
It does have an accelerometer, but that is not enough for dead reckoning; it would also need a gyroscope. The accuracy matters too, given Tesla seems to tune for a relatively small margin of error (not leaving a whole lot of space between the wheels and the curb, instead of turning wide). I'm skeptical an IMU would be able to beat a simple parking camera in this particular application (especially when your initial guesstimate of the curb position may be wrong in the first place, and you get zero updates as you are making the turn, given the cameras are completely blind to it).
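
To make the "zero updates" point concrete, here's a toy sketch (all numbers invented) of carrying the last-seen curb position through the blind spot with odometry; whatever error was in the initial estimate rides along uncorrected:

```python
import math

def curb_in_vehicle_frame(curb_xy, pose):
    """Re-express a curb point, last seen before entering the blind spot, in
    the car's current frame using the dead-reckoned pose (x, y, heading).
    No camera update is possible while the curb is occluded, so any error
    in the original estimate carries through unchanged."""
    px, py, heading = pose
    dx, dy = curb_xy[0] - px, curb_xy[1] - py
    cos_h, sin_h = math.cos(heading), math.sin(heading)
    return (cos_h * dx + sin_h * dy, -sin_h * dx + cos_h * dy)

# Curb point last seen 3 m ahead and 1.5 m to the right; the car has since
# moved 2 m forward while yawing 15 degrees into the turn.
print(curb_in_vehicle_frame((3.0, -1.5), (2.0, 0.0, -math.radians(15.0))))
```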
I'm just skeptical that adding additional cameras would help if your computer vision system can't even tell the position of a curb ten feet away. It just doesn't make sense to me that being able to see fixed objects up until the moment of impact would make any significant difference in the progress towards FSD. I do think having zero blind spots will be necessary for L5, though. Humans sometimes have to get out of the car and survey their surroundings, though that could also be solved by my frunk Optimus idea.
It certainly helps for automated parking, and a tight right/left turn is pretty much the same thing. That's why parking cameras are implemented this way. If what you say were true, there would be no need for parking cameras.
 
Or the perception NN can't tell the position of a curb 10 feet away, or a bollard directly in front of the car, because the state of the art in camera-based computer vision isn't sufficient for self-driving, and nobody knows when it will be.
 