Hi All,
One thing that makes me a bit nervous is the AP blind spots that Tesla seems to have at the front two corners. Cars sometimes "disappear" from my screen as I pass them (albeit less frequently now with the latest software update), and I think this is due, at least at times, to these blind spots. If you look at the videos posted by Green, or at the Tesla Autonomy Day video, you'll notice that the actual image the fisheye sees is much smaller than the area the autopilot paints as drivable space. In other words, the fisheye camera's field of view is constrained by the camera housing on the sides and bottom (you have to look very closely to see this, because at first it looks like the camera sees more than it really does). So objects that are close to the car and low, such as the wheels of other cars, will "disappear" after they pass out of the fisheye camera's view and before they enter the pillar cams' view.
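To make the coverage-gap idea concrete, here's a minimal 2D sketch. All positions, headings, and FOV angles below are invented for illustration (they are not Tesla's actual camera specs); the point is just that two cones angled apart can leave a wedge near the front corner that neither one covers.

```python
import math

def in_fov(cam_xy, cam_heading_deg, half_fov_deg, point_xy):
    """Return True if point_xy falls inside a camera's horizontal FOV cone.

    Coordinates: x is forward, y is to the right, in meters.
    cam_heading_deg is the direction the camera faces (0 = straight ahead),
    half_fov_deg is half of its horizontal field of view.
    """
    dx = point_xy[0] - cam_xy[0]
    dy = point_xy[1] - cam_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed angle between the bearing to the point and the heading.
    diff = (bearing - cam_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= half_fov_deg

# Hypothetical layout: a forward fisheye whose housing clips it to ~120°
# total, and a right pillar cam angled outward. Numbers are made up.
fisheye = ((0.0, 0.0), 0.0, 60.0)    # at the windshield, facing forward
pillar = ((-0.5, 0.9), 90.0, 45.0)   # behind and to the right, facing sideways

def visible(p):
    """True if either camera's cone covers the point."""
    return in_fov(*fisheye, p) or in_fov(*pillar, p)

print(visible((2.0, 0.5)))  # well ahead: fisheye covers it -> True
print(visible((0.0, 2.0)))  # abeam the pillar: pillar cam covers it -> True
print(visible((0.3, 1.2)))  # just off the front-right corner -> False (the gap)
```

A real camera model would also account for mounting height and vertical FOV (which is what matters for low objects like wheels), but even this flat sketch shows how a corner wedge can fall between the two cones.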
It is curious why Tesla didn't slightly widen and extend the bottom of the camera housing opening, as this would have improved the field of view, particularly to the sides. And I realize the camera still sees "most" of a passing car, but in the case of very small cars (Smart cars) or other small objects, it's still curious and disconcerting.
I'm no fan of XPeng, but they seem to have much better coverage, as evidenced by their video.
I don't know why I raise this other than that it's curious (or perhaps someone has video showing that Tesla's cameras' field of view is somehow wider than what these videos portray?).
Tesla's cameras, via greentheonly (many other great examples on his YouTube channel):
Xpeng: