Welcome to Tesla Motors Club

FSD Beta Videos (and questions for FSD Beta drivers)

The recent AI Day presentation demonstrated they do have cross camera object recognition (recognizing objects that straddle two cameras) instead of relying on stitching.
Well I was a little confused on that point. I think that it is a kind of stitching, but maybe not at the overlapping-pixel level the way many stitching algorithms work. Note that Karpathy made a point that it's essential for them to calibrate out the slight alignment variances in each car (he made the amusing comment "all of our cars are slightly cockeyed, in a slightly different way"), and he did say the images are "stitched up".

I would certainly agree that more overlap, and less distorted overlap, would help a lot in creating the continuous panoramic view more quickly and confidently. And I've been a big proponent of more and better camera angles. However, it's also true that our brains are very talented at filling in a world view from fairly distorted and low-resolution peripheral vision, based on anchor objects rather than a meshing of fine-grained detail. So in the NN, it may not be a case of being presented with a nicely stitched 360 video feed, but rather a poorly-stitched and somewhat imperfect panorama that still contains the elements necessary to extract the needed features.
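To make the "calibrating out the cockeyed cameras" idea concrete: rectification can be sketched as back-projecting pixels through the camera intrinsics, applying a small per-car correction rotation found during calibration, and re-projecting. This is only a minimal NumPy illustration, not Tesla's actual pipeline; the function name, intrinsic matrix, and correction rotation are all assumptions.

```python
import numpy as np

def rectify_points(pixels, K, R_correction):
    """Map pixel coordinates into the canonical (ideal) camera frame.

    pixels:       (N, 2) array of pixel coordinates from the as-built camera.
    K:            3x3 intrinsic matrix of the camera.
    R_correction: 3x3 rotation that undoes this particular car's slight
                  mounting misalignment (estimated during calibration).
    """
    ones = np.ones((pixels.shape[0], 1))
    homog = np.hstack([pixels, ones])       # (N, 3) homogeneous pixels
    rays = np.linalg.inv(K) @ homog.T       # back-project pixels to rays
    rect = K @ (R_correction @ rays)        # rotate rays, re-project
    return (rect[:2] / rect[2]).T           # normalize back to (N, 2)

# With an identity correction, points pass through unchanged; a real
# per-car correction would be a small rotation estimated against known
# scene structure.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
pts = np.array([[100.0, 200.0], [320.0, 240.0]])
rectified = rectify_points(pts, K, np.eye(3))
```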
 
The recent AI Day presentation demonstrated they do have cross camera object recognition (recognizing objects that straddle two cameras) instead of relying on stitching.

maybe I misunderstood, but I thought they used the cameras to build a 'world model' and then display from that.

to create that model you DO stitch things together. how else would you?
Start at 2:25 or so -- rectification -- in the raw video from AI Day (but there's a further breakdown video at the bottom).


Also a good article on the steps -- Tesla's Autopilot Explained! Tesla AI Day in 10 Minutes

Great breakdown video by the article author:
 
maybe I misunderstood, but I thought they used the cameras to build a 'world model' and then display from that.

to create that model you DO stitch things together. how else would you?
I thought you were referring to traditional camera stitching, where the images across cameras are first stitched into one continuous image before any work is done on them. For this you do need considerable overlap to get a decent stitch. This is usually done for human visualization tasks, as humans expect to see a continuous image, and it makes for a nice visual for presentations.
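As a toy illustration of why traditional stitching wants overlap: the shared columns are what you blend across, and with no shared columns there is nothing to match or fade. A minimal sketch, assuming the alignment between images is already known (real stitchers also estimate a homography from matched features first):

```python
import numpy as np

def stitch_horizontal(left, right, overlap):
    """Naively stitch two equal-height images sharing `overlap` columns.

    The shared columns are linearly cross-faded; without genuine overlap
    there is nothing to blend and the seam shows.
    """
    blend_l = left[:, -overlap:].astype(float)   # right edge of left image
    blend_r = right[:, :overlap].astype(float)   # left edge of right image
    w = np.linspace(1.0, 0.0, overlap)           # fade from left to right
    blended = blend_l * w + blend_r * (1.0 - w)
    return np.hstack([left[:, :-overlap],
                      blended.astype(left.dtype),
                      right[:, overlap:]])

# Two 2x4 toy "images" sharing 2 columns stitch into a 2x6 panorama.
left = np.ones((2, 4))
right = np.full((2, 4), 3.0)
panorama = stitch_horizontal(left, right, overlap=2)
```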

However, if you are referring to a common "world model" as "stitching", that's a completely different matter. Note, however, that for this kind of "stitching" you don't necessarily need any overlap at all. You just need a way to map objects across different cameras into the same space. This works even with gaps or blind spots (think of how you handle blind spots in visibility in the car yourself).
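A minimal sketch of that idea: if each camera's pose relative to the car is known, a ground-plane detection from any camera can be mapped into one shared frame and associated there, with no pixel overlap required. The camera names, yaw angles, and offsets below are purely illustrative assumptions, not real Tesla extrinsics:

```python
import numpy as np

# Hypothetical per-camera poses in the car's shared frame
# (all values illustrative only).
CAMERA_POSES = {
    "left_repeater":  {"yaw_deg": 90.0,  "offset": np.array([0.0, 0.9])},
    "right_repeater": {"yaw_deg": -90.0, "offset": np.array([0.0, -0.9])},
}

def to_world(camera, xy_in_camera):
    """Rotate and translate a ground-plane point from one camera's local
    frame into the car's common 'world model' frame."""
    pose = CAMERA_POSES[camera]
    a = np.deg2rad(pose["yaw_deg"])
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return rot @ np.asarray(xy_in_camera, dtype=float) + pose["offset"]

def same_object(p_world, q_world, tol=0.5):
    """Associate two detections if they land close in the shared frame."""
    return bool(np.linalg.norm(p_world - q_world) < tol)

# Detections from two non-overlapping cameras land in one frame and can
# be compared directly there.
p = to_world("left_repeater", [1.0, 0.0])
q = to_world("right_repeater", [1.0, 0.0])
```

The point of the sketch is that association happens by distance in the shared frame, so cameras with zero shared pixels can still agree on where an object is.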
 
Seriously? He's once again moving his lips, but the truth isn't there. Not even close. Funny thing is, he probably believes his own lies. The man needs serious help.

Maybe we need an X-File investigation into FSD?
You have to be wildly optimistic to do the kinds of things Musk is doing. Everything he has achieved until now was considered highly improbable, if not "impossible" -- the first new US auto company to survive in 80 years, for example.

Not sure he is the one who needs serious help, if you can't understand this.
 
I recall that in metro Amsterdam, freeway speeds were dynamic and changed based on conditions ahead. This was back in 1998.
Don't the concrete lane dividers on Chicago's Lake Shore Drive raise and lower automatically in response to the direction of rush hour traffic?

Anyway, the True Defenders of Tesla™️ on Twitter are weary of debating the camera issue:


You can beat a dead horse but it still won't properly execute an unprotected left turn.
 
What I find puzzling is that the Model Y launch was an opportunity to improve upon the number and location of the cameras. If Tesla truly thought the cameras would limit FSD, why not make changes then? Is it possible, then, that there isn't a problem? Or would any new camera changes create a firestorm with existing owners?
 
What I find puzzling is that the Model Y launch was an opportunity to improve upon the number and location of the cameras. If Tesla truly thought the cameras would limit FSD, why not make changes then? Is it possible, then, that there isn't a problem? Or would any new camera changes create a firestorm with existing owners?

Cameras in identical form-factor headlights? Wiring would obviously need to be addressed, but they would obviate the need for body changes if front-edge visibility is the main issue.
 
Cameras in identical form-factor headlights? Wiring would obviously need to be addressed, but they would obviate the need for body changes if front-edge visibility is the main issue.
I still think headlights are a bad idea, even ignoring that there might be glare when the headlights are on. That seems like a more vulnerable position, and may be a recipe for more frequent, expensive replacements. Cruise did something similar early on, with sensors in the corners of the vehicle (which struck me as a very bad idea, since they looked easily damaged), but in later iterations they moved them up near the mirrors.

The easiest location remains the side marker/repeater lights, given they already have cameras there. They might not even need new wiring if they can multiplex the signal. The other place that might be relatively easy to swap out is the side mirrors, but those would need new wiring.
 
I would try my best to 'protect' the cameras. Right out front on the headlights is too exposed (breakage or collisions, even small ones). I'd want my design to survive as many collision angles as possible and still have a functioning camera.

if there is natural cleaning due to airflow, that's also a motivation for placement.

if they ever do increase camera count, I so hope they take it SERIOUSLY, with wipers, heaters (real ones, not just PCB components heating up) and even dual focal lengths in each camera capsule for more views. more views is always better. always.

and they do need polarizers or ND filters that are switchable. electronic ones are not cheap, but they do the job.

tesla: you really need to up your camera game if you are going to put lives at stake for Level 2 and higher.
 
Elon did NOT say 2 weeks this time...so...does that mean this Friday, or earlier, or later?

 
To be fair, Elon did not delay the public button opt-in date, just the date for the FSD Beta group, since he shortened the time between the FSD Beta group and the public button. "so we will need another few weeks after that for tuning & bug fixes. Best guess is a public beta button in ~4 weeks" has now been changed to "looks promising that Beta 10.1, about 2 weeks later will be good enough for public opt in request button". So your "shocking" comment is not very accurate. I take that as encouraging, for a change.

Of major note, it also looks like the 10.1 release will include the inside camera to alert drivers when they are not paying attention. That would be a significant enhancement to address the criticism that the steering wheel nag is not sufficient, which comes up often when NoA/FSD gets compared to Supercruise. 2021.32.5 Official Tesla Release Notes - Software Updates
 