FSD Beta Videos (and questions for FSD Beta drivers)

what are the repeaters? Do you mean the side cameras?
The repeaters are the cameras on the front fenders. Tesla officially calls them that, and it's standard industry terminology, since the repeater lights are located there. They're called repeaters because they repeat the turn signal.


Tesla and I don't call them just "side cameras" because that's ambiguous - it could refer to either set of side-facing cameras (including the B-pillar cameras).

 
@Chazman92 Something I'm really curious about is the behavior exiting long offramps. I know the highway code is supposed to be the same as in the production firmware, but I'm curious if it slows down any differently. On prod builds, NoA moves the set speed down in 5 mph increments, so the car slows suddenly, holds that speed for a bit, then slows by another 5 mph, over and over until you're at the end of the ramp. Is it any smoother in FSD Beta, or does it still ratchet down in 5 mph increments?
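To make the contrast concrete, here's a toy sketch of what I mean (the step size, hold time, and ramp length are just my guesses from watching the behavior, not anything from Tesla):

```python
# Purely illustrative: contrasts a stepped set-speed reduction (what NoA appears to
# do on prod builds) with a continuous ramp over the same offramp. All numbers are
# made up for the example.

def stepped_profile(start_mph, end_mph, step=5, hold_s=4.0):
    """Set speed drops in fixed increments, holding each value for a few seconds."""
    t, speed, profile = 0.0, start_mph, []
    while speed > end_mph:
        profile.append((t, speed))
        speed = max(end_mph, speed - step)
        t += hold_s
    profile.append((t, end_mph))
    return profile

def smooth_profile(start_mph, end_mph, duration_s, dt=1.0):
    """Set speed ramps down continuously over the whole offramp instead."""
    n = int(duration_s / dt)
    return [(i * dt, start_mph + (end_mph - start_mph) * i / n) for i in range(n + 1)]

print("stepped:", stepped_profile(65, 35))
print("smooth: ", [(t, round(v, 1)) for t, v in smooth_profile(65, 35, 24)])
```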
 
wireless short range is very reliable
No, not really. I've tried to run single-digit Mbps video signals (NDI | HX) over gigabit Wi-Fi, and even at single-digit feet inside a masonry church with zero interference from other Wi-Fi signals, it is just not reliable enough to work without random dropouts every so often. And that's with a massively MIMO system that uses 4+ antennas to minimize multipath interference, massively boost SNR, etc. That's why I'm the proud owner of about six hard-wired Ethernet adapters for iPhone/iPod Touch devices.

I've had problems with high-end MIMO wireless audio gear at five feet, with wireless video transmitters at ten feet, etc., even in environments with low noise. The problem is that failures with wireless are random and unpredictable.

A cord can be damaged just as easily as a wireless link can be lost. There's no redundancy in either system.
A cord can be damaged, but it typically cannot be jammed wirelessly from a hundred feet away.
 
No, not really. I've tried to run single-digit Mbps video signals (NDI | HX) over gigabit Wi-Fi, and even at single-digit feet inside a masonry church with zero interference from other Wi-Fi signals, it is just not reliable enough to work without random dropouts every so often.

You need to get a better AV technician... the guy you're using mustn't be very good. And there are better technologies than Wi-Fi.


I use this every weekend and never have a dropout unless I’m 2 miles down range.

A cord can be damaged, but it typically cannot be jammed wirelessly from a hundred feet away.
So then the question becomes: who with the capability to do that would actually do it, and then, so what?
 
Tesla #FSDBeta B-Pillar and Headlight Camera Analysis
FSDBeta 9.1 - 2021.4.18.13

I am growing increasingly sceptical that FSD can be delivered without sideward-looking cameras in the front bumpers. But that would be a huge undertaking: a million cars to retrofit and rewire, and the NNs to retrain to take feeds from the additional cameras. There is no good solution here. Re-routing to avoid unprotected lefts is not a solution.
 
I am growing increasingly sceptical that FSD can be delivered without sideward-looking cameras in the front bumpers. But that would be a huge undertaking: a million cars to retrofit and rewire, and the NNs to retrain to take feeds from the additional cameras. There is no good solution here. Re-routing to avoid unprotected lefts is not a solution.

I think this is the best placement of a B-pillar camera:

 
Tesla #FSDBeta B-Pillar and Headlight Camera Analysis
FSDBeta 9.1 - 2021.4.18.13

Thank you Chuck, great work!

I feel like this is a confirmation / vindication of my prior posts regarding the angle-of-view benefits of cameras integrated into the headlight module. That forward-placed geometry benefit is the main motivation, but there are other good reasons to seriously consider the headlight module location:
  • It solves the main mechanical retrofit objections that a lot of people here are posing, regarding "cutting holes" in the bumper or fender. Swapping out headlight modules requires no such body modification.
  • It's IMO the best available position on the car (leaving out silly domes or little towers or the like). On the corner, as high as possible off the road and its mud/grime splatter, already worked into the aerodynamic (drag coefficient) design.
  • Headlight lenses are already designed to shed water and grime within reason, and if that's insufficient, cheap washer and/or wiper solutions are already known and available in the industry.
We've also discussed the harness/bus and computer-feed retrofit issues, and there are potential solutions that don't have to involve gutting the car's wiring or necessarily adding video channels.

I wouldn't trivialize this, but it's my feeling that Tesla could do this in a reasonably backwards-compatible way if they decided they wanted to. Otherwise, I'd love to see someone from Tesla explain why the existing camera placements present no limitations. I think it's clear that they do, and Chuck's video shows how much better the view could be before you intrude into traffic - especially considering that purpose-built corner cameras would have a somewhat narrower field of view than the GoPro he used, i.e. higher magnification for better detail at distance.
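To put some rough numbers on that geometry argument, here's a minimal similar-triangles sketch. Every distance in it is hypothetical (including the ~2 m setback difference between a B-pillar and a headlight-module camera); it's only meant to show the trend, not to measure any actual car:

```python
# Rough geometric sketch (all numbers hypothetical) of why a camera's longitudinal
# position matters at an obstructed intersection. Coordinates: +y forward, +x left,
# camera at the origin. A sight obstruction (hedge, parked car) has its near corner
# at (x_obs, y_obs); cross traffic travels along the line y = y_lane.

def visible_distance_down_cross_street(x_obs, y_obs, y_lane):
    """How far (laterally) down the cross street the camera can see past the
    obstruction corner, by similar triangles."""
    return x_obs * y_lane / y_obs

# Hypothetical layout, measured from the B-pillar camera:
x_obs = 1.5    # obstruction corner 1.5 m to the left
y_obs = 4.0    # ...and 4 m ahead of the B-pillar camera
y_lane = 10.0  # cross-traffic lane 10 m ahead of the B-pillar camera

# Assume a headlight-module camera sits roughly 2 m farther forward, so it is
# 2 m closer to both the obstruction and the cross lane.
forward_offset = 2.0

b_pillar_view = visible_distance_down_cross_street(x_obs, y_obs, y_lane)
headlight_view = visible_distance_down_cross_street(
    x_obs, y_obs - forward_offset, y_lane - forward_offset)

print(f"B-pillar camera sees ~{b_pillar_view:.1f} m down the cross street")     # ~3.8 m
print(f"Headlight camera sees ~{headlight_view:.1f} m from the same position")  # ~6.0 m
```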

Thanks again, great video!
 
I am growing increasingly sceptical that FSD can be delivered without sideward-looking cameras in the front bumpers. But that would be a huge undertaking: a million cars to retrofit and rewire, and the NNs to retrain to take feeds from the additional cameras. There is no good solution here. Re-routing to avoid unprotected lefts is not a solution.
One potential solution I can think of is to add a swivel and swivel controller to the front narrow 250 m camera. If that camera could turn and look to the sides, problem solved, since it sees far and sits higher up. This might also be a simpler solution.
 
One potential solution I can think of is to add a swivel and swivel controller to the front narrow 250 m camera. If that camera could turn and look to the sides, problem solved, since it sees far and sits higher up. This might also be a simpler solution.
This is also great thinking about a potential solution. But I would point out the implication that in order to use such a real-time adaptive camera (or more generally, adaptive sensor), there would have to be a fairly fundamental architectural change in the Neural Network, including new hardware-control output paths.

Right now, the perception NN receives the multi-camera input (possibly somewhat pre-processed by stitching software in another NN or a more conventional video-merging block) and analyzes the surround or Bird's Eye View. Presently though, this has an entirely fixed relationship to the vehicle position. It then processes the imagery to recognize and label objects, including aspects of their time-domain past and predicted future paths. I think it's also now explicitly farming out some of the distance-extraction work to yet another "pseudo-lidar" NN, though I don't claim to know the real details of the architecture and its explicit vs. implicit blocks within the Mind of the Car. But the point is that it's not currently set up the way humans and animals are, where there are feedback loops to allow optimization of the sensor position or output. We humans constantly move our eyes and swivel our heads to augment our visual perception. Animals like cats & dogs can move their ears independently, but we can only reposition our heads or cup our hands to locate or isolate sound.

The Tesla Vision NN, I think, is not currently architected with the capability to request camera pan, tilt or zoom for a real-time adjustment of its imaging input. Of course it could be done but I'm saying that it probably involves a deeper redesign of the "brain" map and training protocols, with some kind of handler NN that knows when the main perception block needs a better angle-of-view or possibly higher magnification. Part of such a redesign would add the physical I/O channels to enable that in external hardware.
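To make concrete what kind of new output path I'm imagining, here's a minimal, entirely hypothetical sketch of the sort of interface such a handler block might expose; none of these names come from Tesla's software, and a real implementation would obviously live far below Python:

```python
# Hypothetical sketch of the closed "attention" loop today's pipeline lacks:
# the perception block emits a view request, and a hardware handler translates it
# into a pan/tilt/zoom command and reports the resulting pose back so new frames
# can be re-projected correctly.
from dataclasses import dataclass

@dataclass
class ViewRequest:
    pan_deg: float   # desired camera yaw relative to the vehicle centerline
    tilt_deg: float  # desired pitch
    zoom: float      # desired magnification factor
    reason: str      # e.g. "occluded cross traffic on the left"

class GimbalHandler:
    """Stand-in for the new I/O channel: accepts requests and drives the actuator."""
    def __init__(self):
        self.pan, self.tilt, self.zoom = 0.0, 0.0, 1.0

    def apply(self, req: ViewRequest) -> dict:
        # A real handler would rate-limit, respect mechanical range, and timestamp
        # the pose; here we just store and echo it.
        self.pan, self.tilt, self.zoom = req.pan_deg, req.tilt_deg, req.zoom
        return {"pan_deg": self.pan, "tilt_deg": self.tilt, "zoom": self.zoom}

# Example: the planner wants a better look to the left before an unprotected turn.
handler = GimbalHandler()
print(handler.apply(ViewRequest(pan_deg=-70.0, tilt_deg=0.0, zoom=3.0,
                                reason="occluded cross traffic on the left")))
```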

I will say, however, that there's indeed some evidence of very crude attempts by the car to improve its camera-angles. First, the whole creeping-forward action to compensate for the lack of real side-looking cameras that are well forward of the human driver (or at least as far forward as a driver can lean today). Second, less clear though, is the behavior where the car angles uncomfortably leftward while considering high-speed oncoming traffic - that could be designed to point the narrow-view center camera. (I hope there's some good reason, as it's otherwise a very poor driving practice!) These awkward behaviors are, to me, indications that the system needs better camera angles but is very poorly equipped to get them and currently very slow to process the results.

So yes, perhaps some pan/tilt/zoom camera hardware, but properly combined with a modified perception NN that knows how to use it efficiently and with low latency. As an alternative to motorized hardware: a couple more fixed cameras, plus a general upgrade to 4K-ish sensors and lenses. Not to overwhelm the general driving NN with a ridiculous and unnecessary ultra-res bitstream, but as another approach to allow adaptive digital zoom-in when useful. The main NN could continue with down-sampled, modest-resolution video over the whole panoramic field, but with the ability to consider one or more virtual high-resolution regions that supplement the perception. The detail region(s) would typically cover forward vision but, as needed, could point to either side at any angle. This is an implementation of human-like pointable central-vision (fovea) detail, but potentially super-human in handling more than one angle for multiple detailed views and in shifting the high-attention view more rapidly.
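As a toy illustration of that foveated idea (the frame size, crop size, and NumPy plumbing are all my own assumptions, nothing from Tesla's stack):

```python
# Toy foveation sketch: hand the main network a downsampled panorama plus one or
# more full-resolution crops ("virtual foveae") around angles of interest.
import numpy as np

def downsample(frame, factor):
    """Cheap downsample by striding; a real pipeline would low-pass filter first."""
    return frame[::factor, ::factor]

def fovea_crop(frame, center_xy, size):
    """Full-resolution region of interest around a requested image location."""
    cx, cy = center_xy
    h, w = frame.shape[:2]
    x0 = int(np.clip(cx - size // 2, 0, w - size))
    y0 = int(np.clip(cy - size // 2, 0, h - size))
    return frame[y0:y0 + size, x0:x0 + size]

# Pretend 4K-ish frame from one corner camera (random pixels as a stand-in).
frame = np.random.randint(0, 255, (2160, 3840, 3), dtype=np.uint8)

context = downsample(frame, factor=4)              # modest-res panorama for the main NN
left_fovea = fovea_crop(frame, (600, 1100), 512)   # detail view toward cross traffic
far_fovea = fovea_crop(frame, (1920, 900), 512)    # second detail view down the road

print(context.shape, left_fovea.shape, far_fovea.shape)
```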

There are some great possibilities, but Tesla admits no need for any of this. We're left to accept the idea that it's simply a matter of tightening up the software.
 
There are some great possibilities, but Tesla admits no need for any of this. We're left to accept the idea that it's simply a matter of tightening up the software.
Until ReallyReallyFSD™ is launched on new Teslas. "Classic" FSD will get L2 city, but let's face reality that L3 and above are extremely unlikely with the current HW3 and camera system.

Edit: This in no way means I wouldn't be thrilled with L2 "everywhere" in my two current Tessies.
 
One potential solution I can think of is to add a swivel and swivel controller to the front narrow 250 m camera. If that camera could turn and look to the sides, problem solved, since it sees far and sits higher up. This might also be a simpler solution.
Note this is not really possible with the current triple-cam setup. The image sensors are all on the same control board and can't be moved independently. You could have the lenses swivel, but that would cause issues for light transmission (the light would be off-axis) and focus uniformity (like a tilt/shift lens).

And if you swivel the whole assembly, you pretty much would have to redesign the whole housing. At that point, it makes more sense to just have 5 cameras and use a switch to select between camera signals (if bandwidth does not allow them all simultaneously). In general, any physical moving mechanism is another source of failure, so I think Tesla will want to avoid that.
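If bandwidth really were the constraint, the switching idea could be as simple as a priority table; this is a hypothetical sketch, and the camera names and the three-stream budget are my own assumptions:

```python
# Hypothetical multiplexer: five fixed cameras share a link that can only carry a
# few simultaneous streams, so a selector forwards whichever feeds the current
# maneuver needs most.
MAX_SIMULTANEOUS_STREAMS = 3

ALL_CAMERAS = ["main", "narrow", "wide", "left_corner", "right_corner"]

PRIORITIES = {
    "unprotected_left": ["left_corner", "right_corner", "narrow", "main", "wide"],
    "highway_cruise":   ["main", "narrow", "wide", "left_corner", "right_corner"],
    "parking":          ["wide", "left_corner", "right_corner", "main", "narrow"],
}

def select_feeds(maneuver):
    """Return the camera feeds to forward, truncated to the link's stream budget."""
    ranked = PRIORITIES.get(maneuver, ALL_CAMERAS)
    return ranked[:MAX_SIMULTANEOUS_STREAMS]

print(select_feeds("unprotected_left"))  # ['left_corner', 'right_corner', 'narrow']
print(select_feeds("highway_cruise"))    # ['main', 'narrow', 'wide']
```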

[Image: Tesla triple forward camera part collection - System Plus Consulting]
 

The car tries to drive into a wall at the start. DirtyTesla has tested several times, across several versions, whether the car can handle the parking structure at the beginning, and it consistently fails.

9:20 - Interesting moment where the car seems to recognize it's in the left-turn lane and, instead of forcing its way straight, it ignores the nav, turns left, and re-routes. But when the car loops around and arrives here again, it does go straight from the left-turn lane the second time around.

12:00 - doesn’t see a row of planters along the side of the it is turning into street and steers right towards them. Driver takes over.
 
For those of you who like Twitter, greentheonly entered the conversation yesterday regarding whether more front cameras are needed to make FSD work under all conditions.

Keep in mind that this is not a new issue. Some Tesla owners have been complaining for years about the potential ULT blind spot in the camera array. It's only because of recent discussions, both here and elsewhere, that the topic has resurfaced. I think Chuck Cook's videos also demonstrate the issue quite clearly. A recent poll of Tesla users (also on Twitter) showed that about half of the respondents believed the existing camera array was sufficient for implementing FSD, while the rest said it either wasn't good enough or they weren't sure. Fortunately, engineers don't design products solely based on user polls.
 
One potential solution I can think of is to add a swivel and swivel controller to the front narrow 250 m camera. If that camera could turn and look to the sides, problem solved, since it sees far and sits higher up. This might also be a simpler solution.
I've been saying this, too. The human can focus and point their eyes. If we are to continue with the FALSE analogy that "people use eyes to drive, so cars should be able to, too," then at least give the car a fighting chance: pan, tilt, zoom - and ND filters as well.

If you are going vision, do it RIGHT, dammit. Don't half-ass it.
 
I've been saying this, too. The human can focus and point their eyes. If we are to continue with the FALSE analogy that "people use eyes to drive, so cars should be able to, too," then at least give the car a fighting chance: pan, tilt, zoom - and ND filters as well.

If you are going vision, do it RIGHT, dammit. Don't half-ass it.

Cruise has implemented this very concept with their radar. They have radars located where the side mirrors would be, which can rotate to point in different directions as needed while the car is driving.

 