Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Almost ready with FSD Beta V9

I mean, it literally does, but ok.
If it were literally facing forward, it would be able to see directly ahead as well as forward at an angle. It can't see 0 degrees forward plus angles off to either side; it's somewhere between side-facing and forward-facing. Judging by the Autopilot graphic, the side cameras have zero overlap with the main front-facing camera.
 
If it were literally facing forward, it would be able to see directly ahead as well as forward at an angle. It can't see 0 degrees forward plus angles off to either side; it's somewhere between side-facing and forward-facing.


You seem pretty unclear on what 'forward' means.

Its entire field of view is in front of the midline of the car.

Facing the direction the car is travelling; toward the front.

That's forward.
 
@Knightshade Appreciate all the posts today on this topic. One question: if the radar input is removed from FSD, does that mean it is removed from all input factored into the computer? Like accident avoidance when a person's vision fails to detect risks. Just curious. Again, thanks for the thoughtful posts.


Right now we don't know for sure- since the software that switches to 100% vision only isn't available yet outside of internal testers (who aren't speaking about it- other than Elon anyway).

If we take Elon at his most basic meaning, yes it means they'd ignore all radar input period (and presumably remove the HW entirely from future cars at some point)

His tweets suggest they plan to double down on vision as good enough to avoid accidents a person's own vision might miss, rather than trying to figure out when to believe a second type of sensor data over the vision input.



That said- because Tesla has gone down dead-end paths before, initially thinking they were better- I think it's likely the HW will remain on new vehicles for a while yet, in case it turns out this is another example of a mistaken path.




What about fog and heavy rain? Obviously one shouldn't rely on electronics to stay safe when they can't see the car in front of them, but it does a great job, and it seems like we will get a lot of "camera is blocked" errors? Currently I look at it as an added layer of safety.

It is.

Likewise it is in regard to seeing ahead of a car in front of you- and we've seen actual real world accidents prevented by having this ability.



They're choosing to sacrifice that for what they believe are gains worth giving that up for.

It's possible they're right-- if vision only gets you a system that is 500% safer than a human driver, and you can have it this year.... is that worth giving up a fusion vision/radar system that is 550% safer than a human but is years away from being workable because such fusion, especially with the current sensors, is MUCH harder to get working right? Probably it is?

But we won't know what the real #s are for a while yet on the vision only system.... (and presumably will never know what the real #s on the fusion one would be since it won't exist)
 
Likewise it is in regard to seeing ahead of a car in front of you- and we've seen actual real world accidents prevented by having this ability.

It seems that with their vision-only system they’ll just have to do what a human does:

1) Deviate from lane-centered position to allow visibility around vehicle in front.

2) If close following is detected, increase following distance.

3) Probably in general increase following distance, and offset & vary centering in the lane, as this makes it easier to see around vehicles in front. Just like a human!

4) With the central positioning of the front-facing cameras, it will be a little trickier than it is for a human (who will tend to deviate to the left side of the lane for left-hand drive, since it only requires a small adjustment), and I'd expect it would need to get an additional 1-2 feet to the left of where we would be used to traveling in a lane. This seems a little problematic but perhaps it is possible. The b-pillar forward-facing cameras would not have quite the field of view needed to help in this specific endeavor, unfortunately (based on images above, assuming they show the whole field of view).

Not particularly necessary in low speed situations, but as speeds increase, these strategies will be important to get to better-than-human-level safety, and we should look to see them being implemented (I would expect them to show up maybe 6 months to a year after initial release of V9 to limited group).

In theory there is no reason that appropriate cameras and driving policy could not do better than a human, and possibly do as well as the radar (which I do not believe was 100% effective at detecting slowing of the vehicle in front of the lead vehicle - it depends on the lead vehicle!). Specifically in cases of a large leading vehicle, that’s when radar would be less likely to detect the next vehicle, and it’s when deviating lane position and increasing following distance would help improve visibility and increase safety margin. In theory. I guess we will see what they can do!
 
It seems that with their vision-only system they’ll just have to do what a human does:

1) Deviate from lane-centered position to allow visibility around vehicle in front.

This is an inferior solution for several reasons-

Sometimes a big/wide vehicle is in front of you, and a small/narrow one in front of him; no amount of jinking a little left/right within your lane will put that 2nd car in view of any camera on your car- but the radar would see it fine, and notice if it was hard braking (or stopped) before the guy in front of you might.... (exactly the case in the accident we saw a Tesla avoid, thanks to radar)

Also sometimes you don't wanna drift too far to one side in a lane because there's an idiot who is already too far over in his own lane next to you.... or there's a bike... or there's pedestrians- or a myriad of other reasons why the lesson of 7 years of AP history has been to stay as centered as possible as the safest way to drive.


2) If close following is detected, increase following distance.

So, outside of sparsely populated areas, you'll constantly be getting cut off by other cars now... another downgrade.

The ability of Teslas to follow CLOSER than I'd seen any other TACC-like system, and thus be cut off far less often- was a huge plus the first time I ever test drove one.



3) Probably in general increase following distance, and offset & vary centering in the lane, as this makes it easier to see around vehicles in front. Just like a human!


"Well, we had it set up to work better than people, but we kinda gave up on it...."


4) With the central positioning of the front-facing cameras, it will be a little trickier than it is for a human (who will tend to deviate to the left side of the lane for left-hand drive, since it only requires a small adjustment), and I'd expect it would need to get an additional 1-2 feet to the left of where we would be used to traveling in a lane. This seems a little problematic but perhaps it is possible. The b-pillar forward-facing cameras would not have quite the field of view needed to help in this specific endeavor, unfortunately (based on images above, assuming they show the whole field of view).

Not particularly necessary in low speed situations, but as speeds increase, these strategies will be important to get to better-than-human-level safety, and we should look to see them being implemented (I would expect them to show up maybe 6 months to a year after initial release of V9 to limited group).

In theory there is no reason that appropriate cameras and driving policy could not do better than a human


Still better than a human? Sure.

it can at least see in every direction at once, and react faster.



, and possibly do as well as the radar

Since it physically can not see as much with vision as it could with vision+radar I don't see how you could ever make that case.

It will always have less info without the radar, and unlike LIDAR, it's info vision can not provide in some cases.


As I say it's possible it's "worth" giving that up to get working L3+ FSD years sooner because integrating radar with the new vision system is hard/not possible right now.... but you're objectively giving up data you can't replace to do it.

Yes humans drive without it... they drive without ultrasonics and an eye in the back of their head too but they're keeping those for the moment- remember we want MUCH BETTER than human performance.
 
Sometimes a big/wide vehicle is in front of you, and a small/narrow one in front of him; no amount of jinking a little left/right within your lane will put that 2nd car in view of any camera on your car- but the radar would see it fine, and notice if it was hard braking (or stopped) before the guy in front of you might.... (exactly the case in the accident we saw a Tesla avoid, thanks to radar)

Yep. As I said that is a tough situation, have to really fall back just to understand the situation, and then can close back up if following distances are acceptable and circumstances allow for it. Or just pass. As I said, I just don’t count on the radar for this under any circumstances. If the visibility is not there, I make it visible, or add extra time.

Also sometimes you don't wanna drift too far to one side in a lane because there's an idiot who is already too far over in his own next to you.... or there's a bike...or there's pedestrians- or a myriad of other reasons 7 years of AP history has been to stay as centered as possible as the safest way to drive.

Yep. That’s why I try not to drive next to anyone. But yes, as I said, I do see this being potentially problematic for the centerline cameras and modified policy. It may be necessary to fall back further than a human on the left side of the car would.

So, outside of sparsely populated areas, you'll constantly be getting cut off by other cars now... another downgrade.

The ability of Teslas to follow CLOSER than I'd seen any other TACC-like system, and thus be cut off far less often- was a huge plus the first time I ever test drove one.

Meh. I’m still waiting for following distance 10-14 - in the city! Aggression will be controlled by following distance of course.

Since it physically can not see as much with vision as it could with vision+radar I don't see how you could ever make that case.

It will always have less info without the radar, and unlike LIDAR, it's info vision can not provide in some cases.

Sorry I was not clear. I was comparing to the current system including the radar, not a hypothetical future system including radar. Obviously leaving radar in would be better, assuming enough resources to integrate it properly.

I think I can make the case that a full vision system with better vision capability and driving policy adjustments will be safer than the current system. It may not be able to detect the same amount as the radar, but with those changes and better vision and understanding of the road situation I am just saying it can be as safe at avoiding this sort of issue on average as the current system. Partially because it will pick up things the radar and current system would not detect (as a human would). This obviously is not a strong claim! If it’s not true, Tesla has big problems.

Note I am not saying vision will always be able to detect situations the radar can. There definitely will be holes, but vision-only could still be safer (on average - the old system could potentially avoid accidents the new one cannot, but the new system could avoid more accidents overall).

future vision/perception + new policy - radar > current vision/perception + radar.

With that inequality I think it is certainly possible to avoid more rear-end accidents due to stopping traffic in front of the lead car than the current system.

But to be completely clear, I don’t think they should remove radar! Obviously you could add another inequality on the left side, including the radar, and that would be even better! It allows additional flexibility and might help with some of the issues you raise above. (Though personally, I would NEVER trust the radar to detect the car in front of the lead car, or make any of my driving habits reliant on that detection being solid. I prefer to keep multiplying small numbers together. Want all numbers as small as possible when it comes to accident probability.)
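The "multiplying small numbers" point above can be sketched with a toy calculation: if two independent safeguards each have a small probability of missing a hazard, the chance that *both* miss is their product. The miss rates below are made-up illustrative numbers, not real data.

```python
# Toy sketch: independent safety layers multiply their miss probabilities.
# All probabilities here are invented for illustration only.

def combined_miss_probability(*miss_probs):
    """Probability that every independent layer misses the hazard."""
    p = 1.0
    for miss in miss_probs:
        p *= miss
    return p

vision_miss = 1e-3   # hypothetical: vision misses 1 in 1,000 events
radar_miss = 1e-2    # hypothetical: radar misses 1 in 100 events

vision_only = combined_miss_probability(vision_miss)              # 0.001
both_layers = combined_miss_probability(vision_miss, radar_miss)  # ~1e-05
```

The caveat is the "independent" assumption: if both sensors fail in the same situations (e.g. the lead car fully blocks both), the real combined miss rate is worse than the product.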
 
1. If you are following a passenger vehicle, vision can be fairly easily trained to recognize the second vehicle ahead by seeing thru the windows or by one of the other cameras. FSD has temporal consistency (will improve over time) just as your brain does, so if it sees a second car ahead, it can “remember” that and increase following distance, while also continuing to track through the windows.

2. If you are following a bus or semi truck where you cannot see the second car ahead, the mass of the truck/bus is so much larger that a collision ahead wouldn’t reduce the truck’s velocity as much due to its much higher momentum. Therefore the need for rapid braking is much reduced.
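The momentum claim in point 2 checks out with rough physics: in a perfectly inelastic collision, the combined speed is m1·v1 / (m1 + m2), so a heavy truck rear-ending a stopped car barely slows, while a car hitting a car slows dramatically. Masses and speeds below are illustrative round numbers, not real vehicle data.

```python
# Rough physics sketch (conservation of momentum, perfectly inelastic hit).
# Masses and speeds are illustrative assumptions, not real vehicle specs.

def speed_after_inelastic_hit(m_lead_kg, v_lead_ms, m_stopped_kg):
    """Speed of a lead vehicle after rear-ending a stopped vehicle."""
    return m_lead_kg * v_lead_ms / (m_lead_kg + m_stopped_kg)

truck = speed_after_inelastic_hit(20000, 30.0, 1500)  # ~27.9 m/s: only ~7% slowdown
car = speed_after_inelastic_hit(1500, 30.0, 1500)     # 15.0 m/s: a 50% slowdown
```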

I see the eventual removal of radar as a positive.

Radar is the cause of phantom braking events because radar is easily tricked by metallic objects of certain geometries.

Removing radar lowers hardware cost, repair cost, and energy consumption.

Radar provides little useful information when object density is low.

Radar bounce characteristics actually cause more trouble in tunnels. The active signal bounces off of the walls and can present a confusing picture to the sensor processing.

I have zero doubt that removing radar is the right approach going forward (I said as much a month or two ago in a different thread).

99.9% of the safety benefits that will come from FSD are an automated system’s inability to get distracted, sleepy, emotional (road ragey) or drunk. Given a vision system trained to recognize cars thru windows or around obstructions ahead as Tesla is doing, radar bouncing brings very little statistical benefit.

And I say this as someone who has driven about 50,000+ miles with autopilot engaged and has had a few brake engagements from the bounced radar system (none of which actually saved me from an accident, however).
 
1. If you are following a passenger vehicle, vision can be fairly easily trained to recognize the second vehicle ahead by seeing thru the windows or by one of the other cameras. FSD has temporal consistency (will improve over time) just as your brain does, so if it sees a second car ahead, it can “remember” that and increase following distance, while also continuing to track through the windows.

2. If you are following a bus or semi truck where you cannot see the second car ahead, the mass of the truck/bus is so much larger that a collision ahead wouldn’t reduce the truck’s velocity as much due to its much higher momentum. Therefore the need for rapid braking is much reduced.

I see the eventual removal of radar as a positive.

Radar is the cause of phantom braking events because radar is easily tricked by metallic objects of certain geometries.

Removing radar lowers hardware cost, repair cost, and energy consumption.

Radar provides little useful information when object density is low.

Radar bounce characteristics actually cause more trouble in tunnels. The active signal bounces off of the walls and can present a confusing picture to the sensor processing.

I have zero doubt that removing radar is the right approach going forward (I said as much a month or two ago in a different thread).

99.9% of the safety benefits that will come from FSD are an automated system’s inability to get distracted, sleepy, emotional (road ragey) or drunk. Given a vision system trained to recognize cars thru windows or around obstructions ahead as Tesla is doing, radar bouncing brings very little statistical benefit.

And I say this as someone who has driven about 50,000+ miles with autopilot engaged and has had a few brake engagements from the bounced radar system (none of which actually saved me from an accident, however).

VAG et al solved all of those perceived problems with RADAR years ago, which leaves the only issue with RADAR being Musky's implementation.
 
Guess what happens if your vision system becomes really good:

You will be able to track the trajectory of the 2nd vehicle ahead through the windshield of the 1st vehicle (when possible). But even when you can't:

You will have great precision in tracking velocity and acceleration of the 1st vehicle ahead of you. You just shift your follow distance such that the car can react to sudden changes in 1st vehicle velocity (regardless of what is happening in front of it).

In the end, there should still be basically no crashes as long as your vision system / distance settings are good.
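The follow-distance point above can be sketched with a quick calculation: if the following car can brake as hard as the lead car, the gap only needs to cover reaction latency (gap ≥ v × t_react). The latency values below are illustrative assumptions, not measured figures.

```python
# Sketch: minimum gap needed to absorb reaction delay, assuming the follower
# can brake as hard as the lead vehicle. Latencies are assumed, not measured.

def min_gap_m(speed_ms, reaction_s):
    """Minimum following gap (meters) to cover reaction latency."""
    return speed_ms * reaction_s

highway_speed = 30.0  # m/s, roughly 67 mph

automated_gap = min_gap_m(highway_speed, 0.25)  # hypothetical machine latency: 7.5 m
human_gap = min_gap_m(highway_speed, 1.5)       # typical human reaction: 45.0 m
```

This is why a fast-reacting system can, in principle, follow closer than a human while staying just as safe- provided its perception of the lead car's deceleration is reliable.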
 
1. If you are following a passenger vehicle, vision can be fairly easily trained to recognize the second vehicle ahead by seeing thru the windows or by one of the other cameras. FSD has temporal consistency (will improve over time) just as your brain does, so if it sees a second car ahead, it can “remember” that and increase following distance, while also continuing to track through the windows.
Do we know that for certain - that FSD actually learns over time? My greatest concern with FSD/NoA is that it is entirely reactionary and not at all predictive. It doesn't seem to take the long view of the road, just what's immediately in front of it. I liken it to the skills of a first-year driver. It can obey traffic controls nicely, but dealing with other (human) drivers and their random actions is problematic.

As someone who's driven for 50 years, I've got a well-honed set of situational awareness skills that allow me to predict problem situations. I haven't seen that degree of learning evident in FSD yet. There's a video of a Tesla highway accident on Twitter where the cars in the adjacent lane panic-brake, but the Tesla proceeded happily in its lane at 70mph til a car swerved directly in front of it. Bang. Anyone who watched that video commented that they would have braked or slowed down immediately when they saw trouble in the adjacent lane about 3 seconds earlier. 3 seconds at 70mph is about the length of a football field.
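The football-field figure holds up; the unit conversion is quick to check:

```python
# Checking the arithmetic: distance covered in 3 seconds at 70 mph.
FT_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

speed_ftps = 70 * FT_PER_MILE / SECONDS_PER_HOUR  # ~102.7 ft/s
distance_ft = speed_ftps * 3                      # ~308 ft, vs a 300 ft playing field
```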

Until all cars are automated, humans will be the largest safety hazard. It would be cool if FSD evolved to the point where it possessed the sum total of all the best techniques and habits of the best human drivers to cover every situation.
 
Do we know that for certain - that FSD actually learns over time?

That depends what you mean.

The FSD on your particular car absolutely does not learn

At all. Ever.

Instead it feeds data back to the mothership, where humans label the data, and feed it to the NNs for additional training.... and then once the training update is the way they want it, they send out an update to the fleet- which will change the perception done by those NNs (and may also change some of the other stuff besides perception, none of which uses NNs at all right now)- only then will your own car's behavior change.

(Well, technically map updates can change it, but that's less relevant to your question and still requires an update from HQ)
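The loop described above- data to the mothership, human labeling, retraining, OTA update- can be sketched as a toy cycle. Every name here (`human_label`, `train_perception_nn`, `fleet_update_cycle`) is a made-up placeholder, not a real Tesla API; the point is only that learning happens centrally and reaches cars via software updates.

```python
# Toy sketch of central training + fleet deployment. All names are
# hypothetical placeholders; no individual car learns locally.

def human_label(clip):
    # placeholder: a human reviewer tags the clip at HQ
    return clip["ground_truth"]

def train_perception_nn(weights, labeled):
    # placeholder: pretend "training" just bumps a model version counter
    return {
        "version": weights["version"] + 1,
        "examples_seen": weights["examples_seen"] + len(labeled),
    }

def fleet_update_cycle(fleet_clips, current_weights):
    labeled = [(c, human_label(c)) for c in fleet_clips]    # humans label fleet data
    return train_perception_nn(current_weights, labeled)    # retrain centrally

weights = {"version": 9, "examples_seen": 0}
clips = [{"ground_truth": "car"}, {"ground_truth": "cone"}]
weights = fleet_update_cycle(clips, weights)  # ships to cars only as an OTA update
```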


My greatest concern with FSD/NoA is that it is entirely reactionary and not at all predictive. It doesn't seem to take the long view of the road, just what's immediately in front of it. I liken it to the skills of a first-year driver. It can obey traffic controls nicely, but dealing with other (human) drivers and their random actions are problematic.

Right now all driving code is still traditional/static code.

NN is only used for perception.

They've been saying that will be changing for a while now, but it hasn't changed in the production code yet.

It's possible the V9 everyone's waiting on will change that- remains to be seen.... (I would tend to think not- given it'll be the first widely tested version using ONLY vision for perception, so I'd expect they want to ensure they got THAT right before moving even more stuff to it)