Knightshade
Side facing camera is not forward facing. Perhaps Elon meant 5 forward looking, rather than 5 forward facing.
Except they ARE forward facing.
Look at the physical cameras in the b-pillar. They're angled forward.
Angled forward does not make a side camera forward facing. Let's call it 45-degree facing, between side and forward.
If it was literally facing forward, it would be able to see directly forward and at angles around forward. It can't see forward at 0 degrees and also off to the other side; it is in between side facing and forward facing. From the Autopilot graphic, the side cameras have zero overlap with the main front-facing camera.
I mean, it literally does, but ok.
You seem pretty confused about the direction in between sideways and forward. It is not facing directly forward.
Facing the direction the car is travelling; toward the front.
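Whether the B-pillar cameras "overlap" the main camera is just interval arithmetic on headings and fields of view. A minimal sketch follows; the headings and FOV widths used here are illustrative assumptions, not published Tesla specs.

```python
# Do two cameras' horizontal fields of view overlap? Model each FOV as an
# angular interval in degrees (0 = straight ahead, positive = toward one side).
# The heading and FOV numbers below are ILLUSTRATIVE ASSUMPTIONS, not Tesla specs.

def fov_interval(heading_deg, fov_deg):
    """Return the (low, high) angular extent of a camera's horizontal FOV."""
    half = fov_deg / 2.0
    return (heading_deg - half, heading_deg + half)

def overlap_deg(a, b):
    """Angular overlap in degrees between two (low, high) intervals (0 if none)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

main_cam = fov_interval(0.0, 50.0)     # assumed: main camera aimed dead ahead
pillar_cam = fov_interval(60.0, 90.0)  # assumed: B-pillar camera angled 60 deg off-axis

print(overlap_deg(main_cam, pillar_cam))  # 10.0 degrees of shared coverage
```

With these assumed numbers, even a camera aimed 60° off-axis still shares some coverage with the main camera; whether the real cameras overlap depends entirely on the actual angles, which neither post establishes.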
@Knightshade Appreciate all the posts today on this topic. One question: if the radar input is removed from FSD, does that mean it is removed from all input factored into the computer, like accident avoidance when a person's vision fails to detect risks? Just curious. Again, thanks for the thoughtful posts.
What about fog and heavy rain? Obviously one shouldn't rely on electronics to keep them safe when they can't see the car in front of them, but it does a great job, and it seems like we will get a lot of "camera is blocked" errors. Currently I look at it as an added layer of safety.
Likewise it is in regard to seeing ahead of a car in front of you- and we've seen actual real world accidents prevented by having this ability.
It seems that with their vision-only system they’ll just have to do what a human does:
1) Deviate from lane-centered position to allow visibility around vehicle in front.
2) If close following is detected, increase following distance.
3) Probably in general increase following distance, and offset & vary centering in the lane, as this makes it easier to see around vehicles in front. Just like a human!
4) With the central positioning of the front-facing cameras, it will be a little trickier than it is for a human (who will tend to deviate to the left side of the lane for left-hand drive since it only requires a small adjustment), and I'd expect it would need to get an additional 1-2 feet to the left of where we would be used to traveling in a lane. This seems a little problematic but perhaps it is possible. The b-pillar forward-facing cameras would not have quite the field of view needed to help in this specific endeavor, unfortunately (based on images above, assuming they show the whole field of view).
Not particularly necessary in low speed situations, but as speeds increase, these strategies will be important to get to better-than-human-level safety, and we should look to see them being implemented (I would expect them to show up maybe 6 months to a year after initial release of V9 to limited group).
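The offset-to-see-around-the-lead-vehicle idea in step 4 is a similar-triangles problem. A small sketch, with all dimensions as assumed round numbers:

```python
# How much does shifting a center-mounted camera sideways reveal behind a lead
# vehicle? Similar-triangles sketch; all dimensions are illustrative assumptions.

def occlusion_edge(cam_offset_m, lead_half_width_m, d_lead_m, d_target_m):
    """Lateral position (on the offset side) of the occlusion boundary at the
    second vehicle's distance. Anything further toward that side than the
    returned value is visible past the lead vehicle's edge."""
    total = d_lead_m + d_target_m
    return cam_offset_m + (lead_half_width_m - cam_offset_m) * total / d_lead_m

# Lead vehicle 2.0 m wide, 20 m ahead; second vehicle 20 m beyond it.
print(occlusion_edge(0.0, 1.0, 20.0, 20.0))  # 2.0 m: centered camera sees nothing of a centered car behind
print(occlusion_edge(0.6, 1.0, 20.0, 20.0))  # 1.4 m: 0.6 m of offset still isn't enough here
```

A centered second car's near edge sits only 1.0 m off the lane center, so in this toy case even a 0.6 m shift leaves it fully hidden, which is consistent with the point above that a center-mounted camera needs more lateral offset than a human driver's head.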
In theory there is no reason that appropriate cameras and driving policy could not do better than a human, and possibly do as well as the radar.
Since it physically cannot see as much with vision alone as it could with vision+radar, I don't see how you could ever make that case.
It will always have less info without the radar, and unlike LIDAR, it's info vision cannot provide in some cases.
As I say, it's possible it's "worth" giving that up to get working L3+ FSD years sooner, because integrating radar with the new vision system is hard or not possible right now... but you're objectively giving up data you can't replace to do it.
Yes, humans drive without it... they drive without ultrasonics and an eye in the back of their head too, but they're keeping those for the moment. Remember, we want MUCH BETTER than human performance.
Sometimes a big/wide vehicle is in front of you, and a small/narrow one in front of him; no amount of jinking a little left/right within your lane will put that 2nd car in view of any camera on your car, but the radar would see it fine, and notice if it was hard braking (or stopped) before the guy in front of you might... (exactly the case in the accident we saw a Tesla avoid, thanks to radar).
Also, sometimes you don't wanna drift too far to one side in a lane because there's an idiot who is already too far over in his own lane next to you... or there's a bike... or there are pedestrians. For a myriad of other reasons, 7 years of AP history has been to stay as centered as possible as the safest way to drive.
So, outside of sparsely populated areas, you'll constantly be getting cut off by other cars now... another downgrade.
The ability of Teslas to follow CLOSER than any other TACC-like system I'd seen, and thus be cut off far less often, was a huge plus the first time I ever test drove one.
Since it physically cannot see as much with vision alone as it could with vision+radar, I don't see how you could ever make that case.
It will always have less info without the radar, and unlike LIDAR, it's info vision cannot provide in some cases.
1. If you are following a passenger vehicle, vision can be fairly easily trained to recognize the second vehicle ahead by seeing thru the windows or by one of the other cameras. FSD has temporal consistency (will improve over time) just as your brain does, so if it sees a second car ahead, it can “remember” that and increase following distance, while also continuing to track through the windows.
2. If you are following a bus or semi truck where you cannot see the second car ahead, the mass of the truck/bus is so much larger that a collision ahead wouldn’t reduce the truck’s velocity as much due to its much higher momentum. Therefore the need for rapid braking is much reduced.
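The momentum argument in point 2 can be sanity-checked with a perfectly inelastic collision estimate (worst case for speed loss); the masses and speeds below are assumed round numbers, not data from the thread.

```python
# Rough check of point 2: how much does a lead vehicle slow when it rear-ends
# a stopped vehicle? Perfectly inelastic collision via conservation of momentum;
# masses and speeds are illustrative assumptions.

def post_collision_speed(m_lead_kg, v_lead_ms, m_stopped_kg):
    """Speed of the joined wreck after a perfectly inelastic collision."""
    return m_lead_kg * v_lead_ms / (m_lead_kg + m_stopped_kg)

# Loaded semi (20,000 kg) at 25 m/s hits a stopped 1,500 kg car:
print(post_collision_speed(20000, 25.0, 1500))  # ~23.3 m/s: the truck barely slows
# Car (1,500 kg) at 25 m/s hits the same stopped car:
print(post_collision_speed(1500, 25.0, 1500))   # 12.5 m/s: the car loses half its speed
```

So the very vehicle that blocks the cameras' view of the second car ahead is also the one whose speed changes least abruptly in a collision, which is the trade-off the post is pointing at.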
I see the eventual removal of radar as a positive.
Radar is the cause of phantom braking events because radar is easily tricked by metallic objects of certain geometries.
Removing radar lowers hardware cost, repair cost, and energy consumption.
Radar provides little useful information when object density is low.
Radar bounce characteristics actually cause more trouble in tunnels. The active signal bounces off of the walls and can present a confusing picture to the sensor processing.
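The tunnel-bounce problem above can be sketched with the mirror-image method from geometric optics: a single reflection off a flat wall makes a real target also appear at the range of that target as seen from the radar's reflection across the wall. A 2D toy model, with all distances as assumed numbers:

```python
# Toy 2D sketch of radar multipath in a tunnel (mirror-image method).
# A bounce off a wall at x = wall_x produces a "ghost" return whose apparent
# range is the distance from the radar's mirror image to the target.
# All distances are illustrative assumptions.
import math

def direct_range(target):
    """True range to a target at (x, y), radar at the origin."""
    return math.hypot(target[0], target[1])

def ghost_range(wall_x, target):
    """Apparent range of the wall-bounce return: distance from the radar's
    mirror image (reflected across the wall at x = wall_x) to the target."""
    mirror_x = 2.0 * wall_x
    return math.hypot(target[0] - mirror_x, target[1])

target = (0.0, 50.0)             # car 50 m directly ahead
print(direct_range(target))      # 50.0 m: the true return
print(ghost_range(4.0, target))  # ~50.6 m: ghost return, arriving from the wall's direction
```

One real car thus produces two returns at slightly different ranges and bearings, which is the kind of confusing picture the sensor processing has to sort out.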
I have zero doubt that removing radar is the right approach going forward (I said as much a month or two ago in a different thread).
99.9% of the safety benefits that will come from FSD are from an automated system's inability to get distracted, sleepy, emotional (road-ragey) or drunk. Given a vision system trained to recognize cars thru windows or around obstructions ahead, as Tesla is doing, radar bouncing brings very little statistical benefit.
And I say this as someone who has driven about 50,000+ miles with autopilot engaged and has had a few brake engagements from the bounced radar system (none of which actually saved me from an accident, however).
Do we know that for certain - that FSD actually learns over time?
My greatest concern with FSD/NoA is that it is entirely reactionary and not at all predictive. It doesn't seem to take the long view of the road, just what's immediately in front of it. I liken it to the skills of a first-year driver. It can obey traffic controls nicely, but dealing with other (human) drivers and their random actions is problematic.