Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Daniel in SD

Well-Known Member
Jan 25, 2018
7,147
10,639
San Diego
Have seen the issue come up in a few videos. Is it possible that Tesla has hidden a camera in the new front headlights? I am under the assumption that there is a giant blind spot for vehicles crossing in front. I am sure there isn't one, but it makes you wonder.
I doubt that would work. It seems like there would be way too much glare with the headlights on.
I have wondered the same thing about the camera placement. It seems like the B-pillar cameras are too far back and the front wide camera is probably not wide enough (it would be interesting to see the output from that camera!).
My main concern is the safety of monitoring the car in these situations. The car needs to position itself so the driver can verify that it's safe to go (until the system is reliable enough of course).
 
I doubt that would work. It seems like there would be way too much glare with the headlights on.
I have wondered the same thing about the camera placement. It seems like the B-pillar cameras are too far back and the front wide camera is probably not wide enough (it would be interesting to see the output from that camera!).

Glare might not be an issue if the camera isn't too wide of an angle, and is recessed enough it won't be hit by the light directly (this is why lens hoods are used, to stop lens flare / glare). There could be some light reflected from inside the enclosure, but I would think the angle is too shallow to bounce it directly into that spot.

While I think placing a camera within the headlight housing (maybe not in that highlighted area) would be great, I have my doubts this is being done. If it were, would FSD require it? Would that mean another round of hardware upgrades?
 

Mardak

Member
Oct 13, 2018
995
2,083
USA
It seems like the B-pillar cameras are too far back and the front wide camera is probably not wide enough (it would be interesting to see the output from that camera!).
Here's a screenshot from one of green's videos that includes fisheye and B pillar cameras:
fisheye.jpg

The shopping cart return sign is adjacent to the car, directly in front of the left pillar camera, and it's placed at the center line where the stop line would likely be at a regular intersection. The fisheye camera in the top right of this screenshot shows the edge of the shopping cart return sign, and there's actually a car coming, with the fisheye having a slightly better view of it.
 

JulienW

Active Member
Jul 7, 2018
2,936
3,772
Atlanta
I have raised a few times that blind turns are an issue, especially as the side forward-looking cameras sit further back than your eyes.
Glad that footage is now confirming this.
I have said before that I thought Tesla should have used radar on all four corners to cover cross traffic. The wide front camera does have a good wide-angle view, but not perpendicular to the front end. Also, unlike a human head, the B-pillar cameras can't lean forward to get a better look. Same problem to the rear with the repeater cameras.

Screen Shot 2020-11-05 at 6.46.30 PM.png
 

powertoold

Active Member
Oct 10, 2014
3,101
6,677
USA
Oh come on, unprotected left turns with obstructions are an obvious problem. If Tesla didn't think of it after four years of FSD development, then we have no hope.

The problem you guys saw in Brandon's video is that the car isn't consistently creeping out on unprotected turns (especially at stop signs, where the car stops too far behind the line and then lunges out). This isn't a sensor problem yet. Once we see the car creeping out and it still can't see or still makes dangerous turns, then it might be a sensor problem.
 
I have said before that I thought Tesla should have used radar on all four corners to cover cross traffic. The wide front camera does have a good wide-angle view, but not perpendicular to the front end. Also, unlike a human head, the B-pillar cameras can't lean forward to get a better look. Same problem to the rear with the repeater cameras.

View attachment 605700
Yep, 2 o’clock to 3 o’clock and 9 o’clock to 10 o’clock are the problem zones, but more so the former because it’s a more dangerous turn.
You have shrubs/brush/trees in this location and the car is totally blind. It potentially has to place the nose of the car in harm’s way to see.
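The sight-line geometry here can be sketched with similar triangles: if the obstruction corner sits ahead of the camera, creeping forward rapidly widens the visible stretch of the cross lane. This is an illustrative toy model (made-up distances, camera treated as a point), not anything from Tesla's stack:

```python
def visible_lateral_range(a, f, d, creep):
    """Camera at the origin, obstruction corner at (lateral a, forward f),
    cross-traffic lane at forward distance d. After creeping forward by
    `creep` meters, the sight line past the corner lets the camera see
    cross traffic out to this lateral distance along the lane."""
    f_eff = f - creep
    if f_eff <= 0:
        return float("inf")  # camera has moved past the obstruction entirely
    return a * (d - creep) / f_eff

# With the corner 2 m to the side and 4 m ahead, and the lane 6 m ahead:
for creep in (0.0, 2.0, 3.0):
    print(creep, visible_lateral_range(2.0, 4.0, 6.0, creep))  # 3.0, 4.0, 6.0 m
```

The takeaway is that visibility grows nonlinearly: the last meter of creep buys far more of the cross lane than the first, which is why a camera mounted a couple of feet behind the driver's eyes needs the nose correspondingly further into harm's way.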
 

aronth5

Long Time Follower
Supporting Member
May 8, 2010
2,967
2,244
Boston Suburb
I watched a little of it. It seems like a major concern of his, and mine, is how the car deals with cross traffic at unprotected intersections. The issue is that if the view is obstructed the car can start to cross quickly without the driver being able to verify that the path is clear. It seems like a very difficult maneuver to monitor as one must be aware of traffic in both directions and be very quick to stop the car if it starts moving when it shouldn't. I wonder if an interface where the driver is required to keep their foot on the brake and release it when it is safe to go would be a good idea?
I think one of the challenges is that the forward camera and radar are mounted ahead of the driver, so they see the cross traffic sooner and FSD can make the go/no-go decision sooner than the driver can verify it.
 

banned-66611

Guest
Here's a screenshot that shows the problem:

7LhuuRE.jpg


You can see the side views available at a junction. This is a fairly wide open junction but with buildings closer in it would be even worse. It's hard to see cars coming from a distance and the wide angle makes them extremely small until they get fairly close.

Cameras on the front of the car looking left and right would help, as would lidar that has much greater range. Volvo is fitting a lidar system to do this kind of cross traffic detection next year. It will be integrated into the car, not mounted on a roof dome or anything like that.

Tesla have to make this work with the cameras they have though as they can't retrofit any extra ones.
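The "extremely small until they get fairly close" point can be put in rough numbers. Assuming an idealized pinhole camera with pixels spread evenly across the field of view (real fisheye lenses distort, and the sensor specs below are guesses, not Tesla's actual cameras), a quick estimate of how many horizontal pixels a car spans:

```python
import math

def pixels_on_target(target_width_m, distance_m, fov_deg, image_width_px):
    """Approximate horizontal pixel span of a target, for an idealized
    camera with uniform angular resolution across its field of view."""
    angular_size = 2 * math.atan(target_width_m / (2 * distance_m))
    px_per_radian = image_width_px / math.radians(fov_deg)
    return angular_size * px_per_radian

# A 1.8 m wide car at 100 m, on a hypothetical 1280 px wide sensor:
print(pixels_on_target(1.8, 100, 150, 1280))  # ~9 px on a 150-degree fisheye
print(pixels_on_target(1.8, 100, 60, 1280))   # ~22 px on a 60-degree camera
```

At single-digit pixel widths a distant crossing car is barely a smudge, which is why the wide camera's broad coverage doesn't necessarily translate into useful cross-traffic range.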
 
Tesla have to make this work with the cameras they have though as they can't retrofit any extra ones.

I'm sure they'd really like to avoid it, but absolute worst case, on the Model 3 they could probably tuck some cameras into the top corner of the headlight assemblies and (if necessary) multiplex them with the rear-pointing side camera inputs.
 

Daniel in SD

Well-Known Member
Jan 25, 2018
7,147
10,639
San Diego
Here's a screenshot that shows the problem:

7LhuuRE.jpg


You can see the side views available at a junction. This is a fairly wide open junction but with buildings closer in it would be even worse. It's hard to see cars coming from a distance and the wide angle makes them extremely small until they get fairly close.

Cameras on the front of the car looking left and right would help, as would lidar that has much greater range. Volvo is fitting a lidar system to do this kind of cross traffic detection next year. It will be integrated into the car, not mounted on a roof dome or anything like that.

Tesla have to make this work with the cameras they have though as they can't retrofit any extra ones.
The pillar cameras are only a couple of feet back from a driver leaning forward to get a better look. I have no idea whether that's good enough. All these pictures posted are not at full resolution and may not show the whole sensor area of the camera; it looks like it could be wider than shown.
Maybe Tesla can do it with these sensors. Intuitively it seems easier with a direct view of cross traffic. Humans do an incredible job of recognizing moving objects that are partially occluded and recognizing what they can't see. Training neural nets to do the same sounds tricky.
The Mobileye AV prototypes also don't have front corner cameras as far as I can tell (they do have one in the grille but I don't think it sees side to side). They have cameras in the side mirrors and on the roof above the driver.

intel_mobileye_reuters_full_1578374382388.JPG
 

gluu

Member
Jan 18, 2018
154
1,188
Chicago
I watched a little of it. It seems like a major concern of his, and mine, is how the car deals with cross traffic at unprotected intersections. The issue is that if the view is obstructed the car can start to cross quickly without the driver being able to verify that the path is clear. It seems like a very difficult maneuver to monitor as one must be aware of traffic in both directions and be very quick to stop the car if it starts moving when it shouldn't. I wonder if an interface where the driver is required to keep their foot on the brake and release it when it is safe to go would be a good idea?
Brandon complains about how the car attempts to drive into parked car lanes while going straight in traffic. This says to me that Tesla is already starting to do end-to-end learning for the planner, and that the planner currently doesn't distinguish between parked cars and moving cars which is really interesting. Looks like this should be easy to solve by getting more data for these scenarios.
 

Daniel in SD

Well-Known Member
Jan 25, 2018
7,147
10,639
San Diego
Brandon complains about how the car attempts to drive into parked car lanes while going straight in traffic. This says to me that Tesla is already starting to do end-to-end learning for the planner, and that the planner currently doesn't distinguish between parked cars and moving cars which is really interesting. Looks like this should be easy to solve by getting more data for these scenarios.
I'm not sure why it indicates that?
Teslas drive past millions of parked cars a day; I think they have enough data, haha. I think it's actually very difficult for a machine to determine whether a car is parked or just stopped in traffic. The example in the video isn't really that scenario, though. The car just swerves into the parking spaces with no turn signal. To me that indicates the issue isn't the car recognizing the parking spaces as a lane. I guess doing random inexplicable things might be an indication that they're using NNs for planning...
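To illustrate why parked-vs-stopped is genuinely hard, here's a toy rule-based scorer over hypothetical cues (lateral offset, dwell time, hazard lights, map data). None of this reflects Tesla's actual logic, and each cue fails in common cases, e.g. a curbside lane of cars stopped at a light:

```python
def looks_parked(lateral_offset_m, stationary_s, hazards_on, in_mapped_lane):
    """Toy parked-vs-stopped scorer over hand-picked cues.
    Each cue is individually unreliable; that's the point."""
    score = 0
    if lateral_offset_m > 1.0:   # well off the lane center, toward the curb
        score += 2
    if stationary_s > 30:        # hasn't moved in a long time
        score += 1
    if hazards_on:               # hazard lights suggest a deliberate stop
        score += 1
    if not in_mapped_lane:       # sitting outside any driving lane
        score += 2
    return score >= 3

print(looks_parked(1.5, 60, False, False))  # curbside, long stationary -> True
print(looks_parked(0.2, 5, False, True))    # mid-lane, briefly stopped -> False
```

Every threshold here is arbitrary, and a double-parked delivery truck or a car waiting to parallel park breaks it immediately, which is presumably why people expect a learned classifier rather than rules.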
 
Brandon complains about how the car attempts to drive into parked car lanes while going straight in traffic. This says to me that Tesla is already starting to do end-to-end learning for the planner, and that the planner currently doesn't distinguish between parked cars and moving cars which is really interesting. Looks like this should be easy to solve by getting more data for these scenarios.

There might be some parked/moving car detection - some of the videos show the FSD beta car passing garbage trucks or regular parked cars on a narrow road, so it seems to know when a car isn't going to move. Unclear what the limits of the current detection algorithms are though.
 

S4WRXTTCS

Well-Known Member
May 3, 2015
5,879
7,068
Snohomish, WA
Humans do an incredible job of recognizing moving objects that are partially occluded and recognizing what they can't see.

Ha, sometimes.

Some of us, like myself, use the throttle/brake when we get surprised by a moving object we didn't see.

Like the other day, I was taking a left onto the main road. My view was obstructed by a car for sale parked on the side of the road to my right. The smart way of handling this would have been to wait a couple of seconds to make sure it wasn't blocking the view of another car. But I didn't wait. I don't know why; I just went. It's before a nice curvy road, so I suppose I got excited that the path would be clear.

As I was going, I immediately saw a car coming out of the blocked view, used the distance to judge that I had ample room to continue if I jammed the throttle, and launched. I could have also hit the brake.

It wasn't a close call, but I was irked at myself for failing the moment. Any autonomous car that did what I did would have failed for snapping the passenger's neck.

Now it's not uncommon for me to fail moments while driving or really life itself. :p

But, either through luck or reactionary skills I somehow avoid crashing (driving or life itself).

It's going to be interesting to see if Tesla can teach it to recognize moments where it can't see. There are cases where you simply can't see without creeping way out. There is a place like that near me where shrubs grow pretty tall on the side of the road. I can easily see over them in my Jeep, but I can't in my Model 3. One of these days I'm going to take a chainsaw to those shrubs.

I'd love the car to have its own attached drone. So the drone would launch when it needed to see, and would dock as the car was going.
 

Todd Burch

Voltage makes me tingle.
Nov 3, 2009
8,269
34,268
Smithfield, VA
After watching a lot of these videos, here are my thoughts on current state of FSD:

1. I don’t anticipate a public release within a few months. I hope not at least. I’d like to see very few interventions amongst the beta group before it goes to a wider release.

2. I see no evidence yet of a sensor problem. I think the majority of the problems are vehicle path planning problems, which is very complicated. I am confident that we’ll see steady improvement over time, which will be really fun to watch.

Are we close to true FSD? No way, not yet.

Am I excited about where this is headed? Absolutely. Let’s remind ourselves that FSD is probably the most complicated engineering and CS problem ever conceived. Not expecting it to be a fast process.
 

gluu

Member
Jan 18, 2018
154
1,188
Chicago
I'm not sure why it indicates that?
Teslas drive past millions of parked cars a day; I think they have enough data, haha. I think it's actually very difficult for a machine to determine whether a car is parked or just stopped in traffic. The example in the video isn't really that scenario, though. The car just swerves into the parking spaces with no turn signal. To me that indicates the issue isn't the car recognizing the parking spaces as a lane. I guess doing random inexplicable things might be an indication that they're using NNs for planning...
My reasoning is that although the visualization detects the parked cars correctly, the car still plans a path that would crash into a parked vehicle, so the planner is not explicitly drawing a path through free space with a "software 1.0" planner. One possible way this could happen is if the visualization used a different, unrelated network than the planner, so their decisions would not necessarily coincide. Since I know that Dojo and end-to-end learning are currently in progress, I think the current planner is doing something akin to end-to-end, but maybe with a shorter window of video as input. For example, the planner network could take the last 1-2 seconds of recorded video as input instead of the last 20 seconds. This would let them train the planner network without having Dojo, but the downside is that to make a good planner you likely need more than the past 1-2 seconds.
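The "last 1-2 seconds vs. last 20 seconds" idea boils down to a bounded frame buffer feeding the planner network. A minimal sketch (hypothetical frame rate and horizon, purely to illustrate the trade-off; not Tesla's architecture):

```python
from collections import deque

class PlannerContext:
    """Keeps only the most recent frames as planner input. A short
    horizon is cheap to train on, but cannot encode facts like
    'that car has not moved for 20 seconds'."""
    def __init__(self, horizon_s, fps=36):
        self.frames = deque(maxlen=int(horizon_s * fps))

    def push(self, frame):
        self.frames.append(frame)  # oldest frame drops off automatically

    def window(self):
        return list(self.frames)

short = PlannerContext(horizon_s=2)   # holds 72 frames at 36 fps
for t in range(720):                  # feed 20 s of video
    short.push(t)
print(len(short.window()), short.window()[0])  # 72 648
```

A planner that only ever sees the last two seconds could label the standing car correctly in every frame yet still plan as if it might move, which would be consistent with the swerving behavior in the video.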
 
