Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Also, regarding the processing of multiple camera feeds at the same time: if the training is taken from human driving examples, and humans cannot see in all directions simultaneously, does the behavior demonstrated in the driving footage reflect knowledge of all cameras at all times?

Yes, by way of a concept called "survivorship bias". All the human left turns are recorded using all the cameras; it doesn't matter that the human can't look at all the data at once. Videos that end in a collision or near miss are excluded (all cameras are tied together as a set for a composite video, so there is no tossing part of a set and keeping the rest; if it's a crash or near miss, you toss all camera data related to that crash or near miss*). The videos that remain are the survivors, and those are the primary videos used to train the AI.

So the AI will learn to use data from more than one camera, trained on video from more than one camera, recorded by a driver who can only look one way at a time.

* When I say "toss," I mean you mark it as a collision or near miss and exclude it from the training set you are trying to build. You might be able to use that video for some other task; it just won't be in the normal data set of behavior to reproduce.
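To make the "toss the whole set" idea concrete, here's a minimal sketch. The clip names, labels, and data structure are invented for illustration; this is not Tesla's actual pipeline, just the filtering logic described above: a flagged maneuver is excluded whole, never camera-by-camera.

```python
# Hypothetical sketch of survivorship filtering over camera sets.
# All names and labels are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class CameraSet:
    """All camera feeds recorded during one maneuver, kept as a unit."""
    clip_id: str
    feeds: dict                              # camera name -> video reference
    labels: set = field(default_factory=set)  # e.g. {"collision", "near_miss"}

def survivors(camera_sets):
    """Keep only maneuvers with no collision/near-miss label.
    A flagged set is excluded whole -- no keeping some cameras and
    tossing others -- though it stays available for other tasks."""
    excluded = {"collision", "near_miss"}
    return [s for s in camera_sets if not (s.labels & excluded)]

sets = [
    CameraSet("turn_001", {"front": "f1.mp4", "left": "l1.mp4"}),
    CameraSet("turn_002", {"front": "f2.mp4", "left": "l2.mp4"}, {"near_miss"}),
]
print([s.clip_id for s in survivors(sets)])  # -> ['turn_001']
```

Note that `turn_002` is dropped entirely, including its unremarkable front-camera footage, because the set is tied together.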
 
True. And likewise I imagine they exclude clips that are not necessarily catastrophic but still less than ideal: for example, a driver passing up a safe opportunity to advance through an intersection. I'm sure they've closely evaluated any clip that made it into the training set, though they probably have an automated method of filtering out the majority of likely unsuitable clips, since they can't watch every clip that gets uploaded to the mothership.
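An automated pre-filter like that might look something like the sketch below. The signal names and thresholds are entirely made up; the post only speculates that such filtering exists, not how it works.

```python
# Hedged sketch of a cheap automated pre-filter that flags clips for
# exclusion (or human review) before anyone has to watch them.
# Signals and thresholds are assumptions for illustration only.

def likely_unsuitable(clip):
    """Return True if heuristics suggest the clip shouldn't be trained on."""
    if clip.get("collision") or clip.get("near_miss"):
        return True
    if clip.get("max_decel_g", 0.0) > 0.5:   # hard braking event
        return True
    if clip.get("stopped_s", 0.0) > 15.0:    # long hesitation: passed up a gap?
        return True
    return False

clips = [
    {"id": "a", "max_decel_g": 0.2, "stopped_s": 3.0},
    {"id": "b", "max_decel_g": 0.7, "stopped_s": 2.0},
    {"id": "c", "collision": True},
]
kept = [c["id"] for c in clips if not likely_unsuitable(c)]
print(kept)  # -> ['a']
```

Only clips that pass every heuristic survive into the candidate training set; the rest never need a human reviewer.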
 
On the tangent topic of superhuman performance, I was thinking about how nice it would be for the car to have front bumper cameras for situations with limited cross-traffic visibility. For example, this scenario where Chuck gets stuck @7:15.


The catch-22 is that for an L2 system, even if the car sees that it's clear to advance, the driver would still have to disengage if they can't verify it themselves.
 
If there’s nobody creeping up behind you, you should find that it generally will stay in the left lane. When someone comes up behind you and there’s nobody in front of you, it should get over. (This is how I usually drive).

Highway logic isn’t using the new neural net stuff yet.
Right, understood, and that's worked for me. But it should also move to the right when I use the indicator stalk. Sometimes it will, but other times it won't no matter how many times I try, and then I disengage.
 
She claims this is a v12.4 test drive. It doesn't appear much different from v12.3.6: still a slow approach and indecisive behavior at stop signs. At ~3:00 the driver steps on the accelerator pedal from a stop sign, and again later at a traffic light. Auto max speed still looks problematic, and turn signals still appear to be initiated early. Possibly smoother deceleration in other scenarios?

 
On the tangent topic of superhuman performance, I was thinking about how nice it would be for the car to have front bumper cameras for situations with limited cross-traffic visibility. For example, this scenario where Chuck gets stuck @7:15.

The catch-22 is that for an L2 system, even if the car sees that it's clear to advance, the driver would still have to disengage if they can't verify it themselves.
I suppose the car could get around this by moving slowly at first to let the driver see that the coast is clear, then completing the maneuver. But the more important reason for bumper cameras is to reduce cases where the car thinks the coast is clear but is wrong.

E.g., if the 8-camera setup can see just far enough to detect potentially dangerous cars traveling at the speed limit, it might assume the coast is clear but miss a speeding driver who's slightly farther away. If bumper cameras could see such drivers from typical intersection stopping points, that would provide extra safety while avoiding the need for the car to creep dangerously far into the intersection for a good view. We do want the car to eventually have superhuman safety, after all.
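The speed-limit-vs-speeder argument is just arithmetic: the required sight distance scales with cross-traffic speed. A back-of-the-envelope sketch, where the clearing time and camera range are made-up numbers purely for illustration (not measured from any actual car):

```python
# Sight-distance check: how far away must cross traffic be visible so it
# can't reach us while we clear the intersection? All numbers are
# illustrative assumptions, not real vehicle specs.

def required_sight_distance(cross_speed_mph, clearing_time_s):
    """Distance a cross-traffic car covers while we complete the turn."""
    mph_to_mps = 0.44704
    return cross_speed_mph * mph_to_mps * clearing_time_s

clearing_time = 6.0   # assumed seconds to complete an unprotected turn
camera_range = 150.0  # assumed meters of usable cross-traffic visibility

for speed in (55, 80):  # at the limit vs. speeding
    need = required_sight_distance(speed, clearing_time)
    print(f"{speed} mph: need {need:.0f} m, within camera range: {need <= camera_range}")
```

With these assumed numbers, a 55 mph car needs about 148 m of visibility (just within range), while an 80 mph speeder needs about 215 m, beyond the assumed range. That's the gap a farther-forward bumper camera, with its earlier sightline past obstructions, could help close.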