FSD Beta Videos (and questions for FSD Beta drivers)

But it is called "mind of car". It is supposed to show us what the car is seeing and thinking so that we can have confidence in the system. If it is seeing something but not showing it, then it is not doing a good job as the "mind of car". And if it does not show something on the screen, then it is very reasonable for the driver to assume that the car cannot see it and disengage out of an abundance of caution.

Hopefully, in the future, there will be an option that lets you pick "layers" of information to see, and within a layer like surrounding cars, a distance filter (immediate, medium, max).
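Something like this, purely as a sketch of how a layer/distance setting could work (every name and number here is made up for illustration, nothing to do with Tesla's actual software):

```python
# Hypothetical "layers" visualization setting: toggle each layer on/off and
# cap how far out it renders. All names/values are invented for illustration.
from dataclasses import dataclass

@dataclass
class LayerSetting:
    enabled: bool
    max_distance_m: float  # "immediate" / "medium" / "max" as a render-distance cap

VISUALIZATION_LAYERS = {
    "surrounding_cars": LayerSetting(enabled=True, max_distance_m=50.0),   # "medium"
    "pedestrians":      LayerSetting(enabled=True, max_distance_m=100.0),  # "max"
    "road_debris":      LayerSetting(enabled=False, max_distance_m=25.0),  # hidden
}

def visible_objects(detections, layers=VISUALIZATION_LAYERS):
    """Keep only detections whose layer is on and within its distance cap."""
    return [
        d for d in detections
        if layers.get(d["layer"], LayerSetting(False, 0)).enabled
        and d["distance_m"] <= layers[d["layer"]].max_distance_m
    ]
```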
 
But it is called "mind of car". It is supposed to show us what the car is seeing and thinking so that we can have confidence in the system. If it is seeing something but not showing it, then it is not doing a good job as the "mind of car".

On a recent V9 video, I heard Chuck say, "it needs to creep up more, there's no way it's seeing the road." Even with "mind of car", the absence of an object may still cause us to doubt whether or not FSD is looking in the right areas. I believe this will happen more and more often in the future. Once the car becomes more and more superhuman, our disengagements will represent uncertainty on our part, but perhaps the car is more certain than us.
 
On a recent V9 video, I heard Chuck say, "it needs to creep up more, there's no way it's seeing the road." Even with "mind of car", the absence of an object may still cause us to doubt whether or not FSD is looking in the right areas. I believe this will happen more and more often in the future. Once the car becomes more and more superhuman, our disengagements will represent uncertainty on our part, but perhaps the car is more certain than us.

I think you might be reaching and making excuses. It is also entirely possible that the visualization is telling the truth and it is not showing an object because the camera vision is not seeing it.
 
Either way, with corner-facing cameras active, the wide front camera would be completely useless, because those other cameras would completely replace the entire field-of-view of the wide front camera and then some, but would do so at a much higher resolution and with far less distortion.

That isn't true. The wide front camera sits significantly forward of the B-pillar cameras, so there are cases where an obstruction, like a fence or shrubs, would completely block the B-pillar cameras' view while the wide front camera could still see in that direction.
I think you misread what I said there. I wasn't talking about the b-pillar cameras, but rather about the hypothetical cameras that I proposed adding at the front corners of the car that would be enabled to assist with unprotected turns when the b-pillar cameras are obstructed. If neither those corner cameras (several feet forward from the front cameras) nor the b-pillar cameras can see something, then it is extremely unlikely that the wide front camera would do any better.

Then again, if they're taking advantage of parallax between the b-pillar camera and the front wide camera to make speed and distance measurements more accurate, it might make more sense to keep the front wide camera running and instead switch between the B-pillar camera and the corner camera on either side depending on which camera's view is less obstructed.
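For what it's worth, here's a back-of-the-envelope illustration of why a second viewpoint with a wider baseline sharpens distance estimates. This is just textbook two-view triangulation with made-up numbers, not a claim about how Tesla's pipeline actually fuses the cameras:

```python
# Classic rectified-stereo relation: depth Z = f * B / d.
# Focal length, baseline, and disparity values below are assumed, not measured.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by two cameras separated by baseline_m."""
    return focal_px * baseline_m / disparity_px

# A 1-pixel disparity error changes the depth estimate less when the baseline
# is larger, which is the intuition for fusing the wide front camera with a
# B-pillar (or hypothetical corner) camera rather than relying on one camera.
f, B = 1000.0, 1.5           # assumed focal length (px) and camera separation (m)
for d in (30.0, 29.0):       # true vs. slightly mis-measured disparity
    print(f"disparity {d:.0f}px -> depth {depth_from_disparity(f, B, d):.1f} m")
```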
 
I think you might be reaching and making excuses. It is also entirely possible that the visualization is telling the truth and it is not showing an object because the camera vision is not seeing it.

Not sure if I was being clear. I was just saying that the car may see that the road is clear, but we aren't as certain, so we would disengage or think that there's no way the car knows it's clear. This is because the cameras have a different vantage point than we do (e.g., front cameras are higher up and more forward).
 

new vid from a driver I haven’t seen before in Virginia. Road conditions look fairly simple, but the driver mentions that V9 is a big improvement over 8.2; he mentions that 8.2 would keep trying to turn left for no reason.

6:57 - they’re going to need to code in the ability to recognize that there isn’t enough space across the intersection for the car to fit and have it stay until there is space so it doesn’t risk causing gridlock. Not a big issue yet.

12:25 - handles a fairly simple construction zone very well. Changes out of the closed lane, avoids the cones smoothly.

19:25 - first intervention, major. For some reason the car tried to change out of the left turn lane over a divider. Screen showed double white lines. I’m guessing the car interpreted the cars in front of it as being parked and was moving to pass them, a major screw up I’ve seen in other vids too. That logic clearly needs work.

Overall, very impressive. Mostly uneventful dealing with a decent amount of traffic. Nothing overtly complex. The one screw up was a bad one in that it would have gone over a low divider (not likely to damage the tires or car), but at least it didn’t look like it was any threat to hit any adjacent cars.
 
Overall, very impressive. Mostly uneventful dealing with a decent amount of traffic. Nothing overtly complex. The one screw up was a bad one in that it would have gone over a low divider (not likely to damage the tires or car), but at least it didn’t look like it was any threat to hit any adjacent cars.

The visualization bugged out at around 7:20:

Screen Shot 2021-07-12 at 7.08.52 PM.png
 
Watching these videos, FSD Beta is so impressive in most straightforward situations (well marked roads, right turns, etc.), and downright terrifying in other situations. In the latter case, I think it's going to become clear pretty quickly which situations can be easily fixed (tweaks to driving policy) and which are simply beyond the capability of the current sensors and/or what we can expect of software anytime soon.

Unprotected left turns are the most obvious example. We may end up six months from now with a safe, reliable FSD that simply can't make those types of turns. Or there may be certain confusing/complex areas where FSD shouldn't even be available. IMO there is NO shame in Tesla geofencing FSD for a long, long time to come. NoA is only available for some parts of our drive today; why couldn't City NoA be the same?

It's just frustrating because the current build of FSD would be incredibly useful and reliable for the area I live in (suburbia, wide well marked roads, simple intersections, etc.). And because I generally drive the same places every day, I would quickly learn where FSD struggles. I would love to have the option to turn it on when I feel the system can handle it.
 
What's under-appreciated about V9 is that Tesla has essentially solved vision and is now just working on gathering more diverse data and labeling everything they can.

Why I think they've solved vision:
1) 3D environment and objects within are extremely stable, and V9 only shows what it sees with very low latency. For example, it only needs to see a small slice of a car to know it's there, its orientation, how far it is, and how fast it's moving.
2) V9 sees brake lights and associates them with the correct cars. Even when it only sees 1 brake light, it correctly assumes that both of the car's brake lights are on.
3) Visualization shows that V9 can see cars in all their various orientations, smoothly.
4) V9 can see and makes visual inferences (is it moving? is it parked? should I be concerned? etc.) on ~40 objects at a time (a rough sketch of that kind of per-object state follows below)
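To make point 4 concrete, here's roughly the kind of per-object state I imagine the visualization is surfacing. The field names and the "concern" check are invented for illustration, not Tesla's real data structures:

```python
# Hypothetical per-tracked-object record plus a toy "should I be concerned?"
# filter. Nothing here reflects Tesla's actual internals.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    track_id: int
    kind: str               # "car", "pedestrian", "cone", ...
    position_m: tuple       # (x, y) in the ego vehicle's frame
    velocity_mps: float
    heading_deg: float
    brake_lights_on: bool
    is_parked: bool

def objects_of_concern(tracks, ego_speed_mps: float):
    """Toy policy: flag anything moving that sits within a closing-distance budget."""
    horizon_s = 3.0
    return [
        t for t in tracks
        if not t.is_parked
        and (t.position_m[0] ** 2 + t.position_m[1] ** 2) ** 0.5
            < horizon_s * (ego_speed_mps + abs(t.velocity_mps))
    ]
```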

Karpathy mentioned recently that he's narrowing in on a labeling workflow that consistently produces better and better results. V9 is a demonstration of how powerful this workflow and "4D" video labeling is.

Although we see a lot of V9 "fails," I think we're very close to an update where everything seems to click, and we get surprisingly good performance.
 
Frenchie is back!
His summary: "Mixed Feelings"
My summary: Short drive, some disengagements, he sounds frustrated - mainly because he's letting it try, try, try.
He is also back-seat driving and talking to the car. "What are we doing? No. Why are we here?" : )

So far I think it’s been a pretty good performance, especially compared to the disasters I’ve seen from Frenchie on previous versions.
At about 12:00 where he got frustrated at the left, I think it did a decent job, just a super cautious one. It crept forward enough to see both ways without impeding traffic, and when he was urging it to go, it was waiting for the car across from him to turn (since that car had pulled out considerably) and then for the pedestrians. Slow and overly cautious, but probably better to be slow and frustrating but safe, which is the way Waymo has gone too, judging by some of the turns that take it forever to complete.

Definitely some nav issues with the car changing lanes unnecessarily at that last turn, but so far it looked way better in Chicago than it did before, I remember seeing Frenchie’s car literally driving into opposing traffic on turns in previous versions.
 

@Frenchie The blue cars seem to be the same as the orange boxes from before:

old orange.jpg

new blue.jpg


This seems common where the FSD vehicle is at a stop sign and needs to yield, but I've also seen the orange boxes when two lanes merge and the other merging vehicle is moving faster, so the neural network predicts it should yield. There might actually be more blue vehicles than orange boxes because, with more training data, the prediction now more consistently exceeds some display threshold, or maybe they're just more visible now as solid blue in the updated visualizations.

Green has mentioned predicting right-of-way, but unclear if that's an attribute on each vehicle and if it actually controls behaviors vs just being visualized (e.g., vehicle type and brake lights might also just render visuals based on neural network outputs without necessarily affecting driving behavior).
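Purely speculative, but the "threshold to display" idea could be as simple as something like this, where the blue rendering and the actual yielding decision are driven by the same per-vehicle prediction but through separate checks (all names and thresholds are invented):

```python
# Sketch only: a per-vehicle right-of-way probability drives the blue
# rendering, while the planner (hypothetically) applies its own check.

RENDER_THRESHOLD = 0.7  # assumed confidence needed before the car is drawn blue

def render_color(p_must_yield_to: float) -> str:
    """Visualization-only decision: blue if we're predicted to have to yield."""
    return "blue" if p_must_yield_to >= RENDER_THRESHOLD else "grey"

def planner_should_yield(p_must_yield_to: float, time_to_conflict_s: float) -> bool:
    """Separate (hypothetical) control-side check; may use different inputs and thresholds."""
    return p_must_yield_to >= 0.5 and time_to_conflict_s < 4.0
```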

 
Clearly FSD has a long way to go. What concerns me most is that safety seems to be a low priority. The current version appears not to recognize stationary objects that are not identified as a vehicle. It needs to realize that an unrecognized shape can be a hazard. I am surprised this version has been released without this capability.
Yeah.

Why doesn't the car react to monorail concrete pillars or planters in its path? I understand that it might not know what they are, but why does it act like they are invisible?
 
Yeah.

Why doesn't the car react to monorail concrete pillars or planters in its path? I understand that it might not know what they are, but why does it act like they are invisible?
No LIDAR. Haha :p
Maybe the pillars are getting mapped as drivable space because they're road-colored and nothing like them was labeled when they trained the neural net.
This is probably why they're working on ViDAR (using the video feed to generate depth information). Whether they'll be able to get the reliability needed? Who knows; it's all cutting-edge research.
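Roughly, the point of depth-from-video is that once you have per-pixel depth you can tell a road-colored pillar apart from road by its height above the ground plane. Here's a loose illustration with assumed inputs, not Tesla's actual occupancy logic:

```python
# Toy "depth map -> obstacle mask" step. Assumes per-pixel depth and camera
# ray directions are already available; all parameters are made up.
import numpy as np

def obstacle_mask(depth_m: np.ndarray, pixel_rays: np.ndarray,
                  camera_height_m: float = 1.4, min_obstacle_height_m: float = 0.3):
    """
    depth_m:     (H, W) estimated depth per pixel
    pixel_rays:  (H, W, 3) unit ray directions in the camera frame (z forward, y down)
    Returns a boolean (H, W) mask of pixels whose 3D point sits well above the road.
    """
    points = pixel_rays * depth_m[..., None]                 # back-project to 3D
    height_above_ground = camera_height_m - points[..., 1]   # y axis points downward
    return height_above_ground > min_obstacle_height_m
```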
 
On a recent V9 video, I heard Chuck say, "it needs to creep up more, there's no way it's seeing the road." Even with "mind of car", the absence of an object may still cause us to doubt whether or not FSD is looking in the right areas. I believe this will happen more and more often in the future. Once the car becomes more and more superhuman, our disengagements will represent uncertainty on our part, but perhaps the car is more certain than us.

I think you might be reaching and making excuses. It is also entirely possible that the visualization is telling the truth and it is not showing an object because the camera vision is not seeing it.
You can see what his left side looks like from the B pillar, even if this isn't exactly the question being asked. That is a tough corner.

 
This makes a lot of sense. Thanks @Mardak

@Frenchie The blue cars seem to be the same as the orange boxes from before:

View attachment 684015
View attachment 684016

This seems common where the FSD vehicle is at a stop sign and needs to yield, but I've also seen the orange boxes when two lanes merge and the other merging vehicle is moving faster, so the neural network predicts it should yield. There might actually be more blue vehicles than orange boxes because, with more training data, the prediction now more consistently exceeds some display threshold, or maybe they're just more visible now as solid blue in the updated visualizations.

Green has mentioned predicting right-of-way, but unclear if that's an attribute on each vehicle and if it actually controls behaviors vs just being visualized (e.g., vehicle type and brake lights might also just render visuals based on neural network outputs without necessarily affecting driving behavior).