FSD Beta Videos (and questions for FSD Beta drivers)

I'll just report over and over
I completely agree with your post, except for that one statement.

Report once. Then give them three or so months and test it again. If it still fails, report once and restart the 90-day timer.

We don't need to overwhelm them with the same data over and over again.

I'm keeping a little notebook in the car where I write down the date and time of each report and a brief summary of the context/what happened. Hopefully it'll keep me from reporting the same stuff more than once.

But yeah... that really was a good post.
 
I love how it says "FSD Barely Avoids Metal Pole", but it didn't.

The driver avoided the pole.

Why wasn't the pole detected? Supposedly it has 3D voxels so it should be able to see a pole right in front of it.

Usually the FSD Beta failures are routing- and path-planning-related, but this looks like a failure to even detect the pole.
Exactly.

They need to show the AI a million pictures of "Pole," with a "do not hit this" flag attached to it.
 
What about this wonky tree maneuver? Or the monorail? I think it's activated, but either it's not very good or it only maneuvers when confidence is high.

It also has the ability to not hit pedestrians, bicyclists, etc.

So why can it avoid some things and not others? Labeling. It doesn't need voxels turned on in order to not hit labeled objects. What voxels add is the ability to not hit unknown objects, e.g., Elon's "it won't hit a UFO that lands in the street."

So objects that it can accurately slap a label on get labeled and handled according to how that object is classified. Example: new object. Can you identify it? Yes. What is it? A truck. Apply the label and act appropriately. Is the truck parked? Yes? Handle it as a static object. No? Handle it as a dynamic object (track distance and speed, try to guess its path, etc.).

Short version: voxels don't need to be turned on to avoid hitting labeled objects, but they do need to be on to avoid unlabeled objects (rough sketch of that decision flow below).
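To make that distinction concrete, here's a minimal sketch of the decision flow in Python. All the names (Detection, plan_response, occupies_space) are made up for illustration; this isn't Tesla's planner code, just the logic described above: labeled objects get class-specific handling, and unlabeled objects are only avoided if some occupancy/voxel output is actually consumed.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: Optional[str]   # e.g. "truck" or "pedestrian"; None if unrecognized
    is_moving: bool
    occupies_space: bool   # True if an occupancy ("voxel") output flags this region

def plan_response(det: Detection) -> str:
    if det.label is not None:
        # Labeled object: handled according to how that class is treated.
        if det.is_moving:
            return "track distance/speed, predict its path, yield or follow"
        return "treat as a static obstacle and plan around it"
    if det.occupies_space:
        # Unlabeled but occupied space: only avoidable if the occupancy
        # output is actually fed into planning.
        return "treat as a generic static obstacle"
    # Neither labeled nor flagged as occupied: effectively invisible to planning,
    # which is the metal-pole scenario above.
    return "no avoidance behavior"

# The pole case: no label, and (apparently) no occupancy flag either.
print(plan_response(Detection(label=None, is_moving=False, occupies_space=False)))
```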
 
When you watch Green's video of the driver-monitoring numbers, it's interesting how the NN output continually fluctuates wildly even on a static picture. That's how I see the path tentacle behaving as well - the car wavers between multiple paths.

Not understanding neural nets very well, I still wonder why the output isn't more consistent, or at least smoothed more. I'd like to see the path make a stable decision and stick to it, or at least not fluctuate on a fractional-second basis.
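For what it's worth, damping that fractional-second jitter wouldn't require anything exotic. A plain exponential moving average over the predicted waypoints would already steady the path, at the cost of some lag. This is only a toy sketch of that idea, not a claim about how Tesla's planner actually works:

```python
import numpy as np

def smooth_path(prev_path: np.ndarray, new_path: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Exponential moving average over predicted waypoints.

    prev_path, new_path: (N, 2) arrays of (x, y) waypoints from consecutive
    frames. Lower alpha = smoother (but laggier) output.
    """
    return alpha * new_path + (1.0 - alpha) * prev_path

# Toy usage: jittery per-frame predictions settle toward a stable path.
rng = np.random.default_rng(0)
path = np.zeros((10, 2))
for _ in range(30):
    noisy = np.cumsum(np.ones((10, 2)), axis=0) + rng.normal(0.0, 0.5, (10, 2))
    path = smooth_path(path, noisy)
print(path.round(2))
```

The obvious trade-off is latency: smooth too much and the path reacts late to things that really did change.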

Thank you for posting that video!!!!

There was a discussion about the camera and eyeball AI recently, and I couldn't find that darn video.

I believe "Tristan" on Twitter has done some interesting eyeball AI research, too.
 
You can't have ~35k miles between safety disengagements on an L2 vehicle, because that would lead to some serious accidents; no one is going to pay attention for 34,999 miles.

The safest element of the FSD Beta is how awful it is.
Interesting take, and I'm not 100 percent saying you're wrong. But really what you need is better driver-attention tracking. Then you can deliver a high-quality Level 2 system and ensure the driver is paying attention.
 
You can't have ~35k miles between safety disengagements on an L2 vehicle, because that would lead to some serious accidents; no one is going to pay attention for 34,999 miles.

The safest element of the FSD Beta is how awful it is.
That's a myth perpetuated by Waymo.

What we need is "consistent" L2 - so that drivers can anticipate when they might need to intervene, just as we now naturally pay more attention in certain conditions (kids running around!) and less in others (cruising along the highway with little traffic).
 
Here's my thought on how Tesla uses these snapshot videos: they have the snapshot's GPS coordinates, so they probably run a simulation based on the snapshot. If all simulation runs are consistent with the snapshot, they probably retrain the NN on that situation. This might also be why the beta does a little better in CA, where there is more history.
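Purely to illustrate that guess (every name below is hypothetical; none of it is real Tesla tooling), the loop would look something like:

```python
import random

def run_simulation(scene) -> bool:
    """Stand-in for a closed-loop sim run; True if it reproduces the logged behavior."""
    return random.random() < 0.9  # dummy outcome for illustration

def worth_retraining(snapshot_scene, n_runs: int = 10) -> bool:
    """If every sim run is consistent with the snapshot, queue the clip for NN retraining."""
    return all(run_simulation(snapshot_scene) for _ in range(n_runs))

flag = worth_retraining({"gps": (37.77, -122.42), "frames": []})
```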
 
That's a myth perpetuated by Waymo.

What we need is "consistent" L2 - so that drivers can anticipate when they might need to intervene, just as we now naturally pay more attention in certain conditions (kids running around!) and less in others (cruising along the highway with little traffic).
So you think someone will be able to use FSD V100 for an entire year on their commute, never have to intervene, and then be able to anticipate the failure that happens on day 366?
I'm skeptical. We may never know though. It's not looking like FSD collisions will be due to complacency any time soon.
 
It's not looking like FSD collisions will be due to complacency any time soon.
Yeah, at this point it's more likely to be from a wide-awake FSD YouTuber saying, "Let's just see what it's going to do...." Whack.

I swear, some of those guys seem like they'd willingly take a collision if it meant more views.

Man, I miss the old days of YouTube where people posted videos just to be helpful. Ever since people started making careers out of YouTube.... well, I'll just stop there.
 
So you think someone will be able to use FSD V100 for an entire year on their commute, never have to intervene, and then be able to anticipate the failure that happens on day 366?
I'm skeptical. We may never know though. It's not looking like FSD collisions will be due to complacency any time soon.
We have evolved over a billion years to handle unexpected events that come up suddenly, sometimes even once in a lifetime. We should be able to pick up on something unusual and react.

Distracted driving is a problem - with or without FSD Beta.

PS: I'm more worried about high-speed NOA than low-speed FSD Beta.
 
It's not looking like FSD collisions will be due to complacency any time soon.

I think the most probable first collisions due to complacency will be some sort of sideswipe of a fixed object. These things can be difficult to judge, and I can see someone thinking it is going to clear or identify such an object, and then it doesn't quite do so. I think that could happen soon.

But yes, in general, I think the more likely class of collisions in the short term is FSD incorrectly being allowed to assume right of way, or being put in a challenging situation where it just runs into something because the driver is letting it "do what it is going to do" to "allow the system to learn." :rolleyes:

I too am not convinced that a much more capable system which is more dependable will be much safer. I think it's possible that in the short term the risk of collisions could go down as the system improves. But once (if?) that initial improvement occurs, it's a lot harder to say what might happen when people start to trust it.

Right now it is so jerky, aggressive, curb-loving, and simultaneously sluggish that there's really very little risk other than through sheer negligence on the part of the driver.
 
I too am not convinced that a much more capable system which is more dependable will be much safer. I think it's possible that in the short term the risk of collisions could go down as the system improves. But once (if?) that initial improvement occurs, it's a lot harder to say what might happen when people start to trust it.
You nailed it right there... the whole "trust" thing.

I could literally type pages about this subject, as it hits kind of close to home with my chosen profession. But I'll save you guys from a wall of text that you'd never read, anyway.

If you only read one thing from any of my posts, ever, please just read this:

NEVER trust autopilot. Never ever forever never.

History is littered with the corpses of people that made the mistake of trusting an automated system. Even worse, it's littered with the corpses of innocent people that died from other people not correctly monitoring automated systems.

Even if you know it's going to take that curve correctly, don't trust it. Expect that this time, even though it's done this corner a hundred times before, that it'll kill you.

Let's take autopilot out of the conversation for a sec. When do most people end up getting in accidents? When they are distracted. When they're not paying attention. When something becomes more important than operating a multi-ton projectile. Adding AP as a "safety net" at this point is just adding an additional layer of distraction... "oh, AP is driving right now. It always does well right here. I'll just check my phone really quick." Sound of impact follows.

I'll get off my soapbox now. :)
 
Yes, I'm aware of that (hence the curb-loving description), but I'm deliberately excluding it since people seem to think it's normal (I tend to disagree). Damage to the body panels is less normal.
Such "small" accidents (L4) are "normal" - in the sense they tend to happen about once a year on average (1 / 10k miles).

Serious accidents (L2) are lot less common and happen once about 10 years. Source : Cruise.
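Rough arithmetic, assuming the commonly cited ~10k miles driven per year: once a year works out to about one minor accident per 10k miles, while once per ~10 years is on the order of one serious accident per 100k miles.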

 
Adding AP as a "safety net" at this point is just adding an additional layer of distraction... "oh, AP is driving right now. It always does well right here. I'll just check my phone really quick." Sound of impact follows.

I'll get off my soapbox now. :)
I dunno about this. By your argument, bar-coded med administration is a distraction, yet there have been very few reports of it causing a medication error, and lots of reports of it preventing errors that could have been fatal.

I'm a believer in the Swiss cheese model of safety. The more layers, the less chance of harm occurring from an error.
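(As a rough illustration of why layers help: if each of three independent layers catches 90% of errors, only 0.1 × 0.1 × 0.1 = 0.1% of errors slip through all of them. Real layers aren't fully independent, which is exactly the caveat the Swiss cheese model is built around.)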
 
I dunno about this. By your argument, bar-coded med administration is a distraction, yet there have been very few reports of it causing a medication error, and lots of reports of it preventing errors that could have been fatal.

I'm a believer in the Swiss cheese model of safety. The more layers, the less chance of harm occurring from an error.
Properly monitored, automated systems greatly improve safety, no question about it.

But when people aren't properly trained in monitoring automation, things can get fatal really quickly.

I'm an airline pilot, and I'd hate to think about how much work a day at work without automation would be. But automation is a tool, and it needs to be used safely.
 
I dunno about this. By your argument, bar-coded med administration is a distraction, yet there have been very few reports of it causing a medication error, and lots of reports of it preventing errors that could have been fatal.

I'm a believer in the Swiss cheese model of safety. The more layers, the less chance of harm occurring from an error.

It's interesting that you mentioned bar-coded medicine administration. It eventually helped reduce errors, but it introduced entirely new classes of errors, where nurses would bypass the system by wearing duplicate barcodes on their person instead of scanning the ones on the patient.

Point being that automation isn't always good; it can and will introduce new failure modes that have to be considered. Sometimes these new failure modes can outweigh the problem the automation was meant to solve.

So it's important to look at things on their own merits and be open to the fact that tools such as AP introduce new ways of dying that didn't exist before.