FSD Beta 10.69

The creep barrier still provides value indicating where the vehicle thinks the initial stop point is, but that doesn’t preclude the possibility of it moving beyond the barrier and getting stuck in a bad spot.

Surely the creep barrier has always existed and just wasn't visualized previously; there's a lot that could be visualized but currently isn't
 
I think FSD didn't respond quickly because there wasn't really imminent danger, but the prudent and human thing to do would be to slow down;
This human would not alter my behavior in any way until absolutely necessary, because I would want the Bolt driver to know that they're bad. I don't think an aspiring robotaxi should do that, though.
 
Moving past the creep barrier means the car is going... working as designed.

The creep barrier still provides value indicating where the vehicle thinks the initial stop point is, but that doesn’t preclude the possibility of it moving beyond the barrier and getting stuck in a bad spot.

Surely the creep barrier has always existed and just wasn't visualized previously; there's a lot that could be visualized but currently isn't
I have to disagree. The point of the creep barrier - according to how people are talking about it - is to show the maximum distance the car is going to creep. That is what the car has determined is the maximum safe distance it will go. So people can trust it's not going to end up too far into the road.

However we saw in Chuck's video that the car did creep past the barrier and ended up too far into traffic. If it cannot see well enough at the creep barrier mark then it needs to disengage and tell the driver to complete the turn.

It's an example of a poor safety system.
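For what it's worth, here is a rough conceptual sketch (in Python) of what people seem to mean by the barrier acting as a hard limit: the car may inch forward for a better view, but once it reaches the barrier without adequate visibility it should hand control back rather than keep rolling. This is purely illustrative; the names, thresholds, and visibility score are all made up, and it is not a claim about Tesla's actual code.

```python
# Purely conceptual sketch, not Tesla's implementation. All names,
# thresholds, and the "visibility" score are invented for illustration.

MAX_CREEP_M = 2.5  # hypothetical creep barrier: farthest point judged safe to roll to

def creep_step(dist_crept_m: float, visibility: float, step_m: float = 0.25):
    """Decide the next creep action given distance already crept and a 0-1 visibility score."""
    if visibility >= 0.9:
        return "GO", dist_crept_m                # view is clear enough: proceed with the turn
    if dist_crept_m + step_m <= MAX_CREEP_M:
        return "CREEP", dist_crept_m + step_m    # inch forward for a better view
    # At the barrier and still can't see: hand control back instead of creeping further.
    return "DISENGAGE", dist_crept_m

# Example: poor visibility the whole way, so the car stops creeping at 2.5 m and disengages.
d, action = 0.0, "CREEP"
while action == "CREEP":
    action, d = creep_step(d, visibility=0.4)
print(action, d)  # DISENGAGE 2.5
```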
 
There are many different angles to this; stopping before the stop sign lets the other driver make their left turn directly, without stopping, and it clears the intersection for FSD. It could save both cars time. Speeding up to the stop sign and then braking locks up the intersection until the other car is confident enough to make the left turn. Planning ahead is better.
Slowing down is fine; one has to slow down for the stop sign anyway and the other driver will detect that slowing. Jamming on the brakes is not.

This is a “smoothly feather regen-only” situation, and clearly the brakes were used here.

Not OK, and not necessary to do it that abruptly to show intent to stop.

FSD should have done a full-regen, regen-only deceleration ending precisely at the stop line (being careful to avoid any corner cutting by other drivers, and adjusting profile with slight braking if necessary to avoid that). Currently it appears incapable of such feats, which can be performed effortlessly by [skilled] human drivers.
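To put rough numbers on that, here is a back-of-the-envelope sketch using standard constant-deceleration kinematics. The speed, distance, and regen figure are illustrative assumptions, not values taken from the video.

```python
# Back-of-the-envelope kinematics, illustrative only. Assumes constant
# deceleration; the speed and distance below are made-up example values.

def required_decel(speed_mps: float, distance_m: float) -> float:
    """Constant deceleration (m/s^2) needed to stop exactly in distance_m.

    From v^2 = 2*a*d, so a = v^2 / (2*d).
    """
    return speed_mps ** 2 / (2.0 * distance_m)

speed = 35 * 0.44704               # 35 mph in m/s (about 15.6 m/s)
a = required_decel(speed, 100.0)   # begin slowing 100 m before the stop line
print(f"{a:.2f} m/s^2")            # about 1.22 m/s^2

# Regen on these cars is commonly reported to be around 0.2 g (about 2 m/s^2),
# so a stop planned this early should need no friction braking at all; an
# abrupt brake application implies the slowdown simply started too late.
```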
 
But it’s been the best build I’ve had and I’ve been testing beta since almost the beginning.
Thanks for the report. This definitely is consistent with what we have seen, and good to hear more confirmation. Hopefully they’ll be able to tweak up a few things and make it even better for us when they do the wider release. Will be great to see some significant steps forward, even though there will still be plenty of issues to resolve.
 
FSD should have done a full-regen, regen-only deceleration ending precisely on the stop line (being careful to avoid any corner cutting by other drivers, and adjusting profile with slight braking if necessary to avoid that). Currently it appears incapable of such feats, which can be performed effortlessly by human drivers.
My impression from watching human drivers, especially if I'm a passenger, is how differently everyone drives. A few drive effortlessly, but most don't, and some are just poor drivers. Remember, half the drivers on the road are below average. I think this needs to be considered when watching FSD videos. Just because FSD doesn't drive the way the viewer would doesn't always mean FSD did something wrong.
 
My impression from watching human drivers, especially if I'm a passenger, is how differently everyone drives.
I have added “skilled” to clarify. I thought about including the adjective when I wrote it, but thought it was clear (though implicit) that I was talking about a subset of human drivers.

Just because FSD doesn't drive the way the viewer would doesn't always mean FSD did something wrong.
Sure. But in this case it was certainly, 100%, wrong. Remember, even Whole Mars thought the car had done something wrong! Which he was right about! (Best to stick with your first instinct, Whole Mars…)
 
I have to disagree. The point of the creep barrier - according to how people are talking about it - is to show the maximum distance the car is going to creep. That is what the car has determined is the maximum safe distance it will go. So people can trust it's not going to end up too far into the road.

However we saw in Chuck's video that the car did creep past the barrier and ended up too far into traffic. If it cannot see well enough at the creep barrier mark then it needs to disengage and tell the driver to complete the turn.

It's an example of a poor safety system.
This is why I think the barrier could be problematic. The barrier was always there, whether or not it was visualized, and its existence doesn't mean the vehicle can't end up in a bad spot. FSD Beta users shouldn't be staring at the screen when the vehicle is approaching an intersection; they should be looking out of the windshield and responding to what the car is actually doing.

In terms of the performance of this build, you'd certainly hope the system is becoming better especially when it's a 2+ month wait between releases. I'm patiently awaiting this rolling out to the wider test group so we can see this in other environments.
 
This is why I think the barrier could be problematic. The barrier was always there, whether or not it was visualized, and its existence doesn't mean the vehicle can't end up in a bad spot. FSD Beta users shouldn't be staring at the screen when the vehicle is approaching an intersection; they should be looking out of the windshield and responding to what the car is actually doing.

In terms of the performance of this build, you'd certainly hope the system is becoming better especially when it's a 2+ month wait between releases. I'm patiently awaiting this rolling out to the wider test group so we can see this in other environments.
Well, we're not sure the barrier has always existed, are we? In previous versions it kinda looks like the Tesla typically just creeps and creeps, making it up as it goes. And really, it's not a barrier if it can be breached.

I agree that drivers maybe shouldn't be watching the (time-delayed) screen while driving. But then why is the visualization showing what appears to be key safety information at all?

I feel the safety information about creeping, the countdown to the starting moment, and other text notes should be given loudly and verbally, and not be subject to being turned off.

Putting key safety information in tiny text on the distraction screen is unsafe.

FSD Beta needs to disengage itself more when it can't see safely. Not say "press accelerator", just disengage. Having the driver press the accelerator when the Tesla can't see means FSD Beta is driving blind for some period of time until it reacquires vision. During that period it is unsafe. It needs to be fully off, driver-steered, until vision is restored.
 
....I agree that drivers maybe shouldn't be watching the (time-delayed) screen while driving. But then why is the visualization showing what appears to be key safety information at all?

I feel the safety information about creeping, the countdown to the starting moment, and other text notes should be given loudly and verbally, and not be subject to being turned off.....
I bought a TeslaBot to watch my screen and tell me what is happening while I drive. Problem solved.🤣
 
Well, we're not sure the barrier has always existed, are we? In previous versions it kinda looks like the Tesla typically just creeps and creeps, making it up as it goes. And really, it's not a barrier if it can be breached.

This was definitely prior behaviour for me. On more than one occasion (when there was no traffic), I let the car creep as long as it wanted to at an intersection. It creeped all the way into the middle of the intersection (into the path of cross traffic) trying to get a better view.

It is my opinion that if something doesn't appear in the visualization, then the car does not see or know about it. I have not personally witnessed the car reacting to anything not also appearing in the visualization. So, for example, when a car disappears from the visualization, the car has forgotten it is there.
 
It didn't seem risky at all to me. It was an easy turn, and with the correct capabilities it should be possible to make it. Remember that previously there was ample evidence that it could not "understand" occluded vehicles and would go at incorrect times (which led to disengagements).

Now the question is whether it can be done reliably and safely, now that it seems to have the basic capabilities and framework to be able to successfully proceed. Can it do it "100%" of the time as Elon said? Right now it clearly has issues but are they just going to be resolved with simple tuning, or are there issues with capability (not clear what that would be at the moment)?
Not an attempt to move goalposts, but when Elon says "we're going to solve Chuck's turn 100%," the statement might not mean they're going to solve it 100% of the time, just that they're going to solve it some percent of the time, 100% guaranteed. The nature of neural networks makes "100%" a near-impossibility, just like human brains will never drive perfectly 100% of the time.
 
I imagine the barrier has existed for as long as the creeping behavior has existed. Whenever the vehicle has posted the "creeping for visibility" notification, it was surely basing that on some type of limit/criteria defined by the system.

Tesla has all kinds of assets in FSD that could be visualized but aren't; they can turn them on/off at will. The speed bump visualization is another example, but I'm sure there are many. And there are who knows how many hidden developer menus within Autopilot/FSD where employees can just click functions on/off; much of this was hacked and leaked a long time ago by "green" on Twitter.

You can find it via Google by searching stuff like "Autopilot Dev Controls"

[Screenshot: Autopilot dev controls menu]


Systems from other competitors obviously have similar functionality as well. For example, in this screenshot you can see "Disable High Curvature Hands-on" -- this is something Ford's BlueCruise was criticized for back when it was being demonstrated by Sandy Munro and others, because the system made you take over for high-curvature road sections.

The reality is that Ford has a menu just like this one where they can click these functions on and off; it's just that the competitors are much more conservative with what they allow the system to attempt.
 
Sure, it's impressive for beta test software, but it's not worth $15k, and there is no way in hell that they will complete FSD by the end of the year at this rate, which is why everyone is roasting it.
Also not to move goalposts, but it's clear the goal is to do a wide release of FSD by end of the year and then continue to refine as they've done with AP/NoA for years. Hopefully, they'll give Summon the brain of FSD so that feature works better. Same with AutoPark, etc. Likely they will need to teach FSD to back up before its brain can be used for Smart Summon.
 
The nature of neural networks makes "100%" a near-impossibility, just like human brains will never drive perfectly 100% of the time.
That’s the real question: can it do it correctly 99.9999% (or maybe more 9s; hard to know) of the time? That is why I put “100%” in quotes. It’s obviously not what Elon meant, but he is a sales guy, so he cannot help it!
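As a back-of-the-envelope illustration of why those extra 9s matter (assuming, unrealistically, that attempts are independent; the attempt count is arbitrary):

```python
# Rough illustration only: how per-turn reliability compounds over many attempts.
# Assumes each attempt is independent, which is a big simplification.

def p_zero_failures(per_turn_success: float, attempts: int) -> float:
    """Probability that every one of `attempts` turns succeeds."""
    return per_turn_success ** attempts

for p in (0.999, 0.9999, 0.999999):
    print(f"{p}: {p_zero_failures(p, 100_000):.4%} chance of zero failures in 100k turns")

# 0.999    -> ~0%       (a failure is essentially guaranteed)
# 0.9999   -> ~0.0045%
# 0.999999 -> ~90.5%
```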
 
Ha… Looks like our city is faster at "solving" unprotected left turns than Tesla! I went to retest 10.12 on some "Chuck Cook style" intersections with median crossover regions, and some have changed with islands that only allow left turns from the main street ("3/4 access") while others have right-in/right-out channelization preventing all left turns. These improvements reduce the number as well as severity of potential conflicts, so overall it's safer but now I need to find new places to test. 😝

I wonder if there aren't as many of these types of intersections here because there are plenty of center turn lanes allowing merging onto the main street?
 
Thinking about requesting FSD again but what score gets one in these days?
It seems likely Tesla will be adding more people with 10.69, and the last batch of new additions with 10.12 included people with a 93 score. There is some confusion about Elon Musk's tweets regarding a "wide release": I believe the former tweet is specifically about 10.69.2 going to the existing FSD Beta population and potentially new additions, while the latter tweet is about FSD Beta in general potentially not requiring a Safety Score anymore.