FSD Beta 10.69

Why isn't the default posture for people to assume the car will try to kill them? I don't know why one would assume differently. No informing necessary, really; it says right there that it will do the wrong thing at the worst time. What else could that mean?
Thanks for your comment, old pilot. I appreciated and understood what you said without all this bantering of words.
 
Thanks for your comment, old pilot. I appreciated and understood what you said without all this bantering of words.
It's just so inadequate to the driving task, and that becomes blindingly apparent within ~30 seconds of first engagement, that the chance of encountering a situation where any ambiguity about its capability exists before that realization is vanishingly low; a detailed list of capabilities (really, the lack of them) seems unnecessary. Plus, if you make a list of things that cannot be handled, what do you do about the things not on the list?
 
It's just so inadequate to the driving task, and that becomes blindingly apparent within ~30 seconds of first engagement, that the chance of encountering a situation where any ambiguity about its capability exists before that realization is vanishingly low; a detailed list of capabilities (really, the lack of them) seems unnecessary. Plus, if you make a list of things that cannot be handled, what do you do about the things not on the list?
There are problems and there are problem solvers. It never hurts to try to prevent something proactively rather than take the stance that it should all be obvious and do nothing about the problems.
 
My car also stops short of neighborhood stop signs. Sometimes it creeps slowly up to the sign, at other times it proceeds from the short stop position. Fortunately, in my case, there are no occlusions at these intersections. However, your comment about the car entering the intersection without visibility of oncoming traffic lanes makes me wonder how a Tesla resolves the difference between not seeing cross traffic and seeing that there is no cross traffic. These are two very different things!

The car's occupancy network should be examined to see if it blocks visibility to cross lanes. If it does, then the car should know to creep forward until it can see past the occlusion, thus being able to see there is no cross traffic versus not seeing cross traffic. Perhaps the occlusion is somehow misplaced in the occupancy network so that the car thinks the wall is further away than it really is?
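As a rough sketch of what "creep until it can see past the occlusion" could mean (the grid, coordinates, and creep budget below are my own illustrative assumptions, not how the occupancy network actually works), the logic might look something like this:

```python
# Hypothetical sketch: creep forward until the cross lane is visible in a
# 2-D occupancy grid. All names and numbers are illustrative assumptions.
import numpy as np

def line_of_sight_clear(grid, ego, target):
    """Walk the straight line from ego to target; occluded if any
    intermediate cell is marked occupied (1)."""
    (r0, c0), (r1, c1) = ego, target
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for i in range(1, steps):
        r = round(r0 + (r1 - r0) * i / steps)
        c = round(c0 + (c1 - c0) * i / steps)
        if grid[r, c]:
            return False
    return True

def creep_until_visible(grid, ego, cross_lane_cells, max_creep=5):
    """Advance one cell at a time until every watched cross-lane cell is
    visible, or the creep budget runs out."""
    pos = ego
    for _ in range(max_creep):
        if all(line_of_sight_clear(grid, pos, t) for t in cross_lane_cells):
            return pos, True        # can now SEE there is no cross traffic
        pos = (pos[0] - 1, pos[1])  # creep forward one cell
    return pos, False               # still occluded: NOT the same as "clear"

# Toy example: a wall occludes the cross lane to the left of the ego car.
grid = np.zeros((10, 10), dtype=int)
grid[5:8, 3] = 1                    # wall between ego and the cross lane
ego = (8, 5)
cross_lane = [(4, 0), (4, 1)]       # cells where cross traffic would appear
print(creep_until_visible(grid, ego, cross_lane))  # -> ((4, 5), True)
```

The distinction in the return value is exactly the one above: ending up somewhere the watched cells are visibly empty, versus running out of creep room while still blind. And if the wall is misplaced in the grid (thought to be farther away than it really is), this kind of check would report "visible" while the real view is still blocked, which would explain the behavior described.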
Yes, anything is possible when things get this complicated. I should probably buy a camera mount to gather more details.
 
Maybe I was not clear enough on what I was attempting to say, so let me rephrase. Tesla knows what FSD has been programmed to do and what it has not been programmed to do. If GBills had known that the car would not stop for a railroad crossing, he would not have been surprised when it failed to do something it was not programmed to do. The same holds true for a stopped school bus: one may expect the car to stop, but it's not programmed to. It is possible to make this experiment safer and less stressful if people are better informed.
I understood and do not disagree. However, my point was that you have to not be surprised when FSDb does whatever it does. Use it for what it is, with significant limitations expected, but always expect to be "surprised". Then it's not a surprise, and you go home once again alive. For a while, at least...
 
Maybe I was not clear enough on what I was attempting to say, so let me rephrase. Tesla knows what FSD has been programmed to do and what it has not been programmed to do. If GBills had known that the car would not stop for a railroad crossing, he would not have been surprised when it failed to do something it was not programmed to do. The same holds true for a stopped school bus: one may expect the car to stop, but it's not programmed to. It is possible to make this experiment safer and less stressful if people are better informed.
You are assuming they can actually list things it won't do.

But a "known bugs" kind of list would be useful. But Elon doesn't want to do anything like that because anything Tesla says will be used against them in the worst possible way. Having fought media / WS FUD for decades he doesn't want to be transparent.

Having said that, given that FSDb is in the hands of 400k drivers, it is not difficult to figure out the kinds of things it simply won't do. Just a bit of reading the forums will be informative enough.
 
On .25.2, I had it stop for a rail crossing. I attached a photo of the crossing; the arrow shows the direction I was traveling. I was the only car approaching the rail crossing, and the car stopped about 15 ft in front of the crossing with the arms down. The only issue was that once the gates lifted, the car sat there; I had to tap the accelerator to get it to proceed. For what it's worth, the gates weren't fully raised, so it might have gone ahead on its own, but there was traffic behind me and I didn't want to piss them off.
 

Attachments

  • 27621DF3-2AC2-49B0-917D-E02C7C0A646A.jpeg (1.2 MB)
Why isn't the default posture for people to assume the car will try to kill them? I don't know why one would assume differently. No informing necessary, really; it says right there that it will do the wrong thing at the worst time. What else could that mean?
That could mean use FSD at your own risk, or disengage and don't use it even though you paid $10k or whatever that sum is.
 
In many respects, you do use FSD at your own risk, despite whatever sum you paid.
Yeah. Now I feel like I have wasted my money on the three MS I have owned ($5k on my first MS, $5k plus another $2.5k on my second MS, and $10k on my current MS), as it cannot handle some basic functions, like HOV lane open times or other HOV lane restrictions, because the car cannot read signs and is effectively illiterate.
 
On .25.2, I had it stop for a rail crossing. I attached a photo of the crossing; the arrow shows the direction I was traveling. I was the only car approaching the rail crossing, and the car stopped about 15 ft in front of the crossing with the arms down. The only issue was that once the gates lifted, the car sat there; I had to tap the accelerator to get it to proceed. For what it's worth, the gates weren't fully raised, so it might have gone ahead on its own, but there was traffic behind me and I didn't want to piss them off.
Was there a train in the intersection when the gates were down or were the gates the only thing blocking the roadway?
 
You are assuming they can actually list things it won't do.

But a "known bugs" kind of list would be useful. But Elon doesn't want to do anything like that because anything Tesla says will be used against them in the worst possible way. Having fought media / WS FUD for decades he doesn't want to be transparent.

Having said that, given that FSDb is in the hands of 400k drivers, it is not difficult to figure out the kinds of things it simply won't do. Just a bit of reading the forums will be informative enough.
Unfortunately, I suspect the average new FSDbeta user doesn't read these forums or other sources like Reddit, so they go into FSDbeta cold. You'd be surprised how many owners have never heard of this forum, let alone use it.
 
Unfortunately, I suspect the average new FSDbeta user doesn't read these forums or other sources like Reddit, so they go into FSDbeta cold. You'd be surprised how many owners have never heard of this forum, let alone use it.
The thing is, expecting FSD to actually stop at a crossing with the arms down and a train coming is an incredibly low bar; even after using FSDb for the last 14 months, I can't blame @Gbills for being surprised or caught off guard. I mean, it will detect an open car door and show that on the display, right? One would think that maybe detecting and stopping for a train might come before showing an open car door on the screen.
 
Did you notice what was on the display: the train, the gates, the lights? Do you believe the car stopped for the railroad crossing signal, or because its path was blocked by a very large object?
Well, it visualized the blinking red crossing lights as traffic lights (like it has forever), and displayed the train as a bunch of semi trucks back to back. So who knows what it actually stopped for.
 
Well, it visualized the blinking red crossing lights as traffic lights (like it has forever), and displayed the train as a bunch of semi trucks back to back. So who knows what it actually stopped for.
Precisely! We currently don't know if it stops for RR crossings because it is supposed to, or because the neural networks just arbitrarily decided to.

I don't understand why it seems to have to be one thing (provide a list of what it can't do) or another (just give a global warning that we don't know what it can do). One of the most frustrating things in this "beta" test for me over the years is that there is no forum for "beta testers" to provide meaningful, contextual feedback, or for developers to communicate what has changed and what needs to be tested.

I believe there still has to be some procedural code (code "1.0" in Karpathy-speak) that provides guard rails for the neural networks, e.g., stopping at red lights, stopping for pedestrians, stopping for RR crossings, slowing for emergency vehicles, how far before an upcoming exit to change lanes, etc. For everything else, if it's just neural networks, there's no telling what the car can and can't do, and it may (will) vary from training to training (I understand they test and validate).

I think knowing the specific functions the car is programmed to handle could only help "testers" understand how the car is supposed to operate, and allow more meaningful feedback. This doesn't mean it WILL do these functions correctly, but at least it is SUPPOSED TO do them correctly. For neural network driving decisions, we should always assume the worst.
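As a rough illustration of what I mean by guard rails (the names, rules, and planner stub here are all hypothetical assumptions for the sketch, not Tesla's actual architecture), procedural checks could override a learned planner like this:

```python
# Hypothetical sketch of procedural "1.0" guard rails wrapped around a
# neural-network planner. All names and rules are illustrative assumptions,
# not Tesla's actual code.
from dataclasses import dataclass

@dataclass
class Scene:
    red_light: bool = False
    pedestrian_ahead: bool = False
    rr_crossing_active: bool = False   # gates down / lights flashing

def nn_planner(scene: Scene) -> str:
    """Stand-in for the learned planner; it may propose anything."""
    return "proceed"

# Hand-written rules that override whatever the network proposes.
GUARD_RAILS = [
    (lambda s: s.red_light,          "stop: red light"),
    (lambda s: s.pedestrian_ahead,   "stop: pedestrian"),
    (lambda s: s.rr_crossing_active, "stop: railroad crossing"),
]

def plan(scene: Scene) -> str:
    for triggered, action in GUARD_RAILS:
        if triggered(scene):
            return action           # procedural rule always wins
    return nn_planner(scene)        # otherwise defer to the network

print(plan(Scene(rr_crossing_active=True)))  # -> stop: railroad crossing
print(plan(Scene()))                         # -> proceed
```

The point of that structure is that the hand-written rules are enumerable and testable, and could be published as the "what it is programmed to do" list, while the network's behavior outside the rails cannot be enumerated at all.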
 