FSD doesn't have to OCR the sign to read what it says. FSD just has to know what the image means and what the car should or shouldn't do. Tesla used AI, so FSD learned what stop signs and traffic lights look like, and learned that some traffic lights are vertical and others horizontal. So maybe FSD/AI can learn what other signs mean too. I just wouldn't assume the problem is solved by improving map data.

I would expect that if FSD watched thousands of videos with these types of signs, the AI would learn what to do from what the human drivers did. I could be wrong, but AI has already done some amazing things for FSD.
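To make that concrete, here's a toy sketch of the idea (purely hypothetical labels and features, nothing to do with Tesla's actual training pipeline): the sign's appearance maps straight to the behavior human drivers demonstrated, and no text is ever read.

```python
# Hypothetical sketch: learn what a sign "means" from human behavior, not from its text.
# Labels and features are invented for illustration; Tesla's real training setup is not public.
from collections import Counter

# Each clip: a crude visual descriptor of the sign + what the human driver actually did.
clips = [
    {"sign_shape": "octagon",  "sign_color": "red",    "human_action": "full_stop"},
    {"sign_shape": "octagon",  "sign_color": "red",    "human_action": "full_stop"},
    {"sign_shape": "triangle", "sign_color": "yellow", "human_action": "slow_and_yield"},
    {"sign_shape": "circle",   "sign_color": "white",  "human_action": "hold_speed_limit"},
]

# "Training" here is just a majority vote per sign appearance -- the point is that the
# mapping is image -> behavior, with no reading of the sign's text anywhere.
votes = {}
for c in clips:
    key = (c["sign_shape"], c["sign_color"])
    votes.setdefault(key, Counter())[c["human_action"]] += 1
policy = {key: counter.most_common(1)[0][0] for key, counter in votes.items()}

print(policy[("octagon", "red")])   # -> "full_stop"
```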

I sometimes wonder if a future hardware version will (or should) include an articulated telephoto camera, with dedicated sub-networks for aiming the camera and interpreting the telephoto view. Map data is far too often wrong for FSD not to override it based on prima facie evidence, such as road signs or painted road markings.
 
Does that come as a surprise to you? It’s a machine learning model. If you feed the exact same input into it you will get the exact same output.
I guess I don't know what I expected, exactly. I figured somehow there might be a bit more variation with an NN. It's not exactly the same input. Could be daytime, could be nighttime, could be cloudy, could be sunny, could be raining. The relevant pieces are the same of course.

I thought there might be a bit more variation than with procedural code, but it seems not (in some ways there is, of course - when the input differs in very specific ways, it seems much more sensitive to it).

Dan O'Dowd (representing my alma mater with all the typical class and social graces) is wrong about this. FSD v12.3.x has not been retrained on that swarm of Teslas. Not yet. The next version is going to totally nail it once they do that retraining (the v12.3.x builds all seem to be the same).

D O'D: After 3 years of chazman sending bug reports for his left turn, and Tesla sending a swarm of cars to gather data, Chuck's latest video shows FSD still can't do it safely. If they can't fix one left turn, how will $TSLA ever get FSD to work on the millions of other left turns?

 
Since yellow lights vary in timing, with no way to tell what that timing is, and the new NNs are not coded but instead learn how to handle things by watching curated videos, I wonder if that's why we're seeing this behavior. Your yellows might have a 4-second timer, but the NNs were trained on 2- or 3-second timers, so it brakes out of caution?

For example, if the lights near you always have 4-second timers, and that's been your muscle memory for years, then you visit another state where they have 2-second timers, I'd bet you run a few red lights before adapting.
I wonder if yellow light times are coded into map data? If not, maybe they should be? Yellow light times are typically correlated with speed limits (faster limits = longer yellow lights), but there is no real standardization, and red-light-traps do exist. There have been attempts to propose formal standards, but in practice it's still sort of the Wild West out there. At the same time, there are prima facie rules in some states (most states?) that one should stop at a yellow light if it is possible and safe to do so. (If there's a car right behind you, it's arguably not safe to do so.) Some states prohibit accelerating to make it through yellow lights. It's possible that Tesla is taking the lowest common denominator approach that would be the most legal in most states, though in some cases it may compromise safety. I hope this improves as time goes on.
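For what it's worth, the yellow-interval-vs-speed correlation has a commonly cited basis: the ITE kinematic formula. A quick sketch of it below, with the usual assumed values (1 s reaction time, 10 ft/s² deceleration) - those defaults are illustrative, not a legal standard in any particular state:

```python
def yellow_interval_s(speed_mph, reaction_s=1.0, decel_ftps2=10.0, grade=0.0):
    """Approximate ITE yellow change interval: t + v / (2a + 2*g*G)."""
    v = speed_mph * 5280 / 3600                      # mph -> ft/s
    return reaction_s + v / (2 * decel_ftps2 + 2 * 32.2 * grade)

for mph in (25, 35, 45, 55):
    print(mph, round(yellow_interval_s(mph), 1))     # roughly 2.8, 3.6, 4.3, 5.0 seconds
```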
 
I wonder if yellow light times are coded into map data?
I would like it to not tap the brakes, then proceed, when the left-hand-turn light goes yellow and my light stays green (I have video of this, which I absolutely will not post). How on earth did the miraculous NN learn to do that completely incorrect behavior?

Maybe it just takes the average of the light colors. 2 out of 3 were green.
 
Dan O'Dowd (representing my alma mater with all the typical class and social graces) is wrong about this. FSD v12.3.x has not been retrained on that swarm of Teslas. Not yet. The next version is going to totally nail it once they do that retraining (the v12.3.x builds all seem to be the same).

D O'D: After 3 years of chazman sending bug reports for his left turn, and Tesla sending a swarm of cars to gather data, Chuck's latest video shows FSD still can't do it safely. If they can't fix one left turn, how will $TSLA ever get FSD to work on the millions of other left turns?


I experienced this first hand today. If not for my intervention, my car would've taken 2 hours to complete a left turn, waiting for the traffic to die down.

I like the progress from v11 to v12 though. If the gap between v13 and v12 is as wide as that between v12 and v11, I think we will have an actual autonomous car in a year.
 
I sometimes wonder if a future hardware version will (or should) include an articulated telephoto camera, with dedicated sub-networks for aiming the camera and interpreting the telephoto view. Map data is far too often wrong for FSD not to override it based on prima facie evidence, such as road signs or painted road markings.
I don't know how they input navigation directions into the end-to-end model; there are different ways you could do it. But one day they'll probably need some multi-modal thing that can handle language tokens like "Turn left on 10th street", and that will make it possible to grok the roadway street signs. And you could give it plain directions like "park near the west entrance to Costco".
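Just to picture one of those "different ways" (this is my speculation, not anything Tesla has described): embed the instruction text, concatenate it with the vision features, and let the planner head consume both.

```python
# Speculative sketch of conditioning a planner on a text instruction.
# The dimensions, hashing trick, and random "vision features" are all made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM, VISION_DIM = 16, 32

def embed_text(instruction: str) -> np.ndarray:
    """Toy bag-of-words embedding via hashed token buckets."""
    vec = np.zeros(EMB_DIM)
    tokens = instruction.lower().split()
    for token in tokens:
        vec[hash(token) % EMB_DIM] += 1.0
    return vec / max(len(tokens), 1)

vision_features = rng.normal(size=VISION_DIM)           # stand-in for camera-derived features
text_features = embed_text("turn left on 10th street")

planner_input = np.concatenate([vision_features, text_features])
W_planner = rng.normal(size=(3, EMB_DIM + VISION_DIM))  # toy head: steer, accel, brake scores
print(W_planner @ planner_input)
```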
 
It would make sense for them to collect disengagement data, then collect real driver data at the disengagement locations to further train the system.
I do expect that they have some sort of Venus Flytrap approach where it takes at least 2 disengagements at a specific location for them to start paying attention to it. [A venus flytrap only snaps shut if it detects two successive movements in nearly the same spot, to prevent random events like raindrops triggering it.] But agreed, subsequent capture of non-FSD driving at these locations should be very helpful.
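If they really did use a Venus-flytrap-style trigger like that (again, pure speculation), it could be as simple as bucketing disengagement reports into coarse location cells and only flagging cells that get hit twice:

```python
# Sketch of the speculated "two disengagements at the same spot" trigger.
# Locations and the 3-decimal (~100 m) grid size are arbitrary illustration values.
from collections import Counter

def cell(lat, lon, precision=3):
    return (round(lat, precision), round(lon, precision))

reports = [
    (32.9101, -96.7321),   # repeat offender at roughly the same spot
    (32.9102, -96.7319),
    (40.7480, -73.9855),   # one-off, probably noise
]

counts = Counter(cell(lat, lon) for lat, lon in reports)
hotspots = [loc for loc, n in counts.items() if n >= 2]
print(hotspots)            # only the location reported twice gets flagged
```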
 
I don't want to be a Negative Nancy, but in the most optimistic sense of speaking, the chances of this are zero with current hardware.

The progress from v11 to v12 is really great.

Huh? How is this optimistic? What could be more pessimistic than a zero chance? Could there be negative chances???

Like autonomous driving would intentionally cause accidents???
 
Clearly an issue that they need to fix, but I've noticed something interesting with 12.3.3 - it accelerates quite well up to about 5-7 MPH under the target speed, then sits there totally content. If I don't focus on the speed and there's no one behind me, it's not at all unpleasant to let FSD do its thing. Not what I would do, but not dangerous in any way and not illegal.
Another weird behavior I've noticed with 12.3.3 is that it will occasionally get stuck at 5-7mph OVER the set "Maximum" limit (e.g. if the speed limit changes) and won't slow down even if the maximum limit is adjusted with the scroll wheel; it requires a full disengagement and re-engagement in that case to get it to slow down. Seems like a C++ bug more than a neural net error.
 
Total conspiracy theory here, but what if early v12 was pure E2E, and subsequent versions are adding back bits of hard-coded logic to solve some issues, like the low speed?

They're going to have to. These neural networks are built on human behavior, but if you want the car to behave better than humans, you're going to have to hard-code a few things, such as: don't make rolling stops and don't speed.
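A guardrail like that wouldn't have to live inside the network; hypothetically it could be a thin post-processing layer over whatever the planner outputs (illustrative only - the field names and thresholds below are invented, not how Tesla actually structures it):

```python
# Hypothetical post-NN guardrails: clamp speed to the limit, force a true stop at stop lines.
def apply_guardrails(plan, speed_limit_mph, dist_to_stop_line_m):
    # Never command a speed above the posted limit.
    plan["target_speed_mph"] = min(plan["target_speed_mph"], speed_limit_mph)

    # If a stop line is imminent, require a complete stop (no rolling through).
    if dist_to_stop_line_m is not None and dist_to_stop_line_m < 2.0:
        plan["target_speed_mph"] = 0.0
        plan["hold_until_stopped"] = True
    return plan

nn_plan = {"target_speed_mph": 38.0, "hold_until_stopped": False}
print(apply_guardrails(nn_plan, speed_limit_mph=35, dist_to_stop_line_m=1.2))
# -> {'target_speed_mph': 0.0, 'hold_until_stopped': True}
```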
 
Is there a way to prevent the car from going into the HOV lane? On FSD, my Model 3 quickly got into the fast lane and then abruptly tried to change to the carpool lane. I don't have HOV stickers and I was the only one in the vehicle. I had to disengage quickly and get back into the fast lane.
Yes, there's a "Use HOV Lanes" setting in the Nav preferences. Although I do wish they had an intermediate "Adaptive" setting, using the seat weight sensors to determine occupancy, and using HOV lanes if and only if it determines enough people are in the car. Relatedly, it would be fun to see a "Watt-Hours per Passenger Mile" metric displayed in the Energy stats.
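The math for that metric would be trivial if an occupancy estimate existed; a sketch, assuming a hypothetical per-seat occupancy count (which, to be clear, the car doesn't currently expose):

```python
# Sketch of a "Wh per passenger-mile" readout, assuming hypothetical seat-occupancy data.
def wh_per_passenger_mile(wh_per_mile: float, seats_occupied: int) -> float:
    occupants = max(seats_occupied, 1)       # the driver is always counted
    return wh_per_mile / occupants

print(wh_per_passenger_mile(265.0, 1))   # solo: 265.0 Wh per passenger-mile
print(wh_per_passenger_mile(265.0, 4))   # carpool: 66.25 Wh per passenger-mile
```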
 
Does that come as a surprise to you? It’s a machine learning model. If you feed the exact same input into it you will get the exact same output.
On the contrary; deep-learning models are highly nonlinear, and infinitesimal changes in input can result in vastly different outputs. The important thing is that the behavior should be stable "when it counts", e.g. for not curbing wheels. And in the real world, photon noise in the cameras is enough to make a considerable difference in how the car navigates an otherwise-identical scenario if encountered twice in a row.
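Both halves of that are easy to see in a toy example: with fixed weights the same input always gives the same output, but an input that happens to sit near a decision boundary can flip with a tiny nudge. (The weights, input, and "brake"/"go" labels below are all invented for illustration.)

```python
# Toy fixed-weight ReLU network: deterministic, yet sensitive near its decision boundary.
import numpy as np

W1 = np.array([[ 3.0, -1.0],
               [-2.0,  4.0]])
b1 = np.array([0.5, -0.25])
W2 = np.array([[1.5, -2.0]])
b2 = np.array([-1.795])

def decide(x):
    h = np.maximum(W1 @ x + b1, 0.0)        # ReLU hidden layer
    logit = (W2 @ h + b2).item()
    return ("brake" if logit > 0 else "go"), round(logit, 3)

x = np.array([0.40, 0.30])                  # an input that lands near the boundary
print(decide(x))                            # same input -> same answer, every time
print(decide(x))
print(decide(x + np.array([0.0, 0.002])))   # a tiny nudge flips the decision to "go"
```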
 
They're going to have to. These neural networks are built on human behavior, but if you want the car to behave better than humans, you're going to have to hard-code a few things, such as: don't make rolling stops and don't speed.
As I understand it, they implemented no-rolling-stops in v12 by carefully curating their training set to avoid rolling-stop examples, not by hand-coding. It's not clear if this approach is pragmatically scalable to every such parameter, though.
 
As I understand it, they implemented no-rolling-stops in v12 by carefully curating their training set to avoid rolling-stop examples, not by hand-coding. It's not clear if this approach is pragmatically scalable to every such parameter, though.
My assumption has been that they are using tons of simulated stops, in order to program the fundamental stop-sign behavior and achieve the non-human-like profile. (Though it obviously still needs work).

They already said in the livestream video and follow-ups that not only is it hard to find enough examples of the full legal stop, but that the qualifying examples are heavily mixed with people who are fiddling with phones, mirrors, lipstick or whatnot, or otherwise plain bad drivers. In other words, even heavy-handed data curation is probably not the way to get this done.
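Even so, the curation step itself is conceptually simple; here's a sketch of the kind of filter that might be involved (the clip fields and thresholds are invented - the real selection criteria aren't public):

```python
# Sketch of curating "full legal stop" training clips; fields and thresholds are invented.
def is_good_stop_example(clip) -> bool:
    came_to_full_stop = min(clip["speed_trace_mph"]) == 0.0
    driver_attentive = not clip["phone_in_hand"] and clip["eyes_on_road_fraction"] > 0.9
    return came_to_full_stop and driver_attentive

clips = [
    {"speed_trace_mph": [18, 9, 3, 0, 0, 5], "phone_in_hand": False, "eyes_on_road_fraction": 0.97},
    {"speed_trace_mph": [20, 11, 6, 4, 8],   "phone_in_hand": False, "eyes_on_road_fraction": 0.95},  # rolling stop
    {"speed_trace_mph": [15, 7, 2, 0, 0, 4], "phone_in_hand": True,  "eyes_on_road_fraction": 0.60},  # distracted
]

curated = [c for c in clips if is_good_stop_example(c)]
print(len(curated))   # only the first clip survives curation
```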
 
Sorry, I thought you were not already in the exit lane.

But if you are talking about one of those exits that cut across an entrance merge lane, we have one of those on the I-5N exit to 152 W. I always disengage and take that exit manually. That type of exit is just too tight, and I never trust FSD with it.
It's one of those short entrance/exits where, as the interstate rises into an overpass, you enter/exit at the top. This type of entrance/exit lane is short, and you don't have much time to negotiate with incoming traffic. FSD was signaling and preparing to get into the lane and exit, but then decided to slow and allow incoming traffic to get in front of it. A no-no for this kind of exit, as the traffic merging on is supposed to enter behind you while it is still speeding up. I've used this particular exit multiple times a week for over 10 years and have never ever missed it.

It sounds like that matches your description of an exit cutting across an entrance lane. I was surprised and unprepared because FSD had handled the same exit just fine two days prior, so I had thought nothing of it. There was probably less merging traffic then. Now I know, and if I can't find an obvious hole looking ahead, I'll accelerate ahead of incoming traffic or otherwise take over.

Just thought I’d point out poor FSD behavior at this type of entrance/exit.
 
Agree, but I do expect another significant upgrade in April. Tesla will want owners who have taken up the free FSD trial offer to see improvement within 30 days, to entice them to sign up for the subscription service. Lots of potential to get the V12 highway stack, Summons, or Reverse Summons (aka Banish). Banish is unlikely IMO, but we have heard that Tesla employees are testing Summons.
Did you read my very next post? I basically say the same thing. Here is the link. Also, a slight correction: it is Summon. Summons is a legal term. ;)
 
I wonder what data we have so far about reactions to V12 from first-time Tesla buyers and first-time FSD users. I would expect general disappointment, but I’m curious how big of a disappointment.
You got STUed today, and it was not a Dislike but a Laughy. 🤣 Is this a first for the bot? 🤔 Link to the post as proof.

 