
Profound progress towards FSD

Today, for the first time in a while, I took a 12-mile round trip, all city driving, and only had to take over for corners and areas where there were no clear lane lines. AP even avoided a car cutting in front from a side street without me getting too nervous. It was a little too cautious, but it did slow in time. A few updates ago, city driving wasn't this reliable for me. It's still a little rough around the edges, but it has improved. So, freeway phantom-braking issues aside, they are making reliability progress in the city from what I can see, and this is pre-rewrite.
 
During today's Q2 earnings call, Elon mentioned several times how the new 4D version of FSD is a profound improvement over the current 2.5D stack. He is a damn good salesman and has me convinced.

It will be interesting to see what Tesla can achieve in the next 6 - 12 months.

What he says at this point means nothing! Of course he would say that. Saying literally anything else would start scaring away potential buyers for FSD. He's going to say the same thing on the next call, on the next one, in 2021, 2022 and so on...
 
My apologies if this is the wrong thread, but as I close in on finally pulling the trigger on buying a Tesla (3, LR), I am curious about whether this "problem" has yet been addressed by FSD.

I find that pretty much every car I've driven that has some sort of driver assist - including Tesla test drive from a couple years ago - doesn't react well to being cut off on highway driving. Understandable to some degree -- but the issue is that the car doesn't "see" what's happening until well after I do, thus causing an abrupt braking. I can see the dumbass driver in the lane to my right starting to make their move or even put their blinker on to squeeze in between me and the car in front of me - so I start to react. The car sees it only after the car has finally moved right in front of me - so reacts late, applies its distancing algorithm, or whatever, and goes "oh crap" - and brakes hard.

Thanks for any insight.
 
If you had a lease, fortunately you did NOT invest in FSD. A lease only pays for a portion of the car's use. The balance is carried by the lease company. Those buying their car outright are paying for the development of FSD. If it is completed before they sell their car, they get all the step by step feature additions and the final product at no additional charge. Just to clarify.

Clearly you have no idea what you're talking about, because there are no leasing companies for Tesla in the US. I leased from Tesla for the lower payment and the option to upgrade, and my payment is through Tesla Finance. At the time it was worth it because it was an "inventory vehicle," but they lied to me about the prior history of the car, and after 24 service visits and my car being in service for 70+ days, I'm okay with getting a different car until they get their crap together.
 
My apologies if this is the wrong thread, but as I close in on finally pulling the trigger on buying a Tesla (3, LR), I am curious about whether this "problem" has yet been addressed by FSD.

I find that pretty much every car I've driven that has some sort of driver assist - including Tesla test drive from a couple years ago - doesn't react well to being cut off on highway driving. Understandable to some degree -- but the issue is that the car doesn't "see" what's happening until well after I do, thus causing an abrupt braking. I can see the dumbass driver in the lane to my right starting to make their move or even put their blinker on to squeeze in between me and the car in front of me - so I start to react. The car sees it only after the car has finally moved right in front of me - so reacts late, applies its distancing algorithm, or whatever, and goes "oh crap" - and brakes hard.

Thanks for any insight.

I haven't tried other driver-assist technologies, so I don't have a good comparison. In my experience, it still gets "surprised" by being cut off. Sometimes I'll just click off AutoPilot for a second to make space to let them in / be safer, or roll the jog wheel down to increase car spacing. Hopefully the new rewrite, which is supposed to use video, will be able to identify the blinker better than the current, mostly still-image recognition system.
 
I haven't tried other driver-assist technologies, so I don't have a good comparison. In my experience, it still gets "surprised" by being cut off. Sometimes I'll just click off AutoPilot for a second to make space to let them in / be safer, or roll the jog wheel down to increase car spacing. Hopefully the new rewrite, which is supposed to use video, will be able to identify the blinker better than the current, mostly still-image recognition system.

This is where the time dimension will really help, I think. AP also gets spooked by vehicles turning off of roads. It sees a vehicle ahead slowing down rapidly, but doesn't yet understand that the trajectory of the slowing vehicle is such that it will be out of the path of travel before you could reach it.
 
It is in production.
From the April 2019 Autonomy presentation:

No it's not. I have not experienced it a single time (or it activates only when the other car is already closer than what I consider safe, close enough that the other driver would take me for an a-hole).
I have to break the car out of autosteer every single time.

Actually, it slows down in all the other situations. When I am in the left lane and the right lane is solid with cars, it can slow down. So it acts in every situation except the one where it actually should. Maybe that's not even cut-in detection at all.

This is where the time dimension will really help, I think. AP also gets spooked by vehicles turning off of roads. It sees a vehicle ahead slowing down rapidly, but doesn't yet understand that the trajectory of the slowing vehicle is such that it will be out of the path of travel before you could reach it.

Yes, it also gets spooked by cars in the right lane (same direction) when I am in the left / fast lane. So I have to override it constantly by keeping my foot on the accelerator, thus temporarily disabling its cruise control.
This is actually to avoid safety problems, because cars behind me will think I am an idiot when that happens.
It's SO annoying! I want to go in the *fast lane*; the whole point is to go faster, lol, not to align my speed with the vehicles next to me.
 
To sum up this thread:

1) "turning at intersections and autosteer on narrow streets" is in alpha development.
2) Elon estimates the rewrite which does 4D is "2-4 months" away
3) Elon also said that the latest alpha software can almost do his commute, which includes construction zones, without any disengagements.
4) Elon thinks that Tesla is "very close" to L5.

Some see it as proof that Tesla is making mind-blowing progress on FSD. Others are skeptical about the timeline of when we will get these features and think Tesla is still very far from actual L5.

Nice summary. You forgot #5...
5) Everything else is idle speculation until something actually shows on a car.
 
An interesting aspect of Tesla's profound progress and fleet is that even if Waymo has recorded every aspect of their 20+ million total miles driven to study and improve their driving automation, in a few months, when the Autopilot rewrite is generally released, there will be approximately 1 million Tesla vehicles with FSD computers driving around. On average, vehicles are driven 10k miles a year, so Tesla's fleet with active data collection would be experiencing more than 20 million miles each day, even with people driving less these days.

To be clear, I understand Tesla wouldn't be collecting all 20+ million miles of driving data every day, but Autopilot has the opportunity to identify interesting cases to send back to Tesla, analyzing more daily miles than Waymo's lifetime miles driven since they started over a decade ago.
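For anyone who wants to sanity-check that back-of-envelope math, here's a quick sketch. The fleet size and annual mileage are just the round numbers from the posts above, not official figures:

```python
# Back-of-envelope check of the fleet-mileage claim above.
# Assumed (per the post): ~1 million cars with FSD computers,
# ~10,000 miles driven per vehicle per year.
fleet_size = 1_000_000
miles_per_vehicle_per_year = 10_000

fleet_miles_per_day = fleet_size * miles_per_vehicle_per_year / 365
waymo_lifetime_miles = 20_000_000  # Waymo's reported total, per the post

print(f"Fleet miles per day: {fleet_miles_per_day:,.0f}")  # ~27.4 million
print(f"Days to match Waymo's lifetime miles: {waymo_lifetime_miles / fleet_miles_per_day:.2f}")  # ~0.73
```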

Other than cut-in detection, it doesn't seem like Autopilot is making future predictions yet, but prediction will be needed to make safe maneuvers for unprotected left turns or edge cases involving parked-car door openings or oncoming traffic entering your lane. Having Autopilot take in 4D inputs to build a perception foundation lends itself to making 4D predictions that should result in even more profound progress with multiple aspects improving in each neural network update.
 
Having Autopilot take in 4D inputs to build a perception foundation lends itself to making 4D predictions that should result in even more profound progress with multiple aspects improving in each neural network update.

I would love the AP re-write to incorporate a probabilistic path prediction into the UI. Cut-in detection seems to be relatively simple ("What do the few frames of video look like before a car cuts in?"); it's a binary "is cutting in" or "is not cutting in" prediction (that probably has a non-binary activation threshold, but that's not exactly what I mean when I say probabilistic).

Imagine training a 4D Autopilot rewrite on all of the human driving behavior observed by the cameras to date. Instead of predicting binary states, you could predict co-dependent future positions based on speed, signals, angles, etc. (90% chance of maintaining speed and lane, 5% chance of accelerating, 5% chance of merging to the left, etc.). It's something I do subconsciously as a human driver ("Okay, this guy is in an exit lane to my right, but he's accelerating, what are the odds that he's going to leave the exit lane at the last minute and cut me off?"). It would be very useful for human drivers to see a graphic representation of where the system thinks each vehicle around it is headed (with little shadow vehicles of varying opacity, or vector-like arrows of varying direction and length).
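To make that more concrete, here's a toy sketch of what a per-vehicle maneuver-probability output could look like. The feature names, maneuver classes, and weights are all invented for illustration; nothing here reflects how Autopilot actually works:

```python
import math

# Hypothetical maneuver classes for a neighboring vehicle.
MANEUVERS = ["keep_lane", "accelerate", "merge_left", "merge_right", "brake"]

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_maneuver_probs(rel_speed_mph, blinker_on, lateral_drift_m):
    """Toy hand-tuned scoring; a real system would learn this from video.
    lateral_drift_m < 0 means the car is drifting left, toward our lane."""
    scores = [
        2.0,                                                  # keep_lane baseline
        0.3 * max(rel_speed_mph, 0.0),                        # accelerate: pulling away
        1.5 * blinker_on + 2.0 * max(-lateral_drift_m, 0.0),  # merge_left (toward us)
        2.0 * max(lateral_drift_m, 0.0),                      # merge_right (away from us)
        0.3 * max(-rel_speed_mph, 0.0),                       # brake: falling back
    ]
    return dict(zip(MANEUVERS, softmax(scores)))

# Car in the exit lane to our right: blinker on, drifting toward our lane.
probs = predict_maneuver_probs(rel_speed_mph=5.0, blinker_on=1.0, lateral_drift_m=-0.4)
for maneuver, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{maneuver:12s} {p:.0%}")
```

Each surrounding vehicle would get its own distribution like this, which is also exactly the kind of output you could render as shadow vehicles or arrows of varying opacity.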
 
Instead of predicting binary states, you could predict co-dependent future positions based on speed, signals, angles, etc (90% chance of maintaining speed and lane, 5% chance of accelerating, 5% chance of merging to the left, etc.)
Yeah, showing predictions in the visualization would be pretty neat, although it would probably need to be simplified significantly for average users. Here's an example of NVIDIA PredictionNet showing both un/certainty and time using points and colors:
[Image: nvidia predictionnet.jpg (NVIDIA PredictionNet prediction visualization)]


The current Autopilot visualization kinda does show a little bit of 4D with the single blue Navigate on Autopilot line indicating where the vehicle will drive, e.g., staying in lane vs. changing lanes. So one guess is they'll extend the visualization for your own vehicle first to better indicate what Autopilot will do in the next few seconds, e.g., slowing down or getting ready to enter an intersection, before visualizing the likeliest behavior of other vehicles, e.g., cut-ins and cross traffic.
 
Yeah, showing predictions in the visualization would be pretty neat, although it would probably need to be simplified significantly for average users. Here's an example of NVIDIA PredictionNet showing both un/certainty and time using points and colors:
[Image: NVIDIA PredictionNet prediction visualization]

The current Autopilot visualization kinda does show a little bit of 4D with the single blue Navigate on Autopilot line indicating where the vehicle will drive, e.g., staying in lane vs. changing lanes. So one guess is they'll extend the visualization for your own vehicle first to better indicate what Autopilot will do in the next few seconds, e.g., slowing down or getting ready to enter an intersection, before visualizing the likeliest behavior of other vehicles, e.g., cut-ins and cross traffic.

I would love to see more detail on the FSD visualizations about what the car sees and what the car plans to do. It would give me greater confidence in Tesla's FSD.
 
I would love to see more detail on the FSD visualizations about what the car sees and what the car plans to do. It would give me greater confidence in Tesla's FSD.

That's one of the things I find hardest to work with when using FSD / AP. I have to turn off my own human-driver early visualisations and start driving like a total noob, leaving everything to the last minute to see if AP reacts appropriately... albeit belatedly.

It's all fine and dandy 'staying in control' of your vehicle, but as things stand you pretty much have to respond to stuff later than a good driver would (and use AP) or just leave AP turned off.

An enhanced visualization that gave you more information about how AP is reading the road ahead would be a big benefit, allowing safer driving and letting AP do more of its own thing.

Too many warnings are almost retrospective. 'Take control now' is more of a euphemism for 'AP lost the plot 50 yards back'. IMO it's a bit useless advising that 'Autosteer is limited' just as you are taking the apex of the bend.

Even something as simple as a confidence gauge would help: bundle up all AP inputs and predictions to give a real-time measure of how AP regards its own workload / confidence.
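As a thought experiment, such a gauge could be as simple as taking the weakest link across the per-task confidences the networks already produce internally. This is purely hypothetical; the task names and numbers are made up:

```python
# Hypothetical per-task confidences (0..1) that a perception stack might report.
confidences = {
    "lane_lines": 0.92,
    "lead_vehicle": 0.88,
    "path_prediction": 0.60,
    "traffic_controls": 0.75,
}

def overall_confidence(conf_by_task):
    """Weakest-link aggregation: the gauge is only as trustworthy as the
    least confident subsystem. A real system might weight tasks instead."""
    return min(conf_by_task.values())

print(f"AP confidence gauge: {overall_confidence(confidences):.0%}")  # 60%, limited by path_prediction
```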

MS R LR 2020.28.5
 
I would love the AP re-write to incorporate a probabilistic path prediction into the UI. Cut-in detection seems to be relatively simple ("What do the few frames of video look like before a car cuts in?"); it's a binary "is cutting in" or "is not cutting in" prediction (that probably has a non-binary activation threshold, but that's not exactly what I mean when I say probabilistic).

Imagine training a 4D Autopilot rewrite on all of the human driving behavior observed by the cameras to date. Instead of predicting binary states, you could predict co-dependent future positions based on speed, signals, angles, etc. (90% chance of maintaining speed and lane, 5% chance of accelerating, 5% chance of merging to the left, etc.). It's something I do subconsciously as a human driver ("Okay, this guy is in an exit lane to my right, but he's accelerating, what are the odds that he's going to leave the exit lane at the last minute and cut me off?"). It would be very useful for human drivers to see a graphic representation of where the system thinks each vehicle around it is headed (with little shadow vehicles of varying opacity, or vector-like arrows of varying direction and length).

Unfortunately I suspect these predictions won't be exposed close enough to the UI stack to be easy to add to the visualizations - based on Karpathy's recent talks about adding more and more networks, I bet they'll be just adding more classifiers like "car is doing behavior X, Y, or Z" similar to "car will cut in" and then be reacting to these predictions, rather than predicting and reacting to possible trajectories (which is less software 2.0 and more software 1.0 per Karpathy's terminology).
 
........ So I have to override it constantly by having my foot on the accelerator, thus temporarily disabling it.........

This is also a significant issue, especially on freeways / interstates where you just want to... cruise. If you have to keep your foot either depressing the accelerator or cautiously hovering just above it, it is at least tiring, and worst case you could be unwittingly overriding TACC.
 
I bet they'll be just adding more classifiers like "car is doing behavior X, Y, or Z" similar to "car will cut in" and then be reacting to these predictions, rather than predicting and reacting to possible trajectories (which is less software 2.0 and more software 1.0 per Karpathy's terminology).
Interestingly, looking through Karpathy's CVPR presentation again, he only refers to "predictions" as the various perception outputs, e.g.,
Karpathy said:
There are actually two customers for all these predictions:
  1. The planning and control module that tries to wind its way around all these environments.
  2. The instrument cluster, so we like to show people as much as possible on the instrument cluster to give them some sort of confidence that Autopilot is doing the correct thing.
Others, like Uber, refer to "motion forecasting," Wayve to "futures," Waymo to "behaviors," and others just generally to "predictions." So your comment is quite interesting: maybe Autopilot is handling these in "software 1.0" based on "2.0" output attributes, e.g., velocity of detected objects, so perhaps even without prediction, superhuman reaction time is enough?

That said, adding these trajectory-style predictions into the Autopilot neural network should be "easier" than for others who need to figure out how to encode HD maps and other sensor data/discrepancies, e.g., Waymo's VectorNet with points, lines, and polygons. The rewritten network's internal hidden layers already understand the intersection, objects, traffic light colors, etc. to produce those bird's-eye-view outputs, so a future position prediction might just be another classifier attribute leveraging the network's understanding.
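As a rough illustration of the "just another output head" idea: once a shared backbone produces bird's-eye-view features, a future-position head can sit right next to the existing classifier heads. This is a generic multi-task sketch in PyTorch, not Tesla's actual architecture:

```python
import torch
import torch.nn as nn

class MultiHeadBEVNet(nn.Module):
    """Toy multi-task network: one shared bird's-eye-view backbone feeding
    several small output heads. Purely illustrative, not Tesla's design."""

    def __init__(self, bev_channels=64, num_objects=16, horizon=5):
        super().__init__()
        # Shared backbone over a rasterized bird's-eye-view input.
        self.backbone = nn.Sequential(
            nn.Conv2d(bev_channels, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Existing-style classifier head, e.g. "will this car cut in?"
        self.cut_in_head = nn.Linear(128, num_objects)
        # New head: predicted (x, y) offsets per object over the next `horizon`
        # timesteps -- future positions as "just another output attribute".
        self.trajectory_head = nn.Linear(128, num_objects * horizon * 2)
        self.num_objects, self.horizon = num_objects, horizon

    def forward(self, bev):
        feats = self.backbone(bev)
        cut_in_logits = self.cut_in_head(feats)
        trajectories = self.trajectory_head(feats).view(
            -1, self.num_objects, self.horizon, 2)
        return cut_in_logits, trajectories

# One rasterized BEV frame: 64 channels on a 100x100 grid.
net = MultiHeadBEVNet()
cut_in, traj = net(torch.randn(1, 64, 100, 100))
print(cut_in.shape, traj.shape)  # torch.Size([1, 16]) torch.Size([1, 16, 5, 2])
```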
 
^ Yeah, that's kind of what it sounded like to me; any UI output would be to some degree a reconstruction rather than "here's exactly what the car is thinking right now". Hopefully we'll see the direction more clearly in the next few months once the rewrite is out in the wild ;)

As the number of predictions increases, one can theoretically train a single network that takes all the other predictions as input and outputs driving controls. Until then, it seems like a gradual replacement of the "software 1.0" pieces with more and more predictions.
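For what it's worth, a caricature of one of those "software 1.0" pieces might be hand-written rules reacting to learned outputs, which is exactly the kind of code that could later be swallowed by a single learned policy. The thresholds and names here are invented for illustration:

```python
def plan_speed_adjustment(cut_in_prob, lead_gap_m, closing_speed_mps):
    """Hand-coded ("software 1.0") reaction to learned ("software 2.0")
    outputs. Thresholds are arbitrary illustration values, not Tesla's."""
    if cut_in_prob > 0.7 and lead_gap_m < 30:
        return -2.0   # m/s^2: back off firmly to open a gap
    if closing_speed_mps > 3.0:
        return -1.0   # closing on the lead quickly: ease off
    if cut_in_prob > 0.4:
        return -0.5   # mild pre-emptive lift, smoother than late hard braking
    return 0.0        # maintain set speed

# Likely cut-in from the right with a short gap ahead.
print(plan_speed_adjustment(cut_in_prob=0.8, lead_gap_m=25, closing_speed_mps=1.0))  # -2.0
```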
 
Sounds like they’re trying to get the number of interventions down before wide release in a few months.
I wonder why Tesla didn't release the software that was shown and demoed on Autonomy Day last year. Clearly it was "feature complete" with its ability to autosteer on city streets with lane changes and sharp turns. Elon Musk's most recent estimate of 2-4 months for the rewrite was also qualified with "Then it's a question of what functionality is proven safe enough to enable for owners." https://twitter.com/elonmusk/status/1278539278356791298

So perhaps Tesla knew the intersections near Palo Alto HQ were "safe enough" while others even just down the street could be quite dangerous, e.g., turning into oncoming traffic. But even then, what's the threshold for releasing it wider? Theoretically, gathering failure examples from the fleet, even in shadow mode, should help improve training, unless Karpathy realized that squeezing out more functionality from the 2.5D Autopilot would take more work for less benefit.