I'm actually shocked at V11.3.4

My opinion of FSD has been pretty low for a long while now. I got FSD Beta in the 2nd 'original wide' group in Oct '21. In the closing months of 2021, FSD was a toy, but a potentially lethal one. It tried to kill me three times. Once I engaged it in a parking lot with a stop sign onto the main street about 10 meters ahead. FSD ran up to the stop sign, didn't stop, and tried to drive straight into the perpendicular road with oncoming traffic in both directions. I've since learned that any engagement in parking lots is dodgy.

Into 2022, FSD was mostly safe, with no behavior I would describe as 'literal danger'. However, FSD was too annoying to use in many circumstances. I received update after update and hoped things would improve, but I was disappointed every time and eventually tuned out the hype train and paid no attention to updates. My primary complaints were:
  1. It would make very dumb lane selections. Example: driving on a four-lane divided avenue (two lanes each direction) and needing to make a right turn onto a perpendicular four-lane divided avenue via a turn lane. With the turn about 30 meters off, FSD would insist on being in the left lane, every drive and after every update. I could force the car to the right via the turn stalk, and it would get right back into the left lane; engaging FSD while already in the right lane produced the same behavior. There were many similar scenarios, not just this one example.
  2. I've posted in a few threads here that when FSD is making a turn, particularly left turns (protected or not), FSD's driving feels like Tweek from South Park is at the wheel: spastic wheel shifting, hesitant stop-and-go, and jarring overall.
In the end, I would just cancel FSD when approaching any turn and do it myself. A non-turning FSD is basically just standard Autopilot.

Now V11 FSD gets announced (tweeted, I guess). Having FSD handle the interstate sounded pretty nice, but I didn't get too excited, which was a good thing since it took about three weeks to get it after it first 'went wide'. I did my first commute to work today with V11. I live north of Nashville, TN and work south of it, so my commute is fairly long: about 50 miles of interstate each way, with four interstate interchanges to navigate. I immediately noticed that V11 will bias left or right away from large vehicles, semi trucks, and regular trucks with trailers. I also noticed that when taking an interstate curve with a 60 MPH speed limit at 80 MPH, V11 will actually STAY CENTERED in the lane.

These two improvements are enough to wow me. Huge, huge improvement. I've historically canceled AP/NoAP very regularly when trucks got too close and AP just sat there, willing to let you get hit, or spazzed out with the red-steering-wheel-of-death warning and beeps. AP/NoAP would also bias toward the outside lane line of a curve, putting your car uncomfortably close to the car in the adjacent outside lane. I'm very pleased with V11's improvement over AP/NoAP here. V11's lane selection is also very good on the interstate. It gets into a leftward lane to pass slower cars very well, and it sees faster cars approaching from the rear and gets back into a rightward lane. Very nicely handled.

Now as to the city-streets portion of V11, I haven't spent a whole lot of time there yet. I did notice that the example intersection from complaint #1 above was handled correctly this morning. It finally did not insist on being in the left lane for a right turn. Finally!!! WOOO!!! haha. I've also had FSD do maybe three or four protected lefts, and they were much, much smoother than on any prior version.

I'm fairly certain FSD will never be truly autonomous in the lifetime of my car, but I now have some hope that it might become a fully capable L2 ADAS system that can fully drive with supervision and significantly fewer interventions for annoyance, danger, or incorrect behavior.

Finally - a release where I actually observe true improvement as well as new useful features!

 
FSD 11.3.4 installed this past weekend. I also notice a significant improvement.
To add to what's been said so far, though, there are a few annoying things that are still prevalent.

I live in Northern California, in Sonoma County. Our roads are mostly in terrible shape and we have some unique road structures. One thing that often happens on a city road is that it flares wider periodically to make a center turn lane and then comes back together. Each time the car comes up to this, it turns on the blinker as if indicating a right turn, whereas no one ever uses a blinker when going straight through this type of feature. Everyone around assumes the car is about to make a right turn.

On the flip side, a common feature in CA is that highways just kind of fade into city streets or start abruptly from city streets. There is one such case near here, and the car goes onto the onramp (effectively making a right turn) without blinking. Why blink in the other instance but not in this one?

Moreover, when it goes onto the above onramp, rather than taking a smooth curve onto it like a person would, it continues straight a bit too long and then makes a sharper right turn. Northern CA drivers are super aggressive, so when the car does this, especially without a blinker, there is a good chance some impatient driver will be attempting to overtake it on the right while accelerating to highway speed. Pretty dangerous.

Finally, when going onto the highway onramp, it keeps the speed limit from the city street rather than starting to accelerate to highway speed. So it will be on the ramp at 35 mph.

_MK
 
I'm sure there are some hard-coded rules, but these make the system even more difficult to manage, and the interaction between rules and networks can cause poor behavior as well. Rules can be wrong too, and neural networks might be more robust to transient perception errors, since they would be trained on realistic data that contained those errors.

Absolutely. Hard-coded rules should be designed to kick in only in situations where you're certain that the output from the neural network is unsafe.
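A small sketch of that idea, with everything below hypothetical (made-up function names, thresholds, and units, not Tesla's actual implementation): the guardrail passes the neural planner's command through untouched unless a simple physics check flags it as unsafe.

```python
# Hypothetical sketch only: a hard-coded guardrail that overrides the
# neural planner's output solely when a simple check says it is unsafe.
from dataclasses import dataclass

@dataclass
class Command:
    accel_mps2: float      # requested longitudinal acceleration (m/s^2)
    steer_rate_rps: float  # requested steering rate (rad/s)

MAX_ACCEL = 3.0        # assumed comfort bound, m/s^2
MAX_BRAKE = -8.0       # assumed hardware limit, m/s^2
MAX_STEER_RATE = 1.0   # assumed bound at highway speed, rad/s

def apply_guardrail(nn_cmd: Command, time_to_collision_s: float) -> Command:
    """Return the NN command unchanged unless it is clearly unsafe."""
    accel, steer = nn_cmd.accel_mps2, nn_cmd.steer_rate_rps

    # Only intervene when the situation is unambiguous: never accelerate
    # into an imminent collision.
    if time_to_collision_s < 1.5 and accel > 0.0:
        accel = MAX_BRAKE

    # Clamp to achievable bounds; inside the bounds, the neural network's
    # choice is left alone.
    accel = max(MAX_BRAKE, min(MAX_ACCEL, accel))
    steer = max(-MAX_STEER_RATE, min(MAX_STEER_RATE, steer))
    return Command(accel, steer)
```

The key property is that the rule never shapes normal behavior; it only clips outputs the rule-writer is confident are wrong.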


I suspect some of the phantom braking people experience might be due to such rules.
The phantom braking I've seen has mostly come from the lack of nontrivial path planning in the old highway code. If a vehicle strayed even slightly into your lane, the only option the old highway code had was slowing down, and big trucks had a tendency to encroach on the adjacent lane with alarming regularity. Add to that the imprecision of RADAR positioning, and you had a recipe for phantom braking.
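To illustrate the difference being described (this is my own toy decision function, not anything from Tesla's code), a planner that can bias laterally within the lane only needs to brake when there's no room left to move over:

```python
# Toy illustration, not Tesla's code: responding to a vehicle encroaching
# into the lane with a lateral bias instead of braking, when room allows.
def respond_to_encroachment(encroach_m: float, free_margin_m: float) -> str:
    """encroach_m: how far the other vehicle crosses our lane line (m).
    free_margin_m: spare room on the far side of our lane (m)."""
    if encroach_m <= 0.0:
        return "hold center"
    if encroach_m < free_margin_m:
        # Shift away within the lane; no speed change needed.
        return f"bias {encroach_m:.1f} m away from the intruder"
    # Out of lateral room: slowing down is the only option left, which was
    # the *only* option the old highway stack ever had.
    return "slow down"
```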


Of course, a neural net would also be able to assimilate and fuse multi-modal sensors, like cameras and radar, contrary to what Elon said. Both radar and vision make mistakes, but with enough work you can get a combined improvement.

You can't (or at least shouldn't) fuse data with garbage, though. The problem with the existing RADAR, as I understand it, was that the error bars were so large that they produced a high rate of false-positive detections, which did more harm than good.
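A quick toy example of why huge error bars make fusion unattractive (inverse-variance weighting is a generic textbook method here, not a claim about Tesla's pipeline): a sloppy radar range estimate barely moves the fused answer, and past some noise threshold you may be better off ignoring it entirely.

```python
# Toy inverse-variance fusion of two range estimates (generic method,
# not Tesla's pipeline).
def fuse(vision_m, vision_var, radar_m, radar_var, max_var=100.0):
    """Combine two noisy range measurements; drop a sensor whose variance
    is so large it would add more noise than signal."""
    if radar_var > max_var:               # "garbage in" cutoff
        return vision_m, vision_var
    w_v, w_r = 1.0 / vision_var, 1.0 / radar_var
    fused = (w_v * vision_m + w_r * radar_m) / (w_v + w_r)
    return fused, 1.0 / (w_v + w_r)

# Precise vision (50 m, var 1) plus sloppy radar (60 m, var 25) fuses to
# about 50.4 m: the noisy sensor barely moves the estimate.
print(fuse(50.0, 1.0, 60.0, 25.0))
```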
 
I've had V11 installed for a while, but one camera died. 🙄 Just had the service done, new camera, and WOW! On a 2023 / no-USS model, it could suddenly see my garage door! And be careful what you ask for: with a very small garage, it went from a flickering image of a truck (actually the door) to major squealing and "Stop!" warnings. But it was exciting to get any warning of obstacles at all.

My car is super stupid in FSDb: changing into occupied lanes, etc. I'll give the new version a try soon, with hands planted firmly on the steering wheel.
Honestly, its logic seems to be: if another lane has a car in it, it must be a better lane, so let's go there...
 
I just installed 11.3.6.

It worked very well as a replacement for NoA in SoCal freeway traffic. I think v11, once merged into the mainstream firmware branch, will make for a very good Enhanced Autopilot product. Fortunately I have the ultrasonics.

Previously, on my non-v11 NoA, I set it to require manual confirmation for lane changes, as the automatic lane changes were often poor or annoying. But with this version on the "Average" profile I let it change lanes on its own, and it was good enough for me. The weather was dry and partially sunny, so good circumstances there.

On non-divided roads, I once had to disengage as it attempted to make a left turn on a major boulevard after failing to enter the two left-turn lanes in time to make the turn. This case isn't easy even for a human who isn't familiar with the intersection (I-5 north to Palomar Airport Road): you need to plan ahead and move over multiple lanes to the left immediately after exiting the freeway, but FSD dawdled too long. When it got to the turn in the third-from-left lane (marked straight-ahead only), it attempted to stop and squeeze into the turning traffic with a jerky move, blocking the straight-ahead traffic behind it and doing the usual wheel-twisting hesitation it does when it doesn't know what it's doing.

It knew from the maps and nav display that it needed to be in the two leftmost lanes, and it wasn't. Its planning window doesn't look far enough ahead; this is probably a computational limitation.

I disengaged, drove straight, and made a U-turn some ways down. At the stop light before the U-turn, FSDb said "Unable to complete upcoming maneuver, take over," or something to that effect. That's acceptable to me, though the font was too small.
 
Absolutely. Hard-coded rules should be designed to kick in only in situations where you're certain that the output from the neural network is unsafe.

Agreed. What I hope Tesla is doing is using hand-coded rules to define an envelope of safe solutions and then letting machine learning optimize within that safe solution space.

The "envelope of safe solutions" could define things like acceptable follow distances, max allowed speed, lane positioning, pathing solutions through an intersection, closest approach to other cars or VRUs, stop light interactions, etc.
 
Agreed. What I hope Tesla is doing is using hand-coded rules to define an envelope of safe solutions and then letting machine learning optimize within that safe solution space.

The "envelope of safe solutions" could define things like acceptable follow distances, max allowed speed, lane positioning, pathing solutions through an intersection, closest approach to other cars or VRUs, stop light interactions, etc.

That is the classic dynamic optimization problem where humans define the optimization target and the hard or soft constraints. I think this is the basis of their solution now, and it seems like the only sensible approach. I got the feeling from some of their "AI Days" that part of their work is 'compressing' the results of optimizations and simulations that would be computationally infeasible to run on-board into efficient neural network approximations. This is now a big trend in physics simulation, where many properties of slow first-principles simulations can be replicated reasonably well by neural networks at much lower computational cost.
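For what it's worth, here's a minimal sketch of that 'compression' trend in generic terms (a toy surrogate model, assuming nothing about Tesla's actual tooling): an expensive function is sampled offline, and a small network learns to approximate it so the on-board query is cheap.

```python
# Toy surrogate-model sketch (generic technique, not Tesla's tooling):
# approximate a slow optimizer/simulator with a small neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_planner(state: np.ndarray) -> float:
    """Stand-in for an expensive optimization or simulation step."""
    return np.sin(state[0]) * np.cos(state[1]) + 0.1 * state[2] ** 2

# Offline: sample the slow planner many times to build a training set.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(5000, 3))
y = np.array([slow_planner(s) for s in X])

# Fit a small network that mimics the planner's output.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000).fit(X, y)

# On-board: the surrogate answers almost instantly instead of re-running
# the full optimization.
print(surrogate.predict(X[:3]))
print(y[:3])
```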

Even so, I suspect that real life is sufficiently messy and context-dependent that even such apparently sensible rules/constraints can occasionally lead to bad performance and frustrating behavior when actually implemented by a computer.

Humans have an ability to think through general rules and behaviors and to violate them when needed, but it's usually instinctive and unconscious: we don't really know what our 'rules' and 'constraints' are, other than "don't crash" (and even then it's occasionally better to crash 'here' than 'there', like hitting a car instead of a 5-year-old pedestrian).

At some point, assimilating and estimating good driving behaviors from empirical human driving is probably still needed, with a decision engine that uses soft rather than hard constraints.
 
Even so, I suspect that real life is sufficiently messy and context-dependent that even such apparently sensible rules/constraints can occasionally lead to bad performance and frustrating behavior when actually implemented by a computer.

Speculating here, but like a lot of FSD development, an iterative approach starting from conservative, rudimentary hard bounds could work. Take "never cross over the double yellows." It's mostly true! Following it is rarely unsafe, but real people do cross double yellows. It's rare, but at a million miles a day of FSD driving, "rare" is still pretty common.

(For the record, FSD can cross double yellows. It's always rather incredible.)

So they might start with a really basic hard bound and iterate toward something like "don't cross double yellows if there is oncoming traffic within 200 ft."
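In code, that iteration might look as trivial as this (thresholds purely illustrative):

```python
# Purely illustrative: iterating a hard bound from "never" toward a
# conditional rule, as described above.
def may_cross_double_yellow_v1() -> bool:
    return False  # version 1: never cross

def may_cross_double_yellow_v2(oncoming_gap_ft: float,
                               lane_is_blocked: bool) -> bool:
    # version 2: allow it only to get around a blockage, with a clear gap.
    return lane_is_blocked and oncoming_gap_ft > 200.0
```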
 
Speculating here, but like a lot of FSD development, an iterative approach starting from conservative, rudimentary hard bounds could work. Take "never cross over the double yellows." It's mostly true! Following it is rarely unsafe, but real people do cross double yellows. It's rare, but at a million miles a day of FSD driving, "rare" is still pretty common.

(For the record, FSD can cross double yellows. It's always rather incredible.)

So they might start with a really basic hard bound and iterate toward something like "don't cross double yellows if there is oncoming traffic within 200 ft."
In a recent Dave Lee YouTube interview, James Douma mentioned that when a new architectural element is introduced, say a new NN replacement for a previously hand-coded driving-control subroutine, Tesla may well have to put in software "guardrails" pending refinement and lots of field data proving that the new module is working and trustworthy. But the guardrails themselves are awkward overlays, and their parameter limits may only be guesses.

The result can be some jerky and unnatural behavior which really isn't representative of the eventual performance of the new architecture. The remaining hand-coded guardrails are a necessary evil and may contribute to users' judgment that the new architecture is a performance regression. But then things settle out, confidence rises, and the guardrails can be removed. That's what he said, and it makes sense to me in situations like the one you describe.
 
When is 11.4 going out to more testers?
No one knows at the moment. Only employees have gotten it; as of this moment there are no reported downloads in the data-set groups on TeslaFi or Teslascope. The only YouTube drive videos of 11.4 are from Tesla Bull, apparently one of the employee testers. No Twitter clues from Elon as far as I know.

Maybe tomorrow :)
 
I'll let you guys handle the testing. I am very comfortable with TACC and it's even more relaxing than any other cruise control on any other vehicle I've driven. Yeah, it would be nice to get FSD, but it needs to be able to drive better than people. (The way people drive in my area, it sounds like what you guys have experienced in FSD really IS better than that.) The one thing I would like to see in an automated driving system is for it to notice and SAFELY avoid potholes.
 
I'll let you guys handle the testing. I am very comfortable with TACC and it's even more relaxing than any other cruise control on any other vehicle I've driven. Yeah, it would be nice to get FSD, but it needs to be able to drive better than people. (The way people drive in my area, it sounds like what you guys have experienced in FSD really IS better than that.) The one thing I would like to see in an automated driving system is for it to notice and SAFELY avoid potholes.
FSD isn't going to be L4 any time soon, and probably never on current hardware.

But the most recent publicly available FSD build is a significant improvement over legacy TACC as a cruise control. It's really nice on the highway, particularly in moderate traffic. The behavior is more natural and human-like.
 
FSD isn't going to be L4 any time soon, and probably never on current hardware.

But the most recent publicly available FSD build is a significant improvement over legacy TACC as a cruise control. It's really nice on the highway, particularly in moderate traffic. The behavior is more natural and human-like.
I was having fun yesterday: just put on the cruise control and barely had to turn the wheel to go around curves. So sweet! I'm usually not on freeways; that's my mom's thing. You always have to be alert around the highways here with the big pickups.