Unprotected left turns are simple because it is very clear who has the right of way and you don't have to worry about mistakes made by other drivers.
You’re confusing legal simplicity/clarity with technical simplicity/ease. No one is arguing the legal side. Some people seem to think making a left turn across 3 lanes of traffic is technically simple or easy. I question their judgement.
 
I did a bunch of driving yesterday and got to compare HW3 in my Model 3 on 2024.14.6 against HW4 in the Model Y on 2024.3.25.
Both were running FSD 12.3.6; the Model 3 is on its 30-day trial. I got to drive >100 miles in each car on a good mix of highway and regular roads.
I'd forgotten how much I miss the accuracy of the USS when parking; so much nicer than the hand-waving of the cameras.
The two cars were much closer than they were with 11.4.9.
The Y really did seem to spot traffic lights and other cars much sooner than the 3, but overall the difference is much smaller than it was with 11.4.9 and earlier.
Both versions displayed the same issues; I was curious to see if there would be a difference. It still ignores speed limit signs over 60 if the road isn't divided, and it still has a fetish for the left lane.
Not sure if I'm disappointed or not by the lack of difference; if they're that close, what was the point of upgrading all the hardware?
I remember reading that musky said that FSD was targeting HW3 and that HW4 was running HW3 emulation - but that was probably back when he was trying to stop people from waiting for HW4 cars :rolleyes:
[edit]
Obviously HW4 does improve visible camera quality and clarity on the display, but I was looking for FSD differences
Elon mentioned that HW4 is being downrezzed to HW3 in current evolutions of FSD--so there shouldn't be any significant differences between HW3 and HW4 right now.
 
Elon mentioned that HW4 is being downrezzed to HW3 in current evolutions of FSD--so there shouldn't be any significant differences between HW3 and HW4 right now.
I remember reading it but couldn't find an actual statement from him directly. I also couldn't find anything recent, so who knows if it's still true - other than "it feels like it" :D
 
When I hear "emulation" mode I think loss of performance; technically it means a layer of abstraction that can cost an order of magnitude in throughput (as in the case of, say, one CPU emulating another). If Elon had really meant that HW4 output could just be "sliced" to yield HW3-quality data, I would think he'd have come up with a more performant-sounding term than "emulation"...
 
When I hear "emulation" mode I think loss of performance; technically it means a layer of abstraction that can cost an order of magnitude in throughput (as in the case of, say, one CPU emulating another). If Elon had really meant that HW4 output could just be "sliced" to yield HW3-quality data, I would think he'd have come up with a more performant-sounding term than "emulation"...
While that may be what you think, that is not the definition of emulation.

[Attached screenshots: Screenshot 2024-05-18 at 6.33.27 PM.png, Screenshot 2024-05-18 at 6.36.10 PM.png]
 
I suppose the car could get around this by moving slowly at first to let the driver see that the coast is clear, then completing the maneuver. But the more important reason is to reduce cases where the car thinks the coast is clear, but is wrong.

E.g. if the 8-camera setup can see just far enough to detect potentially dangerous cars traveling at the speed limit, it might assume the coast is clear but miss a speeding driver who's slightly farther away. If bumper cameras could better see such drivers from typical intersection stopping points, that would provide extra safety while avoiding the need for the car to creep dangerously far into the intersection. We do want the car to eventually have superhuman safety, after all!
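To put rough numbers on that margin (purely illustrative; the 4-second clearing time and the speeds are my own assumptions, not anything measured from the car):

```python
# Rough sight-distance arithmetic for an unprotected turn (illustrative numbers only).
MPH_TO_FTPS = 5280 / 3600  # 1 mph ~= 1.47 ft/s

def required_sight_distance(oncoming_mph: float, clearing_time_s: float) -> float:
    """Distance an oncoming car covers while we clear the intersection."""
    return oncoming_mph * MPH_TO_FTPS * clearing_time_s

clearing_time = 4.0  # assumed seconds to complete the turn
print(required_sight_distance(45, clearing_time))  # ~264 ft if traffic is at a 45 mph limit
print(required_sight_distance(60, clearing_time))  # ~352 ft for a 60 mph speeder: ~90 ft more to see
```

So a camera range budgeted for exactly the speed limit leaves no margin for the speeder; extra reach from a bumper camera at the stopping point would buy back some of that ~90 ft without creeping further out.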
Another option might be to make the additional cameras viewable, though it might be difficult for the driver to manage the additional information in real time. The visualization can combine all of the cameras into one easy to read image, but I think the reliability of the visualization is still up for debate. One thing the visualization lacks is the ability to indicate parts of the road that it cannot actually see. For example, let's say there is a car approaching on the road you are trying to turn onto, but the cameras cannot see that part of the road (maybe a parked car is in the way). The visualization should leave that part of the road undrawn (as in black/blank) rather than render it as if there are no cars/objects present there.
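A minimal sketch of the "leave unseen road blank" idea, assuming a toy top-down occupancy grid with an explicit unknown state (this just illustrates the rendering rule, not how Tesla's visualization actually works):

```python
from enum import Enum

class Cell(Enum):
    FREE = 0      # camera saw this patch of road and it's empty
    OCCUPIED = 1  # camera saw an object here
    UNKNOWN = 2   # occluded (e.g. behind a parked car): not seen this frame

def render(grid: list[list[Cell]]) -> str:
    """Draw UNKNOWN cells as blanks instead of pretending they're free."""
    glyph = {Cell.FREE: ".", Cell.OCCUPIED: "#", Cell.UNKNOWN: " "}
    return "\n".join("".join(glyph[c] for c in row) for row in grid)

# A row where the area behind a parked car (#) is occluded, not shown as clear.
row = [Cell.FREE, Cell.FREE, Cell.OCCUPIED, Cell.UNKNOWN, Cell.UNKNOWN, Cell.FREE]
print(render([row]))  # "..#  ."
```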

The default for any given video clip is not to be used in training; the overwhelming majority are not.

99.9% of fleet-captured video is never sent to the mothership. The remaining 0.1% is sent either on selective request (if Tesla requests that the fleet upload videos of a particular unusual situation they're trying to gather data for), or perhaps in cases of collision or near-collision.

Of the clips Tesla receives, perhaps 0.1% might be chosen for training, and even those are very selectively cherry-picked to be particularly good examples of skilled human driving.

So out of a million miles driven by the fleet, perhaps only one mile is used for training. The fleet has cumulatively driven about 200 billion miles at this point, so that works out to about 200,000 miles used for training. At 30mph for average city-street driving, that's about 2 million twelve-second clips. Of course this is supplemented by vast amounts of synthetic training data.

These guesstimates may be an order of magnitude or two off, but you get the idea. Every one of the training clips will show excellent driving; there shouldn't be any clips that include a driver hesitating at an intersection when they have the safe right of way, and so on.
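Sanity-checking that back-of-the-envelope math (all the fractions below are the guesses from the post above, not Tesla numbers):

```python
# Back-of-the-envelope check of the clip estimate (every input here is a guess).
fleet_miles = 200e9          # cumulative fleet miles
upload_fraction = 1e-3       # ~0.1% of fleet video ever leaves the car
training_fraction = 1e-3     # ~0.1% of uploaded clips get picked for training

training_miles = fleet_miles * upload_fraction * training_fraction
print(training_miles)        # -> 200,000 miles

miles_per_clip = 30 / 3600 * 12   # a 12-second clip at 30 mph covers 0.1 mile
print(training_miles / miles_per_clip)  # -> ~2,000,000 clips
```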
Do you think disengaging and/or sending a feedback report has any effect on whether a clip is submitted?

Is it safe to assume that any clip making it into the NN was carefully reviewed by hand, or are some of them passed in by an algorithm? I think Elon said they rebuilt the NN for either 12.4 or 12.5 from scratch. If there are 2 million clips comprising the NN, I wonder how quickly they could validate all of them by hand. But maybe he meant the model was rebuilt, not the training set.

Edit: I understand 2 million clips have been used across various versions of the NN, but the number of clips in any one build of the NN might be less.
 
While that may be what you think, that is not the definition of emulation.
You mean just because the definitions you cited didn't address there being a performance cost? There's always a cost; it just depends how extensive the emulation is. I don't know anything about the architectures involved; I'm just saying it's possible that HW4 emulating HW3 could run slower than HW3. The opposite is also possible. The word emulation still implies an added layer of abstraction, which usually costs ya.
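For what it's worth, "downrezzing" in the sense people seem to mean here would look more like a resize than CPU-style emulation, roughly this (the resolutions are approximate teardown figures, not confirmed specs, and the real pipeline is surely more sophisticated than nearest-neighbor sampling):

```python
import numpy as np

# Hypothetical camera resolutions (rows, cols) from public teardown reports, not official specs.
HW4_RES = (1876, 2896)   # ~5 MP
HW3_RES = (960, 1280)    # ~1.2 MP

def downrez(frame: np.ndarray, target=HW3_RES) -> np.ndarray:
    """Crude nearest-neighbor downsample: keep rows/cols at regular intervals."""
    rows = np.linspace(0, frame.shape[0] - 1, target[0]).astype(int)
    cols = np.linspace(0, frame.shape[1] - 1, target[1]).astype(int)
    return frame[np.ix_(rows, cols)]

hw4_frame = np.zeros(HW4_RES, dtype=np.uint8)
print(downrez(hw4_frame).shape)  # (960, 1280): an HW3-shaped input, no instruction-set emulation involved
```

A resize like that is cheap compared with the order-of-magnitude overhead you'd expect from actually emulating another chip's instruction set, which is why the word choice grates.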
 
v12.3.6 is working pretty well, but the one area that is a bit worse now has nothing to do with the version. Now that the foliage is fully out, a lot of the low-visibility intersections I deal with require additional creep time. Probably an additional 2-3 seconds, since FSD has to creep further so the B-pillar cameras can see. It's really obvious at these low-visibility intersections, where the car stops, creeps, and then hesitates for just a moment before the final creep. Before the foliage came out, FSD made its decisions closer to the intersection since the foliage wasn't blocking the camera view.

I think once Tesla improves the overall creep speed this slight delay won't matter. It's a safety concern, though. The T-intersection I take to leave my neighborhood now requires FSD to creep an extra couple of feet (there's no shoulder), and I've noticed a lot of crossing traffic moves toward the center of the road with its tires either on the center line or over it. Come late fall this problem goes away, when the foliage falls off and FSD doesn't have to creep as far.
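A rough similar-triangles model of why a couple of extra feet of creep helps so much (all the distances below are made up; the point is just the shape of the curve):

```python
# Toy sight-line geometry for a T-intersection with foliage near the corner.
# Cross traffic travels along the line y = 0; the car creeps along x = 0, with the
# B-pillar camera at (0, -d), where d = feet remaining between the camera and the traffic line.
# The foliage corner sits at (-w, -b): w feet to the side, b feet short of the traffic line.

def visible_distance(d: float, w: float = 8.0, b: float = 4.0) -> float:
    """How far along the cross road (y = 0) the camera can see past the foliage corner."""
    if d <= b:
        return float("inf")  # camera has reached the foliage line; view is unobstructed in this toy model
    return w * d / (d - b)   # similar triangles: the sight line just grazes the corner

for d in (12.0, 8.0, 6.0, 5.0):                   # creeping forward shrinks d...
    print(d, round(visible_distance(d), 1))       # 12->12.0, 8->16.0, 6->24.0, 5->40.0 ft: growth is nonlinear
```

So each extra foot of creep near the corner buys disproportionately more visible road, which matches how a small change in foliage can force a noticeably longer creep.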
 
v12.3.6 is working pretty well, but the one area that is a bit worse now has nothing to do with the version. Now that the foliage is fully out, a lot of the low-visibility intersections I deal with require additional creep time. Probably an additional 2-3 seconds, since FSD has to creep further so the B-pillar cameras can see. It's really obvious at these low-visibility intersections, where the car stops, creeps, and then hesitates for just a moment before the final creep. Before the foliage came out, FSD made its decisions closer to the intersection since the foliage wasn't blocking the camera view.

I think once Tesla improves the overall creep speed this slight delay won't matter. It's a safety concern, though. The T-intersection I take to leave my neighborhood now requires FSD to creep an extra couple of feet (there's no shoulder), and I've noticed a lot of crossing traffic moves toward the center of the road with its tires either on the center line or over it. Come late fall this problem goes away, when the foliage falls off and FSD doesn't have to creep as far.
Even when there is no traffic and nothing blocking visibility, FSD still waits too long to turn from a 20 mph street onto a 30 mph street, or from a parking lot onto a 30 mph street. I am very frustrated with those situations.
Sometimes it has a lot of audacity, sometimes a lot of timidity.
 
It seems like Tesla gave a demo of 12.4 to Boris Johnson in LA. There's no mention of the FSD version, but there's a short video at the top of the article that shows Johnson with his hands on his knees and not on the wheel. There's also a quote "Instinctively I reach for the wheel; my toe twitches for the brake; but I need not have worried." indicating that it was hands-free.

 
It seems like Tesla gave a demo of 12.4 to Boris Johnson in LA. There's no mention of the FSD version, but there's a short video at the top of the article that shows Johnson with his hands on his knees and not on the wheel. There's also a quote "Instinctively I reach for the wheel; my toe twitches for the brake; but I need not have worried." indicating that it was hands-free.

I guess that was the 12.4 release this weekend that Musk was talking about.