Welcome to Tesla Motors Club

FSD rewrite will go out on Oct 20 to limited beta

Here's an interesting sequence of screenshots driving up Gough crossing at Washington in San Francisco: not only is the road cresting, the lane also shifts. Autopilot happens to be following a vehicle at 13 mph, so that might help it change its predicted path from straight to slightly right, but I wonder: if the road were clear and it was set at 30 mph, would it slow down and react to the lane shift in time?

[Attached screenshots: before crest.jpg, at crest.jpg, after crest.jpg]
 
That is untrue, of course.

Tesla explicitly states it's intended only for use on freeways and other divided roads with controlled access.

It's right there in the owner's manual.

Your pretending that the designer of the system isn't the one telling you this notwithstanding.




The car allows you to slam into another car too if you keep it floored. Or drive with your feet.

Neither is a very good idea though.

The car isn't your mom and assumes a reasonably competent driver. Sometimes this is not a valid assumption I guess.
You live in a binary world and I live in the real world so we are just going to have to agree to disagree here.
 
It's actually very difficult for a machine to determine whether a car is parked or stopped in traffic.

I was driving in a French town a couple of years back and came up behind a stretch of road solid with parked cars along its length, with no other traffic around. I pulled into the oncoming lane to pass the parked cars. Halfway through passing them I saw the traffic lights ahead and suddenly had oncoming traffic heading straight towards me! :oops: And the line of 'parked' cars I was trying to pass started driving forwards.

It's not just FSD that finds it difficult to spot the difference between vehicles that are parked, stationary, or queuing to turn.
 
You live in a binary world and I live in the real world so we are just going to have to agree to disagree here.


No, I live in a world where the current Tesla owners manual says:

Model 3 manual page 100 said:
WARNING:
Autosteer is intended for use only on highways and limited access roads with a fully attentive driver. When using Autosteer, hold the steering wheel and be mindful of road conditions and surrounding traffic. Do not use Autosteer on city streets, in construction zones, or in areas where bicyclists or pedestrians may be present.
 
No, I live in a world where the current Tesla owners manual says:

Yet it also says this, hmm...

Model 3 manual page 101 said:
Restricted Speed
Autosteer is intended for use only by a fully attentive driver on freeways and highways where access is limited by entry and exit ramps. If you choose to use Autosteer on residential roads, a road without a center divider, or a road where access is not limited, Autosteer may limit the maximum allowed cruising speed and the touchscreen displays a message indicating that speed is restricted. The restricted speed will be the speed limit of the road plus 5 mph (10 km/h).
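The restricted-speed behaviour quoted above boils down to a simple cap on the cruise set-speed. As a rough sketch of that rule (the function name and structure are mine, not Tesla's actual firmware logic):

```python
def restricted_max_speed(speed_limit_mph: float, limited_access: bool) -> float:
    """Cap the Autosteer cruise set-speed per the manual's description.

    On limited-access freeways there is no extra cap; elsewhere the
    maximum allowed cruising speed is the posted limit plus 5 mph.
    (Illustrative only -- not Tesla's actual implementation.)
    """
    if limited_access:
        return float("inf")  # no additional restriction applied
    return speed_limit_mph + 5.0

# On a 30 mph residential road, Autosteer would cap cruise at 35 mph.
print(restricted_max_speed(30, limited_access=False))  # 35.0
```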
 
No, I live in a world where the current Tesla owners manual says:

".... Do not use Autosteer on city streets, in construction zones, or in areas where bicyclists or pedestrians may be present...."

Otherwise you will get to see the latest FSD / AP features we keep bragging about and continue to develop! :rolleyes:

Whatever the book says, Tesla are clearly encouraging existing non-(new) beta use of city based AP features.
 
".... Do not use Autosteer on city streets, in construction zones, or in areas where bicyclists or pedestrians may be present...."

Otherwise you will get to see the latest FSD / AP features we keep bragging about and continue to develop! :rolleyes:

Whatever the book says, Tesla are clearly encouraging existing non-(new) beta use of city based AP features.
Exactly.

I mean - Hitachi also claims this thing is a back massager yet we all know what it's actually used for.
 
Maybe I'm confused about what Dojo actually is, but I find Elon's latest tweets about it odd: https://twitter.com/elonmusk/status/1325445345275416578?s=19

Despite the FSD beta relying on 4D training right now, he says Dojo won't contribute until about a year from now.

Isn't the whole point of Dojo to reduce the watts per training operation so that Tesla can save money in the long run, since they'll be doing lots of video training to improve FSD over time?

There's nothing stopping Tesla from paying third-parties for training until they get Dojo up and running (if ever; it's not clear if Dojo provides a cost benefit as new CPU/TPUs are released by third-parties).
 
Isn't the whole point of Dojo to reduce the watts per training operation so that Tesla can save money in the long run, since they'll be doing lots of video training to improve FSD over time?

There's nothing stopping Tesla from paying third-parties for training until they get Dojo up and running (if ever; it's not clear if Dojo provides a cost benefit as new CPU/TPUs are released by third-parties).
There are significant costs involved with training in datacentres such as Azure. Not at a small scale, perhaps, but Tesla has a huge dataset, a big network, and rapid/continuous iterations.

Seems Tesla wants a reinforcing feedback loop where the training data/labeling maintains itself based on real-world feedback. That's mostly a software problem, and a very hard one, since you need an automated scoring/validation mechanism.

I'm not sure how they solved that problem, but I suspect they found out it requires a lot more hardware resources than their current training based on manual labels. Hence the requirement for Dojo to keep costs at a manageable level.
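A toy version of such a self-maintaining loop might look like the sketch below. Everything here (the `auto_label` heuristic, the scorer, the threshold) is invented to illustrate the idea of validating predicted labels against what actually happened later in the clip; it is not anything Tesla has published:

```python
# Toy sketch of an auto-labeling feedback loop: the model labels new clips,
# an automated scorer validates each label against later "ground truth"
# (what the tracked object actually did a few seconds on), and only
# validated labels are folded back into the training set.
# All names and thresholds here are illustrative, not Tesla's.

def auto_label(clip):
    # Stand-in for the network's prediction: is the car ahead parked?
    return "parked" if clip["speed_now"] == 0 else "moving"

def validate(label, clip):
    # Automated scoring: compare the prediction to what happened later.
    actual = "parked" if clip["speed_later"] == 0 else "moving"
    return 1.0 if label == actual else 0.0

def feedback_loop(clips, threshold=0.5):
    training_set = []
    for clip in clips:
        label = auto_label(clip)
        if validate(label, clip) >= threshold:
            training_set.append((clip, label))  # keep validated labels only
    return training_set

clips = [
    {"speed_now": 0, "speed_later": 0},   # truly parked: label kept
    {"speed_now": 0, "speed_later": 12},  # stopped in traffic: label rejected
]
print(len(feedback_loop(clips)))  # 1
```

The "stopped in traffic" clip is exactly the parked-versus-queuing ambiguity discussed earlier in the thread, which is why an automated validator like this is hard to get right.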
 
Maybe I'm confused about what Dojo actually is, but I find Elon's latest tweets about it odd: https://twitter.com/elonmusk/status/1325445345275416578
The tweet does somewhat clarify what he was referring to by the "rewrite": it was primarily the labeling software. This makes the training data quality much improved compared to the previous individually labeled photos. Sure, there were probably some changes to the HydraNet to provide additional outputs with the better training data (e.g., bird's-eye view / temporal aspects), but the core structure of the neural network didn't need to be completely rewritten.

If one looks at the progression from AlphaGo (networks trained primarily on human data) to the superhuman AlphaGo Zero (networks trained on self-play), the network structure is relatively similar (both "look" at a Go board and predict how good a move or position is), but the performance is much improved in the latter because of the quality of the training data.
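The "same structure, better data" point maps onto the multi-task (HydraNet-style) layout: one shared backbone computed once, feeding several task heads, where retraining with better labels changes the learned weights, not the wiring. A minimal sketch of that shape (the layer contents, head names, and fixed coefficients are invented for illustration, not Tesla's actual network):

```python
# Minimal sketch of a shared-backbone / multi-head ("HydraNet-style") layout:
# one trunk processes the input once, and every task head reads the shared
# features. Better training data changes the learned weights, not this wiring.
# Sizes, names, and the hard-coded coefficients below are illustrative only.

def backbone(pixels):
    # Stand-in for the shared trunk: two hand-rolled feature sums
    # over the "image" instead of learned convolutions.
    return [sum(pixels), sum(p * p for p in pixels)]

def lane_head(features):
    # Task head 1: a toy "lane offset" read out from the shared features.
    return 0.1 * features[0] - 0.01 * features[1]

def object_head(features):
    # Task head 2: a toy "object present?" score from the same features.
    return 1.0 if features[1] > 5.0 else 0.0

def hydranet_forward(pixels):
    feats = backbone(pixels)        # computed once...
    return {                        # ...consumed by every head
        "lane_offset": lane_head(feats),
        "object_score": object_head(feats),
    }

out = hydranet_forward([1.0, 2.0, 0.5])
print(sorted(out))  # ['lane_offset', 'object_score']
```

Adding a new output (say, a bird's-eye-view head) in this layout means bolting another head onto the shared features, which fits the observation that the rewrite didn't require rebuilding the core network.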
 