Welcome to Tesla Motors Club

FSD Beta 10.69

You are correct...but missing out on "safe" driving because you are in FSD/AP mode can hurt your score by magnifying the "unsafe" aspects of your driving. If you miss out on a bunch of safe driving because you are on AP/FSD most of the time, and then make one or two "bad turns" when you don't have FSD/AP engaged, your score ends up worse than it would have been if you had not used AP/FSD at all.
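To put some toy numbers on that magnification effect, here's a sketch assuming (my assumption, not Tesla's published formula) that infractions are rated per mile of manual driving, with AP/FSD miles excluded from the denominator:

```python
# Hypothetical illustration only -- NOT Tesla's actual Safety Score formula.
# Assumes "bad turns" are normalized against manually driven miles only.

def infractions_per_100_manual_miles(bad_events: int, total_miles: float,
                                     fsd_fraction: float) -> float:
    """Rate of infractions per 100 miles of MANUAL driving."""
    manual_miles = total_miles * (1.0 - fsd_fraction)
    return 100.0 * bad_events / manual_miles

# Same two "bad turns" over 1000 total miles, different AP/FSD usage:
print(infractions_per_100_manual_miles(2, 1000, 0.0))  # all manual driving
print(infractions_per_100_manual_miles(2, 1000, 0.9))  # 90% on AP/FSD
```

Under that assumption, the same two events weigh 10x more heavily for the driver who spends 90% of their miles on AP/FSD, which is exactly the magnification described above.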
I solve this issue by not driving myself at all.
 
Correct me if I'm wrong, but miles driven on FSDb don't count towards your safety score negatively. (In other words, they can only help your score, not hurt it.) I drive 90% of my miles on AP/FSD...on the days I stick to 99% of miles on AP/FSD, I almost always score a 100. On the days I disengage a lot, my score suffers.

(For those confused, this is for those FSDb users who also have Tesla Insurance.)
If you are a cautious driver, using FSDb actually hurts you. When I was attempting to qualify for FSD back when, all my deductions occurred when FSDb did something I was not comfortable with, causing me to disengage, and the car immediately complained about the incident. There should be a waiting period after disengaging before determining that an infraction occurred, to allow the driver to correct the error made by FSDb. Many deductions were from FSDb allowing a merging vehicle to enter the highway very close ahead. When I disengaged to back off, I received a following-too-close warning.
 
The book of life. Anyone who is paying any attention to tech rather than just living in the Tesla bubble would know that Google created the first NPU AI training chip in 2015, called the TPU, and is at v4 right now. Elon likes claiming "first" in everything, which his fanatics @MrTemple blindly regurgitate. Which is like being a flat earther.
I find it weird you’d mention Flat Earthers, but one thing they don’t handle well is uncomfortable truths that challenge their dearest held beliefs. Let’s see how you handle some uncomfortable facts that challenge your beliefs…

Google’s TPU is optimized to RUN neural net Tensor operations.

Tesla’s Dojo is made to train neural nets.
I explained this in the parts of my post you didn’t cherry-pick. 🍒

Apologies I didn’t make the distinction explicit in every sentence I wrote describing it. It’s fairly well understood by those who understand what Dojo is. And honestly takes a bit more than a bullet point to explain to lay people. 🤷‍♂️

The difference between Dojo and TPU is ENORMOUS.

Training the nets the way Dojo does requires an absolutely GOBSMACKING amount of dataflow. Many orders of magnitude more than is required by the TPU processing silicon embedded in SoCs from Google, Apple, et al, which they use to EXECUTE the neural nets to, say, recognize objects in an image or do speech-to-text, etc.

Note: This is why I pointed out that Tesla will be uniquely positioned to offer NN training as a service (similar to Amazon’s varied cloud service offerings.)

Basically Google’s TPU (and Apple’s ML enclave in its A and M series chips, etc) are like the FSD computer in Teslas.
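To make that train-vs-run asymmetry concrete, here's a toy gradient-descent fit in plain Python. The model, numbers, and variable names are all made up for illustration: the thousands of passes over the data are the "training" side (what Dojo-class hardware is built for), while answering a query afterward is a single multiply (what a TPU/NPU/FSD computer runs).

```python
# Toy model: learn y = w * x by gradient descent (illustrative only).
data = [(x, 3.0 * x) for x in range(1, 11)]   # ground truth: w = 3

# TRAINING: many passes over the whole dataset = huge repeated dataflow.
w, lr, train_ops = 0.0, 1e-4, 0
for epoch in range(1000):
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad
        train_ops += 1

# INFERENCE: once trained, a prediction is a single multiply.
infer_ops = 1                         # y_hat = w * x
print(round(w, 2), train_ops, infer_ops)   # → 3.0 10000 1
```

Even in this tiny toy, training does 10,000 operations to make each later prediction cost one; scale the same asymmetry up to vision networks and fleet-sized datasets and you get the gap between Dojo and an edge inference chip.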

Of course we’ve seen a lot of ML processing silicon the past decade. 🙄

Dojo is a COMPLETELY different beast than we’ve yet seen in production (key distinction there).

So when I put in a bullet point that Dojo is a key advantage for Tesla’s future, where exactly do you see a chip in production that competes?

(moderator edit)
 
It’s fairly well understood by those who understand what Dojo is. And honestly takes a bit more than a bullet point to explain to lay people. 🤷‍♂️
In your professional opinion, would it be possible for Tesla to repurpose Dojo to help solve the stopping problem? Or is that what it is for?

Just spitballing here.

If this works, it would immediately justify the Dojo investment.
 
If you are a cautious driver, using FSDb actually hurts you. When I was attempting to qualify for FSD back when, all my deductions occurred when FSDb did something I was not comfortable with, causing me to disengage, and the car immediately complained about the incident. There should be a waiting period after disengaging before determining that an infraction occurred, to allow the driver to correct the error made by FSDb. Many deductions were from FSDb allowing a merging vehicle to enter the highway very close ahead. When I disengaged to back off, I received a following-too-close warning.
While I agree with you in general, for my daily commute, I know exactly where FSD is bad and where it's fine. I can proactively disengage in the few instances where I'm approaching the bad spots so as to not get dinged. Using this method, I have a consistent 99-100 daily safety score on days featuring only my daily commute.
 
If you are a cautious driver, using FSDb actually hurts you. When I was attempting to qualify for FSD back when, all my deductions occurred when FSDb did something I was not comfortable with, causing me to disengage, and the car immediately complained about the incident. There should be a waiting period after disengaging before determining that an infraction occurred, to allow the driver to correct the error made by FSDb. Many deductions were from FSDb allowing a merging vehicle to enter the highway very close ahead. When I disengaged to back off, I received a following-too-close warning.
I believe there is a short grace period after disengaging FSDb where what you do doesn't ding you, though I can't find a description of it anywhere right now.
 
Bleh, I used Full Self-Driving for most of my drive today...since it worked a lot better and I didn't have to disengage much in my neighborhood, I missed out on taking quite a few low-speed turns this morning. As a result, my safety score suffered: a score of 94 this morning as opposed to my typical 99 or 100 for my morning drive.
Back when I had a SS to worry about, I had no clue how you would ever trigger the aggressive-turning penalty. It didn’t matter how aggressively I took a turn, it wouldn’t register in my SS.

Then I got the Beta and realized that me and the people in my town drive very slow and chill. Like holy crap! I hope they introduce an option to turn off /Tokyo drift mode/, that and the /smoke everyone at a green light mode/.

And to be clear, I hope for something like this as a setting, not a change to how it drives in general. I lived in Dallas for 10 years, and it was Mad Max out there which I was down with. But after living in a small town for a few years, I take it easy in my car.
 
In your professional opinion, would it be possible for Tesla to repurpose Dojo to help solve the stopping problem? Or is that what it is for?

Just spitballing here.

If this works, it would immediately justify the Dojo investment.


In the off chance you’re not trolling or somebody actually wants to understand what Dojo is…

Dojo’s primary (sole?) function is to train neural nets faster. MUCH faster than anything out there can do. And on bigger datasets. MUCH bigger datasets than anything out there can do.

Neural nets are unique in that you can think of a bunch of the processing as being offloaded to the development side. This is the training. This frees the execution side to need MUCH less compute, which allows your phone to do fast, complex image recognition, or allows your car to generate a fairly real-time 3D vector-space of the world including its printed surfaces.

These are things you simply can’t do without neural nets. (Unless your phone is the size of a laptop, and even then not nearly as fast or well).

But that training requires a large amount of a special kind of compute and a CRAZY amount of dataflow.

This is the problem that tiled mosaics of Dojo chips are designed to solve. They are uniquely engineered to offer insanely high data throughput both within chip and in/out of the chip in four directions, so that they can be tiled in enormous clusters without the data-bottlenecks you face with current systems. (This is sort of similar to what Apple’s M series chips are doing from the M1 to M1 Pro/Max/Ultra, which are tiled at a small scale on a single SoC, and of course for entirely different purposes.)
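A crude way to picture the four-direction tiling is a 2D mesh where each chip only talks to its north/south/east/west neighbors, so aggregate bandwidth grows with the number of tiles instead of funneling through one shared bus. This is my illustrative model of the general mesh idea, not Dojo's actual interconnect:

```python
# Illustrative 2D-mesh "halo exchange": every link is tile-to-neighbor,
# nothing is global, so the fabric scales as you add tiles.
ROWS, COLS = 3, 3
grid = [[float(r * COLS + c) for c in range(COLS)] for r in range(ROWS)]

def neighbor_average(grid):
    """One exchange step: each tile mixes its value with its N/S/W/E
    neighbors using only local links."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [grid[r][c]]
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # N S W E
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    vals.append(grid[nr][nc])
            out[r][c] = sum(vals) / len(vals)
    return out

grid = neighbor_average(grid)
print(grid[1][1])   # center tile averaged with its four neighbors → 4.0
```

The point of the sketch: data moves one hop at a time between adjacent tiles, which is why a tiled mosaic can grow to enormous clusters without the central data-bottleneck a bus-based design hits.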

Why training speed matters is that you literally cannot create NNs that function well on complex tasks without a high number of training cycles with a high volume of (quality) training data. (See Tesla’s “Data Engine” which I pointed out as the most slept-on advantage Tesla has for generalized AI.)

So if your training cycle takes two weeks, there is no amount of money or developers or overtime you can throw at the problem to speed it up beyond running dozens and dozens of two-week training cycles. That can mean a year or more to tune a NN, where you don’t get to see the result of a tweak until two weeks after you made it.

Worse, if you radically change a NN (like the switch to the new Occupancy Network), you then have to start this process over from near-scratch (scratch for that particular net in the wider architecture of integrated nets).

So going from Tesla’s pre-Dojo supercomputer, a top-10 machine in the world (which took about two weeks for a training cycle), to a Dojo cluster (which I suspect will be announced as currently operating at AI Day 2 this month) might shave your training cycle to one week. Or even less, because it’s built to be so cluster-scalable.

Going from a two week to a one week training cycle won’t speed development by 2x. But it will be close.
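Back-of-envelope for that "won't be 2x, but close" claim, with made-up numbers: total development time is roughly (number of cycles) × (training time + human analysis/tweaking time between cycles), and only the training term gets halved.

```python
# Illustrative arithmetic only -- cycle counts and tweak time are invented.
def dev_time_weeks(cycles: int, train_weeks: float, tweak_weeks: float) -> float:
    """Wall-clock time: each cycle = one training run + human tuning time."""
    return cycles * (train_weeks + tweak_weeks)

before = dev_time_weeks(cycles=20, train_weeks=2.0, tweak_weeks=0.25)
after = dev_time_weeks(cycles=20, train_weeks=1.0, tweak_weeks=0.25)
print(before, after, round(before / after, 2))   # → 45.0 25.0 1.8
```

The human-time term doesn't shrink, so halving the training run gives about a 1.8x speedup here rather than a clean 2x, which matches the claim above.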

So short answer, Dojo will offer MUCH faster AI and NN development which will help the “stopping problem”.
 
So short answer, Dojo will offer MUCH faster AI and NN development which will help the “stopping problem”.
So you think the chances are high that they can train the neural net to measure distances with low jitter, and have the results consistently decrease monotonically with time, under a wide range of conditions?

It seems to me they could take other vehicle sensor input as well, as an input to the neural net (wheel speed, brake force, accelerometers (to measure slope*), etc).

Maybe this Dojo WILL be worth it!

Anyway, apparently an extremely difficult problem.

*instantaneous slope is not enough - the entire profile has to be understood in advance
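The two properties asked about (low jitter, monotonic decrease) are easy to state as checks on a series of distance estimates. A toy sketch, with invented numbers and helper names, nothing from Tesla's actual pipeline:

```python
# Toy validation of distance-to-stop estimates over time (illustrative only).

def is_monotone_decreasing(estimates, tolerance=0.0):
    """True if no estimate jumps back UP by more than `tolerance` meters."""
    return all(b <= a + tolerance for a, b in zip(estimates, estimates[1:]))

def max_jitter(estimates):
    """Largest upward jump between consecutive estimates (meters)."""
    return max((b - a for a, b in zip(estimates, estimates[1:])), default=0.0)

smooth = [50.0, 42.1, 33.9, 25.8, 17.7, 9.6, 1.2]
jittery = [50.0, 44.0, 46.5, 31.0, 35.0, 12.0, 1.2]  # estimates jump back up

print(is_monotone_decreasing(smooth))    # → True
print(is_monotone_decreasing(jittery))   # → False
print(max_jitter(jittery))               # → 4.0
```

A real system would presumably fuse in the other signals mentioned (wheel speed, brake force, accelerometers) rather than gate on raw vision output like this, but the acceptance criteria would look similar.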
 
Can someone summarize what to look for on this new update?
10.69 should have better lane selection, drive smoother, understand unprotected turns better (visualizing a blue creep limit/wall) including ability to wait in a median crossover region (visualizing a blue area), and avoid arbitrary objects (visualizing gray blobs).

I've noticed all these improvements with just a regular kids school commute today, so you'll probably notice some of these when you get the update.

Jumping into some details for lane selection, it seems like the old 10.12 behavior wanted to switch lanes about 3000 feet / 1000 meters before the intersection based purely on map data, e.g., switch out of an upcoming right-turn-only lane, whereas now it waits for the new "deep lane guidance" module to visually confirm that the map data is correct about 1500 feet / 500 meters before. This definitely will avoid some unnecessary lane changes from before but the late decision could also mean not enough time to complete a lane change.
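My reading of that two-stage behavior, sketched as pseudocode-style Python (the distances, function name, and parameters are mine, inferred from the observed behavior, not Tesla's actual code):

```python
from typing import Optional

def should_change_lane(dist_to_intersection_ft: float,
                       map_says_wrong_lane: bool,
                       vision_confirms_map: Optional[bool]) -> bool:
    """Old 10.12 behavior: act on map data alone ~3000 ft out.
    New 10.69 behavior (as I read it): wait for the 'deep lane guidance'
    module to visually confirm the map within ~1500 ft."""
    if not map_says_wrong_lane:
        return False
    if dist_to_intersection_ft > 1500:
        return False                      # too far out: don't trust map alone
    return vision_confirms_map is True    # only move once vision agrees

print(should_change_lane(3000, True, None))   # → False (10.12 would've moved)
print(should_change_lane(1400, True, True))   # → True  (visually confirmed)
print(should_change_lane(1400, True, False))  # → False (map data was wrong)
```

That third case is the payoff (no unnecessary lane change when the map is stale), and the first case is the cost: deciding later leaves less room to complete the change.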
 
I suggest that 10.69.1.1 should not be released more widely. Today, it missed two turns while on navigation, and it attempted to run two red lights. I tried rebooting after the missed turns, but the red-light issue is a major safety hazard. The car does make most turns correctly and it stops for red lights, so there is definitely a bug that is affecting my car. On previous FSD beta versions, it never tried to proceed through a stoplight that had been visibly red for an extended time.
 
I'm sure the AI team was constantly looking for ways to solve general objects in "4D"
Indeed, as with many things, timing might have just lined up for the video occupancy network to be introduced with 10.69 even though there have been ongoing explorations for years. As you suggest, there could have been an increase in labeling/training compute capacity along with reduction in the cost of computing ground truth from a growing data collection fleet, combined with experience in supporting "4D" neural network improvements for existing networks, e.g., predicting velocity with Tesla Vision.

Hopefully 2022 AI Day later this month will clarify the overall approach, as Elluswamy's CVPR keynote could have focused on the "new fancy" to not cover too many topics at once. Specifically, he briefly mentioned "we can have additional semantic classes that can help with the control strategy later on," which sounds like it's on the way to merging in some of the main benefits of the existing moving and static object networks -- different objects behave and get visualized differently. Having the new occupancy network replace a lot of what was incrementally grown from single-camera to hydra-net to birds-eye-view-net to… probably solves a lot of the problems they've discovered along the way trying to "solve" 4D.
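The core idea of an occupancy grid carrying semantic classes can be pictured with a minimal data structure. This is purely illustrative (made-up classes and cells, nothing from Tesla's actual network), but it shows why a generic "gray blob" is still useful: an occupied cell gets avoided even when no specialized detector recognized the object.

```python
# Toy occupancy grid with semantic class labels (illustrative only).
FREE, OCCUPIED = 0, 1
SEMANTICS = {0: "unknown", 1: "vehicle", 2: "curb", 3: "debris"}

# Each cell maps to (occupancy, semantic class id).
grid = {
    (10, 4): (OCCUPIED, 1),   # vehicle: predict its motion
    (12, 0): (OCCUPIED, 2),   # curb: don't cross, but no hard braking
    (11, 3): (OCCUPIED, 3),   # arbitrary object ("gray blob"): steer around
}

def cell_behavior(cell):
    """Control strategy varies by class, but ANY occupied cell is avoided."""
    occupied, cls = grid.get(cell, (FREE, 0))
    if not occupied:
        return "drivable"
    return f"avoid ({SEMANTICS[cls]})"

print(cell_behavior((11, 3)))   # → avoid (debris)
print(cell_behavior((5, 5)))    # → drivable
```

The "additional semantic classes" quote suggests exactly this kind of enrichment: occupancy answers "is space free?", and the class tag answers "how should control react?"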
 
To be clear and not to confuse others, your vehicle is part of the "early" rollout including others who received FSD Beta without needing to be added via Safety Score. This relatively fixed group gets FSD Beta updates differently from those who get later iterations as part of what seems to be a random rollout, e.g., 10% or 25% random selection.

But what you might be getting at is that there doesn't seem to be a technical restriction for legacy S/X to randomly be selected as your vehicle runs 10.69 fine. So it seems like there might be a rollout (mis-?)configuration issue that hasn't allowed these vehicles to get 10.69.1.1.
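For what it's worth, staged rollouts like a "10% or 25% random selection" are commonly implemented by deterministically hashing each device into a bucket. This is a generic industry pattern and purely my guess at the mechanism, not anything Tesla has documented:

```python
# Generic hash-based staged-rollout pattern (assumption, not Tesla's system).
import hashlib

def in_rollout(vehicle_id: str, update: str, percent: int) -> bool:
    """Deterministically place a vehicle in a 0-99 bucket for this update."""
    digest = hashlib.sha256(f"{update}:{vehicle_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Roughly `percent`% of any fleet lands in the rollout, and the same
# vehicle always gets the same answer for the same update.
fleet = [f"VIN{i:05d}" for i in range(10000)]
selected = sum(in_rollout(v, "10.69.1.1", 25) for v in fleet)
print(selected)   # roughly 2500 of 10000
```

A scheme like this would also explain the suspected (mis-)configuration: an eligibility filter excluding legacy S/X would sit in front of the bucketing, so those vehicles never even reach the random selection.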
That car was not part of the early rollout. I went through the Safety Score process back in late September and got my first beta around November. I'm getting it for a different reason, not because of a misconfiguration. Just because it's on my old MX doesn't mean it rolls out to other ones. I guess that is part of your point :)
 
I suggest that 10.69.1.1 should not be released more widely. Today, it missed two turns while on navigation, and it attempted to run two red lights. I tried rebooting after the missed turns, but the red-light issue is a major safety hazard. The car does make most turns correctly and it stops for red lights, so there is definitely a bug that is affecting my car. On previous FSD beta versions, it never tried to proceed through a stoplight that had been visibly red for an extended time.
said by someone that actually got it. 🤣 😆
 
While this is a fine and testable theory, a far more likely theory is that, due to the fact that 90% of FSD beta users are the "last" to get the update, you're just in that 90% most of the time.
So far ALL, not most. [cry me a river rant] I remember getting 1 or maybe 2 early updates quickly after the 10.3 debacle but that was in November or early Dec. Since then I have been back of the pack on every update.

EDIT[MORE cry me a river rant]: It is just EXTRA sore since it has been over 3 months without an update. I need my "heroin fix"; I'm addicted to updates.
 
I find it weird you’d mention Flat Earthers, but one thing they don’t handle well is uncomfortable truths that challenge their dearest held beliefs. Let’s see how you handle some uncomfortable facts that challenge your beliefs…

Google’s TPU is optimized to RUN neural net Tensor operations.

Tesla’s Dojo is made to train neural nets.
I explained this in the parts of my post you didn’t cherry-pick. 🍒

Apologies I didn’t make the distinction explicit in every sentence I wrote describing it. It’s fairly well understood by those who understand what Dojo is. And honestly takes a bit more than a bullet point to explain to lay people. 🤷‍♂️

The difference between Dojo and TPU is ENORMOUS.

Training the nets the way Dojo does requires an absolutely GOBSMACKING amount of dataflow. Many orders of magnitude more than is required by the TPU processing silicon embedded in SoCs from Google, Apple, et al, which they use to EXECUTE the neural nets to, say, recognize objects in an image or do speech-to-text, etc.

Note: This is why I pointed out that Tesla will be uniquely positioned to offer NN training as a service (similar to Amazon’s varied cloud service offerings.)

Basically Google’s TPU (and Apple’s ML enclave in its A and M series chips, etc) are like the FSD computer in Teslas.

Of course we’ve seen a lot of ML processing silicon the past decade. 🙄

Dojo is a COMPLETELY different beast than we’ve yet seen in production (key distinction there).

So when I put in a bullet point that Dojo is a key advantage for Tesla’s future, where exactly do you see a chip in production that competes?

Or were you just being pedantic about the level of explicitness in the phrasing of my point while missing (or ignoring) that point entirely?
(moderator edit) TPU is a training chip, not an inference chip. Edge TPU (Coral) and the Pixel Neural Core are the inference chips. (moderator edit)
 