FSD Beta 10.69

Lmao, your ignorance is baffling. TPU is a training chip, not an inference chip. Edge TPU (Coral) is the inference chip. Every single thing you said is wrong.
Simply amazing. This is the definition of "Your Brain On Elon".
I'm a dumb-a*s in the NN domain, as most of my career in software was in the pre-NN era.

This is an honest question - can you please point out for me which of the things in his original post were wrong, since you said "every single thing he said was wrong"?

Seriously, I'm here to learn, and this is not trolling.
 
Does the car have problems knowing what it can see in some situations (trees occluding the view, for example)?
From the internal occupancy/occlusion visualizations shown at CVPR, it seems like trees/bushes blocking visibility is understood. See the left view, with purple occupancy coloring on the bush and occlusion/visibility drawn from it:
[Attachment: occlusion internal visualization.jpg]


As usual with Tesla's in-vehicle visualizations, there's a lot more that could be shown but isn't. From my experience with 10.69 so far, it seems like it only shows occupancy network gray blobs within drivable space, typically bounded by the red curbs/road edges. So trees would usually not be visualized, which keeps us from better debugging the types of problems/questions you raise. But sometimes it draws drivable space for home driveways that can have trees and fences, so I've definitely seen it visualize fence/gate occupancy, and if someone can find a tree literally in the middle of a road, that will be interesting to test.
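
To make the "only within drivable space" point concrete, here's a toy sketch of how such a rendering filter could work. This is my own guess at the logic, with made-up names, not anything from Tesla's actual pipeline:

```python
import numpy as np

def visible_occupancy(occupancy_grid: np.ndarray,
                      drivable_mask: np.ndarray,
                      threshold: float = 0.5) -> np.ndarray:
    """Return only the occupied cells a drivable-space-limited renderer would draw.

    occupancy_grid: (H, W) occupancy probabilities for a bird's-eye-view grid.
    drivable_mask:  (H, W) booleans marking drivable space, e.g. the area
                    bounded by detected curbs / road edges.
    """
    occupied = occupancy_grid >= threshold   # the gray "blobs"
    return occupied & drivable_mask          # anything outside the road edge is culled

# Toy example: a tree-sized blob just outside the curb never gets drawn.
grid = np.zeros((4, 4))
grid[0, 0] = 0.9                  # "tree" outside drivable space
grid[2, 2] = 0.8                  # object on the road
mask = np.zeros((4, 4), dtype=bool)
mask[1:, 1:] = True               # drivable area
print(visible_occupancy(grid, mask).astype(int))
```

Under that kind of filter, a tree or fence only ever shows up when the drivable-space polygon happens to extend under it (driveways, gates, etc.).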
 
I'm not sure we can treat Bladerskb as a good-faith poster, tbh. He first instructed us to "read a book," and then, when we pressed him on which book would help us get to his level, he admitted that the only book he knew of was "the book of life." Not sure which "book of life" will help us understand machine learning, neural nets, and artificial intelligence any better, but it likely isn't available for loan at my public library.
 
The book is called Google. It contains almost everything you want to know in life. We are living in a day and age where people REFUSE to educate themselves and want others to spoon-feed them, which is exactly why we have a misinformation problem in the first place.
 
From the internal occupancy/occlusion visualizations shown at CVPR, it seems like trees/bushes blocking visibility is understood.

it seems like it only shows occupancy network gray blobs within drivable space typically bounded by the red curbs/road edges
Yes, exactly, that’s what I described to @Dewg earlier.

Yes, it does make it difficult to understand. It’s very hard to understand its creeping decisions right now. It’s good that there is potential for it to understand occlusions, but it’s very hard to know how large a tree is allowed to be (I have a few larger eucalyptus to the left, not really large enough to cause problems, but they’re there, and with the angles it’s possible they make things difficult for the system?). Very hard to know.

Basically just wild speculation. All we can say is it (the creeping) doesn’t work well (as expected).

I tend to think it is not occlusion that is the issue, because I see creep zones drawn way too far forward even when visibility is clearly not obstructed.

Maybe they overfitted on Model Y. 😢
 
So you think the chances are high that they can train the neural net to measure distances with low jitter, and have the results consistently decrease monotonically with time, under a wide range of conditions?
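
To pin down what "low jitter, monotonically decreasing" would even mean in practice, here's a toy check; purely illustrative, not anything Tesla actually runs:

```python
import numpy as np

def distance_track_quality(distances: np.ndarray, tol: float = 0.1):
    """Grade a time series of distance-to-stop-line estimates (meters).

    Returns (is_monotone, worst_increase, jitter): the largest upward jump
    (should be ~0 for a car closing on a stop line) and the std-dev of the
    frame-to-frame steps around their mean (the "jitter").
    """
    steps = np.diff(distances)
    worst_increase = max(0.0, float(steps.max()))
    jitter = float(np.std(steps - steps.mean()))
    return worst_increase <= tol, worst_increase, jitter

smooth = np.linspace(30, 0, 31)                                # clean approach
noisy = smooth + np.random.default_rng(0).normal(0, 0.8, 31)   # jittery estimates
print(distance_track_quality(smooth))   # (True, 0.0, 0.0)
print(distance_track_quality(noisy))    # fails the monotone check
```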

This is the incredibly hard problem that Tesla is years ahead of the competition in solving: translating the world, including its printed surfaces (signs, lane markings, etc.), into a near-real-time 3D vector space, and of course doing it on a Tesla with its suite of sensors.

Change any of those requirements and the problem is MUCH easier!

Adding Lidar makes this easier (still tough, because you need surface recognition for signs/etc. as well as complex object detection to tell the difference between a shopping bag and a boulder).

Doing this not at ~30 fps, but instead taking a few minutes to compute against a series of still images or videos, makes it WAY easier.

Doing it without surface recognition or with different sensors makes it way easier!

Doing it with crazy-high energy/compute available on-car makes it a fair bit easier.

--

Tesla is doing it the hard way, but it's working! It just needs more of those two-week training cycles to tune the Neural Nets. This literally isn't something you can out-think and nail without the iterative tuning process, and you definitely can't nail it without a BOATLOAD of quality data covering varied situations (correctly 'tagged' for the NN training), which requires those two-week cycles through the Data Engine.

So yeah, they're crazy good at it now. Truly they are. Even with the schizophrenic behaviour. That's genuinely expected at this point in the NN dev/training lifecycle, especially with the utterly major shift to the new Occupancy Neural Network, which seems to be responsible for a lot of the improvements in 10.69. I'm surprised there wasn't a serious regression in quality switching to this new NN, and it makes me SUPER optimistic.

Obviously they've been working on the Occupancy Net for a while, running it on some internal test vehicles, and maybe it's been running in shadow mode on some of the fleet, but it has obviously had a LOT less data to train on (much of the training data comes from Beta drivers, e.g. when they take over or report bugs, and then the engineers look at those situations, 'tag' them, identify where the net made a mistake, etc.).

So, long story short, the Occupancy Net is still a baby, but it's working pretty darn well. It isn't yet as good as it will eventually be: even if there were no further software changes (of course there will be), continued training will make it better, and tweaks to the net itself will make it better still.

This is all expected, and very, VERY good news in my estimation. 👍

It seems to me they could feed other vehicle sensors into the neural net as well (wheel speed, brake force, accelerometers to measure slope, etc.).

💯 certain they already do. Actually there are a TON of sensor inputs that go into the NNs, not just the camera feeds.

Maybe this Dojo WILL be worth it!

Anyway, apparently an extremely difficult problem.

Dojo v1 is a revenue game changer for Tesla. They are SO far ahead on Generalized AI. And FSD is basically like Apple's iPod in 2006, funding the development of the iPhone and what became iOS and the App Store.

That vector-space problem is an ENORMOUSLY difficult problem to solve (with the specific constraints for mobility, compute, energy, real-time, sensor suite, etc). And it's a key problem that will allow for so much behavioural AI to be built on top (FSD is the first one Tesla is tackling, Optimus is the next, but the applications are utterly game-changing across nearly every industry).
 
I guess as long as they have that vector-space and curvature mapped correctly, they can just use the accelerometer data at their current location to get the orientation of the scene with respect to gravity correct. There aren't many other ways to tell. I guess they could look at the orientation of light poles and signs to establish which way is up. (I guess they could also look at regen power and its impact on velocity/wheel speed to get the slope.)
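
For what it's worth, the accelerometer part is simple physics: on a grade, the longitudinal accelerometer reads the gravity component g·sin(θ) on top of the actual acceleration along the road, so comparing it against wheel-speed-derived acceleration gives the pitch. A back-of-the-envelope sketch, not Tesla's actual estimator:

```python
import math

G = 9.81  # m/s^2

def estimate_slope(accel_long: float, wheel_accel: float) -> float:
    """Estimate road pitch in radians (uphill positive).

    accel_long:  longitudinal accelerometer reading (m/s^2); facing uphill it
                 includes a +g*sin(theta) gravity component.
    wheel_accel: acceleration derived from wheel-speed changes (m/s^2), i.e.
                 the true acceleration along the road surface.
    """
    ratio = (accel_long - wheel_accel) / G
    return math.asin(max(-1.0, min(1.0, ratio)))

# Car coasting at constant wheel speed up a 5% grade:
theta = math.atan(0.05)
print(math.degrees(estimate_slope(G * math.sin(theta), 0.0)))  # ~2.86 degrees
```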

In any case, it's essential; otherwise it's impossible to correctly plan the stop.

I’m not sure the struggle is more real on slopes than it is on the flat though. I’ll take some more observations on that.

As I understand it, they are not using the NNs for control and planning. Obviously the distance measurements and environment information need to come from the NN, and that feeds into planning.
 
I'm a dumb-a*s in the NN domain, as most of my career in software was in the pre-NN era.

This is an honest question - can you please point out for me which of the things in his original post were wrong, since you said "every single thing he said was wrong"?

Seriously, I'm here to learn, and this is not trolling.



I'm gonna start from the top.

Google’s TPU is optimized to RUN neural net Tensor operations.


First of all, he is wrong: Google's TPU is a neural-net AI chip that is optimized for both training and inference. It's optimized for inference as well because it supports floating point.

Secondly, he said "tensor operations" because he doesn't know what a tensor is, so he thinks that's some limited set of operations. But a tensor is just a multi-dimensional array (a matrix generalized to more dimensions), and NN math is mostly matrix multiplication.
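
To put it concretely, a dense NN layer is one big matrix multiply plus a bias and a nonlinearity. A generic NumPy sketch, nothing TPU- or Dojo-specific:

```python
import numpy as np

def dense_layer(x: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One fully connected layer: the 'tensor op' is a matrix multiply."""
    return np.maximum(x @ W + b, 0.0)   # matmul + bias + ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))    # batch of 8 inputs, 64 features each
W = rng.standard_normal((64, 32))   # layer weights
b = np.zeros(32)
print(dense_layer(x, W, b).shape)   # (8, 32)
```

Systolic arrays in chips like the TPU exist to do exactly that `x @ W` step as fast as possible.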

The difference between Dojo and TPU is ENORMOUS.
Not only is the TPU a training chip, but each pod (box) of TPU chips delivers over 1 exaflop.
It's literally the most powerful training supercomputer.


And it leads the industry's machine-learning training performance benchmarks.
Training of the nets the way Dojo does requires an absolutely GOBSMACKING amount of dataflow. Many orders of magnitude more than required by the TPU processing silicon found and embedded in SoC silicon from Google, Apple, et al, which they use to EXECUTE the neural nets to say recognize objects in an image or do speech to text, etc.

Cloud TPUs are different from the Edge TPU (Coral) and the Pixel Neural Core (in Pixel phones).
Cloud TPUs are not embedded SoCs.
They come as pods: 4096 TPU chips stacked together in a unit called a pod.
The pods are installed in Google's data centers.
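
And to be concrete about the training-vs-inference distinction this whole argument keeps circling: inference is a single forward pass, while a training step also needs the backward pass (gradients) and a weight update, which is where the extra compute and dataflow come from. A tiny, purely illustrative NumPy sketch of both:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 1)) * 0.1            # model weights

def infer(x):
    """Inference: one forward pass."""
    return x @ W

def train_step(x, y, lr=0.05):
    """Training: forward pass + gradient computation + weight update."""
    global W
    pred = x @ W                                 # forward
    grad = x.T @ (pred - y) / len(x)             # backward: dLoss/dW
    W -= lr * grad                               # update
    return float(np.mean((pred - y) ** 2))       # loss, for monitoring

x = rng.standard_normal((32, 4))
y = x @ np.array([[1.0], [-2.0], [0.5], [3.0]])  # toy targets
for _ in range(200):
    loss = train_step(x, y)
print(round(loss, 6))                            # loss shrinks toward 0
print(infer(x[:1]))                              # deployment only ever runs this line
```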



Note: This is why I pointed out that Tesla will be uniquely positioned to offer NN training as a service (similar to Amazon’s varied cloud service offerings.)
Google already offers this. I mean the level of ignorance is staggering.

Basically Google’s TPU (and Apple’s ML enclave in its A and M series chips, etc) are like the FSD computer in Teslas.
As I have already shown, it is not. And you can hear it straight from Google.

Dojo is a COMPLETELY different beast than we’ve yet seen in production (key distinction there).
So when I put a bullet point that Dojo is a key advantage for Tesla's future, where exactly do you see a chip in production that competes?

Google has been offering AI training as a service on the fastest AI chips ever made for over 6 years, and it's not a demo unit that engineers are still trying to figure out how to make work.

 
Well, I am truly gutted now... I was hoping to get 10.69 in Europe... but I realize now that they are going to need one of those Dojo computers here, and it will take years to run the simulations... and even then, do they really think that they can predict what an Italian driver is going to do next? 🤷‍♂️
 
While improving the use of regen is a valid goal, I'd probably rank it close to the bottom of the items that need improvement.
Just to be clear, we’re talking about smooth driving. This is a major issue. It clearly has Tesla’s attention, too.

Continued lack of performance suggests limitations in their perception and/or closed-loop planning, which must be resolved. This is one of those things which will potentially dramatically improve performance in all areas if it is solved. Sort of like how the occupancy network is a key, critical capability.

It’s basically top of the list, as it is fundamental. If you cannot tell where things are and their velocity, it is impossible to drive correctly (and therefore it will be less safe than it could be). If you can tell, but your closed-loop control has latency or other issues resulting in instability or underdamped behavior, that’s also a problem.
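
As a toy illustration of the latency point (a made-up simulation, not Tesla's planner): the very same stopping controller that behaves nicely with fresh distance estimates will sail past the stop line once its feedback arrives late.

```python
def simulate_stop(delay_steps: int, dt: float = 0.05,
                  k_dist: float = 0.4, k_speed: float = 3.0) -> float:
    """Brake toward a stop line using (possibly stale) distance estimates.

    The controller tracks a target speed proportional to the estimated
    remaining distance. Returns how far the car ends up past the line, in
    meters; 0.0 means it stopped at or before the line.
    """
    d, v = 30.0, 12.0                      # distance to line (m), speed (m/s)
    stale = [d] * (delay_steps + 1)        # measurement delay pipeline
    for _ in range(2000):
        d_meas = stale.pop(0)              # planner sees old data
        v_target = max(0.0, k_dist * d_meas)
        accel = max(-5.0, min(2.0, k_speed * (v_target - v)))
        v = max(0.0, v + accel * dt)
        d -= v * dt
        stale.append(d)
        if v < 1e-3:                       # effectively stopped
            break
    return max(0.0, -d)

print(simulate_stop(delay_steps=0))    # fresh feedback: stops short of the line
print(simulate_stop(delay_steps=30))   # exaggerated 1.5 s lag: overshoots it
```

Underdamped behavior shows up the same way: with too little damping the speed hunts around the target instead of settling.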
 
Back when I had an SS to worry about, I had no clue how you would ever trigger the aggressive turning. It didn’t matter how aggressively I took a turn, it wouldn’t register in my SS.

Then I got the Beta and realized that me and the people in my town drive very slow and chill. Like holy crap! I hope they introduce an option to turn off /Tokyo drift mode/, that and the /smoke everyone at a green light mode/.

And to be clear, I hope for something like this as a setting, not a change to how it drives in general. I lived in Dallas for 10 years, and it was Mad Max out there which I was down with. But after living in a small town for a few years, I take it easy in my car.

I've driven rural Georgia + Alabama. And I've also driven the heart of Boston and NYC. Yeah, quite different.

Generally, a high safety score is much easier to achieve the more gently the culture around you drives. And this was one of the main reasons for deploying SS. People got so worked up about how inaccurate it was at identifying the safety of the driver, when really the safety score reflects how safe the car is due to both the driver and its environment.

FWIW, FSDb is way too timid for Boston/NYC. Funny to see you describing it like Tokyo Drift :D
 
More driving on 10.69.1.1 today. The good, the bad and the fugly:
  • In general, beta acted quite well, with fewer safety disconnects in normal driving than the previous version. It's hard to say for sure though, as I did quite a bit of stress testing on difficult ULTs and URTs (unprotected left and right turns).
  • ULTs, as expected, are noticeably improved. I had near 100% success on all ULTs where the car put up a creep wall. But therein lies the rub. The car shows neither creep wall nor target zone for one of the occluded ULTs that I test. I tried it three times and for each, the car obeyed the stop sign, then stated it was creeping, but simply rolled forward a bit and took off. Fortunately, traffic was very light and there was no danger. But this intersection was a fail. I tried another intersection just down the road as well as multiple tests on ULTs elsewhere and beta performed perfectly with creep walls and target zones displayed.
  • I have a highly occluded URT with a large wall blocking the view to the three lanes of traffic coming from the left. FSD beta appears to be creeping adequately to see traffic coming from the left and is a huge improvement over earlier versions where beta seemed to assume that no car seen is the same as no car coming! However, now it tends not to make the turn unless all three lanes are completely devoid of traffic. At midday, this would be a rare thing, so FSD beta just sits and sits and sits. Maybe Tesla could apply the same full court press to URTs as it did for ULTs. Please?
  • Driving on neighborhood streets with no center line is truly fugly. The car positions itself OK, but if another car comes the other direction, the Tesla comes to a complete stop. The streets I was driving are wide enough for two cars to pass with plenty of space and this has not been an issue prior to this version. This is unacceptable!
  • I encountered a construction zone on a city street where my lane was blocked off with a barrier and a sign to detour into the opposing lane. My car navigated this like a champ! Then, immediately afterward, it navigated a very narrow space between two trucks parked on either side of the street. This was probably the most unexpected improvement of the day, as it was something the car had to deal with completely on its own.
  • Transitioning from FSD beta to NOA seems to have some issues. On the two occasions that I've merged onto a limited-access highway, NOA didn't change the car's set speed to the new speed limit despite the car displaying the correct speed limit. A third time, switching from FSD beta to NOA on a rural two-lane highway, the car slowed down about five mph before picking up speed to 70 mph.
  • FSD beta will slow to a stop at a green light if a car in a right turn lane is close to your lane. I had the car stop right next to a truck making a right hand turn. No sense in that as any threat from the truck has already passed. Along the same lines, FSD beta still overreacts when a lead car slows to make a turn. It seems better, but still needs improvement.
  • I had no incidents of FSD beta diving into a left or right turn lane when going straight. This has (so far!) been a big improvement over 10.12.2. However, I do have problems with FSD beta not changing lanes to a straight-through lane when the one it is in becomes a turn-only lane. Fix one problem and create another?
  • A few incidents of phantom braking, most of which were very minor and of no consequence. A couple of bad ones came when approaching and departing a crossing under a highway overpass; the car braked hard at the frontage-road intersection on each side of the overpass despite having the right of way and no stop signs.
 
said by someone that actually got it. 🤣 😆
I think running red lights is a major bug. It's not just me either:


Links to previous posts often don't work, but if it doesn't, scroll up a few pages in this thread to the post by @Kyrne.
 
I suggest that 10.69.1.1 should not be released more widely. Today it missed two turns while on navigation and attempted to run two red lights. I tried rebooting after the missed turns, but the red-light issue is a major safety hazard. The car does make most turns correctly and it stops for red lights, so there is definitely a bug affecting my car. On previous FSD beta versions it never tried to proceed through a stoplight that had been visibly red for an extended time.
Have you emailed Tesla yet? If not, I would suggest you do, and include 10.69.1.1 in the email subject along with the fact that it's running red lights.
Whether that helps, I don't know.
Just to be clear, we’re talking about smooth driving. This is a major issue. It clearly has Tesla’s attention, too.

Continued lack of performance suggests limitations in their perception and/or closed-loop planning, which must be resolved. This is one of those things which will potentially dramatically improve performance in all areas if it is solved. Sort of like how the occupancy network is a key, critical capability.

It’s basically top of the list, as it is fundamental. If you cannot tell where things are and their velocity, it is impossible to drive correctly (and therefore it will be less safe than it could be). If you can tell, but your closed-loop control has latency or other issues resulting in instability or underdamped behavior, that’s also a problem.
Certainly not a major issue for my car but then we all know different vehicles behave differently.
 
I've noted that sometimes ULT creeping is rather abrupt and feels like the car is taking off even though the car is going to stop at the creep wall. Since the driver has very little time to react to prevent the car from entering the intersection, this abrupt creep is likely going to be misinterpreted, resulting in a lot of disconnects from brake application. It takes a lot of faith to trust the car to stop when there's approaching traffic! Definitely not for the timid.

Of course, keep your foot right above the brake pedal, just in case!
 
Certainly not a major issue for my car but then we all know different vehicles behave differently.
I do not believe vehicles behave differently. You can look at every 10.69.1.1 video showing the regen bar and you will see this behavior (not on every stop, though). It is of course worse when coming to a stop from 40-50 mph than when moving around at 25 mph. But it is there (sometimes) in both cases.
 
I've noted that sometimes ULT creeping is rather abrupt and feels like the car is taking off even though the car is going to stop at the creep wall. Since the driver has very little time to react to prevent the car from entering the intersection, this abrupt creep is likely going to be misinterpreted, resulting in a lot of disconnects from brake application. It takes a lot of faith to trust the car to stop when there's approaching traffic! Definitely not for the timid.

Of course, keep your foot right above the brake pedal, just in case!
Yes, this is a flaw, noted earlier. It’s a “must fix.”

It must engage the creep network earlier, not wait so long to begin the creep, and creep in a more predictable (slower) fashion. This will result in overall faster, more predictable, more human-like creeps.
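
One way to picture "engage earlier and creep slower" is as a gentler speed ramp toward the creep wall instead of a late lurch. Purely illustrative numbers, not how the planner actually works:

```python
def creep_speed(distance_to_wall: float,
                v_creep: float = 1.0, ramp: float = 3.0) -> float:
    """Target creep speed (m/s) as a function of distance to the creep wall.

    Starts tapering several meters out and reaches zero at the wall, rather
    than holding speed and braking at the last moment.
    """
    if distance_to_wall <= 0.0:
        return 0.0
    return v_creep * min(1.0, distance_to_wall / ramp)

for d in (4.0, 3.0, 2.0, 1.0, 0.5, 0.0):
    print(f"{d:>4.1f} m -> {creep_speed(d):.2f} m/s")
```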