
FSD Beta 10.69

Explain to me how FSD could be programmed to understand and deal with this situation all by itself?
As I wrote - if you know the problem and solution - it can be automated.

Not sure why people keep bringing up esoteric problems that happen very rarely. There are multiple ways to deal with it including remote operator assistance.

I'd argue that for a robotaxi, the chances of getting stuck because of a hardware failure are higher.
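
To make that concrete, the fallback itself is easy to express in code. A minimal sketch of stuck-detection escalating to a remote operator (hypothetical names and thresholds, mine alone, not any vendor's actual logic):

```python
# Hypothetical sketch of a remote-assistance fallback (illustrative only):
# if the planner loses confidence and the car makes no progress for a while,
# stop safely and escalate to a human teleoperator.
from dataclasses import dataclass

@dataclass
class PlannerState:
    confidence: float     # planner's confidence in its current plan, 0..1
    stuck_seconds: float  # how long the vehicle has made no progress

def decide_fallback(state: PlannerState) -> str:
    """Return the action to take when the planner is unsure."""
    if state.confidence > 0.9:
        return "continue"               # normal autonomous driving
    if state.stuck_seconds < 10.0:
        return "wait"                   # give the scene time to resolve itself
    return "request_remote_assistance"  # rare, unresolvable case: ask a human

print(decide_fallback(PlannerState(confidence=0.4, stuck_seconds=30.0)))
```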
 
It's like non-Tesla owners posing scenarios like, "what if there's a big power outage due to a snow storm/hurricane/nuclear war. How will you charge your car?"
 
Do you consider police/flag persons working a detail and controlling traffic an esoteric problem? I ask because in the summer I average one utility/road-construction site with this situation pretty much every day. As long as there is a driver it's not a problem, especially if FSD can detect this and hand off to the driver when needed. Where I live, flag persons with Stop/Go signs are never used, so understanding hand gestures would be necessary 100% of the time.
End of year is fast approaching. :)
 
This is anecdotal, but I drive 59 miles each way to my office.
I am 1 of the original 100 safety score beta testers.
I use Beta 💯% of the time, except when somebody in an ICE 🧊 vehicle wants to find out what Teslas are about. They learn their lesson and I put Beta back on.
I have 17k miles now.
My commute is about 50% highway and 50% city driving each way.
At first (I believe on 10.3?) I had to intervene 7 or 8 times each way.
Now I have 1, sometimes 2, mainly due to poor lane selection that could cause a missed turn.
That is what I call progress, at least for me.
The kids love it; my wife hates it. She actually doesn't believe EVs are here to stay!!
I strongly believe that we all have different roads, a-hole drivers, obscured signs, potholes, crazy roads, etc.
With that being said, I read all the posts and appreciate the high level of criticism (stuff above my pay grade) and agree with most of it. I would prefer if we were a little more civil about it.
Let's continue our mission together to make EVs more popular and Beta as good as it can be.
 
You are right and I probably should have left that off.
As we all know Elon's promise for this capability passed a long time ago.

Honest question, why do people remember all of Elon's estimations as "promises," and why do they only recall the "promises" while forgetting the things that have changed between then and now?

Of course nobody (Elon included) likes to acknowledge that Tesla has experienced a series of setbacks when working toward FSD. But they have repeatedly run up against major hurdles that have required the re-working of hardware, firmware, and their neural network infrastructure. Why would any of Elon's original timelines still logically hold in the face of all of these setbacks and reworks?

I forget who first made this reference, but achieving actual autonomy will be like the moon landing. Timelines were seriously set back by the Apollo 1 tragedy. But nobody says "Yeah we landed on the moon, but it was 2 years later than they promised!"
 
Absolutely.

Plus people are REALLY bad at hearing a qualified goal from Elon like “hopefully by…” or “aiming for…” as anything other than a promise.

So bad at taking that as a “promise” that you’d think they might be motivated in some way… 🤔
 
I agree with everything you say and I'm not disappointed in the status of FSD. It's just unfortunate that many people purchased FSD because of what Elon has said over the years and are now facing reality. At least my first FSD purchase was only $3k.
FSD Timeline Promises (summary)
Looking forward to getting 10.69 and since this is a 10.69 thread and I've gone way off the thread topic this will be my last post on this. Sorry all.
 
Weird that in all of your quotes you are VERY careful not to include where I repeatedly described the vectorspace problem of doing it with the specific cameras/sensors used on Teslas. And then you go on to describe how it's not hard on platforms using entirely different sensors. 🤔

Really, really weird you did that. 🙄

Capturing the world is not difficult; a map is a prior ground truth. Representing the sensor data and the prior in vector space is not difficult either. Lidar is a sensor that represents data directly in 3D space; vector space is just representing the data in 3D world coordinates. If you think that is difficult, imagine fusing data from 3 different sensor types into a single vector space (3D space); by that logic, that must be impossible, right? Yet there are cars with 30 to 50 sensors on the road driving autonomously.
As you can see above, I addressed that, or do I need to do so line by line? EVERYONE uses and fuses camera sensors, and more of them. Fusing data from 8 cameras is an order of magnitude less bandwidth/compute than fusing data from 29 cameras, 5 lidars, and 6 radars. Sensor fusion and perception have not really been an issue for others in this field, though there are still improvements and ways they could be better and more efficient. That's beside the point anyway. Perhaps less time rolling your eyes and more time reading, and you'd see that. Driving policy is one of the hardest parts to tackle, as it deals with predicting uncertainties.

You still haven't addressed my question. Where, 3 years ago, did 95% of experts claim sensor fusion and projecting into 3D space was impossible using cameras? In 2020, Google/UC Berkeley/UC San Diego published the original paper on NeRFs. If you watch Tesla's Ashok Elluswamy's keynote from last week's CVPR22, you'll see him directly reference that original 2020 NeRF paper. That should tell you how much Tesla builds on what others have contributed to the field. That is not a bad thing, as they are all trying to solve the autonomous driving problem.
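
For anyone who hasn't read the paper, the core NeRF idea is compact: march a ray through space, query a learned field for density and color, and alpha-composite the samples. A toy sketch of that rendering step (the `field` function below stands in for the learned MLP; everything here is illustrative, not the paper's code):

```python
# Toy NeRF-style volume rendering along one camera ray. A real NeRF learns
# `field` as an MLP from posed images; here it is a hard-coded fuzzy sphere
# so the sketch runs on its own.
import numpy as np

def field(points):
    """Stand-in radiance field: a fuzzy red sphere of radius 1 at the origin."""
    dist = np.linalg.norm(points, axis=-1)
    density = 5.0 * np.exp(-4.0 * (dist - 1.0) ** 2)   # opaque near the shell
    rgb = np.tile(np.array([0.8, 0.3, 0.3]), (points.shape[0], 1))
    return density, rgb

def render_ray(origin, direction, near=0.5, far=4.0, n_samples=64):
    """Alpha-composite samples along a ray into a single pixel color."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction   # 3D sample positions
    sigma, rgb = field(points)
    delta = np.diff(t, append=far)             # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)       # per-sample opacity
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])  # transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)

print(render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
```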

Tesla NeRF


Waymo Block NeRF
 
You still haven't addressed my question. Where 3 years ago did 95% of experts claim sensor fusion and projecting in 3d space was impossible using cameras?

You seem determined to try to argue a point I never made. And pulling out ALL the bogus tricks trying to do it.

Once again you’re using VERY different systems (Waymo Block NeRFs generated from 10Hz video using nigh-unlimited compute resources on their servers, not in-car) to try to refute a point I made about using the specific cameras/sensors on Teslas to build the vectorspace at a decent framerate.

And the 2020 paper you quoted took *1-2 DAYS* on a lab machine to map a single object in vectorspace from a few still images at different angles (not 8 simultaneous video feeds mapped into a detailed scene around 30x per second under the compute/power constraints in a Tesla). 🙄
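
To put rough numbers on that gap (my rounding, a back-of-envelope comparison only):

```python
# Back-of-envelope: offline per-scene NeRF fitting time (2020 paper, ~1-2 days)
# versus the per-frame budget of an in-car system fusing 8 feeds at ~30 fps.
offline_s = 1.5 * 24 * 3600   # ~1.5 days per scene, in seconds
budget_s = 1 / (8 * 30)       # seconds available per camera frame

print(f"offline fit:   ~{offline_s:,.0f} s per scene")
print(f"in-car budget: ~{budget_s * 1e3:.1f} ms per frame")
print(f"gap:           ~{offline_s / budget_s:.0e}x")  # roughly seven orders
```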

(moderator edit)

 
You seem determined to try to argue a point I never made. And pulling out ALL the bogus tricks trying to do it.
I'm asking a direct question related to what you stated.
Three years ago 95% of experts in the industry would have said it was impossible with the sensors Tesla is using.
What sensor is Tesla using that nobody else is? Everyone uses 12 to 29 cameras and various other sensors.

Once again you’re using VERY different systems (Waymo Block NeRFs generated from 10Hz video using nigh-unlimited compute resources on their servers, not in-car) to try to refute a point I made about using the specific cameras/sensors on Teslas to build the vectorspace at a decent framerate.
Watch the presentation. Both are offline using unlimited compute; a scaled-back model is proposed as a possibility for the future. Moreover, my point is to show how research is shared and built upon by others. What Tesla is doing today is built upon research shared by Google/Waymo. Nobody found projecting into vector space impossible 3 years ago.
And the 2020 paper you quoted took *1-2 DAYS* on a lab machine to map a single object in vectorspace from a few still images at different angles (not 8 simultaneous video feeds mapped into a detailed scene around 30x per second under the compute/power constraints in a Tesla). 🙄
The 2020 paper took that long to render a single scene; the 2022 paper from earlier this year renders at city scale using 2.8 million images. There is no trick in what I said.
(moderator edit)
I'll stay. The only thing I'm pushing is sharing work that's been done by others in this field. It is preposterous to claim that projecting images into 3D vector space was an impossible task 3 years ago when everyone in this field was doing exactly that 3+ years ago.
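
For the record, here is how mechanical the projection step itself is; a toy sketch with made-up calibration matrices (illustrative, not real data): with known extrinsics, plus intrinsics and a depth estimate for cameras, a lidar return and a back-projected pixel land in the same 3D world frame.

```python
# Toy example: put a lidar point and a camera pixel (with estimated depth)
# into one shared world frame. All matrices below are invented for
# illustration; real systems use calibrated values.
import numpy as np

def to_world(points_sensor, T_world_from_sensor):
    """Apply a 4x4 rigid transform to Nx3 points via homogeneous coords."""
    homog = np.hstack([points_sensor, np.ones((len(points_sensor), 1))])
    return (T_world_from_sensor @ homog.T).T[:, :3]

def pixel_to_camera(u, v, depth, K):
    """Back-project one pixel with known depth through intrinsics K."""
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    return np.array([[x, y, depth]])

K = np.array([[1000.0, 0.0, 640.0],        # toy pinhole intrinsics
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
T_cam = np.eye(4);   T_cam[:3, 3] = [0.0, 0.0, 1.5]    # camera mounted 1.5 m up
T_lidar = np.eye(4); T_lidar[:3, 3] = [0.0, 0.0, 2.0]  # lidar mounted 2.0 m up

print(to_world(np.array([[10.0, 0.0, -1.0]]), T_lidar))     # lidar return
print(to_world(pixel_to_camera(800, 400, 10.0, K), T_cam))  # camera pixel
```

The hard part for a camera-only stack is the `depth` argument, estimating it densely at framerate, not the matrix math.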
 
checking in to see who else is still holding their breath for this breathtaking release
I am definitely looking forward to it a great deal, but I do not expect to get utility from it. I don’t think it will yet be to the point where it is useful, based on the demonstrated abilities of 10.69 to “stop” and “go” thus far. Stopping and going is one of the key areas that they need to improve to make it more useful.

I do expect turning to be improved a bit, and I think there is hope for lane selection too. Not sure about the blinkers.

Just have to think of it as building blocks. It’s going to be great to try it out in a couple weeks and find all the places it has improved and the places where it is still lacking.

Timelines were seriously set back by the Apollo 1 tragedy. But nobody says "Yeah we landed on the moon, but it was 2 years later than they promised!"
The thing is, I don’t think they wildly missed every new adjusted goal they set after that disastrous setback.
Why would any of Elon's original timelines still logically hold in the face of all of these setbacks and reworks?
They don’t, of course. I don’t think that many would begrudge Elon his prior optimism. I don’t. That’s just the way it is.

But I think where it gets a bit iffy is the continued setting of goals. For example, “FSD (not Beta) wide release by the end of this year.” A goal requiring “insane work.”

I mean, at some point he’ll be right of course. But I think people are getting a bit upset that he does not set a reasonable goal! Rather than at the beginning of this year saying he’d be “shocked” if they didn’t achieve safety better than a human this year, just say “I think in 2-3 years we will be approaching human safety levels.” Or whatever it is. He must have a pretty good idea of what the timeline is - he has a lot of smart people working for him, who probably have a very good feel for the timeline (even after accounting for the uncertainty in this field of AI). It would be great if he would share that vision with us! He doesn’t share any of the vision, as far as I can tell.
 
It is preposterous to claim that projecting images into 3D vector space was an impossible task 3 years ago
This is a claim you are entirely making up.

I’m not saying it now and I never said it.

What you just made up is **LIGHTYEARS** different from what Tesla is doing: building a decent-framerate live vectorspace model on the Teslas running FSD Beta.

(moderator edit)
 
Not sure why people keep bringing up esoteric problems that happen very rarely. There are multiple ways to deal with it including remote operator assistance.
Some rare situations don't have the convenience of extra time. Today, I noticed a group of pedestrians giving space to someone in a wheelchair to continue down the sidewalk, but that person quickly motioned to the group that he didn't need them to move and sure enough quickly turned down the ramp into the crosswalk directly in front of us. FSD Beta probably would have had superhuman reflexes to stop if I hadn't disengaged moments earlier, but it probably would have resulted in an even more awkward situation for everyone.

Should FSD Beta be extra cautious around pedestrians that are interacting with each other, at the "cost" of being reasonably fast? I suppose, more generally, a neural network could predict that these pedestrians interacting with each other was more "unusual" than average, warranting extra caution to be safe? FSD Beta seems to already do so at crosswalks, where a stopped/slowing adjacent vehicle results in the Tesla also slowing down just in case an unseen pedestrian might be crossing. (Although more often than not, the adjacent vehicle was just waiting to make a left turn, so it'll be interesting to see if 10.69 changes this sensitivity with better object permanence and occlusion/visibility reasoning via the new occupancy network improvements.)
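
That tradeoff could be expressed as a simple speed-scaling heuristic. A hypothetical sketch (names and numbers invented here; a real planner weighs predicted trajectories, not boolean flags):

```python
# Hypothetical caution heuristic: slow down near interacting pedestrians and
# near crosswalks occluded by a stopped adjacent vehicle. Invented for
# illustration; not FSD Beta's actual policy.
from dataclasses import dataclass

@dataclass
class Pedestrian:
    distance_m: float
    interacting: bool  # e.g. gesturing to, or yielding to, another pedestrian

def target_speed(base_mps, pedestrians, occluded_crosswalk):
    """Scale speed down as predicted pedestrian uncertainty goes up."""
    factor = 1.0
    for p in pedestrians:
        if p.interacting and p.distance_m < 20.0:
            factor = min(factor, 0.5)  # unusual interaction nearby: halve speed
    if occluded_crosswalk:
        factor = min(factor, 0.6)      # assume an unseen crosser until clear
    return base_mps * factor

# ~30 mph with an interacting pedestrian 12 m away -> ~15 mph
print(target_speed(13.4, [Pedestrian(12.0, True)], False))
```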
 
I think this is a good sign for 10.69.1.
Not surprised @Daniel in SD loves this. I am getting ready to turn over a beer, though I still have hope for regressions in 10.69.2.

It really will be awesome to see success every time on this turn. It sounds like Elon is committed to having the car use smaller gaps, so if they can get the positioning consistency and hesitation issues fixed, they should be successful.