Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
The car can tell if it's being looked at from across the road?
Are you questioning the hardware capability, or the current 12.x software versus its end-to-end potential? This isn't quite what the car sees, but can you tell which of these three has someone looking from across the road?

[Attached image: 12-1-2-eye-contact-jpg.1019483]
 
Those here who proclaim that robotaxi could never work on HW3 or even HW4 simply don't know. Nobody knows.

If Tesla is able to improve cycle time by orders of magnitude as planned, then autonomy has a very good chance of being solved. And it could be rather soon.

I definitely don’t know, but I doubt that even with a 10,000x improvement in training compute we would get to autonomy (even fairly wide-ODD L3) on HW3/4.

Just seems like far too complicated a problem to solve with such rudimentary hardware (HW3/4).

But I could be wrong. As you say, nobody knows. And nobody will ever know if it's possible with current hardware, because by the time anything is remotely close to solved for actual autonomy, we'll have long moved on to much more capable hardware, and no one is going to want to go back and make it run on HW3/4!

Compute improvements like that 10000x would open the way for exploring more approaches to solving the problem on arbitrary inference (and vision) hardware, though! It would help - just not on current hardware.
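To put rough numbers on that 10000x (every value here is a made-up illustration of mine, not anything from Tesla): if error rate falls as a power law in training compute, the improvement is real but may only buy an order of magnitude or so of reliability.

```python
# Toy illustration: under a power-law scaling of error with training
# compute, error ~ C^(-alpha), a 10,000x compute increase shrinks error
# by a factor of 10,000^alpha -- large, but maybe not enough if many
# orders of magnitude of reliability are still missing.

def error_after_scaleup(base_error: float, compute_factor: float, alpha: float) -> float:
    """Remaining error rate after multiplying training compute by compute_factor."""
    return base_error * compute_factor ** (-alpha)

# Assume 1 intervention per 100 miles today and alpha = 0.3 (both hypothetical).
base = 1 / 100
scaled = error_after_scaleup(base, 10_000, 0.3)
miles_per_intervention = 1 / scaled
print(f"~1 intervention per {miles_per_intervention:,.0f} miles")
```

With those assumed numbers, 10,000x compute moves you from one intervention per hundred miles to one per ~1,600 miles - a big jump, yet still far from robotaxi territory.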

Just think about the problem for a bit. Think about what you do when driving and how difficult it would be for a computer. It's incredibly complicated and difficult to solve, especially with NN techniques. How could current hardware, or any existing technique, solve it? We don't have any example in the AI field of any hardware (even at roughly the maximum power currently available), on the inference side let alone the training side, that is capable of anything close to solving this problem with the reliability required. Even OpenAI's latest models can't solve the (very different) problem they are designed to solve with high reliability!

So I just can't see it happening.

Who knows, maybe there will be a breakthrough, but it just seems like such a huge space of possible inputs that solving it is going to take a lot more power, and possibly a few breakthroughs as well. People unwisely put a lot of faith in training - remember that training has never been demonstrated to solve this sort of problem (except for humans, of course).
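To make "the reliability required" concrete with back-of-envelope arithmetic (every number here is my own assumption, not a real figure):

```python
# Back-of-envelope sketch: if an autonomous system is making
# safety-relevant decisions continuously, the tolerable per-decision
# error rate becomes astonishingly small.

def allowed_error_per_decision(miles_between_failures: float,
                               avg_speed_mph: float,
                               decisions_per_second: float) -> float:
    """Max tolerable probability of a critical error per decision."""
    hours = miles_between_failures / avg_speed_mph
    decisions = hours * 3600 * decisions_per_second
    return 1 / decisions

# Hypothetical target: one critical failure per 1,000,000 miles,
# at 30 mph average, with ~1 safety-relevant decision per second.
p = allowed_error_per_decision(1_000_000, 30, 1)
print(f"allowed error per decision ~ {p:.1e}")
```

Under those assumptions you need roughly a one-in-a-hundred-million per-decision error rate, which is the gap I mean between current NN demos and actual autonomy.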

I literally know nothing about training and inference, so it's possible there is major Dunning-Kruger going on with me. Not sure where I fall on that curve. May even be possible that I do not fall on the curve; given my lack of knowledge, I may be way off the bottom, even before the peak of Mount Stupid.

Robotaxi can't work on HW3 or HW4 because neither is capable of cleaning the camera lens--except for the in-windshield cameras
Even if you restrict the ODD substantially and forget about robotaxi, even then the problem is likely far too difficult for actual meaningful autonomy (wide ODD L3) in best-case conditions.

Makes me wonder why it hasn't been released wider yet?
Just probably unsafe with many regressions.
I think 2024 is the year FSD needs to really show substantial progress. If this whole year goes by without any meaningful improvements, like seriously substantial improvements, it will be sad.
I don’t think it really needs to. Just a year like any other.

Tesla just needs to sell cars, a lot of them, and develop new cars people want. And build a lot of batteries that don't suck. There’s not a lot of market value assigned to FSD/autonomy currently (no matter what some cray-cray investment folks may be assigning in their alternate reality) so it’s very fortunately not important what happens, unless someone else ends up having something Tesla does not. And that seems unlikely.
I look forward to Tesla's "Chat GPT" moment, I just hope it happens relatively soon.
That was enabled by a much larger model, so it won’t (cannot) happen here. ChatGPT has a very open design space, doesn't have to be right (at all!), etc. ChatGPT is a perfect manifestation of AI because it can be super wrong a lot of the time and still be very useful. It's an assistant, but that type of assistant has very different requirements than a driving assistant.

v12 will be nothing special. If it is ever released, it will be an incremental improvement, and it may enable more frequent/faster updates (TBD - it's possible validation will take a lot longer because it is harder to look for regressions).

Let's keep expectations reasonable so we can be extremely content when v12 comes out:
1) It's not going to be the path to autonomy.
2) It's only going to improve on some of the issues that v11 had.
3) It will introduce new problems we didn't have before.
4) It's going to require careful monitoring to make sure it doesn't run traffic lights, etc.
5) Maybe it'll lead to more frequent updates.
6) Maybe it'll be a bit smoother.

That's a pretty solid update that we can be happy with!

It's not like we're going to get to L3 this year. Let's just be reasonable and realize we're just looking for something that will be slightly easier for Tesla to iterate on with less manpower - hopefully it will scale and reduce human capital costs for Tesla. Those cost reductions will have trickle-down benefit for FSD owners. Maybe they'll just be able to script the iterations! Make it end-to-end; no human intervention anywhere between releases! Even with the hardware costs, that would be cheap. 😂
 
Are you questioning the hardware capability or software current 12.x vs end-to-end potential? This isn't quite what the car sees, but are you able to tell which of these 3 has someone looking from across the road?

Seems unremarkable to me. Not sure what the car picked up on. It's often pretty slow to respond to VRUs, so maybe the recent removal of an occlusion a couple of seconds prior eventually results in a response? Not really sure.

It could have been the perceived change in direction too.
 
Oh, God, me too!

There's a four-lane street I take every day. At the first intersection I come to, it _used_ to be one left turn lane and three straight-ahead lanes, which fed into a three-lane street.

It was repaved last summer and the markings changed so that there are now two left-turn lanes-- clearly marked with large left-arcing arrows-- and two straight-ahead lanes feeding into three lanes on the other side of the intersection.

(As an aside, I find this lane design bizarre, since it's illegal to change lanes in an intersection here in Nevada, yet you're pretty much forced to with this setup. Anyway...)

The route I take requires a left turn at the next intersection after this one. For a year or so now, FSD has placed the car in the leftmost straight-ahead lane to cross the intersection. However, that lane is now a left-turn lane.

And the car knows it's a left turn lane since the visualization shows the left turn arrow painted on the pavement. But the FSD puts the car in the second left turn lane anyway, and will proceed straight across the intersection (I tested this in a zero-traffic environment).

I know that this is due to a mapping error-- the map hasn't been updated with the new lane configuration. I imagine there are hundreds if not thousands of examples of this across the country.

But I had always hoped if the mapping data and what the cameras saw disagreed, the car would go with the cameras...
There has to be a balance, because the opposite can happen, as per an example up thread, where the car sees a speed limit sign that does not apply to the current road, and then makes some bad decisions based on that.

It could also be a reaction-time thing. By the time they see the left arrow, it may be too late (the car has long since pulled into that lane, well before it was certain about the left arrow).
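One way to picture that balance between map and cameras (my own toy illustration, not how Tesla actually fuses these; the function and numbers are hypothetical): treat the stale map as a prior over lane type and the camera as a noisy observation, then combine them with Bayes' rule.

```python
# Toy sketch of the map-vs-camera balance: the map supplies a prior
# probability that the lane is a left-turn lane, the camera supplies a
# noisy observation, and a Bayesian update combines the two.

def posterior_left_turn(prior_left: float,
                        cam_says_left: bool,
                        cam_accuracy: float) -> float:
    """P(lane is left-turn | camera reading), given map prior and camera accuracy."""
    if cam_says_left:
        num = cam_accuracy * prior_left
        den = num + (1 - cam_accuracy) * (1 - prior_left)
    else:
        num = (1 - cam_accuracy) * prior_left
        den = num + cam_accuracy * (1 - prior_left)
    return num / den

# Stale map says 10% chance this is a left-turn lane; a 90%-accurate
# camera sees a left arrow. The posterior lands near 50%: neither the
# map nor the camera alone should automatically win.
print(posterior_left_turn(0.10, True, 0.90))
```

The point of the toy: a single camera reading against a confident (but wrong) map prior leaves genuine uncertainty, which is why blindly trusting either source produces the failure modes described above.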
 
Everything I've read about v12 from people who have driven it points to it being better than v11 in most ways. Still not perfect, but a good step in the right direction.

Makes me wonder why it hasn't been released wider yet? Tesla seems to be very cautiously and slowly releasing v12, even though the firsthand reports seem to claim it's more capable and confident than v11.

I think 2024 is the year FSD needs to really show substantial progress. If this whole year goes by without any meaningful improvements, like seriously substantial improvements, it will be sad.

I look forward to Tesla's "Chat GPT" moment, I just hope it happens relatively soon.

For sure. Up to this point, FSD improvements have been mostly a long, drawn-out placebo effect, with release notes of how things have improved with each release. One of these days something has to give, as the goodwill is gone.

I don't follow Dirty Tesla, but he did a recent video about how he's noticed FSD enthusiasm has changed. He sees it in various places, but also a coworker opted for an FSD subscription, and on the first drive, with DT in the back seat filming, FSD tried to turn into an oncoming vehicle. So now the coworker refuses to use it. The average customer doesn't tolerate that nonsense.

Hopefully the team can finally pull one out of the hat.
 
I definitely don’t know, but I doubt with a 10000x improvement in training compute, that we would get to autonomy (even fairly wide ODD L3) with HW3/4.

Yep, there's always a hopeful world where everything is magically easy, and then there's reality, with time/resource constraints, product life cycles, and the need for technological advancement. FSD on HW3 hasn't happened in a timely fashion, so it's looking more and more like a crowdsourced pig in a poke.
 
Are you questioning the hardware capability or software current 12.x vs end-to-end potential? This isn't quite what the car sees, but are you able to tell which of these 3 has someone looking from across the road?

Likely none of us has had a chance to drive it, so who knows, but the UI and path aren't very revealing. There's little info regarding highlighted objects of interest.
 
My biggest problems are more in the annoyance than the dangerous category, particularly v11's maddening insistence on moving out of the necessary lane about a half mile before a turn, just so that it can try to re-merge back into that lane about 1,000 ft out.
I had the same issue, and it has largely been addressed by removing any and all speed offsets. Now the route plan, which I believe is based upon the posted speed limits, is adhered to, making it more achievable to have a drive with fewer lane changes out of the desired lane.
 
I definitely don’t know, but I doubt with a 10000x improvement in training compute, that we would get to autonomy (even fairly wide ODD L3) with HW3/4.

This is one of those cases where I don't necessarily agree, but I liked it anyway. Good post.
 
I had the same issue, and it has largely been addressed by removing any and all speed offsets. Now the route plan, which I believe is based upon the posted speed limits, is adhered to, making it more achievable to have a drive with fewer lane changes out of the desired lane.
I don't generally use speed offset with v11; if anything I turn the speed down a little to reduce its tendency to go around other cars. Sometimes I turn it up in light or medium traffic if everyone is already going above the limit.

Over time, I've become more and more convinced that the syndrome of changing lanes out of and then back into the needed through lane, prior to an upcoming turn, is a true bug in the software. I had an exchange with @Mardak regarding map errors, but I think it's really more than that. Some kind of strange heuristic code bug where it tries to "prepare" for a lane change before a turn, but gets the left/right value swapped about 3/4 mile out.

I could go into a long explanation about experiments I've done that tend to support this idea, but I'm hoping it will go away with v12. If it's strictly bad map information, which I now doubt, then it may not get any better.
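For what it's worth, a swapped left/right value really would reproduce that out-and-back dance. Here's a purely hypothetical toy in the spirit of the heuristic bug being described (none of these names or numbers come from Tesla's code):

```python
# Hypothetical sketch: if the "prepare for turn" step flips the sign of
# the lane-change direction when the turn is still far away, the planner
# moves AWAY from the turn lane, then has to merge back once the correct
# logic takes over closer in.

def desired_lane(current_lane: int, turn_lane: int,
                 dist_to_turn_mi: float, buggy: bool) -> int:
    """Lane the planner targets; lanes numbered 0 (leftmost) upward."""
    step = 1 if turn_lane > current_lane else -1 if turn_lane < current_lane else 0
    if dist_to_turn_mi > 0.5 and buggy:
        step = -step  # the speculated left/right swap far from the turn
    return current_lane + step

# Car is in lane 1; the upcoming left turn needs lane 0.
print(desired_lane(1, 0, 0.75, buggy=True))   # moves away from the turn lane
print(desired_lane(1, 0, 0.25, buggy=True))   # then back toward it
```

A sign flip like this matches the symptom exactly: the detour only happens beyond some distance threshold, and the car always corrects itself closer to the turn.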
 
I don't generally use speed offset with v11; if anything I turn the speed down a little to reduce its tendency to go around other cars. Sometimes I turn it up in light or medium traffic if everyone is already going above the limit.
Interesting to know. As I mentioned, I used to have that same issue, and it was because of the positive speed offset: it would swap lanes and overtake, but then get stuck there because there was now too much traffic in the right lanes to get back to the appropriate rightmost lane for the exit/turn. The last experience I had like that was back in the September/October 2023 time frame.
 
And so it begins:
AI DRIVR is not a regular customer. He is an influencer who lives in SF and is friends with Omar. In fact, he used to work for Tesla in the early days. Omar was going to email the "team" asking for AI DRIVR to get v12.

Still a great sign, but we all must remain patient. If we get it before or by the end of March, that will be fantastic and fast.
 