The 'beta' label has no actual meaning and its removal does not impact the usage or legal state of the product in any way. Tesla keeps those designations solely as a way to caution the user so they remain vigilant while using them. There is no reason to get worked up over either its removal or retention.

Besides, once V12's rate of improvement begins to tail off, Tesla will abandon it in favor of some other new architecture for V13.
Fully agreed. Furthermore, I have always felt (assumed) that "beta" was the legal department's "just in case" audible play, should things go wrong.
 
There's no theoretical limit on V12's approach, though. If end-to-end doesn't work, then it's unclear what approach will solve the problem.

The actual limits to V12 are hardware-related.
This is clearly false. There are well-known limits to what one can do with current machine learning practices.

The main limitation is in training-set curation (one cannot have a complete and up-to-date dataset of "the world"). The limitation is definitely not in inference hardware.
 
The 'beta' label has no actual meaning and its removal does not impact the usage or legal state of the product in any way. Tesla keeps those designations solely as a way to caution the user so they remain vigilant while using them. There is no reason to get worked up over either its removal or retention.

Besides, once V12's rate of improvement begins to tail off, Tesla will abandon it in favor of some other new architecture for V13.
I see the burying of the Beta label as more of a consumer-facing thing that allows the system to be portrayed as more capable than it really is: nothing is locked out, and you have to go consult the manual to find out which conditions and situations you're not supposed to use it in. But I doubt many are reading the manual in any detail, and the Autopilot section of the manual seems to be constantly evolving, so you'd need to stay up to speed with all the changes.

In terms of impacting usage, I think it does, because a non-Beta system would actually, as an example, lock out Autosteer when not on controlled-access highways, hand control back when approaching winding roads with sharp curves, construction zones, etc.



Personally, I'm just waiting for v14 beginning-to-beginning neural nets.
 
Exactly - at best it's like saying "somewhere to the north, eventually" when someone asks where we're going, yet everyone keeps posting, reposting and perseverating on these labels like they have actual meaning.
Well ... it's also something to occupy the mind and fill the time while waiting.

I mean... some religious groups have waited hundreds or even thousands of years for an event to occur!
 
People jump to the conclusion that beta equals L2 and that non-beta would equal autonomy. The label "beta" has zero meaning in this context.

FSD City Streets will most likely be L2 for years to come, beta label or not.
Agreed - if for no other reason than the regulatory, insurance and liability issues will need to be worked out before any company will be willing to commit.
 
if end-to-end doesn't work, then it's unclear what approach will solve the problem.

How do you define "solve the problem"? I think how we define "solved autonomous driving" will greatly affect whether we view an approach as working or not.

I don't think autonomous driving will ever be perfect. Heck, human drivers are far from perfect. And we cannot expect autonomous cars to never be in an accident, since sometimes accidents are caused by other drivers and are not the fault of the AV. So if we define autonomous driving as needing to be perfect, or never getting into an accident, it will never be 100% solved. There will likely always be something we can improve in the AV; in fact, one advantage of AVs is that they never stop learning and improving over time, so "solved" is an ongoing process. Rather than talking about "solved", I think it is better to talk about when the autonomous driving is good enough to be deployed without supervision in a given ODD.

When deploying AVs, I think there are a few important goals:
1) the autonomous driving should be unsupervised.
2) the autonomous driving should be safer than human drivers (to be defined).
3) the ODD should be useful (to be defined).
4) the autonomous driving should be commercially available and affordable.

So maybe we could say that when we achieve these 4 goals together in the same product, autonomous driving is "solved"?
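
To make that concrete, here is a toy sketch that encodes the four goals as a simple gate. All names and thresholds are mine, purely for illustration:

```python
from dataclasses import dataclass

# Toy encoding of the four deployment goals above as a checklist.
# Every field name and threshold here is invented for the example.
@dataclass
class DeploymentGoals:
    unsupervised: bool            # 1) no human supervision required
    safety_vs_human: float        # 2) e.g. 2.0 = twice as safe as the human baseline
    odd_is_useful: bool           # 3) the ODD covers trips people actually take
    commercially_available: bool  # 4) you can actually buy or hail it

    def solved(self, min_safety: float = 1.0) -> bool:
        return (self.unsupervised
                and self.safety_vs_human >= min_safety
                and self.odd_is_useful
                and self.commercially_available)

# Unsupervised and safe, but not purchasable yet -> not "solved" by this definition.
print(DeploymentGoals(True, 2.0, True, False).solved())  # False
```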

With that said, I don't think there is a magic bullet that will "solve" the problem of autonomous driving. I think what will end up solving the problem will be a combination of many different approaches and a lot of hard work and perseverance. And I believe we should use whatever approach works to solve a part of the problem: if end-to-end video training allows us to efficiently scale a generalized driving policy, then we should do that. If sensor fusion (cameras, radar and lidar) makes perception more reliable in adverse conditions, then we should do that. If HD maps make the AV safer by providing useful road info that the perception stack could not get on its own, then we should do that too.

Ultimately, I think solving autonomous driving will be a very long grind because of the infamous long tail of edge cases. We will just need to keep grinding away, solving problems, fixing edge cases, and making the AI better, until it is eventually "good enough".

Lastly, I do think your statement might be a bit short-sighted because it implies that end-to-end is the only way to solve autonomous driving. There are other approaches that might work too. Also, there is still a lot more to learn about ML; in fact, we are discovering new ML techniques all the time. So even if E2E does not "solve" autonomous driving, there might be some approach we have not discovered yet that solves it in the future. After all, people used to think that we could solve autonomous driving by just hand-coding perception and planning, until we realized that we needed ML. Then we thought we could solve it with separate NN modules for perception, prediction and planning. That arguably got us much closer, but the AI is not quite smart enough, so now people think that end-to-end might be the final piece of the puzzle. But who is to say that there isn't some other piece we are still missing that we have not discovered yet?
 
Lastly, I do think your statement might be a bit short-sighted because it implies that end-to-end is the only way to solve autonomous driving.

At this point, it's the only reasonable way to solve generalized autonomy. In hindsight, human heuristics would never capture all the nuances in all the different locales. You'd spend all day tweaking heuristics while messing up others.
 
At this point, it's the only reasonable way to solve generalized autonomy. In hindsight, human heuristics would never capture all the nuances in all the different locales. You'd spend all day tweaking heuristics while messing up others.

It is not binary. There are not just 2 choices: all end-to-end or all human heuristics. There is also the modular NN approach, which uses all NNs but in a different architecture than end-to-end. There can also be architectures that use a combination of NNs and heuristics. Nobody is arguing for all human heuristics; that is a strawman. At this point, I think everyone agrees that ML is key to solving autonomous driving. The question is what the right architecture is.
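
Just to make "not binary" concrete, here is a toy sketch of the three shapes. All the nets below are random stubs I made up so the example runs; this is nobody's actual stack:

```python
import numpy as np

# Toy stand-ins so the sketch runs; none of this is any vendor's real stack.
rng = np.random.default_rng(0)
def big_net(frames):          return rng.standard_normal(2)       # [steer, accel]
def perception_net(frames):   return rng.standard_normal((5, 4))  # 5 objects, 4 features
def prediction_net(objects):  return objects + 0.1                # crude motion forecast
def planning_net(objects, futures): return np.array([0.05, 0.3])  # [steer, accel]

def end_to_end(frames):
    """One big net: pixels in, controls out."""
    return big_net(frames)

def modular_nn(frames):
    """Separate NNs with hand-defined interfaces between them."""
    objects = perception_net(frames)
    futures = prediction_net(objects)
    return planning_net(objects, futures)

def hybrid(frames):
    """NNs for the hard parts, explicit heuristics where rules are easy to state."""
    controls = modular_nn(frames)
    return np.clip(controls, [-0.5, -3.0], [0.5, 2.0])  # heuristic safety envelope

frames = rng.standard_normal((8, 64, 64, 3))  # stand-in for a short video clip
print(end_to_end(frames), modular_nn(frames), hybrid(frames))
```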

And end-to-end has its own challenges too. You need to get over a billion parameters just right, and every time you add more data, the training might tweak one parameter the right way but another parameter the wrong way. So fixing an issue without causing a regression somewhere else is a challenge with end-to-end.
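
This is also why, as far as I understand it, teams keep a frozen scenario suite and diff every retrain against it. A minimal sketch of that idea, with all scenarios and models invented:

```python
# Illustrative only: diff a retrained model against a frozen scenario suite
# to catch the "fixed one thing, broke another" problem described above.

def evaluate(model, scenarios):
    """Return {scenario_name: passed} for a frozen regression suite."""
    return {name: model(inputs) == expected
            for name, (inputs, expected) in scenarios.items()}

def regressions(old_model, new_model, scenarios):
    old, new = evaluate(old_model, scenarios), evaluate(new_model, scenarios)
    return [name for name in scenarios if old[name] and not new[name]]

# Toy models and scenarios: the new model fixes one case but breaks another.
scenarios = {"unprotected_left": (1, "yield"), "school_zone": (2, "slow"),
             "merge": (3, "zipper")}
old = {1: "go", 2: "slow", 3: "zipper"}.get        # fails unprotected_left
new = {1: "yield", 2: "slow", 3: "lane_keep"}.get  # fixes it, breaks merge

print(regressions(old, new, scenarios))  # ['merge']
```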
 
It is not binary. There are not just 2 choices: all end-to-end or all human heuristics. There is also the modular NN approach, which uses all NNs but in a different architecture than end-to-end. There can also be architectures that use a combination of NNs and heuristics. Nobody is arguing for all human heuristics; that is a strawman. At this point, I think everyone agrees that ML is key to solving autonomous driving. The question is what the right architecture is.

And end-to-end has its own challenges too. You need to get over a billion parameters just right, and every time you add more data, the training might tweak one parameter the right way but another parameter the wrong way. So fixing an issue without causing a regression somewhere else is a challenge with end-to-end.

Well, obviously there will always be some human heuristics involved, but when it comes to normal driving decisions and planning, depending mostly on human heuristics isn't scalable for general autonomy.
 
It is not binary. There are not just 2 choices

Well, obviously there will always be some human heuristics involved, but when it comes to normal driving decisions and planning, depending mostly on human heuristics isn't scalable for general autonomy.
Right now, we do not know how the brain works; we do not know how to mimic human intelligence, reasoning or decision making in a machine. We do not know how to make these systems safe enough for general autonomy. We do not know how to make them explainable.

We do know that, even though "Transformers" and "backpropagation" are useful and perhaps even transformational technologies, they are likely not the answer to any of the above.

Currently there is no known path to solve general self-driving (L5). We currently use brute-force techniques to make these systems "work". There is likely no amount of training data that will suffice to make L5 happen using the techniques of today.

Why does L4 work, then? Because you can validate the system in a limited ODD and compensate for NN deficiencies using HD maps, high-fidelity physical measurements, and an insane amount of incredible engineering in both hardware and software. Just getting the drop-off experience acceptable in a robotaxi context currently takes years of work by a silly number of engineers.
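
In code terms, you can think of a limited-ODD L4 system as being wrapped in a gate like this. The fields and thresholds here are mine, purely illustrative:

```python
from dataclasses import dataclass

# Invented illustration of an ODD gate: the system only engages when every
# validated constraint holds, which is what makes limited-ODD L4 tractable.
@dataclass
class Conditions:
    inside_geofence: bool   # within the HD-mapped service area
    weather: str            # e.g. "clear", "light_rain", "snow"
    speed_limit_mph: int
    is_daytime: bool

def odd_allows_engagement(c: Conditions) -> bool:
    return (c.inside_geofence
            and c.weather in {"clear", "light_rain"}
            and c.speed_limit_mph <= 45
            and c.is_daytime)

print(odd_allows_engagement(Conditions(True, "clear", 35, True)))  # True
print(odd_allows_engagement(Conditions(True, "snow", 35, True)))   # False: outside ODD
```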

Discussing the short-term viability of specific NN architectures makes it sound like you think we already have the building blocks in place for L5 or AGI. This problem will most likely not be solved by some magic "architecture".

To me, that discussion is like debating what kind of electroshock treatment is needed to cure cancer.

Self-driving is like any other extremely hard engineering problem (like getting a plane to fly). You don't get from the Wright Flyer to a "jet-engine-propelled 747 that hardly ever crashes" without 50 years of prior engineering work. There is no magic bullet.
 
Humans can't drive everywhere perfectly. It's just not possible. Drop a Nebraska guy into New Delhi or Bangkok and it'll take him some time to learn how to drive on their roads.

Perhaps there will be regional NNs trained for that area. As you move into those areas, the car has to download the nets for that area. 🤔
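
Something like this, perhaps (the region keys, file names and fallback scheme are pure speculation on my part):

```python
# Pure speculation: a car selects a region-tuned policy net as it crosses
# into a new area, falling back to a global net where no regional one exists.
REGIONAL_NETS = {              # region key -> model artifact (paths invented)
    "us-midwest": "nets/us_midwest_v3.onnx",
    "in-delhi":   "nets/in_delhi_v1.onnx",
    "th-bangkok": "nets/th_bangkok_v2.onnx",
}
GLOBAL_NET = "nets/global_v7.onnx"

def net_for_region(region: str) -> str:
    """Pick the policy net to download for the car's current region."""
    return REGIONAL_NETS.get(region, GLOBAL_NET)

print(net_for_region("in-delhi"))      # nets/in_delhi_v1.onnx
print(net_for_region("is-reykjavik"))  # falls back to nets/global_v7.onnx
```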
This is definitely the case for me. We travel quite a bit to new areas we've never been to before, and I make all kinds of dumb mistakes and last-minute lane changes.

In the area where I live, I tend to increase the max speed a little here and there when FSD slows down. In unfamiliar areas, I tend to lower the max speed, or even disengage due to discomfort, far more often.

I'm sure I'm not alone with respect to this.
 
Right now, we do not know how the brain works; we do not know how to mimic human intelligence, reasoning or decision making in a machine. We do not know how to make these systems safe enough for general autonomy. We do not know how to make them explainable.

We do know that, even though "Transformers" and "backpropagation" are useful and perhaps even transformational technologies, they are likely not the answer to any of the above.

Currently there is no known path to solve general self-driving (L5). We currently use brute-force techniques to make these systems "work". There is likely no amount of training data that will suffice to make L5 happen using the techniques of today.

Why does L4 work, then? Because you can validate the system in a limited ODD and compensate for NN deficiencies using HD maps, high-fidelity physical measurements, and an insane amount of incredible engineering in both hardware and software. Just getting the drop-off experience acceptable in a robotaxi context currently takes years of work by a silly number of engineers.

Discussing the short-term viability of specific NN architectures makes it sound like you think we already have the building blocks in place for L5 or AGI. This problem will most likely not be solved by some magic "architecture".

To me, that discussion is like debating what kind of electroshock treatment is needed to cure cancer.

Self-driving is like any other extremely hard engineering problem (like getting a plane to fly). You don't get from the Wright Flyer to a "jet-engine-propelled 747 that hardly ever crashes" without 50 years of prior engineering work. There is no magic bullet.

We don't need some special reasoning for self-driving. If you look at the reasons why people are disengaging FSD, it's not because of some complex reasoning process.

This is what all the naysayers focus on: perfection. It's not about perfection; it's about 4-10x human safety.

Sure, it's possible we'll never get there (4-10x safety) with end-to-end, but we sure as heck won't get there with heuristics.
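
To put rough numbers on "4-10x" (the human baseline below is an assumed round figure for illustration, not a sourced statistic):

```python
# Back-of-envelope math for the "4-10x human safety" target above.
# ASSUMPTION: humans average roughly one crash per 500,000 miles;
# the real figure varies by source, this is illustrative only.
human_miles_per_crash = 500_000

for factor in (4, 10):
    required = human_miles_per_crash * factor
    print(f"{factor}x safer -> about one crash per {required:,} miles")
# 4x safer -> about one crash per 2,000,000 miles
# 10x safer -> about one crash per 5,000,000 miles
```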
 
We don't need some special reasoning for self-driving. If you look at the reasons why people are disengaging FSD, it's not because of some complex reasoning process.

This is what all the naysayers focus on: perfection. It's not about perfection; it's about 4-10x human safety.

Sure, it's possible we'll never get there with end-to-end, but we sure as heck won't get there with heuristics.
I don't think you understood any of my points. At all. Did I even say anything about "perfection" or "safer than a human"? For generalised self-driving, most researchers agree we need plenty of research breakthroughs in both core NN learning performance and safety/explainability.

Stop listening to the ML marketing and look at the problem from a traditional engineering point of view. We're not even close to general autonomy; I don't think it will happen before 2040. ML is not magic. It's a tool for when a rule-based approach doesn't work, and it's harder than traditional engineering in many ways, such as the lack of explainability and the difficulty of "patching" individual issues and validating the impact of a change in the training set. One needs to look at both the pros and the cons.

In fact, if we have autonomous consumer vehicles at highway speed on a dry limited access highway during daytime this decade, I'd be positively surprised.
 