Is Tesla closer than we think?

Anyone could decide to work for Tesla, get into their FSD development program, and become an FSD validator. Actually, you don't even need to work for Tesla; I believe a few people found that out the hard way after turning their car into a tin can against an 18-wheeler.



Not just anyone can get a ride in a Waymo....

Not really. Tesla doesn't have L4. It asks the driver to intervene or take over immediately, so it is still L2.

That's like saying my old ass Ford Ranger is L4 because I can turn on cruise control and hop in the back and it will still roll forward.
 
  • Disagree
Reactions: mikes_fsd
Not really. Tesla doesn't have L4. It asks the driver to intervene or take over immediately, so it is still L2.

That's like saying my old ass Ford Ranger is L4 because I can turn on cruise control and hop in the back and it will still roll forward.
Based on what is publicly released, you'd be right. Based on what Tesla has done at a specific time in a specific place, like Waymo, probably not.
 
  • Disagree
Reactions: Demonofelru
Going back to the subject of this thread, how does that mean that Tesla is closer than we think? For all we know, Tesla could be where Waymo was years ago in its infancy, hiring its first drivers for testing L4.
Depends on who "we" is. I've already said elsewhere that I doubt wide-scale Level 4-5 will ever be released to the public.

Waymo is not released to the public, in my opinion, and it certainly isn't wide scale.
 
  • Disagree
Reactions: Demonofelru
Depends on who "we" is. I've already said elsewhere that I doubt wide-scale Level 4-5 will ever be released to the public.

Waymo is not released to the public, in my opinion, and it certainly isn't wide scale.

OK, it kinda sounded like you and mspisars were on the same page here. While I agree that wide-scale L4 will not be released anytime in the next couple of years, it's hard to say that Waymo is not released to the public.

You can argue that it's geofenced, but it's a public product that anybody from the public can ride in. It's not like it's a rollercoaster driving on some empty tracks. It still drives on public roads with unpredictable human drivers, where Waymo is confident enough to take over liability from the customer. That's way more than you can say of Tesla.
 
Seriously - you are better than this.

We have talked about this - over and over again.



You make the rookie mistake of comparing along a single dimension (features), when there are three separate dimensions.

Well, my post was throwing your logic back at you to show you how flawed it is. I guess you missed it.

What we know is that Tesla has deployed L2 everywhere. Waymo has deployed L4 but only in limited geofenced areas so far. They are testing in other areas but it is not reliable enough yet. That's understandable since Waymo has not solved FSD yet. Nobody has.

Tesla can deploy L2 everywhere because L2 is a lower bar. L2 does not have to work with no human intervention. Tesla can deploy L2 that is still beta because the driver is expected to supervise. Waymo does not do L2; Waymo only does L4. This means that Waymo could have FSD that works everywhere, but they will only deploy it to the public when it is reliable and safe enough to be driverless.
 
You cannot take ownership of Waymo's FSD suite. Once Tesla has FSD Level 4, they will not sell it.

I disagree that Waymo has deployed Level 4. Operating on a test track isn't deploying. Tesla has likely done the same sort of testing, and its release is more wide-scale due to its nature.

They claimed to have done this years ago. It's all faked. The best they have is what the FSD Beta people got, and it's nowhere near L4.
 
  • Like
Reactions: diplomat33
I disagree that Waymo has deployed Level 4. Operating on a test track isn't deploying. Tesla has likely done the same sort of testing, and its release is more wide-scale due to its nature.

Come on now! A test track is a closed circuit that is like 1 mile long. A commercial driverless ride-hailing service open to the public 24/7 in a 50 sq mi area is a lot more than a test track. Waymo has deployed their L4 on public roads with other road users. That is not a test track.

And Tesla has not done a driverless robotaxi service anywhere! So no, you can't say that Tesla has done the same sort of thing as Waymo. What Tesla has deployed widescale is L2.

There is a huge difference between deploying real robotaxis that are carrying passengers around a 50 sq mi area 24/7 with no driver (Waymo) and deploying a driver assist everywhere that can do some driving tasks but requires a human to take over (Tesla).

And by definition, L4 is any FSD in a limited ODD. So you might think 50 sq mi is too small but it still counts as L4. So yes, a driverless ride-hailing service in a 50 sq mi area counts as deploying L4. So yes, Waymo has deployed L4.
 
Come on now! A test track is a closed circuit that is like 1 mile long. A commercial driverless ride-hailing service open to the public 24/7 in a 50 sq mi area is a lot more than a test track. Waymo has deployed their L4 on public roads with other road users. That is not a test track.

And Tesla has not done a driverless robotaxi service anywhere! So no, you can't say that Tesla has done the same sort of thing as Waymo. What Tesla has deployed widescale is L2.

There is a huge difference between deploying real robotaxis that are carrying passengers around a 50 sq mi area 24/7 with no driver (Waymo) and deploying a driver assist everywhere that can do some driving tasks but requires a human to take over (Tesla).

And by definition, L4 is any FSD in a limited ODD. So you might think 50 sq mi is too small but it still counts as L4. So yes, a driverless ride-hailing service in a 50 sq mi area counts as deploying L4. So yes, Waymo has deployed L4.

You can still keep posting and being wrong if you like.
 
I think many folks are circling around the essence and substance of this debate, without touching on the fundamental things that really matter.




Why Tesla could be closer than many think (per the thread title):

It is a truism in deep learning that data, compute, and neural network design are the three factors that determine performance. To argue, in the absence of direct quantitative evidence, that one company's NNs have dramatically better performance than another company's, one must construct a plausible explanation of how that could be the case based on these three factors.

In the autonomous vehicle application in particular, we know that, for major, multi-billion-dollar companies, training compute is by far the least significant constraint of the three. Abundant cheap compute exists in the cloud and these companies can set up their own GPUs or ASICs for training.

Neural network design is the most unpredictable and mysterious of the three factors. However, there is good reason to believe it is a much less significant source of competitive advantage among major companies than data.

As Karpathy stated in the tweet I posted many pages back, most cutting-edge research in AI is conducted by either a) academic labs like Mila or b) industry labs like DeepMind. Counterintuitively, the industry labs publish a huge amount of research that is replicable by other labs based on reading their papers and often even open source their research.


This is not because these corporations are simply generous, but largely because there is a very powerful ethos among AI researchers of publishing replicable papers. If Alphabet suddenly forbade DeepMind from publishing their research, there would no doubt be an exodus of researchers from DeepMind to FAIR or somewhere else that still allowed publishing.

In other words, AI researchers as a subculture are ideologically committed to open science, and this puts pressure on companies that want to do AI research to allow their researchers to do open science.

This is why the primary competitive advantage in the current landscape is data. Compute is abundant and cheap, AI research is largely open, but data is relatively scarce, expensive, and can be jealously hoarded.
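To make that a bit more concrete, here is a toy illustration (the constants below are invented, not measurements of any real driving model): empirical scaling-law studies find that test error tends to fall off as a power law in dataset size, which is why an order-of-magnitude data advantage can matter more than incremental gains in compute or architecture.

# Toy illustration only: test error modeled as a power law in dataset size,
# error ~ c * N**(-alpha). The constants are invented for illustration and are
# not measurements of any real driving-perception model.
def toy_test_error(num_examples: float, c: float = 10.0, alpha: float = 0.35) -> float:
    return c * num_examples ** (-alpha)

for n in [1e5, 1e6, 1e7, 1e8]:
    print(f"{int(n):>12,} examples -> toy error {toy_test_error(n):.3f}")

Under a curve like this, going from one million to one hundred million examples cuts the toy error by roughly a factor of five; the exact exponent varies by task, but the qualitative point stands.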




Why Waymo’s L4 is not automatically more impressive than Tesla’s L2:

It should not be surprising — it should be obvious — that L4 in a highly constrained environment with lots of crutches is a much easier problem than L4 in the wild, with basically no constraints.

For Tesla to achieve human-level L4 driving with their FSD software would be a vastly larger technical achievement than getting to human-level L4 driving within Waymo’s constraints.

It’s not clear to me which is more difficult: making a driverless robot work in Waymo’s playpen or making an L2 robot work in the wild. It’s possible they’re about equally difficult.

What we cannot accept as sound reasoning is that L4, irrespective of constraints, is better or more impressive or more advanced than L2 in the wild simply because 4 is a higher number than 2. That is folly.

Waymo’s technology could not support an L2 system in the wild because it depends on crutches that Waymo only has within its playpen. If you stripped away the crutches and forced Waymo employees to re-develop the software for L2, I reckon you’d (eventually) end up with something comparable to FSD Beta.

Conversely, if you took Tesla’s technology and built a playpen for it in Arizona with all the same crutches Waymo uses, I bet you’d eventually end up with something comparable to Waymo’s driverless proof of concept.

If anything is going to break through the challenges in perception, prediction, and planning that continue to confound AVs, it will be the application of new approaches or new advances in old approaches — such as 4D vision, multi-task learning, self-supervised learning, imitation learning, and reinforcement learning — at the million-vehicle scale, with thoughtful data curation (using things such as active learning and shadow mode).
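To give a sense of what "thoughtful data curation" with active learning and shadow mode might look like in practice, here is a minimal, hypothetical sketch of fleet-side trigger logic: upload a clip only when the perception model is uncertain, or when the shadow-mode planner disagrees with what the human driver actually did. None of the names or thresholds below come from Tesla; they are assumptions for illustration.

# Hypothetical sketch of fleet-side data curation via active learning and
# shadow mode. The names and thresholds are illustrative, not Tesla's pipeline.
from dataclasses import dataclass

@dataclass
class FrameResult:
    detection_confidence: float   # lowest confidence among detected objects
    shadow_steering: float        # what the shadow-mode planner would have done (radians)
    human_steering: float         # what the human driver actually did (radians)

def should_upload(frame: FrameResult,
                  uncertainty_threshold: float = 0.4,
                  disagreement_threshold: float = 0.15) -> bool:
    """Flag a clip if the model is uncertain (active learning) or if the
    shadow-mode plan disagrees with the human driver (shadow mode)."""
    uncertain = frame.detection_confidence < uncertainty_threshold
    disagrees = abs(frame.shadow_steering - frame.human_steering) > disagreement_threshold
    return uncertain or disagrees

# Example: confident perception, but the shadow planner wanted a noticeably
# different steering angle than the human chose, so the clip gets flagged.
print(should_upload(FrameResult(0.9, 0.30, 0.05)))  # True

The design point is that the fleet filters itself: the cheap, boring miles never leave the car, and only the interesting disagreements get labeled and trained on.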

Solving L4 in the wild with this data is a fundamentally different problem — a fundamentally easier one — than solving L4 in the wild with only the data you can get from a few hundred vehicles. It requires neural networks to generalize much less. It trains them with an amount of data commensurate with what we’ve seen in successful AI projects.

Waymo has driven less than 1,000 years in its totality. Artificial agents that play modern, complex 3D games like StarCraft and Dota are trained on a different order of magnitude of experience: in the ballpark of 100,000 years, rather than 1,000.
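The rough arithmetic behind that comparison, assuming Waymo's publicly reported total of roughly 20 million autonomous miles (early 2020) and an average speed of around 25 mph (both inputs are approximations, not exact figures):

# Back-of-the-envelope estimate of Waymo's cumulative driving experience.
# Inputs are approximations: ~20 million autonomous miles at ~25 mph average.
miles_driven = 20_000_000
avg_speed_mph = 25
hours_per_year = 24 * 365

driving_years = miles_driven / avg_speed_mph / hours_per_year
print(f"~{driving_years:.0f} years of continuous driving")  # on the order of 100 years

# Game-playing agents (e.g. OpenAI Five) reported training on the equivalent of
# tens of thousands of years of play, i.e. roughly a thousand times more experience.
print(f"ratio to ~100,000 years of game experience: ~{100_000 / driving_years:,.0f}x")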

This is why we have to look beyond shallow comparisons between Waymo and Tesla. It is too simplistic to say Waymo has more advanced AI because 4 is a bigger number than 2. We have to look at the size of the problem — its scope, its constraints, its crutches, and also the resources, i.e. the data, that a company can use to solve it.



Are you trolling?

No, I’m being tongue-in-cheek to illustrate the error of an argument I’m objecting to. Sarcastically applying an interlocutor’s argument to derive a silly conclusion is a time-honoured tactic in argumentation.

If we go along with the premise that L4 is unconditionally better than L2, we end up with absurd and obviously false conclusions. So, we must reject the premise.

The bar for L4 is farcically low. The impressiveness of any autonomous driving technology is not in its SAE Level alone. It has to be judged based on multiple criteria, including environmental, geographical, temporal, and meteorological scope, as well as statistical success rate (or failure rate) at driving tasks within that scope.
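One way to make that multi-criteria view concrete is a small profile structure like the hypothetical sketch below; the field names and example values are mine, purely for illustration, and not real measurements of either system.

# Hypothetical profile for judging an autonomous driving system along several
# criteria rather than by SAE level alone. Fields and values are illustrative.
from dataclasses import dataclass

@dataclass
class DrivingSystemProfile:
    sae_level: int
    geographic_scope_sq_mi: float           # area the system operates in
    operates_at_night: bool
    operates_in_rain: bool
    miles_per_critical_intervention: float  # measured within its own scope

    def summary(self) -> str:
        return (f"L{self.sae_level}, {self.geographic_scope_sq_mi:,.0f} sq mi, "
                f"night={self.operates_at_night}, rain={self.operates_in_rain}, "
                f"~{self.miles_per_critical_intervention:,.0f} mi/intervention")

# Two made-up profiles: the point is that they cannot be ranked by the
# sae_level field alone.
print(DrivingSystemProfile(4, 50, True, False, 30_000).summary())
print(DrivingSystemProfile(2, 3_000_000, True, True, 100).summary())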
 
I recommend waiting unless you are either happy with the current feature set for $10,000 or the money just isn’t a big deal.

I suspect if Tesla rolls something out in the short term it’s going to be a disaster and get pulled back. Even if it ships successfully, do your own calculation on how much you would be paying per self-driving ride and see if that’s worth it to you.

I’m on my second and third Teslas right now, and FSD has been “just around the corner” since my first one in mid-2016.
Your points are well said. And $10,000 is a lot of money for a future capability!
 
I think many folks are circling around the essence and substance of this debate, without touching on the fundamental things that really matter.

I’m sorry you think L4 in a controlled environment is less impressive than widespread L2, but you’re wrong.

If you look at disengagements per mile, Tesla's L2 rate over potentially thousands of miles is much higher than Waymo's. That means Tesla's system is designed to fail in ways in which Waymo's will not. But Tesla does not care, because they have no liability. If someone crashes, it's on them. If the car manages to take a turn correctly, yay! Otherwise, sucks to be you in your totaled Tesla.
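For anyone who wants to check that kind of claim themselves, the arithmetic is just miles divided by disengagements; the inputs below are placeholders roughly in the shape of public CA DMV report entries and hand-tallied beta drive logs, not official Tesla or Waymo figures.

# Simple helper for comparing disengagement rates. Example inputs are placeholders.
def miles_per_disengagement(miles: float, disengagements: int) -> float:
    return miles / max(disengagements, 1)

print(miles_per_disengagement(629_000, 21))  # e.g. a CA DMV-style annual report entry
print(miles_per_disengagement(1_000, 50))    # e.g. a hand-tallied beta drive log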
 
I think many folks are circling around the essence and substance of this debate, without touching on the fundamental things that really matter.

This is a really good post - well written and insightful.
 
Do you transform into an alien when you go from city to city? Does your car transform into a UFO and levitate? Do you walk backwards like in Tenet? If your logic is correct, you should go to Phoenix; Waymo's perception and prediction system should fail and run you over or rear-end you. If your statement were true, then all the millions of tourists who fly or drive into Phoenix would be in danger of being run over or rear-ended, because Waymo's perception and prediction would be brittle, not general, and would instantly fail.

Additionally, Huawei would not be ready to release a door-to-door advanced autopilot system that works anywhere in China in 6 months using just 500 test cars in development, in an environment that is orders of magnitude harder to drive in than the US, at an MPI (miles per intervention) that is up to 500x higher than FSD Beta's.

The same is the case for Mobileye: they wouldn't be ready to release a door-to-door system that was developed in Israel, then works in Germany and Detroit, and is about to be deployed all over China in a few months.

Again, it's about LOGIC.
1 + 1 will always be 2.
Your logic doesn't check out.

This is like reading Time Cube. 😆
 