Have you heard of Moravec's Paradox? Explain why it exists please.
> Moravec's paradox is the observation in artificial intelligence and robotics that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources.

The tasks solved by sensorimotor and perception skills have been evolutionarily essential since the time of the trilobites, so their implementation in biological organisms has been extensively refined to be effective and biologically cheap through clever analog physical computation.

Human reasoning is a late evolutionary development, so it runs generic algorithms on a large cerebrum in an energy-inefficient way.

The technological equivalent is the comparison with early analog computers (there were analog machines that directly represented, for example, aircraft dynamics): if those were implemented directly in low-cost analog CMOS silicon, their power usage would be far lower than running a digital simulation on a generic von Neumann computer built on the same semiconductor process.
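To make the comparison concrete, here is a throwaway sketch of the digital side: the kind of second-order dynamics an analog computer solves continuously with a couple of op-amp integrators has to be stepped forward sample by sample on a general-purpose machine (a damped oscillator standing in for "aircraft dynamics", purely illustrative):

```python
# Toy second-order system: x'' + 2*zeta*wn*x' + wn^2*x = 0
# An analog computer integrates this continuously with a couple of op-amp
# integrators; a digital von Neumann machine steps it forward sample by sample.
wn, zeta = 2.0, 0.1       # natural frequency (rad/s) and damping ratio
dt, t_end = 1e-3, 10.0    # timestep and simulated duration (seconds)
x, v = 1.0, 0.0           # initial displacement and velocity

steps = int(t_end / dt)
for _ in range(steps):
    a = -2 * zeta * wn * v - wn ** 2 * x   # acceleration from the current state
    v += a * dt                            # explicit Euler integration
    x += v * dt

print(f"final x = {x:.4f} after {steps} discrete multiply/add steps")
```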
 
From Tesla's CVPR presentations about the general world model / vision foundation model, these seem to be trained generally rather than for specific control behaviors, so Tesla can use the giant amount of video available to it without the extra step of preprocessing which videos to use. The example of predicting future video based on past video seems like a relatively straightforward self-supervised "pre-training" step for developing a general world model.
True, though it's a clever step in a direction that doesn't obviously lend itself to making an autonomous vehicle, particularly L3 and above. It might prove useful, but the gap from that to a product is wide.
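For concreteness, the "predict future video from past video" pre-training described in the quoted post boils down to self-supervised regression, where the targets are just later frames of the same clip, so no labels or curation are needed. A toy sketch (random tensors standing in for video, nothing Tesla-specific):

```python
import torch
import torch.nn as nn

# Toy next-frame predictor: given K past frames, predict the next frame.
# The "labels" are just later frames of the same clip, so any raw video
# can be used without manual annotation; that is the point of the quoted post.
K, H, W = 4, 64, 64
model = nn.Sequential(
    nn.Conv2d(K, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),          # predicted next frame
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    clip = torch.rand(8, K + 1, H, W)        # stand-in for a batch of video clips
    past, future = clip[:, :K], clip[:, K:]  # split each clip into context / target
    loss = nn.functional.mse_loss(model(past), future)
    opt.zero_grad()
    loss.backward()
    opt.step()
```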

I believe that for language models as well, the pre-training step uses multiple orders of magnitude more data to train the foundation model than the fine-tuning step that biases responses (or, in this case, control).
Again true, but the task to be solved by the eventual chatbot (generate novel relevant token sequences) is still nearly the same as the task solved by training the foundation model for an LLM. The fine-tuning is just that, slightly adjusting probability distributions, but the task remains the same.
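As a rough illustration of that scale gap, the objective is identical in both phases and only the data volume and learning rate change (the batch counts below are placeholders; real pre-training runs are many orders of magnitude larger than fine-tuning):

```python
import torch
import torch.nn as nn

# Same next-token objective in both phases; only data volume and learning rate differ.
vocab, dim = 256, 64
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))

def train(n_batches, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_batches):
        x = torch.randint(0, vocab, (32, 128))           # stand-in token sequences
        logits = model(x[:, :-1])                        # predict each next token
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, vocab), x[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

train(n_batches=1_000, lr=3e-4)   # "pre-training": huge generic corpus, long schedule
train(n_batches=10, lr=1e-5)      # "fine-tuning": tiny curated data, gentle updates
```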

Training a video world model is still a long way from directed autonomous driving toward a human-given goal. Synthesizing video is interesting for making synthetic movies, like a TikTok generator algorithm, but you still haven't solved the hard part: driving toward a directed goal while being very safe against extremely unlikely tail distributions of events.

Look at what commercial pilots have to do. Their training is heavy on the dangerous edge cases, which are, one hopes, quite different from ordinary everyday flying.

Here's an example of the difficulty of training an end-to-end policy from observed data. Suppose you did the same for aircraft: an ML model that watched 10,000 commercial flights and synthesized everything that needed to be done. It would get really good at pushback from the gate, safety demonstrations, typical autopilot routes, and good-weather landings.

How many dangerous situations would it see? Say maybe 5 go-arounds because of a potential runway obstruction in the training set. The problem is that the ML model has no idea the actual danger was a "runway obstruction"; it only had a few examples, and ML models are notorious for latching onto irrelevant parts of the huge bitstream in the data and making spurious correlations, because those are enough to solve the training examples. Maybe the go-arounds were all at a certain airport or two which had lights and signs in a certain place, or it was always a particular cargo or military base that did it.

The ML system might never learn that the trigger is actually any runway obstruction, or what a collision even is, something intuitive to humans, so it will happily slam into an obstruction at a different airport that didn't have the correlations it picked up on.
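A toy version of that failure mode, with hypothetical features and numbers: if every go-around in the training data happened at one airport, and the airport's distinctive lights and signage are an easier signal to latch onto than the obstruction itself, a simple classifier keys on the airport and misses an obstruction anywhere else:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical training set: every go-around happened at airport 2, and airport 2
# always had a runway obstruction, so "which airport" and "obstruction" are confounded.
at_airport_2 = (rng.random(n) < 0.05).astype(float)
obstruction = at_airport_2.copy()          # obstructions only ever seen at airport 2
go_around = obstruction.copy()             # the true cause of the go-around

# The obstruction itself is a weak, noisy signal in the observation;
# the airport's lights and signage are a clean, easy-to-latch-onto one.
obstruction_signal = obstruction + rng.normal(0, 1.0, n)
X = np.column_stack([obstruction_signal, at_airport_2])

clf = LogisticRegression().fit(X, go_around)
print("weights [obstruction, airport]:", clf.coef_[0])

# Deployment: a real obstruction at a different airport; the model under-reacts.
print("P(go-around | obstruction, other airport):",
      clf.predict_proba([[1.0, 0.0]])[0, 1])
# Merely being at airport 2 still triggers it, even with no obstruction.
print("P(go-around | no obstruction, airport 2):",
      clf.predict_proba([[0.0, 1.0]])[0, 1])
```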

A human pilot, on the other hand, understands that concept without a single training example, trains for it in simulation, and understands the dynamics of what might happen. Trained pilots have a strong mental model of aircraft aerodynamics and control systems, and with practice they intuitively understand and predict reactions; for example, they know they need to adjust performance limits on a hot day at a high-altitude airport, something the ML system might never have seen correlated with a dangerous example in its training set. (And this is why the recent Boeing MCAS fiasco was so bad: Boeing silently inserted something that violated the pilots' mental model of the aircraft's dynamics.)
 
Is that for commercial airlines with professional pilots, or does it include amateur pilots?

If it's the former, then it's quite distressing given the much more extensive training pilots receive.
Don't quote me, this was 10-15 years ago, from memory, after hearing someone else tell me about it from a podcast they heard. I'm probably totally wrong.

I'd delete that post if I could, too vague. Sorry.

Air travel has become a lot safer since then too.
 
TSLA had their earnings conference call tonight. There were several FSD questions. Elon says he's driving E2E regularly in Austin with no interventions. :)

There was a question about why FSD price dropped if it was progressing so well. Another question about expanding FSD territory as well as Robotaxis status. For the most part the answers were the same answers we've heard before.

But it got interesting when someone asked when TSLA would cover liability, given that Mercedes covers liability in some cases. Elon said it feels like TSLA is already covering liability given all the lawsuits. Maybe more than a few out-of-court settlements?

Q3 FSD miles driven didn't show an increased rate of growth. That makes sense to me!
 
Elon said it feels like TSLA is covering liability given all the lawsuits.
That was Elon wit. The lawsuits are predicated on the idea that Tesla is liable for the accidents - which is an L3 feature. He doesn't mean that Tesla IS covering liability, but that it FEELS LIKE everyone thinks that they should. Has Tesla lost any of those lawsuits on the basis of autonomy failing to do something?
 
The present situation looks to me like - We need a really big machine to make cars. That's a lot of work and cost? Well, let's design and build a machine that can design and build the manufacturing machine. Hey, they bought that! Let's do it again. We can abandon trying to write software to drive a car, let's write software that can write the software to drive a car. That'll kick the can on down the road for another decade. Just tell em it will be another two weeks or so.
 
That was Elon wit. The lawsuits are predicated on the idea that Tesla is liable for the accidents - which is an L3 feature. He doesn't mean that Tesla IS covering liability, but that it FEELS LIKE everyone thinks that they should. Has Tesla lost any of those lawsuits on the basis of autonomy failing to do something?
Or the costs of FSD lawsuits are comparable to the cost of Mercedes providing liability coverage.

Dollars to donuts TSLA uses out-of-court settlements, especially for ugly cases, and otherwise arbitration. They likely require a confidentiality agreement to collect any award. As they say, it's difficult to contractually waive negligence.
 
[attached screenshot]


I think Fred makes a really good point here. Elon seems to focus more on concept and less on execution. Elon's quote that "photons in, inputs out, some neural nets in between" is a classic example of this. Elon touts the idea of vision and e2e as the right approach but seems to gloss over execution which is the hardest part. I chuckle a bit at the "some neural nets in between" part because that is everything. The entire challenge of autonomous driving is in figuring out the "neural nets in between". It is the critical challenge that nobody has fully solved yet. But Elon just goes "some neural nets in between" like it is no big deal. And yes, show us data to show improvement over time.
 
IMHO, FSD will be beta for a few if not several years. One accident on a fully released Vxx, i.e. no longer beta, will be a feast for the lawyers!


Slapping the word "beta" on there has no actual legal relevance in a liability suit.

All the "you remain responsible at all times" disclaimers however DO- which would go away if they offered an L3 or higher system regardless of still calling it beta or not.
 
show us data to show improvement over time
The data Tesla has shared is cumulative miles with FSD Beta active, so it does show usage and perhaps indirectly improvement, but it is complicated by more vehicles added over time. Here's some estimates from the charts they've provided in various quarterly reports:

[chart: FSD Beta miles, 2023 Q3]


FSD Beta 10.x had been around 12M miles/month and jumped up to about 60M with 11.x, so adding single stack for highway definitely is an improvement overall. The bump around June closer to 68M was the wide release to HW3 vehicles and the most recent month bump could be from adding HW4 and price drop.

Assuming FSD Beta population is now around 450k vehicles, the average daily miles is just above 5 whereas national average is closer to 35. It'll be interesting to see if end-to-end 12.x will result in a larger jump than 10.x-to-11.x reflecting significant improvements in capability and usage.
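(The "just above 5" falls straight out of those numbers; a back-of-envelope check, assuming a ~30-day month and the ~450k-vehicle estimate:)

```python
# Back-of-envelope check of FSD Beta usage per vehicle per day
miles_per_month = 68e6     # recent monthly FSD Beta miles, eyeballed from the chart
vehicles = 450_000         # assumed FSD Beta fleet size
days = 30                  # assumed days per month

print(f"{miles_per_month / vehicles / days:.1f} miles per vehicle per day")  # ~5.0, vs ~35 nationally
```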
 
The data Tesla has shared is cumulative miles with FSD Beta active, so it does show usage and perhaps indirectly improvement, but it is complicated by more vehicles added over time. Here's some estimates from the charts they've provided in various quarterly reports:


FSD Beta 10.x had been around 12M miles/month and jumped up to about 60M with 11.x, so adding single stack for highway definitely is an improvement overall. The bump around June closer to 68M was the wide release to HW3 vehicles and the most recent month bump could be from adding HW4 and price drop.

Assuming FSD Beta population is now around 450k vehicles, the average daily FSD Beta miles is just above 5 whereas national average is closer to 35. It'll be interesting to see if end-to-end 12.x will result in a larger jump than 10.x-to-11.x reflecting significant improvements in capability and usage.

Thanks. But this data only measures usage. It does not measure reliability or safety.
 
Regarding the E2E NN magic in between being like a human: that is of course a gross simplification, since human memory involves emotions and context, updates on the fly, adapts to the unknown, and can communicate nonverbally with others on the road, for good or bad. And Elon saying a computer is never distracted ignores workload, SW instability, and bugs.

But equally challenging to date has been training those NNs; it sounds like Dojo is still slowly coming together. I think they mentioned only a 2x improvement from baseline? It would be nice to see a comparison to the original Dojo hockey-stick chart.
 
Thanks. But this data only measures usage. It does not measure reliability or safety.
I think that many users are quite aware of how terrible and unsafe FSD is (situationally), so they selectively limit their use of it.

So when FSD is not being used for a high percentage of daily driving miles, this suggests a low safety level (since there are many places it is unusable, safety issues being one reason for lack of utility), and the charts suggest it has not particularly improved over time.

So as a proxy I think it is sort of serviceable, though direct safety data would be far preferable. Tesla has never provided any usable data on the safety of FSD or AP, as we know. (It's fairly difficult to provide meaningful data, but I wish they would try. My sense is FSD may actually improve safety on the freeway, but I am not sure.)

so adding single stack for highway definitely is an improvement overall.
Maybe I am misunderstanding. This doesn't show any improvement, does it? It just means AP miles started being counted as FSD use, doesn't it? There is no way to demonstrate ANY increase in use of driver assist from these data, even though FSD does seem to be better than AP on the freeway (in my experience).

It's not like people suddenly started using AP/FSD way more when single stack was released (we don't have any way to judge how much increase there was, if any; overall usage might even have gone down, though I doubt it, and there's no way to know!). Sadly, Tesla fails to provide the info (which they definitely possess) to make such assessments.
 
[attached screenshot]


I think Fred makes a really good point here. Elon seems to focus more on concept and less on execution. Elon's quote that "photons in, inputs out, some neural nets in between" is a classic example of this. Elon touts the idea of vision and e2e as the right approach but seems to gloss over execution which is the hardest part. I chuckle a bit at the "some neural nets in between" part because that is everything. The entire challenge of autonomous driving is in figuring out the "neural nets in between". It is the critical challenge that nobody has fully solved yet. But Elon just goes "some neural nets in between" like it is no big deal. And yes, show us data to show improvement over time.

And eyes are on double gimbals (eyeball and neck), have substantially higher resolution in the fovea than the cameras, and are stereoscopic. And the real problem now is in planning more than perception.

Elon is a great example of CEO-speak: he drops what he thinks are clever nuggets (but which are 14-year-old-teenager thinking, obvious and irrelevant to the engineers), doesn't want to hear about the details, and overrides the people who tell him what is actually needed when that would incur higher costs.

He wasn't as much like this five or more years ago. Particularly with SpaceX he was more willing to get into the complicated, ugly, difficult details, like the hard metallurgical problems and extreme materials requirements, and in the beginning he personally learned about the construction and requirements of rocket engines and combustion instabilities. There he talks about how difficult things are, how long development takes, and how they will inevitably blow up rockets.

In other areas, particularly those related to machine learning or biological learning (Neuralink), he is overconfident and very naive. I see him as knowledgeable only in mechanical engineering, and even there the Cybertruck was a stretch. I think he was feeling down because of all the problems at the time with the Tesla paint plant and wanted a product which would never need paint, and that was his central motivation.
 
I think Fred makes a really good point here. Elon seems to focus more on concept and less on execution. Elon's quote that "photons in, inputs out, some neural nets in between" is a classic example of this. Elon touts the idea of vision and e2e as the right approach but seems to gloss over execution which is the hardest part. I chuckle a bit at the "some neural nets in between" part because that is everything. The entire challenge of autonomous driving is in figuring out the "neural nets in between". It is the critical challenge that nobody has fully solved yet. But Elon just goes "some neural nets in between" like it is no big deal. And yes, show us data to show improvement over time.
As one who has a surface knowledge of e2e as the “new” FSD approach, I don’t understand how it can work when our rules of the road differ significantly among the 50 states and even within each state. For example here in the Phoenix area, there are no car pool lane entry/exit limitations like California and some other states. So will my car know it should copy the behavior of good Phoenix drivers rather than good Los Angeles drivers when interacting with car pool lanes?

There obviously are many other examples. I suppose it would help if rules of the road were set nationally instead of at the state and even local level, but that won’t be happening in my lifetime.
 
As one who has a surface knowledge of e2e as the “new” FSD approach, I don’t understand how it can work when our rules of the road differ significantly among the 50 states and even within each state. For example here in the Phoenix area, there are no car pool lane entry/exit limitations like California and some other states. So will my car know it should copy the behavior of good Phoenix drivers rather than good Los Angeles drivers when interacting with car pool lanes?

Presumably, the solution would be to train the NN on all the rules so that it is able to handle all the different driving. So in your example, you would train the NN to know what a car pool lane is and how to handle it. In some ways this is similar to humans: human drivers learn what a car pool lane is and how to handle it, and they learn the unwritten rules in the areas where they drive the most. With e2e, you would try to train the NN to do the same. This is perhaps why Elon likes this approach so much; the thinking is that if humans can learn to handle all these different driving cases, why can't we train a NN to do the same?

But to do this, e2e will require a lot of data. If you want e2e to be L5 like Elon wants, you will need massive data and massive training compute, because you will need to collect diverse data on the gazillion driving cases everywhere. This is why Tesla is investing so much in Dojo. Second, the training needs to be good: you need to make sure the NN is learning the right lessons, and you will need to validate in the real world to make sure the car drives as you want it to. This is why I say that e2e is a nice idea but execution will be key.
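Purely as a hypothetical sketch of what "training on all the rules" could look like mechanically (not how Tesla actually does it), one option is to condition a single end-to-end policy on where each clip was recorded, so Phoenix and Los Angeles carpool behavior can both be learned by the same network:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: condition an end-to-end policy on a region embedding so
# region-specific conventions (e.g. carpool-lane entry rules) can be learned
# from region-tagged clips by a single model. Not Tesla's actual design.
NUM_REGIONS, FEAT, CTRL = 50, 256, 3        # e.g. steering, accel, brake outputs

class RegionConditionedPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.region_emb = nn.Embedding(NUM_REGIONS, 32)
        self.head = nn.Sequential(
            nn.Linear(FEAT + 32, 128), nn.ReLU(), nn.Linear(128, CTRL))

    def forward(self, vision_features, region_id):
        r = self.region_emb(region_id)                        # (batch, 32)
        return self.head(torch.cat([vision_features, r], dim=-1))

policy = RegionConditionedPolicy()
feats = torch.rand(16, FEAT)                                  # stand-in for camera features
region = torch.randint(0, NUM_REGIONS, (16,))                 # e.g. AZ vs CA clips
controls = policy(feats, region)                              # imitate the human driver's controls
```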

Could Tesla pull it off? Given enough time, probably. We can never say never. Right now, e2e is reliable enough for demos but not reliable enough to be driverless everywhere. ML and computing are making big improvements every year. So it is possible that in the future, maybe 10 or 20 years from now, e2e will be L5.
 