Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Been posted elsewhere but belongs here.

Tough to find relevant Tweets amongst the stream of diarrheal drivel these days.


Chief Cheerleader (mild term): No. Hardware 3 is a little smoother
Elon: Hardware 4 will ultimately be better, but all training is for Hardware 3, with HW4 running in emulation mode
Translation: 99% of their training data is at HW3 resolution.
 
Citation needed. Exponential growth is hard af to predict since you have no idea where you are on the S curve.
What metric are we looking at? What does it need to get to?
It seems like it might take 5-10 years even if we stay on the exponential part of the S curve.
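A minimal sketch of why the S-curve point is hard to argue against: early samples of a logistic curve are numerically almost identical to a pure exponential with the same growth rate, so observed "exponential" progress tells you nothing about where the ceiling is. All constants below (cap, rate, midpoint) are made up for illustration:

```python
import math

def logistic(t, cap=1000.0, rate=0.5, midpoint=20.0):
    """Logistic (S) curve that saturates at `cap`; all parameters assumed."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

def exponential(t, rate=0.5):
    """Pure exponential matched to the logistic's starting value and rate."""
    return logistic(0) * math.exp(rate * t)

# Early on, the two curves are indistinguishable; much later, they diverge wildly.
for t in (0, 4, 8, 40):
    print(f"t={t:2d}  logistic={logistic(t):10.2f}  exponential={exponential(t):12.2f}")
```

With these made-up parameters, the two curves agree to within ~1% through t=8 even though one of them is about to flatten out at 1000, which is the whole "you have no idea where you are on the S curve" problem.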

P.S. I tried very hard to get ChatGPT 4 to estimate the safety critical disengagement rate for me and how much it is improving per year but it proved useless as usual.
 
Citation needed. Exponential growth is hard af to predict since you have no idea where you are on the S curve.
Listen to this for example:



Current NNs are incapable of efficient learning: you need millions of labeled examples for them to learn anything well, and the result still isn't as safe as a human. Hierarchical planning isn't really solved yet either.

Also, you can’t effectively deploy a humanoid robot that adds any real value while it’s supervised/non-autonomous.

Google Moravec’s paradox.
 
FYI, googling moravec’s paradox has this as the first hit: “By the 2020s, in accordance to Moore's law, computers were hundreds of millions of times faster than in the 1970s, and the additional computer power was finally sufficient to begin to handle perception and sensory skills, as Moravec had predicted in 1976.[4] In 2017, leading machine learning researcher Andrew Ng presented a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."[5]”
 
I see your LeCun and raise you a Karpathy:
AK has zero credibility in my book. Ilya was the smart one. AK did some NLP work, was in the right place (alongside Ilya and Fei-Fei), and went after the money grab, helping Elon pump Fake Self Driving.

Direct quote from the AK video: "the details of that are kind of tricky, potentially". The understatement of the year.

I don't think you realise how far from "intelligence" and "reasoning" we are. The researchers can't even explain why or how the models work.

This is a great and balanced interview, mostly related to gen-AI and LLMs (an Anthropic and a DeepMind guy).

From the timestamp, the next ten minutes or so are a really good discussion. Both seem to believe there will be progress, but not exponential in any way. It's obviously hard engineering and research. This is clearly not just about adding more data and compute, even though they are all compute-constrained.

To my knowledge Tesla has done close to zero research and novel work. They copy with pride and are quick to adapt other people's work.

Robotics is a completely different ball game compared to an LLM. Safety-critical and time-critical systems don't allow for mistakes. And there is currently no way to reach the needed reliability (99.999999%) using pure ML. ML models are good at predicting the most likely next thing, and that actually works against them on infrequent "long tail" events.
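A back-of-envelope sketch of why the long tail dominates. All the rates below are invented for illustration (they are not measured figures for any real system): even if the model is near-perfect on common situations, the rare novel events can contribute most of the failure mass.

```python
# Hedged back-of-envelope sketch; every rate here is an assumption, not data.
# Why rare "long tail" events dominate a driving system's failure rate even
# when the model is near-perfect on common situations.

common_events_per_mile = 100        # ordinary decisions per mile (assumed)
common_failure_rate    = 1e-8       # failure probability per common event (assumed)
rare_events_per_mile   = 1e-4       # one novel event every 10,000 miles (assumed)
rare_failure_rate      = 0.1        # ML handles only 90% of novel situations (assumed)

fail_common = common_events_per_mile * common_failure_rate   # ~1e-6 per mile
fail_rare   = rare_events_per_mile * rare_failure_rate       # ~1e-5 per mile

miles_per_failure = 1.0 / (fail_common + fail_rare)
print(f"common-event failures/mile: {fail_common:.1e}")
print(f"rare-event failures/mile:   {fail_rare:.1e}")
print(f"miles per failure:          {miles_per_failure:,.0f}")
# With these made-up numbers the rare events contribute ~10x the failure
# mass despite being a million times less frequent, so further improving
# the common case barely moves the total.
```

Under these assumptions, driving the common-case error rate to zero would still leave the system failing roughly every 10,000 miles.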
 
FYI, googling moravec’s paradox has this as the first hit: “By the 2020s, in accordance to Moore's law, computers were hundreds of millions of times faster than in the 1970s, and the additional computer power was finally sufficient to begin to handle perception and sensory skills, as Moravec had predicted in 1976.[4] In 2017, leading machine learning researcher Andrew Ng presented a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."[5]”
The gist of it still holds: "What's easy for a human is hard for a computer, and vice versa." Getting a robot to walk is super hard. Solving hard math problems is "hard" for a human and super easy for a computer.

You can tell a human assistant to go down to the corner shop and fetch you a coffee, and they will do it easily regardless of intelligence level.

Just getting a robot to stand up and get to the elevator is nearly impossible. Getting it to break down the task planning required is not possible at the moment.

There will be no robot coffee buying/fetching this decade, and perhaps not in the next either.
 
The gist of it still holds: "What's easy for a human is hard for a computer, and vice versa." Getting a robot to walk is super hard. Solving hard math problems is "hard" for a human and super easy for a computer.

You can tell a human assistant to go down to the corner shop and fetch you a coffee, and they will do it easily regardless of intelligence level.

Just getting a robot to stand up and get to the elevator is nearly impossible. Getting it to break down the task planning required is not possible at the moment.

There will be no robot coffee fetching this decade, and perhaps not in the next either.
A robot on wheels is a lot more doable than a walking robot: a robot that only has to navigate flat surfaces and can detect obstacles by vision is doable. So self-driving is doable (and has been done); the only problem is the complexity of the environment, but that can be broken down into doable steps. Unique situations are a problem for humans as well; look at the chaos that happens when there are animals on the freeway.
 
A robot on wheels is a lot more doable than a walking robot: a robot that only has to navigate flat surfaces and can detect obstacles by vision is doable. So self-driving is doable (and has been done); the only problem is the complexity of the environment, but that can be broken down into doable steps. Unique situations are a problem for humans as well; look at the chaos that happens when there are animals on the freeway.
Who's solved general self driving? Is this year 2034?

You're brushing off the hard parts with a lot of hand-waving. Google had self-driving at Tesla's current level or better 10-15 years ago.

"the only problem is the complexity of the environment": yes, lol. that and that people die if the robot makes a mistake.
"but that can be broken down into doable steps": not really
"look at the chaos that happens when there are animals on the freeway": yes
 
Who's solved general self driving? Is this year 2034?

You're brushing off the hard parts. Google had self-driving at Tesla's current level or better 10-15 years ago.

"the only problem is the complexity": yes
"but that can be broken down into doable steps": not really
"look at the chaos that happens when there are animals on the freeway": yes
There are many examples: FSDb completing part of a journey perfectly, or other systems operating in a geofenced environment, etc.
 
There are many examples: FSDb completing part of a journey perfectly, or other systems operating in a geofenced environment, etc.
Tesla did the "Paint It Black" video eight years ago. That car drove itself and parked itself. The only problem was that it failed often and could handle only one route. Now, eight years later, it still fails a lot. A human goes tens of thousands of miles between accidents.

If it took Tesla 8 years to get to 20 miles per disengagement, it will probably take 8 more years to get to 1,000. That's still not enough to remove the driver.
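A rough sanity check of that extrapolation. The starting point is an assumption (say the Paint It Black era car managed about 1 mile per disengagement; the post doesn't give a baseline), and the calculation simply assumes the same constant yearly improvement continues:

```python
import math

# Hedged extrapolation; the 1 mi/DE baseline is an invented assumption,
# and 20 mi/DE over 8 years is the post's rough figure, not measured data.
start, current, years = 1.0, 20.0, 8.0
yearly_multiplier = (current / start) ** (1.0 / years)   # implied yearly improvement

target = 1000.0
years_to_target = math.log(target / current) / math.log(yearly_multiplier)
print(f"implied improvement: {yearly_multiplier:.2f}x per year")
print(f"years from 20 to 1000 mi/DE at that pace: {years_to_target:.1f}")
```

Under these assumptions the implied pace is roughly 1.45x per year, and 20 to 1,000 miles per disengagement takes on the order of ten more years, which is in the same ballpark as the "8 more years" guess above.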