There is a close-to-zero chance that current HW3/HW4 cars will ever be able to remove the driver in a meaningful (wide) ODD. So I respectfully disagree.
There is a 0% chance of Waymo coming to Atlanta, TX too. :oops:

Also, you did NOT specify HW3/4 but just said "Tesla removes the driver". I agree that HW3/4 is not up to L4, even a very limited-ODD L4, but you and I have no idea about HW5 or what Tesla may do, or when Waymo is planning to come to Atlanta. Cruise was starting test drives before they "nosedived". If they get back on track they may end up "beating Tesla". But the jury is still out. ;)
 
no idea about HW5 or what Tesla may do
I don't think HW5 will be good enough to remove the driver. Tesla knows when things are good enough to go that route (I'm sure they are super clear-eyed about it, as it is critical for market positioning), and they'll likely do an HW5 upgrade to improve L2 performance long before any L3/L4/L5 efforts.
 
FYI, googling Moravec's paradox has this as the first hit: “By the 2020s, in accordance with Moore's law, computers were hundreds of millions of times faster than in the 1970s, and the additional computer power was finally sufficient to begin to handle perception and sensory skills, as Moravec had predicted in 1976.[4] In 2017, leading machine learning researcher Andrew Ng presented a ‘highly imperfect rule of thumb’, that ‘almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI.’[5]”
This is a good, interesting way to think about this or communicate it, and probably not TOO far off. AI researchers have been trying to use IQ as a context for “intelligence” in AI capability; it might be workable for communicating context at some point. I think many LLM/GPT systems/models are up to roughly 100-103 IQ, certainly better than the average citizen, depending on the country.

I guess the big question is really what “do” means.
 
Super intelligence:
 
Here is one I just did. 🤣 🤣 🤣

[Attached image: IMG_4664.jpeg]
 
Ask an LLM a question whose answer is in the training set and it will probably get it right. Ask it a logic question about something not directly in the training set and it will probably fail miserably.

AGI must be near, if only we had more hardware 😂😂😂

My litmus test is: “I’m holding a piece of paper with two hands. What happens if I let go of the paper with the right hand? It’s a windy day.”
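
A minimal sketch of how one might run that litmus test programmatically, assuming the OpenAI Python client (the model name and the exact prompt wording are just illustrative):

# Probe an LLM with an out-of-distribution physical-reasoning prompt.
# Assumes the official OpenAI Python SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

prompt = (
    "I'm holding a piece of paper with two hands. "
    "What happens if I let go of the paper with the right hand? "
    "It's a windy day."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)

The interesting part is whether the answer accounts for the wind at all, rather than just saying the paper drops or stays put.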
 
...AGI must be near, if only we had more hardware 😂😂😂
Late-'50s and early-'60s "experts" were saying THE computer (as in one) was going to know all about everything. Now it's THE AI that is going to become sentient and take over the world. It is JUST software and always will be. In 20 years it will be a lot like the computer turned out to be: billions of AI programs of all sizes, types, and abilities running on all devices, enhancing their effectiveness.

So AI will take over the world, but in the same way the computer did: everywhere, in all sizes, doing all kinds of amazing things, BUT never everything. And there will never be one AI to rule them all that takes over everything.
 
"...different hardware...".Right.

So specifically what hardware installed in a 2018 AP3/MCU2 that works in FSD v12 that is different from the same hardware installed on MS/MX in 2021?

Why do they think they can make such a silly statement and expect people to believe them? Specifically what is different? Sounds very fishy to me. I'm sure we will not get those answers. I'd suggest they just screwed up and are now on. CYA mode.
 
I think some people have set the bar too high... it’s either Data from Star Trek or nothing. My bar is: can the car drive better in most circumstances than most drivers? We are still not there... but the writing is on the wall.
 
Current ML applications, if applied correctly, and typically in narrow-band applications, can augment human creativity and productivity.
However, there is a chasm to cross before they can replace a human in any setting. That 'chasm' is at least 10x wider for safety-critical applications.

At present, we're nowhere near it, in my opinion.