Welcome to Tesla Motors Club
Now Elon said "the car has a mind"
He did qualify it as being "not enormous," and I agree it's probably misleading for many and often leads to unproductive discussions around consciousness. But do you not see any potential aspect of the Wikipedia definition: "The mind is that which thinks, imagines, remembers, wills, and senses…" In some broad sense, these could fall under planning, perception, training, control/outputs, and inputs.

Perhaps even simpler: given the hand-crafted heuristics used for planning in previous FSD Beta, having neural networks correctly make those predictions without explicitly coding them is probably impressive in itself, and if they can make even better predictions, it could be mind-blowing for some. For example, the current behavior of going around pedestrians walking on the street feels very robotic, like a rule of "must give space when X feet away," whereas a human, and potentially the new control/planning neural network, would give plenty of space when it's clear there will be no oncoming traffic, especially on a low-traffic residential street with no lane lines, and would change behavior if there are, say, kids/pets or more dynamic traffic.

This particular example might focus more on planning / "think" to decide what's the appropriate behavior, but it probably needs all the other parts to work reasonably well. Do you think the neural network would be incapable of doing that?
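The contrast above, a fixed-distance rule versus context-dependent clearance, can be sketched in a few lines. This is purely an illustrative toy; every name and threshold here is invented and has nothing to do with Tesla's actual code:

```python
def heuristic_clearance(dist_to_pedestrian_m: float) -> bool:
    """Hand-coded rule: yield space inside a fixed radius, regardless of context."""
    FIXED_RADIUS_M = 3.0  # the robotic "must give space when X feet away"
    return dist_to_pedestrian_m < FIXED_RADIUS_M

def contextual_clearance(dist_to_pedestrian_m: float,
                         oncoming_traffic: bool,
                         kids_or_pets: bool) -> float:
    """What a learned planner might approximate: clearance in meters that
    depends on the whole scene, not a single hard-coded threshold."""
    clearance = 1.0
    if not oncoming_traffic:
        clearance += 1.5  # empty residential street: swing wide
    if kids_or_pets:
        clearance += 1.0  # less predictable agents: extra margin
    return clearance

print(heuristic_clearance(2.5))              # → True (inside fixed radius)
print(contextual_clearance(2.0, False, True))  # → 3.5
```

A real learned planner would of course output this behavior implicitly from training data rather than from if-statements; the point is only that the mapping from scene to behavior is no longer a single hand-picked number.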
 
Literally everyone agrees that AGI is probably decades away. Everything else is just marketing. Like "sparks of AGI".
Pretty pointless to argue about what Elon meant by his tweet (too vague to really mean anything) but "literally everyone" is 100% incorrect.

Just one example contradicting this: Oversight of A.I.: Principles for Regulation | United States Senate Committee on the Judiciary
From the three testimonies:
"Previously thought to be decades or even centuries away, I and other leading AI scientists now believe human-level AI could be developed within the next two decades, and possibly within the next few years."
"I have personally never seen anything resembling this pace of progress, and many scientists with longer careers than I seem to concur."
"Hassabis goes on to say that “I would not be surprised if we approached something like AGI or AGI-like in the next decade.” Every single AI researcher I have spoken to in the last year has told me they feel that AGI is much closer than previously estimated. Geoff Hinton, perhaps the most distinguished researcher in the deep learning community, stated, “I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that. … I don’t think they should scale this up more until they have understood whether they can control it.” Hinton’s estimate is now 5 to 20 years, while Ian Hogarth, in the article cited above, quotes an unnamed leading AI researcher as saying, “It’s possible from now onwards.”"

It's worth sitting with the Microsoft Sparks of AGI paper for a while and trying to understand its implications: https://arxiv.org/pdf/2303.12712.pdf

Ctrl-F for "theory of mind" in that paper.

Regardless, "AGI" means different things to different people so it's also probably not worth arguing over.
 
He did qualify it as being "not enormous," and I agree it's probably misleading for many and often leads to unproductive discussions around consciousness. But do you not see any potential aspect of the Wikipedia definition: "The mind is that which thinks, imagines, remembers, wills, and senses…" In some broad sense, these could fall under planning, perception, training, control/outputs, and inputs.
By that reasoning a Roomba has a mind. That's ridiculous.
This particular example might focus more on planning / "think" to decide what's the appropriate behavior, but it probably needs all the other parts to work reasonably well. Do you think the neural network would be incapable of doing that?
No, I do not think either FSD or a Roomba is capable of thinking. Stop stretching definitions?
 
He did qualify it as being "not enormous," and I agree it's probably misleading for many and often leads to unproductive discussions around consciousness. But do you not see any potential aspect of the Wikipedia definition: "The mind is that which thinks, imagines, remembers, wills, and senses…" In some broad sense, these could fall under planning, perception, training, control/outputs, and inputs.
My cat has a mind...she knows how to navigate around the house, find the litter box, go to where the food and water are, and where her favorite cat tree is. She knows when I say the word "food" it means it's time to go to the food bowl. She has some sort of memory, and when she bites my fingers in the morning she is attempting to will food into existence.

Having said that, I truly think the FSD computer in my Model 3 is smarter than she is. It can perform incredibly complicated navigation and routing tasks at 60mph.

In that sense, they both have "minds." Not enormous minds, but definitely minds. AGI will start small and dumb, dumber than animals. It will then scale to smarter than the most genius humans, at some point in the future.
 
By that reasoning a Roomba has a mind. That's ridiculous.

No, I do not think either FSD or a Roomba is capable of thinking. Stop stretching definitions?
First of all, if my Roomba had some neural nets to control its planning and to understand the world around it, I would actually 100% say it has a mind. My understanding is that my Roomba has some C++ code that makes it function in a brittle, non-thinking manner, and that's the main difference between a Roomba and a Model 3 or an LLM.

But again, I think the main reason we disagree is that we define "mind" and probably even "intelligence" differently. It's fine; people can disagree on these terms. No one can really agree on anything right now.
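The kind of brittle, hand-coded behavior described above can be sketched as a simple reactive rule. This is a hypothetical toy, not actual Roomba firmware:

```python
import random

def roomba_step(bumped: bool, heading_deg: float) -> float:
    """Brittle, hand-coded reactive rule: on a bump, rotate by a random
    amount and keep going. No model of the room, no planning, no
    'understanding' — just a fixed if/else an engineer wrote down."""
    if bumped:
        return (heading_deg + random.uniform(90.0, 270.0)) % 360.0
    return heading_deg  # no bump: keep driving straight

print(roomba_step(False, 45.0))  # → 45.0 (no bump, heading unchanged)
```

A neural-network planner, by contrast, produces behavior that was never written down explicitly, which is roughly where the disagreement over the word "mind" starts.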
 
First of all, if my Roomba had some neural nets to control its planning and to understand the world around it, I would actually 100% say it has a mind. My understanding is that my Roomba has some C++ code that makes it function in a brittle, non-thinking manner, and that's the main difference between a Roomba and a Model 3 or an LLM.

But again, I think the main reason we disagree is that we define "mind" and probably even "intelligence" differently. It's fine; people can disagree on these terms. No one can really agree on anything right now.
If you think your Roomba has a mind, then clearly this discussion is meaningless.
 
My cat has a mind...she knows how to navigate around the house, find the litter box, go to where the food and water are, and where her favorite cat tree is. She knows when I say the word "food" it means it's time to go to the food bowl. She has some sort of memory, and when she bites my fingers in the morning she is attempting to will food into existence.

Having said that, I truly think the FSD computer in my Model 3 is smarter than she is. It can perform incredibly complicated navigation and routing tasks at 60mph.

In that sense, they both have "minds." Not enormous minds, but definitely minds. AGI will start small and dumb, dumber than animals. It will then scale to smarter than the most genius humans, at some point in the future.
I hope this post is sarcastic. You clearly do not understand “smartness” in terms of actual IQ. A fly is “smarter” than the FSD computer. The FSD computer is 300k+ lines of code with some AI sprinkled in, while a fly is the result of millions of years of evolution with the purpose of getting smarter to avoid predators.
A fly (and especially a cat) can navigate the world with near perfection. Meanwhile, the car cannot handle simple, very controlled driving tasks. Any ani
 
I hope this post is sarcastic. You clearly do not understand “smartness” in terms of actual IQ. A fly is “smarter” than the FSD computer. The FSD computer is 300k+ lines of code with some AI sprinkled in, while a fly is the result of millions of years of evolution with the purpose of getting smarter to avoid predators.
A fly (and especially a cat) can navigate the world with near perfection. Meanwhile, the car cannot handle simple, very controlled driving tasks. Any ani

Evolution doesn't have a purpose.
 
I hope this post is sarcastic. You clearly do not understand “smartness” in terms of actual IQ. A fly is “smarter” than the FSD computer. The FSD computer is 300k+ lines of code with some AI sprinkled in, while a fly is the result of millions of years of evolution with the purpose of getting smarter to avoid predators.
A fly (and especially a cat) can navigate the world with near perfection. Meanwhile, the car cannot handle simple, very controlled driving tasks. Any ani
Why don't we just get cats to drive vehicles?

Much like vehicles, cats sit idle for most of the day while we're at the office or sleeping. If we put those cats to work, we'd be able to massively increase the utilization of our vehicles and our feline friends, thus driving down costs for end users and helping to keep cat owners sane, as the cats will be receiving much-needed mental stimulation throughout the day.
 
I do not think either FSD or a Roomba is capable of thinking
Okay, with that context, it's reasonable to be critical of AGI even being mentioned: whether it's a hardcoded "mind" or a still-narrow but more "generative" AI, these "lesser" approaches are nothing compared to human-like thinking, since they are "just" mechanically processing numbers/bits following predefined algorithms/instructions.

So, just looking at what Tesla has shared so far: Ashok Elluswamy's "Learning a General World Model" video from CVPR'23 WAD shows 15 seconds of (offline) 360º camera predictions, which seems to reflect the neural networks internalizing some sort of understanding, since they still make reasonably consistent predictions ~500 frames later.

Here it starts from driving in oncoming traffic lanes for a construction zone:
[Image: general start.jpg]


Then eventually "imagines" the construction cones ending to cross back over double yellow lines:
[Image: general end.jpg]


Again, comparing to GPT: large language models trained on a lot of text to predict the next word seem to have internalized an understanding of words, enough to take new prompts and produce reasonable responses. The thinking for Tesla, then, seems to be that if they can build a general world model whose neural networks understand enough to predict reasonable futures, those networks should have enough internalized understanding to predict how to control the car right now.
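That analogy, predicting the next word versus predicting the next camera frame, is the same autoregressive loop. Here is a minimal sketch with an invented stand-in model; nothing below is Tesla's or OpenAI's actual code:

```python
def rollout(model, context, n_steps):
    """Repeatedly predict the next element and feed it back in.
    For an LLM the elements are tokens; for a world model, camera frames
    (the CVPR video's ~500 frames would just be a longer rollout)."""
    seq = list(context)
    for _ in range(n_steps):
        seq.append(model(seq))  # predict next element from everything so far
    return seq

# Toy stand-in "model": predicts the next integer from the last one.
toy_model = lambda seq: seq[-1] + 1
print(rollout(toy_model, [1, 2, 3], 5))  # → [1, 2, 3, 4, 5, 6, 7, 8]
```

The interesting part is entirely inside `model`; the claim in the post is that training it to make long rollouts consistent forces it to internalize some understanding of the world.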
 
I hope this post is sarcastic. You clearly do not understand “smartness” in terms of actual IQ. A fly is “smarter” than the FSD computer. The FSD computer is 300k+ lines of code with some AI sprinkled in, while a fly is the result of millions of years of evolution with the purpose of getting smarter to avoid predators.
A fly (and especially a cat) can navigate the world with near perfection. Meanwhile, the car cannot handle simple, very controlled driving tasks. Any ani

Funny you mention flies; scientists have actually made some recent breakthroughs in mapping the ~3k neurons and ~550k synapses of the central nervous system of larval fruit flies: Wiring map reveals how larval fruit fly brain converts sensory signals to movement

Neural networks are a crude mathematical approximation of very complex organic mechanisms, but when an LLM has 175 billion parameters (synaptic weights), it can probably start to function very similarly to an organic mind.
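For the curious, the "crude mathematical approximation" is just a weighted sum pushed through a nonlinearity. A single artificial neuron can be sketched as follows (a textbook toy, not any particular model's implementation):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': weighted sum of inputs plus a bias,
    squashed by a sigmoid. Each weight plays roughly the role that
    a synaptic strength plays in an organic neuron."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# z = 1.0*2.0 + 0.0*(-1.0) - 1.0 = 1.0, sigmoid(1.0) ≈ 0.731
out = neuron([1.0, 0.0], [2.0, -1.0], -1.0)
print(round(out, 3))  # → 0.731
```

An LLM with 175 billion parameters is, very loosely, this operation repeated at enormous scale; whether scale alone gets you an "organic mind" is exactly what the thread is arguing about.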
 
Why don't we just get cats to drive vehicles?

Much like vehicles, cats sit idle for most of the day while we're at the office or sleeping. If we put those cats to work, we'd be able to massively increase the utilization of our vehicles and our feline friends, thus driving down costs for end users and helping to keep cat owners sane, as the cats will be receiving much-needed mental stimulation throughout the day.
Paging Toonces, I'm seeing some FSD parallels here. :D