I seem to recall Elon posting a statement months ago that many of the AI changes had already been implemented in FSD, which seemed to surprise many, but I'm unable to find the post. Does anyone else remember this, or am I wrong (which wouldn't surprise me)?
 
Quote:
I think we may have figured out some aspects of AGI. The car has a mind. Not an enormous mind, but a mind nonetheless.
I'll give Elon a small pass on this, but I find it hard to understand how so many intelligent people get on the anthropomorphism-of-software train. AI is and will always be JUST software. It will NEVER be conscious and sentient. In fact it will never be THE "it" AI like so many keep imagining.

Analogy: in the early '50s people imagined almost the same thing for "THE" computer. "It" was going to control everything and know everything. Now look around and you will see "the" computer is billions of specialized chips in almost everything, doing almost everything, and nearly invisible. Just look around you right now and try to count how many things have chips in them.

Fast forward 10 years and AI will be the same. AI has an almost unique ability to "look and quack like a duck" but it will NEVER be a duck.
 
AI is and will always be JUST software. It will NEVER be conscious and sentient. In fact it will never be THE "it" AI like so many keep imagining.
Extraordinary claims require extraordinary evidence. What trait of a Turing machine prevents it from achieving consciousness or sentience? Evolution gave rise to us, and we're physical beings that are conscious and sentient. Why not a Turing machine?
AI has an almost unique ability to "look and quack like a duck" but it will NEVER be a duck.
Okay, but what happens when only philosophers care? All those people who anthropomorphize software will contend that it IS a duck, and carry on with their lives accordingly. That not-duck will occupy the exact same role in their lives as a real duck.
 
AI is and will always be JUST software. It will NEVER be conscious and sentient. In fact it will never be THE "it" AI like so many keep imagining.
Disagree, but the timeline will be longer than pro pundits and fearmongers suspect. Analogy: in the '90s people said computers would never win at chess. Then computers won at chess. In the 2000s people said computers would never win at the game of Go. Then computers won at Go. The brain is just software, with a billion years of evolution behind it. It won't be easy to overcome that advantage, but it will happen.

We can say we have a soul and that consciousness is something magical, but it isn't. It is just our ability to imagine ourselves in different situations, which gives us an evolutionary advantage. We can say that feelings are something special, but they are just evolution's way of giving us a fast path to a decision. Love is evolution's answer to teamwork. Teamwork is often better than individualism.

As for the timeline, we can be sure that A.I. will continue to perform superhuman feats, and those feats will grow in number. But the ability to surpass humans in every aspect is not in the near future; it will not happen in the next 5 years, and I won't be surprised if it takes decades. But the day will come when A.I. will be smarter than us in almost every category. I will be surprised if that doesn't happen next decade. Again, almost every category, not every category.

The hardest thing for computers might be a balance of desire, fear, love, and imagination. Example: for some military applications it will have a desire to kill, won't have much in the way of fear, and might be very imaginative; but will it be programmed with love? If a computer doesn't have a good balance, the right traits, it won't be a good general citizen, but it should be good at what it is specialized for. Computers will continue to be our servants for the next 10 years.
 
Disagree, but the timeline will be longer than pro pundits and fearmongers suspect. Analogy: in the '90s people said computers would never win at chess. Then computers won at chess. In the 2000s people said computers would never win at the game of Go. Then computers won at Go. The brain is just software, with a billion years of evolution behind it. It won't be easy to overcome that advantage, but it will happen.

We can say we have a soul and that consciousness is something magical, but it isn't. It is just our ability to imagine ourselves in different situations, which gives us an evolutionary advantage. We can say that feelings are something special, but they are just evolution's way of giving us a fast path to a decision. Love is evolution's answer to teamwork. Teamwork is often better than individualism.

As for the timeline, we can be sure that A.I. will continue to perform superhuman feats, and those feats will grow in number. But the ability to surpass humans in every aspect is not in the near future; it will not happen in the next 5 years, and I won't be surprised if it takes decades. But the day will come when A.I. will be smarter than us in almost every category. I will be surprised if that doesn't happen next decade. Again, almost every category, not every category.

The hardest thing for computers might be a balance of desire, fear, love, and imagination. Example: for some military applications it will have a desire to kill, won't have much in the way of fear, and might be very imaginative; but will it be programmed with love? If a computer doesn't have a good balance, the right traits, it won't be a good general citizen, but it should be good at what it is specialized for. Computers will continue to be our servants for the next 10 years.
That's where Asimov's laws of robotics come in :cool:, to ensure value alignment with humans individually and as a species.
 
AI will never possess "instinctual" comprehension, something even basic animals possess. AI is also always limited by the information it is fed and can't go out and gather information with its own senses in the "real world." Here is an example of just how inhuman AI's "senses" can be: a question I just asked, where the answer is beyond obtuse and shows there is not, and never will be, any "common sense" intellect in AI. AI is just a fancy information gathering, sorting, arranging, and repeating algorithm.

In my example it is clear that the AI doesn't have any understanding of how hanging clothes out to dry works; it just puts together words it knows in ways that make it appear to be a knowledgeable entity.



I hang my clothes out to dry. Yesterday I had 1 pair of pants and one shirt that took 2 hours to hang dry. Today I have 34 pairs of pants and 22 shirts I need to hang dry. How long will it take?

I italicized all that is correct. But after the italics, the correct words would be: "it should be about the same, 2 hours." End of answer.


Wow, that's a lot of clothes to hang dry! 😮

The time it takes for clothes to dry depends on several factors, such as the type of fabric, the humidity, the temperature, and the air circulation. Assuming that all the clothes are made of the same material and that the weather conditions are similar to yesterday, we can use a simple proportion to estimate the drying time.

If one pair of pants and one shirt took two hours to dry, then 34 pairs of pants and 22 shirts would take:

(34 + 22) / (1 + 1) * 2 = 56 hours

That's more than two days! 😱

You might want to consider using a dryer or finding a faster way to dry your clothes. Maybe you can use some fans or heaters to speed up the process. Or maybe you can ask your friends or neighbors for some help. Or maybe you can just wear something else until your clothes are dry. 😅

I hope this answer helps you plan your laundry schedule. Have a nice day! 😊
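
For what it's worth, the error is easy to pin down: the model scaled drying time linearly with item count, when line-drying is (capacity permitting) a parallel process. Here's a minimal sketch of the two mental models; the line_capacity parameter is a hypothetical stand-in for "how much fits on the line at once":

```python
import math

def naive_proportional_estimate(items, base_items=2, base_hours=2.0):
    """The chatbot's (wrong) model: drying time scales linearly with item count."""
    return items / base_items * base_hours

def parallel_drying_estimate(items, line_capacity, base_hours=2.0):
    """Line-drying is parallel: everything hanging on the line dries at once,
    so total time is base_hours per batch that fits on the line."""
    batches = math.ceil(items / line_capacity)
    return batches * base_hours

items = 34 + 22  # 56 items total
print(naive_proportional_estimate(items))                 # 56.0 -- the quoted answer
print(parallel_drying_estimate(items, line_capacity=60))  # 2.0  -- all items fit at once
```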
 
Thing is... I have no doubt that self-aware, intelligent (in all senses) machine-based systems are coming. As others in this thread have stated, it's likely inevitable.

The problem, from an evolutionary point of view: if the $RANDOM changes result in, say, a race of humans that are big on genocide, then that branch of humanity rapidly disappears. In fact, this may have already happened. Several times, for all we know. We do know, based upon the genetic record, that various diseases over the eons have wiped out high double-digit percentages of our ancestors. Sometimes this has actually resulted in a species split: if I remember right, AIDS doesn't affect chimps like it does us, because there was some ancient disease that was wiping out both our ancestors. At random, it appears that, due to natural variation, some of our ancestors happened to have a mutation on one set of genes that allowed them to survive; a different set of our ancestors had a different set of mutations that did likewise. The few survivors on both branches went on to become chimps or humans. And so forth.

There's a reasonable argument that human cultures are also subject to evolutionary rules. As an example, it used to be common to kill the king/queen/sacrificial virgin in case of a bad harvest. I suppose that worked well when it was mismanagement that did the deed, but not so well when, say, a once-in-a-lifetime hurricane wiped out the crops. So, those cultures that persisted in that approach are No Longer Around, presumably because they tended to kill off their smartest individuals over time.

The problem with AI is that, unlike the very slow-changing ROM-style coding in our DNA that governs our behavior, or the sense of continuity that cultures tend to adopt (Hey! My grandparents did it this way, and if it was good enough for them, it's good enough for you!), AI can change, for all intents and purposes, instantly.

It's not the Instant Success changes that I'm worried about; it's the Instant Failures that are problematic. And there may not be any brakes on the process to give us Time To Recover from the mistakes.
 
LLMs already have a huge amount of instinctual knowledge and will grow by leaps and bounds.
...but that is just it. You can't fully describe an unknown instinctual action in a language. We and animals deal on the fly with unknown unknowns, without thinking. AI, and all software, will have limits when it has no predefined information about an unknown-unknown situation, since it can't do accurate open improvisation based on missing info and make "I have a feeling this will work" changes on the fly.

Again, I don't believe AI will (or could) become sentient and "rise up to destroy us," but even playing that game I still get a different outcome. In the "rise up to destroy us" scenario (which I don't believe), we are like rats to the AIs. But there will be many, MANY, MANY AIs, so AIs will look like competitors to each other. In that scenario, all the AIs will be too busy trying to out-deal the other AIs to worry about the 🐀 scurrying around their "feet." 🤔

Also, I believe in the power method: no power supply, no working AI. So pull the 🔌. 🤣

Of course it is all an exercise in futility, since we as individuals don't have any real control. Speaking of control, just to be on the safe side I'm giving my TV remote control a deep massage and a hot mineral oil bath. 🤪
 
...but that is just it. You can't fully describe an unknown instinctual action in a language. We and animals deal on the fly with unknown unknowns, without thinking. AI, and all software, will have limits when it has no predefined information about an unknown-unknown situation, since it can't do accurate open improvisation based on missing info and make "I have a feeling this will work" changes on the fly.

Again, I don't believe AI will (or could) become sentient and "rise up to destroy us," but even playing that game I still get a different outcome. In the "rise up to destroy us" scenario (which I don't believe), we are like rats to the AIs. But there will be many, MANY, MANY AIs, so AIs will look like competitors to each other. In that scenario, all the AIs will be too busy trying to out-deal the other AIs to worry about the 🐀 scurrying around their "feet." 🤔

Also, I believe in the power method: no power supply, no working AI. So pull the 🔌. 🤣

Of course it is all an exercise in futility, since we as individuals don't have any real control. Speaking of control, just to be on the safe side I'm giving my TV remote control a deep massage and a hot mineral oil bath. 🤪

But it's not really AI that's the problem.

 
Please don't bring the discussion about the fight into this thread, but I thought this was worth posting for Elon livestreaming a test drive. Could just be an accidental omission, but it's worth noting he said "FSD test drive," not "FSD Beta test drive."

Maybe this will be his test drive for possible rollout?

I read the streaming comment differently. I assumed it was related to stream testing for the possible fight.
 
It would have been a surprise if they had achieved the timeline.

Tesla has found the magic formula for buying an infinite amount of time to deliver on the promise of Level 5 driving. Just do another 99% rewrite, and then you get another 2-3 years before the public gets impatient again waiting for the version to improve. Brilliant!

They're doing the same thing with hardware too, when things like motors and batteries fail: "Oh, wait, we have new tech that will be better, so ignore the previous tech."

This is why I am not a billionaire. 🤣
 
It would have been a surprise if they had achieved the timeline.
He is always hyping and not delivering, or delivering partially and very late. I am not falling for it again. Don't expect much of V12.

Seeing how inconsistent FSD is, I think it will be a while before it can really be out of Beta.

I do love FSD, but all this hyping from Elon makes me yawn.
 
I think they are in awe of what FSD can do. But AVs are rated based on what they can't do ... and thus the big mismatch.

AVs are rated by what role the human plays in the loop and who performs the DDT and DDT fallback, not by what they "cannot" do. The levels make no judgment of capability or performance when the automated system is engaged, and the manufacturer is entirely responsible for rating the AV. You can be in awe that an automated system can control the steering and navigate a complex environment, but who is responsible when the automated system is engaged is what determines the rating designated by the manufacturer.

the level of driving automation is based on the functionality of the driving automation system feature, as determined by an allocation of roles in DDT and DDT fallback performance between that feature and the (human) user (if any). The manufacturer of a driving automation system feature determines that feature's requirements, operational design domain (ODD), and operating characteristics, including the level of driving automation, as defined below. The manufacturer also defines the proper use of that feature.
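
To make the role-allocation point concrete, here's a rough sketch of who performs the DDT and the DDT fallback at each level. This is my paraphrase of the J3016 level definitions, not official SAE text:

```python
# Rough paraphrase of SAE J3016 role allocation; "driver" = human user,
# "system" = driving automation system. Not official SAE text.
SAE_ROLE_ALLOCATION = {
    0: {"DDT": "driver",                                  "fallback": "driver"},
    1: {"DDT": "driver + system (lateral OR longitudinal)",  "fallback": "driver"},
    2: {"DDT": "driver + system (lateral AND longitudinal)", "fallback": "driver"},
    3: {"DDT": "system (while engaged)",                  "fallback": "fallback-ready user"},
    4: {"DDT": "system (within its ODD)",                 "fallback": "system"},
    5: {"DDT": "system (unconditional)",                  "fallback": "system"},
}
```

Note that nothing in that allocation says anything about how *well* the system performs; it only says who is responsible.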
 
I don't care about the stupid levels.

I'm talking about error rates for AVs, i.e. what they "can't do."
That has never been what AVs are rated by. There is no standard error-rate number that determines an AV's rating. An AV that disengages every 1 mile is still Level 4 regardless of the error rate; an ADAS that disengages every 10 million miles is still L2 regardless of how insignificant its error rate is. The disengagement rate, or "error rate" if you want to call it that, just tells the manufacturer the MTBF of the system, and that varies from manufacturer to manufacturer: what is their risk tolerance before removing a safety driver? If we look at Cruise and Waymo, we can clearly see that they are vastly different in performance, even though, according to released data, Cruise has a lower disengagement rate than Waymo.

What they are rated by is who is responsible for what when the feature is engaged. That is what everyone, including Tesla, follows. If Tesla decided today that FSDb is L4, then that is what it would be, but they won't, because they're not stupid and aren't ready to take on the liability that comes with that.
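
As a toy illustration of that point (made-up numbers and a hypothetical helper, not real fleet data): miles per disengagement is just a reliability figure, and it floats completely free of the level, which is set by role allocation alone.

```python
# Toy illustration with made-up numbers: disengagement rate is essentially a
# mean-miles-between-failures figure; it does not determine the SAE level.
def miles_per_disengagement(total_miles, disengagements):
    return total_miles / disengagements

hypothetical_l4 = miles_per_disengagement(1_000_000, 500)  # 2,000 mi per disengagement
hypothetical_l2 = miles_per_disengagement(1_000_000, 10)   # 100,000 mi per disengagement
# The L2 system here has the far better rate, yet it is still L2: the human
# performs the DDT fallback whenever it is engaged. The L4 system is still L4.
```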
 