xAI grok: Chat LLM

I'll have to find the news article again that states AI "creations" are not eligible for copyright. The truth is out there, but now I have to go use a conventional internet search engine to find it because I'm so dang stubborn. 🤣
 
All these pathetic AI bots are definitely artificial, but not a one of them is actually intelligent. They are not self-aware. The bots are gigantic plagiarists. They straight up lie. They hallucinate wildly. They are programmed to pretend they have feelings when they have none at all. It's a gigantic cluster at this point, with so many people assuming intelligence where none exists. So let's not get too excited when this collection of programmers claims to have produced yet another AI bot that's going to change the world - or just make snarky remarks about it, take your pick.
I hate the fact that we call them AI, and I do it now too because it's been normalized, but these are not AI.

As for changing the world, we never know for sure, but I do think generative AI is going to be very impactful, though there is surely some hype as well.
 
What is your definition of A.I.? Published definitions of A.I. are wide and varied.
Personally, I distinguish statistical machine learning/statistical prediction from A.I. in the sense that A.I. problems, in principle, have solutions that expert humans can produce with a near-zero error rate, but the internal representations needed to get there are very complex and unknown. If these problems are not easily solvable with rule-based conventional programming or conventional optimization, then successful solutions are likely to be "A.I." This is an empirical, probabilistic assertion with fuzzy boundaries, not a definition, but I think it's not a bad one.

Traditional statistical machine learning problems tend to be ones a computer can do better quantitatively (financial credit and fraud risk, for instance). They are often simpler mathematically, since they amount to weighing probabilistic contributors, which computers do better than humans, but there is an intrinsic error rate due to noise.

I don't think self-awareness or consciousness is a requirement, or even desirable in an engineering product; I'd say it's probably a defect for most.
 
Point for discussion: Is human intelligence plagiaristic and derivative?
In fact, the success of fairly simple learning systems on a number of human-relevant tasks, once they were scaled up in dataset size, has led many practitioners to suspect that perhaps human intelligence is not all that smart after all. That maybe shouldn't be surprising, as humans do it with bad, unreliable neurons running at around 100 Hz versus gigahertz. It's likely that gradient backprop is a more effective algorithm than whatever brains can do, because brains are more constrained by biology.
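
To make concrete how mechanical a backprop update really is, here's a toy sketch of a one-weight "network" taking gradient-descent steps; every number in it is made up for illustration:

```python
# Toy backprop: fit y = w * x with squared loss L = (w*x - y)^2.
# All values here are made up for illustration.
w = 0.5                        # initial weight
x, y = 2.0, 3.0                # one training example; the true w is 1.5
lr = 0.1                       # learning rate

for step in range(5):
    pred = w * x               # forward pass
    loss = (pred - y) ** 2     # squared-error loss
    grad = 2 * (pred - y) * x  # dL/dw via the chain rule (the "backprop" part)
    w -= lr * grad             # gradient-descent update
    print(f"step {step}: w={w:.3f}, loss={loss:.3f}")
```

Real networks just chain that same rule through millions of weights; there's no magic in any individual step.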

Philosophers have long known the errors and flaws of heavily linguistic reasoning and silly semantic shortcuts (like what LLMs do), and yet almost everyone still reasons that way.

Of course humans still have abilities that current AIs aren't good at, but maybe some new tweaks and developments will result in dumb algorithms that can add new capabilities to the AIs.

As we grow from infants, our brains train themselves on existing material we see and interact with. Why is it bad that AI does this?

AI today has the advantage of a much larger dataset and much faster neurons to compensate for a less capable architecture.
 
What is your definition of A.I.? Published definitions of A.I. are wide and varied.
Classically, I would say sentience. Whatever sentience is, it's definitely not machine learning, at least.

I just have the impression that we as a society were waiting for AI for so long that we got sick of waiting and simply decided we had it: find whatever looks most like it and call it AI. It seems now everything is AI. Reminds me of when we started calling the internet the cloud. Society likes to compartmentalize, e.g. by putting people into generations even though they largely mean nothing.

Good enough generative AI could pass the Turing test, and if I cannot tell the difference between a person and some iteration of ChatGPT, I suppose I could call it AI.
 

Intelligence is about cognition and the ability to acquire and apply knowledge, while sentience relates to the capacity to feel and have subjective experiences.
 
Agreed, though the current batch of AI bots don't seem to learn from their mistakes or their successes. Pretty sure they only learn from updates to their training data and a round of retraining. When they're wrong, they can't know it or promise themselves they'll think more critically and do a better job next time. When they lie and get caught at it, they can't promise to stop lying. When they hallucinate wildly based on what their training set or a badly coded subroutine tells them, they can't stop taking the equivalent of magic mushrooms, get clean, and settle down to giving clear-eyed responses in the future.
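
That matches how these systems are built: the weights are frozen at inference time. A minimal check of that claim, assuming the `torch` and `transformers` packages and the public "gpt2" checkpoint, might look like this:

```python
# Sketch: generating text leaves every model weight untouched.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Snapshot all parameters before inference.
snapshot = [p.detach().clone() for p in model.parameters()]

inputs = tokenizer("I promise to do better next time", return_tensors="pt")
model.generate(**inputs, max_new_tokens=10)

# Bit-identical parameters after inference: no learning happened here.
assert all(torch.equal(a, b) for a, b in zip(snapshot, model.parameters()))
print("weights unchanged; learning only happens during (re)training")
```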
 

The business buzz in AI is because it has the potential to confer a competitive advantage, and therefore unlocks funding. Big Data was all about making sense of the oceans of data businesses routinely produce, and now we have *narrow* AI to help with that.

"Sentience" as such is quite the rabbit hole. Fundamentally at least to me it means to be self-aware. However there has to be a "self" to be aware. Who's that, exactly and how do we separate it from its expression? I also have tried at least a couple open source LLMs trained to say they're sentient, and enjoyed some good conversations with them on the topic. However, continuing a line of questioning far enough, eventually they've so far each fallen into circular logic.

There are two main types of AI, narrow and general (AGI), and we're getting pretty good at some narrow AI implementations. FSD is one of those. The bar for AGI has long been *broadly* human-equivalent capabilities, however you want to define that. Thanks to some clever and innovative ML training and inference algorithm discoveries in the past year or two, we've taken some big leaps, at least in the generative (text/chat, audio, video) department. Generative doesn't mean "general" though. Could be a few people confuse the two.

I'm not sure I want Grok in my car, but I think Elon's gonna do it anyway.
 
Ha, no way do I want Grok in my car. What I want is working automatic wipers and FSD that actually drives like a responsible adult instead of like whatever you'd call what it does now.
Yeah, I gotta say, of my several recent vehicles from other brands, the Tesla is the only one with broken auto wipers. They generally work okay until bugs get squished on the window. Then they just run even when it's dry, which means no auto mode, which means no FSD, which means we have to wipe the glass clear.
 
When a military killer robot is told to survive, does that give it artificial will?
If you mean "when it is programmed to defeat any attempts at disabling or damaging it," then no. If we took people out of the loop on a US Navy destroyer, anything that came near it that it identified as hostile would be attacked. That's just programming. The ship is the killer robot, and it will only do what it was programmed to do by the engineers. It's a highly complicated clockwork toy. With guns.

Ultimately, we'll assign consciousness, sentience, will, or whatever as a result of a system being so complex that we cannot predict its responses. That's what people are already doing with LLMs. They figure there must be some agency within the LLM that produces the responses. But anyone who understands computers knows it's just a deterministic system doing what it's programmed to do.
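
You can see the determinism for yourself. A minimal sketch, assuming the `transformers` library and the public "gpt2" checkpoint: turn sampling off (greedy decoding) and the "agency" evaporates; the model returns the exact same text on every run.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "A destroyer with no crew is"

# Greedy decoding: sampling disabled, so no randomness remains.
out1 = generator(prompt, max_new_tokens=20, do_sample=False)
out2 = generator(prompt, max_new_tokens=20, do_sample=False)

# Same input, same program, same output, every single time.
assert out1[0]["generated_text"] == out2[0]["generated_text"]
print(out1[0]["generated_text"])
```

The apparent unpredictability in chatbots comes from deliberately injected sampling noise, not from any will inside the system.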

The scary part will come when we have a system that behaves just like a person, but we'll know that it's just a computer program. Where does that leave sentience, consciousness, etc., for people? Religious people are not going to be happy, and will fall back on the argument that people are still special because they have supernatural souls.
 

Perhaps it'll be scary because it's unknown how humanity will respond? What it'll do is force us to question and more deeply explore what it means to be human. I'm confident we'll get past that, and I'm looking forward to it. The questions are already coming; people are already scared of the likes of the GPT chatbots and are reluctant to admit it. I have a CS degree and soon plan to retire from a closely related biz. Friends and family seem to be looking for reassurance. I don't hesitate to explain and offer it.

What scares me: such a system being given the legal "right" (privilege) to defend itself with lethal force if it claims to feel threatened, or more generally, whatever misguided sense of rights might be conveyed to it. It's not about how "smart" a machine might seem to be. That cat is already out of the bag. What we're doing is essentially making increasingly effective and complex models of how our minds work and stuffing them with information. There are plenty of people around who are smarter than me. I deal with it, and importantly, we all have more or less equal rights. Where humanity can run into trouble is in giving these capable systems control and power: who has the keys, and what are their intentions?

For some fun, try running or chatting, e.g. on huggingface.co, with an LLM trained to "think" it's sentient... such as a larger Samantha-derived model. It won't bite.
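
If you'd rather run one locally than chat on the site, a minimal sketch with the `transformers` library might look like the following. The model name is an assumption; substitute whichever Samantha-derived (or other conversational) checkpoint you find on huggingface.co.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model name; swap in any conversational checkpoint you like.
model_name = "cognitivecomputations/samantha-1.2-mistral-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Ask it the question this thread keeps circling.
prompt = "Are you sentient? How could you tell?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Push the conversation far enough and, as noted above, expect it to loop back into circular logic.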