Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

xAI Grok: Chat LLM


DanCar · Active Member · Joined Oct 2, 2013 · SF Bay Area
Sign up to get on the tester list here: xAI Grok
Release announcement: Announcing Grok
Excerpts:
  1. Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!
  2. A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the 𝕏 platform. It will also answer spicy questions that are rejected by most other AI systems.
  3. Grok is still a very early beta product – the best we could do with 2 months of training – so expect it to improve rapidly with each passing week with your help.
  4. We provide a summary of the important technical details of Grok-1 in the model card.
  5. If that sounds exciting to you, apply to join the team here.
  6. We give Grok access to search tools and real-time information, but as with all the LLMs trained on next-token prediction, our model can still generate false or contradictory information.
Tesla cars will run a small version of Grok someday.
 
Yes, but as complexity increases it has more failure modes. It is also surprisingly bad at math; it used to fail every time at adding 6-digit numbers. Lots of people on Reddit have commented on how it makes a good psychiatrist, especially when someone is overly emotional about an event that occurred.
 
All these pathetic AI bots are definitely artificial, but not a one of them is actually intelligent. They are not self-aware. The bots are gigantic plagiarists. They straight up lie. They hallucinate wildly. They are programmed to pretend they have feelings when they have none at all. It's a gigantic cluster at this point, with so many people assuming intelligence where none exists. So, let's not get too excited when this collection of programmers claims to have produced yet another AI Bot and how it's going to change the world - or just make snarky remarks about it, take your pick.
 
All these pathetic AI bots are definitely artificial, but not a one of them is actually intelligent. They are not self-aware. The bots are gigantic plagiarists. They straight up lie. They hallucinate wildly. They are programmed to pretend they have feelings when they have none at all. It's a gigantic cluster at this point, with so many people assuming intelligence where none exists. So, let's not get too excited when this collection of programmers claims to have produced yet another AI Bot and how it's going to change the world - or just make snarky remarks about it, take your pick.
Thanks Dracaris, I enjoyed reading your perspective. As a counterpoint about changing the world:
  1. Knowledge sharing: The large language models (LLMs) pack in a large amount of info (yes, they can often provide wrong info). People have found many uses for this, such as call center automation, language translation, general fact lookup, and much more. Many apps in development will extend this, like legal help and healthcare diagnosis.
  2. Creative industries: Lots of people are using LLMs to help write novels and generate art and music. I saw an estimate that LLMs will help cut the cost of movie production to a tenth a few years down the road.
  3. Lots of programmers are raving about how good they are at helping to code.
  4. Billions are being poured into startups every quarter. Sure seems like investors think this will change the world.
  5. I hate searching through internet links when looking up info. I have switched most of my lookups to an LLM such as bard.google.com.
This is just the beginning. We are on a freeway toward creative content, information, and simple reasoning becoming ubiquitous. Society will look much different 10 years from now; people will laugh at anyone doing a Google search instead of asking an LLM. Yes, there are issues, such as how content creators get paid. A similar issue arose when internet music downloads became common: the music publishing industry lost much of its might and profits.
 
All these pathetic AI bots are definitely artificial, but not a one of them is actually intelligent. They are not self-aware. The bots are gigantic plagiarists. They straight up lie. They hallucinate wildly. They are programmed to pretend they have feelings when they have none at all. It's a gigantic cluster at this point, with so many people assuming intelligence where none exists.
Intelligence is not a binary phenomenon, and AI researchers are showing that these models do have some approximate capacity to reason and generalize. These capabilities are by no means perfect or universal.




One strong limitation: https://arxiv.org/pdf/2305.19555.pdf
The LLMs are language models and have clear limitations which will have to be improved, presumably by fundamental advances in architecture and concepts.

Falling short of the best humans is not disqualifying: there are plenty of humans who fail at tasks other humans succeed at brilliantly. What's missing in the research is a predictive, deep theory of how to design and train them to emphasize certain tasks versus others.

So far, no AI is competitive at, say, producing a significant novel research result like the above; but then again, most humans aren't either. The best AIs might soon be at the level of a well-read but otherwise average, normally functioning high-schooler, not an expert.

Some people in the field say that the apparent success of today's LLMs is more an indictment of the superficiality of human behavior and reasoning in many, but not all, areas.

It's a gigantic cluster at this point, with so many people assuming intelligence where none exists.
An excellent point, and one that's now appreciated by experts but not so much by the public (though that's growing): the AIs upend normal human heuristics about intelligence. Average humans meet a person who can generate sophisticated, knowledgeable linguistic text at length and assume they have the high general intelligence common to smart humans. That is no longer a valid assumption, because the AIs don't have human neural hardware and algorithms, and they don't have typical human learning experiences. They're cheating at the Turing Test; or, more accurately, the Turing Test is pretty flawed.
 
Thanks for the thoughts, appreciated. However, you're both missing a key point: generative AI simply isn't. Rather, it's generally plagiaristic and derivative, because its training is based entirely on data stolen globally from sources across the internet.

Given how poorly the muskrat generally handles software development (e.g. FSD, Automatic Wipers, Tesla Vision, Twitter, et al.), there's no way Grok (its name is yet another theft, this time from a "content creator" named Robert Heinlein) will be any better than the others, and this one could easily be far worse. So, while I'll watch this creation with interest, I'll not give an inch on the proposition that it is not now, nor will it likely ever be, intelligent.

I am not saying that AI bots cannot be useful - quite the contrary. However, these collections of software should not be referred to as AI, because they simply aren't. Sometimes useful, interesting, capable of producing surprising and helpful responses, in most cases easily better than standard internet searches, yes, but not intelligent. They may even be uniquely helpful in realizing the dream of FSD and autonomous driving, but right now every concept or conclusion they produce must be carefully checked before it's adopted and put to use, because it may be a lie, it may be a confabulation, it may be highly biased, it may be plagiarized, and so on.

Oh, and it's my understanding that as of now, nothing that comes out of a large language model can be copyrighted, so that's actually kinda cool. As a result, it appears that anything one of these engines produces falls directly and immediately into the public domain - that list would necessarily include art, text, computer code, and so on.
 
... Rather, it's generally plagiaristic and derivative, because its training is based entirely on data stolen globally from sources across the internet. ...
I suppose humans aren't smart either, because without training and the data they've viewed / stolen, they would be dumb.
Said another way: you are accusing LLMs of not being intelligent because they require training. Humans would be dumb without training too. Humans, yourself included, copy and derive from works you have seen / stolen.

Much of what these LLMs are trained on is Wikipedia, which is open. My 11K edits on Wikipedia have gone to good use.
 
Nope, that's not really what I said. You're putting words in my mouth, which is never a good idea, because I have so many already. 😏

You're ignoring the widely reported vast consumption of copyrighted text, art, music, voices, and other mediums of expression by those parties training their large language models. Wikipedia is of course a great example of a public domain source of information, and I have no problem with anybody using it, though, as a matter of courtesy, attribution should be provided whenever it's consumed as a source. The copyrighted sources of information stolen for use by those training LLMs are not consulted, their content is not licensed, the owners of the copyrights are not paid - and yet those building LLMs expect to be rewarded for their work. One beautiful irony in this theft is that nothing produced by a trained LLM, commonly referred to as a "generative AI" can be copyrighted and enters the public domain as soon as it drops into the output hopper, so to speak. That doesn't put the stolen source material into the public domain, of course, but it certainly puts the "work" of any LLM into that realm, where it cannot be claimed as a unique creation and nobody can be sued for copying it outright and using it however they see fit. Fun, right?
 
... The copyrighted sources of information stolen for use by those training LLMs are not consulted, their content is not licensed, the owners of the copyrights are not paid - and yet those building LLMs expect to be rewarded for their work. ...
I doubt you can find a reliable reference for that. If the book the LLM read was paid for, would that change things? Copyright law allows for derivative works. Nothing is stolen, and once the court cases that have been filed are settled, we will know more definitively. Judges have already thrown out most of the copyright-infringement claims. I wouldn't be surprised if there are continual appeals until it reaches the Supreme Court, so it might be 4-5 years before we know for sure.
 
There is nothing stolen
The theft is of opportunity. I take a year to write a novel. I make $20,000 on it. A company with an LLM drops my book into the input hopper, and now they can crank out as many stories in that style as they want, pretty much instantly. The market will be flooded with stories inspired by my work, at as low a price as the company cares to sell them, because their cost is essentially zero. It will take me another year to produce my next work, but readers will have already experienced my general storytelling style so much in the meantime that I won't be providing anything markedly new.

The result would be that nobody will publish any more stories.

This is the same situation as the actors' guild ensuring that their likenesses will not be owned by the studios. Without that protection, the studios would eventually figure out how to duplicate the actors' performances and start cranking out movies without actors.

The result would be that nobody will perform in public.

The general pattern is that corporate executives are using technology to supplant the role of people in society. Whatever people can do, our machines can do better. It's important to see that cliff coming and figure out where we want to draw the line between our human society and our robotic economy. Which things need to be done as efficiently as possible, and which should be reserved for people to do however we do them? I guess the simplest example of this is creating a robot that can run at 60 mph that we put into the Olympics. The company behind the robot gets product exposure, and the gold medal - which they can melt down as bullion. Profit.
 
The theft is of opportunity. ...

On Reddit you can read about authors using LLMs to write books. Is "the theft of opportunity" instead an opportunity open to all?

The general pattern is that corporate executives are using technology to supplant the role of people in society.
You can try to put the genie back in the bottle, but it won't happen. Whether it is corporate executives, enterprising individuals, or foreign countries, it will happen. I've seen a few cases where people were afraid of losing their jobs to automation. In reality, those people were able to make more product: instead of making 100K widgets per month, after automation the same people made a million widgets a month.

Whatever people can do, our machines can do better. It's important to see that cliff coming and figure out where we want to draw the line between our human society and our robotic economy.
Only the paranoid survive. It is good to see the potential pitfalls, but it is also good to see the new sunrise that is full of opportunities. With massive automation, society will be richer and able to accomplish lofty goals: terraform Earth, Venus, Mars, and more distant solar systems. A few thousand years from now, how many planets will we be living on? Will our numbers eventually grow into the trillions?

Which things need to be done as efficiently as possible, and which should be reserved for people to do however we do them?
It will be a long time before our superior intellect is surpassed by computers. A billion years of evolution won't be surpassed so easily. Even when our intellect is surpassed, there will still be something to be said for creativity, imagination, and drive.
 
Thanks for the thoughts, appreciated. However, you're both missing a key point: generative AI simply isn't. Rather, it's generally plagiaristic and derivative, because its training is based entirely on data stolen globally from sources across the internet.
True, but the researchers have shown that this tremendous amount of information, combined with the LLM algorithms, is enough for them to actually acquire some kinds of fairly sophisticated reasoning and a partial theory of mind. All of that was implicitly encoded in the corpus of training text, and at last the algorithms and data processing are good enough to figure it out. Plagiaristic and derivative, sure, but so are most humans.

The success of these models was a surprise to the researchers (and to me): there is nothing built into them that seems like it would make them that intelligent, no tremendous deep breakthroughs. The transformer architecture is clever and computationally efficient, but not all that amazing. R&D is now reverse-engineering some of the internal processes to figure out what the models are doing inside: synthetic neuroscience.

Given how poorly the muskrat generally handles software development (e.g. FSD, Automatic Wipers, Tesla Vision, Twitter, et al.), there's no way Grok (its name is yet another theft, this time from a "content creator" named Robert Heinlein) will be any better than the others, and this one could easily be far worse. So, while I'll watch this creation with interest, I'll not give an inch on the proposition that it is not now, nor will it likely ever be, intelligent.
Their researchers are good, but more likely Elon will lose interest soon (which will improve their performance if he leaves them alone) and then later cease funding, as there is no good business model. Anyway, picking Elon's pocket potentially provides some good if, at minimum, it provides effective competition to OpenAI, which is now the opposite of its name: entirely LockedDownSecretAI. Surprisingly, it's Facebook/Meta, led by Yann LeCun, that is the most deeply open of any major corporate AI lab these days. Zuck isn't the bad guy for once.

I am not saying that AI bots cannot be useful - quite the contrary. However, these collections of software should not be referred to as AI, because they simply aren't. Sometimes useful, interesting, capable of producing surprising and helpful responses, in most cases easily better than standard internet searches, yes, but not intelligent. They may even be uniquely helpful in realizing the dream of FSD and autonomous driving, but right now every concept or conclusion they produce must be carefully checked before it's adopted and put to use, because it may be a lie, it may be a confabulation, it may be highly biased, it may be plagiarized, and so on.
This is an important point, and one that, remarkably, was entirely unanticipated by science fiction as far as I know. Sci-fi typically assumed that the computer's superior performance and reliability on rule-based computational tasks would carry over to future AI models, with humans still superior at creativity. But it's turning out that reliability and factuality are the major unsolved problems and you need humans to check the output, while the AIs are 'creative' (within genres, not genre-breaking like actual human geniuses). Technically, the LLMs are trained by predicting the next symbol forward, and they don't have a strong notion of truth distinct from 'highly probable in my training corpus', a distinction that even small children understand well.
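That "next symbol forward" point can be made concrete with a toy sketch of my own (vastly simpler than a real LLM, and not anything from xAI's actual code): a bigram model that only learns which word tends to follow which in its training text. It will confidently generate whatever continuation was most frequent, true or not.

```python
from collections import Counter, defaultdict

# Tiny training "corpus" in which a falsehood is simply more frequent
# than the truth. The model has no notion of truth, only of frequency.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
).split()

# Count next-token frequencies for each token (a bigram model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the most probable next token seen in training."""
    return counts[token].most_common(1)[0][0]

# Greedily generate a continuation from a prompt.
token, out = "moon", ["the", "moon"]
for _ in range(4):
    token = predict_next(token)
    out.append(token)
print(" ".join(out))  # prints "the moon is made of cheese"
```

The model asserts the frequent claim, not the true one; scaling up the corpus and the architecture improves fluency and coverage, but the training objective itself never rewards truth over probability.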

Oh, and it's my understanding that as of now, nothing that comes out of a large language model can be copyrighted, so that's actually kinda cool. As a result, it appears that anything one of these engines produces falls directly and immediately into the public domain - that list would necessarily include art, text, computer code, and so on.
Probably not. If it's legally considered like an artist's paintbrush, then the wielder of the brush retains rights to the product. Likewise, if it's a work for hire, as in most employer-employee relationships, then the employer gets the copyright.
 