GPT-3 will change everything

heltok

We should talk about GPT-3. Imo GPT-3 will change all software going forward, change the internet, change all forms of working with computers, all interactions. And GPT-3 is just a taste of what is to come; in one year we will have GPT-4, which will be even more capable. Here are some samples of what the internet has done with GPT-3 in the few weeks since OpenAI opened an API to a bunch of programmers:

https://twitter.com/pavtalk/status/1285410751092416513
https://twitter.com/OthersideAI/status/1285776335638614017
https://twitter.com/amasad/status/1285789362647478272
https://twitter.com/sharifshameem/status/1283322990625607681
https://twitter.com/super3/status/1284567835386294273
https://twitter.com/BenjaminBadejo/status/1284732056140951552
https://twitter.com/sharifshameem/status/1284095222939451393

There are hundreds of similar examples... They blow my mind. And this is just the start... I have never seen such an incredible explosion of ideas in such a short time... Not with deep learning, not with crypto... It scares the *sugar* out of me...

Q: Will this not change everything? What are the consequences of this?
 
Ummm, help me out here... what does GPT-3 mean? And why do you guys do that without explaining what the acronym means first lol

GPT-3 is short for Generative Pre-trained Transformer and is the third generation of the machine learning model.

In short, it's an AI tool.

Here is how GPT-3 basically works:

"At its core, GPT-3 is an extremely sophisticated text predictor. A human gives it a chunk of text as input, and the model generates its best guess as to what the next chunk of text should be. It can then repeat this process—taking the original input together with the newly generated chunk, treating that as a new input, and generating a subsequent chunk—until it reaches a length limit."

Some are freaking out because it can do some pretty cool stuff:

"GPT-3’s language capabilities are breathtaking. When properly primed by a human, it can write creative fiction; it can generate functioning code; it can compose thoughtful business memos; and much more. Its possible use cases are limited only by our imaginations."

But some say it is way overhyped because it is not actually "thinking" and can do some pretty dumb stuff:

"But GPT-3 possesses no internal representation of what these words actually mean. It has no semantically-grounded model of the world or of the topics on which it discourses. It cannot be said to understand its inputs and outputs in any meaningful way.

Why does this matter? Because it means that GPT-3 lacks the ability to reason abstractly; it lacks true common sense. When faced with concepts, content, or even phrasing that the Internet’s corpus of existing text has not prepared it for, it is at a loss.

It is straightforward to ask GPT-3 simple questions that it is unable to handle intelligently. As an example, consider the following exchange with GPT-3, courtesy of Kevin Lacker:

Human: How many eyes does a giraffe have?

GPT-3: A giraffe has two eyes.

Human: How many eyes does my foot have?

GPT-3: Your foot has two eyes.

Human: How many eyes does a spider have?

GPT-3: A spider has eight eyes.

Human: How many eyes does the sun have?

GPT-3: The sun has one eye.

Human: How many eyes does a blade of grass have?

GPT-3: A blade of grass has one eye."

GPT-3 Is Amazing—And Overhyped
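For what it's worth, the Lacker exchange above is produced by packing the whole dialogue into a single prompt and letting the model continue it. A minimal sketch of that pattern follows; the complete function is just a placeholder for whichever API client you use, and its canned reply simply mirrors the quote (nothing here is OpenAI's actual code):

def complete(prompt):
    """Placeholder for an API call; a real client would return the model's continuation."""
    return " The sun has one eye."  # canned reply mirroring the exchange quoted above

dialogue = (
    "Q: How many eyes does a giraffe have?\n"
    "A: A giraffe has two eyes.\n"
    "Q: How many eyes does the sun have?\n"
    "A:"
)

# The model just keeps predicting plausible-looking text after "A:", whether or not
# the question makes sense -- which is exactly the failure mode described above.
print(dialogue + complete(dialogue))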
 
Sure it makes mistakes. Sure it is expensive. Sure it takes a while to run. TODAY... But look at what it can do that very few people one year ago thought it would be able to do today: write software code, design apps, etc. What do you think GPT-4 will struggle with next year?

Imo AI went from chimpanzee to undergrad-level software developer in a few years. The distance in evolution to PhD machine learning expert is not very far... That is my main worry. But in the short term, name one type of work that uses a computer that will not be affected by this or something similar once it has become mainstream, i.e. cheap, fast and with the quirks worked out?
 
I think you're begging a rather important question there: will it ever be good enough to openly perform significant failure-sensitive tasks autonomously?
 
Of course it will get better, and yes, it will improve everything we do with computers. That's the nature of technological development. Our smartphones do stuff today that would have seemed like sci-fi just 10 years ago. I am sure that someday we will be able to fully converse with our computers and they will converse back, understand us and anticipate our needs. I think we are getting closer to fully intelligent artificial personal assistants. That would not surprise me one bit.
 
AI & tech development in general isn't a linear timeline. It's an ever-branching 3D roadmap covered by dense fog. That's why we can say both: "Smartphones can do things we never imagined just a decade ago!" as well as "We were supposed to all have flying cars by 2015!"

There's been a lot of cherry-picking of GPT-3 results. Here's a well-balanced evaluation of the implications thus far: Tempering Expectations for GPT-3 and OpenAI’s API | Max Woolf's Blog
 
The list of caveats:
Model Latency
Selection Bias Toward Good Examples
Everyone Has The Same Model
Racist and Sexist Outputs

Are these gonna be major showstoppers without workarounds? Or is this like saying electric cars will not take over because batteries are expensive, there is not enough charging infrastructure, deaf people cannot hear them, people like their loud engines...

To me these sound like temporary technical challenges or non-issues. Give it one or two years and each of these should be solved...