Welcome to Tesla Motors Club

Elon & Twitter

Status
Not open for further replies.
Musk will now be distributing his version of "truth".


Elon Musk says he’s working on “TruthGPT,” a ChatGPT alternative that acts as a “maximum truth-seeking AI.” The billionaire laid out his vision for an AI rival during an interview with Fox News’s Tucker Carlson, saying an alternative approach to AI creation was needed to avoid the destruction of humanity.
 
I mean, given how often ChatGPT makes stuff up, including inventing sources, papers, court case cites, etc. that don't actually exist, a truthful AI would be nice... but there's still vastly less I in AI than a lot of people seem to think. LLMs are mostly really fancy context-aware autocomplete at this point.
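The "fancy autocomplete" framing can be made concrete with a toy next-word predictor. This is only a bigram counter, far simpler than a real LLM, but the objective is the same in spirit: predict the next token from past text. Note that nothing in it checks whether the output is true.

```python
# Toy next-word predictor: count which word follows which in a corpus,
# then always emit the most frequent successor. It produces fluent-looking
# continuations with no notion of truth whatsoever.
from collections import Counter, defaultdict

def train(corpus):
    """Build a table mapping each word to a Counter of its successors."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def complete(follows, word, n=4):
    """Greedily extend `word` by the most common successor, n times."""
    out = [word]
    for _ in range(n):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

model = train("the court cited the case the court cited the ruling")
print(complete(model, "the"))  # "the court cited the court"
```

The output is grammatical-sounding but meaningless, which is a miniature version of how an LLM can confidently "cite" a case that doesn't exist.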
 
Tucker thinks community notes is "great"!
But in its current design, Community Notes doesn’t address lies that are divisive, according to current and former employees, and an analysis of its open-source algorithm by Bloomberg News. When volunteers add fact-checking notes to tweets, they only become visible to the public if users from a “diversity of perspectives” are able to agree that a note is “helpful.”
The data shows that approximately 96% of all fact-checking notes contributed by the Twitter community didn’t pass through to public view. That means to date, more than 30,000 notes have gone undisplayed on Twitter for failing to meet the algorithm’s broad-consensus requirements, according to a Bloomberg analysis of Twitter’s data.
For example, a recent tweet from the Family Research Council, an Evangelical lobbying group opposed to abortion and LGBT rights, said that “Abortion is never medically necessary to save the life of a mother,” an assertion rejected by the American College of Obstetricians and Gynecologists.

The same day the abortion tweet was posted, a Community Notes volunteer added a clarifying note, which then was judged by the other volunteers. Twitter’s system used votes from 169 users, filtering out some ratings that were deemed to be “low quality” using a complex ranking system. Ultimately, 164 of the 169 votes, or 97%, found the note “helpful,” but it wasn’t enough.
Misinformation can travel further while the thousands of fact checks wait for enough votes from varied perspectives. For example, former UFC fighter Jake Shields recently said the National Institutes of Health had “added Ivermectin to its list of treatments” for Covid-19.

Despite no evidence that the NIH ever took this step, a note correcting the tweet was unable to attract a single positive rating from right-leaning users. To date, the post, which has been shared more than 45,000 times on Twitter, lacks any clarification.
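The behavior Bloomberg describes, where 97% overall approval still isn't enough, follows from requiring agreement across perspectives rather than a simple majority. Here is a hypothetical sketch of that kind of "bridging" rule; the thresholds and group labels are invented for illustration, and the real open-source Community Notes algorithm is considerably more involved than simple ratios.

```python
# Toy model of a cross-perspective consensus rule: a note is shown only if
# raters from *every* perspective group find it helpful often enough.
# Thresholds here are made up; this is not Twitter's actual algorithm.

def note_is_shown(ratings, overall_min=0.8, per_group_min=0.5):
    """ratings: list of (group, helpful) pairs, e.g. ("left", True)."""
    if not ratings:
        return False
    helpful = [h for _, h in ratings]
    if sum(helpful) / len(helpful) < overall_min:
        return False
    for g in {grp for grp, _ in ratings}:
        votes = [h for grp, h in ratings if grp == g]
        if sum(votes) / len(votes) < per_group_min:
            return False  # one perspective effectively vetoes the note
    return True

# 164 of 169 raters found the note helpful (97%), but if the 5 "unhelpful"
# votes are the only right-leaning raters, the note still fails.
ratings = [("left", True)] * 164 + [("right", False)] * 5
print(note_is_shown(ratings))  # False: no right-leaning support
```

Under a rule like this, a note with broad but one-sided support stays hidden, which matches both the 96% non-display rate and the Ivermectin example above.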

 
I mean, given how often ChatGPT makes stuff up, including inventing sources, papers, court case cites, etc. that don't actually exist, a truthful AI would be nice... but there's still vastly less I in AI than a lot of people seem to think. LLMs are mostly really fancy context-aware autocomplete at this point.

I don't think he's specifically going for a truthful AI. Probably just one that won't refuse to say racial slurs.
 
Musk will now be distributing his version of "truth".


Elon Musk says he’s working on “TruthGPT,” a ChatGPT alternative that acts as a “maximum truth-seeking AI.” The billionaire laid out his vision for an AI rival during an interview with Fox News’s Tucker Carlson, saying an alternative approach to AI creation was needed to avoid the destruction of humanity.
But if he is correct that AI is a mortal threat to humans ... does he really want to create something that could potentially decide to seek out and destroy those who spread false assertions?
I mean, it will know where he is even more precisely than ElonJet.
 
Full interview for anyone that cares to watch it:

Tucker had problems comprehending the dangers of AI. Elon could have spelled that out better.
Tucker had to ask what the danger was about 3 times before Elon even addressed it directly. Elon just went off on tangents. Even then, Tucker had to translate the answer for the audience later during the interlude, before he went on with the wacko conspiracy stuff.
 
But if he is correct that AI is a mortal threat to humans ... does he really want to create something that could potentially decide to seek out and destroy those who spread false assertions?
I mean, it will know where he is even more precisely than ElonJet.

RoboCop was able to get around the programming that prevented him from arresting a criminal executive of the company that built him, once the company fired the executive in question. 🤔
 
Tucker had to ask what the danger was about 3 times before Elon even addressed it directly. Elon just went off on tangents. Even then, Tucker had to translate the answer for the audience later during the interlude, before he went on with the wacko conspiracy stuff.

Yeah, this interview was very educational even if Tucker sometimes needed to help. /s
Elon also pointed out a metaphor vs. a literal meaning.

And we are learning that the assault on truth is coming from the left and the "politically correct"... am I missing something or were all the examples that way? Tucker seemed to be enjoying the political import. Sometimes it even seemed that Tucker was steering the conversation to be more moderate, like he was the more reasonable person.
 
Yeah, this interview was very educational even if Tucker sometimes needed to help. /s
Elon also pointed out a metaphor vs. a literal meaning.

And we are learning that the assault on truth is coming from the left and the "politically correct"... am I missing something or were all the examples that way? Tucker seemed to be enjoying the political import. Sometimes it even seemed that Tucker was steering the conversation to be more moderate, like he was the more reasonable person.
It was really interesting because he did at many points seem to be the reasonable person, until the glued-on conspiracy theory rants that he ended each interlude with. His face would change when he did them. His eyes would glaze. Like he was going from a normal guy to a crazy fanatic. Maybe it is painful to him.
 
But if he is correct that AI is a mortal threat to humans ... does he really want to create something that could potentially decide to seek out and destroy those who spread false assertions?
I mean, it will know where he is even more precisely than ElonJet.
Yeah, when he signed on to that letter, plenty of people joked he was probably doing it to try to slow down ChatGPT, so his own AI would catch up. Didn't expect that to be proven so quickly.
 
Yeah, this interview was very educational even if Tucker sometimes needed to help. /s
Elon also pointed out a metaphor vs. a literal meaning.

And we are learning that the assault on truth is coming from the left and the "politically correct"... am I missing something or were all the examples that way? Tucker seemed to be enjoying the political import. Sometimes it even seemed that Tucker was steering the conversation to be more moderate, like he was the more reasonable person.
Tucker Carlson is the more moderate person. He just says crazy stuff for the money. Elon Musk is a classic libertarian, and now that he's rich he says crazy stuff because he's no longer constrained by the money.
 
Like I said, I can't imagine being that s****y of a father or person.
I recall a foreign TV show where a conservative father found out that his adult son is gay and was very upset. Someone suggested that the father shouldn't think of it as a decision or as a negative reflection or weakness of his son. It should be thought of as a birthmark (i.e., a characteristic his son was born with that is neither good nor bad). So not only is it not something bad, it is also something the person has no control over. There is no reason to be mad at his son for a birthmark.

But the rich and powerful are surrounded by sycophants and they get used to getting their way and thinking they're always correct without any corrections from anyone.
 
I mean, given how often ChatGPT makes stuff up, including inventing sources, papers, court case cites, etc. that don't actually exist, a truthful AI would be nice... but there's still vastly less I in AI than a lot of people seem to think. LLMs are mostly really fancy context-aware autocomplete at this point.

I think in this sense AI will only ever become more fanciful, not more intelligent in any distinct sense (as long as it is running on deterministic processors).

However, I'd guess (nothing more than a guess) that LLMs can be designed/trained to be more fact-oriented, and/or have more math and content logic integrated. What impresses me about GPT-4 is its ability to create internal models of quite complex topics, and I'd think that will remain useful for future versions that can distinguish fact from fiction.
 