Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Confession: once I saw a post that I was pretty sure was incorrect. I was too tired to answer it myself, so I just asked ChatGPT to explain why it was incorrect. I pasted the response and got lots of upvotes because the answer was presented in such detail, in a long post with good formatting, etc. It did a better job of posting than I would have...

So anyway, saw this one:

RIP Internet, it was fun while it lasted and sorry for being part of the downfall...
 
Hm... Sounds just like drug development. So this is standard practice all around?
Not quite. Drug development is typically based on university research that licenses out a patent, but even from there a tremendous amount of work is needed to push through clinical trials, and the venture is structured as a normal corporation from the beginning.

Elon doesn't know anything about AI himself, but his characterization of the sleaze Altman pulled on OpenAI is correct. It was fraud. It was named OpenAI for a reason, but it is now entirely closed. They have published nothing.

Somewhat ironically, it's the information hoarder Zuckerberg and Meta who have been the most authentically open about their ML developments and research, in significant part because of Yann LeCun, I believe.
 
Elon doesn't know anything about AI himself
For someone who doesn't know anything about AI, Elon has been very lucky with his decision making over the years: investing heavily in getting data, recruiting Karpathy, buying DeepScale, developing an alternative to GPUs as a backup in case they get too expensive, developing in-house inference hardware, going heavily into robotics right before everyone else did, betting on vision, etc. It's almost as if he actually is pretty good at doing the physics-first analysis of AI. Too bad for Tesla that the CEOs of Toyota, VW, Ford, GM, BMW, etc. had much better foresight and could outclass Tesla in this domain.
 

Imo AGI is quickly approaching Deep Blue levels at text-based questions. Sure, it might not be better than every human at everything today, but it has rapidly gotten better than 90% of humans at 90% of the tasks we have tests for. And the limit is not reaching best-human level at everything; it will be more like Stockfish, which has an 800 Elo rating advantage over Magnus Carlsen. I.e., the difference between LLMs and Terence Tao/Ed Witten will be like the difference between Magnus Carlsen and me... in every task where a comparison is meaningful.
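For scale, the standard Elo model lets you put a number on what an 800-point gap means. A minimal sketch (the `expected_score` helper is illustrative, not from any library):

```python
def expected_score(elo_diff):
    """Expected score of the weaker player against an opponent
    rated elo_diff points higher, per the standard Elo formula:
    E = 1 / (1 + 10^(diff/400))."""
    return 1.0 / (1.0 + 10.0 ** (elo_diff / 400.0))

# An 800-point gap gives the weaker player an expected score of
# about 0.0099, i.e. under 1% -- near-total dominance.
print(expected_score(800))
```

So "Stockfish is 800 Elo above Carlsen" translates to Carlsen scoring roughly one point per hundred games, which is the kind of gap the post is projecting between future systems and the best human experts.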
 
As we get closer to building AI, it will make sense to start being less open. The Open in OpenAI means that everyone should benefit from the fruits of AI after it's built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).


 

Imo AGI is quickly approaching Deep Blue levels at text-based questions. Sure, it might not be better than every human at everything today, but it has rapidly gotten better than 90% of humans at 90% of the tasks we have tests for.
A usual human working environment problem is something like "well what do you think we should work on next?"

And the limit is not reaching best-human level at everything; it will be more like Stockfish, which has an 800 Elo rating advantage over Magnus Carlsen. I.e., the difference between LLMs and Terence Tao/Ed Witten will be like the difference between Magnus Carlsen and me... in every task where a comparison is meaningful.
Ed Witten and Terry Tao come up with useful and interesting new research directions, short and long term. An AI that answers textbook-like questions has shown no sign of that so far. They might find assistants that can usefully scan and summarize the research literature helpful.
 
Claude has been gaining 20 IQ points with each version:

[attachment: 1709776734345.png]
 
12 Questions for Sam Altman:
  1. Why did you argue that building AGI fast is safer because it will take off slowly since there's still not too much compute around (the overhang argument), but then ask for $7T for compute?
  2. Why didn't you tell Congress your worst fear?
  3. Why did you tell Musk that the leader of OpenAI probably shouldn't be on its board, yet you just had yourself re-appointed to it?
  4. Why did you also tell him that OpenAI employees would not be compensated in OpenAI equity, when in fact they are now given hundreds of thousands of dollars in it every year?
  5. Why were you fired from YCombinator?
  6. Why, when you were fired from YCombinator, did you publish a blog post on their website announcing yourself as chairman without any authorization or agreement to do so?
  7. Why are you, in the form of OpenAI, clearly breaking GDPR data protection laws (as the Italian regulator has determined), abusing user data, and likely breaching similar laws in other jurisdictions?
  8. How much of an increase in unemployment do you expect AI to cause?
  9. What is the future of meaning? How do you see people finding meaning in their lives in the future you intend to bring into being?
  10. Do you think you should have as much power over the future as you do?
  11. Will you support calls for an internationally upheld pause on the advancement of AI capabilities and a grand-scale focused effort on AI safety research, via an international treaty? And if not, why not?
  12. Where is Ilya?