Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

My main takeaway from the first video is that we should regulate the big actors. Imo the problem with this is that the small actors today can do what only the big actors could do a few years ago. Sometimes the small actors are almost ahead of the big actors; see for example Llama 2, Midjourney, etc. So this seems like a very temporary solution. Maybe useful to do, but not enough to ease our worry.

My main takeaway from the second video is that we should use AI to prove that AI is safe: even if we are not smart enough to understand the AI itself, at least we can verify the proof. This sounds nice, as long as everyone always uses these proof requirements. But as technology improves, more and more people will have the resources to run these experiments, and can we really ensure that all of them will run these proof-checkers all the time? Humanity being saved by mathematical proofs sounds like something a math guy would want to believe and support.
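The asymmetry the talk leans on is a classic one: producing a proof (or certificate) can be very hard, but checking it can be cheap. A toy sketch of that idea, using SAT as a stand-in (this is just an illustration of the prove-once/verify-cheaply principle, not the actual verification scheme proposed in the talk):

```python
# Toy illustration of the prover/verifier asymmetry: finding a satisfying
# assignment for a CNF formula is hard in general (here: brute force),
# but checking a claimed assignment -- the "proof" -- takes linear time.
from itertools import product

def check_assignment(clauses, assignment):
    """Cheap verification: does the assignment satisfy every clause?

    Each clause is a list of signed 1-based literals, e.g. [1, -2]
    means (x1 OR NOT x2).
    """
    return all(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in clauses
    )

def find_assignment(clauses, n_vars):
    """Expensive search: try all 2^n assignments (the 'prover' side)."""
    for bits in product([False, True], repeat=n_vars):
        if check_assignment(clauses, bits):
            return bits
    return None

# (x1 OR x2) AND (NOT x1 OR x2)
clauses = [[1, 2], [-1, 2]]
proof = find_assignment(clauses, 2)
assert proof is not None and check_assignment(clauses, proof)
```

The point being: you don't need to trust (or even understand) whoever found the proof, you only need to trust the small, auditable checker. The worry in the post stands, though: the checker only helps if people actually run it.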

I come away from these talks less optimistic.
 
Super impressive results; this will kill so many apps and jobs. Makes me want to renew that GPT-4 sub...

(Screenshots attached.)

 
Without question, I am seriously ill-equipped to understand present-day innovations in AI. Still, I retain a serious interest: I spent a decade working directly with LISP on commercial expert systems in the early 1980s, and later on applications while consulting with some industry leaders. I have rarely thought much about IBM in the last few years, but this article struck me as a potentially significant development for easing distributed AI applications in computationally limited settings, say, cars, robots, airplanes, etc.

Is this really likely to be a material advance in the applicability of, say, FSD and Optimus?