Welcome to Tesla Motors Club

Amnon Shashua on The Emergence and Dangers of Reasoning in LLMs

Amnon Shashua's talk at the Hebrew University's Celebrating the AI Revolution conference, held in honor of his scientific work. In the talk, Shashua dives deep into the revolution of reasoning in Large Language Models (LLMs): the advancements and the dangers.


Quick Notes

2023 LLMs are like the "2007 iPhone moment": they may not be great yet, but they mark the start of a big change.

Reasoning is the holy grail of AI. Classical ML can generalize, but only within the distribution of data it has been trained and tested on. Reasoning is the ability to generalize outside that distribution.

AI has not achieved reasoning yet, but it is not a big stretch to think we will get there.

"In-context learning": with prompts, humans can teach the machine how to do a task, and it will then generalize better. If you just give examples without an instruction prompt, the AI will not generalize as well.
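
The idea above can be sketched as plain prompt construction. This is a minimal illustration, not code from the talk; the task, instruction text, and example data are invented for demonstration. The point is that the instruction plus worked examples form a single in-context prompt, with no weight updates involved.

```python
# Hypothetical sketch of building a few-shot, in-context-learning prompt.
# The instruction, examples, and query are illustrative placeholders.
def build_prompt(instruction, examples, query):
    """Combine an instruction, worked input/output examples, and a new
    query into one prompt string for an LLM."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # The model is asked to continue the pattern for the new query.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    instruction="Convert each date to ISO 8601 format.",
    examples=[("March 5, 2021", "2021-03-05"), ("July 14, 1998", "1998-07-14")],
    query="January 2, 2023",
)
```

Dropping the `instruction` line and keeping only the examples gives the "examples without prompts" setup the talk says generalizes worse.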

"Training on code": OpenAI's Codex model, which powers GitHub's "Copilot" coding assistant, was trained not just on natural-language text but also on source code. A by-product is that the AI learned how to translate things into formal language. As a result, the AI is better at writing code than at answering complex free-form questions, because code is a formal language, and AI does better with formal languages.

Safety of AI

Can LLMs be "aligned" to keep them safe? No.

Can self-driving cars be "aligned" to keep them safe? Yes. See Mobileye's work on Responsibility-Sensitive Safety (RSS), which provides formal safety rules.
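
To make "formal safety rules" concrete, here is a sketch of RSS's longitudinal safe-distance rule: the minimum gap such that if the front car brakes as hard as possible, the rear car (after a reaction delay and worst-case acceleration) can still brake gently and avoid collision. The formula follows the published RSS definition, but the numeric parameter values below are illustrative placeholders, not Mobileye's calibrated numbers.

```python
def rss_safe_longitudinal_distance(
    v_rear, v_front, rho=0.5, a_max_accel=3.0, a_min_brake=4.0, a_max_brake=8.0
):
    """RSS minimum safe gap (meters) between a rear and a front car.

    v_rear, v_front: speeds in m/s; rho: reaction time in s;
    a_* values: accelerations in m/s^2 (placeholder values).
    """
    # Worst case: rear car accelerates at a_max_accel during its reaction time.
    v_rear_after = v_rear + rho * a_max_accel
    d = (
        v_rear * rho
        + 0.5 * a_max_accel * rho**2
        + v_rear_after**2 / (2 * a_min_brake)  # rear car then brakes gently
        - v_front**2 / (2 * a_max_brake)       # front car brakes maximally
    )
    return max(d, 0.0)  # a negative result means any gap is safe
```

A rule like this is checkable and provable, which is why Shashua argues self-driving systems can be "aligned" in a way general AI cannot.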

Can general AI be "aligned" to keep it safe? No. We cannot "tame" general AI.

How to mitigate risk of AI

Limiting AI to be less capable than GPT-4? Not a good idea, and not practical: competitors will not follow that rule, so you will just be left behind.

Use human feedback to try to "tame" the AI? No, because there will always be ways to trick the machine.

Using reinforcement learning "in the wild", letting the AI interact with humans while optimizing a reward function? That should be regulated or prohibited.

Cybersecurity for AI: limit AI access to key infrastructure? Yes. We limit human access to certain sensitive infrastructure; AI should be restricted too. But it will be tricky, since the AI may find a way around the restrictions.

Conclusion

The interface between man and machine is fundamentally changing: the computer is shifting from being a tool to being an assistant. A computer assistant will be useful for writing code, writing content, summarizing large amounts of text, and more efficient search.

The computer "assistant" will make mistakes and can be manipulated, but it can still be useful to humans.

The rise of "reasoning" will create a new era of machines that can be taught by humans to perform tasks with out-of-distribution generalization.