Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
I have used ChatGPT to have conversations in a language which has few written sources or learning resources. I looked up some "prompt engineering" prompts to get it going in the right direction, and had it output word frequency lists with both target-language and English words for a spaced repetition app to use.
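For anyone curious, the word-frequency-list part is easy to script yourself once you have some text. A minimal sketch in Python - the `glossary` dict and the CSV field order are my assumptions, real decks and apps differ:

```python
from collections import Counter
import csv
import io

def frequency_list(corpus: str, glossary: dict[str, str]) -> list[tuple[str, int, str]]:
    """Count word frequencies in a target-language corpus and pair each word
    with an English gloss from a (hypothetical) glossary dict."""
    words = [w.strip(".,;:!?\"'()").lower() for w in corpus.split()]
    counts = Counter(w for w in words if w)
    # Most frequent first -- the order a spaced-repetition deck should teach them.
    return [(w, n, glossary.get(w, "?")) for w, n in counts.most_common()]

def to_csv(rows) -> str:
    """Render as CSV that spaced-repetition apps like Anki can import
    (field meanings vary by app -- here: word, gloss, frequency)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for word, count, gloss in rows:
        writer.writerow([word, gloss, count])
    return buf.getvalue()
```

In practice you'd feed in as much target-language text as you can collect, and have the LLM fill in the glosses for the top N words.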

I think people who are unimpressed are using single prompts, giving it a quick go instead of iteratively improving and building on prompts/source material that others have already refined. Some prompts can check for factual inaccuracies to minimise hallucinations.

If you're using AI prompting as an assist to your normal work, you can build up a series of prompts, tricks and add-ins. You're likely to be more productive than peers once you get into the swing of it.
 
Seems like Copilot has a pretty good take on what's been going on in this thread lately... Plus it could moderate too!
View attachment 1017725 · View attachment 1017726

I hate to keep going on about this, but I wonder if the outputs are biased towards more positive sentiments and perspectives wrt your question, so you may be influenced to feel good about the outputs and keep using the AI :)
 
IMO a very important paper. They trained a relatively small LLM to take only a chess position as input and output the strongest move - a very challenging dataset. The AI got a 2900 blitz rating on this alone. No search. The conclusion is that we may be underestimating how incredibly intelligent these LLMs are if we give them good datasets. Which bodes very well for Optimus, FSD etc…

It's not a language model, and 270M parameters isn't exactly small. And most of all, it uses training signals from the internal evaluations of a very powerful conventional chess engine.

It's a demonstration of the flexibility of generic architectures as distillation, but only from tremendous amounts of data. I think it means lots of data + stochastic gradient descent + O(N²) transformers are pretty good.

10 million games × a prepared evaluation of every position in each of them. Intelligent humans have to play chess without any of that. And the chess engine Stockfish 16 is still better.
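To make the distillation point concrete, here's a toy sketch of the pipeline being described: label every position with a teacher engine's preferred move, then train a student network by supervised learning on those labels. The engine here is a stub heuristic standing in for Stockfish, and all the names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class LabelledPosition:
    fen: str        # board position (FEN string)
    best_move: str  # the teacher engine's top move -- the distillation target

def teacher_eval(fen: str, legal_moves: list[str]) -> dict[str, float]:
    """Stand-in for a strong engine's evaluation: score every legal move.
    Here just a toy heuristic that prefers captures, purely for illustration."""
    return {m: (1.0 if "x" in m else 0.0) for m in legal_moves}

def build_distillation_set(games: list[tuple[str, list[str]]]) -> list[LabelledPosition]:
    """For every position in every game, record the teacher's best move.
    The student network is then trained by supervised learning (cross-entropy
    on best_move given fen) -- behavioural cloning of engine-plus-search,
    with no search at inference time."""
    data = []
    for fen, legal_moves in games:
        scores = teacher_eval(fen, legal_moves)
        best = max(scores, key=scores.get)
        data.append(LabelledPosition(fen, best))
    return data
```

The expensive part in the real paper is exactly the labelling step: running a strong engine over every position of 10 million games before any training happens.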
 
I find it pretty useful both personally and in my job in finance. A couple of examples:

  • Employee annual reviews. I type the key points in short form and ask ChatGPT to convert them into corporate speak. I also get it to change the tone of the message if it sounds too negative. A gripe turns into an exciting opportunity to improve.
  • When learning a new industry I upload all the relevant public regulation and get a decent summary of the context.
  • When justifying positions I ask what are the main reasons for doing x to see if there's any good ideas I can add to bolster the argument that I might have missed.
I find that ChatGPT is generally useful as a brainstorming tool - it's pretty good at coming up with lists. For example, my teenager had to write a current events essay and had no idea what topic to pick. So we went on ChatGPT and asked it for suggestions ("What are some potential topics for an essay on current events?") and then we asked it to break down a couple of topics that sounded interesting (what are some subtopics, what are the common perspectives on the topic, what websites or organizations can help me dig deeper, etc.).

Actual ChatGPT writing is still pretty lame (my kids still write their own essays), but I can see why some people use it for generic emails. Ironically, many people use it to get a summary of emails...

Personally, I've just used custom templates/'canned messages' when I worked in retail educating/troubleshooting with clients. More control, and I'm a fast typer anyway.

Hmm, how to bring this back to TSLA... I think the big progression is using voice to text and vice versa, so I expect AI language models to eventually have conversations with things like our cars while driving. I think it makes sense to restrict them to limited sets of data, like asking your Model X questions that can be answered from its user manual. One application is gaming: creating AI characters with limited data sets they have access to. I've played with Inworld a bit, and you can set it so the character will not speculate or make up data beyond what you've programmed it with. I think that could be extremely helpful in many areas. Manuals, rulebooks, textbooks - but it can summarize instead of just finding keywords (a slight advantage over searching for specific words/phrases in a PDF, which is what I still primarily do when possible).
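That "won't speculate beyond its data" behaviour is basically a retrieval gate in front of the model. A toy sketch of the idea - keyword overlap instead of a real retriever or LLM, and the threshold and names are my assumptions:

```python
import re

def answer_from_manual(question: str, manual_sections: dict[str, str]) -> str:
    """Toy keyword retrieval over a manual: return the best-matching section,
    or refuse rather than speculate when nothing overlaps enough -- the
    'limited data set' behaviour. (A real system would hand the retrieved
    text to an LLM to summarise; this only shows the grounding gate.)"""
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    best_title, best_score = None, 0
    for title, text in manual_sections.items():
        section_words = set(re.findall(r"[a-z]+", (title + " " + text).lower()))
        overlap = len(q_words & section_words)
        if overlap > best_score:
            best_title, best_score = title, overlap
    if best_score < 2:  # not enough grounding in the manual -> don't make things up
        return "I can't find that in the manual."
    return f"{best_title}: {manual_sections[best_title]}"
```

The gate is the key design choice: answers are only ever composed from retrieved manual text, so the assistant can summarize but not invent.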

With Tesla already embracing AI and having a close association with Grok, it's a step beyond the current voice commands. I wouldn't mind having a conversation with my car (aka doing some brainstorming) on a long drive...
 
270M is a lot less than the 1.5-year-old GPT-4, which reportedly had 1.76T parameters. Llama 2's smallest size is 7B, which I can run on my MacBook at 40 tokens/second. So yeah, it's relatively small by today's standards.

They are not distilling a neural network; they are distilling a neural network plus search. The point is that the data is very high quality and very difficult - it's not like "hot dog or not". But the LLMs are still able to figure out how to do this. Before, people were saying that GPT-4 was bad at chess, but that was mainly because the chess data in the GPT-4 dataset was bad.

This gives me an intuition for how well FSD v12 could perform on a dataset of extremely good drivers driving in extremely difficult situations, for example India, like Ashok likes to talk about. Or Optimus, if Optimus was trained on extremely good chefs cooking or extremely good bed making.
 
Sora is mind blowing and will be another ChatGPT moment for the world. On the same day Google Deepmind released this:

Example:

Imo the world is starting to feel more and more like I imagined the beginning of the singularity would feel. We have a loop of improvements, and AI is doing more and more of the work in it. First as a tool used by humans for one task, then for more and more tasks, then for entire blocks of work, then for the entire thing. Then the Singularity... And imo we are not doing enough alignment research, and people are being very naïve about what the future will look like. I am very worried about our future as humans.
 
Not all doom 'n' gloom... thought this one was hilarious.

 
I wonder if Ilya going silent is related to Elon suing Sam. Maybe Ilya saw something, wanted to kick Sam out, failed, and Sam gave himself full control of OpenAI. Elon likes Ilya and tweeted "where is Ilya"; maybe Ilya went to Elon's team of hardcore lawyers and they decided they had a case?!
 