Nice closeup:
It might lack hands and be unable to do anything useful, but at least you can kick it!
Looks similar to Tesla Bot for $90K.
Will we have something similar in two years for $45K?
I'll buy one for $2,500 if it can do the dishes.
When will it be able to clear trenches in Ukraine?
I had always assumed that the bot would be trained by humans wearing cameras. But there's no reason why Optimus can't translate from a third-person view if it is trained in that manner. That would solve many issues.
Optimus can pick up tires and do other factory work, and this is already happening. It can be trained by simply watching videos.
This video really impressed me. For one thing, Optimus's movement is becoming really smooth, but beyond that, it's just amazing to be able to watch the AI engine and training process being built in the early stages.
A more traditional company like Apple, for instance, likely wouldn't even show a product like this at all until it was ready for the market, but Tesla takes the opposite approach. In order to build their real-world AI engines, they need to have the robots operating in the real world as early and as much as possible. This lets them collect massive amounts of data and refine the training engine.
To me this shows that Tesla really will be the leader in real-world AI going forward. It also suggests that companies like Apple may not be able to develop this type of tech without changing their old-school approach.
People of the earth, I have hairs sticking up on my neck. Speed and precision. Like the steering on the car eventually smooths out, and like the deer who just starts walking and then figures it out on its own. I just saw the future!
This is exactly why I will need to buy my other shares back soon. This could also mean a production miss as they try to juice the share price. I could be wrong, but just yesterday, I believe, I said the robot might show us something cool.
(Edit to fix stutter.)
Still only impressive if you understand what's going on under the hood. Otherwise, Boston Dynamics looks light-years ahead.
Quick takeaways.
The robot is a lot less shaky.
The control algorithm for the sorting is quick to adapt to disturbances while still staying smooth, indicating that it is pure RL, not just high-level RL.
Hands are next level, so much more impressive than anything else I have seen.
Doing robot yoga is not super hard, but at least that's not a problem anymore.
Before when you compared Optimus to Atlas you could point at some things that Atlas did a lot better. Not very important ones, but visual ones to impress the masses. Now that advantage has decreased a lot, meanwhile Tesla's advantage in scalability is still there.
It will be interesting to follow the development. They are probably pushing hard on all levers, so we are seeing multiple improvements per demo, and there are likely many things they are not demoing. This demo indicates that they believe in using neural networks even more for control; maybe FSD 12 has given Elon an indication of how the Optimus software will end up.
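The "quick to adapt to disturbances" point above is about closed-loop control: an end-to-end policy re-plans from the latest observation every tick instead of replaying a precomputed trajectory. A minimal sketch (the gain and 2-D setup are invented for illustration, not anything Tesla has published):

```python
def policy(block_pos, hand_pos):
    """Hypothetical end-to-end policy: maps the latest observation
    directly to a hand velocity command on every control tick."""
    # Move the hand a fixed fraction of the remaining error.
    return tuple(0.5 * (b - h) for b, h in zip(block_pos, hand_pos))

def run(block_pos, hand_pos, steps=50):
    for t in range(steps):
        # Simulated disturbance: someone nudges the block mid-task,
        # like the engineer moving the Legos in the video.
        if t == 20:
            block_pos = (block_pos[0] + 0.3, block_pos[1])
        v = policy(block_pos, hand_pos)  # re-plan from fresh state
        hand_pos = tuple(h + dv for h, dv in zip(hand_pos, v))
    return hand_pos, block_pos

hand, block = run(block_pos=(1.0, 0.5), hand_pos=(0.0, 0.0))
# The hand converges on the block's final position despite the nudge.
```

Because the policy only ever looks at the current state, the nudge is just another state to react to; an open-loop script would have reached for where the block used to be.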
Watching it a few times: when the guy moves the Legos to showcase how Optimus can react, it's very obvious that the whole video is sped up, which is a little disappointing.
Still, I realize it’s pretty impressive
Yeah, but I can forgive them since this is a general audience video and real time would have made it too slow for most people to watch.
These working prototypes are interesting. They have a lot more heavy duty wires visible than I had seen before.
I’m impressed. I’m still wondering how they are going to get non engineers to “train” them for specific tasks.
I would imagine it makes more sense to train it to watch a person, just as any human trainee would, than to hire a bunch of people to do things with a bulky set of cameras on their heads.
If it can learn to mirror the actions of a person standing in front of it, teaching it becomes like teaching any other person.
I don't think anyone needs to be an engineer for training. I believe they are just having the worker wear the stereo head gear and feeding Dojo raw video of the task.
Calibrate the nodes, which I see as a separate function, combined with aligning the viewing angle to the robot's eyes in vector world, much like the bird's-eye view in our vehicles. The trainer may not need headgear; it could be an overhead stereo camera (IMO). What's the difference, really? Then load Block Sorting V1.0 for the task and press go.
There are so many FSD parallels here. (Not an expert, just how I interpret the video based on prior demos and FSD).
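The "record a worker, feed raw video, press go" workflow described in the last few posts is essentially imitation learning: reproject the camera view into the robot's frame, then fit a policy on (observation, action) pairs by supervised learning. A toy sketch, with every function name and the trivial linear "policy" invented purely for illustration:

```python
def reproject_to_robot_view(frame):
    """Stand-in for mapping a third-person / overhead camera frame
    into the robot's egocentric 'vector world' frame."""
    return [x * 0.9 for x in frame]  # placeholder transform

def collect_demos(video):
    """video: list of (frame, human_action) pairs from the recording."""
    return [(reproject_to_robot_view(f), a) for f, a in video]

def train_policy(demos, lr=0.1, epochs=200):
    """Behavioral cloning in miniature: fit action ~ w * obs[0]
    by stochastic gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for obs, action in demos:
            pred = w * obs[0]
            w -= lr * (pred - action) * obs[0]  # squared-error gradient
    return w

# Tiny fake demonstration: human actions are proportional to what
# the robot sees, so the learned weight should settle near 1.0.
demo_video = [([1.0], 0.9), ([2.0], 1.8), ([3.0], 2.7)]
w = train_policy(collect_demos(demo_video))
```

The real system would use video frames and a large network rather than one scalar weight, but the shape of the loop is the same: transform the human's viewpoint into the robot's, then regress actions from observations.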
While there may be some limitation I'm not familiar with that requires that, it seems unlikely and very un-Musk.
Watching my own kids grow up: they first learn to do stuff with their hands shortly after realizing that the thing darting across their line of vision so often is their hand, and that they have control of it.
Eventually, when they master that, they learn to replicate things by watching you do them.
There’s no reason Optimus wouldn’t follow the same path.
And in fact that path would also lower all barriers to teaching it.
Edit: I can hear Musk in a meeting in my head right now: “A person can learn to do a job just by observing another person, so why does Optimus have to have a first-person perspective to learn a task? Make it work.”
Hmm.
At 0:24, it appears to me that the tumbling block is in real time; I'm undecided whether the human's arm movement is sped up.
I'm just wondering how many Optimi they could get into GigaTexas and train over the past couple of weeks.
jk
I have a 5-year-old boy. When Optimus drops the block on its side, pauses, and then picks it back up and sets it upright, it reminds me exactly of my boy three years ago, when he was a 2-year-old playing with blocks.
What's my point?
In my opinion, it's apparent that this is not a learned maneuver, but rather something that occurs as a result of actively thinking and learning while performing the task. To me, this is even more impressive than ChatGPT, etc., because it demonstrates that Optimus is literally using its neural network in an active manner, similar to how humans learn a task. One could argue that ChatGPT will apologize when a human points out a mistake in a previous response, but that only happens when ChatGPT is prompted by a human and actually shown its error. Optimus, without any human prompting, corrected a mistake it made on its own and repositioned the block in an upright state.
TLDR: Optimus is, in my opinion, closer to artificial general intelligence than any previous application of neural networks that I have witnessed.
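The correction described above, whatever actually produced it, behaves like a verify-then-recover loop: check the world state after each action and insert a recovery step when the outcome is wrong, rather than replaying a fixed script. A toy sketch, with the state names and one-time "fumble" entirely made up:

```python
def perceive(world):
    """Stand-in for perception: report the block's current state."""
    return world["block"]  # e.g. "upright", "tipped", "in_hand"

def step(world, action):
    # Minimal simulated dynamics, including a one-time fumble.
    if action == "place" and world.get("fumble"):
        world["block"], world["fumble"] = "tipped", False
    elif action in ("place", "right_the_block"):
        world["block"] = "upright"
    return world

def sort_block(world, max_tries=3):
    for _ in range(max_tries):
        world = step(world, "place")
        if perceive(world) == "upright":  # verify, don't assume
            return world
        world = step(world, "right_the_block")  # recovery action
    return world

done = sort_block({"block": "in_hand", "fumble": True})
# The first place attempt tips the block; the loop notices and rights it.
```

An open-loop "learned maneuver" has no branch like that `if`; the interesting claim in the post is precisely that Optimus's behavior includes one.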
Production ramp is due to start in November, which could be just 5 weeks away. Maybe the Cybertruck launch event has been delayed to coincide with the bot launch...
Where did they say bot production is supposed to ramp in Nov? Any other details? Exciting stuff!
At 33 mins: manufacturing ramp starts in November with the new high-volume Tesla actuators.
Should be doing useful things in Tesla factories in 2024.
Optimus arms and legs can be used to create a Cyborg body combined with Neuralink.