I'm still trying to figure out whether Optimus is learning, mimicking, or copying - each has a different level of complexity and purpose. Bear with me: I have limited knowledge of AI but extensive experience in human learning, plus basic common sense. My conclusion is that it may be doing all three combined, for the reasons given below.
Add to this that the "remote control" aspect of Optimus is merely part of the learning phase: the robot is shown the human activity via touch, movement, and vision, and can then work out how to do that activity on its own.
Agree, with a bit of a twist.
Think about "seeing vs. doing" and the "hands-on experience" gained from doing a task, even if it's just mimicking or copying (as in pure R/C).
What if the data is faster to acquire by capturing it directly from Optimus's sensors while it simply copies the teacher's motions via R/C with a bodysuit? It's like a transfer function that captures the task in Optimus's own data format, combined with actual touch data. Hey, isn't this like shadow mode on FSD, where the driver inputs the data directly into the controls? But how would you do that with Optimus?
You can't 100% drive Optimus... unless you use a bodysuit, gloves, etc. Then again, I could also argue that it doesn't take much data analysis to translate pure vision into bone kinematics for Optimus, but perhaps removing that extra step optimizes the iterative process - a step closer to real-time learning.
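To make the "transfer function" idea concrete: recording the teacher's motions alongside the robot's own sensor readings is essentially the setup for behavioral cloning - fit a policy that maps sensor data to the teacher's actions, then run it without the operator. Here's a minimal toy sketch of that idea; the sensor/joint counts, the simulated teleoperation log, and the linear least-squares "policy" are all my own illustrative assumptions, not anything Tesla has disclosed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 12 sensor readings in, 6 joint targets out.
n_steps, n_sensors, n_joints = 500, 12, 6

# Pretend teleoperation session: while the operator drives the robot
# via bodysuit, we log what the robot's sensors read (sensor_log) and
# what the operator commanded at each step (teacher_actions).
true_map = rng.normal(size=(n_sensors, n_joints))          # unknown "skill"
sensor_log = rng.normal(size=(n_steps, n_sensors))
teacher_actions = sensor_log @ true_map + 0.01 * rng.normal(
    size=(n_steps, n_joints)
)

# Behavioral cloning in its simplest form: fit a policy mapping sensor
# readings to the teacher's actions (ordinary least squares here; a real
# system would use a neural network).
policy, *_ = np.linalg.lstsq(sensor_log, teacher_actions, rcond=None)

# After the "copying" phase, the robot acts on fresh sensor data
# on its own - no operator in the loop.
new_obs = rng.normal(size=(1, n_sensors))
predicted_action = new_obs @ policy
```

The point of the sketch is just the data flow: teleoperation produces paired (sensor, action) data in the robot's own frame of reference, which is exactly what supervised learning needs - much like FSD shadow mode pairing camera frames with driver inputs.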
I have a funny feeling more will be revealed here. No way Optimus is just copying; it's the data capture, the analysis, and the speed of learning we have yet to hear about - in less than 2 weeks. And it may have to do with the compensation package - Elon knows what's coming more clearly than most.
Anyone think a DITM leap is worth the risk? Will 2 yrs out take us past the recession?