Tesla AI Day

One interesting question I have about the bot is the source of training data for domestic applications...

IMO a bot owner will automatically label objects by telling the bot what each object is. That may give the bot permission to share the image for wider training purposes.

But most bot owners won't be happy with the bot sharing video of the inside of their house, or a map of the house.

The bot can map the house via a simple walk-around, which can be done slowly.

Tesla will need to source images of house interiors from staff or a specific group of customers who give permission to share images, perhaps on an image-by-image basis or in certain rooms at certain times... Regardless, approval needs to be given in advance.
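
To make that concrete, here is a minimal sketch of what an image-by-image consent check before uploading anything for training could look like. All the names and fields are my own invention, not anything Tesla has described:

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class ConsentPolicy:
    """Hypothetical per-owner sharing preferences for bot camera data."""
    share_labelled_objects: bool      # e.g. "this is my coffee machine"
    shareable_rooms: set[str]         # rooms the owner has opted in
    allowed_hours: tuple[time, time]  # only share images captured in this window

def may_upload(image_room: str, captured_at: datetime,
               owner_approved_this_image: bool, policy: ConsentPolicy) -> bool:
    """Return True only if every consent condition is met *before* upload."""
    start, end = policy.allowed_hours
    return (policy.share_labelled_objects
            and owner_approved_this_image             # image-by-image approval
            and image_room in policy.shareable_rooms  # room-level opt-in
            and start <= captured_at.time() <= end)   # time-of-day opt-in

# Example: kitchen images approved for sharing, bedroom images never leave the house.
policy = ConsentPolicy(True, {"kitchen", "garage"}, (time(8, 0), time(18, 0)))
print(may_upload("kitchen", datetime(2022, 8, 20, 10, 30), True, policy))  # True
print(may_upload("bedroom", datetime(2022, 8, 20, 10, 30), True, policy))  # False
```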

For bot training in domestic situations I can see Tesla using simulations and perhaps using team members' houses as test cases...

This is fine; the privacy aspect isn't a show stopper, just something that needs to be considered.
 
Two non-trivial issues come to mind. Neither is a show stopper, but both stand in the way of the bot being capable of domestic applications and many industrial applications. You want to be able to say "this is dull and repetitive, it's a bot job", not have to analyse each job in detail to determine if the bot is up to it…

The presentation labeled the hands as "human level". Does this refer only to dexterity, or does it also mean touch? If the bot has expensive touch-sensitive hands, how long does the skin last? Without touch, the bot will be a klutz.

Secondly, 58 kg falling on a baby could be tragic. Can the bot possibly fall less often than humans do?

I'm excited about the project, but determined not to get overexcited too soon. I had reservations about FSD and, following AI Day, realised I was wrong. Looking forward to being wrong about the above too.
 
I had some thoughts on the bot overnight...

Thought 1 - the bot is a kind of computer/calculator on legs, hence handy storage for information, manuals, videos etc. And that is a two-way link: it can also record video and enter information... When there is a production-line fault it might be handy to have a bot around to record everything and provide information about previous fixes.

Thought 2 - a bot can handle most routine car deliveries: hand over the paperwork, walk the customer to their car, help set up phones as keys, record a video walkaround of the car at the time they inspect it prior to delivery, and note any defects or issues the customer points out. For tricky deliveries the bot can call a person... The saving here is simply on employing people to do deliveries, especially when the load varies with end-of-quarter pushes etc. For many customers the bot can enhance the delivery experience; those that don't want a bot can opt for an in-person delivery.

In terms of not falling, humans' ability to not fall varies with age and also with circumstances like slippery floors etc... A four-legged version of the bot eliminates the falling problem... The bot has the advantage cars do: the world is slow, so it can detect a possible fall earlier and put its arms down to break a fall without worrying about a broken arm... I also think the outer surface of the bot in many areas is likely to be soft, impact-absorbing foam, similar to a car seat.
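
On the "detect a possible fall earlier" point, here is a toy sketch of the sort of check I have in mind, with made-up IMU thresholds (not anything from the presentation):

```python
# Hypothetical thresholds - real values would come from testing on slippery floors etc.
TILT_LIMIT_DEG = 15.0          # lean angle beyond which stepping recovery gets unlikely
TILT_RATE_LIMIT_DEG_S = 60.0   # how fast the lean is growing

def falling(tilt_deg: float, tilt_rate_deg_s: float) -> bool:
    """Very crude fall predictor from IMU tilt and tilt rate."""
    return tilt_deg > TILT_LIMIT_DEG and tilt_rate_deg_s > TILT_RATE_LIMIT_DEG_S

def control_step(tilt_deg: float, tilt_rate_deg_s: float) -> str:
    # Because the bot walks slowly, there is time to choose a graceful response:
    if falling(tilt_deg, tilt_rate_deg_s):
        return "extend arms to break the fall, relax joints, protect head"
    if tilt_deg > TILT_LIMIT_DEG / 2:
        return "take a recovery step"
    return "continue task"

print(control_step(5.0, 10.0))    # continue task
print(control_step(10.0, 20.0))   # take a recovery step
print(control_step(25.0, 120.0))  # extend arms to break the fall...
```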

When in doubt, test - prior to domestic applications, they should test the bot out on a variety of slippery surfaces and with a variety of path obstacles.
 
Thinking back to Elon’s 25 April 2018 tweet, “Oh btw I’m building a cyborg dragon”

Surely this is the one-more-thing on AI Day. "Dragon" gives them licence to make a cool mobile robot in a non-humanoid shape that works better for functionality. Am I crazy?
A dragon would still be cool. Maybe Dojo Shanghai could train the dragon.
 
My thoughts on the bot are similar.

I think it is quite safe to say that the bot won't be interacting with the general public (consumers) for a long time. For the first years of production the bots will be used by Tesla on assembly lines, i.e. "behind closed doors" and in an already hazardous environment, so the humans around the bots are extra vigilant.

Tesla will learn a lot from deploying the bots on its assembly lines, which will lead to upgrades in the software (downloaded via OTA updates, of course), just like FSD.

Then after some time (i.e. when it can do enough useful tasks), the Tesla bot will be good enough to sell to other manufacturers/professionals, at lofty prices, since prices should still be dropping and the software should still be improving.

Only some time after that, and I'm talking around 10 years from now at the earliest, will the Tesla bot be freely purchasable for use in homes. In my mind this is more like 2035-2040, but I'd love to be proven wrong.

Go Tesla!
 
It is also true that using the bot internally helps Tesla's overall mission by allowing them to make and deliver cars more cheaply.
In the factory environment, the bot software can be heavily optimised for one particular task... not all bots need to be running the same version of the software, or even the same application... Some tools can be developed to integrate with the bot and help it do jobs better...
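
To illustrate the "not every bot runs the same application" idea, a hypothetical fleet assignment table might look something like this (all names and versions invented):

```python
# Hypothetical fleet config: each bot gets a task-specific app and model version,
# so a parts-ferrying bot and a fastener-sorting bot need not share software.
FLEET_CONFIG = {
    "bot-017": {"application": "parts_ferry",      "model": "ferry_v3.2", "area": "general assembly"},
    "bot-018": {"application": "fastener_sort",    "model": "sort_v1.7",  "area": "body-in-white"},
    "bot-019": {"application": "factory_cleaning", "model": "clean_v0.9", "area": "paint shop"},
}

def software_for(bot_id: str) -> str:
    cfg = FLEET_CONFIG[bot_id]
    return f"{bot_id}: {cfg['application']} ({cfg['model']}) deployed to {cfg['area']}"

for bot in FLEET_CONFIG:
    print(software_for(bot))
```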

Tesla can slowly increase the range of jobs the bot can do with improved versions of the software.

Eventually I can see the bots doing "factory cleaning", and that is a good basis for developing the techniques needed for "domestic cleaning".

IMO bots can bridge some of the gap between humans and factory automation; eventually they can be trained to move faster and repeat the same task with higher precision and less variability... Perhaps not as fast and consistently as regular factory robots, but bots can be redeployed to other tasks more easily as requirements change... Installing bots may remove the need to upgrade the regular factory robots, or they may be able to bridge the gap when staff are absent or machinery breaks down...
 
“Consider two types of robots. The first robot paints cars in a factory. We want car-painting robots to be fast, accurate, and unchanging. We don’t want them trying new spraying techniques each day or questioning why they are painting cars. When it comes to painting cars on an assembly line, single-purpose, unintelligent robots are what we need. Now say we want to send a team of robot construction workers to Mars to build a livable habitat for humans. These robots need to use a variety of tools and assemble buildings in an unstructured environment. They will encounter unforeseen problems and will need to collaboratively improvise fixes and modify designs. Humans can handle these types of issues, but no machine today is close to doing any of this. Mars construction robots will need to possess general-purpose intelligence.”

— A Thousand Brains: A New Theory of Intelligence by Jeff Hawkins
 
I'm suspicious this is the actual reason why.

My speculation:

30th September is coincidentally the last day of Q3, and by the time AI Day 2.0 airs, the end-of-quarter delivery push will be complete.

I suspect that HW4 will have started shipping on deliveries going forward by 30th September, and the timing here is to avoid an Osborne effect.

Something similar happened with Autonomy Day in 2019... HW3 had already started shipping by the time that event came around.

I still expect an Optimus preview, but I suspect that is not the main reason at play here.

Speculation ends.

James Douma did a great interview on his thoughts about Optimus. Interestingly, he doesn't think the image of the stainless-steel robot hands shown during the shareholder meeting is real in any way.


Reasons why Optimus might be easier than FSD, at least for performing boring, repetitive jobs in a factory.

Safety
FSD controls a multi-ton machine zooming around outdoors at lethal velocity, with risk of catastrophic damage to life, limb and property if the wrong decisions are made even once. Teslabot is a ~125 lb machine with a maximum speed of 5 mph operating in a controlled indoor environment that, at least for initial applications, could have humans excluded from the vicinity. Teslabot simply requires drastically lower levels of reliability to be considered functional and be allowed to operate without human supervision.
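
Back-of-envelope numbers support this. Assuming roughly 2,000 kg for a car at 65 mph versus a ~57 kg (125 lb) bot at 5 mph (my figures, not Tesla's), the kinetic energies differ by several thousand times:

```python
def kinetic_energy_j(mass_kg: float, speed_m_s: float) -> float:
    return 0.5 * mass_kg * speed_m_s ** 2

MPH_TO_M_S = 0.44704

car_ke = kinetic_energy_j(2000, 65 * MPH_TO_M_S)  # ~845,000 J
bot_ke = kinetic_energy_j(57, 5 * MPH_TO_M_S)     # ~142 J

print(f"car: {car_ke:,.0f} J, bot: {bot_ke:,.0f} J, ratio ~ {car_ke / bot_ke:,.0f}x")
```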

FSD also has to work within tight constraints on processing speed for the inference engine, because at highway speeds it will travel a meter in 30 milliseconds. Traveling too far while waiting to make a decision is unacceptably risky. Teslabot doing a repetitive factory task can afford to take more clock cycles to perform more computations to reach a higher level of certainty about a classification decision before acting upon that information. If I understand correctly how the FSD computer works, this means Teslabot could use deeper or wider neural nets if needed to improve inference performance.
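
The "1 m in 30 ms" figure, and the extra headroom the bot has, can be sanity-checked with a quick calculation (the speeds and latencies below are my assumptions):

```python
def distance_per_inference_m(speed_m_s: float, latency_s: float) -> float:
    """How far the platform moves while one inference/decision is pending."""
    return speed_m_s * latency_s

highway_speed = 33.3  # ~120 km/h, which is where "a meter in 30 ms" comes from
bot_speed = 2.2       # ~5 mph walking pace

print(distance_per_inference_m(highway_speed, 0.030))  # ~1.0 m per decision for FSD
print(distance_per_inference_m(bot_speed, 0.030))      # ~0.07 m for the bot at the same latency
print(distance_per_inference_m(bot_speed, 0.300))      # ~0.66 m even with 10x the compute time
```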

Simplicity
FSD has to deal with a huge variety of potential environmental conditions and problems to solve. The lighting varies, the weather varies, and there is a factorial explosion of possible combinations of lane lines, road designs, pavement materials, other road users and their behaviors, etc. FSD needs to be able to solve almost every imaginable setup to reach the required level of reliability. Nothing has more degrees of freedom than reality.

Teslabot will initially be restricted to boring, repetitive tasks, operating in a controlled factory environment with almost perfectly constant lighting conditions, sheltered from all weather. It will also generally only need to know how to do one thing, and by virtue of only doing one thing it will have less trouble with overfitting to the training data, because the training data will be almost perfectly representative of the data it sees at test time (i.e. on the job).
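
A toy illustration of that train/test-distribution point, using entirely synthetic data: a model fitted on a narrow, stable "factory" distribution keeps its accuracy when deployment data looks the same, but degrades badly when the deployment distribution shifts, as it constantly does on public roads:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, x_low, x_high):
    """Synthetic 1-D task: the true relationship is mildly non-linear."""
    x = rng.uniform(x_low, x_high, n)
    y = np.sin(x) + 0.05 * rng.normal(size=n)
    return x, y

# "Factory" training data: a narrow, stable range of conditions.
x_train, y_train = make_data(500, 0.0, 1.0)
coeffs = np.polyfit(x_train, y_train, deg=3)  # fit a cubic

def mse(x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

x_same, y_same = make_data(500, 0.0, 1.0)    # deployment data looks like training data
x_shift, y_shift = make_data(500, 2.0, 4.0)  # "road" deployment: conditions never seen

print(f"error on matching distribution: {mse(x_same, y_same):.4f}")    # small
print(f"error on shifted distribution:  {mse(x_shift, y_shift):.4f}")  # much larger
```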

Sensors
FSD has to compensate for factors that distort camera readings, such as vibration and rain/dust/other debris occluding the housing over the lens. A factory Teslabot would have minimal vibration and no sources of occlusion.