
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

That's not surprising at all.

You HAVE to do simulation for multiple reasons, no matter how much real world data you have.

Three examples:


1) Incredibly rare situations. The couple and dog jogging on the highway example they showed. Or the elk crossing the road. You're simply not going to gather a significant number of real-life examples even with millions of cars in the fleet.

2) Counterfactual testing. You have a real-world case where three different parties (one of them being the Tesla) did A, B, and C respectively, and you know what happened. But what would the system do if they'd done A, X, and Y instead? Or A, Q, and R? Or A, B, and W? With simulation you can start 100 runs from the same real-world starting point, making slight changes each run, and see what happens (see the sketch after this list).

3) Accidents. You will get SOME accident data from the fleet, of course... but as the rate of accidents in Teslas is pretty low in general, you won't get a ton of it... and you can stack this with item 2. For example: "OK, the car fails to avoid this real-world accident in these specific road, traffic, and weather conditions... what if we run the same situation but in clear visibility? What if we run it with the car going 10 mph slower? What if we run it at a different time of day for a lighting difference?" Etc.
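Here's a rough sketch, purely illustrative, of what that kind of counterfactual sweep could look like. The Scenario fields, the run_sim stand-in, and all the numbers are made up; this is not Tesla's actual simulation tooling.

```python
# Minimal sketch of counterfactual replay: re-run one logged real-world scenario
# many times, perturbing the other agents' behavior and the conditions each time.
import itertools
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scenario:
    ego_speed_mph: float      # speed of the Tesla at scenario start
    lead_car_action: str      # what the other vehicle does
    visibility_m: float       # sight distance (weather / lighting proxy)

def run_sim(s: Scenario) -> bool:
    """Stand-in for the real simulator: returns True if a collision occurs."""
    risk = (s.ego_speed_mph / s.visibility_m) * random.random()
    return risk > 0.4

# The logged real-world starting point.
base = Scenario(ego_speed_mph=55, lead_car_action="brakes_hard", visibility_m=120)

# Sweep counterfactual variations around that single real event.
variations = itertools.product(
    [base.ego_speed_mph - 10, base.ego_speed_mph, base.ego_speed_mph + 10],
    ["brakes_hard", "swerves_left", "maintains_speed"],
    [60.0, 120.0, 250.0],
)

for speed, lead, vis in variations:
    s = replace(base, ego_speed_mph=speed, lead_car_action=lead, visibility_m=vis)
    collided = run_sim(s)
    print(f"{s} -> {'COLLISION' if collided else 'ok'}")
```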




Those are SOFTWARE problems, not hardware, and they have improved a lot since the original iPad implementation.


This specifically mentions that iOS 14 on the new phones has much better results than the original iPad release.



Nope. See the link again.



5x margin of error with the image-based solution vs. the LIDAR one.

Specifically, it mentions the old iPad version you appear to have experience with only had 574 depth points per frame available, but the new iOS 14 features allow for depth maps with up to 49,152 depth points: over an 85x increase.
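Just to sanity-check that figure with the two numbers quoted above:

```python
# Quick check of the depth-point comparison.
old_points = 574        # depth points per frame on the original iPad release
new_points = 49_152     # depth points per frame with the iOS 14 depth maps
print(f"{new_points / old_points:.1f}x")   # ~85.6x, i.e. "over an 85x increase"
```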


Again, I'm not saying they're going to use it. I'm saying there are a lot of reasons it makes no sense to use it on the car, but several where it might make sense for a humaniform robot you want to take precise human actions, especially since you'd need far fewer sensors, and much cheaper ones at that.
Thanks for your response, and sorry my comment missed its target. I was trying to imply that others use simulations primarily to build their platforms (and are failing), while Tesla uses simulations to finish theirs (and is succeeding). In a similar vein, I agree with Elon: LIDAR has its place and applications, but not as a foundation for autonomous driving.
 
Nvidia's A100 has 54.2 billion transistors, more than D1. And we don't count a tile of 25 D1 chips, as that's a package using off-chip interconnect.
While your point is valid, the number of transistors and/or the processing speed are not the only criteria we should be using to judge Dojo.
The high-speed interconnect and the ability to scale were impressive.

Tesla having the in-house ability to design and execute a solution like Dojo was impressive.
But the in-house design approach is only truly impressive if it is the right solution.
Dojo is specifically designed for NN training, and it seems like the fastest and most cost-effective way to do that training.

But developing Dojo merely to solve FSD might have been expensive.

When Dojo can be used for robots and for NN training as a service, the cost of developing Dojo, and the ongoing development, is easier to justify.

I don't think an apples-to-apples comparison between Dojo and any other solution is possible; Dojo is sufficiently unique to render a chip-level comparison fairly meaningless.

Time taken on an NN training task, and the total cost of the training computer, are the true metrics.
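To make that concrete, here's a minimal sketch of how those two metrics could be folded into a single cost-per-training-job figure. All the dollar amounts, hours, and cluster names are invented placeholders, not real Dojo or GPU-cluster numbers.

```python
# Compare training clusters by the two metrics named above:
# wall-clock time for a fixed training job, and total system cost.
def cost_of_training_run(total_system_cost_usd: float,
                         system_lifetime_hours: float,
                         training_hours: float) -> float:
    """Amortized dollar cost of one training run on a given cluster."""
    hourly_cost = total_system_cost_usd / system_lifetime_hours
    return hourly_cost * training_hours

clusters = {
    # name: (total cost in USD, hours to finish the same reference training job)
    "gpu_cluster": (50_000_000, 120.0),
    "dojo_exapod": (40_000_000, 30.0),
}

LIFETIME_HOURS = 4 * 365 * 24  # assume ~4 years of useful service

for name, (cost, hours) in clusters.items():
    per_run = cost_of_training_run(cost, LIFETIME_HOURS, hours)
    print(f"{name}: {hours:.0f} h per job, ~${per_run:,.0f} amortized per job")
```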
 
I have to admit, I felt pretty sure Tesla would unveil robot plans. I didn't expect it to look so human, though I realize the final product could look much different. But about 30 minutes after the presentation, for the first time I really did have a little dread. Tesla's probably the only company that I have a high degree of confidence in pulling off a mass-production robot. And I'm glad they plan to engineer in physical limitations so it can't beat you up. But just on the potential to take jobs, it's pretty scary. I realize that there are A LOT of people in the world, and Tesla would have to build A LOT of robots to take away any noticeable amount of labor opportunities. But it is unsettling to think how many people's jobs are on the verge of being eliminated. Up until this point I have always regarded Tesla and its products as objectively great for the world. A robot in the product line makes that feeling a little more gray.
 
But it is unsettling to think how many people's jobs are on the verge of being eliminated.

It will take a lot of time before those robots are more intelligent than the dumbest human. And we will only know if we can reach that point when we get there. It’s not like the development of FSD has been a walk in the park. And that’s a problem with a very narrow knowledge space.
 
While your point is valid, the number of transistors and/or the processing speed are not the only criteria we should be using to judge Dojo. … Time taken on an NN training task, and the total cost of the training computer, are the true metrics.
D1 has the following merits.

1. It allows Tesla to train 4x faster at the same cost as Nvidia, mostly because Tesla no longer needs to pay the crazy, over-inflated prices Nvidia charges.

2. The interconnect is state of the art.

3. The power delivery, such as the VRMs being 3D-stacked on top of the silicon, is groundbreaking. The cooling solution is also out of this world.


When it comes to performance, it's 30% better than Nvidia, but we don't know if this is at the hardware level or from software optimization, since this is custom-made for Tesla's software stack.

So it's a little soon to call the D1 chip some alien technology; it's pretty good, but not blow-your-face-off good. The FSD computer was actually about 2.5x better than Nvidia's Pascal architecture from a performance-per-watt perspective, while D1 is 1.3x better than the A100. This new packaging, however, is alien technology.
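For what it's worth, the performance-per-watt comparison being made here is just a ratio of ratios. A trivial sketch, with made-up inputs chosen only to reproduce the 1.3x figure quoted above (not official D1 or A100 specs):

```python
# Performance-per-watt ratio between two chips. The inputs are placeholders
# picked to land on the 1.3x figure quoted above, not real specifications.
def perf_per_watt(tflops: float, watts: float) -> float:
    return tflops / watts

d1_like = perf_per_watt(tflops=520.0, watts=400.0)    # made-up "D1-like" numbers
a100_like = perf_per_watt(tflops=400.0, watts=400.0)  # made-up "A100-like" numbers
print(f"{d1_like / a100_like:.1f}x")                  # 1.3x with these placeholder inputs
```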
 
I have to admit, I felt pretty sure Tesla would unveil robot plans. … Up until this point I have always regarded Tesla and its products as objectively great for the world. A robot in the product line makes that feeling a little more gray.
This is not a new issue. 200 years ago, the vast majority of humans were peasant farmers. 90% of the jobs were growing food. Today, through mechanization, only 1% of jobs are growing food.

So what did the other 89% of people do when their jobs were automated away?

Humans are adaptable. It would not even be possible to describe what Mr. Uujjj does for a living (computer architecture) to someone from 200 years ago. Likewise, what the people of the far future will do for a living is probably indescribable within the bounds of the contemporary English language.
 
It will take a lot of time before those robots are more intelligent than the dumbest human. And we will only know if we can reach that point when we get there. It’s not like the development of FSD has been a walk in the park. And that’s a problem with a very narrow knowledge space.
Ummm....One dumb human comes to mind right now :)
 
I have to admit, I felt pretty sure Tesla would unveil robot plans. … Up until this point I have always regarded Tesla and its products as objectively great for the world. A robot in the product line makes that feeling a little more gray.

I predicted Tesla would develop their own robots more than three years ago, based on their computer vision and neural net technology. It's a natural growth path for the tech, and I said so in a conversation with @Carsonight on DISQUS. It was also the most obvious way (3 years ago) to permanently solve the chronic labor shortage in the Reno / Sparks region. Nobody there is losing a job. Worse, nobody is moving there to take an existing opening. Tesla HAS to have robots to continue to expand production at Giga Nevada.

The other concerns were all addressed on AI Day: it's 125 lbs of slow-moving, lightly powered machinery. It's in a human shape because the WORLD was designed by and for humans (octopods don't fit current jobs). Further, creating the capability to have an existing skill demonstrated by a human expert, trained into a neural net, and finally reproduced at scale by robots is a limited, specific use case with unmatchable ROI. Some jobs CANNOT be done safely by a human, even though they know clearly what must be done, and how to do it (Fukushima).

Unless you want to get out of your car to plug it in at the Supercharger when it's 20 below and blowing sleet. The robot SuC Ranger will look pretty good then.
 
Here’s a twist on recruiting and Dojo-aaS: use the app-store model to enable developers to sell Tesla Bot-hosted solutions to particular problems.

Perhaps Tesla will make their bots available early to third party developers so that these developers can create new capabilities, for which I’ll pencil in the term "caps" (anything but ‘apps’ please Elon).

I’m thinking the developers would use the dev bots along with simulations and other sources to create data sets applicable to solving a particular problem—picking grapes, weeding, ironing, operating a fry cooking station, …—then they would use the Dojo service to accelerate the training of their new capability.

Tesla could vet and distribute the caps and take a cut a la Apple's App Store.

New gold rush right there.
Well, it sounds good but there is a problem.

A malicious app can a) harvest your personal data and/or outright spy on you, b) convert your phone into a bitcoin-mining unit, or c) trick you into being robbed by stealing your identity credentials.
These scenarios are all bad.
Bad but not lethal.
A sufficiently malicious robot app can cause the Tesla Bot to harm you badly or perhaps even kill you.
Sure, Tesla can do a lot of safeguarding and simulation to weed out dangerous stuff. Tesla can and will probably build in a lot of safety features at the very core level of the robot OS.
But, in the end, the ethical risk and existential risk to the company for allowing Tesla Bots to be used in a harmful way is just ... monumental.
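To illustrate what "safety at the very core level of the robot OS" could mean in practice, here is a purely hypothetical sketch: third-party caps request motion, and a lower layer clamps every command to hard limits they cannot override. None of the names or limits below are a real Tesla Bot API.

```python
# Hypothetical core-OS safety layer: every motion command is clamped to hard limits
# that third-party capabilities cannot change.
from dataclasses import dataclass

@dataclass
class MotionCommand:
    joint: str
    torque_nm: float
    speed_rad_s: float

# Hard limits enforced below the app layer (made-up values).
MAX_TORQUE_NM = 20.0
MAX_SPEED_RAD_S = 1.5

def clamp(cmd: MotionCommand) -> MotionCommand:
    """Core safety layer: every motion command passes through here, no exceptions."""
    return MotionCommand(
        joint=cmd.joint,
        torque_nm=max(-MAX_TORQUE_NM, min(MAX_TORQUE_NM, cmd.torque_nm)),
        speed_rad_s=max(0.0, min(MAX_SPEED_RAD_S, cmd.speed_rad_s)),
    )

# A malicious or buggy third-party "cap" asks for far too much force...
requested = MotionCommand(joint="elbow", torque_nm=500.0, speed_rad_s=9.0)
print(clamp(requested))  # ...and the robot only ever sees the clamped version.
```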
 
Just to put the final nail in the coffin of the "Tesla should build their own fab to make D1 chips" discussion.

Tesla needs 120 wafer-scale tiles per ExaPOD. TSMC charges about $10k per wafer. This means it costs Tesla only about $1.2 million worth of silicon for one ExaPOD. Of course the packaging is very expensive for something this exotic, but we are only talking about the chip itself. Tesla doesn't need to spend $10-16 BILLION on a state-of-the-art fab to make silicon. It's absurd...
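Spelling out that arithmetic with the figures from the post:

```python
# The silicon-cost arithmetic from the post above.
tiles_per_exapod = 120       # wafer-scale training tiles per ExaPOD
wafer_cost_usd = 10_000      # quoted TSMC price per wafer
print(f"${tiles_per_exapod * wafer_cost_usd:,}")   # $1,200,000 of raw silicon per ExaPOD
```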
They don't need the factory right now - true.
(By the way, my original post did not constrain the chip factory to producing D1 chips.)

Tesla's stated stretch goal is 20 million cars a year in 2030.
How many Tesla Bots a year in 2030?
How big would Dojo need to get in order to support both self-driving cars and AI learning for medium to complex humanoid robot tasks? 10X? 100X? 1000X?
How many other robotics companies will follow Tesla's lead and develop different form-factor robots using Tesla silicon and Dojo as a service? And perhaps buy chips from Tesla?

Then add somewhere around 20x that amount of, by comparison, simple chips used for controlling the car or robot.
So perhaps three lines: one for Tesla Vision chips, one for misc. control chips, and one for Dojo chips.

Would supporting the annual production of the above be sufficient demand for a chip factory?

Elon imagines the future.
If something is definitely needed in 9-10 years and possibly needed in 5 years, he starts working towards that goal.
 
It will take a lot of time before those robots are more intelligent than the dumbest human. And we will only know if we can reach that point when we get there. It’s not like the development of FSD has been a walk in the park. And that’s a problem with a very narrow knowledge space.
How many humans have you met exactly? You may need to increase your sample size.
 
No demand for Model Y in Frankfurt, Germany.
Saturday morning and the queue is 50 people deep…
 

[Attached photo of the queue]
I…Up until this point I have always regarded Tesla and its products as objectively great for the world. A robot in the product line makes that feeling a little more gray.
There is another way to look at this that may result in less gray.

Tesla is working to preempt bad robots. Some would say by “trashing the [capital] market.”

Here is a graph of the total revenue delivered by a product, used as the top line to determine return on investment.
[Chart: total revenue delivered by a product]

The Tesla robot will obsolete many robot companies making more dangerous robots.

Did recordings of Louis Armstrong and Ella put live musicians out of business? Should those recordings be destroyed? As said here somewhere, people adapt.

Let’s add a requirement: operate a Hoyer lift. Hoyer Lift / Body Sling Transfer to Wheelchair
 
I have to admit, I felt pretty sure Tesla would unveil robot plans. … Up until this point I have always regarded Tesla and its products as objectively great for the world. A robot in the product line makes that feeling a little more gray.

The internet, computers, and calculators have replaced scores of jobs/businesses already. And yet here we are, with new industries: ordering stuff on Amazon, hitching an Uber, going into self-checkout lines at Home Depot, zoombombing, TurboTax, watching Netflix instead of going to the movies, to name a few.

Don’t fear change. Fearing the humanoid is the same as fearing that oil/gas companies will be out of jobs too. I don’t hear complaints about legacy auto employees needing new jobs, etc.

OT: Also, I was talking to my PA yesterday, and I don’t see why they can’t use a neural network trained to read X-rays, identify fractures, and read CT scans to relatively high accuracy. It would need thousands of labeled images and a lot of training, but it could work. It would displace some nighthawk radiologists, etc. I definitely see some medical applications for the Dojo/AWS training service too. Some urgent cares and doctors' offices are terrible at reading basic fractures (let alone hard patterns or dislocations; perilunate as an example). This application/software could be applied globally. Just random thoughts.
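For the curious, a minimal sketch of what that kind of fracture-flagging model could look like as a fine-tuning job. The folder layout, class labels, and hyperparameters are all hypothetical, and a real clinical tool would need vastly more data, validation, and regulatory work.

```python
# Fine-tune an off-the-shelf image model to flag likely fractures on X-rays.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects folders like xray_data/train/fracture and xray_data/train/normal (hypothetical).
train_set = datasets.ImageFolder("xray_data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)      # fracture vs. normal

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```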
 
If prosthetics don’t need it, the bot won’t.
It is fantastic that affordable upper limb prosthetics are becoming available. These no doubt are extremely useful and enabling for those that get them.

But consider that humans are operating these and that this is a promotional video, then look at what they can really do:
0:46 mark: from a very lucky accessible starting position a battery is grasped and picked up. It's nearly knocked to the floor instead.
2:11 mark: a can is placed into the prosthetic hand by the person's other real hand, held in place there until the prosthetic closed around it and then they could use their real hand to open the can held in place by the prosthetic.
2:14 mark: a cup is placed into the prosthetic with the real hand, then tested (by the real hand) to make sure it is secure, then the prosthetic holds it in place while they use their real hand to fill it.
2:43 mark: a tape measure is carefully placed into the prosthetic by the real hand, then carefully adjusted (by the real hand forcibly shifting it in the prosthetic grasp) so they can use their real hand to pull out the tape and make a measurement.

There are more examples in the video, but hopefully you get the point. Without tactile feedback, even with human control these hands aren't going to be anywhere near good enough for a robot to do the sorts of things people here have been talking about. A few years ago I worked on a prosthetic hand that actually had control and tactile feedback to/from direct brain implants in human patients, and they were still far from what I would call capable. Life-changing for people missing both upper limbs, but very, very far from the capability of real hands.

I'm not trying to say this is unsolvable, but it is decades, not years, away.