
Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

That's actually a hilariously valid question.


LIDAR doesn't make sense for the car, because you'd need expensive sensors in every direction, extending out over long distances, integrated in real time with cameras in every direction, at potentially high speeds--- all to get HIGHLY precise distance/velocity for a task that doesn't need nearly that level of precision. The 1 cm accuracy they're targeting with vision alone is plenty.


On the other hand, LIDAR might actually make sense for the robot. Cell phones have LIDAR now and can provide detailed 3D mapping of the nearby environment. SHORT-range, single-direction LIDAR is quite cheap, and would provide precision you might indeed care about for human-type tasks.

They probably won't do it just on principle (although Elon has said he's fine using LIDAR where it makes sense... SpaceX uses it on Dragon, for example), and can probably get what they need by moving the robot head around for parallax/triangulation, but it's still funny to think about.
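For intuition, here's a minimal sketch of how that head-motion parallax gets you depth. The numbers and function are purely illustrative (nothing Tesla has shown); it's just the standard stereo relation depth = focal length × baseline / disparity:

```python
# Minimal parallax-depth sketch (illustrative numbers, not anything Tesla has shown).
# Moving the robot's head a known distance between two views acts like a stereo rig:
# the farther away a feature is, the less it shifts between the two frames.

def depth_from_parallax(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (m) of a feature from its pixel shift between two views baseline_m apart."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between the two views")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 10 cm of head motion, feature shifts 16 px.
print(depth_from_parallax(800, 0.10, 16))  # -> 5.0 (meters)
```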
Also, I think it's funny that the immense data collection from the fleet is not enough and they are supplementing it with highly developed simulations.
 
Did GM just recall every Bolt ever made?
BUY BUY BUY!! GM SOARS! It’s that quality Union labor.
I'll just leave this beauty here - one of my favorite articles from the good ol' days (pre-stock-price surge).

 
I think Tesla is seriously considering building a chip factory.

Yes, it is expensive. The sticker price is around $20 billion.

Indeed, Elon said it's 12 to 18 months to do so, so he's definitely been thinking about it. Personally, I expect the near-term results of a partnership with Samsung and their new fab in Austin will set the course for Tesla. They tend to only bring products in-house when they cannot obtain a reliable supply of top-quality components from suppliers. Elon has already "fired a shot across the bow" of R. Bosch in Germany. Smart suppliers will pay attention to their own future prospects.


Cheers!
 
I saw it also, momentarily
me too

 
It's specific to AI/ML training though.

Which AFAIK is not a huge % of Amazon's or Google's cloud revenue - but I'm open to correction if you have specific numbers.

So while it might certainly take SOME business from them, my impression was that most of their cloud business is from more traditional computing use that Dojo is ill-suited for.


It's certainly a potentially valuable revenue stream for Tesla, but I wouldn't run out and short Amazon over it.
Yes - indeed, my bad, I should have qualified: for AI/neural network training applications - any NN training application, not just vision-oriented ones.
 
On the other hand, LIDAR might actually make sense for the robot. Cell phones have LIDAR now and can provide detailed 3D mapping of the nearby environment. SHORT-range, single-direction LIDAR is quite cheap, and would provide precision you might indeed care about for human-type tasks.
A friend of mine has LIDAR on his iPad.
It is not worth it.
Too low resolution, bad reconstruction, many software limitations (like you cannot move the object while scanning, etc.).
The image-based solutions (Microsoft Photosynth was a thing about 10 years ago) reconstruct models from images as well as the LIDAR solution does.
And by now a camera is always better considering the bang you get for the buck.
 
Looks to me you're applying for a job at Tesla Robotics. Go for it!
The application is pretty trivial. I linked to a portfolio and a paper published in the SAE Journal of Alternative Powertrains, and said I was good at packaging and systems engineering. I should have said power-electronics packaging, because that is a lot of what I have done and a lot of what a robot is.

The habit of leaving out adverbs and adjectives, and using words like "stuff", hurts the audience's recognition of what they are looking at...

But there, I gave Tesla 2 hours:

  • One to think about why wrists are so complicated [hand speed with energy efficiency], and
  • one to type into the easy, easy application.

So overall, I got a good return from AI day and Tesla on this.

Thank you for the encouragement on the second hour.

Edit: "Hand" implies dexterity to me. Another case of leaving out a word. This time the obvious function of a wrist as the foundation of ~50%? of human dexterity (leaving out language and hearing). You would have more dexterity if your wrist were 4 inches wide...
 
Agree. I went out of my way last night to text the people I know, who listened to me and bought TSLA, to tell them no matter how many shares they have they do not own enough. Buy more.

"Buy with impunity" has been my tagline for years, and it remains so.
We met with our financial advisors today. Annual meeting, but taking on more significance as we are 5-18 months out from retiring (if I had invested earlier, one or both of us would already be out). Neither of them really had deep knowledge of Tesla, though one watched the event last night, albeit not completely. Both are very successful and have a large value base. I spent 5-7 minutes educating them about the real strengths of the company, and what last night really meant. The more senior of the two got an "aha" look in his eyes, and I could see he was calculating...

As an aside, it just shows how valuable this forum is in terms of providing insight and real knowledge over what gets reported by the majority of analysts. Thank you.
 
Also, I think it's funny that the immense data collection from the fleet is not enough and they are supplementing it with highly developed simulations.

That's not surprising at all.

You HAVE to do simulation for multiple reasons, no matter how much real world data you have.

Three examples:


1) Incredibly rare situations. The couple-and-dog-jogging-on-the-highway example they showed, or the elk crossing the road. You're simply not going to gather a significant number of real-life examples, even with millions of cars in the fleet.

2) Counter-factual testing. You have a real-world case where three different parties (one of them being the Tesla) did A, B, and C respectively, and you know what happened. But what would the system do if they'd done A, X, and Y instead? Or A, Q, and R? Or A, B, and W? With simulation you can start 100 runs from the same real-world starting point, making slight changes each run, and see what happens (see the sketch after this list).

3) Accidents. You will get SOME accident data from the fleet, of course... but as the accident rate of Teslas is pretty low in general, you won't get a ton of it... and you can stack this with item 2. For example: "OK, the car fails to avoid this real-world accident in these specific road, traffic, and weather conditions... what if we run the same situation but in clear visibility? What if we run it with the car going 10 mph slower? What if we run it at a different time of day for a lighting difference?" etc...
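As a toy sketch of what item 2 might look like in code - the scenario fields and the simulator call here are hypothetical stand-ins, just to show the perturb-and-replay pattern, not Tesla's actual tooling:

```python
import copy
import random

# Toy counterfactual-replay loop. The scenario structure and run_simulation()
# are hypothetical stand-ins, not Tesla's actual simulator API.

seed_scenario = {
    "ego_speed_mph": 45,        # what the Tesla was actually doing
    "lead_car_action": "brake",
    "visibility_m": 120,
    "time_of_day": "dusk",
}

def run_simulation(scenario: dict) -> str:
    """Stand-in for the real simulator; returns an outcome label."""
    risky = scenario["visibility_m"] < 50 and scenario["ego_speed_mph"] > 40
    return "collision" if risky else "safe"

random.seed(42)
outcomes: dict[str, int] = {}
for _ in range(100):
    variant = copy.deepcopy(seed_scenario)
    # Perturb the real-world seed slightly each run: speed, visibility, lighting...
    variant["ego_speed_mph"] += random.randint(-10, 10)
    variant["visibility_m"] = random.choice([30, 60, 120, 250])
    variant["time_of_day"] = random.choice(["noon", "dusk", "night"])
    result = run_simulation(variant)
    outcomes[result] = outcomes.get(result, 0) + 1

print(outcomes)  # distribution of outcomes across the 100 perturbed replays
```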


A friend of mine has LIDAR on his iPad.
It is not worth it.
Too low resolution, bad reconstruction, many software limitations (like you cannot move the object while scanning, etc.).

Those are SOFTWARE problems - not hardware. And they have improved a lot since the original iPad implementation.


The linked article specifically mentions that iOS 14 on the new phones has much better results than the original iPad release.

The image-based solutions (Microsoft Photosynth was a thing about 10 years ago) reconstruct models from images as well as the LIDAR solution does.

Nope. See the link again:

The newest lidar-enabled version is accurate within a 1% range, while the non-lidar scan is accurate within a 5% range

That's a 5x margin of error with the image-based solution vs. the LIDAR one.

Specifically, it mentions the old iPad version you appear to have experience with only had 574 depth points per frame available... but the new iOS 14 features allow for depth maps with up to 49,152 depth points... over an 85x increase.
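For anyone checking those figures, the arithmetic is straightforward; the 2 m working distance in the last line is my own illustrative assumption, not from the article:

```python
# Sanity-checking the figures quoted above.
lidar_acc, image_acc = 0.01, 0.05        # 1% vs 5% accuracy range
print(image_acc / lidar_acc)             # 5.0   -> the "5x margin of error"

old_pts, new_pts = 574, 49_152           # depth points per frame, old iPad vs iOS 14
print(new_pts / old_pts)                 # ~85.6 -> the "over an 85x increase"

# At an assumed 2 m working distance (plausible for manipulation tasks):
print(2.0 * lidar_acc, "m vs", 2.0 * image_acc, "m")  # 0.02 m vs 0.1 m of error
```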


Again, I'm not saying they're going to use it. I'm saying there are a lot of reasons it makes no sense on the car, but several where it might make sense for a humaniform robot you want to take precise human actions, especially since you'd need far fewer sensors, and much cheaper ones at that.
 
Just to put a nail in the coffin of the "Tesla should build their own fab to make D1 chips" discussion:

Tesla needs 120 wafer-scale Tiles per Exapod. TSMC charges $10k per wafer. This means it costs Tesla only $1.2 million worth of silicon for one Exapod. Of course the packaging is very expensive for something this exotic, but we are only talking about the chip itself. Tesla doesn't need to spend $10-16 BILLION on a state-of-the-art fab to make silicon. It's absurd...
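Spelled out, the back-of-the-envelope math from that post (using its own figures):

```python
# Back-of-the-envelope Dojo silicon cost, using the figures from the post above.
tiles_per_exapod = 120          # wafer-scale training tiles per Exapod
cost_per_wafer = 10_000         # quoted TSMC price per wafer, USD

silicon_per_exapod = tiles_per_exapod * cost_per_wafer
print(f"${silicon_per_exapod:,}")       # $1,200,000 of raw silicon per Exapod

fab_cost = 10_000_000_000       # low end of the $10-16B fab estimate
print(fab_cost // silicon_per_exapod)   # ~8,333 Exapods' worth of wafers
```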
 
Very impressed with Ashok. He seemed very knowledgeable, and I loved his gentle presentation style with no hyperbole or dramatics.
Another one I was impressed with was Ingo? (really hard to make out the name he introduces himself with), the simulation engineer. One of several previously unknown (to me at least) engineers on the FSD team.
 
Does anyone have an estimate on how much Dojo costs to build? Does Dojo as an asset show up on Tesla’s balance sheet?
See the post just above yours. Yes, it would show up as a balance sheet asset, though likely buried in property, plant, and equipment; and relative to the cost of their factories, it will be a rounding error.