Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla autopilot HW3

Agreed. The FSD coast-to-coast claims, the 500k M3s by end of 2018 claims... all of that is unwarranted. He only gets a pass from his fanbase because he achieves more than anyone else in the industry.

Yes, I for one would not have bought into Tesla via a car purchase if I weren't convinced the man has many redeeming features.

Have also seen the somewhat cynical argument plausibly made that a generous degree of bull-shitting talent was always required to make Tesla a success in the very tough post-2008 business environment.
 

Software is a PITA to estimate. It doesn't work right up until it does...
(and then you test more and find out it doesn't)
It's even worse when dealing with third-party libraries or an OS: then you have your own problems, plus theirs, to deal with.
 
Lex Fridman recently said “I don’t think Silicon Valley has ever met a deadline,” which I found funny. Elon is the id of Silicon Valley, so of course he manifests this even more than most, while also manifesting other Silicon Valley qualities, like disrupting old industries and changing the world, more than most.

Before judging Elon too harshly, take note the next time you text someone “I’ll be there at ___” and see how accurate you are :p
 
I know that if I say “I’ll be there at 11” I’ll wake up later than if I said “I’ll be there at 10”... I think Elon is setting difficult goals on purpose.
 
Audi has delayed their e-tron because of software glitches. Software is never on time, and it’s like writing a book: never finished. As a leader, Musk has to set goals, and he likes to set them high.

Some do not agree with his approach, and that is why we have Aurora. I am very curious how well they are doing, considering that for the past two years Tesla has been getting rid of their code.

One thing Musk has been good at is picking a direction. One example would be Gigafactory 1, another Mobileye, and lastly AP3. None of his decisions are perfect, but they are good enough, and so far better than those of all the other players.
 
@chillaban @Electroman I do think this comes down to perspective. @Bladerskb is obviously an autonomous driving technology enthusiast and possibly someone in the technology side of the business themselves. He/she has shown repeated interest in the technology more so than in the cars themselves.

So taken from this perspective, it is not unfathomable to view the EyeQ3 that ran AP1 as superior to Tesla Vision as shipped at this time. Paired with a similar number of cameras and sufficient chips, it seems likely to me that even EyeQ3 would deliver a superior experience (let alone EyeQ4). It seems to have the more mature computer vision, with those traffic signs, but also a more mature vision engine for stabler and wider-ranging identification of cars and objects internally.

That is the key word here of course. Internally.

Because @S4WRXTTCS is right too. For many of us this is not just about watching technology providers, because we don’t work on those products and can’t drive them. We can drive the cars we can buy, and there, for all its and their faults, Tesla offers a bleeding-edge product. This is what makes Teslas interesting, for sure, and a choice for many of us. And indeed it means that, finally, AP2 delivers in many cases more than AP1 does for the consumer.

That said, we are left to wonder what wonders Mobileye and Tesla could have delivered together if they had kept aligning the aggressiveness of Tesla’s implementation with Mobileye’s mature internal self-driving components... AP1, compared to the rest of the industry at the time, is some proof that it could have been exciting... Think of what Tesla could be doing by now if they had shipped EyeQ4s with eight cameras...

Mobileye was a dead end until Intel acquired them; by that time Tesla had invested heavily in their own system and had reason to believe they had come up with a superior solution, plus they no longer have to be scolded by Mobileye.
 

I disagree completely. Intel bought Mobileye because they had a great roadmap.

The Mobileye EyeQ4 we are looking at today, doing the Level 4 demos etc., is the same system Tesla was reviewing (and indeed took their tri-focal setup from) back in 2016.

Mobileye is much underappreciated in Tesla circles, I find.
 
According to EE Times, the upcoming Mobileye EyeQ5 will deliver 24 TOPS at 10 W on a 7 nm process. If Tesla's new HW3 can use 7 nm tech, that'll be great. But I'm more interested in why Tesla decided to build their own chip instead of using Nvidia's new chips, because the new Nvidia Xavier is a massive improvement over the PX2 Tesla is currently using.

money
 

Four reasons, I think:

1) Money (as iTech just said): you can develop a chip for only around $50 million. You can save money at large enough volumes by cutting out Nvidia’s profit margin.

2) Time: I believe Tesla can get its own chips into its cars faster than the Nvidia chips by skipping all the steps you need to take to turn something into a commercial product. I might be wrong about this, though.

3) Control: Tesla can update the chip’s firmware at will. It can also change the hardware design if it wants. If there is a hardware issue (e.g. difficulty integrating the chip with other vehicle components), there is a better chance Tesla can fix it itself instead of waiting on Nvidia, which has a lot of other priorities.

4) Reducing computational overhead: someone at Tesla, maybe Jim Keller, mentioned computational overhead could be reduced through vertical integration. This is true with iPhones, which often run faster than Android phones that have faster processors and more RAM.

Are Nvidia’s GPUs faster at running deep neural networks than custom ASICs? That would be one obvious reason, but I’m not sure it’s true.
 
Isn't it obvious that an ASIC is faster than a GPU? The whole idea of an ASIC is that it is faster.

E.g. in bitcoin mining, ASICs have completely overtaken GPUs.
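To put rough numbers on that, here is a back-of-the-envelope sketch. The hash rates and power draws below are illustrative round numbers chosen to show the ratio, not benchmarks of any specific product:

```python
# Illustrative GPU-vs-ASIC efficiency comparison for SHA-256 mining.
# All figures are assumed round numbers, not measurements.

def hashes_per_joule(hash_rate_hs: float, power_w: float) -> float:
    """Energy efficiency: hashes computed per joule of energy."""
    return hash_rate_hs / power_w

gpu  = hashes_per_joule(1e9,   200)   # ~1 GH/s at ~200 W (assumed)
asic = hashes_per_joule(14e12, 1400)  # ~14 TH/s at ~1400 W (assumed)

print(f"ASIC is ~{asic / gpu:.0f}x more energy-efficient than the GPU")  # ~2000x
```

The same logic applies to neural-network inference: silicon dedicated to one workload wastes far fewer transistors and joules than general-purpose hardware.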
 
The problem is this is all a gamble. Maybe they can get AI to do it, maybe HW3 will be powerful enough... or maybe not, and cheap, compact lidar will make it all obsolete before it's even released.

And they royally screwed themselves by selling it to customers for years when it's probably 5+ years away.
 

I'm pretty sure they have been running an FSD-capable NN on non-automotive HW with logged data for a while. Those processing requirements were the basis for the new chip in HW3. The two rounds of employee FSD sign-ups show that the new HW is showing much potential.

Lidar still needs processing, driving logic, stop light recognition and such. Its main advantage is a quick path to solid object detection.
 
I disagree completely. Intel bought Mobileye because they had a great roadmap.

The Mobileye EyeQ4 we are looking at today, doing the Level 4 demos etc., is the same system Tesla was reviewing (and indeed took their tri-focal setup from) back in 2016.

Mobileye is much underappreciated in Tesla circles, I find.


Mobileye was a dead end for Tesla because of their refusal to let Tesla loose in their development. Mobileye were afraid that Tesla accidents would reflect badly on their product.

Bottom line: Mobileye were not ready to accept the perceived risk from Tesla's development, while Tesla did not want to give up their competitive edge. That is why the Mobileye relationship was a dead end.

Another reason was that the EyeQ4 was vaporware until Intel acquired them.
 
I'm pretty sure they have been running an FSD-capable NN on non-automotive HW with logged data for a while. Those processing requirements were the basis for the new chip in HW3. The two rounds of employee FSD sign-ups show that the new HW is showing much potential.

Lidar still needs processing, driving logic, stop light recognition and such. Its main advantage is a quick path to solid object detection.

But what does this mean in practice? They had some employees with prototype AP3 shadowing them? Because their letter states that they did zero autonomous miles, i.e. any miles they drove were under human control at least to the extent Level 2 AP requires (hands on wheel, paying attention).

Lidar data is much easier to process because each sample includes spatial information. With cameras, the spatial information has to be determined by the NN before a model of the environment can be built, and it seems like Tesla has realized they need the ability to compare multiple frames, as a cheap analogue of stereo vision, to do that. Hence the need for more processing power.
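To make the contrast concrete: a lidar return is already a 3D point, while a camera system has to infer depth, e.g. from the disparity between two views or two frames. A minimal sketch (the focal length, baseline, and disparity values are made-up illustrative numbers):

```python
import math

# A lidar return is already spatial: range + angles convert directly to x, y, z.
def lidar_point(r, azimuth_rad, elevation_rad):
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A camera has to *infer* depth, e.g. from stereo (or multi-frame) disparity:
#   depth = focal_length * baseline / disparity
# The disparity itself comes from expensive per-pixel matching, which is the
# part the NN has to do before any 3D model of the environment exists.
def stereo_depth(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

print(lidar_point(10.0, 0.0, 0.0))      # a 10 m return straight ahead
print(stereo_depth(1000.0, 0.3, 15.0))  # assumed camera values
```

The per-pixel matching step is what drives the extra compute requirement for a camera-first approach.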
 
Mobileye was a dead end for Tesla because of their refusal to let Tesla loose in their development. Mobileye were afraid that Tesla accidents would reflect badly on their product.

Bottom line: Mobileye were not ready to accept the perceived risk from Tesla's development, while Tesla did not want to give up their competitive edge. That is why the Mobileye relationship was a dead end.

I think you are right, and it turned out pretty badly for Tesla. Mobileye has a better rep, competitive tech, and a much clearer path forward.
 
But what does this mean in practice? They had some employees with prototype AP3 shadowing them? Because their letter states that they did zero autonomous miles, i.e. any miles they drove were under human control at least to the extent Level 2 AP requires (hands on wheel, paying attention).

I'm saying, employees are out there commuting with Model 3s running FSD code on HW3.

As long as the 'driver' has to click a steering-wheel button or apply wheel torque (at a level below disengagement) every 'nag value' seconds, the testing does not require reporting.
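The nag logic described here amounts to a countdown timer that resets on driver input. A hypothetical sketch; the 30-second interval and the torque threshold are made-up illustrative values, not Tesla's actual parameters:

```python
# Hypothetical driver-attention "nag" timer: any acknowledged driver input
# (button press, or wheel torque below the disengagement threshold) resets
# the countdown; letting it expire triggers a warning.
# All thresholds are illustrative, not Tesla's real values.

DISENGAGE_TORQUE_NM = 2.5   # torque above this would disengage instead (assumed)
NAG_INTERVAL_S = 30.0       # seconds between required driver inputs (assumed)

class NagTimer:
    def __init__(self):
        self.elapsed = 0.0

    def tick(self, dt, wheel_torque_nm=0.0, button_pressed=False):
        """Advance time by dt seconds; return True if the nag should fire."""
        if button_pressed or 0.0 < wheel_torque_nm < DISENGAGE_TORQUE_NM:
            self.elapsed = 0.0          # driver input acknowledged, reset
        else:
            self.elapsed += dt
        return self.elapsed >= NAG_INTERVAL_S

nag = NagTimer()
assert not nag.tick(29.0)                       # still within the window
assert not nag.tick(5.0, wheel_torque_nm=1.0)   # light torque resets the timer
assert nag.tick(30.0)                           # no input for a full interval
```

Since the human remains the fallback at every moment, the system stays Level 2 and the miles do not count as autonomous for reporting purposes.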
 
According to EE Times, the upcoming Mobileye EyeQ5 will deliver 24 TOPS at 10 W on a 7 nm process. If Tesla's new HW3 can use 7 nm tech, that'll be great. But I'm more interested in why Tesla decided to build their own chip instead of using Nvidia's new chips, because the new Nvidia Xavier is a massive improvement over the PX2 Tesla is currently using.

It's still a GPU. The TPU from Tesla is estimated to be more than twice as fast as Xavier (and more than twice as fast as EyeQ5). Per TPU. And there are two of them. Hardware designed for tensor processing tends to be a lot faster at evaluating tensor models than any general-purpose GPU.

I'm not sure why 7 nm vs. 10 nm or even 14 nm is even relevant here, nor the wattage consumed. It matters a lot in mobile electronics, because you're power-constrained, so performance per watt matters. But Tesla's HW3 is strapped to a giant battery. :)
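As a rough sanity check on those ratios: commonly cited peak figures are about 72 TOPS per HW3 chip (with two chips per board), about 30 TOPS for Nvidia's Xavier, and the 24 TOPS quoted above for EyeQ5. Treating peak TOPS as the only variable, which ignores memory bandwidth, utilization, and numeric precision:

```python
# Rough peak-throughput comparison using commonly cited TOPS figures.
# Real-world NN throughput depends heavily on memory bandwidth and
# utilization, so treat these ratios as order-of-magnitude only.

tops = {
    "Tesla HW3 (per chip)": 72,
    "Nvidia Xavier": 30,
    "Mobileye EyeQ5": 24,
}

hw3_board = 2 * tops["Tesla HW3 (per chip)"]  # HW3 carries two chips

print(f"HW3 chip vs Xavier: {tops['Tesla HW3 (per chip)'] / tops['Nvidia Xavier']:.1f}x")
print(f"HW3 chip vs EyeQ5:  {tops['Tesla HW3 (per chip)'] / tops['Mobileye EyeQ5']:.1f}x")
print(f"Full HW3 board:     {hw3_board} TOPS")
```

That works out to roughly 2.4x Xavier and 3x EyeQ5 per chip, consistent with the "more than twice as fast" estimate above, before doubling for the second chip.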