Welcome to Tesla Motors Club

MIT’s Deep Learning for Self-Driving Cars: 2019 edition!

How does Lex Fridman keep a straight face when Elon says they’ll be hands off the wheel in 6 months?
Was that what he said? I thought he said that maybe by the end of this year (or he'd be really surprised if not next year), human intervention would decrease safety.
I think the six months was the time frame for when hands on the wheel would no longer be needed, due to refinement and validation of NoA.
 
Listen to the question at 17:10. He was asked how long Tesla's system would require supervision by a human being.
Now that I've watched it a couple more times, it's possible he's only talking about the type of monitoring, though that doesn't seem likely since Tesla doesn't have any other way of doing monitoring.
I agree with what he says about validation, though it really doesn't seem like that much driving is required to get a statistically significant sample. I'd bet 100 million miles would be more than enough, and that would be very cheap to do if your test drivers were also transporting paying customers, as Waymo is doing.
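A back-of-envelope sketch of that sample-size point. The crash rates and the normal-approximation test below are my own illustrative assumptions, not figures from the talk:

```python
# Hedged estimate: miles needed so a simple two-rate comparison
# (normal approximation to a Poisson test) can separate an assumed
# autonomous crash rate from an assumed human baseline at ~95%
# confidence. Both rates are made-up illustrative numbers.
baseline_rate = 1 / 500_000    # assumed human rate: 1 crash per 500k miles
system_rate = 1 / 1_000_000    # assumed autonomous rate: half the baseline

def miles_needed(r0, r1, z=1.96):
    # Require |r0 - r1| * m > z * sqrt((r0 + r1) * m), then solve for m.
    return z**2 * (r0 + r1) / (r0 - r1) ** 2

m = miles_needed(baseline_rate, system_rate)
print(f"~{m / 1e6:.0f} million miles")  # ~12 million with these rates
```

With these assumed rates, on the order of ten million miles suffices, which is consistent with the intuition that 100 million miles would be more than enough.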
 
Is computing power what's limiting other companies' autonomous vehicles?
I always assumed these systems were very amenable to parallelization, so limited computing power was not an issue.

Yes, I'd say computing power is a massively limiting factor for production vehicles.

Wanna do a DARPA Grand Challenge? Then sure, load up the dash, cabin, back seats, cargo space, whatever, with as much computing hardware as you want. Heck, use the outside of the vehicle as well. You don't care about aerodynamics or cargo space in a DARPA Grand Challenge.

Now when you get back to production cars, you only have whatever space won't be missed inside the dash, between the firewall (or its equivalent for EVs) and the surfaces the driver sees. That's a volume limit, a heat limit, and a power limit.

Tight spaces with no room for massive heat sinks mean either slowing down the CPU to run passively, keeping it at some intermediate speed using fans and airflow, or keeping it at a higher speed using a liquid cooling loop or something more exotic. Every step up that continuum means more power, not only for the CPU but also for the cooling solution.

The existing solution for AP 2.x drew hundreds of watts. Vampire drain and Wh/mile are affected the more power it consumes. If they put in 10x the computing power using the old paradigm, it'd be too costly, but it'd also be hotter and louder, and people would notice the range loss.
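To put rough numbers on that. Every value below is an illustrative assumption, not a measured figure for any Tesla computer:

```python
# Back-of-envelope: how a constant compute load shows up in Wh/mile,
# range, and vampire drain. All numbers here are assumptions.
compute_watts = 250            # assumed AP computer + cooling draw
consumption_wh_per_mile = 250  # assumed highway consumption
speed_mph = 65

extra_wh_per_mile = compute_watts / speed_mph
range_loss_pct = 100 * extra_wh_per_mile / consumption_wh_per_mile
idle_kwh_per_day = compute_watts * 24 / 1000  # if it never sleeps

print(f"+{extra_wh_per_mile:.1f} Wh/mile, ~{range_loss_pct:.1f}% range loss, "
      f"{idle_kwh_per_day:.1f} kWh/day if always on")
```

At 10x the draw under the old paradigm those numbers scale linearly, which is where the "people would notice the range loss" point comes from.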

Switch to a new architecture that can do the same work at 1/10th the load and you're back in the realm of the reasonable for production use on all of those counts, while still allowing the AP software to become more complex and improve over time.

Elon isn't talking about a huge increase in software alone in 6 months; he's talking about new computing hardware plus new software in that time frame, and cars with the old hardware will have to be upgraded or replaced to see that level of improvement.
 
My point was that other companies don't have any of those constraints, and they still haven't developed Level 3-5 systems. To believe that this will enable Tesla to do so is to believe that their software is way ahead of everyone else's, especially when you consider that their sensor suite is also far less expensive.
 
How do other companies not have the constraints? Physics doesn't change when you put the computer in another brand of car.

What constraints do you think are Tesla-only?

To your second point: Tesla is introducing a custom chip that increases compute power 10x. Their software can't be more advanced than anyone else's until they have compute power greater than anyone else's in that price range and use scenario. That's the whole point of increasing the compute power.
 
One thing I haven't seen discussed is the difficulty of classifying an obstacle in the road along a spectrum from safe to hit (a plastic bag) to unsafe (a boulder). There are so many thousands of possible objects (the long tail) that I don't see how this can be trained. It's a not-too-uncommon problem that human intuition solves fairly easily. I'm guessing they'll just classify the most likely objects and swerve or brake if something isn't classified?

Have you heard any ideas about how autonomy can handle this?
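One common idea (this is my own sketch, not anything Tesla has described) is exactly the "brake if not classified" fallback: attach a confidence threshold to the classifier and treat anything below it as an unknown, assumed-unsafe object. The classes and policy table here are made up for illustration:

```python
import math

# Sketch of a long-tail fallback: if the classifier is not confident
# about any known class, treat the object as unknown and act
# conservatively. Classes, scores, and policy are illustrative only.
KNOWN_CLASSES = ["plastic_bag", "tire", "boulder", "chair"]
SAFE_TO_HIT = {"plastic_bag"}  # assumed policy table, not a real one

def plan(logits, threshold=0.8):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]              # softmax
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return "brake"                             # unknown: assume unsafe
    label = KNOWN_CLASSES[best]
    return "continue" if label in SAFE_TO_HIT else "swerve_or_brake"

print(plan([4.0, 0.1, 0.2, 0.1]))  # confident plastic bag -> continue
print(plan([1.0, 0.9, 1.1, 1.0]))  # no confident class -> brake
```

The open question the post raises still stands: the threshold only converts the long tail into a "how often do we brake for nothing" problem rather than solving it.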

The so-called "intuition" is just that we have a whole lot more stuff stored in our brains. Machines don't often see chairs on the road, for example, and will have a hard time deciphering one when they see it, but we can tell right away what it is from experience gained elsewhere. It still comes down to machines needing a lot more learning to cover the edge cases.
 
How do other companies not have the constraints? Physics doesn't change when you put the computer in another brand of car.

What constraints do you think are Tesla-only?
They're constrained by cost. I think Waymo and Cruise are spending way more money on sensors and computational power. It just seems like better sensors and more computational power should make the problem easier. Is the claim that HW3 actually has more computational power than what Waymo and Cruise have in their vehicles? I've never heard Tesla claim this.
 
It's the computational power and the algorithm, the brain, not the sensors, the eyes, that are the limitation. Cameras can already match or exceed human capabilities. You don't hear people say a (bad) driver needs to get a better set of eyes in order to improve.

Waymo may be using its own proprietary chip, but there is little information on that. For the rest, Nvidia without doubt has the most advanced ML processor in the industry. Intel was late to the game and had to spend (actually waste) a lot of money acquiring Mobileye. Tesla did claim its new AI chip is 10x more powerful than the Nvidia PX2 currently used in HW2.x, not to mention that it's specially tailored for Tesla's NN rather than being a general-purpose processor. It's definitely the most powerful one, except perhaps for Waymo's, which is not known outside the company.

One possibility is that these companies are still using a big computer in the trunk of their test cars. With that and the lidar, I don't know how they could make it a consumer product, though.
 
I think it's very likely that they're all using more computing power than HW3. They're probably hoping that by the time they get their systems working, the price of the hardware will have come down. I view lidar as more of a safety system than a necessity (after all, humans do fine with just eyes), but if you're building a cost-is-no-object system, why not use it?
The first cell phones were the size of briefcases and cost ten thousand dollars. It's not clear to me that a company that tried to make a handheld one first would have gotten to market faster or at lower cost.
 

The mobile phone is not a good example. That was hand-coded software then; this is machine learning now. You can't put costly lidar and big computers in just a handful of test cars and still do what Tesla is doing.
 
Waymo may be using its own proprietary chip, but there is little information on that. For the rest, Nvidia without doubt has the most advanced ML processor in the industry. Intel was late to the game and had to spend (actually waste) a lot of money acquiring Mobileye. Tesla did claim its new AI chip is 10x more powerful than the Nvidia PX2 currently used in HW2.x, not to mention that it's specially tailored for Tesla's NN rather than being a general-purpose processor. It's definitely the most powerful one, except perhaps for Waymo's, which is not known outside the company.

One possibility is that these companies are still using a big computer in the trunk of their test cars. With that and the lidar, I don't know how they could make it a consumer product, though.

Tesla's board is nowhere near as powerful.
Waymo uses an Edge TPU and an Intel CPU.
Tesla's HW3 board is ~5x faster than a 2016 Drive PX 2 (an almost-four-year-old chip, of which Tesla only used a half configuration).

STOP COMPARING TESLA'S 2019 HW3 TO AN ALMOST-FOUR-YEAR-OLD SoC!

There's nothing special about Tesla's NN accelerators.
Nvidia also has its own NN accelerators, and so does Mobileye.

Nvidia's AGX Pegasus board has ~320 TOPS (~400 watts).
Tesla's HW3 board has ~80-100 TOPS (unknown watts).
Mobileye's EyeQ5 AV Kit board has ~75 TOPS and is the most efficient of them all (~30 watts).
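Taking those approximate figures at face value, the efficiency gap is easy to compute. HW3 is deliberately omitted here, since its board power isn't public:

```python
# Perf-per-watt from the approximate figures quoted above. HW3 is left
# out because its power draw is unknown, not because it wouldn't rank.
boards = {
    "Nvidia AGX Pegasus": (320, 400),   # (~TOPS, ~watts)
    "Mobileye EyeQ5 AV Kit": (75, 30),
}
for name, (tops, watts) in boards.items():
    print(f"{name}: {tops / watts:.2f} TOPS/W")
```

By these numbers the EyeQ5 kit delivers roughly three times the TOPS per watt of the Pegasus board, which is the efficiency claim above.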
 
They're constrained by cost. I think Waymo and Cruise are spending way more money on sensors and computational power. It just seems like better sensors and more computational power should make the problem easier. Is the claim that HW3 actually has more computational power than what Waymo and Cruise have in their vehicles? I've never heard Tesla claim this.

How is that working out for Boeing, United Launch Alliance, Northrop Grumman, and Aerojet Rocketdyne? It turns out you can spend billions of dollars on the wrong approach and not be ahead, or even end up without a working product.

Great if you are an employee of one of those companies riding the payroll; crappy if you are the customer looking for a working product.

During the joint Senate-NASA presentation in September 2011, it was stated that the SLS program had a projected development cost of $18 billion through 2017, with $10 billion for the SLS rocket and $6 billion for the Orion Multi-Purpose Crew Vehicle. These costs and this schedule were considered optimistic in an independent 2011 cost-assessment report by Booz Allen Hamilton for NASA. An unofficial 2011 NASA document estimated the cost of the program through 2025 at a total of at least $41 billion for four 95 t launches (1 uncrewed, 3 crewed), with the 130 t version ready no earlier than 2030.

In the meantime, SpaceX launches all the time and continues to improve launch after launch.

Saying Waymo spends more doesn't prove they are on the right course or that Tesla is on the wrong one. I see it in a similar way.

Waymo has little or no product in the real world. Tesla is launching new products all the time and getting better and better every few months.

Lidar is likely a dead end, and Tesla is now going down a path that avoids it. If they are right, 10x the existing computing power suddenly looks very smart with the existing sensors. And it won't take billions of dollars to make happen. It's just around the corner, and in fact it will be cheaper than paying Nvidia going forward (Nvidia would want to bump the price and keep the profit every time the system improves; Tesla making its own chips removes that third-party cost and keeps the profit in house).
 
Tesla's board is nowhere near as powerful.
Waymo uses an Edge TPU and an Intel CPU.
Tesla's HW3 board is ~5x faster than a 2016 Drive PX 2 (an almost-four-year-old chip, of which Tesla only used a half configuration).

STOP COMPARING TESLA'S 2019 HW3 TO AN ALMOST-FOUR-YEAR-OLD SoC!

There's nothing special about Tesla's NN accelerators.
Nvidia also has its own NN accelerators, and so does Mobileye.

Nvidia's AGX Pegasus board has ~320 TOPS (~400 watts).
Tesla's HW3 board has ~80-100 TOPS (unknown watts).
Mobileye's EyeQ5 AV Kit board has ~75 TOPS and is the most efficient of them all (~30 watts).

You seem to have the habit of comparing the paper values of things still in development to what is already in a real product, a car everyone can get his hands on. What a joke!
 
How is that working out for Boeing, United Launch Alliance, Northrop Grumman, and Aerojet Rocketdyne? It turns out you can spend billions of dollars on the wrong approach and not be ahead, or even end up without a working product.

Great if you are an employee of one of those companies riding the payroll; crappy if you are the customer looking for a working product.

In the meantime, SpaceX launches all the time and continues to improve launch after launch.

Saying Waymo spends more doesn't prove they are on the right course or that Tesla is on the wrong one. I see it in a similar way.

Waymo has little or no product in the real world. Tesla is launching new products all the time and getting better and better every few months.

Lidar is likely a dead end, and Tesla is now going down a path that avoids it. If they are right, 10x the existing computing power suddenly looks very smart with the existing sensors. And it won't take billions of dollars to make happen. It's just around the corner, and in fact it will be cheaper than paying Nvidia going forward (Nvidia would want to bump the price and keep the profit every time the system improves; Tesla making its own chips removes that third-party cost and keeps the profit in house).
I'm not saying Waymo is better because they spend more; I'm saying they're more likely to get FSD working because they're probably ten years ahead, their hardware is faster, and their sensors are better. I just think getting a system working first and then doing a cost-down is the right approach. I'll be happy to be proven wrong; after all, I own a Tesla and would love to buy L3-5 FSD.
FSD seems like something that will always be "just around the corner" :(
 