
Autopilot HW 3

From TechCrunch:

"Nvidia CEO Jensen Huang revealed a forthcoming generation of the company’s self-driving Drive PX supercomputers for use in vehicles – the successor to the current Drive Pegasus, which will be called the Drive Orin and which will use essentially two Drive Pegasus computers combined into one, much smaller packages."
I think TechCrunch might be wrong, because this is what Jensen said:

"DRIVE Pegasus, as powerful as it is, multiple ones are being used in self-driving cars. Our next step is called Orin – we’ll take eight chips, two Pegasuses, and put them into two Orrins. This is our drive roadmap."
I want to know who is using multiple Pegasuses.
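For context, the arithmetic in that quote works out as below (a minimal sketch; the 320 TOPS figure is Nvidia's published spec for DRIVE Pegasus, while the per-Orin share is just the division implied by the quote, not an announced spec):

```python
# Back-of-envelope reading of Jensen's quote: two 4-chip Pegasus boards
# ("eight chips") consolidated into two Orin packages.
PEGASUS_TOPS = 320       # Nvidia's published spec per DRIVE Pegasus board
CHIPS_PER_PEGASUS = 4    # 2x Xavier SoC + 2x discrete GPU

pegasus_boards = 2
total_chips = pegasus_boards * CHIPS_PER_PEGASUS   # the "eight chips"
total_tops = pegasus_boards * PEGASUS_TOPS

orin_packages = 2
tops_per_orin = total_tops / orin_packages   # implied share, not a spec

print(f"{total_chips} chips / {total_tops} TOPS -> "
      f"{orin_packages} Orins at ~{tops_per_orin:.0f} TOPS each")
```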
 
Waymo gets away with less computing power because they have better sensors. Lidar massively reduces the amount of processing power you need to recognize a scene and objects in it, and to build a 3D map of the environment.

If Tesla wants to rely on cameras alone, they will need a lot more power, both in processing and in energy consumption.
 

It's actually the opposite. Having more sensors means you need to run neural networks on all of them, which means way more neural networks. You also need to do more signal processing and sensor fusion.

Waymo has 8 cameras (at higher resolution) that they run neural networks on.

They also have 5 lidars, which they run a separate stack of neural networks on.

Then they have 4 custom-built 360° radars.

They are clearly doing more processing than Tesla with its 8 cameras and 1 radar.
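To make the comparison concrete, here is a rough tally of per-sensor processing streams using the sensor counts above (a minimal sketch; the relative cost weights are hypothetical placeholders, not measured figures for either company):

```python
# Rough tally of per-sensor processing streams, using the sensor counts
# from this thread. The relative cost weights are hypothetical
# placeholders, NOT measured figures for either company.
SUITES = {
    "Waymo": {"camera": 8, "lidar": 5, "radar": 4},
    "Tesla": {"camera": 8, "radar": 1},
}
COST_WEIGHT = {"camera": 1.0, "lidar": 1.5, "radar": 0.5}  # assumed

for name, sensors in SUITES.items():
    streams = sum(sensors.values())
    load = sum(n * COST_WEIGHT[kind] for kind, n in sensors.items())
    print(f"{name}: {streams} sensor streams, relative load ~{load:.1f}")
```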
 
Update: I have to apologize to Nvidia. Their messaging with regard to the computing power required for fully autonomous driving has actually been pretty clear and consistent all along.

 
I read your latest article, @strangecosmos.

While Mobileye only needs 24 TOPS for their Level 5 vision system, mapping, and driving policy (a testament to how efficient their networks are), their AV kit is based on 4x EyeQ5.

This is for three reasons: first, to maximize profits; second, to separate the subsystems for full redundancy; third, to be able to sell the subsystems separately.

1) One does camera vision and REM map localization
2) Another does sensor fusion (if lidar and radar are used)
3) Another does driving policy
4) Another does reduced camera vision and reduced sensor fusion to act as a backup (fail-operational board)

Outside of that, OEMs and Tier 1s can mix and match and use any configuration they want:
just one open EyeQ5 so they can run their own code;
just one closed EyeQ5 for camera perception and/or mapping, sensor fusion, and driving policy;
or two closed EyeQ5s, using one for camera perception and/or mapping plus sensor fusion, and another for driving policy via Mobileye's API.
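Here is a sketch of that four-board split as a configuration table (board names and the failover wiring are my assumptions about how a fail-operational backup might be selected, not Mobileye's documented design):

```python
# Hypothetical sketch of the 4x EyeQ5 role split described above.
# Board names and failover logic are assumptions, not Mobileye's design.
from dataclasses import dataclass

@dataclass
class Board:
    name: str
    role: str
    healthy: bool = True

boards = [
    Board("eyeq5_0", "camera vision + REM map localization"),
    Board("eyeq5_1", "sensor fusion (if lidar/radar are fitted)"),
    Board("eyeq5_2", "driving policy"),
    Board("eyeq5_3", "reduced vision + reduced fusion (fail-operational backup)"),
]

def active_perception(boards: list[Board]) -> Board:
    """Use the primary vision board unless it fails, then the backup."""
    primary, backup = boards[0], boards[3]
    return primary if primary.healthy else backup

boards[0].healthy = False               # simulate a primary vision failure
print(active_perception(boards).role)   # -> the fail-operational backup
```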
 
"It's actually the opposite. Having more sensors means you need to run neural networks on all of them, which means way more neural networks. You also need to do more signal processing and sensor fusion. Waymo has 8 cameras (at higher resolution) that they run neural networks on. They also have 5 lidars, which they run a separate stack of neural networks on. Then they have 4 custom-built 360° radars. They are clearly doing more processing than Tesla with its 8 cameras and 1 radar."

Waymo doesn't use a NN for every sensor. The lidar is used to build a 3D map of the world and to calculate movement using classical algorithms, not AI.

Tesla is the same: no NN for the ultrasonic sensors, just a simple filtering algorithm to measure distance. Same for the radar.
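For illustration, "a simple filtering algorithm to measure distance" can be as little as a time-of-flight conversion plus smoothing (a minimal sketch; the echo times and smoothing constant are arbitrary illustrative values, not Tesla's actual pipeline):

```python
# Minimal time-of-flight distance estimate for an ultrasonic sensor,
# smoothed with an exponential moving average. All constants are
# illustrative; this is not Tesla's actual pipeline.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def echo_to_distance(echo_time_s: float) -> float:
    """Round-trip echo time -> one-way distance in meters."""
    return echo_time_s * SPEED_OF_SOUND / 2.0

def smooth(prev: float, new: float, alpha: float = 0.3) -> float:
    """Exponential moving average to damp noisy single readings."""
    return alpha * new + (1.0 - alpha) * prev

estimate = echo_to_distance(0.0120)      # first raw reading (~2.06 m)
for t in (0.0118, 0.0125, 0.0121):       # subsequent noisy echoes
    estimate = smooth(estimate, echo_to_distance(t))
print(f"filtered distance: {estimate:.2f} m")
```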