
Tesla Autopilot HW3

Why does Fred keep pushing the claim that Andrej was behind GoogLeNet when he wasn't?

"Tesla’s Director of AI and Autopilot Vision, Andrej Karpathy, was behind the GoogLeNet neural net when he worked at Google."

Does journalism even exist nowadays? Why is anything Tesla-related filled with so many myths and fables? Disgusting!

So if he was not behind it, that must mean Andrej was ahead of GoogLeNet? Even better!
 
Re: Camera Agnostic

I asked this in another thread, but has anyone evaluated the wiring harness to the cameras in AP2.0+? Does it appear to be able to handle more bandwidth than is currently used? Is it hardwired or does it use a snap-on connector?

I'd love to see the possibility of a relatively easy camera swap. Sensor technology is improving rapidly, so I feel like that could become the limiting factor (e.g. resolution, dynamic range).
 
  • Love
  • Like
Reactions: PaulJohn and OPRCE
Re: Camera Agnostic

I asked this in another thread, but has anyone evaluated the wiring harness to the cameras in AP2.0+? Does it appear to be able to handle more bandwidth than is currently used? Is it hardwired or does it use a snap-on connector?

I'd love to see the possibility of a relatively easy camera swap. Sensor technology is improving rapidly, so I feel like that could become the limiting factor (e.g. resolution, dynamic range).

From first-hand experience, at least the repeater cameras are easy to swap out: it took a mobile tech about 20 minutes to replace a side repeater camera on my Model 3, in my garage, that was causing Autopilot to see ghost cars. But some of the cameras are definitely easier than others; anything in a sealed enclosure like the B-pillar will require a lot of work. Not impossible, but I suspect swapping the Autopilot computer will already take more than the optimistic 30 minutes that was quoted, so Tesla won't want to complicate the process with a bunch of tedious labor.
 
No. ... In fact, the AP team said that only the main unit will be swapped during the upgrade.

I know they have said that, but what else is "camera agnostic" supposed to mean, then?

It would naturally be smarter to design it with flexibility for future sensor upgrades, just in case they should ever prove necessary, rather like how Elon claimed HW2 would be sufficient for FSD while planning for the contingency that it would not be, by quietly initiating development of HW3 three years ago now.

"It is unlikely that cameras will have to be upgraded for AP3."

This is of course what they are hoping and saying publicly, but it very much remains to be seen whether e.g. the current repeater cameras really have the resolution for safely overtaking in FSD on, say, the German Autobahn, with cars overtaking at ~300 km/h. IMHO the "Designed for California" approach will likely fall short here, and better resolution cannot hurt in recognising the approaching risk sooner, hence avoiding spectacular potential PR casualties.

P.S.: Let us hope the new HW3 is also "radar agnostic", as that old junk [which infamously cannot distinguish a stopped fire-truck in the planned path from roadside clutter at highway speeds] is certainly long overdue an upgrade too.
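
To put rough numbers on it (all camera parameters below are my own guesses, not Tesla's actual specs): a car closing at ~170 km/h relative speed covers 150 m in ~3 s, and at that range it only spans a dozen-odd pixels on a modest sensor:

```python
import math

# Back-of-the-envelope: how many pixels wide is a following car at a given
# distance, and how long until it reaches us? All parameters are guesses
# for illustration, NOT actual Tesla camera specs.
H_RES_PX = 1280        # assumed horizontal resolution of a repeater camera
HFOV_DEG = 60.0        # assumed horizontal field of view
CAR_WIDTH_M = 1.8      # typical car width

def pixels_on_target(distance_m: float) -> float:
    """Approximate horizontal pixel width of a car at distance_m metres."""
    angle_deg = math.degrees(2 * math.atan(CAR_WIDTH_M / (2 * distance_m)))
    return H_RES_PX * angle_deg / HFOV_DEG

closing_ms = (300 - 130) / 3.6  # overtaker at 300 km/h vs ego at 130 km/h
for d in (50, 100, 150, 200):
    print(f"{d:3d} m: {pixels_on_target(d):5.1f} px wide, "
          f"{d / closing_ms:4.1f} s to close")
```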
 
Does this mean that HW3 is prepared for eventual upgrades to improved camera specs, or what?

[Edit: same question as MarkS22]

"Camera agnostic architecture" is a confusing term. What I believe people mean is that instead of Autopilot running 8 instances of a NN (one per camera, each trained for that use case), there is a single NN that combines a unified input from all 8 cameras. This allows the NN to take data from multiple cameras into account at the same time to understand its environment.

One example:
The current issue with V9 where sometimes 2 cars are rendered, or a semi truck is split into 2, is a result of two per-camera NNs each seeing a car at the same time while the software combining their outputs isn't sure whether there is a single car or 2 cars. When a single unified NN is taking the inputs of all cameras, this will likely be solved.

Accuracy and confidence will also likely improve, since a single NN can see an object in multiple cameras from different angles and is likely to be better able to predict its position and relative velocity.
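
A minimal sketch of the distinction in PyTorch (all shapes, channel counts, and module names are my own illustration, not Tesla's actual network):

```python
import torch
import torch.nn as nn

class PerCameraNet(nn.Module):
    """One NN per camera: each sees only its own view."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 8))

    def forward(self, frame):          # frame: (B, 3, H, W)
        return self.backbone(frame)    # per-camera detections

class FusedNet(nn.Module):
    """Single NN over all cameras: features are combined before the head,
    so overlapping views of the same car resolve to one object."""
    def __init__(self, n_cams=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16 * n_cams, 8)

    def forward(self, frames):         # frames: list of 8 (B, 3, H, W)
        feats = torch.cat([self.encoder(f) for f in frames], dim=1)
        return self.head(feats)        # one consistent world estimate

frames = [torch.randn(1, 3, 96, 96) for _ in range(8)]
separate = [PerCameraNet()(f) for f in frames]  # 8 outputs to reconcile
fused = FusedNet()(frames)                      # 1 output, nothing to merge
```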
 
"Camera agnostic architecture" is a confusing term. What I believe people mean is that instead of Autopilot running 8 instances of a NN (one per camera, each trained for that use case), there is a single NN that combines a unified input from all 8 cameras. This allows the NN to take data from multiple cameras into account at the same time to understand its environment.

Got it. "Multiple camera fusion" seems like a more accurate phrase.

That said, I do feel that a system relying so heavily on vision will quickly find the camera sensor data becoming the limiting factor. It would be amazing if they could swap sensors easily, even if it were just the trifocal cluster in the front. Higher resolution could mean recognizing a vehicle in your path seconds earlier. At 60 mph, that could be ~200 feet.
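
The arithmetic behind that number, for reference:

```python
MPH_TO_FTPS = 5280 / 3600          # 1 mph = ~1.467 ft/s
speed_ftps = 60 * MPH_TO_FTPS      # 88 ft/s at 60 mph
for t in (1.0, 2.0, 2.3):
    print(f"{t:.1f} s earlier at 60 mph = {speed_ftps * t:.0f} ft")
# ~2.3 s earlier works out to roughly 200 ft
```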
 
  • Like
Reactions: tezzla and OPRCE
Got it. "Multiple camera fusion" seems like a more accurate phrase.

That said, I do feel that a system relying so heavily on vision will quickly find the camera sensor data becoming the limiting factor. It would be amazing if they could swap sensors easily, even if it were just the trifocal cluster in the front. Higher resolution could mean recognizing a vehicle in your path seconds earlier. At 60 mph, that could be ~200 feet.

Fair point. If they did decide they needed better resolution, it would probably be the narrow forward camera that gets upgraded.
 
Wow! What a great post and summary in semi-layman terms, @verygreen @DamianXVI @jimmy_d

Thanks for taking the time to do this and explain it. I’m sure you must enjoy it a little bit to spend the time on it, but wanted to say THANK YOU as I certainly enjoyed reading it and learning.

I do also enjoy @Bladerskb's comments. :) Nothing lost from hearing diverse opinions on complex topics!
 
@verygreen You say you believe the Exynos SoC is a 2015 vintage, do you have the actual part number? I'd be curious which unit it actually is, because given your description of the components it seems like you're suggesting that it's at least based on a 7580 or 7870. Can we get a firm ID on it?

Also, do you know if the clock is actually locked at 1.6G, or is it simply running at 1.6G?

If I were choosing a platform for the new generation, it'd absolutely be ARMv8 based, and the Exynos platform is a pretty solid choice for general processing. I'd have based it on the 7880 or something similar, removed any baseband components, and added BT5.0. That way you can make the mobile and wifi radios discrete components for simple replacement when the technology ages out.
 
@verygreen You say you believe the Exynos SoC is a 2015 vintage, do you have the actual part number? I'd be curious which unit it actually is, because given your description of the components it seems like you're suggesting that it's at least based on a 7580 or 7870. Can we get a firm ID on it?

Also, do you know if the clock is actually locked at 1.6G, or is it simply running at 1.6G?

If I were choosing a platform for the new generation, it'd absolutely be ARMv8 based, and the Exynos platform is a pretty solid choice for general processing. I'd have based it on the 7880 or something similar, removed any baseband components, and added BT5.0. That way you can make the mobile and wifi radios discrete components for simple replacement when the technology ages out.
verygreen,

Easier request: can you upload the decompiled device tree?
That way we can all read exactly what hardware and drivers it has. (Maybe add the kconfig for good measure.)

If you need help on how to do this let me know.
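
For anyone else following along, the usual recipe is just dtc run in reverse; a sketch assuming the blob has already been pulled out of the firmware image and the standard device-tree compiler `dtc` is on PATH (filenames are placeholders):

```python
import subprocess

# Decompile a flattened device tree blob (.dtb) back to readable source.
subprocess.run(
    ["dtc", "-I", "dtb", "-O", "dts", "-o", "hw3.dts", "hw3.dtb"],
    check=True,
)

# Skim the result for CPU topology and peripheral bindings.
with open("hw3.dts") as f:
    for line in f:
        if "cpu@" in line or "compatible" in line:
            print(line.rstrip())
```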
 
Without reading the post at all and only looking at the picture, I just wanted to say that non-stacked convolution layers are pretty standard in the world of CNNs nowadays. It all started with the first Inception network from Google, which really showed that you don't have to just stack conv layers with pooling and dropout on top of each other; you can get clever with it. That, and using smaller conv filters. There is nothing mind-blowing about it today; it's pretty standard.

Here is Inception v1 from 2014

[Image: GoogLeNet (Inception v1) architecture diagram]
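
For anyone who hasn't seen one, the core trick of an Inception module is parallel branches whose outputs get concatenated, rather than one deep stack. A minimal sketch (channel counts made up, not GoogLeNet's exact configuration):

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel branches instead of a single stacked conv; outputs are
    concatenated along the channel dimension."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)               # 1x1
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 8, 1),             # 1x1 reduce
                                nn.Conv2d(8, 16, 3, padding=1))     # then 3x3
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 4, 1),             # 1x1 reduce
                                nn.Conv2d(4, 8, 5, padding=2))      # then 5x5
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 8, 1))             # pool proj

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], 1)

out = InceptionModule(32)(torch.randn(1, 32, 28, 28))
print(out.shape)  # torch.Size([1, 48, 28, 28]) -- 16+16+8+8 channels
```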


EDIT: After reading the post, my comments don't change. The only thing I would add is that I'm surprised people still take anything jimmy_d says seriously.
Why all the dislikes?

What this post says is true. The architecture is pretty old and well understood; it's still very useful nonetheless. The bit about jimmy_d I'm not familiar with.
 
  • Funny
Reactions: Vitold
verygreen,

Easier request: can you upload the decompiled device tree?
That way we can all read exactly what hardware and drivers it has. (Maybe add the kconfig for good measure.)

If you need help on how to do this let me know.

Here you go.

Kernel config: hw3 kconfig - Pastebin.com
dtb1 (some sort of eval board?): hw3 dtb1 - Pastebin.com
dtb2 (the actual dtb?): hw3 dtb2 - Pastebin.com

I guess I should have looked into DTBs at the start for more info.

@verygreen You say you believe the Exynos SoC is a 2015 vintage, do you have the actual part number? I'd be curious which unit it actually is, because given your description of the components it seems like you're suggesting that it's at least based on a 7580 or 7870. Can we get a firm ID on it?

Also, do you know if the clock is actually locked at 1.6G, or is it simply running at 1.6G?

If I were choosing a platform for the new generation, it'd absolutely be ARMv8 based, and the Exynos platform is a pretty solid choice for general processing. I'd have based it on the 7880 or something similar, removed any baseband components, and added BT5.0. That way you can make the mobile and wifi radios discrete components for simple replacement when the technology ages out.

I don't see a firm ID. The clocks are set to whatever the bootloader configures at bootup, though it appears the DTB pegs them at up to 2.4 GHz?
 
Exynos 7880: 8-core A72, weak single-core but fast multi-core:

Samsung Galaxy A5 (2017) Benchmarks - Geekbench Browser

Edit: the Mali-C71 ISP sounds interesting:

The Mali-C71 is designed for the emerging smart automotive market, capturing twice the dynamic range of a standard single exposure sensor, and in some cases even outperforming the human eye.

A high-end ISP, supporting 4 real-time (streaming) camera inputs and another 16 non-streaming inputs. Altogether the ISP can stream/process 1.2 GPixels/second.
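
Presumably the dynamic-range claim refers to multi-exposure fusion done in the ISP rather than in the sensor. A toy sketch of that idea (generic, not the Mali-C71's actual pipeline):

```python
import numpy as np

def fuse_exposures(short, long, gain=8.0, thresh=0.9):
    """Combine a short exposure (keeps highlights) with a long exposure
    (keeps shadows). Inputs are float arrays in [0, 1]; gain is the
    exposure ratio between the two frames."""
    saturated = long >= thresh
    # Use the long exposure where valid, the scaled short exposure elsewhere.
    return np.where(saturated, short * gain, long)

rng = np.random.default_rng(0)
scene = rng.uniform(0, 8.0, (4, 4))          # radiance spanning an ~8x range
long_exp = np.clip(scene, 0, 1)              # clips the bright areas
short_exp = np.clip(scene / 8.0, 0, 1)       # preserves the bright areas
print(fuse_exposures(short_exp, long_exp))   # recovers the full range
```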
 
  • Informative
Reactions: croman
Exynos 7880: 8-core A72, weak single-core but fast multi-core:
How do you arrive at 7880? Also, there are 3 CPU clusters of 4 cores mentioned, but does that really mean 12 cores? I don't deal with DTs often, so I don't really know.

The Mali-C71 is designed for the emerging smart automotive market, capturing twice the dynamic range of a standard single exposure sensor, and in some cases even outperforming the human eye.

This kind of makes no sense. The camera sensor determines the dynamic range, not the unit that receives the results, no?
 
I don't see a firm ID. The clocks are set to whatever the bootloader configures at bootup, though it appears the DTB pegs them at up to 2.4 GHz?

Did this image happen to come from a development/beta vehicle? Some of the kernel debug parameters make sense in a production environment, but others really only make sense during development because of the potential negative performance impact.

Anyway, from the hardware files you linked to, it would seem at first glance that the SoC is actually a 7880 with three clusters of four cores each. Those cores are at least compatible with the A72, so this could be a modified 7880, or it could be a custom run specifically for Tesla that is compatible enough with the 7880. Even in quantities much less than 500k, CPU manufacturers will very happily do some modifications and enable/disable features, so it's totally conceivable that Tesla did exactly that.

I have an absolute ton of questions about the running kernel, but those are likely more appropriate for a private conversation and then a public dump of knowledge gained.

Exynos 7880: 8-core A72, weak single-core but fast multi-core:

Samsung Galaxy A5 (2017) Benchmarks - Geekbench Browser

Benchmarks of a device that happens to have the same CPU in it aren't really applicable, and this benchmark wouldn't tell us anything meaningful anyway. What does "773" mean? What does it represent? What workload produces the number 773, and what relation does it have to the MCU? Benchmarks are best for testing the relative change of a known system or a known workload when making changes to that system or workload. Outside of that, they don't offer much value.

Examples:
SSE vs non-SSE acceleration of the same task on the same CPU in the same device to test improvements from using optimized instruction sets.

Intel Bronze 4114 vs Bronze 4116 running the same binary, performing the same work in the same host, to test improvements from more cores.
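
A trivial sketch of what I mean by relative benchmarking: same machine, same workload, one variable changed, so only the ratio is informative:

```python
import timeit

# Same box, same data, one variable changed (pure-Python loop vs the
# built-in sum). The absolute times mean little; the ratio is the result.
data = list(range(100_000))

def loop_sum():
    total = 0
    for x in data:
        total += x
    return total

baseline = timeit.timeit(loop_sum, number=100)
variant = timeit.timeit(lambda: sum(data), number=100)
print(f"builtin sum is {baseline / variant:.1f}x the loop on this workload")
```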

How do you arrive at 7880? Also, there are 3 CPU clusters of 4 cores mentioned, but does that really mean 12 cores? I don't deal with DTs often, so I don't really know.

A customized SKU of the base SoC could account for the additional cluster of cores. Really, Tesla could just be relying on compatibility with the A72, and the actual SoC might not even be an Exynos. But since the kernel config and hardware descriptors seem to imply that the SoC is at least based on a Samsung design, it's probable that they just customized one for Tesla.
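
If anyone wants to settle the 12-core question from the decompiled source, a quick-and-dirty count like this would do (filename and node naming are assumptions about the dts layout):

```python
import re

# Count distinct cpu@ nodes and cluster entries in the decompiled dts to
# see whether 3 clusters x 4 cores really means 12 distinct cores.
with open("hw3.dts") as f:
    dts = f.read()

clusters = set(re.findall(r"cluster\d+", dts))
cpus = set(re.findall(r"cpu@[0-9a-f]+", dts))
print(f"{len(clusters)} clusters, {len(cpus)} cpu nodes")
```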

This kind of makes no sense. The camera sensor determines the dynamic range, not the unit that receives the results, no?

This is exactly correct. The SoC's camera sensor components probably aren't used, except maybe for the cabin-facing unit. I'd bet Tesla has additional capture hardware on the board to handle all of the EAP cameras. Does anybody have detailed, high-res photos of the HW2.5 and/or HW3 boards?
 