
Neural Networks

The AP 2.5 board itself clearly doesn’t «require» liquid cooling, as the S/X runs air-cooled 2.5 APEs.

The same can be said for ICE’s Infotainment-board - assuming S/X MCU2 has the same board (S/X MCU2 is air cooled).

So the liquid cooling for M3 is either because the sandwich/double board layout «requires» it (brighter minds than mine can answer that), or it was an engineering decision based on a bunch of other parameters like NVH, maintenance/lifespan, dash design ++

[Admittedly whipping the off-topic horse here, but d00d what a fine horse]

Whether it is strictly required or not I'm not going to comment on. Current fabbed chips and capacitors can take some brutal abuse. It does, however, fall within having fewer overall parts and a more streamlined production line. Eliminating multiple fans that also need guaranteed airflow, and replacing them with a liquid loop, simplifies overall engineering and assembly, I would think, in addition to the spatial advantages.

And yeah, this horse is way off thread topic. Should be forked.

And then a week later came and said on a conference call that it's a 30-minute swap?

Seemed pretty quick when I was ninja observing a guy at the SC doing it to a Model 3 for a bricked unit while I was there. Was only half paying attention, should have been more attentive, my bad.

IMHO, this is the easiest explanation: there is no cost saving by removing FSD since they still have to do retrofits, and they cannot ship the cars without the AP computer since it also handles regular safety functions, which are free.

Agreed. I think it's just a move to non-Kickstarter-ish upgrades... should they need an updated ASIC in the future to achieve their aims. This is mostly a precautionary legal measure in my opinion. Deliver exactly what is paid for at the point of sale... not some ambiguous future upgrade.
 
Really exciting stuff, Jimmy. Pretty fascinating idea that one big neural network will learn a “concept” of an object in a camera agnostic way, by viewing it through 8 different cameras with different fields of view. And that it will do a better job of recognizing objects by “seeing” motion by processing two frames rather than just one. These both sound like ways to have more robust object “concepts”.
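To make that concrete, here’s a toy sketch of the general idea: one backbone whose weights are shared across all the cameras, with two consecutive frames stacked on the channel axis so the network can pick up motion between them. To be clear, the layer sizes, shapes, and fusion step below are my own illustrative assumptions, not anything recovered from Tesla’s actual network.

# Toy PyTorch sketch -- illustrative assumptions only, not Tesla's architecture.
import torch
import torch.nn as nn

class SharedCameraNet(nn.Module):
    """One backbone shared across all cameras, with two consecutive frames
    stacked on the channel axis so the network can 'see' motion."""
    def __init__(self, num_cameras=8, feature_dim=128):
        super().__init__()
        # 2 frames x 3 RGB channels = 6 input channels per camera
        self.backbone = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feature_dim),
        )
        # Naive fusion: concatenate per-camera features and mix them
        self.fuse = nn.Linear(num_cameras * feature_dim, feature_dim)

    def forward(self, views):
        # views: (batch, num_cameras, 6, H, W), i.e. two stacked frames per camera
        feats = [self.backbone(views[:, i]) for i in range(views.size(1))]
        return self.fuse(torch.cat(feats, dim=1))

x = torch.randn(2, 8, 6, 96, 160)   # toy batch: 2 samples, 8 cameras
print(SharedCameraNet()(x).shape)   # torch.Size([2, 128])

The point is just that a single set of weights sees every camera view and every frame pair, which is what would let it form a camera-agnostic “concept” of an object rather than eight separate ones.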

Jeff Hawkins’ company Numenta just published a theory of intelligence wherein learning through movement is a key part of developing concepts or models of objects. It makes me think of what you do when you find some new item (e.g. on the ground, or in a box in an attic) and you’re trying to figure out what it is: you turn it around in your hand and look at it from every angle. This is sort of analogous to looking at an object moving between two frames through 8 cameras.

The octo-network idea also reminds me of a paper I saw recently (couldn’t find it again easily) about how neural networks had superhuman performance at classifying images that had one type of image distortion after being trained to do so, but poor performance once the researchers combined different types of distortion. Someone tweeted this paper as an example of how deep learning will fail us. But my immediate thought was: why not train neural networks on the different combinations of distortion? If you want to use a neural network in a practical application, and not to make an academic discovery about transfer learning or whatever, just train the neural network on the different combinations that actually occur. Then, hopefully, you’ll get superhuman performance on those combinations as well as on the individual distortion types.
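Just to illustrate what “train on the combinations” could look like in practice, here’s a rough augmentation sketch that applies a random combination of distortions to each training image, rather than one distortion type at a time. The specific distortions and parameters are arbitrary examples on my part, not the ones from the paper.

# Rough sketch only: distortions and parameters are made-up examples.
import random
import numpy as np
from PIL import Image, ImageFilter

def gaussian_noise(img, sigma=15):
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0, sigma, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

def blur(img, radius=2):
    return img.filter(ImageFilter.GaussianBlur(radius))

def low_contrast(img, factor=0.5):
    arr = np.asarray(img).astype(np.float32)
    mean = arr.mean()
    return Image.fromarray(np.clip(mean + (arr - mean) * factor, 0, 255).astype(np.uint8))

DISTORTIONS = [gaussian_noise, blur, low_contrast]

def random_combined_distortion(img, max_k=3):
    """Apply a random combination of distortions (not just one at a time),
    so training covers the mixtures the network will actually encounter."""
    k = random.randint(1, max_k)
    for fn in random.sample(DISTORTIONS, k):
        img = fn(img)
    return img

# Usage: drop this into whatever training-data pipeline you use, e.g.
# augmented = random_combined_distortion(Image.open("example.jpg"))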

This would be a way to give neural networks more robust object “concepts” that are more resilient to noise at the level of the perceptual system. Distortion types seem analogous to camera views. The most resilient network of all would be able to recognize an object regardless of camera angle or field of view.

I wonder if neural networks can develop object models or “concepts” that are as resilient to surface-level variation as human concepts. That is, with enough neurons, weights, and diverse training data, can neural networks learn the essence of an object type? Or will they always rely on shortcuts and clever tricks? (Until we develop more advanced architectures.)
 
Previous versions have shown a tendency to stop at T intersections. May have been map-based rather than vision-based.

This is nothing more than regular behavior for 42.2 after drive on nav completes. I do wish it could read signs, but my car stopped in the middle of the road to wait until I pressed the accelerator, and that signifies the hand-off from drive on nav to regular Autopilot. Based on that, the video shows a random chance of that stop happening at a location with a stop sign.
 
This is nothing more than regular behavior for 42.2 after drive on nav completes. I do wish it could read signs...
It may not be reading signs, but it's definitely stopping based on a map location (or maybe distance after exit/shutdown). I've done multiple types of exits and it always stops at the right spot; they all have variable distances and lane counts, and some even have loops and end at a signal.
 
Tesla Daily: Neural Networks & Autopilot V9 With Jimmy_d

Jimmy_d joins the podcast to expand on his post from the Tesla Motors Club forum about neural networks and what he is seeing in the most recent Autopilot software update.


This has to be the worst talk on autonomous driving I have ever heard. False statement after false statement after false statement.
Tesla grandstanding after Tesla grandstanding. I couldn't even make it halfway through. I can't take anything Jimmy says seriously anymore. Sorry, it's that bad!

"Tesla is the poster child for the post neural network effort, NN are essential to what they are doing"
"People who knew NN were out there and that they could use them they look at the problem differently"
"if you think that NN are gonna get alot better that's the thing to do. Use cameras and use neural networks"
"If you already got a huge investment in Lidar or HD map. HD maps are alot less necessary and useful in the NN/Vision world."
"HD Maps are really useful in the pre neural network world"


Typical Tesla bubble. So wrong on so many levels.

Other companies use way MORE neural networks than Tesla. NNs don't require vision; they're used whether it's a lidar-based or a camera-based system. All companies also use cameras. Lastly, NO ONE. NO ONE. ZIP. NADA uses NNs on anything other than perception.

HD maps have nothing to do with perception other than redundancy.
I couldn't get past 30 mins.

"Lidar is really useful for helping a simple algorithm not hit stuff."
"Tesla is gambling that the NN thing works"
"Right now it looks like Tesla is gonna win"


I almost threw up!
 
@jimmy_d

Thank you for your analysis. I just listened to your podcast with Rob (Tesla Daily) and have been re-reading this thread. Your easy-to-understand explanations have begun (baby steps) to pique my interest and make me want to dive in more deeply.

Can you roughly compare the capabilities of AKNET_V9 vs humans?
How do the two compare as far as:
1. amount of raw visual input, 9+ sensors vs two eyes (or even just one!)
2. the level of recognition/labeling
3. decision making capability
4. failure rate

I would imagine:
- the machine is way ahead for 1. vision and 2. recognition, by 20x.
- decision making capability is roughly equal for the next few years (in the limited L2 autonomy cases).
- the machine has more (2x?) success, primarily because it stays alert 100% of the time (DDD).

The fact that folks can drive through fog on a snowy road at night with one eye is just astounding.
On the other hand, 40K auto fatalities per year rests primarily on us.
The road to Level 5 autonomy, over the next thirty years, is going to be very fun to witness!
 
Other companies use way MORE neural networks than Tesla

Out of curiosity, what can I buy (car, electronics, etc.) in the US today that has as much edge-computing-based neural network processing running as a Tesla with V9? Obviously Facebook, Google, etc. have absolutely massive neural networks, but that's not the same thing, as those run up in the cloud.

As to the podcast I think it's important to understand that as a NN person, Jimmy is naturally going to be biased towards NN based approaches. We all have different perspectives and different approaches we'd take to a problem. I'm a strong advocate for sensor fusion, and I don't think Tesla has enough with the current sensor suite.
 
As to the podcast I think it's important to understand that as a NN person, Jimmy is naturally going to be biased towards NN based approaches.
Also important to understand which audience Jimmy must know he was speaking to - layfolk and Tesla fans. Just listen to the interviewer’s questions/comments; like, this guy didn’t even know what Jimmy was talking about when he brought up @DamianXVI and @verygreen’s videos. (Not holding it against him, I just think it’s telling of what audience Jimmy had for his talk.)

Jimmy deserves huge cred for daring to take the challenge of letting himself be interviewed in this context. I think he did an awesome job, especially considering he’s one dude and not some billion-dollar company executive with his own PR staff.
 
Also important to understand which audience Jimmy must know he was speaking to - layfolk and Tesla fans.

Haven't listened to it yet but you make it sound like he bombed the episode or something... o_O