Neural Networks

There's about 80mm of separation between the two farthest forward-facing cameras, and 40mm between each set. Human males average about 65mm between the eyes, and from what I recall we can judge distance by parallax out to about 10 meters. Past that, it's mainly based on apparent size compared to a known scale.

I love it when people report the size of a UFO. No one ever asks how they judged that the cigar-shaped object was 300' long.
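For a rough sense of what that camera baseline buys you, here's a back-of-the-envelope sketch of stereo depth from parallax. The ~80mm baseline is taken from the post above; the focal length in pixels is an assumed, illustrative value, not a real camera spec.

```python
# Back-of-the-envelope stereo depth from parallax. The ~80 mm baseline is from the
# post above; the focal length (in pixels) is an assumed, illustrative value.

def stereo_depth(disparity_px: float, baseline_m: float = 0.08, focal_px: float = 1000.0) -> float:
    """Depth = f * B / d for a rectified stereo pair."""
    return focal_px * baseline_m / disparity_px

print(stereo_depth(4.0))  # 20.0 m for a feature that shifts 4 px between the two cameras
print(stereo_depth(1.0))  # 80.0 m at just 1 px of disparity; beyond that, parallax tells you little
```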
 
  • Love
Reactions: jimmy_d
I don’t think you can use motion parallax, because you don’t know at what speed the other object is moving. If you knew its speed, you could use motion-induced parallax.

If the other car is, for instance, moving at the same speed in the same direction, there is no parallax.

And you aren't going to hit it....
 
  • Funny
Reactions: jimmy_d
If the other car is, for instance, moving at the same speed in the same direction, there is no parallax.

It can be determined indirectly. For example, my Model 3 is moving, and the NN knows how far the camera has traveled between frames. It can use parallax to determine the distance to a stationary point of reference, such as a specific contrast pattern on the road beneath a car. It then knows the distance to a point underneath the car, and thus the distance to that car. Knowing the distance, and seeing how far the car has traveled between frames, it can calculate the car's speed.
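A minimal sketch of that idea, assuming a simple pinhole camera and pure forward motion (the focal length and pixel values are made up for illustration, not anything Tesla actually uses):

```python
# Depth of a stationary ground feature from known ego-motion between two frames.
# Assumes a pinhole camera and pure forward motion; numbers are illustrative.

def depth_from_ego_motion(x1_px: float, x2_px: float, travel_m: float) -> float:
    """
    x1_px, x2_px: lateral image offset (pixels from the optical axis) of the same
                  stationary feature in two consecutive frames (x2 is the later one).
    travel_m:     distance the camera moved forward between frames (from odometry).
    Returns the feature's depth at the time of the second frame.
    """
    # Pinhole: x = f*X/Z. After moving forward by b, x2 = f*X/(Z1 - b).
    # Eliminating f and X gives Z2 = x1 * b / (x2 - x1).
    return x1_px * travel_m / (x2_px - x1_px)

# Ego car at 30 m/s, frames 0.1 s apart -> 3 m of travel between frames.
print(depth_from_ego_motion(x1_px=100.0, x2_px=112.0, travel_m=3.0))  # 25.0 m
# Do the same for the patch of road under the lead car in two later frames, and
# the change in that distance over time gives the lead car's speed.
```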
 
  • Helpful
  • Like
Reactions: NateB and mongo
Besides Vicarious, is there anyone trying to bridge the gap between cognitive science and deep neural networks? That is, to use discoveries in cognitive science to design better neural networks? On the face of it, it seems like essentially all neural network people are computer science majors, and like a lot of people in cognitive science have only a passing understanding of, or interest in, neural networks. I’ve never heard of a neuroscientist who is thinking, hey, how can we apply neuroscience to artificial neural networks?

It seems like the science of intelligence is not talking to the engineering of intelligence at all.

I’m wondering if this is a path to nowhere or an amazing opportunity. Maybe theoretical discoveries in the cognitive sciences are too high-level to apply in an engineering context. Maybe neural networks are about linear algebra and not neuroscience.

But has anyone tried?
 
Besides Vicarious, is there anyone trying to bridge the gap between cognitive science and deep neural networks? That is, to use discoveries in cognitive science to design better neural networks? On the face of it, it seems like essentially all neural network people are computer science majors, and like a lot of people in cognitive science have only a passing understanding of, or interest in, neural networks. I’ve never heard of a neuroscientist who is thinking, hey, how can we apply neuroscience to artificial neural networks?

It seems like the science of intelligence is not talking to the engineering of intelligence at all.

I’m wondering if this is a path to nowhere or an amazing opportunity. Maybe theoretical discoveries in the cognitive sciences are too high-level to apply in an engineering context. Maybe neural networks are about linear algebra and not neuroscience.

But has anyone tried?

Yes, lots of work in this area. Check out Numenta: Numenta | Where Neuroscience Meets Machine Intelligence
 
Yeah, but Numenta is doing original research in both neuroscience and AI, as opposed to simply trying to translate discoveries in neuroscience and the other cognitive sciences into practical application in neural networks.

From what I gathered from Jimmy_D's interview, computer neural networks differ quite a bit in basic structure from biological neural networks. Off the top of my head (or inside it), biological neural networks have neurons with unidirectional signaling, typically with each neuron having many inputs but only one output. Computer NNs were inspired more by an older model of biological NNs, which may not hold up any more.
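For reference, the "older model" being alluded to is, as far as I know, essentially the classic artificial neuron: many weighted inputs, one scalar output. A minimal sketch (the numbers are arbitrary):

```python
import numpy as np

# Classic artificial neuron: many weighted inputs, a single activation out,
# which is then fanned out to every downstream neuron. Values are arbitrary.

def artificial_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    return float(np.tanh(np.dot(weights, inputs) + bias))

x = np.array([0.2, -1.0, 0.5])   # signals from three upstream neurons
w = np.array([0.8, 0.1, -0.4])   # "synaptic" weights
print(artificial_neuron(x, w, bias=0.05))  # one output value
```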
 
Yes. But I was thinking about how the current generation of Tesla does this.

One possible solution (which humans use) is to use apparent size as a measure of distance. If you see a car-looking object with a small apparent size, it is most likely far from you. Or it is a toy car near you. In a traffic environment the former is most likely true.

I've been curious about this myself. I'd like to see the NN output from @verygreen if you put a model car (or a Tesla Radio Flyer) in visual range. Alternatively, you could do it with a big LCD monitor, but with pedestrians too (easier than finding hobbits).
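On the apparent-size cue a couple of posts up: under a pinhole model, distance is just real size over angular size, which is also exactly why a toy car could fool it. A sketch, with an assumed (not real) focal length:

```python
# Distance from apparent size under a pinhole model: Z = f * W / w, where W is
# the object's real width and w its width in pixels. Focal length is assumed.

def distance_from_apparent_size(real_width_m: float, width_px: float, focal_px: float = 1000.0) -> float:
    return focal_px * real_width_m / width_px

print(distance_from_apparent_size(1.8, 60))  # ~30 m for a real ~1.8 m wide car spanning 60 px
print(distance_from_apparent_size(0.3, 60))  # ~5 m for a 0.3 m toy car spanning the same 60 px
```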
 
So in other words, I disagree that the movement in the driver display is "a lot larger than" the jitter in the bounding boxes themselves. I think (a) there's likely a lot of jitter in bounding boxes and distance estimates for objects in the side cameras, which are often looking at a very close car and only seeing part of it, and (b) even small jitter in 2D space causes large jumps in 3D space.

The videos that @verygreen and @DamianXVI posted are either inaccurate, or they did some massaging of the data themselves. I would have thought they'd mention it if so. There's far less jitter there.
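To put a rough number on the quoted point (b), here's how little 2D jitter it takes to move a range estimate by meters. It uses a flat-ground projection from a bounding box's bottom edge; the camera height and focal length are assumed values, not Tesla's.

```python
# Range from the pixel row of a bounding box's bottom edge, flat-ground model:
# Z = f * h / y, where y is the row's offset below the horizon. Assumed values.

FOCAL_PX = 1000.0   # assumed focal length in pixels
CAM_HEIGHT_M = 1.4  # assumed camera height above the road

def range_from_bottom_edge(rows_below_horizon_px: float) -> float:
    return FOCAL_PX * CAM_HEIGHT_M / rows_below_horizon_px

for y in (28.0, 14.0):  # roughly a car at 50 m and one at 100 m
    z = range_from_bottom_edge(y)
    z_jittered = range_from_bottom_edge(y - 1.0)  # one pixel of box jitter
    print(f"{z:6.1f} m -> {z_jittered:6.1f} m with 1 px of jitter")
# ~50 m -> ~52 m and ~100 m -> ~108 m: the same 1 px becomes a bigger 3D jump the farther out you look.
```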

Look how far behind they are. After 2 years of working on their own solution, the big feature they launched was: lane changes on protected roads, under strict human supervision.

You can lane change any time you want irrespective of whitelisted roads. What they choose to release in "Navigate on Autopilot" versus what the system is capable of doing are quite different.

The minimal feature they released is way too unreliable to be trusted on its own. This is practically the sum total of 2 years of progress since switching from Mobileye.

I've had no issues with it under "Mad Max".

Moving to a chip that's 10x faster doesn't solve FSD, any more than the fact that your iPhone X is 100x faster than an iPhone 4 makes it capable of FSD. It's a minor implementation detail (and in any case, Tesla is basically just holding pace with NVIDIA's new chips).

GPU manufacturers' offerings are merely castrated GPUs. Machine learning requires a fraction of the instructions and precision. Making a purpose-built chip is far simpler.
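As a toy illustration of the "fraction of the precision" point (a generic sketch, not a description of Tesla's or NVIDIA's actual hardware): inference usually survives squeezing fp32 weights into int8.

```python
import numpy as np

# Symmetric int8 quantization of fp32 weights: 4x less storage, integer math,
# and only a small reconstruction error. Purely illustrative.

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)    # pretend layer weights

scale = np.abs(w).max() / 127.0                 # map the weight range onto [-127, 127]
w_int8 = np.round(w / scale).astype(np.int8)
w_restored = w_int8.astype(np.float32) * scale

print("max abs error:", float(np.abs(w - w_restored).max()))  # tiny relative to the weight range
```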

Nothing Elon has promised around FSD has ever come to pass. It's always 2 years away.

Elon time does suffer from the effects of relativity.
 
  • Like
Reactions: jimmy_d
The videos that @verygreen and @DamianXVI posted are either inaccurate, or they did some massaging of the data themselves. I would have thought they'd mention it if so. There's far less jitter there.



You can lane change any time you want irrespective of whitelisted roads. What they choose to release in "Navigate on Autopilot" versus what the system is capable of doing are quite different.

The point is: this is an absolutely trivial addition compared to the larger task of FSD. And NONE of it is safe enough to use without human supervision. I mean, it's OK 99% of the time, maybe 99.9%, but it'll kill you every few thousand miles if you don't pay attention. That's not FSD; that's 2 orders of magnitude away from FSD.
And we're not even talking about truly hard stuff like pedestrian interactions or construction. We're talking about one of the absolute simplest tasks.
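The rough arithmetic behind that "2 orders of magnitude" claim, using the post's ballpark figures (the target rate below is an assumed round number, not an official one):

```python
# Ballpark only: figures are illustrative, not measured data.
miles_per_critical_error_now = 5_000        # "every few thousand miles" with supervision
miles_per_critical_error_target = 500_000   # an assumed round target for unsupervised driving

print(miles_per_critical_error_target / miles_per_critical_error_now)  # 100.0 -> two orders of magnitude
```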

I've had no issues with it under "Mad Max".




GPU manufacturers' offerings are merely castrated GPUs. Machine learning requires a fraction of the instructions and precision. Making a purpose-built chip is far simpler.

I'm not sure what you mean, but the point remains: Tesla's chip isn't any faster than NVIDIA's latest silicon. What you say about instructions and precision is absolutely true, and it's absolutely what NVIDIA (and everyone else) is doing as well.
BTW, this is classic Musk. He makes casual observers think he's way ahead, while everyone with actual in-depth knowledge is going "WTF is he talking about?"


Elon time does suffer from the effects of relativity.
 
Mobileye Bullish on Full Automation, but Pooh-Poohs Deep-Learning AI for Robocars

Can't tell if you are trying to bolster the Mobileye case? If so, I think you're doing it wrong.

Also, Mobileye hardware isn't facilitating the lane change or the driving algorithm. So making some sort of comparison of Tesla's progress versus Mobileye's, with auto lane change as your litmus test, is kind of odd.

I'm not bolstering Mobileye. I think they are pretty far from FSD.
My point is the whole neural nets discussion wrt FSD is a bit silly. The serious competitors (Waymo, maybe Cruise) have moved far beyond these sorts of problems. NN architecture/GPU performance isn't even on their radar (HA!). They are generally solving much more difficult problems like predicting pedestrians and other cars, understanding unusual road conditions/situations, etc.
 
  • Helpful
  • Like
Reactions: rnortman and GWord
I'm not bolstering Mobileye. I think they are pretty far from FSD.
My point is the whole neural nets discussion wrt FSD is a bit silly. The serious competitors (Waymo, maybe Cruise) have moved far beyond these sorts of problems. NN architecture/GPU performance isn't even on their radar (HA!). They are generally solving much more difficult problems like predicting pedestrians and other cars, understanding unusual road conditions/situations, etc.

That makes sense. Thanks!
 
The serious competitors (Waymo, maybe Cruise) have moved far beyond these sorts of problems. NN architecture/GPU performance isn't even on their radar (HA!). They are generally solving much more difficult problems like predicting pedestrians and other cars, understanding unusual road conditions/situations, etc.

Waymo has a job opening for an engineer to use deep neural networks to predict the behaviour of "road users". Perhaps Waymo is innovating the architecture of the neural network it uses for prediction. We don't know.

Waymo doesn't need to worry about compact, efficient, affordable computing hardware because, if it wants, it can just put a huge $50,000 computer and a Powerwall in the trunk. Tesla has to worry about its hardware going into production cars.


That title and article seem to be inaccurate because, as far as I understand, Mobileye is using deep supervised learning for perception and deep reinforcement learning (the same technology as AlphaGo) for driving policy.

Also, the title hyphenates "deep learning", which is incorrect, and maybe a hint that something was lost in translation.
 
Waymo doesn't need to worry about compact, efficient, affordable computing hardware because, if it wants, it can just put a huge $50,000 computer and a Powerwall in the trunk. Tesla has to worry about its hardware going into production cars.

By the way, so everyone knows: it's been reported that Waymo uses a 50 TFLOPS chip from Intel. Also, Waymo's director of engineering talks (around the 36-minute mark of the video) about the optimizations they do for their NNs: how they avoid fully connected layers, use embeddings when they need to, etc. Waymo says they only need millions, not billions (or anywhere close to that), because of how improved their architecture is.
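A rough parameter-count sketch of why avoiding big fully connected layers (and using embeddings for categorical inputs) keeps a network in the millions rather than billions of weights. The shapes are made up; this is not Waymo's actual architecture.

```python
# Back-of-the-envelope parameter counts; all shapes are illustrative.

def fc_params(in_features: int, out_features: int) -> int:
    return in_features * out_features + out_features        # weights + biases

def conv_params(k: int, in_ch: int, out_ch: int) -> int:
    return k * k * in_ch * out_ch + out_ch

flattened_map = 128 * 128 * 64                               # a flattened CNN feature map
print(f"FC over the whole map -> 4096 units: {fc_params(flattened_map, 4096):,}")  # ~4.3 billion
print(f"3x3 conv, 64 -> 64 channels:         {conv_params(3, 64, 64):,}")          # ~37 thousand
print(f"Embedding table, 100k ids x 32 dims: {100_000 * 32:,}")                    # 3.2 million
```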

A huge computer and a Powerwall work when you are in early development, but not when you are trying to go into production.
 