Neural Networks
Well, he also believes the EyeQ4 can achieve L4, which even Mobileye does not claim:

https://www.mobileye.com/wp-content/uploads/2016/11/Mobileye_EyeQ_Infographic6-002.pdf

And as @Vitold pointed out, EyeQ4 was late. So, maybe Elon learned from Mobileye on product timelines?

That's pure marketing. The EyeQ4 is fully capable of even L5 self-driving; obviously you would need multiple of them, or another processor in addition.

The marketing department says, "We need to sell the newer, more expensive chip as the next best thing," so each new chip announcement gets adjusted to whatever is needed.

Look at the Nvidia Drive PX announcements: each new chip is the one that's going to get the job done, conveniently forgetting that they marketed the previous chip as getting the job done. The only things the EyeQ5 brings are open compute and, probably, a built-in driving policy from Mobileye.

Remember the Nvidia Drive PX 2 announcement? They said you would need two for Level 5 and one for Level 4. Now, all of a sudden, you need the latest and greatest Pegasus.

Same thing with Mobileye: they are already starting to market the EyeQ5 and saying you need it for Level 5.

It's a shell game. Hardware is not what solves self-driving; software does, and the EyeQ4 has all it needs. Obviously you would need to write driving policies, and additional EyeQ4s, or another chip, to run it all.
 
That's pure marketing. The EyeQ4 is fully capable of even L5 self-driving; obviously you would need multiple of them, or another processor in addition. …
So let me get this right: Tesla says that Autopilot will do around L4, and you say it can't; whereas Mobileye doesn't believe the EyeQ4 can do L5, but you do believe that. Do you know things that neither company knows? Could you tell me next week's lotto numbers?
 
That's pure marketing. The EyeQ4 is fully capable of even L5 self-driving; obviously you would need multiple of them, or another processor in addition. …

Honestly, all this sounds to me like:

[image: X-Files "I Want to Believe" poster]


Claiming that something is capable of doing something that hasn't really been proved or deployed to end users smells fishy.
 
Claiming that something is capable of doing something that hasn't really been proved or deployed to end users smells fishy.

In that case I wonder what Elon Musk smells like? :p

Anyway, in my experience the disconnect is really the software.

The hardware gets developed first, and then it gets sold with claims about what it can do, where it "only" needs software to be written for it.

But once the software is written, they discover tons of bottlenecks and caveats.

So then it requires new hardware, and the cycle continues.

There are also other things, like power consumption. Nvidia in particular typically makes large improvements in power efficiency; a good example is comparing a Jetson TX2 with a Xavier, which is significantly more power efficient for neural-network inference. So I don't consider anything said about the TX2 to be marketing speak or lies. I can't speak for anyone else, but I do want my Rover robot to have more battery life.

Will it make for lazier programmers and neural network people? Probably, but that's how it goes. The Jetson Xavier has sure made it easier to do what I want with it than before.

Spreadsheet programs do the same thing they did years ago, but at the same time amateur programmers can write some amazing stuff even if massively inefficient.

Trying to compare Tesla with Mobileye is a bit of a funny comparison. Mobileye was built on efficient technology because it had to be. They were the first ones, and they went through a LOT of development just to get to the point of misreading signs in AP1. Okay, sure, it got them right 90% of the time if not more, but it was still wrong a lot.

Tesla's technology, by contrast, was developed with a "throw everything you have at it" approach under massively aggressive timelines/budgets/etc., where the hardware was sold with the expectation that it would likely get replaced.
 
So let me get this right: Tesla says that Autopilot will do around L4, and you say it can't; whereas Mobileye doesn't believe the EyeQ4 can do L5, but you do believe that. Do you know things that neither company knows? Could you tell me next week's lotto numbers?

He claims he's always right. But the problem with that claim is that a cynic is usually right more often than not. He also only goes after the most optimistic of the Tesla bunch, not the more realistic ones.

He can be optimistic about Mobileye because he can always blame someone else if it doesn't do what he claims. He also knows that neural networks won't get us there alone. He's not even what I'd consider a vision advocate, and I'm not either. So lots of things have to come together on a vehicle to even match what Blader would bet on.

Mobileye isn't really a horse he can bet on, as they have no control over what happens when their tech is used in a vehicle. So instead he's just a frustrated individual waiting for everything he's been promised in tech demos. I think he knows the very real fear that we consumers might never get L5; it might end up being fleet vehicles only. That's very much the fear I have.

I put my money on a Tesla not because I think they will get there first, but because it will make for an interesting journey.

There isn't anything like this thread anywhere else. There is nothing in consumers' hands like it. I say this as both a good thing and a bad thing.

Blader is here because of that. He knows there isn't anywhere else to go. All the action is happening here because this is the wild west.
 
I don't see Elon as a misdirection kind of guy. Wacky and over-optimistic, yes. Intentionally misleading, no. Of course, I don't know the guy except from Twitter and YouTube... YMMV.

I wish I could believe this, but honestly, how can it be believed given the 2016 FSD demo video and everything we now know about how far along they actually were at that point? That was clearly smoke and mirrors; they were nowhere near as far along as that video implied. I doubt they were even running on HW2; they probably had a beefy set of extra GPUs in the trunk. And the wording of the FSD option description, that only "validation and regulatory approval" were required before they would enable FSD? That's pretty much the definition of "misdirection" if you ask me.

Here is the most charitable explanation: Elon justified these little "white lies" to himself because he honestly believed they were less than a year away from being able to deliver FSD features. So what's the harm in a little deceptive marketing, a little misdirection? After all, his enemies have vast resources and are always out to get Tesla, but little ol' Tesla is just trying to save the human species... playing dirty is required sometimes, right?

Sorry, I understand that kind of thinking, but I cannot forgive it or wave it away as harmless optimism. It was not harmless and a person in Elon's position has a responsibility to be, well, responsible.
 
That's pure marketing. The EyeQ4 is fully capable of even L5 self-driving; obviously you would need multiple of them, or another processor in addition.

This is the most ridiculous thing I've ever read. Do you understand that L5 driving means, among other things, driving off-road? It means driving onto a ferry? It means handling a tire blowout? Nobody -- nobody, nobody, nobody -- has an L5 system or is even really seriously working on L5 systems.

Blader, you and I are often together arguing against the Tesla fanboys, but holy cow, you are way worse than the worst Tesla fanboy when it comes to Mobileye. What is with that? Seriously, what is with your obsession with Mobileye?
 
To confirm: are you saying the single net processing all cameras is NOT the one you now believe is running the car, and that instead it's individual NNs processing each camera independently, like v8?

Just trying to understand.

I believe he means the current hardware iteration doesn't have enough compute capability to run the "new" singular v9 NN.

It's a shell game. Hardware is not what solves self-driving; software does, and the EyeQ4 has all it needs. Obviously you would need to write driving policies, and additional EyeQ4s, or another chip, to run it all.

Given that hardware is a "finite" resource for software to run on, both would be what solves self-driving, not one or the other.
 
4K is unnecessary, as it's not about absolute resolution. As long as there is sufficient relative resolution (pixels per degree), which can be chosen via different focal lengths for different distances, the comparatively low resolution of 1280x960 can suffice.
Case in point: maybe the same result could be achieved with a single fixed-focal-length 4K camera as with three lower-res cameras at different focal lengths. But multiple lower-res cameras are probably better computationally, for the NN, than one gigantic sensor.
And that's not even considering high-res sensor problems such as readout speed, noise, and heat buildup.
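
To put rough numbers on the "relative resolution" point, here's a back-of-the-envelope sketch. The pixel widths and fields of view below are hypothetical, chosen only for illustration, not any vendor's actual specs:

```python
# Pixels per degree is what matters for resolving distant objects.
# (pixel width, horizontal FOV in degrees) -- hypothetical values.
cameras = {
    "wide 1280px (120 deg)":    (1280, 120),
    "main 1280px (50 deg)":     (1280, 50),
    "narrow 1280px (25 deg)":   (1280, 25),
    "single 4K wide (120 deg)": (3840, 120),
}
for name, (h_pixels, h_fov_deg) in cameras.items():
    print(f"{name}: {h_pixels / h_fov_deg:.0f} px/deg")
# narrow 1280px -> ~51 px/deg vs. single 4K wide -> 32 px/deg:
# the low-res narrow camera resolves distant objects better.
```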
 
4K is unnecessary, as it's not about absolute resolution. … But multiple lower-res cameras are probably better computationally, for the NN, than one gigantic sensor.

That's not how it works. The primary image goes through convolution layers, which effectively scan the next layer's filters over the whole image, looking for recognizable features in any part of it. If doing this in hardware, you'd just want to do it in parallel; that means more "neurons" for that layer, but not more weights for the next layer. You can just as well do it serially, if required to fit your hardware, at the cost of speed. The whole net does not get multiplied by the number of pixels.

edit - I should also say the size constraints were serious while using Nvidia's hardware; they are, after all, GPUs. With Tesla's own hardware, they can make the net as big as they want, for training or inference.
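
For what it's worth, here's a minimal sketch of the weight-sharing point (PyTorch, with arbitrary layer sizes I picked for illustration): a convolution layer's parameter count is fixed by its kernels, so a higher-resolution input grows the activations and the compute, but not the number of weights.

```python
import torch
import torch.nn as nn

# One conv layer: 3 input channels, 16 filters of size 5x5 (arbitrary sizes).
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5, padding=2)
n_weights = sum(p.numel() for p in conv.parameters())
print(n_weights)  # 3*16*5*5 + 16 = 1216 weights, independent of image size

small = torch.randn(1, 3, 96, 128)    # low-res frame
large = torch.randn(1, 3, 960, 1280)  # 100x the pixels
print(conv(small).shape)  # torch.Size([1, 16, 96, 128])
print(conv(large).shape)  # torch.Size([1, 16, 960, 1280])
# The same 1216 weights are scanned over every position: activations scale
# with pixel count (run serially or in parallel), the net itself does not.
```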
 
This is the most ridiculous thing I've ever read. Do you understand that L5 driving means, among other things, driving off-road? It means driving onto a ferry? It means handling a tire blowout? …

Driving off-road is not a barometer for L5 driving, as most human drivers are not capable of it. Grab a random driver on the street, tell him you're feeling sleepy, have him drive you off-road, and you will wake up dead!


Now, on the other hand, easy paved trails are like taking candy from a baby for SFS and HPP NN models. That's how the EyeQ3 is able to predict the drivable path in snow. I have no doubt that Mobileye has SFS and HPP models running on the EyeQ4, because they showed them running at the 2014 EyeQ3 launch, handling snow.

[image: EyeQ3 drivable-path prediction in snow]



Handling unpaved roads, like a dirt road, would be simple, as the path is laid out for you already, and of course you have obstacle detection to avoid hitting anything. Driving onto ferries would be the same as driving into a parking lot or through a toll booth. A tire blowout must already be handled under SAE L4.

So yes, the EyeQ4 has all the perception capability necessary for L5 driving, especially with REM mapping. What @jimmy_d's article has shown is that Tesla is brute-forcing its way to a better perception system, while Mobileye constructed and cultivated its system from scratch, like fine wine, which is why it needs only a ridiculously low 2.5 TFLOPS.

People talk about the vast amount of training data needed (trillions) for Tesla's latest NN as though it's a good thing. If Tesla Vision needs trillions of samples to reach a sub-par level of detection, then that's horrific.

I know @strangecosmos is salivating over the potential of billions/trillions being required, so he can prove his miles-data theory. But what Jimmy is failing to tell him is that data augmentation accounts for much of the data used to train a NN: for example, if you have 1 million pictures, you turn them into 100 million through augmentation; if you have 100 million, you turn them into 1 billion. You get my point. There are other techniques as well. These are the fine details @jimmy_d is gladly leaving out, leading Tesla fans to lose their minds in ecstasy, only to be disappointed later.
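
To illustrate the augmentation point with a toy sketch (NumPy, made-up transforms; real pipelines use many more): each source image can spawn several label-preserving variants, so a headline sample count can be far larger than the number of real-world frames actually collected.

```python
import numpy as np

def augment(img: np.ndarray) -> list:
    """Return label-preserving variants of one image (toy example)."""
    out = [img, np.fliplr(img)]                            # original + mirror
    out += [np.clip(img * g, 0, 255).astype(img.dtype)     # brightness shifts
            for g in (0.8, 1.2)]
    out += [np.roll(img, s, axis=1) for s in (-8, 8)]      # horizontal jitter
    return out

frames = [np.random.randint(0, 256, (96, 128, 3), dtype=np.uint8)
          for _ in range(1000)]
augmented = [v for f in frames for v in augment(f)]
print(len(frames), "->", len(augmented))  # 1000 -> 6000, before crops, noise, etc.
```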
 
Driving off-road is not a barometer for L5 driving, as most human drivers are not capable of it. Grab a random driver on the street, tell him you're feeling sleepy, have him drive you off-road, and you will wake up dead!

A typical terrible human driver, upon arriving at an outdoor event (say, a concert) with lawn parking (i.e., driving on grass with no markings or pathways at all), can easily follow cues from other drivers and from people directing traffic to navigate to a suitable parking spot, and then navigate back out, even when other humans do stupid things. My departed grandparents' house in the mountains required you to drive up what was essentially a creek bed to get to the house, and even without any special off-road training I was able to do this, as long as the creek wasn't too high. Then you had to cross a very narrow, rickety wooden bridge to get over the ditch and up to the house. Yup, it made me nervous, sure, but no problem. That's L5, in addition to all the crazy urban and highway tasks that L5 also requires.

I often say that L5 driving means driving in a sandstorm during a zombie apocalypse with passengers hanging out the windows firing shotguns at the pursuing zombie hordes, because in principle a human driver could do that, no GPS required. But you don't have to go that extreme to find situations that no existing autonomy system is even close to being able to handle, and even claiming that we know what kind of hardware or software will be required to handle them is pure folly.
 
I often say that L5 driving means driving in a sandstorm during a zombie apocalypse with passengers hanging out the windows firing shotguns at the pursuing zombie hordes, because in principle a human driver could do that, no GPS required. …

I think you are conflating the different parts of an autonomous driving system with what I'm claiming. There's sensing (perception), mapping (10 cm-accurate crowd-sourced mapping), and path planning (driving using a driving policy).

The EyeQ4 is a solution for sensing and mapping, not for driving policy.
What you described above IS the department of driving policy.

Sensing will tell you where the cars in your scenario are, along with objects and, of course, people. This also includes their orientation, speed, velocity, intention, etc., including reading people directing traffic. Working with all that information is the job of planning. Tesla in AP1 already used the basic car detection in the EyeQ3 to do car-following when there were no lane markings.

Your scenario basically boils down to "Can you use the EyeQ4 to make a self-driving golf cart?", and the answer is yes!

You are also conflating L5 with intelligence. An L5 car won't handle a zombie apocalypse, because it's not intelligent. An L5 car still requires a waypoint (i.e., where to go), even though it will be able to infer one in most cases. You still have to tell it you want to go to Burger King. Sure, in the future UI/UX tech will improve so that you can say things like "I want to go over there, near the lake."

But it won't simply jump the fence or hop the curb like humans do unless it's programmed to do that (which of course no manufacturer will do), because that's not what L5 is all about. Hence you won't survive a zombie apocalypse, sorry :(

TL;DR: You are making the mistake of giving L5 intellect.
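
As a toy sketch of the split I'm describing (hypothetical types and names, not Mobileye's or anyone's actual API): the sensing/mapping stack only fills in a world model; deciding what to do with it is a separate driving-policy layer, and even that layer still needs a waypoint.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str        # "car", "pedestrian", "traffic_director", ...
    position: tuple  # (x, y) metres in the ego frame
    velocity: tuple  # (vx, vy) m/s

@dataclass
class WorldModel:
    """What a perception + mapping chip reports; it makes no decisions."""
    objects: list        # TrackedObject instances from the sensing stack
    drivable_path: list  # polyline of the predicted drivable path
    map_lanes: list      # crowd-sourced lane geometry (REM-style)

def driving_policy(world: WorldModel, waypoint: tuple) -> str:
    """Separate layer: consumes the world model and picks a manoeuvre.
    Note it still needs a waypoint -- L5 is not general intelligence."""
    if any(o.kind == "pedestrian" for o in world.objects):
        return "yield"
    if not world.drivable_path:
        return "stop"
    return "follow drivable path toward waypoint"

world = WorldModel(objects=[], drivable_path=[(0, 0), (0, 50)], map_lanes=[])
print(driving_policy(world, waypoint=(0, 50)))  # follow drivable path toward waypoint
```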
 
@Bladerskb --- what exactly are you insinuating that @jimmy_d purposefully left out? You are ascribing malicious intent to someone who has never hidden anything so far. Jimmy is taking the time to describe something novel. You claim he is withholding detrimental information, but you fail to elucidate beyond a smear. I'm giving you the opportunity to elaborate; otherwise I believe you are merely slandering a good person who doesn't deserve your petty invective.