HW2.5 capabilities

It also doesn't mean that it can if the map was outdated...

Mobileye's EyeQ3 detects and classifies traffic lights and signs, along with their position and lane assignment, without using any map.

I think we all agree that it should be able to regardless of having a map.

That's currently how all SDC systems work.
Should be able to and being allowed to are two different things.

It should rely on maps only when visibility is poor like most companies do.

Rather, it should rely on the map at all times to augment what its live vision is seeing and fill in the blanks for what it's not seeing; the map should only be disregarded when the map information doesn't match what the live vision is seeing.
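
To make that "map as prior, vision as ground truth" policy concrete, here is a minimal sketch of the decision logic; this is not any vendor's actual code, and the function name, data format, and agreement threshold are all made up for illustration.

```python
# Minimal sketch of "map as prior, live vision as ground truth".
# Everything here (names, units, threshold) is hypothetical.

def fuse_lane_geometry(map_lanes, vision_lanes, agreement_threshold_m=0.5):
    """Pick the lane geometry to drive against.

    map_lanes / vision_lanes: lists of lane-center points (x, y) in meters,
    in the vehicle frame, from the HD map and the live camera pipeline.
    """
    if not vision_lanes:
        # Nothing seen live (glare, occlusion, crest): fall back to the map prior.
        return map_lanes
    if not map_lanes:
        # No map coverage here: live vision is all we have.
        return vision_lanes

    # Compare only the stretch of map that overlaps what vision can see.
    overlap = min(len(map_lanes), len(vision_lanes))
    disagreement = max(
        abs(mx - vx) + abs(my - vy)
        for (mx, my), (vx, vy) in zip(map_lanes[:overlap], vision_lanes[:overlap])
    )

    if disagreement > agreement_threshold_m:
        # Map is stale or wrong here: disregard it and trust live vision.
        return vision_lanes

    # Map agrees with vision: keep it, since it extends beyond camera range
    # and fills in the blanks vision can't see yet.
    return map_lanes
```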

Visibility is always poor, and I don't mean visibility in terms of weather but in terms of FOV, range, and the upcoming roadway.

They don't have to be created with lidar though, as evidenced through the MobilEye approach.

I don't think anyone is saying they have to. But there are already lidar maps.

When maps aren't created through lidar, they are more likely to stay up to date, since cars without lidar can still crowdsource the mapping.

All L3/L4/L5 cars will have lidar, that I can guarantee you.

GM's L3/L4 highway system WILL also have lidar.
The highway lidar map they are creating is not a stopgap measure; it's a precursor to their Super Cruise 2 system.

GM calls Super Cruise a foundational technology for its future plans, including fully autonomous vehicles that will drive themselves on surface streets and around cities. Expect to see it on more vehicles from Cadillac and GM’s other brands, and watch for more features to make it even more user-friendly.

Super Cruise’s database includes 160,000 miles of limited-access highways in the U.S. and Canada. That detailed digital map — which GM owns exclusively for its own use — is one of the defining differences between Super Cruise and other systems.

GM is currently creating an equally detailed map of China, the next big market slated to get Super Cruise.
 
EyeQ3 is just hardware... you can run DNNs through it along with other algorithms available from their SDK. Amnon's prior research involved deep networks before 2014.

Here's a data sheet:
http://www.solder.net/components//com_chronoforms/uploads/Contact//20170313175315_EYEQ3 Data sheet.pdf

They call out "Deep Layered Networks" in their SEC filing from March 2015. Remember that they only became a publicly traded company in August 2014.

Here's a chip described in 2012 for this purpose as well
NeuFlow: Dataflow vision processing system-on-a-chip - IEEE Conference Publication
full paper: http://yann.lecun.com/exdb/publis/pdf/farabet-ecvw-11.pdf

Thanks! I searched soldernet at one point but didn't find it. Much obliged. I'm really interested in whether they explicitly supported NN primitives, or whether the architecture is a conventional vision-oriented DSP that was repurposed for NN acceleration after the fact.

Regarding ME's description in their SEC filing: I don't dispute that ME, like pretty much everyone in the space, was looking really hard at NN's by 2015. I just can't find any evidence that they had a product in the market as of Oct 2014 that was developed as a neural network accelerator. I remember looking at their jobs listings in late 2014 and they didn't have any openings for candidates focused on neural networks. At a time when *everyone* in the space was scrambling for whatever talent they could get with some NN experience the appearance of ME not even caring enough to list an opening spoke volumes about what they considered critical to their technology.

On the NeuFlow chip - it's a design for a not-yet-fabricated academic research part, not an example of a commercial IC candidate. Lots of those designs existed in FPGA implementations or in Verilog/VHDL simulations in academia and industrial research labs going way back. IBM has a whole garage full of them. LeCun (the advisor on that university project), along with Hinton and Bengio, were the guys keeping the NN flame alive all through the dark period from the 90s through to late 2012. As a (sad) aside you might note that the NeuFlow paper does not mention neural nets, even though that part was definitely an NN part. Aug 2012 was still in the era when you couldn't get anything published if your paper talked about neural nets as such.

My how times have changed.

As far as I've been able to determine so far, the first true commercial NN IC was Google's first generation TPU, which they started working on in 2013 and managed to get deployed into their datacenters in 2015, but which they didn't announce until April 2017 in this paper: [1704.04760] In-Datacenter Performance Analysis of a Tensor Processing Unit. They have a blog entry about it too:
An in-depth look at Google’s first Tensor Processing Unit (TPU) | Google Cloud Big Data and Machine Learning Blog  |  Google Cloud Platform

Now when I say this is a commercial part, it was deployed in volume for a commercial application, but the application was completely internal to Google. If you were to only consider parts that were sold onto the merchant market then I'd have to say I haven't yet seen a part that was developed from the ground up as a pure NN accelerator.
 
Now when I say this is a commercial part, it was deployed in volume for a commercial application, but the application was completely internal to Google. If you were to only consider parts that were sold onto the merchant market then I'd have to say I haven't yet seen a part that was developed from the ground up as a pure NN accelerator.

For a general purpose NN accelerator I would say Movidius was the first. You can build a model with Caffe, and then have it accelerated by the chip they designed.

Then they made a USB3 version of it so you can attach it to a Raspberry Pi.

Here is a link to it.
NCSM2450.DK Intel | Mouser
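
For a sense of what that workflow looks like end to end: the Caffe model is compiled offline into a binary graph file for the stick, and a small runtime API loads the graph and feeds it images. The sketch below is written from memory of the NCSDK 1.x "mvnc" Python bindings, so treat the exact call names, the 'graph' filename, and the fp16 input convention as assumptions rather than a definitive reference.

```python
# Rough sketch of running a compiled Caffe model on a Movidius Neural
# Compute Stick. API names are from memory of the NCSDK 1.x Python
# bindings and may not match your SDK version -- treat as assumptions.
import numpy as np
from mvnc import mvncapi as mvnc

# Find and open the stick.
devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# 'graph' is the binary produced offline by the SDK's compiler from the
# Caffe prototxt/caffemodel (the filename here is just an example).
with open('graph', 'rb') as f:
    graph_blob = f.read()
graph = device.AllocateGraph(graph_blob)

# One inference: the stick expects a preprocessed fp16 image.
image = np.random.rand(224, 224, 3).astype(np.float16)  # stand-in input
graph.LoadTensor(image, 'user object')
output, user_obj = graph.GetResult()
print(int(np.argmax(output)))  # index of the top class

# Clean up.
graph.DeallocateGraph()
device.CloseDevice()
```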

I have a couple of these at work, but for myself I vastly prefer the Jetson TX2 board from NVidia. While it's not a pure NN accelerator, it's better for general-purpose needs. Plus I don't personally feel as if you need a dedicated NN accelerator for inference, at least not yet. It certainly helps in terms of efficiency and cost, though.

Maybe one day I'll get my hands on a Tensor chip from Google, but in the meantime I'll be happy they open-sourced TensorFlow.
 
All L3/L4/L5 cars will have lidar, that I can guarantee you.

You manage to drive a car by hand without getting in an accident most days, with no interventions where you hand control to someone else, and no LIDAR, radar, or even ultrasonics, don't you?

Every AP2 car already has far more information about the environment than any human driver ever had, yet we mostly get along okay.

That means the problem *can* be solved with the current hardware and sufficiently advanced software/processing, although it *may* be more cost effective to include better sensors and spend less time/money/processing cycles on the software side.
 
Ugh - turns out that the EyeQ3 sheet is an overview, so it doesn't provide enough detail to be definitive. But the block diagram maps well onto an embedded vision processor. There's no sign that they provide, for instance, hardware support for computing the nonlinearities for neuron outputs. I'll also note that the datasheet itself doesn't mention anything that would imply it's intended for use in NNs and does not include any neural networking terminology. Additionally, the software development stack doesn't include references to any NN tools, libraries, or frameworks. Though to be fair, the development stack is little more than compiler/linker/libraries/debugger/RTOS, so that might not be saying anything.

While searching for something that included the instruction set for their vector processors, I ran across a March 2015 presentation by Mobileye's CTO/founder/principal Amnon Shashua.


It's much more pro neural network than what I recall him saying in 2014 and before, though he still seems to be relegating NNs to a subset of computer vision, and he implies they are mainly useful in academic investigations by pointing out that AlexNet (the network that won ImageNet in 2012 and which subsequently became a kind of performance benchmark) takes 6 seconds to process a single frame on the EyeQ3. He points out correctly that carefully optimized systems can get a lot more utility with a lot fewer compute cycles.

Incidentally the demos from the first part of the video are very nice and he gives a nice overview on the challenges of using vision for self driving vehicles.

But I disagree with the implication that neural nets are too slow to run on processors that can be sold into automotive applications - which require under $10 and under 2.5W power consumption according to Shashua. I don't know exactly what NVIDIA silicon is present in AP2, but it's probably Pascal architecture and possibly one of the parts optimized for inference. (Inference is basically using an NN after it's been trained, as opposed to training it. Tesla does training in their datacenter, so the car is free to be an inference-only platform.) If Tesla is using NVIDIA's P4 chip then they could, for instance, run AlexNet at 170fps for each watt of power that the chip uses, up to a maximum of 36 watts or 6120fps. That's pretty clearly fast enough to run in a car in real time - even on multiple cameras simultaneously. Of course AlexNet is not what Tesla would be running, especially since there are much better systems that have been developed in the last few years. But AlexNet will suffice as a benchmark, and it shows that you can put an NN chip that's more than 10,000x faster than the EyeQ3 in a car. It's an existence proof because there are already lots of AP2 Teslas out there. And with that kind of power it's reasonable to go end-to-end with neural networks. As painful as that must be to a company like ME, which undoubtedly invested a lot of time and money into pre-NN vision processing for this application, it happens to be the truth today. I think they deserve to be proud of what they accomplished, but technology moves forward and it's perfectly happy to move forward without you.
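
As a sanity check on those numbers, using only the figures quoted above (the fps-per-watt and power figures are this post's assumptions, not measured benchmarks):

```python
# Back-of-the-envelope check of the P4 vs. EyeQ3 comparison above.
p4_fps_per_watt = 170        # AlexNet inference rate per watt, as quoted above
p4_power_watts = 36          # power budget quoted above
eyeq3_seconds_per_frame = 6  # AlexNet on EyeQ3, per Shashua's talk

p4_fps = p4_fps_per_watt * p4_power_watts  # 6120 frames/s
eyeq3_fps = 1 / eyeq3_seconds_per_frame    # ~0.17 frames/s

print(p4_fps)              # 6120
print(p4_fps / eyeq3_fps)  # ~36,700x -- comfortably above the 10,000x figure
```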

Oh - here's NVIDIA's brochure page on P4 and related:

New Pascal GPUs Accelerate Inference in the Data Center

As a neural network dork I find it fun to think about what kind of performance a fully custom NN chip from Tesla might achieve. At a minimum it should be possible to duplicate what Google did with the TPU in 2013. That chip is kind of large, but it's implemented in 28nm and it's a pretty simple design, so a first cut done today at 20nm or 14nm would give you the same performance inside a 10W power envelope and at an acceptable price. The TPU achieves something like 95% utilization of 64K MACs at 700MHz on CNNs like AlexNet. That works out to about 54,000fps of AlexNet, or 300K times faster than the EyeQ3.
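
Running the same back-of-the-envelope math for that hypothetical TPU-class part; the per-frame AlexNet cost (roughly 0.8 billion multiply-accumulates) is my assumption, the rest comes from the estimate above:

```python
# Reproducing the "~54,000 fps, ~300K times the EyeQ3" estimate above.
macs = 64 * 1024                # 64K multiply-accumulate units
clock_hz = 700e6                # 700 MHz
utilization = 0.95              # ~95% on CNNs like AlexNet, per the post
alexnet_macs_per_frame = 0.8e9  # assumed cost of one AlexNet forward pass

macs_per_second = macs * clock_hz * utilization     # ~4.4e13 MAC/s
tpu_fps = macs_per_second / alexnet_macs_per_frame  # ~54,000 frames/s
eyeq3_fps = 1 / 6                                   # 6 s/frame on EyeQ3

print(round(tpu_fps))              # ~54,500
print(round(tpu_fps / eyeq3_fps))  # ~330,000x -- the "300K times" above
```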
 
You manage to drive a car by hand without getting in an accident most days, with no interventions where you hand control to someone else, and no LIDAR, radar, or even ultrasonics, don't you?

Every AP2 car already has far more information about the environment than any human driver ever had, yet we mostly get along okay.

That means the problem *can* be solved with the current hardware and sufficiently advanced software/processing, although it *may* be more cost effective to include better sensors and spend less time/money/processing cycles on the software side.

Call me when you can plug a computer as intelligent as a human brain into a car.

Are you saying current AP2/2.5 Model S/X/3 will never be Level 3 or above?

Absolutely. Lidar will be as cheap as radar in 2019 because all automakers will be looking to put it on their 2020/2021 cars.

Tesla will be forced to oblige. Right now they are saving millions by not putting short-range corner radars on their cars.
 
Call me when you can plug a computer as intelligent as a human brain into a car.

An additional point I like to make is: even if for a moment we assume vision can be solved to the requisite degree, different types of sensors still have different qualities in different conditions. Radar and lidar see in situations where vision can't, and vice versa. Not to mention that the more sensors you have, the fewer issues you have with sensor blockage, etc. So a complementary suite of redundant sensors does make sense...

Absolutely. Lidar will be as cheap as radar in 2019 because all automakers will be looking to put it on their 2020/2021 cars.

Tesla will be forced to oblige. Right now they are saving millions by not putting short-range corner radars on their cars.

Fair enough on lidar and radar.

However, the idea that Tesla can't reach even a scenario-limited Level 3 with AP2/2.5 hardware... not even something akin to Audi's traffic-jam pilot? I would assume that would be a fairly massive failure on Tesla's part - especially as they are selling full self-driving hardware on a mass-volume Model 3...

Don't get me wrong, I'm not saying I can't see the scenario where it unfolds like this. Especially the full self-driving and autonomous Tesla Network stuff seems hard to believe... but it still seems like such a massive failure if AP2/AP2.5 remain Level 2 driver's aids...
 
Call me when you can plug a computer as intelligent as a human brain into a car.



Absolutely. Lidar will be as cheap as radar in 2019 because all automakers will be looking to put it on their 2020/2021 cars.

Tesla will be forced to oblige. Right now they are saving millions by not putting short-range corner radars on their cars.

Won't you be busy dodging Terminators by then? :p
 
No I think he’s saying the infant alien with many eyes is going to have a love child with a shark with lasers and the offspring will rule the world.

Btw, this thread is getting more and more depressing as it goes on....

It's funny you say that because before the last few pages, I was thinking this amazing discussion was turning into the sugar tits-biznomb-smack-dizzle thread.... all the Nobel laureates were present.... then @Bladerskb and @AnxietyRanger pulled up in a virtual Audi and GM Super Cruise Cadillac, only to pop the trunk... and proceeded to unload 30 pounds of Hotz-dung into the thread.

For anyone not keeping up with the thread, Hotz-dung is what happens when you almost land a billion dollar deal with Elon only to walk away because you thought it would be cooler to sell cheap looking self-driving iPads for $1000.

Don't worry @Bladerskb and @AnxietyRanger, I'm just joking.
 
It's funny you say that because before the last few pages, I was thinking this amazing discussion was turning into the sugar tits-biznomb-smack-dizzle thread.... all the Nobel laureates were present.... then @Bladerskb and @AnxietyRanger pulled up in a virtual Audi and GM Super Cruise Cadillac, only to pop the trunk... and proceeded to unload 30 pounds of Hotz-dung into the thread.

For anyone not keeping up with the thread, Hotz-dung is what happens when you almost land a billion dollar deal with Elon only to walk away because you thought it would be cooler to sell cheap looking self-driving iPads for $1000.

Don't worry @Bladerskb and @AnxietyRanger, I'm just joking.

We all have our biases. :) I thought this thread was infinitely more informative before it was "found" by the pro-Tesla voices. Even @Bladerskb behaves nicer when people are just talking the data.

I guess argumentation begets argumentation.
 
Ok back on topic!!

I have an interesting observation from a drive yesterday, and I wonder how AP2/2.5, and really lidar, would solve it currently.

I drove up a very small hill, not more than 3-5’ in elevation, and as you crest it (let's call it a hill) the road immediately turns hard left, but the cameras don't see crap as they are effectively pointing at the sky. AP freaks out and dives to the left because it has no tracking. Even lidar, unless it's on a gimbal of sorts, would be screwed, as lidar isn't used for lane lines.

It works for humans because we can pivot our heads and watch the lanes and markings, but the cameras all face straight ahead, so how does this get solved?
 
Ok back on topic!!

I have an interesting observation from a drive yesterday, and I wonder how AP2/2.5, and really lidar, would solve it currently.

I drove up a very small hill, not more than 3-5’ in elevation, and as you crest it (let's call it a hill) the road immediately turns hard left, but the cameras don't see crap as they are effectively pointing at the sky. AP freaks out and dives to the left because it has no tracking. Even lidar, unless it's on a gimbal of sorts, would be screwed, as lidar isn't used for lane lines.

It works for humans because we can pivot our heads and watch the lanes and markings, but the cameras all face straight ahead, so how does this get solved?

Well, one of the things that all the autonomous folks have been talking about is high precision GPS and high precision mapping. In principle, the car could be taught that the road has that sharp bend, and learn to take it based on GPS even though the cameras are pointed at the sky.

For safety, though, I don't think the car can afford to have all of the cameras pointed at the sky while it is moving. FSD will be using the front wide-angle camera for a variety of tasks, and if it can see far enough down (I'm pretty sure it can see further down than a human leaning over the wheel can), then the car could use that image to do object recognition for the lane lines while the other two are tilted too far upwards. It's not ideal because of the short focal length, but it should be sufficient for catching that turn.
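
As a toy illustration of that fallback ordering (main cameras first, then the wide-angle, then the HD-map/GPS path when both are blinded by the crest), here is a sketch; every name and threshold is hypothetical and not anyone's actual control code.

```python
# Toy sketch of the crest-the-hill fallback described above.
# All names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LaneEstimate:
    centerline: List[Tuple[float, float]]  # (x, y) points in the vehicle frame
    confidence: float                      # 0..1 from the perception stack

def plan_path(main_cam: LaneEstimate, wide_cam: LaneEstimate,
              map_path: LaneEstimate, min_confidence: float = 0.6):
    """Pick a lane-geometry source while the forward cameras point skyward."""
    if main_cam.confidence >= min_confidence:
        return main_cam.centerline   # normal case: trust the main cameras
    if wide_cam.confidence >= min_confidence:
        return wide_cam.centerline   # short-focal-length camera still sees the road
    # Both camera estimates are blinded by the crest: hold the HD-map path
    # (anchored by GPS/odometry) instead of diving toward a phantom lane.
    return map_path.centerline
```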
 