HW2.5 capabilities

Actually I think it was. They said EAP would use 4 of the 8 cameras, and we already know two of the cameras are the front normal and front narrow. We also know EAP includes automatic lane changes, which means it also needs the two side cameras in the front left and right quarter panels.

So for EAP the cameras are: Front Narrow, Front Normal, Front Left Rear Facing, Front Right Rear Facing.
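Purely to illustrate that reasoning, here's a quick sketch of the split. The camera names are the informal labels used in this thread plus my own labels for the rest of the HW2 suite, and the EAP subset is the inference above, not an official Tesla spec.

```python
# Illustrative sketch only: camera names are informal labels from this thread,
# and the EAP subset is the poster's inference, not an official Tesla list.
HW2_CAMERAS = [
    "front_narrow", "front_normal", "front_wide",
    "front_left_rear_facing", "front_right_rear_facing",   # quarter-panel repeaters
    "left_side_forward", "right_side_forward",             # B-pillar cameras
    "rear",
]

# EAP as described above: 4 of the 8 cameras, i.e. the two forward cameras plus
# the two rear-facing quarter-panel cameras needed for automatic lane changes.
EAP_CAMERAS = {"front_narrow", "front_normal",
               "front_left_rear_facing", "front_right_rear_facing"}

def cameras_for(feature: str) -> set:
    """Assumed camera subset per feature tier (FSD presumed to use all eight)."""
    return EAP_CAMERAS if feature == "EAP" else set(HW2_CAMERAS)

print(sorted(cameras_for("EAP")))
```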

Your logic makes sense, and I'm sure it's the logic Tesla used to make the actual decisions.

My point was that while the blog didn't tell you which cameras by name, Tesla had sent an email that did identify them specifically.
 
I believe it's cheating. A good system should be able to work whether you place it in a city or on a back-country road it's never seen before.

Yesterday I was navigating for a group and an entire neighborhood was not on Google Maps; I highly doubt it's been mapped by some lidar mapping service. Would self-driving simply stop working in this neighborhood if it wasn't previously mapped? If your system relies on lidar data, then all your cars need lidar. There's no way around that for Level 5 when relying on such data.

The easiest way to avoid that situation is to have the cars not rely on lidar data :)

Granted, in the above situation with a neighborhood not being mapped, something that relies on other kinds of mapping wouldn't be able to navigate either, but at least it'd be able to drive, make turns, etc.
Isn't that how a lot of the earliest self-driving demos operated? Any location they traveled was mapped in extreme detail beforehand, so the task was made a lot easier (since, for example, the exact locations of the traffic signals, stop signs, other traffic controls, and fixed obstacles are already known). However, if they were dropped into a location that was never mapped before, the car would be extremely crippled (as @jimmy_d put it, the vision only played a supplementary role; it primarily depended on accurate mapping done beforehand to recognize the environment).

This does not match how a human works. If you throw someone in an unknown place, they may be lost, but would still be able to drive without any problem (recognizing traffic signals and signs).

I agree that scheme is pretty much cheating, esp when the Cadillac doesn't have lidar itself (so can't generate/update the same data in real time or at least in a crowdsourced way).
 
Mobileye's EyeQ3 was built on deep neural networks. You were clearly looking in all the wrong places, or not even looking at all.

Well Mr. Bladerskb, if you want to suggest that I'm lying that's fine, but it would be nice if you could back that up with some data. Otherwise I might be inclined to assume you're just trolling me. I would be willing to accept any Mobileye documents authored prior to Oct 2014 which include a reference to neural networks. If you intend to belatedly search for such material, may I suggest consulting the Internet Archive snapshot of Mobileye's website at:

Mobileye - Our Vision. Your Safety.

This is the snapshot from the day that Tesla announced autopilot. I have not been able to find any reference to neural networks of any type in that record. And you might be interested in looking through their published "research" papers as of that date, none of which mention neural networks.

As an aside, I do not believe that Mobileye's lack of NNs at that point in time was necessarily a blot on their record. NNs for use in imaging applications didn't break into the mainstream until after the 2012 ImageNet results. Nobody had deployed a hardware system that was "built on neural networks" as of 2014. There was vision hardware that had been belatedly adapted to work with neural networks at that point, but no commercial vision product which had been built from the ground up with NNs as the core operating algorithm. Of course Mobileye has since then accepted the transition to neural networks, just as many resistant organizations have had to, and I'm sure that the silicon they are selling today has better support for it than the EyeQ2 did (the most recent part for which a public spec was available in Oct 2014).
 
Mobileye most certainly was using a neural net in the EyeQ3 chip that does object & path recognition and path planning in AP1. It was simply a mature, pre-trained neural net (or rather a whole group of specific neural nets used for different specific purposes) provided by Mobileye to Tesla.

That's an interesting assertion. I have yet to get my hands on a real EyeQ3 spec - where did you find it? Or alternatively, can you share it with me?

Not sure what you mean by 'pre-trained' neural network. All neural networks used in commercial products today are trained prior to deployment; is that what you're referring to? Or are you referring to some kind of semi-manual process where network weights are constrained prior to conventional backpropagation training? What I've been able to glean about ME's approach is that their kernels were hand-made.
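To make that distinction concrete, here is a generic toy example (nothing to do with Mobileye's or Tesla's actual code): a classical vision pipeline applies a fixed, hand-designed kernel, while a neural-network-style approach starts from random weights and fits them to data by gradient descent.

```python
# Generic toy example (not Mobileye's or Tesla's code): hand-designed kernel vs.
# a kernel whose weights are learned from data by gradient descent.
import numpy as np

def conv2d(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Valid 2D cross-correlation of a single-channel image with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

rng = np.random.default_rng(0)
image = rng.random((24, 24))

# Classical approach: a fixed, hand-designed kernel (a Sobel edge detector).
sobel_x = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
target = conv2d(image, sobel_x)

# NN-style approach: random initial weights, fitted by gradient descent on a
# mean-squared-error loss instead of being designed by hand.
learned = rng.normal(scale=0.1, size=(3, 3))
lr = 0.5
for _ in range(200):
    err = conv2d(image, learned) - target            # dLoss/dprediction (up to 1/N)
    grad = np.zeros_like(learned)
    for i in range(err.shape[0]):
        for j in range(err.shape[1]):
            grad += err[i, j] * image[i:i + 3, j:j + 3]
    learned -= lr * grad / err.size

print("max |learned - sobel_x| =", np.abs(learned - sobel_x).max())  # should be tiny
```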
 
EyeQ3 is just hardware... you can run DNNs on it along with other algorithms available from their SDK. Amnon's prior research involved deep networks before 2014.

Here's a data sheet:
http://www.solder.net/components//com_chronoforms/uploads/Contact//20170313175315_EYEQ3 Data sheet.pdf

They call out "Deep Layered Networks" in their SEC filing of March 2015. Remember that they only became a publicly traded company in Aug 2014.

Here's a chip described in 2012 for this purpose as well:
NeuFlow: Dataflow vision processing system-on-a-chip - IEEE Conference Publication
full paper: http://yann.lecun.com/exdb/publis/pdf/farabet-ecvw-11.pdf
 
EyeQ3 is just hardware... you can run DNNs on it along with other algorithms available from their SDK. Amnon's prior research involved deep networks before 2014.

Here's a data sheet:
http://www.solder.net/components//com_chronoforms/uploads/Contact//20170313175315_EYEQ3 Data sheet.pdf

They call out "Deep Layered Networks" in their SEC filing of March 2015. Remember that they only became a publicly traded company in Aug 2014.

Here's a chip described in 2012 for this purpose as well:
http://ieeexplore.ieee.org/document/6292202/
That didn't load, but here is Google's cached one: EyeQ3™ Data Sheet
 
EyeQ3 is just hardware... you can run DNNs on it along with other algorithms available from their SDK. Amnon's prior research involved deep networks before 2014.

Here's a data sheet:
http://www.solder.net/components//com_chronoforms/uploads/Contact//20170313175315_EYEQ3 Data sheet.pdf

They call out "Deep Layered Networks" in their SEC filing of March 2015. Remember that they only became a publicly traded company in Aug 2014.

Here's a chip described in 2012 for this purpose as well:
NeuFlow: Dataflow vision processing system-on-a-chip - IEEE Conference Publication
full paper: http://yann.lecun.com/exdb/publis/pdf/farabet-ecvw-11.pdf
The SEC filing you refer to seems to relate to EyeQ4, not EyeQ3 if this is the one you are referring to:
https://www.sec.gov/Archives/edgar/data/1607310/000157104915001611/t1500431_ex99-1.htm

I did look through that datasheet for EyeQ3 and there is no mention of similar terms. I'm usually pretty good with googling and tried to find any Mobileye references to neural networks from before October 2014 as @jimmy_d challenged, but failed to find any.
 
The SEC filing you refer to seems to relate to EyeQ4, not EyeQ3 if this is the one you are referring to:
https://www.sec.gov/Archives/edgar/data/1607310/000157104915001611/t1500431_ex99-1.htm

I did look through that datasheet for EyeQ3 and there is no mention of similar terms. I'm usually pretty good with googling and tried to find any Mobileye references to neural networks from before October 2014 as @jimmy_d challenged, but failed to find any.
I'm referring to this
http://s2.q4cdn.com/670976801/files/doc_financials/2014/Mobileye-Form-20-F-2014.pdf
filed March 2015 for the period ending 12/31/14

From the Q4 2014 earnings call:

Our second and much more intensive challenge we successfully addressed through 2014 was and still is the preparation for automated driving launches slated for the 2016 timeframe. The challenge consists of adding new industry first customer functions, most notable is traffic light detection and actuation on red light crossing which will be launched in the US later this year by one of our OEM customers, and multiple additional functions that form a natural growth of our existing capabilities.

But most importantly is the introduction of a new set of algorithmic capabilities centered around deep learning networks that were designed to support two major new functions. One is the - free space, where the system outputs the category label for every pixel in the image and determines where the host car is free to drive and the second is the holistic path planning feature, which provides the forward driving path in situations where the lane markings are non-existent or too weak to rely on.

Those two functions form the backbone of hands-free driving, where the steering control needs to know the location of the safe and unsafe zones to drive.

The initial launch of these functions will begin later this year in the US. Deep learning networks leverages two strong features of Mobileye. The first is that we have a very big and unbiased data base that can be used for training the networks and second is that our EyeQ3 chip has a very high utilization, above 90%, for the network operation. We spent much effort in designing compact networks and problem modeling to allow realtime performance at minimal chip capacity.

Our free space and holistic path planning together takes around 5% of the EyeQ3 capacity. We believe that Mobileye deployment of deep networks algorithms later this year will constitute the first deep networks running in production on an embedded platform in any industry, not only automotive.
...
https://s.t.st/media/xtranscript/2015/Q1/13062243.pdf

Basically they are saying that they were designing and testing DNNs with EyeQ3 through 2014. Including the pretrained networks in the production SDK came later, in 2015.
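For anyone unfamiliar with the "free space" output described in that call: the idea is a per-pixel category label from which the drivable region can be read off. Here is a toy sketch of what such an output might look like (my own illustration, with made-up labels and a made-up helper; nothing to do with Mobileye's implementation):

```python
# Toy illustration of a "free space" output: every pixel gets a category label,
# and the drivable boundary is read off per image column. Not Mobileye code.
import numpy as np

ROAD, CAR, SIDEWALK, SKY = 0, 1, 2, 3

def free_space_boundary(labels: np.ndarray) -> np.ndarray:
    """For each image column, return the row index of the topmost drivable pixel,
    scanning upward from the bottom of the image (bottom row = closest to the car)."""
    h, w = labels.shape
    boundary = np.zeros(w, dtype=int)          # 0 means drivable all the way up
    for col in range(w):
        for row in range(h - 1, -1, -1):
            if labels[row, col] != ROAD:
                boundary[col] = row + 1        # first non-drivable row, plus one
                break
    return boundary

if __name__ == "__main__":
    # 6x8 fake segmentation: sky on top, a car blocking columns 3-5, road below.
    seg = np.full((6, 8), ROAD)
    seg[0:2, :] = SKY
    seg[2:4, 3:6] = CAR
    print(free_space_boundary(seg))            # [2 2 2 4 4 4 2 2]
```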
 
Have a nice weekend! Just found this, don't know if it's too vague to reveal.

The Evolution of EyeQ - Mobileye

"Mobileye has been able to achieve the power-performance-cost targets by employing proprietary computation cores (known as accelerators), which are optimized for a wide variety of computer-vision, signal-processing, and machine-learning tasks, including deep neural networks. These accelerator cores have been designed specifically to address the needs of the ADAS and autonomous-driving markets. Each EyeQ® chip features heterogeneous, fully programmable accelerators; with each accelerator type optimized for its own family of algorithms. This diversity of accelerator architectures enables applications to save both computation time and chip power by using the most suitable core for every task. Optimizing the assignment of tasks to cores thus ensures that the EyeQ® provides “super-computer” capabilities within a low-power envelope to enable price-efficient passive cooling.

The fully programmable accelerator cores are as follows:

  • The Vector Microcode Processors (VMP), which debuted in the EyeQ®2, is now in its 4th generation of implementation in the EyeQ®5. The VMP is a VLIW SIMD processor, with cheap and flexible memory access, provides hardware support for operations common to computer vision applications and is well-suited to multi-core scenarios.
  • The Multithreaded Processing Cluster (MPC) was introduced in the EyeQ®4 and now reaches its 2nd generation of implementation in the EyeQ®5. The MPC is more versatile than any GPU and more efficient than any CPU.
  • The Programmable Macro Array (PMA) was introduced in the EyeQ®4 and now reaches its 2nd generation of implementation in the EyeQ®5. The PMA enables computation density nearing that of fixed-function hardware accelerators without sacrificing programmability."
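To illustrate the general idea of "assigning tasks to the most suitable core": the core names below come from the quote above, but the routing table and dispatch function are purely hypothetical, just a sketch of heterogeneous scheduling.

```python
# Purely hypothetical sketch of heterogeneous dispatch: map each algorithm family
# to the accelerator class best suited for it. Core names come from the Mobileye
# text quoted above; the task-to-core table is my own invention for illustration.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    family: str   # e.g. "cnn", "dense_vision", "general"

# Hypothetical routing table: which accelerator handles which family of algorithms.
CORE_FOR_FAMILY = {
    "cnn": "PMA",            # near-fixed-function density for conv-heavy work
    "dense_vision": "VMP",   # VLIW SIMD for classic computer-vision kernels
    "general": "MPC",        # multithreaded cluster for control-heavy code
}

def dispatch(tasks: list) -> dict:
    """Group tasks by the core type assumed to run them."""
    plan = {core: [] for core in set(CORE_FOR_FAMILY.values())}
    for t in tasks:
        plan[CORE_FOR_FAMILY[t.family]].append(t.name)
    return plan

if __name__ == "__main__":
    work = [Task("free_space_net", "cnn"),
            Task("lane_edge_filter", "dense_vision"),
            Task("object_tracker", "general")]
    print(dispatch(work))
```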
 
I'm referring to this
http://s2.q4cdn.com/670976801/files/doc_financials/2014/Mobileye-Form-20-F-2014.pdf
filed March 2015 for the period ending 12/31/14

From the Q4 2014 earnings call:



Basically they are saying that they were designing and testing DNNs with EyeQ3 through 2014. Including the pretrained networks in the production SDK came later, in 2015.
That matches @jimmy_d's characterization, then, about not finding any references to neural networks before October 2014. They were still working on it at that time and didn't launch it until later in 2015 (which explains why there are no public references).
 
Have a nice weekend! Just found this, don't know if it's too vague to reveal.

The Evolution of EyeQ - Mobileye

"Mobileye has been able to achieve the power-performance-cost targets by employing proprietary computation cores (known as accelerators), which are optimized for a wide variety of computer-vision, signal-processing, and machine-learning tasks, including deep neural networks. These accelerator cores have been designed specifically to address the needs of the ADAS and autonomous-driving markets. Each EyeQ® chip features heterogeneous, fully programmable accelerators; with each accelerator type optimized for its own family of algorithms. This diversity of accelerator architectures enables applications to save both computation time and chip power by using the most suitable core for every task. Optimizing the assignment of tasks to cores thus ensures that the EyeQ® provides “super-computer” capabilities within a low-power envelope to enable price-efficient passive cooling.

The fully programmable accelerator cores are as follows:

  • The Vector Microcode Processors (VMP), which debuted in the EyeQ®2, is now in its 4th generation of implementation in the EyeQ®5. The VMP is a VLIW SIMD processor, with cheap and flexible memory access, provides hardware support for operations common to computer vision applications and is well-suited to multi-core scenarios.
  • The Multithreaded Processing Cluster (MPC) was introduced in the EyeQ®4 and now reaches its 2nd generation of implementation in the EyeQ®5. The MPC is more versatile than any GPU and more efficient than any CPU.
  • The Programmable Macro Array (PMA) was introduced in the EyeQ®4 and now reaches its 2nd generation of implementation in the EyeQ®5. The PMA enables computation density nearing that of fixed-function hardware accelerators without sacrificing programmability."
According to the Internet Archive, that page didn't exist until 2017.
 
I believe it's cheating. A good system should be able to work whether you place it in a city or on a back-country road it's never seen before.
It isn't cheating. Just because a system is disabled in a particular scenario doesn't mean it doesn't work in that scenario.
Supercruise works just fine in the city or back country, just like Volvo Pilot Assist 2 or AP1, which all use the same system. It's being limited to the highway for a specific reason.

Not everyone has the same strategy. Have you wondered why Audi never had a full-speed driver assist even though they had a good L3 full-speed system as far back as 2015? Because their strategy has always been Traffic Jam Assist > Traffic Jam Pilot > Highway Speed Pilot.

They couldn't release an L2 highway-speed system and then the next year release a traffic jam pilot, because the modes would confuse drivers.

Every automaker has a different strategy, none are the same.

GM's strategy is first hands-free L2 Supercruise, then eyes-free/mind-free (L3/L4) Supercruise 2 in 2019/2020.

Would self-driving simply stop working in this neighborhood if it wasn't previously mapped?

Nope. I think what people fail to realize is that the maps are just another layer of an SDC system. They are another layer of redundancy. An SDC can drive without HD maps. The difference being, in a mapped area your disengagement rate could be 1 in 200k miles, while in an unmapped area it's 1 in 50k miles.
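To put rough numbers on that: the two rates are the ones above, while the 80% mapped-mileage share is a made-up assumption purely for illustration.

```python
# Back-of-the-envelope: blended disengagement rate under partial map coverage.
# The two per-mile rates are the ones quoted above; the 80% mapped-mileage share
# is an arbitrary assumption for the example.
miles_per_diseng_mapped = 200_000
miles_per_diseng_unmapped = 50_000
mapped_share = 0.80                      # assumed fraction of miles on mapped roads

# Disengagements per mile in each regime, weighted by mileage share.
rate = mapped_share / miles_per_diseng_mapped + (1 - mapped_share) / miles_per_diseng_unmapped
print(f"blended: one disengagement every {1 / rate:,.0f} miles")   # -> 125,000 miles
```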

If your system relies on lidar data, then all your cars need lidar. There's no way around that for Level 5 when relying on such data.

Again, this is actually not true. But like I said, every automaker has a different strategy. GM mapping out the highways is simply another step towards their SP2 system, which will have multiple forward lidars and will update the maps through the cloud. SP1 doesn't need lidar.

GM SP1 uses lidar for slowing and handling curves and as another layer of lane redundancy, just like Tesla uses GPS mapping for slowing and handling curves and as another layer of lane redundancy.

The easiest way to avoid that situation is to have the cars not rely on lidar data :)

This is a failure to understand other companies' strategies, plus the idea that everyone has to do things the way Tesla does. GM will have lidar sensors in their L4 highway system just like Audi does; they know exactly what they are doing. They don't need to strap on a bunch of useless sensors like Tesla and call it FSD in an attempt to sell cars. They will progressively set up all the tech and groundwork necessary for their system release date, which is 2019/2020. This lidar highway data is simply part of the plan.

Granted, in the above situation with a neighborhood not being mapped, something that relies on other kinds of mapping wouldn't be able to navigate either, but at least it'd be able to drive, make turns, etc.

Again maps are simply another layer of redundancy.

Everyone has different types of them. Tesla currently uses a crowdsourced GPS map for somewhat accurate lanes and for turning on curved roads; they call this fleet-learned roadway curvature [1]. They are also tagging radar information into the GPS map for whitelisting overhead signs and bridges. They also use other third-party maps for different things.
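If anyone is curious how that radar whitelisting could work in principle, here is a simplified sketch of geocoded whitelisting. The helper, grid size, and whitelist contents are all my own invention for illustration, not Tesla's actual implementation.

```python
# Simplified illustration (not Tesla's actual code) of geocoded radar whitelisting:
# stationary radar returns at locations previously confirmed to be overhead
# structures are ignored rather than braked for.
def grid_key(lat: float, lon: float, cell_deg: float = 0.0001):
    """Quantize a GPS position onto a coarse grid (roughly 10 m cells in latitude)."""
    return (round(lat / cell_deg), round(lon / cell_deg))

# Hypothetical crowdsourced whitelist: grid cells where a stationary radar target
# was repeatedly observed while cars drove under it without incident.
WHITELIST = {grid_key(37.79501, -122.39412), grid_key(37.79630, -122.39155)}

def should_brake_for(stationary_target: bool, lat: float, lon: float) -> bool:
    """Ignore stationary returns inside whitelisted cells (likely overhead signs/bridges)."""
    if not stationary_target:
        return True                      # moving targets are always considered
    return grid_key(lat, lon) not in WHITELIST

print(should_brake_for(True, 37.79501, -122.39412))   # False: known overpass cell
print(should_brake_for(True, 37.80000, -122.40000))   # True: not whitelisted
```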

They are now trying to create HD maps for their FSD.

GM has created their own lidar map for all highways, and they are also in Mobileye's REM HD Maps program, so they will utilize and contribute to REM. GM will use EyeQ4 in their L3/L4 highway program using REM. Since they are using Mobileye, there is no more reason to create their own crowdsourced HD maps.

Their subsidiary Cruise is also doing its own mapping of 100 cities for L4 robotaxi city deployment.

Mercedes, in their new Drive Pilot 4.5, uses HERE maps; this also helps them handle curves, slow and turn at T intersections, and handle roundabouts.

Volvo and Audi will both use Mobileye's REM HD maps.

If GM is cheating, then Tesla and everyone else are also cheating, simply because they all have different strategies.
 
Isn't that how a lot of the earliest self-driving demos operated? Any location they traveled was mapped in extreme detail beforehand, so the task was made a lot easier (since, for example, the exact locations of the traffic signals, stop signs, other traffic controls, and fixed obstacles are already known).

However, if they were dropped into a location that was never mapped before, the car would be extremely crippled (as @jimmy_d put it, the vision only played a supplementary role; it primarily depended on accurate mapping done beforehand to recognize the environment).

Nope.

Traffic lights, etc. are mapped because it reduces false positives and false negatives, not because the cars can't function without it.

Vision has always played the primary role in any and all SDCs, unless the car was just a GPS-following car.
Maps only play a redundancy role alongside what the car is using.

This does not match how a human works. If you throw someone in an unknown place, they may be lost, but would still be able to drive without any problem (recognizing traffic signals and signs).

Having a map of all stop lights, signs, etc doesn't mean the system can't recognize a stop light.

I agree that scheme is pretty much cheating, esp when the Cadillac doesn't have lidar itself (so can't generate/update the same data in real time or at least in a crowdsourced way).

To my understanding, you are basically saying that since Tesla is not doing it, or is doing it differently, it's cheating.

But it's not, as I explained to @JeffK. Maps are simply backups, and an L2 system especially doesn't need an always-up-to-date map, since the map is only used for handling curves and redundant lane information.

GM will use Mobileye's crowdsourced REM HD map, and will also use multiple lidars and update their already-created lidar map via OnStar when SP2 comes out.
 
Just because a system is disabled in a particular scenario doesn't mean it doesn't work in that scenario.
Actually in this case that's exactly what it means since there's no lidar data.

Not everyone has the same strategy ... their strategy has always been Traffic Jam Assist > Traffic Jam Pilot > Highway Speed Pilot.
Speaking of confusing drivers haha. They are purposely delaying safety features that others implemented years ago because they don't want to put the time and money into it.

Nope. I think what people fail to realize is that the maps are just another layer of an SDC system. They are another layer of redundancy. An SDC can drive without HD maps. The difference being, in a mapped area your disengagement rate could be 1 in 200k miles, while in an unmapped area it's 1 in 50k miles.

You said yourself previously that level 3, 4, and 5 cars should have instant disengagements. It either works in that area or it doesn't. If it doesn't work without the previously captured lidar maps, then it doesn't work or adapt to new surroundings. The thing is, for the disengagement rate that you're suggesting, they'd have to be fantastic at vision/radar only, like Tesla's approach. In reality, no one has this just yet, so without lidar these cars aren't the greatest.

SP1 doesn't need lidar.

GM SP1 uses lidar
interesting...o_O

Lidar mapping isn't providing true redundancy if you can't update the maps without it. You'd need to be constantly updating these maps. How does it provide redundancy if there was a curve in this road last year and it's not there today? What about construction?

If you aren't using crowdsourcing to update HD maps, then how do they plan to keep these updated? If they don't plan to and are just using it as a temporary crutch, that's one thing, but either way all companies are going to have to get better at the vision/radar-only approach unless they want to include lidar on every car, regardless of their rollout strategy.
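One way to think about the staleness problem is a consistency check: compare what the live sensors see against what the stored map predicts, and stop trusting the map tile when they disagree. This is a generic sketch, not any particular company's system; the function, offsets, and threshold are hypothetical.

```python
# Generic sketch (no particular company's system) of degrading gracefully when a
# stored HD-map tile disagrees with live perception: trust the map only while the
# observed lane geometry stays close to what the map predicts.
def lane_source(map_lane_offsets: list,
                seen_lane_offsets: list,
                max_disagreement_m: float = 0.5) -> str:
    """Return which lane-geometry source to use for this frame."""
    if not map_lane_offsets:
        return "vision_only"                       # unmapped road: nothing to fall back on
    if len(map_lane_offsets) != len(seen_lane_offsets):
        return "vision_only"                       # lane count changed (e.g. construction)
    worst = max(abs(m - s) for m, s in zip(map_lane_offsets, seen_lane_offsets))
    return "map_plus_vision" if worst <= max_disagreement_m else "vision_only"

# Map says two lane lines at -1.8 m and +1.8 m; the camera agrees -> use both sources.
print(lane_source([-1.8, 1.8], [-1.75, 1.82]))     # map_plus_vision
# Construction shifted the lane lines by over a meter -> the stale map gets dropped.
print(lane_source([-1.8, 1.8], [-0.5, 3.1]))       # vision_only
```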
 
Having a map of all stop lights, signs, etc doesn't mean the system can't recognize a stop light.
It also doesn't mean that it can if the map was outdated...

I think we all agree that it should be able to, regardless of having a map. It should rely on maps only when visibility is poor, like most companies do. They don't have to be created with lidar though, as evidenced by the Mobileye approach.

When maps aren't created with lidar, they are more likely to stay up to date, since cars without lidar can still crowdsource the mapping updates.
 
Actually in this case that's exactly what it means since there's no lidar data.

Map data are only for redundancy, please understand that.

Speaking of confusing drivers haha. They are purposely delaying safety features that others implemented years ago because they don't want to put the time and money into it.

And Tesla has already killed two people and caused hundreds of accidents because of their recklessness...

You said yourself previously that level 3, 4, and 5 cars should have instant disengagements.

A Level 3+ system "CANNOT" disengage, in the sense that it can't hand control back to the driver (only Level 3 can, with a duration timer). A disengagement would be a system failure and would result in a crash or potential crash.

It either works in that area or it doesn't.
There is no "either it works or it doesn't." There are rates of disengagement. Even an L5 car can fail and crash.
Rates of disengagement for an L3+ car are seen as potential accident rates. No car will ever be perfect until we automate everything, and even then systems still fail.

If it doesn't work without the previously captured lidar maps, then it doesn't work or adapt to new surroundings.

That's not how any of this works... plus you keep talking about lidar as though lidar maps are the only maps being created; there are camera SLAM maps, GPS maps, radar maps, vector maps, etc.

Mobileye's system, for example, works with or without its REM HD maps.

The thing is, for the disengagement rate that you're suggesting, they'd have to be fantastic at vision/radar only, like Tesla's approach. In reality, no one has this just yet, so without lidar these cars aren't the greatest.

Basically what you're saying is that Tesla's way is the only right way. I thought this was the autonomous forum, where you should leave your fanboyism at the door? lol

So you mean the Tesla system that is at 3 miles per disengagement? Or the Google system that is currently above 25k miles per? Or the GM Cruise system that will surpass 5k miles per? And that is at the street level, which is orders of magnitude more complicated than highway driving. Google has probably surpassed 250k miles per disengagement in highway driving, which is why they started their self-driving trucks.

Volvo once said that they were aiming for their highway L4 system to reach 400k miles per disengagement before it's ready.
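For scale, here is a rough conversion of the figures above into disengagements per driver per year, using an assumed 12,000 miles driven per year (my own round number).

```python
# Rough scale check using the figures cited above, with an assumed (my own round
# number) 12,000 miles driven per driver per year.
ANNUAL_MILES = 12_000
miles_per_disengagement = {            # as cited or targeted in the post above
    "Tesla (street)": 3,
    "GM Cruise (street)": 5_000,
    "Google/Waymo (street)": 25_000,
    "Volvo highway target": 400_000,
}
for system, miles in miles_per_disengagement.items():
    print(f"{system}: ~{ANNUAL_MILES / miles:,.2f} disengagements per driver per year")
```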

interesting...o_O

SP1 doesn't need lidar sensors because it doesn't need to live-update the maps.

Lidar mapping isn't providing true redundancy if you can't update the maps without it. You'd need to be constantly updating these maps. How does it provide redundancy if there was a curve in this road last year and it's not there today? What about construction?

An L2 system doesn't need true redundancy, which is why your Tesla AP1, which more often than not is traveling on a road without a map, can still drive in its lane and handle curves, although less accurately than in mapped areas.

If you aren't using crowdsourcing to update HD maps, then how do they plan to keep these updated? If they don't plan to and are just using it as a temporary crutch, that's one thing, but either way all companies are going to have to get better at the vision/radar-only approach unless they want to include lidar on every car, regardless of their rollout strategy.

I already said that GM will put lidar on their L3/L4 cars and will crowdsource using OnStar.

Tesla is not a vision/radar-only approach. First of all, Tesla is not even a radar approach. Tesla is a vision, GPS, and HD maps approach.

That front radar is useless and they use GPS and HD Maps.