Actually I think it was. They said EAP would use 4 of the 8 cameras, and we already know two of the cameras are the front normal and the front narrow. We also know EAP includes automatic lane changes, which means it also needs the two rear-facing side cameras in the front left and right quarter panels.
So for EAP the cameras are: Front Narrow, Front Normal, Front Left Rear Facing, Front Right Rear Facing.
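For what it's worth, here's that reasoning as a quick sketch. The camera names and the EAP/non-EAP split are just my reading of the posts above, not anything confirmed:

```python
# Hypothetical split of the 8 AP2 cameras; which four EAP actually uses is
# speculation based on the reasoning above (the two front cameras plus the
# two rear-facing quarter-panel cameras needed for automatic lane changes).
cameras = {
    "front_narrow":            "EAP",
    "front_normal":            "EAP",
    "front_left_rear_facing":  "EAP",
    "front_right_rear_facing": "EAP",
    "front_wide":              "not EAP",
    "left_b_pillar":           "not EAP",
    "right_b_pillar":          "not EAP",
    "rear":                    "not EAP",
}

eap = [name for name, use in cameras.items() if use == "EAP"]
print(f"{len(eap)} of {len(cameras)} cameras for EAP:", ", ".join(eap))
```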
I thought I was the only one tracking updates with a spreadsheet. Mine starts three months after yours but is nearly identical!
Yes, please. I have a Google Doc as well doing the same; perhaps we should all just unify in a central location.
Isn't that how a lot of the earliest self driving demos operated? Any location they traveled was mapped in extreme detail beforehand, so the task was made a lot easier (since for example, the exact location of the traffic signal, stop sign, other traffic controls, and fixed obstacles are already known). However, if they were dropped into a location that was never mapped before, the car would be extremely crippled (as @jimmy_d put it, the vision only played a supplementary role; it primarily depended on accurate mapping done beforehand to recognize the environment).
I believe it's cheating. A good system should be able to work no matter if you place it in a city or on a back country road it's never seen before.
Yesterday I was navigating for a group and an entire neighborhood was not on Google Maps; I highly doubt it's been mapped by some lidar mapping service. Would self driving simply stop working in this neighborhood if it wasn't previously mapped? If your system relies on lidar data then all your cars need lidar. There's no way around that for lvl 5 when relying on such data.
The easiest way to avoid that situation is to have the cars not rely on lidar data.
Granted, in the above situation with a neighborhood not being mapped, something that relies on other kinds of mapping wouldn't be able to navigate either, but at least it'd be able to drive, make turns, etc.
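To make the distinction concrete, here's a rough sketch (function names and map format are all invented) of why a stack that leans on prior maps degrades in an unmapped neighborhood, while a stack that detects everything live at least keeps driving:

```python
# Illustrative sketch only; nothing here is a real system's API.
def detect_traffic_controls(camera_frame, hd_map_tile=None):
    """Return the traffic controls relevant to the current scene."""
    if hd_map_tile is not None:
        # Map-reliant path: the prior already lists every signal and sign,
        # so perception only has to confirm them and read their state.
        return [{"type": c["type"], "position": c["position"], "source": "map"}
                for c in hd_map_tile["traffic_controls"]]
    # Unmapped neighborhood: nothing is known in advance, so everything
    # must be found from the live image -- the harder, vision-only problem.
    return detect_from_image_only(camera_frame)

def detect_from_image_only(camera_frame):
    # Stand-in for a full vision detector.
    return [{"type": "stop_sign", "position": None, "source": "vision"}]

mapped_tile = {"traffic_controls": [{"type": "traffic_light", "position": (12.3, 4.5)}]}
print(detect_traffic_controls(camera_frame=None, hd_map_tile=mapped_tile))
print(detect_traffic_controls(camera_frame=None))  # unmapped case
```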
Mobileye's EyeQ3 was built on deep neural networks. You were clearly looking in all the wrong places, or not looking at all.
Mobileye most certainly was using a neural net in the EyeQ3 chip that does object & path recognition and path planning in AP1. It was simply a mature, pre-trained neural net (or rather a whole group of specific neural nets used for different specific purposes) provided by Mobileye to Tesla.
That didn't load but here is Google's cached one: EyeQ3™ Data Sheet
EyeQ3 is just hardware... you can run DNNs through it along with other algorithms available from their SDK. Amnon's prior research involved deep networks before 2014.
here's a data sheet
http://www.solder.net/components//com_chronoforms/uploads/Contact//20170313175315_EYEQ3 Data sheet.pdf
They call out "Deep Layered Networks" in their SEC filing from March 2015. Remember that they only became a publicly traded company in Aug 2014.
Here's a chip described in 2012 for this purpose as well
http://ieeexplore.ieee.org/document/6292202/
The SEC filing you refer to seems to relate to EyeQ4, not EyeQ3, if this is the one you are referring to:
EyeQ3 is just hardware... you can run DNNs through it along with other algorithms available from their SDK. Amnon's prior research involved deep networks before 2014.
here's a data sheet
http://www.solder.net/components//com_chronoforms/uploads/Contact//20170313175315_EYEQ3 Data sheet.pdf
They call out "Deep Layered Networks" in their SEC filing from March 2015. Remember that they only became a publicly traded company in Aug 2014.
Here's a chip described in 2012 for this purpose as well
NeuFlow: Dataflow vision processing system-on-a-chip - IEEE Conference Publication
full paper: http://yann.lecun.com/exdb/publis/pdf/farabet-ecvw-11.pdf
I'm referring to this
The SEC filing you refer to seems to relate to EyeQ4, not EyeQ3, if this is the one you are referring to:
https://www.sec.gov/Archives/edgar/data/1607310/000157104915001611/t1500431_ex99-1.htm
I did look through that datasheet for EyeQ3 and there is no mention of similar terms. I'm usually pretty good with googling and tried to find any Mobileye references to neural networks from before October 2014 as @jimmy_d challenged, but failed to find any.
Our second and much more intensive challenge we successfully addressed through 2014 was and still is the preparation for automated driving launches slated for the 2016 timeframe. The challenge consists of adding new industry first customer functions, most notable is traffic light detection and actuation on red light crossing which will be launched in the US later this year by one of our OEM customers, and multiple additional functions that form a natural growth of our existing capabilities.
But most important is the introduction of a new set of algorithmic capabilities centered around deep learning networks that were designed to support two major new functions. One is the free space, where the system outputs the category label for every pixel in the image and determines where the host car is free to drive, and the second is the holistic path planning feature, which provides the forward driving path in situations where the lane markings are non-existent or too weak to rely on.
Those two functions form the backbone of hands-free driving, where the steering control needs to know the location of the safe and unsafe zones to drive.
The initial launch of these functions will begin later this year in the US. Deep learning networks leverage two strong features of Mobileye. The first is that we have a very big and unbiased database that can be used for training the networks, and the second is that our EyeQ3 chip has a very high utilization, above 90%, for the network operation. We spent much effort on designing compact networks and problem modeling to allow real-time performance at minimal chip capacity.
Our free space and holistic path planning together take around 5% of the EyeQ3 capacity. We believe that Mobileye's deployment of deep networks algorithms later this year will constitute the first deep networks running in production on an embedded platform in any industry, not only automotive.
...
https://s.t.st/media/xtranscript/2015/Q1/13062243.pdf
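As a side note, the "free space" function described in that excerpt (a category label for every pixel, then a drivable-region mask) can be sketched in a few lines. This is only an illustration of the idea, with made-up classes and random scores, not Mobileye's actual network:

```python
# Minimal per-pixel "free space" sketch: a network emits a category score for
# every pixel, and the drivable region is just the set of pixels whose winning
# label is a drivable class. Shapes, class names and the random "logits" are
# all invented for illustration.
import numpy as np

CLASSES = ["road", "lane_marking", "vehicle", "pedestrian", "curb"]
DRIVABLE = {"road", "lane_marking"}

h, w = 4, 6                                   # tiny stand-in for an image
logits = np.random.rand(len(CLASSES), h, w)   # per-pixel class scores

labels = logits.argmax(axis=0)                # category label per pixel
free_space = np.isin(labels, [CLASSES.index(c) for c in DRIVABLE])

print(labels)       # which class won at each pixel
print(free_space)   # True where the host car is "free to drive"
```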
That matches @jimmy_d's characterization, then, about not finding any references to neural networks before October 2014. They were still working on it at that time and didn't launch it until later in 2015 (which explains why there are no public references).
I'm referring to this
http://s2.q4cdn.com/670976801/files/doc_financials/2014/Mobileye-Form-20-F-2014.pdf
filed March 2015 for the period ending 12/31/14
From the Q4 2014 earnings call:
Basically they are saying that they were designing and testing DNNs with EyeQ3 through 2014. Including the pretrained networks in the production SDK came later, in 2015.
According to the Internet Archive, that page didn't exist until 2017.
Have a nice weekend! Just found this, don't know if it's too vague to reveal.
The Evolution of EyeQ - Mobileye
"Mobileye has been able to achieve the power-performance-cost targets by employing proprietary computation cores (known as accelerators), which are optimized for a wide variety of computer-vision, signal-processing, and machine-learning tasks, including deep neural networks. These accelerator cores have been designed specifically to address the needs of the ADAS and autonomous-driving markets. Each EyeQ® chip features heterogeneous, fully programmable accelerators; with each accelerator type optimized for its own family of algorithms. This diversity of accelerator architectures enables applications to save both computation time and chip power by using the most suitable core for every task. Optimizing the assignment of tasks to cores thus ensures that the EyeQ® provides “super-computer” capabilities within a low-power envelope to enable price-efficient passive cooling.
The fully programmable accelerator cores are as follows:
- The Vector Microcode Processors (VMP), which debuted in the EyeQ®2, is now in its 4th generation of implementation in the EyeQ®5. The VMP is a VLIW SIMD processor, with cheap and flexible memory access, provides hardware support for operations common to computer vision applications and is well-suited to multi-core scenarios.
- The Multithreaded Processing Cluster (MPC) was introduced in the EyeQ®4 and now reaches its 2nd generation of implementation in the EyeQ®5. The MPC is more versatile than any GPU and more efficient than any CPU.
- The Programmable Macro Array (PMA) was introduced in the EyeQ®4 and now reaches its 2nd generation of implementation in the EyeQ®5. The PMA enables computation density nearing that of fixed-function hardware accelerators without sacrificing programmability."
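The last point in that quote, assigning each task to the accelerator family best suited to it, is basically a scheduling problem. Here's a toy sketch; the task names and the "good_at" sets are invented, only the core names come from the quote:

```python
# Rough sketch of the scheduling idea: each accelerator family is good at a
# different class of algorithm, so every task is routed to the core type best
# suited to it, falling back to a general-purpose core otherwise.
ACCELERATORS = {
    "VMP": {"good_at": {"filtering", "warping", "feature_extraction"}},
    "MPC": {"good_at": {"tracking", "general_parallel"}},
    "PMA": {"good_at": {"convolution", "dense_matrix"}},   # DNN-style work
}

def assign(task_kind):
    """Pick the first accelerator family optimized for this kind of task."""
    for core, spec in ACCELERATORS.items():
        if task_kind in spec["good_at"]:
            return core
    return "CPU"  # general-purpose fallback

pipeline = ["feature_extraction", "convolution", "tracking", "convolution"]
print({task: assign(task) for task in pipeline})
```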
It isn't cheating. Just because a system is disabled in a particular scenario doesn't mean it doesn't work in that scenario.
I believe it's cheating. A good system should be able to work no matter if you place it in a city or on a back country road it's never seen before.
Would self driving simply stop working in this neighborhood if it wasn't previously mapped?
If your system relies on lidar data then all your cars need lidar. There's no way around that for lvl 5 when relying on such data.
The easiest way to avoid that situation is to have the cars not rely on lidar data
Granted, in the above situation with a neighborhood not being mapped, something that relies on other kinds of mapping wouldn't be able to navigate either, but at least it'd be able to drive, make turns, etc.
Isn't that how a lot of the earliest self driving demos operated? Any location they traveled was mapped in extreme detail beforehand, so the task was made a lot easier (since for example, the exact location of the traffic signal, stop sign, other traffic controls, and fixed obstacles are already known).
However, if they were dropped into a location that was never mapped before, the car would be extremely crippled (as @jimmy_d put it, the vision only played a supplementary role; it primarily depended on accurate mapping done beforehand to recognize the environment).
This does not match how a human works. If you throw someone in an unknown place, they may be lost, but would still be able to drive without any problem (recognizing traffic signals and signs).
I agree that scheme is pretty much cheating, especially when the Cadillac doesn't have lidar itself (so it can't generate/update the same data in real time, or at least in a crowdsourced way).
Actually in this case that's exactly what it means since there's no lidar data.
Just because a system is disabled in a particular scenario doesn't mean it doesn't work in that scenario.
Speaking of confusing drivers haha. They are purposely delaying safety features that others implemented years ago because they don't want to put the time and money into it.
Not everyone has the same strategy ... their strategy has always been Traffic Jam Assist > Traffic Jam Pilot > Highway Speed Pilot.
Nope. I think what people fail to realize is that the MAPS are just another layer of an SDC system. They are another layer of redundancy. An SDC can drive without HD maps. The difference being that in a mapped area your disengagement rate could be 1 in 200k miles, while in a non-mapped area it's 1 in 50k miles.
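Using the hypothetical rates from that post, the difference is easy to put in numbers, e.g. over an assumed 12,000 miles a year:

```python
# Quick arithmetic on the (made-up) rates above: HD maps as a redundancy
# layer that improves the rate, rather than a hard requirement to drive.
miles_per_year = 12_000          # assumed annual mileage
rate_mapped    = 1 / 200_000     # disengagements per mile, mapped roads
rate_unmapped  = 1 / 50_000      # disengagements per mile, unmapped roads

print(f"mapped:   {miles_per_year * rate_mapped:.2f} disengagements/year")
print(f"unmapped: {miles_per_year * rate_unmapped:.2f} disengagements/year")
```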
interesting...
SP1 doesn't need lidar.
GM SP1 uses lidar
It also doesn't mean that it can if the map was outdated...
Having a map of all stop lights, signs, etc. doesn't mean the system can't recognize a stop light.
Actually in this case that's exactly what it means since there's no lidar data.
Speaking of confusing drivers haha. They are purposely delaying safety features that others implemented years ago because they don't want to put the time and money into it.
You said yourself previously that level 3, 4, and 5 cars should have instant disengagements.
There is no "either it works or it doesn't." There are rates of disengagement. Even an L5 car can fail and crash.
It either works in that area or it doesn't.
If it doesn't work without the previously captured lidar maps, then it doesn't work or adapt to new surroundings.
The thing is, for the disengagement rate that you're suggesting, they'd have to be fantastic at vision/radar only, like Tesla's approach. In reality, no one has this just yet, so without lidar these cars aren't the greatest.
interesting...
Lidar mapping isn't providing true redundancy if you can't update the maps without it. You'd need to constantly be updating these maps. How does it provide redundancy if there was a curve in this road last year and it's not there today? What about construction?
If you aren't using crowdsourcing to update HD maps, then how do they plan to keep these updated? If they don't plan to and are just using them as a temporary crutch, that's one thing, but either way all companies are going to have to get better at the vision/radar-only approach unless they want to include lidar on every car, regardless of their rollout strategy.
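A crowdsourced update loop along those lines could be as simple as flagging map tiles where the fleet's observations no longer agree with the stored data. Purely a sketch; the tile format, feature names, and threshold are invented:

```python
# Sketch of the crowdsourced-update idea: fleet cars report what they actually
# saw on a map tile, and any feature that too few cars still observe as stored
# gets flagged for re-mapping (e.g. the construction case above).
def tile_needs_update(stored_features, fleet_reports, min_agreement=0.8):
    """Flag each stored feature as stale when fleet agreement drops too low."""
    flags = {}
    for feature, expected in stored_features.items():
        seen = [report.get(feature) == expected for report in fleet_reports]
        agreement = sum(seen) / len(seen) if seen else 0.0
        flags[feature] = agreement < min_agreement
    return flags

stored = {"curve_radius_m": 120, "lanes": 2, "construction": False}
reports = [
    {"curve_radius_m": 120, "lanes": 2, "construction": True},
    {"curve_radius_m": 120, "lanes": 2, "construction": True},
    {"curve_radius_m": 120, "lanes": 2, "construction": False},
]
print(tile_needs_update(stored, reports))
# 'construction' gets flagged -> the map no longer matches the road
```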