Discussion in 'Autonomous Vehicles' started by lunitiks, Mar 26, 2017.
A very impressive and thought provoking talk by MobilEye CEO at CES 2017.
Warning: Long video
EDIT: For some reason the same video is included twice (x2) in this YouTube clip, plus a very long introduction (graphics) you can skip through, so the video itself is not that long.
MobilEye appears to have a very methodical and robust approach in place.
Well worth watching. I have newfound respect for MobilEye and Intel. Hoping that Elon made the right choice for us.
Of course he is making the right choice with Nvidia.
When an audience member asked about silicon competitors using AI (which is what Tesla/Nvidia are doing), the MobilEye CEO shot down the notion of "competitors". I think he meant that MobilEye has more contracts in the market, which by itself he takes as a sign of success.
On the contrary, MobilEye is just too slow for Tesla.
MobilEye won't start collecting REM (Road Experience Management) mapping data until next year, 2018.
Tesla has been collecting data for its Neural Network Vision since the forward camera was introduced in 2014, i.e. for the past 3 years.
Tesla's driverless LAX-to-NYC trip is planned for the end of this year, about 9 months from now, and you might wait a long time before MobilEye plans one.
Mobileye made/makes great technology. But when watching Amnon always remember he has a lifetime of research to defend - and an approach to autonomous driving to justify as well. The unsupervised approach was not possible even 36 months ago because the GPU power simply didn't exist to run these networks. Mobileye's supervised learning, compartmentalized approach to vision was the only way to execute for the last number of years. Unsupervised end-to-end is a threat to Mobileye's business model.
I very much agree, @calisnow. This is undoubtedly a "war" between two schools of thought. There's so much implicit criticism of Nvidia's approach in Amnon's remarks. (Especially in the Q&A.)
I'll give it to Amnon, though: He's a very, very good speaker
And he is also a brilliant man, no doubt. It will be very interesting to see these schools compete with each other over the next few years. Probably they will each move closer to the other.
Actually, a drive from Tesla HQ to Times Square has just 2 miles of surface streets, so the trip Elon has been hyping not only has been done multiple times before, it's also completely worthless as a full self-driving indicator. Only about 0.07% of the drive matters to self-driving: 2 miles out of 2,943.
Lastly, no, Tesla hasn't been collecting data, because they never had access to the raw camera feed of AP1; Mobileye wouldn't let them.
Another debunked myth.
Even today, there is no evidence that tesla has started collecting data for true hd mapping.
You mean collecting no "data" at all? That's obviously wrong. Be more specific, and provide sources for the claims you say constitute "debunking".
"Tesla confirmed to Electrek last week that 1.3 billion of those miles were driven by Tesla vehicles with Autopilot hardware.
To be clear, those are not miles driven on Autopilot (with Autosteer and TACC), but miles driven in cars with Autopilot first generation hardware. Tesla still uses the data even when the Autopilot is not active in order to feed its machine learning system and improve its Autopilot programs: Autopilot, Enhanced Autopilot, and Full Self-Driving Capability."
Tesla has now 1.3 billion miles of Autopilot data going into its new self-driving program
Proof of your claims please?
Here's a picture of High Definition mapping collected well before AP1 was activated.
AP1 was dormant for a year, until 2015, while it silently collected HD mapping and other relevant data to prepare for the driverless demo coming at the end of this year:
I really don't understand what you are saying.
Are you saying Tesla has done driverless cross-country "multiple times before"?
And your reasoning is that it could do so "multiple times before" because it's mostly freeway?
I would rejoice if a driverless Tesla could do freeway-only in the near future! That would be a real breakthrough.
Couldn't help chuckling watching the examples demonstrating the complexity of a dual-lane merge where cars are randomly coming into a highway from two sides and then have to negotiate to exit in two different directions within a very short distance. Just returned from a trip to Atlanta where drivers from five different highways merge into a 16-lane downtown connector only to have to negotiate bumper-to-bumper traffic where two lanes on both the left and right are exit-only lanes that disappear within a quarter mile. Talk about a cluster-fork.
Mobileye gave none of their suppliers access to the raw video feed. They only gave the output of aggregated data from their algorithms. That aggregated data is useless for anything meaningful when it comes to full self-driving.
There is no such thing as "miles of data", as none of the data collected by Tesla can actually be used in an FSD system.
It's useless. In fact, it's basically useless to AP2 as well, as you can see from AP2's struggles.
If the data Tesla collected were so OP, they wouldn't still be struggling to get AP2 to parity with AP1 after 7 months.
If I drove around a track for a couple of hours, Tesla would count that as 1,000 miles of data.
"Miles of data" is really just what your odometer says, according to Tesla.
FSD needs three types of data.
1) HD Map Data: what Mobileye is doing with REM. A map with exact lanes, lane markings, intersections, traffic lights, road signs, light poles, road barriers/edges, and landmarks.
That type of data requires raw pixel data, which wasn't available to Tesla in AP1.
2) Object Recognition: Deep-learning CNNs are used for object recognition. This requires feeding a model millions of photos of objects that have been tagged and labeled by humans; it then learns to recognize those objects. This also requires raw pixel data.
3) Driving Policy Development: Collecting aggregated 3D data of different scenarios. This requires surround cameras, which also weren't available with AP1.
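To make point (2) concrete, here is a toy sketch of the supervised-learning idea: feed a model labeled examples of raw pixels and let it learn to classify new ones. This uses a trivial nearest-centroid classifier on 3-pixel "images" of my own invention, not a real CNN on millions of photos, but it shows why labeled raw pixel data is the required ingredient.

```python
# Toy supervised object recognition: learn per-class pixel averages
# from human-labeled examples, then classify new images by nearest
# centroid. Real systems use CNNs on millions of labeled photos.

def train(labeled_images):
    """labeled_images: list of (pixels, label). Returns class centroids."""
    sums, counts = {}, {}
    for pixels, label in labeled_images:
        acc = sums.setdefault(label, [0.0] * len(pixels))
        for i, p in enumerate(pixels):
            acc[i] += p
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(model, pixels):
    """Pick the class whose centroid is nearest in pixel space."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(pixels, c))
    return min(model, key=lambda lab: dist2(model[lab]))

# Hand-labeled training data: bright blobs are "car", dark ones "road".
data = [([0.9, 0.8, 0.9], "car"), ([0.8, 0.9, 0.8], "car"),
        ([0.1, 0.2, 0.1], "road"), ([0.2, 0.1, 0.2], "road")]
model = train(data)
print(classify(model, [0.85, 0.9, 0.8]))  # prints "car"
```

The point of the sketch: without the raw pixels (only someone else's aggregated detections), there is nothing to train on.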
These things were touched on by the Mobileye CEO in his presentation as well.
These are what's necessary for FSD. Everything else is meaningless, including that stupid 1 billion miles of data.
It couldn't help AP2 and won't do anything for FSD. Tesla will begin collecting data for their FSD soon enough.
Watch this from the 52-minute mark.
What year Tesla do you own, Bladerskb? You appear to be simply parroting what Amnon has said in several presentations and presenting it as though it were fact. Amnon *claims* FSD needs those things. Preface your statement about what FSD needs with "The CEO of Mobileye claims you need the following three things" and then you will have stated a fact. Whether or not Amnon is correct is not yet known.
"Tesla is creating high-precision digital maps of the Earth using GPS."
Read your own article.
GPS is only accurate to a couple of meters.
FSD needs accuracy of a couple of centimeters, which is provided by HD maps.
What Tesla has is HP (high-precision GPS) maps, which are useless for both AP2 development and FSD.
FSD needs HD maps.
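A quick back-of-the-envelope check of why meters vs. centimeters matters for lane-keeping. The lane and car widths below are illustrative assumptions (typical US highway values), not figures from Tesla or Mobileye:

```python
# Rough check: can a localization system keep a car centered in its lane?
# The car may drift laterally by up to its localization error, so that
# error must fit inside the free margin on each side of the car.
# Widths are illustrative assumptions, not vendor figures.

LANE_WIDTH_M = 3.7   # standard US interstate lane
CAR_WIDTH_M = 1.9    # typical sedan/SUV

def stays_in_lane(localization_error_m):
    margin_per_side = (LANE_WIDTH_M - CAR_WIDTH_M) / 2  # 0.9 m here
    return localization_error_m <= margin_per_side

print(stays_in_lane(2.5))  # plain GPS, ~2-3 m error      -> False
print(stays_in_lane(0.1))  # HD-map localization, ~10 cm  -> True
```

With only ~0.9 m of free space per side, a 2-3 m GPS error can put the car in the next lane, while 10 cm localization leaves comfortable margin.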
You know me well enough to know I don't parrot people. I always respond with a well-elaborated and detailed analysis.
If you disagree with the three points I listed (HD maps, machine learning for object recognition, and driving policy, a.k.a. edge cases), then you disagree with the entire industry and academia, including Tesla themselves.
Mobileye is not the only one doing HD mapping; everyone is, and there are two categories.
The full lidar maps that, for example, HERE, TomTom, Google, Uber, and GM's Cruise use, versus the abstract end-to-end maps that Mobileye and Nvidia use.
TomTom's DNA Map
End-to-end HD Mapping for Self-Driving Cars | NVIDIA
I could add more videos from other companies but it will just make this post longer.
2) Object Recognition
The entire industry uses machine learning for detecting and classifying objects, whether the system is primarily lidar, as is the case for Google, or camera, as is the case for Tesla. It's the same thing.
Google does deep learning on millions of 3d point cloud objects (cars, pedestrians, bikes, etc)
Go to 20:00 minutes mark
Tesla does deep learning on millions of pictures of objects (cars, pedestrians, bikes, etc)
[See Nvidia's video above]
I could add more videos from other companies but it will just make this post longer.
3) Driving Policy
The entire industry also agrees that edge cases are the main problem of driverless cars today, which is what developing driving policy is all about.
If you disagree with any of this, you disagree with the entire industry and academia.
Tesla's 1 billion miles of data, however, is useless, and frankly was only mentioned in relation to "fleet learning", which itself was mentioned only in relation to their high-precision mapping using GPS, their more recent use of radar as of late 2016, and the blacklisting of false-positive braking caused by bridges and freeway signs. That's all the 1 billion miles of data is: good for Level 2 autonomy (AP1-like) but useless for Level 4. But of course the hyped fans simply turn it into something else. And you've got guys like Fred/Electrek writing up fantasies and lies.
But me, I will reiterate: all I post are facts, not speculation, not hype.
Facts, and facts don't care about your feelings. They are simply the truth.
I always wonder how you can speak in such absolute terms and what kind of sources give you that authority (like insider knowledge of Tesla's systems), but it seems you are pulling facts out of thin air.
@wk057 has pulled raw camera feed from the AP1 camera. This was saved as part of the crash data. So obviously Tesla has a way to pull the raw camera feed.
Jason Hughes on Twitter
@wk057 has been doing this for multiple cars. The last few frames before a crash are stored in the car's black box (EDR).
Tesla Autopilot camera stores footage after a crash like a dashcam – here’s an example
Several Tesla owners claim Model X accelerated/crashed on its own but everything points to user error
Except Tesla is very likely not using conventional GPS, but integrating odometry into their system, or at least a higher-accuracy system like WAAS. People who pulled data from VisibleTesla can tell which stall their car is parked in and also how far forward the car is pulled into the stall. Basically, accuracy is within 2-3 feet, whichever way Tesla is doing it.
How accurate is the GPS? • r/teslamotors
The odometry approach can lead to accuracy within an inch, but Tesla probably doesn't need accuracy that tight just to stay in a lane.
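For anyone curious how odometry can sharpen a coarse GPS fix, here is a minimal dead-reckoning sketch. The simple kinematic model and all numbers are assumptions for illustration; a real car would fuse wheel speed, IMU, and GPS in something like a Kalman filter, not this bare integration:

```python
import math

# Dead-reckoning sketch: starting from a (coarse) GPS fix, integrate
# wheel speed and heading over time to track position much more finely
# than GPS alone between fixes.

def dead_reckon(x, y, samples):
    """samples: list of (speed_mps, heading_rad, dt_s) measurements."""
    for speed, heading, dt in samples:
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
    return x, y

# Start at a GPS fix; drive 10 m east, then 10 m north.
pos = dead_reckon(0.0, 0.0,
                  [(10.0, 0.0, 1.0),           # 10 m/s east for 1 s
                   (10.0, math.pi / 2, 1.0)])  # 10 m/s north for 1 s
print(pos)  # ~(10.0, 10.0)
```

Wheel-tick resolution is far finer than a GPS fix, which is consistent with owners seeing stall-level parking accuracy.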