
Poll: Testing the wisdom of crowds - how far off is Autopilot V2 Hardware?

How far off is release of Autopilot V2 Hardware in Model S and X?

  • 1-2 months

    Votes: 4 2.9%
  • 2-4 months

    Votes: 3 2.2%
  • 4-6 months

    Votes: 10 7.4%
  • 6-8 months

    Votes: 25 18.4%
  • 8-10 months

    Votes: 12 8.8%
  • 10-12 months

    Votes: 23 16.9%
  • 12-14 months

    Votes: 12 8.8%
  • 14-16 months

    Votes: 10 7.4%
  • 16-18 months

    Votes: 13 9.6%
  • More than 18 months

    Votes: 24 17.6%

  • Total voters: 136
  • Poll closed.
Let's have a prediction contest and see how close the "wisdom of crowds" comes to getting this right. I'm leaving the poll open for 30 days, then locking it - and we can look back and see how close the crowd came to finding Autopilot version 2's release time frame. Please cast your vote for how much time you think will elapse after June 23 (the date the poll locks) before Autopilot version 2 is released.
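For reference, one simple way to boil the poll down to a single "crowd" number once it locks is a weighted average of the bin midpoints. The sketch below is just my own back-of-envelope scoring, plugging in the final tallies shown at the top of the thread - the midpoints, and especially the 21-month stand-in for "more than 18 months", are arbitrary assumptions:

```python
# Rough "wisdom of crowds" summary of the poll above (final tallies).
# Bin midpoints in months are my own assumptions; ">18 months" is
# arbitrarily treated as 21 months since that bin is open-ended.
votes     = [4, 3, 10, 25, 12, 23, 12, 10, 13, 24]
midpoints = [1.5, 3, 5, 7, 9, 11, 13, 15, 17, 21]

total = sum(votes)  # 136 voters
crowd_estimate = sum(v * m for v, m in zip(votes, midpoints)) / total
print(f"{total} voters, crowd estimate ~{crowd_estimate:.1f} months after June 23")
# With these assumptions the crowd lands at roughly 12 months, i.e. around mid-2017.
```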

This is driven by my personal plan to trade in my current 70D for a fully loaded-to-the-max Model S as soon as hardware is released capable of full autonomy - and then keep it for 10-15 years and 300,000-500,000 or more miles. I also need a second Model S right now, but the thought of the financial hit of trading in *two* Teslas makes me a bit nauseous.

As soon as the next hardware comes out what will happen is nobody in my family will want to drive the older Tesla. I can picture the family fights now: "What? You want to send me out on the road in that deathtrap that has no redundant systems or rear facing cameras? I see what my life is worth to you. Thanks a bunch." "WHAT? You want me to drive that puny little Model 3 eh? What if I get hit by a Suburban? I see what my life is worth to you. Thanks a bunch."

There are a few schools of thought:

1 - We will see no major hardware changes until Model 3 is released. At that time Model S and X and 3 will simultaneously get autopilot version 2.

2 - Model S & X will get updated Autopilot hardware around the time of the "Reveal Part 2" of Model 3 - but still before the release of Model 3 into customer hands.

3 - Autopilot V2 hardware will be released at some other unknown time prior to Model 3 in the S & X.
 
Here's my own latest theory - it isn't coming until Q1 2017. I base this on Mobileye CTO Amnon Shashua's latest presentation, at CES in January 2016, where he seems to indicate the first multi-camera OEM system is now happening in 2017 rather than 2016 - backing off his own company's earlier public predictions. I am wagering that Musk will stick with Mobileye for V2 (first with multiple linked EyeQ3 systems, later using a single EyeQ4).

I do, however, think that V2 is going to be released in the S/X before Model 3. This is because I believe Musk will reveal Model 3's full autonomy capabilities during "Reveal Part 2" - and if he doesn't update the Model S/X hardware at the same time he will definitely tank sales of those cars. So I'm going with Q1 2017.

Prior to CES 2016, a TMC member posted (in December 2015) the following history of Mobileye public statements, all of which seemed to hint at a 2016 multi-camera OEM system from Tesla. The following quote is from post #54 in this thread: Autopilot 2.0 Not Imminent Based On Production Model X Design Studio [Speculation]

"I would bet my mortgage that the software will lag behind the hardware and I am just fine with that. It is pure speculation that the v2 hardware will appear in the MS or MX in 2016, but when you start adding up all the statements by Amnon (CTO of MobilEye, Ziv (CEO of MobilEye) and Elon, it's enough to keep me on the sideline for now.

March 23, 2015: “By 2016 there’s going to be new launches by GM and Tesla as well” - Amnon Shashua of MobilEye - Brains, Minds Machines Seminar Series: Computer Vision that is changing our lives - YouTube

August 31, 2015: "First launch of it (8 cameras) is going to be 2016" - Amnon Shashua of MobilEye. - Prof. Amnon Shashua - Computer vision, wearable computing and the future of transportation - YouTube

September 9, 2015: "Today we are already preparing with one of the OEM, a first vehicle based on 8 cameras...The system will run on 5 EyeQ3 chips and all of them will be connected.” - Ziv Aviram, CEO MobilEye - Supplier hints at next generation Autopilot hardware for Tesla as soon as this year | Electrek

November 3, 2015: “As we said, these capabilities are already implemented and will be implemented in the future semi-autonomous launches, including in 2016 by two of our OEM customers. The Tesla auto pilot feature is currently using a mono camera sensor for performing the most important understanding of the scene the visual interpretation. Our multiple camera sensor configuration launches are planned to begin as early as next year..... What we presented is Lane Keeping Assist system rather than auto pilot system. Auto pilot system is going to be presented next year, where is going to be 360 degrees coverage around the vehicle and is going to be multiple cameras with additional sensors. What we have today is just a mono camera looking forward. So it’s a very limited input that we have on the road. But importance of this launch (AP/LKA) is, Tesla is willing to push the envelope faster and more aggressively than any other OEM, and definitely this is a very important in step forward to introduce the beginning of semi-autonomous application that will start being launched next year." - Ziv Aviram, CEO MobilEye - Mobileye's (MBLY) CEO Ziv Aviram On Q3 2015 Results - Earnings Call Transcript | Seeking Alpha

November 19, 2015: "Ramping up the Autopilot software team at Tesla to achieve generalized full autonomy. If interested, contact [email protected]." - Elon Musk via Twitter"

-----------

However the latest Mobileye presentation at CES 2016 by CTO Amnon Shashua seems to push back the date of the multi-camera system to 2017. Shashua says that the first multi-camera system will still use 5 linked EyeQ3 systems for processing, and will debut in 2017. Given that we are halfway through 2016 and we know it is not Mercedes' new E-Class (MBZ uses NXP for processing, I believe), the first multi-camera Mobileye setup is still probably Tesla. Presentation video is here: Mobileye N.V. - Investor Relations - Events & Presentations - CES 2016 Presentation
 
If you have any interest in the CES 2016 presentation here are the notes I made from the video. They make a lot more sense if you watch the video and follow along.

Tesla gen 1 autopilot has “industry first” DNN, free-space, HPP, AEB (fusion)

2016 will have 5 new launches with 52 new models of cars using Mobileye technology (not autonomous driving but simple aids)

  • More free space detection functionality
  • Second generation camera-only ACC - full speed, industry first
  • General object detection (industry first for mono)
  • Animal detection - Volvo
  • Traffic jam assist

Future

Two camps of thought on how to reach Level 5 autonomy - driver completely out of loop.

Three pillars of autonomous driving:

  1. Sensing: interpreting 360-degree awareness and making an "environmental model"

    1. Curves, barriers, guardrails, objects, other cars etc.
  2. Mapping: questionable assumption because humans don’t need maps to drive. 6:28

    1. What we mean by "map" is not clear - navigation maps, high definition (TomTom, HERE - different uses), Google high-resolution lidar maps
  3. Planning (driving policy): Answer to the question of why a 16 y/o kid needs to take driving lessons - learn to negotiate a driving path in the presence of other cars - "multi agent game" - other drivers on the road - some follow/bend/violate rules, some are aggressive, some are courteous, etc.

How to combine sensing & mapping - two main camps - and you must be committed once you are in the camp - you can’t cherry pick ideas.

  • Sensing - pros in the field know what each tech can do

    • Cameras - highest density of information

      • Image variability is big challenge (day, night, dusk, rain, dust etc.)
    • Radars - thousands of samples per second - more weather robust - can see through weather

    • Lidars - hundreds of thousands of samples per second

      • Can’t sense texture reliably but can sense 3D reliably

  • Mapping - not as clear consensus in the profession - must think about localization - highly detailed map doesn’t help if you don’t know how to find yourself in it.

    • None - no localization

    • Navigation - gps approximately 10m

    • HD-Maps (Tomtom, HERE) - 10cm

    • Google 3D - centimeter scale, ½ gigabytes data per kilometer - 10cm

    • Localization

  • Localization - two camps - somewhere vs everywhere

    • Google, Baidu - “somewhere” with full capability

      • 3D detailed, cm scale, gigabytes of data, can use a low resolution lidar sensor. Doesn't have to be dense - after you record the map you can use the "principle of subtraction" to find moving objects

        • So Google can drive cars with only a laser scanner, because they already have very detailed pre-recorded maps
    • Car industry - “everywhere” with partial capability

      • Fusion between camera and radar
    • Ultimate goal - “everywhere” with full capability

    • Challenges to both camps

      • Google

        • geographic scalability

        • Updates

        • How to do it? Mobileye says "I don't know"
      • Car industry

        • Stronger A.I. to get from partial to full autonomy

          • Risky - we don’t know how much time to get to truly strong AI - 5 years or 50 years

          • Therefore let’s settle for StrongER AI in the man time, and compensate for lack of full AI by using a detailed map in conjunction with deep learning
        • Higher-resolution maps

The rest of this talk is about using Higher Resolution Maps with Stronger A.I.

Idea: Simplify the creation of high resolution maps by having one unit which interprets the scene and uses that ability to create and update high resolution maps via crowd sourcing. (Basis of Volkswagen and GM).

How to get stronger AI: 360 sensing, environmental model, planning

  • Generate a sparse 3D map - not a detailed one - use landmarks (signs, posts etc.)

  • Dense 1D - dense information for the lanes - but we don’t need full density for 3D.

  • Crowd sourced - only 10 kB/km - 5 orders of magnitude smaller than Google's approach (quick sanity check after this list)

  • Advantage of this data - can be transmitted by cars to the cloud and transmitted back to the cars as a map.
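A quick sanity check on that "5 orders of magnitude" claim, using the per-kilometer figures quoted elsewhere in the talk (~1/2 GB/km for Google-style 3D maps, ~1 GB/km per camera for descriptor-based SLAM, ~10 kB/km for REM). The numbers are the presentation's; the arithmetic is mine:

```python
import math

# Map data per kilometer, from the figures quoted in the talk.
google_3d_bytes_per_km = 0.5e9   # ~1/2 GB/km (Google 3D maps)
slam_bytes_per_km      = 1.0e9   # ~1 GB/km per camera (feature-descriptor SLAM)
rem_bytes_per_km       = 10e3    # ~10 kB/km (Mobileye REM / Roadbook)

print(math.log10(slam_bytes_per_km / rem_bytes_per_km))       # 5.0 orders of magnitude
print(math.log10(google_3d_bytes_per_km / rem_bytes_per_km))  # ~4.7, i.e. a slight round-up
```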

Name: Road Experience Management (REM) - will talk about how they make the stronger AI and how they will build these higher resolution maps.

EyeQ3 - mono front-facing camera - today’s best technology

  • Powerhouse of today’s ADAS (collision avoidance)

  • Today 50 degrees, 2018 -> 75 degrees, 2019 -> 100 degrees

  • Today 1.3M imager, 2019 -> 1.7M, 2020 -> 7.2M

  • Sensors with VERY good low-light sensitivity - better than consumer cameras

EyeQ4 - trifocal front-facing

  • 3 optical paths: 150, 50, 25 degrees

  • Enables highway autopilot in a safe manner

  • 4 production launches 2017/18

  • Typically with front radar, 4 corner radars and in some cases front (back) lidar (veoscanna)

“Full Vision” 360 coverage: EyeQ4 or multiple EyeQ3s linked together

  • Trifocal + 5 cameras: any segment of the field of view is covered by at least one camera

  • Together with redundancy layers (radar/lidar from moving objects, REM for drivable paths) will support Full Autonomous Driving

  • Launches in 2017 with partial functionality - software will be updated over time

  • Some initial launches will use multiple linked EyeQ3s instead of one EyeQ4





Learning problems with well-defined input/output relationships are ideal scenarios for machine learning - the data is labeled/annotated by humans. Examples: what is in a bounding box, or where the path delimiter of a road is (curb, hedge, concrete barrier, lane marker etc.). Give the computer lots and lots of examples and the neural networks go to work.
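To make the "labeled examples" idea concrete, here is a minimal, purely illustrative sketch of what one human-annotated training sample could look like. The field names and the category list are my assumptions for illustration, not Mobileye's actual data format:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical subset of the ~15 path-delimiter categories mentioned in the talk.
DELIMITER_CLASSES = ["curb", "hedge", "concrete_barrier", "lane_marker"]

@dataclass
class LabeledSample:
    image_path: str                                 # raw camera frame
    vehicle_boxes: List[Tuple[int, int, int, int]]  # human-drawn bounding boxes (x, y, w, h)
    delimiter_class: str                            # human label for the road's path delimiter

sample = LabeledSample(
    image_path="frame_000123.png",
    vehicle_boxes=[(412, 218, 96, 74)],
    delimiter_class="concrete_barrier",
)
# A supervised network is trained on a large pile of such (image, human label) pairs
# until it predicts the labels correctly on frames it has never seen.
```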


What is special about deep learning at Mobileye?

  • They started in 2012

  • EyeQ3 launched 10/2015 with Tesla’s autopilot and contains Mobileye deep learning algorithms - each one trained end to end with a deep learning module:

    • Object detection

    • Environmental model - free space

    • Path planning - holistic path planning

    • Scene recognition
  • Unique challenges

    • Real Time constraints - high-res at 36 fps

    • Input/output modeling, network architecture and utility functions need innovation

    • Least interesting problem is simple object detection
  • Most likely we launched the industry’s first portable real-time embedded DNN in volume production - in any industry.

    • Not connected to cloud - computations done real time in the car
  • Not a garden variety network copied from some academic paper

Example: 3DVD - bounding boxes on each face of a vehicle, not just the rear end (as is done today).

  • This is a difficult problem because the faces are not always visible

    • This problem hasn’t been dealt with yet in any academic paper

    • This capability is coming late 2017/early 2018 on one EyeQ4 and late 2016/early 2017 on a system running 3XEyeQ3’s via one manufacturer.

Free Space through Pixel Labeling

  • Uses context

  • Launched on Tesla autopilot

  • Boundaries of freespace have category labels - 15 different ones

    • Path delimiter of moving vs stationary objects - and what types of objects they are within those two categories
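As a rough illustration of what "free-space boundaries with category labels" might look like as an output, here is a toy column-wise encoding - one boundary row plus a label per image column. This is my own guess at a plausible representation, not Mobileye's actual format:

```python
# Toy free-space output for a narrow 6-column image strip: for each column,
# the pixel row where drivable free space ends and what kind of delimiter ends it.
free_space = [
    {"column": 0, "boundary_row": 310, "label": "curb"},       # stationary delimiter
    {"column": 1, "boundary_row": 305, "label": "curb"},
    {"column": 2, "boundary_row": 260, "label": "vehicle"},    # moving object
    {"column": 3, "boundary_row": 258, "label": "vehicle"},
    {"column": 4, "boundary_row": 300, "label": "guardrail"},
    {"column": 5, "boundary_row": 302, "label": "guardrail"},
]

moving_blockers = [c["column"] for c in free_space if c["label"] == "vehicle"]
print(moving_blockers)  # columns whose free space is closed off by a moving object
```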

Path planning using holistic cues

  • Launched in Tesla 2015

  • Integral part of “lane detection” going forward in 2016 in all programs

  • System fuses information from lead vehicle with holistic path planning

Driver Policy / Planning - how to plan the vehicle’s next actions

  • Autonomous cars must learn to drive like humans

  • Driving is a “multi-agent” game - behaviors to be learned so FAD should adopt “human-like” driving skills

  • This is a technological problem, not an ethical problem

Sensing vs Planning

  • Sensing: the present, single agent, perfectly predictable - an input/output problem - if you have enough data you can learn the input-to-output mapping with a DNN

    • Technology - deep supervised learning with multiple end-to-end modules
  • Planning / driver policy - planning for the future, multi-agent, “what will happen if” reasoning, not perfectly predictable

    • Technology - “reinforcement learning”:

      • One type of RL: Deep Q-learning (Google DeepMind) - but it is not suitable for driving. Why not? Other agents are not Markovian, Q-function is not smooth (large Lipschitz constant), difficult to break down into separate modules, very long training time

      • This talk is not technical so we won’t go into the above bullet point. See arXiv paper by Mobileye - on the website
    • Example of planning problem - a simulated host car (red) merging into a round-about - after many iterations the car learns the best way to merge into traffic without upsetting other drivers.
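For readers who haven't met reinforcement learning before, here is a deliberately tiny trial-and-error loop in the spirit of that round-about example: the (simulated) host car gets a reward for merging into an open gap and a penalty for forcing others to brake, and its action preferences are nudged after every attempt. This is a generic toy sketch of my own - per the talk, Mobileye's actual approach is explicitly *not* vanilla Deep Q-learning, and certainly not this simple:

```python
import random

ACTIONS = ["yield", "creep_forward", "merge"]

def simulate_merge(action, gap_open):
    """Hypothetical one-step simulator: reward a clean merge into an open gap,
    heavily penalize forcing other drivers to brake, mildly penalize hesitating."""
    if action == "merge":
        return 1.0 if gap_open else -5.0
    return -0.1

# Tabular "policy": a preference score per (gap state, action), refined by trial and error.
prefs = {(g, a): 0.0 for g in (True, False) for a in ACTIONS}

for episode in range(5000):
    gap_open = random.random() < 0.3                      # simulated round-about traffic
    if random.random() < 0.1:                             # explore occasionally...
        action = random.choice(ACTIONS)
    else:                                                 # ...otherwise exploit what we know
        action = max(ACTIONS, key=lambda a: prefs[(gap_open, a)])
    reward = simulate_merge(action, gap_open)
    prefs[(gap_open, action)] += 0.05 * (reward - prefs[(gap_open, action)])

print(max(ACTIONS, key=lambda a: prefs[(True, a)]))       # learns to merge when a gap is open
print(max(ACTIONS, key=lambda a: prefs[(False, a)]))      # learns to hold back when it isn't
```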

MAPPING

Map definition

What should a map enable in the context of autonomous driving? Localization and finding the drivable paths, assuming no obstacles (obstacles are handled by the sensing layer).

What are the map requirements to enable “everywhere autonomous driving”?

  • Map updating must be a continuous process - near real time

    • Process must be “crowd sourced”

      • Small data - 10kb per km

        • Transmitting images or other raw data is out of the question

        • Advanced processing must be done on-board
      • Preferable not to introduce any dedicated hardware for this task - use cameras.
  • GPS for localization?

    • Problem - 10m accuracy (not consistent, and much worse in urban environments)

    • DGPS, RTK - perhaps acceptable accuracy in open areas, not so for urban, city traffic

    • SLAM - Simultaneous Localization and Mapping

      • Idea - every image has a descriptor and you track these descriptors in subsequent images so you can localize yourself (look up wikipedia for more details). Many feature points per frame + ego-motion

      • 1 MB / meter, 1 GB / km per camera
    • TomTom “RoadDNA”

      • Only localization

      • Compressed lidar - 25kb / km

      • Not crowd sourced - does not make the map. The HD-map is a separate engine.
  • Mobileye’s idea: Road Experience Management




Look for landmarks in the image - traffic signs, directional signs, general rectangular signs, lampposts and reflectors, additional families of landmarks (e.g., dashed lines) will be added if needed.





In the absolute worst case - a boring Texas highway - you encounter a landmark every 100m.
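A back-of-envelope check (mine, not Mobileye's) on how a landmark every ~100 m squares with the ~10 kB/km Roadbook budget mentioned earlier:

```python
# Worst case from the talk: roughly one landmark every 100 m.
landmarks_per_km = 1000 / 100      # = 10 landmarks per km
budget_bytes_per_km = 10_000       # the ~10 kB/km REM figure

print(budget_bytes_per_km / landmarks_per_km)  # ~1000 bytes available per landmark
# Even before the dense 1D lane geometry is budgeted in, ~1 kB is ample for a
# landmark type, a compact 3D position and a short visual signature.
```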


This is a per-landmark process, not a per-image process - you use the environmental model to find these landmarks.

The Volvo Drive Me project has 4 surround cameras and uses Nvidia to do the parking processing.

Cameras are for far range - not for near range. Mobileye is impressed with what Tesla has done for the near range with ultrasonic sensors.

Nvidia provides a “pre-trained” network - Mobileye says it is inadequate for auto industry production.

The Roadbook is built only from the forward-facing camera. It needs only additional software for the EyeQ chip plus a means to communicate - the existing 3G/LTE connection.

Who does Mobileye view as its real competitors?

  • Classical suppliers - Bosch, Denso, Autoliv, Continental - companies with experience getting production awards, supplying both hardware and content. Maybe more. Not Nvidia.

Monetization

  • Mobileye says it’s too early to speak. 2016 launch with GM - so by 2018 can talk about map services. Post 2020 things will be very interesting from a commercial POV. Mobileye thinks these maps will generate more income than regular GPS maps today. Shared mobility business models.

  • Build infrastructure for future - the money will come.

Who owns the data?
 
Does "V2" mean "fully autonomous" or just "whatever the next AP hardware upgrade is regardless of how significant?"

If the former I think greater than 18 mos. I'd guess 24 mos.
If the latter I'd take 6-8 mos. though I'm not sure that there will be anything between the existing hardware and full autonomous.

...I marked the "6-8 mos." option.
 
Does "V2" mean "fully autonomous" or just "whatever the next AP hardware upgrade is regardless of how significant?"

If the former I think greater than 18 mos. I'd guess 24 mos.
If the latter I'd take 6-8 mos. though I'm not sure that there will be anything between the existing hardware and full autonomous.

...I marked the "6-8 mos." option.

Good question - I should have clarified. I sure wish posts didn't "lock" after 10 minutes - now I can't update it.

By "V2" I personally mean 360 degree camera coverage - which is what Mobileye keeps publicly claiming is necessary for full autonomy - and which they also claim is coming to at least one OEM in 2017 (prior to Jan 2016 CES they claimed it was arriving in 2016).

Mobileye says an 8 camera solution augmented by radar and sonar is coming to one OEM before any others. They don't identify that OEM, but in another presentation they identify Tesla as the OEM willing to push the limits more than others. They also claim this 8 cam system does not need their next-generation EyeQ4 to work - they say it will work with a linked system of 5 EyeQ3 SOC's working together. Since EyeQ3 is obviously found in autopilot right now, the next-gen system does not seem to need to wait for a new SOC to ramp up. I have also seen Mobileye say somewhere that even the single SOC EyeQ3 systems in use right now are using only about 10% of the hardware's existing processing capability.

My wild-ass guess is that whatever new capabilities EyeQ4 will bring to the table will be backwards compatible with a multi-EyeQ3 system.

Mobileye also detailed their camera based, low bandwidth, crowd-sourced high precision mapping project at this CES 2016 presentation. Their plan is to use machine learning to gradually build up low density maps which will contain geographic markers (such as a billboard or other road-side object) for cars to use to precisely triangulate their positions even when road markings are not visible.
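As a rough illustration of the "triangulate off road-side markers" idea: if the car knows the map positions of two landmarks (say, a billboard and a lamppost) and measures the bearing to each, its own position falls out of intersecting the two bearing rays. This is a minimal geometric sketch under idealized assumptions (known heading, noise-free bearings, only two landmarks) - not Tesla's or Mobileye's actual localizer:

```python
import numpy as np

def localize_from_two_landmarks(p1, p2, bearing1, bearing2):
    """Intersect the two bearing rays from the unknown car position to two
    landmarks whose map positions p1 and p2 are known (bearings are absolute,
    i.e. vehicle heading is assumed known from ego-motion)."""
    d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    # car + r1*d1 = p1 and car + r2*d2 = p2  =>  r1*d1 - r2*d2 = p1 - p2,
    # a 2x2 linear system in the two unknown ranges r1, r2.
    r1, r2 = np.linalg.solve(np.column_stack([d1, -d2]), p1 - p2)
    return p1 - r1 * d1

# Toy check: car actually at (0, 0); billboard at (40, 10) m, lamppost at (60, -5) m.
billboard, lamppost = np.array([40.0, 10.0]), np.array([60.0, -5.0])
print(localize_from_two_landmarks(billboard, lamppost,
                                  np.arctan2(10, 40), np.arctan2(-5, 60)))
# -> approximately [0, 0]
```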

Now, what Mobileye does not talk about is how their system would work in heavy fog, smoke, snow or other low-visibility conditions where the cameras cannot see road-side markers. However I would still consider a "sunny day" full autonomy system to be "full autonomy." Plus I live in California - so, heh. :p

Will Tesla eventually start using lidar? Hmmmm.
 
My guess is 16-18 months, putting it in Q4 2017, corresponding with the phase 2 reveal and launch of the Model 3. The hardware will likely be secretly rolled out in the Model S/X prior to this (just as the original autopilot hardware was), but it will be formally announced at the phase 2 reveal of the Model 3. I believe that event will happen shortly before, or possibly even as the first deliveries are made.

Actual full autonomous software will not be ready at that time, but will be pushed out when it is ready - possibly as much as 2 years later.
 
EyeQ4 will have advantages over multiple EyeQ3s in the form of lower power consumption and the addition of new dedicated processors, like a vector processor that doesn't exist on EyeQ3.

I think Calisnow has done an excellent overview. We're not ready for autonomy yet (legally), so why push the hardware? Cameras are getting better and cheaper every month. They should wait until the regulations are closer and all these Autopilot miles are analyzed before locking in a new suite.

And there are plenty of software upgrades (sign and traffic light recognition, animal tracking, summon, and general autopilot improvements) that will keep the team busy for the next 9-12 months.

I don't think it's wise to splinter the hardware with an incremental update to a system that still hasn't been maximized. It's like releasing a next generation video game console before the software has maxed out. And console add-ons never work.

Nor is it wise to leave your patient, fully-optioned X buyers in the cold with an outdated Autopilot suite less than 6-12 months after getting their car... while the lower priced Xs start coming off the line with a new sensor suite. Not cool.

Now, a caveat: They may add safety hardware as a stopgap this year. For example, rear radar for better blind spot detection or something similar to Mercedes PreSafe (for rear collision help) or Ford's BLIS. I think this can happen outside of Autopilot 2.0. I can also see a refresh to the center display/dash (including a major GPU overhaul) coming sooner.

People thought I was crazy when I said this before the X launch/deliveries. And before the 3 reveal. And before the last refresh. And yet not a single Autopilot hardware upgrade has happened.

Give them time. They'll keep adding great new features--I suspect traffic sign reading will be the next big update--and we'll see the new sensor suite sometime around next summer/fall. Let them perfect highway Autopilot and summon/parking this year, along with crowd sourcing landmark-based mapping that will be used with 2.0 hardware and eventual autonomy.
 
I would expect to see Autopilot 2.0 hardware before the end of the year. I expect it will have hardware from both NVIDIA and Mobileye.

I think the Model 3 will have a cost-reduced version of Autopilot 1.0. The NVIDIA hardware will be replaced with a custom Tesla CPU/GPU (mainly an ARM reference core). It may be more like Autopilot 1.5 (a few extra sensors).

Autopilot 3.0 will likely run on 100% Tesla hardware cutting out both NVIDIA and Mobileye. I would look for this 2 years after Autopilot 2.0 hardware is released.
 
Based on all the great info from @calisnow I'd expect to see the hardware in 1Q17. How much of its capability will be provided to drivers then is a different story. Dribs and drabs over the next several years. The last 1% of making it as reliable and safe as a human driver (which is not very reliable or safe) will take a long time.
 
We've seen what look like leaked schematics (they look genuine, but you never know) showing what appears to be MobilEye's proposed 3-camera forward view solution (one narrow FOV, one medium, one wide angle). It seems likely to me that an Autopilot hardware revision is coming sooner rather than later. I wouldn't expect to make it very far into 2017 without one, especially as other automakers quickly catch up. At the same time, I also wouldn't expect it in the next few months or they would have bundled it with the refreshed Model S to make a bigger splash.
 
We've seen what look like leaked schematics (they look genuine, but you never know) showing what appears to be MobilEye's proposed 3-camera forward view solution (one narrow FOV, one medium, one wide angle). It seems likely to me that an Autopilot hardware revision is coming sooner rather than later. I wouldn't expect to make it very far into 2017 without one, especially as other automakers quickly catch up. At the same time, I also wouldn't expect it in the next few months or they would have bundled it with the refreshed Model S to make a bigger splash.

Mobileye has been touting that trifocal camera cluster for years now. It's a clear part of their roadmap and was discussed in detail by their CTO in 2015.

I would expect news in the mid-2017 timeframe for true 2.0 sensors. I agree it would be odd to do a refresh without new sensors if they were imminent.

I don't think the other manufacturers are anywhere near Tesla on the software and they're not getting millions of miles of real world learning yet. Mercedes already has much better sensors but their version of "autopilot" is horrible. (I have a 2016, delivered in May, with Distronic and it's night and day.)
 
Mercedes already has much better sensors but their version of "autopilot" is horrible. (I have a 2016, delivered in May, with Distronic and it's night and day.)

I haven't seen any discussion about this topic - trying to answer the question of how it was that Tesla leap-frogged Daimler with the quality of its autopilot software, despite the fact that Daimler has decades of in-house research under its belt on autonomous driving. On some level this has to be embarrassing for Mercedes.

It'll be interesting to see if the 2017 E-class is as accurate or more accurate at lane keeping on poorly marked roads than whatever the latest firmware of Tesla's autopilot is at the time the 2017 MBZ's start to ship. The new generation of MBZ's drive pilot does sound impressive on paper.
 
The Model 3 will likely have next-gen hardware and software across the board. Not only will Tesla update Mobileye and other sensors, but also the MCU and most of the software, including re-skinning the UI.

When does Tesla have the capacity to do this heavy upgrade to the S/X? I think they would want to do it Q1 2017. My gut tells me that they don't have the capacity to do this big change before the model 3 launches.

Tesla's struggles with the Model X may have completely hosed the update release schedule. The Model 3 tech could be out of sync with the S/X.

I haven't seen any discussion about this topic - trying to answer the question of how it was that Tesla leap-frogged Daimler with the quality of its autopilot software, despite the fact that Daimler has decades of in-house research under its belt on autonomous driving. On some level this has to be embarrassing for Mercedes.

It'll be interesting to see if the 2017 E-class is as accurate or more accurate at lane keeping on poorly marked roads than whatever the latest firmware of Tesla's autopilot is at the time the 2017 MBZ's start to ship. The new generation of MBZ's drive pilot does sound impressive on paper.

Easy. The custom map Tesla has built. Lane keeping is sloppy without the car being able to precisely know its position within the lane.
You don't seem to have grasped Mobileye's core IP.
 
Easy. The custom map Tesla has built. Lane keeping is sloppy without the car being able to precisely know its position within the lane.
You don't seem to have grasped Mobileye's core IP.

No, not easy at all - which is perhaps why there hasn't been a ton of speculation on what is going on. The first problem with the mapping theory is that when 7.0 was released in October there was no way the custom maps could've been built for the entire U.S. Yet, even on roads out in the boonies - which I drove extensively back in October - the system was far ahead of Mercedes in terms of accuracy.

Second, Tesla has said the cars are gradually building the high precision maps but they have made no claim yet that the cars are using them. There has been some discussion here speculating that using statistical methods of averaging position, the current cars could build high precision maps using GPS, but would not necessarily be able to use those maps with current sensing hardware. On the other hand, Musk claims that the mapping techniques did solve the poor-lane-markings problem on sections of I-405 in Los Angeles in the run-up to launch last October. Other people speculate that his public claim was based on test cars with sensing hardware more advanced than what is in the cars now. As for me, I reserve judgment.

Third, there is a lot about Mobileye which Tesla is not using - Tesla is serving as its own Tier 1 integrator. The improvements in accuracy in 7.1 and later small updates could be due to additional training of the neural networks for sensing only - not mapping. There's no way to know for sure - neither Tesla nor Mobileye is telling anyone. Mobileye has claimed publicly in one of its presentations that Tesla has implemented its learning-on-the-fly DNN in the current version of autopilot - but they have not elaborated on what learning is taking place in the real world, and neither has Tesla.

And finally, just to be clear, Mobileye's crowd sourced mapping project using visual landmarks so far does not have Tesla signed on as a partner. Tesla has claimed its fleet is building precision maps but it has never explained how precise those maps are, nor how they are being built - except to say it's being done with GPS data. Since non-differential GPS is accurate to only about 10 meters, it is possible that precise maps could be built using many position samples averaged out (assuming that successive samples converge to a tighter window than 10 meters).
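Here is the "average many GPS fixes" intuition as a quick simulation. The big caveat - and the reason for my hedging above - is that the 1/sqrt(N) improvement only holds if the per-fix errors are independent and zero-mean, which real GPS error (atmospheric, multipath) generally is not:

```python
import numpy as np

rng = np.random.default_rng(0)
true_point = np.array([0.0, 0.0])   # true map point (e.g. a lane-line sample), meters
sigma = 5.0                         # assumed per-fix GPS noise, std dev in meters

for n in (1, 25, 400):              # fixes contributed by 1, 25, 400 passing cars
    fixes = true_point + rng.normal(0.0, sigma, size=(n, 2))
    error_m = np.linalg.norm(fixes.mean(axis=0) - true_point)
    print(n, round(error_m, 2))     # error shrinks roughly like sigma / sqrt(n)
```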

If you want to tell me what it is I haven't grasped please explain - I'd love to solve this mystery.
 
I do think it's going to be a long term mistake not to join Mobileye's map data group. Way more data with the likes of GM signed up.

Tesla may be forced to join Mobileye - seems like everyone else in the industry is forming partnerships for mapping. OTOH Musk loves to go it alone - maybe he thinks the huge volume of the Model 3 will be sufficient to build his own maps.
 
I haven't seen any discussion about this topic - trying to answer the question of how it was that Tesla leap-frogged Daimler with the quality of its autopilot software, despite the fact that Daimler has decades of in-house research under its belt on autonomous driving. On some level this has to be embarrassing for Mercedes.

It'll be interesting to see if the 2017 E-class is as accurate or more accurate at lane keeping on poorly marked roads than whatever the latest firmware of Tesla's autopilot is at the time the 2017 MBZ's start to ship. The new generation of MBZ's drive pilot does sound impressive on paper.


The 2017 E-class has much better autopilot hardware than Tesla. This is fact. Tesla should be somewhat embarrassed by this. I haven't driven one yet so I don't know about the software side of it. Mercedes lawyers might cripple it with UberNanny nags. Anyone drive one yet? I'd love to hear some thoughts from someone who's driven the 2017 E-Class.