
Speculation about EAP and FSD codelines


Matias

How will Tesla merge EAP and FSD code lines? Or will they be merged?

I think Tesla is currently developing two separate code lines: EAP and FSD. My belief is based on the fact that Tesla already demoed FSD last year and is still developing EAP to reach parity with AP1 and beyond. Furthermore, Elon is talking about the autonomous demo drive across the continent in December.

So they must have a separate FSD branch at the moment, right?

Disclaimer: I'm 100% a layman.
 
The ongoing improvements to EAP aren't new code still being written — they're the activation of already-coded features after Tesla has received enough active driving data to validate that they work as intended. There's no doubt some tweaking has been happening with that data, but the EAP code is largely complete.

FSD is built on top of that — things like lane keeping, adaptive cruise control, summon, self-parking, etc. are all part of EAP and the basis for the further functions of FSD.

But with something as monstrously complex as EAP and FSD, I have no doubt that there are numerous development branches within Tesla for each core feature.
 
I think it is simpler than that. This post adequately explains the process:
1) create a neural network (whatever you think that means)
2) plug the ethernet from fleet data into it (whatever you think that means)
3) ?????
4) profit

 
I think it is simpler than that. This post adequately explains the process:
As a whole I agree, but basic road rules must be applied on top of that as well. Essentially, if you look at "big data" analytics (even though big data is now a cliché term), the larger and more granular the data sets you can provide to these analytic tools, the more previously unknown patterns emerge. These patterns can be found automatically by the tools without even specifying what to look for. I think the same can be applied to fleet learning for driving.

Plenty of people pooh-pooh the value of this fleet data, but I think it will be amazingly important and differentiating compared to other driving AI platforms.
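To illustrate the point with a toy example (nothing here is Tesla-specific; the telemetry fields and numbers are invented): an off-the-shelf clustering tool can surface groupings in driving data without being told what to look for.

```python
# Toy sketch of "patterns emerge without specifying what to look for":
# cluster synthetic fleet telemetry and inspect what groups fall out.
# Field names (speed, steering, braking) are made up for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Fake telemetry snapshots: [speed_mph, steering_angle_deg, brake_pressure]
highway = rng.normal([70, 0, 0.0], [5, 2, 0.05], size=(500, 3))
city    = rng.normal([25, 5, 0.3], [8, 10, 0.2], size=(500, 3))
parking = rng.normal([5, 20, 0.5], [3, 15, 0.2], size=(200, 3))
telemetry = np.vstack([highway, city, parking])

# No labels are provided; the groupings (roughly highway / city / parking)
# emerge from the data itself.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit(telemetry)
for c in range(3):
    members = telemetry[clusters.labels_ == c]
    print(f"cluster {c}: mean speed {members[:, 0].mean():.1f} mph, n={len(members)}")
```

The three driving regimes fall out of the data on their own here; the same idea, scaled up, is what people mean when they say patterns emerge from fleet data.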

But I agree with the thread as a whole: I think there are two programs being developed, one being AP2, which is simply trying to mirror AP1 functionality, and the other the FSD program, which has little to do with AP2 beyond the hardware.
 
I believe EAP will remain similar. The neural networks will continue to improve and detect more stuff, and it will do a better job of placing the car in the lane.

FSD is in development in a branch and will rely on high-def map data to roughly control the car. It will know traffic signals, signs, lanes, exact radii of curves, etc. EAP will then position the car accurately within the map data.

If you believe the lvl5 / payver sales pitch, Tesla must be pretty far along.

I'm waiting to see the Tesla center console "game" that somehow entices people to manually drive the entire road network. It works for Waze, and Payver is taking a pay-per-point/mile approach.
 
As a whole I agree, but basic road rules must be applied on top of that as well. Essentially, if you look at "big data" analytics (even though big data is now a cliché term), the larger and more granular the data sets you can provide to these analytic tools, the more previously unknown patterns emerge. These patterns can be found automatically by the tools without even specifying what to look for. I think the same can be applied to fleet learning for driving.

Plenty of people pooh-pooh the value of this fleet data, but I think it will be amazingly important and differentiating compared to other driving AI platforms.

Sorry to burst your bubble, but there is no fleet data, whatever you mean by this.
Also (I am no expert on this, so don't quote me here): if you don't train a NN first on the stuff of interest, I am not even sure it can recognize patterns, or at least it probably would quickly decide that all of Earth is covered with pavement, most cars are black, all cars have wheels, and the upper quarter of all pictures is blue, for various values of blue as seen by the onboard cameras.
(If you're referring to self-learning NNs that play video games, those don't just look at pictures; they have a means of doing things, then seeing what happens, and they learn from that.)

Now, to train your NN with useful data you need a good dataset first. After that you can feed it new data and see how it does. So just hire a bunch of interns to watch the process and correct the NN.
Or, in Tesla's case, they have a better option.
They can downsample the pictures from those random 10-second clips, feed them into the Mobileye chip, and see what it reports, then feed the results into their own model somehow (or compare the two models and make corrections to theirs).
Once their model is superior, they can move on to the next thing ;)
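A minimal sketch of that compare-the-two-models idea, assuming nothing about Tesla's actual tooling (the frame format and function names are made up): frames where the in-house model disagrees with the reference detector become candidates for review and retraining.

```python
# Hypothetical sketch of the workflow described above: treat an existing
# detector (standing in for the EyeQ3 output) as a reference labeler, and
# flag frames where the in-house model disagrees so they can be reviewed
# or folded back into the training set. Everything here is invented.

def reference_labels(frame):
    # Stand-in for what the Mobileye chip reports for a frame.
    return frame["eyeq3_sees"]

def in_house_labels(frame):
    # Stand-in for what the new vision NN reports for the same frame.
    return frame["own_nn_sees"]

def find_disagreements(frames):
    """Return frames where the two models see different things."""
    hard_cases = []
    for frame in frames:
        ref, own = set(reference_labels(frame)), set(in_house_labels(frame))
        if ref != own:
            hard_cases.append({"frame": frame["id"], "reference": ref, "ours": own})
    return hard_cases

# Toy usage with two fake frames from a 10-second clip:
frames = [
    {"id": 1, "eyeq3_sees": ["car", "lane_line"], "own_nn_sees": ["car", "lane_line"]},
    {"id": 2, "eyeq3_sees": ["truck", "sign"],    "own_nn_sees": ["truck"]},
]
print(find_disagreements(frames))   # frame 2 becomes a retraining candidate
```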

Oh, btw, another thing. You know who has more data from the road than Tesla? Blackvue and the other companies that offer cloud storage for dashcams. And it's not in tiny 10-second chunks either.
Also Google/YouTube ;)

The ongoing improvements to EAP aren't new code still being written — they're the activation of already-coded features after Tesla has received enough active driving data to validate that they work as intended. There's no doubt some tweaking has been happening with that data, but the EAP code is largely complete.
Yeah, right, that's why the already written code undergoes drastic transformations, I guess, because it's already finished and they are just activating new abilities after validating.
 
These posts and the rest of the Frustrated with FSD timeline thread are still relevant reading IMO.

I think the big mistake that people keep making is treating AP2 as FSD v1.0; it is not. AP2 and FSD are different software and different code bases today. In a perfect world, Tesla would have been able to use AP1 in conjunction with HW2, but Mobileye wouldn't allow it, knowing that Tesla was working on its own solution. It is public knowledge and an indisputable fact that Tesla and Mobileye had a falling out and that Mobileye would not allow Tesla to use its system in parallel with HW2. This forced Tesla's hand to implement AP2 on HW2 and separately develop FSD. They obviously thought this would be easier than it has turned out to be, because they had already done it once with EyeQ3 and one camera.

FSD will not need to see lane markings, though it will still use them if it can. It will use high-definition 3D maps combined with GPS, landmarks, and the car's sensors and telemetry. That includes lane markings, but doesn't require them, since it can use other landmarks to compensate for GPS's weaknesses. GPS gets you within 3 meters; adding high-def maps with landmarks can get you to 10 centimeters or so, judging from nVidia's presentations; and adding the car's other sensors and telemetry can get you down to 2 centimeters. Once you have an identifiable landmark in a high-precision 3D map, you don't even need GPS to continue to work: GPS gets you into the neighborhood, landmarks get you into the lane, and the other sensors plus the speed and direction the car is traveling get you the rest of the way to 2 cm. The high-def maps could be created by lidar and updated by radar in current cars as roads and landmarks change. AP1 and AP2 do not do any of that. They identify the lanes by seeing the lane markings and the curb or edge of the road, and then use the TACC radar to avoid running into the car in front. That is all AP1/AP2 does.
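A toy 2-D sketch of that GPS-then-landmarks refinement, with invented numbers and a crude weighted average standing in for whatever filtering a real stack would use:

```python
# Toy 2-D sketch of the localization chain described above: a coarse GPS
# fix gets refined by measuring the offset to a landmark whose position is
# known precisely in an HD map. Numbers and the simple blending "filter"
# are illustrative only, not how any production stack actually works.
import numpy as np

map_landmark = np.array([100.0, 50.0])      # surveyed landmark position in the HD map (m)
true_position = np.array([97.0, 48.5])      # where the car actually is (m)

gps_fix = true_position + np.random.normal(0, 3.0, 2)                            # ~3 m error
measured_offset = (map_landmark - true_position) + np.random.normal(0, 0.02, 2)  # cm-level sensing

# Position implied by the landmark measurement alone:
landmark_fix = map_landmark - measured_offset

# Weight the landmark-derived fix heavily, since it is far less noisy than GPS.
estimate = 0.05 * gps_fix + 0.95 * landmark_fix

print("GPS error   (m):", np.linalg.norm(gps_fix - true_position))
print("Fused error (m):", np.linalg.norm(estimate - true_position))
```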

FSD is being developed separately and is nothing like AP2. At some point in the near future (EOY), FSD should replace AP2, and there will only be AP1 on EyeQ3 and FSD on HW2. They only had to do it this clunky way because of the fallout with Mobileye after the accident where the driver died using AP1 and it didn't see a truck crossing in front of it.

If you go back to the Oct. 2016 video that Tesla released when it started talking about FSD, you will see the different cameras, many more than two, and you can see how the high-def mapping works with landmarks and how it knows the surface of the road. It can see all the signs, as noted by the different colored boxes around signs, people, cars, and other objects. None of this happens with AP2. It sees lanes and curbs, and the radar keeps you from driving through the car directly in front of you. That is it; that's all AP2 does. So unless you think the video from Oct. 2016 was just a Hollywood production, there have to be two different solutions being developed independently by Tesla.

Tesla is also not building either system from the ground up. They are leveraging the AP1 software that Tesla created and enhancing it, and they are piggybacking on nVidia's solution to jump-start FSD. A major difference between FSD and nVidia's solution is that FSD uses radar to create a point cloud while nVidia uses lidar. Both have their strengths, but I don't think Tesla will ever use a lidar. Not because of the cost, and not because it's ugly, but because it is not good at detecting objects in rain, fog, or snow. You could say, well then why don't they use both? Because then you have to decide which one to trust. There will be times when they disagree on what they are seeing, and you can't just slam on the brakes until the system figures out which is right. So this is a problem that Tesla, and as far as I can tell only Tesla, is attempting: using the radar to confirm what the vision system is seeing. They believe they can create a detailed enough picture of the world around the car that it is more accurate than what you and I can see while being constantly distracted. When I say distracted, I don't just mean talking on your cellphone, but even something as simple as looking at your current speed or navigation, or admiring a passing Tesla, or waving at a nice-looking lady in the next car over. Tesla is not going to sacrifice good enough for perfect; they have shown that they can save lives without a perfect solution.
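A toy illustration of that radar-confirms-vision idea (object formats, thresholds, and the gating distance are all invented for the example; this is not anyone's production logic):

```python
# Toy illustration of radar being used to confirm what the vision system
# thinks it sees. Keep a vision detection only if a radar return lands
# close enough to it; everything else waits for more evidence.

def confirm_with_radar(vision_objects, radar_returns, max_gap_m=2.0):
    """Split vision detections into radar-confirmed and unconfirmed."""
    confirmed, unconfirmed = [], []
    for obj in vision_objects:
        near = any(abs(obj["range_m"] - r["range_m"]) <= max_gap_m for r in radar_returns)
        (confirmed if near else unconfirmed).append(obj)
    return confirmed, unconfirmed

vision = [{"label": "truck", "range_m": 42.0}, {"label": "overpass?", "range_m": 120.0}]
radar  = [{"range_m": 41.3}]

confirmed, unconfirmed = confirm_with_radar(vision, radar)
print("act on:", confirmed)               # the truck both sensors agree on
print("needs more evidence:", unconfirmed)
```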


Reaching AP1 parity was so critical that all Tesla developers focused on it. This caused the delay of the famous 8.1 update, which was supposed to release last year. Elon even mentioned that all hands were on deck for AP2 reaching AP1 parity and that finishing the 8.1 update was contingent on AP2's progress.

I have mentioned in the past that there are currently two teams with two separate code bases, although FSD development has stalled while AP2 works toward parity. There are only so many software engineering resources to go around, but as AP2 reaches AP1 parity, probably close to half of those developers will move onto FSD while the other half continue EAP development.



AP2, or rather EAP (autosteer+, on-ramp/off-ramp, self lane change, self park, smart summon), was meant to realize the vision and promises of AP1. AP1 promised smart summon, self parking, traffic light/stop sign reading, self lane change, and on-ramp/off-ramp. None of those were fully delivered. So in essence, it is a stopgap.



FSD doesn't actually replace AP2 (EAP). What Tesla is planning on doing is forking the EAP code repository into two branches. One that is simply EAP features (which those who paid for EAP will have access to) and another that includes some basic FSD features (which those who paid for FSD will have access to).

They would then gradually field FSD features in the second branch as driver assistance until their FSD repository and code (which is completely different) is ready to be fielded as driver assistance. That doesn't mean their system is done; it could be at 50 miles per disengagement and they might feel they are ready. Their FSD code would then replace the EAP + Basic FSD fork.

Basically:

EAP | EAP + Basic FSD | FSD


I have seen it mentioned a couple of times that AP1 reads speed limit signs, so I won't be surprised if AP2 does as well.
They also both fall back to a database of speed limits from the NHTSA.
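A minimal sketch of that kind of fallback, assuming a made-up confidence threshold and function name; nothing here reflects Tesla's actual implementation:

```python
# Toy sketch of the fallback described above: prefer a speed limit read
# from a sign when the camera is confident, otherwise fall back to a
# stored database value.

def effective_speed_limit(sign_reading, db_limit_mph, min_confidence=0.8):
    """sign_reading is (limit_mph, confidence) or None if no sign was seen."""
    if sign_reading is not None:
        limit, confidence = sign_reading
        if confidence >= min_confidence:
            return limit, "camera"
    return db_limit_mph, "database"

print(effective_speed_limit((65, 0.95), db_limit_mph=55))   # (65, 'camera')
print(effective_speed_limit(None, db_limit_mph=55))         # (55, 'database')
```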



I'm pretty sure AP1 will benefit from the HD maps created by AP2 and will be able to properly handle curved roads and even maintain lanes with no lane markings a lot more often. They could replace the high-precision map that AP1 uses with the HD maps that AP2 will create.



I believe we will see who has the advantage when L4 vehicles without human drivers are deployed, whether in mobility services or in the hands of consumers.
 
Some more past posts that are relevant:

I have told you people from day one that they were only using one camera to replicate AP1, but you refuse to listen.
Secondly, the camera they are using is the main camera, not the wide.


Don't hold your breath waiting for more cameras to be used.
The two rearward-facing cameras will be used only to change lanes, for example.
EAP is primarily based on a one-camera configuration. In the conference call Elon said they use the redundant front cameras (Main and Narrow).


The real FSD development and EAP development are two separate teams.
And when I say real FSD development, I don't mean the EAP dev team that will start working on partial FSD features when EAP is done.

I have said things that most people today have come to take as facts.
But back then I was killed for it.

EAP HW2 - my experience thus far... [DJ Harry] (only one camera active)

Enhanced Autopilot 2 may never happen before Full Self Driving is ready! (my accurate predicted timeline)

But I have been right on every one of them: whether it's the delay, or AP currently using only one camera, or the fact that mapping is required for meaningful self lane-change and off/on-ramp (anything less would be a glorified AP1), or the fact that fleet learning is a myth, or the fact that shadow mode has taken on a life of its own but is actually only a steering validation tool and nothing more.
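If shadow mode really is just a steering validation tool, its core could be as simple as this toy sketch (field names and the tolerance are invented; this is only an illustration of the claim): compare what the model would have done with what the driver actually did, log the disagreements, and actuate nothing.

```python
# Toy sketch of "shadow mode" as a pure validation tool: the model's
# proposed steering is compared with what the human driver actually did,
# disagreements are logged, and nothing is ever sent to the actuators.

def shadow_mode_log(samples, tolerance_deg=5.0):
    """Return log entries where model and driver steering diverge."""
    disagreements = []
    for s in samples:
        delta = abs(s["model_steering_deg"] - s["driver_steering_deg"])
        if delta > tolerance_deg:
            disagreements.append({"t": s["t"], "delta_deg": round(delta, 1)})
    return disagreements

drive = [
    {"t": 0.0, "model_steering_deg": 1.0,  "driver_steering_deg": 0.5},
    {"t": 0.1, "model_steering_deg": 12.0, "driver_steering_deg": 2.0},  # model would have swerved
]
print(shadow_mode_log(drive))   # only the second sample gets logged
```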

So the EAP update was released yesterday, and it's subpar, containing only the Traffic-Aware Cruise Control feature, Forward Collision Warning, and (low-speed) Autosteer.

In fact, the update doesn't even match AP1, and there is no mention of Summon or Autopark.

Based on Tesla's page, we were supposed to get Autosteer+, automatic lane change, auto exit, Smart Summon, and self-parking.
We got none of those. All we got was cruise control and an Autosteer that works below 35 mph.

What am I saying? With Tesla's development release cadence being 2-3 months, as Elon said during his AP2 hardware conference call, it will take another 3-4 months for an EAP update to bring it to parity with AP1. Then it will take another 2-3 months for an update that introduces some samples of the true EAP features listed above. Then another 2-3 months for an update that brings the cars close to feature-complete on the promises of EAP.

So the estimate is at least 9 months before Tesla is able to deliver anything close to what it promised. That's Sept 2017.

I can't say I didn't tell you so. Now you'll say 9 months isn't "NEVER".
Well, I said "close to feature complete", not feature complete. Based on Elon, the end of 2017 is the time for their FSD demonstration.
Meaning it will be in the final stages, although I don't believe they will be able to showcase anything independently verifiable in 2017.

Current HW2 Autopilot using 2 of 8 cameras * Testing Inside *

Enhanced Autopilot 2 may never happen before Full Self Driving is ready!
 
One hater vs one believer. We will find out by December.
Hater? What? Every single thing I have said has been a bullseye, while your post has several inaccuracies, like lidar not working in rain or snow, plus radar being better than lidar. Lastly, you can't confirm what you're seeing with something that has a 30-degree FOV; a 30-degree FOV can't back up the 360-degree FOV of the cameras.

My record, on the other hand, has been spotless.

Including the number of cameras used, the use of HD mapping and data collection for object recognition, EAP being delayed until the FSD codebase is ready, actual EAP features appearing Aug-Sept, and many more.

This is with most of my predictions coming in Dec 2016, before the issues were apparent.
That's too perfect a record to be a hater. Haters are typically blindly negative and, because of that, are mostly wrong.

Highlight:

So the estimate is at least 9 months before Tesla is able to deliver anything close to what it promised. That's Sept 2017.
 
As per the investor call today, the cross-country autonomous drive is still on track. Worst-case scenario, early next year.

Yes, and a pig will fly across the solar eclipse on Monday. Sorry, but given the current state of AP2, unless Elon sends 500 cars across the country in the hope that "one" makes it, it's just not happening, and let's not even get into "soon". If we are lucky, Dec 2018, maybe...

I REALLY REALLY want to be wrong on this.
 
Yes, and a pig will fly across the solar eclipse on Monday. Sorry, but given the current state of AP2, unless Elon sends 500 cars across the country in the hope that "one" makes it, it's just not happening, and let's not even get into "soon". If we are lucky, Dec 2018, maybe...

I REALLY REALLY want to be wrong on this.
Well, prepare to be wrong. I sincerely doubt that AP2 is running on the same code base as FSD, just judging by the basic behavior of current AP2 versus what was in the FSD demo. AP1 and AP2 can't even recognize relatively simple things such as red lights, green lights, or freeway exit navigation based on your entered destination and planned route. If they were using a subset of FSD as part of AP2, you would see some of these more easily implemented features put into AP2, but you don't. It's because AP2 was developed merely as a temporary replacement after AP1 with Mobileye was lost.

When we see these more basic features such as traffic light recognition in place, we will know they've switched over the code base to a subset of FSD.
 
Well, prepare to be wrong. I sincerely doubt that AP2 is running on the same code base as FSD, just judging by the basic behavior of current AP2 versus what was in the FSD demo. AP1 and AP2 can't even recognize relatively simple things such as red lights, green lights, or freeway exit navigation based on your entered destination and planned route. If they were using a subset of FSD as part of AP2, you would see some of these more easily implemented features put into AP2, but you don't. It's because AP2 was developed merely as a temporary replacement after AP1 with Mobileye was lost.

When we see these more basic features such as traffic light recognition in place, we will know they've switched over the code base to a subset of FSD.

I'm very happy to be wrong on this... However, I suspect I'm not; we've already seen a leak of stop sign detection and indications in the debug screens for AP. I don't think @verygreen has found any "notion" of a completely separate code base.

reference: Tesla’s new Autopilot update detected and displayed stop signs, but it didn’t act on them
 
I'm very happy to be wrong on this... However, I suspect I'm not; we've already seen a leak of stop sign detection and indications in the debug screens for AP. I don't think @verygreen has found any "notion" of a completely separate code base.
The whole idea of a separate codebase is that it is separate, i.e. there would be no signs of it in unrelated codebases.

Anyway, in the ape code there is one very separate thing: the NN code and trained model. Those are drop-in bits from somewhere else. Whether it's a different codebase, or there's a different team doing them and only providing periodic snapshots, I don't know, but there's that datapoint. Literally nothing else (except third-party code drops like Elektrobit) looks the same; everything else seems to be built from the same codebase at the same time, monolithically.
 