Welcome to Tesla Motors Club
Frustrated with FSD timeline

Tesla has never been shy about forward-looking marketing. I believe that if there were a disengagement free point-to-point operator designated route, it would be on Tesla's site already.

In 2016, Tesla tested only 4 AVs on public roads, from Oct 14 to Nov X. You'll see why the November end date is unknown in a minute.

This is what was posted publicly Nov 18, 2016:

"Autopilot Full Self-Driving Hardware (Neighborhood Short) by Tesla, Inc on Vimeo"

If you look at the map, you'll notice it's probably not a 'real' route. It appears to be a programmed path, i.e., pre-programmed into the car. It was not a route you would take to get from the start location to the end location.

But there is something that you should look at. https://www.dmv.ca.gov/portal/wcm/c...a/Tesla_disengage_report_2016.pdf?MOD=AJPERES

There was only 1 Tesla AV operating in Nov 2016 or later, and it was perfect: no disengagement dates reported in November.

SYJXCCE43GF017809 was the last car operating in 2016, since it was the only car operating in Nov 2016. It had 63 disengagements in 97 miles in October, then jumped to zero disengagements for the 20 miles it operated in November. Prior to that run, the car was last operated Oct 14-17.

There were 3 other cars operating in October. All had high disengagement rates, just like #17809.

Statistically, SN 17809 was most likely the car in that video, since it reported zero disengagements over 20 miles of operation. That would have been nearly impossible for any of the 4 cars in October, when the fleet averaged 3.66 miles per disengagement.
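The arithmetic behind that claim can be checked directly. This quick sketch uses only the numbers quoted from the report; the Poisson assumption for the plausibility check is mine, not the report's:

```python
import math

# Figures quoted above, from Tesla's 2016 CA DMV disengagement report:
miles_oct = 97              # miles car #17809 drove in October
diseng_oct = 63             # its October disengagements
fleet_mi_per_diseng = 3.66  # fleet-wide miles per disengagement in October
nov_miles = 20              # November miles, reported with zero disengagements

print(miles_oct / diseng_oct)  # ~1.54 miles per disengagement for #17809

# Rough plausibility check. Assumption (mine): disengagements arrive as a
# Poisson process at the October fleet rate.
expected = nov_miles / fleet_mi_per_diseng  # ~5.5 expected in 20 miles
print(math.exp(-expected))                  # ~0.004: a clean 20-mile run was very unlikely
```

In other words, at October performance, a zero-disengagement 20-mile run would have had well under a 1% chance of happening.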

But why zero disengagements for 1 car in Nov? That's not credible when you look at the data and the dates. The disengagement rate does not fall rapidly near the end of October testing, they do not bring out all 4 cars for the November test, and after the brief single-car November run they halt public-road testing for the remainder of the year.

I don't think they rigged the test, but I do believe the video exaggerated the progress at that point. Pre-programming to reduce the decisions required was most likely used, kind of like a CNC machine with a touch probe in its library: you know exactly what you are going to do, but the touch probe verifies you are not off your path.
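The touch-probe analogy above can be sketched in code: the car follows stored waypoints, and perception merely verifies it hasn't drifted off the recorded path. Everything here (names, tolerances) is illustrative and hypothetical, not anything from Tesla's actual software:

```python
# Hypothetical "pre-programmed route" check: perception acts only as a
# touch probe, confirming the car is still near a stored waypoint.
from dataclasses import dataclass
import math

@dataclass
class Waypoint:
    x: float  # meters east
    y: float  # meters north

def off_path(position: Waypoint, route: list, tolerance_m: float = 0.5) -> bool:
    """True if the measured position is farther than tolerance_m from
    every stored waypoint -- the 'touch probe' sanity check."""
    return all(math.hypot(position.x - w.x, position.y - w.y) > tolerance_m
               for w in route)

route = [Waypoint(0, 0), Waypoint(10, 0), Waypoint(20, 1)]
print(off_path(Waypoint(10.2, 0.1), route))  # False: within tolerance of a waypoint
print(off_path(Waypoint(10.0, 3.0), route))  # True: drifted off the path
```

The point of the analogy: very little decision-making is needed when the full route is known in advance; the hard part is reduced to staying within tolerance of it.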
I did the analysis here already. It's fairly obvious they had it on public roads only for making the videos. Most of the disengagements in October were from wet weather.

Funny that you say you don't think they rigged the test, but throughout your whole post that is exactly what you are implying they did.
Sigh... Let me set the record straight (in fact, I corrected you before, @oktane).
Wait on adding AP2 and FSD or you will be sorry...maybe

There were 2 videos.

This first video from October 20, 2016 had cuts (although it is still possible they used footage from one continuous take; they just didn't think to show all views together as they did in the second video). There were 8 disengagements during the dry days (10/15, 10/17) from which they could have used footage. The other disengagements in the reports were either from wet days or after the video.
Full Self-Driving Hardware on All Teslas

A second video was released November 18, 2016 that had no cuts. There were no reported disengagements for the 20 miles they drove in November.
Tesla Self-Driving Demonstration

Disengagement report:
https://www.dmv.ca.gov/portal/wcm/c...a/Tesla_disengage_report_2016.pdf?MOD=AJPERES
 
I saw a more ambitious proof of concept than the Nvidia video. The Nvidia video was obviously very limited in training, but I thought Tesla's had used more training because they had more data, more motivation, or something. I thought the Tesla video used a similar technique to the one described in the Nvidia white papers for training. Now I honestly don't know what to think.

So, this is a real question, and I'm a computer engineer who spends a lot of time developing code too, so I can take a little jargon thrown my way. I'm being brief, so before replying immediately with "No! They're completely different branches!" keep reading. Let's assume both FSD- and EAP-equipped cars are going to show lane lines. Is the assumption of those who say EAP and FSD are completely different branches that lane line detection is going to be completely different? What about the code (whatever that technique / philosophy / method is) that manages highway interchanges?

I assumed that EAP was going to be a limited version of FSD, but underneath both was going to be the same code and that was going to be based on something akin to the Nvidia method described in their white paper. So a NN was going to detect lane lines, stop signs, cars on the road and position the car in the lane. And, we'd see highway lane keeping before dirt road lane keeping because the NN was going to pick up on the highway sooner in training because it is a more uniform medium across the world. And then more and more environments and actions (turn left, stop for stop light, etc.) would be available as trained but then those would only be "unlocked" for FSD and not EAP.

Again, just as an example, I don't understand how lane keeping can be where it is with EAP while FSD is somehow so much better. Why not just run the FSD software and utilize only its steering control output, if the FSD branch's control is so much better? Show the algorithmically determined EAP lane lines just for graphics, but use the FSD NN for the steering control. Since they're not, it makes me think that the FSD steering control can't be as good as EAP's, if indeed FSD actually exists somewhere. I guess that's why I'm frustrated with the timeline. I think the "FSD branch" has to be worse than EAP at lane keeping, so how can it be anywhere close to making a right turn (an example from the Nvidia video) or any of the other more challenging maneuvers?
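The "one shared stack, features unlocked by package" idea speculated about above can be sketched like this. Purely hypothetical: the feature names and class are made up for illustration and reflect nothing about Tesla's actual codebase:

```python
# Speculative sketch: same underlying perception/control code for both
# packages; the purchased package only gates which behaviors are enabled.
EAP_FEATURES = {"lane_keep", "lane_change", "highway_interchange"}
FSD_FEATURES = EAP_FEATURES | {"turn_left", "stop_for_light", "city_streets"}

class DrivingStack:
    def __init__(self, package: str):
        self.enabled = FSD_FEATURES if package == "FSD" else EAP_FEATURES

    def can_run(self, feature: str) -> bool:
        return feature in self.enabled

eap = DrivingStack("EAP")
fsd = DrivingStack("FSD")
print(eap.can_run("lane_keep"))       # True: shared behavior
print(eap.can_run("stop_for_light"))  # False: trained, but not unlocked for EAP
print(fsd.can_run("stop_for_light"))  # True
```

Under this model, EAP lane keeping and FSD lane keeping would literally be the same code, which is exactly why the "completely different branches" claim puzzles the poster.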
 

Likely there is going to be a similar structure to the neural networks, and likely the same training for certain things. Other things might be very different.

Since we do not know anything about the current state of FSD, we cannot infer one way or the other which is superior. Don't worry so much about the graphics, as they mean nothing; what matters more is what the car is actually detecting.
 

But can you really imagine a software engineer having a very sophisticated object recognition system under the hood and not showing it off to the world on the instrument cluster?

For example, I find it hard to believe that an AP2 equipped car is actually detecting other cars (other than the one directly in front of it), since AP1 shows other cars, and you'd think an engineer would want to show that on the UI since it was standard before.
 
I saw a more ambitious proof of concept than the Nvidia video. The Nvidia video was obviously very limited in training, but I thought Tesla's had used more training because they had more data, more motivation, or something. I thought the Tesla video used a similar technique to the one described in the Nvidia white papers for training. Now I honestly don't know what to think.

So, this is a real question, and I'm a computer engineer who spends a lot of time developing code too, so I can take a little jargon thrown my way. I'm being brief, so before replying immediately with "No! They're completely different branches!" keep reading. Let's assume both FSD- and EAP-equipped cars are going to show lane lines. Is the assumption of those who say EAP and FSD are completely different branches that lane line detection is going to be completely different? What about the code (whatever that technique / philosophy / method is) that manages highway interchanges?

I assumed that EAP was going to be a limited version of FSD, but underneath both was going to be the same code and that was going to be based on something akin to the Nvidia method described in their white paper. So a NN was going to detect lane lines, stop signs, cars on the road and position the car in the lane. And, we'd see highway lane keeping before dirt road lane keeping because the NN was going to pick up on the highway sooner in training because it is a more uniform medium across the world. And then more and more environments and actions (turn left, stop for stop light, etc.) would be available as trained but then those would only be "unlocked" for FSD and not EAP.

Again, just as an example, I don't understand how lane keeping can be where it is with EAP while FSD is somehow so much better. Why not just run the FSD software and utilize only its steering control output, if the FSD branch's control is so much better? Show the algorithmically determined EAP lane lines just for graphics, but use the FSD NN for the steering control. Since they're not, it makes me think that the FSD steering control can't be as good as EAP's, if indeed FSD actually exists somewhere. I guess that's why I'm frustrated with the timeline. I think the "FSD branch" has to be worse than EAP at lane keeping, so how can it be anywhere close to making a right turn (an example from the Nvidia video) or any of the other more challenging maneuvers?
I don't know how adjustable it is in Tesla's system, but one major difference between a level 2 system (EAP) and a level 4 or 5 system (FSD) is that the former can rely on falling back on the human. So a level 2 system would tend to be tuned to avoid false positives (even if it may increase false negatives). A level 4 or 5 system must be tuned to avoid false negatives as much as possible (even if it causes false positives).
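The tuning difference described above can be illustrated with a single detector score and two thresholds. The thresholds and function are made up for illustration; the point is only the direction of the trade-off:

```python
# A Level 2 system picks a high threshold (few false alarms; the human
# driver is the backstop for misses). A Level 4/5 system picks a low one
# (catch everything; tolerate some phantom detections).
def detect_obstacle(score: float, threshold: float) -> bool:
    """Declare an obstacle when detector confidence exceeds the threshold."""
    return score >= threshold

LEVEL2_THRESHOLD = 0.9   # tuned against false positives
LEVEL45_THRESHOLD = 0.3  # tuned against false negatives

ambiguous_score = 0.5    # e.g. a plastic bag that might be a rock
print(detect_obstacle(ambiguous_score, LEVEL2_THRESHOLD))   # False: ignore it
print(detect_obstacle(ambiguous_score, LEVEL45_THRESHOLD))  # True: brake/avoid
```

Same perception output, opposite decisions, which is why behavior tuned for a Level 2 product tells you little about how the same stack would be tuned for Level 4/5.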
 
But can you really imagine a software engineer having a very sophisticated object recognition system under the hood and not showing it off to the world on the instrument cluster?

For example, I find it hard to believe that an AP2 equipped car is actually detecting other cars (other than the one directly in front of it), since AP1 shows other cars, and you'd think an engineer would want to show that on the UI since it was standard before.
Isn't that the case for most non-Tesla ADAS systems? They can detect cars on the side, but the UI can only show the car ahead in the same lane.
 

While true, Tesla does have a UI and screen to display them, and does on AP1. The fact that they don't on AP2 suggests it is either not seeing them as well or not very reliable at it on AP2.
 
Isn't that the case for most non-Tesla ADAS systems? They can detect cars on the side, but the UI can only show the car ahead in the same lane.

Not sure the other systems detect adjacent lanes.

While true, Tesla does have a UI and screen to display them, and does on AP1. The fact that they don't on AP2 suggests it is either not seeing them as well or not very reliable at it on AP2.

Exactly my point. AP1 showed cars in adjacent lanes and differentiated car vs. motorcycle vs. truck.
 
I did the analysis here already. It's fairly obvious they had it on public roads only for making the videos. Most of the disengagements in October were from wet weather.

Funny that you say you don't think they rigged the test, but throughout your whole post that is exactly what you are implying they did.
FSD should not stand for Functions Solely in Deserts. All AV tech works when wet, or it would be sort of pointless to deploy it on Earth.

In any case, there was 0.00 inches of recorded rain from Oct 14-22, 2016 in Fremont. There were reports of fog and drizzle on 3 of the testing days, but not a measurable amount of moisture. The 17th, 19th, and 22nd testing days in October were dry.

But other than the video, there is no indicator of where the 560? miles were run. Could have been pouring or dry as a bone where they tested.

I'm saying that video is not representative of the operational condition of FSD in 2016. If you want to say it's "rigged" go ahead. I won't make that assumption because there are multiple scenarios possible.
 
Did you bother reading the actual law?

"§ 227.18. Requirements for Autonomous Vehicle Test Drivers.
A manufacturer shall not conduct testing of an autonomous vehicle on public roads unless the vehicle is operated or driven by an autonomous vehicle test driver who meets each of the following requirements:
(a) The autonomous vehicle test driver is either in immediate physical control of the vehicle or is actively monitoring the vehicle's operations and capable of taking over immediate physical control."

As for the second link, if you read it, the bill only allows no operators for a specific pilot program in Contra Costa in a geofenced area up to 35 mph (and the vehicle has no steering wheel or other controls). If you want to do testing in general, you need a test driver in the car. That's obviously what the video was referring to.

I read it. You skipped the definition of an autonomous vehicle. Having an autonomous vehicle test driver in a car doesn't make it an autonomous vehicle.
 
I read it. You skipped the definition of an autonomous vehicle. Having an autonomous vehicle test driver in a car doesn't make it an autonomous vehicle.
No, the definition according to the State of California is:

§ 227.02
(b) “Autonomous vehicle” means any vehicle equipped with technology that has the capability of operating or driving the vehicle without the active physical control or monitoring of a natural person, whether or not the technology is engaged, excluding vehicles equipped with one or more systems that enhance safety or provide driver assistance but are not capable of driving or operating the vehicle without the active physical control or monitoring of a natural person.

(c) “Autonomous vehicle test driver” means a natural person seated in the driver's seat of an autonomous vehicle, whether the vehicle is in autonomous mode or conventional mode, who possesses the proper class of license for type of vehicle being driven or operated, and is capable of taking over active physical control of the vehicle at any time.
 
control or monitoring

On a tangent, for those who claim FSD could not be released due to regulations:

I'd guess it's that underlying part that means "FSD" is not autonomous if it is Level 2, i.e. requiring driver monitoring...

The real regulatory hurdle starts if you want to look past Level 2...

Regulations are not constraining Tesla yet, nowhere near... their code maturity is...
 
Is there anyone that believes any Tesla is "equipped with technology that has the capability of operating or driving the vehicle without the active physical control or monitoring of a natural person?" By all means summon your car to open the garage door and pick you up. :cool:
 

Public vehicles are equipped with the hardware to perform that demo, not the enabled software.
 
Then the car currently doesn't have the technology capability defined in the statute to qualify as an autonomous vehicle. Having unused hardware whirring away in the car does not an autonomous vehicle make. Same goes for a robot without software. It lacks the "technology" to make Teslas.
 
FSD should not stand for Functions Solely in Deserts. All AV tech works when wet, or it would be sort of pointless to deploy it on Earth.

In any case, there was 0.00 inches of recorded rain from Oct 14-22, 2016 in Fremont. There were reports of fog and drizzle on 3 of the testing days, but not a measurable amount of moisture. The 17th, 19th, and 22nd testing days in October were dry.

But other than the video, there is no indicator of where the 560? miles were run. Could have been pouring or dry as a bone where they tested.

I'm saying that video is not representative of the operational condition of FSD in 2016. If you want to say it's "rigged" go ahead. I won't make that assumption because there are multiple scenarios possible.
The October video was recorded in Palo Alto, near Tesla headquarters. It was raining from October 14-16 (fog and rain on the 15th). It cleared up on the 17th. This matches up with the report.
Weather History for Palo Alto, CA | Weather Underground

Rainy and foggy conditions are more difficult than dry weather for even the best autonomous systems.

Here's Waymo's article describing issues with rain (car pulls over if conditions are particularly bad):
Sensing in the rain. The limits of self-driving in sunny California.
For a while Waymo didn't do any testing when there was heavy rain (or snow at all) because the system couldn't handle it:
Google’s Self-Driving Cars Still Face Many Obstacles
They only started that in 2016.
Google testing self-driving cars in Washington

I think you are overestimating the capabilities of autonomous vehicle tech so far. Definitely not all systems can handle adverse weather conditions.
 

There's a difference between rain and HEAVY RAIN.
None of the articles you posted mentioned anything about Google's (potentially former) inability to drive in rain, but rather their cautiousness about driving in HEAVY RAIN.

I'm sorry, but driving in the rain is easy; just because Tesla struggles with it doesn't mean you have to paint other companies with the same brush to make Tesla not look terrible.

Simple startups can handle rain perfectly, which shows you how far behind Tesla really is.

 
I read it. You skipped the definition of an autonomous vehicle. Having an autonomous vehicle test driver in a car doesn't make it an autonomous vehicle.
First of all, the argument is over whether California legally allows a car to be operated without a driver.

Even if you don't consider Tesla's vehicle an autonomous vehicle, California has no legal provision to allow someone to operate a driverless vehicle. And if the vehicle is capable of operating without a driver, it is an autonomous vehicle under that statute anyway.

Are you arguing that California's law allows Tesla to operate that vehicle without a driver (i.e., that the driver in the video was not legally required)?

Secondly, your argument sounds suspiciously like Uber's. They claimed that since they required human drivers to monitor the vehicles, their cars were not autonomous vehicles (but rather Level 2 vehicles like a Tesla), so they didn't have to get a California autonomous vehicle permit and the provisions didn't apply to their cars.
http://jalopnik.com/uber-is-so-petty-that-it-won-t-pay-150-for-a-state-per-1790225716
A couple of months later, they complied with their tails between their legs.
Uber concedes to DMV, says it will apply for self-driving permit in California

Then the car currently doesn't have the technology capability defined in the statute to qualify as an autonomous vehicle. Having unused hardware whirring away in the car does not an autonomous vehicle make. Same goes for a robot without software. It lacks the "technology" to make Teslas.
It does with the software Tesla is running in the vehicles in the videos. That's enough to require them to have an autonomous vehicle permit, which they operate those vehicles under.