
Autonomous Car Progress

Wanted to mention something to see if anyone else has experienced this. I was sitting at a stop light yesterday evening as the sun was setting directly behind me. I heard the green light chime go off in my car, glanced at the dash, and it showed the stoplight was green. But when I looked up at the stoplight itself, I could (barely) see that the light was actually still red; the sun's bright glare just made the green light appear lit. Makes me wonder what the behavior of the car would have been if I were on Autopilot driving towards that light a few months from now, when we no longer have to confirm when approaching stoplights (with no cars in front of us), AND how any form of NN/AI/ML/DOJO can solve for that situation.
 
But when I looked up at the stoplight itself, I could (barely) see that the light was actually still red; the sun's bright glare just made the green light appear lit. Makes me wonder what the behavior of the car would have been if I were on Autopilot driving towards that light a few months from now, when we no longer have to confirm when approaching stoplights (with no cars in front of us), AND how any form of NN/AI/ML/DOJO can solve for that situation.
I've had a similar situation (not on FSD) where I couldn't tell if the light had switched from red to green, because the sun was making the green light look "lit".
What I did was check my surroundings (did the car in the next lane go? what about cross traffic?), then inch forward until I could see that the light was still red.

Maybe a similar approach?
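To make the idea concrete, here is a minimal, purely hypothetical sketch of that "second opinion" heuristic: distrust a glare-suspect green, check what surrounding traffic is doing, and only creep forward to re-verify. Every function name, threshold, and data structure below is invented for illustration and has nothing to do with Tesla's actual code.

```python
# Hypothetical sketch: don't trust a single "green" classification when glare is likely.
# All names, thresholds, and data structures are invented for illustration.

from dataclasses import dataclass

@dataclass
class LightEstimate:
    color: str          # "red", "green", "yellow", "unknown"
    confidence: float   # 0.0 - 1.0 from the vision network
    sun_glare: bool     # glare detected in the traffic-light region of the image

def should_proceed(light: LightEstimate,
                   cross_traffic_moving: bool,
                   adjacent_lane_moving: bool) -> str:
    """Return a conservative action given a possibly glare-corrupted light estimate."""
    if light.color != "green":
        return "hold"
    if light.confidence > 0.9 and not light.sun_glare:
        return "go"
    # Low confidence or glare: use surrounding traffic as a second opinion,
    # the same way the human poster did, then inch forward and re-check.
    if cross_traffic_moving:
        return "hold"                  # contradicts "green"; almost certainly still red
    if adjacent_lane_moving:
        return "creep_and_reverify"    # weak corroboration; verify before committing
    return "hold_and_reverify"         # no corroboration either way; wait and re-check

# Example: glare on the light, cross traffic stopped, the neighbor starts moving.
print(should_proceed(LightEstimate("green", 0.55, True), False, True))
# -> "creep_and_reverify"
```

The point isn't the specific numbers; it's that a single classifier output can be cross-checked against independent cues before the car commits to entering the intersection.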
 
Which edge case do you think will take the longest for Tesla to solve and how long do you think it'll take?

There are really so many that I have no idea which will take the longest. I think at least five years and maybe ten before I can buy a car that never needs anybody in the driver's seat. One edge case that concerns me:

Two-lane road with narrow or no shoulders. Bicycle is approaching from the other direction. The cars coming up behind the cyclist will move over to give him room. Will an FSD car anticipate and give those oncoming cars some room? I encounter this frequently whenever I take the non-highway route to the shopping area of my city.

Really, my pessimistic outlook about the timeline is based on the number of times the driver has to take over, and on situations that require anticipating the actions of other drivers, such as the above. You and I would see that cyclist on the other side of the road and we'd know that the cars behind him are going to swerve. Will an FSD program know that?

I expect to see a gradual reduction in the number of times the driver has to take over, and an almost imperceptible shift from "still needs a safety driver" to "Hey, we haven't needed to intervene in a long time. Maybe we're at Level 5." When that time comes, it will have to be confirmed and verified with more testing. I think that December 2021 is overly optimistic.
 
Two-lane road with narrow or no shoulders. Bicycle is approaching from the other direction. The cars coming up behind the cyclist will move over to give him room. Will an FSD car anticipate and give those oncoming cars some room? I encounter this frequently whenever I take the non-highway route to the shopping area of my city.

Really, my pessimistic outlook about the timeline is based on the number of times the driver has to take over, and on situations that require anticipating the actions of other drivers, such as the above. You and I would see that cyclist on the other side of the road and we'd know that the cars behind him are going to swerve. Will an FSD program know that?

In the latest video from AIDRVR (in Berkeley), he shows some interesting predictions involving pedestrians and cyclists. Also, the car is becoming better and better in narrow streets, as Kim Paquette demonstrates.

As for the cyclist situation, to me it seems like just another moving object in the kind of situation FSD Beta has demonstrated "understanding" of before; see Kim Paquette's videos on yielding to or avoiding cars on narrow roads:

https://twitter.com/kimpaquette/status/1351215351523143683
 
...One edge case that concerns me:

Two-lane road with narrow or no shoulders. Bicycle is approaching from the other direction. The cars coming up behind the cyclist will move over to give him room. Will an FSD car anticipate and give those oncoming cars some room? I encounter this frequently whenever I take the non-highway route to the shopping area of my city.

Really, my pessimistic outlook about the timeline is based on the number of times the driver has to take over, and on situations that require anticipating the actions of other drivers, such as the above. You and I would see that cyclist on the other side of the road and we'd know that the cars behind him are going to swerve. Will an FSD program know that?

I see no reason that a well-behaved FSD can't anticipate and predict situations. In fact it probably had better do so in order to avoid creating a hazard or needing to take an emergency avoidance maneuver.

I doubt Tesla's version is doing this right now. Mainly it drives like it's reacting to each obstacle in turn rather than to how two or more obstacles are interacting (or will be interacting in the near future).

Again, a 16-year-old new driver may not be thinking this far ahead either, but hopefully they learn.
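For what it's worth, this particular anticipation can be framed as fairly ordinary geometry. Here is a toy sketch, with invented widths and margins and nothing to do with any real planner: if the oncoming car has to give the cyclist room, will it cross the centerline, and if so do we need to make space?

```python
# Toy sketch of anticipating an oncoming car swerving around a cyclist.
# All widths, margins, and names are invented; real planners are far more involved.

CLEARANCE_M = 1.5            # lateral margin a passing car typically gives a cyclist
ONCOMING_CAR_WIDTH_M = 1.9
EGO_CAR_WIDTH_M = 1.9

def oncoming_intrusion_m(oncoming_lane_width_m: float,
                         cyclist_envelope_m: float = 0.8) -> float:
    """Metres by which an oncoming car, giving the cyclist CLEARANCE_M of room,
    would cross the centerline into our lane (0.0 if everything fits)."""
    needed = cyclist_envelope_m + CLEARANCE_M + ONCOMING_CAR_WIDTH_M
    return max(0.0, needed - oncoming_lane_width_m)

def plan(oncoming_lane_width_m: float, our_lane_width_m: float) -> str:
    intrusion = oncoming_intrusion_m(oncoming_lane_width_m)
    if intrusion == 0.0:
        return "maintain"
    # The oncoming car will likely swerve into our lane; can we both still fit?
    room_left_for_us = our_lane_width_m - intrusion
    if room_left_for_us >= EGO_CAR_WIDTH_M + 0.3:   # 0.3 m buffer, invented
        return "slow_and_hug_right_edge"
    return "slow_and_prepare_to_stop"

print(plan(oncoming_lane_width_m=3.0, our_lane_width_m=3.5))
# -> "slow_and_hug_right_edge"
```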
 
You are being dishonest. You literally preached that Tesla would have Level 5 by the end of 2018.
In fact, you were so adamant that it prompted me to ask that question for you to reiterate.
The statement you made in May 2017 was



That doesn't sound like some neutral statement. That is an adamant statement of belief and fact. You are making a definite assertion. If you swapped out the subject with anything else to remove bias, no one would read that statement as anything other than a definite assertion. You asserted that "Even if Tesla releases in late 2018/early 2019, they will still be ahead."

This is what prompted me to ask you the question, to which you basically responded yes, but said it would be the fault of others and regulation if it doesn't happen, not Tesla not having the software ready.

Your question was "so you truly believe that tesla will release L5 in 2018?". I basically responded no. Where did I say I believed they would get the software done in 2018 (much less be willing to bet on it)? Those are all things you are imagining and putting words in my mouth. My point was that even if they do so, it's irrelevant, since if the legal framework is not there, they can't release it anyway (and per the ongoing argument over the SAE definitions, it's not really L5 if usage is limited to the very small geofenced areas that might allow it). Your question was about release, not about getting the software done (you never asked me about that).

The whole Audi Level 3 system is a prime example. By all accounts it seemed they had the software and hardware pretty much ready to go, but the legal framework to allow it was not there, so they had to ditch their plans.

You didn't say you found it unlikely because they wouldn't get the software done. That is what's important here. You found it unlikely because you believe that the big bad bogeyman oil automakers would try to enact regulation to slow Tesla down.

You already asserted what you believed, which is that "Even if Tesla releases in late 2018/early 2019, they will still be ahead."
LOL, again trying to put words in my mouth. Where did I say I believed Tesla would release it in late 2018/early 2019? I was responding to your claim that the delay in the promised schedule would mean fans who claim Tesla is ahead would have been wrong (context below). My point was that even if Tesla released on that delayed schedule, they would still have been ahead. Did I say I believe they will do so? Nope; in fact, as above, I said the exact opposite. And the funny thing here in 2021 is that, given all the efforts from back then have been delayed or cancelled (except perhaps Waymo's), Tesla still has a chance at being ahead depending on how the beta goes.

This also dispels the notion from many Tesla fans who claim that Tesla is not just ahead, but ahead by years. But we both know that's not how it works, right? People will twist and contort things to fit their own warped logic.

Example below and look at those upvotes

[attached screenshot: Capture3.png]

If @powertoold's post was strictly about accidents, then guess what? It was already fulfilled in the very first month. The beta testers went over 150k miles without an accident, and there still hasn't been one.
Any source for this claim? A quick search did not turn up how many miles beta testers have travelled so far. Plus, any accidents that have happened may not necessarily have been reported yet. Note that even if they passed the 150k mark, they are not out of the woods yet: two accidents in short succession would break that. As quoted below, the claim was about "on average," not just passing 150k miles once without an accident.

Clearly, the only way to compare and evaluate an SDC that is still in testing against average human reliability and accident rates is not by counting accidents that happen while humans are literally taking over and preventing them. Because there won't be any accidents. Because the drivers are preventing them. duhhhhhhhhh.

It's by actually counting accidents that WOULD HAVE occurred if the human driver hadn't taken over.
Hence they are called safety-related disengagements.

The context here is that Tesla is done, it's game over, and Tesla is 5+ years ahead. They already won, it will be ready in 6 months, and it will have human reliability (accident rate). Then someone looked up the stats for human reliability, and @powertoold said it will easily match that in 6 months.

It's quite clear to any sane, logically thinking person that @powertoold is now backpedaling and trying to change the definition, and it doesn't make sense, because his prediction viewed any other way would already be fulfilled, since there hasn't been an accident over hundreds of thousands of miles.

No logical person would take the angle you took.

Safety-related disengagements from one DirtyTesla video, with many more safety-related disengagements that I didn't include.
The point is @powertoold never backpedaled. He was explicitly responding to a question about accidents per 150k miles: "What is your estimate / guestimate for no accidents on average every 150K miles?
It's crazy I even think this, but 6-9 months lol." This is yet another prime example of you putting words into people's mouths to fight a strawman position someone never took, even after they clarified that that is not what they meant.
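Setting the argument aside, the arithmetic behind the metric both sides keep invoking is simple enough to write down. Here is a minimal sketch with made-up example numbers (the real fleet mileage and safety-disengagement counts are not public):

```python
# Back-of-the-envelope arithmetic for the metric being argued about here:
# miles per *safety-related* disengagement (a stand-in for "accidents that would
# have happened without the driver"), compared against ~150k miles per accident
# for an average human. The example numbers below are made up for illustration.

HUMAN_MILES_PER_ACCIDENT = 150_000

def miles_per_safety_disengagement(total_miles: float,
                                   safety_disengagements: int) -> float:
    if safety_disengagements == 0:
        return float("inf")   # no observed safety interventions yet
    return total_miles / safety_disengagements

fleet_miles = 150_000          # hypothetical beta-fleet mileage
safety_takeovers = 300         # hypothetical count of safety-related disengagements

rate = miles_per_safety_disengagement(fleet_miles, safety_takeovers)
print(f"{rate:.0f} miles per safety-related disengagement "
      f"vs {HUMAN_MILES_PER_ACCIDENT} miles per human accident")
# With these made-up numbers: 500 miles per safety takeover, i.e. roughly 300x
# short of the human benchmark, even though the fleet had zero actual accidents.
```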
 
I see no reason that a well-behaved FSD can't anticipate and predict situations. In fact it probably had better do so in order to avoid creating a hazard or needing to take an emergency avoidance maneuver.

I doubt Tesla's version is doing this right now. Mainly it drives like it's reacting to each obstacle in turn rather than to how two or more obstacles are interacting (or will be interacting in the near future).

Again, a 16-year-old new driver may not be thinking this far ahead either, but hopefully they learn.

The reason I think it's more difficult than the more optimistic among us believe is that I think it implies AGI, not just AI. And I'm far more pessimistic about AGI than some folks are.

A few decades ago I had an exchange with a professor who taught game programming. He said that computers are not smart. They're just stupid very very fast. Computers and their software have gotten better and better at solving difficult but clearly-defined problems, and they're even learning how to define the problem for themselves and solve it. But they've still got nothing like intelligence.

Where I see autonomous driving going is that these systems get so good at solving so many well-defined problems that they eventually match and then surpass the safety record of humans, but without ever having the sort of insight that humans have. They will be able to prevent many of the kinds of accidents that humans have, while being subject to kinds of accidents that humans seldom have. What we care about is the overall number of injuries and deaths. But without AGI (which I don't think we'll have this century) progress will be incremental. I don't think we're going to get a re-write that suddenly "solves" autonomous driving. I think we'll get gradually fewer and fewer disengagements until we reach a level of safety where we agree you're safer in an autonomous car than in one driven by a human. My best guess for that is ten years, give or take a few.
 
Wanted to mention something to see if anyone else has experienced this. I was sitting at a stop light yesterday evening as the sun was setting directly behind me. I heard the green light chime go off in my car, glanced at the dash, and it showed the stoplight was green. But when I looked up at the stoplight itself, I could (barely) see that the light was actually still red; the sun's bright glare just made the green light appear lit. Makes me wonder what the behavior of the car would have been if I were on Autopilot driving towards that light a few months from now, when we no longer have to confirm when approaching stoplights (with no cars in front of us), AND how any form of NN/AI/ML/DOJO can solve for that situation.
Your experience lines up with recent (worsening) issues I've experienced with EAP and sunlight:
Autopilot blinded, disengaged by sun on interstate

I think "edge case" sunlight has the potential to cause major safety issues with autonomous driving. I put edge case in quotes because for folks at different latitudes, the angle of the sun can cause this issue to happen repeatedly in a short time frame, depending on where you live.

In the PNW, I think the month of December has the potential to be more dangerous than others. This, perhaps only coincidentally, aligns with Q4 having a higher number of autopilot crashes than Q1, Q2, or Q3. I assumed that's because Q4 was when the majority of deliveries were happening (new drivers) and is also when we have plenty of inclement weather in the US. But now I wonder if the angle of the sun could also be related?
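A rough back-of-the-envelope check on the latitude point, using the standard solar-declination approximation (city latitudes are approximate, and this only looks at the midday maximum):

```python
# How high does the sun get at solar noon in late December at different latitudes?
# Uses the common declination approximation; good to roughly a degree.
import math

def noon_solar_elevation_deg(latitude_deg: float, day_of_year: int) -> float:
    declination = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    return 90.0 - abs(latitude_deg - declination)

# Dec 21 is roughly day 355 of the year.
for city, lat in [("Seattle, WA", 47.6), ("San Francisco, CA", 37.8)]:
    print(city, round(noon_solar_elevation_deg(lat, 355), 1), "deg")
# Seattle peaks around ~19 deg even at midday in late December, so the sun sits
# near the cameras' horizon for much more of the driving day than it does further south.
```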
 
Your experience lines up with recent (worsening) issues I've experienced with EAP and sunlight:
Autopilot blinded, disengaged by sun on interstate

I think "edge case" sunlight has the potential to cause major safety issues with autonomous driving. I put edge case in quotes because for folks at different latitudes, the angle of the sun can cause this issue to happen repeatedly in a short time frame, depending on where you live.

FSD Beta uses a different code / approach for its predictions, so we can't extrapolate AP issues to the FSD build. (Can't find the tweet.)
 
I see; there are many videos where FSD Beta seemed to navigate direct afternoon sunlight just fine.

Small example at 0:44 here (there are a lot of them, but I don't remember the others):
Thanks for the link. Again, due to different latitudes, I don't think this video would be representative of Washington state in December, particularly in the afternoon. The sun would be different - different angle, different intensity, different reflections on the road.
 
Thanks for the link. Again, due to different latitudes, I don't think this video would be representative of Washington state in December, particularly in the afternoon. The sun would be different - different angle, different intensity, different reflections on the road.
The sun still sets in the west - even in the State of Washington!

The point is that driving into direct sunlight does not change much unless you are north of the Arctic Circle.
 
An interesting question is, how harmless or how dangerous is a disengagement situation?

I think the pure number of disengagements does not tell the whole story. I don't mind having to take over, if the car signals a problem in time or if the car behaves in a sufficiently defensive way. I do mind if I get into a dangerous situation that I cannot recognize in time.
 
In what real-life driving situations would AGI be needed? And how often (just your guesstimate)?

There are two different ways to tackle a difficult problem, and autonomous driving is certainly one of the most difficult of problems:

One is through the use of intelligence. That's how people do it. We're lousy at calculation but pretty good at intelligence (though when one looks at the rate of vaccine refusal one would be excused for doubting human intelligence ;) ). Intelligence involves actually understanding a problem. The other is by brute-force calculation. Humans play chess by employing intelligence. Computers play chess by brute-force calculation. Chess programs have gotten so good at brute-force calculation that they can beat almost all human players. (There are serious accusations that the Deep Blue team cheated! But the $4 chess app on my tablet can beat the great majority of humans.) But the programs are not intelligent. They're just very good at calculating.
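To illustrate "stupid very very fast": below is a toy example of brute-force game search, exhaustive minimax over tic-tac-toe. Real chess engines add pruning, opening books, and (nowadays) learned evaluation, but the flavor is the same: no understanding, just trying everything systematically.

```python
# Toy illustration of brute-force game search: exhaustive minimax over tic-tac-toe.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s perspective: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                  # draw
    best_score, best_move = -2, None
    opponent = "O" if player == "X" else "X"
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)
        board[m] = " "
        score = -score                  # opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

score, move = minimax(list(" " * 9), "X")
print(score, move)   # 0 0: perfect play from an empty board is a draw
```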

Autonomous driving does not require AGI. But without AGI the hardware and software resources needed for the necessary brute-force calculations are, in my opinion, greater than we have yet. This is why I am not optimistic about a true Level 5 driverless car in the next few years.

Obviously, the above is merely my opinion. I've been watching Tesla since before the Roadster went on sale (I got a ride in a prototype) and I am a total Tesla fan. The great thing about Elon is that he doesn't believe anything is impossible and he's willing to put his money behind it. The problem with Elon is that he thinks things can be accomplished in far less time than they actually can.

So I'm optimistic about a Tesla that can drive itself with nobody at the controls, but not as soon as some believe.

An interesting question is, how harmless or how dangerous is a disengagement situation?

I think the pure number of disengagements does not tell the whole story. I don't mind having to take over, if the car signals a problem in time or if the car behaves in a sufficiently defensive way. I do mind if I get into a dangerous situation that I cannot recognize in time.

A disengagement situation is not necessarily dangerous in a Level 2 vehicle with an alert driver who understands what Level 2 entails. But a disengagement tells you that this vehicle is not yet ready to go driverless.

We count disengagements, not to assess the safety of the car, but to assess how close the car is to being ready to drive itself without a person at the controls.
 
An interesting question is, how harmless or how dangerous is a disengagement situation?

I think the pure number of disengagements does not tell the whole story. I don't mind having to take over, if the car signals a problem in time or if the car behaves in a sufficiently defensive way. I do mind if I get into a dangerous situation that I cannot recognize in time.
That is an interesting question! I try to use EAP as intended - at least one hand on the wheel, paying attention to the road ahead - so while the disengagements I've experienced due to the sunlight have been startling (as they were unexpected), the situation wasn't immediately dangerous because I was using the product as prescribed.

However, I had a recent disengagement on an early-morning interstate commute when it was dry, clear, and dark. Although my hands were on the wheel, in the morning I'm more relaxed (hadn't had caffeine yet...) and less in defensive-driving mode, and I wasn't as quick to react in that moment. So, although the driving conditions were better, my reaction time was slower. Luckily it was early, so there were few cars on the road.
 
Perhaps the NN framework (like ResNet) is the same, but the type of predictions is different. For example, current AP isn't predicting the entire intersection / curbs / whatnot all at once. It's mostly just predicting lane lines.
Another thing I remember is that the rewrite involved remembering past inferences. The current system (still in use by Autopilot, from what I found) only labels things based on the current frame, completely ignoring previous frames. That would explain why it's so jumpy in Autopilot. The rewrite would factor in previous frames, which should make a huge difference, since even in direct sunlight not every frame is affected equally.

It makes zero sense for a car that was just there a split second ago to suddenly vanish into thin air, but with the previous system that can happen quite often, since it isn't taking previous frames into account.
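Here is a minimal sketch of why remembering previous frames helps, assuming nothing about Tesla's actual implementation: give each tracked object a confidence that decays over missed frames, instead of dropping to zero the instant one glare-corrupted frame misses it.

```python
# Minimal sketch of single-frame labeling vs. remembering previous frames.
# Nothing here reflects Tesla's implementation; the constants are invented.
# It just shows why a per-object confidence that decays over a few missed
# frames stops objects from "vanishing into thin air" for one bad frame.

DECAY = 0.6        # confidence retained per missed frame (invented value)
DROP_BELOW = 0.2   # forget the object once confidence falls this low

def update_tracks(tracks: dict, detections: dict) -> dict:
    """tracks / detections: {object_id: confidence}. Returns updated tracks."""
    updated = {}
    for obj_id, conf in detections.items():          # seen this frame: refresh
        updated[obj_id] = max(conf, tracks.get(obj_id, 0.0))
    for obj_id, conf in tracks.items():              # not seen: decay, don't delete yet
        if obj_id not in detections:
            decayed = conf * DECAY
            if decayed >= DROP_BELOW:
                updated[obj_id] = decayed
    return updated

# A car detected solidly, then missed for two glare-corrupted frames, then seen again:
frames = [{"car_7": 0.9}, {}, {}, {"car_7": 0.85}]
tracks = {}
for det in frames:
    tracks = update_tracks(tracks, det)
    print(tracks)
# Single-frame labeling would report "no car" on frames 2 and 3; with decay the
# car persists at about 0.54 and then 0.32 instead of vanishing.
```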
 