
What will happen within the next 6 1/2 weeks?

Which new FSD features will be released by end of year, and to whom?

  • None - on Jan 1, "later this year" will simply become "end of 2020"!
    Votes: 106 (55.5%)
  • One or more major features (stop lights and/or turns) to a small number of EAP HW 3.0 vehicles.
    Votes: 55 (28.8%)
  • One or more major features (stop lights and/or turns) to a small number of EAP HW 2.x/3.0 vehicles.
    Votes: 7 (3.7%)
  • One or more major features (stop lights and/or turns) to all HW 3.0 FSD owners!
    Votes: 8 (4.2%)
  • One or more major features (stop lights and/or turns) to all FSD owners!
    Votes: 15 (7.9%)

  Total voters: 191
When Tesla first announced the Model 3 and said it had all the hardware needed for FSD, I called bullshit. The 2018 Model 3, and probably the 2019 Model 3, just do not have the necessary hardware for driverless operation. At some point they're going to have to admit it and refund the money people paid for FSD. I really want Tesla to be the first company to sell a driverless car, because Tesla is the only company putting a proper drivetrain in cars. Other companies are putting stinky gas engines that destroy the environment into their cars. Or Nissan with its piddling little no-power electric motor. I would really hate to have to buy a GM car because they're the first with FSD, all because Elon is too stubborn to recognize that lidar is necessary.

Tesla needs to bite the bullet, acknowledge publicly that it screwed up, refund a shitload of FSD advance-payment money, and start putting a more complete suite of sensors into its cars. They need to tell people "We thought we could do it with these sensors, but we were wrong. Your car will never be true FSD. Here's your money back, and something really nice to thank you for your trust in us."

Never. Elon would never admit this. He would hire detectives to dig up dirt on anyone demanding a refund before that happens. :eek:
 
  • Funny
Reactions: DrDabbles
Well, I'm growing optimistic again. 40.2 is a huge leap in AP. The traffic sensing is a step-change improvement: I used auto lane changes to aggressively move over four lanes in busy city traffic while others were doing the same. Total chaos, and AP seemed good with that. I think we can be pretty confident that traffic light and stop sign sensing is in the near-term pipeline, so that really only leaves left/right turns at intersections for 'feature complete'.

What about detecting cross traffic at a stop sign? Besides cars, the Autopilot cameras will need to detect and predict the paths of bikes and pedestrians, and their behavior is much more unpredictable than that of typical cars. I know some intersections here have blind corners for bike and pedestrian detection due to landscaping, so you have to be extra careful and make sure you look at the right place. And then there are intersections where you only have stop signs in one direction and not the other. How does Autopilot handle that? And what about stop signs that are only painted on the ground, with no post or signage? Would the car see those and react?

There are so many questions; I am not sure the current suite of cameras and sensors is enough to operate safely on the street.
 
  • Informative
Reactions: pilotSteve
What about detecting cross traffic at a stop sign? Besides cars, the Autopilot cameras will need to detect and predict the paths of bikes and pedestrians, and their behavior is much more unpredictable than that of typical cars. I know some intersections here have blind corners for bike and pedestrian detection due to landscaping, so you have to be extra careful and make sure you look at the right place. And then there are intersections where you only have stop signs in one direction and not the other. How does Autopilot handle that? And what about stop signs that are only painted on the ground, with no post or signage? Would the car see those and react?

There are so many questions; I am not sure the current suite of cameras and sensors is enough to operate safely on the street.

The above is spot-on!

Predicting the paths of cyclists and pedestrians is one of those things that will be really hard to do. Some cyclists ignore stop signs, but we still don't want to run them over. Stop signs obscured by landscaping are sometimes hard for humans to spot, but they're still legally binding and the car will need to be at least as good as a human at spotting them. Considering how bad computers are at recognizing patterns, this is another tough nut to crack.

On the highway there's an assumption, which human drivers reasonably make, that a car will continue at its current speed and in its current lane. The driver or car only needs to detect that, and to spot when a car begins to leave its lane or signals its intent to do so. I wonder whether AP can detect turn signals on other cars yet, but it seems to be pretty good at the rest of it. Add pedestrians and cyclists into the mix and who knows!
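Since the point above leans on the constant-speed, constant-lane assumption, here is a minimal sketch of what that assumption looks like in code (all names and thresholds are hypothetical, not anything from Tesla's actual stack):

```python
# Toy constant-velocity model: assume a tracked car keeps its current speed
# and lane, and flag it only when its lateral drift would cross the lane edge.

def predict_position(x, y, vx, vy, dt):
    """Where we expect the car to be in dt seconds under constant velocity."""
    return x + vx * dt, y + vy * dt

def leaving_lane(lateral_offset, lateral_velocity, lane_half_width, horizon=1.0):
    """True if the projected lateral position crosses the lane edge within horizon seconds."""
    projected = abs(lateral_offset + lateral_velocity * horizon)
    return projected > lane_half_width

# A car 0.5 m left of lane center, drifting left at 0.6 m/s, in a 3.6 m lane:
# projected offset is 1.1 m, still inside the 1.8 m half-width.
print(leaving_lane(0.5, 0.6, 1.8))  # False
```

This kind of model is tractable precisely because highway behavior is so constrained; pedestrians and cyclists break the constant-velocity assumption constantly.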

Bottom line: the driver is still responsible. When I approach a cyclist close to the lane, I disengage AP because I don't trust it. I give the cyclist extra room after checking the adjacent lane. If there's a car on the other side of me, I slow down. AP doesn't have to do any of this because it's Level 2 and I am responsible. But FSD at Level 3 or above will have to. We might get stop sign and stoplight recognition, and even city NoA, but they are going to be Level 2 for a very long time, given the difficulty of developing software that can deal with real-life driving beyond very restricted roads.
 
  • Like
Reactions: pilotSteve
What about detecting cross traffic at a stop sign? Besides cars, the Autopilot cameras will need to detect and predict the paths of bikes and pedestrians, and their behavior is much more unpredictable than that of typical cars. I know some intersections here have blind corners for bike and pedestrian detection due to landscaping, so you have to be extra careful and make sure you look at the right place. And then there are intersections where you only have stop signs in one direction and not the other. How does Autopilot handle that? And what about stop signs that are only painted on the ground, with no post or signage? Would the car see those and react?

There are so many questions; I am not sure the current suite of cameras and sensors is enough to operate safely on the street.
Elon just fired you. :eek:
 
And then there are intersections where you only have stop signs in one direction and not the other. How does Autopilot handle that? And what about stop signs that are only painted on the ground, with no post or signage? Would the car see those and react?
Yes, they will have to train for all of those. In fact, I'm quite sure they have a lot more weird edge cases than these.

After all, Karpathy talked about blue traffic lights.

There are so many questions; I am not sure the current suite of cameras and sensors is enough to operate safely on the street.
This is speculation at best, fear mongering more likely. If 2 eyes can manage, so can all those cameras.

Predicting the paths of cyclists and pedestrians is one of those things that will be really hard to do.
We have talked about this a lot. There are three things that FSD needs to do:
- Figure out how to drive in a static environment.
- Predict what others will do (what you are talking about).
- Predict what others will do, given what the car is going to do <- the most difficult part.
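To make the gap between the second and third items concrete, here is a toy sketch (hand-written rules, purely illustrative; real systems would have to learn these interactions from data):

```python
# Item 2: open-loop prediction ignores what our own car plans to do.
def predict_open_loop(gap_m):
    return "keeps_speed"

# Item 3: the other driver's behavior depends on our plan, which in turn
# should depend on their predicted behavior -- a coupled problem, not a lookup.
def predict_given_ego(gap_m, ego_action):
    if ego_action == "merge" and gap_m < 20:
        return "brakes_or_blocks"  # they react to us
    return "keeps_speed"

print(predict_open_loop(15))           # keeps_speed
print(predict_given_ego(15, "merge"))  # brakes_or_blocks
```

Open-loop prediction says the car in the gap keeps its speed; conditioned prediction says our merge changes their behavior, which should change whether we merge at all. That circularity is what makes the third item the hardest.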
 
  • Funny
Reactions: AlanSubie4Life
Bottom line, the driver is still responsible. When I approach a cyclist close to the lane I disengage AP because I don't trust it. I give the cyclist extra room after checking the adjacent lane. If there's a car on the other side of me I slow down. AP doesn't have to do any of this because it's Level 2 and I am responsible. But FSD at Level 3 or above will have to do that. We might get stop sign and stoplight recognition and even city NoA, but they are going to be Level 2 for a very long time due to the difficulty of developing software that can deal with real-life driving beyond very restricted roads.

Fully agree, the driver needs to be responsible. The concern here is that someone may have a false sense of the car's capability and stop paying proper attention after the car demonstrates good capabilities. The consequences on the street are way higher than on the freeway!
 
Your "low" goal is not as easy as you think. To make a car drive between two points you choose, the car needs to be able to drive in all situations without human intervention, in all conditions. Basically what I'm saying here is Level 4/5 autonomy may not actually be a solvable problem. We simply do not know, because we do not know how complex a system would need to be to solve this massive problem set.


Also remember, not all humans can drive in all conditions. We pull over for limited visibility, or slow way down. If conditions are terrible, we sometimes turn back. Furthermore, there are a lot more people who try to drive when they probably should have pulled over or turned around.
 
  • Like
Reactions: S4WRXTTCS
We have talked about this a lot. There are three things that FSD needs to do:
- Figure out how to drive in a static environment.
- Predict what others will do (what you are talking about).
- Predict what others will do, given what the car is going to do <- the most difficult part.


I think Tesla already has a leg up on the third one. They just make a few erratic aborted lane change motions that scare the other drivers into staying the heck out of the way, and then everything else gets easier. :D
 
  • Funny
Reactions: daniel
Your "low" goal is not as easy as you think. To make a car drive between two points you choose, the car needs to be able to drive in all situations without human intervention, in all conditions. Basically what I'm saying here is Level 4/5 autonomy may not actually be a solvable problem. We simply do not know, because we do not know how complex a system would need to be to solve this massive problem set.

Full autonomy is certainly a difficult problem, but the idea that full autonomy is unsolvable is pretty silly IMO. Heck, a lot of engineering problems seem unsolvable at first, but eventually better technology comes around that makes the problem easier. So never say never. Also remember that just because a problem is extremely difficult does not mean it is unsolvable.

And just look at Cruise or Waymo. Cruise's self-driving cars can navigate busy city streets in San Francisco with unpredictable cyclists, pedestrians, and cars. They can navigate around double-parked cars, yield for cross traffic, read the hand gestures of police officers, pull over for emergency vehicles, make unprotected left turns through busy intersections, and much more. In other words, they have already solved a lot of problems that we thought were too difficult a few years ago.

Here is a quick video of Cruise cars doing unprotected left turns in busy San Francisco:


So I don't buy the argument that city streets are just too complex and unpredictable for self-driving cars to ever be able to handle. Clearly, we have self-driving cars now that can handle them pretty well, not perfectly yet, but getting better every day. So it is not an insurmountable problem.

Again, I am not saying that full autonomy is completely solved yet. We still have work to do. But clearly if we can solve those problems, I think we can eventually solve the remaining problems too.
 
Also remember, not all humans can drive in all conditions. We pull over for limited visibility, or slow way down. If conditions are terrible, we sometimes turn back. Furthermore, there are a lot more people who try to drive when they probably should have pulled over or turned around.

This isn't an effective argument. The discussion is this: if Level 5 autonomy is ever a thing, then legislation outlawing human drivers will likely follow. If that is the case, then we will rely on machines to drive themselves in all conditions that exist, just as we rely on (some number of) humans to do so today. Perhaps there are people not properly trained, but that's not what's being discussed here. If there is a condition on earth that requires driving for any reason, then a true Level 5 system must be able to handle it.

Otherwise, you get an automated ambulance that lets you die in the back because it's too foggy. The whole purpose of blended sensor suites is to give autonomous systems superhuman qualities.

Full autonomy is certainly a difficult problem, but the idea that full autonomy is unsolvable is pretty silly IMO.

You're entitled to that opinion. I believe that people not realizing the complexity of the problem and just hand-waving it away by saying some magical future technology will make everything automatically better is silly.

Heck, a lot of engineering problems seem unsolvable at first, but eventually better technology comes around that makes the problem easier.

See, now you're making claims in my area of expertise. Can you name a single computer engineering problem that was considered impossible until the simple march of time produced the solution?

So never say never. Also remember that just because a problem is extremely difficult does not mean it is unsolvable.

I haven't said never, but I'm about as close to saying never as one could be, given what I see as the present state of the art. We don't need to have a discussion about complexity versus whether a problem is solvable; that's not really the crux of what I'm driving at here. Most of driving requires things like intuition and reasoning. Both are things that computers do not do, and likely will never be able to do. And I'm beyond doubtful that simply rubbing some neural networks on the problem is the solution. That still leaves 90% of the problem to be solved, since as of right now NNs are pretty much only being used for the sensor suites. Just slapping an LSTM on a data stream doesn't tell a computer "hey, it's 3:15pm in North America on a school day, so there's a high chance of a kid popping out randomly from the side of the road." Computers will never get that eerie sense that humans do, which tells us to be on the lookout for something odd.

The only benefit I see computers offering right now, and possibly forever into the future, is that they don't fall asleep or get distracted. And they react faster in most situations. Or at least they can react faster.

Just remember: the first time a robot car runs over a blonde white girl in a rich neighborhood, this stuff is going to get regulated and clamped down on big time. And given what I see from all of the players publicly making waves, and those keeping much quieter, that day is guaranteed to come. I really hope I'm wrong, and that the industry does a better job of policing itself, but we already saw the BS that Uber pulled last year when they killed that lady in Arizona. They're not the only ones out there making stupid choices.
 
You're entitled to that opinion. I believe that people not realizing the complexity of the problem and just hand-waving it away by saying some magical future technology will make everything automatically better is silly.

Respectfully, I feel like you are essentially doing the same thing in the other direction. You are observing the immense difficulty of autonomous driving, throwing up your hands, and going "It's just too hard, it must be unsolvable!"

See, now you're making claims in my area of expertise. Can you name a single computer engineering problem that was considered impossible until the simple march of time produced the solution?

Well, did computer engineers in the 1980s ever conceive that we would have 1TB hard drives no bigger than a small book? No, but they are commonplace now. The fact is that most of what our smartphones and tablets do now would have been impossible on a 1980s computer.

But if you want something more concrete, how about this? Google claims that their quantum computer aced an impossible test:
Google's Quantum Computer Just Aced an 'Impossible' Test | Live Science

Most of driving requires things like intuition and reasoning. Both are things that computers do not do, and likely will never be able to do. And I'm beyond doubtful that simply rubbing some neural networks on the problem is the solution. That still leaves 90% of the problem to be solved, since as of right now NNs are pretty much only being used for the sensor suites. Just slapping an LSTM on a data stream doesn't tell a computer "hey, it's 3:15pm in North America on a school day, so there's a high chance of a kid popping out randomly from the side of the road." Computers will never get that eerie sense that humans do, which tells us to be on the lookout for something odd.

No, computers will never have human intuition. But driving does not require intuition. You are defining driving as something computers can't do in order to justify the claim that autonomous driving is unsolvable. But you don't need computers to have intuition in order to solve autonomous driving. Driving requires competency and experience. You can program competency and you can use past experience to improve the self-driving going forward.

Let's use your example. The self-driving car does not need intuition in your example. You can program the car to drive a bit slower in residential areas or school zones on school days. And if the self-driving car has reliable and adequate sensors, including full 360-degree lidar, camera, and radar coverage with no blind spots, then it does not really matter if the car lacks that eerie sense to watch out, because the car will detect everything in its vicinity anyway, and computers, as you say, are quicker to react. The car just needs to slow down when it sees a school zone sign on a school day, and if the sensors detect a kid that pops out from behind something, the car will be able to brake in milliseconds thanks to the fast computer chip.
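As a rough illustration of the "program the caution in" idea (a minimal sketch with made-up thresholds and zone names, not a real control stack):

```python
from datetime import datetime

# Context rules stand in for "intuition": slow down where kids are likely.
def target_speed_mph(posted_mph, zone, now):
    school_hours = now.weekday() < 5 and 7 <= now.hour <= 16
    if zone == "school" and school_hours:
        return min(posted_mph, 15)
    if zone == "residential":
        return min(posted_mph, 25)
    return posted_mph

# Reactive layer: sensor detections override the speed rule immediately.
def control(obstacle_detected, posted_mph, zone, now):
    if obstacle_detected:
        return 0  # brake hard
    return target_speed_mph(posted_mph, zone, now)

monday_3pm = datetime(2019, 11, 18, 15, 15)
print(control(False, 35, "school", monday_3pm))  # 15
print(control(True, 35, "school", monday_3pm))   # 0
```

Whether rules like these can ever be enumerated exhaustively is, of course, exactly the point of disagreement in this thread.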
 
So what realistically is going to happen? Well, my crystal ball is no better than anyone else's, but anyone who expects the car to be able to drive you downtown or to the mall is, I think, going to be disappointed, at least for a year or more.

What we might see, and what seems more realistic, is enough added smarts that the car can handle rural-type environments, where it can negotiate a (quiet) stop-sign intersection, negotiate a roundabout, and get through a simple single-lane traffic signal.

And I for one don't think that will be a bad start. Is it FSD? Not really. Is it what was promised? Perhaps.
 
Let's use your example. The self-driving car does not need intuition in your example. You can program the car to drive a bit slower in residential areas or school zones on school days. And if the self-driving car has reliable and adequate sensors, including full 360-degree lidar, camera, and radar coverage with no blind spots, then it does not really matter if the car lacks that eerie sense to watch out, because the car will detect everything in its vicinity anyway, and computers, as you say, are quicker to react. The car just needs to slow down when it sees a school zone sign on a school day, and if the sensors detect a kid that pops out from behind something, the car will be able to brake in milliseconds thanks to the fast computer chip.

Actually, this does point to an interesting scenario that I don't think anyone has discussed, but that does happen once in a while in real life. There have been cases of people getting hurt or shot because they drove into the wrong neighborhood or the wrong situation. In some sense, that's intuition. Most humans can sense the change in surroundings and identify a dangerous situation, whether it's by the look of the neighborhood, the way people behave, or other visual and non-visual cues. How would an autonomous car do this? And what happens if a fully autonomous car drives its occupants into a dangerous neighborhood?
 
Well, did computer engineers in the 1980s ever conceive that we would have 1TB hard drives no bigger than a small book?

This is factually incorrect. We had already seen literal decades of electronics shrinking, so it wasn't only conceived of, it was well predicted. And this doesn't answer the question I asked: what was thought impossible in computer science, and then proved not to be by time alone?

The fact is that most of what our smartphones and tablets do now would have been impossible on a 1980s computer.

It wouldn't have been possible in the form factor they have today, but all of it was possible. You're talking about the miniaturization of electronics, and I'm talking about the literal invention of an entire subfield of computer science.

But driving does not require intuition

It does. Intuition is how you know that the car you saw swerving a little bit is probably being driven by a distracted driver, based on even subtle signs like reacting to the environment a little too slowly.

You can program competency and you can use past experience to improve the self-driving going forward.

You should go look up a bunch of the talks MobilEye has given over the years. Programming competency is in effect impossible. Every area of the world has different norms and different rules. Attempting to catalog and codify all of this is in effect impossible, because literally none of it is written down.

As for past experience, keeping long-term and very-long-term knowledge is going to require an amount of memory, or a network size, well beyond where we are now. Which is again why I'm pretty confident in saying neural nets aren't going to get us there.

You can program the car to drive a bit slower in residential areas or school zones on school days.

Now catalog every single situation you could ever come upon that requires simulating human intuition, and represent that in code. The odds of you even enumerating them all are zero, which is why humans use intuition for so many things. It's also why MobilEye has put so much effort into finding ways to appear to act human-like in different regions, rather than actually trying to represent it all in code. And it's why they've been at this for over 20 years and still haven't solved the problem.
 
What was thought impossible in computer science, and then proved not to be by time alone?
Who says FSD is impossible?

In the latest AI podcast with Noam Chomsky, there is an interesting quote. He says we don't know what the limitations of deep learning are (because these systems certainly don't learn the way the human brain learns).

So, I don't think any of us can accurately predict whether FSD is solvable in the next 5 years.
 
... If 2 eyes can manage, so can all those cameras. ...

But it's not the cameras (or the eyes) that analyze the surroundings and make decisions. In a human, it's the brain. In the putative driverless car, it's the computer. And computers, while blindingly fast, are also unbelievably stupid. What a human can do with visual information alone, a computer (in the near term) will need much more input to do. Radar, lidar, and sonar can complement visual imagery to give the (stupid) computer more to work with.
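A toy illustration of why extra modalities help the "stupid" computer (hypothetical numbers, and it assumes the sensors fail independently, which real fusion stacks cannot take for granted):

```python
# If each sensor independently detects an obstacle with probability p,
# the chance that at least one catches it is 1 minus the product of misses.
def fused_confidence(per_sensor_probs):
    miss = 1.0
    for p in per_sensor_probs:
        miss *= 1.0 - p
    return 1.0 - miss

print(round(fused_confidence([0.80]), 3))              # camera alone: 0.8
print(round(fused_confidence([0.80, 0.90, 0.95]), 3))  # + radar + lidar: 0.999
```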

I absolutely believe the world will have driverless cars one day, if we don't blow ourselves up first, suffocate in our own pollution, or starve to death because anthropogenic climate change has rendered our agricultural regions sterile. But it will be a decade before the human driver can take her/his hands off the wheel, i.e., before we move beyond Level 2. And two decades before an ordinary consumer can buy a car that does not require a driver in it for more than simple, very-low-speed summon-type features.

Caveat: 5G cell service may open the door to remote chauffeurs: the driver may be in India. But there will still be a human driver.
 
Who says FSD is impossible?

In the latest AI podcast with Noam Chomsky, there is an interesting quote. He says we don't know what the limitations of deep learning are (because these systems certainly don't learn the way the human brain learns).

There are a couple of things wrong here. First, I am suggesting that true, full Level 5 autonomy may never be possible, especially with neural networks.

Second, Noam Chomsky is an interesting character, but he holds zero sway in this area of study. Simply put, this isn't his area of expertise.

Finally, and by far most importantly, these things aren't learning. Not in the biological sense, not in the classical sense, not in any sense. These networks have no intelligence either, at all or in any sense. What they are is multi-layered, multi-variable probabilistic calculations. It's not that they don't learn the way a human brain does; it's that there's no learning involved at all. A training process twiddles weights and biases until the network outputs something desirable. That's not learning.
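For what it's worth, the "twiddling" described above really is just arithmetic. A one-weight example of gradient descent (illustrative only, not any production training loop):

```python
# Fit y = w*x + b by nudging w and b to reduce squared error.
w, b = 0.0, 0.0
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # targets follow y = 2x + 1
lr = 0.05  # learning rate

for _ in range(2000):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x  # gradient of squared error w.r.t. w
        b -= lr * err      # gradient of squared error w.r.t. b

print(round(w, 2), round(b, 2))  # converges to 2.0 and 1.0
```

Nothing in that loop resembles understanding; it is repeated multiply-and-subtract until the outputs match the targets.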

People really need to stop attributing agency to these machines, because there is none. It's a computer algorithm, and that's it. It's the same as it was in the 1950s.


So, I don't think any of us can accurately predict whether FSD is solvable in the next 5 years.

Accurately? Probably not. But the odds of solving Level 5 driving in 5 years are as close to zero as makes no difference. So maybe someone gets lucky, or maybe someone throws an unbelievable number of programmers at the problem, but those are about the only ways this happens in the next 5 years. Basically, it's the monkeys-with-typewriters odds of producing Shakespeare: an infinitesimally probable event.

But it's not the cameras (or the eyes) that analyze the surroundings and make decisions. In a human, it's the brain. In the putative driverless car, it's the computer. And computers, while blindingly fast, are also unbelievably stupid. What a human can do with visual information alone, a computer (in the near term) will need much more input to do. Radar, lidar, and sonar can complement visual imagery to give the (stupid) computer more to work with.

A million times, THIS. These things aren't magic. It may appear like magic to the uninformed, but it's just software running on a (specialized) processor doing matrix math: that part of algebra class most people slept through.