
Things even FSD won't do


BridgeMojo

  • Avoid a pothole
  • Shift a few inches to not ride on a road seam
  • Let the fellow signaling for a lane change in
  • Invite someone to go next
  • Avoid a tire tread in the lane
  • Give a little more space on the side with the oversized flatbed trailer
  • Avoid driving directly between two other vehicles (Remember "Leave yourself an out"?)
What else? Did I get any of these wrong?

Mojo
 
Making a "rettungsgasse"
Pulling to the side to let motorcycles pass.
Slowing down if being overtaken (2 way roads)
 
If we are talking current status and if current AP is any guidance, there is no word that it will reliably avoid driving under a truck trailer or react reliably to head-on traffic. Basically, all sorts of unexpected obstacles are a potential issue, because the camera has to reliably recognize them all in order to react; it is not like lidar, which can reliably say that something is there...

Making a "rettungsgasse"
Pulling to the side to let motorcycles pass.
Slowing down if being overtaken (2 way roads)

Interestingly, Audi’s Traffic-jam Pilot (the Level 3 system in testing) makes a Rettungsgasse.
 
Do you mean Tesla’s FSD product, or the general concept of self-driving cars?

There is a lot that Tesla’s FSD product doesn’t do now. It is quite limited today, and it will get less limited over time.

In principle, I don’t see why self-driving cars won’t be able to do complex negotiations with other vehicles, like the ones you mentioned. This might require replacing the hand-coded approach to driving policy and path planning (i.e. the actions that the car takes) — which has been the norm for the past 15 years — with an imitation learning and/or reinforcement learning approach. It might take years to get it right. Maybe many years. It might require innovations in the imitation learning and/or reinforcement learning state of the art. But, in principle, I don’t see why these will always be things self-driving cars can’t do — even assuming no major, fundamental breakthroughs in artificial intelligence.
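To make the imitation-learning idea concrete, here is a minimal behavior-cloning sketch in Python (PyTorch assumed; the `DrivingPolicy` name, the feature size, the two-output action, and the random stand-in data are all invented for illustration, not anyone's actual system). The point is only the shape of the technique: a network learns a driving policy by copying logged human actions instead of following hand-coded rules.

```python
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    def __init__(self, n_features: int = 256, n_actions: int = 2):
        super().__init__()
        # n_actions = 2 stands in for steering angle and acceleration
        self.net = nn.Sequential(
            nn.Linear(n_features, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, perception_state: torch.Tensor) -> torch.Tensor:
        return self.net(perception_state)

policy = DrivingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Random tensors stand in for a fleet-collected dataset of
# (what the car saw, what the human driver did) pairs.
states = torch.randn(1024, 256)
actions = torch.randn(1024, 2)

for epoch in range(10):
    pred = policy(states)
    loss = loss_fn(pred, actions)   # imitation loss: match the human's action
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A reinforcement-learning approach would replace the imitation loss with a reward signal, but the overall structure (a learned policy rather than hand-written rules) is the same.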

This is just a demo and might have been cherry-picked out of a hundred failed attempts, but it’s still interesting:

ytCropper | Prof. Amnon Shashua at 2018 Intel Capital Global Summit

At least 1 successful attempt gives us more information than 0 successful attempts.

People disagree on whether what Tesla’s FSD product can do will ultimately converge on what self-driving cars in general can do. I think they will converge.

Why do people disagree? One big reason is lidar. But as best I can tell, cameras can do everything lidar can do, and lidar can’t do everything cameras can do. Mobileye and Anthony Levandowski (once an important engineer at Waymo) agree lidar isn’t necessary.

Another big reason is that people think that while perception can only be solved by neural networks, action can be solved by hand coding. Whoever has the best programmers, or whoever has the codebase that has been worked on the longest — that’s who is in the lead. I’m skeptical that hand coding can solve complex multi-agent interactions (like negotiating with other cars) or similarly hard real world problems. I’m not aware of any example where it has in the past. It also seems a lot like perception — humans see, but we can’t write down a computer program that makes a computer see like we do. It’s too subtle and complicated and on the level of implicit knowledge. Humans drive, but that doesn’t mean we can write down how we do it. (Can you describe exactly how to ride a bike, so that a robot could follow your instructions?)

A third big reason is that people think whoever makes the best neural networks will have the best self-driving car. But this ignores the importance of training data. Say we put on a contest between some grad students and the best machine learning people in the world. They are competing to correctly classify images from the ImageNet test set. The grad students get 100% of the ImageNet training dataset. The best machine learning people only get 1% of the images from each category. The grad students would win, hands down.
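For what it’s worth, the thought experiment is easy to state in code. This sketch only does the bookkeeping of the 1%-per-class subset; the labels are synthetic stand-ins for ImageNet, `subsample_per_class` is a made-up helper, and no actual training happens here.

```python
import random
from collections import defaultdict

def subsample_per_class(labels, fraction=0.01, seed=0):
    """Return indices that keep `fraction` of the examples in each class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    kept = []
    for indices in by_class.values():
        rng.shuffle(indices)
        kept.extend(indices[: max(1, int(len(indices) * fraction))])
    return sorted(kept)

# Roughly ImageNet-shaped labels: 1000 classes, ~1300 images each.
labels = [c for c in range(1000) for _ in range(1300)]
small = subsample_per_class(labels, fraction=0.01)
print(f"full training set: {len(labels)}, 1%-per-class subset: {len(small)}")
```

Train the same architecture on both index sets and the team with the full set wins; the argument is about data, not modeling talent.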

HW2 Teslas drive over 12 million miles per day. Waymos drive about 12 million miles per year. When it comes to training data that doesn’t need to be labelled by humans, and especially data that (unlike raw sensor data) is easy to store and transmit a lot of, Tesla has a 100x advantage. What the exact consequences will be for neural network performance, no one can say— this has never been tried before. Machine learning isn’t yet a design science where we can predict the performance of a system based on its components. The only strong theoretical precept we have is that more data is better. Sometimes it’s only marginally better, and sometimes it’s radically better.

Tesla also has the ability to recruit machine learning people who aren’t just grad students. It’s one of the most desirable companies in the world for people who work in tech. Although he didn’t stay long, the fact that Tesla could pull Chris Lattner from Apple shows its allure. So at Tesla it’s not just grad students with a 100x data advantage vs. elite machine learning people. It’s elite machine learning people with a 100x data advantage vs. other elite machine learning people.

A lot of the world’s best machine learning people publish their research in academic papers, and sometimes even put the code on GitHub. Not just people in academia, but also people at OpenAI, DeepMind, Google, and Facebook. A lot of ideas are shared freely. Tesla doesn’t need all of the best people working under its roof to use some of their ideas.
 
Ah, the misinformed. When FSD actually rolls out, it's going to blow everyone's mind.

Will it be perfect? Not at first, but it will still be 1000x better than current EAP. Mark my words.

You realize some people on this forum have root access to their Tesla and can even read some of the code metadata? This forum is the least misinformed.
 
Apparently @MikeS1 claims to have no insider knowledge, so his view seems to be a leap back in time to the common belief system of Tesla owners circa January 23rd, 2017.

The "FSD codebase is different" argument is back, people. Too bad it ignores so much of what we have come to know about Tesla and how it operates.

That's because you make assumptions that are not logical in any sense.
It's absolutely clear there is zero FSD logic in EAP. That's proven by your lack of faith in the autonomy, because you are drawing an illogical comparison to the functionality of FSD.

I don't have insider knowledge, but I do have a background in electrical/computer engineering. And the most logical answer is that you do not see any of the code or machine learning for FSD.
 
What the car sees and how it reacts are completely different.

I can't believe everyone still thinks EAP in the current car is basically FSD. That's like showing a 5-year-old how to stay inside the lines in his coloring book, then putting him in a car and saying, "Here, drive in the city."
 
If we are talking current status and if current AP is any guidance

Why would you judge the capabilities of a yet-to-be-released product like Full Self-Driving by a product that is admittedly not FSD? There is a reason it's only called Autopilot at this point in its development. Irrational at best, FUD at worst.

it is not like lidar, which can reliably say that something is there...

That's factually incorrect. Lidar doesn't work in rain or snow, while cameras can be trained to recognize things in any rain or snow that a human could see through. So it's FUD to say that lidar is reliable while cameras cannot be.
 
It simply won't work in the Seattle area because at stop lights we play the "You go, no you go" game.

This game usually takes a bit of time to complete.

Which is just proof that machines will have some of the same problems that humans have. I'm not sure why that would disqualify it from "working" since it's already a problem with humans. By that standard, humans don't "work" as drivers because they have a problem negotiating the right-of-way. Machines should actually be better because they don't do stupid emotional things like being kind, they will judge everything by the rules of right of way. Machines will actually cause humans to be better drivers as humans get "trained" who has the right of way at intersections. Because it's pretty obvious many are confused about how it works in certain situations. At many tasks, machines will be 1000 times better (more reliable) than humans.
 
Drive at night in the rain without lane divider bumps or reflectors (particularly when road seams are more visible and not coincident with the lane markings). Much of the northern part of the country has no physical lane dividers--just paint on blacktop.
 
Yeah, I'm interested to see how well FSD works without lane markers, as claimed. No subdivision has them, and in the winter they are often very hard to see.

I'm sure they have a lot of real data from cars driving on the road, but I need to see this beast in real time. Wish I could get on the early access program for HW3 and any updates.
 
Which is just proof that machines will have some of the same problems that humans have. I'm not sure why that would disqualify it from "working" since it's already a problem with humans. By that standard, humans don't "work" as drivers because they have a problem negotiating the right-of-way. Machines should actually be better because they don't do stupid emotional things like being kind, they will judge everything by the rules of right of way. Machines will actually cause humans to be better drivers as humans get "trained" who has the right of way at intersections. Because it's pretty obvious many are confused about how it works in certain situations. At many tasks, machines will be 1000 times better (more reliable) than humans.

I'm a huge advocate of vehicle-to-vehicle communication and infrastructure-to-vehicle communication. Quite frankly, I don't think we should even bother trying to automate driving without utilizing the benefits that automation can bring.

We should get to a point where autonomously driven cars don't even need to stop at a stop sign. On roads approved for autonomous driving, stop signs should be replaced with stop lights that support infrastructure-to-vehicle communication, so autonomous cars can schedule crossings with other autonomous cars. They deal with non-autonomous cars by simply telling the light to turn red for them.
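As a rough sketch of what that scheduling could look like, here is a hypothetical intersection manager in Python. Everything in it (the message fields, the `IntersectionScheduler` name, the first-come-first-served policy) is invented for illustration; real V2I protocols are far more involved.

```python
from dataclasses import dataclass

@dataclass
class CrossingRequest:
    vehicle_id: str
    arrival_s: float    # estimated arrival time at the intersection (seconds)
    crossing_s: float   # time needed to clear the intersection (seconds)

class IntersectionScheduler:
    """Grants non-overlapping crossing slots, first come, first served."""
    def __init__(self):
        self.next_free_s = 0.0

    def request_slot(self, req: CrossingRequest) -> float:
        # Earliest slot at or after the car's own arrival time.
        start = max(req.arrival_s, self.next_free_s)
        self.next_free_s = start + req.crossing_s
        return start

scheduler = IntersectionScheduler()
for req in (CrossingRequest("car_a", 10.0, 2.0),
            CrossingRequest("car_b", 10.5, 2.0),
            CrossingRequest("car_c", 11.0, 2.0)):
    print(f"{req.vehicle_id} may enter at t={scheduler.request_slot(req):.1f}s")
```

A real system would also need conflicting-path checks per lane, time-outs, and a fallback when a car misses its slot, which is exactly where the "tell the light to turn red" idea for unconnected traffic comes in.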

It should be set up so autonomous cars have priority, but be so quick/advanced that the priority goes unnoticed by manually driven cars.

To improve safety, all vehicles/pedestrians/bicyclists need to have passive transponders.

As a bicyclist, I wouldn't be opposed to wearing a passive transponder like an RFID tag on roads whitelisted for autonomous driving, so that not only would an autonomous car have an additional way to know where I was and track me, but a manually driven car would as well, if it had the SW/HW to detect me.

The problem with our current approach to autonomous driving is that all the problems with driving are still going to exist, and it also won't work because humans will bully the crap out of autonomous cars. It's already happening to Waymo vehicles. Humans adapt, and they quickly assert dominance over other things.

Autonomous vehicles are an infrastructure issue. The infrastructure needs to be set up so autonomous vehicles have priority over humans.

We also need established rules for humans getting in the way of autonomous machines. The machines still need to slow down to not kill anyone, but a love tap wouldn't be a bad idea. Just to let the human know who's boss.
 
Yeah, I'm interested to see how well FSD works without lane markers, as claimed. No subdivision has them, and in the winter they are often very hard to see.

I'm sure they have a lot of real data from cars driving on the road, but I need to see this beast in real time. Wish I could get on the early access program for HW3 and any updates.

I have a feeling that those of us in northern climates will see long stretches of time when autonomous features are unavailable.