Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Autosteer on motorway = great. Autosteer on A road = dangerous?

Excellent comments, but I think you’re all missing the point. Whereas the ability to respond to perceived data will continue to improve, it bothers me that Musk appears to be directing zero attention to solving the biggest problem, which is to be able to develop a realistic model of what other road users are thinking and intending — aka empathy. Until that is addressed autonomous cars will continue to behave obtusely.

While I am as enthusiastic about AI and machine learning as the next person, I am troubled by the thought that perhaps the Level 5 driving problem is computationally undecidable.
 
  • Like
Reactions: gangzoom and pdk42
A great example is how the current autosteer completely fails to deal with the markings on UK bus stops - the car thinks that the lines are the edge of a lane and then goes on to steer the car into oncoming traffic.

Actually that is a great example of the car handling a scenario that it has probably never been taught, since the scenario you are describing is outside the AP approved use case. I'm not sure that most 8 year olds know anything about bus stops other than that's where buses stop.

Examples like this are like saying that an airline pilot should be able to fly a glider because every 8 year old knows what a joystick is.

Constantly dragging up "AP doesn't work because it can't do xyz", then extrapolating that to "FSD will never work because AP can't do xyz", is pretty pointless. There seems to be some expectation that AP should do xyz now, even though doing xyz is not, and never will be, part of AP's feature set; it will only be part of FSD's future feature set.
 
that Musk appears to be directing zero attention to solving the biggest problem, which is to be able to develop a realistic model of what other road users are thinking and intending — aka empathy

Whilst not necessarily a conscious input, isn't the cumulative effect of all the empathy being expressed in the learned scenarios actually embedded in the results?
 
Actually that is a great example of the car handling a scenario that it has probably never been taught, since the scenario you are describing is outside the AP approved use case. I'm not sure that most 8 year olds know anything about bus stops other than that's where buses stop.
The bus stop example is just a current example. I'm sure they could easily add it. My point is that there will always be some things that it hasn't been taught - and that because it has no "understanding/intelligence", it cannot adapt and learn based on previous experience/common sense.
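To illustrate that closed-world problem in toy form (labels, scores and thresholds here are entirely invented, and have nothing to do with Tesla's actual stack): a trained classifier can only ever answer with a class it already knows, so novel paint like a bus stop box gets forced into the nearest known class unless an explicit "unknown" fallback is bolted on.

```python
# Toy closed-loop classifier: it can only answer with a label it was
# trained on. All class names and scores are invented for illustration.

KNOWN_CLASSES = ["lane_line", "road_edge", "crossing"]

def classify(scores: dict, threshold: float = 0.8) -> str:
    """Return the best-scoring known class, or 'unknown' below threshold."""
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"

# Hypothetical bus stop paint: weakly resembles a lane line. A naive
# argmax would call it "lane_line"; the thresholded version declines.
bus_stop_scores = {"lane_line": 0.55, "road_edge": 0.30, "crossing": 0.15}
print(max(bus_stop_scores, key=bus_stop_scores.get))  # naive: lane_line
print(classify(bus_stop_scores))                      # fallback: unknown
```

Even the fallback only tells the system it doesn't know; it still has no common-sense model to fall back on, which is the point being made.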

Examples like this are like saying that an airline pilot should be able to fly a glider because every 8 year old knows what a joystick is.
Actually, an airline pilot would probably make a half-decent hack of flying a glider, mainly because he/she will have flown light aircraft for hundreds of hours before graduating to large jet aircraft - and gliders handle much like light aircraft. And the wider point is that the basic stick/rudder skills across all fixed wing aircraft are pretty much the same - and the human brain is pretty good at adapting to new scenarios without needing to be trained with thousands upon thousands of annotated images beforehand.

Constantly dragging up "AP doesn't work because it can't do xyz", then extrapolating that to "FSD will never work because AP can't do xyz", is pretty pointless. There seems to be some expectation that AP should do xyz now, even though doing xyz is not, and never will be, part of AP's feature set; it will only be part of FSD's future feature set.
OK, but the point is that the long tail of stuff that happens in the real world is so long that the machine may never get there. We humans don't need all that training because we can adapt to things we haven't necessarily seen before.
 
You could make the same point about allowing people to drive. They also do not act well in challenging situations, are often distracted, feeling poorly, have a piece of sand in their eye, are looking one way when a threat comes from another direction, drop a cigarette, or spill hot coffee.

People get in accidents hundreds of thousands of times every day. Does not mean that those flawed individuals cannot routinely get where they wish to go safely.

Autopilot does not need to be perfect, just better than humans to have great success.

Holding them to a standard of perfection is perhaps flawed.
 
  • Like
Reactions: Big Earl and tsh2
Autopilot does not need to be perfect, just better than humans to have great success.
While it may be better than humans in navigating a world of moving objects, I will go out on a limb and suggest it may never be better at understanding and communicating with humans. Without that skill, which is a good 50% of driving amongst other road users, it is going to have problems.

Once all other cars are autonomous and a means of interactive communication develops between autonomous cars, that problem will go away. But it will mean forbidding humans from driving, or finding more positive ways to persuade them to give up.
 
While it may be better than humans in navigating a world of moving objects, I will go out on a limb and suggest it may never be better at understanding and communicating with humans. Without that skill, which is a good 50% of driving amongst other road users, it is going to have problems.

Once all other cars are autonomous and a means of interactive communication develops between autonomous cars, that problem will go away. But it will mean forbidding humans from driving, or finding more positive ways to persuade them to give up.


A lot of the small things humans do are just following unwritten rules. That behaviour could be programmed. I don't think that's the challenge.
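Just to illustrate the point, one of those unwritten rules (pulling into a passing place when a queue builds up behind you on a narrow road) could in principle be written down as plain logic. The function, thresholds and inputs below are entirely made up; real systems would need far richer inputs.

```python
# Toy sketch: one "unwritten" courtesy rule expressed as explicit logic.
# All names and thresholds are hypothetical, purely for illustration.

def should_pull_in(cars_queued_behind: int,
                   passing_place_ahead: bool,
                   road_width_m: float) -> bool:
    """On a narrow road, pull into a passing place once a queue builds up."""
    narrow = road_width_m < 4.0       # assumed: too narrow to be overtaken
    queue = cars_queued_behind >= 3   # assumed: counts as a queue building
    return narrow and queue and passing_place_ahead

print(should_pull_in(cars_queued_behind=5,
                     passing_place_ahead=True,
                     road_width_m=3.2))  # True
```

Of course, the hard part isn't the rule itself but reliably perceiving the inputs (queue length, road width, a usable passing place), which is where the argument in this thread really lies.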
 
With enough relevant data, I agree that it can get very close to recognising things pretty accurately within a particular narrow scope. Backed by a sufficiently rich model of road traffic behaviour, I'd even accept that it can do some interesting self-driving party tricks.

However, the computer lacks any real "understanding" of what's going on, and it certainly can't infer anything if it's confronted by a situation that it's not been trained or programmed to see. The self-driving system is a closed loop with a finite number of situations it can deal with - but there are always going to be edge cases that it's not seen before - and without any understanding of what's going on, it's going to get it wrong. Anyone who drives a Tesla today on AP knows this. It's an interesting party trick, but it's a long way from FSD. A great example is how the current autosteer completely fails to deal with the markings on UK bus stops - the car thinks that the lines are the edge of a lane and then goes on to steer the car into oncoming traffic. Not even an 8 year old would fail to appreciate that the bus stop markings are there to stop people parking there - not as a lane guide. It's common sense - but the computer has no common sense.

If you step back a little and try to see how humans learn, you will see a VERY close analogy. How many years was it before you knew what a stop sign looked like? How would someone from the States do with the bus lines? Would they inherently understand them, or does it take some learning for someone without knowledge of UK lane markings?
I know that in the States, individual states mark some roads differently, and it's hard to understand the difference. In Seattle, the HOV lane is separated by a solid white line with no breaks, and you are supposed to cross it to get in and out of the HOV lane. In Atlanta, crossing a solid white line is against the law.

How many humans are able to react to every road condition perfectly? I guess few, since there are so many wrecks. AP just needs to be better than humans to be worthwhile, it doesn't need to be perfect yet.
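To put that "better than humans" bar in concrete terms, here's some back-of-the-envelope Python. The mileage figures are placeholders I've made up for the sake of the arithmetic, not real accident statistics.

```python
# Placeholder figures, NOT real statistics: assume a human has one accident
# per 500k miles and AP-assisted driving one per 2M miles.
human_miles_per_accident = 500_000
ap_miles_per_accident = 2_000_000

annual_miles = 10_000  # assumed yearly mileage

human_risk = annual_miles / human_miles_per_accident   # 0.02 -> 2% per year
ap_risk = annual_miles / ap_miles_per_accident         # 0.005 -> 0.5% per year

print(f"human: {human_risk:.1%}/yr, AP: {ap_risk:.1%}/yr")
print(f"relative risk reduction: {1 - ap_risk / human_risk:.0%}")  # 75%
```

The point of the sketch: AP doesn't need a zero accident rate to be worthwhile, only a rate meaningfully below the human baseline, whatever those real numbers turn out to be.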
 
I have probably 40k miles of experience driving on Autopilot (in multiple different Teslas) and my experience is a lot better than some here. You get to know its strengths and weaknesses and realise that it does learn. There are roads where it initially tried to drive me off the road, usually at turnings etc., but by holding the wheel and correcting, I have found that the system learns. On those same roads I now never have to intervene. I do think the roads where people have had poor experiences are probably routes driven by few or no Teslas on Autopilot before.

I drive in both the UK and US and I do think a lot of UK roads, such as single track lanes, will not be suitable for FSD for many years, if ever, while shared with human-driven cars. On motorways the system is performing well, and on many A roads too, but the narrow roads in the UK, and the many cars parked on roads, do cause a major problem.

"FSD ready" I believe means that all the features will then be in the system, such as stop sign and traffic light recognition, but does not mean it will be able to fully self drive at that stage. That will be a learning experience and will still require humans to be in control. Maybe by 2021 Level 4 autonomy will be possible, with the car being safe to drive itself on certain routes. But Elon Musk has already stated Europe will probably be the last place in the world to see full self driving due to its restrictive regulations, so we won't see it any time soon in the UK. I am sure I will enjoy FSD in the USA years before I will at home. It is questionable whether the FSD package is worth buying in the UK, given its very limited capabilities compared with the USA.
 
  • Informative
Reactions: Cogarch
They also seem to be setting themselves an unreasonably (and unnecessarily) high target by talking about robotaxis. I never expected 'F'SD to mean driving under all conditions on all possible roads; what I did expect was for it to take 100% responsibility up until the point where it says "sorry, I can't do that", having put itself in a safe position. A lot of the tricky situations (e.g. what to do when two cars come face-to-face on a single track road) are almost impossible to solve, but don't need to be solved for FSD to be useful. But a robotaxi, where there's no guarantee of a licensed driver in the car in the first place, needs that higher level.

This sums up a key point for me, about getting big benefits from an incomplete system as soon as possible. The day we can be "hands off" on just some roads will be a big day. At that point there might be 3 hour journeys like the one I've just done, but where the human is just taking a break for 2 of those hours, leaving a short and not remotely tiring little job to do, and so all but eliminating the effect of that tiring drive on the rest of the working day. The relatively huge later challenge of delivering FSD on the difficult little country/town roads at either end, with all their single tracks, passing places etc., is still not done, but that's for a relatively small benefit in this use case. In the robotaxi use case, at that point they'll still have delivered nothing.

Will this day come on today's car? I find it hard to believe but I'd love it if it does.
 
Excellent comments, but I think you’re all missing the point. Whereas the ability to respond to perceived data will continue to improve, it bothers me that Musk appears to be directing zero attention to solving the biggest problem, which is to be able to develop a realistic model of what other road users are thinking and intending — aka empathy.

Exactly, the last part of my daily commute is on roads like this.

You can have 10x the LIDAR sensors and a drone flying overhead to give you live hi-res video, but all that information is 100% useless unless your system can look into the eyes of the driver ahead, work out with a good guess if/when they will stop or not, let other drivers past if there is a crazy queue building up, etc. All decision making we do in a split second.

Getting obsessive about adding more sensors really is just wasting time and money. Until you can crack human behaviour it's all pointless.

Are Tesla likely to succeed? Highly unlikely, but for the price of FSD I am happy to come along for the ride.

[Attached image: 2_PWR_HMB_1711_03.jpg]
 
  • Like
Reactions: Cogarch
I quickly learned not to bother with it on anything but motorways. On A roads it has characteristics similar to those found in my elderly, cataract ridden grandmother if I gave her a gram of coke and popped her behind the wheel.
For goodness sakes don't do that. She absolutely shouldn't be driving while snorting coke, you have to be able to have BOTH hands on the wheel. Completely irresponsible, don't do that again.
 
  • Funny
Reactions: KennethS and Yev000
Until you can crack human behaviour it's all pointless.

This is not a game of absolutes...

When, in ~20 years, every single car can do today's autosteer (Tesla or otherwise), you can automate most dual carriageways.

The picture above might not even be an issue by then because people will use cars differently.

And you can have a car that does automation on some roads and not others, that's OK too... You really don't have to use autosteer all the time just because you can.

Of course most can agree that FSD and especially robotaxi is a pipe dream in the near future.
 
Exactly, the last part of my daily commute is on roads like this.

You can have 10x the LIDAR sensors and a drone flying overhead to give you live hi-res video, but all that information is 100% useless unless your system can look into the eyes of the driver ahead, work out with a good guess if/when they will stop or not, let other drivers past if there is a crazy queue building up, etc. All decision making we do in a split second.

Getting obsessive about adding more sensors really is just wasting time and money. Until you can crack human behaviour it's all pointless.

Are Tesla likely to succeed? Highly unlikely, but for the price of FSD I am happy to come along for the ride.



I don't understand the point of view that it is no use until it is complete everywhere. If they delivered "hands off" on the motorway network alone, that would be of huge benefit to many. Adding dual carriageways would be another substantial benefit, and then A roads with roundabouts would deliver it on maybe 80% of my driving, leaving me to do only the last 20%. If that last 20% was much harder to deliver and took much longer or different hardware, so be it. I might even enjoy it...
 
I don't understand the point of view that it is no use until it is complete everywhere.

I use AP on our 2.0 car everyday.

What we need is better/more intelligent processing, not more sensors, which is what some people believe we need.

Can Tesla squeeze enough processing power into a rather small box, with not much space for heat transfer if it's working too hard? Seems unlikely.

But I still paid for FSD. That doesn't mean I believe it'll happen, but £5k is actually not much to pay to enjoy state-of-the-art silicon/software. The price of Nvidia's 'Tesla' GPUs makes what Tesla want for potential FSD seem rather cheap!