Welcome to Tesla Motors Club

Autosteer on motorway = great. Autosteer on A road = dangerous?

It's all very interesting tech, and I applaud Tesla for pushing the boundaries as they have, but the fact is that they are still failing at very basic stuff - the most egregious of which is simple traffic aware cruise control (TACC). It's something many other manufacturers provide - invariably implemented using some sort of radar/lidar technology. It works. The same functionality on a Tesla is very unreliable - phantom braking, aborted lane changes, and confusion with parked traffic are just some of the problems.

Yet I find TACC to be very reliable on my model 3, certainly as good as any I have tried elsewhere (Mercedes, Infiniti). I'm not aware of ANY production car using Lidar for TACC or anything else; Lidar in its current form is far too expensive (though some impressive work is being done on frequency shifting Lidar that might improve things one day). To the best of my knowledge everyone, Tesla included, is using front-facing radar, the better ones (again, Tesla included) using road-bounce radar that can see two cars ahead.

Lane changes aren't TACC, they're AP. What other cars have you tried that do this reliably? Or at all?

And what of those aborted lane changes? Well, that's called caution. Yes, YOU knew it was safe, and yes YOU are still a better judge than the car, because your neural net is better. But, sensibly, the Tesla engineers know this and have tuned the system for an abundance of caution; if in doubt, abort ... would you rather it risked crashing? I'm not being an apologist here, I'd like it to be better, and it IS getting better, with every update. Will it ever have as good a judgement as a human? Probably not. Will it have better reactions than a human? It already does. And at some point that combination of untiring vigilance, faster reaction time, and good enough neural processing will reach parity with humans (who, as I have often noted, don't exactly set a very high bar to reach).
 
Actually, to be fair to the cameras, they are in many cases BETTER than human eyes. Last night I was driving home in the dark/rain and the lane lines were virtually impossible for me to see (reflections, shiny road surface etc). Yet the Tesla was picking them out better than my eyes could; almost flawlessly in fact.

^this^
I've heard some say that AP is a better driver than them in poor weather.

Will it have better reactions than a human? It already does.

Ironically, this is my worst perceived issue with AP and especially NoA. I'm sitting waiting for AP/NoA to do something well towards the end of my comfort zone. I wish it would make its intentions more obvious earlier sometimes, such as indicating for an off, or slowing a bit earlier or more. I'm sure some of this is a trust thing and not looking at the screen, but I still like to keep my eyes on the road and use my own senses to give me feedback on whether I am driving too fast or too slow.
 
Ironically, this is my worst perceived issue with AP and especially NoA. I'm sitting waiting for AP/NoA to do something well towards the end of my comfort zone. I wish it would make its intentions more obvious earlier sometimes, such as indicating for an off, or slowing a bit earlier or more. I'm sure some of this is a trust thing and not looking at the screen, but I still like to keep my eyes on the road and use my own senses to give me feedback on whether I am driving too fast or too slow.

You make a good point, and it's interesting how many people are anthropomorphizing AP: if you watch videos they say things like "Well, it didn't do too well on that turning. Hopefully it will get better soon", despite the fact that, in its current incarnation, AP does certain things like follow lane lines, and nothing else. I've listened to people comment that it should do "better" in intersections, even though it has no concept of such a thing (at present), and has no more chance of "doing better" than it does of learning to fly. And one incarnation of this is expecting the car to react on the same time-scale as a human, and our unease when it does not (and surprise when it reacts much faster, such as in emergencies).
 
This all smacks of the 80/20 rule.

I'm a software bod too and have a similar background / experiences

I have no way of knowing where Tesla are really at. My perception is that they are hiding their light under a bushel (no idea if the light is bright, or dim). For example, they do not (as required by law) post numbers of disengagements during testing in California. So they cannot be testing FSD in California ... either because it is rubbish, or they don't want to give any clue to competitors.

What is available in the car is what they can get away with under the regulators. Anything more advanced, but with risks / whatever, won't be included "yet". That said, they are taking a far more aggressive approach to what they are prepared to release, compared to established Auto (which has deeper pockets for Lawyers to sue of course ...)

I am of the opinion that 80/20 isn't where they are at. AI development and advances are very different to conventional apps, and Elon's approach is different to what others are trying to do.

My expectation is that what Tesla are trying to achieve is a drive coast-to-coast with zero disengagements. To achieve that they need Highway driving (Check), On/Off ramps (NoA), street driving to get to Supercharger (Traffic light recognition Check), and manoeuvring in the Supercharger car park (Advanced Summon Check). All that lot puts them "close". They can try the coast-to-coast drive every day, keep on failing, and then on the one day when a dog doesn't run across the road :) publish the video and shout "DONE!"

I have no idea whether that project roadmap is a good route to get to FSD, but coast-to-coast self driving would sure boost Tesla stock price, and sales ... and from my onlooker's position, and observation of Tesla history, that is my guess of where Elon and the Tesla Board want to be.

Thereafter Tesla has the ability to put a "test version" in every car, and have it shadow what the driver actually does, and see how well it performs. Seems (from raw data logging that people have done) that this is not happening yet, or not on those/many/most? cars. But Tesla could do it tomorrow if they wanted to test against the whole fleet. So when they actually get their hands on something their route to Proof is pretty sweet.
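To sketch what that fleet shadow mode amounts to (all names and thresholds here are invented; this is just the concept, not Tesla's actual pipeline): the stack plans an action on every tick but never actuates, and only the disagreements with the human driver get logged for upload.

```python
# Hypothetical sketch of "shadow mode": the autonomy stack plans an action
# on every tick but never actuates; we only log where it disagrees with
# the human driver. Names and thresholds are invented for illustration.

def shadow_tick(planner, frame, human_action, log, steer_tol=0.1, accel_tol=0.5):
    """Compare the planner's proposed action against what the driver did."""
    planned = planner(frame)  # e.g. {"steer": -0.02, "accel": 0.3}
    disagrees = (
        abs(planned["steer"] - human_action["steer"]) > steer_tol
        or abs(planned["accel"] - human_action["accel"]) > accel_tol
    )
    if disagrees:
        # Only disagreement events get uploaded -- the interesting corner cases
        log.append({"frame": frame, "planned": planned, "human": human_action})
    return disagrees

# Usage: a trivial planner that always holds the lane and speed
log = []
planner = lambda frame: {"steer": 0.0, "accel": 0.0}
shadow_tick(planner, frame=1, human_action={"steer": 0.0, "accel": 0.1}, log=log)   # agrees
shadow_tick(planner, frame=2, human_action={"steer": 0.4, "accel": -2.0}, log=log)  # human swerved/braked
```

The appeal is obvious: the fleet generates "test miles" for free, and only the disagreement events need bandwidth.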

only recently have we seen an explosion in image and voice recognition

But But But Sir! ... they are cheating and using heavy-iron for those advancements. I've been using Dragon Dictate for donkey's years. It has improved in terms of recognising regional accents and needing negligible training, but its error rate (once trained) is about the same as ever it was. My CPU power in that time has increased massively ... so that hasn't solved the problem. When Alexa can figure out what I am saying without sending the recorded speech to Heavy Iron I'll be impressed :)

for many 60 miles is a short daily commute

It's a UK forum ... "Americans think that 200 years is a long time, Brits think that 200 miles is a long way" :)

My conclusion is that Tesla have probably backed the wrong horse in trying to do FSD almost entirely based off real time image recognition

That's their bet though. That LIDAR is too expensive for single-car installation (apart from very high mileage vehicles, such as Taxis)

Tesla have some very clever people. They will either be right or wrong ... I am an armchair critic and know that I have no informed opinion on which it might be. But I do consider what the consequences will be if they fail.
 
its interesting how many people are anthropomorphizing AP

That's my biggest issue with AP, being a human and using human experiences I have absolutely no idea what AP might be deciding to do. The better AP gets the greater the risk that I become more and more complacent and then, on the fateful day, fail to do the obvious thing. (As such I have an absolutely rigid rule for one-hand-on-wheel and eyes-on-road. There have been fatal accidents when folk have stopped doing that)

If I take a 17-year-old out for their first ever drive I have a pretty good idea of every single dumb thing they might do, and can be ready for them. But AP will pass a similar looking truck 99 times without issue, and then the next time, at (what seems to me to be) identical spacing, lighting and line markings, it will decide that one is a major threat.

But AP + Me is way better than just Me. I can't see under the car in front to know that the traffic in front of that is braking heavily and guy-in-front is asleep. TACC has no concerns about guy-in-front brake lights being broken. AP is also doing a constant 360 lookout, so can just change lanes to avoid an incident without having to do the whole mirror-signal-manoeuvre procedure, and so on.
 
My conclusion is that Tesla have probably backed the wrong horse in trying to do FSD almost entirety based off real time image recognition. I can understand why they've done it, but I can't see the tech getting to a workable result any time soon.

How else can you solve the problem if you're not using image recognition?

Having more data from additional sensors doesn't help the system think or understand any better. Quite the opposite: how is Lidar going to understand the extra sensory input caused by driving through a puddle/flooded road, as we are seeing often in the UK at present?

How is extra radar info going to tell you how to navigate through a complex busy junction when you have bus drivers doing the Indian style of might-is-right driving?

The only way to progress FSD is true AI, a mad focus on extra sensory info is just a distraction.

As I keep on pointing out, all of us on here can identify the dangers/pitfalls of driving down this road using a single still image at sub-VGA resolution.

What's not needed is more sensory information - that's just a distraction to avoid the much harder task: developing true AI that can process an image like the one below as well as any human.

[Image: Albemarle Street]
 
Musks view seems to be that if you can get the 3D space correct, then the driving is just like a video game.

I think that there is lots of evidence that they are doing pretty well at 3D space (I think the cone visualisation is a bit of a 'look what we can do' type of thing, even though they have probably been doing it behind the scenes for a while), but still a way to go. He admits that very tight rural roads and complex traffic lights are 'hard' - the latter being 100% solvable in time with V2I, but their ultimate aim is to do everything visually, which makes a few things much harder than perhaps they should be.

I'm less convinced of self drive in that 3D space being based on something like Grand Theft Auto though ;)
 
He admits that very tight rural roads ... are 'hard'

Haven't repeated it yet with my AP2, but in the last AP2 loaner I had I managed to engage AP on a single track rural road where there were some centre-line markings on an isolated bend ... it was a wild ride yo-yoing between the verges :) and I chickened out when there was oncoming traffic.

That's more than a year ago, I'll try it again :rolleyes:

A Computer Engineer, Programmer, and Systems Analyst are in a car coming down a steep mountain road when the brakes fail. By good luck they manage to pull into an uphill driveway and stop. Visibly shaken they jump out ...

Computer Engineer: "I think I can fix it"
Systems Analyst: "I think we should walk to that village down there and get help"
Programmer: "I agree, but let's jump back in and see if it will do it again"

I'm a Programmer :)
 
Haven't repeated it yet with my AP2, but in the last AP2 loaner I had I managed to engage AP on a single track rural road where there were some centre-line markings on an isolated bend ... it was a wild ride yo-yoing between the verges :) and I chickened out when there was oncoming traffic.

Weird to see any single track rd with centre line markings. Lots of single track here and never get my car showing any suggestion of AP even with good verges. Mind you we have a lot where the edges are walls of mud with a hedge on top and enough where you have to stop and move the fallen branch out of the way...
 
AFAIK whenever you override AP, the car sends that instance of data back to the network. Then the system (or maybe humans?) interprets the reason why the human overrode the software, and that allows continuous improvement on the so-called corner cases.

For example, although you are placed accurately in the middle of your lane on a bend, when a bus or truck suddenly appears oncoming, you instinctively move over to the left to give more space to pass. Apparently AP has now learnt to do this in some circumstances, although I have not seen any evidence of it.

Personally I find the Model 3 AP and TACC to be exceptional compared to what else I have driven and rented in the past. The worst being a fully loaded rental Volvo XC60 which was extremely disappointing to say the least, and actually downright suicidal. Tesla AP is far ahead.

OTA updates are the big differentiator for us, as at any moment an update may give your car a step change improvement in AP capability. For now no other cars offer that, so I’m quite confident Tesla have chosen the right path.
 
AFAIK whenever you override AP, the car sends that instance of data back to the network. Then the system (or maybe humans?) interprets the reason why the human overrode the software, and that allows continuous improvement on the so-called corner cases.

So we are led to believe. But with half a million Teslas out there worldwide it will be tens of thousands of incidents daily. Beyond human interpretation, and it depends on another AI to interpret. Didn't they say they were building a faster base AI for this? AI will try multiple scenarios, decide what it 'thinks' works best - send it out as an update - hopefully not kill anyone - and let everyone test the new idea out yet again. It may well be faster to test that new theory with an advanced driver who would drive defensively instead.
 
Weird to see any single track rd with centre line markings

Yeah, in fairness the road is a bit wider than that, but this particular bend is more generous, not sure a car properly fits on each side of it though! Anyway, I assumed it had been painted there specifically to enable AP for testing on narrow roads :) (AP1 never gave me the (T) symbol though ...)

AFAIK whenever you override AP, the car sends that instance of data back to the network

From what I have read (people with ROOT access to the car) there isn't evidence that data is sent / that it's happening routinely. The car is definitely capable of doing that, it just seems it is not (currently) the norm (might have changed of late, I've stopped following some of the threads ... Input Overload ...)

You can send a "notification of problem" ... whether that actually gets any action is unknown.
 
I have an absolutely rigid rule for one-hand-on-wheel and eyes-on-road. There have been fatal accidents when folk have stopped doing that

This IMO is the one aspect of Tesla's strategy that has a clear insoluble problem.

Whether or not they can get to a useful level of FSD with the vision systems and compute power that they have is a question that outsiders can't prove conclusively one way or the other.

However, their strategy to proceed incrementally from TACC to FSD relying on human monitoring has a big problem. With a system that requires you to intervene every few minutes there's no real problem - you are effectively driving the car just with assistance. With a system that requires intervention (say) once a day, it will be hard for even conscientious drivers to maintain concentration. IMO, at some point they've got to give up on incremental improvements with 'beta' status and make a big jump to monitoring not required (under whatever specified circumstances).

They also seem to be setting themselves an unreasonably (and unnecessarily) high target by talking about robo-taxi. I never expected 'F'SD to mean driving under all conditions on all possible roads; what I did expect was for it to take 100% responsibility up until the point where it says "sorry, I can't do that", having put itself in a safe position. A lot of the tricky situations (e.g. what to do when two cars come face-to-face on a single track road) are almost impossible to solve, but don't need to be solved for FSD to be useful. But a robotaxi, where there's no guarantee of a licensed driver in the car in the first place, needs that higher level.
 
A lot of the tricky situations (eg. what to do when two cars come face-to-face on a single track road) are almost impossible to solve,

Actually, going forward that will be trivial to solve. As CAV becomes more common (although there's no evidence that Tesla's connected-vehicle implementation will be compatible with others'), the cars will know when not to enter the mutex zone, i.e. if it's not clear.
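That "mutex zone" idea maps almost directly onto a software lock: a connected car reserves the single-track section before entering, and waits at the passing place if another car holds it. A toy sketch (the class and protocol here are invented; any real V2X reservation scheme would be far richer):

```python
# Toy model of a "mutex zone" on a single-track road: a connected car must
# reserve the section before entering; if another car holds it, wait.
# Purely illustrative -- not any real V2X protocol.

class MutexZone:
    def __init__(self, name):
        self.name = name
        self.holder = None  # car id currently in the zone

    def try_enter(self, car_id):
        """Return True if the car may enter; False means wait at the passing place."""
        if self.holder is None:
            self.holder = car_id
            return True
        return self.holder == car_id  # already inside

    def leave(self, car_id):
        if self.holder == car_id:
            self.holder = None

# Two cars approach from opposite ends
zone = MutexZone("Narrow Lane, hedge to hedge")
assert zone.try_enter("car_A")        # A reserves the zone and proceeds
assert not zone.try_enter("car_B")    # B must wait -- zone not clear
zone.leave("car_A")
assert zone.try_enter("car_B")        # now B can go
```

Of course, that only works once both cars speak the same protocol, which is exactly the compatibility question above.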

Is there a consensus view on when the price will increase?

It's already gone up in the US - they are holding off the increase over here until there's more functionality to justify it - advanced summon was the trigger in the US.
 
Actually, going forward that will be trivial to solve. As CAV becomes more common (although there's no evidence that Tesla's connected-vehicle implementation will be compatible with others'), the cars will know when not to enter the mutex zone, i.e. if it's not clear.

Maybe so if the CAV has long range and can prevent the problem arising in the first place. I was thinking in particular the sort of situation where the lane isn't clearly and specifically single track with passing places but just rather narrow, with the ability to pass varying with the size of the other vehicle and the state of hedgerow growth. So there's a compromise between how far you are able to risk vehicle damage by pulling onto the verge to squeeze past vs. trying to reverse (when there might be traffic behind you) vs. trying to make the other guy reverse. And also rules like "tractors and landrovers are expected to put themselves further onto the verge/into the hedge than I am".

Still, quite likely the car with FSD will be more competent at reversing at speed than I am ....
 
Weird to see any single track rd with centre line markings. Lots of single track here and never get my car showing any suggestion of AP even with good verges. Mind you we have a lot where the edges are walls of mud with a hedge on top and enough where you have to stop and move the fallen branch out of the way...

Yes, there's a terminology clarification needed. A single track road doesn't have a centre line because it's not wide enough to allow vehicles travelling in opposite directions to pass each other (in practice sometimes they can but may require to stop or to use passing places). The term for roads with a defined centre line is a single carriageway road (that has traffic travelling in opposite directions). Some single carriageway roads in rural areas have a centre line but each side of the centre line may not actually be as wide as a modern large vehicle, especially if the road is further narrowed by overgrown hedging or broken road edges. In the UK motorways and some "A" roads are dual carriageways with at least 2 lanes travelling in each direction and a clear central divider that normally includes a physical barrier.
[Image: UK single track road sign]
 
Is there a consensus view on when the price will increase?

Was intended to go up here but Elon reacted to pressure based on the fact that we don't actually have the new features over here ... yet ... so price increase likely to be when regulatory folk permit the new features to be activated over here.

Question is: will there be a Fire Sale between NOW and THEN :)
 
  • The cameras aren't eyes. They actually have better resolution, but have been de-rated explicitly because higher resolution isn't necessarily better. Go do a little research and see how small the area is in which the eyes have good resolution. No, the side cameras don't have wipers, but the front does. They don't need to rotate, they can see everything at once. BTW, people get confused in low sun angles. That's why many cities have the sunrise and sundown slowdowns in traffic.
Well, I guess we have to disagree on that. Estimates of the resolution of the human eye by Dr Roger Clark (a recognised expert in vision systems) put it at around 576 Mp (see Clarkvision Photography - Resolution of the Human Eye). That figure takes into account the ability of the eye/brain to rapidly scan an area to build a highly detailed mental model of some area of interest. In the driving context, we are all capable of seeing the direction a car's front wheels are pointing in when at a T-junction; or where the driver is looking and the expression on his/her face. I very much doubt the Tesla cameras are even close to doing that.

  • And just how do you think that your brain does it? How do you know what a "car" is? A "building", a "road"? That's because you've had years of learning, called childhood, to learn it. And trust me, humans do equally poorly in unknown (and even known) situations. That's where wrecks and slow cars come from.
This is obviously a contentious point - but I just don't accept that what the computer is doing is even close to what our brains are doing. Computer image recognition is based on a probabilistic algorithm by which a target image is compared to a library of human-annotated images. A particular target needs to be presented over and over in different configurations, lighting, orientation etc in order for the pattern matching to work. Sometimes odd false recognitions are seen which a human would never even think were close - simply because the algorithm is just computing some correlation score that somehow manages to identify a Skoda as a horse, or such like.
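To illustrate the point about correlation scores: here's a minimal sketch of how a classifier picks a label (the scores below are made up; no real network involved). The "Skoda as a horse" failure is just the biggest number winning, with no understanding anywhere in the loop:

```python
# Minimal illustration of why a classifier can confidently call a Skoda a
# horse: it only outputs correlation-like scores over the labels it was
# trained on, then picks the biggest. All numbers here are invented.
import math

def softmax(scores):
    """Turn raw scores into a probability-like distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["car", "horse", "pedestrian"]
# Raw network scores for some odd-looking image; nothing here "understands"
# what a car or a horse is -- it's just pattern correlation.
raw = [2.0, 2.3, -1.0]
probs = softmax(raw)
best = labels[probs.index(max(probs))]
print(best)  # "horse" wins on score alone, however absurd that is to a human
```

The distribution always sums to 1 and something always "wins", which is precisely why the system can be confidently wrong.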

With enough relevant data, I agree that it can get very close to recognising things pretty accurately within a particular narrow scope. Backed by a sufficiently rich model of road traffic behaviour, I'd even accept that it can do some interesting self-driving party tricks.

However, the computer lacks any real "understanding" of what's going on, and it certainly can't infer anything if it's confronted by a situation that it's not been trained or programmed to see. The self-driving system is a closed loop with a finite number of situations it can deal with - but there are always going to be edge cases that it's not seen before - and without any understanding of what's going on, it's going to get it wrong. Anyone who drives a Tesla today on AP knows this. It's an interesting party trick, but it's a long way from FSD. A great example is how the current autosteer completely fails to deal with the markings on UK bus stops - the car thinks that the lines are the edge of a lane and then goes on to steer the car into oncoming traffic. Not even an 8 year old would fail to appreciate that the bus stop markings are there to stop people parking there - not as a lane guide. It's common sense - but the computer has no common sense.
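The bus-stop failure mode can be caricatured in a few lines: a naive lane-follower that aims for the midpoint of the two nearest painted lines, with no notion of what the lines mean (all geometry here is invented for illustration; real lane-keeping is far more sophisticated, but the underlying "a line is a line" issue is the same):

```python
# Toy illustration of the bus-stop failure: a naive lane-follower steers
# toward the midpoint of the two nearest painted lines, with no concept of
# what the lines MEAN. Positions in metres relative to the car; invented.

def steer_to_lane_centre(line_positions, car_pos=0.0):
    """Pick the nearest line on each side of the car, aim for their midpoint."""
    left = max((p for p in line_positions if p < car_pos), default=None)
    right = min((p for p in line_positions if p >= car_pos), default=None)
    if left is None or right is None:
        return 0.0  # can't see a lane -- hold course
    return (left + right) / 2 - car_pos  # positive = steer right

# Normal lane: edges at -1.8 m and +1.8 m -> stay centred
print(steer_to_lane_centre([-1.8, 1.8]))        # 0.0

# Bus stop: an extra marking appears at -0.5 m. The follower treats it as a
# lane edge and steers right -- toward oncoming traffic on a UK road.
print(steer_to_lane_centre([-1.8, -0.5, 1.8]))  # ~0.65, i.e. steer right
```

An 8-year-old applies context ("that's a bus cage, not a lane"); the midpoint rule has no slot for context at all.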

Now, I'm not saying that one day AI won't be able to do human-like things, but it seems to me that it's still a long way off. A human's effective compute power, especially for image processing and building a dynamic model of future events, is still several orders of magnitude ahead of even fancy dedicated neural net processors like those Tesla are using.

So my point isn't that the system that Tesla are working on isn't capable of doing interesting things - just that in order to make it work reliably in all the ways that it needs to work reliably is such a huge step away that it's doomed at current levels of tech. The marketing of "FSD feature complete by end 2020" doesn't resonate at all with what we all experience today.

I'd much prefer it if they concentrated on getting a more limited set of use cases (e.g. basic TACC) working in a robust manner. I applaud Tesla's innovation, but the marketing is misleading IMHO.

So, are you saying that if you were in a control room with a set of 360 degree monitors, that you couldn't drive the car remotely?
Let's turn the table around, seeing that you seem to be a LIDAR pundit, if I put you in that same control room, you could drive?
I do reckon it would be a lot harder to drive a car in a control room with 360 deg cameras - for the simple reason that the monitors would not be as good as my eyes at flicking around the scene rapidly and the ability for my brain to build a model of what was going on would be slower. Then the feedback to my actions would be limited - no accelerative forces, no feel from the wheel, limited peripheral vision etc.

As to LIDAR - I'm not claiming to be a pundit at all (or even a proponent of it). However, I do think a simpler tech based on radar or lidar in order to deliver basic TACC would be much more likely to succeed since using Tesla's tech to do it is basically a sledge hammer to crack a nut - and a sledgehammer that's got a billion working parts to it at that.

People can drive with a single eye. In that case we know that it's all simple perception, with distance being extremely hard to determine. Tesla has stereoscopic imaging that can determine distance, and RADAR to give an even better determination. With cameras you also get cues, such as color, that aren't available with LIDAR.
After the basic image recognition of either cameras or LIDAR, it's the same set of hard problems to solve. If you take a look at any of the raw images with interpretations shown, it's pretty obvious that Tesla has already got 99+% of the image recognition problem solved.
The question is whether they are at 99% or 9% - which depends of course on what you're measuring. Perhaps they are recognising 99% of typical road vehicles and the road itself and its "furniture", but expand the question to whether they are delivering a dynamic model to control the car safely and reliably, and I'd put it nearer the 9% mark.
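On the stereoscopic point above: depth from two cameras falls out of similar triangles as depth = focal length × baseline / disparity. A quick sketch with made-up camera numbers:

```python
# Stereo depth from disparity: depth = f * B / d (similar triangles).
# The camera parameters below are invented for illustration only.

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Distance to a point seen in both cameras, from its pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("zero/negative disparity: point at infinity or mismatched")
    return focal_px * baseline_m / disparity_px

f = 1000.0   # focal length in pixels (hypothetical)
B = 0.2      # 20 cm between the two cameras (hypothetical)

print(stereo_depth(f, B, 10.0))  # 20.0 m away
print(stereo_depth(f, B, 2.0))   # 100.0 m -- small disparities, big distances
```

The inverse relationship is also why small disparity errors blow up at long range, which is presumably part of the argument for keeping radar in the mix.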
 