I think you have a car-to-car communication protocol - 3 wavelengths of the electromagnetic spectrum are monitored in most directions...

I kinda like Musk's argument that if it's enough for us, it's enough for a car.

There are a few ways your situation could play out (theoretically; let's face it, we are nowhere near use on single-track roads at the moment). The baseline assumption you are making is that some gesturing is needed to sort the problem, but really, you and the other car likely both know who is closer to a passing place and therefore who should reverse. An AI actually takes all the ego out of it and hopefully suggests the obvious solution. Possibly for tie-breakers the more northerly car has to make the first move; this could be mandated (although I dislike that approach). It's also worth bearing in mind that humans fail this task too (road rage, refusal to compromise and do the sensible thing, crashed cars, etc.).
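To make the "takes the ego out of it" point concrete, here's a toy sketch of a deterministic convention both cars could compute independently - the rule, names and numbers are all invented, not anything Tesla has proposed:

```python
# Hypothetical sketch: a deterministic tie-break for a single-track-road
# deadlock, assuming each car can estimate its own distance to the nearest
# passing place. The "more northerly yields" rule is purely illustrative.

from dataclasses import dataclass

@dataclass
class CarState:
    dist_to_passing_place_m: float  # estimated from the car's own map/vision
    latitude: float                 # used only as a last-resort tie-break

def who_reverses(a: CarState, b: CarState, tolerance_m: float = 5.0) -> str:
    """Return 'a' or 'b': the car that should reverse."""
    # Primary rule: whoever is closer to a passing place backs up.
    if abs(a.dist_to_passing_place_m - b.dist_to_passing_place_m) > tolerance_m:
        return "a" if a.dist_to_passing_place_m < b.dist_to_passing_place_m else "b"
    # Tie-break: an arbitrary but fixed convention, so both cars reach
    # the same answer with no ego (or comms) involved.
    return "a" if a.latitude > b.latitude else "b"

print(who_reverses(CarState(20.0, 51.51), CarState(80.0, 51.50)))  # -> 'a'
```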

There are lots of other considerations here, but the point is you don't need car-to-car comms to solve the situation; it's just one way. You could develop a theory-of-mind equivalent to let the car consider the problem from the other car's perspective (although this is one way to promote accidental self-awareness, I believe).

Agreed, level 5 is a way off, but I don't think we are even close enough to advocate one solution over another, tbh.

Successfully driving most of the time in most situations seems within grasp from the v9 videos.
I don't believe for a nanosecond that there's no need for comms. But I guess we'll see how it plays out.
There are daily complex scenarios where several people have to reverse to remove a deadlock. It's rarely as simple as who is nearest a passing point: you usually have no way of knowing how close the other car is to a passing point, and what if both cars are close to one and both reverse?
Just because humans fail at it, doesn't mean communication isn't necessary.

I would also argue that vision only is *not* enough for us, so it's *not* enough for a car. We communicate through hand gestures, facial expressions, headlight flashes, horns. In some cases several people have to get out of the car to work together to agree how to unpick a deadlock.

It is, in my opinion, not going to happen without comms, I guess we'll see!

To look at it from the opposite direction, imagine cars without indicators or brake lights, which are both active communications output, not just passive vision.
 
Last edited:
  • Disagree
Reactions: pow216
I have tried hard to take onboard the view that vehicle to vehicle and vehicle to infrastructure communication aren't fundamental requirements for FSD, but I still can't see it.

It is correct that human drivers do (at least need to) use far more than pure vision when driving, and combined inference from these sources is what I believe is essential for safe, effective driving.
 
Last edited:
  • Like
Reactions: TestPilot
Let's be clear: here we are talking about some mythical L5 driving nirvana where a driverless, steering-wheel-less car is expected to navigate -everywhere-. I do think that Musk was over-egging it, even by his standards, when talking about wheel-less cars. Although I could see it for an urban taxi solution, it is a long way from being most cars (I think; could be wrong). FSD and the next 12 months is a different discussion really.
[…] It is correct that human drivers do (at least need to) use far more than pure vision when driving, and combined inference from these sources is what I believe is essential for safe, effective driving.
Vision + processing (vision + inference) is what you mean here, isn't it? I don't generally use other senses to drive?

There are daily complex scenarios where several people have to reverse to remove a deadlock. It's rarely as simple as who is nearest a passing point […]

I have to wonder at this point where you are driving? I'd also say that both cars reversing isn't actually a fail? For extreme edge cases this is a pretty OK solution, again, egos aside. I spent the last 2 weeks driving around single-track roads and a passing place was always in sight. I can imagine situations in car parks or around roadworks where something might happen, but reversing would be as much a solution there as elsewhere. If traffic was totally jammed up behind you, then the car has the same problem as you would have? No amount of hand waving or getting out of the car improves on everyone reversing? It may not be optimal, but if you are chasing the 999s of avoiding disengagements, it would work?

To look at it from the opposite direction, imagine cars without indicators or brake lights […]
and are included in v9 FSD. Brake lights just now, indicators promised (2 weeks, right?) (Tesla Vision Sees Brake Lights: Turn Signals, Hazards, More Soon)

Signals were not a priority as you can't necessarily trust them - indicators get left on, brake lights can be broken. Hence the neural nets are trained (just now) on what a car does, not what it signals. There are obviously improvements to be made by also looking at what it's indicating, but the set of communications we have just now is adequate for this. I'm not saying that inter-car comms wouldn't be awesome (I always fantasise about a brake light that could indicate how hard someone is braking so you can feather your reaction better), but that kind of centralised diktat is not the road system we have. It could be, but what can be done with it that can't be done using the visual indicators that exist and work, and what is the implementation plan that makes governments sign up for it? You can't ever ignore the other cars that don't have the system fitted, so you still need all the visual processing too?
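For illustration, weighting a signal against observed behaviour could look something like this toy - the weights and feature names are made up, this is just the shape of the idea, not Tesla's code:

```python
# A minimal sketch of "trust what the car does, not what it signals":
# fuse a (possibly stale) indicator light with observed motion, weighting
# the motion evidence more heavily. All numbers are invented.

def p_lane_change(indicator_on: bool, lateral_velocity_ms: float) -> float:
    """Crude fused estimate that a lead car is about to change lane."""
    prior = 0.05                      # base rate of a lane change
    odds = prior / (1 - prior)
    if indicator_on:
        odds *= 4.0                   # weak evidence: indicators get left on
    if abs(lateral_velocity_ms) > 0.3:
        odds *= 20.0                  # strong evidence: the car is actually drifting
    return odds / (1 + odds)

print(round(p_lane_change(True, 0.0), 3))   # signal only: still unlikely (~0.17)
print(round(p_lane_change(False, 0.5), 3))  # motion only: much more likely (~0.51)
```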
True, and it will (potentially, maybe) react quicker.
I do worry a bit about the quality of the vision inputs, though. The recordings don’t seem to be particularly high resolution or high quality and the car seems unable to “drive three cars ahead”.
This I think is a fair accusation to level at the current system. It might be enough, but it definitely isn't optimal. Try going back and playing various versions of Mario Kart and see how you do - in particular I noticed that I am waaaay worse on a Wii 4-player using a quarter of a 480p screen. Extreme example, but games etc. are pushing past 960p for a reason. The combo of a wide and narrow camera facing forward helps, but I'd ideally like something like a variable-res sensor that gets slowly denser towards the centre (or is steerable?). Sounds a little like an eye, I guess! Side cameras at 960p are probably fine, but I'd like more pixels forward-facing to, as you say, help with understanding things further ahead.
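The "denser towards the centre" idea is roughly foveated sampling; a toy numpy version (nothing like Tesla's actual camera pipeline) might be:

```python
# Illustrative sketch of foveation: equal pixel budget per level, but
# each level is a tighter centre crop sampled more finely.

import numpy as np

def foveate(img: np.ndarray, levels: int = 3) -> list[np.ndarray]:
    """Coarsely sampled full frame, then progressively tighter and
    finer centre crops, each with roughly the same pixel count."""
    h, w = img.shape[:2]
    out = []
    for i in range(levels):
        scale = 2 ** (levels - 1 - i)          # coarse first, fine last
        ch, cw = h // (2 ** i), w // (2 ** i)  # tighter crop each level
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        out.append(img[y0:y0 + ch:scale, x0:x0 + cw:scale])
    return out

frame = np.zeros((960, 1280, 3), dtype=np.uint8)
for crop in foveate(frame):
    print(crop.shape)  # (240, 320, 3) each time: same budget, denser centre
```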

The driving 3 cars ahead is probably a software problem, however, separate from the cameras. Or, as I call it, driving like a teenager still learning. It needs more planning capacity rather than more input. Big strides are being made in this area of AI too, but not necessarily by Tesla.

Interesting that someone brought up the 10x safer than humans - as that is the stated near-term goal really? Car-to-car comms and other L5-related things are less immediate, and it may turn out that L5 does need the re-introduction of radar, other sensors or other options. We have to wait and see really. Does removing the human increase safety 10x? I think it could. Going through r/IdiotsInCars on Reddit, it's largely the "software" (i.e. the people) to blame for the mess-ups on there. Ignoring the rules of the road, or lights, or just putting getting to their destination in a rush over driving safely is probably most of the problems on there. Accidents on unsighted corners/junctions can be solved by approaching slowly enough to do something about it if there is a surprise - the key is recognising the requirement (I can't see round this corner, i.e. a vision problem) and prioritising that over going fast for fun or being on time.

So I am confident pure vision could get to 10x safer than human. I am hopeful both that the hardware in the Model 3 is enough to support this, and that it can get most of the way to L4 (driverless in most scenarios).

Bugger, I sound like a right old git for some of that, sorry.
 
  • Like
Reactions: Obliter8
I was thinking of the 2D camera / 3-or-4D AI-generated computer model vs the human one (sound, plus obvious vision, plus reflected images, plus understanding of the environment (under/overpass, tunnel, road surface), and both the temporary and long-term local environment).
I agree the temporary and long-term location awareness would be a useful addition, using a model similar to Google Maps editing (suggested edit, multiple independent validations prior to publishing). Useful, definitely; required? I guess we get to see!
 
  • Like
Reactions: DenkiJidousha
I'd also say that both cars reversing isn't actually a fail?
But what happens next? Both cars go forwards again :p

No amount of hand waving or getting out of the car improves on everyone reversing?
This is the *only* option for resolving the deadlock; if you don't do this, everyone just sits there wondering what's going on 5 cars ahead.

and are included in v9 FSD. Brake lights just now, indicators promised (2 weeks, right?)
This for me is if anything an indication that it *is* needed, otherwise why bother?

A particular example of this for me was in the Mendips recently, when a main road was gridlocked. Many cars were attempting to avoid the main road by going down a narrow country road with lots of blind corners and few passing points. It was *impossible* to negotiate without communication; the road would just get blocked.

There are just so many scenarios I can think of, like if a tree falls ahead out of sight, causing a complete blockage. How does the rear-most car know what the blockage is, that it's impassable, and that it must be the first to reverse?

I think that perhaps the disconnect here is that I'm saying that L5 automation is impossible without communication, and maybe that's just not the target, but I doubt Elon would ever say that!

The "We only need vision" position needs context methinks.
 
Last edited:
  • Like
Reactions: Avendit
So that would mean the car would be as good as humans; then add in that it never gets distracted, which makes the car better than a human.

The cars are presently distracted quite frequently and then may choose an inappropriate response. The nature of the distraction is different, but it's still distraction in my opinion. A human may have their mind wandering and not be paying as much attention, so the car should avoid that type of distraction. However, the car is distracted by basic misinterpretation of inputs and therefore takes the wrong action. Spurious sudden braking events when passing some HGVs, which risk causing an accident, are one example. The distraction also means that it hasn't taken account of the car travelling behind that is at risk of crashing into the back of you. The wiper management is distracted by tree sap or squashed insects on the windscreen, such that the wipers sometimes don't operate as intended. As they are required for clearing the car's cameras as well as the driver's view, this may lead to further vision-related misinterpretation.
 
  • Like
Reactions: Battpower
The cars are presently distracted quite frequently and then may choose an inappropriate response. […]

"in my opinion", got any facts to back that up?
phantom braking - dealt with here (RADAR software stack is rubbish)
You're mistaking poor coding for distraction - distraction being doing something other than the primary purpose - letting the mind wander

The wipers aren't "distracted by the tree sap", the coding to understand what's going on isn't fit for purpose - humans being distracted with creating fart noises and games is to blame there.
 
"in my opinion", got any facts to back that up?
phantom braking - dealt with here (RADAR software stack is rubbish)
You're mistaking poor coding for distraction - distraction being doing something other than the primary purpose - letting the mind wander

The wipers aren't "distracted by the tree sap", the coding to understand what's going on isn't fit for purpose - humans being distracted with creating fart noises and games is to blame there.
If you read my post again you will spot that I was not talking about inattention. I am ultimately talking about the observable results of car behaviour, but in the case of tree sap on the windscreen, the fact that the sensors and computing algorithm get taken up with measuring what turns out to be the wrong thing is the equivalent of distraction in my book. (Agreed, it's poor coding or poor sensor capability.)

All computing systems are made by fallible humans, so in practice they have the mistake-making capability built in! I agree that they don't have human-style inattention as well, unless the whole system crashes of course, and then there's no attention at all!
 
[…] the fact that the sensors and computing algorithm get taken up with measuring what turns out to be the wrong thing is the equivalent of distraction in my book. […]
I like the idea of considering these as equivalents of distraction.

That said, humans are less and less involved in writing the AP code. Previous Karpathy talks have shown how "Software 2.0" code is taking over the AP code base - the parts that are trained neural nets instead of a human writing if/then/else statements. It does mean that they are influencing the code at a step removed, by changing the training data set rather than the code. There's a whole other debate to be had here, TBH, about how important it is or not that these changes are trackable and auditable, with some output of *why* the net is doing something.
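For anyone who hasn't seen the talks, the shift is roughly this - same interface, but the second version is changed by retraining rather than by editing code. Feature names, thresholds and weights here are invented for illustration:

```python
# Toy contrast between hand-written rules and a trained replacement.

def is_cut_in_rules(gap_m: float, lateral_v: float) -> bool:
    # "Software 1.0": a human picks the thresholds in code.
    return gap_m < 15.0 and lateral_v > 0.4

class IsCutInLearned:
    # "Software 2.0": the behaviour lives in the weights. Fixing a failure
    # means adding examples to the training set and retraining, not editing
    # an if/then/else - which is why auditability gets harder.
    def __init__(self, weights: tuple[float, float, float]):
        self.w = weights

    def __call__(self, gap_m: float, lateral_v: float) -> bool:
        score = self.w[0] * gap_m + self.w[1] * lateral_v + self.w[2]
        return score > 0.0

model = IsCutInLearned(weights=(-0.1, 2.0, 0.5))
print(is_cut_in_rules(10.0, 0.5), model(10.0, 0.5))  # True True
```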

The powerful thing about modern techniques like this (and cloud computing, networks-as-code, etc.) is that when there is a problem it can be fixed once, and that's your problem gone. Where Tesla do seem to be weak is regression and unit testing to make sure that any fix doesn't decrease previous functionality.
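The kind of regression harness I mean is nothing exotic - something like this sketch, where the scenario format and model interface are invented for illustration:

```python
# Minimal sketch of scenario regression testing for a driving stack:
# replay a library of recorded scenarios after every change and flag
# any previously-passing scenario that now fails.

def run_scenario(model, scenario: dict) -> bool:
    """Replay one logged scenario; True if the plan keeps a safe gap."""
    planned_gap_m = model(scenario["inputs"])  # hypothetical interface
    return planned_gap_m >= scenario["min_safe_gap_m"]

def regression_suite(model, scenarios: list[dict]) -> list[str]:
    """Names of scenarios the candidate model now fails."""
    return [s["name"] for s in scenarios if not run_scenario(model, s)]

golden = [
    {"name": "HGV pass, no phantom brake", "inputs": 12.0, "min_safe_gap_m": 10.0},
    {"name": "cut-in at 50mph",            "inputs": 6.0,  "min_safe_gap_m": 8.0},
]

def toy_model(inputs: float) -> float:
    return inputs * 0.9  # stand-in for the real stack

print(regression_suite(toy_model, golden))  # -> ['cut-in at 50mph']
```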
 
  • Informative
Reactions: Adopado
[…] Where Tesla do seem to be weak is regression and unit testing to make sure that any fix doesn't decrease previous functionality.
That's the whole problem with non-deterministic models such as ML. I'm not sure the regulators are ready to get their heads around it.

I'm sure phantom braking only came about because Tesla moved the acceptable thresholds on false positives vs false negatives - because they are a trade. Phantom braking started after Joshua Brown went under a lorry; after that they seemed to prefer a false positive and the car braking for a shadow rather than a false negative and that lorry across the road being dismissed. They'd never admit it of course. These things are always trades: you can decrease the number overall, but they're effectively probability models and not decision-tree or rule-based, so by inference they will get it wrong some of the time.
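The trade is easy to see with a toy example - same detector scores, two different thresholds (all names and numbers invented):

```python
# Toy illustration of the false-positive / false-negative trade.

braking_scores = {
    "shadow on road":    0.35,
    "overhead gantry":   0.45,
    "lorry across road": 0.60,
    "stopped car":       0.90,
}
real_obstacles = {"lorry across road", "stopped car"}

for threshold in (0.7, 0.4):
    brakes_for = {k for k, s in braking_scores.items() if s >= threshold}
    print(f"threshold={threshold}:",
          "misses", sorted(real_obstacles - brakes_for),     # false negatives
          "| phantom", sorted(brakes_for - real_obstacles))  # false positives
```

At 0.7 the lorry is missed; drop the threshold to 0.4 and the lorry is caught but the gantry now triggers phantom braking. You can move along the curve, but not off it.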

The second challenge is that humans still need to define the domain and degrees of freedom, together with the inputs. Taking away the radar, for example, was a human decision. It's not a case of "here's some data, learn to drive"; the whole thing will be broken down into smaller scenarios. I think the one we're seeing in V9 is the sum of the "build a model of what's around me" part, which in itself is made up of thousands of models that say "is that a road sign", "is that a car", "is that a pedestrian" - but what if it doesn't have a model for "is that a carnival elephant", which it doesn't know how to handle? That location space is not the same as "drive the car"; that's a second model that takes the data from the first. "Drive the car" will probably have some decision-tree logic built in - the highway code, for instance. It's unlikely to use a statistical model that says it's OK to drive this bit of road at 40 because others do when the limit is 30; that's a simple example of rule-based logic. So it's wrong to assume that because you have the first, the second will easily follow, but it is fair to say you can't have the second until you have a sufficiently good first.
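In sketch form, the layering might look like this - a learned perception output feeding a planner that still applies a hard rule on top (all names and numbers illustrative):

```python
# Sketch: learned perception feeds a planner, but rule-based constraints
# (the highway code) cap the output regardless of what traffic is doing.

from dataclasses import dataclass

@dataclass
class Perception:
    clear_road_ahead_m: float   # from the "model of what's around me"
    observed_flow_kph: float    # what other traffic is doing

def target_speed_kph(p: Perception, legal_limit_kph: float) -> float:
    desired = min(p.observed_flow_kph, p.clear_road_ahead_m / 2)  # learned-ish heuristic
    return min(desired, legal_limit_kph)  # hard rule: never exceed the limit

# Traffic flows at 40 but the limit is 30: the rule wins.
print(target_speed_kph(Perception(200.0, 40.0), legal_limit_kph=30.0))  # -> 30.0
```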

Back to the regulators: what confidence will they need that a change to the models won't upset things? I know when I've looked at these things professionally, it opens up lots of issues very quickly, and unintended consequences soon pop up, usually where you least expect them.
 
  • Like
Reactions: OttoR and Adopado
Whole big lump of agree there. V9 does seem to be lots of effort on understanding your surroundings, with less focus on "what to do with that info". I wonder if the (commercial) radar was just too simple a tool to work with - it never had enough data to say whether a sudden appearance in its data was a false positive or someone stepping into the road, and its narrow physical (reporting??) cone doesn't allow it to see things approaching - from its perspective it's either there or not. That time-based series can't be extracted from the commercial units, I suspect. Hence the dump-the-tech or build-your-own options.

I think we have a few years before the regulators start spec'ing anything about neural-net systems, although they should. I wonder if eventually you need a separate regulator to look at those aspects of all systems that use them? Rather than having road/car safety regulating driving nets, the FCA attempting to manage trading nets, and the Home Office looking over the ones that review CCTV to identify and lock up "suspicious people" (we all know how that ends). These should all be able to highlight back to the operator why a decision was made, but I suspect that for the time being end results will be enough.
 
I don't think the regulators will sign off on anything until they understand it, and that ultimately will stop even Level 3 except under very controlled conditions and probably a manufacturer fleet of cars. Regulators answer to politicians, and they're going to be either gung-ho (bad) or archaic (also bad). Couple that with the quality of software we currently get and its buggy nature, and you wouldn't let Tesla automate a lift at the moment, if we're honest. Sit in the car and have the windscreen wipers not work properly, and your confidence would immediately drop through the floor. Tesla would need to come across not as a radical, next-generation maverick, but as a deep-thinking, properly engaged, educating-the-regulator type of body - something that Waymo are doing in the States. One only has to look at Tesla's safety stats and the conclusion they try to put across (i.e. it's getting on for 10x safer) and realise that's dreadful mathematics, lacking any scientific rigour. Musk either really believes it's true (very worrying if that's the case) or he doesn't but says it anyway (lack of trust will develop). They've probably got the actual data that could compare active Autopilot on motorways/freeways in the rain in daylight with only passive systems on motorways/freeways in the rain in daylight - and while there are still other variables, that would be a much, much more meaningful comparison, but also one I would place money on being nowhere near 2x safer, let alone 10x like they'd like you to believe.
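The comparison I mean is just a stratified rate calculation, something like this sketch (every figure here is an invented placeholder, since Tesla don't publish this breakdown):

```python
# Sketch of a like-for-like safety comparison: crash rates per million
# miles within one matched stratum (motorway, rain, daylight), rather
# than pooling motorway Autopilot miles against all human miles.

def rate_per_million_miles(crashes: int, miles: float) -> float:
    return crashes / (miles / 1e6)

stratum = "motorway, rain, daylight"
ap_rate      = rate_per_million_miles(crashes=3, miles=40e6)  # AP active (placeholder)
passive_rate = rate_per_million_miles(crashes=5, miles=35e6)  # passive only (placeholder)

print(f"{stratum}: AP {ap_rate:.3f} vs passive {passive_rate:.3f} "
      f"({passive_rate / ap_rate:.1f}x)")  # -> about 1.9x with these made-up figures
```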

I did read an interesting article that says basically the whole Tesla Vision and 4D move is backed up by Tesla's data, as they've made effectively zero progress in improving the systems in the last 18 months - and the data does seem to suggest that. They've reached a ceiling for what the cars can do, and it's not high enough, so they're effectively starting again. Then add the questions over the sensor suite, and they had 3 options: try and keep going with what they have (despite years of trying, it's plateaued), add more sensors (which means a massive retrofit bill for cars with FSD), or try and simplify and hope less is more. They've chosen the last one, but given the 3 choices it was pretty much the only one available to them. Musk being the showman shares the visuals to keep the faithful faithful, but I'd be surprised if a lot of that wasn't already known in the car; all they've done is develop a fancy render, because people get excited about that.
 
I don't think the regulators will sign off on anything until they understand it, and that ultimately will stop even Level 3 […]
All very true. OTOH, they didn't buy the 6th (or whatever) largest supercomputer for nothing. Is the local peak in the approach, or some other limiting factor? Sounds like they are saying training data. Fingers crossed.

Musk hasn't previously fallen for the sunk-cost fallacy thing - witness the pivot of Starship from carbon to steel, and all the other changes of direction they have been through. But he didn't have so much sunk there in terms of installed fleet or face.
 
Sooo, if you have beta 10 and 11 planned, at what point do you push it wider and enable a button (of doom)? Time to roll for doubt, I'm afraid.

I think they would have to fix left turn (US) before pushing it to more people. Beta, yes, but you can't push it to millions with a note that says 'don't let it try to turn across traffic'.
 
If Musk's response is to be taken at face value, it looks like finally the "stack" behind "city streets, highway & complex parking lots" will become one. Hopefully that will mean that some improvements from the City Streets FSD beta will find their way back into parts of regular Autopilot.


Isn't everything in beta anyway? He really needs to get familiar with the SDLC.
 
  • Like
Reactions: TestPilot
I think they would have to fix left turn (US) before pushing it to more people. […]

Waymo does exactly that, though. I believe the car's route-planning activity avoids road layouts that are deemed too complex for it to handle.

Even I do it when out on my road bike with clip-in pedals. I actively avoid busy junctions that involve crossing two lanes, due to the potential of not being able to clip in quickly enough to get through the junction without dallying around.
 
Waymo does exactly that, though. I believe the car's route-planning activity avoids road layouts that are deemed too complex for it to handle. […]

There's an option in Waze to avoid difficult turns.
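"Avoid difficult turns" is straightforward to express as a routing cost, in the spirit of the Waymo/Waze behaviour above; a toy sketch (graph and penalties invented for illustration):

```python
# Sketch: weight unprotected cross-traffic turns heavily so the route
# planner prefers a slightly longer, all-protected route.

import heapq

# edges: node -> list of (next_node, seconds, is_unprotected_turn)
graph = {
    "A": [("B", 60, False), ("C", 90, False)],
    "B": [("D", 30, True)],    # short, but an unprotected turn across traffic
    "C": [("D", 45, False)],   # longer, all protected turns
    "D": [],
}

def route_cost(start: str, goal: str, turn_penalty_s: float = 120.0) -> float:
    """Dijkstra over travel time plus a penalty per unprotected turn."""
    pq = [(0.0, start)]
    best = {start: 0.0}
    while pq:
        cost, node = heapq.heappop(pq)
        if node == goal:
            return cost
        for nxt, secs, hard_turn in graph[node]:
            c = cost + secs + (turn_penalty_s if hard_turn else 0.0)
            if c < best.get(nxt, float("inf")):
                best[nxt] = c
                heapq.heappush(pq, (c, nxt))
    return float("inf")

print(route_cost("A", "D"))  # -> 135.0 via C; the nominally quicker B route costs 210.0 once penalised
```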