Elon: "Feature complete for full self driving this year"

The new cars being designed now are getting lots of redundant features: sensors, networking, lots of things. I hope Tesla joins in, but I know others are doing this (in the design stages, now). For L4, some things MUST be ASIL-D; you just can't say no to that. So why not start now, while we are at L2? L2 does not mandate -D levels, but it's still not a bad idea, and as a driver I really do want that.

Over-design is a good thing, and while it costs money, I personally think it's worth it and I push for it whenever I can when there is a safety aspect to it. Overhead dome light? No. But some other important things, yes. Overall, I want more 'yes' than 'no' in this department.
 
As I approach a stoplight (or another car) and watch the visualization screen, the Tesla camera+radar system seems to have significantly less range than other cars. I hope that is a visualization problem and not a range problem. But the maximum following distance cannot be set as far as it can on other cars.

Besides the failure to automatically re-engage the autosteering like other cars, it also does not appreciate it when you correct its line. If AP were a driving student, I'd instruct them to look further ahead and stop fixating on the spot 10 feet in front of the car. In traffic, mine kicks off fairly often when I try to put the car onto the correct path for the upcoming bend or lane split. Other brands are more tolerant of driver input.
 
But again, I2V would be helpful in sending the color of the traffic light to the car in advance.

As someone who worked on traffic light systems and has been on the periphery of V2I development, I can say that many traffic lights (there are various types, some of which change mode at various times of the day) do not know what colour they will be from one second to the next. What they will know is a time frame that a light should stay green, but outside that, all bets are off as the optimiser recalculates its options second by second within a set of parameters. What the car could get from this, though, is a speed range it needs to travel at to guarantee passing at green. For a set of consecutive lights, this speed could also govern what the car needs to do to pass all of them at green - so-called 'green waving'.

This technology benefits more than just CAV and since I was last involved several years back, a few more options will have seen the light of day, albeit in limited form - I recall reading about an app that would pass this information on to the driver of any vehicle.

Another useful thing about V2I is that it can give information that is visually impossible to deduce or predict.
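To make the 'speed range to pass at green' idea above concrete, here is a rough sketch of the kind of calculation a car could do if the infrastructure sent it a predicted green window. The function name, numbers and limits are all made up for illustration; a real V2I feed would be richer and less certain than this.

```python
def speed_band_for_green(distance_m, green_start_s, green_end_s,
                         v_min=8.0, v_max=22.0):
    """Return a (lo, hi) speed range in m/s that would get a car currently
    distance_m metres from the stop line there while the light is green.
    Clamped to practical limits; returns None if no such speed exists."""
    if green_end_s <= 0:
        return None  # predicted green window has already closed
    # Arriving just before the green ends sets the lowest usable speed;
    # arriving the moment it begins sets the highest.
    lo = distance_m / green_end_s
    hi = v_max if green_start_s <= 0 else distance_m / green_start_s
    lo, hi = max(lo, v_min), min(hi, v_max)
    return (lo, hi) if lo <= hi else None

# 300 m from the line, green predicted between 10 s and 25 s from now:
print(speed_band_for_green(300, 10, 25))  # -> (12.0, 22.0)
```

For a green wave over several consecutive junctions you would intersect the bands from each one to get a single speed that clears them all.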
 
uhm, no.

you DO have redundant sensors and other things. I speak from experience, OK. For L4 driving, many things MUST be ASIL-D.

go google that and then come back.

Strictly speaking, I think redundancy is only required as far as it helps a sensor determine when it has a faulty reading.

I think you could have a perfectly workable L5 autonomous vehicle with minimal redundancy as long as it could detect when a camera was blinded or damaged, and safely pull over. L5 doesn't require a vehicle to work after being damaged.
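For what it's worth, the 'detect when a camera is blinded' part can at least be approximated with very cheap image statistics. A toy sketch, with made-up thresholds and a hypothetical request_minimal_risk_maneuver() hook; real systems would also cross-check overlapping cameras:

```python
import numpy as np

def camera_looks_blinded(frame, low_var=5.0, sat_frac=0.4):
    """Crude health check on a greyscale frame (uint8 ndarray). A near-uniform
    image (covered lens, total glare) has tiny variance; a washed-out one has
    a large fraction of pixels pinned near 0 or 255."""
    variance = float(frame.var())
    saturated = float(np.mean((frame < 5) | (frame > 250)))
    return variance < low_var or saturated > sat_frac

def tick(frames, request_minimal_risk_maneuver):
    # If any required camera looks blinded, stop driving and pull over safely.
    if any(camera_looks_blinded(f) for f in frames):
        request_minimal_risk_maneuver()
```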
 
It's not just sensors. Networking can be (and for some designs, is) redundant: NICs, network switches, even storage media for critical things (think RAID).

There is diversity for physical stuff (running wires on one side of the car, with a logically equivalent and redundant run on the other side) and for sensors. And yes, you can often know when one sensor is out of whack and the other is in normal range; it's not always an issue of 'two clocks, which do you trust?'. If one sensor shows a flat line and the other varies (and should vary, normally), I take the varying one. If one is bouncing up and down when the signal SHOULD be mostly constant, I'll take the steady one. It can sometimes be easy to know which of the data sources looks and feels real.

This is actually happening; I'm not dreaming or making it up. I work on this stuff and it's going to happen (I don't know the dates, and yes, it's kind of far out - not going to see this this year, or likely even next).
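A toy version of that 'which source looks real' check, just to show the idea. The threshold, and the notion that a healthy signal should visibly vary, are my assumptions for the example, not anything from a production system:

```python
from statistics import pstdev

def pick_plausible(history_a, history_b, expect_varying, min_std=0.05):
    """Toy plausibility check between two redundant sensor channels.
    history_* are recent samples from each channel; expect_varying says
    whether the signal should normally move around (e.g. wheel speed while
    driving) or stay roughly constant (e.g. a reference voltage)."""
    a_varies = pstdev(history_a) > min_std
    b_varies = pstdev(history_b) > min_std
    if a_varies == b_varies:
        return None  # can't tell them apart this way; escalate instead
    if expect_varying:
        return history_a if a_varies else history_b   # trust the live one
    return history_a if not a_varies else history_b   # trust the steady one

# Example: a stuck (flat-lined) speed sensor vs. a healthy one
stuck   = [12.0] * 10
healthy = [12.0, 12.3, 12.1, 12.6, 12.4, 12.9, 13.1, 12.8, 13.0, 13.2]
assert pick_plausible(stuck, healthy, expect_varying=True) is healthy
```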
 
Strictly speaking, I think redundancy is only required as far as it helps a sensor determine when it has a faulty reading.

I think you could have a perfectly workable L5 autonomous vehicle with minimal redundancy as long as it could detect when a camera was blinded or damaged, and safely pull over. L5 doesn't require a vehicle to work after being damaged.

I guess, but the question would be how reliable and dependable it would be. An L5 car would not be very useful if it had to pull over frequently because something is wrong.

Redundancy is like having 3 people check the vote tally after an election. The chance of all 3 people making a mistake and getting the vote tally wrong is far less than if you just had 1 person alone doing the vote tally. So redundancy reduces the chances of the car failing which is a good thing. Obviously, if your hardware never failed, you would not need redundancy, but that is not realistic.
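The arithmetic behind that analogy, with made-up numbers and assuming the checks fail independently (which is what diverse, redundant hardware designs try hard to ensure):

```python
# If each independent check misses a problem 1% of the time, the chance that
# every check misses it shrinks geometrically with the number of checks.
p_single = 0.01
for n in (1, 2, 3):
    print(f"{n} check(s): chance all miss = {p_single ** n:g}")
# 1 check(s): chance all miss = 0.01
# 2 check(s): chance all miss = 0.0001
# 3 check(s): chance all miss = 1e-06
```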
 
Optimizer -> That thing that sees you coming, and immediately turns the light from green to yellow at the worst possible moment.

But in all seriousness, that's my main hope with V2I. If it knows I'm coming (with no other cars around), then it won't punish me.

The issue with traffic lights is that if you change them at random intervals, you can trigger a set of circumstances that can stuff the surrounding traffic networks - transitions between peak plans often adjust timings over a period of time to avoid a sudden change. That's why an adaptive system, which responds to immediate and surrounding traffic (the sensors for our system were normally situated at the exit of the preceding junction, so could be quite a distance away), has set bounds that it can adapt within.

Vehicle actuated (VA) lights, which are the type that appear to change to green when they detect a vehicle, often split the desired sequence into several shorter sequences (effectively running double time) which, back to back, give the same result. When a vehicle is detected, one of these shorter sequences is replaced with an alternate sequence, which gives the impression that it has just let you through - the reality is that it is still running to the same timings, which is why there is often a delay in the light changing even if the coast appears clear.

So you may still be 'punished' for turning up at the lights at the wrong moment in time, but what some features of V2I will provide is the ability for you to know a speed range you would need to be travelling to hit the lights during the sweet spot.
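A toy reading of that vehicle-actuated behaviour, purely to illustrate the 'swap one sub-sequence, keep the overall timing' idea described above; real controllers are of course far more involved than this:

```python
# One notional cycle split into two back-to-back sub-sequences of equal length.
NORMAL_HALF    = "main road green, 20 s"
ALTERNATE_HALF = "side road green, 20 s"

def next_cycle(side_road_vehicle_detected):
    """Swap one sub-sequence for the alternate when demand is detected;
    the total cycle time stays the same either way."""
    cycle = [NORMAL_HALF, NORMAL_HALF]
    if side_road_vehicle_detected:
        cycle[1] = ALTERNATE_HALF
    return cycle

print(next_cycle(False))  # ['main road green, 20 s', 'main road green, 20 s']
print(next_cycle(True))   # ['main road green, 20 s', 'side road green, 20 s']
```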
 
@willow_hiller On the issue you raised of redundancy, there are basically 3 types of failures that can happen in an autonomous car that you need to be able to protect against as best you can:
1) Hardware failure (examples: computer crashes, camera breaking, a wire snapping)
2) Sensors blocked (examples: direct sunlight temporarily blinding a camera, dirt or snow covering one or more cameras, a large object blocking the view)
3) Software failure (example: NN fails to recognize an object correctly or mistakes a sign for a traffic light, or software tells the car to steer the wrong way)

You are never going to have a car with zero failures. So the question becomes: what redundancy can help minimize these failures?

In the case of #1, adding duplicate hardware can help reduce hardware failures. Have 2 chips so that if one crashes, the other can take over - which I believe the AP3 computer has. Or have 2 wires so that if one snaps, the other will still work.

In the case of #2, having self-cleaning sensors and different sensors that are not subject to the same weaknesses will help. That is why autonomous cars like Waymo cars have cameras and lidar and radar. If heavy rain makes it difficult for the cameras to see, the radar will still work. Or if direct sunlight blinds the cameras, the lidar will still work. With 3 different types of sensors, the chances of all 3 being unable to do perception is greatly reduced compared to if you have one sensor type.

In the case of #3, writing better software is of course the best solution, but writing software that can account for possible failures will also help. Software that notifies the driver when it detects a failure or loss of confidence helps, since the driver can take over.
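Pulling those three cases together, here is a bare-bones sketch of what the supervising logic might look like. Every class and method name here is hypothetical; it only shows the shape of the checks, not how Tesla or anyone else actually implements them:

```python
class Supervisor:
    """Hypothetical supervision loop: two compute units, sensor health checks,
    and a driver alert / safe-stop path when confidence drops."""

    def __init__(self, primary, secondary, sensors, alert_driver, safe_stop):
        self.primary, self.secondary = primary, secondary
        self.sensors = sensors
        self.alert_driver, self.safe_stop = alert_driver, safe_stop

    def step(self):
        # 1) Hardware failure: fall back to the second computer if the first
        #    stops responding; stop safely if both are gone.
        unit = self.primary if self.primary.heartbeat_ok() else (
               self.secondary if self.secondary.heartbeat_ok() else None)
        if unit is None:
            self.safe_stop()
            return

        # 2) Blocked sensors: count how many channels still give usable data
        #    and warn the driver if perception is degraded.
        healthy = [s for s in self.sensors if s.healthy()]
        if len(healthy) <= len(self.sensors) // 2:
            self.alert_driver("degraded perception")

        # 3) Software confidence: if the driving stack reports low confidence,
        #    ask the driver to take over rather than guessing.
        plan = unit.plan(healthy)
        if plan.confidence < 0.5:
            self.alert_driver("low confidence, please take over")
        else:
            unit.execute(plan)
```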
 
you DO have redundant sensors and other things.
Redundant sensors do not have to be a completely different type of sensor, e.g. having 5 cameras pointed forward provides overlapping redundancy.

the "other things" are redundant power supply (for one) to critical components which Tesla has been implementing throughout the system JPR007 on Twitter
I speak from experience
:rolleyes:
 
As I approach a stoplight (or another car) and watch the visualization screen, the Tesla camera+radar system seems to have significantly less range than other cars. I hope that is a visualization problem and not a range problem. But the maximum following distance cannot be set as far as it can on other cars.

Besides the failure to automatically re-engage the autosteering like other cars, it also does not appreciate it when you correct its line. If AP were a driving student, I'd instruct them to look further ahead and stop fixating on the spot 10 feet in front of the car. In traffic, mine kicks off fairly often when I try to put the car onto the correct path for the upcoming bend or lane split. Other brands are more tolerant of driver input.
At this stage, what you see on the screen is not the extent of what the car "sees". It is simply a sample rendering of some of what we will see when FSD is realized. Remember when this latest update came out Tesla described the visualization as a "taste" of FSD to come. The system itself is much further advanced than what is being rendered at any given time on the screen, or for that matter what the car physically reacts to at this point.

Dan
 
At this stage, what you see on the screen is not the extent of what the car "sees". It is simply a sample rendering of some of what we will see when FSD is realized. Remember when this latest update came out Tesla described the visualization as a "taste" of FSD to come. The system itself is much further advanced than what is being rendered at any given time on the screen, or for that matter what the car physically reacts to at this point.

Dan

But it does puzzle me why my other cars can be set to a longer following distance than a Tesla can. That might be why ACC works at speeds over 100 mph on other brands, or, more importantly, can ID threats sooner.

Or maybe they nerfed it. Who knows? When it brakes for stoplights we will find out.
 
But it does puzzle me why my other cars can be set to a longer following distance than a Tesla can. That might be why ACC works at speeds over 100 mph on other brands, or, more importantly, can ID threats sooner.

Or maybe they nerfed it. Who knows? When it brakes for stoplights we will find out.
I would think the maximum follow distance is an arbitrary distance that Tesla used, not necessarily a function of the operational distance of the radar or cameras.

Dan
 
As I approach a stoplight (or another car) and watch the visualization screen, the Tesla camera+radar system seems to have significantly less range than other cars. I hope that is a visualization problem and not a range problem. But the maximum following distance cannot be set as far as it can on other cars.

Could be an issue with lack of high quality maps.

It's hard for cars to recognize traffic lights. They come in a variety of shapes and sizes, and often multiple lights are visible, so the car has to work out which ones are for its lane. So the usual tactic is to cheat: give it a high-definition map of where all the lights are, so it knows where to look and can ignore things that are bright red/green but not where it knows a traffic light is.

As we saw earlier Tesla don't seem to be able to do that. In a sample image their system was detecting a red sign as a traffic light.

Relying on vision alone, the car will have to get much closer to the light before it can recognize it than if it knows where to look for red/green pixels.
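As an illustration of that 'cheat': a minimal sketch of the map lookup that tells the vision system which mapped lights are relevant right now, so it only searches those parts of the image. The map entries, coordinate frame and thresholds are all invented, and a real HD map plus camera projection is much more involved:

```python
import math

# Hypothetical map entries: world position of each signal head and the lane
# it controls. A real HD map would carry far more than this.
MAPPED_LIGHTS = [
    {"id": "tl_41", "x": 512340.2, "y": 4179021.7, "lane": "ego"},
    {"id": "tl_42", "x": 512343.9, "y": 4179023.1, "lane": "left_turn"},
]

def lights_to_check(ego_x, ego_y, ego_heading_rad, max_range_m=120.0,
                    fov_rad=math.radians(60)):
    """Return only the mapped lights for the ego lane that are ahead of the
    car and within camera range, so the vision system knows where to look
    and can ignore other bright red/green objects."""
    hits = []
    for tl in MAPPED_LIGHTS:
        dx, dy = tl["x"] - ego_x, tl["y"] - ego_y
        dist = math.hypot(dx, dy)
        bearing = math.atan2(dy, dx) - ego_heading_rad
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to +/- pi
        if tl["lane"] == "ego" and dist <= max_range_m and abs(bearing) <= fov_rad / 2:
            hits.append((tl["id"], dist, bearing))
    return hits
```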
 
As someone who worked on traffic light systems and has been on the periphery of V2I development, I can say that many traffic lights (there are various types, some of which change mode at various times of the day) do not know what colour they will be from one second to the next. What they will know is a time frame that a light should stay green, but outside that, all bets are off as the optimiser recalculates its options second by second within a set of parameters. What the car could get from this, though, is a speed range it needs to travel at to guarantee passing at green. For a set of consecutive lights, this speed could also govern what the car needs to do to pass all of them at green - so-called 'green waving'.

This technology benefits more than just CAV and since I was last involved several years back, a few more options will have seen the light of day, albeit in limited form - I recall reading about an app that would pass this information on to the driver of any vehicle.

Another useful thing about V2I is that it can give information that is visually impossible to deduce or predict.
It should be possible to expose this information in real time via a REST / WebSockets API. It doesn't really require any new technology, just a little bit of development.
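For example, a bare-bones read-only endpoint could look something like this, using only the Python standard library. The URL scheme and JSON fields are invented for the sketch; a real deployment would presumably carry something closer to standard SPaT messages, plus authentication and a push/WebSocket channel for low latency:

```python
import json, time
from http.server import BaseHTTPRequestHandler, HTTPServer

def current_phase(signal_id):
    """Placeholder: in reality this would come from the junction controller /
    optimiser, which only commits to timings within its current bounds."""
    return {
        "signal_id": signal_id,
        "state": "green",
        "min_green_remaining_s": 4,    # guaranteed minimum
        "max_green_remaining_s": 18,   # upper bound; optimiser may cut short
        "timestamp": time.time(),
    }

class SignalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /signals/junction-12 -> JSON phase estimate for that signal
        signal_id = self.path.rstrip("/").split("/")[-1]
        body = json.dumps(current_phase(signal_id)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), SignalHandler).serve_forever()
```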