
Has Tesla been working on traffic logic since 10/16 - and if so - how?

Paging the experts such as @verygreen and that other guy on the 2.5 thread. What are your thoughts/guesses as to whether Tesla has been internally working on FSD "driving policy" or "traffic logic" for the last year and if so - how?

I'm curious about what methods will / would work:

1 - Pure reinforcement training: let the "traffic logic" NN drive in shadow mode and punish/reward it based on how its actions correlate with, or deviate from, those of the human customer driver (see the sketch after this list).

2 - Pure hand-coded algorithms that tell the car what to do with the objects the NN has identified?

3 - A mix of 1 & 2?

4 - Training only on company test cars on a private track/simulated city somewhere?

5 - Training a decision-making NN in software simulation at first - then taking that trained NN and doing further training on a real-world private fake city track - then training company cars in the actual real world - then pushing to shadow mode in customer cars.

6 - None of the above, @calisnow should shut his mouth before it isn't just @AnxietyRanger who thinks he is a fool - and quietly contemplate his choice to major in philosophy and econ long ago instead of computer science.
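
To make option 1 concrete, here is a minimal Python sketch of what shadow-mode comparison against the human driver might look like. Everything here (the control fields, the disagreement score, the threshold) is a hypothetical illustration, not anything Tesla has confirmed:

```python
# Hypothetical sketch of option 1: a policy NN runs in shadow mode and is
# scored against what the human driver actually did. Illustration only.
from dataclasses import dataclass

@dataclass
class Controls:
    steering: float      # normalized, -1..1
    acceleration: float  # m/s^2, negative = braking

def disagreement(nn: Controls, human: Controls,
                 steer_weight: float = 1.0, accel_weight: float = 0.5) -> float:
    """Scalar measure of how far the shadow policy deviated from the human."""
    return (steer_weight * abs(nn.steering - human.steering)
            + accel_weight * abs(nn.acceleration - human.acceleration))

def shadow_step(nn_out: Controls, human_out: Controls, log: list,
                threshold: float = 0.2) -> float:
    """One timestep in shadow mode: never actuate, just score and log."""
    score = disagreement(nn_out, human_out)
    if score > threshold:
        # Only interesting (high-disagreement) frames would be kept as a
        # training signal -- the punish/reward idea of option 1.
        log.append({"nn": nn_out, "human": human_out, "score": score})
    return score

# Example: the NN wanted to brake harder than the driver did.
log: list = []
shadow_step(Controls(0.05, -3.0), Controls(0.02, -1.0), log)
print(len(log), "disagreement frames logged")
```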
 
6 - None of the above, @calisnow should shut his mouth before it isn't just @AnxietyRanger who thinks he is a fool - and quietly contemplate his choice to major in philosophy and econ long ago instead of computer science.

You do like to bring me up, don't you? ;) I don't think you are a fool. Certainly you err on the pro-Tesla side where I err on the anti-Tesla side - and you are wrong about me - but overall you post balanced, objective posts when paranoia doesn't overtake you.

The question of driving policy and the status of the different players on that is an interesting one. I don't claim to know the answer. What we've seen of the Musk/Hotz pair in public so far is tackling the visual, we've heard MobilEye is working on driving policy for EyeQ4 (which, one would expect, would affect many of the traditional automotive manufacturers), and Waymo is of course doing its thing... but so far very little seems to be known about the real status of these players on this.
 
I don't think you are a fool.

My quip was false modesty - comic relief.

you post balanced, objective posts when paranoia doesn't overtake you.

GASLIGHTING again - one of your favorite techniques to try to disarm your interlocutor. You can't argue with the content so you assign an emotional/psychological state to your interlocutor without supporting evidence. I'm just the one starting to say publicly and illustrate in detail what many people say about you in private.

There are people who would like to say what I'm saying but can't because of the positions they are in - they need to appear neutral and remain "above the fray." Somebody has to do it.
 
You can't argue with the content so you assign an emotional/psychological state to your interlocutor without supporting evidence.

It is indeed very hard to argue against claims that are not true. Whatever I do, your reinforcement bias strengthens your beliefs. Say I remain silent: you take silence as "guilt" because I haven't been heard from for a while - and you have called out my silence many times. If I defend myself with extensive posting history and personal anecdotes, you consider that a "guilty" trait as well - as you did today. What it would take, in this case, to break the cycle is a bit of goodwill on your part. I can't do it for you.

There are people who would like to say what I'm saying but can't because of the positions they are in - they need to appear neutral and remain "above the fray." Somebody has to do it.

And there are people doing the opposite, as always. I get PMs of support as well; not everyone wants to get involved, and I respect that. Many people have known me on this forum for a very, very long time. You apparently didn't spend time on the Model X side over the years, so I am an unknown or a recent online acquaintance to you, but not to all. You keep ignoring all the posting history I present to you from my past, you ignored @bonnie vouching for me, and you ignore my personal driving scenario when assessing my reports.

Very, very hard to argue with anything since you ignore all points counter to your beliefs.
 
Ooooooo Weee
 

IMHO, Tesla and others are feeling their way in the dark about traffic logic. To get an NN to do this would require a level of sophistication that I think no one has reached yet. To properly anticipate what a vehicle is going to do, like a human would, requires a lot of real-world knowledge. Here's a simple example: the high-level concept of a lane that is ending and cars taking turns merging into the remaining lane. Ideally, you'd like an NN (or several linked NNs) to know what should happen, and how you should drive yourself. But no one knows how to build such an NN.

AI and NN research haven't come anywhere close to the level needed for an NN to decide how to drive a car. So we are left with hard-coded, rule-based algorithms. Can you traditionally program enough rules to make self-driving somewhat safe? I don't know.

Most people don't realize that the neural nets that Tesla, Mobileye and others are using only do image recognition: identifying what is a car, what is a lane marker, what is a stop sign, what is a traffic light, what is a speed sign. Actually driving the car (selecting the speed, slowing down for curves, positioning the car within the lane) is all done by traditional programming. And nothing in the research literature indicates this is going to change anytime soon.
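
To illustrate that split with a toy example: below is a minimal Python sketch in which a (stubbed) perception network only produces labels, and plain hand-written rules decide the speed from those labels. The class names, distances and rules are invented purely for illustration and are not any vendor's actual design:

```python
# Toy sketch of the "NN for recognition, traditional code for driving" split.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "car", "lane_marker", "stop_sign", "speed_sign"
    distance_m: float
    value: float = 0.0  # e.g. posted speed for a speed sign

def perception_nn(camera_frame) -> list:
    """Stand-in for the image-recognition network: frame in, labelled objects out."""
    raise NotImplementedError  # the only learned part of this stack

def plan_speed(detections: list, current_limit_kph: float) -> float:
    """Hand-coded driving logic operating purely on the NN's labels."""
    target = current_limit_kph
    for d in detections:
        if d.label == "speed_sign":
            target = d.value                    # adopt the posted limit
        elif d.label == "stop_sign" and d.distance_m < 50:
            target = 0.0                        # rule: stop for stop signs
        elif d.label == "car" and d.distance_m < 30:
            target = min(target, 30.0)          # rule: slow down close behind a car
    return target

# Hand-constructed detections standing in for NN output:
dets = [Detection("speed_sign", 80.0, value=60.0), Detection("car", 25.0)]
print(plan_speed(dets, current_limit_kph=100.0))  # -> 30.0
```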
 
Your point (1) seems like a bad idea. "Never trust user input" is a good adage in the computer world. I guess with enough users you could smooth things out? But then what happens when a law changes the way people are supposed to drive*? Retraining the NN would be nearly impossible.

Then (2) seems fine for L2/L3, probably not for L4, and certainly not for FSD.

I have trouble imagining how you could mix (1) and (2). Either the NN sends control commands (steering, brakes, accelerator) or it doesn't. And if it does, I can't see how a traditional program could second-guess it.

That leaves (4) and (5). I guess (5) is the more promising approach, but I can't really imagine how realistic the simulation would have to be (or whether we are capable of building one that is sufficiently realistic).


*Example: here in Belgium, when a two-lane road narrows to one lane, people used to move into the surviving lane long before the actual merge point (so that one of the two lanes was effectively unused for as long as a few km). At some point the law made clear how the merge is supposed to work (use all available lanes, and at the merge point one car from the 1st lane goes, then one from the 2nd, then the 1st, etc.), but people resisted the change. How can an NN that basically replicates what most drivers do learn to do the right thing when most drivers do the wrong thing?
 
*Example: here in Belgium, when a two-lane road narrows to one lane, people used to move into the surviving lane long before the actual merge point (so that one of the two lanes was effectively unused for as long as a few km). At some point the law made clear how the merge is supposed to work (use all available lanes, and at the merge point one car from the 1st lane goes, then one from the 2nd, then the 1st, etc.), but people resisted the change. How can an NN that basically replicates what most drivers do learn to do the right thing when most drivers do the wrong thing?

People in the US will drive well past the lane ending (onto the shoulder) to get 1-2 cars ahead. In the US, driving policy will be easier if you train a NN what not to do...
 
IMHO, Tesla and others are feeling their way in the dark about traffic logic. To get an NN to do this would require a level of sophistication that I think no one has reached yet. To properly anticipate what a vehicle is going to do, like a human would, requires a lot of real-world knowledge. Here's a simple example: the high-level concept of a lane that is ending and cars taking turns merging into the remaining lane. Ideally, you'd like an NN (or several linked NNs) to know what should happen, and how you should drive yourself. But no one knows how to build such an NN.

AI and NN research haven't come anywhere close to the level needed for an NN to decide how to drive a car. So we are left with hard-coded, rule-based algorithms. Can you traditionally program enough rules to make self-driving somewhat safe? I don't know.

Most people don't realize that the neural nets that Tesla, Mobileye and others are using only do image recognition: identifying what is a car, what is a lane marker, what is a stop sign, what is a traffic light, what is a speed sign. Actually driving the car (selecting the speed, slowing down for curves, positioning the car within the lane) is all done by traditional programming. And nothing in the research literature indicates this is going to change anytime soon.
Very interesting. How does this mesh with the experience of Google DeepMind's version 1 and version 2? Version 1 learned from other players; version 2 just played against itself.

Is it possible that the AI (NN) is training itself in simulations, and is now cutting itself off at every opportunity or hiding in virtual blind spots to prevent itself from passing? :oops:
 
IMHO, Tesla and others are feeling their way in the dark about traffic logic. To get an NN to do this would require a level of sophistication that I think no one has reached yet. To properly anticipate what a vehicle is going to do, like a human would, requires a lot of real-world knowledge. Here's a simple example: the high-level concept of a lane that is ending and cars taking turns merging into the remaining lane. Ideally, you'd like an NN (or several linked NNs) to know what should happen, and how you should drive yourself. But no one knows how to build such an NN.

AI and NN research haven't come anywhere close to the level needed for an NN to decide how to drive a car. So we are left with hard-coded, rule-based algorithms. Can you traditionally program enough rules to make self-driving somewhat safe? I don't know.

Most people don't realize that the neural nets that Tesla, Mobileye and others are using only do image recognition: identifying what is a car, what is a lane marker, what is a stop sign, what is a traffic light, what is a speed sign. Actually driving the car (selecting the speed, slowing down for curves, positioning the car within the lane) is all done by traditional programming. And nothing in the research literature indicates this is going to change anytime soon.

Not sure about Waymo, but doesn't the DrivePX software work without hard coding?

This was one of the reasons why I initially thought Tesla would go all-in on Nvidia for FSD, although now it seems clear that they will not tie themselves so tightly to an external supplier for key technologies.
 
I have a few thoughts on this. I'm an experienced programmer with a history of solving things that most people deem hard or impossible. Naturally I have a great interest in Tesla's FSD project. I'm thinking of writing an article on Tesla's FSD soon.

Now there are many ways of doing this, but I will summarize the two most common.

1. End-to-end deep neural network, from inputs to driver controls. With a huge amount of training data this solution will seemingly get pretty far pretty fast, but it is basically a big black box. Maybe you get good reliability one day with enough tweaking, but iterating the software further is practically impossible for now. It will probably become possible in the future when NNs generalise in a human-intelligence manner, but not in 2017 in any way other than as experiments.

Which leaves us at option 2:

2. Small specialized neural networks, mainly for object recognition, but heavily assisted by regular logic/algorithms. This has the following advantages:
- Parts can be debugged, replaced, iterated and tested separately.
- The driving behaviour will be very predictable.

Now, one question might be: how do you, with regular logic, account for billions of different situations that the car might never have seen before? I will answer that soon, because the solution is rather simple (though extensive).

Let's break this problem into a few chunks.

First let's start with the basic part:
- Where can the car drive? This is a complex question, but let's start really simply by asking: where is it physically possible to drive? This is one of the hardest chunks to solve, and I believe Tesla has been working on it for a long time.

This consists of using the cameras, textures and geometry lines, assisted by neural networks, to map out where in your surroundings it is physically possible to drive. I have a few ideas for how that can be implemented, but it is a lot of work and requires lots of data and testing.

Now that you know where it is possible to drive and have mapped this area into a 3D model, you have already reduced the problem of FSD a little. You now know where you can't drive, which is usually most of your surrounding area.
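
A rough Python sketch of the kind of data structure I mean, a top-down grid of cells currently believed drivable (the grid size, resolution and fake segmentation points are arbitrary, just to show the idea):

```python
# Toy top-down occupancy grid for "where is it physically possible to drive".
import numpy as np

GRID_M = 40    # 40 m x 40 m area around the car
CELL_M = 0.5   # 0.5 m resolution

def empty_grid() -> np.ndarray:
    n = int(GRID_M / CELL_M)
    return np.zeros((n, n), dtype=bool)   # False = not known to be drivable

def mark_drivable(grid: np.ndarray, points_xy_m) -> None:
    """Mark ground points (e.g. from a segmentation NN + depth) as drivable."""
    n = grid.shape[0]
    for x, y in points_xy_m:
        i = int((x + GRID_M / 2) / CELL_M)
        j = int((y + GRID_M / 2) / CELL_M)
        if 0 <= i < n and 0 <= j < n:
            grid[i, j] = True

grid = empty_grid()
mark_drivable(grid, [(0.0, 2.0), (0.0, 2.5), (0.5, 2.0)])  # fake "road" points
print(grid.sum(), "cells currently believed drivable")      # -> 3
```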

The next step is splitting this drivable area into multiple categories/groups. This is another complex operation, and each category requires a separate set of NNs, helped by algorithms, to be safely recognized.
- Preferred
- Unpreferred
- Low speed only
- Illegal
- Etc.

Generally the preferred drivable area is your lane, bordered by lane markings and unpreferred areas. I'm going to write a document later explaining in more detail how these categories can be mapped.
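
A minimal sketch of how these categories could feed a planner: each class gets a cost so the car prefers its own lane and only takes illegal ground as a last resort. The categories and the costs are invented purely for illustration:

```python
# Illustrative cost per category of drivable area.
from enum import Enum

class AreaClass(Enum):
    PREFERRED = 1        # your own lane
    UNPREFERRED = 2      # adjacent lane, shoulder
    LOW_SPEED_ONLY = 3   # grass, gravel verge
    ILLEGAL = 4          # oncoming lane, sidewalk

AREA_COST = {
    AreaClass.PREFERRED: 1.0,
    AreaClass.UNPREFERRED: 5.0,
    AreaClass.LOW_SPEED_ONLY: 50.0,
    AreaClass.ILLEGAL: 1000.0,   # only chosen if every alternative is worse
}

def path_cost(cells, cell_len_m: float = 0.5) -> float:
    """Cost of a candidate path, expressed as the classes of cells it crosses."""
    return sum(AREA_COST[c] * cell_len_m for c in cells)

stay_in_lane = [AreaClass.PREFERRED] * 10
swerve_onto_grass = [AreaClass.PREFERRED] * 4 + [AreaClass.LOW_SPEED_ONLY] * 6
print(path_cost(stay_in_lane) < path_cost(swerve_onto_grass))  # True
```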

The next step is objects. The 3D model built for mapping out the drivable area is retained, modified and extended as long as the car has not moved out of this area.

Objects come and go, but generic obstructions of the drivable area must be recognized as non-terrain, as objects. This is also a really hard part, but fortunately we have now mentioned all three hard parts. Unfortunately you cannot safely drive much at all before you have solved these. Or you can cheat and buy an expensive device that does most of this 3D mapping for you (lidar) and save most of the hard camera work up to this point.

Now we have this:
- An overview of the drivable areas.
- We know where the objects are and what space they occupy.
- Using the past few seconds of data we also know the velocity of these objects.

The next step is predicting where these objects will be in the future, and the uncertainties in that prediction. A moving object whose type we don't know has a big uncertainty, because we don't know how it moves. If all objects were left unclassified, the car would not be able to drive anything but extremely slowly and cautiously.
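
For the velocity part, a tiny sketch (constant-velocity assumption, made-up uncertainty growth) of how the last few observed positions give you a velocity and a prediction with an uncertainty radius:

```python
# Estimate an object's velocity from recent positions and predict ahead.
def estimate_velocity(positions_xy, dt_s: float):
    """Average velocity (m/s) over consecutive observations dt_s apart."""
    (x0, y0), (x1, y1) = positions_xy[0], positions_xy[-1]
    steps = len(positions_xy) - 1
    return ((x1 - x0) / (steps * dt_s), (y1 - y0) / (steps * dt_s))

def predict(position_xy, velocity_xy, horizon_s: float, growth_m_per_s: float):
    """Constant-velocity prediction plus an uncertainty radius growing with time."""
    x, y = position_xy
    vx, vy = velocity_xy
    return (x + vx * horizon_s, y + vy * horizon_s), growth_m_per_s * horizon_s

v = estimate_velocity([(0.0, 0.0), (0.3, 0.0), (0.6, 0.0)], dt_s=0.1)  # ~3 m/s forward
print(predict((0.6, 0.0), v, horizon_s=2.0, growth_m_per_s=0.5))
```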

So we're interested in classifying these objects as specifically as possible. There are fairly common techniques for recognizing:
- Car objects
- Human objects
- Children
- Bike objects
- Similar...

Now take these objects and feed them into their own movement predictors. A car next to your lane is likely to continue following its lane; there is no need to assume it might swerve into your lane unless it is signaling a turn or closing in on your lane. Cars in front of you on the highway are predicted to still be ahead in a few seconds, because they're cars and they have a velocity. A human is likely to continue walking the same way, but has a bigger uncertainty. Children have a big uncertainty in predicted movement.
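
One way to express those per-class predictors is simply a table of assumptions per object class: how fast it can plausibly move and how quickly its position uncertainty grows. All the numbers below are invented for illustration:

```python
# Per-class movement assumptions: (max plausible speed m/s, uncertainty growth m/s).
CLASS_MODEL = {
    "car_in_own_lane": (40.0, 0.5),   # likely to keep following its lane
    "car_signaling":   (40.0, 2.0),   # may change lanes soon
    "pedestrian":      (2.5,  1.0),
    "child":           (3.0,  3.0),   # small, fast direction changes
    "cyclist":         (10.0, 1.5),
    "unknown":         (40.0, 5.0),   # worst case until classified
}

def uncertainty_radius(obj_class: str, horizon_s: float) -> float:
    """How large a circle the object could occupy after horizon_s seconds."""
    _, growth = CLASS_MODEL.get(obj_class, CLASS_MODEL["unknown"])
    return growth * horizon_s

# An unclassified object forces far more caution than a recognised lane-following car:
print(uncertainty_radius("unknown", 2.0), uncertainty_radius("car_in_own_lane", 2.0))
```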

Now calculate your own vehicle's preferred path based on the drivable area as far as the cameras can see, plus signs and markings. Every area you can't see is flagged as uncertain. Do the future prediction for the other objects and look for intersects and potential intersects (uncertainty paths). Group the intersects into a few categories and process them. Adapt your own planned path to a low-risk solution that avoids all certain intersects and maintains an acceptable risk for uncertain intersects.
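
The intersect check itself can be sketched as sampling your planned path and each object's predicted position over time, and flagging any moment where the gap is smaller than the object's uncertainty radius plus a safety margin (again, only an illustration of the idea, not a real planner):

```python
# Flag the first time a predicted object conflicts with the planned ego path.
import math

def first_intersect(ego_path, obj_path, obj_radius_fn, margin_m: float = 1.5):
    """ego_path / obj_path: lists of (t_s, x, y). obj_radius_fn(t) -> metres."""
    for (t, ex, ey), (_, ox, oy) in zip(ego_path, obj_path):
        gap = math.hypot(ex - ox, ey - oy)
        if gap < obj_radius_fn(t) + margin_m:
            return t          # first time at which the paths conflict
    return None

ego = [(t, 10.0 * t, 0.0) for t in (0.5, 1.0, 1.5, 2.0)]              # 10 m/s straight ahead
merging_car = [(t, 9.5 * t, 3.0 - 1.5 * t) for t in (0.5, 1.0, 1.5, 2.0)]
print(first_intersect(ego, merging_car, lambda t: 0.5 * t))            # -> 1.0 (conflict ~1 s ahead)
```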

Example of adaptations:
1. You're on a highway. Several vehicles approach from the right on an entry lane. Because they're moving in an ending lane, the path predictor will plan their paths intersecting with your planned path. Every drivable path is considered, and either reducing speed or changing to another lane (a less preferred area) is chosen to avoid the intersect.

2. Children are playing by the road. The path predictor gives them a big uncertainty circle with a max speed of N. Driving speed is reduced so that the stopping distance stays below the distance to the child, accounting for the worst-case movement uncertainty (a worked example follows these examples). The possible future intersect is reduced to a minimum.

3. A man jumps out in front of the car with oncoming traffic. Every preferred drivable path results in a head-on crash or killing the man. Choose the illegal path, or the grass outside the road (unpreferred terrain, slow speed only), as the most likely successful outcome.
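
Example 2 can be made concrete with a small worked calculation: pick the highest speed whose stopping distance stays below the distance to where the child could be in the worst case. The deceleration, reaction time and horizon here are illustrative assumptions only, and this is just one conservative reading of the rule:

```python
# Worked version of example 2: cap the speed so the car can always stop short
# of the child's worst-case position.
def stopping_distance_m(speed_mps: float, decel_mps2: float = 6.0,
                        reaction_s: float = 0.3) -> float:
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def max_safe_speed_mps(dist_to_child_m: float, child_max_speed_mps: float,
                       horizon_s: float = 2.0) -> float:
    # Worst case: the child moves toward the road for the whole horizon.
    usable_m = dist_to_child_m - child_max_speed_mps * horizon_s
    speed = 0.0
    while stopping_distance_m(speed + 0.1) < usable_m:
        speed += 0.1
    return speed

print(round(max_safe_speed_mps(dist_to_child_m=20.0, child_max_speed_mps=2.0), 1))
# -> about 12 m/s (~44 km/h) with these assumed numbers
```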


Now the last step is creating artificial objects in your scene that do not actually exist. An engine works out every possible place an object could be in the unseen areas of your scene and predicts its path in the same manner as for a real object.

This is, for example, how you work out that the opposing lane is a dangerous place to be before a blind turn, because a potential car could be right around the turn approaching at 80 km/h, causing an uncertainty intersection. Result: don't pass another car unless you can see that it's clear.

Or that a building corner should be passed slowly because someone might be walking around the corner, causing an uncertainty intersect. Result: the car slows down to walking speed, OR chooses a different path further from the wall at higher speed.
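
The occluded-corner case reduces to a simple worst-case timing question: could something hidden just behind the occlusion reach my path before I have cleared it? A tiny sketch, with illustrative speeds:

```python
# Phantom-object check for an occluded corner.
def occlusion_is_safe(dist_to_occlusion_m: float, ego_speed_mps: float,
                      dist_to_clear_m: float, hidden_speed_mps: float) -> bool:
    """Could a hidden object emerging at hidden_speed reach us before we pass?"""
    time_to_clear_s = dist_to_clear_m / ego_speed_mps
    hidden_travel_m = hidden_speed_mps * time_to_clear_s
    return hidden_travel_m < dist_to_occlusion_m

# Blind building corner 1 m from our path; assume a pedestrian (2 m/s) behind it.
print(occlusion_is_safe(dist_to_occlusion_m=1.0, ego_speed_mps=8.0,
                        dist_to_clear_m=12.0, hidden_speed_mps=2.0))  # False -> slow down
```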



This post got quite long, and I've not gone into all the details. But my conclusion is that self-driving in a safe manner that can actually handle every possible situation is not unthinkable. I have yet to think of a situation this FSD would not handle. Tesla chose the hard path of going all vision, but this might actually work out!
 
Once all cars have that capability, that man is going to be jumping out in front of cars to watch them drive off the road, for his own entertainment!!

Except before the car even recognizes people, it has solved step 2.1 (in ChrML's algorithm), which is: where can the car actually drive? It knows what part of the road is a drivable area. So the autonomous car will either:
  • Steer around the stupid man
  • Brake to lessen the impact on the stupid man
  • Or, if enough information is available, skid off the road avoiding impact with the stupid man, possibly causing slight damage to property
Adding stupid-person possibilities into the programming will be necessary. However, with the amount of data collection coupled with recording, the stupid people will be on camera.
 
Once all cars have that capability, that man is going to be jumping out in front of cars to watch them drive off the road, for his own entertainment!!

Interesting.

Kind of like this moment:

[image: e6T3EsE.jpg]


Which I guess would be a popular pastime outside many a school gate. ;)

But if all cars have the capability then the behaviour of "traffic" can be different. Average speeds can be slower and average separation between moving vehicles can be greater.

Should traffic remain "dangerous" in the broadest sense?

Should AI controlled traffic be like a good Terminator - it's not going to kill you, but it will go for your kneecaps :D
 
Except before the car even recognizes people, it has solved step 2.1 (in ChrML's algorithm), which is: where can the car actually drive? It knows what part of the road is a drivable area. So the autonomous car will either:
  • Steer around the stupid man
  • Brake to lessen the impact on the stupid man
  • Or, if enough information is available, skid off the road avoiding impact with the stupid man, possibly causing slight damage to property
Adding stupid-person possibilities into the programming will be necessary. However, with the amount of data collection coupled with recording, the stupid people will be on camera.
Yeah. Also, jumping in front of self-driving cars would be stupid, because in addition to being recorded on video, there is a risk of getting hit. When the risk of killing the passenger exceeds the risk of killing the man (once humans are recognized as an object type), and there is no other alternative, the car will choose to hit the stupid man, but brake as much as possible to reduce the impact.

It is even worse if you have two passengers, as the current self-driving laws explicitly state that the car should choose the path of least risk for the fewest number of human lives. That will lower the threshold for running over the man if driving off the road to avoid him poses a significant risk to the two passengers.

In some ways you can jump in front of human drivers too and cause similar effects, except the outcome depends more on the reaction time, emotions and instincts of the driver.
 
But if all cars have the capability then the behaviour of "traffic" can be different. Average speeds can be slower and average separation between moving vehicles can be greater.

Should traffic remain "dangerous" in the broadest sense?
In the current sense, yes, the self-driving vehicle will probably be more careful, like you say, because using the algorithms above it knows about potential risks that a regular driver might not have thought of.

I didn't mention this in the post above, but in the long term you can add vehicle-to-vehicle communication to this algorithm. With electronic signatures you can send low-latency, trusted information to the cars around you over the new 5G networks designed for this use. You can send information about your planned path and about what you have seen ahead.

This will give the object predictors of other vehicles accurate information to work with, as well as extending the drivable-area map with more info. The result is that traffic can actually be tighter and faster in areas where the vehicles around you are "trusted" and provide information. Less uncertainty is the key here: the other vehicles provide more info than your own vehicle can see by itself.
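
Just to sketch the shape of such a V2V message (the fields and the shared-key signing are invented purely for illustration; real V2V standards use certificates and different formats):

```python
# Toy signed V2V broadcast: a neighbour declares its planned path so the
# receiver can shrink that object's uncertainty instead of guessing.
import hashlib, hmac, json

SHARED_KEY = b"demo-key-only"   # illustration only; real systems use certificates

def sign(msg: dict) -> str:
    payload = json.dumps(msg, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(msg: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(msg), signature)

planned_path = {
    "vehicle_id": "V123",
    "path": [[0.0, 0.0], [10.0, 0.2], [20.0, 1.5]],   # next few seconds, metres
    "hazard_ahead": "stopped truck at +180 m",
}
sig = sign(planned_path)

if verify(planned_path, sig):
    print("trusted: use neighbour's declared path, shrink its uncertainty radius")
```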

This is not a requirement for self-driving cars, but a natural extension once they become widespread, to reduce traffic congestion and improve safety.
 