Welcome to Tesla Motors Club

New Self Driving Demonstration Video (Is Tesla Ahead of Schedule?)

Is Tesla ahead of schedule? (on Autonomous cars)


  • Total voters: 28
The car was unsure of itself quite a few times, stopping in illegal and extremely dangerous ways.

I'm not seeing facial-expression analysis to improve probability estimates for questions like: do they see you, do they see other vehicles they may or may not react to appropriately, and how much of their attention is on a new maneuver like turning right or left.

It was sped up so you can't see blinkers or other key signals.

I can't tell from this display whether they calculate each car's speed, attitude, and driver behavior to predict its likely intentions.

I can't tell if it is willing to cross a double yellow line to avoid running over people. Would it cause a sudden-stop accident, maybe killing pedestrians, rather than cross a double yellow?

I can't tell if it honks at dangerous situations.

I can't tell if it promptly begins braking gently, watches the cars behind to see if they react, and keeps monitoring their progress while braking harder, until it is safely and comfortably stopped.

I don't see multilane analysis, lane path probabilities, and other things about driving.


But, it's looking better than the original video.

The less they hand-code and the more they generalize rules, the better: no sudden changes unless it's an emergency; try to anticipate what will happen and plot a course to probable destinations (skip the stop at a nearby store if rush-hour traffic is building, etc.) without hitting anything bad; attempt to follow laws. General rules like that would be better than hand-coding each little case. Pro race drivers and truck drivers could test its knowledge to make certain it is thinking correctly. Throw in chauffeurs and advanced-driving instructors to round it out.
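That "general rules, not hand-coded cases" idea can be sketched as a scored policy. This is purely illustrative, with made-up weights and action fields (nothing to do with Tesla's actual software): each candidate maneuver is scored against a handful of general rules, and the car picks the best one.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    accel: float           # m/s^2 change requested (large magnitude = sudden)
    collision_risk: float  # 0..1, from perception/prediction
    law_violation: float   # 0..1, e.g. 1.0 for crossing a double yellow
    progress: float        # expected progress toward the destination

def score(a: Action) -> float:
    """Weighted sum of general rules; the weights are invented for illustration."""
    return (-100.0 * a.collision_risk   # never hit anything
            - 10.0 * a.law_violation    # attempt to follow laws
            - 2.0 * abs(a.accel)        # no sudden changes unless emergency
            + 1.0 * a.progress)         # still make progress

def choose(actions):
    return max(actions, key=score)

# The double-yellow dilemma from above, as three candidate maneuvers:
candidates = [
    Action("brake hard", accel=-6.0, collision_risk=0.0, law_violation=0.0, progress=0.0),
    Action("cross double yellow", accel=-1.0, collision_risk=0.05, law_violation=1.0, progress=1.0),
    Action("continue in lane", accel=0.0, collision_risk=0.9, law_violation=0.0, progress=1.0),
]
print(choose(candidates).name)  # -> brake hard
```

With these invented weights the car stops rather than cross the double yellow; raise the collision risk of braking (say, a tailgater close behind) and "cross double yellow" wins instead. That trade-off falls out of the general rules with no special case coded for it.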
 
I love naysayers (not) with their continuous real-time "they won't be able to" comments, all the while abandoning their previous "can't"s as they are proved wrong, without so much as an apology. Tesla's products are in continuous development. AP does not need to be perfect, only demonstrably better than some level of present driving. Some say "this will never work in X situation"... maybe they are right; maybe AP will not be offered in Mumbai, say. That is not a fail.
 
The car was unsure of itself quite a few times, stopping in illegal and extremely dangerous ways.

A computer program can be "unsure"? It looked to me like the program did what it was supposed to, and I'd much prefer to have AP2.0 driving other cars on the road than most humans. I also didn't see anything "extremely dangerous", but perhaps you can direct me to the times so I can take another look.
 
The car was unsure of itself quite a few times, stopping in illegal and extremely dangerous ways.


So, this commentary is about what the video didn't show you ("I can't see... I can't tell... I'm not seeing... I don't see... You can't see...") and not necessarily about what AP 2.0 can't do?
 
The car was unsure of itself quite a few times, stopping in illegal and extremely dangerous ways.

Every time it stopped, it was for legitimate reasons (a bicyclist coming up from behind, pedestrians potentially in the road), except once at 1:32 (about 2:32 in the "Paint It Black" version)... I cannot make heads or tails of why it decided to make the turn and then stop again.
 
Every time it stopped, it was for legitimate reasons (a bicyclist coming up from behind, pedestrians potentially in the road), except once at 1:32 (about 2:32 in the "Paint It Black" version)... I cannot make heads or tails of why it decided to make the turn and then stop again.

I think it stopped at that point because it rendered the double yellow line farther to the left than it actually was. That made the oncoming car appear to be on a collision course with the Tesla. Look at the middle frame on the right: you can see the red line overlay is to the left of the double yellow line, which makes the oncoming car (in blue) look too close. At least that's what I can gather from the video.

[Attached image: TeslaSelfDrive234.jpg]

Another thought is that the oncoming car was being marked with both a blue and a green box. Maybe that confused the algorithm.
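The misrendered-lane-line theory can be shown with a toy lateral-geometry check (the coordinates, function name, and convention are mine, purely for illustration; positions are lateral offsets in meters, positive toward the ego lane):

```python
def oncoming_in_my_lane(car_x: float, centerline_x: float) -> bool:
    """Ego lane lies to the right of the centerline: an oncoming car at
    lateral position car_x looks like a conflict if it's right of the
    centerline the system believes in."""
    return car_x > centerline_x

true_centerline = 0.0   # actual double yellow
perceived = -0.8        # overlay rendered too far left, as in the screenshot
oncoming_car = -0.3     # really in its own lane, left of the true line

print(oncoming_in_my_lane(oncoming_car, true_centerline))  # False: no conflict
print(oncoming_in_my_lane(oncoming_car, perceived))        # True: phantom conflict -> stop
```

Same car, same position; shift the perceived centerline 0.8 m left and a perfectly legal oncoming car suddenly reads as an intruder, which would explain the otherwise inexplicable stop.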
 
A computer program can be "unsure"?
Absolutely. It's all about probabilities, and if they get too low, or all the choices are equal, there has to be some default behavior or tie breakers.
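A minimal sketch of what "default behavior or tie breakers" could mean in practice (the threshold, action names, and fallback here are invented for illustration, not from any real Autopilot code):

```python
def pick_action(probs: dict, threshold: float = 0.6, default: str = "slow_and_stop"):
    """Pick the most probable action; if confidence is too low, or the top
    choices are tied, fall back to a safe default. That fallback is the
    behavior an observer would read as the car being 'unsure'."""
    best = max(probs, key=probs.get)
    ranked = sorted(probs.values(), reverse=True)
    too_low = probs[best] < threshold
    tied = len(ranked) > 1 and ranked[0] == ranked[1]
    return default if (too_low or tied) else best

print(pick_action({"turn": 0.45, "stop": 0.40}))  # low confidence -> slow_and_stop
print(pick_action({"turn": 0.80, "stop": 0.20}))  # confident -> turn
```

So the program's behavior is fully determined, yet from outside the car, the low-confidence fallback looks exactly like hesitation.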

Interesting anecdote: while teaching my teen to drive yesterday, on a 40 mph two-lane highway we came across a car parked halfway into the road ahead, with another car approaching in the oncoming lane. My son slowed to a complete stop, unwilling to cross the double yellow line while a car was approaching. I was terrified we were going to be rear-ended (we weren't). Talking the choice over with him afterward, he said he wasn't confident enough to pass the parked car with the other car coming. By his calculation, the probability of not hitting a car was too low, and the right response was to stop until the probability increased and he could pass safely. Experience will increase his confidence and his feel for the amount of space he needs to pass, along with the experience of getting honked at or hit from behind if he stops where others aren't expecting it. Two competing probabilities shaping the driving behavior.
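The anecdote maps neatly onto two competing probabilities. A toy version (the threshold, weighting, and names are invented; a real system would estimate these from perception):

```python
def decide(p_safe_pass: float, p_rear_end_if_stopped: float,
           pass_threshold: float = 0.95) -> str:
    """Pass when confident; otherwise stop, unless the risk of being
    rear-ended while stopped outweighs the risk of passing (naive 1:1
    comparison, just to show two probabilities competing)."""
    if p_safe_pass >= pass_threshold:
        return "pass"
    return "pass" if (1 - p_safe_pass) < p_rear_end_if_stopped else "stop"

print(decide(p_safe_pass=0.70, p_rear_end_if_stopped=0.10))  # novice -> stop
print(decide(p_safe_pass=0.97, p_rear_end_if_stopped=0.10))  # experienced -> pass
```

Experience, in this framing, just raises `p_safe_pass` for the same physical gap, which flips the decision without any new rule being added.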
 
Absolutely. It's all about probabilities, and if they get too low, or all the choices are equal, there has to be some default behavior or tie breakers.

You must have a different definition of "unsure" than the dictionary's, which is "not fixed or certain". If the computer program works on "probabilities" and has "default behavior or tie breakers", as you say, then you're making my point: the program is acting in a fixed, certain way, the way it was programmed to act. As such, it cannot be unsure.