Follow-the-leader discussion out of Market Action

False. You can assert false claims all you like, but they're still false. You have pretty much demonstrated in this conversation that you don't know how to drive, so I suggest you shut up before you make yourself look any stupider. I've already explained what works better.

You’ve demonstrated you don’t understand simple probabilities. At any given moment, a random driver on the road is VASTLY more likely to be making a correct/safe choice. This isn’t even debatable.
 
False. You can assert false claims all you like, but they're still false. You have pretty much demonstrated in this conversation that you don't know how to drive, so I suggest you shut up before you make yourself look any stupider. I've already explained what works better.

Understand your point, but sometimes the lines don't work... (and this is the market thread... for now)

30 mph, blizzard, followed semi-trailer lights home. But if they started tilting, I would brake quickly and steer away from the lean.

Highway through Georgia in torrential downpour. Kept lead car tail lights barely in view.

If the car in front keeps moving, it's a good sign. If you can't see a lead car, you don't know what's ahead. If cars are swerving, it might be a good idea to do so too.
 
Also known as "following too close"

Actually you want to be far enough back to detect the pothole on your own even if the car in front of you hits it directly. Particularly if it does.

Again, we're talking a few mph here. Think bumper-to-bumper traffic.
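
For what it's worth, here's a back-of-the-envelope sketch of the "far enough back to spot the hazard yourself" point, even at low speed (the reaction time and margin are made-up illustrative numbers, nothing measured):

```python
# Rough sketch: how far back do you need to be to react to a road hazard
# on your own, independent of what the lead car does?
# Reaction time and margin are illustrative guesses, not measured values.

def min_gap_m(speed_mph, reaction_s=1.5, margin_m=2.0):
    """Distance covered during driver reaction time, plus a fixed margin."""
    speed_ms = speed_mph * 0.44704          # mph -> m/s
    return speed_ms * reaction_s + margin_m

for mph in (3, 10, 30, 65):
    print(f"{mph:>3} mph: keep roughly {min_gap_m(mph):.1f} m of gap")

# At bumper-to-bumper speed the gap is a couple of car lengths; at highway
# speed it balloons, which is the point of not outsourcing hazard detection
# to the car ahead.
```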

If that's the best they can do, it's not ready to use. And I don't think it's the best they can do; this isn't a computing power problem, this is a fundamental design error. This is *bad driving*. If I saw a human following this algorithm (and to be fair, I have), I would tell the human to cut it out.

I'm glad the instrument cluster shows what AP is "thinking", but it's doing a crap job, because it's been programmed by incompetent drivers.

Primary guidance when driving should be, first, the *detected stationary objects* which you are avoiding, followed by the *calculated blind spots* which you are *also* avoiding (because they might contain things you can't see). This is how I learned to drive. After that you start dealing with such things as moving objects which you are avoiding, and finally the route you're actually trying to take.

I'm glad that there are so many people here who seem to have a good understanding of how Tesla's Autosteer is currently working. As currently done, it's really unacceptable and should be scrapped. Thankfully, it actually seems like a simple problem to get a decent system working -- the error is that they've told it to do the wrong thing, so tell it to do the right thing and you can probably get it working.
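
To make the priority ordering above concrete, here's a minimal sketch of how it could be encoded as a path-scoring function. The weights, field names, and structure are invented purely for illustration; this is not anything Tesla has published:

```python
# Minimal sketch of the priority ordering described above:
# 1) detected stationary objects, 2) calculated blind spots,
# 3) moving objects, 4) the route you actually want.
# All weights are invented for illustration.

from dataclasses import dataclass

@dataclass
class CandidatePath:
    dist_to_stationary_m: float   # closest approach to any detected fixed object
    blind_spot_overlap_m2: float  # area of unobservable space the path enters
    dist_to_moving_m: float       # closest predicted approach to moving objects
    route_deviation_m: float      # lateral deviation from the desired route

def path_cost(p: CandidatePath) -> float:
    """Lower is better. Safety terms dominate; the route is the weakest term."""
    cost  = 1000.0 / max(p.dist_to_stationary_m, 0.1)   # never hit fixed objects
    cost +=  100.0 * p.blind_spot_overlap_m2            # avoid space you can't see into
    cost +=   10.0 / max(p.dist_to_moving_m, 0.1)       # then moving traffic
    cost +=    1.0 * p.route_deviation_m                # only then, the route
    return cost

a = CandidatePath(0.5, 0.0, 20.0, 0.0)   # hugs a barrier but stays on route
b = CandidatePath(5.0, 0.0, 20.0, 1.5)   # drifts off-route to keep clearance
print(path_cost(a), path_cost(b))        # b wins despite the route deviation
```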

Well you seem to have it all figured out. Why not start your own automated driving system? Sounds like it should be pretty easy for you.
 
  • Funny
Reactions: EinSV
"Assuming the lead car knows what they are doing" is precisely the mistake in the design here.

Today's automated driving systems can generally be divided into two parts. The first is perception, which is in charge of modeling the car's surroundings. The second is the decision-making system, where the car makes predictions and decides what to do next.

The first part relies extensively on deep learning techniques. The second part, however, mostly uses traditional algorithms and hardly ever involves any deep learning.

The first part is often drastically inferior to the human perception system. What you said about using mailboxes and the texture of the road to identify where the road is would be very challenging for a computer.

The second part can only rely on the 3D model of the surroundings produced by the first part. Given how incomplete that model is, following the car in front is often the best bet.
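
A minimal sketch of that two-part split, including the fallback you describe (all names and thresholds here are invented for illustration; this is not Tesla's actual architecture or code):

```python
# Illustrative two-stage structure: a learned perception stage producing a
# world model, and a rule-based planner consuming it. Names and thresholds
# are made up; a real perception stage would be a neural network, not a stub.

from dataclasses import dataclass
from typing import Optional

@dataclass
class WorldModel:
    lane_center_offset_m: Optional[float]  # None if lanes not detected
    lane_confidence: float                 # 0..1 from the perception stage
    lead_car_offset_m: Optional[float]     # lateral offset of the car ahead

def plan_steering(world: WorldModel) -> float:
    """Traditional, hand-written decision logic (no deep learning here)."""
    if world.lane_center_offset_m is not None and world.lane_confidence > 0.7:
        # Normal case: steer toward the lane center the perception stage found.
        return -0.5 * world.lane_center_offset_m
    if world.lead_car_offset_m is not None:
        # Degraded case: the 3D model is too incomplete, so the planner
        # falls back to tracking the lead car -- the behavior being debated.
        return -0.3 * world.lead_car_offset_m
    return 0.0  # nothing to go on: hold the wheel straight and warn the driver

print(plan_steering(WorldModel(0.4, 0.9, None)))   # lane-based steering
print(plan_steering(WorldModel(None, 0.1, -0.2)))  # follow-the-leader fallback
```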
 
I am pretty skeptical that we (the folks on this thread) can figure out in the abstract the optimal algorithm for anything that AP does -- that requires an enormous amount of validating data.

On a related note, there is some very interesting speculation in the thread below that the "simple path" for on ramp to off ramp that Elon referred to in the shareholder meeting is (in my lay terms) a relatively uncomplicated neural net (and supporting code) deciding the best way to handle the challenge with minimal hard-coded instructions.

Two Paths (AP On-Ramp to Off-Ramp)
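
Purely as speculation in the same spirit as that thread, a "relatively uncomplicated neural net plus supporting code" could be as small as the toy below (untrained weights, made-up features; a shape sketch, not anything Tesla has described):

```python
# Toy illustration of a small policy net: a handful of perception features
# in, a steering adjustment out, with only a thin layer of hard-coded glue.
# Weights here are random/untrained; this is a shape sketch, not a model.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)) * 0.1, np.zeros(1)

def policy(features):
    """features: [lane_offset, lane_confidence, lead_car_offset, curvature]"""
    h = np.tanh(features @ W1 + b1)
    return float(np.tanh(h @ W2 + b2)[0])   # steering command in [-1, 1]

def step(features):
    # The hard-coded "supporting code" shrinks to a takeover check and a clamp.
    if features[1] < 0.2:                   # lane confidence too low: ask the human
        raise RuntimeError("take over")
    return max(-0.3, min(0.3, policy(features)))  # rate-limit the net's output

print(step(np.array([0.4, 0.9, 0.0, 0.01])))
```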
 
Last edited:
No he isn't.

Yes, he is.

(etc)

I'd argue that there are times when the lead vehicle may be the best thing to follow - but also that these times are almost certainly not a good time for even a competent L5 autonomous system, never mind an L2/L3 one, to be operating, or really even a human (torrential downpour, blizzard, etc.). Sometimes the correct path is not the path marked on the road, if the markings are even visible, but in those cases I would not expect even L5 cars, or some humans, to handle it correctly. Such scenarios are generally well beyond being corner cases.

Following the lead car briefly is not necessarily a bad tactic, generally, but any time it's being relied on is probably a good time to back off and gain better visibility of the road ahead / around, and warning the human they may need to take over with zero notice is a good idea too.

/OT
 
  • Love
  • Like
Reactions: neroden and 22522
I suppose this discussion should be in "the other discussion thread" :rolleyes: But anyway...
I guess car following IS part of my own subconscious driving algorithm. But there are a lot of scenarios where I automatically switch to other visual cues.

At this point in the evolution of self-driving, autopilot, whatever, I always think about the potential for malfeasance. It's foremost in my mind when you consider unoccupied vehicles off on their own to pick up a fare or return home, etc. "Ooo, an unoccupied car! Let's have some fun!" I've never gotten past that potential hazard with unoccupied FSD. Anyway, I'll be glad when EAP gets "exponentially better", especially as the M3 ramps and the number of yahoos that misuse it keeps going up. Makes me nervous with so much skin in the game.
 
Last edited:
  • Like
Reactions: neroden
I agree. I would rather have AP simply beep at me to take over when it can no longer detect the lane. I do not like simply following the lead car and assuming he is driving well. At the very least, I would like a warning when it switches over so I can take over immediately if I want.
In stop-and-go traffic, because of the proximity of cars all around you, AP constantly loses and finds the line every few seconds. So it's a complex issue.
But more importantly, there are smart people working on this at Tesla, and I doubt we'll outthink them here in casual conversation. Though something tells me @neroden may not agree :)
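
To illustrate why the lane flicker makes this a complex issue: a simple "beep whenever the lane is lost" warning would nag constantly in stop-and-go unless it has some hysteresis. A minimal sketch with made-up timings:

```python
# Sketch of why "just beep when the lane is lost" is messier than it sounds.
# In stop-and-go the lane is lost and re-found every few seconds, so the
# warning needs hysteresis. Timings are invented for illustration.

def warn_signal(lane_ok_samples, hold_s=3.0, dt=0.1):
    """Yield True (warn) only after the lane has been missing for hold_s."""
    lost_for = 0.0
    for ok in lane_ok_samples:
        lost_for = 0.0 if ok else lost_for + dt
        yield lost_for >= hold_s

# Lane flickers off for ~1 s at a time: no warning fires.
flicker = ([True] * 20 + [False] * 10) * 4
print(any(warn_signal(flicker)))   # False -- no nagging in stop-and-go

# Lane genuinely gone for 5 s: warning fires partway through.
gone = [True] * 20 + [False] * 50
print(any(warn_signal(gone)))      # True -- driver gets the handoff beep
```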
 
There is something here about Bollinger Bands...

Anyway,

1) the car should be creating a vector field (let's say seen in the top view) from all the information coming in. This would include trajectories of cars on all sides, including front and back. This includes lane markers on each side and the momentum and legitimate curvature of dashed lines or dots (consider how lane definition is done through intersections with 90-degree corners that are optional). This includes oil drips and tire paths and residual gravel that piles up where tires don't go, and grass that lives in the middle of a rutted gravel road. The car is moving through a field of directional information (see the rough sketch after point 3).

2) the car needs to weigh that directional information differently depending on which way it is trying to go. At the fatal gore point, it should have been set to: weigh (track) the right lane marker rather than the left; weigh the trajectory of the cars on the right rather than on the left. At every gore or decision point the system needs to know which way you want to go. Autopilot cannot work well without knowledge of that preference.

3) On a country road with a crumbling shoulder, the dashed center line is the guiding vector.
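
Here is a rough sketch of points 1) and 2) above (all cue names, weights, and thresholds are invented, just to make the idea concrete; this is not how Tesla actually weights anything):

```python
# Rough sketch of points 1) and 2): blend directional cues into one heading,
# and shift the weights toward one side when a gore/decision point is coming.
# All cues, weights, and thresholds are invented for illustration.

import math

def blend_heading(cues, prefer_right_at_gore=False):
    """cues: list of (name, heading_rad, weight). Returns weighted mean heading."""
    x = y = 0.0
    for name, heading, weight in cues:
        if prefer_right_at_gore:
            if "right" in name:
                weight *= 3.0          # overweight right-side markers/traffic
            elif "left" in name:
                weight *= 0.2          # underweight the left side near the gore
        x += weight * math.cos(heading)
        y += weight * math.sin(heading)
    return math.atan2(y, x)

cues = [
    ("left_lane_marker",   0.05, 1.0),   # drifting left toward the gore point
    ("right_lane_marker", -0.02, 1.0),
    ("right_lane_traffic", -0.02, 0.5),
    ("lead_car",            0.05, 0.5),  # lead car happens to be exiting left
]
print(blend_heading(cues))                             # pulled toward the gore
print(blend_heading(cues, prefer_right_at_gore=True))  # tracks the right side
```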

... There are a couple of points here.

A) most voices here are expressing valid positions
B) autopilot is not well posed if no directional/side preference is expressed before gore points/singularities.

Fear of Elon and lack of common vocabulary may have prevented Tesla employees from effectively explaining item B to Elon - they were not saying it right.

Gladwell wrote a book called "Outliers" that details the cockpit communication culture that caused Korean Air to crash more frequently than other airlines.

Edit add
[Chapter Seven - Turnaround in the Skies: "Captain, the weather radar has helped us a lot."]

Some of the recent lossy process at Tesla is caused by attenuated communication between [too] narrow specialists and the management... The vocabulary might be too narrow.

Anyway, autopilot is not well posed without a directional preference before gore points. With that preference the Mountain View accident would not have happened. The car would have tracked the right lane marker and the cars on the right, avoiding the gore point altogether.

This is a simple fix for Tesla (Elon may have already alluded to this during the annual meeting, at least that is how I read/guessed it.)


[edit: upon reflection, you can set the side-tracking preference based on radar return, as in overweight the vectors on the side where less ground clutter is being filtered out. This should help with some construction zone issues.]
 
Last edited by a moderator:
The lanes are just guidelines anyways. As long as you never run into anything or anyone, no one will really hassle you about staying in between the lines.
I don't know about the USA, but in Europe the lines are the law. Crossing a solid white line between two lanes is a major offense. And driving in the middle of two lanes separated by a dashed line will at least be considered stupid behavior.
 
  • Like
Reactions: Matias
Reading Tesla's report, Autopilot was nagging the driver much more to take over; reading the NTSB report, they put more emphasis on AP accelerating right into the divider. It's the spin that concerns me.

Are you referring to Tesla's official blog, An Update on Last Week’s Accident or some other piece of reporting?

Tesla:
The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision.

NTSB:
During the 18-minute 55-second segment, the vehicle provided two visual alerts and one auditory alert for the driver to place his hands on the steering wheel. These alerts were made more than 15 minutes prior to the crash.

Regarding "several" vs. two: the NTSB does not mention any alerts, or lack thereof, in the rest of the trip:

The Autopilot system was engaged on four separate occasions during the 32-minute trip, including a continuous operation for the last 18 minutes 55 seconds prior to the crash.

This American idiot is hoping for a green day, and so should you since you don't have the time to...
[DELETED BY ANTI-POETRY FILTER]
 
You’ve demonstrated you don’t understand simple probabilities. At any given moment, a random driver on the road is VASTLY more likely to be making a correct/safe choice. This isn’t even debatable.
I was trained by a professional statistician. You weren't.

You're saying that a random driver on the road is "VASTLY" more likely to be making a correct/safe choice than... uh... what?

I explained how one actually drives defensively. This isn't debatable. Practicing actual defensive driving is vastly more likely to be correct and safe than following some random yahoo, even if more than half of the yahoos are driving safely. This isn't even debatable.
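
To put rough, made-up numbers on the compounding point (purely illustrative arithmetic, not measured data):

```python
# Illustrative arithmetic only: even if a random lead driver makes the
# safe/correct choice at each decision point with high probability, relying
# on them for a long stretch compounds the risk. Numbers are invented.

p_safe_per_decision = 0.95     # generous assumption about the average yahoo
decisions_per_mile = 5         # lane keeping, gaps, hazards, etc. (a guess)
miles_followed = 10

p_all_safe = p_safe_per_decision ** (decisions_per_mile * miles_followed)
print(f"P(lead car made only safe choices over {miles_followed} miles) "
      f"= {p_all_safe:.3f}")   # ~0.077 with these made-up numbers
```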

At least half of the yahoos aren't driving safely; ever collected tailgating stats? I did, once. Depressing exercise.

I don't know why you insist on acting as if a claim which is verifiably false is definitely true.