Follow-the-leader discussion out of Market Action

Again, we're talking a few mph here. Think bumper-to-bumper traffic.



Well you seem to have it all figured out. Why not start your own automated driving system? Sounds like it should be pretty easy for you.
Yeah, it probably should be pretty easy, given that it looks like the hard technical part's been done and now they just need someone who is competent at driving to explain what the car is SUPPOSED to do. Specify the problem wrong, get the wrong results.

I don't particularly want to: I'm independently wealthy and I have other things I would rather be doing, but I'm open to job offers. They'd do better to simply hire a couple of professional defensive driving instructors to determine what they should be training the car to do.
 
  • Disagree
Reactions: imherkimer
Automated driving systems today can generally be divided into two parts. First is the perception part, which is in charge of modeling the car's surroundings. Second is the decision-making system, where the car makes predictions and decides what to do next.

The first part relies extensively on deep learning techniques. The second part, however, uses mostly traditional algorithms that hardly ever involve any deep learning.

Now the first part is often drastically inferior to the human perception system. What you said about using mailboxes and the texture of the road to identify where the road is: that is very challenging for a computer.

Now the second part can only rely on the 3D model of the surroundings produced by the first part. Given how incomplete that model is, following the car in front is often the best bet.
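To make that split concrete, here's a rough Python sketch. Every name and threshold in it is invented for illustration; it just shows a learned perception stage handing a (possibly incomplete) world model to a hand-written planner that falls back to lead-car following:

```python
# Rough sketch of the two-part split described above. Every name here is
# invented for illustration; this is not anyone's actual stack.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class WorldModel:
    """The 3D model of the surroundings that perception hands to planning."""
    lane_confidence: float = 0.0               # how sure we are about lane geometry
    lead_car_distance: Optional[float] = None  # meters; None if no lead car seen
    obstacles: list = field(default_factory=list)


def perceive(camera_frames, radar_returns) -> WorldModel:
    """Part 1: deep learning lives here (detection, segmentation, depth)."""
    raise NotImplementedError("neural nets turn raw sensor data into a WorldModel")


def plan(world: WorldModel) -> str:
    """Part 2: mostly traditional, hand-written logic, little or no deep learning."""
    if world.lane_confidence > 0.8:
        return "follow_lane"
    if world.lead_car_distance is not None:
        # The model is incomplete, so following the car in front is the best bet.
        return "follow_lead_car"
    return "slow_down_and_alert_driver"
```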

Then it's not fit for use. :shrug: You're describing giving a driver's license to a car which is, at best, like a severely vision-impaired driver. While we do give driver's licenses to severely vision-impaired human drivers in the US, we shouldn't. (Yeeargh... what a country.)

When I read about all these snazzy computer perception things, I figured they actually had them working. You know, identifying the roadside milepost markers, spotting potholes, you know, that stuff. The computer vision demos have all been showing off how good they are at identifying things.
 
(etc)

I'd argue that there are times when the lead vehicle may be the best thing to follow - but also that those times are almost certainly not a good time for even a competent L5 autonomous system to be operating, never mind an L2/L3 one, or really even a human (torrential downpour, blizzard, etc.). Sometimes the correct path is not the path marked on the road, if the markings are even visible, but in those cases I would not expect even L5 cars to handle it correctly, and some humans won't either. Such scenarios are generally well beyond corner cases.

Following the lead car briefly is not necessarily a bad tactic in general, but any time it's being relied on is probably a good time to back off and gain better visibility of the road ahead and around, and warning the human that they may need to take over with zero notice is a good idea too.
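For what it's worth, that "back off and warn" policy is simple enough to sketch. This is my own hypothetical sketch; the function names and the 3-second gap are assumptions, not anything a shipping system actually uses:

```python
# Hypothetical sketch of the "back off and warn" policy above; names and the
# 3-second gap are my own assumptions, not any shipping system's values.

def warn_driver(message: str) -> None:
    print(f"[DRIVER ALERT] {message}")  # stand-in for a real HMI chime/banner

def on_strategy_change(strategy: str, follow_gap_s: float) -> float:
    """Return the (possibly widened) time gap to the lead car, in seconds."""
    if strategy == "follow_lead_car":
        warn_driver("Poor visibility of the road ahead: be ready to take over with no notice.")
        return max(follow_gap_s, 3.0)  # back off to regain sight lines and reaction time
    return follow_gap_s
```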

/OT
OK, I have to say I agree with you entirely, and thanks for your very polite description of what I said less politely. Basically if I'm reduced to following a lead car, I wish I wasn't on the road at all and am going to try to get off the road safely to a gas station or hotel or whatever until the weather improves.
 
In stop-and-go traffic, because of the proximity of cars all around you, AP constantly loses and re-finds the lane lines every few seconds. So it's a complex issue.
Not ready for general use then.

In stop-and-go traffic, at low speeds, you probably don't have any lines on the road anyway. Inching forward behind the car in front of you is OK, I guess. You shouldn't have any problems with obstacles appearing when the car in front of you moves away, because you should be going at 2 mph.

But the "following lead car" phenomenon appears to be happening at higher speeds than 10 mph, according to the self-appointed commenters who are describing Autopilot behavior on *expressways*.

But more importantly, there are smart people working on this at Tesla, and I doubt we'll outthink them here in casual conversation. Though something tells me @neroden may not agree :)

I've spent my life around people who are truly brilliant, and I don't think I'm smart compared to them. But I've also, repeatedly, gone into environments with dozens of supposedly very smart, very credentialed people, and identified what to do with their problems in days better than they had done in months. So I never assume that "very smart people" will outthink me.

I mean, hell, here I am, beating the stock market averages by individual stock-picking, doing better than index funds and better than many of the "very smart people" at major investment banks. Only a little better, but you see my point. I'm pretty sure I can under some circumstances outthink hundreds of smart people, because I have evidence.

Tesla's smart people have failed to fix simple USB music bugs for two years. They took 5 years to comply with the trivial license requirements for the software they were pirating. I think it's a fair assumption that they are repeatedly making boneheadedly stupid basic oversights that I can spot without difficulty, even while doing excellent work in other areas.

Having a fundamentally incorrect "how to drive" algorithm, because they never consulted defensive driving experts, is actually the sort of mistake I'd *expect* them to make based on Tesla's corporate psychology, which repeatedly undervalues human factors.
 
Then it's not fit for use. :shrug: You're describing giving a driver's license to a car which is, at best, like a severely vision-impaired driver. While we do give driver's licenses to severely vision-impaired human drivers in the US, we shouldn't. (Yeeargh... what a country.)

When I read about all these snazzy computer perception things, I figured they actually had them working. You know, identifying the roadside milepost markers, spotting potholes, you know, that stuff. The computer vision demos have all been showing off how good they are at identifying things.
This is exactly why I don't think we will see level 5 anytime soon.

Computer vision is better than humans at localized perception tasks about 95% of the time. There are two big problems. First, nobody knows exactly under what conditions it stops working; this one is very hard to improve. Second, it is really, really bad at combining non-obvious visual cues into a sensible decision, especially signals received at different points in time; this one is likely to improve over time, but over a pretty long time.
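For the second problem, one generic, textbook way of combining signals over time is to smooth per-frame detector confidence, something like this sketch (a standard trick, not a claim about any particular vision stack):

```python
# Exponential moving average of per-frame detection confidence: a generic,
# textbook way to combine noisy signals received at different points in time.

def update_track_confidence(prev_conf: float, frame_conf: float,
                            alpha: float = 0.3) -> float:
    """Blend this frame's detection confidence into a running estimate.

    A single missed frame no longer kills the track; only sustained misses do.
    """
    return (1.0 - alpha) * prev_conf + alpha * frame_conf
```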

Waymo's approach is centimeter-level detailed mapping with geofencing. And those cars have never seen snow yet.
 
  • Like
Reactions: YasB
Waymo's approach is centimeter-level detailed mapping with geofencing. And those cars have never seen snow yet.

Waymo started testing their system in snow last winter. I doubt it's ready for commercial use in such conditions anytime soon, but they seem to be quite a bit ahead. Of course they are not relying on maps alone; they seem to have a pretty decent system for fusing radar, lidar, camera, and ultrasonic input in real time.

Alphabet looks to snowy Michigan to test self-driving cars

Google I/O Recap: Turning self-driving cars from science fiction into reality with the help of AI
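For the curious, the textbook core of that kind of fusion is inverse-variance weighting of independent estimates. A generic sketch, with made-up numbers, and no claim that this is Waymo's actual method:

```python
# Inverse-variance fusion of independent range estimates: the textbook
# approach, shown generically. Not a claim about Waymo's real pipeline.

def fuse_ranges(estimates: list) -> float:
    """estimates: (range_m, variance) pairs, e.g. from radar, lidar, camera."""
    weights = [1.0 / var for _, var in estimates]
    return sum(w * r for w, (r, _) in zip(weights, estimates)) / sum(weights)

# Made-up numbers: radar 42.0 m (var 0.25), lidar 41.8 m (var 0.04), camera 43.5 m (var 1.0)
print(fuse_ranges([(42.0, 0.25), (41.8, 0.04), (43.5, 1.0)]))  # ~41.9 m, dominated by lidar
```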
 
I've spent my life around people who are truly brilliant, and I don't think I'm smart compared to them. But I've also, repeatedly, gone into environments with dozens of supposedly very smart, very credentialed people, and identified what to do with their problems in days better than they had done in months. So I never assume that "very smart people" will outthink me.

I mean, hell, here I am, beating the stock market averages by individual stock-picking, doing better than index funds and better than many of the "very smart people" at major investment banks. Only a little better, but you see my point. I'm pretty sure I can under some circumstances outthink hundreds of smart people, because I have evidence.

Tesla's smart people have failed to fix simple USB music bugs for two years. They took 5 years to comply with the trivial license requirements for the software they were pirating. I think it's a fair assumption that they are repeatedly making boneheadedly stupid basic oversights that I can spot without difficulty, even while doing excellent work in other areas.

Having a fundamentally incorrect "how to drive" algorithm, because they never consulted defensive driving experts, is actually the sort of mistake I'd *expect* them to make based on Tesla's corporate psychology, which repeatedly undervalues human factors.

I agree with your general sentiment, but I think you are wrong in this particular instance (Autopilot). I know defensive driving; I've ridden motorcycles for years, where that's the only way to survive: assume everyone else is a moron and can make a mistake at any point, and it's your job to save it. I've also spent probably close to 100 days on the racetrack at various driving events, honing my control of the car and in general having an adrenaline-induced blast. Driving is my passion, and as a software engineer I also enjoy finding patterns of traffic movement on clogged highways, associating them with crowd behaviours, etc. I've spent plenty of time thinking about AP problems, and you are right that Tesla AP is not all it needs to be, but a) it's still useful and b) there are no easy solutions to the problems they're wrestling with. Now of course, this is an opinion, so I could be wrong and you could be right.
About software: again, as an engineer and leader of an engineering group, I can easily see how and why they'd have the license issue and the USB music bug. Software is always an exercise in prioritization, and a niche use case like USB music would be constantly weighted very low, to the point of that bug never being fixed. I once walked into a company where I ran a system that had over 1,000 P1 bugs, and walked away 15 months later with, guess what, still over 1,000 P1 bugs, of which 600 were the original ones (why is a different discussion, and this is about as bad as it gets). So this happens in software. The license issue? It may have been as simple as the cost of non-compliance being immaterial compared to the opportunity loss. And it's not as simple as just publishing the code; it needs to be reviewed and stress-tested, since the moment of publishing is the moment you increase the risk of being broken into. License compliance was your pet peeve, but that doesn't matter to the decision makers. If I had a binary choice between 1) improving Autopilot, which could save lives, and 2) complying with the license, my decision would have been the same. It's never that simple in real life, but a typical software organization has a backlog of thousands of requests, etc...
Now, after spending lots of time defending Tesla, I've seen some signs that make me think they were not where they needed to be a year or two ago. I reported a very serious bug, and some of the stuff I saw wasn't excusable; it wasn't a matter of resources, but of poor processes/organization. Not sure about the current state. BTW, this same logic applies to both issues you mentioned. I'm assuming it was a case of resource competition, as that is what you always find in software orgs, but it could have been just poor organization, I'll admit that. I don't think we can know.

*To clarify about the license: Tesla could have said, we're not going to bother with license compliance, as that is a formality and we're not hurting anyone (very important!), and we'll donate $50K to one of the open source projects or foundations, etc...
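To illustrate the prioritization trade-off with a toy model (the scoring formula and all the numbers are invented; the point is just that niche bugs and compliance work starve under a pure user-impact metric):

```python
# Toy model of backlog triage. The formula and numbers are invented; the
# point is that niche bugs and compliance work score low and stay low.

def priority_score(users_affected: int, severity: int, eng_days: float) -> float:
    """Higher score = fix sooner. severity: 1 (cosmetic) to 5 (safety/legal)."""
    return (users_affected * severity) / max(eng_days, 0.5)

backlog = {
    "Autopilot lane-keeping improvement": priority_score(250_000, 5, 30),
    "USB music playback bug":             priority_score(2_000, 2, 3),
    "open-source license compliance":     priority_score(0, 5, 10),  # zero "users", so it starves
}
for item, score in sorted(backlog.items(), key=lambda kv: -kv[1]):
    print(f"{score:10.1f}  {item}")
```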
 
  • Helpful
Reactions: neroden
I agree with your general sentiment, but I think you are wrong in this particular instance (Autopilot). I know defensive driving; I've ridden motorcycles for years, where that's the only way to survive: assume everyone else is a moron and can make a mistake at any point, and it's your job to save it. I've also spent probably close to 100 days on the racetrack at various driving events, honing my control of the car and in general having an adrenaline-induced blast. Driving is my passion, and as a software engineer I also enjoy finding patterns of traffic movement on clogged highways, associating them with crowd behaviours, etc. I've spent plenty of time thinking about AP problems, and you are right that Tesla AP is not all it needs to be, but a) it's still useful and b) there are no easy solutions to the problems they're wrestling with. Now of course, this is an opinion, so I could be wrong and you could be right.
About software: again, as an engineer and leader of an engineering group, I can easily see how and why they'd have the license issue and the USB music bug. Software is always an exercise in prioritization, and a niche use case like USB music would be constantly weighted very low, to the point of that bug never being fixed. I once walked into a company where I ran a system that had over 1,000 P1 bugs, and walked away 15 months later with, guess what, still over 1,000 P1 bugs, of which 600 were the original ones (why is a different discussion, and this is about as bad as it gets). So this happens in software. The license issue? It may have been as simple as the cost of non-compliance being immaterial compared to the opportunity loss. And it's not as simple as just publishing the code; it needs to be reviewed and stress-tested, since the moment of publishing is the moment you increase the risk of being broken into. License compliance was your pet peeve, but that doesn't matter to the decision makers. If I had a binary choice between 1) improving Autopilot, which could save lives, and 2) complying with the license, my decision would have been the same. It's never that simple in real life, but a typical software organization has a backlog of thousands of requests, etc...
Now, after spending lots of time defending Tesla, I've seen some signs that make me think they were not where they needed to be a year or two ago. I reported a very serious bug, and some of the stuff I saw wasn't excusable; it wasn't a matter of resources, but of poor processes/organization. Not sure about the current state. BTW, this same logic applies to both issues you mentioned. I'm assuming it was a case of resource competition, as that is what you always find in software orgs, but it could have been just poor organization, I'll admit that. I don't think we can know.

*To clarify about the license: Tesla could have said, we're not going to bother with license compliance, as that is a formality and we're not hurting anyone (very important!), and we'll donate $50K to one of the open source projects or foundations, etc...
As a fellow software engineer, I have also found a sugar load of inexcusable problems with Apple software, just saying...
 
  • Like
Reactions: neroden
assume everyone else is a moron and can make a mistake at any point, and it's your job to save it

This isn't describing what you're doing while on the road, though. True, you are assuming someone *might* be a moron, but as you are in traffic you have to, overall, trust but verify their actions. Driving simply wouldn't work among humans without a high level of trust.
 
This isn't describing what you're doing while on the road, though. True, you are assuming someone *might* be a moron, but as you are in traffic you have to, overall, trust but verify their actions. Driving simply wouldn't work among humans without a high level of trust.
I kinda disagree, but it could be semantics: what I said is what the official Canadian (or Ontario, not sure) 'Rider's Guide' says, the one you use for exams, just in a much more polite way.
What I mean by this is: always be ready for others to screw up, discourage them where you can, and be ready to compensate where you can't. I interpret this to mean that if/when I'm involved in an accident, it's always my fault to some degree, in that I didn't position myself defensively enough to prevent it. On a motorcycle it really doesn't matter whose fault it is; the motorcycle rider is the one who ends up dead. Some actions recommended in the official guide: position yourself in the lane so that you're blocking/intimidating cars that might drift into your lane (toward the right in the fastest lane, toward the left in the slowest, and never hang around in the middle of the lane), always position yourself where there is space to your left and/or right to escape into if someone cuts into your lane, never ride parallel to anyone, etc., etc.
 
  • Like
Reactions: neroden
I spent the day helping a group of friends move. After reading all this Autopilot stuff earlier, I asked each of them privately (so as not to get answers like "what he/she said"). I also asked a few strangers, like the waiter at breakfast.

When driving, most people have three main thought processes:

1) Don't hit anything.
2) Stay in the lines.
3) Follow the car in front of you.

4) Follow the laws was actually never a consideration for anyone, because number 1 is pretty much why the laws exist.

Everyone I talked to says that number 1 is the absolute most important one around.
The majority agreed number 2 was the 2nd most important.
The minority prioritized number 3.

Now, when I say majority and minority, I'm talking about margins more like a US presidential election.

Interestingly, it was the city folks who had a strong opinion about number 2. The country people laughed at number 2 and said 3 was the most important, because dirt roads don't have lines, and when "Billy" takes out his front end in a washout, you take a different path.

I can see 2 and 3 easily being debated forever as both do have merit depending on the situation.

NO ONE I talked to would EVER want to be in anything at all that did not have number 1 as the absolute top priority. Now, I love Tesla to death, but I think it is extremely important that Tesla makes it perfectly clear that Autopilot (or, I think, any system at this point) doesn't comprehend the "don't run into things" part very well at all, because none of these systems know exactly what "things" are. It relies on the driver to be alert and do that part.

It was unanimously agreed that when a car that doesn't know what not to run into is following the leader, the driver should definitely know... as in, very clearly know, so they can be extra vigilant.
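Turning the survey's ordering into code makes the point concrete. This is my own sketch of a strict priority arbiter, not any vendor's actual logic:

```python
# The survey's ordering as a strict priority arbiter (my sketch): a
# lower-priority behavior only runs when every higher one has nothing to say.

def choose_action(obstacle_ahead: bool, lanes_visible: bool,
                  lead_car_visible: bool) -> str:
    if obstacle_ahead:               # 1) Don't hit anything: absolute top priority
        return "brake_or_evade"
    if lanes_visible:                # 2) Stay in the lines
        return "follow_lane"
    if lead_car_visible:             # 3) Follow the car in front (dirt roads, washouts)
        return "follow_leader_cautiously"
    return "slow_down_and_alert_driver"  # nothing left to trust: hand back to the human
```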
 
  • Helpful
Reactions: neroden
Here are some interesting slides from Mr. Karpathy showing some of the problems building the image processing stack. Seems that even following lane lines is hard. Maybe it's interesting for some of you software guys.

https://www.figure-eight.com/wp-content/uploads/2018/06/TRAIN_AI_2018_Andrej_Karpathy_Tesla.pdf

I sorta have a prediction about what will happen with autonomous vehicles. I've actually been getting increasingly bearish on their feasibility in the near future.

The short answer is, I think all self-driving efforts will declare the current driving rules too hard for a computer to process reliably enough, and will seek changes to lane markings, possibly supplementing roads with more technology for autonomous cars to interact with.

Instead of trying to get autonomous cars to follow the rules of the road, modify the roads to follow the rules of the autonomous cars' computers...
 
  • Like
Reactions: neroden
Oh, and standardize the way autonomous cars can communicate with each other. Having both human drivers and non-human drivers on the road together is, of course, an unfortunate necessity today that will make things more difficult.
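Just to make "standardize the way they communicate" concrete, here's a purely hypothetical beacon format; the real standards work lives in things like DSRC / C-V2X, and this layout is invented for illustration:

```python
# Purely hypothetical V2V beacon layout, invented for illustration only.
import json
import time

def make_v2v_beacon(vehicle_id: str, lat: float, lon: float,
                    speed_mps: float, intent: str) -> bytes:
    """Serialize a broadcast-ready beacon: position, speed, declared intent."""
    return json.dumps({
        "id": vehicle_id,
        "ts": time.time(),
        "pos": [lat, lon],
        "speed_mps": speed_mps,
        "intent": intent,  # e.g. "lane_change_left" or "hard_brake"
    }).encode("utf-8")

print(make_v2v_beacon("car-123", 42.33, -83.05, 24.6, "hard_brake"))
```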
 
About software: again, as an engineer and leader of an engineering group, I can easily see how and why they'd have the license issue and the USB music bug. Software is always an exercise in prioritization, and a niche use case like USB music would be constantly weighted very low, to the point of that bug never being fixed. I once walked into a company where I ran a system that had over 1,000 P1 bugs, and walked away 15 months later with, guess what, still over 1,000 P1 bugs, of which 600 were the original ones (why is a different discussion, and this is about as bad as it gets). So this happens in software. The license issue? It may have been as simple as the cost of non-compliance being immaterial compared to the opportunity loss. And it's not as simple as just publishing the code; it needs to be reviewed and stress-tested, since the moment of publishing is the moment you increase the risk of being broken into. License compliance was your pet peeve, but that doesn't matter to the decision makers. If I had a binary choice between 1) improving Autopilot, which could save lives, and 2) complying with the license, my decision would have been the same.

You have a definite point. On the other hand, illegal behavior (such as ignoring warranty issues or violating software licenses) remains illegal, and multimillion-dollar lawsuits may help clarify Tesla executive thinking. I guess they need more of them. Perhaps we should assist them by filing more. Normal methods of communication don't seem to work with Tesla; lawsuits seem to.

It should take them at most a week to fix the *warranty issues* they created with USB music in 2016, since I could fix it in a day. Cost/benefit prioritization says to fix them.

It's never that simple in real life, but a typical software organization has a backlog of thousands of requests, etc...
Now, after spending lots of time defending Tesla, I've seen some signs that make me think they were not where they needed to be a year or two ago. I reported a very serious bug, and some of the stuff I saw wasn't excusable; it wasn't a matter of resources, but of poor processes/organization.
I'm absolutely sure their problems are almost entirely poor process and organization at this point. I've just seen way too much *chaos* over stuff where half-decent organization would have gotten it done quicker. Including stuff I haven't mentioned here. I see that you've seen much of the same thing.

Not sure about the current state. BTW, this same logic applies to both issues you mentioned. I'm assuming it was a case of resource competition, as that is what you always find in software orgs, but it could have been just poor organization, I'll admit that. I don't think we can know.
I think it's poor organization, not resource competition. We can't prove it either way, but if it's resource competition, I can state definitively that they have no sense of priorities and are inviting unnecessary trouble, risking large harms to save pennies.

If it's poor organization... well, it's just poor organization: nobody ever doing cost/benefit analysis, priorities being wrong because there is nobody in charge of prioritization, regression bugs being introduced because there is no test suite.

This seems more likely.
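The "no test suite" failure mode is also the cheapest to prevent: once a bug is fixed, pin it with a regression test. A hypothetical example for the USB-music class of bug (the function and the original defect are both invented):

```python
# Hypothetical regression test; the function and the original defect are
# invented to illustrate pinning a fixed bug so it can't silently return.

def next_track(playlist: list, current: int) -> int:
    """Advance to the next track, wrapping around at the end of the playlist."""
    return (current + 1) % len(playlist)

def test_next_track_wraps():
    # Suppose the original bug was failing to wrap past the last track. With
    # this test in a suite run on every build, the regression can't come back.
    assert next_track(["a.mp3", "b.mp3"], 1) == 0
```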

*To clarify about the license: Tesla could have said, we're not going to bother with license compliance, as that is a formality and we're not hurting anyone (very important!), and we'll donate $50K to one of the open source projects or foundations, etc...

Tesla would have gotten hit with a lawsuit *forcing them to stop distributing cars*, which they would have lost. The copyleft people much prefer to just get compliance, but if they have to get an order to halt distribution of infringing software, they will do it, and they can, and they have. Someone at Tesla simply didn't know what they were doing, because as a calculated decision, the cost/benefit ratio of deciding not to comply with the licenses would have been way out of whack: any risk of having car distribution halted is clearly too much risk to take, and the cost of license compliance is nearly nothing.