
Navigate on Autopilot is Useless (2018.42.3)

Ah right, of course. All those silly cars running FSD beta are in no way an advance toward such a goal.
Nope. If you recheck the Autopilot page, you will remember that the FSD Capability end goal was always stated as having a "human in the driver's seat" (typically we call those people "drivers") in all cases, so that eventually that human would not have to do anything.
So it's a driver assist, and always will be for this particular product.
 
That does not answer my question; it just evades it. It's computer science, and you clearly see the departure from local maxima, so you should be able to explain what you are measuring, and how, to see this. Pretty simple, right? Otherwise it's just buzzwords that you repeat because they sound cool or whatever.
Not purposefully trying to evade anything, but you're asking incredibly complex questions.

You can measure progress on so many different fronts: vision, decision-making, path planning, comprehension of surroundings, measurement of distances, "human-like" steering/accelerating/decelerating (i.e. keeping jerk to a minimum), and probably dozens of others.

"local minima"/"local maxima" aren't buzzwords, in any case, I'm simply using standard science terms relevant to the topic at hand: Maximum and minimum - Wikipedia
 
Not purposefully trying to evade anything, but you're asking incredibly complex questions.

You can measure progress on so many different fronts: vision, decision-making, path planning, comprehension of surroundings, measurement of distances, "human-like" steering/accelerating/decelerating (i.e. keeping jerk to a minimum), and probably dozens of others.

"local minima"/"local maxima" aren't buzzwords, in any case, I'm simply using standard science terms relevant to the topic at hand: Maximum and minimum - Wikipedia
The words are not buzzwords when used correctly by people who know what they are talking about, but the way you use them, they are buzzwords. You say them, but you are just parroting other people: you do not measure anything yourself, and you don't even know what the people you are parroting are measuring or what their results are.
 
The words are not buzzwords when used correctly by people who know what they are talking about, but the way you use them, they are buzzwords. You say them, but you are just parroting other people: you do not measure anything yourself, and you don't even know what the people you are parroting are measuring or what their results are.
OK, cool, I guess you're going to make it personal.
 
Not a single aspect of autonomy has gotten a single bit better? Really? Do you remember what it was like in 2018 vs now?
This is correct. Not a single aspect of autonomy got better because... there's no autonomy in Tesla cars. It's driver assist all the way down. See how much more restrictive driver monitoring has become since 2018? Some might argue that means the autonomy got worse (I am not one of those people, for the originally stated reason, but I could totally see this argument being made).
 
Nope. If you recheck the Autopilot page, you will remember that the FSD Capability end goal was always stated as having a "human in the driver's seat" (typically we call those people "drivers") in all cases, so that eventually that human would not have to do anything.
So it's a driver assist, and always will be for this particular product.
So, your argument that there has been no progress is to unilaterally declare that the progress that has been made is not "progress" at all, but some kind of dead end that is in no way related to the technology needed for self-driving. Do I hear the distant crash of goal posts being shifted???
 
This is correct. Not a single aspect of autonomy got better because... there's no autonomy in Tesla cars. It's driver assist all the way down. See how much more restrictive driver monitoring has become since 2018? Some might argue that means the autonomy got worse (I am not one of those people, for the originally stated reason, but I could totally see this argument being made).
So you don't understand the concept of incrementally approaching a goal, then? As in, every major invention ever created since the dawn of humanity? And please don't hide behind sophistry like "there is no autonomy in Tesla, therefore there is no progress"... it's just silly. OF COURSE many of the technologies being developed as part of the FSD program contribute to advancement toward a fully autonomous vehicle.

In other news, ALL of medical science is a complete failure because humans still die.
 
So you don't understand the concept of incrementally approaching a goal, then? As in, every major invention ever created since the dawn of humanity? And please don't hide behind sophistry like "there is no autonomy in Tesla, therefore there is no progress"... it's just silly. OF COURSE many of the technologies being developed as part of the FSD program contribute to advancement toward a fully autonomous vehicle.

In other news, ALL of medical science is a complete failure because humans still die.
Rivian. Thankfully Driver+ is usable and they aren't trying to do anything nonsensical like FSD.



That's actually the crux of the problem. If this was a solved problem in 2016, why are we still having this conversation? This thread is from a 2018 release and we're still seeing posts of NoAP doing absurd things. Remember that there was going to be a coast-to-coast demo in 2017, too.

I absolutely forgive slipped timelines, and as someone who has done everything from hardware design to software development, I can absolutely appreciate an amateur underestimating the challenge that solving a problem presents. But I cannot abide a company claiming, for seven years straight, that a vehicle which can't handle basic lane changes consistently is going to completely drive itself within the year.
Dabbles, I'd like to hear what makes Driver+ useful compared to AP or NoA...? It seems very similar to AP and less feature-rich than NoA (which is either a good thing or a bad thing, depending on your NoA experiences, I suppose).
 
Dabbles, I'd like to hear what makes Driver+ useful compared to AP or NoA...? It seems very similar to AP and less feature-rich than NoA (which is either a good thing or a bad thing, depending on your NoA experiences, I suppose).

From this thread's 62 pages you can see my opinion of NoA: it's useless and I hated it. I think Tesla's basic AP could be okay if it didn't instantly disable when you turn the blinker on for a lane change, and obviously if it didn't phantom brake where I drive. I've used basic AP in California and it does the same thing out there.

Driver+, on the other hand, hasn't phantom braked for me once (not saying it won't; we're only a week in), and it doesn't disable itself the instant I turn a blinker on. So it solves 99% of the actual problem of LKA + TACC without the foolish gimmicks of blindly following incorrect map data and making unnecessary maneuvers. The only issue I've had with it was on an extremely windy day, when it ping-ponged a couple of times. But so did my Tesla, and it was a lot smaller.
 
I disagree. They are not any closer than back then. (Edit: still 2 years away in fact! 🙃 )
Vision stack has clearly been refined. But that has resulted in almost no perceptible benefit to an outsider, given that path planning was garbage before and is still garbage now.

Clearly path planning has changed over that time. The biggest change I can recall was when they actually taught the driving model the size of the car and that it can't turn with zero radius, so sometime around fall of 2022 it became able to negotiate corners without running over curbs. It's a bit embarrassing that this wasn't shipped with the first beta of FSD, just like many of the other behaviors it gets wrong. The rest of the planner changes in the last 2-3 years are nearly just random number generators: it works better sometimes, on some turns.

As in my post above, I believe they most likely lack the technical talent to finish this, or for some reason are still spending almost no engineering resources on planning and nearly all of them on vision/perception. Maybe they think that if they get vision millimeter-accurate, it will cover up for all the failures of the planning system.
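To make the turning-radius point concrete: under a simple kinematic bicycle model, a planner that knows the car's wheelbase can compute its minimum turning radius and reject maneuvers that a zero-radius point model would happily attempt. This is a toy sketch, not Tesla's planner; the wheelbase and steering-angle numbers are guesses, not real specs.

```python
import math

# Toy sketch: a planner that models car geometry rejects turns a
# zero-radius point model would accept. Numbers are hypothetical.
WHEELBASE_M = 2.9                    # assumed wheelbase, not a Tesla spec
MAX_STEER_RAD = math.radians(35)     # assumed max front-wheel steer angle

def min_turn_radius(wheelbase_m, max_steer_rad):
    """Kinematic bicycle model: R = wheelbase / tan(max steering angle)."""
    return wheelbase_m / math.tan(max_steer_rad)

def turn_is_feasible(corner_radius_m):
    """A corner tighter than the car's minimum radius needs a wider path."""
    return corner_radius_m >= min_turn_radius(WHEELBASE_M, MAX_STEER_RAD)

print(round(min_turn_radius(WHEELBASE_M, MAX_STEER_RAD), 2))  # ~4.14 m
print(turn_is_feasible(3.0))  # False: must swing wide, not pivot in place
```

A planner without this check will happily trace arcs the physical car cannot follow, which is exactly the curb-clipping behavior described above.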
 
Oh, and by the way, I think the code they used to get it to run a path using the size of the ego car without hitting curbs is also what makes the car randomly turn the wrong way and freak everyone out when it's trying to make a turn where it's not going to clip a curb, e.g. a left turn onto a two-way street. So it wasn't done particularly well at all. It didn't have this behavior previously. It's just shitty engineering.
 
Oh, and by the way, I think the code they used to get it to run a path using the size of the ego car without hitting curbs is also what makes the car randomly turn the wrong way and freak everyone out when it's trying to make a turn where it's not going to clip a curb, e.g. a left turn onto a two-way street. So it wasn't done particularly well at all. It didn't have this behavior previously. It's just shitty engineering.
Maybe. I see the overall story on full self driving as a classic example of how technologists coming from a software and computer science background underestimate the complexity of what biological brains do. Computers, on the other hand, struggle to do things that any two-year-old can achieve with minimal to no effort. I don't think Elon has always respected this fact, and therefore he has chronically underestimated how much work it would take, and how sophisticated the software and hardware would need to be, to emulate a conscious agent making serial decisions from the paradoxical substrate of a whole lot of parallel processing. This disparaging attitude toward biological systems, or at least an underestimation of them, leads us down the all-too-familiar bad pathways we're now facing as a technological society, with consequences far worse than crappy autonomous driving systems.
 
Maybe. I see the overall story on full self driving as a classic example of how technologists coming from a software and computer science background underestimate the complexity of what biological brains do. Computers, on the other hand, struggle to do things that any two-year-old can achieve with minimal to no effort. I don't think Elon has always respected this fact, and therefore he has chronically underestimated how much work it would take, and how sophisticated the software and hardware would need to be, to emulate a conscious agent making serial decisions from the paradoxical substrate of a whole lot of parallel processing. This disparaging attitude toward biological systems, or at least an underestimation of them, leads us down the all-too-familiar bad pathways we're now facing as a technological society, with consequences far worse than crappy autonomous driving systems.
That's all the vision/perception part and is essentially solved now, mostly thanks to an explosion in compute power and a tiny bit to improvements in algorithms. But computing proper paths and driving behaviors once the environment is known is just a normal hard engineering problem. Although I kind of assume Tesla will try to shove it through an NN because they don't know how else to do it, which just means N more years until it works right. I think Tesla actually messed up here by not hiring George Hotz; he seems to actually understand it.
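To illustrate what "a normal hard engineering problem" looks like: once perception has produced a map of free vs. occupied space, classical search can compute a drivable path with no neural network involved. A toy sketch follows: plain BFS on an invented grid, whereas real planners use A*, state lattices, or trajectory optimization.

```python
from collections import deque

# Toy sketch: classical path search over a known occupancy grid.
# The grid and positions are made up for demonstration; 1 = obstacle.
GRID = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

def shortest_path(grid, start, goal):
    """Breadth-first search over free cells; returns a list of (row, col)."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no drivable path exists

print(shortest_path(GRID, (0, 0), (2, 3)))
```

The point isn't that driving reduces to grid search; it's that once the world model exists, planning is a deterministic, testable engineering component rather than something that has to be learned end to end.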
 
That's all the vision/perception part and is essentially solved now

It's absolutely nowhere near solved. It's as solved as it was in 2016, when we were first told the entirety of self-driving was "solved".

mostly thanks to an explosion in compute power

There hasn't really been an explosion in compute. In fact, growth in what we would now measure as per-core compute power has declined significantly, and per-package compute has been far behind Moore's "law" since the late 1990s. Modern CPUs spend a ton of their transistors on interconnects and fabrics rather than on actual processing.

There has been increased adoption of custom ASICs, thanks to bundled IP packages and those interconnects I mentioned. But ASICs have been around a long, LONG time. And if you decide to go your own way and fab your own custom ASIC, you're going to spend a lot of time building compilers and attempting to produce optimizations that mature products have already been refining for several years.

The actual explosion was cheap money and VCs willing to lose a lot of it on the off chance that one out of countless ML investments paid off. So far we're still waiting for that payoff, and the VC money is drying up quickly.
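As a back-of-envelope illustration of why falling behind a doubling cadence matters so much: the gap between the textbook ~2-year Moore's-law doubling and any slower cadence compounds enormously. The 3-year "slower" period below is a hypothetical stand-in for illustration, not a measured figure.

```python
# Back-of-envelope sketch: compounding gap between an ideal doubling
# cadence and a slower one. The 3-year period is hypothetical.

def growth_factor(years, doubling_period_years):
    """Total growth after `years` given a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

YEARS = 25  # roughly "since the late 1990s"
ideal = growth_factor(YEARS, 2.0)   # textbook Moore's-law cadence
slower = growth_factor(YEARS, 3.0)  # hypothetical slower cadence
print(f"ideal: {ideal:,.0f}x  slower: {slower:,.0f}x  gap: {ideal / slower:,.1f}x")
```

Even a modest slip in doubling period leaves you an order of magnitude or more behind after a couple of decades, which is the sense in which per-package compute can be "far behind" the curve.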
 
That's all the vision/perception part and is essentially solved now, mostly thanks to an explosion in compute power and a tiny bit to improvements in algorithms. But computing proper paths and driving behaviors once the environment is known is just a normal hard engineering problem. Although I kind of assume Tesla will try to shove it through an NN because they don't know how else to do it, which just means N more years until it works right. I think Tesla actually messed up here by not hiring George Hotz; he seems to actually understand it.
This is an example of the kind of overconfidence technologists have about AI. Although it can do impressive things, it's best at narrow types of expertise. Driving is not one of those things.
 
There hasn't really been an explosion in compute. In fact, growth in what we would now measure as per-core compute power has declined significantly, and per-package compute has been far behind Moore's "law" since the late 1990s. Modern CPUs spend a ton of their transistors on interconnects and fabrics rather than on actual processing.

There has been increased adoption of custom ASICs, thanks to bundled IP packages and those interconnects I mentioned. But ASICs have been around a long, LONG time. And if you decide to go your own way and fab your own custom ASIC, you're going to spend a lot of time building compilers and attempting to produce optimizations that mature products have already been refining for several years.

The actual explosion was cheap money and VCs willing to lose a lot of it on the off chance that one out of countless ML investments paid off. So far we're still waiting for that payoff, and the VC money is drying up quickly.

This is absolutely incorrect three times over. The comparison to general-purpose compute cores is completely unfounded; you're just not paying attention. Custom ASICs are used for AI training because it's an entirely different task. It's got nothing to do with VCs. It's all about how many transistors you can buy per dollar and how many of them you can put to doing work. The eventual conversion is from kWh -> trained models.
 
This is an example of the kind of overconfidence technologists have about AI. Although it can do impressive things, it's best at narrow types of expertise. Driving is not one of those things.
Statements like these make me root for Tesla to finish quickly, even though I'm spending this thread calling out their obvious flaws. It's a talent limitation, not a hardware limitation.