Welcome to Tesla Motors Club

Autonomous Car Progress

Road signs are regulated and relatively consistent. Because they have to make sense to a human driver, NN training can be used to extract the relevant contextual info and feed it into the driving stack. That mimics what human drivers do while driving.

Keeping a city map up to date and in sync in real time is going to involve an immense amount of overhead. That's probably one reason why Tesla never went that route.

regulated and fairly consistent

I'm not sure you understand the real difference between a driverless robo-taxi and an ADAS. Admittedly, Tesla has done a great job at obfuscating the issue.

A driverless vehicle cannot operate on roads where the road signs are only "fairly consistent" without some human in the pipeline to make decisions about new signage. You cannot have a fleet of tens to hundreds of thousands of vehicles making illegal maneuvers because the city council put up a one-off sign somewhere that day. That is an unacceptable disaster scenario. Also consider the helpless passengers, only able to watch as the car "guesses" what it's supposed to do.

The cars must detect a situation that they do not understand and have a means of getting a human to make the final policy decision. Anything less than that, and you are actually talking about the invention of AGI.

The human in the FSD system is a critical component, just as the remote human operators are for driverless robo-taxis. Tesla will not be able to remove the human without putting the same "limitations" in place that the robo-taxi companies are launching with.

Tesla sidesteps all these issues by relying on the human to make these policy decisions. This is absolutely fine for an ADAS, and I'll be very happy to use FSD in that capacity when (if) it's finally safe enough and released. And maaaaaaaaybe, one day, FSD will be a platform upon which an actual robo-taxi service is built.

But make no mistake: Teslas are not becoming driverless without major changes to their approach, which will necessarily include high resolution maps and remote operators.

Also, maps do not require as much maintenance as you think they do. Mapping happens while the driverless cars are doing their routes. And if some roads aren't traveled enough to stay current, a robo-taxi service has an army of autonomous vehicles it can dispatch to an area to update the maps.
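That "dispatch the fleet to refresh stale areas" idea can be sketched in a few lines. This is a toy illustration: the tile model, the one-week staleness threshold, and the dispatch policy are all invented for the example, not any operator's actual pipeline.

```python
from dataclasses import dataclass

# Toy sketch of fleet-driven map maintenance: tiles that haven't been
# observed recently get queued for re-survey by idle vehicles.

STALENESS_THRESHOLD_HOURS = 7 * 24  # re-survey tiles not observed in a week

@dataclass
class MapTile:
    tile_id: str
    hours_since_last_observation: float

def tiles_needing_survey(tiles):
    """Return stale tiles, most stale first, so idle cars can be sent there."""
    stale = [t for t in tiles
             if t.hours_since_last_observation > STALENESS_THRESHOLD_HOURS]
    return sorted(stale,
                  key=lambda t: t.hours_since_last_observation, reverse=True)

tiles = [
    MapTile("downtown", 2.0),          # driven constantly, stays fresh for free
    MapTile("industrial_park", 240.0), # ten days stale
    MapTile("suburb_edge", 400.0),     # rarely driven, most stale
]

for tile in tiles_needing_survey(tiles):
    print(tile.tile_id)  # suburb_edge, then industrial_park
```

Busy roads keep themselves fresh as a side effect of revenue trips; only the long tail of quiet streets needs dedicated survey runs.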
 
Now they have this Zeekr Assisted Drive (ZAD), which is supposedly the Mobileye SuperVision full stack of awesome. ZAD will be able to do door-to-door L2, just like all the Mobileye autonomous driving videos on YouTube. I can't wait to see it respond to traffic controls like AP and make left and right turns like FSD Beta! (/s)

Do we even know what features are included in the ZAD packages? Whatever happened to traffic lights or stop signs?
You asked sarcastically and they delivered realistically.

Here we see the Zeekr 001 performing a left turn and a U-turn at a busy intersection:


I think it shows that the Zeekr 001 will have "FSD Beta" type features. It looks like they are testing the city driving features for the advanced ZAD package.

Also, note that the additional screen appears to have the same FSD visualizations that we've seen on Mobileye's FSD demos.
 
Good blog from Motional on their framework for improving their autonomous driving:


 
You be you and keep on being wrong. ;)

 
You be you and keep on being wrong. ;)

What am I wrong about?

That Teslas cannot become driverless without some human somewhere capable of giving them instructions of some kind when they encounter an unexpected scenario?

Or that high definition maps are not that difficult to produce and maintain, are worth the effort, and are essentially required for driverless functionality?

Or that Tesla intentionally uses the critical human component of their system to mask how far away their vehicles are from becoming driverless?

If you want me to rephrase my point in a positive light: yes, I think we will be getting a useful door-to-door ADAS from Tesla sooner rather than later. But the Tesla robo-taxi fleet isn't happening soon, it isn't happening on HW3, and if it is ever built, it will almost certainly require human operators.
 
What am I wrong about?

That Teslas cannot become driverless without some human somewhere capable of giving them instructions of some kind when they encounter an unexpected scenario?

Of course it can.

L4 can be driverless and isn't required to handle EVERY possible situation.

It only has to be able to "fail" as safely as possible if it finds itself encountering something it can't handle on its own.

I agree with you BTW that HW3 is insufficient for this task... and increasingly (especially having driven the FSD Beta some now) I think it's gonna need at minimum two more cameras too, ones that can see to the sides and are mounted further forward than the current B-pillar cameras.

But the idea you can't ever have driverless without humans backing them up simply ain't so.

Not gonna get into the map debate as it's largely mental masturbation with people on both sides shuffling goalposts around about what "high definition" actually means in this context.
 
@mark95476 is a pure Tesla apologist. You cannot say anything critical or he goes into full meltdown mode.
 
L4 can be driverless and isn't required to handle EVERY possible situation.

I'm personally not as sure of this. I feel like there are too many places where the car can get stuck, and most of the places that a car can get stuck aren't known or trainable for. Getting stuck (either locally stuck, or in a navigation loop stuck) as a robo-taxi passenger isn't acceptable.

I feel like at a bare minimum, you need a robust means of detecting when the current situation isn't something that has been coded in and tested for beforehand. Then you need a way to instruct the cars about what contingent behavior is acceptable.
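That "detect the unknown, then escalate to a human" loop might look something like this sketch. The scenario labels, the confidence threshold, and the operator callback are all illustrative assumptions, not a real AV stack's API.

```python
# Sketch: handle a scene on-board only when it is both a known, trained-for
# scenario AND the classifier is confident; otherwise ask a remote human
# to set the policy. All names and thresholds here are invented.

KNOWN_SCENARIOS = {"clear_road", "stop_sign", "traffic_light", "lane_closed"}
CONFIDENCE_THRESHOLD = 0.9

def choose_action(perceived_scenario, confidence, operator):
    # Escalate when the scenario is novel or the classifier is unsure.
    if perceived_scenario not in KNOWN_SCENARIOS or confidence < CONFIDENCE_THRESHOLD:
        return operator(perceived_scenario)  # the human makes the policy call
    return f"execute:{perceived_scenario}"

def remote_operator(scenario):
    # Stand-in for a human issuing a constraint, e.g. "treat road as closed".
    return f"operator_policy_for:{scenario}"

print(choose_action("stop_sign", 0.98, remote_operator))     # handled on-board
print(choose_action("one_off_sign", 0.95, remote_operator))  # novel -> escalate
print(choose_action("stop_sign", 0.40, remote_operator))     # unsure -> escalate
```

The hard part in practice is the first branch: reliably knowing that you don't know, which is exactly the detection problem described above.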

Three of my specific concerns are: one-off signs, unknown road semantics, and detours.

A single car, with a basic pathfinding algorithm, could eventually navigate a detour without understanding the signage or context (as long as every closed road is sufficiently blocked off with a physical barrier!). But it would be pretty painful to ride in, with a lot of backtracking.
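As a toy version of that single-car scenario: the car plans on a road graph, only discovers a closure when it physically reaches the barrier, then replans from where it stands and backtracks. The road graph and the closed edge are made up for the example.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Plain BFS; returns a node list or None if the goal is unreachable."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def drive_with_unknown_closures(graph, start, goal, closed_edges):
    """Drive toward goal, learning closures only on contact; log every node."""
    graph = {k: list(v) for k, v in graph.items()}  # local copy we can edit
    pos, travelled = start, [start]
    while pos != goal:
        plan = shortest_path(graph, pos, goal)
        if plan is None:
            return travelled, False
        nxt = plan[1]
        if (pos, nxt) in closed_edges:  # physical barrier found here
            graph[pos].remove(nxt)      # remember it and replan
            continue
        pos = nxt
        travelled.append(pos)
    return travelled, True

def undirected(edges):
    g = {}
    for a, b in edges:
        g.setdefault(a, []).append(b)
        g.setdefault(b, []).append(a)
    return g

roads = undirected([("A", "B"), ("B", "C"), ("B", "D"),
                    ("C", "G"), ("D", "E"), ("E", "G")])
path, ok = drive_with_unknown_closures(roads, "A", "G",
                                       closed_edges={("C", "G"), ("G", "C")})
print(ok, path)  # True ['A', 'B', 'C', 'B', 'D', 'E', 'G']
```

The car does reach G, but only after driving all the way to the barrier at C and backtracking through B, which is exactly the painful ride described above.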

A network of cars could probably figure out detours easier with some level of cooperation and data sharing.

But a human operator is still the best solution here. A human would be alerted of the issue and be able to interpret the detour, then put rules and restrictions on the vehicles. They would also be alerted whenever a car gets stuck. They could put rules in so that the cars understand that the entire road is actually closed, not just one lane. Chuck Cook put up a video this week of FSD attempting to go around a road-closed sign, because the sign only blocked one of the two lanes!

There are just too many possible situations to train for up front right now. So much changes the moment you remove the driver. Even if the driver is only performing a fraction of a percent of the driving task, removing the human would be a fatal mistake. And with FSD, watch closely for every intervention and you'll see the human in the system is still performing a _lot_.

This post got a bit rambly, apologies lol.
 
I'm personally not as sure of this.

Well, the literal definition of L4 says it's true.

If the car can operate in an ODD (even a narrow one) and fail safely, without a human, it's L4.


Of course there's a difference between "what is technically an L4 car under the SAE definition" and "What a company is willing to actually put on the road"

A car that can safely pull over and stop on the shoulder when it can't proceed, without any human involved, is L4. But if that happens any significant fraction of the time, it's a poor robotaxi.
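That fail-safely behavior is essentially a tiny state machine: drive within the ODD, and when the car can't proceed, perform a minimal risk maneuver and stop, with no human involved. A minimal sketch, with invented state names and triggers:

```python
from enum import Enum, auto

# Toy model of the SAE L4 fallback idea discussed above. The states and
# the can_proceed / maneuver_complete signals are illustrative only.

class Mode(Enum):
    DRIVING = auto()
    MINIMAL_RISK_MANEUVER = auto()  # e.g. pull onto the shoulder
    STOPPED_SAFE = auto()

def step(mode, can_proceed, maneuver_complete=False):
    if mode is Mode.DRIVING and not can_proceed:
        return Mode.MINIMAL_RISK_MANEUVER  # fail safely, no human in the loop
    if mode is Mode.MINIMAL_RISK_MANEUVER and maneuver_complete:
        return Mode.STOPPED_SAFE
    return mode

mode = Mode.DRIVING
mode = step(mode, can_proceed=True)   # normal driving within the ODD
mode = step(mode, can_proceed=False)  # something it can't handle appears
mode = step(mode, can_proceed=False, maneuver_complete=True)
print(mode)  # Mode.STOPPED_SAFE
```

Nothing in this loop requires a remote operator, which is the technical point: L4 requires failing safely, not handling everything. Whether a fleet that stops on the shoulder very often is a viable taxi service is the separate, commercial question.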



But a human operator is still the best solution here. A human would be alerted of the issue and be able to interpret the detour, then put rules and restrictions on the vehicles. They would also be alerted whenever a car gets stuck. They could put rules in so that the cars understand that the entire road is actually closed, not just one lane. Chuck Cook put up a video this week of FSD attempting to go around a road-closed sign, because the sign only blocked one of the two lanes!

Really this depends on your goals and scalability.

If your plan is to operate a tiny handful of robotaxis in a tiny suburb in Arizona, and that's it- having human remote assist is totally doable.

You probably only need like 2 guys.

If your plan is to deploy millions of vehicles that drive themselves though-- not so much.

So in that situation you might have to settle for less "ideal" solutions like janky re-routing when roads are closed. Obviously you'll need better contextual understanding for the system to know a road is closed entirely, but that's where Tesla's massive data-gathering advantage can come into play, especially once Dojo is up and running (and probably once HW4 is in cars, allowing significantly larger and more complex NNs to run in-car).




There are just too many possible situations to train for up front right now. So much changes the moment you remove the driver. Even if the driver is only performing a fraction of a percent of the driving task, removing the human would be a fatal mistake. And with FSD, watch closely for every intervention and you'll see the human in the system is still performing a _lot_.


Right now? Sure.

But the system is still in relative infancy as far as machine learning goes, including Tesla having gone down some blind alleys and having to do pretty fundamental re-writes a few times... and we know it's already hitting up against various HW limits that are various degrees of fixable (the computer swap should be really easy, for example, once they're building them at scale; the extra cameras might be harder).

Chess took ~50 years from the first published computer program until the first one that won a single match against a world champion, but now deep learning computers have ratings far higher than any human in the world.

Go was first written for a computer in 1968... through 2014 it still couldn't beat serious players. Then deep learning entered the arena and began winning matches against the best players in the world shortly thereafter.

Improvements are still happening to FSD Beta, obviously (10.x works better than 9.x, which worked better than 8.x), but I expect you're gonna need to wait on Dojo and HW4 to start seeing genuinely MASSIVE leaps. But I do expect you'll see em.
 

Tesla pulled its latest 'Full Self Driving' beta after testers complained about false crash warnings and other bugs

Sunday afternoon Elon Musk tweeted that Tesla is “Seeing some issues with 10.3, so rolling back to 10.2 temporarily.”

This headline is such clickbait BS. Tesla wasn't responding to complaints… Alternate headline: "FSD Beta testing program performing exactly as it should." Seriously, it's catching errors and refining the system before it gets to all consumers as a fully developed feature. That's what beta testing is for. The system is working as intended.