Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
And the "RT" element in the question adds an ENTIRE additional level of regulation since it's commercial.
Yes.

But the Robotaxi element in the question ALSO adds an entire level of software and systems development.

Where is the evidence of Tesla software to manage fleets of self-driving cars, or billing, or reimbursement to vehicle owners, or scheduling, or any of the other functions for which Uber and Lyft employ hundreds of software developers? Any massive job postings? Any leaks or tests? He claimed a Robotaxi service would be launched last year!

There was never going to be a Tesla Robotaxi service in 2020, in spite of frantic spinning of Elon’s very clear statement trying to will it into existence.
 
But they have some direct control over it.

They can determine the iterations, they can determine how much $ and hardware they will throw at the back end (Dojo, etc.), they can determine how much $ and hardware they'll throw at the front end (HW4, any potential sensor changes, etc.).

No! They have no control over when FSD will be a reality. They can only control how much money, labor power, and resources they put into it. When is up to the vagaries of how well their engineers can solve the issues involved.

What level of safety?

A level that will result in fewer accidents, fatalities, or both than are caused by humans, on a per-mile basis. People who test cars for safety will figure out ways to test the cars, and there will be real-world testing in jurisdictions that have a more FSD-friendly regulatory structure.

Elon has not even applied for regulatory approval because he knows that his cars are not yet ready for it, and he does not know when they will be ready. He's saying that regulatory approval is the big unknown when he does not have the vaguest idea when FSD will be ready to ask for regulatory approval.

Any rational person, calmly looking at the present state of Tesla's "FSD" can see that they are years away from advancing beyond level 2. First they need a promising self-driving system, then they need years of beta-testing before they can move to level 3 (car is responsible but driver is in the driver's position ready to take over) and years of testing that before they go to driverless.

And nobody seriously thinks that "feature complete" is the goal or is what the public envisions as "FSD." And Tesla is definitely not going to request regulatory approval for a driverless car based on "feature complete." And they're not even at "feature complete" yet.

Tesla is and always has been a safety-oriented company. Our cars are the safest on the road. The good in all of this hubbub over FSD is that I don't believe Tesla will release a driverless car until the cars are truly safer than a human driver. The bad is that Elon will continue to lie and mislead about the timeline, and will continue to sell a pig in a poke, and gullible drivers will pay $10K for a feature that they never actually get because they'll have junked their cars from old age before driverless FSD ever comes to their car.
 
He's saying that regulatory approval is the big unknown when he does not have the vaguest idea when FSD will be ready to ask for regulatory approval.
He finally admitted that regulatory approval is not the issue (in the US) during the Q2 earnings call.

"At least in the US we don't see regulation as a fundamental limiter. We've also got to make it work and then demonstrate that if the reliability is significantly in excess of the average human driver or to be allowed... um... you know for before people to be able to use it without... uh... paying attention to the road... um... but i think we have a massive fleet so it will be I think... uh... straightforward to make the argument on statistical grounds just based on the number of interventions, you know, or especially in events that would result in a crash. At scale we think we'll have billions of miles of travel to be able to show that it is, you know, the safety of the car with the autopilot on is a 100 percent or 200 percent or more safer than the average human driver"
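The "argument on statistical grounds" in that quote boils down to comparing crash rates per mile between the fleet and human drivers. A minimal sketch of that arithmetic, with purely made-up numbers (nothing here is Tesla or NHTSA data, and the function name is my own):

```python
# Illustrative sketch of the per-mile safety comparison described above.
# All figures are invented for demonstration; they are not real fleet data.

def crashes_per_million_miles(crashes, miles):
    """Crash rate normalized to one million miles driven."""
    return crashes / miles * 1_000_000

# Hypothetical observations over the same number of miles
human_rate = crashes_per_million_miles(crashes=2_000, miles=500_000_000)      # 4.0
autopilot_rate = crashes_per_million_miles(crashes=500, miles=500_000_000)    # 1.0

# "Nx safer" in this framing just means 1/N the crash rate per mile
safety_multiple = human_rate / autopilot_rate
print(f"Autopilot appears {safety_multiple:.1f}x safer per mile")
```

The catch, of course, is that the two groups have to be driving comparable miles (highway vs. city, weather, driver demographics) for the ratio to mean anything.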
 
No! They have no control over when FSD will be a reality. They can only control how much money, labor power, and resources they put into it. When is up to the vagaries of how well their engineers can solve the issues involved.


Those two things don't agree with each other though.


"I can control how much $ I'm putting into parts and labor and tools, but I have no control over when the repair will be done"

Obviously stuff can come up that might make it faster or slower (stripped bolts, delayed parts deliveries, one of the workers calls in sick, whatever) but you absolutely have SOME control over the speed of the process when you control all the inputs to it and the resources applied to it.







A level that will result in fewer accidents or fatalities or both, than are caused by humans, on a per-mile basis. People who test cars for safety will figure out ways to test the cars, and there will be real-world testing in jurisdictions that have a more FSD-friendly regulatory structure.

Tesla claims they already have that at L2.

And in the US the regulators are fine leaving that as is.

In the EU they're not and significantly nerf the features.


So clearly your idea there's ONE standard of safety ALL regulators will agree on is...not how that has worked at all.



Elon has not even applied for regulatory approval because he knows that his cars are not yet ready for it,

Obviously. Because step 2 isn't done yet.

Why would you ask for approval of a system before you can even demonstrate the safety level of the system?


he does not have the vaguest idea when FSD will be ready to ask for regulatory approval.

On the contrary- he has LOTS of ideas when it'll be ready.

So far they've all been wrong of course- I mean that was kind of the original point of the thread :)


Any rational person, calmly looking at the present state of Tesla's "FSD" can see that they are years away from advancing beyond level 2.


Adding "rational" to a baseless claim doesn't make it any less baseless.


"That toddler keeps falling over every 6 inches, it'll be YEARS before they can walk 5 feet!"


That's not how the pace of machine learning works at all.


Again, I'm not at all claiming they'll have L5 soon or anything... but I don't think the present state of FSD is something a "rational person" can definitively say won't get to, say, L3, or a tight-ODD L4, sooner than you suggest.


Pretty much "solve the stopped object/partial in lane issue with vision" and you're at L3 highway today for example.


First they need a promising self-driving system, then they need years of beta-testing before they can move to level 3

Can you give us rational, fact-based evidence, with detailed rates of machine learning for this task, to support this claim?

(Spoiler: nope)



And nobody seriously thinks that "feature complete" is the goal or is what the public envisions as "FSD."

Nor did anybody claim it was.

Elon went out of his way to be clear that's step one of a three-step process.

You're not sounding very rational or calm here my dude.



And Tesla is definitely not going to request regulatory approval for a driverless car based on "feature complete."

Again- nobody suggested they would.

Again Elon was super super clear regulatory requests were the last of three steps-- feature complete was step 1.



And they're not even at "feature complete" yet.

What feature are they missing? Specifically? (Using the definition of "feature complete" already repeatedly explained.)
 
He finally admitted that regulatory approval is not the issue (in the US) during the Q2 earnings call.

"At least in the US we don't see regulation as a fundamental limiter. We've also got to make it work and then demonstrate that if the reliability is significantly in excess of the average human driver or to be allowed... um... you know for before people to be able to use it without... uh... paying attention to the road... um... but i think we have a massive fleet so it will be I think... uh... straightforward to make the argument on statistical grounds just based on the number of interventions, you know, or especially in events that would result in a crash. At scale we think we'll have billions of miles of travel to be able to show that it is, you know, the safety of the car with the autopilot on is a 100 percent or 200 percent or more safer than the average human driver"
The current Level 2 V9.x is fine and dandy. Tesla should give paid-in-full FSD owners the option to accept it as it currently exists, with or without an NDA, hold-harmless and indemnity agreements, etc., etc., ad nauseam! I'm well past ready for it.
 
Those two things don't agree with each other though.


"I can control how much $ I'm putting into parts and labor and tools, but I have no control over when the repair will be done"

Obviously stuff can come up that might make it faster or slower (stripped bolts, delayed parts deliveries, one of the workers calls in sick, whatever) but you absolutely have SOME control over the speed of the process when you control all the inputs to it and the resources applied to it.

This is not a valid analogy: When you repair something you are fixing broken or flawed parts in a known technology. You have a device that you could build from scratch easily if you were willing to put in enough money. Repair is just a matter of figuring out which part is defective and replacing it.

Creating a whole new technology that does not yet exist is in no way similar. You cannot know whether the thing you want to invent is even possible (in this case we're all agreed it is, but we cannot know until it's done) and you do not know how to build it. You're experimenting and theorizing and testing. You don't know what might go wrong or which lines of investigation may prove to be dead ends. You cannot know how long it will take.

As for how I know it will take years of testing to get from an actual driverless car to mass distribution: Because I believe that Tesla will not release a product until it's been verified to be safe when used as intended. And verification takes time. AP is still "beta" even after all these years. And current FSD, while truly amazing, is nowhere near being able to dispense with a fully-alert driver.
 
Pay me $10,000 for the opportunity to kill or maim you or your loved ones.
Percentage of people taking Elon up on this offer, with regard to 2021 Model S:
[attached chart: poll results]

Source: Google Docs
 
Does this timeline change for you based on AI Day comments?

Then: "All Teslas produced since 2016 have necessary hardware for FSD"
Now: "We're coming out with new FSD hardware next year"

This is news to me. (I don't follow the day-to-day news from Tesla.) Admitting that they need new hardware is a big step forward. I did not think they'd ever achieve FSD with the present hardware. This admission (IMO) puts them back in the race. We'll see what that hardware is and what it can do. I still think that a fully autonomous car that I can buy and that will drive itself where I want to go (nothing rugged, but the main non-highway north-south road here will be a big challenge) is at least a decade away, and maybe more. From any car maker.
 
This is news to me. (I don't follow the day-to-day news from Tesla.) Admitting that they need new hardware is a big step forward.

This didn't actually happen though.

They said there was new HW coming, but they felt the current HW was still good enough to do self driving 2-3x better than a human, while the new stuff might be 10x better.

Might not work out that way, but the current company line is "coming but not required to self drive"
 
This didn't actually happen though.

They said there was new HW coming, but they felt the current HW was still good enough to do self driving 2-3x better than a human, while the new stuff might be 10x better.

Might not work out that way, but the current company line is "coming but not required to self drive"
Why on earth make a huge investment in new hardware, and in software to use it, if it is not needed to be two or three times as safe as a human? This should upset stakeholders, I guess! ;-)
 
This didn't actually happen though.

They said there was new HW coming, but they felt the current HW was still good enough to do self driving 2-3x better than a human, while the new stuff might be 10x better.

Might not work out that way, but the current company line is "coming but not required to self drive"

Thank you for the correction. I'm still encouraged by the fact that they're developing new hardware, because I don't think they can achieve FSD with the current hardware. So even with the correction, the fact that they're going to come out with new hardware, IMO, puts them back in the game.

Why on earth make a huge investment in new hardware, and in software to use it, if it is not needed to be two or three times as safe as a human? This should upset stakeholders, I guess! ;-)

Because 10 X better than human is far more desirable than 2 or 3 X better and will gain far more public acceptance. And as noted above, I don't think they can get even equal to human with the current hardware. They say they can. I don't believe it. They've already admitted that my car does not have adequate hardware, contrary to their promise when I bought it.

Tesla does not like to admit they were wrong. They never issued an apology for promising cross-country full autonomy to the early buyers of FSD; they just quietly changed the promise they made to later buyers. I see the newest announcement as a two-step process: First claim that the old hardware is still good enough but the new will be even better, then quietly stop promising FSD on the old hardware. Maybe even offering to compensate people whose cars cannot be made autonomous.
 
Thank you for the correction. I'm still encouraged by the fact that they're developing new hardware, because I don't think they can achieve FSD with the current hardware. So even with the correction, the fact that they're going to come out with new hardware, IMO, puts them back in the game.



Because 10 X better than human is far more desirable than 2 or 3 X better and will gain far more public acceptance. And as noted above, I don't think they can get even equal to human with the current hardware. They say they can. I don't believe it. They've already admitted that my car does not have adequate hardware, contrary to their promise when I bought it.

Tesla does not like to admit they were wrong. They never issued an apology for promising cross-country full autonomy to the early buyers of FSD; they just quietly changed the promise they made to later buyers. I see the newest announcement as a two-step process: First claim that the old hardware is still good enough but the new will be even better, then quietly stop promising FSD on the old hardware. Maybe even offering to compensate people whose cars cannot be made autonomous.
I agree with you. It is hard to admit one is wrong. And especially if money/stock is involved.

In my eyes, the new HW4 is a billion-dollar budget overrun, but of course I don't know what the budget has been since 2016 or what is planned.
 
Why on earth make a huge investment in new hardware, and in software to use it, if it is not needed to be two or three times as safe as a human? This should upset stakeholders, I guess! ;-)

Not really.

For example HW3 lowered their costs per vehicle. I'd expect HW4 will too. Great for shareholders- especially as volume production ramps- even if they had to retrofit some folks.

Plus- they don't make any $ off of it if you don't buy FSD. Being able to say "10x safer" rather than 2-3x safer might induce more buyers for FSD- which is also good for shareholders.
 
Not really.

For example HW3 lowered their costs per vehicle. I'd expect HW4 will too. Great for shareholders- especially as volume production ramps- even if they had to retrofit some folks.

Plus- they don't make any $ off of it if you don't buy FSD. Being able to say "10x safer" rather than 2-3x safer might induce more buyers for FSD- which is also good for shareholders.
Good arguments, but I would guess that would be a separate investment decision, made only when they are close to finishing the first iteration. Only then would a second, cheaper, and better system be a smart investment.

And if they come to market with documented 3x safer than human, they will crush all other brands, so 10x is not needed.
 
Not everyone lives on Tesla social media. I only started reading Tesla forums since I bought my car.

I bought my Tesla without knowing much about Elon Musk. What convinced me to get FSD was the showroom representative (service advisor) who claimed (what the website also said) that there will be full city self driving within the year.
Exactly the same for me.
 
... if they come to market with documented 3x safer than human, they will crush all other brands, so 10x is not needed.

Big "IF." And what if they come to market with 3 X safer than human six months after some other company comes out with 5 X safer than human? It's like Pascal's Wager. Pascal assumes that either Christianity is right or there's no God at all. You are assuming that if Tesla comes out with 3 X FSD there will be no competition that's even safer than that.

In fact, it's a race among many players, and the competition doesn't stop with the first autonomous car. Maybe ten years from now, on HW17.3 (or next year on HW3 if you're that optimistic), Tesla comes out with 3 X FSD and a month later Hyundai comes out with 5 X FSD. Tesla better have 10 X in the pipeline or it will be relegated to history. Or maybe Ford comes out with 5 X while Tesla is still working on 3 X. If Tesla had all its eggs in the 3 X basket they might as well close up shop.

3 X is barely adequate for public acceptance and the race doesn't stop there. Any auto maker that hopes to stay in the game will need at least 10 X better than human safety.

P.S. And a lot of people would buy an autonomous car that's ten times safer than human who would not buy a car that's three times safer. 10X opens up a huge market that's not there for 3X.
 