Robotaxi: The business of competing with human drivers

Actually, getting to that point isn't very hard. Just look at the last 10-14 years of failed autonomy efforts and the billions of dollars invested in such ventures.

Getting from prototype hardware and software to autonomy has proven to be the really hard part.
Also keep in mind that even having a working prototype doesn't guarantee anything. You still have to create a viable, profitable business plan/model. You can have all the investor funds in the world, but if you never build a business that can turn a profit, those investors go away.
Having the technical engineering expertise to build the working prototype is just the first step.
Designing the business plan that will succeed in making a profit long-term is a completely different skill set.
 
...Some have hinted that Tesla wants to own its own fleet of RT's (the Waymo model). But I think a lot of Tesla owners would like their personal cars to be capable of L4 driving, so that they can be chauffeured around in their own vehicle, and potentially (or not) send that vehicle out to earn more money when they don't need it (the Uber model).
This. I would mostly use my vehicle to take myself and my family where we need to go.
 
Define working?
Working at a level of reliability and confidence such that the authorities will approve it to operate autonomously (L4), and the public is prepared to purchase it individually and/or a large company or startup is prepared to purchase a fleet of them.

If you build it they will come.

Waymo have a business model that will progress them towards achieving a working prototype, if they continue to fund it. There have been questions about its ODD scalability due to mapping requirements.

Tesla have a business model that will progress them towards achieving a working prototype via a different pathway: the progression of L2 ADAS. There have been questions about whether their current sensor suite will ever get them to L4 with a good enough ODD.

And, of course, there are others in the race.

Waymo has surprised sceptics, and recently Tesla's V12 has surprised other sceptics. However, both have a way to go, and I doubt either are going to give up. The only question is time. They are all aware that the first-to-market advantage is significant (see Uber), and are also aware that the last-to-market disadvantage can be crippling (see Toyota).
 
...Waymo have a business model that will progress them towards achieving a working prototype, if they continue to fund it...
You understand that Waymo has Level 4 vehicles providing driverless rides today, right?
 
If you have a few thousand zero-disengagement drives in a row, then we're approaching autonomy levels of reliability.
To be fair, we really don't know what level of reliability the governing bodies are going to require before they'll approve anything.
I don't think they even know at this point.

Are they going to compare it to human data? Wrecks per trip average? Does L4 have to have half the wrecks? 1/100 wrecks? 1/1000?

Also, IMO, the reasoning behind why a disengagement was done has to be taken into account.
As the autonomy evolves, there is going to be a point where it's making decisions that a driver may not make, but that doesn't mean it's the wrong decision. So just because it's choosing one way to do something, and the driver wants to do something else, doesn't make that disengagement a negative.
If the autonomy is making the less risky choice, then it would actually have a higher safety score than the typical driver, making it a better choice for RT's.
It just has to get to the point where the autonomy can recognize most of the hazards and not miss the ones that would be blatantly obvious to a human driver. Again, an actual number/percentage would need to be established by the powers that be, and I'm not sure they even have those figures yet.
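
No published threshold exists yet, but for a sense of scale, here's a minimal back-of-envelope sketch (Python; the 1-in-1,000-trips human baseline is purely an illustrative assumption, not a real statistic) of how much incident-free driving it would take to statistically support each of those bars:

```python
# Back-of-envelope only: how many consecutive incident-free trips would it
# take to support a claimed per-trip crash rate at ~95% confidence?
# Uses the "rule of three": with zero events observed in n trials, the
# 95% upper confidence bound on the event rate is about 3/n.

HUMAN_RATE = 1 / 1_000  # hypothetical human crash rate per trip (illustrative)

targets = {
    "half the human rate": HUMAN_RATE / 2,
    "1/100 of the human rate": HUMAN_RATE / 100,
    "1/1000 of the human rate": HUMAN_RATE / 1_000,
}

for label, rate in targets.items():
    trips_needed = 3 / rate  # rule-of-three sample size for zero events
    print(f"{label}: ~{trips_needed:,.0f} incident-free trips")
```

The jump from "half the wrecks" to "1/1000 the wrecks" is three orders of magnitude more required evidence, which is part of why where the bar gets set matters so much.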
 
I'm pretty sure an accident every thousand drives is unacceptable. No need to overthink it beyond that at this point. ;)

I didn't use the 1000 drives number.
My point was that some number has to be established. Then the bar is set for companies to pass.

Quick internet search:
  • about 20,000 reported crashes per DAY in the US, so roughly 50 days to reach 1 million.
  • ~7.5 per million are non-fatal w/injury
  • The average driver can expect to be in 3-4 accidents in their driving lifetime (more as a passenger)
Just using those figures, if autonomy can reduce those by 50%, is that good enough? I would think so, but I don't get to make those decisions.
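
For what it's worth, the arithmetic on those figures looks like this (a quick sketch; every input is just the rough number from the search above, so treat the outputs as equally rough):

```python
# Rough arithmetic on the quoted figures; every input is approximate.

crashes_per_day = 20_000                       # reported US crashes per day
days_to_million = 1_000_000 / crashes_per_day  # ~50 days
crashes_per_year = crashes_per_day * 365       # ~7.3 million per year

reduction = 0.50                               # the hypothetical 50% cut
avoided_per_year = crashes_per_year * reduction

print(f"Days to reach 1M crashes: {days_to_million:.0f}")
print(f"Reported crashes per year: ~{crashes_per_year:,}")
print(f"Avoided per year at a {reduction:.0%} reduction: ~{avoided_per_year:,.0f}")
```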
 
...Just using those figures, if autonomy can reduce those by 50%, is that good enough? I would think so, but I don't get to make those decisions.
No. You can't take the whole average.

You can only compare with collisions that are the fault of drivers who are:
1) experienced
2) sober and not high
3) not obviously deliberately driving like an idiot
4) not texting or otherwise significantly distracted in a really dumb way
5) not doing some other things I missed

When you do that, you'll end up requiring a reasonable, and much higher, standard.
 
No. You can't take the whole average. You can only compare with collisions that are the fault of drivers who are experienced, sober and not high, not deliberately driving like an idiot, and not texting or otherwise significantly distracted...
Under those parameters, there wouldn't be enough data to compare. 😂
You'd be taking out weather-related crashes and mechanical failures as well.

How many accidents don't involve someone drunk/high, distracted, or otherwise intentionally breaking the law?
What's left? How else do accidents happen? Only through a driver not paying attention or making a misjudgment, right?
That's the whole purpose of autonomy: full-time attention and safer decision-making.

You cannot remove any of those statistics for "human" errors/behaviors.
The whole point is to get a true representation of the difference autonomy would make. Drunk drivers wouldn't have to be driving. Texters wouldn't have to be paying attention. Idiots don't have to make the decisions.
All accidents count. You don't get to choose which accidents count and which don't, for humans or autonomy.
Both are judged the same.

At the end of the day, how many more lives would be saved if we removed humans from driving? How good does autonomy need to get? Just 1% better than the best human drivers; that's my bar. Because once we reach that level, it'll only continue to improve the more it learns, so if it's better than any human, even a tiny bit, then it's time to let it take the wheel.
 
If you have a few thousand zero-disengagement drives in a row, then we're approaching autonomy levels of reliability.
I would take it a step further and say it should be a thousand people with a thousand zero-disengagement drives each.

Doing the same commute over and over won't quite expose the AI to everything it might encounter. A greater variety of routes will expose the AI to more situations.
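
To put a rough number on why fleet-wide variety beats one commuter's streak, here's an illustrative sketch (Python, using the same rule-of-three bound as in the earlier sketch; the million-drive fleet total is a hypothetical, not a reported figure):

```python
# With zero disengagements observed in n drives, the 95% upper confidence
# bound on the per-drive failure rate is roughly 3/n ("rule of three").
# The bound only covers conditions actually sampled, which is why a
# thousand drivers on varied routes tell you more than one commuter's streak.

def failure_rate_bound(clean_drives: int) -> float:
    """Approximate 95% upper bound on per-drive failure rate."""
    return 3 / clean_drives

for n in (1_000, 1_000 * 1_000):  # one commuter's streak vs. 1,000 x 1,000
    bound = failure_rate_bound(n)
    print(f"{n:>9,} clean drives -> failure rate below ~1 in {1 / bound:,.0f}")
```

The statistics only generalize to the kinds of driving in the sample, so the route variety is doing as much work here as the raw drive count.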
 