Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Autonomous Car Progress

You keep saying that people are willing to pay more for a private vehicle but it's not true. People pay about $1.50 a mile to ride an Uber here in San Diego. Very few people spend that much to own and operate their private vehicle (AAA seems to say average is $0.56 a mile for 15k miles a year).

I'm not talking about the amount people will pay for transportation. I'm talking about the cost of putting an asset you already own into commercial use, versus purchasing that same asset in order to put it into commercial use. If you already own an asset (in this case your personal car) it costs you nothing to put it to work when you're not using it. But if you want to start a taxi business from scratch you need to buy the car. The difference in the start-up cost is the entire price of a car.

If you put your personal car, that you already own, into use as a taxi, all the money it makes is profit. But if you buy a car just to use as a taxi, the money it makes isn't profit until it's paid off its own purchase price.
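A back-of-envelope sketch of this argument, using the per-mile figures quoted above (the $1.50/mile Uber fare and AAA's $0.56/mile operating cost) plus a hypothetical $40,000 purchase price:

```python
# Back-of-envelope comparison: putting an already-owned car to work
# vs. buying a car solely for taxi use. Numbers are illustrative,
# taken from the figures quoted in the thread.

FARE_PER_MILE = 1.50   # what riders pay (San Diego Uber estimate)
COST_PER_MILE = 0.56   # AAA average operating cost per mile
CAR_PRICE = 40_000     # hypothetical purchase price of the vehicle

def margin_per_mile():
    """Operating profit per paid mile, ignoring the car's purchase price."""
    return FARE_PER_MILE - COST_PER_MILE

def breakeven_miles(car_price):
    """Paid miles a purpose-bought taxi must drive to recoup its price."""
    return car_price / margin_per_mile()

# An already-owned car earns this margin from mile one; a purpose-bought
# car must first drive off its entire purchase price.
print(f"margin per mile: ${margin_per_mile():.2f}")
print(f"breakeven: {breakeven_miles(CAR_PRICE):,.0f} miles")
```

With these illustrative numbers, a car bought just to be a taxi has to log tens of thousands of paid miles before it earns its first dollar of actual profit.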
 
Most, if not all, believe it will happen; the time frame is the question. I'm thinking it will happen in ten years for something like Level 4, less time for Level 3.

Call me a skeptic. It is more of a fantasy than most people think, and it will remain small-scale. You are more likely to get FSD Level 3 from Tesla than you are to get FSD from Waymo or Cruise in the next few years.

With the existing HW on a HW3 vehicle?

I'm not going to bet against FSD given a 10-year time frame for FSD with any hardware. That's way too open-ended, both in what the hardware will look like and in what neural networks will be capable of at the time. Lidar isn't going to look anything like it used to; it's going to be much more like what the iPhone 12 has.

Where people are disbelievers is with the existing hardware (or upgraded HW for free) and a narrower time frame of L4 by the first day of 2025. In fact, I couldn't find anyone to bet against it. Maybe only one person in this autonomous section was even close to voting for that.

L3 isn't something people are especially interested in, and it doesn't factor at all into the argument I was making. I don't question that you'll have L3 (a traffic-assist style system) from Tesla/Mobileye/etc. in most areas (especially the EU) before L4 from Waymo/Cruise goes beyond tiny test areas.

But you'll definitely have L4 from Waymo/Cruise in major metropolitan areas (Phoenix, SF, etc.) well before you have L4 from Tesla, and well under the 10-year time span.

Ultimately it's going to be a rollout where some people have it well before others do.
 
If you took a poll you'd find the opposite. Who wouldn't be interested in working on their computer, or watching a movie while Tesla is driving you?
Agreed, Interstate Highway Level 3 would be awesome for people living in California. Traffic jam speeds only would be less awesome.
My dream would be to have autonomous only lanes that achieved higher throughput by caravanning.
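A rough illustration of why caravanning raises throughput: lane capacity is approximately speed divided by the space each vehicle occupies (car length plus following gap). The speeds, lengths, and gaps below are hypothetical, not from the thread:

```python
# Rough lane-throughput sketch: capacity = speed / (length + gap).
# Caravanning increases throughput by shrinking the following gap.
# All numbers are illustrative assumptions.

def vehicles_per_hour(speed_mph, car_length_ft, gap_ft):
    """Vehicles passing a point per hour in a single lane."""
    feet_per_hour = speed_mph * 5280
    return feet_per_hour / (car_length_ft + gap_ft)

# Human drivers at a ~2-second following distance vs. a tight caravan
two_second_gap = 65 * 5280 / 3600 * 2   # ~190 ft at 65 mph
human = vehicles_per_hour(65, 15, gap_ft=two_second_gap)
caravan = vehicles_per_hour(65, 15, gap_ft=20)

print(f"human-driven lane: {human:,.0f} veh/h")
print(f"caravan lane:      {caravan:,.0f} veh/h")
```

Under these assumptions a tightly caravanning lane carries several times the vehicles of a human-driven lane at the same speed, which is the appeal of autonomous-only lanes.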
 
That's an "old" video from August. I think we've already shared that video on this forum. But thanks.

Mobileye is without a doubt a leader in FSD.

So Mobileye feels they can get 10,000 hours per accident with cameras. That's interesting, as Amnon Shashua is an actual adult who likely gives predictions he believes are true.

This makes me believe that Tesla may get further with current hardware than I thought. I wonder how Tesla can prove they can do 20,000 hours per accident.

I also wonder how much mapping Tesla is actually doing.
 
So Mobileye feels they can get 10,000 hours per accident with cameras. That's interesting, as Amnon Shashua is an actual adult who likely gives predictions he believes are true.

This makes me believe that Tesla may get further with current hardware than I thought. I wonder how Tesla can prove they can do 20,000 hours per accident.

Keep a few things in mind. Mobileye uses 12 cameras to Tesla's 8, which might make their FSD more reliable. Also, Mobileye does use HD maps, which helps the reliability of the system; Tesla does not use HD maps.

Now, Tesla might still achieve 10,000 or even 20,000 hours per accident with camera-only at some point. The problem is that even 20,000 hours per accident would not be anywhere near good enough to remove driver supervision, according to Amnon.

That's the whole point of why Mobileye wants to combine 2 FSD systems, a camera-only system and a lidar system. Mobileye's strategy is that if both systems can independently achieve 10,000 hours per accident, then the combined system will achieve 10,000 × 10,000, or 100 million hours per accident (roughly 10 million after safety margins), which would be good enough to remove driver supervision and achieve driverless L5. That's why Mobileye is still planning to include lidar even though their camera-only system can do FSD.
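Mobileye's redundancy arithmetic can be sketched directly from the 10,000-hours figure above:

```python
# Sketch of Mobileye's "true redundancy" arithmetic. If two subsystems
# each fail once per 10,000 hours AND the failures are independent,
# the chance that both fail at the same time is the product of the rates.

cam_failure = 1e-4     # camera-only subsystem: 1 accident per 10,000 hours
lidar_failure = 1e-4   # radar/lidar subsystem: same target rate

combined = cam_failure * lidar_failure  # only valid under independence

print(f"hours per accident, combined: {1 / combined:,.0f}")
```

The product rule is the entire load-bearing assumption here: it holds only to the extent the two subsystems fail independently, which is exactly what gets questioned later in the thread.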

I also wonder how much mapping Tesla is actually doing.

As far as we know, Tesla is not doing any mapping themselves. Tesla is not doing HD maps. They are using 3rd party standard maps.
 
Where people are disbelievers is with the existing Hardware (or upgraded HW for free) ...

Agreed.

L3 isn't something people are especially interested in ...

Disagree. I'd pay whatever they charge to have the current features of my EAP at Level 3. Doesn't have to have any of the added features of the "FSD" package. I'd even trade my Model 3 for the latest Model 3 if that would get me Level 3 sooner than waiting for them to swap out the computer.

Ultimately its going to be a roll out where some people will have it well before other people have it.

Definitely true for robotaxi availability; that's already the case. And manufacturers that go with geofencing will have something like cell phones: your car will be driverless-capable in certain geographic areas and/or on certain roads and not others, the way your phone might have 4G coverage in some places and only 3G in others. But Tesla seems committed to not geofencing, so it would seem that if and when Tesla gets there, it will be everywhere at once. Or, at worst, available for purchase by region but operable everywhere.

So Mobileye feels they can get 10,000 hours per accident with cameras. That's interesting, as Amnon Shashua is an actual adult who likely gives predictions he believes are true.

I really think that Musk only gives predictions that he believes are true. But he is so wildly and unrealistically optimistic that his predictions are irrelevant to the consumer. OTOH, without that wild optimism we would not have Tesla or SpaceX today. You have to take the bad with the good.
 
Didn't Musk alter his position on mapping a couple of years ago? The large amount of upload from the car is perhaps partly location data. Mobileye does this at fairly low bandwidth.

No. Elon has not changed his mind on HD maps as far as we know. Tesla uses standard maps, i.e. not cm-level maps. However, Tesla's maps do contain additional information such as the location of traffic signs, stop signs, and speed limits.
 
Keep a few things in mind. Mobileye uses 12 cameras to Tesla's 8, which might make their FSD more reliable. Also, Mobileye does use HD maps, which helps the reliability of the system; Tesla does not use HD maps.

Now, Tesla might still achieve 10,000 or even 20,000 hours per accident with camera-only at some point. The problem is that even 20,000 hours per accident would not be anywhere near good enough to remove driver supervision, according to Amnon.

That's the whole point of why Mobileye wants to combine 2 FSD systems, a camera-only system and a lidar system. Mobileye's strategy is that if both systems can independently achieve 10,000 hours per accident, then the combined system will achieve 10,000 × 10,000, or 100 million hours per accident (roughly 10 million after safety margins), which would be good enough to remove driver supervision and achieve driverless L5. That's why Mobileye is still planning to include lidar even though their camera-only system can do FSD.



As far as we know, Tesla is not doing any mapping themselves. Tesla is not doing HD maps. They are using 3rd party standard maps.

Did this come from Mobileye, or is it your theory? It makes no sense that you would have 2 FSD systems running one car, and also no sense that you can just multiply the numbers to get a new incidence rate.
 
Did this come from Mobileye, or is it your theory? It makes no sense that you would have 2 FSD systems running one car, and also no sense that you can just multiply the numbers to get a new incidence rate.
Yes, that's what they say they're going to do. Whether it works or not would be dependent on how correlated the failures are and how well they can choose the safer plan when there is a conflict between the two systems. I'm also skeptical.
Intel’s Mobileye has a plan to dominate self-driving—and it might work
 
Did this come from Mobileye, or is it your theory? It makes no sense that you would have 2 FSD systems running one car, and also no sense that you can just multiply the numbers to get a new incidence rate.

This comes straight from Mobileye.

Amnon Shashua at CES 2020 at the 13:18 mark: "The way we reach this number is redundancy, 2 redundancies. Not fusion, which is the dominant school of thought in the industry, but actually to separate, to have separate streams: one stream that is only camera and one stream that is only radar and lidar. Each one of them can reach 10^-4, and because those systems are approximately independent, a product of them will give us approximately 10^-8, and with safety margins, and because they are not really statistically independent, we will reach our 10^-7."

Source:


From the demo at the 6mn38 mark:

"Let me elaborate on our sensing stack. The system in action here is camera only. However, this is only one part of our final sensing stack, which is composed of 2 separate subsystems, one relying on cameras, like we have here, and the second subsystem relying on radars and lidars. The goal is to achieve full self-driving capabilities with each of those subsystems, such that our driving policy mechanism will eventually be fed from two completely independent environmental models. We call this concept 'true redundancy'."

Source:

 
Did this come from Mobileye, or is it your theory? It makes no sense that you would have 2 FSD systems running one car, and also no sense that you can just multiply the numbers to get a new incidence rate.

If one is 1 in 10,000 and the other is independently also 1 in 10,000, then if you combine the two, statistically, for something to slip past both it'd have to hit a 1 in 10,000, and then another 1 in 10,000, making it 1 in 100 million.

However, these things can't really be statistically independent. If the situation is an "edge case" enough that it hit the 1 in 10,000 chance that the camera would miss it, then the chances of the LIDAR also missing it can't be 1 in 10,000 as well - the LIDAR's chances of missing it would also increase.

In more real life terms, this means that the strengths and weaknesses of camera and LIDAR are different, but they do overlap (if the camera has a hard time with something, in some cases the LIDAR will also have a hard time with it). That's probably why they quoted 1 in 10 million instead of 100 million, though I have no idea how they came up with that approximation either.
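The effect of correlated failures can be sketched with the standard formula for the joint probability of two correlated Bernoulli events, P(A∩B) = p₁p₂ + ρ·√(p₁(1−p₁)·p₂(1−p₂)). The failure rates come from the thread; the ρ = 0.5 value is purely illustrative:

```python
import math

def p_both_fail(p1, p2, rho):
    """P(both subsystems fail), modeling each failure as a Bernoulli
    event with Pearson correlation rho (rho = 0 means independent)."""
    return p1 * p2 + rho * math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))

p = 1e-4  # each subsystem: 1 failure per 10,000 hours

independent = p_both_fail(p, p, rho=0.0)  # the headline 1-in-100M figure
correlated = p_both_fail(p, p, rho=0.5)   # same edge cases trip both systems

print(f"independent: 1 per {1 / independent:,.0f} hours")
print(f"rho = 0.5:   1 per {1 / correlated:,.0f} hours")
```

Under this simple model, a correlation of 0.5 drags the combined system from 1 accident per 100 million hours down to roughly 1 per 20,000 hours, barely better than either subsystem alone, which is why the degree of overlap in failure modes matters so much.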
 
This comes straight from Mobileye.

Amnon Shashua at CES 2020 at the 13:18 mark: "The way we reach this number is redundancy, 2 redundancies. Not fusion, which is the dominant school of thought in the industry, but actually to separate, to have separate streams: one stream that is only camera and one stream that is only radar and lidar. Each one of them can reach 10^-4, and because those systems are approximately independent, a product of them will give us approximately 10^-8, and with safety margins, and because they are not really statistically independent, we will reach our 10^-7."

Source:


From the demo at the 6mn38 mark:

"Let me elaborate on our sensing stack. The system in action here is camera only. However, this is only one part of our final sensing stack, which is composed of 2 separate subsystems, one relying on cameras, like we have here, and the second subsystem relying on radars and lidars. The goal is to achieve full self-driving capabilities with each of those subsystems, such that our driving policy mechanism will eventually be fed from two completely independent environmental models. We call this concept 'true redundancy'."

Source:


There is no way that the failure modes / edge cases of camera system and lidar system are going to be entirely orthogonal & independent. They are at least going to be partially correlated. Maybe 0.5 correlation?

I don't know the computation but this would massively affect the output. It seems Shashua agrees but only downgrades his rate by one order of magnitude. That still seems aggressive to me but of course I don't have the data to make a real claim.
 
There is no way that the failure modes / edge cases of camera system and lidar system are going to be entirely orthogonal & independent. They are at least going to be partially correlated. Maybe 0.5 correlation?

I don't know the computation but this would massively affect the output. It seems Shashua agrees but only downgrades his rate by one order of magnitude. That still seems aggressive to me but of course I don't have the data to make a real claim.

True. That is why Shashua admits they are not completely statistically independent. It seems like he is "hand-waving" a bit. He feels that if he downgrades by a factor of 10, it will be "close enough" to the 10^-7 they want.
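Working Shashua's numbers backwards under the same simple correlated-Bernoulli model (a hypothetical model, not Mobileye's actual math): the claimed 10^-7 joint rate implies the two streams' failures are assumed to be almost perfectly uncorrelated.

```python
def implied_rho(p, target):
    """Correlation between two Bernoulli failures (each with probability p)
    that would yield a joint failure probability of `target`.
    Inverts P(both) = p*p + rho * p * (1 - p) for equal p."""
    return (target - p * p) / (p * (1 - p))

p = 1e-4        # each subsystem: 1 failure per 10,000 hours
target = 1e-7   # the post-safety-margin claim: 1 per 10 million hours

print(f"implied correlation: {implied_rho(p, target):.4f}")
```

Under this model the one-order-of-magnitude downgrade corresponds to a correlation of only about 0.001 between the two streams, which is what makes the 10^-7 figure look aggressive to skeptics in this thread.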
 
If you took a poll you'd find the opposite. Who wouldn't be interested in working on their computer, or watching a movie while Tesla is driving you?

In the context of the discussion?

The context of the discussion was robotaxis, and L3 isn't even relevant for it. Your car can't join a car sharing autonomous fleet network with only L3 capability. L3 won't allow your car to earn money for you.

The way I perceive L3 is as a traffic-assist level limited to speeds under 35 mph and to controlled-access freeways/highways. Sure, the level itself allows for more than that, but so far we haven't seen what regulatory agencies or insurance companies will allow.

As a traffic assist package I don't think most people will find it all that compelling.

Then there are questions as to whether Tesla could even implement L3 without proper driver monitoring. The car needs to be able to make sure you don't fall asleep, and it has no way to do so without driver monitoring. I don't see Tesla being capable of doing L3 with the existing HW (across the Model S, X, Y, and 3 platforms), and in fact that's why I put two different L3 levels in the betting-pool recommendation back when people were going to back up their beliefs with bets.

My plan was to bet against L3 in a HW3 Tesla all the way till the end of 2024.

Worst case I lose the bet, but I get L3.

I definitely wouldn't mind a freeway-speed L3 system, as that would be pretty sweet. But I don't think it's going to happen with a Tesla, and I even wonder if it will happen with a competitor's EV before 2024.

Now maybe I'm biased, because I'm someone who believes L3 is a bad idea. I'd much rather have a rest-stop-to-rest-stop L4 system, as that avoids the entire hand-off issue L3 has.
 
The way I perceive L3 is as a traffic-assist level limited to speeds under 35 mph and to controlled-access freeways/highways. Sure, the level itself allows for more than that, but so far we haven't seen what regulatory agencies or insurance companies will allow.

In theory, L3 can do more than just a traffic assist at low speeds. I think the main reason regulators are limiting L3 so far to traffic jams at low speeds on limited freeways is to increase safety. The problem with L3 is that it shares responsibility with the driver. It needs to do FSD but also be able to ask a driver to stop what they are doing and take over on command. There could be driving situations where that transition is tricky. And if the L3 does not transition safely, that would be a problem. So to avoid cases where the L3 might not transition safely, regulators nerfed L3 to a very limited ODD where they are confident it could handle it and transition safely.

Personally, the transition between L3 and the driver is the reason why I think L3 is a bad idea. I think it is better to just focus on L4 where you can remove the driver completely.
 