
The competitive landscape of autonomous car companies

I don’t have an us vs. them mentality with regard to Tesla and other companies when it comes to autonomy. I care most about preventing people from dying in car accidents. Two of my friends died that way. It is so heartwrenching to think about them — their plans for the rest of their lives, and how their parents must feel. Car crashes kill about as many people as HIV/AIDS but (unlike HIV/AIDS) we don’t think of road deaths as a crisis because we assume it can’t be solved — we just accept it as part of life, like dying of old age. Self-driving cars offer a new hope.

Self-driving cars can also create much-needed economic growth at a time when slow growth is probably contributing to social and political problems, and to a general spirit of pessimism about life and the future. The closer real per capita economic growth gets to 0%, the more that the easiest way to obtain wealth is to take it from someone else — either through political or military means. When it gets to 0%, we get Genghis Khan.

More relevant to this era — if we want to stop populism or communism from taking over, we should try to create lots of economic growth and make sure it’s broadly shared. If you are someone concerned about plutocracy, probably the same is true, actually. I believe Thomas Piketty theorized (based on lots of data) that faster real per capita growth reduces wealth inequality, even before government measures to do so. Government measures to redistribute wealth are also probably easier to pass when there is more wealth to go around.

Then there’s just the fact that self-driving cars are cool and exciting: they feel like magic to me, and they would make my life so much easier. I hate driving and I’m bad at it (I once hit the same tree twice in one day), but often I can’t get around without travelling by car. Visiting family means going down a dirt road. In the city, I actually really enjoy taking the bus when it’s uncrowded and it’s warm enough to have the windows open. It’s serene and nice to be around people. But in the winter at peak hours it often means being sardined up against a throng of irate people while sweating profusely under my winter coat. That kind of experience is what makes some people buy cars.

So, I am rooting for Waymo, Cruise, Mobileye, Apollo, Voyage, Aurora, Wayve, and whoever else is working on the problem. If any of them succeed, we all win. We can save lives, create economic growth, and make it way easier and more enjoyable to get around. Make it happen!

I also think it’s worth trying a diversity of approaches— throw everything at the wall and see what sticks. Mobileye is trying to decide the car’s actions based on a set of deterministic, hand-coded rules. The critique is that these rules will be brittle. Wayve is taking the exact opposite approach, by doing end-to-end deep learning from perception to action. The critique is that this will result in mean regression, and it will be bad at handling uncommon or unusual situations. Even if I think some approaches are a bit weird or cheesy, no one actually knows what works yet, and it is well worth burning the capital to find out what works.

Still, I want to form an opinion about which approach is most likely to succeed. First, just out of plain ol’ curiosity. I love technology. I want to understand how it works and why. I want to get to a deep, first principles understanding and be capable of informed, intelligent original thought on the subject. Second, I want to invest in the opportunity I see coming. I want to make money so I can stop worrying as much about money and have more freedom to focus on what I think is important. I used to feel that finance is somewhat depraved (and still sometimes do), but I’m willing to slog through it so later I can focus on sublime things.

I also want my friends and family to herald me as a prophet, and stop giving me such a hard time. So what if I crashed into the same tree twice, and the barbecue, and knocked the porch stairs over. So what if I spill a glass of water on myself once a day. My insight paid for those Christian Louboutin sunglasses!

There is no inherent reason to pick just one autonomous car company to invest in. Right now, I’m just invested in Tesla, but I would love to diversify if I can find the right investment vehicle. One option is just to invest in a self-driving car ETF comprising dozens of companies. Just spread your bets wide and hope it pays off. That might be a good option. ARK Invest’s Industrial Innovation ETF is one I might consider. However, investing in an actively managed ETF means you probably don’t have the resources to do your own research on all the companies you’re invested in. You’re putting your trust in someone else’s research process.

So, for now, I prefer to design my own portfolio. I have been thinking for a while about investing in GM because of Cruise. I like the vertical integration, the tech talent, and how aggressively the company is pushing the tech forward. The main reasons I’m hesitant are: 1) autonomous electric vehicles might cause GM to tank because it’s still so dependent on legacy vehicles and 2) it’s really hard to evaluate what’s legit tech and what is just demoware.

If Alphabet were the size of GM or Tesla, I would probably invest because of Waymo. Both GM and Tesla are around $50 billion. That means if self-driving cars are worth $500 billion in market cap, there is the opportunity to 10x. Alphabet’s market cap is $840 billion. Adding $500 billion would only be an increase of 60%. So the opportunity is much smaller just because Alphabet is such a big company.
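A quick back-of-the-envelope version of that upside math, assuming self-driving ends up adding roughly $500 billion of market cap to whoever captures it (the market caps are the 2018 figures above):

Code:
# Back-of-the-envelope upside math using the figures in this post. The $500B
# of added value is an assumption, not a forecast.
def upside_multiple(current_cap_billions: float, added_cap_billions: float = 500) -> float:
    """Multiple of today's market cap if the added value materializes."""
    return (current_cap_billions + added_cap_billions) / current_cap_billions

print(upside_multiple(50))    # Tesla or GM at ~$50B: 11x (roughly the "10x" above)
print(upside_multiple(840))   # Alphabet at ~$840B: ~1.6x, i.e. only about a 60% increase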

Similarly, Intel is already valued at $220 billion. The opportunity is about 3x instead of 10x. I also don’t know the first thing about the chip business, although I have heard people say Intel’s competitiveness has been receding — which would be a worry. But most of all, the worry I have about Mobileye is not the tech but its business model. It is a supplier to automotive manufacturers. Mobileye’s plan is to sell the hardware and software enabling full autonomy to BMW, Audi, Fiat, Nissan, Honda, etc. That means those car companies can operate the autonomous ride-hailing services and make virtually all of the money.

Mobileye’s bargaining position will probably not be good. BMW, for example, is also partnering with Aurora and Apollo. Aptiv (Delphi) is attempting to compete as a supplier. Waymo has suggested it may be willing to share autonomous ride-hailing revenue with manufacturers. Car companies are increasingly acquiring self-driving startups and starting their own internal self-driving divisions — an attempt to vertically integrate away a supplier like Mobileye. If Mobileye raises its prices too high or demands too large a cut of autonomous ride-hailing revenue, manufacturers might turn elsewhere.

Tesla ticks all the boxes:
  1. ~$50 billion market cap.
  2. Vertically integrated: vehicles, software, and even some components.
  3. Top-tier tech talent. Software culture.
  4. Not dependent on legacy vehicles.
  5. Not dependent on uncertain microchip sales.
That’s from a business strategy and return on investment point of view. You can see how the company with the most advanced technology isn’t necessarily the best one to invest in. It’s more complicated.

Waymo might have the most advanced technology, but it’s part of a giant company. Mobileye might have the most advanced technology, but it’s just a supplier to car companies. Apollo might have the most advanced technology, but it’s an open source project. Android is the most popular OS, but the iPhone makes way more money.

The technology is the most uncertain part. This is the area where the least is known (but also the area where people are the most vocal and passionate — ain’t that the way).

How do we assess the competitive landscape in terms of technology? Here are a few ideas.

Look for direct empirical evidence, like:
  • Demo videos
  • California disengagements data
  • Performance of production systems
  • Statistics shared by companies
Assess the resources available to different companies, like:
  • capital (R&D budget)
  • tech talent
  • computing power
  • training data
  • HD map data
  • test cars
  • data labelling employees
Second guess the technical decisions made by companies, like:
  • whether to use lidar
  • whether to use end-to-end deep learning
  • whether to control vehicle action using only formalized rules
Direct evidence, resources, technical decisions.

The one bullet point that stands out to me most is training data. Here’s why. Deep learning is what makes self-driving cars feasible. The obstacles to progress seem to be improving performance on tasks that deep learning already handles, or that deep learning could potentially take over. (Plus compiling HD maps.) We’re told that the key to making deep learning do things better is more and better data, carefully labelled, cleaned, and curated. That’s supposedly even more important than neural network architecture. So, logically, self-driving car progress should be a function of the quantity and quality of the data, and how well it is labelled, cleaned, and curated.

Does anyone dispute this point? I would love to hear reasons to doubt that it’s true.

It seems like Tesla has access to the most data. There are around 215,000 Hardware 2 cars on the road. It’s putting about 8,000 more on the road every week. The main limit to the quality and quantity of data Tesla collects seems to be the selection of the conditions that trigger a sensor recording. Can it design triggers that selectively capture plenty of useful data without missing too much of what’s important?
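To make the idea of a trigger concrete, here is a minimal sketch of what a fleet-side recording trigger could look like. The event names, thresholds, and structure are illustrative assumptions on my part, not Tesla’s actual logic:

Code:
# Hypothetical sketch of a data-collection trigger running on a fleet car.
# None of these event names or thresholds come from Tesla.
from dataclasses import dataclass

@dataclass
class FrameSummary:
    construction_objects: int   # cones, barriers, signs detected in this frame
    driver_took_over: bool      # driver disengaged Autopilot
    hard_brake: bool            # sudden deceleration event
    model_uncertainty: float    # 0..1, how unsure the perception network is

def should_record(frame: FrameSummary) -> bool:
    """Decide whether to save a short sensor snippet for later upload."""
    if frame.driver_took_over or frame.hard_brake:
        return True                     # safety-relevant events are always worth keeping
    if frame.construction_objects >= 3:
        return True                     # a scene type the team has asked for
    return frame.model_uncertainty > 0.7  # the network is unsure: likely something novel

# Example: a frame with 4 cones and low uncertainty would be recorded.
# should_record(FrameSummary(4, False, False, 0.2)) -> True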

Please let me know if you can think of reasons to doubt this — that Tesla can collect the most data that is useful for training neural networks relevant to driving.

If it is true that:

1. Progress on autonomy is a function of data.​

And it is true that:

2. Tesla has the most data.​

Then it follows that:

3. Tesla is making the most progress on autonomy.​

This is the logical conclusion I keep coming back to, but I’m very open to the possibility that it’s not correct. I am looking for someone who can articulate why this conclusion isn’t sound.

Some ideas:
  • lidar is too important (I would like to see more hard data on this)
  • data from testing in autonomous mode is what matters (perhaps for path planning and control?)
  • computing, not data, is what matters (esoteric)
  • HD mapping data is what matters (assumes everything else is already production-ready)
  • AI talent is what matters (needs elaboration)
  • what else?
 
Great progress report from an audio interview with George Hotz on 8/8/2018:

Self-Driving Engineering with George Hotz - Software Engineering Daily

Transcript:

https://softwareengineeringdaily.com/wp-content/uploads/2018/08/SED649-Self-Driving-George-Hotz.pdf

1) Progress on autonomy seems to be kept secret because getting results is extremely difficult. It just takes a long time!

2) From watching GM Cruise, it seems to have a hard time replicating Mobileye.

3) LIDAR lets the car localize itself so the system can draw a center line and follow it.

On the other hand, his computer vision can't yet localize to 10 cm accuracy, but he's working on it.

4) A camera system (like Tesla's) is reactive and works without precise maps: more humanlike.

5) His system tracks disengagements, users can label the reasons, and his supervised learning system then reduces those numbers over time.

6) On the Mountain View Autopilot fatal crash: the car did not swerve into the divider; "the car slowly drifted out of the lane." It's Level 2, so the driver had plenty of time to correct the steering.

7) He called Waymo's driverless rides a "few PR stunts" because they still use attentive safety drivers when they are not doing PR stunts.

8) Ordinary GPS data for latitude, longitude, and height is not usable because it's only accurate to about 10 meters, so his Gray Panda triangulates 10 satellites for better accuracy.

and so on...
 
I am skeptical of comma.ai. It seems like all it takes is a few graduate students to build a basic self-driving prototype. A Udacity student can make one that works in simulation. A threadbare demo... isn’t really anything.

Comma.ai is a 15-person startup with a few million in capital. It only has a few hundred customers. And George Hotz recently seemed to sour on the idea of making self-driving cars? Instead, he talked up driver assistance.

So, comma.ai kinda feels like a student project with VC money, led by a particularly bright and ambitious engineer. And that’s it. I don’t know.
 
...comma.ai...

I agree that, investment-wise, George Hotz has been unable to get rich by monetizing his skills.

However, knowledge-wise, he is more transparent than other companies.

He was bullish about the autonomous future when he met Elon Musk and claimed that he could make a better system for Tesla than Mobileye could.

And thanks for the referenced link showing that he's now no longer bullish about autonomous systems.

Like him, when I heard Tesla talking about Full Self-Driving, I was very hopeful that it would arrive very soon, because Elon Musk promised an autonomous coast-to-coast demo by the end of 2017.

In theory, Tesla can gather massive amounts of data from the customer fleet, feed it to its machine learning system, and it should work.

The reality is that fatal accidents still happen because, in 2018, the current system still hasn't learned to automatically apply the brakes for obstacles.

That is not a good sign for autonomous system progress.

Unlike Comma.ai, which gives feedback on why I died in an accident so that I can go back and label the disengagement and the next generation won't die the way I did, Tesla is quite secretive about the process.

Uber was really bullish and bought Otto after seeing the demo of a beer-truck delivery with the driver reading a magazine in the back seat, but it has now shut the autonomous truck program down.

I think it was a PR stunt, and making it safe will just take a lot more time and effort.

I would now take a pause on the autonomous-system hype: it will happen, but it is much more difficult and requires much more time!
 
The one bullet point that stands out to me most is training data. Here’s why. Deep learning is what makes self-driving cars feasible. The obstacles to progress seem to be improving performance on tasks that deep learning already handles, or that deep learning could potentially take over. (Plus compiling HD maps.) We’re told that the key to making deep learning do things better is more and better data, carefully labelled, cleaned, and curated. That’s supposedly even more important than neural network architecture. So, logically, self-driving car progress should be a function of the quantity and quality of the data, and how well it is labelled, cleaned, and curated.

Does anyone dispute this point? I would love to hear reasons to doubt that it’s true.

Deep learning throughout the industry is ONLY used for sensing, not for planning. There's a huge difference between deep learning and reinforcement learning. A lot of people conflate the two.

The bottleneck for SDCs today is not sensing, it's planning, as shown by the fact that perception-related disengagements are only about 10% for most companies, give or take.

I also think it’s worth trying a diversity of approaches— throw everything at the wall and see what sticks. Mobileye is trying to decide the car’s actions based on a set of deterministic, hand-coded rules. The critique is that these rules will be brittle. Wayve is taking the exact opposite approach, by doing end-to-end deep learning from perception to action. The critique is that this will result in mean regression, and it will be bad at handling uncommon or unusual situations. Even if I think some approaches are a bit weird or cheesy, no one actually knows what works yet, and it is well worth burning the capital to find out what works.

This is not true. First of all, Mobileye (like everyone else) uses end-to-end deep learning for perception.
Secondly, no, Mobileye doesn't use "hand-coded rules" to control the car's actions. They use reinforcement learning for driving policy.

Also, Wayve doesn't use end-to-end deep learning for driving policy either. They use reinforcement learning. It's a big difference!

It seems like Tesla has access to the most data. There are around 215,000 Hardware 2 cars on the road. It’s putting about 8,000 more on the road every week. The main limit to the quality and quantity of data Tesla collects seems to be the selection of the conditions that trigger a sensor recording. Can it design triggers that selectively capture plenty of useful data without missing too much of what’s important?

Please let me know if you can think of reasons to doubt this — that Tesla can collect the most data that is useful for training neural networks relevant to driving.

If it is true that:

They are not; it's been debunked hundreds of times. Less than 0.1% of raw data is uploaded to Tesla HQ.
 
Secondly, no, Mobileye doesn't use "hand-coded rules" to control the car's actions. They use reinforcement learning for driving policy.

Well, this issue is a bit more complicated than I first thought. On the one hand, this is Mobileye’s description of the system it uses: a “predetermined set of rules”; a “formalized”, “mathematical” model.

A paper by Amnon Shashua and others goes into more detail. I don’t really understand what they are saying, since the paper is highly technical. However, they say their approach is to formalize “common sense” rules of driving.

On the other hand, on page 26, they do describe using machine learning to decide between different predetermined actions based on an evaluation function:

Most of the time, this simple approach yields a powerful driving policy. However, in some situations a more sophisticated quality function is required. For example, suppose that we are following a slow truck before an exit lane, where we need to take the exit lane. One semantic option is to keep driving slowly behind the truck. Another one is to overtake the truck, hoping that later we can get back to the exit lane and make the exit on time. The quality measure described previously does not consider what will happen after we will overtake the truck, and hence we will not choose the second semantic action even if there is enough time to make the overtake and return to the exit lane. Machine learning can help us to construct a better evaluation of semantic actions, that will take into account more than the immediate semantic actions.

This fine print seems to somewhat contradict their other statements, even in the abstract of the paper: "we propose a white-box, interpretable, mathematical model".
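To make that concrete, here is a rough sketch of what “predetermined semantic actions scored by a learned evaluation function” could look like in code. The action names, the scoring function, and the hard-rule check are placeholders I made up for illustration, not Mobileye’s actual implementation:

Code:
# Illustrative only: a small set of predetermined "semantic actions" is scored
# by a learned quality function, and a hard rule-based check filters out
# disallowed options. All names here are hypothetical.
from typing import Callable, Dict, List

SEMANTIC_ACTIONS: List[str] = [
    "keep_following_truck",
    "overtake_then_return_to_exit_lane",
    "move_to_exit_lane_now",
]

def choose_action(state: Dict,
                  learned_quality: Callable[[Dict, str], float],
                  allowed_by_hard_rules: Callable[[Dict, str], bool]) -> str:
    """Pick the highest-scoring predetermined action that the formal rules allow."""
    candidates = [a for a in SEMANTIC_ACTIONS if allowed_by_hard_rules(state, a)]
    return max(candidates, key=lambda a: learned_quality(state, a))

This is just one way to read the paper’s description; the paper itself doesn’t spell out the implementation.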

Deep learning throughout the industry is ONLY used for sensing, not for planning.

Waymo has a job listing for an engineer to use deep learning to predict the behaviour of entities on the road. Prediction is used for path planning, since you want to make sure the car’s path doesn’t intersect another entity’s path at a future time step.

So, I don’t think it is known for sure that deep learning is only used for object detection and classification (which you call “sensing”).

Also, Wayve doesn't use end-to-end deep learning for driving policy either. They use reinforcement learning. It's a big difference!

On Wayve’s website, they say: “Wayve is pioneering artificial intelligence software for self-driving cars. Our unique end-to-end machine learning approach learns to drive in new places more efficiently than competing technology.”

In a paper published by Wayve, they describe an end-to-end “deep reinforcement learning algorithm for autonomous driving.”

Less than 0.1% of raw data is uploaded to Tesla HQ.

Yes, I know that. This does not in itself imply that Tesla collects no more useful data overall than competitors who collect no data at all from production fleets.

If just 0.5% of miles are recorded from 215,000 vehicles driving 32 miles per day, that’s still about 1 million miles per month that are recorded from Tesla’s production fleet.

That would mean Tesla is recording as many miles from its production fleet as Waymo records from its test fleet. Plus, Tesla has its own internal test fleet.

Moreover, Tesla is adding something like 8,000 cars per week to the road. That means in a month from now it would be 1.15 million miles per month, and in seven months it would be over 2 million miles per month, even with no increased rate of Model 3 production.
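For anyone checking my arithmetic, here is the back-of-the-envelope calculation behind those numbers. The 32 miles per day and the 0.5% recording rate are assumptions, not figures from Tesla:

Code:
# Rough reproduction of the recorded-miles estimate above. All inputs are
# assumptions from this post, not official Tesla numbers.
MILES_PER_CAR_PER_DAY = 32
RECORDED_FRACTION = 0.005      # 0.5% of miles recorded
CARS_ADDED_PER_WEEK = 8_000

def recorded_miles_per_month(cars: int) -> float:
    """Miles recorded from a fleet of `cars` over a 30-day month."""
    return cars * MILES_PER_CAR_PER_DAY * 30 * RECORDED_FRACTION

print(recorded_miles_per_month(215_000))                             # ~1.03 million
print(recorded_miles_per_month(215_000 + 4 * CARS_ADDED_PER_WEEK))   # ~1.19 million (a month from now)
print(recorded_miles_per_month(215_000 + 30 * CARS_ADDED_PER_WEEK))  # ~2.18 million (about seven months out)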
 
The bottleneck for SDCs today is not sensing, it's planning, as shown by the fact that perception-related disengagements are only about 10% for most companies, give or take.

There can be multiple areas where progress is needed.

If object detection and classification fails more often for autonomous cars than it fails for human drivers, then more improvement is needed. The way to determine if a subsystem of a fully autonomous car is ready for commercial launch is to compare its performance with a human baseline. If the subsystem performs worse than humans, then it’s not ready. If it performs better, it’s ready.

Just because object detection and classification performs better than path planning does not mean it performs better than humans.

What we need to know is how these subsystems are performing against a human benchmark. And that information is not easy to come by. At least, I have had a hard time finding it so far.

If anyone knows of a good way to compare the performance of these subsystems to the performance of human drivers, please let me know. I would love to be able to measure progress more concretely.
 
Bladerskb, I think I understand now the distinction you were making above. I didn’t realize deep reinforcement learning isn’t considered a type of deep learning. I now know deep learning only encompasses three techniques:
  • supervised learning (e.g. labelled images used to train ConvNets)
  • unsupervised learning
  • semi-supervised learning
I had thought “deep” just referred to the number of neuron layers in the neural network, but apparently the taxonomy of learning techniques doesn’t work this way — deep reinforcement learning is apparently not a type of deep learning.

Thank you for making that distinction.
 
An important addendum to what I said about Tesla’s miles of recorded data:

Say that Waymo and Tesla want lots of sensor data from construction zones. Suppose Tesla’s trigger successfully goes off near construction zones 90% of the time. This is a way Tesla can get a lot more data of construction zones than Waymo. In a month, it can get 185 million miles’ worth of construction zones, whereas Waymo can only get 1 million miles’ worth. Waymo also only has a backlog of 8 million miles.

In 6 months, Tesla would have 1.6 billion miles’ worth of construction zones. In a year, it would have 4.5 billion. (Here’s the spreadsheet I used to compute that.)

This shows how crucial it is for Tesla to have good triggers. Doing it right, Tesla can get 20x to 200x more data than Waymo (or any other company) about a specific kind of entity or situation it wants to understand.
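Since the spreadsheet itself isn’t shown here, this is my rough reconstruction of the math. Fleet growth of 8,000 cars per week, ~32 miles per car per day, and a 90% trigger hit rate are the assumptions from the posts above:

Code:
# Rough reconstruction of the construction-zone spreadsheet math. All inputs
# are assumptions from this thread, not measured figures.
START_FLEET = 215_000
CARS_ADDED_PER_MONTH = 8_000 * 52 / 12   # ~34,700 per month
MILES_PER_CAR_PER_DAY = 32
TRIGGER_HIT_RATE = 0.9                   # trigger fires near 90% of construction zones

def captured_miles(months: int) -> float:
    """Cumulative miles captured near construction zones over `months` 30-day months."""
    total = 0.0
    for m in range(months):
        fleet = START_FLEET + CARS_ADDED_PER_MONTH * m
        total += fleet * MILES_PER_CAR_PER_DAY * 30 * TRIGGER_HIT_RATE
    return total

print(captured_miles(1) / 1e6)    # ~186 million miles in the first month
print(captured_miles(6) / 1e9)    # ~1.6 billion miles over six months
print(captured_miles(12) / 1e9)   # ~4.2 billion miles over a year (my spreadsheet said 4.5)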
 
Well, this issue is a bit more complicated than I first thought. On the one hand, this is Mobileye’s description of the system it uses: a “predetermined set of rules”; a “formalized”, “mathematical” model.

A paper by Amnon Shashua and others goes into more detail. I don’t really understand what they are saying, since the paper is highly technical. However, they say their approach is to formalize “common sense” rules of driving.

On the other hand, on page 26, they do describe using machine learning to decide between different predetermined actions based on an evaluation function:



This fine print seems to somewhat contradict their other statements, even in the abstract of the paper: "we propose a white-box, interpretable, mathematical model".

The RSS mathematical model is simply a hard constraint on the reinforcement learning models.

You should watch this entire video because it explains everything.

 
An important addendum to what I said about Tesla’s miles of recorded data:

Say that Waymo and Tesla want lots of sensor data from construction zones. Suppose Tesla’s trigger successfully goes off near construction zones 90% of the time. This is a way Tesla can get a lot more data of construction zones than Waymo. In a month, it can get 185 million miles’ worth of construction zones, whereas Waymo can only get 1 million miles’ worth. Waymo also only has a backlog of 8 million miles.

In 6 months, Tesla would have 1.6 billion miles’ worth of construction zones. In a year, it would have 4.5 billion. (Here’s the spreadsheet I used to compute that.)

This shows how crucial it is for Tesla to have good triggers. Doing it right, Tesla can get 20x to 200x more data than Waymo (or any other company) about a specific kind of entity or situation it wants to understand.

To add a little bit of context: there are about 100 road construction projects going on right now in the state of Michigan, and about 200 that started in the month of August. Based on the data I have looked at, road construction projects average between 10 and 30 days, give or take.

The exact locations of the construction zones are publicly available on a map, including what type of road it is and whether (and how many) lanes are closed.

https://i.imgur.com/thL1SFS.jpg

When you add this context to the discussion, you discover that the 185 million miles driven by Tesla in the span of a month is not necessary, because there are not millions of construction zones. If those miles were driven in a single state, you would only collect data on 200 construction zones in a month, so you would only have 200 unique data points. Whereas Waymo could easily get data on all 200 of those construction zones by driving less than 10,000 miles with a few cars.

If you divide the 185M miles by 50 states, you have roughly 4 million miles per state. But because there are only about 200 unique construction zones per month, you can still only collect data on 200 construction zones per month.

Having more miles is useless when talking about the sensing part of an SDC.

As @verygreen noted, if a trigger goes off, the network already recognizes what it's seeing, so your acquisition of new data becomes even less likely. Maybe a different cone/bollard/sign/barrier that it doesn't recognize pops up every now and then and gets captured among the other stuff it recognizes during a couple of seconds of recorded data/pics. That doesn't mean much, other than saying they didn't do their homework, since you can easily get information on all manufactured construction cone types, barriers, and signs and train with them.

We also don't know whether Tesla avoids activating the trigger again when a car drives back to a location where it was previously activated, in order to remove duplicates.

My point is: a small non-autonomous fleet of cars can gather more than enough raw data for sensing, and you don't need hundreds of thousands of cars to do that. Every other company understands this. When the CEO of Cruise was asked, he said there is no demand for more data, that they have enough.

4.5 billion miles to get data on 2,400 construction zones in each state? Scratch that: actually more like around 600, because construction only happens in the summer.
 
Construction zones are just an example. This applies across the board to anything that a company would want to collect data on. Another example is deer.

My point is: a small non-autonomous fleet of cars can gather more than enough raw data for sensing, and you don't need hundreds of thousands of cars to do that. Every other company understands this. When the CEO of Cruise was asked, he said there is no demand for more data, that they have enough.

Source for the Kyle Vogt quote?

I’m skeptical of the notion that a fleet of a few hundred test cars can collect all the data that could ever be useful for the development of autonomy. Even for just object recognition, I haven’t seen any hard data that Mobileye’s, Cruise’s, Waymo’s, or any other company’s ConvNets exceed average human performance. Until that can be shown quantitatively, we have to assume further progress on object recognition is needed before full autonomy can be deployed commercially. Do we have any quantitative evidence that this is the case?

Even if/when we reach the point where ConvNets across the board exceed average human performance, there still may be room for a company with more data to outperform companies with less data. More data can still be useful.

For image classification, Google found that performance continued to improve even when the training set was expanded into hundreds of millions of images. Facebook took this further by using billions of images. So we know that for the ImageNet benchmark, we haven’t yet hit a ceiling where more data stops being helpful. Perhaps the ceiling is much lower for object recognition in an autonomous car context than for image classification on the ImageNet benchmark. But perhaps not.
 
Haven't had a chance to read your lengthy posts, but we already have plenty of progress updates and news at Autonomous Car Progress, where I've pointed to some good reading.

Given that Silicon Valley has a large body of software engineering talent, it might be interesting to look at Testing of Autonomous Vehicles, specifically disengagement reports on CA public roads:
Autonomous Vehicle Disengagement Reports 2015
Autonomous Vehicle Disengagement Reports 2016
Autonomous Vehicle Disengagement Reports 2017

Also look at Safety Report – Waymo and Why testing self-driving cars in SF is challenging but necessary.

Along the lines of Waymo, see Waymo Orders 62,000 Autonomous Chrysler Pacifica Hybrid Vans | CleanTechnica. Inside Cruise’s Bumpy Ride: The Limits of Self-Driving Cars has a lot of good interesting boundary cases and problems that GM's Cruise Automation has been hitting (e.g. aborted lane changes, trying to give cues to others that a vehicle is about to change lanes by hugging the side of the lane, humans breaking traffic laws all the time, problems handling things like steam coming out of manholes, faint traffic lights, going around cars trying to parallel park, tunnels, etc.)

Also, I'm sure the chart at Navigant Puts Tesla In Last Place Among Autonomous Vehicles has been discussed before. Some of the rankings raise eyebrows given the time it was published (esp. Apple, given how secretive they are). I had seen an Aptiv (Lyft and Aptiv have completed 5,000 paid trips in their self-driving taxis) vehicle in Vegas when I was there earlier this month. When I requested a Lyft ride, I recall being asked whether I'd be ok w/ being picked up in a self-driving (I'm sure with a safety driver) vehicle, to which I said yes. I'd known of Aptiv being at this in Vegas for a while, as it was mentioned in CES materials for 2017 or 2018 (I've been attending CES in Vegas for the past few years).
 
Inside Cruise’s Bumpy Ride: The Limits of Self-Driving Cars has a lot of good interesting boundary cases and problems that GM's Cruise Automation has been hitting (e.g. aborted lane changes, trying to give cues to others that a vehicle is about to change lanes by hugging the side of the lane, humans breaking traffic laws all the time, problems handling things like steam coming out of manholes, faint traffic lights, going around cars trying to parallel park, tunnels, etc.)

Awesome, thanks! I didn’t know I could read that article for free. A subscription to The Information costs like $400/year. They have great articles, but for that money I could buy a Nintendo Switch instead and read free Ars Technica articles relaying what The Information reported.

Also, I'm sure the chart at Navigant Puts Tesla In Last Place Among Autonomous Vehicles has been discussed before.

That chart doesn’t seem useful to me because 1) you have to pay thousands of dollars for the full report to even understand how they ranked companies, 2) one of the criteria for the rankings is literally just how many cars a company produces per year (Ford is #4 on the list, but is Ford #4 in technology?), and 3) it’s not clear if any experts in deep learning, reinforcement learning, computer vision, robotics, or autonomous cars generally contributed to the report.
 
If just 0.5% of miles are recorded from 215,000 vehicles driving 32 miles per day, that’s still about 1 million miles per month that are recorded from Tesla’s production fleet.

That would mean Tesla is recording as many miles from its production fleet as Waymo records from its test fleet. Plus, Tesla has its own internal test fleet.

Moreover, Tesla is adding something like 8,000 cars per week to the road. That means in a month from now it would be 1.15 million miles per month, and in seven months it would be over 2 million miles per month, even with no increased rate of Model 3 production.

This is assuming the quality of the data Tesla collects is as good as what Waymo is collecting.

The amount of data does not equate to the quality of data.

So far, Waymo is the only company that can run up to 5,600 miles in full Level 4 mode without driver intervention.
 
So far, Waymo is the only company that can run up to 5,600 miles in full Level 4 mode without driver intervention.

This may well be true, but not all companies test in California, and California is the only jurisdiction (as far as I know) that publicly reports disengagement rates. Even most of Waymo’s testing is outside California, and yet we only know the California disengagement rate.
 
@Bladerskb Thanks for clarifying RSS and sharing the Mobileye video that explains how Mobileye is using deep reinforcement learning. Mobileye’s approach makes a lot more sense to me now.

Also thanks for drawing the distinction between supervised learning and reinforcement learning. I had been conflating the two in my mind. Lately I have been learning and thinking more about reinforcement learning and self-play. Whereas previously I mostly thought and learned just about supervised learning and labelled data sets.

Tesla is dead silent about most of what it is doing under the hood, whereas Mobileye is very open. However, of what little we do know, Mobileye and Tesla’s strategies seem pretty similar. Both are developing full autonomy without lidar, using a similar layout of cameras, collecting HD map data from production cars, and starting with highway driver assistance.

The main difference as far as I can tell is Mobileye’s lack of production fleet learning. Production cars send HD map data, but there is no mention of them sending sensor data or other data that could be used for supervised learning or reinforcement learning.

What’s true for learning is also true for validation. Elon Musk talks about using billions of miles of data to statistically validate safety. Amnon Shashua talks about using a formal model to validate safety.