
Skeptical of crowd sourcing claims.


lolachampcar

Well-Known Member
Nov 26, 2012
6,472
9,378
WPB Florida
Given Tesla's performance over the last year, I'm a bit skeptical of the crowd-sourced improvements that are supposed to be coming.

Would members with direct development or usage experience with crowd-sourced big data chime in on the legitimacy of Tesla's claims?

Bill
 
Not quite what you asked for, but I developed algorithms for a while, though I have no experience with crowd sourcing. The more training data you have during development, the better chance your algorithm has of covering the corner cases and reducing false-alarm rates.

So it seems very feasible to me: grow your database with data from users instead of relying only on what you already have, then retrain -> test -> deploy. But again, I have no direct experience with crowd-sourced data.
 
If done right, this is even more impressive to me than the actual auto-drive algorithms. They will have to be very judicious about how they manage the data from the car to keep it from being overwhelming: packaging the important events, figuring out what to ignore, and so on. Then at the back end, having something that can cull through all of that intelligently is another challenge.

But I can see it working something like this in some cases:

- Every time a driver takes over to correct the autopilot (let's say it tried to exit), log an event with the GPS coordinates and all relevant info. Maybe a camera snapshot.
- In the backend, combine those reports and keep track of the frequency at each location. Then score each location based on frequency, severity, etc.
- Locations with high scores are either reviewed by a human or handled by a further algorithm that adjusts the mapping database.

That's just one scenario, I'm sure there are many others.
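
To make the backend part concrete, here is a rough sketch of the kind of aggregation and scoring I mean. The field names, grid size, and scoring formula are all made up for illustration; I have no idea how Tesla actually structures this.

Code:
# Rough sketch: bucket driver-takeover events by a coarse GPS grid and score each cell.
# All field names and thresholds are invented for illustration.
from collections import defaultdict

def score_locations(events, severity_weight=2.0):
    buckets = defaultdict(list)
    for e in events:
        # Round coordinates so nearby events land in the same bucket (~100 m grid).
        cell = (round(e["lat"], 3), round(e["lon"], 3))
        buckets[cell].append(e)

    scores = {}
    for cell, cell_events in buckets.items():
        frequency = len(cell_events)
        avg_severity = sum(e["severity"] for e in cell_events) / frequency
        scores[cell] = frequency + severity_weight * avg_severity
    return scores

# Cells with the highest scores get flagged for human review or a map fix.
events = [
    {"lat": 26.7150, "lon": -80.0530, "severity": 0.8},
    {"lat": 26.7153, "lon": -80.0534, "severity": 0.6},
]
print(sorted(score_locations(events).items(), key=lambda kv: -kv[1]))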
 
At TMC Connect, Tesla's autopilot engineer described how they tuned the Forward Collision Warning (FCW) and Automatic Emergency Braking (AEB). They had two versions of the software on the car: one active and one dormant test version. They could compare the actions the dormant, non-active software would have taken against the actions the driver or the active software actually took. Every time FCW or AEB activated, or the user intervened, the car logged the data and uploaded it, and they used this data to tune their software. Based on the way he described it, it sounded like a human was involved in reviewing the updated software, not solely an automated program.
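
I don't know the internals, but the dormant "shadow" comparison he described could look something like this in rough pseudo-Python. Every name here is invented; it only illustrates the logging logic, not Tesla's actual code.

Code:
# Hypothetical sketch of a shadow-mode comparison: log whenever the dormant
# test software disagrees with the active software or the driver.
def evaluate_shadow(frame, active_sw, shadow_sw, driver):
    active_action = active_sw.decide(frame)        # e.g. "brake", "warn", "none"
    shadow_action = shadow_sw.decide(frame)
    driver_action = driver.observed_action(frame)  # e.g. "intervened", "none"

    if (shadow_action != active_action
            or active_action != "none"
            or driver_action == "intervened"):
        return {
            "timestamp": frame.timestamp,
            "active": active_action,
            "shadow": shadow_action,
            "driver": driver_action,
            "sensor_snapshot": frame.summary(),
        }
    return None  # nothing interesting happened, so nothing gets uploaded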
 
I'm actually a little surprised that TM hasn't already been doing this. Their cars are WiFi enabled and have GPS. Presumably owners of the Model S are driving on roads, so just take nav/GPS data and superimpose it on real-world maps... They already have access to millions of miles of data from successful trips...
 
I think the key here is getting data on actual autopilot use: the routes people take, which lanes they use, and the times when the operator has to correct the autopilot. Also, even with autopilot turned off, the new software is still tracking lanes and routes, so that will also feed back into the database.
I think I recall that they will be collecting a million miles a day (or was it per week?) of driving data, so the database should quickly build up to something very useful.
 
The idea/concept makes sense, but I've taken to viewing what Tesla says with a bit more scrutiny these days. I was hoping some forum members here with machine learning experience could shed light on whether this is engineers looking at where things did not work, tweaking, and then redeploying, or whether it truly could be machine-learning based (which seems to be the concept everyone is excited about).

Call me a little slower to get excited these days.
 

I think it depends on what they're truly doing with the data. In my experience, it took months (sometimes many months) to acquire enough instances of new data, retrain the algorithms without accidentally breaking something else, make sure you didn't overtrain, make sure the algorithm is actually trained to do what you want it to do, and finally update the product in the field.

If all Tesla needs is the fusion output from the camera/radar/sonar and possibly driver reactions on roads, and they still tell you to hold the wheel because it's a beta, and blah blah blah, I could see it deploying weekly as EM stated.

I wouldn't trust an algorithm to update itself/retrain itself without some human supervision, but technology progresses fast and I'm likely not up to date on the latest/greatest.
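
For what it's worth, the kind of human-gated retrain loop I'm used to looks roughly like the sketch below. The thresholds and function names are entirely hypothetical, not anything Tesla has described.

Code:
# Hypothetical gated retrain -> test -> deploy loop with a human sign-off at the end.
def retrain_and_validate(model, new_events, holdout_set, regression_set,
                         min_holdout_acc=0.98, max_regression_drop=0.001):
    candidate = model.fine_tune(new_events)

    # Guard against overtraining: performance on held-out data must stay high.
    if candidate.evaluate(holdout_set) < min_holdout_acc:
        return None

    # Guard against breaking cases the old model already handled.
    if model.evaluate(regression_set) - candidate.evaluate(regression_set) > max_regression_drop:
        return None

    # Even then, a human reviews the candidate before it ships to the fleet.
    return candidate if human_review_approves(candidate) else None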
 
There are two things that stand out immediately as possibilities for Tesla-owner sourcing of data:

1) Correction events: keep track of every place where the autopilot decided it couldn't handle the situation, as well as every place where the driver takes over, and look at why and what happened.

2) High-resolution GPS-based mapping from all cars, but especially those with autopilot, in order to refine their maps. They can do this on sections of road where their existing data is not good enough; it doesn't have to happen all the time.

These two seem pretty easy to feed back to the mothership and for them to act upon.
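
For #2, the map refinement can be surprisingly simple in principle: overlay many noisy GPS traces of the same road segment and take a robust average. A toy sketch with a made-up data layout (a real system would do map matching, outlier rejection, and much more):

Code:
# Toy sketch: refine a lane centerline by taking the median of many noisy GPS traces.
import statistics

def refine_centerline(traces):
    # traces: list of runs, each a list of (lat, lon) samples taken at the
    # same reference points along a road segment
    refined = []
    for samples in zip(*traces):   # all cars' samples at one reference point
        lat = statistics.median(p[0] for p in samples)
        lon = statistics.median(p[1] for p in samples)
        refined.append((lat, lon))
    return refined

runs = [
    [(34.0301, -118.4652), (34.0312, -118.4660)],
    [(34.0302, -118.4651), (34.0311, -118.4661)],
    [(34.0300, -118.4653), (34.0313, -118.4659)],
]
print(refine_centerline(runs))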
 
I dearly love Tesla. However, in my opinion, people are blowing this learning stuff way out of proportion. Are they logging information and using that data to make Autosteer work better? Absolutely. Are they building a database of nationwide road anomalies to store in the car, with instructions on how the car should react when it reaches that exact point on Earth? Preposterous.

I'm no expert, but in my tiny brain there are two basic ways companies are trying to do automated steering -- huge databases of road information (Google) or algorithms that react to WHATEVER the system encounters on the road. Tesla is in the latter camp.

Do people realistically think that a company that can't (or won't) build a functioning navigation system is building a massive database of road quirks and instructing the car to react in particular ways to those anomalies, with all the variables of traffic and speed and weather conditions?

Inconceivable.
 
Here is an example of what I think they should be doing. On my first outing with v7.0 this morning I drove home on Route 50, which has two lanes and plenty of intersections and turning lanes. I had AP off on that road, but it was fun to watch the display of what AP was sensing. Driving past an intersection with a short right exit lane, I could see the indicated road get pulled off to the right and then snap back left to rejoin the actual road. I was glad I didn't have AP on, because the car would have jerked right and then left. But it occurs to me that the car should be programmed to notice, while AP is off, when the system makes a sharp correction but the driver drives straight. Lots of those incidents can be collected as cars drive around without AP even on. They can all be plotted on a map at home base, and when a human looking over the map notices multiple system-corrects-but-human-drives-straight events at a location that corresponds to a crossroads, the human can click on the map, marking those coordinates as a location where cars should ignore the pull to the right and drive straight. Eventually, such locations on the road ahead could be radioed to cars driving on that road, telling them not to swerve at those points.
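
In code, the check could be as simple as comparing the heading the shadow autopilot would have chosen against the heading the human actually held, and logging the GPS location when they diverge. This is just a sketch; the threshold and data layout are invented.

Code:
# Sketch of the "system swerves, human drives straight" check described above.
def divergence_event(ap_headings, driver_headings, gps, threshold_deg=5.0):
    # ap_headings / driver_headings: heading angles (degrees) over the same
    # short window, with autopilot running in shadow while the human drives
    max_diff = max(abs(a - d) for a, d in zip(ap_headings, driver_headings))
    if max_diff > threshold_deg:
        # Autopilot would have steered noticeably differently than the human did.
        return {"lat": gps[0], "lon": gps[1], "max_heading_diff_deg": max_diff}
    return None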
 
Inconceivable.

You keep using that word....

[image: vizzini-iocaine.jpg]


They showed images of the mapping database routes on the 405 in the presentation. They are doing it, maybe not on a quirk-by-quirk basis, but they are certainly determining true routes, lanes, and exits at a much finer level than a nav database would.
 
Great idea, sillydriver. It would be nice to know what the car really does, though. The problem with the crowd-sourcing claims is that the biggest sensor input the car uses for autosteer is the video camera. Beaming video back to the mothership for review is going to consume bandwidth pretty quickly. And I think it would have to be the raw video, since the preprocessed video uses the faulty algorithms; sending back compressed data of what autosteer "sees" isn't going to help anyone, since it obviously saw something different from what the human saw.
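
Just to put rough numbers on the bandwidth concern (assumed camera specs, not Tesla's actual hardware):

Code:
# Back-of-envelope: raw video upload vs. a few compressed frames per event.
width, height, bytes_per_pixel, fps = 1280, 960, 1, 30   # assumed mono camera
raw_bytes_per_hour = width * height * bytes_per_pixel * fps * 3600
print(f"Raw video: ~{raw_bytes_per_hour / 1e9:.0f} GB per hour per camera")

# Versus uploading, say, 10 JPEG-compressed frames around each flagged event:
event_upload = 10 * 150_000   # ~150 kB per compressed frame, assumed
print(f"Per-event upload: ~{event_upload / 1e6:.1f} MB")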
 

They can send back feature data from the vision algorithms without sending the whole image. Plus, they only need probably a few frames around an incident anyway.
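
For example, a per-frame feature summary could be a few hundred bytes instead of a megabyte-scale image. The structure below is made up, just to show the scale difference:

Code:
# Made-up example of a compact per-frame feature summary vs. a raw image.
import json

frame_features = {
    "timestamp": 1445212345.2,
    "left_lane_poly": [0.002, -0.15, 1.8],    # lane-line fit coefficients
    "right_lane_poly": [0.001, 0.12, -1.7],
    "lead_vehicle": {"range_m": 42.5, "rel_speed_mps": -1.2},
    "steering_deg": 3.4,
    "speed_mps": 29.1,
}
payload = json.dumps(frame_features).encode()
print(f"{len(payload)} bytes")   # a few hundred bytes vs. ~1 MB for a raw frame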
 

You don't need raw/uncompressed data. You can compress the video feeds. You can reduce the frame rates. There are a million things you can do.

Also, we don't know what stages of pre-processing they have. They may be able to get through 1/2 the steps and then send the data over, etc.
 

The processing would be at the car. It's very easy for the car to calculate sideways G loadings (just a function of speed and turn angle) so if AP plots a course (even when not engaged) where it jerks right then left over, say, 0.2 Gs each way, that is a potential event. If the driver drives straight, then it's an anomaly -- an actual event. All the car transmits back is rightward-event-at-this-GPS. Not much CPU, bandwidth or complexity.
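
A quick sketch of that math: lateral acceleration is speed times yaw rate (equivalently v^2 / r), so the car only needs two numbers it already has. The 0.2 g threshold comes from the post above; everything else is illustrative.

Code:
# Sketch of the lateral-g "swerve but driver goes straight" check.
G = 9.81  # m/s^2

def lateral_g(speed_mps, yaw_rate_rad_s):
    # a = v * omega  (same as v^2 / r, since omega = v / r)
    return speed_mps * yaw_rate_rad_s / G

def is_swerve_anomaly(speed_mps, ap_yaw_rate, driver_yaw_rate, threshold_g=0.2):
    # Autopilot's planned path pulls hard to one side while the driver holds straight.
    return (abs(lateral_g(speed_mps, ap_yaw_rate)) > threshold_g
            and abs(lateral_g(speed_mps, driver_yaw_rate)) < threshold_g / 2)

# Example: 29 m/s (~65 mph); the AP path implies 0.08 rad/s of yaw, the driver ~none.
print(is_swerve_anomaly(29.0, 0.08, 0.005))   # True -> log GPS + direction of the pull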