Welcome to Tesla Motors Club

Skeptical of crowd sourcing claims.

The processing would be done in the car. It's very easy for the car to calculate sideways G loadings (just a function of speed and path curvature), so if AP plots a course (even when not engaged) where it jerks right then left over, say, 0.2 G each way, that is a potential event. If the driver drives straight, then it's an anomaly -- an actual event. All the car transmits back is rightward-event-at-this-GPS. Not much CPU, bandwidth, or complexity.
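As a rough sketch of that in-car check (the 0.2 G figure comes from the post above; everything else -- the function names, the "driving straight" threshold, the event format -- is an illustrative assumption, not anything Tesla has described):

```python
# Toy version of the shadow-mode check: compare the lateral acceleration the
# AP plan WOULD produce against what the driver actually did, and emit a
# compact event record when they disagree. Names/thresholds are assumptions.

G = 9.81  # m/s^2

def lateral_g(speed_mps: float, curvature: float) -> float:
    """Lateral acceleration in g for a path of the given curvature (1/m)."""
    return (speed_mps ** 2) * abs(curvature) / G

def detect_anomaly(speed_mps, planned_curvature, actual_curvature,
                   threshold_g=0.2):
    """Return a minimal event record if the AP plan would have jerked the
    car sideways while the driver drove essentially straight."""
    planned = lateral_g(speed_mps, planned_curvature)
    actual = lateral_g(speed_mps, actual_curvature)
    if planned >= threshold_g and actual < 0.05:
        return {"event": "lateral_anomaly", "planned_g": round(planned, 2)}
    return None
```

At 25 m/s a planned curvature of 0.01/m works out to roughly 0.64 G, so a driver who holds the wheel straight through that plan would trigger an event; the record is small enough to send home with just a GPS stamp attached.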

I agree, but continuing down the speculation train - that might not be enough information for them to fix the event/non-event that occurred.
 
Are they building a nationwide database of road anomalies, stored in the car, telling it how to react when it gets to that point on the Earth? Preposterous.

I'm no expert, but in my tiny brain there are two basic ways companies are trying to do automated steering -- huge databases of road information (Google) or algorithms that react to whatever the system encounters on the road. Tesla is in the latter camp.
I doubt that they are storing nationwide information in any car. However, they do have a (mostly) full time data connection which can download information for the road ahead in real time.
Tesla is in both camps. They have a basic navigation database in the car and they supplement that with real time data for the road ahead from their database. The algorithms in the car use all of this information to make decisions.
 

The concept is that you have the AP doing all its processing in the car, while the human does all of his "processing" by driving the car. When the AP (even though it is not actually in use) would call for something strange, like jerking right then left, but the human behaves normally and just drives straight, that is evidence that the AP is mishandling some situation. If at home base they see a cluster of such events from different cars and different drivers, all at the same location, and that location is an intersection, then that is good evidence the AP algorithm is mishandling that particular intersection. The person at home base clicks on the map to tell the APs in all the cars that will later head that way to override the algorithm for a moment and steer straight in that spot. Telling the AP to override its algorithm and do what the humans do at particular locations would make the AP look much smarter.
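Hypothetically, the home-base clustering step described above could be as simple as bucketing event reports onto a coarse GPS grid and flagging cells reported by several distinct drivers. All names, thresholds, and the grid size here are my own illustrative assumptions:

```python
from collections import defaultdict

def find_hotspots(events, min_reports=5, min_drivers=3, grid=0.0005):
    """events: iterable of (lat, lon, driver_id) tuples from the fleet.
    Snap each report to a coarse grid cell (~50 m at this grid size) and
    return cells with enough reports from enough distinct drivers."""
    reports = defaultdict(list)
    for lat, lon, driver in events:
        cell = (round(lat / grid), round(lon / grid))
        reports[cell].append(driver)
    return [cell for cell, drivers in reports.items()
            if len(drivers) >= min_reports          # total report count
            and len(set(drivers)) >= min_drivers]   # distinct drivers
```

Requiring distinct drivers is the key filter: one person's odd habit at an intersection shouldn't look the same as five different cars all reporting the same spot.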
 

Yes, that would work; but I don't think that's how AP works currently (from what I understood they're doing), I could be wrong.

I don't think it'd be smart to send a command to AP to drive straight at a particular intersection; there are too many other variables at play. I would think they're keeping a database of either lane markings or some condensed sensor-fusion result. They could [manually] update one of the entries in that database, which is just one input to the AP algorithms, and that would avoid the swerve-left-swerve-right maneuver. For that, they need more than just location data. My 2c.
 
Sillydriver, it cannot work that way. "Don't do anything different, like turn" is a recipe for disaster. Road conditions change all the time. A bicycle could be in the way today but not tomorrow. The system HAS to, at all times, be guided by what it senses. It cannot have hard and fast rules, like "go straight through this area." Any crowd sourcing will have to improve what it senses, not give it special rules for areas.
 
So telemetry, big data, and IoT are part of my day job. There is nothing particularly special about what Musk is talking about when it comes to collecting and analyzing the data to look for trends and patterns. It's not clear from his comment exactly how the analytics then drive further product refinement. Keeping track of anomalies is not a big deal (Waze does something like this now with, for example, traffic cameras). Something a bit more interesting would be to flag areas where multiple drivers override AP, then examine the area in person or via Google Maps, etc., and figure out how to tweak the algorithms to handle the situation better.
 

While I think my idea would reduce rather than increase risk, I agree with the point I think you are making: the ideal way to handle right-turn lanes would be to program the AP to recognize them and handle them properly.
 
They can send back feature data from the vision algorithms without sending the whole image. Plus, they only need probably a few frames around an incident anyway.

I agree. They also could send back the telemetry about the event for review, and once HQ feels it is warranted, they earmark those GPS coordinates for deeper inspection, and then the cars driving through there send back more detailed information. They are likely doing a mixture, though. The car will make a "best guess" on compact info to send, and if the teams think they aren't getting enough data back to decide on a corrected course of action (or code it), they could put that area under deeper scrutiny.

I work on a mainframe transaction processing system called CICS. At a high level, you generally have a couple of levels of trace data you can enable for catching unexpected events: no tracing at all; exception tracing (level 1), with slight overhead on everything but decent information about the specific fault; and special tracing in specific domains of operating code (level 2), with moderate overhead per domain. Using various filters and techniques, you can narrow down when this level 2 data is activated so it doesn't impact everything, while level 1 alone can provide a great deal of knowledge about an unexpected situation.
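The same tiered-tracing idea, stripped down to a toy sketch (the class and method names here are mine, not CICS terminology): level 1 exception records stay cheap enough to leave on everywhere, while level 2 detail is gated behind a per-domain filter.

```python
# Tiered tracing sketch: level 0 = off, level 1 = cheap exception records
# everywhere, level 2 = detailed records, but only for filtered-in domains.

class Tracer:
    def __init__(self):
        self.level = 0
        self.level2_domains = set()  # domains under detailed trace
        self.records = []

    def exception(self, domain, msg):
        """Level 1: always-cheap exception trace."""
        if self.level >= 1:
            self.records.append(("L1", domain, msg))

    def detail(self, domain, msg):
        """Level 2: detailed trace, only for filtered-in domains."""
        if self.level >= 2 and domain in self.level2_domains:
            self.records.append(("L2", domain, msg))

t = Tracer()
t.level = 2
t.level2_domains = {"storage"}
t.detail("storage", "getmain 4KB")   # recorded: domain is filtered in
t.detail("dispatch", "task switch")  # dropped: domain not filtered in
t.exception("dispatch", "abend")     # recorded: level 1 applies everywhere
```

Mapped onto the cars, "level 1" is the compact best-guess event record every car sends, and "level 2" is the richer telemetry HQ switches on for earmarked GPS areas.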
 

I see your point. There goes my patent! :wink:
 
The idea/concept makes sense, but I'm treating what Tesla says with a bit more scrutiny these days. I was hoping there are some forum members here with machine learning experience who can shed some light on this -- whether it's coders looking at where things did not work, tweaking, then redeploying, or whether it truly could be machine-learning based (which seems to be the concept everyone is excited about).

Call me a little slower to get excited these days.

I spoke with a friend who is heavily into machine learning and processing big data sets (he has a PhD related to it). He totally believes that Tesla is doing this. It would be a combination of humans (coders) and the system itself working together to improve. The way he put it is that the higher quality a data set you have, the better you can analyze what's going on and make adjustments based on specific data. The sensors and systems that Tesla is using provide a rich data set to mine and use to improve the outcomes: radar, ultrasonics, camera, GPS, vehicle data (speed, temperature, etc.), and system data on when human intervention occurs. This is a huge advantage over all the other automakers, who are operating with very limited data sets. They have no system to pull in observational data on how the cars and drivers are reacting to the rules in the system, and no ability to push out improvements on a continual basis when the rules and interpretations of them are modified because they are not creating the desired outcomes.

A way to think of what's going on is that the cars have a certain set of rules, and interpretations of those rules, that Tesla has put in them. As the system observes how the cars react and how people react to the rules, it can see when the rules are broken and when they are questioned. The system can point out those instances, and then the coders can adjust the rules and interpretations to improve its behavior. He also said that what you actually want is for failures to happen -- cases where the rules are broken. It's randomness in the system that will make the system better, if that randomness is observed and acted upon. He used an example of kids: if we put them on a tight schedule with no variations, they never learn how to deal with situations outside the routine. It's the variations in the routine (maybe a later bedtime one night to make them a bit more tired the next day, or a change in their activity routine that makes them less active) that help them learn how to adapt to a new feeling or situation. With the data set Tesla is collecting now with its first release version of Autopilot, it will see more random events in the system than any team of people could ever imagine. As they see those random events and their effect on the cars, they can analyze and adjust the behavior of the system to operate more and more along the lines that they wish.

It is really amazing stuff, and it is a massive shift in the automotive transport world that we are witnessing today. If Tesla can truly execute on this promise, it will be a massive change in the paradigm of transport, and we will see it within most of our lifetimes.
 
This is a very interesting topic. In my short time driving with the new Autopilot features, I've catalogued three or four spots on local roadways where the AP's actions required my intervention, and I will revisit them on a regular basis. I'm looking forward to seeing how the AP system's response to these locations changes with each new release, or even at different times, under different conditions, on the same release.
 

The way Elon described it on the press conference call was that some of these algorithm improvements won't need firmware updates. He hinted that they will update about weekly.
 
Not sure I believe the minutiae from Elon any more. There seems to be a bit of a disconnect between his reality and mine. Still a fan -- I love the vision and believe the more global statements -- but I'm just not buying the details as much any more. It seems the more they do on the fly, the more the small utterances drift from the final truth.
 
I work with machine learning, including a fair bit of overlapping technology in a military context, and I'm quite sure that Tesla is doing some of this. People probably overestimate how automated the learning process is, but collecting the data, analyzing it, and using it to refine mapping and algorithms is not bleeding edge at all; lots of us do that. In fact, all the GIS players use data collection vehicles to do traffic and road mapping. How quickly the data gets analyzed, filtered, and put back into the field is another question -- you'll notice that Elon finessed the answer to that question by saying not every day, maybe weekly.

I also believe the much more detailed maps he talked about and showed a slide of are a real thing; that would be a very elaborate lie to tell if it weren't true. I'd also expect them to track things like user overrides of autosteer and the times when it alerts and demands you take over immediately. If they see it happen repeatedly at the same location and they can't fix the behavior, they could set a fence that demands you take over before the problem area as a proactive measure. They might even flag certain roads and stretches of road as simply "not available" if the incidence of alarms and inappropriate steering output is high enough.
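A geofenced takeover prompt like that could be a very small piece of in-car logic. This is purely an illustrative sketch (indices along a route polyline stand in for real map matching, and all names and the lead distance are assumptions):

```python
# Prompt the driver before the car reaches a stretch HQ has flagged as
# unreliable, and insist on takeover once inside it.

def takeover_warning(pos_index, flagged_ranges, lead=5):
    """pos_index: car's index along a route polyline.
    flagged_ranges: (start, end) index ranges HQ marked as problematic.
    Returns a prompt string, or None when no flagged stretch is near."""
    for start, end in flagged_ranges:
        if start - lead <= pos_index <= end:
            # Inside the flagged stretch: demand takeover; approaching: warn.
            return "take over now" if pos_index >= start else "take over soon"
    return None
```

The same table could also implement the "not available" idea: refuse to engage autosteer at all inside ranges whose alarm rate is too high.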


Yes, we should be able to prove or disprove this in a matter of weeks, because everyone will have a familiar spot or two on their commute where Autopilot acts up. Some of us will see them fixed, or we'll know that their process for accumulating and acting on the information isn't very good. They are presumably about to get buried in data; hopefully they are prepared to ramp up.
 
But I can see it working something like this in some cases:

- Every time a driver takes over to correct the autopilot (let's say it tried to exit), log an event with the GPS coordinates and all relevant info. Maybe a camera snapshot.
- In the backend, combine those reports and keep track of the frequency at locations. Then score each location based on frequency, severity etc.
- Locations with high scores are either looked at by a human or a further algorithm makes adjustments to the mapping database.
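The backend scoring step in that list might reduce to something like this (names and the threshold are assumptions; here "score" is just frequency times average severity, i.e. summed severity):

```python
from collections import defaultdict

def flag_for_review(reports, threshold=10.0):
    """reports: (location_key, severity) pairs aggregated from the fleet.
    Sum severity per location and return locations whose score clears the
    review threshold, worst first."""
    by_loc = defaultdict(float)
    for loc, severity in reports:
        by_loc[loc] += severity
    flagged = [loc for loc, score in by_loc.items() if score >= threshold]
    return sorted(flagged, key=lambda loc: -by_loc[loc])
```

A single severe event and a pile of mild ones can then surface the same way, and only the locations that clear the bar ever reach a human or the map-adjustment step.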

What happens when a driver overrides AP to exit at his chosen off ramp?
 

This kind of data will be very noisy. There will be many, many cases where the driver overrode AP that didn't indicate any fault at all. Maybe the driver got nervous, but the car was not going to fail, maybe the driver wanted to take an exit, maybe they just felt like being in charge again. Figuring out how to synthesize all this input is not a trivial problem.
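Even a crude first-pass filter for that noise has to consult state well beyond the override itself. A toy sketch -- everything named here (the turn-signal flag, the exit list, the 300 m radius) is my own assumption, not anything Tesla has described:

```python
import math

def distance_m(a, b):
    """Rough equirectangular distance in metres between (lat, lon) points."""
    lat = math.radians((a[0] + b[0]) / 2)
    dx = math.radians(b[1] - a[1]) * math.cos(lat) * 6371000
    dy = math.radians(b[0] - a[0]) * 6371000
    return math.hypot(dx, dy)

def is_candidate_fault(override, known_exits, radius_m=300):
    """First-pass filter: drop overrides that look intentional.
    An override with the turn signal on, or right next to a known exit,
    is probably the driver leaving the highway, not an AP fault."""
    if override.get("turn_signal_on"):
        return False
    return all(distance_m(override["pos"], e) >= radius_m for e in known_exits)
```

Even this leaves the "driver just got nervous" cases in the pool, which is exactly why synthesis is the hard part: only clustering across many drivers (as discussed earlier in the thread) separates genuine faults from individual habit.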
 
I hear lolachampcar's skepticism. I think Tesla will be doing this data mining as they say, but I have no real expectation of when or how I could possibly see them change that little spot in my neighborhood (okay, not a highway, but a large road with lane markings) where the car veers off all the time. I mean, that's sort of how we will be measuring this, right? Luckily for Tesla, this time there is no deadline on these improvements, which I'm happy about!

Having said that, I assume the motivation for them to use all of this data is there because it will get them to full autonomy quicker than any other company except Google. So I have some hope.

I do still wonder sometimes if they are pouring too many resources into this arena when the ultimate goal is to electrify wheeled transport for as many people as possible.
 
Autonomous driving and a dramatic rewriting of the battery production arena (fixed storage for utilities plus batteries for cars) seem to be the ultimate goals. Every start-up I've done has been an exercise in chasing moving targets, like finding your way through a maze. You have no idea where you are really going, only the general direction.
 

I agree that their core mission is sustainable transport. And I also agree that there is a risk of devoting resources in too many directions. However, I think in this instance it is likely justified. I think in a strategy vs. tactics view, being a leader in autonomous driving (let's call this a tactic) is crucial to their goal of sustainable transport (call this the strategic goal). Why? Well, I think it is similar to their decision to start at the high end (luxury/performance) of the market and then work down to mass market. Namely, to get people to buy their cars, they needed to make them highly desirable, so that they could sell them profitably, and use the surplus to grow the company. Similarly, being the leader in autonomous driving makes their cars more desirable. Creating a brand that is the technology and performance leader is crucial to their ability to sell in large volume with high margins.

Now, just how their resources get allocated between one aspect vs another of being the "IT" brand is surely debatable. But I think they have to devote enough resources to this area to be perceived as a leader, if not the leader.

Not really disagreeing here so much as brainstorming. Your thoughts?