
Elon: "Feature complete for full self driving this year"

He's at it again, this time on the ARK podcast: On the Road to Full Autonomy With Elon Musk — FYI Podcast

"I think we will be feature complete FSD this year, meaning the car will be able to find you in a parking lot, pick you up and take you all the way to your destination this year. I would say I'm certain of that. That is not a question mark."

Getting back to the topic of the OP, I think one of the most fascinating pieces of information to come out about autonomy lately is that Waymo is using imitation learning.

Imitation learning, for those who don't know, means a neural network learns to map certain kinds of actions to certain kinds of environment states based on observing what humans do. By training on lots and lots of examples of human action, the neural network learns, "If you see this, do that." Like, "If you see a stop sign, stop." Or, "If you see a parked car blocking your way, nudge around it like so."
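If you like code, behavior cloning is conceptually just supervised learning on logged state-action pairs. Here's a minimal sketch in PyTorch; the dimensions and data below are made up for illustration, not anything from a real system:

```python
# Minimal behavior-cloning sketch (illustrative only; dimensions and data invented).
# The network learns a state -> action mapping from recorded human examples.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 64, 3   # e.g. perception features -> [steering, throttle, brake]

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random tensors standing in for logged (state, human action) pairs.
states = torch.randn(10_000, STATE_DIM)
actions = torch.randn(10_000, ACTION_DIM)

for epoch in range(5):
    pred = policy(states)           # "if you see this..."
    loss = loss_fn(pred, actions)   # "...do what the human did"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```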

Drago Anguelov, the lead of Waymo’s research team, recently gave a talk at MIT where he went deep into this topic.


One of the most interesting slides from the presentation:



Anguelov says for the long tail of the human driving behaviour distribution — rare situations — there aren’t enough training examples in Waymo’s dataset to do imitation learning. I mean, imagine a situation that arises every 30 million miles on average. Waymo has done under 15 million miles, so it might not have encountered that situation even once. Or a situation that occurs every 1 million miles. It might have fewer than 15 examples, which might be far too few. (I don't know what's true for imitation learning, but for image classification the rule of thumb is you want at least 1,000 examples.) There are lots of rare situations Waymo has never seen, or has seen too few times.
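To put numbers on that intuition, treat rare situations as independent arrivals at a constant rate (a Poisson model):

```python
# Back-of-the-envelope: expected examples of a rare event in a fixed number of miles.
import math

fleet_miles = 15e6  # roughly Waymo's mileage at the time (figure from the post above)

for miles_per_event in (1e6, 30e6):
    expected = fleet_miles / miles_per_event
    p_zero = math.exp(-expected)  # Poisson probability of zero examples
    print(f"1 event per {miles_per_event:.0e} mi: expect {expected:.1f} examples, "
          f"P(none seen) = {p_zero:.0%}")
```

The 1-in-30-million-mile event has a roughly 60% chance of never having been seen even once.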

So, while Anguelov would prefer to do imitation learning across the whole human driving behaviour distribution — including the long tail — Waymo just doesn’t have the data to do it. Well, who does have the data? Tesla... And is Tesla doing imitation learning? Amir Efrati at The Information has reported that it is, citing at least one source who has worked in Tesla’s Autopilot division:

“Tesla’s cars collect so much camera and other sensor data as they drive around, even when Autopilot isn’t turned on, that the Autopilot team can examine what traditional human driving looks like in various driving scenarios and mimic it, said the person familiar with the system. It uses this information as an additional factor to plan how a car will drive in specific situations—for example, how to steer a curve on a road or avoid an object. Such an approach has its limits, of course: behavior cloning, as the method is sometimes called…

But Tesla’s engineers believe that by putting enough data from good human driving through a neural network, that network can learn how to directly predict the correct steering, braking and acceleration in most situations. “You don’t need anything else” to teach the system how to drive autonomously, said a person who has been involved with the team. They envision a future in which humans won’t need to write code to tell the car what to do when it encounters a particular scenario; it will know what to do on its own.”
Tesla hasn’t confirmed this, but Elon made some comments on the ARK Invest podcast that could be interpreted as describing imitation learning.

At 13:30:

"The advantage that we have that I think is very difficult to overcome is that we have just a vast amount of data on interventions. So, effectively, the customers are training the system on how to drive. And there are millions of corner cases that are so obscure and weird you wouldn't believe it..."​

At 14:25:

“Every time somebody intervenes — takes over from Autopilot — it saves that information and uploads it to our system. ... And we’re really starting to get quite good at not even requiring human labelling. Basically the person, say, drives the intersection and is thereby training Autopilot what to do."
These comments are ambiguous and there are multiple possible interpretations. But, to me, imitation learning fits most closely with what Elon said.

Important to clarify: Tesla wouldn't need to upload any sensor data for this to work. It would only need to upload the perception neural network's mid-level representation — the judgments (or predictions, to use the technical term) it makes about what it sees — paired with data about what the human driver did with the steering wheel and pedals. These state-action pairs don't need human annotation.
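For illustration, the uploaded record might look something like this. Every field name here is hypothetical; Tesla has not confirmed any such format:

```python
# Hypothetical shape of an uploaded state-action pair (all field names invented).
# The "state" is the perception net's mid-level output, not raw camera frames, and
# the "action" is what the human did with the controls -- no human annotation needed.
from dataclasses import dataclass

@dataclass
class StateActionPair:
    # Mid-level perception outputs (illustrative):
    lane_geometry: list[float]   # e.g. polynomial coefficients per lane line
    nearby_objects: list[dict]   # e.g. {"kind": "car", "x": ..., "y": ..., "v": ...}
    # What the human driver actually did:
    steering_angle_deg: float
    accel_pedal_pct: float
    brake_pedal_pct: float
```

A record like this is tiny compared with raw video, which is what makes uploading it over cellular plausible.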

Okay, let's suppose Tesla is using imitation learning on the whole distribution of human driving behaviour, including the long tail of millions of obscure and weird situations. Can this approach really train a neural network to execute complex tasks?

Yes! Using just imitation learning on millions of StarCraft games played by humans, AlphaStar achieved performance estimated by DeepMind to be at the level of a Gold/Platinum league human player. That's roughly in the middle of the ranked ladder. In other words, AlphaStar reached roughly median human performance using imitation learning alone. StarCraft is a complex task that has been considered by many to be a significant AI challenge.

Imitation learning was also used by DeepMind to bootstrap reinforcement learning (RL) for AlphaStar. This allowed it to go on and beat one of the world's top pro players. Could the same be done for autonomous driving? Maybe, but there is an important difference between StarCraft and driving. To quote Oriol Vinyals, one of the creators of AlphaStar:

"Driving a car is harder. The lack of (perfect) simulators doesn’t allow training for as much time as would be needed for Deep RL to really shine."​

Why can't we do RL in a driving simulator? According to two researchers at Waymo:

"...doing RL requires that we accurately model the real-world behavior of other agents in the environment, including other vehicles, pedestrians, and cyclists. For this reason, we focus on a purely supervised learning approach in the present work, keeping in mind that our model can be used to create naturally-behaving “smart-agents” for bootstrapping RL."
Drago Anguelov also emphasized "smart agents" for robust simulation in his talk:



So, to summarize:
  • Waymo is doing imitation learning, but not for the long tail because it lacks training data
  • Tesla might be doing imitation learning, and has training data for the long tail
  • AlphaStar reached roughly median human performance on StarCraft with imitation learning
  • Imitation learning bootstrapped reinforcement learning for AlphaStar
  • The same could potentially be done for autonomous driving
I can't predict the future, and neither can Elon. I can't predict the outcome of machine learning experiments or novel engineering projects in advance. But I have put the above pieces together to make an explanatory theory of how autonomous driving might be solved — imitation learning and possibly reinforcement learning — that implies, if this approach works, Tesla will likely be the first to solve it.

It also implies that, if this approach works, progress will happen at the speed of a machine learning project, not at the speed of a classical robotics project. For the past 15 years, classical robotics software has played a major role in almost all autonomous vehicles (with Wayve and Nvidia's BB8 being two exceptions). Classical robotics software makes progress slowly. And it might never work at all. Elon certainly doesn't seem to think it will (15:05):

"...a series of if-then-else statements and lidar is not going to solve it. Forget it. Game over."
Just because humans know how to do things doesn't mean we know how we do them. That's a problem for psychology, neuroscience, cognitive science, and artificial intelligence research. It's not something a software engineer can understand through introspection or casual observation and then program into a robot. Some tasks we can program, and some we can't. Classical robotics doesn't have a lot of complex tasks under its belt.

If classical robotics software is our best approach to autonomous driving, it might be hopeless. Even if it isn't, progress will probably be about as slow over the next 15 years as the last 15 years. This would make linear extrapolation of progress rational. If you extrapolate progress linearly, then talking about full autonomy in 2020 doesn't make sense.

If the approach isn't classical robotics, but machine learning, then progress could be super-linear. Linear extrapolation of progress wouldn't be rational. I think this is where Elon is coming from. If you think of Tesla AI on the timeline of AlphaStar, then full autonomy in 2020 makes a lot more sense. AlphaStar took less than 3 years of development. The final training run before it beat MaNa — one of the world's top players — took 17 days. The imitation learning portion took just 3 days. An agent can go from zero to human-level at StarCraft in a long weekend. So, to say full autonomy is definitely not going to happen next year because of what's on the road today doesn't make sense, if you think imitation learning is a viable approach.
 
So, while Anguelov would prefer to do imitation learning across the whole human driving behaviour distribution — including the long tail — Waymo just doesn’t have the data to do it. Well, who does have the data? Tesla...
  • Tesla might be doing imitation learning, and has training data for the long tail.

They don't, based on your own post...

  • But I have put the above pieces together to make an explanatory theory of how autonomous driving might be solved — imitation learning and possibly reinforcement learning — that implies, if this approach works, Tesla will likely be the first to solve it.

How will they be first? You didn't lay out any facts or evidence, just a lot of conjecture.

It also implies that, if this approach works, progress will happen at the speed of a machine learning project, not at the speed of a classical robotics project.

Driving is not a toy research project like a game.

If the approach isn't classical robotics, but machine learning, then progress could be super-linear. Linear extrapolation of progress wouldn't be rational. I think this is where Elon is coming from. If you think of Tesla AI on the timeline of AlphaStar, then full autonomy in 2020 makes a lot more sense. AlphaStar took less than 3 years of development. The final training run before it beat MaNa — one of the world's top players — took 17 days. The imitation learning portion took just 3 days. An agent can go from zero to human-level at StarCraft in a long weekend. So, to say full autonomy is definitely not going to happen next year because of what's on the road today doesn't make sense, if you think imitation learning is a viable approach.

That's not where Elon is coming from; he's been saying "two years away" since 2015. We know exactly where he is coming from. It has been proven hundreds of times that Elon's statements concerning AP have always been complete lies, and this is irrefutable factual evidence.

There's no difference between him saying 2020 now with you backing him up, and him saying 2019 with you coming up with theories to fit 2019.

You don't create theories based on actual independent evidence and findings; you create theories based on whatever Elon says, and then you want us to pretend the last theory doesn't exist. Now the drum for 2020 begins. When 2020 comes and it doesn't happen and Elon says end of 2021, you will start drumming for 2021 as well. This is why people call you out: you repeat the same thing with a new timeline from Elon and a different reason why this new timeline is THE ONE!
 
@strangecosmos

The biggest one is, so far we have no insight that Tesla is gaining meaningful state-action pairs from their AP2 trigger data that could actually be used for such training — in fact, what we know so far is that they are not — and we have a lot of reason to believe they are not likely to anytime soon.

Training NNs on consumer data seems like a seriously iffy proposition in the vision sense, and Tesla is a long way from training beyond that, given that the next level (training an abstracted driving policy, for example) would require a mature vision engine, which Tesla does not seem to have.
 
If anyone has any technical criticisms of my conjecture, I’m all ears...

(But ad hominem remarks and accusations of bad faith are not welcome)

Here's a semi-technical criticism: Despite all of their data from two years of AP2 fleet usage, and all of their amazing, ground-breaking, nobody-can-touch-this machine learning expertise, they still cannot recognize a stopped fire truck on the highway in order to avoid slamming into it.

@Bladerskb's criticism is not entirely ad hominem. He pointed out that the track record of Elon's tweets and statements during earnings calls regarding Autopilot capabilities (and particularly timelines) is abysmal -- this is a demonstrable fact. And your arguments tend to rest entirely on Elon's tweets and statements during earnings calls! This is not ad hominem.

How about you use evidence beyond what Tesla says and look at what Tesla does -- what actual capabilities have they demonstrated? Where is the coast-to-coast FSD demonstration? Why are AP2 cars still slamming into fire trucks and concrete barriers? Many people on this forum have extensive first hand daily experience with Autopilot's capabilities, and that is by far the most reliable data we have about where Tesla is and where they're going, since the tweets and public statements are demonstrably unreliable.
 
It is hard to go into detail when you feel the entire premise is wrong. That is how I approach @strangecosmos, unfortunately.

I have detailed debates with many people who disagree with me here, but I find we are probably too far apart on this one to say anything meaningful.

I think @strangecosmos has an unrealistic view of both Tesla in particular and the industry in general.
 
If anyone has any technical criticisms of my conjecture, I’m all ears...

(But ad hominem remarks and accusations of bad faith are not welcome)

Ad hominem (Latin for "to the person"), short for argumentum ad hominem, is a fallacious argumentative strategy whereby genuine discussion of the topic at hand is avoided by instead attacking the character, motive, or other attribute of the person making the argument

Where in my response did I address your character, motive, or attributes? Stop playing the victim card and address the countless fallacies in your theories/thesis!

If you actually looked through my response to your thesis, I made three points. First and foremost, your entire thesis is always based on Elon's statements, which have been proven completely false. This is an absolute fact, yet all your theories hinge on them. Then I pointed out that you don't construct these theories/points based on independent evidence/facts but exactly on Elon's statements. Therefore you are now hyping up 2020 because Elon just pushed his timeline to 2020, just as you hyped up 2019, and you will likewise hype up 2021 when the inevitable pushback of 2020 happens.

This isn't an ad hominem attack on your character, motive, or other attribute; this is pointing out how your points/thesis are constructed and what they are based on. If what they are based on has a track record of being consistently wrong, then so are the theories built on it.

Calling that ad hominem is like a guy who has been shouting that the end will come this month, every month for the last 12 months, each time using a tweet he read as the basis for his argument. When his argument is deconstructed and it's pointed out that his points are, as always, based on tweets that have been proven wrong, he responds with, "That's ad hominem. Why don't you address the fact that the world will end this month!"


Part 1...
 

Reliance on Elon is one half of @strangecosmos' issue. Maybe you will go into the other half, but I’ll share my view.

This is the other half:

He does seem to believe the amount of data Tesla can gather from consumer cars is the key, and that Tesla has a unique advantage in this.

This is unrelated to Elon himself, I will give him that. The problem for me is, this presumption is equally iffy — if not more so than listening to Elon.

There is absolutely nothing pointing to Tesla training NNs on consumer state-action pairs at all. (He has evolved from raw data to abstracted data as a possibility, but the problem again is that Tesla is very far away from this, and he ignores that MobilEye's EyeQ4 is actually already ahead with a similar advantage on abstracted data.)
 
If anyone has any technical criticisms of my conjecture, I’m all ears...

(But ad hominem remarks and accusations of bad faith are not welcome)

Part 2...

Then, in my other point, I talked about how your own post disproves the statement you made. You said it yourself: Tesla needs HW3 because only that is said to have traffic light, traffic sign, road sign, road marking, pothole, debris, and general object detection, more accurate detection, etc. So how can they already have the training data they need, as you just claimed, when the current firmware in AP2.x has none of those detection capabilities?

Another point is the fact that there is no shred of evidence of the collection of steering/accel/brake data together with processed environmental-model data. I believe @verygreen would be able to see this data if it were actually being retrieved. @wk057 has shown that you can easily read and send data to the ECU relating to steering and accel/brake commands, and I'm sure green can see any request the Autopilot software makes for these pieces of data, especially if it's done continuously, as the AlphaStar thesis would require.

Thirdly, you didn't provide any of the supporting evidence necessary for the AlphaStar thesis. For instance, where is the HD map that is needed to do accurate simulation and RL? What about the simulator itself? Amir himself said Tesla hadn't even started working on a simulator. You refuted Amir's assertion by pointing to Tesla's AP1 lane-keeping and adaptive-cruise-control simulator, which anyone would know is night and day compared to a simulator required for actual full self-driving.

So not only is Tesla lacking HD maps, they are also lacking a simulator as of late 2018. Both are necessary for the AlphaStar thesis.

  • Evidence of state/action data collection: ✗
  • Evidence of HD maps: ✗
  • Evidence of a self-driving simulator: ✗
 
He does seem to believe the amount of data Tesla can gather from consumer cars is the key, and that Tesla has a unique advantage in this.

This is unrelated to Elon himself, I will give him that. The problem for me is, this presumption is equally iffy — if not more so than listening to Elon.

I asked @strangecosmos in another thread to run the numbers on the cost of collecting, storing, retrieving, labeling, and training on this supposedly vast amount of fleet data, but he never responded. Training machine learning algorithms on large amounts of data is very expensive. If there is any human labeling that is also very expensive, and the evidence that unlabeled data is useful for what they're trying to do is rather iffy. Even just transmitting, storing, and retrieving it is expensive, but that's nowhere near the most expensive part.

This is why Tesla does not actually have data from billions of fleet miles. They have very small snippets from those triggers that the people who know what they're talking about found. And they've probably already thrown most of that data away. They probably start almost from scratch, collecting new data from new triggers, whenever they're ready to train a new feature.
 
I asked @strangecosmos in another thread to run the numbers on the cost of collecting, storing, retrieving, labeling, and training on this supposedly vast amount of fleet data, but he never responded. Training machine learning algorithms on large amounts of data is very expensive. If there is any human labeling that is also very expensive, and the evidence that unlabeled data is useful for what they're trying to do is rather iffy. Even just transmitting, storing, and retrieving it is expensive, but that's nowhere near the most expensive part.

Whereas I did respond. Collecting data is cheap (built into the cars), storing data is cheap (commodity hard drives), retrieving is cheap (for the same reason as storing), and training is either a one-time cost to buy the processing power or a per-use cloud system; still not exorbitant, it's just GPU time. Labeling is human-intensive, but not a highly skilled task, so it's very outsourceable (which is what Tesla does, based on what I read somewhere).
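To put rough numbers on it (every figure below is an assumption, not Tesla's actuals):

```python
# Rough storage-cost sanity check (all prices and volumes assumed).
fleet_size = 500_000            # assumed number of cars
snippets_per_car_per_day = 10   # assumed trigger uploads per car
mb_per_snippet = 50             # assumed size of one snippet
usd_per_tb_month = 20           # ballpark cloud object-storage price

tb_per_day = fleet_size * snippets_per_car_per_day * mb_per_snippet / 1e6
print(f"~{tb_per_day:,.0f} TB/day ingested")
print(f"~${tb_per_day * 30 * usd_per_tb_month:,.0f}/month to store one month's intake")
```

At those assumptions it's on the order of $150k/month, which is small next to an R&D budget measured in billions.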
 
Another point is the fact that there is no shred of evidence for the collection of steering/accel/brake
What are you talking about? Do you think the steering wheel angle/speed/power/braking stuff in the bottom right part of the main camera window happens by magic? Of course there is a constant stream of CAN bus traffic available to Autopilot, from which it can see all of that.

This is not to say Tesla sends any of it outside of snapshots anywhere, but the data stream is there ready for picking once it's needed.
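For illustration, reading those signals with the python-can library looks something like this. The arbitration ID and scale factor below are invented; this isn't a claim about Tesla's actual CAN layout:

```python
# Illustrative CAN bus read (python-can). The ID and scale factor are invented.
import can

bus = can.interface.Bus(channel="can0", interface="socketcan")
STEERING_ID = 0x123  # hypothetical arbitration ID

while True:
    msg = bus.recv(timeout=1.0)
    if msg is not None and msg.arbitration_id == STEERING_ID:
        raw = int.from_bytes(msg.data[:2], "little", signed=True)
        print(f"steering angle ~ {raw * 0.1:.1f} deg")  # invented scale factor
```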
 
What are you talking about? Do you think the steering wheel angle/speed/power/braking stuff in the bottom right part of the main camera window happens by magic? Of course there is a constant stream of CAN bus traffic available to Autopilot, from which it can see all of that.

This is not to say Tesla sends any of it outside of snapshots anywhere, but the data stream is there ready for picking once it's needed.

By collection obviously @Bladerskb meant collection by Tesla, not a datastream inside the car. Is this datastream sent to Tesla?
 
Whereas I did respond. Collecting data is cheap (built into the cars), storing data is cheap (commodity hard drives), retrieving is cheap (for the same reason as storing), and training is either a one-time cost to buy the processing power or a per-use cloud system; still not exorbitant, it's just GPU time. Labeling is human-intensive, but not a highly skilled task, so it's very outsourceable (which is what Tesla does, based on what I read somewhere).

You did respond and I responded to your response and... I still don't see any numbers. You assert this stuff is cheap but it's simply not. And labeling is a semi-skilled task; poor labeling leads to garbage in, garbage out. Yes, it can be outsourced, but does that somehow make it free? And yes, the cost per GPU-hour seems kind of small... until you add up how many GPU-hours you need, and then realize that it's not a one-time cost -- every single time you tweak your training algorithm slightly, you need to re-train to evaluate the effects of your tweaks. You have to do this repeatedly.
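Some ballpark arithmetic (every figure assumed) on why re-training adds up:

```python
# Ballpark re-training cost; all numbers are assumptions for illustration.
gpus = 64                 # assumed cluster size for one training run
hours_per_run = 72        # assumed duration of one full run
usd_per_gpu_hour = 2.50   # ballpark cloud GPU price
runs_per_month = 20       # tweaks, ablations, regression checks...

monthly = gpus * hours_per_run * usd_per_gpu_hour * runs_per_month
print(f"~${monthly:,.0f}/month for training runs alone")
```

That's roughly $230k/month before you pay for storage, bandwidth, labeling, or engineers.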
 
What are you talking about? Do you think the steering wheel angle/speed/power/braking stuff in the bottom right part of the main camera window happens by magic? Of course there is a constant stream of CAN bus traffic available to Autopilot, from which it can see all of that.

This is not to say Tesla sends any of it outside of snapshots anywhere, but the data stream is there ready for picking once it's needed.

That's what I mean. It's very visible and accessible, so you will see if they are siphoning it and uploading it to the mothership.
 
The resolution of Tesla's gateway logging is pretty high for things like steering angle and pedal position. They log tons and tons of data, completely independent of the autopilot system. That's not to say that they couldn't use this data at some point for autopilot training of some kind, but generally the gateway logs are left alone and only uploaded to Tesla when needed by service personnel, so they don't appear to be using it for any autopilot related machine learning models, but who knows. Thing is, the logging is super efficient in their proprietary format, with 25,000 miles of driving only taking up a GB or so. No reason they couldn't just one day be like, "OK, we need all of this data from the fleet," and start a mass log upload.
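Running that ~1 GB per 25,000 miles figure forward with some assumed fleet numbers (not actuals):

```python
# What would a fleet-wide gateway-log pull weigh? Fleet figures below are assumed.
gb_per_mile = 1 / 25_000     # from the logging density mentioned above
fleet_cars = 400_000         # assumed fleet size
miles_per_car_year = 12_000  # assumed annual mileage

total_tb = fleet_cars * miles_per_car_year * gb_per_mile / 1_000
print(f"~{total_tb:,.0f} TB per fleet-year of gateway logs")
```

A couple hundred terabytes per fleet-year is well within "just upload it all" territory.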

I for one am pretty sure we're not going to see anything that would be true full self driving from Tesla any time in the next few years. They keep moving the goal posts, but I'm talking about (regardless of regulatory approval) hardware that's get in, punch an address, and go-to-sleep capable... just not happening, not with the current sensor set, and AFAIK they're not planning on retrofitting cars with more radars. Just my guess, but it's going to be 2025 before we have hardware/software that's even close to that level of usefulness... again, ignoring regulatory concerns.

A single demo doesn't count, either. Why? Here's why. I can make my 2014 S drive driveway to driveway hands free on a few local routes without much issue, and that's with AP1 hardware. Doesn't mean I'm going to start selling full self driving software to folks just because my little project works decent on a predefined course. I could hype the hell out of it (*cough* Tesla's now multi-year-old tech demo video that they still use to show off autopilot on their website *cough*), but it wouldn't mean I'm anywhere near the level of sophistication needed for even full highway self driving.

At the end of the day, the Tesla FSD nonsense is just that to me... nonsense. I like Tesla's cars, but the company itself is just flat out terrible in almost every respect when it comes to the sales/marketing side of things. Given even a small glimpse of their history (let alone being a reasonably early adopter and "living" through all of it) you'd have to be absolutely out of your mind to fork over money for the full self driving package right now or previously. Seriously, just wait until it actually exists, actually does more than party tricks, and just buy a new car with the feature when it's available. In the meantime, invest that money somewhere sane and make better use of it than a donation to Tesla. Even investing in TSLA would have been better for many of the people who bought the FSD package (release {edit: release as in, when they first started taking people's money for it... they haven't actually released anything} vs today is like +30%...). (Disclaimer, I don't invest in TSLA anymore, nor short, nor anything related to TSLA stock at all).
 
You did respond and I responded to your response and... I still don't see any numbers. You assert this stuff is cheap but it's simply not. And labeling is a semi-skilled task; poor labeling leads to garbage in, garbage out. Yes, it can be outsourced, but does that somehow make it free? And yes, the cost per GPU-hour seems kind of small... until you add up how many GPU-hours you need, and then realize that it's not a one-time cost -- every single time you tweak your training algorithm slightly, you need to re-train to evaluate the effects of your tweaks. You have to do this repeatedly.
It's a one-time cost if you buy the hardware (even more so if you use solar to power it).

training is either a one-time cost to buy the processing power or a per-use cloud system;

Nor have you shown any numbers to say it is expensive.
Labeling accuracy can be helped by using low-cost labor and submitting the same image multiple times, as shown below... (this works regardless of pay level; Karpathy didn't score as high as one would think on ImageNet due to the multiple classifications)
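A sketch of the multiple-submission idea, which amounts to majority voting across labelers (the function and threshold are illustrative):

```python
# Consensus labeling: send each image to several labelers, keep the majority answer.
from collections import Counter

def consensus(labels: list[str], min_agreement: float = 0.6) -> str | None:
    winner, count = Counter(labels).most_common(1)[0]
    return winner if count / len(labels) >= min_agreement else None  # None -> re-queue

print(consensus(["car", "car", "truck", "car", "car"]))  # -> car
print(consensus(["car", "truck", "bus"]))                # -> None (no consensus)
```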
 
By collection obviously @Bladerskb meant collection by Tesla, not a datastream inside the car. Is this datastream sent to Tesla?
It is sent to Tesla if a trigger asks for it, or in the case of a crash and such.

That's what I mean. It's very visible and accessible, so you will see if they are siphoning it and uploading it to the mothership.
They do, every once in a while.
 
@Bladerskb Tesla is releasing an L4 feature! Granted, it is only in a 150 ft radius area, but the car is driving autonomously without a driver! ;) :p

If the car is moving autonomously and navigating a parking lot without anyone in it, that qualifies as L4 IMO. It's geofenced to a very small area, but still L4.

March 15 should bring the no-nag NoA along with improvements in performance.

It sounds (to my optimistic ears) like advanced summon will drive around a parking lot to get to you, so your house should be doable. I'm guessing it will do the reverse to park (or circle forever while you Christmas shop)...

Should be an interesting discussion in a little over 2 weeks.


Enhanced Summon Manual said:
Warning: Enhanced Summon is designed and intended for use only on parking lots and driveways on private property where the surrounding area is familiar and predictable.

Warning: Enhanced Summon is a BETA feature. You must continually monitor the vehicle and its surroundings and stay prepared to take immediate action at any time. It is the driver's responsibility to use Enhanced Summon safely, responsibly, and as intended.

Warning: Enhanced Summon may not stop for all objects (especially very low objects such as some curbs, or very high objects such as a shelf) and may not react to all traffic. Enhanced Summon is unable to anticipate crossing traffic.

Warning: Enhanced Summon's performance depends on the ultrasonic sensors, the visibility of the cameras, and on the availability of GPS data.

Warning: Enhanced Summon is intended for use only in parking lots and private property. Do not use Enhanced Summon on public roads.

Warning: When using Enhanced Summon, you must maintain a clear line of sight between you and Model S/Model X/Model 3 and stay prepared to stop the vehicle at any time.

Warning: When you release the button to stop Model S/Model X/Model 3, a slight delay occurs before the vehicle stops. Therefore, it is critical that you pay close attention to the vehicle's driving path at all times and proactively anticipate obstacles that the vehicle may be unable to detect.

Warning: Use extreme caution when using Enhanced Summon in environments where movement of obstacles can be unpredictable. For example, where people, children or animals are present.

Warning: Enhanced Summon may not stop for all objects (especially very low objects such as some curbs, or very high objects such as a shelf) and may not react to all oncoming or side traffic. Pay attention and be ready to stop Model S/Model X/Model 3 at all times by releasing the button on the mobile app.

Limitations:
Enhanced Summon is unlikely to operate as intended in the following types of situations:
  • GPS data is unavailable due to poor cellular coverage.
  • The driving path is sloped. Enhanced Summon is designed to operate on flat roads only (up to 10% grade).
  • A raised concrete edge is detected. Enhanced Summon will not move the vehicle over an edge that is higher than approximately 1 in (2.5 cm).
  • One or more of the ultrasonic sensors or cameras is damaged, dirty, or obstructed (such as by mud, ice, or snow, or by a vehicle bra, excessive paint, or adhesive products such as wraps, stickers, rubber coating, etc.).
  • Weather conditions (heavy rain, snow, fog, or extremely hot or cold temperatures) are interfering with sensor or camera operation.
  • The sensors are affected by other electrical equipment or devices that generate ultrasonic waves.

Warning: The list above does not represent an exhaustive list of situations that may interfere with proper operation of Enhanced Summon. It is the driver's responsibility to remain in control of Model S/Model X/Model 3 at all times. Pay close attention whenever Enhanced Summon is actively moving Model S/Model X/Model 3 and stay prepared to take immediate action. Failure to do so can result in serious property damage, injury or death.
