Autopilot is already improving.

Given that the system is apparently active in that it shows the lanes on the display, I have to assume that it's watching how I drive and comparing my behaviour with what it would have done if AP was actively in use. Much like the theory that REM sleep (or the type of sleep where you're paralyzed - I'm hardly an expert on sleep!) is a good time to have dreams that test scenarios - falling off the cliff etc. - because you can't damage yourself, I suspect that AP is doing the same. Not able to act dangerously, but able to test scenarios in a dream state! It would be interesting to know how or if AP's 'watching' differs from a driver's 'panic corrections' with respect to how the learning occurs. This is likely something Ohmman could comment on based on his experience?

There are a number of ways Tesla can be doing this, but first and foremost, it's learning from the driver. That's how these things work, and Elon said as much too. So when you're driving, you're training. When AP is on, it's mostly just obeying the model. There is a slim chance that there is a reinforcement learning algorithm that runs while AP is on, which has a penalty for things like line crossing or a driver taking over without prompting. However, the main training method is for drivers to control the vehicle while it collects environmental data.
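For what it's worth, a minimal sketch of what such a penalty signal could look like is below; the events and weights are made up for illustration and say nothing about how Tesla actually implements this.

```python
# Purely illustrative sketch (not Tesla's actual code): a penalty signal of the
# kind described above, where unprompted driver takeovers and line crossings
# would count against whatever steering policy was active at the time.

def step_penalty(crossed_line: bool, driver_took_over: bool, prompted: bool) -> float:
    """Return a negative reward for undesirable events during an AP-engaged step."""
    penalty = 0.0
    if crossed_line:
        penalty -= 1.0          # drifting over a lane marking
    if driver_took_over and not prompted:
        penalty -= 5.0          # human grabbed the wheel without being asked
    return penalty

# Example: an unprompted takeover while crossing a line would score -6.0
print(step_penalty(crossed_line=True, driver_took_over=True, prompted=False))
```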

One common thought seems to be that the car is learning specific routes or specific locations based on an individual's driving. That's quite unlikely for a few reasons, the most obvious being that a model needs many iterations to adjust. More likely, if a driver is seeing a real difference on the route, the vehicle is learning how to react given a set of inputs which may be unique to that setting. The benefit is that the model can then generalize this to another location on the other side of the country/world where the inputs are similar. And drivers there are, in turn, helping the model learn your particular set of inputs. In other words, if AP is doing better on your particular drive, it's because it's doing a better job generalizing to your route.

A simple example is the classic handwritten character recognition problem. If you have a specific way of writing a 4 which isn't classified properly in the existing model, sending it one or two of your 4s with a label ("this is 4") isn't going to do much. However, if there are hundreds of people like you who write slightly similar 4s, and those are part of the labeled training set, it'll properly classify your 4 even without your examples. Compare that to a spot in your route where maybe the line is wavy or worn off at one spot. Surely in the AP collection space, there are other locations like this. Every time a driver completes their trajectory through, the model is getting more confident in that set of inputs. If you waited long enough, the car would work properly even if it hadn't ever seen that particular spot on your route.
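A toy version of the "your 4 vs. everyone else's 4s" point, using scikit-learn's small bundled digits dataset as a stand-in for the handwriting example (the dataset, classifier choice and split are all just illustrative):

```python
# The model never sees the held-out 4 below, but because the training set
# contains many similar 4s from other writers, it will usually classify the
# unseen one correctly anyway.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# Pick a held-out example of a 4 that was never part of training.
idx = np.where(y_test == 4)[0][0]
print("predicted:", clf.predict(X_test[idx:idx + 1])[0], "actual:", y_test[idx])
```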
 
The resolution of the GPS, especially at speed, is unlikely to be tight enough to actually safely follow a GPS track (if that's what you're suggesting - not sure). But I suspect that the system does learn that "the left lane edge paint anomaly at (approximate Northing and Easting) northbound should be disregarded - go straight and don't deke left". And perhaps, "it turns out the right lane that appears at N: E: is a climbing lane and not a right turn lane - safe to follow the fog line".

Where I am, I suspect most of my travelled lane-kms have been tracked by me alone, or mostly by me. Since the summer ended and especially since version 7 dropped, I haven't seen another Tesla on the highway I travel most often. Obviously, I'm not out there 24-7, but Tesla density isn't overly high in this neck of the woods! So I suspect much of the 'training' I've been seeing has largely been from my car, maybe a pass or two by a small number of other Teslas. On one day early in using 7.0 when I had some slack in my schedule, I actually turned around and tried problem spots again to see if I could reproduce the error. I generally couldn't, and I was left with the impression that AP learns very quickly! However, there have been a few occasions since that time where previously improved behaviour has regressed back to what I saw the first few times through. But it seemed to learn the correct behaviour again. Not sure how or why that is happening!


I am very excited to find out more about this learning behavior. It seems like Autopilot is just scratching the surface of its capabilities.
 
Are you implying that AP learning is close to real time and that quick? What is your theory on how the group learning works?
I have no idea! All I know is that in two instances (locations) where I decided to do multiple test passes, I had to take control to avoid a serious problem. In the first case, I turned around and went back for another go. I waited until there was no traffic visible in either direction and traversed the same spot at about half my usual speed (to give me time to let it do its worst and not lose control). To my surprise, I went right on by without a problem. So I went back again and tried the spot at regular speed. Again, no problem.

At the next location (same day, same trip), I experienced essentially the same problem. This time I went back and tried it again at regular speed, in case the slow pass changed the outcome and learning. No problem this time either.

For the next few days, I drove past these spots (and others where I had taken control but not tested by turning around) with only a very minor steering hesitation that did not require my intervention. Then about a week later I experienced the original problem at the second location. But not since. Ohmman has the knowledge to better speculate on this than I do... I only know what I saw. And whether it was all due to learning in real time or something else, I don't know. I've played with following a car closely compared to having no car ahead, and haven't been able to detect a difference. It's beyond my ability to speculate with any degree of certainty, but I find it all very fascinating! (Extremely fascinating - if I had it to do over again, I'd love to get into this area of engineering/software)

There are a number of ways Tesla can be doing this, but first and foremost, it's learning from the driver. That's how these things work, and Elon said as much too. So when you're driving, you're training. When AP is on, it's mostly just obeying the model. There is a slim chance that there is a reinforcement learning algorithm that runs while AP is on, which has a penalty for things like line crossing or a driver taking over without prompting. However, the main training method is for drivers to control the vehicle while it collects environmental data.

Speaking as a non-expert, I would have assumed that a 'reinforcement learning algorithm' would be extremely valuable, especially at the beginning when the system's knowledge is at its lowest. Combining accelerometer data with the driver taking over could take a stab at identifying what was a close call and thus worthy of further investigation by the human minds still working on the software (or prioritized for learning).
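A rough sketch of how such a trigger might be expressed, with completely made-up thresholds:

```python
# A guess at how a "close call" might be flagged for later human review or
# prioritized learning: combine an unprompted takeover with a spike in measured
# acceleration. The threshold values here are invented for illustration.

def is_close_call(lateral_g: float, longitudinal_g: float,
                  driver_took_over: bool, prompted: bool) -> bool:
    """Flag an event worth uploading if the driver intervened abruptly without being asked."""
    abrupt = abs(lateral_g) > 0.4 or longitudinal_g < -0.4   # hard swerve or hard braking
    return driver_took_over and not prompted and abrupt

print(is_close_call(lateral_g=0.55, longitudinal_g=-0.1,
                    driver_took_over=True, prompted=False))  # True
```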

One common thought seems to be that the car is learning specific routes or specific locations based on an individual's driving. That's quite unlikely for a few reasons, the most obvious being that a model needs many iterations to adjust. More likely, if a driver is seeing a real difference on the route, the vehicle is learning how to react given a set of inputs which may be unique to that setting. The benefit is that the model can then generalize this to another location on the other side of the country/world where the inputs are similar. And drivers there are, in turn, helping the model learn your particular set of inputs. In other words, if AP is doing better on your particular drive, it's because it's doing a better job generalizing to your route.
This makes perfect sense to me. Maintaining a database on every road and curve driven and uploading that data to cars on the routes would be a huge undertaking. However, the parameters must be quite specific, because the problems I was having were with the identical scenario (from the perspective of a civil engineer with highway design experience). Same number of lanes. Same tangent (no horizontal curve). Same method of introducing a left turn bay. Essentially the same paint marking design applied to the pavement. The only differences were in the vertical curve parameters. I had an example of each of 'sag', 'crest' and 'none'. The K values were quite large, however, meaning sight distances were good and I wouldn't have expected the learning to be impacted... what was painted on the ground was most important.

Where it appeared to learn the problem at one location, it couldn't apply it to either of the next two. As well, it didn't learn about additional locations on the same route that I had passed several times with me doing the steering... because the problem manifested itself at these locations when I tried AP after a half dozen or more trips with it off. Taking control thus seemed to teach it more than simply driving through multiple times manually.

A simple example is the classic handwritten character recognition problem. If you have a specific way of writing a 4 which isn't classified properly in the existing model, sending it one or two of your 4s with a label ("this is 4") isn't going to do much. However, if there are hundreds of people like you who write slightly similar 4s, and those are part of the labeled training set, it'll properly classify your 4 even without your examples. Compare that to a spot in your route where maybe the line is wavy or worn off at one spot. Surely in the AP collection space, there are other locations like this. Every time a driver completes their trajectory through, the model is getting more confident in that set of inputs. If you waited long enough, the car would work properly even if it hadn't ever seen that particular spot on your route.
It would be interesting to be able to set the fleet learning back to zero and design a few experiments to see all of this in action and improve over time. Too far in to do that now. I'd love to see a detailed technical explanation of the basic logic from Tesla!
 

But what about the High Resolution Maps? Are they not being built and expanded every day? Isn't that what Elon specifically said? Maybe not perfect learning, but learning routes nonetheless? Huge undertaking - most definitely, but getting done.
 

That is indeed what he said. Their data is part of the feature space, which includes the radar, ultrasonic sensors, camera, holistic path detection, and high precision maps (along with maybe some other stuff he didn't mention). So the mapping feature is given a much heavier weighting in the locations where the others aren't strong. He gave a specific example of a certain highway; I can't remember which one.

I still maintain that a single vehicle passing one location a couple of times would take a while to propagate successfully into the model, but perhaps they allow for much higher weighting on the mapping than I would expect.
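One way to picture that weighting is as a confidence-weighted average; the source names and numbers below are invented for illustration, not taken from anything Tesla has described.

```python
# Back-of-the-envelope sketch of the weighting idea: each input proposes a
# lateral offset for the lane centre along with a confidence, and the result is
# a confidence-weighted average. Where the camera is weak (worn paint, say),
# the high-precision map term dominates.

def fused_lane_offset(estimates):
    """estimates: list of (source_name, offset_m, confidence in 0..1)."""
    total_weight = sum(conf for _, _, conf in estimates)
    if total_weight == 0:
        raise ValueError("no usable inputs")
    return sum(offset * conf for _, offset, conf in estimates) / total_weight

print(fused_lane_offset([
    ("camera",        0.9, 0.2),   # faded paint: low confidence
    ("hd_map",        0.1, 0.8),   # strong map prior at this location
    ("holistic_path", 0.2, 0.5),   # path of the traffic ahead
]))
```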
 
I think that the maps he has mentioned are more about providing a baseline understanding of highways and byways in terms of how many lanes are going each way, which lane becomes an off-ramp, etc... to improve upon the simple single-line drawings we typically associate with maps. My comments about the difficulty of maintaining a database were intended to be about specific instructions to cars... like 'don't be fooled by the left turn bay at Northing/Easting X,Y'. In other words, specifics rather than trends. Or rally instructions rather than an actual map. After pondering Ohmman's comments, I expect my thoughts on this are mostly wrong!

Like the cars, I'm trying my best to learn and understand this as time and inputs accumulate! :biggrin: And after pondering Ohmman's comments, I can see that the training process would presumably cause the car to look at the high res maps and see that the highway goes straight through... that knowledge would be combined with the camera's input that the paint line heads left and (hopefully) the correct result of the data crunching would be to go straight through, ignoring the left turn bay. And I would assume that after a few people have manually driven the left turn bay and turning movement, that higher level of mapping detail would eventually also tell the car that the bay exists, not just that the main route goes straight. Or maybe a few cars would have to take the turn before the assumption of straight through is possible... I don't know.

Google Maps is pretty good at allowing users to add visible or invisible overlays of their mapping data, so I imagine the higher lane resolution exists invisibly to us at this point, but perfectly visible to the car. Data tidbits of all sorts could also be embedded that could elaborate on linework - think of this as a complex GIS that we see only as the Google Map interface. And in reality, I expect that the learned information is in fact being compiled into a massive GIS maintained by Tesla and tied to Google Maps.
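Purely as a guess at what such an embedded 'data tidbit' could look like, here is a hypothetical geo-referenced record in a GeoJSON-like form (not any real Tesla or Google schema):

```python
# A speculative example of a geo-referenced annotation layered on the base map.
lane_annotation = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-79.38, 43.65]},  # lon, lat (example only)
    "properties": {
        "kind": "left_turn_bay_start",
        "heading_deg": 270,
        "advice": "mainline continues straight; ignore leftward paint divergence",
        "observations": 42,          # how many manual drives confirmed it
    },
}
print(lane_annotation["properties"]["advice"])
```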

It wouldn't be a stretch to say that accelerometer information from cars could also be added to calculate a safe speed around a corner. Simply knowing the radius of the curve isn't adequate, as the superelevation plays a big role in what speed is workable. This could be geo-referenced data that can be pulled from the GIS. I have to wonder if the TACC speed changes people are reporting are somehow tied to this as a work in progress. Maybe one day temperature inputs will allow seasonal cornering speed adjustments for sub-zero conditions?
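The relationship being alluded to is the standard highway-design one, V^2 = 127 * R * (e + f) with V in km/h and R in metres, where e is the superelevation rate and f the side-friction factor; fleet lateral-acceleration data could in principle be used to back out the (e + f) term empirically. A small worked example with illustrative inputs:

```python
import math

# Worked example of the design-speed relationship: V^2 = 127 * R * (e + f),
# with V in km/h, R in metres, e = superelevation rate, f = side-friction factor.
# The inputs below are illustrative, not from any real curve.

def safe_corner_speed_kmh(radius_m: float, superelevation: float, side_friction: float) -> float:
    return math.sqrt(127.0 * radius_m * (superelevation + side_friction))

# A 300 m radius curve with 6% superelevation and f = 0.12 works out to ~83 km/h.
print(round(safe_corner_speed_kmh(300, 0.06, 0.12), 1))
```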

Since I don't see how the GPS can be accurate enough to 'follow a track' without RTK (real time kinematic) corrections, the hi res mapping would presumably still be the blueprint and the camera would have to provide the confirmation of which lane you're traveling in... I guess? Unless there was some way to filter the GPS data from multiple passes and create an approximation of how many lanes exist through the blur of tracks, the camera would have to also provide some suggestions of where the car was on the road, relative to other lanes, back to the GIS. Again, a complete guess on my part.
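As a toy illustration of that 'filter the blur of tracks' idea (synthetic numbers; real consumer-GPS error can easily exceed the lane spacing, which is exactly the accuracy concern raised above):

```python
# Histogram the lateral offsets of many recorded passes and count the peaks to
# guess the lane count. Entirely synthetic data.
import numpy as np

rng = np.random.default_rng(0)
# Two lanes centred 1.5 m and 5.5 m from the road edge, ~1 m of positional noise
offsets = np.concatenate([rng.normal(1.5, 1.0, 500), rng.normal(5.5, 1.0, 500)])

counts, _ = np.histogram(offsets, bins=np.arange(-2.0, 10.0, 1.0))
lanes = sum(
    1 for i in range(1, len(counts) - 1)
    if counts[i] > counts[i - 1] and counts[i] > counts[i + 1] and counts[i] > 80
)
print("estimated lane count:", lanes)   # should come out as 2 for this synthetic data
```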

edit: I see Ohmman also responded while I was typing...
 
Thought I would post this here, even though it is a bit off topic. But I think the Tesla auto-pilot rollout is responsible for this from Google.

Google puts out a monthly report on how their autonomous car testing is going. Typically the media only reports on accidents. This month, there were no accidents to report. The interesting thing about this monthly report is that Google lays out the case for why they believe they needed to wait for full level 3 autonomy and not try to commercialize level 2 autonomy like Tesla just did with the auto-pilot:

https://static.googleusercontent.com/media/www.google.com/en//selfdrivingcar/files/reports/report-1015.pdf

"In the end, our tests led us to our decision to develop vehicles that could drive themselves from point A to B, with no human intervention. (We were also persuaded by the opportunity to help everyone get around, not just people who can drive.) Everyone thinks getting a car to drive itself is hard. It is. But we suspect it’s probably just as hard to get people to pay attention when they’re bored or tired and the technology is saying “don’t worry, I’ve got this...for now.”"

This is a basic philosophical difference between the companies. Looks like Tesla believes the handoff back to human drivers is a viable way to proceed, for those willing to accept the consequences. They are selling cars, and trying to lead the way. Google is doing research, so maybe isn't quite so motivated by the "urgency" of competition? Maybe the end game for Google is to sell/license the level 3 software to companies that don't want the hassle of designing and developing the code base, but would rather just build the hardware and let Google install the OS so to speak.

With the way that technology is advancing now, whoever is providing the car's software is really in the "driver's seat" so to speak. The hardware manufacturers may become less relevant over time. Tesla is doing both :smile:

RT
 

Thanks, good post. I think you're right in that the Google project is more of a research project taking place inside of a commercial entity. Tesla has a reason to push AP out in order to continue to stay ahead of the competition, and sell cars. It can be argued that Tesla should gain quite a bit by getting the cars and the technology out there - the data collection, as we've mentioned, is enormous. I suppose the counterargument might be that Tesla could lose focus on full autonomous driving because of their reliance on a human driver, though that argument seems weak.

I don't necessarily think I could make a choice on who is taking the "proper" path. In fact, they are really two different paths. Google's car has been stated to be intended for slower fully autonomous travel in cities. Tesla's working on a more long-distance solution, with the goal of covering everything eventually.
 
I have evidence that implies at least some learning occurs by training individual cars rather than the fleet, and that some learning occurs while AP is on, not while it is disengaged and comparing against what the driver does. If correct, these findings are the opposite of how I had thought it worked.

Quite a while ago my car learned to handle a couple of trouble spots going west on Route 50. I had not done much experimentation going east on the same road, partly because the spots that look like they would be trouble for AP are worse. I finally decided to tackle a spot going east where the two lane road expands to four by swerving right to sidestep the median while gaining a new lane. Trying it for the first time two days ago, the car drove like a drunkard. This morning the car correctly swerved all the way into what would become the new right lane: a perfect job.

The point is that AP failed when I first tried it going east last Friday, but has been handling the trouble spots going west on the same road perfectly for the last two weeks. Yet I drive Route 50 both ways on weekdays. If AP was learning by comparing its predictions against my driving when it is not engaged, then it should have gotten the new spot right the first time through, since I had already provided as big a training set with AP disengaged going east as I have going west. But it did not begin to learn going east until I engaged AP going east.

Also, mine is not the only Tesla driven in the Middleburg area. If the fleet had been laying down experience, then AP should not have failed going east last Friday, only to get it right today. Last Friday, AP should have performed similarly going east as it does going west.

Go figure.
 
Nice post. I think that, in the end, Google has been forced by business considerations rather than technology or safety to go in the direction they are going.

From a business point of view, if they tried to commercialize their system, they would be competing with MobileEye, Bosch, Delphi, and others-- does Google really want to be in the low margin auto parts business? And would they even be successful in that space? The OEMs have already largely made their choices, and they didn't choose Google, so Google would have been, at best, a second-tier provider.

Thus, Google needed to upend the entire space. They couldn't do this by building an electric sedan with autonomous features because Tesla is already there. The only option was to take the next step and use the fully autonomous 'elevator' model in cars of their own design.

Furthermore, their system seems to be heavily map based, as opposed to all the other OEMs using the "see and react" model, influenced by maps. In my opinion, the MobileEye "see and react" model is more flexible and scalable, but again, Google was already locked into a different approach.

"In the end, our tests led us to our decision to develop vehicles that could drive themselves from point A to B, with no human intervention. (We were also persuaded by the opportunity to help everyone get around, not just people who can drive.) Everyone thinks getting a car to drive itself is hard. It is. But we suspect it’s probably just as hard to get people to pay attention when they’re bored or tired and the technology is saying “don’t worry, I’ve got this...for now.”"

This is a basic philosophical difference between the companies. Looks like Tesla believes the handoff back to human drivers is a viable way to proceed, for those willing to accept the consequences. They are selling cars, and trying to lead the way. Google is doing research, so maybe isn't quite so motivated by the "urgency" of competition? Maybe the end game for Google is to sell/license the level 3 software to companies that don't want the hassle of designing and developing the code base, but would rather just build the hardware and let Google install the OS so to speak.

With the way that technology is advancing now, whomever is providing the cars software is really in the "drivers seat" so to speak. The hardware manufacturers may become less relevant over time. Tesla is doing both :smile:

RT
 
The point is that AP failed when I first tried it going east last Friday, but has been handling the trouble spots going west on the same road perfectly for the last two weeks. Yet I drive Route 50 both ways on weekdays. If AP was learning by comparing its predictions against my driving when it is not engaged, then it should have gotten the new spot right the first time through, since I had already provided as big a training set with AP disengaged going east as I have going west. But it did not begin to learn going east until I engaged AP going east.
This is about what I've been thinking/guessing based on my experiences. Except that it eventually seems to 'unlearn' some improvements too (at least in my case)... which I don't like and can't explain. I did my somewhat-regular 230 km return trip today and it behaved well in areas that had recently been a problem, but completely blew it in the spot I've been using as a benchmark (where it had been doing well since the initial failures). I don't know what to take from that.

I've noted also that recent trouble areas had been sections of road I'd driven without AP engaged. The system had omnisciently observed my driving pattern several times before I drove through with it on, but it failed when engaged. And it now seems to behave properly since those fails. I'll watch to see if it reverts in these spots too.

I saw a white Model S going the other way this afternoon - the first I've seen on this highway in a month or more. I wonder how his car behaved...!
 
I too have long thought that Google isn't interested in making hardware (cars) so much as owning the OS for that hardware. (An Uber-like experience with ads by Google on the screen?) And I don't think it matters whether car manufacturers currently use one company or another - 100% of autonomous production cars have yet to be designed - Google has a better chance than most anyone to capture the market.
 

From a business point of view, it's hard to advertise to someone who's driving.
 
Very odd. After giving the autopilot a little over a week off, it has unlearned what to do in places it worked fine before. This morning it failed, and I had to grab the wheel and disengage, in three of four tricky places on Route 50 where it had been working consistently earlier. Traffic was non-existent and lighting was challenging -- stripey tree shadows -- but it handled these same conditions well in the past. Nor have I installed a new build lately. Strange.
 
Same thing happening for me ....


We are road tripping through New Mexico and West Texas and yesterday I had the car (mostly) ignoring freeway exits. But today it seems to be darting for them most of the time. :confused:
 

As of this morning, my car has darted toward one particular exit (right turn lane) three days in a row. So my guesses are: 1) there was an algorithm update, not delivered as a new build, which needs to re-learn from scratch; or 2) (less likely) this is their misguided way of restricting where and when AP is used. Actually, neither of these guesses really sounds right.
 
Well, I'm going to remove myself from the "autopilot is already improving" camp. I've found the whole darting towards exit ramps thing to be completely predictable now that I believe I understand why it does it in the first place. And thinking back, this behavior hasn't changed at all.

You're coming up on an exit ramp: no cars in front, clear visibility ahead. The ramp begins by pushing the solid white line on the right out to the right to follow the ramp, and adds a dotted or dashed line along the path where the highway's solid white line would continue, up until the actual solid white line resumes past the exit ramp entrance.

With me so far?

Now, if the car has enough forward camera visibility to fully see where the solid white line continues past the ramp it won't dart towards the ramp. This is also true if the solid white line on the right turns into a dashed white line that closely matches the dashed white line to the left of the car immediately or almost immediately after the solid white line follows the exit.

Why? When the solid white line veers to the right to follow the exit, if the car can see that the line continues on the current path not too far ahead, it ignores the break. If it can't, it appears to believe that the lane has widened and it starts to try and center itself in the new widened lane, up until it sees the solid white line continue ahead and will dart back towards the original center.

You can test this by driving at night past an exit where the car can't quite see the reappearance of the solid white line with the low beam headlights, but where it is easily visible with high beams. I did this last night a few times and was able to predict with 100% accuracy when the car would go towards the exit and when it would not, based on my own visibility of the reappearance of the solid white line and on looking at the dash to estimate how far ahead the car was able to see. At a car length or two before the beginning of the exit ramp, if I can't clearly see the solid white line returning, the car heads for the exit and the on-screen lane appears to briefly widen when it does, confirming my theory that it just thinks the lane widened and it needs to recenter. If I had my high beams on, and the solid line returning was clearly visible, the car wouldn't dart over.
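Reconstructing that theory as a sketch (a guess at the behaviour being described, not Tesla's actual logic):

```python
# If the right boundary steps outward and the camera can't yet see it come back,
# the perceived lane widens and the steering target (the lane centre) shifts
# right: the "dart". All values are nominal placeholders.

def steering_target(left_edge_m: float, right_edge_m: float,
                    right_line_resumes_within_view: bool) -> float:
    """Return the lateral target (metres from the left edge) the car would aim for."""
    if right_line_resumes_within_view:
        # The break at the ramp is ignored; keep centring on the original lane.
        lane_width = 3.7                      # nominal highway lane width
    else:
        # Lane appears to have widened to include the ramp mouth.
        lane_width = right_edge_m - left_edge_m
    return lane_width / 2.0

print(steering_target(0.0, 7.4, right_line_resumes_within_view=True))   # 1.85: holds the lane
print(steering_target(0.0, 7.4, right_line_resumes_within_view=False))  # 3.7: darts right
```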

This is also repeatable where a right-hand exit begins inside a left bend. The camera can never see far enough ahead to see the solid line return, so it always darts for the exit.

Keep in mind that the car doesn't appear to immediately recognize new lane markers, and this appears to be part of the reason that the car doesn't immediately recognize the exit ramp as a lane when it begins with the solid white line moving over and a dotted line that doesn't match the rest of the lane markers. This is also why when passing an on-ramp, where the solid line just becomes a dashed line, the car never tries to jump over because the original marking never moved, just changed form (exactly like in the end of Tesla's 2014 demo).

This is all with no cars ahead of you within range of the forward radar. None of this applies when the car can see other vehicles ahead.

The exception to all of this is when the car has tracked *any* other car ahead driving past the exit in either the right lane (assuming you're in the right lane) OR the adjacent left lane. If a car not far ahead of you in your lane or an adjacent lane skips the exit, the car is less likely to immediately try and recenter (dart to the exit) and will wait a short while longer for the solid line to reappear before it does, if at all. I confirmed this behavior a few times last night as well. It appears to assume that if other vehicles are not heading in the direction it thinks it should go, then it shouldn't go that way immediately either. If a car ahead in the right lane takes the exit, then the above no-cars-around rules appear to apply, especially if the other vehicle taking the exit obscures the reappearance of the solid white line.
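Extending the same guess with the lead-vehicle observation above, again with invented thresholds:

```python
# A tracked car passing the exit in your lane or the adjacent one buys the
# system extra time before it recentres. Entirely speculative values.

def should_recenter(right_line_resumes_within_view: bool,
                    seconds_since_line_break: float,
                    lead_vehicle_skipped_exit: bool) -> bool:
    if right_line_resumes_within_view:
        return False                      # no apparent widening at all
    patience = 2.0 if lead_vehicle_skipped_exit else 0.5   # wait longer if others went straight
    return seconds_since_line_break > patience

print(should_recenter(False, 1.0, lead_vehicle_skipped_exit=False))  # True: darts
print(should_recenter(False, 1.0, lead_vehicle_skipped_exit=True))   # False: holds course
```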

So, that's my conclusion. I think the improvements people think they're seeing more likely just have to do with varying visibility along with varying traffic conditions ahead. There may be some subtle improvements, but this (darting to exits) doesn't appear to be one of them.

After well over 1,000 miles of autopilot driving, I've found the system to be completely predictable.
 