Software Update 2018.18

Yep, my car has been weird since I got 2018.18. I get a "system powering up" message several times a day. The center screen freezes, and a couple of times it has stayed black until I reboot it.

The LTE connection is also lost until I reboot.
Autopilot on .18 is very good with confidence and lane keeping. They replaced my MCU yesterday because connectivity was lost on WiFi and LTE for 3 days, and they said something looked awry on the tests. Since then I have the new AP firmware and it is a nice improvement, but I've still had a few freezes and issues connecting with the nav, plus intermittent loss of the blinker sound and the Autopilot on/off chimes. It seems likely to be bugs related to maps if you ask me; that's when my car started struggling. I'd be curious to know if HW 2.5 is having this problem.
 
A couple of days ago I got an error message I had never seen before. While I was still driving, the center screen froze and the message said something like "screen unresponsive, turn both scroll wheels until the screen reboots". I had to fiddle with the scroll wheels for a good minute before it booted back up. Pressing both scroll wheels at the same time did not do anything.


AP2 S75 2018.18.
 
I find that this release is a step back in many ways from 2018.12. It does some things better (shows vehicle angles better in curves, shows the curves themselves better), but it is mostly unsteady and prone to jumpy movements toward exits or exit-only lanes.
 

It's so odd how we see different things with different cars or in different places.

I was just about to rave about 2018.18. It handled both the aggressively curved mountain roads on I-80 near Lake Tahoe and the irregular lane lines on a 2-lane highway, CA-267. Both of these gave 2018.10.4 a ton of trouble, and 2018.12 a bit of trouble.

For me, this release is noticeably better than anything before it.


EDIT: I actually had my passenger record a bit of video while I was driving. I'll have to ask for it to see if it's any good... but we were both floored by how well Autosteer was doing. There were basically zero highway interventions for the trip, except to make exits and certain lane changes.
 

Definitely more disengagements for me. It's handling curves better, but it still does this slithering in stop-and-go, bumper-to-bumper traffic. Way better than the lock-to-lock wheel action from 2018.12 (sometimes), but it's not great either. I also notice more wandering on "straight" lines again, which was mostly tamped down with 2018.10.4.

I'm looking forward to another release. Hopefully with some real improvements and utilization of the AP2 hardware.
 
Certain lane changes? Did it make some of the lane changes without your input or intervention?
I mean, auto lane change worked as well as I've seen it work.

To clarify, I was trying to compactly describe that I took over for lane changes:

- When I wanted to make an aggressive passing maneuver that is outside of the realm of Autopilot's driving capability.
- When I was making lane changes on a road that disallowed auto lane changes.
 
This is correct, the NN is unchanged. I also agree it is showing curves better and trying to show cars within those curves, but still failing pretty badly at that. I also think it's showing further out into the distance than before, and braking sooner for stopped cars. However, for me it's also swerving a lot more inside intersections and such.

Side repeaters are still at 0% calibration. I would assume one day they are going to calibrate, but what the heck do I know; they haven't calibrated in many months, so maybe Tesla is just teasing us: another three months maybe, but definitely six...

As for the IC, I really have no clue why they are not showing this. It's maddening to me: they have all the data, it's all on the CAN bus, and they could clearly show it if they wanted to. Why they choose not to is beyond me. You can see it here:

@jimmy_d, you are on deck.

Re: classifying vehicles - don't know for sure from looking at the networks or what I can easily see in the vision binary. The camera networks themselves are putting stuff they see into at most a half dozen classes (4 to 6 depending on the camera). Some of those could be various classes of vehicles but there's not room for too many when you probably need to have classes for signage, pavement boundaries, and non-vehicle obstacles of various types.

The post-processing networks do more work after that and might well be generating vehicle classes internally, but their consuming function names seem to be looking mainly at lane configurations (is that a fork in the road? is this a lane split? is that an exit?), assigning vehicles it 'sees' to various lanes, and especially identifying which lane the vehicle is currently using. There is a post-processing network that looks like it identifies moving objects (distinguishing some of them as "parked" moving objects) in addition to traffic signals, signs, roadway, and landmarks, but I don't see any indication that it is distinguishing between vehicle types.
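
To make that concrete, here is a toy sketch (not anything extracted from the actual binaries) of the kind of lane-assignment step those function names hint at: take each detection's lateral offset from the ego path and bucket it into a lane index, with 0 meaning the ego lane. The lane width and the detections below are invented for the example.

```python
# Toy lane-assignment sketch; lane width and detections are made-up values,
# not anything recovered from the vision binary.
LANE_WIDTH_M = 3.7  # assumed typical highway lane width

def lane_index(lateral_offset_m):
    """Map a lateral offset (m, + = left of the ego path) to a lane index (0 = ego lane)."""
    return round(lateral_offset_m / LANE_WIDTH_M)

detections = [
    {"id": 1, "lateral_m": 0.4},   # roughly straight ahead
    {"id": 2, "lateral_m": -3.5},  # about one lane to the right
    {"id": 3, "lateral_m": 7.2},   # about two lanes to the left
]

for det in detections:
    idx = lane_index(det["lateral_m"])
    print(det["id"], "ego lane" if idx == 0 else f"lane {idx:+d}")
```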

When thinking about the NNs it's worth keeping in mind that we now see that most of the NNs are not present in the filesystem but rather are compiled directly into the vision binary, and we don't know what the role of the redundant external networks is or which ones are being used at any particular time. As of 10.4 the vision binary included a complete suite of camera networks plus as many as 3 post-processing networks. From 18.10.4 to 18.12 to 18.14 the vision binary changed in only very minor ways. Because the sizes of the embedded network binaries didn't change, we can infer that any network changes would only be fine tuning and not the introduction of any significant new features. I have not had a look at 18.18 yet.
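
For anyone curious how that kind of comparison might be done, here is a minimal sketch of diffing two firmware dumps by file size and hash to spot anything beyond fine-tuning. The directory layout and file extensions are assumptions for illustration, not the actual structure of the ape filesystem.

```python
# Hypothetical firmware-diff sketch; paths and extensions are assumptions.
import hashlib
from pathlib import Path

def summarize(dump_dir):
    """Return {relative_path: (size_bytes, sha256)} for binaries and network files."""
    out = {}
    for p in Path(dump_dir).rglob("*"):
        if p.is_file() and p.suffix in {".bin", ".prototxt", ".caffemodel"}:
            data = p.read_bytes()
            out[str(p.relative_to(dump_dir))] = (len(data), hashlib.sha256(data).hexdigest())
    return out

def diff(old, new):
    for name in sorted(set(old) | set(new)):
        o, n = old.get(name), new.get(name)
        if o is None:
            print(f"added:   {name} ({n[0]} bytes)")
        elif n is None:
            print(f"removed: {name}")
        elif o != n:
            print(f"changed: {name} ({o[0]} -> {n[0]} bytes)")

# diff(summarize("fw_2018.14"), summarize("fw_2018.18"))
```

An unchanged size and hash means nothing changed at all; an unchanged size with a changed hash is consistent with the fine-tuning-only reading above.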

On calibration:
At the moment there doesn't seem to be anyone with ape root access to tell us what cameras are being activated. Previous evidence shows cameras that we know are being used also having their status set to calibrated, but it's not out of the question that the pillars/repeaters could be used even without calibration since their function is quite different. Calibration for the main/narrow may be primarily there to enable stereo depth estimation from vision - something which is specific to those two cameras. (Does the fisheye get calibration?) What we know about when recalibration occurs seems to support the notion that the stereo alignment of the forward cameras is part of what's happening there, and that kind of calibration is not going to be needed for the other cameras. Lens figure calibration (the hardest calibration that usually gets applied to these applications) may not be necessary for what the non-stereo pair cameras are doing right now. And alignment calibration for the pillars and repeaters might not be needed, or it could be skipped if their raw FOV overlaps with the edge of the vehicle at any point - the vehicle body itself could provide a long baseline reference for precisely aligning the camera without having to go through a more conventional alignment process.
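
As a rough illustration of why the main/narrow stereo alignment would need its own careful calibration, here is the textbook depth-from-disparity relation for a rectified camera pair. The focal length and baseline below are invented numbers, not Tesla's, and the real main/narrow pair would need extra handling for their different fields of view.

```python
# Depth from disparity for an idealized rectified stereo pair (illustrative numbers only).
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in meters of a point matched across both cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed 1200 px focal length and a 0.1 m baseline between the windshield cameras:
print(depth_from_disparity(6, focal_px=1200, baseline_m=0.1))  # 20.0 m
# A single pixel of alignment error at that range already swings the estimate a lot:
print(depth_from_disparity(5, 1200, 0.1), depth_from_disparity(7, 1200, 0.1))  # 24.0 m, ~17.1 m
```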

I have a pet theory that the pillars and repeaters are being used to support lane changes post 18.10.4 and that this accounts for why the lane change process is so different now compared to pre-10.4, but I don't have any good evidence to support that thesis.

disclaimer: all of this is just speculation. I don't even know for sure that my source is giving me the real software, when it comes right down to it. It would be a pretty elaborate deception if I was getting fakes, but it's not impossible.
 
I've had my first phantom braking again today, in a known spot where I hadn't had any for about 8 months, since 2017.42. Before that they were frequent there: a tunnel exit with metal signage along the wall. Super annoying, as it was a hard brake from only about 30 mph.
18.18 is really no highlight for me.
 

I wish it did a better job with pedestrians. It will generally slow down for a group of pedestrians but seldom for individuals. I suspect it is mainly using radar for this, as I'd assume the cameras would have an easier time with individuals vs. groups of people. Maybe not... I apologize to all my unsuspecting guinea pigs lol.
 
Since we finally got our auto-wipers, and updated maps have rolled out to most of us, we need a new cause to keep us together… otherwise TMC may shut down.

How about an option to unfold the mirrors only if a butt is in the seat? Should take about 5 minutes of coding; hoping for it in v2019.x?
That's how Summon works: when you put the car in drive, the mirrors then fold out.
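
For fun, a purely hypothetical sketch of the requested tweak; these signal names are invented and are not Tesla's actual firmware interfaces:

```python
# Hypothetical mirror-unfold gating; gear and seat-occupancy signal names are made up.
def should_unfold_mirrors(gear, driver_seat_occupied, mirrors_folded):
    # Current behavior (per the reply above): unfold as soon as the car goes into drive.
    # Requested behavior: additionally require someone in the driver's seat.
    return mirrors_folded and gear == "D" and driver_seat_occupied
```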
 
I wish it did a better job with pedestrians. It will generally slow down for a group of pedestrians but seldom for individuals. I suspect it is mainly using radar for this, as I'd assume the cameras would have an easier time with individuals vs. groups of people. Maybe not... I apologize to all my unsuspecting guinea pigs lol.

I find that at really low speeds, like 20 mph, if a pedestrian is in the middle of your field of view, it somewhat reliably stops for them. But this is frightening to test in the real world. It definitely does not seem ready for prime time. Stopping for pedestrians is less reliable than stopping for stopped cars.
 
I checked my iPhone this morning, and someone must have swapped my S90D battery with an S100D battery while I slept last night.
The IC showed that my S90D, at a 90% charge level, charged to 327 miles. It should have charged to 260 miles. This has never happened before, so it must be the 2018.18 firmware causing the issue.

upload_2018-5-15_12-30-26.png
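
For what it's worth, the arithmetic alone says this has to be a display or firmware glitch rather than a real capacity change; the rated-range figures in the comments below are approximate, not official numbers.

```python
# Quick sanity check on the reported numbers (rated ranges are approximate).
expected_at_90pct = 260   # miles the car normally shows at 90%
shown_at_90pct = 327      # miles shown after 2018.18
print(expected_at_90pct / 0.9)  # ~289 mi implied full range, about right for an S90D
print(shown_at_90pct / 0.9)     # ~363 mi implied full range, beyond even a 100D (~335 mi EPA)
```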