Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Any human relationship is irrelevant to this discussion as I am not trying to say that Tesla should act more like a human in this case.

Yes, Tesla uses map data, but that map data may be incorrect, or the speed limit portion of it may be missing entirely.

Yes, the car uses vision data for all kinds of things; it is the main source of data for the driving capability. I never said that Tesla cannot see something. It sees everything, but the code determines what it can recognize and what it chooses to display.

Whether you are correct or not, and whatever "common sense" may seem to indicate, you are making assumptions not based on data. In your case of a sign sometimes being followed and sometimes not, you can probably infer that if there were map data, the car would follow that speed change every time; I am mostly OK with making that inference. I am less willing, however, to infer that because it doesn't always follow the sign, it must be using visual data ONLY. This goes back to my comments about the possibility of the car making assumptions about road type in the absence of map data. If those assumptions aren't hard-coded, and many things at Tesla are not because it uses NNs to analyze the dynamic scenario, you can get different answers at different times because of some variable that wasn't appropriately learned by the NN.

You even seem to acknowledge unknown variables with your scenario. Even if we say, sure, it MUST be using visual data ONLY... then why isn't it following that same speed limit sign EVERY time? What is the variable preventing it from reading that sign every time? THIS is my biggest question: what variable or variables allow or prevent the car from reading speed limit signs? You yourself have a sign that it doesn't always follow; I have 3 known signs that it NEVER follows or displays... WHY? Answering that question could let us prove or disprove the visual sign-reading capability, or at least better characterize it.

There are a lot of variables to look at here, and a lot of "you don't know what you don't know." I have run lots of tests where I thought I had all the variables figured out in the lab, but then out in the real world things didn't work right because of something that either wasn't related enough to be "on the radar" during test planning, or was assumed to be a non-relevant variable.
Sigh - and you fail to give plausible explanations or acknowledge the data you claim doesn’t exist. Can’t argue with a lack of logic.
 
I think you are mixing me up with someone else who has a set-in-stone position. I am perfectly willing to look at new data and any well-written-out theories about the subject of visually READING, visualizing, AND reacting to/applying speed limit signs in non-AP1 (Mobileye) cars. Plausible explanations for what? And what data do I claim doesn't exist? I don't know if you are misreading or misinterpreting what I am saying. You yourself are making statements and offering opinions while at the same time saying that you have not looked at the visualization aspect on your own display in your own scenario! I am perfectly willing to bring in new data and reassess my positions based on it.

I have not done my own test, but there was a test, one I commented on extensively, that was pretty good. The results were kind of all over the place, which was actually great, because some of them contradicted assumptions that might have been made from looking only at other results in the same test... too many unknown variables.

Here is the LINK to the post where I analyzed the video I have been referencing; the video is in there too. I have not actually gone back and reread my comments, so some wording and opinion statements may have changed since then, but take a look at it and see what you think.

The test in the video is a pretty good start, and I would definitely adjust some things and make it a bit more comprehensive. The other thing would be to run it in multiple locations, as well as on a closed actual road, preferably one with no map speed limit data (easier said than done).

I also think the referenced video is, at this time, just about the best testing data set on this subject out there. It was a well-executed test based on a test plan written in advance.

Priority of data source is a big issue too. I think visual speed limit sign data should take priority (with safety logic in place to prevent gross speed changes from fraudulent signage)... I don't believe that is the case today, and I don't even know whether true priority logic exists in the code right now.
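To make the idea concrete, here is a sketch of what priority logic with a sanity check could look like. This is purely my own illustration, nothing here reflects Tesla's actual code, and the 20 mph plausibility threshold is an arbitrary assumption:

```python
def resolve_speed_limit(vision_limit, map_limit, max_jump=20):
    """Hypothetical priority logic: prefer the sign the camera read,
    but reject gross changes that could come from fraudulent signage.
    max_jump is an assumed plausibility threshold in mph."""
    if vision_limit is None:
        return map_limit  # no sign read; fall back to map data
    if map_limit is not None and abs(vision_limit - map_limit) > max_jump:
        return map_limit  # implausible jump vs. map data; ignore the sign
    return vision_limit

# A read sign wins when it roughly agrees with the map...
assert resolve_speed_limit(55, 45) == 55
# ...but a 90 mph sign on a mapped 25 mph street is ignored.
assert resolve_speed_limit(90, 25) == 25
# With no sign visible, the map limit is used.
assert resolve_speed_limit(None, 40) == 40
```

The point of the sketch is just that "vision takes priority" and "protect against fraudulent signage" can coexist in a few lines of logic; whether anything like this exists in the car is exactly the open question.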
 
How Tesla sets the speed has also changed across software versions. In the past it was primarily map data supplemented by vision. As an example, there was a road where the speed limit increased from 35 to 55 as it left a town. The car would see the 55 MPH sign and start to accelerate, then suddenly drop the speed back down to 35 for no apparent reason. This happened every time, and I also noted that the computer would set the speed to 35 when turning onto that section of road at another location. Clearly the map data was incorrect, and the car was overriding the sign it saw with the map data.

With the latest software version it no longer drops down to 35 after seeing the 55 sign, but if I turn onto the road it will set the limit to 35, not 55. I've also noticed a significant increase in cases where the speed limit changes but the car doesn't seem to see the sign, so it continues at the last known limit. Essentially, it seems to have changed how it prioritizes speed limit data between maps and vision.

Here is a plausible explanation: conflicting speed limit data within the same mapping dataset. Let's take OSM data as an example. I am going to use it as a surrogate for whatever mapping data Tesla uses, because it is reasonable to assume some similarities in functionality.

In OSM, you can tag a road or road segment with a specific speed limit. Presumably, when there is a speed limit sign whose location you know, you can change the speed at that point on the road, so that when the car passes the (essentially geotagged) sign, the car changes the speed limit. If the segments past the new 55 mph segment don't get updated, the car will revert to whatever speed limit is assigned to the next segment.
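The segment-based behavior described above can be sketched with a toy lookup. The segments and values below are hypothetical illustrations, not actual OSM or Tesla data:

```python
# Toy model of per-segment speed limit tagging, OSM-style.
# Each segment starts at a mile marker and carries its own limit.
segments = [
    {"start_mile": 0.0, "maxspeed": 35},  # in-town segment
    {"start_mile": 1.0, "maxspeed": 55},  # segment updated after the 55 sign
    {"start_mile": 1.5, "maxspeed": 35},  # stale segment that was never updated
]

def map_speed_limit(position_mile):
    """Return the limit of the last segment whose start we have passed."""
    limit = None
    for seg in segments:  # segments are in order of start_mile
        if position_mile >= seg["start_mile"]:
            limit = seg["maxspeed"]
    return limit

# Passing the geotagged sign raises the limit to 55...
assert map_speed_limit(1.2) == 55
# ...but the next, un-updated segment drops it right back to 35.
assert map_speed_limit(1.6) == 35
```

This is exactly the "drops back to 35 after accelerating" pattern: map data alone, with one stale segment, reproduces it with no vision involved at all.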

So the car going from 35 to 55 mph as it passes a sign could be explained by either map data or visual data. The car dropping back down to 35 mph could be explained by map data or some other unknown reason.

And your turning onto the road and getting 35 mph could be explained by map data or an unknown reason.

So we can't prove visual data use here, we can't disprove it either, but which is more plausible if we want to guess? Maybe our opinions differ on that, but based on knowledge of how OSM data works, I think conflicting mapping data within that dataset is more plausible.
 
Cover the sign. See what happens.

I mean, other than potentially getting arrested.

Exactly. I just don't have a good location right now to try and do that test...and it seems that no one else has either...unless someone out there has a video....

I think the real reason I haven't tried this is that I have 3 signs I pass daily that do not show up on the visualization... so @sleepydoc, what is your plausible explanation for that?
 
I don't think anyone is confusing you with someone else. You are the only one here claiming there isn't sufficient evidence that Tesla reads speed limit signs at all, despite people posting evidence to the contrary. To be clear, no one here is claiming Tesla is ONLY reading the sign; it is self-evident there is other data, given that the car has speed limits even without a posted sign.

Evidence refuting your position:
1) Tesla's own wording is quite clear that they have the ability to use the cameras specifically (not only map data, as previously):
"Speed Assist now leverages your car’s cameras to detect speed limit signs to improve the accuracy of speed limit data on local roads. Detected speed limit signs will be displayed in the driving visualization and used to set the associated Speed Limit Warning."

Video evidence:
You dismissed other examples, but here's one I found with just a couple of seconds of Googling.
The first few examples, where it shows the visualization of existing signs and the posted limit, I know you will just dismiss as possible map data, based on your past arguments.

But at 5:24 he holds up a 45 mph speed limit sign in a residential zone (presumably 25 mph) where no preexisting sign exists. Not only does the car visualize it at that location with a pole, it also changes the detected limit to 45 mph. He tries the same spot with a 40 mph sign and it changes to 40 mph. With both signs next to each other it shows 45 mph (he speculates the sign closer to the curb takes precedence). How do you explain that other than the camera reading it? The car does have memory of the sign: it showed the same 45 mph sign when he drove back even though no one was holding up a sign, but in the comments he noted it had disappeared by the next day.
 

Perfect! You did a better job of searching than I did months ago when I was looking for videos. I may have had less video-watching stamina, though; I got tired of the huge number of videos that just pass an existing sign and say SEE!!!

I can't dismiss that video, and it has plenty of information to prevent me or anyone else from bringing in what-ifs. The map on the display clearly shows a residential street adjacent to another small residential street (which rules out other possible issues), and the person used two different speed limit signs to show a change.

The examples I have dismissed in the past were dismissed with cause and reasonably good reasons.

...I also like being a little more scientific and not using a manufacturer's statements as "proof."

I never said anyone was claiming Tesla is ONLY reading signs.

Thanks for that perfect video! See, given enough of the right kind of data, I am perfectly willing to change my position. I maintained my position because all of the evidence I had found, and all that was presented to me up till now, was not complete enough to counter my theory. Manufacturer statements and news articles about those statements are things to be proven or disproven, not factual data points in themselves.
 
My opinion can change... and my wording may not be the best at all times. What I am saying is that without knowing more variables, AND based on the testing scenarios and videos known to me at the time, we couldn't prove whether the car can read signs, or even that it can sometimes read them. In the videos I had seen there was so much variability in the results that you end up in the "what am I missing" category: either you can't explain the variance, or the variance was such that it seemed almost random.
You are using a lot of words to say nothing.

Clearly, at least sometimes, FSD can read signs, as I showed you. There are hundreds of drive videos, every one of which will show dozens of speed limit signs being read and visualized.
 
Hmmm, almost 400 posts. My takeaway is an even greater need for dumb cruise control.
Heh. Long threads are indications of contentiousness, not consensus.

I can't see Tesla building a dumb cruise control given that the robotaxi is one of Elon's crusades, and it would just be another feature that they'd have to implement, document, support and maintain. Tesla has been almost solely focused on autonomy, dumping anything that interferes with that goal. If anything, this Large Language Model (LLM) flareup has probably given the Tesla engineers some new distractions, making something as mundane as an old-school cruise control even less likely.
 
On a reconfigured divided highway near me, they took out the cross traffic at both ends, extended the divided part, and raised the limit to 70 mph for about a mile at each end (it was 55 or 65).

New config has been open about a year, more or less.

It still won't see the very large, clear 70 mph signs at both ends as you approach the new sections. It does appear to see the old existing signs a mile later, which used to mark the transition from the lower speeds to 70.

By "see" I mean it shows the sign on the screen, with the new speed on it, and changes the speed limit shown as you pass it.

The new signs are neither shown on the screen nor adjusted for.

The maps appear to be updated: the divided highway is shown correctly split. But navigation follows the old road, putting the blue line on top of what is now a very wide median (up to 200 ft wide in some places).

Does not seem to be able to read a speed limit sign without some other data to confirm it.
 
So true, and I completely agree: Tesla will never allow a dumb cruise control option anymore. At least one of us would really, REALLY like it, though, as an alternative to the problematic TACC in my car.
 
Hi guys,

Just wanted to share an experience that I've had today so that others looking at Teslas to use for long distance driving can make an informed decision.

My wife and I drove our 2023 MYLR from Tucson to Las Vegas. It's a ~7 hour drive (8.5 with charging stops, ~400 miles), mostly on an open desert highway. Traffic is minimal, and driving an ICE car on cruise control is easy. Most of the drive is going straight on a highway. We've done it dozens of times, since we travel between the two spots often, which is a reason we got a Tesla.

The car is 6 days old and running the latest software. During our trip we experienced 19 phantom braking incidents, where the car decided to brake at highway speeds for no reason. In all cases there were no cars or obstructions in the way, and this occurred at various stretches of the trip. The braking was very aggressive.

After the first few phantom braking events we started disabling various "autopilot" features, such as emergency braking, etc. In the end, nothing made a difference, and the phantom braking occurred even on regular "cruise control" (one pull down) with all other features disabled.

To summarize, the experience was unpleasant and dangerous. If at any time during a phantom braking event there had been a car following us closely, there would have been an accident. I do not feel safe operating this vehicle with any type of "autopilot" feature, because it's unsafe and behaves erratically.

I know people will say that this is all "beta" and "experimental" and that I should always be ready to take over, and of course that part is correct. But when the car brakes suddenly at highway speeds for no reason, "taking over" is difficult, especially if this behavior causes an accident. Furthermore, it's 2022, and even the simplest of vehicles offer a cruise control that doesn't slam its brakes on the highway.

I'd be curious to know if others have the same issue. I feel like this is a SERIOUS safety problem, and now I am very wary of my Tesla.

Luca
Yup, same issue and it was so bad I reported it to our Transportation Safety Agency. Almost got rear ended.
 
I have been thinking about this. There seems to be a consensus that most drivers who experience automatic slowdowns on 2-lane roads, with turns and hills or without, have to disengage or stop using TACC altogether. Essentially this is a loss of functionality that has been available in vehicles for decades. I can think of instances in older vehicles I have driven where I used "dumb" cruise control on two-lane roads, even those with major elevation changes and turns - the Grapevine section of I-5, for example (although that is divided and 3 lanes wide).
Short of implementing a totally "dumb," traffic-unaware CC, and unless TACC can be perfected for these types of road situations, it would be reasonable to implement, at the very least, a "two-lane road" mode for TACC that ignores objects in the oncoming lane as much as possible, or at least restricts focus to a limited distance directly in front of the vehicle. It's all about marketing this "slightly less smart TACC" to Tesla engineers.
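Such a "two-lane mode" filter could be sketched as below. Everything here is hypothetical: the field names, the 1.8 m lane half-width, and the 80 m range cap are illustrative guesses, not anything from Tesla:

```python
def relevant_targets(detections, lane_half_width=1.8, max_range=80.0):
    """Hypothetical 'two-lane mode' filter: keep only targets roughly
    in the ego lane and within a limited forward distance, so oncoming
    traffic is ignored. Distances are in meters; lateral_m is the
    offset from the lane center (negative = toward the oncoming lane)."""
    return [
        d for d in detections
        if abs(d["lateral_m"]) <= lane_half_width   # inside our lane
        and 0.0 < d["forward_m"] <= max_range       # ahead, within range
    ]

cars = [
    {"id": "lead",     "lateral_m": 0.3,  "forward_m": 40.0},
    {"id": "oncoming", "lateral_m": -3.5, "forward_m": 60.0},
    {"id": "distant",  "lateral_m": 0.1,  "forward_m": 200.0},
]
# Only the lead car in our own lane survives the filter.
assert [d["id"] for d in relevant_targets(cars)] == ["lead"]
```

The trade-off is obvious: a filter this crude would also ignore a car genuinely crossing into your lane until it was well inside it, which is presumably why a real implementation would be much more conservative.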
 
I just took ownership of my first Tesla yesterday. I notice that there is a log of most events that happen, such as me veering into adjacent lane while trying to find something on the console and Lane Departure warning going off.

So two questions related to major PB events:
1) what is reported in the log for these events?
2) I am not sure if there is a way to disable regenerative braking completely. Maybe set Hold mode to "Creep." Has anyone tried reducing regenerative braking to differentiate whether the sudden slowdown comes from regenerative braking or actual brake actuation? Although this is not a solution, it might mitigate the severity of deceleration on wide-open roads where there is not much value in regenerative braking. I know the manual says something about the brake lights illuminating if the car slows down for any reason with TACC engaged. Hmmm. That makes me think: I did a hard ABS test yesterday, so I will go back and check the log to see if it recorded that event.

If it is going to be left up to users to solve these PB issues, then for any incident report I would suggest always including the date of the event, the car model and year, the current software version, whether the FSDb package is installed, and whether you subscribe to Premium Connectivity. I mention the latter because I read one report suggesting that poor cellular signal in desolate areas might correlate with PB occurrences.
 
I would say that AEB activations are ALWAYS accompanied by chimes.

AEB engages the (physical) brakes to rapidly slow the vehicle while pushing a big red alert on the screen with audible chimes. An AEB event can ONLY be canceled by the following methods:
1. Car decided to end it
2. Driver presses on the brake pedal
3. Driver presses the accelerator pedal almost fully down

#3 is generally not a good option in my opinion, however, given the vehicle's massive instant torque and rapid acceleration capability.

Any AEB warnings would be better termed Forward Collision Warnings (as per the manual); there are no "AEB warnings."

The Early FCW setting was annoying to me in stop-and-go traffic; I have mine set to Medium.
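The three cancel conditions listed above can be written as a toy check. This is my own simplification for discussion, not Tesla's actual logic, and the 95% accelerator threshold is an assumption standing in for "almost fully down":

```python
def aeb_should_continue(car_ended_event, brake_pressed, accel_fraction):
    """Toy encoding of the three AEB cancel conditions.
    accel_fraction is accelerator pedal travel, 0.0 to 1.0."""
    if car_ended_event:
        return False  # 1. the car decided to end the event
    if brake_pressed:
        return False  # 2. driver pressed the brake pedal
    if accel_fraction >= 0.95:
        return False  # 3. accelerator pressed almost fully down
    return True       # otherwise AEB keeps braking

assert aeb_should_continue(False, False, 0.0)       # event keeps braking
assert not aeb_should_continue(False, True, 0.0)    # brake pedal cancels it
assert not aeb_should_continue(False, False, 1.0)   # full accelerator cancels it
```

Note that light accelerator pressure (say 0.5) does not cancel the event in this sketch, which matches the point that only a near-full press works, and why that press is risky in a car with this much torque.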
Took delivery of a Model 3 yesterday. Just wanted to mention that AEB activation would also be reported in the system log file. I noticed that an instance of a Lane Departure Warning from driving yesterday was in the log file.

I have not read the owner's manual from beginning to end. Wondering: is there any way this log file can be downloaded by the user? They need a Send to Printer or Print to PDF function (lol). Can entries be deleted by the user (as opposed to a service person), or are they collected for the lifetime of the vehicle?
 
From our perspective (individual owners) we can't make any assumptions about who gets more PB events. I only know for sure my own experience, and I try to qualify that with my exact configuration. 9/21 built Y LR, no FSD of any flavor, ever.

Out of the gate, PB was BAD, from 9/21 to 8/22. For some reason, it went away, literally never happened, from 8/22 to 3/23. Since then, PB came back until about late 5/23.

At first I thought it was software update related, but I never found any correlation with other owners on the same software versions.

If you dig into how Tesla rolls out changes, you quickly realize it's not like most carmakers, or really like any product engineering company. They continuously roll out changes, weekly or even daily, so each car is closer to a unique custom build than to a "model" and "revision" like most product development systems. Tesla stores a set of data about your car's build that's more like a fingerprint than a configuration. This seems to make it nearly impossible to fully validate any software version against the entire fleet, so you might see bugs or weird behavior in some 2021 LR Model Ys that other 2021 LR Model Ys don't have. The public has no way of figuring out what causes one apparently identical car to behave entirely differently from another. Hell, even Tesla can't figure it out.

But I say, go for it. It's a fun journey, and even with the worst PB all the time, it's still an amazing car. If your car has no PB issues after your first update (it's delivered with a production build), then just don't update, unless you really want to, and do a bunch of research on that version. If you do get PBs, update whenever possible, until you get a release that solves it. Works for me.
I have read the Owner's Manual but not memorized it (LOL). Just took delivery of an M3LR yesterday. Per your suggestion of holding off on updates once I find a software rollout that handles the TeslaVision deficiencies better, can you answer these questions?

How do I prevent an automatic update? The options are only Standard and Advanced. It appears to be a "forced" push if the vehicle is connected to WiFi. The vehicle was delivered with v11.1 (2023.12.200). Is the only solution to never connect the vehicle to WiFi? What do I lose by not connecting to WiFi? Do I lose the ability to view cameras in Sentry Mode, or is that available through Premium Connectivity?

I did see mention that, in special cases, Tesla could perform an update via the Premium Connectivity wireless link.

I usually never update my iPhone immediately after a new update, because you can't go backwards unless you reset phone and restore from backup.