Thanks. But then that makes me wonder why it did, in a couple of places.
A possible guess is that V12's NNs "decide" how they drive and no longer have a direct rule-based vision decision that can be easily rendered in the UI. So in effect the UI is now mostly a "guesstimate" of what the NNs are thinking and going to do. So even the "tentacle" can be wrong, since it is just what the UI "thinks" the NNs should do.
 
I had no idea humans were that slow. 1.5 seconds is an eternity.
...but we also possess something intrinsic that can't be measured or defined, and certainly not replicated by a computer. We just have a "sense" of the action or actions to come in our surroundings, and an instinctual ability to pre-react to unforeseen, unknowable or unpredictable situations, all without forethought. It is more of a vestige of our animal survival heritage.

To me this is an area that AI will NEVER be able to imitate and another reason that all this sentient talk is BS.
 
Ok, I have had FSD for over a year. This past week was the first time this has ever happened, and I had it happen again today, so I wanted to ask about it.

I was on a straight two-lane road, no other cars were close, and I got a warning noise for 4-5 seconds. A little panic from me for a second, like "what is going on," and then it just stopped.
I got the warning noise with no visual indicators (no blue flashing, no "take over now w/ hands on the wheel", no red car in front of mine for being too close, etc.) and wanted to ask if anyone else has had this happen on 12.3.4?

Very strange, as I am always watching the road and car behavior closely, and every warning is normally obvious and generally makes sense, even when it's being overly cautious. Having this happen twice now, with no apparent reason for a warning noise and no indicators, is VERY unsettling in the moment.

Thoughts?
Yep, I've been experiencing the same thing randomly on my 2022 MS. Driving along just fine on a straight road, when the piercing "danger" sounds go off with no warning, and no indication on the screen that anything is actually wrong. My hands are on the wheel. It's happened a bunch of times with 12.3.4. Not sure why. It is disconcerting...
 
Any chance it was on the highway and V11 was in control? Or going really slowly, with the parking program doing the visualizations? (I don't use that, so I don't know if it draws such things while searching for a parking spot.)
Well, yes, the few times they visualized we were at highway speeds, but again, only a tiny fraction visualized. In the past, with AP and V11, they all visualized.
 
Yep, I've been experiencing the same thing randomly on my 2022 MS. Driving along just fine on a straight road, when the piercing "danger" sounds go off with no warning, and no indication on the screen that anything is actually wrong. My hands are on the wheel. It's happened a bunch of times with 12.3.4. Not sure why. It is disconcerting...
Possibly the forward collision warning system being stupid. I haven't seen this behavior, but I have seen the FCW being fairly useless. No warning when I expect one, a warning when there's no apparent reason for one, etc.

I hope everyone is disengaging for these excursions and giving a description of each problem. The feature name doesn't include "Beta" anymore, but we're obviously still helping to test a system with plenty of flaws.
 
...but we also possess something intrinsic that can't be measured or defined, and certainly not replicated by a computer. We just have a "sense" of the action or actions to come in our surroundings, and an instinctual ability to pre-react to unforeseen, unknowable or unpredictable situations, all without forethought. It is more of a vestige of our animal survival heritage.

To me this is an area that AI will NEVER be able to imitate and another reason that all this sentient talk is BS.
There's no "intellect" or intelligence in AI. It's just a next level algorithm. The term should be AA.
 
...but we also possess something intrinsic that can't be measured or defined, and certainly not replicated by a computer. We just have a "sense" of the action or actions to come in our surroundings, and an instinctual ability to pre-react to unforeseen, unknowable or unpredictable situations, all without forethought. It is more of a vestige of our animal survival heritage.

To me this is an area that AI will NEVER be able to imitate and another reason that all this sentient talk is BS.

Well put.

If there's a line of parked cars and not enough sight-line to see if there is someone about to open a door to exit their parked car, or a child running out, I'm already going below the speed limit, in most cases well below the speed limit. OR, I maintain a higher speed (likely slightly below the speed limit) by driving in the centre of the road to leave more room/reaction time for the unexpected, moving back into my lane and slowing down further when there is an oncoming car.

At present, the AI is struggling to find an appropriate speed in relation to the road's limit (on all roads). It also seems unable to synthesize the recommended speed signs for curves or ramps, the speed limit signs, and the conditional speed limit signs with the current conditions (dozens of children walking home from school vs 45 minutes earlier when, in my city, the school speed limit is in effect but there are no pedestrians around at all). It is taking its cue from other drivers, which is as likely to be a flawed example as a good one. At this point the AI has demonstrated no understanding of the rules, no ability to read signs or resolve mapping errors vs what the cameras say, AND no general 'gut feeling' for clear-sailing vs pay-attention situations in the same location at different times/days/weather.
 
[Attached image: IMG_1382.jpeg]

Pattern indicates new FW will drop any day now. Will most likely be the spring updates bundled with an updated V12.
 
Approx route excluding end points? Curious about speeds.

I think at low speed it’s probably pretty smooth except on the occasion when it completely does not anticipate (as likely occurred here).

Hopefully they’ll actually make the driving styles for acceleration programmable. Seems fine to me at the moment; it is nice to finally be keeping up with and even pulling away from other traffic after stops. Much safer at intersections that way (better visibility for red light runners).
Good point. I think, however, that pulling ahead of others at stop lights increases the chance of getting hit by a red light runner.
 
Good point. I think, however, that pulling ahead of others at stop lights increases the chance of getting hit by a red light runner.
No, this is incorrect.

If you can’t see the red light runner coming you cannot avoid them. Other traffic can block your view. You can always stop before you get through the crosswalk!

That is why I said this.

It’s all situation dependent but on many occasions you want to move first. In the end you have to have unobstructed sight lines so you can see whether to allow FSD to proceed.

Fortunately right now FSD pushes forward promptly but cautiously at first then picks up the pace.
 
it noses over the line just a little and right back, apparently to gauge their reaction. If they back off, it goes ahead and does a full lane change
Interesting idea, but I suspect it's just changing its mind, something I've seen it do with no other cars around.

It was so far over the left side lane marker that we were bouncing over the cat's eyes. One time it was bouncing on the center lane markers with oncoming traffic
Mine does this a lot. I was about to calibrate the cameras, but it doesn't happen when running v11 on highways, so I don't think that's it.

I was on a straight two-lane road, no other cars were close, and I got a warning noise for 4-5 seconds. A little panic from me for a second, like "what is going on," and then it just stopped.
I got the warning noise with no visual indicators (no blue flashing, no "take over now w/ hands on the wheel", no red car in front of mine for being too close, etc.) and wanted to ask if anyone else has had this happen on 12.3.4?
Yes, I've seen it.
 
Well put.

If there's a line of parked cars and not enough sight-line to see if there is someone about to open a door to exit their parked car, or a child running out, I'm already going below the speed limit, in most cases well below the speed limit. OR, I maintain a higher speed (likely slightly below the speed limit) by driving in the centre of the road to leave more room/reaction time for the unexpected, moving back into my lane and slowing down further when there is an oncoming car.

At present, the AI is struggling to find an appropriate speed in relation to the road's limit (on all roads). It also seems unable to synthesize the recommended speed signs for curves or ramps, the speed limit signs, and the conditional speed limit signs with the current conditions (dozens of children walking home from school vs 45 minutes earlier when, in my city, the school speed limit is in effect but there are no pedestrians around at all). It is taking its cue from other drivers, which is as likely to be a flawed example as a good one. At this point the AI has demonstrated no understanding of the rules, no ability to read signs or resolve mapping errors vs what the cameras say, AND no general 'gut feeling' for clear-sailing vs pay-attention situations in the same location at different times/days/weather.
There's some advantage from the cameras seeing more than I can see, in terms of pedestrians. But I agree that we have a sixth sense from years of experience. I'm surprised the FSD does as well as it does on crowded freeways.

Regarding the speed oddities with the latest v12, how do we expect this to improve if they aren't adding more code? If this is based on the driving of others, we know most people exceed the speed limit. But what about the extra slow speeds, on roads with no other traffic or pedestrians?

It seems that they'll have to give us the option to set the max speed like the freeway behavior.
 
Often humans are braking before the hazard even occurs (negative reaction time).

You're treating that 1.5 seconds as reaction time (which is what is being discussed).

Fortunately, the perception cycle, which may last 1.5 seconds, often starts well before anything happens, so in many situations this leads to a negative reaction time.

In worst-case situations of sudden hazards, the full 1.5 seconds might happen.

Compare that to FSD, which seems to have an extremely limited ability to perceive in advance and mostly just reacts, with a reaction time that currently seems to be clocked at about 0.35s.
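
To put those two numbers side by side, here's a rough back-of-the-envelope sketch of how far the car travels before braking even starts; the 45 mph cruising speed is just an assumed example, not from any test:

```python
# Rough sketch: distance covered before braking even begins, comparing a
# worst-case 1.5 s human reaction with the ~0.35 s figure quoted for FSD.
# The 45 mph speed is an assumed example, not a measured value.

MPH_TO_MPS = 0.44704  # metres per second per mph

def distance_before_braking(speed_mph: float, reaction_s: float) -> float:
    """Distance travelled (in metres) during the reaction delay."""
    return speed_mph * MPH_TO_MPS * reaction_s

for label, reaction_s in [("human, worst case", 1.5), ("FSD, as quoted", 0.35)]:
    d = distance_before_braking(45.0, reaction_s)
    print(f"{label}: {d:.1f} m travelled before the brakes are touched")
# human, worst case: 30.2 m; FSD, as quoted: 7.0 m
```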

Check the possum video, for example (the frame where timing began is linked). I did not time that, but in my bleary-eyed (I have night vision problems) and distracted state you can see my reaction time was similar to FSD's. It took 1 second for me to react (watch the brake pedal) in my compromised, vision-impaired state, which is coincidentally the same as FSD (observe the regen bar - note the path planner did not fully reflect the braking of the car in this case). (I would beat it handily in the daytime, assuming I saw the hazard.) And that is kind of a "sudden hazard" case.

My pedestrian case seems to have made the Dan O'Downer highlight reel. A dubious honor. I'm sad that he did not feature the Awesome Possum.

It's difficult to assess response time in those types of situations given we don't know when FSD detects the opossum. In any event, per the video I still get ~1.5 sec, starting the clock when I first see an object in the shadows through the wide-lens video. I'm using the regen bar's initial movement for FSD's initial response. I'm sure FSD would have responded earlier with the high beams on. Darkness is one of the challenges of pure vision, and we can't always drive with the high beams on.
 
In any event, per the video I still get ~1.5 sec, starting the clock when I first see an object in the shadows through the wide-lens video.
I picked the time when the object moved into the street. Linked above.

Anyway this was not supposed to be an example of good reaction time! It took a second!

Can’t assess FSD reaction time here anyway since we don’t know how it responds to such objects (unlike yellow/green lights).

My reaction time was really poor. But it was nighttime, I have vision problems and I can’t see as well as the camera can, I was distracted, etc.
 
If you can’t see the red light runner coming you cannot avoid them.
Yes you can, especially if your car is protected by other cars that jump ahead of yours, acting as red light runner shields. Gotta give the runner a millisecond or so to clear the intersection. If you jump out first, it's already too late when there's a red light runner. Your car jumping out ahead of all the others assumes there's no such thing as a red light runner. This is just one reason not to be the first out of the gate.
 
Yes you can, especially if your car is protected by other cars that jump ahead of yours, acting as red light runner shields. Gotta give the runner a millisecond or so to clear the intersection. If you jump out first, it's already too late when there's a red light runner. Your car jumping out ahead of all the others assumes there's no such thing as a red light runner. This is just one reason not to be the first out of the gate.
My point is you want to be last into the intersection and proceed the minimum distance, if there is a red light runner. That is why you establish visibility by moving promptly and pulling forward. You go quickly so you can see, expecting to stop immediately.

Unless you stay stopped for some time, you are more likely to be further into the intersection and involved in the collision (another shielding car pushed into yours) if you use other traffic as a shield. Red light runners can be very delayed - you have to see them coming, not wait for them to clear.

Having good visibility is key, so you can stop sooner and not be as far into the intersection. If you’re lucky you’ll be so far back that you are not involved.
 
Regarding the speed oddities with the latest v12, how do we expect this to improve if they aren't adding more code? If this is based on the driving of others, we know most people exceed the speed limit. But what about the extra slow speeds, on roads with no other traffic or pedestrians?
I consider this to be the same problem as empty intersections; there's just not enough going on for the neural network to train on. When there are cars crossing the intersection, there are obvious stimuli for the neural network to react to. When there is traffic around or a car to follow on a section of open road, there are again those obvious stimuli. So either the "low stimulus" environments are getting lost in the neural network, or Tesla isn't giving the system enough training data of cars driving on open roads and navigating empty intersections.

Alternately, Tesla will have to manufacture new senses for the network, such as "this is an open intersection" or "this is an open road". If the network won't self-organize to recognize those things (or, like I said, those sorts of signals are being lost in the network as being inconsequential), then they'll have to be shoved in its face to make sure they're recognized as a proper stimulus.
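
For what it's worth, the "shoved in its face" part is roughly what weighted sampling does in ordinary NN training pipelines. A minimal sketch, assuming a hypothetical clip list already tagged with scenario labels (nothing here reflects Tesla's actual pipeline):

```python
# Minimal illustration of oversampling rare "low stimulus" clips so they
# are not drowned out during training. The dataset, labels, and weights
# are hypothetical; this is not how Tesla has said it trains FSD.
import random
from collections import Counter

clips = (
    [{"scenario": "busy_intersection"}] * 900
    + [{"scenario": "empty_intersection"}] * 50
    + [{"scenario": "open_road_no_traffic"}] * 50
)

# Weight each clip inversely to how common its scenario is in the dataset.
counts = Counter(c["scenario"] for c in clips)
weights = [1.0 / counts[c["scenario"]] for c in clips]

# A training batch drawn this way sees the rare scenarios about as often
# as the common ones.
batch = random.choices(clips, weights=weights, k=32)
print(Counter(c["scenario"] for c in batch))
```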
 
This was on the freeway, and my understanding is that v11 is still rules-based and not using any training video?
That may indeed be the case.. sort of. Tesla has mentioned a few times on the 2024.x releases that they've "improved" the highway software here and there.

So: the EAP/highway code is NN image recognition and such, plus 300k+ lines of C++ code. Nothing says that stuff has gone unchanged since December of 2023.

At some point it's expected that Tesla will replace the C++ with NN code, so it's NN from beginning to end. But when (and if) that happens is strictly up in the air. In the meantime, it appears that the code maintainers are doing what code maintainers do: kill bugs.
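
Purely as an illustration of that difference (none of this resembles Tesla's actual code; the function names and thresholds are invented), a hand-written rule is something a maintainer can patch directly, while an end-to-end NN is a single learned mapping that only changes by retraining:

```python
# Illustrative contrast only; names and numbers are made up for the example.

def should_change_lane_rule_based(gap_ahead_m: float, closing_speed_mps: float) -> bool:
    # The "hand-written C++" style: an explicit, auditable threshold that a
    # maintainer can tweak to kill a bug.
    return gap_ahead_m < 40.0 and closing_speed_mps > 3.0

def should_change_lane_end_to_end(camera_frames, policy) -> bool:
    # The end-to-end style: one learned mapping from pixels to a decision.
    # "Fixing" behaviour here means new training data and retraining,
    # not editing a threshold.
    return policy(camera_frames) > 0.5
```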
 
...but we also possess something intrinsic that can't be measured or defined, and certainly not replicated by a computer. We just have a "sense" of the action or actions to come in our surroundings, and an instinctual ability to pre-react to unforeseen, unknowable or unpredictable situations, all without forethought. It is more of a vestige of our animal survival heritage.

To me this is an area that AI will NEVER be able to imitate and another reason that all this sentient talk is BS.
Actually.. you've got to be pretty careful with these blanket statements.

We humans have an.. interesting relationship with Time and Predictions. If you think about it for more than a bit: with a long reaction time, how in the heck can a baseball batter hit a ball, or how can someone catch a ball, or, for that matter, ride a bicycle, where at any moment the bike may hit an obstruction and we have to react quickly?

Thing is, humans are sort of running an analog computer that predicts where we're going and, before we get there, sends motor neuron impulses ahead of the critical time, so that the propagation delays of the brain, the neurons to the muscles, and the muscles themselves all line up and we catch the ball, hit the ball, recover from a trip, and all that jazz. Our wetware is slow compared to computers, but it runs algorithms that aren't so much real-time as pre-time.

There's no particular reason that computers can't be made to do the same kind of thing; in fact, they are.
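
A minimal sketch of that "pre-time" idea in control terms: act on where the target is predicted to be after your own delays have elapsed, not where it is right now (constant-velocity prediction, made-up numbers):

```python
# Minimal delay-compensation sketch: instead of reacting to where the ball
# or obstacle is right now, act on where it will be once your own sensing,
# processing, and actuation delays have elapsed. Numbers are made up.

def predict_ahead(position_m: float, velocity_mps: float, delay_s: float) -> float:
    """Constant-velocity extrapolation over the total reaction delay."""
    return position_m + velocity_mps * delay_s

delay_s = 0.2         # assumed total sensing + processing + actuation lag
ball_pos_m = 5.0      # metres away right now
ball_vel_mps = -12.0  # metres per second, closing

aim_point = predict_ahead(ball_pos_m, ball_vel_mps, delay_s)
print(f"aim for {aim_point:.1f} m, not {ball_pos_m:.1f} m")  # aim for 2.6 m
```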

Speaking as a working engineer who dabbled on and off with control theory, I will say that it's easier to put a speedy computer on top of a fast sensor with a fast reacting actuator (or the equivalent), since the math comes out easier.

I happen to be married to a Human Factors engineer; a good part of her training in design is taking account of what I'll call, for lack of a better term, "Wetware in the Loop". With the additional delays, design of hardware can get interesting; it's possible to build uncontrollable-by-humans vehicles that look like they ought to work, but won't, since the overall system (with people in it) has to take into account delays, reaction times, and all that jazz.

Finally.. humans rarely run into truly novel problems. In general, when faced with a brand-new problem, it takes a human significant time to figure out what to do. And we're not talking 0.5 seconds here; more like minutes, or longer, sometimes. The whole point of road regulations, build requirements, standard signs, and all that jazz is to limit the number of potential situations a driver can run into. And then licensing and driver's ed train people up on what one encounters out there so they don't have to think about it; they just react on pre-loaded memories.
 
You don't even need high intelligence to be able to make these types of time/movement predictions and then fire off neurons to the corresponding muscles. Pretty much all animals do that instinctively, and they do it far better than us in most cases.

Ever seen a leopard jumping 10 feet in the air to intercept an escaping antelope?

Sure, the deer may not know what a "road" is, but with enough conditioning/training, even in my neighborhood the deer will casually hang out at the side of the road watching cars zoom by without over-reacting.

