
TACC failed to brake at stop, nearly accident.

That seems ambitious. Is the Q50 tall enough to always see over the car in front?

If you have sensors on each side view mirror, you could see the farther car around the nearer car. If it is smart, it can also look in reflections (shop windows, other vehicle surfaces, etc.) and compare colors and shapes against what it already saw in front of it, and also look through the windows of vehicles ahead. This is what humans (who are decent drivers) do anyway, so it would basically be mimicking what humans do, only on a more mechanical scale. A lot of programmers get lazy and claim this is "impossible", but clearly it is not only possible but well within the reach of a corporation with many (properly organized) engineers.
 
I believe this is done by bouncing radar off the road underneath the car in front of you to see farther ahead. I'd guess the Tesla hardware has this capability, though I'm not sure they're using it. I also wonder how that works when all cars are using a similar system.
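Just to illustrate why that bounced return would matter: a target detected beyond the lead car can be threat-ranked exactly like a direct return, so the controller could start braking before the lead car does. A toy sketch below; every name and threshold is made up by me, not anything from Tesla's actual firmware.

Code:
# Hypothetical sketch: treat a radar return that bounced under the lead car
# the same as a direct return, and rank threats by time-to-contact.
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float      # distance to the detected object
    closing_mps: float  # closing speed (positive = we are approaching it)

def earliest_threat(returns, ttc_limit_s=4.0):
    """Pick the return with the smallest time-to-contact under the limit.
    A bounced return from the lead-lead car shows up like any other."""
    threats = [r for r in returns
               if r.closing_mps > 0 and r.range_m / r.closing_mps < ttc_limit_s]
    return min(threats, key=lambda r: r.range_m / r.closing_mps, default=None)

# Lead car matches our speed (closing speed 0), but a bounced return reveals
# a stopped car 60 m out that we are closing on at 30 m/s (~2 s to contact).
print(earliest_threat([RadarReturn(25.0, 0.0), RadarReturn(60.0, 30.0)]))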

Yes, that obviously would work great. Also, directly looking at the tires ahead from underneath the car in front would work very well (straight on, at cross angles, etc.). I hadn't thought of those. Those are great ideas.

- - - Updated - - -

Tesla makes it clear that TACC in its current state is for highway use. Just be aware that using it on surface streets means that you are assuming risk for this type of thing. Always be ready to step in.

That seems like an insufficient warning and poor instructions. From what other people explained here: if you are in the #1 lane on I-280 south at 4 PM on a workday, coming up to the stoppage area before the carpool lane starts somewhere before CA-85, and traffic is heavy, the car in front of you can be at a dead stop not far around a gentle curve. You are using "TACC in highway use," and you will slam into the car in front of you at 79 MPH. So you used it exactly as Tesla's lawyers described, and it still failed.

The explanation by rneugebauer in post #24 (TACC failed to brake at stop, nearly accident. - Page 3) is much clearer, more precise, more logical, and more useful to a driver, and anecdotally seems more accurate, than what the lawyers wrote. In a case like this, I think the lawyers are culpable for deaths and property damage.
 
I had the exact same experience. HOWEVER, what is very strange is that it only happens when the car in front turns to the RIGHT. When it turns to the left, I experience the OPPOSITE: the car is already out of my lane and TACC slams on the brakes!

*Exactly* how my 2007 Infiniti M35 behaved. Left-hand exits on the highway were extremely annoying (when there were slow cars in the right lane and I was passing... of course I'd normally be in the right lane for normal travel ;-) ).

Luckily, I have a pre-Oct 2014 car so I don't need to worry about whether TACC / AEB etc are engaging or not :)
 
If you have sensors on each side view mirror, you could see the farther car around the nearer car. If it is smart, it can also look in reflections (shop windows, other vehicle surfaces, etc.) and compare colors and shapes against what it already saw in front of it, and also look through the windows of vehicles ahead. This is what humans (who are decent drivers) do anyway, so it would basically be mimicking what humans do, only on a more mechanical scale. A lot of programmers get lazy and claim this is "impossible", but clearly it is not only possible but well within the reach of a corporation with many (properly organized) engineers.

As a software engineer, I'm not comfortable with the characterization that not developing a system that can opportunistically look at reflections in shop windows and somehow figure out what they refer to is somehow lazy. I'm pleased that you think so highly of us though.
 
I don't disagree about stopping for parked cars in general but I'm not sure how you can say that TACC knew not to follow the Prius anymore. Neither of us knows for sure what the TACC actually thought here. We can't even see what the indicator was on the dash. Based on the video and my experiences with TACC, it looks to me like the Prius was sufficiently in front of the car for TACC to stay locked on to it, thinking it was just going around a bend past some stopped/parked cars. Either way, it doesn't really matter. The point is that this is simply another example scenario (out of many) for why we should not expect TACC to work well on surface streets. We can use it on surface streets at our own risk but we should be very prepared to intervene often.

I don't see the difference between surface streets and highways.

When I think highway, I think CA-17, I-280, US-101, I-880, I-680, and further east, CA-152, I-580, I-5, CA-120, CA-99. These are often jam-packed, often fast moving, often many lanes, often with people coming and going, and some of them have cross traffic and construction crews moving about in odd directions. CA-152 has both stoplights and freeway portions. CA-132 is the same. You can be driving down a large freeway and then suddenly run into a metering light for a transfer to another freeway, solid with traffic creeping or stopped. Farm areas dissolve this distinction even further.

Many surface streets have long expanses which allow fast driving without a lot of cross traffic, and then you get to areas where there is cross traffic.

Any kind of autopilot would have to know what the driver intends to do. In the example here, if the Prius turning right was going the direction you wanted to go, then autopilot would follow that vehicle so as not to hit it or any other vehicle. But if you actually wanted to go the direction of the stopped light ("straight" in driving nomenclature), the autopilot would similarly need to know that. TACC is not able to interpret your intentions and actual future actions, if for no other reason than the programmers don't know whether you know your turn signal is on (or, for that matter, whether you are a good signaler). Obviously, a good autopilot should predict probable outcomes by looking at all sensory input and interpreting movements, signals, visuals, attitudes, etc. But I think this is a key difference between the current infantile version of TACC and what autopilot would be.

Let me state that distinction more plainly: if the autopilot is driving, it knows where it could go and where it will try to go. If the autopilot isn't driving, it (TACC, assist, autopilot, etc.) doesn't know what you (the driver) intend, will decide to do, and will end up doing.* This is a HUGE difference between what we've all seen Google driverless cars do and what Tesla Autopilot features currently offer. We are easily confused into thinking Tesla is more modern than that by watching all the YouTube videos Google puts out (it isn't**).
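To make the distinction concrete, here is a toy decision table I made up; it is not any vendor's logic, just an illustration of why the no-signal case is unsolvable for driver assist but trivial for a system that planned its own route.

Code:
# Hypothetical sketch of the "who do I follow?" problem for driver assist.
def pick_target(lead_car_turning_right, my_signal_right):
    """my_signal_right: True / False / None (None = driver never signaled,
    which is exactly the case where intent is unknowable)."""
    if my_signal_right is True and lead_car_turning_right:
        return "follow the turning car"           # we are going the same way
    if my_signal_right is False:
        return "track the stopped traffic ahead"  # we are going straight
    # No signal: assist cannot distinguish "going straight" from "turning
    # but forgot to signal". A full autopilot never reaches this branch,
    # because it chose the route itself.
    return "ambiguous -- driver must stay ready to brake"

print(pick_target(lead_car_turning_right=True, my_signal_right=None))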

I think the programmer-to-driver communication from Tesla ought to be better. You'd think that the manual writer in the Palo Alto software division would just put it in the instruction manual, and all the Silicon Valley Tesla customers would read that manual, and everything would be easy. But instead I have this foreboding fear that the driver instruction manual is actually written in a foreign language dozens of time zones away by insular software programmers, routed through Southern California, East Coast Lawyerville (Iowa/Connecticut), Palo Alto, and Fremont before getting sent to translators in China and India and then back to Palo Alto, many months after the actual software is released.



---
* What if you intended to go right, had your turn signal on, and there wasn't space in the lane for your fat Tesla to fit between the stopped car and the right curb, even though the skinny Prius easily made it? What if there was room, but you didn't get far enough right because you thought there wasn't? What is driver assist supposed to do then? Autopilot would figure it out (supposedly), but driver assistance would have to be far superior even to that: it would have to guess whether or not you were going to try to squeeze through, if it even knew with confidence that you were going that way to begin with.

** They say the LIDAR units (the things Google driverless cars use) cost $150,000.00 each.
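The squeeze-through question in footnote * is simple arithmetic once you know the gap, which is exactly what makes the intent half of it the hard part. Widths below are approximate public figures; the margin and gap are numbers I made up.

Code:
# Does the car fit between the stopped vehicle and the curb?
PRIUS_WIDTH_M = 1.76     # approximate
MODEL_S_WIDTH_M = 2.19   # approximate, including mirrors
SAFETY_MARGIN_M = 0.3    # hypothetical clearance required on each side

def fits(gap_m, car_width_m):
    return gap_m >= car_width_m + 2 * SAFETY_MARGIN_M

gap = 2.5  # metres between the stopped car and the right curb, say
print(fits(gap, PRIUS_WIDTH_M))    # True: the skinny Prius squeezes through
print(fits(gap, MODEL_S_WIDTH_M))  # False: the fat Tesla does not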
 
If you have sensors on each side view mirror, you could see the farther car around the nearer car. If it is smart, it can also look in reflections (shop windows, other vehicle surfaces, etc.) and compare colors and shapes against what it already saw in front of it, and also look through the windows of vehicles ahead. This is what humans (who are decent drivers) do anyway, so it would basically be mimicking what humans do, only on a more mechanical scale. A lot of programmers get lazy and claim this is "impossible", but clearly it is not only possible but well within the reach of a corporation with many (properly organized) engineers.
I think you are underestimating how hard it is to mimic what humans do. Our brain does a lot of behind-the-scenes processing of visual information and makes a lot of qualitative decisions when identifying things.

For example, a human can easily distinguish between a human, bicycle, motorcycle, car, debris, etc. in a split second (as well as its direction of motion), and in all kinds of lighting conditions, but our cameras + AI aren't able to do so easily. Not to mention the computational load might be too much for a typical car computer to handle. That's why a lot of autonomous vehicles rely mainly on radar or lidar for road conditions rather than a camera system (beyond relatively simple things like road signs).
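A tiny, contrived example of one such failure mode: a naive pixel matcher that recognizes a scene perfectly will stop recognizing it after a simple brightness change, while a human doesn't even notice. This is my own toy illustration, not anyone's production vision code.

Code:
# Naive pixel matching vs. a brightness change.
import numpy as np

rng = np.random.default_rng(0)
template = rng.random((8, 8))       # "learned" appearance of an object
same_scene_darker = template * 0.5  # identical scene, half the light

def naive_match(a, b, tol=0.1):
    return np.mean(np.abs(a - b)) < tol

print(naive_match(template, template))           # True: matches itself
print(naive_match(template, same_scene_darker))  # False: lighting broke it

def normalized_match(a, b, tol=0.1):
    norm = lambda x: (x - x.mean()) / (x.std() + 1e-9)
    return np.mean(np.abs(norm(a) - norm(b))) < tol

print(normalized_match(template, same_scene_darker))  # True again

Normalizing out brightness fixes this one failure mode, but shadows, occlusion, viewpoint, and motion each need their own fix, which is why "just mimic what humans do" doesn't scale.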
 
I think the programmer-to-driver communication from Tesla ought to be better. You'd think that the manual writer in the Palo Alto software division would just put it in the instruction manual, and all the Silicon Valley Tesla customers would read that manual, and everything would be easy. But instead I have this foreboding fear that the driver instruction manual is actually written in a foreign language dozens of time zones away by insular software programmers, routed through Southern California, East Coast Lawyerville (Iowa/Connecticut), Palo Alto, and Fremont before getting sent to translators in China and India and then back to Palo Alto, many months after the actual software is released.

Did a SW developer steal your lunch money or something?
 
As a software engineer, I'm not comfortable with the characterization that not developing a system that can opportunistically look at reflections in shop windows and somehow figure out what they refer to is somehow lazy. I'm pleased that you think so highly of us though.

You're welcome.

- - - Updated - - -

Did a SW developer steal your lunch money or something?

I think you pegged it. I wasn't aware I had such prejudices seeping through but there they are.

- - - Updated - - -

I think you are underestimating how hard it is to mimic what humans do. Our brain does a lot of behind-the-scenes processing of visual information and makes a lot of qualitative decisions when identifying things.

For example, a human can easily distinguish between a human, bicycle, motorcycle, car, debris, etc. in a split second (as well as its direction of motion), and in all kinds of lighting conditions, but our cameras + AI aren't able to do so easily. Not to mention the computational load might be too much for a typical car computer to handle. That's why a lot of autonomous vehicles rely mainly on radar or lidar for road conditions rather than a camera system (beyond relatively simple things like road signs).

I admit I have to defer to your current experience of the state of the art here. I thought we were already further down this road than we apparently are.
 
I admit I have to defer to your current experience of the state of the art here. I thought we were already further down this road than we apparently are.

I've done machine vision applications for the medical field. Even under ideal conditions it is very, very hard. The kinds of image segmentation techniques we can apply quickly are not really how the brain matches patterns. The brain is amazing.
 
I admit I have to defer to your current experience of the state of the art here. I thought we were already further down this road than we apparently are.
Even Elon (who is relatively optimistic) says we are still 5-6 years away from fully autonomous driving. And he doesn't say we can accomplish it using only camera systems (more expensive lidar systems like the ones Google uses might turn out to be necessary).

The issue is coming up with algorithms that can reliably process the images. That's not something that putting in long hours of programming, or even throwing a lot of computational resources (like a supercomputer) at the problem, can solve. At the current state of the art, lidar remains the only system reliable enough for fully autonomous driving (and even then Google is limiting it to 25 mph use). Computer-vision AI and current camera systems are still not reliable enough without human intervention. They are used in emergency braking systems (which don't guarantee stopping in time to prevent a collision, only reducing its severity). Volvo's late-2010 pedestrian detection system famously failed 3 times out of 12 demonstrations, which obviously isn't good enough for autonomous driving.
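To put those demo numbers in perspective: a 75% per-event success rate sounds almost usable until you chain events together. The events-per-drive figure below is a made-up illustration, not a measured one.

Code:
# 3 failures in 12 demonstrations = 75% success per braking event.
success_per_event = 9 / 12

events_per_drive = 20  # hypothetical braking-relevant events in one drive
p_clean_drive = success_per_event ** events_per_drive
print(f"{p_clean_drive:.4f}")  # ~0.0032: nearly every drive includes a miss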
 
Yes, we must always stay on top of it. I have had the same thing happen. My assumption is that TACC was locked on the Prius, which was still moving, so it was simply trying to follow it. We are WAY too early in this tech to simply let go and allow it to take care of us. Personally, I think it will take quite some time to get there. Some sort of AI? I just don't know.
 
Anyone who has used adaptive cruise control on other cars will recognize that this worked exactly as it is supposed to. Nothing to see here, folks -- just new customers learning how adaptive cruise control works. It will only stop for a stopped car if it earlier locked onto that car while it was moving. It's pretty simple.
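That described behavior fits in a few lines of code. To be clear, this is a sketch of how adaptive cruise controllers of this era are commonly described as behaving, not Tesla's actual implementation, and the motion threshold is invented.

Code:
# Toy model of the lock-on rule: only acquire *moving* targets.
class ToyACC:
    def __init__(self):
        self.locked = False

    def update(self, target_present, target_speed_mps):
        if not target_present:
            self.locked = False
            return "cruise at set speed"
        if self.locked:
            # Once locked, keep following even down to a standstill.
            return "follow target (will stop if it stops)"
        if target_speed_mps > 0.5:  # hypothetical motion threshold
            self.locked = True
            return "lock acquired, following"
        # A car already stationary when first seen is ignored: the radar
        # can't tell it apart from a sign, a bridge, or a parked car.
        return "cruise at set speed -- DRIVER MUST BRAKE"

acc = ToyACC()
print(acc.update(True, 0.0))   # stopped car appears first: ignored!
print(acc.update(True, 15.0))  # moving car: lock on
print(acc.update(True, 0.0))   # it stops: we stop with it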
 
It will only stop for a stopped car if it earlier locked onto that car while it was moving. It's pretty simple.

Is this confirmed? I believe I may have had TACC lock onto a stopped vehicle and stop normally before. Although maybe even the slightest movement could have triggered TACC to lock on.

I understand that you guys are comparing TACC to adaptive cruise control on other brands, but isn't TACC too different from those to say that they all work the same?