Annoying braking on TACC - is this normal?

I don't have an issue with it just "showing" vehicles being in the wrong position. I do have an issue with it braking (hard) for them.

It’s likely related, though. It currently doesn’t seem to be able to measure accurately where objects truly are. So objects move erratically on the screen and if the TACC uses the same data, it panics and brakes. I’d expect them to sort this out eventually but right now that’s the quality of Tesla’s image processing and vision AI.
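To make that concrete, here’s a small illustrative Python sketch (made-up numbers and logic, not anything from Tesla’s actual code): if the control logic reacts to single-frame position estimates instead of a smoothed track, ordinary frame-to-frame jitter can look like a car cutting into your lane.

```python
# Illustrative only -- invented thresholds and noise, not Tesla's actual code.
import random

CUT_IN_THRESHOLD_M = 1.2   # assumed lateral gap at which TACC would react

def raw_estimates(true_lateral_m=2.0, noise_m=1.0, n=20):
    """Simulated vision output: a neighbor car that never moves, plus per-frame jitter."""
    return [true_lateral_m + random.uniform(-noise_m, noise_m) for _ in range(n)]

def would_brake(estimates, threshold=CUT_IN_THRESHOLD_M):
    """Naive logic: brake the moment any single frame looks like a cut-in."""
    return any(e < threshold for e in estimates)

def smoothed(estimates, window=5):
    """Moving average over full windows, damping the frame-to-frame jitter."""
    return [sum(estimates[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(estimates))]

frames = raw_estimates()
print("raw frames trigger braking:     ", would_brake(frames))
print("smoothed frames trigger braking:", would_brake(smoothed(frames)))
```

With the made-up numbers above, the raw per-frame check brakes on most runs even though the simulated car never moves, while the smoothed check almost never does. That matches the feel of phantom braking triggered by objects jumping around on the screen.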
 
In the autonomy section it has been shown that the car sees more than is shown on the display. So it is fair to say the display isn’t what the car makes decisions on. We used to not have a representation of what the car sees, and sometimes I wonder if we would be better off if they hadn’t added it in.
 

Yeah, but that was a curated tech demo intended to convince investors.

You don’t know what hardware and software was used for this demo unless you are a Tesla employee with insider knowledge.

The self-driving car may have had a trunk full of computer hardware and not a small HW3 board.

I don’t understand why people think that the automation code somehow gets better data than the visualization. Why wouldn’t the visualization use the allegedly better data that the automation uses? It seems bizarre.

Isn’t it likely that, with the released software, the automation gets the same data as the visualization, and hence you get random panic braking because, just like the visualization shows, the automation code thinks there’s suddenly a car about to enter your lane?

Unless the FSD proponents on this forum have insider knowledge, I would not believe them over the observed behavior of the car.
 
I think you misunderstood me. @verygreen has video showing what the AP cameras see, and it differs from what the display shows (the cameras see more than is displayed). The automation code runs on different hardware than the visualization. That is why you can restart your MCU and still be in AP.
 

Of course you can restart the MCU without affecting the AP.

It would not have occurred to me that the visualization uses a separate image processor and neural net to detect objects.

Are you saying that the visualization has its own object-detection neural net???

The data path is likely:

Cameras -> Image Processor -> Neural Net (object detection/positioning), and that data then gets fed into both the automation logic (running on the AP/FSD computer) and the MCU running the visualization code.
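To sketch what I mean (all names hypothetical, this is speculation about the architecture, not Tesla source code), the key point is one perception output with two consumers:

```python
# Hypothetical sketch of that data path -- invented names, not Tesla source code.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    object_id: int
    kind: str      # "car", "truck", ...
    x_m: float     # longitudinal position relative to our car
    y_m: float     # lateral position relative to our car

def run_vision_net(camera_frames) -> List[Detection]:
    """Stand-in for the image processor + object detection/positioning net."""
    return [Detection(object_id=1, kind="car", x_m=35.0, y_m=1.1)]

def automation_logic(detections: List[Detection]) -> str:
    """Consumer #1: the TACC/AP planner on the AP/FSD computer."""
    if any(d.x_m < 40.0 and abs(d.y_m) < 1.2 for d in detections):
        return "brake"
    return "maintain speed"

def render_visualization(detections: List[Detection]) -> List[str]:
    """Consumer #2: the MCU display, fed from the same detections."""
    return [f"{d.kind} at ({d.x_m:.0f} m, {d.y_m:.1f} m)" for d in detections]

detections = run_vision_net(camera_frames=None)
print(automation_logic(detections))       # same detections drive the braking decision...
print(render_visualization(detections))   # ...and what the screen shows
```

If the layout is anything like that, noise in the perception output shows up in both places: a phantom car jumping around on the screen and a phantom braking event from the planner.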

So if the currently released (to the general public) AP/FSD software is basic enough to run on HW2.5, and if it doesn’t utilize the power of HW3.0 on cars so equipped, then it’s not what they showed during their FSD pitch at the investor event.

I’d be careful trusting the released system.