
Another tragic fatality with a semi in Florida. This time a Model 3

There is a push for this regulation in the United States, with some cities and private organizations already signing on, but it does not seem to be ubiquitous. Side guards are not only considerably safer; for trucks traveling long distances on freeways, there are also fuel-efficiency gains. Source: Truck Side Guards Resource Page

Unfortunately, our government has become increasingly polarized, and I believe it is becoming difficult to get anything legislated amid all the distraction. In this case, that is putting people's lives at risk.
While true, that's probably not the reason. U.S. legislators have never made safety regulations for industry unless the issue directly affected them. (Repeated info, so skip if already known.) In the days of the railroad barons, Westinghouse developed a safety brake for trains that would reduce the number of accidents. Every railroad baron opposed installing it because, according to them, it would hurt profits and bankrupt the railroads. Eventually a Congressman's family was killed in a railroad accident and legislation was pushed through. It turned out this made the railroad barons more money than before, because they could schedule more trains on a given track. Freight companies not putting on side guards is the same flawed logic. That logic is also why the U.S. is not leading the way in renewable energy and is now calling gas "freedom molecules" instead of "death molecules".
 
I believe the relative change and re-detection passes are what is confusing Autosteer. The NN really seems to have a problem computing the borders of the truck. This should not be that hard, yet it is clearly struggling once the truck gets closer, whereas at a distance it does a fairly respectable job.
As far as I can tell from all the Autopilot videos I've watched, the system has no concept of object permanence and only analyzes individual frames.
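
Purely to illustrate what that would mean in code, here is a minimal sketch (Python, with invented names, and in no way Tesla's actual stack) of frame-independent detection plus the crudest possible fix: a temporal "debounce" that only reports an object once it has shown up in several consecutive frames.

```python
from collections import deque

def toy_detector(frame):
    """Stand-in for a single-frame NN detector; here a 'frame' is just
    the set of labels the detector happened to fire on."""
    return set(frame)

class Debouncer:
    """Report an object only once it has appeared in each of the last
    n frames -- the crudest possible form of object permanence."""
    def __init__(self, n=3):
        self.history = deque(maxlen=n)

    def update(self, detections):
        self.history.append(set(detections))
        if len(self.history) < self.history.maxlen:
            return set()
        return set.intersection(*self.history)

# A truck that flickers in and out from frame to frame:
frames = [{"truck"}, set(), {"truck"}, {"truck"}, {"truck"}]
d = Debouncer(n=3)
for i, f in enumerate(frames):
    print(i, sorted(d.update(toy_detector(f))))
```

Note that a real implementation trades latency for stability here: the debounced output reacts n-1 frames late, which matters at freeway closing speeds.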
 
That seems to be the case, and I'm wondering why they've avoided object permanence; there must be a reason Tesla decided against it. I don't believe this is how our brains work: we seem to attach tags and identifiers to objects, using memory and prior recognition to do so. My guess is there's a tradeoff Tesla doesn't want to entertain: a false positive that persists even when the initial detection was wrong.

Even if the system fails to attach an accurate label to an object it has positively detected in the past, it should at least get the object's border regions right.
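
For what border stability might look like downstream of the detector, here is a hedged sketch (hypothetical, not Tesla's code) of IoU-based track association: each new box is matched to the track it overlaps most, so a truck detected in earlier frames keeps its identity, and its borders could then be smoothed over time.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedy matching: detections inherit the id of the best-overlapping
    track; unmatched detections start new tracks."""
    next_id = max(tracks, default=-1) + 1
    updated = {}
    for det in detections:
        best_id, best = None, threshold
        for tid, box in tracks.items():
            if tid not in updated and iou(box, det) > best:
                best_id, best = tid, iou(box, det)
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        updated[best_id] = det
    return updated

tracks = {0: (100, 100, 300, 200)}                 # truck from the last frame
dets = [(110, 105, 310, 205), (400, 50, 450, 90)]  # truck moved + a new object
print(associate(tracks, dets))                     # truck keeps id 0, new object gets id 1
```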

This may be the ultimate problem right now -- if the issue is the white truck and its relative contrast against the sky, it would be fascinating to see the same test against a darker truck, or one painted fire-truck red. Perhaps Tesla would do better with a full-spectrum camera, applying multiple filter passes to distinguish the truck's surface from the sky.

If I had the time, I'd take some IR captures of an image stack and see how the contrasted edges change.
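
In the same spirit, a toy version of that experiment (with fabricated pixel values; with real footage you would crop actual truck and sky regions) could measure how separable the two regions are in each channel -- and, by extension, whether an extra band like near-IR would help:

```python
import numpy as np

rng = np.random.default_rng(0)
# Fabricated 8-bit RGB patches: an overcast sky and a white trailer side.
sky   = rng.normal([210, 215, 225], 6, size=(50, 50, 3)).clip(0, 255)
truck = rng.normal([220, 218, 212], 6, size=(50, 50, 3)).clip(0, 255)

for ch, name in enumerate("RGB"):
    gap = abs(truck[..., ch].mean() - sky[..., ch].mean())
    noise = (truck[..., ch].std() + sky[..., ch].std()) / 2
    print(f"{name}: mean gap {gap:5.1f} levels, ~{gap / noise:4.1f} sigma")
# If no visible channel separates truck from sky, an additional band
# (e.g. near-IR) or a polarizing filter is exactly what this test would probe.
```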
 
@verygreen, correct me here: this is the feed from the NN doing object detection and recognition, not the output from the higher-level path-determination code where object tracking and persistence would take place.
 
I believe the relative change and re-detection passes are what is confusing Autosteer. The NN really seems to have a problem computing the borders of the truck. This should not be that hard, yet it is clearly struggling once the truck gets closer, whereas at a distance it does a fairly respectable job.

My bet is that automakers and research universities worldwide have been working on this same problem, and the fact that it has remained a problem for this long implies it is a difficult one to solve.
 
The issue seems to be related to feature extraction (computing edges, corners, blobs, ridges) and may arise at a lower level, at image-acquisition time, where the camera has a hard time differentiating the sky from the white panels of the truck.
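
To make that concrete, here is a small self-contained sketch (fabricated values, plain NumPy) showing why a gradient-based edge detector barely responds to a white trailer against a bright sky, while a dark trailer produces a strong edge:

```python
import numpy as np

def sobel_x(img):
    """Horizontal Sobel response, implemented by shifting (no SciPy needed)."""
    k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out[1:h-1, 1:w-1] += k[dy, dx] * img[dy:h-2+dy, dx:w-2+dx]
    return out

scene = np.full((8, 8), 230.0)   # bright sky
scene[:, 4:] = 235.0             # white trailer: only 5 gray levels away
dark = scene.copy()
dark[:, 4:] = 60.0               # the same trailer painted dark

print("white-truck edge strength:", abs(sobel_x(scene)).max())  # weak
print("dark-truck edge strength: ", abs(sobel_x(dark)).max())   # strong
```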

Although not directly related, the human brain compensates for holes in its vision all the time, and we don't even realize it unless we test it in particular ways. Here is an example:

[embedded video example]

Presumably, as things progress, the NN should eventually be trainable to compensate for holes (occlusions) in its sensor data, so there may be several ways to address this phenomenon.
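
One generic way networks are taught to tolerate such holes is occlusion augmentation ("cutout" / random erasing) at training time: blank out random patches so the model cannot rely on any single region. Here is a minimal sketch of the augmentation itself; this is a standard technique, not a claim about Tesla's pipeline:

```python
import numpy as np

def random_erase(img, rng, max_frac=0.3):
    """Blank out a random rectangle covering up to max_frac of each side."""
    h, w = img.shape[:2]
    eh = rng.integers(1, int(h * max_frac) + 1)
    ew = rng.integers(1, int(w * max_frac) + 1)
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[y:y+eh, x:x+ew] = 0   # could also use the mean pixel or noise
    return out

rng = np.random.default_rng(42)
img = np.ones((64, 64), dtype=np.float32)
print("erased fraction:", 1 - random_erase(img, rng).mean())
```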
 
While true, that's probably not the reason. U.S. legislators have never made safety regulations for industry unless the issue directly affected them. (Repeated info, so skip if already known.) In the days of the railroad barons, Westinghouse developed a safety brake for trains that would reduce the number of accidents. Every railroad baron opposed installing it because, according to them, it would hurt profits and bankrupt the railroads. Eventually a Congressman's family was killed in a railroad accident and legislation was pushed through. It turned out this made the railroad barons more money than before, because they could schedule more trains on a given track. Freight companies not putting on side guards is the same flawed logic. That logic is also why the U.S. is not leading the way in renewable energy and is now calling gas "freedom molecules" instead of "death molecules".

Here is some recent activity by the Government Accountability Office (GAO), including a request for additional "study." This is amazing to me: the research is already out, so they should be able to fast-track this.

See:

FMCSA, NHTSA Should Study Side Underride Guards, GAO Says

There is more (quoting):

The study also raised questions about the long-term viability and industry acceptance of side underride guards, which are still in the developmental stage and for which no federal standards currently exist.

“Side underride guards are being developed, but stakeholders GAO interviewed identified challenges to their use, such as the stress on trailer frames due to the additional weight,” GAO said. “NHTSA has not determined the effectiveness and cost of these guards, but manufacturers told GAO they are unlikely to move forward with development without such research.”

Regarding single-unit trucks, such as dump trucks, the National Transportation Safety Board has recommended that NHTSA develop standards for underride guards, but the agency has concluded that “these standards would not be cost-effective,” GAO said.
To your point, I hope it doesn't take a death in the family of a government official to press this forward, but families have often been able to push legislative change after taking it on themselves to bring visibility to these issues.
 
The issue seems to be related to feature extraction (computing edges, corners, blobs, ridges) and may arise at a lower level, at image-acquisition time, where the camera has a hard time differentiating the sky from the white panels of the truck.

Although not directly related, the human brain compensates for holes in its vision all the time, and we don't even realize it unless we test it in particular ways. Here is an example:

[embedded video example]

Presumably, as things progress, the NN should eventually be trainable to compensate for holes (occlusions) in its sensor data, so there may be several ways to address this phenomenon.

Hence my premise: just let drivers see these video feeds in real time, and provide audio/visual cues too.

Just imagine this is a fighter cockpit; give the pilot a chance by alerting them to incoming projectiles, damn it!@#$!
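
As a sketch of what such a cockpit-style cue could look like (thresholds and names invented purely for illustration), the logic is just time-to-collision driving an escalating alert:

```python
def time_to_collision(distance_m, closing_speed_ms):
    """Seconds until impact at the current closing speed."""
    return float("inf") if closing_speed_ms <= 0 else distance_m / closing_speed_ms

def alert_level(objects):
    """objects: list of (label, distance_m, closing_speed_m_per_s) in our path."""
    worst = min((time_to_collision(d, v) for _, d, v in objects),
                default=float("inf"))
    if worst < 1.5:
        return "BRAKE: audible alarm + flashing overlay"
    if worst < 4.0:
        return "WARN: highlight the object on the live feed"
    return "OK"

print(alert_level([("truck", 45.0, 20.0)]))   # 2.25 s to impact -> WARN
```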
 
I don't use auto lane changing. The people who went 90 mph into a wall most likely were asked to wiggle the steering, pulled hard enough to disengage AP, and the car swerved into the wall. That is one reason they added the encoders to the "are you there" request. It's not autonomous; it's just assist. I use it every week driving from my home to LA (2.5 hrs), I have learned all of its oddities, and I stay very alert. Not having to grip the wheel under stress is the only thing that saves my brain.

I'm actually more aware of my surroundings, since I can look around a bit more at other drivers and stay away from the ones using a cell phone or not paying attention.
 
Confirmation of lane changes should be required until we reach FULL SELF DRIVING.
Right now Autopilot is a driver-assist feature that requires the driver to always be in control.
Don't blur the line between assist and FSD; it causes confusion for average drivers.
 
Auto lane change only works if it actively detects hands on the wheel, so there's no blur there: it won't work if the driver isn't remaining in control.
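
The gate itself is trivial; as a toy sketch (function and parameter names made up, not Tesla's API), the point is simply that no detected hands means no lane change:

```python
def auto_lane_change_allowed(hands_on_wheel: bool, signal_on: bool,
                             lane_clear: bool) -> bool:
    # No detected driver torque on the wheel -> no lane change,
    # regardless of everything else.
    return hands_on_wheel and signal_on and lane_clear

assert auto_lane_change_allowed(True, True, True)
assert not auto_lane_change_allowed(False, True, True)  # driver not in the loop
```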
 

I know the conclusion and final report, to come Feb. 25th, will be interesting. They have to investigate these occurrences, but since the driver is responsible for maintaining control of their Tesla at this point in self-driving's development, I don't know how Tesla can be held responsible.
 