HW2.5 capabilities

Does anyone have an informed opinion on the effect of the camera heaters on SNR? I know that hotter camera sensors mean more noise, and I've read that NNs can be easily disrupted/fooled by simple image filters.
I know you said “informed”, and I am not informed.

But if the car works in Phoenix when it is 118 °F, I doubt the heaters get anywhere near that hot when it is below freezing, especially near the actual imaging device.
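For a rough sense of scale: dark current in CMOS image sensors roughly doubles for every ~6-8 °C of sensor temperature (a common rule of thumb, not a Tesla spec), so what matters is how much the heater warms the sensor die itself rather than the housing or the glass. A back-of-the-envelope sketch with assumed numbers:

```python
# Back-of-the-envelope: how sensor temperature affects dark-current noise.
# Assumption (rule of thumb, not a Tesla spec): dark current roughly doubles
# for every ~7 degC increase in sensor temperature.

def dark_current_factor(temp_c, ref_temp_c=25.0, doubling_deg_c=7.0):
    """Relative dark current compared to a reference temperature."""
    return 2.0 ** ((temp_c - ref_temp_c) / doubling_deg_c)

# Hypothetical sensor temperatures, in degC.
for temp in (-10, 0, 25, 45, 60):
    print(f"{temp:>4} degC -> {dark_current_factor(temp):6.2f}x dark current vs. 25 degC")
```

On those assumptions, even a heater that warmed the sensor from -10 °C to +10 °C would leave dark-current noise well below what the same sensor tolerates parked in Phoenix sun, so heating alone probably isn't the limiting factor for SNR.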
 

The claim that NNs can be easily fooled is misleading. It's only 'easy' if you know a lot of things with great precision and have very good control of the environment. This whole line of misinformation comes from research on NN model stability, which backtracked particular inference results to discover what it took to induce category changes in the output. By searching the space of all possible image changes for those with particular characteristics, researchers found some instances in which very small and extremely precise changes to a particular image could produce an erroneous category. To a layman this is extremely counterintuitive, and thus it results in a lot of clickbait headlines. But this 'weakness' exists mainly in the theoretical realm, because to exploit it you need to look closely at a particular network's response to a very specific image. Very slightly changing the lighting, angle, focus, framing, etc. invalidates any particular miscategorization and returns the classification to the correct category.

What this means is that real-world vulnerabilities are very hard to find and virtually impossible to exploit. The reason this research occurs is to identify the particular strengths and weaknesses of various NN designs so that they can improve. The attention these discoveries attracted has led to a lot of work on reducing the brittleness behind these vulnerabilities, which helps produce networks that are just better overall: more resistant to noise, more capable of dealing with extremes of focus, lighting, obstruction and so forth. Of course research on failure modes continues and surprising examples come up from time to time, but networks used in the real world are less susceptible to being systematically fooled than human drivers are.
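For anyone curious what that research actually looks like, the canonical technique is the fast gradient sign method (FGSM): the attacker needs white-box access to the model's gradients to craft the perturbation, and a small change to the input, like a slight re-scale, is often enough to undo it. A minimal sketch in PyTorch against a stock ImageNet classifier (illustrative epsilon, a random stand-in image, and preprocessing omitted for brevity; nothing Autopilot-specific):

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

# White-box FGSM attack on an off-the-shelf ImageNet classifier (illustrative only;
# ImageNet normalization omitted for brevity).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image, label, eps=0.01):
    """Craft a tiny adversarial perturbation; requires access to the model's gradients."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

image = torch.rand(1, 3, 224, 224)          # stand-in for a real photo
label = model(image).argmax(dim=1)           # the model's original prediction

adv = fgsm(image, label)
print("original class:", label.item(), "adversarial class:", model(adv).argmax(dim=1).item())

# The attack is brittle: a slight down-and-up re-scale (a crude stand-in for a
# small change in framing or focus) often restores the original label.
jittered = TF.resize(TF.resize(adv, [200]), [224])
print("after re-scale:", model(jittered).argmax(dim=1).item())
```

The key point the sketch illustrates is the asymmetry: crafting the perturbation needs the model's gradients and a fixed image, while defeating it needs only a trivial change to the input.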
 

I completely agree with everything you've said above with regards to adversarial images (e.g. the clickbait "this panda with a tiny bit of invisible noise is now a nematode" attacks)….

But other forms of attacks against Autopilot and similar ADAS systems are probably much, much lower-hanging fruit and more realistic real-world attacks. AP1 can already be seen misreading I-80 shields as 80 mph speed limits. It follows pavement grooves instead of lane lines. I'm much more worried about what happens when someone deliberately paints some sort of shiny dark red (just an arbitrary example) false lane line leading into a cliff, such that no human in their right mind would take it seriously but AP1/AP2 might. Or false road signs, or false parking spots into a ditch, or large video screens on the back of a trailer depicting a car accelerating towards you, etc.

From a security standpoint, it still worries me that it seems hard enough to build a working computer vision + distance/speed sensor ADAS system that we've barely started considering how to build one that is robust against an attacker maliciously intending to fool it.

But as a Tesla customer, that's not what I care about right now. I'd like to see wipers that work, lane following that works, and maybe stopping at a stoplight. I'll gladly volunteer to be on the lookout for Wile E. Coyote style painted tunnels.
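On the sign-misreading point: one cheap layer of defense against that class of spoofing is a plausibility check, where a detected limit that wildly disagrees with the map's prior or with the current limit is not trusted on its own. A toy sketch of that idea (hypothetical function and thresholds, not how Tesla or anyone else actually does it):

```python
# Toy plausibility filter for vision-detected speed limits.
# Hypothetical logic and thresholds -- illustrates the idea, not Tesla's implementation.

def accept_speed_limit(detected_mph, map_limit_mph, current_limit_mph, max_jump_mph=25):
    """Reject sign readings that conflict too strongly with prior knowledge."""
    if map_limit_mph is not None and abs(detected_mph - map_limit_mph) > max_jump_mph:
        return current_limit_mph   # disagrees badly with the map prior -> keep old value
    if abs(detected_mph - current_limit_mph) > max_jump_mph:
        return current_limit_mph   # implausible jump from the current limit -> keep old value
    return detected_mph

# An "I-80" shield misread as an 80 mph limit on a 45 mph road gets rejected:
print(accept_speed_limit(80, map_limit_mph=45, current_limit_mph=45))  # -> 45
# A genuine 55 mph sign that matches the map is accepted:
print(accept_speed_limit(55, map_limit_mph=55, current_limit_mph=45))  # -> 55
```

The same sanity-check pattern applies to lane lines and parking spots: a single noisy detector gets much harder to spoof once its output has to agree with an independent source before the planner acts on it.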
 
A proper self-driving system should not follow fake lines into nothing, because the path the lines lead to would not be recognized as a drivable area.

On the other hand, a camera-only system can be fooled by 3D-looking paintings, such as the 3D boxes some cities have painted on roads as pedestrian crossings.
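That first point is essentially a consistency check between two perception outputs: only act on a lane hypothesis if the path it implies lies inside the region segmented as drivable. A toy sketch of that gating logic (made-up grid format, purely illustrative, not any shipping stack):

```python
import numpy as np

# Toy gate: trust a lane hypothesis only if the path it implies stays inside
# the drivable-area segmentation. Made-up data format, illustrative only.

def path_is_drivable(path_cells, drivable_mask, min_overlap=0.95):
    """path_cells: (row, col) grid cells the proposed path crosses.
    drivable_mask: 2D boolean grid from a (hypothetical) segmentation network."""
    inside = sum(drivable_mask[r, c] for r, c in path_cells)
    return inside / len(path_cells) >= min_overlap

drivable = np.zeros((10, 10), dtype=bool)
drivable[:, 3:7] = True                                       # road occupies columns 3-6

real_lane = [(r, 4) for r in range(10)]                       # stays on the road
painted_off_road = [(r, min(4 + r, 9)) for r in range(10)]    # fake line drifting off the road

print(path_is_drivable(real_lane, drivable))        # True  -> ok to follow
print(path_is_drivable(painted_off_road, drivable)) # False -> ignore that "lane"
```

Which is also why the second point bites: if the drivable-area estimate itself comes only from monocular cameras, a convincing enough 3D painting could fool both checks at once.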
 
Sure, prior-knowledge mapping is one level of mitigation, though trusting maps over what the cameras see can be dangerous too (e.g. poorly communicated construction lane rerouting).

But at any rate, my general concern still remains: these systems are almost certainly not robust against intentional nefarious acts. I'm not saying they cannot be; I'm just saying I highly doubt any of the current systems being sold to consumers or demoed/tested have implemented sufficient defenses against attacks meant to coerce the system into behaving unsafely.
 
I agree very much. The system should never rely on map info that it can't see/confirm for itself. But the system can use maps as another source of information, adding extra carefulness based on what other vehicles have experienced before, just to raise general safety above what a human driver would be able to see. Maps can also be used to reduce trust in visuals that "don't make sense" (e.g. if someone has painted a tunnel on a wall, drive that stretch really carefully).
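In other words, treat the map as a prior and the camera as a noisy measurement, weight each by its confidence, and slow down when they disagree. A minimal sketch of that kind of fusion (toy numbers and a made-up interface, nothing vendor-specific):

```python
# Toy fusion of a map prior with a camera estimate, weighted by confidence.
# Assumed interface and numbers -- illustrates "use the map to add caution",
# not any particular vendor's implementation.

def fuse(map_value, map_conf, cam_value, cam_conf):
    """Confidence-weighted average of two estimates (e.g. lane-center offset in metres)."""
    return (map_value * map_conf + cam_value * cam_conf) / (map_conf + cam_conf)

def caution_level(map_value, cam_value, disagreement_threshold=1.0):
    """Large disagreement between map and camera -> slow down and flag the driver."""
    return "cautious" if abs(map_value - cam_value) > disagreement_threshold else "normal"

# Camera reports something that "doesn't make sense" relative to the map:
print(fuse(map_value=0.0, map_conf=0.7, cam_value=2.5, cam_conf=0.3))  # 0.75, pulled toward the map
print(caution_level(0.0, 2.5))                                         # "cautious"
```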
 

Bad things happen, and eventually we all lose the game. Terrorism and mass shootings also happen and manage to make people "concerned", even though statistically they are so rare as to be insignificant. Yet we are so concerned about them, and about somebody taking the time to paint a lane line off a cliff. They are dramatic events which make for great media headlines. There is something about the idea of another human trying to harm you that irrationally frightens people out of all proportion to the odds of the event happening to any given individual.

Yet when it comes to events which DO affect huge numbers of people, we blithely ignore the facts. We do not spend our time wringing our hands with "concern" about the long-term, devastating health effects of the poor dietary habits most Americans practice, for example.
 

This is true. The lack of perceived control is a factor in fear, which is why flying scares people much more than driving.
 
I try to find confusing and tricky scenarios that might stress Autopilot, and this video is the best I can come up with to illustrate the robustness concern above. The first situation does not require a disengagement; the second scenario does... What this means for intentional attempts to confuse autonomous vehicles is another matter.

 
Last edited:
  • Disagree
Reactions: Christopher1
 
Curious as to whether the 2nd scenario really did need a disengagement. We have seen that the system can detect construction, and I've experienced it following (taller) cones. Or would it have just stopped for the parked vehicle? Maybe we will never know...
 

Agreed that AP2 has much bigger problems than image-spoofing attacks. This whole line of discussion is just ridiculous. If you want to create havoc on the highways there are a million trivial ways to do it, all of which will land you in jail. Nobody is going to bother with exotic vulnerabilities except to prove a stupid point and gather the attention of clickbait-craving future malware victims.
 