I know you said “informed”, and I am not informed. But if the car works in Phoenix when it is 118 °F, I doubt the heaters get that hot when it is below freezing, especially near the actual imaging device.
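For a rough sense of scale, here is a back-of-the-envelope sketch. It leans on a common rule of thumb for silicon image sensors (an assumption here, and the exact figure varies by device, not anything from a Tesla spec): dark current roughly doubles for every ~6 °C of die temperature, and the shot noise it contributes grows as the square root of the dark current.

```python
import math

# Rule-of-thumb sketch (assumed numbers, not from any camera datasheet):
# dark current in a CMOS sensor roughly doubles per ~6 C of die temperature,
# and dark-current shot noise grows as the square root of the dark current.
def dark_noise_factor(delta_t_c: float, doubling_c: float = 6.0) -> float:
    """Multiplier on dark-current shot noise after warming by delta_t_c degrees C."""
    return math.sqrt(2.0 ** (delta_t_c / doubling_c))

print(round(dark_noise_factor(6.0), 2))   # one doubling of dark current -> ~1.41x noise
print(round(dark_noise_factor(30.0), 2))  # +30 C -> 32x dark current -> ~5.66x noise
```

By this crude estimate the heater would need to warm the sensor die itself by tens of degrees before dark noise rose several-fold, and at video-rate exposure times read noise, rather than dark current, often dominates anyway.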
Does anyone have an informed opinion on the effect of the camera heaters on SNR? I know that hotter camera sensors = more noise. And I've read that NNs can be easily disrupted/fooled by simple image filters.
The claim that NNs can be easily fooled is misleading. It's only 'easy' if you know a lot of things with great precision and have very good control of the environment. This line of misinformation comes from research on NN model stability, which backtracked particular inference results to discover what it took to induce category changes in the output. By searching the space of all possible image changes for those with particular characteristics, researchers found instances in which very small and extremely precise changes to a particular image could produce an erroneous category. To a layman this is extremely counterintuitive, so it generates a lot of clickbait headlines. But this 'weakness' exists mainly in the theoretical realm, because exploiting it requires looking closely at a particular network's response to a very specific image. Very slightly changing the lighting, angle, focus, framing, etc. invalidates any particular miscategorization and returns the classification to the correct category.
What this means is that real-world vulnerabilities are very hard to find and virtually impossible to exploit. The reason this research occurs is to identify the particular strengths and weaknesses of various NN designs so they can be improved. The attention these discoveries attracted has led to a lot of work on reducing the brittleness behind such vulnerabilities, which helps produce networks that are just better overall: more resistant to noise, more capable of dealing with extremes of focus, lighting, obstruction and so forth. Of course research on failure modes continues, and surprising examples come up from time to time, but networks used in the real world are less susceptible to being systematically fooled than human drivers are.
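The construction described above can be sketched end to end on a toy model. This is a minimal gradient-sign (FGSM-style) example against a plain linear classifier, not any real vision network, and all numbers are illustrative. It shows both halves of the argument: a tiny but precisely aimed perturbation flips the class, while random noise of the same size does nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                       # stand-in for "pixels"
w = rng.normal(size=n)           # stand-in for a trained model's weights

# Build an "image" the model classifies as class 1 with a score of exactly +10.
x = rng.normal(size=n)
x += (10.0 - w @ x) / (w @ w) * w      # project so that w @ x == 10

def predict(v: np.ndarray) -> int:
    return int(w @ v > 0)              # class 1 if the score is positive

eps = 0.002                            # ~0.2% of typical pixel magnitude

# Precisely aimed change: step every pixel against the gradient sign.
# This shifts the score by -eps * sum(|w|), roughly -16 here: enough to flip.
x_adv = x - eps * np.sign(w)

# Unaimed change of the same per-pixel size: random +/-eps noise.
x_noise = x + eps * rng.choice([-1.0, 1.0], size=n)

print(predict(x))        # 1
print(predict(x_adv))    # 0 -- flipped by a change no human would notice
print(predict(x_noise))  # 1 -- same-magnitude random noise changes almost nothing
```

Note that `x_adv` only works because every component was tuned against this exact model and input; re-exposing, re-cropping, or slightly transforming the image would typically undo it, which is the point made above.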
I completely agree with everything you've said above with regards to adversarial images (e.g. the clickbait "this panda with a tiny bit of invisible noise is now a nematode" attacks)…
But other forms of attacks against Autopilot and similar ADAS systems are probably much lower-hanging fruit and realistic real-world attacks. AP1 can already be seen misreading I-80 signs as 80 mph. It follows pavement grooves instead of lane lines. I'm much more worried about what happens when someone deliberately paints some sort of shiny dark red (just an arbitrary example) false lane line leading into a cliff, such that no human in their right mind would take it seriously but AP1/AP2 might. Or false road signs, or false parking spots leading into a ditch, or large video screens on the back of a trailer depicting a car accelerating towards you, etc.
From a security standpoint, it still worries me that building a working computer vision + distance/speed sensor based ADAS system seems hard enough that we've barely started considering how to build one that is robust against an attacker maliciously intending to fool it.
But as a Tesla customer, that's not what I care about right now. I'd like to see wipers that work, lane following that works, and maybe stopping at a stoplight. I'll gladly volunteer to be on the lookout for Wile E. Coyote style painted tunnels.
A proper self driving system should not follow the fake lines into nothing, because the path the lines lead to is not recognized as a drivable area.
On the other hand, a camera-only system can be fooled by 3D-looking paintings, such as those 3D boxes someone has drawn on the road as a pedestrian crossing.
I agree very much. The system should never rely on map info that it can't see/confirm for itself. But the system can use maps as an additional source of info, triggering extra carefulness based on what other vehicles have experienced before, to increase general safety above what a human driver would be able to see. Maps can also be used to reduce trust in visuals that "don't make sense" (e.g. someone painted a tunnel on the wall: drive it really carefully).
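The policy described here can be sketched in a few lines. Everything below is hypothetical (function names, units, and the threshold are made up for illustration, not taken from any shipping system); the key property is that the map never overrides vision, it can only lower trust and add caution:

```python
# Hypothetical sketch of "maps as a second opinion": vision always decides
# where to steer, and disagreement with the map only ever *reduces* trust
# and adds caution -- it never makes the car follow the map off the road.
def plan(vision_offset_m: float, map_offset_m: float,
         disagree_threshold_m: float = 0.5) -> dict:
    """Each offset is that source's lane-center estimate (illustrative units)."""
    disagreement = abs(vision_offset_m - map_offset_m)
    return {
        "steer_toward": vision_offset_m,                  # never steer by map alone
        "caution": disagreement > disagree_threshold_m,   # slow down if sources clash
    }

print(plan(0.10, 0.15))  # sources agree: normal driving
print(plan(0.10, 1.40))  # construction rerouting? stay on vision, get cautious
```

The asymmetry is the design choice: when map and cameras disagree (a rerouted construction zone, or a painted tunnel the map says is a wall), the safe response is extra caution on the vision path, not blind trust in either source.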
Sure, prior knowledge mapping is one level of mitigation, though trusting maps more than the cameras think they see can be dangerous too (e.g. poorly communicated construction lane rerouting).
But at any rate, my general concern still remains — these systems are almost certainly not robust against intentional nefarious acts. I'm not saying they cannot be, I'm just saying I highly doubt any of the current systems being sold to consumers or demo'ed/tested have implemented sufficient defenses against various attacks meant to coerce the system into behaving unsafely.
Bad things happen, and eventually we all lose the game. Terrorism and mass shootings also happen and manage to make people "concerned", even though statistically they are so rare as to be insignificant. Yet we are so concerned about them, and about somebody taking the time to paint a lane line off a cliff. They are dramatic events which make for great media headlines. There is something about the idea of another human trying to harm you that frightens people out of all proportion to the odds of the event happening to any given individual.
Yet when it comes to events which DO affect huge numbers of people, we blithely ignore the facts. We do not spend our time wringing our hands with "concern" about the devastating long-term health effects of the poor dietary habits most Americans practice, for example.
I try to find confusing and tricky scenarios that might stress Autopilot, and this video is the best I can come up with to illustrate the above argument. The first situation does not require disengagement; the second one does... What that means for intentional acts to confuse autonomous vehicles is another matter.
My bad, try this.
Hmmm, did you move to the UK? Neither post plays without stating it's blocked.
No, but playing Franz Ferdinand in my video has unleashed the block police like no other. Bummer.
I connected my IPVanish VPN to servers in Germany and was able to watch the videos.