Security from the frontlines... (OR How to Hack A Model S talk at DEFCON)

Yes, not a scary hack at all; physical access means you can do anything. Why shut off the car with all this effort when you can just cut the brake lines?

Right. IMO, the security risk is like 2/10. This is definitely not something a valet is going to pull off while they have your car for a couple of hours. It requires technical knowledge, dismantling complex car components, having special Ethernet connectors, cracking into Tesla's VPN, obtaining a rolling code, broadcasting another code somehow derived from the first to wake up the blocked service port, then finally logging into the MCU using the token and injecting malicious code. Extremely far-fetched.

As Ingineer points out, they might as well cut the brake lines or loosen a few bolts if they're going to all that trouble.
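
To give a feel for what that "rolling code to wake the service port" step amounts to, here's a minimal sketch of a generic HMAC challenge-response unlock. To be clear, this is only my guess at the general shape of such a scheme -- the secret provisioning, the names, everything here is an assumption, not Tesla's actual protocol:

```python
import hashlib
import hmac
import os

# Hypothetical per-car secret; in a real system this would be provisioned
# securely at the factory, never hard-coded like this.
SHARED_SECRET = b"per-car-secret-provisioned-at-factory"

def issue_challenge() -> bytes:
    """Gateway side: hand out a fresh random nonce, never reused."""
    return os.urandom(16)

def answer_challenge(nonce: bytes) -> bytes:
    """Service-tool side: prove knowledge of the secret without revealing it."""
    return hmac.new(SHARED_SECRET, nonce, hashlib.sha256).digest()

def unlock_service_port(nonce: bytes, response: bytes) -> bool:
    """Gateway side: constant-time compare before enabling the blocked port."""
    expected = hmac.new(SHARED_SECRET, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

The point of a scheme like this is that sniffing one unlock exchange gets you nothing reusable -- which is exactly why the researchers needed Tesla's token rather than a replay.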
 
They "welcome further research" on the aspects I thought they would be addressing... even with a gateway, I'd be looking at methods to traverse that gateway, one of them being the software update mechanisms (as I describe above).

The firmware bundle contains what seem to be drive system firmware images. We did not evaluate any of the security mechanisms of this update process and welcome further research in this area.
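
For what it's worth, the standard hardening on that update path is a detached signature over the bundle, checked against a vendor public key baked into the car, with the signing key kept offline. A minimal sketch of such a check, assuming an Ed25519 key and the third-party `cryptography` package -- this illustrates the general technique, not Tesla's actual bundle format:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def bundle_is_authentic(bundle: bytes, signature: bytes, pubkey_bytes: bytes) -> bool:
    """Accept a firmware bundle only if its detached Ed25519 signature verifies
    against the vendor key shipped with the vehicle (pubkey_bytes, 32 bytes)."""
    pub = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        pub.verify(signature, bundle)  # raises InvalidSignature on any mismatch
        return True
    except InvalidSignature:
        return False
```

With the private key offline, even root on the CID can't forge a bundle that passes this check; the open question the researchers left behind is whether every consumer of the bundle actually enforces something like it.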
 
Probably cuz they didn't want to brick their car, which is understandable. I myself have zero concerns about someone sending a malicious update to the car. I don't install any update until I've checked it out on TMC, and believe me, Tesla would pull the update within 24 hours at any rate.
 
Right. IMO, the security risk is like 2/10. This is definitely not something a valet is going to pull off while they have your car for a couple of hours. It requires technical knowledge, dismantling complex car components, having special Ethernet connectors, cracking into Tesla's VPN, obtaining a rolling code, broadcasting another code somehow derived from the first to wake up the blocked service port, then finally logging into the MCU using the token and injecting malicious code. Extremely far-fetched.

As Ingineer points out, they might as well cut the brake lines or loosen a few bolts if they're going to all that trouble.

All true. But there were still some pretty basic admin/network "issues" noted once inside the perimeter. These will undoubtedly be hardened going forward, but they're the typical soft inside that gets exploited once another vector through the hard outside is discovered.
 
Probably cuz they didn't want to brick their car, which is understandable. I myself have zero concerns about someone sending a malicious update to the car. I don't install any update until I've checked it out on TMC, and believe me, Tesla would pull the update within 24 hours at any rate.

My point upthread is that Tesla may not know about an update, if it's possible for someone to bypass Tesla's signatures through root on the CID. Certainly they didn't want to brick the car - but maybe there's something cheap enough for researchers to get their hands on. :)

Here, they had root on the CID from physical access, and were not able to figure out remote exploit vectors. It sounds like Tesla did some more jailing/walling of the browser to help limit that as an infection vector. But - like the researchers concluded in their point about "hard shell, soft chewy center" - we have to assume that at some point it may be possible for someone to gain root access in the CID, and figure out what can be limited there.

So yes, this particular hacking effort was not a big deal, and I hope it causes Tesla to look more closely at closing some other potential holes (at a minimum, having the IP/CAN gateway do some sort of validation of update sources, if it does not already do that). I'm happy that they have convinced the researchers that this is a relatively safe car.
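
On the "jailing/walling" point above: the classic Unix move is to confine the browser to a throwaway filesystem and irrevocably drop privileges before exec'ing it. A bare-bones sketch of the idea -- the paths and IDs are invented, and a real deployment would add namespaces, seccomp filtering, and more:

```python
import os

# All of these values are invented for illustration.
JAIL_ROOT = "/var/jail/browser"   # minimal filesystem with only what's needed
UNPRIV_UID = 1001
UNPRIV_GID = 1001
BROWSER = "/bin/browser"          # path as seen from inside the jail

def launch_jailed_browser() -> None:
    """Confine the browser to a jail, then irrevocably drop root before exec."""
    os.chroot(JAIL_ROOT)           # requires root; browser now sees only the jail
    os.chdir("/")                  # don't keep a working directory outside it
    os.setgid(UNPRIV_GID)          # drop group first...
    os.setuid(UNPRIV_UID)          # ...then user; after this there's no way back
    os.execv(BROWSER, [BROWSER])   # replace this process with the browser
```

Even if the browser is then popped via a web bug, the attacker lands in an unprivileged jail and still has to find a separate escalation before touching anything that matters.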
 
Honestly, I think any hack that involves more than 30 seconds of physical access to the car is not a real hack. If it can't be done completely remotely, or at the very least by a sneaky person plugging something into a diagnostic port during valet parking or something similar, then other physical "hacks" are more effective, as others have mentioned (cutting the brake lines, loosening an important bolt or two, etc.).

I don't consider a hack that requires removing parts of my dash to be anything I'd have even the slightest concern about in the real world. So I really hope people don't read too much into the headlines on this particular "hack." While it's impressive that they were able to get so far with physical access, this isn't useful to anyone as far as causing the kind of mayhem a "legit" hack would have the potential to cause.

If they come back later with a hack where visiting a particular website in the in-dash browser, or loading a particular file or song from a USB drive, gives them over-the-air remote access to something... then I'll be concerned. But I don't see any avenue where that would be possible.

I think if some entity really wanted to hack the Model S remotely they would have to do so by bribing or blackmailing key Tesla employees, or personally infiltrating the same.
 
So, I was at DEFCON and saw this talk and I came away seriously unimpressed. But not for the reasons you might think.

For background, I have spent my career (25+ years) bouncing back and forth between the computer security and embedded Unix worlds. I've spent hundreds, maybe thousands, of hours thinking about how to build systems that are resilient in the face of attacks like this, and a fair bit of time breaking into competitors' gear to see how they did it.

I walked out of this talk with a few main things in mind:

1) The researchers "just happened to" have set themselves ground rules that "just happened to" prevent them from investigating any lines of attack on the car that would have been really technically hard. Setting the rule that they wouldn't alter persistent state (in-memory attacks only) meant they didn't have to -- because they "weren't allowed to" -- investigate propagation of compromise via the firmware bundles that the CID holds for other devices. Setting the rule that they would attack infotainment systems only meant that they didn't have to conduct any kind of remote-oracle attack on the gateway and its VAPI interface, or even figure out what it was or see whether it had any other vulnerabilities, such as boot-time vulnerabilities. It also meant they didn't have to really dig into the boot process for any of the elements of the system to see whether there was a vulnerability there that could allow persistent compromise after a single remote attack, or loading of unsigned firmware, etc.

Every one of these lines of attack was used in Miller & Valasek's Jeep work, every one represented an impressive technical achievement, and every one of them yielded results. So from my point of view, why didn't these guys achieve remote compromise of the Model S? Because -- giving them the benefit of the doubt, I'll say it was accidental and probably more aimed at not having to buy their friend a new $100K car -- at the outset, they set ground rules that ensured they would not do the very hard work that might have produced that result.

I think unfortunately it's telling that they also gave up "after a few hours" at the task of exploiting a known remote code execution bug in the browser (a remote exploit!) "because they had an easier way already" via physical access. Is it a big pain in the tuchas to bootstrap a browser you can crash remotely (and know why, and know memory's being overwritten) into a remote shell on a system with a funny architecture and oddball C runtime? Sure. Once you can reproduce the crash, do you know as a security researcher it's possible? Pretty much yes, and if you already have another means of access to the system it's trivial to confirm this. What's in between? Potentially, weeks and weeks of fiddly, unpleasant, technically demanding work. They just didn't do it.
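
To make the boot-process angle in point 1 concrete: the standard defense against persistent compromise is a verified boot chain, where an immutable ROM stage checks a hash (or signature) on the next stage before jumping into it, and each stage repeats the check for the next. A toy sketch of the hash-chain version -- the layout and anchor value are generic illustrations, not anything known about Tesla's actual boot ROM:

```python
import hashlib

# Toy verified-boot chain. The ROM anchor would be fused in at manufacture;
# here it's computed from a stand-in stage-1 image purely for illustration.
STAGE1_IMAGE = b"stand-in stage 1 bootloader"
ROM_ANCHOR = hashlib.sha256(STAGE1_IMAGE).hexdigest()

def verify_stage(image: bytes, expected_sha256: str) -> None:
    """Halt the boot if a stage doesn't hash to the value its parent expects."""
    if hashlib.sha256(image).hexdigest() != expected_sha256:
        raise RuntimeError("boot halted: stage hash mismatch (possible tamper)")

def boot(stage2_image: bytes, stage2_hash: str) -> None:
    verify_stage(STAGE1_IMAGE, ROM_ANCHOR)   # ROM verifies the first mutable stage
    verify_stage(stage2_image, stage2_hash)  # stage 1 verifies stage 2, and so on
    # ...only after both checks pass would control jump into stage 2
```

An attacker with one-shot remote code execution then can't persist across a reboot without also breaking a check somewhere in that chain -- which is exactly the sort of thing the ground rules kept these researchers from probing.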

2) The presenters' repeated (and somewhat disrespectful, in my opinion) comparison of their work to the Jeep work really just made them seem inexperienced with embedded systems and vehicle systems, and unaware of the details of the Jeep work (which had been presented at Black Hat several days earlier). It didn't say much about the architecture of the Jeep or the Tesla, or about either party's attacks.

For example, there was repeated, emphatic praise of the Tesla "architecture" and slagging of the Jeep. But in fact, architecturally, these two systems for mediating user input/output and communicating on the CANbus with vehicle systems are strikingly similar. In particular, both cars embed in the "head unit" a separate processor which takes abstract commands from the touchscreen and performs all CANbus communication. In the Jeep, this "gateway" is connected to the infotainment system CPU by an SPI bus. In the Tesla, it's connected by an Ethernet. Neither car actually allows any user-exposed CPU to perform CANbus messaging - in fact, since the Tesla has far more other stuff connected to the bus (Ethernet) that can talk to the gateway, thus far more attack surface, I'd have to say "advantage-Jeep!" on this one.

So why did the Jeep guys manage to do arbitrary CANbus messaging while the Tesla guys did not? The answer seems pretty clear: they actually bothered to try! They reverse-engineered all four of the relevant pieces: the gateway's firmware, the managed firmware-update process for devices other than the touchscreen, the touchscreen-to-gateway protocol, and the car's CANbus messaging. If the Tesla guys attempted any of these things, they would appear to have failed -- though they did expressly say they did not try some of them because of their "ground rules".

I think it is a pretty fair suspicion on my part that had Miller & Valasek had a go at these interfaces on the Tesla as they did on the Jeep, they would have found something. This stuff is freaking hard to get right (having gotten my fair share of it wrong myself, in my career, and likely to do so again).

So -- given the known browser bug, equivalent in effect to, though more challenging to exploit than, the Jeep's exposed D-Bus port -- why wasn't there a remote compromise of the Model S that allowed arbitrary CANbus messaging, like there was for the Jeep? My take on it is that nobody did the (potentially very) hard work, for whatever reason. That's not to say there wasn't similarly hard work on the Jeep. If you want to disassemble and patch a bunch of nasty Super-H code using a buggy tool, be my guest.
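
For readers who want a feel for the mediation pattern both cars share, here's a minimal SocketCAN-flavored sketch of a gateway that translates only an allowlisted set of abstract head-unit commands into CAN frames and drops everything else. The command names, CAN IDs, and payloads are all invented; this shows the general pattern, not either vehicle's real protocol (Linux-only, assumes a `can0` interface):

```python
import socket
import struct

# Classic SocketCAN frame layout: u32 CAN ID, u8 length, 3 pad bytes, 8 data bytes.
CAN_FRAME_FMT = "=IB3x8s"

# Invented allowlist mapping abstract head-unit commands to (CAN ID, payload).
ALLOWED = {
    "hvac_on":   (0x2E1, b"\x01"),
    "trunk_pop": (0x3A4, b"\x01"),
}

def handle_command(bus: socket.socket, command: str) -> bool:
    """Translate an allowlisted abstract command into a CAN frame; drop anything
    else, so the user-facing CPU never gets raw bus access."""
    if command not in ALLOWED:
        return False
    can_id, payload = ALLOWED[command]
    frame = struct.pack(CAN_FRAME_FMT, can_id, len(payload), payload.ljust(8, b"\x00"))
    bus.send(frame)
    return True

if __name__ == "__main__":
    bus = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
    bus.bind(("can0",))
    handle_command(bus, "hvac_on")                  # forwarded to the bus
    handle_command(bus, "open_doors_at_70mph")      # silently dropped
```

The security of the whole design rests on that one chokepoint, which is why the refusal to probe the gateway and its VAPI interface left the most interesting question unanswered.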

3) There was a serious Tesla reality distortion field in effect. They had JB Straubel up on stage and DEFCON staff and presenters alike lavished him and the company with praise (in fairness, Miller & Valasek said some really nice things about Jeep too, but nobody had a Jeep exec up to the front to do shots). Tesla brought a car to the "car hacking village" and unlike all the other cars there, nobody was allowed to remove any interior or exterior trim -- in fact, for most of the first day nobody was allowed to even sit in the car without a Tesla employee in the passenger seat. Tellingly, when things calmed down a little bit I did some brief experimentation with the vehicle and determined that every wireless interface had been disabled, including Bluetooth. The booth staff looked extremely uncomfortable when I asked why.

(Just to be clear, to me the reason why seems pretty obvious: they didn't want someone fuzzing their Bluetooth stack or further probing the browser, and if the car hadn't had -- guessing here -- all its antennas snipped, both those things would surely have happened.)
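
Part of why a fuzzer at the booth would have been an obvious worry is that a byte-mutation fuzzer is almost embarrassingly simple to write. A generic skeleton follows; `send_packet` is a placeholder for whatever delivers data to the target parser, and nothing here drives a real Bluetooth stack:

```python
import random

def mutate(seed: bytes, max_flips: int = 8) -> bytes:
    """Corrupt a few random bytes of a known-good packet (seed must be non-empty)."""
    data = bytearray(seed)
    for _ in range(random.randint(1, max_flips)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, send_packet, rounds: int = 100_000) -> None:
    """Hammer the target with mutated inputs; any crash marks a bug to triage.
    send_packet is a stand-in for the real delivery mechanism."""
    for i in range(rounds):
        case = mutate(seed)
        try:
            send_packet(case)
        except Exception:
            print(f"round {i}: target misbehaved on input {case.hex()}")
            break
```

That's a weekend's work for anyone at the conference, which makes snipped antennas a very cheap insurance policy.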

4) One thing the presenters got right: OTA upgrade is a lifesaver, if for no other reason than that it means there need be no readily exposed port allowing an easy attack on the vehicle by someone who does have physical access. Unfortunately, it also represents additional remote attack surface. If Tesla ever has its OpenVPN servers (or any of the infrastructure behind them) popped -- look out. But we accept this risk with regard to almost every modern device in our lives, so it does not seem so different to accept it for our cars.

The bottom line for me: this was an interesting investigation that probably would have yielded far more dramatic and scary results if conducted as aggressively as the Jeep work -- or, really, any work by top security researchers. As it was, it was so constrained, and the results so limited, that if it weren't for the wow-Tesla-sexy factor I'm not sure it would even have been accepted (it probably would have been, but then again, similar attacks on other cars have been demonstrated for years now). Also, unfortunately, the Tesla RDF was not an entirely positive thing.

Any way you slice it, a system with the complexity of the Model S has a lot more attack surface than something simpler, like a modern ICE vehicle. That means Tesla has a lot more work to do to defend it. I'm confident they can, but at present we have little data to know whether they actually have or will.
 
Thanks for the view from the front lines. I had come to many of the same conclusions.

Anyone want to go in on a used or salvage Model S and actually try some no-ground-rules (aside from not physically destroying the car) hacks? :)
 
I think one striking difference between the Jeep hack and Tesla is the level of engagement and willingness to work with researchers and provide a bounty program.

I remember when the Jeep hack leaked, Chrysler was threatening legal action rather than hiring them.
 
I think one striking difference between the Jeep hack and Tesla is the level of engagement and willingness to work with researchers and provide a bounty program.

I don't think anyone doubts Tesla's good intentions! I am perhaps a little more jaded about the entire public-relations process around them than you're inclined to be. Given the costs associated with actually doing the security engineering behind the scenes, I'd say that what's on offer in the bounty program is de minimis. But so are most bug bounty programs.

I would have been a lot more impressed if Tesla hadn't effectively vaccinated the car they brought to the show against any on-site research! How many people exactly are going to actually be able to pony up the cost of a Model S, have the skills to do research that could potentially destroy it, but cross the motivational threshold because of up to $10K from a bug bounty? You're still $90K in the hole and the population who could do this stuff is not large in the first place. The sub-population that's unfazed by a $100K cover charge? Way smaller.

Never mind that if you're serious about this, you can forget about using that beautiful car as your daily driver. You'll have it torn apart as your work in progress far too much of the time. So if you love the Model S (like I do) and want to actually drive yours, you'll need to be able to afford two if you're going to make one the target of your research.

A bug bounty for a browser or an operating system is more than a little different: it doesn't cost $100K to play that game.

vvanders said:
I remember when the Jeep hack leaked, Chrysler was threatening legal action rather than hiring them.
Sure. Jeep's initial response had nothing like the slickness of Tesla's. Of course -- though it hardly reduces their responsibility -- Jeep also seems to have been blindsided by a component supplier. It was Harman-Kardon's box that exposed that "run anything!" port to the Internets.

It bears consideration, though, that unless I misunderstood their talk, in the course of their experimentation M&V may have actually exploited security holes in other people's cars without permission (though likely without intending to). If you were there, or have the video of the Black Hat talk, I'm referring to the part about suddenly realizing "hey, that car's in Oklahoma somewhere!" I didn't see the rerun of the talk at DEFCON, so if they clarified this and I have it wrong, my bad. That alone is pretty likely to get lawyers riled up.

Obviously some of R&M's "ground rules" for their Model S work were aimed at avoiding anything like that. Unfortunately, the actual rules they chose avoided a lot more, as did their decision not to take the bull by the horns and dig into the remote vulnerability they did notice.
 
It bears consideration, though, that unless I misunderstood their talk, in the course of their experimentation M&V may have actually exploited security holes in other people's cars without permission (though likely without intending to). If you were there, or have the video of the Black Hat talk, I'm referring to the part about suddenly realizing "hey, that car's in Oklahoma somewhere!" I didn't see the rerun of the talk at DEFCON, so if they clarified this and I have it wrong, my bad. That alone is pretty likely to get lawyers riled up.

You probably have more knowledge from seeing their talk, but from what I understood of the articles I read online, they were able to scan for vulnerable cars by connecting a laptop to Sprint's network through a burner phone. It didn't sound like they attempted anything on any of those cars, but they did use that method for the attack on the Jeep that was the subject of their research, to prove it could be done without physical access. It was implied that it might be difficult to find a specific car, but I got the impression that if they scanned and logged vulnerable cars over time, they could build a database of all vulnerable cars and target individuals from that list.
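
The scan-and-log part is technically trivial, which is exactly what made it alarming. A generic sketch of the idea -- the port number, the host list, and the premise that a reachable port marks a vulnerable unit are all placeholders, and nothing here touches a real carrier network:

```python
import socket
import sqlite3

PORT = 6667      # placeholder service port; whatever the exposed service listens on
TIMEOUT = 0.5    # seconds per probe

def port_open(host: str, port: int) -> bool:
    """TCP connect scan: a completed handshake means the port is reachable."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT):
            return True
    except OSError:
        return False

def scan_and_log(hosts, db_path: str = "reachable.db") -> None:
    """Record reachable hosts; run this repeatedly over an address range and
    the table becomes exactly the kind of target list described above."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS hits (host TEXT PRIMARY KEY)")
    for host in hosts:
        if port_open(host, PORT):
            db.execute("INSERT OR IGNORE INTO hits VALUES (?)", (host,))
    db.commit()
    db.close()
```

That's the whole recon stage: no exploit required, just patience and an address range to sweep.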
 
See my above comment. PM works well.

My work has other things in mind... but even then, ethics are something that need to be adhered to. Follow the BCP for everything you discover. The first duty of a penetration tester, when a vulnerability is found, is to report it. I should hope that everyone here has that same ethical standard in mind.
 
My work has other things in mind... but even then, ethics are something that need to be adhered to. Follow the BCP for everything you discover. The first duty of a penetration tester, when a vulnerability is found, is to report it. I should hope that everyone here has that same ethical standard in mind.

Depends on the vulnerability. If it's critical enough that it allows remote exploitation and affects the drivetrain, I agree. If it requires several days of tinkering with the car, reporting to the manufacturer before releasing to the public is just plain silly. We wouldn't have jailbroken cellphones otherwise. There are perfectly legitimate and safe circumstances where vulnerabilities can be exploited.