So, I was at DEFCON and saw this talk and I came away seriously unimpressed. But not for the reasons you might think.
For background, I have spent my career (25+ years) bouncing back and forth between the computer security and embedded Unix worlds. I've spent hundreds, maybe thousands, of hours thinking about how to build systems that are resilient in the face of attacks like this, and a fair bit of time breaking into competitors' gear to see how they did it.
I walked out of this talk with a few main things in mind:
1) The researchers "just happened to" have set themselves ground rules that "just happened to" prevent them from investigating any lines of attack on the car that would have been really technically hard. Setting the rule that they wouldn't alter persistent state (in-memory attacks only) meant they didn't have to -- because they "weren't allowed to" -- investigate propagation of compromise via the firmware bundles that the CID holds for other devices. Setting the rule that they would attack infotainment systems only meant that they didn't have to conduct any kind of remote-oracle attack on the gateway and its VAPI interface, or even figure out what it was or see whether it had any other vulnerabilities, such as boot-time vulnerabilities. It also meant they didn't have to really dig into the boot process for any of the elements of the system to see whether there was a vulnerability that could allow persistent compromise after a single remote attack, loading of unsigned firmware, etc.
Every one of these lines of attack was used in Miller & Valasek's Jeep work, every one represented an impressive technical achievement, and every one of them yielded results. So from my point of view, why didn't these guys achieve remote compromise of the Model S? Because -- giving them the benefit of the doubt, I'll say it was accidental and probably more aimed at not having to buy their friend a new $100K car -- at the outset, they set ground rules that ensured they would not do the very hard work that might have produced that result.
I think unfortunately it's telling that they also gave up "after a few hours" at the task of exploiting a known remote code execution bug in the browser (a remote exploit!) "because they had an easier way already" via physical access. Is it a big pain in the tuchas to bootstrap a browser you can crash remotely (and know why, and know memory's being overwritten) into a remote shell on a system with a funny architecture and oddball C runtime? Sure. Once you can reproduce the crash, do you know as a security researcher it's possible? Pretty much yes, and if you already have another means of access to the system it's trivial to confirm this. What's in between? Potentially, weeks and weeks of fiddly, unpleasant, technically demanding work. They just didn't do it.
2) The presenters' repeated (and somewhat disrespectful, in my opinion) comparison of their work to the Jeep work really just made them seem inexperienced with embedded systems and vehicle systems, and unaware of the details of the Jeep work (which had been presented at Black Hat several days earlier). It didn't say much about the architecture of the Jeep or the Tesla, or about either party's attacks.
For example, there was repeated, emphatic praise of the Tesla "architecture" and slagging of the Jeep. But in fact, architecturally, these two systems for mediating user input/output and communicating on the CANbus with vehicle systems are strikingly similar. In particular, both cars embed in the "head unit" a separate processor which takes abstract commands from the touchscreen and performs all CANbus communication. In the Jeep, this "gateway" is connected to the infotainment system CPU by an SPI bus. In the Tesla, it's connected by Ethernet. Neither car actually allows any user-exposed CPU to perform CANbus messaging -- in fact, since the Tesla has far more other stuff connected to that Ethernet that can talk to the gateway, and thus far more attack surface, I'd have to say "advantage, Jeep!" on this one.
So why did the Jeep guys manage to do arbitrary CANbus messaging while the Tesla guys did not? The answer seems pretty clear: they actually bothered to try! They reverse-engineered all four of the hard targets: the gateway's firmware, the managed firmware update process for devices other than the touchscreen, the touchscreen-to-gateway protocol, and the car's CANbus messaging. If the Tesla guys attempted any of these things, they would appear to have failed -- though they did expressly say they did not try some of them because of their "ground rules".
I think it is a pretty fair suspicion on my part that had Miller & Valasek had a go at these interfaces on the Tesla as they did on the Jeep, they would have found something. This stuff is freaking hard to get right (having gotten my fair share of it wrong myself, in my career, and likely to do so again).
So -- given the known browser bug, equivalent in effect to, though more challenging to exploit than, the Jeep's exposed DBus port -- why wasn't there a remote compromise of the Model S that allowed arbitrary CANbus messaging, like there was for the Jeep? My take on it is that nobody did the (potentially very) hard work, for whatever reason. Not to say there wasn't similarly hard work on the Jeep. If you want to disassemble and patch a bunch of nasty Super-H code using a buggy tool, be my guest.
3) There was a serious Tesla reality distortion field in effect. They had JB Straubel up on stage and DEFCON staff and presenters alike lavished him and the company with praise (in fairness, Miller & Valasek said some really nice things about Jeep too, but nobody had a Jeep exec up to the front to do shots). Tesla brought a car to the "car hacking village" and unlike all the other cars there, nobody was allowed to remove any interior or exterior trim -- in fact, for most of the first day nobody was allowed to even sit in the car without a Tesla employee in the passenger seat. Tellingly, when things calmed down a little bit I did some brief experimentation with the vehicle and determined that every wireless interface had been disabled, including Bluetooth. The booth staff looked extremely uncomfortable when I asked why.
(Just to be clear, to me the reason why seems pretty obvious: they didn't want someone fuzzing their Bluetooth stack or further probing the browser, and if the car hadn't had -- guessing here -- all its antennas snipped, both those things would surely have happened.)
4) One thing the presenters got right: OTA upgrade is a lifesaver. If for no other reason, because it means there need be no readily exposed port that would allow an easy attack on the vehicle by someone who does have physical access. Unfortunately, it does also represent additional remote attack surface. If Tesla ever has its OpenVPN servers (or any of the infrastructure behind them) popped -- look out. But we accept this risk with regard to almost every modern device in our lives, so it does not seem so different to accept it for our cars.
The bottom line for me: this was an interesting investigation that probably would have yielded far more dramatic and scary results if conducted as aggressively as the Jeep work -- or, really, any work by top security researchers. As it was, it was so constrained, and the results so limited, that if it weren't for the wow-Tesla-sexy factor I'm not sure it would have even been accepted (it probably would have been, but then again, similar attacks on other cars have been demonstrated for years now). Also, unfortunately, the Tesla RDF was not an entirely positive thing.
Any way you slice it, a system with the complexity of the Model S has a lot more attack surface than something simpler like a modern ICE vehicle. That means Tesla has a lot more work to do to defend it. I'm confident they can, but at present we have little data to know whether they actually have or will.