It sounds like you've stipulated conditions that force the answer to your question to be no.
Now, if you are really interested in a realistic answer, I think it is this: there will always be a way to turn off the car. If there were ever a fully autonomous vehicle without a safety switch, either regulators or white-hat hackers would discover that fact and prevent the vehicle from being sold on the market.
That's not to say a bad actor couldn't design their own autonomous vehicle without a shut-off, but you could make that argument about practically any technology that has the potential to be abused.
The other stretch of your imagination in this setup is that the AI is sentient and, all by itself, decides it wants to rewrite its software to take control of the vehicle. Despite what the popular science (and non-science) press says about AI systems, this is not really how they work. It's not that AI can't "write" software, but it has to be prompted by a directive of some kind even to "decide" to do that. So there would have to be a prompt somewhere in the picture directing the AI system to do this. It could be an innocent prompt like "write software to autonomously control a vehicle that is invulnerable to hacking and external intervention", but even then, someone would have to intentionally omit a hardware kill switch (which I doubt would ever be allowed in a consumer product).
Now, none of this really applies to military systems and other nefarious devices built by bad actors. That is where things could potentially spiral out of control. I recommend the film Colossus: The Forbin Project if you want to delve into unintended consequences.