I don't see how that's possible. There can be various degrees of autonomous driving, but "fully autonomous" by definition means that the car must be capable of driving itself with no human involvement whatsoever (i.e. nobody needs to even be inside the car).
Now it's certainly possible for a fully autonomous car to have restrictions on where it can drive (just as a human-driven car has restrictions on where it can drive, depending on the capabilities of the car or driver).
Exactly this - this is, I think, the area that causes confusion. From my perspective, an autonomous vehicle that can only operate safely in, say, 30% of the conditions it may encounter... is not autonomous. Here's why: road conditions change, sometimes severely.
I grew up in the Midwest, so perhaps I have a perspective that people native to sunnier regions may not. It would take hours to expound upon the number of times, in my youth, I found myself shoving cardboard and rock salt under the drive wheels of my (or a random stranger's) car while trying to extract the vehicle from a snow drift... or even just trying to get out of the parking lot at work after a snow plow had been through, or after an unexpected plunge in temperature had turned light rain into ice.
And that sort of variability in real-world conditions is why it is unreasonable to expect that fully autonomous vehicles will really be workable unless their limitations closely approximate a human driver's. As a thought experiment: what happens to the commuters who got to work in their autonomous Google cars (with no human controls) when a sudden ice storm makes the car decide it cannot safely drive? They don't go home, I suppose. What happens when a snow plow leaves a one-foot wall of snow all the way across the intersection where the car wants to turn? I guess it doesn't turn, and just gets stuck there. What happens when rain causes flooding, the vehicle finds itself in rising waters, and it needs to get out? Does it shut down, because the driving conditions are unsafe?
The questions involved here range from convenience factors to safety.
My point here is that in the real world, limitations of the sort you describe will cause consumers to reject the technology, because it will be both inconvenient and occasionally unsafe. And that burden - the burden required to overcome consumer objections - will be much higher for vehicles that don't allow human input (like Google's cars), because people will expect them to be more capable than comparable vehicles that do.
What I expect to see, really, is an approach much more like Tesla's - autopilot features getting better and better, such that the range of conditions in which they can reliably drive autonomously expands until, in day-to-day conditions, they can mostly drive unassisted. However, there will still be conditions in which a human will have to intervene. And that brings us back to where we started, because by definition, a vehicle that requires human input, no matter how infrequently... is not a fully autonomous vehicle.