AIs aren't going to REPLACE DOCTORS, which is what the other person I was responding to was claiming. We flat out don't know enough about the human body yet. You can't train a neural net or an expert system when you don't really know what you're doing either; "computerized exploratory science" is in its infancy. They might replace specialists in some fairly cut-and-dried areas.
Not to get too off topic here, but let me use the doctors example to illustrate that the issue with machine learning isn't the capability of the neural network, but the paradigm space that the humans training it operate in (i.e., are you asking the right question?) and the quality of the training set.
If you could quantify a good chunk of critical data about a person, such as:
1. Genome
2. Lifestyle stuff: diet, predominant world view and psychological states, sleeping habits, environment, etc.
3. Gut biome
4. Historical data on lifestyle, injuries, medications taken, past traumatic events, etc.
5. Current comprehensive blood, urine, stool, fMRI, whatever else tests that can be done
6. A bunch of other stuff I can't think of now
...then you add labeling to this dataset. Someone shows up at the doc and we do 1-6; they get a diagnosis and a prescription, and then they die within half a year. Or they recover. Or they ignore the prescription, do something else, and have a different outcome.
So you stuff all of this into a training dataset. And I bet you will start getting things that will make a lot of people really uncomfortable, but also creepily accurate. Like "stop eating cookies or die in 23 months". "This antibiotic decreases life span by 3 years and won't help with the current symptom. Go to Hawaii instead; healed in 3 weeks". "Divorce or heart attack in 3.4 years".
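The feature-plus-label setup described above can be sketched in a few lines. This is a toy illustration, not a real medical model: the feature values and labels are made up, and a nearest-neighbor lookup stands in for the big neural net, just to show the shape of the data (items 1-6 encoded as a vector, outcome attached as a label).

```python
import math

# Each record: (features, outcome). In the real version the features would
# encode genome markers, lifestyle scores, gut biome composition, lab
# results, etc. Here they are arbitrary numbers for illustration.
training = [
    ([0.9, 0.2, 0.7], "recovered"),
    ([0.1, 0.8, 0.3], "died_within_6mo"),
    ([0.8, 0.3, 0.6], "recovered"),
]

def predict(features):
    # Nearest-neighbor stand-in for a trained model: return the outcome
    # of the most similar past patient.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda rec: dist(rec[0], features))[1]

print(predict([0.85, 0.25, 0.65]))  # closest to the "recovered" examples
```

The point is that once outcomes are attached as labels, any supervised learner can be dropped in where `predict` is; the hard part is collecting features representative enough that the labels mean something.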
Our use of machine learning is limited by how well we can present real-world conditions and outcomes to it. Sure, currently we do stuff like training models to classify cancer from images. But that is only because we haven't learned (or, as I think will soon be the case, are disincentivized to even try) to collect more representative data in more dimensions and train bigger NNs on those bigger datasets.
For self-driving, the overwhelming majority of the overall problem can be broken down into well-understood classification and prediction sub-problems and solved with an appropriately big and representative training set. Which is what Tesla is doing, and it has gotten fairly far along that road. Driving is a lot simpler than medicine, but that is not to say medicine can't be much better solved by machine learning. The same thing applies: it's a huge multiplier when you can train one "uber-doctor" on all the corner cases that any one doctor could never see in their lifetime, not even 1% of them. And also train that doctor without any preconceived notions of what should and should not work, so that it would consider any solution. Same with humans and driving. Except driving won't run into all the moral objections and other ways humans will resist acknowledging reality, which I fully expect to happen in the case of medicine. Drove from A to B, no accidents, reasonable speed -- all good, we're on board.