Thanks for your help. I think both articles, aside from being over my head, are somewhat different from my question. The notion suggested was not starting from an existing data set and applying the machine to discover patterns in preparation to label future patterns, but iterating over all possible neural net designs and computing the best one for different applications. Kinda like my limited understanding of how one of the experiments taught itself to play Go just by playing the game over and over. Probably the hitch is: what are the rules of neural nets? Sounds like that requires people to design them. Might be possible if layers could be built in more than three dimensions, say over time.
This is probably where I need to admit I don't know what I'm trying to talk about.
I can take a crack at it!
Neural networks are (typically) one of many kinds of classifiers within supervised learning; the primary characteristic of all supervised learning methods is that you have a labeled data set (your comment leads me to believe you're familiar with the idea of labeled vs. unlabeled data).
So one option, in theory, given some square data set (rows and columns), is to treat each column in turn as the label, use the remaining columns as independent variables, and build models - many, many models. And while this can in theory be done, you still lack the human knowledge of what any of the results mean.
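To make that concrete, here's a minimal sketch of the idea using scikit-learn. The DataFrame `df` is hypothetical, and I'm assuming all-numeric columns so a regressor works for every target - it mechanically scores every column as a candidate label, but nothing in it knows whether any of those labels is meaningful:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def score_every_column_as_label(df: pd.DataFrame) -> dict:
    """Treat each column in turn as the label, the rest as features."""
    scores = {}
    for label_col in df.columns:
        X = df.drop(columns=[label_col])
        y = df[label_col]
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        # Cross-validated R^2. A high score says this column is predictable
        # from the others - it says nothing about whether predicting it is
        # *useful*. That judgment is still the human's.
        scores[label_col] = cross_val_score(model, X, y, cv=5).mean()
    return scores
```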
An ML tool that is becoming increasingly available is one that takes your given (an existing data set) plus one additional given - which column is the label - and goes from there to build a wide range of different kinds of models (neural nets, logistic regression, linear regression, decision trees, random forests, fill-in-the-blank).
These tools essentially automate the model building process while still keeping a human in the loop at the beginning (what is the problem I'm solving, what relevant data can I get for that problem, what do my inputs mean that lead to the outputs), and they solve the model building itself through a combination of brute force (build millions of models and compare them for "goodness") and clever tricks (keep track of the models, notice which ones are getting better, and use that knowledge to cut out branches of the brute-force search).
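As a rough sketch of just the brute-force part (minus the clever pruning), here's that leaderboard idea with scikit-learn, using one of its built-in data sets as a stand-in for your labeled data - the model families and settings are arbitrary choices for illustration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Several model families pointed at the same labeled data.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "neural_net": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000),
}

# Brute force: score every candidate the same way, keep a leaderboard.
leaderboard = sorted(
    ((cross_val_score(model, X, y, cv=5).mean(), name)
     for name, model in candidates.items()),
    reverse=True,
)
for score, name in leaderboard:
    print(f"{name}: {score:.3f}")
```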
Some of these tools are also getting into feature engineering (combining the independent variables in interesting ways that are more predictive / descriptive than they are separately). Trivial example - you've got a shipping app with package dimensions of height, width, and length. Your app might be able to learn a pattern based on these three variables, and it also might get better results using volume (height * width * length) as a variable. Or maybe area (height * length) for some reason!
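In code, that toy example is just a couple of derived columns (the column names here are made up for illustration):

```python
import pandas as pd

# Raw package dimensions for a handful of shipments.
df = pd.DataFrame({
    "height": [10, 20, 5],
    "width":  [10, 15, 5],
    "length": [30, 15, 40],
})

# Engineered features: combinations that might be more predictive
# than the raw dimensions are on their own.
df["volume"] = df["height"] * df["width"] * df["length"]
df["side_area"] = df["height"] * df["length"]
print(df)
```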
The trick with the Go program is that a human made a bunch of decisions to set up the circumstances within which the machine could iterate and operate. A human decided that finding a solution to Go was worth doing (and then did the programming to do so). A human coded the rules of Go into the program. A human coded the win condition for Go into the program.
The program was then able to "solve" the problem in a fashion that's highly unintuitive to us humans. It started making random plays and using those random plays as labeled data to find patterns about which ones helped it win and which were harmful or useless. It had to make millions / billions / trillions of useless plays to start finding patterns. That's where neural networks excel - amazingly large data sets to learn patterns from.
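For a feel of the mechanics, here's a heavily scaled-down sketch of self-play on a toy game (two players take 1-3 stones from a pile; whoever takes the last stone wins) instead of Go. The table of win rates is a crude stand-in for the learned networks - this is illustrative, not anything the Go program actually did:

```python
import random
from collections import defaultdict

wins = defaultdict(int)   # (stones_left, move) -> games won after this move
plays = defaultdict(int)  # (stones_left, move) -> games played through it

def pick_move(stones, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)   # keep generating fresh random plays
    # Exploit: prefer the move with the best observed win rate so far.
    return max(moves, key=lambda m: wins[(stones, m)] / (plays[(stones, m)] or 1))

for _ in range(100_000):  # "millions of useless plays", scaled way down
    stones, player, history = 10, 0, []
    while stones > 0:
        move = pick_move(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = history[-1][0]  # whoever took the last stone wins
    # Label every move in the game with the final outcome, win or lose.
    for p, s, m in history:
        plays[(s, m)] += 1
        if p == winner:
            wins[(s, m)] += 1

# With enough games this tends toward the known optimal strategy
# (always leave your opponent a multiple of 4 stones).
print(pick_move(10, explore=0.0))  # usually 2, since 10 - 2 = 8
```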
A computer program with lots of compute can easily run as much of this as you're willing to feed it the time and electricity to do. And it turns out that once the network gets traction and starts learning, it often learns seemingly very fast. (This is where I leave my direct experience and just go with what I've been reading - I work in the field, but so far not with neural nets; they don't much apply to my problem domain.)
I personally can see a line of sight to increasingly automated ML / model building, once the human has defined the problem.
I don't yet see a line of sight to a program that can assess a random environment and decide what's a problem (or opportunity), which problems are worth solving, and what a solution might look like; decide what data is relevant to the problem; go get that data; do the data prep / organization to get it into a form for solving the problem; and then use the automated ML stuff to find and choose a model that solves it.