Hassaan Ahmed's recent article "Is a master algorithm the solution to our machine learning problems?" offers a categorization of "... the different schools of thought of machine learning". It appears to be adapted from the 2015 book The Master Algorithm by University of Washington professor Pedro Domingos, who offers the chart:
|Tribe|Origins|Master algorithm|
|---|---|---|
|Symbolists|Logic, philosophy|Inverse deduction|
|Connectionists|Neuroscience|Backpropagation|
|Evolutionaries|Evolutionary biology|Genetic programming|
|Bayesians|Statistics|Probabilistic inference|
|Analogizers|Psychology|Kernel machines|
As Ahmed characterizes the "Five Tribes":
This school of thought believes that knowledge emerges from the connections between neurons. The connectionists focus on physics and neuroscience and aim to reverse-engineer the brain. They rely on the back-propagation ("backward propagation of errors") algorithm to train artificial neural networks.
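As a toy illustration of "backward propagation of errors" (not from the article), here is gradient descent on a single sigmoid neuron learning the OR function; the data, learning rate, and epoch count are all invented for the example:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training data: learn the OR function from two binary inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 1.0         # learning rate (arbitrary choice for this toy)

for epoch in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # "Backward propagation of errors": chain rule through the sigmoid,
        # for squared error E = (out - target)^2 / 2.
        delta = (out - target) * out * (1 - out)
        w[0] -= lr * delta * x1
        w[1] -= lr * delta * x2
        b    -= lr * delta

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b))
               for (x1, x2), _ in data]
print(predictions)  # should recover OR: [0, 1, 1, 1]
```

A real network stacks many such neurons in layers and propagates the `delta` terms backward through all of them, but the chain-rule update is the same idea.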
Many researchers in the field of machine learning, especially the connectionists, believe that deep learning is the answer to all the problems of AI and consider it a master algorithm.
The symbolists' approach is based on a "high-level" interpretation of problems. The symbolists focus more on philosophy, logic, and psychology, and view learning as the inverse of deduction. John Haugeland called this approach "Good Old-Fashioned Artificial Intelligence" (GOFAI) in his book Artificial Intelligence: The Very Idea. Symbolists solve problems by using pre-existing knowledge to fill in the gaps, and most expert systems follow their If-Then style of reasoning.
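The If-Then style can be sketched as a tiny forward-chaining rule engine; the rules and facts here are invented for illustration, not taken from any real expert system:

```python
# Hypothetical If-Then rules: (set of conditions, conclusion).
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_flightless_bird"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are all known facts,
    adding each conclusion until nothing new can be deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # pre-existing knowledge fills the gap
                changed = True
    return facts

derived = forward_chain({"has_feathers", "lays_eggs", "cannot_fly"}, rules)
print(sorted(derived))
```

Note that the second rule can only fire after the first has added `is_bird`, which is why the loop runs until a fixed point.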
The third school of thought, the evolutionaries, draw their conclusions from genetics and evolutionary biology. John Holland, who died in 2015 and previously taught at the University of Michigan, played a very important role in bringing Darwin's theory of evolution into computer science. Holland pioneered genetic algorithms, and his schema theorem, sometimes called the "fundamental theorem of genetic algorithms", is considered foundational in this area.
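A minimal genetic algorithm can be sketched on the classic "OneMax" toy problem (maximize the number of 1-bits); the population size, mutation rate, and selection scheme below are arbitrary choices for the example, not Holland's original formulation:

```python
import random

random.seed(0)
TARGET_LEN = 20

def fitness(bits):
    return sum(bits)  # OneMax: count of 1-bits

def mutate(bits, rate=0.05):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    # Single-point crossover: splice two parents at a random cut.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Random initial population of bitstrings.
pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(30)]

for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == TARGET_LEN:
        break
    survivors = pop[:10]  # selection: keep the fittest third
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(20)]
    pop = survivors + children

best = max(pop, key=fitness)
print(fitness(best))  # typically reaches the maximum of 20
```

Selection, crossover, and mutation are the three Darwinian operators; everything else is bookkeeping.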
If you've been using email for 10 or 12 years, you know how much spam filters have improved. That is largely thanks to the Bayesian school of thought in machine learning. The Bayesians focus on probabilistic inference and Bayes' theorem. They start with a belief, called a prior; they then obtain some data and update the prior on the basis of that data, yielding a posterior. The posterior is combined with still more data and becomes the new prior, and this cycle repeats until a final answer emerges. Most spam filters work on this basis.
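The prior-to-posterior cycle can be sketched with a toy discrete model (invented for illustration, not a real spam filter): we maintain a belief about the probability p that a message containing a given word is spam, and each observed message updates that belief via Bayes' theorem:

```python
# Candidate values for p = P(spam | word present), with a uniform prior.
candidates = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = {p: 1 / len(candidates) for p in candidates}

def update(belief, is_spam):
    """One prior -> posterior step of Bayes' theorem."""
    # Likelihood of the observation under each candidate p.
    likelihood = {p: (p if is_spam else 1 - p) for p in belief}
    unnorm = {p: belief[p] * likelihood[p] for p in belief}
    total = sum(unnorm.values())
    return {p: v / total for p, v in unnorm.items()}

# Observe a handful of messages containing the word: mostly spam.
belief = prior
for is_spam in [True, True, False, True, True]:
    belief = update(belief, is_spam)  # posterior becomes the new prior

best = max(belief, key=belief.get)
print(best)  # probability mass shifts toward high values of p
```

After four spam and one non-spam observation, the most probable candidate is 0.7: the belief has moved toward "this word indicates spam", exactly the prior/posterior cycle described above.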
The fifth tribe of machine learning, the analogizers, extrapolate from similarity judgements, drawing on psychology and mathematical optimization. The analogizers follow the "Nearest Neighbor" principle in their research. Product recommendations on e-commerce sites like Amazon, or movie ratings on Netflix, are the most familiar examples of the analogizers' approach.
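A toy 1-nearest-neighbor recommender gives the flavor (the users and ratings are invented; this is not how Amazon or Netflix actually work): a new user is matched to the most similar known rating profile.

```python
import math

# Invented rating profiles: each position is one movie's rating (1-5).
ratings = {
    "alice": [5, 4, 1, 1],
    "bob":   [1, 1, 5, 4],
    "carol": [4, 5, 2, 1],
}

def distance(a, b):
    """Euclidean distance between two rating vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor(query, ratings):
    """Return the known user whose ratings are most similar to the query."""
    return min(ratings, key=lambda user: distance(query, ratings[user]))

new_user = [5, 5, 2, 1]
print(nearest_neighbor(new_user, ratings))  # closest profile: "carol"
```

Having found the nearest neighbor, a recommender would then suggest items that neighbor rated highly; similarity judgement does all the work, with no explicit model of why the tastes match.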
... not deep, but perhaps useful — and perhaps even (mostly? partially?) true!
^z - 2017-02-02