In popular culture, the prevailing concept of "artificial intelligence" or "machine intelligence" is a manufactured being that acts, talks, and thinks like a human. Science fiction, much of it wonderfully written, is full of such AI "beings". Fortunately, sci-fi is not the exclusive domain of AI capability. One of the most influential pioneers of computer science, Alan Turing, pondered whether human-made equipment could possess intelligence, could think, or could feel. His proposition, published in 1950, was that if a computer system could communicate with human beings so well that a human would think they were talking to another human, then one would have to say that such a system exhibited an intelligent trait.

While the rest of the computing world was focused on crunching numbers and tabulating data, some researchers took up the challenge thrown down by Turing and worked on software that could understand human language. In 1956, this field of study got the name "Artificial Intelligence", or "AI" for short. The earliest attempts to translate from one language to another were made during that time. In 1966 the world met Eliza, a program that pretended to be a psychoanalyst, and did it quite well. Eliza used algorithms to recognize keywords and compose reasonable responses, giving birth to a basic type of AI.
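The keyword-matching idea behind Eliza can be sketched in a few lines. The rules and wording below are hypothetical stand-ins, not Weizenbaum's original script:

```python
import re

# A minimal sketch of Eliza-style keyword matching.
# These rules are invented for illustration.
RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmother\b", "Tell me more about your family."),
]

def respond(sentence: str) -> str:
    # Scan the rules in order; the first matching keyword wins.
    for pattern, template in RULES:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default when no keyword matches

print(respond("I need a vacation"))  # → Why do you need a vacation?
print(respond("It rained today"))    # → Please, go on.
```

Trivial as it looks, this captures the algorithmic recipe: recognize a word, then write a plausible response around it.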

Fast-forward to today: millions of people converse with AI systems such as Siri and Alexa. While users know they are not talking to a human, these systems achieve reasonable human-style dialog. They typically use artificial neural nets, which we will discuss shortly. Neural nets are another type of AI, different from algorithmic systems. But the goals of Eliza and of Siri are the same: to recognize what humans want from them, without forcing people to learn a programming language.

Another area of early AI research went into programming computers to play games. Again, the goal was to make the program seem intelligent. A human who is a good chess player is considered quite intelligent, so a program that can play chess is a type of AI. The task is conceptually quite simple: player #1 has a finite number of possible moves to make; after each of those moves player #2 has a number of possible moves to respond with; then it's back to player #1 with all possible moves from that position, and so on until the game finishes. All a program has to do is create a Decision Tree: lay out all these possible paths from the start of the game, and at every juncture select the path that leads to victory… except that for any game more complicated than Tic-Tac-Toe, the number of paths is so huge that there's not enough computing power in the world to do it this way. So this type of AI must disregard the less promising branches and investigate only the more promising ones. Instead of looking for a perfect solution, the program uses heuristics (educated guesses) to select its best path through the possibilities. The goal of such AI systems is to find a good path from a starting point to the desired end, without having to write a program for every step.

The third group of AI researchers came from the database world. While database designers were busy with the certainly important questions of how to organize and index data (relational vs. trees vs. networks, and so on), data-collection abilities increased so much that the sheer size of stored data grew tremendously. This area of artificial intelligence eventually became known as "Big Data". Its goal is to enable people to draw conclusions from vast, cumulative stores of information that are updated over time but remain mostly static.

The other area of interest in this historical primer is Transactional AI. The distinction is that the focus for transactional AI systems is not a huge accumulation of data, but rather data that arrives continuously, as it happens. Their processing power is dedicated to understanding the correlations among the data items as they enter the analytical engine.

Some AI's, like the game-playing systems, give the appearance of human-like "thinking", but in fact they are implemented in very machine-like fashion. Other AI approaches try to imitate human behavior. Expert Systems are an example of this kind of AI: if you know exactly what pattern you are looking for, then you can describe that pattern in detail, along with what to do when it occurs. The same can be done in software: an expert system captures human expertise in a set of rules, and then applies them as necessary to solve the problem at hand. The main advantage of expert systems is speed. By optimizing an expert system for a specific domain of knowledge, rules can be simplified and made more focused, and questions can be answered extremely quickly.
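The rules-plus-matching idea can be sketched like this; the diagnostic rules below are invented for illustration, not taken from any real expert system:

```python
# A minimal sketch of an expert system: expertise captured as
# condition -> conclusion rules. The medical rules are hypothetical.
RULES = [
    (lambda facts: "fever" in facts and "cough" in facts, "possible flu"),
    (lambda facts: "fever" in facts and "rash" in facts, "possible measles"),
    (lambda facts: "sneezing" in facts, "possible allergy"),
]

def diagnose(facts: set) -> list:
    # Fire every rule whose condition matches the known facts.
    return [conclusion for condition, conclusion in RULES if condition(facts)]

print(diagnose({"fever", "cough"}))  # → ['possible flu']
```

Because each rule is a direct, pre-written check, answering a question is just a fast scan over the rule set; no searching or training is involved.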

As technology progressed, the next goal in AI research was systems that could develop their own rules — Learning Systems. The earlier ones had modest goals: to learn their way around a room with obstacles. Their descendants are now successfully rolling around homes and vacuuming consumers' floors.

Another type of learning-system AI went straight to the definition of intelligence, and tried to mimic, in software, the human brain. When we learn something new, some neural passages in our brains get stronger while others get weaker. Artificial Neural Networks don't look anything like our brains, but they also "learn" by processing representative data for which results are already known, and adjusting the weight for each connection from one node to the next based on the probability of that connection being the one that leads to the right results. After training, the neural net is used to process data for which the answer is yet to be discovered, on the assumption that the weights computed during training apply to the new data as well. There are numerous ways to organize the nodes, and numerous ways to transform the data before it enters the neural net and as it passes through each node. A specific combination of node layout, data transformation, and weights constitutes a model.
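The weight-adjustment idea can be illustrated with the smallest possible "network": a single artificial neuron learning the logical AND function with the classic perceptron rule. Real networks stack many such nodes and train them with more sophisticated methods, but the strengthen-or-weaken principle is the same:

```python
# A minimal sketch of learning by adjusting connection weights:
# one artificial neuron trained on AND with the perceptron rule.
training = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # connection weights, adjusted during training
bias = 0.0
rate = 0.1       # how strongly each mistake changes the weights

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):                       # repeat over the known examples
    for x, target in training:
        error = target - predict(x)       # how wrong was the guess?
        w[0] += rate * error * x[0]       # strengthen or weaken each
        w[1] += rate * error * x[1]       # connection accordingly
        bias += rate * error

print([predict(x) for x, _ in training])  # → [0, 0, 0, 1]
```

After training, the same weights are applied to inputs the neuron has never seen, on the assumption that what worked for the training data will work for the new data too.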

Artificial neural nets have become useful relatively recently, because training them requires a lot of computer processing power. It took decades from the invention of neural-net software in 1958 for computer processing speeds to catch up with the concept. In the meantime, another type of AI became popular in the 1990s: Genetic Programming. It also takes its inspiration from nature, but instead of mimicking a brain, it mimics natural selection — evolution. With genetic programming, both the starting point and the end result are known, and the task is to produce a program that figures out the path between them, without requiring a human programmer to write it. The basic method is to start with a randomly generated program. The data is run through, and the difference between the obtained result and the required result is noted. Then two portions of that program are picked at random and they are exchanged — they trade places within the program. The same data is run through, and the difference is measured again. Whichever of the two versions got closer to the required result wins that match for survival of the fittest. Then a third version is created by taking the winner and exchanging two other parts of it (taking care to not create a version that has already been considered), and again the results are compared to see who is the evolutionary champion. The process continues until a program evolves that achieves the goal. Thus the system learns how to create a program that gets from the initial data to the final result, without any human actually writing any code.
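The swap-and-compare process can be sketched with a toy "program" made of arithmetic steps. This is a deliberately simplified stand-in (full genetic programming evolves tree-structured code and avoids revisiting old variants, which this sketch does not):

```python
import random

# A toy sketch of evolving a program by swapping its parts.
# A "program" is just an ordered list of arithmetic steps; order matters.
OPS = {"add3": lambda x: x + 3, "mul2": lambda x: x * 2, "sub1": lambda x: x - 1}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def evolve(start, target, steps, seed=0):
    rng = random.Random(seed)
    program = ["sub1", "mul2", "add3", "mul2"]   # the initial candidate
    best = abs(run(program, start) - target)     # distance from the goal
    for _ in range(steps):
        i, j = rng.sample(range(len(program)), 2)
        trial = program[:]
        trial[i], trial[j] = trial[j], trial[i]  # exchange two portions
        score = abs(run(trial, start) - target)
        if score < best:                         # survival of the fittest
            program, best = trial, score
    return program, run(program, start)

program, result = evolve(start=5, target=28, steps=200)
print(program, result)
```

No human wrote the winning ordering of steps; it emerged from repeated swap-and-compare rounds, which is the essence of the evolutionary approach described above.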

Experience has shown that real-life problems are best solved by a judicious combination of several types of AI with tools that enhance the abilities of human beings to think and act. Modern AI solutions combine artificial and natural intelligence: they value the abilities of human experts, who were the inspiration for expert systems, and they value the ability of people's brains to recognize situations when they see them, which was the inspiration for artificial neural nets.