A Brief History of Artificial Intelligence

April 5, 2016

The roots of modern Artificial Intelligence, or AI, can be traced back to the classical philosophers of Greece and their efforts to model human thinking as a system of symbols. More recently, in the 1940s, a school of thought called “Connectionism” was developed to study the process of thinking. In 1950, Alan Turing published a paper suggesting how to test a “thinking” machine. He believed that if a machine could carry on a conversation by way of a teleprinter, imitating a human with no noticeable differences, the machine could be described as thinking. His paper was followed in 1952 by the Hodgkin-Huxley model of neurons as electrically excitable cells that fire in all-or-nothing (on/off) pulses. These developments helped spark the concept of Artificial Intelligence, which was given its name at a conference hosted by Dartmouth College in 1956.

Jonathan Crane is the CCO of IPsoft, the creator of the virtual assistant called Amelia. He had this to say about the current state of Artificial Intelligence:

“AI is driving a huge change in the way we can target our marketing and advertising, even for smaller companies. This means that businesses are able to target their ‘spend’ and increase ROI, and allow advertising to do what it should, giving people adverts they want to see.”

Mr. Crane is referring to AI’s use of Big Data. Artificial Intelligence can be combined with Big Data to handle complex tasks, processing the information at much faster speeds than any previous system.

The development of AI has not been smooth or efficient. Although Artificial Intelligence began as an exciting, imaginative concept in 1956, research funding was cut in the 1970s after several reports criticized a lack of progress. Efforts to imitate the human brain, called “neural networks,” were experimented with, and dropped. The most impressive, functional programs were only able to handle simplistic problems, and were described as toys by the unimpressed. AI researchers had been overly optimistic in establishing their goals and had made naive assumptions about the problems they would encounter, so when the results they promised never materialized, it should come as no surprise that their funding was cut.

The First AI Winter
AI researchers had to deal with two very basic limitations: not enough memory, and processing speeds that would seem abysmal by today’s standards. Much like gravity research at the time, Artificial Intelligence research had its government funding cut, and interest dropped off. However, unlike gravity, AI research resumed in the 1980s, with the U.S. and Britain providing funding to compete with Japan’s new “fifth generation” computer project and its goal of becoming the world leader in computer technology. The stretch of time between 1974 and 1980 has become known as ‘The First AI Winter.’

The First AI Winter ended with the introduction of “Expert Systems,” which were developed and quickly adopted by competitive corporations all around the world. The primary focus of AI research shifted to accumulating knowledge from various experts. AI also benefited from the revival of Connectionism in the 1980s.

Cybernetics and Neural Networks
Cybernetics is the study of automatic control systems; two examples are the brain and nervous system, and the communication systems used by computers. Ideas from cybernetics helped shape the modern version of neural networks. However, neural networks would not become financially successful until the 1990s, when they started being used to power optical character recognition and speech recognition programs.
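As a rough illustration of the “all-or-nothing” firing mentioned earlier, the short Python sketch below models a single threshold neuron: it fires (outputs 1) only when the weighted sum of its inputs reaches a threshold. The inputs, weights, and threshold are invented for illustration; real OCR and speech recognition systems use large networks of such units, with weights learned from data rather than set by hand.

# A minimal sketch of a single "all-or-nothing" threshold neuron.
# The weights, inputs, and threshold below are made-up illustrative values.

def threshold_neuron(inputs, weights, threshold):
    """Fire (return 1) only if the weighted sum of inputs reaches the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Example: a unit that fires only when both of its inputs are active (a logical AND).
print(threshold_neuron([1, 1], [0.6, 0.6], threshold=1.0))  # -> 1
print(threshold_neuron([1, 0], [0.6, 0.6], threshold=1.0))  # -> 0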

Expert Systems
Expert Systems represent an approach in Artificial Intelligence research that became popular throughout the 1970s. An Expert System captures the knowledge of human experts in a program that can answer questions and solve problems within a clearly defined arena of knowledge, using “rules” of logic. This simple design made Expert Systems reasonably easy to design, build, and modify. Bank loan screening programs provide a good example of an Expert System from the early 1980s, but there were also medical and sales applications. Generally speaking, these simple programs became quite useful, and started saving businesses large amounts of money.

For instance, in 1980, Digital Equipment Corporation began requiring its sales team to use an Expert System named XCON when placing customer orders. DEC sold a broad range of computer components, but the sales force was not especially knowledgeable about what it was selling. Some orders combined components that didn’t work together, and some orders were missing needed components. Before XCON, technical advisers would screen the orders, identify nonfunctional combinations, and provide instructions for assembling the system. Since this process (including communications with the customer) caused a bottleneck at DEC, and many efforts to automate it had failed, DEC was willing to try a technology that was relatively new to this kind of situation. By 1986, the system was saving DEC $40 million annually.
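To make the idea of rule-based order checking concrete, here is a minimal, hypothetical Python sketch of the kind of if-then configuration rules a system like XCON applied to sales orders. The component names and rules are invented for illustration and do not reflect DEC’s actual rule base.

# A hypothetical sketch of rule-based order checking in the style of an Expert System.
# Each rule pairs a test on the order with the message to report when the rule fires.

RULES = [
    ("missing power supply",
     lambda order: "cpu" in order and "power_supply" not in order,
     "Order includes a CPU but no power supply."),
    ("incompatible memory",
     lambda order: "memory_type_a" in order and "memory_type_b" in order,
     "Memory types A and B cannot be installed in the same system."),
]

def check_order(order):
    """Apply every rule to the order and collect the problems found."""
    return [message for name, test, message in RULES if test(order)]

# Example order that triggers both rules.
for problem in check_order({"cpu", "memory_type_a", "memory_type_b"}):
    print(problem)

A real Expert System chained hundreds of such rules and kept the rule base separate from the inference engine, so trained knowledge engineers could extend it without rewriting the program.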

XCON (sometimes referred to as R1) was a large system with roughly 750 rules, and even though it could process multiple orders, it still needed to be adjusted and tweaked before DEC could use it efficiently. DEC learned the system could not be used as originally designed, and that the company did not have the expertise to maintain it. The “knowledge” in the system needed to be collected and added by people trained in Expert Systems and in knowledge acquisition. Many of DEC’s technical advisers were engineers, but they were not AI experts. The team of engineers DEC finally organized was “familiar” with AI, but its members were not chosen for their Artificial Intelligence expertise (there simply were not that many experts available), and no one in the group was familiar with OPS-4, the language the system was written in. After roughly a year, with a huge amount of assistance from Carnegie-Mellon (the program’s original writers), and with the system having grown to nearly 1,000 rules, DEC was able to take over the programming and maintenance of XCON. Integrating XCON into the DEC culture was a difficult, but successful, experience. Management learned an Expert System requires specially trained personnel, and took responsibility for hiring and training people to meet those needs.

At its peak, XCON had 2,500 rules and had evolved significantly (though its popularity has since waned, as it has become a bit of a dinosaur). XCON was the first computer system to use AI techniques to solve real-world problems in an industrial setting. By 1985, corporations all over the world had begun to use Expert Systems, and a new career field developed to support them. XCON could configure sales orders for all VAX-11 computer systems manufactured in the United States, but the system needed to be continuously adjusted and updated, and required a full-time IT team.

The Second AI Winter

The AI field experienced another major winter from 1987 to 1993. This second slowdown in AI research coincided with XCON and other early Expert System computers being seen as slow and clumsy. Desktop computers were becoming very popular and displacing the older, bulkier, much less user-friendly computer banks. Eventually, Expert Systems simply became too expensive to maintain when compared to desktops: they were difficult to update and could not “learn,” problems desktop computers did not have. At about the same time, DARPA (the Defense Advanced Research Projects Agency) concluded AI would not be “the next wave” and redirected its funds to projects deemed more likely to provide quick results. As a consequence, in the late 1980s, funding for AI research was cut deeply, creating the Second AI Winter.

Conversation with Computers Becomes a Reality

In the early 1990s, Artificial Intelligence research shifted its focus to something called an intelligent agent. These intelligent agents can be used for news retrieval services, online shopping, and browsing the web. Intelligent agents are also sometimes called agents or bots. With the use of Big Data programs, they have gradually evolved into personal digital assistants, or virtual assistants.

Currently, giant tech businesses such as Google, Facebook, IBM, and Microsoft are researching a number of Artificial Intelligence projects, including virtual assistants. They are all competing to create assistants such as Facebook’s M, Cortana from Microsoft, or Apple’s Siri. The goal of Artificial Intelligence is no longer to create an intelligent machine capable of imitating human conversation with a teletype. The use of Big Data has allowed AI to take the next evolutionary step. Now, the goal is to develop software programs capable of speaking in a natural language, like English, and acting as your virtual assistant. These virtual assistants represent the future of AI research: they may take the form of robots that provide physical help, be housed in laptops and help make business decisions, or be integrated into a business’s customer service program and answer the phone. Artificial Intelligence is still evolving and finding new uses.

About the author

Keith is a freelance researcher and writer. He has traveled extensively and is a military veteran. His background is in physics and business, with an emphasis on Data Science. He gave up his car, preferring to bicycle and use public transport. Keith enjoys yoga, mini adventures, spirituality, and chocolate ice cream.
