A Brief History of Artificial Intelligence

In 1950, a man named Alan Turing wrote a paper suggesting how to test a “thinking” machine. He believed that if a machine could carry on a conversation by way of a teleprinter, imitating a human with no noticeable differences, the machine could be described as thinking. His paper was followed in 1952 by the Hodgkin-Huxley model, which described neurons as forming an electrical network, with individual neurons firing in all-or-nothing (on/off) pulses. These combined events, discussed at a conference sponsored by Dartmouth College in 1956, helped spark the concept of artificial intelligence.

A PC Magazine survey showed that Google Assistant, Alexa, and Siri are the most popular nonhuman virtual assistants. Have we achieved true artificial intelligence? Sofia Altuna, of Google Assistant, said during an interview:

“Google Assistant brings together all of the technology and smarts we’ve been building for years, from the knowledge graph to natural language processing. Users can have a natural conversation with Google to help them in their user journeys.”

The development of AI has been far from streamlined and efficient. Artificial intelligence started as an exciting, imaginative concept in 1956, but research funding was cut in the 1970s after several reports criticized a lack of progress. Early efforts to imitate the human brain with “neural networks” were experimented with, and then dropped.

The field’s most advanced programs could handle only simplistic problems, and were described as toys by the unimpressed. AI researchers had been overly optimistic in establishing their goals (a recurring theme) and had made naive assumptions about the difficulties they would encounter. When the results they promised failed to materialize, it came as no surprise that their funding was cut.

The First AI Winter

The stretch of time between 1974 and 1980 has become known as “The First AI Winter.” AI researchers faced two very basic limitations: not enough memory, and processing speeds that would seem abysmal by today’s standards. Much like gravity research at the time, artificial intelligence research had its government funding cut, and interest dropped off. Unlike gravity research, however, AI research resumed in the 1980s, with the U.S. and Britain providing funding to compete with Japan’s new “fifth generation” computer project and its goal of becoming the world leader in computer technology.

The First AI Winter ended with the promising introduction of “Expert Systems,” which were developed and quickly adopted by large, competitive corporations around the world. The primary focus of AI research was now on accumulating knowledge from various experts and sharing that knowledge with users. AI also benefited from the revival of Connectionism in the 1980s.

Expert Systems

Expert Systems were an approach in artificial intelligence research that became popular throughout the 1970s. An Expert System uses the knowledge of human experts to create a program: a user asks the system a question and receives an answer, which may or may not be useful. The system answers questions and solves problems within a clearly defined domain of knowledge, using if-then “rules” of logic.

The software has a simple design and is reasonably easy to build and modify. Bank loan screening programs provide a good example of an Expert System from the early 1980s, but there were also medical and sales applications. Generally speaking, these simple programs became quite useful and started saving businesses large amounts of money. (Expert Systems are still available today, but much less popular.)
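
To make the idea concrete, here is a minimal sketch of a rule-based loan screener in the spirit of those early systems. All of the rules, thresholds, and field names are invented for illustration; real Expert Systems encoded hundreds or thousands of rules, often in languages such as LISP or Prolog.

```python
# A toy rule-based "expert system" for loan screening. Each rule pairs a
# condition on the applicant's facts with a conclusion; the engine fires
# the first rule whose condition holds. Every rule here is hypothetical.
rules = [
    (lambda f: f["annual_income"] < 12 * 2 * f["monthly_payment"],
     "reject: income too low relative to the loan"),
    (lambda f: f["missed_payments"] > 2,
     "reject: poor payment history"),
    (lambda f: f["years_employed"] < 1,
     "refer to a human underwriter: short employment history"),
]

def screen_loan(facts):
    """Apply each if-then rule in order; the first match decides."""
    for condition, conclusion in rules:
        if condition(facts):
            return conclusion
    return "approve"

applicant = {"annual_income": 52000, "monthly_payment": 900,
             "missed_payments": 0, "years_employed": 4}
print(screen_loan(applicant))  # -> approve
```

Notice that all of the “intelligence” lives in the rule list: the program itself never learns, which is exactly the limitation that would later make Expert Systems expensive to maintain.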

The Second AI Winter

The AI field experienced another major winter from 1987 to 1993. This second slowdown in AI research coincided with XCON, and other early Expert Systems, being seen as slow and clumsy. Desktop computers were becoming very popular and were displacing the older, bulkier, much less user-friendly machines these systems ran on.

Eventually, Expert Systems simply became too expensive to maintain when compared to desktop computers. They were difficult to update and could not “learn,” problems desktops did not have. At about the same time, DARPA (the Defense Advanced Research Projects Agency) concluded AI “would not be” the next wave and redirected its funds to projects more likely to provide quick results. As a consequence, funding for AI research was cut deeply in the late 1980s, creating the Second AI Winter.

Conversation with Computers Becomes a Reality

Natural language processing (NLP) is a subdivision of artificial intelligence that makes human language understandable to computers and machines. NLP was sparked initially by efforts in the early 1960s to use computers to translate between Russian and English. These efforts led to thoughts of computers that could understand a human language. Efforts to turn those thoughts into a reality were generally unsuccessful, and by 1966 many had given up on the idea completely.

During the late 1980s, natural language processing experienced a leap in evolution, the result of both a steady increase in computational power and the use of new machine learning algorithms. These new algorithms focused primarily on statistical models, as opposed to approaches such as decision trees that produced hard if-then rules. During the 1990s, the use of statistical models for NLP rose dramatically.
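
As a rough illustration of what “statistical model” means here, the sketch below estimates word-sequence probabilities from raw counts over a tiny invented corpus. This is the core idea behind the n-gram models that powered much of 1990s NLP; the corpus and code are illustrative, not taken from any historical system.

```python
# A minimal bigram language model: instead of hand-written rules, the
# "knowledge" is probabilities estimated by counting word pairs in text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigram_counts[w1][w2] += 1

def next_word_prob(w1, w2):
    """Estimate P(w2 | w1) from raw bigram counts."""
    total = sum(bigram_counts[w1].values())
    return bigram_counts[w1][w2] / total if total else 0.0

# "the" is followed by "cat" in 1 of its 4 occurrences in the corpus.
print(next_word_prob("the", "cat"))  # 0.25
```

Real systems of the era used far larger corpora, longer contexts, and smoothing for unseen word pairs, but the counting principle is the same.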

Intelligent Agents

In the early 1990s, artificial intelligence research shifted its focus to something called intelligent agents. These intelligent agents can be used for news retrieval services, online shopping, and browsing the web. Intelligent agents are also sometimes called agents or bots. With the use of Big Data programs, they have gradually evolved into digital virtual assistants and chatbots.

Machine Learning

Machine learning is a subdivision of artificial intelligence and is used to develop NLP. Although it has become its own separate industry, performing tasks such as answering phone calls with a limited range of appropriate responses, it is still used as a building block for AI. Machine learning and deep learning have become important aspects of artificial intelligence.

  • Boosting: Robert Schapire introduced the concept of boosting in his 1990 paper, “The Strength of Weak Learnability.” Schapire wrote, “A set of weak learners can create a single strong learner.” Most boosting algorithms repeatedly train weak classifiers and add them together, round by round, to form a single strong classifier (a minimal sketch appears after this list).
  • Speech Recognition: Most speech recognition training today relies on a deep learning technique called long short-term memory (LSTM), based on a neural network model developed in 1997 by Sepp Hochreiter and Jürgen Schmidhuber. The LSTM technique supports learning tasks that require memories of events thousands of small steps earlier, which is important for learning speech. Around 2007, LSTM began surpassing more established speech recognition programs, and in 2015, Google’s speech recognition program reported a 49 percent increase in performance by using a CTC-trained LSTM (see the second sketch after this list).
  • Facial Recognition: In 2006, the National Institute of Standards and Technology sponsored the Face Recognition Grand Challenge, which tested popular facial recognition algorithms against iris images, 3D face scans, and high-resolution facial images. Some of the new algorithms proved ten times as accurate as the facial recognition algorithms popular in 2002, and some could surpass humans in recognizing faces, even identifying identical twins. In 2012, an ML algorithm developed by Google’s X Lab could sort through videos and find those containing cats. In 2014, Facebook developed the DeepFace algorithm, which recognized people in photographs with the same accuracy as humans.
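
The sketch below illustrates the boosting idea from Schapire’s paper: individually weak one-split “stumps” are combined into a strong classifier. It uses scikit-learn’s AdaBoost implementation, a later refinement of the idea, and a synthetic dataset; both are illustrative choices rather than Schapire’s original construction.

```python
# Boosting in miniature: AdaBoost combines many weak "decision stump"
# classifiers (one-split trees) into a single strong classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single stump is a weak learner: only modestly better than guessing.
stump = DecisionTreeClassifier(max_depth=1).fit(X_tr, y_tr)
print(f"one weak stump:   {stump.score(X_te, y_te):.2f}")

# AdaBoost trains 200 stumps in sequence, re-weighting the training
# examples each round so later stumps focus on earlier mistakes; the
# stumps' weighted vote forms the strong classifier.
ensemble = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"boosted ensemble: {ensemble.score(X_te, y_te):.2f}")
```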
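And here is a minimal sketch of the kind of LSTM used in speech recognition pipelines, written in PyTorch; the feature sizes, label count, and model shape are invented for illustration, and this is not Google’s 2015 system.

```python
# A tiny LSTM acoustic model: audio arrives as a sequence of feature
# frames, and the network emits per-frame label scores of the form a
# CTC-style loss can train on.
import torch
import torch.nn as nn

class SpeechLSTM(nn.Module):
    def __init__(self, n_features=40, hidden=128, n_labels=30):
        super().__init__()
        # The LSTM's gated cell state lets information (and gradients)
        # persist across thousands of time steps, which is what suits
        # it to long audio sequences.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_labels)

    def forward(self, x):            # x: (batch, time, n_features)
        h, _ = self.lstm(x)
        return self.out(h)           # (batch, time, n_labels)

model = SpeechLSTM()
frames = torch.randn(8, 1000, 40)    # 8 utterances, 1000 frames each
print(model(frames).shape)           # torch.Size([8, 1000, 30])
```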

Digital Virtual Assistants and Chatbots

Digital virtual assistants understand spoken commands, and respond by completing tasks.

In 2011, Apple’s Siri developed a reputation as one of the most popular and successful digital virtual assistants supporting natural language processing. Online assistants such as Alexa, Siri, and Google Assistant may have started as convenient sources of information about the weather, the latest news, and traffic reports, but advances in NLP and access to massive amounts of data have transformed digital virtual assistants into useful customer service tools. They are now capable of doing many of the same tasks a human assistant can. They can even tell jokes.

Digital virtual assistants can now manage schedules, make phone calls, take dictation, and read emails aloud. There are many digital virtual assistants on the market today, with Apple’s Siri, Amazon’s Alexa, Google Assistant, and Microsoft’s Cortana as well-known examples. Because these AI assistants respond to verbal commands, they can be used hands-free, allowing a person to drink their coffee or change a diaper while the assistant accomplishes the assigned task.

These virtual assistants represent the future of AI research: systems that drive cars, take the form of robots to provide physical help, and perform research to support business decisions. Artificial intelligence is still evolving and finding new uses.

Chatbots and digital virtual assistants are quite similar. Chatbots (sometimes called “conversational agents”) can talk with real people and are often used for marketing, sales, and customer service. They are typically designed to have human-like conversations with customers, but have also been used for a variety of other purposes. Businesses often use chatbots to communicate with customers (or potential customers) and to offer assistance around the clock. They normally cover a limited range of topics, focused on a business’s services or products.

Chatbots have enough intelligence to sense context within a conversation and provide the appropriate response. Chatbots, however, cannot seek out answers to queries outside of their topic range or perform tasks on their own. (Virtual assistants can crawl through the available resources and help with a broad range of requests.)

Passing Alan Turing’s Test

In my humble opinion, digital virtual assistants and chatbots have passed Alan Turing’s test and achieved true artificial intelligence. Current artificial intelligence, with its ability to make decisions, can be described as capable of thinking. If these entities were communicating with a user by way of a teleprinter, a person might very well assume there was a human at the other end. That these entities can communicate verbally, and recognize faces and other images, far surpasses Turing’s expectations.
