
How the Three Paradigms of AI/Machine Learning Will Fuel the Tech Evolution

By Ram Sivasankaran


The rapid evolution and democratization of advanced technology have accelerated humankind’s ability to achieve what was unthinkable even a decade ago. Self-driving cars, robotic vacuums, and virtual assistants such as Siri have pushed us not just over the top technologically, but into conceptual overdrive as well. The challenge of using all of this technology to improve our lives bears down on a society still grappling with how to tap the full potential of an ever-changing array of handheld devices more powerful than the computers that put us on the moon.

According to the 2017 Huffington Post article “Why Tech is Accelerating,” the first steps of human invention, such as sharp edges, the wheel, and fire, took tens of thousands of years. In contrast, the latter half of the last millennium saw breakthroughs such as penicillin, electricity, telecommunications, space travel and, of course, the World Wide Web.

The exponential growth of the internet in the 1990s enabled scientists and researchers to access vast numbers of online knowledge bases. This ability increased the speed of innovation, as research teams shared their findings in online repositories, supported by what were at the time fairly costly data-processing systems. Today, data processing and storage are far cheaper, more efficient, and more powerful than ever before.

The next step in this evolution of the means and methods of knowledge acquisition, processing, and delivery is Artificial Intelligence, or AI. Let’s quickly revisit what the term means and explore some aspects of its still vastly untapped potential to shape the course of history.

Enter AI 

AI works on the bedrock concept called machine learning, which is designed to make sense out of vast, variegated, and evolving collections of data. It allows computer systems to “learn” from “experience,” as opposed to expanding their abilities through the hard-coding of new parameters and constraints into their base programming.

For example, consider a relatively simple task for humans – identifying whether an animal in a video is a cat. Without consciously thinking about it, humans draw from a metaphorical database in their brains to access some of the most tell-tale traits of a cat – does the creature in the video have whiskers? Does it have pointy ears? Does it “meow”? These attributes most commonly associated with cats have been collected and compiled in the human brain over past experiences, most likely starting in early childhood. This accumulation of data continues until the margin of error for a human in properly and accurately identifying an animal as a cat – or whatever it may be – diminishes to near zero.

Identifying a Cat
Image: Wikihow

Now assume that at some point the young human is shown a picture of a lion and told that it is a member of the feline family, as is the common domestic cat. The human must then assimilate the data on domestic cats, and those on lions, into a newly created classification of felines, thereby recognizing both the similarities and the differences between them.

The above is precisely the kind of adaptive learning ability that can be provided to computer systems to answer questions of various kinds. The computer systems are still endowed with programming – there can be no AI without human-provided programming – but the program is capable of expanding on its abilities by processing and assimilating new information in lieu of new lines of code. This is a good place to branch out and talk about the three primary forms of machine learning, while also exploring a few examples of how the AI stemming from them might find application in various walks of life in the years and decades to come.

Supervised Learning

What is it? To put it simply, this is the type of machine learning in which a problem is defined and the computer system is fed curated and validated examples of how it may be solved. For example, if the problem were the question ‘is the animal in the given video a cat?’, the computer system could be fed videos of various cats, paired with the answer ‘yes,’ and videos of dogs, ducks, and humans, paired with the answer ‘no.’ The algorithms can be designed to collect and narrow down the attributes that distinguish cats from everything else, and then use those attributes to classify new, unseen examples. The key phases and steps of supervised learning are illustrated below:


Figure 1: Building Blocks of Supervised Learning
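To make the idea concrete, here is a minimal sketch of this train-then-predict pattern, written in Python with scikit-learn. The feature vectors (whiskers, pointy ears, meowing) and the tiny training set are invented purely for illustration; a real computer-vision system would learn such features from raw video frames rather than receive them hand-coded.

```python
# A minimal sketch of supervised learning: the "is this a cat?" question
# reduced to toy, hand-made feature vectors. The features and data here
# are invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each example: [has_whiskers, has_pointy_ears, meows] -> 1 = cat, 0 = not a cat
X_train = [
    [1, 1, 1],  # cat
    [1, 1, 1],  # cat
    [1, 0, 0],  # dog (whiskers, but floppy ears and no meow)
    [0, 0, 0],  # duck
    [0, 0, 0],  # human
]
y_train = [1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)          # "learn" from curated, labeled examples

# A new, unlabeled observation: whiskers, pointy ears, meows
print(model.predict([[1, 1, 1]]))    # -> [1], i.e., the model predicts "cat"
```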

Applications: Supervised learning is already being used extensively in the training of self-driving vehicles. The more vehicles with data-collecting cameras (enabling ‘computer vision’) drive around, the better they become at recognizing lane markers, traffic lights, crossing pedestrians, and other vehicles and objects on the road. This form of technology will work best under ideal driving conditions, i.e., those where lane markers are clear, people drive civilly, and so on. It will, however, have to learn significantly to account for the randomness brought in by other factors such as faded or unclear road signs, traffic junctions with glaring blind spots, and human drivers breaking road rules. Each time something unexpected is encountered on the road, the algorithm designed to run the autonomous vehicle will have to “learn to expect” it and include contingencies for avoiding or mitigating adverse consequences.

The same technology could also be extended to other modes of transport such as airplanes, trains, and trams. In a way, it might be better suited, at least for the moment, to such vehicles. Supervised learning works best when the training data required to achieve acceptable standards of safety and accuracy are manageable, and these forms of transport have clearly defined travel paths, schedules, and strictly-adhered-to rules of operation. The algorithm running the vehicle should not have to get into a vicious cycle of encountering and accounting for an ever-increasing array of unforeseen scenarios. Therefore, a train that is constrained to its tracks and operated within the parameters of set rules and protocols will achieve high safety margins faster than an autonomous road vehicle continuously contending with, and learning from, the more vibrant dynamics of roadway travel.

Taking this a step further, let us imagine expanding supervised learning to try to answer questions in the realm of the unknown. For example, there is still a handful of elements waiting to be discovered and placed in the empty spots reserved for them in the Periodic Table. Some of the predicted properties of these elements are already shared by others that are well known and established in the Periodic Table. Could we train a computer system, with the appropriate physical sensors and mechanisms, to analyze, classify, and identify new forms of matter? In a similar manner, could AI be given the means to recognize (or suspect) life as we know it and be programmed into rovers headed out on interplanetary missions – even as we build more capable spacecraft to scour the far reaches of our solar system? These are only a couple of potential applications of supervised learning that could aid the human zest to charge into new frontiers.

Unsupervised Learning

What is it? Much like supervised learning, unsupervised learning is provided with foundational programming to get it started. However, it does not work on improving itself based on “experience” to answer clearly-defined questions. Instead, this form of machine learning is designed to seek out and identify patterns within large sets of incongruous data. It then attempts to group (cluster) those data based on the various attributes it recognizes during processing. This, in turn, sets the stage for humans to analyze the processed data, recognize non-obvious correlations between elements, and establish relationships between them (wherever applicable). The three steps of working with data processed through unsupervised learning are illustrated below:


Figure 2: Building Blocks of Unsupervised Learning
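As a contrast with the supervised sketch above, here is a minimal, hypothetical example of unsupervised clustering using scikit-learn’s KMeans. No labels are supplied; the algorithm only groups similar observations, and it is up to a human to decide what, if anything, each group means. The feature values below are invented for illustration.

```python
# A minimal sketch of unsupervised learning: no labels are provided, the
# algorithm only groups similar observations. The feature values are
# invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled observations, e.g. [body_size, ear_pointiness] on an arbitrary scale
observations = np.array([
    [0.30, 0.90], [0.35, 0.85], [0.32, 0.95],   # small animals with pointy ears
    [0.80, 0.20], [0.85, 0.25], [0.78, 0.15],   # large animals with rounded ears
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(observations)
print(kmeans.labels_)   # e.g. [0 0 0 1 1 1] -- two clusters, with no names attached

# It is then up to a human analyst to inspect each cluster and decide
# what, if anything, the grouping means.
```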

Applications: Perhaps the above academic description could be supplemented by an example from everyday life. I will draw much of my inspiration here from a Netflix series I recently watched. Consider a hypothetical example from the discipline of forensic science and investigation. A crime took place in a house, and there is no tangible lead for investigators except for a security camera pointed out into the street. Although over a hundred cars passed within the camera’s view during the period surrounding the crime, no person was seen approaching the front door and breaking in. Nevertheless, a clue could yet be buried in the traffic flowing through the street opposite the front door during that period. Did any single vehicle pass by the address more than twice before and/or after the alleged crime? Investigators might suspect that the perpetrator drove by several times, both before and after the crime, to vet the situation at the victim’s residence. They say, after all, that criminals are the most frequent visitors to crime scenes.

In this case, watching hours of surveillance video and trying to keep track of the minutiae of the make, model, color, and license plate of each car might be a formidable task for a human being or even a team of humans. Not so much for a machine. An algorithm powered by unsupervised learning might extract and group all vehicles that passed the crime scene in a given time window by any number of attributes of interest. Based on all this information, if one vehicle is recognized to have crossed the crime scene over five times in the described window, would that discovery not be a great lead for investigators?
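A toy sketch of that idea follows, assuming a vision pipeline has already reduced each sighting to a handful of attributes; the column names and data are entirely hypothetical. In practice, the harder unsupervised problem is clustering raw detections into “same vehicle” groups when no plate is readable; here the grouping is reduced to a simple attribute-based count.

```python
# A toy sketch of the surveillance example: assume a (hypothetical) vision
# pipeline has already reduced each sighting to (timestamp, make, color, plate).
# A real analysis would also filter sightings to the time window of interest.
import pandas as pd

sightings = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-05-01 08:10", "2023-05-01 09:40", "2023-05-01 11:05",
        "2023-05-01 13:30", "2023-05-01 15:55", "2023-05-01 10:20",
    ]),
    "make":  ["Sedan A", "Sedan A", "Sedan A", "Sedan A", "Sedan A", "Van B"],
    "color": ["grey",    "grey",    "grey",    "grey",    "grey",    "white"],
    "plate": ["ABC123",  "ABC123",  "ABC123",  "ABC123",  "ABC123",  "XYZ789"],
})

# Group sightings by vehicle attributes and count how often each vehicle passed
counts = sightings.groupby(["make", "color", "plate"]).size()
print(counts[counts >= 5])   # vehicles seen five or more times -> a lead worth checking
```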

The greatest advantage of unsupervised machine learning lies in its ability to ingest large amounts of seemingly random or chaotic data and help correlate the information contained within (if valid correlations do exist) in such a way as to facilitate statistically meaningful conclusions. This ability to draw non-obvious correlations from large amounts of data could find particular application in the study of cause and effect in various disciplines. Indeed, it might offer deeper insight in wide-ranging applications such as diagnosing deadly diseases at early stages, based on the identification of previously unknown or subtle symptoms, or developing more sophisticated early warning systems for natural disasters such as earthquakes, tsunamis, and hurricanes.


Figure 3: Building Blocks of Reinforcement Learning

Reinforcement Learning

What is it? This last pillar of machine learning is based loosely on the common phrase ‘practice makes perfect.’ Simply speaking, this form of machine learning is about allowing computer systems to experiment with all possible means and methods for executing a task, scoring all those different iterations based on clearly-defined performance criteria, and then picking the method with the best score for deployment. The computer system will be rewarded with points for meeting success criteria and penalized for failing some or all of them in each reinforcement iteration.
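As an illustration of the score-and-improve loop described above, here is a minimal tabular Q-learning sketch in Python. The “task” is a made-up five-cell corridor in which the agent is rewarded for reaching the last cell and lightly penalized for every step it takes; all parameter values are illustrative, not tuned.

```python
# A minimal sketch of reinforcement learning: tabular Q-learning on a tiny,
# made-up "corridor" task. The agent starts at cell 0 and is rewarded only
# when it reaches cell 4; every other step costs a small penalty.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate

for _ in range(500):                     # 500 training episodes
    state = 0
    while state != GOAL:
        # Explore occasionally, otherwise exploit the best-scoring action so far
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else -0.01    # reward success, penalize wandering
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy at each cell is simply the higher-scoring action
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])  # typically [1, 1, 1, 1]
```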

Applications: To exemplify, let’s consider the use of robots in surgery. Training a robotic arm to perform a (relatively) simple procedure such as LASIK would involve multiple simulations of the operation with one or more variables changed each time – the type of eye being operated on, the severity of the refractive error being corrected, the age of the patient, etc. Risk factors could be simulated and overall performance evaluated at the end of each simulation. Iterative training could account for and correct misses from all of the previous rounds. When sufficient training is deemed to have been provided to the robotic surgeon, human doctors could delegate certain aspects of surgery to machines while focusing on the more complex ones. This might be particularly useful in lowering the costs of certain types of surgery, thereby making them more widely available where a lack of manpower and expertise might otherwise have presented a challenge.

Reinforcement learning, like supervised learning, could also prove to be an immensely powerful tool with applications in space exploration. With the very real goals of placing humans on other worlds, such as Mars, and eventually terraforming them, high-stakes missions to carry humans safely to their destinations might be preceded by numerous simulations and field tests of robotic missions put through rounds of reinforcement learning. Spacecraft navigation systems refined through reinforcement learning on unmanned training missions could help mitigate risks to humans and other critical equipment in subsequent manned missions.

Going Beyond: Can AI Tend Toward Humankind’s Cerebral Abilities?

I would like to conclude by reasserting, in general, that the evolution of computers and advanced data-processing techniques has helped, and will continue to help, in discovering new and unanticipated frontiers, both in the world we live in and the ones we are looking to explore. Human intelligence and ingenuity are behind every invention, whether it performs only the most rudimentary of tasks and calculations or thinks independently about constantly bettering the individual and civilization alike.

We have demonstrated and established that intelligence to perform tasks effectively and efficiently can be infused into machines. Perhaps the machines could themselves one day be integrated with an inventive drive to improve our civilization. I sometimes dream of a world in which AI is not only delegated tasks to perform but also given free rein to explore on behalf of us and in addition to us. The sophistication of the human brain, by design, seems an impossible benchmark to replicate. However, I have no doubt that the next wave of scientific and technological advancements in AI will take a step closer toward mirroring the speed and depth of our very own thoughts, ideals, and drive to improve lives. 
