
Using Graph Technology in the Evolution of AI


Graph technology is being used to advance the evolution of artificial intelligence. Graph databases show how data is interlinked, expressing relationships within the data that are difficult to capture in a tabular SQL system. They work especially well when complex patterns must be identified quickly. Graphs are an excellent tool for inferring relationships and enhancing artificial intelligence by providing context.

Graph AI (graph-based artificial intelligence) is in its infancy but shows great promise. Graph databases provide greater expressiveness in modeling and can handle computationally complex queries over connected data. As a consequence, they are proving useful in developing and using graph-based machine learning (ML) and deep learning (DL) models.

Graph-based ML and DL models are improving accuracy and modeling speeds, and are making the building of artificial intelligence solutions more accessible. Graphs can improve performance through context-aware model optimization and can help make neural networks more explainable.

Using Relationships as Predictors

Graphs are uniquely suited to representing and presenting interconnected relationships. The objects represented in a graph are called “nodes” (also known as “vertices”), and the lines connecting them are known as “relationships” (“edges”). Both nodes and relationships can carry properties and attributes.
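As a rough illustration, the sketch below builds a tiny property graph in Python using the networkx library. The node names, relationship type, and properties are hypothetical, chosen only to show how attributes can be attached to both nodes and edges.

```python
import networkx as nx

# Build a small property graph: nodes and relationships both carry attributes.
G = nx.DiGraph()

# Nodes ("vertices") with properties
G.add_node("Alice", kind="Person", role="data scientist")
G.add_node("Acme", kind="Company", industry="software")

# A relationship ("edge") with properties
G.add_edge("Alice", "Acme", relationship="WORKS_FOR", since=2021)

# Inspect the stored properties
print(G.nodes["Alice"])          # {'kind': 'Person', 'role': 'data scientist'}
print(G.edges["Alice", "Acme"])  # {'relationship': 'WORKS_FOR', 'since': 2021}
```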

Graph algorithms are designed to focus on relationships and can find structures and reveal patterns in connected data. This model gives artificial intelligence a closer representation of reality to work with than SQL, which stores data as rows and columns. These algorithms have the potential to support a more human-like artificial intelligence.

Because graph algorithms provide context, they offer a closer analogy of how the human mind works than SQL storage systems.

Humans use context when determining what is important within a situation. For AI entities to make decisions similar to how humans make decisions, the AI needs to incorporate large amounts of context.

Real-world networks typically take the form of dense groups with structure and clumpy distributions. This type of pattern appears in everything from social networks to transportation and economic systems. Graph analytics differs from more traditional analysis by including metrics that are based on relationships. Traditional statistical approaches don’t capture the associations that graph databases do; they tend to average out these distributions.
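As a sketch of what relationship-based metrics look like in practice, the snippet below uses networkx to compute betweenness centrality and to detect densely connected communities in a small, made-up network. The nodes and edges are purely illustrative.

```python
import networkx as nx
from networkx.algorithms import community

# A small, made-up network with two dense "clumps" joined by one bridge
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("A", "C"),   # first dense group
    ("D", "E"), ("E", "F"), ("D", "F"),   # second dense group
    ("C", "D"),                           # bridge between the groups
])

# Relationship-based metric: which nodes sit on the most shortest paths?
centrality = nx.betweenness_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:2])  # "C" and "D" act as bridges

# Structure detection: find the dense groups (communities)
groups = community.greedy_modularity_communities(G)
print([sorted(g) for g in groups])        # e.g., [['A', 'B', 'C'], ['D', 'E', 'F']]
```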

Machine Learning, Deep Learning, and Artificial Intelligence

Artificial intelligence can be described as an effort to automate intellectual tasks that are normally performed by humans. This goal can be achieved through a combination of several different machine learning algorithms, or a mixture of ML and deep learning algorithms. (Classical ML tends to work well with small to medium amounts of data, while DL generally needs large amounts of data to perform well.)

Machine learning was first demonstrated in 1952, with an algorithm that learned how to play checkers. It remained a training tool for AI into the late 1970s and early 1980s, when AI research shifted toward knowledge-based, logical approaches rather than algorithms, and neural network/deep learning research was largely abandoned by AI researchers.

Machine learning was then reorganized as a separate field, one that struggled to survive for nearly a decade and began to thrive only in the 1990s, primarily because of the growth of the internet. When the second AI winter ended around 1993, machine learning once again became a training tool for artificial intelligence, along with deep learning and neural networks.

Deep learning supports a training process that requires minimal human intervention and typically relies on a neural network. It can condense unstructured, high-dimensional data into more manageable representations through a process called dimensionality reduction.
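A minimal sketch of this idea, assuming PyTorch is available: a small autoencoder compresses high-dimensional input vectors into a much smaller representation, one common deep learning approach to dimensionality reduction. The layer sizes and input data here are arbitrary.

```python
import torch
import torch.nn as nn

# Hypothetical example: compress 784-dimensional inputs (e.g., flattened images)
# into a 32-dimensional representation.
class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)        # reduced representation
        return self.decoder(z), z  # reconstruction and latent code

model = AutoEncoder()
x = torch.rand(16, 784)                    # a batch of 16 synthetic inputs
recon, latent = model(x)
loss = nn.functional.mse_loss(recon, x)    # training would minimize reconstruction error
print(latent.shape)                        # torch.Size([16, 32])
```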

An artificial neural network uses “neuron layers” (also called “node layers”): an input layer, one or more hidden layers, and finally an output layer. Each node, or artificial neuron, is connected to other neurons and has associated weights (the strength of its connections) and a threshold. If a node’s output exceeds its threshold, the node is activated and passes data on to the next layer.
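The toy example below (plain NumPy, with made-up sizes, weights, and thresholds) mirrors that description: an input layer feeds a hidden layer and then an output layer, each connection has a weight, and a neuron passes a signal forward only when its weighted input exceeds its threshold. Real networks typically use smooth activation functions and learn the weights during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# A neuron "fires" (passes a signal to the next layer) when its
# weighted input exceeds the threshold.
def step(values, threshold=0.0):
    return (values > threshold).astype(float)

x = rng.random(4)                     # input layer: 4 values
W1 = rng.normal(size=(3, 4))          # weights: input -> hidden layer of 3 neurons
W2 = rng.normal(size=(1, 3))          # weights: hidden -> single output neuron

hidden = step(W1 @ x, threshold=0.5)  # hidden neurons activate above their threshold
output = step(W2 @ hidden, threshold=0.5)
print(hidden, output)
```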

Deep learning supports speech recognition and facial recognition. 

Graph Databases, Containers, and Artificial Intelligence

Businesses can accelerate the development of deep learning and machine learning models by using containers. Containerized environments can be provisioned in minutes rather than the weeks or months traditional infrastructure can require, and they are lighter-weight than virtual machine environments. Containerized development environments allow clusters to be spun up easily when needed and spun back down when the work is done.

During the training phase, a container offers the flexibility to build distributed training environments across multiple host servers. Once the ML model is trained, it can be deployed to other environments, such as a public cloud, on-premises servers, or the edge of a network.
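One common deployment pattern, sketched below under the assumption that PyTorch and its ONNX export support are available, is to export the trained model to a portable format so it can be served from a container in the cloud, on premises, or at the edge. The model architecture and file names are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical trained model: a small classifier standing in for a real ML model.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# Option 1: save PyTorch weights for redeployment on another PyTorch host.
torch.save(model.state_dict(), "model_weights.pt")

# Option 2: export to ONNX, a portable format many cloud and edge runtimes can load.
dummy_input = torch.randn(1, 10)
torch.onnx.export(model, dummy_input, "model.onnx")
```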

Containers allow an organization to run different ML and DL frameworks, even ones with conflicting software dependencies, on the same server.

The use of Docker containers supports the deployment of machine learning models. Docker is open-source software designed to support and simplify application development by creating isolated, virtualized environments that can be used to build, test, and deploy applications. A Docker “image” is an unchangeable (read-only) file, a kind of template, containing the source code, dependencies, tools, libraries, and other files required for an application to operate. A Docker “container” is a running instance of an image: a virtualized environment that isolates the application from the underlying system.
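As a minimal sketch, assuming the Docker Engine is running locally and the docker Python SDK (docker-py) is installed, the snippet below pulls a public image and runs a short-lived, isolated container from it. The image and command are arbitrary examples.

```python
import docker

# Connect to the local Docker daemon using environment defaults.
client = docker.from_env()

# Pull a read-only image (the template) from a registry.
client.images.pull("python:3.11-slim")

# Run an isolated container from that image; it prints a message and is removed on exit.
output = client.containers.run(
    "python:3.11-slim",
    command=["python", "-c", "print('hello from an isolated container')"],
    remove=True,
)
print(output.decode())
```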

Graph Databases Supporting Artificial Intelligence 

Three graph databases are frequently cited as supporting the development of graph AI: NebulaGraph, HyperGraphDB, and Neo4j. All three offer an open-source version, although Neo4j’s open-source version (the Community Edition) does not support its artificial intelligence features, and HyperGraphDB does not yet have documented graph AI use cases.

NebulaGraph, on the other hand, already has at least one artificial intelligence application built on it: Nebula Siwi, a simple chatbot developed in 2021 that is still evolving. NebulaGraph also acts as a knowledge graph supporting deep learning (referred to as graph learning) and machine learning. For those who want to experiment with graph databases in developing artificial intelligence, NebulaGraph contains the elements necessary for building a high-functioning artificial intelligence.

In graph AI, knowledge graphs complement machine learning techniques.

The NebulaGraph Core separates the data storage process and computing process. The details of NebulaGraph are described in a paper titled “NebulaGraph: An open source distributed graph database.” Other components supporting its use as a data lakehouse are:

  • A Data Collection Service: NebulaGraph calls it “Breadth-First Search” 
  • The Data Storage Layer: “The Storage Service”
  • The Metadata Layer: “The Meta Service”
  • The API Layer: “The Storage Interface”
  • The Data Consumption Layer: “Nebula Algorithms & Analytics Framework”

NebulaGraph can function as an OLTP database, processing streaming data in real time and supporting use cases such as automatic risk identification.

The NebulaGraph Explorer works with the NebulaGraph DBMS, and supports querying data by using tags, VIDs, and subgraphs. It is user-friendly, supports graph exploration, and can be deployed using fairly simple steps. 
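The same kind of VID-based lookup can also be issued programmatically. The sketch below assumes a local NebulaGraph instance with default credentials, the official nebula3-python client, and the sample basketballplayer space from the NebulaGraph documentation; the address, credentials, space, and VID would need to match a real deployment.

```python
from nebula3.Config import Config
from nebula3.gclient.net import ConnectionPool

# Connect to a (hypothetical) local NebulaGraph graphd service.
config = Config()
config.max_connection_pool_size = 10
pool = ConnectionPool()
pool.init([("127.0.0.1", 9669)], config)

session = pool.get_session("root", "nebula")  # default credentials; change in production
try:
    session.execute("USE basketballplayer;")
    # Fetch a vertex by its VID and return its properties.
    result = session.execute('FETCH PROP ON player "player100" YIELD properties(vertex);')
    print(result)
finally:
    session.release()
    pool.close()
```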

NebulaGraph is designed to work with Docker containers, which promotes the accelerated development of deep learning (graph learning) and machine learning. 

The Future of Graph AI

Graph AI is rapidly becoming useful for influence analysis, sentiment monitoring, fraud detection, engagement optimization, market segmentation, and the broader development of artificial intelligence.

In her article on why graph databases are the future, Cristina Mulas Lopez wrote, “The real world is very interconnected, and graph databases aim to mimic those sometimes consistent – sometimes erratic – relationships in an intuitive way. That’s what makes the graph paradigm different than other database models: it maps more realistically to how the human brain maps and processes the world around it.”

Graph AI will take a few years to evolve, but it should provide a superior form of artificial intelligence.

