
How a Neuro-Symbolic AI Approach Can Improve Trust in AI Apps

By Jans Aasman

As a cognitive scientist, I’ve been immersed in AI for more than 30 years – specifically in speech and natural language understanding, as well as machine learning and rule-based decision-making. Progress in our field is always uneven, unfolding in fits and starts. Those of us in the AI field have witnessed multiple “AI winters” over the decades, yet we continue to advance the vision of AI. With the emergence of ChatGPT and other generative AI large language models (LLMs), we have reached a tipping point in the trajectory of AI – a juncture I never thought we’d achieve in my lifetime. 

But LLMs on their own are only one piece of the AI puzzle. The real leap forward for AI comes with combining the different approaches of AI into a single system that utilizes the unique strengths of each approach, while simultaneously addressing their inherent weaknesses. 

By integrating machine learning (statistical AI), neural network-based decision-making (neuro AI), symbolic logic and reasoning (symbolic AI), and the powerful capabilities of large language models (generative AI), we can solve complex problems that require reasoning abilities, while also learning efficiently with limited data and expanding the applicability of AI across a broader array of tasks. Importantly, the blending of symbolic AI, statistical AI, and neuro AI with generative AI produces decisions that are explainable and understandable to humans – an important step in the progression of AI.

The synergy of these technologies is of particular interest to enterprises because it offers the potential to significantly enhance trust in AI inferences. By fostering more transparent and explainable AI systems, organizations can achieve a higher level of confidence in the decisions and insights generated by their AI systems, paving the way for more reliable and understandable AI-driven solutions.

The Role of Knowledge Graphs

Semantic knowledge graphs are fundamental to neuro-symbolic AI. The first generation of knowledge graphs, emerging around 15 years ago, primarily utilized symbolic logic and rule-based approaches to generate valuable insights. While systems founded on logic or rules are known for their reliability and consistency, they frequently face challenges with complexity and ongoing maintenance. 

The second generation, beginning approximately 10 years ago, incorporated classical machine learning and graph neural networks to draw inferences directly from the knowledge graph data. This innovation introduced the ability to classify objects, predict connections within the knowledge graph, or forecast events concerning customers, aircraft, or patients. These machine learning techniques excel at uncovering new rules and patterns from extensive datasets, but they often suffer from opacity and ingrained biases.

The latest, third generation of knowledge graphs, which started around two years ago, integrates the capabilities of large language models (LLMs) and local vector stores (or retrieval-augmented generation – RAG) into the knowledge graph framework. The introduction of LLMs has revolutionized our ability to make inferences about entities within the knowledge graph, leveraging the immense computational power of LLMs. This generation has significantly simplified the creation of ontologies, taxonomies, and the formulation of queries and rules. However, LLMs introduce their own challenges, notably skepticism about the reliability of their inferences, necessitating rigorous verification of each inference made.
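The RAG pattern mentioned above can be sketched in a few lines: embed the documents, retrieve the ones closest to a question, and hand them to the LLM as grounding context. The sketch below is a toy illustration, not a real system – the bag-of-words “embedding” stands in for a genuine embedding model, and the prompt would go to an actual LLM client:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' -- a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (the 'R' in RAG)."""
    qv = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(qv, embed(d)), reverse=True)
    return ranked[:k]

def augmented_prompt(query: str, documents: list[str]) -> str:
    """Build the prompt the LLM would receive: retrieved facts plus the question."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In a knowledge graph setting, `documents` would be statements drawn from the graph itself, which is what keeps the LLM’s answer anchored to curated facts.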

Let’s explore some ways in which AI systems can complement each other, keeping in mind that these examples represent just a small portion of the potential applications.

Using LLMs to Build, Curate, and Query Knowledge Graphs 

Large language models (LLMs) can be instrumental in the development, curation, and querying of knowledge graphs. Organizations aiming to construct a knowledge graph could employ an LLM to generate initial taxonomies and ontologies. Subsequently, LLMs can also assist in creating extract, transform, load (ETL) processes to populate the knowledge graph from conventional data sources such as relational database management systems (RDBMSs) and even utilize LLMs themselves as a data source. 
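As a rough sketch of that first step, an LLM can be prompted to draft subclass relations for a domain, which are then converted into triples ready to load into a graph store. Everything here is illustrative – the prompt format, the `child < parent` line convention, and the canned `fake_llm` stand-in for a real LLM client are all assumptions, not an actual API:

```python
def draft_taxonomy(llm, domain: str) -> list[tuple[str, str]]:
    """Ask an LLM to draft (child, parent) subclass pairs for a domain.

    `llm` is any callable mapping a prompt string to a text response;
    a real LLM client would be plugged in here."""
    prompt = (f"List subclass relations for the domain '{domain}' "
              "as lines of the form 'child < parent'.")
    pairs = []
    for line in llm(prompt).splitlines():
        if "<" in line:
            child, parent = (part.strip() for part in line.split("<", 1))
            pairs.append((child, parent))
    return pairs

def to_triples(pairs: list[tuple[str, str]]) -> list[tuple[str, str, str]]:
    """Turn the pairs into RDF-style triples for loading into a knowledge graph."""
    return [(child, "rdfs:subClassOf", parent) for child, parent in pairs]

# Canned stub standing in for a real LLM call:
fake_llm = lambda prompt: "Sedan < Car\nSUV < Car\nCar < Vehicle"
```

The same pattern extends to the ETL case: the LLM drafts the mapping from RDBMS columns to graph properties, and a human reviews the result before it is loaded.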

Furthermore, LLMs have proven highly effective in curating existing information within knowledge graphs. For instance, despite its critical role in healthcare, the Unified Medical Language System by the National Library of Medicine contains errors, including cyclical taxonomies. A novel approach that merges graph algorithms with an LLM acting as a decision-making “oracle” has successfully resolved these issues. In addition, LLMs can be used to generate new symbolic logic rules, such as creating diagnostic guidelines for specific diseases, or to formulate queries for knowledge graph interrogation.
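The “graph algorithm plus LLM oracle” idea can be sketched as follows: a depth-first search finds a cycle in the taxonomy, and an oracle decides which offending edge to drop. This is a minimal illustration, not the actual UMLS repair procedure; the stub oracle below stands in for an LLM that would be prompted with the conflicting relations:

```python
def find_cycle(edges: list[tuple[str, str]]):
    """Return one cycle in a (child -> parent) taxonomy as a node list, or None."""
    graph = {}
    for child, parent in edges:
        graph.setdefault(child, []).append(parent)
    visited, stack = set(), []

    def dfs(node):
        if node in stack:                       # back-edge: cycle found
            return stack[stack.index(node):] + [node]
        if node in visited:
            return None
        visited.add(node)
        stack.append(node)
        for nxt in graph.get(node, []):
            cycle = dfs(nxt)
            if cycle:
                return cycle
        stack.pop()
        return None

    for node in list(graph):
        cycle = dfs(node)
        if cycle:
            return cycle
    return None

def repair(edges, oracle):
    """Repeatedly find cycles and let an 'oracle' pick which edge to drop.

    `oracle` is any callable mapping a cycle to the (child, parent) edge to
    remove; a real system would prompt an LLM with the conflicting relations."""
    edges = list(edges)
    while (cycle := find_cycle(edges)):
        edges.remove(oracle(cycle))
    return edges
```

The division of labor is the point: the symbolic algorithm guarantees every cycle is found, while the LLM supplies the domain judgment about which relation is wrong.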

Enhancing Machine Learning with LLMs

Organizations possessing information within their knowledge graphs and other databases may seek to employ machine learning for generating novel, data-driven insights. As an illustration, consider an application of machine learning within a healthcare knowledge graph aimed at forecasting the likelihood of a patient’s readmission to a hospital within a 30-day timeframe. A recurrent neural network (RNN) demonstrated exceptional accuracy in predicting readmissions, yet it failed to provide any explanatory insight into its predictions. 

In contrast, a large language model, having analyzed over 36 million PubMed articles, was capable of offering detailed explanations behind these predictions, bridging the gap between raw predictive power and understandable rationale. It’s worth noting that we’ve also observed how LLMs can assist developers in generating the feature vectors necessary for machine learning models, and even in writing the Python code required to train these models. This points to a not-too-distant future in which a knowledge graph could potentially automate machine learning processes on its internal data with minimal or no human intervention.
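The two roles described above – turning a patient record into a feature vector for an opaque model, and pairing the model’s score with an LLM-generated rationale – can be sketched like this. The patient fields are illustrative, not a real clinical schema, and `llm` is a stand-in for a real LLM client:

```python
def readmission_features(patient: dict) -> list[float]:
    """Flatten a patient record into a numeric feature vector for an ML model.
    The fields used here are hypothetical examples, not a clinical standard."""
    return [
        float(patient["age"]),
        float(patient["prior_admissions"]),
        float(len(patient["diagnoses"])),
        1.0 if patient["discharged_to_home"] else 0.0,
    ]

def explain(llm, patient: dict, risk: float) -> str:
    """Pair the model's opaque risk score with an LLM-generated rationale.

    `llm` is any callable mapping a prompt string to text; a real client
    would be plugged in here."""
    prompt = (f"A model predicts a {risk:.0%} 30-day readmission risk for a "
              f"{patient['age']}-year-old with diagnoses {patient['diagnoses']}. "
              "Explain the likely clinical reasons in plain language.")
    return llm(prompt)
```

The RNN remains the source of the prediction; the LLM only narrates plausible reasons, which is why its output should be read as an aid to understanding rather than a clinical justification.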

Mitigating LLM Hallucinations with Knowledge Graphs

All AI systems are susceptible to generating incorrect inferences, with large language models (LLMs) being notably prone to what are commonly referred to as “hallucinations.” A contemporary knowledge graph (KG) can mitigate these inaccuracies by employing its native query language to interact directly with an LLM and store the responses within the graph. Because these responses may contain errors, secondary verification methods can be implemented to identify potentially incorrect inferences. 

For instance, a KG might request an LLM to list the ten most expensive cars along with their prices and then store this information. Following this, a secondary verification could involve querying each of the ten cars through a Google search via the SERP API to verify the prices. A price discrepancy exceeding 20% may indicate a hallucination. This approach still leaves open the question of which source is more accurate.
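The discrepancy check itself is straightforward to express. In the sketch below, `llm_prices` holds what the LLM asserted and `verified_prices` holds what an independent lookup (such as a web-search API) returned; the function name and the exact car entries are illustrative:

```python
def flag_hallucinations(llm_prices: dict, verified_prices: dict,
                        tolerance: float = 0.20) -> list[tuple[str, str]]:
    """Flag LLM-asserted prices that deviate from an independently
    retrieved price by more than `tolerance` (20% by default)."""
    flagged = []
    for car, claimed in llm_prices.items():
        actual = verified_prices.get(car)
        if actual is None:
            flagged.append((car, "no independent source found"))
        elif abs(claimed - actual) / actual > tolerance:
            flagged.append((car, f"claimed {claimed}, independent source says {actual}"))
    return flagged
```

Flagged entries would then be routed to a stricter check or a human reviewer rather than being silently stored in the graph, since, as noted above, the discrepancy alone does not say which source is right.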

Neuro-Symbolic AI Can Improve Trust in AI Applications

This integrative approach represents a paradigm shift in the trajectory of AI by bringing us closer to creating systems that not only exhibit remarkable problem-solving capabilities, but also align more closely with human understanding. By creating a holistic AI framework, we address the limitations of individual methodologies, while introducing a level of interpretability crucial for fostering trust and transparency in AI applications. 

I believe the collaboration between these distinct AI branches can reshape how we perceive and interact with artificial intelligence, bridging the gap between technical sophistication and human comprehension.