Why Does Gartner Consider Graphs a 'Critical Enabler' Right Now?

In this issue of AI Threads, we examine how knowledge graphs complement large language models - and why combining structure with reasoning could be essential to the future of intelligent systems.

What do Large Language Models (LLMs), Gartner’s tech predictions, bioinformatics, and the historic town of Königsberg have in common? The answer is knowledge graphs.

Knowledge graphs - often just called “graphs” - provide a mathematical framework for representing relationships, such as connections between people, companies, objects, or concepts, in a structured, machine-readable way. They help organise and interpret complex, connected data - and they’re becoming increasingly important in how modern AI systems, including LLMs, process and understand information.

Example of a knowledge graph.

In the example above, we see a knowledge graph that maps out entities like John, Sarah, and Graeme - each classified under categories such as Person, Location, Company, or Food. These entities are linked by labelled relationships - born in, manages, friend of - which describe how they are connected.

The graph also highlights how new information can be inferred from existing relationships. For instance, while the connection Graeme works at Attercop isn’t stated directly (it’s marked with a dashed line), we can infer it from the facts that Graeme manages John and John works at Attercop. This is a simple example of how knowledge graphs enable reasoning and help uncover hidden or implied connections within data.
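
To make that concrete, here’s a minimal sketch of the same inference in code. The triples mirror the example graph; the rule itself - that managing someone who works at a company implies working there too - is an assumption adopted purely for illustration.

```python
# A minimal sketch of rule-based inference over a tiny knowledge graph.
# The triples mirror the example graph; the "manages implies works_at"
# rule is an illustrative assumption, not a universal truth.

triples = {
    ("Graeme", "manages", "John"),
    ("John", "works_at", "Attercop"),
}

def infer_works_at(facts):
    """If A manages B and B works at C, infer (A, works_at, C)."""
    inferred = set()
    for a, rel1, b in facts:
        if rel1 != "manages":
            continue
        for b2, rel2, c in facts:
            if b2 == b and rel2 == "works_at":
                inferred.add((a, "works_at", c))
    return inferred - facts  # keep only the genuinely new facts

print(infer_works_at(triples))
# -> {('Graeme', 'works_at', 'Attercop')}
```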

LLMs and Transformers: Graph-Like Structures for Language

While knowledge graphs represent relationships between real-world entities, language models rely on comparable structures to capture how words relate to one another in text.

LLMs are often treated as synonymous with AI itself, but their core strength comes from transformers - the “T” in GPT - which enable models to understand how words relate to one another within a sentence. Transformers use a mechanism called attention to figure out which words should be considered together. This creates an internal map of relationships between words - a structure that is, in many ways, similar to a graph.

For instance, when an LLM reads the sentence “John likes peanut butter,” it identifies “likes” as the relationship connecting two entities - John and peanut butter. Recognising this link helps the model build context and meaning, which it then applies across longer texts and broader topics.

An example of how LLMs learn to identify and attend to important words.
Source: Survey of Neural Text Representation Models

The image above offers a visual example of how this attention mechanism works in practice. Each line represents a connection the model has learned between words in the sentence, with thicker lines indicating stronger relationships. Rather than processing words in isolation or in strict sequence, the model learns which words are most relevant to one another - effectively creating a web of relationships that guides its understanding. This is what allows LLMs to capture context, resolve ambiguity, and generate more coherent responses. In this way, attention forms a graph-like structure within the model - closely mirroring how knowledge graphs organise connections between real-world entities.
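
To make the mechanics less abstract, here’s a toy sketch of scaled dot-product attention - the computation at the heart of the transformer. Everything in it (the four tokens, the 8-dimensional vectors, the random projections) is made up for illustration; in a real model, the query, key, and value projections are learned during training.

```python
import numpy as np

# Toy scaled dot-product attention over the sentence from the text.
# Vectors are random stand-ins; real models learn these projections.
rng = np.random.default_rng(0)
tokens = ["John", "likes", "peanut", "butter"]
d = 8
Q = rng.normal(size=(4, d))  # query vectors, one per token
K = rng.normal(size=(4, d))  # key vectors
V = rng.normal(size=(4, d))  # value vectors

scores = Q @ K.T / np.sqrt(d)                  # token-to-token affinities
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
output = weights @ V                           # context-mixed representations

# Each row of `weights` is one token's edges to every other token --
# in effect, a dense weighted graph over the sentence.
for tok, row in zip(tokens, weights):
    print(tok, np.round(row, 2))
```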

From Königsberg to Knowledge Graphs

It was in Königsberg that Leonhard Euler laid the foundations of graph theory in 1736, while tackling a deceptively simple puzzle: could someone walk through the city crossing each of its seven bridges exactly once? Euler proved it was impossible - and in doing so, he reduced the city to an abstraction of nodes and connections. That same abstraction is what underpins knowledge graphs today.

Unlike LLMs, which learn from data distributions and are prone to hallucinations (generating incorrect or fictional information), knowledge graphs are built on structured, factual data. They provide a more reliable foundation for reasoning. Recent research from Stanford and MIT has shown that combining knowledge graphs with LLMs can lead to significantly better results: fewer hallucinations, clearer explanations, and more robust reasoning. In hybrid systems, graphs effectively act as a grounding layer - anchoring the fluid, probabilistic outputs of LLMs to verifiable information.

Knowledge graphs are not just theoretical tools; they power practical applications. Social media platforms like Pinterest use them for personalised recommendations through systems like InterestTaxonomy - a structured graph that links users, content, and topics based on patterns of engagement and relevance. Biopharma companies leverage them to accelerate the discovery of new medicines by linking genes, diseases, compounds, and clinical data - making it easier to identify promising drug targets or repurpose existing treatments.

Why Knowledge Graphs in the Era of LLMs?

Knowledge graphs excel at enabling relational learning - a form of reasoning that focuses on understanding how different pieces of information are connected, based on logical rules rather than patterns in data. It allows systems to infer new facts by applying structured logic to known relationships, instead of relying on statistical associations alone. For example, if we know that 0 is smaller than 1 and 1 is smaller than 2, we can logically infer that 0 is smaller than 2. This kind of step-by-step deduction is something LLMs often struggle with, as they lack a formal structure for applying consistent logic. One paper likens this to students trying to read a blackboard without glasses - they might pick up some key ideas, but the fine details remain blurry and unreliable.
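
To see the difference in character, the sketch below derives that conclusion mechanically, by computing the transitive closure of a “smaller than” relation - pure rule application over known facts, with no statistics involved.

```python
# A small sketch of relational (rule-based) inference: computing the
# transitive closure of a "smaller than" relation, so 0 < 2 follows
# logically from 0 < 1 and 1 < 2 rather than being guessed.

facts = {(0, 1), (1, 2)}  # known: 0 < 1 and 1 < 2

def transitive_closure(pairs):
    """Repeatedly apply (a < b and b < c) => (a < c) until nothing new."""
    closure = set(pairs)
    while True:
        new = {(a, d) for a, b in closure for c, d in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

print(transitive_closure(facts))
# -> {(0, 1), (1, 2), (0, 2)} -- the fact 0 < 2 was derived, not guessed
```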

By combining knowledge graphs with LLMs, we can ask more complex questions and extract higher-level insights. For example, given (Graeme, friend-of, X) and (X, works-at, Attercop), we can identify all entities (X) who are both friends of Graeme and work at Attercop. This kind of multi-hop reasoning is difficult for LLMs alone, but becomes much more reliable when supported by a structured graph.
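
Here’s what such a query might look like over a toy triple store. Only John’s employment at Attercop comes from the example graph earlier; the friendships and the second company are invented purely to make the query interesting.

```python
# A sketch of the two-hop query from the text: find every X such that
# (Graeme, friend_of, X) and (X, works_at, Attercop).

triples = [
    ("Graeme", "friend_of", "Sarah"),  # assumed friendship, for illustration
    ("Graeme", "friend_of", "John"),   # assumed friendship, for illustration
    ("John", "works_at", "Attercop"),  # from the example graph
    ("Sarah", "works_at", "Initech"),  # hypothetical second company
]

def two_hop(facts, head, rel1, rel2, tail):
    """Find every X with (head, rel1, X) and (X, rel2, tail)."""
    middles = {o for s, p, o in facts if s == head and p == rel1}
    return {s for s, p, o in facts if p == rel2 and o == tail and s in middles}

print(two_hop(triples, "Graeme", "friend_of", "works_at", "Attercop"))
# -> {'John'}
```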

Researchers at Rice University (KnowGPT) also demonstrated that using knowledge graphs to inform prompting can significantly improve the quality of LLM outputs - making them more relevant, consistent, and useful.
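
KnowGPT’s actual pipeline is considerably more sophisticated - it learns which subgraphs to extract and how to package them - but the underlying pattern can be sketched simply: retrieve the triples that mention the question’s entities and prepend them to the prompt as grounded context. The facts and helper function below are hypothetical.

```python
# A highly simplified sketch of knowledge-graph-informed prompting:
# ground the prompt in retrieved triples before asking the question.

triples = [
    ("Graeme", "manages", "John"),          # from the example graph
    ("John", "works_at", "Attercop"),       # from the example graph
    ("Attercop", "based_in", "Newcastle"),  # hypothetical extra fact
]

def build_prompt(question, entities, facts):
    """Prepend any triples mentioning the question's entities as context."""
    relevant = [t for t in facts if t[0] in entities or t[2] in entities]
    context = "\n".join(f"{s} {p.replace('_', ' ')} {o}." for s, p, o in relevant)
    return f"Known facts:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("Where does John's manager work?", {"Graeme", "John"}, triples))
```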

This synergy between LLMs and knowledge graphs shows how relational structure can complement neural models - enhancing reasoning, reducing hallucinations, and helping reveal deeper connections in data.

A comparison of large language models and knowledge graphs.

This balance of strengths is what makes the combination of LLMs and knowledge graphs so powerful. While one brings flexibility and language fluency, the other contributes structure and reliability. The illustration above captures this contrast - showing how each system supports what the other lacks, and how knowledge graphs contribute essential capabilities like grounding, logic, and structured relationships that LLMs alone can’t easily replicate.

The Next Frontier: Where Knowledge Graphs and LLMs Meet

And all of this is why Gartner highlights knowledge graphs - and machine learning on them - as a critical enabler, and one of the next frontiers in AI. As language models continue to evolve, the need for systems that can reason, explain, and reliably connect information is only growing. Knowledge graphs offer exactly that: structure, logic, and grounded understanding.

The connection between neural and symbolic intelligence is still taking shape, but it’s already powering some of the most advanced systems in the world - like AlphaGo, which mastered the game of Go by combining deep learning with structured search, and AlphaFold, which predicts protein structures with remarkable accuracy. Even the algorithms behind real-time video rendering rely on this kind of hybrid thinking. As this convergence continues, systems that blend language, logic, and structure will be key to unlocking the next wave of AI capabilities.

What do you think? Are knowledge graphs the missing piece in making AI more reliable, or just one part of a much bigger puzzle?

We’d love to hear your take - reply with your thoughts, share your perspective, or pass this along to someone who’s interested in topics like these. And if you’d like to get more posts like this straight to your inbox, don’t forget to subscribe.
