Electric utilities are facing numerous converging challenges that complicate the way they manage their infrastructure, run their operations and meet evolving customer needs. The increasing frequency of storms and wildfires is a pressing issue. Aging infrastructure and a graying workforce create additional vulnerabilities. Other challenges include unprecedented increases in demand, supply chain disruptions and changing regulatory pressures.
Against that backdrop, the maturation of AI could not be more timely for the utility industry. AI provides a transformational new tool for responding to the perfect storm of challenges above, particularly when it is combined with location intelligence. Location intelligence refers to the insights derived from the sea of location-based data that every organization collects from its infrastructure, mobile and other devices, and IoT networks. Those insights enable employees to solve complex business problems across every industry, and AI/ML enables organizations to automate the process of obtaining those insights and putting them to work.
With the right AI + location intelligence strategy, utilities can begin solving the most complex problems posed by this perfect storm of challenges. A utility can use AI to manage and maintain its infrastructure in a smarter, more cost-effective and more proactive way. AI can also be deployed to advance decarbonization goals through more effective deployment of renewables, and it has powerful use cases in risk reduction, wildfire mitigation, disaster planning and much more.
Understanding AI Confabulation
To succeed with AI, however, organizations must mitigate a thorny issue that has been grabbing an increasing number of technology headlines: Generative AI’s tendency to make mistakes. The phenomenon is often referred to as “AI Hallucination,” although we prefer the term “AI Confabulation.” These errors occur when AI engines misinterpret commands, data and the context of how data relates to those commands. This behavior is remarkably human: just as people can mishear a request, misinterpret information or fill in the gaps with mistaken assumptions and come to the wrong conclusions, so can AI. There is debate in the technology community about whether these confabulations are simply confusion, false confidence or outright lying.
Regardless, the most important thing for us to know is that AI is fallible, and the errors are frequent. AI confabulation rates (in the absence of effective mitigation) are proving to be worrisomely high. Numerous AI-focused experts and organizations studying the issue regularly report statistics from their tests of the most commonly used generative AI engines. For example, Vectara’s site, which benchmarks the top 25 Large Language Models (LLMs), highlights error rates ranging from 1.3% to 4.4% in its most recent analysis. Past analyses have shown error rates consistently reaching double digits. Those rates are unacceptably high for an industry like utilities, where operational reliability, worker safety and public safety are so critical. It is also important to note that these error rates tend to be even higher when AI is working with highly technical data, which is exactly what AI would be analyzing in the utility industry. The takeaway is clear: utilities planning AI initiatives must have a strategy for mitigating AI confabulation.
Mitigating the Risks: The Importance of Context
One of the best weapons against AI errors is augmenting the information the LLM is working with in ways that provide domain knowledge akin to what an experienced industry professional would have. This domain knowledge provides AI with essential “context,” allowing it to understand queries better, analyze data more accurately, report findings more effectively, and—as a result—catch itself before it makes errors. Just as an experienced professional would be able to apply their expertise to make better decisions and minimize missteps, AI models can be trained in the same way.
The technical term for this approach is Retrieval-Augmented Generation (RAG), which equips LLMs with domain knowledge they would not otherwise have from the public data on which they are built. This helps the LLMs better understand the data and thereby produce better results.
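To make the pattern concrete, here is a minimal sketch in Python of the RAG flow described above: domain documents are retrieved by similarity to the question and injected into the prompt before the LLM responds. The embedding vectors and the document corpus are placeholders for whatever embedding model and document store a utility actually uses; none of the names reference a specific product.

```python
# Minimal RAG sketch (illustrative only). Embeddings for the question and
# for each reference document are assumed to come from an embedding model
# chosen by the utility; here they are simply lists of floats.
from typing import List, Tuple
import math

def cosine_similarity(a: List[float], b: List[float]) -> float:
    # Similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec: List[float],
             corpus: List[Tuple[str, List[float]]],
             k: int = 3) -> List[str]:
    # Return the k domain documents most similar to the question.
    ranked = sorted(corpus, key=lambda doc: cosine_similarity(query_vec, doc[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

def build_grounded_prompt(question: str, context_docs: List[str]) -> str:
    # Prepend the retrieved domain knowledge so the LLM answers from it
    # rather than from whatever it absorbed from public training data.
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return ("Answer using ONLY the utility reference material below. "
            "If the material does not contain the answer, say so.\n"
            f"Reference material:\n{context}\n\nQuestion: {question}")
```

The grounded prompt, rather than the bare question, is what gets sent to the LLM. That retrieved context is what gives the model the "experienced professional" knowledge it would otherwise lack.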
Mitigating the Risks: Improve Data Quality
Improving data quality also plays a crucial role in reducing confabulation. The quality of technical data that utilities work with can vary dramatically. Not all data is AI-ready, and using data lacking in accuracy, timeliness and richness will undermine the accuracy of AI-driven analysis. Having a data strategy that assesses the quality of data, enhances its accuracy and richness, and breaks down barriers to timely access is an important way to reduce AI confabulation.
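As a simple illustration of what assessing AI readiness can look like, the Python sketch below screens asset records for the kinds of gaps that undermine AI-driven analysis: missing attributes, out-of-range coordinates and stale inspection dates. The field names and the one-year freshness threshold are illustrative assumptions, not a prescribed standard.

```python
# Illustrative AI-readiness checks for utility asset records. The record
# layout (asset_id, lat, lon, voltage_class, last_inspected) is assumed
# for this sketch and would be replaced by the utility's own schema.
from datetime import datetime, timedelta

REQUIRED_FIELDS = ("asset_id", "lat", "lon", "voltage_class", "last_inspected")
MAX_AGE = timedelta(days=365)  # assumed freshness threshold

def assess_record(record: dict) -> list:
    # Return a list of data-quality issues; an empty list means the record
    # passes these basic accuracy, richness and timeliness checks.
    issues = [f"missing {f}" for f in REQUIRED_FIELDS if record.get(f) in (None, "")]
    lat, lon = record.get("lat"), record.get("lon")
    if lat is not None and lon is not None and not (-90 <= lat <= 90 and -180 <= lon <= 180):
        issues.append("coordinates out of range")
    last = record.get("last_inspected")
    if isinstance(last, datetime) and datetime.now() - last > MAX_AGE:
        issues.append("record is stale")
    return issues
```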
Mitigating the Risks: Enhancing Queries
Another important approach for mitigating confabulation is enhancing the precision of AI queries for location intelligence use cases. Given how complex location-based data is, the accuracy of queries can make or break the success of a given task. To make these queries as precise as possible, we recommend having an engineer serve as a second set of eyes, ensuring that natural-language requests are translated into SQL instructions that minimize the chance of confabulation. This is akin to having your best geospatial/location intelligence professional working alongside your team.
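The sketch below illustrates a few of the checks such a review can codify before a generated query is ever executed, assuming a read-only, allow-listed approach; the table names are hypothetical and would be replaced by the utility's own geospatial schema.

```python
# Illustrative guardrails for LLM-generated SQL. The allow-list of tables
# is an assumption for this sketch, not an actual schema.
import re

ALLOWED_TABLES = {"poles", "transformers", "circuits", "outage_events"}

def review_generated_sql(sql: str) -> str:
    # Reject anything that is not a single, read-only query against known
    # tables; return the statement only if it passes every check.
    statement = sql.strip().rstrip(";")
    if not statement.lower().startswith("select"):
        raise ValueError("Only SELECT statements are permitted")
    if ";" in statement:
        raise ValueError("Multiple statements are not permitted")
    referenced = {t.lower() for t in
                  re.findall(r"(?:from|join)\s+([a-z_]+)", statement, flags=re.IGNORECASE)}
    unknown = referenced - ALLOWED_TABLES
    if unknown:
        raise ValueError(f"Query references unknown tables: {sorted(unknown)}")
    return statement
```

Automated checks like these do not replace the engineer's review, but they catch the most common classes of errors before a confabulated query ever reaches the database.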
Mitigating the Risks: Utilizing Knowledge Graphs
The final best practice for mitigating confabulation involves a “Measure, Teach and Repeat” methodology using a knowledge graph. LLMs are designed to learn over time, and that is one of our best weapons against confabulation. Using a knowledge graph, organizations can perform fact verification to identify where mistakes are happening and where fine-tuning is needed.
Simply put, it uses a model to check and teach the model — accelerating the learning curve for LLMs in ways that reduce errors. This process is akin to providing 24/7 tutoring to your LLM from your best geospatial/location intelligence professional. It is important to note that this methodology has a distinct advantage over other approaches: using a knowledge graph enables the successful capture of important context about relationships between entities that are vital to producing accurate results. Other methodologies may not successfully capture that contextual information, impacting efforts to reduce confabulation in the process.
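A simplified Python sketch of the “Measure” step is shown below: claims extracted from an LLM answer are represented as subject-relation-object triples and checked against the knowledge graph, and any unsupported claims are flagged so they can be fed back as corrective examples in the “Teach” step. The asset names and relationships are invented for illustration.

```python
# Illustrative fact verification against a knowledge graph. The triples
# below are invented; a production graph would be built from the utility's
# GIS and asset management systems.
from typing import Set, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

KNOWLEDGE_GRAPH: Set[Triple] = {
    ("transformer_T14", "fed_by", "circuit_C7"),
    ("circuit_C7", "served_from", "substation_S2"),
    ("pole_P1031", "carries", "circuit_C7"),
}

def verify_claims(claims: Set[Triple]) -> dict:
    # Measure: split claims into those the graph supports and those it does
    # not; the unsupported ones become corrective examples for the Teach step.
    return {
        "supported": claims & KNOWLEDGE_GRAPH,
        "needs_correction": claims - KNOWLEDGE_GRAPH,
    }

# Example: one accurate claim and one confabulated claim from an LLM answer.
result = verify_claims({
    ("transformer_T14", "fed_by", "circuit_C7"),  # matches the graph
    ("transformer_T14", "fed_by", "circuit_C9"),  # confabulated relationship
})
```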
Utilities need to harness the power of AI to address the urgent challenges they face. Incorporating these best practices into your AI strategy will be instrumental in reducing the risk of confabulation and enabling your AI initiatives to be successful.
—Amit Sinha is the Lead Artificial Intelligence and Machine Learning Engineer at TRC, a global professional services firm providing integrated strategy, consulting, engineering and applied technologies in support of the energy transition. Todd Slind is the Vice President of Technology at TRC.