With decades of experience in energy management and electricity delivery, Christopher Hailstone is an invaluable guide for navigating the complex intersection of technology and utility operations. He joins us today to discuss how electric and water utilities can move beyond the hype of AI and build a practical, resilient foundation for the future. We’ll explore the critical importance of a trusted data foundation, the tangible steps for unifying siloed information, and the measurable savings that come from a truly AI-ready strategy.
Utilities are facing immense pressure from aging infrastructure, extreme weather, and rising customer expectations. How does AI specifically address these challenges, and what are the biggest operational risks for organizations that fail to build an AI-ready foundation?
That’s the central question on everyone’s mind. The pressures are immense, and the old ways of operating just can’t keep up. AI is the key to making smarter, faster decisions when everything is on the line. Think about a line transformer under stress during a heatwave. In the past, you’d wait for it to fail. With an AI model fed by trusted historical and real-time data, you can spot those stress patterns, predict a potential failure, and dispatch a crew for proactive maintenance. This prevents an outage, protects the grid, and keeps customers in service. The consequences of not building this AI-ready foundation are catastrophic. Without it, you’re flying blind. You’re left reacting to failures, facing mounting reliability pressures, and struggling to deliver the firm capacity that regulators and customers demand. In an industry where safety and service continuity are non-negotiable, being unprepared is a corporate strategy you simply can’t afford.
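To make that transformer example concrete, here is a minimal sketch of what the stress-pattern detection could look like. The schema (oil_temp_c, load_pct), the choice of an isolation-forest model, and the alert threshold are all illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: flag transformers whose recent readings deviate from
# their own historical baseline. Column names and thresholds are
# hypothetical; a production system would use far richer features.
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_stressed_transformers(readings: pd.DataFrame) -> list[dict]:
    """readings: one row per (transformer_id, timestamp), with numeric
    'oil_temp_c' and 'load_pct' columns drawn from trusted historical
    and real-time data (assumed schema)."""
    alerts = []
    for xfmr_id, history in readings.groupby("transformer_id"):
        # Learn each unit's own normal operating envelope from history.
        model = IsolationForest(contamination=0.01, random_state=0)
        features = history[["oil_temp_c", "load_pct"]]
        flags = model.fit_predict(features) == -1  # -1 marks an outlier
        # Escalate only when anomalies cluster in the most recent
        # window, e.g. the last 24 hourly readings during a heatwave.
        recent_rate = flags[-24:].mean()
        if recent_rate > 0.25:
            alerts.append({"transformer_id": xfmr_id,
                           "recent_anomaly_rate": round(float(recent_rate), 2),
                           "action": "dispatch_proactive_maintenance"})
    return alerts
```

The per-unit baseline is the point of the sketch: the model compares each transformer against its own history rather than a fleet-wide average, which is what turns raw telemetry into a proactive-maintenance signal.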
Many executives believe AI’s full potential is only realized when built on a foundation of trust. In a utility context, what does a “trusted data foundation” actually look like, and how does it prevent AI from producing unreliable or non-compliant outcomes?
It’s a sentiment I hear constantly, and the research backs it up; I believe Accenture found that nearly three-quarters of utility executives agree on this point. A “trusted data foundation” isn’t an abstract concept—it’s a tangible framework where your enterprise information is unified, governed, and activated. In practice, this means an AI model trying to optimize crew dispatch has access to the complete, accurate history of an asset, including its engineering drawings, maintenance records, and recent SCADA logs. It operates with full context and auditability. Without this foundation, the AI is working with fragments. It might see a sensor reading but miss a recent work order that explains it, leading to a flawed recommendation. This is how you get unreliable or non-compliant outcomes. Trust means knowing your AI’s conclusions are based on a complete, verifiable picture of reality, which is absolutely essential for critical infrastructure.
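As a thumbnail of what “full context and auditability” can mean in practice, here is a hypothetical shape for the unified asset record an AI service might be handed. Every field name here is an assumption chosen for illustration.

```python
# Hypothetical shape of a unified, governed asset record -- the
# complete context an AI model consumes. All field names illustrative.
from dataclasses import dataclass, field

@dataclass
class DocumentRef:
    doc_id: str
    kind: str           # e.g. "engineering_drawing", "work_order"
    source_system: str  # lineage: where this record originated
    version: int        # lineage: which revision the model actually saw

@dataclass
class AssetRecord:
    asset_id: str
    drawings: list[DocumentRef] = field(default_factory=list)
    maintenance_history: list[DocumentRef] = field(default_factory=list)
    recent_work_orders: list[DocumentRef] = field(default_factory=list)
    scada_log_uri: str = ""  # pointer into a governed time-series store

    def has_full_context(self) -> bool:
        # A recommendation is only trustworthy (and auditable) if every
        # input the model saw is linked and versioned.
        return bool(self.drawings and self.maintenance_history
                    and self.scada_log_uri)
```

The versioned references are what make an AI conclusion verifiable after the fact: you can reconstruct exactly which drawing revision, work order, and telemetry window the model was looking at.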
To create a single source of truth, utilities must connect siloed data like SCADA logs, engineering files, and customer records. What are the first practical steps a utility should take to begin this “discovery” process, and what metrics can they use to measure their progress?
The first step is always the hardest, as you’re staring at a mountain of data spread across decades-old systems. The key is to start with “discovery.” This isn’t just about inventorying databases; it’s about surfacing the information hiding in those silos and, crucially, establishing its lineage. You need to map where data came from, how it’s been modified, and what it relates to. You begin by identifying a critical business process, like asset management, and tracing all the information that supports it, from the initial engineering files to the latest work orders. As for measuring progress, you can track the percentage of critical assets that have a complete, linked data history. Another great metric is time-to-insight: how long does it take an engineer to pull up the complete lifetime record of a specific substation component? When you see that time drop from days to minutes, you know you’re on the right path.
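A back-of-the-envelope version of that first metric might look like the following. The record fields (critical, drawings, maintenance, telemetry) are assumptions made for illustration.

```python
# Sketch of the coverage metric: the share of critical assets whose
# drawings, maintenance records, and telemetry all link to one record.
def linked_history_coverage(assets: list[dict]) -> float:
    critical = [a for a in assets if a.get("critical")]
    if not critical:
        return 0.0
    complete = [a for a in critical
                if a.get("drawings") and a.get("maintenance")
                and a.get("telemetry")]
    return len(complete) / len(critical)

# Example inventory: one fully linked critical asset out of two.
fleet = [
    {"id": "XFMR-001", "critical": True, "drawings": ["D-100"],
     "maintenance": ["WO-17"], "telemetry": "scada://xfmr-001"},
    {"id": "XFMR-002", "critical": True, "drawings": [],
     "maintenance": ["WO-30"], "telemetry": "scada://xfmr-002"},
]
print(f"Linked-history coverage: {linked_history_coverage(fleet):.0%}")
```

Time-to-insight can be instrumented the same way: log the elapsed time of every full-history query and watch the median fall from days toward minutes.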
The lifecycle of utility information spans from engineering drawings to maintenance records and financial controls. How does unifying the management of this data improve day-to-day operations for field crews, and what kind of traceability does this provide for regulators?
The impact on field crews is immediate and profound. Imagine a crew arriving at a site for an emergency repair. Instead of relying on outdated paper schematics or calling back to the office to confirm details, they can pull up the complete asset lifecycle on a tablet. They see the original engineering drawings, every piece of maintenance ever performed, and any related supply chain information for replacement parts. This unified view reduces rework, prevents errors, and, most importantly, improves safety. For regulators, this creates an unbreakable chain of evidence. When they ask why a certain decision was made or need to audit a maintenance program, you can provide a complete, auditable history that connects the physical asset in the field to the business processes and financial controls that support it. This level of traceability is the gold standard for compliance.
System rationalization—eliminating redundant tools and standardizing on integrated platforms—is presented as a key step for savings. Could you walk us through how this process simplifies governance and ultimately accelerates the deployment of new AI tools?
System rationalization feels like a massive undertaking, but the payoff is equally massive. Most utilities have accumulated a tangled web of redundant applications over the years, each with its own costs, security protocols, and data silos. The process starts with an audit: identifying which tools serve the same function and standardizing on a single, integrated platform for things like content management or data analytics. By eliminating these redundant systems, you immediately save on licensing and maintenance costs. But the real magic is in governance. It is infinitely easier to secure, manage, and apply consistent data policies to one platform than to twenty. This simplification directly accelerates AI deployment. When your data is already on a standardized, observable platform, you can plug in new AI services and models without having to build custom integrations for every single data source. You’re building on a clean foundation, not a messy one.
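As a rough illustration of that audit step, one could group an application inventory by business function and flag the overlap. The inventory entries and costs below are entirely invented for the example.

```python
# First pass at a rationalization audit: group the application
# inventory by business function and surface overlapping tools.
from collections import defaultdict

inventory = [
    {"app": "DocStore A", "function": "content_management", "annual_cost": 120_000},
    {"app": "DocStore B", "function": "content_management", "annual_cost": 95_000},
    {"app": "GridViz",    "function": "analytics",          "annual_cost": 80_000},
]

by_function = defaultdict(list)
for app in inventory:
    by_function[app["function"]].append(app)

for function, apps in by_function.items():
    if len(apps) > 1:
        names = [a["app"] for a in apps]
        combined = sum(a["annual_cost"] for a in apps)
        # Standardizing on one platform per function turns much of this
        # combined spend into licensing and maintenance savings.
        print(f"{function}: {len(apps)} overlapping tools {names}, "
              f"${combined:,}/yr of spend to rationalize")
```

The output of an audit like this is a shortlist, not a decision; which platform survives in each function is where governance, security, and integration requirements come in.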
Automating tasks like outage triage and meter exceptions can free up teams to focus on safety and innovation. Can you share a step-by-step example of how automating one of these processes leads to measurable improvements in efficiency and customer experience?
Let’s take outage triage, a process that is often manual and high-stress. In a traditional scenario, calls flood the control center, and an operator has to manually cross-reference customer reports with system alerts to pinpoint the likely source of the fault. It’s slow and prone to error. Now, let’s automate it. Step one: An AI system ingests thousands of data points simultaneously—smart meter alerts, customer calls, weather data, and SCADA logs. Step two: The AI analyzes these patterns in real-time to triangulate the most probable fault location, often down to a specific circuit or transformer. Step three: It automatically generates a work order with all the relevant asset history and dispatches the nearest available crew. The improvement is dramatic. You’ve cut diagnosis time from minutes or hours to seconds. This leads to faster restoration times, which is a huge boost for customer experience. At the same time, your highly skilled control room operators are freed from that chaotic triage work and can now focus on more complex grid stability and safety challenges.
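Here is a schematic version of those three steps. The topology lookup, the voting heuristic, and the work-order payload are simplified stand-ins for what a real outage-management system would do.

```python
# Schematic version of the three triage steps described above.
from collections import Counter

def triage_outage(meter_alerts, customer_calls, topology):
    """topology maps each premise to its upstream transformer,
    courtesy of the unified data foundation (assumed available)."""
    # Step 1: ingest signals from multiple sources into one stream.
    affected = list(meter_alerts) + list(customer_calls)

    # Step 2: triangulate. The asset upstream of the most affected
    # premises is the most probable fault location.
    votes = Counter(topology[p] for p in affected if p in topology)
    if not votes:
        return None
    suspect, signal_count = votes.most_common(1)[0]

    # Step 3: generate a work order enriched with asset history so the
    # nearest available crew arrives with full context.
    return {"suspected_asset": suspect,
            "supporting_signals": signal_count,
            "attachments": ["asset_history", "circuit_map"],
            "action": "dispatch_nearest_crew"}

# Two smart-meter alerts and one customer call all point at T-17.
print(triage_outage(["m1", "m2"], ["c9"],
                    {"m1": "T-17", "m2": "T-17", "c9": "T-17"}))
```

A production system would weight signal types differently and walk the full circuit hierarchy rather than a flat premise-to-transformer map, but the shape of the pipeline, ingest, triangulate, dispatch, is the same.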
What is your forecast for the adoption of AI in the utilities sector over the next five years?
Over the next five years, I predict AI adoption will shift from isolated pilot projects to a fundamental component of core utility operations. We will see utilities move beyond predictive maintenance to full-scale AI-driven grid optimization, energy forecasting, and real-time asset management. The biggest change, however, will be cultural. Utilities that successfully build that trusted data foundation now will create a virtuous cycle: better data will lead to more reliable AI, which will drive more trust and investment in the technology. Those who wait will face an ever-widening operational gap. The technology is here, and the imperative is clear. The organizations that embrace this AI-ready information strategy will become the resilient, efficient, and customer-centric utilities of the future.
