Why AI Pilots Stall Without Operating Discipline

Artificial intelligence (AI) has moved quickly from the margins to the mainstream in electric utilities. Control room vendors promote AI-driven insights, asset platforms promise predictive intelligence, and most major utilities are running at least one pilot or proof of concept. More than 80% of North American utilities already report using AI in some form.

Adoption has been widespread, but durable results have not followed. Early pilots stall, momentum fades, and ROI remains difficult to demonstrate within the reliability and financial frameworks to which utilities are accountable.

In a regulated environment defined by safety, reliability, and capital discipline, AI fails when it is treated as a side project rather than being managed with the same rigor as day-to-day operations.

Hidden Risk of the Pilot Mindset

The pilot mindset carries real risk in regulated utility environments. Reliability and capital discipline matter more than speed, and initiatives that are not designed to scale soon lose credibility. Pilots that linger without a clear path to operational use do more than stall progress; they create skepticism among leaders, regulators, and frontline teams.

Several failure modes show up repeatedly:

  • AI isolated from capital planning and rate cases. When initiatives are funded as discretionary innovation rather than embedded in approved investment plans, they struggle to survive budget cycles and regulatory scrutiny.
  • Unclear operational ownership. AI often sits with IT or innovation teams without direct accountability to leaders responsible for reliability and performance, leaving initiatives disconnected from the outcomes utilities are measured on.
  • Activity mistaken for impact. Progress is measured by models built, data sets explored, or pilots launched, rather than by measurable improvements in the System Average Interruption Duration Index (SAIDI), the System Average Interruption Frequency Index (SAIFI), or operating and maintenance efficiency (see the sketch following this list).
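
For readers less familiar with these indices, SAIDI and SAIFI are the standard IEEE 1366 reliability measures, and they are straightforward to compute from outage records. The Python sketch below is illustrative only; the record format and field names are assumptions, not drawn from any particular outage management system.

```python
# Minimal sketch of the standard IEEE 1366 reliability indices.
# The outage record format here is an assumption for illustration,
# not the schema of any particular outage management system.

def reliability_indices(outages, customers_served):
    """outages: list of (customers_interrupted, duration_minutes) tuples."""
    total_interruptions = sum(n for n, _ in outages)
    customer_minutes = sum(n * minutes for n, minutes in outages)
    saifi = total_interruptions / customers_served  # interruptions per customer served
    saidi = customer_minutes / customers_served     # outage minutes per customer served
    return saidi, saifi

# Example: two outages on a system serving 50,000 customers.
saidi, saifi = reliability_indices([(1200, 90), (300, 45)], customers_served=50_000)
print(f"SAIDI = {saidi:.2f} min/customer, SAIFI = {saifi:.3f} interruptions/customer")
```

An AI initiative that cannot show movement in numbers like these has not yet demonstrated operational impact.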

These patterns conflict directly with the regulatory compact under which utilities operate. Utilities earn trust and recover investment by demonstrating prudence, discipline, and measurable performance. When AI is treated as an experiment instead of an operational capability, it falls outside the frameworks utilities rely on to justify investment and demonstrate value.

From Pilots to Operating Capability

Treating AI as an operating capability means moving away from open-ended experimentation and toward disciplined execution. A sustained operational capability is planned and funded through normal cycles, governed with clear ownership and auditability, and embedded directly in trusted operational workflows.

The difference shows up quickly in practice. In vegetation management, a pilot might analyze imagery for a subset of circuits and generate insights that sit outside the work management process. An operational capability prioritizes risk across the full system, feeds directly into trim cycles and crew scheduling, and produces results that can be defended in a rate case. In outage response, a pilot may predict restoration times during storms. A sustained capability integrates those predictions into dispatch, communications, and post-event reporting, shaping decisions before, during, and after an event.
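
To make the vegetation management contrast concrete, a system-wide capability might rank every circuit by a composite risk score and hand the ranking directly to trim scheduling. The sketch below is hypothetical: the fields, weights, and five-year cycle are illustrative assumptions, not any utility's or vendor's actual model.

```python
# Hypothetical sketch: ranking circuits for trim-cycle scheduling by a
# composite vegetation-risk score. All fields, weights, and the assumed
# five-year trim cycle are illustrative, not an actual production model.
from dataclasses import dataclass

@dataclass
class Circuit:
    circuit_id: str
    encroachment: float       # 0-1 severity, e.g., derived from imagery analysis
    customers_served: int
    years_since_trim: float

def risk_score(c: Circuit, max_customers: int) -> float:
    # Weighted blend of vegetation encroachment, customer exposure,
    # and how far the circuit is into its trim cycle.
    exposure = c.customers_served / max_customers
    cycle_age = min(c.years_since_trim / 5.0, 1.0)
    return 0.5 * c.encroachment + 0.3 * exposure + 0.2 * cycle_age

def prioritize(circuits: list[Circuit]) -> list[Circuit]:
    max_customers = max(c.customers_served for c in circuits)
    return sorted(circuits, key=lambda c: risk_score(c, max_customers), reverse=True)
```

The point is not the particular weights but where the output lands: a ranked list that feeds crew schedules and leaves an auditable trail, rather than a report that sits outside the work management process.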

What Operationalizing AI Changes

Once AI is operationalized, it becomes easier to defend and easier to manage. Investments fit within existing planning and oversight processes, which gives leaders a clear basis for regulatory discussion. AI no longer sits outside the system of record; it operates inside the same structures utilities use to justify spend and manage performance.

Day-to-day behavior changes as well. Teams stop arguing about potential value and focus on execution. Performance is monitored, gaps are addressed, and capabilities that do not deliver are corrected or retired. That pressure exposes weaknesses that pilots often mask. Data quality improves because bad data shows up as operational risk. Governance tightens because accountability is explicit. Workforce readiness advances because operators, supervisors, and planners are expected to use these tools in real decisions, not as optional add-ons.

This approach lowers risk rather than adding to it. Industrialized AI is more predictable and easier to monitor, and it allows faster intervention when conditions change. Controls are clear, oversight is built in, and decision authority remains aligned with reliability responsibilities.

Most important, the yardstick stays consistent. AI is evaluated by its effect on reliability and affordability. When managed as infrastructure, it strengthens service and cost discipline instead of competing for attention as a standalone innovation.

Executive Decisions That Shape AI Outcomes

AI programs stall or scale based on a small set of executive signals that appear early and consistently:

  • Whether AI shows up in capital planning. When AI is discussed alongside grid hardening, system modernization, and reliability investments, it gains staying power. When it sits outside those conversations, it remains discretionary and easy to defer.
  • What leaders ask for in reviews. Executives who press for outcome-based measures, reliability impact, risk reduction, and cost performance force teams to move beyond experimentation. When updates focus on activity or future potential, accountability weakens.
  • How governance is applied. Utilities that define approval thresholds, human sign-off points, and intervention authority before deployment move faster during audits, incidents, and storms. Where governance is reactive, uncertainty surfaces at exactly the wrong time (see the sketch after this list).
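
That last point lends itself to a concrete illustration: approval thresholds and sign-off points can be written down as explicit rules before deployment. The sketch below is hypothetical; the thresholds, routing outcomes, and field names are assumptions for illustration only.

```python
# Hypothetical sketch: a pre-deployment governance gate that routes AI
# recommendations by confidence and potential impact. All thresholds and
# routing outcomes are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float         # model confidence, 0-1
    customers_affected: int

AUTO_APPROVE_CONFIDENCE = 0.95   # below this, an operator signs off
HIGH_IMPACT_CUSTOMERS = 10_000   # at or above this, a supervisor approves regardless

def route(rec: Recommendation) -> str:
    if rec.customers_affected >= HIGH_IMPACT_CUSTOMERS:
        return "supervisor_signoff"   # intervention authority is explicit
    if rec.confidence < AUTO_APPROVE_CONFIDENCE:
        return "operator_review"      # defined human sign-off point
    return "auto_apply"               # still logged for audit

print(route(Recommendation("reroute_feeder", confidence=0.97, customers_affected=2_400)))
# -> auto_apply
```

Because the rules exist before the storm, an auditor or an incident commander can see exactly who had authority to intervene and when.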

These signals shape behavior long before formal policies or roadmaps take hold. Utilities that scale AI do so because leaders make expectations clear through the decisions they prioritize and the metrics they review.

Choosing Where to Start

Value does not come from launching more AI initiatives, but from choosing a small number of operational decisions where AI can materially change outcomes and committing to them.

The most effective starting points sit close to the core of utility performance. High-volume workflows tied to reliability, risk exposure, or operating cost provide natural feedback loops and clear evidence of value. These efforts force alignment across data, governance, and operations early, exposing gaps that matter rather than ones that are merely inconvenient.

Structured guidance helps leaders make these choices deliberately. It reduces the risk of chasing well-intentioned but low-impact use cases and prevents capital from being spread too thin across disconnected efforts.

The Leadership Test Ahead

AI now sits at a decision point for electric utilities. The technology is present, pilots are common, and expectations are rising. What remains unresolved is how firmly AI is anchored to the operating responsibilities utilities already carry.

Utilities that move forward do so by applying familiar discipline to a new capability. They decide where AI must perform, what outcomes it is expected to influence, and how results will be reviewed over time. That clarity reduces ambiguity for teams and makes tradeoffs easier to manage. It also creates a clear line between efforts that deserve continued investment and those that do not.

AI earns its place through measurable impact on reliability, risk, and cost. Utilities that succeed treat AI as part of grid operations, with outcomes that reinforce affordability and public trust over time.

Travis Jones is COO and AI Transformation Leader at Logic20/20, and the author of AI Playbook for Utility Leaders: Managing Risk, Powering Reliability.