
The POWER Interview: Supporting Real-Time Visibility Along the Power Grid

The power grid is stressed. Aging infrastructure is a problem, exacerbated by extreme weather events that increasingly threaten reliability and result in outages.

The integration of more renewable energy resources, with stochastic output, is another challenge. Then there are threats both cyber and physical, heightening the need for sophisticated visibility and control all along the grid.

Real-time monitoring of grid conditions is more valuable than ever, and new technologies are being introduced to help utilities and grid operators stay aware of electricity supply and demand at all times.

Astrid Atkinson

Astrid Atkinson is CEO and co-founder of Camus Energy, which provides a grid-aware DER “Orchestration Platform” for utilities leading the clean power transition. Camus is among the companies using software engineering to address climate change by supporting the adoption of renewable energy to reduce carbon emissions from the electricity sector.

Prior to founding Camus, Atkinson was considered an early leader in distributed systems reliability as Senior Director of Software Engineering at Google, where she led development of the global web-serving platform and framework that powers all of Google’s major product lines.

Atkinson serves on the board of directors for the Washington, D.C.-based GridWise Alliance, a group of electricity industry stakeholders “focused on accelerating innovation that delivers a more secure, reliable, resilient, and affordable grid to support decarbonization of the U.S. economy.” Atkinson recently provided POWER with insight into her company’s work, and the importance of enhancing the visibility of the power grid for those involved in the electricity sector.

POWER: Could you talk about your background and how you became involved in the software side of the energy industry?

Atkinson: My background prior to co-founding Camus was at Google, where I was part of the team that was responsible for developing Google’s approach to building highly reliable, large-scale software systems. My area of emphasis was on developing reliability and operations approaches for large-scale distributed systems, and several of my colleagues pioneered this work in the computing field.

A lot of the work that our team did at Google was focused on transitioning from a centrally managed, high-reliability model with data centers that were connected by reliable networks to one where you’re dealing with lots of small computing resources that are connected by unreliable networks. Instead of building a reliable network of robust servers, it required building system architectures that are inherently adaptable and flexible and can recover from failure.

There are a lot of parallels with what’s happening in the energy industry. One big one is the growing importance of real-time monitoring: bringing in data from billions of participants in close to real time to understand the state of the system at all points, at all times. There’s also a set of software-based load-balancing technologies that spreads work across millions of computers. That work has direct relevance to the orchestration of millions of energy resources across a distributed grid.

My co-founders and I were really interested in translating our experience into working with grid operators to support the energy transition—and ultimately those parallels were a big driver for founding the company.

POWER: Why do utilities need system-wide visibility of their operations? Has this become more important with the integration of distributed energy resources (DERs) to the power grid?

Atkinson: It’s difficult to manage a system you can’t see. Traditionally, grid operators have been able to handle changes to infrastructure when things are changing one at a time or relatively slowly. With lots of careful forward engineering, operators can conduct an engineering study for the new configuration and be pretty comfortable in understanding the possible operational impacts of the additional assets or infrastructure.

This becomes operationally impossible if you have dozens or hundreds of distributed energy resources being added all over the place, including some assets, like EV (electric vehicle) chargers, that you can’t do an engineering study on. The other challenge is that new assets are constantly changing or behaving in different ways. Operational conditions can change and interact in ways that were previously undetected or unknown. Observing the behavior of the system in the field means that if something unexpected happens, you’ll be able to see it. Operators don’t have to predict it to know that it’s there. They can have a lot of confidence that if a change is made and it causes an issue, they can see that issue right away and move to address it. That’s impossible if you don’t have real-time visibility.

Utilities have made a lot of really great investments in grid visibility across the last 10 to 20 years, but those capabilities only go so far. Most utilities have pretty good instrumentation in their data systems for equipment at the substation and feeder level, and often at points downline like reclosers. Systems that deal with SCADA (supervisory control and data acquisition) data, like an ADMS (advanced distribution management system), however, can only see as far as the installed grid infrastructure. Seeing what’s happening on the customer side, at both the meter and individual devices like EV chargers or batteries, requires bringing meter data and customer device data together with the system data that you get from SCADA. That’s where I think software and data investments can build on that prior instrumentation investment and really get more value out of it, delivering the real-time, or close to real-time, visibility initially intended from those rollouts.

POWER: How will increased demand for power (such as from EVs) affect utility operations? What should utilities do to account for these impacts?

Atkinson: Load growth impacts utilities in a couple of different ways. It’s important to note that most utilities haven’t dealt with a lot of load growth across the past 10 to 20 years. It’s been normal to see load growth in the 0.5% to 1% range, or even load losses. Serving growing loads requires building out more distribution infrastructure in many cases, and from a utility perspective, that’s expected. But when load growth starts to happen faster than the five-year distribution resource planning timeline, or it starts to show up in unexpected load patterns, addressing that problem by rapidly building out more grid capacity becomes unmanageable and potentially cost-prohibitive.

The first impact is simply the rising cost of addressing load growth. The second is that new loads are showing up in ways that are meaningfully different from how loads have historically appeared on the grid. For example, a 19-kW EV charger draws three times the base load of the house where it’s parked, and it only shows up for a couple of hours a day. If you have two EVs parked together and charging on their own chargers, that’s six times the normal load from a single location while the EVs charge. Addressing those changes in load patterns is a priority, and that’s where grid-wide visibility matters a lot. Grid owners have to be able to see where problems are showing up in order to do something about them.
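
To put rough numbers on that example, here is a quick Python sketch. The household base load is an assumption chosen to match the “three times” figure above, not utility data.

# Illustrative only: an assumed household base load vs. a 19.2-kW Level 2 charger.
house_base_kw = 6.3                       # assumed average household base load
charger_kw = 19.2                         # 80-A, 240-V Level 2 charger

one_ev = charger_kw / house_base_kw       # ~3x the house's base load
two_evs = 2 * charger_kw / house_base_kw  # ~6x from a single location
print(f"{one_ev:.1f}x with one EV, {two_evs:.1f}x with two")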

POWER: How does software enable utilities and/or grid operators to optimize their systems?

Atkinson: There are a number of ways software can be helpful for utilities and grid operators. As mentioned, one is providing grid-wide visibility. For utilities, being able to pick up data from connected assets and use it as the foundation for advanced analytics approaches is indispensable for filling in the gaps where there isn’t instrumentation. Software also provides ways to get more out of the instrumentation that is already deployed. An example is using machine learning-driven lightweight disaggregation analysis to find EVs based on particular electricity usage patterns in meter data.
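
As a rough illustration of that idea, the sketch below uses a rule-based heuristic as a stand-in for the machine-learning disaggregation she describes; the thresholds and the 15-minute data shape are assumptions.

import numpy as np

def flag_likely_ev(load_kw, step_kw=6.0, min_intervals=4):
    """Heuristic EV detection in 15-minute interval meter data.

    Flags an abrupt, sustained step increase consistent with Level 2
    charging. Thresholds are illustrative assumptions, not tuned values.
    """
    jumps = np.where(np.diff(load_kw) >= step_kw)[0]
    for start in jumps:
        elevated = load_kw[start + 1:] >= load_kw[start] + step_kw
        run = 0
        for still_high in elevated:    # count how long the high draw persists
            if not still_high:
                break
            run += 1
        if run >= min_intervals:       # e.g., at least an hour of charging
            return True
    return False

day = np.ones(96)           # flat 1-kW base load across 96 intervals
day[72:80] += 7.0           # a 7-kW charger running from 6 p.m. to 8 p.m.
print(flag_likely_ev(day))  # True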

A second way in which software can be especially helpful for utilities is managing the impact of changing load patterns. Take the EV usage spikes I mentioned earlier. With software, you can ask those EV chargers to behave differently. I think the most widespread approach to managing EV impacts is time-of-use (TOU) rates. TOU rates are really effective at addressing one negative impact from EVs: that EV owners tend to plug their vehicles in at times when generation is expensive and transmission constrained. While TOU rates can be effective at shifting loads to periods when generation is cheap and transmission available, they exacerbate other problems.

When a utility sets a time-of-use rate to encourage EV owners to charge between 9 p.m. and 2 a.m., they’ve just told everybody to start charging exactly at 9 p.m.—meaning that all of those loads spike at the same time. And anyone who has spent time in planning knows that it’s coincident usage that drives network capacity needs. So the utility has suddenly created a new problem: a massive demand spike on the distribution grid. More advanced approaches that provide incentives for drivers to spread out their charging in a coordinated, managed way can address that. But actively managed charging needs to be put in place as people are adopting EVs, rather than waiting for those load patterns to show up and start to cause really serious localized problems.
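
The coincidence effect is easy to see in a toy simulation. The fleet size, charger rating, and charge duration below are invented numbers, not from the interview.

import random

N_EVS, CHARGER_KW, CHARGE_HOURS = 200, 7.2, 2   # assumed fleet and charger values

def peak_kw(start_hours):
    """Aggregate peak demand (kW) for a set of EV charge-start hours."""
    load = [0.0] * 24
    for start in start_hours:
        for h in range(start, start + CHARGE_HOURS):
            load[h % 24] += CHARGER_KW
    return max(load)

# Flat TOU rate: everyone starts the moment the cheap window opens at 9 p.m.
tou_peak = peak_kw([21] * N_EVS)                # 200 * 7.2 = 1,440-kW spike

# Managed charging: start times staggered through the 9 p.m.-2 a.m. window.
random.seed(0)
managed_peak = peak_kw([random.randrange(21, 25) for _ in range(N_EVS)])

print(f"TOU peak: {tou_peak:.0f} kW, managed peak: {managed_peak:.0f} kW")
# The staggered fleet's peak is roughly half the TOU spike.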

POWER: How are utilities and grid operators using software to dispatch instructions to aggregators and resource owners?

Atkinson: Software can help support both direct control and indirect or aggregator-type control models. Practically, I think it’s going to be difficult for utilities to get every customer asset signed up as part of their own programs. And some of the upcoming regulatory changes, like FERC Order 2222, require DERs to be allowed to participate outside of utility structures in organized markets; so that change is coming regardless. I believe that if utilities want access to the customer devices, they need to provide incentive structures to pay for the services that they’re looking to procure.

They could pay customers directly, but they could also pay aggregators. There’s a lot of really interesting work being done in Australia and the UK around models for providing market incentives for both local network and transmission-level services. My personal view is that utilities will want to ensure that each device can make its own decisions about its local job. A thermostat’s job is to regulate the temperature in the house. For a battery, the job is often to provide local backup power. That’s their primary job.

However, customer assets can also provide grid services. And that’s where central coordination is critical because no single asset understands the role that it needs to play in the grid. The grid operators have the best perspective on what services are needed to support the network as a whole. Being able to orchestrate assets to serve the needs of the broader network is crucially important to managing grid reliability and affordability.

You don’t need central control. You do need some form of central coordination. So my preferred model is localized control with central coordination—as that gives you the best of both worlds. Obviously, though, the devil is in the details.
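
One way to picture that split is the sketch below; the classes and numbers are hypothetical, not the Camus platform’s API. Each device enforces its own local constraint, while the central coordinator only decides how much to ask of whom.

from dataclasses import dataclass

@dataclass
class Battery:
    soc_kwh: float        # current state of charge
    reserve_kwh: float    # energy held back for the local backup-power job

    def respond(self, requested_kw, hours=1.0):
        """Local control: discharge toward the coordinator's request, but
        never dip into the backup reserve. Returns the committed kW."""
        available_kwh = max(0.0, self.soc_kwh - self.reserve_kwh)
        committed_kw = min(requested_kw, available_kwh / hours)
        self.soc_kwh -= committed_kw * hours
        return committed_kw

def coordinate(batteries, grid_need_kw):
    """Central coordination: split a grid-level need across devices, while
    each device decides for itself what it can safely contribute."""
    delivered = 0.0
    for battery in batteries:
        if delivered >= grid_need_kw:
            break
        delivered += battery.respond(grid_need_kw - delivered)
    return delivered

fleet = [Battery(soc_kwh=10.0, reserve_kwh=4.0) for _ in range(3)]
print(coordinate(fleet, grid_need_kw=15.0))  # 15.0, with every reserve intact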

POWER: How can power companies and grid managers use network models to monitor grid conditions in near-real time or real time? How important will that function be to the future of power generation and power delivery? How is it possible to integrate third-party telemetry alongside utility data to support situational awareness and coordinated control of customer-owned resources?

Atkinson: Understanding the physics of the grid is critical, and that’s what the models provide us. They’re especially important for planning use cases when grid operators are looking to understand the potential impacts of changes that haven’t happened yet, because they can change external conditions in the model and run the model to see the results.

However, models are only as good as the data provided. And, especially for the distribution grid, the data landscape is very complicated. Many models attempt to reduce complexity by dropping details, reducing time scales or skipping over parts of the grid environment entirely. But models that don’t consider every single piece of data impacting the grid aren’t very helpful in understanding unexpected or abnormal outcomes.

To combat uncertainty, it is critical to have access to large amounts of high-fidelity data from instrumentation in the field, and to treat the collected data as a first-class citizen for monitoring and management. That’s how utilities can see the things that arise under real-world conditions and manage an environment that is becoming much more complex.

POWER: How does Camus harmonize data from disparate utility systems, such as SCADA, GIS, OMS, and more, into a single source of accurate information?

Atkinson: First, Camus is a cloud-native platform, enabling us to manage massive amounts of data. If a utility is using an on-premises implementation, they typically can add a few servers when needed. In our cloud environment, we can add dozens nearly instantly. We never need to throw data away. We never need to downsample. We’re able to make sure that we capture all the data in its raw form and in any processed form, so that if we decide we need an additional field from data going back five years, we can simply reprocess the raw data and pull it into our structured data sets. That’s not possible with an on-prem approach—and it’s fundamental to harmonizing data from a bunch of different systems.
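
The keep-everything-raw pattern she describes can be sketched in a few lines; the record layout here is invented for illustration. Structured views are derived from the raw archive, so adding a field later just means reprocessing.

import json

# Raw readings are stored verbatim; structured views are derived, never canonical.
raw_log = [
    json.dumps({"meter": "m-001", "ts": "2024-06-01T00:00:00Z", "kw": 1.2, "v": 239.8}),
    json.dumps({"meter": "m-001", "ts": "2024-06-01T00:15:00Z", "kw": 8.4, "v": 236.1}),
]

def build_view(raw_records, fields):
    """Re-derive a structured data set from the raw archive. Needing a new
    field later just means re-running this over the stored raw records."""
    return [{f: json.loads(r).get(f) for f in fields} for r in raw_records]

load_view = build_view(raw_log, ["meter", "ts", "kw"])          # the original view
with_voltage = build_view(raw_log, ["meter", "ts", "kw", "v"])  # new field, old data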

With the data in hand, we incorporate it all into our own composite data model to help understand the relationships between data from different sources. Some of the big challenges associated with this work are terribly boring yet crucially important. For example, data sets are often missing fields, and data from different sources have different timescales and time steps. There’s a lot of clean-up required. That’s the sort of task that utility teams can spend months wrestling with. We take care of all of it. We build and automate the processes in a large-scale way, using model-based forecasts to fill in the gaps and provide a trusted source for what’s happening on the grid right now.
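
A minimal sketch of the timescale-alignment chores described above might look like the following; the column names, resolutions, and the forward-fill stand-in for model-based gap filling are all assumptions.

import pandas as pd

# SCADA at 1-minute resolution, AMI meter data at 15-minute resolution.
scada = pd.DataFrame(
    {"feeder_kw": [410.0, 415.2, None, 418.9]},   # one dropped reading
    index=pd.date_range("2024-06-01 00:00", periods=4, freq="1min"),
)
ami = pd.DataFrame(
    {"meter_kw_sum": [395.0, 402.3]},
    index=pd.date_range("2024-06-01 00:00", periods=2, freq="15min"),
)

# Align both sources onto a common 15-minute grid, then join them.
combined = scada.resample("15min").mean().join(ami, how="outer")

# Fill remaining gaps. A production system would use model-based forecasts
# here, as described above; forward fill is just a stand-in.
combined = combined.ffill()
print(combined)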

Finally, utilities can use the same forecast capabilities to estimate future demand. Currently, the platform can forecast net and gross load one month ahead, but we are working to extend those capabilities further. We forecast for any point in time at any point on the grid, from rooftop solar system to meter to transformer to feeder to substation to the whole grid. If we’re managing a non-wires alternative, for example, we’ll forecast loading and voltage at the feeder level and at all of the relevant meters or devices nearby. Bringing all those pieces together gives utilities a rich view of what’s happening on their grid—and helps the utility make really good use of the data that they’re already collecting.
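
That per-node forecasting can be pictured as rolling leaf-level forecasts up a connectivity model; the topology and numbers below are invented.

# Toy connectivity model: meter -> transformer -> feeder.
parent = {
    "meter-1": "xfmr-A", "meter-2": "xfmr-A", "meter-3": "xfmr-B",
    "xfmr-A": "feeder-7", "xfmr-B": "feeder-7",
}

# Hypothetical meter-level forecasts for one interval (kW).
meter_forecast_kw = {"meter-1": 1.4, "meter-2": 8.6, "meter-3": 2.1}

def roll_up(meter_fcst, parent_map):
    """Aggregate leaf forecasts up every level of the grid hierarchy."""
    totals = dict(meter_fcst)
    for node, kw in meter_fcst.items():
        p = parent_map.get(node)
        while p is not None:
            totals[p] = totals.get(p, 0.0) + kw
            p = parent_map.get(p)
    return totals

print(roll_up(meter_forecast_kw, parent))
# ... 'xfmr-A': 10.0, 'xfmr-B': 2.1, 'feeder-7': 12.1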

Darrell Proctor is a senior associate editor for POWER (@POWERmagazine).
