Power Plant Efficiency: A Key to Profitable Performance

Building power plants is only the first step to generating success. Running plants efficiently, and consistently improving efficiency as they run, is the path to putting profits on the bottom line.

Building new plants of any generating technology is a difficult and crucially important part of the electricity business. But operating the plants is what brings in revenue. Operating them efficiently can mean the difference between profit and loss, particularly in competitive markets. In an operating environment, plant efficiency is job No. 1 for a power plant, whether the generation comes from nuclear, coal, or gas. For fossil-fuel plants, efficiency also is key to reducing air emissions, including carbon dioxide.

A major efficiency goal for fossil-fueled plants is heat rate improvement. The fewer Btus it takes to generate a kilowatt-hour of power, the less fuel is used, and the more money that flows to the bottom line. In a two-part series published in the November and December 2014 issues of POWER—“Coal-Fired Power Plant Heat Rate Improvement Options”—Sam Korellis of the Electric Power Research Institute wrote, “Unfortunately, since the mid-1960s, the average heat rate of fossil-fueled power plants in the United States has gradually increased. Several factors have contributed to this slow degradation in unit performance,” including the rise of nuclear power, which diverted utility money away from efficiency programs at fossil plants, and the arrival of environmental retrofits that increased the amount of power diverted to run the environmental controls.
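The heat rate arithmetic is straightforward to work out. As an illustration (all prices, heat rates, and plant parameters below are assumed example values, not figures from the article), a small calculation shows how a modest heat rate improvement flows to the bottom line:

```python
# Illustrative heat-rate savings calculation.
# All inputs are assumed example values, not figures from the article.

def annual_fuel_cost(heat_rate_btu_per_kwh, output_mw, capacity_factor,
                     fuel_cost_per_mmbtu):
    """Annual fuel cost in dollars for a unit at a given heat rate."""
    hours = 8760 * capacity_factor
    kwh = output_mw * 1000 * hours
    mmbtu = kwh * heat_rate_btu_per_kwh / 1e6
    return mmbtu * fuel_cost_per_mmbtu

# A hypothetical 600-MW coal unit at 75% capacity factor, $2.00/MMBtu fuel:
before = annual_fuel_cost(10_500, 600, 0.75, 2.00)  # assumed starting heat rate
after = annual_fuel_cost(10_200, 600, 0.75, 2.00)   # ~3% heat rate improvement
print(f"Annual fuel savings: ${before - after:,.0f}")
```

Even a 300-Btu/kWh improvement on a unit of this assumed size saves on the order of $2 million a year in fuel, which is why heat rate is watched so closely.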

When it comes to fossil heat rate improvements, wrote Korellis, “The biggest hurdles are not technical, they’re financial.”

Plumbing Critical for Operations

Plumbing is a critical path for all steam electric plants, as pipe runs, pumps, valves, and welds (Figure 1) can determine how well a plant operates, or if it operates at all. That’s particularly true for nuclear units. A former Nuclear Regulatory Commission (NRC) engineer once said, “Nuclear physics is simple. It’s the plumbing that’s complicated.”

1. Designed for efficiency. This 100,000-gallon feedwater tank is part of the system plumbing that supplies water to the boiler at the Rheinhafen-Dampfkraftwerk power plant in Karlsruhe, Germany. To improve efficiency, the plant extracts some of the steam from the water-steam cycle to reheat it. Courtesy: GE Power

In a recent promotional publication, Honeywell, a major supplier of plant efficiency programs, outlined the areas of concern for improving plant efficiency, particularly at fossil plants. There are several items that require operators’ attention.

Fuel and Water Management. Honeywell says plant efficiency needs “good water treatment” to reduce contaminants and corrosion. Fuel handling, particularly in coal plants, means operators “must keep a continuous eye on levels inside coal bunkers… attention is needed on the purity of condensate water, raw water, and makeup water.”

Generator Issues. While hydrogen is an effective coolant that improves steam turbine generator efficiency and saves costs, it is also highly explosive. Lube oil circulation in pumps is important to avoid equipment damage.

Boilers. Steam generating units such as boilers require monitoring of water and steam circuits, along with control of combustion processes used to fire them. Also important is paying attention to molten ash accumulation that can reduce heat transfer and cause boiler tubes to fail.

Turbines. Turbines require precise control of parameters such as steam speed, pressure, and temperature.

Water and Wastewater Treatment. Water is critical to plant operation and monitoring recirculated water for pH and dissolved oxygen “is essential for regulatory compliance.”

Control and Data Acquisition. “The central control panel in a power plant is where the entire monitoring, control and regulation of power generation takes place.”

Other areas also contribute to greater plant efficiency.

In a paper five years ago, Thorsten Mathaeus of ABB, another supplier of plant efficiency equipment and services, commented, “The huge potential of improved efficiency of power plant assets is often underestimated…. Power plant optimization is often associated with upgrading the performance of combustion and steam processes, but another prime candidate for efficiency improvement are the plant’s electrical systems, better known as the electrical balance of plant (or EBoP).”

Mathaeus noted that a conventional power plant uses as much as 7% of its electrical output to operate plant electrical systems. That means a 600-MW plant uses about 40 MW to power motors running pumps, mills, fans, and auxiliary systems. “Older motors are often inefficient by today’s standards, and in addition, many systems are still controlled by throttling. This means the motor driving a pump or a fan runs at constant power regardless of load requirements. The flow of water or air is controlled by bypasses, resulting in significant energy waste.”
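Mathaeus's point about throttling can be illustrated with the pump affinity laws: shaft power scales roughly with the cube of speed, so a variable-speed drive running at part load draws far less power than a constant-speed motor whose output is throttled away. The motor rating and flow fraction below are assumed for illustration:

```python
# Illustrative comparison of throttled vs. variable-speed pump operation,
# using the pump affinity laws (power scales with speed cubed).
# The motor rating and flow demand are assumed example values.

rated_power_kw = 500.0   # assumed motor rating at full speed and full flow
flow_fraction = 0.6      # plant only needs 60% of rated flow

# Throttle/bypass control: the motor runs at full speed regardless of demand.
throttled_kw = rated_power_kw

# Variable-speed drive: speed tracks flow, and power scales with speed**3.
vsd_kw = rated_power_kw * flow_fraction ** 3

print(f"Throttled: {throttled_kw:.0f} kW, VSD: {vsd_kw:.0f} kW")
print(f"Savings: {throttled_kw - vsd_kw:.0f} kW ({1 - vsd_kw / throttled_kw:.0%})")
```

In this assumed case the drive cuts the motor's draw from 500 kW to roughly 108 kW at 60% flow, which is the "significant energy waste" Mathaeus describes.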

Efficiency Through Technology

Modern digital technology is increasingly applied to plant efficiency improvements, with considerable success. Several major vendors offer customized services applying new technological approaches. A few years ago, Siemens worked with Kansas City Power & Light to install upgraded combustion optimization technology at La Cygne Unit 2 in Kansas (Figure 2), a 715-MW unit burning Powder River Basin coal.

2. Combustion optimization. Kansas City Power & Light (KCP&L) worked with Siemens to install improved combustion optimization technology at Unit 2 of the La Cygne power plant in Kansas. The project used Siemens SPPA-P3000 combustion optimization solutions, which were “employed on the boiler without over-fired air by utilizing the existing combustion equipment, existing combustion controls system, and a new in-furnace laser-based combustion monitoring system,” according to Siemens. Courtesy: KCP&L

The work included installing laser measurement equipment inside the combustion chamber, allowing careful reconstruction of the combustion process using computer-aided tomography, along with combustion controls based on mathematical modeling and neural networks. Siemens said these “are software-based solutions that require no more than minor modifications to mechanical equipment and are relatively straightforward when it comes to operations and maintenance training.” The result at La Cygne: “Significant NOx reductions while improving the overall combustion and heat rate.”

The Stunning History of Nuclear Capacity Factors

Nothing illustrates the value of increased plant efficiency better than the U.S. history of nuclear power. Over many years, a concerted effort to improve plant practices and personnel has resulted in a stunning turnaround from dismal plant performances in the 1980s to a sterling record today.

The most common measure of nuclear performance is capacity factor: the average power generated, divided by the rated peak power, expressed as a percentage. For example, for a wind turbine of 5-MW capacity that produces an average of 2 MW, the capacity factor is 40%.
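That definition can be written out directly. A minimal sketch, using the wind turbine example from the text:

```python
def capacity_factor(avg_output_mw, rated_capacity_mw):
    """Capacity factor: average power generated divided by rated peak,
    expressed as a percentage."""
    return 100.0 * avg_output_mw / rated_capacity_mw

# The example from the text: a 5-MW wind turbine averaging 2 MW of output.
print(capacity_factor(2.0, 5.0))  # → 40.0
```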

Nuclear plants in the U.S. are not designed to follow load, largely a result of nuclear regulatory policy. The U.S. regulators have not wanted to strain the incredibly complex machines that are atomic energy plants by ramping them up and down, a concern that weighs far less heavily on other generating technologies.

An NRC response to a Department of Energy inquiry about smart grid technology in 2010 said, “Nuclear Power Plants (NPPs) are designed as base load units and are not designed to load follow (either by plant operator action or automatically via external control signal). While operators can adjust power in general, rapid changes are difficult and power changes are most problematic near the end of a fuel cycle (typically 18 months) where reactor power control is more complicated.”

Nuclear operators strive for a 100% capacity factor. A plant that isn’t running doesn’t generate revenue.

Despite hyperbole and optimism at the time, the first generation of commercial nuclear power plants operated poorly in their early years. U.S. nuclear plants in aggregate consistently ran at a capacity factor of less than 60% during the 1980s, according to the Nuclear Energy Institute, drawing from U.S. Energy Information Administration (EIA) data. Emergency reactor shutdowns (scrams, an acronym said to have been coined by pioneering nuclear physicist Enrico Fermi for “safety control rod axe man”), forced outages because of degraded equipment, and lengthy refueling outages drove the abysmal performance. Scrams are particularly problematic, as they violently challenge plant safety and recovery systems.

The root cause of the poor performance may well have been the unfamiliarity with the demanding technology, as many of the personnel involved (generally excluding reactor operators) came from less-demanding generating technologies with different performance goals, such as oil, gas, and coal.

That began to change in the 1980s. Legendary engineer and Duke Power executive William States “Bill” Lee III helped create the Atlanta-based Institute of Nuclear Power Operations (INPO) in 1979 as an industry response to the Three Mile Island accident, with the aim of improving nuclear plant performance nationwide. The explosion of the Chernobyl reactor in Ukraine in 1986 spurred the effort as well, leading to the creation of the World Association of Nuclear Operators.

At the front line of the INPO effort was nuclear engineer and Navy nuke veteran Zack Pate. Under Pate’s direction, the industry worked effectively to identify causes of its poor operational performance and develop strategies to improve. In a 1986 article for the International Atomic Energy Agency shortly after the Chernobyl disaster, Pate wrote, “Since early 1981, INPO has been working to develop a performance indicator program to support utility efforts in achieving high-level performance.”

Pate added, “It is widely recognized that nuclear plants with high equivalent availability, small numbers of forced outages, few unplanned scrams, few significant events, and low personnel radiation exposures are generally well-managed overall. Such plants are more reliable and can be expected to have higher margins of safety. Thus, the performance indicator programme and its use by utilities in setting long-range goals directly support improvements in plant safety and reliability.”

In 1985, INPO undertook an analysis of how performance indicators could be used for long-term plant improvements. The effort produced a list of 10 overall indicators that it found were significant. Pate said, “Utilities are now tracking their performance in these 10 areas and establishing long-term goals in most of them. Each utility began reporting data to INPO on a quarterly basis during 1985. INPO analyzes these data and provides periodic reports to its members on progress and trends. We also share these industry-wide data with the NRC.”

It worked. The capacity factor of the U.S. nuclear fleet began a steady climb around 1990. By the turn of the century, the fleet capacity factor had topped 90%, and it has remained there; by 2016, according to EIA, the figure was 92.5%.

The scram rate, another performance indicator, also improved dramatically. According to NRC data, in 1986 only 1.5% of nuclear plants had not experienced a scram during the year. By 1993, the figure was up to 33%.

In the 1980s, the chief threat to the success of nuclear power was poor plant performance. That is no longer the case, as most U.S. nukes have exemplary performance. Today’s challenge to nuclear power is entirely economic.

Sudha Thavamani of Siemens wrote in 2013: “In today’s power generation market, steam power plants are focused on identifying ways to operate more efficiently and effectively to reduce losses, maximize reliability, and boost revenue. Due to continually changing demands by environmental and governmental authorities as well as consumers, plant operators and engineers are continuously seeking more effective and sophisticated technologies to support this focus and gain an edge in meeting today’s complex electricity generation market.”

Internet-based technology is helping to improve plant performance. In Italy, as reported at Decentralized-Energy.com, Centro Energia Teverola used Emerson performance modeling technology on a 150-MW combined cycle gas-fired plant, which consists of two Ansaldo-Siemens gas turbines, two Foster Wheeler heat recovery steam generators, and one Ansaldo steam turbine.

In a six-month trial, according to Vicenzo Piscitelli, general manager of Centro Energia Teverola, the Emerson technology contributed to an efficiency improvement of more than 1%. Piscitelli said, “The core of this technology is a computer-based thermodynamic model of the equipment. A stream of current data from an operating plant is transmitted to the computer, which produces useful comparisons of actual performance versus the optimum model.”

Piscitelli said, “Since the software is web-based, operating data can be transmitted from anywhere to the central file server with the thermodynamic models. No additional hardware is required on-site, so the system was implemented very quickly once the model was configured.”

Getting the system up and running took just a day. “Almost immediately thereafter,” said Piscitelli, “Emerson analysts noticed a severe pressure drop in the air flow into one turbine—an indication of inlet filter fouling.” With that information, maintenance supervisors were able to figure out when to replace the blocked filter at the least cost. A single filter change, he said, “paid for this service for two years.”
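The underlying idea — streaming live data against a model baseline and flagging deviations — can be sketched simply. The readings and alarm margin below are invented for illustration and are not Emerson's actual model:

```python
# Sketch of model-vs.-actual deviation monitoring, in the spirit of the
# inlet-filter example. The readings and threshold are assumed values,
# not plant data or any vendor's actual algorithm.

def check_filter(dp_measured_kpa, dp_expected_kpa, alarm_margin_kpa=0.5):
    """Flag an inlet filter when measured pressure drop exceeds the
    model's expected value by more than the alarm margin."""
    deviation = dp_measured_kpa - dp_expected_kpa
    return deviation > alarm_margin_kpa, deviation

# A clean filter reads close to the model; a fouled one drifts high.
fouled, dev = check_filter(dp_measured_kpa=2.1, dp_expected_kpa=1.2)
print(f"Fouling alarm: {fouled} (deviation {dev:.1f} kPa)")
```

The value of the approach is that the "expected" number comes from a thermodynamic model rather than a fixed setpoint, so the comparison stays meaningful as load and ambient conditions change.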

Importance of Predictive Maintenance

A key to this and other advanced technological approaches to plant efficiency is predictive maintenance, as opposed to the standard operating procedure of scheduled preventive maintenance. Gainesville Regional Utilities in Florida, for example, said that since it began focusing on prediction ahead of prevention, based on sophisticated data-driven algorithms, it has seen a 50% reduction in time spent troubleshooting suspected valve problems.

The key to predictive maintenance is identifying which components could cause a plant to shut down unexpectedly and giving them priority for maintenance personnel. Equipment whose failure would not harm plant operations gets less attention, with its routine maintenance planned in advance.

This diagnostic approach requires monitoring operating data. Duke Energy, which employs predictive maintenance, said monitoring and diagnostics is of “strategic importance” for the company, with the goal being “To detect operational or equipment problems as-soon-as-possible for the most critical units in the fleet in order to: mitigate damage and risk; planned repairs vs. forced outages; identify performance problems; improve safety; and manage market opportunities.”

A recent analysis of predictive maintenance notes, “Such information has been difficult to obtain in the past because condition reports generally depended on human inputs, which were inconsistent at best and inaccurate at worst. Today’s predictive maintenance programs rely largely on intelligence delivered to asset management software by advanced monitoring and analysis technologies. These systems are capable of raising alarms if components suddenly develop symptoms of impending failure, so immediate corrective action can be taken.”

Typically, the diagnostic information comes from smart field devices and valve controllers, including vibration monitors on pumps, motors, fans, turbines, compressors, and heat exchangers.
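A common pattern for turning such sensor streams into alarms is to compare each new reading against recent history and flag a statistically unusual jump. A minimal sketch, with invented vibration data (not from any actual plant or monitoring product):

```python
# Simple rolling-statistics anomaly check for a vibration sensor.
# The readings and sigma threshold are invented example values.
from statistics import mean, stdev

def is_anomaly(history, reading, n_sigmas=3.0):
    """Alarm if a reading is more than n_sigmas above the recent history."""
    mu, sigma = mean(history), stdev(history)
    return reading > mu + n_sigmas * sigma

# Steady pump vibration (mm/s), then a sudden jump suggesting bearing wear.
baseline = [2.0, 2.1, 1.9, 2.0, 2.2, 2.0, 1.9, 2.1]
print(is_anomaly(baseline, 2.1))  # within normal scatter
print(is_anomaly(baseline, 3.5))  # flagged for investigation
```

Production systems are considerably more sophisticated, but the principle is the same: let the data define "normal" so that early symptoms of failure stand out before a forced outage.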

Juan Panama of Emerson Automation Solutions, in the article “How to Increase Power Plant Asset Reliability Using Modern Digital Technology” in the February 2018 issue of POWER, wrote, “Adding more instrumentation produces a lot of data, and turning data into actionable information takes sophisticated analytical tools. A plant-wide digital network brings all this data to one place, serving up valuable information necessary to improve operations. Analytics can find the cause-and-effect relationships driving plant performance by watching key performance indicators and answering critical questions capable of moving the needle in the right direction.”

Power plant efficiency is a complex mix of engineering, planning, and technology. But ultimately the challenge is human performance, especially on the part of plant management (see sidebar). EUCI, a Colorado firm that specializes in energy conferences aimed at technology companies, offers courses in human performance improvement for power plants.

An EUCI introduction for one of its recent conferences cites Alexander Pope’s “Essay on Criticism”: “To err is human….” But, the company asks, “does human fallibility doom us to failing over and over again?” Human error is often the cause of nasty power plant events. The devastating 1975 fire at the Tennessee Valley Authority’s three-unit Browns Ferry nuclear station was the result of two workers using a candle to check for air leaks in a cable spreading room, unaware that the insulation on the cables was combustible.

The March 28, 1979, meltdown at the Three Mile Island plant in Pennsylvania began with a minor problem that grew when a relief valve stuck open and instrumentation did not give plant operators a clear picture of the event. According to the NRC, the operators “took a series of actions that made conditions worse.”

According to EUCI, “The average American company wastes $637 per year per employee on human errors. For power plants, [the] number is much higher.”

Human Performance and Efficiency

How should human performance be thought of in terms of power plant efficiency? It’s a central question for management, one that turns on carefully identifying problems and solutions, and on assigning and motivating people to perform.

Writing in the Plant Services online magazine, Tom Moriarty, a mechanical engineer and a consultant on management and reliability in plants of various technologies, offers the “Five Whats” of plant performance improvement. Moriarty suggests managers must develop a “framework for understanding and communicating the challenge and benefits of performance improvement” by outlining the questions they should ask to improve performance. The Five Whats are:

    ■ What is the performance you want to achieve?
    ■ What indicators will there be to let you know if you are achieving the desired performance level?
    ■ What data do you need to develop the indicators of performance?
    ■ What has kept you from putting the processes in place and collecting the data needed to achieve higher performance levels?
    ■ What internal and external support do you need to achieve higher performance?

“A good leader,” writes Moriarty, “must ask an honest question about why the organization is not already achieving these performance levels. Is it lack of senior level support? Lack of internal capacity or expertise? Lack of funding?”

Ultimately, he says, “All performance improvements drive up costs in the near term. Your objective should be to reduce the net impact by managing the process to attain a net return on investment expeditiously.” ■

Kennedy Maize is a long-time energy journalist and frequent contributor to POWER.