O&M

How to conduct a plant performance test

Completing a power plant’s start-up and commissioning usually means pushing the prime contractor to wrap up the remaining punch list items and getting the new operators trained. Staffers are tired of the long hours they’ve put in and are looking forward to settling into a work routine.

Just when the job site is beginning to look like an operating plant, a group of engineers arrives with laptops in hand, commandeers the only spare desk in the control room, and begins to unpack boxes of precision instruments. In a fit of controlled confusion, the engineers install the instruments, find primary flow elements, and make the required connections. Wires are dragged back to the control room and terminated at a row of neatly arranged laptops. When the test begins, the test engineers stare at their monitors as if they were watching the Super Bowl and trade comments in some sort of techno-geek language. The plant performance test has begun (Figure 1).

1. Trading spaces. This is a typical setup of data acquisition computers used during a plant performance test. Courtesy: McHale & Associates

Anatomy of a test

The type and extent of plant performance testing activities are typically driven by the project specifications or the turnkey contract. They also usually are linked to a key progress payment milestone, although the value of the tests goes well beyond legalese. The typical test is designed to verify power and heat rate guarantees that are pegged to an agreed-upon set of operating conditions. Sounds simple, right? But the behind-the-scenes work to prepare for a test on which perhaps millions of dollars are at stake beyond the contract guarantees almost certainly exceeds your expectations (see box).

Long before arriving on site, the test team will have:

  • Gathered site information.

  • Reviewed the plant design for the adequacy and proper placement of test taps and for the type and location of primary flow elements.

  • Developed plant mathematical models and test procedures.

  • Met with the plant owner, contractor, and representatives of major original equipment manufacturers (OEMs) to iron out the myriad details not covered by contract specifications. Experienced owners will have made sure that the plant operations staff is included in these meetings.

Tests are normally conducted at full-load operation for a predetermined period of time. The test team collects the necessary data and runs them through the facility correction model to obtain preliminary results. Usually within a day, a preliminary test report or letter is generated to allow the owner to declare "substantial completion" and commence commercial operation. Results of the fuel (and/or ash) sample analyses are usually available within a couple of weeks, allowing the final customer report to be completed and submitted.

The art and science of performance testing require very specialized expertise and experience that take years to develop. The science of crunching data is defined by industry standards, but the art rests in the ability to spot data inconsistencies, subtle instrument errors, skewed control systems, and operational miscues. The experienced tester can also quickly determine how the plant must be configured for the tests and can answer questions such as: Will the steam turbine be in pressure control or at valves wide open in sliding-pressure mode? Which control loops need to be in manual or automatic during testing? At what level should the boiler or duct burners be fired?

For the novice, it’s easy to miss a 0.3% error in one area and a partially offsetting 0.4% error in another; because they nearly cancel, both can slip through unnoticed, yet they corrupt the result if they aren’t resolved and accounted for. With millions of dollars on the line, the results have to be rock solid.

Mid-term exams

There are many reasons to evaluate the performance of a plant beyond meeting contract guarantees. For example, a performance test might be conducted on an old plant to verify its output and heat rate prior to an acquisition to conclusively determine its asset value. Other performance tests might verify capacity and heat rate for the purpose of maintaining a power purchase agreement, bidding a plant properly into a wholesale market, or confirming the performance changes produced by major maintenance or component upgrades.

Performance tests are also an integral part of a quality performance monitoring program. If conducted consistently, periodic performance tests can quantify nonrecoverable degradation and gauge the success of a facility’s maintenance programs. Performance tests also can be run on individual plant components to inform maintenance planning. If a component is performing better than expected, the interval between maintenance activities can be extended. If the opposite is the case, additional inspection or repair items may be added to the next outage checklist.

Whatever the reason for a test, its conduct should be defined by industry-standard specifications such as the Performance Test Codes (PTCs) published by the American Society of Mechanical Engineers (ASME), whose web site—www.asme.org—has a complete list of available codes. Following the PTCs allows you to confidently compare today’s and tomorrow’s results for the same plant or equipment. Here, repeatability is the name of the game.

The PTCs don’t anticipate how to test every plant configuration but, rather, set general guidelines. As a result, some interpretation of the codes’ intent is always necessary. In fact, the PTCs anticipate variations in test conditions and reporting requirements in a code-compliant test. The test leader must thoroughly understand the codes and the implications of how they are applied to the plant in question. Variances must be documented, and any test anomalies must either be identified and corrected before starting the test or be accounted for in the final test report.

A performance test involves much more than just taking data and writing a report. More time is spent in planning and in post-test evaluations of the data than on the actual test. Following is a brief synopsis describing the process of developing and implementing a typical performance test. Obviously, the details of a particular plant and the requirements of its owner should be taken into account when developing a specific test agenda.

Planning for the test

The ASME PTCs are often referenced in equipment purchase and/or engineering, procurement, and construction (EPC) contracts to provide a standard means of determining compliance with performance guarantees. The ASME codes are developed by balanced committees of users, manufacturers, independent testing agencies, and other parties interested in following best engineering practices. They include instructions for designing and executing performance tests at both the overall plant level and the component level.

Planning a performance test begins with defining its objective(s): the validation of contractual guarantees for a new plant and/or the acquisition of baseline data for a new or old plant. As mentioned, part of planning is making sure that the plant is designed so it can be tested. Design requirements include defining the physical boundaries for the test, making sure that test ports and permanent instrumentation locations are available and accessible, and ensuring that flow metering meets PTC requirements (if applicable).

After the design of the plant is fixed, the objectives of testing must be defined and documented along with a plan for conducting the test and analyzing its results. A well-written plan will include provisions for both expected and unexpected test conditions.

Understanding guarantees and corrections

The most common performance guarantees are the power output and heat rate that the OEM or contractor agrees to deliver. Determining whether contractual obligations have been met can be tricky. For example, a plant may be guaranteed to have a capacity of 460 MW at a heat rate of 6,900 Btu/kWh—but only under a fixed set of ambient operating conditions (reference conditions). Typical reference conditions may be a humid summer day with a barometric pressure of 14.64 psia, an ambient temperature of 78F, and relative humidity of 80%.

The intent of testing is to confirm whether the plant performs as advertised under those specific conditions. But how do you verify that a plant has met its guarantees when the test must be done on a dry winter day, with a temperature of 50F and 20% relative humidity? The challenging part of performance testing is correcting the results for differences in atmospheric conditions. OEMs and contractors typically provide ambient correction factors as a set of correction curves or formulas for their individual components. But it is often up to the performance test engineers to integrate the component information into the overall performance correction curves for the facility.
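
To make the correction step concrete, here is a minimal sketch of interpolating an OEM-style ambient-temperature correction curve and using it to move a measured output back to a 78F reference condition. The curve points and readings are invented for illustration; real curves come from the OEM or contractor.

```python
# Minimal sketch: applying a hypothetical ambient-temperature correction
# curve. The curve points are invented for illustration; real curves
# come from the OEM or the EPC contractor.

# (compressor inlet temperature F, correction factor), where factor =
# expected output at that temperature / expected output at 78F reference.
CURVE = [(40.0, 1.10), (59.0, 1.06), (78.0, 1.00), (95.0, 0.93)]

def interp_factor(temp_f: float) -> float:
    """Linearly interpolate the output correction factor at temp_f."""
    if temp_f <= CURVE[0][0]:
        return CURVE[0][1]
    for (t1, f1), (t2, f2) in zip(CURVE, CURVE[1:]):
        if temp_f <= t2:
            return f1 + (f2 - f1) * (temp_f - t1) / (t2 - t1)
    return CURVE[-1][1]

measured_mw = 498.0           # output measured on the 50F test day
factor = interp_factor(50.0)  # ~1.079: cold air boosts output vs. 78F
corrected_mw = measured_mw / factor
print(f"corrected output: {corrected_mw:.1f} MW vs. a 460 MW guarantee")
```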

The reference conditions for performance guarantees are unique to every site. A simple-cycle gas turbine’s ratings assume its operation under International Organization for Standardization (ISO) conditions: 14.696 psia, 59F, and relative humidity of 60%. The condition of the inlet air has the biggest impact on gas turbine–based plants because the mass flow of air through the turbines (and consequently the power they can produce) is a function of pressure, temperature, and humidity. Performance guarantees for steam plants also depend on air mass flow, but to a lesser extent.

The barometric pressure reference condition is normally set to the average barometric pressure of the site. If a gas turbine plant is sited at sea level, its barometric pressure reference is 14.696 psia. For the same plant at an altitude of 5,000 feet, the reference would be 12.231 psia, and its guaranteed output would be much lower.
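
The 5,000-foot figure can be reproduced from the standard-atmosphere relation. The sketch below uses the U.S. Standard Atmosphere constants; the PTCs do not prescribe this particular formula, but it matches the pressures quoted above.

```python
# Ambient pressure vs. altitude from the U.S. Standard Atmosphere
# (troposphere). The PTCs do not prescribe this formula; it simply
# reproduces the reference pressures quoted in the text.

def std_pressure_psia(altitude_ft: float) -> float:
    """Standard-atmosphere pressure at altitude, 14.696 psia at sea level."""
    return 14.696 * (1.0 - 6.8756e-6 * altitude_ft) ** 5.2559

print(f"{std_pressure_psia(0):.3f} psia at sea level")    # 14.696
print(f"{std_pressure_psia(5000):.3f} psia at 5,000 ft")  # ~12.23
```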

The relative humidity reference condition may or may not have a significant bearing on plant performance. In gas turbine plants the effect is not large (unless the inlet air is conditioned), but it still must be accounted for. The effect of humidity, however, is more pronounced on cooling towers. Very humid ambient air reduces the rate at which evaporation takes place in the tower, lowering its cooling capacity. Downstream effects are an increase in steam turbine backpressure and a reduction in the turbine-generator’s gross capacity.

The most important correction for gas turbine plant performance tests involves compressor inlet air temperature. Although a site’s barometric pressure typically varies by no more than 10% over a year, its temperatures may range from 20F to 100F over the period. Because air temperature has a direct effect on air density, temperature variation changes a unit’s available power output. For a typical heavy-duty frame gas turbine, a 3-degree change in temperature can affect its capacity by 1%. A temperature swing of 30 degrees could raise or lower power output by as much as 10%. The effect can be even more pronounced in aeroderivative engines.
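
Taken at face value, that rule of thumb translates into a quick sensitivity estimate, sketched below. The 1%-per-3-degree rate is the rough figure quoted above, not data for any specific unit.

```python
# Back-of-the-envelope sensitivity using the rule of thumb quoted above:
# roughly 1% of capacity per 3F of inlet temperature change for a
# heavy-duty frame gas turbine. Illustrative only; real sensitivities
# come from the OEM correction curves.

RATED_MW = 460.0
PCT_PER_DEGF = 1.0 / 3.0   # percent of capacity per degree F

def capacity_shift_mw(delta_t_f: float) -> float:
    """Approximate output change for a given inlet-temperature swing."""
    return RATED_MW * PCT_PER_DEGF * delta_t_f / 100.0

print(f"{capacity_shift_mw(3.0):.1f} MW for a 3F change")   # ~4.6 MW (1%)
print(f"{capacity_shift_mw(30.0):.1f} MW for a 30F swing")  # ~46 MW (10%)
```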

ISO-standard operating conditions or site-specific reference conditions are almost impossible to achieve during an actual test. Accordingly, plant contractors and owners often agree on a base operating condition that is more in line with normal site atmospheric conditions. For example, a gas turbine plant built in Florida might be tested at reference conditions of 14.6 psia, 78F, and 80% relative humidity. Establishing a realistic set of reference conditions increases the odds that conditions during a performance test will be close to the reference conditions. Realistic reference conditions also help ensure that the guarantee is representative of expected site output.

Establishing site-specific reference conditions also reduces the magnitude of corrections to measurements. When only small corrections are needed to relate measured performance from the actual test conditions to the reference conditions, the correction methods themselves become less prone to question, raising everyone’s comfort level with the quality of the performance test results.

Beyond site ambient conditions, the PTCs define numerous other correction factors that the test designer must consider. Most are site-specific and include the following (a sketch of how such factors are typically combined appears after the list):

  • Generator power factor.

  • Compressor inlet pressure (after losses across the filter house).

  • Turbine exhaust pressure (due to the presence of a selective catalytic reduction system or heat-recovery steam generator).

  • Degradation/fired hours, recoverable and unrecoverable.

  • Process steam flow (export and return).

  • Blowdown (normally isolated during testing).

  • Cooling water temperature (if using once-through cooling, or if the cooling tower is outside the test boundary).

  • Condenser pressure (if the cooling water cycle is beyond the test boundary).

  • Abnormal auxiliary loads (such as heat tracing or construction loads).

  • Fuel supply conditions, including temperature and/or composition.
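
In practice, such factors are usually applied multiplicatively to move the measured values back to reference conditions. Below is a bare-bones sketch of that combination step; the factor names and values are placeholders, as the real ones come from the OEM curves and the agreed test procedure. Heat rate is corrected the same way with its own set of factors.

```python
# Bare-bones facility correction model: corrections are combined
# multiplicatively. Factor values here are placeholders; the real ones
# come from OEM correction curves and the agreed test procedure.

measured_mw = 475.2   # as-tested net output

# Each factor = expected output at test conditions / at reference conditions.
output_factors = {
    "compressor inlet temperature": 1.032,
    "barometric pressure": 0.996,
    "relative humidity": 1.001,
    "generator power factor": 0.999,
}

corrected_mw = measured_mw
for factor in output_factors.values():
    corrected_mw /= factor   # divide each measured-to-reference factor back out

print(f"corrected output: {corrected_mw:.1f} MW")  # ~462 MW
```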

Choose the right instrumentation

Instrumentation used to record test measurements should be selected based on a pre-test uncertainty analysis (see "Understanding test uncertainty"). This analysis helps fine-tune the instrumentation selection to ensure that the quality of the test meets expectations. The test instruments themselves are usually a combination of temporary units installed specifically for testing, permanently installed plant instrumentation, and utility instrumentation (billing or revenue metering). Temporary instruments are typically installed to make key measurements that have a significant impact on the results and where higher accuracy is needed to reduce the uncertainty of test results. Among the advantages of using temporary instrumentation is that it has been calibrated specifically for the performance test in question following National Institute of Standards and Technology (NIST) procedures.
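
In its simplest form, a pre-test uncertainty analysis combines the independent measurement uncertainties by root-sum-square, in the spirit of ASME PTC 19.1 (Test Uncertainty). The sketch below does this for a heat rate built from power and fuel-energy measurements; the individual uncertainty values are illustrative, not code requirements.

```python
# Simplified pre-test uncertainty analysis in the spirit of ASME PTC 19.1:
# independent relative uncertainties combine by root-sum-square. The
# values are illustrative, not requirements of any code.
from math import sqrt

# Relative uncertainties (percent) of the inputs to heat rate,
# where heat rate = fuel energy input / net power output.
u_fuel_flow = 0.35      # fuel flow measurement
u_heating_value = 0.40  # laboratory fuel analysis
u_power = 0.30          # net power measurement

u_heat_rate = sqrt(u_fuel_flow**2 + u_heating_value**2 + u_power**2)
print(f"expected heat rate uncertainty: +/-{u_heat_rate:.2f}%")  # ~0.61%
```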

Another benefit of installing temporary instrumentation is to verify the readings of permanent plant instruments. Plant instrumentation typically lacks NIST-traceable calibration or has been calibrated by technicians who are more concerned with operability than with accurate performance testing. There’s a good reason for the former: Performing a code-level calibration on plant instrumentation can be more expensive than installing temporary test instrumentation. An additional benefit of a complete temporary test instrumentation setup is that the instrumentation, signal conditioning equipment, and data acquisition system are often calibrated as a complete loop, as is recommended in PTC-46 (Overall Plant Performance).

All performance instruments should be installed correctly, and any digital readings should be routed to a central location. Choosing a good spot for this performance data center is very important: it should be out of the way of site operations yet close enough to observe plant instrumentation and operations.

Obviously, performance instrument readings should be checked against those of plant instruments, where available. This is one of the most important checks that can be made prior to a performance test. When a performance tester can get the same result from two different instruments that were installed to independent test ports and calibrated separately, there’s a good chance the measurement is accurate. If there’s a difference between the readings that is close to or exceeds instrument error, something is likely to be amiss.

Typically, when plant guarantees are tied to corrected output and heat rate, the two most important instrument readings are measured power and fuel flow. If either is wrong, the test results will be wrong. For example, say you’re testing a unit whose expected output is 460 MW. The plant instrument is accurate to within 1%, and the test instrument is even more accurate: +/-0.3%. In this case, the tester prefers to see the two readings well within 1% of each other (4.6 MW), but they still may be as far apart as 5.98 MW (1.3%) and technically be within the instruments’ combined uncertainty.
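
That cross-check reduces to a one-line test: flag the pair when the disagreement approaches the combined uncertainty of the two instruments. The sketch below uses the worst-case (arithmetic-sum) tolerance from the 460-MW example; whether a procedure uses the sum or a root-sum-square combination is a judgment call to be documented in the test plan.

```python
# Cross-check of a plant instrument against a test instrument. The
# tolerance here is the worst-case arithmetic sum of the two instrument
# uncertainties, matching the 460 MW example above; a test procedure
# might instead specify a root-sum-square combination.

def readings_agree(plant_mw: float, test_mw: float,
                   u_plant_pct: float, u_test_pct: float) -> bool:
    """True if the two readings lie within their combined uncertainty."""
    tolerance_mw = (u_plant_pct + u_test_pct) / 100.0 * test_mw
    return abs(plant_mw - test_mw) <= tolerance_mw

# A 1% plant meter vs. a 0.3% test meter on a ~460 MW unit:
print(readings_agree(463.0, 460.0, 1.0, 0.3))  # True:  3.0 MW < 5.98 MW
print(readings_agree(468.0, 460.0, 1.0, 0.3))  # False: 8.0 MW > 5.98 MW
```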

When setting up for a performance test, it is not uncommon to find errors in permanent plant instrumentation, control logic, or equipment installation. These errors can influence the operation of a generating unit, for example by causing over- or underfiring of a boiler or gas turbine and significantly impacting the unit’s output and heat rate. In cases where the impact on actual operation continues undetected, the corrected test report values may still be in error due to corrections made based on faulty instrument readings. If these reported values are used as the basis of facility dispatch, a small error could have an enormous impact on the plant’s bottom line, ranging from erroneous fuel nominations to the inability to meet a capacity commitment.

Conduct the test

The performance test should always be conducted in accordance with its approved procedure. Any deviations should be discussed and documented to make sure their impact is understood by all parties. If the test is conducted periodically, it is important to know what deviations were allowed in previous tests to understand if any changes in performance might have been due to equipment changes or simply to the setup of the test itself.

Calibrated temporary instrumentation should be installed in the predetermined locations, and calibration records for any plant or utility instrumentation should be reviewed. Check any data collection systems for proper resolution and frequency and do preliminary test runs to verify that all systems are operating properly.

The performance test should be preceded by a walk-down of the plant to verify that all systems are configured and operating correctly. It’s important to verify that plant operations are in compliance with the test procedure because equipment disposition, operating limits, and load stability affect the results. Data can then be collected for the time periods defined in the test procedure and checked for compliance with all test stability criteria. Once data have been collected and the test has been deemed complete, the results can be shared with all interested parties.

Because the short preliminary test may be the most important part of the process, be sure to allow sufficient time for it in the test plan. The preliminary test must be done during steady-state conditions following load stabilization or when the unit is operating at steady state during the emissions testing program. The preliminary test has three objectives: to verify all data systems, to make sure manual data takers are reading the correct instruments and meters, and to have the data pass a "sanity check."

After the test data have been collected, the readings should be entered into the correction model as soon as possible and checked against the test stability criteria (as defined by the test procedure). At this point, depending on the correction methods, the test director may be able to make a preliminary analysis of the results. If the numbers are way out of whack with expected values, a good director will start looking for explanations, such as errors in the recorded data or something in the operational setup of the unit itself. Though everyone is concerned when a unit underperforms, a unit that performs unexpectedly well may have problems that have been overlooked. For example, a unit whose corrected test results indicate a 5% capacity margin may need to have its metering checked and rectified, or it may have been mistuned and left in an overfired condition.
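
Stability screening can be as simple as verifying that each critical parameter stayed within a procedure-defined band around its run average. A minimal sketch, assuming limits expressed as a maximum percent deviation from the mean (the limits shown are placeholders, not values from any PTC):

```python
# Minimal stability screen: every sample of a critical parameter must
# stay within a band around the run average. The limits are placeholders,
# not values taken from any PTC.

def is_stable(samples: list[float], max_dev_pct: float) -> bool:
    """True if no sample deviates from the run mean by more than the limit."""
    run_mean = sum(samples) / len(samples)
    band = run_mean * max_dev_pct / 100.0
    return all(abs(s - run_mean) <= band for s in samples)

load_mw = [459.8, 460.4, 460.1, 459.6, 460.3]  # one-minute averages
inlet_f = [50.1, 50.4, 50.9, 51.6, 52.8]       # inlet temp drifting upward

print(is_stable(load_mw, 0.5))  # True: load held within +/-0.5%
print(is_stable(inlet_f, 1.0))  # False: temperature drifted beyond +/-1%
```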

Although an overtuned gas turbine may produce more megawatt-hours during initial operations, the gain comes with a price: increasing degradation of the unit’s hot section, shortening parts life and increasing maintenance costs. The most common mistake in testing is acceptance of results that are too good. If results are bad, everyone looks for the problem. If the results are above par, everyone is happy—especially the plant owner, who seems to have gotten a "super" machine. However, there’s a reason for every excursion beyond expected performance limits—for better or worse.

If all the pretest checks are done properly, the actual performance test should be uneventful and downright boring. It should be as simple as verifying that test parameters (load, stability, etc.) are being met. This is where the really good performance testers make their work look easy. They appear to have nothing to do during the test, and that’s true because they planned it that way. Having done all the "real" work beforehand, they can now focus on making sure that nothing changes during the test that may affect the stability of the data.

Analyze the results

Almost immediately after the performance test (and sometimes even before it is complete), someone is sure to ask, "Do you have the results yet?" Everyone wants to know if the unit passed. As soon as practical, the performance group should produce a preliminary report describing the test and detailing the results. Data should be reduced to test run averages and scrutinized for any spurious outliers. Redundant instrumentation should be compared, and instrumentation should be verified or calibrated after the test in accordance with the requirements of the procedure and applicable test codes.
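
Reducing the data might look like the following sketch: screen each run for spurious readings, then average what remains. The median-based percent screen and its threshold are illustrative conventions, not requirements of the codes.

```python
# Reduce raw samples to a run average after screening spurious outliers.
# The median-based screen and the 1% threshold are illustrative
# conventions, not requirements of the test codes.
from statistics import mean, median

def reduce_run(samples: list[float], max_dev_pct: float = 1.0):
    """Return (run average, discarded outliers)."""
    med = median(samples)
    band = med * max_dev_pct / 100.0
    kept = [s for s in samples if abs(s - med) <= band]
    discarded = [s for s in samples if abs(s - med) > band]
    return mean(kept), discarded

avg, bad = reduce_run([460.2, 460.5, 459.9, 460.1, 447.0, 460.3])
print(f"run average {avg:.1f} MW, discarded {bad}")  # 447.0 is flagged
```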

The test runs should be analyzed following the methods outlined in the test procedure. Results from multiple test runs can be compared with one another for the sake of repeatability. PTC 46 (Overall Plant Performance) outlines criteria for overlap of corrected test results. For example, if there are three test runs, a quality test should demonstrate that the overlap is well within the uncertainty limits of the test.
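
One simple way to apply the overlap idea: treat each corrected run as a value with an uncertainty band and confirm that all the bands share a common region. This is a simplified paraphrase of the PTC 46 criterion, with illustrative numbers.

```python
# Simplified repeatability check: do the uncertainty bands of the
# corrected runs overlap? A loose paraphrase of the PTC 46 overlap
# criterion, with illustrative numbers.

def bands_overlap(runs: list[tuple[float, float]]) -> bool:
    """runs = [(corrected value, absolute test uncertainty), ...]"""
    lows = [value - unc for value, unc in runs]
    highs = [value + unc for value, unc in runs]
    return max(lows) <= min(highs)  # every band shares a common region

# Three corrected-output runs, each with +/-2.5 MW test uncertainty:
print(bands_overlap([(461.2, 2.5), (460.1, 2.5), (462.0, 2.5)]))  # True
print(bands_overlap([(461.2, 2.5), (455.0, 2.5), (462.0, 2.5)]))  # False
```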

Once the test analysts are satisfied that the results are valid, the test report can be written to communicate them. This report should describe any exceptions to the test procedure that may have been required due to the conditions of the facility during the test. In the event that the results of the performance test are not as expected, the report may also suggest potential next steps to rectify them.

For sites where the fuel analysis is not available online or in real time, a preliminary efficiency and/or heat rate value may be reported based on a fuel sample taken days or even weeks before the test. Depending on the type and source of the fuel, this preliminary analysis may differ significantly from that of the fuel burned during the test. It’s important to understand that preliminary heat rate and efficiency results are often subject to significant changes. Once the fuel analyses are available for the fuel samples taken during the test, a final report can be prepared and presented to all interested parties.
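
The final-report adjustment itself is straightforward arithmetic: the measured fuel flow is combined with the laboratory heating value of the during-test samples instead of the preliminary value. A sketch with illustrative numbers:

```python
# Heat rate recomputed once the laboratory analysis of the during-test
# fuel samples arrives. All numbers are illustrative.

fuel_flow_lb_hr = 145_000.0  # fuel flow measured during the test
net_power_kw = 460_000.0     # net output measured during the test

hhv_preliminary = 21_800.0   # Btu/lb, from the pre-test fuel sample
hhv_final = 22_050.0         # Btu/lb, from the during-test samples

def heat_rate_btu_kwh(hhv_btu_lb: float) -> float:
    """Heat rate = fuel energy input / net generation."""
    return fuel_flow_lb_hr * hhv_btu_lb / net_power_kw

print(f"preliminary: {heat_rate_btu_kwh(hhv_preliminary):.0f} Btu/kWh")  # ~6872
print(f"final:       {heat_rate_btu_kwh(hhv_final):.0f} Btu/kWh")        # ~6950
```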

Tina L. Toburen, PE, is manager of performance monitoring and Larry Jones is a testing consultant for McHale & Associates. Toburen can be reached at 425-557-8758 or [email protected]; Jones can be reached at 865-588-2654 or [email protected].
