How to supply and configure an energy economy and infrastructure for a world of more than 10 billion inhabitants by mid-century is perhaps the principal long-range issue facing human civilization today. The challenge will be finding the most environmentally benign way to supply that energy.
A key variable in this socioeconomic equation is the extent to which Earth’s remaining fossil fuel reserves can be exploited. Even though the link between observed increasing global temperature and increasing carbon dioxide emissions is debatable, all agree that such a link is plausible. The Kyoto Protocol represents a first attempt to limit climate change; coming decades are likely to see worldwide adoption of national carbon caps that could severely restrict the use of fossil fuels for both transportation and the production of thermal and electrical energy. One major harbinger of this trend is accelerated efforts to develop technology to displace hydrocarbons with hydrogen for fueling surface transportation. An example is California’s Hydrogen Highways initiative.
But there is a downside to the hydrogen equation that must be considered. Producing enough hydrogen—whether by electrolysis or by thermal splitting of water or methane—to displace current U.S. consumption of petroleum by automobiles and trucks would require a 50% increase in the nation's current electricity generation capacity. Given the massive amounts of CO2 that would need to be sequestered should hydrogen be generated either directly or indirectly from fossil fuels—and given the enormous land areas that new biomass, wind, or solar plants would require to support such an expansion—only nuclear power can feasibly enable a complete hydrogen economy.
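The scale of that claim can be checked with a rough back-of-envelope calculation. All numbers below are illustrative assumptions (round figures for gasoline use, electrolyzer efficiency, and total U.S. generation), not values from this article; the point is only that the extra generation required is of the same order as today's entire output.

```python
# Back-of-envelope sketch of the electricity needed to displace gasoline
# with electrolytic hydrogen. Every constant here is an assumed round
# number for illustration, not a figure from the article.
GASOLINE_GAL_PER_YEAR = 140e9   # assumed U.S. light-vehicle gasoline use
KWH_PER_GALLON = 33.7           # energy content of a gallon of gasoline
FUEL_CELL_ADVANTAGE = 2.0       # assumed fuel-cell vs. engine efficiency ratio
ELECTROLYZER_EFFICIENCY = 0.7   # assumed electricity-to-hydrogen efficiency
US_GENERATION_TWH = 4000        # assumed annual U.S. electricity generation

# Hydrogen energy needed at the wheels, then electricity to make it
h2_energy_twh = GASOLINE_GAL_PER_YEAR * KWH_PER_GALLON / FUEL_CELL_ADVANTAGE / 1e9
electricity_twh = h2_energy_twh / ELECTROLYZER_EFFICIENCY
fraction = electricity_twh / US_GENERATION_TWH

print(f"Extra generation needed: ~{electricity_twh:.0f} TWh/yr "
      f"(~{fraction:.0%} of assumed current output)")
```

With these assumed inputs the result lands in the range of half to all of present generation, consistent with the order of magnitude of the 50% figure cited above.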
In a certain sense, hydrogen and electricity can be considered "mutually fungible." In a number of instances, each can replace or be transformed into the other—hydrogen as potential energy, and electricity as kinetic energy. However, it will be most realistic to provide both and let the end user decide which to use. Figure 1 depicts just such a scenario on an urban scale. The electric portion of the grid would use high-current DC superconducting cables for power transmission, with liquid hydrogen as the core coolant. The electric power and hydrogen would come from nuclear and other power plants spaced along the grid. Electricity would exit the system at various taps, connecting into the existing AC power grid. The hydrogen would also exit the grid, providing a readily available alternative fuel, perhaps for the next generation of fuel cell–powered automobiles.
1. The grid of the future? In this conceptual drawing, an urban community’s entire electricity supply comes from a nuclear plant and rooftop photovoltaic panels. The nuclear plant also generates hydrogen, which is distributed along with the electricity by a SuperCable ring bus. Source: Dr. Paul Grant
A short history of superconductivity
Almost immediately after superconductivity was discovered in 1911, scientists proposed applying the phenomenon to electricity transmission and distribution. Superconducting wires and cables held the promise of carrying direct current without loss. However, the early superconductors were primarily elemental metals whose superconducting properties disappeared in the presence of even moderate currents and magnetic fields. Furthermore, the superconductors’ operational need for large amounts of liquid helium was a major barrier. It wasn’t until the post–World War II discovery of "hard" superconducting alloys capable of sustaining practical levels of current, the ability to manufacture long wire lengths of these materials, and the availability of efficient helium liquefaction equipment that any proposal to use superconductivity for the transmission of electricity could be taken seriously.
In 1967, Richard Garwin and Juri Matisoo of IBM published a paper proposing the construction of a 100-GW, 600-mile, superconducting DC transmission line based on the then newly discovered type II compound, Nb3Sn. The line would have to be refrigerated along its entire length by liquid helium at 4.2 Kelvin (K). At the time, it was thought that remote nuclear power plant farms or hydroelectric plants would provide a major portion of growing national electricity demand, and that "high power bandwidth" superconductor cable transmission at near-zero loss would become economical. In principle, Garwin and Matisoo's idea presaged many aspects of the "SuperCable" concept.
In the 1970s and early 1980s, more studies on the feasibility of both AC and DC superconducting cables appeared, and two watershed AC superconducting cables were built and successfully tested at Brookhaven, N.Y., and Graz, Austria. The latter cable actually provided live grid service for several years. In 1975, a report assembled by Stanford University and the U.S. National Bureau of Standards (today's National Institute of Standards and Technology, or NIST) examined the use of "slush hydrogen" at 14K as cryogen for a cable using Nb3Ge, with a transition temperature near 20K, as the superconductor. However, no attention was given to the use of hydrogen as an energy agent itself.
The discovery of high-temperature superconductors (HTSCs) in 1986 and the development of practical HTSC tape and wire in the early 1990s gave rise to the idea at EPRI that an HTSC DC "electricity pipeline" cooled by liquid nitrogen could compete economically with conventional high-voltage DC transmission lines or gas pipelines for the task of transporting energy over distances greater than 120 miles. Although today several prototype HTSC cables are being demonstrated and tested worldwide, all of these projects envision AC applications at T&D voltage levels of 66 kV and above. But the major advantage of superconductivity is its ability to transport large DC currents at relatively low voltage. Superconductors are lossless conductors only under constant-current conditions. When current levels fluctuate, heat-producing hysteretic losses occur, and they require extra cryogenic capacity in addition to that needed to remove ambient thermal in-leak to the cable. Moreover, the use of lower voltages reduces dielectric stress and improves cable reliability and longevity.
A preliminary design
Perhaps the most important design issue for the SuperCable involves the absolute and relative amounts of hydrogen and electric power to be delivered. As a first-order analysis, assume the peak demand of a typical home is 5 kW equivalent. To service a community of 200,000 households, a SuperCable would have to deliver 1,000 MWe via superconductors and 1,000 MWt via flowing hydrogen for heating and cooking.
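The sizing above can be sketched numerically, and the 1,000 MWt of hydrogen translated into a flow rate. The hydrogen heating value and liquid density below are standard physical constants; the conversion itself is an illustrative calculation, not a reproduction of the Table 1 design values.

```python
# Sizing sketch for the 200,000-household community.
# Physical constants are standard values; the flow-rate result is an
# illustrative estimate, not the article's Table 1 design figure.
HOUSEHOLDS = 200_000
PEAK_KW_PER_HOME = 5.0        # peak demand per home (from the article)
H2_LHV_MJ_PER_KG = 120.0      # lower heating value of hydrogen, MJ/kg
LH2_DENSITY_KG_M3 = 70.8      # density of liquid hydrogen near 20 K

power_mw = HOUSEHOLDS * PEAK_KW_PER_HOME / 1000        # 1,000 MW per service
mass_flow = power_mw * 1e6 / (H2_LHV_MJ_PER_KG * 1e6)  # kg/s of hydrogen
vol_flow = mass_flow / LH2_DENSITY_KG_M3               # m^3/s as liquid

print(f"{power_mw:.0f} MW -> {mass_flow:.1f} kg/s of H2 "
      f"({vol_flow * 1000:.0f} L/s as liquid)")
```

In other words, delivering 1,000 MWt requires moving on the order of 8 kg/s of hydrogen, or roughly 120 liters per second of liquid, a modest volumetric flow for a pipe of the dimensions contemplated in Table 1.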
2. SuperCable. This SuperCable cross section and schematic for one pole of a bipolar circuit is roughly to scale. Source: Dr. Paul Grant
Figure 2 depicts the essential physical characteristics and cross section of a basic SuperCable, and a circuit based on it. Note that each "cable" delivers half the total hydrogen power. Tables 1 and 2 contain estimates of the physical dimensions and superconductor material performance needed to achieve the target 1,000 MW capacities for both hydrogen and electricity.
Table 1. Nominal SuperCable parameters enabling delivery of 1,000 MWt of hydrogen. Source: Dr. Paul Grant
Table 2. Superconductor current density and annular wall thickness enabling delivery of 1,000 MWe, given the parameters of Table 1. Source: Dr. Paul Grant
Finally, it is interesting to consider hydrogen in the SuperCable acting not only as a cryogen and an energy delivery agent but also as a possible electricity storage medium. For example, suppose that in the circuit in Figure 2 the liquid hydrogen is circulated through both "poles" (rather than flowing unidirectionally in each), with only small amounts tapped off for delivery, leaving most of the hydrogen available for conversion to electricity.
In such a configuration, a 250-mile SuperCable circuit would store the equivalent of TVA’s Raccoon Mountain reservoir (the largest pumped-storage hydro unit in the U.S.) with a considerably smaller footprint (Table 3). The big caveat here is that the "round-trip efficiency" of reversible fuel cells has yet to be determined. Of course, not all this hydrogen would be immediately available, and a reserve supply—probably stationed at the "recooling booster" stations the SuperCable requires every 5 to 15 miles—would be needed to maintain a sufficient amount of hydrogen for cryogenic purposes. A nationwide development of SuperCable infrastructure could enable the long-sought "commoditization" of electricity through its storage as liquid hydrogen, thereby revolutionizing electricity markets.
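The Raccoon Mountain comparison can be roughed out as follows. The pipe diameter, fuel-cell efficiency, and pumped-storage capacity used here are assumed round numbers for illustration, not the article's Table 3 values.

```python
import math

# Stored-energy sketch for a 250-mile bipolar SuperCable loop.
# Pipe diameter, conversion efficiency, and the pumped-storage
# comparison figure are all assumed values for illustration.
MILES = 250
PIPE_ID_M = 0.25           # assumed inner diameter of each LH2 channel
POLES = 2                  # bipolar circuit: two cables in the loop
FUEL_CELL_EFF = 0.5        # assumed hydrogen-to-electricity efficiency
H2_LHV_MJ_PER_KG = 120.0   # lower heating value of hydrogen, MJ/kg
LH2_DENSITY_KG_M3 = 70.8   # density of liquid hydrogen near 20 K
PUMPED_STORAGE_GWH = 36.0  # assumed capacity of a large pumped-hydro plant

length_m = MILES * 1609.34 * POLES
volume_m3 = math.pi * (PIPE_ID_M / 2) ** 2 * length_m
stored_gwh = (volume_m3 * LH2_DENSITY_KG_M3 * H2_LHV_MJ_PER_KG
              * FUEL_CELL_EFF / 3.6e6)   # 1 GWh = 3.6e6 MJ

print(f"{volume_m3:,.0f} m^3 of LH2 -> ~{stored_gwh:.0f} GWh electric "
      f"(vs. ~{PUMPED_STORAGE_GWH:.0f} GWh assumed for pumped storage)")
```

Even with a modest assumed round-trip efficiency, the liquid hydrogen inventory of such a loop lands in the same tens-of-gigawatt-hours range as a large pumped-storage reservoir.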
Table 3. Comparing the SuperCable’s potential energy storage capacity with that of existing systems. Source: Dr. Paul Grant
Reaching across the country
The SuperGrid is the next step in our vision of the future of the nation’s grid. EPRI has proposed a "continental-scale" (coast-to-coast) underground, superconducting hydrogen-electric transmission system using SuperCables that would supplement the existing grid (Figure 3). Advanced nuclear reactors, such as a high-temperature gas-cooled reactor, capable of producing electricity and hydrogen would be spaced along SuperGrid corridors to supply the hydrogen required to cool the SuperCables, either as a liquid or as cooled, higher-pressure gas. Excess hydrogen could be sold into the local energy market for transportation fuel or might be used as a high-density energy storage medium. Load centers along the SuperGrid could also be designed to withdraw power and hydrogen as required. Integrating non-invasive renewable technologies with the SuperGrid would produce an energy supply chain with no greenhouse gas emissions.
3. Integrated nationwide. The SuperCable could be integrated into a coast-to-coast SuperGrid that moves electricity and hydrogen directly from source to user. Source: Dr. Paul Grant
Technical obstacles remain
Leaving aside for the moment the frictional energy released by the viscous flow of liquid hydrogen, the principal losses from the SuperCable would be radiative heat in-leak from the surrounding ambient environment, which must be removed—along with replenishment of the hydrogen inventory—by the recooling booster stations that maintain design temperatures in the SuperCable. But on a practical basis, the level of ripple induced in a DC line by rectification and imperfect filtering of an AC generation source also could become a serious issue.
For example, even if the ripple factor were only 1%, at 100,000 amps, a 1,000-A (rms) current will exist whose heat production will have to be dealt with. Moreover, managing supply/load variations will require holding the current constant while varying the voltage level. Energizing and de-energizing the electrical system of the SuperCable must be handled with great care, a well-known challenge with persistent-current superconducting magnets. Finally, heat in-leak due to thermal conduction from the ambient environment can be neglected if the vacuum level between the inner cryostat and outer high-voltage insulation sheath can be kept below 10^-5 torr. Each of these technical issues will undoubtedly be the subject of more research and development.
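The dominant steady-state loss named above, radiative in-leak, can be ballparked with the Stefan-Boltzmann law. The cable diameter and effective emissivity (for multilayer insulation in vacuum) below are assumed values chosen only to show the order of magnitude.

```python
import math

# Stefan-Boltzmann estimate of radiative heat in-leak per meter of cable.
# Diameter and effective emissivity are assumed illustrative values.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
T_AMBIENT_K = 300.0      # surrounding environment
T_CRYOGEN_K = 20.0       # liquid hydrogen temperature
CABLE_OD_M = 0.3         # assumed outer diameter of the cold surface
MLI_EMISSIVITY = 0.005   # assumed effective emissivity with MLI in vacuum

area_per_m = math.pi * CABLE_OD_M    # radiating area per meter of cable
heat_w_per_m = (MLI_EMISSIVITY * SIGMA * area_per_m
                * (T_AMBIENT_K**4 - T_CRYOGEN_K**4))

print(f"Radiative in-leak ~ {heat_w_per_m:.1f} W per meter of cable")
```

A few watts per meter is small in absolute terms, but removing it at 20 K is expensive, which is why recooling booster stations are needed every 5 to 15 miles.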
The SuperCable is, of course, a highly speculative concept, even though currently available superconducting materials suggest the concept is technically feasible right now. Yes, there are many difficult engineering issues that remain to be addressed. Among them are these questions:
- How can the substantial forces between the two monopole cables, created by the magnetic fields surrounding their 100,000-A currents, be accommodated?
- What sort of power electronics infrastructure is required to maintain the lowest possible ripple factor?
- How can load/supply variations at constant current be managed?
But, with the potential benefits so high for human civilization, it behooves the worldwide engineering community to marshal its considerable resources to begin seeking answers to these questions.
Want to know more?
The author would like to acknowledge the support, encouragement, and inspiration provided over the years by many colleagues at EPRI—especially his mentor, Dr. Chauncey Starr. Dr. Starr led the Manhattan Project uranium enrichment effort during World War II and later became one of the founders of commercial nuclear power and the founder of EPRI. He died on April 17, 2007—three days after his 95th birthday.
—Dr. Paul M. Grant is an IBM research staff member emeritus and an EPRI science fellow (retired). He holds an appointment as visiting scholar in applied physics at Stanford University. Dr. Grant is also an independent energy consultant and can be reached at [email protected] or at www.w2agz.com.