Connected Plant

Cybersecurity: Keeping Current on a Moving Target

What it means to have a secure energy generating facility has changed in recent years as the threat of cyberattacks grows. As the nation’s energy sector becomes increasingly interconnected, it is more important now than ever to safeguard power plants from cybersecurity threats such as hacking, malware, and viruses.

The benefits of implementing an Internet of Things (IoT) strategy at a power plant are clear, but the stark reality of this day and age is that doing so opens many doors for cyber attackers. IoT at a power plant can increase the efficiency of the plant’s operations, pinpoint maintenance needs earlier than has ever been possible, and help avoid costly operational failures. However, with every new connection made in a connected plant, a new vulnerability is created.

The trick to having the best of both worlds—a secure and connected plant—is implementing a robust cybersecurity strategy that can identify vulnerabilities as they pop up, close them, and alert the user to any breaches of the system’s perimeter.

The Basics of a Secure Plant

With new threats cropping up every day, it is debatable whether a power plant can ever truly be secure. A plant that is impervious one day can be the target of an unheard-of threat the next. “We have kind of a failure of imagination sometimes, about what those new threat vectors could be,” Susan Peterson Sturm, director of product marketing and strategy at Honeywell Industrial Cyber Security, said.

Securing a plant is not a process with an end point, Peterson Sturm suggested, but an ongoing effort to stay abreast of an ever-changing threat. “It’s a misnomer to say there is a secure plant. I think it’s totally cool for somebody to say we have approached this with security by design, we have implemented these security controls,” she said.

Jon Stanford, Americas leader for IoT services and advanced services, and global practice lead for industrial security with Cisco Advanced Services, disagrees slightly, saying that a plant can be truly secure “if the security is addressed one, fundamentally, and two, with the operational realities of that plant in mind.”

Stanford noted three key factors that must be present in a cybersecurity strategy (Figure 1). First, he said, there must be a focus on identifying threats and vulnerabilities. “There’s a lot of concern about the threat actors and their motivations and how vulnerable the IT environment is,” he said. “The approach [to addressing that is] doing vulnerability scanning, identifying those, and then putting remediations in place.”

1. The secure plant. Successfully securing a power plant from cyberattack requires several levels of protection. Source: POWER/Sonal Patel

The second must-have for a secure plant, Stanford said, is robust perimeter protection. “Even though firewalls and intrusion protection are not enough—they’re table stakes really today—you have to do them. So do very robust separation between the plant environment and anything external,” he said.

Finally, according to Stanford, an operator must develop a baseline by which they can track any operational changes within the plant, alerting them to any potential breaches. “You want to have the ability to monitor your environment and detect changes in real time and respond to them,” he said.
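To make the baseline idea concrete, here is a minimal sketch in Python of the kind of baseline-and-detect logic Stanford describes. The device names, ports, and data layout are hypothetical illustrations, not drawn from any Cisco product.

```python
# Minimal sketch of baseline-and-detect monitoring. Device names and ports are
# hypothetical examples, not any vendor's actual data.
from typing import Dict, Set

# Baseline captured during a known-good assessment: device -> set of open ports.
BASELINE: Dict[str, Set[int]] = {
    "turbine-plc-01": {502},        # Modbus/TCP only
    "historian-01": {443, 1433},    # HTTPS and database access
}

def detect_changes(live: Dict[str, Set[int]]) -> None:
    """Compare a live snapshot against the baseline and flag any deviations."""
    for device, ports in live.items():
        expected = BASELINE.get(device)
        if expected is None:
            print(f"ALERT: unknown device on the network: {device}")
        elif ports - expected:
            print(f"ALERT: {device} has unexpected open ports: {sorted(ports - expected)}")

# Example snapshot: Telnet has unexpectedly opened on the PLC, and an unknown
# host has joined the network.
detect_changes({
    "turbine-plc-01": {502, 23},
    "rogue-laptop": {445},
})
```

A production system would track far more than open ports, but the pattern is the same: capture a known-good state, then continuously compare the live environment against it and alert on the difference.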

The Least You Could Do

There is a minimum level of effort that must go into any cybersecurity strategy. Threats from outside actors abound, and doing nothing is no longer an option.

In the past, that baseline was air-gapping, which essentially protects assets by keeping them entirely disconnected from the outside world. While that sounds like a valid means to halt any external cyberthreats, in 2017 there are a few problems with that approach. First, air-gapping only protects against external threats. Unfortunately, that’s not enough. “It’s erroneous, basically, to only think about outside-in types of threats. We really need to think about insider threats, intentional or otherwise,” Peterson Sturm said.

Second, the practice of air-gapping is based on a concept that is no longer compatible with the realities of where business is headed. “What we’ve found is basically about 10 years ago customers started networking their equipment, so that notion of an air gap is like a unicorn, it just doesn’t exist anymore,” Peterson Sturm said.

“Let me state something that’s sort of a fundamental truth: historically, if you go back decades, these power plants were truly air-gapped,” Stanford agreed. “Over time that’s changed, just in the natural course of operation, just as a natural result of how power operators have changed procedures and as the technology that they use on a daily basis has evolved … There are very few instances of truly air-gapped environments. There is some level of connectivity.”

More feasible, though still likely not enough to keep a plant safe, is the installation of firewalls. Firewalls are a necessity at any plant, though installing them presents its own difficulties. As a reality of how power plants are built, a facility will contain a wide variety of equipment from a number of different vendors, each with its own system and its own firewall.

“For a lot of customers, they have to go to each individual ICS (Industrial Control System) provider and buy their security widget, which is another huge pain in the neck,” Peterson Sturm said. “So, not only do they have to know how to operate that supplier’s industrial control system and maintain expertise and training, having operators who know how to use that system, but [they must also do that] on the security side.”

On the other hand, Stanford said, while having multiple systems with multiple security processes may be bulky for the operator, such a setup is also bulky for potential cyber attackers. “From the security perspective, arguably a lack of homogeneity could be a form of protection. There are some who feel like that makes a less-exploitable environment,” he said. “If you have all Windows computers, and if I can exploit Windows, then I can exploit your entire environment. So, if I have malware it’s a simple matter of me, just like a disease, I can just quickly propagate that through your environment… But, if you have, say, five different kinds of operating systems in place, and I exploit Windows then I’m not going to affect the other four types you have.”

While such a setup may help, Stanford specified, it is not a good strategy in and of itself; more is needed to have a truly secure plant.

To add to this confusion, the equipment in use at any given plant is not only going to vary in brand, but also in age. “Control systems that have been around for more than ten years weren’t designed with cybersecurity in mind,” Peterson Sturm said, explaining that different pieces of equipment in the same plant can have very different levels of built-in security.

Avoiding and Halting Attacks

The first step to developing an effective cybersecurity strategy is identifying a plant’s vulnerabilities and understanding how a bad actor might take advantage of those weak spots. Both Stanford and Peterson Sturm recommend undergoing an assessment to better understand the plant’s operations.

“Our first step is really to go in and do an assessment, and that will include people, process, and technologies,” Peterson Sturm said of Honeywell’s process (Figure 2). This assessment, Peterson Sturm explained, includes interviewing employees to understand what they do and how they do it. This helps to identify any potential impacts that day-to-day operations can have on the security of the plant.

2. Establishing a baseline. Cybersecurity experts must first establish a baseline of plant operations, which they can then use to monitor any anomalies using products such as Honeywell’s Risk Manager. Courtesy: Honeywell

Honeywell then does a review of company policies and procedures that could help to reinforce the company’s security.

Finally, an evaluation of all plant technology is completed. “It is kind of going through the facility and getting a baseline of the electronic devices, basically any type of embedded control device, what software is on there, what protocol it’s using, which ports and services are open, the level of patching, and then any type of antivirus etcetera,” she said.
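As a rough illustration of what one entry in such a baseline might capture, the Python sketch below models a single device record with the attributes Peterson Sturm lists. The field names and example devices are hypothetical, not Honeywell’s actual schema.

```python
# Sketch of one record in a plant device inventory; illustrative fields only.
from dataclasses import dataclass
from typing import List

@dataclass
class DeviceRecord:
    name: str               # embedded controller, HMI, historian, etc.
    software: str           # installed firmware or operating system version
    protocols: List[str]    # industrial protocols in use
    open_ports: List[int]   # listening ports and services
    patch_level: str        # date or identifier of the last applied patch
    antivirus: bool         # whether endpoint protection is present

inventory = [
    DeviceRecord("feedwater-controller", "firmware 4.2.1", ["Modbus/TCP"],
                 [502], "2016-11", antivirus=False),
    DeviceRecord("operator-hmi-03", "Windows 7 SP1", ["OPC DA"],
                 [135, 445], "2017-03", antivirus=True),
]

# A simple triage pass over the baseline: flag devices with no endpoint protection.
for device in inventory:
    if not device.antivirus:
        print(f"Review: {device.name} has no antivirus coverage")
```

An inventory like this is what makes the later monitoring step possible: anomalies can only be detected once the expected state of every device has been written down.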

Once a baseline security level has been established, it is time to think like a criminal (Figure 3). Honeywell employs a cybersecurity researcher who works to get into the mind of a hacker, identifying how they might go about attacking a plant. “I mentioned the failure of imagination,” Peterson Sturm said, “[We have] someone who can think about ‘well, what’s the next possible type of threat?’ ”

The Honeywell Industrial Cyber Security Lab, located in Duluth, Georgia, is a world-class environment used to develop and test new cybersecurity solutions for the industrial market. It also serves as a means to share technological advancements and solutions with interested users.
3. Keeping up. In the cybersecurity realm, there is always a new danger lurking. At Honeywell’s Industrial Cyber Security Lab, cybersecurity researchers work to identify new threats and develop solutions. Courtesy: Honeywell

Cisco focuses less on trying to identify new threats, and more on ensuring that should a hacker make it past the robust perimeter defense put in place, they are still unable to cause significant damage to the plant. “You want to have the ability to monitor your environment and detect changes in real time and respond to them. The most effective type of solution to address that embodies threat intelligence within it,” Stanford said.

The security products and services offered by Cisco have threat intelligence and machine learning embedded within, according to Stanford. “They can watch these environments to make sure these critical processes don’t change from their authorized patterns,” he said. “That’s really where the biggest bang for the buck comes in.”
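The “authorized patterns” concept can be illustrated with a toy allow-list check like the one below. The commands and device names are invented for the example and do not represent Cisco’s implementation.

```python
# Toy allow-list check: flag control traffic that falls outside authorized patterns.
# Commands and device names are hypothetical.
AUTHORIZED_COMMANDS = {
    ("operator-hmi-03", "turbine-plc-01"): {"read_register", "write_setpoint"},
}

def check_command(source: str, target: str, command: str) -> bool:
    """Return True if the observed command matches an authorized pattern."""
    allowed = AUTHORIZED_COMMANDS.get((source, target), set())
    if command in allowed:
        return True
    print(f"ALERT: {source} -> {target}: unauthorized command '{command}'")
    return False

check_command("operator-hmi-03", "turbine-plc-01", "write_setpoint")   # authorized
check_command("engineering-laptop", "turbine-plc-01", "upload_logic")  # flagged
```

Commercial tools layer threat intelligence and machine learning on top, but the underlying question is the same: does this activity match what the process is authorized to do?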

The Enemy Within

Whether they mean to be or not, a plant’s operators can be one of its greatest vulnerabilities. Stuxnet, the cyberattack that caused significant damage to Iran’s nuclear program in 2009, for example, was housed on USB drives. According to reviews of the virus, attackers first infected the computers of five companies with ties to the nuclear program. Employees of those companies then unknowingly spread the virus to Iran’s nuclear facilities on infected USB drives (Figure 4).

4. Hidden risks. Cybersecurity risks can be present in the most mundane places. Recent studies have shown that nearly 20% of people will use USB drives picked up in public places, potentially opening the door for bad actors. Source: POWER/Abby L. Harvey

Stuxnet didn’t just wreak havoc on computers; it also caused physical damage. The virus made its way into Iran’s Natanz Nuclear Facility in July 2009, and centrifuges used to enrich uranium gas quickly started failing. By August, 328 centrifuges had failed. By November, another 656 were failing.

“The biggest threat vector actually is through malware that’s hand carried into the environment in terms of the employees or their authorized vendors or contractors being a threat. A huge concern is the devices that they use on a daily basis,” Stanford said. “This would include portable media, like the USB drives and computers, typically laptops, that they use to support the equipment that operates the power plant.”

According to a 2015 study conducted by tech association CompTIA, roughly 17% of employees will pick up and use a discarded USB drive. CompTIA scattered USB drives in high-traffic public areas. The drives were loaded with text files that instructed anybody who plugged them in to send an email message to a listed address or click on a trackable link. The study found that some of the people who clicked on the link or sent an email were employed in the IT industry. “As the findings show, even the most IT literate end users can make precarious decisions when faced with potentially suspicious technology, demonstrating how challenging it can be to instill strong cybersecurity habits (not merely knowledge),” the study says.

Unfortunately, at this point, there may not be much that can be done to ensure that employees aren’t accidentally carrying malware into a facility. “We talk about people, process, and technology. Training, making people aware,” Peterson Sturm said. “There are tons and tons [of] statistics about the number of people who will click on a phishing email, so that’s a big thing on an awareness level.”

In the future, it may be easier to protect a facility from the dangers of an infected USB drive. “We don’t have a good way right now to really secure the use of USBs, so that’s one area where Honeywell is doing some more [research]. We see a significant need around that, and we’re doing some product development in that area. Policy, corporate policy, and training alone just can’t do it,” Peterson Sturm said.

Should an employee ignore their training, monitoring is the last defense against an accidental cybersecurity breach. “We have a set of capabilities—we have solutions that we can put in place—that will watch for and detect anomalies within these environments,” Stanford said.

Of course, there is also the possibility of a malicious insider attack. Luckily, Stanford said, “The number of incidents of malicious insider behavior [suggests it] is quite rare.”

Every power plant has vulnerabilities; no plant is impervious to attack. The world of cybersecurity is constantly evolving as new attacks are designed every day. Keeping a plant as secure as possible is a constant struggle that requires constant attention. However, the potential costs of a cyberattack, both to the plant operator and the customers they serve, far outweigh the costs of implementing a robust cybersecurity strategy. ■

Abby L. Harvey is a POWER reporter.
