The following text is a transcript from the video with the same title:

In 2007, an unidentified person submitted a code sample to the collaborative anti-virus platform VirusTotal. Not recognized by any anti-virus company at the time, that code was the first true cyber weapon in history, designed to physically attack a military target. This is the story of the malware that became the icon of cyber warfare.

Backstory

Every story has a backstory. Ours goes back to 1975.

In that year, a Pakistani metallurgist named Abdul Qadeer Khan, or A. Q. Khan for short, working for the European uranium enrichment consortium Urenco in the Netherlands, steals design plans for the gas centrifuges that are the backbone of the enrichment process. Pretending to go on holiday leave, Khan never comes back. Instead, he becomes the head of the Pakistani nuclear program that ultimately leads to their atomic bomb.

In addition to this achievement, Khan turns out to be a businessman of sorts. Understanding that other nations are eager to develop nuclear weapons as well, he seizes the business opportunity and starts a lucrative side business. Unknown to the Pakistani government, his Khan Research Labs sells uranium enrichment technology to the highest bidder, including North Korea, Libya, and Iran.

In the case of Iran, these dealings first happen in 1987, at the height of the war with Iraq, when Iraq had massively increased their use of chemical weapons.

But after the war ended in 1988, it still takes Iran over a decade to get serious about their little Manhattan Project.

In 2000, they secretly start building the Natanz fuel enrichment plant.

The giant construction site and its illicit purpose make headlines two years later when a domestic opposition group reveals the existence of undeclared nuclear facilities.

Unable to hide the obvious any more, Iran comes clean about Natanz in 2003 and enters into negotiations with the European Union format known as the EU3, meaning France, Germany, and the United Kingdom. Hassan Rowhani becomes Iran’s chief negotiator; yes, it’s the same person who will later become Iran’s president. The EU3 ultimately negotiates that Iran halts their enrichment activities for the time being. Remember the phrasing here: For the time being.

In the meantime, the United States takes a more hands-on approach to the problem. The CIA gets actively involved in taking down the Khan network, and US and British operatives start to sabotage Iran’s program by compromising the supply chain with bogus parts.

Khan’s nuclear trafficking operation is shut down by the Pakistani government, and Khan customer Libya is forced to dismantle its enrichment program.

Things didn’t go as well with Iran. In 2005, hardliner Mahmud Ahmadinejad is elected president of Iran and publicly announces that he will restart the nuclear program. And for good measure, he also announces his intention to wipe Israel off the map.

In Iran’s interpretation, the relaunch is not even a violation of the EU3 agreement. As Iran’s chief negotiator Rowhani reminds the media, the EU3 conditions were only accepted for those parts of the plant where Iran didn’t face technical difficulties. And technical difficulties they had. In other words, Rowhani had negotiated a way for Iran to push their technological development forward while they couldn’t start production anyway. He had simply outsmarted the EU.

Technical Difficulties

It is quite obvious what those technical difficulties were, and they get us right to Stuxnet.

Due to trouble purchasing equipment under embargo conditions and due to sheer technical incompetence, Iran lacked the technology for precision manufacturing of the centrifuge rotors that are supposed to spin at a constant 63.000 RPM for months, if not for years.

The lifetime of a Pakistani P-1 centrifuge is about ten years. Iranian centrifuge rotors, on the other hand, kept cracking constantly. So, starting up uranium enrichment production for real was simply out of the question.

That is, until somebody figured out a clever workaround. What if one could invent a centrifuge cascade design that tolerates the failure of individual centrifuges?

And that’s exactly what Iran did. They, or their international contractors, invented a fault-tolerant design for their cascades, using modern digital automation technology. They equipped centrifuges with vibration sensors and valves, allowing a defective centrifuge to be isolated from the flow of gas. The isolation is achieved by three valves that, when closed, cut off a vibrating centrifuge from the cascade. The faulty centrifuge can then be stopped and replaced while the cascade keeps operating.
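
To make the mechanism concrete, here is a minimal sketch in Python of what that isolation logic amounts to conceptually. The names and the threshold are assumptions made for illustration; the real logic runs on industrial controllers, not on a PC.

```python
# Conceptual sketch of per-centrifuge isolation logic; all names and the
# threshold are invented for illustration, not recovered from the actual system.

VIBRATION_LIMIT = 1.0  # hypothetical threshold, arbitrary units

def isolate_if_faulty(read_vibration, isolation_valves):
    """If a centrifuge vibrates too much, close its three valves to cut it
    off from the gas flow so it can be stopped and replaced."""
    if read_vibration() > VIBRATION_LIMIT:
        for close_valve in isolation_valves:  # the three shut-off valves
            close_valve()
        return True   # centrifuge is now isolated from the cascade
    return False      # centrifuge keeps running normally
```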

Shut-down centrifuges within an operational cascade are an everyday fact of normal operation at the Natanz plant. We can even see them in official press photos, as if Iran were proud to show off their accomplishment. In this photo from the 2008 press tour, the noteworthy item is not President Ahmadinejad looking at a computer screen, but the grey dots indicating inoperative centrifuges.

There was one problem left, though. Isolating centrifuges via shut-off valves impacts the overall gas pressure in the respective cascade stage, something that needs to be compensated for.

In order to do this, Iran invented a clever hack for their cascade dump system, which is normally used to evacuate a centrifuge cascade during an emergency. They extended the dump system to compensate for the overpressure that results from cutting individual centrifuges off.

In every cascade stage they installed an overpressure valve that is controlled by a dedicated pressure controller. The controller monitors stage pressure via a local pressure sensor in an arrangement that control engineers call a closed loop. And when overpressure is detected, it is simply released into the dump system.
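
Conceptually, such a closed loop is very simple. The following sketch is only an illustration under assumed names and numbers; it is not the actual controller configuration.

```python
# Minimal conceptual sketch of a closed-loop overpressure controller for one
# cascade stage. Setpoint, margin, and function names are illustrative assumptions.

STAGE_SETPOINT_MBAR = 2.0   # hypothetical near-vacuum operating pressure
RELIEF_MARGIN_MBAR  = 0.5   # hypothetical tolerance before relief kicks in

def control_step(read_stage_pressure, open_relief_valve, close_relief_valve):
    """One loop iteration: read the local pressure sensor, act on the relief valve."""
    pressure = read_stage_pressure()
    if pressure > STAGE_SETPOINT_MBAR + RELIEF_MARGIN_MBAR:
        open_relief_valve()    # vent excess gas into the dump system
    else:
        close_relief_valve()   # pressure is fine, keep the stage sealed
```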

The end result is a plant full of technically obsolete and ill-manufactured centrifuges stuffed with digital automation technology that allows them to enrich uranium with minimal efficiency.

Compare the centrifuges at Natanz with those at Urenco, where A.Q. Khan stole the design. While the Iranian cascades are packed with valves, sensors, and cables, one sees only clean pipes at Urenco. No valves, no sensors, no cables.

And so, with a truckload of digital automation technology, Iran solved their technical difficulties, while making themselves vulnerable to cyber attacks.

Absolute Cyber Power

With the new Cascade Protection System in place, Iran starts to commission the Pilot Fuel Enrichment Plant in 2006. Located in the above-ground part of Natanz, it is kind of a lab environment for testing centrifuge operations and cascade designs; a mini fuel enrichment plant with a total of six cascades that are not meant for actual production.

The next year, the underground part goes operational as well, and so does Stuxnet.

Now if that isn’t some coincidence. Given that the development of the malware must have taken well over a year, one cannot help but acknowledge that the attackers were well prepared.

The destructive code sequences did not target computers, a fact that caused anti-virus companies to completely miss the point when looking at the code sample submitted to VirusTotal, and to remain baffled three years later, even when Stuxnet had become the most publicized malware of all time. Stuxnet didn’t delete, steal, or manipulate data on Windows PCs. It didn’t even get to those PCs by self-propagation as we see in the later version. It got there by physical transmission, either by compromised laptops that traveled in and out of the plant, or by infected USB sticks. And it did not communicate with command-and-control servers on the Internet, a feature of the later version that made it easy to detect. It operated autonomously and in perfect silence.

The intermediate target for the attack code is what we call engineering systems in automation technology. These systems are used to configure industrial controllers that directly control a physical process in real time. Since these controllers don’t have a keyboard, a screen, or a configuration user interface, regular Windows computers equipped with the engineering software of the respective automation vendor are used to do the job.

When connected to a controller that is part of the Cascade Protection System, the really destructive part of the code, also called the payload in infosec terminology, jumps over to the small grey boxes and merges with the legitimate control logic. And then it sits there and does little more than analyze process conditions for weeks.

When conditions are right, the malware takes control while allowing the legitimate control logic to continue execution in the background, but disconnected from physical reality. In a much-publicized stunt that I discovered in 2010, the legitimate code is fed fake sensor values that the malware recorded just before taking over control. While everything appears normal to the original code, the malware operates completely under cover and manipulates valves at will.
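
To picture the record-and-replay trick, here is a rough sketch. It is conceptual only; the real payload was controller code on Siemens hardware, and all function names here are invented placeholders.

```python
# Conceptual sketch of the record-and-replay trick, not the actual controller payload.
# read_sensors(), run_legitimate_logic(), and write_valves() are invented placeholders.

def record_phase(read_sensors, n_samples):
    """Before taking over, capture a window of normal sensor readings."""
    return [read_sensors() for _ in range(n_samples)]

def attack_phase(recorded, run_legitimate_logic, write_valves, malicious_commands):
    """Feed the legitimate logic stale, pre-recorded inputs while the malware
    drives the real valves. The original code sees a seemingly normal plant."""
    for step, fake_inputs in enumerate(recorded):
        run_legitimate_logic(fake_inputs)    # legitimate code operates on fake values
        write_valves(malicious_commands[step % len(malicious_commands)])
```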

The goal of the attack is to damage centrifuge rotors by overpressure. And here we’re not talking about multiples of atmospheric pressure as you use for the tires of your car. We’re talking about fractions of atmospheric pressure. How so, you may wonder. Well, because the centrifuges operate near absolute vacuum, and there’s a reason for it. Uranium hexafluoride solidifies at around 100 mbar, or one tenth of atmospheric pressure. It turns into solid material, just like water turns into ice below the freezing point. If the pressure goes beyond what is called the triple point, the gas solidifies and destroys the centrifuges instantly.

Now wouldn’t that be a great idea, you may think. Good point, and I’ll get back to this. For now, just consider that this was not the goal of the attackers. How do I know? Because they could easily have done so; it would have made the attack much simpler.

So in reality it was about creating temporary rotor stress by increasing gas pressure, thereby getting more uranium hexafluoride into the centrifuge, which means more stress on the rotor. But just to the point where the aluminum tubes were slightly damaged, resulting in a shorter operational lifetime. The attackers went to great lengths to cover this up.

They closely monitored cascade operation until a certain set of process conditions was met. Then they cut off both ends of the cascade by closing valves, which inevitably causes the pressure in the cascade to rise steadily. This would have done nothing to the cascade if the attackers hadn’t also manipulated the overpressure valves.

It is clear from the attack code that the attackers went out of their way to avoid catastrophic damage.

When, based on sensor readings, the attack code decides that enough is enough, things are restored back to normal.
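
Pieced together from the description above, the overall sequence amounts to a simple state machine along the following lines. This is a sketch only; the cascade object, its methods, and the condition checks are placeholders, not recovered constants.

```python
# Conceptual state machine of the first-campaign overpressure attack.
# The cascade object and all its methods are invented placeholders.

def overpressure_attack(cascade):
    # 1. Passively monitor until the cascade is in a suitable operating state.
    while not cascade.conditions_look_right():
        cascade.observe()                  # possibly for weeks

    # 2. Seal both ends of the cascade and block pressure relief, so pressure rises.
    cascade.close_feed_and_tails_valves()
    cascade.block_overpressure_relief()

    # 3. Let rotor stress build up, but stop well short of solidifying the gas.
    while not cascade.enough_stress_accumulated():
        cascade.wait()

    # 4. Quietly restore normal operation, as if nothing had happened.
    cascade.restore_valves()
    cascade.resume_normal_control()
```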

Contrary to common belief, the recording and replay of sensor values on the controller was not primarily used or even necessary to fool human operators. It was used to fool the legitimate control logic, which was still executing in parallel.

In case you are wondering, in order to pull this off the attackers used a legitimate product feature of the controllers which is still functional today. Intended for simulation purposes, the ill-documented feature allows software to overwrite actual sensor inputs with fake values.

But the attackers DID need to fear detection by human operators. However, these operators would not be sitting in the control room; they walk around in the cascade hall. Remember the inline pressure controllers for overpressure relief? They sit right in the cascade and display the gas pressure for the cascade stage on a small liquid crystal display. By de-calibrating these controllers, the attack code made sure that only normal values were shown.

It is crystal clear that the attackers must have had a realistic testbed available. And I’m not just talking about a couple of centrifuges with automation equipment. I’m talking about an operational cascade filled with actual uranium hexafluoride.

Putting all these characteristics together, one cannot fail to acknowledge the unprecedented absolute cyber power on display. A bunch of bits and bytes was capable of compromising the operation of a nuclear facility that was a designated military target. It did so in absolute silence and complete autonomy. Unlike the later version, this code did not call out to command-and-control servers on the Internet. It did not use fancy zero-day exploits. It was the first true cyber weapon; a software artifact designed to cause physical harm.

We don’t know exactly what results the first campaign achieved. IAEA inspectors noticed a higher-than-usual amount of hexafluoride in the cascade dump systems at Natanz, which would be an indicator of Stuxnet at work. But apparently, no dramatic effects were caused, which might just be due to the fact that the attackers feared detection more than they feared causing instant destruction, even by accident.

The sophistication of the campaign has more of the feel of a nerdy engineering project than of a military operation. It impresses by the demonstrated total and undetected control over adversary infrastructure, but certainly not by audacity.

All this would change soon.

Concert in the Cascade Hall

In April 2008, President Ahmadinejad invites the international press to a tour of the Fuel Enrichment Plant. Photos of that tour go around the globe, and proliferation experts are shocked to realize how far the Iranian program has developed.

Two months later, Israel starts an extensive military exercise to practice an air strike against the facility. They also make it clear to the US government that they want a piece of the cyber action.

Later that year, Barack Obama is elected President of the United States. Not only is he eager to continue the cyberwar project he inherited from the Bush administration, he also gives it a complete overhaul.

The new Stuxnet variant that emerges in 2009 uses different tactics, and it is clear that it was developed by a different team, or multiple teams to be precise.

The infosec cavalry gets a green light from Washington and Tel Aviv and uses the opportunity to show off. The best taxpayer-funded hackers put their offensive cyber arsenal to work. The result is Tailored Access Operations on steroids: multiple zero-day exploits and stolen digital certificates are assembled to infiltrate ONE target; a target for which the option of physical infiltration apparently was no longer available.

As much as the effort on the infiltration routines is extended, the effort on the cyber-physical part that goes onto the controllers is reduced. The new payload is much smaller, much less sophisticated, and targets a different automation system: the Centrifuge Drive System. It controls the exact speed at which the rotors spin. The Cascade Protection System is left alone in the second campaign.

There’s a reason why Iran operates their centrifuges 4.000 RPM below design speed, which is 63.000 RPM. Higher rotor speed means more mechanical pressure on the ill-manufactured rotors. In the second campaign, a whole cascade group of up to 984 centrifuges is accelerated 21.000 RPM above design speed, or 40% above normal operating speed. After several minutes at overspeed, normal speeds are restored.
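
The numbers are easy to check with a quick back-of-the-envelope calculation, using the speed values just mentioned:

```python
# Back-of-the-envelope check of the rotor speeds mentioned above.
design_speed_rpm = 63_000
normal_speed_rpm = design_speed_rpm - 4_000    # 59,000 RPM in normal operation
attack_speed_rpm = design_speed_rpm + 21_000   # 84,000 RPM during the overspeed run

increase = (attack_speed_rpm - normal_speed_rpm) / normal_speed_rpm
print(f"{increase:.0%}")   # ~42%, i.e. roughly 40% above normal operating speed
```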

In the next run, which executes about a month later, the malware brings the centrifuges almost to a halt before spinning them up again. This way the rotors are taken through their critical frequencies, which is guaranteed to cause vibration that has a chance to break the rotor. The whole deceleration and acceleration run takes fifty minutes.

Different from the silent first campaign, there is no way that the Iranian operators could not have realized what was going on — that is, unless they were deaf.

Any rotating or oscillating physical object emits sound waves that humans can hear, provided they are in the frequency range between 20 and 16.000 Hertz. Pitch, or frequency, is proportional to rotation speed. Hence, fast rotation means high pitch, and slow rotation means low pitch. Everybody has experienced this when driving a car, for example. The more you accelerate, the higher the pitch of the engine sound.

In the following clip, listen to the background noise. What you hear is IR-1 centrifuges at Natanz spinning at their normal operating speed of 59.000 RPM.

Now listen to what the same centrifuges sound like when accelerated to 84.000 RPM.

The pitch change is impossible to miss. Even more so for the deceleration run every other month, in which the rotors are spun down to 120 RPM, or 2 Hz. Since 2 Hz is below the range of audible frequencies for the human ear, the affected cascades went silent.
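
The pitch numbers follow directly from the rotation speeds, since the fundamental acoustic frequency of a rotor is simply its revolutions per second:

```python
# Rotation speed to fundamental frequency: revolutions per minute divided by 60.
def rpm_to_hz(rpm):
    return rpm / 60

print(rpm_to_hz(59_000))   # ~983 Hz - normal operating speed, clearly audible
print(rpm_to_hz(84_000))   # 1400 Hz - overspeed run, noticeably higher pitch
print(rpm_to_hz(120))      # 2 Hz    - deceleration run, below the audible range
```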

The stealthy cyber weapon had been turned into a prank.

Iranian operators could not mistake what they heard. It was all too obvious that their sophisticated control technology was not working as intended. And so, Iranian engineers begin to search for causes. In August 2009, Iran shuts down over 600 centrifuges, in November of the same year another 300-plus centrifuges, and two months later yet another cascade of 164 tubes.

Due to the new self-propagation mechanism, Stuxnet spreads well beyond Natanz. No damage is done to other control systems because the attackers made sure that the real attack routines can only affect controllers with a matching configuration — of which they apparently had a copy beforehand. However, infections occur quickly throughout the world, and it is predictable that rather soon the virus would catch the attention of antivirus experts.

With its wealth of zero-day exploits and noisy network traffic that not even a junior cyber security specialist on his first day on the job could miss, it was only a question of time before Stuxnet would be detected. That time comes in June 2010, when antivirus experts receive a code sample, spot the zero-day exploits used in the dropper, and sound the alarm. Stuxnet is now all over the news.

However, even the world’s best antivirus talent still has no clue what the purpose of the mysterious malware is: an uber-virus with nation-state-level exploits and a mysterious payload for an unidentified target. It takes another couple of months until September 2010, when I determine, based on our forensic analysis, that Stuxnet’s target is the Iranian nuclear program; something that neither the media nor Iranian experts wanted to believe for weeks, some for months. It just seemed too far-fetched at the time.

In November 2010, Iran halts operation at Natanz completely in an effort to get rid of the malware. Their Stuxnet story had ended.

Afterstory

The most common misconception about Stuxnet is that its mission objective was to destroy the centrifuges at Natanz in a more or less catastrophic event, and that the attackers failed miserably. But this is just nonsense. The attackers could have achieved that easily but chose not to.

In the first campaign, they could simply have kept the outflow valves closed until the operating pressure in the centrifuges had reached the level where the uranium hexafluoride solidifies. At that moment the whole cascade unit with roughly 1000 centrifuges would have pretty much exploded due to excessive vibration.

In the second campaign, the attackers could have left the centrifuges spinning at overspeed until even the last centrifuge was shut off by the still functioning Cascade Protection System or by Iranian operators executing an emergency shutdown.

But again, the attackers chose not to. After their little concert in the cascade hall, they carefully decelerated back to normal operating speed as if nothing had happened.

The reason for not attempting catastrophic destruction is obvious. Iran had long been capable of producing low-grade centrifuge rotors at industrial scale and had a substantial stockpile that could be deployed instantly.

The obvious mission objective was to slow Iran down on their way to the production of weapons-grade uranium, making it more costly, and ideally having Iran lose confidence in their capability to get there with the given resources.

Along these lines we see a significant shift of mission focus that was missed by most observers.

Stuxnet started as yet another attempt to sabotage the Iranian centrifuges. The US had provided Iran with compromised parts in the past. They just took existing efforts to the digital realm.

And pretty much by coincidence, along the way, the concept of cyber warfare materialized. The attackers realized that they had accidentally created something bigger than just another means to mess up Iran’s centrifuges.

The much different second campaign can be viewed as aggressive experimentation with the new concept. All the good and expensive stuff from offensive cyber operations was brought in: Zero-day exploits, stolen digital certificates, remote updates via rogue command-and-control servers operated by government entities, the works. Staying covert was no longer a priority. Natanz had been turned into a test range for cyber weaponry where digital live rounds were fired.

But while the second campaign was a show of force, it was much more show than actual use of force. Knowing that the malware was bound to be discovered, the attackers even left the much more terrifying payload from the first campaign in the code, albeit deactivated. They wanted to make sure that the world would see it.

Anything but disappointed about the outcome of this experiment, the United States formed US CYBER COMMAND in 2011, arguably the best funded and most capable military cyber organization on the globe.

But whatever CYBER COMMAND is doing, they did not launch further cyber-physical attacks that anybody would know of. The Obama administration shifted their Iran policy towards a more friendly stance which resulted in the so-called “nuclear deal” in 2015. The Trump administration reversed course but chose crippling economic sanctions as their main lever.

In the ten years since Stuxnet was uncovered, we saw only one confirmed and successful cyber-physical attack worth mentioning. It happened in late 2015 and involved parts of the Ukrainian power grid. Cyber attackers took down a ridiculously insecure power distribution system and caused 200 thousand Ukrainians to sit out a Christmas evening in the dark. Depending on their stock of candles that may have been more than an inconvenience, but it certainly was not as much of a problem as for the over 3 million Californians who endured loss of electrical power in 2019 caused by deliberate power shutdowns. And even that didn’t cause destruction, death, or chaos.

The simple fact is that we didn’t see disastrous cyber-physical attacks against critical infrastructure, manufacturing plants, or terrorist targets such as nuclear power plants and chemical facilities. Not one. That’s certainly not what I expected back in 2010.

I was afraid that Stuxnet would be the beginning of widespread cyber-physical attacks by nation states, criminals, terrorists, and weirdos (which, I think, these days are called “activists”). Luckily, that didn’t happen. That doesn’t mean we are safe, but it does mean that you don’t need to pay attention to every hyperventilating news story or vendor briefing that suggests Cyber Armageddon is just around the corner.

The reality is that you keep getting alarmed about an intangible menace by an industry that thrives on fear. You keep hearing about an alleged constantly increasing cyber threat that nobody can actually measure. For any botched cyber attack you are educated more about what COULD have happened rather than about what actually DID happen, which, in the cyber-physical space, is usually not even worth mentioning.

Consider this example, which is representative of so many others. In 2015, there was alarming news about an Iranian cyber attack on a dam in New York.

If your first impression when reading the headline was something like Manhattan under water, you would not have been alone, yet you would have been completely fooled. The “New York dam” actually was a small floodgate in upstate New York, and the attack, if it had ever been executed, which it was not, would barely have resulted in any noteworthy damage.

Ten years ago the media didn’t want to believe my claim that Stuxnet was designed to physically attack Iran’s nuclear program and ignored my reporting for weeks. These days, it’s almost the complete opposite. Every major or minor explosion and power outage is speculated to have a cyber background. Media and cyber security vendors attempt to substitute real-world cyber-physical attacks with speculations about what could happen, and on hypothetical elaborations about shady cyber adversaries and their imagined super powers.

The fact is that even though Stuxnet introduced cyber weapons for real, cyberwar didn’t happen in the decade that followed, and we have the means to make sure that it stays that way for quite a while.

And so our Stuxnet story ends on a positive note.