
Jun 07, 2011

Enumerating Stuxnet’s exploits

There are several misconceptions about the exploits used in Stuxnet, such as that all underlying vulnerabilities have been fixed by now, or that there is no need to worry about copycats because the exploits at the controller level were highly specific and would require insider knowledge and extreme resources to copy. Here we explain why such assumptions are wrong and why Stuxnet can actually be thought of as a kind of toolbox for the wannabe cyber warrior.

Most people think of Stuxnet’s exploits as complex but structured hacker stuff at the operating system level, plus some mushy, arcane 70s-style controller code that cracked centrifuge rotors. In reality, the automation side of Stuxnet is as modular, structured, and complex as the code found at the operating system level. So let’s try to break down Stuxnet’s exploits into categories:

1. Operating system exploits (generic)
1.1 Two stolen digital certificates
1.2 Four zero-day vulnerabilities plus at least one known vulnerability
1.3 Peer-to-peer update logic

2. Windows application exploits (generic)
2.1 Default database password for SCADA application, plus SQL injection, plus forced SQL execution
2.2 Hijacking the legitimate driver DLL (s7otbxdx.dll)
2.3 Executing arbitrary code in project folders of the engineering software

3. Controller exploits (generic)
3.1 Code injection to any operation block, taking priority over legitimate code
3.2 Hooking system functions
3.3 I/O filtering and faking

4. Physical process exploits (mostly target specific)

Of all the exploits listed, only category four is tied to a specific target configuration. In essence, this is application-level code that is specifically designed to slowly damage IR-1 centrifuges. However, not even this code is 100% tied to centrifuges, because potential attackers can learn from the example how to damage any machine that uses variable frequency drives by inducing mechanical stress. All the other exploits listed have zero relationship to centrifuges or uranium enrichment and can be re-used against any other target, be it a power plant, a chemical facility, or an air traffic control system, to name a few out of many. Certainly the exploit code is product-specific on the binary level; for example, the exploits listed under category three can only be used to attack controllers of the Siemens S7-300 and 400 series, but not, say, Allen-Bradley controllers, even if similar vulnerabilities exist in those products. So while a potential attacker might be able to copy the concepts, he or she would not be able to simply copy the code.

1. Exploit category one (operating system exploits) got the largest coverage in the media, perhaps because it is also the area that IT security and anti-virus companies understand best. While it is a known fact that Microsoft has issued security patches for all of these exploits by now, this certainly does not mean that industrial installations are already protected by those fixes. Just like any other interesting exploit, they made their way into exploit tools, a.k.a. penetration testing frameworks, such as Metasploit and Canvas. Industrial installations with sloppy (or non-existent) patch management can easily be attacked by lazy malware developers who just use one of these products to test whether they can penetrate systems – systems they usually don’t know much about. The more advanced stuff, i.e. the peer-to-peer update, is also available in Canvas and ready for use by potential attackers.

2. Exploit category two affects the engineering software, known as Simatic Manager, and the corresponding SCADA application, known as WinCC. Apart from some comments on the default database password issue, the vendor has chosen to remain silent about these exploits. However, compared to the other two vulnerabilities in this category, the default database password is just a nuisance. Exploit 2.3 is not a buffer overflow but a design flaw, and it is also the major reason why a Stuxnet-infested site is so difficult to clean up. To the best of our knowledge, the vendor has no intention of issuing security patches for the underlying vulnerabilities in this category. Perhaps as an alternative, both software applications were certified a couple of weeks ago for use with the whitelisting solution from McAfee. (We urged vendors to certify whitelisting solutions back in September 2010.) End users who do not install this security product remain vulnerable to the exploits mentioned. What’s more, even end users who do protect their engineering stations and SCADA systems with SolidCore are still at risk, because the Windows side is only half of the problem. An attacker can install a rogue driver DLL on other (unprotected) systems on the process network to attack controllers, or dismiss the DLL right away and implement a custom driver. So unless every Windows PC with access to the controllers is properly whitelisted, regardless of whether it is running the vendor’s engineering software / SCADA application or not, the installation as such remains insecure. The bottom line is that as long as the controllers remain vulnerable, any security upgrade on the IT side is of little use. It should also be kept in mind that once one controller in a network is compromised using the attack vector used by Stuxnet (see next paragraph), it is technically possible to propagate an attack to peer controllers.
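Even without a full whitelisting product, defenders can at least detect a swapped-out driver DLL by comparing the file on disk against a known-good fingerprint. A minimal sketch in Python – the digest constant is a placeholder, not the real value, and must be taken from a trusted reference installation:

```python
import hashlib
from pathlib import Path

# Placeholder: obtain the real SHA-256 digest of the legitimate
# s7otbxdx.dll from a trusted, known-clean reference system.
KNOWN_GOOD_SHA256 = "0" * 64

def file_sha256(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_driver_dll(path: Path) -> bool:
    """True if the driver DLL on disk matches the known-good digest."""
    return file_sha256(path) == KNOWN_GOOD_SHA256
```

Note that such a check only helps against the DLL hijack on the machine where it runs; as argued above, it does nothing against a custom driver installed on an unmonitored PC elsewhere on the process network.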

3. Exploit category three contains exploits that literally open the door to extremely aggressive attacks that do not have to be nearly as surgical as what was seen in Stuxnet. The exploits listed here offer an attacker a wide range of opportunities, from target-specific mechanical destruction to low-profile yet effective DoS-style attacks. For example, an attacker can learn from Stuxnet which code to insert for a condition-based freeze of the main cycle. This can be achieved with less than ten lines of code, which might someday be downloadable from the Internet, or already come pre-packaged in an exploit tool. The difference between stopping a controller and freezing the main cycle is that in the latter case, outputs don’t fall back to safe states. They just keep their present state for as long as the attacker wants. Drives will keep spinning, valves will remain open, pumps will keep running. Every process engineer knows what that means. The controller’s run-mode LED will stay green, and no entry will appear in the diagnostic log. All this requires zero insider knowledge. Fixing the underlying vulnerabilities is particularly difficult because they are considered “features” rather than “bugs”. Given this, the only technical workaround is to support digitally signed code, which is what we recommended more than six months ago. Since checking the digital signature is only required at configuration time, this imposes no processing overhead on the controller; it can actually be done with existing hardware. While the vendor has so far made no announcement about supporting digitally signed code, our intelligence from inside sources has it that this feature is scheduled to be introduced to the S7-300 and 400 series in 2012.
Those who don’t want to wait that long can buy our Controller Integrity Checker software today, or can use that rusty nail that’s also available to prevent illegitimate ladder logic loads: the project password, which has been available for years but has never found real love among control system engineers. Anyhow, we urge every Siemens user to set a project password for every project until better protection is available. Do it this Monday if you haven’t done so already. Understand, though, that the project password can easily be hacked by brute force if an attacker has access to your project files, and that it offers zero protection against a man-in-the-middle attack that hijacks the driver DLL, as in the Stuxnet attack configuration.
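The signed-code workaround recommended above boils down to a configuration-time check: the engineering station attaches a signature to each code block, and the controller rejects any download whose signature does not verify. A toy sketch in Python, using an HMAC as a stand-in for a real asymmetric signature scheme – key distribution and storage, the hard part in practice, are deliberately omitted here:

```python
import hashlib
import hmac

def sign_block(code: bytes, key: bytes) -> bytes:
    """Compute a MAC over a controller code block at engineering time."""
    return hmac.new(key, code, hashlib.sha256).digest()

def verify_block(code: bytes, tag: bytes, key: bytes) -> bool:
    """Check the MAC at configuration (download) time.

    Runs once per download, not once per scan cycle, so it adds no
    runtime overhead on the controller. Uses a constant-time compare.
    """
    return hmac.compare_digest(sign_block(code, key), tag)
```

With such a check in place, an injected or modified block simply fails verification and never reaches the run-time system – which is exactly why the check belongs in the controller, not in the (hijackable) Windows-side driver.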

A final word of caution: It has been argued that since Stuxnet was so complex, producing a similar weapon would require a similar amount of resources. Unfortunately, this is complete nonsense. The first cyberwar weapon is comparable to the first nuclear weapon. To build the first nuclear bomb, it took a genius like Oppenheimer and the resources of the Manhattan Project. To copy the design, it takes just a bunch of engineers – no genius needed. Certainly you need fissile material, too. The difficulty of obtaining fissile material is the major barrier that has kept rogue nation states, terrorists, and criminals largely unsuccessful in getting their hands on nuclear weapons – not the difficulty of obtaining the know-how. So while Operation Myrtus required one or two geniuses to design Stuxnet, understanding and copying the design can be achieved by average engineers. Even worse, the design AND PRODUCTION process can be packaged into a software tool, enabling immoral idiots and geniuses alike to configure highly aggressive cyber weapons. Unlike nuclear weapons, no fissile material (or other hard-to-obtain substance) is required – just bits and bytes – and the resulting product can be delivered within minutes via the Internet, undetectable by virus scanners, intrusion detection systems, and Einstein II marvels.