If you have followed the Stuxnet saga for the last several weeks, you will probably have heard that it would be impossible to determine Stuxnet’s targets, because the automation products in question and the attack strategy used could hit any process. That’s nonsense. Let’s look at how we can narrow down the targets by forensic analysis. Step with us into the crime scene and imagine we have just found Stuxnet and have been able to extract the payload – the two digital warheads that run on the PLCs.

The first thing to look at is configuration. This has nothing to do with code; you don’t have to understand a single line of the code shown in the last blog post in order to follow this. We know that Stuxnet only attacks specific controller types, so let’s look at the characteristics of those controller types. The first is an S7-315-2DP. We also know that Stuxnet looks for additional Profibus (field bus) extension cards on the 315. Hey, that’s a first clue of something particular, because an S7-315-2DP already comes with two built-in Profibus interfaces (hence the name “2DP”, where DP stands for decentralized periphery, i.e. distributed I/O). This tells us that the attacked installation is heavy on Profibus. It’s not much, but better than nothing.
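To make that fingerprint idea concrete, here is a minimal sketch in Python. It only models the logic described above; the real code reads the controller’s system data, and the class and field names here are invented for illustration.

```python
# Hypothetical model of the 315 configuration fingerprint described above.
# The structure and names are ours; only the logic (315-2DP CPU plus extra
# Profibus cards = Profibus-heavy installation) comes from the analysis.

from dataclasses import dataclass

@dataclass
class PlcConfig:
    cpu_model: str            # e.g. "S7-315-2DP"
    profibus_cp_cards: int = 0  # additional Profibus extension cards found

def looks_like_315_target(cfg: PlcConfig) -> bool:
    """True if the configuration matches the 315 attack profile:
    a 315-2DP CPU (two built-in DP interfaces) plus extra Profibus cards."""
    return cfg.cpu_model.endswith("315-2DP") and cfg.profibus_cp_cards >= 1

# A 315-2DP with three extension cards would match the profile.
print(looks_like_315_target(PlcConfig("S7-315-2DP", profibus_cp_cards=3)))  # True
```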

If we then take the effort to dig into the code, we see that Stuxnet talks to up to six Profibus networks, with up to 31 devices on each. Our fellow researchers from Symantec got that wrong in earlier versions of their dossier, where they showed a picture in which the attacked PLC was talking to three Profibus devices. Anyhow, they’re not to blame because, as they frankly state, they are not control system experts. So we have a configuration with up to 186 actuators in six networks, and one that is low on computing needs – the 315 can’t do very much with 186 actuators. We also see that only a few parameters are set at the actuators, suggesting that we’re looking at very simple devices, or at devices with embedded intelligence.
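A quick back-of-the-envelope sketch makes the topology arithmetic explicit. The constant names are ours; only the six-networks-times-31-devices figure comes from the code.

```python
# Topology inferred from the 315 attack code: up to six Profibus segments
# with up to 31 devices each. Names are illustrative.

MAX_SEGMENTS = 6             # Profibus networks the attack code can address
MAX_DEVICES_PER_SEGMENT = 31 # devices per segment

max_actuators = MAX_SEGMENTS * MAX_DEVICES_PER_SEGMENT
print(max_actuators)  # 186 -- far more than the three devices in the early dossier picture
```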

When further looking at the code, we also see that there cannot be tight external monitoring, either of the PLCs or of the devices themselves. Control is taken away from the legitimate PLC program during the DEADFOOT condition for several minutes (that’s huge for a controller), yet the attackers assume that it won’t be recognized. We may therefore infer that the controlled devices are not under tight monitoring from operators. We can also pretty much rule out finding this installation in a power plant. This is the most important aspect: it’s not about getting confirmation, but about being able to rule out branches of potential target clusters. So if there were only the 315 attack code, the Bushehr scenario could be ruled out. All of this can be inferred from configuration analysis, without going deep into code structure. That wasn’t so difficult, was it?
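If you like, the rule-out method can be written down as a small sketch. The candidate clusters and the attributes assigned to them are illustrative placeholders, not part of the analysis; the point is the procedure: drop every branch that contradicts what the configuration tells us.

```python
# Illustrative encoding of the rule-out reasoning. Candidate profiles are
# made-up placeholders; only the observations mirror the discussion above.

observed = {
    "many_simple_devices": True,   # up to 186 actuators, few parameters each
    "loose_monitoring": True,      # DEADFOOT runs for minutes unnoticed
}

candidates = {
    "power plant (e.g. Bushehr)":   {"many_simple_devices": False, "loose_monitoring": False},
    "some large industrial process": {"many_simple_devices": True,  "loose_monitoring": True},
}

def consistent(profile: dict, facts: dict) -> bool:
    # A candidate survives only if none of its attributes contradicts an observation.
    return all(profile.get(key) == value for key, value in facts.items())

surviving = [name for name, profile in candidates.items() if consistent(profile, observed)]
print(surviving)  # the power plant branch drops out
```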

Next, on to the 417 attack code. As we have said before, the 417 is the big Bertha. If you play golf, you know you don’t use the BB for putting, or for a 100-yard shot. In other words, the attacked installation will not be the proverbial cookie plant. Let’s switch to code. The 417 code is awfully complex, with huge data blocks and a lot of pointer operations that are a pain in the butt to reverse engineer. However, in addition to the user code there are system function calls that we can understand easily. Get Symantec’s dossier and look up the diagram for what they call “attack sequence C”. It’s pretty accurate. Try to ignore all those funny arrows and focus on the function calls in the grey boxes. You could look up all the SFCs in the published documentation, but to save you the hassle: there’s some pretty low-level stuff going on that you don’t need in a regular program. That’s a dead giveaway. SFC26, SFC27, SFC41, SFC42… what are they after, what do they need this for? SFC22, SFC23 for dynamic data block management, strange… what is this all about? So from here we go on to examine, in depth, the user code that calls these suspicious functions, only to end up in the man-in-the-middle engine. It’s so obvious that you just can’t miss it.
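Here is a minimal forensic sketch of that step, assuming you have the disassembled user code as a plain STL listing. The SFC descriptions follow the published Siemens documentation; the STL snippet and the helper names are invented for illustration.

```python
# Scan a disassembled STL listing for system function calls you rarely see
# in a regular program. Descriptions per published Siemens SFC documentation.

import re

SUSPICIOUS_SFCS = {
    22: "CREAT_DB - create a data block at runtime",
    23: "DEL_DB   - delete a data block at runtime",
    26: "UPDAT_PI - update the process image inputs on demand",
    27: "UPDAT_PO - update the process image outputs on demand",
    41: "DIS_AIRT - disable higher-priority alarm interrupts",
    42: "EN_AIRT  - re-enable alarm interrupts",
}

def flag_suspicious_calls(stl_listing: str):
    """Yield (sfc_number, description) for every suspicious SFC call found."""
    for match in re.finditer(r"CALL\s+SFC\s*(\d+)", stl_listing):
        number = int(match.group(1))
        if number in SUSPICIOUS_SFCS:
            yield number, SUSPICIOUS_SFCS[number]

example = """
      CALL  SFC 41   // block alarm interrupts before touching the outputs
      CALL  SFC 27   // push the manipulated process image to the outputs
      CALL  SFC 42   // and let the interrupts through again
"""
for number, what in flag_suspicious_calls(example):
    print(f"SFC{number}: {what}")
```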

From there we can take it further, stepping from code analysis to scenario analysis. So we have this big controller, perhaps in a redundant configuration. Since the attackers invested the effort for the MITM, we can infer that the process is under tight external control: from the legitimate PLC program, from the DCS, and from operators standing at their panels. And as the invention, reliable implementation, and testing of the MITM must have taken several man-years, this is another clue that lets us track down potential targets – and narrow down the suspects. Yet another clue comes from a completely different direction: while the impact of Stuxnet must have been dramatic, the victim has a problem admitting it. Lots of clues. Much better than nothing, ain’t it?
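To see why a MITM on the controller implies tight external monitoring, consider the general idea behind such an engine: record what normal inputs look like, then replay them to the legitimate program (and hence to the DCS and the operators) while the attack drives the real outputs. The sketch below is a conceptual illustration of that principle only, not a reconstruction of the actual 417 code.

```python
# Conceptual record-and-replay input MITM: everyone watching the process
# image sees recorded "normal" values while the attack runs. Illustration
# only; class and method names are ours.

from collections import deque

class InputReplayMitm:
    def __init__(self):
        self.recording = deque()
        self.replaying = False

    def observe(self, real_inputs: list) -> list:
        """Called once per scan cycle with the real process inputs; returns
        what the legitimate program gets to see."""
        if not self.replaying:
            self.recording.append(list(real_inputs))  # learn what "normal" looks like
            return real_inputs
        fake = self.recording.popleft()               # replay stale but plausible values
        self.recording.append(fake)                   # loop the recording
        return fake

mitm = InputReplayMitm()
for _ in range(5):
    mitm.observe([42, 17, 3])            # record a few cycles of normal operation
mitm.replaying = True                    # attack starts
print(mitm.observe([999, 999, 999]))     # prints [42, 17, 3] -- nothing looks unusual
```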