If you have followed the marketing buzz in the OT security software space, you cannot have missed the hype around “passive network scanning”, which is a hallmark of network anomaly detection products. Interestingly, this buzz expects you to ignore the following logical problem:

Why do vendors who keep telling you how dangerous active network scans are focus on completely different risks with their products?

Let’s be honest: if active scanning really were so dangerous, shouldn’t you do something about it long before worrying about how to detect sophisticated and complex attack patterns like those seen in Stuxnet or the Triconex attack? Why do the same vendors treat the network scans that their sensors discover quite well as mere reconnaissance rather than as aggressive cyber attacks intended to disrupt your operations? After all, these are the very vendors who tell you that active network scanning is “unsafe” and has a high potential to disrupt your operations!

How risky is active scanning, and why?

Let’s go back to the basics.

Legend has it that actively scanning a process network is “unsafe” because of the funny behavior of non-resilient automation components when confronted with network packets that they didn’t expect. And there’s little dispute that funny things can happen if you let Nmap and Nessus loose in a process network populated with legacy OT devices.

As a matter of fact, twenty years ago our company was among the first to tell asset owners that aggressive scans of process networks are not a good idea. So it looks like in this respect, we succeeded! What we did not expect was that decades later, asset owners would throw the baby out with the bath water. But we’ll get back to that later.

Now here’s the point. If active scanning were as dangerous as claimed, it would easily qualify as your highest OT security risk, for the simple reason that any script kiddie could pull it off. You would have low sophistication, no access credentials required, a plain network access vector, and severe consequences. If that isn’t high risk, we don’t know what is. It would, or should, then also be your highest priority for remediation, either by network security measures, or by replacing or upgrading the affected automation components. But apparently, this is not what everybody is doing.
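The reasoning above can be sketched as a back-of-the-envelope risk score. The factor names, scales, and weights below are illustrative assumptions, not any established standard:

```python
# Illustrative risk scoring sketch (not a standard): likelihood rises as
# required attacker sophistication and access barriers fall.
def risk_score(sophistication: int, credentials_needed: bool,
               network_reachable: bool, consequence: int) -> int:
    """sophistication and consequence on a 1 (low) .. 5 (high) scale."""
    likelihood = 6 - sophistication            # easier attack -> more likely
    if not credentials_needed:
        likelihood += 2                        # no credentials required
    if network_reachable:
        likelihood += 2                        # plain network access vector
    return likelihood * consequence            # classic likelihood x impact

# The claimed "active scan crashes devices" scenario: trivial skill,
# no credentials, network reachable, severe consequence.
print(risk_score(sophistication=1, credentials_needed=False,
                 network_reachable=True, consequence=5))  # -> 45, near the top of the scale
```

Whatever exact weights you pick, a threat that is trivial to execute and severe in consequence lands at the top of any such calculation.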

There is an apparent glitch in the risk calculation.

Why does it matter?

Why should you reconsider scanning? Well, for important practical reasons. The orthodox restraint from active scanning severely limits your capability to accurately identify your OT configuration and to spot vulnerabilities (both published CVEs and unpublished ones), accidental misconfigurations, and actual cyber attacks.

Consider this: passive scanning cannot directly query your OT devices for configuration data that the devices may be happy to tell you about, if you just asked politely! Instead, this data is extracted from wire traffic. Often it cannot even be extracted directly from packet content but must be inferred (that’s where “artificial intelligence” usually comes in). And this only works where at least a minimum of such configuration data is actually transmitted over the wire. For silent applications and systems, such data is simply not available. And where more delicate configuration details such as software and firmware versions or network medium types are inferred, the accuracy of the result may not beat reading tea leaves by much.

Just think about your average computer system on the plant floor, running an outdated version of Microsoft Windows along with a whole lot of applications and libraries that should never have been installed on that system in the first place because they are not needed for the designated use case at all. Yet since most of these vulnerable applications and libraries don’t start talking on the network — that is, unless attacked by malware — they remain undetected, waiting to be exploited. It may well be that your network anomaly detection solution discovers the attack, but wouldn’t it have been better to identify and mitigate the risk weeks, months, or years before the attack, so that it could actually be prevented?

Active OT scanning re-visited: Forget Nmap

Before you let yourself be scared by marketing collateral, consider this. Active scanning is not identical to aggressive Nmap and Nessus scans, where hundreds or thousands of TCP and UDP ports of an individual system are hit with complex data patterns. Forget these dumb scans completely.

Instead, think about your network management software products, which actively connect to your devices and extract configuration data using legitimate interfaces. Examples are SNMP (Simple Network Management Protocol) and WMI (Windows Management Instrumentation). Using these interfaces for what they have been designed for is anything but random “scanning”; it’s a targeted probe that leverages a product feature implemented for this very reason: to query configuration data. Therefore, we don’t call using these protocols and interfaces “scanning” but “probing”, and probing from an OT security solution isn’t any more dangerous than probing from a network management product.
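To illustrate how gentle such a probe is at the wire level, here is a minimal, hand-rolled SNMPv2c GET request for sysDescr (OID 1.3.6.1.2.1.1.1.0): a single small UDP datagram, nothing like a port sweep. This is a sketch; the BER encoder handles short-form lengths only, and the community string “public” is a placeholder assumption.

```python
def tlv(tag: int, payload: bytes) -> bytes:
    """BER type-length-value, short-form length only (payload < 128 bytes)."""
    assert len(payload) < 128
    return bytes([tag, len(payload)]) + payload

def snmp_get_sysdescr(community: bytes = b"public", request_id: int = 1) -> bytes:
    oid = bytes.fromhex("2b06010201010100")     # 1.3.6.1.2.1.1.1.0 (sysDescr.0)
    varbind = tlv(0x30, tlv(0x06, oid) + tlv(0x05, b""))  # OID + NULL value
    pdu = tlv(0xA0,                             # GetRequest-PDU
              tlv(0x02, bytes([request_id])) +  # request-id
              tlv(0x02, b"\x00") +              # error-status: noError
              tlv(0x02, b"\x00") +              # error-index
              tlv(0x30, varbind))               # variable bindings list
    return tlv(0x30,                            # outer SNMP message SEQUENCE
               tlv(0x02, b"\x01") +             # version: SNMPv2c (1)
               tlv(0x04, community) +           # community string
               pdu)

pkt = snmp_get_sysdescr()
# Send with: socket.socket(AF_INET, SOCK_DGRAM).sendto(pkt, (device_ip, 161))
print(len(pkt))  # -> 40: the whole probe is a 40-byte datagram
```

One politely formed 40-byte question, answered by a feature the device vendor built in on purpose.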

In the automation space, the situation is similar. Take an automation protocol such as Profinet or Ethernet/IP. Both have built-in functions and data fields specifically intended for collecting configuration data, and these functions are used all the time by your automation systems. Using them for asset discovery purposes doesn’t do any harm whatsoever, because that’s what they were designed for. The same applies to Modbus (function code 43, Read Device Identification), DNP3, IEC 60870-5-104, BACnet, and other standard protocols. The same goes for proprietary protocols such as Siemens S7 or GE SRTP. All of these come with discovery functions that do no harm at all. And best of all, they return accurate and reliable information about configuration details.
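As a concrete example, here is what a Modbus/TCP Read Device Identification request (function code 43, MEI type 14) looks like on the wire. This is a sketch; the transaction ID and unit ID values are arbitrary placeholders:

```python
import struct

def read_device_id_request(transaction_id: int = 1, unit_id: int = 1) -> bytes:
    """Modbus/TCP Read Device Identification (FC 0x2B, MEI type 0x0E)."""
    pdu = struct.pack("BBBB",
                      0x2B,   # function code 43: Encapsulated Interface Transport
                      0x0E,   # MEI type 14: Read Device Identification
                      0x01,   # read code 1: basic identification objects
                      0x00)   # object id to start from
    # MBAP header: transaction id, protocol id (always 0), length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_device_id_request()
# Sent over TCP port 502; the response carries vendor name, product code,
# and revision as plain ASCII objects: accurate data, no guessing required.
print(frame.hex())  # -> 000100000005012b0e0100
```

Eleven bytes, using a function the Modbus specification defines for exactly this purpose.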

Meet selective probing

For these reasons, we abandoned passive scanning for our products ten years ago and replaced it with selective probing. The result is better and more extensive configuration data that allows us to go far beyond alerting on active cyber attacks via the network. As an example, it allows us to spot unauthorized changes to ladder logic and software that were administered via a local interface such as USB or a serial point-to-point connection. It also allows us to explore the vulnerability space much deeper than is possible using passive detection, as we can enumerate all software packages (and their vulnerabilities) whether they are noisy or silent on the network. It allows us to identify unauthorized devices on the network even if they don’t transmit data. And the list continues, but you get the point.
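To show what “selective” means in practice, a probe engine picks exactly one targeted, protocol-appropriate query per device class instead of sweeping ports. The device classes and probe descriptions below are hypothetical illustrations, not our product’s actual catalog:

```python
# Hypothetical sketch: one targeted probe per device class, nothing else.
PROBES = {
    "windows_host":   "WMI query for installed software and OS build",
    "managed_switch": "SNMP GET for sysDescr and interface table",
    "modbus_plc":     "Modbus FC 43/14 Read Device Identification",
    "profinet_io":    "Profinet DCP Identify request",
}

def select_probe(device_class: str) -> str:
    """Return the single legitimate query for this device, or stay silent."""
    return PROBES.get(device_class, "no probe: unknown device, passive only")

print(select_probe("modbus_plc"))
```

The key design choice: devices that cannot be classified are never touched, which is the opposite of an aggressive scan.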

The bottom line is: don’t fall into the trap of being overly concerned about a risk that the very vendors who alert you to it pretty much ignore once you start using their OT security product.

This whole article is not about dismissing passive scanning and network anomaly detection solutions. It’s about making you aware that in this field, it’s not all black and white. Evaluating targeted asset discovery technology, a.k.a. selective probing, presents you with a viable alternative when shopping for the best OT asset management solution.

For more information check out our OT-BASE asset management system, and test-drive selective probing as implemented in OT-BASE Asset Discovery in your own environment. It’s easy to do, doesn’t require the installation of appliances and collection networks, and delivers immediate results.