The other day I asked Dan Geer for his opinion on the anti-risk piece by Perry Pederson and me. Dan is one of the sharpest minds in the cyber risk camp, if not THE sharpest, so I was prepared for a solid repudiation. That didn’t happen — though this certainly does not imply that Dan would endorse our argument. Anyhow, what struck me was a concept that Dan mentioned in his response: systems that are too important to trust.
That certainly struck a chord. What’s better than trust? Verification. The fact is that in ICS security, asset owners far too often simply trust their vendors’ claims about cyber security. Trust is not a bad thing, but for the more critical systems, it should be backed by verifiability. Some regulators in the nuclear domain have been smart enough to incorporate that criterion into their regulations.
Anyone who has been around in ICS security for some time may have had experiences similar to these:
- A vendor who is challenged on the security posture of his products counters with the argument that they must be secure simply because there are X deployments that have been running without problems for years (note: even if we accept this claim without checking, there were millions of Siemens S7 deployments before Stuxnet demonstrated multiple vulnerabilities in that bread-and-butter product).
- A vendor who is challenged on the security posture of his product flat out denies the allegation even though it is backed by his own technical documentation. A promise to resolve the obvious discrepancy with the product documentation is never followed up on.
- A vendor who is challenged on the security posture of his product responds with “no, that’s not true. We have checked this. That is not a problem,” without delivering any technical details, let alone test results, that would support the claim.
Real-life examples like these demonstrate the sorry state of the art that we have to deal with. They also demonstrate a complete lack of understanding of cyber security on the part of specific vendors, no matter how many designated cyber security experts they might have on staff. (Usually the problem has more to do with corporate culture than with individual staff members.)
ICS security for critical systems does not reduce to a matter of trust in the vendor. I don’t know if Dan intended to imply it, but this notion is apparently at odds with Bruce Schneier’s recent thinking on cyber security and trust. With the asset owner usually accepting all the responsibility, not to say risk (!), it is the asset owner’s duty to demand verifiable information that backs up the vendor’s claims. If the request for verification catches the vendor with his pants down, as happens from time to time, that demonstrates the need for the exercise — and, maybe, the need to start looking at competitors.
Verifiability and verification are important concepts for separating the hot vapor in ICS security from hard fact. We encourage asset owners to use these concepts to their advantage even where no regulator mandates them.