When I took my first class on financial engineering as a naïve applied mathematics undergrad, we started with portfolio selection and the capital asset pricing model. In my typically confident (some might say arrogant) fashion, I decided I knew more than the professors, and that we should be focused on maximizing returns rather than giving the notion of risk the almost religious deference it received. A few case studies on LTCM (and modern hedge funds) bring the importance of risk into sharp relief. And yet, years later, I did it again. A few years ago, I claimed to be an expert on risk. In actuality, I was an expert on security who knew very little about risk. In fact, I knew so little about risk, I had no idea how little I knew about it.
I come from the information security space. I spent a number of years there, and throughout my tenure, I continually abused the word “risk.” Oh, I had no idea I was doing it. In fact, 99% of my colleagues in security were doing the same thing. The fact of the matter is, the cloak and dagger security types, self-professed “security experts,” continue to misuse the word. It wasn’t until I really tried to peel back the onion and build a product that managed risk for the security space that I realized that what often passes for risk management in IT is actually control management and compliance. True risk management deals with uncertainty around unexpected losses – looking at consequences in business terms and weighing those against potential reward. Information security management, as currently practiced, is in most regards a necessary, but not sufficient, component of information risk management.
A little experience in different disciplines and verticals can make all the difference in the world. Financial services is arguably the most sophisticated industry when it comes to managing risk. From a credit and market risk perspective, the average investment bank or hedge fund has teams of Ivy League PhDs running thousands of financial models 24×7, with a virtually unlimited budget, on server farms with more firepower than NASA. From an operational risk perspective (much more analogous to information security), these same banks have taken the lessons they've learned in years of managing credit and market risk and applied them to this more esoteric domain. Where they lack the hard, quantitative data of their credit and market risk peers, they've devised clever workarounds.
Information security practitioners, on the other hand, are great at managing compliance by checklist. We have impressive standards, frameworks and regulations like ISO 17799, PCI, BITS, CobiT and a whole slew of others that are pretty good at spelling out a series of "thou shalt haves." NIST 800-30 even gives a set of guidelines for doing risk management for IT systems. So what's missing?
Information Security standards and guidelines are a good thing, but they can be very easily misused and abused. They encourage cookie cutter thinking, and miss the bigger point – no two industries are the same. No two companies within an industry are the same. No two geographies within a company are the same. No two data centers within a company geography are the same. No two services run on hardware in the same data center are the same. No two business processes serviced by the same service are the same. And guess what? Depending on the time of the year, the needs of your customers and other factors, the same business process may have different needs on different days!
OK, clearly mapping all of those dependencies is hard. So, most organizations give a data sensitivity rating to their information assets. Maybe they get cute, and provide a “platinum, gold, silver, bronze” type scheme. Maybe they even set some arbitrary control thresholds based on this classification. So why do we have multiple large company executives going on record claiming that PCI compliance is too hard? Two things here – first, PCI is an ISO 17799 derivative. Second, with sensitive customer data sitting on these information assets, shouldn’t they have already warranted a platinum rating? Logically, it should follow that in any 17799 shop (many), information assets should be close to PCI compliant.
In reality, however, we all know that InfoSec groups are asked to do way too much with increasingly smaller budgets. It’s difficult to get management to buy into the need for information security, which exacerbates the problem. As such, it’s critical that we work smarter, not harder. If only there was a tool that let us do that…
Enter risk management. Throwing a set of checklist controls at our enterprise architecture is not risk management. Theoretically, it should result in some risk reduction, granted, but that’s not an optimal return on investment. Imagine running a hedge fund without a complex risk model for every conceivable position – running countless layers deep. You’d be insolvent within a month.
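The difference between checklist thinking and risk thinking can be made concrete. Here's a minimal sketch, in Python, of ranking candidate controls by expected loss averted per dollar spent rather than by checklist order. All of the control names and dollar figures are invented for illustration; real loss-averted estimates would come from the kind of loss data and qualitative judgment discussed below.

```python
# Hypothetical illustration: prioritizing security controls by expected
# risk reduction per dollar rather than by checklist order.
# Every figure below is invented for the sketch, not real loss data.

controls = [
    # (name, annual cost, expected annual loss averted)
    ("patch management",      50_000, 400_000),
    ("full-disk encryption", 120_000, 300_000),
    ("24x7 log monitoring",  200_000, 250_000),
    ("security awareness",    30_000,  90_000),
]

# Rank by "return on security investment": loss averted per dollar spent.
ranked = sorted(controls, key=lambda c: c[2] / c[1], reverse=True)

for name, cost, averted in ranked:
    print(f"{name:22s} {averted / cost:5.2f}x loss averted per dollar")
```

A checklist treats all four controls as equally mandatory; the ranking above shows that, under these assumed numbers, they differ more than sixfold in return on investment.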
So what are the roadblocks to risk management in information security? The biggest is a lack of business context. For years, IT has talked about aligning to the needs of the business. It's still a challenge. The fact of the matter is, it's tough getting C-level executives to prioritize business objectives and processes amongst themselves (think politics, agendas, silos), much less as a deliverable for IT (who they see as less and less of a strategic asset). And even if they could agree on a real priority for those corporate objectives, navigating the rat's nest of dependencies down from the objective level to the asset level would prove difficult for most organizations. As a result, it's nearly impossible to prioritize the consequences of an attack on any specific, tangible asset.
That starts to cover the consequence side of things. How about likelihood? Actuaries have tables for flood rates; financial engineers have volatility metrics for options calculations. Unfortunately, it's very difficult to compile reliable loss data on the IT side of the house. Difficult, but not impossible. We guard that information as if it were customer data. But if you look at our peers managing operational risk, there are several initiatives around sharing anonymous loss data. Banks collaborate on internal loss metrics to quantify the costs and probability of fraud, malfeasance, etc. Back to security: TJX set aside a penny a share to cover their data breach, and current press estimates range from $12 to $25 million. (Many experts think these estimates are overwhelmingly optimistic, by the way.) Are the metrics we have available perfect? Not even close. But qualitative factors are a good stopgap to supplement the limited quant data we have.
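Even sparse loss data plus qualitative judgment can drive a useful estimate. Below is a minimal Monte Carlo sketch of annualized loss expectancy: the frequency and severity parameters are assumptions a practitioner might back into from limited breach data and expert opinion, not real figures.

```python
# Minimal Monte Carlo sketch of annualized loss expectancy (ALE).
# The frequency (~2 events/year) and severity ($50k-$500k per event)
# parameters are invented stand-ins for sparse loss data supplemented
# by qualitative expert judgment.
import random

random.seed(7)  # fixed seed so the sketch is reproducible

def simulate_annual_loss(freq_lambda, sev_low, sev_high):
    """One simulated year: binomial approximation of a Poisson event
    count, with uniform per-event severity for simplicity."""
    events = sum(1 for _ in range(100) if random.random() < freq_lambda / 100)
    return sum(random.uniform(sev_low, sev_high) for _ in range(events))

trials = [simulate_annual_loss(2.0, 50_000, 500_000) for _ in range(10_000)]
ale = sum(trials) / len(trials)
print(f"Estimated annualized loss expectancy: ${ale:,.0f}")
```

With these assumed inputs the estimate lands near $550k a year (2 events × ~$275k average severity). The point isn't the number; it's that a defensible, revisable model beats no model, and real shops would swap in richer distributions (Poisson frequency, lognormal severity) as data accumulates.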
Don’t get me wrong – we have some brilliant people working in information security. Brilliant people doing amazing things with limited budgets, in a game with stakes that would make a high roller at the Bellagio head for the nickel slots. What we need is to buy them some leverage. Risk management helps information security professionals make better decisions faster, letting practitioners do more with less. Risk management is a great tool to help information security practitioners work more efficiently – just don’t confuse the two.