Over 8,000 New Vulnerabilities Disclosed in 2010: That's a Record
Bryan Casey | Tags: x-force, tom cross, ibm, vulnerability report, security trend
Twice a year our X-Force team releases its insights and observations on the security landscape, and today we’re announcing the release of the IBM X-Force 2010 Trend and Risk Report. In 2010 we saw a continued rise in the number of disclosed vulnerabilities, as well as the continued prevalence of web application vulnerabilities. However, 2010 also gave us a lot of new things to mull over. We’re seeing sophisticated threats and attackers become more prevalent than ever before. Mature exploit code for mobile devices, while not yet commonplace, is becoming increasingly available. We saw spam volumes rise dramatically before tapering off, and the SQL Slammer worm completely vanished.
This week I sat down with Tom Cross, Manager of X-Force Threat Intelligence and Strategy, to discuss in a bit more detail some of what we’ve seen over the course of the past year, as well as what we should be looking for in the years ahead.
Bryan: So, let’s start with this number: 8,562 vulnerabilities disclosed last year. That’s a 27% increase over 2009 and the most ever disclosed in a single year. What’s driving this rapidly increasing number, and is it necessarily cause for concern?
Tom: We think this increase is a consequence of software development houses taking the security of their software more seriously. Many companies that develop software are investing in improvements to development and quality assurance processes intended to identify and eliminate security vulnerabilities before products ship to customers. However, there is a lot of code out in the field right now that didn’t benefit from the latest software engineering practices, so vulnerabilities are still being discovered that have to be patched.
It’s not necessarily a cause for concern. It represents progress toward a safer Internet – but for those of us who work on remediating vulnerabilities and defending networks from attacks that target them, it means we’ve got a lot more work to do.
Bryan: Do you anticipate that vulnerability disclosures will continue increasing in 2011 at the rate they did in 2010? Will we reach 11,000 next year?
Tom: As improved software engineering practices result in better code in the field, I think we will eventually turn the corner and start seeing sustained decreases in these numbers, but it is hard to predict exactly when that will happen. We thought we were already on the way last year, and then this year surprised us. The total number of vulnerability disclosures has been up and down for the past four years, so next year’s totals are anybody’s guess.
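As a back-of-the-envelope check (not an official forecast), the report’s own figures – 8,562 disclosures in 2010 and a 27% year-over-year increase – can be worked backwards and forwards to see where Bryan’s 11,000 figure comes from:

```python
# Back-of-the-envelope check of the report's figures, not a forecast:
# 8,562 disclosures in 2010, a 27% increase over 2009.
disclosed_2010 = 8562
growth_rate = 0.27

# Working backwards gives the implied 2009 total.
implied_2009 = disclosed_2010 / (1 + growth_rate)    # ~6,742

# Naively extrapolating the same growth forward lands near 11,000.
projected_2011 = disclosed_2010 * (1 + growth_rate)  # ~10,874

print(f"Implied 2009 total:    {implied_2009:,.0f}")
print(f"Naive 2011 projection: {projected_2011:,.0f}")
```

As Tom notes, the real totals have bounced up and down, so the straight-line extrapolation is illustrative only.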
Bryan: The new report mentions that exploits are often released tens or even hundreds of days after the public disclosure of the vulnerabilities they target. Why does this happen? Are exploit writers just slow?
Tom: We think that the bad guys develop exploit code quickly after vulnerabilities are disclosed. In some cases exploits are circulating before disclosure. But they aren’t made public. They are used to break into computers. Eventually, as systems get patched, these exploits become less valuable as attack tools, and some of them find their way onto public websites and mailing lists that we track.
The fact that this is taking a long time indicates that people aren’t patching quickly enough. The window of opportunity for an attacker has two components: the amount of time between vulnerability disclosure and patch release, as well as the amount of time between patch release and installation. In some cases it can take a long time for software vendors to release patches, but they are often made available quickly, particularly for critical issues. We think that attackers are holding on to exploits for a long time primarily because those patches aren’t getting installed everywhere that they need to be.
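The two-part window Tom describes can be sketched with a small calculation. The dates below are hypothetical, chosen only to illustrate how the total exposure splits between vendor response time and deployment lag:

```python
from datetime import date

# Illustrative sketch of the two-part exposure window described above.
# All dates are hypothetical, not drawn from the report.
disclosed = date(2010, 3, 1)         # vulnerability made public
patch_released = date(2010, 3, 15)   # vendor ships a fix
patch_installed = date(2010, 7, 20)  # fix finally deployed

vendor_window = (patch_released - disclosed).days            # 14 days
deployment_window = (patch_installed - patch_released).days  # 127 days
total_exposure = vendor_window + deployment_window

print(f"Vendor window:     {vendor_window} days")
print(f"Deployment window: {deployment_window} days")
print(f"Total exposure:    {total_exposure} days")
```

In this hypothetical case, the vendor closed its half of the window in two weeks; the remaining four months of exposure came entirely from slow patch installation, which is Tom’s point.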
Fixing this requires improvements in endpoint management. Network managers need to know what computer systems are on their network, what software is on those computer systems, what vulnerabilities are in that software, and what patches are available. This is an area that is going to be a focus for both technological and operational development over the next 5 years. Of course, it also makes sense to have good threat prevention in the network as well.
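The inventory Tom describes – which systems are on the network, what software they run, which packages are vulnerable, and what patches exist – can be sketched as a simple cross-reference. The hostnames, versions, and data structures below are hypothetical, not any particular product’s API:

```python
# Toy sketch of an endpoint inventory: systems -> installed software,
# plus known-vulnerable packages and available patches. All names and
# versions are hypothetical.
installed = {
    "web-server-01": ["apache-2.2.14", "openssl-0.9.8k"],
    "db-server-02": ["mysql-5.1.41"],
}
known_vulnerable = {"openssl-0.9.8k", "mysql-5.1.41"}
patch_available = {"openssl-0.9.8k": "openssl-0.9.8l"}

# Cross-reference the inventory to produce a remediation worklist.
for host, packages in installed.items():
    for pkg in packages:
        if pkg in known_vulnerable:
            fix = patch_available.get(pkg, "no patch yet")
            print(f"{host}: {pkg} vulnerable -> {fix}")
```

Even at this toy scale, the join across the three data sets is the hard part in practice: each of the inputs changes constantly, which is why Tom flags endpoint management as a focus area.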
Bryan: We see some recurring year-to-year trends in this report, such as the significance and prevalence of web application vulnerabilities. However, I’m curious what’s new this year. What’s changing in the security landscape that people need to be aware of?
Tom: Lots of new technologies – such as mobile, cloud, virtualization, IPv6, and DNSSEC. We keep making new software and software systems that have new security implications. While we’re getting better at making software, it still has a maturity lifecycle. When a new software program is released, very few vulnerabilities have been disclosed in it, but the code hasn’t had much of an opportunity for independent audit and real-world use. Over time, people find bugs, and the number of known vulnerabilities in that software increases. Eventually, if the software remains static, it can reach a stable state where few new vulnerabilities are being discovered. However, most commercial software doesn’t remain static. New features are added. Things are changed. Product management occurs. Entirely new technologies like IPv6 can present large code bases to the Internet that haven’t been subject to much real-world use. There are bugs in there, and people also need to learn how to deploy these technologies safely, which takes time as well.

Another notable thing that happened this year is broadening awareness of sophisticated, targeted attacks that may be state sponsored. These kinds of attackers are hard to keep out of a computer network. They really do their homework on the organizations they are targeting, and they are very patient. They are also coming at you with vulnerabilities that no one else knows about and custom trojans with covert command and control protocols. It’s a hard problem. A few years ago it was a problem that only governments and other critical sites had to worry about, but the range of organizations dealing with this today seems to be widening.
Bryan: It seems like there’s a lot happening in the security world right now. From the continued rise of advanced persistent threat, to mobile platforms and cloud computing each introducing new risks and challenges, to the scale and sophistication of an attack like Stuxnet…security seems to be everywhere and I’m hoping you can boil some of this down for us. As we look back on 2010, what were the key things we learned? What should we expect to see in 2011?
Tom: Concerns about things like Advanced Persistent Threat are driving the adoption of different approaches to network security, including more physical network segmentation, better endpoint management and awareness, better log retention and analysis, and a more forensics-driven approach. All of these developments make networks more resilient against everyday threats.

I think that Stuxnet also shined a light on the risks that customized industrial control systems face. Computer security people are familiar with being ignored when we point out potential risks until a real event occurs. People have been talking about the computer security risks of Internetworked control systems for years. Hopefully now those warnings will not be ignored.
What should we expect to see in 2011? I think Wikileaks has gotten people thinking about information control in their organizations. What stuff does your enterprise know that is just sitting out there on internal file servers and could easily be leaked on the Internet by a disgruntled employee? A clear set of best practices has yet to emerge around this but people are starting to think about how Data Loss Prevention and Watermarking technologies might be brought to bear on the problem.
But I expect 2011 to surprise us. Every year there are developments that we don’t anticipate. A few weeks ago the SQL Slammer worm all but disappeared from the Internet. Computers infected with that worm had been a reliable source of malicious traffic on the Internet since it first emerged back in 2003. One day in March, poof, the thing just disappeared. We’re currently looking through the evidence we have to see if we can find an explanation, but so far it is proving elusive. The Internet is a big place – it’s unpredictable.
For other Trend Report highlights, including interactive graphics, please see my recent post on the IBM Institute for Advanced Security blog.