One of the trends that we are seeing today is the convergence of security management and systems management. The better job you can do managing your infrastructure, the better equipped you will be to define and enforce security policies and controls across that infrastructure. There are few places where this convergence is more evident than the endpoint.
As the notion of a perimeter disappears, and as traditional and non-traditional endpoints such as servers, desktop PCs, laptops, ATMs, point-of-sale devices, and self-service kiosks continue to proliferate, organizations are looking for a comprehensive approach to managing and securing all of their endpoints. This includes, but is not limited to, identifying all of the endpoints in your environment, managing each endpoint's complete lifecycle, providing continuous security and compliance, deploying patches effectively and in a timely manner and, finally, managing each endpoint's power usage.
Tivoli Endpoint Manager, built on BigFix technology, can address all of those needs, but in this blog I want to focus on that last piece of the conversation, because it is not one that immediately comes to mind when people think about the most critical elements of managing an endpoint. However, we have seen that effective power management can actually pay for all of the other benefits that Tivoli Endpoint Manager provides. You can ultimately save money and the environment and, in the process, deploy critical security and systems management controls across all of your endpoints (even the ones you didn't originally know you had).
In a recent article, Penn State wrote about their deployment of BigFix (now called Tivoli Endpoint Manager) and indicated that it could save them about $800,000 annually. A large university like Penn State has thousands of computers that can be included in a power management initiative, and many of these computers are only heavily used during peak hours. Tivoli Endpoint Manager allows the Penn State IT staff to automatically put these computers into sleep mode when they aren't in use. Beyond that significant ROI, they are also hoping to reduce the amount of carbon dioxide released into the atmosphere by 60,000 tons.
One of the objections that people often raise about power management for the endpoint is that it can interfere with the patch process. This is one of the areas where the convergence of security and systems management is so important. The policies that you create and enforce from a systems management perspective need to work hand-in-hand with the policies related to security management. For that reason, Tivoli Endpoint Manager was built on the core concepts of convergence, scalability and granular policy setting. It allows an IT staff to automatically wake computers at a designated time, apply required patches or enforce configuration policies, reboot, and then bring the endpoint back down to a hibernated, low-energy state, or shut it down altogether.
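The "wake computers at a designated time" step typically relies on Wake-on-LAN, a simple UDP-based mechanism. As an illustrative sketch (not Tivoli Endpoint Manager's actual implementation), here is how a Wake-on-LAN "magic packet" is built and broadcast in Python; the MAC address used in the example is hypothetical:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed
    by the target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network (UDP port 9)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(build_magic_packet(mac), (broadcast, port))
```

A scheduler would call `wake()` for each target machine before the patch window opens, then issue a sleep or shutdown command once patching completes.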
The Chichester School District provides yet another great example of power management savings. This regional school district in Delaware County, Pennsylvania, manages more than 2,000 Microsoft Windows desktops and 50 Microsoft Windows servers across a six-school network. The district implemented energy conservation using the power management capabilities of Tivoli Endpoint Manager to help reduce computing energy costs by 70 percent. Their IT team also uses the distributed Wake-on-LAN functionality to distribute and install patches on machines that are turned off at night. This reduces energy use and confirms machines are securely patched, without impacting employee productivity.
The integrated patch and power management capabilities of IBM Tivoli Endpoint Manager provide IT staff with real-time information on remote endpoints to simplify patch processes, conserve energy and reduce on-site troubleshooting.
So, I was at Pulse this year and was the source of pretty constant ridicule for carrying around what felt like a fifty-pound laptop bag. It was horrible, and inconvenient, and not even effective. I had hard copies of schedules that were out of date about 30 seconds after I clicked print. By the end of the conference I had calluses on my fingers and I couldn't walk more than about ten steps without having to change hands. It was really a constant reminder that I need to go to the gym more.
Anyway, interestingly enough, most vendors in the endpoint security space have basically adopted this same approach in designing their technology. Incoming attacks get blocked by signatures, and in order to keep you "prepared," some companies just create and update these huge signature files, shoot them across the network, fold their hands and hope they get properly installed, and then get right back to work because the files they just sent are more or less immediately out of date. I can tell you from experience that lugging around a bulky bag of incomplete, outdated information is no way to do your job. It's also no way to keep your employees, and by extension your company, ahead of threats.
What companies need to do is focus on what defense-in-depth at the endpoint would really look like. It means you need a lot of things. You need antivirus and firewall protection. You need a patch process that actually works. You need centralized policy management that is easily enforceable. And, of course, you need all of this in real time. Until recently, that also meant you needed a lot of aspirin.
With its acquisition of BigFix last July, IBM basically invested in the convergence of security and systems management, two pieces of the operational infrastructure that will continue to become more intertwined.You can’t just write the policy, or obtain the patch, you also need to be confident that these changes and updates are continually being enforced at every single endpoint.Try automatically applying patches to computers that aren’t turned on and you’ll pretty quickly understand why convergence is so important.
Up until this week there were four offerings that were part of the Tivoli Endpoint Manager suite of products, all of which are managed under the same roof.We have solutions for lifecycle management, security and compliance, power management and patch management.This week, we were pleased to announce Tivoli Endpoint Manager for Core Protection, a solution designed to add another layer of depth to your endpoint security posture.Tivoli Endpoint Manager for Core Protection is the result of the relationship between IBM and Trend Micro, and offers the real-time, lightweight threat protection that other endpoint security solutions can’t really compete with.
I spoke earlier about how other vendors were sending these huge signature files across their network, files that were outdated before you even figured out how to install them on your PC.Tivoli Endpoint Manager for Core Protection is different because while it does employ the use of some signature files, it also leverages the cloud to reduce the amount of information that needs to be sent across the network and also provides the real-time protection that static signature files cannot.As the cloud is updated with the latest threat information, so too are all of the endpoints that are in conversation with that cloud.
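The shift from pushing bulky signature files to making small cloud lookups can be sketched in a few lines. This is a hypothetical illustration of the general technique, not the actual Trend Micro or Tivoli Endpoint Manager mechanism; the local cache and cloud reputation service are stand-in dictionaries here, where a real agent would query the vendor's service over the network:

```python
import hashlib

# Illustrative stand-ins: a small local verdict cache on the endpoint,
# and a cloud reputation database kept current by the vendor.
LOCAL_CACHE = {}
CLOUD_REPUTATION = {}

def file_sha256(data: bytes) -> str:
    """Compute the content hash that identifies a file to the service."""
    return hashlib.sha256(data).hexdigest()

def check_reputation(data: bytes) -> str:
    """Return 'block', 'allow', or 'unknown' for a file's contents.

    The lightweight local cache is consulted first; on a miss, only a
    small hash (not a bulky signature file) travels to the cloud.
    """
    digest = file_sha256(data)
    if digest in LOCAL_CACHE:
        return LOCAL_CACHE[digest]
    verdict = CLOUD_REPUTATION.get(digest, "unknown")
    LOCAL_CACHE[digest] = verdict  # remember the cloud's answer locally
    return verdict
```

The design point is that the heavy, fast-changing data lives in the cloud; the endpoint only exchanges hashes and verdicts, so every lookup reflects the latest threat information.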
This has proven to be extremely effective. In a recent third party test, the Trend Micro technology blocked 100% of all incoming malware (the second place competitive product came in at 77%) by taking a multi-layer approach. Nearly all (97.5%) of the malware was detected and blocked in the first layer (URL reputation) and the remaining pieces of malware were blocked in the two subsequent layers of defense. Now, here's where it gets even more impressive. An hour after the original test, they again tested just the malware that got through URL reputation, but this time it did not get through even that first layer of defense. This is protective technology that is updating and hardening its defenses as new threats come in.
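The multi-layer approach described above amounts to a chain of filters: each layer either blocks a sample or hands it to the next. A minimal sketch, with made-up layer predicates and indicator lists (the real product's layers and logic are far richer):

```python
# Hypothetical indicator lists for each defensive layer.
KNOWN_BAD_URLS = {"http://malware.example/payload"}
KNOWN_BAD_HASHES = {"deadbeef"}

def url_reputation(sample: dict) -> bool:
    """Layer 1: block samples fetched from known-bad URLs."""
    return sample.get("url") in KNOWN_BAD_URLS

def file_signature(sample: dict) -> bool:
    """Layer 2: block samples whose hash matches a known signature."""
    return sample.get("sha256") in KNOWN_BAD_HASHES

def behavior_monitor(sample: dict) -> bool:
    """Layer 3: block samples exhibiting suspicious behavior."""
    return sample.get("writes_to_system_dir", False)

LAYERS = [("url-reputation", url_reputation),
          ("file-signature", file_signature),
          ("behavior", behavior_monitor)]

def scan(sample: dict) -> str:
    """Return the name of the layer that blocked the sample, or 'allowed'."""
    for name, layer in LAYERS:
        if layer(sample):
            return name
    return "allowed"
```

In the test described above, most malware never gets past the first layer; anything that slips through is caught downstream, and the indicator lists themselves harden over time as the cloud learns about new threats.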
I don't think I really need to explain the importance of endpoint security to anyone reading this. We all have different things at stake, whether it's your bank accounts, your music collection, confidential information for work or even just a photo album. What I can say is that 77% isn't good enough when it comes to protecting any of those things.
The strength of Tivoli Endpoint Manager is that it combines first-rate security with the systems management capabilities needed to ensure that protection is deployed across the entire infrastructure. When it comes to endpoint management, it's about no longer looking at technology in silos, it's about understanding why and how we can integrate different complementary offerings. Tivoli Endpoint Manager is built on that philosophy.
For more information about Tivoli Endpoint Manager, please visit:
Today's post comes from Sandy Hawke, Manager IBM Security Solutions.
I recently presented to the ISACA community on a live webinar. I focused the discussion on how to leverage automation to improve endpoint security and compliance. The archived webinar is available here. Just as a brief background, ISACA is an international professional association that focuses on all aspects of IT Governance and has over 95,000 members worldwide.
The online event drew a pretty substantial audience which is good, and yet a bit surprising in two key ways. First of all, many of the recommendations I made to the audience were not radically new concepts, but basic foundational controls that all security professionals agree are critical for achieving and maintaining solid security and demonstrable compliance. So haven't they heard this story before?
Maybe not. And that's the second observation. Most of the ISACA membership is in the IT audit/risk management line of business. While they're not the folks who are implementing security technologies on a daily basis (i.e., "hands at keyboards"), they are keen to understand how security is implemented, how it works, how automation can be used to facilitate audits, and so on. And that's the new trend we've been witnessing. While the audit team knows what the policy controls should be, they may not know if or how these controls get enforced, maintained, monitored and reported on, essentially how security is "operationalized." The more they know about what's possible with respect to security operations and automation, the better they'll be at knowing what questions to ask IT operations during audits, what technologies to recommend, and so on.
Years ago, the IT audit/risk management organization and its activities were kept quite separate from the IT operations/IT infrastructure teams. And at the time there were pretty good reasons to keep these groups as distinct as possible; you've all heard the "fox in the hen house" analogy, right? The IT audit/risk management teams could set and enforce policy and conduct assessments that wouldn't be influenced by the operations staff. Well, with the advent of converging technologies, economic trends, and the increased importance of measuring security investments and compliance programs in real time, these groups are coming together, more so than ever before.
And technologies that can foster that type of trust, cooperation, and collaboration are indispensable.
When IBM first kicked off the Dynamic Infrastructure announcement at the Pulse 2009 conference, we heard some rumblings about whether Dynamic Infrastructure was just another executive buzzword or if there was real meat behind the concept.
Doug McClure summarized the feeling well in his blog: “While this is great for executive level folks, I think we needed to drive this message into consumable and actionable things that lower level technical attendees could take back to their companies. They may be the ones who need to execute and show how previous or planned investments could help their company become smarter and more dynamic.”
After IBM's announcement yesterday of new Dynamic Infrastructure offerings, critics will be hard-pressed to argue that Dynamic Infrastructure isn't actionable. Not only did IBM announce new products and services in the areas of Information Infrastructure, Virtualization, Service Management, and Energy Efficiency, but they also demonstrated how these solutions are helping three of our clients--the Taiwan High Speed Rail Corporation, Tricon Geophysics and the United States Bowling Congress--build new, more dynamic infrastructures to help reduce costs, improve service and manage risk.
A key piece of the announcement is the IBM Service Management Center for Cloud Computing, which now includes new IBM Tivoli Identity and Access Assurance, IBM Tivoli Data and Application Security, and IBM Tivoli Security Management for z/OS, for Cloud environments. I don’t know about you, but all that’s more meat than this vegetarian can handle. :)
To continue driving home the Dynamic Infrastructure success, IBM is sponsoring a variety of events for the public to learn more. Register for a free, local Pulse Comes to You event to see how Service Management is a key component for enabling a Dynamic Infrastructure for a Smarter Planet.
What is IBM Tivoli Software? We know you want the short version. Steven Wright of Tivoli Software breaks it all down for us in less than 7 minutes on a white grease board. Check it out while you have your morning coffee, afternoon tea, or while you get your miles in on the treadmill or trail with your smart phone. Then visit ibm.com/software/tivoli for more details on how IBM Tivoli Software can help you run a smarter business.
Today's post comes from Perry Swenson, Market Manager, IBM Security Solutions.
IT departments at financial services firms are under tremendous pressure to ensure servers, desktops, mobile devices and other endpoints are secure and compliant. At the same time, they’re continually looking for ways to save time and resources in areas like software licensing, patch management, asset inventory and security configuration. IBM Tivoli Endpoint Manager, built on BigFix technology, is helping these firms better understand and manage the status of their endpoints, regardless of where they’re located.
In the video below, Nate Howe, VP of Risk Management at Western Federal Credit Union, talks about how Tivoli Endpoint Manager provides real-time patching for operating systems and third-party applications and utilities. With over $1.4 billion in assets and 32 branches in 10 states serving more than 120,000 members nationwide, Western Federal Credit Union is one of the leading credit unions in the United States. Nate explains that they now have a single view into all aspects of the systems and security for their 400 employees, 100 servers and 2 data centers, including a better inventory of installed software. And they can do more with fewer people, which enables them to focus less on infrastructure and more on business applications and enabling business automation.
Another customer that’s realizing benefits from Tivoli Endpoint Manager is SunTrust Banks, Inc. Based in Atlanta, SunTrust enjoys leading market positions in some of the highest growth markets in the United States and also serves clients in selected markets nationally. SunTrust has a highly distributed environment with nearly 1,800 branch locations and no local IT resources at most of those locations. Using Tivoli Endpoint Manager, SunTrust now maintains a 98.5 percent patch and update compliance rate. They’ve also decreased update and patch cycle times from 2-3 weeks to 2-3 days while increasing productivity through automation. Read the SunTrust case study here.
By enabling improved endpoint visibility and new levels of automation, Tivoli Endpoint Manager is a powerful solution to help financial services firms enhance their security and compliance.
It almost goes without saying, but, hey, I'll say it anyway...Security is top of mind for everyone these days, no matter your industry, no matter the size of your organization - and even on a personal level, too. You certainly don't have to be a security manager to be concerned about security, particularly internet security.
Case in point: Which of the following internet vulnerabilities is keeping you up at night these days?
Perhaps a more precise answer would be "All of the above plus a few more."
So, how can you stay ahead of these types of threats - understanding what the most critical and recurrent vulnerabilities are and what you can do to prevent them? One excellent source of emerging information is the IBM X-Force Research and Development team. For more than a dozen years, these security specialists have tracked well over 40,000 different vulnerabilities, from Trojan horses to malware to Web spoofing, and documented them in the world's largest and most comprehensive threat database.
The IBM X-Force researches and monitors the latest internet threat trends, develops security content for IBM customers, and helps advise customers and the general public on how to respond to emerging and critical threats. Twice a year, the team releases a detailed report discussing the latest security complexities. These reports are far more than just abstract information. They are actionable intelligence, designed to lead to more comprehensive security and a better business outcome. Take a look at the latest report.
For more information about how the IBM X-Force research can help your organization (and perhaps even keep you from losing sleep worrying about security threats), check out this Service Management in Action article.
Signing off for this week,
Your friendly roving Integrated Service Management reporter
Today's post comes from Vidhi Desai, Market Manager, IBM Security Solutions.
Today's business environment calls for information sharing at an unprecedented scale. Sensitive information is shared between organizations, end consumers and business partners. The biggest challenge organizations face in doing so is ensuring that sensitive information is shared securely with different parties and that the right people are accessing the data. With the adoption of cloud and Software as a Service (SaaS) deployment models, ensuring secure access is even more critical and challenging.
Consider a scenario where a government agency needs to share information with other agencies, local governments, citizens or even other business entities (e.g., a revenue agency that needs to share information with citizens and with entities like a tax preparation service). If one of the entities is operating in a public cloud environment, it becomes critical for the government to ensure that the right person is accessing the right data, without sacrificing privacy, security or scalability (that the party requesting information really is the revenue agency or tax preparer it claims to be).
Over the past several years, we have seen the US government take steps to ensure secure sharing of data between agencies through regulations such as FISMA, introduced in 2002, which brought attention to the critical nature of cyber security and its impact on national security.
Identity is at the core of any information sharing transaction. Hence, whenever an individual attempts to access a secure online site or web portal, their identity has to be verified to ensure they are authorized to view that data. Additionally, from the end user or citizen's perspective, they should be able to establish their identity once and then access multiple systems without having to log in repeatedly.
Federated identity management is the solution that enables multiple applications to share user credentials based on trust. This is especially critical in supporting cloud deployments for secure information sharing across private, public and hybrid clouds. With federated single sign-on (SSO), users can log on to the sites of multiple businesses and organizations using the same user ID and password, gaining seamless and secure entry to multiple applications.
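The trust relationship behind federated SSO can be sketched with a signed, short-lived assertion: the identity provider signs a claim about the user, and the service provider verifies the signature instead of prompting for another login. This is a toy illustration using an HMAC over JSON with an assumed shared secret; real federation protocols such as SAML and OpenID Connect use standardized signed assertions:

```python
import base64
import hashlib
import hmac
import json
import time

# The pre-established trust between identity provider and service
# provider, represented here as a shared secret (illustrative only).
SHARED_SECRET = b"idp-and-sp-shared-secret"

def issue_assertion(user_id: str, ttl: int = 300) -> str:
    """Identity provider: sign a short-lived assertion about user_id."""
    payload = json.dumps({"sub": user_id, "exp": time.time() + ttl}).encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_assertion(token: str):
    """Service provider: return the user ID if the assertion is valid,
    or None if it was tampered with, forged, or has expired."""
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered, or signed by an untrusted party
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        return None  # expired assertion
    return claims["sub"]
```

The user authenticates once at the identity provider; every service that trusts that provider can accept the assertion without its own login prompt, which is the "log in once, access many" experience described above.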
Tivoli Federated Identity Manager from IBM is an access management solution that provides web and federated single sign-on to end users across multiple applications, resulting in an improved user experience. It enables central management of access, enhances user productivity and facilitates trust by delivering single sign-on across separately managed infrastructure domains, both within an organization and across organizations.
Today's post comes from Veronica Shelley, Market Manager, IBM Security.
With IBM's October 12th SmartCloud launch, perhaps you're considering cloud computing for your organization. After all, the benefits of cloud computing are well known. It is flexible, scalable, and cost-effective, and it's a proven delivery platform for providing business or consumer IT services over the Internet. Cloud computing can help you cut costs and IT complexity, provide new services to customers, and streamline business processes. It is gaining in popularity and may be the wave of the future. Yet many organizations hesitate due to security concerns and confusion over how to get started.
Perceived risk versus actual risk
Cloud computing may seem new, but the fact is companies have been outsourcing services and technology for years. Providers already deliver hosted technology offerings that are located off-site with client access via the Internet. This is a common scenario for services such as remote storage or hosted email and other software as a service (SaaS) solutions. And just because companies may give up some control to the provider when they move to a cloud-based environment (just as they give up some control in any outsourced arrangement), it doesn't mean they have to compromise on security. By asking the right questions and preparing adequately, companies can build a "trust and verify" relationship with their cloud provider.
Questions to ask to ensure cloud security
It's important to remember that the same factors apply to ensuring security whether the environment is cloud-based or a traditional IT infrastructure. The key difference in the cloud model is that it includes external elements, and those elements will be managed by the cloud service provider. This means companies need to understand the environment beyond their own data center and consider how it impacts the organization from a security standpoint. To help ensure security and peace of mind, as well as a good working relationship with the cloud provider, the client company should always identify and prioritize cloud-specific security risks beforehand. Often, companies will find they have the same amount of control, if not more, with a cloud service.
There are specific tactics an organization can use to enhance cloud security. For identity and access management issues, companies need to control passwords, support privileged users and enable role-based access to these cloud services. With data protection, a key concern is knowing whether or not a company's hosted data is secure, especially if data from rival companies is also being stored on the provider's cloud service. Companies should also ensure the cloud provider is deploying antivirus software on all supported systems that could be exposed to attacks, and ensuring that selected programs can identify and protect against malicious software or processes. From an auditing and monitoring perspective, companies need to determine how the cloud provider is testing and monitoring the infrastructure to meet legal and regulatory requirements.
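The role-based access control mentioned above can be sketched as a mapping from roles to permitted actions, checked on every request to a cloud service. The roles, users and permissions in this example are purely illustrative:

```python
# Illustrative role definitions: each role maps to the set of
# actions it is permitted to perform against a cloud service.
ROLE_PERMISSIONS = {
    "admin":   {"read", "write", "configure"},
    "auditor": {"read"},
    "user":    {"read", "write"},
}

# Illustrative user-to-role assignments.
USER_ROLES = {"alice": "admin", "bob": "auditor"}

def is_allowed(user: str, action: str) -> bool:
    """Check whether a user's assigned role grants the requested action.
    Unknown users or roles get no permissions (deny by default)."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())
```

Deny-by-default is the important design choice here: an unknown user, or a user whose role was never defined, is refused every action rather than quietly granted access.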
Reaping the benefits of cloud
Organizations interested in reaping the benefits of cloud can best begin by understanding the security ramifications of a cloud deployment to their business, keeping in mind they can start small by deploying cloud in low-risk workload areas like email services. This easing-in process gives organizations valuable time to become familiar with cloud on a scale that's simpler to grasp and doesn't put them at increased security risk. And as familiarity of cloud and trust in the provider grows over time, companies can expand their use of cloud computing into other areas of business. By following this gradual path, companies can start enjoying the benefits of cloud in a way that's safe and secure.
Today's post comes from Vikash Abraham, Market Manager, IBM Security.
Virtualization has proven its business worth as a technology; however, there is still limited understanding of how to secure it. To many, the question remains: why do virtual environments need separate security when we have already secured the physical environment, i.e., the physical servers and the network in a data center? To answer this, it is essential to understand that virtualization creates a totally new layer above the physical server, which acts like a mini data center with all the complexities of multiple virtual machines, hypervisors, virtual networks and virtual appliances. The biggest risk that comes with a virtualized environment is the lack of visibility into it: even if the environment is under attack, administrators may not be aware of it. Hackers, meanwhile, are eager to uncover the set of new vulnerabilities this environment could bring.
Having realized this risk and the possible loss of millions of dollars' worth of data, the PCI Security Standards Council has issued compliance guidelines for virtual environments. In June 2011, the group released the 'PCI DSS Virtualization Guidelines', which broadly describe the aspects that need to be considered when securing a virtual cardholder data environment. The guidelines cover the new entities that appear with virtualization, such as hypervisors, virtual machines, virtual appliances, virtual switches or routers, and virtual applications and desktops, and provide virtualization considerations across the 12 PCI DSS requirements.
It is clear that a new approach to security is required, with concepts like ‘secure by design’ making further sense in this multilayered environment. Also, a specialized security solution would be needed to provide visibility, control and proactive protection. The solution needs to protect all entities of the virtual environment and monitor data that is being shared between these entities.
While securing virtual environments, the physical components of the data center should not be ignored; they should continue to be secured just as they were prior to virtualization. The PCI guidelines point out that to ensure total security, the entire infrastructure hierarchy needs to be secured. This means that even if only one virtual machine (VM) carries cardholder data, both the hypervisor and the physical server need to be secured: since the VM sits on the hypervisor and the physical server, a compromise of either can lead to the VM being compromised.
Also, with the increasing buzz around cloud computing and cloud-based service offerings, there will be further security requirements and considerations to implement in order to create a secure cloud-based cardholder data environment. However, if cloud is considered the next level of virtualization, the additional security required sits on top of the current virtualization considerations.
An enterprise will one day need to move to a virtualized environment, given the pressure to carry out continuous optimization and increase utilization. This also means that ever-growing cardholder data will need to move into this environment. The current deterrents hindering this move are a lack of understanding of the environment and of the security requirements for achieving a PCI-compliant data center. Sooner or later, though, the compelling business advantage of virtualization will push a CIO to take that leap.