So, I was at Pulse this year and was the source of pretty constant ridicule for carrying around what felt like a fifty-pound laptop bag. It was horrible, inconvenient, and not even effective. I had hard copies of schedules that were out of date about 30 seconds after I clicked print. By the end of the conference I had calluses on my fingers and couldn't walk more than about ten steps without having to change hands. It was a constant reminder that I need to go to the gym more.
Anyway, interestingly enough, most vendors in the endpoint security space have adopted essentially this same approach in designing their technology. Incoming attacks get blocked by signatures, and in order to keep you "prepared," some companies create and update huge signature files, shoot them across the network, fold their hands and hope they get properly installed, and then get right back to work because the files they just sent are more or less immediately out of date. I can tell you from experience that lugging around a bulky bag of incomplete, outdated information is no way to do your job. It's also no way to keep your employees, and by extension your company, ahead of threats.
What companies need to do is focus on what defense-in-depth of the endpoint would really look like. It means you need a lot of things. You need antivirus and firewall protection. You need a patch process that actually works. You need centralized policy management that is easily enforceable. And, of course, you need all of this in real time. Until recently, that also meant you needed a lot of aspirin.
With its acquisition of BigFix last July, IBM invested in the convergence of security and systems management, two pieces of the operational infrastructure that will continue to become more intertwined. You can't just write the policy or obtain the patch; you also need to be confident that these changes and updates are continually being enforced at every single endpoint. Try automatically applying patches to computers that aren't turned on and you'll pretty quickly understand why convergence is so important.
Until this week there were four offerings in the Tivoli Endpoint Manager suite of products, all managed under the same roof: solutions for lifecycle management, security and compliance, power management, and patch management. This week, we were pleased to announce Tivoli Endpoint Manager for Core Protection, a solution designed to add another layer of depth to your endpoint security posture. Tivoli Endpoint Manager for Core Protection is the result of the relationship between IBM and Trend Micro, and offers real-time, lightweight threat protection that other endpoint security solutions can't really match.
I spoke earlier about how other vendors send huge signature files across the network, files that are outdated before you even figure out how to install them on your PC. Tivoli Endpoint Manager for Core Protection is different: while it does employ some signature files, it also leverages the cloud to reduce the amount of information that needs to be sent across the network, and provides the real-time protection that static signature files cannot. As the cloud is updated with the latest threat information, so too are all of the endpoints in conversation with that cloud.
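The split described here (a small local cache on the endpoint, with the authoritative, constantly refreshed verdicts living in the cloud) can be sketched in a few lines. This is a minimal illustration only: `LOCAL_SIGNATURES` and `query_cloud_reputation` are invented names for this example, not a real Trend Micro or Tivoli API.

```python
import hashlib

# Hypothetical stand-ins for a vendor's small local signature cache and
# cloud reputation service -- not a real Trend Micro or Tivoli API.
# (This digest is the SHA-256 of empty input, used purely as a placeholder.)
LOCAL_SIGNATURES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
}

def query_cloud_reputation(sha256: str) -> str:
    """Placeholder for a network call to a cloud reputation service.
    The server keeps verdicts current, so the client never has to
    carry a full, rapidly aging signature database."""
    return "unknown"

def classify(data: bytes) -> str:
    """Check the small local cache first, then fall back to the cloud."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in LOCAL_SIGNATURES:
        return "malicious"                  # fast path: known-bad hash cached locally
    return query_cloud_reputation(digest)   # fresh verdict from the cloud
```

The design point is that the endpoint only ships a small, frequently hit cache; everything else is a lightweight lookup against knowledge that is updated server-side the moment a new threat appears.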
This has proven extremely effective. In a recent third-party test, the Trend Micro technology blocked 100% of incoming malware (the second-place competitive product came in at 77%) by taking a multi-layer approach. Nearly all of the malware (97.5%) was detected and blocked in the first layer (URL reputation), and the remaining pieces were blocked in the two subsequent layers of defense. Here's where it gets even more impressive: an hour after the original test, they re-tested just the malware that had gotten through URL reputation, and this time none of it got through even that first layer of defense. This is protective technology that updates and hardens its defenses as new threats come in.
I don't think I really need to explain the importance of endpoint security to anyone reading this. We all have different things at stake, whether it's your bank accounts, your music collection, confidential information for work, or even just a photo album. What I can say is that 77% isn't good enough when it comes to protecting any of those things.
The strength of Tivoli Endpoint Manager is that it combines first-rate security with the systems management capabilities needed to ensure that protection is deployed across the entire infrastructure. Endpoint management is no longer about looking at technology in silos; it's about understanding why and how we can integrate different complementary offerings. Tivoli Endpoint Manager is built on that philosophy.
For more information about Tivoli Endpoint Manager, please visit:
One of the trends that we are seeing today is the convergence of security management and systems management. The better job you can do managing your infrastructure, the better equipped you will be to define and enforce security policies and controls across that infrastructure. There are few places where this convergence is more evident than the endpoint.
As the notion of a perimeter disappears, and we see the continued proliferation of traditional and non-traditional endpoints such as servers, desktop PCs, laptops, ATMs, point-of-sale devices, and self-service kiosks, organizations are looking for a comprehensive approach to managing and securing all of their endpoints. This includes, but is not limited to, identifying all of the endpoints in your environment, managing each endpoint's complete lifecycle, providing continuous security and compliance, deploying patches effectively and in a timely manner, and finally, managing each endpoint's power usage.
Tivoli Endpoint Manager, built on BigFix technology, can address all of those needs, but in this blog I want to focus on that last piece of the conversation, because it is not one that immediately comes to mind when people think about the most critical elements of managing an endpoint. However, we have seen that effective power management can actually pay for all of the other benefits that Tivoli Endpoint Manager provides. You can ultimately save money and the environment, and in the process deploy critical security and systems management controls across all of your endpoints (even the ones you didn't originally know you had).
In a recent article, Penn State wrote about their deployment of BigFix (now called Tivoli Endpoint Manager) and indicated that it could save them about $800,000 annually. A large university like Penn State has thousands of computers that can be included in a power management initiative, and many of these computers are only heavily used during peak hours. Tivoli Endpoint Manager allows the Penn State IT staff to automatically put these computers in sleep mode when they aren't in use. They anticipate not only that significant ROI, but also hope to reduce the amount of carbon dioxide released into the atmosphere by 60,000 tons.
One of the objections people often raise about power management for the endpoint is that it can interfere with the patch process. This is one of the areas where the convergence of security and systems management is so important. The policies you create and enforce from a systems management perspective need to work hand-in-hand with the policies related to security management. For that reason, Tivoli Endpoint Manager was built on the core concepts of convergence, scalability, and granular policy setting. It allows an IT staff to automatically wake computers at a designated time, apply required patches or enforce configuration policies, reboot, and then bring the endpoint back down to a hibernated, low-energy state, or shut it down altogether.
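The "wake at a designated time" step rests on a simple, well-documented mechanism: the Wake-on-LAN "magic packet," which is just 6 bytes of 0xFF followed by the target MAC address repeated 16 times, broadcast over UDP. As a minimal sketch of that mechanism only (not of how Tivoli Endpoint Manager implements its distributed wake-up):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A Wake-on-LAN magic packet: 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times (102 bytes in total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet so a NIC configured for WoL can power the machine on."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# e.g. wake("00:11:22:33:44:55") just before the patch window opens
```

An agent-based product would layer scheduling, relays for remote subnets, and verification on top of this, but the packet on the wire is the same.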
The Chichester School District provides another great example of power management savings. This regional school district in Delaware County, Pennsylvania, manages more than 2,000 Microsoft Windows desktops and 50 Microsoft Windows servers across a six-school network. The district used the power management capabilities of Tivoli Endpoint Manager to reduce computing energy costs by 70 percent. Their IT team also uses the distributed "Wake-on-LAN" functionality to distribute and install patches to machines that are turned off at night. This reduces energy use and confirms that machines are securely patched, without impacting employee productivity.
The integrated patch and power management capabilities of IBM Tivoli Endpoint Manager provide IT staff with real-time information on remote endpoints to simplify patch processes, conserve energy, and reduce on-site troubleshooting.
Today's post comes from Sandy Hawke, Manager, IBM Security Solutions.
I recently presented to the ISACA community on a live webinar. I focused the discussion on how to leverage automation to improve endpoint security and compliance. The archived webinar is available here. Just as a brief background, ISACA is an international professional association that focuses on all aspects of IT Governance and has over 95,000 members worldwide.
The online event drew a substantial audience, which is good, and yet a bit surprising in two ways. First, many of the recommendations I made were not radically new concepts, but basic foundational controls that all security professionals agree are critical for achieving and maintaining solid security and demonstrable compliance. So haven't they heard this story before?
Maybe not. And that's the second observation. Most of the ISACA membership is in the IT audit/risk management line of business. While they're not the folks implementing security technologies on a daily basis (i.e., "hands at keyboards"), they are keen to understand how security is implemented, how it works, how automation can be used to facilitate audits, and so on. And that's the new trend we've been witnessing. While the audit team knows what the policy controls should be, they may not know if or how these controls get enforced, maintained, monitored, and reported on: essentially, how security is "operationalized." The more they know about what's possible with respect to security operations and automation, the better they'll be at knowing what questions to ask IT operations during audits, what technologies to recommend, and so on.
Years ago, the IT audit/risk management organization and its activities were kept quite separate from the IT operations/IT infrastructure teams. At the time there were pretty good reasons to keep these groups as distinct as possible; you've all heard the "keeping the fox out of the hen house" analogy, right? The IT audit/risk management teams could set and enforce policy and conduct assessments that wouldn't be influenced by the operations staff. Well, with the advent of converging technologies, economic trends, and the increased importance of measuring security investments and compliance programs in real time, these groups are coming together, more so than ever before.
And technologies that can foster that type of trust, cooperation, and collaboration are indispensable.
When you consider the level of innovation you are required to deliver to the business, cloud is the right technology to do it.
Since the cloud lives on the Internet, which is built upon the "bricks" of open standards, it should not surprise you that there is a drive toward a ubiquitous Infrastructure as a Service (IaaS) open source cloud computing platform for public and private clouds.
In October 2011, my neighbors to the South at Rackspace founded the OpenStack Foundation.
Today, IBM is announcing that we will be joining the OpenStack Foundation as Platinum-level sponsors along with AT&T, Canonical, HP, Nebula, Rackspace, Red Hat and SUSE.
The OpenStack Foundation has a great blog post that covers what's happening today and what the next steps will be.
This is the start of a very exciting future for cloud computing and of course there will be more news coming from the OpenStack Foundation in the weeks to come at events like the OpenStack Design Summit & Conference in San Francisco on April 16-20 and IBM Impact in Las Vegas on April 29 - May 4.
I am at VMworld this week in San Francisco, CA, where IBM is a platinum sponsor.
With the growing adoption of cloud implementations (private, public, and hybrid), there's clearly a desire among the clients exploring solutions here to optimize and exploit their environments rather than take a maintenance, steady-state approach. It is timely, therefore, that Bala Rajaraman and Pratik Gupta, two IBM Distinguished Engineers, are presenting a Collaborative DevOps session at VMworld.
I sat with Pratik and Bala, and asked them what the impetus and motivation for developing this talk was. The crux of the pitch, as Pratik explained to me, is that current conditions have created four drivers, facing a majority of customers, that make a DevOps approach an imperative.
At the heart is companies' desire for agility. Line of Business leaders' desire to create value in their offerings is resulting in an urgent need for business agility. This in turn challenges the development organization to take an agile development approach. And as more and more deployments move to a cloud delivery model, they require an operational discipline that is not always present. Add to this the human element: if you're in an enterprise shop, you already know that this is not purely solvable by software. Cultural gaps exist between the Line of Business sponsors, the developers, and the operations team. Notions of completion, priority, and quality also differ.
Right now, companies are not getting this right: 50% of applications released into production are rolled back, and as many as 51% of projects are missing critical features. Quality and delivery on end-user expectations are clearly an issue.
Pratik and Bala will frame this problem space and then show how adopting a continuous delivery model can help address this.
If you're debating between this session and something else at this time, it might also help to know that the first 75 attendees will receive a snappy IBM jacket. The rest will get to pick one of an assorted set of items. There's also a giveaway of an iDoodah. Come by the IBM booth on the expo floor if you want details.
I went to an itSMF UK regional meeting last week. I haven't managed to get to our local meeting for a while, and I found I was being introduced to new members as someone who has been around 'since the beginning of ITIL'. Now that kind of thing, apart from making me feel old (which is, admittedly, a fair enough feeling at my age), also made me look back and think on where we (the ITIL community) have come from and where we are now.
The first thing that occurs to me in thinking back to the early days of ITIL is that we now find ourselves in a place that none of us imagined we would. Don't get me wrong, the original inventors and drivers of the ITIL idea were not short on confidence or vision, nor in seeing the benefits that documenting this aspect of best practice would bring. But I suspect that world domination of this industry sector by the word 'ITIL' was beyond even their best possible visions.
The key to the expansion of ITIL was that it quickly became about more than just the books. The ITIL advertising leaflets produced in the mid-90s coined the term 'ITIL philosophy' to represent this scope of ITIL. I suppose I should confess that I invented that phrase, and also the diagram that went with it – a version from about 1997 is shown here. The accompanying words suggested that, even back then, less than 1% of 'ITIL-related sales' were about the actual ITIL books, and the rest were everything that had grown up around them.
The fact that I couldn't even hazard a guess at what that percentage might be today indicates a few pretty fundamental changes.
When I was writing those things in 1996-1998, I felt I could pretty much 'take in' what was going on related to ITIL, and even knew most of the people developing and delivering new ideas. Nowadays no one can honestly claim to be able to do that.
What is 'ITIL-related' has become a much more debatable concept. Whatever its faults might have been (and there were many), ITIL was just about alone in its market space. The initiatives kicked off by ITIL have spawned fellow travellers, such as COBIT, ISO 20000 and others. The fact that I could easily start a long-running – and probably vitriolic – debate on the social media pages by asserting which are and which are not ITIL-derived, ITIL alternatives, etc. indicates that this is now a loosely bounded region. That makes any assessment of its scale, scope and success difficult.
Some other things have changed too. Nowadays the maturity of the ITIL ideas means most players are focused on market share rather than on growing the sector itself. That means more competition than there used to be. Nonetheless, examples of collaboration are still easily found. Probably the best example is the 'Back2ITSM' Facebook group – a place where free advice, constructive debate and openly shared thoughts are still the norm.
The itSMF was born in 1991, and played – probably – the major coordinating role in promoting the idea, importance and approaches of service management. Like ITIL, itSMF predates the term 'service management', having started as the ITIMF. Even here we have seen a lot more competition during the last third of its lifetime: both competition from other community organisations and also considerable internal competition. I hope itSMF will evolve from this to carry on delivering benefit to its members. I am a bit too frightened to work out what percentage of my time has been given to itSMF over the last 17 years – or at least frightened of what my employers over that period might think. But that commitment does make me wish hard for its future health.
So, looking back should make us appreciate where we are now – nostalgia can be deceptive, for usually the past wasn't better; progress is exactly that – going forward and getting more. And wherever ITIL is now, IT service management has come a wondrous way in the last 20 years. Global technology changes have made a difference to that journey; we've seen personal computing and the internet make all but unbelievable levels of change. We may well see cloud do the same; personally I think cloud might do that by freeing us from some of the technical baggage and letting us see and address real service management issues, without the obfuscation of technology issues or the opportunity to hide behind them any more.
We've seen a move from books being the go-to source of wisdom when ITIL started to an amazing range of information sources. Nowadays your typical service management practitioner will expect their influences to come via social media, electronically delivered white papers and the like. Interestingly, in many cases, they will also expect them to come for free, and that throws a real challenge at the thought leadership business. If ITIL 4 ever happens, I think it will be a radically different entity from versions 1-3.
Where I want to see ITSM going is towards SM. IT is now so pervasive that it is everywhere, which to me means that ITSM cannot be a subsection of overall SM any more, because it logically applies to everything, since all services now depend on IT. Nevertheless, IT has treated SM well, and – after some effort – has taken it seriously. I hope those lessons will work their way into broader adoption, and that we will see an improved – and, critically, an integrated – approach to service management across enterprises because of that. I am driven to optimism in this (not my natural state, you understand, so it is noteworthy) by the fact that, alongside this blog, I am involved just this same month in a webinar and an article for IBM's SMIA series on the idea that IT is now spreading its ideas – and delivering its technology, specifically its evolved software solutions – to the broader enterprise.
I wonder what we will be saying in another 20 years looking back – maybe ITIL will survive another 20 years, maybe not, but I am certain service management will progress and improve.
And the top two names I would put here are Pete Skinner and John Stewart – perhaps our least sung heroes, especially the late Mr Skinner – but pivotal all the same.
I don't plan to, and hope no-one else is tempted – there are far more constructive things for intelligent service management practitioners to progress knowledge about.
And if you are interested (sad?) enough to be reading this, then you should be part of that group if you aren't already.
What is IBM Tivoli Software? We know you want the short version. Steven Wright of Tivoli Software breaks it all down for us in less than 7 minutes on a white grease board. Check it out while you have your morning coffee or afternoon tea, or while you get your miles in on the treadmill or trail with your smartphone. Then visit ibm.com/software/tivoli for more details on how IBM Tivoli Software can help you run a smarter business.
For those new to the blog, IBM SmartCloud Control Desk was one of the new announcements made at Pulse. It is a service catalog/service desk based on the IT Infrastructure Library™ (ITIL™) V3, ideal for streamlining incident, problem, change, configuration, release, and IT asset management.
This service desk offering will give customers a process control center for managing change and configuration, assets, incidents and problems, service requests, software licenses, and more.
The announcement letter (212-051) was published on March 13 and we now have a very cool demo that showcases the solution.
There have been a lot of good discussions on Back2ITSM recently. I find the site a wonderful reminder of two universal constant truths: 'everything changes' and 'there is nothing new under the sun'. They might seem contradictory at first, yet the older I get the more both seem true.
Firstly, if you aren't looking at the Back2ITSM group on Facebook then you are missing out – go sign up, now! Let me explain what it is, and how it is brand new and full of ITSM tradition at the same time.
Secondly, it is about people talking with each other. That's the bit that is the same as it's always been. The willingness to share ideas and help others – even those in competing organisations – is just like many itSMF regional meetings I have been to in the UK, Canada and New Zealand; except that now we are in all three at the same time.
Of course, social media isn't new, and Facebook is not the newest kid in town. But what is 21st century about this kind of group is the immediacy of comment and dialogue and the wide spectrum of simultaneous participants it allows. Since it has active members from all across the world, there is constant input and comment.
OK, so we all know that the technology for this has been around a while. After all, it is 'just' real-time input to a forum – and we now have about 20 or 30 people across the world presenting their opinions to an audience of 500+ (lurking is positively encouraged). For me what is important is precisely that I am not aware of the clever technology, don't feel all the time that I am using a novel means of communicating, and don't notice just how damned clever the whole thing is. With this group I have reached stage three on my own 'using technology' scale: comfort and taking it for granted.
Stage 1 is when you are using some new way of doing things just because you can. This isn't just about IT, of course; many of us may recall how such things have affected our choice of travel (my example is choosing an airline because they had A380s on the route – even if a bit dearer, I had never been on one of them before).
Stage 2 is when the means is no longer overwhelming the ends – you're using it now because it is logical to do so, and it is delivering value. But you are still very aware of how cool it is. And you probably keep telling other people how cool it is, too.
Stage 3 is when your focus is totally on what you are doing. I can now just read what is written and comment if I have something to say. You know it's a normal conversation because it goes off at tangents; people get flippant, say daft things, agree, argue, make subtle (and sometimes not so subtle) digs at each other, and launch jokes that no-one else notices. In short, it's normal human conversation, without thinking about how you are achieving it, where all the people are, or what time it is there.
And to me this is a good motif for successful technology. The implementation isn't properly over just because the technology is there and running. Real success is when people don't notice it any more, but just get on with using it, unconsciously – as part of their everyday lives.
It's one more example of how success is about being invisible. The first time I flew in an A380 I was excited about it – the last time, I was watching a movie before we reached the runway. That's success. (OK, so there was a little re-attention on the technology after the Qantas A380 had an engine explode, but I am back to ignoring it again now.)
So the important lesson and message that I see is that we need a customer perspective on the introduction of new technology. And maybe what you actually want is for people to stop telling you how impressed they are, because then they are getting on with using it – which was, after all, the real point of the exercise, wasn't it?
When IBM first kicked off the Dynamic Infrastructure announcement at the Pulse 2009 conference, we heard some rumblings about whether Dynamic Infrastructure was just another executive buzzword or whether there was real meat behind "the concept."
Doug McClure summarized the feeling well in his blog: “While this is great for executive level folks, I think we needed to drive this message into consumable and actionable things that lower level technical attendees could take back to their companies. They may be the ones who need to execute and show how previous or planned investments could help their company become smarter and more dynamic.”
After IBM's announcement yesterday of new Dynamic Infrastructure offerings, critics will be hard-pressed to argue that Dynamic Infrastructure isn't actionable. Not only did IBM announce new products and services in the areas of Information Infrastructure, Virtualization, Service Management, and Energy Efficiency, but it also demonstrated how these solutions are helping three of our clients – the Taiwan High Speed Rail Corporation, Tricon Geophysics, and the United States Bowling Congress – build new, more dynamic infrastructures to help reduce costs, improve service, and manage risk.
A key piece of the announcement is the IBM Service Management Center for Cloud Computing, which now includes new IBM Tivoli Identity and Access Assurance, IBM Tivoli Data and Application Security, and IBM Tivoli Security Management for z/OS, for Cloud environments. I don’t know about you, but all that’s more meat than this vegetarian can handle. :)
To continue driving home the Dynamic Infrastructure success, IBM is sponsoring a variety of events where the public can learn more. Register for a free, local Pulse Comes to You event to see how Service Management is a key component for enabling a Dynamic Infrastructure for a Smarter Planet.