One of the trends that we are seeing today is the convergence of security management and systems management. The better job you can do managing your infrastructure, the better equipped you will be to define and enforce security policies and controls across that infrastructure. There are few places where this convergence is more evident than the endpoint.
As the notion of a perimeter disappears, and we see the continued proliferation of traditional and non-traditional endpoints, such as servers, desktop PCs, laptops, ATMs, point-of-sale devices, and self-service kiosks, organizations are looking for a comprehensive approach to managing and securing all of their endpoints. This includes, but is not limited to, identifying all of the endpoints in your environment, managing the complete lifecycle of each endpoint, providing continuous security and compliance, deploying patches effectively and in a timely manner, and finally, managing the power usage of each endpoint.
Tivoli Endpoint Manager, built on BigFix technology, can address all of those needs, but in this blog I want to focus on that last piece of the conversation, because it is not the one that immediately comes to mind when people think about the most critical elements of managing an endpoint. However, we have seen that effective power management can actually pay for all of the other benefits that Tivoli Endpoint Manager provides. You can ultimately end up saving money, the environment, and, in the process, deploy critical security and systems management controls across all of your endpoints (even the ones you didn't originally know you had).
In a recent article (click here), Penn State wrote about their deployment of BigFix (now called Tivoli Endpoint Manager) and indicated that it could save them about $800,000 annually. At a large university like Penn State, there are thousands of computers that can be included in their power management initiative, and many of these computers are only heavily used during peak hours. Tivoli Endpoint Manager allows the Penn State IT staff to automatically put these computers in sleep mode when they aren't in use. They are anticipating not only that significant ROI, but are also hoping to reduce the amount of carbon dioxide released into the atmosphere by 60,000 tons.
One of the objections that people often raise about power management for the endpoint is that it can interfere with the patch process. This is one of the areas where the convergence of security and systems management is so important. The policies that you create and enforce from a systems management perspective need to work hand-in-hand with the policies related to security management. For that reason, Tivoli Endpoint Manager was built on the core concepts of convergence, scalability and granular policy setting. It allows an IT staff to automatically wake computers at a designated time, apply required patches or enforce configuration policies, reboot, and then bring the endpoint back down to a hibernated, low-energy state, or shut it down altogether.
The Chichester School District (click here) provides yet another great example of power management savings. This regional school district in Delaware County, Pennsylvania, manages more than 2,000 Microsoft Windows desktops and 50 Microsoft Windows servers throughout a six-school network. The Chichester School District implemented energy conservation using the power management capabilities of Tivoli Endpoint Manager to help reduce computing energy costs by 70 percent. Their IT team also uses the distributed “Wake-on-LAN” functionality to distribute and install patches to machines that are turned off at night. This allows for a reduction in energy use and confirms machines are securely patched—without impacting employee productivity.
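The wake-then-patch approach that both of these stories rely on is built on the standard Wake-on-LAN mechanism: a UDP broadcast "magic packet" consisting of six 0xFF bytes followed by the target machine's MAC address repeated sixteen times. As a rough illustration of what happens under the hood (this is not Tivoli Endpoint Manager's actual code, and the MAC address below is hypothetical):

```python
# Minimal Wake-on-LAN sketch: build and broadcast a "magic packet".
# A magic packet is 6 bytes of 0xFF followed by the target MAC repeated 16 times.
import socket

def build_magic_packet(mac: str) -> bytes:
    """Return the 102-byte WOL magic packet for a MAC like '00:11:22:33:44:55'."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be exactly 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local subnet (UDP port 9 is conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

packet = build_magic_packet("00:11:22:33:44:55")
print(len(packet))  # 102: 6-byte header + 16 x 6-byte MAC
```

A management server can send such a packet on a schedule, apply patches once the machine is up, and then return it to a low-power state, which is essentially the cycle described above.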
The integrated patch and power management capabilities of IBM Tivoli Endpoint Manager provide IT staff with real-time information on remote endpoints to simplify patch processes, conserve energy and reduce on-site troubleshooting.
Today's post comes from Sandy Hawke, Manager, IBM Security Solutions.
I recently presented to the ISACA community on a live webinar. I focused the discussion on how to leverage automation to improve endpoint security and compliance. The archived webinar is available here. Just as a brief background, ISACA is an international professional association that focuses on all aspects of IT Governance and has over 95,000 members worldwide.
The online event drew a pretty substantial audience, which is good, and yet a bit surprising in two key ways. First of all, many of the recommendations I made to the audience were not radically new concepts, but basic foundational controls that all security professionals agree are critical for achieving and maintaining solid security and demonstrable compliance. So haven't they heard this story before?
Maybe not. And that's the second observation. Most of the ISACA membership is in the IT audit/risk management line of business. While they're not the folks who are implementing security technologies on a daily basis (i.e., "hands at keyboards"), they are keen to understand how security is implemented, how it works, how automation can be used to facilitate audits, and so on. And that's the new trend we've been witnessing. While the audit team knows what the policy controls should be, they may not know if or how these controls get enforced, maintained, monitored and reported on – essentially, how security is "operationalized." The more they know about what's possible with respect to security operations and automation, the better they'll be at knowing what questions to ask IT operations during audits, what technologies to recommend, and so on.
Years ago, the IT audit/risk management organization and its activities were kept quite separate from the IT operations/IT infrastructure teams. And at the time there were pretty good reasons to keep these groups as distinct as possible – you've all heard the "keep the fox out of the hen house" analogy, right? The IT audit/risk management teams could set and enforce policy and conduct assessments that wouldn't be influenced by the operations staff. Well, with the advent of converging technologies, economic trends, and the increased importance of measuring security investments and compliance programs in real time, these groups are coming together. More so than ever before.
And technologies that can foster that type of trust, cooperation, and collaboration are indispensable.
Well, we are well into 2012 now and we have just about got through the ‘my predictions for 2012’ phase and into ordinary routines again. Whatever the predictions, as with most years, I predict that 2012 will look a lot like a newer version of 2011.
There is still talk of recession, companies that struggled for funding in 2011 are no richer, Cloud is still talked about by a lot more people than understand it.
On a personal level, 2012 has already delivered some of the improvements planned in 2011 – and I hope the same will happen workwise. The next major thing on my work horizon is IBM’s big service management show – Pulse. Back again at the MGM Grand in Las Vegas, we are promised it will be bigger and better than ever. I understand that bigger is important in Vegas, but I am usually even keener on better. Actually, to be fair, I am delighted that ‘my bit’ at Pulse looks like being bigger this year – with not one but two chances to deliver the cloud-readiness simulator on the weekend before the show itself starts. In fact, there will be a strong focus on the simulator this year, with our team on the exhibition floor to explain what it is, why it matters and how it can help you.
Of course – as I implied above – this isn’t exactly new, but it is proven. There will be lots of new stuff available too – geeks welcomed and catered for, with lots of ‘future possibles’ and indeed a vision of some possible futures. But service management’s primary focus is not on what might happen next year; it has always been about delivering value this year. In fact, one of my favourite aspects of service management is how it rests on widely applicable principles, even though how they are applied might alter. For example, while change management processes in a cloud environment might need different considerations to make them most effective, the basics remain. I was working in service management long before I ever touched a computer. I remain constantly delighted to discover that lessons learned 30 years ago in supply and transport are still relevant to the 21st-century IT-based services we manage today.
So, if you are going to be at Pulse come along and tell me whether you agree that old-fashioned service concepts are still valuable – or come and explain why dinosaurs like me should be swept away by the meteor strike that is cloud. Either way – at Pulse or elsewhere – I look forward to good, informed and enjoyable debates. Good to think of the new year building on the successes of the old – at home and at work.
 If you follow me on twitter - @ivormacf - you will know where and when I will be in terms of events. Useful, whether you want to know how to find or to avoid me – same thing works both ways.
Last week, I attended my first IBM Pulse conference. I really enjoyed the sights and sounds of Vegas, and met many of my Tivoli colleagues for the first time. I also probably walked the equivalent of 15 miles over the five days within the mammoth MGM facility. But what I found most valuable over the five days were my interactions with our customers and business partners.
On Day 1 of the conference, my focus was the ISM Simulator workshop that I helped coordinate. Given that the workshop was a) taking place prior to any other Pulse activities, b) located in the bowels of the MGM hotel, and c) three hours in duration, I was a bit apprehensive that all the customers and business partners who had RSVP'd would actually show up. But when people started rolling in 30 minutes before the start time, I was confident that this workshop was going to be a success.
When we got started, we had 21 participants sitting around four tables, which is just about ideal for this role-playing workshop. Like other simulator workshops that I have attended, it started out a bit chaotic, as participants tried to process the firehose of information being thrust upon them. By the end of the three hours, they had come full circle, and were effectively working together to the tune of a $5 million profit for their hypothetical shipping company.
As I chatted with some of them after the session, and listened in on some of their video testimonials, the words I heard most often were "eye-opening", "outstanding" and "insightful".
On Monday and Tuesday, I worked on the expo floor and showed off our cool new ISM Simulator video game. The game allowed users to experience various issues affecting service management and corporate profitability in a simulated organization. At the booth, I got great feedback from customers and partners who, by virtue of playing the game, were able to get a better grasp of the sometimes abstract concepts of service management.
You can play the IBM Service Management Mission game here.
All in all, it was a great conference, and stay tuned for the video from the workshop!
Yes, I love being one of the ambassadors for IBM’s Client Reference Program, a structured platform that gives our valued Clients many opportunities to promote their unique capabilities and stand tall in an otherwise very competitive market. The IT revolution, the ease of the internet, changes in consumer behavior and more have all added to this competitiveness.

While I write this blog, two things that I studied in Biology during my school days are shouting aloud from my mind: one, Darwin’s ‘survival of the fittest’ and two, ‘legume-rhizobial symbiosis’. Interestingly, these biological phenomena have real examples in economics too. A symbiotic relationship with clients and peers, thus, is ‘very’ crucial to surviving the Darwinian marketplace. And what better way than registering for IBM’s Client Reference Program? :-)
For me, it’s great being a Client Reference Specialist for Tivoli. Working in collaboration to create Reference Profiles for our Clients has brought a lot of advantages. Networking opportunities with my fellow IBMers, Business Partners and Clients from across industries are just the ‘cake’, but the real ‘icing’ is my continuous learning about IBM’s Tivoli software for 'Integrated Service Management', which “provides smarter solutions and the expertise you need to design, build and manage a dynamic infrastructure that enables you to improve service, reduce cost and manage risk.” Yes, I’m always in awe of how IBM’s Tivoli solutions have helped our Clients overcome their challenges.
PS: Rebecca Wissinger, in her blog ‘IBM Client Activities at Pulse 2011’, talks about the ways IBM is saying THANK YOU to our immensely valued, extraordinary Clients at Pulse 2011. If you are attending Pulse 2011, you will not want to give her blog a miss.
I’m a big fan of IBM’s mission of a Smarter Planet. As an IBMer based out of Bangalore, India, I get inspired by Big Blue’s rich history and the impact it has been creating on the world’s business systems.
This week, India is celebrating the “Joy of Giving Week” (JGW), a pan-India initiative started in 2009 to celebrate a “festival of giving” to the needy and to our society, through various forms of giving: time, skills, resources, money and more. JGW is held annually for a week, starting on a Sunday and ending on a Saturday. These dates always include Mahatma Gandhi’s birth anniversary on the 2nd of October.
Donation boxes are kept in IBM offices across all locations. Interestingly, as I was making a list of things to donate to bring smiles to a few innocent faces, a thought occurred to me... and then my joy knew no bounds. I realized that being associated with IBM, which works towards giving back to our Earth with a mission of making it a Smarter Planet through innovations in products and services, is a joy in itself. A joy of giving to the world we live in, for our smart and sustainable living.
Further, I love my job, which is working on Tivoli Success Stories for our IBM Client References. Many of these stories talk about the work that we are doing with our customers and their implementation of Smarter Planet solutions. Our customers, using these solutions, are having a significant impact on making our lives better and more fulfilling. And, YES... I can see the ‘Joy of Giving’ being passed on from IBM to our clients and to the world :)
What is IBM Tivoli Software? We know you want the short version. Steven Wright of Tivoli Software breaks it all down for us in less than 7 minutes on a white grease board. Check it out while you have your morning coffee or afternoon tea, or while you get your miles in on the treadmill or trail with your smart phone. Then visit ibm.com/software/tivoli for more details on how IBM Tivoli Software can help you run a smarter business.
For MSPs, IBM is providing a bundle of services and support in the way of marketing skills, technical expertise and financing options. On the marketing side, MSPs will have the ability to better target their customers and generate demand for their services through IBM education that includes topics such as developing effective marketing plans and exploiting the burgeoning social media space. Additionally, MSPs can sell IBM SmartCloud services under their own brand names. On the technical side, MSPs will have access to four new "Centers of Excellence" (located in China, Japan, Germany and New York City) where they can collaborate with IBM technical experts to build their cloud services, and connect with other IBM ISVs. In terms of funding their efforts, IBM announced a financing offer which includes 12-month, 0% loans for IBM Systems, Storage and Software, and allows MSPs to defer payments for up to 90 days.
For end users in the SMB space who often lack the necessary IT skills, this is a great opportunity to leverage local technology providers and take advantage of the cost-effective "pay-as-you-go" model that cloud computing affords them. In addition, end users will have the confidence of knowing that the services provided were built on an IBM platform.
Finally, for IBM, this is a great opportunity to expand its cloud ecosystem and leverage the growing population of MSPs, who continue to gain traction in the cloud computing space for SMBs.
Today we trust computers – literally and unconsciously – with our very lives. I was reflecting on this level of trust when I got £50 of cash out of my local ATM and declined the offer of a receipt. It seems I now have total faith that the computer systems will ‘get it right’. I’ve come a long way from keeping all my own cheque books to cross-check against later bank statements.
Now, combining that faith with a little healthy British cynicism, and triggered by watching the Olympics tennis finals on TV, a mischievous but irresistible thought came to my mind.
It used to be that when a ball hit the ground near the line we relied on the human eye to say whether it was ‘in’ or ‘out’. That caused disagreements and discussion – and, in tennis, often sulking, swearing and the full range of petulant behaviour.
Nowadays that is all replaced by referencing the technology. When there is doubt – or one of the players questions a call – then we simply ask the computers. What we get then is a neat little picture representing the appropriate lines on the court and a blob showing where the ball hit. So, problem solved: disappointment still for one player but, so it seems, total acceptance that the computer is right. After all, it is an expensive system working away inside a very expensive box – it must be right, mustn’t it? Or to put it another way, ‘computer says in’ – who would argue?
But what occurred to me is this. All we can actually see is some boxes around the court, and a stylised display with a blob on it. That could be delivered by one person with a tablet showing the court lines, touching the screen where they think it landed. Very cheap, and it would still solve all the arguments because – naturally – everyone trusts technology.
Now – of course, and before anyone calls their lawyers – I am not suggesting for the merest moment that there is the slightest possibility of such a thing happening. But it’s fun to think it might be possible. There is little public awareness of what accuracy the system – and here I presume it does really exist – works to. If you dig around on the web you can find out (the answer, by the way, for tennis is 3.6mm). You also find out there is some very minor grumbling and questioning going on. But that seems to be at geek level – in everyday use the audience stands instantly convinced.
So, thinking it through, there are a couple of interesting consequences for real IT life:

Once you realise that trust depends on the quality of presentation at least as much as on accuracy, should you focus more on that? Certainly you have to take presentation seriously, because the corollary is that if you deliver perfection but don’t make it look good, then no one will believe it even though you are right.
Whose responsibility is it to check – and is it even possible? I suspect this discussion will take us into the territory of ‘governance’. But even before we get there it implies that User Acceptance Testing needs to do more than just look at things. Of course yours does, doesn’t it?
I guess my big issue is to wonder how comfortable we are – as the deliverers of technological solutions for our customers, and especially our users – with such blind faith. Of course, people being the irrational things they undoubtedly are, that blind faith in the detail is often accompanied by a cynical disregard for overall competence – think of the faith we have in ATMs and online bank account figures alongside the apparent level of trust in the banking industry as a whole.
As a little codicil to the story, I registered with a new doctor yesterday – the nurse asked me questions, took my blood pressure and so on, and loaded all the data she collected into a computer. The system was clearly ancient, with a display synthesising what you typically got on a DOS 3.0 system. First thought: ‘OMG, why are they using such old software? That can’t be good.’ Second thought: ‘They’ve obviously been using it for years, so they really understand it, have ironed out all the bugs and it does what they need. It ain’t broke, so they aren’t fixing it.’ But my instinctive reaction of suspicion towards it for not being pretty was there, and I had to consciously correct myself.
Would you, as a service provider, prefer more questioning of what you package up and present to your customers and users, or are you happy to have that faith? My own view is that the more blind faith they have in you, the more the retribution will hurt if things do go wrong. Or perhaps that’s just me being cynical again?
One statement: simultaneously reassuring and terrifying.
Firstly, it’s reassuring because anything that works towards the realisation that development and operations are not really separated by any kind of wall has to be a good thing. Of course there are different areas of focus at different times in the life of a service, but they should all have the same aim – delivering what is needed in the best possible way. We already all knew that; it is so obviously sensible that who would vote against it? The equally obvious fact that we then don’t do it is one for the psychologists and later blogs, but it does lead me into my other reaction:
The horror that we should be 50+ years into IT services before this seems important enough for people to give it a trendy name. How on earth have we survived this long without a “collaborative and productive relationship” between the people who build something and the people who operate it? And bear in mind both those groups are doing it for the same customer (in theory anyway).
To be fair to IT people, though, perhaps this is an engineering practice we have simply inherited. Who remembers the days when getting your car repaired was unrelated to buying it? You bought it in the clean and shiny showroom at the front of the dealer, and took it to the oily shed around the back if it broke. One of the things that has seen a step-change in the car industry – and is also changing ours and most others – is the realisation that we are now all delivering services and not products. So we are finally realising that long-term usability and value are what define success, not a shiny new – but fragile – toy. In fact, thinking of toys, we all recall the gap between expectation and delivery of our childhood toys – the fancy and expensively engineered product that broke by Christmas evening compared to the cheap and solid – be it doll or push-along car – that lasted until we outgrew it.
The car industry saw that happen – and we now have companies leading their adverts with a promise of lifetime car driving with their latest vehicles, with the mould really having been broken by Asian manufacturers offering 5-year unlimited-mileage warranties. That was about selling a self-controlled transport service instead of a car – and really that is what most of us want. Amazing strides on that front are, of course, being taken by companies like Zipcar, who have thought simply enough to see there is no absolute link between that service (self-controlled transport) and car ownership. (Some of us want other things from a car, of course – but that just leads us into the key first step of any successful service: know what your customer(s) want.)
The reason I get so interested in all this is that it’s basically what I’ve been saying for the last 20 years – my big advantage is that I came into IT from a services environment (I worked in a part of our organisation called ‘services group’) – and I never really understood why IT needed such a large and artificial wall between build and do. ITIL was (in large part) set up to try to break down the walls – initially an attempt to set up serious best practices and methodologies within operations to match what was already alive and well in development (hence the original name of the project – GITIMM, to mirror SSADM).
So … what am I saying? Please take devops seriously if that is what is needed to get better services. The complexity we need to address now means we have to stop maintaining any practices that prevent good ongoing service design and delivery. If giving it a name and a structure helps then let’s go there.
One of the things I am most proud of in the books I have contributed to is that we made up a fancy name for something good people already did (in our case, Early Life Support) – the intention was to give it profile so that people would add it to job roles, actually start to plan for it and then, finally, do it better.
Of course that brings with it the chance of looking like the emperor in his new clothes once you examine the detail and originality too carefully. But that’s good too – clever and original usually = doesn’t work too well at first. Solid old common sense (eventually) seems to me to offer a much firmer foundation to build on.
We need good foundations because the situation is actually a lot more complicated than we pretend – multiple customers, other stakeholders, users, operations as users – enough for a dozen more blogs, a handful of articles and a book. So … I’d better get on writing – and maybe so should you?
 Seems so to me anyway – the Delphic oracle was widely believed, responsibility free and most of those who used it didn’t understand where the knowledge came from.