One of the trends that we are seeing today is the convergence of security management and systems management. The better job you can do managing your infrastructure, the better equipped you will be to define and enforce security policies and controls across that infrastructure. There are few places where this convergence is more evident than the endpoint.
As the notion of a perimeter disappears, and we see the continued proliferation of traditional and non-traditional endpoints, such as servers, desktop PCs, laptops, ATMs, point-of-sale devices, and self-service kiosks, organizations are looking for a comprehensive approach to managing and securing all of their endpoints. This includes, but is not limited to, identifying all of the endpoints in your environment, managing each endpoint's complete lifecycle, providing continuous security and compliance, deploying patches effectively and in a timely manner and, finally, managing each endpoint's power usage.
Tivoli Endpoint Manager, built on BigFix technology, can address all of those needs, but in this blog I want to focus on that last piece of the conversation, because it is not one that immediately comes to mind when people think about the most critical elements of managing an endpoint. However, we have seen that effective power management can actually pay for all of the other benefits that Tivoli Endpoint Manager provides. You can ultimately save money and the environment, and in the process deploy critical security and systems management controls across all of your endpoints (even the ones you didn’t originally know you had).
In a recent article (click here), Penn State wrote about their deployment of BigFix (now called Tivoli Endpoint Manager) and indicated that it could save them about $800,000 annually. A large university like Penn State has thousands of computers that can be included in its power management initiative, and many of these computers are only heavily used during peak hours. Tivoli Endpoint Manager allows the Penn State IT staff to automatically put these computers in sleep mode when they aren’t in use. They anticipate not only a significant ROI (about $800,000 annually), but also hope to reduce the amount of carbon dioxide released into the atmosphere by 60,000 tons.
One of the objections that people often raise when it comes to power management for the endpoint is that it can interfere with the patch process. This is one of the areas where the convergence of security and systems management is so important. The policies that you create and enforce from a systems management perspective need to work hand-in-hand with the policies related to security management. For that reason, Tivoli Endpoint Manager was built on the core concepts of convergence, scalability and granular policy setting. It allows an IT staff to automatically wake computers at a designated time, apply required patches or enforce configuration policies, reboot, and then bring the endpoint back down to a hibernated, low-energy state, or shut it down altogether.
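To make that cycle concrete, here is a minimal sketch of the flow in Python. The helper functions are hypothetical placeholders for whatever your endpoint-management tooling actually provides; this is illustrative logic, not the Tivoli Endpoint Manager API:

```python
# A minimal sketch of the wake -> patch -> reboot -> power-down cycle.
# The helpers below are hypothetical stand-ins, not a real product API.

def wake(endpoint: str) -> None:
    print(f"waking {endpoint} (e.g. via Wake-on-LAN)")

def apply_patches(endpoint: str) -> None:
    print(f"applying required patches and configuration policies to {endpoint}")

def reboot(endpoint: str) -> None:
    print(f"rebooting {endpoint} to complete pending installs")

def power_down(endpoint: str) -> None:
    print(f"returning {endpoint} to a hibernated, low-energy state")

def maintenance_window(endpoints: list[str]) -> None:
    """Run the patch cycle on each endpoint, then power it back down."""
    for ep in endpoints:
        wake(ep)
        apply_patches(ep)
        reboot(ep)
        power_down(ep)

maintenance_window(["lab-pc-01", "lab-pc-02"])  # illustrative endpoint names
```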
The Chichester School District (click here) provides yet another great example of power management savings. This regional school district in Delaware County, Pennsylvania, manages more than 2,000 Microsoft Windows desktops and 50 Microsoft Windows servers throughout a six-school network. The district implemented energy conservation using the power management capabilities of Tivoli Endpoint Manager to help reduce computing energy costs by 70 percent. Its IT team also uses the distributed “Wake-on-LAN” functionality to distribute and install patches to machines that are turned off at night. This reduces energy use and confirms that machines are securely patched – without impacting employee productivity.
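As an aside, the “Wake-on-LAN” mechanism mentioned here is refreshingly simple at the protocol level: a “magic packet” is just a UDP broadcast of six 0xFF bytes followed by the target machine’s MAC address repeated sixteen times. A minimal Python sketch (the MAC address below is made up):

```python
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast a Wake-on-LAN 'magic packet' for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16   # six 0xFF bytes, then the MAC x 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

send_magic_packet("00:11:22:33:44:55")  # illustrative MAC address
```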
The integrated patch and power management capabilities of IBM Tivoli Endpoint Manager provide IT staff with real-time information on remote endpoints to simplify patch processes, conserve energy and reduce on-site troubleshooting.
Today's post comes from Sandy Hawke, Manager, IBM Security Solutions.
I recently presented to the ISACA community on a live webinar. I focused the discussion on how to leverage automation to improve endpoint security and compliance. The archived webinar is available here. Just as a brief background, ISACA is an international professional association that focuses on all aspects of IT Governance and has over 95,000 members worldwide.
The online event drew a pretty substantial audience, which is good, and yet a bit surprising in two key ways. First of all, many of the recommendations I made to the audience were not radically new concepts, but basic foundational controls that all security professionals agree are critical for achieving and maintaining solid security and demonstrable compliance. So haven't they heard this story before?
Maybe not. And that's the second observation. Most of the ISACA membership is in the IT audit/risk management line of business. While they're not the folks who are implementing security technologies on a daily basis (i.e. "hands at keyboards"), they are keen to understand how security is implemented, how it works, how automation can be used to facilitate audits, and so on. And that's the new trend we've been witnessing. While the audit team knows what the policy controls should be, they may not know if or how these controls get enforced, maintained, monitored and reported on – essentially, how security is "operationalized." The more they know about what's possible with respect to security operations and automation, the better they'll be at knowing what questions to ask IT operations during audits, what technologies to recommend, and so on.
Years ago, the IT audit/risk management organization and its activities were kept quite separate from the IT operations/IT infrastructure teams. And at the time there were pretty good reasons to keep these groups as distinct as possible – you've all heard the "keeping the fox out of the hen house" analogy, right? The IT audit/risk management teams could set and enforce policy and conduct assessments that wouldn't be influenced by the operations staff. Well, with the advent of converging technologies, economic trends, and the increased importance of measuring security investments and compliance programs in real time, these groups are coming together. More so than ever before.
And technologies that can foster that type of trust, cooperation, and collaboration are indispensable.
What is IBM Tivoli Software? We know you want the short version. Steven Wright of Tivoli Software breaks it all down for us in less than 7 minutes on a white grease board. Check it out while you have your morning coffee or afternoon tea, or while you get your miles in on the treadmill or trail with your smartphone. Then visit ibm.com/software/tivoli for more details on how IBM Tivoli Software can help you run a smarter business.
For MSPs, IBM is providing a bundle of services and support in the way of marketing skills, technical expertise and financing options. On the marketing side, MSPs will be able to better target their customers and generate demand for their services through IBM education on topics such as developing effective marketing plans and exploiting the burgeoning social media space. Additionally, MSPs can sell IBM SmartCloud services under their own brand names. On the technical side, MSPs will have access to four new "Centers of Excellence" (located in China, Japan, Germany, and New York City) where they can collaborate with IBM technical experts to build their cloud services, and connect with other IBM ISVs. In terms of funding their efforts, IBM announced a financing offer that includes 12-month, 0% loans for IBM Systems, Storage and Software, and allows MSPs to defer payments for up to 90 days.
For end users in the SMB space who often lack the necessary IT skills, this is a great opportunity to leverage local technology providers and take advantage of the cost-effective "pay-as-you-go" model that cloud computing affords them. In addition, end users will have the confidence of knowing that the services provided were built on an IBM platform.
Finally, for IBM, this is a great opportunity to expand its cloud ecosystem and leverage the growing population of MSPs, who continue to gain traction in cloud computing for SMBs.
Today we trust computers – literally and unconsciously – with our very lives. I was reflecting on this level of trust when I got £50 of cash out from my local ATM and declined the offer of a receipt. It seems I now have total faith that the computer systems will ‘get it right’. I’ve come a long way from keeping all my own cheque books to cross-check against later bank statements.
Now, combining that faith with a little healthy British cynicism, and triggered by watching the Olympics tennis finals on TV, a mischievous but irresistible thought came to my mind.
It used to be that when a ball hit the ground near the line we relied on the human eye to say whether it was ‘in’ or ‘out’. That caused disagreements and discussion – and, in tennis, often sulking, swearing and the full range of petulant behaviour.
Nowadays that is all replaced by referencing the technology. When there is doubt – or one of the players questions a call – then we simply ask the computers. What we get then is a neat little picture representing the appropriate lines on the court and a blob showing where the ball hit. So, problem solved: disappointment still for one player but, so it seems, total acceptance that the computer is right. After all, it is an expensive system working away inside a very expensive box – must be right, mustn’t it? Or to put it another way, ‘computer says in’ – who would argue?
But what occurred to me is this. All we can actually see is some boxes around the court, and a stylised display with a blob on it. That could be delivered by one person with a tablet showing the court lines, touching the screen where they think the ball landed. Very cheap, and it still solves all the arguments because – naturally – everyone trusts technology.
Now – of course, and before anyone calls their lawyers – I am not suggesting for the merest moment that there is the slightest possibility of such a thing happening. But it’s fun to think it might be possible. There is little public awareness of what accuracy the system – and here I presume it does really exist – works to. If you dig around on the web you can find out (the answer, by the way, for tennis is 3.6mm). You also find out there is some very minor grumbling and questioning going on. But that seems to stay at geek level – in everyday use the audience stands instantly convinced.
So, thinking it through, there are a couple of interesting consequences for real IT life:
Once you realise that trust depends on quality of presentation at least as much as on accuracy, should you focus more on that? Certainly you have to take presentation seriously, because the corollary is that if you deliver perfection but don’t make it look good, then no-one will believe it even though you are right.
Whose responsibility is it to check – and is it even possible? I suspect this discussion will take us into the territory of ‘governance’. But even before we get there, it implies that User Acceptance Testing needs to do more than look at things. Of course yours does, doesn’t it?
I guess my big issue is to wonder how comfortable we are – as the deliverers of the technological solutions for our customers, and especially our users – with such blind faith. Of course, people being the irrational things they undoubtedly are, that blind faith in the detail is often accompanied by a cynical disregard for overall competence – contrast the faith in ATMs and on-line bank account figures with the apparent level of trust in the banking industry as a whole.
As a little codicil to the story, I registered with a new doctor yesterday – the nurse asked me questions, took blood pressure and so on, and loaded all the data she collected into a computer. The system was clearly ancient, with a display synthesising what you typically got on a DOS 3.0 system. First thought: ‘OMG, why are they using such old software? That can’t be good.’ Second thought: ‘They’ve obviously been using it for years, so they really understand it, have ironed out all the bugs and it does what they need. It ain’t broke so they aren’t fixing it.’ But my instinctive reaction of suspicion towards it for not being pretty was there, and I had to consciously correct myself.
Would you as a service provider prefer more questioning of what you package up and present to your customers and users, or are you happy to have that faith? My own view is that the more blind faith they have in you, the more the retribution will hurt if things do go wrong. Or perhaps that’s just me being cynical again?
One statement: simultaneously reassuring and terrifying.
Firstly, it’s reassuring because anything that works towards the realisation that development and operations are not really separated by any kind of wall has to be a good thing. Of course there are different areas of focus at different times in the life of a service, but they all should have the same aim – delivering what is needed in the best possible way. We all already knew that; it is so obviously sensible that who would vote against it? The equally obvious fact that we then don’t do it is one for the psychologists and later blogs, but it does lead me to my other reaction:
The horror that we should be 50+ years into IT services before this seems important enough for people to give it a trendy name. How on earth have we survived this long without a “collaborative and productive relationship” between the people who build something and the people who operate it? And bear in mind both those groups are doing it for the same customer (in theory anyway).
To be fair to IT people though, perhaps this is an engineering practice we have simply inherited. Who remembers the days when getting your car repaired was unrelated to buying it? You bought it in the clean and shiny showroom at the front of the dealer, and took it to the oily shed around the back if it broke. One of the things that has driven a step-change in the car industry – and is also changing ours and most others – is the realisation that we are now all delivering services and not products. So we are finally realising that long-term usability and value is what defines success, not a shiny new – but fragile – toy. In fact, thinking of toys, we all recall the gap between expectation and delivery of our childhood toys – the fancy and expensively engineered product that broke by Christmas evening compared to the cheap and solid one – be it doll or push-along car – that lasted until we outgrew it.
The car industry saw that happen – and we now have companies leading their adverts with the promise of a lifetime of driving with their latest vehicles, with the mould really having been broken by Asian manufacturers offering 5-year unlimited-mileage warranties. That was about selling a self-controlled transport service instead of a car – and really that is what most of us want. Amazing strides on that front are, of course, being taken by companies like Zipcar, who have thought simply enough to see there is no absolute link between that service (self-controlled transport) and car ownership. (Some of us want other things from a car, of course – but that just leads us to the key first step of any successful service: know what your customer(s) want.)
Why I get so interested in all this is that it’s basically what I’ve been saying for the last 20 years – my big advantage is that I came into IT from a services environment (I worked in a part of our organisation called ‘services group’) – and I never really understood why IT needed such a large and artificial wall between build and do. ITIL was (in large part) set up to try to break down the walls – initially an attempt to establish serious best practices and methodologies within operations to match what was already alive and well in development (hence the original name of the project – GITIMM, to mirror SSADM).
So … what am I saying? Please take devops seriously if that is what is needed to get better services. The complexity we need to address now means we have to stop maintaining any practices that prevent good ongoing service design and delivery. If giving it a name and a structure helps then let’s go there.
One of the things I am most proud of in the books I have contributed to is that we made up a fancy name for something good people already did (in our case, Early Life Support) – the intention was to give it profile, so that people would add it to job roles, actually start to plan for it and then, finally, do it better.
Of course that brings with it the chance of looking like the emperor in his new clothes once you examine the detail and originality too carefully. But that’s good too – clever and original usually = doesn’t work too well at first. Solid old common sense (eventually) seems to me to offer a much firmer foundation to build on.
We need good foundations because the situation is actually a lot more complicated than we pretend – multiple customers, other stakeholders, users, operations as users – enough for a dozen more blogs, a handful of articles and a book. So … I’d better get on writing – and maybe so should you?
 Seems so to me anyway – the Delphic oracle was widely believed, responsibility free and most of those who used it didn’t understand where the knowledge came from.
The following article was written by Cameron Allen, Pierre Coyne and Beth Sarnie and is the second in our OSLC series.
In non-acronym speak, what I'm saying is that the future of service management has arrived in the form of Open Services for Lifecycle Collaboration.
But, what is OSLC and what does it have to do with you?
If you are a user of service management tools of any kind, or rely on information from those tools to do your job, then you probably know that finding the right information is half the battle, and getting real-time access to that information when it is not under your direct control can feel next to impossible.
OSLC means you can now leverage the simplicity and ease of web links to both find and share information across your management tools (be they IBM tools or any other vendor's).
Just as web pages can be linked on the Internet, data can be linked together from one application to another – creating an application ecosystem where applications don't care what vendor they're from. They look up who has the data in a directory, and jump right to it.
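As a rough sketch of that "look up the owner, then jump right to it" pattern, consider the Python fragment below. The registry URL and the response shape are assumptions for illustration, not a real OSLC service:

```python
import requests

REGISTRY = "https://registry.example.com/resources"  # hypothetical directory

def find_resource(resource_type: str, resource_id: str) -> dict:
    # Ask the directory which tool owns the resource...
    entry = requests.get(f"{REGISTRY}/{resource_type}/{resource_id}").json()
    # ...then follow the returned link straight to the owning tool,
    # without knowing or caring which vendor built it.
    return requests.get(entry["link"]).json()

incident = find_resource("incident", "INC-1234")  # illustrative resource id
```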
OSLC is not something new, and Tivoli is not the first to adopt it for integration. If you're an IBM Rational user, you may already be a believer. IBM Rational, its users, and an extensive ecosystem of partners have been using OSLC to successfully interconnect the application lifecycle for years.
In fact, Rational Jazz is the realization of OSLC community specifications and shared services in an open platform that anyone can use to interconnect the application lifecycle. Rational has just delivered the fourth incarnation of its integrated product offering based on Jazz, called Collaborative Lifecycle Management.
Tivoli is now leveraging these same principles to help break down silos of information across the end-to-end service lifecycle. That means expanding the notions behind Jazz from service design and development to now include service delivery and management. We call this Jazz for Service Management.
Take, for example, problem management. In order to diagnose and resolve a given trouble ticket, the problem information must be gathered and aggregated from multiple sources. We may need information pertaining to the application topology, the health of a system within that topology, outages or events that may be affecting the application, the CPU utilization, and the versions and configurations of the hardware and software that the application depends upon. I could go on...
The problem is that all of this information lives in different places. You can call around to the various owners of that information, pay a business partner to learn the API of each tool in order to get to the data, or have a highly skilled in-house resource write the integration. All of these options require extensive expertise in vendor-specific APIs and lots of maintenance to keep the integrations current.
OSLC applies community-defined specifications for sharing and linking data to specific service management scenarios, so that in a critical outage all relevant information relating to that outage can be accessed in real time from any number of sources and displayed in the context of that problem, in a single integrated view, with related actions that can be taken.
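In code terms, that single integrated view might look something like the sketch below, where a ticket carries links to its related resources and each link is resolved in real time rather than copied into yet another database. The field names and URLs are hypothetical:

```python
import requests

def outage_view(ticket: dict) -> dict:
    """Resolve every link a trouble ticket carries (topology, events,
    monitoring data, and so on) into one problem-centric view."""
    return {name: requests.get(url).json()
            for name, url in ticket["links"].items()}

# A hypothetical ticket whose links point at OSLC-style resources.
ticket = {
    "id": "INC-1234",
    "links": {
        "topology": "https://cmdb.example.com/oslc/app/42/topology",
        "events":   "https://monitoring.example.com/oslc/app/42/events",
        "cpu":      "https://monitoring.example.com/oslc/host/7/cpu",
    },
}
view = outage_view(ticket)
```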
The difference is simplicity. You might be able to do this now with a lot of experts and time, but OSLC delivers simplicity.
And, most importantly, because OSLC uses community specifications for service management scenarios, integrations can be built once and applied across multiple 'related' OSLC-enabled tools. "Write-once, Apply-many."
For more information, listen to this podcast on the Tivoli User Community. This podcast provides a deeper insight into the next generation of service management built using linked data.
Also, at Pulse 2012 (video link), developerWorks' Scott Laningham was joined by Don Cronin, program director, Tivoli Technical Strategy and Architecture, and Mike Kaczmarski, IBM Fellow and Tivoli Chief Integration Architect, to discuss the magic of linked data.
Leave your comments on how you are using OSLC in your organization below and don't forget to follow us on Twitter @servicemgmt and be sure to bookmark our OSLC story on Storify.
In fact, if you were at Pulse 2012...you heard how IBM Watson will be used to help doctors diagnose medical conditions and improve patient care at WellPoint.
For those of you who, like me, don’t have a Watson-like recollection, here’s a quick flashback detailing a millisecond in Watson's brain on a sample patient:
Watson is given specific information on a patient’s symptoms and makes a preliminary diagnosis of flu as the most likely illness.
Based on the patient's name, Watson looks up the patient's history for the past few years, surfacing new insights that point to a more likely cause, for example a urinary tract infection.
Based on the patient's family connections, Watson uses the family history to determine that the most likely cause is now diabetes.
And finally, Watson accesses the patient’s latest test results to derive a final diagnosis.
If you're in the business of IT, this may sound a lot like incident management. And as any level 1 support person can attest, diagnosing the root cause of an incident is much like diagnosing a patient's condition. You need information from multiple sources (e.g. service desk, license, CMDB, monitoring, and asset management systems), but more importantly, it has to be in context, up to date, and delivered on a timely basis to make an accurate diagnosis of the root cause.
The problem has always been that an incident manager, like a doctor, has to jump between tools, entering requests in each system for the right information...and that is time-consuming. In some cases, information isn't readily available and must be requested from other sources not under their direct control.
One of the ways Watson is able to be such a great diagnostician (and incident manager) is through "linked data," which allows it to seek out and find related information on the patient from multiple sources in a fraction of a second to facilitate faster, more accurate patient diagnosis.
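A toy example makes the idea tangible: starting from one record and walking its links gathers the whole related context in a single pass. All of the data below is invented for illustration:

```python
# Toy linked-data store: each record points at its related records.
records = {
    "patient/42": {"note": "flu-like symptoms", "links": ["history/42", "family/42"]},
    "history/42": {"note": "prior urinary tract infection", "links": ["labs/42"]},
    "family/42":  {"note": "diabetes in the family", "links": []},
    "labs/42":    {"note": "latest glucose test", "links": []},
}

def gather(start: str) -> list[str]:
    """Breadth-first walk over the links, collecting every related record."""
    seen, queue = [], [start]
    while queue:
        key = queue.pop(0)
        if key in seen:
            continue
        seen.append(key)
        queue.extend(records[key]["links"])
    return seen

print(gather("patient/42"))  # one traversal pulls in the full context
```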
Until now, an incident manager did not have this same luxury.
That's where Jazz for Service Management comes in. Jazz is IBM's real-time platform for integrating management across multivendor tools, and across service lifecycle processes and functions. Like Watson, Jazz for Service Management uses the principles of linked data, along with community standards (including OSLC), to support Watson-like service management decisions, regardless of what vendor tools you have in place.
What I am about to share here is a true story about Integrated Service Management. I changed the name of the customer to Customer because I didn’t ask permission to use Customer’s real name. So you’ll just have to believe me :oD
Oh, What a Better Web We Need
Once upon a time, Customer needed to test the interoperability of hardware, software, operating systems and customer solution stacks for new product releases. Customer needed to coordinate multiple global teams working on an abundance of machines. With thousands of operating system instances in test, Customer faced an enormous management challenge. Growth over time resulted in homegrown tools from many teams that did not interoperate, making data collection difficult. Visibility into tasks assigned to global teams was limited, and often resulted in duplicate testing and lost productivity. In addition to standardizing tools and improving workload tracking and visibility, Customer sought to automate as many repetitive processes as possible, improving productivity and freeing up engineers for more complex testing work.
Integrated Service Management to the Rescue!
The solution for Customer included a RaTivo integration of Rational Quality Manager (RQM) and Tivoli Provisioning Manager (TPM) to allow automatic provisioning of test machines with the required test configuration, saving Customer manual work and time from request to provision. Additionally, Customer applied Rational Test Lab Manager and Tivoli Application Dependency Discovery Manager (TADDM) to discover available test lab machines and display the list in RQM, saving Customer test time as all the information is displayed in one tool.
All’s Well that Ends Well
You can’t argue with these results. Customer directly benefited from Integrated Service Management by:
Eliminating an estimated 20 percent of testing duplication.
Increasing visibility and automation, allowing better allocation of shared equipment and reducing hardware requests.
Locating available test machines without the need to learn a new tool or coordinate with the operations teams.
Automating provisioning of new test configurations on available machines, speeding the test cycle.
Enabling managers to pull their own custom reports, improving visibility and coordination.
The following article was written with significant contributions from Cameron Allen, Pierre Coyne and Beth Sarnie
Question of the day: why is IT agility so darn elusive?
Follow up question: after spending multiple millions in technology to improve service delivery, quality, and productivity, why do so many line of business executives perceive that IT is still not moving "fast enough?"
Silo'd information is a big speed bump on the road to agility. According to the 2012 IBM study of CEOs, high-performing organizations are able to access data 108% more, draw insights from that data 110% more, and act on that data 86% more than their underperforming peers.
Which brings us back to the specific problem: Information exists, but it is not shared. Information remains trapped in silo'd tools and departmental applications. It's not only not moving "fast enough," it's not moving at all.
If you agree with ITIL and related methodologies, agility is directly linked to your IT processes. But while we can improve process methodology, connections across roles and functions, and tooling within specific technology silos, if the data and resources cannot be freely shared across the process-enabling tools, then it's all for naught.
Going one level deeper, what is the cause of this 'information black hole', where data enters tools and is never seen again? Your reality is that you probably rely on a mix of multi-vendor tools. Those tools rely on proprietary APIs for integration, and trying to make tools with different APIs communicate requires the IT equivalent of a team of United Nations translators, where each is an expert in their application's main language (API). Even when successful, the herculean effort creates a constant maintenance cost, and might not work well in the end – things will be lost in translation. That said, even single-vendor tool suites are notoriously difficult to integrate.
So what can be done?
Stop for a moment and consider the best example that demonstrates simplicity of integration on a massive scale. It's the Internet. With the Internet, you can get information from millions of different web sites and all you need is a browser.
So for argument's sake, if tools are the equivalent of web sites, then all we need are links to connect two tools. We can take that one step further, borrowing principles from social networks like LinkedIn or IBM Connections, where we can search for one person, and see relationships to other people (making searching for data across tools much easier).
That in essence is OSLC (Open Services for Lifecycle Collaboration): a set of open, community-agreed specifications for linking tools using web technology. (And before you ask: no, it's not a standard, because apparently standards alone have not done the job.)
Data from any vendor tool is registered in a directory, like a search engine, where other tools can find it and its relationships to other data, and access it via simple web link technology. Not similar to the Internet, but exactly like the Internet.
What that means is that you can easily interconnect tools and processes. You can even replace tools with competitive tools, eliminating vendor lock-in. It also means you can re-purpose one integration across a series of 'like' tools: "Write once, reuse-many" inherently applies here. All of this translates into simpler and faster access to information by people and tools, better analytics leading to better decisions, and better automation of workflow.
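To close with a sketch of what "write once, reuse-many" buys you in practice: one integration written against a common link contract can run unchanged against every tool that honors it. The endpoints below are hypothetical, not real OSLC services:

```python
import requests

# Three hypothetical tools, all exposing the same OSLC-style link contract.
TOOLS = [
    "https://servicedesk.example.com/oslc",
    "https://monitoring.example.com/oslc",
    "https://cmdb.example.com/oslc",
]

def query_resource(tool_base: str, resource: str) -> dict:
    # The same code works for every tool: no per-vendor API translation.
    return requests.get(f"{tool_base}/{resource}").json()

views = [query_resource(t, "incident/INC-1234") for t in TOOLS]
```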