One of the trends that we are seeing today is the convergence of security management and systems management. The better job you can do managing your infrastructure, the better equipped you will be to define and enforce security policies and controls across that infrastructure. There are few places where this convergence is more evident than the endpoint.
As the notion of a perimeter disappears and we see the continued proliferation of traditional and non-traditional endpoints, such as servers, desktop PCs, laptops, ATMs, point-of-sale devices, and self-service kiosks, organizations are looking for a comprehensive approach to managing and securing all of their endpoints. This includes, but is not limited to, identifying all of the endpoints in your environment, managing the complete lifecycle of each endpoint, providing continuous security and compliance, deploying patches effectively and in a timely manner, and finally, managing the power usage of each endpoint.
Tivoli Endpoint Manager, built on BigFix technology, can address all of those needs, but in this blog I want to focus on that last piece of the conversation, because it is not one that immediately comes to mind when people think about the most critical elements of managing an endpoint. However, we have seen that effective power management can actually pay for all of the other benefits that Tivoli Endpoint Manager provides. You can ultimately end up saving money and the environment, and in the process deploy critical security and systems management controls across all of your endpoints (even the ones you didn’t originally know you had).
In a recent article (click here), Penn State wrote about their deployment of BigFix (now called Tivoli Endpoint Manager) and indicated that it could save them about $800,000 annually. A large university like Penn State has thousands of computers that can be included in a power management initiative, and many of these computers are only heavily used during peak hours. Tivoli Endpoint Manager allows the Penn State IT staff to automatically put these computers in sleep mode when they aren’t in use. They anticipate not only a significant ROI but also a reduction of about 60,000 tons in the carbon dioxide released into the atmosphere.
One of the objections that people often raise about power management for the endpoint is that it can interfere with the patch process. This is one of the areas where the convergence of security and systems management is so important. The policies that you create and enforce from a systems management perspective need to work hand-in-hand with the policies related to security management. For that reason, Tivoli Endpoint Manager was built on the core concepts of convergence, scalability and granular policy setting. It allows an IT staff to automatically wake computers at a designated time, apply required patches or enforce configuration policies, reboot, and then bring the endpoint back down to a hibernated, low-energy state, or shut it down altogether.
The Chichester School District (click here) provides yet another great example of power management savings. This regional school district in Delaware County, Pennsylvania, manages more than 2,000 Microsoft Windows desktops and 50 Microsoft Windows servers throughout a six-school network. The Chichester School District implemented energy conservation using the power management capabilities of Tivoli Endpoint Manager to help reduce computing energy costs by 70 percent. Their IT team also uses the distributed “Wake-on-LAN” functionality to distribute and install patches to machines that are turned off at night. This reduces energy use and confirms machines are securely patched, without impacting employee productivity.
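The “Wake-on-LAN” functionality mentioned above rests on a simple, well-documented mechanism: a “magic packet” consisting of six 0xFF bytes followed by the target machine’s MAC address repeated 16 times, usually sent as a UDP broadcast. Tivoli Endpoint Manager orchestrates this at scale; the sketch below is just a minimal Python illustration of the underlying packet, not Tivoli’s implementation.

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF
    followed by the target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast (port 9 is conventional)."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))
```

The machine’s network card must have Wake-on-LAN enabled in firmware for the packet to have any effect, which is part of why a managed, policy-driven product is needed in practice.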
The integrated patch and power management capabilities of IBM Tivoli Endpoint Manager provide IT staff with real-time information on remote endpoints to simplify patch processes, conserve energy and reduce on-site troubleshooting.
Today's post comes from Sandy Hawke, Manager, IBM Security Solutions.
I recently presented to the ISACA community on a live webinar. I focused the discussion on how to leverage automation to improve endpoint security and compliance. The archived webinar is available here. Just as a brief background, ISACA is an international professional association that focuses on all aspects of IT Governance and has over 95,000 members worldwide.
The online event drew a pretty substantial audience, which is good, and yet a bit surprising in two key ways. First of all, many of the recommendations I made to the audience were not radically new concepts, but basic foundational controls that all security professionals agree are critical for achieving and maintaining solid security and demonstrable compliance. So haven't they heard this story before?
Maybe not. And that's the second observation. Most of the ISACA membership is in the IT audit/risk management line of business. While they're not the folks implementing security technologies on a daily basis (i.e. "hands at keyboards"), they are keen to understand how security is implemented, how it works, how automation can be used to facilitate audits, and so on. And that's the new trend we've been witnessing. While the audit team knows what the policy controls should be, they may not know if or how these controls get enforced, maintained, monitored and reported on: essentially, how security is "operationalized." The more they know about what's possible with respect to security operations and automation, the better they'll be at knowing what questions to ask IT operations during audits, what technologies to recommend, and so on.
Years ago, the IT audit/risk management organization and its activities were kept quite separate from the IT operations/IT infrastructure teams. And at the time there were pretty good reasons to keep these groups as distinct as possible; you've all heard the "keeping the fox out of the hen house" analogy, right? The IT audit/risk management teams could set and enforce policy and conduct assessments that wouldn't be influenced by the operations staff. Well, with the advent of converging technologies, economic trends, and the increased importance of measuring security investments and compliance programs in real time, these groups are coming together, more so than ever before.
And technologies that can foster that type of trust, cooperation, and collaboration are indispensable.
Well, we are well into 2012 now and have just about got through the ‘my predictions for 2012’ phase and into ordinary routines again. Whatever the predictions, as in most years, I predict that 2012 will look a lot like a newer version of 2011.
There is still talk of recession, companies that struggled for funding in 2011 are no richer, and Cloud is still talked about by far more people than understand it.
On a personal level, 2012 has already delivered some of the improvements planned in 2011 – and I hope the same will happen workwise. The next major thing on my work horizon is IBM’s big service management show – Pulse. Back again at the MGM Grand in Las Vegas, we are promised it will be bigger and better than ever. I understand that bigger is important in Vegas, but I am usually even keener on better. Actually, to be fair, I am delighted that ‘my bit’ at Pulse looks like being bigger this year – with not one but two chances to deliver the cloud-readiness simulator on the weekend before the show itself starts. In fact there will be a strong focus on the simulator this year, with our team on the exhibition floor to explain what the simulators are, why they matter and how they can help you.
Of course – like I implied above – this isn’t exactly new, but it is proven. Of course there will be lots of new stuff available too – geeks welcome. The technologists will be well catered for with lots of ‘future possibles’ and indeed a vision of some possible futures too. But service management’s primary focus is not on what might happen next year; it has always been about delivering value this year. In fact one of my favourite aspects of service management is how it rests on widely applicable principles, even though how they are applied might alter. For example, while change management processes in a cloud environment might need different considerations to make them most effective, the basics remain. I was working in service management long before I ever touched a computer. I remain constantly delighted to discover that lessons learned 30 years ago in supply and transport are still relevant to the 21st-century IT-based services we manage today.
So, if you are going to be at Pulse come along and tell me whether you agree that old-fashioned service concepts are still valuable – or come and explain why dinosaurs like me should be swept away by the meteor strike that is cloud. Either way – at Pulse or elsewhere – I look forward to good, informed and enjoyable debates. Good to think of the new year building on the successes of the old – at home and at work.
If you follow me on Twitter - @ivormacf - you will know where and when I will be in terms of events. Useful whether you want to find me or avoid me – the same thing works both ways.
Last week, I attended my first IBM Pulse conference. I really enjoyed the sights and sounds of Vegas, and met many of my Tivoli colleagues for the first time. I also probably walked the equivalent of 15 miles over the five days within the mammoth MGM facility. But what I found most valuable over the five days were my interactions with our customers and business partners.
On Day 1 of the conference, my focus was the ISM Simulator workshop that I helped coordinate. Given that the workshop was: a) taking place prior to any other Pulse activities, b) located in the bowels of the MGM hotel, and c) three hours in duration... I was a bit apprehensive that all the customers and business partners who had RSVP'd would actually show up. But when people started rolling in 30 minutes before the start time, I was confident that this workshop was going to be a success.
When we got started, we had 21 participants sitting around four tables, which is nearly ideal for this role-playing workshop. Like other simulator workshops that I have attended, it started out a bit chaotic, as participants tried to process the firehose of information being thrust upon them. By the end of the three hours, they had come full circle, and were effectively working together to the tune of a $5 million profit for their hypothetical shipping company.
As I chatted with some of them after the session, and listened in on some of their video testimonials, the words I heard most often were "eye-opening", "outstanding" and "insightful".
On Monday and Tuesday, I worked on the expo floor and showed off our cool new ISM Simulator video game. The game allowed users to experience various issues affecting service management and corporate profitability in a simulated organization. At our booth, I got great feedback from customers and partners, who, by virtue of playing the game, were able to get a better grasp of the sometimes abstract concepts of service management.
You can play the IBM Service Management Mission game here.
All in all, it was a great conference, and stay tuned for the video from the workshop!
Yes, I love being one of the ambassadors for IBM’s Client Reference Program, a structured platform that gives our valued Clients many opportunities to promote their unique capabilities and stand tall in the otherwise very competitive market. The IT revolution, ease of internet access, changes in consumer behavior etc have all added to

While I write this blog, the two things that I studied during school days in Biology are shouting aloud from my mind: one, Darwin’s ‘Survival of the Fittest’ and two, ‘legume-rhizobial symbiosis’. Interestingly, these biological phenomena have real examples in economics too. A symbiotic relationship with clients and peers, thus, is ‘very’ crucial to surviving the Darwinian marketplace. And what better way than registering for IBM’s Client Reference Program? :-)
For me, it’s great being a Client Reference Specialist for Tivoli. Working in collaboration to create Reference Profiles for our Clients has brought a lot of advantages. Networking opportunities with my fellow IBMers, Business Partners and Clients from across industries are just the ‘cake’, but the real ‘icing’ is my continuous learning about IBM’s Tivoli software for 'Integrated Service Management', which “provides smarter solutions and the expertise you need to design, build and manage a dynamic infrastructure that enables you to improve service, reduce cost and manage risk.” Yes, I’m always in awe of how IBM’s Tivoli solutions have helped our Clients overcome their challenges.
PS: Rebecca Wissinger, in her blog ‘IBM Client Activities at Pulse 2011’, talks about the ways IBM is saying THANK YOU to our immensely valued, extraordinary Clients at Pulse 2011. If you are attending Pulse 2011, you will not want to give her blog a miss.
I’m a big fan of IBM’s mission of Smarter Planet. As an IBMer based out of Bangalore, India, I get inspired by Big Blue’s rich history and the impact it has been creating on the world’s business systems.
This week, India is celebrating the “Joy of Giving Week” (JGW), a pan-India initiative started in 2009 to celebrate a “festival of giving” to the needy and to our society, through various forms of giving: time, skills, resources, money etc. JGW is held annually for a week, starting on a Sunday and ending on a Saturday. These dates also contain Mahatma Gandhi’s birth anniversary on the 2nd of October.
Donation boxes are kept in IBM offices across all the locations. Interestingly, as I was making a list of things to be donated to bring smiles to a few innocent faces, a thought occurred to me... and then my joy knew no bounds. I realized that to be associated with IBM, which works towards giving back to our Earth with a mission of making it a Smarter Planet through innovations in products and services, is a joy in itself. A joy of giving to the world we live in, for our smart and sustainable living.
Further, I love my job, which is working on our Tivoli Success Stories for our IBM Client References. Many of these stories talk about the work that we are doing with our customers and their implementation of Smarter Planet solutions. Our customers, using these solutions, are having a significant impact on making our lives better and more fulfilling. And, YES... I can see the ‘Joy of Giving’ being passed on from IBM to our clients and to the world :)
What is IBM Tivoli Software? We know you want the short version. Steven Wright of Tivoli Software breaks it all down for us in less than 7 minutes on a white grease board. Check it out while you have your morning coffee or afternoon tea, or while you get your miles in on the treadmill or trail with your smart phone. Then visit ibm.com/software/tivoli for more details on how IBM Tivoli Software can help you run a smarter business.
For MSPs, IBM is providing a bundle of services and support in the way of marketing skills, technical expertise and financing options. On the marketing side, MSPs will have the ability to better target their customers and generate demand for their services through IBM education that includes topics such as developing effective marketing plans and exploiting the burgeoning social media space. Additionally, MSPs can sell IBM SmartCloud services under their own brand names. On the technical side, MSPs will have access to four new "Centers of Excellence" (located in China, Japan, Germany, and New York City) where they can collaborate with IBM technical experts to build their cloud services, and connect with other IBM ISVs. In terms of funding their efforts, IBM announced a financing offer which includes 12-month, 0% loans for IBM Systems, Storage and Software, and allows MSPs to defer payments for up to 90 days.
For end-users in the SMB space who often lack the necessary IT skills, this is a great opportunity to leverage local technology providers and take advantage of a cost effective "pay-as-you-go" model that cloud computing affords them. In addition, end-users will have the confidence of knowing that the services provided were built on an IBM platform.
Finally, for IBM, this is a great opportunity to expand its cloud ecosystem and leverage the growing population of MSPs, who are continuing to gain traction in the cloud computing space for SMBs.
Today we trust computers – literally and unconsciously – with our very lives. I was reflecting on this level of trust when I got £50 of cash out from my local ATM and declined the offer of a receipt. It seems I now have total faith that the computer systems will ‘get it right’. I’ve come a long way from keeping all my own cheque books to cross-check against later bank statements.
Now, combining that faith with a little healthy British cynicism, and triggered by watching the Olympics tennis finals on TV, a mischievous but irresistible thought came to my mind.
It used to be that when a ball hit the ground near the line we relied on the human eye to say whether it was ‘in’ or ‘out’. That caused disagreements and discussion – and, in tennis, often sulking, swearing and the full range of petulant behaviour.
Nowadays that is all replaced by referencing the technology. When there is doubt – or one of the players questions a call – then we simply ask the computers. What we get then is a neat little picture representing the appropriate lines on the court and a blob showing where the ball had hit. So, problem solved: disappointment still for one player but, so it seems, total acceptance that the computer is right. After all, it is an expensive system working away inside a very expensive box – it must be right, mustn’t it? Or to put it another way, ‘computer says in’ – who would argue?
But what occurred to me is this. All we can actually see is some boxes around the court, and a stylised display with a blob on it. That could be delivered by one person with a tablet showing the court lines, touching the screen where they think it landed. Very cheap, and it still solves all the arguments because – naturally – everyone trusts technology.
Now – of course, and before anyone calls their lawyers – I am not suggesting for the merest moment that there is the slightest possibility of such a thing happening. But it’s fun to think it might be possible. There is little public awareness of what accuracy the system – and here I presume it does really exist – works to. If you dig around on the web you can find out (the answer, by the way, for tennis is 3.6mm). You also find out there is some very minor grumbling and questioning going on. But that seems to be at geek level – in everyday use the audience stands instantly convinced.
So, thinking it through, there are a couple of interesting consequences for real IT life:

Once you realise that trust depends on quality of presentation at least as much as on accuracy, should you focus more on that? Certainly you have to take presentation seriously, because the corollary is that if you deliver perfection but don’t make it look good, then no-one will believe it even though you are right.
Whose responsibility is it to check – and is it even possible? I suspect this discussion will take us into the territory of ‘governance’. But even before we get there, it implies that User Acceptance Testing needs to do more than look at things. Of course yours does, doesn’t it?
I guess my big issue is to wonder how comfortable we are – as the deliverers of the technological solutions for our customers, and especially our users – to have such blind faith placed in us. Of course, people being the irrational things they undoubtedly are, that blind faith in the detail is often accompanied by a cynical disregard for overall competence – compare faith in ATMs and on-line bank account figures with the apparent level of trust in the banking industry as a whole.
As a little codicil to the story, I registered with a new doctor yesterday – the nurse asked me questions, took blood pressure etc and loaded all the data she collected into a computer. The system was clearly ancient, with a display synthesising what you typically got on a DOS 3.0 system. First thought: ‘OMG, why are they using such old software? That can’t be good.’ Second thought: ‘They’ve obviously been using it for years, so they really understand it, have ironed out all the bugs and it does what they need. It ain’t broke so they aren’t fixing it.’ But my instinctive reaction of suspicion towards it for not being pretty was there, and I had to consciously correct myself.
Would you as a service provider prefer more questioning of what you package up and present to your customers and users, or are you happy to have that faith? My own view is that the more blind faith they have in you, the more the retribution will hurt if things do go wrong. Or perhaps that’s just me being cynical again?
One statement: simultaneously reassuring and terrifying.
Firstly, it’s reassuring because anything that works towards the realisation that development and operations are not really separated by any kind of wall has to be a good thing. Of course there are different areas of focus at different times in the life of a service, but they all should have the same aim: delivering what is needed in the best possible way. We already all knew that; it is so obviously sensible that who would vote against it? The equally obvious fact that we then don’t do it is one for the psychologists and later blogs, but it does lead me into my other reaction:
The horror that we should be 50+ years into IT services before this seems important enough for people to give it a trendy name. How on earth have we survived this long without a “collaborative and productive relationship” between the people who build something and the people who operate it? And bear in mind both those groups are doing it for the same customer (in theory anyway).
To be fair to IT people though, perhaps this is an obligatory engineering practice we have picked up. Who remembers the days when getting your car repaired was unrelated to buying it? You bought it in the clean and shiny showroom at the front of the dealer, and took it to the oily shed around the back if it broke. One of the things that has seen a step-change in the car industry – and is also changing ours and most others – is the realisation that we are now all delivering services and not products. So we are finally realising that long-term usability and value is what defines success, not a shiny new – but fragile – toy. In fact, thinking of toys, we all recall the gap between expectation and delivery of our childhood toys – the fancy and expensively engineered product that broke by Christmas evening compared to the cheap and solid – be it doll or push-along car – that lasted until we outgrew it.
The car industry saw that happen – and we now have companies leading their adverts with a promise of lifetime car driving with their latest vehicles – with the mould really having been broken by Asian manufacturers offering 5-year unlimited mileage warranties. That was about selling a self-controlled transport service instead of a car – and really that is what most of us want. Amazing strides are being taken on that front, of course, by companies like Zipcar who have thought simply enough to see that there is no absolute link between that service (self-controlled transport) and car ownership. (Some of us want other things from a car of course – but that just leads us into the key first step of any successful service: know what your customer(s) want.)
Why I get so interested in all this is it’s basically what I’ve been saying for the last 20 years – my big advantage is that I came into IT from a services environment (I worked in a part of our organisation called ‘services group’) – and I never really understood why IT needed such a large and artificial wall between build and do. ITIL was (in large part) set up to try and break down the walls – initially an attempt to set up serious best practices and methodologies within operations to match what was already alive and well in development (hence the original name of the project – GITIMM, to mirror SSADM).
So … what am I saying? Please take devops seriously if that is what is needed to get better services. The complexity we need to address now means we have to stop maintaining any practices that prevent good ongoing service design and delivery. If giving it a name and a structure helps then let’s go there.
One of the things I am most proud about in the books I have contributed to is that we made up a fancy name for something good people already did (in our case early Life Support) – the intention was to give it profile and then people would add it to job roles and actually start to plan for it and then, finally, do it better.
Of course that brings with it the chance of looking like the emperor in his new clothes once you examine the detail and originality too carefully. But that’s good too – clever and original usually = doesn’t work too well at first. Solid old common sense (eventually) seems to me to offer a much firmer foundation to build on.
We need good foundations because the situation is actually a lot more complicated than we pretend – multiple customers, other stakeholders, users, operations as users – enough for a dozen more blogs, a handful of articles and a book. So … I’d better get on writing – and maybe so should you?
 Seems so to me anyway – the Delphic oracle was widely believed, responsibility free and most of those who used it didn’t understand where the knowledge came from.
What I am about to share here is a true story about Integrated Service Management. I changed the name of the customer to Customer because I didn’t ask permission to use Customer’s real name. So you’ll just have to believe me :oD
Oh, What a Better Web We Need
Once upon a time, Customer needed to test the interoperability of hardware, software, operating systems and customer solution stacks for new product releases. Customer needed to coordinate multiple global teams working on an abundance of machines. With thousands of operating system instances in test, Customer faced an enormous management challenge. Growth over time resulted in homegrown tools from many teams that did not interoperate, making data collection difficult. Visibility into tasks assigned to global teams was limited, and often resulted in duplicate testing and lost productivity. In addition to standardizing tools and improving workload tracking and visibility, Customer sought to automate as many repetitive processes as possible, improving productivity and freeing up engineers for more complex testing work.
Integrated Service Management to the Rescue!
The solution for Customer included a RaTivo integration of Rational Quality Manager (RQM) and Tivoli Provisioning Manager (TPM) to allow automatic provisioning of test machines with the required test configuration, saving Customer manual work and time from request to provision. Additionally, Customer applied Rational Test Lab Manager and Tivoli Application Dependency Discovery Manager (TADDM) to discover available test lab machines and display the list in RQM, saving Customer test time as all the information is displayed in one tool.
All’s Well that Ends Well
You can’t argue with these results. Customer directly benefited from Integrated Service Management by:
Eliminating an estimated 20 percent of testing duplication.
Increasing visibility and automation, allowing better allocation of shared equipment and reducing hardware requests.
Locating available test machines for testing without the need to learn a new tool or collaborate with the operations teams.
Automating provisioning of new test configurations on available machines, speeding the test cycle.
Enabling managers to pull their own custom reports, thereby improving visibility and coordination.
Tivoli Foundations Application Manager offers turn-key performance and availability monitoring for mid-market companies. It allows them to restore a service that is experiencing performance and/or availability problems with the shortest mean time to recovery possible.
Tivoli Foundations Application Manager also delivers real-time information that allows an organization to visualize service performance and health across its network, server, middleware and application components, enabling it to effectively manage risk and improve service quality.
Tivoli Foundations Application Manager helps clients optimize their resource allocation and reduce cost by giving them the ability to identify underutilized resources and reallocate them to support new business operations. At the same time, risk is reduced by anticipating resource over-utilization and generating proactive events and reports against resources that do not have the capacity to address growing business needs.
Tivoli Foundations Application Manager comes with out-of-the-box best practice monitoring policies that track IT infrastructure health against pre-defined thresholds. This allows organizations to quickly and proactively identify and respond to problems before critical applications and customers are impacted. Overall service is improved by restoring the service or application that is experiencing performance problems with the shortest mean time to recovery possible, using autonomic capabilities to fix problems before human intervention is even needed. Reducing problem determination time decreases cost and allows organizations to spend more resources on business innovation and creating competitive advantage.
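To make the idea of threshold-based health monitoring concrete, here is a minimal sketch. The metric names and threshold values below are invented for illustration; they are not Tivoli's actual policy format or shipped defaults.

```python
def evaluate(metric: str, value: float, thresholds: dict) -> str:
    """Compare a sampled metric against pre-defined warning/critical
    thresholds and return the resulting health status."""
    t = thresholds[metric]
    if value >= t["critical"]:
        return "critical"
    if value >= t["warning"]:
        return "warning"
    return "ok"

# Hypothetical "best practice" thresholds (illustrative values only).
THRESHOLDS = {
    "cpu_percent":  {"warning": 80, "critical": 95},
    "disk_percent": {"warning": 85, "critical": 95},
}
```

A monitoring product layers a great deal on top of this core comparison: scheduling the samples, de-duplicating and escalating events, and triggering automated recovery actions, but the threshold check itself is this simple.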
IBM Tivoli Foundations Service Manager
Tivoli Foundations Service Manager provides service desk capabilities that allow mid-size companies to handle help desk calls, track problems, and make changes that prevent existing problems without creating new ones. It also provides a self-service, searchable knowledge base that delivers fast answers to common IT problems.
In addition, Tivoli Foundations Service Manager delivers dashboards that provide real-time performance views and out-of-the box content including workflows, templates, key performance indicators (KPIs), queries and reports targeted for mid-size clients.
The Tivoli Foundations Service Manager appliance-based service desk solution helps mid-market clients reduce their costs by optimizing the productivity of operations personnel through its built-in problem-solving tools, giving operations a way to increase the efficiency of its service support functions. The robust self-help portal, which is populated with best practice resolutions to common problems, gives end users a way to quickly resolve problems on their own without having to involve any additional personnel.
Managing risk is key for small and mid-market clients that have extremely limited IT skills in-house. The Tivoli Foundations Service Manager solution ensures process compliance by integrating standards-based incident and problem management processes, resulting in a repeatable and consistent service support process.
Tivoli Foundations Service Manager delivers streamlined, standards-based incident and problem management processes that enable rapid service restoration and improved overall service quality. It provides real-time visibility to end users on the priority, urgency, and impact of problems, incidents, and service requests. Its built-in survey capabilities allow organizations to track and trend overall end-user satisfaction with operations, creating a closed-loop environment where overall service quality can continually be improved.
The following article was written by Cameron Allen, Pierre Coyne and Beth Sarnie and is the second in our OSLC series.
In fact, if you were at Pulse 2012...you heard how IBM Watson will be used to help doctors diagnose medical conditions and improve patient care at WellPoint.
For those of you who, like me, don't have a Watson-like recollection, here's a quick flashback detailing a millisecond in Watson's brain on a sample patient:
Watson is given specific information on a patient’s symptoms, and makes a preliminary diagnosis of the flu as the most likely illness.
Based on the patient's name, Watson looks up records of the patient's history for the past few years, providing new insights that point to a more likely cause, for example, a urinary tract infection.
Based on the patient's family connections, Watson is able to use the family history to determine that the most likely cause is now diabetes.
And finally, Watson is able to access a patient’s latest tests to derive a final diagnosis.
If you're in the business of IT, this may sound a lot like incident management. And as any level 1 support person can attest, diagnosing the root cause of an incident is much like diagnosing a patient's condition. You need information from multiple sources (e.g. service desk, license, CMDB, monitoring, and asset management systems), but more importantly, it has to be in context, up to date, and delivered in a timely manner to make an accurate diagnosis of the root cause.
The problem has always been that an incident manager, like a doctor, has to jump between tools, entering requests in each system for the right information...and that is time consuming. In some cases, information isn't readily available and must be requested from other sources, not under their direct control.
One of the ways Watson is able to be such a great diagnostician (and incident manager) is through "linked data," which allows it to seek out and find related information on the patient from multiple sources in a fraction of a second to facilitate faster, more accurate patient diagnosis.
Until now, an incident manager did not have this same luxury.
That's where Jazz for Service Management comes in. Jazz is IBM's real-time platform for integrating management across multivendor tools, and across service lifecycle processes and functions. Like Watson, Jazz for Service Management uses principles of linked data, along with community standards (including OSLC), to support Watson-like service management decisions, regardless of which vendor tools you have in place.
In non-acronym speak, what I'm saying is that the future of service management has arrived in the form of Open Services for Lifecycle Collaboration.
But, what is OSLC and what does it have to do with you?
If you are a user of service management tools of any kind, or rely on information from tools to do your job, then you probably know that finding the right information is half the battle, and getting real-time access to that information when it is not under your direct control can feel next to impossible.
OSLC means you can now leverage the simplicity and ease of web links to both find and share information across your management tools (be they IBM, or any vendor tools).
Just as web pages can be linked on the Internet, data can be linked together from one application to another – creating an application ecosystem where applications don't care what vendor they're from. They look up who has the data in a directory, and jump right to it.
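As a loose illustration (not the actual OSLC API), that lookup-and-jump pattern can be sketched in a few lines. All URIs, resource shapes, and link names below are invented; in a real OSLC deployment these would be HTTP resources exposed by each tool, not entries in an in-memory dictionary.

```python
# Hypothetical sketch of linked data: each resource is identified by a URI
# and carries typed links to related resources owned by other tools. A plain
# dict stands in for the directory that says "who has the data."

RESOURCE_DIRECTORY = {
    "https://servicedesk.example.com/incident/42": {
        "title": "App server unresponsive",
        "links": {"affectsAsset": "https://cmdb.example.com/ci/app-server-7"},
    },
    "https://cmdb.example.com/ci/app-server-7": {
        "title": "app-server-7 (production)",
        "links": {"monitoredBy": "https://monitoring.example.com/probe/cpu-7"},
    },
    "https://monitoring.example.com/probe/cpu-7": {
        "title": "CPU utilization probe",
        "links": {},
    },
}

def resolve(uri):
    """Look up who has the data for a URI and return that resource."""
    return RESOURCE_DIRECTORY[uri]

def follow(uri, *link_types):
    """Jump from resource to resource by following typed links."""
    resource = resolve(uri)
    for link_type in link_types:
        resource = resolve(resource["links"][link_type])
    return resource

# Starting from an incident, jump straight to the monitoring data,
# without caring which vendor owns each hop.
probe = follow("https://servicedesk.example.com/incident/42",
               "affectsAsset", "monitoredBy")
print(probe["title"])  # CPU utilization probe
```

The point of the sketch is the last call: the consumer never hard-codes a vendor API, it just follows links, the same way a browser follows hyperlinks between sites.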
OSLC is not something new, and Tivoli is not the first to adopt it for integration. If you're an IBM Rational user, you may already be a believer. IBM Rational, its users, and an extensive ecosystem of partners have been using OSLC to successfully interconnect the application lifecycle for years.
In fact, Rational Jazz is the realization of OSLC community specifications and shared services in an open platform that anyone can use to interconnect the application lifecycle. Rational recently delivered the fourth incarnation of its integrated product offering, Collaborative Lifecycle Management, built on Jazz.
Tivoli is now leveraging these same principles to help break down silos of information across the end-to-end service lifecycle. That means expanding the notions behind Jazz from service design and development to now include service delivery and management. We call this Jazz for Service Management.
Take for example, problem management. In order to diagnose and resolve a given trouble ticket, the problem information must be gathered and aggregated from multiple sources. We may need information pertaining to the application topology, the health of a system within that topology, outages or events that may be affecting the application, the CPU utilization, the versions and configurations of the hardware and software that this application is dependent upon. I could go on...
The problem is that all of this information lives in different places. You can call around to the various owners of that information, pay a business partner to learn the API of each tool in order to get to the data, or have a highly skilled in-house resource write the integration. All of these options require extensive expertise in vendor-specific APIs and lots of maintenance to keep them current.
OSLC applies community-defined specifications for sharing and linking data to specific service management scenarios. In a critical outage, that means all relevant information relating to the outage can be accessed in real time from any number of sources and displayed in the context of that problem, in a single integrated view, along with related actions that can be taken.
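To make the "single integrated view" idea concrete, here is a purely illustrative sketch in which three stand-in data sources (all names and records invented) each expose context keyed by incident, and one function assembles whatever context exists for an incident into a single record:

```python
# Illustrative only: each "source" stands in for a separate OSLC-enabled
# tool (topology, event, and configuration management). In practice each
# lookup would be an HTTP request to a linked resource, not a dict access.

SOURCES = {
    "topology": {"incident-17": "web tier -> app tier -> database"},
    "events":   {"incident-17": ["disk latency alert", "failover event"]},
    "config":   {"incident-17": "app v2.3 on RHEL 6, 8 vCPU"},
}

def integrated_view(incident_id):
    """Gather everything each source knows about one incident into one record."""
    return {name: data[incident_id]
            for name, data in SOURCES.items()
            if incident_id in data}

view = integrated_view("incident-17")
print(sorted(view))  # ['config', 'events', 'topology']
```

Because every source answers the same kind of query, adding a fourth tool means adding one more entry, not writing another point-to-point integration.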
The difference is simplicity. You might be able to do this now with a lot of experts and time, but OSLC delivers simplicity.
And, most importantly, because OSLC uses community specifications for service management scenarios, integrations can be built once and applied across multiple 'related' OSLC-enabled tools. "Write-once, Apply-many."
For more information, listen to this podcast on the Tivoli User Community. This podcast provides a deeper insight into the next generation of service management built using linked data.
Also, at Pulse 2012 (video link), developerWorks' Scott Laningham is joined by Don Cronin, program director, Tivoli Technical Strategy and Architecture, and Mike Kaczmarski, IBM Fellow and Tivoli Chief Integration Architect, to discuss the magic of linked data.
Leave your comments on how you are using OSLC in your organization below and don't forget to follow us on Twitter @servicemgmt and be sure to bookmark our OSLC story on Storify.