A: Linked Data
The following article was written by Cameron Allen, Pierre Coyne and Beth Sarnie and is the second in our OSLC series.
In fact, if you were at Pulse 2012, you heard how IBM Watson will be used to help doctors diagnose medical conditions and improve patient care at WellPoint.
For those of you who, like me, don’t have a Watson-like recollection, here’s a quick flashback detailing a millisecond in Watson's brain on a sample patient:
- Watson is given specific information on a patient’s symptoms, and makes a preliminary diagnosis of the flu as the most likely illness.
- Based on the patient's name, Watson looks up records of the patient's history for the past few years, providing new insights that point to a more likely cause, such as a Urinary Tract Infection.
- Based on the patient's family connections, Watson is able to use the family history to derive that the most likely cause is now diabetes.
- And finally, Watson is able to access a patient’s latest tests to derive a final diagnosis.
If you're in the business of IT, this may sound a lot like incident management. And as any level 1 support person can attest, diagnosing the root cause of an incident is much like diagnosing a patient's condition. You need information from multiple sources (e.g. service desk, license, CMDB, monitoring, and asset management systems), but more importantly, it has to be in context, up to date, and delivered on a timely basis to make an accurate diagnosis of the root cause.
The problem has always been that an incident manager, like a doctor, has to jump between tools, entering requests in each system for the right information...and that is time consuming. In some cases, information isn't readily available and must be requested from other sources, not under their direct control.
One of the ways Watson is able to be such a great diagnostician (and incident manager) is through "linked data," which allows it to seek out and find related information on the patient from multiple sources in a fraction of a second to facilitate faster, more accurate patient diagnosis.
Until now, an incident manager did not have this same luxury.
That's where Jazz for Service Management comes in. Jazz is IBM's realtime platform for integrating management across multivendor tools, and across service lifecycle processes and functions. Like Watson, Jazz for Service Management uses principles of linked data, along with community standards (including OSLC), to support Watson-like service management decisions, regardless of what vendor tools you have in place.
If you want to learn more about OSLC and linked data in the context of service management, join the IBM developerWorks Jazz for Service Management community for demonstrations, and to gain early access to beta software.
A while back I wrote a blog just mentioning devops, and what a sensible idea it seemed – certainly the word ‘devops’ rang some bells and I got three times my normal hits in the first day. At the beginning of this year (2012, in case you got here late) I wrote a blog inspired by a discussion with a TOGAF fan; I felt we in parts of the IT world need to talk to our neighbours a lot more.
I was reminded of these by seeing several devops write-ups recently (separate articles in itSMF UK and US magazines in the same month). Both are encouraging and make the unavoidable point: what devops suggests as a matter of principle is clearly something to be supported, like the proverbial apple pie. It is just so obvious it has to be right - why would you not use the people who built and know a new piece of software (or anything else for that matter) to get it in place and working, and as the first port of call should anything not work as expected?
Both articles argue that ITSM people should embrace the ideas rather than rush to defend their empires. Devops is not the only example, but it seems to me that what we might be faced with is a set of approaches driven from disparate firm foundations in our vast ocean of IT. In fact the commonality between the approaches is massive, especially once you get past a temptation to overly rigorous application. It amazes me that the same IT people who would never dream of reading the instructions before using their new technology toys insist on applying every word of best practice.
If you want an example of how ITIL® overlaps the base devops concept, look at section 6.7, page 236 of Stuart Rance’s Service Transition book in ITIL 2011.
The point I really wanted to make is that we need to get above the point of origin and see identification, creation, delivery and operation of service as the real goal and the subject of some integrated guidance. Everything we have so far shows its origins.
- ITIL comes from operations; for all its gallant attempts to preach service strategy, it is not really reaching the people who should be doing it, because they originate from other parts of IT/business
- Devops is coming from the development community and so reflects their take on life. Things like OSLC that will help smooth some of the boundaries are also being pitched – so far – from the development side
- All of the stuff that I see is coming out of parts of IT, when to me IT is only a part (albeit a big and important part most times).
I started my career helping organisations establish and improve services; I got sidetracked into IT, and oft-times I miss that bigger picture. I still find it hard to think only of IT aspects and solutions, but I find I am often talking with people – suppliers and customers – who are content to be restricted to IT aspects.
In the short term I think what we need is more selling of the neighbour’s ideas. I want to see devops being evangelised by someone from the ITSM community, and we need the converse too. Otherwise it can feel like the recommendations for apple pie are coming exclusively from the apple marketing board; that doesn’t mean they are wrong, but they can be less than convincing, especially to a cynical audience or to one that has something they feel they must defend. Maybe I have stumbled onto my subject for next year’s conferences – anyone interested in inviting me?
As the Western Hemisphere was slumbering, news from Singapore was lighting up Twitter as our senior executives took the stage at the IBM InterConnect conference to talk about some of the latest announcements from the IBM corporation on innovation and a Smarter Planet.
Much of the reporting has been done on Twitter (hashtag #IBMInterConnect) and these keynotes are available on the LiveStream including an amazing speech by Dr. Michio Kaku about the future of computers ("everywhere and nowhere").*
These are supplemented by interviews conducted by Todd "Turbo Todd" Watson, also on the LiveStream.
Since this event was focused on a Smarter Planet (the entire IBM portfolio), we covered a lot of ground. Big Data. Social. Mobility. And, of course, cloud.
For SmartCloud Foundation, the Tivoli organization has a number of exciting solutions that are designed to help you increase the levels of innovation you provide to your clients.
For this blog, I thought it'd be good to focus on three of the new solutions you might not have seen before that are going to help you in building out your private cloud.
IBM SmartCloud Cost Management is one of the key components in transforming IT from a "cost center" to an innovation center by providing visibility and transparency into the IT costs associated with your cloud. It lets you measure, analyze, report, and invoice the utilization and costs of physical, virtualized, and cloud computing resources, storage and network resources, applications, and other non-IT cost drivers.
IBM SmartCloud Patch Management combines the benefits of two solutions, IBM Endpoint Manager for Patch Management and IBM SmartCloud Provisioning, to provide an effective entry point that delivers lower costs and improves the visibility and control of physical, virtual, and cloud environments.
Finally, the IBM SmartCloud Virtual Storage Center is a solution that you might have seen us talk about at Pulse 2012 and it's now an exciting addition to the portfolio. This solution helps IT storage managers migrate to an agile cloud-based storage environment and manage it effectively without having to replace existing storage systems. If you're looking to increase your storage efficiency in cloud, but don't have the checkbook to do a "rip and replace" of your entire infrastructure, you need to be looking at this solution.
There's more going on in Singapore over the next two days, and more discussion of SmartCloud Foundation and IBM Smarter Planet. Stay tuned to Twitter and the LiveStream and feel free to post comments below.
* I have to confess that this blog was delayed because I got sucked into watching the keynotes.
This blog post was written by George Mina
Earlier today, IBM shared its point of view on the future of the data center with Smarter Computing V3 (press release). A central focus is IBM Enterprise Systems (zEnterprise EC12 and Power) and their ability to deliver exceptional value through a private Cloud. We've seen how organizations have been able to leverage IBM Enterprise Systems to achieve significant benefits. Take the City of Honolulu for example which was able to lower its licensing costs by 68% while increasing tax revenue by $1.4M USD in just three months.
By adding Tivoli software to their current IT environment, organizations can advance their enterprise-class Cloud environment while protecting their existing IT investment. How? IBM SmartCloud Foundation software is deeply rooted in openness - an open standards approach and common management tools that are platform agnostic. Essentially, you pick the platform(s) that best meet your business goals and we deliver a set of interoperable Cloud management tools across your heterogeneous environment. Of course, there are intrinsic benefits to building a Cloud management stack on top of IBM Enterprise Systems given the tight integration between hardware and software. OMEGAMON, for example, leverages a deep integration with zEnterprise systems to deliver advanced monitoring that reduces typical time to resolution from 90 minutes to 2 minutes.
Whether you're starting to consider virtualizing your IT environment or are deep into your Cloud journey, we have open Cloud management tools that help you expand your Cloud footprint without fear of vendor "lock-in". Learn more about the latest announcement and our Cloud solutions by visiting this site and attending the System z webcast on October 17.
The following article was written by Cameron Allen, Pierre Coyne and Beth Sarnie and is the second in our OSLC series.
In non-acronym speak, what I'm saying is that the future of service management has arrived in the form of Open Services for Lifecycle Collaboration.
But, what is OSLC and what does it have to do with you?
If you are a user of service management tools of any kind, or rely on information from tools to do your job, then you probably know that finding the right information is half the battle, and getting realtime access to that information when it is not under your direct control can feel next to impossible.
OSLC means you can now leverage the simplicity and ease of web links to both find and share information across your management tools (be they IBM, or any vendor tools).
Just as web pages can be linked on the Internet, data can be linked together from one application to another – creating an application ecosystem where applications don't care what vendor they're from. They look up who has the data in a directory, and jump right to it.
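To make the directory idea above concrete, here is a minimal Python sketch. It is purely illustrative: the URLs, data types, and the `link_for` helper are assumptions for the example, not a real OSLC API.

```python
# Hypothetical sketch of "look up who has the data in a directory, and
# jump right to it". Each tool registers the kind of data it owns, and a
# consumer builds a plain web link instead of calling a vendor-specific API.

# Directory mapping a data type to the tool that owns it (illustrative URLs).
registry = {
    "incident": "https://servicedesk.example.com/oslc/incidents/",
    "asset": "https://assets.example.com/oslc/records/",
}

def link_for(data_type, record_id):
    """Look up which tool owns this kind of data and build a link to the record."""
    base = registry[data_type]      # directory lookup: who has the data?
    return base + str(record_id)    # jump right to it via a simple web link

print(link_for("incident", 1042))
# -> https://servicedesk.example.com/oslc/incidents/1042
```

The point of the sketch is that the consumer never needs to know which vendor is behind the link; swapping the tool only changes the registry entry.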
OSLC is not something new, and Tivoli is not the first to adopt it for integration. If you're an IBM Rational user, you may already be a believer. IBM Rational, its users, and an extensive ecosystem of partners have been using OSLC to successfully interconnect the application lifecycle for years.
In fact, Rational Jazz is the realization of OSLC community specifications and shared services in an open platform that anyone can use to interconnect the application lifecycle. Rational just delivered their 4th incarnation of the integrated product offering called Collaborative Lifecycle Management based on Jazz.
Tivoli is now leveraging these same principles to help break down silos of information across the end-to-end service lifecycle. That means expanding the notions behind Jazz from service design and development to now include service delivery and management. We call this Jazz for Service Management.
Take for example, problem management. In order to diagnose and resolve a given trouble ticket, the problem information must be gathered and aggregated from multiple sources. We may need information pertaining to the application topology, the health of a system within that topology, outages or events that may be affecting the application, the CPU utilization, the versions and configurations of the hardware and software that this application is dependent upon. I could go on...
The problem is that all of this information lives in different places. You can call around to the various owners of that information, pay a business partner to learn the API of the tool in order to get to the data, or have a highly skilled in-house resource write the integration. These options require extensive expertise in vendor-specific APIs and lots of maintenance to keep them current.
OSLC utilizes community defined specifications for sharing and linking data applied to specific service management scenarios so that in a critical outage scenario, all relevant information relating to that outage can be accessed in real time from any number of sources, displayed in the context of that problem, in a single integrated view, with related actions that can be taken.
The difference is simplicity. You might be able to do this now with a lot of experts and time, but OSLC delivers simplicity.
And, most importantly, because OSLC uses community specifications for service management scenarios, integrations can be built once and applied across multiple 'related' OSLC-enabled tools. "Write-once, Apply-many."
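A small Python sketch of "Write-once, Apply-many": because OSLC-style tools expose records using shared, community-specified property names (here the real OSLC/Dublin Core names `dcterms:identifier`, `dcterms:title`, and `oslc_cm:status`), one integration function can consume records from different vendors' tools. The tool records themselves are invented for illustration.

```python
# One integration, written once, applied across any tool that follows the
# shared specification. The two sample records stand in for responses from
# two different vendors' OSLC-enabled change management tools.

def summarize_change_request(resource):
    """Summarize a change request using only community-specified fields."""
    return (f"{resource['dcterms:identifier']}: "
            f"{resource['dcterms:title']} ({resource['oslc_cm:status']})")

from_tool_a = {"dcterms:identifier": "CR-17",
               "dcterms:title": "Patch web tier",
               "oslc_cm:status": "Open"}
from_tool_b = {"dcterms:identifier": "CR-99",
               "dcterms:title": "Resize VM pool",
               "oslc_cm:status": "Closed"}

for record in (from_tool_a, from_tool_b):
    print(summarize_change_request(record))
```

The same function handles both records unchanged; that is the "apply-many" half of the slogan.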
For more information, listen to this podcast on the Tivoli User Community. This podcast provides a deeper insight into the next generation of service management built using linked data.
Also, at Pulse 2012 (video link), developerWorks' Scott Laningham is joined by Don Cronin, program director, Tivoli Technical Strategy and Architecture; and Mike Kaczmarski, IBM Fellow and Tivoli Chief Integration Architect to discuss the Magic of linked data.
Leave your comments on how you are using OSLC in your organization below and don't forget to follow us on Twitter @servicemgmt and be sure to bookmark our OSLC story on Storify.
Depending on where you're from, some people call it "autumn" and other people call it "fall."
Either way, it's when things in most offices start getting a bit hectic.
Back in the autumn of 1968, IBMers in Boca Raton were putting together 1130 computer systems for customers.
Here's a neat photo of them hard at work in the factory.
As you can see, once demand picks up in autumn, it doesn't slow down.
Even today, demands on services are high and they keep getting higher, which is why application performance monitoring (APM) becomes important.
It's why I'm pleased to let you know that Gartner identifies IBM as a leader in the 2012 Magic Quadrant for Application Performance Management (APM).*
The full report is available on the Gartner website.
Give it a read and let us know how you are using APM in your organization in the comments section below.
PS I recognize that the 1130 has nothing to do with this blog post. I just felt the need to post pictures of classic IBM hardware...
* Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
The following article was written with significant contributions from Cameron Allen, Pierre Coyne and Beth Sarnie
Question of the day: why is IT agility so darn elusive?
Follow up question: after spending multiple millions in technology to improve service delivery, quality, and productivity, why do so many line of business executives perceive that IT is still not moving "fast enough?"
Siloed information presents a big speed bump to agility. According to the 2012 IBM study of CEOs, high-performing organizations are able to access data 108% more, draw insights from that data 110% more, and act on that data 86% more than their underperforming peers.
Which brings us back to the specific problem: Information exists, but it is not shared. Information remains trapped in silo'd tools and departmental applications. It's not only not moving "fast enough," it's not moving at all.
If you agree with ITIL and related methodologies, agility is directly linked to your IT processes. So while we can improve process methodology and connections across roles and functions, and within specific technology silos with tools, if the data and resources cannot be freely shared across process-enabling tools, then it's all for naught.
Going one level deeper, what is the cause of this 'information black hole', where data enters tools and is never seen again? Your reality is that you probably rely on a mix of multi-vendor tools. Those vendor tools rely on proprietary APIs for integration, and trying to make tools with different APIs communicate requires the IT equivalent of a team of United Nations translators, where each is an expert in their application's main language (its API). Even when successful, the herculean effort carries a constant maintenance cost, and might not work well in the end - things will be lost in translation. That said, even single-vendor tool suites are notoriously difficult to integrate.
So what can be done?
Stop for a moment and consider the best example that demonstrates simplicity of integration on a massive scale. It's the Internet. With the Internet, you can get information from millions of different web sites and all you need is a browser.
So for argument's sake, if tools are the equivalent of web sites, then all we need are links to connect two tools. We can take that one step further, borrowing principles from social networks like LinkedIn or IBM Connections, where we can search for one person, and see relationships to other people (making searching for data across tools much easier).
That in essence is OSLC (Open Services for Lifecycle Collaboration): a set of open, community-agreed specifications for linking tools using web technology. (And before you ask, no, it's not a standard, because apparently standards alone have not done the job.)
Data from any vendor tool is registered in a directory like a search engine, where other tools can find it, its relationship to other data, and access it via simple web link technology. Not similar to the Internet, but exactly like the Internet.
What that means is you can easily interconnect tools and processes. You can even replace tools with competitive tools - eliminating vendor lock in. It also means you can re-purpose one integration across a series of 'like' tools. "Write once, reuse-many" inherently applies here. All of this translates into simpler and faster access to information by people and tools, better analytics leading to better decisions, and better automation of workflow.
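The link-following idea can be sketched in a few lines of Python. Everything here is illustrative: the URLs, field names, and the in-memory dictionary standing in for HTTP GETs are assumptions for the example, not a real deployment.

```python
# Hypothetical sketch: resources link to related resources by URL, so a
# consumer hops between tools the way a browser follows hyperlinks. An
# in-memory dict stands in for fetching each URL over HTTP.

resources = {
    "https://events.example.com/oslc/event/7": {
        "title": "CPU alert on web01",
        "relatedIncident": "https://desk.example.com/oslc/incident/55",
    },
    "https://desk.example.com/oslc/incident/55": {
        "title": "Web checkout slow",
        "relatedIncident": None,
    },
}

def follow(url, link_name):
    """Fetch a resource and follow one of its links to the owning tool's record."""
    return resources[url].get(link_name)

# From a monitoring event in one tool, jump to the incident in another tool.
incident_url = follow("https://events.example.com/oslc/event/7", "relatedIncident")
print(resources[incident_url]["title"])
# -> Web checkout slow
```

Because the link is just a URL, replacing the incident tool with a competitor's product only changes what that URL points at; the consuming code does not change, which is what eliminates the lock-in.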
Now, IT will be seen as agile.
No longer elusive.
This is the first in a series of articles we will be posting about OSLC. Feel free to leave your comments below. Be sure to listen to the podcast we did for OSLC on the Tivoli User Group - TUC Podcast: OSLC Series - Learn how Tivoli’s enhanced architecture strategy will help you simplify integration across products – IBM and Other Vendors, and don't forget to follow us on Twitter @servicemgmt.
Also, stay tuned to the blog for more in our series of articles about OSLC.
PS a reminder that InterConnect 2012 in Singapore is coming. October 9-11 and it's going to be an amazing conference. Tiffany Winman has a great post about it on the IBM Software Group blog.
I am at VMWorld this week, in San Francisco CA where IBM is a platinum sponsor.
With the growing adoption of Cloud implementations (private, public and hybrid), there's clearly a desire among the clients exploring solutions here to optimize and exploit their environments rather than settle for a maintenance, steady-state approach. It is timely, therefore, that Bala Rajaraman and Pratik Gupta, two IBM Distinguished Engineers, are presenting a Collaborative DevOps session at VMWorld.
I sat with Pratik and Bala and asked them what the impetus and motivation for developing this talk were. The crux of the pitch, as Pratik explained to me, is that current conditions have created four drivers, facing a majority of customers, that are making a DevOps approach an imperative.
At the heart is the desire in companies for agility. The desire of Line of Business leaders to create value in their offerings is resulting in an urgent need for business agility. This in turn challenges the development organization to take an agile development approach. As more and more deployments move to a Cloud delivery model, it requires an operational discipline that is not always present. Add to this the human element. If you're in an enterprise shop, you know already that this is not purely solvable by software. Cultural gaps exist between the Line of Business sponsors, the developers and the Operations team. Notions of completion, priority and quality are also different.
Right now, companies are not getting this right. 50% of the applications released into production are rolled back. As much as 51% of projects are missing critical features. Quality and end user expectation delivery are clearly an issue.
Pratik and Bala will frame this problem space and then show how adopting a continuous delivery model can help address this.
If you're debating between this session and something else at this time, it might also help to know that the first 75 attendees will receive a snappy IBM jacket. The rest will get to pick from an assorted set of items. There's also a giveaway of an iDoodah. Come by the IBM booth on the expo floor if you want details.
I recently discovered ANOTHER great resource for IBM Business Partners. The IBM "Ready To Execute" initiative, which was originally launched internally to improve the quality of our marketing campaigns and drive higher quality leads, has been extended to Business Partners. In a nutshell, Ready to Execute is a web-based model that provides the foundation and all the elements for launching an effective marketing campaign, including multi-touch e-mails, telemarketing scripts, digital strategies, and compelling offers.
As I began researching all the specifics of the program for our Business Partners, I stumbled upon a blog post from one of my colleagues in Software Group, Jacqi Levy, who has done a fabulous job of summarizing the benefits of the program, as well as providing a great overview on how Business Partners can get started on launching a campaign. Nicely done, Jacqi!
One of the coolest things about working at IBM is the global nature of our company.
Which is why the announcement of the new IBM Research - Africa (press release) is so cool. Of note is their focus on Smarter Cities. Specifically:
Smarter Cities – with initial focus on water and transportation: Rates of urbanization in Africa are the highest in the world. The single biggest challenge facing African cities is improving access to and quality of city services such as water and transportation. IBM, in collaboration with government, industry and academia, plans to develop Intelligent Operation Centers for African cities – integrated command centers – that can encompass social and mobile computing, geo-spatial and visual analytics, and information models to enable smarter ways of managing city services. The initial focus will be on smarter water systems and traffic management solutions for the region.
I'm looking forward to seeing the work that our African IBM team is going to do in this space and can't wait to work with them on future projects.
Today we trust computers – literally and unconsciously – with our very lives. I was reflecting on this level of trust when I got £50 of cash out from my local ATM and declined the offer of a receipt. It seems I now have total faith that the computer systems will ‘get it right’. I’ve come a long way from keeping all my own cheque books to cross-check against later bank statements.
Now, combining that faith with a little healthy British cynicism, and triggered by watching the Olympics tennis finals on TV, a mischievous but irresistible thought came to my mind.
It used to be that when a ball hit the ground near the line we relied on the human eye to say whether it was ‘in’ or ‘out’. That caused disagreements and discussion – and, in tennis, often sulking, swearing and the full range of petulant behaviour.
Nowadays that is all replaced by referencing the technology. When there is doubt – or one of the players questions a call – then we simply ask the computers. What we get is a neat little picture representing the appropriate lines on the court and a blob showing where the ball hit. So, problem solved: disappointment still for one player but, so it seems, total acceptance that the computer is right. After all, it is an expensive system working away inside a very expensive box – it must be right, mustn’t it? Or to put it another way, ‘computer says in’ – who would argue?
But what occurred to me is this. All we can actually see is some boxes around the court, and a stylised display with a blob on it. That could be delivered by one person with a tablet showing the court lines, touching the screen where they think the ball landed. Very cheap, and it still solves all the arguments because – naturally – everyone trusts technology.
Now – of course, and before anyone calls their lawyers – I am not suggesting for the merest moment that there is the slightest possibility of such a thing happening. But it’s fun to think it might be possible. There is little public awareness of what accuracy the system – and here I presume it does really exist – works to. If you dig around on the web you can find out (the answer, by the way, for tennis is 3.6mm). You also find out there is some very minor grumbling and questioning going on. But that seems to be at geek level – in everyday use the audience stands instantly convinced.
So, thinking it through, there are a couple of interesting consequences for real IT life:
- Once you realise that trust depends on quality of presentation at least as much as on accuracy, should you focus more on that? Certainly you have to take presentation seriously, because the corollary is that if you deliver perfection but don’t make it look good, then no-one will believe it even though you are right.
- Whose responsibility is it to check – and is it even possible? I suspect this discussion will take us into the territory of ‘governance’. But even before we get there, it implies that User Acceptance Testing needs to do more than look at things. Of course yours does, doesn’t it?
I guess my big issue is to wonder how comfortable we are – as the deliverers of the technological solutions for our customers, and especially our users – with such blind faith. Of course, people being the irrational things they undoubtedly are, that blind faith in the detail is often accompanied by a cynical disregard for overall competence – contrast the faith in ATMs and on-line bank account figures with the apparent level of trust in the banking industry as a whole.
As a little codicil to the story, I registered with a new doctor yesterday – the nurse asked me questions, took blood pressure, etc., and loaded all the data she collected into a computer. The system was clearly ancient, with a display synthesising what you typically got on a DOS 3.0 system. First thought: ‘OMG, why are they using such old software? That can’t be good.’ Second thought: ‘They’ve obviously been using it for years, so they really understand it, have ironed out all the bugs, and it does what they need. It ain’t broke so they aren’t fixing it.’ But my instinctive reaction of suspicion of it for not being pretty was there, and I had to consciously correct myself.
Would you, as a service provider, prefer more questioning of what you package up and present to your customers and users, or are you happy to have that faith? My own view is that the more blind faith they have in you, the more the retribution will hurt if things do go wrong. Or perhaps that’s just me being cynical again?
I’ve had a recent burst of situations where things just seem to be difficult for no obvious reason, and maybe that has made me even more cynical than usual - yes, it is, just about, possible. My first assumption – of course – is that these are yet more examples of bad service management. Each is one more case of services not being matched to customer requirements, but then maybe a sneaking suspicion creeps in: are they really deliberately designed to deliver what the real customer wants, rather than the apparent one (or user as ITIL might call them).
Of course we have all experienced this to some extent: the complaints department that is very hard to contact, with a premium rate phone number and an interminable set of IVR choices before you can get anywhere near a real person – all costing you £1.75 a minute to listen to. Typically we give up in disgust just after we have spent more on the phone call than we spent on the product we are trying to complain about. While the first thought is that the supplier hasn’t thought through how they need to be contactable, second thought makes you realise that they don't want people being able to complain easily. And if you have an angry customer who is unlikely to buy more from you, then you might as well make what money you can out of them calling you to complain and tell you they won’t buy any more. So maybe this is actually clever design – to meet the primary customer’s requirement?
Sometimes you just aren’t sure – I was also watching someone applying for a visa – for a well-known country in North America. It reminded me very much of the classic customer complaints system I just outlined. Rather confusing instructions, no web-based option to book an appointment – only telephone at £1.23 per minute (plus ‘network extras’, whatever they might be), and then, surprise, surprise, a computerised voice – talking slowly – offers you some options. Appointments are issued, it seems, ‘en bloc’, and you are warned you must queue outside, whatever the weather. Oh, and no mobile phones or any other electrical items can be taken into the building, and, no, there is no facility to leave them anywhere safe while you go in.
So, is this bad service build, or is it carefully designed to reduce the number of applicants? After all, the people who need visas are – by and large – from less affluent countries, and won’t spend that much when they get there. Could be the whole service was carefully designed to discourage.
Now I suspect the real truth is a perfectly justifiable need for security and a sensible imperative to reduce costs. But it does perhaps make you realise that it is oh so easy to get sidetracked and judge things only by what are actually the second level measures and deliverables, rather than being sure we tie everything back to our organisation’s overall visions and objectives.
It is not always as easy as it sounds – especially in large companies where day-to-day operations can be a long way from corporate targets. For example, focusing on selling widgets that work, continue to work and get fixed quickly should they fail means that you probably just focus on ensuring your direct customers are happy widgetters. Yet if the profit margin on widgets is low, the market difficult and competitive and your widgets do tend to break more often than other manufacturers’…well then the best contribution to your corporate objective of maximising shareholder return is, quite correctly, to get out of supplying widgets altogether. Even if that means abandoning your long time faithful widget customers, well, if you have got your overall prime objective right, then abandoning them is right for the company.
We see the same thing with internal services: is that travel booking service there to make it easier for you to spend the company’s money on travel, or is it there to make sure you only go through with it if you really need to go? If reducing costs is what the owners of that service want, then ease of use is a bad thing.
Secretly though, I suspect a lot of bad service really is just that. But – it can be a fun game to play next time you get bad service. Is it really bad, or is it targeted to drive you away because that’s what they want? Is it hard to buy something because of incompetence or because the profit margin is too low?
Next time you get awful service, maybe it is worth congratulating the service provider on their commitment to higher objectives – maybe even ask them if they would be so kind as to tell you the corporate objectives they are rigorously pursuing, so you can write to their CEO and congratulate them too on how well their staff strive to reduce unhelpful customer satisfaction. Or then again, they may not be so pleased to hear from you after all, and will just leave you with an expensive IVR system to listen to.
A colleague of mine just introduced me to another great Tivoli resource for Business Partners. The Tivoli Knowledge Center
is a great place for partners to get the training and skills to successfully sell, service, and become certified on our most important and strategic IBM Tivoli product lines. It includes marketing tools, as well as technical, training and sales resources.
For those partners who are new to the Tivoli family, there is a very intuitive "Getting started with Tivoli" section. For the 'seasoned veterans' who already have a relationship with Tivoli, there are quick links to sales kits, sales plays, and incentives.
One of this month's top stories will point you to the Business Partner Summit presentations from Pulse 2012. Within that page, you can find a link to the "Small Deals Equals Big Revenue"
charts that were presented by Tamara Crawford and Michele Payne to an audience of about 60 partners at Pulse 2012. I was fortunate enough to attend that presentation in Vegas, and got some great insight from the presenters and the partners, who offered up a lot of great questions and comments.
The Tivoli Knowledge Center
can be found within the PartnerWorld web site so feel free to share this resource with YOUR colleagues!
How would you feel, as a manager in your company’s IT department, if the marketing people specified, commissioned and developed an IT application for their own needs?
I was driven to ask this question by several ‘customer surveys’ that I have seen come out of IT departments. An extract from my very favourite is shown here, which, while it demonstrates admirable self-confidence, is perhaps not the perfect basis for objective assessment.
It just seems strange to me that an industry built entirely upon providing specialist expertise to allow others to deliver their jobs doesn’t always feel the need to get specialist advice itself.
Now, personally, I do believe I know at least as much about building, delivering and analysing surveys as I do about applying technology. But that is mostly because I know so little about technology. In both situations I would always welcome expert advice if I need to get something right.
Even IT listens to the CFO’s people when it comes to costs and accounting, yet many have potential access to significant expertise in their marketing people that goes untapped.
This feels important to me simply because of all the bad surveying we still see. I suspect that the availability of free services like Survey Monkey leads us to build and run surveys without any real planning, and without thinking through how we might analyse and use the results when we have them. Basically, a good example of reducing the ‘Plan-Do-Check-Act’ cycle down to ‘Do’ – speedy and economic, but not usually very effective.
As for the confusion and the wrong results taken from unrepresentative samples…
For simple, but telling, examples think about how many ‘customer survey’ results you have seen where in fact it is only users who have been addressed. User satisfaction is an important thing, but it isn’t customer satisfaction, and we need to find out both and act accordingly on what we find. For example, if you have 100% perfect user satisfaction, then the odds are your customers will think they are spending too much. And you will frequently see a mix of customers and users asked questions that are not really targeted at all, just asked because they can be. This is often based on the – misplaced – belief that the more people you ask, the more accurate the answer, ignoring the whole ‘sample selection process’.
Take a classic ITSM example, where a support unit routinely sends questionnaires to those who have made use of the service desk. This, of course, gives you a satisfaction result amongst those who have had sufficient problems to make them phone for help. Might you expect a rather lower score from these people than from the ones who have been working quite happily without the need for support?
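The sampling bias in that example is easy to demonstrate with a quick simulation. The figures below are entirely hypothetical – they simply assume that users who had to call the service desk rate their satisfaction a little lower than those who never needed help – but they show how surveying only the callers drags the reported score well below the true average for the whole user base:

```python
import random

random.seed(42)

# Hypothetical population of 10,000 users on a 1-5 satisfaction scale.
# Assume 20% had problems bad enough to call the service desk, and that
# those callers skew towards lower scores than everyone else.
population = []
for _ in range(10_000):
    called_desk = random.random() < 0.20
    if called_desk:
        score = random.choice([1, 2, 2, 3, 3, 4])   # skews low
    else:
        score = random.choice([3, 4, 4, 5, 5, 5])   # skews high
    population.append((called_desk, score))

def mean(scores):
    return sum(scores) / len(scores)

desk_only = [s for called, s in population if called]
everyone = [s for _, s in population]

print(f"Surveying only service-desk callers: {mean(desk_only):.2f}")
print(f"Surveying the whole user base:       {mean(everyone):.2f}")
```

The questionnaire sent only to service-desk callers reports a markedly lower average than a survey of the whole population – not because the service got worse, but because of who was asked.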
We know we need to care more and more about understanding what our customers – and users and other stakeholders – want and need. We also need to understand that it is not always an easy task to find that out. There is a whole professional specialism out there that delivers this service – as service providers ourselves, proud of our professional expertise, should we not recognise that more, and take some better advice before we ‘knock something up’ to measure satisfaction?
Maybe you do consult with your internal experts if you have them, or maybe you buy in expertise. It would be good to hear if you do.
I recently stumbled upon a great little co-marketing resource for Business Partners. The Mid Market Asset Gallery
is a slick and extremely user-friendly one-stop-shop repository for the most recent midmarket advertising and demand generation assets. What I liked most about this tool was its search functionality, which enabled me to easily filter by campaign, asset type or date.
I did a quick search on a topic that is near and dear to me lately – IBM SmartCloud – and I unearthed a very cool downloadable print ad that speaks to the ways that IBM and its Business Partners can work with a midmarket business to "take all or part of their IT infrastructure to the cloud".
I would encourage you to check it out
to increase your awareness of the various campaigns and drive usage of the assets with your BPs.
Or watch this quick demo