There have been a lot of good discussions
on Back2ITSM recently. I find the site a wonderful reminder of the two
universal constant truths: ‘everything changes’ and ‘there is nothing new under
the sun’. They might seem contradictory at first, yet the older I get the more
both seem true.
Firstly, if you aren’t looking at the
Back2ITSM group on facebook then you are missing out - go sign up, now! Let me
explain what it is and how it is brand new and full of ITSM tradition at the same time.
Secondly, it is about people talking with
each other. That’s the bit that is the same as it’s always been. The
willingness to share ideas and help others – even those in competing organisations
– is exactly like many itSMF regional meetings I have been to in the UK,
Canada and New Zealand; except that now we are all in all three at the same time.
Of course, social media isn’t new, and
facebook is not the newest kid in town. But what is 21st century
about this kind of group is the immediacy of comment and dialogue and the wide
spectrum of simultaneous participants it allows. Since it has active members
from all across the world, there is constant input and comment.
OK, so we all know that the technology
for this has been around a while. After all, it is ‘just’ real-time input
to a forum – and we now have about 20 or 30 people across the world presenting
their opinions to an audience of 500+ (lurking is positively encouraged). For
me what is important is precisely that I am not aware of the clever technology,
nor conscious all the time that I am using a novel means of communicating, nor of
just how damned clever the whole thing is. With this group I have reached stage
three in my own ‘using technology’ scale: comfort and taking it for granted.
Stage 1 is when
you are using some new way of doing things just because you can. This isn’t
just about IT, of course; many of us may recall how such things have affected
our choice of travel (my
example is choosing an airline because they had A380s
on the route – even if a bit dearer, I had never been on one of them before).
Stage 2 is when
the means are no longer overwhelming the ends – you’re using it now because it is
logical to do so, and it is delivering value. But, you are still very aware of
how cool it is. And you probably keep telling other people how cool it is too.
Stage 3 is when
your focus is totally on what you are doing. I can now just read what is written, comment
if I have something to say. You know it’s a normal conversation because it goes
off at tangents, people get flippant, say daft things, agree, argue, make
subtle (and sometimes not so subtle) digs at each other and launch jokes that no-one
else notices. In short, it’s normal human conversation, without thinking about
how you are achieving it, where all the people are, or what time it is there.
And to me this is a good motif for
successful technology. The implementation part isn’t properly over just when it
is there and running. Real success is when people don’t notice it any more,
but just get on with using it, unconsciously – as part of their everyday lives.
It’s one more example of how success is
about being invisible. First time I flew in an A380 I was excited about it –
last time I was watching a movie before we reached the runway. That’s success.
(OK, so there was a little renewed attention to the technology after the Qantas A380
had an engine explode, but I am back to ignoring it again now.)
So the important lesson and message that I
see is how we need a customer perspective on the introduction of new
technology. And maybe what you actually want is people to stop telling you how
impressed they are, because then they are getting on with using it, which was,
after all, the real point of the exercise, wasn’t it?
A: Linked Data
The following article was written by Cameron Allen, Pierre Coyne and Beth Sarnie and is the second in our OSLC series.
In fact, if you were at Pulse 2012...you heard how IBM Watson will be used to help doctors diagnose medical conditions and improve patient care at WellPoint.
For those of you who, like me, don’t have a Watson-like recollection, here’s a quick flashback detailing a millisecond in Watson's brain on a sample patient:
- Watson is given specific information on a patient’s symptoms, and makes a preliminary diagnosis of the flu as the most likely illness.
- Based on the patient's unique name, Watson looks up records of the patient's history for the past few years, providing new insights that point to a better possible cause – for example, a urinary tract infection.
- Based on the patient's family connections, Watson is able to use the family history to derive that the most likely cause is now diabetes.
- And finally, Watson is able to access a patient’s latest tests to derive a final diagnosis.
If you're in the business of IT, this may sound a lot like incident management. And as any level 1 support person can attest, diagnosing the root cause of an incident is much like diagnosing a patient's condition. You need information from multiple sources (e.g. service desk, license, CMDB, monitoring, and asset management systems), but more importantly, it has to be in context, up to date, and delivered on a timely basis to make an accurate diagnosis of the root cause.
The problem has always been that an incident manager, like a doctor, has to jump between tools, entering requests in each system for the right information...and that is time consuming. In some cases, information isn't readily available and must be requested from other sources, not under their direct control.
One of the ways Watson is able to be such a great diagnostician (and incident manager) is through "linked data," which allows it to seek out and find related information on the patient from multiple sources in a fraction of a second to facilitate faster, more accurate patient diagnosis.
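To make the "linked data" idea a little more concrete for incident diagnosis, here is a minimal sketch in Python. It is purely illustrative – the URLs, the 'links' field and the use of the requests library are assumptions of mine, not a Watson or Tivoli API – but it shows the basic pattern: each record carries web links to related records, so a client can assemble diagnostic context simply by following them.

    import requests

    def gather_context(incident_url, max_hops=2):
        """Collect an incident record plus the records it links to.

        Assumes each tool exposes its records as JSON over HTTP and that
        every record carries a 'links' list of related-resource URLs
        (a generic linked-data convention, not a specific product API).
        """
        seen, context, frontier = set(), [], [incident_url]
        for _ in range(max_hops + 1):
            next_frontier = []
            for url in frontier:
                if url in seen:
                    continue
                seen.add(url)
                record = requests.get(url, timeout=5).json()
                context.append(record)
                # Follow links out to the CMDB item, monitoring alert,
                # asset record and so on, wherever they happen to live.
                next_frontier.extend(record.get("links", []))
            frontier = next_frontier
        return context

    # Hypothetical usage: start from the trouble ticket and pull in whatever
    # the service desk, CMDB and monitoring tools have linked to it.
    # context = gather_context("https://servicedesk.example.com/incident/4711")

The same traversal works whether a linked record lives in the originating tool or in a different vendor's tool, which is exactly the property the rest of this article is about.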
Until now, an incident manager did not have this same luxury.
That's where Jazz for Service Management comes in. Jazz is IBM's realtime platform for integrating management across multivendor tools, and across service lifecycle processes and functions. Like Watson, Jazz for service management uses principles of linked data, along with community standards (including OSLC) to support Watson-like service management decisions, regardless of what vendor tools you have in place.
If you want to learn more about OSLC and linked data in the context of service management, join the IBM developerWorks Jazz for Service Management community for demonstrations, and to gain early access to beta software.
A while back I wrote a blog just mentioning
devops, and what a sensible idea it seemed – certainly the word ‘devops’ rang
some bells and I got 3 times my normal hits in the first day. At the beginning
of this year (2012 in case you got here late) I wrote a blog inspired by a
discussion with a TOGAF fan; I felt we in parts of the IT world need to talk to
our neighbours a lot more.
I was reminded of these by seeing several
devops write-ups recently (separate articles in itSMF UK and US
magazines in the same month). Both are encouraging and make the unavoidable
point: what devops suggests as a matter of principle is clearly something to
be supported like the proverbial apple pie. It is just so obvious, it has to be right – why would
you not use the people who built and know a new piece of software (or anything
else for that matter) to get it in place and working, and as the first port of
call should anything not work as expected?
Both articles argue that ITSM people should embrace
the ideas rather than rush to defend their empires. Devops is not the only
example, but it seems to me that what we might be faced with is a set of initiatives
driven from disparate firm foundations in our vast ocean of IT.
In fact the commonality between the
approaches is massive, especially once you get past a temptation to overly
rigorous application. It amazes me that the same IT people who would never
dream of reading the instructions before using their new technology toys insist
on applying every word of best practice.
If you want an example of how ITIL®
overlaps the base devops concept look at section 6.7, page 236 of Stuart
Rance’s Service Transition book in ITIL 2011.
The point I really wanted to make is that
we need to get above the point of origin and see the identification, creation,
delivery and operation of services as the real goal and the subject of some
integrated guidance. Everything we have so far shows its origins.
- ITIL comes from operations; for all its gallant attempts to
preach service strategy it is not really reaching the people who
should be doing that, because they originate from other parts of IT/business
- Devops is coming from the development community and so
reflects their take on life. Things
like OSLC that will help smooth some of the boundaries are also being
pitched – so far – from the development side
- All of the stuff that I see is coming out of parts of IT, when
to me IT is only a part of the whole (albeit a big and important part most times).
I started my career helping organisations
establish and improve services; I got sidetracked into IT, and oft-times I miss
that bigger picture. I still find it hard to think only of IT aspects and
solutions, but I find I am often talking with people – suppliers and customers
– who are content to be restricted to IT aspects.
In the short term I think what we need is
more selling of the neighbour’s ideas. I want to see devops being evangelised
by someone from the ITSM community, and we need the converse too. Otherwise it
can feel like the recommendations for apple pie are coming exclusively from the
apple marketing board; that doesn’t mean they are wrong, but they can be less than convincing, especially to a cynical audience or to one that has something they feel they must defend. Maybe I have stumbled onto my
subject for next year’s conferences – anyone interested in inviting me?
As the Western Hemisphere was slumbering, news from Singapore was lighting up Twitter as our senior executives took the stage at the IBM InterConnect conference to talk about some of the latest announcements from the IBM corporation on innovation and a Smarter Planet.
Much of the reporting has been done on Twitter (hashtag #IBMInterConnect) and these keynotes are available on the LiveStream including an amazing speech by Dr. Michio Kaku about the future of computers ("everywhere and nowhere").*
These are supplemented by interviews conducted by Todd "Turbo Todd" Watson, also on the LiveStream.
Since this event was focused on a Smarter Planet (the entire IBM portfolio), we covered a lot of ground. Big Data. Social. Mobility. And, of course, cloud.
For SmartCloud Foundation, the Tivoli organization has a number of exciting solutions that are designed to help you increase the levels of innovation you provide to your clients.
For this blog, I thought it'd be good to focus on three of the new solutions you might not have seen before that are going to help you in building out your private cloud.
IBM SmartCloud Cost Management is one of the key components in transforming IT from a "cost center" to an innovation center by providing levels of visibility, and transparency, to the IT costs associated with your cloud. Measure, analyze, report, and invoice the utilization and costs of physical, virtualized, and cloud computing resources, storage and network resources, applications, and other non-IT cost drivers.
IBM SmartCloud Patch Management combines the benefits of two solutions, IBM Endpoint Manager for Patch Management and IBM SmartCloud Provisioning, to provide an effective entry point that delivers lower costs and improves the visibility and control of physical, virtual, and cloud environments.
Finally, the IBM SmartCloud Virtual Storage Center is a solution that you might have seen us talk about at Pulse 2012 and it's now an exciting addition to the portfolio. This solution helps IT storage managers migrate to an agile cloud-based storage environment and manage it effectively without having to replace existing storage systems. If you're looking to increase your storage efficiency in cloud, but don't have the checkbook to do a "rip and replace" of your entire infrastructure, you need to be looking at this solution.
There's more going on in Singapore over the next two days, and more discussion of SmartCloud Foundation and IBM Smarter Planet. Stay tuned to Twitter and the LiveStream and feel free to post comments below.
* I have to confess that this blog was delayed because I got sucked into watching the keynotes.
This blog post was written by George Mina
Earlier today, IBM shared its point of view on the future of the data center with Smarter Computing V3 (press release). A central focus is IBM Enterprise Systems (zEnterprise EC12 and Power) and their ability to deliver exceptional value through a private Cloud. We've seen how organizations have been able to leverage IBM Enterprise Systems to achieve significant benefits. Take the City of Honolulu for example which was able to lower its licensing costs by 68% while increasing tax revenue by $1.4M USD in just three months.
By adding Tivoli software to their current IT environment, organizations can advance their enterprise-class Cloud environment while protecting their existing IT investment. How? IBM SmartCloud Foundation software is deeply rooted in openness - an open standards approach and common management tools that are platform agnostic. Essentially, you pick the platform(s) that best meet your business goals and we deliver a set of interoperable Cloud management tools across your heterogeneous environment. Of course, there are intrinsic benefits to building a Cloud management stack on top of IBM Enterprise Systems given the tight integration between hardware and software. OMEGAMON for example leverages a deep integration with zEnterprise systems to deliver advanced monitoring that reduces typical time to resolution from 90 minutes to 2 minutes.
Whether you're starting to consider virtualizing your IT environment or are deep into your Cloud journey, we have open Cloud management tools that help you expand your Cloud footprint without fear of vendor "lock-in". Learn more about the latest announcement and our Cloud solutions by visiting this site and attending the System z webcast on October 17.
I went to an itSMF UK
meeting last week. I haven’t managed to get to our local meeting for a while
and I found I was being introduced to new members as someone who has been
around ‘since the beginning of ITIL’.
Now that kind of thing, apart from making
me feel old (which is, admittedly, a fair enough feeling at my age) also made
me look back and think on where we (the ITIL community) have come from and
where we are now.
The first thing that occurs to me in
thinking back to the early days of ITIL is that we now find ourselves in a
place that none of us imagined we would. Don’t get me wrong, the original
inventors and drivers of the
ITIL idea were not short on confidence or vision, nor in seeing the benefits
that documenting this aspect of best practice would bring. But I suspect that
world domination of this industry sector by the word ‘ITIL’ was beyond even
their best possible visions.
The key to the expansion of ITIL was that
it quickly became about more than just the books. The ITIL advertising leaflets
produced in the mid 90s coined the term ‘ITIL philosophy’ to represent this
scope of ITIL. I suppose I should confess that I invented that phrase
and also the diagram that went with it – a version from about 1997 is shown
here. The accompanying words suggested that, even back then, less than 1% of
‘ITIL-related sales’ were about the actual ITIL books, and the rest were everything that had grown up around them.
The fact that I couldn’t even hazard a
guess at what that percentage might be today indicates a few pretty fundamental changes:
- When I was writing those things in 1996-1998, I felt I could
pretty much ‘take in’ what was going on related to ITIL, and even know
most of the people developing and delivering new ideas. Nowadays no-one
can honestly claim to be able to do that.
- What is ‘ITIL-related’ has become a much more debatable
concept. Whatever its faults might have been (and there were many) ITIL
was just about alone in its market space. The initiatives kicked-off by
ITIL have spawned fellow travellers, such as COBIT, ISO20000 and others.
The fact that I could easily start a long-running – and probably vitriolic
– debate on
the social media pages by asserting which are and which are not ITIL
derived, ITIL alternatives etc. indicates that this is now a loosely
bounded region. That makes any assessment of its scale, scope and success very hard.
Some other things have changed too.
Nowadays the maturity of the ITIL ideas
means most players are focused on market share rather than growing the sector
itself. That means more competition than there used to be. Nonetheless,
examples of that old collaborative spirit are still easily found. Probably
the best example is the ‘Back2ITSM’ facebook group – a place where free advice,
constructive debate and openly shared thoughts are still the norm.
The itSMF was born in 1991, and played –
probably – the major coordinating role in promoting the idea, importance and
approaches of service management. Like ITIL, itSMF predates the term ‘service management’,
having started as the ITIMF. Even here we have seen a lot more competition
during the last third of its lifetime: both competition from other community
organisations and also considerable internal competition. I hope itSMF will
evolve from this to carry on delivering benefit to its members. I am a bit too
frightened to work out what percentage of my time has been given to itSMF over
the last 17 years – or at least frightened what my employers over that period
might think. But that commitment does make me wish hard for its future health.
So, looking back should make us appreciate
where we are now – nostalgia can be deceptive, for usually the past wasn’t
better; progress is exactly that – going forward and getting more. And
wherever ITIL is now, IT Service management has come a wondrous way in the last
20 years. Global technology changes have made a difference to that journey;
we’ve seen personal computing and the internet make all but unbelievable levels
of change. We may well see Cloud do the same; personally I think cloud might do
that by freeing us from some of the technical baggage and letting us see and
address real service management issues, without the obfuscation of technology
issues or the opportunity to hide behind them any more.
We’ve seen a move from books being the
go-to source of wisdom when ITIL started to an amazing range of information
sources. Nowadays your typical service management professional will expect their influences
to come via social media, electronically delivered white papers and the like.
Interestingly, in many cases, they would also expect them to come for free, and
that throws a real challenge at the thought leadership business. If ITIL 4 ever
happens I think it will be a radically different entity from versions 1-3.
Where I want to see ITSM going is towards
SM. IT is now so pervasive that it is everywhere, which to me means that ITSM
cannot be a subsection of overall SM anymore because it logically applies to
everything, since all services now depend on IT. Nevertheless, IT has treated
SM well, and – after some effort – has taken it seriously. I hope those lessons
will work their way into broader adoption and we will see an improved – and
critically an integrated – approach to service management across enterprises
because of that. I am driven to optimism in this (not my natural state you
understand so it is noteworthy) by the fact that, alongside this blog, I am
involved just in this same month in a webinar and an article for IBM’s SMIA
series on the idea that IT is now spreading its ideas – and delivering its
technology and specifically its evolved software solutions – to the broader enterprise.
I wonder what we will be saying in another
20 years looking back – maybe ITIL will survive another 20 years, maybe not,
but I am certain service management will progress and improve.
The following article was written by Cameron Allen, Pierre Coyne and Beth Sarnie and is the second in our OSLC series.
In non-acronym speak, what I'm saying is that the future of service management has arrived in the form of Open Services for Lifecycle Collaboration.
But, what is OSLC and what does it have to do with you?
If you are a user of service management tools of any kind, or rely on information from tools to do your job, then you probably know that finding the right information is half the battle, and getting realtime access to that information when it is not under your direct control can feel next to impossible.
OSLC means you can now leverage the simplicity and ease of web links to both find and share information across your management tools (be they IBM, or any vendor tools).
Just as web pages can be linked on the Internet, data can be linked together from one application to another – creating an application ecosystem where applications don't care what vendor they're from. They look up who has the data in a directory, and jump right to it.
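As a small illustration of what such a link can look like – the field names, URLs and use of the requests library below are invented for the example, not taken from the OSLC specifications – a ticket in one vendor's service desk can simply point at a configuration item held in another vendor's CMDB:

    import requests

    # A record in one tool, exposed as JSON over HTTP. The related CI is not
    # copied into the ticket; it is referenced by an ordinary web link.
    ticket = {
        "id": "INC-4711",
        "summary": "Payroll application timing out",
        "links": {
            "affects": "https://cmdb.example.com/ci/payroll-app-prod",
        },
    }

    # Any other tool (or person) can "jump right to it" by following the link,
    # without knowing or caring which vendor's CMDB answers the request.
    ci = requests.get(ticket["links"]["affects"], timeout=5).json()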
OSLC is not something new, and Tivoli is not the first to adopt it for integration. If you're an IBM Rational user, you may already be a believer. IBM Rational, its users, and an extensive ecosystem of partners have been using OSLC to successfully interconnect the application lifecycle for years.
In fact, Rational Jazz is the realization of OSLC community specifications and shared services in an open platform that anyone can use to interconnect the application lifecycle. Rational just delivered their 4th incarnation of the integrated product offering called Collaborative Lifecycle Management based on Jazz.
Tivoli is now leveraging these same principles to help break down silos of information across the end-to-end service lifecycle. That means expanding the notions behind Jazz from service design and development to now include service delivery and management. We call this Jazz for Service Management.
Take for example, problem management. In order to diagnose and resolve a given trouble ticket, the problem information must be gathered and aggregated from multiple sources. We may need information pertaining to the application topology, the health of a system within that topology, outages or events that may be affecting the application, the CPU utilization, the versions and configurations of the hardware and software that this application is dependent upon. I could go on...
The problem is that all of this information lives in different places. You can call around to the various owners of that information, pay a business partner to learn the API of the tool in order to get to the data, or have a highly skilled, in-house resource write the integration. These options require extensive expertise in vendor-specific APIs and lots of maintenance to keep them current.
OSLC utilizes community defined specifications for sharing and linking data applied to specific service management scenarios so that in a critical outage scenario, all relevant information relating to that outage can be accessed in real time from any number of sources, displayed in the context of that problem, in a single integrated view, with related actions that can be taken.
The difference is simplicity. You might be able to do this now with a lot of experts and time, but OSLC delivers simplicity.
And, most importantly, because OSLC uses community specifications for service management scenarios, integrations can be built once and applied across multiple 'related' OSLC-enabled tools. "Write-once, Apply-many."
For more information, listen to this podcast on the Tivoli User Community. This podcast provides a deeper insight into the next generation of service management built using linked data.
Also, at Pulse 2012 (video link), developerWorks' Scott Laningham is joined by Don Cronin, program director, Tivoli Technical Strategy and Architecture; and Mike Kaczmarski, IBM Fellow and Tivoli Chief Integration Architect to discuss the Magic of linked data.
Leave your comments on how you are using OSLC in your organization below and don't forget to follow us on Twitter @servicemgmt and be sure to bookmark our OSLC story on Storify.
Depending on where you're from, some people call it "autumn" and other people call it "fall."
Either way, it's when things in most offices start getting a bit hectic.
Back in the autumn of 1968, IBMers in Boca Raton were putting together 1130 computer systems for customers.
Here's a neat photo of them hard at work in the factory.
As you can see, once demand picks up in autumn, it doesn't slow down.
Even today, demands on services are high and they keep getting higher, which is why application performance monitoring (APM) becomes important.
It's why I'm pleased to let you know that Gartner identifies IBM as a leader in the 2012 Magic Quadrant for Application Performance Management (APM).*
The full report is available on the Gartner website.
Give it a read and let us know how you are using APM in your organization in the comments section below.
PS I recognize that the 1130 has nothing to do with this blog post. I just felt the need to post pictures of classic IBM hardware...
* Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose
The following article was written with significant contributions from Cameron Allen, Pierre Coyne and Beth Sarnie
Question of the day: why is IT agility so darn elusive?
Follow up question: after spending multiple millions in technology to improve service delivery, quality, and productivity, why do so many line of business executives perceive that IT is still not moving "fast enough?"
Silo'd information presents a big speedbump to agility. According to the 2012 IBM study of CEOs, high performing organizations are able to access data 108% more, draw insights from that data 110% more, and act on that data 86% more, than their underperforming peers.
Which brings us back to the specific problem: Information exists, but it is not shared. Information remains trapped in silo'd tools and departmental applications. It's not only not moving "fast enough," it's not moving at all.
If you agree with ITIL and related methodologies, agility is directly linked to your IT processes. So while we can improve process methodology and connections across roles and functions, and within specific technology silos with tools, if the data and resources cannot be freely shared across process-enabling tools, then it's all for naught.
Going one level deeper, what is the cause of this 'information black hole', where data enters tools and is never seen again? Your reality is that you probably rely on a mix of multi-vendor tools. Those vendor tools rely on proprietary APIs for integration, and trying to make tools with different APIs communicate requires the IT equivalent of a team of United Nations translators, where each is an expert in their application's main language (API). Once successful, the herculean effort can create a constant maintenance cost, and might not work well in the end - things will be lost in translation. That said, even single vendor tool suites are notoriously difficult to integrate.
So what can be done?
Stop for a moment and consider the best example that demonstrates simplicity of integration on a massive scale. It's the Internet. With the Internet, you can get information from millions of different web sites and all you need is a browser.
So for argument's sake, if tools are the equivalent of web sites, then all we need are links to connect two tools. We can take that one step further, borrowing principles from social networks like LinkedIn or IBM Connections, where we can search for one person, and see relationships to other people (making searching for data across tools much easier).
That in essence is OSLC (Open Services for Lifecycle Collaboration): A set of open, community agreed upon specifications for linking tools using web technology. (And before you ask, no. It's not a standard, because apparently standards alone have not done the job)
Data from any vendor tool is registered in a directory like a search engine, where other tools can find it, its relationship to other data, and access it via simple web link technology. Not similar to the Internet, but exactly like the Internet.
What that means is you can easily interconnect tools and processes. You can even replace tools with competitive tools - eliminating vendor lock-in. It also means you can re-purpose one integration across a series of 'like' tools. "Write once, reuse many" inherently applies here. All of this translates into simpler and faster access to information by people and tools, better analytics leading to better decisions, and better automation of workflow.
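To sketch how that might look in code – with hypothetical registry entries and endpoints, since this illustrates the principle rather than the OSLC specifications themselves – a tool registers the kinds of resources it serves, and any consumer finds and fetches data by type rather than by vendor API:

    import requests

    # Hypothetical registry mapping resource types to whichever tool serves
    # them. In a real deployment this would be a shared discovery service,
    # not a hard-coded dictionary.
    REGISTRY = {
        "incident":      "https://servicedesk.example.com/resources",
        "configuration": "https://cmdb.example.com/resources",
        "metric":        "https://monitoring.example.com/resources",
    }

    def lookup(resource_type, resource_id):
        """Find whichever tool owns this type of data and fetch the record.

        The caller names only the kind of thing it wants, never a vendor or
        a proprietary API, so the same integration keeps working if the tool
        behind a resource type is swapped for a competitor's product.
        """
        base = REGISTRY[resource_type]
        return requests.get(f"{base}/{resource_type}/{resource_id}", timeout=5).json()

    # Write once, reuse many: the same call works for any registered tool.
    # ticket = lookup("incident", "INC-4711")
    # server = lookup("configuration", "web-frontend-03")

Swapping the service desk for another vendor's product then means changing a registry entry rather than rewriting the integration – which is the point about eliminating lock-in made above.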
Now, IT will be seen as agile.
No longer elusive.
This is the first in a series of articles we will be posting about OSLC. Feel free to leave your comments below. Be sure to listen to the podcast we did for OSLC on the Tivoli User Group - TUC Podcast: OSLC Series - Learn how Tivoli’s enhanced architecture strategy will help you simplify integration across products – IBM and Other Vendors, and don't forget to follow us on Twitter @servicemgmt.
Also, stay tuned to the blog for more in our series of articles about OSLC.
PS a reminder that InterConnect 2012 in Singapore is coming October 9-11, and it's going to be an amazing conference. Tiffany Winman has a great post about it on the IBM Software Group blog.
One of the coolest things about working at IBM is the global nature of our company.
Which is why the announcement of the new IBM Research - Africa (press release) is so cool. Of note is their focus on Smarter Cities. Specifically:
Smarter Cities – with initial focus on water and transportation: Rates of urbanization in Africa are the highest in the world. The single biggest challenge facing African cities is improving access to and quality of city services such as water and transportation. IBM, in collaboration with government, industry and academia, plans to develop Intelligent Operation Centers for African cities – integrated command centers – that can encompass social and mobile computing, geo-spatial and visual analytics, and information models to enable smarter ways of managing city services. The initial focus will be on smarter water systems and traffic management solutions for the region.
I'm looking forward to seeing the work that our African IBM team is going to do in this space and can't wait to work with them on future projects.
Today we trust computers – literally and
unconsciously with our very lives. I was reflecting on this level of trust when
I got £50 of cash out from my local ATM and declined the offer of a receipt.
Seems I now have total faith the computer systems will ‘get it right’. I’ve
come a long way from keeping all my own cheque books to cross check against
later bank statements.
Now, combining that faith with a little
healthy British cynicism, and triggered by watching the Olympics tennis finals on
TV, a mischievous but irresistible thought came to my mind.
It used to be that when a ball hit the
ground near the line we relied on the human eye to say whether it was ‘in’ or
‘out’. That caused disagreements and discussion – and, in tennis, often
sulking, swearing and the full range of petulant behaviour.
Nowadays that is all replaced by
referencing the technology. When there is doubt – or one of the players
questions a call - then we simply ask the computers. What we get then is a neat
little picture representing the appropriate lines on the court and a blob
showing where the ball had hit. So, problem solved: disappointment still for
one player but, so it seems, total acceptance that the computer is right. After
all, it is an expensive system working away inside a very expensive box – it must
be right, mustn’t it? Or to put it another way, ‘computer says in’ – who would argue with that?
But what occurred to me is this. All we can
actually see is some boxes around the court, and a stylised display with a blob
on it. That could be delivered by one person with a tablet showing the court
lines and them touching the screen where they think it landed. Very cheap and
still solves all the arguments because – naturally – everyone trusts technology.
Now – of course, and before anyone calls
their lawyers – I am not suggesting for the merest moment that there is the
slightest possibility of such a thing happening. But it’s fun to think it might
be possible. There is little public awareness of what accuracy the system – and
here I presume it does really exist – works to. If you dig around on the web
you can find out (the answer by the way for tennis is 3.6mm). You also find out
there is some very minor grumbling and questioning going on. But that seems to stay at
geek level – in everyday use the audience stands instantly convinced.
So, thinking it through there are a couple
of interesting consequences to real IT life:
- Once you realise that trust depends on quality of presentation
at least as much as on accuracy, should you focus more on that? Certainly
you have to take presentation seriously, because the corollary is that if you
deliver perfection but don’t make it look good, then no-one will believe
it even though you are right.
- Whose responsibility is it to check – and is it even possible? I
suspect this discussion will take us into the territory of ‘governance’. But
even before we get there it implies that User Acceptance Testing needs to
do more than look at things. Of course yours does, doesn’t it?
I guess my big issue is to wonder how
comfortable we are – as the deliverers of technological solutions – with our
customers, and especially our users, having such blind faith. Of course,
people being the irrational things they undoubtedly are, that blind faith in
the detail is often accompanied by a cynical disregard for overall competence –
think of the faith we place in ATMs and on-line bank account figures alongside the apparent level of
trust in the banking industry as a whole.
As a little codicil to the story, I registered
with a new doctor yesterday – the nurse asked me questions, took my blood pressure
etc. and loaded all the data she collected into a computer. The system was
clearly ancient, with a display resembling what you typically got on a DOS 3.0
system. First thought: ‘OMG, why are they using such old software? That can’t be
good.’ Second thought: ‘They’ve obviously been using it for years, so they
really understand it, have ironed out all the bugs and it does what they need. It
ain’t broke so they aren’t fixing it’. But my instinctive reaction of suspicion
of it for not being pretty was there and I had to consciously correct myself.
Would you as a service provider prefer more
questioning of what you package up and present to your customers and users, or
are you happy to have that faith? My own view is that the more blind faith they
have in you, the more the retribution will hurt if things do go wrong. Or
perhaps that’s just me being cynical again?
I’ve had a recent burst of situations where things just seem to be difficult for no obvious reason, and maybe that has made me even more cynical than usual - yes, it is, just about, possible. My first assumption – of course – is that these are yet more examples of bad service management. Each is one more case of services not being matched to customer requirements, but then maybe a sneaking suspicion creeps in: are they really deliberately designed to deliver what the real customer wants, rather than what the apparent one (or user, as ITIL might call them) wants?
Of course we have all experienced this to some extent: the complaints department that is very hard to contact, with a premium rate phone number and an interminable set of IVR choices before you can get anywhere near a real person – all costing you £1.75 a minute to listen to. Typically we give up in disgust just after we have spent more on the phone call than we spent on the product we are trying to complain about. While the first thought is that the supplier hasn’t thought through how they need to be contactable, second thought makes you realise that they don't want people being able to complain easily. And if you have an angry customer who is unlikely to buy more from you, then you might as well make what money you can out of them calling you to complain and tell you they won’t buy any more. So maybe this is actually clever design – to meet the primary customer’s requirement?
Sometimes you just aren’t sure – I was also watching someone applying for a visa – for a well known country in North America. It reminded me very much of the classic customer complaints system I just outlined. Rather confusing instructions, no web-based option to book an appointment – only telephone at £1.23 per minute (plus ‘network extras’, whatever they might be), and then, surprise, surprise, a computerised voice – talking slowly – offers you some options. Appointments are issued, it seems, ‘en bloc’, and you are warned you must queue outside, whatever the weather. Oh, and no mobile phones or any other electrical items can be taken into the building, and, no, there is no facility to leave them anywhere safe while you go in.
So, is this bad service build, or is it carefully designed to reduce the number of applicants? After all, the people who need visas are – by and large – from less affluent countries, and won’t spend that much when they get there. Could be the whole service was carefully designed to discourage.
Now I suspect the real truth is a perfectly justifiable need for security and a sensible imperative to reduce costs. But it does perhaps make you realise that it is oh so easy to get sidetracked and judge things only by what are actually the second level measures and deliverables, rather than being sure we tie everything back to our organisation’s overall visions and objectives.
It is not always as easy as it sounds – especially in large companies where day-to-day operations can be a long way from corporate targets. For example, focusing on selling widgets that work, continue to work and get fixed quickly should they fail means that you probably just focus on ensuring your direct customers are happy widgetters. Yet if the profit margin on widgets is low, the market difficult and competitive and your widgets do tend to break more often than other manufacturers’…well then the best contribution to your corporate objective of maximising shareholder return is, quite correctly, to get out of supplying widgets altogether. Even if that means abandoning your long time faithful widget customers, well, if you have got your overall prime objective right, then abandoning them is right for the company.
We see the same thing with internal services: is that travel booking service there to make it easier for you to spend the company’s money on travel, or is it there to make sure you only go through with it if you really need to go? If reducing costs is what the owners of that service want, then ease of use is a bad thing.
Secretly though, I suspect a lot of bad service really is just that. But – it can be a fun game to play next time you get bad service. Is it really bad, or is it targeted to drive you away because that’s what they want? Is it hard to buy something because of incompetence or because the profit margin is too low?
Next time you get awful service, maybe it is worth congratulating the service provider about their commitment to higher objectives, maybe even ask them if they would be so kind as to tell you the corporate objectives they are rigorously pursuing; so you can write to their CEO and congratulate them too on how well their staff strives to reduce unhelpful customer satisfaction. Or then again, they may not be so pleased to hear from you after all, and just leave you with an expensive IVR system to listen to.
Over 51 million tourists travel to Orlando, Florida every year, but only the cool ones go to attend IBM Edge and IBM Innovate.
As I type this, so many of our customers, partners and my colleagues are in the "brutal" 88°F* weather learning more about storage and software & system innovation.
Since much of my focus is around product announcements, I wanted to point folks to the IBM Tivoli Storage Productivity Center V5.1 announcement that happened yesterday (Announcement Letter 212-189).
For content coming from the conference, a number of the marketing team are on the ground at Edge and tweeting. Be sure to follow Maria, Martha and Branavan (and of course, @ibmstorage) as well as the hashtag #ibmedge.
The Rational team have a number of exciting new announcements around Jazz and they will be talking quite a bit about mobile, cloud, industry solutions and a few other things including DevOps.
For us service management folks, DevOps translates into tangible benefits we can bring back to the business; like fewer errors and faster time to resolving errors if they do occur.
Back at Pulse 2012, we announced, among other things, the Beta for IBM SmartCloud Continuous Delivery (see the blog post and press release).
Of course, at Innovate there's a lot more to talk about with DevOps. Including the announcement from last week of IBM SmartCloud Application Performance Management 7.5 (Announcement Letter 212-143).
Along with IBM SmartCloud Control Desk and IBM SmartCloud Provisioning Manager (among others), it's about developers and testers having access to the same tools, data and information that operations uses and leveraging them to fix problems before they occur. And if problems do occur, the linkages with tools like Rational Application Developer and Rational Performance Tester allow the developers and testers to quickly resolve these issues as everyone and everything is connected.
As stated before, fewer errors and faster time to resolving errors if they do occur. This translates into using time to be productive and being innovative. Innovation is what provides value back to the business.
There's a good press release from yesterday, "IBM Expands Collaborative Software Development Solutions to Cloud, Mobile Technologies" that highlights some of the integrations and new solutions (including Application Performance Management).
The conference is being livestreamed (with video available right on the IBM Innovate home page) and be sure to follow the discussion on Twitter using the hashtag #ibminnovate and be sure to read the Invisible Thread blog for updates on what's happening on the floor.
* 88°F = 31°C.
For most of last week I was attending and –
I hope – contributing to itSMF’s international publishing meeting. This was
held in Warsaw in beautiful spring weather, while home
was being blasted by wind and rain. That was nice, but nowhere near the most
important or most pleasurable thing that the week had to offer.
Now, first a little background, just in case
there is anyone who does not know what the itSMF is. The letters stand for IT
Service Management Forum – and that sums it up quite well: a place for those
interested in ITSM to talk, learn, teach, compare and discuss. Part of that communication
naturally involves publication – and our group focuses on that – from reviewing
others’ books through translation and dissemination to encouraging authoring
and publishing books. Crucial to its attitudes and success, itSMF is a
non-profit organisation, owned by its members.
OK, as you may imagine it is – as well as a serious
working meeting – a chance to catch up with friends and colleagues of the ITSM
global village. And the active ITSM community really is like a village, except
that it is spread across some 50 countries – we have all the relationships that
you would expect: friends, enemies and lots in between.
All of us have our day jobs, many of us
working for cut-throat competitors but that all gets set aside and we settle
back into our ‘all in this together’ mode. One of the things that I came back from Warsaw thinking about was
that different set of attitudes we have while focused on itSMF business. Some
of that rests upon the different nature of not-for-profit organisations – at least
compared to the more usual shareholder-owned companies. It is hard sometimes
to make the switch, but it is a good lesson for anyone in the service management business
to realise the differences that do exist. Probably the best description I know
is this one: ‘Commercial companies need to do things in order to make money;
not-for-profit organisations need to make money in order to do things’.
That makes the non-profit member owned
organisations a lot like government – and like governments today we are strapped
for cash. These are hard times and no-one has much in the way of spare money.
But we still strive to fight against what would be a sensible approach for an
organisation focused on shareholder value. We still need to deliver the ‘right
things’. From our publishing perspective it would be tempting to look only at
safe books – rearranging established best practice into easier, shorter or
simpler reads. Instead though, everyone at our meeting sees that we need a
focus on innovation and stretching our industry.
Of course we need to be financially successful
with enough of our projects, and we have work to do on building a firm base to
take ourselves – and our industry – forwards. But I am proud that the books we
have already managed to publish contain real industry innovations and new
perspectives – both on service management as you would expect but also into wider
topics such as organisational change.
So, I came back feeling the need to write
down how much work people put in – for nothing – last week. I’m not claiming I did
that much, but lots of work was put in, and even more commitments made to keep
the momentum going, and I felt that it was a few days’ work I was proud to have
been a part of and an effort worth recording
here. In some later blogs I might relate more about other aspects of the trip - like using budget airlines and the change in perspective of value that goes with that.
So – please go read about what we have
already managed (6 books published, quarterly magazine, whitepaper competition
etc.). You can find out what the books are – and read the magazines for free –
by going to http://www.itsmfi.org/content/publications.
If that gets you interested in how you can get your ideas written up and out
there then get in touch. My portfolio responsibility is ‘Authoring’, so I would
love to hear from you. We are keen to find new authors, for whitepapers, books
or articles – and happy to offer any level of support you might need – from
final review through mentoring and even to co-authoring or ghost writing.
By my next blog, I will be back in successful
company mode, but it is good to remember that the commercial companies also
live in and benefit from the wider community. It is good to see that being
recognised through sponsorship and support. IBM sponsored the meeting last year – this time we had support from TSO and BTC. Massive thanks to those companies. With more support next year we should have more people and achieve even more.
No trouble spotting the biggest news in
service management this week – with COBIT 5 available. I guess with both ITIL
and COBIT having released new versions over the last 12 months, that should
tell us something about the SM industry. Mostly, I think it tells us that as a
concept and topic to take seriously, service management is not going away any time soon.
But I suspect we might be reading more in the
next few weeks of the ‘should I do ITIL or COBIT’ type of question. That’s a
shame, because it is still not a sensible question. Both ITIL and COBIT are
expanding their scope of course and that means more and more overlap, but I
can’t – admittedly after a quick glance through only – see where any real conflict lies.
Of course COBIT is still a product of ISACA
and it builds upon a philosophy of control and governance. ITIL initially came from
a team set up to advise on approach rather than massive detail and that still
shows even in the 2011 version I think. And I do still believe any serious SM
professional would have both on their (electronic) bookshelf, the way a good cook
will have books by more than one cookery author on their kitchen bookshelf.
Analysing the content, requirements and
fine print can come later – and will open us up to all sorts of interpretation
and contextual adjustment. But some things hit you straight away. The core
COBIT product is available for free and takes up 685k of pdf file. The core
ITIL books cost around £300, weigh five kilos and/or take up 77.4MB of my hard
drive inside a fancy secure Adobe reader to make sure I don't pass them on to anyone
who hasn’t paid their £300. Now I know that there are lots more books around
the COBIT 5 core that give you more detail – and ISACA charges for those – but
still I must confess to liking the idea of free entry to the gig even if it
doesn’t get you that near the stage.
Putting a positive spin on the size
differential and the lack of real conflict, you can see how the
two products are complementary: COBIT’s distillation and structuring of what should
be done alongside ITIL’s wordier guidance.
And COBIT’s heritage shows through with several
pages on maturity assessment – great stuff for the ‘give me a number’ crew.
But maybe the most encouraging thing is the
differences that exist – the pretty clear realisation that frameworks aren’t competition
but different perspectives. Everyone in this business is really concentrating
on helping each other get better at delivering value to the customer. COBIT 5
will help so this is a good week.
Now all I need is a long flight somewhere to
give me peace and quiet to read it carefully.