Illumination tapped IBM and its Paris-based Business Partner Serviware to build a server farm based on IBM's iDataPlex system. With this system's efficient design and flexible configuration, the company was able to meet the intense computing requirements for the film and save room by doubling the number of systems that can run in a single IBM rack. The entire space used to house the data center amounted to four parking spots in the garage of the production facility, about half of what had initially been allotted. The studio's iDataPlex solution included IBM's innovative Rear Door Heat eXchanger, a water-cooled door that allows the system to run with no air conditioning required, saving up to 40% of the power used in typical server configurations. Overall, the installation included 6,500 processor cores.
Chris Meledandri, Producer of "Despicable Me" and founder of Illumination Entertainment:
"Thanks to the capacity of IBM's rendering technology and the skills of our artists, we were able to bring our creative vision to life through the completion of a wonderfully entertaining film and build the foundation for a large pipeline of projects in development."
Steve Canepa, General Manager, IBM Media & Entertainment industry:
"IBM is delighted to work with Illumination Entertainment on this exciting project to advance digital film-making production," said "The combination of our film industry expertise and powerful, flexible and cost-effective technology solutions is helping to accelerate the adoption of new digital technologies like 3-D into the creative process of film making."
Illumination Entertainment's film "Despicable Me" is being released by Universal Studios on July 9.
Visit "Despicable Me" on Facebook at http://www.facebook.com/DespicableMe.
For more information on IBM, visit www.ibm.com/media
Yes. I used an exclamation point. Because this is that exciting! (there it is again)
The zEnterprise is, as we call it, a “smarter system.” It’s fast. It’s scalable. It’s efficient. It’s reliable. It’s secure. Most important, it’s highly manageable.
With that, IBM Service Management on System z is a single service management engine to give you the visibility, control, and automation needed to deliver quality services, manage risk and compliance, and accelerate business growth.
Together they will assist our customers in innovating their business; and that’s what it’s all about.
The road to a Smarter Planet is going to take systems and software that can be used to create a Smarter Data Center. It's worth your time to read more about it. There’s a ton of press coverage (point your favorite search engine at “zEnterprise” and it’s dealer’s choice on articles). Twitter is already trending with #zEnterprise from analysts, IBMers and customers. And, I’ve also put some ibm.com links below.
That said, in honor of the new announcement I give you a tribute to an old Jeff Foxworthy bit and a little something we like to call “You might be a not so Smarter Data Center.” (and feel free to add yours to the comments section).
If your data center has its own postal code, you might be a not so smart data center.
If your LOB signs their SLA with “no backsies,” you might be a not so smart data center.
If you count the number of forests it takes to print your server inventory, you might be a not so smart data center.
If your energy usage ever won you a free lunch from your power company, you might be a not so smart data center.
If your service management is done with a forklift, you might be a not so smart data center.
If scalability means renting more buildings, you might be a not so smart data center.
If your problem management is done with a game of pin the tail on the donkey, you might be a not so smart data center.
If your data center security is a bicycle lock and a hide-a-key, you might be a not so smart data center.
If your downtime is measured with a calendar, you might be a not so smart data center.
The following article was written by Cameron Allen, Pierre Coyne and Beth Sarnie and is the second in our OSLC series.
In non-acronym speak, what I'm saying is that the future of service management has arrived in the form of Open Services for Lifecycle Collaboration.
But, what is OSLC and what does it have to do with you?
If you are a user of service management tools of any kind, or rely on information from tools to do your job, then you probably know that finding the right information is half the battle, and getting real-time access to that information when it is not under your direct control can feel next to impossible.
OSLC means you can now leverage the simplicity and ease of web links to both find and share information across your management tools (be they IBM, or any vendor tools).
Just as web pages can be linked on the Internet, data can be linked together from one application to another – creating an application ecosystem where applications don't care what vendor they're from. They look up who has the data in a directory, and jump right to it.
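To make that concrete, here is a minimal sketch of the linking idea in Python (not an official OSLC client): one tool's record simply carries a web link to a resource owned by another tool, and we dereference it over HTTP. The URLs, field names and JSON shape are hypothetical, and it assumes the linked tool can serve a JSON representation.

import json
import urllib.request

# A hypothetical trouble ticket in one tool that links to an event record
# owned by a completely different vendor's monitoring tool.
ticket = {
    "id": "INC-1042",
    "summary": "Checkout service latency spike",
    "relatedEvent": "https://monitoring.example.com/oslc/events/7781",
}

def fetch_linked_resource(url):
    """Dereference a link to another tool's data, asking for JSON."""
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# The consuming tool never needs the other vendor's proprietary API - just the link.
event = fetch_linked_resource(ticket["relatedEvent"])
print(event.get("title"), event.get("severity"))

The point of the sketch is that the consumer only needs the link and standard web plumbing, not each vendor's API.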
OSLC is not something new, and Tivoli is not the first to adopt it for integration. If you're an IBM Rational user, you may already be a believer. IBM Rational, its users, and an extensive ecosystem of partners have been using OSLC to successfully interconnect the application lifecycle for years.
In fact, Rational Jazz is the realization of OSLC community specifications and shared services in an open platform that anyone can use to interconnect the application lifecycle. Rational just delivered the fourth incarnation of its integrated product offering, Collaborative Lifecycle Management, based on Jazz.
Tivoli is now leveraging these same principles to help break down silos of information across the end-to-end service lifecycle. That means expanding the notions behind Jazz from service design and development to now include service delivery and management. We call this Jazz for Service Management.
Take, for example, problem management. In order to diagnose and resolve a given trouble ticket, the problem information must be gathered and aggregated from multiple sources. We may need information pertaining to the application topology, the health of a system within that topology, outages or events that may be affecting the application, the CPU utilization, and the versions and configurations of the hardware and software that this application depends upon. I could go on...
The problem is that all of this information lives in different places. You can either call around to the various owners of that information, or you pay a business partner to learn the API of the tool in order to get to the data, or you can have a highly skilled, in-house resource write the integration. These options require extensive expertise in vendor-specific APIs and lots of maintenance to keep them current.
OSLC applies community-defined specifications for sharing and linking data to specific service management scenarios, so that in a critical outage all relevant information relating to that outage can be accessed in real time from any number of sources, displayed in the context of that problem, in a single integrated view, with related actions that can be taken.
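As a rough illustration of that outage scenario, the sketch below follows several links held on a trouble ticket and assembles the results into one view. Every URL and field name here is a made-up placeholder, and it assumes each linked tool can return a JSON representation.

import json
import urllib.request

# Hypothetical links from a single trouble ticket out to the tools that own
# the topology, event, metric and configuration data for the affected service.
outage_links = {
    "topology": "https://cmdb.example.com/oslc/services/checkout",
    "recent_events": "https://monitoring.example.com/oslc/events?service=checkout",
    "cpu_metrics": "https://metrics.example.com/oslc/hosts/web01/cpu",
    "configuration": "https://config.example.com/oslc/hosts/web01",
}

def fetch(url):
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# One integrated view of the problem, gathered in real time from many sources.
problem_view = {name: fetch(url) for name, url in outage_links.items()}
for name, data in problem_view.items():
    print(name, "->", data)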
The difference is simplicity. You might be able to do this now with a lot of experts and time, but OSLC delivers simplicity.
And, most importantly, because OSLC uses community specifications for service management scenarios, integrations can be built once and applied across multiple 'related' OSLC-enabled tools. "Write-once, Apply-many."
For more information, listen to this podcast on the Tivoli User Community. This podcast provides a deeper insight into the next generation of service management built using linked data.
Also, at Pulse 2012 (video link), developerWorks' Scott Laningham is joined by Don Cronin, program director, Tivoli Technical Strategy and Architecture; and Mike Kaczmarski, IBM Fellow and Tivoli Chief Integration Architect to discuss the Magic of linked data.
Leave your comments on how you are using OSLC in your organization below and don't forget to follow us on Twitter @servicemgmt and be sure to bookmark our OSLC story on Storify.
I’ve done a few talks to camera recently –
interviews at the itSMF Spain
conference and a mock programme at the itSMF UK one. The UK team thought I was perfect for ‘Antiques Roadshow’ and I have to admit I fit the title’s parameters. I watched
the people using modern video equipment and it did make me feel old. Nearly 40
years ago I was editor of the student TV society at University and I was
recalling how many of us it took to deliver 30 minutes’ worth of black & white programme onto 2-inch-wide reel-to-reel video tape. It seems all but
unbelievable watching the kids now (the age I was then) record it in perfectly balanced
colour on something the size of a small book – when our kit weighed more than
the library. But the whole situation is another example of getting focused on
the changes and missing what stays the same.
While the television technology has changed
beyond recognition, the basics of interviewing haven’t. So hopefully I helped by trying to follow
those basic rules for an interviewee – ignore the camera, keep talking, try to
say something interesting. You can judge for yourself at http://www.best-management-practice.tv/best-management-practice-at-the-itsmf-uk-conference-2010.
(Actually if you are sad enough to be interested in the earlier ITIL days, I
shall be writing an article on that next year.)
So, this TV stuff is like most services
these days – the technology bit keeps changing, using new ideas – basically
becoming far more complex to understand whilst at the same time becoming ever
easier to use. That means customer expectations keep increasing (you don’t find
many people content with black & white TV any more) but at the real core,
the prime deliverables remain the same. We might talk more and more about
plasma vs LCD, 3D, surround sound, HD and all the rest; but the real
satisfaction comes from watching people be clever, funny, informative etc in a
way that holds our attention and entertains us.
And there is the heart of most of what I
have been talking about at conferences for the past few years. It is easy to
measure things like pixels and screen size and the number of channels and hours
of programming available, but so much harder to measure what we actually want from
a TV service.
Keeping that old television link, last week
was the 30th anniversary of John Lennon’s murder: a sad time for anyone
of my age and background. So I found myself watching old clips of Lennon on a
programme recalling his life. Now the man was clearly an extremist with
impossible dreams – and I may well return to my belief that we need some
extremists to make the majority move at all, but that’s another blog. One of
his lines, though, did trigger the realisation that this need for real
measurement isn’t a new idea. He was ranting about governments (as usual) and
said “If anybody can put on paper what our government, and the American
government etc., and the Russian, Chinese, what they are actually trying to do,
you know, and what they think they're doing, I'd be very pleased to know what
they think they're doing”. Now he followed that with “I think they're all
insane!” which perhaps is more about presumed results than objective
measurement, but nonetheless the basic concept is interesting.
We want to know what is at the heart of
our and others’ behaviour but it is very difficult to express that. It is hard
even to ask sometimes in a way that doesn’t sound as if you have failed to pick
up the social or business norms; because often we just presume there is a
reason and take the usual comfort in things ‘that have always been done like
that’. Maybe it is just easier to hide behind the numbers and the detail of how
you are doing things rather than making it all that clear what it is you are
trying to do, why you are doing it or even who you think you are doing it for.
One last seasonal example maybe, since it
is mid-December as I write this. Many of us will get back to work in January to
be greeted by the question ‘Did you have a good Christmas?’ For those who did,
you will know without recourse to precise measurements – it isn’t based on the
number of presents you received, how many carols you sang or how much turkey
you ate. Unless the biggest fun you have is skiing, it probably won’t have
mattered that much if it snowed. But if you had a good Christmas then you will
know – but my, isn’t it hard to set genuinely accurate measures beforehand?
And what can we learn from that, or at
least set out to do better? Maybe if we are buying or delivering any kind of
service we should at least try to be aware of – if not the ultimate – then at
least a higher-level goal. And don’t be surprised or disappointed if your expensive new TV doesn’t affect the entertainment value, although it will help
you see the ball better in the cricket, and that might be an important factor.
And at work, a new finance package won’t make your profit margins higher – but
it might tell you faster what they are, and perhaps that makes an important
difference. Just be sure that’s important enough for what it is costing you,
and that you know the knock-on effect onto the higher level measure.
The Integrated Service Management demos at Innovate, that is. Evidently, integrating development and operations is a heck of a lot easier than you would think. Even simpler are the reasons why you would want to do that. A unified strategy focused on visibility, control, and automation can help you minimize the cost and risk of delivering the next generation of smarter products and services, so you can take charge of your software design and delivery lifecycle.
If you're currently at Innovate, look in the Expo for the signs marked "Integrated Service Management" or come find me in the Client Connection Lounge and I'll walk over and introduce you to the great guys hosting the Integrated Service Management demos.
Last week, IBM announced an enhancement to our cloud portfolio that will deliver CloudBurst on POWER7-based hardware, as well as offering it as software that can run on currently installed IBM and non-IBM systems.
With CloudBurst, IBM is tying together the hardware, storage, networking, virtualization, and service management as an all-in-one package for enterprises to build a private cloud. This is significant because it removes the arduous manual processes that in-house IT departments often face when configuring and managing their cloud systems.
In the press release, IBM states that it estimates CloudBurst's automated configuring capabilities "can cut IT staff's labor in integrating systems, provisioning and managing storage up to 95 percent." That seems pretty impressive, when you consider that if I could cut 95% off of my work week, I'd be logged on for a total of two and a half hours.
At the heart of this new offering is the IBM Service Delivery Manager, a stand-alone integrated service management software bundle which automates the deployment, monitoring and management of a cloud solution on IBM or non-IBM hardware.
If you'd like to delve into the details behind these solutions, and understand how to decrease your costs and increase your efficiency with CloudBurst, you can contact your IBM sales rep and/or Business Partner (Business Partner Locator Site).
There have been a lot of good discussions
on Back2ITSM recently. I find the site a wonderful reminder of the two
universal constant truths: ‘everything changes’ and ‘there is nothing new under the sun’. They might seem contradictory at first, yet the older I get the more
both seem true.
Firstly, if you aren’t looking at the
Back2ITSM group on facebook then you are missing out - go sign up, now! Let me explain what it is and how it is brand new and full of ITSM tradition at the same time.
Secondly, it is about people talking with
each other. That’s the bit that is the same as it’s always been. The
willingness to share ideas, help others – even those in competing organisations
– is just exactly like many itSMF regional meetings I have been to, in the UK, Canada and New Zealand; except that now we are in all three at the same time.
Of course, social media isn’t new, and
facebook is not the newest kid in town. But what is 21st century
about this kind of group are the immediacy of comment and dialogue and the wide
spectrum of simultaneous participants it allows. Since it has active members
from all across the world, there is constant input and comment.
OK, so we all know that the technology for this has been around a while. After all, it is ‘just’ about real-time input
to a forum – and we now have about 20 or 30 people across the world presenting
their opinions to an audience of 500+ (lurking is positively encouraged). For
me what is important is precisely that I am not aware of the clever technology
or feel all the time that I am using a novel means of communicating or even
just how damned clever the whole thing is. With this group I have reached stage
three in my own ‘using technology’ scale: comfort and taking for granted.
Stage 1 is when
you are using some new way of doing things just because you can. This isn’t
just about IT of course, many of us may recall how such things have affected
our choice of travel (my
example is choosing an airline because they had A380s
on the route, and even if a bit dearer I had never been on one of them before).
Stage 2 is when
the means are no longer overwhelming the ends – you’re using it now because it is
logical to do so, and it is delivering value. But, you are still very aware of
how cool it is. And you probably keep telling other people how cool it is too.
Stage 3 is when
your focus is totally on what you are doing. I can now just read what is written, comment
if I have something to say. You know it’s a normal conversation because it goes
off at tangents, people get flippant, say daft things, agree, argue, make
subtle (and sometimes not so subtle) digs at each other and launch jokes that no-one
else notices. In short, it’s normal human conversation, without thinking about
how you are achieving it nor where all the people are, or what time it is there.
And to me this is a good motif for
successful technology. The implementation isn’t properly over just because the technology is there and running. Real success is when people don’t notice it any more,
but just get on with using it, unconsciously – as part of their everyday lives.
It’s one more example of how success is
about being invisible. First time I flew in an A380 I was excited about it –
last time I was watching a movie before we reached the runway. That’s success.
(Ok, so there was a little re-attention on the technology after the Qantas A380
had an engine explode but I am back to ignoring it again now.)
So the important lesson and message that I
see is how we need a customer perspective on the introduction of new
technology. And maybe what you actually want is people to stop telling you how
impressed they are, because then they are getting on with using it, which was,
after all, the real point of the exercise, wasn’t it?
The following article was written with significant contributions from Cameron Allen, Pierre Coyne and Beth Sarnie.
Question of the day: why is IT agility so darn elusive?
Follow up question: after spending multiple millions in technology to improve service delivery, quality, and productivity, why do so many line of business executives perceive that IT is still not moving "fast enough?"
Siloed information presents a big speed bump to agility. According to the 2012 IBM study of CEOs, high-performing organizations are able to access data 108% more, draw insights from that data 110% more, and act on that data 86% more than their underperforming peers.
Which brings us back to the specific problem: information exists, but it is not shared. It remains trapped in siloed tools and departmental applications. It's not only not moving "fast enough," it's not moving at all.
If you agree with ITIL and related methodologies, agility is directly linked to your IT processes. So while we can improve process methodology and connections across roles and functions, and within specific technology silos with tools, if the data and resources cannot be freely shared across process-enabling tools, then it's all for naught.
Going one level deeper, what is the cause of this 'information black hole', where data enters tools and is never seen again? Your reality is that you probably rely on a mix of multi-vendor tools. Those vendor tools rely on proprietary APIs for integration, and trying to make tools with different APIs communicate requires the IT equivalent of a team of United Nations translators, where each is an expert in their application's main language (API). Even when successful, the herculean effort creates a constant maintenance cost, and might not work well in the end - things will be lost in translation. That said, even single-vendor tool suites are notoriously difficult to integrate.
So what can be done?
Stop for a moment and consider the best example that demonstrates simplicity of integration on a massive scale. It's the Internet. With the Internet, you can get information from millions of different web sites and all you need is a browser.
So for argument's sake, if tools are the equivalent of web sites, then all we need are links to connect two tools. We can take that one step further, borrowing principles from social networks like LinkedIn or IBM Connections, where we can search for one person, and see relationships to other people (making searching for data across tools much easier).
That in essence is OSLC (Open Services for Lifecycle Collaboration): A set of open, community agreed upon specifications for linking tools using web technology. (And before you ask, no. It's not a standard, because apparently standards alone have not done the job)
Data from any vendor tool is registered in a directory like a search engine, where other tools can find it, its relationship to other data, and access it via simple web link technology. Not similar to the Internet, but exactly like the Internet.
What that means is you can easily interconnect tools and processes. You can even replace tools with competitive tools - eliminating vendor lock in. It also means you can re-purpose one integration across a series of 'like' tools. "Write once, reuse-many" inherently applies here. All of this translates into simpler and faster access to information by people and tools, better analytics leading to better decisions, and better automation of workflow.
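To illustrate the 'directory' idea in a little more detail, here is a small sketch of looking up which registered tool owns a given kind of data and then following the returned link. The registry endpoint, query parameter and response shape are hypothetical illustrations of the pattern, not any specific product's API.

import json
import urllib.request
from urllib.parse import urlencode

# Hypothetical registry where every tool advertises the data it serves.
REGISTRY = "https://registry.example.com/oslc/catalog"

def find_provider(resource_type):
    """Ask the registry which tool serves a resource type; return its URL."""
    url = REGISTRY + "?" + urlencode({"type": resource_type})
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        catalog = json.loads(resp.read().decode("utf-8"))
    return catalog["providers"][0]["url"]  # first tool offering that data

# Swap one vendor's change-management tool for a competitor's: as long as the
# new tool registers itself, this lookup and the link-following code stay the same.
change_tool_url = find_provider("ChangeRequest")
print("Change requests live at:", change_tool_url)

That reuse of the same lookup and link-following code across 'like' tools is what the write-once, reuse-many claim amounts to in practice.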
I recently had some first-hand experience – from the receiving end – of how much of an effect genuinely good customer service can have. The experience started in dismay but was recovered well beyond my expectations.
Anyway, to start at the beginning ….
I had to go and ‘swear an affidavit’ –
which for those of you not into the jargon of jurisprudence means to formally
promise what you are saying on a form is true. In England you can either pay a
solicitor for this service, or you can get it for free at the county court. So,
of course, I went off to the County Court.
Now, it started, I admit, with me failing in my responsibility to be a proactive customer. I did not think
through what I knew. County Courts in England are where the most
serious crimes are tried, so it is where the most dangerous criminals would be.
A moment’s thought, therefore, would make it obvious that there will be fairly
impressive security. But of course I was just thinking about delivering a form
so the metal scanner and request to empty my pockets took me by surprise. And
my producing my Swiss Army penknife from my pocket sent the security man into
action. The knife was confiscated – suggestions that I wasn’t even in the
building yet and could just go back, leave offending items in the car and start
again, were not allowed to be considered. I was told that I could not get my
knife back when I left but instead I needed to write in to the court manager
asking for it to be returned by post.
So, I had a perfect example of a ‘Moment of
Truth’; putting me instantly, and very extremely, ‘anti’ the staff and the
processes. It seemed obvious that the staff were required to leave common sense at home and not bring it to work with them.
And thus, in a bad mood I reached the court
officer with whom I was to sign and swear that my forms told the truth. She
spots my mood, finds out why and explains that the rules are for protection and
cannot be altered – causing no improvement in my mood. She then looks at my
forms and points out that I have not brought all the right documents – and then throws in for good measure that my solicitor has supplied me with the wrong set of forms.
So … it is now clear to me that I have
driven into town, paid for my car parking, lost my knife for the duration and
all for nothing because my paperwork is wrong. But fear not – after this it
gets better. I had been expecting a businesslike word or two of sympathy and if
I allowed myself a glimmer of optimism then maybe even an explanation of what I
needed to go back and fetch, so that it would work when I came back.
Instead the lady reacted very differently.
She pointed out that the forms I have forgotten are copies of documents they
already have lodged with them, and that they have blank forms of the right
kind. She fetches the missing forms, lends me a pen and helps me understand
what is needed on the right form, checks it through, makes corrections and then
duly witnesses it and formally logs it in the system as sworn and correct. As she
put it “Well the purpose is to get your stuff recorded, if I can make that
happen then why wouldn’t I help?”
Of course she was perfectly right, her job
is to help get these things done, and so thinking for herself and helping
people get there is an obviously correct attitude. Isn’t that exactly how
everyone in service delivery sees it?
Well, of course we all know that it isn’t –
not yet! The sad aspect of this kind of story
is how surprised we all are by them – that they are worthy of repeating because this quality of service is still unusual.
The key aspect of this story – with its two
different approaches to dealing with the customers - is how much good service
experience depends on customer-facing staff who are knowledgeable about the
customer’s context and goals. But more than that even, the management trusted
and empowered (at least some of) their staff to use common sense and do what
was right – maybe even if it didn’t follow exact procedures.
Are the customer-facing staff in your
organisation trusted and empowered? If not, is it because they can’t be trusted, or because they haven’t been given the knowledge? Or is it just that
no-one has ever thought it would be a good idea to trust and empower them? What
happens in your organisation – do you get good service, or do you get a strict process delivered, whether or not it is appropriate?
How would you feel, as a manager in your company’s IT department, if the marketing people specified, commissioned and
developed an IT application for their needs?
I was driven to ask this question by several ‘customer surveys’ that I have seen come out of IT departments. An extract from my very favourite is shown here; while it demonstrates admirable self-confidence, it is perhaps not the perfect basis for objective assessment.
It just seems strange to me that an
industry built entirely upon providing specialist expertise to allow others to
deliver their jobs doesn't always feel the need to get specialist advice itself.
Now, personally, I do believe I know at
least as much about building, delivering and analysing surveys as I do about
technology application. But that is mostly because I know so little about
technology. In both situations I would always welcome expert advice if I need
to get something right.
Even IT listens to the CFO’s people when it
comes to costs and accounting, yet many have potential access to significant
expertise in their marketing people that goes untapped.
This feels important to me simply because
of all the bad surveying we still see. I suspect that the availability of free
services like Survey Monkey leads us to build and do surveys without any real
planning, and without thinking through how we might analyse and use the results
when we have them. Basically a good example of reducing the ‘Plan-Do-Check-Act’
cycle down to ‘Do’ - speedy and economic but not usually very effective.
As for the confusion and the wrong results
taken from unrepresentative samples …
For simple, but telling, examples think
about how many ‘customer survey’ results you have seen where in fact it is only
users who have been addressed. It is an important thing, user satisfaction, but
it isn’t customer satisfaction and we need to find out both and act accordingly
on what we find. For example if you have 100% perfect user satisfaction, then
the odds are your customers will think they are spending too much. And you will
frequently see a mix of customers and users asked questions that are not really
targeted at all, just asked because they can. This is often based on the –
misplaced – belief that the more people you ask, then the more accurate the
answer, ignoring the whole ‘sample selection process’.
Take a classic ITSM example, where a
support unit routinely sends questionnaires to those who have made use of the
service desk. This, of course, gives you a satisfaction result amongst those
who have had sufficient problems to make them phone for help. Might you expect
a rather lower score from these people than the ones who have been working
quite happily without the need for support?
We know we need to care more and more about
understanding what our customers – and users and other stakeholders – want and
need. We also need to understand it is not always an easy task to find that
out. There is a whole professional specialism out there that delivers this
service – as service providers ourselves, proud of our professional expertise,
should we recognise that more – and take some better advice before we ‘knock something up’ to measure satisfaction?
Maybe you do consult with your internal
experts if you have them, or maybe you buy in expertise. It would be good to
hear if you do.