So, I was at Pulse this year and was the source of pretty constant ridicule for carrying around what felt like a fifty pound laptop bag. It was horrible, and inconvenient, and not even effective. I had hard copies of schedules that were out of date about 30 seconds after I clicked print. By the end of the conference I had calluses on my fingers and I couldn’t walk more than about ten steps without having to change hands. It was really a constant reminder that I need to go to the gym more.
Anyway, interestingly enough, most vendors in the endpoint security space have basically adopted this same approach in designing their technology. Incoming attacks get blocked by signatures, and in order to keep you “prepared,” some companies just create and update these huge signature files, shoot them across the network, fold their hands and hope they get properly installed, and then get right back to work because the files they just sent are more or less immediately out of date. I can tell you from experience that lugging around a bulky bag of incomplete, outdated information is no way to do your job. It’s also no way to keep your employees, and by extension, your company, ahead of threats.
What companies need to do is focus on what a defense-in-depth of the endpoint would really look like. It means you need a lot of things. You need to have antivirus and firewall protection. You need a patch process that actually works. You need centralized policy management that is easily enforceable. And, of course, you need all of this in real-time. Until recently, that also meant you needed a lot of aspirin.
With its acquisition of BigFix last July, IBM basically invested in the convergence of security and systems management, two pieces of the operational infrastructure that will continue to become more intertwined. You can’t just write the policy, or obtain the patch, you also need to be confident that these changes and updates are continually being enforced at every single endpoint. Try automatically applying patches to computers that aren’t turned on and you’ll pretty quickly understand why convergence is so important.
Up until this week there were four offerings that were part of the Tivoli Endpoint Manager suite of products, all of which are managed under the same roof. We have solutions for lifecycle management, security and compliance, power management and patch management. This week, we were pleased to announce Tivoli Endpoint Manager for Core Protection, a solution designed to add another layer of depth to your endpoint security posture. Tivoli Endpoint Manager for Core Protection is the result of the relationship between IBM and Trend Micro, and offers the real-time, lightweight threat protection that other endpoint security solutions can’t really compete with.
I spoke earlier about how other vendors were sending these huge signature files across the network, files that were outdated before you even figured out how to install them on your PC. Tivoli Endpoint Manager for Core Protection is different: while it does employ some signature files, it also leverages the cloud to reduce the amount of information that needs to be sent across the network, and provides the real-time protection that static signature files cannot. As the cloud is updated with the latest threat information, so too are all of the endpoints in conversation with that cloud.
This has proven to be extremely effective. In a recent third party test, the Trend Micro technology blocked 100% of all incoming malware (the second place competitive product came in at 77%) by taking a multi-layer approach. Nearly all (97.5%) of the malware was detected and blocked in the first layer (URL reputation) and the remaining pieces of malware were blocked in the two subsequent layers of defense. Now, here's where it gets even more impressive. An hour after the original test, they again tested just the malware that got through URL reputation, but this time it did not get through even that first layer of defense. This is protective technology that is updating and hardening its defenses as new threats come in.
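If you're curious what a layered approach looks like in practice, here's a minimal sketch in Python. To be clear, this illustrates the general pattern, not Trend Micro's actual implementation; the layer checks and the sample data inside them are invented for the example.

```python
# Illustrative sketch of multi-layer threat filtering. The checks and the
# sample data are invented for the example - this is the general pattern,
# not Trend Micro's actual implementation.

def check_url_reputation(url):
    """Layer 1: ask a (cloud-hosted) reputation service about the source URL."""
    known_bad_domains = {"malware.example.com"}  # stand-in for a live cloud lookup
    return url not in known_bad_domains

def check_file_signature(file_hash):
    """Layer 2: compare the file's hash against a local signature cache."""
    known_bad_hashes = {"9f86d081884c7d65"}  # stand-in for a signature cache
    return file_hash not in known_bad_hashes

def check_behavior(observed_actions):
    """Layer 3: flag suspicious runtime behavior."""
    suspicious = {"modify_boot_record", "disable_antivirus"}
    return not suspicious.intersection(observed_actions)

def allow_download(url, file_hash, observed_actions):
    # A sample only reaches a layer if every earlier layer passed it, which
    # is why most malware never gets past the URL reputation layer.
    return (check_url_reputation(url)
            and check_file_signature(file_hash)
            and check_behavior(observed_actions))

print(allow_download("malware.example.com", "abc123", set()))  # False: blocked at layer 1
```

The point of the pattern is that the cheap, cloud-updated check runs first, so the heavier checks only ever see the small fraction of traffic that slips through.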
I don't think I really need to explain the importance of endpoint security to anyone reading this. We all have different things at stake, whether it's your bank accounts, your music collection, confidential information for work or even just a photo album. What I can say is that 77% isn't good enough when it comes to protecting any of those things.
The strength of Tivoli Endpoint Manager is that it combines first-rate security with the systems management capabilities needed to ensure that protection is deployed across the entire infrastructure. When it comes to endpoint management, it's no longer about looking at technology in silos; it's about understanding why and how we can integrate different complementary offerings. Tivoli Endpoint Manager is built on that philosophy.
For more information about Tivoli Endpoint Manager, please visit:
One of the trends that we are seeing today is the convergence of security management and systems management. The better job you can do managing your infrastructure, the better equipped you will be to define and enforce security policies and controls across that infrastructure. There are few places where this convergence is more evident than the endpoint.
As the notion of a perimeter disappears, and we see the continued proliferation of an increasing number of traditional and non-traditional endpoints, such as servers, desktop PCs, laptops, ATMs, point-of-sale devices, and self-service kiosks, organizations are looking for a comprehensive approach to how they best manage and secure all of their endpoints. This includes, but is not limited to, identifying all of the endpoints that you have in your environment, managing the complete lifecycle of that endpoint, providing continuous security and compliance, effectively deploying patches in a timely manner and, finally, managing the power usage of that endpoint.
Tivoli Endpoint Manager, built on BigFix technology, can address all of those needs, but in this blog, I want to focus on that last piece of the conversation, because it is one that does not immediately come to mind when people are typically thinking about the most critical elements of managing an endpoint. However, we have seen that effective power management is something that can actually pay for all of the other benefits that Tivoli Endpoint Manager can provide. You can ultimately end up saving money, the environment, and in the process, deploy critical security and systems management controls across all of your endpoints (even the ones you didn’t originally know you had).
In a recent article, Penn State wrote about their deployment of BigFix (now called Tivoli Endpoint Manager) and indicated that it could save them about $800,000 annually. A large university like Penn State has thousands of computers that can be included in a power management initiative, and many of these computers are only heavily used during peak hours. Tivoli Endpoint Manager allows the Penn State IT staff to automatically put these computers in sleep mode when they aren’t in use. They are anticipating not only a significant ROI, but are also hoping to reduce the amount of carbon dioxide released into the atmosphere by 60,000 tons.
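To get a feel for how sleep scheduling reaches numbers of that size, here's a back-of-envelope calculation. Every input below is an assumption picked for illustration, not a figure from Penn State:

```python
# Back-of-envelope estimate of power management savings. All inputs are
# illustrative assumptions, not Penn State's actual figures.
idle_watts = 100           # typical desktop power draw when idle but awake
sleep_hours_per_day = 14   # off-peak hours the machine could sleep instead
cost_per_kwh = 0.10        # electricity cost in USD per kilowatt-hour
machine_count = 15_000     # computers enrolled in the initiative

kwh_saved_per_machine = (idle_watts / 1000) * sleep_hours_per_day * 365
annual_savings = kwh_saved_per_machine * cost_per_kwh * machine_count
print(f"~${annual_savings:,.0f} saved per year")  # ~$766,500 with these inputs
```

At campus scale, even a modest per-machine saving lands in the same ballpark as the figure Penn State quotes.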
One of the objections that people often bring up when it comes to power management for the endpoint is that it can interfere with the patch process. This is one of the areas where the convergence of security and systems management is so important. The policies that you create and enforce from a systems management perspective need to work hand-in-hand with the policies related to security management. For that reason, Tivoli Endpoint Manager was built on the core concepts of convergence, scalability and granular policy setting. It allows an IT staff to automatically wake computers at a designated time, apply required patches or enforce configuration policies, reboot, and then bring the endpoint back down to a hibernated, low energy state, or shut it down altogether.
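The "wake" half of that cycle typically rests on the standard Wake-on-LAN protocol: a UDP broadcast of a "magic packet" made of six 0xFF bytes followed by the target's MAC address repeated sixteen times. Here's a minimal sketch in Python; the MAC address is a placeholder:

```python
import socket

def send_wake_on_lan(mac, broadcast="255.255.255.255", port=9):
    """Broadcast a standard Wake-on-LAN magic packet to the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    # Magic packet = 6 x 0xFF, then the MAC address repeated 16 times.
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Placeholder MAC address: a patch window would wake each target like this,
# apply patches, reboot, then return the machine to a low-power state.
send_wake_on_lan("00:11:22:33:44:55")
```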
The Chichester School District provides yet another great example of power management savings. This regional school district in Delaware County, Pennsylvania, manages more than 2,000 Microsoft Windows desktops and 50 Microsoft Windows servers throughout a six-school network. The Chichester School District implemented energy conservation using the power management capabilities of Tivoli Endpoint Manager to help reduce computing energy costs by 70 percent. Their IT team also uses the distributed “Wake-on-LAN” functionality to distribute and install patches to those machines that are turned off at night. This allows for a reduction in energy use and confirms machines are securely patched—without impacting employee productivity.
The integrated patch and power management capabilities of IBM Tivoli Endpoint Manager provide IT staff with real-time information on remote endpoints to simplify patch processes, conserve energy and reduce on-site troubleshooting.
Today's post comes from Sandy Hawke, Manager, IBM Security Solutions.
I recently presented to the ISACA community on a live webinar. I focused the discussion on how to leverage automation to improve endpoint security and compliance. The archived webinar is available here. Just as a brief background, ISACA is an international professional association that focuses on all aspects of IT Governance and has over 95,000 members worldwide.
The online event drew a pretty substantial audience, which is good, and yet a bit surprising in two key ways. First of all, many of the recommendations I made to the audience were not radically new concepts, but basic foundational controls that all security professionals agree are critical for achieving and maintaining solid security and demonstrable compliance. So haven't they heard this story before?
Maybe not. And that's the second observation. Most of the ISACA membership is in the IT audit/Risk Management line of business. While they're not the folks who are implementing security technologies on a daily basis (i.e. "hands at keyboards"), they are keen to understand how security is implemented, how it works, how automation can be used to facilitate audits, etc. And that's the new trend we've been witnessing. While the audit team knows what the policy controls should be, they may not know if or how these controls get enforced, maintained, monitored and reported on - essentially how security is "operationalized." The more they know what's possible with respect to security operations and automation, the better they'll be at knowing what questions to ask IT operations during audits, what technologies to recommend, etc.
Years ago, the IT Audit/Risk Manager organization and activities were kept quite separate from the IT Operations/IT Infrastructure teams. And at the time there were pretty good reasons to keep these groups as distinct as possible - you've all heard the "keeping the fox out of the hen house" analogy, right? The IT Audit/Risk Mgmt teams could set and enforce policy and conduct assessments that wouldn't be influenced by the operations staff. Well, with the advent of converging technologies, economic trends, and the increased importance of measuring security investments and compliance programs - in real time - these groups are coming together. More so than ever before.
And technologies that can foster that type of trust, cooperation, and collaboration are indispensable.
When you think of the level of innovation you are required to deliver to the business, cloud is the right technology to do it.
Since the Cloud lives on the Internet, which is built upon the "bricks" of open standards, it should not surprise you that there is a drive to a ubiquitous Infrastructure as a Service (IaaS) open source cloud computing platform for public and private clouds.
In October 2011, my neighbors to the South at Rackspace announced plans to form the OpenStack Foundation.
Today, IBM is announcing that we will be joining the OpenStack Foundation as Platinum-level sponsors along with AT&T, Canonical, HP, Nebula, Rackspace, Red Hat and SUSE.
The OpenStack Foundation has a great blog post that covers what's happening today and what the next steps will be.
This is the start of a very exciting future for cloud computing and of course there will be more news coming from the OpenStack Foundation in the weeks to come at events like the OpenStack Design Summit & Conference in San Francisco on April 16-20 and IBM Impact in Las Vegas on April 29 - May 4.
I am at VMworld this week in San Francisco, CA, where IBM is a platinum sponsor.
With the growing adoption of Cloud implementations - private, public and hybrid - there's clearly a desire among the clients exploring solutions here to optimize and exploit their environments rather than take a maintenance and steady state approach. Therefore, it is timely that Bala Rajaraman and Pratik Gupta, two IBM Distinguished Engineers, are presenting a Collaborative DevOps session at VMworld.
I sat with Pratik and Bala and asked them about the impetus and motivation for developing this talk. The crux of the pitch, as Pratik explained to me, is that current conditions have created four drivers, facing a majority of customers, that are making a DevOps approach an imperative.
At the heart is the desire in companies for agility. The desire of Line of Business leaders to create value in their offerings is resulting in an urgent need for business agility. This in turn challenges the development organization to take an agile development approach. As more and more deployments move to a Cloud delivery model, it requires an operational discipline that is not always present. Add to this the human element. If you're in an enterprise shop, you know already that this is not purely solvable by software. Cultural gaps exist between the Line of Business sponsors, the developers and the Operations team. Notions of completion, priority and quality also differ.
Right now, companies are not getting this right. 50% of the applications released into production are rolled back. As many as 51% of projects are missing critical features. Quality and meeting end user expectations are clearly an issue.
Pratik and Bala will frame this problem space and then show how adopting a continuous delivery model can help address this.
If you're debating between this session and something else at this time, it might also help to know that the first 75 attendees will receive a snappy IBM jacket. The rest will get to pick one of an assorted set of items. There's also a giveaway of an iDoodah. Come by the IBM booth on the expo floor if you want details.
For those new to the blog, IBM SmartCloud Control Desk was one of the new announcements made at Pulse. It is a service catalog/service desk based on IT Infrastructure Library™ (ITIL™) V3 and ideal for streamlining incident, problem, change, configuration, release, and IT asset management.
This service desk offering will provide customers with a process control center for managing change and configuration, assets, incidents and problems, service requests, software licenses and more.
The announcement letter (212-051) was published on March 13 and we now have a very cool demo that showcases the solution.
What is IBM Tivoli Software? We know you want the short version. Steven Wright of Tivoli Software breaks it all down for us in less than 7 minutes on a white grease board. Check it out while you have your morning coffee, afternoon tea, or while you get your miles in on the treadmill or trail with your smart phone. Then visit ibm.com/software/tivoli for more details on how IBM Tivoli Software can help you run a smarter business.
I went to an itSMF UK regional meeting last week. I haven’t managed to get to our local meeting for a while and I found I was being introduced to new members as someone who has been around ‘since the beginning of ITIL’. Now that kind of thing, apart from making me feel old (which is, admittedly, a fair enough feeling at my age), also made me look back and think on where we (the ITIL community) have come from and where we are now.
The first thing that occurs to me in thinking back to the early days of ITIL is that we now find ourselves in a place that none of us imagined we would. Don’t get me wrong, the original inventors and drivers of the ITIL idea were not short on confidence or vision, nor in seeing the benefits that documenting this aspect of best practice would bring. But I suspect that world domination of this industry sector by the word ‘ITIL’ was beyond even their best possible visions.
The key to the expansion of ITIL was that it quickly became about more than just the books. The ITIL advertising leaflets produced in the mid 90s coined the term ‘ITIL philosophy’ to represent this scope of ITIL. I suppose I should confess that I invented that phrase and also the diagram that went with it – a version from about 1997 is shown here. The accompanying words suggested that, even back then, less than 1% of ‘ITIL-related sales’ were about the actual ITIL books, and the rest were the training, consultancy, tools and other services that had grown up around them.
The fact that I couldn’t even hazard a guess at what that percentage might be today indicates a few, pretty fundamental, changes. When I was writing those things in 1996-1998, I felt I could pretty much ‘take in’ what was going on related to ITIL, and even know most of the people developing and delivering new ideas. Nowadays no-one can honestly claim to be able to do that.
What is ‘ITIL-related’ has become a much more debatable concept. Whatever its faults might have been (and there were many), ITIL was just about alone in its market space. The initiatives kicked off by ITIL have spawned fellow travellers, such as COBIT, ISO20000 and others. The fact that I could easily start a long running – and probably vitriolic – debate on the social media pages by asserting which are and which are not ITIL derived, ITIL alternatives etc indicates that this is now a loosely bounded region. That makes any assessment of its scale, scope and success hard to pin down.
Some other things have changed too. Nowadays the maturity of the ITIL ideas means most players are focused on market share rather than growing the sector itself. That means more competition than there used to be. Nonetheless there are still lots of examples of collaboration easily found. Probably the best example is the ‘Back2ITSM’ facebook group – a place where free advice, constructive debate and openly shared thoughts are still the norm.
The itSMF was born in 1991, and played – probably – the major coordinating role in promoting the idea, importance and approaches of service management. Like ITIL, itSMF predates the term ‘service management’, having started as the ITIMF. Even here we have seen a lot more competition during the last third of its lifetime: both competition from other community organisations and also considerable internal competition. I hope itSMF will evolve from this to carry on delivering benefit to its members. I am a bit too frightened to work out what percentage of my time has been given to itSMF over the last 17 years – or at least frightened of what my employers over that period might think. But that commitment does make me wish hard for its future health.
So, looking back should make us appreciate where we are now – nostalgia can be deceptive, for usually the past wasn’t better; progress is exactly that – going forward and getting more. And wherever ITIL is now, IT service management has come a wondrous way in the last 20 years. Global technology changes have made a difference to that journey; we’ve seen personal computing and the internet make all but unbelievable levels of change. We may well see Cloud do the same; personally I think cloud might do that by freeing us from some of the technical baggage and letting us see and address real service management issues, without the obfuscation of technology issues or the opportunity to hide behind them any more.
We’ve seen a move from books being the go-to source of wisdom when ITIL started to an amazing range of information sources. Nowadays your typical service management practitioner will expect their influences to come via social media, electronically delivered white papers and the like. Interestingly, in many cases, they would also expect them to come for free, and that poses a real challenge to the thought leadership business. If ITIL 4 ever happens I think it will be a radically different entity from versions 1-3.
Where I want to see ITSM going is towards SM. IT is now so pervasive that it is everywhere, which to me means that ITSM cannot be a subsection of overall SM anymore because it logically applies to everything, since all services now depend on IT. Nevertheless, IT has treated SM well, and – after some effort – has taken it seriously. I hope those lessons will work their way into broader adoption and we will see an improved – and critically an integrated – approach to service management across enterprises because of that. I am driven to optimism in this (not my natural state, you understand, so it is noteworthy) by the fact that, alongside this blog, I am involved just this same month in a webinar and an article for IBM’s SMIA series on the idea that IT is now spreading its ideas – and delivering its technology and specifically its evolved software solutions – to the broader enterprise.
I wonder what we will be saying in another 20 years looking back – maybe ITIL will survive another 20 years, maybe not, but I am certain service management will progress and improve.
And the top two names I would put here are Pete Skinner and John Stewart – perhaps our least sung heroes, especially the late Mr Skinner – but pivotal all the same.
I don’t plan to, and hope no-one else is tempted – there are far more constructive things for intelligent service management practitioners to progress knowledge about.
And if you are interested (sad?) enough to be reading this then you should be part of that group if you aren’t already.
There have been a lot of good discussions on Back2ITSM recently. I find the site a wonderful reminder of the two universal constant truths: ‘everything changes’ and ‘there is nothing new under the sun’. They might seem contradictions at first, yet the older I get the more both seem true.
Firstly, if you aren’t looking at the Back2ITSM group on facebook then you are missing out - go sign up, now! Let me explain what it is and how it is brand new and full of ITSM tradition at the same time.
Secondly, it is about people talking with each other. That’s the bit that is the same as it’s always been. The willingness to share ideas and help others – even those in competing organisations – is just exactly like many itSMF regional meetings I have been to, in UK, Canada and New Zealand; except that now we are in all three at the same time.
Of course, social media isn’t new, and facebook is not the newest kid in town. But what is 21st century about this kind of group are the immediacy of comment and dialogue and the wide spectrum of simultaneous participants it allows. Since it has active members from all across the world, there is constant input and comment.
OK, so we all know that the technology for this has been around a while. After all it is ‘just’ about real time input to a forum – and we now have about 20 or 30 people across the world presenting their opinions to an audience of 500+ (lurking is positively encouraged). For me what is important is precisely that I am not aware of the clever technology, or feel all the time that I am using a novel means of communicating, or even just how damned clever the whole thing is. With this group I have reached stage three in my own ‘using technology’ scale: comfort and taking for granted.
Stage 1 is when you are using some new way of doing things just because you can. This isn’t just about IT of course; many of us may recall how such things have affected our choice of travel (my example is choosing an airline because they had A380s on the route, and even if a bit dearer I had never been on one of them before).
Stage 2 is when the means is no longer overwhelming the ends – you’re using it now because it is logical to do so, and it is delivering value. But you are still very aware of how cool it is. And you probably keep telling other people how cool it is too.
Stage 3 is when your focus is totally on what you are doing. I can now just read what is written and comment if I have something to say. You know it’s a normal conversation because it goes off at tangents, people get flippant, say daft things, agree, argue, make subtle (and sometimes not so subtle) digs at each other and launch jokes that no-one else notices. In short, it’s normal human conversation, without thinking about how you are achieving it, nor where all the people are, or what time it is there.
And to me this is a good motif for successful technology. It isn’t when it is there and running that the implementation part is properly over. Real success is when people don’t notice it any more, but just get on with using it, unconsciously – as part of their everyday lives.
It’s one more example of how success is about being invisible. The first time I flew in an A380 I was excited about it – the last time, I was watching a movie before we reached the runway. That’s success. (OK, so there was a little re-attention on the technology after the Qantas A380 had an engine explode, but I am back to ignoring it again now.)
So the important lesson and message that I see is how we need a customer perspective on the introduction of new technology. And maybe what you actually want is people to stop telling you how impressed they are, because then they are getting on with using it, which was, after all, the real point of the exercise, wasn’t it?
For MSPs, IBM is providing a bundle of services and support in the way of marketing skills, technical expertise and financing options. On the marketing side, MSPs will have the ability to better target their customers and generate demand for their services through IBM education that includes topics such as developing effective marketing plans and exploiting the burgeoning social media space. Additionally, MSPs can sell IBM SmartCloud services under their own brand names. On the technical side, MSPs will have access to four new "Centers of Excellence" (located in China, Japan, Germany and New York City) where they can collaborate with IBM technical experts to build their cloud services and connect with other IBM ISVs. In terms of funding their efforts, IBM announced a financing offer which includes 12-month, 0% loans for IBM Systems, Storage and Software, and allows MSPs to defer payments for up to 90 days.
For end-users in the SMB space who often lack the necessary IT skills, this is a great opportunity to leverage local technology providers and take advantage of a cost effective "pay-as-you-go" model that cloud computing affords them. In addition, end-users will have the confidence of knowing that the services provided were built on an IBM platform.
Finally, for IBM, this is a great opportunity to expand its cloud ecosystem and leverage the growing population of MSPs, who are continuing to gain traction in the cloud computing space for SMBs.
When IBM first kicked off the Dynamic Infrastructure announcement at the Pulse 2009 conference, we heard some rumblings about whether Dynamic Infrastructure was just another executive buzzword or if there was real meat behind "the concept."
Doug McClure summarized the feeling well in his blog: “While this is great for executive level folks, I think we needed to drive this message into consumable and actionable things that lower level technical attendees could take back to their companies. They may be the ones who need to execute and show how previous or planned investments could help their company become smarter and more dynamic.”
After IBM’s announcement yesterday on new Dynamic Infrastructure offerings, critics will be hard-pressed to argue that Dynamic Infrastructure isn't actionable. Not only did IBM announce new products and services in the areas of Information Infrastructure, Virtualization, Service Management, and Energy Efficiency, but they also demonstrated how these solutions are helping three of our clients--the Taiwan High Speed Rail Corporation, Tricon Geophysics and the United States Bowling Congress--build new, more dynamic infrastructures to help reduce costs, improve service and manage risk.
A key piece of the announcement is the IBM Service Management Center for Cloud Computing, which now includes new IBM Tivoli Identity and Access Assurance, IBM Tivoli Data and Application Security, and IBM Tivoli Security Management for z/OS, for Cloud environments. I don’t know about you, but all that’s more meat than this vegetarian can handle. :)
To continue driving home the Dynamic Infrastructure success, IBM is sponsoring a variety of events for the public to learn more. Register for a free, local Pulse Comes to You event to see how Service Management is a key component for enabling a Dynamic Infrastructure for a Smarter Planet.
Over 51 million tourists travel to Orlando, Florida every year, but only the cool ones go to attend IBM Edge and IBM Innovate.
As I type this, so many of our customers, partners and my colleagues are in the "brutal" 88°F* weather learning more about storage and software & system innovation.
Since much of my focus is around product announcements, I wanted to point folks to the IBM Tivoli Storage Productivity Center V5.1 announcement that happened yesterday (Announcement Letter 212-189).
For content coming from the conference, a number of the marketing team are on the ground at Edge and tweeting. Be sure to follow Maria, Martha and Branavan (and of course, @ibmstorage) as well as the hashtag #ibmedge.
The Rational team have a number of exciting new announcements around Jazz and they will be talking quite a bit about mobile, cloud, industry solutions and a few other things including DevOps.
For us service management folks, DevOps translates into tangible benefits we can bring back to the business, like fewer errors and faster resolution when errors do occur.
Back at Pulse 2012, we announced, among other things, the Beta for IBM SmartCloud Continuous Delivery (see the blog post and press release).
Along with IBM SmartCloud Control Desk and IBM SmartCloud Provisioning Manager (among others), it's about developers and testers having access to the same tools, data and information that operations uses and leveraging them to fix problems before they occur. And if problems do occur, the linkages with tools like Rational Application Developer and Rational Performance Tester allow the developers and testers to quickly resolve these issues as everyone and everything is connected.
As stated before: fewer errors, and faster resolution when errors do occur. This translates into time spent being productive and innovative. Innovation is what provides value back to the business.
Today we trust computers – literally and unconsciously – with our very lives. I was reflecting on this level of trust when I got £50 of cash out from my local ATM and declined the offer of a receipt. Seems I now have total faith the computer systems will ‘get it right’. I’ve come a long way from keeping all my own cheque books to cross check against later bank statements.
Now, combining that faith with a little healthy British cynicism, and triggered by watching the Olympics tennis finals on TV, a mischievous but irresistible thought came to my mind.
It used to be that when a ball hit the ground near the line we relied on the human eye to say whether it was ‘in’ or ‘out’. That caused disagreements and discussion – and, in tennis, often sulking, swearing and the full range of petulant behaviour.
Nowadays that is all replaced by referencing the technology. When there is doubt – or one of the players questions a call – then we simply ask the computers. What we get then is a neat little picture representing the appropriate lines on the court and a blob showing where the ball had hit. So, problem solved: disappointment still for one player but, so it seems, total acceptance that the computer is right. After all, it is an expensive system working away inside a very expensive box – must be right, mustn’t it. Or to put it another way, ‘computer says in’ – who would argue?
But what occurred to me is this. All we can actually see is some boxes around the court, and a stylised display with a blob on it. That could be delivered by one person with a tablet showing the court lines and them touching the screen where they think it landed. Very cheap, and it still solves all the arguments because – naturally – everyone trusts the technology.
Now – of course, and before anyone calls their lawyers – I am not suggesting for the merest moment that there is the slightest possibility of such a thing happening. But it’s fun to think it might be possible. There is little public awareness of what accuracy the system – and here I presume it does really exist – works to. If you dig around on the web you can find out (the answer, by the way, for tennis is 3.6mm). You also find out there is some very minor grumbling and questioning going on. But that seems to be at geek level – in everyday use the audience stands instantly convinced.
So, thinking it through, there are a couple of interesting consequences for real IT life:
Once you realise that trust depends on quality of presentation at least as much as on accuracy, should you focus more on that? Certainly you have to take presentation seriously, because the corollary is that if you deliver perfection but don’t make it look good, then no-one will believe it even though you are right.
Whose responsibility is it to check – and is it even possible? I suspect this discussion will take us into the territory of ‘governance’. But even before we get there it implies that User Acceptance Testing needs to do more than look at things. Of course yours does, doesn’t it?
I guess my big issue is to wonder how comfortable we are – as the deliverers of the technological solutions for our customers, and especially our users – to have such blind faith placed in us. Of course, people being the irrational things they undoubtedly are, that blind faith in the detail is often accompanied by a cynical disregard for overall competence – set the faith in ATMs and on-line bank account figures against the apparent level of trust in the banking industry as a whole.
As a little codicil to the story, I registered with a new doctor yesterday – the nurse asked me questions, took blood pressure etc and loaded all the data she collected into a computer. The system was clearly ancient, with a display synthesising what you typically got on a DOS 3.0 system. First thought: ‘OMG, why are they using such old software, that can’t be good?’ Second thought: ‘They’ve obviously been using it for years, so they really understand it, have ironed out all the bugs and it does what they need. It ain’t broke so they aren’t fixing it.’ But my instinctive reaction of suspicion of it for not being pretty was there and I had to consciously correct myself.
Would you as a service provider prefer more questioning of what you package up and present to your customers and users, or are you happy to have that faith? My own view is that the more blind faith they have in you, the more the retribution will hurt if things do go wrong. Or perhaps that’s just me being cynical again?
Earlier today, IBM shared its point of view on the future of the data center with Smarter Computing V3 (press release). A central focus is IBM Enterprise Systems (zEnterprise EC12 and Power) and their ability to deliver exceptional value through a private Cloud. We've seen how organizations have been able to leverage IBM Enterprise Systems to achieve significant benefits. Take the City of Honolulu, for example, which was able to lower its licensing costs by 68% while increasing tax revenue by $1.4M USD in just three months.
By adding Tivoli software to their current IT environment, organizations can advance their enterprise-class Cloud environment while protecting their existing IT investment. How? IBM SmartCloud Foundation software is deeply rooted in openness - an open standards approach and common management tools that are platform agnostic. Essentially, you pick the platform(s) that best meet your business goals and we deliver a set of interoperable Cloud management tools across your heterogeneous environment. Of course, there are intrinsic benefits to building a Cloud management stack on top of IBM Enterprise Systems given the tight integration between hardware and software. OMEGAMON, for example, leverages a deep integration with zEnterprise systems to deliver advanced monitoring that reduces typical time to resolution from 90 minutes to 2 minutes.
Whether you're starting to consider virtualizing your IT environment or are deep into your Cloud journey, we have open Cloud management tools that help you expand your Cloud footprint without fear of vendor "lock-in". Learn more about the latest announcement and our Cloud solutions by visiting this site and attending the System z webcast on October 17.
IBMers are hyper-aware of our clients and the issues that they address when they're on the job. So much so, that I've said in past blogs that the majority of conversations I have with my colleagues start with, "How does [blank] benefit our customers?"
To that end, everything we do revolves around questions like - how can we give our customers what they need to get their job done and stay innovative in their industry?
Questions like that get answered at conferences like Pulse 2012. It's where we continue to deliver value to our customers.
And, as mentioned in yesterday's blog about the general session keynotes from Danny Sabbah, not technology just for technology's sake. Providing real business value.
This particular blog is going to focus on the specific announcements we made around cloud, starting with SmartCloud Foundation.
IBM SmartCloud Virtual Storage Center
Storage is "the next big line item" for IT, which is why the idea of improving storage efficiency has always been a hot topic.
Storage virtualization brings the promise of not only improving efficiency, but also providing levels of data mobility that are crucial to delivering modern services to customers.
The ideal solution for storage virtualization should be able to do both the virtualization/provisioning as well as the actual management.
IBM SmartCloud Virtual Storage Center does both, and it's one of the most impressive things being shown on the Expo Center floor here at Pulse 2012. Not to worry though: the team has information on the website, and talks about this, as well as all things storage, on our @ibmstorage Twitter account and the Storage blog.
IBM SmartCloud Monitoring and IBM SmartCloud Provisioning
If you were following our SmartCloud announcements last year, you saw these two solutions make a big splash in the market and we're continuing to add value to both of these solutions.
Today - as in right this second - you can go to the ISM Library and download "Service Health for IBM SmartCloud Provisioning," which integrates provisioning and monitoring so that you can easily monitor what you've provisioned and identify and react to issues in your environment.
To help further simplify how you provision, we've released a statement of direction for SmartCloud Provisioning that may provide enhancements to image lifecycle management. These new features may provide the ability to control image sprawl, an Image Construction and Composition Tool, as well as highly automated self-service deployment of virtual machines. All of which translates into spending less time wrestling your virtualization and cloud environments to the ground and more time working on innovation.
IBM Endpoint Manager for Mobile Devices (New)
Yesterday's general session keynote emphasized mobile.
Between "Bring Your Own Device" (BYOD) and organizations embracing using their own mobile devices for their employees, mobile is the new platform of choice. (which means it's probably time to ditch my IBM 5100)
As you know, our IBM Endpoint Manager solution is built on BigFix technology and it's been invaluable to our overall service management strategy of Visibility. Control. Automation.™ (VCA)
On January 31, we announced an update to one of the key pieces of this portfolio; IBM Security Identity and Access Assurance 1.2.
Security was one of the three areas of focus with regard to increasing complexity. The new features deliver improved identity and access governance with open authentication standards, role modeling and lifecycle management, while a virtual appliance delivery method simplifies deployment and provides faster time to value for security while reducing risk.
IBM SmartCloud Continuous Delivery
Continuous Delivery is a topic that we have discussed quite a bit on this blog (it has also been known as "collaborative development and operations" or "DevOps").
The challenge of getting services to users quickly must be balanced by ensuring that speed does not come at the expense of governance or add risk.
The strategy to bring development and operations teams together is often stalled when the tools each team is using don't work well together.
Per the announcement letter, "IBM plans to provide an extensible architecture for delivering and managing the entire application lifecycle, creating an environment that brings development and operation teams together with collaboration, automation, and analysis."
IBM SmartCloud Control Desk
With IBM SmartCloud Control Desk, IBM plans to deliver a solution for service catalog, service desk, and IT Infrastructure Library™ (ITIL™) V3 based processes for incident, problem, change, configuration, release, and IT asset management.
This service desk offering will provide customers with a process control center for managing change and configuration, assets, incidents and problems, service requests, software licenses and more.
Software As A Service (SaaS) - IBM SmartCloud Solutions
The innovations happening with Smarter Planet are quite simply staggering. One of the most interesting, and most visible, areas is the Intelligent City solutions.
You've seen these solutions in the market in any number of places in the past, but now Intelligent Operations, Intelligent Transportation and Intelligent Water also have SaaS offerings that allow customers to get started quickly, since there is no hardware to procure or installation services to contract.
Infrastructure as a Service (IaaS) - IBM SmartCloud Enterprise - Object Storage
Last quarter, we announced SmartCloud Enterprise, and this quarter we have added a very compelling new feature: object storage.
Object storage enables you to upload and share files of any size from anywhere in the world; supporting millions of users, billions of objects, and exabytes of data.
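Object storage of this kind is usually exposed as a simple HTTP API: PUT an object into a container, GET it back. Here's a hedged sketch using Python's requests library; the endpoint URL, container name and auth header are placeholders for illustration, not the documented SmartCloud Enterprise API:

```python
import requests

# All values below are placeholders; consult the actual SmartCloud
# Enterprise object storage documentation for the real endpoints and auth.
endpoint = "https://objectstorage.example.com/v1/my_account"
headers = {"X-Auth-Token": "REPLACE_WITH_TOKEN"}

# Upload (PUT) an object into a "backups" container...
requests.put(f"{endpoint}/backups/report.txt",
             headers=headers,
             data=b"quarterly report contents")

# ...then retrieve it (GET) from anywhere with the right credentials.
resp = requests.get(f"{endpoint}/backups/report.txt", headers=headers)
print(resp.status_code, resp.content)
```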
It's a nice bookend to the SmartCloud Virtual Storage Center in that it gives customers options on how to solve their storage issues.
Back To Visibility. Control. Automation.™ (VCA)
This is a lot of "stuff" with regard to features and functions. But what does it mean for you, as a customer?
I keep going back to the Danny Sabbah general session keynote because it really hit home the message so well.
"Providing information on all platforms is table stakes these days."
Cloud done right is about mobile + cloud. The infrastructure must deliver value back to the business. We must simplify, standardize and automate.
Cloud done right is about delivering VCA:
Visibility - see and understand your business in real time.
Control - transform and adapt, while limiting risks.
Automation - achieve greater efficiency by standardizing best practices.
Cloud computing and VCA means less time (and resources and money) working on your infrastructure issues and more time being innovative.
To find out more about any of these solutions, contact your IBM sales rep or one of our Business Partners using the Business Partner Locator website.
* Some of the new announcements are statements of direction and are noted as such here and in the announcement letter (see the announcement letter and the bottom of this blog, as the standard disclaimers apply).
Statement of direction disclaimer
IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.
One statement: simultaneously reassuring and terrifying.
Firstly it’s reassuring because anything that works towards the realisation that development and operation are not really separated by any kind of wall has to be a good thing. Of course there are different areas of focus at different times in the life of a service, but they all should have the same aim - delivering what is needed in the best possible way. We already all knew that; it is so obviously sensible that who would vote against it? The equally obvious fact that we then don’t do it is one for the psychologists and later blogs, but does lead me into my other reaction:
The horror that we should be 50+ years into IT services before this seems important enough for people to give it a trendy name. How on earth have we survived this long without a “collaborative and productive relationship” between the people who build something and the people who operate it? And bear in mind both those groups are doing it for the same customer (in theory anyway).
To be fair to IT people though, perhaps this is an obligatory engineering practice we have picked up. Who remembers the days when getting your car repaired was unrelated to buying it? You bought it in the clean and shiny showroom at the front of the dealer, took it to the oily shed around the back if it broke. One of the things that has seen a step-change in the car industry – and is also changing ours and most others – is the realisation that we are now all delivering services and not products. So we are finally realising that long term usability and value is what defines success, not a shiny new – but fragile – toy. In fact, thinking of toys we all recall the gap between expectation and delivery of our childhood toys – the fancy and expensively engineered product that broke by Christmas evening compared to the cheap and solid – be it doll or push along car – that lasted until we outgrew it.
The car industry saw that happen – and we now have companies leading their adverts with a promise of lifetime car driving with their latest vehicles – with the mould really having been broken by Asian manufacturers offering 5 year unlimited mileage warranties. That was about selling a self-controlled transport service instead of a car – and really that is what most of us want. Amazing strides on that front are, of course, being taken by companies like Zipcar, who have thought simply enough to see there is no absolute link between that service (self-controlled transport) and car ownership. (Some of us want other things from a car of course – but that just leads us into the key first step of any successful service: know what your customer(s) want.)
Why I get so interested in all this is that it’s basically what I’ve been saying for the last 20 years – my big advantage is that I came into IT from a services environment (I worked in a part of our organisation called ‘services group’) – and I never really understood why IT needed such a large and artificial wall between build and do. ITIL was (in large part) set up to try and break down the walls – initially an attempt to set up serious best practices and methodologies within operations to match what was already alive and well in development (hence the original name of the project – GITIMM, to mirror SSADM).
So … what am I saying? Please take devops seriously if that is what is needed to get better services. The complexity we need to address now means we have to stop maintaining any practices that prevent good ongoing service design and delivery. If giving it a name and a structure helps then let’s go there.
One of the things I am most proud about in the books I have contributed to is that we made up a fancy name for something good people already did (in our case early Life Support) – the intention was to give it profile and then people would add it to job roles and actually start to plan for it and then, finally, do it better.
Of course that brings with it the chance of looking like the emperor in his new clothes once you examine the detail and originality too carefully. But that’s good too – clever and original usually = doesn’t work too well at first. Solid old common sense (eventually) seems to me to offer a much firmer foundation to build on.
We need good foundations because the situation is actually a lot more complicated than we pretend – multiple customers, other stakeholders, users, operations as users – enough for a dozen more blogs, a handful of articles and a book. So … I’d better get on writing – and maybe so should you?
 Seems so to me anyway – the Delphic oracle was widely believed, responsibility free and most of those who used it didn’t understand where the knowledge came from.
The following article was written by Cameron Allen, Pierre Coyne and Beth Sarnie and is the second in our OSLC series.
In non-acronym speak, what I'm saying is that the future of service management has arrived in the form of Open Services for Lifecycle Collaboration.
But, what is OSLC and what does it have to do with you?
If you are a user of service management tools of any kind, or rely on information from tools to do your job, then you probably know that finding the right information is half the battle, and getting realtime access to that information when it is not under your direct control can feel next to impossible.
OSLC means you can now leverage the simplicity and ease of web links to both find and share information across your management tools (be they IBM, or any vendor tools).
Just as web pages can be linked on the Internet, data can be linked together from one application to another – creating an application ecosystem where applications don't care what vendor they're from. They look up who has the data in a directory, and jump right to it.
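Under the covers that linking is ordinary HTTP plus content negotiation: every resource has a URI, a consumer asks for a machine-readable representation, and then follows whatever links it finds. Here's a minimal sketch; the URIs and property names are invented for illustration, since the real vocabularies come from the OSLC domain specifications:

```python
import requests

# Hypothetical OSLC resource URI; in a real deployment you would discover
# this through a service provider catalog rather than hard-coding it.
change_request_uri = "https://tools.example.com/oslc/changerequests/1234"

# Content negotiation: ask the tool for a machine-readable representation.
resp = requests.get(change_request_uri, headers={"Accept": "application/json"})
change_request = resp.json()

# A linked property is just another URI - possibly served by a different
# vendor's tool - so following it needs no vendor-specific API knowledge.
related_uri = change_request.get("relatedIncident")  # illustrative property name
if related_uri:
    incident = requests.get(related_uri, headers={"Accept": "application/json"}).json()
    print(incident.get("title"))
```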
OSLC is not something new, and Tivoli is not the first to adopt it for integration. If you're an IBM Rational user, you may already be a believer. IBM Rational, its users, and an extensive ecosystem of partners have been using OSLC to successfully interconnect the application lifecycle for years.
In fact, Rational Jazz is the realization of OSLC community specifications and shared services in an open platform that anyone can use to interconnect the application lifecycle. Rational just delivered their 4th incarnation of the integrated product offering called Collaborative Lifecycle Management based on Jazz.
Tivoli is now leveraging these same principles to help break down silos of information across the end-to-end service lifecycle. That means expanding the notions behind Jazz from service design and development to now include service delivery and management. We call this Jazz for Service Management.
Take for example, problem management. In order to diagnose and resolve a given trouble ticket, the problem information must be gathered and aggregated from multiple sources. We may need information pertaining to the application topology, the health of a system within that topology, outages or events that may be affecting the application, the CPU utilization, the versions and configurations of the hardware and software that this application is dependent upon. I could go on...
The problem is that all of this information lives in different places. You can either call around to the various owners of that information, or you pay a business partner to learn the API of the tool in order to get to the data, or you can have a highly skilled, in-house resource write the integration. These options require extensive expertise in vendor-specific APIs and lots of maintenance to keep them current.
OSLC utilizes community defined specifications for sharing and linking data applied to specific service management scenarios so that in a critical outage scenario, all relevant information relating to that outage can be accessed in real time from any number of sources, displayed in the context of that problem, in a single integrated view, with related actions that can be taken.
The difference is simplicity. You might be able to do this now with a lot of experts and time, but OSLC delivers simplicity.
And, most importantly, because OSLC uses community specifications for service management scenarios, integrations can be built once and applied across multiple 'related' OSLC-enabled tools. "Write-once, Apply-many."
For more information, listen to this podcast on the Tivoli User Community. This podcast provides a deeper insight into the next generation of service management built using linked data.
Also, at Pulse 2012 (video link), developerWorks' Scott Laningham is joined by Don Cronin, program director, Tivoli Technical Strategy and Architecture; and Mike Kaczmarski, IBM Fellow and Tivoli Chief Integration Architect to discuss the Magic of linked data.
Leave your comments on how you are using OSLC in your organization below and don't forget to follow us on Twitter @servicemgmt and be sure to bookmark our OSLC story on Storify.
The following article was written by Cameron Allen, Pierre Coyne and Beth Sarnie and is the second in our OSLC series.
In fact, if you were at Pulse 2012...you heard how IBM Watson will be used to help doctors diagnose medical conditions and improve patient care at WellPoint.
For those of you, like myself, that don’t have a Watson-like recollection, here’s a quick flashback detailing a millisecond in Watson's brain on a sample patient:
Watson is given specific information on a patient’s symptoms, and makes a preliminary diagnosis of the flu as the most likely illness.
Based on the unique patient's name, Watson looks up records of the patient's history for the past few years, providing new insights that point to a better possible cause - for example, a Urinary Tract Infection.
Based on the patient's family connections, Watson is able to use the family history to derive that the most likely cause is now diabetes.
And finally, Watson is able to access a patient’s latest tests to derive a final diagnosis.
If you're in the business of IT, this may sound a lot like incident management. And as any level 1 support person can attest, diagnosing the root cause of an incident is much like diagnosing a patient's condition. You need information from multiple sources (e.g. service desk, license, CMDB, monitoring, and asset management systems), but more importantly, it has to be in context, up to date, and delivered in a timely basis to make an accurate diagnosis of the root cause.
The problem has always been that an incident manager, like a doctor, has to jump between tools, entering requests in each system for the right information...and that is time consuming. In some cases, information isn't readily available and must be requested from other sources, not under their direct control.
One of the ways Watson is able to be such a great diagnostician (and incident manager) is through "linked data," which allows it to seek out and find related information on the patient from multiple sources in a fraction of a second to facilitate faster, more accurate patient diagnosis.
Until now, an incident manager did not have this same luxury.
That's where Jazz for Service Management comes in. Jazz is IBM's realtime platform for integrating management across multivendor tools, and across service lifecycle processes and functions. Like Watson, Jazz for service management uses principles of linked data, along with community standards (including OSLC) to support Watson-like service management decisions, regardless of what vendor tools you have in place.
IBM had another four great speaking sessions today, and a colleague of mine, Lauren Mort (@Laurenmort2), joined me to help with our social media activities throughout the day. Below are the key points that Lauren and I thought were raised during the sessions.
Despite our first session being a repeat of the one given by Simon Smith yesterday, we still learnt some more interesting facts whilst he took the audience through the journey from basic, to proficient, to a final state of optimised security (which you can see in more detail in our blog from yesterday: http://ibm.co/IoV9ju). Simon talked about how the optimisation needs to be specific to the individual company, be it a large multinational bank or a 100-person company in the UK. A good security model can mean high levels of staff retention, because employees are able to be innovative on other projects rather than having to deal with the daily struggle of keeping the company secure. Simon also spoke about how you need to start understanding what in your network is a normal state and what isn't in order to achieve the desired "optimised" state. The security needs to fit your business processes to ensure the maximum availability of your systems. Simon finished by talking about how security needs to be built into the design, ideally from the word go – which is often untenable, but it certainly should not just be a "bolt-on". Security is all about risk, and it's the effective management of this risk that can lead you to the desired "optimised" state.
The second session of the day was given by John Smith on application security hacking 101 – to a packed room of over 70 people! He opened the session by talking about the work of our X-Force team, who monitor 14 billion security events every day and produce an annual trends and risks report on the security breaches we have seen over the last 12 months. John talked to the audience about SQL injection attacks against web servers, and how they are on the rise – saying there must be a return for the attacker, even if it is not apparent at first. John told the audience that in 2011, 41% of security vulnerabilities affected web apps – which is good news, as that was down 8% from the previous year and the lowest it's been since 2005. This stat shows that organisations are taking the important steps needed to address this problem – by using products like IBM's AppScan!
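John's SQL injection point is easy to see in code. Here is a minimal illustration – invented for this post, with a made-up users table, not an example from the session – of how concatenating user input into a query leaks data, and how a parameterised query closes the hole:

```python
# A minimal illustration of SQL injection and the parameterised fix.
# The table and the "user input" are invented for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: attacker-controlled input is concatenated into the SQL string,
# so the WHERE clause always evaluates to true and every row leaks.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("vulnerable query returned:", rows)  # both rows

# Safe: a parameterised query treats the input strictly as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterised query returned:", rows)  # no rows
```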
John continued the session by looking into XSS vulnerabilities, which still appear in 40% of the app scans that IBM performs for companies – which he said was scary, as they can so easily be addressed. John explained how injection flaws have "become the poster child of application security". John then gave the audience an example of an XSS attack, showing how easily a lot of damage can be done, despite end users being warned of such possibilities.
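The XSS fix can be just as simple as John suggested. A hedged sketch follows – the malicious comment string is invented, and output escaping is only one layer of a full defence:

```python
# A tiny sketch of one XSS defence: escape user-supplied content before
# rendering it into HTML. The malicious comment below is invented.
import html

user_comment = '<script>alert("stolen cookie: " + document.cookie)</script>'

# Unsafe: interpolating raw input lets the script run in the victim's browser.
unsafe_page = "<p>" + user_comment + "</p>"

# Safe: escaping turns the markup into inert text.
safe_page = "<p>" + html.escape(user_comment) + "</p>"
print(safe_page)  # <p>&lt;script&gt;...&lt;/script&gt;</p>
```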
John wrapped up the discussion by looking at black box (dynamic) analysis and white box (static) analysis, and gave examples of how these both work. He then offered all the audience a free demonstration of IBM AppScan on their own networks – which many of the audience took him up on!
Rob Ford and Jef Gielkens were next up for IBM, giving a presentation on Integrated, Intelligent Security Analytics for Enterprises. They talked about how, as the world becomes more and more digitalised and interconnected, we are opening the door to emerging threats and more data leaks. They looked at four key trends that we are currently seeing, all of which are affecting IT security in some way: Data Explosion, Consumerisation of IT, Everything Everywhere and Attack Sophistication. Jef then looked at the different attacker types and techniques that we are now seeing, and how this is making security a boardroom discussion – be it through effects on brand image, business results, the supply chain, legal exposures, the impact of hacktivism or audit risk.
Jef went on to talk about how it is no longer enough just to protect the perimeter: silo point products are not enough to secure your enterprise, so IBM is integrating across IT silos with security intelligence solutions. He spoke about the X-Force protection system – a purpose-built, multi-tenanted infrastructure designed to collect, aggregate, store, summarise and analyse data to derive the events of most interest.
Rob then took over and showed the audience the MSS architecture overview and how it can be used to optimise security intelligence. He looked into suspicious hosts and IP intelligence, and took the audience through three use case scenarios: visibility despite encryption or obfuscation, identification of reconnaissance, and infected websites. Jef wrapped up the session by stating that intelligent security solutions provide the DNA to secure a Smarter Planet.
Rob Whitters, who has just joined IBM through the acquisition of Q1 Labs, gave the final session of the day for IBM, entitled Next Generation SIEM in Action. Rob opened by giving a brief history of Q1 Labs and his involvement with the company. He explained that Q1 Labs solves customer problems with total security intelligence: helping customers look at the threats on their networks, predict risks against the business, consolidate data silos and detect insider fraud. Rob spoke about how the product can be used to link context to the threats we are seeing on the network – where they come from, which assets they are affecting, changes in network protocol and so on – and from this derive vulnerability data.
Rob then took the audience through a demonstration of the QRadar product, looking at the customisable dashboards, the role-based permissions and access, and the various workflows. He explained how QRadar allows you to get to the facts quickly, and how the data allows you to be proactive – to do something intelligent with it. He closed by talking about some of the 1,500 report templates built into the product, which can be used to demonstrate immediate value.
If you would like to see live comments during the day from the show, please follow me @RSwindell and @IBMSecurity.
What I am about to share here is a true story about Integrated Service Management. I changed the name of the customer to Customer because I didn’t ask permission to use Customer’s real name. So you’ll just have to believe me :oD
Oh, What a Better Web We Need
Once upon a time, Customer needed to test the interoperability of hardware, software, operating systems and customer solution stacks for new product releases. Customer needed to coordinate multiple global teams working on an abundance of machines. With thousands of operating system instances in test, Customer faced an enormous management challenge. Growth over time resulted in homegrown tools from many teams that did not interoperate, making data collection difficult. Visibility into tasks assigned to global teams was limited, and often resulted in duplicate testing and lost productivity. In addition to standardizing tools and improving workload tracking and visibility, Customer sought to automate as many repetitive processes as possible, improving productivity and freeing up engineers for more complex testing work.
Integrated Service Management to the Rescue!
The solution for Customer included a RaTivo integration of Rational Quality Manager (RQM) and Tivoli Provisioning Manager (TPM) to allow automatic provisioning of test machines with the required test configuration, saving Customer manual work and time from request to provision. Additionally, Customer applied Rational Test Lab Manager and Tivoli Application Dependency Discovery Manager (TADDM) to discover available test lab machines and display the list in RQM, saving Customer test time, as all the information is displayed in one tool.
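Purely to illustrate the request-to-provision flow – the endpoint and payload fields below are invented for this sketch, not the actual RQM or TPM APIs – the hand-off could look something like this:

```python
# A hedged sketch of the hand-off described above: when a tester schedules a
# run, the integration asks the provisioning service for a matching machine.
# The URL and fields are hypothetical, not the real product APIs.
import requests

def provision_test_machine(test_config):
    """Request a machine matching the test configuration; return its hostname."""
    resp = requests.post(
        "https://tpm.example.com/api/provision",  # hypothetical endpoint
        json={"os": test_config["os"], "memory_gb": test_config["memory_gb"]},
    )
    resp.raise_for_status()
    return resp.json()["hostname"]

# Instead of filing a manual request and waiting, the test plan triggers
# provisioning automatically with the configuration it needs.
host = provision_test_machine({"os": "RHEL 6", "memory_gb": 8})
print("Test machine ready:", host)
```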
All’s Well that Ends Well
You can’t argue with these results. Customer directly benefited from Integrated Service Management by:
Eliminating an estimated 20 percent of testing duplication.
Increasing visibility and automation, allowing better allocation of shared equipment and reducing hardware requests.
Locating available test machines for testing without the need to learn a new tool or collaborate with the operations teams.
Automating provisioning of new test configurations on available machines, speeding the test cycle.
Enabling managers to pull their own custom reports, thereby improving visibility and coordination.