Cloud & Service Management blog
Branavan Ganesan | Tags: continuous-delivery, devops, vmworld, agile
I am at VMWorld this week in San Francisco, CA, where IBM is a platinum sponsor.
With the growing adoption of Cloud implementations, private, public and hybrid, the clients exploring solutions here clearly want to optimize and exploit their environments rather than settle for a maintenance, steady-state approach. It is therefore timely that Bala Rajaraman and Pratik Gupta, two IBM Distinguished Engineers, are presenting a Collaborative DevOps session at VMWorld.
The session is entitled SPO 3304: Best Practices for Collaborative DevOps with Optimal Application Performance in VMware Environments.
I sat with Pratik and Bala and asked them what the impetus and motivation for developing this talk were. The crux of the pitch, as Pratik explained to me, is that current conditions have created four drivers, facing a majority of customers, that make a DevOps approach an imperative.
At the heart is companies' desire for agility. Line of Business leaders' drive to create value in their offerings is producing an urgent need for business agility, which in turn challenges the development organization to take an agile development approach. And as more and more deployments move to a Cloud delivery model, an operational discipline is required that is not always present. Add to this the human element: if you're in an enterprise shop, you already know that this is not purely solvable by software. Cultural gaps exist between the Line of Business sponsors, the developers and the Operations team, and their notions of completion, priority and quality differ as well.
Right now, companies are not getting this right: 50% of the applications released into production are rolled back, and as many as 51% of projects are missing critical features. Quality, and delivering on end-user expectations, are clearly an issue.
Pratik and Bala will frame this problem space and then show how adopting a continuous delivery model can help address this.
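For readers new to the term, continuous delivery means that every change flows through an automated build-test-deploy pipeline, so a change that would otherwise be rolled back out of production is caught at a gate instead. Here is a minimal illustrative sketch of that gating idea; the stage names and checks are hypothetical placeholders of my own, not anything from the session:

```python
# Illustrative sketch of a continuous delivery pipeline gate.
# Stage names and checks are hypothetical, not from the VMWorld session.

def run_stage(name, check):
    """Run one pipeline stage; a failure stops the release before production."""
    ok = check()
    print(f"{name}: {'passed' if ok else 'FAILED'}")
    return ok

def build():
    return True  # compile and package the application

def automated_tests():
    return True  # unit and integration tests run on every change

def deploy_to_staging():
    return True  # deploy to a production-like environment first

stages = [
    ("build", build),
    ("test", automated_tests),
    ("staging deploy", deploy_to_staging),
]

# all() short-circuits, so the pipeline stops at the first failing stage.
if all(run_stage(name, check) for name, check in stages):
    print("promote to production")  # only changes passing every gate ship
else:
    print("release blocked, nothing to roll back")
```

The point is simply that only changes that pass every automated gate reach production, which is one way a continuous delivery model attacks the rollback statistic above.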
Ivor Macfarlane | Tags: itil, ibm, service-management, itsmf, tivoli, ivor
Today we trust computers, literally and unconsciously, with our very lives. I was reflecting on this level of trust when I got £50 of cash out from my local ATM and declined the offer of a receipt. It seems I now have total faith that the computer systems will ‘get it right’. I’ve come a long way from keeping all my own cheque books to cross-check against later bank statements.
Now, combining that faith with a little healthy British cynicism, and triggered by watching the Olympics tennis finals on TV, a mischievous but irresistible thought came to my mind.
It used to be that when a ball hit the ground near the line, we relied on the human eye to say whether it was ‘in’ or ‘out’. That caused disagreements and discussion, and, often in tennis, sulking, swearing and the full range of petulant behaviour.
Nowadays that is all replaced by referencing the technology. When there is doubt, or one of the players questions a call, we simply ask the computers. What we get is a neat little picture representing the appropriate lines on the court and a blob showing where the ball hit. So, problem solved: disappointment still for one player but, so it seems, total acceptance that the computer is right. After all, it is an expensive system working away inside a very expensive box; it must be right, mustn’t it? Or to put it another way, ‘computer says in’: who would argue?
But what occurred to me is this: all we can actually see is some boxes around the court and a stylised display with a blob on it. That could be delivered by one person with a tablet showing the court lines, touching the screen where they think it landed. Very cheap, and it still solves all the arguments because, naturally, everyone trusts technology, don’t they?
Now, of course, and before anyone calls their lawyers: I am not suggesting for the merest moment that there is the slightest possibility of such a thing happening. But it’s fun to think it might be possible. There is little public awareness of the accuracy the system, and here I presume it does really exist, works to. If you dig around on the web you can find out (the answer, by the way, for tennis is 3.6mm). You also find that there is some very minor grumbling and questioning going on, but that seems to stay at geek level; in everyday use the audience stands instantly convinced.
So, thinking it through, there are a couple of interesting consequences for real IT life.
I guess my big issue is to wonder how comfortable we are, as the deliverers of technological solutions, with our customers, and especially our users, having such blind faith. Of course, people being the irrational things they undoubtedly are, that blind faith in the detail is often accompanied by a cynical disregard for overall competence: contrast the faith placed in ATMs and online bank account figures with the apparent level of trust in the banking industry as a whole.
As a little codicil to the story, I registered with a new doctor yesterday. The nurse asked me questions, took my blood pressure and so on, and loaded all the data she collected into a computer. The system was clearly ancient, with a display resembling what you typically got on a DOS 3.0 system. First thought: ‘OMG, why are they using such old software? That can’t be good.’ Second thought: ‘They’ve obviously been using it for years, so they really understand it, have ironed out all the bugs and it does what they need. It ain’t broke so they aren’t fixing it.’ But my instinctive suspicion of it for not being pretty was there, and I had to consciously correct myself.
Would you, as a service provider, prefer more questioning of what you package up and present to your customers and users, or are you happy to have that faith? My own view is that the more blind faith they have in you, the more the retribution will hurt if things do go wrong. Or perhaps that’s just me being cynical again?
David Ojalvo | Tags: software, ready-to-execute, business-partners, tivoli, ibm
I recently discovered ANOTHER great resource for IBM Business Partners. The IBM "Ready to Execute" initiative, which was originally launched internally to improve the quality of our marketing campaigns and drive higher-quality leads, has been extended to Business Partners. In a nutshell, Ready to Execute is a web-based model that provides the foundation and all the elements for launching an effective marketing campaign, including multi-touch e-mails, telemarketing scripts, digital strategies, and compelling offers.
As I began researching all the specifics of the program for our Business Partners, I stumbled upon a blog post from one of my colleagues in Software Group, Jacqi Levy, who has done a fabulous job of summarizing the benefits of the program, as well as providing a great overview on how Business Partners can get started on launching a campaign. Nicely done, Jacqi!
And those who want to jump right in and start executing can go directly to the Ready to Execute campaigns that are specific to the Midmarket!
Noah Kuttler | Tags: smarter-cities, service-management, research, africa
One of the coolest things about working at IBM is the global nature of our company.
I'm looking forward to seeing the work that our African IBM team is going to do in this space and can't wait to work with them on future projects.
Noah Kuttler | Tags: linked-data, cloud, service-management, tivoli, oslc-series, oslc
The following article was written with significant contributions from Cameron Allen, Pierre Coyne and Beth Sarnie.
Question of the day: why is IT agility so darn elusive?
Follow-up question: after spending multiple millions on technology to improve service delivery, quality and productivity, why do so many line of business executives perceive that IT is still not moving "fast enough"?
Siloed information presents a big speed bump on the road to agility. According to the 2012 IBM study of CEOs, high-performing organizations are able to access data 108% more, draw insights from that data 110% more, and act on that data 86% more than their underperforming peers.
Which brings us back to the specific problem: information exists, but it is not shared. It remains trapped in siloed tools and departmental applications. It's not just failing to move "fast enough"; it's not moving at all.
If you agree with ITIL and related methodologies, agility is directly linked to your IT processes. So while we can improve process methodology, connections across roles and functions, and specific technology silos with tools, if the data and resources cannot be freely shared across the process-enabling tools, then it's all for naught.
Going one level deeper, what is the cause of this 'information black hole', where data enters tools and is never seen again? Your reality is that you probably rely on a mix of multi-vendor tools, and those tools rely on proprietary APIs for integration. Making tools with different APIs communicate requires the IT equivalent of a team of United Nations translators, each an expert in their own application's main language (its API). Even when it succeeds, the herculean effort creates a constant maintenance cost, and it might not work well in the end; things will be lost in translation. And even single-vendor tool suites are notoriously difficult to integrate. A quick back-of-the-envelope calculation, sketched below, shows how fast that translation burden grows.
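The sketch below (the tool counts are arbitrary examples, not data from any study) compares point-to-point integration, where every pair of tools needs its own adapter, with a shared linking model, where each tool implements one common interface:

```python
# Back-of-the-envelope cost of point-to-point vs. shared-model integration.
# The tool counts are arbitrary examples, not data from any study.

def point_to_point(n):
    """Every pair of tools needs its own adapter: n*(n-1)/2 integrations."""
    return n * (n - 1) // 2

def shared_model(n):
    """Each tool implements one common interface: n integrations."""
    return n

for n in (5, 10, 20):
    print(f"{n:>2} tools: {point_to_point(n):>3} pairwise adapters "
          f"vs {shared_model(n):>2} with a shared model")

# Output:
#  5 tools:  10 pairwise adapters vs  5 with a shared model
# 10 tools:  45 pairwise adapters vs 10 with a shared model
# 20 tools: 190 pairwise adapters vs 20 with a shared model
```

The pairwise count grows roughly with the square of the number of tools, which is why the translation effort never really ends.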
So what can be done?
Stop for a moment and consider the best example that demonstrates simplicity of integration on a massive scale. It's the Internet. With the Internet, you can get information from millions of different web sites and all you need is a browser.
So, for argument's sake, if tools are the equivalent of web sites, then all we need are links to connect two tools. We can take that one step further, borrowing principles from social networks like LinkedIn or IBM Connections, where we can search for one person and see their relationships to other people, making it much easier to search for data across tools.
That, in essence, is OSLC (Open Services for Lifecycle Collaboration): a set of open, community-agreed specifications for linking tools using web technology. (And before you ask: no, it's not a standard, because apparently standards alone have not done the job.)
Data from any vendor's tool is registered in a directory, much like a search engine, where other tools can find it, see its relationships to other data, and access it via simple web-link technology. Not just similar to the Internet, but exactly like the Internet.
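Concretely, an OSLC resource is just an HTTP resource that serves RDF, so any tool, or even a small script, can fetch it and follow its links into other tools. Here is a minimal sketch using the Python requests and rdflib libraries; the URL and the resource it names are hypothetical placeholders, not a real endpoint:

```python
# Minimal sketch of consuming an OSLC resource as linked data.
# The URL below is a hypothetical placeholder, not a real OSLC endpoint.
import requests
from rdflib import Graph

resource_url = "https://tools.example.com/oslc/changerequests/123"

# OSLC resources are plain HTTP resources; content negotiation asks for RDF.
response = requests.get(resource_url, headers={"Accept": "application/rdf+xml"})
response.raise_for_status()

# Parse the RDF and walk the triples: each object that is itself a URL
# is a link we can follow into another tool, just like following web links.
graph = Graph()
graph.parse(data=response.text, format="xml")

for subject, predicate, obj in graph:
    print(f"{predicate} -> {obj}")
```

Because every participating tool speaks the same linked-data dialect, the same few lines work whether the resource lives in a change management tool, a monitoring tool, or a deployment tool; that is the "write once, reuse many" point below.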
What that means is that you can easily interconnect tools and processes. You can even replace tools with competitive tools, eliminating vendor lock-in. It also means you can re-purpose one integration across a series of 'like' tools: "write once, reuse many" inherently applies here. All of this translates into simpler and faster access to information by people and tools, better analytics leading to better decisions, and better automation of workflow.
Now, IT will be seen as agile.
No longer elusive.
This is the first in a series of articles we will be posting about OSLC. Feel free to leave your comments below. Be sure to listen to the podcast we did for OSLC on the Tivoli User Group - TUC Podcast: OSLC Series - Learn how Tivoli’s enhanced architecture strategy will help you simplify integration across products – IBM and Other Vendors, and don't forget to follow us on Twitter @servicemgmt.
Also, stay tuned to the blog for more in our series of articles about OSLC.