A recent customer call reminded me of the old line that only real sysprogs use
green screens. It brought to mind a very brilliant lady who worked with
my son's teachers; to paraphrase her, "If you only do what you know,
you only get what you have gotten." The whole discussion was about
leveraging a portal for system programmers as a way to provide visibility
for operations and the like. I mentioned all the portals and GUIs being deployed
to manage heterogeneous environments, but I didn't make much of a dent in the
portal-versus-green-screen image. Of course, SHARE is coming up, Pulse is
coming up, and at the System z Tech Universities I watch other sysprogs and
customers demonstrate and leverage portals for operations, LOBs, and end-to-end
management. So I'll ask the basic questions:
When have you folks been to a conference lately?
How do you know what other folks are doing, or what is available
to leverage and manage end-to-end solutions? In my case, I discussed
Integrated Performance Management where workload exists on multiple platforms
including the z. I think that when you work so long in an environment and don't
get outside it to smell the roses, or in this case to listen and look at how
others attack problems by leveraging technology, then you only do what you
already know and the world passes you by. Certainly, I hope sysprogs out there
today can put equal value, for management purposes, on both the green screen,
for deep-dive analysis of z platform resources, and a portal, for putting
non-z information such as zBX, Linux on z, blades, DataPower, and end-to-end
transaction tracking, none of it typical green-screen material, into a GUI
that provides visibility across a plethora of IT resources.
For Tivoli z solutions, here is the link to the System z Management
community group on developerWorks.
Go get registered and get the latest and greatest tidbits and advice.
Several changes for customers of Tivoli, and especially of the OMEGAMON products, have taken place. One
change that took effect on Nov 1 is how marketing enhancement requests will be handled
going forward. In the old process, if a marketing request was required, a Level 2 or 3
person or an IBM rep would open the request and give customers the MR number, and
perhaps they would get feedback on its progress. Many times it seemed to customers that these
requests went into some database from which no information ever returned. In truth, the database being
used had never been "scrubbed"; when I took the product manager job, I inherited enhancement
requests over five years old. The times they are a-changing, which is good for everyone. The Tivoli
portfolio is moving to a new process for customers.
The Tivoli Request for Enhancement (RFE) on developerWorks is now how requests will be handled.
To read more on this, go to http://www-01.ibm.com/support/docview.wss?uid=swg21449404
Customers will be able to enter their requests directly, though they will need their customer number,
password, etc. The advantage for all parties is that you will be able to see other RFEs, so
that if another customer has a requirement you feel would also benefit your business, you can add
your name to that request and follow along. Most of the RFE information describing the request
can be read by everyone. On the Tivoli side, we will have a better opportunity to ask for more details
without several email trails trying to fully understand the request. Certain areas of the request
will remain private, such as the business case for the requirement and how it would benefit your company.
This sheds more light on the process, the outstanding requests, and the work going on, and it will
show you when requests become available, since it is now the front end of an
agile process where requirements can be driven straight into a release plan. Rational has been using
this for several years, so I think all OMEGAMON customers will like this fresh approach. If you have an
old marketing request, there is no reason in the world why you couldn't reopen the same request through the
new process. It is a pilot, so expect perhaps a few glitches along the way, but overall I think it will
prove best for the products and customers alike. It is easier to prioritize work when you can see that 50
customers want a particular enhancement, and would see benefits, versus just one.
I also just returned from a meeting of the Swiss Tivoli User Group, where they had a dedicated z track for the
customers. In fact, I know the NYC TUG has at least one meeting each year focused on the z. If
you're a member of a TUG and think your group could benefit from a z track, let your Tivoli rep know
and we would be glad to support the meeting. They can always contact me directly. So, a bit of vacation
for me over the Thanksgiving holiday as I try to use up the days I have accrued. Enjoy yours.
As I discussed in my last blog: systems management or service management? What is the
level of interaction from z IT groups in integrating the z platform and systems? I see this
being addressed in three distinct directions. The first is pretty simple: a centralized
focal point for event management. This seems so simple to do, yet because of a deep-seated
history of how events are managed, or of the technologies already in place, many customers today
still have no enterprise view of what would be deemed an "enterprise" event, one that all of IT
should be aware of. This lack of visibility, and the roadblock of not being able to track
events, impedes the ability to do service management. A shop might have several
event management systems, but the lack of integration among them
is costing the business time, dollars, and additional processes. The second
is the understanding, or discovery, of the systems and subsystems running on the z
platform, along with their relationships and dependencies, being loaded into CMDBs. To provide
and manage a service such as Internet banking, you must have a total
inventory of all the "stuff", and the relationships among the "stuff", that makes this service
available to the clients. For example, you need a web service available 24x7 as part
of this service. You would need TCP/IP, HTTP, DNS, and DHCP all bolted up and
running all the time, plus access security, even before defining which transaction engine
and databases are being used. If you can't define the total inventory of "stuff",
including the transactions, that makes Internet banking available as a service and
that needs to be up and running, then again, that is going to cost you time, dollars,
and additional processes.
Both of these IT functions can be done today, and they are the base building blocks for
moving to service management, which enables cloud, SaaS, and other computing models.
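To make the "inventory of stuff" idea concrete, here is a minimal Python sketch of a service dependency model. The service and component names are invented for illustration and are not from any real CMDB:

```python
# Hypothetical sketch: model a service as an inventory of components
# and their dependencies, then ask what must be up for the service and
# which services a single component outage impacts.

SERVICE_DEPENDENCIES = {
    "internet-banking": ["web-frontend", "transaction-engine", "database"],
    "web-frontend": ["tcpip", "http-server", "dns", "dhcp", "access-security"],
    "transaction-engine": ["cics"],
    "database": ["db2"],
}

def full_inventory(service, deps=SERVICE_DEPENDENCIES):
    """Return every component the service transitively depends on."""
    seen = set()
    stack = [service]
    while stack:
        item = stack.pop()
        for dep in deps.get(item, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

def impacted_services(component, deps=SERVICE_DEPENDENCIES):
    """Which registered services break if this component goes down?"""
    return {s for s in deps if component in full_inventory(s)}

print(sorted(full_inventory("internet-banking")))
print(sorted(impacted_services("dns")))
```

Even this toy model answers the two questions that matter for service management: the total inventory behind a service, and the blast radius of one component failing.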
For those of you who are subscribed, and those who aren't: the monthly electronic magazine zAdvisor is out.
You can catch up at http://www-01.ibm.com/software/tivoli/systemz-advisor/ Tidbits in
this month's issue include a write-up about the new alternative for installing and upgrading
the OMEGAMON product family with something other than ICAT. The ICAT tool has been around,
and constantly updated, since IBM acquired Candle. As part of the product management team
for the OMEGAMON family, I have traveled and discussed with customers how to improve their
experience with the product set. ICAT was something you either really
liked or really loathed; there was no middle ground. In March of this year, a first
phase of giving you sysprogs an alternative, what we are calling the parmlib approach,
was released via some PTFs. In June, phase 2 was released, which adds a more
autonomic way of gathering system metrics to build the RTEs. The zAdvisor has a write-up
on some of the details of what the new parmlib approach brings. For those with large
OMEGAMON installs, this will definitely save you time and effort and, above all, make your
life easier, especially if you found ICAT something you had to retrain yourself on
each and every time you used it.
Also, for all the AF Oper customers: the Event Pump on z/OS now has a feed available, so
messages that AF Oper sends to the syslog can be changed into EIF events and sent to OMNIbus or TBSMz.
This comes at a time when event management as a service is becoming a very important
part of cloud infrastructure service management. A centralized, consolidated product that can
recognize impacts across multiple programs and management attributes is key: for understanding
what the architecture folks missed in the cloud architecture before it causes a complete outage;
for catching a bypass condition where a failure somewhere in the infrastructure needs fixing
before you lose your N+1 setup; and for warning when you are about to miss SLAs on conditions
that are not really outages but performance degradations of a service, where an event has been
generated by one of the many monitoring tools or systems. Lots of reasons to get all these
events to a single focal point.
For those that are heading to SHARE in Boston, look me up, I will be there.
I have been discussing the need for a centralized focal point for event management,
as one process. Many customers today have one group managing distributed systems and a
different group managing the z platform. This is an issue that has been reinforced over
time, and the business ends up with two processes. In Tivoli,
OMNIbus is the product customers deploy for centralized management of events,
integrating both z and distributed event management. In
December, Tivoli released Event Pump on z/OS v4.2.1. This product
reads the syslog for messages that the IT group has registered
as important, changes them into events, and forwards them to OMNIbus.
This gives Tivoli customers the ability to centralize all events at a single
focal point. The new release of Event Pump on z/OS increases the
out-of-the-box message handling and picks up new message feeds.
It delivers out-of-the-box value: a best-practice set of important messages
comes already registered to be changed into events and forwarded to OMNIbus.
RMF III, CICS TDQ, and CPSM were recently added to the existing
DB2, IMS, CICS, SA, OPS/MVS, and TWS feeds, and new feeds are generally added about
every quarter. Many sysprogs have built their own way of integrating
events, but the Event Pump on z/OS has a lot of benefits versus roll-your-own (RYO). With
the new release, the zPump can handle messages that share a message ID,
using literals in the message text to distinguish them. There is also the
capability of adding data to messages as they are changed into events for
operations management. Another nice item versus the RYO approach: in a plex
environment, an event reporting that something is offline can be cleared by a
message from a different system. The zPump has awareness of the messages, so
sysprogs don't have to figure out the particulars of which system generates
the clearing messages.
The Event Pump for z/OS also ships a toolkit named the Data Source Customizer,
which gives sysprogs the ability to add messages, or create new ones, that can be
registered and monitored. So the zPump is a very easy way for sysprogs to integrate
z messages into a centralized event management system. Of course, OMNIbus (Netcool)
is required, but it beats trying to keep up with new messages, upgrades to new
releases, and so on yourself. A nice solution.
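To illustrate the general idea, here is a hypothetical Python sketch (not the actual Event Pump implementation or its configuration syntax) of message registration, literal disambiguation for a shared message ID, and a clearing message arriving from a different system in the plex. The message IDs, literals, and resource names are invented:

```python
# Illustrative only: register syslog message IDs, use a literal in the
# message text to tell apart messages sharing an ID, and let a clearing
# message from ANY system resolve an open "down" event.

REGISTRY = [
    # (msg_id, required_literal_or_None, severity, msg_id_it_clears)
    ("DSI001I", None,       "critical", None),
    ("IEF404I", "OMEGAMON", "warning",  None),      # literal disambiguation
    ("DSI002I", None,       "clear",    "DSI001I"), # clears the down event
]

open_events = {}  # (msg_id, resource) -> event dict

def on_syslog(system, msg_id, text, resource):
    """Turn a registered syslog message into an event; ignore the rest."""
    for reg_id, literal, severity, clears in REGISTRY:
        if msg_id != reg_id or (literal and literal not in text):
            continue
        if clears:
            # The clearing message may come from a different system
            # than the one that raised the event.
            open_events.pop((clears, resource), None)
            return {"resource": resource, "severity": "clear"}
        event = {"resource": resource, "severity": severity,
                 "system": system, "text": text}
        open_events[(msg_id, resource)] = event
        return event
    return None  # unregistered message: not forwarded

on_syslog("SYSA", "DSI001I", "NETVIEW DOWN", "netview")
print(len(open_events))   # 1
on_syslog("SYSB", "DSI002I", "NETVIEW UP", "netview")
print(len(open_events))   # 0: cleared by a message from a different system
```

In the real product the forwarding target would be OMNIbus via EIF; the point here is only the registration and clearing logic that a roll-your-own solution would otherwise have to maintain by hand.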
Hard to believe it is already October. I hope readers of this blog got at
least some vacation during the summer. Since my last post I have
been to SHARE and zExpo, as well as on many customer visits. At this point
the OMEGAMON family v5.1 release includes Messaging, IMS, and
Storage, as well as CICS, z/OS, and DB2. The last of the releases,
Mainframe Networks, is about to come out the door. Many
companies are already in production, using the new enhanced
3270 user interface as well as the other features and functions of
the portfolio, including zIIP enablement and the use of specialty
processors where they are available. I am amazed that the OEM vendors
claim they use fewer MIPS because they leverage specialty processors,
while leaving out the fact that OMEGAMON does this too. They present it
as something only they provide when, in fact, that is not the case.
Hopefully, once folks compare apples to apples rather than apples to
starfish, OMEGAMON will not be the villain on MIPS usage that some
vendors make it out to be. Most of these discussions
are what I would call marketecture versus reality.
Meanwhile, on developerWorks, I have posted some documentation: an updated
PARMGEN reference upgrade guide and some e3270ui workspace reference layouts,
a kind of cheat sheet for understanding the layouts. Also, many of you
have taken advantage of the OMEGAMON open house events that have taken
place around the world.
Pulse is coming March 3-6, and the call for abstracts is open. We continue
to add z content to this event. This year, just like last year, the Automated
Operations Technical Conference will be co-located with Pulse,
and the PARMGEN lab will be repeated.
Well, I promise to update this a bit more regularly than lately.
Here is the link to the System z Management team room on developerWorks.
Go get registered and get the latest and greatest tidbits and advice.
I spent time at the Automated Operations Technical Conference in Philadelphia last
week. There were some pretty good updates on the automation portfolio, including
System Automation and AF Oper, some good details on the NetView 5.4 release, and
some down-the-road thinking throughout. Which made me wonder: for us z folks, if we
had approval to attend one conference, which would it be? I have some of my own opinions
here, but I would really like to understand where the subject matter experts on
z systems and subsystems go. There is of course SHARE, with its early-spring
session, just held in Seattle, and its summer session (the voting one) in Boston
in July. If you had to attend one, which would you go to? Then again, would you
go to zExpo, an STG-sponsored event that runs once in Europe in
late spring and once in North America in early fall? There are also choices like
Impact and Pulse, and, as I said, AOTC here in the States or EOTC, sponsored by the
GSE group in Europe. Are conferences not worth it? Can we get the information we
need from the web, forum discussions, or LinkedIn groups? Certainly the networking
is better at a conference, but if you had to pick one, which one? I would be
interested in hearing or reading any posted comments on this. There are probably
conferences I have not listed, CMG for one, but is that where you would go to get
a broad view of what is happening on the z platform? I know that at some conferences
you can get a deep view of, say, workload schedulers or security, but where, in your
opinion, do you go to get a deep view of what is happening on the z?
Let me know.
As the year winds down, the last conference of the year for me was probably CMG in Orlando. Even in
December, I think part of the attraction is that it is in a warm place; however, for those attending,
it was quite chilly, so no warm breezes. I presented and discussed using the z platform as a key
enabler for cloud. Simply put, the z platform has characteristics and basic principles that can
jump-start cloud projects. When introducing or rolling out a cloud project, there
are discussion points and obstacles that need to be understood. For example, I discussed what
would be required just to provide a web service, which would be a key access point for all customers
because of the use of the internet. This is basic infrastructure delivery: you would need to ensure
24-hour availability of IP, HTTP, DHCP, and DNS just to put out the shingle. Then there is security,
spinning up some storage, usage and billing, and so on. All of this needs to be done before even
advertising the services available via the cloud. So why the z? The first obstacle would be security,
and with the z you get a multi-tenant design point with EAL 5 certification. It supports virtualization,
a share-all approach to system resources that creates the scalability required to reduce costs and deliver
standardized services. It provides availability measured in years, 24x7x365, with zero-data-loss recovery.
It is efficient when you look at the energy and power required, with up to an 80% reduction in costs versus
the same computing power in a distributed environment. Finally, there is scale: the ability to handle
massive demand from users and data. When interviewed, CxOs point to the importance of cloud for their
enterprise. It could be public, private, or hybrid, but they all point out that it is a way to reduce
costs and deliver services to their consumers. Interestingly, the CMG attendees who
sat through this session all thought their IT departments need to be working on a cloud
strategy, and that for the infrastructure, the z folks had a better grasp of how to leverage what you
already have versus buying more IT technology. More about this, and about managing a zEnterprise, in my next update.
So for those folks going to zSymposium in Vienna, please look me up, as I will be there covering a topic
entitled "The New Virtualized Data Center". It seems that as IT organizations move to virtualization to
save costs, floor space, and all those benefits, some groups run off on their own, so instead of having one
virtualized platform, they end up creating even more silos within the IT groups. There are a lot of choices
out there for hypervisors, OS support, and the like, and a lot of good data on why z/VM and Linux on z are
the most cost-effective choice if your IT shop has z/VM. Another thing for our OMEGAMON XE users: we have
been working on a performance tuning document. I always chuckle when I listen to the hype about a vendor's
claim of using fewer MIPS than a Tivoli solution. The concern I see is that the
depth and breadth of what is monitored is in the eye of the system programmer. When you actually get
down to doing the research and comparing apple-to-apple monitoring, much of it is the same: one
monitor might use less on z/OS, but more when monitoring CICS. The approach seems to be to discuss
the one monitor where the claim might be true, but then use that to justify all the monitors. And in
most cases, a full suite of performance monitors for the z/OS systems and subsystems, including
z/VM and Linux, is not available from most vendors the way it is with OMEGAMON. Meanwhile, systems
that were installed five years ago, or even last year, still need to be tuned as z platforms change.
The field technical support folks who travel and work with customers to help reduce MIPS consumption,
doing health checks on production z systems using the OMEGAMON portfolio and giving tuning
recommendations, have helped put together a best-practices tuning book. The PDF is located on
developerWorks at this url.
System hygiene is important, and this document will help, as it gives you tips and hints on how to reduce
MIPS usage. A must-read for system programmers. I know it will help.
For those of you who are subscribed, and those who aren't: the monthly electronic magazine zAdvisor is out.
You can catch up at http://www-01.ibm.com/software/tivoli/systemz-advisor/ If you haven't subscribed, there
are always good things to review, such as Rocky's corner for performance tips. I was at SHARE earlier this
month and have been digesting information about the new zEnterprise. The good news for those getting z196
machines is that, from what I can figure out, the Tivoli software you're using today will work just like it
does today. Some new things, like the optimizer blades, will be managed by OMEGAMON XE for DB2 and so on,
as new functions are exploited for the fit-for-purpose workloads the machine is designed for. For those
folks who have added z/OSMF to the z, the OMEGAMON development team has written a technote, which should
be posted shortly on the z/OSMF website, on how to add a link to OMEGAMON from the performance links
displayed in z/OSMF. Some very interesting discussions are now going on about whether organizations will
manage events, performance, and operational views as separate systems or as a system of systems. I have
heard all types of opinions. Some say that, for the time being, they will manage it as components, yet
they also say it will help them step up and start managing based on service delivery. I think service
delivery will be the new frontier. Looking at the zEnterprise first as something like a local machine,
but then as a plex, a group of plexes, or a group of ensembles, managing individual technologies will
have to be coordinated across a workload view of the fit-for-purpose workloads and applications. Changing
something in a blade without coordinating with the rest of the ensemble can have an unexpected impact if
you don't understand the services being provided by that piece of the system-of-systems technology. I
believe that, as this gets adopted, it may get IT staff and strategy acting as one component aligned with
the business. For those who say the z and systems management will never change, that it is what it is,
some new thinking is in order. As I get new information, I will post in this blog what I am finding out
about approaches, with a pragmatic view of what you can expect. This was just general announcement #1,
the GA1 info; GA2, it sounds like, is being updated and getting ready to put on the truck. The next blog
will cover more details on GA1 and the ISM approach.
I have been trying to catch up on things but simply never seem to have
enough time. Last week, I was out at SHARE in Seattle. It seemed to be
about the same size and attendance as the one in Austin. The z keynote
this year was given by Tom Rosamilia of IBM, STG in particular. He spent time
going over the game plan for how the z platform and operating system are
constantly changing to handle new workloads, applications, and data. There were
some great charts on 13 scenarios where customers had decided that, for
some reason, an application would be more cost-effective to run on a distributed
open-systems platform. I get asked this question all the time in my travels.
I was just at the Total Solutions on System z event in Amsterdam, where
a system programmer asked for help with exactly this business justification. The
question seems to come up several times a year, with the idea that the move
would be a cost savings. So the chart gave 13 examples from different
customers where an analysis was done on this concept. The answer was not
much of a surprise to the audience, but the detail of the data gathered
for the analysis was what I considered a total sizing. I would bet that if you log on
to the SHARE website, this presentation is available to view and peruse.
It goes back to the discussions that happened at Pulse a week earlier,
about when a unit of work becomes cheaper on a z platform than on
multi-core distributed platforms. That discussion put
the crossover point at something around 250 MIPS.
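The crossover argument can be sketched with a toy cost model. To be clear, the dollar figures below are invented purely to reproduce the shape of the argument; only the roughly 250 MIPS crossover point comes from the discussion:

```python
# Back-of-the-envelope sketch of the crossover idea: the z platform has
# a high fixed cost but a low marginal cost per MIPS, while distributed
# capacity is cheap to enter but costs more per MIPS-equivalent at scale.
# All dollar figures are hypothetical illustrations.

def annual_cost_z(mips):
    # high fixed cost, low marginal cost per MIPS (invented numbers)
    return 300_000 + 800 * mips

def annual_cost_distributed(mips):
    # low entry cost, higher cost per MIPS-equivalent (invented numbers)
    return 2_000 * mips

# Find the smallest sustained workload where the z becomes cheaper.
crossover = next(m for m in range(1, 1000)
                 if annual_cost_z(m) <= annual_cost_distributed(m))
print(crossover)  # 250 with these illustrative numbers
```

With these made-up coefficients the break-even lands at exactly 250 MIPS; the real analysis presented at the conferences would of course rest on measured customer data, not a two-line model.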
There were a lot of presentations at SHARE this year on virtualization of everything. A
lot of customers seem to be beyond getting their feet wet and are now running workloads
on z/VM and Linux on z. A Tivoli perspective on this: most of our tooling
runs on Linux on z in 64-bit mode, so customers can use the z platform as a
centralized platform for managing the end-to-end enterprise. The approach
announced at Pulse in 2008 was that we needed to port our key
applications for integrated service management to Linux on z, which is what
development has delivered on. Customers who want the z platform for
its scalability and availability are now able to use both native z/OS and z/VM
with Linux on z to build out these service management applications. By the time
summer SHARE happens in Boston, I believe we will see more and more customer
presentations on virtualization and on how customers are integrating tooling solutions
to generate better efficiencies for the IT staff, reduce costs, and improve service delivery.
Well, it was a great week at Pulse here in Vegas. I was surprised
by the number of customer presentations and the detail of how they
are using the Tivoli portfolio. For those who attended, it was also
a way to get healthy, with all the walking from the conference center
back to the lobby and the rooms. BOA, T-Systems, and Key Bank gave just
some of the very cool presentations on how they view enterprise management
and deliver services to their clients. The presentations included
the z platform, and the applications running on it, as crucial pieces
to be included in delivering and managing services. A lot of the solutions
discussed have customers deploying on Linux on z running on z/VM. The
presentations were a great example of how customers are deploying the
Service Management Center for z concept and managing with the z as a hub.
Another great presentation showed how one customer is using Linux on z
under z/VM for cloud computing, based on cost, scalability,
and the characteristics of the z. Many of the customer
presentations were about moving forward on ITIL processes at several
different levels. They showed business metrics of how they are tracking
and succeeding in gaining control over change and incidents,
as well as moving from systems management to service management.
There were many training sessions, hands-on labs, and demos, and the solution
center was packed with business partner solutions. I attended a presentation
that showed how well integrated the tools have become.
It started with ITCAM for Transactions showing the topology
at different levels, the way a help desk or customer service group might
look at the enterprise, end to end across both distributed and z domains.
Then the speaker generated a problem and, bang, it showed up in the ITCAM
for Transactions topology, and off we in the audience went to solve it.
After some analysis across MQ, CICS, and DB2 to isolate the problem, he
moved seamlessly into OMEGAMON XE for a deeper dive and further analysis
as the audience guided him to the actual failure and how it could be
repaired. Lots of other presentations offered migration ideas and end-to-end
solutions. Although there wasn't a z track this year, customers included the
z in their presentations and discussions because of the ever-expanding
critical nature of service delivery and the reliability of the z platform.
There was also a discussion of how, at a little over 250 MIPS, a unit of
work becomes cheaper to deploy on a z than on a distributed platform.
That was probably an eye-opener for a lot of attendees.
For our IMS folks:
OMEGAMON XE for IMS v420 IF3 has just been made generally available
and is focused on two areas: improved granularity for
application metrics, and usability enhancements to Application Trace.
Previously, OMEGAMON collected application metrics based on an application
schedule. That is, if an application was scheduled and processed 10 messages
before terminating, the application metrics displayed were cumulative
across all 10 messages.
IF3 changes this: application metrics are now displayed per unit of recovery (UOR).
In this fashion, runaway applications are easier to spot. In addition, twenty-five
new metrics have been added to the OMEGAMON 3270 and TEP views, including new IMS
application calls (ICAL), external subsystem calls, elapsed times for intent conflicts,
pool space, application scheduling, and VSAM/OSAM I/O counts.
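A tiny numeric example (with made-up figures) shows why per-UOR granularity matters: one runaway unit of recovery hides inside a cumulative average over the whole schedule, but stands out immediately when metrics are reported per UOR:

```python
# Illustrative numbers only: CPU milliseconds for ten units of recovery
# processed under one application schedule, one of them a runaway.
cpu_ms_per_uor = [12, 11, 13, 12, 950, 11, 12, 13, 12, 11]

# Cumulative (schedule-level) view: the runaway is smeared across all
# ten messages and merely looks like a somewhat slow application.
cumulative_avg = sum(cpu_ms_per_uor) / len(cpu_ms_per_uor)
print(round(cumulative_avg, 1))   # 105.7

# Per-UOR view: a simple threshold pinpoints the culprit at once.
THRESHOLD_MS = 100
runaways = [(i, cpu) for i, cpu in enumerate(cpu_ms_per_uor)
            if cpu > THRESHOLD_MS]
print(runaways)                   # [(4, 950)]
```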
OMEGAMON XE for IMS v420 also greatly enhanced the application trace capabilities.
IF2 added CPU and elapsed times for DL/I, DB2, and MQSeries calls; IF3 makes
these capabilities a bit easier to use. An application trace
repository has been added which retains all application trace requests. Once
activated, these requests can remain active across OMEGAMON restarts if so desired.
Managing application traces is easier with an updated management interface,
which allows more trace filtering options as well as the ability to
add, delete, and clone trace requests. In addition, the interface allows viewing
by trace request, saving screen switching. The trace filtering itself has also been
greatly enhanced: one can specify multiple transactions, programs, abend codes, and
scheduling classes, among others. New to filtering is the ability to filter by region
type and by elapsed and/or CPU time, either in total or by DL/I, DB2, or MQSeries call type.
The largest enhancement to application trace in IF3 is the exception journal. One can set up
an exception trace specifying service level commitments on total application elapsed time,
abend conditions, elapsed time by call type (DL/I, DB2, and/or MQSeries), and total CPU time
for the application or by call type. This is an especially useful feature for higher-volume
shops where 99.9% of applications run within the defined service level commitment times.
For the problem transaction instances, application trace will capture the trace data and save
it off in a new exception repository, which should make misbehaving transactions
quicker to locate.
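As a rough sketch of how such an exception filter behaves (the field names and threshold values here are illustrative, not OMEGAMON's actual configuration), a transaction is journaled when it abends or breaches any commitment:

```python
# Hypothetical sketch of an exception-journal filter: retain trace data
# only for transactions that abend or breach a service-level commitment
# on total elapsed time, total CPU, or elapsed time by call type.

COMMITMENTS = {
    "total_elapsed_ms": 500,
    "total_cpu_ms": 200,
    "elapsed_ms_by_call": {"DLI": 100, "DB2": 150, "MQ": 100},
}

def is_exception(txn, sla=COMMITMENTS):
    if txn.get("abend_code"):
        return True
    if txn["total_elapsed_ms"] > sla["total_elapsed_ms"]:
        return True
    if txn["total_cpu_ms"] > sla["total_cpu_ms"]:
        return True
    for call_type, limit in sla["elapsed_ms_by_call"].items():
        if txn.get("elapsed_ms_by_call", {}).get(call_type, 0) > limit:
            return True
    return False

traces = [
    {"id": "T1", "total_elapsed_ms": 40,  "total_cpu_ms": 10,
     "elapsed_ms_by_call": {"DB2": 20}},                      # within SLA
    {"id": "T2", "total_elapsed_ms": 650, "total_cpu_ms": 30,
     "elapsed_ms_by_call": {"DB2": 600}},                     # breach
]
journal = [t["id"] for t in traces if is_exception(t)]
print(journal)  # ['T2']
```

The 99.9% of well-behaved transactions produce nothing, so only the misbehaving instances accumulate in the exception repository.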
The best thing about this IF3 is that it had a beta with several customers; their input on how
it should work, act, and function was part of the agile development process and was reviewed monthly
with the customers requesting these functions, which enabled them to retire another vendor's product.
Well, a bit of vacation and catching up. A couple of things to let you all know
about. The OMEGAMON documentation team has come up with some improvements that provide
an easier way to use all the documentation delivered with the products. The OMEGAMON XE
Integrated Information Center is a redesigned way to access the information: instead of
looking within one book for something, a search now covers the many books that are
delivered with the product. Hopefully this will make it easier to find what you're
looking for. It is worth going and looking at the quick start section, which provides
guidance on how to use the changes that have been added. It has been vastly improved to
help install and configure the products. So I recommend you all check it out.
This week is zExpo in Boston, where it is a bit grey and rainy, much different than SHARE week;
even so, great attendance, with a lot of focus on zEnterprise. In discussions with the
attendees I found a lot of z/OS background folks but not many z networking folks. For the
zEnterprise and its new network connectivity through the OSAs, I would like to point out that
OMEGAMON XE for Mainframe Networks has support for these connections and their performance
monitoring. Folks considering attaching the blade extensions would benefit from understanding
and monitoring these networks proactively, rather than trying to debug with pings and the like
when a performance degradation appears. Is it the network, or a constraint in the LPAR or the
blades? That question gets more crucial when you are managing by workload and fit for purpose.
Visibility into the OSA CHPID types helps here: for example, if you have a connection from an
app server to CICS and know it is on the internal network, you would set higher expectations
for throughput and response time, so you might set a lower threshold on those connections,
with a situation to trigger an event sooner and notify both the network management folks and
the folks in charge of zHMC administration to debug what is going on in the virtual servers
and the internal network. A lot of customers today work on just making sure the connections
are available, but in the future, beyond the connections themselves, performance will be a key
KPI to monitor.
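The idea of tighter thresholds on internal connections can be sketched like this; the connection names, threshold values, and notification targets are all invented for illustration:

```python
# Illustrative sketch: connections over the internal network (e.g. app
# server -> CICS via an internal OSA CHPID) get a tighter response-time
# threshold than external ones, so a situation fires sooner and notifies
# both the network team and the zHMC admins.

THRESHOLD_MS = {"internal": 5, "external": 50}  # hypothetical limits

def evaluate(conn):
    """Return an alert dict when a connection breaches its threshold."""
    limit = THRESHOLD_MS[conn["network"]]
    if conn["response_ms"] > limit:
        return {"connection": conn["name"],
                "notify": ["network-mgmt", "zhmc-admin"],
                "reason": (f"{conn['response_ms']}ms > {limit}ms "
                           f"on {conn['network']} network")}
    return None

connections = [
    {"name": "appsrv->CICS", "network": "internal", "response_ms": 12},
    {"name": "branch->web",  "network": "external", "response_ms": 30},
]
alerts = [a for a in map(evaluate, connections) if a]
print([a["connection"] for a in alerts])  # ['appsrv->CICS']
```

Note that 12 ms would pass unnoticed under the external threshold; the per-network expectation is what surfaces the internal degradation early.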
So my last discussion here on the blog was about conferences and which ones we
z-focused folks attend. I was at zExpo in Berlin last week with about
400-450 other z-oriented folks, with a lot of interaction and networking going on
most of the week. I thought the conference was great. Many sessions were repeated,
which gave those of us who always fear picking the wrong session a chance to
actually see both options. The next big conference, I guess, is back to Boston
for SHARE in August. At zExpo I spent time talking about
the concept of moving from systems management, which for z folks is old hat, to
service management. It seems simple, but talking with the folks in attendance
it was easy to see that many of us still manage by the technology we are
responsible for. So, if you are responsible for the z/OS operating system, that
is your focus. When you talk about cloud computing, SaaS, and other
service-oriented solutions, the idea of just managing the system or the z
applications kinda falls flat. A service offered in a cloud could depend on more
than one application, plus security, storage, and networks, and all of those
items need to be managed as a coordinated action by IT and measured together as
the service. For z folks, it means understanding that your SME skills are just
part of the team that enables the business to deliver the service that the
clients are paying for. That, I believe, is what ITIL v3 is about: enabling IT
to move out of the silos and establish the service definition that makes the
business successful. I was surprised that many of my conversations with
attendees at the conference ran along the lines of, "Yes, I understand that, but
my group or department has not been involved in any of these discussions." That
was pretty disturbing to me: the back-end systems are generating all this
business value via transactions, yet when a service delivery organization gets
involved, the z folks are only somewhat participating. It would be nice to see
some presentations from IT folks on how they got organized for service management
and are being successful. Anyway, this is a short thread on the concept of
uplifting systems management to the next level, service management, which is what
will deliver cloud, reduce TCO, improve ROI, and create a leaner IT that works
smarter, not harder. That is something we can all rally around.