I was reminded by a recent customer call that only real sysprogs use
green screens. It reminded me of a very bright lady who worked with
my son's teachers; to paraphrase her, "If you only do what you know,
you only get what you have gotten." This whole discussion was about
leveraging a portal for system programmers as a way to provide visibility
for operations. I mentioned all the portals and GUIs being deployed
for folks to manage heterogeneous environments, but I didn't make much of
a dent in the portal-versus-green-screen debate. Of course, SHARE
is coming up, Pulse is coming up, and at System z Tech Universities I
watch other sysprogs and customers demonstrate and leverage portals for
operations, LOBs and end-to-end management. So let me ask the basic
questions: when have you folks been to a conference lately? How do you
know what other folks are doing, or what is available to leverage and
manage end-to-end solutions? In my case, I discussed
Integrated Performance Management where workload exists on multiple platforms
including the z. I think that when you work so long in an environment and don't
get outside of it to smell the roses, or in this case to listen and look at how
others attack problems by leveraging technologies, then you only do what you
know and the world passes you by. I certainly hope sysprogs out there
today can put equal value, for management, on both the green screen,
for deep-dive analysis of z platform resources, and a portal, for putting
non-z info (zBX, Linux on z, blades, DataPower, end-to-end transaction
tracking and other not-typical green screen info) into a GUI that
provides visibility across a plethora of IT resources.
For Tivoli z solutions, here is the link to the System z Management
community group on developerWorks.
Go get registered and get the latest and greatest tidbits and advice.
Several changes have taken place for customers of Tivoli, and especially of the OMEGAMON
products. One change that took effect on Nov 1 is how marketing enhancement requests will be
handled going forward. In the old process, if a marketing request was required, it took
a Level 2 or 3 person or an IBM rep to open the request and provide the customer with the MR number, and
perhaps they would get feedback on its progress. Many times it seemed to customers that these
requests went into some database from which no information was ever returned. In truth, the database being
used had never been "scrubbed"; when I took the product manager job, I inherited enhancement
requests over 5 years old. The times they are a-changing, which is good for everyone. The Tivoli portfolio
is moving to a new process for customers.
The Tivoli Request for Enhancement on developerworks is now the way the requests will be handled.
To read more on this, go to http://www-01.ibm.com/support/docview.wss?uid=swg21449404
Customers will be able to enter their requests directly, though they will need their customer number,
password, etc. The advantage for all parties is that you will be able to see other RFE requests, so
that if another customer has a requirement that you feel would also benefit your business, you can add
your name to that request and follow along. Most of the RFE information describing a request
can be read by everyone. On the Tivoli side, we will have a better opportunity to ask for more details without
several email trails trying to fully understand the request. Certain areas of a request
will remain private, such as the business case for the requirement and how it would benefit your company.
The new process sheds more light on the requests outstanding and the work going on, and it will
show you when requests become available, as this is now the front end of an
agile process where requirements can be driven right into a release plan. Rational has been using
this for several years, so I think all OMEGAMON customers will like this fresh approach. If you have an
old marketing request, there is no reason in the world why you couldn't open the same request using the
new process. It is a pilot, so expect perhaps a few glitches along the way, but overall I think it will
prove out best for the products and customers. It will be easier to prioritize work if 50 customers want
a particular enhancement and would see benefits, versus just one.
I also just returned from a meeting of the Swiss Tivoli User Group, where they had a dedicated z track for the
customers. In fact, I know the NYC TUG has at least one meeting each year focused on the z. If
you're a member of a TUG and think your group could benefit from a z track, let the Tivoli rep know
and we would be glad to support the meeting. They can always contact me directly. So, a bit of vacation
for me over the Thanksgiving holiday as I try to use up the days I have accrued. Enjoy yours.
As I discussed in my last blog: systems management or service management? What is the
level of interaction from z IT groups in integrating the z platform and systems? I see this
being addressed in three distinct directions. The first one is pretty simple: a centralized
focal point for event management. This seems so simple to do, yet because of a deep-seated
history of how events are managed, or the technologies that are in place, many customers today
still have no enterprise view of what would be deemed an "enterprise" event of which all of IT
should have awareness. This lack of visibility, and the roadblock of not being able to track
events, impedes the ability to do service management. Systems management might have several
event management systems, but the lack of integration among them
is costing the business time, dollars and additional processes. The second
is the understanding or discovery of the systems and subsystems and their relationships or
dependencies that are running on the z platform and being loaded into CMDBs. To provide
and manage a service such as Internet Banking, you must have a total
inventory of all the "stuff", and the relationships of the "stuff", that makes this service
available for the clients. For example, you need a web service available 24x7 as part
of this service. You would need TCP/IP, HTTP, DNS and DHCP all bolted up and
running all the time, plus access security, even before defining what transaction engine
and databases are being used. If you can't define the total inventory of "stuff"
that needs to be up and running, including the transactions of Internet Banking as a
service, that again is going to cost you time, dollars and additional processes.
Both of these IT functions can be done today, and they are the base building blocks for
moving to service management, which enables cloud, SaaS and other computing models.
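The "total inventory of stuff" point can be sketched in a few lines: a service is only deliverable when every dependency in its tree is up. This is a hypothetical illustration; the component names are invented and this is not how any particular CMDB product models it.

```python
# Toy dependency map: a service and the "stuff" it needs (names invented).
DEPENDS = {
    "InternetBanking": ["WebService", "TransactionEngine", "Database"],
    "WebService": ["TCP/IP", "HTTP", "DNS", "DHCP", "AccessSecurity"],
}

def required(service):
    """Flatten the dependency tree into the total inventory for a service."""
    stack, seen = [service], set()
    while stack:
        item = stack.pop()
        if item not in seen:
            seen.add(item)
            stack.extend(DEPENDS.get(item, []))
    return seen - {service}

def available(service, up):
    """The service is deliverable only if every dependency is up."""
    return required(service) <= up

up = {"WebService", "TransactionEngine", "Database",
      "TCP/IP", "HTTP", "DNS", "DHCP", "AccessSecurity"}
print(available("InternetBanking", up))            # True
print(available("InternetBanking", up - {"DNS"}))  # False -- lose DNS, lose the service
```

The point of the sketch: without the full inventory (the `DEPENDS` map), you cannot even ask the availability question, let alone answer it.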
For those of you who are subscribed, and those who haven't, the monthly electronic magazine zAdvisor is out.
You can catch up at http://www-01.ibm.com/software/tivoli/systemz-advisor/ Some tidbits in
this month's magazine include a write-up about the new alternative to installing and upgrading
the OMEGAMON product family with something other than ICAT. The ICAT tool has been around,
and updated constantly, since IBM acquired Candle. As part of the product management team
for the OMEGAMON family, I have traveled and discussed with customers how to improve their
experience working with the product set. ICAT was something that someone either really
liked or really loathed; there was no middle ground. In March of this year, a first
phase of giving you sysprogs an alternative, what we are calling a parmlib approach,
was released via some PTFs. In June, phase 2 was released, which adds a more
autonomic way of gathering system metrics to build the RTEs. The zAdvisor has a write-up
on some of the details of what the new parmlib approach brings. For those with large
OMEGAMON installs, this will definitely save you time and effort and, above all, make your
life easier, especially if you found ICAT something you had to retrain yourself on
each and every time you used it.
Also, for all those AF Oper customers, the Event Pump on z/OS now has a feed available so
messages that AF Oper sends to the syslog can be changed into EIF events and sent to OMNIbus or TBSMz.
This comes at a time when event management as a service is becoming a very important
part of cloud infrastructure service management. Being able to recognize, from a centralized
and consolidated product, the impact that conditions across multiple programs and management
attributes can bring is key. It helps catch what the architecture guys miss in the cloud
architecture, so you can avoid a complete outage. Likewise, it flags a bypass condition where a
failure somewhere in the infrastructure needs to be fixed before you lose your N+1 setup. Finally,
it shows when you're about to miss your SLAs for conditions that are not really an outage but a
performance or degradation-of-service condition, where an event has been generated from one of the
many monitoring tools or systems. Lots of reasons to get all these events to a single focal point.
For those heading to SHARE in Boston, look me up; I will be there.
Hard to believe it is already October. I hope readers of this got at
least some vacation during the summer. Since my last post I have
been to SHARE and zExpo, as well as many customer visits. At this time
the OMEGAMON family v5.1 releases now include Messaging, IMS and
Storage, as well as CICS, z/OS and DB2. The last of the releases,
Mainframe Networks, is about to come out the door. Many
companies are already in production and using the new enhanced
3270 user interface, as well as the other features and functions of
the portfolio, including zIIP enablement and the use of specialty
processors if they are available. I am amazed that the OEM vendors
claim they use fewer MIPS because they leverage specialty processors,
leaving out the fact that OMEGAMON does this also. They position it
as something only they provide when, in fact,
that is not the case. Hopefully, when folks compare
apples to apples rather than apples to starfish, OMEGAMON is not the villain
that some vendors make it out to be on MIPS usage. Most of these discussions
are what I would call marketecture vs. reality.
Meantime, on developerWorks, I have placed some documentation: an updated
Parmgen reference upgrade guide and some e3270ui workspace reference layouts,
kind of a cheat sheet for understanding the layouts. Also, many of you
have taken advantage of the OMEGAMON open house events that have taken
place around the world.
Pulse is coming March 3-6, and abstracts are being called for. We continue
to add z content to this event. This year the Automated Operations Technical
Conference will be co-located with the Pulse conference, just like last year.
The Parmgen lab will also be repeated.
Well, I promise to update this a bit more regularly than lately.
Here is the link to the System z Management team room on developerWorks.
Go get registered and get the latest and greatest tidbits and advice.
I spent time at the Automated Operations Technical Conference in Philadelphia last
week. Some pretty good updates on the automation portfolio, including Systems Automation
and AF Oper, plus some good details on the NetView 5.4 release, and all of it included some
down-the-road thinking. Which made me wonder: for us z folks, which conference would we
attend, if we had approval? I have some opinions of my own
here, but I would really like to understand where the subject matter experts on
z systems and subsystems go. There is, of course, SHARE. They have the early spring
session, just held in Seattle, and the summer SHARE session (the voting one) in Boston
in July. If you had to attend one, which would you go to? Then again, would you
go to zExpo, an STG-sponsored event? That runs once in Europe in
late spring and in NA in early fall. There are also choices like Impact and Pulse,
and, like I said, AOTC here in the States or EOTC, sponsored by the GSE group in
Europe. Are conferences not worth it? Can we get the info we need from the WWW, forum
discussions or LinkedIn groups? Certainly the networking is better at a conference, but
if you had to pick one, which one? I would be interested in hearing or reading any posted
comments on this. There are probably conferences I have not listed, CMG for one, but
is that where you would go to get a broad view of what is happening on the z platform? I know
with some conferences you could get a deep view of, say, workload schedulers or security, etc.,
but where, in your opinion, do you go to get a deep view of what is happening on the z?
Let me know.
I was discussing the need to centralize, as one process, the ability
to have a focal point for an event management system. Many customers today
have one group managing distributed systems and a different group
managing the z platform. This is an issue that has sometimes been
promoted over time, and the business ends up with two processes. In Tivoli,
for centralized management of events, OMNIbus is the product customers
deploy for integration of both z and distributed event management. In
December, Tivoli released the Event Pump on z/OS v4.2.1. This product
reads the syslog for messages that the IT group has registered
as important, changes them into events and forwards them to OMNIbus.
This gives customers with Tivoli the ability to centralize all events to a
focal point. The new release of the Event Pump on z/OS has increased the
out-of-the-box capability for handling messages, as well as picking up new
message feeds. The Event Pump on z/OS gives customers out-of-the-box
value, as the product is delivered with a best practice of important
messages already registered to be changed into events and forwarded to
OMNIbus. RMF III, CICS TDQ and CPSM were recently added
to DB2, IMS, CICS, SA, OPS/MVS and TWS, and new feeds are generally added about
every quarter. Many sysprogs have perhaps built their own way of integrating
events, but the Event Pump on z/OS has a lot of benefits versus roll-your-own (RYO). With
the new release, the zPump can handle messages with
the same message ID by using literals in the message text to
distinguish the differences. Also, there is the capability of adding
data to the messages as they are changed into events for operations
management. Another nice item, versus the RYO approach, is that
in a plex environment, a "down" event saying something is
offline can be cleared by a message from a different system. The zPump
has awareness of the messages, so the sysprogs don't have to figure out all
the particulars of which system generates the clearing messages.
The other item here is that the Event Pump for z/OS also has a toolkit,
named the Data Source Customizer, which gives sysprogs the ability to
add messages, or create new messages, that can be registered and monitored. So
a very easy way for sysprogs to integrate z messages into a centralized
event management system is the zPump. Of course, OMNIbus or Netcool
is required, but compared with trying to keep up with new messages, upgrades to
new releases, etc., it is a nice solution.
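The core idea (registered messages become events, and a clearing message from any system in the plex resolves the earlier event) can be sketched as follows. This is a hypothetical illustration: the message IDs, the registry layout and the event fields are all invented, not the actual Event Pump internals or the EIF wire format.

```python
# Registry: message ID -> (severity, optional clearing message ID).
# Both IDs are invented for illustration.
REGISTERED = {
    "MSG001E": ("CRITICAL", "MSG002I"),  # resource offline / resource back online
    "MSG450I": ("WARNING", None),        # no clearing partner
}

def pump(syslog_lines, open_events):
    """Turn registered syslog messages into events; let clears close them."""
    # Reverse map: clearing message ID -> the message ID it resolves.
    clears = {clr: msg for msg, (_, clr) in REGISTERED.items() if clr}
    emitted = []
    for line in syslog_lines:
        system, msgid, text = line.split(" ", 2)
        if msgid in REGISTERED:
            sev, _ = REGISTERED[msgid]
            event = {"system": system, "msgid": msgid,
                     "severity": sev, "text": text}
            open_events[msgid] = event  # keyed by msgid, not by system, so a
            emitted.append(event)       # clear from ANY system can resolve it
        elif msgid in clears:
            open_events.pop(clears[msgid], None)  # clearing message closes it
    return emitted

open_events = {}
events = pump(["SYSA MSG001E resource offline",
               "SYSB MSG002I resource online"], open_events)
print(len(events), open_events)  # 1 {} -- SYSB's message cleared SYSA's event
```

The plex behavior the post describes falls out of the keying choice: because open events are tracked by message ID rather than by originating system, the sysprog does not have to know which LPAR will emit the clearing message.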
As the year winds down, probably the last conference of the year was CMG in Orlando. Even in
December, I think the attraction is that it is in a warm place; however, for those attending, it was
quite chilly, so no warm breezes. I presented and discussed using the z platform as a key for cloud.
Simply put, the characteristics of the z platform give it some basic principles that make it an
enabler for jump-starting cloud projects. When introducing or trying to roll out a cloud project, there
are some discussion points or obstacles that need to be understood. For example, I discussed what
would be required just to provide a web service, a key access point for all customers because
of the use of the internet. This would be basic infrastructure delivery. You would need to ensure
24-hour availability of IP, HTTP, DHCP and DNS just to put out the shingle. Then there is security,
spinning out some storage, usage and billing, etc. This all needs to be done prior to even
advertising the services available via the cloud. So why the z? The first obstacle would be security, and
with the z you get a multi-tenant design point with EAL 5 certification. It supports virtualization,
a share-all approach to system resources, creating the scalability required to reduce costs and deliver
standardized services. It provides availability measured in years, 24x7x365, with zero-data-loss recovery.
It is efficient when you look at the energy and power required, with up to an 80% reduction in costs versus
the same computing power in a distributed environment. Finally, there is scale, the ability to handle
massive demand from users and data. When interviewed, CxOs point to the importance of cloud for their
enterprise. It could be public, private or hybrid, but they all point out it is a way to reduce
costs and deliver services for their consumers. It was interesting that at CMG, the attendees who
sat through this session all thought their IT departments need to be working on a cloud
strategy, and that for the infrastructure, the z folks had a better grasp of how to leverage what you have
versus buying more IT technology. More about this, and managing a zEnterprise, in my next update.
So, for those folks going to zSymposium in Vienna, please look me up, as I will be there covering a topic
entitled "The New Virtualized Data Center". It seems that as IT organizations move to use virtualization to
save costs, floor space and all those benefits, some groups run on their own, so instead of having a virtualized
platform, they end up creating even more silos within the IT groups. There are a lot of choices out there for
hypervisors, OS support, etc., and a lot of good data on why z/VM and Linux on z is the most cost effective if
your IT shop has z/VM. Another thing for our OMEGAMON XE users: we have been working on a
performance tuning document. I always chuckle when I listen to all the hype about a vendor's claim
that they use fewer MIPS than a Tivoli solution. The concern I see is that the
depth and breadth of what is monitored is in the eye of the system programmer. When you actually get
down to the research, or comparable apples-to-apples monitoring, much of it is the same: one
monitor might use less on z/OS but more when monitoring CICS. The approach seems to be to discuss
the one monitor where the claim might be true, then use that to justify all the monitors. In most cases,
a full suite of performance monitors for the z/OS systems and subsystems, including
z/VM and Linux, is not available from most vendors like it is with OMEGAMON. Beyond that, systems that were installed
5 years ago, or even last year, still need to be tuned as z platforms change. The field technical support
folks who travel and work with customers to help reduce MIPS consumption, by doing health checks on
production-level z systems using the OMEGAMON portfolio and giving tuning recommendations,
have helped put together a best practices tuning book. The PDF is located on developerWorks at this url.
System hygiene is important, so this document will help, as it gives you tips and hints on how to reduce
MIPS usage. A must read for system programmers. I know it will help.
I have been trying to catch up on things but simply never seem to have
enough time. Last week I was out at SHARE in Seattle. It seemed to be
about the same size and attendance as the one in Austin. The keynote on z
this year was from Tom Rosamilia of IBM, and STG in particular. He spent time
going over the game plan for how the z platform and operating system are
constantly changing to handle new workloads, applications and data. There were
some great charts on 13 customer scenarios where customers had decided that, for
some reason, an application would be more cost effective to run on a distributed
open systems platform. I get asked this question all the time in my travels.
I was just at the Total Solutions on System z event in Amsterdam, where
a system programmer asked for help with this business justification. The
question seems to come up at least several times a year, with the idea that
moving off the z would be a cost savings. So the chart gave 13 examples from different
customers where an analysis was done on this concept. The answer was not
much of a surprise to the audience, but the details of the data gathered
for the analysis were what I would call a total sizing. I would bet that if you log on
to the SHARE web site, this presentation would be available to view and peruse.
It goes back to the same discussions that happened at PULSE a week earlier,
about when a unit of work becomes cheaper on a
z platform than on multi-core distributed platforms. That discussion put
the crossover point at something around 250 MIPS.
There were a lot of presentations at SHARE this year on virtualization of everything. It
seems a lot of customers are beyond getting their feet wet and are now running workloads
on z/VM and Linux on z. A Tivoli perspective on this is that most of our tooling
runs on Linux on z in 64-bit mode, and customers can use the z platform as a
centralized platform for managing the end-to-end enterprise. The approach
announced at PULSE in 2008 was that we needed to port our key
applications for integrated service management to Linux on z, which is what
development has delivered on. Customers wanting to use the z platform for
its scalability and availability are now able to use both native z/OS and z/VM
with Linux on z to build out these service management applications. When
summer SHARE happens in Boston, I believe we will see more and more customer
presentations on virtualization and on how customers are integrating tooling solutions
to generate better efficiencies for the IT staff, reduce costs and improve
For those of you who are subscribed, and those who haven't, the monthly electronic magazine zAdvisor is out.
You can catch up at http://www-01.ibm.com/software/tivoli/systemz-advisor/ If you haven't subscribed, there are always
good things to review, such as Rocky's corner for some performance tips. I was at SHARE earlier this month and
have been digesting information about the new zEnterprise. The good thing for those getting z196 machines is that,
from what I can figure out, the Tivoli software you're using today will work just like it does today. Some new
things, like the optimizer blades, will be managed by OMEGAMON XE for DB2, etc., as new functions are exploited
for the fit-for-purpose workloads the machine is designed for. For those folks who have added z/OSMF to
the z, the OMEGAMON development team has written a technote, which should be posted shortly on the z/OSMF website, on
how to add a link for OMEGAMON from the performance links displayed in z/OSMF. Some very cool discussions are
now going on as to how organizations will manage events, performance and operations views: as
different systems, or as a system of systems? I have heard all types of discussions on this. Some are saying
that, for the time being, they will manage it as components, yet they also say it will help them
step up and start managing based on service delivery. I think the new frontier will be service
delivery. Looking at the zEnterprise as something like a local machine, but then as a plex, a group
of plexes or a group of ensembles, managing individual technologies will have to be coordinated across
a workload view of the fit-for-purpose workloads and applications. Changing something in a blade without coordinating with
the rest of the ensemble could have a different impact if you don't understand the services being provided by
that piece of the system-of-systems technology. I believe it may get IT staff and strategy acting as one
component aligned with the business as it gets adopted. For those who say the z and systems management will
never change, that it is what it is, some new thinking is in line. As I get new info, I will post in my blog
some of the things I am finding out about approaches, and a pragmatic view of what you can expect. This
was just general announcement #1, the GA1 info; GA2, it sounds like, is being updated and getting ready to
put on the truck. The next blog will cover more details on GA1 and the ISM approach.
For our IMS folks.
OMEGAMON XE for IMS v420 IF3 was just made generally available
and is focused on two areas: improved granularity for
application metrics, and usability enhancements to Application Trace.
Previously, OMEGAMON collected application metrics based on an application
schedule. That is, if an application was scheduled and processed 10 messages
before terminating, the application metrics displayed were cumulative,
based on processing all 10 messages.
IF3 has modified this and now displays application metrics based on unit of recovery (UOR).
In this fashion, runaway applications are easier to spot. In addition, twenty-five new metrics
have been added to the OMEGAMON 3270 and TEP views. Some of these include new IMS
application calls (ICAL), external subsystem calls, elapsed times for intent conflicts,
pool space and application scheduling, as well as VSAM/OSAM I/O counts.
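Why per-UOR granularity matters for spotting runaways can be shown with a tiny, hypothetical example (the numbers are invented): one bad message in a schedule of ten is invisible in a cumulative figure but obvious per UOR.

```python
# Hypothetical CPU time (ms) for each of 10 messages in one application
# schedule; each message is one unit of recovery (UOR).
messages_cpu_ms = [5, 6, 5, 480, 5, 6, 5, 5, 6, 5]

# Old style: one cumulative figure for the whole schedule.
cumulative = sum(messages_cpu_ms)
print(cumulative)  # 528 -- the runaway message is buried in the total

# New style: per-UOR figures make the runaway easy to spot.
runaways = [(i, cpu) for i, cpu in enumerate(messages_cpu_ms) if cpu > 100]
print(runaways)    # [(3, 480)] -- message 3 is the problem
```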
OMEGAMON XE for IMS v420 greatly enhanced the application trace capabilities. OMEGAMON XE
for IMS v420 IF2 added CPU and elapsed times for DL/I, DB2, and MQSeries calls. We've
enhanced these capabilities in IF3 by making them a bit easier to use. An application trace
repository has been added which retains all application trace requests. These requests,
once activated, will remain active across OMEGAMON restarts if so desired. The management of
application traces is easier with an updated management interface.
This updated interface allows more trace filtering options, as well as the ability to
add, delete and clone trace requests. In addition, the interface allows viewing by trace
request, saving screen switching. The trace filtering interface has also been greatly enhanced:
one can specify multiple transactions, programs, abend codes and scheduling classes, among others.
New to filtering is the ability to filter by region type and by elapsed and/or CPU time, either
in total or by DL/I, DB2 or MQSeries call type.
The largest enhancement to application trace in IF3 is the exception journal. One can set up
an exception trace specifying service level commitments in total application elapsed time,
abend conditions, elapsed time by call type (DL/I, DB2 and/or MQSeries), and total CPU time
for the application or by call type. This is an especially useful feature for higher-volume
shops where 99.9% of the applications run within the defined service level commitment times.
For those problem transaction instances, application trace will capture the trace data and save
it off in a new exception repository. This should make misbehaving transactions
quicker to locate.
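Conceptually, the exception journal is a filter: only records that abend or break a stated commitment are kept. A minimal sketch, with invented field names and thresholds (the real IF3 options differ):

```python
# Hypothetical service level commitments (ms). Field names are illustrative.
THRESHOLDS = {
    "total_elapsed_ms": 500,  # total application elapsed time
    "total_cpu_ms": 100,      # total CPU time
    "db2_elapsed_ms": 200,    # elapsed time for one call type
}

def exceptions(trace_records):
    """Journal only the records that abended or broke a commitment."""
    journal = []
    for rec in trace_records:
        abended = rec.get("abend_code") is not None
        over = any(rec.get(k, 0) > limit for k, limit in THRESHOLDS.items())
        if abended or over:
            journal.append(rec)
    return journal

# In a shop where 99.9% of transactions meet their commitments, the
# journal stays small and the problem instances are easy to find.
traffic = [
    {"tran": "PAY1", "total_elapsed_ms": 40, "total_cpu_ms": 8},
    {"tran": "PAY2", "total_elapsed_ms": 900, "total_cpu_ms": 12},    # slow
    {"tran": "PAY3", "total_elapsed_ms": 35, "abend_code": "U4038"},  # abend
]
print([r["tran"] for r in exceptions(traffic)])  # ['PAY2', 'PAY3']
```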
The best thing about this IF3 is that it had a beta with several customers. Their input on how
it should work, act and function was part of the agile development process, reviewed monthly
with the customers requesting these functions, which enabled them to retire another vendor's product.
Well, it was a great week at Pulse here in Vegas. I was surprised
by the number of customer presentations and the detail on how they
were using the Tivoli portfolio. For those who attended, it was
a way to get healthy with all the walking from the conference center
back to the lobby and the rooms. BOA, T-Systems and Key Bank gave just
some of the very cool presentations on how they view enterprise management
and deliver services to their clients. The presentations included
the z platforms, and applications running on the z, as crucial pieces
in delivering and managing services. A lot of the solutions discussed
have the customers deploying on Linux on z running on z/VM. The
presentations were a great example of how customers are deploying using
the Service Management Center for z concept and managing using the z as a hub.
Another great presentation covered how one customer was using Linux on z
running on z/VM as a way to do cloud computing, based on costs, scalability
and the characteristics of the z. Many of the customer
presentations were about moving forward on ITIL processes at several
different levels. They showed business metrics of how they are tracking
and succeeding in gaining control over change and incidents,
as well as moving from systems management to service management.
There were many training sessions, hands-on labs and demos, and the solution center
was packed with business partner solutions. I attended a presentation
that showed how well integrated the tools have become.
It started out with ITCAM for Transactions, showing the topology
at different levels, as a help desk or customer service group might
look at the enterprise, end to end across both
distributed and z domains. Then the speaker generated a problem and, bang,
it showed up in the ITCAM for Transactions topology, and off we in the audience
went to solve the problem: doing some analysis between MQ, CICS and DB2
to isolate the problem, then moving seamlessly
into OMEGAMON XE for a deeper dive and further analysis, as the audience
guided him to the actual failure and how it could be repaired. Lots of
other presentations gave migration ideas and end-to-end solutions. Although
there wasn't a z track this year, customers included the z in presentations
and discussions because of the ever-expanding critical nature of service
delivery and the z platform's reliability. There was also a
discussion of how, at a little over 250 MIPS, the UOW becomes cheaper
to deploy on a z versus a distributed platform. That was probably an
eye opener for a lot of attendees.
OMEGAMON v5.1 will be generally available on March 9th. Announced on Feb 7th, this
release is what I would call a customer-guided piece of development work. As part
of the effort, Tivoli is expanding the Early Access Program (EAP). The EAP is a way
for customers and system programmers to follow along, comment and improve how
they want the OMEGAMON tools to function and work. The 5.1 releases follow
what we call problem-solving scenarios. The purpose of the scenarios is to ensure
that we don't just add metrics without thought of how they relate to problem
solving. A major change required by the customers was to stop just putting data
on a screen, and instead format the data as information organized around how they
use the tools to analyze and correlate key performance indicators. In z/OS, it is
about moving from a plex view to an LPAR view to an address space view, organizing
the flow of information to keep related KPIs associated with the problems
defined by the system programmers, as they tell us the common problems and the
information they need to get quickly to the root cause from the same green screen.
The new Early Access Program will involve all OMEGAMON products. With the new
integrated architecture, it makes sense for us to have only one customer program
where, if you participate, you can follow along and comment on Messaging, DB2, IMS,
Storage and Mainframe Networks, as well as the next CICS and z/OS development work.
It helps both development and customers focus on how the tools are being used today.
For example, a CICS SME told us that one major issue in their environment was that
a user would call and say their session was hung. The normal procedure was then to
get a large amount of data and start whittling it down with sorts and filters, trying
to find the user's session. Business as usual was for the monitor to return 140
thousand rows of data, spiking the CPU, then to reduce and reduce. They told us that on
average this process took about 2 hours. Problem-solving scenarios like this are
where innovation rules. Now, we have instituted a FIND
command. The end user can usually tell you his user ID, and by typing FIND and the
user ID, the query not only brings back significantly less data and
reduces CPU cycles, but also shows where the user is connected, in about a minute. It is
so cool that plans are now to leverage a FIND or locate command in all the other
monitors. In the EAP you can follow along with only one monitor
or multiple; there is no need to actually download code, the sessions last about
an hour, and all are recorded for playback. So, if you're a system programmer and want to
have an impact on the next release, think of it as designing the tool you use
to solve problems you need to solve, then join the program. And, last, look
me up at Pulse or SHARE in Atlanta.
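The payoff of the FIND story above can be sketched in a few lines of Python. This
is purely illustrative: the session fields, row counts, and helper names are
invented, and in the real product the narrowing happens in the monitor's query
itself (so far fewer rows ever come back), not in client-side code as shown here.

```python
# Illustrative sketch only: field names and data are hypothetical stand-ins
# for the 140-thousand-row CICS session query described above.

def fetch_all_sessions():
    """Stand-in for 'business as usual': return every session row."""
    return [{"userid": f"USER{n:05d}", "region": f"CICSP{n % 8}"}
            for n in range(140_000)]

def find(userid, sessions):
    """FIND-style lookup: keep only the rows for one user ID."""
    return [row for row in sessions if row["userid"] == userid]

sessions = fetch_all_sessions()
hits = find("USER00042", sessions)
print(hits[0]["region"])
```

The whittling-with-sorts-and-filters procedure is the `fetch_all_sessions`
path; FIND collapses it to one targeted lookup keyed on the thing the end
user can actually tell you, their user ID.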
Join the EAP.
Steps to participate:
1. Complete and submit the nomination form.
For Customers and Business Partners, please use the following link
(you will need to login with your IBM Web ID):
Well, they say you can't teach an old dog new tricks. Things that have existed for years
and decades will simply continue to exist. Why do something new or change?
So I am a guy who has worked in IBM for 36 years, but has changed jobs and
positions every 5 years or so. I guess I have job hopped within the same business
and been luckier than most to be able to do it. The one thing that has been pretty
constant in all of this is that the z platform was here before I started with IBM and
will be here long after I leave IBM. I guess you can call it a survivor even though
there seem to be articles every year touting its impending doom. I think, like
my job hopping, you pick up new skills, address new challenges, and adapt to
keep yourself valuable to the business. It would seem to me that the z has done
the same and continues to adapt and change, but keeps the same value proposition
that major businesses require as they move through time.
One interesting idea that these changing times have created is a big stir for the z and all z
users: is the green screen the only presentation service that is required, or that
will ever be required, for the z subject matter experts? This has drawn different
opinions from many of the businesses I talk with as well as operations folks,
tooling support folks and others in IT that are starting to see alternatives show
up on the doorstep, regardless of whether it is needed or not. As far as I have read,
much of the case, or say the imperative, for alternatives to green screens
is that the kids (I myself am far from a kid), or people entering the IT world,
come from a gaming background or are so steeped in web browsers and pop-ups that
the only way to keep the business going is to change or adapt the old technologies
to keep IT support and the z platform viable and to have a way to
bring new talent into the IT business.
In particular, the z stands out here because of the vast number of green screens
in use today and the idea that, with a greying workforce (that could be me), there
is a critical shortage of upcoming people who would find it appealing enough
to work on z systems or subsystems if all they got was that plain old green screen.
That to me is pretty interesting, as part of a product manager team with
Tivoli and OMEGAMON discussing system management requirements
with customers, because I think that down the road there will be requirements
where both the green screen and the vast arrays of GUIs, with Web 2.0 and beyond,
will have a pretty dramatic impact on the IT folks who support the z and z subsystems.
I think the idea of having a reach and range of information about how IT systems
are working or not working will create what I call a funnel system that works
in several ways.
A basic premise is that if you want to manage the business effectively and with
an eye on the TCO of all the running technologies deployed, you integrate an
end-to-end perspective, and ensure that any bump in the night generates an
event/alert and that gets sent to a focal point where someone or some automation
routine is running. This seems to be a standard operation within a mature systems
management IT shop. This data is filtered, reduced and hopefully has
a meaningful message when it gets to the end of the funnel and someone looks at it.
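The filter-and-reduce funnel just described can be sketched as a small Python
routine. The event fields, sources, and severity thresholds below are invented
for illustration; real shops would feed this from their own event/alert stream.

```python
# Hypothetical sketch of the "funnel": raw events from many platforms are
# filtered and reduced before a focal point (person or automation) sees them.

RAW_EVENTS = [
    {"source": "z/OS",  "severity": "INFO",     "msg": "SMF interval complete"},
    {"source": "CICS",  "severity": "CRITICAL", "msg": "Region short on storage"},
    {"source": "Linux", "severity": "WARNING",  "msg": "Disk 85% full"},
    {"source": "DB2",   "severity": "CRITICAL", "msg": "Lock escalation"},
]

def funnel(events, forward=("CRITICAL", "WARNING")):
    """Keep only events worth the focal point's attention, worst first."""
    order = {"CRITICAL": 0, "WARNING": 1}
    kept = [e for e in events if e["severity"] in forward]
    return sorted(kept, key=lambda e: order[e["severity"]])

for event in funnel(RAW_EVENTS):
    print(f'{event["severity"]:8} {event["source"]:6} {event["msg"]}')
```

The point of the sketch is the shape, not the code: noise drops out at each
stage, and what reaches the end of the funnel is a short, ordered list with a
meaningful message attached.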
Since we are talking about an end-to-end view and consolidation of this information....
Should it be a green screen? An MCS for the enterprise?
I think that the diverse information being received precludes that from ever becoming
a reality in IT. As companies staff help desks and customer service centers, it would
seem that graphics and web GUIs are taking the predominant role, with all sorts of
capabilities to sort, display, and relate impacts on the business.
In fact, this area seems to change every 6 months, with new internet technologies
being introduced all the time. There are different use cases and personas using this
web technology as the first line of awareness, even if the trouble ticket is cut
automatically when something goes bump in the night. The event shows up at this
integrated web/portal-based GUI, or in some businesses several portals/GUIs, where
the different personas might each have a different IT process to work. So, I think the case
for having this only in a green screen has passed.
Now, the issue is that if the bump in the night happened on the z or a z subsystem, haste
and speed to resolve are a priority because of the potential loss of $$. So if the bump
in the night is shown in a nice graphical web-based portal, and we all know that web
response time is not what you would call as dynamic as a green screen, it creates the dilemma
that IT people are discussing. Is there only one choice here, or multiple?
Is it better to have a wide range of data available to view, or do we need a
quick reach into the z system and subsystems with speed?
Next up Funneling out..