Service Management on System z Blog
barbara kennedy | Tags: virtualization tivz ztiv mainframe
Perils, pitfalls and problems of managing heterogeneous virtualization
Recently I spoke with Jasmine Noel, founding partner of Ptak Noel and Associates, LLC, regarding her research on the increasing drive for virtualization. What follows are some interesting observations -
"Enterprises keep asking for more agility (i.e. deploy more stuff even faster) and lower capital costs (i.e. minimize idle resources). Virtualization seems like the perfect answer. Virtual images can be deployed faster to provide more agility. More virtual images can be packed onto fewer physical systems to lower capital costs.
If only that were the end of the story.
Datacenter operations teams are realizing that virtualization is not a homogeneous, one-size-fits-all solution. Enterprises seem intent on acquiring different virtualization platforms as well as heterogeneous hardware platforms. This heterogeneous virtualization is changing systems administration in some very fundamental ways:
• Virtualization decouples the traditional server and OS management from the physical hardware. This means administrators are being given a bunch of new, overlapping tools to manage the different virtual systems, which increases the potential for errors that impact business services.
• Virtual image management becomes more important (because someone will have to clean out the inevitable virtual sprawl) and more complex (because someone will have to minimize the business risk of high-speed image deployment).
• Matching dynamic workloads to virtual resources that can be easily and automatically added, deleted, moved or changed can’t be done manually. Imagine trying to hitch a bucking rodeo bronco to a wagon that changes size every few minutes -- which doesn’t bode well for guaranteed completion of critical business workloads.
There is no escaping these issues. Every enterprise has virtualization strategies for different types of infrastructure and applications. No single form of virtualization fits every aspect of those strategies. Thus there will be different hypervisors in the enterprise -- heterogeneity will persist. So what can be done about it?
• Start educating executives that a virtual image needs proactive administration to keep it healthy. They also need to understand that managing a roster of hundreds of virtual images is vastly different from keeping a spreadsheet list of 10-20 golden configurations. IT staff will need a major productivity boost to keep up.
• Start seriously thinking about managing resources via policies, since it’s the only way to control changes that can happen automatically.
• Start evaluating which of the repetitive administrative activities can be done in a cross-platform, cross-VM manner. The more those tasks can be automated, the more time there’ll be to spend on avoiding the perils, pitfalls and problems of managing heterogeneous virtualization."
IBM is a client of Ptak Noel and has provided compensation to Ptak Noel for participation in this interview.
barbara kennedy | Tags: mainframe omegamon tivz service-mgmt
Surely you have heard the buzz around Pulse. We are now 10 days from the opening bell and hopefully you all will be there to meet in person, share best practices, and learn something. Just a few of the “special for this event” highlights:
Meet the Experts!
Talk one-on-one with Product Experts
Two Experts on hand at all times
Walk up during open hours
Schedule a Meeting
Submit a question to a Kiosk
Sunday, 2/21/10 6:30 pm – 8:30 pm
Monday, 2/22/10 12:00 pm – 8:00 pm
Tuesday, 2/23/10 12:00 pm – 6:00 pm
The Expo will include a live z10 with demos dedicated to System z software; new, improved, and integrated software will be on hand to see and discuss. Our business partners will be actively sharing successes and capabilities.
There are 22 sessions within the Pulse curriculum with System z content, many presented by businesses like yours. There will also be 2 z Expo theatre sessions and 2 Birds of a Feather sessions. AND, a System z reception on Monday evening.
It is NOT too late to register and attend.
barbara kennedy | Tags: mainframe tivz service-management ibmtivoli virtualization
Perils, pitfalls and problems need solutions. So, I asked Jasmine Noel to continue her discussion and to focus on what an enterprise might do to address the issues. Jasmine -
Tools to Tame Heterogeneous Virtualization Management
As I mentioned in my initial blog, virtualization is elevating the daily stress of over-extended datacenter administrators to alarming levels. Higher business agility and lower capital costs through virtualization can’t be achieved without sophisticated, complete, and always-on management of heterogeneous virtual computing resources. In other words, agile management of heterogeneous virtualization is critical for business agility and profitability. Hence, the reason administrator stress levels are going through the roof is that they don’t have agile management of heterogeneous virtualization.
It is vital, then, and not optional, that IT organizations become adept at matching increasingly dynamic needs (be they automated workload schedules or on-demand service requests) with the resource flexibility afforded by virtualization’s rapid provisioning strengths. As such, system administrators are quickly realizing that they need a different type of management solution. So what are the characteristics of this new solution type?
A policy-based approach is a key characteristic of this new solution type. Virtualization enables application images to move to different virtual machines very quickly (in some cases before admins get a chance to check their emailed service tickets). Policies enable IT to introduce some necessary controls around those moves and changes. Policies can also simplify the workload to resource matching process. For example, policies can ensure that multiple workloads all run on the same virtual machine in a particular datacenter location, or incorporate time of day requirements so that a workload spans ten virtual machines during the day but thirty virtual machines at night.
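The time-of-day policy described above can be sketched in a few lines. This is purely illustrative, under no particular product's API: the `Policy` class and `target_vm_count` method are hypothetical names for the idea of mapping a clock hour to a virtual machine count.

```python
# Illustrative sketch of a time-of-day scaling policy: a workload spans
# ten virtual machines during the day and thirty at night, as in the text.
# Class and method names are hypothetical, not a real policy-engine API.
from dataclasses import dataclass

@dataclass
class Policy:
    day_vms: int    # VM count during the daytime window
    night_vms: int  # VM count outside that window
    day_start: int  # hour (0-23) the daytime window opens
    day_end: int    # hour the daytime window closes

    def target_vm_count(self, hour: int) -> int:
        """Return how many VMs the workload should span at a given hour."""
        if self.day_start <= hour < self.day_end:
            return self.day_vms
        return self.night_vms

batch_policy = Policy(day_vms=10, night_vms=30, day_start=8, day_end=18)
print(batch_policy.target_vm_count(14))  # daytime  -> 10
print(batch_policy.target_vm_count(23))  # overnight -> 30
```

The point of the sketch is that once the rule is expressed declaratively, the move/change itself can be automated and audited rather than performed by hand.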
Centralized management of dense computing infrastructures (be they blades, packaged cloud systems, or mainframes) appears to be a natural extension of this phenomenon because, for the first time, administrators truly need a centralized management platform to manage virtual systems. Many IT organizations are learning the hard way about the sprawl created when virtual images are deployed at will with minimal oversight or visibility into how the environment is changing. System administrators need management solutions that make oversight and resource visibility as quick and seamless as deploying the virtual images. Solutions that afford centralized control over all virtualization options and extend across a diverse infrastructure enable optimal use of administrators’ time.
Besides centralization, the solution should also have a workflow-based approach to automation embedded in its design. Automating the many other tasks (patching, security checks, compliance checks, etc.) that surround virtual image deployment drives down the business risk of high-speed image deployment. When these tasks are orchestrated as complete workflows, IT productivity skyrockets, which gives administrators the time to focus on policy design and decision-related resource analysis.
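A minimal sketch of that workflow idea, assuming nothing about any particular product: each surrounding task runs as an ordered step, and the chain halts before deployment if any step fails. All function and variable names here are hypothetical.

```python
# Illustrative only: a tiny workflow runner chaining the tasks named in the
# text (patching, security checks, compliance checks) around a deployment.
# Step names are hypothetical, not a Tivoli or vendor API.
def apply_patches(image):     print(f"patching {image}");         return True
def security_scan(image):     print(f"security scan {image}");    return True
def compliance_check(image):  print(f"compliance check {image}"); return True
def deploy(image):            print(f"deploying {image}");        return True

WORKFLOW = [apply_patches, security_scan, compliance_check, deploy]

def run_workflow(image):
    """Run each step in order; stop at the first failure."""
    for step in WORKFLOW:
        if not step(image):
            return False  # halt rather than deploy an unverified image
    return True

run_workflow("websphere-image-42")
```

Orchestrating the steps as one unit is what removes the per-image manual effort; the administrator designs the chain once instead of repeating each check by hand.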
There is no escaping these solution requirements. The system management status quo can’t deliver the agile management of heterogeneous virtualization that is essential for business agility and profitability. What administrators can do is demand that management vendors prove how they deliver on these requirements.
IBM is a client of Ptak Noel and has provided compensation to Ptak Noel for participation in this interview.
Wayne Bucek | Tags: mainframe servicemanagement ztiv
OMEGAMON XE for Messaging is a complete WMQ management solution. Many customers focus exclusively on the performance monitoring aspect of the product, which is the ability to monitor WebSphere Message Broker and WebSphere MQ, and disregard the WMQ configuration agent.
The configuration agent is a robust, feature-rich component of the solution. However, the features and associated benefits of the configuration component are distinctly different from those of the monitoring components. When discussing the WMQ Configuration agent, I like to begin by covering the business benefits it offers. The following items (IMHO) represent the business benefits of the WMQ Configuration agent.
* Centralized administration of WebSphere MQ objects
* Single GUI across platforms
* Eliminates configuration errors
* Assists in dealing with large numbers of objects
* Disaster recovery tool
The list of features offered by the WMQ Configuration agent is too lengthy to cover here, but a full accounting can be found in the product documentation.
Future blog entries will cover this material in more detail.
Many customers manage the performance of their MQ environments based on requirements from the application development community. These requirements are often as simple as monitoring the current depth of the queues used by the application. When the current depth of a queue exceeds the threshold specified for the application, an alert must be raised. This relatively straightforward process can become an administrative nightmare in an environment where thousands of queues need to be monitored with this level of granularity. In that case, an OMEGAMON XE for Messaging situation would be required to monitor each queue. This design results in a high cost of collection, along with a high cost of ownership, as the administrative burden of implementing this scheme is prohibitive.
A better way to approach the problem is to group queues into queue depth threshold brackets. For example, queues A, B, and C are all evaluated against a current depth threshold of 10 or more. Queues D, E, and F are evaluated against the next higher bracket, perhaps a current depth greater than 50. Bracket thresholds are determined based on the current depth requirements specified by the application owners. As many brackets as needed can be defined.
The key to the solution lies in the assignment of a queue to a threshold bracket. Customers will seldom have a queue naming standard based on their queue depth monitoring needs. However, OMEGAMON XE for Messaging provides a creative solution to the problem: OMEGAMON is sensitive to the queue definition's description field. The description field, typically a comment, can be modified to indicate which queue depth threshold bracket the queue should be evaluated against. Then a single OMEGAMON XE situation can be constructed covering multiple (perhaps all) queue depth threshold brackets. This situation would use an 'OR' construct, essentially checking for the combination of Current Depth exceeding the bracket threshold AND the description field matching the bracket tag, across the multiple specified brackets.
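The bracket scheme can be sketched as ordinary code to make the logic concrete (the real implementation is an OMEGAMON XE situation, not a program): a bracket tag in the queue's description field selects the threshold, and a single OR'd check covers all brackets. `BRACKETS`, `should_alert`, and the tag names are illustrative, not product identifiers.

```python
# Illustrative sketch of the bracket scheme: each bracket tag, embedded in a
# queue's description field, maps to a depth threshold. One combined check
# (the 'OR' construct in the text) covers every bracket. All names here are
# hypothetical; the real mechanism is an OMEGAMON XE situation.
BRACKETS = {
    "DEPTH10": 10,  # alert when current depth reaches 10 or more
    "DEPTH50": 50,  # alert when current depth reaches 50 or more
}

def should_alert(current_depth: int, description: str) -> bool:
    """Alert if the queue's depth meets the threshold its description names."""
    return any(
        tag in description and current_depth >= limit
        for tag, limit in BRACKETS.items()
    )

print(should_alert(12, "app A work queue DEPTH10"))  # True: 12 >= 10
print(should_alert(12, "app D work queue DEPTH50"))  # False: 12 < 50
```

One check replaces thousands of per-queue situations, which is exactly where the cost-of-collection and cost-of-ownership savings come from.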
Wayne Bucek | Tags: mainframe servicemgmt ztiv
Hi. I am Wayne Bucek, a 20-year veteran in the systems management arena. When I have to come off the bike trails, I am a Consulting IT Specialist for zTivoli solutions. My favorite part of the job is to assist customers with the selection, design, and implementation of IBM zTivoli products. I am IBM-certified in WebSphere MQ.
Prior to joining the vendor community, I had a real job: 10 years' experience across the disciplines of z/OS systems programming, application programming, and performance management.
I hope to make this blog a place z techies can come to link their real world business problems to zTivoli solutions.
Have you seen this video? It's fun. It was produced to help celebrate the 10th birthday of Linux on System z. It is System z focused, but later in the clip, you'll notice a very familiar screen. It looks like Tivoli Enterprise Portal.
Link to Video http://www.youtube.com/user/IBMLabGermany#p/u/0/0i7kBnhN3Lg
barbara kennedy | Tags: ztiv tivz z/os service-management
Those who attended the Automated Operations Technical Council (AOTC) 2010 conference in Philadelphia this spring would like to share their experiences. Hope to see you there next year!
One of the challenges in reporting on large amounts of data is, well, reporting on large amounts of data! The process of collecting, storing, and subsequently displaying thousands of objects is a daunting task for any performance management tool. Certain types of data are especially prone to this quandary, and WebSphere MQ queues in large z/OS queue managers are certainly one of those cases.
Large z/OS queue managers can easily accommodate thousands of WMQ objects. I have seen customer environments with more than 10,000 objects in a single queue manager. Obviously, there is a significant cost in retrieving and displaying such a large amount of data.
Understanding the requirement to reduce the TCO of system management tools, Tivoli R&D has recently delivered functionality that significantly decreases the CPU required to process large amounts of data. CPU reductions of up to 85% have been observed when executing user requests returning a result set of 16 MB.
This enhancement is available for z/OS and distributed-system-based OMEGAMON / ITM / ITCAM agents. The following table outlines the PTFs delivering this function.
OMEGAMON V420 and XE MSG V7 Solutions      APAR      PTF
All OMEGAMON V420 require ITM 6.2.2 FP1    IZ57703
ITM 6.1.0                                  OA30094   UA51110
ITM 6.2.0                                  OA31804
ITM 6.2.1                                  OA31024   UA51456
ITM 6.2.2                                  OA31048   UA51194

OMEGAMON V410 and ITM 6.1 Solutions        APAR      PTF
OM XE on z/OS                              OA31017   UA51634
OM XE for z/MC                             OA31025   UA51606
OM XE for CICS TG                          OA31016   UA51592
OM XE for IMS                              OA31020   UA52123
OM XE for Storage                          OA31023   UA51964
OM XE for CICS                             OA31015   UA51593
OM XE for MFN                              OA31022   UA52539
OM XE for DB2                              PM00695
NetView V53                                OA31027
OM XE for WebSphere MQ Mon V6              OA31029
OM XE for WebSphere MQ Config V6           OA31030
OM XE for WebSphere MQ Msg Bkr Mon V6      OA31031
Raymond Sun | Tags: service-management mainframe linux tivz ibmtivoli virtualization
Did you know that System z is a great platform for a private cloud? Software Group ran a benchmark comparing a popular Intel-based hypervisor to z/VM and observed the impact on CPU utilization, throughput (transactions per second), and response time of a sample application as they varied the number of virtual servers. Their results are documented in a paper: http://www.ibm.com/common/ssi/fcgi-bin/ssialias?infotype=SA&subtype=WH&appname=STGE_ZS_ZS_USEN&htmlfid=ZSW03125USEN&attachment=ZSW03125USEN.PDF
There is a companion paper that discusses the TCO implications of private cloud versus public cloud: http://www.ibm.com/common/ssi/fcgi-bin/ssialias?infotype=SA&subtype=WH&appname=STGE_ZS_ZS_USEN&htmlfid=ZSW03126USEN&attachment=ZSW03126USEN.PDF
Have you noticed that when the press talks about cloud, they're typically not talking about System z? I wonder why. I have some theories, but would be interested in your perspective.
Raymond Sun | Tags: ztiv service-management tivz ibmtivoli linux mainframe
As you consider moving workloads to Linux on System z, you will want to evaluate which workloads are the best fit. Ideally, data-intensive or mixed workloads are best suited for System z, whereas CPU-intensive workloads may be better run on other platforms. On a recent webcast where I co-presented with Bill Reeder (IBM Linux Enterprise Servers), we discussed the best workloads for Linux on System z, those that leverage the strengths of the platform, such as WebSphere MQ, Domino, and SAP. We also talked about good workloads for Linux on System z, those that run well there but can also run on other platforms. Of course, there are other factors (e.g., organizational politics) that must be weighed in deciding the ideal platform for a workload.
Wayne Bucek
OMEGAMON XE for Messaging on z/OS provides comprehensive statistical information on application workloads using the MQI. When transaction response time degrades, it is helpful to isolate the delay to the offending subsystem. The Application Statistics feature facilitates problem identification, providing clocks and counts by MQI request type for all monitored applications.
Details on the operation of this feature are found in the WebSphere MQ Monitoring Agent User's Guide. A white paper discussing Application Statistics best practices is available on the Tivoli Wiki.
barbara kennedy | Tags: servicemgmt cloud tivz mainframe
Recently I interviewed Joe Clabby of Clabby Analytics regarding the role of the mainframe for cloud computing. We invite your comments -
Q - Joe, cloud computing has been characterized in many ways, but is basically a new service consumption and delivery model inspired by consumer Internet services. Key components include on-demand self service, ubiquitous network access, location independent resource pooling and rapid elasticity. In recent weeks you have been advocating the mainframe for management of the cloud. Why?
A - Simple: a mainframe is a cloud-in-a-box.
Q - In your experience, have customers who have adopted service management on System z realized measurable ROI? Can you share any examples?
Q - Not every customer has a mainframe. At what point(s) should a business consider adopting the platform for service management?
IBM is a client of Clabby Analytics and has provided compensation to Clabby Analytics for participation in this interview.
barbara kennedy | Tags: mainframe service-mgmt ztiv tivz zos
Good news for all: the “Call for Papers” for Pulse 2010 has been extended by popular demand. November 20th is the revised deadline, so there is more time to finalize creative input. I spoke with Marcus Boone, product manager for Tivoli’s System z portfolio, regarding suggestions for Pulse participation. He is eager to hear from customers who can share real-world experiences on how an integrated approach to service management on System z is creating real benefits. Virtualization projects with savings, new Linux workload projects, or best practices in the very complex world of ‘always available’ are good examples. Marcus believes Pulse 2010’s emphasis on alternative forms of customer participation, such as roundtables, panels, and open discussions supporting a topic, will be both more interesting and more meaningful to all.
Please plan to join the conversation. Submit your proposal online at http://www.ibm.com/software/tivoli/pulse/.
Dec 13 Telecon - Unattended Cloud application execution with SmartCloud workload automation
Register now: http://ibm.co/SsNzHG
• Consolidating various product roles as the backbone for enterprise scheduling
• Implementing dynamic scheduling capabilities that can virtually integrate into a larger high-availability infrastructure
• Using a per-job pricing metric based on the number of jobs actually managed, so you pay only for the workloads being managed
Speaker: Xavier Giannakopoulos, Workload Automation Product Manager, IBM Software Group
Broadcast Date: December 13, 2012 at 11:00 a.m. Eastern Standard Time, 4:00 p.m. GMT / UTC
Developed for: IT and enterprise architects and managers; operations managers and system administrators; CTOs; and project managers and researchers
Technical Level: Intermediate
Cloud environments bring with them a new set of challenges around additional workload and resource management. Much of the savings from cloud deployments comes from reduced resource usage. But this requires well-managed, automated provisioning of applications and workloads, along with the ability to quickly de-provision and free up resources when a workload or application is no longer needed. If you’re moving to implement cloud support for new workloads and applications, these are important considerations. Workload automation can help.
IBM SmartCloud workload automation capabilities are designed to provide a consistent look and feel that enables unattended workload or application operations, with the provisioning and de-provisioning of cloud resources and infrastructures. Join us for this complimentary webcast as we look at how to gain additional value with improved availability and productivity by automating much of the workload associated with managing applications in a cloud. You can save time and money with SmartCloud workload automation, while simplifying cloud operations.
Join us after the teleconference for a live question-and-answer session. The teleconference will also be available for replay after the event.