Installation and Configuration Services - Cloud Ready for Linux on System z Offering
Are you Cloud Ready for Linux on System z?
This offering provides the ability to provision to any platform using a common service catalog in a highly available environment, covering availability, performance, monitoring, and backup and recovery.
Customized offerings can also include integration of additional products, program design, implementation and ongoing support. Services are delivered in partnership with StreamFoundry, Inc., an IBM Business Partner.
* Delivers Cloud Service Management by creating and deploying standardized, virtualized and mixed infrastructure environments with Process Automation
* Results in reduced cost and increased speed of delivery of business services
* Installation and configuration of: Tivoli Provisioning Manager, IBM Tivoli Monitoring, System Automation for Multiplatforms, Tivoli Storage Manager
* Services for each stage of Cloud on System z design, implementation and ongoing support, tailored to individual requirements
* Knowledge Transfer and on-going support as needed.
* Typical Project Duration: 5 days. Can scale based on customer requirements
* SmartCloud Control Desk Media included for Service Desk Administration
Perils, pitfalls and problems of managing heterogeneous virtualization
Recently I spoke with Jasmine Noel, founding partner of Ptak Noel and Associates, LLC, regarding her research on the increasing drive for virtualization. What follows are some interesting observations -
"Enterprises keep asking for more agility (i.e. deploy more stuff even faster) and lower capital costs (i.e. minimize idle resources). Virtualization seems like the perfect answer. Virtual images can be deployed faster to provide more agility. More virtual images can be packed onto fewer physical systems to lower capital costs.
If only that were the end of the story.
Datacenter operations teams are realizing that virtualization is not a homogeneous, one-size-fits-all solution. Enterprises seem intent on acquiring different virtualization platforms as well as heterogeneous hardware platforms. This heterogeneous virtualization is changing systems administration in some very fundamental ways:
• Virtualization decouples the traditional server and OS management from the physical hardware. This means administrators are being given a bunch of new, overlapping tools to manage the different virtual systems, which increases the potential for errors that impact business services.
• Virtual image management becomes more important (because someone will have to clean out the inevitable virtual sprawl) and more complex (because someone will have to minimize the business risk of high-speed image deployment).
• Matching dynamic workloads to virtual resources that can be easily and automatically added, deleted, moved or changed can’t be done manually. Imagine trying to hitch a bucking rodeo bronco to a wagon that changes size every few minutes -- which doesn’t bode well for guaranteed completion of critical business workloads.
There is no escaping these issues. Every enterprise has virtualization strategies for different types of infrastructure and applications. No single form of virtualization fits every aspect of those strategies. Thus there will be different hypervisors in the enterprise -- heterogeneity will persist. So what can be done about it?
• Start educating the executives that a virtual image needs proactive administration to keep it healthy. They also need to understand that managing a roster of hundreds of virtual images is vastly different from keeping a spreadsheet list of 10-20 golden configurations. IT staff will need a major productivity boost to keep up.
• Start seriously thinking about managing resources via policies, since it’s the only way to control changes that can happen automatically.
• Start evaluating which of the repetitive administrative activities can be done in a cross-platform, cross-VM manner. The more those tasks can be automated, the more time there’ll be to spend on avoiding the perils, pitfalls and problems of managing heterogeneous virtualization."
IBM is a client of Ptak Noel and has provided compensation to Ptak Noel for participation in this interview.
Using Tivoli Analytics for Service Performance to improve ROI
Broadcast Date: October 27, 2011 at 11:00 a.m. Eastern Daylight, 3:00 p.m. GMT, 4:00 p.m. British Summer Time
Reducing manual analysis, false alerts and diagnostic processes translates
into improved productivity. But how is this possible with today’s technology?
If you can get data from multiple sources within your environment and
perform analytics across multiple monitoring tools, you’ll have a better
understanding of the relationship among different product metrics —
making your systems more sensitive to when a problem has occurred. This
type of proactive model minimizes service disruptions, and enables you
to get the most from your monitoring investments by detecting emerging
problems that would otherwise go unnoticed.
Join us for this
complimentary teleconference on IBM Tivoli® Analytics for Service
Performance. You will learn how you can reduce service outages by moving
to a proactive analytics model that minimizes service disruption and
improves the value from monitoring across mainframe and distributed
environments. The resulting IT analytics are designed to help business
and IT professionals who are interested in performance and application
monitoring, operations and service availability, and overall System z®
quality. If you currently use any monitoring product, such as ITCAM,
OMEGAMON®, OMNIbus, or Tivoli Business Service Manager, and are performing
event and service management, this teleconference will be of interest.
IBM Tivoli Analytics for Service Performance can take information from
your current monitoring products and perform predictive analytics to
find problems before they become outages.
You’ll come away
from this teleconference with a good understanding of Tivoli Analytics
for Service Performance capabilities, including how it:
- Learns normal operational behavior, and how metrics behave together
- Reduces expensive and time-consuming false alerts and trouble tickets
- Provides maximum warning of service impact, deterioration or outage
- Detects service impacts that are not identifiable by fixed thresholds alone
- Assists with root cause analysis by indicating most offending metrics
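The capabilities above hinge on learning what "normal" looks like, rather than relying on fixed thresholds. As a rough illustration of that general idea (not the product's actual algorithm), here is a minimal Python sketch of a statistical baseline that flags a metric as anomalous even when it never crosses a fixed threshold:

```python
import statistics

def learn_baseline(history):
    """Learn a per-metric baseline (mean and spread) from historical samples."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return mean, stdev

def is_anomalous(value, baseline, sigmas=3.0):
    """Flag a sample that strays far from learned behavior, even if it
    never crosses a fixed alerting threshold."""
    mean, stdev = baseline
    return abs(value - mean) > sigmas * stdev

# A CPU metric that always stays under a fixed 90% threshold...
history = [40, 42, 41, 39, 40, 43, 41, 40, 42, 41]
baseline = learn_baseline(history)

# ...can still be anomalous relative to its learned baseline.
print(is_anomalous(75, baseline))   # well below 90%, yet far from normal
print(is_anomalous(42, baseline))   # within normal variation
```

A product like Tivoli Analytics for Service Performance additionally models how metrics behave together, which a single-metric baseline like this cannot capture.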
Speaker: Richard Gleeson, Tivoli Product Manager, IBM Software Group
Join us after the teleconference for a live question-and-answer session. The
teleconference will also be available for replay after the event.
Have you noticed that when the press talks about cloud, they're typically not talking about System z? I wonder why. I have some theories, but would be interested in your perspective.
Hi. I am Wayne Bucek a 20-year veteran in the systems management arena. When I have to come off the bike trails, I am a Consulting IT Specialist for zTivoli solutions. My favorite part of the job is to assist customers with the selection, design, and implementation of IBM zTivoli products. I am IBM-certified in WebSphere MQ.
Prior to joining the vendor community, I had a real job: 10 years' experience across the disciplines of z/OS systems programming, application programming, and performance management.
I hope to make this blog a place z techies can come to link their real world business problems to zTivoli solutions.
As you consider moving workloads to Linux on System z, you will want to evaluate which workloads are the best fit. Ideally, data-intensive or mixed workloads are best suited for System z, whereas CPU-intensive workloads may be better run on other platforms. On a recent webcast where I co-presented with Bill Reeder (IBM Linux Enterprise Servers), we discussed the best workloads for Linux on System z, those that leverage the strengths of System z, such as WebSphere MQ, Domino, and SAP. We also talked about good workloads for Linux on System z, which run well there but can also run on other platforms. Beyond technical fit, there are other factors (e.g. organizational politics) that must be evaluated in deciding the ideal platform for a workload.
Top 10 Ways to Participate in Pulse 2010
“Integrated Service Management”
10 – Come and attend interesting presentations
9 – Attend and enroll in a workshop
8 – Attend with a colleague
7 – Quiz the experts
6 – Present on a topic
5 – Share a best practice
4 – Share details of a project
3 – Share a technique
2 – Participate in a panel discussion or roundtable
1 – All of the above!
Pulse 2010 will be held February 21-24 in always exciting Las Vegas and feature the best of the best in Service Management. There will be announcements, presentations, demonstrations and receptions. Analysts and experts will address specialized topics for business leaders, technicians, and executives. Get smart, get certified and get connected. Plan to participate now. Call for papers and session contributors ends November 2.
Improve System z Storage Management with Integrated Storage Suite Teleconference
- Consolidation of four IBM Tivoli z/OS storage management, audit, and reporting product components
- Data protection for integrated catalog facility (ICF) catalogs to help increase data availability
- Prevention of, and recovery from, many space-related abends
- Auditing and automatic correction of errors in the DFSMS environment to help reduce operating costs and avoid costly outages
Speaker: Kevin Hosozawa, Tivoli System z Storage Management Product Manager, IBM Software Group
Broadcast Date: December 5, 2012 at 11:00 a.m. Eastern Standard Time, 4:00 p.m. GMT / UTC
Out-of-control storage growth is putting a heavy load on many storage
administrators — from the management of complex, multi-vendor
heterogeneous IBM® System z® storage devices, to resolving performance
bottlenecks with little visibility into the storage environment. Without
an increase in budget or staff, it’s also becoming more difficult to
minimize outages and manage changes in the storage environment.
Join us for this complimentary teleconference as we review all the
recent new capabilities in the Tivoli® System z Storage Suite, including
integrated z/OS® storage management capabilities — visibility, control
and automation — for the storage administrator. The suite can provide
z/OS storage management awareness into application and workload
management, enabling a more efficient process that helps reduce costs
and fosters efficiency in problem identification and resolution.
The IBM Tivoli Advanced Storage Management Suite for z/OS, V1.1
delivers a feature-rich storage management, audit and reporting software
suite that includes:
- IBM Tivoli Advanced Audit for DFSMShsm V2.4
- IBM Tivoli Advanced Reporting and Management for DFSMShsm V2.4
- IBM Tivoli Advanced Catalog Management for z/OS, V2.4
- IBM Tivoli Advanced Allocation Management V3.2
You’ll learn how the new capabilities in this suite can improve storage availability and performance at a lower cost.
Join us after the teleconference for a live question-and-answer
session. The teleconference will also be available for replay after the event.
Good news for all: the “Call for Papers” for Pulse 2010 has been extended by popular demand. November 20th is the revised deadline, so there is more time to finalize creative input. I spoke with Marcus Boone, product manager for Tivoli’s System z portfolio, regarding suggestions for Pulse participation. He is eager to hear from customers who can share real-world experiences on how an integrated approach to service management on System z is creating real benefits. Virtualization projects with savings, new Linux workload projects, or best practices in the very complex world of ‘always available’ are good examples. Marcus believes Pulse 2010’s emphasis on alternative forms of customer participation, such as roundtables, panels and open discussions to support a topic, will be both more interesting and more meaningful to all.
Please plan to join the conversation. Submit your proposal online at http://www.ibm.com/software/tivoli/pulse/.
OMEGAMON XE for Messaging is a complete WMQ management solution. Many customers focus exclusively on the performance monitoring aspect of the product, which is the ability to monitor WebSphere Broker and WebSphere MQ, and disregard the WMQ configuration agent.
The configuration agent is a robust, feature-rich component of the solution. However, the features and associated benefits of the configuration component are distinctly different from those of the monitoring components. When discussing the WMQ Configuration agent, I like to begin by covering the business benefits it offers. The following items (IMHO) represent the business benefits of the WMQ Configuration agent.
* Centralized administration of WebSphere MQ objects
* Single GUI across platforms
* Eliminates configuration errors
* Assists in dealing with large numbers of objects
* Disaster recovery tool
The list of features offered by the WMQ Configuration agent is too lengthy to include here, but a full accounting can be found in the product documentation at:
Future blog entries will cover this material in more detail.
Pulse 2012 offers more than ever before for the System z community!
With four days of top-notch education ahead, Pulse 2012 features a large number of System z focused activities to help increase visibility and improve the automation of your IT environment.
- Product showcase in the Expo Hall
- Automated Operations Technical Council (AOTC) track
This formerly standalone event will be held in conjunction with Pulse, with
its own dedicated track featuring 23 sessions providing the knowledge to
help you protect your business and IT services through end-to-end high
availability and automation.
- Featured System z sessions
These sessions address key opportunities and challenges for today’s mainframe
environments, including new product capabilities, scalable cloud
computing, security and compliance, application delivery and service
management for hybrid environments.
- Special reception for System z attendees
Join us Monday evening for a New Orleans-style reception for the System z
community, hosted by Phil Weintraub (Vice President, zEnterprise
Software Sales North America).
Experience product demos via interactive touchscreen at the System z kiosks in the Expo Hall, including:
- Enhanced OMEGAMON XE for CICS and z/OS
- System Automation for z/OS
- Cloud Ready on System z
Taking Advantage of a Cloud-Ready solution to simplify moving to Cloud on System z
- Creating and deploying standards in virtualized and mixed environments through Process Automation
- Streamlining manual procedures that cause delays
- Rendering services through a common Service Catalog
- Automating validation of assets prior to deployment using a plug-n-play service management infrastructure
Speaker: Mike Baskey, Chief Architect, Cloud on System z, IBM Software Group
Broadcast Date: August 23, 2012, 11 a.m. EDT / 4:00 p.m. BST / 3:00 p.m. UTC
Developed for: Application developers and managers, and database administrators and managers
Technical level: Intermediate
Every IT shop is trying to figure out how to save money and decrease
risks by implementing a cloud environment — beginning with the right
platform. IBM® System z® has had virtualization for years, and with new
tools, it can support the provisioning, orchestration and monitoring
necessary to successfully gain the value of cloud on a System z platform.
Have you considered implementing cloud on System z? Join us for a
complimentary teleconference and learn how IBM solutions give you the
capability to monitor, provision and automate activities needed for
cloud service management and how IBM is enhancing cloud on System z
going forward. We’ll show you how to create and reuse images automated
with pre-built workload patterns, and later, automatically de-provision.
This teleconference includes a discussion of the various new components
available to support a cloud environment on System z, including Cloud
Ready for System z — and image-based deployment for cloud service
delivery and management—and Tivoli® Provisioning Manager. You’ll come
away with a good understanding of why System z is a great cloud platform.
Join us after the teleconference for a live question-and-answer
session. The teleconference will also be available for replay after the event.
Recently I interviewed Joe Clabby of Clabby Analytics regarding the role of the mainframe for cloud computing. We invite your comments -
Q - Joe, cloud computing has been characterized in many ways, but is basically a new service consumption and delivery model inspired by consumer Internet services. Key components include on-demand self service, ubiquitous network access, location independent resource pooling and rapid elasticity. In recent weeks you have been advocating the mainframe for management of the cloud. Why?
A - Simple ─ a mainframe is a cloud-in-a-box.
Consider this: mainframe users want resources on-demand. Mainframes are designed to be able to provide services on-demand ─ based upon pre-established prioritization. If you want additional resources on a mainframe, you can ask IT to provide them and voila ─ if you have priority, you get them. Over time, I expect IBM to front-end mainframes with software that will give users more control when requesting resources ─ but the fact remains that mainframes can be easily provisioned to meet user requests for resources today.
Ubiquitous network access is another of your criteria. Mainframes can be connected using TCP/IP to the Internet. The Internet is ubiquitous. Hence, you get that with a mainframe.
Location independent resource pooling (in other words, the resources show up in a pool, and no one cares where those resources are ─ all they care about is getting access to those resources). Mainframes offer the market’s most advanced resource management, virtualization, and provisioning. Hence, they meet this location independent resource pooling descriptor.
Finally, elasticity (expandability, flexibility, etc.). Name a more scalable, elastic, flexible environment when it comes to making resources available. Weigh mobile partitioning, advanced virtualization management, and virtualized resource management as you seek to find a superior architecture. You can’t find any general workload processor more flexible and easier to manage (considering how many resources that it controls) than a mainframe.
Q- In your experience, have customers who have adopted service management from system z realized measurable ROI? Can you share any examples?
A - Most customers that I talk to don’t sit around and say “this is how much it cost me to run back-up/restore and other functions that I manage using service management software.” What they say is “hey, look over there. Those are my five mainframe managers who manage the equivalent of hundreds of x86 servers.” The math is pretty obvious ─ you need a dozen or a couple of dozen managers and administrators to manage x86 environments ─ and far, far fewer managers/administrators to manage a mainframe. (And remember ─ in some geographies, the salaries, benefits, and sick leave for x86 managers can cost a company $100,000 per year. Take ten or twenty IT managers out of the equation and pretty soon you’re talking about real money…)
In Brazil, when I asked a mainframe user who manages his mainframe ─ he said “me and that other guy over there”. He was managing his company’s SAP environment with 2 people. That’s mission critical computing ─ the company’s livelihood ─ with two people.
In Arkansas, another mainframe manager said “I’ve replaced dozens of x86 servers ─ and will replace dozens more ─ with our mainframe. Here are the five offices of the people I use to manage this environment. People don’t believe me when I tell them this ─ but it’s true!”
Incidentally, mainframe managers have been using dashboards with monitor, visualization, and control facilities for years. They don’t even think of it as service management software. But it is.
Q - Not every customer has a mainframe. At what point(s) should a business consider adopting the platform for service management?
A - Not every customer has a mainframe ─ and not every customer needs a mainframe. If you’ve got a ton of applications and you want to run on the most efficient general workload processor ─ you should be considering using z/OS, z/VM, and/or Linux on a mainframe. If you don’t have enough work for a mainframe ─ but want to run high-RAS (reliability, availability, security) applications in a mission critical environment ─ use IBM POWER systems (my opinion: SPARC and Itanium are dead-end architectures). If you want to run Windows or Linux on x86 Xeon processors ─ you’re going to deal with a bunch of distinct servers that will need to be administered and managed. And you’re gonna need a ton of people to manage those discrete servers. To reduce the need for so many administrators and managers (and to save the associated salaries/benefits), as well as to reduce costs related to human error that cause downtime and potential loss of business, you need to adopt service management software.
Bottom line: use a mainframe if you have a lot of work to do ─ and you don’t want to spend a ton of money managing that environment. Buy a Unix server if you don’t have enough work to keep a mainframe busy. Buy an x86 server if you need Windows ─ or if you want to run Linux on x86 platforms. In each case though, use service management software to reduce your management costs.
IBM is a client of Clabby Analytics and has provided compensation to Clabby Analytics for participation in this interview.
Teleconference: Tivoli Business Service Management for System z
- Easily collect and forward status information from z/OS and improve end-to-end service visibility
- Auto-determine business and financial impact of service degradations to enable prioritization
- Identify and resolve critical problems with automated event correlation, isolation and resolution capabilities
- Retain events and business impact analysis to grow knowledge base and ease future correlation
Speaker: Clayton Ching, Senior Product Manager, Tivoli Business Service Management, IBM Software Group
Broadcast date: October 25, 2012, 11 a.m., EDT / 4:00 p.m. BST / 3:00 p.m. UTC
With enterprises becoming increasingly complex, and the volume of
events growing, the challenge of preventing problems from impacting
service is becoming more urgent. IT shops need to respond proactively to
infrastructure-related issues that are causing service degradation. But
how do you get past multiple user administration interfaces, and
difficulty in seeing historical and real-time events in a business
context? How do you get to what matters most?
Business Service Management brings clarity and focus to what’s
important for IT in terms of business priorities. Prioritization of
business services is the key to managing any enterprise environment,
whether IBM® z/OS® or distributed, because your business services and
all the underlying dependencies need to align. In this session, we will
define Business Service Management and outline the roadmap and methods to
get you there.
Join us for this complimentary teleconference and learn how to
architect and manage complex relationships between services and
supporting infrastructure for increased visibility, impact assessment
and reduced risk related to business events. We’ll discuss a methodology
to get z/OS events into Tivoli® Netcool® OMNIbus, and in turn Tivoli
Business Service Manager. We’ll highlight integration with the Tivoli
Event Pump, and contrast it with other methods. This method closely
aligns lines-of-business and IT operations teams, enabling collaborative
and holistic management of services and dynamic infrastructures.
Join us after the teleconference for a live question-and-answer
session. The teleconference will also be available for replay after the event.
Perils, pitfalls and problems need solutions. So, I asked Jasmine Noel to continue her discussion and to focus on what an enterprise might do to address the issues. Jasmine -
Tools to Tame Heterogeneous Virtualization Management
As I mentioned in my initial blog, virtualization is elevating the daily stress for over-extended datacenter administrators to alarming levels. Higher business agility and lower capital costs through virtualization can’t be achieved without sophisticated, complete, and always-on management of heterogeneous virtual computing resources. In other words, agile management of heterogeneous virtualization is critical for business agility and profitability. Hence, the reason administrator stress levels are going through the roof is that they don’t have agile management of heterogeneous virtualization.
It is vital, not optional, that IT organizations become adept at matching increasingly dynamic needs (be they automated workload schedules or on-demand service requests) with the resource flexibility afforded by virtualization’s rapid provisioning strengths. As such, system administrators are quickly realizing that they need a different type of management solution. So what are the characteristics of this new solution type?
A policy-based approach is a key characteristic of this new solution type. Virtualization enables application images to move to different virtual machines very quickly (in some cases before admins get a chance to check their emailed service tickets). Policies enable IT to introduce some necessary controls around those moves and changes. Policies can also simplify the workload to resource matching process. For example, policies can ensure that multiple workloads all run on the same virtual machine in a particular datacenter location, or incorporate time of day requirements so that a workload spans ten virtual machines during the day but thirty virtual machines at night.
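The time-of-day example above can be made concrete with a small sketch. This is a hypothetical policy representation in Python (the names and numbers are illustrative, not any vendor's policy language), showing how a declared policy, rather than a manual change, decides how many virtual machines a workload spans:

```python
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    """Hypothetical time-of-day policy: a workload spans more VMs at night."""
    workload: str
    day_vms: int
    night_vms: int
    night_start: int = 20  # night window begins at 8 p.m. (local hour)
    night_end: int = 6     # and ends at 6 a.m.

    def target_vms(self, hour: int) -> int:
        """Return the VM count the policy prescribes for a given hour."""
        is_night = hour >= self.night_start or hour < self.night_end
        return self.night_vms if is_night else self.day_vms

# Ten VMs during the day, thirty at night, as in the example above.
policy = ScalingPolicy("batch-reporting", day_vms=10, night_vms=30)
print(policy.target_vms(14))  # afternoon
print(policy.target_vms(23))  # late evening
```

Because the rule lives in the policy object rather than in an administrator's head, an automation engine can evaluate it on every scheduling cycle and resize the workload without human intervention.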
Centralized management of dense computing infrastructures (be they Blades, packaged Cloud systems, or mainframes) appears to be a natural extension of this phenomenon because for the first time, administrators truly need a centralized management platform to manage virtual systems. Many IT organizations are learning the hard way about the sprawl created when virtual images are deployed at will with minimal oversight or visibility into how the environment is changing. System administrators need management solutions that make oversight and resource visibility as quick and seamless as deploying the virtual images. Solutions which afford centralized control over all virtualization options and extend across a diverse infrastructure enable optimal use of administrators’ time.
Besides centralization, the solution should also have a workflow-based approach to automation embedded in its design. Automating the many other tasks (patching, security checks, compliance checks, etc.) that surround virtual image deployment drives down the business risk of high-speed image deployment. When these tasks are orchestrated as complete workflows, IT productivity skyrockets, which gives administrators the time to focus on policy design and decision-related resource analysis.
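The workflow idea above can be sketched in a few lines. This is a deliberately simplified Python illustration (the check functions are placeholders; a real workflow engine would call platform and security APIs) of how the surrounding tasks can be chained so that a fast deployment never skips its risk-reducing checks:

```python
def check_patch_level(image: str) -> bool:
    # Placeholder: a real check would query the image's patch inventory.
    return "unpatched" not in image

def check_security(image: str) -> bool:
    # Placeholder: a real check would run a vulnerability scan.
    return "insecure" not in image

def check_compliance(image: str) -> bool:
    # Placeholder: a real check would validate against compliance policy.
    return True

def deploy_image_workflow(image: str) -> str:
    """Orchestrate the tasks surrounding deployment as one workflow,
    so high-speed image deployment cannot bypass the checks."""
    steps = [
        ("patch level", check_patch_level),
        ("security scan", check_security),
        ("compliance check", check_compliance),
    ]
    for name, step in steps:
        if not step(image):
            return f"blocked: failed {name}"
    return f"deployed: {image}"

print(deploy_image_workflow("web-frontend-v3"))
print(deploy_image_workflow("web-frontend-unpatched"))
```

The design point is the ordering: every image passes the same gate sequence, and a failure at any step blocks the deployment rather than leaving cleanup to an overloaded administrator.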
There is no escaping these solution requirements. The system management status quo can’t deliver the agile management of heterogeneous virtualization that is essential for business agility and profitability. What administrators can do is demand that management vendors prove how they deliver on these requirements.
IBM is a client of Ptak Noel and has provided compensation to Ptak Noel for
participation in this interview.