Installation and Configuration Services - Cloud Ready for Linux on System z Offering
Are you Cloud Ready for Linux on System z?
This offering provides the ability to provision to any platform from a common service catalog in a highly available environment, with services covering availability, performance, monitoring, and backup and recovery.
Customized offerings can also include integration of additional products, program design, implementation and ongoing support. Services are delivered in partnership with StreamFoundry, Inc., an IBM Business Partner.
* Delivers Cloud Service Management by creating and deploying standardized, virtualized and mixed infrastructure environments with Process Automation
* Results in reduced cost and increased speed of delivery of business services
Deliverables:
* Installation and configuration of: Tivoli Provisioning Manager, IBM Tivoli Monitoring, System Automation for Multiplatforms, Tivoli Storage Manager
* Services for each stage of Cloud on System z design, implementation and ongoing support, fit to individual requirements
* Knowledge transfer and ongoing support as needed
* Typical project duration: 5 days; can scale based on customer requirements
* SmartCloud Control Desk media included for Service Desk administration
Perils, pitfalls and problems of managing heterogeneous virtualization
Recently I spoke with Jasmine Noel, founding partner of Ptak Noel and Associates, LLC, regarding her research on the increasing drive for virtualization. What follows are some interesting observations -
"Enterprises keep asking for more agility (i.e. deploy more stuff even faster) and lower capital costs (i.e. minimize idle resources). Virtualization seems like the perfect answer. Virtual images can be deployed faster to provide more agility. More virtual images can be packed onto fewer physical systems to lower capital costs.
If only that were the end of the story.
Datacenter operations teams are realizing that virtualization is not a homogeneous, one-size-fits-all solution. Enterprises seem intent on acquiring different virtualization platforms as well as heterogeneous hardware platforms. This heterogeneous virtualization is changing systems administration in some very fundamental ways:
• Virtualization decouples traditional server and OS management from the physical hardware. This means administrators are being given a bunch of new, overlapping tools to manage the different virtual systems, which increases the potential for errors that impact business services.
• Virtual image management becomes more important (because someone will have to clean out the inevitable virtual sprawl) and more complex (because someone will have to minimize the business risk of high-speed image deployment).
• Matching dynamic workloads to virtual resources that can be easily and automatically added, deleted, moved or changed can’t be done manually. Imagine trying to hitch a bucking rodeo bronco to a wagon that changes size every few minutes -- which doesn’t bode well for guaranteed completion of critical business workloads.
There is no escaping these issues. Every enterprise has virtualization strategies for different types of infrastructure and applications. No single form of virtualization fits every aspect of those strategies. Thus there will be different hypervisors in the enterprise -- heterogeneity will persist. So what can be done about it?
• Start educating the executives that a virtual image needs proactive administration to keep it healthy. They also need to understand that managing a roster of hundreds of virtual images is vastly different from keeping a spreadsheet list of 10-20 golden configurations. IT staff will need a major productivity boost to keep up.
• Start seriously thinking about managing resources via policies, since it’s the only way to control changes that can happen automatically.
• Start evaluating which of the repetitive administrative activities can be done in a cross-platform, cross-VM manner (a small sketch of this idea follows below). The more those tasks can be automated, the more time there’ll be to spend on avoiding the perils, pitfalls and problems of managing heterogeneous virtualization."
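To make that last bullet concrete, here is a minimal sketch, entirely my own and not any vendor's real API (every class and method name is hypothetical), of how one repetitive chore, cleaning out sprawled virtual images, might run uniformly across heterogeneous hypervisors through thin adapters:

```python
# Minimal sketch: one repetitive admin task applied across mixed
# hypervisors through a common adapter interface. Every class and
# method name here is hypothetical, not a real vendor API.
from abc import ABC, abstractmethod

class Hypervisor(ABC):
    """Adapter that hides platform-specific tooling behind one interface."""

    @abstractmethod
    def list_images(self) -> list[str]: ...

    @abstractmethod
    def delete_image(self, name: str) -> None: ...

class ZVMAdapter(Hypervisor):
    def list_images(self) -> list[str]:
        return ["linux01", "linux02-stale"]  # stand-in for a z/VM query

    def delete_image(self, name: str) -> None:
        print(f"z/VM: purging image {name}")

class X86Adapter(Hypervisor):
    def list_images(self) -> list[str]:
        return ["web-old", "app01"]  # stand-in for an x86 hypervisor query

    def delete_image(self, name: str) -> None:
        print(f"x86: removing image {name}")

def clean_stale_images(platforms, is_stale):
    """One sprawl-cleanup routine run uniformly on every platform."""
    removed = 0
    for hv in platforms:
        for image in hv.list_images():
            if is_stale(image):
                hv.delete_image(image)
                removed += 1
    return removed

if __name__ == "__main__":
    total = clean_stale_images(
        [ZVMAdapter(), X86Adapter()],
        is_stale=lambda name: "stale" in name or "old" in name,
    )
    print(f"Removed {total} stale images")
```

The design point is the adapter layer: the cleanup logic is written once, and each platform's quirks stay behind its own adapter.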
IBM is a client of Ptak Noel and has provided compensation to Ptak Noel for participation in this interview.
Hi. I am Wayne Bucek, a 20-year veteran in the systems management arena. When I have to come off the bike trails, I am a Consulting IT Specialist for zTivoli solutions. My favorite part of the job is to assist customers with the selection, design, and implementation of IBM zTivoli products. I am IBM-certified in WebSphere MQ.
Prior to joining the vendor community, I had a real job: 10 years' experience across the disciplines of z/OS systems programming, application programming, and performance management.
I hope to make this blog a place z techies can come to link their real world business problems to zTivoli solutions.
Using Tivoli Analytics for Service Performance to improve ROI
Broadcast Date: October 27, 2011 at 11:00 a.m. Eastern Daylight Time, 3:00 p.m. GMT, 4:00 p.m. British Summer Time
Reducing manual analysis, false alerts and diagnostic processes translates into improved productivity. But how is this possible with today’s technology? If you can get data from multiple sources within your environment and perform analytics across multiple monitoring tools, you’ll have a better understanding of the relationship among different product metrics — making your systems more sensitive to when a problem has occurred. This type of proactive model minimizes service disruptions, and enables you to get the most from your monitoring investments by detecting emerging problems that would otherwise go unnoticed.
Join us for this complimentary teleconference on IBM Tivoli® Analytics for Service Performance. You will learn how you can reduce service outages by moving to a proactive analytics model that minimizes service disruption and improves the value from monitoring across mainframe and distributed environments. The resulting IT analytics are designed to help business and IT professionals who are interested in performance and application monitoring, operations and service availability, and overall System z® quality. If you currently use any monitoring product, such as ITCAM, OMEGAMON®, OMNIbus, or Tivoli Business Service Manager, and are performing event and service management, this teleconference will be of interest. IBM Tivoli Analytics for Service Performance can take information from your current monitoring products and perform predictive analytics to find problems before they become outages.
You’ll come away from this teleconference with a good understanding of Tivoli Analytics for Service Performance capabilities, including how it:
* Learns normal operational behavior, and how metrics behave together
* Reduces expensive and time-consuming false alerts and trouble tickets
* Provides maximum warning of service impact, deterioration or outage
* Detects service impacts that are not identifiable by fixed thresholds alone (a toy illustration follows this list)
* Assists with root cause analysis by indicating the most offending metrics
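As a toy illustration of the fixed-thresholds bullet above (my own sketch, not the product's actual algorithm), the following learns the normal joint behavior of two metrics and flags a sample whose relationship is broken even though neither metric breaches its own static threshold:

```python
# Toy sketch (not the product's algorithm): learn the normal linear
# relationship between two metrics, then flag a sample where that
# relationship breaks even though each metric stays under its own
# fixed alert threshold.
import statistics

def fit_baseline(xs, ys):
    """Least-squares slope/intercept for 'normal' joint behavior,
    plus the spread of the residuals around that line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    return slope, intercept, statistics.stdev(residuals)

def is_anomaly(x, y, slope, intercept, resid_sd, k=3.0):
    """Anomalous if y strays more than k standard deviations from the
    value the learned baseline predicts for this x."""
    return abs(y - (slope * x + intercept)) > k * resid_sd

# Learn the baseline: CPU% normally tracks transaction load.
load = [100, 200, 300, 400, 500, 600]
cpu = [11, 19, 31, 39, 52, 59]
slope, intercept, sd = fit_baseline(load, cpu)

# New sample: CPU 55% sits below a fixed 80% threshold, yet it is far
# too high for a load of only 150 -- an emerging problem worth a warning.
print(is_anomaly(150, 55, slope, intercept, sd))  # True
```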
Speaker: Richard Gleeson, Tivoli Product Manager, IBM Software Group
Join us after the teleconference for a live question-and-answer session. The teleconference will also be available for replay after the event.
As you consider moving workloads to Linux on System z, you will want to evaluate which workloads are the best fit. Ideally, data-intensive or mixed workloads are best suited for System z, whereas CPU-intensive workloads may be better run on other platforms. On a recent webcast where I was co-presenting with Bill Reeder (IBM Linux Enterprise Servers), we discussed best workloads for Linux on System z, which leverage the strengths of System z, such as WebSphere MQ, Domino, and SAP. We also talked about good workloads for Linux on System z, which run well on Linux on System z but can also run on other platforms. Finally, there are other factors (e.g. organizational politics) that must be evaluated in deciding the ideal platform for workloads.
Recently I interviewed Joe Clabby of Clabby Analytics regarding the role of the mainframe for cloud computing. We invite your comments -
Q - Joe, cloud computing has been characterized in many ways, but is basically a new service consumption and delivery model inspired by consumer Internet services. Key components include on-demand self service, ubiquitous network access, location independent resource pooling and rapid elasticity. In recent weeks you have been advocating the mainframe for management of the cloud. Why?
A - Simple -- a mainframe is a cloud-in-a-box. Consider this: mainframe users want resources on-demand. Mainframes are designed to be able to provide services on-demand, based upon pre-established prioritization. If you want additional resources on a mainframe, you can ask IT to provide them and voila -- if you have priority, you get them. Over time, I expect IBM to front-end mainframes with software that will give users more control when requesting resources, but the fact remains that mainframes can be easily provisioned to meet user requests for resources today. Ubiquitous network access is another of your criteria. Mainframes can be connected using TCP/IP to the Internet. The Internet is ubiquitous. Hence, you get that with a mainframe. Location independent resource pooling means that the resources show up in a pool and no one cares where those resources are; all they care about is getting access to those resources. Mainframes offer the market’s most advanced resource management, virtualization, and provisioning. Hence, they meet this location independent resource pooling descriptor. Finally, elasticity (expandability, flexibility, etc.). Name a more scalable, elastic, flexible environment when it comes to making resources available. Weigh mobile partitioning, advanced virtualization management, and virtualized resource management as you seek to find a superior architecture. You can’t find any general workload processor more flexible and easier to manage (considering how many resources it controls) than a mainframe.
Q - In your experience, have customers who have adopted service management on System z realized measurable ROI? Can you share any examples?
A - Most customers that I talk to don’t sit around and say “this is how much it cost me to run back-up/restore and other functions that I manage using service management software.” What they say is “hey, look over there. Those are my five mainframe managers who manage the equivalent of hundreds of x86 servers.” The math is pretty obvious -- you need a dozen or a couple of dozen managers and administrators to manage x86 environments, and far, far fewer managers/administrators to manage a mainframe. (And remember -- in some geographies, the salaries, benefits, and sick leave for x86 managers can cost a company $100,000 per year. Take ten or twenty IT managers out of the equation and pretty soon you’re talking about real money…) In Brazil, when I asked a mainframe user who manages his mainframe, he said “me and that other guy over there.” He was managing his company’s SAP environment with 2 people. That’s mission critical computing -- the company’s livelihood -- with two people. In Arkansas, another mainframe manager said “I’ve replaced dozens of x86 servers -- and will replace dozens more -- with our mainframe. Here are the five offices of the people I use to manage this environment. People don’t believe me when I tell them this -- but it’s true!” Incidentally, mainframe managers have been using dashboards with monitoring, visualization, and control facilities for years. They don’t even think of it as service management software. But it is.
Q - Not every customer has a mainframe. At what point(s) should a business consider adopting the platform for service management?
A - Not every customer has a mainframe, and not every customer needs a mainframe. If you’ve got a ton of applications and you want to run on the most efficient general workload processor, you should be considering using z/OS, z/VM, and/or Linux on a mainframe. If you don’t have enough work for a mainframe but want to run high-RAS (reliability, availability, security) applications in a mission critical environment, use IBM POWER systems (my opinion: SPARC and Itanium are dead-end architectures). If you want to run Windows or Linux on x86 Xeon processors, you’re going to deal with a bunch of distinct servers that will need to be administered and managed. And you’re gonna need a ton of people to manage those discrete servers. To reduce the need for so many administrators and managers (and to save associated salaries/benefits), as well as to reduce costs related to human error that cause downtime and potential loss of business, you need to adopt service management software. Bottom line: use a mainframe if you have a lot of work to do and you don’t want to spend a ton of money managing that environment. Buy a Unix server if you don’t have enough work to keep a mainframe busy. Buy an x86 server if you need Windows, or if you want to run Linux on x86 platforms. In each case though, use service management software to reduce your management costs.
IBM is a client of Clabby Analytics and has provided compensation to Clabby Analytics for participation in this interview.
Top 10 Ways to Participate in Pulse 2010 “Integrated Service Management”
10 – Come and attend interesting presentations
9 – Attend and enroll in a workshop
8 – Attend with a colleague
7 – Quiz the experts
6 – Present on a topic
5 – Share a best practice
4 – Share details of a project
3 – Share a technique
2 – Participate in a panel discussion or roundtable
1 – All of the above!
Pulse 2010 will be held February 21-24 in always exciting Las Vegas and will feature the best of the best in Service Management. There will be announcements, presentations, demonstrations and receptions. Analysts and experts will address specialized topics for business leaders, technicians, and executives. Get smart, get certified and get connected. Plan to participate now. The call for papers and session contributors ends November 2.
Perils, pitfalls and problems need solutions. So, I asked Jasmine Noel to continue her discussion and to focus on what an enterprise might do to address the issues. Jasmine -
Tools to Tame Heterogeneous Virtualization Management
As I mentioned in my initial blog, virtualization is elevating the daily stress for over-extended datacenter administrators to alarming levels. Higher business agility and lower capital costs through virtualization can’t be achieved without sophisticated, complete, and always-on management of heterogeneous virtual computing resources. In other words, agile management of heterogeneous virtualization is critical for business agility and profitability. Hence, administrator stress levels are going through the roof precisely because they don’t have agile management of heterogeneous virtualization.
It is vital, then, and not optional, that IT organizations become adept at matching increasingly dynamic needs (be they automated workload schedules or on-demand service requests) with the resource flexibility afforded by virtualization’s rapid provisioning strengths. As such, system administrators are quickly realizing that they need to look for a different type of management solution. So what are the characteristics of this new solution type?
A policy-based approach is a key characteristic of this new solution type. Virtualization enables application images to move to different virtual machines very quickly (in some cases before admins get a chance to check their emailed service tickets). Policies enable IT to introduce some necessary controls around those moves and changes. Policies can also simplify the workload-to-resource matching process. For example, policies can ensure that multiple workloads all run on the same virtual machine in a particular datacenter location, or incorporate time-of-day requirements so that a workload spans ten virtual machines during the day but thirty virtual machines at night.
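A minimal sketch of that time-of-day example, with a hypothetical policy object whose names are illustrative rather than drawn from any real product:

```python
# Hypothetical sketch of the time-of-day policy described above:
# a workload spans 10 virtual machines during the day and 30 at
# night. All names and structure here are illustrative only.
from dataclasses import dataclass
from datetime import time

@dataclass
class TimeOfDayPolicy:
    day_start: time
    day_end: time
    day_vms: int
    night_vms: int

    def target_vm_count(self, now: time) -> int:
        """Return how many VMs the workload should span right now."""
        if self.day_start <= now < self.day_end:
            return self.day_vms
        return self.night_vms

batch_policy = TimeOfDayPolicy(time(8), time(20), day_vms=10, night_vms=30)
print(batch_policy.target_vm_count(time(14)))  # 10 (daytime interactive load)
print(batch_policy.target_vm_count(time(23)))  # 30 (overnight batch window)
```

The point of the policy object is that the sizing decision is evaluated automatically on every scheduling pass, with no administrator in the loop.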
Centralized management of dense computing infrastructures (be they blades, packaged cloud systems, or mainframes) appears to be a natural extension of this phenomenon because, for the first time, administrators truly need a centralized management platform to manage virtual systems. Many IT organizations are learning the hard way about the sprawl created when virtual images are deployed at will with minimal oversight or visibility into how the environment is changing. System administrators need management solutions that make oversight and resource visibility as quick and seamless as deploying the virtual images. Solutions which afford centralized control over all virtualization options and extend across a diverse infrastructure enable optimal use of administrators’ time.
Besides centralization, the solution should also have a workflow-based approach to automation embedded in its design. Automating the many other tasks (patching, security checks, compliance checks, etc.) that surround virtual image deployment drives down the business risk of high-speed image deployment. When these tasks are orchestrated as complete workflows, IT productivity skyrockets, which gives administrators the time to focus on policy design and decision-related resource analysis.
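For illustration only, here is a small Python sketch of that workflow idea; the step functions are stand-ins rather than a real automation product's API, and the point is simply that the surround tasks execute as one ordered, fail-fast sequence:

```python
# Illustrative sketch only: the surround tasks named above (patching,
# security checks, compliance checks) chained into one workflow so an
# image is never released with a step skipped. Step functions are
# stand-ins, not a real automation product's API.
def patch(image):
    print(f"patching {image}")
    return True

def security_check(image):
    print(f"security scan on {image}")
    return True

def compliance_check(image):
    print(f"compliance audit of {image}")
    return True

def release(image):
    print(f"{image} released to service")
    return True

DEPLOY_WORKFLOW = [patch, security_check, compliance_check, release]

def run_workflow(image, steps=DEPLOY_WORKFLOW):
    """Run each step in order; halt at the first failure so a
    half-configured image never reaches production."""
    for step in steps:
        if not step(image):
            print(f"workflow halted at {step.__name__} for {image}")
            return False
    return True

run_workflow("websrv-image-42")
```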
There is no escaping these solution requirements. The system management status quo can’t deliver the agile management of heterogeneous virtualization that is essential for business agility and profitability. What administrators can do is demand that management vendors prove how they deliver on these requirements.
IBM is a client of Ptak Noel and has provided compensation to Ptak Noel for participation in this interview.
Where did April go? Seems everyone crawled out from the blanket of winter and stepped into a very hectic schedule. I know I did. Among the events were two key user conferences, which are summarized below. In case we did not see you this year, we hope you will plan to attend next April.
- Automated Operations Technical Council
The 2010 Automated Operations Technical Council (AOTC'10) occurred in Philadelphia April 12-17, 2010 and was a great success. AOTC is the annual event for people who are interested in maximizing their investment in automation, high availability, and business continuity. Its focus is on technical presentations delivered by both IBM technical specialists and automation customers that provide practical how-to knowledge that customers can take back to the office and use. Customers from around the world shared their experience using automation to enhance their business, along with best practices and questions about automation. Experts from the product development lab in Germany and the US, the Software Migration Project Office, and other technical organizations shared product details, demonstrations, and automation tools and techniques. The agenda included detailed information about upcoming product vision, briefings about operating system and hardware directions, numerous customer experience presentations, and the very popular live version of the online user forum, which allowed participants the opportunity to ask questions of developers and fellow automation specialists in an open and lively format.
Attendee feedback included: "It’s the most beneficial event in the automated operations field to get firsthand information and best practices in automated operations" and "A conference with good speakers, interesting subjects, and good people to discuss with. I found many topics to take back and reassess how we have used System Automation." For more information about this year's event, see www.ibm.com/training/conf/us/aotc. Planning is already underway for AOTC in 2011 and we hope that you will plan to join us then.
ASAP 2010 -- the 22nd Annual Tivoli Workload Scheduler (TWS) User Education Seminar -- was held in Carefree, Arizona on April 18-21, 2010.
This year ASAP was all about making the quantum leap to Tivoli Workload Scheduler (TWS) 8.5. The four-day conference kicked off with a keynote presentation by Kendall Lock, Director of the Rome Lab, who engaged the conference audience with examples and challenges about how workload automation can be applied to cutting edge business projects and act as the key driver for business processes. The conference continued with nearly 70 technical sessions about workload automation and related topics that were presented by technical specialists from the IBM Rome lab, customers, and business partners. Some of the attendees had this to say about the ASAP conference: "[Just one] conversation probably saved me 20 hours of work. That class alone made the conference worth [participating in ASAP]", "The presentations were certainly relevant, covering several things from the basics of best practices to new features and useful hints such as the agentless 'agents'.", and "I came away with three aha's that I could apply immediately to my TWS packages." Mark your calendar now and plan to join us at the next ASAP on April 10-13, 2011 in Fort Lauderdale, Florida. Watch the www.twsusers.org web site for more information.
* Consolidation of four IBM Tivoli z/OS storage management, audit, and reporting product components
* Data protection for integrated catalog facility (ICF) catalogs to help increase data availability
* Prevention of, and recovery from, many space-related abends
* Audit and automatic correction of errors in the DFSMS environment to help reduce operating cost and avoid costly outages
Speaker: Kevin Hosozawa, Tivoli System z Storage Management Product Manager, IBM Software Group
Broadcast Date: December 5, 2012 at 11:00 a.m. Eastern Standard Time, 4:00 p.m. GMT / UTC
Out-of-control storage growth is putting a heavy load on many storage administrators — from the management of complex, multi-vendor heterogeneous IBM® System z® storage devices, to resolving performance bottlenecks with little visibility into the storage environment. Without an increase in budget or staff, it’s also becoming more difficult to minimize outages and manage changes in the storage environment.
Join us for this complimentary teleconference as we review all the recent new capabilities in the Tivoli® System z Storage Suite, including integrated z/OS® storage management capabilities — visibility, control and automation — for the storage administrator. The suite can provide z/OS storage management awareness into application and workload management, enabling a more efficient process that helps reduce costs and fosters efficiency in problem identification and resolution.
The IBM Tivoli Advanced Storage Management Suite for z/OS, V1.1 delivers a feature-rich storage management, audit and reporting software suite that includes:
* IBM Tivoli Advanced Audit for DFSMShsm V2.4
* IBM Tivoli Advanced Reporting and Management for DFSMShsm V2.4
* IBM Tivoli Advanced Catalog Management for z/OS, V2.4
* IBM Tivoli Advanced Allocation Management V3.2
You’ll learn how the new capabilities in this suite can improve storage availability and performance at a lower cost.
Join us after the teleconference for a live question-and-answer session. The teleconference will also be available for replay after the event.
OMEGAMON XE for Messaging is a complete WMQ management solution. Many customers focus exclusively on the performance monitoring aspect of the product, which is the ability to monitor WebSphere Broker and WebSphere MQ, and disregard the WMQ configuration agent.
The configuration agent is a robust, feature-rich component of the solution. However, the features and associated benefits of the configuration component are distinctly different from those of the monitoring components. When discussing the WMQ Configuration agent, I like to begin by covering the business benefits it offers. The following items (IMHO) represent the business benefits of the WMQ Configuration agent; a small illustration follows the list.
* Centralized administration of WebSphere MQ objects
* Single GUI across platforms
* Eliminates configuration errors
* Assists in dealing with large numbers of objects
* Disaster recovery tool
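To illustrate the configuration-error and disaster-recovery benefits in the list above, here is a hypothetical sketch (not the agent's real interface; the queue manager and queue names are invented) of the kind of drift check a centralized configuration tool automates: comparing each queue manager's actual queue attributes against a single desired definition.

```python
# Hypothetical illustration (not the agent's real interface) of why
# centralized configuration helps: compare each queue manager's actual
# queue attributes against one desired "golden" definition and report
# drift -- the kind of check the configuration agent automates.
GOLDEN = {"APP.REQUEST": {"MAXDEPTH": 5000, "MAXMSGL": 4194304}}

ACTUAL = {
    "QM_PROD1": {"APP.REQUEST": {"MAXDEPTH": 5000, "MAXMSGL": 4194304}},
    "QM_PROD2": {"APP.REQUEST": {"MAXDEPTH": 640,  "MAXMSGL": 4194304}},
}

def find_drift(golden, actual):
    """Yield (qmgr, queue, attribute, expected, found) for each mismatch."""
    for qmgr, queues in actual.items():
        for qname, attrs in golden.items():
            for attr, expected in attrs.items():
                found = queues.get(qname, {}).get(attr)
                if found != expected:
                    yield qmgr, qname, attr, expected, found

for drift in find_drift(GOLDEN, ACTUAL):
    print("drift:", drift)  # reports QM_PROD2's MAXDEPTH of 640 vs 5000
```

The same golden definitions double as a disaster recovery asset: if a queue manager is lost, the desired state is already captured in one place and can be replayed.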
The list of features offered by the WMQ Configuration agent is too lengthy to include here, but a full accounting can be found in the product documentation at: http://publib.boulder.ibm.com/infocenter/tivihelp/v15r1/topic/com.ibm.omegamon.mes.doc_7.0/kmcuserguide700.pdf.
Future blog entries will cover this material in more detail.
* Advanced problem determination using focused scenarios designed by customers
* Combining OMEGAMON DB2 information with CICS® and z/OS® in a new enhanced 3270 workspace
* Improved efficiency with fewer screen interactions to find root cause performance impact in real time
* Additional end-to-end response time measurement capability
Speaker: Steve Fafard, Product Manager, OMEGAMON for DB2, IBM Software Group
Broadcast Date: August 9, 2012 at 11:00 a.m. EDT / 4:00 p.m. BST / 3:00 p.m. UTC
Developed for: IT and enterprise architects and managers; system analysts; operations managers; system administrators
Technical level: Intermediate
Is DB2® performance important to your business? Can you quickly find and fix DB2 application problems before they become outages? As a critical part of the enterprise, DB2 shops rely on top-notch performance and 100% availability. But monitoring and managing it is an ongoing challenge as the stakes continue to be raised for the most competitive businesses. IBM has continued making major enhancements to IBM Tivoli® OMEGAMON XE for DB2 Performance Expert on z/OS V5.1.1 to make its monitoring and management faster, more productive and more effective.
Join us for this complimentary teleconference and learn how redesigned capabilities in the Tivoli OMEGAMON family of products can still meet the highest expectations to address performance and outage issues. IBM has redesigned OMEGAMON XE for DB2 Performance Expert based on customer requirements to provide significant new visibility into DB2 applications. The redesigned OMEGAMON XE for DB2 PE V5.1 enables subject matter experts to resolve DB2 issues across Sysplex and LPAR boundaries with fewer screens and keystrokes.
In this teleconference, we’ll discuss what’s new in OMEGAMON XE for DB2 PE V5.1, and how you can more easily monitor and maintain your entire mainframe environment to quickly find and fix problems before they become outages.
Join us after the teleconference for a live question-and-answer session. The teleconference will also be available for replay after the event.
Good news for all is that the “Call for Papers” for Pulse 2010 has been extended by popular demand. November 20th is the revised deadline, so there is more time to finalize creative input. I spoke with Marcus Boone, product manager for Tivoli’s System z portfolio, regarding suggestions for Pulse participation. He is eager to hear from those customers who can share real-world experiences on how an integrated approach to service management on System z is creating real benefits. Virtualization projects with savings, new Linux workload projects, or best practices in the very complex world of ‘always available’ are good examples. Marcus believes Pulse 2010’s emphasis on alternative forms of customer participation, such as roundtables, panels and open discussions to support a topic, will be both more interesting and more meaningful to all.
Pulse 2012 offers more than ever before for the System z community!
Offering four days of top-notch education, Pulse 2012 has a large number of System z focused activities to help increase visibility and improve the automation of your IT environment. One formerly standalone event will be held in conjunction with Pulse, with its own dedicated track featuring 23 sessions providing the knowledge to help you protect your business and IT services through end-to-end high availability and automation.
Additional System z sessions address key opportunities and challenges for today’s mainframe environments, including new product capabilities, scalable cloud computing, security and compliance, application delivery and service management for hybrid environments.