Modified by Michael Barton
What’s the difference between Big Data and just having a lot of data? Big Data has so much volume, variety, or velocity that you have to change the way you think about basic functions such as data storage, management, and protection. Often, you don’t notice a problem with Big Data until something breaks. More organizations are exploring Big Data, and a discussion about infrastructure needs can help drive project success.
Last week at IBM InterConnect 2015, Bernard Shen from Re-Store shared his experiences implementing Big Data solutions for business and scientific environments. Bernard began with a discussion of the similarities between scientific and business workloads when they run at large scale. As business data grows into the petabyte range, much can be learned from scientific workloads that use scale-out supercomputer environments. There are differences, to be sure (file sizes, use cases, etc.), but the similarities are much greater.
When data grows to multiple petabytes, traditional data management and data protection paradigms break down. For example, the time required to find, back up, or restore files can be unacceptable using ordinary processes and infrastructures. Supercomputer infrastructures provide a better model because they’re designed to deliver excellent service levels, regardless of data size, by using grid architectures that expand without architectural bottlenecks.
In multi-petabyte environments, small decisions can have big financial impacts. Bernard shared an example of a Life Sciences project needing multiple petabytes of storage. After understanding how the data would be used, Re-Store recommended a flash system for production workloads, funded by storing inactive data on lower cost storage. The solution delivers better performance than expected and saves millions of dollars, compared to the original proposal. Sometimes, it pays to get a second opinion.
Bernard also shared best practices for backup and restore in very large environments. Re-Store has experience with remote mirroring, snapshots, incremental backup, and Hierarchical Space Management. By using a combination of techniques, clients can reduce total costs without sacrificing data protection goals.
If your Big Data project uses all-disk storage, you may be spending too much and getting too little in return. Are you ready for a different approach? You could be spending more on innovation, and less on infrastructure.
Learn about Big Data infrastructure solutions from Re-Store and IBM at Caris Life Sciences and University of Colorado. Watch InterConnect General Session Keynotes via InterConnectGO. To stay connected, follow the conversation by using the hashtag #IBMInterConnect or #IBMStorage on Twitter.
About the Author
Mike Barton is a Worldwide Storage Marketing Manager at IBM. Prior to 2007, Mike was a Technical Manager and Principal IT Specialist for IBM, and a Sales Rep and Principal IT Specialist for Sybase. He holds ITIL Foundation and Gartner Group TCO Certifications. Mike has been with IBM over 15 years and has over 25 years of Information Technology experience. The opinions expressed herein are his own.
Modified by Anise Mastin
CERN, the European Organization for Nuclear Research, sits on the French-Swiss border near Geneva, where physicists and engineers probe the fundamental structure of the universe. Leveraging the world's largest and most complex scientific instruments, these leaders study the basic constituents of matter: the fundamental particles. The particles are made to collide at close to the speed of light, giving physicists clues about how particles interact and providing insights into the fundamental laws of nature.
CERN has an impressive budget of $1B USD, 3,000 employees, and 9,000 users, and collaborates with more than 500 institutes in 80 countries. There are currently four collider experiments (ALICE, ATLAS, CMS, and LHCb) that measure the time, charge, energy, and position of particles such as muons. The analysis occurs 150 meters (~492 ft) beneath the earth’s surface, in a tunnel 27 km (16.7 miles) in circumference and about 4 m (~13 ft) in diameter.
I was fortunate to visit CERN recently to tour the ATLAS project (see the selfie of me next to ATLAS) and was astounded to learn that ATLAS weighs a whopping 7,000 tons and records collisions at 40 MHz, one every 25 nanoseconds, with 1 PB of input producing 300 GB of output per collision. The massive level of data output, plus the critical need for data protection, makes it easy to understand why CERN leverages an efficient, reliable, and cost-effective backup and recovery solution: IBM TSM. IBM TSM was recently renamed IBM Spectrum Protect, a member of the software-defined Spectrum family of solutions.
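As a quick sanity check, the quoted 40 MHz collision rate and the 25-nanosecond interval are two statements of the same fact:

```python
# Sanity-check the quoted ATLAS numbers: a 40 MHz collision rate
# means one bunch crossing every 25 nanoseconds.
collision_rate_hz = 40e6
interval_ns = 1e9 / collision_rate_hz
print(f"Interval between collisions: {interval_ns:.0f} ns")  # → 25 ns
```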
Data Processing and Replication
CERN stores 110 PB of data on tape, with 90% of the data dedicated to physics and 10% targeted for backup by IBM TSM (now IBM Spectrum Protect); this equates to more than 20 PB per year. CERN operates two tape libraries, two TSM library managers (one per library), and six TSM servers.
IBM TS1140: 24 drives, 3,463 tapes, 196 MB/s write speed
IBM TS1130: 31 drives, 6,591 tapes, 155 MB/s write speed
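Taken together, the drive counts and write speeds above imply a rough aggregate write bandwidth. A back-of-the-envelope sketch (the continuous-writing figure at the end is purely illustrative, not a CERN statement):

```python
# Aggregate tape write bandwidth across both libraries (figures from above).
ts1140_mb_s = 24 * 196   # 24 drives at 196 MB/s
ts1130_mb_s = 31 * 155   # 31 drives at 155 MB/s
total_mb_s = ts1140_mb_s + ts1130_mb_s
print(f"Aggregate: {total_mb_s} MB/s (~{total_mb_s / 1000:.1f} GB/s)")

# Illustrative only: writing 20 PB at this rate, nonstop, would take:
pb_per_year = 20
seconds = pb_per_year * 1e9 / total_mb_s   # 1 PB = 1e9 MB (decimal units)
print(f"~{seconds / 86400:.0f} days of continuous writing per year")
```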
Besides operational efficiency, CERN also places a major focus on cost control. With two buildings dedicated to backup replication, CERN confirmed that data is laid out redundantly across an even number of disks per RAID volume (software RAID everywhere) for faster daily database backups.
~8.5 PB data – 2/3 backup and 1/3 archived
100 TB on average daily traffic
~1000 clients and 80 User Groups
Websites (~12000 sites); email; Servers (~1000 Linux/Windows)
Julian LeDuc, CERN Backup Service Manager, shared many details of CERN’s data backup operations during a breakout session at IBM InterConnect 2015 in Las Vegas, NV. According to Julian, “The IBM TSM suite satisfies our needs. We understand how it scales, how it performs, etc.”
Visit here for more detail on IBM data protection and recovery solutions available for large enterprises and midsized organizations. Also, download a complimentary white paper, Ten Ways to Save Money with TSM, to see how IBM TSM can back up and recover your data. Finally, check out the short video filmed during my visit to CERN on YouTube.
Modified by Niharika Trivedi NIHTRIVE@IN.IBM.COM
IBM InterConnect 2015, the premier conference for cloud and mobile, kicks off February 22-26 in Las Vegas, Nevada, delivering one of the most comprehensive technology events ever. While the excitement has already begun, let’s take a look at the top five reasons why you shouldn’t miss IBM InterConnect 2015:
General Sessions, Keynotes, and Break-out Sessions:
InterConnect 2015 offers you over 42 tracks, 8 streams, 3 general sessions to learn from the top industry experts about the latest and the greatest trends and technologies. From development to architecture to operations, InterConnect will provide 1500+ sessions worth of the best education, networking, and exhibits on topics like cloud, mobile, security, DevOps, and more. For instance, hear the latest IBM strategies from key executives like Tom Rosamilia, IBM SVP and gain insight from the General Session guest speakers, Barbara Corcoran, Daymond John, and Robert Herjavec of ABC's Shark Tank. InterConnect 2015 also offers you an opportunity to build your own agenda. Click here to learn more.
Business Partners Summit:
Save the date for the InterConnect 2015 Business Partner Summit, being held on February 22nd. This one-day summit offers a wide range of content and relationship building activities; networking with IBM executives, product and industry experts, and other Business Partners; and features bestselling author, venture capitalist, and entrepreneur Josh Linkner as the guest speaker. Download the Business Partner Summit Program Guide.
dev@InterConnect:
With great software comes great responsibility! This year, thousands of software developers, architects, designers, and programmers will leave their language, platform, and editor wars behind to come together for two days of sessions, training, and building with the people and companies who are creating the new technology landscape. Hack, make, break, and shake with people who are super smart at what they do. Just like you. Click here to see what to expect from dev@InterConnect.
Solution EXPO:
The Solution EXPO at InterConnect 2015 is the hub for networking, collaboration, and engagement. We’ve also incorporated new architectural elements that create a modern, functional space to facilitate the many demos and discussions taking place on the EXPO floor. Click here to check out the key features of the Solution EXPO at InterConnect 2015.
Get Social with InterConnectGO:
InterConnectGO is the digital interactive platform for InterConnect 2015. Hosted by gamer and video star Veronica Belmont, the event features three full days of live streaming video straight to your laptop or mobile device. If you have colleagues who can’t attend InterConnect 2015, encourage them to register for InterConnectGO for the free online digital experience. Also, if you like being on social, there’s a whole new opportunity waiting for you. Just share your InterConnect story or experience and get a chance to be featured as InterConnect Social Jockey on @IBMStorage. Don’t forget to mention @IBMStorage and include hashtag #IBMInterConnect in your posts.
Isn’t it exciting? So what are you waiting for? Register now and be a part of the InterConnect 2015 experience! For up-to-the-minute updates, follow #IBMInterConnect or #IBMStorage on Twitter. If you have any questions, please don't hesitate to contact me at email@example.com. I look forward to seeing you at InterConnect 2015!
Do you need a simpler approach for deploying Tivoli Storage Manager (TSM) for Virtual Environments (VE)? Are you experiencing issues because the data backup ratios promised by IBM marketing for TSM cannot be matched? Do you want to know how backup experts with multiple deployments behind them are handling these issues? Then reading this blog can help address your questions while removing some frustration.
First, it is important to note that this solution design case study is based on TSM v6.4, yet it is also valid for TSM v7.1 (announced October 2013) in the following areas:
Scalability and performance
Data reduction capabilities (deduplication)
Administration and monitoring
Hardware snapshot (flashcopy)
Next, the case study covers a client environment with a small number of VMs (500); yet the findings and results apply to environments of up to 2,000 VMs. The detail outlines a strategy leveraging three elements: transition, vSphere architecture, and backup storage device selection (see further description of each element at the bottom).
First Things First -- Case Study Environment Specs:
500 VM deployment (small scale); the solution design is the same for larger environments of up to 2,000 VMs
Demonstrates how to achieve objective
Works within infrastructure constraints -- network bandwidth limitation
Multiple retention requirements
Achieve reliable, successful backups
Dedup options include TSM native or appliance.
The dedup ratios should be benchmarked to ensure realistic estimates are used.
When considering cost and restore performance, it is important to evaluate the trade-off between performance and storage costs; consider collocation by file space (VM) with a virtual tape library (VTL), and configure critical VMs as exceptions for separate management.
Now About Those Ratios
If you've attained a dedup ratio closer to 3:1, that can actually equate to a 25:1 reduction ratio, depending on whether the reduction is measured against the changed data or against the data of the entire environment. The case study also examined network constraints, VTL selection, and sizing.
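The arithmetic behind that jump from 3:1 to 25:1 can be sketched; the environment size and change rate below are illustrative assumptions, not figures from the case study:

```python
# How a 3:1 dedup ratio on changed data can read as ~25:1 against
# the whole environment. All inputs are illustrative assumptions.
environment_tb = 100.0   # total protected data (assumed)
change_rate = 0.12       # fraction of data changed per backup (assumed)
dedup_ratio = 3.0        # dedup ratio measured on the changed data only

changed_tb = environment_tb * change_rate   # 12 TB moved this backup
stored_tb = changed_tb / dedup_ratio        # 4 TB actually written
effective = environment_tb / stored_tb      # vs. a full, undeduped copy
print(f"Effective reduction: {effective:.0f}:1")  # → 25:1
```

The same 3:1 dedup looks very different depending on the denominator, which is why benchmarking with your own change rate matters.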
Therefore, the main takeaway: with the right strategy, use of the TSM blueprints, and the following simplified estimate considerations --
Occasional full backup
Occasional image restore
Initial phase-in contingency
DR (disaster recovery) contingency
TSM for VE delivers a cost effective, simplified means to achieve reliable, successful backups.
Note: Optimal design will differ based on environment. Published case study made available upon request.
Global Program Director
IBM Software Group
Case Study Specs:
Scheduling scope -- how to achieve multiple retention policies
Utilized data (planned) -- 38,000 GB
Total VMs -- 500
Total ESX hosts -- 25
Total clusters -- 1
Daily data change rate --
Backup window -- 10 hrs
Monthly backup window -- 50 hrs
Daily retention -- 30 days
Monthly retention -- 12 months
Design Environment Constraints:
Eliminate backup traffic from network
Dual retention requirement: daily backup 30 days / monthly backup 12 months
Avg size 100 GB / VM projected
Range < 50 GB - 1.5 TB
38 TB planned / 20 TB actual
Initial full backup phase-in requirement -- how many hours/day?
Processing of earlier ESX versions where CBT (Changed Block Tracking) is not supported
vSphere Architecture -- vCenters, data centers, clusters
How do these correspond to the retention policy?
Is data store capacity evenly distributed or not?
Identify network infrastructure for backup/restore
Backup Storage Device Selection --
Cost-effective retention vs. restore performance
Deduplication strategy -- TSM, appliance, or both
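One open item above is the initial full backup phase-in ("how many hours/day"). A back-of-the-envelope sketch; the throughput figure is an assumption for illustration, while the 38 TB planned size and the 10-hour window come from the specs above:

```python
import math

# Rough phase-in estimate for the initial full backup. The throughput
# is an assumed figure, not from the case study.
planned_tb = 38.0         # planned utilized data (from the specs)
window_hours = 10.0       # nightly backup window (from the specs)
throughput_mb_s = 500.0   # assumed aggregate backup throughput

total_hours = planned_tb * 1e6 / throughput_mb_s / 3600  # TB → MB → hours
windows = math.ceil(total_hours / window_hours)
print(f"{total_hours:.1f} h total → phased over {windows} nightly windows")
```

Repeating the calculation with your measured throughput tells you whether the phase-in fits the contingency you planned.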
Modified by Michael Barton
IBM Edge 2014 Tuesday Storage Recap
Day 2 at IBM Edge 2014 focused on how clients, Business Partners and IBM are working together to build smarter infrastructures to meet the business challenges discussed on Day 1 (cloud, analytics, mobile and social).
Chris O’Connor, @chrisoc_IBM, Vice President of Strategy & Engineering, IBM Cloud & Smarter Infrastructure, spoke about the need to seamlessly extend infrastructures from what organizations own today to what they’ll need in the future. He recommended two approaches:
Cloud-enable existing workloads
Think about ‘cloud first’ for new workloads
The idea is to accelerate time to market and enable real-time, actionable insights. With 70% of enterprises planning to pursue hybrid clouds by 2015, according to a 2013 Gartner report, a two-pronged approach makes sense.
Andrea Nelson, Director of Storage Marketing at Intel, corroborated Gartner’s estimate, saying an estimated 50% of organizations less than 10 years old are putting their IT infrastructures on the cloud today.
Chris spoke about the importance of standards, such as OpenStack, that help organizations quickly assemble Software Defined Systems from components, rather than building clouds a stick at a time. With new development platforms such as IBM’s Code name: BlueMix, organizations can construct enterprise-capable cloud applications faster, without having to deploy a cloud infrastructure.
Mike North, Sr. Director of Programming for the National Football League, spoke about the importance of speeding up the infrastructure to enable analytics. ‘Time to truth’ is critical for analytics. With faster processing, the NFL is able to look at hundreds of potential schedules and choose the one with the best potential outcomes for its constituents. IBM’s Arvind Krishna suggested that traditional analytics is like driving a car by looking in the rear-view mirror: you can only see where you’ve been. Predictive analytics helps you see into the future, react faster, and achieve better business results.
Maria Winans, @mariawinans, IBM VP of Social Business, spoke about how IBM and other organizations are driving people-centric engagement for new profit channels. She also spoke about the importance of analytics, saying you can’t personalize customer experiences if you can’t do the required analytics. Maria offered 3 suggestions for successful social business initiatives:
Build shared value
Protect your brand
New mobile applications offer the opportunity to improve customer satisfaction and customer loyalty, as well as generate new revenue. Rapid transformation is happening across industries and geographies. IBM estimates there will be over 1 trillion connected objects and devices by 2015. Mobile applications are enriched by cloud, analytics and social business initiatives.
Storage virtualization and Software Defined Storage
Storage virtualization is the foundation for Software Defined Storage. Virtualization provides an abstraction layer between physical storage and the applications that use it. The result is a storage infrastructure that can grow and change without impacting users or applications. Software Defined Storage will be required to manage the vast amounts of data organizations expect to handle in the years ahead.
Steve ‘Woj’ Wojtowecz, @steve_woj, IBM Vice President, Storage Software Development, shared new research from ITG that analyzed storage TCO using IBM, EMC, and VMware storage management solutions. ITG highlighted four issues that significantly impact storage TCO, including:
Storage software costs
Storage administration costs
IBM Virtual Storage Center users were far more successful than their peers using EMC or VMware storage management solutions:
In large enterprises, storage TCO was 72% lower with IBM than with EMC
In midsized environments, storage TCO was 35% lower with IBM than with VMware storage management
Jose Garcia, Manager of Enterprise Storage and VMware at UCLA Health System, discussed his storage transformation project. Storage virtualization enabled rapid deployment of an Electronic Health Records system that improves patient care and improves organizational efficiency. Storage virtualization also reduced storage costs and enabled rapid data growth. Improved efficiency saved enough to fund a 3rd data center that will improve resilience and flexibility.
Collaborators wanted. No Eeyores. No squirrels.
Snehal Antani from GE Capital spoke about the importance of delivering IT at market speed and with commercial intensity. He offered a strategy for dealing with important groups of people in the organization:
Kings and Queens
Collaborators can accelerate change. Identify your collaborators and put them on a pedestal.
Cynics are like Eeyore in Winnie-the-Pooh. They’ll tell you why change is hard and focus on what might go wrong. Ignore your cynics.
Kings and Queens are executives and managers who are eager to be offended. They resist change that may impact their empires. They’re a small, but vocal, group. Don’t give them a megaphone.
Snehal also pointed out that technologists can get distracted by new technology, even if it isn’t essential to simplify or accelerate IT delivery. It’s like yelling, ‘Squirrel!’ to distract dogs, as in the movie, Up. GE Capital has signs that say, ‘No Eeyores’ and ‘No squirrels’.
Bottom line: Infrastructure matters
Can the right infrastructure help you build competitive advantage? Yes, of course. Infrastructure matters.
About the author
Mike Barton is a worldwide storage marketing manager at IBM. Mike is a former IT specialist with Gartner TCO and ITIL certifications. The opinions expressed herein are his own.
ITG Management Report: Cost/Benefit Analysis of IBM Virtual Storage Center Compared to EMC Storage Virtualization Solutions
ITG Management Report: Cost/Benefit Analysis of IBM Virtual Storage Center Compared to VMware Tools for Storage Virtualization and Management
IBM Software Defined Storage
TheCUBE by Wikibon
Software Defined Storage (SDS) is getting a lot of attention lately from press, analysts, and technology providers such as IBM, causing organizations large and small to take notice. SDS describes a set of storage access and data management services that can deliver what IT administrators are most interested in these days:
Lower storage costs
Less reliance on specific storage systems
Simplified data and storage management
Improved utilization of existing resources
International Data Corporation (IDC) published a taxonomy for Software Defined Storage which defines software-based storage as a storage software stack running on commodity, off-the-shelf computing hardware. SDS should offer a full suite of storage services and federation of the underlying storage to enable data mobility, according to IDC.
The interesting thing is, while the name Software Defined Storage is relatively new, IBM has been delivering technology and client solutions that match the SDS definition for over a decade.
Matching IDC’s definition, IBM SAN Volume Controller, introduced in 2003, is an x86-based appliance running Linux code, providing federated storage virtualization across heterogeneous storage platforms and enabling advanced storage services. SAN Volume Controller has been proven to scale to multiple petabytes. This core technology is also included in IBM’s midrange Storwize storage systems. To date, over 55,000 SAN Volume Controller and Storwize systems have been shipped worldwide, making it one of the most popular business-class storage virtualization solutions.
Sitting on top of the storage virtualization platform, IBM Virtual Storage Center offers industry-leading end-to-end storage management with analytics-driven data management and policy-based automation to enable self-tuning, self-optimizing storage. According to recent research by International Technology Group, IBM’s approach can reduce storage Total Cost of Ownership by up to 72% compared to EMC solutions in large enterprises, and up to 35% compared to VMware storage management solutions in midsize environments.
At the top of IBM’s storage software stack are interfaces that simplify storage, including:
OpenStack integration, for automated storage provisioning by cloud applications
VMware vSphere integration, which provides VMware administrators with a familiar interface for simplified storage provisioning and management.
An IBM advanced graphical interface that dramatically simplifies end-to-end troubleshooting and performance management, provisioning, and other time-consuming storage administration tasks
IBM Cloud Storage Access user self-service portal, sold separately
While other vendors scramble to build new offerings for SDS, IBM is extending proven technology that can address your needs today and help you migrate to new era workloads whenever you’re ready.
See IBM Software Defined Storage at the IBM Edge conference next week in Las Vegas. Software Defined Storage sessions will be presented at Exec Edge and Tech Edge, and we’ll have live demos in the Solution Center.
If you can’t attend Edge, look for video interviews with Brian Jeffery, Managing Director of International Technology Group, and Steve Wojtowecz, VP of Storage and Network Management Software Development, on TheCUBE by Wikibon, live on Monday, May 19, and afterwards on demand.
Learn more about VSC and the rest of IBM’s storage software portfolio at: http://www.ibm.com/software/products/category/storage-software.
About the author:
Jason Davison is the Segment Manager for Storage Virtualization and Cloud Solutions in IBM’s Cloud and Smarter Infrastructure product management group. Views expressed are my own.
Modified by Meghna Chatterjee MEGCHATT@IN.IBM.COM
IBM’s Data Protection has all the right pieces
Jason Buffington, Senior Analyst at ESG, in his interview with Dave Vellante of Wikibon, said IBM fills out the whole data protection spectrum and that its new UI is a great proof point for why it’s not your granddaddy’s solution. One of the top five problems people face in protecting a virtualized environment is lack of visibility, and IBM’s new UI does a great job of adding that visibility. IBM has all the right pieces with its breadth of data protection solutions, and with IBM starting to put cloud more aggressively into the mix, 2014 looks interesting.
Data Protection is a rainbow that must have all the colors
In Jason’s opinion, when defining data protection strategies, one should think of data protection as a rainbow, with backup, snapshots, replication, archive, and availability making up the different colors. When have you ever seen a rainbow with no green? The mechanisms of data protection should not only include the whole range of solutions but also a hybrid approach that spans tape, disk, and the cloud. Organizations can pick the color from the spectrum according to what they want to recover and how.
Disk, Tape, Cloud – they are all going to stay
Disk is not going to be the be-all and end-all. Tape is going to stay, with economic advantages and new innovations like LTFS that make tape look like disk, adding flexibility and durability. Cloud as a backup service is not the silver bullet either, because it’s only a deployment mechanism; it does not make your backup problems go away. One still has to run it, handle the administration, and push the agents out, and an on-premises intermediary appliance is needed for fast recovery before going to the cloud. However, when ESG looked at the primary use cases of cloud for the next couple of years, they found data protection at number one and disaster recovery at number three. Jason suggests that every solution should consider cloud as part of it.
Data protection need not be so Hard
His advice to IT pros who are worried about the cost and complexity of data protection is that it need not be so hard. Great solutions are available that allow you to back up, archive, snapshot, replicate, and perform an entire range of functions from a single GUI, to a single data store, from a single administrator’s view. People only need to wake up to the solution and start using it.
Check out other Wikibon Interviews at Pulse 2014
Modified by Sudipta Datta
In the two years since IBM acquired Butterfly, it has generated hundreds of Analysis Engine Reports (AERs) analyzing billions of gigabytes, and established facts about Tivoli Storage Manager (TSM) that should make the competition sit up and take notice.
The Backup Analysis Engine report from IBM Butterfly Software uses light-touch, agentless software technology to analyze an existing heterogeneous data backup environment. It is a non-intrusive analysis based on empirical production data collected in minutes.
Why is Butterfly important?
The Gartner Magic Quadrant for Backup and Recovery 2013 competitive analysis says that between 2012 and 2016, one-third of organizations will change backup vendors due to frustration over cost, complexity, and/or capability. Being able to say conclusively that a TSM solution can cut backup infrastructure costs by as much as 38% compared to some competitive products opens the door for IBM to pursue that one-third of organizations looking for a change.
AER is the Key
More demand for AERs is expected with the launch of the automated “self-service” AER generation model. Scheduled to go live at the beginning of 2H 2014, it will scale out as a service to IBM and its Business Partners. Butterfly AERs have metamorphosed into a well-accepted, standard approach to storage infrastructure analytics.
Meet the Butterfly Storage and Backup Assessment Team at Pulse 2014
If the butterfly flutter has caught your interest, visit Pulse 2014, February 23-26 in Las Vegas, and meet the folks who deliver Butterfly Storage and Backup Assessments in the IT Optimization section of the IBM booth. Find out how your company can use business analytics to dramatically lower the cost of running your backup and recovery or primary storage infrastructure.
Data protection matters! Actually, it matters even more with the advent of big data. The unique challenges of managing and protecting big data have forced IT professionals to revisit their data backup and protection policies.
Every year, ESG conducts a forward-looking spending intentions survey. They shared a couple of interesting facts that do not surprise but definitely reinforce my thoughts. When organizations were asked what they would consider their most important IT priorities over the next 16-18 months, 30 percent responded “improved data backup and recovery”!
And when asked what they would characterize as challenges with their organizations’ current data protection processes and technologies, “cost” and “the need to reduce backup time” emerged as the major concerns.
ESG analysts Mark Peters and Tony Palmer shared these insights as they took us through the results of their lab testing on Tivoli Storage Manager. If you are not familiar with IBM Tivoli Storage Manager (TSM), it is scalable client/server software primarily designed for centralized, automated data protection. The goal of the ESG report is to educate IT professionals and provide insight into advanced data backup technologies, such as forever-incremental backup and deduplication, and why they are so important today. Click here for the ESG video.
The TSM lab validation was performed using a combination of hands-on testing, audits of IBM customers in live production environments, and detailed discussions with IBM experts. The objective was to validate some of the valuable features and functions of the product, show how they can be used to solve real customer problems, and identify any areas for improvement.
IBM has continuously invested in the TSM platform, bringing innovation to data protection and recovery. ESG evaluated how the newer versions of TSM provide a turnkey solution to a range of data protection issues. They found that the two technologies (deduplication and progressive incremental backups) working in tandem achieved 90 percent data reduction after just six incremental backups and 95 percent after ten. The replication function is also fully integrated with deduplication, optimizing for quicker recovery during disasters. TSM uses policy-based automation along with intelligent move-and-store techniques, helping to reduce data administration effort. Overall, ESG’s validation rightfully points to the key enhancements to the TSM platform that drive greater scalability, efficiency, and data availability.
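A toy model, not ESG's methodology, shows how percentages in that range can arise: it compares progressive incremental backups plus deduplication against taking a full, undeduplicated copy every cycle. All inputs are assumptions for illustration.

```python
# Illustrative model: data stored under progressive incremental + dedup,
# versus n+1 full undeduped backups. All inputs are assumptions.
full_tb = 10.0       # size of the protected data set
change_rate = 0.05   # fraction of data changed per incremental
dedup = 2.0          # dedup ratio applied to everything ingested

def reduction(n_incrementals: int) -> float:
    """Fraction of storage saved vs. keeping a full copy per cycle."""
    stored = full_tb / dedup + n_incrementals * full_tb * change_rate / dedup
    naive = (n_incrementals + 1) * full_tb   # a full copy every cycle
    return 1 - stored / naive

print(f"After 6 incrementals:  {reduction(6):.0%}")   # ~91%
print(f"After 10 incrementals: {reduction(10):.0%}")  # ~93%
```

The saving grows with each incremental because the naive baseline keeps re-storing the full data set while the incremental chain only adds deduplicated changes.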
Please register and download the detailed 23-page ESG Lab Validation Report.
Opinions are my own
If you are following the developments related to Pulse 2013, you’re likely well aware that Peyton Manning has been announced as the keynote speaker and that the evening entertainment at Pulse Palooza will feature Carrie Underwood. If you’ve been to Pulse before, you also know you can expect compelling thought leadership in the General Sessions and deep content in over 300 breakout sessions to choose from.
Over and above all that exciting news, there’s one thing that keeps attendees coming back year after year: the opportunity to network. Each year following Pulse, attendees tell us through the post-Pulse survey that networking with the over 8,000 conference attendees rises to the top as the most valuable aspect of the event.
The opportunity to network with your storage colleagues at Pulse 2013 will once again be front and center at the conference. Formal opportunities exist, such as the Storage Birds of a Feather session, Meet the Experts in Storage, and Client Connections, along with access to storage subject matter experts from development and product management in the Expo Hall. And of course, in Las Vegas there will also be plenty of informal gatherings where you can connect with storage professionals to share knowledge and expertise.
A great way to start the networking process is to take in the numerous client-led sessions in the Unified Recovery and Storage Management track within the Cloud and IT Optimization stream at Pulse 2013. Following the track kick-off, which features Dave Russell, Research Vice President at Gartner, you’ll have the opportunity to hear IBM clients sharing their experiences. Some highlights include:
• Learning about the experiences of Chesapeake Energy with the new TSM Backup and Recovery Dashboard based on their participation in the Early Adopters Program;
• Understanding how The University of Sydney is using SmartCloud Virtual Storage Center to provide centralized management of its diverse storage environment;
• Hearing how Banco do Brasil improved its backup capabilities by taking advantage of the latest advancements in Tivoli Storage Manager;
• A panel of experts from Blue Cross Blue Shield of Louisiana, Kindred Healthcare and Centene Health discussing how they are protecting healthcare data with IBM storage solutions.
While this is just a small sampling of the organizations that will take to the podium in the Storage track at Pulse, there’s a wealth of experience on offer to help you tackle your most pressing storage management challenges. Taking in the sessions is only the beginning; connecting with these storage professionals through the numerous networking opportunities at Pulse is how the conference truly comes to life.
If you’re already registered for Pulse, you can start networking now by connecting with the growing list of speakers and other conference attendees on the Pulse2013 Vivastream site. If not, visit the PULSE 2013 home page for all the conference details and to register today.
Please plan to join IBM and thousands of your peers at the MGM Grand Hotel in Las Vegas, March 3 to 6, 2013.
PULSE is IBM’s premier event focused on business transformation and IT optimization, helping clients learn how to turn opportunities into outcomes.
As the planet becomes smarter, it becomes clear that a solid, robust, scalable and cost-effective IT infrastructure is required to create, store and manage all the information at the heart of these new opportunities.
Unified Recovery and Storage Management is the cornerstone track within the Cloud and IT Optimization stream at PULSE 2013. We are putting together a very exciting agenda, and I’d like to give you a preview of what you can learn from your peers, thought leaders, and yes, a few IBMers, by attending this track.
We kick off the track on Monday with a keynote presentation by Dave Russell, Research Vice President at Gartner. Dave will describe the trends that his team is seeing, and encourage you to take a position on transforming and optimizing your data management infrastructure.
During the 3 days of breakout sessions, you will learn how many of our customers have started on this journey, including best practices and outcomes. Our speakers include subject matter experts from:
• Two major banks
• Two universities
• Five healthcare organizations
• Several consumer and industrial companies
• Five managed service providers
• A leader in media and entertainment
Some of the top-of-mind topics that will be covered include management and protection of virtualized server and storage environments; advancements in disaster recovery and business continuity; storage in the Cloud, storage as the Cloud, and storage to the Cloud; backup consolidation and simplification; and how to easily cost-justify an efficiency improvement project to your management.
You can also learn how IBM “eats its own cooking” as the IBM Office of the CIO describes its use of IBM storage management software to drive costs out of our business while meeting the computing demands of a company the size of IBM.
You will also have the opportunity to learn about new products and enhancements – we can’t tell you what they are yet, but we’re pretty excited.
You can see who our expert speakers are, what they’ll be speaking about, and start to build your Pulse experience this year by visiting the Pulse SmartSite and Agenda Builder at: http://www.pulsesmartsite.com/
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
We’re getting deep into the planning for our 6th annual PULSE conference (ibm.com/pulse), and I’m getting very excited about the storage content being assembled. Again, it will be at the MGM Grand Hotel in Las Vegas, March 3–6, 2013.
At our Storage Track Kickoff session, we’ll have some new things to announce and highlight, and we’re close to announcing an exciting keynote speaker.
Following the track kickoff, we’ll have 20 breakout sessions on data protection and storage management topics, covering advances in virtual machine protection, disaster recovery, cloud integration, and a lot more. We’re mixing it up a lot more this year to ensure you get a range of perspectives. We’ll have 21 client speakers discussing their experiences and best practices; plus 8 business and technology partners providing insights into added value approaches to storage management who will be complemented by IBMers sharing the new stuff we’ve been working on.
Among the client speakers will be storage professionals from across the globe representing major banking, healthcare, media, industrial and university organizations. There will also be sessions on a variety of cloud topics, including private cloud storage and backup-as-a-service opportunities.
To follow on a theme mentioned by Steve Mills in his keynote at PULSE 2012, we’ll show how IBM “eats its own cooking”, sharing how IBM’s Office of the CIO transformed its massive storage infrastructure; and how IBM’s Strategic Outsourcing services organization is leveraging our products to more effectively manage their clients’ storage environments.
There will be many cool things to see in the expo center again this year, including offerings from many of our ecosystem partners, and you can roll up your sleeves in the hands-on labs and product training and certification areas.
Have you heard about this year’s PULSE PALOOZA entertainment? We rocked the Grand Garden Arena with Maroon 5 in 2012, and will follow that with Carrie Underwood in 2013.
Now’s the time to act. Early bird registration, which saves client attendees $500 off the conference fee, closes December 31st. Go to http://ibm.co/pulseregister and get ready for an outstanding event. I look forward to seeing you there.
"The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."
Business Continuity experts get ready. As communities in the Northeast US recover from devastating personal losses, people will begin to recover their businesses. We may experience the largest IT recovery effort in history.
Business losses from Hurricane Sandy will be enormous. Besides property loss, there is apt to be significant loss of information. There have never been so many information-driven organizations impacted by a single event. Power grid outages in India last summer impacted ten times more people, over 600 million, but far less of the world's business information.
Many organizations relocated their data ahead of Hurricane Sandy, but others weren’t as lucky. Systems may need to be replaced and data recovered from backup vaults. The extent of data loss could take weeks to assess.
If you have all your data today or, better yet, didn't experience an IT outage, hug your IT team. Keeping your information safe and available has never been more challenging.
To the IT heroes who recover our data after disasters, we salute you. If you’re working on Hurricane Sandy recovery, snap a few photos or capture some video, as a reminder of why we have backup systems. Please share your favorites here.
Cost of backup can be reduced up to 40% with an effective archive
At last week’s IBM Information Management on Demand (IOD) event in Las Vegas, Michael Barton (Product Marketing Manager, IBM Tivoli Storage Software) presented “All-in-One file archiving with IBM Tivoli Storage Manager Suite for Unified Recovery – Archive Option.” IOD drew more than 12,000 attendees and kicked off on October 21 with a General Session featuring analytics and big data, continuing all week with keynote sessions on business leadership, information management, business analytics and enterprise content management.
Storage was front and center at IOD, with both Michael Barton’s TSM SUR – Archive presentation and a presence in the IBM booth focused on IBM SmartCloud Virtual Storage Center’s storage analytics engine and capabilities.
Digging into Mike’s presentation:
- One industry analyst said that 40 percent of the planet’s data should be archived, or moved out of the production environment. That seems reasonable; applications simply run better with less data to manage.
- Using a backup system for archiving can off-load production systems, but it can also bloat the backup environment and make your data harder to find.
- With the rapid growth of unstructured file data, the benefits of an effective file archiving solution are becoming more apparent. Bloated file servers and NAS systems can impact productivity for users, administrators and backup processes. Archiving can also reduce requirements for power, cooling and floor space.
- Industry analysts estimate the cost of backup can be reduced up to 40% with an effective archive solution. IBM's approach to file archiving implements policy-based tiered storage, so large numbers of files can be managed effectively.
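To see the arithmetic behind these estimates, here is a rough back-of-envelope model. All figures are hypothetical assumptions for illustration only, not numbers from Mike’s presentation: the idea is simply that data moved to an archive tier no longer churns through every backup cycle.

```python
# Back-of-envelope model: archiving inactive data shrinks the
# amount of data protected in every subsequent backup cycle.
# All inputs below are hypothetical, chosen to mirror the
# "40% of data is archivable" analyst estimate cited above.

production_tb = 1000          # hypothetical production data (TB)
archivable_fraction = 0.40    # assumed share of inactive, archivable data
cost_per_tb_backup = 50       # hypothetical cost per TB protected per cycle

# Without archiving, every backup cycle protects the full data set.
cost_without_archive = production_tb * cost_per_tb_backup

# With archiving, inactive data moves to a cheaper archive tier and
# drops out of the backup workload; only active data is protected.
active_tb = production_tb * (1 - archivable_fraction)
cost_with_archive = active_tb * cost_per_tb_backup

savings_pct = 100 * (cost_without_archive - cost_with_archive) / cost_without_archive
print(f"Backup cost reduction: {savings_pct:.0f}%")  # prints: Backup cost reduction: 40%
```

Real savings depend on archive storage costs, retrieval patterns and retention policies, but the sketch shows why the archivable fraction maps so directly onto backup cost reduction.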
The Tivoli Storage Manager Suite for Unified Recovery (TSM SUR) Archive solution provides a simple, scalable way to significantly reduce costs for file storage. The solution supports information lifecycle management for efficient, long-term data retention.
Want to know how you can save money using Tivoli for file archiving? The solution can:
- Reduce the space required for archive data by up to 50%, using deduplication
- Migrate archived data between storage tiers and storage systems without disrupting users or applications, making it easier to retire old systems
- Support decades-long retention times without complexity, using policy-based administration
Read more about this here: Tivoli Storage Manager – Archive solution
IBM’s Business Without Limits
Recently, I had the distinct pleasure of delivering a presentation on Data Storage and Compliance at the IBM Tivoli event 'Business Without Limits 2012' in Bangalore, India. More than 100 attendees from almost every industry attended the event.
My track for the day: Addressing Data Growth, Threats and Compliance; Unified Recovery.
The volume, velocity and importance of data have increased dramatically during the past few years, to the point where most backup and archiving solutions can't keep up with the scalability, functionality, performance, reliability and budget realities of today and tomorrow. Attendees learned how to reduce backup data capacity by as much as 95%; how to reduce the amount of new data at risk by 90% or more; and how to simplify global data recovery operations and achieve compliance by leveraging a unified management approach.
It was a privilege to present in such an interactive session, where customers learned how our broad product portfolio can help address their business challenges.
IBM now brings ‘Business Without Limits 2012’ to several cities across the United States in October and November. This is an exclusive IBM Tivoli event designed to build awareness and thought leadership among IT managers, infrastructure leaders, systems administrators, storage managers, and data center managers. IBM’s Business Without Limits event is coming soon to the following cities:
Oct 18: Philadelphia
Oct 25: Columbus
The event will focus on how IBM’s Integrated Service Management strategy brings together different capabilities to enable integrated delivery of business services across complex, interconnected physical and digital infrastructures.
IBM’s Business Without Limits event will have the following storage tracks:
- The pivotal role of storage in the modern data center
- Backup and unifying recovery
- Your data protection headaches to the cloud
- Storage analytics and reporting
This conference will explore how you can capitalize on the opportunities of a smarter planet and remove the barriers to innovation, helping you achieve “Business Without Limits.” As today’s leaders transition to smarter, flexible cloud infrastructures that speed the delivery of innovative products and services, effective storage management becomes a critical component of that success. Please join us at this event to learn more!