Enterprise data protection is not a new topic (it's actually a core IT principle: protecting data), and we continue to see growing investment in data protection methodologies in enterprises across the globe. While some of our competition want you to believe that all data needs can be satisfied by placing all data on spinning disk, that doesn't seem very economical to me. Tape is the greenest IT offering to date across the market. We are not going to discuss tape offerings here; instead, let us discuss what IBM helped write and released early in 2010. The product offering is called LTFS, which stands for Linear Tape File System. This new offering gives admins drag-and-drop capabilities for files, making the tape appear to the operating system as a removable media format (flash drive, DVD, etc.). So drag and drop to and from tape seems like an inexpensive solution for video archives.
How does LTFS work?
LTFS is a physical media partitioning technology: it creates two partitions on an LTO-5 tape (an index partition and a data partition), with the index schema stored in XML format.
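Because LTFS presents the cartridge to the OS as an ordinary filesystem, plain file operations just work. Here is a minimal Python sketch, assuming a tape already LTFS-mounted at /mnt/ltfs (the mount point and source file are hypothetical):

```python
import shutil
from pathlib import Path

# Assumed mount point for an LTFS-mounted LTO-5 cartridge; adjust to your setup.
LTFS_MOUNT = Path("/mnt/ltfs")

def archive_to_tape(src: str) -> None:
    """Copy a file onto the tape exactly as you would onto a flash drive."""
    dest = LTFS_MOUNT / Path(src).name
    shutil.copy2(src, dest)  # ordinary filesystem copy; LTFS handles the tape I/O
    print(f"archived {src} -> {dest}")

if __name__ == "__main__":
    archive_to_tape("/data/video/raw_footage_001.mov")  # hypothetical example file
```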
What Operating Systems are supported?
RHEL 5.4 and 5.5, Mac OS X 10.5, and Windows 7 are supported, with others getting their stamp of approval.
What Backup Software is required?
No dedicated backup program such as TSM or NetBackup is required, as the OS can write directly to the tape.
What Hardware is supported?
LTFS v1 supports LTO-5 tape drives; tape libraries are still gaining their stamp of approval, so look for that in future releases.
LTFS Defaults: Block size is 1 MB.
Best of all, LTFS is FREE. Yes, that is not a typo; it's free.
What does it take for a
successful VDI implementation?
Let's think about a few integral steps in getting your VDI project from the whiteboard into production (we don't have enough space to cover all aspects, such as project management, change management, thin client selection, etc.).
VDI, a.k.a. Virtual Desktop Infrastructure, relies on your disk subsystem being a stable foundation on which you can build and grow your VDI environment, yet flexible enough to meet the demands of that environment moving forward.
Properly sized: How did you size your subsystem prior to deployment? (A worked back-of-the-envelope sketch follows this list of questions.)
How many I/Os did you allocate for each thin client?
How many more I/Os did you allocate for each application?
How will you handle the ‘exception
application’ which is not supported by VDI today?
Do not overlook applications that require higher uptime than others.
What type of disk drive did
you select to achieve this performance metric?
How large are the disk pools
you are going to utilize to keep your performance consistent (Proof of Concept,
Pilot, Initial rollout, full rollout)?
Did your sizing meet your performance expectations in your lab?
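To make these sizing questions concrete, here is a back-of-the-envelope sketch. Every figure in it is a hypothetical placeholder, not a recommendation; substitute measurements from your own desktop assessment:

```python
# Hypothetical VDI I/O sizing sketch; every figure below is an assumption you
# must replace with measurements from your own environment.
DESKTOPS = 500
IOPS_PER_DESKTOP = 15        # steady-state I/Os allocated per thin client
IOPS_PER_APP_OVERHEAD = 5    # extra I/Os allocated per application stack
BOOT_STORM_MULTIPLIER = 3    # peak factor for morning logon/boot storms
IOPS_PER_15K_SPINDLE = 180   # rough planning figure for one 15K drive

steady = DESKTOPS * (IOPS_PER_DESKTOP + IOPS_PER_APP_OVERHEAD)
peak = steady * BOOT_STORM_MULTIPLIER
spindles = -(-peak // IOPS_PER_15K_SPINDLE)  # ceiling division

print(f"steady-state IOPS: {steady}")
print(f"peak (boot storm) IOPS: {peak}")
print(f"15K spindles needed at peak: {spindles}")
```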
Protocol selection: What protocol did you select for the backend
connectivity (Fibre Channel, FCoE, iSCSI, NFS)?
What protocol does it use to the desktop (e.g., PCoIP or RDP)?
How agile is the protocol you selected? Was it designed for WAN use?
What are the cost
implications if other network gear must be ordered?
Are you working with the network team to ensure all aspects of the project are getting the proper attention?
For instance, how did VDI impact your LAN traffic (if applicable)?
How did VDI impact your WAN
traffic (if applicable)?
Management: How are you
going to keep control over your VDI environment (Roles/Responsibilities)?
Can you quickly integrate management into a directory structure and push out policies to manage all these desktops?
How flexible is your storage
offering in assisting with desktop provisioning / creation?
How many golden images will you create & maintain (these might be worlds apart by the time you complete your rollout)?
How will you maintain these
images? How much time has been allocated
to maintain these images?
Security: Sure, you have locked down the selected OS, but what about security at the physical layer? Have you disabled the USB ports too?
How are your existing security practices going to be impacted? Will your VDI selection work across a VPN tunnel?
These questions might help
get you on your way to VDI success!
I am surprised that nobody questioned the validity of the business model I referenced. I believe it is NOT valid, as business is constantly adapting to new trends and directions (successful enterprises are not rigid). I believe that, for the new flexible business model to be followed, the IT organization must become more flexible. Another component is that IT leaders need to push further into the business to allow enterprises to continue to grow through increased productivity and automation. We also need longer-term goals: instead of working quarter to quarter, we need to take the time to plan out a strategy and roll it out, and being integrated more tightly into the enterprise is just one way to achieve that.
How has your enterprise been exploiting the 'recession' to push forward with new technology upgrades? Isn't that what we were taught to do in Econ 201 & 202? Recall that the businesses that invested the most during times of economic downturn were the ones taking the greatest strides as the recovery came around, and hence gained the most market share.
Think about your current data center: was it built prior to 1997? If so, your enterprise is not alone, as a majority of data centers were built prior to then (kind of scary, knowing all the great technology created during the dot-com boom and all that has followed). Needless to say, such an enterprise is NOT exploiting the most current technology initiatives that will allow the business to continue to grow and gain market share. When did you deploy IP phones: were you early or late to that technology set?
Are you ready to deploy a converged storage network? If not, why? Is it due to lack of skills? Is it due to budget? Here are just some of the benefits: simplified administration (are you willing to hand over your storage network to the network team? If NO, what are you doing to prepare for this coming skill absorption?), rapid storage provisioning and rollouts, and increased cost savings. Are you skeptical that this will happen?
Think back to the early '90s: how many network protocols did the enterprise have to contend with (DECnet, IPX/SPX, AppleTalk, Apollo Domain, Named Pipes, XNS, etc.)? Once IP was deemed the enterprise standard, in the end IP won BIG. Still not convinced? Consider how many technologies Ethernet has displaced (ATM, Token Ring, FDDI). With IP and Ethernet pushing into the storage arena, isn't it time to put considerable thought into how this will impact your storage network moving forward? Also consider how it will impact your data center moving forward.
Until next time, keep thinking about how Data Center 7.0 can help your enterprise move toward quicker growth when this recession comes to an end.
We have referenced NLSAS quite a bit in the past few weeks, and it felt like time to get into the nuts and bolts of what NLSAS is, exactly.
A SATA drive has only one port, so in the event of a path failure the SATA drive is not reachable. To add a level of HA to the SATA environment, interposers are utilized to add a second connectivity port while allowing conversion to another interface (e.g., Fibre Channel). Switching from the interposer approach to native NLSAS support allows dual paths into your SAS environment.
What is gained by moving to NLSAS in your enterprise? One drive type can be utilized across your enterprise: SAS. Another gain is improved RAS (Reliability, Availability, Serviceability), although in my mind this is minor, since similar RAS features were also added to the SATA offering. The slight performance improvement from native protocol exchange is really not worth mentioning as a net gain.
Hope that helps clear the muddy water in and around NLSAS.
A flexible infrastructure is paramount in the new data center; it enables an enterprise to quickly adapt to the changing course of the business and of technology while working within the realm of shrinking IT budgets.
From a networking perspective, one enabler of flexibility is the recent 40 & 100 Gb Ethernet (IEEE 802.3ba) ratification. This advancement is very important for the new data center, not just from a speeds-and-feeds perspective but because of the technology that will follow: TRILL (RFC 5556) will change the game, not just on the network edge (Metro Ethernet) but in emulating Ethernet services across all kinds of new connectivity offerings.
From a storage perspective, how does an enterprise properly move into the petabyte age? Your business needs should dictate your storage needs, not the other way around. Or, put another way: how do you tame the data explosion?
IBM Systems Director Storage Control, a plug-in for IBM Systems Director, was announced on Nov 7, 2010.
IBM Systems Director Storage Control has been designed and optimized for mid-range storage offerings to increase IT productivity; such tool sets are typically lacking in the mid-range space.
The Director plug-in provides support for the DS3/4/5/6/8K disk subsystems as well as the storage virtualization engines (Storwize V7000, SVC); to round out the portfolio support, it includes N series, Fibre Channel switches, IBM servers (x, p & z), and SMI-S providers (proxy). IBM Systems Director Storage Control also extends VMControl support (another IBM Systems Director plug-in) to broaden storage systems support from that perspective too.
A few more features of IBM Systems Director Storage Control are its integrated support for discovery, inventory, alerts, monitoring, configuration, and provisioning of storage offerings, in addition to automated monitoring and event notification thresholds. Sound like a tool that can enhance your storage environment?
IBM Systems Director Storage Control is available for a 60-day free trial. Try it now: http://tinyurl.com/24ocuel
As the IT industry continues to push commoditization across the enterprise, its next target appears to be the Ethernet switching realm. How so, you ask? Network convergence; and the newest CNA offering from QLogic, the 3GCNA, is a valiant effort toward full commoditization of Ethernet switching.
3GCNAs provide multi-protocol support (iSCSI, FCoE), and QLogic's newest 3GCNA allows VMs to switch protocols on the fly. Can you leverage this in your enterprise? You bet you can: there is a direct benefit on day one from adding this to your tiering strategy (tier by disk and protocol) to lower your operational expenses. QLogic then takes the offering one level further by allowing direct VM-to-VM communication. I envision this having a direct impact on your environment from an admin standpoint; can you envision a larger impact?
The dual trends of network convergence and server consolidation are driving major changes aimed directly at server I/O. This drives home one of my favorite points for 2010, flexibility within your IT infrastructure, and the 3GCNA brings this flexibility in a big way.
I am looking forward to the competition leapfrogging this CNA offering, and to how that can benefit enterprises.
The IBM Storage Management Pack for Microsoft System Center Operations Manager (SCOM) is now available! This plug-in enables customers to monitor and report the health state of their IBM storage products using their native operations management tool and processes. The plug-in operates with IBM DS8000, Storwize V7000, SVC, and XIV systems.
The plug-in and documentation are available now via http://tinyurl.com/IBM-SCOM-Storage
We have kicked around the
term Easy Tier for a few weeks now and thought it was time to dissect the
technology in detail.
What is IBM Easy Tier? It is an easy tool (set up in 15-30 minutes) designed to optimize data placement in a hybrid extent pool (a mix of SSD and HDD). Understand that it is a two-tier architecture for data (hot and cold data sets) and that all candidate data is reviewed at the extent level (sub-volume level) for movement within the hybrid pool. The two tiers can be either SSD+FC or SSD+SATA drive types.
IBM Easy Tier is a free tool available on the DS8700, DS8800 (subject to LIC code level requirements, etc.) and Storwize V7000 subsystems (soon to be available on SVC via code set v6.1). Under the covers, Easy Tier is a continuous-learning algorithm that should have 24 hours of data (a.k.a. workload learning) prior to making recommendations. The algorithm can bring benefits to day-to-day workloads as well as to end-of-quarter and year-end workloads.
There are currently two modes to Easy Tier: Automatic and Manual.
Easy Tier Automatic Mode enables facilities to autonomically optimize data placement among physical resources with different granularity, performance, and cost characteristics.
Manual Mode allows you to manually merge extent pools and relocate volumes.
Other Easy Tier features: it is fully aware of Copy Services (FlashCopy, MM/GM).
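To make the extent-level, two-tier idea concrete, here is a toy sketch of a heat-based promotion pass. This is purely my own illustration of the hot/cold concept, not IBM's actual Easy Tier algorithm, and the names and numbers in it are invented:

```python
# Toy sketch of two-tier, extent-level placement. This is NOT IBM's Easy Tier
# algorithm -- just an illustration of ranking extents by observed heat.
from dataclasses import dataclass

@dataclass
class Extent:
    extent_id: int
    io_count_24h: int  # I/Os seen during a ~24-hour learning window

def plan_promotions(extents: list[Extent], ssd_extent_slots: int) -> list[int]:
    """Return the IDs of the hottest extents, up to the SSD tier's capacity."""
    ranked = sorted(extents, key=lambda e: e.io_count_24h, reverse=True)
    return [e.extent_id for e in ranked[:ssd_extent_slots]]

# Hypothetical hybrid pool: two hot extents, two cold ones.
pool = [Extent(1, 90_000), Extent(2, 120), Extent(3, 45_000), Extent(4, 8)]
print(plan_promotions(pool, ssd_extent_slots=2))  # -> [1, 3]
```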
As a follow-on to my September 13, 2010 blog entry about SSD, I felt it important to uncover what type of workload is best suited for SSD in your enterprise.
Small workloads (packets) are better suited to be serviced by SSD. What is a small workload, you ask? Anywhere between 4K and 64K is the sweet spot; that is NOT to say you can't service 128K and 256K workloads, just please don't expect a huge increase in performance with those workloads.
Throw in a very random workload, which is another access characteristic well served by SSD. Then add in a read-heavy workload and you have a great fit for SSDs.
Please recall that SSDs do very little to improve write performance.
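One quick way to see whether a workload fits the small, random, read-heavy profile described above is a rough random-read probe. A minimal sketch follows; the file path is hypothetical, the OS page cache can inflate the numbers, and a real assessment should use a purpose-built benchmark tool:

```python
import os
import random
import time

PATH = "/data/testfile.bin"  # hypothetical large test file (must exceed BLOCK)
BLOCK = 4096                 # 4K reads: the small-packet profile SSDs love
READS = 10_000

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
# Pre-compute aligned random offsets so the timed loop measures only the reads.
offsets = [random.randrange(0, size - BLOCK) // BLOCK * BLOCK for _ in range(READS)]

start = time.perf_counter()
for off in offsets:
    os.pread(fd, BLOCK, off)  # positional read of one 4K block
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{READS / elapsed:,.0f} random {BLOCK}-byte reads/sec")
```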
As for capacity: IBM, through extensive research, has concluded that the proper mix of SSD to disk in a subsystem is between 5% and 13% of total capacity.
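Applied to a hypothetical subsystem, that guidance works out simply (the 100 TB figure below is just an example, not a recommendation):

```python
# Applying the 5-13% SSD guidance to a hypothetical 100 TB subsystem.
total_capacity_tb = 100
low, high = 0.05, 0.13
print(f"SSD tier: {total_capacity_tb * low:.0f}-{total_capacity_tb * high:.0f} TB "
      f"of {total_capacity_tb} TB total")  # -> SSD tier: 5-13 TB of 100 TB total
```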
You should fully evaluate your
workload before just throwing SSD at a problem in hopes that SSD can solve it.
In summary, random read requests in small packet sizes are a perfect fit for your SSD workloads.