Get your IDC XIV Tier 1 paper here - http://tinyurl.com/6mkmxrr
Keith Thuerk | Tags: tier 1, ibm, xiv, idc, gen 3, storage
A freshly printed IDC White Paper shows how the recent XIV Gen 3 changes have pushed it from Tier 1.5 into the Full Tier 1 space.
XIV was already a disruptive technology... and now yet another storage inflection point has been crossed.
Hang onto your legacy vendors no more: prepare for XIV Gen 3 (GA March '12).
Let's talk @ your IBM Storage today!
Keith Thuerk | Tags: solid state drive, ssd, performance, sizing
As a follow-on to my September 13, 2010 blog entry about SSD, I felt it important to uncover what type of workload is best suited for SSD in your enterprise.
Smaller workloads (packets) are better suited to be serviced by SSD. What is a small workload, you ask? Anywhere between 4K and 64K is the sweet spot. That is NOT to say you can't service 128K and 256K workloads; just don't expect a huge increase in performance with them.
Throw in a very random access pattern, which is another characteristic SSD services well.
Then add a read-heavy workload and you have a great fit for SSDs.
Please recall that SSDs do very little for write performance.
Well-suited for environments that need:
– High IO/sec
– Small capacity
IBM, through extensive research, has concluded that the proper mix of SSD to disk in a disk subsystem is between 5-13% of total capacity.
You should fully evaluate your workload before just throwing SSD at a problem in hopes that SSD can solve it.
In summary, random read requests in small packet sizes are a perfect fit for your SSD workloads.
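The rules of thumb above (small transfers, random access, read-heavy) can be sketched as a quick workload screener. This is purely an illustrative Python sketch; the function name, score scale, and thresholds are my own assumptions, not an IBM sizing tool.

```python
# Hypothetical workload screener based on the rules of thumb above:
# small (4K-64K), highly random, read-heavy I/O benefits most from SSD.
def ssd_suitability(io_size_kb, read_pct, random_pct):
    """Return a rough 0-3 score; thresholds are illustrative."""
    score = 0
    if 4 <= io_size_kb <= 64:      # the "sweet spot" transfer size
        score += 1
    if read_pct >= 70:             # SSDs do little for write-heavy loads
        score += 1
    if random_pct >= 70:           # random access favors SSD over HDD
        score += 1
    return score

# A small, random, read-heavy workload scores highest.
print(ssd_suitability(io_size_kb=8, read_pct=90, random_pct=95))    # 3
print(ssd_suitability(io_size_kb=256, read_pct=40, random_pct=10))  # 0
```

Run your own workload traces through something like this before deciding where SSD belongs.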
Keith Thuerk | Tags: hadoop, big data, data sets, processing, no-sql, real-time analytics, ibm
At this point in 2011 we have all heard the term Big Data, but what is Big Data really?
There are still a lot of fragmented ideas about Big Data; the best definition I can find goes like this:
Big Data can be defined simply as multi-terabyte datasets, typically ten or more. But Big Data also involves big complexity, namely many diverse data sources (both internal and external), data types (structured, unstructured, semi-structured), and indexing schemes (relational, distributed file systems, no-SQL). Plus, Big Data requires big processing to achieve useful analytic results.
Summary: Data + Analytics
What are the characteristics of Big Data?
· Very large data sets
· Distributed Aggregation
· Loosely Structured
· Often incomplete
Solve data issues with algorithms that collaborate to process all the data, not just subsets of the data.
What are the Big Data components?
· Data at Rest
· Fast Data
Some food for thought in your enterprise:
Does Big Data mean more irrelevant data or does it require better BI tools?
Will Big Data arrive in waves?
If so, how will you accommodate the sudden spikes in requirements?
Are you feeling out of touch with the coming onslaught of Big Data, even if you work in IT? No need to worry; this is just the next phase of IT expanding its role in the enterprise. You might explore adding new or deeper skills in and around analytics.
All the solution components are not yet in place to monetize Big Data results, will you be the next startup to exploit these results and help change IT?
In my mind Big Data will change industries and the world! Watch out, world: here comes the next IT wave!
Keith Thuerk | Tags: cloud computing, it automation, provisioning
Everyone knows that cloud building (regardless of type: private, public, or hybrid) has many facets that must come together to be successful. Let's jump into one of the tools of cloud building today: IT automation.
You know that automation requires a full understanding of IT assets (new and old).
So, are enterprises large and small doing a good job of managing IT assets today? Most would admit they are not doing a good job of managing assets (on-site or off).
The asset management concept sounds easy enough, but think about how assets have changed in just the last 2 years (tablets, mobile doc readers, etc.); these devices have to be counted too, as they contribute to the creation and use of corporate assets.
With the proliferation of assets of all types and ever-expanding network boundaries, the device count keeps ticking up.
Where to start?
Deploying a service catalog seems to be the most pressing issue today.
What products do you use?
Have you found any good open source tools for asset management?
IT automation was once banished due to restricted budgets; now it's a necessity, as it's a fundamental IT building block (agility).
What is the state of IT automation today?
How can IT automation help your environment? When fronted by a service catalog, your end users can hit the service portal and request a new system or service. Your provisioning tools then get to work setting up the requested environment.
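The catalog-to-provisioning flow described above could be sketched as follows. The catalog entries, sizes, and function names below are all hypothetical, not any specific product's API.

```python
# A minimal sketch of a service catalog fronting a provisioning engine.
# Catalog items and their resource specs are illustrative.
SERVICE_CATALOG = {
    "small-linux-vm":   {"cpus": 1, "ram_gb": 4,  "disk_gb": 50},
    "medium-db-server": {"cpus": 4, "ram_gb": 16, "disk_gb": 500},
}

def provision(request_name):
    """Look up a catalog item and record the build; returns the work order."""
    spec = SERVICE_CATALOG.get(request_name)
    if spec is None:
        raise ValueError(f"{request_name!r} is not in the service catalog")
    # In a real shop the provisioning tool would now drive the hypervisor,
    # storage, and network layers; here we just record the work order.
    return {"service": request_name, **spec, "status": "provisioned"}

order = provision("small-linux-vm")
print(order["status"])  # provisioned
```

The point of the catalog layer is that users request outcomes, not hardware; everything behind the lookup can be automated away.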
How far can you take IT Automation in your environment; the possibilities are limited only by time, technology and of course money.
Are you tying asset management into your BI infrastructure? If not, why?
Are you tying asset management into your sales force automation? If not, why?
Are you tying asset management into your consolidated I/O environments?
You should be spending time to see how IT automation can reduce your IT spend.
Keith Thuerk | Tags: v6.2, clustering, scale out, 10g, storwize v7000
IBM shipped the next release of Storwize V7000 V6.2
Some of the cool features include (but are not limited to):
Plus plenty of others; check out the official announcement letter:
How many enterprises have been waiting to buy their own Storwize V7000 once IBM shipped 15K drives?
Wait no more!
As of Friday April 15th 2011 we are shipping 146GB 15K SAS drives for the IBM Storwize V7000.
A great drive size for performance, without having to spend a lot of $$$ just to end up creating your own short-stroked drive.
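To see why a right-sized 15K drive beats short-stroking, here is some back-of-envelope Python. The drive sizes and the usable fraction are assumed, illustrative figures, not benchmark data.

```python
# Back-of-envelope math with assumed figures: short-stroking means using
# only the fast outer portion of a large drive and abandoning the rest.
big_drive_gb = 450        # hypothetical large-capacity drive
usable_fraction = 1 / 3   # hypothetical outer-track region actually used

stranded_gb = big_drive_gb * (1 - usable_fraction)
print(f"{stranded_gb:.0f} GB stranded per short-stroked drive")
```

A drive sized to the fast region you actually need delivers the performance without paying for stranded capacity.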
Check out the announcement below.
Look for more 15K drives during the summer!!!!
Keith Thuerk | Tags: ibm, storwize v7000, announcement, storage
IBM has shipped 1000 Storwize V7000 units since the Oct announcement.
Talk about a fast start.
Don't take my word for it; check it out.
There seems to be a perception in the storage arena that IT security applies to every facet of IT except storage. However, if you have been monitoring the product announcements in the storage space (which I know you are), you have seen that end-to-end security offerings in the SAN are now available. Sure, there are several ways to accomplish security in the storage arena (disk subsystem based, SAN based, host based).
Ever wonder what is going on under the covers of the SAN security offerings? What does SAN based security offering do and what are the components? Let’s dig into the SAN security offerings from a standards perspective.
Keep in mind I am not going to cover all IT security offerings such as LTPA, OpenPGP, GOST, SEED, or even Elliptic Curve Crypto, just what you will encounter in the SAN based offerings. Neither will we cover how security impacts your compliance needs, how to devise security domains, or how to identify threat levels for your data. We are going to focus on securing enterprise data while keeping it available for use (no time to cover how encryption will impact RTO & RPO values). This topic will also be limited to the Fibre Channel SAN and will not cover FICON environments.
Let’s lay the security foundation by understanding the important terms.
Industry Standard Security Terms: A.K.A the Security Alphabet Soup!
AES = Advanced Encryption Standard (FIPS 197), available in 128-, 192- & 256-bit key lengths
AES-256-XTS for disk (IEEE 1619)
AES-256-GCM for tape (IEEE 1619.1)
CC EAL-3 = Common Criteria Evaluation Assurance Level 3
CHAP = Challenge-Handshake Authentication Protocol (RFC 1334, updated by RFC 1994) – 3-way handshake using an MD5 hash
In our instance it will be used by iSCSI (or should be used)
FIPS = Federal Information Processing Standards; FIPS 140-2 defines security levels for cryptographic modules
FIPS 140-2 L1
FIPS 140-2 L2
FIPS 140-2 L3 (tamper-resistant)
MD-5 = Message Digest 5 (RFC 1321), a 128-bit one-way hash (the secret is never sent over the link); typically found in SMB v1.0-type environments
PKCS = Public-Key Cryptography Standards (see RFC 2898 for PKCS #5)
PKCS #11, 7, 5, 1
PKI = Public Key Infrastructure (A.K.A. X.509 PKI)
PKIX = Public Key Infrastructure (X.509) – the IETF profile covering CAs and Registration Authorities (RFC 5280)
RPKI = Resource Public Key Infrastructure – expect to see this in ISP offerings (huge environments)
IKE = Internet Key Exchange (RFC 2407-2409)
IKEv2 (RFC 4306; RFC 4595 covers its use in Fibre Channel security)
KMIP = Key Management Interoperability Protocol, part of the OASIS offering and a way to get to IEEE P1619.3
Other Terms to be familiar with:
DAR = Data at Rest
DIF = Data in Flight
RPO = Recovery Point Objective
RTO = Recovery Time Objective
The key to all IT security offerings rests in proper Key Management (you do NOT want to have to manually manage keys and worry about key lifecycles, key retention, etc).
The central role of a key manager offering is to manage security keys across all device types across your enterprise (just think about all the keys in / around the VPN). Key Managers also present the ability to work with Certificate Authority (CA) all while being easy to utilize. Do not overlook the need for Role Based Access Control (RBAC) and auditing capabilities (who polices the police)?
Here are just a few products (listed in no particular order): IBM Tivoli Key Lifecycle Manager (TKLM), RSA Key Manager (RKM), HP StorageWorks Key Manager (HKM), NetApp Lifetime Key Management (LKM).
KMIP is seen as one way to replace the hodgepodge of different encryption-key management products out there. Put another way, it solves a BIG issue within the enterprise security realm.
Key Size – 128- and 256-bit are the most common, while 512- and 1024-bit are offered without much adoption (as far as I have seen). Keep in mind that cracking a 256-bit key takes 2^128 times more computational power than the 128-bit version.
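That work-factor claim is straightforward exponent arithmetic, which a couple of lines of Python can confirm:

```python
# A brute-force search of a 256-bit keyspace is 2**256 trials versus
# 2**128 trials for a 128-bit key, so the ratio is exactly 2**128.
work_128 = 2 ** 128
work_256 = 2 ** 256
print(work_256 // work_128 == 2 ** 128)  # True
```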
Key Choice – select the version that makes the most sense for your environment (keep in mind the delay for creating the key and unpacking the data). Is an 8-10% performance hit acceptable? What are the acceptable guidelines for your enterprise?
Symmetric or Asymmetric keys
The Symmetric key is used for both read and writes for SAN encryption. Symmetric key encryption algorithms are significantly faster than asymmetric encryption algorithms, which makes symmetric encryption an ideal candidate for encrypting large amounts of data. Speed and short key length are advantages of symmetric encryption.
Asymmetric encryption, by contrast, requires a key pair (public and private) for encrypting and decrypting data; a typical usage is VPN-type enterprise offerings, and the king of the asymmetric offerings is RSA.
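To illustrate the symmetric property (one shared key drives both directions), here is a toy Python sketch. XOR is used purely because applying it twice with the same key restores the plaintext; this is NOT real cryptography and nothing here resembles AES.

```python
# Toy illustration of symmetric encryption: the same shared key is used
# for both the encrypt and decrypt operations.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret"
ciphertext = xor_cipher(b"payroll batch 42", key)
plaintext = xor_cipher(ciphertext, key)   # same key, same operation
print(plaintext)  # b'payroll batch 42'
```

Real symmetric ciphers (AES-XTS, AES-GCM) are vastly stronger, but the operational point stands: both ends of a SAN link must hold the same key, which is exactly why key management matters so much.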
I have learned over the years that hardware is the way to go with security, due to the load that software-based encryption places on CPUs. Having a dedicated component to tackle that one job ensures that all other processes are handled in a timely manner, without creating another issue.
I also like SAN encryption due to the flexibility it brings to an enterprise security offering (all data passes thru the SAN)
Another aspect of security to keep in mind is data classification. You might ask why? Do you really want or need all your data encrypted? What does the corporate directive state?
I have to leave our security discussion at this point; let's call this Part I. Keep watching this space for Part II (How to roll out SAN Encryption).
Keith Thuerk | Tags: benchmark, esg, storwize v7000, mixed workload
ESG (Enterprise Strategy Group) released a great benchmark of the IBM Storwize V7000.
The ESG benchmark was for mixed workloads.
Please review the ESG benchmark data here:
Keith Thuerk | Tags: dfs, cifs, federation, gns, global name space, mapping, windows, nfs, unix, policies
Everyone knows that storage growth within enterprises has been exponential for a few years now, and as a subset of that data, unstructured data is growing faster than structured (tabular) data.
How are you handling the unstructured data growth?
One option is a Global Name Space (GNS). (Please don't confuse that with the old Novell IPX GNS (Get Nearest Server); yes, acronym reuse is fun.)
What is GNS?
Put in simple terms it will do to file storage what DNS did for IP networking.
Or put another way, GNS enables your clients to access files without knowing their exact location. Or put yet another way, it's federation of a file system (F/S).
The official industry definition goes like this… A Global Namespace (GNS) has the unique ability to aggregate disparate and remote network based file systems, providing a consolidated view that can greatly reduce complexities of localized file management and administration.
Global Namespace technology can virtualize file server protocols such as Common Internet File System protocol (CIFS) and the Network File System (NFS) protocols.
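The "DNS for files" idea can be sketched in a few lines of Python. The server names, shares, and mapping table below are entirely hypothetical; a real GNS product maintains this mapping layer for you.

```python
# A minimal sketch of a global namespace: clients use one logical path
# while a mapping layer resolves it to the file's physical home.
NAMESPACE = {
    "/corp/finance":     "//nas01/finance_share",   # hypothetical servers
    "/corp/engineering": "//nas02/eng_share",
}

def resolve(logical_path):
    """Return the physical location backing a logical path."""
    for prefix, physical in NAMESPACE.items():
        if logical_path.startswith(prefix):
            return logical_path.replace(prefix, physical, 1)
    raise LookupError(f"no mapping for {logical_path}")

print(resolve("/corp/finance/q4_report.xls"))
# //nas01/finance_share/q4_report.xls
```

Move the finance share to a new filer and only the mapping table changes; every client keeps the same logical path, which is exactly the mapped-drive pain relief discussed below.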
So, you might be asking yourself: how does a global namespace help my enterprise? How many mapped drives do you have (UNIX & Windows)? How many more mapped drives do you need to add?
How much time do you spend just managing mapped drives? How much time do you spend managing file locations? How much time do you spend performing cross mapping searches?
Now multiply that number by the number of admins and users who perform similar work in your enterprise.
How much time do you spend working policies and access rights to these files?
Another benefit of GNS is the ability to tie GNS into Microsoft’s DFS to ease your administration load there too.
You will find lots of competitive global namespace products out there in the market to select from (IBM SONAS, Brocade StorageX, F5 Networks, etc., just to name a few).
Happy Productivity Increase!
Keith Thuerk | Tags: tape, ltfs, drag and drop, ilm, hsm
Enterprise data protection is not a new topic (it's actually a core IT principle = protecting data), and we continue to see growing investment in data protection methodologies in enterprises across the globe. While some of our competition wants you to believe that all data needs can be satisfied by placing all data on spinning disk, that doesn't seem very economical to me. Tape is the greenest IT offering to date across the market.
We are not going to discuss tape hardware here; let us discuss what IBM helped write and released early in 2010. The offering is called LTFS, which stands for Linear Tape File System. It gives admins drag and drop capabilities for files, making the tape file system appear to the operating system as a removable media format (flash drive, DVD, etc.). So drag and drop to / from tape seems like an inexpensive solution for video surveillance files.
How does LTFS work?
LTFS is a physical media partitioning technology: it creates two partitions on LTO5 tapes, with the index schema stored in XML format.
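Because the index is plain XML, even a short script can list a tape's contents. The fragment below is a simplified, hypothetical index (not the real LTFS schema), parsed with Python's standard library just to show the directory-listing idea:

```python
# Parse a toy, LTFS-style XML index and list the files it describes.
# Element names and file entries are illustrative, not the LTFS spec.
import xml.etree.ElementTree as ET

index_xml = """
<ltfsindex>
  <file><name>cam1_2011-03-01.mpg</name><length>1048576000</length></file>
  <file><name>cam2_2011-03-01.mpg</name><length>734003200</length></file>
</ltfsindex>
"""

root = ET.fromstring(index_xml)
for f in root.findall("file"):
    size_mb = int(f.findtext("length")) // (1024 ** 2)
    print(f.findtext("name"), size_mb, "MB")
```

Keeping the index in a self-describing format is what lets the OS treat the tape like removable media, with no backup application in the middle.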
What Operating Systems are supported?
RHEL 5.4 and 5.5, Mac OS X 10.5, and Windows 7 are supported, and others are getting their stamp of approval.
What Backup Software is supported?
No dedicated backup program such as TSM, NetBackup, etc. is required, as the OS can write directly to the tape itself.
What Hardware is supported?
LTFS V1 supports LTO5 tape drives, while tape libraries are still gaining their stamp of approval; look for that in future releases.
LTFS defaults: block size is 1MB.
Best of all LTFS is FREE, yes that is not a typo it’s free.
Learn more about the IBM LTFS offering http://www-03.ibm.com/systems/storage/tape/ltfs/
We have kicked around the term Easy Tier for a few weeks now and thought it was time to dissect the technology in detail.
What is IBM Easy Tier? It is an easy tool (set up in 15-30 minutes) designed to optimize data placement in a hybrid extent pool (a mix of SSD & HDD). Understand that it is a 2-tier architecture for data (hot & cold data sets), and that all candidate data is reviewed at the extent level (sub-volume level) for movement within the hybrid pool. The two tiers can be either SSD+FC or SSD+SATA drive types.
IBM Easy Tier is a free tool available on the DS8700, DS8800 (LIC code level requirements apply) & Storwize V7000 subsystems (soon to be available on SVC via code set v6.1). Under the covers, Easy Tier is a continuous learning algorithm that should have 24 hours of data (A.K.A. workload learning) prior to making recommendations. The algorithm can bring benefits to day-to-day workloads as well as end-of-quarter and year-end workload spikes.
There are currently two modes to Easy Tier (Automatic & Manual Modes).
Easy Tier Automatic Mode enables facilities to autonomically optimize data placement among physical resources with different granularity, performance (and cost) characteristics.
Manual mode allows you to manually merge extent pools and relocate volumes.
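The automatic-mode idea (after a learning window, promote the hottest extents into the limited SSD tier) could be sketched like this. The extent IDs, heat counts, and SSD budget are all illustrative, not Easy Tier's actual algorithm.

```python
# Toy sketch of hot/cold tiering: rank extents by observed I/O "heat"
# and place the busiest ones on the scarce SSD tier.
extent_heat = {"ext-01": 12000, "ext-02": 150, "ext-03": 9800, "ext-04": 40}
ssd_slots = 2  # capacity of the SSD tier, in extents (illustrative)

hottest = sorted(extent_heat, key=extent_heat.get, reverse=True)[:ssd_slots]
placement = {e: ("SSD" if e in hottest else "HDD") for e in extent_heat}
print(placement)  # ext-01 and ext-03 land on SSD
```

The real product re-evaluates continuously, which is why the 24-hour learning window matters: placement decisions track the workload, not a one-time snapshot.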
Other Easy Tier features – it is fully aware of Copy Services (FlashCopy, Metro Mirror / Global Mirror)
Oh, did I mention the price? It is FREE!
Get out there and learn more about IBM Easy Tier
Keith Thuerk | Tags: xiv, svc, ds8000, storwize v7000, scom
The IBM Storage Management Pack for Microsoft Systems Center Operations Manager (SCOM) is now available! This plugin enables customers to monitor and report the health state of their IBM storage products, using their native operations management tool and processes. The plugin operates with IBM DS8000, Storwize V7000, SVC, and XIV systems.
The plugin and documentation are available now via
On Dec 16th I posted info about the new draft paper for CDMI v1.0.1f
SNIA recently released a tutorial about CDMI.
Learn about CDMI here ---> http://tinyurl.com/3a93hjq
Yesterday the FCC passed new rules governing the management of Internet Traffic.
Key Elements you should understand:
Rule 1: Transparency -- A person engaged in the provision of broadband Internet access service shall publicly disclose accurate information regarding the network management practices, performance, and commercial terms of its broadband Internet access services sufficient for consumers to make informed choices regarding use of such services and for content, application, service, and device providers to develop, market, and maintain Internet offerings.
Rule 2: No Blocking -- A person engaged in the provision of fixed broadband Internet access service, insofar as such person is so engaged, shall not block lawful content, applications, services, or non-harmful devices, subject to reasonable network management. A person engaged in the provision of mobile broadband Internet access service, insofar as such person is so engaged, shall not block consumers from accessing lawful websites, subject to reasonable network management; nor shall such person block applications that compete with the provider's voice or video telephony services, subject to reasonable network management.
Rule 3: No Unreasonable Discrimination -- A person engaged in the provision of fixed broadband Internet access service, insofar as such person is so engaged, shall not unreasonably discriminate in transmitting lawful network traffic over a consumer's broadband Internet access service. Reasonable network management shall not constitute unreasonable discrimination.