Keith Thuerk | Tags: ibm, xiv, gen3, tier 1, idc, storage
A freshly printed IDC White Paper shows how the recent XIV Gen 3 changes have pushed it from Tier 1.5 into the Full Tier 1 space.
Prior to now, XIV was a disruptive technology... well, yet another storage inflection point has been breached.
Hang onto your legacy vendors no more: prepare for XIV Gen 3 (GA March '12).
Get your IDC XIV Tier 1 paper here - http://tinyurl.com/6mkmxrr
Let's talk about your IBM Storage today!
Keith Thuerk | Tags: ssd, solid state drive, performance, sizing
As a follow-on to my September 13, 2010 blog entry about SSD, I felt it important to uncover what type of workload is best suited for SSD in your enterprise.
Smaller workloads (packets) are better suited to be serviced by SSD. What is a small workload, you ask? Anywhere between 4K and 64K is the sweet spot, although that is NOT to say you can't service 128K and 256K workloads; just don't expect a huge increase in performance with those workloads.
Throw in a very random access pattern, which is another workload characteristic SSD services well.
Then add in a read-heavy workload and you have a great candidate for SSDs.
Please recall that SSDs do very little to improve write performance.
Well-suited for environments that need:
– High IO/sec
– Small capacity
Through extensive research, IBM has concluded that the proper ratio of SSD to disk in a disk subsystem is between 5% and 13% of total capacity.
You should fully evaluate your workload before just throwing SSD at a problem in hopes that SSD can solve it.
In summary, random read requests in small packet sizes are a perfect fit for your SSD workloads.
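To make those rules of thumb concrete, here is a rough Python sketch of how I would score a workload profile for SSD suitability. It is purely illustrative (my own heuristic, not an IBM sizing tool), and the thresholds are assumptions drawn from the guidance above.

    # Rough heuristic for judging SSD suitability of a workload, based on the
    # rules of thumb above (illustrative only; thresholds are assumptions).
    def ssd_suitability(avg_io_kb, read_pct, random_pct):
        small_io = 4 <= avg_io_kb <= 64        # the 4K-64K sweet spot
        read_heavy = read_pct >= 70            # SSDs shine on reads
        random_heavy = random_pct >= 70        # and on random access
        score = sum([small_io, read_heavy, random_heavy])
        return {3: "great SSD candidate",
                2: "possible candidate - test first"}.get(score, "leave on HDD")

    print(ssd_suitability(avg_io_kb=8, read_pct=90, random_pct=95))    # great SSD candidate
    print(ssd_suitability(avg_io_kb=256, read_pct=40, random_pct=20))  # leave on HDD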
Keith Thuerk | Tags: big data, hadoop, data sets, processing, no-sql, real-time analytics, ibm
At this point in 2011 we have all heard the term Big Data, but what is Big Data really?
There are still a lot of fragmented ideas about Big Data; the best definition I can offer is this.
Big Data can be defined simply as multi-terabyte datasets, typically ten or more. But Big Data also involves big complexity, namely many diverse data sources (both internal and external), data types (structured, unstructured, semi-structured), and indexing schemes (relational, distributed file systems, no-SQL). Plus, Big Data requires big processing to achieve useful analytic results.
Summary: Data + Analytics
What are the characteristics of Big Data?
· Very large data sets
· Distributed Aggregation
· Loosely Structured
· Often incomplete
Solve data issues with algorithms that collaborate to process all the data, not just subsets of it; the sketch below shows the idea.
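If you want a feel for the Hadoop-style processing model, here is a toy map/reduce word count in plain Python. A real cluster would distribute the map and reduce phases across many nodes, but the shape of the computation is the same.

    # Toy map/reduce in plain Python (illustrative; Hadoop distributes these
    # phases across a cluster so that ALL the data gets processed).
    from collections import Counter
    from functools import reduce

    def map_phase(chunk):
        return Counter(chunk.split())   # each mapper counts its own slice

    def reduce_phase(a, b):
        return a + b                    # reducers merge partial counts

    chunks = ["big data is big", "data plus analytics", "big analytics"]
    totals = reduce(reduce_phase, map(map_phase, chunks))
    print(totals.most_common(3))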
What are the Big Data components?
· Data at Rest
· Fast Data
Some food for thought in your enterprise:
Does Big Data mean more irrelevant data or does it require better BI tools?
Will Big Data arrive in waves?
If so, how will you accommodate the sudden spikes in requirements?
Are you feeling out of touch with the coming onslaught of Big Data, even if you work in IT? No need to worry; this is just the next phase of IT expanding its role in the enterprise. You might explore adding some skills, or deepening your skills, in and around analytics.
Not all the solution components are yet in place to monetize Big Data results; will you be the next startup to exploit these results and help change IT?
In my mind Big Data will change industries and the world! Watch out, world: here comes the next IT wave!
Keith Thuerk | Tags: cloud computing, it automation, provisioning
Everyone knows that Cloud building (regardless of type: private, public, or hybrid) has many facets that must come together to be successful. Let's jump into one of the tools / utilities of Clouds today: IT Automation.
You know that automation requires a full understanding of IT assets (new and old).
So, are enterprises large and small doing a good job of managing IT assets today? Most would admit they are not (on-site or off).
The asset management concept sounds easy enough, but think about how assets have changed in just the last 2 years (tablets, mobile doc readers, etc.); these devices have to be counted too, as they contribute to the creation and use of corporate assets.
With the proliferation of assets of all types and ever-expanding network boundaries, the device count keeps ticking up.
Where to start?
A service catalog seems to be the most pressing deployment issue today.
What products do you use?
Have you found any good open source tools for asset management?
IT Automation was once banished due to restricted budgets; now it's a necessity, as it's a fundamental IT building block (agility).
What is the state of IT automation today?
How can IT Automation help your environment? When fronted by a service catalog, your end users can hit the service portal and request a new system or service. Your provisioning tools then get to work setting up the requested environment, as the sketch below illustrates.
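Here is a minimal Python sketch of that flow; the catalog entries and the provision_vm stub are hypothetical stand-ins for your real portal and provisioning tool chain.

    # Minimal service-catalog front end driving provisioning (illustrative;
    # "provision_vm" is a hypothetical stand-in, not a real product API).
    CATALOG = {
        "small-linux-vm":  {"cpus": 2, "ram_gb": 4,  "disk_gb": 50},
        "large-db-server": {"cpus": 8, "ram_gb": 32, "disk_gb": 500},
    }

    def provision_vm(spec):
        # Replace with calls into your actual provisioning tools.
        print(f"Provisioning {spec['cpus']} CPUs, {spec['ram_gb']} GB RAM, "
              f"{spec['disk_gb']} GB disk")

    def handle_request(offering):
        spec = CATALOG.get(offering)
        if spec is None:
            raise ValueError(f"Not in the service catalog: {offering}")
        provision_vm(spec)

    handle_request("small-linux-vm")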
How far can you take IT Automation in your environment? The possibilities are limited only by time, technology and, of course, money.
Are you tying asset management into your BI infrastructure? If not, why?
Are you tying asset management into your Sales Force automation? If not, why?
Are you tying asset management into your consolidated I/O environments?
You should be spending time to see how IT automation can reduce your IT spend.
Keith Thuerk | Tags: storwize v7000, v6.2, clustering, scale out, 10g
IBM has shipped the next release of Storwize V7000 software, V6.2.
Some of the cool features include (but are not limited to):
Plus plenty of others; check out the official announcement letter:
How many enterprises have been waiting to buy their own Storwize V7000 once IBM shipped 15K drives?
Wait no more!
As of Friday April 15th 2011 we are shipping 146GB 15K SAS drives for the IBM Storwize V7000.
A great-sized drive for performance without having to spend a lot of $$$ just to end up creating your own short-stroked drive.
Check out the announcement below.
Look for more 15K drives during the summer!!!!
Keith Thuerk | Tags: ibm, storwize v7000, announcement, storage
IBM has shipped 1000 Storwize V7000 units since the Oct announcement.
Talk about a fast start.
Don't take my word for it; check it out.
There seems to be a perception in the storage arena that IT security applies to all facets other than storage. However, if you have been monitoring the SAN product announcements in the storage space (which I know you are), you have seen that end-to-end security offerings in the SAN are now available. Sure, there are several ways to accomplish security in the storage arena (disk subsystem based, SAN based, host based).
Ever wonder what is going on under the covers of the SAN security offerings? What does SAN based security offering do and what are the components? Let’s dig into the SAN security offerings from a standards perspective.
Keep in mind I am not going to cover all IT security offerings such as LTPA, OpenPGP, GOST, SEED, or even Elliptic Curve Crypto; just what you will encounter in the SAN-based offerings. Neither will we cover how security impacts your compliance needs, how to devise security domains, or how to identify threat levels for your data. We are going to focus on securing enterprise data while keeping it available for use (no time to cover how encryption will impact RTO & RPO values). This topic will also be limited to the Fibre Channel SAN and will not cover FICON environments.
Let’s lay the security foundation by understanding the important terms.
Industry Standard Security Terms: A.K.A the Security Alphabet Soup!
AES = Advanced Encryption Standard (IEEE 1619, 1619.3) Available in 128-, 192- & 256-bit key lengths
AES-256-XTS for Disk
AES-256-GCM for Tape
CC EAL-3 = Common Criteria Evaluation Assurance Level 3
CHAP = Challenge-Handshake Authentication Protocol RFC 1334 & 1994 – 3-way handshake using MD-5 hash
In our instance it will be used by iSCSI (or should be); see the sketch after this list of terms.
FIPS = Federal Information Processing Standards; FIPS 140-2 defines security levels for cryptographic modules:
FIPS 140-2 L1
FIPS 140-2 L2
FIPS 140-2 L3 (Tamper Proof)
MD-5 = RFC 1321 (128-bit one way hash, never sent over the link), typically found in SMB v1.0 type environments
PKCS = Public-Key Cryptography Standards (RFC 2898)
PKCS #11, 7, 5, 1
PKI = Public Key Infrastructure (a.k.a. X.509 PKI)
PKIX = PKI CA / Registration Agent
RPKI = Resource Public Key Infrastructure (RFC 5280) expect to see this in ISP offerings (huge environments)
IKE = Internet Key Exchange (RFC 2408-2410)
IKEv2 (RFC 4306, RFC4595)
KMIP = Key Management Interoperability Protocol, part of the OASIS offering and a way to get to IEEE P1619.3
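Of the terms above, CHAP is the easiest to make concrete. Here is a minimal Python sketch of the RFC 1994 response computation; note that the shared secret itself never crosses the wire, only a hash derived from it does.

    # CHAP response per RFC 1994: MD5 over (identifier || secret || challenge).
    import hashlib
    import os

    def chap_response(identifier, secret, challenge):
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    ident, challenge = 1, os.urandom(16)          # authenticator's challenge
    resp = chap_response(ident, b"shared-secret", challenge)
    print(resp.hex())                             # proves knowledge of the secret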
Other Terms to be familiar with:
DAR = Data at Rest
DIF = Data in Flight
RPO = Recovery Point Objective
RTO = Recovery Time Objective
The key to all IT security offerings rests in proper Key Management (you do NOT want to have to manually manage keys and worry about key lifecycles, key retention, etc).
The central role of a key manager offering is to manage security keys across all device types in your enterprise (just think about all the keys in and around the VPN). Key managers also present the ability to work with a Certificate Authority (CA), all while being easy to utilize. Do not overlook the need for Role-Based Access Control (RBAC) and auditing capabilities (who polices the police?).
Here are just a few products (listed in no particular order): IBM Tivoli Key Lifecycle Manager (TKLM), RSA RKM, HP StorageWorks Key Manager (HKM), NetApp Lifetime Key Management (LKM).
KMIP is seen as one way to replace the hodgepodge of different encryption-key management products out there. Put another way, it solves a BIG issue within the enterprise security realm.
Key size: 128- and 256-bit are the most common, while 512- and 1024-bit are offered but have not seen a lot of adoption (as far as I have seen). Keep in mind that cracking a 256-bit key takes 2^128 times more computational power than the 128-bit version.
Key choice: select the version that makes the most sense for your environment (keep in mind the delay for creating the key and unpacking the data). Is an 8-10% performance hit acceptable? What are the acceptable guidelines for your enterprise?
Symmetric or Asymmetric keys
The Symmetric key is used for both read and writes for SAN encryption. Symmetric key encryption algorithms are significantly faster than asymmetric encryption algorithms, which makes symmetric encryption an ideal candidate for encrypting large amounts of data. Speed and short key length are advantages of symmetric encryption.
Asymmetric keys, by contrast, require a public/private key pair for encrypting and decrypting data; typical usage is VPN-type enterprise offerings, and the king of the asymmetric offerings is RSA.
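To make the symmetric case concrete, here is a minimal sketch using the third-party Python cryptography package and AES-256-GCM (the tape mode listed above). This is illustrative only; real SAN encryption runs in dedicated hardware with keys supplied by a key manager.

    # AES-256-GCM sketch using the third-party "cryptography" package
    # (pip install cryptography). Illustrative only.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # in production: from a key manager
    nonce = os.urandom(12)                      # a GCM nonce must never repeat per key

    ciphertext = AESGCM(key).encrypt(nonce, b"sensitive block of data", None)
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
    assert plaintext == b"sensitive block of data"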
I have learned over the years that hardware is the way to go with security due to the load that software places on CPU’s. Having a dedicated component to tackle that one job ensures that all other processes are handled in a timely manner and not creating another issue.
I also like SAN encryption due to the flexibility it brings to an enterprise security offering (all data passes thru the SAN)
Another aspect of security to keep in mind is data classification. You might ask why? Do you really want or need all your data encrypted? What does the corporate directive state?
I have to leave our security discussion at this point; let's call this Part I. Keep watching this space for Part II (how to roll out SAN encryption).
Keith Thuerk | Tags: esg, benchmark, storwize v7000, mixed workload
ESG (Enterprise Strategy Group) released a great benchmark on the IBM Storwize V7000.
The ESG benchmark was for mixed workloads.
Please review the ESG benchmark data here:
Keith Thuerk | Tags: global name space, gns, dfs, cifs, nfs, federation, windows, unix, mapping, policies
Everyone knows that storage has been growing at exponential rates within enterprises for a few years now, and unstructured data is growing faster than structured (tabular) data.
How are you handling the unstructured data growth?
One option is a Global Name Space (GNS). (Please don't confuse it with the old Novell IPX GNS, Get Nearest Server; yes, acronym reuse is fun.)
What is GNS?
Put in simple terms, it will do for file storage what DNS did for IP networking.
Or put another way, GNS enables your clients to access files without knowing their exact location. Or put yet another way, it is federation of a file system (F/S).
The official industry definition goes like this… A Global Namespace (GNS) has the unique ability to aggregate disparate and remote network based file systems, providing a consolidated view that can greatly reduce complexities of localized file management and administration.
Global Namespace technology can virtualize file server protocols such as Common Internet File System protocol (CIFS) and the Network File System (NFS) protocols.
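A toy sketch may help picture it: clients use one logical path, and the namespace decides which server, share and protocol actually hold the data, much as DNS maps names to addresses. The server names below are made up for illustration.

    # Toy global-namespace resolver (illustrative only).
    NAMESPACE = {
        "/global/eng/specs":  ("nfs",  "unixfiler01:/export/specs"),
        "/global/sales/docs": ("cifs", r"\\winfiler02\sales\docs"),
    }

    def resolve(logical_path):
        # Map a logical GNS path to its physical (protocol, location) pair.
        for prefix, target in NAMESPACE.items():
            if logical_path.startswith(prefix):
                return target
        raise LookupError(f"No namespace entry for {logical_path}")

    print(resolve("/global/eng/specs/widget.pdf"))
    # -> ('nfs', 'unixfiler01:/export/specs')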
So, you might be asking yourself: how does a global namespace help my enterprise? How many mapped drives do you have (UNIX & Windows)? How many more mapped drives do you need to add?
How much time do you spend just managing mapped drives? How much time do you spend managing file locations? How much time do you spend performing cross mapping searches?
Now multiply that number by the number of admins and users who perform similar work in your enterprise.
How much time do you spend working policies and access rights to these files?
Another benefit of GNS is the ability to tie GNS into Microsoft’s DFS to ease your administration load there too.
You will find lots of competing global namespace products out there in the market to select from (IBM SONAS, Brocade StorageX, F5 Networks, etc., just to name a few).
Happy Productivity Increase!
Keith Thuerk | Tags: ltfs, tape, drag and drop, ilm, hsm
Enterprise data protection is not a new topic (it's actually a core IT principle: protecting data), and we continue to see growing investment in data protection methodologies in enterprises across the globe. While some of our competition want you to believe that all data needs can be satisfied by placing all data on spinning disk, that doesn't seem very economical to me. Tape is the greenest IT offering to date across the market. We are not going to discuss tape offerings; let us discuss what IBM helped write and released early in 2010. The product offering is called LTFS, which stands for Linear Tape File System. This new offering gives admins drag-and-drop capabilities for files, making the tape file system appear to the operating system as a removable media format (flash drive, DVD, etc.). So drag and drop to / from tape seems like an inexpensive solution for video surveillance files.
How does LTFS work?
LTFS is a physical media partitioning technology: it creates two partitions on LTO5 tapes (one for the index, one for the data), and the index schema is stored in XML format.
What Operating Systems are supported?
RHEL 5.4, 5.5, MAC OS X 10.5, Windows 7, and others are getting their stamp of approval.
What Backup Software is supported?
No dedicated backup program such as TSM or NetBackup is required, as the OS can write directly to the tape itself.
What Hardware is supported?
LTFS V1 supports LTO5 tape drives, while tape libraries are still gaining their stamp of approval; look for that in future releases. LTFS defaults: block size is 1 MB.
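Because LTFS presents the tape as an ordinary removable file system, "archive to tape" becomes a simple scripted copy. Here is a minimal Python sketch; the /mnt/ltfs mount point and /var/video source directory are my assumptions for an already-mounted LTFS volume.

    # Scripted "drag and drop" of surveillance clips onto an LTFS tape
    # (sketch; assumes the tape is already mounted at /mnt/ltfs).
    import shutil
    from pathlib import Path

    tape = Path("/mnt/ltfs/surveillance")
    tape.mkdir(parents=True, exist_ok=True)

    for clip in Path("/var/video").glob("*.mp4"):
        shutil.copy2(clip, tape / clip.name)
        print(f"archived {clip.name} to tape")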
Best of all LTFS is FREE, yes that is not a typo it’s free.
Learn more about the IBM LTFS offering http://www-03.ibm.com/systems/storage/tape/ltfs/
We have kicked around the term Easy Tier for a few weeks now and thought it was time to dissect the technology in detail.
What is IBM Easy Tier? It is an easy tool (set up in 15-30 minutes) designed to optimize data placement in a hybrid extent pool (a mix of SSD & HDD). Understand that it is a 2-tier architecture for data (hot & cold data sets) and that all candidate data is reviewed at the extent level (sub-volume level) for movement within the hybrid pool. The two tiers can be either SSD+FC or SSD+SATA drive types.
IBM Easy Tier is a free tool available on the DS8700, DS8800 (LIC code level requirements, etc.) & Storwize V7000 subsystems (soon to be available on SVC via code set v6.1). Under the covers, Easy Tier is a continuous learning algorithm that should have 24 hours of data (a.k.a. workload learning) prior to making recommendations. The algorithm can bring benefits to day-to-day workloads as well as end-of-quarter and year-end workloads.
There are currently two modes to Easy Tier (Automatic & Manual Modes).
Easy Tier Automatic Mode enables the system to autonomically optimize data placement among physical resources with different granularity, performance, and cost characteristics.
Manual mode allows you to manually merge extent pools and relocate volumes.
Other Easy Tier features: it is fully aware of Copy Services (FlashCopy, MM/GM).
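To picture what the learning algorithm is doing, here is a toy sketch of heat-based extent placement. It is my own illustration of the idea, NOT IBM's actual Easy Tier algorithm, and the numbers are invented.

    # Toy hot/cold extent placement for a hybrid pool (illustrative only;
    # NOT IBM's actual Easy Tier algorithm).
    def plan_placement(extent_io_counts, ssd_extent_slots):
        ranked = sorted(extent_io_counts, key=extent_io_counts.get, reverse=True)
        hot = set(ranked[:ssd_extent_slots])     # promote hottest extents to SSD
        cold = set(ranked[ssd_extent_slots:])    # keep/demote the rest on HDD
        return hot, cold

    heat = {"e1": 9500, "e2": 120, "e3": 8700, "e4": 40, "e5": 3000}
    ssd, hdd = plan_placement(heat, ssd_extent_slots=2)
    print("SSD:", sorted(ssd), "HDD:", sorted(hdd))   # SSD: ['e1', 'e3']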
Oh, did I mention the price? It is FREE!
Get out there and learn more about IBM Easy Tier
Keith Thuerk | Tags: scom, xiv, svc, ds8000, storwize v7000
The IBM Storage Management Pack for Microsoft Systems Center Operations Manager (SCOM) is now available! This plugin enables customers to monitor and report the health state of their IBM storage products, using their native operations management tool and processes. The plugin operates with IBM DS8000, Storwize V7000, SVC, and XIV systems.
The plugin and documentation are available now via
On Dec 16th I posted info about the new draft paper for CDMI v1.0.1f
SNIA recently released a tutorial about CDMI.
Learn about CDMI here ---> http://tinyurl.com/3a93hjq
Yesterday the FCC passed new rules governing the management of Internet Traffic.
Key Elements you should understand:
Rule 1: Transparency -- A person engaged in the provision of broadband Internet access service shall publicly disclose accurate information regarding the network management practices, performance, and commercial terms of its broadband Internet access services sufficient for consumers to make informed choices regarding use of such services and for content, application, service, and device providers to develop, market, and maintain Internet offerings.
Rule 2: No Blocking -- A person engaged in the provision of fixed broadband Internet access service, insofar as such person is so engaged, shall not block lawful content, applications, services, or non-harmful devices, subject to reasonable network management. A person engaged in the provision of mobile broadband Internet access service, insofar as such person is so engaged, shall not block consumers from accessing lawful websites, subject to reasonable network management; nor shall such person block applications that compete with the provider's voice or video telephony services, subject to reasonable network management.
Rule 3: No Unreasonable Discrimination -- A person engaged in the provision of fixed broadband Internet access service, insofar as such person is so engaged, shall not unreasonably discriminate in transmitting lawful network traffic over a consumer's broadband Internet access service. Reasonable network management shall not constitute unreasonable discrimination.
The Cloud Security Alliance (CSA) has launched a revision of the Cloud Controls Matrix (CCM)
The CSA CCM provides a controls framework that gives a detailed understanding of security concepts and principles, aligned to the Cloud Security Alliance guidance across 13 domains.
Get your copy here --> http://tinyurl.com/y9jtkfw
Fresh off the virtual presses is the release of the NPIV for ESX guide for IBM Storwize V7000.
Get your copy here ---> http://tinyurl.com/29fft2v
Keith Thuerk | Tags: easy tier, automated tiering, storwize v7000, svc
The IBM Storage Tier Advisor Tool predicts whether the addition of Solid State Drive (SSD) capacity in conjunction with the Easy Tier function could benefit system performance.
The IBM Storage Tier Advisor Tool is a Windows console application that analyzes heat data files produced by Easy Tier and produces a graphical display of the amount of "hot" data per volume and predictions of how additional Solid State Drive (SSD) capacity could benefit performance for the system and by storage pool. This version of the Storage Tier Advisor Tool supports heat data files produced by Easy Tier on SAN Volume Controller 6.1, Storwize V7000 and DS8000 5.1, 5.1.5.
Heat data files are produced approximately once a day when Easy Tier is active on one or more storage pools, and they summarize the activity per volume since the prior heat data file was produced. On SAN Volume Controller and Storwize V7000 the heat data file is in /dumps on the configuration node and is named "dpa_heat.node_name.time_stamp.data". Any existing heat data file is erased whenever a new heat data file is produced. The file must be off-loaded by the user, and the Storage Tier Advisor Tool is then invoked from a Windows command prompt console with the file specified as a parameter. The user can also specify the output directory. The Storage Tier Advisor Tool creates a set of HTML files, and the user can then open the resulting "index.html" in a browser to view the results.
Usage information can be found in a readme supplied with the Tool.
We talked a few weeks back about Cloud Management / tools.
On April 12, 2010, SNIA formally released V1.0 of CDMI (Cloud Data Management Interface) (can be found here: http://tinyurl.com/2ukfo5t),
which allows enterprises to manage (create, change & delete) metadata placed into clouds.
SNIA recently released a draft for CDMI v1.0.1f
Looks like the standards are shaping up well for Cloud Management.
Until next time....
IBM Storwize V7000 SPC-2 benchmark data are available.
SPC-2 benchmark results came in at 3,132 MBps
IEEE Forms 100G Backplane and Twinax Cabling Study Group
IEEE 802.3 Working Group has formed a 100Gbps Ethernet Electrical Backplane and Twinax Copper Cable Assemblies Study Group.
Supporters of the initiative see a need to develop an electrical solution for 100GbE backplanes and short reach twinax cabling that operates at greater than 10Gbps per lane.
The study group will have its first meeting as part of the IEEE 802.3 Working Group interim meeting.
Keith Thuerk | Tags: ibm systems director, storage control, discovery, monitoring, alerting, smi-s, san, svc, storwize v7000, ds3000, ds4000, ds5000, ds6000, ds8000
IBM Systems Director Storage Control, a plug-in for IBM Systems Director, was announced on Nov 7th, 2010.
IBM Systems Director Storage Control has been designed & optimized for mid-range storage offerings to increase IT productivity; such tool sets are typically lacking in the mid-range space.
The Director plug-in provides support for DS3/4/5/6/8K disk subsystems as well as the storage virtualization engines Storwize V7000 and SVC; to round out the portfolio support, it includes N series, Fibre Channel switches, IBM servers (x, p & z) and SMI-S providers (proxy).
IBM Systems Director Storage Control also extends VMControl support (another IBM Systems Director plug-in) to broaden storage systems support from that perspective too.
A few more features of IBM Systems Director Storage Control are its integrated support for discovery, inventory, alerts, monitoring, configuration and provisioning of storage offerings, in addition to automated monitoring and event notification thresholds. Sound like a tool that can enhance your storage environment?
IBM Systems Director Storage Control is available for 60-day free trial. Try it now http://tinyurl.com/24ocuel
IBM has announced significant advances in its path to integrate electrical and optical devices on the same piece of silicon. The new CMOS Integrated Silicon Nanophotonics, which is the result of a decade of development at IBM's global Research laboratories, promises over 10X improvement in integration density over what is feasible with current manufacturing techniques.
IBM said it anticipates that Silicon Nanophotonics will dramatically increase the speed and performance between chips. In addition to combining electrical and optical devices on a single chip, the new IBM technology can be produced on the front end of a standard CMOS manufacturing line. Transistors can share the same silicon layer with silicon nanophotonics devices. To make this approach possible, IBM researchers have developed a suite of integrated ultra-compact active and passive silicon nanophotonics devices that are all scaled down to the diffraction limit, the smallest size that dielectric optics can afford. This makes possible the integration of modulators, germanium photodetectors and ultra-compact wavelength-division multiplexers with high-performance analog and digital CMOS circuitry.
· In March 2010, IBM announced a Germanium Avalanche Photodetector working at 40 Gbps with CMOS compatible voltages as low as 1.5V. This was the last piece of the puzzle that completes the prior development of the “nanophotonics toolbox” of devices necessary to build the on-chip interconnects.
· In March 2008, IBM scientists announced the world’s tiniest nanophotonic switch for "directing traffic" in on-chip optical communications, ensuring that optical messages can be efficiently routed.
· In December 2007, IBM scientists announced the development of an ultra-compact silicon electro-optic modulator, which converts electrical signals into light pulses, a prerequisite for enabling on-chip optical communications.
· In December 2006, IBM scientists demonstrated a silicon nanophotonic delay line that was used to buffer over a byte of information encoded in optical pulses, a requirement for building optical buffers for on-chip optical communications.
Keith Thuerk | Tags: perl, scripting, alphaworks, svc, storwize v7000
Ever wonder how to gain more efficiency from an already super easy to use disk subsystem?
Scripting is the key; of course!
IBM makes it easy by releasing a ready-to-use package via IBM alphaWorks, which uses Perl under the covers.
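If Perl is not your thing, the same idea works from any language that can drive the CLI over SSH. Here is a minimal Python sketch (not the alphaWorks package itself) using the third-party paramiko library; the host name and credentials are placeholders, while svcinfo lsvdisk is the standard volume-listing command.

    # Scripting the Storwize V7000 / SVC CLI over SSH (illustrative sketch;
    # pip install paramiko; host and credentials are placeholders).
    import paramiko

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect("v7000.example.com", username="admin",
                key_filename="/home/me/.ssh/id_rsa")

    stdin, stdout, stderr = ssh.exec_command("svcinfo lsvdisk -delim :")
    for line in stdout:
        print(line.rstrip())        # one volume per line, colon-delimited
    ssh.close()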
Best of all, the alphaWorks tool is FREE.
Happy Efficiency gains!
We have referenced NLSAS quite a bit in the past few weeks and felt it was time to get into the nuts and bolts of what NLSAS is exactly.
A SATA drive has only one port, so in the event of a path failure the SATA drive is not reachable. To add a level of HA to the SATA environment, interposers are utilized to add a second connectivity port while allowing conversion to another interface (e.g., Fibre Channel). Switching the interposer to speak SAS natively allows dual paths into your SAS environment.
What is gained by moving to NLSAS in your enterprise?
One drive type can be utilized across your enterprise; SAS.
Another is improved RAS (Reliability, Availability, Serviceability), though in my mind this is minor, since similar RAS was also added to the SATA offering.
The slight performance improvement from native protocol exchange is really not worth mentioning as a net gain.
Hope that helps clear the muddy water in and around NLSAS.
Earlier this week I was stunned to see China take the #1 & #3 spots at the 2H2010 supercomputing event.
Trying not to be dismayed, I wondered how US companies would respond.
Did I question our abilities? Certainly not; however, 2.36 PF is an order of magnitude larger than the #2 slot.
A short time ago I saw that some students have built their own supercomputer, which placed 3rd in the NCSA Green 500. So my optimism for the US future has been restored by our youth. I will continue to watch that space for other great news.
See for yourself: http://tinyurl.com/2dss6uz
PCI-SIG released the latest PCIe Base 3.0 specification. Check it out: http://preview.tinyurl.com/2869qhp
The new PCIe 3.0 architecture is a low-cost, high-performance I/O technology that includes a new 128b/130b encoding scheme and a data rate of 8 gigatransfers per second (GT/s), doubling the interconnect bandwidth over the PCIe 2.0 specification.
PCIe 3.0 technology also maintains backward compatibility with previous PCIe architectures and provides the optimum design point for high-volume platform I/O implementations across a wide range of topologies. Possible topologies include servers, workstations, desktop and mobile personal computers, embedded systems, peripheral devices, etc.
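If you are wondering how 8 GT/s "doubles" a 5 GT/s link, the encoding change is the trick; here is the back-of-envelope arithmetic as a quick Python check.

    # Per-lane PCIe bandwidth, back of envelope: Gen2 uses 8b/10b encoding
    # (20% overhead), Gen3 uses 128b/130b (~1.5% overhead).
    gen2 = 5.0e9 * (8 / 10)      # 4.0 Gb/s  = 500 MB/s per lane
    gen3 = 8.0e9 * (128 / 130)   # ~7.88 Gb/s = ~985 MB/s per lane
    print(f"Gen2 {gen2/8/1e6:.0f} MB/s/lane, Gen3 {gen3/8/1e6:.0f} MB/s/lane")
    print(f"speedup: {gen3/gen2:.2f}x")   # ~1.97x, i.e. roughly double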
Keith Thuerk | Tags: qlogic, 3gcna, cna, fcoe, iscsi, multi-protocol, convergence, ethernet
As the IT industry continues to push commoditization across the enterprise, its next target appears to be the Ethernet switching realm. How so, you ask? Network convergence: the newest CNA offering from QLogic, the 3GCNA, is a valiant effort toward full commoditization of Ethernet switches.
3GCNAs provide multi-protocol support (iSCSI, FCoE), and QLogic's newest 3GCNA allows VMs to switch protocols on the fly. Can you leverage this in your enterprise? You bet you can: gain direct benefit on day one by adding this into your tiering strategy (tier by disk and protocol) to lower your operational expenses. QLogic then takes the offering one level further by allowing direct VM-to-VM communication. I envision this having a direct impact in your environment from an admin standpoint; can you envision a larger impact?
The dual trends of network convergence and server consolidation are driving major changes aimed directly at server I/O. This drives home one of my favorite points for 2010, flexibility within your IT infrastructure, and the 3GCNA brings this flexibility in a big way.
Looking forward to the competition leap frogging this CNA offering and how it can benefit enterprises.
Ever wonder where the whole network convergence movement is going?
Wonder no more.
Keith Thuerk | Tags: storwize v7000, mid-range, ibm, ssd, easy tier, raid, thin provisioning, snapshot, replication, sync, async, mirror, svc, provisioning
IBM's new mid-range scalable storage offering, Storwize V7000, was designed from the ground up for ease of product install, administration & maintenance. Take notice: the product is new, but the code set it runs on is stable, being based on IBM's SVC, which is more than 6.5 years young. The subsystem also brings enterprise-caliber technology into the mid-range market: disk virtualization (based on SVC technology), RAID technology brought out of our DS8000 family, and the ease-of-use GUI from the XIV family. All features are available to help your enterprise become successful without worrying about watered-down feature sets.
Let’s run through some of the Enterprise Offerings:
1) SVC disk virtualization, 2) RAID technology from the DS8K, including atomic update, 3) ease of use from XIV, 4) Easy Tier, 5) Directed Maintenance Procedures (DMP) to help in maintenance & recovery tasks, 6) embedded e-learning videos, 7) presets (a.k.a. templates) to ease use and simplify repetitive tasks, 8) replication (async or sync) over IP or Fibre Channel, 9) mirroring via MM/GM, 10) thin provisioning, 11) snapshots with plug-in capabilities for FlashCopy Manager (FCM), and the last item I will cover, DR automation (failover, failback, full site switching): true, mature enterprise technologies, not watered-down solutions.
But, know that as you dig into this extensive product offering you will see many more options.
Oh, I just can't help myself; I need to let you know about the 45 days of free disk migration to help get you off that other gear!
Isn’t it time to discuss your mid-range disk business with IBM?
Keith Thuerk | Tags: data warehouse, bi, roi, enterprise, data sets, value creation
Most sizable enterprises, if not all, have already invested in a Data Warehouse (of some size). Not only is it a business best practice, but it helps uncover new business opportunities if used properly. I wonder how many enterprises that have invested in a Data Warehouse are getting ROI from it? I ask since the Business Intelligence (BI) market seems to be growing faster than any other segment of the IT market.
Which leads me to one of two conclusions: either there hasn't been a whole lot of value derived from the Data Warehouse investment to this point, or the Data Warehouse tools are adapting to new data sets and becoming able to analyze them. Which is it in your enterprise?
How is your enterprise using the data retrieved from the Data Warehouse? How has it impacted your enterprise's bottom line? In this Age of Analytics you need every leg up on your competition, which is why I don't anticipate many responses; most want to keep this info under wraps. Does your Data Warehouse provide a headlight view or a taillight view of your customers?
How is your enterprise going to leverage BI to lead to new business changes? Can’t you envision a time when BI is used to help make tactical and strategic decisions?
Is your enterprise getting the most out of your Data Warehouse? Isn’t it time to discuss your BI business with IBM?
Wonder what IOPS value you utilized for planning your VDI thin-client environment?
Was it 4 IOPS, 8 IOPS, or 12 IOPS per thin client?
How many IOPS are you driving in your environment?
Is the VMware standard of 20 IOPS / thin client your norm or the exception?
Heads Up Security Teams - VMware vSphere 4.0 Earns Common Criteria EAL4+ Certification
Keith Thuerk | Tags: storwize v7000, oltp, benchmarks, virtualizing
The IBM Storwize V7000 SPC-1 benchmark (OLTP-style workload) is now available at http://www.storageperformance.org/results/benchmark_results_spc1/#spc1
Review the executive summary at http://www.storageperformance.org/benchmark_results_files/SPC-1/IBM/A00097_IBM_Storwize-V7000/a00097_IBM_Storwize-V7000_2-node_SPC1_executive-summary.pdf
SPC benchmark system info:
This measurement is of an IBM Storwize V7000 system with 120 internal disk drives and also 80 external drives that are being virtualized. Although we cannot measure a system with 240 drives at this point, this configuration provides a good perspective of the expected performance of a system with close to the maximum number of internal drives and also shows that the Storwize V7000 system performs very well when virtualizing external disk systems.
Keith Thuerk | Tags: protectier, tape, ds8000, tklm, ds3000
Are you seeking some FREE IBM deep-dive information (DS8000, ProtecTIER, Tape, TKLM, DS3000 and more)?
Please sign up for some ATS Accelerate sessions:
Registration Link: www.tinyurl.com/ATSAccelerate
Keith Thuerk | Tags: storwize v7000, flashcopy, ssd, storage, enterprise, mid-range
Tired of looking everywhere for URLs associated with the IBM Storwize V7000?
Look no further....
Main IBM Storwize V7000 URL:
Storwize V7000 GUI Video
Storwize V7000 Flashcopy Video
Keith Thuerk | Tags: cloud computing, data center, virtualization, storage
I seem to be stuck in the clouds lately; I am serious, I have spent a great deal of time thinking about Cloud Computing.
For instance, with the in-depth push into virtualization across all layers in the Data Center (storage, networking, servers, etc.), I have gotten hung up on this: at what point in the virtualization process does an enterprise cross over from a virtualized Data Center to having a Private Cloud? Do we have metrics to let us know, and are there points of diminishing returns?
Another sticking point is Cloud management tools…. Where are we in the development and roll-out of these tools?
Anyone else stuck on these issues?
Today IBM took the covers off a revolutionary way to not only deploy but also run mid-range storage.
The new offering is called IBM Storwize V7000
Some of the features:
Revolutionize your enterprise storage solution; start by registering for the Oct 7th IBM storage announcement.
When Cloud Computing is mentioned, what does that thought conjure up?
As you are aware, there are 3 current offerings: private, public and hybrid clouds.
What is your enterprises driving factor?
Is it Business Productivity?
Is it ease of use?
Is it more virtualization?
Is it to drive more service management into the Enterprise?
Is it to drive more methodology use into the Enterprise?
How do the offerings fit into your existing enterprise?
How much of what is in your data center is cloud ready?
What type of impact do you project for your LAN/WAN?
How much has to be re-architected?
How will it impact your provisioning?
How will it impact your change management structure?
How will it impact your in-house ITIL standards? Will you require more?
How do you see it impacting your workload & workload patterns?
How will it impact your VPNs?
How will it impact your DR/BC plans?
How does moving to a Cloud Computing model impact your compliance gear?
How will it impact your existing skill base?
How much of your data will ever be in the cloud?
What will have to change to support such an offering (in-house)?
What are the security ramifications?
How will you manage your data in the Cloud?
Will you allow your data in a multi-tenancy offering?
Do you have a multi-year plan to get to Cloud Computing (I.E. Start small and get there over time)?
Don't you think it's time to start the Cloud discussion so you can lay down the proper foundation?
Keith Thuerk | Tags: secure disk wiping, erasure, dlp, audit
Secure disk wiping, a.k.a. disk erasure, has been around for a good while in terms of IT life cycles.
Wondering how pervasively you utilize it in your enterprise environment?
It is well understood that most IT breaches occur after data has left the premises (hence DLP).
Some well-known options available:
shred -vfz -n 100 /dev/hda
dd if=/dev/zero of=/dev/hda (then run a pass with ones, etc.)
dd if=/dev/random of=/dev/hda (perhaps several times; /dev/urandom is much faster for this)
Or does your enterprise perform a combination of them?
Wondering does a combination of a few inexpensive solutions provide the same type of protection (or nearly the same) as a more expensive complete offering?
Does your enterprise wipe each desktop / laptop disk before it leaves the building for good?
Does your enterprise wipe each server disk (after failure or after its aged out)?
Does your enterprise wipe each disk subsystem disk?
Does your enterprise take it a step further and retain the disks and fully destroy them say running them through Disk Shredders?
Are these types of events plugged into your change management systems?
Are these types of events plugged into your DLP policies? Do they trigger audit events?
How much FTE or PTE time is allocated to attend to these duties?
Wondering how Secure Disk erasure impacts the IT budget and IT Audit cycles?
I would enjoy seeing how you might be performing secure disk wiping; please provide some insight without giving away too much info.
Keith Thuerk | Tags: ssd, solid state drives, disk subsystem, ilm, tier, performance, utilization
Solid State Drives (SSD)
We are all aware of the hundreds of articles and blogs raving about how the performance of SSD is going to transform the storage industry. I am not going down that rabbit hole; I am, however, going to present another angle altogether. Have you considered how many SSDs your disk subsystem can support without bringing it (the disk subsystem) to its virtual knees? You must understand, it is one thing to support SSDs (say, as an RFP checklist response), and it is quite another to have them operate in your tiered ILM without negatively impacting disk subsystem performance.
Have you considered how the disk subsystem architecture changed to support the SSDs? Was the disk subsystem powerful enough to support SSDs before their implementation? What is the optimal utilization to run a disk subsystem at with SSDs? What is an optimal SSD utilization rate (is it 30%, is it 50%, or more)? Which application will you run on SSD? Which business unit will want to run their application on SSDs (besides all of them)?
What critical thinking have you been through to get to the nuts and bolts of how this technology will impact your enterprise?
Keith Thuerk | Tags: vdi, i/o, storage, disks, security, fcoe, lan, man, flexible
What does it take for a successful VDI implementation?
Let’s think about a few integral steps in getting your VDI project from the whiteboard into production (we don’t have enough space to cover all aspects such as project management, change management, Thin Client selection, etc.)
VDI, a.k.a. Virtual Desktop Infrastructure, relies on your disk subsystem being a stable foundation on which you can build and grow your VDI environment. Yet it needs to be flexible enough to meet the demands of this environment moving forward, too.
Properly sized: How did you size your subsystem prior to rollout?
How many I/O’s did you allocate for each thin client?
How many more I/O’s did you allocate for each application?
How will you handle the ‘exception application’ which is not supported by VDI today?
Do not overlook some applications which require higher uptime than others.
What type of disk drive did you select to achieve this performance metric?
How large are the disk pools you are going to utilize to keep your performance consistent (Proof of Concept, Pilot, Initial rollout, full rollout)?
Did your sizing meet your performance expectations in your lab? A back-of-envelope check is sketched below.
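Here is the kind of back-of-envelope math I mean, as a small Python sketch. Every input below (1,000 clients, 12 IOPS each, 50% reads, a RAID-5 write penalty of 4, 180 IOPS per 15K spindle) is an assumption to be replaced with your own measured numbers.

    # Back-of-envelope VDI spindle estimate (illustrative; inputs are assumptions).
    def spindles_needed(clients, iops_per_client, read_pct,
                        raid_write_penalty, iops_per_spindle):
        total = clients * iops_per_client
        reads = total * read_pct / 100
        backend = reads + (total - reads) * raid_write_penalty  # RAID amplifies writes
        return total, backend, -(-backend // iops_per_spindle)  # ceiling division

    front, back, disks = spindles_needed(1000, 12, 50, 4, 180)
    print(f"{front:.0f} front-end IOPS -> {back:.0f} back-end IOPS "
          f"-> {disks:.0f} spindles")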
Protocol selection: What protocol did you select for the backend connectivity (Fibre Channel, FCoE, iSCSI, NFS)?
What protocol does it use to the desktop (I.E. PCoIP or RDP)?
How agile is the protocol you selected? Was it designed for WAN use?
What are the cost implications if other network gear must be ordered?
Are you working w/ the Network team to ensure all aspects of the project are getting the proper attention?
For instance, how did VDI impact your LAN traffic (if applicable)?
How did VDI impact your WAN traffic (if applicable)?
Management: How are you going to keep control over your VDI environment (Roles/Responsibilities)?
Can you quickly integrate management into a directory structure and push out policies to manage all these new devices?
How flexible is your storage offering in assisting with desktop provisioning / creation?
How many golden images will you create & maintain (these might be worlds apart by the time you complete your pilot)?
How will you maintain these images? How much time has been allocated to maintain these images?
Security: Sure, you have locked down the selected OS, but what about security at the physical layer? Have you disabled the USB ports too?
How are your existing security practices today going to be impacted? Will your VDI selection work across a VPN tunnel?
These questions might help get you on your way to VDI success!
Anyone care to share their thoughts and concerns on the Net Neutrality issue swirling the globe?
Wondering how many enterprises have made the switch to IPv6? Are you willing to share some of your 'gotchas'?
Also wondering how many have migrated over to DNSSEC? Are you willing to share some of your 'gotchas'?
Today's enterprise edge is evolving not just at the NIU and broadband entry points but at the mobile backhaul too. So your network core needs to handle not only the faster bandwidths but also the transition technologies; for instance, are you ready for low-jitter SONET? How are you planning to make the transition in your data center?
Keith Thuerk | Tags: security, compliance, enterprise storage, data center
The new data-everywhere enterprise is about flexibility, which enables task savings, freeing up resources to focus on other critical enterprise concerns such as risk, security & compliance.
What type of disk subsystem are we referring to that is flexible enough to let one accomplish more?
Are you wondering how it can impact your enterprise?
Keith Thuerk | Tags: structured data, unstructured data, storage
Everyone in IT is familiar w/ the 80 / 20 “rule”.
For networking, it used to be that 80% of traffic would remain local and 20% would / could go offsite. Then the Internet Age came about and flipped that paradigm.
The data-everywhere revolution is flipping the data storage model from an 80 / 20 rule where 80% WAS structured data and 20% was unstructured data. In the past few years the rule has been creeping toward equilibrium, and it seems that with the requirement to store video in-house, the paradigm has shifted: 80% of data is now unstructured (email, digital media, medical imaging, digital surveillance video, engineering / scientific data, etc.) and 20% of data is structured. Are you locked into a storage offering built for the old paradigm? How is that going to benefit your enterprise moving forward?
How flexible is your new storage offering?
A flexible infrastructure is paramount in the new data center; it enables an enterprise to quickly adapt to the changing course of the business and technology while working within the realm of shrinking IT budgets.
From a networking perspective, one enabler of flexibility is the recent 40 & 100 Gb (802.3ba) Ethernet ratification. This advancement is very important for the new data center, not just from a speeds-and-feeds perspective but for the technology that will follow: TRILL (RFC 5556) will change the game not just on the network edge (Metro Ethernet) but in emulating Ethernet services in all kinds of new connectivity offerings. From a storage perspective, how does an enterprise properly move into the Petabyte age? Your business needs should dictate your storage needs, not the other way around.
Or put another way how to tame the data explosion issue?
How is your enterprise preparing for this?
Keith Thuerk | Tags: data center, convergence, fcoe, security, policies, enterprise
In response to: Data Center 7.0 & Storage
I am surprised that nobody questioned the business model I referenced by questioning its validity. I believe it is NOT valid, as business is constantly adapting to new trends and directions (successful enterprises are not rigid). I believe that in order for the new flexible business model to be followed, the IT organization must be more flexible. Another component is that IT leaders need to push further into the business to allow enterprises to continue to grow through increased productivity and automation. We also need longer-term goals: instead of working quarter to quarter, we need to take the time to plan out a strategy and roll it out, and being integrated more tightly into the enterprise is just one way to achieve this.