In response to: Data Center 7.0 & Storage
I am surprised that nobody questioned the business model I referenced. I believe it is NOT valid, as business constantly adapts to new trends and directions (successful enterprises are not rigid). For a more flexible business model to be followed, the IT organization must be more flexible as well. Another component is that IT leaders need to push further into the business, so enterprises can continue to grow through increased productivity and automation. We also need longer-term goals: instead of planning quarter to quarter, we need to take the time to map out a strategy and roll it out, and being integrated more tightly into the enterprise is just one way to achieve this.
Data Center 7.0
Keith Thuerk | Tags: enterprise, fcoe, security, policies, center, convergence, data | 5 Comments | 1,444 Visits
Keith Thuerk | Tags: tape, ilm, drag, ltfs, drop, hsm | 2 Comments | 2,006 Visits
Enterprise data protection is not a new topic (it is, in fact, a core IT principle: protecting data), and we continue to see growing investment in data protection methodologies in enterprises across the globe. While some of our competitors want you to believe that all data needs can be satisfied by placing everything on spinning disk, that doesn't seem very economical to me. Tape is the greenest IT offering on the market to date. Rather than discuss tape offerings in general, let us discuss what IBM helped write and released in early 2010: LTFS, the Linear Tape File System. This new offering gives admins drag-and-drop capabilities for files, making the tape file system appear to the operating system as a removable media format (flash drive, DVD, etc.). Drag and drop to and from tape seems like an inexpensive solution for video surveillance files, for example.
How does LTFS work?
LTFS is a physical media partitioning technology: it creates two partitions on LTO5 tapes, one holding the data and one holding the index, with the index schema stored in XML format.
What Operating Systems are supported?
RHEL 5.4 and 5.5, Mac OS X 10.5, and Windows 7 are supported, and others are getting their stamp of approval.
What Backup Software is supported?
No dedicated backup program such as TSM or NetBackup is required, as the OS can write directly to the tape itself.
What Hardware is supported?
LTFS V1 supports LTO5 tape drives; tape libraries are still gaining their stamp of approval, so look for that in future releases.
LTFS defaults: block size is 1MB.
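As a sketch of what drag-and-drop tape looks like in practice (the device name `/dev/sg3` and mount point `/mnt/ltfs` are placeholder assumptions; you need the free LTFS package and an LTO5 drive):

```shell
# Format the cartridge with the two LTFS partitions (this erases the tape!)
mkltfs -d /dev/sg3

# Mount the cartridge as an ordinary file system
ltfs -o devname=/dev/sg3 /mnt/ltfs

# From here, normal file operations work -- no backup software required
cp surveillance-cam1.mp4 /mnt/ltfs/
ls -lh /mnt/ltfs

# Unmounting flushes the XML index back to the index partition
umount /mnt/ltfs
```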
Best of all, LTFS is FREE. Yes, that is not a typo: it's free.
Learn more about the IBM LTFS offering http://www-03.ibm.com/systems/storage/tape/ltfs/
Keith Thuerk | Tags: ibm, storwize, v7000, announcement, storage | 1 Comment | 1,410 Visits
IBM has shipped 1000 Storwize V7000 units since the Oct announcement.
Talk about a fast start.
Don't take my word for it; check it out.
Keith Thuerk | Tags: cna, iscsi, convergence, fcoe, multi-protocol, qlogic, 3gcna, ethernet | 1 Comment | 1,248 Visits
As the IT industry continues to push commoditization across the enterprise, its next target appears to be the Ethernet switching realm. How so, you ask? Network convergence: the newest CNA offering from QLogic, the 3GCNA, is a valiant effort toward full commoditization of Ethernet switches.
3GCNAs provide multi-protocol support (iSCSI, FCoE), and QLogic's newest 3GCNA allows VMs to switch protocols on the fly. Can you leverage this in your enterprise? You bet you can: a direct day-one benefit comes from adding this into your tiering strategy (tier by disk and protocol) to lower your operational expenses. QLogic then takes the offering one level further by allowing direct VM-to-VM communication. I envision this having a direct impact on your environment from an admin standpoint; can you envision a larger impact?
The dual trends of network convergence and server consolidation are driving major changes aimed directly at server I/O. This drives home one of my favorite points for 2010, flexibility within your IT infrastructure, and the 3GCNA brings this flexibility in a big way.
Looking forward to seeing the competition leapfrog this CNA offering, and to how that can benefit enterprises.
Keith Thuerk | Tags: protectier, tape, ds8000, ds3000, tklm | 1 Comment | 1,133 Visits
Are you seeking some FREE IBM deep-dive information (DS8000, ProtecTIER, Tape, TKLM, DS3000, and more)?
Please sign up for some ATS Accelerate sessions:
Registration Link: www.tinyurl.com/ATSAccelerate
Keith Thuerk | Tags: audit, erasure, disk, wiping, secure, dlp | 1 Comment | 1,177 Visits
Secure Disk Wiping
a.k.a. disk erasure, has been around for a good while in terms of IT life cycles.
Wondering how pervasively you utilize it in your enterprise environment?
It is well understood that most IT breaches occur after data has left the premises (hence DLP).
Some well-known options available:
shred -vfz -n 100 /dev/hda
dd if=/dev/zero of=/dev/hda bs=1M (then run a pass with ones, etc.)
dd if=/dev/random of=/dev/hda (perhaps several times)
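As a safe, self-contained sketch of this kind of erasure (demonstrated here against a throwaway scratch file; point shred or dd at a real device like /dev/hda only when you are absolutely certain):

```shell
# Demonstrate secure erasure on a scratch file instead of a live disk
TARGET=$(mktemp)
dd if=/dev/urandom of="$TARGET" bs=1024 count=16 2>/dev/null  # stand-in "sensitive data"
shred -fz -n 3 "$TARGET"   # 3 random overwrite passes, then a final zeroing pass
rm -f "$TARGET"
echo "erased: $TARGET"
```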
Or does your enterprise perform a combination of them?
Wondering: does a combination of a few inexpensive solutions provide the same type of protection (or nearly the same) as a more expensive complete offering?
Does your enterprise wipe each desktop / laptop disk before it leaves the building for good?
Does your enterprise wipe each server disk (after failure, or after it has aged out)?
Does your enterprise wipe each disk subsystem disk?
Does your enterprise take it a step further, retaining the disks and fully destroying them, say by running them through disk shredders?
Are these types of events plugged into your change management systems?
Are these types of events plugged into your DLP policies, and do they trigger audit events?
How much FTE or PTE time is allocated to attend to these duties?
Wondering how secure disk erasure impacts the IT budget and IT audit cycles?
I would enjoy seeing how you might be performing secure disk wiping; please provide some insight without giving away too much info.
Keith Thuerk | Tags: i/o, security, disks, flexible, storage, lan, fcoe, man, vdi | 1 Comment | 1,579 Visits
What does it take for a successful VDI implementation?
Let’s think about a few integral steps in getting your VDI project from the whiteboard into production (we don’t have enough space to cover all aspects such as project management, change management, Thin Client selection, etc.)
VDI (Virtual Desktop Infrastructure) relies on your disk subsystem being a stable foundation on which you can build and grow your VDI environment, yet it needs to be flexible enough to meet the demands of this environment moving forward too.
Properly sized: How did you size your subsystem prior to rollout?
How many I/O’s did you allocate for each thin client?
How many more I/O’s did you allocate for each application?
How will you handle the ‘exception application’ which is not supported by VDI today?
Do not overlook some applications which require higher uptime than others.
What type of disk drive did you select to achieve this performance metric?
How large are the disk pools you are going to utilize to keep your performance consistent (Proof of Concept, Pilot, Initial rollout, full rollout)?
Did your sizing meet your performance expectations in your lab?
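The sizing questions above lend themselves to back-of-the-envelope arithmetic. A minimal sketch, where the client count, per-client I/O allocation, application headroom, and per-spindle IOPS figures are all purely illustrative assumptions, not recommendations:

```shell
CLIENTS=500            # thin clients in the rollout
IOPS_PER_CLIENT=12     # steady-state I/Os allocated per client
APP_IOPS=2000          # extra headroom for applications
IOPS_PER_DISK=180      # rough figure for one 15K spindle

TOTAL_IOPS=$((CLIENTS * IOPS_PER_CLIENT + APP_IOPS))
DISKS=$(((TOTAL_IOPS + IOPS_PER_DISK - 1) / IOPS_PER_DISK))  # round up

echo "Total IOPS to serve: $TOTAL_IOPS"
echo "Spindles needed: $DISKS"
```

Remember that boot storms and login peaks can multiply the steady-state figure, which is exactly why validating the sizing in your lab matters.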
Protocol selection: What protocol did you select for the backend connectivity (Fibre Channel, FCoE, iSCSI, NFS)?
What protocol does it use to the desktop (e.g., PCoIP or RDP)?
How agile is the protocol you selected? Was it designed for WAN use?
What are the cost implications if other network gear must be ordered?
Are you working w/ the Network team to ensure all aspects of the project are getting the proper attention?
For instance, how did VDI impact your LAN traffic (if applicable)?
How did VDI impact your WAN traffic (if applicable)?
Management: How are you going to keep control over your VDI environment (Roles/Responsibilities)?
Can you quickly integrate management into a directory structure and push out policies to manage all these new devices?
How flexible is your storage offering in assisting with desktop provisioning / creation?
How many golden images will you create & maintain (these might be worlds apart by the time you complete your pilot)?
How will you maintain these images? How much time has been allocated to maintain these images?
Security: Sure, you have locked down the selected OS, but what about security at the physical layer? Have you disabled the USB ports too?
How are your existing security practices today going to be impacted? Will your VDI selection work across a VPN tunnel?
These questions might help get you on your way to VDI success!
A flexible infrastructure is paramount in the new data center; it enables an enterprise to quickly adapt to the changing course of the business and technology while working within the realm of shrinking IT budgets.
From a networking perspective, one enabler of flexibility is the recent ratification of 40 & 100Gb Ethernet (802.3ba). This advancement is very important for the new data center, not just from a speeds-and-feeds perspective but because of the technology that will follow: TRILL (RFC 5556) will change the game, not just on the network edge (Metro Ethernet) but in emulating Ethernet services in all kinds of new connectivity offerings. From a storage perspective, how does an enterprise properly move into the petabyte age? Your business needs should dictate your storage needs, not the other way around.
Or, put another way: how do you tame the data explosion?
How is your enterprise preparing for this?
Keith Thuerk | Tags: security, enterprise, convergence, fcoe, center, data, policies | 1 Comment | 1,420 Visits
How has your enterprise been exploiting the 'recession' to push forward with new technology upgrades? Isn't that what we were taught to do in Econ 201 & 202? Recall that the businesses that invested the most during times of economic downturn were the ones taking the greatest strides as the recovery came around, and hence gaining the most market share.
Think about your current data center: was it built prior to 1997? If so, your enterprise is not alone, as a majority of them were built before then (kind of scary, knowing all the great technology created during the dot-com boom and all that has followed). Needless to say, such an enterprise is NOT exploiting the most current technology initiatives that will allow the business to continue to grow and gain market share. When did you deploy IP phones? Were you early or late to deploying that technology set?
Are you ready to deploy a converged storage network? If not, why? Is it due to lack of skills? Is it due to budget? Here are just some of the benefits: simplified administration (are you willing to hand over your storage network to the network team? If not, what are you doing to prepare for this coming skill absorption?), rapid storage provisioning and rollouts, and increased cost savings. Are you skeptical that this will happen? Think back to the early '90s and how many network protocols the enterprise had to contend with (DECnet, IPX/SPX, AppleTalk, Apollo Domain, Named Pipes, XNS, etc.); once IP was deemed the enterprise standard, IP won BIG. Still not convinced? Consider how many technologies Ethernet has displaced (ATM, Token Ring, FDDI). With IP & Ethernet pushing into the storage arena, isn't it time to put considerable thought into how this will impact your storage network moving forward? Also consider how it will impact your data center.
Until next time, keep thinking about how Data Center 7.0 can help your enterprise move toward quicker growth when this recession comes to an end.
Keith Thuerk | Tags: tier, ibm, 3, xiv, idc, gen3, storage, 1, gen | 1,337 Visits
A freshly printed IDC White Paper shows how the recent XIV Gen 3 changes have pushed it from Tier 1.5 into the Full Tier 1 space.
Prior to now, XIV was a disruptive technology... well, yet another storage inflection point has been reached.
Hang onto your legacy vendors no more... prepare for XIV Gen 3 (GA March '12)
Get your IDC XIV Tier 1 paper here - http://tinyurl.com/6mkmxrr
Let's talk about your IBM Storage today!
Keith Thuerk | Tags: state, ssd, performance, drive, sizing, solid | 1,138 Visits
As a follow-on to my September 13, 2010 blog entry about SSD, I felt it important to uncover what type of workload is best suited for SSD in your enterprise.
Smaller workloads (packets) are better suited to be serviced by SSD. What is a small workload, you ask? Anywhere between 4K and 64K is the sweet spot; that is NOT to say you can't service 128K and 256K workloads, just don't expect a huge increase in performance with them.
Throw in a very random workload, which is another characteristic SSD services well.
Then add in a read-heavy workload and you have a great case for SSDs.
Please recall that SSDs do very little for write performance.
Well-suited for environments that need:
– High IO/sec
– Small capacity
IBM, through extensive research, has concluded that the proper SSD-to-disk ratio in a disk subsystem is between 5-13% of total capacity.
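Applied to a concrete subsystem (the 100TB total is an illustrative assumption, not a sizing recommendation), the 5-13% guideline works out like this:

```shell
TOTAL_GB=102400   # a hypothetical 100TB subsystem, in GB
SSD_LOW=$((TOTAL_GB * 5 / 100))     # low end of the suggested mix
SSD_HIGH=$((TOTAL_GB * 13 / 100))   # high end of the suggested mix
echo "SSD tier: ${SSD_LOW}GB to ${SSD_HIGH}GB of ${TOTAL_GB}GB total"
```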
You should fully evaluate your workload before just throwing SSD at a problem in hopes that SSD can solve it.
In summary, random read requests in small packet sizes are a perfect fit for your SSD workloads.
Keith Thuerk | Tags: hadoop, sets, data, processing, big, no-sql, real-time, analytics, ibm | 974 Visits
At this point in 2011 we have all heard the term Big Data, but what is Big Data really?
There are still a lot of fragmented ideas about Big Data; the best definition I can find is this:
Big Data can be defined simply as multi-terabyte datasets, typically ten or more. But Big Data also involves big complexity, namely many diverse data sources (both internal and external), data types (structured, unstructured, semi-structured), and indexing schemes (relational, distributed file systems, no-SQL). Plus, Big Data requires big processing to achieve useful analytic results.
Summary: Data + Analytics
What are the characteristics of Big Data?
· Very large data sets
· Distributed Aggregation
· Loosely Structured
· Often incomplete
Solve data issues with algorithms that process all the data, not just subsets of it.
What are the Big Data components?
· Data at Rest
· Fast Data
Some food for thought in your enterprise:
Does Big Data mean more irrelevant data or does it require better BI tools?
Will Big Data arrive in waves?
If so, how will you accommodate the sudden spikes in requirements?
Are you feeling out of touch with the coming onslaught of Big Data, even if you work in IT? No need to worry; this is just the next phase of IT expanding its role in the enterprise. You might explore adding some skills, or deeper skills, in and around analytics.
All the solution components are not yet in place to monetize Big Data results, will you be the next startup to exploit these results and help change IT?
In my mind, Big Data will change industries and the world! Watch out, world: here comes the next IT wave!
Keith Thuerk | Tags: computing, provisioning, it, cloud, automation | 999 Visits
Everyone knows that cloud building (regardless of type: private, public, hybrid) has many facets to get right. Let's jump into one of the tools of cloud building today: IT automation.
You know that automation requires a full understanding of IT assets (new and old).
So, are enterprises large and small doing a good job of managing IT assets today? Most would admit they are not doing a good job of managing assets (on-site or off).
The asset management concept sounds easy enough, but think about how assets have changed in just the last two years (tablets, mobile doc readers, etc.); these devices have to be counted too, as they contribute to the creation and use of corporate assets.
With the proliferation of assets of all types and ever-expanding network boundaries, the device count keeps ticking up.
Where to start?
Service catalog deployment seems to be the most pressing issue today.
What products do you use?
Have you found any good open source tools for asset management?
IT automation was once banished due to restricted budgets; now it's a necessity, as it's a fundamental IT building block (agility).
What is the state of IT automation today?
How can IT automation help your environment? When fronted by a service catalog, your end users can hit the service portal and request a new system or service. Your provisioning tools then get to work setting up the requested environment.
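A minimal sketch of that catalog-fronted flow, where the catalog entries and VM sizes are entirely hypothetical (not any product's API): requests matching a catalog entry proceed to provisioning, everything else is refused.

```shell
# A mock service-catalog gate in front of a provisioning step
provision() {
  case "$1" in
    small)  echo "provisioning: 1 vCPU, 2GB RAM" ;;
    medium) echo "provisioning: 2 vCPU, 4GB RAM" ;;
    *)      echo "rejected: '$1' is not in the service catalog"; return 1 ;;
  esac
}

provision small
provision mainframe || true   # requests outside the catalog are refused
```

In a real deployment the `echo` lines would be calls into your provisioning tooling, but the design point is the same: the catalog constrains what automation is allowed to build.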
How far can you take IT automation in your environment? The possibilities are limited only by time, technology, and of course money.
Are you tying asset management into your BI infrastructure? If not, why not?
Are you tying asset management into your sales force automation? If not, why not?
Are you tying asset management into your consolidated I/O environments?
You should be spending time to see how IT automation can reduce your IT spend.
Keith Thuerk | Tags: v6.2, clustering, out, scale, storwize, 10g, v7000 | 880 Visits
IBM has shipped the next release of Storwize V7000, V6.2.
Some of the cool features include (but are not limited to) clustering, scale-out, and 10Gb support.
Plus plenty of others; check out the official announcement letter:
How many enterprises have been waiting to buy their own Storwize V7000 until IBM shipped 15K drives?
Wait no more!
As of Friday, April 15th, 2011, we are shipping 146GB 15K SAS drives for the IBM Storwize V7000. A great size of drive for performance, without having to spend a lot of $$$ just to end up creating your own short-stroked drive.
Check out the announcement below.
Look for more 15K drives during the summer!!!!