High Performance Analytics with Big Data
Rohit Valia, RVALIA@US.IBM.COM
Platform Symphony will be featured at the Information on Demand conference. Join us at the event and learn more about the latest trends and technologies.
New frameworks like Hadoop are leading the way in providing a scalable and cost-effective solution for a variety of data-intensive problems. This session reviews the technical requirements for a low-latency, multi-tenant "big data" cluster that generates higher ROI. By harnessing the power of distributed compute and data, coupled with advanced workload scheduling, organizations can scale to thousands of machines and achieve new levels of performance and efficiency. This session will show how enterprises can build a shared infrastructure for compute- and data-intensive applications, deploy mixed workloads, and leverage advanced scheduling and management capabilities for Hadoop and non-Hadoop applications on a single cluster.
3817A Super-scaling Your Big Data Applications With High Performance Computing Capabilities
Date: Thu, Oct 25, 2012
Time: 3:30 PM - 4:30 PM
Location: South Pacific B - Mandalay Bay North Convention Center
Low-latency workload support is emerging as a high priority for organizations that need to analyze big data for mission-critical applications. Financial services (risk mitigation, fraud detection), life sciences (bioinformatics), and government (intelligence) require a heterogeneous High Performance Computing framework that can distribute and schedule workloads with numerous simultaneous short-running jobs across a grid of computing resources. In this session, you will learn how IBM InfoSphere BigInsights and InfoSphere Streams work with Platform Symphony to run low-latency big data applications with increased resource utilization, higher availability, improved job execution predictability, and better manageability.
The theme of the June 4th launch of the Platform Computing integration into IBM was the democratization of High Performance Computing.
eWeek reported on it in "IBM Targets Big Data With Tech From Platform Computing Buy, New Storage Systems."
Alan Radding captured the synergies between HPC and Big Data on his blog. In his post, Supercomputing for Everyone, he writes: "IBM is doing this mainly by bringing Platform Computing, a recent acquisition, to the HPC party. These include Platform LSF and Platform Symphony to enable up to 100% server utilization, Platform Cluster Manager, System x iDataPlex, and System Storage DCS3700 for parallel file management storage plus offerings for Big Data and cloud computing. Previously iDataPlex was IBM’s main HPC offering."
As more people see how high performance computing technologies apply to the challenges of Big Data, we hope to see faster adoption of Big Data technologies in mainstream computing. Clients will see these as mature technologies being applied to new problems.
Many applications using the Hadoop MapReduce framework run jobs with short execution times. For such jobs, Apache Hadoop-based implementations do not provide response times that make the framework feasible. In addition, how data transfer and consumption are optimized significantly affects the performance of MapReduce jobs.
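To make the latency discussion concrete, the MapReduce model the post refers to can be sketched as a toy, in-process simulation in Python (this is illustrative only, not Hadoop's actual implementation; all function names here are invented for the sketch). In real Hadoop, each phase also pays per-job scheduling and task-startup overhead, which is what dominates short-running jobs:

```python
from collections import defaultdict

def map_phase(records, map_fn):
    """Apply the map function to every input record, emitting (key, value) pairs."""
    for record in records:
        yield from map_fn(record)

def shuffle(pairs):
    """Group intermediate (key, value) pairs by key, as the shuffle stage does."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reduce_fn):
    """Apply the reduce function to each key and its grouped values."""
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Word count: the canonical MapReduce example.
def wc_map(line):
    for word in line.split():
        yield (word, 1)

def wc_reduce(word, counts):
    return sum(counts)

lines = ["big data big cluster", "low latency big data"]
result = reduce_phase(shuffle(map_phase(lines, wc_map)), wc_reduce)
print(result)  # word -> total count across all input lines
```

Even this trivial job implies a full map, shuffle, and reduce cycle; on a real cluster that cycle carries fixed startup costs, which is why short jobs need a lower-latency scheduler such as Platform Symphony.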
IBM Platform Symphony accelerates your MapReduce application performance.
Download Platform Symphony Developer Edition and see its benefits for your applications.
Join me as we discuss Big Data with Gartner:
How to Reduce Cost in Government and Make Faster Decisions with Near-Real-Time Hadoop Analytics
Merv Adrian, Research Vice President, Gartner, Inc.