Platform Symphony will be featured at the Information on Demand conference. Join us at the event and learn more about the latest trends and technologies.
Technologies like Hadoop are leading the way in providing scalable, cost-effective solutions for a variety of data-intensive problems. This session reviews the technical requirements for a low-latency, multi-tenant big data cluster that generates higher ROI. By harnessing the power of distributed compute and data, coupled with advanced workload scheduling, organizations can scale to thousands of machines and achieve new levels of performance and efficiency. The session shows how enterprises can build a shared infrastructure for compute- and data-intensive applications, deploy mixed workloads, and leverage advanced scheduling and management capabilities for Hadoop and non-Hadoop applications on a single cluster.
3817A: Super-scaling Your Big Data Applications With High Performance Computing Capabilities
Date: Thu, Oct 25, 2012
Time: 3:30 PM - 4:30 PM
Location: South Pacific B - Mandalay Bay North Convention Center
Low-latency workload support is emerging as a high priority for organizations that need to analyze big data for mission-critical applications. Financial services (risk mitigation, fraud detection), life sciences (bioinformatics), and government (intelligence) all require a heterogeneous high-performance computing framework that can distribute and schedule workloads with numerous simultaneous short-running jobs across a grid of computing resources. In this session, you will learn how IBM InfoSphere BigInsights and InfoSphere Streams work with Platform Symphony to run low-latency big data applications for increased resource utilization, higher availability, improved job execution predictability, and better manageability.