Cray Inc. now offers a Big Data framework that gives customers the ability to implement and run Apache Hadoop more easily on their Cray XC30 supercomputers. Fusing the benefits of supercomputing and Big Data, the Cray Framework for Hadoop package improves overall efficiency and performance for Cray XC30 customers deploying Hadoop in scientific Big Data environments.

The Cray Framework for Hadoop package includes documented best practices and performance enhancements designed to optimize Hadoop for the Cray XC30 line of supercomputers. Built with an emphasis on scientific Big Data usage, the framework provides enhanced support for the data sets found in scientific and engineering applications, as well as improved support for multi-purpose environments in which organizations align their scientific, compute, and data-intensive workloads. This gives users the utility of Hadoop's Java-based MapReduce programming model on the Cray XC30 system, complementing the proven, HPC-optimized languages and tools of the Cray Programming Environment.

Based on early customer response, the initial release of the Cray Framework for Hadoop and an optimized Cray Performance Pack for Hadoop will be available as free downloads and will include validated, documented best practices for Apache Hadoop configurations. The performance pack includes a Lustre-Aware Shuffle that optimizes Hadoop performance on the Cray XC30 supercomputer. Further enhancements to the performance pack, including a native Lustre file system library and a plug-in that further accelerates Hadoop performance using the Aries system interconnect, will be available in the first half of 2014.

The launch of the Cray Framework for Hadoop further expands Cray's portfolio of offerings for the rapidly growing Big Data market.
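To make the Lustre-Aware Shuffle concrete: Apache Hadoop 2.x exposes a pluggable shuffle mechanism that lets a vendor substitute its own shuffle implementation through standard configuration. The sketch below shows how such a plug-in is typically wired into `mapred-site.xml`; the class name shown is purely hypothetical, as Cray has not published the actual class names or packaging of its performance pack.

```xml
<!-- mapred-site.xml: illustrative sketch only. Hadoop's pluggable
     shuffle lets a custom implementation (e.g. one that shuffles
     through a shared Lustre file system instead of HTTP transfers)
     replace the default via this property. -->
<configuration>
  <property>
    <name>mapreduce.job.reduce.shuffle.consumer.plugin.class</name>
    <!-- Hypothetical class name standing in for a Lustre-aware shuffle -->
    <value>com.example.lustre.LustreShuffleConsumerPlugin</value>
  </property>
</configuration>
```

A shuffle that reads map output directly from a globally visible Lustre file system avoids the node-to-node HTTP copy phase of the default shuffle, which is the general motivation for shuffle plug-ins on shared-storage systems like the XC30.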
Cray’s complementary array of Big Data solutions includes fast data and data movement capabilities with Cray Sonexion storage systems, a tiered storage and archiving solution with Cray Tiered Adaptive Storage, data discovery using the YarcData uRiKA appliance, Cray Cluster Supercomputers for Hadoop, and now a growing framework of enhancements and optimizations for running Hadoop workloads on Cray XC30 supercomputers.