Mellanox Technologies Ltd. announced the general availability of its MetroX TX6100 solution that enables InfiniBand and Ethernet RDMA connectivity between data centers. MetroX allows for rapid disaster recovery and improved utilization of remote storage and compute infrastructures across long distances and multiple geographic sites.
Purdue University successfully deployed MetroX TX6100 over six kilometers to connect computation clusters to storage facilities. Access to its remotely sited supercomputers allows the university to allocate assets across limited data center space more flexibly, resulting in higher facilities utilization without added construction or building retrofit costs. The savings come without sacrificing performance. MetroX’s long-haul capabilities allow Purdue researchers to run more complex simulations and further advance their cutting-edge research in areas such as climate change, aerospace and molecular biology.
“A common problem facing data-driven researchers is the time cost of moving their data between systems, from machines in one facility to the next, which can slow their computations and delay their results,” said Mike Shuey, HPC systems manager at Purdue University. “Mellanox’s MetroX solution lets us unify systems across campus, and maintain the high-speed access our researchers need for intricate simulations — regardless of the physical location of their work.”
Commonly used for long-reach connectivity within a data center, or between nearby compute and storage infrastructures, the MetroX series extends Mellanox InfiniBand solutions from a single-location data center network to campus and metro data centers up to 10 kilometers apart.
“The demand for long-haul interconnect technologies continues to increase as organizations deploy remote, agile systems,” said Gilad Shainer, vice president of marketing at Mellanox. “Mellanox’s MetroX RDMA systems provide the highest-performing interconnect solution over long distances.”