Written by The Next Platform, sponsored by Panasas
High performance computing tackles some of the most complicated problems facing mankind, from university researchers and pharmaceutical companies doing gene sequencing in the race to develop vaccines, to the computational fluid dynamics that pushes Formula One teams over the line and to the top of the podium.
With vast amounts of data being processed by bleeding edge architectures, storage is a crucial part of the equation. But what was up to the challenge a decade ago, or even a year ago, might not cut it today. The problems associated with running massive climate simulations or manipulating genomes have taken years of research, and of trial and error. More recently, the demands of artificial intelligence have posed new challenges for storage architectures.
HPC storage systems are expensive, but so are the people who maintain and operate them, and the costs of both, along with those of downtime and outages, have often been overlooked in the past.