Achieving Performance at Scale: What’s Storage Got To Do with It?
Boosting storage performance has been a top priority ever since Panasas was founded more than ten years ago. Today, it’s more important than ever. In the IDC report “HPC End-User Study of the Storage and Interconnects Used in Technical Computing” (Sept. 2013), analysts highlighted the growing attention storage buyers are paying to storage performance:
Storage and data management have arguably become the most important HPC "pain points" already, with access densities a particularly troubling issue. […] Many HPC sites are doubling their storage capacities every two to three years, but adding capacity does not address the access density, data movement, and related storage issues many HPC buyers face.
Storage is too often the weakest link in technical computing workflows: when compute nodes have to wait for lower-performance storage systems to grind through I/O, workflows take longer to complete and productivity slows, ultimately hurting the bottom line. This is the access density problem IDC was referring to.
When this happens, your investments in processing, networking, middleware and applications are choked off by bottlenecks in your storage infrastructure. If you’re looking to maximize throughput of your technical computing infrastructure, storage performance often holds the key.
Today, technical computing workflows are common in both enterprise and traditional public sector HPC data centers. With the right IT infrastructure, the result is greater discovery, innovation and productivity. But achieving exceptional storage performance at the scale you need for technical research and enterprise applications can present real challenges.
You can scale legacy network-attached storage (NAS) systems for sheer capacity by simply adding more NAS units. But just cabling on more silos of NAS storage won’t help you linearly scale system performance, because NAS heads, which act as the interfaces between the storage media and network clients, quickly become choke points as data flows grow. We see this problem frequently in Life Sciences, Manufacturing, Media and Entertainment, and other commercial markets.
While the performance of legacy NAS technologies usually remains adequate for serving Microsoft Exchange data and meeting other core Enterprise IT needs, the extreme growth of unstructured big data and ever-faster compute infrastructures used for technical computing workflows require a new storage approach that fully addresses the requirements of massive scalability and performance.
Panasas® ActiveStor® 16 is a hybrid scale-out appliance that effectively breaks through performance gridlock.
Parallel data transfer allows simultaneous transmission of data over multiple separate paths, providing much faster transfer rates than the serialized architectures typically used in NAS systems. ActiveStor offers 100 percent parallel data transfer to maximize performance, and in production deployments, achieves up to 150 GB per second data throughput with capacity that can scale beyond 12 PB in a single file system! That’s impressive real-world performance. Just as importantly, you’ll also find that linear performance scaling is as simple as adding more shelves to the ActiveStor solution to increase both capacity and performance proportionally, without operational disruptions.
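To make the contrast with serialized NAS access concrete, here is a minimal, purely illustrative sketch of the parallel-transfer idea: a file striped across several storage nodes is fetched by concurrent workers, one per stripe, instead of every request funneling through a single NAS head. The node names, stripe map, and functions below are invented for illustration and are not Panasas APIs.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stripe map: stripe index -> (storage node, byte payload).
# In a real parallel file system the client would contact each node
# directly over the network; here the "fetch" is simulated in-process.
STRIPES = {
    0: ("osd-0", b"AAAA"),
    1: ("osd-1", b"BBBB"),
    2: ("osd-2", b"CCCC"),
}

def fetch_stripe(index):
    """Simulate fetching one stripe directly from its storage node."""
    node, data = STRIPES[index]
    return index, data

def parallel_read():
    """Fetch all stripes concurrently, then reassemble them in order."""
    with ThreadPoolExecutor(max_workers=len(STRIPES)) as pool:
        parts = dict(pool.map(fetch_stripe, STRIPES))
    return b"".join(parts[i] for i in sorted(parts))

print(parallel_read())  # b'AAAABBBBCCCC'
```

Because each stripe travels over its own path, aggregate throughput grows with the number of storage nodes rather than being capped by one interface, which is the property that lets performance scale as shelves are added.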
The hybrid architecture of ActiveStor is responsible for another major performance advantage: delivering the benefits of solid-state disk (SSD) and hard disk drive (HDD) storage from a single, global namespace. File system metadata and small-file data are placed on SSD-based flash storage, while large-file I/O accesses the hard drive-based tier, maximizing real-world application performance in a way that is not available from other parallel file systems.
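The tiering policy described above can be sketched as a simple placement rule. The threshold value and function below are assumptions made for illustration only; they do not reflect actual ActiveStor parameters.

```python
# Illustrative cutoff for "small" files; not a Panasas parameter.
SSD_THRESHOLD = 64 * 1024  # 64 KB

def place(kind, size_bytes):
    """Toy placement policy: metadata and small files go to the flash
    tier, large-file I/O goes to the hard-drive tier."""
    if kind == "metadata" or size_bytes < SSD_THRESHOLD:
        return "ssd"
    return "hdd"

print(place("metadata", 512))       # ssd
print(place("file", 4 * 1024))      # ssd
print(place("file", 10 * 1024**2))  # hdd
```

The design point is that flash absorbs the latency-sensitive, random, small accesses (metadata lookups, small files), while hard drives serve the large sequential transfers they are good at, so each medium handles the workload it suits best.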
We’ve designed ActiveStor so you don’t have to compromise. There’s no need to choose between fast performance, ease of management, and high reliability: ActiveStor delivers all three in a single hybrid scale-out NAS appliance.