By Brent Welch on October 7, 2011

In 2008 the first computer broke the petascale barrier, allowing a supercomputer to exceed 10^15 operations per second. The Panasas parallel file system was instrumental in accomplishing this major milestone, achieving extreme performance for big data sets. Los Alamos National Laboratory (LANL) built that first petascale system, “Roadrunner,” using the Panasas parallel file system to maximize I/O performance for the most demanding simulation and modeling applications. Roadrunner remains one of the Top 10 supercomputing systems in...

By Barbara Murphy on September 27, 2011
Categories: pNFS

Anyone who has been around the storage industry for a long time knows that standards adoption happens at what sometimes feels like a snail's pace. It takes the vision and dedication of people like Garth Gibson, Panasas founder and CTO, to bring momentous change to the industry by pushing the boundaries of technology.

Panasas was instrumental in driving the initial proposal for pNFS, and Garth Gibson has been a key contributor to the creation of the open standard for high performance computing. Listen to Garth Gibson talking at SC10 on the genesis of pNFS...

By Brent Welch on September 14, 2011

Traditional RAID is designed to protect whole disks with block-level redundancy. An array of disks is treated as a RAID group, or protection domain, that can tolerate one or more failures and still recover a failed disk using the redundancy encoded on the other drives. RAID recovery requires reading all the surviving blocks on the other disks in the RAID group to recompute the blocks lost on the failed disk. But while disks have increased in capacity by 40% to 100% per year, their bandwidth has not increased substantially.

Comparing models of drives from various vintages of...
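The capacity-versus-bandwidth scaling problem above can be sketched with a back-of-the-envelope calculation: a full-drive rebuild cannot finish faster than capacity divided by sustained bandwidth, so as capacity outpaces bandwidth, rebuild windows stretch. The drive figures below are illustrative assumptions for the purposes of the sketch, not measured specifications.

```python
# Back-of-the-envelope lower bound on RAID rebuild time: reconstructing a
# failed drive requires streaming roughly one full drive's worth of data,
# so rebuild time scales as capacity / sustained bandwidth.
# Drive figures are hypothetical examples, not vendor specs.

def rebuild_hours(capacity_gb: float, bandwidth_mb_s: float) -> float:
    """Hours to stream one full drive at sustained sequential bandwidth."""
    return (capacity_gb * 1000.0) / bandwidth_mb_s / 3600.0

drives = [
    # (label, capacity in GB, sustained MB/s) -- assumed values
    ("older drive",  250,  60),
    ("mid drive",    750,  90),
    ("newer drive", 3000, 130),
]

for label, cap_gb, bw in drives:
    print(f"{label}: {cap_gb} GB at {bw} MB/s -> {rebuild_hours(cap_gb, bw):.1f} h minimum")
```

Even with these rough numbers, a 12x capacity increase paired with only a ~2x bandwidth increase multiplies the minimum rebuild time severalfold, which is the core of the argument above.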

By Barbara Murphy on September 6, 2011

Panasas co-founder and CTO, Garth Gibson, talks about the evolution of cloud computing with parallel file systems and high performance storage. He discusses advances in data center manageability, with new tools to support a wider range of user needs across different operating systems and virtual machines. He points out that giving users the ability to better manage their own gear makes cloud computing especially compelling.

By Barbara Murphy on August 31, 2011
Categories: Parallel Data Storage

Storage accounts for about 20% of the available budget in a typical HPC system, with another 20% spent on software, 20% on services, and 40% on the compute cluster, according to market intelligence firm Intersect360 Research (WW HPC Total Market Model). The majority of system design time and effort is spent validating the performance of the compute cluster, with less attention paid to storage. But not all storage is the same. All too frequently, the investment and effort in building the best compute cluster yields disappointing results when storage is not given...

By Barbara Murphy on August 23, 2011
Categories: Exascale

Panasas captured headlines in 2008 when Los Alamos National Laboratory (LANL) deployed Panasas® ActiveStor® as the primary storage for Roadrunner, the world's first petascale supercomputer. Now, as institutions turn their sights to exascale computing, Panasas co-founder and CTO, Dr. Garth Gibson, explores the storage system capabilities that will be needed to deliver the required 70TB/s bandwidth while meeting the U.S. government’s 2018 target timeframe.

By Geoffrey Noer on August 13, 2011
Categories: pNFS

It all started more than ten years ago when Panasas founder and CTO, Garth Gibson, realized that there was a strong emerging need for something faster than NFS to satisfy the growing I/O performance requirements of HPC Linux clusters. In 2004, Panasas started shipping its first ActiveStor® scale-out NAS solution running the PanFS® storage operating system. Key to the product’s success is the Panasas DirectFlow® protocol, the storage industry’s first implementation of...

By Barbara Murphy on July 29, 2011
Categories: Parallel Data Storage

Panasas co-founder and CTO, Dr. Garth Gibson, talks about the early days at Panasas and how the company grew to be the premier provider of high performance parallel storage for technical computing applications and big data workloads. Garth discusses the PanFS® parallel file system, Panasas scale-out NAS appliances, and the future for competing file systems.

By Barbara Murphy on July 29, 2011

A clear sign that parallel computing is starting to go mainstream is that the general press has started to cover the topic. Recently The Economist ran an article on the need for parallelism for applications that run in multi-core environments.

The summary point is that few have reaped the benefits of the transition to multi-core technology. While virtualization has allowed multiple applications to run simultaneously on a single multi-core processor, few applications have fully...