Blog

By Admin on October 20, 2011 - 8:31pm
Categories: Big Data, Cloud Computing

Analyst firm Intersect360 Research has released a new white paper, “Solving Big Data Problems with Private Cloud Storage,” in which it describes Panasas as well positioned to provide storage for HPC private clouds. The paper explores the challenges that make public clouds less attractive than private cloud implementations for HPC applications. HPC big data applications with very high data-to-compute ratios are challenged in a public...
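To see why a high data-to-compute ratio works against public clouds, consider the time it takes just to move a dataset over a WAN link before any computing starts. Here is a minimal back-of-the-envelope sketch in Python; the dataset and link figures are hypothetical assumptions, not numbers from the paper:

```python
# Back-of-the-envelope: time to move a big dataset into a public cloud
# over a WAN link. All figures below are illustrative assumptions,
# not numbers from the Intersect360 paper.

def transfer_days(dataset_tb: float, link_gbps: float) -> float:
    """Days needed to move dataset_tb terabytes over a link_gbps link."""
    bits = dataset_tb * 1e12 * 8          # dataset size in bits
    seconds = bits / (link_gbps * 1e9)    # ideal, zero-overhead transfer
    return seconds / 86400

dataset_tb = 100                          # hypothetical working set
print(f"1 Gb/s WAN:  ~{transfer_days(dataset_tb, 1):.1f} days")
print(f"10 Gb/s WAN: ~{transfer_days(dataset_tb, 10):.1f} days")
# ~9.3 days at 1 Gb/s and ~0.9 days at 10 Gb/s -- before the first
# compute cycle runs, which is why data-heavy HPC workloads favor
# keeping storage next to the cluster in a private cloud.
```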

By Geoffrey Noer on October 19, 2011 - 8:56pm
Categories: Big Data, Scale-out NAS

Over the last ten years, the HPC market has been transformed by the widespread adoption of compute clusters for technical computing workloads, accompanied by high growth rates in the generation of unstructured data. The storage industry has responded by transitioning from legacy NAS architectures to modern scale-out NAS. Unsurprisingly, high performance computing has led the way in demanding the most from scale-out products.

In the following slidecast I did for InsideHPC, I present some of the history behind scale-out NAS, and discuss how...

By Brent Welch on October 11, 2011 - 10:32pm

In my last post I talked about how the Panasas parallel file system (PanFS) achieves extreme performance for big data sets. It also provides redundancy without the need for hardware RAID controllers. In the attached video, Garth Gibson, Panasas founder and CTO, digs deep into the specifics of file system RAID and how PanFS delivers redundancy as part of the file system itself. He compares this innovative approach with that of most other parallel file systems, which push...
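As a rough illustration of the general idea only (a conceptual sketch, not the actual PanFS design), a file system can stripe each file's data across storage nodes and compute parity in software, so a single lost stripe unit is rebuilt by XOR with no RAID controller involved:

```python
# Toy illustration of file-level software RAID: stripe a file's data
# across N units plus one XOR parity unit, so the file system itself
# (no hardware RAID controller) can rebuild any single lost unit.
# This is a conceptual sketch, not the actual PanFS implementation.

from functools import reduce

def stripe_with_parity(data: bytes, n_units: int):
    """Split data into n_units equal stripe units plus one XOR parity unit."""
    unit_len = -(-len(data) // n_units)               # ceiling division
    padded = data.ljust(unit_len * n_units, b"\x00")  # pad the last unit
    units = [padded[i*unit_len:(i+1)*unit_len] for i in range(n_units)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), units)
    return units, parity

def rebuild(units, parity, lost: int):
    """Recover the lost stripe unit by XOR-ing parity with the survivors."""
    survivors = [u for i, u in enumerate(units) if i != lost] + [parity]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)

units, parity = stripe_with_parity(b"files striped across storage nodes", 4)
assert rebuild(units, parity, lost=2) == units[2]  # single failure recovered
```

Because parity here belongs to the file rather than to a whole disk, a rebuild only has to touch the data of affected files, which is the key contrast with block-level RAID.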

By Brent Welch on October 7, 2011 - 9:08pm

In 2008, the first computer broke the petascale barrier, exceeding 10¹⁵ operations per second. The Panasas parallel file system was instrumental in accomplishing this major milestone, achieving extreme performance for big data sets. Los Alamos National Laboratory (LANL) built that first petascale system, “Roadrunner,” using the Panasas parallel file system to maximize I/O performance for the most demanding simulation and modeling applications. Roadrunner remains one of the Top 10 supercomputing systems in...

By Barbara Murphy on September 27, 2011 - 6:07pm
Categories: pNFS

Those of us who have been around the storage industry for a long time know that standards adoption happens at what sometimes feels like a snail’s pace. It takes the vision and dedication of people like Garth Gibson, Panasas founder and CTO, to bring momentous change to the industry by pushing the boundaries of technology.

Panasas was instrumental in driving the initial proposal for pNFS, and Garth Gibson has been a key contributor to the creation of this open standard for high performance computing. Take a listen to Garth Gibson talking at SC10 about the genesis of pNFS...

By Brent Welch on September 14, 2011 - 6:59pm

Traditional RAID is designed to protect whole disks with block-level redundancy. An array of disks is treated as a RAID group, or protection domain, that can tolerate one or more failures and still recover a failed disk from the redundancy encoded on the other drives. RAID recovery requires reading all the surviving blocks on the other disks in the RAID group to recompute the blocks lost on the failed disk. But while disk capacity has increased by 40% to 100% per year, disk bandwidth has not increased substantially.
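The consequence is easy to quantify: a rebuild must read and write on the order of a full drive, so rebuild time is roughly capacity divided by sustained bandwidth, and when capacity doubles while bandwidth barely moves, the rebuild window stretches accordingly. A quick sketch with hypothetical drive figures (the specific numbers are assumptions, not measurements from this post):

```python
# Rough rebuild-time model: a traditional RAID rebuild moves on the
# order of a full drive's worth of data, so rebuild time is roughly
# capacity / sustained bandwidth. Drive figures below are illustrative
# assumptions, not measured data.

drives = [
    # (label,           capacity_gb, sustained_mb_per_s)
    ("2001-era drive",           80,   40),
    ("2006-era drive",          500,   70),
    ("2011-era drive",         3000,  120),
]

for label, cap_gb, bw_mbs in drives:
    hours = (cap_gb * 1000) / bw_mbs / 3600  # GB -> MB, seconds -> hours
    print(f"{label}: {cap_gb:>5} GB at {bw_mbs} MB/s -> ~{hours:.1f} h rebuild")

# Across these examples capacity grew ~37x while bandwidth grew only
# ~3x, so the minimum rebuild window grew by an order of magnitude --
# and with it, the window in which a second failure can lose data.
```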

Comparing models of drives from various vintages of...

By Barbara Murphy on September 6, 2011 - 11:18pm

Panasas co-founder and CTO Garth Gibson talks about the evolution of cloud computing with parallel file systems and high performance storage. He discusses advances in data center manageability, with new tools to support a wider range of user needs across different operating systems and virtual machines. He points out that the prospect of users managing their own gear more effectively is what makes cloud computing especially compelling.

By Barbara Murphy on August 31, 2011 - 6:29pm
Categories: Parallel Storage

Storage accounts for about 20% of the budget in a typical HPC system, with another 20% spent on software, 20% on services, and 40% on the compute cluster, according to market intelligence firm Intersect360 Research (WW HPC Total Market Model). The majority of system design time and effort goes into validating the performance of the compute cluster, with far less attention paid to storage. But not all storage is the same. All too frequently, the investment and effort in building the best compute cluster yields disappointing results when storage is not given...

By Barbara Murphy on August 23, 2011 - 4:06pm
Categories: Exascale

Panasas captured headlines in 2008 when Los Alamos National Laboratory (LANL) deployed Panasas® ActiveStor™ as the primary storage for Roadrunner, the world's first petascale supercomputer. Now, as institutions turn their sights to exascale computing, Panasas co-founder and CTO, Dr. Garth Gibson, explores the storage system capabilities that will be needed to deliver the required 70 TB/s of bandwidth while meeting the U.S. government’s 2018 target timeframe.
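To put 70 TB/s in perspective, here is a quick sketch of how many conventional storage devices would have to stream in parallel to sustain that rate; the per-device bandwidth figures are assumptions for illustration only:

```python
# How many devices must stream in parallel to sustain 70 TB/s?
# Per-device bandwidth figures below are illustrative assumptions.

target_tb_s = 70
target_mb_s = target_tb_s * 1e6           # 70 TB/s expressed in MB/s

for label, per_device_mb_s in [("HDD at 150 MB/s", 150),
                               ("SSD at 500 MB/s", 500)]:
    n = target_mb_s / per_device_mb_s
    print(f"{label}: ~{n:,.0f} devices streaming concurrently")

# Roughly 467,000 disks (or 140,000 SSDs) of aggregate streaming
# bandwidth, all coordinated by the file system -- exascale storage
# is a parallelism problem as much as a capacity problem.
```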

By Geoffrey Noer on August 13, 2011 - 12:06am
Categories: pNFS

It all started more than ten years ago, when Panasas founder and CTO Garth Gibson realized that there was a strong emerging need for something faster than NFS to satisfy the growing I/O performance requirements of HPC Linux clusters. In 2004, Panasas began shipping its first ActiveStor™ scale-out NAS solution running the PanFS™ storage operating system. Key to the product’s success is the Panasas DirectFlow® protocol, the storage industry’s first...
