Blog

By Admin on January 31, 2013

With nearly 8.5 petabytes of ActiveStor storage, the Panasas installation at Rutherford Appleton Laboratory (RAL) represents one of the largest multi-location, high-performance computing (HPC) storage deployments in Great Britain. Panasas ActiveStor gives RAL a solution that offers extreme scalability and simple storage management capabilities so that scientists can focus on important research, not on cumbersome system administration. RAL won the “Public Sector Storage Project of the Year” award for its work done with Panasas ActiveStor at last month’s Storage Virtualization and Cloud...

By Admin on January 29, 2013
Categories: Big Data Solutions

ATK Aerospace is the world’s top producer of solid rocket propulsion systems and a leading supplier of military and commercial aircraft structures. It built the systems that lifted the Space Shuttle, the Mars rover, and the Hubble Space Telescope into space. ATK engineers use Panasas ActiveStor to speed innovation and reduce cost in the company’s demanding research and product performance simulation processes.

Supported by its high performance computing (HPC) center used by multiple business units across 12 US locations, ATK’s engineers and scientists employ advanced computer-aided...

By Barbara Murphy on January 24, 2013
Categories: pNFS

Panasas founder and chief scientist, Dr. Garth Gibson, and Steve Conway, research vice president in IDC's high performance computing group, took some time recently to discuss HPC trends and the forces affecting storage in the HPC market. The role that pNFS will play in HPC storage is discussed in this first of three video chapters. The role of solid state technology in HPC installations and big data will be explored in upcoming installments.


Panasas and IDC pNFS discussion transcript:

Steve Conway: The first question I have for you is about pNFS....

By Geoffrey Noer on December 4, 2012

Now that we’ve talked about Solid State Drive (SSD) technology and what it is and isn’t good for, it’s worth talking about how and why SSD technology is used in our new flagship parallel storage solution, ActiveStor 14.

The heart of any scale-out storage system is ultimately the parallel file system that runs as part of its storage operating system. For Panasas ActiveStor, that operating system is called PanFS. Unlike most other storage systems, PanFS is an object storage system. Objects can best be thought of as sitting at a level of abstraction halfway between block storage and...
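To make the contrast concrete, here is a toy sketch of the two interfaces. This is purely illustrative and assumes nothing about the actual PanFS API: block storage exposes only fixed-size numbered slots, while object storage addresses variable-size objects by ID and lets each object carry its own attributes.

```python
class BlockDevice:
    """Block storage: fixed-size slots addressed by number; no metadata."""

    def __init__(self, block_size=4096, num_blocks=8):
        self.block_size = block_size
        self.blocks = [bytes(block_size) for _ in range(num_blocks)]

    def write(self, lba, data):
        # The device knows nothing about files or objects --
        # only that block `lba` now holds these bytes.
        assert len(data) == self.block_size
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks[lba]


class ObjectStore:
    """Object storage: variable-size objects addressed by ID, each
    carrying its own attributes (size, owner, layout hints, ...)."""

    def __init__(self):
        self.objects = {}

    def put(self, obj_id, data, **attrs):
        self.objects[obj_id] = {"data": data, "attrs": attrs}

    def get(self, obj_id):
        return self.objects[obj_id]["data"]

    def get_attrs(self, obj_id):
        # Unlike a block device, the store itself can answer
        # questions about what it holds.
        obj = self.objects[obj_id]
        return {"size": len(obj["data"]), **obj["attrs"]}
```

Because each object knows its own size and attributes, higher-level responsibilities (layout, security, capacity management) can be delegated to the storage device itself, which is the essence of the object abstraction sitting between blocks and files.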

By Barbara Murphy on November 28, 2012

I recently read a quote from the US Secretary of Energy, Steven Chu. He said: “The nation that leads the world in high-performance computing will have an enormous competitive advantage across a broad range of sectors.” I couldn’t agree more, so it was a shame to see so many national government laboratories absent from Supercomputing 2012. Sadly, many of the big national labs pulled out of the show, in part due to federal budget cutbacks, but also as a result of the scandal involving the General Services Administration. Not exactly the recipe for leading...

By Barbara Murphy on November 12, 2012
Categories: Big Data Solutions, Hadoop

A major theme here at Supercomputing 2012 is the use of Hadoop for big data analytics.  In partnership with Hortonworks, Panasas is demonstrating Hadoop running on our recently announced Panasas ActiveStor 14 parallel storage system at Supercomputing 2012, booth 3421.

The technology requirements of big data applications differ dramatically from those that legacy storage and compute technologies were built for. Big data workloads strain traditional storage architectures in three key ways: scale, bandwidth, and the volume and variety of unstructured data. Under the mantle of HPC, a new ecosystem had...

By Geoffrey Noer on November 6, 2012

This post is the second in a series on solid state storage and its use in parallel storage solutions.

How SSDs Work

Inside every SSD are two major components: a number of NAND flash memory chips and a controller. The number of memory chips determines the capacity of the drive. The controller, the “brain” of the SSD, is responsible for making the collection of NAND flash chips look like a fast HDD to the host system.

This is not easy.

In order to accomplish its job, the SSD controller must perform the following tasks:

Host...
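One way to see why the controller’s job is hard is to sketch its central bookkeeping task, the flash translation layer (FTL) that maps logical block addresses (LBAs) to physical flash pages. The class below is a hypothetical simplification, not any vendor’s design: real controllers also handle garbage collection, wear leveling, ECC, and bad-block management.

```python
class SimpleFTL:
    """Toy flash translation layer: LBA -> physical page mapping."""

    def __init__(self, num_pages):
        self.num_pages = num_pages
        self.mapping = {}     # logical block address -> physical page
        self.flash = {}       # physical page -> data
        self.next_free = 0    # next never-written page

    def write(self, lba, data):
        # NAND pages cannot be overwritten in place, so every write
        # lands on a fresh page and the map is redirected. The old
        # page becomes stale, awaiting garbage collection (omitted here).
        if self.next_free >= self.num_pages:
            raise IOError("out of free pages (GC not implemented)")
        page = self.next_free
        self.next_free += 1
        self.flash[page] = data
        self.mapping[lba] = page

    def read(self, lba):
        # The host sees a stable LBA; the controller chases the
        # indirection to wherever the data currently lives.
        return self.flash[self.mapping[lba]]
```

Note that rewriting the same LBA consumes a new physical page each time; this write amplification is exactly why controllers need garbage collection and wear leveling, and why making flash “look like a fast HDD” is not easy.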
By Barbara Murphy on October 29, 2012

Browsing one of my favorite channel-focused magazines, CRN, I read with some amusement the comments of Jay Kidd, senior vice-president of product strategy and development at NetApp. Kidd was highlighting one of the major new features of ONTAP 8.1.1 – the nine-year-long integration of Spinnaker Systems’ clustering capabilities. “In the past we brought Ethernet storage…to the mainstream,” Kidd said. “Now we are doing the same with clustering. We’re bringing it from the crazy world of high-performance computing to the world of commercial storage.”

...

By Geoffrey Noer on October 25, 2012
Categories: Solid State Storage

With the recent launch of the SSD-accelerated ActiveStor 14, we thought it would be interesting to share some of our thinking on SSDs and their use in a parallel storage solution. This post is the first in a series on solid state storage.

What is an SSD?

Solid State Drive (SSD) is an umbrella term for any device that behaves like a traditional Hard Disk Drive (HDD) but uses memory technology instead of a magnetic medium to record data. Interestingly, this is not a new concept: SSD products have been around since...

By Barbara Murphy on August 24, 2012

Utah State University (USU) is bucking a trend: many universities now leverage the high performance computing (HPC) facilities at national laboratories for research, paying on a per-use basis rather than maintaining their own HPC centers. Using national labs can eliminate ongoing capital equipment investment along with the associated operating and maintenance expenses, and it is a good way to gain experience in HPC. However, as system demand grows, planning research around available cycles on other organizations’ systems brings its own set of challenges...
