By Geoffrey Noer on December 4, 2012 - 12:06pm

Now that we’ve talked about Solid State Disk (SSD) technology and what it’s good and not good for, it’s worth talking about how and why SSD technology is used in our new flagship parallel storage solution, ActiveStor 14.

The heart of any scale-out storage system is ultimately the parallel file system that runs as part of its storage operating system. For Panasas ActiveStor, that operating system is called PanFS. Unlike most other storage systems, PanFS is an object storage system. Objects are best thought of as sitting at a level of abstraction halfway between block storage and...
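To make the block/object contrast concrete, here is an illustrative sketch (hypothetical code, not PanFS itself): a block device exposes fixed-size, numbered sectors and knows nothing about the data, while an object store handles variable-size objects that carry their own attributes, giving the storage layer room to make smarter layout decisions.

```python
class BlockDevice:
    """Blocks are fixed-size and addressed by number; the device has no
    idea what the data means or how blocks relate to each other."""
    BLOCK_SIZE = 512

    def __init__(self, num_blocks):
        self.blocks = [bytes(self.BLOCK_SIZE)] * num_blocks

    def write(self, lba, data):
        assert len(data) == self.BLOCK_SIZE  # must be exactly one block
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks[lba]


class ObjectStore:
    """Objects are variable-size and carry attributes, so the storage
    device can reason about the data it holds."""

    def __init__(self):
        self.objects = {}

    def put(self, object_id, data, attrs=None):
        self.objects[object_id] = {"data": data, "attrs": attrs or {}}

    def get(self, object_id):
        return self.objects[object_id]["data"]
```

The names and interfaces here are invented for illustration; the point is only that the object interface sits above raw blocks but below a full file system.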

By Barbara Murphy on November 28, 2012 - 6:59pm

I recently read a quote from the U.S. Secretary of Energy, Steven Chu. He said: "The nation that leads the world in high-performance computing will have an enormous competitive advantage across a broad range of sectors." I couldn't agree more, so it was a shame to experience the absence of so many national government laboratories at Supercomputing 2012. Sadly, many of the big national labs pulled out of the show, in part due to federal budget cutbacks, but also as a result of the scandal involving the General Services Administration. Not exactly the recipe for leading...

By Barbara Murphy on November 12, 2012 - 11:02pm
Categories: Big Data, Hadoop

A major theme here at Supercomputing 2012 is the use of Hadoop for big data analytics.  In partnership with Hortonworks, Panasas is demonstrating Hadoop running on our recently announced Panasas ActiveStor 14 parallel storage system at Supercomputing 2012, booth 3421.

Big data applications place demands on legacy storage and compute technologies that those technologies were never designed to meet. Big data workloads strain traditional storage architectures in three key ways: scale, bandwidth, and the volume and variety of unstructured data. Under the mantle of HPC, a new ecosystem had...

By Geoffrey Noer on November 6, 2012 - 6:05pm

This post is the second in a series of many on solid state storage and its use in parallel storage solutions.

How SSDs Work

Inside all SSDs are two major components: a number of NAND flash memory chips and a controller. The number of memory chips determines the capacity of the drive. The controller, the "brain" of the SSD, is responsible for making the collection of NAND flash chips look like a fast HDD to the host system.

This is not easy.

In order to accomplish its job, the SSD controller must perform the following tasks:
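One of the controller's core tasks is logical-to-physical address mapping: NAND pages cannot be overwritten in place, so every logical write is redirected to a fresh physical page and the mapping is updated. Here is a minimal sketch of that indirection (hypothetical code, not any real controller firmware; real flash translation layers also handle wear leveling, garbage collection, and error correction):

```python
class SimpleFTL:
    """Toy flash translation layer showing only the address indirection."""

    def __init__(self, num_pages):
        self.flash = [None] * num_pages   # physical NAND pages
        self.mapping = {}                 # logical page -> physical page
        self.next_free = 0                # naive bump allocator for free pages

    def write(self, logical_page, data):
        # NAND can't be rewritten in place: allocate a fresh physical page,
        # write there, and update the mapping. The previously mapped page
        # becomes stale and would later be reclaimed by garbage collection.
        physical = self.next_free
        self.next_free += 1
        self.flash[physical] = data
        self.mapping[logical_page] = physical

    def read(self, logical_page):
        # Reads always go through the mapping to the current physical page.
        return self.flash[self.mapping[logical_page]]
```

Overwriting the same logical page twice leaves the first copy stale in flash, which is exactly why the controller must also garbage-collect and wear-level behind the scenes.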

By Barbara Murphy on October 29, 2012 - 6:05pm

Browsing one of my favorite channel-focused magazines, CRN, I read with some amusement the comments of Jay Kidd, senior vice president of product strategy and development at NetApp. Kidd was highlighting one of the major new features of ONTAP 8.1.1 – the nine-year-long integration of Spinnaker Networks' clustering capabilities. "In the past we brought Ethernet storage…to the mainstream," Kidd said. "Now we are doing the same with clustering. We're bringing it from the crazy world of high-performance computing to the world of commercial storage."


By Geoffrey Noer on October 25, 2012 - 5:21pm

With the recent launch of the SSD-accelerated ActiveStor 14, we thought it would be interesting to share some of our thoughts on SSDs and their use in a parallel storage solution. This post is the first in a series of many on solid state storage.

What is an SSD?

"Solid State Drive" (SSD) is an umbrella term for any device that behaves like a traditional Hard Disk Drive (HDD) but uses memory technology instead of a magnetic medium to record data. It is interesting to note that this is actually not a new concept, as SSD products have been around since...

By Barbara Murphy on August 24, 2012 - 1:39am

Utah State University (USU) is bucking a trend: many universities now leverage the high performance computing (HPC) facilities at national laboratories for research, paying on a per-use basis rather than maintaining their own HPC centers. Using national labs can eliminate ongoing capital equipment investment along with the associated operating and maintenance expenses, and it is a good way to gain experience in HPC. However, as demand grows, the work of planning research around available cycles on other institutions' systems brings its own set of challenges...

By Admin on July 31, 2012 - 5:17pm
Categories: Exascale

Intel’s recent acquisition of Whamcloud occurred with little media fanfare.  Whamcloud is the main development arm for the open source Lustre file system, and provides support contracts for existing Lustre installations.  On the surface this sounds like an unusual move for Intel, given that it has traditionally maintained a neutral position on file systems and is not in the support business. However, it should be noted that Intel has been keenly interested in parallel file systems for a long time and has published several papers on the use of high performance computing (HPC) and...

By Barbara Murphy on June 28, 2012 - 9:13pm

I recently attended a panel event at the Churchill Club in Silicon Valley; "The Elephant in the Enterprise: What Role Will Hadoop Play?" certainly packed the house. The panel included users and vendors from Facebook, Cloudera, Oracle, MapR, and Metamarkets, each providing a very different perspective on Hadoop's readiness for the enterprise. It was interesting to compare the user perspective, "…it is no way ready for primetime," with the vendor perspective, "…of course it's ready!"

On the user side, Jay...

By Barbara Murphy on June 11, 2012 - 10:35pm
Categories: Big Data, Parallel Storage

Last year I wrote a blog post showing how the lack of parallelism in computing was limiting innovation. My post was inspired by a technology article I read in the Economist – The Economist continues to be a trendsetter, taking on new topics that highlight a revolution in the IT industry. The May 19th issue takes on a new hot topic with a special report on big data in finance and banking – ...