Industry | Computer Aided Engineering |
Challenge | An existing NFS storage system frequently hung under load and limited cluster productivity, hindering the compute performance behind this research organization’s work on turbulence-related performance and safety issues. Any replacement system had to be quick to install and easy to integrate |
Solution | Panasas ActiveStor™ scale-out NAS appliances, featuring the PanFS™ parallel file system and DirectFlow® protocol |
Results | Eliminated the storage I/O bottleneck, delivered an order-of-magnitude improvement in I/O performance, and kept the 164-node Nivation cluster running at full utilization |
The Stanford University Institute for Computational and Mathematical Engineering (ICME) is a multidisciplinary organization designed to develop advanced numerical simulation methodologies that will facilitate the design of complex, large-scale systems in which turbulence plays a controlling role. Researchers at the Institute use high-performance computing as part of a comprehensive research program to better predict flutter and limit cycle oscillations for modern aircraft, to better understand the impact of turbulent flow on jet engines, and to develop computational methodologies that can be expected to facilitate the design of naval systems with lower signature levels. Their research can lead to improvements in the performance and safety of aircraft, reduction of engine noise, and new underwater navigation systems.
Researchers at the Institute are using the 164-node Nivation Linux cluster to tackle large-scale simulations. The Institute has recently benefited from deploying Rocks software on several large-scale clusters in support of its work, including the Nivation cluster.
Rocks provides a collection of integrated software components that can be used to build, maintain, and operate a cluster. Its core functions include installing the Linux operating system, configuring the compute nodes for seamless integration, and managing the cluster.
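As a rough illustration of the kind of day-to-day administration Rocks enables, the hypothetical Python sketch below shells out to the frontend’s `rocks list host` command to enumerate the nodes the cluster manages. The command is standard on Rocks frontends, but the exact output columns vary between releases, so this is a sketch of the workflow rather than a supported API.

```python
# Hypothetical sketch (not from the case study): enumerate the hosts managed by
# a Rocks frontend by shelling out to its `rocks list host` command.
# Assumes the script runs on the frontend with the `rocks` tool on PATH;
# output columns differ between Rocks releases, so only the hostname is parsed.
import subprocess

def rocks_hosts():
    """Return host names reported by `rocks list host` (column header skipped)."""
    out = subprocess.run(
        ["rocks", "list", "host"],
        capture_output=True, text=True, check=True,
    ).stdout
    hosts = []
    for line in out.splitlines()[1:]:              # first line is the header row
        if line.strip():
            hosts.append(line.split()[0].rstrip(":"))  # e.g. "compute-0-17:"
    return hosts

if __name__ == "__main__":
    for name in rocks_hosts():
        print(name)
```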
While deployment of Rocks software helped the Institute maximize the compute power of the Nivation cluster, an existing storage solution hindered overall system performance. Behind the Nivation cluster, an NFS server was initially deployed to store and retrieve data. At first, the solution appeared capable of supporting the number of cluster nodes and their associated data; however, it quickly became apparent that it could not meet the load requirements of the system. As a result, the storage system often hung and limited the productivity of the cluster. “It’s imperative that our clusters be fully operational at all times,” said Steve Jones, Technology Operations Manager at the Institute. “The productivity of our organization is dependent upon each cluster running at peak optimization.”
Adding a second NFS server was an initial option, but it was quickly dismissed because of poor scalability and increased management burden. The Institute needed a solution that could scale in both capacity and the number of cluster nodes supported while providing exceptional random I/O performance. “High performance is critical to our success at the Institute,” said Jones. “We needed a storage solution that would allow our cluster to maximize its CPU capabilities.” Finally, since Jones supports several clusters by himself, ease of installation and management was essential. A critical goal was to install the storage system quickly and begin moving data immediately.
The Institute selected Panasas ActiveStor to eliminate the I/O bottleneck hindering overall performance of the Nivation cluster. Because ActiveStor is delivered as an integrated software/hardware solution, it was set up and in production within a matter of hours. This was a key benefit for Steve Jones. “We don’t have a huge team and weeks of time to integrate discrete systems,” said Jones. “We do science here, not IT.” Panasas ActiveStor enables quick deployment by combining a next-generation distributed file system with an integrated hardware platform tuned to deliver outstanding performance.
In order to achieve high data bandwidth, the Institute installed the Panasas DirectFlow® protocol on each node in the cluster. The direct node-to-disk access offered by the DirectFlow protocol allowed the Institute to achieve an immediate order of magnitude improvement in performance. “By leveraging the object-based architecture, we were able to completely eliminate our storage I/O bottleneck,” said Jones.
The Panasas out-of-band DirectFlow data path moves file system capabilities to Linux compute clusters, enabling direct and highly parallel data access between the nodes and the Panasas storage. This achieves up to a 30X increase in data throughput for the most demanding applications.
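To make the architectural point concrete, the minimal sketch below shows each compute node writing its own output file directly to a shared parallel-file-system mount, so aggregate I/O scales with the number of nodes instead of funneling through a single NFS server. This is not Panasas code: the mount point, environment variable, and per-node file naming are illustrative assumptions only.

```python
# Minimal sketch (illustrative, not Panasas code): each compute node writes its
# own result file directly to a shared parallel-file-system mount, so writes
# from many nodes proceed in parallel rather than serializing on one server.
# SCRATCH_DIR and the per-node naming scheme are assumptions for this example.
import os
import socket

MOUNT_POINT = os.environ.get("SCRATCH_DIR", "/panfs/scratch")  # assumed mount

def write_node_results(data: bytes) -> str:
    """Write this node's simulation output to its own file on shared storage."""
    node_name = socket.gethostname()          # e.g. compute-0-17 on a Rocks cluster
    out_path = os.path.join(MOUNT_POINT, f"results-{node_name}.bin")
    with open(out_path, "wb") as fh:          # each node holds its own file handle,
        fh.write(data)                        # so nodes write concurrently
    return out_path

if __name__ == "__main__":
    print(write_node_results(b"\x00" * 1024 * 1024))  # 1 MiB of dummy output
```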
Since installing Panasas ActiveStor, the Institute is able to maximize the productivity of the Nivation cluster, consistently running at 100% utilization. This allows the Institute to deliver accelerated results to their customers. “Our user community is dependent upon our clusters to deliver results as quickly as possible,” said Jones. Panasas has been a key ingredient in maximizing the research capabilities of the Nivation cluster.
Additionally, the ease of installation and management of the system cannot be overlooked. Jones independently manages several clusters, so a solution that is simple to deploy and administer is of critical importance. Finally, the direct node-to-disk access offered by the DirectFlow protocol provides a wealth of long-term architectural opportunities for the Institute. “Previously, we had to limit the growth of our clusters because of I/O issues,” said Jones. “With the object-based architecture, we are empowered to build the largest, fastest clusters possible. We now have a shared storage resource that can scale in both capacity and performance.”