
Full set of Academic Research References

Many academic research projects, papers, and publications have used and cited Panasas PanFS. The following is a list of some of those projects.

Bjerknes Center for Climate Research

Bethke, I.; Bentsen, M.; Veldore, V. (2013, Jan.). Earth System Model Installation at TERI HPC: Norway – India Collaboration. The Energy and Resources Institute, TERI-NFA Working Paper No. 7.

University at Buffalo

Furlani, T.; Schneider, B.; Jones, M.; Towns, J.; Hard, D.; Patra, A.; DeLeon, R.; Gallo, S.; Lu, C-D.; Ghadersohi, A.; Gentner, R.; Bruno, A.; Boisseau, J.; Wang, F.; von Laszewski, G. (2012, Nov.). Data Analytics Driven Cyberinfrastructure Operations, Planning and Analysis Using XDMoD. Proceedings of the SC12 Conference.

Carnegie Mellon University

Patil, S.; Ren, K.; Gibson, G. (2012, Nov. 10). A Case for Scaling HPC Metadata Performance through De-specialization. 2012 SC Companion: High Performance Computing, Networking Storage and Analysis. doi: 10.1109/SC.Companion.2012.372

University of Central Florida

Mitchell, C.; Nunez, J.; Wang, J. (2009, Aug. 31). Overlapped checkpointing with hardware assist. 2009 IEEE International Conference on Cluster Computing and Workshops. doi: 10.1109/CLUSTR.2009.5289154

University of Cologne

Kawalia, A.; Motameny, S.; Wonczak, S.; Thiele, H.; Nieroda, L.; Jabbari, K.; Borowski, S.; Sinha, V.; Gunia, W.; Lang, U.; Achter, V.; Nuernberg, P. (2015, May 5). Leveraging the Power of High Performance Computing for Next Generation Sequencing Data Analysis: Tricks and Twists from a High Throughput Exome Workflow. PLoS One. 2015; 10(5): e0126321. doi: 10.1371/journal.pone.0126321

Cranfield University

Moulitsas, I.; Sharma, A. (2017, Sep.). MPI to Coarray Fortran: Experiences with a CFD Solver for Unstructured Meshes. Scientific Programming Journal, 2017. doi: 10.1155/2017/3409647

University of Electronic Science and Technology of China

Tang, A.; Gulbeden, A.; Zhou, J.; Strathearn, W.; Yang, T.; Chu, L. (2004, Nov.). The Panasas ActiveScale Storage Cluster – Delivering Scalable High Bandwidth Storage. SC ’04: Proceedings of the 2004 ACM/IEEE Conference on Supercomputing. doi: 10.1109/SC.2004.57

Florida State University

Shrum, D.C.; Woodruff, B.W.; Stagg, S.M. (2012, Jul. 25). Creating an Infrastructure for High-Throughput High-Resolution Cryogenic Electron Microscopy. Journal of Structural Biology. 2012 Oct; 180(1): 254-258. doi: 10.1016/j.jsb.2012.07.009

Georgia Tech University

Lofstead, G. (2010, Dec. 14). Extreme Scale Data Management in High Performance Computing. Georgia Tech University Library SMARTech.

Zheng, F.; Abbasi, H.; Docan, C.; Lofstead, J.; Liu, Q.; Klasky, S.; Parashar, M. (2010, Jan 25). PreDatA – Preparatory Data Analytics on Peta-Scale Machines. 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS). doi: 10.1109/IPDPS.2010.5470454

Haifa University

Nagle, D.; Factor, M.E.; Iren, S.; Naor, D.; Riedel, E.; Rodeh, O.; Satran, J. (2008, Jul). The ANSI T10 object-based storage standard and current implementations. IBM Journal of Research and Development, Volume: 52 Issue: 4.5. doi: 10.1147/rd.524.0401

Harvard University

Hermann, M.; Clunie, D.; Fedorov, A.; Doyle, S.; Pieper, S.; Klepeis, V.; Le, L.; Mutter, G.; Milstone; Schultz, T.; Kikinis, R.; Kotecha, G.; Hwang, D.; Andriole, K.; Iafrate, A.; Brink, J.; Boland, G.; Dreyer, K.; Michalski, M.; Golden, J.; Louis, D.; Lennerz, J. (2018, Nov. 2). Implementing the DICOM Standard for Digital Pathology. Journal of Pathology Informatics. 2018; 9: 37. doi: 10.4103/jpi.jpi_42_18

The Chinese University of Hong Kong

Chan, J.; Ding, Q.; Lee, P.; Chan, H. (2014, Feb). Parity Logging with Reserved Space: Towards Efficient Updates and Recovery in Erasure-coded Clustered Storage. FAST’14 Proceedings of the 12th USENIX Conference on File and Storage Technologies.

Irish Centre for High-End Computing (ICHEC)

Lysaght, M.; Wilson, N.; McHugh, E.; Browne, M.; Civario, G. (2013, Feb). Best Practice mini-guide “Stokes”. Partnership For Advanced Computing in Europe (PRACE).

University of Nebraska

Swanson, D. (2018, Aug. 29). Finding Subatomic Particles and Nanoscopic Gold with Open, Shared Computing. University of Kansas Journals.

Northwestern University

Ching, A.; Coloma, K.; Li, J.; Choudhary, A.; Liao, W-K. (2005, Jan.). High-Performance Techniques for Parallel I/O. Proceedings of the 2005 International Symposium on High-Performance Computer Architecture.

Coloma, K.; Choudhary, A.; Liao, W-K.; Ward, L.; Tideman, S. (2005, Jan.). DAChe: Direct Access Cache System for Parallel I/O. Proceedings of the 2005 International Symposium on High-Performance Computer Architecture.

Liao, W-K.; Coloma, K.; Choudhary, A.; Ward, L. (2001, Jan.). Cooperative Client-side File Caching for MPI Applications. Proceedings HPCA Seventh International Symposium on High-Performance Computer Architecture.

Ching, A.; Liao, W-K.; Choudhary, A.; Ross, R.; Ward, L. (2007, Nov. 10). Noncontiguous Locking Techniques for Parallel File Systems. SC ’07: Proceedings of the 2007 ACM/IEEE Conference on Supercomputing. doi: 10.1145/1362622.1362658

Notre Dame University

Tovar, B.; Lyons, B.; Mohrman, K.; Sly-Delgado, B.; Lannon, K.; Thain, D. (2022, Feb). Dynamic Task Shaping for High Throughput Data Analysis Applications in High Energy Physics. Proceedings International Symposium on Parallel and Distributed Processing (IPDPS) 2022.

Rice University

Grossman, M.; Araya-Polo, M. (2015, Oct. 30). Efficient Static and Dynamic Memory Management Techniques for Multi-GPU Systems. Runtime Systems for Extreme Scale Programming Models and Architectures (RESPA ’15).

Grossman, M.; Araya-Polo, M. (2015, Nov.). Distributed, Heterogeneous Scheduling Techniques Motivated by Production Geophysical Applications. 8th Workshop on Many-Task Computing on Clouds, Grids, and Supercomputers (MTAGS) 2015.

Moll, M.; Bryant, D.H.; Kavraki, L.E. (2010, Nov 11). The LabelHash algorithm for substructure matching. BMC Bioinformatics. 2010; 11: 555. doi: 10.1186/1471-2105-11-555

Stanford University

Jia, Z.; Treichler, S.; Shipman, G.; McCormick, P.; Aiken, A. (2018, May 2). Isometry: A Path-Based Distributed Data Transfer System. ICS ’18, June 12-15, 2018, Beijing, China. doi: 10.1145/3205289.3205301

Farhat, C. (2009, Aug 21). Parameterized Aeroelastic Reduced-Order Modeling of Fighters. Defense Technical Information Center. Accession Number: AD1026527

Tokyo Institute of Technology

Gomez, L.B.; Nukada, A.; Maruyama, N.; Cappello, F.; Matsuoka, S. (2010, Dec 16). Low-overhead checkpoint for large-scale GPU-accelerated systems. IPSJ SIG Technical Report. Vol.2010-ARC-102 No.12.

University of Cambridge

Calleja, P.; Posey, S.; Loewe, B. (2009, May). Cluster Scalability of Implicit and Implicit-Explicit LS-DYNA Simulations Using a Parallel File System. 7th European LS-DYNA Conference.

University College London

Langdon, W.B. (2012, Jun 12). Initial experiences of the Emerald: e-Infrastructure South GPU supercomputer. UCL Department of Computer Science Research Note RN/12/08.

Uppsala University

Lampa, S.; Dahloe, M.; Olason, P.I.; Hagberg, J.; Spjuth, O. (2013, Jun. 25). Lessons Learned from Implementing a National Infrastructure in Sweden for Storage and Analysis of Next-Generation Sequencing Data. Gigascience. 2013 Jun 25; 2(1): 9. doi: 10.1186/2047-217X-2-9

Utah State University

Horne, K. (2009, Dec). Parallelization of Performance Limiting Routines in the Computational Fluid Dynamics General Notation System Library. Utah State University Digital Commons.

Warsaw University of Technology

Pomeran, M.; Zlagnean, L.; Besutiu, L. (2019, Sep.). Tentative oblique subduction high resolution models, lead to reducing costs of HPCC usage. Conference Proceedings, 10th Congress of the Balkan Geophysical Society, Sep. 2019, Volume 2019, p. 1-5. doi: 10.3997/2214-4609.201902620