Big data is a term applied to a new generation of software, applications, and systems and storage architectures, all designed to derive business value from unstructured data. Big data sets require advanced tools, software, and systems to capture, store, manage, and analyze the data, all within a timeframe that preserves its intrinsic value.
The use of big data to create value has been established for more than a decade in advanced research laboratories and engineering research centers within large corporations. Traditionally termed HPC (high performance computing), this work produced significant advances in nuclear research, mineral exploration, materials science, and computer modeling.
The term big data is now applied more broadly to commercial environments, thanks to massive technology advances: faster and cheaper processing power, clustered computing, lower-cost SATA storage, and network performance improvements that enable companies to perform computing tasks that previously required highly sophisticated and costly computing systems.
Four distinct application segments comprise the big data market, each with varying needs for performance and scalability: 1) Design (engineering collaboration), 2) Discover (core simulation, supplanting physical experimentation), 3) Decide (analytics), and 4) Deposit (Web 2.0 and data warehousing). Panasas ActiveStor scale-out NAS systems, together with PanFS, the Panasas high-performance parallel file system, are designed specifically to address the Design and Discover segments.