High-Performance Computing

At Pacific Northwest National Laboratory (PNNL), high-performance computing (HPC) encompasses multiple research areas and affects both computer science and a broad array of domain sciences.

PNNL provides science, technologies, and leadership for creating and enabling new computational capabilities to solve challenges using extreme-scale simulation, data analytics, and machine learning. We deliver the computer science, mathematics, computer architecture, and algorithmic advances that enable integration of extreme-scale modeling and simulation with knowledge discovery and model inference from petabytes of data.

Our research covers many areas, including advanced computer architectures, system software and runtime systems, performance modeling and analysis, quantum computing, high-performance data analytics, and machine learning techniques at scale. Our integrated computational tools enable domain science researchers to analyze, model, simulate, and predict complex phenomena in areas ranging from molecular, biological, subsurface, and climate sciences to complex networked systems. For example, the Scalable High-Performance Algorithms and Data-Structures library—or SHAD—provides scalable, high-performance data structures that support different application domains, including graph processing, machine learning, and data mining.
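SHAD itself is a C++ library, and the class and method names below are invented for illustration; this Python toy only sketches the underlying idea of a data structure whose contents are hash-partitioned across a fixed set of localities, so that inserts and lookups route to the shard that owns each key:

```python
class ShardedMap:
    """Toy map whose keys are hash-partitioned across fixed 'localities'.

    Illustrative only -- not the SHAD API. In a real distributed setting
    each shard would live on a different node and be reached via one-sided
    communication rather than a local list index.
    """

    def __init__(self, num_localities):
        self.shards = [{} for _ in range(num_localities)]

    def _shard(self, key):
        # Route every operation on `key` to the shard that owns it.
        return self.shards[hash(key) % len(self.shards)]

    def insert(self, key, value):
        self._shard(key)[key] = value

    def lookup(self, key):
        return self._shard(key).get(key)


m = ShardedMap(num_localities=4)
m.insert("edge:1-2", 1.0)
print(m.lookup("edge:1-2"))  # 1.0
```

Partitioning by key hash is what lets such structures scale: each locality holds only its share of the data, and any process can compute an element's owner without coordination.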

We have recognized expertise in evaluating and predicting the capabilities of both current and emerging large-scale system architectures. The Center for Advanced Technology Evaluation (CENATE) project evaluates emerging hardware technologies for use in future systems, exploring both the performance and security ramifications of novel architectural designs and features. CENATE uses both small-scale testbeds and predictive performance modeling to explore system scales that are currently unavailable.
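One common ingredient of such predictive modeling is the roofline model, which bounds a kernel's attainable throughput by the lesser of a machine's compute peak and what its memory bandwidth can feed. The peak numbers below are made-up placeholders for a hypothetical node, not measurements of any real system:

```python
def attainable_gflops(peak_gflops, mem_bw_gbs, arith_intensity):
    """Roofline bound: attainable = min(compute peak, bandwidth * intensity).

    arith_intensity is flops performed per byte moved from memory.
    """
    return min(peak_gflops, mem_bw_gbs * arith_intensity)


# A memory-bound kernel (0.25 flop/byte) on a hypothetical 2 TFLOP/s node
# with 400 GB/s of memory bandwidth is capped by bandwidth, not compute:
print(attainable_gflops(peak_gflops=2000, mem_bw_gbs=400, arith_intensity=0.25))  # 100.0
```

Models like this let researchers estimate how a workload would behave on hardware that does not yet exist, which is exactly the regime where testbeds alone cannot reach.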

Our researchers also lead efforts to prepare the U.S. Department of Energy for the upcoming exascale computing era. We are developing software tools such as the Global Arrays Toolkit, which provides a high-level, easy-to-use programming model with abstractions suitable for the science domains it targets. We are also innovating in areas of data-model convergence, charting a new path in integrating elements of HPC with data analytics to enable new scientific discoveries and computational capabilities.
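The central abstraction behind the Global Arrays Toolkit is a globally addressed array whose blocks are physically distributed across processes, accessed with one-sided put/get operations. The real library is a C/Fortran API over MPI; the class and method names in this Python toy are invented purely to sketch that global-address-space idea:

```python
class GlobalArray:
    """Toy 1-D array block-distributed over `nprocs` owners, addressed globally.

    Illustrative only -- not the Global Arrays API. In the real toolkit the
    blocks live in separate process address spaces and put/get are one-sided
    remote-memory operations.
    """

    def __init__(self, length, nprocs):
        self.block = -(-length // nprocs)  # ceiling division: block size per owner
        self.blocks = [[0.0] * self.block for _ in range(nprocs)]
        self.length = length

    def put(self, i, value):
        # One-sided write: the caller computes the owner; no owner-side code runs.
        self.blocks[i // self.block][i % self.block] = value

    def get(self, i):
        # One-sided read from whichever block owns global index i.
        return self.blocks[i // self.block][i % self.block]


ga = GlobalArray(length=10, nprocs=4)
ga.put(7, 3.5)
print(ga.get(7))  # 3.5
```

The appeal of this model for domain scientists is that code addresses the array by global index, as if it were shared memory, while the runtime handles data placement and communication.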