HPC

  • What is it?
    High Performance Computing, or HPC, refers to the ability to process data and perform calculations at high speeds, especially on systems that operate above a trillion floating point operations per second (teraFLOPS). The world's leading supercomputers operate on the scale of petaFLOPS (quadrillion floating point operations per second), whereas the next goalpost is “exascale computing”, which refers to systems that exceed a quintillion floating point operations per second (exaFLOPS).
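
    To put these scales in perspective, a machine's theoretical peak throughput can be estimated as cores × clock frequency × floating point operations per cycle. The short Python sketch below illustrates this rule of thumb; the core count, clock speed, and FLOPs-per-cycle figures are illustrative assumptions, not the specifications of any particular system.

    # Rough estimate of theoretical peak FLOPS for a hypothetical machine.
    # All hardware figures below are illustrative assumptions.
    def peak_flops(num_cores, clock_hz, flops_per_cycle):
        """Theoretical peak = cores x clock frequency x FLOPs per core per cycle."""
        return num_cores * clock_hz * flops_per_cycle

    # A hypothetical 64-core CPU at 2.5 GHz doing 16 double-precision
    # FLOPs per cycle (e.g. two 256-bit fused multiply-add units).
    cpu = peak_flops(64, 2.5e9, 16)
    print(f"One CPU: {cpu / 1e12:.2f} teraFLOPS")       # ~2.56 teraFLOPS

    # How many such CPUs would a petaFLOPS or an exaFLOPS system need?
    print(f"CPUs per petaFLOPS: {1e15 / cpu:,.0f}")     # ~391
    print(f"CPUs per exaFLOPS:  {1e18 / cpu:,.0f}")     # ~390,625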

    To achieve this level of performance, parallel computing across a large number of CPUs or GPUs is required. One common type of HPC solution is the computing cluster, which aggregates the computing power of multiple computers (referred to as "nodes") into a single group. A cluster can deliver much higher performance than a single computer, because the individual nodes work together on problems too large for any one computer to solve efficiently.
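
    The divide-and-conquer principle behind clustering can be sketched in a few lines of Python. The example below splits a large summation across worker processes on one machine; a real cluster would distribute the same kind of chunks across physical nodes using a framework such as MPI. The problem size and worker count are arbitrary values chosen for illustration.

    # Split one large job into chunks and let independent workers
    # (stand-ins for cluster "nodes") process them in parallel.
    from multiprocessing import Pool

    def partial_sum(bounds):
        """Each worker handles one slice of the overall problem."""
        start, stop = bounds
        return sum(i * i for i in range(start, stop))

    if __name__ == "__main__":
        n, workers = 10_000_000, 8          # assumed problem size and node count
        step = n // workers
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]
        with Pool(workers) as pool:         # one process per "node"
            total = sum(pool.map(partial_sum, chunks))
        print(total)                        # matches the serial result, computed in parallel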

  • Why do you need it?
    Acquiring HPC capabilities is important for your organization, whether through a computing cluster or a high-end mainframe computer, because HPC can solve problems in science, engineering, and business that are impractical on conventional computers. Examples include:

    Science: HPC is used by scientists at universities and research institutes to understand the formation of our universe, conduct research into particle physics, or simulate and predict climate and weather patterns.

    Media & Entertainment: HPC solutions such as render farms can be used to render animations and special effects, edit feature films, or livestream special events globally.

    Artificial Intelligence: One of the most popular HPC workloads today is machine learning, which is used to develop a myriad of artificial intelligence applications, such as self-driving vehicles, facial recognition software, speech recognition and translation, or drone technology.

    Oil and Gas: HPC is used to process data such as satellite images, ocean floor sonar readings, etc., in order to identify potential new deposits of oil or gas.

    Financial Services: HPC is used to track stock trends and power algorithmic trading, or to analyze transaction patterns to detect fraudulent activity.

    Medicine: HPC is used to help develop cures for diseases like diabetes, or to enable faster and more accurate diagnosis methods, such as cancer screening techniques.

  • How is GIGABYTE helpful?
    GIGABYTE's H-Series High Density Servers and G-Series GPU Servers are designed especially for HPC applications, since they concentrate a large amount of computing power into a 1U, 2U, or 4U server chassis. The servers can be linked into a cluster via interconnects such as Ethernet, InfiniBand, or Omni-Path. An example of an HPC server solution is the GIGABYTE H262 Series equipped with AMD EPYC™ 7002 Series processors, which can feature up to 512 cores / 1,024 threads of computing power (two 64-core AMD EPYC™ CPUs per node, four nodes per system) in a single 2U chassis. By populating a full 42U server rack with twenty of these systems (leaving 2U of space for networking switches), the user will be able to utilize up to 10,240 cores and 20,480 threads: a massive amount of computing power.
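
    The rack-level arithmetic can be verified with a short calculation. The sketch below simply reproduces the figures quoted above; the 2U of rack space reserved for switches is our assumption, implied by the totals.

    # Reproducing the rack-level math for the H262 example above.
    cores_per_cpu    = 64    # AMD EPYC(TM) 7002 Series CPU
    cpus_per_node    = 2
    nodes_per_system = 4     # four nodes per 2U H262 chassis
    chassis_height_u = 2
    rack_height_u    = 42
    switch_space_u   = 2     # assumption: reserve 2U for networking switches

    cores_per_system = cores_per_cpu * cpus_per_node * nodes_per_system
    systems_per_rack = (rack_height_u - switch_space_u) // chassis_height_u
    total_cores      = cores_per_system * systems_per_rack
    total_threads    = total_cores * 2   # two threads per core with SMT

    print(f"{cores_per_system} cores per 2U system")                   # 512
    print(f"{systems_per_rack} systems per rack")                      # 20
    print(f"{total_cores} cores / {total_threads} threads per rack")   # 10240 / 20480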

  • RELATED ARTICLES
    To Empower Scientific Study, NTNU Opens Center for Cloud Computing
    High performance computing has a critical role to play in modern-day scientific research. The College of Science at National Taiwan Normal University anticipated the importance and rapid development of HPC. It purchased GIGABYTE servers to establish the Center for Cloud Computing on its campus, with an eye towards completing research projects more quickly and cultivating professionally trained experts in the field.
    How to Build Your Data Center with GIGABYTE? A Free Downloadable Tech Guide
    GIGABYTE is pleased to publish our first long-form “Tech Guide”: an in-depth, multipart document shedding light on important tech trends or applications, and presenting possible solutions to help you benefit from these innovations. In this Tech Guide, we delve into the making of “Data Centers”—what they are, who they are for, what to keep in mind when building them, and how you may build your own with products and consultation from GIGABYTE.
    In the Quest for Higher Learning, High Density Servers Hold the Key
    A top technological university in Europe noticed rising demand for computing services across its various departments. It decided to build a next-generation data center with GIGABYTE's high density servers. With the right tools in place, scientists were able to accelerate their research, analyze massive amounts of information, and complete more data-intensive projects. Science advanced while the institute flourished.
    The European Organization for Nuclear Research Delves into Particle Physics with GIGABYTE Servers
    The European Organization for Nuclear Research (CERN) bought GIGABYTE's high-density GPU Servers outfitted with 2nd Gen AMD EPYC™ processors. Their purpose: to crunch the massive amount of data produced by subatomic particle experiments conducted with the Large Hadron Collider (LHC). The impressive processing power of the GPU Servers’ multi-core design has propelled the study of high energy physics to new heights.