HPC

  • What is it?
    High Performance Computing, or HPC, refers to the ability to process data and perform calculations at high speeds, especially on systems that operate above a trillion floating point operations per second (one teraFLOPS). The world's leading supercomputers operate on the scale of petaFLOPS (quadrillion floating point operations per second), while the next milestone is “exascale computing”, which exceeds a quintillion floating point operations per second (exaFLOPS).
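
    As a back-of-the-envelope illustration of these scales, a processor's theoretical peak can be estimated as cores × clock speed × floating point operations per cycle. The short Python sketch below uses assumed figures (a hypothetical 64-core CPU at 2.0 GHz executing 16 double-precision FLOPs per core per cycle); these are illustrative assumptions, not the specifications of any particular product:

      # Illustrative FLOPS arithmetic; all hardware figures below are assumptions.
      TERA = 10**12   # teraFLOPS: one trillion operations per second
      PETA = 10**15   # petaFLOPS: one quadrillion
      EXA  = 10**18   # exaFLOPS:  one quintillion

      cores = 64                # hypothetical core count
      clock_hz = 2.0e9          # assumed 2.0 GHz clock
      flops_per_cycle = 16      # assumed double-precision FLOPs per core per cycle

      peak = cores * clock_hz * flops_per_cycle
      print(f"Single CPU peak: {peak / TERA:.2f} teraFLOPS")
      print(f"CPUs needed for one petaFLOPS: {PETA / peak:,.0f}")
      print(f"CPUs needed for one exaFLOPS:  {EXA / peak:,.0f}")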

    To achieve this level of performance, parallel computing across a large number of CPUs or GPUs is required. One common type of HPC solution is a computing cluster, which aggregates the computing power of multiple computers (referred to as "nodes") into a single group. A cluster can deliver much higher performance than a single computer, as the individual nodes work together on problems too large for any one machine to handle on its own.
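
    The minimal sketch below shows how such a division of labor works in practice. It uses the open-source mpi4py library (a Python binding for the Message Passing Interface, a common standard for cluster communication); each process, or "rank", computes part of a sum, and the results are combined on one rank. This is a generic illustration, not code tied to any specific cluster:

      # Each MPI rank computes a slice of a series for pi; rank 0 gathers the total.
      # Launched across cluster nodes with, for example: mpirun -n 8 python pi_mpi.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()    # this process's ID within the job
      size = comm.Get_size()    # total number of processes

      N = 10_000_000            # terms of the Leibniz series for pi
      # Each rank handles an interleaved share of the terms.
      partial = sum((-1) ** k / (2 * k + 1) for k in range(rank, N, size))

      total = comm.reduce(partial, op=MPI.SUM, root=0)   # combine on rank 0
      if rank == 0:
          print(f"pi is approximately {4 * total:.6f}, computed by {size} processes")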

  • Why do you need it?
    Acquiring HPC capabilities, whether through a computing cluster or a high-end mainframe computer, allows your organization to tackle large-scale problems in science, engineering, and business. Examples include:

    Science: HPC is used by scientists at universities and research institutes to understand the formation of our universe, conduct research into particle physics, or simulate and predict climate and weather patterns.

    Media & Entertainment: HPC solutions such as render farms can be used to render animations and special effects, edit feature films, or livestream special events globally.

    Artificial Intelligence: A particularly popular HPC workload today is machine learning, which is used to develop a myriad of artificial intelligence applications, such as self-driving vehicles, facial recognition software, speech recognition and translation, or drone technology.

    Oil and Gas: HPC is used to process data such as satellite imagery and ocean floor sonar readings in order to identify potential new deposits of oil or gas.

    Financial Services: HPC is used to track stock trends and execute algorithmic trading, or to analyze patterns in order to detect fraudulent activity.

    Medicine: HPC is used to help develop cures for diseases like diabetes, or to enable faster and more accurate diagnostic methods, such as cancer screening techniques.

  • How is GIGABYTE helpful?
    GIGABYTE's H-Series High Density Servers and G-Series GPU Servers are designed especially for HPC applications, since they pack a large amount of computing power into a 1U, 2U, or 4U server chassis. The servers can be linked into a cluster via interconnects such as Ethernet, InfiniBand, or Omni-Path. An example of an HPC server solution is the GIGABYTE H262 Series equipped with AMD EPYC™ 7002 Series processors, which can feature up to 512 cores / 1,024 threads of computing power (two 64-core AMD EPYC™ CPUs per node, four nodes per system) in a single 2U chassis. By populating a full 42U server rack with these systems (leaving some room for networking switches), the user can draw on up to 10,240 cores and 20,480 threads, a massive amount of computing power.
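
    The rack-level figures above can be verified with a short calculation. The sketch below assumes 2U per system and 2U reserved for networking switches in a 42U rack; the reserved space is an assumption for illustration, not a prescribed configuration:

      # Core and thread counts for the rack example above.
      cores_per_cpu = 64
      cpus_per_node = 2
      nodes_per_system = 4
      threads_per_core = 2                     # simultaneous multithreading

      cores_per_system = cores_per_cpu * cpus_per_node * nodes_per_system    # 512
      threads_per_system = cores_per_system * threads_per_core               # 1,024

      rack_units = 42
      units_per_system = 2
      units_for_switches = 2                   # assumed space left for switches
      systems_per_rack = (rack_units - units_for_switches) // units_per_system   # 20

      print(f"Cores per rack:   {systems_per_rack * cores_per_system:,}")    # 10,240
      print(f"Threads per rack: {systems_per_rack * threads_per_system:,}")  # 20,480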

  • WE RECOMMEND
    RELATED ARTICLES
    In the Quest for Higher Learning, High Density Servers Hold the Key
    A top technological university in Europe noticed rising demand for computing services across its various departments. It decided to build a next-generation data center with GIGABYTE's high density servers. With the right tools in place, scientists were able to accelerate their research, analyze massive amounts of information, and complete more data-intensive projects. Science advanced while the institute flourished.
    The European Organization for Nuclear Research Delves into Particle Physics with GIGABYTE Servers
    The European Organization for Nuclear Research (CERN) bought GIGABYTE's high-density GPU Servers outfitted with 2nd Gen AMD EPYC™ processors. Their purpose: to crunch the massive amount of data produced by subatomic particle experiments conducted with the Large Hadron Collider (LHC). The impressive processing power of the GPU Servers’ multi-core design has propelled the study of high energy physics to new heights.
    Decoding the Storm with GIGABYTE’s Computing Cluster
    Waseda University, the “Center for Disaster Prevention around the World”, has built a computing cluster with GIGABYTE’s GPU servers and tower servers. The cluster is used to study and prepare for natural disasters, such as tsunamis and storm surges. Efforts go into understanding the tropical cyclones of tomorrow, which are expected to become more dangerous due to climate change.
    Success Story: Achieving Naked-Eye 3D with Virtual Reality
    n'Space, a projector-based platform, is capable of implementing mixed reality without wearable devices. Users can immerse themselves in a virtual environment and interact with 3D projections without having to cover their faces with electronics. The real world just got an expansion pack in the infinite realm of the imagination. ArchiFiction, the company behind the invention, optimized the virtual experience with GIGABYTE's solution, which can process large amounts of data with high performance.