The Partition's computers
The JARA Partition consists of contingents on the high-performance computers and supercomputers installed at RWTH Aachen University (CLAIX) and Forschungszentrum Jülich (JURECA). The Partition was established in 2012 and has been expanded gradually since then. All listed core-hours are split evenly between the two computing time periods per year.
CLAIX (Cluster Aix-la-Chapelle)
(picture: Conor Crowe)
In November 2016 the RWTH Compute Cluster was replaced in two stages by the new compute cluster “CLAIX”. All compute nodes continue to be equipped with processors of the x86_64 architecture and are operated under Linux.
Because of the higher compute capacity of the new processors, more than 400 TFlop/s will be available to users of the JARA-HPC partition after the installation of the first stage of CLAIX.
There are two different node types available:
- CLAIX2016-MPI: Each node has two Intel Broadwell processors (2.2 GHz, 12 cores each), i.e. 24 cores in total, and 128 GB of main memory.
- CLAIX2016-SMP: Each node comprises eight Intel Broadwell processors (2.2 GHz, 18 cores each), i.e. 144 cores in total, and 1024 GB of main memory.
The smallest unit that can be requested for a computing job is one node. Which node type to request depends on the application's parallelization scheme and its main memory requirements: nodes of type MPI are best suited for scalable MPI applications, whereas nodes of type SMP serve best for shared-memory parallelism or for computing jobs that need a large memory capacity, as sketched below.
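To make the distinction concrete, here is a minimal hybrid MPI+OpenMP sketch in C. It is illustrative only and not specific to CLAIX: compiler wrappers, module names, and batch-system settings on the cluster may differ, and the core counts in the comments simply mirror the node types above.

```c
/* Minimal hybrid MPI + OpenMP sketch (illustrative, not CLAIX-specific).
 * On an MPI node (24 cores) one would typically run many MPI ranks with
 * few threads each; on an SMP node (144 cores, 1024 GB) fewer ranks
 * with many OpenMP threads, or one large shared-memory process.
 * Build (illustrative): mpicc -fopenmp hybrid.c -o hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, size;

    /* Request thread support, since OpenMP threads coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    #pragma omp parallel
    {
        /* Each MPI rank spawns OMP_NUM_THREADS threads. */
        printf("rank %d/%d, thread %d/%d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

The MPI part distributes work across nodes with separate address spaces, while the OpenMP part exploits the cores within one node; on an SMP node the thread count per rank can be much larger, so that shared data structures can use the full 1024 GB of memory.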
You can find more information about how to use the RWTH Compute Cluster here.
You can download the detailed manual here.
For further information please contact the ServiceDesk of the IT Center of RWTH Aachen University.
JURECA
(picture: FZ Jülich)
The modular supercomputer JURECA consists of two complementary modules: a cluster module for memory-intensive, low- to medium-scalable applications and a booster module for highly scalable applications.
The JURECA Cluster Module
The module comprises about 1800 compute nodes. Each node contains two Intel Haswell processors with 12 cores each and has at least 128 GB of main memory. The module has a peak performance of about 1.8 PFlop/s, of which 100 TFlop/s are available to users of the JARA-HPC partition.
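As a rough plausibility check, the quoted peak performance follows from the node specification. Note that the exact node count (1872) and the per-core throughput (Haswell cores at 2.5 GHz with 16 double-precision FLOPs per cycle via two AVX2 FMA units) are assumptions not stated above:

$$
P_{\text{peak}} \approx 1872\ \text{nodes} \times 24\,\tfrac{\text{cores}}{\text{node}} \times 2.5\,\text{GHz} \times 16\,\tfrac{\text{FLOP}}{\text{cycle}} \approx 1.8\,\text{PFlop/s}
$$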
The JURECA Booster Module
The module comprises about 1600 compute nodes. Each node is equipped with one Intel Xeon Phi 7250-F “Knights Landing” processor with 68 cores and at least 96 GB of main memory. The module features an Intel Omni-Path Architecture high-speed network with a non-blocking fat-tree topology. The login infrastructure is shared with the cluster module. The booster module has a peak performance of about 5 PFlop/s, of which 800 TFlop/s are available to users of the JARA-HPC partition.
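The same back-of-the-envelope estimate reproduces the booster's peak performance, again under assumed figures (1640 nodes, a 1.4 GHz base clock, and 32 double-precision FLOPs per cycle from the two AVX-512 FMA units per core):

$$
P_{\text{peak}} \approx 1640\ \text{nodes} \times 68\,\tfrac{\text{cores}}{\text{node}} \times 1.4\,\text{GHz} \times 32\,\tfrac{\text{FLOP}}{\text{cycle}} \approx 5\,\text{PFlop/s}
$$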
To estimate the resources required for a regular computing time project, test access to the JURECA system can be granted. For this purpose, please contact firstname.lastname@example.org.
You can find more information about JURECA here.