JURECA


[Picture: JURECA, FZ Jülich]

The modular supercomputer JURECA consists of two complementary modules: a Cluster Module for memory-intensive, low- to medium-scalability applications and a Booster Module for highly scalable applications.

 

The JURECA Booster Module

The JURECA Booster Module has a peak performance of 5 PFlop/s.

 

Node characteristics:
  • One Intel Xeon Phi 7250-F Knights Landing CPU
  • 68 cores, 1.4 GHz
  • 96 GB per node (~1.4 GB per core) + 16 GB MCDRAM high-bandwidth memory
  • About 1640 nodes are available

Available resources per call: 140 million core-h*)
  • average overbooking factor: 1
  • average number of applications: 15

*) Includes 60 million core-h dedicated to applicants from FZJ
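For orientation, both the quoted peak performance and the size of a core-h request can be sanity-checked from these figures. The following is a minimal Python sketch; the figure of 32 double-precision flops per cycle per Knights Landing core and all job parameters are illustrative assumptions, not values stated on this page.

```python
# Rough budget arithmetic for the JURECA Booster (illustrative sketch).
# Assumption: a KNL core delivers up to 32 double-precision flops per cycle
# (2 AVX-512 FMA units x 8 doubles x 2 flops); not stated in the text above.

NODES = 1640             # approximate number of available Booster nodes
CORES_PER_NODE = 68      # Xeon Phi 7250-F
CLOCK_GHZ = 1.4
DP_FLOPS_PER_CYCLE = 32  # assumed KNL peak figure

# Peak-performance check: should come out near the quoted 5 PFlop/s.
peak_pflops = NODES * CORES_PER_NODE * CLOCK_GHZ * DP_FLOPS_PER_CYCLE / 1e6
print(f"Estimated peak: {peak_pflops:.2f} PFlop/s")  # ~5.00

# Core-h estimate for a hypothetical project:
# 256 nodes for 48 hours per run, 20 runs over the allocation period.
nodes, hours_per_run, runs = 256, 48, 20
core_h = nodes * CORES_PER_NODE * hours_per_run * runs
print(f"Requested budget: {core_h / 1e6:.1f} million core-h")  # ~16.7
```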

 

The JURECA Cluster Module

The JURECA Cluster Module consists of compute nodes with GPUs (GPU nodes) and without GPUs (CPU nodes). It has a peak performance of about 2.2 PFlop/s.

Important hint: Resources on the JURECA Cluster Module are primarily available to researchers from Forschungszentrum Jülich. Researchers from RWTH Aachen may only apply for this module if they benefit from the modular architecture of the JURECA system and use the JURECA Cluster Module in combination with the JURECA Booster Module. A detailed and convincing work plan must justify the need to apply for this module.

 

CPU nodes

Node characteristics:
  • Two Intel Xeon E5-2680 v3 Haswell CPUs
  • 2 x 12 cores, 2.5 GHz per node
  • 1605 nodes with 128 GB (~5 GB per core)
  • 128 nodes with 256 GB (~10 GB per core)
  • 64 nodes with 512 GB (~20 GB per core)

Available resources per call: 100 million core-h
  • average overbooking factor: 2.5
  • average number of applications: 45

GPU nodes

Node characteristics:
  • Two Intel Xeon E5-2680 v3 Haswell CPUs
  • 2 x 12 cores, 2.5 GHz per node
  • 128 GB per node (~5 GB per core)
  • Two NVIDIA K80 GPUs
  • 2 x 4992 CUDA cores
  • 2 x 24 GB GDDR5 memory
  • About 75 nodes are available

Available resources per call: 4.7 million core-h**)
  • average overbooking factor: 1.5
  • average number of applications: 10

**) Resources on GPU nodes are accounted in core-h of the host CPUs, i.e. 24 core-h on GPU nodes corresponds to using one node with two K80 GPUs for one hour.
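Since usage on GPU nodes is charged via the 24 host CPU cores, converting node-hours to core-h is plain arithmetic. A minimal Python sketch of this accounting rule, with hypothetical job figures:

```python
# Core-h accounting on JURECA GPU nodes (illustrative sketch).
# Rule from the footnote above: usage is charged in core-h of the
# 24 host CPU cores, so 1 node-hour = 24 core-h.

HOST_CORES_PER_GPU_NODE = 24  # 2 x 12 Haswell cores

def gpu_core_hours(nodes: int, wall_hours: float) -> float:
    """Core-h charged for a job on `nodes` GPU nodes running `wall_hours`."""
    return nodes * HOST_CORES_PER_GPU_NODE * wall_hours

# Hypothetical example: 8 GPU nodes (16 K80 cards) for 12 hours.
print(gpu_core_hours(8, 12))  # 2304 core-h
```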

 

Hint: Projects requesting up to 2.4 million core-h might want to consider applying for resources on CLAIX via a simplified application procedure.

To estimate the resources needed for a regular computing time project, it is possible to obtain test access to the JURECA system. For this purpose, please contact sc@fz-juelich.de.

You can find more information about JURECA here.

Important note: JSC has introduced a user-centered model for using the supercomputing systems located at JSC (here: JURECA). Each user has only one account, through which all assigned projects can be accessed.

In addition, data projects have been introduced alongside the familiar computing time projects. Computing time resources continue to be requested through computing time projects, and these projects retain access to a scratch file system (without backup) and a project file system (with backup). Access to the tape-based archive, however, is only possible via data projects. Data projects also provide access to various additional storage layers, but they are not equipped with a computing time budget.

You can find a fact sheet on data projects and the application form for data projects here. For further information, please contact the user support (sc@fz-juelich.de) at JSC.