[University of Tennessee Knoxville, Innovative Computing Laboratory]

1. Introduction

This is a suite of benchmarks that measure the performance of the CPU, the memory subsystem, and the interconnect. For details, refer to the HPC Challenge web site.

In essence, HPC Challenge consists of a number of sub-benchmarks, each of which tests a different aspect of the system.

If you are familiar with the HPL benchmark code (see the HPL web site), then you can reuse the build script file (input for make(1)) and the input file that you already have for HPL. The HPC Challenge benchmark includes HPL and uses its configuration and input files with only slight modifications. The most important change is to the line that sets the TOPdir variable. For HPC Challenge, the variable's value should always be ../../.. regardless of what it was in the HPL build script file.
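
For reference, here is a hedged sketch of the relevant fragment of a build script file; apart from the TOPdir line, the variable names follow the standard HPL template, and the architecture name Unix and the directory layout are only illustrative:

    # Fragment of hpl/Make.Unix (illustrative; only the TOPdir value
    # is prescribed by HPC Challenge)
    ARCH         = Unix
    TOPdir       = ../../..
    INCdir       = $(TOPdir)/include
    BINdir       = $(TOPdir)/bin/$(ARCH)
    LIBdir       = $(TOPdir)/lib/$(ARCH)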

2. Source Code Changes

2.1. Version 1.2 Changes

  1. Changes in the FFT component:

    • Added flexibility in choosing vector sizes and processor counts: the code can now handle powers of 2, 3, and 5 in both the sequential and the parallel tests.

    • FFTW can now run with the ESTIMATE (rather than only the MEASURE) planning flag: this might produce worse performance results, but it often reduces the time to run the test and causes less memory fragmentation.

  2. Changes in the DGEMM component:

    • Added more comprehensive checking of the numerical properties of the test's results.

  3. Changes in the RandomAccess component:

    • Removed time-bound functionality: only runs that perform complete computation are now possible.

    • Made the timing more accurate: main array initialization is not counted towards performance timing.

    • Cleaned up the code: some non-portable C language constructs have been removed.

    • Added new algorithms: new algorithms from Sandia, based on a hypercube network topology, can now be chosen at compile time, which results in much better performance on many types of parallel systems.

    • Fixed potential resource leaks by adding function calls required by the MPI standard.

  4. Changes in the HPL component:

    • Cleaned up reporting of numerics: more accurate printing of the scaled residual formula.

  5. Changes in the PTRANS component:

    • Added randomization of virtual process grids to measure bandwidth of the network more accurately.

  6. Miscellaneous changes:

    • Added better support for Windows-based clusters by taking advantage of the Win32 API.

    • Added custom memory allocator to deal with memory fragmentation on some systems.

    • Added better reporting of configuration options in the output file.

3. Compiling

The first step is to create a build script file that reflects the characteristics of your machine. This file is reused by all the components of the HPC Challenge suite. The build script file should be created in the hpl directory. This directory contains instructions (the files README and INSTALL) on how to create the build script file. The hpl/setup directory contains many examples of build script files. A recommended approach is to copy one of them to the hpl directory and change it if it does not work.

The build script file has a name that starts with the Make. prefix and usually ends with a suffix that identifies the target system. For example, if the suffix chosen for the system is Unix, the file should be named Make.Unix.
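
As a sketch, assuming one of the sample files in hpl/setup is close enough to your system to serve as a starting point (the sample file name below is only an example), the copy could be done like this:

    # Run from the top directory of the HPC Challenge distribution.
    # Pick whichever sample in hpl/setup is closest to your system;
    # the name Make.Linux_PII_CBLAS is only an example.
    cp hpl/setup/Make.Linux_PII_CBLAS hpl/Make.Unix
    # Then edit hpl/Make.Unix: compilers, MPI and BLAS locations,
    # and set TOPdir = ../../..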

To build the benchmark executable (for the system named Unix), type: make arch=Unix. This command should be run in the top directory (not in the hpl directory). It will look in the hpl directory for the build script file and use it to build the benchmark executable.
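
For example, a build for the system named Unix could look like this (a sketch; the executable is normally placed in the top directory, but check your build output if it is not there):

    # Run in the top directory of the HPC Challenge distribution,
    # which contains the hpl subdirectory with Make.Unix.
    make arch=Unix
    # On success, the hpcc executable should appear in the top directory.
    ls ./hpcc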

The runtime behavior of the HPC Challenge source code may be configured at compile time by defining a few C preprocessor symbols. They can be defined by adding appropriate options to the CCNOOPT and CCFLAGS make variables. The former controls options for source code files that need to be compiled without aggressive optimizations to ensure accurate generation of system-specific parameters. The latter applies to the rest of the files, which need good compiler optimization for best performance. To define a symbol S, the majority of compilers require the option -DS to be used. Currently, the following options are available in the HPC Challenge source code:
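
As an illustration of the mechanism only: two symbols commonly defined in HPC Challenge builds are USING_FFTW (use the FFTW library for the FFT tests) and RA_SANDIA_OPT2 (select one of the Sandia RandomAccess algorithms); treat them as examples and consult the distribution's documentation before enabling any symbol. A sketch of how such definitions are added to the build script file:

    # Fragment of hpl/Make.Unix (illustrative): preprocessor symbols are
    # defined by appending -D options to the compiler flag variables.
    CCNOOPT      = $(HPL_DEFS)
    CCFLAGS      = $(HPL_DEFS) -O3 -DUSING_FFTW -DRA_SANDIA_OPT2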

4. Runtime Configuration

The HPC Challenge is driven by a short input file named hpccinf.txt that is almost the same as the input file for HPL (customarily called HPL.dat). Refer to the file hpl/www/tuning.html for details about the input file for HPL. A sample input file is included with the HPC Challenge distribution.
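
For orientation, the top of the input file follows the familiar HPL layout; a sketch with placeholder values (the sample file shipped with the distribution and hpl/www/tuning.html are the authoritative references) looks roughly like this:

    HPLinpack benchmark input file
    Innovative Computing Laboratory, University of Tennessee
    HPL.out      output file name (if any)
    6            device out (6=stdout,7=stderr,file)
    1            # of problem sizes (N)
    1000         Ns
    1            # of NBs
    80           NBs
    0            PMAP process mapping (0=Row-,1=Column-major)
    1            # of process grids (P x Q)
    2            Ps
    2            Qs
    ...          (the remaining HPL lines and the extra HPC Challenge lines follow)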

The differences between the HPL input file and the HPC Challenge input file can be summarized as follows:

The additional lines in the HPC Challenge input file (compared to the HPL input file) are:

Just for completeness, here is the list of lines of the HPC Challenge's input file with brief descriptions of their meaning:

5. Running

The exact way to run the HPC Challenge benchmark depends on the MPI implementation and system details. An example command to run the benchmark could look like this: mpirun -np 4 hpcc. The meaning of the command's components is as follows:

  • mpirun — the command that launches MPI applications (its name and options vary between MPI implementations);

  • -np 4 — run the benchmark on 4 MPI processes;

  • hpcc — the name of the HPC Challenge executable produced by the build.
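
A minimal sketch of a complete run, assuming a working MPI installation and that the executable and the input file hpccinf.txt sit in the current working directory (the launcher name and its options, e.g. mpirun versus mpiexec, differ between MPI implementations):

    cd /path/to/hpcc-run-dir   # hypothetical directory with hpcc and hpccinf.txt
    mpirun -np 4 ./hpcc        # start the benchmark on 4 MPI processes
    # When the run completes, the results are in hpccoutf.txt.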

After the run, a file called hpccoutf.txt is created that contains the results of the benchmark. This file should be uploaded through the web form at the HPC Challenge web site.