Mid-term report

This report presents the status of the Google Summer of Code project Automated benchmark suite for numerical libraries in Gentoo before the mid-term evaluation.

Project description

The project aims to develop a simple yet powerful automated benchmarking system for numerical libraries. The Gentoo software system provides many implementations of widely used standards such as BLAS, CBLAS, LAPACK and ScaLAPACK, as well as other numerical libraries such as FFTW and MKL. The developed tools will aid the system maintainer in choosing the implementation best suited to the machine hardware, and in testing the same implementation (or different ones) with different compilers, compiler versions and compile flags.

Status of the project

A set of Python scripts and a set of benchmarking suites written in C++ are provided. These tools are able to perform the following tasks:

  • Benchmark different implementations of the standard libraries BLAS, CBLAS, LAPACK
  • Benchmark the FFTW library
  • For each test choose different packages or package versions and for each package or version provide a customized compile environment
  • Customize the types of tests to be performed
  • Generate an HTML report with plots regarding the performed tests in PNG format

The following features are provided:

  • Error management: even if a benchmark crashes, the script continues with the next task, providing an error message; the failed test will be ignored when generating the reports
  • Complete logging system: everything relevant is logged in a directory, from the emerging of a package to the benchmarking suite execution
  • Human readable output to the terminal as interface to the more low-level underlying test suites
  • Ability to skip specific implementations
  • Ability to specify a particular implementation for the libraries used by the tested libraries (for example, many LAPACK implementations rely on BLAS and CBLAS libraries)
  • Compiled packages, test results, logs and reports are saved; the tests are not run if all needed results already exist
  • The high-quality benchmarking system BTL is used

Using the software

Installing the package

First of all, the bicatali repository has to be installed. This can be done through layman. This repository provides updated packages for every numerical library and a patched eselect version. If you have some numerical libraries already installed, you will have to remove them and install the versions provided by the bicatali overlay.
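If layman is used, adding the overlay could look like the following sketch (the exact overlay name in the layman list is assumed to be bicatali):

layman -a bicatali   # add the bicatali overlay
layman -S            # sync the installed overlays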

The repository git://git.overlays.gentoo.org/proj/auto-numerical-bench.git contains a Portage tree with the ebuild for installing the software. One can set up the repository using layman (by adapting the configuration file) or by checking the repository out and adding it to the PORTDIR_OVERLAY environment variable. The package is named autobench. Until the first release, only version 9999 is provided, with the keywords ~x86 and ~amd64 (it was only tested on amd64, though). The autobench package installs its files under /usr/(libdir)/autobench and a symlink into /usr/bin.
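For the manual checkout route, the setup could look like the following sketch; the clone path and the package category are only illustrative guesses, so check the overlay itself for the actual ebuild location:

git clone git://git.overlays.gentoo.org/proj/auto-numerical-bench.git /usr/local/overlays/auto-numerical-bench
export PORTDIR_OVERLAY="/usr/local/overlays/auto-numerical-bench ${PORTDIR_OVERLAY}"
# Accept the live (9999) ebuild and install (category name assumed):
echo "app-benchmarks/autobench **" >> /etc/portage/package.accept_keywords
emerge autobench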

In order to run the script, type autobench and add the arguments that are described below.

Configuring the tests

In order to run the benchmarks, a configuration file has to be provided. Every line of this file defines a package containing implementations to be tested. The line begins with a single-word label that serves as an identifier. The second token is the package; every expression accepted by emerge works. After that, environment variables and other flags can be appended. Lines starting with a # are skipped (comments).

For example, to test BLAS implementations, one might want to emerge the packages atlas, eigen, openblas and acml, and perhaps test different versions of atlas with different gcc versions (or with icc). An example configuration file could be the following:

atlas-3.8 sci-libs/atlas-3.8.4 CFLAGS="-O3 -march=native"
atlas-3.9 sci-libs/atlas-3.9.41 CFLAGS="-O3 -march=native"
atlas-3.9_gcc-4.6.1 sci-libs/atlas-3.9.41 CFLAGS="-O3 -march=native" CC="gcc-4.6"

acml sci-libs/acml-4.4.0-r1 -acml32-gfortran -acml32-gfortran-openmp

Notice that gcc-4.6.1 must be installed on the system. As every package can install more than one implementation, every actual test is referenced through a string of the form line-identifier/implementation. For example, sci-libs/atlas installs the implementations atlas and atlas-threads. We therefore have six tests regarding atlas, which are identified by:

  • atlas-3.8/atlas
  • atlas-3.8/atlas-threads
  • atlas-3.9/atlas
  • atlas-3.9/atlas-threads
  • atlas-3.9_gcc-4.6.1/atlas
  • atlas-3.9_gcc-4.6.1/atlas-threads

The acml package installs different implementations depending on the USE flags. Let us assume in our example that it installs four:

  • acml32-gfortran
  • acml32-gfortran-openmp
  • acml64-gfortran
  • acml64-gfortran-openmp

with both 32-bit and 64-bit profiles. As we only want to test the 64-bit versions, we add the strings -acml32-gfortran and -acml32-gfortran-openmp to prevent these two implementations from being tested.

Another possibility is controlling the libraries used by the tested package. For example, the LAPACK reference implementation (package lapack-reference) makes use of the BLAS routines, which can be provided by any of the installed implementations. One could therefore test the performance of this package when using the eigen BLAS implementation and when using the openblas-threads one:

reference_eigen sci-libs/lapack-reference-3.3.1-r1 blas:eigen
reference-openblas sci-libs/lapack-reference-3.3.1-r1 blas:openblas-threads

An argument of the form blas:implementation instructs the script to use the desired implementation when running the suite. If none is specified (e.g. here no cblas implementation is given), the standard one is picked. Notice that the desired implementation (in this case eigen and openblas-threads) has to be installed on the system.

Running the tests

Once a configuration file has been created (call it conffile.in here), the tests can be run. The call synopsis is the following:

autobench module conffile.in arguments

where module is the library to be tested (e.g. blas, cblas, lapack, fftw); conffile.in is the configuration file described above, which defines the implementations for the desired library; arguments are module-dependent and usually name the numerical tests to be performed. Below is a list of accepted arguments for every provided module. Every module accepts the two flags -s or --summary and -S or --summary-only:

  • -s and --summary enable the summary figure, a single figure with many plots (subplots) that summarizes the tests. As the legend often hides the lines on such small plots, it is not displayed if standard plots are present (i.e. if the -S argument is not given)
  • -S and --summary-only enable the summary figure and disable the standard, single-plot figures. This also enables the legends on the summary figure.
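Putting the pieces together, a BLAS run over the configuration file above, restricted to two tests and with the summary figure enabled, could look like this (the configuration file name is arbitrary):

# Run the axpy and matrix_matrix tests for every implementation
# listed in conffile.in, and also generate the summary figure:
autobench blas conffile.in axpy matrix_matrix -s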

The blas.py module accepts the following tests as arguments:

  • axpy
  • axpby
  • rot
  • matrix_vector
  • atv
  • symv
  • ger
  • syr2
  • trisolve_vector
  • matrix_matrix
  • aat
  • trisolve_matrix
  • trmm

The same tests are accepted by the cblas.py module.

If no test is given as argument, then the following four standard tests are selected:

  • axpy
  • matrix_vector
  • trisolve_vector
  • matrix_matrix

The lapack module accepts the following tests as arguments:

  • general_solve
  • least_squares
  • lu_decomp
  • cholesky
  • symm_ev

If no arguments are given, then all tests are selected.


The fftw.py module accepts the following tests as arguments:

  • FFTW_1D_Forward_Measure
  • FFTW_1D_Forward_Estimate
  • FFTW_1D_Backward_Measure
  • FFTW_1D_Backward_Estimate

If no arguments are given, then all tests are selected.


The blas_accuracy module accepts the following tests as arguments:

  • axpy
  • matrix_vector
  • trisolve_vector
  • matrix_matrix

If no arguments are given, then all tests are selected.


On the page http://www.phys.ethz.ch/~arteagaa/soc/ some report examples generated by the script are available.

