Gentoo automated benchmarks

This document explains how to run the benchmark suite and refers to the git repository status as of 4 July 2011.


You can install the project by simply cloning the git repository (git clone git://). You will also need:

  • A Python interpreter, version 2.6 or 2.7;
  • The portage and gentoolkit packages;
  • Only the new-style eselect with alternatives is supported, so you will need the bicatali overlay;
  • For the graphical reports you will also need matplotlib (compiled with libpng support) and numpy;
  • A package can be tested only if all of its dependencies are already installed; so, if you want to test eigen, for instance, you will need to install its dependencies first.
Once you have cloned the repository, enter the directory app-benchmarks/autobench/files/python (or the name you have chosen) in order to run the script.
Another option is to use layman to install the repository as an overlay and then emerge autobench-9999 (which is keyworded ~x86 and ~amd64), but this document does not cover that.


The script generates binary packages by running portage with a special environment. It then emerges the packages individually into a directory, compiles a standard benchmarking program, runs it and collects the results. A single package can be emerged several times with different compiler flags or compilers. For example, one could test sci-libs/atlas-3.9.41 three times:

  • Using gfortran-4.5.2 with FFLAGS=-O3
  • Using gfortran-4.6.0 with FFLAGS="-O2 -fschedule-insns"
  • Using ifort (whatever version) with standard FFLAGS
In this case one has to provide a configuration file formatted as follows:
atlas-gcc-452 sci-libs/atlas-3.9.41 FC=gfortran-4.5.2 FFLAGS=-O3
atlas-gcc-460 sci-libs/atlas-3.9.41 FC=gfortran-4.6.0 FFLAGS="-O2 -fschedule-insns"
atlas-icc sci-libs/atlas-3.9.41 FC=ifort
Each row defines a configuration and is formatted as follows:
  • The first part is a string of alphanumeric characters that identifies the configuration.
  • The second part is the package to test; in the example it is fully qualified as category/package-version. This is not mandatory, but it is the best practice: in case of ambiguity (e.g. multiple installable versions), every matching package is installed and tested separately. For example, sci-libs/atlas would test both sci-libs/atlas-3.8.4 and sci-libs/atlas-3.9.41.
  • Everything after the package is the environment to use while emerging the package.
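The row format above (name, package atom, then VAR=value assignments, with quoting for values that contain spaces) can be sketched with a minimal parser. This is an illustration of the format only, not autobench's actual code; the function name is ours:

```python
import shlex

def parse_row(line):
    # shlex.split honours shell-style quoting, so a value like
    # FFLAGS="-O2 -fschedule-insns" stays a single token.
    parts = shlex.split(line)
    name, package = parts[0], parts[1]          # config name, package atom
    env = dict(p.split("=", 1) for p in parts[2:])  # remaining VAR=value pairs
    return name, package, env

row = 'atlas-gcc-460 sci-libs/atlas-3.9.41 FC=gfortran-4.6.0 FFLAGS="-O2 -fschedule-insns"'
name, package, env = parse_row(row)
```

Note that a plain str.split() would break the quoted FFLAGS value apart, which is why shlex is used here.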
The configuration file can be stored anywhere. Now, we come to the script.
The script lives in that directory and has to be called with the following syntax:
python2 [library] [conffile] [tests]
  • [library] can be
    • blas – currently supported
    • cblas – currently supported
    • lapack – currently supported
    • lapacke
    • scalapack
    • blacs
  • [conffile] is the described configuration file
  • [tests] is a list of tests to be performed during the benchmark. For blas and cblas the following tests are available:
    • Level 1:
      • axpy – standard
      • axpby
      • rot
    • Level 2:
      • matrix_vector – standard
      • atv
      • symv
      • syr2
      • ger
      • trisolve_vector – standard
    • Level 3:
      • matrix_matrix – standard
      • aat
      • trisolve_matrix
      • trmm
  • For lapack the following are available:
    • general_solve: solves a general quadratic linear system of equations
    • lu_decomp: computes the full-pivoting LU decomposition of a general quadratic matrix
    • least_squares: solves the least squares problem (for the benchmarks a quadratic matrix is considered)
    • cholesky: computes the Cholesky decomposition of an SPD matrix
    • symm_ev: computes the eigenvalues of a symmetric matrix
The standard tests are performed if no arguments are provided. For lapack, all tests are standard.
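The fallback to the standard tests could be modelled as below. This is a hedged sketch, not autobench's actual code: the dictionary and function name are ours, and the entries simply collect the tests marked "standard" in the lists above (all of them for lapack):

```python
# Tests marked "standard" above; for lapack every test is standard.
STANDARD_TESTS = {
    "blas":   ["axpy", "matrix_vector", "trisolve_vector", "matrix_matrix"],
    "cblas":  ["axpy", "matrix_vector", "trisolve_vector", "matrix_matrix"],
    "lapack": ["general_solve", "lu_decomp", "least_squares",
               "cholesky", "symm_ev"],
}

def select_tests(library, requested):
    # If no [tests] arguments were given, fall back to the standard set.
    return list(requested) if requested else STANDARD_TESTS[library]
```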


The script can be run as a standard or super user. In the former case the packages are stored in ~/.benchmarks/packages; in the latter, in /var/cache/benchmarks/packages. In both cases, the tests are run in /var/tmp/benchmarks/roots and the temporary results are stored in /var/tmp/benchmarks/tests.
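The user-dependent storage rule above amounts to a simple effective-uid check. A minimal sketch (the helper name is ours, not autobench's):

```python
import os

def package_dir():
    # Root stores packages system-wide; ordinary users store them
    # under their home directory, as described above.
    if os.geteuid() == 0:
        return "/var/cache/benchmarks/packages"
    return os.path.expanduser("~/.benchmarks/packages")
```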

Log files are stored within /var/log/benchmarks, where you will find a directory for each run of the script. Almost everything is logged.

The results are stored within /var/cache/benchmarks/results if the user is root and within ~/.benchmarks/results otherwise. If the -s switch is given, a summary plot is also generated. If the -S switch is given, only the summary plot is generated. The results include a PNG plot for every operation, the summary image if requested and an HTML page with all the plots.
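The -s/-S switch behaviour described above can be summarised as follows. This is a hypothetical sketch of the selection logic only; the real autobench code and output file names may differ:

```python
def outputs(summary=False, summary_only=False):
    # summary corresponds to -s, summary_only to -S.
    if summary_only:                 # -S: only the summary plot
        return ["summary plot"]
    out = ["one PNG per operation", "HTML report"]
    if summary:                      # -s: summary plot in addition
        out.append("summary plot")
    return out
```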