The project name stands for ICOsahedral-grid Models for EXascale earth system simulations. The goal is to develop scalable methods for fine-resolution climate models based on icosahedral grids. The ICOMEX project is funded by the G8 initiative and includes partners from four countries.
At the University of Hamburg, we are responsible for three of the seven workpackages. This part of the project is funded by the DFG (GZ: LU 1353/5-1).
Contact & principal investigator: Prof. Dr. Julian Kunkel
The goals of this workpackage are:
Up to now we have:
Next steps will focus on:
In this workpackage we research ways to optimize the I/O-related parts of the models. Our main focus is on the regularly invoked output routines for the produced data and for checkpointing. We also look at how the data is actually stored on disk and tape, and what can be done to allow faster access or compression.
We developed a compression algorithm for climate data.
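The project's actual algorithm is not described here, but the general idea behind many climate-data compressors can be sketched as prediction followed by entropy coding of the residuals. The scheme below (delta prediction plus deflate) is purely illustrative and is not the ICOMEX algorithm:

```python
import struct
import zlib

def compress(values):
    """Delta-encode a float series, then deflate the residuals.
    Smooth climate fields leave small, repetitive residuals that
    a general-purpose coder like deflate can shrink well."""
    deltas = [values[0]] + [b - a for a, b in zip(values, values[1:])]
    raw = struct.pack(f"<{len(deltas)}d", *deltas)
    return zlib.compress(raw)

def decompress(blob):
    """Invert compress(): inflate, then integrate the deltas."""
    raw = zlib.decompress(blob)
    deltas = struct.unpack(f"<{len(raw) // 8}d", raw)
    out, acc = [], 0.0
    for d in deltas:
        acc += d
        out.append(acc)
    return out
```

Note that summing floating-point deltas can introduce rounding differences; a production lossless coder would instead transform the raw bit patterns.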
Instrumentation of ICON to trace the I/O related parts is complete.
We created a NetCDF version that avoids double buffering: Cacheless NetCDF
The MultifileHDF5 library splits a logical HDF5 file into one file per process: Multifile HDF5
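The core of such a split is mapping one logical file name plus a process rank to a private physical file, so each process can write without coordinating with the others. A minimal sketch of that mapping (the naming scheme shown is hypothetical, not the one MultifileHDF5 uses):

```python
import os

def multifile_path(logical_name, rank):
    """Map a logical file name and an MPI-style process rank to
    that process's private physical file, e.g. out.h5 -> out.rank00003.h5
    for rank 3. The zero-padded scheme is an assumption for illustration."""
    root, ext = os.path.splitext(logical_name)
    return f"{root}.rank{rank:05d}{ext}"
```

With such a scheme, each process opens only its own file, avoiding lock contention on a single shared file; a reader later reassembles the logical file from the per-rank pieces.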
A benchmark for testing a variety of libraries with output similar to that of the ICON model is available here: ICON-output imitating benchmark
For our compression experiments, we have also developed a testfile generator. It produces test datasets with very precisely controlled characteristics. Currently, it supports 15 different modes, most of which also take a number of arguments. Out of the box, a simple make will produce only seven different datasets; more can easily be added to the Makefile. If POV-Ray is installed, it is also possible to render images that give a visual impression of the type of data produced. Download: testfilegen.tar.gz
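The mode-plus-arguments design can be sketched as a table of parameterized sample functions. The three modes below (constant, ramp, sine) are hypothetical stand-ins; the real generator's 15 modes and their arguments are not documented here:

```python
import math

# Hypothetical mode names for illustration; each mode maps a sample
# index i (of n total) plus optional keyword arguments to one value.
MODES = {
    "constant": lambda i, n, value=1.0: value,
    "ramp":     lambda i, n, lo=0.0, hi=1.0: lo + (hi - lo) * i / (n - 1),
    "sine":     lambda i, n, periods=1.0: math.sin(2 * math.pi * periods * i / n),
}

def generate(mode, n, **args):
    """Produce n samples whose characteristics are fully determined
    by the chosen mode and its arguments."""
    f = MODES[mode]
    return [f(i, n, **args) for i in range(n)]
```

Because every sample is computed from a closed-form rule, the statistical properties of each dataset (range, smoothness, periodicity) are known exactly, which is what makes such data useful for controlled compression experiments.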
Within this workpackage we gladly support the Exascale I/O Working Group (EIOW) by prototyping the I/O routines of ICON so that they utilize the next generation of storage systems.
The traces are the basis for our I/O access pattern analysis.
Create a benchmark resembling the typical I/O access patterns.
Compare the measured I/O performance with the theoretical peak performance.
Develop optimization strategies.
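The benchmarking and peak-comparison steps above can be sketched as a small throughput probe: write a fixed volume in fixed-size blocks, time it, and divide the measured rate by the system's theoretical peak. The sizes and the peak figure below are placeholders, not measured ICON access patterns:

```python
import os
import tempfile
import time

def write_throughput(total_mib=64, block_kib=512):
    """Write total_mib of data in block_kib chunks to a temporary
    file and return the achieved rate in MiB/s. fsync() is included
    so the OS page cache does not inflate the result."""
    block = b"\0" * (block_kib * 1024)
    n = (total_mib * 1024) // block_kib
    fd, path = tempfile.mkstemp()
    t0 = time.perf_counter()
    with os.fdopen(fd, "wb") as f:
        for _ in range(n):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - t0
    os.unlink(path)
    return total_mib / elapsed

# Comparing against a (assumed, vendor-quoted) theoretical peak:
# peak = 200.0  # MiB/s
# print(f"{write_throughput() / peak:.1%} of peak")
```

A real benchmark would replay the access pattern sizes and interleavings observed in the ICON traces rather than a uniform block stream.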
The aim of this workpackage is communication with the HPC hardware vendors and the developer communities who provide products used in climate model codes. We hope to get guidance from the vendors and developers on how to best utilize their products. In return, we intend to provide them with valuable insights into the needs of our model code, enabling them to tailor their products to the climate computing community.