concace
High Performance Numerical and Parallel Composability (COmposabilité Numérique et parallèle pour le CAlcul haute performanCE)


To deal with large-size and high-dimensional problems coming from model- and data-driven applications, we will take advantage of modern development tools and languages to design high-level expressions of complex parallel algorithms. While the traditional approach to HPC is to fully exploit the hardware, our complementary approach will enable a richer composability of numerical methods, allowing existing and new numerical algorithms to be fully exploited.

This page may be browsed in HTML or PDF.

1. Home

The concace team is a joint Inria-industry team that involves members from two private partners, namely Airbus Central R&T and Cerfacs. The permanent members have a strong scientific background in parallel computational science, with different emphases on computer science and applied mathematics. The team is composed of members with diverse academic and professional backgrounds, which makes it rich and multidisciplinary, with a large spectrum of skills and expertise.

This joint initiative was strongly motivated by the partners' common scientific interests and their shared view of research actions.

Airbus aims to strengthen its position as a global leader in the aerospace and defence business and to deliver competitive, integrated solutions to its customers while targeting increased revenues and profitability. In doing so, we strive to solidify an innovation culture, to work more globally for all our stakeholders, to improve our cultural diversity and to promote ethics and transparency within our organisation. Airbus Central Research and Technology – the Airbus network of research facilities, scientists, engineers and partnerships – is at the forefront of research and technology. Through this network, we foster technological excellence and business orientation by sharing competences and means between Airbus, Airbus Defence and Space and Airbus Helicopters, and we develop and maintain partnerships with world-renowned schools, universities and research centres.

Cerfacs is a research center specialized in scientific computing. Through its High Performance Computing (HPC) expertise and facilities, Cerfacs addresses major scientific and technical research problems of public and industrial interest. Its teams host interdisciplinary researchers including physicists, applied mathematicians, numerical analysts, computer scientists and software engineers who design and develop innovative methods and software solutions to tackle challenges of the aeronautics, space, climate, energy and environmental sectors. Cerfacs is involved in major national and international projects and works in close interaction with its seven partners: Airbus, Cnes, EDF, Météo France, Onera, Safran and TotalEnergies. The Cerfacs members of concace belong to the transversal parallel algorithms team, whose main research activities consist in designing efficient parallel algorithms for numerical linear algebra, optimization and stochastic computing.

2. Contact

3. Team members

  • Post-docs:
  • PhD students:
    • Antoine Jego, IRIT (co-advised with A. Buttari (CNRS) and A. Guermouche (Topal team)), from Nov. 2020
    • Romain Peressoni, Université de Bordeaux, PhD Student funded by Inria/Région, from Oct. 2019
    • Atte Torri, LISN (co-advised with O. Kaya), from Dec. 2021
  • Engineers:
    • Pierre Esterie, Inria (team contract)
  • External collaborators:
    • Oguz Kaya, LISN, Saclay, Associate Professor
    • Jean René Poirier (HDR), INP Toulouse, Associate Professor
    • Ulrich Rüde, Friedrich-Alexander-Universität, Erlangen-Nürnberg, Germany and Cerfacs
  • Former members:
    • Marek Felsoci, Inria-Airbus Central R&T, PhD, 2019-2023, manuscript available here
    • Karim Mohamed El Maarouf, PhD with IFPEN, 2019-2023, manuscript soon available
    • Martina Iannacito, Inria, PhD, 2019-2022, manuscript available here
    • Yanfei Xiang, Inria-Cerfacs, PhD, 2019-2022, manuscript available here

4. Job offers

  • Master positions with possible continuation in a PhD
    • Solving sparse linear systems using modular and adaptive mixed-precision Krylov methods, collaboration with LIP6 in the context of the PEPR Numpex, for more details see here
    • Backward stability analysis of numerical linear algebra kernels using normwise perturbations, collaboration with LIP6 in the context of the PEPR Numpex, for more details see here
    • Composability of execution models, for more details see here
    • Unsupervised learning using spectral approaches, in collaboration with IRIT, for more details see here
    • Abstraction of Krylov-type subspace methods, for more details see here
    • Unification of hierarchical methods for linear systems processing, for more details see here
    • Fault-tolerant numerical iterative algorithms at scale, collaboration with ENS Lyon in the context of the PEPR Numpex, for more details see here
    • An apprenticeship position at Airbus (in Issy-les-Moulineaux) to work on scientific computing on GPUs, for students starting an M2 in September, here
  • Postdoc positions
    • High-performance solver for aeroacoustics, here
    • Equivalent models and reduced-order methods, here

5. Research

Over the past few decades, there have been innumerable science, engineering and societal breakthroughs enabled by the development of high performance computing (HPC) applications, algorithms and architectures. These powerful tools have enabled researchers to find computationally efficient solutions to some of the most challenging scientific questions and problems in medicine and biology, climate science, nanotechnology, energy and environment – to name a few – in the field of model-driven computing. Meanwhile, the advent of network capabilities, IoT, next-generation sequencing, … tends to generate huge amounts of data that deserve to be processed to extract knowledge and possible forecasts. These calculations are often referred to as data-driven calculations. The two classes of challenges share common ground in terms of numerical techniques, which lies in the field of linear and multi-linear algebra. They also share common bottlenecks related to the size of the mathematical objects that we have to represent and work on; these challenges attract growing attention from the computational science community.

In this context, the purpose of the concace project is to contribute to the design of novel numerical tools for model-driven and data-driven calculations arising from challenging academic and industrial applications. The solution of these challenging problems requires a multidisciplinary approach involving applied mathematics, computational science and computer science. In applied mathematics, it essentially involves advanced numerical schemes, both in terms of numerical techniques and of data representations of the mathematical objects (e.g., compressed data, low-rank tensors, low-rank hierarchical matrices). In computational science, it involves large-scale parallel heterogeneous computing and the design of highly composable algorithms. Through this approach, concace intends to contribute to all the steps that go from the design of new robust and accurate numerical schemes to flexible implementations of the associated algorithms on large computers. To address these research challenges, researchers from Inria, Airbus Central R&T and Cerfacs have decided to combine their skills and research efforts to create the Inria concace project team, which will allow them to cover the entire spectrum, from fundamental methodological concerns to full validation on challenging industrial test cases. Such a joint project will enable a real synergy between basic and applied research, with complementary benefits to all the partners.
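
As a concrete illustration of the compressed data representations mentioned above, the short NumPy sketch below compresses the interaction block between two well-separated point clusters with a truncated SVD, the elementary building block behind low-rank and hierarchical-matrix formats. It is a generic, didactic example under our own naming (low_rank_factors), not an excerpt from the team's software.

import numpy as np

def low_rank_factors(block, tol=1e-8):
    """Truncated SVD of a dense block: returns U, V with block ≈ U @ V."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))   # numerical rank at tolerance tol
    return U[:, :r] * s[:r], Vt[:r, :]        # 2*n*r numbers instead of n*n

# Interaction block between two well-separated 1D clusters of points,
# K[i, j] = 1 / |x_i - y_j|, which is numerically low rank.
x = np.linspace(0.0, 1.0, 500)
y = np.linspace(10.0, 11.0, 500)
K = 1.0 / np.abs(x[:, None] - y[None, :])
U, V = low_rank_factors(K)
print("rank:", U.shape[1], "relative error:", np.linalg.norm(K - U @ V) / np.linalg.norm(K))

Hierarchical-matrix formats apply this idea recursively to the admissible (well-separated) blocks of a matrix, which is what keeps storage and arithmetic costs almost linear in the problem size.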

6. Software

  • Avci: Adaptive Vibrational Configuration Interaction is a shared-memory C++ software package that computes vibrational spectra of molecules.
  • Cppdiodon: a joint software development with the PLEIADE team whose goal is to provide efficient implementations of the main linear dimension-reduction methods for learning purposes on massive data sets, from laptops to supercomputers. It is based on fmr and may be accelerated with chameleon.
  • Fabulous: Fast Accurate Block Linear krylOv Solver is a software package that implements Krylov subspace methods with a particular emphasis on their block variants for the solution of linear systems with multiple right-hand sides.
  • fmr: Fast and accurate Methods for Randomized numerical linear algebra; a generic sketch of the randomized range-finder idea underlying this kind of library is given after this list.
  • h-mat: Hierarchical-matrix is a way to store and manipulate matrices in a hierarchical and compressed way. The sequential version of the library is available as open-source software (GPL v2) on GitHub; the full-featured parallel version is proprietary, owned by Airbus Central R&T.
  • Maphys++: Maphys++ supersedes the MaPHyS (Massively Parallel Hybrid Solver) software package that implements parallel linear solvers coupling direct and iterative approaches.
  • Celeste: a C++ library for Efficient Linear and Eigenvalue Solvers using TEnsor decompositions. This library is co-developed with O. Kaya from LISN.
  • Scalfmm: Scalable Fast Multipole Method is a tool to compute interactions between pairs of particles using the fast multipole method.
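
The following plain-NumPy sketch illustrates the randomized range-finder principle behind randomized low-rank approximation, the kind of kernel provided by fmr and used by Cppdiodon. It is a generic illustration, not the API of either library, and the function name randomized_svd is only used here.

import numpy as np

def randomized_svd(A, rank, n_oversample=10, rng=None):
    """Approximate rank-`rank` SVD of A from a random sketch of its range."""
    rng = np.random.default_rng(0) if rng is None else rng
    Omega = rng.standard_normal((A.shape[1], rank + n_oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)               # orthonormal basis of the sampled range
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)  # small dense SVD
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank, :]

# Usage on a synthetic matrix of numerical rank 50.
rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 800))
U, s, Vt = randomized_svd(A, rank=50)
print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))

Only two passes over A plus a small dense SVD are required, which is what makes the randomized approach attractive for very large matrices.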

7. Collaborations

7.1. National initiatives

  • High Performance Spacecraft Plasma Interaction Software
    • Acronym: HSPIS
    • Duration: 2022 - 2024
    • Funding: ESA
    • Coordinator: ONERA
    • Partners: Airbus DS, Artenium, Inria
    • Summary: Controlling the plasma environment of satellites is a key issue for satellite design and propulsion. Three-dimensional numerical modelling is thus a key element, particularly in the preparation of future space missions. The SPIS code is today the reference in Europe for the simulation of these phenomena. The methods used to describe the physics of these plasmas are based on the representation of the plasma by a system of particles moving in an (here unstructured) mesh under the effect of the electric field, which satisfies the Poisson equation; a minimal didactic particle-in-cell sketch is given after this list. ESA has recently shown an interest in applications requiring complex 3D calculations, which may involve several tens of millions of cells and several tens of billions of particles, and therefore in a highly parallel and scalable version of the SPIS code.
  • Massively parallel sparse grid PIC algorithms for low temperature plasma simulations
    • Acronym: Maturation
    • Duration: 2022 - 2026
    • Funding: ANR
    • Coordinator: Laplace
    • Partners: IMT, Inria, Maison de la simulation
    • Summary: The project aims at introducing a new class of PIC algorithms with unprecedented computational efficiency by analyzing, improving, parallelizing, optimizing and benchmarking a method recently proposed in the literature, based on a combination of sparse-grid techniques and the PIC algorithm, in the demanding context of partially magnetized low-temperature plasmas through large-scale 2D and 3D computations (see also the particle-in-cell sketch after this list).
  • Magnetic Digital Twins for Spintronics : nanoscale simulation platform
    • Acronym: Diwina
    • Duration: 2022 - 2026
    • Funding: ANR
    • Coordinator: Institut Néel
    • Partners: CMAP, Inria, Spintec
    • Summary: The DiWiNa project aims at developing a unified open-access platform for spintronic numerical twins, i.e., codes for micromagnetic/spintronic simulations with sufficiently high reliability and speed that they can be trusted and used as stand-ins for reality. The simulations will be bridged to the advanced microscopy techniques used by the community through plugins that convert static or time-resolved 3D vector fields into contrast maps for the various techniques, including their experimental transfer functions. To achieve this, we bring together experts from different disciplines to address the various challenges: spintronics for the core simulations, mathematics for trust, algorithmics for speed, and experimentalists for the bridge with microscopy. Practical work consists of checking the time-integration stability of the spintronic torques involved in the dynamics when implemented in the versatile finite-element framework, improving the calculation speed through advanced libraries, building the bridge with microscopies through rendering tools, and encapsulating these three key ingredients into a user-friendly Python ecosystem. Through open access and versatile, user-friendly encapsulation, we expect this platform to be suited to serve the needs of the entire physics and engineering community of spintronics. The platform will be unique in its features, ranging from simulation to direct and practical comparison with experiments. It will contribute to considerably reducing the number of experimental screenings needed for the faster development of new spintronic devices, which are expected to play a key role in energy saving.
  • Low rank tensor decomposition for fast electromagnetic modeling of electrical engineering devices
    • Acronym: tensorVIM
    • Duration: 2022 - 2026
    • Funding: ANR
    • Coordinator: Laplace
    • Partners: G2ELab, Inria
    • Summary: This project aims to develop powerful computational tools for the rapid implementation of electromagnetic simulations of electrical engineering devices using volume integral methods. These techniques, based on a tensor formulation of the numerical solution, have recently been applied to integral methods; they make it possible to reduce the complexity of the computations and to avoid the increase in cost with the dimension.
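
Both the HSPIS and Maturation summaries above rest on the particle-in-cell (PIC) loop: deposit particle charge on a mesh, solve the Poisson equation for the electric field, interpolate the field back to the particles, and push them. The minimal 1D periodic sketch below (plain NumPy, normalized units, nearest-grid-point weighting) is only a didactic illustration of that loop; it is unrelated to the SPIS code base or to the sparse-grid variants developed in these projects.

import numpy as np

# Toy 1D electrostatic PIC on a periodic domain: one positive species with a
# neutralizing background, normalized units (charge = mass = epsilon_0 = 1).
ng, n_part, L, dt, steps = 64, 10000, 2 * np.pi, 0.1, 50
dx = L / ng
rng = np.random.default_rng(0)
x = rng.uniform(0.0, L, n_part)             # particle positions
v = 0.1 * rng.standard_normal(n_part)       # particle velocities
k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)    # Fourier modes of the grid

for _ in range(steps):
    # 1) charge deposition on the mesh (nearest-grid-point weighting)
    cells = (x / dx).astype(int) % ng
    rho = np.bincount(cells, minlength=ng) * ng / n_part - 1.0
    # 2) field solve: d^2 phi / dx^2 = -rho, hence E_k = -1j * rho_k / k
    rho_k = np.fft.fft(rho)
    E_k = np.zeros(ng, dtype=complex)
    E_k[1:] = -1j * rho_k[1:] / k[1:]        # skip the k = 0 (mean) mode
    E = np.real(np.fft.ifft(E_k))
    # 3) gather the field at the particle positions and push the particles
    v += E[cells] * dt
    x = (x + v * dt) % L

print("final kinetic energy:", 0.5 * np.mean(v**2))

Production codes use higher-order particle weighting, unstructured-mesh or sparse-grid field solvers and massively parallel particle handling, but the overall structure of the loop is the same.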

7.2. European initiatives

  • A network for supporting the coordination of High-Performance Computing research between Europe and Latin America
    • Acronym: RISC2
    • Duration: 2021 - 2023
    • Funding: H2020
    • Coordinator: Atlantis Inria
    • Partners: Forschungszentrum Jülich GmbH (Germany), Inria (France), Bull SAS (France), INESC TEC (Portugal), Universidade de Coimbra (Portugal), CIEMAT (Spain), CINECA (Italy), Universidad de Buenos Aires (Argentina), Universidad Industrial de Santander (Colombia), Universidad de la República (Uruguay), Laboratório Nacional de Computação Científica (Brazil), Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional (Mexico), Universidad de Chile (Chile), Fundação Coordenação de Projetos, Pesquisas e Estudos Tecnológicos COPPETEC (Brazil), Fundación Centro de Alta Tecnología (Costa Rica)
    • Summary: Recent advances in AI and the Internet of Things allow high performance computing (HPC) to surpass its limited use in science and defence and extend its benefits to industry, healthcare and the economy. Since all regions intensely invest in HPC, coordination and capacity sharing are needed. The EU-funded RISC2 project connects eight important European HPC actors with the main HPC actors from Argentina, Brazil, Chile, Colombia, Costa Rica, Mexico and Uruguay to enhance cooperation between their research and industrial communities on HPC application and infrastructure development. The project will deliver a cooperation roadmap addressing policy-makers and the scientific and industrial communities to identify central application areas, HPC infrastructure and policy needs.

8. Publications

9. Internal (private)

Author: concace team

Created: 2024-02-14 Wed 15:22
