Session 3 – HPC, schemes and architectures

Régis Duvigneau – Inria Sophia Antipolis – Méditerranée – research team OPALE

Num3sis: a software platform for multi-physics simulations


Simulations of complex physical phenomena increasingly require collaborative software development, to couple pieces of code written by specialists from different disciplines (mesh generation, numerical schemes, physical models, high-performance computing, visualization, etc.). Software architectures inherited from the 1980s are usually not well adapted to this new challenge.

Therefore, num3sis is an attempt to build a modern software framework dedicated to numerical simulations, based on tools that are unusual in this context: a plugin system, visual programming, interactive computation and visualization, and a store. The objective is to offer a friendly development framework for research teams, allow easy coupling of existing codes, and foster crossover between different disciplines.

(download the presentation)


Régis Duvigneau received a PhD in Computational Fluid Dynamics from École Centrale de Nantes in 2002, and a Habilitation (HDR) from the University of Nice Sophia-Antipolis in 2013. He joined the Opale project team at the Inria Sophia Antipolis – Méditerranée center in 2005. His research activities focus on numerical methods for the optimization and control of fluidic systems.


Julien Diaz – Inria Bordeaux – Sud-Ouest – research team MAGIQUE 3D

Strategic Action DIP Inria/Total

Seismic imaging provides representations of the subsurface from the solution of the full wave equation, which has been recognized as the best modeling approach since the 70s. The resulting map contains the kinematic and/or dynamic characteristics of the geophysical medium. Reverse Time Migration (RTM) is a widely used technique for locating the different geological layers, based on solving a large number of wave equations in heterogeneous media. It is thus very computationally intensive, but the tremendous progress of scientific computing now makes it achievable. Images are currently obtained by using direct arrivals of acoustic waves, and the transition to elastic waves including multiples is not obvious, essentially because the elastic wave equations are even more computationally demanding. One of the main drawbacks of RTM is the need to store a huge quantity of information, which is prohibitive when using elastic waves. Elastic RTM thus requires algorithms which spare memory as much as possible. The characterization of the Earth's dynamics is even more challenging, because it requires highly accurate numerical schemes while saving memory remains mandatory. This is the purpose of Full Waveform Inversion (FWI), for which new advanced numerical methods combining accuracy and limited memory use have to be designed.
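To make the memory issue concrete, here is a minimal, schematic sketch (not DIP's actual code) of the cross-correlation imaging condition at the heart of RTM: the image at each point is the zero-lag correlation of the forward-propagated source wavefield with the back-propagated receiver wavefield. The toy wavefields below are synthetic arrays; a real RTM would obtain them from a wave-equation solver, and storing the full source wavefield over all time steps is precisely the memory bottleneck the abstract mentions.

```python
import numpy as np

def rtm_image(source_wavefield, receiver_wavefield):
    """Zero-lag cross-correlation over time, point by point.

    Both arrays have shape (n_timesteps, n_points). Keeping the whole
    source wavefield in memory is what production codes try to avoid,
    e.g. via checkpointing or boundary reconstruction.
    """
    return np.sum(source_wavefield * receiver_wavefield, axis=0)

# Toy example: 100 time steps on a 50-point grid.
rng = np.random.default_rng(0)
nt, nx = 100, 50
s = rng.standard_normal((nt, nx))   # "forward" wavefield (synthetic)
r = np.zeros((nt, nx))
r[:, 25] = s[:, 25]                 # wavefields coincide at one point
image = rtm_image(s, r)
print(int(np.argmax(image)))        # the correlated point, index 25, lights up
```

The point where the two wavefields correlate produces a bright spot in the image, mimicking how a reflector is located.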

The production of efficient software packages for seismic imaging thus requires gathering scientists with complementary skills, such as geophysicists, applied mathematicians and computer scientists. Hence, Total and Inria have decided to create DIP (Depth Imaging Partnership), an original concept bringing together different Inria teams with the joint objective of designing industrial software for seismic imaging. This talk describes the organization of DIP, its main achievements since its creation in 2009, and its scientific outlook for the 2014-2018 period.


Julien Diaz received a PhD in Applied Mathematics from Université Paris 6 in 2005. After two post-docs, at EDF (France) and at the mathematics institute in Basel (Switzerland), he joined Inria Bordeaux – Sud-Ouest as a “Chargé de Recherches” in the Magique 3D project team in 2007. His main research interest is the design of numerical methods for the simulation of wave propagation in geophysical media.


Olivier Coulaud – Inria Bordeaux – Sud-Ouest – research team HIEPACS


It is widely accepted today that numerical simulation is the third pillar of scientific discovery, at the same level as theory and experimentation. Numerous analyses have also confirmed that high-performance simulation will open new opportunities not only for research but also for a large spectrum of industrial sectors. On the route to exascale, emerging parallel platforms exhibit hierarchical structures, both in their memory organization and in the granularity of the parallelism they can exploit.

In this joint project (FastLA) between Inria HiePACS, Lawrence Berkeley National Laboratory (LBNL) and Stanford University, we propose to study, design and implement hierarchical, parallel, scalable numerical techniques to address two challenging numerical kernels involved in many intensive simulation codes: N-body interaction calculations and the solution of large sparse linear systems. These two kernels share common hierarchical features and algorithmic challenges, as well as numerical tools such as low-rank matrix approximations expressed through H-matrix calculations.
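The low-rank idea underlying H-matrix techniques can be illustrated with a small sketch (an illustration of the general principle, not FastLA code): an off-diagonal block describing interactions between two well-separated point clusters is numerically low-rank and can be compressed by a truncated SVD. The 1/|x−y| kernel below is an assumed stand-in for the interaction kernels arising in N-body problems.

```python
import numpy as np

def low_rank_approx(block, rank):
    """Best rank-k approximation of a matrix block (Eckart-Young)."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

# Two well-separated clusters of points on a line.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(10.0, 11.0, 200)
block = 1.0 / np.abs(x[:, None] - y[None, :])   # 200x200 interaction block

approx = low_rank_approx(block, rank=5)
rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
print(f"rank-5 relative error: {rel_err:.1e}")
```

Because the clusters are far apart, a rank-5 approximation already reproduces the 200x200 block to high accuracy, which is why hierarchical partitioning into such compressible blocks reduces both memory and operation counts.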

(download the presentation)


Erwan Faou – Inria Rennes – Bretagne Atlantique – research team IPSO

Wave turbulence: mathematical and computational challenges

In this talk, I will focus on the difficulties of numerical computations performed on nonlinear physical models, such as those found in fluid and plasma mechanics, but also in molecular dynamics and astronomy. In particular, I will take the example of wave turbulence, for which the existing physical theories are not yet satisfactory, resting for the moment on non-rigorous mathematical considerations and on numerical simulations that are often contradictory. In this rapidly growing field, I will try to show the fundamental role played by numerical mathematics, which must reconcile rigorous mathematical analysis, very long-time numerical simulations, and statistical computations such as uncertainty quantification.


After studying at the École Normale Supérieure de Cachan, Brittany campus (now ENS Rennes), Erwan Faou wrote his PhD thesis on shell theory and Riemannian geometry. In 2001 he joined Inria as a “chargé de recherches”. His work concerns the numerical simulation of Hamiltonian systems, probabilistic numerical methods in molecular dynamics, and the theory of partial differential equations. Since 2009 he has been an Inria senior researcher (“directeur de recherche”) in the IPSO project team at the Inria Rennes – Bretagne Atlantique center. In 2011 he received an ERC Starting Grant from the European Union. Since 2012, he has been teaching at the École Normale Supérieure in Paris.

Luc Bougé – Inria Rennes – Bretagne Atlantique – research team KERDATA


Data@Exascale is an associate team between the KerData team from Inria Rennes – Bretagne Atlantique, Argonne National Laboratory (ANL) and the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC). Our research addresses large-scale data management for post-petascale supercomputers and for clouds. We aim to investigate several open issues related to storage and I/O in HPC, but also in situ data visualization and analysis from large-scale simulations. This talk will summarize the current state of this joint work.

(download the presentation) (download the video)


Luc Bougé is a Professor at the newly created ENS Rennes, formerly known as the Brittany extension of ENS Cachan, Ker Lann campus. Luc earned his “State Thesis” in Informatics at University Paris 7 in 1987. He worked as a CNRS junior researcher at LIENS, ENS Paris, then as a professor at ENS Lyon, and at ENS Cachan/Rennes since 2001. Luc has always been interested in parallel programming languages, their semantics, their compilers and their associated run-time environments: high-performance communication interfaces, data-parallel operators, and large-scale data management support.

The KerData team is led by Gabriel Antoniu. It was founded in 2009 as a spin-off of Thierry Priol's Paris project. KerData is dedicated to supporting data-intensive, high-performance applications that need to handle very large data sets on clouds and post-petascale platforms. It specifically focuses on applications handling massive BLOBs (Binary Large OBjects), on the order of terabytes, stored across a large number of nodes (thousands to tens of thousands) and accessed under heavy concurrency by a large number of processes (thousands to tens of thousands at a time), with a relatively fine access grain, on the order of megabytes.