A distributed-memory MPI parallelization scheme for multi-domain incompressible SPH

Napoli E. (last author)
2022-12-01

Abstract

A parallel scheme for a multi-domain truly incompressible smoothed particle hydrodynamics (SPH) approach is presented. The proposed method is developed for distributed-memory architectures, using the Message Passing Interface (MPI) paradigm for communication between partitions. The proposal aims to overcome one of the main drawbacks of the SPH method, namely its high computational cost compared with mesh-based methods, by coupling a multi-resolution approach with parallel computing techniques. The multi-domain approach employs different resolutions by subdividing the computational domain into non-overlapping blocks separated by block interfaces. The particles belonging to the different blocks are efficiently distributed among the processors, ensuring well-balanced loads. The parallelization procedure handles particle exchanges both across the blocks and across the competence domains of the processors. The matching of the velocity values between neighbouring blocks is obtained by solving a system of interpolation equations at each block interface through a parallelized BiCGSTAB algorithm. In contrast, a single pseudo-pressure system is solved in parallel, comprising the pressure Poisson equations of the fluid particles of all blocks and the interpolation equations of all block interfaces. The test cases show a strong reduction in the computational effort of the SPH method, thanks to the combination of the multi-resolution approach and the proposed parallel algorithms.
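The abstract states that the parallelization procedure handles particle exchanges across the competence domains of the processors. Purely as an illustration (the record contains no code, and this is not the authors' implementation), the following minimal C/MPI sketch shows what such a migration step can look like; the 1-D slab decomposition along x, the Particle record with position, velocity and pseudo-pressure, and all identifiers are assumptions made for the example.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    double x[3];   /* position        */
    double v[3];   /* velocity        */
    double p;      /* pseudo-pressure */
} Particle;        /* 7 contiguous doubles, no padding */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* One MPI datatype describing a whole particle record. */
    MPI_Datatype MPI_PARTICLE;
    MPI_Type_contiguous(7, MPI_DOUBLE, &MPI_PARTICLE);
    MPI_Type_commit(&MPI_PARTICLE);

    /* Competence domain of this rank: slab [lo, hi) along x (assumed). */
    const double L = 1.0;
    double lo = L * rank / size, hi = L * (rank + 1) / size;

    /* Toy local particle set; in the real solver these come from the
     * SPH time integration and may have drifted out of the slab.     */
    int n = 100;
    Particle *part = malloc(n * sizeof *part);
    srand(rank + 1);
    for (int i = 0; i < n; ++i) {
        Particle q = {{lo + (hi - lo) * (1.2 * rand() / RAND_MAX - 0.1), 0.0, 0.0},
                      {0.0, 0.0, 0.0}, 0.0};
        part[i] = q;
    }

    /* Sort particles into keep / send-left / send-right buffers. */
    Particle *sendL = malloc(n * sizeof *part);
    Particle *sendR = malloc(n * sizeof *part);
    int nk = 0, nl = 0, nr = 0;
    for (int i = 0; i < n; ++i) {
        if      (part[i].x[0] <  lo && rank > 0)        sendL[nl++] = part[i];
        else if (part[i].x[0] >= hi && rank < size - 1) sendR[nr++] = part[i];
        else                                            part[nk++]  = part[i];
    }

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Exchange counts first, then the particle payloads. */
    int fromR = 0, fromL = 0;
    MPI_Sendrecv(&nl, 1, MPI_INT, left, 0, &fromR, 1, MPI_INT, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&nr, 1, MPI_INT, right, 1, &fromL, 1, MPI_INT, left, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    Particle *recv = malloc((fromL + fromR + 1) * sizeof *recv);
    MPI_Sendrecv(sendL, nl, MPI_PARTICLE, left, 2,
                 recv, fromR, MPI_PARTICLE, right, 2,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(sendR, nr, MPI_PARTICLE, right, 3,
                 recv + fromR, fromL, MPI_PARTICLE, left, 3,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d: kept %d, sent %d, received %d particles\n",
           rank, nk, nl + nr, fromL + fromR);

    free(part); free(sendL); free(sendR); free(recv);
    MPI_Type_free(&MPI_PARTICLE);
    MPI_Finalize();
    return 0;
}
```

In the paper's multi-domain setting the same pattern would be repeated per block and combined with the load-balancing and block-interface interpolation steps, which this sketch does not attempt to reproduce.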
Dec 2022
Settore ICAR/01 - Hydraulics
Monteleone A., Burriesci G., Napoli E. (2022). A distributed-memory MPI parallelization scheme for multi-domain incompressible SPH. JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 170, 53-67 [10.1016/j.jpdc.2022.08.004].
Files in this item:

File: A distributed-memory MPI parallelization scheme for multi-domain incompressible SPH.pdf
  Type: Published version
  Size: 2.06 MB
  Format: Adobe PDF
  Access: Archive managers only (copy available on request)

File: Parallel_MD_SPH.pdf
  Type: Pre-print
  Size: 4.8 MB
  Format: Adobe PDF
  Access: Open access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/10447/582332
Citations
  • PMC: not available
  • Scopus: 4
  • Web of Science: not available