
Re: OpenMPI / MPI transition



On 2023-11-24 15:00, Alastair McKinstry wrote:
My understanding is that MPICH has typically been the reference
implementation: higher quality but less performant, particularly across
the range of fabrics. Certainly I've mostly seen OpenMPI rather than MPICH
on various HPC machines. People would use either OpenMPI or a vendor's
MPI (which may be a forked MPICH with added network hardware support).

I took a straw poll in one of our upstream fora, and the replies there were similar. The serious performance improvements come from the fabric drivers provided by the hardware manufacturers, so it doesn't matter much which implementation we default to. I'm comfortable moving to a mixed default: OpenMPI on 64-bit architectures and MPICH on 32-bit ones, or wherever else it is needed.
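(A minimal sketch of why the default matters little at the source level: an MPI program is written against the MPI standard, not a particular implementation, so something like the following builds unchanged with either OpenMPI's or MPICH's mpicc wrapper.)

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        /* Initialize the MPI runtime; identical call under OpenMPI and MPICH. */
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */

        printf("rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Compile and run with whichever implementation is installed, e.g. "mpicc hello.c -o hello && mpirun -n 4 ./hello"; the source is the same either way, and only the runtime and fabric support underneath differ.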

Drew

