PetscSFPattern#
Pattern of the PetscSF graph
Synopsis#
#include <petscsf.h>
typedef enum {
  PETSCSF_PATTERN_GENERAL = 0,
  PETSCSF_PATTERN_ALLGATHER,
  PETSCSF_PATTERN_GATHER,
  PETSCSF_PATTERN_ALLTOALL
} PetscSFPattern;
Values#
- PETSCSF_PATTERN_GENERAL - A general graph. One sets the graph with PetscSFSetGraph() and usually does not use this enum directly.
- PETSCSF_PATTERN_ALLGATHER - A graph in which every rank gathers all roots from all ranks (like MPI_Allgather()). One sets the graph with PetscSFSetGraphWithPattern(); see the sketch after this list.
- PETSCSF_PATTERN_GATHER - A graph in which rank 0 gathers all roots from all ranks (like MPI_Gatherv() with root=0). One sets the graph with PetscSFSetGraphWithPattern().
- PETSCSF_PATTERN_ALLTOALL - A graph in which every rank gathers different roots from all ranks (like MPI_Alltoall()). One sets the graph with PetscSFSetGraphWithPattern(). In an ALLTOALL graph, we assume each process has n leaves and n roots, with each leaf connecting to a remote root, where n is the size of the communicator. This does not mean one cannot communicate multiple data items between a pair of processes; one just needs to create an MPI datatype for the multiple items, e.g., with MPI_Type_contiguous().
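As a minimal usage sketch (an illustration, not part of this page; it assumes a recent PETSc with PetscCall()/PetscCallMPI() error checking and the MPI_Op argument to PetscSFBcastBegin()), the following builds an ALLGATHER-pattern PetscSF with one root per rank and broadcasts the roots to the leaves, reproducing MPI_Allgather():

#include <petscsf.h>

int main(int argc, char **argv)
{
  PetscSF     sf;
  PetscLayout map;
  PetscMPIInt rank, size;
  PetscInt    root, *leaves;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &size));

  /* A layout with one root per process describes how the roots are distributed */
  PetscCall(PetscLayoutCreate(PETSC_COMM_WORLD, &map));
  PetscCall(PetscLayoutSetLocalSize(map, 1));
  PetscCall(PetscLayoutSetUp(map));

  PetscCall(PetscSFCreate(PETSC_COMM_WORLD, &sf));
  PetscCall(PetscSFSetGraphWithPattern(sf, map, PETSCSF_PATTERN_ALLGATHER));

  /* Each rank contributes its rank number; after the broadcast every rank's
     leaves[] holds 0, 1, ..., size-1, as MPI_Allgather() would produce */
  root = rank;
  PetscCall(PetscMalloc1(size, &leaves));
  PetscCall(PetscSFBcastBegin(sf, MPIU_INT, &root, leaves, MPI_REPLACE));
  PetscCall(PetscSFBcastEnd(sf, MPIU_INT, &root, leaves, MPI_REPLACE));

  PetscCall(PetscFree(leaves));
  PetscCall(PetscSFDestroy(&sf));
  PetscCall(PetscLayoutDestroy(&map));
  PetscCall(PetscFinalize());
  return 0;
}

With PETSCSF_PATTERN_GATHER the same calls would deposit all roots only on rank 0. To communicate k items per root-leaf pair, build a unit type with MPI_Type_contiguous(k, MPIU_INT, &unit) followed by MPI_Type_commit(&unit) and pass it as the datatype argument.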
See Also#
PetscSF, PetscSFSetGraph(), PetscSFSetGraphWithPattern()
Level#
beginner