[MPI] Bug in triangulation preconditioning for explicit triangulations #1111

@eve-le-guillou

Description

Hi all,

A bug has been brought to my attention for Explicit Triangulations. I've reproduced it here using the pipeline below. Although the pipeline uses the DiscreteGradient filter, the problem seems to lie in the preconditioning of the triangulation rather than in the filter itself.

To reproduce:

  from paraview.simple import *
  from paraview import vtk

  # Query the global MPI controller for rank information
  ctrl = vtk.vtkMultiProcessController.GetGlobalController()
  rank = ctrl.GetLocalProcessId()
  size = ctrl.GetNumberOfProcesses()

  print(f"Running with {size} MPI ranks")

  # Generate a test mesh and tetrahedralize it
  mesh = Wavelet(registrationName='Wavelet1')
  mesh.UpdatePipeline()

  tetrahedralize1 = Tetrahedralize(registrationName='Tetrahedralize1', Input=mesh)
  tetrahedralize1.UpdatePipeline()

  print(f"[Rank {rank}] Input mesh loaded. Applying TTK filters...")

  # Distribute the unstructured grid across the MPI processes
  redistributeDataSet1 = RedistributeDataSet(registrationName='RedistributeDataSet1', Input=tetrahedralize1)
  redistributeDataSet1.UpdatePipeline()

  # Ensure consistent global ordering for parallel runs
  arrayPreconditioning = TTKArrayPreconditioning(Input=redistributeDataSet1)
  arrayPreconditioning.UpdatePipeline()

  # Discrete gradient computation: this is where the pipeline
  # hangs or crashes, during triangulation preconditioning
  discreteGradient = TTKDiscreteGradient(Input=arrayPreconditioning)
  discreteGradient.UpdatePipeline()

To run the pipeline, use:

OMP_NUM_THREADS=1 mpirun -n 4 pvbatch pipeline.py

Expected behavior
The pipeline should generate an Unstructured Grid, distribute it across all processes and compute the Discrete Gradient. Instead, it will either hang indefinitely or crash at the preconditioning stage of the Discrete Gradient. The bug happens regardless of the number of threads.

System:

  • OS: Linux
  • TTK: latest
  • MPI: MPICH 4.3.0 or OpenMPI 4.0.3 (both fail)
  • ParaView: 5.13
