need elegant way to handle byte-level pointer math used extensively in algos #2

@raffenet

Description

CIVL does not like it when you do pointer math at the byte level, which MPICH does because type information is external to the buffer supplied by the user.
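
For context, a minimal sketch of the pattern that triggers the report below (my own names, not the actual MPICH source):

```c
#include <stddef.h>

/* Sketch of the byte-level offset pattern: the buffer arrives as void *,
 * and the element size comes from the MPI datatype at runtime, so the
 * offset has to be computed in bytes through a char * cast. */
static void *buf_byte_offset(void *recvbuf, size_t count, size_t type_extent)
{
    /* CIVL models recvbuf as a typed object (here real[10]), so adding
     * count * type_extent *bytes* overruns its element-level bounds. */
    return (char *) recvbuf + count * type_extent;
}
```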

Violation 0 encountered at depth 472:
CIVL execution violation in p1 (property: OUT_OF_BOUNDS, certainty: PROVEABLE) at
(char *) recvbuf
civl_allred_ring.c:185:15-30 | (char *) recvbuf

Pointer addition &<d7>my_allreduce_result[0] + 5*SIZEOF_REAL results in an index out of bound error.
Object: Variable my_allreduce_result
Object type :real[10]
Pointer value: &<d7>my_allreduce_result[0]
Offset value: 5*SIZEOF_REAL
Violated constraint: 0 < 0 + 5*SIZEOF_REAL <= 10

Input:
  _civl_argc=X__civl_argc
  _civl_argv=X__civl_argv
  N=10
  x_val=X_x_val
  _mpi_nprocs=2
  _mpi_nprocs_lo=1
  _mpi_nprocs_hi=10
Context:
  0==(32768 div SIZEOF_REAL)
  forall _t : dynamicType . (0 <= CIVL_SIZEOF(_t) - 1)
  0<=(SIZEOF_INT-1)
  0<=(SIZEOF_REAL-6554)
  0<=(X__civl_argc-1)
Call stacks:
process 0:
  main@23
process 1:
  MPIR_Allgatherv_intra_ring@573 civl_allred_ring.c:185:8 rbuf called from
  MPIR_Allreduce_intra_ring@372 civl_allred_ring.c:102:16 MPIR_Allgatherv_intra_ring called from
  _civl_main@400 ring_allreduce_driver.cvl:45:4 MPIR_Allreduce_intra_ring called from
  _mpi_process@230  called from
  _par_proc0@816
process 2:
  $mpi_sendrecv@685 civl-mpi-blocking.cvl:275:6 $comm_dequeue called from
  MPI_Sendrecv@257 mpi.cvl:225:2 $mpi_sendrecv called from
  MPIR_Allreduce_intra_ring@368 mpiimpl_cvl.h:8:47 MPI_Sendrecv called from
  _civl_main@400 ring_allreduce_driver.cvl:45:4 MPIR_Allreduce_intra_ring called from
  _mpi_process@230  called from
  _par_proc0@816
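
The violated constraint 0 < 0 + 5*SIZEOF_REAL <= 10 shows the mismatch: CIVL checks the offset against the element count of the real[10] object, while the algorithm computes it in bytes. One possible direction, sketched below, is to funnel all buffer-offset arithmetic through a single helper so a verification build can do the arithmetic in whole elements of an element type fixed by the driver. The MPIR_BUF_OFFSET, CIVL_VERIFY, and CIVL_ELEMENT_TYPE names are hypothetical, not existing MPICH or CIVL identifiers.

```c
/* Hypothetical helper: route every byte-level buffer offset through one
 * macro.  A normal build keeps the usual (char *) + bytes arithmetic; a
 * verification build (guard and element type are assumptions, assuming
 * the byte count is a multiple of the element size) steps in whole
 * elements, staying within the bounds of the typed object CIVL tracks. */
#ifdef CIVL_VERIFY
typedef double CIVL_ELEMENT_TYPE;   /* element type fixed by the verification driver */
#define MPIR_BUF_OFFSET(buf, nbytes) \
    ((void *) ((CIVL_ELEMENT_TYPE *) (buf) + (nbytes) / sizeof(CIVL_ELEMENT_TYPE)))
#else
#define MPIR_BUF_OFFSET(buf, nbytes) \
    ((void *) ((char *) (buf) + (nbytes)))
#endif
```

With something along these lines, the (char *) recvbuf arithmetic flagged at civl_allred_ring.c:185 would go through the macro instead of a raw cast, and only the helper would need special handling for verification.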
