
DUMPI with a Fortran code #6

Open
sudheerchunduri opened this issue Jul 20, 2017 · 1 comment

Comments


sudheerchunduri commented Jul 20, 2017

Hi,
I am using the DUMPI library, instrumenting a FORTRAN 77 code with calls to the libdumpi_enable_profiling() and libdumpi_disable_profiling() routines.

I built the code with "-I/projects/Performance/chunduri/sst-dumpi/install/include -L/projects/Performance/chunduri/sst-dumpi/install/lib -ldumpif77 -ldumpi".
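
For reference, the instrumentation in the code looks roughly like this (a minimal sketch rather than the actual source; the surrounding MPI work is elided):

      program traced_app
      implicit none
      include 'mpif.h'
      integer ierr
      call MPI_INIT(ierr)
c     tracing is bracketed around the region of interest
      call libdumpi_disable_profiling()
c     ... setup that should not appear in the trace ...
      call libdumpi_enable_profiling()
c     ... MPI communication to be traced ...
      call libdumpi_disable_profiling()
      call MPI_FINALIZE(ierr)
      end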

The build completed without any issues; however, when I run the code, it generates the following error:

Rank 1378 [Thu Jul 20 02:48:45 2017] [c1-1c2s10n2] Fatal error in MPI_Attr_get: Invalid argument, error stack:
MPI_Attr_get(141): MPI_Attr_get(MPI_COMM_WORLD, keyval=1681915906, attr_value=0x7fffffff63e4, flag=0x7fffffff6290) failed
MPI_Attr_get(99).: The attribute value is not the address of a pointer or pointer-sized integer. A common error is to pass the address of an integer to any of the MPI_Xxx_get_attr routines on systems where the size of a pointer is larger than the size of an integer.
.....
....
A similar error is reported from every MPI rank. I suspect this is a Fortran/C interoperability issue.
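
For what it's worth, the message appears to describe the classic mismatch between a default INTEGER and a pointer-sized attribute value. Below is a minimal standalone sketch of the two Fortran bindings, just to illustrate what the message is warning about (this is not code from my application or from DUMPI):

      program attr_check
      implicit none
      include 'mpif.h'
      integer ierr
      logical flag
c     deprecated binding: the attribute value is a default INTEGER
c     (the size mismatch the error message warns about on LP64)
      integer attr_old
c     MPI-2 binding: the attribute value is pointer-sized, so safe
      integer(kind=MPI_ADDRESS_KIND) attr_new
      call MPI_INIT(ierr)
      call MPI_ATTR_GET(MPI_COMM_WORLD, MPI_TAG_UB, attr_old,
     &                  flag, ierr)
      call MPI_COMM_GET_ATTR(MPI_COMM_WORLD, MPI_TAG_UB, attr_new,
     &                       flag, ierr)
      call MPI_FINALIZE(ierr)
      end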

Can you suggest any way to resolve this?

Thanks
Sudheer


jjwilke commented Aug 7, 2017

Sorry about that. The GitHub notifications apparently aren't working for that repo, so I missed the issue.
I am your point of contact. I assume this issue occurs regardless of whether the libdumpi_enable/disable calls are present?

Your best bet for now is simply not to add that function to the trace. It doesn't really provide any useful info for analysis or replay anyway.
When configuring, add the following to CPPFLAGS:

-DDUMPI_SKIP_MPI_ATTR_GET
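
For example, assuming you're using the standard configure script and the same install prefix as before, something along these lines (keep whatever other options you already pass):

./configure CPPFLAGS="-DDUMPI_SKIP_MPI_ATTR_GET" \
    --prefix=/projects/Performance/chunduri/sst-dumpi/install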

You can keep turning off functions that don't matter but are breaking the code. Hopefully the interop bug isn't present in any MPI_Send, collective, etc. calls.
It's obviously not ideal, but it's your best bet until we can fix the interoperability bug.
