
dkuzmin72 commented on September 24, 2024

Hi Chris! The diff is below.
Our application is not the simplest one: it creates communicators, and those communicators can be quite big. I have also received complaints from colleagues, for example: "I've noticed NAMD having the same issue. With one of the datasets I'm running, it finished in 21 minutes without IPM, while with IPM I'm now estimating at least 9 hours to complete."
NAMD uses Charm++, which exploits MPI_Isend a lot.
NAMD: http://www.ks.uiuc.edu/Research/namd/development.html
example1: http://www.ks.uiuc.edu/Research/namd/utilities/stmv.tar.gz
example2: http://www.ks.uiuc.edu/Research/namd/utilities/apoa1.tar.gz
I understand that building and running the NAMD examples takes a lot of time. It may be easier to create a simple example that creates a communicator by means of MPI_Cart_create and calls MPI_Isend in a loop (a sketch of such a reproducer follows the diff below).
We used OpenMPI for testing.

$git diff
diff --git a/include/mod_mpi.h b/include/mod_mpi.h
index 135a558..a03b676 100755
--- a/include/mod_mpi.h
+++ b/include/mod_mpi.h
@@ -27,9 +27,7 @@ extern MPI_Group ipm_world_group;

 #define IPM_MPI_MAP_RANK(rank_out_, rank_in_, comm_) \
   do { \
-    int comm_cmp_; \
-    PMPI_Comm_compare(MPI_COMM_WORLD, comm_, &comm_cmp_); \
-    if (comm_cmp_ == MPI_IDENT || rank_in_ == MPI_ANY_SOURCE) { \
+    if (comm_ == MPI_COMM_WORLD || rank_in_ == MPI_ANY_SOURCE) { \
       rank_out_=rank_in_; \
     } else { \
       MPI_Group group_; \
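
Along the lines suggested above, here is a minimal reproducer sketch (illustrative only, not part of the original report; the topology, message size, and iteration count are assumptions). It builds a 1-D periodic Cartesian communicator with MPI_Cart_create and drives many MPI_Isend/MPI_Irecv pairs through it, so that IPM's rank mapping runs on a non-world communicator for every call:

/* Hypothetical reproducer: many nonblocking sends on a Cartesian
 * communicator, exercising IPM's per-call rank mapping. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nranks, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 1-D periodic Cartesian topology: every rank has both neighbors. */
    int dims[1] = { nranks };
    int periods[1] = { 1 };
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);

    int left, right;
    MPI_Cart_shift(cart, 0, 1, &left, &right);

    const int iters = 100000;   /* large enough for the overhead to show */
    int sendbuf = rank, recvbuf;
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        MPI_Request reqs[2];
        MPI_Irecv(&recvbuf, 1, MPI_INT, left, 0, cart, &reqs[0]);
        MPI_Isend(&sendbuf, 1, MPI_INT, right, 0, cart, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("time for %d Isend/Irecv pairs: %f s\n", iters, t1 - t0);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}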


cdaley commented on September 24, 2024

Thanks for reporting the issue, Dmitry.

If possible, can you please send a diff of your fix so we can be sure we understand your changes correctly?

Also, it would be really helpful if you could point us to the simplest application or benchmark you have that reproduces the slow performance.

Thanks, Chris.


cdaley commented on September 24, 2024

Thanks. I'm happy to build and run NAMD. It usually works better to start from the production application rather than creating a synthetic benchmark in isolation. Can you give me the smallest and shortest-running NAMD test problem that has high IPM performance overhead (even if it is not scientifically meaningful)? I'm also a little puzzled, because I thought NAMD uses Charm++ rather than MPI for communication. Perhaps we can move our conversation to email?

Chris



dkuzmin72 commented on September 24, 2024

Hi,
I created a small test case that shows the difference in performance:
comm.txt
I ran it on 29 nodes with 900 processes (you need a large core count to see the overhead of MPI_Comm_compare).
With the original code the wallclock was around 1.4-1.5 s, while with the modified code it was 1.15-1.20 s.
How to reproduce:

  1. rename comm.txt to comm.c
  2. compile: mpicc -o comm comm.c
  3. run: LD_PRELOAD=libipm.so mpirun -np 900 ./comm

You can measure the time for MPI_Isend() and for MPI_Comm_compare() separately to understand the overhead; a sketch of such a micro-measurement follows.
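
A sketch of such a micro-measurement (illustrative, not the attached comm.txt source): it times PMPI_Comm_compare against the direct handle test from the patch, mirroring what IPM's IPM_MPI_MAP_RANK macro does on every intercepted call.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* A non-world communicator, as the test case creates. */
    MPI_Comm cart;
    int dims[1], periods[1] = { 1 };
    MPI_Comm_size(MPI_COMM_WORLD, &dims[0]);
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);

    const int iters = 100000;
    volatile int sink = 0;   /* keep the loops from being optimized away */

    /* Original code path: PMPI_Comm_compare on every call. */
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        int cmp;
        PMPI_Comm_compare(MPI_COMM_WORLD, cart, &cmp);
        sink += cmp;
    }
    double t1 = MPI_Wtime();

    /* Patched code path: O(1) handle comparison. */
    for (int i = 0; i < iters; i++)
        sink += (cart == MPI_COMM_WORLD);
    double t2 = MPI_Wtime();

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("PMPI_Comm_compare: %f s, handle compare: %f s (sink=%d)\n",
               t1 - t0, t2 - t1, (int)sink);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}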

Again, it's hardly possible to see anything with a small core count.

Regards!
---Dmitry


cdaley commented on September 24, 2024

Thanks, Dmitry,

I ran your comm.c application on the Intel KNL nodes of the Cori supercomputer (my only customization was to add a timer between MPI_Init and MPI_Finalize to measure the run time with and without IPM; a sketch of what such a timer might look like appears after the numbers below). Cori has cray-mpich-7.6.2. I used 15 nodes with 68 MPI ranks per node, for a total of 1020 MPI ranks. I found minimal overhead from IPM in this configuration:

Without IPM: time between MPI_Init and MPI_Finalize = 0.24 seconds
With IPM: time between MPI_Init and MPI_Finalize = 0.11 seconds. IPM wallclock = 0.53 seconds
(The IPM wallclock is higher than my custom timer because the IPM time includes the time that IPM spends in MPI_Finalize)
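
A minimal sketch of the kind of timer described above (illustrative, not the actual code used):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    double t0 = MPI_Wtime();

    /* ... the work of comm.c would go here ... */

    double t1 = MPI_Wtime();
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("time between MPI_Init and MPI_Finalize: %.2f s\n", t1 - t0);
    MPI_Finalize();
    return 0;
}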

I then built OpenMPI-3.0.0 on Cori. There is now a definite slowdown when using IPM:
Run 1:
Without IPM: time between MPI_Init and MPI_Finalize = 0.70 seconds
With IPM: time between MPI_Init and MPI_Finalize = 5.30 seconds. IPM wallclock = 26.05 seconds
Run 2:
Without IPM: time between MPI_Init and MPI_Finalize = 0.64 seconds
With IPM: time between MPI_Init and MPI_Finalize = 5.72 seconds. IPM wallclock = 26.51 seconds

I will investigate further. Which version of MPI did you use? There is no monitored MPI_Comm_compare call in either of my configurations.

See OpenMPI results:

#
# command   : /global/cscratch1/sd/csdaley/ipm-overhead/openmpi/./comm.ipm 
# start     : Tue Mar 06 11:08:37 2018   host      : nid12452        
# stop      : Tue Mar 06 11:09:03 2018   wallclock : 26.05
# mpi_tasks : 1020 on 15 nodes           %comm     : 20.01
# mem [GB]  : 31.66                      gflop/sec : 0.00
#
#           :       [total]        <avg>          min          max
# wallclock :      26517.39        26.00        25.96        26.05 
# MPI       :       5307.36         5.20         0.20         5.25 
# %wall     :
#   MPI     :                      20.01         0.77        20.17 
# #calls    :
#   MPI     :          9178            8            8         1026
# mem [GB]  :         31.66         0.03         0.03         0.03 
#
#                             [time]        [count]        <%wall>
# MPI_Wait                   2683.75           1019          10.12
# MPI_Barrier                2622.88           1020           9.89
# MPI_Irecv                     0.44           1019           0.00
# MPI_Isend                     0.20           1019           0.00
# MPI_Comm_free                 0.09           1020           0.00
# MPI_Comm_rank                 0.00           1020           0.00
# MPI_Comm_size                 0.00           1020           0.00
# MPI_Waitall                   0.00              1           0.00
# MPI_Init                      0.00           1020           0.00
# MPI_Finalize                  0.00           1020           0.00
#
###################################################################


dkuzmin72 commented on September 24, 2024

Hi Chris,

I haven't tried Intel MPI yet; we used OpenMPI. Different MPI implementations may have different algorithms for MPI_Comm_compare.

Each IPM_MPI_* wrapper contains this macro:

IPM_MPI_RANK_DEST_C(irank)

where IPM_MPI_RANK_DEST_C is:

#define IPM_MPI_RANK_DEST_C(rank_) IPM_MPI_MAP_RANK(rank_, dest, comm_in);

and IPM_MPI_MAP_RANK is:

#define IPM_MPI_MAP_RANK(rank_out_, rank_in_, comm_) \
  do { \
    int comm_cmp_; \
    PMPI_Comm_compare(MPI_COMM_WORLD, comm_, &comm_cmp_); \
    ...

Calling PMPI_Comm_compare on every intercepted call leads to huge overhead: comparing two distinct communicators means comparing their groups, whose cost grows with the communicator size, whereas a plain handle comparison is O(1).
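
For reference, a hedged reconstruction of the full patched macro, pieced together from the diff at the top of the thread. The diff cuts off inside the else branch, so the group-translation body below is an assumption (suggested by the ipm_world_group symbol visible in the diff context), not verbatim IPM source:

#define IPM_MPI_MAP_RANK(rank_out_, rank_in_, comm_) \
  do { \
    if (comm_ == MPI_COMM_WORLD || rank_in_ == MPI_ANY_SOURCE) { \
      rank_out_ = rank_in_; \
    } else { \
      /* assumed: translate rank_in_ into MPI_COMM_WORLD's group */ \
      MPI_Group group_; \
      int in_ = (rank_in_), out_; \
      PMPI_Comm_group(comm_, &group_); \
      PMPI_Group_translate_ranks(group_, 1, &in_, ipm_world_group, &out_); \
      PMPI_Group_free(&group_); \
      rank_out_ = out_; \
    } \
  } while (0)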

Yes, we can move the rest of the conversation to email. I made my email address public.

Regards!
---Dmitry


paklui commented on September 24, 2024

Hi Dmitry and Chris,
I am also seeing very high overhead when using IPM to profile. Dmitry's suggested fix works for me too. It seems like it has been some time since the project was updated; is there any chance of merging Dmitry's fix? Thanks


lcebaman commented on September 24, 2024

I see this has been open for a while now. What is the status of this fix?

