
Thread: MPI and CUDA mixed programming

  1. #1
    Join Date
    Mar 2010
    Posts
    67

    MPI and CUDA mixed programming

    Hello guys!
    Does anyone have any suggestions about this? I have been searching a lot on the internet but cannot get any satisfactory results from Google or the other search engines. Has anyone thought about using MPI in conjunction with CUDA?

    Assume that the problem is either straightforward to decompose for parallel execution, or that a very quick conversion to such a form is available.

  2. #2
    Join Date
    May 2008
    Posts
    1,467

    Re: MPI and CUDA mixed programming

    Hey dude, you are going in the right direction, and the answer to your question about combining the two is yes. Was that the question you meant to ask, though?

    CUDA and MPI are orthogonal. You can use MPI to distribute work across more than one computer, each of which uses CUDA to execute its share of the task. I have a small cluster that I am using to do just that.
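    The usual pattern behind "each node executes its share with CUDA" is one MPI rank per GPU, with each rank selecting its device from its local rank. Here is a minimal sketch of that mapping; the actual MPI and CUDA calls (MPI_Comm_rank, cudaGetDeviceCount, cudaSetDevice) are only shown in comments so the snippet stays self-contained plain C, and the rank/GPU counts are made-up examples:

    ```c
    #include <stdio.h>

    /* In the real program each rank would call MPI_Comm_rank() to learn
     * its rank, cudaGetDeviceCount() to learn how many GPUs its node
     * has, and then cudaSetDevice() with the index computed here.  The
     * mapping itself is plain arithmetic: ranks on a node are dealt out
     * round-robin across that node's GPUs. */
    static int device_for_rank(int local_rank, int device_count)
    {
        return local_rank % device_count;
    }

    int main(void)
    {
        /* Example: 4 ranks on a node with 2 GPUs. */
        for (int rank = 0; rank < 4; ++rank)
            printf("rank %d -> GPU %d\n", rank, device_for_rank(rank, 2));
        return 0;
    }
    ```

    With this scheme, launching the job with as many ranks per node as there are GPUs gives every GPU exactly one controlling CPU process, which matches the "one CPU per GPU" setup described later in this thread.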
    Last edited by Marthaz; 18-05-2010 at 11:09 AM.

  3. #3
    Join Date
    Mar 2010
    Posts
    67

    MPI and CUDA mixed programming

    That's fine! I get your suggestion, but I still have some issues. I was quite confused about how to implement this, and I suspected as much, but since I have no suitable hardware at the moment (we are still working through the ordering process) I cannot test it myself. I still have to convince the people with the money to buy the right things.

  4. #4
    Join Date
    May 2006
    Posts
    2,335

    Re: MPI and CUDA mixed programming

    Another way to phrase this is: use MPI for the coarse-grained parallelism between the nodes, and inside each node push the GPU(s) and the CPU cores to their limits with whatever you need. The GPUs are programmed with CUDA. This is, by the way, a 'minimally invasive' technique, and I have been using it with conventional offscreen GL rendering to keep things properly balanced for a while now.

    Things get harder if your parallel (in the MPI sense) application needs much more interaction and your local task is not decoupled in the right way. As an example, think of applying a parallel domain-decomposition technique to a linear system solver: you will hit the PCIe bottleneck pretty quickly, at least for sparse matrices. One thing to remember is that parallel linear system solving is something of a worst case; many applications do far more local work, so the data transfer from MPI through the PCIe interface to the CUDA device is not a prominent bottleneck.
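    A back-of-envelope calculation shows why sparse work hits the PCIe wall so quickly. All the numbers below are illustrative assumptions (roughly 12 bytes moved per nonzero for a CSR matrix-vector product, ~6 GB/s effective PCIe bandwidth, ~100 GFLOP/s sustained on the GPU), not measurements:

    ```c
    #include <stdio.h>

    /* Time to move a given number of bytes at a given bandwidth. */
    double transfer_seconds(double bytes, double gbytes_per_s)
    {
        return bytes / (gbytes_per_s * 1e9);
    }

    int main(void)
    {
        double nnz   = 10e6;          /* 10 million nonzeros              */
        double bytes = nnz * 12.0;    /* ~8 B value + 4 B index per nnz   */

        double t_pcie = transfer_seconds(bytes, 6.0);  /* ~6 GB/s PCIe    */
        double t_flop = 2.0 * nnz / 100e9;  /* 2 flops per nnz, 100 GFLOP/s */

        printf("transfer: %.4f s, compute: %.6f s\n", t_pcie, t_flop);
        /* Shipping the matrix over PCIe every iteration would cost on
         * the order of 100x the compute time, so the matrix has to stay
         * resident on the device and only halo/vector data should move. */
        return 0;
    }
    ```

    The exact ratio depends on the hardware, but the conclusion is robust: for sparse kernels the arithmetic is cheap relative to the bytes, so any scheme that re-transfers the operand every MPI step is transfer-bound.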

  5. #5
    Join Date
    Dec 2007
    Posts
    1,547

    Re: MPI and CUDA mixed programming

    I read this thread completely, and it seems you are confused about how to set up mixed MPI and CUDA programming. For what it's worth, I started with an MPI program and "outsourced" the main computation section onto the GPUs. Typically I use one CPU process per GPU. This works for multiple GPUs in one machine and/or multiple machines.

    This was rather straightforward. I just reused the makefiles from the examples and added the MPI library to them. That's it...

  6. #6
    Join Date
    Mar 2010
    Posts
    67

    Re: MPI and CUDA mixed programming

    You guys are telling me that compiling a mixed MPI and CUDA code is trivial and should work out of the box, but apparently I am not getting it. I have a small makefile that should compile a mixed MPI and CUDA code.

    Code:
    CC=nvcc
    CFLAGS= -I/usr/local/mpich2-1.0.6.p1/include -I/usr/local/cuda/include -I/home/user/NVIDIA_CUDA_SDK/common/inc
    LDFLAGS= -L/usr/local/mpich2-1.0.6.p1/lib -L/usr/local/cuda/lib -L/home/user/NVIDIA_CUDA_SDK/lib -L/home/user/NVIDIA_CUDA_SDK/common/lib
    LIB= -lcuda -lcudart -lcutil -lm -lmpich -lpthread
    SOURCES= Init.c main.c
    EXECNAME= Exec
    
    all:
           $(CC) -o $(EXECNAME) $(SOURCES) $(LIB) $(LDFLAGS) $(CFLAGS)
    And I am getting an error message; the relevant piece is as follows:

    Init.c: In function ‘CUDAInit’:
    Init.c:62: error: ‘cudaDeviceProp’ undeclared (first use in this function)
    Init.c:62: error: (Each undeclared identifier is reported only once
    Init.c:62: error: for each function it appears in.)
    Init.c:62: error: expected ‘;’ before ‘deviceProp’
    Init.c:63: error: ‘deviceProp’ undeclared (first use in this function)
    Init.c:75: error: expected ‘;’ before ‘}’ token
    Init.c:120: error: expected declaration or statement at end of input
    make: *** [all] Error 255
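    Those errors are consistent with nvcc handing Init.c to the host C compiler as plain C: the CUDA runtime types such as cudaDeviceProp are then unknown (and C would additionally require `struct cudaDeviceProp`), which produces exactly the "undeclared (first use in this function)" cascade above. The conventional fix is to give the CUDA-calling file a .cu suffix so nvcc compiles it as CUDA, build the pure-MPI files with the MPI compiler wrapper, and link at the end. Below is an untested sketch of such a makefile, reusing the paths from the one posted; the mpicc path and the Init.c-to-Init.cu rename are assumptions about this particular setup:

    ```makefile
    # Hypothetical split-compilation makefile: mpicc for the MPI host
    # code, nvcc for the CUDA code (renamed Init.cu), mpicc to link.
    MPICC    = /usr/local/mpich2-1.0.6.p1/bin/mpicc
    NVCC     = nvcc
    CFLAGS   = -I/usr/local/cuda/include -I/home/user/NVIDIA_CUDA_SDK/common/inc
    LDFLAGS  = -L/usr/local/cuda/lib -L/home/user/NVIDIA_CUDA_SDK/lib \
               -L/home/user/NVIDIA_CUDA_SDK/common/lib
    LIBS     = -lcudart -lcutil -lm
    EXECNAME = Exec

    all: main.o Init.o
    	$(MPICC) -o $(EXECNAME) main.o Init.o $(LDFLAGS) $(LIBS)

    # Pure MPI/host code: the mpicc wrapper supplies the MPI include
    # paths and libraries itself, so -lmpich is not needed explicitly.
    main.o: main.c
    	$(MPICC) $(CFLAGS) -c main.c

    # CUDA code: the .cu suffix makes nvcc compile it as CUDA C++,
    # so cudaDeviceProp and the runtime API are declared.
    Init.o: Init.cu
    	$(NVCC) $(CFLAGS) -c Init.cu
    ```

    The alternative of keeping the single nvcc command and adding `#include <cuda_runtime.h>` plus `struct` qualifiers to Init.c can also be made to work, but splitting the compilation keeps each tool on the code it understands and is the setup the earlier posts in this thread describe.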

