Search Results

Search found 136 results on 6 pages for 'mpi'.

Page 3/6 | < Previous Page | 1 2 3 4 5 6  | Next Page >

  • Vector Usage in MPI (C++)

    - by lsk1985
    I am new to MPI programming and still learning. I got as far as creating derived datatypes by defining structures. Now I want to include a vector in my structure and send the data across processes. For example:

        struct Structure {
            // Constructor
            Structure() : X(nodes), mass(nodes), ac(nodes) {
                // code to calculate the mass and accelerations
            }
            // Destructor
            ~Structure() {}
            // Variables
            double radius;
            double volume;
            vector<double> mass;
            vector<double> area;
            // ... and some other variables
            // Methods to calculate some physical properties
        };

    Now, using MPI, I want to send the data in the structure across the processes. Is it possible to create the MPI_Type_struct with the vectors included and send the data? I tried reading through forums, but I could not get a clear picture from the responses given there. I hope to get a clear idea of, or an approach to, sending the data. PS: I can send the data individually, but the overhead of many MPI_Send/MPI_Recv calls is too high if the domain is very large (say 10000*10000).

    Read the article
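
    For the question above, a minimal C++ sketch of one common workaround: a derived datatype built with MPI_Type_create_struct cannot describe the heap storage owned by a std::vector, so the length is sent first and then the contiguous buffer returned by data(). The helper names and tags are illustrative, not from the original code.

        #include <mpi.h>
        #include <vector>

        // Send a vector member in two messages: its length, then its contents.
        void send_vector(const std::vector<double>& v, int dest, int tag, MPI_Comm comm) {
            int n = static_cast<int>(v.size());
            MPI_Send(&n, 1, MPI_INT, dest, tag, comm);
            MPI_Send(v.data(), n, MPI_DOUBLE, dest, tag + 1, comm);
        }

        // Receive the length, size the vector, then receive its contents.
        std::vector<double> recv_vector(int src, int tag, MPI_Comm comm) {
            int n = 0;
            MPI_Recv(&n, 1, MPI_INT, src, tag, comm, MPI_STATUS_IGNORE);
            std::vector<double> v(n);
            MPI_Recv(v.data(), n, MPI_DOUBLE, src, tag + 1, comm, MPI_STATUS_IGNORE);
            return v;
        }

    For very large domains, packing all vectors of the structure into one buffer (for example with MPI_Pack) keeps the number of messages small.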

  • MPI C fprintf() output not showing up if the process hangs on MPI_Recv

    - by Karolis
    I'm writing an MPI C program. I have trouble debugging it, because whenever I use fprintf like this: fprintf(stdout, "worker: %d", worker); and the program hangs because of some blocking MPI_Recv, I can't see any output. I'm sure the line of code is reached, because if I put a return statement after the fprintf, the process finishes execution and the output is printed. Any ideas on how to print (and actually see the output) even though the process gets blocked later by MPI_Recv? I hope this makes sense.

    Read the article
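
    For the question above, a minimal sketch: stdio output is buffered, so text written just before a blocking MPI_Recv may sit in the buffer indefinitely. Flushing (or disabling buffering) makes it appear immediately. The helper name is illustrative.

        #include <cstdio>

        // Print progress and force it out of the stdio buffer before blocking.
        void report_progress(int worker) {
            std::fprintf(stdout, "worker: %d\n", worker);  // a trailing newline also helps
            std::fflush(stdout);                           // push the text out now
        }

    Calling setvbuf(stdout, NULL, _IONBF, 0) once at startup disables buffering entirely, at some cost in output performance.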

  • Unable to run OpenMPI across more than two machines

    - by rcollyer
    When attempting to run the first example in the boost::mpi tutorial, I was unable to run across more than two machines. Specifically, this seemed to run fine:

        mpirun -hostfile hostnames -np 4 boost1

    with each hostname in hostnames given as <node_name> slots=2 max_slots=2. But when I increase the number of processes to 5, it just hangs. I have decreased slots/max_slots to 1 with the same result whenever I exceed two machines. On the nodes, this shows up in the job list:

        <user> Ss orted --daemonize -mca ess env -mca orte_ess_jobid 388497408 \
            -mca orte_ess_vpid 2 -mca orte_ess_num_procs 3 -hnp-uri \
            388497408.0;tcp://<node_ip>:48823

    Additionally, when I kill it, I get this message:

        node2 - daemon did not report back when launched
        node3 - daemon did not report back when launched

    The cluster is set up with the MPI and Boost libraries accessible on an NFS-mounted drive. Am I running into a deadlock with NFS, or is something else going on?

    Read the article

  • MPI_Bsend and MPI_Isend: how do they work?

    - by GBBL
    Hi, regarding buffered send and non-blocking send, I was wondering how (and whether) they introduce a new level of parallelism in my application, possibly by spawning a thread. Imagine that a slave process generates a large amount of data and wants to send it to the master. My idea was to start a buffered or non-blocking send and then immediately begin computing the next result; only when I had to send the new data would I check whether I could reuse the buffer. This would introduce a new level of parallelism in my application between computation and communication. Does anybody know how this is done in MPI? Does MPI spawn a new thread to handle the Bsend or Isend? Thanks.

    Read the article
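
    For the question above, a minimal C++ sketch of the usual pattern: MPI implementations are not required to spawn a progress thread for MPI_Bsend or MPI_Isend, so the portable way to overlap work with communication is a non-blocking send followed by computation, with MPI_Wait (or MPI_Test) before the buffer is reused. Names are illustrative.

        #include <mpi.h>

        void send_and_compute_next(double* buf, int n, int dest, MPI_Comm comm) {
            MPI_Request req;
            MPI_Isend(buf, n, MPI_DOUBLE, dest, 0, comm, &req);

            // ... compute the next result into a second buffer while the send progresses ...

            MPI_Wait(&req, MPI_STATUS_IGNORE);  // only now may buf be overwritten
        }

    Using two buffers in rotation (double buffering) avoids waiting on the previous send before computing the next result.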

  • Relation between ranks of processes after using MPI_Comm_split

    - by dks12345
    I used MPI_Comm_split to split the default MPI communicator. Say there were initially 10 processes in the default communicator, MPI_COMM_WORLD, and their ranks are identified by id_original. The new communicator consists of the 4 processes with id_original 6, 7, 8, 9; these processes have ranks identified by, say, id_new in the new communicator. What is the relation between the ranks of the processes in these two communicators? Will the processes with id_original 6, 7, 8, 9 get new ranks 0, 1, 2, 3 respectively in the new communicator, or might the ordering be different?

    Read the article
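
    For the question above, a minimal C++ sketch: within each new communicator, ranks are assigned in increasing order of the key argument passed to MPI_Comm_split, with ties broken by the rank in the old communicator. Passing the old rank as the key therefore gives the processes with old ranks 6, 7, 8, 9 the new ranks 0, 1, 2, 3. The color rule below is illustrative.

        #include <mpi.h>
        #include <cstdio>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int world_rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

            int color = (world_rank >= 6) ? 1 : 0;   // old ranks 6..9 form one group
            MPI_Comm new_comm;
            MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &new_comm);  // key = old rank

            int new_rank;
            MPI_Comm_rank(new_comm, &new_rank);
            std::printf("old rank %d -> new rank %d\n", world_rank, new_rank);

            MPI_Comm_free(&new_comm);
            MPI_Finalize();
            return 0;
        }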

  • MPI_SCATTER Fortran Matrices by Rows

    - by Fortran
    What is the best way to scatter a Fortran 90 matrix by its rows rather than its columns? That is, say I have a matrix a(4,50) and I want to MPI_SCATTER it onto two processes where each part is alocal(2,50): rank 0 has rows 1 and 2, and rank 1 has rows 3 and 4. In C this is simple, since arrays are row-major, but Fortran 90 arrays are column-major. I'm trying to avoid using TRANSPOSE to flip a before scattering (i.e., doubling the memory use), and I figure there must be a way to do this in MPI. Would it be MPI_TYPE_VECTOR? MPI_TYPE_CREATE_SUBARRAY? Likewise, what if I have a 3-D array b(4,50,3) and want two scattered arrays of blocal(2,50,3), distributed as above?

    Read the article
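
    For the question above, a minimal sketch of the MPI_TYPE_VECTOR plus MPI_TYPE_CREATE_RESIZED approach, written in C++ with an explicitly column-major buffer so the datatype construction carries over to Fortran unchanged: one 2-row block of a(4,50) is 50 pieces of 2 elements with stride 4, and shrinking the type's extent to 2 reals makes consecutive scatter blocks start 2 rows apart. Run with 2 processes; the 3-D case b(4,50,3) works the same way with 50*3 pieces instead of 50.

        #include <mpi.h>
        #include <vector>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            const int rows = 4, cols = 50, local_rows = 2;
            std::vector<float> a;                        // column-major: a[i + rows*j]
            if (rank == 0) a.assign(rows * cols, 0.0f);  // root holds the full matrix
            std::vector<float> alocal(local_rows * cols);

            // one 2-row block: 50 pieces of 2 consecutive reals, stride of one full column (4)
            MPI_Datatype rowblock, rowblock_resized;
            MPI_Type_vector(cols, local_rows, rows, MPI_FLOAT, &rowblock);
            // shrink the extent so block i of the scatter starts i*2 rows into the matrix
            MPI_Type_create_resized(rowblock, 0, local_rows * sizeof(float), &rowblock_resized);
            MPI_Type_commit(&rowblock_resized);

            // each rank receives its rows as 2*50 contiguous reals (its own column-major block)
            MPI_Scatter(rank == 0 ? a.data() : nullptr, 1, rowblock_resized,
                        alocal.data(), local_rows * cols, MPI_FLOAT, 0, MPI_COMM_WORLD);

            MPI_Type_free(&rowblock_resized);
            MPI_Type_free(&rowblock);
            MPI_Finalize();
            return 0;
        }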

  • What happens when I MPI_Send to a process that has finished?

    - by nieldw
    What happens when I MPI_Send to a process that has finished? I am learning MPI and writing a small sugar-distribution simulation in C. When the factories stop producing, those processes end; when warehouses run empty, they end too. Can I somehow tell that a shop's order to a warehouse did not succeed (because the warehouse process has ended) by looking at the return value of MPI_Send? The documentation doesn't mention a specific error code for this situation, only that no error is returned on success. Can I do if (MPI_Send(...)) { /* destination has ended */ } and disregard the specific error code? Thanks

    Read the article
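
    For the question above, a minimal sketch: by default MPI aborts the job on any error, so MPI_Send never returns a failure code unless the error handler is changed. Even with MPI_ERRORS_RETURN installed, the standard does not promise any particular error code (or any error at all) when the destination has already finished, so this test cannot reliably detect a terminated warehouse; an explicit "I am done" message from the warehouse is the robust alternative. The helper name is illustrative.

        #include <mpi.h>
        #include <cstdio>

        void try_send_order(int* order, int dest) {
            // let MPI calls return error codes instead of aborting the whole job
            MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
            int rc = MPI_Send(order, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
            if (rc != MPI_SUCCESS) {
                std::fprintf(stderr, "send to %d failed (rc=%d)\n", dest, rc);
            }
        }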

  • Catching a deadlock in a simple odd-even sending scheme

    - by user562264
    I'm trying to solve a simple problem with MPI; my implementation is MPICH2 and my code is in Fortran. I have used blocking send and receive. The idea is simple, but when I run it, it crashes. I have absolutely no idea what is wrong; can anyone comment on this, please? Here is a piece of the code:

        INTEGER, PARAMETER :: IM=100, JM=100
        REAL, ALLOCATABLE :: T(:,:), TF(:,:)
        CALL MPI_COMM_RANK(MPI_COMM_WORLD, RNK, IERR)
        CALL MPI_COMM_SIZE(MPI_COMM_WORLD, SIZ, IERR)
        prv = rnk-1
        nxt = rnk+1
        LIM = INT(IM/SIZ)
        IF (rnk==0) THEN
            ALLOCATE(TF(IM,JM))
            prv = MPI_PROC_NULL
        ELSEIF (rnk==siz-1) THEN
            NXT = MPI_PROC_NULL
            LIM = LIM + MOD(IM,SIZ)
        END IF
        IF (MOD(RNK,2)==0) THEN
            CALL MPI_SEND(T(2,:), JM+2, MPI_REAL, PRV, 10, MPI_COMM_WORLD, IERR)
            CALL MPI_RECV(T(1,:), JM+2, MPI_REAL, PRV, 20, MPI_COMM_WORLD, STAT, IERR)
        ELSE
            CALL MPI_RECV(T(LIM+2,:), JM+2, MPI_REAL, NXT, 10, MPI_COMM_WORLD, STAT, IERR)
            CALL MPI_SEND(T(LIM+1,:), JM+2, MPI_REAL, NXT, 20, MPI_COMM_WORLD, IERR)
        END IF

    As I understand it, the even processes receive nothing while the odd ones finish their sends successfully. In some cases, when I added prints to observe what was going on, I saw that the variable NXT changed during the send: for example, all the odd processes were sending their message to process 0 instead of to their next neighbour.

    Read the article
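
    For the question above, a minimal C++ sketch (the original is Fortran) of a halo exchange that avoids the ordering problem entirely: MPI_Sendrecv pairs each send with its matching receive in one call, and MPI_PROC_NULL turns the boundary ranks into no-ops. Buffer names are illustrative, not taken from the original code.

        #include <mpi.h>
        #include <vector>

        void exchange_halo(std::vector<float>& top_row, std::vector<float>& bottom_row,
                           std::vector<float>& halo_from_prev, std::vector<float>& halo_from_next,
                           int prv, int nxt, MPI_Comm comm) {
            const int jm = static_cast<int>(top_row.size());

            // send my first interior row to the previous rank, receive a halo row from the next
            MPI_Sendrecv(top_row.data(), jm, MPI_FLOAT, prv, 10,
                         halo_from_next.data(), jm, MPI_FLOAT, nxt, 10,
                         comm, MPI_STATUS_IGNORE);
            // send my last interior row to the next rank, receive a halo row from the previous
            MPI_Sendrecv(bottom_row.data(), jm, MPI_FLOAT, nxt, 20,
                         halo_from_prev.data(), jm, MPI_FLOAT, prv, 20,
                         comm, MPI_STATUS_IGNORE);
        }

    Two details worth checking in the original code regardless of the deadlock: T(2,:) has JM elements while the calls pass a count of JM+2, and a non-contiguous section such as T(2,:) is passed to MPI_SEND as a compiler-generated temporary copy.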

  • Sending a 2-D array using scatter

    - by MPI_Beginner
    I am a beginner with MPI, using the C language and MPICH2. I wrote the following code to scatter a 2-D array so that each of two processes takes one line from it, but it produces an error when run under MPICH2. The code is:

        int main(int argc, char *argv[]) {
            int rank;
            int commsize;
            MPI_Init(&argc, &argv);
            MPI_Comm_size(MPI_COMM_WORLD, &commsize);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            char** name = malloc(2*sizeof(char*));
            int i;
            for (i = 0; i < 2; i++) {
                name[i] = malloc(15*sizeof(char));
            }
            name[0] = "name";
            name[1] = "age";
            if (rank == 0) {
                char** mArray = malloc(2*sizeof(char*));
                MPI_Scatter(&name, 1, MPI_CHAR, &mArray, 1, MPI_CHAR, 0, MPI_COMM_WORLD); // send
            } else {
                char** mArray = malloc(2*sizeof(char*));
                int k;
                for (k = 0; k < 2; k++) {
                    mArray[k] = malloc(15*sizeof(char));
                }
                MPI_Scatter(&mArray, 1, MPI_CHAR, &mArray, 1, MPI_CHAR, 0, MPI_COMM_WORLD); // receive
                printf("line is %s \n", mArray[rank-1]);
            }
            MPI_Finalize();
        }

    Read the article
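
    For the question above, a minimal C++ sketch: MPI_Scatter needs a single contiguous send buffer, while char** is an array of pointers (and &name passes the address of the pointer, not the strings). Storing the rows in one flat char block and scattering 15 chars per rank avoids both problems. Run with 2 processes, and note that every rank, root included, receives one row.

        #include <mpi.h>
        #include <cstdio>
        #include <cstring>

        int main(int argc, char* argv[]) {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            const int len = 15;
            char names[2 * len] = {0};                 // row i lives at names + i*len
            if (rank == 0) {
                std::strcpy(names, "name");
                std::strcpy(names + len, "age");
            }

            char line[len];
            MPI_Scatter(names, len, MPI_CHAR, line, len, MPI_CHAR, 0, MPI_COMM_WORLD);
            std::printf("rank %d got: %s\n", rank, line);

            MPI_Finalize();
            return 0;
        }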

  • Using MPI_Type_vector and MPI_Gather in C

    - by Goloneg
    Hi, I am trying to multiply square matrices in parallel with MPI. I use an MPI_Type_vector to send square submatrices (arrays of float) to the processes, so they can calculate subproducts. Then, for the next iterations, these submatrices are sent to neighbouring processes as MPI_Type_contiguous (the whole submatrix is sent). This part works as expected, and the local results are correct. Then, I use MPI_Gather with the contiguous types to send all the local results back to the root process. The problem is that the final matrix is built (obviously, by this method) line by line instead of submatrix by submatrix. I wrote an ugly procedure that rearranges the final matrix, but I would like to know whether there is a direct way of performing the "inverse" operation of sending MPI_Type_vectors (i.e., sending an array of values and arranging it directly in subarray form in the receiving array). An example, to try to clarify my long text: A[16] and B[16] are 4x4 matrices to be multiplied; C[16] will contain the result; 4 processes are used (Pi with i from 0 to 3). Each Pi gets two 2x2 submatrices, subAi[4] and subBi[4]; their product is stored locally in subCi[4]. For instance, P0 gets subA0[4] containing A[0], A[1], A[4] and A[5], and subB0[4] containing B[0], B[1], B[4] and B[5]. After everything is calculated, the root process gathers all subCi[4]. Then C[16] contains:

        [ subC0[0], subC0[1], subC0[2], subC0[3],
          subC1[0], subC1[1], subC1[2], subC1[3],
          subC2[0], subC2[1], subC2[2], subC2[3],
          subC3[0], subC3[1], subC3[2], subC3[3] ]

    and I would like it to be:

        [ subC0[0], subC0[1], subC1[0], subC1[1],
          subC0[2], subC0[3], subC1[2], subC1[3],
          subC2[0], subC2[1], subC3[0], subC3[1],
          subC2[2], subC2[3], subC3[2], subC3[3] ]

    without further operations. Does someone know a way? Thanks for your advice.

    Read the article
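
    For the question above, a minimal C++ sketch of the "inverse" operation: describe one 2x2 block of the 4x4 result with MPI_Type_vector, shrink its extent with MPI_Type_create_resized, and let MPI_Gatherv drop each rank's contiguous subC directly into its block position via displacements. Run with 4 processes; the sizes match the example in the question.

        #include <mpi.h>
        #include <vector>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            const int N = 4, B = 2;                                    // matrix size, block size
            std::vector<float> subC(B * B, static_cast<float>(rank));  // this rank's 2x2 result
            std::vector<float> C;
            if (rank == 0) C.assign(N * N, 0.0f);

            MPI_Datatype blk, blk_resized;
            MPI_Type_vector(B, B, N, MPI_FLOAT, &blk);                 // 2 rows of 2 floats, row stride 4
            MPI_Type_create_resized(blk, 0, B * sizeof(float), &blk_resized);
            MPI_Type_commit(&blk_resized);

            // displacements are counted in units of the resized extent (2 floats)
            int counts[4] = {1, 1, 1, 1};
            int displs[4] = {0, 1, N, N + 1};                          // blocks (0,0) (0,1) (1,0) (1,1)
            MPI_Gatherv(subC.data(), B * B, MPI_FLOAT,
                        rank == 0 ? C.data() : nullptr, counts, displs, blk_resized,
                        0, MPI_COMM_WORLD);

            MPI_Type_free(&blk_resized);
            MPI_Type_free(&blk);
            MPI_Finalize();
            return 0;
        }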

  • mpicc hangs when called from makefile; runs fine as single command

    - by user2518579
    I'm trying to compile WRF (doubt that's relevant) and am having a problem where mpicc hangs when run from the compile script; icc and mpif90 have no issues. The compile script is executed with #!/bin/csh -f. To be verbose, here's an example. I run the script and get here:

        make[3]: Entering directory `/home/jason/wrf/wrf3.5/external/RSL_LITE'
        mpicc -DMPI2_SUPPORT -DMPI2_THREAD_SUPPORT -DFSEEKO64_OK -w -O3 -DDM_PARALLEL -DMAX_HISTORY=25 -DNMM_CORE=0 -c rsl_bcast.c

    and it hangs. So then I run that line by itself:

        jason@server:~/wrf/wrf3.5$ cd /home/jason/wrf/wrf3.5/external/RSL_LITE
        jason@server:wrf3.5/external/RSL_LITE$ mpicc -DMPI2_SUPPORT -DMPI2_THREAD_SUPPORT -DFSEEKO64_OK -w -O3 -DDM_PARALLEL -DMAX_HISTORY=25 -DNMM_CORE=0 -c rsl_bcast.c
        jason@server:wrf3.5/external/RSL_LITE$

    and it compiles instantly. Starting the compile script again does exactly the same thing, but on the next file. I have no idea what to do, and this is basically impossible to google for.

    Read the article

  • Linking Error: undefined reference to `MPI_Init' on Windows 7

    - by fatpipp
    I am using the OpenMPI library to write a program to run on Windows 7. I compile and build with C-Free 4.0 and MinGW. Compilation is fine, but when the linker links the objects, "undefined reference to ..." errors occur. I have already set up the environment: I added the OpenMPI lib, include, and bin folders to the C-Free build directories, and added them to the Windows environment variables as well. But the error still occurs. Can anyone tell me how to fix it? Thanks a lot.

    Read the article

  • Can I use MPI_Probe to probe messages sent by any collective operation?

    - by takwing
    In my code I have a server process repeatedly probing for incoming messages, which come in two types. One of the two types is sent once by each process to hint to the server that it is terminating. I was wondering whether it is valid to use MPI_Bcast to broadcast these termination messages and MPI_Probe to probe for their arrival. I tried this combination but it failed; the failure might have been caused by something else, so I would like anyone who knows about this to confirm. Cheers.

    Read the article
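
    For the question above, a minimal sketch: MPI_Probe only matches point-to-point messages, so a termination notice broadcast with MPI_Bcast cannot be probed (and MPI_Bcast must in any case be called by every rank in the communicator, which does not fit a "server probes for whatever arrives" design). Sending the notices as ordinary tagged messages keeps them probeable. Tags and names below are illustrative.

        #include <mpi.h>
        #include <vector>

        const int TAG_DONE = 2;

        void server_loop(int nworkers, MPI_Comm comm) {
            int finished = 0;
            while (finished < nworkers) {
                MPI_Status st;
                MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &st);   // wait for any message
                if (st.MPI_TAG == TAG_DONE) {
                    int dummy;
                    MPI_Recv(&dummy, 1, MPI_INT, st.MPI_SOURCE, TAG_DONE, comm, MPI_STATUS_IGNORE);
                    ++finished;                                      // one worker has terminated
                } else {
                    int count;
                    MPI_Get_count(&st, MPI_INT, &count);             // size the buffer from the probe
                    std::vector<int> buf(count);
                    MPI_Recv(buf.data(), count, MPI_INT, st.MPI_SOURCE, st.MPI_TAG, comm, MPI_STATUS_IGNORE);
                    // ... handle the work message ...
                }
            }
        }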

  • Error while installing boost_1_54

    - by Farhat
    On trying to install boost I get this error during configuration checks. Googling did not give any pointers.

        [root@heracles boost_1_54_0]# ./b2 install
        Performing configuration checks
            - 32-bit                  : no  (cached)
            - 64-bit                  : yes (cached)
            - arm                     : no  (cached)
            - mips1                   : no  (cached)
            - power                   : no  (cached)
            - sparc                   : no  (cached)
            - x86                     : yes (cached)
        error: No best alternative for libs/coroutine/build/allocator_sources
            next alternative: required properties: <link>static <target-os>windows <threading>multi
                not matched
            next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
                not matched
            next alternative: required properties: <link>static <threading>multi
                not matched
            - has_icu builds          : no  (cached)
        warning: Graph library does not contain MPI-based parallel components.
        note: to enable them, add "using mpi ;" to your user-config.jam
            - zlib                    : yes (cached)
            - iconv (libc)            : yes (cached)
            - icu                     : no  (cached)
            - icu (lib64)             : no  (cached)
            - compiler-supports-ssse3 : yes (cached)
            - compiler-supports-avx2  : no  (cached)
            - gcc visibility          : yes (cached)
            - long double support     : yes (cached)
        warning: skipping optional Message Passing Interface (MPI) library.
        note: to enable MPI support, add "using mpi ;" to user-config.jam.
        note: to suppress this message, pass "--without-mpi" to bjam.
        note: otherwise, you can safely ignore this message.
        error: No best alternative for libs/coroutine/build/allocator_sources
            next alternative: required properties: <link>static <target-os>windows <threading>multi
                not matched
            next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
                not matched
            next alternative: required properties: <link>static <threading>multi
                not matched
            - zlib                    : yes (cached)

    How can the alternative for allocator_sources be located? Thanks.

    Read the article

  • Python error after installing libboost-all-dev on Debian

    - by Cameron Metzke
    A friend of mine wanted the Boost libraries installed on our shared computer, so after installing libboost-all-dev 1.49.0.1 (a Debian Wheezy machine), I get this error when using the "pydoc modules" command on the command line:

        root@debian:/usr/include/c++/4.7# pydoc modules
        Please wait a moment while I gather a list of all available modules...
        [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 357
        [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 230
        [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../orte/runtime/orte_init.c at line 132
        --------------------------------------------------------------------------
        It looks like orte_init failed for some reason; your parallel process is likely to abort.
        There are many reasons that a parallel process can fail during orte_init; some of which
        are due to configuration or environment problems. This failure appears to be an internal
        failure; here's some additional information (which may only be relevant to an Open MPI developer):
          orte_ess_set_name failed
          --> Returned value A system-required executable either could not be found or was not
              executable by this user (-127) instead of ORTE_SUCCESS
        --------------------------------------------------------------------------
        --------------------------------------------------------------------------
        It looks like MPI_INIT failed for some reason; your parallel process is likely to abort.
        There are many reasons that a parallel process can fail during MPI_INIT; some of which
        are due to configuration or environment problems. This failure appears to be an internal
        failure; here's some additional information (which may only be relevant to an Open MPI developer):
          ompi_mpi_init: orte_init failed
          --> Returned "A system-required executable either could not be found or was not
              executable by this user" (-127) instead of "Success" (0)
        --------------------------------------------------------------------------
        *** The MPI_Init() function was called before MPI_INIT was invoked.
        *** This is disallowed by the MPI standard.
        *** Your MPI job will now abort.
        [debian:49065] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
        root@debian:/usr/include/c++/4.7#

    I tried looking into the problem and ended up uninstalling the following to get it to work again:

        openmpi-common all 1.4.5-1
        libibverbs-dev amd64 1.1.6-1
        libopenmpi-dev amd64 1.4.5-1
        mpi-default-dev amd64 1.0.1
        libboost-mpi-python1.49.0

    Although pydoc works again, I'm assuming the packages I removed are going to hurt something else down the track. As you guessed, I'm not a C/C++ programmer. So I guess my question is: will this hurt something later? Is there a way to install those packages without hurting Python?

    Read the article

  • Keyboard layout: In 13.10, modified symbols do not apply

    - by MPi
    I like to tweak my Colemak layout a bit, so I changed /usr/share/X11/xkb/symbols/us to contain my changes. Sure, they get lost on an upgrade, but that does not happen very often. After upgrading to 13.10, this no longer works: I change the file, but the changes are not applied, neither when I use the settings program nor when I issue setxkbmap 'us(colemak)' directly. Where is this data stored now? Is there some kind of cache?

    Read the article

  • Extend gnome-control-center’s list of keyboard layouts

    - by MPi
    While looking into my problem from "Keyboard layout: In 13.10, modified symbols do not apply", I noticed that the keyboard layout list in gnome-control-center's region settings is not directly related to the layouts found in /usr/share/X11/xkb/symbols. For example, on my German machine it lists ‘Englisch (Colemak)’ and ‘English (Britisch, Colemak)’ as keyboard layouts; neither appears verbatim in the original files, so they are obviously translations. So, my questions are: where does gnome-control-center get its list of keyboard layouts from, and can it be extended?

    Read the article

  • Are you at Super Computing 10?

    - by Daniel Moth
    Like last year, I was going to attend SC this year, but other events are unfortunately keeping me here in Seattle next week. If you are going to be in New Orleans, have fun and be sure not to miss the following two opportunities. MPI Debugging UX Study: throughout the week, my team is conducting 90-minute studies on debugging MPI applications within Visual Studio. In exchange for your feedback (under NDA) you will receive a Microsoft gratuity (and the knowledge that you are impacting the development of Visual Studio). If you are interested, sign up at the Microsoft Information Desk in the Exhibitor Hall during exhibit hours; outside of exhibit hours, send email to [email protected]. If you took part in the GPGPU study, this is very similar, except it is for MPI. Microsoft High Performance Computing Summit: on Monday the 15th, the annual Microsoft user group meeting takes place. Shuttle transportation and lunch are provided. For full details of this event and to register, please visit the official event page. Comments about this post are welcome at the original blog.

    Read the article
