
Re: MOSIX



Dariush Pietrzak wrote:

> > note that you might go for HA quite easily but in many cases HPC will be
> > very hard to attain on any system that is just POSIX.
>  what other systems allow easy HPC?

There is a nice (non-free) SGI parallelizing FORTRAN compiler which unrolls loops and
distributes them across processors, but that's for IRIX and only works well on shared
memory machines, like their Origin 2000s.  That's about as easy as it gets, but you
need a large SMP box to make it worthwhile.

Note that this won't work for C, because while FORTRAN assumes each loop iteration is
independent and may be unrolled, C assumes they happen in series.  So, for example,
the C program:

int a = 1, b = 1, tmp, i;
for (i = 0; i < 4; i++) {
  tmp = a + b; a = b; b = tmp;   /* each iteration depends on the previous one */
}

would compute the Fibonacci series, but if you wrote the same loop in FORTRAN, a good
optimizing compiler would not necessarily run the iterations in order and would
probably give you garbage.
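
For contrast, here's the kind of loop such a compiler *can* safely split up: every
iteration touches only its own element, so the iterations may run in any order or all
at once.  The pragma below is an OpenMP directive I'm using just to illustrate the
idea (it's not the SGI compiler's own syntax, and the data is a made-up placeholder):

#include <stdio.h>

#define N 1000000

double a[N], b[N], c[N];

int main(void)
{
    int i;

    for (i = 0; i < N; i++) {   /* set up some data */
        a[i] = i;
        b[i] = 2.0 * i;
    }

    /* No iteration reads anything another iteration writes, so this loop
       can be unrolled and spread across processors. */
#pragma omp parallel for
    for (i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[%d] = %g\n", N - 1, c[N - 1]);
    return 0;
}

With an OpenMP-aware compiler the loop gets divided among the CPUs; without one, the
pragma is simply ignored and the loop runs serially.  That's exactly why this is
"easy" on shared memory and not on a cluster.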

> > parallel machine, while a distributed OS can't do that very well on its
> > own without help from the programmer about parallelism)
> pvm/mpi allow number-crunching types of distribution; from a distributed OS I
> expect mainly load balancing ( I'd probably be better off trying to
> properly design my setup, I am just wondering what people use
> MOSIX-style thingies for )

I don't know of anyone using it to do HPC.  The trouble is, even with MOSIX, you have
to write your app in multiple threads/processes, and MOSIX will distribute the
threads/processes over machines, the way Linux SMP distributes them over processors on
one machine.  Still non-trivial.
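
To make that concrete: as I understand it, MOSIX migrates ordinary processes, so the
app structure it wants is just a bunch of fork()ed workers with no MOSIX-specific
calls at all.  A minimal sketch (do_chunk() is a made-up stand-in for the real
per-process number crunching):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NWORKERS 4

/* hypothetical placeholder for the real per-process computation */
static void do_chunk(int n)
{
    fprintf(stderr, "worker %d (pid %d) running\n", n, (int) getpid());
    /* ... number crunching on chunk n goes here ... */
}

int main(void)
{
    int i;

    for (i = 0; i < NWORKERS; i++) {
        pid_t pid = fork();
        if (pid == 0) {             /* child: one independent worker */
            do_chunk(i);
            _exit(0);
        } else if (pid < 0) {
            perror("fork");
            exit(1);
        }
    }

    for (i = 0; i < NWORKERS; i++)  /* parent waits for all workers */
        wait(NULL);

    return 0;
}

On one SMP box the kernel scheduler spreads these over processors; under MOSIX the
same processes can get migrated to other nodes.  That's the attraction, but it's also
why anything beyond fork-and-forget (shared memory, heavy IPC) starts to hurt.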

There's no "easy" way I know of to do this over a cluster, and on SMP, no "easy" way
that's free.

If you have lots of processes/threads, then MOSIX makes it easy to distribute them
over a cluster, but "shared memory" threads would obviously have communication
delays.  I don't know how MOSIX does this, but it's bound to be slower than if you use
MPI to control communication yourself.
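
For comparison, MPI makes the communication explicit.  A minimal sketch of the usual
pattern, where each rank computes a partial result locally and the only network
traffic is one reduction at the end (the "work" here is obviously a placeholder):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    double local, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each rank works on its own share of the problem */
    local = (double) rank;               /* placeholder for real work */

    /* the only communication: combine partial results on rank 0 */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of partial results = %g on %d ranks\n", total, size);

    MPI_Finalize();
    return 0;
}

Run it with something like "mpirun -np 8 ./a.out" and you know exactly when data
crosses the network; with MOSIX the placement and communication are hidden from you,
which is convenient but hard to tune.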

> > Do it on MPI, you need a paper or two about HMM computation in parallel probably.
> How is MPI better than PVM? ( I've used PVM, and my teacher said something
> about MPI being known as the better technique ).  Is MPI available for
> different types of machines? ( with PVM I can make a 'cluster' out of a
> bunch of PCs in labs and add to it my 2-processor SPARC with Solaris )

I don't know, I've only used MPI, but my impression is that they are different ways to
do about the same thing.  (Anyone else know the differences better?)

Zeen,

-Adam P.

                 Welcome to the best software in the world today cafe!


