
Re: First Beowulf cluster. Some pointers?



"A.J. Rossini" wrote:
> 
>     e> It won't enhance anything for a parallel virtual machine that
>     e> has to know the topology.
> 
> It WILL balance jobs which are run via PVM/MPI, esp if you've got
> multiple processes on the same nodes.   PVM/MPI are not
> load-balancing, not without a good bit of direct coding.
> 

Interesting, but I presume it will enhance performance only for very
coarse-grained applications. In all other cases the performance penalty
will be severe: irregular mesh partitioning, for instance, or basically
any fine-grained computation.

In fact the MPI model presumes that you are coding on a static network.
As far as I've seen, the dynamic process extensions are not very worthwhile,
so we stick to the static model when we use MPI.
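
By "static model" I mean roughly the following sketch (the work loop is
just a placeholder): the set of processes and their ranks are fixed at
MPI_Init, and the data decomposition is hard-wired to those ranks, so there
is little for a migration layer underneath MPI to improve.

  /* Rough sketch of the static MPI model: processes and ranks are fixed
   * at MPI_Init, and each rank owns a fixed slice of the work.
   * The "work" here is only illustrative. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* my fixed place in the machine */
      MPI_Comm_size(MPI_COMM_WORLD, &size);  /* process count, fixed for the run */

      const int N = 1000000;
      long local = 0, total = 0;
      /* Static partitioning: rank r handles elements r, r+size, r+2*size, ... */
      for (int i = rank; i < N; i += size)
          local += i;

      MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
      if (rank == 0)
          printf("sum = %ld\n", total);

      MPI_Finalize();
      return 0;
  }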

>     e> mosix doesn't have much to do with HPC, so we can say that it
>     e> isn't very related to a beowulf class supercomputer.
> 
> It is a nice tool for HPC, depending on the jobs you do.  It enhances
> the underlying system, at a lower level than PVM/MPI/shmem.  It's
> definitely NOT a panacea.  You still have to write the parallel code.
> If you've got a really tight process and are targetting differential
                                               ^^^^^^^^^^^^^^^^^^^^^^^
> code at specific nodes in a topology, of course it's pretty useless.
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Most parallel algorithms are of that sort. Even a fold-and-expand routine
requires that kind of behavior, let alone problems with irregular data,
which means that many real supercomputing problems are in a similar vein.
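
To make that concrete, here is a rough sketch of a binary-tree fold (the
local value is just a stand-in): every step pins communication to a specific
partner rank, so the code really does target specific nodes in the topology.

  /* Sketch of a binary-tree fold: each step sends the partial result to a
   * fixed partner rank, i.e. the communication pattern is tied to specific
   * nodes. The local value is a placeholder. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      double value = (double) rank;   /* stand-in for a locally computed result */
      MPI_Status status;

      /* Fold: at each step, ranks that are not multiples of 2*step send their
       * partial result to the fixed partner (rank - step) and drop out. */
      for (int step = 1; step < size; step *= 2) {
          if (rank % (2 * step) != 0) {
              MPI_Send(&value, 1, MPI_DOUBLE, rank - step, 0, MPI_COMM_WORLD);
              break;                  /* this node's role in the fold is done */
          } else if (rank + step < size) {
              double incoming;
              MPI_Recv(&incoming, 1, MPI_DOUBLE, rank + step, 0,
                       MPI_COMM_WORLD, &status);
              value += incoming;
          }
      }

      if (rank == 0)
          printf("folded value = %g\n", value);
      MPI_Finalize();
      return 0;
  }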

> But that's a specific example (though I admit there are numerous other
> examples when MOSIX won't enhance the performance -- but you can
> always lock processes onto particular nodes in the topology).
> 

I don't think it's too specific.

> But in general, it's a nice tool.
> 

I agree with that; too bad I haven't had the time to actually try it :/

> Maybe it's not the tool for the HPC problems _YOU_ have; perhaps
> you've got a nice scenario where your cluster is only running a single
> HPC job at a time, and your nodes are equivalent.  But for those of us
> with a smallish clusters running several large, indeterminately
> long-term jobs from numerous sources/people...

Yes, the research side and practical applications sometimes differ wildly.
Multitasking, for instance: in a research benchmark we confine the machine
to a single user and a single task, but in real life it isn't like that.

In fact, if you think about the general case, where computation and
communication may be overlapped by virtue of the underlying message-passing
system, multitasking would be an advantage rather than a hindrance!
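
What I have in mind is the usual non-blocking trick, roughly like the sketch
below (buffer sizes and the "work" are made up): post the exchange, compute
on whatever does not depend on it, and only then wait.

  /* Sketch of overlapping computation with communication using non-blocking
   * MPI calls. Buffer contents and the "interior" work are only illustrative. */
  #include <mpi.h>
  #include <stdio.h>

  #define N 4096

  int main(int argc, char **argv)
  {
      int rank, size;
      double sendbuf[N], recvbuf[N], work[N], interior = 0.0;
      MPI_Request reqs[2];
      MPI_Status  stats[2];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      for (int i = 0; i < N; i++) {
          sendbuf[i] = rank + i;   /* boundary data for the right neighbour */
          work[i]    = rank - i;   /* "interior" data, independent of the exchange */
      }

      int right = (rank + 1) % size;
      int left  = (rank - 1 + size) % size;

      /* Post the exchange with the neighbours... */
      MPI_Irecv(recvbuf, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
      MPI_Isend(sendbuf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

      /* ...and compute on the interior while the messages are in flight. */
      for (int i = 0; i < N; i++)
          interior += work[i] * 0.5;

      /* Only now block until the communication has completed. */
      MPI_Waitall(2, reqs, stats);

      if (rank == 0)
          printf("interior = %g, first received value = %g\n",
                 interior, recvbuf[0]);
      MPI_Finalize();
      return 0;
  }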

In particular, consider "multithreaded" parallel applications which break
the work down into threads on each node and can tolerate network latency
and bandwidth in flux.

The point in a real system with multiple users is not to finish a single
job in the shortest "wall clock" time, but rather to use the machine to
the fullest possible capacity. In that respect, I cannot agree more.
Nevertheless, I can hardly see a combination of a "home node" implementation
like MOSIX with MPI as a solution to the problem I describe.

Indeed, the more I use MPI, the worse an impression it makes on me. I
personally think that a very tight message-passing system with a
*different* API is required.

Regards,

-- 
Eray (exa) Ozkural
Comp. Sci. Dept., Bilkent University, Ankara
e-mail: erayo@cs.bilkent.edu.tr
www: http://www.cs.bilkent.edu.tr/~erayo
