[Beowulf] Re: building a new cluster
schuang21 at yahoo.com
Fri Sep 3 11:31:46 PDT 2004
Thank you for the comments, Tim. I have some bigger cases (16M grid
points) but have not posted them yet. :-)
The "wall-clock seconds" shown in the figure are the time needed for
one time step (it's an unsteady flow problem). In fact, the value is
an average value taken from several time steps (variation is about 10%
between time steps). The initial startup overhead is excluded
entirely from the timing (i.e., I started the code, let it run for a
few steps, and then began timing, using the "mpi_wtime()" call).
--- Tim Mattox <tmattox at gmail.com> wrote:
> Hello SCH,
> That is an interesting graph. Are you running a representative size
> data set?
> 2 Million gridpoints seems small compared to what I'm used to seeing
> people run on our clusters. But I don't know your code or problem
> so 2 Million may be just what you want to work with.
> I say this because the wallclock times in your graph are rather
> short, with all but one of the runs on faster than 100 Mbit networks
> at under 10 seconds.
> You may be seeing the job's startup overhead more than your actual
> computation.
> If you can, try running a bigger job, and/or for more timesteps.
> More timesteps will hide the startup overhead, and a bigger job (if
> that is representative of your desired runs) should allow you to
> scale to larger numbers of nodes.
> On Fri, 3 Sep 2004 09:55:11 -0700 (PDT), SC Huang
> <schuang21 at yahoo.com> wrote:
> > Hi,
> > I just posted some timing results from my MPI code here:
> > http://www.geocities.com/schuang21/index.html
> > More are coming soon. It looks like even with a gigabit switch (at
> > least for this MPI code) using more than 16 nodes is not good...
> > Any comment or suggestion is very welcome. :-)
> > SCH
> Tim Mattox - tmattox at gmail.com - http://homepage.mac.com/tmattox/