How about parallel computing with finance

Schilling, Richard RSchilling at affiliatedhealth.org
Thu Dec 7 10:29:01 PST 2000


I've been trying to get my e-mail client to send text only (no HTML).  I
apologize in advance if this is HTML.

This is exactly where I've been focusing my energy with Beowulf
technology.  And in health care, as in many industries, making the
technology work for your finance department is key.  Do great science, and
people in finance raise an eyebrow.  Save the company six figures in
operating costs or process their data faster, and you get applauded,
invited to meet with directors . . . the whole nine yards.

Richard Schilling
Web Integration Programmer
Affiliated Health Services
Mount Vernon, WA


> -----Original Message-----
> From: Terrence E. Brown [mailto:tbrown at lector.kth.se]
> Sent: Wednesday, December 06, 2000 11:14 PM
> To: Robert G. Brown <rgb at phy.duke.edu>
> Cc: Horatio B. Bogbindero <wyy at cersa.admu.edu.ph>;
> liuxg; beowulf at beowulf.org
> Subject: Re: How about parallel computing with finance
> 
> 
> Probably not much discussion should take place here, but I will have
> another venue for it in a few days.
> 
> "Robert G. Brown" <rgb at phy.duke.edu> wrote:
> 
> > On Wed, 6 Dec 2000, Terrence E. Brown wrote:
> >
> > > I am also interested in the business and managerial application, as
> > > well as other industrial apps.
> > >
> > > I would certainly like to talk with others with similar thoughts. I
> > > have even started an org dedicated to that objective.
> > >
> > > Terrence
> > >
> > > "Horatio B. Bogbindero" <wyy at cersa.admu.edu.ph> wrote:
> > >
> > > > i would just like to know about building neural networks on
> > > > clusters. i am not into neural networks myself, but some people
> > > > here at the university may be interested. however, we do not know
> > > > where to start. i would like to know where i can get some sample
> > > > NN code, maybe for something trivial.
> >
> > Hmmm, I don't know how much of such a discussion should occur on
> > this list.  The following (up to <shameless marketing>) is probably
> > reasonable.
> >
> > Neural networks (and the genetic algorithms that underlie a really
> > good one in a problem with high dimensionality) are certainly
> > fascinating things.  They are even in some sense a fundamentally
> > parallel thing, as their processing capabilities result from a tiered
> > composition of relatively simple (but nonlinear) transfer functions.
> > A general discussion of NN's and how they work is clearly not
> > appropriate for this list, though.  There are some particular issues
> > that are.
> >
> > In practice, the parallelization issues of NN's are mostly a small
> > part of the overall problem UNLESS you are interested in constructing
> > custom hardware or building NN ASIC's or the like.  This is because
> > computers generally run neural network SIMULATORS and use what
> > amounts to relatively small-scale linear algebra (transmogrified
> > through e.g. a logistic function) to do a net evaluation.  Since this
> > is so small that it will often fit into even L1 cache (and almost
> > certainly into L2), there is no possible way that it can be
> > profitably distributed in parallel except via (embarrassingly
> > parallel) task division.
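The "small-scale linear algebra" evaluation described above can be sketched in a few lines.  This is an illustrative toy, not anyone's actual code: a tiny hand-weighted feedforward net with a logistic transfer function, with all weights made up for the example.

```python
import math

def logistic(x):
    # nonlinear transfer function: maps any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def evaluate(net, inputs):
    # net is a list of layers; each layer is a list of units, each unit
    # a list [bias, w1, w2, ...].  Evaluating the whole network is just
    # a tiered composition of small matrix-vector operations.
    activations = inputs
    for layer in net:
        activations = [
            logistic(unit[0] + sum(w * a for w, a in zip(unit[1:], activations)))
            for unit in layer
        ]
    return activations

# a toy 2-2-1 network with arbitrary illustrative weights
net = [
    [[0.0, 1.0, 1.0], [0.0, -1.0, -1.0]],  # hidden layer: 2 units
    [[0.0, 1.0, 1.0]],                     # output layer: 1 unit
]
out = evaluate(net, [0.5, -0.5])
```

For a net of this size the weights occupy a few hundred bytes at most, which is why a single evaluation lives comfortably in cache and gains nothing from being spread across nodes.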
> >
> > Evaluation of a network's values applied to training/trial set data
> > makes up the bulk of the numerical effort in building a network and
> > is at the heart of the other tasks (e.g. regression or conjugate
> > gradient improvement of the weights).  For large training/trial sets
> > and "big" networks, this can be split up (and my experiences
> > splitting it up are recorded in one of the talks available on the
> > brahma website).  For small ones, the ratio of the time spent doing
> > parallel work to the time spent doing parallel communication isn't
> > favorable and one's parallel scaling sucks -- even two nodes may take
> > more time than one working alone.
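The embarrassingly parallel split described above can be sketched as follows, assuming a sum-of-squared-errors objective over the training set.  The weights, the data, and the one-unit "network" are all invented for illustration; the per-chunk calls are run serially here, standing in for per-node work on a cluster.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def net_output(w, x):
    # a trivial one-unit "network": bias plus one input weight
    return logistic(w[0] + w[1] * x)

def chunk_error(w, chunk):
    # partial sum of squared errors over one node's share of the data;
    # on a cluster each call would run on a different node
    return sum((net_output(w, x) - t) ** 2 for x, t in chunk)

weights = [0.1, 0.9]                           # hypothetical weights
data = [(x / 10.0, 0.5) for x in range(100)]   # hypothetical training set

nodes = 4
chunks = [data[i::nodes] for i in range(nodes)]     # one slice per node
partial = [chunk_error(weights, c) for c in chunks]
total = sum(partial)                                # master combines
serial = chunk_error(weights, data)                 # same answer, one CPU
```

Each partial sum is independent, so communication per pass is one number per node; whether that wins over a single CPU depends entirely on the work-to-communication ratio described above.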
> >
> > I'm working on an improved algorithm that splits up NN
> > construction/training in a way that is more functionally coherent.
> > That way one or two of the distinct tasks can be parallelized very
> > efficiently and thoroughly, and the results fed back into a mostly or
> > entirely serial step further down the pipeline.  I expect that this
> > will permit a very nice master/slave implementation of a neural
> > network constructor, where nodes are slaves that can be working on
> > any of a number of parallelized tasks according to the directions of
> > the master (quite possibly with internode IPC's, though), and all the
> > serial work can be done on the master.
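The master/slave shape sketched above might look roughly like this queue-based toy, with threads standing in for cluster nodes and the task names and "work" entirely made up; a real implementation would use message passing (e.g. PVM or MPI) between nodes.

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def slave():
    # a slave pulls whatever parallelized task the master posted,
    # does the work, and sends the result back to the master
    while True:
        task = tasks.get()
        if task is None:                    # sentinel: master says stop
            break
        name, payload = task
        results.put((name, payload * 2))    # stand-in for real work

# master side: post a mix of task types, then collect the results
workers = [threading.Thread(target=slave) for _ in range(3)]
for w in workers:
    w.start()
for i in range(6):
    tasks.put(("evaluate" if i % 2 else "train", i))
for _ in workers:
    tasks.put(None)
for w in workers:
    w.join()

collected = sorted(results.get() for _ in range(6))
# the serial steps (weight updates, bookkeeping) run here on the master
```

The point of the structure is that slaves never need to agree on task type: the master hands out whichever parallelized task is ready and keeps every serial step to itself.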
> >
> > <shameless marketing>
> > NN's (parallelized or not) are, as one might expect, incredibly
> > useful and potentially profitable.  After all, a successful
> > predictive model "tells the future", at least probabilistically, by
> > construction, and does even better than a delphic oracle ever did in
> > that it can often provide a quantitative (although probabilistic)
> > answer to "what if" questions as well.  In ancient times the words of
> > the oracle were just fate, and nothing you could do would change
> > them.  In business, one would like to predict what is likely to
> > happen if you follow plan A instead of plan B.  Just about any
> > business manager has a list of questions about the future (what if or
> > otherwise) they would love to have the answers to.  That's one of
> > Market Driven's foci -- providing answers and expertise in business
> > optimization.
> > </shameless marketing>
> >
> > Anyway, let me know if you're interested in more discussion of this
> > (or how NN's work, or how they and predictive modeling in general can
> > be applied in business and managerial situations) offline.
> >
> >    rgb
> >
> > --
> > Robert G. Brown                        http://www.phy.duke.edu/~rgb/
> > Duke University Dept. of Physics, Box 90305
> > Durham, N.C. 27708-0305
> > Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu
> >
> 
> --
> -----------------------------------------
> Terrence E. Brown, Ph.D.
> Stockholm School of Entrepreneurship
> Assistant Professor
> Royal Technical Institute
> INDEK
> Drottning Kristina Väg 35D
> S- 100 44 Stockholm
> Sweden
> Tel: +46(0)8-7906174 (work)
> Fax: +46(0)8-7906741 (work)
> Email: tbrown at lector.kth.se
>  or terrence.brown at sses.se
> 
> 
> 
> _______________________________________________
> Beowulf mailing list
> Beowulf at beowulf.org
> http://www.beowulf.org/mailman/listinfo/beowulf
> 

