os that rather uses the gpu?

Robert Myers rbmyersusa at gmail.com
Sat Jul 17 18:40:29 UTC 2010


On Sat, Jul 17, 2010 at 1:08 PM, Les <hlhowell at pacbell.net> wrote:

>
> But unfortunately, Robert, networks are inherently low-bandwidth.  To
> achieve full throughput you still need parallelism in the networks
> themselves.  I think, from your description, you are discussing fluid
> models, which are generally decomposed into finite series over limited
> areas for computation, with established boundary conditions.  This
> partitioning permits the distributed-processing approach but suffers at
> the boundaries, where the boundary-crossing phenomena are either
> discounted, simulated, or, I guess, passed via some coding algorithm to
> permit recursive reduction for some number of iterations.
>
> What your field, and most others that touch fluid dynamics, such as
> plasma studies and explosion studies, needs is full wide-band memory
> access across thousands of processors (millions, perhaps?).
>
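
For anyone following the thread without a fluids background: the
partitioning Les describes above is what the trade calls domain
decomposition, with ghost-cell ("halo") exchange standing in for the
boundary-crossing phenomena.  A minimal sketch in Python; the problem
size and the names are purely my own illustration:

import numpy as np

def step(u, nu=0.1):
    # One explicit diffusion step on a subdomain that carries a ghost
    # cell at each end; only interior cells are updated.
    unew = u.copy()
    unew[1:-1] = u[1:-1] + nu * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return unew

# A 16-cell global domain split into two 8-cell subdomains, each
# padded with one ghost cell on either side.
left, right = np.zeros(10), np.zeros(10)
left[8] = right[1] = 1.0          # a blob straddling the cut

for _ in range(50):
    # The halo exchange: the only data that crosses the "network".
    left[9] = right[1]            # left's right ghost <- right's edge
    right[0] = left[8]            # right's left ghost <- left's edge
    left, right = step(left), step(right)

# Stitching left[1:9] and right[1:9] together reproduces a
# single-domain run exactly; the price is one message per cut per
# step, and in 3-D the message volume grows with every subdomain's
# surface area.  That is the traffic the wide-band memory access you
# describe would have to carry.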

<big snip of insightful comments>

>


>        I am a test applications consultant.  My trade forces me to
> continuously update my skills and to try to keep up with the multitude
> of new architectures.  I have almost no free time: researching new
> architectures, designing new algorithms, understanding the application
> of algorithms to new tests, and hardware design requirements eat up
> many hours every day.  Fortunately, I am and always have been a NERD,
> and proud of it.  I work at it.
>
> Since you have a deep knowledge of your requirements, perhaps you
> should put some time into thinking of a design methodology beyond
> those I have mentioned or those you already know, in an attempt to
> improve the science.  I am sure many others would be very
> appreciative, and quite likely supportive.
>
I'm suspicious of the claimed narrowness of your credentials, because you
have gotten so many things right. ;-)

Someone who *really* knows what he's talking about in this problem area
might be inclined to say, "This guy Myers is an idiot, because what he
apparently wants to do was shown not to be possible decades ago."

Nature, not hardware budgets, dictates the range of scales and the volume
of data that must be dealt with, and you are correct (if perhaps on the
low side) about the scale of resources required for a head-on assault on
some really important problems (hurricane prediction, for example, or
meaningful climate prediction).  There is no conceivable way that such a
head-on assault could ever be mounted with any hardware that anyone, so
far as I know, has ever imagined.
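
To put crude numbers behind that claim: for direct numerical simulation
of turbulence, the classical Kolmogorov estimate puts the grid-point
count near Re**(9/4).  A back-of-envelope sketch in Python; the Reynolds
number and the bytes-per-point figure are nominal choices for
illustration, not measurements:

Re = 1e8                     # a nominal geophysical Reynolds number
points = Re ** 2.25          # Kolmogorov: grid points ~ Re**(9/4)
state = points * 5 * 8       # say, five double-precision fields per point
print(f"{points:.0e} points, ~{state / 1e18:.0f} exabytes of state")
# -> 1e+18 points, ~40 exabytes -- for one snapshot, before the
#    time-step count (which also grows with Re) enters the bill.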

What *can* be done, and what no one seems interested in doing, is to
explore how the effects I have described (computational artifacts that
result from accommodating the problem to the available hardware, as you
apparently understand) exhibit themselves at attainable ratios of scales.
Every lab director knows that, important though such knowledge might be,
its importance is so hard to explain to a layman that significant funding
will never be forthcoming.  Even worse, you might do an expensive and
insightful exploration only to discover nothing that can be applied to
"real world" modeling (a null result).
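
The exploration itself does not need heroic hardware to start.  A toy
version, assuming the artifact of interest is the numerical dissipation
a first-order scheme imposes (my stand-in for accommodating the problem
to the machine): advect a wave once around a periodic domain at a sweep
of resolutions, and measure what the discretization, not the physics,
did to it.

import numpy as np

def surviving_amplitude(n, c=0.5):
    # First-order upwind advection of a sine wave, once around a
    # periodic domain of n cells; the exact solution returns the wave
    # unchanged, so any amplitude loss is a pure numerical artifact.
    u = np.sin(2.0 * np.pi * np.arange(n) / n)
    for _ in range(2 * n):               # 2n steps at CFL 0.5 = one lap
        u = u - c * (u - np.roll(u, 1))  # upwind difference
    return u.max()

for n in (16, 32, 64, 128, 256):
    print(n, round(surviving_amplitude(n), 3))
# The artifact shrinks as more of the range of scales is resolved;
# *how* it shrinks, problem by problem, is the study proposed above.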

When you sink a lot of money into a huge new particle accelerator, there is
always the possibility that you will discover nothing new that is
interesting.  What's different is that huge particle accelerators capture
the public imagination in a way that the grubby details of computational
physics, no matter how fundamental, never will.  A bigger particle
accelerator, if nothing else, allows you to estimate new bounds for as yet
undetected phenomena.  The bigger computers we keep building only push the
problem I have described more deeply into the mud.

I have talked to computer architects about what is conceivably possible.
What I need to do right now is computation that will let me show people
what I'm talking about, rather than asking them to imagine it, however
clear it all may be to me.

A closely related question--how important are collective, nonlocal, and
nonlinear phenomena in neurobiology, and how much global bandwidth do
they imply?--may eventually push the computational frontier that I see
being ignored.
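
The arithmetic there has the same flavor.  A sketch in which every
number is a placeholder rather than a neuroscience claim: if N units are
globally coupled, so that each update moves s bytes between every pair
at rate f, the traffic a machine must carry grows as N**2.

N, f, s = 1e6, 1e3, 4        # units, updates/s, bytes per pair: all nominal
print(f"{N * N * f * s / 1e18:.0f} EB/s of all-to-all traffic")
# -> 4 EB/s.  Locality assumptions are what make such models affordable;
#    the open question is what those assumptions quietly throw away.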

Robert.