[Beowulf] Mark Hahn's Beowulf/Cluster/HPC mini-FAQ for newbies & some further thoughts

Robin Whittle rw at firstpr.com.au
Mon Nov 5 05:52:13 PST 2012


Hi Mark and Jim,

Thanks for reading my message and responding so informatively.

Regarding music synthesis, Mark wrote:

> OTOH, I can't really imagine how music synthesis could use enough
> compute power to keep more than a few cores busy, no matter what
> programming model you choose.

That may be true for playing samples, doing conventional digital
reverberation or for implementing oscillators, filters and the like.

However, consider the task of simulating a binaural recording.  In the
physical world this involves microphones in the ear-canals of a real or
dummy human head, so that when the recording is listened to on
headphones, the listener has reasonable left-right perception, and
lesser front-back and above-below perception, of the direction of the
various sound sources.  This works because human hearing interprets the
gain vs. frequency, the time delays and the phase responses imposed by
the head and outer ear (pinna).
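
For a single sound path, the binaural part of this amounts to convolving
the delayed, attenuated signal for that path with a pair of head-related
impulse responses (HRIRs) for its direction of arrival.  A minimal
sketch in Python/NumPy - the HRIR arrays here are just random
placeholders standing in for a measured set such as a dummy-head
database:

# Minimal sketch: render one sound path binaurally by convolving the
# delayed, attenuated source signal with a left/right HRIR pair for the
# path's arrival direction.  The HRIRs below are random placeholders.
import numpy as np

def render_path(source, hrir_left, hrir_right, delay_samples, gain):
    """Return (left, right) signals for one sound path."""
    delayed = np.concatenate([np.zeros(delay_samples), source]) * gain
    left = np.convolve(delayed, hrir_left)
    right = np.convolve(delayed, hrir_right)
    return left, right

# Example with made-up data: a short noise burst and dummy 128-tap HRIRs.
rng = np.random.default_rng(0)
burst = rng.standard_normal(4800)          # 0.1 s at 48 kHz
hrir_l = rng.standard_normal(128) * 0.01   # placeholder impulse responses
hrir_r = rng.standard_normal(128) * 0.01
left, right = render_path(burst, hrir_l, hrir_r, delay_samples=96, gain=0.5)

Every reflection path would need its own HRIR pair (or an interpolation
between nearby measured directions), which is where the per-source cost
multiplies.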

In a 6 sided room, with a single sound source, there are 6 first-order
reflections, 30 second-order reflections, 150 third-order reflections
and so on.  Each path of sound, with frequency response, volume, time
delay and direction should be modelled and subjected to a binaural
transformation at the location of the head.  (Finite element simulation
of the air in the room would be way out of control, due to the need to
model 5 or more cells per wavelength, such as 3mm cells for 15mm
wavelengths.  That would be 3 x 10^7 cells per cubic metre, and a
musically interesting room or hall might be 10^6 cubic metres.)  We
would typically want to simulate multiple sound sources, and the sources
are not isotropic, so their spectrum varies with the direction of sound
emanation.
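
Just to make the cell-count arithmetic above concrete, a quick
back-of-envelope check in Python, using the same figures as in the
paragraph:

# 3 mm cells (5 cells per 15 mm wavelength) and a 10^6 cubic metre hall.
cell_size = 0.003                      # metres
cells_per_cubic_metre = (1.0 / cell_size) ** 3
room_volume = 1e6                      # cubic metres
total_cells = cells_per_cubic_metre * room_volume
print(f"{cells_per_cubic_metre:.1e} cells per m^3")   # ~3.7e+07
print(f"{total_cells:.1e} cells for the whole hall")  # ~3.7e+13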

To do this properly, and generate a realistic reverberation field, would
also involve the frequency response behaviour of the walls, and most
likely many more walls.  To get realistic decays involving
fiftieth-order or
hundredth-order reflections, the number of paths to simulate (per sound
source) would be completely out of the range of any known computing
resource.
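
The path count per source is easy to estimate with the image-source
picture: a path of order n can continue to any of the five walls other
than the one it just left, so a 6-walled room has 6 * 5^(n-1) paths of
order n.  A small sketch:

# Number of image-source reflection paths of a given order in a room
# with 6 walls: 6 * 5**(n-1).
def paths_of_order(n, walls=6):
    return walls * (walls - 1) ** (n - 1)

for n in (1, 2, 3, 50, 100):
    print(n, paths_of_order(n))
# 1 -> 6, 2 -> 30, 3 -> 150, 50 -> ~1.1e35, 100 -> ~9.5e69 paths

By order 50 that is already around 10^35 paths per source, which is what
puts path-by-path simulation of long decays out of reach.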

Likewise simulating the reverberation of a double-bass, with its string
slapping on the wooden fingerboard, or a sitar with its melody and
sympathetic strings slapping on the bridge which is mounted on a
resonant gourd, or a sarod whose strings slap both on the metal
fingerboard and on the bridge . . .  The string tension is a function of
all the waveforms present at any instant, and so it frequency-modulates
the traversal speed of all those waveforms, giving rise to a generally
higher pitch at the start of the note.  Stiff metal strings push the
upper harmonics to generally higher frequencies, since for a given
amplitude these involve more bends, each at a sharper angle, per length
of string.
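
As a toy illustration of the tension effect - not a real instrument
model, the constants are arbitrary and the slapping, stiffness and
bridge/body coupling are all omitted - here is a one-dimensional
finite-difference string in Python/NumPy whose wave speed is rescaled at
each time step by the string's instantaneous elongation:

# Toy nonlinear string: the wave speed rises with the instantaneous
# elongation of the string, standing in for the extra tension produced
# by all the waves present at that moment.  Constants are arbitrary.
import numpy as np

N = 200                              # spatial points along the string
dx = 1.0 / N
dt = 1.0 / 48000                     # 48 kHz audio rate
c0 = 0.8 * dx / dt                   # base wave speed (Courant number 0.8)
alpha = 500.0                        # elongation -> speed coupling (made up)
damping = 2e-5

x = np.arange(N) * dx
u = 0.001 * (1.0 - np.cos(2.0 * np.pi * np.clip((x - 0.2) / 0.1, 0.0, 1.0)))
u_prev = u.copy()                    # pluck: start displaced, at rest

output = []
for _ in range(48000):               # one second of sound
    slope = np.diff(u) / dx
    elongation = np.sum(np.sqrt(1.0 + slope**2) - 1.0) * dx
    c2 = (c0 * (1.0 + alpha * elongation)) ** 2

    lap = np.zeros(N)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    u_next = (2.0 * u - u_prev + (c2 * dt**2 / dx**2) * lap) * (1.0 - damping)
    u_next[0] = u_next[-1] = 0.0     # fixed ends

    u_prev, u = u, u_next
    output.append(u[3 * N // 4])     # "pickup" near the far end

audio = np.asarray(output)

Because the elongation is greatest just after the pluck, the pitch
starts slightly sharp and relaxes as the vibration decays - the effect
described above, in miniature.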

A cymbal (or gong) might be simulated as a thousand or more
interconnected zones in which waves travel in all directions in the
plane of the cymbal.  The speed of propagation in each zone depends in
part on the tension on that piece of metal, which is an instantaneous
function of the stresses on it due to all the waves traversing it at
that time, with the short waves stressing it more than long waves for a
given amplitude.  So by injecting a single frequency or a simple
low-frequency impulse (a soft mallet stroke), the waves bounce around
the cymbal with all frequency components intermodulating all the other
frequency components in unique ways in each of the 1000 or whatever
zones.  So a gong starts off with a "bong" sound which intermodulates up
to a "hiss".  (I think the same process occurs on the level of atom
cores and atomic outer-orbital electrons in thermal motion within a
solid object - any input frequency being intermodulated into a broadband
random set of frequencies with an upper bound which rises as a function
of the average energy.)

In reality, there are no separate zones of a cymbal or gong, so better
simulation would be achieved by modelling 10,000 zones or whatever.
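
In the same spirit, here is a toy two-dimensional membrane in which each
zone's wave speed rises with the local stress, taken here as the squared
magnitude of the displacement gradient.  A real cymbal is a stiff,
curved plate rather than a membrane, so this is only a sketch of the
zone idea, with made-up constants, but it sketches the mechanism by
which a soft low-frequency strike intermodulates upward:

# Toy "cymbal": a 100 x 100 grid of zones whose local wave speed depends
# on the local stress, so every wave frequency-modulates every other.
import numpy as np

N = 100                              # grid of N x N zones
dx = 1.0 / N
dt = 1.0 / 48000
c0 = 0.5 * dx / dt                   # base speed (2-D Courant number 0.5)
beta = 200.0                         # stress -> speed coupling (made up)
damping = 1e-5

yy, xx = np.mgrid[0:N, 0:N]
r2 = ((xx - N // 2) ** 2 + (yy - N // 2) ** 2) * dx**2
u = 0.001 * np.exp(-r2 / 0.01)       # soft, broad initial displacement
u_prev = u.copy()                    # (stands in for a mallet strike)

output = []
for _ in range(48000):               # one second of sound
    gy, gx = np.gradient(u, dx)
    stress = gx**2 + gy**2
    c2 = (c0 * (1.0 + beta * stress)) ** 2      # speed varies zone by zone

    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                       u[1:-1, 2:] + u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1])
    u_next = (2.0 * u - u_prev + (c2 * dt**2 / dx**2) * lap) * (1.0 - damping)
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0

    u_prev, u = u, u_next
    output.append(u[N // 3, N // 3])            # "pickup" point off-centre

audio = np.asarray(output)

Raising the stress-to-speed coupling strengthens the bong-to-hiss
intermodulation, but it also raises the local wave speed, so the time
step or grid spacing then has to shrink to keep the scheme stable.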

These are examples of real-world physical processes which produce
pleasing, familiar sounds through fiendishly complex interactions.  They
would be extremely computationally intensive to model, even if it were
possible to write suitable software.  It is easy to take
these sounds for granted, but even thinking about simulating them can
provide new levels of appreciation for the richness of the physical world.

It would be easier to bang a gong with a mallet, but I think it would be
fun and enlightening to use complex computer modelling to create
physical-like sounds and responses to other sounds which would be
difficult or impossible to realise in the physical world.

As far as I know, computer music synthesis languages don't work on
clusters, though some may use multithreading.

  - Robin






