[Beowulf] cluster for doing real-time video panoramas?

Jim Lux James.P.Lux at jpl.nasa.gov
Thu Dec 22 09:56:11 PST 2005


At 08:42 AM 12/22/2005, Bogdan Costescu wrote:
>On Thu, 22 Dec 2005, Jim Lux wrote:
>
> > But maybe, 3-4 real wide-angle cameras (perhaps shining the camera
> > into a curved reflector - think Christmas tree ornament - to give
> > spherical situational awareness),
>
>You'd need extra transformations afterwards from spherical to plane.

Not really.. you do the transformation calculations once (given the 
geometry) and then save the coefficients for later.  It would actually 
require less computation (since there are fewer total input pixels) or 
the same, depending on whether you start with the video image and fill 
it into an output array, or start with the output array and fill it by 
looking up the appropriate input pixels.  Either way, each output pixel 
is some linear combination of a small number of input pixels from one 
or more frames.
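
To make that concrete, here's a minimal sketch of the precomputed-map 
idea in C.  The structure name, the 4-tap bilinear scheme, and the 
single-channel frames are just my illustration; the real table would 
come out of whatever mirror/lens geometry you calibrate:

#include <stddef.h>

/* Built once from the camera/mirror geometry: for each output pixel,
 * the (flattened) indices of a few source pixels and their weights. */
typedef struct {
    int   idx[4];   /* source pixel indices (e.g. 4 bilinear taps)  */
    float w[4];     /* weights summing to 1 (set a weight to 0 for
                       taps that fall off the input image)          */
} RemapEntry;

/* Run per frame: each output pixel is a small linear combination of
 * input pixels, looked up through the precomputed table. */
void remap_frame(const unsigned char *src, unsigned char *dst,
                 const RemapEntry *map, size_t n_out)
{
    for (size_t i = 0; i < n_out; i++) {
        float acc = 0.0f;
        for (int k = 0; k < 4; k++)
            acc += map[i].w[k] * (float)src[map[i].idx[k]];
        dst[i] = (unsigned char)(acc + 0.5f);
    }
}

Walking the output and looking up inputs (rather than scattering input 
pixels forward) is the variant that leaves no holes in the panorama.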


>The cost for the extra CPUs and networking to do this might be higher
>than the cost of a few cameras :-)
>
> > Or, for the more technically inclined.. go get one of those nifty
> > 2.4GHz wireless cameras and tape it to the top of a toy radio
> > controlled car.  Attempt to drive the car by looking only at the
> > video screen, without having any visibility of the car.
>
>I did this some years ago and managed quite well, maybe because I
>have a good visual memory and the environment was quite static. Only
>when a second similar toy car came into play did things become
>difficult.

Exactly.. it's the situational knowledge.  I can drive a robot around my 
house fairly well, but driving the robot to my parents' house a few hundred 
meters away took a remarkably long time (lots of stopping, spinning to look 
around and get situational awareness, etc.).


> > I wonder how one could hook up all cameras to all nodes (cheaply)..
> > If you wanted to parallelize the rendering on a frame-by-frame
> > basis, you'd need to stream the video from all cameras to all
> > processors.
>
>Why not use one or a few head nodes that do only the data
>acquisition, maybe decompression (if the data aren't raw images), and
>alignment of the data so that subsequent operations are optimized
>from a memory-access point of view? Then they would send each frame
>to a different compute node in round-robin fashion or based on load.
>
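
That's probably the sanest topology.  As a sketch of the dispatch side, 
assuming MPI over the cluster interconnect (the frame size, the frame 
count, and the stop-tag convention here are all made up for 
illustration):

#include <mpi.h>
#include <string.h>

enum { W = 640, H = 480, FRAME_BYTES = W * H * 3,
       N_FRAMES = 1000, STOP_TAG = N_FRAMES };

int main(int argc, char **argv)
{
    static unsigned char frame[FRAME_BYTES];
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* needs size >= 2 */

    if (rank == 0) {                 /* head node: acquire and dispatch */
        for (int f = 0; f < N_FRAMES; f++) {
            memset(frame, 0, FRAME_BYTES);  /* stand-in for a real grab */
            int dest = 1 + f % (size - 1);  /* round-robin over workers */
            MPI_Send(frame, FRAME_BYTES, MPI_UNSIGNED_CHAR,
                     dest, f, MPI_COMM_WORLD);
        }
        for (int d = 1; d < size; d++)      /* tell the workers to quit */
            MPI_Send(frame, 0, MPI_UNSIGNED_CHAR, d, STOP_TAG,
                     MPI_COMM_WORLD);
    } else {                         /* compute node: receive and warp  */
        MPI_Status st;
        for (;;) {
            MPI_Recv(frame, FRAME_BYTES, MPI_UNSIGNED_CHAR, 0,
                     MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == STOP_TAG)
                break;
            /* per-frame warp/stitch (e.g. remap_frame above) goes here */
        }
    }
    MPI_Finalize();
    return 0;
}

Load-based dispatch is the same skeleton with the workers sending a 
small "ready" message back and the head choosing whoever asked last.
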
>Another idea: AFAIK FireWire should allow a device to be accessed
>from more than one computer - I don't know how well this works in
>practice or whether there are any constraints that would make it
>impractical for this application (e.g. ones that would delay frames).

Indeed... it's "theoretically" possible in the sense that the 1394 spec 
allows it, but I don't know if it's doable in practice.  I'm fairly sure 
the video stream is a 1394 isochronous channel (maybe not.. maybe it's 
just a frame buffer and you have to fetch frames explicitly).


>  All the
>computers that would need to access data from a certain camera would
>need to be daisy-chained - this introduces a hard partitioning. A
>workaround would be to have all cameras and all computers
>daisy-chained, but this would put a limit on the number of cameras
>that can be connected. As a benefit, using FireWire (either way) would
>eliminate the need to transfer frames between a head node and a
>compute node, so the network setup might be easier.

James Lux, P.E.
Spacecraft Radio Frequency Subsystems Group
Flight Communications Systems Section
Jet Propulsion Laboratory, Mail Stop 161-213
4800 Oak Grove Drive
Pasadena CA 91109
tel: (818)354-2075
fax: (818)393-6875




