A Modest Proposal (was [Beowulf] openMosix ending)

Robert G. Brown rgb at phy.duke.edu
Tue Jul 17 07:54:07 PDT 2007


On Tue, 17 Jul 2007, Chris Samuel wrote:

> On Tue, 17 Jul 2007, Robert G. Brown wrote:
>
>> The real drivers will install into the BIOS and should stop being OS
>> specific at all
>
> Given the general quality of BIOS and ACPI implementations this somehow does
> not fill me with a warm glow...
>
> "Our BIOS supports both types of Linux, RHEL and SLES"..

No, you misunderstand.  At this point in time, one major job of an
operating system is to hide the details of the hardware from the
programmer.  When I open a socket connection or configure TCP/IP, I do
so with tools that do not need to know WHICH ethernet card my system
has or its speed or whether it has onboard buffers or who made it.  All
I do is configure and open the socket for the "generic" network device.
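
To make the point concrete, here is a minimal sketch in C of what that
generic interface looks like from the programmer's side.  Nothing below
names or cares about the ethernet card; the address and port are just
placeholders.

    /* Open a TCP connection through the generic socket interface.
     * The kernel's network stack hides the hardware completely --
     * this code neither knows nor cares which NIC carries the bytes.
     * 192.0.2.1:80 is a placeholder address, not a real service. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);   /* generic TCP socket */
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(80);                       /* placeholder port */
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);   /* placeholder host */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("connect");
        close(fd);
        return 0;
    }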

A really, really major portion of the remaining pain associated with
using linux is that hardware MANUFACTURERS do not produce DRIVERS with a
toplevel "generic" interface.  This is actually remarkably silly --
there is no reason I can think of that a hardware device driver
should not present a uniform interface to the kernel, and of course many
now do -- USB devices, ATAPI, SCSI -- all of these are high level
interfaces and use generic kernel drivers (which don't always work,
sigh, but in PRINCIPLE there exists a STANDARD, and most e.g. CD-ROM
manufacturers comply with it because if they don't people won't buy
their devices and, besides, nonstandard devices are more expensive for
THEM to support).

Manufacturers of certain devices have opposed this sort of generic
driver interface because it lowers their perceived market advantage or
interferes with their engineering process that splits off function
between on-card stuff (requiring firmware and on-card intelligence to
support the generic control/communications interface) and off-card stuff
(that doesn't).  To make a really cheap card for certain functions, you
basically make the system CPU do a lot of the work instead of putting a
CPU and firmware on the card itself.  Again, Lexmark is a classic
example -- they make "impossibly cheap" printers to sell bundled with
systems.  Why are the printers so cheap?  Because they are NOTHING but
an engine for putting dots on paper according to a data stream coming in
through their printer interface.  ALL of the computation regarding
where those dots go has to be done on the host system.  For this they
require not just a "printer driver" per se -- they need a program that
takes the data stream sent "to the printer" and transforms it all the
way into the hardware control stream.  This is contrasted with a much
more expensive printer with a very high level interface -- send it a PDF
or PS file or text file and it will Do the Right Thing and print it, and
yes, it can handle a high level handshaking communications interface to
manage the file transfer.  It can ALSO do graphics and pixel-level
printing -- usually with a special language associated with it -- but
you can actually print quite a lot to it through a "generic" interface
and have it just work.
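
To illustrate what that high level interface buys you, here is a minimal
sketch, assuming a PostScript-capable printer that shows up as
/dev/usb/lp0 (substitute whatever node yours actually presents;
document.ps is a placeholder file):

    /* Send a PostScript file straight to a PostScript-capable printer.
     * All the rasterization happens on the printer; the host just
     * copies bytes.  /dev/usb/lp0 and document.ps are assumptions --
     * substitute your own device node and file. */
    #include <stdio.h>

    int main(void)
    {
        FILE *in  = fopen("document.ps", "rb");     /* placeholder input file */
        FILE *out = fopen("/dev/usb/lp0", "wb");    /* assumed printer device node */
        if (!in || !out) { perror("fopen"); return 1; }

        char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
            fwrite(buf, 1, n, out);                 /* the printer does the rest */

        fclose(in);
        fclose(out);
        return 0;
    }

Try the same trick with one of those "impossibly cheap" host-rendered
printers and you get garbage, because all of that computation has to be
done in the host-side driver first.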

OK, so now you have to have real device specific software that resides
on the host system.  Many hardware device manufacturers consider all or
part of this software to be a trade secret.  They are under the illusion
that somebody out there gives a rodent's furry behind that they cleverly
unroll a loop in the middle of their graphics stream processor, or that
somebody out there is going to reverse engineer their particular split
up of on and off device processing and challenge their market
"dominance".  My current favorite example of this sort of insanity is
broadcom wireless devices.  There are many models.  Each one is
typically a cheap, onboard wireless device in a consumer crade laptop.
Each one has unique, proprietary, firmware that must be downloaded to
the device.  And finally, Broadcom refuses to release that firmware to
linux people.

To make a Broadcom wireless device function under linux, one requires
"fwcutter", a reverse-engineered linux tool that takes a Windows driver,
cuts out the firmware, and packages it up so that it can be sent to the
device.  Once THIS is done, it is apparently straightforward to make it
function.  Broadcom clearly relies on being a price-point leader, and it
costs money to support linux, and they just shrug and say no.  Windows
drivers only, specific per device, keep it cheap.

This is where VMs may stimulate a change.  There is no real reason
anymore to install device drivers of this sort at the operating system
level.  VMs already are in the process of "defining" a generic interface
for non-generic devices -- my XP VM doesn't even know that it is a
laptop with wireless any more -- it thinks it has "a network" (a
pseudo-AMD network device, if it matters, for which it has very generic
drivers).  For a Windows box hosting a Linux VM, the Linux VM no longer
needs a Broadcom driver even if the host system uses a Broadcom device.

This creates some economic incentive to produce a system that presents
ALL network devices so that they look like that generic "pseudo-AMD
network device" (or an even more generic interface yet to be defined).
This basically would require, I think, an extension of the capabilities
of the standard BIOS firmware.  The BIOS would need a "device FW loader" and
a "device FW storage area", and ALL operating systems that ran on top of
the BIOS would require a common/portable interface for loading data into
the FW area.  The BIOS itself would then make all devices of a certain
type (firmware required or not, on or off device computing or not)
appear as generic devices to any resident kernel without the need of an
intermediary VM interface.  In fact, one could basically move most of the
VM interface and all the heterogeneous device support out of the kernel
and into the BIOS.
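
Just to sketch what I have in mind -- and this is purely hypothetical,
none of these names exist anywhere -- the interface such a BIOS
extension might export to a resident kernel could look something like:

    /* Hypothetical sketch only: the OS pushes an opaque vendor firmware
     * blob into a BIOS-managed store once, and thereafter sees nothing
     * but a generic device of a given class with uniform operations.
     * These declarations illustrate the idea; no implementation exists. */
    #include <stddef.h>
    #include <stdint.h>

    enum bios_dev_class { BIOS_DEV_NET, BIOS_DEV_STORAGE, BIOS_DEV_PRINTER };

    struct bios_generic_dev {
        enum bios_dev_class dev_class;
        /* uniform operations, identical for every device of the class */
        int (*send)(struct bios_generic_dev *dev, const void *buf, size_t len);
        int (*recv)(struct bios_generic_dev *dev, void *buf, size_t len);
    };

    /* hypothetical: load a vendor firmware blob into the BIOS "device FW
     * storage area", keyed by device; the BIOS, not the kernel, talks to
     * the real hardware after that */
    int bios_fw_load(uint32_t device_id, const void *blob, size_t len);

    /* hypothetical: enumerate the resulting generic devices */
    struct bios_generic_dev *bios_dev_open(enum bios_dev_class dev_class, int index);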

The advantages of this, and the products it would enable, are profound.

    * Generic Operating environments.  For $300 you buy XP Pro,
preinstalled on a USB flash stick, with all the generic drivers
preloaded and preconfigured.  Snap it onto any system, boot (or direct a
VM player at it) and you're running XP Pro.  For $160 you buy
FC-kitchen-sink, configured with generic drivers, on a USB flash stick.
Snap it onto any system, etc.  If today you want your PC to be a Mac, no
problem.  For $300 you buy your stick, plug it in, and poof, you're a
Mac.  Borrow a friend's stick, poof you're a Mac.  The license resides
in and with the stick, which can be all copy protected and everything
for the commercial OSs.

USB Flash is just a tad slow still for this purpose, but it saves
energy, money, and above all, SUPPORT costs and MANUFACTURING costs.
Right now it costs OEM resellers a lot of time to install and support
operating environments.  They'd be all over a motherboard/BIOS that
permitted a single OS image to install on all of their hardware, no
matter what physical devices it contained.

    * A complete restructuring of the kernel itself.  Once upon a time, a
long long time ago, the humble IBM PC actually used the BIOS (and pretty
much only the BIOS) to talk to attached devices.  Want a character on
the screen?  Use the BIOS call.  Want to write to floppy?  Use the BIOS
call.  I/O routines were trivial wrappers for BIOS calls.  At the same
time, Unix systems had moved the hardware compatibility layer into the
kernel -- everything was a file, and Unix kernels had to provide all the
wrapping required to "make" any attached device into a file.  The BIOS
model failed because nobody was willing to devote much of the limited
computational and memory resources of an x86 system to handle the
dazzling array of physical devices that were spawned by the burgeoning
multibillion dollar consumer computer business.
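
The Unix half of that split is easy to illustrate.  A minimal sketch --
the same write() call works whether the descriptor refers to an ordinary
file or a device node, because the kernel's driver layer makes everything
look like a file:

    /* "Everything is a file": identical calls, wildly different hardware.
     * /tmp/hello.txt and /dev/tty are just convenient, harmless targets. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static void say_hello(const char *path)
    {
        int fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0) { perror(path); return; }
        (void) write(fd, "hello\n", 6);   /* same call for file, terminal, disk */
        close(fd);
    }

    int main(void)
    {
        say_hello("/tmp/hello.txt");   /* an ordinary file */
        say_hello("/dev/tty");         /* the controlling terminal */
        return 0;
    }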

NOW, however, we have a SURPLUS of computing resources. We can put an
entire operating environment -- kernel, applications, data -- onto a RW
persistent storage device that is basically a single chip the size of a
segment of my little finger.  Computers are already starting to appear
with linux preloaded in exactly this way -- there was one in this
month's Linux Magazine, for example, and Neoware terminals are basically
computers along this model that can run Linux or Windows XPe.  No disk.
No fan.  And billions of clock cycles per second on multiple CPU cores,
with a full GB of RAM or more.  One can achieve the separation of
hardware device layer from operating kernel layer NOW, using
over-the-counter open/free software and hardware, by crafting a linux
image with VMware Player preinstalled that boots up directly into a
loader interface that lets you pick a VM to run or an OS image to
virtualize -- and nothing else.  However computationally inefficient
this is, it STILL
saves one all of that nasty administrative cost and is therefore worth
it.  It's just a matter of time before the kernel and BIOS renegotiate
the split between hardware device interface provision and the OTHER
functions of the kernel, and most of the former move to the BIOS behind
a HAL.  The kernel will then really, really dramatically shrink, and
coincidentally one of the last real advantages of Windows will
disappear.

This will have profound consequences (if it happens) in cluster
engineering.  There will be significant pressure to make high end
cluster network devices "generic" in some way, whether or not they are
TCP/IP per se.  Open source "glue" will be developed that transforms all
high speed native interfaces into a single "generic" high speed native
interface.  Ethernet based cluster engineering will become even simpler.
A "cluster distribution" will be conceptually replaced by a "cluster VM
image" that one simply plugs and plays.

Maybe this is all a fantasy on my part -- no surprise if it is -- but if
this IS the way things unfold we should be in for exciting times indeed.

     rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu




