
Insert "Whoo whaa" noises here,  But this is a musing post of tech geekery. So, what's advent is the current crop of CPU's are all either low-power, or multicore hot-rods,  where we have more and more cores on a chip. At the same time, almost all modern hardware supports Vz instructions (Virtualization) in the hardware, to easily swap complete context.

Add to this the emergence of Vz tech in Windows, OS X and Linux, and we can start to suspect that some kinds of software will, in the not too distant future, be completely virtualized. You would distribute disk images: read-only filesystems with a kernel and a basic net environment. The image boots up in its own virtualization sphere, and from there gets an IP via local DHCP.
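As a rough sketch of what booting such an image could look like today (on Linux, driving QEMU from Python): the image name and memory size below are invented, but -snapshot really does keep the distributed image read-only, and QEMU's user-mode networking ships a built-in DHCP server.

    import subprocess

    # Boot a hypothetical appliance image. -snapshot redirects all
    # writes to a throwaway temp file, so the image stays read-only.
    vm = subprocess.Popen([
        "qemu-system-x86_64",
        "-m", "256",                  # modest RAM for a small appliance
        "-snapshot",                  # never modify the distributed image
        "-hda", "appliance.img",      # hypothetical disk image
        "-netdev", "user,id=n0",      # user-mode net: NAT + built-in DHCP
        "-device", "e1000,netdev=n0",
    ])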

The hosting machine provides net access via NAT+DHCP techniques. Configuration happens over some network bus mechanism (DBus, plain IP comms, SOAP, DCOM, whatever technology, more or less). Filesystem access is via a locally mounted network filesystem: export the user data as a shared FS and mount it inside the disk image. Network configuration comes via DHCP from the host, and UI settings and such come over the message protocol.
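To make the config-over-a-bus idea concrete, here's a guest-side sketch using dbus-python. The service name, object path and method names are hypothetical stand-ins for whatever the host would actually export:

    import dbus

    # Connect to the bus the host exposes to its guests (the session
    # bus here is purely for illustration).
    bus = dbus.SessionBus()

    # Hypothetical host-side configuration broker.
    proxy = bus.get_object("org.example.HostConfig",
                           "/org/example/HostConfig")
    config = dbus.Interface(proxy,
                            dbus_interface="org.example.HostConfig")

    # Ask where the shared user-data export lives, and which UI
    # settings the guest application should use (hypothetical methods).
    share = config.GetUserDataExport()
    ui_settings = config.GetUISettings()
    print(share, ui_settings)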

UI windows get exported via VNC-like window exposure. Fast enough these days (heck, use shared memory-buffer window access for that; OS X has done it with X11+Quartz, and other OSes/GUIs can solve it easily).
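On Linux, one way to fake this today is to run the app against a headless X server and expose the display over VNC. Xvfb and x11vnc are real tools; the display number and the application are placeholders in this sketch:

    import os
    import subprocess
    import time

    # Headless X server on display :1 (no physical screen needed).
    xvfb = subprocess.Popen(["Xvfb", ":1", "-screen", "0", "1024x768x24"])
    time.sleep(1)  # crude, but gives the server a moment to come up

    # Launch the application against the headless display.
    env = dict(os.environ, DISPLAY=":1")
    app = subprocess.Popen(["xterm"], env=env)  # placeholder application

    # Expose the whole display to the outside world via VNC.
    vnc = subprocess.Popen(["x11vnc", "-display", ":1", "-forever"])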

So, what does this -gain- us? On one hand, we have software that needs constant security upgrades for every piece in its stack, which is a maintenance nightmare. On the other hand, we have software without dependency requirements and with a known library stack, that can run on more or less any OS. For some things it makes a lot of sense (services of various kinds), for others not so much (games won't work). But it allows Software as a Service to work nicely and independently. Add an authentication step plus timed certificates inside the image, or an encrypted image with a loader, and suddenly you can lease software to people on a contract and not worry about anything. I can imagine MS wanting to do Office this way for large customers.
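The timed-certificate bit could be as simple as the in-image loader refusing to start once a baked-in cert has expired. A minimal sketch, assuming the Python cryptography package and a made-up certificate path:

    import sys
    from datetime import datetime
    from cryptography import x509

    # Hypothetical lease certificate baked into the image.
    with open("/etc/lease.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # Refuse to launch the leased service once the lease runs out.
    # not_valid_after is a naive UTC datetime, hence utcnow().
    if datetime.utcnow() > cert.not_valid_after:
        sys.exit("lease expired; renew your contract")

    print("lease valid until", cert.not_valid_after)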

Also, virtualization is the buzzword of the day.

Currently, I'd do it (on Linux) with some subset of the following techniques:

Boot -> DHCP: get DNS + shared space + IP. Mount the disk image read-only, mount the shared space read-write. Start DBus for communication with the outside, then launch the service. Send a note via DBus about the ports we want "open". Go into a headless X server, launch a window, and export it to the outside DISPLAY via a network connection. Done: the application lives in its own virtual world, and communicates with the outside via known network products and protocols.
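Strung together, the guest-side startup could look something like this sketch. The NFS export, bus names, port number and binary are all invented for illustration:

    import os
    import subprocess
    import dbus

    # Mount the host's shared user-data export read-write
    # (hypothetical export name; the root image itself is already RO).
    subprocess.run(["mount", "-t", "nfs",
                    "host:/export/userdata", "/data"], check=True)

    # Tell the host, over the bus, which ports this appliance
    # wants "open" (hypothetical broker and method).
    bus = dbus.SessionBus()
    broker = dbus.Interface(
        bus.get_object("org.example.HostBroker", "/org/example/HostBroker"),
        dbus_interface="org.example.HostBroker")
    broker.RequestPorts([8080])

    # Launch the service with its window sent to the outside DISPLAY.
    env = dict(os.environ, DISPLAY="outside-host:0")  # hypothetical display
    subprocess.Popen(["myservice-ui"], env=env)       # hypothetical binary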

That still doesn't mean a cracked application won't write bad stuff to the shared disks, but hey, that's not what we're trying to solve anyhow.

So, what's the use?  None, really.  Nothing that can't be solved in a better way via some other kind of "real" security or distribution rights.

( I should patent this ;)