
Virtualizing Hardware Simulations

Last week I wrote about the Community Virtual Appliances that can be run under VMware.  Since then I got to thinking: would there be any benefit to running hardware simulations under a virtual machine?  After all, those simulations are processor and memory intensive... it seems unlikely that you'd be willing to trade performance just to make a server farm easier to manage.  But if the technology were in place so that the overhead of running a virtual machine was small (both Intel and AMD are coming out with just such features), there are a few problems that could be solved more robustly.

Often, chip simulations go through a lengthy initialization sequence before they get around to running the test you're interested in, especially when running gate sims.  It's possible in most simulators to dump the current state of the environment for later use, but testbench components outside the simulator may not behave appropriately when that state is restored.  What if, instead of relying on the EDA vendors to implement this functionality, you could run a simulation in a virtual machine through the initialization sequence and then stop just before things start getting interesting?  Multiple copies of the VM could be distributed throughout a batch pool, each one set up to load a different test, as sketched below.
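
To make that concrete, here's a rough sketch of what the batch-pool side could look like.  The "vm-manager" commands (clone, start, exec) are hypothetical stand-ins for whatever VM management tooling is actually available, not a real CLI; the point is only that every clone wakes up from the same post-initialization snapshot and differs only in which test it's told to run.

    # Sketch only: dispatch one VM clone per test, each resuming from the same
    # post-initialization snapshot.  "vm-manager" is a hypothetical command
    # standing in for whatever VM management interface is in use.
    import subprocess

    GOLDEN_VM = "/vmfarm/chip_sim_post_init"   # VM saved right after init finished
    TESTS = ["smoke_basic", "dma_stress", "link_retrain"]

    def vm_clone(golden, name):
        """Clone the golden post-init VM for one test."""
        clone_path = "/vmfarm/clones/" + name
        subprocess.run(["vm-manager", "clone", golden, clone_path], check=True)
        return clone_path

    def vm_start(vm):
        """Resume the clone; it wakes up exactly where the snapshot stopped."""
        subprocess.run(["vm-manager", "start", vm], check=True)

    def run_in_guest(vm, command):
        """Kick off a command inside the guest, e.g. loading one test."""
        subprocess.run(["vm-manager", "exec", vm, command], check=True)

    for test in TESTS:
        clone = vm_clone(GOLDEN_VM, test)
        vm_start(clone)
        # Each clone skips the lengthy initialization and goes straight
        # to running its assigned test.
        run_in_guest(clone, "run_sim --test " + test)

If the clones were linked clones sharing the golden image's disk, the per-test storage cost would stay small, which ties into the disk space concerns below.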

Similarly, what if you could save away a virtual machine containing a snapshot of the system exactly as it was configured when a given simulation ran, so that bugs could be reproduced weeks or even years after the software and servers that originally ran them became obsolete?  And this isn't only an issue for bugs.  I worked on a 10Gig Ethernet project back in 2001-2002.  We had to maintain a few old servers so that we could still run the simulation environment as late as 2004, long after almost all of the tool and OS versions in the environment had changed.  It was an ongoing pain because each time we made changes to the environment we had to make sure the old machines weren't broken.

There are several issues with building virtual machines for a given simulation environment or an individual simulation.  The main one would be disk space.  If you had to take a snapshot of *everything* and store it within a VM, each one might require gigabytes of space.  One possibility would be to save the OS and the tools in separate VM partitions and link the two together at runtime.  Another option would be to maintain a set of diffs between VMs (like a Subversion repository on a large and mostly binary scale), as sketched below.  A complete VM image would still require a lot of space, but storing new versions of each VM wouldn't be prohibitively expensive.
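
To illustrate the diff idea, here's a sketch of block-level deduplication: each version of a VM image is recorded as a manifest of block hashes, and only blocks that haven't been seen before are actually written to the store.  Nothing here is tied to a real VM disk format; the file names and block size are just placeholders.

    # Sketch only: store each VM image version as a manifest of block hashes,
    # keeping one copy of any block shared between versions.  This is the
    # "Subversion on a large and mostly binary scale" idea in miniature; a real
    # system would work on the VM's native disk format and compress the blocks.
    import hashlib
    import os

    BLOCK_SIZE = 1024 * 1024          # 1 MB blocks (arbitrary choice)
    STORE_DIR = "vm_block_store"      # one file per unique block

    def store_version(image_path, manifest_path):
        """Record a VM image as a list of block hashes, storing only new blocks."""
        os.makedirs(STORE_DIR, exist_ok=True)
        manifest = []
        with open(image_path, "rb") as img:
            while True:
                block = img.read(BLOCK_SIZE)
                if not block:
                    break
                digest = hashlib.sha1(block).hexdigest()
                block_file = os.path.join(STORE_DIR, digest)
                if not os.path.exists(block_file):   # only unseen blocks cost space
                    with open(block_file, "wb") as out:
                        out.write(block)
                manifest.append(digest)
        with open(manifest_path, "w") as mf:
            mf.write("\n".join(manifest))

    def restore_version(manifest_path, image_path):
        """Rebuild a full VM image from its manifest and the shared block store."""
        with open(manifest_path) as mf, open(image_path, "wb") as img:
            for digest in mf.read().split():
                with open(os.path.join(STORE_DIR, digest), "rb") as block_file:
                    img.write(block_file.read())

Saving a second, mostly identical image would then only cost the blocks that actually changed between versions, plus a small manifest.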

Another issue is the amount of time it would take to restore a VM.  In my experience it usually doesn't take more than a few minutes to reactivate a Fedora Core 4 VM, but things may be different with a system that has been busy doing complex computations.

The main problem with any of this is most likely that the people who know about virtualization infrastructure aren't the same people who know about hardware verification.  Perhaps once hardware virtualization support is generally available on x86 servers we'll start to see some movement in this area.
