For the past few days, I’ve been working on getting the Athlon64 X2 4000+ set up as a Xen domain0, as well as experimenting with virtual machines installed on it.
Even though XP runs, no hardware can be passed through to it (PCI, USB… nothing). That’s not too big of a problem, because pass-through does work when the client is Linux… and I have already done it. [The reason it doesn’t work is that WinXP must run in a fully virtualized environment, and so far there is no way to pass physical hardware through to a fully virtualized machine. This can only be done in para-virtualized environments, meaning the kernel of the OS (Linux, of course) is modified slightly so it understands that it is running as a virtual machine.]
I have passed the Hauppauge WinTV capture card over to a virtual machine (running OpenSuse 10.3) and installed MythTV. The capture looks like it’s working very well.
All that seems to be required for the pass-through to work is a few commands run when domain0 boots (to unbind the device from its driver and make it ready for a virtual machine to use it):
# load the pciback driver, which holds on to devices destined for guests
modprobe pciback
# unbind the capture card's video function from the bttv driver and hand it to pciback
SLOT=0000:01:06.0
echo -n $SLOT > /sys/bus/pci/drivers/bttv/unbind
echo -n $SLOT > /sys/bus/pci/drivers/pciback/new_slot
echo -n $SLOT > /sys/bus/pci/drivers/pciback/bind
# hand the card's second PCI function (01:06.1) to pciback as well
SLOT=0000:01:06.1
echo -n $SLOT > /sys/bus/pci/drivers/pciback/new_slot
echo -n $SLOT > /sys/bus/pci/drivers/pciback/bind
And then when starting the machine, use this command:
xm create
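For reference, the domU config ends up looking something like the sketch below; the name, memory size, kernel and image paths are only placeholders, so adjust to taste. The key part is the pci = [...] line, which hands the two pciback-owned functions to the guest.

# /etc/xen/mythtv  (example config; names and paths are placeholders)
name    = "mythtv"
memory  = 512
kernel  = "/boot/vmlinuz-xen"
ramdisk = "/boot/initrd-xen"
disk    = [ 'file:/var/lib/xen/images/mythtv-root.img,xvda1,w' ]
root    = "/dev/xvda1 ro"
vif     = [ 'bridge=br0' ]
pci     = [ '0000:01:06.0', '0000:01:06.1' ]

# then boot it by config name:
xm create mythtv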
And it works! I just need to learn how to properly use and configure MythTV…
Now the other thing I’ve been thinking about is how to manage all these virtual machines: what kind of disk space to give them (virtual disk files or direct disk access), and more importantly, how to back them all up and be able to recover everything in case of a failure.
Of course the easy way out is to run everything (domain0 and all the other virtual machines) off a RAID5 array. But that’s not possible. The current RAID5 array is one large XFS filesystem, which can’t be shrunk to make room for a separate filesystem (or LVM volume) alongside it onto which to migrate the data.
So I’m stuck with using a single drive for the OS side of things.
Currently, the OS for each virtual machine lives in a file which the VM uses as its disk. I don’t know what performance hit I’m paying for this. Another idea I’ve read about is to create a number of LVM volumes and let each VM use one as its disk… I would imagine this has better performance than the first method, but I have to investigate.
The attractive aspect of the first option (files used as disks by the VM OS) is that I can compress and back up the file on a regular basis, and if there ever is a problem, I can just revert to the last backed-up file. I don’t know how I would do that for an LVM volume.
So the idea would be to create 3 files (to be used as disks) for each VM: one for the actual OS partition, another for swap, and the last for a /tmp partition. Of those 3, only the OS file would need to be backed up; there’s no need to back up the swap and /tmp partitions, which saves space.
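To make the comparison concrete, the two options only differ in the disk line of the domU config (paths and volume names below are made up for illustration):

# option 1: a file on domain0's filesystem, attached via loopback
disk = [ 'file:/var/lib/xen/images/vm1-root.img,xvda1,w' ]

# option 2: an LVM logical volume used directly as the guest's disk
# (created beforehand with something like: lvcreate -L 8G -n vm1-root vg0)
disk = [ 'phy:/dev/vg0/vm1-root,xvda1,w' ]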
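A rough sketch of what that file-based backup could look like, assuming the guest is called mythtv and its image lives under /var/lib/xen/images (both made-up names): shut the guest down so the filesystem inside the image is consistent, compress the image to the backup location, then bring the guest back up.

# stop the guest and wait for it to actually go down
xm shutdown mythtv -w
# compress the OS image to the backup location, stamped with the date
gzip -c /var/lib/xen/images/mythtv-root.img > /backup/mythtv-root-$(date +%Y%m%d).img.gz
# start it again from its config
xm create mythtv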
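Creating the three images would go something like this (sizes and paths are just examples), after which all three get listed on the guest’s disk line:

# 4 GB root, 512 MB swap, 1 GB /tmp, all as plain files
dd if=/dev/zero of=/var/lib/xen/images/vm1-root.img bs=1M count=4096
dd if=/dev/zero of=/var/lib/xen/images/vm1-swap.img bs=1M count=512
dd if=/dev/zero of=/var/lib/xen/images/vm1-tmp.img bs=1M count=1024
mkfs.ext3 -F /var/lib/xen/images/vm1-root.img
mkswap /var/lib/xen/images/vm1-swap.img
mkfs.ext3 -F /var/lib/xen/images/vm1-tmp.img

# and in the domU config:
disk = [ 'file:/var/lib/xen/images/vm1-root.img,xvda1,w',
         'file:/var/lib/xen/images/vm1-swap.img,xvda2,w',
         'file:/var/lib/xen/images/vm1-tmp.img,xvda3,w' ]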
All that would be saved on the OS partition is OS files. The working data is always saved on the RAID5 drives.
So each OS partition would contain a number of links to folders on the RAID5 array. Another thing to find out is which way of accessing this data has the smallest performance hit. Do I give all the VMs access to the physical drive/array (can more than one VM actually use one physical partition??)? Do I just set it up as an NFS share? What about this iSCSI thing? Is it faster than NFS?
A good read along similar lines:
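For reference, the NFS route is the simplest to sketch: an export of the RAID5 mount point from domain0 plus a mount in each guest (the mount point, subnet and hostname below are placeholders):

# on domain0, in /etc/exports: export the RAID5 array to the VMs' network
/raid5  192.168.1.0/24(rw,sync,no_subtree_check)

# in each guest, in /etc/fstab: mount it where the links expect to find it
dom0:/raid5  /data  nfs  defaults  0 0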
http://www.shorewall.net/XenMyWay.html