I thought the troubles with the ESXi server crashing when starting VMs that used passed-through PCI devices were over, but I was wrong.
The Win10 VM with the passed-through GPU is once again either locking up the whole machine (ESXi freezes) or triggering a PSOD (purple screen of death).
I read about others reporting similar issues and I updated the ESXi install to the latest build of version 6.0.0, but the problem persisted.
For reference, this is a listing of all the ESXi versions and updates and links to how to apply them (source).
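For ESXi 6.0 those updates can also be applied straight from the ESXi shell against VMware's online depot. A sketch of the process (the image profile name below is illustrative; pick the actual one from the listing above):

```shell
# Put the host in maintenance mode and allow outbound HTTP from the host
esxcli system maintenanceMode set --enable true
esxcli network firewall ruleset set -e true -r httpClient

# Apply an image profile from VMware's online depot
# (profile name is illustrative; take the real one from the version listing)
esxcli software profile update \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  -p ESXi-6.0.0-20160302001-standard

# Reboot to finish the update
reboot
```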
One thing I just tried was disabling Interrupt Remapping (source) to see if it had any effect; it had none.
I also removed the GPU from the list of passed-through devices and re-added it.
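For my own reference, disabling Interrupt Remapping is done from the ESXi shell (this is the approach from VMware KB 1030265), followed by a reboot:

```shell
# Check the current setting, then disable interrupt remapping (KB 1030265)
esxcli system settings kernel list -o iovDisableIR
esxcli system settings kernel set --setting=iovDisableIR -v TRUE

# A reboot is required for the change to take effect
reboot
```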
There’s still a bit of testing to be done, but I may finally have gotten where I’ve been wanting to get for a while now with virtualizing a remote desktop: the ability to use the GPU to render remote desktop sessions. By default this is turned off in Windows, and for the longest time I thought I had to do crazy things like install VMware Horizon, Citrix XenDesktop, or even Windows Server to enable RemoteFX (Microsoft's take on GPU-accelerated desktop delivery).
All along, it was just a few settings away in regular Windows itself.
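For future reference: the setting lives in Group Policy under Computer Configuration → Administrative Templates → Windows Components → Remote Desktop Services → Remote Desktop Session Host → Remote Session Environment ("Use the hardware default graphics adapter for all Remote Desktop Services sessions"). The registry equivalent, to the best of my knowledge (verify against gpedit before relying on it), is:

```
Windows Registry Editor Version 5.00

; Prefer the hardware GPU over the software renderer for RDP sessions
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services]
"bEnumerateHWBeforeSW"=dword:00000001
```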
I'm now running and playing on a remote desktop served by a virtual machine with a Quadro 6000 GPU (well, actually a modded GTX 480, since it’s way cheaper than a real Quadro 6000) in full HD.
Some testing is in order.
Bring on the heavy GPU use software!
What works so far:
– Full screen, full HD (1920×1080) video playback
What doesn’t work so far:
I started my experimentation with virtualization back in 2008, with Xen. It was the virtualization option that openSUSE 11.1 had built in; then in 2009 I switched to XenServer as a dedicated hypervisor to gain the clever GUI for managing the VMs.
A couple of years later I decided that XenServer wasn’t meeting some of my needs, mainly because most pre-made disk images were only built for VMware’s ESXi platform and very few were available for Xen.
In 2011 I switched to ESXi 5.1. Ever since then I’ve run a handful of VMs, but looking back I really wasn’t taking advantage of the capabilities of ESXi. Even now I’m not. However, I’ve been stuck using it (in Dec 2015 I switched to ESXi 6.0) because it was all I knew for getting virtual environments set up.
I had some teething pains getting the ESXi server running (I think some hardware incompatibilities), but for now (knock on wood) everything is stable. At one point I was seriously considering switching over to an Intel server to avoid compatibility issues, but that comes at the price of new hardware 🙁
http://lime-technology.com/forum/index.php?topic=22553.0 :Discussion of running UnRAID under ESXi on an AMD platform.
For future expansion, an eight port HBA SATA card would be welcome (to allow for more drives)
http://lime-technology.com/forum/index.php?topic=43026.msg410578#msg410578 :A great post on SATA HBA controllers and their performance
With version 5.5 of ESXi, the traditional vSphere Client can no longer fully manage the newest VMs (virtual hardware version 10), which is what brings all the new upgrades to a VM.
You need the vSphere Web Client (which requires a licensed vCenter Server) to fully manage those VMs, and that only comes with a 60-day free trial. For a home user, this kind of breaks the whole free ESXi server sandbox…
Tried to upgrade to ESXi 5.5.0 today, and got an error that there was no usable network device in the computer.
Found that strange, since the current ESXi 5.0.0 works fine with the on-board network adapter.
A bit of searching and it seems that VMware didn’t include the drivers for Realtek network adapters on the install disk. (makes me wonder what else was left out).
After a bit of searching, came up with a link describing the problem, and another link on how to add the driver onto the install disk.
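For completeness: on a host that is already up and running, a missing driver can also be added after the fact as a VIB package from the ESXi shell, something like this (the VIB filename is illustrative; use whichever Realtek driver package you actually downloaded):

```shell
# Community driver packages aren't signed by VMware, so lower the
# acceptance level first, then install the VIB and reboot
esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /tmp/net55-r8168.vib
reboot
```

That doesn't help when the installer itself refuses to run for lack of a NIC, though, which is why slipstreaming the driver onto the install disk is the fix here.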
May try moving my XenServer setup to vSphere to give me the option of PCI passthrough, which XenServer doesn’t offer.
This is a great guide (covering both the hardware and the installation) for ESXi 5 and setting up UnRAID on it, which is what I plan on doing as well.
The case my current UnRAID server is in has more than enough space for additional drives (I just need another drive tray) and I’d be good to go. I’m already running out of RAM in the current XenServer system, and upgrading the DDR2 RAM in that box would cost a lot of money, probably as much as building a new system using DDR3 RAM (yes, it’s that much cheaper right now). As an example, I just upgraded my desktop machine, and for the cost of 4 gigs of DDR2 RAM I got 8 gigs of DDR3 RAM plus a new motherboard.
I’d probably want 16 gigs of RAM in the new system (up from 8 gigs in the XenServer box). I just need to choose components, as apparently ESXi is a little more picky about hardware than XenServer (which seems to run on pretty much anything).
I keep having issues every time I install VirtualBox on my machine, and figured this time I would document what I did to get the kernel modules to properly compile upon installation.
First, make sure openSUSE is up to date.
Then install the kernel sources and build tools:
zypper install gcc make automake autoconf kernel-source kernel-syms
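With those packages in place, the VirtualBox kernel modules still need to be (re)built. A sketch (run as root; which command exists depends on the VirtualBox version installed):

```shell
# Rebuild the VirtualBox kernel modules against the freshly installed
# kernel sources and toolchain
/etc/init.d/vboxdrv setup   # VirtualBox 5.0 and older
/sbin/vboxconfig            # VirtualBox 5.1 and newer
```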