Since the Christmas holidays when I rebuilt the VMware/file server, I’ve been having occasional reboots of the system. I have no idea why this is happening; I tried to diagnose the problem by removing all but the necessary cards from the server, but to no avail. Sometimes it runs for days, and other times it reboots every few hours.
I ran Memtest86+ on it to check the RAM, and that passed.
Maybe the mobo is wonky? Maybe I didn’t seat the heatsink on the CPU properly? Can the CPU cause this? Maybe it’s a PSU issue (I’m not close to fully loading the PSU, as it’s a 750W unit).
I started my experimentation with virtualization back in 2008, and back then I started with Xen. It was the virtualization option that openSUSE 11.1 had built in. Then in 2009 I switched to XenServer as a dedicated hypervisor to gain the clever GUI for managing the VMs.
A couple of years later I decided that XenServer wasn’t meeting some of my needs, mainly because most pre-made disk images were only made for VMware’s ESXi platform and very few were available for Xen.
In 2011 I switched to ESXi 5.1. Ever since then, I’ve run a handful of VMs, but looking back I really wasn’t taking advantage of the capabilities of ESXi. Even now I’m not. However, I’ve been stuck using it (in Dec 2015 I switched to ESXi 6.0) because it was all I knew to get virtual environments set up.
I had some teething pains getting the ESXi server running (I think some hardware incompatibilities), but for now (knock on wood) everything is stable. At one point I was seriously considering switching over to an Intel server to avoid compatibility issues, but that comes at the price of new hardware 🙁
http://lime-technology.com/forum/index.php?topic=22553.0 : Discussion of running unRAID under ESXi on an AMD platform.
For future expansion, an eight-port SATA HBA card would be welcome (to allow for more drives).
http://lime-technology.com/forum/index.php?topic=43026.msg410578#msg410578 : A great post on SATA HBA controllers and their performance.
To be able to truly test the performance of the drives in a NAS, I need a proper tool that runs on the NAS unit itself. I’ve seen a lot of benchmarking done with bonnie++, but installing it in FreeNAS wasn’t straightforward.
First you need to set up a jail, then you compile the program from source. FreeNAS is set up quite nicely to do this, so it’s not too much trouble, but figuring this out took some time.
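For my own notes, the rough sequence looks something like this (the jail itself is created from the FreeNAS GUI first; the dataset path below is made up for illustration):

```shell
# From the jail's shell. bonnie++ can be installed as a package or built from
# the ports tree; both are shown (pick one).
pkg install -y bonnie++
# -- or, building from source via ports --
cd /usr/ports/benchmarks/bonnie++ && make install clean

# Run it against the pool's mount point. -s should be about 2x the machine's
# RAM so ZFS caching doesn't inflate the numbers; -u is required when running as root.
bonnie++ -d /mnt/tank/bench -s 16g -u root
```

The key is pointing `-d` at a directory on the actual pool, not the jail’s own filesystem, so you’re benchmarking the array and not the jail dataset.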
Some links for future reference:
Now that I’m relatively happy with the network transfer speeds (a 10Gb LAN would be nice, but that’s cost-prohibitive for now), I think I’m able to nearly saturate the network with file transfers.
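As a sanity check on what “saturating” should mean here, the ceiling of gigabit ethernet works out like this (just back-of-the-envelope math; the ~6% overhead figure is a rough allowance for TCP/IP and Ethernet framing at a 1500-byte MTU):

```shell
# 1 Gb/s = 10^9 bits/s; divide by 8 for bytes, and knock off ~6% for
# protocol framing overhead to get the practical SMB/NFS ceiling.
raw_mbs=$(awk 'BEGIN { printf "%.0f", 1000*1000*1000 / 8 / 1000000 }')   # 125 MB/s raw
real_mbs=$(awk 'BEGIN { printf "%.0f", 125 * 0.94 }')                    # ~118 MB/s usable
echo "raw: ${raw_mbs} MB/s, realistic: ~${real_mbs} MB/s"
```

So anything in the 110-118MB/s range over SMB is effectively a saturated link.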
The next step is to figure out how the network shares should be set up/accessed. I don’t want another malware going rampant on my network and encrypting files again.
My first thought was to make all network shares read-only, with the exception of the user data share. The media and other long-term storage doesn’t really need r/w access. I can manage those from a separate r/w share, but every other user would get read-only access.
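In Samba terms the idea would look something like the fragment below (share paths and the admin account name are made up for illustration; unRAID exposes equivalent per-share SMB security settings in its GUI):

```
[media]
    path = /mnt/user/media
    read only = yes
    write list = mediaadmin    ; hypothetical curation account keeps r/w
[userdata]
    path = /mnt/user/userdata
    read only = no
    valid users = alice        ; only the owning user can reach this share
```

That way malware running under a regular user account can read the media but can’t encrypt it.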
The user data folder is another story. Users store their sensitive data here, and having it deleted or messed around with would be crappy. It’s not the end of the world, as I have a CrashPlan service that backs up all the user data, which can be recovered at any time (even deleted files), but it’s a hassle to get to and slow to re-download, so I’m leaving that as a worst-case/house-burns-down scenario. I’d like a better option for recovering erased or messed-up data at home.
The ZFS snapshot feature is fantastic. I’ve used it, and it came in handy on a few occasions. Since I’ve consolidated all the storage on one system (unRAID) and moved away from FreeNAS, I’d like to find a way to get a similar job done in some way.
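One snapshot-ish approach that works on unRAID’s plain filesystems is hard-link snapshots, the idea that rsync’s `--link-dest` automates: each snapshot directory shares unchanged files with the previous one, so old versions stay browsable but only changed files consume space. A minimal sketch of the core idea with GNU coreutils (all paths are throwaway examples):

```shell
src=/tmp/snapdemo/data
snaps=/tmp/snapdemo/snaps
rm -rf /tmp/snapdemo
mkdir -p "$src" "$snaps"
echo "v1" > "$src/file.txt"

snap1="$snaps/2015-12-29"
cp -a "$src" "$snap1"          # first snapshot: a full copy

echo "v2" > "$src/file.txt"    # the file changes later...
snap2="$snaps/2015-12-30"
cp -al "$snap1" "$snap2"       # new snapshot starts as hard links to the old one
# --remove-destination matters: it unlinks before copying, so overwriting a
# hard-linked file doesn't write through the link and corrupt the old snapshot.
cp -a --remove-destination "$src/." "$snap2/"
```

After this, the 2015-12-29 snapshot still holds `v1` while 2015-12-30 holds `v2`. In practice a cron job running `rsync -a --link-dest` does the same thing more robustly.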
Playing with ZFS (and FreeNAS) again.
I was wondering which way I should set up the RAID array for maximum throughput (speed), and with a bit of searching came across a fantastic article (found here).
The author does many comparisons of RAID 10, RAID 5 (raidz1), and RAID 6 (raidz2) for up to 24 drives, showing how performance scales with the number of drives as well as with the RAID level chosen.
Also, since ZFS offers real-time compression, there are comparisons with compression on (both lz4 and lzjb) and with no compression.
Excellent read and great reference. Saves me a bunch of time.
For my 4 x 1TB drives, I’m going with raidz1 (RAID5) with compression set to lz4. I can’t wait to see what kind of throughput I can get out of these disks.
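Under the hood, this is roughly what the FreeNAS GUI sets up (pool and device names are examples; raidz1 gives up one drive to parity, so 4 x 1TB nets about 3TB usable):

```shell
# Create a raidz1 pool from four disks, then enable lz4 compression.
zpool create tank raidz1 ada0 ada1 ada2 ada3
zfs set compression=lz4 tank

# Verify the setting, and check the achieved ratio after some data lands.
zfs get compression,compressratio tank
```

lz4 is cheap enough that it’s generally safe to leave on even for mostly-incompressible media.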
Sadly I will be limited by the 1Gb ethernet connection at around 100MB/s transfer speeds.
I could always bond a couple of 1Gb connections to increase the network capability….
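On FreeBSD/FreeNAS that would be a lagg interface, sketched below (interface names and the address are examples; the switch must support 802.3ad/LACP). One caveat worth remembering: a single SMB transfer still rides one physical link, so bonding mostly helps several simultaneous clients rather than one big copy.

```shell
# Bond em0 and em1 into a single LACP aggregate.
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport em0 laggport em1
ifconfig lagg0 inet 192.168.1.10 netmask 255.255.255.0
```

In FreeNAS itself this is done from Network → Link Aggregations rather than by hand, so the config survives reboots.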
Just built a new unRAID setup (which is running as a VM inside ESXi 6.0.0).
Currently have 3 x 2TB drives in the array with no parity drive. This yields 6TB of storage.
Writing to a user share (no cache disk) yields some pretty great access speeds. The array was empty at this point, with no data on it yet and no parity drive installed.
The unRAID machine was near the Windows box from where I ran this test: maybe 6ft of ethernet from the Windows machine to the switch, and another 6ft of cable to the unRAID box. After transferring 4.7TB of data to the array, I did another speed test. This time there is still no parity drive, but I’m running the test from a bit further away, at maybe 30-40ft of cable. Still pretty good speeds.
Doing a parity check (since I just installed a parity drive) is giving me speeds of 120MB/sec (at the beginning of the check). This is very likely close to the max speed of the drives.
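That number passes a quick sanity check; a parity check has to read every sector of the largest drive, so at ~120MB/s a 2TB drive needs at least this long (and more in practice, since drives slow down toward the inner tracks):

```shell
# Lower bound on parity check time: drive size divided by sustained read speed.
size_gb=2000    # 2TB parity drive
speed_mbs=120   # observed speed at the start of the check
hours=$(awk -v s="$size_gb" -v v="$speed_mbs" 'BEGIN { printf "%.1f", s*1000 / v / 3600 }')
echo "at least ${hours} hours"
```

So a check finishing somewhere in the 5-7 hour range would be consistent with healthy drives.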
Parity check is finished, and I ran another speed test to see how the read/write speeds to the array have changed. The differences are smaller than I expected.
Once I get a Windows VM running on the same ESXi server, I will do another test to see how much of an effect the network has on these read/write speeds, but I don’t think the network is limiting me.
I am quite happy with these numbers, as the old unRAID box was putting out on the order of 30-40MB/s (about half of the new speeds).
UPDATE: (29 Dec 2015)
I’ve been using the unRAID NAS for over 2 weeks and just did another speed test. The server has been in good use since, and it’s running at around 70% capacity right now.
The speeds are better than before. I added a couple of 32GB SSDs I had lying around as cache drives (in RAID1 for redundancy). They should take care of the slow write speeds.
The more I use unRAID, and the more I read about it, the more I like it.
I do however miss the snapshotting feature of ZFS. I will have to figure out a way to get similar functionality as it’s nice to have access to changed or deleted files after the fact.
Some links to useful info I found when flashing my M1015 to 9211 firmware.
I don’t need the Integrated RAID (IR) functionality; I want the controller to just act as a dumb controller (Initiator Target, aka IT mode), so I need to track down the IT version of the firmware.
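From the guides above, the crossflash itself boils down to something like the following, run from a DOS/EFI boot disk. Treat this as a sketch of the sequence, not a definitive recipe: exact filenames depend on the firmware package you download, and the SAS address is printed on a sticker on the card.

```shell
# Wipe the IBM SBR and flash region so the LSI flasher accepts the card.
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
# Reboot back into the boot disk, then flash the IT firmware.
# (-b adds the boot ROM; omit it if you don't boot from the controller.)
sas2flsh -o -f 2118it.bin -b mptsas2.rom
# Restore the card's original SAS address (elided here; use the one on the sticker).
sas2flsh -o -sasadd 500605bxxxxxxxxx
```

Skipping the boot ROM shaves a few seconds off POST, which is handy when the card is only passed through to an unRAID VM anyway.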