Just built a new unRaid setup (which is running as a VM inside ESXi 6.0.0).
Currently have 3 x 2TB drives in the array with no parity drive. This yields 6TB of storage.
Writing to a user share (no cache disk) yields some pretty great speeds. At this point the array is empty, with no data on it and no parity drive installed.
The unRaid machine was near the Windows box from which I ran this test: maybe 6ft of Ethernet from the Windows machine to the switch, and another 6ft of cable to the unRaid box.
After transferring 4.7TB of data to the array, I did another speed test. There was still no parity drive, but I ran the test from a bit further away, over maybe 30-40ft of cable. Still pretty good speeds.
Doing a parity check (since I just installed a parity drive) is giving me speeds of 120MB/s (at the beginning of the check). This is very likely close to the max speed of the drives.
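As a rough sanity check on how long the check should take: a 2TB parity drive scanned end to end won't hold that 120MB/s the whole way (throughput drops toward the inner tracks), so assuming something like a 100MB/s average:

```python
# Back-of-the-envelope parity check duration.
# Assumption: ~100 MB/s average, since 120 MB/s is the outer-track peak
# and throughput falls off toward the end of the disk.
drive_size_tb = 2                     # parity drive size
avg_speed_mb_s = 100                  # assumed average throughput

total_mb = drive_size_tb * 1_000_000  # 2 TB ≈ 2,000,000 MB (decimal)
hours = total_mb / avg_speed_mb_s / 3600
print(f"Estimated parity check time: {hours:.1f} hours")  # ~5.6 hours
```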
The parity check is finished, and I ran another speed test to see how the read/write speeds to the array have changed. The differences are smaller than I expected.
Once I get a Windows VM running on the same ESXi server, I will do another test to see how much of an effect the network has on these read/write speeds, but I don't think the network is limiting me.
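In the meantime, a quick way to measure the wire speed on its own (no disks involved) would be a raw TCP push between the two boxes. This is just a sketch, not iperf; the host and port below are placeholders:

```python
#!/usr/bin/env python3
"""Quick network throughput check, to separate network speed from disk speed.

A minimal sketch: run with --server on one machine, then run it plain on the
other, pointing --host at the server. Host/port defaults are placeholders.
"""
import argparse
import socket
import time

CHUNK = 1024 * 1024          # 1 MB per send/recv
TOTAL_MB = 1024              # push roughly 1 GB of data

def server(port):
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            received = 0
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
    print(f"Received {received / 1e6:.0f} MB")

def client(host, port):
    payload = b"\0" * CHUNK
    start = time.time()
    with socket.create_connection((host, port)) as sock:
        for _ in range(TOTAL_MB):
            sock.sendall(payload)
    elapsed = time.time() - start
    print(f"~{TOTAL_MB / elapsed:.0f} MB/s over the wire")

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("--server", action="store_true")
    p.add_argument("--host", default="192.168.1.100")  # placeholder IP
    p.add_argument("--port", type=int, default=5001)
    args = p.parse_args()
    server(args.port) if args.server else client(args.host, args.port)
```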
I am quite happy with these numbers, as the old unRaid box was putting out on the order of 30-40MB/s (about half of the new speeds).
UPDATE: (29 Dec 2015)
I have been using the unRaid NAS for over 2 weeks and just did another speed test. The server has seen steady use since then, and it's running at around 70% capacity right now.
The speeds are better than before. I added a couple of 32GB SSDs I had lying around as cache drives (in RAID1 for redundancy). They should take care of the slow write speeds.
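One thing worth keeping in mind with the mirrored cache pool: two 32GB SSDs in RAID1 give 32GB of usable cache, not 64GB, so only the first ~32GB of a big transfer lands on the SSDs before writes spill back to the array. A quick sketch of that arithmetic (the 100GB transfer size is just an example):

```python
# Usable space in a two-drive RAID1 (mirrored) cache pool.
ssd_count = 2
ssd_size_gb = 32
usable_cache_gb = ssd_size_gb        # mirrored: one drive's worth, not 64 GB

transfer_gb = 100                    # example large transfer
spill_gb = max(0, transfer_gb - usable_cache_gb)
print(f"Usable cache: {usable_cache_gb} GB")
print(f"A {transfer_gb} GB transfer would push ~{spill_gb} GB straight "
      f"to the array once the cache fills")
```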
The more I use unRaid, and the more I read about it, the more I like it.
I do, however, miss the snapshotting feature of ZFS. I will have to figure out a way to get similar functionality, as it's nice to have access to changed or deleted files after the fact.
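One idea I may try (just a sketch, not something I've actually set up): rsnapshot-style hard-link snapshots, where a scheduled rsync copies the share to a dated folder and hard-links anything that hasn't changed against the previous snapshot, so each run only uses space for changed files. The paths below are placeholders:

```python
#!/usr/bin/env python3
"""rsnapshot-style hard-link snapshots (a sketch, not a ZFS replacement).

Assumptions: the share to protect is /mnt/user/data (placeholder), snapshots
go to /mnt/user/snapshots (placeholder), and rsync is available on the box.
"""
import subprocess
from datetime import datetime
from pathlib import Path

SOURCE = "/mnt/user/data/"               # placeholder: share to snapshot
SNAP_ROOT = Path("/mnt/user/snapshots")  # placeholder: where snapshots live

def take_snapshot():
    SNAP_ROOT.mkdir(parents=True, exist_ok=True)
    previous = sorted(d for d in SNAP_ROOT.iterdir() if d.is_dir())
    target = SNAP_ROOT / datetime.now().strftime("%Y-%m-%d_%H%M%S")

    cmd = ["rsync", "-a"]
    if previous:
        # Hard-link files that haven't changed since the latest snapshot.
        cmd.append(f"--link-dest={previous[-1]}")
    cmd += [SOURCE, str(target)]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    take_snapshot()
```

Run from cron on a schedule, that would at least give back the "grab yesterday's version of a file" part of what ZFS snapshots provide.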