I updated to the latest version of unRAID (v6.2.1), and the CrashPlan instance that runs in a Docker container started misbehaving.
At first it kept cycling through “Synchronizing block information” without ever actually running a backup; it was stuck in a loop repeating the sync operation.
I then changed the backup schedule by telling CrashPlan to run backups only between specified times and on specified days, for all the backup sets.
That stopped the repeated sync operation, but now, while doing backups, the CrashPlan application (and engine) would crash and restart every minute.
I tried changing the -Xmx parameter in the run.conf file, but the restarting behaviour persisted.
I can monitor the Docker container's memory usage using the “cAdvisor” container, and I can see that as memory usage gets close to 2 GB, the system restarts the container. I bumped the -Xmx parameter all the way up to 4096 MB, but the limit seems stuck at 2 GB…
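For reference, the -Xmx change lives in CrashPlan's run.conf. Here is a sketch of the relevant line; the exact file path and the other flags vary by CrashPlan version and Docker image, so treat everything except the -Xmx value as an assumption:

```
# run.conf excerpt (only the -Xmx value is the point here)
SRV_JAVA_OPTS="-Dapp=CrashPlanService -Xms20m -Xmx4096m"
```

The engine reads this on startup, so it needs a restart after the edit — which, as noted above, didn't actually lift the 2 GB cap in my case.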
The next way I came across to increase the allowed memory usage is to enter the CrashPlan console from within the app and issue the command “java mx 4096, restart”. (SEE LINK)
This successfully increased CrashPlan's allowed memory usage to 4 GB. Looking in the “cAdvisor” app, I can see that CrashPlan now uses 3.5 GB of RAM, and everything is stable.
I have had this problem before, where the CrashPlan backup service seems to get stuck “analyzing” data over and over without actually backing it up.
The last time that happened, I found no way around it except to restart the entire backup. And since it takes months to transfer 1.8 TB of data, it wasn't a decision I took lightly. I had searched all over the web and tried everything I could find, but to no avail.
This time I did some more digging and found a solution that worked.
For me, CrashPlan runs as a Docker container under unRAID, and unRAID is a virtual machine in my ESXi homelab. I had 4 GB allocated to unRAID to do what it needed to do. The memory usage (at least as reported by unRAID) was about 60-80%, so I didn't think I was running out of memory, but seeing as no other solution was working this time either, I made two changes:
- Increased the amount of RAM that CrashPlan can use (as per this article)
- Gave unRAID 4 GB more RAM, for a total of 8 GB.
Between those two tweaks, the next time the CrashPlan app started, it went straight to actually transferring files instead of “analyzing” them.
I had some teething pains getting the ESXi server running (I think some hardware incompatibilities), but for now (knock on wood) everything is stable. At one point I was seriously considering switching over to an Intel server to avoid compatibility issues, but that comes at the price of new hardware 🙁
Discussion of running UnRAID under ESXi on an AMD platform: http://lime-technology.com/forum/index.php?topic=22553.0
For future expansion, an eight-port HBA SATA card would be welcome (to allow for more drives).
A great post on SATA HBA controllers and their performance: http://lime-technology.com/forum/index.php?topic=43026.msg410578#msg410578
Now that I'm relatively happy with the network transfer speeds (a 10 Gb LAN would be nice, but that's cost-prohibitive for now), I think I'm able to nearly saturate the network with file transfers.
The next step is to figure out how the network shares should be set up and accessed. I don't want another piece of malware running rampant on my network and encrypting files again.
My first thought was to make all network shares read-only, with the exception of the user data share. The media and other long-term storage doesn't really need read/write access. I can manage those from a separate read/write share, while every other user gets read-only access.
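As a sketch, that split could look something like this in Samba's smb.conf. The share names, paths, and account names here are hypothetical, not my actual setup:

```
[media]
    path = /mnt/user/media
    read only = yes
    write list = admin      ; one maintenance account keeps write access

[userdata]
    path = /mnt/user/userdata
    read only = no
    valid users = alice
```

With `read only = yes`, everyone gets read access but only the accounts in `write list` can modify anything, which is exactly the ransomware containment I'm after for the long-term storage.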
The user data folder is another story. Users store their sensitive data there, and having it deleted or tampered with would be painful. It wouldn't be the end of the world, since my CrashPlan service backs up all the user data and can recover it at any time (even deleted files), but restores are a hassle and re-downloading the data is slow, so I'm treating that as a worst-case/house-burns-down scenario. I'd like a better option for recovering erased or corrupted data at home.
ZFS's snapshot feature is fantastic. I've used it, and it came in handy on a few occasions. Since I've consolidated all the storage on one system (unRAID) and moved away from FreeNAS, I'd like to find a way to get a similar job done in some way.
Just built a new unRaid setup (which is running as a VM inside ESXi 6.0.0).
Currently have 3 x 2TB drives in the array with no parity drive. This yields 6TB of storage.
Writing to a user share (no cache disk) yields some pretty great access speeds. The array is empty at this point, with no data on it yet and no parity drive installed.
The unRaid machine was near the Windows box from which I ran this test: maybe 6 ft of Ethernet from the Windows machine to the switch, and another 6 ft of cable to the unRaid box.
After transferring 4.7 TB of data to the array, I did another speed test. There is still no parity drive, but this time I ran the test from a bit further away, over maybe 30-40 ft of cable. Still pretty good speeds.
Doing a parity check (since I just installed a parity drive) gives me speeds of 120 MB/s (at the beginning of the check). This is very likely close to the max speed of the drives.
The parity check is finished, and I ran another speed test to see how the read/write speeds of the array have changed. The differences are smaller than I expected.
Once I get a Windows VM running on the same ESXi server, I will do another test to see how much of an effect the network has on these read/write speeds, but I don't think the network is limiting me.
I am quite happy with these numbers, as the old unRaid box was putting out on the order of 30-40 MB/s (about half of the new speeds).
UPDATE: (29 Dec 2015)
I have been using the unRaid NAS for over two weeks, and I just did another speed test. The server has seen steady use since then and is running at around 70% capacity right now.
The speeds are better than before. I added a couple of 32 GB SSDs I had lying around as cache drives (in RAID 1 for redundancy). They should take care of the slow write speeds.
The more I use UnRaid, and the more I read about it the more I like it.
I do, however, miss the snapshotting feature of ZFS. I will have to figure out a way to get similar functionality, as it's nice to have access to changed or deleted files after the fact.
I lost a hard drive the other day, along with whatever data was on it. I think I had most things backed up, but some data is still gone forever. This got me thinking about how to better protect ALL the devices in my house.
Storage is cheap, but I don't want two hard drives in every machine, and some machines, like laptops, can't hold two drives.
At the same time, I need to re-evaluate the NAS solution in my house. I'm currently managing two systems: an UnRaid computer for archival storage and a Nas4Free machine for faster access to data.
1. Add an SSD drive to UnRaid as a cache drive to speed up write functions on the UnRaid system, and compare network read/write speeds with the Nas4Free box. See how different the two really are.
2a. If speeds are very close, take apart the Nas4Free array and move the two 2TB drives to the UnRaid system.
2b. If speeds are not close buy two 2TB drives and replace the two 1TB drives in the UnRaid system. This will increase the array capacity by 2TB.
3. For all computers, set up software that backs up each machine's working files to a backup folder on the UnRaid system. Have this backup run once a day (probably at night). A scheduled rsync operation (one that also mirrors deletions) should suffice. I don't want an ever-growing amount of data, just a mirror copy of each machine's drive in case it fails.
For the longest time (ever since I set up unRaid), I had an issue where files saved on the unRaid system showed up as hidden files when viewed from Microsoft Windows.
I never really liked that. My solution had been to enable viewing of hidden and system files, but that was a workaround; I never really knew why it was happening, until now.
I came across this post on the Unraid Forums. You just need to add a couple of lines to discourage Samba from marking the files as hidden or system files.
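For the record, the settings involved are Samba's mapping of Unix permission bits onto DOS file attributes. A sketch of the kind of lines to add is below; I'm reproducing these from memory, so check the linked forum post for the exact ones:

```
[global]
    map archive = no
    map hidden = no
    map system = no
```

With the mapping disabled, Samba stops translating execute-permission bits on the files into the DOS hidden/system/archive attributes that Windows was reacting to.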
After doing some searching, I came across this Wiki entry on the UnRAID site.
Step-by-step instructions on what to do are in this thread on the UnRAID forum; look at post #3.
The recovered data is in the lost+found folder. To access it, telnet to the unRaid machine and look in /mnt/diskX (where diskX is the drive you are recovering from); there will be a lost+found folder there. I moved it to a user share so I can sort through it over the network.