IO Performance test for AuFS

Bonnie++ results from running on one of the 250gig drives.

Version 1.01d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
xen-steel        8G 71543  63 73234  14 33611   3 73275  65 78241   0 185.4   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   346   0 +++++ +++   302   0   343   0 +++++ +++   265   0
xen-steel,8G,71543,63,73234,14,33611,3,73275,65,78241,0,185.4,0,16,346,0,+++++,+++,302,0,343,0,+++++,+++,265,0

Bonnie++ results from running on the aufs mount.
Version 1.01d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
xen-steel       11G 67460  60 57827  13 26490   5 57663  40 60691   2 186.2   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   372   0 +++++ +++   294   0   363   0 +++++ +++   220   0
xen-steel,11G,67460,60,57827,13,26490,5,57663,40,60691,2,186.2,0,16,372,0,+++++,+++,294,0,363,0,+++++,+++,220,0

Not much difference… sequential block throughput drops about 20% through the aufs layer, but seeks and file creation are essentially unchanged.
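The machine-readable CSV lines bonnie++ prints make the comparison easy to script. A quick sketch in plain POSIX shell, with the two summary lines copied from the runs above (field 5 of the CSV is the sequential block write in K/sec):

```shell
#!/bin/sh
# Compare sequential block-write throughput between the two bonnie++ runs.
raw='xen-steel,8G,71543,63,73234,14,33611,3,73275,65,78241,0,185.4,0'
aufs='xen-steel,11G,67460,60,57827,13,26490,5,57663,40,60691,2,186.2,0'

raw_wr=$(printf '%s' "$raw"  | cut -d, -f5)   # 73234 K/sec on the bare drive
aufs_wr=$(printf '%s' "$aufs" | cut -d, -f5)  # 57827 K/sec through aufs

# Integer percentage drop going through the aufs layer
echo "block write: $raw_wr -> $aufs_wr K/sec ($(( (raw_wr - aufs_wr) * 100 / raw_wr ))% slower)"
```

The same `cut` trick works for any other column of the CSV line.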

Server OS upgrade?

Should I upgrade from OpenSuSE 10.3 to OpenSuSE 11.0?
Pros:
– Updated AuFS (so I don’t have to compile it from source)
– Possible Xen improvements (since OpenSuSE 11.0 uses Xen 3.2)
– Set everything up with Webmin to make it easily manageable from one place.

Cons:
– Takes time
– Unknown how long it will take to set up all the VMs again.

Moving away from Raid5

Move the contents of Jess’s 500gig drive (206gigs of content) to my old 250gig USB enclosure drive (233gigs usable).

Move all my RAID 5 content (553gigs) to Jess’s 500gig drive (452gigs usable) and my old 160gig drive (about 140gigs usable).

Then I can experiment with taking the RAID set offline, reformatting the four drives, and copying the data back. I will deal with AuFS and FlexRAID after that.
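Each leg of that shuffle is a copy followed by a verify before the source gets wiped. A minimal sketch (the helper name and the example mount points are made up for illustration):

```shell
#!/bin/sh
# Copy a tree and verify it byte-for-byte before the source is wiped.
copy_and_verify() {
    src=$1; dst=$2
    cp -a "$src"/. "$dst"/ || return 1   # -a preserves perms, times, symlinks
    diff -r "$src" "$dst"                # non-zero exit if anything differs
}

# e.g. (hypothetical mount points for the drives above):
# copy_and_verify /mnt/jess500 /mnt/usb250 && echo "safe to wipe the source"
```

Only after `diff -r` comes back clean would I touch the original drive.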

FS merger found.

Found exactly what I was looking for here.

The driver combines several mount points into a single one.

Actually, I think this may be better than the Drive Extender thing…

There’s also UnionFS and AuFS.

Played with AuFS a little, and got it working on the laptop.
This is the mount command that worked:
# mount -t aufs -o dirs=/home/outsider:/tmp=ro none /mnt/aufs

Just need to get it working on the server now.
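Once it works on the server, the mount should survive a reboot too. The /etc/fstab equivalent of the command above should look something like this (untested sketch, same branches as the laptop command):

```
none   /mnt/aufs   aufs   dirs=/home/outsider:/tmp=ro   0   0
```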

TWMS: spam because of online forms

It looks like the current form on the TWMS site is attracting a lot of spam.
I need to add some sort of CAPTCHA to prevent bots from sending messages. For now I have added a checkbox, which should slow the messages down, but I will implement a random-text image CAPTCHA as found here.
This should prove good enough, but I need time to implement it.

Come to think of it, I should probably refactor the portion of the page that generates the HTML code for the pop-up boxes.

Windows Home Server Drive Extender – for Linux

So it appears that nothing like the Drive Extender that WHS has exists for Linux; discussion on AVSForums leads me to believe that.
Explanation on how WHS Drive Extender works.

So I have decided to write some code to accomplish something similar.
What are the needs:
– amalgamate a pool of partitions/folders and make them appear as a single folder
– when data is written to the pool, the software decides where it is stored (which partition/folder in the pool)
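The second need — deciding where new data lands — could start as simply as “write to whichever pool member has the most free space.” A rough sketch (the function name is mine, not from any existing tool):

```shell
#!/bin/sh
# pick_branch: given a list of pool member directories, print the one whose
# filesystem currently has the most free space.
pick_branch() {
    best=""; best_avail=0
    for dir in "$@"; do
        # POSIX df -P keeps output on one line; field 4 is available KB
        avail=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
        if [ "$avail" -gt "$best_avail" ]; then
            best=$dir; best_avail=$avail
        fi
    done
    printf '%s\n' "$best"
}

# e.g. pick_branch /mnt/disk1 /mnt/disk2 /mnt/disk3
```

The real version would also need to handle a member filling up mid-write, but this is the core placement decision.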

Since everything runs from partitions/folders, FlexRAID can be applied to the same folders to add a level of redundancy to the whole system (something WHS doesn’t have, to my knowledge).