02. After adding all of those disks as members of the RAID 5 array, you can go further: click on Volumes > Volume Groups
and add a new volume group.
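Behind the web UI, Openfiler is driving standard LVM. A minimal command-line sketch of the same step, assuming the backing device is the software RAID array at /dev/md0 and the group is named vg0 (both names are my assumptions, not what Openfiler picks for you):

  # mark the device as an LVM physical volume (assumed device name)
  pvcreate /dev/md0
  # create a volume group named vg0 on top of it (assumed name)
  vgcreate vg0 /dev/md0
  # confirm the group exists and shows the expected free space
  vgdisplay vg0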
03. Once the volume group is created, you can go and create the array. Go to Volumes > Software RAID and click on the link “create new raid partition”.
You should obtain a screen like the one below, where you can easily choose the disks participating in the RAID 5 and create an array. In my case I chose all four 1 TB SATA drives I had in my system.
Finally you should obtain a screen like the one below. You can see that the array is in a degraded state and the synchronization has not started yet. But don't worry, it will start automatically.
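If you prefer to watch the rebuild from the console rather than the web UI, a quick sketch (assuming the new array came up as /dev/md0, which may differ on your box):

  # overall md status, including the resync/recovery progress
  cat /proc/mdstat
  # detailed view of one array: state, member disks, rebuild percentage
  mdadm --detail /dev/md0
  # optional: raise the resync speed floor (KB/s) if the initial sync crawls
  echo 50000 > /proc/sys/dev/raid/speed_limit_min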
04. The next thing is to format this array. For this, go to the Shares menu and click on the link “create new file system volume”.
You have quite a few options there; I took the default one, XFS. But that does not mean you can't choose ext3 or ext4… -:)
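The UI is only wrapping the usual LVM and mkfs tools here. A rough command-line equivalent, with the volume name and size being assumptions of mine:

  # carve a logical volume out of the group, then format it
  lvcreate -L 500G -n data vg0
  mkfs.xfs /dev/vg0/data
  # or, if you prefer ext3 / ext4, use one of these instead
  mkfs.ext3 /dev/vg0/data
  mkfs.ext4 /dev/vg0/data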
Read the rest of the tutorial on the next page.
PiroNet says
Great post. Looking forward to reading your benchmark test results.
The test results with FreeNAS were a bit disappointing in my opinion…
Vladan SEGET says
That’s why I’m trying other solutions. Did you try Ubuntu server with mdadm? It seems that the write speeds are more than double… but everything has its time… -:)
Cheers
Vladan
PiroNet says
I did not try Ubuntu but my QNAP devices use mdadm. With 4x1TB disks in RAID 5 you should get write performance between 55 and 75 MB/s.
Cheers
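For anyone who wants to reproduce that kind of setup on a plain Ubuntu server with mdadm, a minimal sketch for four 1 TB disks in RAID 5 (the device names sdb–sde are assumptions, adjust them to your system):

  # build the RAID 5 array from four whole disks
  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  # persist the array definition so it assembles at boot
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  update-initramfs -u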
Marcb says
I’ve been running Openfiler 2.3 for a few years now and like it very much. I’ve gone the iSCSI route with a dedicated 1 Gb link between my ESXi server and Openfiler.
I never did any benchmarks with the config I have, so I’m curious as well what the difference between an NFS and an iSCSI connection would be.
Nice post
EV_Simon says
Vladan,
I will power up my Openfiler 2.99 machine tonight, run the same tests as you, and publish them over on my site in the next day or so, because I certainly didn’t see such a drop in performance in the testing I did.
I will only run the 4k tests, but I will run them twice to get the average score from both runs.
Vladan says
Simon,
Good idea. I forgot to mention that I used the mdadm fix for Openfiler that you announced in this article on your blog: http://www.everything-virtual.com/?p=349
Cheers,
Vladan
EV_Simon says
Vladan, additional testing carried out. http://www.everything-virtual.com/?p=378
EV_Simon says
Now testing Open-E DSS v6 using the same script as before; I will do additional testing using the suggestion from Didier over on my site (as well as IOzone).
EV_Simon says
Vladan, DSS v6 testing carried out. This was done using the original test script you used; I am currently re-testing using 32 outstanding IOs.
Testing with IOzone will happen over the weekend, but DSS definitely shows a lot better performance than Openfiler.
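For anyone wanting to repeat the IOzone part, a typical invocation might look like this; the test file path and sizes are assumptions, and the file should live on the datastore you want to measure:

  # write/rewrite and read/reread tests, 4 KB records over a 4 GB file
  iozone -i 0 -i 1 -r 4k -s 4g -f /mnt/datastore/iozone.tmp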
Vladan SEGET says
Simon,
You’re way ahead of me… -:) Great to team up with you to see the best we can get from a home-made NAS box and find the best-performing platform for shared storage for VMware vSphere.
EV_Simon says
Vladan, I have now carried out testing using Didier’s original IOmeter script, but using 32 outstanding IOs instead of the default 1; the results are over on my site. To make things comprehensive I will also re-run the Openfiler test using the same settings, but what we can definitely see from the runs with a single outstanding IO is that DSS far outperforms Openfiler, and if you’re using NFS as your storage solution then DSS is definitely the way to go.
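IOmeter’s “outstanding IOs” setting is simply the queue depth per worker. As a rough Linux-side analogue (not the IOmeter script itself), fio can model the same 4k random-read load at a queue depth of 32; the target path is an assumption:

  # 4k random reads, 32 outstanding IOs, direct IO against a file on the datastore
  fio --name=qd32 --filename=/mnt/datastore/fio.test --size=4g \
      --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
      --runtime=60 --time_based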
venkat says
Thanks for the great post.
Vladan, I tried the setup in the same way and everything went fine until the end, but at the last step, when I added the NFS share in ESX, I received the following error message:
“Call “HostDatastoreSystem.CreateNasDatastore” for object “datastoreSystem-10” on vCenter Server “vc.lab.dom” failed.
Operation failed, diagnostics report: Unable to complete Sysinfo operation. Please see the VMkernel log file for more details.” Kindly help me on this…
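One frequent cause of that error is the NFS export on the Openfiler side not permitting root access from the ESX VMkernel network. A sketch of things worth checking; the IP addresses, share path and datastore name below are examples, not taken from the post:

  # on the Openfiler box: the export should allow the VMkernel network and not squash root
  cat /etc/exports
  #   e.g.  /mnt/vg0/data/nfs  192.168.1.0/24(rw,no_root_squash,async)
  exportfs -ra
  # on the ESX host: try mounting the share from the command line to see the raw error
  esxcfg-nas -a -o 192.168.1.10 -s /mnt/vg0/data/nfs openfiler-nfs
  # and check the VMkernel log that the error message points to
  tail -f /var/log/vmkernel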
former vmware employee says
It’s not supported. Sure, Openfiler will work (until it breaks). Then you have to reboot everything (ESX, then Openfiler, then bring ESX back online).
It doesn’t handle SCSI reservations properly, and if you have more than one ESX host connected you might find your storage disappearing or locking up due to this issue.
There are lots of unsupported alternatives, but steer clear of Openfiler unless you don’t care about hard-stopping VMs.
The only VMware-supported (free) NFS solution for ESX 4.x is Fedora Core (if memory serves me correctly, I think version 8).
Craig says
Has anyone gotten FC working? This is my status on that project http://realworlducs.com/?p=84
Craig
Guest says
The performance figures are about what I’d expect. You’re running OpenFiler *inside* of a virtualized environment (VMware Workstation), which is pretty sucky at IO in the first place. Then you use 1 TB SATA disks, probably 5400 rpm no less. Let’s be generous… maybe 7200 rpm (doubtful). Then you’re layering software-based RAID over that?
It’s going to suck big time for performance. NFS is not a great protocol for performance either… This comes from 20 years of industry experience with it…
Vladan says
Some clarifications: when the installation was done (the 19th of April 2011), the setup was on a physical box with 7200 RPM SATA drives. No VMware Workstation.
There is a hardware RAID card used in the setup – the
It’s true that the NFS protocol cannot deliver satisfying performance on the box where I tried the install.
The gear has an Atom CPU, so I use it mostly for sharing ISOs via NFS.
There is one 128 GB SSD in the box as an iSCSI target for my vSphere cluster.
I shall build more powerful storage gear, probably with SATA III 6 Gb/s SSDs, later this year…
Marcone Delano says
Do you need a controller for the physical SATA disks, or is everything virtual in Openfiler?