
The Virtual Storage Appliance experiment – Part 2

Openfiler

Follow the start of my Virtual Storage Appliance experiment in Part 1.

First up in my VSA experiment is Openfiler, an open source SAN/NAS solution.  Openfiler comes with a raft of features; for a home test lab it’s more than you could possibly ask for.  It will do both block- and file-based storage: CIFS, NFS, RAID, snapshotting, the list goes on.  For my test lab purposes I focused on the block-based iSCSI features.  Openfiler is available as an open source edition or with commercial support, the latter providing advanced features such as high availability, replication, and Fibre Channel target support.

The first objective was to install it.  The download comes as an ISO file.  Installation was very simple, with straightforward installation instructions supplied for either a text-based or GUI install.

openfiler01

After a few simple clicks the installation was complete.  It is about as close to an appliance install as you can get without being an appliance.

openfiler04 openfiler05

Once installed and booted you are taken to a console prompt.  All administration is done via the web GUI.   Both the admin GUI and the user GUI are accessed through the same URL: the admin GUI is accessed using the root account, while user configuration is accessed via the Openfiler account.

openfiler06
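For reference, Openfiler’s web GUI listens on HTTPS port 446 by default.  A quick way to confirm it’s reachable from another machine (a sketch; the IP address is a placeholder for your own appliance):

```
# 446 is Openfiler's default web GUI port; the certificate is
# self-signed, so -k tells curl to skip verification.
curl -k https://192.168.1.50:446/
```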

Instructions were non-existent on the non-commercial side, with access to just the community forums.  For the SAN uninitiated, configuration would no doubt be a challenge.

I installed Openfiler on a 50 GB VM drive within ESXi.  I was immediately faced with an issue trying to create a usable volume.  At midnight on a work day and with no instructions, I might have been asking a bit much of myself.  When the words started floating on the screen I decided to call it a night and come back to it the next day.  The following day, faced with the same issue and still unable to create a volume, I started trawling through the forums.

A few posts pointed to the partition type of the disk being msdos and suggested trying to modify the disk to gpt.  Instead I added a new virtual disk, which was immediately detected as gpt.   That allowed me to edit the disk and partition it out.
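For anyone who would rather fix the original disk than add a new one, the forum suggestion maps to relabeling it from the console.  A hedged sketch, assuming the data disk is /dev/sdb; note that relabeling destroys the existing partition table:

```
# Identify the disk first -- /dev/sdb is an assumption.
fdisk -l
# DESTRUCTIVE: writes a new (empty) GPT partition table.
parted /dev/sdb mklabel gpt
parted /dev/sdb print
```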

openfiler07

In typical Linux fashion, partitioning was based around starting and ending cylinder numbers; again, not that intuitive for non-Linux admins.  Once a partition was created, volumes needed to be created.  The process became a little easier at this point: volumes could be sized in MB, with a dropdown menu to select the filesystem type.  As I wanted iSCSI, I selected block.
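Under the hood Openfiler manages volumes with LVM2, so the GUI steps map roughly to the following (a sketch of the equivalent console commands, not what I actually ran; the device and names are assumptions):

```
pvcreate /dev/sdb1             # register the partition with LVM
vgcreate vg_san /dev/sdb1      # the volume group shown in the GUI
lvcreate -L 2G -n lun0 vg_san  # a "block (iSCSI)" volume is a raw LV
lvs                            # confirm the logical volume exists
```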

While exploring the tab menus in the GUI after installation, I noticed that most services were disabled, including iSCSI.  So I changed it to Enabled and clicked Start.  Back over on the Volumes tab I went to iSCSI Targets.  I could see no iSCSI targets, but there was a button to add one, so I clicked it.  Next, under LUN Mapping, I could see the volumes that were previously created.  On each volume I clicked Map.
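You can also sanity-check from the console that the target service actually came up.  A sketch, assuming the service name from Openfiler’s iSCSI Enterprise Target heritage, which may differ between releases:

```
# The service name is an assumption; check with `chkconfig --list`.
service iscsi-target status
# Whatever the service is called, the target should be listening
# on the standard iSCSI port, 3260.
netstat -tln | grep 3260
```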

openfiler09

At this point, using my previous knowledge of SANs, I felt I had done everything I needed to present that storage to a device, in my case an ESXi host.  I had already preconfigured iSCSI on my ESXi host: a software adapter added and port bindings all set up.   I added the Openfiler IP as a target and then performed a rescan.
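For those who prefer the ESXi shell to the vSphere Client, the same initiator setup can be sketched with esxcli; the adapter name vmhba33 and the Openfiler address 192.168.1.50 are placeholders for your own environment:

```
# Enable the software iSCSI initiator (harmless if already on).
esxcli iscsi software set --enabled=true
# Find the software HBA's name -- vmhba33 below is an assumption.
esxcli iscsi adapter list
# Point dynamic discovery at the Openfiler target portal.
esxcli iscsi adapter discovery sendtarget add \
  --adapter=vmhba33 --address=192.168.1.50:3260
# Rescan so any mapped LUNs appear.
esxcli storage core adapter rescan --adapter=vmhba33
```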

openfiler011

Once the scan completed my disk appeared.  While still in ESXi I went over to Storage and attempted to add a new datastore.  The disks I created weren’t appearing, and now, after midnight again, I was ready to throw in the towel.  After a short think about what was going on, I remembered that a VMFS datastore requires a small amount of storage overhead.   So I created a new volume in Openfiler that was a bit more realistic in size, at 2 GB (rather than the piddly 256 MB I originally tried).  This time the disk appeared and I was able to create a datastore.  As expected, close to 800 MB was gone to formatting and VMFS metadata overhead.
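The formatting overhead is easy to see from the ESXi shell once the datastore exists:

```
# Usable capacity per datastore, versus the raw device size.
esxcli storage filesystem list
# The device backing each VMFS datastore.
esxcli storage vmfs extent list
```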

Putting aside that final hiccup adding a disk into ESXi, the whole process of installing and configuring Openfiler went relatively smoothly.  It did require a little troubleshooting, and having no official documentation, only community forums, didn’t help.  The community is great, but as with any community they are quick to lose interest when a solution isn’t straightforward.  Having a good working knowledge of SANs and iSCSI went a long way.   There are some features worth further investigation, namely snapshots, LDAP authentication, and NFS.  Some initial testing of snapshots hasn’t worked, so no doubt that will require more time on the forums.

I did later come across a site that specialises in virtualization solutions, with some installation and configuration documentation for Openfiler and ESXi.  I’ve provided a link to the PDF below.  It’s actually quite a good step-by-step document.

References

Xtravirt Openfiler install and ESXi iSCSI configuration

The Virtual Storage Appliance experiment – Part 1

vSphere Update Manager 5.x to 5.5 Upgrade

A few days back I upgraded a vCenter Server Appliance from 5.0 to 5.5.  With that prerequisite out of the way, I can now upgrade vSphere Update Manager (VUM) from 5.0 to 5.5.  This will bring me one step closer to being able to use VUM to upgrade my ESXi hosts to 5.5.

The process is very simple and just a matter of a few clicks.  VUM still requires a Windows server to run on; VMware has yet to provide an appliance alternative.  So you will have to download the entire vSphere vCenter 5.5 Windows installation from VMware.

The process for a fresh install of VUM is similar to an upgrade.  As I’m upgrading I’ll be running through the process I used. The first step is executing the installer.

vum5.5_01

Select vSphere Update Manager and click Install.

vum5.5_02

Click OK

vum5.5_03

If you have a previous version of VUM installed you will be prompted to upgrade.

vum5.5_04

Click Next to start the installation wizard.

vum5.5_05

Accept the license agreement.

vum5.5_06

A few things have changed in the latest version of VUM.  Take note of what will be removed.  Make sure the checkbox to download updates after installation is selected.

vum5.5_07

Enter your vCenter details.  Port 80 is the default, unless you changed it during the vCenter installation.  Enter a user account and password that has admin rights in vCenter.  If you’re running a vCenter Server Appliance like me, you can use the root account.

vum5.5_08

If you haven’t already upgraded your vCenter to 5.5, you will receive the following warning and will need to upgrade vCenter before you can continue.

vum5.5_09

If you’re upgrading you won’t be able to change any database DSN or driver details.  Click Next.

vum01

Select ‘Yes, I want to upgrade my Update Manager Database’.  Select the checkbox ‘I have taken a backup of the existing Update Manager database’.

vum02

If your Windows server’s IP address is resolvable, its name will be listed in the drop-down list.  Select the server and leave the default ports unless you wish to change them.  If you need to go through a proxy to access the internet, select the checkbox below; otherwise click Next.

vum03

Click Install to start the installation.

vum04

Update Manager will need to be shut down during the upgrade.  Select ‘Automatically close and attempt to restart applications’.

vum05

Once the installation is complete, open up the vCenter 5.5 C# client.  Select Plug-ins on the menu bar, scroll down to Available Plug-ins, and click Download and Install on VMware vSphere Update Manager.

vum06

So that’s it.  Maybe a little more than a few clicks though 😉

You can now access Update Manager as you normally would in the vCenter Client.