
Part 2: ESXi Owning The NUC

The Hardware

With the SSD and memory installed and the NUC re-assembled I was almost ready to power on.  The last thing I needed to do was replace the North American cable with an Australian one.  As I have never thrown out a cable since I was 8, this wasn’t an issue.  The system now looked as follows.

Intel NUC Kit D34010WYK
Crucial 16 GB Kit (8GBx2) SODIMM
Crucial M500 120GB mSATA internal SSD

2 x 4 GB USB Thumb Drives
Bluetooth Wireless KB/Mouse

The BIOS

The first thing I wanted to do was check the BIOS and see if it needed an update to the latest version.  The Bluetooth KB/Mouse plugged into the USB port worked without issue.  F2 on the NUC splash screen gets you into Intel’s VisualBIOS.  It’s all laid out nicely and is simple to understand.  Finding the BIOS version is straightforward.  On the Home screen in Basic View you can see the BIOS version in the top left.  The first part of the version is the Initial Production Version, WYLPT10H.86A.  The second part is the actual BIOS version, 0027, followed by the date and what looks to be a build number, 2014.0710.1904.

nuc_bios02

Pic1. Booting into the NUC Bios

Getting the latest BIOS version is as simple as heading over to the Intel Download Center, searching for NUC and selecting the model number.  I was on version 0026 (released back in 2013), with the latest being 0027, released only a few months back.  I downloaded the OS Independent .BIO file and copied it onto a USB thumb drive.  I plugged it into the NUC while it was sitting on the BIOS screen, went to the Wrench icon and selected Update BIOS.  The NUC then rebooted and flashed the new BIOS.  The most obvious change I saw was the ability to now check and update the BIOS over the Internet.  However, when I tried it, I received an ‘Unable to locate Product ID version’ error message.  Something to look into at a later stage.

Installing ESXi

So now the more interesting and fun frustrating part, installing ESXi.  The first thing I did was install a Beta version of ESXi.  Unfortunately the NDA says I can’t talk about this 🙁  How’s that for a tease… All I’ll say is that the process was fairly painless to get up and running.

But next I tried ESXi 5.5, and this I can talk about 😉

Booting off the ESXi 5.x media should be as simple as mounting the ISO image and copying the files onto a USB thumb drive.  Attempting to boot this way caused the following error:

<3>Command line is empty.
<3>Fatal error: 32 (Syntax)

A VMware KB article’s solution was to turn off UEFI in the BIOS.  This was only part of the solution, as the error disappeared but no boot media could be found.  Some searching led me to UNetbootin, a small, simple app that turns an ISO image into bootable media on a USB thumb drive.

Finally I had ESXi 5.5 booting off the USB and starting the install process.  The network drivers in ESXi 5.x will not recognise the Intel I218V Ethernet Controller, which causes the installation of ESXi to fail.  The I218V will work with the e1000 drivers, so the next step was to locate an updated version.  I ended up finding net-e1000e-2.3.2.x86_64.vib.

There are two ways to inject the drivers into the ISO image: the correct VMware way and the quick way!  The correct VMware way is with PowerCLI, using the Add-EsxSoftwareDepot cmdlet and packaging a new ISO.  Turns out there is a much simpler way using an app called ESXi-Customizer, another simple app that will take an original ESXi ISO, extract it, inject a VIB file of your choosing and repackage it, all within 60 seconds.  This tool should be required learning for a VCAP, it’s that simple and it works!

So… I now have a new image with updated e1000 network drivers.  I re-ran UNetbootin against the new image and created boot media.  And happy days!  ESXi 5.5 boots off the USB thumb drive with the new boot image and installs fine, with the updated network drivers, to the second USB thumb drive I have plugged in.

But why a second USB drive you ask?  Well, read on…

nuc_nic01

Pic2. Intel I218V Network Controller detected in ESXi
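As a side note, once the host is up you can also confirm the NIC and the driver it’s using from the command line.  A quick check over SSH or the local shell (the exact output will vary between hosts):

# List the physical NICs and the driver each one has loaded – look for the e1000e entry
esxcli network nic list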

So I installed ESXi to a second USB thumb drive, as mentioned above, because during the installation the SSD was not detected when selecting a device to install to.  For the time being this isn’t an issue for me.  In my first post (Part 1: The NUC Arrival), my end game was always to boot and run ESXi from USB and use the internal SSD as storage and, at some point, VSAN.  All I’m doing now is skipping straight to that step.

Getting ESXi 5.5 to detect the NUC’s mSATA controller requires creating new SATA driver mappings.  With a little digging around I found what I needed at vibsdepot.v-front.de.  Here you can download a VIB or an offline bundle (if you want to inject it into the ISO the correct VMware way).  There’s also a link to the author’s blog post that explains this process in much more detail.

But, summarised below is what I did to get the NUC’s mSATA controller detected.  I opened up an SSH session to the ESXi host and typed the following.

esxcli software acceptance set --level=CommunitySupported
esxcli network firewall ruleset set -e true -r httpClient
esxcli software vib install -d http://vibsdepot.v-front.de -n sata-xahci

After a reboot the controller was detected and the SSD storage was available for creating datastores.
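If you want to double-check before creating a datastore, something along these lines should confirm the VIB is installed and the AHCI controller is now claimed (a quick sanity check; adapter names will differ from host to host):

# Confirm the sata-xahci VIB made it onto the host
esxcli software vib list | grep sata-xahci

# List the storage adapters – the NUC's AHCI controller should now show up as a vmhba
esxcli storage core adapter list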

nuc_storage01

Pic3. Storage Controller now detected in ESXi

Conclusion

If you don’t want to read this whole post (it’s okay, I don’t blame you) and just want the steps to install ESXi 5.5 onto a 4th Gen NUC, here they are.

1. Download ESXi 5.5 ISO from VMware.
2. Turn off UEFI in the BIOS of the NUC.
3. Download net-e1000e-2.3.2.x86_64.vib.
4. Download ESXi-Customizer, run it and select the ESXi 5.5 ISO and net-e1000e-2.3.2.x86_64.vib.
5. Download UNetbootin and create a bootable USB drive from the new ISO created above.
6. Boot off the USB thumb drive and install ESXi to a second USB thumb drive.
7. Enable SSH and run the above ESXCLI commands to create new mappings to the mSATA controller.
8. Turn UEFI back on in the BIOS.

Conclusion Conclusion

At this point I have a fully working ESXi 5.5 host running on the NUC.  I am running off a 4 GB USB thumb drive.  I have network and detectable SSD storage.  The sky’s the limit now.

I’ll now be looking at purchasing a second Intel NUC, and while it’s being shipped, I’ll have a couple of weeks to play with vSphere on this current NUC.

UPDATE:

At the time I wrote this article ESXi 6.0 was still in Beta so I couldn’t talk about it.  Now that it’s GA I can say that installing ESXi 6.0 still requires the mSATA process above to get the internal storage working.  The great news is that networking now works out of the box.

Articles in this series

Part 1: The NUC Arrival
Part 2: ESXi Owning The NUC
Part 3: Powering a NUC Off A Hampster Wheel
Part 4: The NUC for Weight Weenies
Part 5: Yes, you can have 32GB in your NUC

Part 1: The NUC Arrival

I received a nice SMS from Australia Post the other day.  My package from Amazon had arrived and was waiting to be picked up from my Parcel Locker.  It was my new Intel NUC!

nuc_01

I’ve been following the Intel NUC since it was first released back in late 2012.  I thought it was a great little box but never had a real reason to get one.  That was until a few weeks back, on a late night train ride home.  I had just started a new job and realised that my current home lab just wasn’t cutting it any more.  My Lenovo desktops with a huge 4 GB of memory were more trouble than they were worth.  So that’s where this NUC comes into play.

I bought the 4th Gen Intel i3 NUC with a 120 GB mSATA SSD and 16 GB RAM.  The specs are probably a little high for a trial, but I’m holding high hopes that things will work out.  The plan is to run up the NUC as a VMware ESXi host in a couple of different configurations.  To be honest I’m kind of making this up as I go along.  The plan is to try running ESXi on the SSD and then off an external USB stick.  If either of these options works, I’m high-fiving and well on my way to a new home lab.

Anyway… that’s the plan.

Like a new family member entering the home I took some happy snaps.

Sliding the NUC out of the box, below, plays the Intel Inside theme music (Scared the hell out of me).  There’s a little light sensor and speaker in the corner.

nuc_02

Inside the box is obviously the NUC, a power pack and North American plug (no use to me in Australia), a mounting bracket (presumably for the back of a TV), a manual, and most important of all the Intel Inside sticker… all worth it now 😉

nuc_03

First things first, we follow the Ikea instructions and remove the bottom cover to expose the internals.

Pretty impressive.  Where’s the processor 😉

nuc_04   nuc_05

Next we install our two 8 GB SODIMMs.

nuc_06

Followed by the 120 GB mSATA SSD

nuc_07

Now we put it all back together and figure out what to do next.

nuc_08

Over the coming weeks I’ll be posting about how I configured the NUC and the setup of my new home lab.

Links

Intel NUC Overview site

Articles in this series

Part 1: The NUC Arrival
Part 2: ESXi Owning The NUC
Part 3: Powering a NUC Off A Hampster Wheel
Part 4: The NUC for Weight Weenies
Part 5: Yes, you can have 32GB in your NUC

Installing NetApp NFS Plug-in for VMware VAAI

For some time now NetApp have supported VAAI for NFS on vSphere.  If you’re using NFS on your NetApp with vSphere you might want to investigate installing the NFS Plug-in for VMware VAAI.

The plug-in helps vSphere communicate with the NetApp, basically allowing it to offload certain tasks from vSphere to the NetApp.  By passing certain tasks off to the NetApp, they can be processed faster by the array and reported back to vSphere when complete.  An example is provisioning a VMDK file or cloning a VM.  Rather than vSphere attempting to perform these tasks over the wire, they can be performed directly on the NetApp by the array itself.

There are three different ways to get the NFS Plug-in installed onto an ESXi host, and I’ve detailed each of them below.  Before you start you’ll obviously need the NFS Plug-in, which can be downloaded from the NetApp support site.  You’ll need a login ID and a valid support contract to do this.

Option 1. ESXCLI

This is my preferred option when it’s only needed for a few hosts.  It can be done on the actual ESXi host (e.g. over SSH) or via the vMA.

Step 1. Copy the NFS plug-in zip file to a location that the ESXi host has access to.  Below I copied the file to a folder called ‘vib’ on a test datastore.

1_nfs_vib00

Step 2. Using ESXCLI, run the following command.

esxcli --server HOST_IP_ADDRESS software vib install -d /PATH_TO_VIB/vib_filename_.zip

nfs_vib01

Step 3. Reboot the host

Step 4. Check that the NFS Plug-in was installed.  Scroll through the output until you find NetAppNasPlugin under Name.
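The check is the standard VIB listing, something along these lines (same --server syntax as the install above):

# List all installed VIBs and look for NetAppNasPlugin under Name
esxcli --server HOST_IP_ADDRESS software vib list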

nfs_vib03

Option 2. VMware Update Manager

Step 1. Import the Plug-in into the Patch Repository.  Click Import Patches and browse to the location of the Plug-in zip file.

2_nfs_vib01

Click Next and ignore any certificate warning you may get to Import the patch.

2_nfs_vib02

Click Finish to complete the import.

You should now see the NetAppNasPlugin in the Patch Repository list.

2_nfs_vib03

Step 2. Create a new baseline for the NFS Plug-in.

Click on Baselines and Groups and right-click to create a New Baseline.  Fill in the Name and Description and select Host Patch.

2_nfs_vib04

Click next and select Fixed.

2_nfs_vib05

Scroll through the list of patches and locate the NetAppNasPlugin.  Add the patch using the down arrow and click Next.

2_nfs_vib06

Click Finish to create the baseline.

2_nfs_vib07

Step 3. Attach the newly created baseline to your hosts.  Where you choose to do this is up to you; I chose to do it at the Cluster level.

2_nfs_vib08

Step 4. Once attached, Scan and Remediate your hosts.

2_nfs_vib09

Option 3. NetApp Virtual Storage Console

This option is obviously dependent on you having already installed the Virtual Storage Console on a server and having the vSphere plug-in enabled.

If correctly installed, the NetApp VSC can be found under Solutions and Applications, listed as NetApp.

Navigate to Monitoring and Host Configuration and click on Tools.  Under NFS Plug-in for VMware VAAI it will say ‘Unable to locate plug-in’.

netapp_nfs_plugin00

Step 1. Extract the NFS zip file and locate the vib inside it.  The vib will be denoted with a version number at the end.  Make a copy of the file and call it NetAppNasPlugin.vib

This specific filename is required for the VSC to detect the vib correctly.

netapp_nfs_plugin01

Step 2.

On the server where the Virtual Storage Console was installed, copy the renamed file to C:\Program Files\NetApp\Virtual Storage Console\etc\vsc\web

netapp_nfs_plugin02

Step 3. Exit vCenter and log back in, then open the VSC back up.  If the VIB was renamed correctly and copied to the correct location, it should now be detected under Tools in Monitoring and Host Configuration.

netapp_nfs_plugin00

Step 4. Click on Install on Host to install the VIB plug-in.  Any incompatible hosts will show up greyed out with a null beside their name.

In the below screenshot I have three incompatible hosts.

netapp_nfs_plugin04

So there you have it.  Three different ways to install the NetApp NFS Plug-in onto an ESXi host, and three different pain-in-the-ass ways at that.

Good Luck.

Exporting VMware Workstation VMDK to ESXi

In the quest to find the most difficult and convoluted way of doing things, I feel I really outdid myself this week.  What should have been the easy task of creating an OVF in VMware Workstation 8 and deploying it into a vSphere environment turned out to be much more.

The night before, I created an OVF of a Workstation VM.  The next morning I went to copy the files to a USB stick before work.  Still half asleep, I copied only the compressed VMDK file and not the actual OVF or MF files.  Imagine my surprise when I got into work with only a single compressed VMDK file.

No worries, I’ll just upload it to a datastore and attach it.  Yeah, no.  You need the VMDK descriptor file that points to the -flat.vmdk file; without it, vCenter won’t find any disks to attach to a VM.

So after some cursing, hopeful optimism, then some more cursing I found a workable solution.

The Virtual Disk Development Kit 5.1 is a collection of tools to help you work with virtual disks.  Inside this kit is a util called vmware-vdiskmanager.exe.  The util actually comes with VMware Workstation, but not having it with me required downloading the development kit to acquire it.  The util is similar to the ESXi shell command vmkfstools but can be run on a Windows box.  It has the ability to expand / convert a VMDK file out to an ESX-type virtual disk.

The command I ran was:

vmware-vdiskmanager.exe -r <original_source.vmdk> -t 4 <new_destination.vmdk>
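As a made-up example (the file names here are hypothetical), converting a Workstation disk called appserver.vmdk would look something like this.  The -r switch converts the source disk and -t 4 creates a preallocated ESX-type virtual disk.

vmware-vdiskmanager.exe -r appserver.vmdk -t 4 appserver-esx.vmdk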

Once complete, two files are created: a .vmdk descriptor file and a -flat.vmdk thick virtual disk.

I was now able to upload the two files to a datastore.  You have to upload both of them to the same location.  You won’t see the -flat.vmdk file once uploaded but it will be there.

exporting_vmdk01

Then I went into the Edit Settings of my pre-created vSphere VM.  I clicked on New Device and selected Existing Hard Disk and Add.

exporting_vmdk02

vmware-vdiskmanager.exe is quite a useful little util for moving Workstation VMDK files over to ESXi.  Prior to Workstation 8 this was probably your best way to get a VM’s virtual disk into ESXi.  With the introduction of Workstation 8 and above you can now Export to OVF.  This is no doubt the recommended way, but if you just want an individual disk, vmware-vdiskmanager is a decent option.
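For what it’s worth, the OVF route can also be scripted with VMware’s ovftool rather than the Workstation GUI.  A rough sketch (the paths and host name are made up):

# Export a Workstation VM to OVF, then deploy that OVF straight to an ESXi host
ovftool C:\VMs\appserver\appserver.vmx C:\Export\appserver.ovf
ovftool C:\Export\appserver.ovf vi://root@esxi01.lab.local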

Reference:

VDDK Documentation

Download VMware vSphere 5.1 Virtual Disk Development Kit (login required)

ESXi Google Authenticator Fling - Install & Configure

I remember downloading the Google Authenticator app from the Google Play store the day it came out.  Since that time I had never once even run the app.  I just couldn’t be bothered setting it up with any websites.  That was, until now.

When I heard last week about a VMware Fling bringing Google Authenticator two-factor authentication to ESXi, I wanted to try it out as fast as I could.  So today I played around with it and it works great!  I noted down what I did and uploaded it all below.  There’s really only one requirement and that’s ESXi 5.0 or above.  True, the instructions are on the Fling’s site, but I thought I’d put them into my own words.

The first thing I did before I even started was make sure my host was using a good NTP time source and the time was correct.

Download the ESXi Google Authenticator zip file and extract the VIB file from it. (link below)

Upload the VIB to the ESXi host.  I just used the vSphere Web Client and clicked on Storage under Inventories on the Home page.

google_auth01a

I then located a datastore that my host had access to and created a folder called vib.  I then clicked the ‘Upload a file to the Datastore’ icon, selected the VIB and clicked Open.  (I also tried using the zip file without extracting the VIB but couldn’t get it to work, so give that a miss.)

google_auth01b

Next I installed the VIB on the host using ESXCLI.  Normally I would use the Management Assistant for this, but because I’m playing around with authentication I was on the console of the host.  Replace the path with wherever you uploaded the VIB.

esxcli software vib install -v /vmfs/volumes/datastore2/vib/esx_google-authenticator_1.0.0-0.vib -f

If successful you should receive output similar to below.

google_auth01c

Next you need to execute the command ‘google-authenticator’

google_auth02

A short wizard will run through a series of questions on how you would like to set up the authenticator.  Each environment may influence how it is set up.  Record the secret key and also the URL for when the mobile app is set up later on.

Two-factor authentication works with SSH and Shell access.  The config process is currently all manual.

First you have to edit /etc/ssh/sshd_config.  I used vi from the ESXi shell, went into Insert mode, made the change below and did a write & quit.

ChallengeResponseAuthentication yes

Next you have to edit /etc/pam.d/sshd for SSH and/or /etc/pam.d/login for the console, adding the following line near the top.

auth required pam_google_authenticator.so

Initially I tried to use vi but couldn’t save so I used sed as shown in the Fling instructions.

sed -i -e '3iauth required pam_google_authenticator.so' /etc/pam.d/sshd
sed -i -e '3iauth required pam_google_authenticator.so' /etc/pam.d/login

For the change to take effect immediately, run '/etc/init.d/SSH restart'.

The change is not persistent across a reboot, so for it to stick the above two sed lines will need to be added to /etc/rc.local.d/local.sh.
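As a sketch of what that looks like (local.sh usually ends with an exit 0, so the lines need to sit above that rather than being blindly appended to the end of the file):

# In /etc/rc.local.d/local.sh, above the final "exit 0" – re-applies the PAM change on every boot
sed -i -e '3iauth required pam_google_authenticator.so' /etc/pam.d/sshd
sed -i -e '3iauth required pam_google_authenticator.so' /etc/pam.d/login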

Finally, you have to set up the Google Authenticator app.  I used the Android version, which I originally downloaded the day after it was released and never used.  The Google Authenticator link below has links to the iOS and BlackBerry apps as well.  There are two ways to add the ESXi host to the app.  You can manually add the ESXi host using the secret key provided above.  Or, the easier approach I found, use the URL generated above and put it into a web browser.  That will load a QR code on the screen.  Scan it with a QR reader on the phone and it will automagically load Google Authenticator and add in the ESXi host.

References

Google Authenticator
ESXi Google Authenticator Fling
Android App

ESXi Google Authenticator Fling

I love that, even as large as VMware is, they can still have a little fun with their product and development names.  VMware Octopus was one of my favourites.  I think we were all disappointed when they changed the name to fold it into the Horizon Suite.  Flings are another great name I love.

A few days back I saw a tweet from the Fling team about a new Fling.  The name caught my eye immediately: ESXi Google Authenticator.  It sounds like a pretty cool idea, two-factor authentication for ESXi.  I haven’t tried it out yet but I’ll be looking to over the coming days.

The source link to the Fling is below.  It was designed by a couple of VMware engineers in the R&D team.  There doesn’t appear to be much to the installation and configuration process.  You will need a fast connection, though, to download the 26 KB zip file 🙂

It’s supported on ESXi 5.0 and 5.1, with single admin support on ESXi 5.0 and multiple admin support on ESXi 5.1.  You get 30-second TOTP codes and support for emergency scratch codes, which I presume are for emergencies 😉

Source Link

ESXi Google Authenticator Fling

Downgrade VMware Hardware version - the unsupported way!

I tend to run a pretty tight ship with my VMware infrastructure.  I always make sure that everything is up to date and running the latest versions, and that usually includes the VM Hardware versions.  Until recently this never posed an issue.  I’m currently in the process of migrating a vSphere 5.1 environment to a vCloud Director 1.5 environment (with an ESXi 4.x back end).  The maximum H/W version supported there is 7, while my environment is version 8 / 9.

The OVF upload process in vCD 1.5 does a few sanity checks before starting an upload.  If it finds your H/W version isn’t supported, it won’t let you proceed.

vcd_hw01

VMware have three supported ways of downgrading.

1. Revert to a previous snapshot while on H/W version 7.

2. Use VMware Converter and perform a V2V to H/W version 7.

3. Create a new VM on H/W version 7 and attach the disks from the original VM.

All very safe ways, but there is a fourth, totally unsupported and quicker way to downgrade by hacking the VMX config file.

The process is relatively simple.  Inside the VMX file for the VM there are a few critical lines that tell vSphere / ESXi what hardware version the VM is.  By editing this file we can make it think it’s a Version 7 VM rather than 8, in essence fooling it into thinking it’s an older version and making it skip the newer config values.

To start, power down the VM.  Then obtain shell access to an ESXi server that has visibility of the VM’s datastore and use vi to edit the VMX file of the VM.  If you haven’t used vi, it’s pretty straightforward: i to insert, x to delete (you’ll figure it out).

Below are the first 19 lines of the VMX file of my VM running on hardware version 8 on an ESXi 5.1 host.  The critical line to change is virtualHW.version = “8”.  Changing that 8 to a 7 is technically all that’s needed.  I’ve taken it one step further and replaced all the 8s with 7s, with the exception of the first line.

.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "8"
pciBridge0.present = "true"
pciBridge4.present = "true"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "true"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "true"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "true"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
vmci0.present = "true"
nvram = "MYSERVERNAME.nvram"
virtualHW.productCompatibility = "hosted"

My updated VMX config is below.

.encoding = "UTF-8"
config.version = "7"
virtualHW.version = "7"
pciBridge0.present = "true"
pciBridge4.present = "true"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "7"
pciBridge5.present = "true"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "7"
pciBridge6.present = "true"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "7"
pciBridge7.present = "true"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "7"
vmci0.present = "true"
nvram = "MYSERVERNAME.nvram"
virtualHW.productCompatibility = "hosted"

Now exit vi and save the changes.  Back in vSphere, select the VM and choose the option “Remove from Inventory” (don’t Delete from Disk).  Then open the datastore that holds the VM, select the VM’s VMX file and add it back into the Inventory, basically the same way you would normally add a VM into the Inventory from a datastore.  If all has gone well, when the VM is added back in, vSphere will now say VM Version: 7.

You can now happily migrate the VM between ESXi 5 and 4, or in my case create an OVF of the VM and upload it into vCD 1.5.  I’ve performed this quite a lot of times without issue.  It’s probably not recommended for production, but in a TEST / DEV environment it’s less of an issue.
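If you’d rather not hand-edit the file in vi, a couple of sed one-liners from the ESXi shell achieve the same minimal change (a sketch only; the datastore path and VM name are placeholders, and it’s worth taking a backup first):

# Back up the VMX, then drop just the two version lines from 8 to 7
cp /vmfs/volumes/datastore1/MYSERVERNAME/MYSERVERNAME.vmx /vmfs/volumes/datastore1/MYSERVERNAME/MYSERVERNAME.vmx.bak
sed -i -e 's/^virtualHW.version = "8"/virtualHW.version = "7"/' -e 's/^config.version = "8"/config.version = "7"/' /vmfs/volumes/datastore1/MYSERVERNAME/MYSERVERNAME.vmx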

Appendix
Supported VMware KB article on downgrading
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1028019

Windows Server 2012 VM requires No Execute (NX) flag

A work colleague recently had an issue trying to install a Windows Server 2012 VM.  The host was running ESXi 5 with pre-configured 64-bit Win2k8 VMs.  When he tried to boot off a Windows Server 2012 ISO image, the VM loaded the initial Windows Server logo followed by an error.

win2012_nx01

win2012_nx02

My junior colleague felt that the BIOS of the physical server might not support Windows 2012.  The blade server was an HP ProLiant BL460c.  It was a long shot, but the BIOS had never been updated, so I allowed him to flash it.  The blade was successfully flashed from a 2007 image to a 2011 image.  He tried again to install Windows Server 2012, unsuccessfully.

I took the opportunity of the ESXi host being down to perform an upgrade from ESXi 5 to ESXi 5.1.  Again, another long shot, but all our other Windows 2012 servers were running on ESXi 5.1.  Very quickly into the upgrade process we received the following error.

No eXecute (NX) bit is not enabled on the host. New ESXi version requires a CPU with NX/XD bit supported and enabled.

I make sure this feature is always turned on in the BIOS of our Dell blades.  On our older legacy HP blades it was off by default and never turned on.  That’s something that now needs to be taken into account if we ever want to upgrade the HP blades.

We rebooted the HP blade back into the BIOS.  Once in, we went to Advanced Settings / Processor Options / No-Execute Memory Protection.  It was set to Disabled, so we changed it to Enabled.  No-Execute Memory Protection is how HP describes No eXecute (NX).

win2012_nx03

We power cycled the blade and loaded the ESXi 5 hypervisor.  This time we were able to successfully boot the Windows Server 2012 ISO image and configure and install it.  Again using the opportunity, I attempted to upgrade the blade to ESXi 5.1.  With the NX flag enabled in the BIOS, I too was able to install successfully.

So if you plan on moving to Windows Server 2012 or ESXi 5.1 anytime soon, you had better start checking that NX flag on your physical servers.

vCenter Web Client timeout

New in vSphere 5.1 is a completely redesigned vCenter Web Client.  Many of us would never have used the Web Client in previous versions of vSphere, and the few that did would have been quickly turned off by it.

VMware have put a lot of effort into this new version, to the point where many of the new features in 5.1 are only available in the Web Client.  Moving forward, the C# client will no longer be developed and 5.1 will be its last version.

One of the features of the C# client was the ability to stay logged in indefinitely.  For good or for bad (from a security point of view), I like it.  My workstation is always secured when I’m away from it, and I liked being able to come back and have the vCenter Client connected and running.

With the new 5.1 Web Client a session is ended after 2 hours of being idle.  In all honesty this isn’t too impractical and will suit most people.  That being said, I like being able to log in at the start of the day and know that I will still be logged in at the end of my day, even if I haven’t been active in the new Web Client.

Currently there is no setting within the Web Client itself that will allow you to change the idle login timeout.  The timeout can, however, be changed through a config file, both on Windows and on the vCenter Virtual Appliance.

Windows

Navigate to %SYSTEMDRIVE%\Users\All Users\VMware\vSphere Web Client\webclient.properties

Uncomment the line (remove #) and change the value in minutes for session.timeout = 120

Restart the VMware vSphere Web Client service

 

Virtual Appliance

Navigate to /var/lib/vmware/vsphere-client/webclient.properties

Uncomment the line and change the value in minutes for session.timeout = 120

Restart the Web Client using /etc/init.d/vsphere-client restart
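On the appliance the whole change can be done in one short shell session.  A sketch, assuming GNU sed on the appliance and using 720 minutes purely as an example value:

# Back up the properties file, set the idle timeout to 720 minutes, then restart the Web Client
cp /var/lib/vmware/vsphere-client/webclient.properties /var/lib/vmware/vsphere-client/webclient.properties.bak
sed -i 's/^#\?\s*session.timeout.*/session.timeout = 720/' /var/lib/vmware/vsphere-client/webclient.properties
/etc/init.d/vsphere-client restart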

Running nested ESXi - the real inception

Nested ESXi servers are essentially virtualised ESXi servers.  They can be quite fun to play with.  Usually the only reason you’d do this is for a test lab.  You don’t need a physical ESXi server to nest virtual ESXi hypervisors; you can choose your product of choice.

In my case I’m running VMware Workstation 9 with virtualised ESXi 5.1 as the second level, and then running virtual guests as a third level within the virtual ESXi.  In Workstation 9 you have the ability to connect to ESXi hosts and view guest VMs, as below.

You don’t need to do anything special to virtualize ESXi.  In fact there’s an option to select ESX as the guest OS in VMware Workstation 9.

When you turn on a virtual ESXi guest (or is it a host???) a message appears in the bottom right corner of the console.  It’s pretty self-explanatory: you can create only 32-bit guest VMs on this ESXi server.

The cool cats that VMware are, allow you to run 64-bit VM guests too with a small config change.  The feature is only available in ESXi 5 and above.

On the virtualised ESXi guest, enable console access (or SSH) and log in as root.  The file ‘config’ in /etc/vmware needs to be modified and an additional line added: vhv.allow = “TRUE”.

If you’re a vi fan, go ahead and use that.  If you’re lazy like me, you can just append it to the file with echo.


echo 'vhv.allow = "TRUE"' >> /etc/vmware/config
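A quick check that the line actually landed in the file:

# Should print: vhv.allow = "TRUE"
grep vhv.allow /etc/vmware/config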

Once the change is made, no reboot is necessary.  You will now be able to run 64-bit guest VMs within your already virtualised ESXi host.  In an extreme example, I ran another ESXi guest at the third level to prove a 64-bit OS would run.  On a second virtualised ESXi guest I’m just running a 32-bit Linux VM at that same third level.

For my test lab three levels is as deep as I need to go.  I’ve read that it’s also as deep as you can go, as guest VMs will fail to run at a fourth level.  I don’t have the resources to test this out in my lab.  I’ve heard that there was a VMworld presentation on nesting ESXi servers multiple levels deep.  It would certainly be worth finding.

 

Appendix

As with many things VMware, nested ESXi is not officially supported.
VMware Link:  Support for running ESX/ESXi as a nested virtualization solution

I had trouble with the vmxnet3 network drivers in the nested ESXi, so instead I chose to go with the E1000, particularly for that third level.