Tag Archives: esxi

Configuring ESXi prerequisites for Packer

I’m currently working on a Packer build process for a customer using ESXi. The last time I worked on Packer was over a year ago, and I quickly realised how much I’d forgotten about getting to a successful build. It’s also been interesting to experience new bugs and nuances in newer versions of the products. So I thought I’d document some of my experiences on the way to a successful build. I think this will easily turn into a multi-part post, so I’ll attempt to document my process as much in order as possible.

A quick recap on Packer if you’re new to it all. Packer is brought to you by HashiCorp, the good folks behind Terraform and Vagrant. It’s a relatively small, single-file executable that you can use to programmatically build images through scripts. You can then take those images and upload them to AWS, Azure, or in my case, VMware ESXi.

While Packer works great with tools like Vagrant and VirtualBox, as a VMware consultant I want to leverage ESXi for the build process. But before we can start with a Packer build we need to set up a few prerequisites on ESXi.

The first thing we need to do is designate an ESXi host for our builds. The reason we need a host and not a vCenter is that Packer will connect to the host via SSH and use various vim-cmd commands to do its work. Once we have a host there are three steps to complete, listed below.

Enable SSH
First we need to enable SSH on our host. There are a number of different ways to do this. The two easiest are via the ESXi host web client or, if the host is managed by vCenter, from within vCenter itself.

For the host web client, navigate in a browser to https://esxi_host/ui and log in with the root account. Navigate to the Manage pane and select the Services tab. Scroll down to TSM-SSH and click Start. Under Actions you may also want to change the policy to Start and stop with host.

In vCenter it’s a little different. Locate the host you have designated and select the Configuration tab. Scroll down to Security Profile and click Edit. A new window will appear. Scroll down and locate SSH, select Start, and change the Startup Policy to Start and stop with host.
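
If you already have console or ESXi Shell access to the host, SSH can also be enabled and set to start from the command line; the standard vim-cmd host service calls below should do it.

vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh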

Enable GuestIPHack
Next we need to run a command on the ESXi host. This command allows Packer to infer the IP address of the guest VM via ARP packet inspection.

SSH onto the ESXi host (e.g. using PuTTY) and run the below command.

esxcli system settings advanced set -o /Net/GuestIPHack -i 1
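
To confirm the change took effect you can read the setting back; the Int Value in the output should now show 1.

esxcli system settings advanced list -o /Net/GuestIPHack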


Open VNC firewall ports on ESXi
Lastly, Packer uses VNC to issue boot commands to the guest VM. I believe the default range is 5900-6000, with 5900 being the default VNC port; if you’re performing multiple builds or the port is in use, Packer will cycle through the range until it finds an available one.

Run the following commands on the host to allow us to modify and save the firewall service.xml file.

chmod 644 /etc/vmware/firewall/service.xml
chmod +t /etc/vmware/firewall/service.xml
vi /etc/vmware/firewall/service.xml

Scroll to the very end of the file and, just above the last line </ConfigRoot>, press i (to insert) and add the below in.

<service id="1000">
  <id>packer-vnc-custom</id>
  <rule id="0000">
    <direction>inbound</direction>
    <protocol>tcp</protocol>
    <porttype>dst</porttype>
    <port>
      <begin>5900</begin>
      <end>6000</end>
    </port>
  </rule>
  <enabled>true</enabled>
  <required>true</required>
</service>

Press ESC and type :wq! to save and quit out of the file.

Restore the permissions of the service.xml file and refresh the firewall rules.

chmod 444 /etc/vmware/firewall/service.xml
esxcli network firewall refresh

You can check that you successfully made the change by heading back over to the host in the web client and checking the Security Profile section. Only the first port will be shown, not the full range. You can also use the below commands on the host to confirm and see the entire range of ports set.

esxcli network firewall ruleset list
esxcli network firewall ruleset rule list
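
The second command lists every firewall rule on the host, so piping it through grep is an easy way to narrow the output down to just the new ruleset (using the packer-vnc-custom id from the XML above).

esxcli network firewall ruleset rule list | grep packer-vnc-custom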


These are the only three changes you really need to make to an ESXi host for a Packer build to work successfully. I’ve tried to go into a little detail on each step to explain what each change is doing. In reality it should only take a few minutes to implement.

In some secure environments I’ve seen SSH set with a timeout. If you notice SSH being disabled after you’ve enabled it, you’ll need to remove the timeout, as SSH needs to stay enabled. You’ll also want to confirm that no external firewalls are blocking access to the VNC port range from Packer.
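
The timeout is normally controlled by the ESXiShellTimeOut advanced setting, which covers both the ESXi Shell and SSH. Something along the lines of the below should check it and set it to 0 (no timeout), though check your environment’s security policy before doing so.

esxcli system settings advanced list -o /UserVars/ESXiShellTimeOut
esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 0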

In future posts I’ll go into detail on the prerequisites for configuring a Windows Server to run Packer and on importing / exporting OVF images.

ESXi Host Client Officially Released

A few days ago ESXi 6.0 Update 2 was released. Quietly added in was version 1 of the ESXi Embedded Host Client. I’ve spoken a few times about the Host Client. It started out as a VMware Fling by VMware engineers Etienne Le Sueur and George Estebe. Since then it has gained such a hugely positive response from the community that it has finally found its way into ESXi.

If you’ve recently upgraded to or installed ESXi 6.0 Update 2 you can access the Host Client via a browser over standard SSL (https://myesxi-host/ui/). You can log in with the host’s root account. If you’ve never seen the Embedded Host Client before you’re in for a huge surprise. You’ll be amazed at how similar it looks to the vSphere Web Client. Not only that, but it’s extremely snappy and fast, being built upon HTML5.

I recently upgraded my NUC home lab hosts to Update 2 to check out the production build. It looks and feels just like the Tech Preview. It’s going to be a great replacement for the C# Client. If you’re running a previous Tech Preview release of the fling there are a few things to note before you upgrade to Update 2. Initially I did an upgrade of a host with an old Tech Preview 5 fling installed, and Update 2 left that version of the fling in place. So on my subsequent hosts I removed the Tech Preview fling before upgrading the host. That resolved the issue and installed the v1 production release.

Below are the steps to remove the Tech Preview fling before upgrading a host. The -f represents a force removal, just in case you have any third-party vibs that may conflict with the uninstall, as I did.

[root@esxi:~] esxcli software vib remove -f -n esx-ui
Removal Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed:
VIBs Removed: VMware_bootbank_esx-ui_0.0.2-0.1.3357452
VIBs Skipped:
[root@esxi:~]

If, like me, you upgraded a host before removing the Tech Preview version of the fling, you can download the official Host Client from the VMware download portal. Listed with the ESXi 6.0 U2 Zip and ISO images are the Host Client VIB and Offline Bundle. Then just run through the steps to remove and install the VIB.
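
The install side is just the reverse of the removal above. Something like the below should do it once the downloaded VIB has been copied to a datastore on the host (the path and file name here are placeholders for whatever you downloaded).

esxcli software vib install -v /vmfs/volumes/datastore1/esxui.vib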

There is a newer build also available up on the Flings page, Tech Preview v6. I chose to upgrade to this build as it’s just my home lab. The process is simple; I outlined the steps to update the Embedded Host Client to a new build in a previous post.

Latest v1 Production Build

host-client_v1

Latest Tech Preview Build

host_client_tp6

References

Embedded Host Client Fling Page

VMware Host Client Release Notes

Patch Release ESXi550-201601501-BG breaks connectivity to vCenter

A few days back I upgraded an ESXi host from ESXi 5.1 to 5.5. I had downloaded the latest image my vCenter and hardware would support. The upgrade went well. Once the host rebooted and was connected back in vCenter I performed a patch scan to check for missing patches.

I had about six missing patches. One was ESXi550-201601501-BG, released only a few days beforehand on 4 Jan. It was marked as a Critical Bugfix. As I have always done with Critical Bugfixes, I remediated. After all, it was Critical and a Bugfix. Once the host remediated and rebooted I found that I could no longer connect to the host via vCenter, receiving the below error when attempting to connect.

Cannot contact the specified host (hostname). The host may not be available on the network, a network configuration problem may exist, or the management services on this host may not be responding.

I opened a C# Client and was able to successfully connect to the ESXi host directly, so all looked good there. I decided to look up the KB article for this latest patch (KB 2141207) to see what it fixed. Nothing came up. No page could be found!

With a lack of information on this patch the best thing I could do was simply revert the patches. The fix was simple. I rebooted the host, and during the Loading VMware Hypervisor screen I pressed Shift+R for Recovery Mode.

recovery01

The VMware Hypervisor Recovery screen then appears. It shows the current build and the previous build that it will roll back to. Pressing Y starts the process. It was quick and simple, and I was back up and going on the previous 5.5 build I had just upgraded to, without the additional patches I had installed.

recovery02
(The above screenshot is not from the host that I rolled back but rather from a base 5.1 image)

The KB article page now works. It is quite clear you shouldn’t update to version 5.5 Update 3b before you upgrade your vCenter to the same 5.5 Update 3b version. The interesting note here is that I didn’t realise I was updating the host to ESXi 5.5 Update 3b. My upgrade image was ESXi 5.5 Update 3a and I thought I was just installing a critical bugfix patch. Where I came undone was that this was an esx-base patch that brought the effective build number of the ESXi host up to the same level as a full ESXi 5.5 Update 3b build.

So what was actually going on with this patch…

Reading the KB article, it isn’t entirely obvious what’s going on. Yes, we now know not to update unless vCenter is on Update 3b, but why? The article states that the patch updates the esx-base vib to resolve an issue when upgrading from ESXi 5.5 to 6.x, where some VMs and objects may become inaccessible or report as non-compliant. That’s clearly, though, not what we’re doing here.

The KB article now has a new link to a second KB article (KB 2140304) which describes the issue you will face if you don’t upgrade vCenter first.  What it describes is that SSLv3 is now disabled by default in this esx-base vib.  So I can only presume that vCenter wants to connect to the ESXi host using SSLv3.

Fortunately the resolution wasn’t that complex. I was able to resolve the issue and get the host up and going relatively quickly, all within my outage window. This is the first time I have experienced an ESXi patch break a host. I’m surprised that a patch is actually changing the default behaviour of a host. I can understand, and even support, VMware making the change in the latest ISO image build of ESXi 5.5, but not in a patch.

The Lesson…

So the lesson here (and there’s probably a few, but I’m going with this one): stay away from any esx-base patches released after 4 Jan 2016 until vCenter is upgraded to 5.5 Update 3b.

As a side note I would assume SSLv3 could be re-enabled on the host but I haven’t looked into this yet.

References

After upgrading an ESXi host to 5.5 Update 3b and later, the host is no longer manageable by vCenter Server (2140304)

VMware ESXi 5.5, Patch ESXi550-201601501-BG: Updates esx-base (2141207)

ESXi Embedded Host Client v4 released

Unless you’ve been hiding under a rock recently you must have heard of the ESXi Embedded Host Client Fling by now. This is one of the more exciting Flings of recent times. Actually there are a couple of really cool ones at the moment, but this one definitely stands out. And it’s just undergone another update. With each iteration the engineers, Etienne Le Sueur and George Estebe, keep adding more features and bug fixes. It’s progressing along really nicely.

I did a VMUG community talk recently on VMware Flings and spoke about the Embedded Host Client. For my first community talk, it was really well received. If you’re not up on what Flings are about or what the ESXi Embedded Host Client can do, please check it out.

Installing the Fling is really easy, especially if you’re already running v3 of the Fling as I was.  If you’re currently running v3 you can use the Update method under the Help menu like I did.

Updating ESXi Embedded Host Client from v3

Firstly you need to grab the latest build from VMware Flings. Download the vib and upload it to a datastore accessible to your host. I used the Embedded Host Client’s Datastore Browse feature to do the upload.

Log into the Embedded Host Client as you normally would and click on Help in the top right and select Update.

embedded_host_client_03

Next enter the path to where you uploaded the vib file. This can be a little tricky as you have to manually enter the path. It took me about seven tries to get the path and file name correct. A feature request I’ll be submitting for the next version is a browse option so the correct path and name can be pasted in.

embedded_host_client_04

Click Update and, if you got the path correct, you should see a task similar to the below. You’ll know pretty quickly if it didn’t work because the task will end instantly and you won’t see a progress bar in the results.

embedded_host_client_05

Once complete reload your browser session and sign back in.  Your build version should show the latest version.

embedded_host_client_06

Of course if you’re not already running the Embedded Host Client, or are on a version older than v3, you can still install/update using the ESXCLI command from the ESXi console.

I’ve covered in previous posts how to install a vib from the console using ESXCLI. Examples are also given on the Instructions tab of the Fling’s page. Basically, upload the vib to a datastore on the host, then use the command below, substituting the path for where you placed the vib. No reboot or Maintenance Mode is required, which is really nice.

esxcli software vib install -v /vmfs/volumes/datastore1/esxui-signed.vib
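
If an older version of the vib is already installed you can use the update variant of the same command instead, substituting the same path as above.

esxcli software vib update -v /vmfs/volumes/datastore1/esxui-signed.vib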

That’s all that’s required and hopefully the installation goes well.  You should now be able to access the host via https://esxi_host/ui/

I can’t wait to see this become an official product.  No doubt there’s a long and tedious process to get the ESXi Embedded Host Client certified and ready as a VMware supported product.  Hopefully we see it sometime in 2016.

References

ESXi Embedded Host Client Flings page.

My talk on VMware Flings and the ESXi Embedded Host Client

 

Part 5: Yes, you can have 32GB in your NUC!

Every time I get asked about the Intel NUC the number one question that comes up is, ‘Can I have 32GB in it?’. Up until recently my response was ‘No’, and it was like I had shattered a childhood dream for the person asking. Most people I talk to lose interest in the NUC at this point. It’s quite frustrating, because it’s a great little device, and the official 16GB limit, when combined with multiple NUCs, is still a great option for a home lab. The relatively low price and power consumption are hard to beat, especially compared to something like a Super Micro build that seems to be a home lab favourite for reaching 32GB+.

Recently I was put onto I’M Intelligent Memory, who have specialised in DRAM since 1991. I’M have been producing 16GB SODIMM modules going back as far as early 2014, and currently produce a 16GB SODIMM module aimed at 5th Gen Intel Broadwell processors. They claim that their 16GB SODIMM modules are compatible with notebooks and Intel NUC i3/i5/i7-5xxxU CPUs.

16GB_SODIMM

I’M reference quite a good article on their website, written by PCWorld, on the 16GB SODIMM module used in an Intel NUC; it’s well worth a read. PCWorld even approached Intel for their opinion on the module in the NUC. While Intel would not validate the module for use in the NUC, they did say that “technically” there is no reason why it won’t work.

For some time I had always been under the impression that memory was limited by the CPU. In the case of the Intel NUC, however, it appears this is more an issue of JEDEC SODIMM standards preventing SODIMM modules larger than 8GB. I’M work around some of the JEDEC limits by stacking chips on top of each other. This allows them to double the density of a SODIMM, but no doubt makes the modules not standards compliant.

With I’M Intelligent Memory explicitly stating that the modules will work with 5th Gen Broadwell CPUs, a thorough write-up from PCWorld demonstrating that the modules do indeed work in 5th Gen NUCs, and Intel stating that technically it’s achievable, the NUC has just become even more appealing, especially for a virtualization home lab setup.

The one caveat, though, is the price! I recently purchased 16GB (2x8GB SODIMM modules) for a new 5th Gen NUC for $75 US. I’M’s 16GB SODIMMs have come down in price recently but are still going for a whopping $285 US (325 Euro), which works out to nearly four times the cost per gigabyte.

All those people out there who have been dismissing the NUC due to the 16GB limit now have to re-evaluate their position. If 32GB is such an important factor for them, it’s now achievable, with price being the new factor in the equation.

References

http://www.intelligentmemory.com/dram-modules/ddr3-so-dimm
http://www.pcworld.com/article/2894509/want-32gb-of-ram-in-your-laptop-or-nuc-you-can-finally-do-it.html

Articles in this series

Part 1: The NUC Arrival
Part 2: ESXi Owning The NUC
Part 3: Powering a NUC Off A Hampster Wheel
Part 4: The NUC for Weight Weenies
Part 5: Yes, you can have 32GB in your NUC

No vmkcore disk partition is available

Sometimes I really do feel the computer gremlins are out to get me. For as long as I can remember I’ve had a flawlessly running test lab at work. Then, on the day of my scheduled ESXi host upgrades, I came across numerous hosts with the below error.

No vmkcore disk partition is available and no network coredump server has been configured.  Host core dumps cannot be saved.

no_vmkcore01

I tried to ignore the error but VMware Update Manager would have no bar of it and prevented me from performing an ESXi version upgrade.

The error is referring to the location where ESXi will dump its core during a Purple Screen of Death (PSOD). Usually you’ll see this warning on a stateless configured ESXi host. In that situation a host runs in memory with no disk, and you would usually configure the vSphere Network Dump Collector service. That wasn’t the case in my situation.
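
As an aside, if you are running stateless hosts the Network Dump Collector target can be set from the shell with something along these lines (a sketch; the vmkernel interface, collector address, and port below are placeholders).

esxcli system coredump network set --interface-name vmk0 --server-ipv4 192.168.1.10 --server-port 6500
esxcli system coredump network set --enable true
esxcli system coredump network check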

Logging into the Shell, I ran the following.

~ # esxcli system coredump partition list

no configured dump partition found; skipping

Next I attempted to set the coredump partition.

~ # esxcli system coredump partition set --enable true --smart

Unable to smart activate a dump partition.  Error was: Not a known device: naa.6000097000024659483748380304235.

I’m not really sure what was going on here. I just hope no one was messing with LUN mappings.

So next I used set -u to unconfigure any current core dump partition, followed by set --enable true --smart, which allows ESXi to automatically determine the best location to use.

~ # esxcli system coredump partition set -u

~ # esxcli system coredump partition set --enable true --smart

~ # esxcli system coredump partition get

Active: naa.600601600fc04857394578c4d945e311:7

Configured: naa.600601600fc04857394578c4d945e311:7

This resolved the error immediately without any reboots and allowed me to continue with ESXi host upgrades.

I don’t know the root cause of this issue, but as I’m upgrading ESXi versions anyway, I think it’s okay to sometimes let things go.

Note: --enable true --smart contains double dashes.

PowerCLI A Syslog Server To All ESXi Hosts In vCenter

I was recently tasked with configuring all ESXi hosts within a number of vCenter environments to use a Syslog Server. Each of these environments contained numerous clusters and ESXi hosts. Too many to want to configure a Syslog Server on by hand. Using our lab environment I played around with a few different ways of quickly pushing out Syslog settings via PowerCLI.

I came across two different PowerCLI cmdlets that would do the job. The first was Set-AdvancedSetting and the second was Set-VMHostAdvancedConfiguration, the latter being a deprecated cmdlet. Both cmdlets do the job equally well. Ideally you would want to be using the newer Set-AdvancedSetting cmdlet rather than the deprecated one.

Where I didn’t like the newer Set-AdvancedSetting was that it would ask for confirmation before making a change. I didn’t fully realise how annoying this would be until after I had written my script. Once a connection to a vCenter is made in PowerCLI, the script I created prompts for a Syslog Server, then gets all ESXi hosts in the vCenter, applies the Syslog Server value, reloads syslog, and finally opens the syslog ports. All simple and basic, except that you will be prompted to apply the syslog value to each host. Fine if you’re very cautious or if you want to omit specific hosts.

Perform operation?
Modifying advanced setting ‘Syslog.global.logHost’.
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help
(default is “Y”):y

I then decided to test out Set-VMHostAdvancedConfiguration and found I preferred this cmdlet over Set-AdvancedSetting. Each time it set a syslog value on a host it would flash up a friendly little yellow warning and go ahead with the change. Much more convenient for large changes. At some point in a future PowerCLI version I assume this cmdlet will stop working, but until then it works a treat.

WARNING: Set-VMHostAdvancedConfiguration cmdlet is deprecated. Use
Set-AdvancedSetting cmdlet instead.

The below code block uses the newer Set-AdvancedSetting cmdlet.

# This script will set a Syslog Server on all ESXi hosts within a vCenter once connected.
# Uses the newer Get-AdvancedSetting / Set-AdvancedSetting cmdlets.
# This cmdlet will require confirmation for each host being modified.
# Created by Mark Ukotic
# 04/04/2015

Write-Host "This script will change the Syslog Server on all hosts within a vCenter, restart Syslog, and open any required ports."

Write-Host

$mySyslog = Read-Host "Enter new Syslog Server. e.g. udp://10.0.0.1:514"

Write-Host

foreach ($myHost in Get-VMHost)
{
    # Display the ESXi host being modified
    Write-Host '$myHost = ' $myHost

    # Set the Syslog Server
    $myHost | Get-AdvancedSetting -Name Syslog.global.logHost | Set-AdvancedSetting -Value $mySyslog

    # Restart the syslog service
    $esxcli = Get-EsxCli -VMHost $myHost
    $esxcli.system.syslog.reload()

    # Open firewall ports
    Get-VMHostFirewallException -Name "syslog" -VMHost $myHost | Set-VMHostFirewallException -Enabled:$true
}

This second code block uses the deprecated Set-VMHostAdvancedConfiguration cmdlet, which I prefer.

# This script will set a Syslog Server on all ESXi hosts within a vCenter once connected.
# The deprecated Set-VMHostAdvancedConfiguration cmdlet is used.
# It runs a little cleaner with this cmdlet and doesn't ask for confirmation.
# Created by Mark Ukotic
# 04/04/2015

Write-Host "This script will change the Syslog Server on all hosts within a vCenter, restart Syslog, and open any required ports."

Write-Host

$mySyslog = Read-Host "Enter new Syslog Server. e.g. udp://10.0.0.1:514"

Write-Host

foreach ($myHost in Get-VMHost)
{
    # Display the ESXi host being modified
    Write-Host '$myHost = ' $myHost

    # Set the Syslog Server
    Set-VMHostAdvancedConfiguration -Name Syslog.global.logHost -Value $mySyslog -VMHost $myHost

    # Restart the syslog service
    $esxcli = Get-EsxCli -VMHost $myHost
    $esxcli.system.syslog.reload()

    # Open firewall ports
    Get-VMHostFirewallException -Name "syslog" -VMHost $myHost | Set-VMHostFirewallException -Enabled:$true
}
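
For a quick one-off change on a single host, the same three steps (set the log host, reload syslog, open the firewall) can also be done from the ESXi shell, along these lines, using the example address from above.

esxcli system syslog config set --loghost=udp://10.0.0.1:514
esxcli system syslog reload
esxcli network firewall ruleset set -r syslog -e true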

Modifying Services via PowerCLI

If your vSphere environment is anything like the ones I manage, over time you can be left with various ESXi hosts with services left Running when they should be Stopped. It’s so common to turn on SSH or the ESXi Shell to troubleshoot an issue and then forget to stop the service when you’re done.

If you’re managing tens, if not hundreds, of ESXi hosts you don’t want to be clicking on each host and checking the Security Profile settings.

These can be checked and modified really easily via PowerCLI. Below I slowly build up a basic script that will check and modify a service on all hosts connected to a vCenter.

Open PowerCLI and make a connection to vCenter.

Connect-VIServer myvcenter.domain.local

Once connected we can run the following cmdlet to list all hosts in vCenter.

Get-VMHost

Next we can narrow it down by selecting an individual host then displaying all Services on that host to help identify the Service we want to modify.

Get-VMHost -Name esxi01.domain.local | Get-VMHostService

powercli_service01

This will display all services on the host, their policy state, and whether they are running.

Now we can take it one step further and enumerate all hosts looking for a specific service using its service name from the Key column above. In this case I want to list the settings for the ESXi Shell, which is defined by the Key value "TSM".

Get-VMHost | Get-VMHostService | Where {$_.Key -eq "TSM"}

powercli_service02

Next I want to change the policy from On to Off for all hosts, which we would do as follows.

Get-VMHost | Get-VMHostService | Where {$_.Key -eq "TSM"} | Set-VMHostService -Policy "Off"

Finally, I want to also change the ESXi Shell on all hosts from Running to Stopped.

Get-VMHost | Get-VMHostService | Where {$_.Key -eq "TSM"} | Stop-VMHostService

powercli_service03

This will display a prompt asking you to acknowledge the operation on each or all hosts.

The commands above are very crude but get the job done very quickly. They can obviously be narrowed down and refined much further. For example, Get-Cluster can be used in front of Get-VMHost to target a specific cluster. Also, the host name can be included in the output to better see which hosts you’re modifying on an individual basis. Call that your study lesson 😉

Configuring and Testing NTP on ESXi

I hate NTP.  I hate time sync issues.  I hate time skew issues on ESXi.

So now that I’ve got that out of the way, I feel a whole lot better. I can now talk about how to configure and, importantly, validate your NTP settings on an ESXi host.

Setting and syncing time on an ESXi host is done within Time Configuration on the Settings tab of an ESXi host.

timesync02

The time can either be set manually or via NTP. Setting the time manually is self-explanatory: basically change your time and click OK. It’s not something I’m going to go into any further. Ideally, though, you want to be setting your time with NTP. Using NTP is relatively easy too; the hardest part will be making sure you have the correct ports open on all parts of the network. NTP generally uses UDP port 123, by the way.

timesync03

So firstly you want to select Use Network Time Protocol. Next, head over to http://www.pool.ntp.org and find your closest NTP time sources. Understanding how NTP actually works under the hood is quite interesting but beyond what this post is about; Wikipedia is a good start on NTP. Each region around the world has a number of NTP pools, and within those regions many countries have pools of their own. For me the closest pool is Australia, within the Oceania region. Australia has 4 pools, and within these pools are a number of servers. I can use one of these pools or I can use them all; I’ll be using them all for redundancy. Once I enter these pool addresses, separated with commas, I click the Start button and click OK.

timesync04

The Time Configuration should now look something similar to the below. The time change is not instant and can take… well… time.

timesync05
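
While you wait, the host’s current time and the hardware clock can be checked from the shell with two quick commands that don’t change anything.

date
esxcli hardware clock get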

But how do you test that these settings are correct, considering that the time sync process is not instant? Furthermore, NTP uses UDP port 123, which is connectionless. Well, we can query the output our NTP sources give us, which can be done from the CLI of the ESXi host.

Log into the console of the ESXi host using whatever method you prefer.  The simplest is usually just starting and connecting to SSH.

We use the ntpq command and type the following.

ntpq -p localhost

The output should be something similar to the below. VMware have a good KB article which explains what it all means if you really want to know.

timesync06

If we see something similar we know we’re good and the time should start to change shortly. If we get all zeros, we probably have network and DNS working but NTP is blocked at the firewall somewhere.
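
If you suspect the firewall on the host itself, the NTP client ruleset (normally named ntpClient) can be checked and, if needed, enabled from the same shell session.

esxcli network firewall ruleset list | grep -i ntp
esxcli network firewall ruleset set -r ntpClient -e true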

 

Part 2: ESXi Owning The NUC

The Hardware

With the SSD and memory installed and the NUC re-assembled I was almost ready to power on. The last thing I needed to do was replace the North American power cable with an Australian plug. As I have never thrown a cable out since I was 8, this wasn’t an issue. The system now looked as follows.

Intel NUC Kit D34010WYK
Crucial 16 GB Kit (8GBx2) SODIMM
Crucial M500 120GB mSATA internal SSD

2 x 4 GB USB Thumb Drives
Bluetooth Wireless KB/Mouse

The BIOS

The first thing I wanted to do was check the BIOS and see if it required an update to the latest version. The Bluetooth KB/mouse plugged into the USB port worked without issue. F2 on the NUC splash screen gets you into Intel’s VisualBIOS. It’s all laid out nicely and is simple to understand. Finding the BIOS version is straightforward: on the Home screen in Basic View you can see the BIOS version in the top left. The first part of the version is the Initial Production Version, WYLPT10H.86A. The second part is the actual BIOS version, 0027, followed by the date and what looks to be a build number, 2014.0710.1904.

nuc_bios02

Pic1. Booting into the NUC Bios

Getting the latest BIOS version is as simple as heading over to the Intel Download Center, searching for NUC and selecting the model number. I was on version 0026 (released back in 2013) with the latest being 0027, released only a few months back. I downloaded the OS Independent .BIO file and copied it onto a USB thumb drive, then plugged it into the NUC while it was sitting on the BIOS screen. I went to the Wrench icon and selected Update BIOS. The NUC then rebooted and flashed the new BIOS. The most obvious change I saw was the ability to now check and update the BIOS over the Internet. However, when I tried it, I received an ‘Unable to locate Product ID version’ error message. Something to look into at a later stage.

Installing ESXi

So now for the more interesting and fun (read: frustrating) part, installing ESXi. The first thing I did was install a Beta version of ESXi. Now, unfortunately, NDA says I can’t talk about this 🙁 How’s that for a tease… All I’ll say is that the process was fairly painless to get up and running.

But next I tried ESXi 5.5, and this I can talk about 😉

Booting off ESXi 5.x media should be as simple as mounting the ISO image and copying the files onto a USB thumb drive. Attempting to boot this way, however, caused an error during boot with the following.

<3>Command line is empty.
<3>Fatal error: 32 (Syntax)

A VMware KB article’s solution was to turn off UEFI in the BIOS. This was only part of the solution, as the error now disappeared but no boot media could be found. Some searching led me to UNetbootin, a small, simple app that will turn an ISO image into bootable media on a USB thumb drive.

Finally I had ESXi 5.5 booting off the USB and starting the install process. The network drivers in ESXi 5.x will not recognise the Intel I218V Ethernet Controller, which will cause the installation of ESXi to fail. The I218V will work with the e1000e driver, so the next step was to locate an updated version. I ended up finding net-e1000e-2.3.2.x86_64.vib.

There are two ways to inject the drivers into the ISO image: the correct VMware way and the quick way! The correct VMware way is with PowerCLI, using the Add-EsxSoftwareDepot cmdlet and packaging a new ISO. It turns out there is a much simpler way using an app called ESXi-Customizer. Another simple app that will take an original ESXi ISO, extract it, inject a vib file of your choosing, and repackage it, all within 60 seconds. This tool should be required learning for a VCAP; it’s that simple and it works!

So… I now have a new image with updated e1000e network drivers. I re-ran UNetbootin against the new image and created boot media. And happy days! ESXi 5.5 boots off the USB thumb drive with the new boot image and installs fine, with the updated network drivers, to the second USB thumb drive I have plugged in.

But why a second USB drive you ask?  Well, read on…

nuc_nic01

Pic2. Intel I218V Network Controller detected in ESXi

So I installed ESXi to a second USB thumb drive, as mentioned above, because during the installation the SSD was not detected when selecting a device to install to. For the time being this isn’t an issue for me. In my first post (Part 1: The NUC Arrival), my end game was always to boot and run ESXi from USB and use the internal SSD as storage, and at some point vSAN. All I’m doing now is skipping ahead to that step.

Getting ESXi 5.5 to detect the NUC’s mSATA controller requires creating new SATA driver mappings. With a little digging around I found what I needed at vibsdepot.v-front.de. Here you can download a VIB or an offline bundle (if you want to inject it into the ISO the correct VMware way). There’s also a link to the author’s blog post that explains this process in much more detail.

Summarised below is what I did to get the NUC’s mSATA controller detected. I opened up an SSH session to the ESXi host and typed the following.

esxcli software acceptance set --level=CommunitySupported
esxcli network firewall ruleset set -e true -r httpClient
esxcli software vib install -d http://vibsdepot.v-front.de -n sata-xahci

After a reboot the controller was detected and the SSD storage was available to create Datastores.
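
If you want to confirm the mapping worked before heading back to the client, the storage adapters can be listed from the same SSH session; the AHCI controller should now show up in the list.

esxcli storage core adapter list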

nuc_storage01

Pic3. Storage Controller now detected in ESXi

Conclusion

If you don’t want to read this whole post (it’s okay, I don’t blame you) and just want the steps to install ESXi 5.5 onto a 4th Gen NUC, here they are.

1. Download ESXi 5.5 ISO from VMware.
2. Turn off UEFI in the BIOS of the NUC.
3. Download net-e1000e-2.3.2.x86_64.vib.
4. Download ESXi-Customizer, run it and select the ESXi 5.5 ISO and net-e1000e-2.3.2.x86_64.vib.
5. Download UNetbootin and create a bootable USB drive from the new ISO created above.
6. Boot off the USB thumb drive and install ESXi to a second USB thumb drive.
7. Enable SSH and run the above ESXCLI commands to create new mappings to the mSATA controller.
8. Turn UEFI back on in the BIOS

Conclusion Conclusion

At this point I have a fully working ESXi 5.5 host running on the NUC. I am running off a 4GB USB thumb drive, and I have networking and detectable SSD storage. The sky’s the limit now.

I’ll now be looking at purchasing a second Intel NUC, and while it’s being shipped, I’ll have a couple of weeks to play with vSphere on this current NUC.

UPDATE:

At the time I wrote this article ESXi 6.0 was still in Beta so I couldn’t talk about it. Now that it’s GA I can say that the process to install ESXi 6.0 still requires the mSATA driver-mapping step to get the internal storage to work. The great news is that networking now works out of the box.

Articles in this series

Part 1: The NUC Arrival
Part 2: ESXi Owning The NUC
Part 3: Powering a NUC Off A Hampster Wheel
Part 4: The NUC for Weight Weenies
Part 5: Yes, you can have 32GB in your NUC