Monthly Archives: September 2015

Part 5: Yes, you can have 32GB in your NUC!

Every time I get asked about the Intel NUC, the number one question is, ‘Can I have 32GB in it?’.  Until recently my response was ‘No’, and from the reaction you’d think I’d shattered a childhood dream.  Most people lose interest in the NUC at that point.  It’s quite frustrating, because it’s a great little device, and even with the official 16GB limit, multiple NUCs still make a great home lab option.  The relatively low price and power consumption are hard to beat, especially compared to something like a Supermicro, which seems to be the home lab favourite for reaching 32GB and beyond.

Recently I was put onto I’M Intelligent Memory, who have specialised in DRAM since 1991.  As far back as early 2014, I’M have been producing a 16GB SODIMM module, and they currently produce one aimed at 5th Gen Intel Broadwell processors.  They claim their 16GB SODIMM modules are compatible with notebooks and Intel NUCs using i3/i5/i7-5xxxU CPUs.


I’M reference quite a good article on their website, written by PCWorld, covering the 16GB SODIMM module used in an Intel NUC; it’s well worth a read.  PCWorld even approached Intel for their opinion on using the module in the NUC.  While Intel would not validate the module for use in the NUC, they did say that “technically” there is no reason why it won’t work.

For some time I had been under the impression that memory was limited by the CPU.  In the case of the Intel NUC, however, it appears this is more an issue of JEDEC SODIMM standards preventing modules larger than 8GB.  I’M work around some of the JEDEC limits by stacking chips on top of each other.  This allows them to double the density of a SODIMM, though it no doubt makes the modules non-standards-compliant.

With I’M Intelligent Memory explicitly stating that the modules will work with 5th Gen Broadwell CPUs, a thorough write-up from PCWorld demonstrating that the modules do indeed work in 5th Gen NUCs, and Intel saying that technically it’s achievable, the NUC has just become even more appealing, especially for a virtualization home lab setup.

The one caveat, though, is the price!  I recently purchased 16GB (2x8GB SODIMM modules) for a new 5th Gen NUC for $75 US.  I’M’s 16GB SODIMMs have come down recently but are still going for a whopping $285 US (325 Euro) each, which is approximately three times the cost per gigabyte.

Anyone who has been dismissing the NUC due to the 16GB limit now has to re-evaluate their position.  If 32GB really is that important, it’s now achievable, with price being the new factor in the equation.

References

http://www.intelligentmemory.com/dram-modules/ddr3-so-dimm
http://www.pcworld.com/article/2894509/want-32gb-of-ram-in-your-laptop-or-nuc-you-can-finally-do-it.html

Articles in this series

Part 1: The NUC Arrival
Part 2: ESXi Owning The NUC
Part 3: Powering a NUC Off A Hampster Wheel
Part 4: The NUC for Weight Weenies
Part 5: Yes, you can have 32GB in your NUC

VCSA 6.0 Update 1 returns the VAMI

Earlier this month, Update 1 for the vCenter Server Appliance 6.0 was released.  With all the cool things coming out of VMworld, like vSphere Integrated Containers, Photon, and EVO SDDC, you might think ‘Pfft’ at a vCenter update, but the return of the VAMI was the most exciting thing for me.  Seriously, this is something I’ve been hanging out for.  I always found it odd that it was missing in vCenter 6.  It appears it was really just a priority issue, the VAMI not being ready in time for the GA release of 6.0.

If you’re not familiar with the VAMI, it’s now better known as the Appliance Management User Interface.  It has been in the vCenter Server Appliance since version 5.0 and provided an easy way to manage host-based settings on the appliance, such as networking, time and NTP, and the ever-important patching.  When I started moving to VCSA 6.0 I had to learn new ways to configure these settings, so thank God I can now forget them.  The interface has been completely rewritten in HTML5, keeping to the VMware theme.


Access to the new Appliance Management User Interface (AMUI for short now, I guess?) is on the same port as before, 5480 (https://<fqdn or ip>:5480).  Log in with the root account and password.

When you log in you are faced with a Summary screen.  Here you get an overview of your Health Status, SSO status, version, and installation type, along with options to reboot and shut down.


On the left you have the navigation pane, from which you can select and modify a number of different settings.  The Access page allows you to enable SSH and the Shell.  Networking covers your DNS and IP details.  Time is a big one: prior to this you had to use the Shell to modify NTP settings, and I recently wrote up a post on the process to modify VCSA NTP settings with the VAMI.

The Update page is another important one.  Checking for patches is now no harder than a single click, or better yet, a daily scheduled check.  Installation is just as easy with another click.


The last page is Administration.  Here you can easily change the root password and the root password expiry mode.  Most places will want expiry enabled, but if you’re managing many vCenters it can be quite hard to stay on top of, and if you’re using strong passwords it’s maybe not that critical.

If you’re currently running a vCenter Server Appliance you can get to version 6.0 Update 1 by either upgrading or patching.  The two methods are very different.  The upgrade method, by all accounts, is similar to previous releases: it involves deploying a new appliance and performing a migration.  All relatively simple.

The patching process, on the other hand, is a little less involved and my preferred option.  It requires downloading a much smaller ISO and mounting it to the appliance, and you’ll need to already be on version 6.0.  As mentioned above, the VAMI doesn’t exist prior to Update 1, so the appliance has to be patched from the console.  I’ve written previously about patching a vCenter Server Appliance from the Shell.  The process is essentially the same and worth a look if you’re not familiar with it.
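As a rough sketch, assuming you’ve attached the patch ISO to the appliance VM’s virtual CD-ROM drive, the install can be kicked off from the appliance’s command prompt with the software-packages utility, telling it to read from the mounted ISO:

```
Command> software-packages install --iso --acceptEulas
```

Once the installation completes, reboot the appliance for the patch to take full effect.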

I highly recommend reading the Release Notes and looking at patching up to this latest version.  It resolves a number of outstanding issues from previous releases; the JRE has been updated and SSLv3 has been disabled.  My initial home lab upgrade took only 10 minutes.

So get to it!

References

Update for VMware vCenter Server Appliance 6.0 Update 1 (2119924)

VMware vCenter Server 6.0 Update 1 Release Notes

Increase vSphere Web Client Idle Timeout on VCSA 6

What I really liked about the vCenter C# Client was that you could stay logged in indefinitely.  Maybe not the best thing from a security standpoint, but it was damn convenient.  The vSphere Web Client, on the other hand, has always had an idle timeout value, and from a home lab point of view it’s really frustrating to be constantly logged out.

The default idle timeout in the vCenter Server Appliance 6 (VCSA) is 120 minutes.  I know of no way to modify this via the Web Client itself, but it is modifiable via the Shell.  The timeout value is contained in the webclient.properties file.  The location of this file has changed from previous versions of the VCSA: prior to version 6 it was found in /var/lib/vmware/vsphere-client/, while in version 6 it’s found in /etc/vmware/vsphere-client/.

At the Command prompt of the VCSA type the following to enable Shell access.

Command> shell
Shell is disabled.
Command> shell.set --enabled true
Command> shell
vc01:~ #

Now at the Shell, run the following and locate session.timeout.

cat /etc/vmware/vsphere-client/webclient.properties

You should find something similar to session.timeout = 120 as this is the default value.

Make a backup copy of webclient.properties.

cp /etc/vmware/vsphere-client/webclient.properties /etc/vmware/vsphere-client/webclient.properties.bak

If you’re comfortable using an editor like vi, go ahead and use that to increase or decrease the value (in minutes).  Alternatively, the timeout value can be quickly and easily modified using the sed command.

The sed command below locates the specific string session.timeout = 120 and replaces it with session.timeout = 1440, which is 24 hours.  Change 1440 to however many idle minutes you want.  If sed doesn’t find that exact string, don’t worry; it won’t modify anything.  To disable the idle timeout entirely, set the value to 0.

sed -i "s/session.timeout = 120/session.timeout = 1440/g" /etc/vmware/vsphere-client/webclient.properties
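If you want to see exactly what this substitution does before touching the appliance, here’s a self-contained sketch you can run anywhere against a throwaway file (the /tmp path is purely for demonstration; note that -i without a suffix assumes GNU sed, which is what the VCSA ships):

```shell
# Create a throwaway file mimicking the default entry
printf 'session.timeout = 120\n' > /tmp/webclient.properties.demo

# The same style of substitution as above, run on the demo file
sed -i 's/session.timeout = 120/session.timeout = 1440/g' /tmp/webclient.properties.demo

# Confirm the change took effect
grep 'session.timeout' /tmp/webclient.properties.demo
```

The grep at the end should print session.timeout = 1440, mirroring the cat check described below.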

Run the cat command again and check that the session.timeout value has changed.

cat /etc/vmware/vsphere-client/webclient.properties

If the session.timeout value has been modified correctly, we now need to stop and restart the vsphere-client service by running the commands below.  I covered stopping and starting services on a VCSA in a previous post.

service-control --stop vsphere-client
service-control --start vsphere-client
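If you want to confirm the service has come back up before opening a browser, service-control also takes a --status flag:

```
service-control --status vsphere-client
```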

Wait a few minutes for the service to fully start, then open a new browser window to the vSphere Web Client.  It should now be running with the new idle timeout.

As a general disclaimer, you should only be going into the Shell on the VCSA if you’re comfortable with what you’re doing.  Make a backup of any files you modify and, to play it safe, take a snapshot of the VCSA VM.

References

Configure the vSphere Web Client Timeout Value