
Modify HTML5 vSphere Client Idle Timeout

Before I go any further, just to make it clear: we’re talking about the new HTML5 client in vSphere 6.5 (GA Build 4602587), not the older Flash-based vSphere Web Client in vCenter 5 and 6.  So let’s call it the vSphere vCenter HTML5 UI Web Client.  Clear now?  Ok, just refer to the pic below.

Below are the steps I used on the vCenter Server Appliance.

Just like the old Web Client, I know of no way to change the idle timeout from within the UI today.  So we have to connect to the console and make the change through the shell.  We do this by opening a console window to the VM, or by using SSH, and logging in as root (remember to enable SSH first).

At the Command prompt of the VCSA type the following to enable Shell access.  You may receive a Shell is disabled message.  If you do, enable it with shell.set.

Command> shell
Shell is disabled.
Command> shell.set --enabled true
Command> shell
vc01:~ #

Now at the Shell type the following below and locate session.timeout.

cat /etc/vmware/vsphere-ui/webclient.properties

You should find something similar to session.timeout = 120 as this is the default value in minutes.

Make a backup copy of webclient.properties.

cp /etc/vmware/vsphere-ui/webclient.properties /etc/vmware/vsphere-ui/webclient.properties.bak

If you’re comfortable using an editor like vi, go ahead and use it to increase or decrease the value in minutes.  It doesn’t appear that you can set this value to never time out, which is probably for the best.  I tried 0 and -1 and both caused the vSphere Client to time out instantly on login.  The timeout value, though, can quickly and easily be modified using the sed command.

The sed command below locates the specific string session.timeout = 120 and replaces it with session.timeout = 720, which is 12 hours (or in other words my standard work day).  Change 720 to however many idle minutes you want.  If sed doesn’t find the specific string, don’t worry, it won’t modify anything.

sed -i "s/session.timeout = 120/session.timeout = 720/g" /etc/vmware/vsphere-ui/webclient.properties

Run the cat command again and check that the session.timeout value has changed.

cat /etc/vmware/vsphere-ui/webclient.properties

If the session.timeout value has been modified correctly we now have to stop and restart the vsphere-ui service by running the following commands below.  I covered stopping and starting all services on a VCSA in a previous post HERE.

service-control --stop vsphere-ui
service-control --start vsphere-ui

Wait a few minutes for the service to start up fully, then open a new browser window to the vSphere Client.  It should now be running with the new idle timeout.
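The steps above can be rolled into one small sketch.  This assumes the default VCSA 6.5 properties path and GNU sed on the appliance; set_idle_timeout is my own name, not a VMware command:

```shell
# One-shot version of the steps above: back up, rewrite the timeout, show result.
set_idle_timeout() {
    minutes="$1"
    props="${2:-/etc/vmware/vsphere-ui/webclient.properties}"
    cp "$props" "$props.bak"                                       # backup first
    sed -i "s/^session.timeout = .*/session.timeout = $minutes/" "$props"
    grep "session.timeout" "$props"                                # show the result
}
# Usage on the appliance, then restart the UI service:
# set_idle_timeout 720
# service-control --stop vsphere-ui && service-control --start vsphere-ui
```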


Let’s have a Fling

After years of procrastinating, last week I gave my first vMug community talk in Melbourne.  After years of saying I can do that and constantly nodding my head to the Melbourne vMug committee to get more involved, I finally decided to step up.  Now that it’s over I can finally say that it was well worth it!

The decision to finally give a talk came at the end of vBeers in Melbourne a few months back.  After a few beers and some great conversations, I left with the determination to get up and give that community talk.  There was just one issue though: I had no idea for a topic.  So I spent the next few weeks thinking of an idea and asking friends for suggestions.

In some ways the hardest part of the talk was just coming up with an interesting idea.  Something that not only other people would find interesting but me too.  I finally settled on VMware Flings.  I’ve been interested in VMware Flings for a few years now and this year they’ve really taken off with some really great Flings.  I put together a presentation abstract then reached out to the Melbourne vMug committee with my idea.  The @mvmug committee liked the idea and slotted me in as one of two committee presentations for Melbourne’s final vMug of the year.

Over the next month I put together my talk.  I’m not a big fan of the ‘Death by PowerPoint’.  So I worked on a talk where after I introduced myself I would pull up a web browser using the VMware Flings website as my slide deck.  Then lead into some demos in my home lab.  A format I think worked out really well.  Staring at slide decks can be hard after a long vMug.  So being able to interact with something on the screen keeps the audience engaged.  The biggest curve ball I got thrown was coming down with a virus two weeks prior to the talk.  Over that period I had blocked sinuses, constant coughing, and even a loss of voice for a day.  Disappointingly I was in no position to rehearse my talk over that period.  What was becoming a very exciting time became an extremely frustrating period 🙁

The day before the talk was nothing short of miraculous (either that or the upping of medication).  I woke up feeling like my old self.  It meant that I had one good college-style cram session the night before to rehearse my talk.  Not ideal but I’d take it 🙂

I felt that the talk went well.  I managed to use the full 45 minutes allocated to my session.  I tried to keep the talk moving along without dwelling on any one area for too long.  I had a short introduction and a brief overview of what VMware Flings are with some examples.  Then I moved onto three demos in my home lab: the ESXi Embedded Host Client, followed by PowerActions for the vSphere Web Client, and finishing up on a slightly less popular and more obscure Fling, the ESXi Google Authenticator.

I learnt quite a lot from the experience, with a number of key takeaways for the future.  Firstly, don’t rely on the facilities, not even as backups.  If you require Internet access, bring your own, then bring your own backup.  Bring all your own cables and dongles to connect to projectors.  I naively expected HDMI or DVI connectors.  Imagine my surprise when I got VGA.  Fortunately a VEEAM presenter from an earlier session lent me his VGA dongle.  Finally, stay relaxed.  If you can interact with the audience, even just a little, it goes a long way in creating a positive atmosphere to present in.

The @mvmug committee have done a fantastic job in recording all of November’s sessions.  My session can be found here!


VCSA 6.0 Update 1 returns the VAMI

Earlier this month Update 1 for the vCenter Server Appliance 6.0 was released.  With all the cool things coming out of VMworld, like vSphere Integrated Containers, Photon, and EVO SDDC, I think, pfft, Update 1 for vCenter and the return of the VAMI was the most exciting thing for me.  But no, seriously, this has been something I’ve been hanging out for.  I always found it odd that it was missing in vCenter 6.  It appears that it was really just a priority issue, the VAMI not being ready in time for the GA release of 6.0.

If you’re not familiar with the VAMI, or as it’s better known now, the Appliance Management User Interface: it’s been in the vCenter Server Appliance since version 5.0.  It allowed an easy way to manage host-based settings on the appliance such as networking, time & NTP, and the ever-important patching.  When I started moving to VCSA 6.0 I had to learn new ways to configure these settings.  So thank God I can now forget them.  The interface has been completely rewritten in HTML5, keeping to the VMware theme.


Access to the new Appliance Management User Interface (I guess AMUI for short now???) is on the same original port of 5480 (https://<fqdn or ip>:5480).  Log in with the root account and password.

When you log in you are faced with a Summary screen.  Here you get an overview of your Health Status, SSO status, version and Installation Type along with options to Reboot and Shutdown.


On the left you have the Navigation Pane.  From here you can select and modify a number of different settings.  The Access page allows you to enable SSH and the Shell.  Networking is for your DNS and IP details.  Time is a big one.  Prior to this you had to use the Shell to modify NTP settings; I recently wrote up a post on the process to modify VCSA NTP settings from the Shell.

The Update page is also another important one.  Checking for patches is now no harder than a simple click or better yet a daily scheduled time.  Installation is just as easy with another click.


The last page is Administration.  Here you can easily change the root password and the root password expiry mode.  Most places may want expiry on, but if you’re managing many vCenters it can be quite hard to stay on top of, and if you’re using strong passwords it’s maybe not that critical.

If you’re currently running a vCenter Server Appliance you can get to version 6.0 Update 1 by either Upgrading or Patching.  The two methods are very different.  The upgrade method, by all accounts, is similar to previous versions.  It involves deploying a new appliance and performing a migration.  All relatively simple.

The patching process on the other hand is a little less involved and my preferred option.  It requires downloading a much smaller ISO and mounting it to the appliance.  You’ll need to already be on version 6.0.  As mentioned above, the VAMI doesn’t exist prior to Update 1, so the patch has to be applied from the console.  I’ve written previously about patching a vCenter Server Appliance from the Shell.  The process is essentially the same and worth a look if you’re not familiar with it.

I highly recommend reading the Release Notes and looking at patching up to this latest version.  It resolves a number of outstanding issues from previous releases.  JRE has been updated along with SSLv3 being disabled.  My initial home lab upgrade took only 10 minutes.

So get to it!


Update for VMware vCenter Server Appliance 6.0 Update 1 (2119924)

VMware vCenter Server 6.0 Update 1 Release Notes

Being a vExpert is a great motivation booster!

Returning from a month long Euro Trip a few weeks back I had been struggling to get back into work mode.  So it was a huge motivation boost to receive an email earlier this week welcoming me to the VMware vExpert Program as a 2015 vExpert.  I’m thrilled to have been recognised along with a lot of other amazing people.

My history with VMware products goes back to the Virtual Infrastructure and ESX Server 3 days.  I still remember my scepticism when my manager said we’re going to migrate to this virtualisation stuff.  Over the last few years VMware and Virtualisation have become a crucial part of my role in the Cloud & Managed Services department of Optus Business in Australia.

I’m really looking forward to engaging ever more with the virtualisation community throughout the remainder of the year and into next.

The 2015 second half intake of vExperts can be found here.

And the full list here.


Patching vCenter Server Appliance 6 (VCSA)

The biggest thing I miss from the v5.x release of the vCenter Server Appliance is the VMware Appliance Management Interface (VAMI).  I first realised it was missing in VCSA 6 when I needed to modify NTP settings.  What I liked about the VAMI was that it could auto-check and install patches.  With it now removed we’re back to a manual check and apply process 🙁

So to get started… The easiest way to check what build you are on is in the vSphere Web Client.  Navigate to vCenter Inventory Lists -> vCenter Servers and click on your VC.


Once you know what build you are on head over to Product Patches at https://my.vmware.com/group/vmware/patch#search

Select VC as the product and 6.0.0 as the version.  Note the releases: one will be for the Windows version and one for the Appliance.  The Windows version still requires downloading the full product to update, while the Appliance gets away with a smaller, yet still relatively large, 1 GB patch file.  Both releases, though, can still apply minor patches individually.


Download the 1 GB ISO file (or whatever is current at the time).  Mount the ISO to the vCenter Appliance VM as you normally would with any ISO file.


Now login to a console session on the Appliance and run the following command

software-packages install --iso --acceptEulas


The --acceptEulas flag is optional.  If you choose to leave it out you will have to scroll through the VMware End User License Agreement and type yes to accept.

Command> software-packages install --iso
[2015-06-13T12:21:59.164] : Staging software update packages from ISO
[2015-06-13T12:21:59.164] : ISO mounted successfully
[2015-06-13 12:21:59,076] : Running pre-stage script…..
[2015-06-13T12:22:00.164] : Verifying staging area
[2015-06-13T12:22:00.164] : Validating software update payload
[2015-06-13T12:22:00.164] : Validation successful

Do you accept the terms and conditions? [yes/no] yes

This process will now automatically stage the patches to the VC and proceed to install them immediately.  During this process management of the VC will be lost, so keep this in mind for your users.
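If you’d rather not lose management straight away, the appliance shell can also stage the packages first and install them later in a maintenance window.  A sketch using the staging flags of software-packages (verify against software-packages --help on your build):

```shell
# Stage now, install later, rather than staging and installing in one hit.
software-packages stage --iso --acceptEulas   # copy packages off the mounted ISO
software-packages list --staged               # review what has been staged
software-packages install --staged            # apply when you're ready
```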

Once complete you will receive a message that a reboot is required to complete the installation.  According to VMware doco this is an optional step.  That said, after my upgrade completed I still had no management connectivity via the Web Client or C# client, so I ran shutdown reboot -r and proceeded to reboot the appliance.

[2015-06-13T12:29:53.164] : Packages upgraded successfully, Reboot is required to complete the installation.
Command> shutdown reboot -r "I have been patched"

Additional Commands (optional)

If you want to see the last patches that were applied, run the command below.

Command> software-packages list --history
[2015-06-13T12:54:03.164] :
'Name' 'Install Date'
VC-6.0.0a-Appliance-FP 2015-06-13 12:29:52


VMware reference doco
Patching the vCenter Server Appliance



No vmkcore disk partition is available

Sometimes I really do feel the computer gremlins are out to get me.  For as long as I can remember I’ve had a flawlessly running Test Lab at work.  Then on the day of my scheduled ESXi host upgrades I came across numerous hosts with the below error.

No vmkcore disk partition is available and no network coredump server has been configured.  Host core dumps cannot be saved.


I tried to ignore the error but VMware Update Manager wouldn’t have a bar of it and prevented me from performing an ESXi version upgrade.

The error is referring to the location ESXi will dump its core to during a Purple Screen of Death (PSOD).  Usually you’ll see this warning on a stateless configured ESXi host.  In that situation a host runs in memory with no disk, and you would usually configure the vSphere Network Dump Collector service instead.  This wasn’t the case in my situation.

Logging into the Shell I ran the following.

~ # esxcli system coredump partition list

no configured dump partition found; skipping

Next I attempted to set the coredump

~ # esxcli system coredump partition set --enable true --smart

Unable to smart activate a dump partition.  Error was: Not a known device: naa.6000097000024659483748380304235.

Not really sure what was going on here.  I just hope no one was messing with LUN mappings.

So next I used set -u to unconfigure any current core dump location, followed by set --enable true --smart, which allows ESXi to automatically determine the best location to use.

~ # esxcli system coredump partition set -u

~ # esxcli system coredump partition set --enable true --smart

~ # esxcli system coredump partition get

Active: naa.600601600fc04857394578c4d945e311:7

Configured: naa.600601600fc04857394578c4d945e311:7

This resolved the error immediately without any reboots and allowed me to continue with ESXi host upgrades.
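As an aside, if no suitable partition exists at all, ESXi 5.5 and later can also dump core to a file on a VMFS datastore.  A hedged sketch, where datastore1 and coredump1 are placeholder names for your environment:

```shell
# Create and activate a file-based coredump target (ESXi 5.5+).
# "datastore1" and "coredump1" are hypothetical names; use your own.
esxcli system coredump file add --datastore datastore1 --file coredump1
esxcli system coredump file set --smart --enable true
esxcli system coredump file list    # verify the new target is active
```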

I don’t know the root cause of this issue, but as I’m upgrading ESXi versions I think it’s okay to sometimes let things go.

Note: --enable true --smart contains double dashes.

VCE Certified Converged Infrastructure Administration Engineer (VCE-CIAE)

Last week I took and passed my VCE-CIAE certification exam.  It wouldn’t surprise me if you said what certification.  VCE™ Certified Converged Infrastructure Administration Engineer (VCE-CIAE) is a relatively new certification released by VCE™ Company back in December.  It fits into the Manage Track of their Certified Professional Program, the program itself being only about a year old.  According to VCE™ there are currently over 4000 certified professionals.

A few months back I took VCE™’s (or as I like to think of them, VMware + Cisco + EMC) 5-day Administration and Management training course.  It was held in EMC’s Melbourne office.  It was a really well-run course taught by an instructor from Malaysia.

After the course I was keen to pursue VCE™ certification.  There are currently two Tracks available, Manage and Deploy, with a soon to be released Design Track, all of which contain three levels of certification.  I found the most relevant to me was the Manage Track.

Step 1 in my study was obtaining the prerequisite Associate status, VCE-CIA.  Very similar in concept to VMware’s associate program, it’s an online exam which can be taken any time at home.  Best of all, unlike VMware’s VCA, it’s free.  Registering and taking the exam links you over to EMC’s education portal.  The exam is presented after a free 4-hour foundation course (which can be skipped at any time).  There is no time limit, with a passing score of 80%, though I don’t recall how many questions.  I did find it interesting that you get a second attempt at a question if you fail to answer correctly.  I ended up passing with 96%, which I think equated to one question wrong.

With the prerequisite out of the way I could now study and take the second level Administration Engineer exam.  The study material PDF from VCE™ was pretty vague: a few links to company websites, an EMC VNX PDF, a VMware vSphere PDF, and a Cisco UCS Manager Configuration Guide.  I used the practice exams from VCE™ to gauge my rough level of knowledge.  As expected I found my weak area was Cisco UCS.  While I have hands-on knowledge of all three vendors in my day-to-day role, most Cisco UCS work is done by a different team.  So I chose to use Pluralsight’s Implementing Cisco UCS Training to fill in the missing knowledge.  It was an excellent course presented and run by Jason Nash.  This course ended up providing 98% of the Cisco material I required to pass the exam.

The exam is booked through Pearson VUE and currently costs $200 USD.  I found the exam itself fairly true to the practice questions provided by VCE™.  The VCE™ exam guide stated 60 questions over 90 minutes.  I ended up getting 65 questions.  I can only assume five of those questions were evaluation questions for future inclusion into the exam.  Time was not an issue at all.  As I expected, the exam tested knowledge a mile wide but only an inch deep.  Unless you ask 200+ questions it’s near impossible not to, especially when you have three products that have individual certs in their own right.

As with the rest of the industry, VCE™ certifications expire after two years and need to be renewed.  All this does is make it ever harder to hold onto certs in multiple disciplines nowadays, with something always expiring.  Rather than immediately looking to take the third and final Master Engineer cert in the Manage Track I’ll be holding off.  The intention is to renew and upgrade my certification status at the same time in the future.


VCE Manage Track Homepage

VCE™ Certified Professional Program

Pluralsight: Implementing Cisco UCS

Modifying vCenter Server Appliance 6 (VCSA) NTP settings

For some reason that I’m yet to learn, the VAMI in vCenter Server Appliance 6 has been removed.  The VAMI is the management interface that you usually connect to on port 5480 for most VMware appliances.  Prior to vCenter 6 you could connect to your VCSA appliance on port 5480.  In the VAMI you could check the status of the appliance services, change its network settings, perform updates, and change NTP settings.


It’s this last setting that quickly alerted me to the change shortly after deploying my first VCSA 6.  During the initial deployment of my vCenter Appliance I would specify my NTP servers when prompted.  During my first two attempts the deployment errored out and failed because the NTP time sources I specified were timing out.  So on the third attempt I decided to skip the NTP servers and configure them post-install.

Herein lies the new way of modifying NTP settings on a vCenter 6 Appliance.

Firstly we need to log into the appliance via SSH or via the console using the root account.


We will be presented with a VMware shell with instructions on how to enable BASH.  For this task, and many other vCenter tasks, the current shell is good enough.  From here we can run our NTP commands.  If we type ntp followed by the TAB key we get a list of ntp commands we can run.


Typing ntp.get lists the current status of NTP and what NTP servers are configured.  In this case the status is Down and no servers have been configured.

Command> ntp.get
Status: Down

As we have no NTP servers listed we can use the ntp.server.set command.  This will override any servers currently listed.

Command> ntp.server.set --servers 0.au.pool.ntp.org
Command> ntp.get
Status: Down
Servers: 0.au.pool.ntp.org

We now have one NTP time source set.  If we wish to make modifications to the list of servers without overriding them we can use the ntp.server.add command.

Command> ntp.server.add --servers 1.au.pool.ntp.org
Command> ntp.get
Status: Down
Servers: 0.au.pool.ntp.org 1.au.pool.ntp.org
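For completeness, there also appears to be a matching delete command for removing a single server without clearing the rest.  Tab completion after typing ntp. will confirm what your appliance build actually exposes:

```shell
# Remove one server from the list, leaving the others in place.
# Assumes ntp.server.delete exists in your appliance shell build.
Command> ntp.server.delete --servers 1.au.pool.ntp.org
```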

With our NTP time sources set, we now enable and start NTP using the timesync command.

Command> timesync.set --mode NTP
Command> timesync.get
Mode: NTP

Command> ntp.get
Status: Up
Servers: 0.au.pool.ntp.org 1.au.pool.ntp.org

And that’s really all that is required.  Relatively straightforward to perform.  From my point of view it’s certainly not as convenient as using the VAMI web portal from previous versions.  As mentioned above, I don’t know why it was removed.  Perhaps time constraints meant it will be introduced in a future update, or perhaps it’s just hidden on a different port I’m not aware of.  In any case it would be nice to officially know.

Note: '--servers' above uses a double dash.


Add or Replace NTP Servers in the vCenter Server Appliance Configuration

VCP5-DCV Delta done and dusted

Over the weekend logic and common sense failed me and I decided to sit my VCP5-DCV Delta Exam.  Since VMware introduced a recertification policy back in March 2014 I’ve been biding my time to renew.  My deadline was approaching, and with the limited-time offer to sit a Delta exam for recertification I jumped on it.

To date I’ve seen nothing out there on user experiences taking the Delta exam, though it’s only been three weeks since the VCP5-DCV Delta exam became available.  So unfortunately taking this exam would be uncharted waters.  I’d been psyching myself up all week, so that wasn’t going to faze me 🙂

I had my plan ready for first thing Saturday morning.  I would fire up the vSphere environment on my new NUC test lab.  Download all the PDFs in the VCP5-DCV Delta Blueprint.  Then cram like I’ve never crammed before over 48 hours and take the exam Sunday night.

Step 1 was Requesting Authorization on the VMware MyLearn site.  I had already performed this earlier in the week and was authorized the same day; my authorization was valid for 10 years, I guess just in case I became a little busy.  Step 2 was booking the VCP550D exam on the Pearson VUE website first thing Saturday morning.  There’s nothing like knowing you’ve already paid for an exam to make you prepare for it.  The exam cost was $130 AUD ($120 USD).

Next I downloaded and studied the Blueprint.  65 questions over 75 minutes; that works out to just under 70 seconds per question.  VMware are notorious for pushing time limits in exams 🙁  The MyLearn site states that only new material between vSphere 5.0/5.1 and vSphere 5.5 would be on the exam, yet the blueprint contained everything that would be in a full VCP exam.  This made study a little difficult.  I wasn’t going to study everything, obviously.  So I took advantage of the FREE 1-hr online course, VMware vSphere: What’s New Fundamentals of V5.5, to help prepare for the exam.  I used that as the basis of what I needed to study.  The online course was a good starting point for where to start the deep dives into the PDFs.  What I found was absent in the video, but clearly mentioned on the blueprint, was vCOPs and vSAN.  Something to keep in mind.

Next came the mind-numbingly hard part of reading the PDFs.  I focused heavily on some of the new guides, in particular the Replication guide, Data Protection, and Storage.  By Sunday afternoon (after a day and a half of reading) I had covered most of the material in the PDFs.  The only exception being the three vCOPS PDFs totalling 400 pages, which I refused to read!

Next came taking a few practice exams off the VMware MyLearn site.  I knew the questions would be broader than the Delta exam so I just focused on the new 5.5 material.

As the end of Sunday approached it was time to take the exam.  Now, if it’s not clear by this point, this is an online exam.  For the people that don’t know what that means: it is an open book exam!  Now I don’t want hate messages.  It’s an open book exam!

So this is where my three monitors got put to good use.  Monitor 1, the exam window.  Monitor 2, Google.  Monitor 3, the Advanced Search function of Adobe Reader set to search all PDFs in the Blueprint folder.

Now if you’ve read this far, plain and simple, I’m not going to give you the answers.  I feel, though, based on my experience I can comfortably recommend what you need to be studying.  So know your vSAN, know your vFlash, know your VDP, and know Replication.  It felt like somewhere between 50% and 75% of the exam was this new material, and the rest was standard VCP knowledge that we should all know.  Looking at the exam at a high level, it’s set in the format of a traditional VCP exam.  So if you can remember back to your last one, expect the same types of questions worded the same way.

Where I felt I was weak, and would also recommend: know your vSphere Editions and high-level vCOPs 😉  If you’ve done your VCA-DCV certification you’ll be fine with that knowledge for vCOPs.  Just focus on your knowledge of what the Badges are and what they are comprised of.

So now long story short I am recertified for another 728 days.

As an open disclaimer I’ve been using vSphere 5.5 since day one of release.  I’ve been closely following all the new technologies that have been accompanying vSphere 5.5.  Keep that in mind before you say this was a paper certification.


Recertification Policy

VCP5-DCV Delta recertification exam

Pearson VUE VMware exam registration site

Configuring and Testing NTP on ESXi

I hate NTP.  I hate time sync issues.  I hate time skew issues on ESXi.

So now that I’ve got that out I feel a whole lot better.  I can now talk about how to configure and importantly validate your NTP settings on an ESXi host.

Setting and syncing time on an ESXi host is done within Time Configuration on the Settings tab of an ESXi host.


The time can either be set manually or via NTP.  Setting the time manually is self-explanatory: change your time and click OK.  It’s not something I’m going to go into any further.  Ideally, though, you want to be setting your time with NTP.  Using NTP is relatively easy too; the hardest part will be making sure you have the correct ports open on all parts of the network.  NTP generally uses UDP port 123, btw.


So firstly you want to select Use Network Time Protocol.  Next head over to http://www.pool.ntp.org and find your closest NTP time sources.  How the underlying technology of NTP works is actually quite interesting, but beyond what this post is about; Wikipedia is a good start on NTP.  Each region around the world has a number of NTP pools, and within those regions many countries have pools of their own.  For me the closest pool is Australia within the Oceania region.  Australia has 4 pool addresses, and behind each sit a number of servers.  I can use one of these pools or I can use them all.  I’ll be using them all for redundancy.  Once I enter these pool addresses, separated with commas, I click the Start button and click OK.
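For hosts where the UI isn’t handy, the same configuration can be sketched from an SSH session.  This is a hedged sketch: the pool names are the Australian ones used above, the ruleset name is the standard ntpClient, and note that edits made directly to /etc/ntp.conf can later be overwritten by changes made in the vSphere client.

```shell
# Allow outbound NTP through the ESXi firewall.
esxcli network firewall ruleset set --ruleset-id ntpClient --enabled true
# Append the pool servers to the NTP daemon's config.
for s in 0.au.pool.ntp.org 1.au.pool.ntp.org 2.au.pool.ntp.org 3.au.pool.ntp.org; do
    echo "server $s" >> /etc/ntp.conf
done
/etc/init.d/ntpd restart   # restart the NTP service to pick up the change
chkconfig ntpd on          # ensure the service starts with the host
```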


The Time Configuration should now look something similar to below.  The time change is not instant and can take… well… time.


But how do you test that these settings are correct, considering that the time sync process is not instant?  Furthermore, NTP uses UDP port 123, which is connectionless.  Well, we can query the output our NTP sources give us, which can be done from the CLI of the ESXi host.

Log into the console of the ESXi host using whatever method you prefer.  The simplest is usually just starting and connecting to SSH.

We use the ntpq command and type the following.

ntpq -p localhost

The output should be something similar to below.  VMware have a good KB article which explains what it all means if you really want to know.


If we see something similar we know we’re good and the time should start to change shortly.  If we get all zeros, we probably have network and DNS working but NTP is blocked at a firewall somewhere.
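If you suspect the firewall, a quick first check from the ESXi host itself is whether the NTP client ruleset is enabled at all:

```shell
# Confirm the ESXi firewall allows outbound NTP (UDP 123).
esxcli network firewall ruleset list | grep -i ntp
```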