Author Archives: Mark Ukotic

Display Last Command in PowerShell Title Bar

Carrying on from my previous post on pimping out your PowerShell console, I started to look into ways to update the PowerShell title bar. In PowerShell there's an automatic variable called $host where you can specify your own custom UI settings. One of the properties that can be modified is the console title bar. Now you say, "But Mark, why?". Well I say, "But why not.". And now that we have that formality out of the way…

Changing the PowerShell console title is actually very simple and can be done on one line.

$Host.UI.RawUI.WindowTitle = 'Dream bigger.'

This command can be run from the command line or placed into your startup profile.ps1 file.

I initially placed a quote into the WindowTitle property, placed the line at the bottom of my profile.ps1 file, and had my startup profile load it automatically when I ran a new PowerShell session. However, after my recent experimentation with the PowerShell prompt and the Get-History cmdlet, I had the idea of dynamically populating the console title bar with my previous commands.

A lot of the leg work to do this is explained in my previous post, Display Execution Time In PowerShell Prompt. As such, I'm not going to delve into it in too much depth here; instead I recommend looking at that post.

To update the console title with our previous command we leverage the cmdlet Get-History (just as I used in my previous post).

$host.ui.RawUI.WindowTitle = (Get-History)[-1]

This will update our console title with our last command, but it won't continue to update after each subsequent command.

So we can take this one step further by updating the built-in PowerShell function Prompt. This function runs after each command is executed. We can modify the function by pasting the below code into our PowerShell session. This will work for our current PS session.

function Prompt {
  $history = Get-History
  # Guard against an empty history when a new session first opens
  if ($history) {
    $host.UI.RawUI.WindowTitle = $history[-1]
  }
  return " "
}

Better yet, we can update our startup profile file. Usually this is profile.ps1, held in C:\Users\{Username}\Documents\WindowsPowerShell\ for Windows PowerShell or C:\Users\{Username}\Documents\PowerShell\ for PowerShell Core. By pasting this code into our startup profile, it will execute automatically each time we open a new PowerShell session.

So there you have it. Another pointless awesome way to pimp out your PowerShell console. Combine this with execution time in your prompt and you have the flyest console around.
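If you do combine the two, the merged Prompt function might look something like the below. This is just a sketch pulling together the code from both posts, with a history check to cover a brand new session:

```powershell
function Prompt {
  $history = Get-History
  if ($history) {
    # Show the last command in the title bar
    $host.UI.RawUI.WindowTitle = $history[-1]
    # And its execution time in the prompt
    $executionTime = ($history[-1].EndExecutionTime - $history[-1].StartExecutionTime).TotalMilliseconds
    $time = [math]::Round($executionTime, 2)
  } else {
    $time = 0
  }
  Write-Host ("$time ms | " + $(Get-Location) + ">") -NoNewline -ForegroundColor cyan
  return " "
}
```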

Display Execution Time In PowerShell Prompt

Some time back I attended a presentation where the presenter's PowerShell console displayed their last command's execution time in the prompt. At the time I thought it was a bit of a geeky novelty thing. Though recently I've had a slight change of opinion. It's become a great way to easily see the efficiency of my code.

To use a pretty crude saying, there are many ways to skin a cat. PowerShell is extremely flexible in that it allows you to perform the same task many different ways. But not all ways are equal, right?!? In the two examples below I perform a fairly basic count from 1 to 10 million.

$x = 0
foreach ($i in 1..10000000) {
    $x = $x + 1
}
$x

In the above example the code runs in “around” 9834 milliseconds (9.8 seconds).

class MyClass {
    static [int] CountUp() {
        $x = 0
        foreach ($i in 1..10000000) {
            $x = $x + 1
        }
        return $x
    }
}

[MyClass]::CountUp()

In this second example the code runs in 552 milliseconds (~0.5 seconds). A huge difference.
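If you'd like to sanity-check numbers like these without touching your prompt, Measure-Command will time a script block directly. A quick sketch (your results will vary by machine and PowerShell version):

```powershell
# Time the plain loop; Measure-Command returns a TimeSpan object
$loopTime = Measure-Command {
    $x = 0
    foreach ($i in 1..10000000) { $x = $x + 1 }
}

# Time the class-based version (assumes MyClass from above is already defined)
$classTime = Measure-Command { [MyClass]::CountUp() }

"{0:N0} ms vs {1:N0} ms" -f $loopTime.TotalMilliseconds, $classTime.TotalMilliseconds
```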

Being able to quickly and easily see execution time in the prompt can be quite helpful in determining if you’re coding as efficiently as you could be. It’s led me to trying things multiple ways before I settle. Now the actual code to display execution time is also quite easy to add into your PowerShell startup profile or to just run in your current session.

PowerShell comes with a built-in prompt function which you can override with your own. In the below example I have created a new Prompt function which takes effect in my current session after running the code.

function Prompt {
  $executionTime = ((Get-History)[-1].EndExecutionTime - (Get-History)[-1].StartExecutionTime).TotalMilliseconds
  $time = [math]::Round($executionTime, 2)
  $promptString = ("$time ms | " + $(Get-Location) + ">")
  Write-Host $promptString -NoNewline -ForegroundColor cyan
  return " "
}

The execution time of commands is retrieved from the StartExecutionTime and EndExecutionTime properties of Get-History. I get the time of the previous command, round to two decimal places, and write that to the prompt.

You can also take the function and place it in your PowerShell default startup profile file, which will execute each time you open a new PowerShell session. It does require a slight modification to the above function, which I'll discuss below. I've written a few posts on how to find and modify your default profile. But if you're using Windows PowerShell you can find or add it in C:\Users\{Username}\Documents\WindowsPowerShell\profile.ps1. If you're using PowerShell Core you can find or add it in C:\Users\{Username}\Documents\PowerShell\profile.ps1.

function Prompt {
  if ((Get-History).Count -ge 1) {
    $executionTime = ((Get-History)[-1].EndExecutionTime - (Get-History)[-1].StartExecutionTime).TotalMilliseconds
    $time = [math]::Round($executionTime, 2)
    $promptString = ("$time ms | " + $(Get-Location) + ">")
    Write-Host $promptString -NoNewline -ForegroundColor cyan
    return " "
  } else {
    $promptString = ("0 ms | " + $(Get-Location) + ">")
    Write-Host $promptString -NoNewline -ForegroundColor cyan
    return " "
  }
}

In the code above I've wrapped it in an If / Else statement block. The logic here is that we use Get-History to get the execution time, but when a PowerShell session is first opened there is nothing in Get-History, which causes the function to fail and fall back to a default vanilla prompt. Not ideal. So we create an else block and generate our own default prompt when no history of previous commands exists, as is the case when initially opening a new PowerShell session.

So while many of you may also just find this a geeky novelty thing, it can also be a good reminder to try and keep your code and scripting as efficient as possible. And hey, at the very least you can impress your friends with a cool PowerShell prompt.

HaveIBeenPwned PowerShell Module Updates

Back in 2017 I wrote a post on a PowerShell module I created that consumes Troy Hunt's Have I Been Pwned API service. I won't go into too much detail about the service here. Plenty of people already have, and since that time HaveIBeenPwned has exploded in popularity; most of us know what it is.

In that post I briefly discussed what the module does and how you can begin to use some of the core functions in it. Since that time Troy has made a few changes to the API service, some small and some large, which I've slowly integrated into the PowerShell module. Things like UserAgent strings being a requirement and K-anonymity for password checks.

The community has also played a part in shaping the PowerShell module over the last year. I’ve had a lot of feedback and even some contributions through the GitHub project. It’s been pretty cool to receive PRs via my GitHub page for improvements to the module.

I thought now was a good opportunity for a follow-up post to talk about some of the changes and updates that have been made over the last year.

Probably the biggest change has been K-anonymity in Get-PwnedPassword. Originally you would send your password over the wire in the body of an HTTPS request. With K-anonymity, Get-PwnedPassword will now SHA1 hash your password locally first and send only the first 5 characters of the hash to the HaveIBeenPwned API. It's a much safer way of checking passwords, which hopefully will lead to more people accepting and trying this method.

PS F:\Code> Get-PwnedPassword -Password monkey
AB87D24BDC7452E55738DEB5F868E1F16DEA5ACE
WARNING: Password pwned 980209 times!
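For the curious, the k-anonymity flow is simple enough to sketch standalone. The below is a rough illustration of the idea, calling the public api.pwnedpasswords.com range endpoint directly; it is not the module's actual code:

```powershell
$password = 'monkey'

# SHA1 hash the password locally
$sha1  = [System.Security.Cryptography.SHA1]::Create()
$bytes = [System.Text.Encoding]::UTF8.GetBytes($password)
$hash  = ($sha1.ComputeHash($bytes) | ForEach-Object { $_.ToString('X2') }) -join ''

# Only the first 5 characters of the hash ever leave your machine
$prefix = $hash.Substring(0, 5)
$suffix = $hash.Substring(5)

# The API returns every hash suffix matching that prefix; we compare locally
$response = Invoke-RestMethod -Uri "https://api.pwnedpasswords.com/range/$prefix"
$match = $response -split "`r?`n" | Where-Object { $_ -like "$suffix*" }
if ($match) { "Password pwned $(($match -split ':')[1]) times!" }
```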

I've attempted to make the module and all functions as PowerShell Core compliant as I can. I say attempted because, as much of a fan of PowerShell Core as I am, I keep finding differences in the way Core works. I've had to rewrite all the error handling to better catch 404 responses. A 404 Not Found response is actually a good thing here, identifying that an email account has not been found in a breach. So whether it's Windows PowerShell or PowerShell Core you should now be fine.

In my original post I gave an example of how you could run Get-PwnedAccount against a CSV file of email accounts and bulk check all your email addresses. Something that could be helpful in a corporate environment with many hundreds of email addresses. The example I gave, though, was far from ideal.

This ability is now baked into Get-PwnedAccount and should lead to some interesting results. It's very easy to use. A simple text file saved in CSV format with each email address on a separate line / row. Incorrectly formatted email addresses will be ignored and results are displayed only for email addresses identified in breaches.

Below is an example of what the CSV file might look like:

[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]

Usage is straightforward too.

PS F:\Code> Get-PwnedAccount -CSV F:\emails.csv

Description                   Email             Breach
-----------                   -----             ------
Email address found in breach [email protected]    000webhost
Email address found in breach [email protected]    17
Email address found in breach [email protected]    500px

Each time an email is found in a breach it will output a result as an object, so you may get multiple results for a single email due to the different breaches it's in.

Identifying the total emails found in breaches is simple. For example

PS F:\Code> Get-PwnedAccount -CSV F:\emails.csv |  Measure-Object | Format-Table Count

Count
-----
  413

Now you probably don't want to be hitting the API every time you want to manipulate the data. It will be slow, and rate limiting may block you. Storing the results in a variable will provide a lot more flexibility and speed. For example, finding results for just one email address:

PS F:\SkyDrive\Code> $results = Get-PwnedAccount -CSV F:\emails.csv
PS F:\SkyDrive\Code> $results | Where-Object {$_.email -eq "[email protected]"}

Or if you don’t care about the breach and just want to display a compromised email address once.

$results | Sort-Object Email -Unique | Select-Object Email

You get the point right!?!? It’s fairly flexible once you store the results in an array.
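As one more illustration of that flexibility, you could, for example, rank addresses by how many breaches they appear in (this assumes the same $results variable from above):

```powershell
# Count breaches per address, worst offenders first
$results | Group-Object Email |
    Sort-Object Count -Descending |
    Select-Object Count, Name
```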

Finally one last small addition. Get-PwnedAccount will now accept an email from the pipeline. So if you have another cmdlet or script that can pull an email out, you can pipe that directly into Get-PwnedAccount to quickly check if it’s been compromised in a breach. For example checking an AD user email address could be done as follows…

PS F:\code> Get-ADUser myuser -Properties emailaddress | % emailaddress | Get-PwnedAccount

Status Description              Account Exists
------ -----------              --------------
Good   Email address not found. False

The HaveIBeenPwned PowerShell module can be downloaded from the PowerShellGallery. Always make sure you are downloading and using the latest version. Within PowerShell use Install-Module -Name HaveIBeenPwned. The project can also be found on my GitHub page where you can clone and fork the project.

I’m keen to hear suggestions and feedback. So please let me know your experiences.

Download Links
PowerShellGallery: https://www.powershellgallery.com/packages/HaveIBeenPwned/
GitHub: https://github.com/originaluko/haveibeenpwned

Building and running Windows Terminal

The big news from Microsoft over the last week has been the announcement of Windows Terminal, an open source project from Microsoft currently up on GitHub. Windows Terminal allows you to run multiple tabbed CLIs from the one window. Not only that, but they can be a mix of different CLIs: cmd, PowerShell, Python, Bash, etc. Pretty cool, right? Windows Terminal is GPU accelerated ¯\_(ツ)_/¯ and will allow for transparent windows, emojis, and new fonts.

As of today there are no pre-built binaries of Windows Terminal from Microsoft; that's planned for sometime in Winter 2019 (that's Northern Winter, people). Only the source code is up on GitHub, and a 1.0 release isn't planned till at least the end of the year. The code is still very alpha, but nevertheless I decided to see what's involved in building and running Windows Terminal on Windows 10.

Below I've listed the steps and process I took to build and run Windows Terminal, if anyone is interested in trying it out themselves. There are a number of prerequisites required but nothing too difficult.

Prerequisites

Windows 10 (Build 1903)
As of today (May 2019) you need to be in the Windows Insider program to get this version. You’ll need to enable this inside of Windows 10 and download the latest build.

Visual Studio 2017 or newer
You can probably use a different IDE though I ended up using the community edition of Visual Studio 2019 which is a free download. Microsoft specifically calls out a few packages that you need if you’re running Visual Studio.

  • Desktop Development with C++
    • If you’re running VS2019, you’ll also need to install the following Individual Components:
      • MSVC v141 -- VS 2017 C++ (x86 and x64) build tools
      • C++ ATL for v141 build tools (x86 and x64)
  • Universal Windows Platform Development
    • Also install the following Individual Component:
      • C++ (v141) Universal Windows Platform Tools

Developer Mode in Windows 10.

Build and Deploy Process

The first thing you want to do is check that you’re on at least Windows 10 build 1903. You can check this by going to Settings > About. If you’re not on at least this build you can turn on Release Preview by going to Windows Insider Programme under Settings.

Next you want to make sure you’ve enabled Developer mode. You can do this in Settings > For developers

Now we can grab Visual Studio 2019 Community Edition. This is a super small and quick 20 GB download and install. <sarcasm emoji>

Make sure you select the right Workloads and Individual components from the prerequisites above.

Once the install completes comes the fun part of building. Skip the Visual Studio wizard and go to File > New > Repository

Under the Team Explorer window select Clone and enter in the Windows Terminal git path (https://github.com/microsoft/terminal.git). Make sure Recursively Clone Submodules is selected. Windows Terminal relies on git submodules for some of its dependencies. You’ll need around 200 MB to download the repo.

Once the package downloads you may receive an error that some NuGet packages are missing in the Error List. Even if you don’t it’s still probably a good idea to just update the packages.

Go to Tools > NuGet Package Manager > Package Manager Console. Then in the Package Manager Console type in Update-Package -reinstall

Head over to the Solution Explorer window and select Solutions and Folders view and select OpenConsole.sln

We’re now just about ready to build. Up in the top menu bar select Release for the build, x64 for the architecture, and CascadiaPackage for the package to build.

Select Build > Build Solution. Initially I had a few fails here, which were all down to available space. You'll only need around 12 GB for a build to succeed <another sarcasm emoji>. It should take a few minutes, and hopefully when complete you get a successful build with no errors. Finally, select Build > Deploy Solution.

Once deployed you can find Windows Terminal (Dev Build) on your Start menu which you can now run.

When you first launch Windows Terminal you won't see any tabs. Pressing CTRL+T will open a second tab and display a pull down menu where you can further select different CLIs. Settings can also be found under this menu and can be modified via a JSON file. It's in the profiles.json file that you can change transparency, fonts, colours, and of course add new types of CLIs.
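As a rough illustration, a profile entry in the early profiles.json looks something like the below. Key names have been changing between alpha builds, so treat this as a sketch rather than a reference:

```json
{
    "name": "PowerShell Core",
    "commandline": "pwsh.exe",
    "fontFace": "Consolas",
    "useAcrylic": true,
    "acrylicOpacity": 0.75,
    "colorScheme": "Campbell"
}
```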

Windows Terminal is still very rough around the edges. Microsoft are calling this a very early alpha release and this does show. It is exciting though to see what is coming. Windows Terminal has huge possibilities. I’ll be following it closely over the coming months and looking forward to spewing out emojis all over my terminals. 🙂 😮

vExpert 2019 is here and it’s huge!

Well after a long wait it's that time of year again when the first half of year announcements are made for vExperts. A big congratulations to all the new and returning vExperts this year, in particular all my fellow Australian vExperts. And yeah why not, a congratulations to even those just across the pond over in New Zealand. It's just another state of Australia right 😛

This is my fifth year as a vExpert and as of late last year my first as a vExpert Pro. It’s this new sub program that I’m most proud and honoured to be part of. But more on this program later.

This year we have 1739 vExperts being honoured from 74 countries. That’s approximately 250 more than the same time last year and on par with vExpert numbers after the second half intake of 2018.

The United States is the most represented with 639, followed by the United Kingdom at 157. My home country, Australia, is the ninth most represented with 45 this year. 18 of those countries are represented by only 1 vExpert.

The recently updated public vExpert directory can be found at https://vexpert.vmware.com/directory/ . It contains all of this year's vExperts with their public profiles.

Coming back to the vExpert Pro sub program. I first heard about this program from Valdecir Carvalho (@homelaber) at VMworld Vegas in 2018, and thought it was a really great idea when Val described it to me. I won't go into too much detail on the sub program as there are a number of blogs that cover it very well. Basically though, one of its goals is to create vExperts who can champion the vExpert program in specific countries and regions around the world. In English-speaking countries that might be a little hard to understand, but countries that don't speak English cover, it might be surprising to know, most of the world. As a result of the language barrier it can be hard to recruit and communicate with vExperts in non-English-speaking countries. That's where bilingual vExpert Pros can help, translating vExpert communication into their native language for fellow vExperts and potential candidates.

Coming a little closer to home, I had a few potential first-time vExperts in Australia approach me and ask to sit down and work through the vExpert application process with them. Something I felt quite humbled to help out with. I also had a number of people ask if I could be used as a reference if further information was required of them. Again, something I was more than happy to help with. If I can take a little of my personal time to help someone join this great program, it's well worth it.

A little bit of insight into how the voting and approval process worked this year. With a huge number of applicants now applying for vExpert, you can understand what a mammoth job it is to go through and screen each person for vExpert recognition and award. This is where the vExpert Pros were able to help out in a voting process. We had the opportunity to go through and help the core VMware vExpert team curate and vote for vExpert approval. I can comfortably say we all took this process very seriously. Of course we were just voting and providing feedback, with ultimate say and oversight coming from people like Valdecir Carvalho and Corey Romero in making the final decision. I feel the process worked quite well and should lead to a higher standard not only for vExpert approval but also for future applications.

In conclusion, with the increased scrutiny and review of applicants, everyone that made vExpert for 2019 should be extremely proud of themselves. We're part of a great community and we have a high standard to live up to. The days of providing vague and misleading information on your applications are going away.

Again, congratulations to all the 2019 vExperts! Well Done and keep up the good work.

Configuring ESXi prerequisites for Packer

I’m currently working on a Packer build process for a customer using ESXi. The last time I had worked on Packer was over a year ago and I quickly realised how fast I forgot many things for a successful build. It’s also been interesting to experience new bugs and nuances using new versions of products. So I thought I might document some of my experiences to get to a successful build. I think this will easily turn into a multi-part post so I will attempt to document my process as much in order as possible.

A quick recap of Packer if you're new to it all. Packer is brought to you by the good folks that brought you Terraform and Vagrant: HashiCorp. It's a relatively small single-file executable that you can use to programmatically build images through scripts. You can then take those images and upload them to AWS, Azure, or in my case, VMware ESXi.

While Packer works great with tools like Vagrant and VirtualBox, as a VMware Consultant I want to leverage ESXi for the build process. But before we can start with a Packer build we need to set up a few prerequisites on ESXi.

The first thing we need to do is designate an ESXi host for our builds. The reason we need a host and not a vCenter is that Packer will connect to the host via SSH and use various vim-cmd commands to do its work. Once we have a host there are three steps to complete, listed below.

Enable SSH
First we need to enable SSH on our host. There’s a number of different ways to do this. The two easiest ways are via the ESXi host Web Client or if managed by vCenter inside that.

For the host web client, navigate in a browser to https://esxi_host/ui and login with the root account. Navigate to the Manage pane and select the Services tab. Scroll down to TSM-SSH and click Start. Under Actions you may also want to change the policy to Start and Stop with Host.

In vCenter it’s a little different. Locate the host you have designated. Select the Configuration Tab. Scroll down to Security Profile and Click Edit. A new window will appear. Scroll and locate SSH, select start and change the Startup Policy to Start and stop with host.

Enable GuestIPHack
Next we need to run a command on the ESXi host. What this command does is allow Packer to infer the IP address of the Guest VM via ARP Packet Inspection.

SSH onto the ESXi host (e.g. using putty) and run the below command.

esxcli system settings advanced set -o /Net/GuestIPHack -i 1


Open VNC firewall ports on ESXi
Lastly, Packer uses VNC to issue boot commands to the Guest VM. I believe the default range is 5900-6000, with 5900 being the default for VNC. If you're performing multiple builds or the port is in use, Packer will cycle through the range until it finds an available one.

Run the following commands on the host to allow us to modify and save the firewall service.xml file.

chmod 644 /etc/vmware/firewall/service.xml
chmod +t /etc/vmware/firewall/service.xml
vi /etc/vmware/firewall/service.xml

Scroll to the very end of the file and, just above the closing </ConfigRoot> tag on the last line, press i (to insert) and add the below in.

<service id="1000">
  <id>packer-vnc-custom</id>
  <rule id="0000">
    <direction>inbound</direction>
    <protocol>tcp</protocol>
    <porttype>dst</porttype>
    <port>
      <begin>5900</begin>
      <end>6000</end>
    </port>
  </rule>
  <enabled>true</enabled>
  <required>true</required>
</service>

Press ESC and type :wq! to save and quit out of the file.

Restore the permissions of the service.xml file and restart the firewall.

chmod 444 /etc/vmware/firewall/service.xml
esxcli network firewall refresh

You can check if you successfully made the change by heading back over to the host in the web client and checking the Security Profile section. Only the first port will be shown and not the range. You can also use the below commands on the host to confirm and see the entire range of ports set.

esxcli network firewall ruleset list
esxcli network firewall ruleset rule list


These are the only three changes you really need to make to an ESXi host for a Packer build to work successfully. I’ve tried to go into a little detail of each step to provide an explanation of what each change is doing. In reality it should only take a few minutes to implement.
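For context, these prerequisites line up with settings in a Packer vmware-iso builder template. A minimal fragment might look like the below; the host, credentials, and datastore values are placeholders for your own environment:

```json
{
  "builders": [
    {
      "type": "vmware-iso",
      "remote_type": "esxi",
      "remote_host": "esxi-01.lab.local",
      "remote_username": "root",
      "remote_password": "your-password-here",
      "remote_datastore": "datastore1",
      "vnc_disable_password": true,
      "vnc_port_min": 5900,
      "vnc_port_max": 6000
    }
  ]
}
```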

In some secure environments I've seen SSH set with a timeout. If you notice SSH disabled after you've enabled it, you'll need to remove the timeout, as SSH needs to stay enabled. You'll also want to confirm that no external firewalls are blocking access to the VNC port range from Packer.

In future posts I’ll go into detail around the prerequisites in configuring a Windows Server to run Packer and importing / exporting OVF images.

VMware Cloud on AWS Management Exam 2019


It's been a busy few weeks for me. Earlier this month I wrote about sitting the VMware Cloud Provider Specialist Exam, and continuing on from that I decided to pursue the VMware Cloud on AWS Management Exam.

As I previously discussed with the Cloud Provider exam, it falls into a new collection of exams from VMware that are aimed at providing skills and achievements rather than certifications. Now if you're a little confused, let me clarify it a little more. Unlike the Cloud Provider exam, which requires you to hold a VCP and provides you a badge denoting you as a Specialist, the VMware Cloud on AWS exam has no prerequisites and gives you a Skill VMware / Acclaim badge. All straightforward, right?!?! If you're still confused, don't worry about it for now; many people are.

Covering off some of the fundamentals of this exam. It's a non-proctored, web-based exam, meaning you can sit it whenever and wherever you like. You have 30 questions and 45 minutes in which to complete them. So while there aren't many questions, you have only a minute and a half on average to answer each one.

Being honest, it's a fairly basic exam compared to other VMware exams available. You're not going to be overly challenged over the 45 minutes. We do have to put this exam into context a little here though. As I mentioned above, this exam is classed as a Skill. It's not a certification, so the question count and difficulty of those questions kind of reflect that. I would rate this Skill exam a little below the level of a Specialist exam like the Cloud Provider one I took recently.

If you look at what it's trying to achieve as an exam, it does hit the mark. Prior to studying and sitting this exam I really knew little about VMware Cloud on AWS. I had been to many sessions and presentations on VMC over the last year or so. All the sessions I've seen did a great job of explaining what it is, but I still really didn't know how to use it or all the little intricate things it was capable of. Having now studied and taken the exam, I have a much more thorough understanding not just of the product but of how it's actually used and managed.

The types of questions you will see in the exam can be broken up into two basic categories: simple high-level questions on what VMware Cloud on AWS can do and what those services are, then the slightly more technical, but still relatively simple, questions on how to actually perform a task.

My study consisted of the VMware Cloud on AWS: Deploy and Manage three-day course. It's a paid course which you can do in the classroom or on-demand, the latter of which I did. This was the bulk of my study, which I crammed over three nights after work. The course covers 95% if not 100% of what is in the exam. I supplemented this with a very short demo of the VMware Cloud on AWS: Getting Started Hands-on Lab and briefly looked at the VMware Cloud on AWS Sizer and TCO site and the VMware Cloud on AWS FAQs.

Final Thoughts:
While far from being a deep technical exam, it does a decent job of testing your knowledge of the product and validating those skills. Certainly from my viewpoint it encouraged me to actually spend some time studying VMware Cloud on AWS, which I had been otherwise avoiding until now. Don't expect to become a guru on the product afterwards, but take the exam for what it is and learn something new if you haven't delved into it until now.

VMware Cloud Provider Specialist Exam 2019

After many months of procrastinating I finally decided to sit the VMware Cloud Provider Specialist Exam (5V0-32.19). The exam was released at the end of August 2018, so it’s been available for quite a few months now. This comes after a long wait from the vCloud community asking for a specialist / dedicated exam around vCloud Director and its product suite.

The first thing to note with this exam is that it’s not a certification but rather falls under a new collection of exams from VMware that are better represented by Skills and Achievements and are acknowledged through VMware / Acclaim Badges (as shown by the one above).

The exam is non-proctored and web-based, meaning, like me, you can take the exam first thing in the morning before starting work. This is a format first released by VMware with their VMware Certified Associate exams a number of years back. The exam is 40 questions sat over 60 minutes with the standard 300 passing score. It is predominantly focused on vCloud Director but also covers numerous other products in the vCloud Suite and the Cloud Provider program.

It's a relatively solid exam, sitting, I feel, in between an Associate and Professional certification in terms of difficulty. Having used vCloud Director and its various suite of tools for quite a few years now, I took this exam cold with no additional study. I managed to answer the 40 questions in a little over 30 minutes and then spent 10 minutes reviewing about a dozen questions I was a little uncertain on. Generally speaking, with these kinds of exams, you either know or don't know the answer. So trust your gut instinct and put down the first answer that comes to mind. Then flag it for review if you are truly uncertain.

So what should you do if you want to take and pass this exam? It's a little tricky for me to definitively recommend study material as I relied on my previously gained knowledge of vCloud Director and its various product line. I would certainly say this is an exam for someone that administers and engineers vCloud Director solutions. That's generally going to be someone in the Service Provider space. If you don't use vCloud Director I would question the real benefit you would gain from this exam, with the exception of forcing you to study up on the various products that go into it. If you're still set on this exam and don't have access to vCD, your best bet would be the Hands-on Lab HOL-1983-01-HBD: VMware Cloud Provider Program: vCloud Director for Service Providers.

There is no formal Blueprint that I’m aware of but there is an Exam Preparation Guide PDF for the exam on the VMware Certification site. It has quite a lot of Sections and Objectives to work through and a huge amount of reference material. This could be quite a challenge for someone new to vCloud Director to work through.

Generally speaking though, you will need to know vCloud Director. It's the core focus of this exam. The exam is also based on vCD 9.1. This is extremely important to know. For example, things like supported databases have changed in recent releases of vCD, which could lead you to an incorrect assumption for the answer. While it's unlikely you'll be asked to specifically do something around point and clicks, you will more likely need to understand all the different terminologies and constructs used in vCD and how they relate back to vSphere.

You should understand the concepts and components behind vCloud Extender: what it is, what it does, and how you might use it. The same goes for vCloud Usage Meter and its newer SaaS offering, Usage Insight. While I don’t specifically recall any, you may see some questions around vCloud Availability too.

You’ll also probably see a few questions around the newer products Cloud Provider Pod and Cloud Provider Hub. Very few people will have hands-on experience with these. Cloud Provider Pod is basically an orchestration platform that stands up an entire vCloud Director stack from bare metal. I’d recommend watching the VMworld presentation Introducing VMware Cloud Provider Pod, presented by Wade Holmes, which should give you all the high-level information you need on it.

Final Thoughts:
As mentioned above, this is a solid exam. It covers quite a lot of different products in VMware’s Cloud Provider Program / vCloud suite and is ideally suited to Service Providers using vCD. vCloud Director is a very intricate product with many external dependencies, and the exam is a great way to validate and acknowledge the skills you have acquired with vCD and its associated products.

Error Setting Timezone on vCloud Director 9.5 Appliance

vCloud Director 9.5 is VMware’s first attempt at an appliance for vCloud Director. It’s built upon VMware’s Photon OS 2.0. The appliance does a couple of great things. It’s provided as a Linux appliance pre-configured with all the required dependencies for vCloud Director, with the vCD binaries already installed. It also comes as an OVA deployment, allowing you to easily enter all the required parameters to simplify the deployment.

Unfortunately the appliance isn’t perfect and has a few bugs in it.  The first of which you’ll come across very soon after deployment when you attempt to set the timezone from the console.

When you open the console for the first time you’ll see a familiar-looking console menu where you can log in or Set Timezone. When you attempt to set the timezone, an error briefly flashes up on the screen before you’re taken back to the console menu.

/usr/bin/tzselect: line 180: /usr/share/zoneinfo/zone1970.tab: No such file or directory
/usr/bin/tzselect: time zone files are not set up correctly

There is no obvious way to correct this issue until a patch is released.  The timezone, though, can still be set via the CLI with the following steps.

Log in to the CLI and type:

ls /usr/share/zoneinfo/

Find your nearest region and perform another ls on that folder.  If your region doesn’t exist you can perform an ls on Etc to select a specific GMT zone.

In my example I chose Australia.

ls /usr/share/zoneinfo/Australia/

Inside this directory find your nearest State or City.

Use the VAMI set timezone command to set this region.  For example

[email protected] [ ~ ]# /opt/vmware/share/vami/vami_set_timezone_cmd Australia/Melbourne
Timezone settings updated

Exit out of the CLI to return to the console menu.  Your timezone should now be set.
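The steps above can be wrapped in a small guard so the zone name is sanity-checked before the VAMI tool is called. This is only a sketch: the set_tz helper is hypothetical and it echoes the command rather than executing it; only the vami_set_timezone_cmd path comes from the appliance.

```shell
#!/bin/sh
# Sketch: set_tz is a hypothetical helper, not part of the appliance.
set_tz() {
  zone="$1"
  # Expect the Region/City form, e.g. Australia/Melbourne.
  case "$zone" in
    */*) : ;;
    *) echo "expected Region/City, got: $zone"; return 1 ;;
  esac
  # On the appliance you would execute this; echoed here for safety.
  echo "would run: /opt/vmware/share/vami/vami_set_timezone_cmd $zone"
}
set_tz "Australia/Melbourne"
```

On a real appliance you could also first confirm the zone file exists with `ls /usr/share/zoneinfo/Australia/` as shown above.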

Quick Fix: Increase Root Partition Size in VCSA 6.7

Full disclosure, I’m stealing Anthony Spiteri’s ‘Quick Fix’ used in the title of this post.  It’s somewhat related to a post Anthony published yesterday so I hope he doesn’t mind.  Over this past week both Anthony and I upgraded our vCenters from 6.7 to 6.7 U1.  While our problems were slightly different they shared a common issue.  The root partitions on our Tiny installs of VCSA 6.7 had either run out of space or were nearly out of space, causing upgrade issues to U1.

While Anthony was adamant about finding the root cause of the issue to free up space, my solution was much simpler: just increase the space on the root volume.  I’ve seen this a number of times with Tiny installs of VCSA, so I didn’t want to bother troubleshooting a Home Lab problem through the night.

TL;DR

See bottom of post for the full description of my issue.

Issue:

The VCSA root partition is at 100% or near 100% with less than 10% free space available.

[email protected] [ ~ ]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.9G 0 4.9G 0% /dev
tmpfs 4.9G 792K 4.9G 1% /dev/shm
tmpfs 4.9G 696K 4.9G 1% /run
tmpfs 4.9G 0 4.9G 0% /sys/fs/cgroup
/dev/sda3 11G 11G 0 100% /

/dev/mapper/imagebuilder_vg-imagebuilder 9.8G 23M 9.2G 1% /storage/imagebuilder
/dev/mapper/log_vg-log 9.8G 2.3G 7.0G 25% /storage/log
[email protected] [ ~ ]#

Resolution:

Increase the size of /dev/sda3

Edit the settings of the VCSA VM and locate Hard Disk 1 (this disk represents device sda).  Increase the size of the disk from 12 GB (the default for a Tiny appliance deployment).  In the example below I have increased it to 20 GB.

Click OK and reboot the appliance.

After the reboot SSH to the VCSA and enter the shell.

There are a few different commands you can run at this point (see my full issue below), but the easiest command to run is below.

/usr/lib/applmgmt/support/scripts/autogrow.sh

As its name implies, this will auto-grow the root partition into the available space just allocated.

You should now see something like below.

[email protected] [ ~ ]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.9G 0 4.9G 0% /dev
tmpfs 4.9G 792K 4.9G 1% /dev/shm
tmpfs 4.9G 696K 4.9G 1% /run
tmpfs 4.9G 0 4.9G 0% /sys/fs/cgroup
/dev/sda3 19G 9.1G 8.5G 52% /

/dev/mapper/imagebuilder_vg-imagebuilder 9.8G 23M 9.2G 1% /storage/imagebuilder
/dev/mapper/log_vg-log 9.8G 2.3G 7.0G 25% /storage/log
[email protected] [ ~ ]#

The root partition should now have sufficient space for normal operation or upgrades.
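As a quick pre-flight check before an upgrade, the Use% figure from df can be scripted. A minimal sketch: the check_root_usage helper and the 90% threshold are my own assumptions, not anything from VMware.

```shell
#!/bin/sh
# Sketch: check_root_usage is hypothetical. Feed it the root Use% figure,
# e.g. obtained on the appliance with: df --output=pcent / | tail -1 | tr -dc '0-9'
check_root_usage() {
  used="$1"  # percent of the root partition in use
  if [ "$used" -ge 90 ]; then
    echo "low space: enlarge Hard Disk 1, reboot, then run autogrow.sh"
  else
    echo "root partition OK at ${used}% used"
  fi
}
check_root_usage 95   # roughly the usage seen before the fix
check_root_usage 52   # the usage after growing the disk
```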
Full Issue:

During an attempt to update my VCSA from version 6.7 to 6.7 U1 I encountered the following error in the VAMI.

Update installation failed. vCenter is non-operational

Attempting to log in to the appliance may also cause intermittent issues.

Error in method invocation Field messages missing from Structure com.vmware.vapi.std.errors.service_unavailable

Checking the space usage on my appliance showed that I only had 5% free space, which appeared not to be sufficient for an appliance upgrade.

[email protected] [ ~ ]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.9G 0 4.9G 0% /dev
tmpfs 4.9G 792K 4.9G 1% /dev/shm
tmpfs 4.9G 696K 4.9G 1% /run
tmpfs 4.9G 0 4.9G 0% /sys/fs/cgroup
/dev/sda3 11G 9.1G 900M 95% /

/dev/mapper/imagebuilder_vg-imagebuilder 9.8G 23M 9.2G 1% /storage/imagebuilder
/dev/mapper/log_vg-log 9.8G 2.3G 7.0G 25% /storage/log
[email protected] [ ~ ]#

After a little bit of troubleshooting to locate which vCenter Hard Disk /dev/sda3 lived on, I identified that Hard Disk 1 was the correct vCenter disk.  /dev/sda has multiple partitions, and its actual size is 12 GB for a Tiny VCSA install.

To correct the issue I increased the size of this disk to 20 GB, rebooted the appliance, and ran the below command.

/usr/lib/applmgmt/support/scripts/lvm_cfg.sh storage lvm autogrow

This command auto-grew the root partition into the available space on the disk I had just allocated.  I later found out that autogrow.sh is a much simpler command to run, and that lvm_cfg.sh was no longer available after upgrading to VCSA 6.7 U1.
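Since the available script changed between builds, a small fallback helps when you are not sure which build you are on. The grow_root wrapper below is my own sketch, and it echoes rather than executes; only the two script paths and the lvm_cfg.sh arguments come from the appliance.

```shell
#!/bin/sh
# Sketch: grow_root is a hypothetical wrapper. Try autogrow.sh first
# (newer builds), fall back to lvm_cfg.sh (older 6.7 builds).
grow_root() {
  if [ -x /usr/lib/applmgmt/support/scripts/autogrow.sh ]; then
    echo "would run: /usr/lib/applmgmt/support/scripts/autogrow.sh"
  elif [ -x /usr/lib/applmgmt/support/scripts/lvm_cfg.sh ]; then
    echo "would run: /usr/lib/applmgmt/support/scripts/lvm_cfg.sh storage lvm autogrow"
  else
    echo "no grow script found on this appliance"
    return 1
  fi
}
grow_root || true
```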