Deploying HCX Cloud Manager – Part 1

HCX is a magical tool. It allows "Modernization of Mission-Critical Application Infrastructure with minimal operational overhead, without requiring a retrofit of your legacy infrastructure". It's a mouthful, right? That's straight from the HCX product pages. My explanation, in clearer terms, is "it allows you to connect one vCenter in one SSO domain to another vCenter in another SSO domain and migrate VMs". That's about as simply as I can put it. Yes, it can do other stuff, but in a nutshell this is its biggest use case.

It's this exact scenario that I want to cover in the next few posts: deploying and configuring HCX in a datacenter to migrate between two on-prem vCenter Servers, going from a legacy vCenter environment to a new SDDC environment with NSX. I plan on covering just the fundamentals to get HCX up and going as quickly and easily as possible.

This guide is going to be long, there’s no way around that. We will initially cover the installation in the destination / target site using the Cloud version of HCX. Along the way I’ll call out anything notable. I recommend going through the pre-installation checklists and covering off the requirements prior to starting.

The first thing we want to do is download the HCX Cloud installer from the My VMware portal and start an OVF deployment. I'm basing this deployment on the R133 release. I recommend using the old Flex Client over the new H5 client, which will no doubt fail to deploy the OVF.

Give the HCX Cloud manager a name and location to deploy.

Select a Datacenter and Cluster.

The OVF Template will validate and present its details.

Scroll down and accept the license agreement.

Select your storage and policy.

Select your network. It's critical you pick a network that has good connectivity to the rest of your environment, to save yourself a lot of grief later on. HCX connects to many different services. Ideally it sits on the same management network as vCenter and the rest of your services. There's a good London Underground style PDF map of all the services and ports you will require. If you're using firewall rules between your services and network segments, this will no doubt be one of the trickier tasks.
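If you want to sanity check that connectivity before going any further, a few quick TCP tests from your management network can save time later. The below is only a rough sketch: the hostnames are placeholders and the port list is an example (443 and 9443), not the full list from the port diagram.

# Rough connectivity check from a Windows jump host. Hostnames are placeholders and
# the port list is only an example; use the full HCX port diagram for your design.
$targets = @(
    @{ Host = 'vcenter.lab.local';     Ports = 443 },
    @{ Host = 'nsxmanager.lab.local';  Ports = 443 },
    @{ Host = 'hcx-manager.lab.local'; Ports = 443, 9443 }
)
foreach ($target in $targets) {
    foreach ($port in $target.Ports) {
        $result = Test-NetConnection -ComputerName $target.Host -Port $port -WarningAction SilentlyContinue
        '{0}:{1} reachable: {2}' -f $target.Host, $port, $result.TcpTestSucceeded
    }
}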

Fill out all the customisations. Passwords, IP addresses, DNS, NTP, etc.

Review your settings and deploy the appliance.
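As a side note, the same OVF deployment can be scripted with PowerCLI if the Flex Client isn't an option. The outline below is only a sketch: the paths and inventory names are placeholders, and the appliance's OVF property keys (passwords, IPs, network mappings) should be read from its own OVF configuration rather than taken from here.

# Hedged PowerCLI outline of the HCX OVA deployment. Names and paths are placeholders.
Connect-VIServer -Server 'vcenter.lab.local'
$ova       = 'C:\Install\VMware-HCX-Installer.ova'   # placeholder file name
$vmHost    = Get-Cluster 'SDDC-Cluster' | Get-VMHost | Select-Object -First 1
$datastore = Get-Datastore 'vsanDatastore'
$ovfConfig = Get-OvfConfiguration -Ovf $ova

# List the OVF properties (network mapping, passwords, IP settings) so they can be
# filled in; the actual keys depend on the appliance and aren't reproduced here.
$ovfConfig.ToHashTable()

Import-VApp -Source $ova -OvfConfiguration $ovfConfig -Name 'hcx-cloud-manager' `
    -VMHost $vmHost -Datastore $datastore -DiskStorageFormat Thin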

Once the OVF is deployed, power on the VM and browse to its URL on port 9443 to start the configuration wizard (https://hcx-manager:9443). Login with admin and the password you defined during deployment.

Once you log in you are given the option to activate the HCX instance. You can skip this step for now and activate later. Notice that the type is Cloud. Cloud is what the other HCX types (Cloud & Enterprise) connect to.

Define the location of your datacenter. What you select here really doesn't matter.

Give the manager a system name (hostname).

Select the type of instance to configure. vSphere in this case.

Next we configure the vCenter and NSX details that HCX will register with. This is the vCenter and NSX Manager in the destination / target site we are deploying to. HCX Cloud Manager requires NSX; HCX Enterprise does not. Enterprise can only pair to HCX Cloud, but Cloud can pair to other HCX Cloud Managers. This is an important point if you have more than two vCenters and you want to create multi-site pair relationships.

Enter in your PSC details. Internal, External, whatever the flavor of the month was when you built vCenter.


Add in the public URL of this HCX Cloud Manager. Most likely this is just the same as your internal DNS name you gave this server.

Finally review your settings and click restart. In a few minutes it will auto login to the management interface.

We're on the home stretch now. Just a few final pages to check. Click on the Administration tab and, if you're using a proxy, enter it in. HCX Manager needs to access two external URLs (connect.hcx.vmware.com and hybridity-depot.vmware.com). I also recommend putting your internal networks in the Exclusions.

The last thing we need to update on this tab is the Trusted CA Certificates. If you're using self-signed certificates for HCX you need to add them here. You won't have built the second HCX Manager yet, so when you do, come back here and add it in. I also like to add the vCenter Servers and NSX Managers I'll be using. I do this really just to sanity check that HCX Manager still has connectivity to these servers after I add a proxy server.
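If you need to grab a certificate to paste in here, a quick way is to pull it straight off the remote endpoint. The helper below is a generic sketch of that idea (the server name is a placeholder): it reads whatever certificate is presented on 443 and prints it in PEM format.

# Hypothetical helper: fetch the PEM of a remote endpoint's certificate so it can be
# pasted into the Trusted CA Certificates page. Assumes TCP 443 is reachable.
function Get-RemoteCertificatePem {
    param([string]$ComputerName, [int]$Port = 443)
    $client = [System.Net.Sockets.TcpClient]::new($ComputerName, $Port)
    try {
        # Accept any certificate; we only want to read it, not validate it.
        $callback = [System.Net.Security.RemoteCertificateValidationCallback]{ $true }
        $ssl = [System.Net.Security.SslStream]::new($client.GetStream(), $false, $callback)
        $ssl.AuthenticateAsClient($ComputerName)
        $cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($ssl.RemoteCertificate)
        "-----BEGIN CERTIFICATE-----`n" +
            [Convert]::ToBase64String($cert.RawData, 'InsertLineBreaks') +
            "`n-----END CERTIFICATE-----"
    }
    finally {
        $client.Dispose()
    }
}

Get-RemoteCertificatePem -ComputerName 'vcenter.lab.local'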

Now head over to the Configuration Tab. You can now enter in your HCX Advanced Key under Licensing. In my case I’m using my NSX key. You may also have an HCX Enterprise key for advanced features but I won’t be discussing any of them so we can ignore it for now.

Finally update the vSphere Role Mapping. This is granting access to log into the User Interface of HCX Manager.

Head over to the Appliance Summary Tab and give the appliance one last reboot and we’re done with the appliance configuration.

Super simple right? Now do it all again for the second HCX Manager in your second (source) datacenter! You have two options at this stage: use the same HCX Cloud Manager installer, or log into the HCX Manager user interface (drop the 9443 from the URL) and under System Updates download the Enterprise version. Again, the difference being that Enterprise doesn't ask for or use NSX, at least in the use case I'm covering.

In the next part I will cover creating Site Pairs and Interconnects between these two HCX Managers. This will all be performed from the source site HCX Manager.

VMware HCX Error: Failed to enable replication while exchanging thumbprints

I was recently helping a customer configure VMware HCX to migrate from their legacy vSphere environment to a new VxRail SDDC running VMware VCF. We had carefully followed all the prerequisites in configuring firewall rules, proxies, and routing. We were able to successfully initiate vMotions and Cold Migrations but could not achieve a successful Bulk Migration.

Numerous days of testing ports, connectivity between HCX Interconnects, ESXi hosts, and Replication networks showed nothing unusual. Successful vMotions and Cold Migrations but consistent failures with Bulk Migrations.

Migration failed
Failed to enable replication while exchanging thumbprints. Exchange thumbprint failed for serviceMeshIds [“servicemesh-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx”]

The issue came down to the configuration of VMkernel adapter vmk0. VxRail configures vmk0 as a host discovery network using only IPv6 addressing. VxRail enables the ESXi Management service on this VMkernel adapter, but also provisions vmk2 as the main Management network, where you will most likely be configuring an IPv4 address and connecting to your ESXi hosts.

HCX currently does not support IPv6 (as of R133). Despite vmk2 having an IPv4 address and having the Management and Replication services enabled, HCX was still looking at vmk0, seeing only an IPv6 address, and failing Bulk Migrations.

The resolution turns out to be a relatively easy hack. Placing a dummy IPv4 address in addition to the IPv6 host discovery address on vmk0 is enough to convince HCX to perform a Bulk Migration.
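For reference, adding the dummy IPv4 address can be done with PowerCLI along the lines below. The host name, IP, and mask are placeholders, and it's worth confirming that touching vmk0 won't upset the VxRail IPv6 discovery configuration before running it.

# Add a dummy IPv4 address to vmk0 alongside the existing IPv6 discovery address.
# Host name, IP, and subnet mask are placeholders; validate against your VxRail setup first.
$vmk0 = Get-VMHostNetworkAdapter -VMHost (Get-VMHost 'esxi01.lab.local') -Name vmk0
$vmk0 | Set-VMHostNetworkAdapter -IP '192.168.255.10' -SubnetMask '255.255.255.0' -Confirm:$false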

With vmk0 now showing an IPv4 address as the default address instead of the IPv6 discovery address, Bulk Migrations can be performed successfully.

This is no doubt an edge case with VxRail and HCX. Most customers will be using IPv4 addressing on vmk0. But in the rare situations where you're using a VMkernel interface other than vmk0 for Management, or only using IPv6 on vmk0, this will hopefully resolve the issue, at least temporarily, for HCX migrations.

PowerShell 7 – Ternary Operator

In addition to the pipeline chain operators, which I discussed in my previous post, PowerShell 7 introduces another long-awaited addition: a ternary operator. This operator can be used in place of If and Else conditional statements. The ternary operator follows the format of the C# language and takes three arguments.

<condition> ? <if-true> : <if-false>

Using a ternary operator can be seen as a shorthand way of writing an if-else statement in PowerShell.

if (<condition>) {
    <do something>
}
else {
    <do something else>
}

Ternary operators are an interesting addition to PowerShell 7. The ternary operator is basically a simplified if-else statement that you can write on one line. The first argument is the condition to evaluate. The second argument is the True evaluation response, and the third is the False evaluation response.

$total = 10 * 5 -eq 20 ? 'yes' : 'no'

$os = $IsWindows ? 'This is a Windows OS' : 'Not a Windows OS'

$ConfirmPreference -ne 'High' ? ($ConfirmPreference = 'High') : 'Already High'

(Test-Path $PROFILE) ? (Write-Output 'Profile exists') : (New-Item -ItemType File -Path $PROFILE -Force)

The condition argument is evaluated to a Boolean True / False which determines which branch is evaluated and run. If you want to run commands you'll need to wrap them in parentheses.

Nesting is possible with the ternary operator, though nested ternary operators should be kept simple. At first glance it can look a little cryptic, but once you understand how it works it starts to make sense.

 $IsMacOS ? 'You are on a Mac' : $IsLinux ? 'You are on Linux' : 'You must be on Windows'

If we write the above out in the traditional elseif way it starts to make more sense.

if ($IsMacOS) {
    'You are on a Mac'
} elseif ($IsLinux) {
    'You are on Linux'
} else {
    'You must be on Windows'
}

If you have used the Where-Object alias in the past you will know that it's referenced as a '?'. Only in very rare situations should you find a conflict or undesirable behavior between the ternary '?' and the Where-Object alias '?'. What will be more important is being able to correctly identify code using a ternary '?' versus a Where-Object alias '?'.
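A quick side-by-side makes the difference easy to spot: the Where-Object alias sits in a pipeline and is followed by a script block, while the ternary sits in an expression between a condition and a colon.

# Where-Object alias: '?' follows a pipe and takes a script block
1..10 | ? { $_ -gt 5 }

# Ternary operator: '?' follows a condition and pairs with a ':'
(1..10).Count -gt 5 ? 'More than five' : 'Five or fewer'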

Ternary operators in PowerShell aren't without a little controversy. It's a feature that has been requested for many years; Jeffrey Snover spoke about wanting it way back in PowerShell v1. PowerShell, though, has had the ability to do something very similar on one line for some time.

# Pre ternary way
$var = if ($x) { $y } else { $z } 

# Ternary way
$var = $x ? $y : $z

It's arguable which format is easier to understand, the traditional PowerShell way or the new ternary way. While PowerShell users may find the former easier, programmers coming from languages like C# will no doubt be comfortable using ternary operators in PowerShell. In any case I'd expect this to be a niche operator that's rarely used, but time will tell I guess.

References
Ternary Operator RFC

PowerShell 7 – Pipeline Chain Operators

The GA release of PowerShell 7 is just around the corner. With this release come several new features that continue to build upon the previous versions. Among them are two new operators, && and ||, referred to as pipeline chain operators. They are intended to work like AND-OR lists in POSIX-like shells (and, as I was surprised to learn, like the conditional processing symbols in the Windows Command shell).

# Clone a Git repo and if successful display its README.md file
C:> $repo = "https://github.com/originaluko/haveibeenpwned.git"
C:> git clone $repo && Get-Content haveibeenpwned/README.md

Use of pipeline chain operators in PowerShell is fairly straightforward. The left-hand side of the pipeline will always run when using either of the operators. The && operator will only execute the right-hand side if the left side of the pipeline executed successfully. Conversely, the || operator will only execute the right side of the pipeline if the left side fails.

C:> Write-Output 'Left' && Write-Output 'Right'
Left
Right

C:> Write-Output 'Left' || Write-Output 'Right'
Left

Previously, to achieve a similar outcome, you might have created a script block using an If statement.

C:> Write-Output 'Left'; if ($?) {Write-Output 'Right'}
Left
Right

You can also place multiple operators in the one pipeline. These operators are left-associative, meaning they are processed from left to right.

C:> Get-ChildItem C:\temp\test.txt || Write-Error 'File does not exist' && Get-Content c:\temp\test.txt

In the above example || is processed before &&.  So Get-ChildItem and Write-Error can be seen as grouped first and processed before Get-Content.

[Get-ChildItem C:\temp\test.txt || Write-Error 'File does not exist'] && Get-Content c:\temp\test.txt

To achieve something similar without pipeline chain operators, and this is where things get a little more interesting with the additional work involved, you might perform the below. 

C:> Get-ChildItem C:\temp\test.txt ; if (-not $?) {Write-Error "File does not exist"} ; if ($?) {Get-Content c:\temp\test.txt}

Care should be taken with commands that return a True / False Boolean response. Their output should not be confused with successful or unsuccessful execution. Take something like Test-Path against a file that does not exist.

C:> Test-Path C:\temp\test.txt && Write-Output 'File exists'
False
File exists

C:> Test-Path C:\temp\test.txt || Write-Output 'File does not exist'
False

In both cases the file does not exist and Test-Path outputs False, yet the command itself ran successfully. Because of that, && still executes the right-hand side while || does not. Pipeline chain operators work by checking the value of the $? variable. This is an automatic variable which is set to either True or False based on the success of the last command.
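You can see this for yourself by checking $? straight after a command whose output is False but which didn't actually error.

C:> Test-Path C:\temp\test.txt   # file doesn't exist, so the output is False
False
C:> $?                           # but the command itself ran without error
True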

Pipeline chain operators can be used with Try / Catch blocks. It should be noted that script-terminating errors take precedence over chaining.

try
{
    $(throw) || Write-Output 'Failed'
}
catch
{
    Write-Output "Caught: "
}

In the above, while not elegant, a script-terminating error is caught and Caught: ScriptHalted is printed.

try
{
    Get-ChildItem C:\nonExistentFile.txt || Write-Output 'No File'
}
catch
{
    Write-Output "Caught: $_"
}

In the above example a non-terminating error occurs when no file is found. A non-terminating error doesn't trigger the catch block, but it does set $? to False, so || runs and 'No File' is printed.

One of the goals of pipeline chain operators is to be able to control your pipeline and actions based on command outcome rather than command output. This is a slightly different approach than we might be used to in PowerShell. Rather than having to validate output and then take an action, we can immediately take that action based on the outcome of the previous command.

It's going to be interesting to see how widely accepted these new operators become. Many of us have made do without them since the beginning of PowerShell, though we now have access to something that has been available in many other shells, like bash and cmd.exe, for a long time.

The pipeline chain operator RFC has a lot of good information and further explanation on its use.

References
Pipeline Chain Operators

Windows Server 2019 support on ESXi and vCenter

I've been asked by a few customers recently if Windows Server 2019 is supported on ESXi, as they can't seem to find it in the list of Guest OS options in vCenter. Considering that Windows Server 2019 was released back in October 2018, it is quite surprising that on the latest version of ESXi and vCenter (currently 6.7 U3) Windows Server 2019 is still not listed under Guest OS.

VMware does have a KB article on this, 59222. While it is lacking detailed information on why, it does state that you can select Windows Server 2016 instead.

There is also a link to the VMware Compatibility Guide. Here you will be able to select Windows Server 2019 and list all supported ESXi releases.

You'll see that all releases of ESXi 6.0, 6.5, and 6.7 are listed under Supported Releases on the Compatibility Guide.

It is also worth noting the VMware Guest OS Install Guide blog from the Guest OS team. This blog lists operating systems as they become supported. Also pay attention to the support level; VMware has different levels of support, from Tech Preview to Full Support. In the case of Windows Server 2019, it reached Full Support back in November 2018.

So as far as installing Windows Server 2019 and selecting a Guest OS of Windows Server 2016 goes, you should be fine and fully supported.
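The same applies if you're creating the VM with PowerCLI rather than the vSphere Client: pick the Windows Server 2016 guest OS identifier. A rough sketch is below; the VM, cluster, and datastore names are placeholders, and I believe windows9Server64Guest is the Server 2016 identifier, but double-check it in your environment.

# Create a VM for Windows Server 2019 using the Windows Server 2016 guest OS ID.
# VM, cluster, and datastore names are placeholders.
New-VM -Name 'win2019-01' -ResourcePool (Get-Cluster 'Prod-Cluster') -Datastore 'vsanDatastore' `
    -NumCpu 4 -MemoryGB 16 -DiskGB 80 -GuestId windows9Server64Guest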

vRealize Easy Installer Walk-through Guide

With the recent release of VMware vRealize Suite Lifecycle Manager 8.0 and vRealize Automation, also comes a new deployment tool called vRealize Easy Installer.  The Easy Installer is a tool that streamlines and helps you install vRealize Suite Lifecycle Manager, VMware Identity Manager, and optionally, vRealize Automation via a simple and clean UI.  

The three packages are contained within a single ISO file called VMware vRealize Suite Lifecycle Manager 8.0.0 Easy Installer. The ISO can be found on the vRealize Suite download page in the My VMware portal. Selecting either vRealize Suite Lifecycle Manager or vRealize Automation will take you to the same 9 GB ISO download. vIDM still has its own individual download if you want/need it.

The Easy Installer is compatible with Linux, Windows, and Mac, which should make it very accessible to a large audience. I decided to give it a try and detail the process below. It's a rather simple process to follow, as long as a few prerequisites specific to the Installer are met first.

On the resource front, LCM and vIDM each require 2 vCPUs and 6 GB of memory. vRealize Automation, on the other hand, requires 8 vCPUs and 32 GB of memory for a Standard install; multiply that by three for a Clustered install. If you enable Thin Disk provisioning, a minimum of 75 GB of storage will be required. Finally, DNS records for LCM, vIDM, and optionally vRA if it is being installed, need to be created first.
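A quick way to sanity check the DNS prerequisite before launching the installer is to resolve each record from the client you'll be installing from. The hostnames below are placeholders for the records you created.

# Hypothetical FQDNs; replace with the records created for LCM, vIDM, and vRA.
'lcm.lab.local', 'vidm.lab.local', 'vra.lab.local' | ForEach-Object {
    try {
        $null = Resolve-DnsName -Name $_ -Type A -ErrorAction Stop
        "$_ resolves OK"
    }
    catch {
        Write-Warning "No DNS record found for $_"
    }
}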

In the process below I use Windows 10 as the client I install from.

To access the installer we need to right-click the ISO file and select Mount. This will mount the ISO as a drive in Windows. We can then navigate to \vrlcm-ui-installer\win32 (if you were on Linux or Mac this path would be different) and run installer.exe to start the Installer UI.
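If you'd rather do the mount and launch from PowerShell, something like the below works on Windows 10. The ISO path is a placeholder for wherever you saved the download.

# Mount the Easy Installer ISO and launch the Windows installer UI.
# The ISO file name/path is a placeholder; adjust to your download location.
$iso    = 'C:\ISO\vra-lcm-easy-installer.iso'
$mount  = Mount-DiskImage -ImagePath $iso -PassThru
$letter = ($mount | Get-Volume).DriveLetter
Start-Process "$($letter):\vrlcm-ui-installer\win32\installer.exe"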

Step 1. Select Install
Step 2. Introduction -- Click Next
Step 3. EULA -- Accept terms and CEIP then click Next
Step 4. Appliance Deployment Target -- Enter in vCenter details and click Next
Step 5. Certificate Validation -- Accept any warnings and click Next
Step 6. Select a Location -- Select a Datacenter and click Next
Step 7. Select a Compute Resource -- Select a Cluster and click Next
Step 8. Select a Storage Location -- Select a Datastore and optionally Enable Thin Disk Mode and click Next
A warning will display if you click Next and there is insufficient disk space. You will need a minimum of 75 GB for a Thin Disk install
Step 9. Network Configuration -- Enter in global networking details for the install of all products. Optionally enter in NTP settings. Only static IP assignment is possible.
Step 10. Password Configuration -- Enter in a default root/admin password to be assigned to all products
Step 11. Lifecycle Manager Configuration -- Enter in LCM details and click Next
If a VM with the same name is found in vCenter when you click Next you will receive a warning
Step 12. Identity Manager Configuration -- Enter in the vIDM details. Optionally enable the Sync Group Member to the Directory
Do not use admin/sshuser/root when selecting a Default Configuration Admin account name.
Step 13. vRealize Automation Configuration -- Choose to install vRA 8. Standard Deployment will deploy one vRA 8 server. Cluster Deployment will deploy three. The License Key will not be validated at this stage so confirm it is correct.
Step 14. Summary -- Verify all installation parameters and click Submit
If there are any issues during installation the install will fail and you will have the option to download the logs to troubleshoot the issue. Make sure all your DNS settings are correct and the client you are installing from can validate those DNS settings.
A successful install will look similar to this

Have I Been Pwned PowerShell Module v3

Over the last few years I've written a few posts on a PowerShell module I created that allows users to talk directly to the Have I Been Pwned API service (https://haveibeenpwned.com) that Troy Hunt maintains. While those posts are a little old now, they are still a good read on what this PowerShell module is about. I encourage you to read them if you are interested (links at the bottom).

A few months back Troy made a big change to the way his API service works by requiring authorisation in the form of an API key. This broke a lot of different scripts and services the community had created that leveraged his service, including my own PowerShell module. Troy has discussed at length why he decided to take these steps, so I won't go into it here; see his post Authentication and the Have I Been Pwned API.

Shortly after this change took effect I received a number of comments from the community that my PowerShell module didn't work anymore. One or two even said that it was failing because I wasn't providing an API key with the module. So I wanted to spend a few minutes explaining some of the changes in the way the latest version of the Have I Been Pwned PowerShell module works, and what you need to do if you want to use it.

Firstly, I decided to bump the PowerShell module version from the previous v1.4.2 to v3 to match the API version used by HIBP (version 2 was a short-lived version on my GitHub page).

Now for the big breaking change. Where applicable, all the URIs in the module have been updated to the v3 API and have had a header added to include a hibp-api-key value/token. Not all URI endpoints require an API key. Generally speaking, if you want to check for a pwned email address you will need an API key.
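Under the hood that boils down to something like the below. This is a simplified sketch rather than the module's actual code; the key and email address are placeholders.

# Simplified sketch of a v3 breachedaccount call with the hibp-api-key header.
# The key value and email address are placeholders.
$headers = @{
    'hibp-api-key' = 'xxxxxxxxxxxxx'
    'User-Agent'   = 'HaveIBeenPwned-PowerShell-Module'
}
$email = 'test@example.com'
Invoke-RestMethod -Uri "https://haveibeenpwned.com/api/v3/breachedaccount/$email" -Headers $headers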

So how does this work?
The two functions that require an API key to be specified are Get-PwnedAccount and Get-PwnedPasteAccount. In the past you would have typed something like --

Get-PwnedAccount -EmailAddress [email protected]

This would have returned all the breaches this email address has been compromised in. In version 3 you now require an API key to do the same thing.

Get-PwnedAccount -EmailAddress [email protected] -apiKey "hibp-api-key"

In the above example you can input your API key directly in the command, or you could store it in a variable and reference that in the command. For example:

$myApiKey = "xxxxxxxxxxxxx"
Get-PwnedAccount -EmailAddress [email protected] -apiKey $myApiKey

If you also really wanted to, you could hard code your API key in the parameters section of these scripts. Certainly not recommended but the choice is yours.

So where do I get this API key?
To make it clear, not from this PowerShell module or from me. You will need to go to Troy Hunt’s site (https://haveibeenpwned.com/API/Key) and purchase one.

Once you do, this will be yours, or your organisation's, own personal key that you do not share. How you protect it and how you want to use it is up to you.

Where can I download the PowerShell Module?
PowerShellGallery: https://www.powershellgallery.com/packages/HaveIBeenPwned/
GitHub: https://github.com/originaluko/haveibeenpwned

Previous Posts
HaveIBeenPwned PowerShell Module
HaveIBeenPwned PowerShell Module Updates

Display Last Command in PowerShell Title Bar

Carrying on from my previous post on pimping out your PowerShell console, I started to look into ways to update the PowerShell title bar. In PowerShell there's an automatic variable called $Host where you can specify your own custom UI settings. One of the properties that can be modified is the console title bar. Now you say, "But Mark, why?". Well I say, "But why not.". And now that we have that formality out of the way…

Changing the PowerShell console title is actually very simple and can be done on one line.

$Host.UI.RawUI.WindowTitle = 'Dream bigger.'

This command can be run from the command line or placed into your startup profile.ps1 file.

I initially placed a quote into the WindowTitle property, placed the line at the bottom of my profile.ps1 file, and had my startup profile load it automatically each time I ran a new PowerShell session. However, with my recent experimentation with the PowerShell prompt and the Get-History cmdlet, I had the idea of dynamically populating the console title bar with my previous commands.

A lot of the leg work to do this is explained in my previous post, Display Execution Time In PowerShell Prompt. As such, I'm not going to delve into it too deeply here. Instead I recommend looking at that post.

To update the console title with our previous command we leverage the cmdlet Get-History (just as I used in my previous post).

$host.ui.RawUI.WindowTitle = (Get-History)[-1]

This will update our console title with our last command, but it won't continue to update after each subsequent command.

So we can take this one step further by updating the built-in PowerShell function Prompt. This function runs after each command is executed. We can modify it by copying and pasting the below code into our PowerShell session. This will take effect for our current PS session.

function Prompt {
  # Set the console title to the last command in this session's history
  $history = Get-History
  $host.UI.RawUI.WindowTitle = $history[-1]
  return " "
}

Now better yet, we can update our startup profile file. Usually this is profile.ps1 held in C:\Users\{Username}\Documents\WindowsPowerShell\ for Windows PowerShell or C:\Users\{Username}\Documents\PowerShell\ for PowerShell Core. By pasting this code into our startup profile, it will execute each time we open a new PowerShell session automatically.

So there you have it. Another pointless awesome way to pimp out your PowerShell console. Combine this with execution time in your prompt and you have the flyest console around.

Display Execution Time In PowerShell Prompt

Some time back I attended a presentation where the presenter's PowerShell console displayed their last command's execution time in the prompt. At the time I thought it was a bit of a geeky novelty thing, though recently I've had a slight change of opinion. It's become a great way to easily see the efficiency of my code.

To use a pretty crude saying, there are many ways to skin a cat. PowerShell is extremely flexible in that it allows you to perform the same task many different ways. But not all ways are equal, right?!? In the two examples below I perform a fairly basic count from 1 to 10 million.

$x = 0
foreach ($i in 1..10000000) {
    $x = $x + 1
}
$x

In the above example the code runs in “around” 9834 milliseconds (9.8 seconds).

class MyClass {
    static [int] CountUp() {
        $x = 0
        foreach ($i in 1..10000000) {
            $x = $x + 1
        }
        return $x
    }
}

[MyClass]::CountUp()

In this second example the code runs in 552 milliseconds (~0.5 seconds). A huge difference.

Being able to quickly and easily see execution time in the prompt can be quite helpful in determining if you’re coding as efficiently as you could be. It’s led me to trying things multiple ways before I settle. Now the actual code to display execution time is also quite easy to add into your PowerShell startup profile or to just run in your current session.

PowerShell comes with a built-in Prompt function which you can override with your own. In the below example I have created a new Prompt function which I can execute by typing Prompt after running the code in my current session.

function Prompt {
  $executionTime = ((Get-History)[-1].EndExecutionTime - (Get-History)[-1].StartExecutionTime).Totalmilliseconds
  $time = [math]::Round($executionTime,2)
  $promptString = ("$time ms | " + $(Get-Location) + ">")
  Write-Host $promptString -NoNewline -ForegroundColor cyan
  return " "
  } 

The execution time of commands is retrieved from the StartExecutionTime and EndExecutionTime properties of Get-History. I get the time of the previous command, round to two decimal places, and write that to the prompt.
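If you want to see the raw data the prompt is working with, inspect the most recent history entry directly.

# Inspect the timing properties the Prompt function uses.
(Get-History)[-1] | Select-Object CommandLine, StartExecutionTime, EndExecutionTime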

You can also take the function and place it in your PowerShell default startup profile, which executes each time you open a new PowerShell session. It does require a slight modification to the above function, which I'll discuss below. I've written a few posts on how to find and modify your default profile. If you're using Windows PowerShell you can find or add it at C:\Users\{Username}\Documents\WindowsPowerShell\profile.ps1. If you're using PowerShell Core you can find or add it at C:\Users\{Username}\Documents\PowerShell\profile.ps1.

function Prompt {
  if ((Get-History).Count -ge 1) {
    $executionTime = ((Get-History)[-1].EndExecutionTime - (Get-History)[-1].StartExecutionTime).TotalMilliseconds
    $time = [math]::Round($executionTime, 2)
    $promptString = ("$time ms | " + $(Get-Location) + ">")
    Write-Host $promptString -NoNewline -ForegroundColor cyan
    return " "
  } else {
    $promptString = ("0 ms | " + $(Get-Location) + ">")
    Write-Host $promptString -NoNewline -ForegroundColor cyan
    return " "
  }
}

In the code above I've wrapped it in an If / Else statement block. The logic here is that we use Get-History to get the execution time, but when a PowerShell session is first opened there is nothing in Get-History, which would cause the function to fail and fall back to a default vanilla prompt. Not ideal. So we create an else block and generate our own default prompt when no history of previous commands exists, as is the case when initially opening a new PowerShell session.

So while many of you may also just find this a geeky novelty thing, it can be a good reminder to try and keep your code and scripting as efficient as possible. And hey, at the very least you can impress your friends with a cool PowerShell prompt.

HaveIBeenPwned PowerShell Module Updates

Back in 2017 I wrote a post on a PowerShell module I created that consumes Troy Hunt's Have I Been Pwned API service. I won't go into too much detail about the service here. Plenty of people already have, and since that time HaveIBeenPwned has exploded in popularity; most of us know what it is.

In that post I briefly discussed what the module does and how you can begin to use some of its core functions. Since that time Troy has made a few changes to the API service, some small and some large, which I've slowly integrated into the PowerShell module. Things like UserAgent strings being a requirement and K-anonymity for password checks.

The community has also played a part in shaping the PowerShell module over the last year. I’ve had a lot of feedback and even some contributions through the GitHub project. It’s been pretty cool to receive PRs via my GitHub page for improvements to the module.

I thought now was a good opportunity for a follow-up post to talk about some of the changes and updates that have been made over the last year.

Probably the biggest change has been K-anonymity in Get-PwnedPassword. Originally you would send your password over the wire in the body of an HTTPS request. With K-anonymity, Get-PwnedPassword will now SHA-1 hash your password locally first and only ever send the first 5 characters of the hash to the HaveIBeenPwned API. It's a much safer way of checking passwords, which hopefully will lead to more people accepting and trying this method.

PS F:\Code> Get-PwnedPassword -Password monkey
AB87D24BDC7452E55738DEB5F868E1F16DEA5ACE
WARNING: Password pwned 980209 times!
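For the curious, the K-anonymity flow looks roughly like this. It's a minimal sketch rather than the module's actual code, and the password is just an example.

# Minimal sketch of the K-anonymity check against the Pwned Passwords range API.
$password = 'monkey'
$sha1   = [System.Security.Cryptography.SHA1]::Create()
$hash   = [System.BitConverter]::ToString($sha1.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($password))) -replace '-', ''
$prefix = $hash.Substring(0, 5)   # only these 5 characters leave your machine
$suffix = $hash.Substring(5)
$response = Invoke-RestMethod -Uri "https://api.pwnedpasswords.com/range/$prefix"
$match = $response -split "`r`n" | Where-Object { $_ -like "$suffix*" }
if ($match) { "Password pwned $(($match -split ':')[1]) times!" } else { 'Password not found' }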

I've attempted to make the module and all functions as PowerShell Core compliant as I can. I say attempted because, as much of a fan of PowerShell Core as I am, I keep finding differences in the way Core works. I've had to rewrite all the error handling to better catch 404 responses, a 404 Not Found response actually being a good thing, as it identifies that an email account has not been found in a breach. So whether it's Windows PowerShell or PowerShell Core, you should now be fine.

In my original post I gave an example of how you could run Get-PwnedAccount against a CSV file of email accounts and bulk check all your email addresses, something that could be helpful in a corporate environment with many hundreds of email addresses. The example I gave, though, was far from ideal.

This ability is now baked into Get-PwnedAccount and should lead to some interesting results. It's very easy to use: a simple text file saved in CSV format with each email address on a separate line / row. Incorrectly formatted email addresses will be ignored and results are displayed only for email addresses identified in breaches.

Below is an example of what the CSV file might look like

[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]

Usage is straight forward too.

PS F:\Code> Get-PwnedAccount -CSV F:\emails.csv

Description                   Email             Breach
-----------                   -----             ------
Email address found in breach [email protected]    000webhost
Email address found in breach [email protected]    17
Email address found in breach [email protected]    500px

Each time an email is found in a breach it will output a result as an object. So you may get multiple results for a single email due to different breaches it’s in.

Identifying the total emails found in breaches is simple. For example

PS F:\Code> Get-PwnedAccount -CSV F:\emails.csv |  Measure-Object | Format-Table Count

Count
-----
  413

Now you probably don't want to be hitting the API every time you want to manipulate the data. It will be slow, and rate limiting may block you. Storing the results in a variable will provide a lot more flexibility and speed. For example, finding results for just one email address:

PS F:\SkyDrive\Code> $results = Get-PwnedAccount -CSV F:\emails.csv
PS F:\SkyDrive\Code> $results | Where-Object {$_.email -eq "[email protected]"}

Or if you don’t care about the breach and just want to display a compromised email address once.

$results | Sort-Object Email -Unique | Select-Object Email

You get the point right!?!? It’s fairly flexible once you store the results in an array.

Finally one last small addition. Get-PwnedAccount will now accept an email from the pipeline. So if you have another cmdlet or script that can pull an email out, you can pipe that directly into Get-PwnedAccount to quickly check if it’s been compromised in a breach. For example checking an AD user email address could be done as follows…

PS F:\code> Get-ADUser myuser -Properties emailaddress | % emailaddress | Get-PwnedAccount

Status Description              Account Exists
------ -----------              --------------
Good   Email address not found. False

The HaveIBeenPwned PowerShell module can be downloaded from the PowerShellGallery. Always make sure you are downloading and using the latest version. Within PowerShell use Install-Module -Name HaveIBeenPwned. The project can also be found on my GitHub page where you can clone and fork the project.

I’m keen to hear suggestions and feedback. So please let me know your experiences.

Download Links
PowerShellGallery: https://www.powershellgallery.com/packages/HaveIBeenPwned/
GitHub: https://github.com/originaluko/haveibeenpwned