PowerShell 7 – Ternary Operator

In addition to the pipeline chain operators I discussed in my previous post, PowerShell 7 introduces another long-awaited addition: a ternary operator. This operator can be used in place of if-else conditional statements. The ternary operator follows the format of the C# language and takes three arguments.

<condition> ? <if-true> : <if-false>

Using a ternary operator can be seen as a shorthand way of writing an if-else statement in PowerShell.

if (<condition>) {
  <if-true>
} else {
  <if-false>
}

Ternary operators are an interesting addition to PowerShell 7. Essentially it's a simplified if-else statement that you can write on one line. The first argument is the condition to evaluate, the second is returned when the condition evaluates to True, and the third when it evaluates to False.

$total = 10 * 5 -eq 20 ? 'yes' : 'no'

$os = $IsWindows ? 'This is a Windows OS' : 'Not a Windows OS'

$ConfirmPreference -ne 'High' ? ($ConfirmPreference = 'High') : 'Already High'

(Test-Path $PROFILE) ? (Write-Output 'Profile exists') : (New-Item -ItemType File -Path $PROFILE -Force)

The condition argument is evaluated to a Boolean True / False, which determines which branch is evaluated and run. If you want to run commands in a branch you'll need to wrap them in parentheses.

Nesting is possible with the ternary operator, but nested ternary operators should be kept simple. At first glance they can look a little cryptic, but once you understand how they work they start to make sense.

 $IsMacOS ? 'You are on a Mac' : $IsLinux ? 'You are on Linux' : 'You must be on Windows'

If we write the above out in the traditional elseif way it starts to make more sense.

if ($IsMacOS) {
    'You are on a Mac'
} elseif ($IsLinux) {
    'You are on Linux'
} else {
    'You must be on Windows'
}

If you have used the Where-Object alias in the past you will know that it's referenced as '?'. Only in very rare situations should you find a conflict or undesirable behaviour between the ternary '?' and the Where-Object alias '?'. More important is being able to correctly identify code referencing a ternary '?' versus a Where-Object alias.
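To illustrate the difference, here is a quick sketch (both lines assume a PowerShell 7 session):

```powershell
# Where-Object alias '?' filters objects in a pipeline
1..5 | ? { $_ -gt 3 }                    # outputs 4 and 5

# Ternary '?' chooses between two values based on a condition
(1..5).Count -gt 3 ? 'big' : 'small'     # outputs 'big'
```

The parser distinguishes them by position: a '?' after a pipe is the alias, a '?' following an expression is the ternary operator.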

Ternary operators in PowerShell aren’t without a little controversy. The feature has been requested for many years; Jeffrey Snover spoke about wanting it way back in PowerShell v1. PowerShell, though, has long had the ability to do something very similar on one line.

# Pre ternary way
$var = if ($x) { $y } else { $z } 

# Ternary way
$var = $x ? $y : $z

It’s arguable which format is easier to understand: the traditional PowerShell way or the new ternary way. While PowerShell users may find the former easier, programmers coming from languages like C# will no doubt be comfortable using ternary operators in PowerShell. In any case I’d expect this to be a niche operator that's rarely used, but time will tell, I guess.

Ternary Operator RFC

PowerShell 7 – Pipeline Chain Operators

The GA release of PowerShell 7 is just around the corner. With this release come several new features that continue to build upon the previous versions. Among these are two new operators, && and ||, referred to as pipeline chain operators. They are intended to work like AND-OR lists in POSIX-like shells (and, to my surprise, like conditional processing symbols in the Windows Command Shell).

# Clone a Git repo and if successful display its README.md file
C:> $repo = "https://github.com/originaluko/haveibeenpwned.git"
C:> git clone $repo && Get-Content haveibeenpwned/README.md

Use of pipeline chain operators in PowerShell is fairly straightforward. The left-hand side of the pipeline will always run when using either of the operators. The && operator will only execute the right-hand side if the left side of the pipeline executed successfully. Conversely, the || operator will only execute the right-hand side if the left side failed to execute.

C:> Write-Output 'Left' && Write-Output 'Right'

C:> Write-Output 'Left' || Write-Output 'Right'

Previously, to achieve a similar outcome, you might have written an if statement checking $?.

C:> Write-Output 'Left'; if ($?) {Write-Output 'Right'}

You can also place multiple operators in one pipeline. These operators are left-associative, meaning they are processed from left to right.

C:> Get-ChildItem C:\temp\test.txt || Write-Error 'File does not exist' && Get-Content c:\temp\test.txt

In the above example || is processed before &&, so Get-ChildItem and Write-Error can be seen as grouped first and processed before Get-Content.

(Get-ChildItem C:\temp\test.txt || Write-Error 'File does not exist') && Get-Content c:\temp\test.txt

To achieve something similar without pipeline chain operators, and this is where things get a little more interesting with the additional work involved, you might perform the below. 

C:> Get-ChildItem C:\temp\test.txt ; if (-not $?) {Write-Error "File does not exist"} ; if ($?) {Get-Content c:\temp\test.txt}

Care should be taken with commands that return a True / False Boolean response. The Boolean output should not be confused with successful or unsuccessful execution. Take, for example, Test-Path.

C:> Test-Path C:\temp\test.txt && Write-Output 'File exists'
False
File exists

C:> Test-Path C:\temp\test.txt || Write-Output 'File does not exist'
False

In both cases Test-Path itself ran successfully, even though the file was not found. Because of that, && executed its right-hand side (wrongly reporting that the file exists) while || skipped its right-hand side entirely. Pipeline chain operators work by checking the value of the $? variable. This is an automatic variable which is set to either True or False based on the success of the last command, not on its output.
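A minimal way to watch $? drive the chaining (the file path below is just a placeholder assumed not to exist):

```powershell
# The failed lookup writes a (suppressed) error, so $? becomes False
Get-Item C:\no\such\file.txt -ErrorAction SilentlyContinue
$?                                                          # False

# || therefore runs its right-hand side...
Get-Item C:\no\such\file.txt -ErrorAction SilentlyContinue || Write-Output 'left failed'

# ...while && skips it entirely
Get-Item C:\no\such\file.txt -ErrorAction SilentlyContinue && Write-Output 'left succeeded'
```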

Pipeline chain operators can be used with Try / Catch blocks. It should be noted that script-terminating errors take precedence over chaining.

    try {
        $(throw) || Write-Output 'Failed'
    } catch {
        Write-Output "Caught: $_"
    }

In the above, while not elegant, the script-terminating error is caught and Caught: ScriptHalted is printed; the || chain never gets a chance to run.

    try {
        Get-ChildItem C:\nonExistingFile.txt || Write-Output 'No File'
    } catch {
        Write-Output "Caught: $_"
    }

In this example a non-terminating error occurs when no file is found, the catch block is not triggered, and ‘No File’ is printed.

One of the goals of pipeline chain operators is to let you control your pipeline and actions based on command outcome rather than command output. This is a slightly different approach than we might be used to in PowerShell. Rather than having to validate output and then take an action, we can immediately take that action based on the outcome of the previous command.

It’s going to be interesting to see how widely accepted these new operators become. Many of us have made do without them since the beginning of PowerShell. Still, we now have access to something that has been available in many other shells, like bash and cmd.exe, for a long time.

The pipeline chain operator RFC has a lot of good information and further explanation on its use.

Pipeline Chain Operators

Windows Server 2019 support on ESXi and vCenter

I’ve been asked by a few customers recently whether Windows Server 2019 is supported on ESXi, as they can’t seem to find it in the list of Guest OS options in vCenter. Considering that Windows Server 2019 was released back in October 2018, it is quite surprising that on the latest versions of ESXi and vCenter (currently 6.7 U3) Windows Server 2019 is still not listed under Guest OS.

VMware does have a KB article on this, 59222. While it lacks detailed information as to why, it does state that you can select Windows Server 2016 instead.

There is also a link to the VMware Compatibility Guide. Here you will be able to select Windows Server 2019 and list all supported ESXi releases.

You will see that all releases of ESXi 6.0, 6.5, and 6.7 are listed under Supported Releases on the Compatibility Guide.

It is worth noting the VMware blog Guest OS Install Guide from the Guest OS Team. This blog lists OSes as they become supported. Also pay attention to the support level: VMware has different levels of support, from Tech Preview to Full Support. In the case of Windows Server 2019, it reached Full Support back in November 2018.

So as far as installing Windows Server 2019 and selecting a Guest OS of Windows Server 2016 goes, you should be fine and fully supported.

vRealize Easy Installer Walk-through Guide

With the recent release of VMware vRealize Suite Lifecycle Manager 8.0 and vRealize Automation, also comes a new deployment tool called vRealize Easy Installer.  The Easy Installer is a tool that streamlines and helps you install vRealize Suite Lifecycle Manager, VMware Identity Manager, and optionally, vRealize Automation via a simple and clean UI.  

The three packages are contained within a single ISO file called VMware vRealize Suite Lifecycle Manager 8.0.0 Easy Installer. The ISO can be found on the vRealize Suite download page in the My VMware portal. Selecting either vRealize Suite Lifecycle Manager or vRealize Automation will take you to the same 9 GB ISO download. vIDM still has its own individual download if you want or need it.

The Easy Installer is compatible with Linux, Windows, and Mac, which should make it accessible to a large audience. I decided to give it a try and detail the process below. It’s a rather simple process to follow as long as a few prerequisites specific to the Installer are met first.

On the resourcing front, LCM and vIDM each require 2 vCPUs and 6 GB of memory. vRealize Automation, on the other hand, requires 8 vCPUs and 32 GB of memory for a Standard install; multiply that by three for a Clustered install. If you enable thin disk provisioning, a minimum of 75 GB of storage is required. Finally, DNS records for LCM, vIDM, and, optionally, vRA (if being installed) need to be created first.

In the process below I use Windows 10 as the client I install from.

To access the installer we need to right-click the ISO file and select Mount. This mounts the ISO as a drive in Windows. We can then navigate to \vrlcm-ui-installer\win32 (if you were on Linux or Mac this path would differ) and run installer.exe to start the Installer UI.

Step 1. Select Install
Step 2. Introduction -- Click Next
Step 3. EULA -- Accept terms and CEIP then click Next
Step 4. Appliance Deployment Target -- Enter in vCenter details and click Next
Step 5. Certificate Validation -- Accept any warnings and click Next
Step 6. Select a Location -- Select a Datacenter and click Next
Step 7. Select a Compute Resource -- Select a Cluster and click Next
Step 8. Select a Storage Location -- Select a Datastore and optionally Enable Thin Disk Mode and click Next
A warning will display if you click Next and there is insufficient disk space. You will need a minimum of 75 GB for a Thin Disk install
Step 9. Network Configuration -- Enter in global networking details for the install of all products. Optionally enter in NTP settings. Only static IP assignment is possible.
Step 10. Password Configuration -- Enter in a default root/admin password to be assigned to all products
Step 11. Lifecycle Manager Configuration -- Enter in LCM details and click Next
If a VM with the same name is found in vCenter when you click Next you will receive a warning
Step 12. Identity Manager Configuration -- Enter in the vIDM details. Optionally enable the Sync Group Member to the Directory
Do not use admin/sshuser/root when selecting a Default Configuration Admin account name.
Step 13. vRealize Automation Configuration -- Choose to install vRA 8. Standard Deployment will deploy one vRA 8 server. Cluster Deployment will deploy three. The License Key will not be validated at this stage so confirm it is correct.
Step 14. Summary -- Verify all installation parameters and click Submit
If there are any issues during installation the install will fail and you will have the option to download the logs to troubleshoot the issue. Make sure all your DNS settings are correct and the client you are installing from can validate those DNS settings.
A successful install will look similar to this

Have I Been Pwned PowerShell Module v3

Over the last few years I’ve written a few posts on a PowerShell module I created that allows users to talk directly to the Have I Been Pwned API service (https://haveibeenpwned.com) that Troy Hunt maintains. While those posts are a little old now, they are still a good read on what this PowerShell module is about. I encourage you to read them if you are interested (links at the bottom).

A few months back Troy made a big change to the way his API service works by requiring authorisation in the form of an API key. This broke a lot of the scripts and services the community had created that leveraged his service, including my own PowerShell module. Troy has discussed at length why he decided to take these steps, so I won’t go into it here; see his post Authentication and the Have I Been Pwned API.

Shortly after this change took effect I received a number of comments from the community that my PowerShell module didn’t work anymore. One or two even said that it was failing because I wasn’t providing an API key with the module. So I want to spend a few minutes explaining some of the changes in the way the latest version of the Have I Been Pwned PowerShell module works, and what you need to do if you want to use it.

Firstly, I decided to bump the PowerShell module’s version from the previous v1.4.2 straight to v3 to match the API version used by HIBP. (Version 2 was a short-lived version up on my GitHub page.)

Now for the big breaking change. Where applicable, all the URIs in the module have been updated to the v3 API and have had a header added to include a hibp-api-key value/token. Not all URI endpoints require an API key; generally speaking, if you want to check for a pwned email address you will need one.
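Under the hood the change amounts to something like the request below (an illustrative sketch rather than the module’s actual code; the UserAgent string is made up, and you’d substitute your own key):

```powershell
# The v3 breachedaccount endpoint rejects requests without a hibp-api-key header
$headers = @{ 'hibp-api-key' = 'your-api-key-here' }
$uri = 'https://haveibeenpwned.com/api/v3/breachedaccount/test%40example.com'
Invoke-RestMethod -Uri $uri -Headers $headers -UserAgent 'My-HIBP-Client'
```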

So how does this work?
The two functions that require an API key to be specified are Get-PwnedAccount and Get-PwnedPasteAccount. In the past you would have typed something like --

Get-PwnedAccount -EmailAddress [email protected]

This would have returned all breaches in which this email address has been compromised. In version 3 you now need an API key to do the same thing.

Get-PwnedAccount -EmailAddress [email protected] -apiKey "hibp-api-key"

So in the above example you can input your API key directly in the command. Or you could store it in a variable and reference it in the command. For example:

$myApiKey = "xxxxxxxxxxxxx"
Get-PwnedAccount -EmailAddress [email protected] -apiKey $myApiKey

If you really wanted to, you could hard-code your API key in the parameters section of these scripts. Certainly not recommended, but the choice is yours.

So where do I get this API key?
To make it clear, not from this PowerShell module or from me. You will need to go to Troy Hunt’s site (https://haveibeenpwned.com/API/Key) and purchase one.

Once you do, this will be your own (or your organisation’s) personal key that you should not share. How you protect it and how you use it is up to you.

Where can I download the PowerShell Module?
PowerShellGallery: https://www.powershellgallery.com/packages/HaveIBeenPwned/
GitHub: https://github.com/originaluko/haveibeenpwned

Previous Posts
HaveIBeenPwned PowerShell Module
HaveIBeenPwned PowerShell Module Updates

Display Last Command in PowerShell Title Bar

Carrying on from my previous post on pimping out your PowerShell console, I started to look into ways to update the PowerShell title bar. In PowerShell there’s an automatic variable called $host where you can specify your own custom UI settings. One of the properties that can be modified is the console title bar. Now you say, “But Mark, why?”. Well I say, “But why not?”. And now that we have that formality out of the way…

Changing the PowerShell console title is actually very simple and can be done on one line.

$Host.UI.RawUI.WindowTitle = 'Dream bigger.'

This command can be run from the command line or placed into your startup profile.ps1 file.

I initially placed a quote into the WindowTitle property, put the line at the bottom of my profile.ps1 file, and had my startup profile load it automatically whenever I ran a new PowerShell session. However, with my recent experimentation with the PowerShell prompt and the Get-History cmdlet, I had the idea of dynamically populating the console title bar with my previous commands.

A lot of the legwork to do this is explained in my previous post, Display Execution Time In PowerShell Prompt. As such, I’m not going to delve into it too deeply here. Instead I recommend looking at that post.

To update the console title with our previous command we leverage the cmdlet Get-History (just as I used in my previous post).

$host.ui.RawUI.WindowTitle = (Get-History)[-1]

This will update our console title with our last command, but it won’t continue to update after each subsequent command.

So we can take this one step further by updating the built-in PowerShell function Prompt, which runs after each command executes. We can modify the function by copying and pasting the below code into our PowerShell session. This works for our current PS session.

function Prompt {
  $history = Get-History
  $host.UI.RawUI.WindowTitle = $history[-1]
  return " "
}

Now better yet, we can update our startup profile file. Usually this is profile.ps1 held in C:\Users\{Username}\Documents\WindowsPowerShell\ for Windows PowerShell or C:\Users\{Username}\Documents\PowerShell\ for PowerShell Core. By pasting this code into our startup profile, it will execute each time we open a new PowerShell session automatically.
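One way to get it there is to append the function to whichever path $PROFILE resolves to. The sketch below does that, and also guards against an empty history so a brand-new session doesn’t error:

```powershell
# A title-bar-updating Prompt function, stored as a here-string
$promptFunction = @'
function Prompt {
  $history = Get-History
  if ($history) { $host.UI.RawUI.WindowTitle = $history[-1] }
  return " "
}
'@

# Create the profile file first if it doesn't exist yet, then append
if (-not (Test-Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force | Out-Null
}
Add-Content -Path $PROFILE -Value $promptFunction
```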

So there you have it. Another pointless awesome way to pimp out your PowerShell console. Combine this with execution time in your prompt and you have the flyest console around.

Display Execution Time In PowerShell Prompt

Some time back I attended a presentation where the presenter’s PowerShell console displayed the last command’s execution time in the prompt. At the time I thought it was a bit of a geeky novelty thing, though recently I’ve had a slight change of opinion. It’s become a great way to easily see the efficiency of my code.

To use a crude saying, there are many ways to skin a cat. PowerShell is extremely flexible in that it allows you to perform the same task many different ways. But not all ways are equal, right? In the two examples below I perform a fairly basic count from 1 to 10 million.

$x = 0
foreach ($i in 1..10000000) {
    $x = $x + 1
}

In the above example the code runs in around 9834 milliseconds (9.8 seconds).

class MyClass {
    static [int] CountUp() {
        $x = 0
        foreach ($i in 1..10000000) {
            $x = $x + 1
        }
        return $x
    }
}


In this second example the code runs in 552 milliseconds (~0.5 seconds). A huge difference.
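One easy way to reproduce the comparison yourself is Measure-Command (this assumes the MyClass definition above has already been run in the session; your exact numbers will differ):

```powershell
# Time the plain foreach loop
$loopTime = Measure-Command {
    $x = 0
    foreach ($i in 1..10000000) { $x = $x + 1 }
}

# Time the class-based version
$classTime = Measure-Command { [MyClass]::CountUp() }

'{0:N0} ms (loop) vs {1:N0} ms (class)' -f $loopTime.TotalMilliseconds, $classTime.TotalMilliseconds
```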

Being able to quickly and easily see execution time in the prompt can be quite helpful in determining if you’re coding as efficiently as you could be. It’s led me to trying things multiple ways before I settle. Now the actual code to display execution time is also quite easy to add into your PowerShell startup profile or to just run in your current session.

PowerShell comes with a built in prompt function which you can override with your own. In the below example I have created a new prompt function which I can execute by typing Prompt after running the code in my current session.

function Prompt {
  $executionTime = ((Get-History)[-1].EndExecutionTime - (Get-History)[-1].StartExecutionTime).TotalMilliseconds
  $time = [math]::Round($executionTime, 2)
  $promptString = ("$time ms | " + $(Get-Location) + ">")
  Write-Host $promptString -NoNewline -ForegroundColor Cyan
  return " "
}

The execution time of commands is retrieved from the StartExecutionTime and EndExecutionTime properties of Get-History. I get the time of the previous command, round to two decimal places, and write that to the prompt.

You can also place the function in your PowerShell default startup profile file, which executes each time you open a new PowerShell session. It does require a slight modification to the above function, which I’ll discuss below. I’ve written a few posts on how to find and modify your default profile, but if you’re using Windows PowerShell you can find or add it at C:\Users\{Username}\Documents\WindowsPowerShell\profile.ps1, and if you’re using PowerShell Core at C:\Users\{Username}\Documents\PowerShell\profile.ps1.

function Prompt {
  if ((Get-History).Count -ge 1) {
    $executionTime = ((Get-History)[-1].EndExecutionTime - (Get-History)[-1].StartExecutionTime).TotalMilliseconds
    $time = [math]::Round($executionTime, 2)
    $promptString = ("$time ms | " + $(Get-Location) + ">")
    Write-Host $promptString -NoNewline -ForegroundColor Cyan
    return " "
  } else {
    $promptString = ("0 ms | " + $(Get-Location) + ">")
    Write-Host $promptString -NoNewline -ForegroundColor Cyan
    return " "
  }
}
In the code above I’ve wrapped it in an if / else statement block. The logic here is that we use Get-History to get the execution time, but when a PowerShell session first starts there is nothing in Get-History, which would cause the function to fail and PowerShell to fall back to a default vanilla prompt. Not ideal. So we add an else block and generate our own default prompt for when no history of previous commands exists, i.e. when a new PowerShell session is first opened.

So while many of you may also find this just a geeky novelty thing, it can be a good reminder to keep your code and scripting as efficient as possible. Hey, and at the very least you can impress your friends with a cool PowerShell prompt.

HaveIBeenPwned PowerShell Module Updates

Back in 2017 I wrote a post on a PowerShell module I created that consumes Troy Hunt’s Have I Been Pwned API service. I won’t go into too much detail about the service here; plenty of people already have, and since that time Have I Been Pwned has exploded in popularity, so most of us know what it is.

In that post I briefly discussed what the module does and how you can begin to use some of its core functions. Since then Troy has made a few changes to the API service, some small and some large, which I’ve slowly integrated into the PowerShell module: things like UserAgent strings becoming a requirement and K-anonymity for password checks.

The community has also played a part in shaping the PowerShell module over the last year. I’ve had a lot of feedback and even some contributions through the GitHub project. It’s been pretty cool to receive PRs via my GitHub page for improvements to the module.

I thought now was a good opportunity for a follow-up post to talk about some of the changes and updates that have been made over the last year.

Probably the biggest change has been K-anonymity in Get-PwnedPassword. Originally you would send your password over the wire in the body of an HTTPS request. With K-anonymity, Get-PwnedPassword now SHA-1 hashes your password locally first and only ever sends the first 5 characters of the hash to the Have I Been Pwned API. It’s a much safer way of checking passwords, which hopefully will lead to more people accepting and trying this method.

PS F:\Code> Get-PwnedPassword -Password monkey
WARNING: Password pwned 980209 times!
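As a rough sketch of how the K-anonymity flow works (this is illustrative, not the module’s actual implementation; the range endpoint shown is the public Pwned Passwords API):

```powershell
# Illustrative K-anonymity check: only 5 characters of the hash leave the machine
function Test-PwnedPasswordSketch {
    param([string]$Password)

    # SHA-1 hash the password locally
    $sha1  = [System.Security.Cryptography.SHA1]::Create()
    $bytes = [System.Text.Encoding]::UTF8.GetBytes($Password)
    $hash  = ($sha1.ComputeHash($bytes) | ForEach-Object { $_.ToString('X2') }) -join ''

    $prefix = $hash.Substring(0, 5)   # sent to the API
    $suffix = $hash.Substring(5)      # never transmitted

    # The API returns every known hash suffix sharing the 5-character prefix
    $range = Invoke-RestMethod -Uri "https://api.pwnedpasswords.com/range/$prefix"
    $match = ($range -split "`r?`n") | Where-Object { $_ -like "$suffix*" }

    if ($match) { "WARNING: Password pwned $(($match -split ':')[1]) times!" }
    else        { 'Password not found in any breaches.' }
}
```

Even if the API operator logged every request, the 5-character prefix matches hundreds of unrelated hashes, so your actual password can’t be inferred.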

I’ve attempted to make the module and all functions as PowerShell Core compliant as I can. I say “attempted” because, as much of a fan of PowerShell Core as I am, I keep finding differences in the way Core works. I’ve had to rewrite all the error handling to better catch 404 responses; a 404 Not Found response is actually a good thing, as it indicates an email account has not been found in a breach. So whether it’s Windows PowerShell or PowerShell Core, you should now be fine.

In my original post I gave an example of how you could run Get-PwnedAccount against a CSV file of email accounts and bulk check all your email addresses; something that could be helpful in a corporate environment with many hundreds of email addresses. The example I gave, though, was far from ideal.

This ability is now baked into Get-PwnedAccount and should lead to some interesting results. It’s very easy to use: a simple text file saved in CSV format with each email address on a separate line / row. Incorrectly formatted email addresses are ignored and results are displayed only for email addresses identified in breaches.

Below is an example of what the CSV file might look like

[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]

Usage is straightforward too.

PS F:\Code> Get-PwnedAccount -CSV F:\emails.csv

Description                   Email             Breach
-----------                   -----             ------
Email address found in breach [email protected]    000webhost
Email address found in breach [email protected]    17
Email address found in breach [email protected]    500px

Each time an email is found in a breach it outputs a result as an object, so you may get multiple results for a single email due to the different breaches it’s in.

Identifying the total number of emails found in breaches is simple. For example:

PS F:\Code> Get-PwnedAccount -CSV F:\emails.csv |  Measure-Object | Format-Table Count


Now, you probably don’t want to be hitting the API every time you want to manipulate the data. It will be slow, and rate limiting may block you. Storing the results in a variable provides a lot more flexibility and speed. For example, finding results for just one email address:

PS F:\SkyDrive\Code> $results = Get-PwnedAccount -CSV F:\emails.csv
PS F:\SkyDrive\Code> $results | Where-Object {$_.email -eq "[email protected]"}

Or if you don’t care about the breach and just want to display each compromised email address once:

$results | Sort-Object Email -Unique | Select-Object Email

You get the point, right? It’s fairly flexible once you store the results in an array.

Finally, one last small addition: Get-PwnedAccount now accepts an email from the pipeline. So if you have another cmdlet or script that can pull out an email address, you can pipe it directly into Get-PwnedAccount to quickly check whether it’s been compromised in a breach. For example, checking an AD user’s email address could be done as follows…

PS F:\code> Get-ADUser myuser -Properties emailaddress | % emailaddress | Get-PwnedAccount

Status Description              Account Exists
------ -----------              --------------
Good   Email address not found. False

The HaveIBeenPwned PowerShell module can be downloaded from the PowerShellGallery. Always make sure you are downloading and using the latest version. Within PowerShell use Install-Module -Name HaveIBeenPwned. The project can also be found on my GitHub page where you can clone and fork the project.

I’m keen to hear suggestions and feedback. So please let me know your experiences.

Download Links
PowerShellGallery: https://www.powershellgallery.com/packages/HaveIBeenPwned/
GitHub: https://github.com/originaluko/haveibeenpwned

Building and running Windows Terminal

The big news from Microsoft over the last week has been the announcement of Windows Terminal, an open source project from Microsoft currently up on GitHub. Windows Terminal allows you to run multiple tabbed CLIs from the one window. Not only that, but they can be a mix of different CLIs -- cmd, PowerShell, Python, Bash, etc. Pretty cool, right? Windows Terminal is GPU accelerated ¯\_(ツ)_/¯ and will allow for transparent windows, emojis, and new fonts.

As of today there are no pre-built binaries of Windows Terminal from Microsoft; those are planned for sometime in Winter 2019 (that’s Northern Hemisphere winter, people), and only the source code is up on GitHub. The 1.0 release isn’t planned until at least the end of the year. The code is still very alpha, but nevertheless I decided to see what’s involved in building and running Windows Terminal on Windows 10.

Below I have listed the steps and process I took to build and run Windows Terminal, if anyone is interested in trying it out themselves. There are a number of prerequisites required, but nothing too difficult.


Windows 10 (Build 1903)
As of today (May 2019) you need to be in the Windows Insider program to get this version. You’ll need to enable this inside of Windows 10 and download the latest build.

Visual Studio 2017 or newer
You can probably use a different IDE, though I ended up using the Community edition of Visual Studio 2019, which is a free download. Microsoft specifically calls out a few packages that you need if you’re running Visual Studio.

  • Desktop Development with C++
    • If you’re running VS2019, you’ll also need to install the following Individual Components:
      • MSVC v141 -- VS 2017 C++ (x86 and x64) build tools
      • C++ ATL for v141 build tools (x86 and x64)
  • Universal Windows Platform Development
    • Also install the following Individual Component:
      • C++ (v141) Universal Windows Platform Tools

Developer Mode in Windows 10.

Build and Deploy Process

The first thing you want to do is check that you’re on at least Windows 10 build 1903. You can check this by going to Settings > About. If you’re not on at least this build you can turn on Release Preview by going to Windows Insider Programme under Settings.

Next you want to make sure you’ve enabled Developer mode. You can do this in Settings > For developers

Now we can grab Visual Studio 2019 Community Edition. This is a super small and quick 20 GB download and install. <sarcasm emoji>

Make sure you select the right Workloads and Individual components from the prerequisites above.

Once the install completes comes the fun part of building. Skip the Visual Studio wizard and go to File > New > Repository

Under the Team Explorer window select Clone and enter in the Windows Terminal git path (https://github.com/microsoft/terminal.git). Make sure Recursively Clone Submodules is selected. Windows Terminal relies on git submodules for some of its dependencies. You’ll need around 200 MB to download the repo.

Once the repository downloads you may receive an error in the Error List that some NuGet packages are missing. Even if you don’t, it’s still probably a good idea to update the packages.

Go to Tools > NuGet Package Manager > Package Manager Console. Then in the Package Manager Console type in Update-Package -reinstall

Head over to the Solution Explorer window and select Solutions and Folders view and select OpenConsole.sln

We’re now just about ready to build. Up in the top menu bar select Release for the build, x64 for the architecture, and CascadiaPackage for the package to build.

All things being equal we should now be ready to build. Select Build > Build Solution. Initially I had a few failures here, which all came down to available disk space. You’ll only need around 12 GB for a build to succeed <another sarcasm emoji>. It should take a few minutes, and hopefully when complete you get a successful build with no errors. Finally, select Build > Deploy Solution.

Once deployed you can find Windows Terminal (Dev Build) on your Start menu which you can now run.

When you first launch Windows Terminal you won’t see any tabs. Pressing CTRL+T will open a second tab and display a pull-down menu where you can select different CLIs. Settings can also be found under this menu and can be modified via a JSON file. It’s in the profiles.json file that you can change transparency, fonts, colours, and of course add new types of CLIs.
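As a rough idea of the shape of that file in the current dev builds, a profile entry looks something like the sketch below (key names are from memory and may well change between these alpha builds, so treat it as illustrative only):

```json
{
    "profiles": [
        {
            "name": "PowerShell Core",
            "commandline": "pwsh.exe",
            "fontFace": "Consolas",
            "fontSize": 10,
            "useAcrylic": true,
            "acrylicOpacity": 0.75,
            "colorScheme": "Campbell"
        }
    ]
}
```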

Windows Terminal is still very rough around the edges. Microsoft are calling this a very early alpha release, and it shows. It is exciting, though, to see what is coming; Windows Terminal has huge possibilities. I’ll be following it closely over the coming months and looking forward to spewing emojis all over my terminals. 🙂 😮

vExpert 2019 is here and it’s huge!

Well, after a long wait it’s that time of year again when the first-half-of-year announcements are made for vExperts. A big congratulations to all the new and returning vExperts this year, in particular all my fellow Australian vExperts. And yeah, why not, congratulations even to those just across the pond over in New Zealand. It’s just another state of Australia, right? 😛

This is my fifth year as a vExpert and, as of late last year, my first as a vExpert Pro. It’s this new sub-program that I’m most proud and honoured to be part of, but more on that program later.

This year we have 1739 vExperts being honoured from 74 countries. That’s approximately 250 more than at the same time last year and on par with vExpert numbers after the second-half intake of 2018.

The United States is the most represented with 639, followed by the United Kingdom at 157. My home country, Australia, is the ninth most represented with 45 this year. Eighteen of those countries are represented by only one vExpert.

The recently updated public vExpert directory can be found at https://vexpert.vmware.com/directory/. It contains all of this year’s vExperts with their public profiles.

Coming back to the vExpert Pro sub-program. I first heard about this program from Valdecir Carvalho (@homelaber) at VMworld Vegas in 2018, and I thought it was a really great idea when Val described it to me. I won’t go into too much detail on the sub-program, as there are a number of blogs that cover it very well. Basically, though, one of its goals is to create vExperts who can champion the vExpert program in specific countries and regions around the world. From an English-speaking country that need might be a little hard to appreciate, but non-English-speaking countries cover, it might surprise you to know, most of the world. The language barrier can make it hard to recruit and communicate with vExperts in those countries. That’s where bilingual vExpert Pros can help, translating vExpert communications into their native language for fellow vExperts and potential candidates.

Coming a little closer to home, I had a few potential first-time vExperts in Australia approach me and ask to sit down and work through the vExpert application process with them, something I felt quite humbled to help out with. I also had a number of people ask if I could be used as a reference should further information be required of them. Again, something I was more than happy to do. If I can take a little of my personal time to help someone join this great program, it’s well worth it.

A little bit of insight into how the voting and approval process worked this year. With a huge number of applicants now applying for vExpert, you can understand what a mammoth job it is to screen each person for vExpert recognition and award. This is where the vExpert Pros were able to help out in a voting process. We had the opportunity to help the core VMware vExpert team curate and vote on vExpert approvals, and I can comfortably say we all took the process very seriously. Of course, we were just voting and providing feedback, with the ultimate say and oversight coming from people like Valdecir Carvalho and Corey Romero, who made the final decisions. I feel the process worked quite well and should lead to a higher standard not only for vExpert approvals but also for future applications.

In conclusion, with the increased scrutiny and review of applicants, everyone that made vExpert for 2019 should be extremely proud of themselves. We’re part of a great community and we have a high standard to live up to. The days of providing vague and misleading information on applications are going away.

Again, congratulations to all the 2019 vExperts! Well done and keep up the good work.