Tag Archives: PowerShell

PowerShell 7 – Ternary Operator

In addition to the pipeline chain operators I discussed in my previous post, PowerShell 7 introduces another long-awaited addition: a ternary operator. It can be used in place of simple if-else conditional statements. The ternary operator follows the format of the C# language and takes three arguments.

<condition> ? <if-true> : <if-false>

Using a ternary operator can be seen as a shorthand way of writing an if-else statement in PowerShell.

if (<condition>) {
  do something
}
else {
  do something else
}

The ternary operator is an interesting addition to PowerShell 7. It's basically a simplified if-else statement that you can write on one line. The first argument is the condition to evaluate, the second is the result returned when the condition evaluates True, and the third is the result returned when it evaluates False.

$total = 10 * 5 -eq 20 ? 'yes' : 'no'

$os = $IsWindows ? 'This is a Windows OS' : 'Not a Windows OS'

$ConfirmPreference -ne 'High' ? ($ConfirmPreference = 'High') : 'Already High'

(Test-Path $PROFILE) ? (Write-Output 'Profile exists') : (New-Item -ItemType file -Path $PROFILE -Force)

The condition argument is evaluated to a Boolean True / False, which determines which branch of the operator is evaluated and run. If you want to run commands, you'll need to wrap them in parentheses.

Nesting is possible with the ternary operator, though nested ternaries should be kept simple. At first glance the syntax can look a little cryptic, but once you understand how it works it starts to make sense.

 $IsMacOS ? 'You are on a Mac' : $IsLinux ? 'You are on Linux' : 'You must be on Windows'

If we write the above out in the traditional elseif way it starts to make more sense.

if ($IsMacOS) {
    'You are on a Mac'
} elseif ($IsLinux) {
    'You are on Linux'
} else {
    'You must be on Windows'
}

If you have used the Where-Object alias in the past you will know that it's referenced as a '?'. Only in very rare situations should you find a conflict or undesirable behaviour between the ternary '?' and the Where-Object alias '?'. What will be more important is being able to correctly identify code referencing a ternary '?' versus a Where-Object alias.
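To make the distinction concrete, here's a small illustrative example (PowerShell 7+, variable names are my own). The Where-Object alias '?' sits immediately after a pipe and takes a script block, while the ternary '?' follows a condition and takes two expressions.

```powershell
# Where-Object alias: filters objects coming down the pipeline
$big = 1..5 | ? { $_ -gt 3 }                  # returns 4 and 5

# Ternary operator: picks one of two expressions based on a condition
$answer = ($big.Count -eq 2) ? 'yes' : 'no'   # returns 'yes'
```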

Ternary operators in PowerShell aren't without a little controversy. It's a feature that has been requested for many years; Jeffrey Snover spoke about wanting it way back in PowerShell v1. PowerShell, though, has been able to do something very similar on one line for some time.

# Pre ternary way
$var = if ($x) { $y } else { $z } 

# Ternary way
$var = $x ? $y : $z

It's arguable which format is easier to understand: the traditional PowerShell way or the new ternary way. While PowerShell users may find the former easier, programmers coming from languages like C# will no doubt be comfortable using ternary operators in PowerShell. In any case I'd expect this to be a niche operator, rarely used, but time will tell.

References
Ternary Operator RFC

PowerShell 7 – Pipeline Chain Operators

The GA release of PowerShell 7 is just around the corner.  With this release come several new features that continue to build upon previous versions.  One of these is a pair of new operators, && and ||, referred to as pipeline chain operators.  They are intended to work like AND-OR lists in POSIX-like shells (and, I was surprised to learn, like conditional processing symbols in the Windows Command shell).

# Clone a Git repo and if successful display its README.md file
C:> $repo = "https://github.com/originaluko/haveibeenpwned.git"
C:> git clone $repo && Get-Content haveibeenpwned/README.md

Use of pipeline chain operators in PowerShell is fairly straightforward.  The left-hand side of the pipeline always runs when using either operator.  The && operator only executes the right-hand side if the left side ran successfully.  Conversely, the || operator only executes the right side if the left side fails.

C:> Write-Output 'Left' && Write-Output 'Right'
Left
Right

C:> Write-Output 'Left' || Write-Output 'Right'
Left

Previously to achieve a similar outcome you might create a script block using an If statement.

C:> Write-Output 'Left'; if ($?) {Write-Output 'Right'}
Left
Right

You can also place multiple operators in the one pipeline.  These operators are left-associative, meaning they are processed from left to right. 

C:> Get-ChildItem C:\temp\test.txt || Write-Error 'File does not exist' && Get-Content c:\temp\test.txt

In the above example || is processed before &&.  So Get-ChildItem and Write-Error can be seen as grouped first and processed before Get-Content.

[Get-ChildItem C:\temp\test.txt || Write-Error 'File does not exist'] && Get-Content c:\temp\test.txt

To achieve something similar without pipeline chain operators, and this is where things get a little more interesting with the additional work involved, you might perform the below. 

C:> Get-ChildItem C:\temp\test.txt ; if (-not $?) {Write-Error "File does not exist"} ; if ($?) {Get-Content c:\temp\test.txt}

Care should be taken with commands that return a True / False Boolean response.  The Boolean output should not be confused with a successful or unsuccessful execution.  Take something like Test-Path.

C:> Test-Path C:\temp\test.txt && Write-Output 'File exists'
False
File exists

C:> Test-Path C:\temp\test.txt || Write-Output 'File does not exist'
False

In the first example, even though Test-Path returned False (the file is not there), the command itself ran successfully, so the && operator still executes the right-hand side and prints a misleading 'File exists'.  In the second example the right-hand side never runs for the same reason: Test-Path succeeded, so || has nothing to react to.  Pipeline chain operators work by checking the value of the $? variable.  This is an automatic variable which is set to either True or False based on the success of the last command, not its output. 
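You can verify this yourself by checking $? immediately after Test-Path against a path that does not exist (the path below is purely illustrative):

```powershell
# Test-Path outputs False because the path is missing...
$result = Test-Path '/definitely/not/here.txt'

# ...but $? is True because the Test-Path command itself ran successfully,
# which is why && would still fire and || would not.
$success = $?
"$result / $success"   # False / True
```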

Pipeline chain operators can be used with Try / Catch blocks. It should be noted that script-terminating errors take precedence over chaining.

try
{
    $(throw) || Write-Output 'Failed'
}
catch
{
    Write-Output "Caught: $_"
}

In the above, while not elegant, a script-terminating error is caught and Caught: ScriptHalted is printed.

try
{
    Get-ChildItem C:\nonExistingFile.txt || Write-Output 'No File'
}
catch
{
    Write-Output "Caught: $_"
}

In the above example a non-terminating error occurs when no file is found, and 'No File' is printed.

One of the goals of pipeline chain operators is to be able to control your pipeline and actions based on command outcome rather than command output.  This is a slightly different approach than we might be used to in PowerShell.  Rather than having to validate output and then take an action, we can immediately take that action based on the outcome of the previous command.

It's going to be interesting to see how widely accepted these new operators become. Many of us have made do without them since the beginning of PowerShell. Still, we now have access to something that has long been available in many other shells, like bash and cmd.exe.

The pipeline chain operator RFC has a lot of good information and further explanation on its use.

References
Pipeline Chain Operators

Have I Been Pwned PowerShell Module v3

Over the last few years I've written a few posts on a PowerShell module I created that allows users to talk directly to the Have I Been Pwned API service (https://haveibeenpwned.com) that Troy Hunt maintains. While those posts are a little old now, they are still a good read on what this PowerShell module is about. I encourage you to read them if you are interested (links at the bottom).

A few months back Troy made a big change to the way his API service works by requiring authorisation in the form of an API key. This broke a lot of the scripts and services the community had created that leveraged his service, including my own PowerShell module. Troy has discussed at length why he decided to take these steps in Authentication and the Have I Been Pwned API, so I won't go into it here.

Shortly after this change took effect I received a number of comments from the community that my PowerShell module no longer worked. One or two even said it was failing because I wasn't providing an API key with the module. So I wanted to spend a few minutes explaining some of the changes in the latest version of the Have I Been Pwned PowerShell module, and what you need to do if you want to use it.

Firstly, I decided to jump the module's version from the previous v1.4.2 straight to v3 to match the API version used by HIBP. (Version 2 was a short-lived version up on my GitHub page.)

Now for the big breaking change. Where applicable, all the URIs in the module have been updated to the v3 API and have had a header added to include a hibp-api-key value/token. Not all URI endpoints require an API key, but generally speaking, if you want to check for a pwned email address you will need one.
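As a rough sketch of what that looks like under the hood, the key is simply passed as a hibp-api-key request header. The key, email address, and user agent below are placeholders, and the request itself is left commented out:

```powershell
$apiKey  = 'xxxxxxxxxxxxx'     # placeholder -- your purchased HIBP key
$request = @{
    Uri       = 'https://haveibeenpwned.com/api/v3/breachedaccount/test@example.com'
    Headers   = @{ 'hibp-api-key' = $apiKey }
    UserAgent = 'MyHibpClient'  # the API also requires a user agent string
}
# Invoke-RestMethod @request   # returns the breaches for the account
```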

So how does this work?
The two functions that require an API key to be specified are Get-PwnedAccount and Get-PwnedPasteAccount. In the past you would have typed something like:

Get-PwnedAccount -EmailAdddress [email protected]

This would return all the breaches in which this email address has been compromised. In version 3 you now require an API key to do the same thing.

Get-PwnedAccount -EmailAdddress [email protected] -apiKey "hibp-api-key"

So in the above example you can input your API key directly in the command, or you could store it in a variable and reference that in the command. For example:

$myApiKey = "xxxxxxxxxxxxx"
Get-PwnedAccount -EmailAdddress [email protected] -apiKey $myApiKey 

If you really wanted to, you could hard-code your API key in the parameters section of these scripts. Certainly not recommended, but the choice is yours.

So where do I get this API key?
To make it clear, not from this PowerShell module or from me. You will need to go to Troy Hunt’s site (https://haveibeenpwned.com/API/Key) and purchase one.

Once you do, this will be yours, or your organisation's, own personal key that you do not share. How you protect it and how you use it is up to you.

Where can I download the PowerShell Module?
PowerShellGallery: https://www.powershellgallery.com/packages/HaveIBeenPwned/
GitHub: https://github.com/originaluko/haveibeenpwned

Previous Posts
HaveIBeenPwned PowerShell Module
HaveIBeenPwned PowerShell Module Updates

Display Last Command in PowerShell Title Bar

Carrying on from my previous post on pimping out your PowerShell console, I started to look into ways to update the PowerShell title bar. In PowerShell there's an automatic variable called $Host where you can specify your own custom UI settings. One of the properties that can be modified is the console title bar. Now you say, "But Mark, why?". Well I say, "But why not.". And now that we have that formality out of the way…

Changing the PowerShell console title is actually very simple and can be done on one line.

$Host.UI.RawUI.WindowTitle = 'Dream bigger.'

This command can be run from the command line or placed into your startup profile.ps1 file.

I initially placed a quote into the WindowTitle property, placed the line at the bottom of my profile.ps1 file, and let my startup profile load it automatically in each new PowerShell session. However, with my recent experimentation with the PowerShell prompt and the Get-History cmdlet, I had the idea of dynamically populating the console title bar with my previous commands.

A lot of the leg work is explained in my previous post, Display Execution Time In PowerShell Prompt. As such, I'm not going to delve into it too deeply here. Instead I recommend looking at that post.

To update the console title with our previous command we leverage the cmdlet Get-History (just as I used in my previous post).

$host.ui.RawUI.WindowTitle = (Get-History)[-1]

This will update our console title with our last command, but it won't continue to update after each subsequent command.

So we can take this one step further by updating the built-in PowerShell function Prompt. This function runs after each command is executed. We can override it by copying and pasting the below code into our PowerShell session. This takes effect for our current PS session.

function Prompt {
  $history = Get-History
  $host.ui.RawUI.WindowTitle = ($history)[-1]
  return " "
}

Better yet, we can update our startup profile file. Usually this is profile.ps1, held in C:\Users\{Username}\Documents\WindowsPowerShell\ for Windows PowerShell or C:\Users\{Username}\Documents\PowerShell\ for PowerShell Core. By pasting this code into our startup profile, it will execute automatically each time we open a new PowerShell session.
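Rather than remembering those paths, the automatic $PROFILE variable points at the current host's per-user profile script; a quick, illustrative check (the New-Item line is commented out so nothing is written as a side effect):

```powershell
# $PROFILE resolves to the profile script for the current PowerShell host
$PROFILE

# Create it first if it doesn't exist yet, so the Prompt function has a home
# if (-not (Test-Path $PROFILE)) { New-Item -ItemType File -Path $PROFILE -Force }
```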

So there you have it. Another pointless awesome way to pimp out your PowerShell console. Combine this with execution time in your prompt and you have the flyest console around.

Display Execution Time In PowerShell Prompt

Some time back I attended a presentation where the presenter's PowerShell console displayed the last command's execution time in the prompt. At the time I thought it was a bit of a geeky novelty. Recently, though, I've had a slight change of opinion. It's become a great way to easily see the efficiency of my code.

To use a pretty crude saying, there are many ways to skin a cat. PowerShell is extremely flexible in that it allows you to perform the same task many different ways. But not all ways are equal, right? In the two examples below I perform a fairly basic count from 1 to 10 million.

$x = 0
ForEach ($i in 1..10000000) {
    $x = $x + 1
}
$x

In the above example the code runs in “around” 9834 milliseconds (9.8 seconds).

class MyClass
{
    static [int] CountUp()
    {
        $x = 0
        ForEach ($i in 1..10000000) {
            $x = $x + 1
        }
        return $x
    }
}

[MyClass]::CountUp()

In this second example the code runs in 552 milliseconds (~0.5 seconds). A huge difference.
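Before touching the prompt at all, you can get the same kind of numbers for any snippet with Measure-Command. This is a sketch; the exact figures will differ on your machine, so none are claimed here:

```powershell
# Time the plain loop version of the count with Measure-Command
$elapsed = Measure-Command {
    $x = 0
    ForEach ($i in 1..10000000) { $x = $x + 1 }
}
[math]::Round($elapsed.TotalMilliseconds, 2)
```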

Being able to quickly and easily see execution time in the prompt can be quite helpful in determining whether you're coding as efficiently as you could be. It's led me to try things multiple ways before I settle. The actual code to display execution time is also quite easy to add to your PowerShell startup profile, or to just run in your current session.

PowerShell comes with a built in prompt function which you can override with your own. In the below example I have created a new prompt function which I can execute by typing Prompt after running the code in my current session.

function Prompt {
  $executionTime = ((Get-History)[-1].EndExecutionTime - (Get-History)[-1].StartExecutionTime).TotalMilliseconds
  $time = [math]::Round($executionTime,2)
  $promptString = ("$time ms | " + $(Get-Location) + ">")
  Write-Host $promptString -NoNewline -ForegroundColor cyan
  return " "
}

The execution time of commands is retrieved from the StartExecutionTime and EndExecutionTime properties of Get-History. I get the time of the previous command, round to two decimal places, and write that to the prompt.

You can also take the function and place it in your PowerShell default startup profile file, which will execute each time you open a new PowerShell session. It does require a slight modification to the above function, which I'll discuss below. I've written a few posts on how to find and modify your default profile. But if you're using Windows PowerShell you can find or add it at C:\Users\{Username}\Documents\WindowsPowerShell\profile.ps1. If you're using PowerShell Core you can find or add it at C:\Users\{Username}\Documents\PowerShell\profile.ps1.

function Prompt {
  if ((Get-History).Count -ge 1) {
    $executionTime = ((Get-History)[-1].EndExecutionTime - (Get-History)[-1].StartExecutionTime).TotalMilliseconds
    $time = [math]::Round($executionTime,2)
    $promptString = ("$time ms | " + $(Get-Location) + ">")
    Write-Host $promptString -NoNewline -ForegroundColor cyan
    return " "
  } else {
    $promptString = ("0 ms | " + $(Get-Location) + ">")
    Write-Host $promptString -NoNewline -ForegroundColor cyan
    return " "
  }
}

In the code above I've wrapped it in an If / Else block. The logic here is that we use Get-History to get the execution time, but when a PowerShell session is first opened there is nothing in Get-History, which would cause the function to fail and fall back to a default vanilla prompt. Not ideal. So we add an else block that generates our own default prompt when no history of previous commands exists yet.

So while many of you may just find this a geeky novelty, it can also be a good reminder to keep your code and scripting as efficient as possible. And at the very least you can impress your friends with a cool PowerShell prompt.

HaveIBeenPwned PowerShell Module Updates

Back in 2017 I wrote a post on a PowerShell module I created that consumes Troy Hunt's Have I Been Pwned API service. I won't go into too much detail about the service here. Plenty of people already have, and since that time HaveIBeenPwned has exploded in popularity, so most of us know what it is.

In that post I briefly discussed what the module does and how you can begin to use some of its core functions. Since that time Troy has made a few changes to the API service, some small and some large, which I've slowly integrated into the PowerShell module: things like UserAgent strings being a requirement and k-anonymity for password checks.

The community has also played a part in shaping the PowerShell module over the last year. I’ve had a lot of feedback and even some contributions through the GitHub project. It’s been pretty cool to receive PRs via my GitHub page for improvements to the module.

I thought now was a good opportunity for a follow-up post to talk about some of the changes and updates that have been made over the last year.

Probably the biggest change has been k-anonymity in Get-PwnedPassword. Originally you would send your password over the wire in the body of an HTTPS request. With k-anonymity, Get-PwnedPassword now SHA1-hashes your password locally first and only ever sends the first 5 characters of the hash to the HaveIBeenPwned API. It's a much safer way of checking passwords, which hopefully will lead to more people accepting and trying this method.
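The local hashing half of that flow is easy to sketch. The helper name below is mine, not the module's internal function; the point is that only the first five characters of the SHA1 hash ever leave your machine (sent to the pwnedpasswords range endpoint, shown commented out):

```powershell
function Get-Sha1Hex {
    param([string]$Text)
    $sha1  = [System.Security.Cryptography.SHA1]::Create()
    $bytes = $sha1.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($Text))
    -join ($bytes | ForEach-Object { $_.ToString('X2') })
}

$hash   = Get-Sha1Hex 'monkey'    # AB87D24BDC7452E55738DEB5F868E1F16DEA5ACE
$prefix = $hash.Substring(0, 5)   # 'AB87D' -- the only part sent over the wire
# Invoke-RestMethod "https://api.pwnedpasswords.com/range/$prefix" then match
# the returned suffixes against $hash.Substring(5) locally
```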

PS F:\Code> Get-PwnedPassword -Password monkey
AB87D24BDC7452E55738DEB5F868E1F16DEA5ACE
WARNING: Password pwned 980209 times!

I've attempted to make the module and all its functions as PowerShell Core compliant as I can. I say attempted because, as much of a fan of PowerShell Core as I am, I keep finding differences in the way Core works. I've had to rewrite all the error handling to better catch 404 responses (a 404 Not Found response actually being a good thing, as it identifies that an email account has not been found in a breach). So whether it's Windows PowerShell or PowerShell Core, you should now be fine.

In my original post I gave an example of how you could run Get-PwnedAccount against a CSV file of email accounts and bulk check all your email addresses. Something that could be helpful in a corporate environment with many hundreds of email addresses. The example I gave, though, was far from ideal.

This ability is now baked into Get-PwnedAccount and should lead to some interesting results. It's very easy to use: a simple text file saved in CSV format with each email address on a separate line / row. Incorrectly formatted email addresses are ignored, and results are displayed only for email addresses identified in breaches.

Below is an example of what the CSV file might look like

[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]

Usage is straightforward too.

PS F:\Code> Get-PwnedAccount -CSV F:\emails.csv

Description                   Email             Breach
-----------                   -----             ------
Email address found in breach [email protected]    000webhost
Email address found in breach [email protected]    17
Email address found in breach [email protected]    500px

Each time an email is found in a breach it outputs a result as an object, so you may get multiple results for a single email due to the different breaches it appears in.

Identifying the total emails found in breaches is simple. For example

PS F:\Code> Get-PwnedAccount -CSV F:\emails.csv |  Measure-Object | Format-Table Count

Count
-----
  413

Now you probably don't want to be hitting the API every time you want to manipulate the data. It will be slow, and I can't guarantee rate limiting won't block you. Storing the results in a variable provides a lot more flexibility and speed. For example, finding results for just one email address:

PS F:\SkyDrive\Code> $results = Get-PwnedAccount -CSV F:\emails.csv
PS F:\SkyDrive\Code> $results | Where-Object {$_.email -eq "[email protected]"}

Or if you don’t care about the breach and just want to display a compromised email address once.

$results | Sort-Object Email -Unique | Select-Object Email

You get the point, right? It's fairly flexible once you store the results in an array.

Finally, one last small addition. Get-PwnedAccount will now accept an email from the pipeline. So if you have another cmdlet or script that can pull out an email address, you can pipe it directly into Get-PwnedAccount to quickly check whether it's been compromised in a breach. For example, checking an AD user's email address could be done as follows…

PS F:\code> Get-ADUser myuser -Properties emailaddress | % emailaddress | Get-PwnedAccount

Status Description              Account Exists
------ -----------              --------------
Good   Email address not found. False

The HaveIBeenPwned PowerShell module can be downloaded from the PowerShellGallery. Always make sure you are downloading and using the latest version. Within PowerShell use Install-Module -Name HaveIBeenPwned. The project can also be found on my GitHub page where you can clone and fork the project.

I’m keen to hear suggestions and feedback. So please let me know your experiences.

Download Links
PowerShellGallery: https://www.powershellgallery.com/packages/HaveIBeenPwned/
GitHub: https://github.com/originaluko/haveibeenpwned

Running Docker inside of VCSA 6.7

During this year's VMware {code} Hackathon at VMworld Vegas I submitted a team project that ran PowerShell within the vSphere Client.  To achieve this I had to create a Docker container running PowerShell Core.  During development I ran the container on an Ubuntu Linux server.  For my final submission at the Hackathon I wanted to minimise the dependency on a separate Linux VM.  So I created a process to install and run Docker directly on VCSA 6.7.

The process to install and run Docker within VCSA 6.7 is surprisingly simple.  As a reference I used a blog post from William Lam, with a small modification to correctly load the Bridge module in VCSA 6.7 (as opposed to VCSA 6.5 in William's post).

I thought I would share the steps I used below for others to experiment with.

Step 1.
SSH to the VCSA VM and enter the Shell

Install the docker package

tdnf -y install docker

Step 2.
Load the kernel module to start the Docker client. (This step needs to be re-run if the VCSA is rebooted)

modprobe bridge --ignore-install

If successful no information should be returned.

You can check if the module is installed by running the following

lsmod | grep bridge

[email protected] [ ~ ]# lsmod | grep bridge
bridge                118784  0
stp                    16384  1 bridge
llc                    16384  2 stp,bridge

Step 3.
Enable and start the Docker Client
systemctl enable docker

systemctl start docker


Making the Bridge module load after a reboot (Optional)

As with William's process to install and run Docker, Step 2 needs to be re-run each time the VCSA is rebooted.  The solution to make the Bridge module automatically load after a reboot happens to be exactly the same as his.  I've included the specific commands below to simplify the process.

Step 1.
Make a backup of the file we are going to modify.

cp /etc/modprobe.d/modprobe.conf /etc/modprobe.d/modprobe.conf.bak

Step 2.
Comment out the line that disables the Bridge module from loading

sed -i "s/install bridge/# install bridge/g" /etc/modprobe.d/modprobe.conf

Step 3.
Create a new config file to load the Bridge module

touch /etc/modules-load.d/bridge.conf

Step 4.
Specify the name of the Bridge module to load at reboot

echo bridge >> /etc/modules-load.d/bridge.conf

That's all we need to do to start running containers on VCSA 6.7.  Excluding the steps to make the Bridge module persist after a reboot, it's even simpler.


Running a Docker container

If you want to test your first container on VCSA you can try the Docker image I built for the Hackathon.  The steps are very simple and listed below.

Step 1.
Download the two required files from my GitHub repo

curl -O https://raw.githubusercontent.com/originaluko/vSphere_PowerCLI/master/Dockerfile

curl -O https://raw.githubusercontent.com/originaluko/vSphere_PowerCLI/master/supervisord.conf

Step 2.
Build the Image

docker build -t vsphere-powercli .

Step 3.
Start the container with this image

docker run -d -p 4200:4200 --name vsphere-powercli vsphere-powercli

You can check if the docker container is running with the below command.

docker stats

You can check if the port has been mapped correctly by running

docker port vsphere-powercli

Step 4.
Open a web browser and test if you can connect directly to the container through the browser and login to the PowerShell prompt.

https://{my_vcsa}:4200

Accept any certificate warning and login with the default credentials of powercli / password

I hope that was all fairly straightforward to understand.  It's actually all very simple to achieve under VCSA 6.7.  As in William's post, none of this is supported by VMware, so user beware.  Though I have been running Docker inside VCSA 6.7 for quite a few months now with no noticeable issues.

If you want to see my full VMworld Hackathon project you can check it out over in my GitHub repo.

VMware {code} Hackathon – VMworld Vegas 2018

It was a tough decision.  Hit up the Cohesity party and watch Snoop Dogg perform, or participate in the VMware {code} Hackathon.  No, the decision was easy; I was always going to take part in the Hackathon.  I had regretted not taking part last year and I was super keen for this year's hackathon. I wasn't going to miss it.  I think it's safe to assume I chose wisely, with my team, Team vMafia, coming away with the winning project and an impromptu segment on Virtually Speaking!

Left to right: Anthony Spiteri, Matt Allford, Mark Ukotic, and Tim Carman

This year's Hackathon broke from the format of previous years.  Previously you showed up, attempted to form a team and an idea, then executed it all within a handful of hours.  This year the Hackathon chose to leverage HackerEarth and give participants three weeks to create a team and build out a hackathon idea.  Without a doubt this was the single biggest improvement to the format of this year's event, leading to some great projects being built.

Rewind a month.  In the week leading up to the Hackathon I started thinking about an idea I could submit.  Struggling to think of something, I started playing a bit of buzzword bingo.  PowerShell and PowerCLI were big passions of mine.  Docker was an area I wanted to explore more, and I had wanted to learn more about what the vSphere SDK could do.  At that point an idea just naturally presented itself.  Could you run a PowerShell console with PowerCLI modules inside the new HTML5 vSphere Client?

At this point I didn't know whether I had a good idea, let alone an idea at all.  I ran it past some friends within a local Aussie Slack group I'm part of.  It went down extremely well but I still had my doubts.  So I did what anyone would do and went to 'other' friends, who also thought it was a great idea.  So OK, I thought, I have an idea.

Having recently quit my job I was able to focus 100% of my attention on the project.  Great for me and the Hackathon, I guess bad for everyone else with jobs 😛  I formed a team, and without any persuasion a number of the Aussie Slack crew jumped on board, including Anthony Spiteri, Matt Allford, and Tim Carman.  We were going to need a lot of help so I reached out to a few friends for assistance.

First up was my ex-manager John Imison for some assistance with Docker.  Installing Docker and running a container is one thing, but building a container image was something new.  John provided some great advice and ideas around building the Docker image.  Actually, far more than I could practically implement in a proof-of-concept hackathon idea.  If this project continues to develop past the hackathon, expect to see more of his ideas integrated.

Next I hit up my developer brother-in-law Simon Mackinnon.  After spending a couple of days crying in front of my computer attempting to build and configure my vSphere SDK environment, I went to Simon for help deciphering the vSphere plugins.  Simon helped put us on the right track in finding the easiest solution to embed a console into the vSphere Client.

After a solid week of learning Docker and the vSphere SDK it was time to actually build something and put it all together.  Believe it or not, this was actually the easiest part of the whole project.  Once I knew how to build a Docker image it was just a matter of customising it for my purpose.  The same went for the vSphere plugin: take a sample plugin and modify it for purpose.

The night of the Hackathon at VMworld was fairly cruisey for Team vMafia 2.0.  Our project was built.  We performed some final testing to prove it all worked and gave some of the judges a walk-through of what we built.  I should point out that the night was extremely well run.  Previous years had some controversy around the event; none of that existed this year.  Two rounds of hot food were served, with lots of drinks and candy lollies to be had, and the WiFi and Internet were stable and strong.  The event was pushed back from an original start time of 6:30 to 7:30, due to some of the judges holding training sessions / presentations on some of the hackathon themes.  At 10 PM all the teams were given two minutes to present their ideas and projects in front of a projector.  A completed project was not a requirement to present.  The judges had a number of criteria they were working from, such as how well you kept to your 2 minutes and how well you conveyed your idea.

Team vMafia were one of the last to present.  I was certainly not confident, as there were some great ideas before us.  I spoke for our team.  I'm not sure how long I presented for, but it was definitely less than the 2 minutes.  During the entire presentation I had Anthony Spiteri whispering in my ear, 'hurry up, don't waste time, go go go'.  Distracting, yes, but he did keep me on point.  The judges then left the room to vote.  They came back pretty quickly with a unanimous decision that Team vMafia had the winning idea of an integrated PowerShell console inside the H5 vSphere Client.  It was not until they called our name that I thought, 'Hey, maybe we do have a good idea'.

Screen of vSphere PowerCLI being used within the vSphere Client

The night didn't quite end there for Team vMafia.  After the event we passed by Eye Candy in Mandalay Bay, where we ran into the Virtually Speaking guys, Pete Flecha and Glenn Sizemore.  While Anthony denies it, I think this was his master plan for us to bump into them.  We had some great conversations with the guys, and they were so excited about our Hackathon project and win that they invited us onto the podcast the following day.

The episode of Virtually Speaking with Team vMafia can be found here.  While I find it hard to listen to myself, I’m assured that the segment was really good and funny.  Our segment also managed to make the cut in the same episode as Michael Dell and Pat Gelsinger, which I have to say is pretty cool.  A huge thank you to Pete and Glenn for allowing us to come onto the show!

Finally, I can’t end without linking to the actual project.  During development I used my own private repos, but I have now moved it over to my GitHub page.  The project has the very original name of vSphere PowerCLI.  As far as disclaimers go, the project is free to download and use at will.  I’m not really holding any restrictions on it.  It was always just a Proof of Concept idea I had.  The instructions are hopefully fairly easy to follow.  Most people have been taking a snapshot of their VCSA and using the option to run Docker on it.  If the support is there I’ll consider developing the project past the hackathon idea.

Thanks for everyone’s support and kind words and let me know if you’d like to see the project developed further.

 

Getting Started With Ansible And PowerCLI

Continuing on my journey of learning Ansible with a twist of VMware (see my previous post on Getting started with Ansible and VMware), I’ve started to play around with PowerShell Core and PowerCLI in Ansible.  What I’ve found is that you can do a lot of interesting things with PowerCLI in Ansible, removing the need for a Windows jumphost.

Now, I think the magic here is really just using PowerShell Core with Ansible.  However, I wanted to tackle this once again from the VMware admin viewpoint.  So I’m focusing on using Ansible to leverage PowerCLI to connect to a vCenter Server and perform some PowerShell / PowerCLI actions, all running from the local Ansible host.

As with my previous post, this is not really an Ansible 101 guide.  Rather, the goal here is to show you, the reader, what’s possible with PowerShell Core and PowerCLI using Ansible, and to get you thinking about how you might leverage this in your environment.

So let’s lay the framework of what we’ll cover below.  I’m going to assume Ansible has already been installed.  I’ll go through the steps to install PowerShell Core onto the Ansible host, then install the VMware PowerCLI modules and run some basic Cmdlets.  Finally, I’ll cover the more interesting Ansible integration part.

In my lab I’m using Ubuntu, so everything I’m going to do will be based on this distro.  Let’s get started.

Installing PowerShell

Install PowerShell Core with the following command.  Depending on your Linux distro and version, you may first have to register an updated Microsoft package repository.
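If the Microsoft repository isn’t registered yet, it can be added on Ubuntu roughly like this (a sketch assuming Ubuntu 18.04; swap the version in the URL for your release):

```shell
# Register the Microsoft package repository (Ubuntu 18.04 shown)
wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt-get update
```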

sudo apt-get install -y powershell

Next, sudo into PowerShell.  We use sudo because the next few commands we run in PowerShell need elevated privileges.

sudo pwsh

Install the VMware PowerCLI modules with the first command below.  Then change your PowerCLI settings to ignore self-signed certificates.  If you have signed certs you can skip this step, but most of us probably don’t.

PS /home/mukotic/> Install-Module vmware.powercli

PS /home/mukotic/> Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Scope AllUsers

At this point you can exit out of the PowerShell prompt and come back in without using sudo, or just keep going as is; the choice is yours.  We can now make our first connection to our VCSA host and, if successful, run a few basic PowerCLI Cmdlets like Get-VMHost.

PS /home/mukotic/> Connect-VIServer {vcsa_server}

PS /home/mukotic/> Get-VMHost

Creating the PowerShell Script

Assuming all is successful up to this point, we can now turn the above commands into a PowerShell script called vcsa_test.ps1.  It’s not ideal, but for demonstration purposes I put the username and password details into the script.  I like to pipe the vCenter connection to Out-Null to avoid any stdout data polluting my output results.

# Connect to vCenter, suppressing the connection banner
Connect-VIServer -Server vc01.ukoticland.local -User {vcsa_user} -Password {vcsa_pass} | Out-Null
# Collect host details and convert to JSON (ExtensionData is too deep to serialise usefully)
$result = Get-VMHost | Select-Object -ExcludeProperty ExtensionData | ConvertTo-Json -Depth 1
# Write the results out to a file
$result | Out-File -FilePath vmhost.json

Creating the Ansible Playbook

We can now create an Ansible Playbook that will call our PowerShell script.  Exit out of the PowerShell prompt and, using vi, nano, or another editor, create a file called vcsa_test.yml and enter the below.  The only really important line is the one with ‘command’.  Remember that spacing is significant in the YAML file.

---
- hosts: localhost

  tasks:
    - name: Run PowerShell Core script
      command: pwsh /home/mukotic/vcsa_test.ps1
      ignore_errors: yes
      changed_when: false
      register: powershell_output

Now run the Ansible Playbook we just created.

ansible-playbook vcsa_test.yml

Again, if all is successful, the results should look something like the below.

PLAY [localhost] ****************************************************************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************************************************
ok: [10.10.10.10]

TASK [Run PowerShell Core script] ********************************************************************************************************************************
ok: [10.10.10.10]

PLAY RECAP *******************************************************************************************************************************************************
10.10.10.10 : ok=2 changed=0 unreachable=0 failed=0

Finally, we check that our output file was created correctly.

cat vmhost.json

[
  {
    "State": 0,
    "ConnectionState": 0,
    "PowerState": 1,
    "VMSwapfileDatastoreId": null,
    ...
    "DatastoreIdList": "Datastore-datastore-780 Datastore-datastore-529 Datastore-datastore-530"
  }
]

What we covered above are just some of the fundamentals of running a PowerShell Core script on an Ansible host.  There are still a lot of improvements that can be made.  The most obvious is that we can move our username and password details out of the main PowerShell script.  We could also take the JSON output and pass it into the Ansible Playbook to read the values for later use in our plays.  But most importantly, we can now start to make advanced configuration changes in vSphere where Ansible modules don’t exist.
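For example, the JSON file written by the script could be read back into the play with follow-up tasks (a sketch; the task names and the vmhosts variable are my own, and the file path assumes the playbook runs from the same directory as vmhost.json):

```yaml
    - name: Read the PowerCLI results back into the play
      set_fact:
        vmhosts: "{{ lookup('file', 'vmhost.json') | from_json }}"

    - name: Show the power state of the first host
      debug:
        msg: "PowerState is {{ vmhosts[0].PowerState }}"
```

These tasks would slot into the same playbook, after the command task that runs the script.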

References

Installing PowerShell Core on Linux -- MS

5 Getting Started PowerShell Core Tips

Now that PowerShell Core 6 has gone GA, it’s a great time to start learning this new version.  So I thought I would put together 5 quick tips and tricks that I’ve used to make using PowerShell Core a little easier for myself while making the transition from Windows PowerShell.

The intention of this post is not to go into the differences between Windows PowerShell and PowerShell Core.  Despite the slightly mixed or vague messaging we’re getting from Microsoft on the future of PowerShell, I think it’s safe to say that PowerShell Core is not a replacement or upgrade for Windows PowerShell, but it is the future of PowerShell.  As more and more modules start supporting PowerShell Core, the argument to switch over will become easier.

So while people continue to argue whether or not they should make the switch to PowerShell Core, here are 5 tips and tricks I have used with PowerShell Core in the meantime.  It’s also worth noting that all of these tips are Windows-specific.

Tip 1. Modify your default profile.

I wrote a post a little while back on how to modify your default profile in Windows PowerShell here.  I recommend taking a look at it and bringing a bit of yourself to PowerShell.  Much of what I discuss in that post is still valid for PowerShell Core.  The only real difference is your profile path in PowerShell Core.

You can easily locate your profile in PowerShell Core by typing $Profile at the command prompt.  Unless you’ve already modified it, there’s a good chance that the location path doesn’t actually exist on your PC.

In Windows 10 the path is C:\Users\{username}\Documents\PowerShell\Microsoft.PowerShell_profile.ps1.  If it doesn’t exist, go ahead and create the folder PowerShell in your Documents folder.  Then create a file called profile.ps1 (which applies to all hosts; the Microsoft.PowerShell_profile.ps1 name from the $Profile path works too) and modify it to your heart’s content.  All the steps are laid out in my previous post I mentioned on modifying your Windows PowerShell profile.  Without any specific Core modifications, I took my already modified profile.ps1 profile from Windows PowerShell and copied it to this location.
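As a minimal sketch, you can create the file if it doesn’t exist yet using the built-in $PROFILE variable (the timestamped prompt is just an illustrative customisation, not from my original post):

```
# Create the profile file (and parent folder) if it doesn't already exist
if (-not (Test-Path -Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force | Out-Null
}

# Example customisation: a timestamped prompt
Add-Content -Path $PROFILE -Value 'function prompt { "PS [$(Get-Date -Format HH:mm)] $PWD> " }'
```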

Original PowerShell Core console on the left with my modified profile.ps1 profile on the right.

Tip 2. Modify VS Code to use PowerShell Core Integrated Terminal

While I still do like to use the Windows PowerShell ISE with ISE Steroids from time to time, I’ve for the most part switched to VS Code for my day-to-day use.  If you haven’t yet made the switch, I highly recommend downloading it and giving it a try.  It’s fast, stable, and uses relatively little memory compared to the PowerShell ISE.

Currently, as of VS Code 1.22, you can’t run multiple different Integrated Terminals.  But there are still a few things we can do here.  We can change our default Integrated Terminal to PowerShell Core.

Open VS Code and navigate to File / Settings.  On the left you have your Default Settings, and on the right you have your User Settings to override the Default Settings.

Enter the following between the two { }

// PowerShell Core
"terminal.integrated.shell.windows": "C:\\Program Files\\PowerShell\\6.0.0\\pwsh.exe",

Now when you open up an Integrated Terminal it should default to PowerShell Core (pwsh.exe)

You can download VS Code from https://code.visualstudio.com/

Tip 3. Modify Console Window properties

This one kind of carries on from Tip 1: more visual customisation of how a terminal session looks, and very much personal preference.  Right click on the top left and pull up the terminal properties.  I drop the Font size down to a non-standard size of 13, which fixes what I feel is an overly elongated terminal.  I increase the Layout window size to 130 x 40 and hard code a different background colour.  Finally, I increase the Command History buffer size.

Tip 4. Use Chocolatey to Install and Upgrade PowerShell Core.

Chocolatey is a Windows Package Manager similar to apt-get or yum in the Linux world.  What I love about Chocolatey is that it handles installing and configuring all dependencies you require when installing an application.

Installing Chocolatey is as simple as running a few commands on one line in PowerShell.  Installing / upgrading PowerShell Core is just as simple once Chocolatey is installed.

The below command will install Chocolatey from a PowerShell prompt.

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

This next command will install PowerShell Core and any dependencies it needs.  Replace install with upgrade if you have a previous version of PowerShell Core installed.

choco install powershell-core -y

You can download Chocolatey from their website at https://chocolatey.org/  and instructions to install PowerShell Core can be found at https://chocolatey.org/packages/powershell-core

Tip 5. Find some modules

PowerShell Core by default uses a different module path to load modules, meaning that your Windows PowerShell modules won’t just import and run.  This doesn’t mean that they won’t work; PowerShell Core has been designed to be as backwards compatible as possible.  While modules like Active Directory won’t work correctly, many still do.

By installing the WindowsPSModulePath module from the PowerShell Gallery, you can use Windows PowerShell modules by appending the Windows PowerShell PSModulePath to your PowerShell Core PSModulePath.

First, install the WindowsPSModulePath module from the PowerShell Gallery

Install-Module WindowsPSModulePath -Force

Then run the Add-WindowsPSModulePath cmdlet to add the Windows PowerShell PSModulePath to PowerShell Core:

Add-WindowsPSModulePath

This will allow you to start using all your current Windows PowerShell modules in Core (in your current session).  This is not a guarantee that they will work, though, so test thoroughly.

If you’re after specifically supported PowerShell Core modules you can search for the tag PSEdition_Core in the PowerShell Gallery.

 

I hope some of the above quick tips help you get started with PowerShell Core and make the transition a little easier.