
Getting Started With Ansible And PowerCLI

Continuing on my journey of learning Ansible with a twist of VMware (see my previous post on Getting Started with Ansible and VMware), I've started to play around with PowerShell Core and PowerCLI in Ansible. What I've found is that you can do a lot of interesting things with PowerCLI in Ansible, removing the need for a Windows jump host.

Now I think the magic here is really just using PowerShell Core with Ansible. However, I wanted to tackle this once again from the VMware admin viewpoint. So I'm focusing on using Ansible to leverage PowerCLI to connect to vCenter Server and perform some PowerShell / PowerCLI actions, all running from the local Ansible host.

As with my previous post, this is not really an Ansible 101 guide. Rather, the goal here is to show you, the reader, what's possible with PowerShell Core and PowerCLI using Ansible, and to get you thinking about how you might leverage this in your own environment.

So let's lay out the framework of what we'll cover below. I'm going to assume Ansible has already been installed. I'll go through the steps to install PowerShell Core onto the Ansible host, then install the VMware PowerCLI modules and run some basic cmdlets. Finally I'll cover the more interesting Ansible integration part.

In my lab I'm using Ubuntu, so everything I'm going to do will be based on that distro. Let's get started.

Installing PowerShell

Install PowerShell Core with the following command. Depending on your Linux distro and version you may first need to register the Microsoft package repository.

sudo apt-get install -y powershell

Next sudo into PowerShell.  We use sudo because the next few commands we run in PowerShell will need elevated privileges.

sudo pwsh

Install the VMware PowerCLI modules with the first command below. Then change your PowerCLI settings to ignore self-signed certificates. If you have CA-signed certs you can skip this step, but most of us probably don't.

PS /home/mukotic/> Install-Module vmware.powercli

PS /home/mukotic/> Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Scope AllUsers

At this point you can exit out of the PowerShell prompt and come back in without using sudo, or just keep going as is; the choice is yours. We can now make our first connection to our VCSA host and, if successful, run a few basic PowerCLI cmdlets like Get-VMHost.

PS /home/mukotic/> Connect-VIServer {vcsa_server}

PS /home/mukotic/> Get-VMHost

Creating the PowerShell Script

Assuming all is successful up to this point, we can now turn the above commands into a PowerShell script called vcsa_test.ps1. It's not ideal, but for the sake of demonstration I've put the username and password details into the script. I like to pipe the vCenter connection to Out-Null to avoid any stdout data polluting my output results.

Connect-VIServer -Server vc01.ukoticland.local -User {vcsa_user} -Password {vcsa_pass} | Out-Null
$result = Get-VMHost | Select-Object -ExcludeProperty ExtensionData | ConvertTo-Json -Depth 1
$result | Out-File -FilePath vmhost.json

Creating the Ansible Playbook

We can now create an Ansible playbook that will call our PowerShell script. Exit out of the PowerShell prompt and, using vi, nano, or another editor, create a file called vcsa_test.yml and enter the below. The only really important line is the one with 'command'. Remember that spacing matters in a YAML file.

---
- hosts: localhost

  tasks:
    - name: Run PowerShell Core script
      command: pwsh /home/mukotic/vcsa_test.ps1
      ignore_errors: yes
      changed_when: false
      register: powershell_output

Now try running the Ansible Playbook we just created and check if it runs.

ansible-playbook vcsa_test.yml

Again, if all is successful, the results should look something similar to below.

PLAY [localhost] ****************************************************************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************************************************
ok: [10.10.10.10]

TASK [Run PowerShell Core script] ********************************************************************************************************************************
ok: [10.10.10.10]

PLAY RECAP *******************************************************************************************************************************************************
10.10.10.10 : ok=2 changed=0 unreachable=0 failed=0

Finally we check that our output file was created correctly.

cat vmhost.json

[
{
"State": 0,
"ConnectionState": 0,
"PowerState": 1,
"VMSwapfileDatastoreId": null,
...
"DatastoreIdList": "Datastore-datastore-780 Datastore-datastore-529 Datastore-datastore-530"
}
]

What we covered above are just some of the fundamentals of running a PowerShell Core script on an Ansible host. There are still a lot of improvements that can be made. The most obvious is that we can move our username and password details out of the main PowerShell script. We could also take the JSON output and pass it back into the Ansible playbook to read the values for later use in our plays. But most importantly, we can now start to make advanced configuration changes in vSphere where Ansible modules don't exist.
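As a quick sketch of that first improvement, the script could accept its connection details as parameters rather than hard coding them. The parameter names below are just examples of my own, not anything prescribed by PowerCLI or Ansible.

param(
    [string]$Server,  # vCenter server name or IP
    [string]$User,    # vCenter username
    [string]$Pass     # vCenter password
)

# Connect using the supplied parameters instead of hard coded credentials
Connect-VIServer -Server $Server -User $User -Password $Pass | Out-Null
$result = Get-VMHost | Select-Object -ExcludeProperty ExtensionData | ConvertTo-Json -Depth 1
$result | Out-File -FilePath vmhost.json

The Ansible task would then call something like pwsh /home/mukotic/vcsa_test.ps1 -Server {vcsa_server} -User {vcsa_user} -Pass {vcsa_pass}, ideally with those values coming from an Ansible vault rather than sitting in plain text.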

References

Installing PowerShell Core on Linux -- MS

Getting Started With Ansible And VMware

For a little while now I've been playing around with Ansible and exploring its VMware modules. While using Ansible with the VMware modules is not overly complex, I quickly realised there were very few examples out on the web for the VMware administrator. So I thought I would put together a very simple crash course on getting started with Ansible and VMware.

The intention here is not to explain how Ansible works. There's a lot of information out on the web around that, plus I'm still learning too. Instead I just wanted to put together something relatively simple: show how to quickly get Ansible installed on a Linux box with the required VMware SDK, then create an Ansible playbook to build a basic environment in vCenter. This will involve a new Datacenter, a Cluster, and a Resource Pool.

So let’s get started.

Installing Ansible

Firstly let's install Ansible. Ubuntu and CentOS are common distros so I cover them both below. With Ubuntu I also add the Ansible repository. While I don't believe it's strictly required, it seems to be what most people do.

Ubuntu

sudo apt-add-repository ppa:ansible/ansible

sudo apt-get install ansible

CentOS

sudo yum install ansible

Once installed we can run a simple verification check to see if the install was successful.

ansible -m ping localhost

localhost | SUCCESS => {
"changed": false,
"failed": false,
"ping": "pong"
}

Installing pyVmomi

Now we install pyVmomi.  This is VMware’s Python SDK for managing vCenter and ESXi and is required to use the VMware modules that come with Ansible.

sudo pip install pyvmomi

And that’s all that we really need to install to build our first playbook and run it against vCenter.  To run our playbook we’re going to need to create a few folders and files.  The structure will look something similar to below.

├── ansible-vmware
│   ├── group_vars
│   │   └── all.yml
│   └── vmware_create_infra.yml

Let's create a folder called ansible-vmware and a variables folder called group_vars below it.

mkdir ansible-vmware
mkdir ansible-vmware/group_vars

Now even though this is a crash course in running our first VMware playbook, I want to at least do things half right and not have any plaintext passwords. So before I go too far into creating the YAML files I want to create an encrypted string of our vCenter's administrator SSO password. I do that with the following line.

ansible-vault encrypt_string {admin_sso_password} --ask-vault-pass

You'll be asked for an Ansible vault password and then receive back an encrypted string. The vault password will be used when we run our play (don't forget it). Copy the output and put it aside for a minute. We're going to paste it into the group_vars file that we're about to create.

Let’s now create that variables file inside the group_vars folder and call it all.yml

touch ansible-vmware/group_vars/all.yml

Using vi or nano or whatever you prefer, edit the all.yml file and add in all the variables we will use in our playbook. Again, crash course, so don't worry too much about what each one does right this minute. Just know that we have to reference these values multiple times in our playbook, and having a variables YAML file really helps with that.

For the vcenter_password variable use the encrypted string we created in the step above and paste it in so it looks similar to below.  Obviously feel free to change any of the values, datacenter, cluster, etc.

---
datacenter: ansible_dc1
cluster: ansible_cluster1
resource_pool: ansible_resource1
datastore: datastore01
vcenter_ip: 192.168.0.100
vcenter_username: administrator
vcenter_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          24242245545455516332373965613662616531653266326362643533613932356530663263326663
          65363339653337333478977865425424245245824524824858666463373838323330666633363763
          65323436643563333334527873245674247868727672789689787867867878643130616261336262
          3462323161633933320a653030333478567825725727855427887878787886666624313862663462
          8775

Now we create our main playbook. This is going to contain all our plays and reference all the variables we just created in group_vars/all.yml

touch ansible-vmware/vmware_create_infra.yml

Like we did with the variables file, let's edit this file. Again, vi, nano, whatever. Copy and paste the information below. One thing to note: YAML files don't like tabs, so spaces only, and indentation is very important.

- hosts: localhost
  connection: local
  tasks:
    - name: include vars
      include_vars:
        dir: group_vars

    - name: Create Datacenter in vCenter
      local_action:
        module: vmware_datacenter
        datacenter_name: "{{ datacenter }}"
        hostname: "{{ vcenter_ip}}"
        username: "{{ vcenter_username}}"
        password: "{{ vcenter_password}}"
        validate_certs: False
        state: present

    - name: Create Cluster in datacenter
      local_action:
        module: vmware_cluster
        hostname: "{{ vcenter_ip}}"
        username: "{{ vcenter_username}}"
        password: "{{ vcenter_password}}"
        validate_certs: False
        state: present
        datacenter_name: "{{ datacenter }}"
        cluster_name: "{{ cluster }}"
        enable_ha: yes
        enable_drs: yes

    - name: Create Resource pool in cluster
      vmware_resource_pool:
        hostname: "{{ vcenter_ip }}"
        username: "{{ vcenter_username}}"
        password: "{{ vcenter_password}}"
        validate_certs: False
        state: present
        datacenter: "{{ datacenter }}"
        cluster: "{{ cluster }}"
        resource_pool: "{{ resource_pool }}"

Assuming you created the files correctly and have the right password we are ready to run our first Ansible playbook against vCenter.

ansible-playbook ansible-vmware/vmware_create_infra.yml --ask-vault-pass

This should produce something similar to the below.

PLAY [localhost] ***************************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************
ok: [localhost]

TASK [include vars] ************************************************************************************************************************
ok: [localhost]

TASK [Create Datacenter in vCenter] ********************************************************************************************************
changed: [localhost -> localhost]

TASK [Create Cluster in datacenter] ********************************************************************************************************
changed: [localhost -> localhost]

TASK [Create Resource pool in cluster] *****************************************************************************************************
changed: [localhost]

PLAY RECAP *********************************************************************************************************************************
localhost : ok=5 changed=3 unreachable=0 failed=0

The resulting output in the vSphere Client should look similar to below.

The cool part is we can run the same command again and again and nothing will change as long as our environment is consistent with our defined YAML files. They, in essence, become our working as-built doco.

So the goal of what we've just done above was not to actually build an environment, but rather to show you how quickly and simply we can get Ansible up and running and configuring a vSphere environment. I've avoided a lot of the technical stuff so that instead you can think about how this might help you in your environment.

In future posts I might go into more details on specific modules and how to use them but for now I think I might just focus on what’s possible with Ansible and VMware.

References

Ansible VMware Getting Started

pyVmomi GitHub Page

 

5 Getting Started PowerShell Core Tips

Now that PowerShell Core 6 has gone GA, it’s a great time to start learning this new version.  So I thought I would put together 5 quick tips and tricks that I’ve used to make using PowerShell Core a little easier for myself while making the transition from Windows PowerShell.

While the intention of this post is not to go into the differences between Windows PowerShell and PowerShell Core, and despite the slightly mixed or vague messaging we're getting from Microsoft on the future of PowerShell, I think it's safe to say that PowerShell Core is not a replacement or upgrade for Windows PowerShell, but it is the future of PowerShell. As more and more modules start supporting PowerShell Core, the argument to switch over will become easier.

So while people continue to argue over whether they should make the switch to PowerShell Core or not, here are 5 tips and tricks I have used with PowerShell Core in the meantime. It's also worth noting that all of these tips are Windows specific.

Tip 1. Modify your default profile.

I wrote a post a little while back on how to modify your default profile in Windows PowerShell here. I recommend taking a look at it and bringing a bit of yourself to PowerShell. Much of what I discuss in that post is still valid for PowerShell Core. The only real difference is your profile path in PowerShell Core.

You can easily locate your profile in PowerShell Core by typing $Profile at the command prompt. Unless you've already modified it, there's a good chance that the location path doesn't actually exist on your PC.

In Windows 10 the path is C:\Users\{username}\Documents\PowerShell\Microsoft.PowerShell_profile.ps1. If it doesn't exist, go ahead and create the folder PowerShell in your Documents folder. Then create a file called profile.ps1 and modify it to your heart's content. All the steps are laid out in the previous post I mentioned on modifying your Windows PowerShell profile. Without any specific Core modifications, I took my already modified profile.ps1 from Windows PowerShell and copied it to this location.

Original PowerShell Core console on the left with my modified profile.ps1 profile on the right.
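If you're starting from scratch, here's a minimal sketch of what a profile.ps1 could contain. The window title, starting folder, and prompt are just examples, so change them to suit yourself.

# Give Core consoles an obvious window title
$Host.UI.RawUI.WindowTitle = "PowerShell Core $($PSVersionTable.PSVersion)"

# Start in a preferred working folder (example path)
Set-Location C:\Code

# A simple custom prompt showing the current folder
function prompt {
    "PS Core $(Get-Location)> "
}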

Tip 2. Modify VS Code to use PowerShell Core Integrated Terminal

While I still do like to use the Windows PowerShell ISE with ISE Steroids from time to time, I've for the most part switched to VS Code for my day to day use. If you haven't yet made the switch I highly recommend downloading it and giving it a try. It's fast, stable, and uses relatively little memory compared to the PowerShell ISE.

Currently, as of VS Code 1.22, you can't run multiple different shells in the Integrated Terminal. But there's still a few things we can do here. We can change the default Integrated Terminal shell to PowerShell Core.

Open VS Code and navigate to File / Preferences / Settings. On the left you have the Default Settings and on the right you have your User Settings, which override the Default Settings.

Enter in the following between the two { }

// PowerShell Core
"terminal.integrated.shell.windows": "C:\\Program Files\\PowerShell\\6.0.0\\pwsh.exe",

Now when you open up an Integrated Terminal it should default to PowerShell Core (pwsh.exe)
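A quick way to confirm the new terminal really is PowerShell Core is to check the edition and version from within it:

$PSVersionTable.PSEdition   # reports 'Core' for PowerShell Core
$PSVersionTable.PSVersion   # should report 6.x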

You can download VS Code from https://code.visualstudio.com/

Tip 3. Modify Console Window properties

This one kind of carries on from Tip 1. It's more visual customisation of how a terminal session looks and is very much personal preference. Right click on the top left of the console window and pull up the terminal properties. I drop the font size down to a non-standard size of 13, which fixes what I feel is an overly elongated terminal. I increase the Layout window size to 130 x 40 and hard code a different background colour. Finally I increase the Command History buffer size.

Tip 4. Use Chocolatey to Install and Upgrade PowerShell Core.

Chocolatey is a Windows Package Manager similar to apt-get or yum in the Linux world.  What I love about Chocolatey is that it handles installing and configuring all dependencies you require when installing an application.

Installing Chocolatey is as simple as running a few commands on one line in PowerShell. Installing or upgrading PowerShell Core is just as simple once Chocolatey is installed.

The below command will install Chocolatey from a PowerShell prompt.

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

This next command will install PowerShell Core and any dependencies it needs.  Replace install with upgrade if you have a previous version of PowerShell Core installed.

choco install powershell-core -y

You can download Chocolatey from their website at https://chocolatey.org/  and instructions to install PowerShell Core can be found at https://chocolatey.org/packages/powershell-core

Tip 5. Find some modules

PowerShell Core by default uses a different module path to load modules, meaning that all your Windows PowerShell modules won't just import and run. This doesn't mean that they won't work. PowerShell Core has been designed to be as backwards compatible as possible. While modules like Active Directory won't work correctly, many still do.

By installing the WindowsPSModulePath module from the PowerShell Gallery, you can use Windows PowerShell modules by appending the Windows PowerShell PSModulePath to your PowerShell Core PSModulePath.

First, install the WindowsPSModulePath module from the PowerShell Gallery

Install-Module WindowsPSModulePath -Force

Then run the Add-WindowsPSModulePath cmdlet to add the Windows PowerShell PSModulePath to PowerShell Core:

Add-WindowsPSModulePath

This will now allow you to start using your current Windows PowerShell modules in Core (in your current session). This is not a guarantee that they will work though, so test thoroughly.
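To confirm the Windows PowerShell locations were actually appended in your session, a quick look at the module path will show them:

# List each folder PowerShell Core will now search for modules
$env:PSModulePath -split ';'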

If you’re after specifically supported PowerShell Core modules you can search for the tag PSEdition_Core in the PowerShell Gallery.

 

I hope some of the above quick tips help you get started with PowerShell Core and make the transition a little easier.

Change vRealize Network Insight consoleuser Password

The last few posts I've written have revolved around vRealize Network Insight. In those posts I mention using the consoleuser account and, in my case, using the default password for this account, which for the record is ark1nc0ns0l3 (shhh, don't tell anyone). Clearly not good practice, so I thought I would briefly cover the process to change the password for the account.

The steps involved in changing the password are fairly straightforward, and as the consoleuser account can log in via SSH out of the box, it's recommended that the password be changed.

The below process is applicable to both the Platform and Proxy appliances.

1. Open a console window or SSH to the Platform and/or Proxy appliance (depending on what you deployed). You can use the consoleuser account here with its default password of ark1nc0ns0l3

2. Type in the following command

modify-password system --user consoleuser

3. Type in a new password, hit enter, and verify that password.

…and that’s it!

As I mentioned above, this account can be used with SSH and is enabled out of the box, so it's recommended to change the password once vRNI is deployed.

 

Configure An HTTP Internet Proxy In vRealize Network Insight

Well, we're up to vRealize Network Insight 3.7 and there's still no GUI way of setting an internet proxy. Usually you would configure an HTTP internet proxy via the VAMI, but that also doesn't exist, for reasons I don't quite understand. Nevertheless, the process to configure an HTTP internet proxy can be performed via the CLI easily enough. Reasons why we might want to do this, apart from the obvious one of gaining internet access where none exists without a proxy, are so we can check for software updates and connect to support.

The command we use is set-web-proxy

(cli) set-web-proxy -h
usage: set-web-proxy [-h] {show,enable,disable} ...

set the web http proxy (for Internet access)

positional arguments:
  {show,enable,disable}
    show                show current configured http proxy state
    enable              enable http proxy
    disable             disable http proxy

optional arguments:
  -h, --help            show this help message and exit

1. First thing we do is connect up to an interactive console session or SSH into our vRNI boxes with the consoleuser account.  The default password for consoleuser if you haven’t changed it is ark1nc0ns0l3

2. Type in set-web-proxy show

You will see something similar to below.

(cli) set-web-proxy show
Http proxy connection disabled

3. Next we set our HTTP proxy using set-web-proxy enable. This will stop and start a few services but won't cause any disruption to the running of vRNI. Below is an example with a proxy address and port.

(cli) set-web-proxy enable --ip-fqdn vrni-platform.mydomain.local --port 3128
Stopping services
* Stopping DNS forwarder and DHCP server dnsmasq [ OK ]
nginx stop/waiting
launcher-service stop/waiting
* Starting DNS forwarder and DHCP server dnsmasq [ OK ]
Enabling http proxy connections…
Http proxy connection enabled
Connected to http proxy vrni-platform.mydomain.local:3128
* Stopping DNS forwarder and DHCP server dnsmasq [ OK ]
Starting services
* Starting DNS forwarder and DHCP server dnsmasq [ OK ]
nginx start/running, process 5337
launcher-service start/running, process 5415
(cli)

4. We run set-web-proxy show again.

(cli) set-web-proxy show
Http proxy connection enabled
Connected to http proxy vrni-platform.mydomain.local:3128

5. Finally we can run show-connectivity-status

A bunch of network information will be returned along with connectivity status of a few URLs.


Upgrade connectivity status (svc.ni.vmware.com:443): Passed
Support connectivity status (support2.ni.vmware.com:443): Disabled
Registration connectivity status (reg.ni.vmware.com:443): Passed

Web Proxy connectivity status: Passed

Over in the settings page of vRNI you should now see some green icons indicating Upgrade Server Reachable.

References

https://docs.vmware.com/en/VMware-vRealize-Network-Insight/3.7/com.vmware.vrni.cli.doc/GUID-5BD84F61-4612-4330-B6D5-8D51DBAD3C25.html

Offline Upgrade vRealize Network Insight

Earlier this week VMware released the latest update to vRealize Network Insight, version 3.7. If you jumped on this new update as I did, you might have been caught out by a bad build (3.7.0.1518427076). Upgrading to this build introduced a DNS issue that caused a communication problem between the Platform appliance and the Proxy appliance. The build was quickly pulled and replaced a day later with a new and working build, 3.7.0.1519211678. It's unlikely that you will have the old build, but before upgrading it's best to check.

vRNI can be updated in two ways: an online upgrade via the GUI and an offline upgrade via the CLI. There are a few reasons why you might want to perform an offline upgrade. Cluster upgrades can only be performed via an offline upgrade. Your vRNI appliance might not have internet access. Or, like me, you have configured a proxy server on your vRNI appliances but, because vRNI wants to make your life difficult, it doesn't detect new updates.

The Offline upgrade can only be performed on version 3.5 or 3.6 and is very similar to previous upgrades.

1. Download the ZIP bundle from VMware.

2. Snapshot both your Platform and Proxy appliances or live life like a cowboy.

3. Copy (SCP) the ZIP bundle to both appliances (Platform & Proxy).

I had difficulties using WinSCP due to the limited console access given by the appliance, so I used pscp.exe, which comes with the PuTTY package. The location you can copy the bundle to can also be a bit of a challenge. I chose /home/consoleuser/downloads/ using the consoleuser account.

Below is the command I ran from a PowerShell prompt from my Windows box.

PS C:\Program Files (x86)\PuTTY> .\pscp.exe -scp E:\temp\VMware-vRealize-Network-Insight.3.7.0.1519211678.upgrade.bundle consoleuser@{platform_ip}:~/home/consoleuser/downloads/

4. SSH over to the Platform appliance, which has to be upgraded first, with the user account consoleuser. The default password for consoleuser in vRNI is ark1nc0ns0l3

5. Run the package-installer command to upgrade the appliance.

Below is an example of the command I ran.

package-installer upgrade --name /home/consoleuser/downloads/VMware-vRealize-Network-Insight.3.7.0.1519211678.upgrade.bundle

The upgrade process can take a while so be patient.  A successful upgrade should look similar to below.

login as: consoleuser
consoleuser@{platform_ip}'s password:
vRealize Network Insight Command Line Interface
(cli) package-installer upgrade --name /home/consoleuser/downloads/VMware-vRealize-Network-Insight.3.7.0.1519211678.upgrade.bundle
Do you want to continue with upgrade? (y/n) y
It will take some time…
Successfully upgraded
(cli)

6. SSH over to the Proxy appliance now with the same user account consoleuser.

7. Run the same command as on the Platform appliance.

package-installer upgrade --name /home/consoleuser/downloads/VMware-vRealize-Network-Insight.3.7.0.1519211678.upgrade.bundle

As with the Platform upgrade it will take some time, and the output after the upgrade should look the same.

8. (Optional) Run show-version and confirm you are running the latest build on each appliance.

That's all there is to it. Stopping and starting services isn't necessary, nor is there any need to reboot the appliances.

You can now open up a web browser and log in to your upgraded vRNI Platform appliance. Check that everything looks good over in settings.

 References

https://docs.vmware.com/en/VMware-vRealize-Network-Insight/3.7/com.vmware.vrni.install.doc/GUID-2DCC214C-6EEE-43FE-B420-E1083E53C58F.html

Melbourne VMUG 2017 Recap

Last week the Melbourne VMware User Group held its final vBeers event of the year, closing out another massive year for the group. It's been a lot of hard work to bring the group to the end of the year, but it's been hugely rewarding to be part of. Throughout the year I met a lot of people being introduced to VMUG for the first time, received some amazingly positive feedback from both new and old faces in the group, and of course made some awesome new friends. All of which really makes the hard work worthwhile.

As always, Melbourne VMUG kicked off its year with our annual UserCon in March, an all-day free conference run by the community for the community. For an event held on the other side of the world to many, it's amazing to see our UserCon really start to make a name for itself. This couldn't have been better demonstrated than by the calibre of international guests eager to come out and support us this year: from our opening keynote speakers Duncan Epping and Amy Lewis, to our closing keynote speakers William Lam and Alan Renouf, not to mention Emad Younis, Josh Atwell, and Rebecca Fitzhugh. An amazing group of people whom I have a new level of respect for.

Following on from our UserCon we held three quarterly meetings throughout the year. A few notable sessions that really stood out for me were The Register's Simon Sharwood and VMware APJ CTO Bruce Davie. Some very insightful sessions that really made you think about our industry. New for this year, we introduced a closing survey at all our meetings. The responses received from these surveys go into shaping subsequent meetings. It's been great to see the community support this initiative and be more involved in shaping how future VMUGs run in Melbourne. A big thank you to everyone that participated in these surveys!

Back in November a few of us on the committee travelled up to Sydney to help represent not just Melbourne but VMUG Australia at vForum Australia. I've already spoken about vForum in a previous post so there's not much more I can add here. It was a great experience to be behind the booth promoting VMUG to a whole new audience from around the country. I now have a new level of respect for the hard work vendors and their people put in behind the booth at these kinds of events.

Carrying on from previous years, we continued to support and run separate vBeers events in between our UserCon and quarterly meetings. It's been great to see Melbourne vBeers really come into its own this year. The more intimate setting allowed for some great conversations that aren't always possible at regular VMUG meetings. As with our quarterly meetings, many new faces have started becoming regulars, and support from vendors has increased, allowing us to do more sponsored nights and drinks. I look forward to meeting many new and old faces over vBeers next year.

It's hard to convey in a short post everything that Melbourne VMUG has achieved this year. Hopefully some of these numbers will help speak to that: 1 UserCon, 3 quarterly meetings, 5 vBeers nights, 10 community presentations, 12 VMware presentations, and 6 vendor presentations. Some amazing numbers that set a high benchmark for next year. I'm extremely appreciative of all those that presented and helped to give back to the community this year.

As always, a huge thank you to the committee and supporters of Melbourne VMUG over 2017. In particular Tyson Then, fellow co-leader. I could not ask for a nicer guy to be a co-leader with. Along with fellow committee members Andrew Duancey, Brett Johnson, Damien Calvert, and recently departed Craig Waters. As well as some of our regular VMware liaisons Mo Jamal, Kev Gorman, and Chris Garrett. The list does go on, and I'm sure I've missed many; just know your time and effort has been greatly appreciated 🙂

I hope everyone has a great Christmas and an awesome New Year.  Attention now turns towards our next UserCon in March 2018.  Hope to see you all then!

vForum Australia 2017 Recap

Another year and another vForum has come and gone. This has really become a standout event for me on the local calendar. For the Australian region this is the closest we can get to VMworld without actually going to VMworld. This year vForum returned to the Sydney Convention Centre, which has recently been rebuilt. Unlike previous years, the event moved from two days down to one.

My frame of reference for vForum is fairly small as this was only my second vForum Australia (actually there was a vForum roadshow in Melbourne a few years back too). Without a doubt the biggest improvement made was the location. Bringing vForum to the centre of Sydney on Darling Harbour was a big win. Hotels are plentiful and the views are amazing.

I arrived in Sydney from Melbourne the day before vForum. My manager from Brisbane was down in Sydney on unrelated work, so this was a good opportunity to catch up in person for a few drinks on George St (I can't believe construction on the light rail is still happening on George St, which I also recall from last year's vForum).

While I had every intention of going, unfortunately I didn't make VMDownUnderground this year, an event organised and run by the Sydney VMUG crew the night before vForum. Last year's VMDownUnderground was a great event, but this time I used the opportunity to have dinner with fellow work colleagues on Darling Harbour. Being Melbourne based and having most of my team in Sydney, I don't get that opportunity often.

This year I was not only representing myself and my organisation but also VMUG as the Melbourne leader. With the help of VMUG HQ and the vForum event planners, the local VMUG Australia chapters pooled our time and resources to run a booth. There were around 40 vendors on the showroom floor this year. VMware and the event planners did a great job with the vendor layout, with all locations being great. We, VMUG, were lucky enough to secure a prime location across from VMware in the centre of the showroom, right next to the VMware charity water challenge.

My day started off at 7 AM helping to set up and prepare the VMUG booth. The official start of vForum Australia was the keynote at 9 AM, with VMware COO Sanjay Poonen opening. The attendance for the keynote was huge. The entire keynote hall was almost completely full, with a real buzz to it. The keynote sessions ran till just after 11 AM, at which point a large proportion of keynote attendees left the event (or possibly went to the side events). Though that didn't detract from the atmosphere during the remainder of vForum.

Foot traffic around and to the VMUG booth was nothing short of amazing this year.  Having a Claw Machine full of plush toys at our booth I’m sure didn’t hurt either.  This was a huge success in drawing attendees to our booth.  Not only attendees but vendors and VMware staff were lining up for a game.  One of our original goals, as VMUG Australia, was to promote the upcoming Sydney and Melbourne UserCons but we quickly switched to brand awareness for VMUG.  I was amazed to find out so many people still hadn’t heard of VMUG!

vForum Australia ended with the after party at the Hard Rock Cafe, right next to the convention centre. A great opportunity to wind down with friends and finally grab some food and drinks. Compared to last year's vForum party, with Rogue Traders playing (whom I'm a big fan of), Hard Rock was a slightly more subdued affair. It did lead to a more intimate setting where you could have more meaningful conversations with people, so in that regard a success.

I still had a little bit left in me after Hard Rock.  So before calling it a night I headed back to my hotel to drop off my swag and have a shower before heading out for a few drinks and cocktails with some vForum friends at Palmer and Co.  A small underground bar set in a 1920s speakeasy style.

I would have liked to see vForum remain a two-day event, particularly with the addition of the Transform Security and Empower Digital Workspace events running at the same time. Whatever the format though, VMware and vForum always put on a great event for attendees. I'm already looking forward to next year and catching up with and meeting new people in the community.

HaveIBeenPwned PowerShell Module

If you haven't heard of Have I Been Pwned, firstly, what are you doing? It's a site created by fellow Aussie Troy Hunt. Troy aggregates data breaches as they become public into a searchable database. One of the primary goals of Have I Been Pwned is to raise security awareness of data breaches among the public.

As a bit of a learning exercise for myself, I created a PowerShell module that leverages the haveibeenpwned.com APIs. The module contains five functions: Get-PwnedAccount, Get-PwnedBreach, Get-PwnedDataClass, Get-PwnedPassword, and Get-PwnedPasteAccount. I like to think of the HaveIBeenPwned PowerShell module as an enabler. By itself it does nothing more than what the haveibeenpwned.com site does, but by leveraging the power of PowerShell and returning the results in object format, the data can be easily manipulated for many other purposes.

Installing and using the module and its functions is very simple. Ideally you will be running PowerShell 5 or above, which will allow you to easily download and install it from the PowerShell Gallery. If you're not on PowerShell 5, I'd highly recommend you download WMF 5.1 (Windows Management Framework), which includes PowerShell 5.

Installing the module is simply a matter of typing the following.

PS F:\Code> Install-Module -Name HaveIBeenPwned

Once installed you can view all the Functions available with the following command.

PS F:\Code> Get-Command -Module haveibeenpwned 

CommandType     Name                                               Version    Source                                                                               
-----------     ----                                               -------    ------                                                                               
Function        Get-PwnedAccount                                   1.1        HaveIBeenPwned                                                                       
Function        Get-PwnedBreach                                    1.1        HaveIBeenPwned                                                                       
Function        Get-PwnedDataClass                                 1.1        HaveIBeenPwned                                                                       
Function        Get-PwnedPassword                                  1.1        HaveIBeenPwned                                                                       
Function        Get-PwnedPasteAccount                              1.1        HaveIBeenPwned      

The two main Functions are Get-PwnedAccount and Get-PwnedPassword.

The first, Get-PwnedAccount, will enumerate whether an account, based on an email address, has been found in the Have I Been Pwned list of data breaches.

PS F:\Code> Get-PwnedAccount -EmailAddress {email_address}

In the above example all breaches are listed where an account used the given email address. For the address I tested, the list was huge, by the way.
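Because the results come back as objects, they can be piped straight into other cmdlets. A tiny sketch, using Title, which is the same property the larger example later in this post relies on:

PS F:\Code> Get-PwnedAccount -EmailAddress {email_address} | Select-Object -ExpandProperty Title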

The second, and slightly more controversial, Get-PwnedPassword, will take a password and confirm whether it has been identified in a data breach. Get-PwnedPassword will accept a password in three different formats: plain text, SecureString, and SHA1 hash.

PS F:\Code> Get-PwnedPassword -SHA1 AB87D24BDC7452E55738DEB5F868E1F16DEA5ACE

In the above example a SHA1 hash was generated offline using Quick Hash GUI. Get-PwnedPassword will then send that password or SHA1 hash in the body of an HTTPS request to Have I Been Pwned. Now, obviously, what can be seen as the controversial part of this is that not only do you have to trust Have I Been Pwned but also this PowerShell function.
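If you'd rather generate the hash in PowerShell itself instead of an external tool, a small sketch like the below works. The only module parameter assumed here is -SHA1, which is shown above.

# Hash the password locally so the plain text never leaves the machine
$password = 'Password123'
$sha1 = [System.Security.Cryptography.SHA1]::Create()
$bytes = [System.Text.Encoding]::UTF8.GetBytes($password)
$hash = ($sha1.ComputeHash($bytes) | ForEach-Object { $_.ToString('X2') }) -join ''
Get-PwnedPassword -SHA1 $hash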

All functions come with help and examples, which can be viewed using Get-Help. For example:

PS F:\Code> Get-Help Get-PwnedPassword -Examples

The module and all functions can be found in the PowerShell Gallery for download. The module can also be found in my public GitHub project https://github.com/originaluko/haveibeenpwned. All code can be viewed and sanity checked and is free to consume.

 

Lastly, I thought I might show how you can go one step further than simply enumerating an individual account. Many organisations' IT departments create and manage accounts for their staff. They also provide security awareness training on protecting online accounts. An organisation could take a CSV list of its staff's email addresses, import that list into PowerShell, and run it against the Get-PwnedAccount function to identify whether any of its staff have been involved in a data breach.

In the below example I import a small CSV file I created with a list of email addresses. Then, using half a dozen lines of code, I iterate through the CSV list of email addresses and identify all the accounts that have been involved in a data breach. Using this information I can proactively notify staff to review these accounts.

# Import the list of email addresses from CSV (expects an 'accounts' column)
$emails = Import-Csv F:\email_list.csv
foreach ($email in $emails) {
    $email = $email.accounts
    # Query Have I Been Pwned for this address
    $results = Get-PwnedAccount -EmailAddress $email
    if ($results.status -ne 'Good') {
        # One result object is returned per breach the address was found in
        foreach ($result in $results) {
            $breach = $result.title
            Write-Output "Email address $email has been found in a $breach breach"
        }
    }
    # Pause briefly between requests to stay under the API rate limit
    Start-Sleep -Milliseconds 1500
}

And sample output after running the above code.

Email address {email_1} has been found in a Yahoo breach
Email address {email_2} has been found in a Youku breach
Email address {email_3} has been found in a Zomato breach
Email address {email_4} has been found in a 000webhost breach
Email address {email_5} has been found in a 17 breach
Email address {email_6} has been found in a Adobe breach
Email address {email_7} has been found in a Bell (2017 breach) breach

Download Links
PowerShellGallery: https://www.powershellgallery.com/packages/HaveIBeenPwned/
GitHub: https://github.com/originaluko/haveibeenpwned

Recap: VCP-NV Certification (2V0-642)

Earlier this week I took and passed the VCP-NV (2V0-642) exam.  I do have to say it was a really good experience.  It’s one of the few exams I really did enjoy studying for and sitting.  So I thought I might use this as an opportunity to post a short recap of my experience and what I used to study and pass the exam.

Getting some of the technicalities out of the way, all of which can be found on VMware's VCP-NV landing page: the 2V0-642 exam is VMware's updated version 2 of the original VCP-NV exam, which officially came out back in 2015. Back then it was 120 questions and by all accounts much harder than this new revised version. The revised exam, based on NSX 6.2, is 2 hours long and 77 questions, with a standard passing score of 300 out of 500. If you currently hold a VCP, the process to certification is fairly straightforward: take and pass the 2V0-642 exam and earn the certification. If you don't hold a VCP, you have a number of prerequisites to meet. Again, all of these can be found on the VCP-NV landing page.

So first, how was the exam? As I mentioned above, a really good experience. Gone are the days of having to take a pre-exam survey. Just acknowledge the terms and conditions and the exam begins immediately. Awesome. The questions were well laid out, clear, and descriptive enough to understand. Of course it wouldn't be a real exam without one or two confusing questions, and there were a few of them, but only a few. The exam questions are all weighted, so at the end of the day it is a level playing field for everyone.

So what was my process for studying for this exam?

I guess firstly, I've attended many presentations and watched a number of high-level videos on NSX, but nothing really deep on the product, nothing really exam helpful. A few months back (the week before VMworld) I attended the 5-day Install, Configure, Manage course on NSX 6.2. This was a great course and a good primer for learning to use NSX, and very helpful for grasping the fundamentals needed to get started. Well recommended for anyone getting started.

Next came actually using the product in a real lab environment.  I think this is a requirement!  Bare minimum you should be using VMware’s Hands on Labs but even better is to have your own environment.  I’m lucky enough to be preparing for a production deployment and had a test lab to deploy and play with.  Having your own environment constantly available is hard to beat.

vBrownBag YouTube videos! There is a VCP-NV series available on YouTube. The videos are based on the original VCP-NV exam and are a few years old but still very relevant. Actually, still extremely relevant. There are eight videos to hunt around for, which cover the original objectives with the exception of Troubleshooting. The objectives match up very closely. The 2V0-642 exam has one main new objective, which covers Cross-vCenter.

In terms of reading material, I would highly recommend going through the official NSX online docs. Lots of mindless reading, but you will find that exam questions come straight out of them, and truthfully you will learn a huge amount doing it. Just remember to focus on version 6.2. I'd also recommend the Cross-vCenter NSX Installation Guide PDF from VMware. The same content is in the online docs, but I found the PDF easier to consume and hugely informative, and the exam tested heavily on this area for me. So I was very thankful to have focused on this reading.

And that was basically it.  Practice hands on what you have learnt and read.  Troubleshoot in your lab as you are building it out.  A few solid study days on the weekend and you should be in a really good position to take and pass the exam.