Melbourne UserCon 2017 Wrap-Up

This year’s UserCon was a very special one for me.  It was the fifth of Melbourne’s six UserCons that I’ve attended, but it was special because it was my first as Melbourne VMUG Leader.  Co-Leader actually, along with good friend Tyson Then, who was also taking part as a Leader for the first time.

It’s also the year that Melbourne VMUG founder, Craig Waters, graciously decided to step down as leader and pass the baton on to, as he referred to us on stage, ‘Fresh Meat’.  Standing next to Craig on the stage was Andrew Dauncey.  Andrew, having recently accepted a role with VMware, also used the opportunity to officially step aside as co-leader.  Both Craig and Andrew have been integral parts of the Melbourne VMUG team.  While they are both stepping aside as Leaders, they have each pledged their continued support to the User Group and will remain involved in the steering committee.  While I can’t speak for Tyson, I think I can safely say we are both honoured and humbled to be filling their shoes and to have the support of two, now former, great leaders of the group.

Getting back on track though.  Last week Melbourne held its sixth annual UserCon.  For the second year running we held the event at Melbourne’s Crown Promenade, a great venue located in Southbank in the heart of Melbourne.  The line-up of speakers this year was nothing short of spectacular!  Following the VMUG Committee welcome from myself and Tyson, Duncan Epping opened as the first Keynote speaker of the day.  Duncan gave a great session on his baby, vSAN.  Right after Duncan, Amy Lewis continued the opening Keynote with a panel discussion.  The Panel comprised three VMUG committee members, Tyson Then, Craig Waters, and Justin Warren, with Amy Lewis chairing.  The Panel session focused on career and building your brand and image.  Basically what Amy does best!

Throughout the day we had the regular goodness you’ve come to expect from a UserCon, which included sessions from VMware and our Sponsors.  One of those sessions even included Emad Younis, Sr Tech Marketing Engineer from the VMware vCenter Team.  Where we Aussies like to differentiate and do things a little differently at UserCons is in supporting the community as much as possible.  We ran five community sessions throughout the day.  We had a huge submission response from the community to take part, which made it difficult to pick only five.  But as we have in the past, we, the committee, picked based solely on the most appealing Session Title and Abstract and not on the name of the speaker (which was obscured).  The final cut ended up being two internationals, Josh Atwell and Rebecca Fitzhugh, along with locals Grant Orchard, Claire O’Dwyer, and Arron Stebbing.

Alastair Cooke, over in New Zealand, was invited, and happily accepted, to once again represent the vBrownBag community along with Brett Johnson.  All community submissions that missed the cut were offered a short TechTalk session; these ran throughout the day.

The day ended with two final sessions.  A Celebrity SuperStar Panel session chaired by fellow VMware local Greg Mulholland and panelled by Duncan Epping, Amy Lewis, Emad Younis, Alan Renouf, and William Lam.  The final closing Keynote of the day was by the awesome duo Alan Renouf and William Lam, showing us some of their recent work on creating an SDDC lab with nothing more than a few scripts and a USB stick.  This was the standout session in my eyes, and clearly in many other attendees’ eyes too, as seen by the vast majority who chose to stay for this last session of the day.

Towards the end of Alan and William’s session, drinks and food were brought out to the attendees.  Duncan even personally came on stage to serve Alan and William some local beer while we waited for their SDDC to build (no pressure guys).

While I might be biased, as VMUG Leader, this was by far the best UserCon I have ever been part of.  We say it a lot but our community really is awesome.  I met attendees from all over Australia and even from New Zealand who came to be part of this event.  Everyone I spoke to was just amazingly supportive and I thank you all.  It makes all this hard work worthwhile.  To all our sponsors, particularly our Platinum sponsors Veeam and Zerto, a big thank you because without you we could never put on an event like this.  I’d also like to thank the Sydney VMUG team for their hard work during the coordination of our two UserCons.  Lastly I can’t end this post without a HUGE thank you to all the international guests who made the long trek from across the sea to be with us.

I look forward to seeing all of you, especially those I didn’t get an opportunity to meet on the day, at our future #vBeers and Quarterly meeting events and of course our next UserCon.

 

Streaming Datasets – PowerShell | PowerCLI | Power BI

A large part of my day is spent scripting in PowerShell, specifically with PowerCLI.  One of the strongest areas of PowerCLI, obviously, is retrieving information.  In my opinion, its ability to retrieve information for capacity planning and reporting is one of the key use cases for PowerCLI in a VMware environment.

Recently I’ve been looking at how to consume all that information.  You can obviously export it to a CSV, push it into a database, or, as I’ve been playing around with recently, stream it into Power BI.  If you haven’t tried it out yet, Power BI is an analytics service from Microsoft.  At its core it’s a data warehouse for business intelligence.  But putting all those fancy words aside, I use it to create fancy reports.

Exporting information out of a vCenter environment with PowerCLI is dead simple.  I have dozens of scheduled tasks running all the time doing this.  Where I’ve fallen down is in taking that information and trending it over time.  This is where the Streaming Datasets functionality of Power BI comes in.  Using PowerCLI I can get an object and value from vCenter, POST it directly into Power BI using their API, and have it instantly graphed in a report.  I can then share that report out to anyone I want.  Power BI lets me do this over and over, almost as fast as I can pull the information out of vCenter.

In the example below I show how to create a trend report over time that displays the Total and Available Storage of a vCenter Cluster.  Rather simple, I know, but it can easily be adapted to show things like the number of running VMs, reserved resources used, etc.  The sky’s the limit really.
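For example, swapping the storage query for a powered-on VM count is a one-liner.  A rough sketch below, using the same Cluster1 placeholder as the script further down:

# Count powered-on VMs in the cluster (sketch only; adjust the cluster name to suit)
$RunningVMs = (Get-Cluster -Name Cluster1 | Get-VM | Where-Object {$_.PowerState -eq 'PoweredOn'}).Count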

Before we do any scripting, the first thing we do is log into Power BI.  If you don’t have an account, don’t worry, the basic version is free.  Hit the Sign Up link and make sure you select Power BI and not Power BI Desktop for Windows; we want the cloud version.

Once logged in we click on Streaming Datasets in the bottom left under the Datasets category.  This is where we create our initial dataset schema so that it can accept streaming input.  We click on ‘Add streaming dataset’ in the top right.

Then select the source of data, which will be API and click next.

We give our New Streaming Dataset a name and define a few values.  In this example we will define a Date, Total Storage, and Available Storage value, turn on Historic Data Analysis and click Create.  Make note of your data type to the right of the value.  Date is DateTime and the other two are Numbers.

We’ve now created our schema and are provided with a Push URL address and sample code in a few different formats (we want PowerShell).  If you look carefully you’ll see we’re using Invoke-RestMethod to POST to Power BI.  This sample code has just provided us with the template and the hardest part of our PowerShell / PowerCLI script.  Click on the code and copy / paste it out to use in our script (paste it at the bottom of the script, as it will be the last thing that runs).

Now we actually start on the PowerShell / PowerCLI script.  To keep it as concise as possible, I’ve skipped the process I use to actually connect to the vCenter and retrieve the information with PowerCLI in the code below.  The real goal here is just to retrieve some values and get them into Power BI.  Line 6 is basically retrieving all shared VMFS datastores in Cluster1.  The important lines to note, though, are 4, 8, and 9, where I store my key values in three variables: one for Date, one for TotalStorage, and one for AvailableStorage.

Import-Module VMware.VimAutomation.Core
Connect-VIServer -Server host.mydomain.local

$date = Get-Date

$datastore = Get-Cluster -Name Cluster1 | Get-Datastore | Where-Object {$_.Type -eq 'VMFS' -and $_.Extensiondata.Summary.MultipleHostAccess}

$TotalStorage = ($datastore | Measure-Object -Property CapacityMB -Sum).Sum / 1024
$AvailableStorage = ($datastore | Measure-Object -Property FreeSpaceMB -Sum).Sum / 1024 

The additional lines below, from 11 onward, are the important code.  This is our pasted sample code from Power BI, which we slightly modify to push our values up to Power BI.  Don’t copy mine, as your URL and key will be different.  On lines 13, 14, and 15 we remove the example values and replace them with our three variables: $Date, $TotalStorage, and $AvailableStorage.

Import-Module VMware.VimAutomation.Core
Connect-VIServer -Server 10.1.1.201 -user "mydomain\username"

$date = Get-Date

$datastore = Get-Cluster -Name Cluster1 | Get-Datastore | Where-Object {$_.Type -eq 'VMFS' -and $_.Extensiondata.Summary.MultipleHostAccess}

$TotalStorage = ($datastore | Measure-Object -Property CapacityMB -Sum).Sum / 1024
$AvailableStorage = ($datastore | Measure-Object -Property FreeSpaceMB -Sum).Sum / 1024 

$endpoint = "https://api.powerbi.com/beta/83fe1fa2-fa52-4376-b7f0-cb645a5fcfced/datasets/d57970bc-60b3-46e6-b23b-d782431a72be/rows?key=2zEhgN9mu%2BEH%2FI2Cbk9hd2Kw4b5c84YaO6W8gzFcZbBnO6rti3N631Gjw%2FveNXSBxwR84VcWPGOSrheNwQnCbw%3D%3D"
$payload = @{
"Date" = $Date
"Total Storage" = $TotalStorage
"Available Storage" = $AvailableStorage
}
Invoke-RestMethod -Method Post -Uri "$endpoint" -Body (ConvertTo-Json @($payload))

Disconnect-VIServer * -Confirm:$false

On the last line I disconnect from my vCenter and close any sessions.  This helps if running as a scheduled task.  Finally, save the script.

And that’s it for the scripting part.  Assuming everything is correct (no connection issues, correct values being retrieved), all we have to do is run the script and it will send a POST request using Invoke-RestMethod with our three values.  We can now run this script as many times as we want and it will continue to post the current date and time along with Total Storage and Available Storage.  At this point, if we wish, we can turn the script into a scheduled task (sketched below) or just continue to run it manually to suit our needs.
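If you do go down the scheduled task path, something like the sketch below, run from an elevated PowerShell prompt on the Windows machine holding the script, is one way to register it.  The script path, task name, and schedule are all placeholders to swap for your own.

# Register a daily task that runs the streaming script (sketch; path, name, and time are examples)
$action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File C:\Scripts\StreamClusterStorage.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName 'Power BI Cluster Storage' -Action $action -Trigger $trigger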

We now go back to Power BI and report on what we have.  Back in our Streaming Datasets browser window we click the Create Report icon under Actions.  This part is going to be very subjective to the end user who wants the report, but the key data we want is under RealTimeData on the far right.  Select all three values and we get presented with a default view of our data.  Under Visualizations select Line chart and we start to see a more visual representation of our capacity over time.  Under the Analytics section add a trend line to see a basic view of available capacity over time.  Finally, hit Save and you have a self-updating report built from streaming data.

For the report to start looking anything like the one below, it will take time and a few sample data points.  In the image below I’ve mocked up some numbers over time as an example.

Once you have a working script and it’s streaming data to Power BI, it’s really up to you how you report on it.  The above example, as simple as it is, lays the groundwork for more customized and complex reporting that you might not be able to get out of traditional monitoring and reporting software.  You even have the ability to share the report out.

Streaming Datasets, as you might have noticed in the URL, is still in beta.  As great as I have found it to be, it does have some quirks.  For one, you can’t easily modify data you have already streamed up to Power BI.  So if you send incorrect data or values up to Power BI in a streaming dataset, it will remain there, at which point you’ll have to consider filters to exclude it from reports.

In summary I think Power BI is a very underrated free tool from Microsoft.  I’ve only just started to scratch the surface of what’s possible with it.  The simplicity of capturing data with PowerShell and sending it to Power BI is well worth the time and effort to try at least once.  So what are you waiting for?

Modify HTML5 vSphere Client Idle Timeout

Before I go any further, just to make it clear, we’re talking about the new HTML5 client in vSphere 6.5 (GA Build 4602587), not the older Flash-based vSphere Web Client in vCenter 5 and 6.  So let’s call it the vSphere vCenter HTML5 UI Web Client.  Clear now?  Ok, just refer to the pic below.

Below are the steps I used on the vCenter Server Appliance.

Just like with the old Web Client, I know of no way to change the idle timeout from within the UI today.  So we have to resort to connecting to the console and making the changes through the shell.  We do this by opening up a console window to the VM, or using SSH, and logging in with root (remember to enable SSH first).

At the Command prompt of the VCSA, type the following to enable Shell access.  You may receive a Shell is disabled message.  If you do, enable it with shell.set.

Command> shell
Shell is disabled.
Command> shell.set --enabled true
Command> shell
vc01:~ #

Now at the Shell type the following below and locate session.timeout.

cat /etc/vmware/vsphere-ui/webclient.properties

You should find something similar to session.timeout = 120 as this is the default value in minutes.
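If you’d rather not scroll through the whole file, a quick grep will pull out just that line:

grep 'session.timeout' /etc/vmware/vsphere-ui/webclient.properties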

Make a backup copy of webclient.properties.

cp /etc/vmware/vsphere-ui/webclient.properties /etc/vmware/vsphere-ui/webclient.properties.bak

If you’re comfortable using an editor like vi, go ahead and use that to increase or decrease the value in minutes.  Probably for the best, it doesn’t appear that you can set this value to never time out; I tried 0 and -1 and both caused the vSphere Client to time out instantly on login.  The timeout value, though, can be quickly and easily modified using the sed command.

The sed command below locates the specific string session.timeout = 120 and replaces it with session.timeout = 720, which is 12 hours (or in other words my standard work day).  Change 720 to however many idle minutes you want.  If sed doesn’t find the specific string, don’t worry, it won’t modify anything.

sed -i "s/session.timeout = 120/session.timeout = 720/g" /etc/vmware/vsphere-ui/webclient.properties

Run the cat command again and check that the session.timeout value has changed.

cat /etc/vmware/vsphere-ui/webclient.properties

If the session.timeout value has been modified correctly we now have to stop and restart the vsphere-ui service by running the following commands below.  I covered stopping and starting all services on a VCSA in a previous post HERE.

service-control --stop vsphere-ui
service-control --start vsphere-ui

Wait a few minutes for the service to start up fully, then open a new browser window to the vSphere Client.  It should now be running with the new idle timeout.

 

vCenter In VR (Is This VCSA 7?)

The last few months have been extremely fun for me.  I purchased an HTC Vive and have been enjoying every minute with it.  I’m not a huge gamer but I absolutely love the immersion factor.  I’ve lost count of the times I’ve got lost in games like Onward for hours on end.  The realism and social aspect of coordinating with your teammates on how to take the objective.  The absolute fear of crouching behind a wall while the enemy next to you discusses where you are.  An experience that’s hard to convey.

Games aside though, VR also has the ability to mirror your desktop and applications too.  Nothing like Minority Report or that awesomely realistic movie Hackers.  But think more a VR room with a computer screen in front of you that you can enlarge or shrink to suit your view.

So that got me thinking.  There are a few different VR apps that let you mirror your desktop in VR.  I decided to try out Bigscreen, mainly because it’s free!  And hell, because this is a virtualization blog, I obviously had to try out the vSphere Client to see if I could practically manage my vCenter Homelab environment.

It took a few attempts to find the best viewing mode and way to manage vCenter with the vSphere Client.  I first tried the large projector view on the wall in the VR room.  This turned out to be an absolute joke.  Imagine the worst, lowest-quality projector, and then try reading small text from the other side of a room.  Then think of something worse.  Okay… it wasn’t that bad, but still.


Failing to use vCenter in the large projector mode view

The best mode I found was literally just sitting down in a chair, switching to the floating screen mode and enlarging the screen to encompass my field of view, then placing a small curve on the screen so it wrapped a little around me.


Something’s red in my vCenter environment

I first tried managing vCenter with the HTC Vive controllers.  The controllers basically act as laser pointers.  You can pull up a virtual keyboard and laser-zap the keys with the controllers, as well as move the laser pointer around on the screen like a mouse cursor.  Using projector mode this was okay, but up close it was really awkward.  Ultimately, using the physical mouse and keyboard was most practical.  And it was practical.  As long as you can position your hands in the right spot and touch type, there were no issues.  You just have to adjust to what feels like a 100-inch screen in your face.

Bigscreen has what they call Multiplayer rooms.  This is where you can create or join a room where people can share your screen experience.  I did jump into some of these rooms where movies were playing and had a little chat with the other guests.  I wasn’t game enough to create a room and share my vCenter screen though.  I just felt that the VR community wouldn’t have the same appreciation for my vSphere Homelab environment 😛


Jumping into someone’s VR cinema room

You can imagine how this multi-user room experience could be interesting though.  Inviting a friend / service desk into your private VR room to help you out with an issue in your environment.  Actually being able to point at the screen and talk through resolving an issue.  Waving your hands in frustration when the service desk can’t fix your issue.  It reminds me of the book Ready Player One and its dystopian future where lives are lived out in a VR world and virtual chat rooms.

So alright, all of this was a big gimmick.  An excuse to talk about my HTC Vive and somehow justify it on my virtualization blog with vCenter.  It was fun, though I’m not holding my breath for vCenter 7 VR.  But maybe a fling 🙂

 

Melbourne VMUG 2016 – The Year That Was

So before I head back to work tomorrow to wrap up my year, I thought it would be a good opportunity to reflect back on the year that was with the Melbourne VMware User Group.  It was a big year for me with the Melbourne VMUG.  After years of just turning up to events I finally became a member of the committee team.  It’s been an awesome experience where I’ve met some great friends I might not have otherwise met.

Melbourne VMUG kicked off its year, as with previous years, with its annual User Con in February.  For the first time in five years we had a venue change, to the Crown Promenade.  It was a risky move but it paid off.  Hey, if VMworld can get away with having it in a casino, so can we.  Support from the community on the venue change was overwhelmingly positive.  With ~350 attendees it was one of our biggest User Cons to date.  We had some great international guests in Chris Wahl and Keith Townsend.  The day wrapped up with an after-drinks / vBeers party a short walk along the Yarra River at The Boatbuilders Yard.

We continued the year with three more quarterly meetings.  Each of them was held at the Telstra Convention Centre, with the venue sponsored by Telstra themselves.  Having Telstra provide the venue facilities has been an absolute coup for VMUG.  The facilities are located in the heart of the Melbourne CBD with easy access in and out for our community members.

The facilities provide us with two meeting rooms, allowing us to run two side-by-side tracks during the quarterlies.  This has been another one of those surprisingly successful moves.  By running two tracks we have been able to provide more content to our community than we normally would otherwise.  At the end of each of the quarterlies we held vBeers, paid for by the meeting’s sponsors, at Troika Bar, a small bar just across the road from the venue.

In between the User Con and the Quarterly meetings we also held separate vBeers events.  These were all held at Beer Deluxe at Federation Square in the Melbourne CBD.  Unlike the Quarterly meeting vBeers, these ones aren’t usually sponsored.  The setting for these vBeers has always been to provide a smaller, more intimate environment to network with peers.

By using leftover sponsor funds from the year, Melbourne VMUG was able to sponsor the final vBeers of the year at Beer Deluxe.  This turned out to be one of the bigger vBeers MVMUG has held for some time.  It was also well supported by VMware, with a number of their local SEs coming out to show support.  We even managed to get a few Sydneysiders to come out and show them how it’s done in Melbourne.

The Melbourne VMUG committee also got out and helped sponsor VMUG at the Synology 2017 Conference at the Melbourne Convention Centre a few months back.  This was an invite request from Synology.  We pulled out the banners and spruiked VMUG with flyers, pens, and t-shirts.  A great experience promoting our user group to a slightly different demographic of small business and storage enthusiasts.

We, the Melbourne VMUG committee, now switch to 2017 User Con planning with VMUG HQ.  We’ve already had a few meetings and things are looking really good so far.  The same venue has been booked at the Crown Promenade for the 23rd of March.  We’ve secured two keynote guests, which I think I can now safely say will be Duncan Epping and Amy Lewis, and we’re working towards a few more international guests to make this our best User Con to date.

Finally, a big shout out to the Melbourne VMUG committee this year.  The leaders, Craig Waters, Andrew Dauncey, and Tyson Then, have been the rock for MVMUG throughout 2016.  They have also been great for me to lean on throughout the year.  Also not forgetting Justin Warren, Damien Calvert, and fellow 2016 committee newcomer Brett Johnson.  Not to mention VMware liaisons Ramon Valery, who has now moved over to Nimble Storage, and his replacements Mo Jamal and Kev Gorman.  It’s been a massive year and I look forward to working with you all next year.

Hope you all have a great New Year and look forward to seeing you at our User Con in 2017!

Sydney vForum 2016

For those of us not lucky enough to attend VMworld (yep, me), the smaller vForum has to be the next best thing, particularly for those of us in the ANZ region of the world.  vForum is seen as almost a mini VMworld in Oz, spread out over two days and drawing somewhere around three to four thousand people throughout the event.  I must have been scanned about 100 times walking into the main pavilion, so hopefully that gets taken into account 🙂  Having barely recovered from an intense three days at PAX AUS the weekend before, I was still psyched and ready to go.

Day 0 – VMDownUnderGround (Tuesday)

My Tuesday before vForum started with a Work From Home half day.  I was able to put in a solid morning of work before heading to Melbourne Airport.  One of the benefits of where I live is the short 15-minute drive to the airport.  Boarding my flight, I literally bumped into Chew from VMware while trying to fight my way to my seat (Sorry again Chew).

We landed in Sydney at 3:30 PM, disembarked, and I followed the signs to the domestic terminal train station.  I purchased an Opal card and boarded a train that took me to Central Station.  This was my first Sydney Airport to City train trip and I must say I was really impressed with what Sydney have done. I can’t believe Melbourne haven’t done the same yet!

I checked in at the Cambridge Hotel which was a short walk from Central.  I took a few minutes to rest the feet then made my way into the city.  I still had a few hours before VMDownUnderGround at 6 PM, so I took a little stroll up to Circular Quay.

VMDownUnderGround, organised by Sydney VMUG and sponsored by Veeam, was held at King Street Brewhouse, a microbrew pub overlooking Darling Harbour.  The turnout was a little smaller than I expected, but still a great group of people, from Queensland to Tassie to New Zealand.  I had the opportunity to meet a number of VMware staff from the Sydney office and finally met in person some of the Brisbane and Sydney VMUG guys.  There were Brett and Alastair representing vBrownBag, plus many more.  I could have chatted all night with everyone, but the final few of us called it a night around 10:30 PM in preparation for vForum the next day.

Day 1 – vForum Techday (Wednesday)

My day began with a call from the boss!  He had taken the train to Central Station and swung past my hotel so we could walk down together to The Royal Hall of Industries @ Moore Park.  On entry, I instantly regretted bringing my backpack, as VMware provided one to All Access Pass guests.  It being the Techday, I spent much of the day focusing on going to sessions.  NSX, DevOps, Containers, just to name a few.  While there were many people I wanted to catch up with, I decided to leave that till Thursday’s General Access day.  Between sessions I ran into a few fellow Optus co-workers, and we decided to focus our efforts together on visiting vendors and, of course, collecting awesome swag.  Moving between vendor stalls I found myself constantly bumping into people I knew.

That evening I caught up with a few more fellow work colleagues for dinner.  I particularly wanted to catch up with a recently departed team mate.  We made our way into the CBD and found a nice little Thai restaurant just off George Street, where I succumbed to peer pressure and ordered way toooo spicy food.

Day 2 – vForum General Access (Thursday)

Once again my day started with meeting up with my boss outside my hotel and walking down to Moore Park.  This time, even before walking into the hall, I ran into many more Sydney co-workers, many of whom I was meeting in person for the first time.

I only had two sessions that I really wanted to attend on Thursday.  The Keynote at 10 AM with Pat Gelsinger and the Technical Keynote at 1:30 PM with Kit Colbert.  Outside those two keynote sessions I spent the day visiting the remaining vendors I had not spoken to yet and catching up with fellow colleagues and friends.   As well as heading over and saying hello to the vBrownBag and the VMUG guys.

A fellow teammate introduced me to former work colleague and friend Frank Yoo, now working at Rubrik.  While at the Rubrik stand, I entered their raffle draw.  Now if you know me, you know that I’m one of the unluckiest people when it comes to competitions.  So it was a complete surprise punch in the face when I won the coffee maker prize.  Thanks heaps, Frank and Rubrik.  The Rubrik branding on the actual coffee maker was a nice touch!


The day ended with the vForum After Party featuring the band Rogue Traders.  I’ve been a huge fan of them for years so I was pretty excited to have them here playing.  The band played in the main pavilion where the keynotes were held.  But before we were allowed in they herded us into the small foyer for 45 minutes or so.  Presumably they needed more time to set up either the band or the food and drinks in the pavilion.  So while it was a little uncomfortably cramped to begin with, once the doors opened and we got inside all was forgiven.


Before calling it a night and making the solo trip back to my hotel I had one last catch-up with Ryan McBride from the Sydney VMUG crew.  Ryan’s an awesomely funny guy who I’m looking forward to catching up with next week back in Melbourne.

Day 3 – The Day after vForum Summary

While many people flew out and went back to work for Friday, I decided to mix it up a little and spend a day in Sydney.  I couldn’t come to Sydney and not spend at least a day doing all the touristy things.


I had an awesome time during vForum.  VMware have, as always, put on an excellent event.  VMDownUnderGround was also a great opener to vForum.  While I would have preferred more deep-dive sessions, I did manage to take away a little from each session I went to, which I see as a success.  And yes, I’m constantly told not to focus on sessions but rather use the time building networking connections.  But I felt that there was room to achieve both during vForum, and I think I did.

PowerCLI Core

When Microsoft and Jeffrey Snover released PowerShell on Linux a few months back we knew PowerCLI running on Linux wasn’t too far away.  Well, an awesome demo from Alan Renouf running PowerCLI in a Docker container was probably a giveaway 🙂

Well, since then we’ve been patiently waiting, and hearing rumors of a Fling, for the release.  Earlier this week VMware finally released that Fling.  And they haven’t disappointed.  VMware have provided a number of different methods to run PowerCLI Core: OS X, Linux, and Docker.  Skimming through the Instructions PDF on the Flings site, by far the easiest method has to be the Docker image from Docker Hub (assuming you already have Docker installed).

I decided to try out this docker image and was pleasantly surprised at how easy it was.  Boy, I miss the old days of Linux where I had to compile and install everything, then troubleshoot, and repeat.  Using an Ubuntu 14.04 build it’s as simple as running two commands.

First pull down the docker image from Docker Hub.

docker pull vmware/powerclicore

Then run the container!

docker run --rm -it --entrypoint='/usr/bin/powershell' vmware/powerclicore

And that’s really it, kind of.  There is one more command you’ll have to run to actually connect to a vCenter or ESXi host.  Without it you’ll receive an Invalid Certificate error which will prevent you from connecting.

Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false

After that you should be able to connect as normal to a vCenter.
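From here it’s the usual PowerCLI workflow.  A minimal sketch below; the vCenter address and account are placeholders, so swap in your own:

# Connect to a vCenter and list VMs (sketch; server and user are examples)
Connect-VIServer -Server vcenter.mydomain.local -User administrator@vsphere.local
Get-VM | Select-Object Name, PowerState, NumCpu, MemoryGB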


As the doco states, not all the modules are available yet, currently just the Core and VDS cmdlets.  A quick check shows we have 315 cmdlets available to us, which, to be honest, is a huge amount.

PS /powershell> (get-command -Module powercli*).count
315

I haven’t done too much with it yet, though I have already noticed a few odd issues and errors.  It’s hard to say whether they’re PowerCLI Core related or PowerShell related.  One notable issue is that piping a PowerCLI cmdlet multiple times on the command line would intermittently fail.

The important thing to note here is that this is a Fling, which, as I’ve mentioned before, is unsupported and comes with no guarantees.  Not only that, but it’s built upon an Alpha build of PowerShell 6.  Put the two together and, sure, you’ll probably get unexpected results sometimes.

Nevertheless, this is another great testament to VMware’s commitment to PowerCLI and PowerShell.  I’m excited to see PowerShell and PowerCLI continue to develop and mature on Linux and open the door to a whole new slew of developers.

References

PowerCLI Core Fling
VMware PowerCLI Blog Announcement

SCP to a vCenter Server Appliance (VCSA)

For some this may be a rare situation, but from time to time I find that I need to copy files to and from a vCenter Server Appliance (VCSA).  I had one of these situations recently on vCenter 6, where I needed to move some log files off a VCSA box.

I’ve found the easiest way to do this is via SCP (Secure Copy), which uses the SSH protocol.  It’s a relatively simple, two-step process to enable the VCSA to accept SCP connections: first enable SSH on the VCSA, then switch the default shell.

Step 1, enabling SSH

I’ve written a previous post on how to enable SSH on a VCSA here.  Since that post, VMware have re-released the VAMI in vCenter Server Appliance 6 U2, so I thought I might show this new method to enable SSH.  This only applies if you’re using VCSA 6 U2 or greater; otherwise, use the steps in my previous post.

Connect to the VAMI URL of your vCenter on port 5480 using HTTPS.  In my case it was https://vc.ukoticland.local:5480/login.html


Login with your VCSA root account and password.  Then navigate to Access and click Edit on the far right.  Select Enable ssh login and, to make life a little easier, also Enable bash shell, then click OK.  The timeout refers to how long the Bash shell will stay enabled; the default is fine.


Step 2, changing the default shell

Even though we enabled the bash shell above the default shell is still the VMware appliance shell which prevents us from connecting to the VCSA via SCP.  So we need to SSH to the VCSA and change the default Shell from the Appliance Shell to Bash.

In my case I used PuTTY, logged in with my root account, and typed shell.


Now I can change the default shell for the root user to bash using the command below.

chsh -s /bin/bash root


We’re now ready to SCP to our VCSA, with the ability to transfer files to and from it.  I use the simple Windows app WinSCP.  I change the File Protocol to SCP, enter my vCenter as the host, and use my root credentials.
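If you’re working from a Linux or Mac machine instead, plain command-line scp does the same job once the shell has been switched.  The log path below is just an example:

scp root@vc.ukoticland.local:/var/log/vmware/vpxd/vpxd.log .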


When you’re done, just reverse the changes you made.  In the PuTTY SSH session, type the command below to switch the default shell back from Bash to the Appliance Shell.  Then log back into the VAMI as above and, under Access, deselect SSH and Bash.

chsh -s /bin/appliancesh root

References

Toggling the vCenter Server Appliance 6.x default shell (2100508)

PowerShell on Linux

The big news out of Microsoft last month making headlines is the open sourcing of PowerShell.  Along with this comes the ability to now run PowerShell not just in Windows but also Linux and Mac OS X.  For people close to the PowerShell community this wasn’t unexpected, but make no mistake this is huge news.

I’m really liking this new Microsoft.  They are really embracing this open source stuff.  On first thought it’s not obvious how Microsoft will make money with PowerShell going open source.  But Microsoft isn’t stupid; this is no doubt part of a larger master plan.  With PowerShell so tightly linked to their products, they are opening the door to a whole new demographic of users.  I can see PowerShell going open source being a key to getting a new mix of Linux developers working in Azure.  Something close to my heart: VMware have also announced plans to port PowerCLI over to work with PowerShell on Linux.  As a PowerCLI tragic myself, I’ve seen first-hand how frustrated Mac users have been that they can’t manage their VMware infrastructure using PowerShell / PowerCLI directly from a Mac.

Microsoft have made it clear this is very early stages of an Alpha release on GitHub.  They are looking for community help to further develop and refine using PowerShell on Linux.  There’s a large number of bug fixes, growing by the day, that they need to work through before we get anywhere close to a production release.

I decided to try it out myself and I’m impressed; the future looks awesome.  Apart from Windows, the open source version is currently limited to Ubuntu 14.04/16.04, CentOS 7, and Mac OS X 10.11.

I had an Ubuntu 14.04 Linux VM that I used for testing.  The first thing to do is download the appropriate package over at GitHub. https://github.com/PowerShell/PowerShell

Once downloaded, and depending on what OS you’re running, you may need to install a few additional libraries first.  In my case it was libunwind8 and libicu52, installed using apt-get, after which I was able to install the PowerShell Debian package.

mukotic@ubuntu:~/Downloads$ sudo apt-get install libunwind8 libicu52
mukotic@ubuntu:~/Downloads$ sudo dpkg -i powershell_6.0.0-alpha.9-1ubuntu1.14.04.1_amd64.deb

Believe it or not, that’s all that is required.  Whatever your shell of choice is, just type ‘powershell’.

mukotic@ubuntu:~/Downloads$ powershell
PowerShell 
Copyright (C) 2016 Microsoft Corporation. All rights reserved.

PS /home/mukotic/Downloads> 

So what can we do?  Well, it’s still early days.  The first thing I did was just check the version.  I can see we’re running the .NET Core edition of PowerShell, the same edition that comes with Nano Server.

PS /home/mukotic/Downloads> $psversiontable 

Name Value 
---- ----- 
PSVersion 6.0.0-alpha 
PSEdition Core 
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...} 
BuildVersion 3.0.0.0 
GitCommitId v6.0.0-alpha.9 
CLRVersion 
WSManStackVersion 3.0 
PSRemotingProtocolVersion 2.3 
SerializationVersion 1.1.0.1

Looking at what’s available to us it’s still limited to a handful of modules.

PS /home/mukotic/Downloads> Get-Module -ListAvailable 


 Directory: /opt/microsoft/powershell/6.0.0-alpha.9/Modules


ModuleType Version Name ExportedCommands 
---------- ------- ---- ---------------- 
Manifest 1.0.1.0 Microsoft.PowerShell.Archive {Compress-Archive, Expand-Archive} 
Manifest 3.0.0.0 Microsoft.PowerShell.Host {Start-Transcript, Stop-Transcript} 
Manifest 3.1.0.0 Microsoft.PowerShell.Management {Add-Content, Clear-Content, Clear-ItemProperty, Join-Path...} 
Manifest 3.0.0.0 Microsoft.PowerShell.Security {Get-Credential, Get-ExecutionPolicy, Set-ExecutionPolicy, ConvertFrom-SecureString...
Manifest 3.1.0.0 Microsoft.PowerShell.Utility {Format-List, Format-Custom, Format-Table, Format-Wide...} 
Binary 1.0.0.1 PackageManagement {Find-Package, Get-Package, Get-PackageProvider, Get-PackageSource...} 
Script 3.3.9 Pester {Describe, Context, It, Should...} 
Script 1.0.0.1 PowerShellGet {Install-Module, Find-Module, Save-Module, Update-Module...} 
Script 0.0 PSDesiredStateConfiguration {StrongConnect, IsHiddenResource, Write-MetaConfigFile, Get-InnerMostErrorRecord...} 
Script 1.2 PSReadLine {Get-PSReadlineKeyHandler, Set-PSReadlineKeyHandler, Remove-PSReadlineKeyHandler, G...

So those traditional Windows cmdlets will now work against the local Linux box.  Things like Get-Process will return the local running Linux processes.

PS /home/mukotic/Downloads> Get-Process


Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName 
------- ------ ----- ----- ------ -- -- ----------- 
 0 0 0 0 0.400 1331 549 accounts-daemon 
 0 0 0 0 0.350 1111 111 acpid 
 0 0 0 0 0.000 2248 205 at-spi-bus-laun 
 0 0 0 0 0.040 2264 205 at-spi2-registr 
 0 0 0 0 0.000 147 0 ata_sff
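And because cmdlets return objects rather than plain text, the usual pipeline tricks work as expected.  A quick sketch showing the top five CPU consumers:

PS /home/mukotic/Downloads> Get-Process | Sort-Object -Property CPU -Descending | Select-Object -First 5 -Property Name, Id, CPU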

Another thing that’s also worth checking out is Visual Studio Code.  This is another great open source project Microsoft has going.  If you’ve used the PowerShell ISE in Windows, think of a streamlined version of that, just more powerful thanks to extensions.  Head over to https://code.visualstudio.com/docs/setup/linux and download the package.

Installation was also super simple.

PS /home/mukotic/Downloads> sudo dpkg -i code_1.4.0-1470329130_amd64.deb

Then run it by typing ‘code’.

PS /home/mukotic/Downloads> code


I recommend getting the PowerShell extension right off the bat.  Click the Extensions icon on the left, search for PowerShell, and click Install.


Now we have all the wonders of IntelliSense that we are used to in the Windows PowerShell ISE.  I really see Visual Studio Code, while still in development, becoming a future replacement for the Windows PowerShell ISE, which has been quite stagnant in recent years.

So there you have it.  Jeffrey Snover, a Technical Fellow in the Microsoft Enterprise Cloud Group, has a great post and video discussing PowerShell going open source that is well worth checking out.

https://azure.microsoft.com/en-us/blog/powershell-is-open-sourced-and-is-available-on-linux/

The next thing I’m hanging out for is PowerCLI on Linux.  A demo of it running inside a Docker container is shown in the video at the link above.  Expect to soon see a VMware Fling release for us to try out.

Meetups, PowerShell, Expanding My Horizons


I’m not sure what it’s like in other major cities around the world, but currently Melbourne is going through an IT meetup boom.  On any given week you can find at least one, if not multiple, meetups going on somewhere in Melbourne.  A big change from years past, when we would have only a couple of major conferences a year to look forward to.  It’s really quite an exciting period for meetups that we’re going through.

So what is going on with all these meetups?  ‘Meetup’ being the new buzzword we’re seeing slowly replace the traditional ‘User Group’ we’re all probably used to.  I think it’s at least in part to do with the website meetup.com.  Sure, many of these User Groups existed well before meetup.com became a thing.  But to find them you had to be part of the right Facebook group, follow the right Twitter user, or just learn of them through word of mouth.  Before meetup.com, I lost count of how many User Group meetings I missed, only learning about them the next day.

We now have a common place we can visit to find all these User Groups and meetups.  Type in DevOps, PowerShell, or VMware and dozens of meetups pop up in your local area.  RSVP and see all the other local users who are also going.  Not sure what the meetup is about?  Post a quick question and receive an answer right back.  There’s an update to a meeting?  Receive an email notification immediately.  I see it as a symbiotic relationship between a globally accepted meetup site and the user group.  We at the Melbourne VMware User Group have even started using it in conjunction with the traditional VMUG website to extend our community base.


This is how I found out about the recent PowerShell meetup I attended in Melbourne.  With all the scripting I’ve recently been doing in PowerCLI and PowerShell, I wanted to expand my horizons a little further and find out how the wider PowerShell community works.  The group has only existed since the start of the year and this was their fourth meetup, held in the seek.com.au offices.  The setting for the meetup was very casual and devoid of any advertising or marketing.  That is, if you can overlook the Seek logos all over the office.  But considering the worst Seek can actually do is find me a new job, I’m more than happy to tolerate that 🙂  Of course there was the obligatory beer and pizza, which we all know elevates a good meetup to an awesome meetup.

I found the format and atmosphere of this PowerShell meetup very appealing.  Heavy on practical content & demos and light on PowerPoint slides.  The setting around a large boardroom table with beer and pizza in hand also led to a more comfortable environment to engage with the community and presenters.  The meetup tended to have a slant towards DevOps practices using PowerShell rather than PowerShell itself.  So less about how to connect to this server or use that cmdlet, and more around processes and integration.  I was also lucky enough to receive a copy of the book, Learn System Center Configuration Manager in a Month of Lunches, from its author James Bannan.

Due to the organiser’s work commitments, the PowerShell meetup was pushed out a day, which turned out to conflict with an Azure meetup on the same night.  With so many IT meetup groups currently listed and running in Melbourne, there’s bound to be a small culling, a kind of survival of the fittest.  Whether this PowerShell meetup group succeeds or not, only time will tell.  I certainly hope it does, and that it continues to find the DevOps-centric content it aims for.

Until the next meetup…

Melbourne VMUG Meetup Group