Too many problems, not enough time. – https://blog.ukotic.net

VMware Tanzu vs VMware Tanzu Kubernetes Grid – What’s the difference? https://blog.ukotic.net/2023/09/27/vmware-tanzu-vs-vmware-tanzu-kubernetes-grid-whats-the-difference/ Wed, 27 Sep 2023

VMware marketing sometimes doesn’t do itself any favours when naming and describing products. VMware Tanzu is a great example: with so many products carrying the word Tanzu in their name, I quite often find myself spending time with customers explaining the differences between VMware Tanzu and VMware Tanzu Kubernetes Grid.

I’ve created a short five-minute video (without the marketing speak) that briefly explains the difference between these two names.

Scripting With VMware SRM REST API And PowerShell https://blog.ukotic.net/2023/01/12/scripting-with-vmware-srm-rest-api-and-powershell/ Thu, 12 Jan 2023

In a previous blog post I wrote about Getting Started With Site Recovery Manager REST API. That post focused on using the VMware SRM REST API Explorer, which is a great way to get started with testing the SRM API. But to do anything practical with the API you will most likely want to use it from a script. In this post I go through the process of leveraging PowerShell to make SRM REST API calls.

The process of making SRM API calls in PowerShell is fairly standard: we use Invoke-RestMethod to make all our calls and retrieve data. What needs to be understood is the authentication process, which is greatly simplified when using the SRM API Explorer.

Below I explain the process to get started working with the SRM REST APIs. The environment I used consisted of two sites with vCenter 7 U3 (U2 is the minimum version required to use the REST APIs) and SRM 8.6 deployed. SRM was configured with a single site pair connection between the vCenters and two Protection Groups.

The steps to making SRM REST API calls in PowerShell are very similar to those in the SRM REST API Explorer. So we will break it down the same way and build a complete PowerShell script. It’s worth noting that I’m using PowerShell 7 and the -SkipCertificateCheck parameter available on Invoke-RestMethod in POSH 7.

Step 1. Authenticate to SRM and retrieve a Session ID

Below we create a simple PowerShell script that takes our username and password and passes them Base64 encoded in an Authorization header of our REST call. The username and password need to be separated by a colon (:), for example ‘bob:monkey123’.
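As a quick sanity check before wiring this into PowerShell, you can reproduce the same Base64 encoding from any shell (the ‘bob:monkey123’ credentials are just the example above, not real ones):

```shell
# Base64 encode a username:password pair, the same result the PowerShell
# [Convert]::ToBase64String call produces (example credentials only)
printf '%s' 'bob:monkey123' | base64
# → Ym9iOm1vbmtleTEyMw==
```

If the string your script sends doesn’t match what this produces for the same input, the encoding step is where to look first.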

$srmFQDN = "srm_fqdn"
$srmUserPass = "username:password"

# Base64 encode the username:password pair
$encodedString = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($srmUserPass))

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("content-type", "application/json")
$headers.Add("accept", "application/json")
$headers.Add("Authorization", "Basic " + $encodedString)

$URI = "https://$srmFQDN/api/rest/srm/v1/session"

$request = Invoke-RestMethod -Uri $URI -Headers $headers -Method POST -SkipCertificateCheck

$sessionID = $request.session_id

Write-Host $request

What is returned is a Session ID which we save to the $sessionID variable for later use.

Step 2. Use the session ID for subsequent calls

Next we need to take the Session ID and pass it in an additional header called “x-dr-session”.

$headers.Add("x-dr-session", $sessionID)

Step 3. Get Pairings

In this step we need to get our Site Pairing ID. We update our URI and retrieve the Pairing ID from the list of pairings.

$URI = "https://$srmFQDN/api/rest/srm/v1/pairings"
$request = Invoke-RestMethod -Uri $URI -Headers $headers -Method GET -SkipCertificateCheck
$sitePair = $request.list.pairing_id

We store our Site Pair ID in the $sitePair variable.

Step 4. Create a session to the remote Site Recovery Manager

This step has two parts. First we take our remote SRM username and password and Base64 encode them. Then we update our REST call URI to pass this to our SRM appliance. It’s important to note that we remove our original Authorization header and replace it with a new one using the remote SRM credentials, Base64 encoded.

$remoteUserPass = "cloudadmin@vmc.local:monkey123"
$encodedString = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($remoteUserPass))

# Replace the Authorization header with the remote SRM credentials
$headers.Remove("Authorization")
$headers.Add("Authorization", "Basic " + $encodedString)

$URI = "https://$srmFQDN/api/rest/srm/v1/pairings/$sitePair/remote-session"
$request = Invoke-RestMethod -Uri $URI -Headers $headers -Method POST -SkipCertificateCheck

This request does not return any data. All we should see is a 204 status, indicating a successful call.

Step 5. Begin making your calls

We’re now finally ready to start working with the SRM REST API. Let’s run a simple GET request to obtain a Recovery Plan ID and then run a Test Plan using a POST request. The POST request is a little more complicated as we also need to submit a payload with it.

### Get Recovery Plan
$URI = "https://$srmFQDN/api/rest/srm/v1/pairings/$sitePair/recovery-management/plans"
$request = Invoke-RestMethod -Uri $URI -Headers $headers -Method GET -SkipCertificateCheck
$recoveryPlan = $request.list.id

### Test Recovery Plan
$URI = "https://$srmFQDN/api/rest/srm/v1/pairings/$sitePair/recovery-management/plans/$recoveryPlan/actions/test"

$payload = @"
{
    "sync_data": false
}
"@

$request = Invoke-RestMethod -Uri $URI -Headers $headers -Body $payload -Method POST -SkipCertificateCheck
$request.status

If all is successful, what should come back is a RUNNING status for our Test Plan.

Putting it all together

Let’s finally put it all together to see what this might look like as one complete script.

### Step 1. Authenticate to SRM and retrieve a session ID
$srmFQDN = "srm_fqdn"
$srmUserPass = "username:password"

$encodedString = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($srmUserPass))

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("content-type", "application/json")
$headers.Add("accept", "application/json")
$headers.Add("Authorization", "Basic " + $encodedString)

$URI = "https://$srmFQDN/api/rest/srm/v1/session"
$request = Invoke-RestMethod -Uri $URI -Headers $headers -Method POST -SkipCertificateCheck
$sessionID = $request.session_id

### Step 2. Pass the session ID in all subsequent calls
$headers.Add("x-dr-session", $sessionID)

### Step 3. Get the site pairing ID
$URI = "https://$srmFQDN/api/rest/srm/v1/pairings"
$request = Invoke-RestMethod -Uri $URI -Headers $headers -Method GET -SkipCertificateCheck
$sitePair = $request.list.pairing_id

### Step 4. Create a session to the remote SRM using the remote credentials
$remoteUserPass = "cloudadmin@vmc.local:monkey123"
$encodedString = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($remoteUserPass))

$headers.Remove("Authorization")
$headers.Add("Authorization", "Basic " + $encodedString)

$URI = "https://$srmFQDN/api/rest/srm/v1/pairings/$sitePair/remote-session"
$request = Invoke-RestMethod -Uri $URI -Headers $headers -Method POST -SkipCertificateCheck

### Step 5. Get a Recovery Plan ID
$URI = "https://$srmFQDN/api/rest/srm/v1/pairings/$sitePair/recovery-management/plans"
$request = Invoke-RestMethod -Uri $URI -Headers $headers -Method GET -SkipCertificateCheck
$recoveryPlan = $request.list.id

### Test the Recovery Plan
$URI = "https://$srmFQDN/api/rest/srm/v1/pairings/$sitePair/recovery-management/plans/$recoveryPlan/actions/test"

$payload = @"
{
    "sync_data": false
}
"@

$request = Invoke-RestMethod -Uri $URI -Headers $headers -Body $payload -Method POST -SkipCertificateCheck
$request.status

And that’s basically the foundation for leveraging the Site Recovery Manager REST API with PowerShell. If you haven’t done so already, certainly go over my previous post, Getting Started With Site Recovery Manager REST API. It provides good information on identifying all the different GET and POST requests that can be made against the SRM API.

As I also mentioned in my previous post, this new REST API is a very welcome addition to the product. It opens the door to some interesting new opportunities to automate and orchestrate SRM.

Getting Started With Site Recovery Manager (SRM) REST API https://blog.ukotic.net/2022/12/20/getting-started-with-site-recovery-manager-srm-rest-api/ Tue, 20 Dec 2022

With the release of Site Recovery Manager (SRM) 8.6 comes the introduction of REST API support. This has been a long sought after feature for SRM that we now finally have. Previously, if you wanted to work programmatically with SRM you had to use the SOAP API or vRealize Orchestrator, both of which have a fairly steep learning curve. But now we have a more standardised way of working with SRM using REST.

Recently I decided to delve into these REST APIs to understand how they work and how they might be leveraged for my customers. I spent a little bit of time learning how to authenticate to SRM over the REST API, retrieve SRM configuration, and then configure SRM using these REST APIs.

I find many REST APIs have little nuances between them that you need to learn. But once you understand them everything else seems to fall into place fairly quickly. For me in this situation it was understanding the process of how to authenticate to SRM over REST.

Below I explain the process to get started working with the SRM REST APIs. The environment I used consisted of two sites with vCenter 7 U3 (U2 is the minimum version required to use the REST APIs) and SRM 8.6 deployed. SRM was configured with a single site pair connection between the vCenters and two Protection Groups.

I found the easiest way to get started was with the REST API Explorer in SRM. It’s accessible via the URL https://protected_vc/api/rest/#/home. I also leveraged the SRM documentation over at https://developer.vmware.com/apis/srm-rest-api/latest/. The developer documentation is similar to what you’ll find in the REST API Explorer, but I found it much easier to consume and understand, with nicer examples.

Once you browse to the REST API Explorer, the first thing you want to do is change the product from configure to srm. Configure refers to the VAMI APIs, whereas srm is the actual SRM product APIs. Working with the SRM REST APIs can be broken down into a few steps. Below I outline these steps based off the developer documentation.

Step 1. Authenticate to SRM and retrieve a Session ID

We achieve this by making a POST request using the Authentication API.

Navigate to the authentication category and select POST (/session) and Execute

This will prompt you for your Protected vCenter credentials.

If successful, you will receive a 200 status response. In the response body returned you will have a session_id.

Step 2. Use the session ID for subsequent calls

Copy the session_id and paste it in the Session ID field at the top of the REST API Explorer page.

Step 3. Get Pairings

One of the most fundamental parameters you will need when using the REST API is the Pairing ID. Navigate down to the pairing API category, select GET (/pairings), and Execute, which will get a list of all existing pairings.

Returned in the response will be a pairing_id.

Step 4. Create a session to the remote Site Recovery Manager

If your vCenters are leveraging Enhanced Linked Mode this step may not be required. This step, though, is the official process that should be used, so we will use it here. While still under the pairing API category, navigate to the POST (/pairings/{pairing_id}/remote-session) request. Take the pairing_id from the previous step, paste it into the pairing_id parameter, and Execute.

You will be prompted to authenticate once again. This time you provide the remote vCenter credentials (not the local protected one).

If successful, a 204 status is returned with No Content.

Step 5. Begin making your calls

We have now completed the authentication and setup phase and can begin making calls to the full set of SRM REST APIs.

There are a number of different types of calls we can make, but by far the easiest and safest are the GETs. GET calls won’t make any changes.

Let’s start with one of these easy ones for an example. Navigate down to the protection API category.

Select the first GET (/pairings/{pairing_id}/protection-management/groups), paste the pairing_id we previously obtained into the pairing_id parameter, and Execute.

Returned in the response will be all Protection Groups that exist.

If you’ve reached this point successfully, it’s a good indication that everything is working and you can continue to make further calls. Always make note of the required parameters in a call. More complex calls require additional parameters to be submitted along with the pairing_id.

Overall the new SRM REST APIs are a greatly welcomed addition. They are quite extensive in their capabilities and should open the door to a larger community when it comes to automating SRM. I’m looking forward to delving further into these APIs in the New Year.

References

Using the Site Recovery Manager REST API Gateway

VMware Site Recovery Manager Developer Documentation

Deploy Tanzu Community Edition (TCE) To Azure https://blog.ukotic.net/2022/07/05/deploy-tanzu-community-edition-tce-to-azure/ Tue, 05 Jul 2022

In this latest video I run through, from start to finish, the process of deploying a Tanzu Kubernetes Grid management cluster into Microsoft Azure. I walk through preparing Azure: capturing all the required information, accepting base image licenses, and having a public key ready for the TCE deployment. I then run through the TCE installer using the captured information from Azure and deploy a management cluster.

This video went a little longer than I hoped. But sometimes not everything is easy and needs to be explained. 😊

[Quick Fix] Tanzu Kubernetes Grid – Unable To Find Network https://blog.ukotic.net/2022/06/07/quick-fix-tanzu-kubernetes-grid-unable-to-find-network/ Tue, 07 Jun 2022

I recently came across an interesting issue at a customer site where Tanzu Kubernetes Grid (TKG) would fail to deploy or scale workload clusters. The customer had already successfully deployed a management cluster and a workload cluster, so they knew they had a working environment.

Running kubectl get machines would show machines in a Provisioning state.

$ kubectl get machines
NAME                                PROVIDERID                                       PHASE          VERSION
tkg-wrk1-control-plane-asxzc     vsphere://321c4eb3-5g23-5698-2qws-a33g34e45cd4   Running        v1.21.8+vmware.1
tkg-wrk1-md-0-45e67cdda3-8ag34                                                    Running        v1.21.8+vmware.1
tkg-wrk1-md-0-45e67cdda3-erfje                                                    Provisioning   v1.21.8+vmware.1
tkg-wrk1-md-0-45e67cdda3-ej3ie                                                    Provisioning   v1.21.8+vmware.1
tkg-wrk1-md-0-45e67cdda3-wie5t                                                    Provisioning   v1.21.8+vmware.1

Targeting a provisioning node and running kubectl describe machine tkg-wrk1-md-0-45e67cdda3-erfje would show an error message ‘unable to find network’ followed by ‘resolves to multiple networks’.

Message: error getting network specs for "infrastructure.cluster.x-k8s.io/v1alpha3, Kind=VSphereVM default/tkg-wrk1-md-0-45e67cdda3-erfje": unable to find network "/DC/network/nsxt-seg-tkg-wrk1": path '/DC/network/nsxt-seg-tkg-wrk1' resolves to multiple networks

A search on VMware’s knowledge base will bring up KB 83871. The KB references TKG 1.2 and 1.3 but it also applies to 1.4, which is the version being used here. The root cause is identified as being multiple Port Groups with the same name in vSphere.

In my customer’s case it was quickly identified that they had a number of new vSphere clusters, each with its own Virtual Distributed Switch (vDS), all being presented the same network segments out of NSX-T. As a result each vDS would report the same TKG networks with the same name / port group.

The KB article’s resolution is to leverage vSphere permissions by creating a specific role and service account for TKG. Then assigning that role to only the required distributed port groups. While this is a valid solution, it’s clearly a little awkward to perform.

A much cleaner solution happens to exist. I have to thank a fellow work colleague here, Frank Escaros-Büchsel, for providing me the solution. It involves placing each Virtual Distributed Switch into its own network folder in vSphere.

Now when you reference a vSphere network to use for a workload cluster you include the network folder in the network path. Existing clusters may continue to experience the issue as they won’t take the new network folder into account, but all future clusters should work fine.
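For example, if the vDS carrying the segment sits in a network folder called tkg-vds-a (the folder name here is purely illustrative), the workload cluster configuration would reference the full inventory path, something like:

```yaml
# VSPHERE_NETWORK now includes the network folder in the inventory path.
# The folder name tkg-vds-a is an assumption for illustration; the port
# group name comes from the error message earlier in this post.
VSPHERE_NETWORK: /DC/network/tkg-vds-a/nsxt-seg-tkg-wrk1
```

Because the folder makes the path unique, it no longer resolves to multiple networks.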

Configuring a Tanzu Community Edition Bootstrap Client https://blog.ukotic.net/2022/05/24/configuring-a-tanzu-community-edition-bootstrap-client/ Tue, 24 May 2022

I’m still finding my feet with creating YouTube videos. I am, though, enjoying sharing knowledge through the new medium. In this YouTube video I start with the basics of preparing a bootstrap client to install Tanzu Community Edition (TCE), VMware’s free and open source distribution of Kubernetes.

In the video I use a fresh install of Rocky Linux. I start with the prerequisites and install Docker CE. I then install Homebrew and use that to install the Tanzu Community Edition Installer. Please watch for a walkthrough on how I did it.

Configuring a Tanzu Community Edition bootstrap client

All the commands I used to configure the bootstrap client can be found below broken down into sections.

Install Docker CE

sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

sudo dnf install docker-ce --allowerasing -y

sudo systemctl start docker

sudo systemctl status docker

sudo groupadd docker

sudo usermod -aG docker $USER

newgrp docker 

docker --version

docker run hello-world

Install Docker Compose (Optional)

# If not installed
sudo dnf install -y curl

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

sudo chmod +x /usr/local/bin/docker-compose

docker-compose --version

Install Homebrew

sudo dnf groups mark install "Development Tools" -y

sudo dnf groupinstall "Development Tools" -y

sudo yum install git -y

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Extract following two commands from brew install output
echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> /home/mark/.bash_profile

eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"

Install Tanzu Community Edition

brew install vmware-tanzu/tanzu/tanzu-community-edition

# Extract command from TCE install output
/home/linuxbrew/.linuxbrew/Cellar/tanzu-community-edition/v0.12.1/libexec/configure-tce.sh

Install Kubectl

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

kubectl version --client

Run the Tanzu Community Edition Installer

tanzu management-cluster create --ui

Scaling In & Out A Tanzu Kubernetes Grid Cluster https://blog.ukotic.net/2022/05/03/scaling-in-out-a-tanzu-kubernetes-grid-cluster/ Tue, 03 May 2022

I thought I would try something a little different from my regular posts. I’ve taken the contents of this topic and moved it to a short YouTube video.

In the below video I briefly demonstrate how you can quickly and easily scale a Tanzu Kubernetes Grid (TKG) cluster. I cover the basics of using the tanzu cluster scale command, first adding a worker node to an existing cluster and then removing one.

Please let me know your thoughts on the video. I hope to start making more of these short videos covering Tanzu Kubernetes Grid over the coming months.

Quick Fix: Using Kubectl Patch On Windows PowerShell https://blog.ukotic.net/2022/01/27/quick-fix-using-kubectl-patch-on-windows-powershell/ Thu, 27 Jan 2022

When I first started learning Kubernetes and using Kubectl I did most of my study on a Linux box. One of the commands I learnt to use was the patch command, which allows you to change part of a resource specification from the CLI, specifying only what you want to change, as opposed to a command like apply, which reads a whole file and makes the required changes.

I’ve since switched to using Windows and PowerShell as my main desktop. For the most part I’ve found the transition to using Kubernetes and Kubectl on Windows fairly seamless for my work. That is with the exception of using the patch command. Recently I was following a guide to modify an Ingress deployment which made reference to using the patch command. When I went to run the command under Windows PowerShell I received the following error.

kubectl patch deployment nginx-ingress --patch '{"spec": {"template": {"metadata": {"labels": {"app": "nginx"}}}}}'

Error from server (BadRequest): invalid character 's' looking for beginning of object key string

I tried changing the single and double quotes and a few variations, but that didn’t seem to help. It can be tricky sometimes to tell if you have all the correct curly braces and quotes. So I took the command over to Linux and sure enough it worked fine.

Digging into the issue, the solution, while I guess visually messy, is actually very simple. It turns out under Windows you need to escape each double quote in the command with a backslash.

So the following command

kubectl patch deployment nginx1 --patch '{"spec": {"template": {"metadata": {"labels": {"test": "test123"}}}}}'

Becomes

kubectl patch deployment nginx1 --patch '{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"test\": \"test123\"}}}}}'

Only the double quotes need the backslash in front of them. The single quotes around the payload do not.
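An alternative that sidesteps shell quote escaping entirely is to put the patch in a file and pass it with kubectl’s --patch-file flag (available in recent kubectl versions); the file is read the same way under PowerShell, cmd, or bash. A sketch, with an illustrative file name:

```shell
# Write the patch to a file so no shell-specific quote escaping is needed
cat > patch.json <<'EOF'
{
    "spec": {
        "template": {
            "metadata": {
                "labels": {
                    "test": "test123"
                }
            }
        }
    }
}
EOF

# Validate the JSON before applying it
python3 -m json.tool patch.json

# Then apply it without any inline quoting (requires a cluster):
# kubectl patch deployment nginx1 --patch-file patch.json
```

This also makes larger patches much easier to read and keep in version control.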

Exam Review – VMware Certified Professional – Application Modernization https://blog.ukotic.net/2021/11/08/exam-review-vmware-certified-professional-application-modernization/ Mon, 08 Nov 2021

Back in June / July of this year VMware quietly released a new certification, VMware Certified Professional – Application Modernization (VCP-AM). The certification, as its name implies, focuses on application modernization, but more specifically VMware’s cloud native portfolio. This includes VMware’s Tanzu Kubernetes Grid, vSphere with Tanzu, and Tanzu Mission Control solutions.

Early this past week I finally sat this exam after procrastinating over it for the last few months. The TL;DR is that it’s quite a challenging exam, requiring a very broad range of skills and knowledge to be familiar with. I’ve been actively using Kubernetes for over a year now. I’m CKA certified. I’ve done numerous deployments of all the different TKG flavours VMware has to offer. But when I hit that finish button at the end of the exam, I fully expected that I had failed. It was a huge relief, though, to see that Pass flash up on the screen. But don’t let that scare you. I pretty much feel like this at the end of every exam I do 😉

Meeting The Prerequisites

Sitting the exam is just one part of the VCP-AM certification process. As with most certifications, there are prerequisites that also need to be met. When the VCP-AM exam, 2V0-71.21, was initially released you needed to hold a valid VCP in another discipline. This has now been removed and you only need to complete one of the required training courses, which can be found here. Of course, you can sit the exam at any time, but you won’t be awarded the certification status until you also complete one of these courses. From what I can see, only one of the courses can currently be taken On-Demand, VMware vSphere with Tanzu: Deploy and Manage [V7] – On Demand. Though by the time you’re reading this there may be more.

The VMware vSphere with Tanzu: Deploy and Manage [V7] course was the one I chose, as it was the most convenient to do on-demand. It’s noted as being a three-day course, but you can obviously speed up the lecture parts and get through it a little faster. It does a good job of covering the fundamentals of Kubernetes, as well as explaining and breaking down vSphere with Tanzu and Tanzu Kubernetes Grid (TKG). As a minimum I would highly recommend you do this course. The other course I would consider is VMware Tanzu Mission Control: Management and Operations 2020 if you have the option available. I have not sat this course, so I can’t speak to it, but TMC is a core component of the VCP-AM exam, so make sure you cover it.

Study and Preparation

The first place to start, outside of meeting the training course prerequisites, is the official Exam Preparation Guide. Try not to have a heart attack when you first look at the guide. It is quite broad and deep in the content you need to study and be familiar with. There are no shortcuts here! You just have to be methodical and work your way through it. The objectives clearly define the knowledge you need to pass the exam. I find copying the objectives into Excel and ticking each one off as I study and work my way through them works best for me.

One particular objective I feel needs calling out is Tanzu Mission Control (TMC). This is a SaaS product provided by VMware which requires a subscription to use. Many people may find it difficult to get access to this. There is, though, a Hands On Lab that can be used to guide you through some hands on experience. Don’t ignore this.

An area I don’t think gets called out well in the study guide objectives is an understanding of how general ‘vanilla’ Kubernetes works. You may find yourself focusing a lot of your time and energy on how to install and manage TKG but forget to learn how to actually use Kubernetes. I’m talking about more than just logging in after deployment. How do you deploy an application, a pod, a container? Can you describe them? Are you comfortable modifying Kubernetes deployment YAML files? All things that will greatly help you during the exam.

The Exam

I sat the exam online via Pearson VUE. I have performed several online exams over the last year and generally had pretty good experiences with no issues. So I won’t go too much into the process. I think we’ve all read a lot about how it works over the last year.

The VCP-AM exam consists of 55 multiple choice questions over 130 minutes. This is fairly typical for a VCP exam. You have a little over 2 minutes per question. I found I had ample time to answer each question, then go back and review each one, with a little time to spare.

The range of questions did a good job of covering all the objectives, so certainly don’t skimp on studying any of the areas. I’ve already hinted at this above: study TMC, and feel confident in how to use and manage it. Despite its small section in the study guide objectives, you may be surprised at the number of questions you get on it.

In Summary

I’m sure by looking at the exam study guide you can pick up that this is not going to be an easy exam to study and sit, and it certainly isn’t. That being said, if you learn and know the content you will be fine. I think your best weapon will be real hands on experience. No doubt about it. The more you use and play with Kubernetes, TKG, Tanzu, the better off you will be.

Whether you are studying for this exam or not I encourage everyone to just get out there and learn Kubernetes. I see this being a valuable skill to have in the near future.

Good Luck!

SSH To A Tanzu Community Edition (TCE) Node https://blog.ukotic.net/2021/10/25/ssh-to-a-tanzu-community-edition-tce-node/ Mon, 25 Oct 2021

I’ve noticed a number of people recently asking how to SSH into a Tanzu Community Edition node / host, particularly what user account is used and how to confirm which public key is in use. So I took the opportunity to delve into this a little.

The below applies specifically to TCE deployed in a vSphere environment. It should also be relevant to the commercial version of TKG on vSphere. Cloud deployments, like AWS, will be different.

During the deployment of an initial TCE management cluster you are required to enter in a public key. If using the UI for deployment you perform this on Step 1 when configuring the vCenter details. If using a YAML deployment file the key name is VSPHERE_SSH_AUTHORIZED_KEY.

Below is how the SSH public key should look when pasted into the UI. Notice it has ssh-rsa at the start.
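If you need to generate a compatible key pair, and want to confirm the public key is in that expected single-line ssh-rsa format before pasting it, something like the following works (the file name tce_id_rsa is illustrative):

```shell
# Generate an RSA key pair with no passphrase (file name is illustrative)
ssh-keygen -t rsa -b 4096 -f ./tce_id_rsa -N '' -q

# The public key must be a single line starting with ssh-rsa
cat ./tce_id_rsa.pub
wc -l < ./tce_id_rsa.pub    # should report 1
```

Paste the whole of that one line into the deployment UI; line-wrapping it across multiple lines is exactly the failure mode described later in this post.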

Once your management cluster is deployed you should attempt to SSH to the nodes to confirm your public key was correctly applied. The default user will be ‘capv’ when deployed on vSphere.

You can do this by using the private key of your key pair and specifying the user ‘capv’. For example:

ssh -i id_rsa capv@10.10.10.100

If this fails to work and you can’t login, you’re either looking at an incorrect public/private key pair being used or an incorrectly applied public key during deployment. You can confirm the public key that was used during the management cluster deployment from inside Kubernetes of the management cluster.

In Kubernetes on the management cluster type the following

kubectl describe kubeadmconfigtemplate -A

Towards the bottom of the output displayed you should see Name and Ssh Authorized Keys.

Make sure the key matches the public key you used. Note the Name and if it is different from the default ‘capv’. Pay close attention to how the key looks. It should all be on one line. If it’s not on one line you may have pasted it incorrectly with multiple lines during deployment.

An example of an incorrectly defined public key during deployment below.

Notice that ssh-rsa is on a different line to the actual key. Also notice mark@docker, which was pasted with the key but is not required. The key also appears very short. This is an example of a poorly pasted public key in the UI during deployment.

To resolve an incorrect public key it is possible to edit and update the kubeadmconfigtemplate. I have had success doing this and then using tanzu cluster scale to deploy more nodes / hosts and remove the old nodes with the incorrectly applied keys.
