
PowerShell Get-Command: finding the cmdlet

A recent Slack chat reminded me that PowerShell's Get-Command cmdlet is a good way of finding which commands to use when you encounter a new problem. However, it goes beyond typing "Get-Command" and just getting a huge list back (my laptop just gave me 7659 commands to choose from), which can be unusable. Here are some quick tips on focusing your search using the built-in arguments.
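If you want to see just how long that unfiltered list is on your own machine, a quick count looks like this (the exact number will vary with the modules you have installed):

# Count every command PowerShell can currently see
(Get-Command).Count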

1. -Module

PowerShell and its extensions are made up of modules. If you want to use the cmdlets for interacting with a VMware environment, you install VMware's "PowerCLI" module. Get-Command can return just the cmdlets from a specific module. For example, we can list all the cmdlets from the VMware modules:

Get-Command -Module VMware.*

Or we can list the commands in the Azure Compute PowerShell module:

Get-Command -Module Az.Compute

2. -Verb

If you've used PowerShell before, you'll know that cmdlet names all follow the same format: a verb ("a doing word", as I was taught at school), followed by a dash, followed by a noun. So we have Measure-Object, Remove-Disk, and even Get-Command itself. The -Verb argument can be used to only show us cmdlets with a particular verb; for example, to see only the "Get" cmdlets we use:

Get-Command -Verb Get

3. -Noun

After the dash we have the noun: a disk, a network connection, a user account, and so on. So to find all the cmdlets that work on or with services:

Get-Command -Noun Service

4. Combining the above

Of course we can make this even more powerful by combining these arguments with each other and with wildcards. Let's say we want to find all the cmdlets for working with VMware vSphere tags:

Get-Command -Module VMware* -Noun *Tag*

Or perhaps we want to find all the Azure "Get" commands for working with resources, resource groups, resource locks and so on:

Get-Command -Module Az.* -Verb Get -Noun *resource*

Improving Documentation via the Community

Have you ever had to deal with incorrect documentation? Or been frustrated by a typo? Or been annoyed that a how-to guide uses an old version of an interface?

Now you can fix it!

Many software providers are now using community-editable documentation online. This isn't a Wikipedia-style free-for-all, but a carefully moderated process ensuring that the resulting document is accurate. If you come across an error in an online doc, or even a PowerShell help page, check whether you can submit edits.

Continuous deployment pipelines mean that these edits can make it into live documentation in a matter of hours or days: impressive turnaround if you've ever submitted an erratum for a printed book, or raised a bug request to get online documentation fixed.

docs.Microsoft.com

If you visit a Microsoft docs page, you'll see an Edit link at the top of the screen (see (1) in the screenshot below). Clicking on this takes you to a page on GitHub with the source of the document. Click there to edit the file and a git fork will be made under your own profile; make your edits and submit a pull request and, once approved, your updates will appear on the original website. You'll even get a little credit (see (2) in the screenshot below) for your contribution.

[Screenshot: the Edit link (1) on a Microsoft docs page, and the contributor credit (2)]

In this particular example I was following the step-by-step guide and noticed that the wording in the document no longer matched the Azure Portal. I was quickly able to suggest a fix; later that day the page was updated, and anyone else following the instructions wouldn't be misled. Two minutes of my time hopefully saved ten minutes of head-scratching by someone else.

VMware PowerCLI Example Scripts

As the name suggests, this is a set of example PowerCLI scripts whose source code has been published by VMware and is supported by members of the #vCommunity. If you find an error in the scripts you can pop over to GitHub and correct it, and remember this isn't just the code of the scripts, but also their accompanying documentation.

[Screenshot: the typo fix to the Get-Help file on GitHub]

In this example a typo in the Get-Help file was spotted and quickly corrected. Whilst the spelling mistake wasn't a show-stopper, it shows how quick and easy it is to contribute to these projects without being a coding guru.

Summary

Many of these projects use GitHub, and learning how to use that version control platform isn't arduous (especially for small changes like these) and is a useful skill to pick up if you don't already have it. The important message here is that you don't need to be a developer to contribute to the code.

So, next time you spot a mistake in documentation, see if you can fix it yourself and help the next person who comes along.

vSAN Cluster Shutdown

A few weeks ago I had to shut down a vSAN cluster temporarily for a planned site-wide 24-hour power outage that was blacking out a datacentre. With the amount of warning and a multi-datacentre design this wasn't an issue, but I made use of vSphere tags and some PowerShell/PowerCLI to help with the evacuation and repopulation of the affected cluster. Hopefully some of this may be useful to others.

The infrastructure has two vSAN clusters: Cluster-Alpha and Cluster-Beta. Cluster-Beta was the one affected by the power outage, and there was sufficient space on Cluster-Alpha to absorb the migrated workloads. Whilst they sit in different datacentres, both clusters are on the same LAN and under the same vCenter.

I divided the VMs on Cluster-Beta into three categories:

  1. Powered-off VMs and templates. These were to stay in place; they would be inaccessible for the outage, but I determined this wouldn't present any issues.
  2. VMs which needed to migrate and stay on. These were tagged with the vSphere tag "July2019Migrate".
  3. VMs which needed to be powered off but not migrated, for example test/dev boxes which were not required for the duration. These were tagged with "July2019NOMigrate" (a quick sketch of creating and assigning these tags follows this list).
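For reference (this wasn't part of the original runbook, and the category name "Outages" is just an example), creating and assigning tags like these in PowerCLI looks something like this:

#One-off setup: a tag category and the two tags
New-TagCategory -Name "Outages" -Cardinality Single
New-Tag -Name "July2019Migrate" -Category "Outages"
New-Tag -Name "July2019NOMigrate" -Category "Outages"

#Assign a tag to an example VM
New-TagAssignment -Entity (Get-VM -Name "ExampleVM") -Tag (Get-Tag -Name "July2019Migrate")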

The tagging was important, not only to make sure I knew what was migrating and what was staying, but also to track what needed to move back or be powered on once the electrical work had completed. PowerCLI was used to check that every powered-on VM in Cluster-Beta was tagged one way or the other.

Get the VMs in Cluster-Beta where the tag "July2019Migrate" is not assigned, the tag "July2019NOMigrate" is not assigned, and the VM is powered on:

Get-Cluster -Name "Cluster-Beta" | Get-VM | Where-Object {
 (Get-TagAssignment -Entity $_).Tag.Name -notcontains "July2019Migrate" -and
 (Get-TagAssignment -Entity $_).Tag.Name -notcontains "July2019NOMigrate" -and
 $_.PowerState -eq "PoweredOn"}

In the week approaching the shutdown, the migration was kicked off:

#Create a list of the VMs in the source cluster which are tagged to migrate
$MyTag = Get-Tag -Name "July2019Migrate"
$MyVMs = Get-Cluster "Cluster-Beta" | Get-VM | Where-Object {(Get-TagAssignment -Entity $_).Tag.Name -contains $MyTag.Name}
#Do the migration
$TargetCluster = "Cluster-Alpha" #Target cluster
$TargetDatastore = "vSANDatastore-Alpha" #Target datastore on the target cluster
$MyVMs | Move-VM -Destination (Get-Cluster -Name $TargetCluster) -Datastore (Get-Datastore -Name $TargetDatastore) -DiskStorageFormat Thin -VMotionPriority High

At shutdown time, a quick final check of the remaining powered-on VMs was done and then all remaining VMs in Cluster-Beta were shut down. Once there were no running workloads on Beta it was time to shut down the vSAN cluster itself. This part I didn't automate, as I'm not planning on doing it often and there's comprehensive documentation on the VMware Docs site. The process is basically one of putting all the hosts into maintenance mode and then, once the whole cluster is done, powering them off.
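I did this by hand through the vSphere Client, but if you did want to script that step, a rough PowerCLI sketch might look like the one below. Treat it as an illustration rather than a tested runbook; the NoDataMigration option is appropriate here only because the whole cluster is coming back unchanged.

#Put every host in the cluster into maintenance mode without evacuating vSAN data
Get-Cluster -Name "Cluster-Beta" | Get-VMHost | Set-VMHost -State Maintenance -VsanDataMigrationMode NoDataMigration

#Once every host is in maintenance mode, shut them down
Get-Cluster -Name "Cluster-Beta" | Get-VMHost | Stop-VMHost -Confirm:$false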

You are in a dark, quiet datacentre. There are many servers, all alike. There may be Grues here.

When power was restored, the process was largely reversed. I powered on the switches providing the network interconnect between the nodes, then powered on the vSAN hosts and waited for them to come up. Once all the hosts were visible to vCenter, it was just a case of selecting them all and choosing "Exit Maintenance Mode".

[Screenshot: the hosts selected in the vSphere Client, choosing Exit Maintenance Mode]
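The same step can be done from PowerCLI if you prefer; a minimal sketch:

#Take every host in the cluster out of maintenance mode
Get-Cluster -Name "Cluster-Beta" | Get-VMHost | Set-VMHost -State Connected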

There was a momentary flash of alerts as the nodes came up and wondered where their friends were, but in under a minute the cluster was passing the vSAN Health Check.

[Screenshot: the vSAN Health Check passing]
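If you would rather check from the command line, recent PowerCLI releases include a vSAN health cmdlet; assuming it's available in your version, the check is a one-liner:

#Run the vSAN health check against the cluster
Test-VsanClusterHealth -Cluster (Get-Cluster -Name "Cluster-Beta")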

At this point it was all ready to power on the VMs that had been shut down and left on the cluster, and to vMotion the migrated virtual machines back across. Again, PowerCLI simplified this process:

#Create a list of the VMs which stayed on Cluster-Beta but need powering back on
$MyTag = Get-Tag -Name "July2019NOMigrate"
$MyVMs = Get-Cluster "Cluster-Beta" | Get-VM | Where-Object {(Get-TagAssignment -Entity $_).Tag.Name -contains $MyTag.Name}
#Power on those VMs
$MyVMs | Start-VM

#Create a list of the VMs now on Cluster-Alpha which are tagged to migrate (back)
$MyTag = Get-Tag -Name "July2019Migrate"
$MyVMs = Get-Cluster "Cluster-Alpha" | Get-VM | Where-Object {(Get-TagAssignment -Entity $_).Tag.Name -contains $MyTag.Name}
#Do the migration
$TargetCluster = "Cluster-Beta" #New target cluster
$TargetDatastore = "vSANDatastore-Beta" #Target datastore on the target cluster
$MyVMs | Move-VM -Destination (Get-Cluster -Name $TargetCluster) -Datastore (Get-Datastore -Name $TargetDatastore) -DiskStorageFormat Thin -VMotionPriority High

Then it was just a case of waiting for the data to flow back across the network and finally checking that everything had migrated successfully and normality had been restored.
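That final check was just the earlier tag query pointed at Cluster-Alpha, which should come back empty once everything has moved home:

#Should return nothing once the migration back to Cluster-Beta is complete
Get-Cluster -Name "Cluster-Alpha" | Get-VM | Where-Object {(Get-TagAssignment -Entity $_).Tag.Name -contains "July2019Migrate"}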

we have normality, I repeat we have normality…Anything you still can’t cope with is therefore your own problem. Please relax.

Trillian, via the keyboard of Douglas Adams. The Hitchhiker’s Guide to the Galaxy

Quick PowerCLI: Getting VM hardware versions

A quick PowerCLI snippet for examining what VM Hardware versions exist in your virtual environment:

Using the "Group-Object" cmdlet we can run up a quick count of all the VMs on each hardware version:

Get-VM | Group-Object Version

Count Name                      Group
----- ----                      -----
42    v13                       {VM1,VM2,VM3...}
257   v8                        {VM4,VM5,VM6...}
70    v11                       {VM7,VM8,VM9...}
2     v4                        {VM10,VM11}
5     v10                       {VM12,VM13,VM14...}
2     v9                        {VM15,VM16}
2     v7                        {VM17,VM18}

This can be refined using “Sort-Object” to put the most common hardware version at the top of the list.

Get-VM  | Group-Object Version | Sort-Object Count -Descending
Count Name                      Group
----- ----                      -----
257   v8                        {VM4,VM5,VM6...}
70    v11                       {VM7,VM8,VM9...}
42    v13                       {VM1,VM2,VM3...}
5     v10                       {VM12,VM13,VM14...}
2     v7                        {VM17,VM18}
2     v9                        {VM15,VM16}
2     v4                        {VM10,VM11}

We may only be concerned with VMs that are Powered On, so “Where-Object” can be used to filter the original list.

Get-VM  | Where-Object {$_.PowerState -eq "PoweredOn"} | Group-Object Version | Sort-Object Count -Descending
Count Name                      Group
----- ----                      -----
66    v8                        {VM4,VM5,VM19...}
51    v11                       {VM7,VM8,VM9...}
33    v13                       {VM1,VM21,VM22...}
5     v10                       {VM12,VM13,VM20...}
2     v9                        {VM15,VM16}
1     v4                        {VM10}

This quick snippet can be useful when establishing the range of hardware versions in an environment, or estimating the amount of work involved in updating VM hardware to a modern standard across an estate.
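If you want to take that estimate away into a spreadsheet, the same data exports easily; a small sketch (the output path is just an example):

#Export the name, power state and hardware version of every VM to CSV for planning
Get-VM | Select-Object Name, PowerState, Version | Sort-Object Version | Export-Csv -Path "C:\Temp\VM-HardwareVersions.csv" -NoTypeInformation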

Find VMs being replicated to a datastore with Get-VmdkFolders

I'm using vSphere Replication to replicate a number of VMs to a datastore which is being decommissioned. I can't reconfigure replication through code (I've mentioned before, most recently on Twitter, that there's no public API or PowerCLI coverage for the vSphere Replication functionality in vSphere 6.5) and I can't multi-select VMs in the vSphere Web Client to reconfigure them en masse through the GUI.

There’s also no way of telling which VMs are being replicated to this datastore without going into the reconfigure dialogues for each one individually. That’s a lot of clicking before even starting to update the configuration.

I've put together a PowerCLI function that takes a little of this pain away by establishing which folders on a given datastore contain VMDK files. I've already migrated any virtual machines off, so any remaining VMDKs on the volume should be associated with a replication process. Additionally, in this case the default folder name was used when setting up the replication, so all the folder names correspond to a VM.

The function is called "Get-VmdkFolders" (as there are potentially other uses for it) and is available on GitHub here. With this I now have a list of VMs that need manually reconfiguring.
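The full version is in the repository linked above; as a rough illustration of the approach (not the published code, and the temporary drive name is arbitrary), the core of such a function mounts the datastore as a PSDrive and returns the folders that contain VMDK files:

Function Get-VmdkFolders {
    Param([Parameter(Mandatory=$true)][string]$DatastoreName)

    #Mount the datastore as a temporary PSDrive using the VimDatastore provider
    $Datastore = Get-Datastore -Name $DatastoreName
    New-PSDrive -Name "TempDS" -PSProvider VimDatastore -Root "\" -Location $Datastore | Out-Null

    #Return the name of each top-level folder containing at least one VMDK file
    Get-ChildItem -Path "TempDS:\" | Where-Object { $_.PSIsContainer } | ForEach-Object {
        $Folder = $_
        $Vmdks = Get-ChildItem -Path $Folder.PSPath | Where-Object { $_.Name -like "*.vmdk" }
        if ($Vmdks) { $Folder.Name }
    }

    Remove-PSDrive -Name "TempDS"
}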

PS C:\> Get-VmdkFolders "MyReplicationDatastore"
MyVM1
MyVM2
MyVM3
MyVM5
MyVM7