
PowerShell Get-Command: finding the cmdlet

A recent Slack chat reminded me that PowerShell’s Get-Command cmdlet is a good way of finding what commands to use when you encounter a new problem. However, there’s more to it than typing “Get-Command” and getting a huge list back – my laptop just gave me 7659 commands to choose from – which is all but unusable. Here are some quick tips on focussing your search using the built-in arguments.

1. -Module

PowerShell and its extensions are composed of modules. If you want the cmdlets for interacting with a VMware environment, you install their “PowerCLI” module. Get-Command can return just the cmdlets from a specific module; for example, we can list all the cmdlets from the VMware modules:

Get-Command -Module VMware.*

Or we can list the commands in the Azure Compute PowerShell module:

Get-Command -Module Az.Compute

2. -Verb

If you’ve used PowerShell before, you’ll know that cmdlet names all follow the format verb (“a doing word”, as I was taught at school), followed by a dash, followed by a noun. So we have Measure-Object, Remove-Disk, and even Get-Command itself. The -Verb argument shows only the cmdlets with a given verb; for example, to see only the “Get” cmdlets we use

Get-Command -Verb Get

3. -Noun

So, after the dash we have the noun: a disk, a network connection, a user account, and so on. To find all the cmdlets that work on or with services:

Get-Command -Noun Service

4. Combining the above

Of course, we can make this even more powerful by combining these arguments with each other and with wildcards. Let’s say we want to find all the cmdlets for working with VMware vSphere tags:

Get-Command -Module VMware* -Noun *Tag*

Or we can find all the Azure Get commands for working with resources, resource groups, resource locks, and so on:

Get-Command -Module Az.* -Verb Get -Noun *resource*

Azure: Email a Backup Report with PowerShell and Office365

This PowerShell snippet compiles a daily report of the backup jobs on all the Recovery Services Vaults within the current subscription. It then uses the Office 365 SMTP server to mail the report out to chosen recipients – if you’re not using O365 then just change the SmtpServer, Port, and UseSsl arguments as appropriate in the Send-MailMessage cmdlet.

#Gather the last 24 hours of backup jobs from every Recovery Services Vault
$Jobs=foreach ($RSV in Get-AzRecoveryServicesVault) {
Get-AzRecoveryServicesBackupJob -VaultId $RSV.ID -Operation "Backup" -From ((Get-Date).AddDays(-1).ToUniversalTime()) |
Select-Object WorkloadName,Operation,Status,StartTime,EndTime,Duration
}
#Wrap the job list in some simple HTML
$Body="<h1>Daily Azure Backup Report: " + (Get-AzContext).Subscription.Name + "</h1>" +
"<code>" + ($Jobs | ConvertTo-Html) + "</code>"
Send-MailMessage -BodyAsHTML $Body -From "[email protected]" `
-To "[email protected]" -SmtpServer smtp.office365.com -Port 587 `
-Subject "Azure Backup Report" -UseSsl `
-Credential (Get-Credential -Message "Office 365 credentials")

If the email should go to multiple recipients then comma separate the list as follows:

Send-MailMessage -To @("[email protected]","[email protected]")

Obviously to automate this you’ll need to feed the credentials in, using whatever secure platform you have available, rather than prompting for them in the script. The resulting email looks something like this:
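One way of doing that (a sketch, assuming the script always runs on Windows as the same user, and with a hypothetical file path) is to save the credential once with Export-Clixml – which encrypts the password using DPAPI so only that user on that machine can read it back – and load it with Import-Clixml in the scheduled run:

```powershell
# One-off, interactive step: save the credential to disk.
# The password is DPAPI-encrypted, readable only by this user on this machine.
Get-Credential -Message "Office 365 credentials" |
    Export-Clixml -Path "C:\Scripts\o365cred.xml"   # hypothetical path

# In the scheduled script: load the credential back instead of prompting
$Cred = Import-Clixml -Path "C:\Scripts\o365cred.xml"
```

The loaded $Cred can then be passed straight to the -Credential parameter of Send-MailMessage.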
[Image: the emailed backup report]
There’s plenty of scope for customisation of the email – its style and look can be changed by manipulating the HTML generated in the snippet, and the information included can be changed by modifying the Select-Object properties.
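For instance, ConvertTo-Html has -Head and -PreContent parameters, so a stylesheet and heading can be injected as the table is built. The CSS below is only an illustrative sketch, and the collection of backup-job objects is assumed to be in a variable (here called $Jobs):

```powershell
# A small stylesheet to smarten up the generated table (illustrative only)
$Style = "<style>table { border-collapse: collapse; } th, td { border: 1px solid #ccc; padding: 4px 8px; }</style>"

# Build the report with the stylesheet in the <head> and a heading before the table
$Html = $Jobs | ConvertTo-Html -Head $Style -PreContent "<h1>Daily Azure Backup Report</h1>"
```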

Azure: Deploy a WebApp with PowerShell

A quick run-through of using PowerShell to deploy a new WebApp. The ASP.NET code for the website has been zipped up (into myapp.zip), and this code snippet will upload it to a new WebApp, hosted in a new App Service Plan in a new Resource Group.

From a local PowerShell session, use Connect-AzAccount before running this code to sign in to Azure. Alternatively, this code can be run (with the exception of the upload itself) from the Cloud Shell directly in the Azure Portal.
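If the account has access to more than one subscription, it’s worth making sure the session is pointing at the right one before deploying anything (the subscription name below is a placeholder):

```powershell
# Sign in interactively, then select the subscription to deploy into
Connect-AzAccount
Set-AzContext -Subscription "My-Subscription-Name"   # placeholder name

# Confirm which subscription the session is now using
Get-AzContext
```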

The code also writes out the URL of the resulting WebApp and the PowerShell necessary to tear down the resources when they are no longer required.

#Set some parameters
$location="UK South"
$resourceGroupName="rsg-myapp"
$webAppName="web-myapp"
$appServicePlanName="asp-myapp"
$codeZIPPath="C:\myapp.zip"

#Create Resource Group
"-- Creating Resource Group"
New-AzResourceGroup -Location $location -Name $resourceGroupName

#Create ServicePlan
"-- Creating Service Plan"
New-AzAppServicePlan -ResourceGroupName $resourceGroupName -Name $appServicePlanName -Location $location -Tier Free

#Create Web App
"-- Creating Web App"
New-AzWebApp -ResourceGroupName $resourceGroupName -Name $webAppName -Location $location -AppServicePlan $appServicePlanName

#Upload the web code
"-- Uploading Web App Code"
Publish-AzWebApp -ResourceGroupName $resourceGroupName -Name $webAppName -ArchivePath $codeZIPPath -Force

#Show user code to destroy this (useful for testing)
#  and the website that has been created.
"-- Tidy Up Code: "
" Remove-AzResourceGroup -Name $resourceGroupName"
"-- Website: "
"-- https://$WebAppName.azurewebsites.net"

"-- Done"

The resulting website can be viewed just by pointing a browser at the given URL. The created resources can be checked in the Azure portal:
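The same check can be made from PowerShell, for example by listing everything in the new resource group and reading back the WebApp’s hostname:

```powershell
# List the resources created in the new resource group
Get-AzResource -ResourceGroupName "rsg-myapp" |
    Format-Table Name, ResourceType, Location

# Read back the WebApp's default hostname directly
(Get-AzWebApp -ResourceGroupName "rsg-myapp" -Name "web-myapp").DefaultHostName
```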

[Screenshot: the created resources in the Azure portal]

vSAN Cluster Shutdown

A few weeks ago I had to shut down a vSAN cluster temporarily for a planned site-wide 24-hour power outage that was blacking out a datacentre. With plenty of warning and a multi-datacentre design this wasn’t an issue, but I made use of vSphere tags and some PowerShell/PowerCLI to help with the evacuation and repopulation of the affected cluster. Hopefully some of this may be useful to others.

The infrastructure has two vSAN Clusters – Cluster-Alpha and Cluster-Beta. Cluster-Beta was the one being affected by the power outage, and there was sufficient space on Cluster-Alpha to absorb migrated workloads. Whilst they exist in different datacentres both clusters are on the same LAN and under the same vCenter.

I divided the VMs on Cluster-Beta into three categories:

  1. Powered-off VMs and templates. These were to stay in place; they would be inaccessible for the outage, but I determined this wouldn’t present any issues.
  2. VMs which needed to migrate and stay on. These were tagged with the vSphere tag “July2019Migrate”
  3. VMs which needed to be powered off but not migrated. For example, test/dev boxes that were not required for the duration. These were tagged with “July2019NOMigrate”

The tagging was important, not only to make sure I knew what was migrating and what was staying, but also what we needed to move back or power on once the electrical work had completed. PowerCLI was used to check that all powered-on VMs in Cluster-Beta were tagged one way or another.

Get the VMs in Cluster-Beta where the tag “July2019Migrate” is not assigned, the tag “July2019NOMigrate” is not assigned, and the VM is powered on:

Get-Cluster -Name "Cluster-Beta" | Get-VM | Where-Object {
 (Get-TagAssignment -Entity $_).Tag.Name -notcontains "July2019Migrate" -and
 (Get-TagAssignment -Entity $_).Tag.Name -notcontains "July2019NOMigrate" -and
 $_.PowerState -eq "PoweredOn"}

In the week approaching the shutdown the migration was kicked off:

#Create a List of the VMs in the Source Cluster which are tagged to migrate
$MyTag= Get-Tag -Name "July2019Migrate"
$MyVMs=Get-Cluster "Cluster-Beta" | Get-VM | Where-Object {(Get-TagAssignment -Entity $_).Tag.Name -contains $MyTag.Name }
#Do the Migration
$TargetCluster= "Cluster-Alpha" #Target Cluster
$TargetDatastore= "vSANDatastore-Alpha" #Target Datastore on Target Cluster
$MyVMs | Move-VM -Destination (Get-Cluster -Name $TargetCluster) -Datastore (Get-Datastore -Name $TargetDatastore) -DiskStorageFormat Thin -VMotionPriority High

At shutdown time, a quick final check of the remaining powered-on VMs was done and then all remaining VMs in Cluster-Beta were shut down. Once there were no running workloads on Beta, it was time to shut down the vSAN cluster itself. I didn’t automate this part, as I’m not planning on doing it often, and there’s comprehensive documentation on the VMware Docs site. The process is basically one of putting all the hosts into maintenance mode and then, once the whole cluster is done, powering them off.
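For anyone who does want to script it, a minimal PowerCLI sketch might look like this – no vSAN data is evacuated, on the assumption that the whole cluster is coming down together:

```powershell
# Put every host in the cluster into maintenance mode without moving vSAN data,
# then power the hosts off - the whole cluster is going down, so nowhere to evacuate to
$VMHosts = Get-Cluster -Name "Cluster-Beta" | Get-VMHost
$VMHosts | Set-VMHost -State Maintenance -VsanDataMigrationMode NoDataMigration
$VMHosts | Stop-VMHost -Confirm:$false
```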

You are in a dark, quiet datacentre. There are many servers, all alike. There may be Grues here.

When power was restored, the process was largely reversed. I powered on the switches providing the network interconnect between the nodes, then powered on the vSAN hosts and waited for them to come up. Once all the hosts were visible to vCenter, it was just a case of selecting them all and choosing “Exit Maintenance Mode”.
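The same step is a one-liner in PowerCLI if you prefer:

```powershell
# Take every host in the cluster back out of maintenance mode
Get-Cluster -Name "Cluster-Beta" | Get-VMHost | Set-VMHost -State Connected
```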


There was a momentary flash of alerts as the nodes came up and wondered where their friends were, but in under a minute the cluster was passing the vSAN Health Check.


At this point it was all ready to power on the VMs that had been shutdown and left on the cluster, and vMotion the migrated virtual machines back across. Again, PowerCLI simplified this process:

#Create a List of the VMs in the Source Cluster which are tagged to stay but need powering on.
$MyTag= Get-Tag -Name "July2019NOMigrate"
$MyVMs=Get-Cluster "Cluster-Beta" | Get-VM | Where-Object {(Get-TagAssignment -Entity $_).Tag.Name -contains $MyTag.Name }
#Power on those VMs
$MyVMs | Start-VM

#Create a List of the VMs in the Source Cluster which are tagged to migrate (back)
$MyTag= Get-Tag -Name "July2019Migrate"
$MyVMs=Get-Cluster "Cluster-Alpha" | Get-VM | Where-Object {(Get-TagAssignment -Entity $_).Tag.Name -contains $MyTag.Name }
#Do the Migration
$TargetCluster= "Cluster-Beta" #New Target Cluster
$TargetDatastore= "vSANDatastore-Beta" #Target Datastore on Target Cluster
$MyVMs | Move-VM -Destination (Get-Cluster -Name $TargetCluster) -Datastore (Get-Datastore -Name $TargetDatastore) -DiskStorageFormat Thin -VMotionPriority High

Then it was just a case of waiting for the data to flow across the network and finally check that everything had migrated successfully and normality had been restored.
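That final check can reuse the earlier tag query – once the move back has finished, nothing tagged for migration should be left on Cluster-Alpha:

```powershell
# After the move back, nothing tagged "July2019Migrate" should remain on Cluster-Alpha
$Stragglers = Get-Cluster "Cluster-Alpha" | Get-VM |
    Where-Object { (Get-TagAssignment -Entity $_).Tag.Name -contains "July2019Migrate" }
if ($Stragglers) { $Stragglers | Select-Object Name, PowerState }
else { "All tagged VMs are back on Cluster-Beta" }
```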

we have normality, I repeat we have normality… Anything you still can’t cope with is therefore your own problem. Please relax.

Trillian, via the keyboard of Douglas Adams. The Hitchhiker’s Guide to the Galaxy

Rubrik Build Workshop

Last week (end of May 2019) I was lucky enough to secure a place at the Rubrik Build Workshop in London. This event, which has been touring the world, is a day of technical learning focussed on APIs, SDKs, and version control.

The first thing to acknowledge here is that even though Rubrik was hosting the event and the presenters (the awesome pairing of Chris Wahl and Rebecca Fitzhugh) work for the company, there was absolutely no sales push. Whilst they used their own APIs and SDKs as examples, the majority of the content was very much platform agnostic. Kudos is due here for running this kind of free-of-charge educational event for the tech community without filling it with sales and marketing slides.

The morning started with a session on version control – looking at how Git, and in particular GitHub, can be used to track and share code. The “RoxieAtRubrik” GitHub account was used in some hands-on demos – we all forked a public project, made changes, and submitted a pull request. The course material used in the workshops is publicly available via this account: https://github.com/RoxieAtRubrik

There were some insights into how GitHub is used at Rubrik – there are unit tests for every single function, and in the background a CI (Continuous Integration) pipeline is at work to make sure releases are up to scratch. Quality control can be tricky on community-fed projects, where developers may not be subject to traditional corporate control, and it’s interesting to see how different teams handle this input.

Our dive into version control was followed by a look at how REST APIs work, using the Rubrik APIs as an example. There was plenty of hands-on activity here, with an online lab provided to simulate communicating with a real world device but in a safe environment.

Rubrik Hands on Lab Environment

The schedule of this event was flexible and after a show of hands amongst the 15 delegates we moved on to look at PowerShell, both in general terms for those new to the scripting language but also seeing how the SDK layer of the Rubrik PowerShell module made the API calls we’d looked at previously more user-friendly.

This PowerShell module is open source and available on GitHub – https://github.com/rubrikinc/rubrik-sdk-for-powershell – and, as with all these projects, contributions are welcome from the community. There was lots of encouragement from the presenters for customers/users to try these SDKs out and feed back any improvements that could be made, either by submitting a feature request or bug report, or by writing some or all of the addition yourself.

The European leg of the Rubrik Build tour has finished, but they’re off to Australia and New Zealand in June if that’s local to you. Check out https://build.rubrik.com/ for details.