Author Archives: Chris

PowerShell Get-Command: finding the cmdlet

A recent Slack chat reminded me that PowerShell’s Get-Command cmdlet is a good way of finding what commands to use when you encounter a new problem. However, there’s more to it than typing “Get-Command” and wading through a huge list - my laptop just gave me 7659 commands to choose from, which is unusable. Here are some quick tips on focusing your search using the built-in parameters.
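As a quick sketch, you can check the size of the problem on your own machine (the total will vary with which modules you have installed):

```powershell
# Count every command PowerShell can see: cmdlets, functions, aliases, scripts
(Get-Command).Count
```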

1. -Module

PowerShell and its extensions are comprised of modules. If you want the cmdlets for interacting with a VMware environment, you install their “PowerCLI” module. Get-Command can return just the cmdlets from a specific module; for example, we can list all the cmdlets from the VMware modules:

Get-Command -Module VMware.*

Or we can list the commands in the Azure Compute PowerShell module:

Get-Command -Module Az.Compute

2. -Verb

If you’ve used PowerShell before, you’ll know that cmdlet names all follow the format verb (“a doing word”, as I was taught at school), followed by a dash, followed by a noun. So we have Measure-Object, Remove-Disk, and even Get-Command itself. The -Verb parameter can be used to show only the cmdlets with a given verb; for example, to see only the “Get” cmdlets we use

Get-Command -Verb Get

3. -Noun

After the dash we have the noun - a disk, a network connection, a user account, and so on. So to find all the cmdlets that work on or with services:

Get-Command -Noun Service

4. Combining the above

Of course we can make this even more powerful by combining these parameters with each other and with wildcards. Let’s say we want to find all the cmdlets for working with VMware vSphere tags:

Get-Command -Module VMware* -Noun *Tag*

Or we can find all the Azure “Get” commands for working with resources, resource groups, resource locks, and so on:

Get-Command -Module Az.* -Verb Get -Noun *resource*

Azure: Email a Backup Report with PowerShell and Office365

This PowerShell snippet compiles a daily report of backup jobs on all the Recovery Services vaults within the current subscription. It then uses the Office 365 SMTP server to mail this report out to chosen recipients - if you’re not using O365 then just change the SmtpServer, Port, and UseSsl arguments as appropriate in the Send-MailMessage cmdlet.

$Body = foreach ($RSV in Get-AzRecoveryServicesVault) {
    Get-AzRecoveryServicesBackupJob -VaultId $RSV.ID -Operation "Backup" `
        -From ((Get-Date).AddDays(-1).ToUniversalTime()) |
        Select-Object WorkloadName, Operation, Status, StartTime, EndTime, Duration
}
$Body = "<h1>Daily Azure Backup Report: " + (Get-AzContext).Subscription.Name + "</h1>" +
    "<code>" + ($Body | ConvertTo-Html) + "</code>"
Send-MailMessage -BodyAsHTML $Body -From "[email protected]" `
    -To "[email protected]" -SmtpServer smtp.office365.com -Port 587 `
    -Subject "Azure Backup Report" -UseSsl `
    -Credential (Get-Credential -Message "Office 365 credentials")

If the email should go to multiple recipients then pass a comma-separated array as follows:

Send-MailMessage -To @("[email protected]","[email protected]")

Obviously to automate this you’ll need to feed the credentials in, using whatever secure platform you have available, rather than prompting for them in the script. The resulting email looks something like this:
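One simple sketch of this, assuming the script runs on Windows as the same user account each time: save the credential once with Export-Clixml (which encrypts the password for the current user via DPAPI), then re-import it in the scheduled run. A key vault or your automation platform’s own credential store is better still.

```powershell
# One-off, interactive: capture and save the credential, encrypted for this user
Get-Credential -Message "Office 365 credentials" | Export-Clixml -Path .\o365cred.xml

# In the scheduled script: re-hydrate it without prompting
$Credential = Import-Clixml -Path .\o365cred.xml
Send-MailMessage -BodyAsHTML $Body -From "[email protected]" `
    -To "[email protected]" -SmtpServer smtp.office365.com -Port 587 `
    -Subject "Azure Backup Report" -UseSsl -Credential $Credential
```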
There’s plenty of scope for customisation of the email – the style and look of it can be changed by manipulating the HTML that’s generated in the snippet and the information included can be changed by modifying the Select-Object parameters.
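For example, ConvertTo-Html accepts a -Head parameter, so a CSS block can turn the default output into a styled table. This sketch assumes the backup-job objects from the snippet are first captured in their own variable (here called $Report) before conversion; the styling itself is purely illustrative.

```powershell
# Hypothetical CSS to style the generated report table
$Style = @"
<style>
  table { border-collapse: collapse; font-family: sans-serif; }
  th, td { border: 1px solid #ccc; padding: 4px 8px; }
  th { background-color: #0078d4; color: white; }
</style>
"@
# $Report holds the Select-Object output gathered from the vaults
$Body = $Report | ConvertTo-Html -Head $Style `
    -PreContent "<h1>Daily Azure Backup Report</h1>" | Out-String
```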

VMworld Europe 2019 - Day 2 Keynote highlights

The Wednesday General Session at VMworld Europe is usually where VMware puts the meat onto the bones of the Tuesday announcements and this year was no exception. Here’s a quick rundown of my highlights.

Executive VP Ray O’Farrell kicked off proceedings with a video of a near-future environment where a person makes use of futuristic apps, devices, and transport - a storyline which was then tied in to the new VMware announcements. Following on from the success of Elastic Sky Pizza in 2017, attendees were introduced to the latest (fictitious) company - Tanzu Tees - who must be opening a European branch following their success at VMworld US in August.

The Keynote was divided into four sections to follow this theme - “Build and Run”, “Connect and Protect”, “Manage”, and “Experience”. This split the hour into 10-15 minute sections and showed the breadth of today’s VMware portfolio.

Less than 7 minutes into the show we were already diving into product demos, with Joe Baguley brought in to show an application framework being built with Spring Initializr, deployed to a Bitnami catalogue with Project Galleon, and made available in the VMware Cloud Marketplace.

The second demo showed off the new Tanzu Mission Control managing Kubernetes clusters across vSphere, AWS, VMware Cloud, Azure, and Google Cloud- all on one screen. A key feature here was the ability to apply policies across all these different platforms from one consistent interface- no need to dive into 3, 4, or 5 different workflows, each with their own GUI, CLI, and API components to deal with.

A demo of Project Pacific followed this. I’ve heard lots of people say how much they appreciated these demonstrations and being able to see what the products actually look like as slide decks can only take you so far.

In this third demo we saw the vSphere Client we all know managing Kubernetes clusters alongside VMs and container pods- all natively within ESX. VMware are already using this technology in house- currently creating and destroying 800,000 containers weekly- a number which is growing.

Moving onto the “Connect and Protect” section Ray was joined onstage by Marcos Hernandez who had more demos. The first of these looked at the NSX Intelligence features- picking up risks, threats, and vulnerabilities which have been surfaced using the new Distributed IDS/IPS technology in NSX and then applying recommended firewall rules to remediate the faults.

Marcos’s second demo looked at how Carbon Black Cloud Workload adds another layer to protecting the application - spotting known vulnerabilities, locations in the infrastructure where encryption wasn’t implemented, and more. The demo included a simulated hack on the Tanzu Tees application and showed how Carbon Black and AppDefense detected the intrusion attempt.

The “Manage” segment brought Purnima Padmanabhan to the stage. Wavefront was the first product up here, collecting metrics from the components of the Tanzu Tees apps and drilling down into individual microservices to diagnose performance problems- in this demo identifying a specific SQL query which was the root cause.

Project Magna was next up in the demonstrations- this uses AI and ML to optimise application performance- in this example by modifying cache size based on the current workload on the storage device.

CloudHealth was used by Tanzu Tees to analyse the usage of the components of the applications and recommend right-sizing of VMs and produce budget alerts to help proactively manage cloud spend.

The final section- “Experience” – was led by Shikha Mittal who continued the demo heavy theme by showing how Horizon Virtual Desktops sites can be created on both AWS and Azure clouds and use on-premises style images alongside the Microsoft Windows Virtual Desktops deployments of Windows 10.

VMware Workspace ONE was shown managing a variety of end user devices, and connecting to Carbon Black to spot anomalies in device behaviour - for example malicious logins and potentially compromised endpoints. Again, VMware uses this internally for their 60,000 endpoints across the globe.

The new CTO of VMware, Greg Lavender, closed out the presentations talking through some of the forward-looking activities of his office including using Bitfusion appliances to provide GPU resources across a network thus sharing a pool of GPU resources amongst a CPU-only ESX infrastructure.

In summary this was a session full of product demonstrations - definitely worth a watch, even if only to pick out the bits relevant to you. The full keynote (1 hour) is now available on YouTube.


Azure Arc Announcement

Microsoft released an 87-page “Book of New” listing the announcements from this week’s Ignite conference, and right at the top is Azure Arc. It’s not just alphabetical order that puts this new product there; in my opinion this is a real step forward by Microsoft towards fulfilling the early promise of their Azure Hybrid Cloud model.

Arc’s first feature provides the ability to run Azure data services – Azure SQL Server and friends- on any platform, be it on-premises, on an edge device, or in the public cloud. We saw VMware advertising this from their point of view in the VMworld Europe keynote this week. Bringing Platform-As-A-Service to your own platform, or those at another cloud provider, is an interesting concept and vital to the idea of a true hybrid environment where you can run any app on any cloud.

Whilst Azure Stack provided “Azure consistent hardware” in your datacentre, Azure Arc continues this journey - in essence expanding what “Azure consistent” means to the customer in terms of data services.

Azure Arc also extends the security, governance and management from Azure into other environments – coming back to a single architecture.

Azure hybrid innovation anywhere infographic

For me this is the key feature of this technology. With Azure Arc sitting at the heart of the Azure Hybrid model we’re one step closer to that utopia where the datacentre is abstracted away in the same way that virtualisation abstracted away the server hardware. You can do this abstraction in the public clouds, but there are still workloads that have regulatory, financial, or technical reasons for staying on-premises (or even a different public cloud) and until now managing these alongside Azure has meant two different platforms.


Previously Azure Stack (and to a certain extent Azure Stack HCI) came close to providing this true hybrid functionality for Microsoft, but there was still a disconnect - you had to visit a separate Azure portal to manage your on-premises Azure Stack “Region”, for example.

In the Arc environment, an Azure agent is deployed to non-Azure VMs (or physical servers) and then they appear on the Azure Portal as a regular resource. Policies can be applied and compliance audited (remediation is expected in the “next few months”). The people in your Security Team who got excited about what was possible with Policies in Azure can now apply the same policy features to VMs in your datacentre and from the same interface.


As I implied above, this is still a journey in progress and I believe Microsoft have further to travel down this roadmap, but this is definitely a big step along their way and provides very useful features now and promise of an even brighter future.

As you would expect, there are a number of recorded sessions from Microsoft Ignite 2019 covering this new product following its announcement in the keynotes. If you’re interested in finding out more I would suggest starting with BRK2208: Introducing Azure Arc. Azure Arc is currently available in preview and usable from the portal today.
