Author Archives: Chris

AZ-104 Azure Administrator Associate

Last week the Microsoft AZ-104 exam went live and all those who took the test during the beta period (myself included) were issued with their results. The good news is: I passed!

This post will discuss the exam and some of the learning resources I used. Just in case you're looking for a brain-dump, sorry: I'm not going to give out example questions I remembered from my test paper, or even "I had 7 questions on PowerShell, 4 on Application Gateways, and 3 on Custard Flavours". That's not allowed, and it's not really helpful for anyone honestly trying to pass the exam.

Stepping up from Fundamentals

Unsurprisingly, the questions were a natural step up from the AZ-900 Fundamentals exam I took last year, and the content here felt more focused on admin tasks and less on the general "Cloud Computing" viewpoint.

This is to be expected as the exam is the next level up on the qualification ladder, but it also fits in with Microsoft's role-based certification approach. In the past there were more product-based exams – "Learn everything about Windows Server 2012", for example – but these have been replaced with a "Learn everything you need to be an Infrastructure Admin" (or "Security Engineer", or "Data Engineer") approach. Check out this chart for details of the current certification offerings and how they fit these roles.

Preparation Materials

Earlier in June I sat the official 4-day course (M-AZ104), hosted by Global Knowledge. The trainer, myself, and the 17 other students were all connected online, as the in-person classroom options are not available at the moment. This method works – and I took plenty of notes – but it's not quite the same as all being together in person. For starters, we had to provide our own biscuits!

In post-exam hindsight I think the course probably covered all the material, but perhaps not every topic was covered to the depth the exam requires. So, in addition to hands-on experience, I supplemented my notes from the training with lab work and other materials to reinforce the areas I felt weaker on. I used some of the following:

The Questions

My exam had a couple of case-study-type sections, where you're given a number of questions around a common environment/problem description. This was followed by a big "normal" multiple-choice section in the middle, and then a surprise extra case study at the end (roughly the last 10 questions or so). There's plenty of time to complete the test, but manage it carefully as you can't go back to previous sections once you move on.

From memory, all of my questions were multiple choice, or “put these items in order” kind of questions. There is no lab environment in this exam.

Content-wise, it was a real spread across everything on the syllabus. In particular, make sure you know the proper Azure terms and where they apply – Availability Zones vs Availability Sets, for example. It would be easy to lose marks by picking the wrong one in a given situation, or by choosing an answer containing a term that doesn't even exist.
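
As a quick illustration of that distinction, here's a minimal PowerShell sketch (the resource group, names, image, and region are placeholders I've made up, and the resource group is assumed to already exist) showing a VM placed in an Availability Set versus one pinned to an Availability Zone:

# An Availability Set spreads VMs across fault/update domains within a datacentre
New-AzAvailabilitySet -ResourceGroupName "rg-demo" -Name "avset-demo" `
    -Location "uksouth" -Sku Aligned `
    -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5

$cred = Get-Credential   # local admin credentials for the new VMs

New-AzVM -ResourceGroupName "rg-demo" -Name "vm-in-set" -Location "uksouth" `
    -Image "Win2019Datacenter" -Credential $cred `
    -AvailabilitySetName "avset-demo"

# An Availability Zone pins the VM to a physically separate zone within the region
New-AzVM -ResourceGroupName "rg-demo" -Name "vm-in-zone" -Location "uksouth" `
    -Image "Win2019Datacenter" -Credential $cred `
    -Zone "1"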

AZ-104 succeeded AZ-103 as the exam for this qualification, and the main areas of new syllabus content I spotted were around containers (Azure Container Instances and Azure Kubernetes Service) and Web Apps (including App Services and App Service Plans). Most of the AZ-103 learning material is therefore still valid, but make sure you check the updated list of skills measured. I think a basic, general understanding of Kubernetes/Docker/containers would also be worthwhile for that section.
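
For a flavour of the Web Apps side of that new content, here's a minimal PowerShell sketch (the resource group, names, and region are made-up placeholders, and Web App names need to be globally unique) creating an App Service Plan and a Web App inside it:

# Create an App Service Plan (Standard tier, one small worker)
New-AzAppServicePlan -ResourceGroupName "rg-demo" -Name "plan-demo" `
    -Location "uksouth" -Tier "Standard" -NumberofWorkers 1 -WorkerSize "Small"

# Create a Web App inside that plan
New-AzWebApp -ResourceGroupName "rg-demo" -Name "webapp-demo-example01" `
    -Location "uksouth" -AppServicePlan "plan-demo"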

Online Exam

Thanks to the COVID-19 situation, I took this exam from home- not something I’ve done before as I’ve usually gone along to the local testing centre. Here’s a few tips if you’re planning on doing the same:

  1. Find a space at home that’s nice and free of clutter. You can’t have your “Azure for Dummies” poster on the wall and techie books lying around. Also, remember no-one can walk into, or be overheard in, your exam space during the test.
  2. Ensure you have a stable network connection – you might want to kick the kids off Netflix. I also ran a long Ethernet cable from my broadband router to the room I was taking the test in, to avoid Wi-Fi hiccups.
  3. Prior to the exam (with Pearson) you're offered a chance to test your environment. It makes sense to do this on the day of your exam, but be aware that it takes some time: you're not only testing the network/webcam/microphone but also going through the process of taking and uploading photos of your testing space – and you'll have to repeat that bit prior to the exam itself.

If you’re planning on taking AZ-104 soon I hope this all helps, and good luck!

Datrium @ vRetreat May 2020

Last week I received an invitation to the latest in the vRetreat series of events. These events bring together IT vendors and a selected group of tech bloggers- usually in venues like football clubs and racetracks, but in the current circumstances we were forced online. The second of the two briefings at the May 2020 event came from Datrium.

To paraphrase their own words, Datrium was founded to take the complicated world of Disaster Recovery and make it simpler and more reliable – they call this DR-as-a-Service. The focus of this vRetreat presentation was their ability to protect an on-premises VMware virtual environment using a VMware Cloud on AWS Software-Defined Data Centre (SDDC) as the DR target.

These days the idea of backing up VMs to a cloud storage provider and then being able to quickly restore them is fairly commonplace in the market. Datrium, however, take this a step further, integrating the VMware-on-AWS model to reduce RTO while also ensuring reliability by enabling easy, automated test restores.

When Disaster Strikes

In the event of a disaster, Datrium promises a one-click failover to the DR site through its ControlShift SaaS portal. One of the great benefits here is that the DR site – or at least the compute side of it – doesn't exist until that failover is initiated. This means the business isn't paying for hardware to sit idly by just in case there's a disaster.

The backup data is pushed up to "cheap" AWS storage and, at the point the failover runbook is activated, a vSphere cluster is spun up and that storage is mounted directly as an NFS datastore. VMs can then start to be powered on as soon as the hosts come online, with Datrium handling any required changes to IP addresses and so on.

Whilst the system is running in this DR state, changes are monitored so that, when the on-premises environment is restored, failback only requires the delta changes to be synchronised back from the cloud. At this point the VMware environment on AWS is removed until the next time one is required.

Testing – Practice Makes Perfect

This ability to spin up and decommission the entire DR site on demand enables realistic testing to be performed without risk to the production workloads. Test restores and workload-specific tests can be run against the test environment, but the SDDC built on AWS only exists for the duration of the test.

The Datrium platform includes runbooks, and these are not restricted to disaster events – they can also be used to automate testing. On a schedule, the system will spin up some or all of the VMware environment in a temporary SDDC, run the specified tests, then shut down and destroy the test infrastructure when complete. The results of this testing are compiled into an audit report.

Conclusion

As I've alluded to at the top of this post, there are plenty of "Backup" and "DR" products out there servicing Enterprise IT and leveraging the public cloud to do so. Of those, I think Datrium is worth considering, particularly if you are focussed on protecting a vSphere environment with a short RTO and are interested in using VMware on AWS as a DR solution, but not keen on the not-insubstantial cost of running that DR SDDC 24/7.

Please read my standard Declaration/Disclaimer and, before rushing out to buy anything, bear in mind that this article is based on a sales discussion at a sponsored event rather than a POC or production installation. I wasn't paid to write this article or offered any payment, aside from being entered into a prize draw of delegates to win a chair (I was not a winner).

Snapt @ vRetreat May 2020

Last week I received an invitation to the latest in the vRetreat series of events. These events bring together IT vendors and a selected group of tech bloggers- usually in venues like football clubs and racetracks, but in the current circumstances we were forced online. The first of the two briefings at the May 2020 event came from Snapt.

Established in 2012, Snapt is built on a product range they refer to as "Load Balancing Plus" – taking in load balancing, web acceleration, and firewalling. Their recent flagship release, "Nova", enables the deployment and scaling of these load balancers across multiple environments.

It’s an interesting approach for anyone working in a multi-cloud environment, for example with workloads in vSphere, AWS, and Azure, who wants a consistent method of deploying, securing, and maintaining their load balancers in all of these clouds from one SaaS platform.

Snapt achieve this by separating their control and data planes – the SaaS control plane is managed through a clean web-based dashboard or via API calls, and from there the nodes are deployed to the target infrastructures as VMs, containers, or cloud devices, depending on the platform.

This separation of the nodes from the control plane adds potential for scaling, helped by the nodes' stateless nature: logs are streamed directly out of each node and its parameters are pulled down from the control plane. Nodes also expose interfaces to allow direct monitoring from third-party applications, in addition to the monitoring provided by the Nova dashboard.

Down on the node, the load balancing features are supported by a wide set of security tools. Traditional blacklists and whitelists are complemented by more advanced features such as geofencing and anomaly detection. Activity here is reported back up to the dashboard to give admins a clear, global view of load and threats across their environments.

Whilst there are plenty of other load balancing solutions on the market, based on this briefing I'd say Snapt are well worth a look, particularly if the requirement is for a multi-cloud environment.

There is a Community Edition of Nova available which allows up to 5 nodes free of charge – check out https://nova.snapt.net/pricing for details.

Please read my standard Declaration/Disclaimer and, before rushing out to buy anything, bear in mind that this article is based on a sales discussion at a sponsored event rather than a POC or production installation. I wasn't paid to write this article or offered any payment, aside from being entered into a prize draw of delegates to win a chair (I was not a winner).

Check Azure WebApps have Backup Configured

Azure WebApps (depending on tier) come with an optional native backup service. This quick PowerShell snippet looks at all the WebApps in the current subscription and reports back on whether Backup has been set up. This should be helpful for spotting where a configuration has been missed.

Use Set-AzContext to set the subscription in advance, and to restrict the check to an individual resource group, add the -ResourceGroupName parameter to the Get-AzWebApp cmdlet in the first line.

# Requires the Az.Websites module; Get-Error needs PowerShell 7 or later
foreach ($WebApp in Get-AzWebApp) {
  # Try to read the backup configuration, suppressing the error if none exists
  if (Get-AzWebAppBackupConfiguration `
      -ResourceGroupName $WebApp.ResourceGroup `
      -Name $WebApp.Name `
      -ErrorAction SilentlyContinue) {
    $WebApp.Name + " Backup Configured"
  } elseif ((Get-Error -Newest 1).Exception.Response.Content `
      -like "*Backup configuration not found for site*") {
    # The cmdlet failed because no backup has been set up for this WebApp
    $WebApp.Name + " Backup Not Configured"
  }
}
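
For example, to point the check at a particular subscription first, and optionally narrow it down to a single resource group (the subscription and resource group names below are placeholders), something like this should do the trick:

# Select the subscription to check (name is a placeholder)
Set-AzContext -Subscription "My Production Subscription"

# Optional: limit the check to one resource group by changing the first line
foreach ($WebApp in Get-AzWebApp -ResourceGroupName "rg-webapps-prod") {
    # ...same if/elseif body as above...
}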

Using New-AzFirewallNetworkRule with multiple ports or IP ranges

When creating an Azure Firewall rule with multiple ports or IP ranges using the PowerShell "New-AzFirewallNetworkRule" cmdlet, you may get an error like this:

Invalid IP address value or range or Service Tag 192.168.64.0/18,10.1.0.0/16.
StatusCode: 400
ReasonPhrase: Bad Request
ErrorCode: AzureFirewallRuleInvalidIpAddressOrRangeFormat

or

Invalid port value or range. User ports must be in [1, 65535]
StatusCode: 400
ReasonPhrase: Bad Request
ErrorCode: AzureFirewallRuleInvalidPortOrRangeFormat

The incorrect code causing these messages passes the source addresses or destination ports as a single comma-delimited string, as you would type them in the Azure Portal, as shown here:

#Incorrect Code
$netRule = New-AzFirewallNetworkRule `
     -Name "FirewallRule1" `
     -Description "Rule for HTTP,SMB traffic" `
     -Protocol "TCP" `
     -SourceAddress "192.168.64.0/18,10.1.0.0/16" `
     -DestinationAddress "172.20.1.1/28" `
     -DestinationPort "139,445,80"

However, the cmdlet expects an array of strings here rather than a single comma-delimited string, so ("192.168.64.0/18","10.1.0.0/16") rather than "192.168.64.0/18,10.1.0.0/16". The correct version of the above code snippet is as follows:

#Corrected Code
$netRule = New-AzFirewallNetworkRule `
     -Name "FirewallRule1" `
     -Description "Rule for HTTP,SMB traffic " `
     -Protocol "TCP" `
     -SourceAddress ("192.168.64.0/18","10.1.0.0/16") `
     -DestinationAddress "172.20.1.1/28" `
     -DestinationPort ("139","445","80")