Tag Archives: VSAN

vSAN Cluster Shutdown

A few weeks ago I had to shut down a vSAN cluster temporarily for a planned site-wide 24-hour power outage that was blacking out a datacentre. With plenty of warning and a multi-datacentre design this wasn’t an issue, but I made use of vSphere tags and some PowerShell/PowerCLI to help with the evacuation and repopulation of the affected cluster. Hopefully some of this will be useful to others.

The infrastructure has two vSAN clusters – Cluster-Alpha and Cluster-Beta. Cluster-Beta was the one affected by the power outage, and there was sufficient space on Cluster-Alpha to absorb the migrated workloads. Whilst they sit in different datacentres, both clusters are on the same LAN and under the same vCenter.

I divided the VMs on Cluster-Beta into three categories:

  1. Powered-off VMs and templates. These were to stay in place; they would be inaccessible for the outage, but I determined this wouldn’t present any issues.
  2. VMs which needed to migrate and stay on. These were tagged with the vSphere tag “July2019Migrate”.
  3. VMs which needed to be powered off but not migrated, for example test/dev boxes which were not required for the duration. These were tagged with “July2019NOMigrate”.
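The tags themselves can be applied in bulk with PowerCLI. A minimal sketch, assuming a tag category already exists – the category name and VM names below are hypothetical:

```powershell
# Create the two tags (assumes a tag category named "Maintenance" already exists)
New-Tag -Name "July2019Migrate" -Category "Maintenance"
New-Tag -Name "July2019NOMigrate" -Category "Maintenance"

# Assign the migrate tag to a hypothetical list of VMs which must stay running
$migrateTag = Get-Tag -Name "July2019Migrate"
Get-VM -Name "app-server-01", "db-server-01" | New-TagAssignment -Tag $migrateTag
```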

The tagging was important, not only to make sure I knew what was migrating and what was staying, but also what we needed to move back or power on once the electrical work had completed. PowerCLI was used to check that all powered-on VMs in Cluster-Beta were tagged one way or another.

Get the VMs in Cluster-Beta where the tag “July2019Migrate” is not assigned, the tag “July2019NOMigrate” is not assigned, and the VM is powered on:

Get-Cluster -Name "Cluster-Beta" | Get-VM | Where-Object {
 (Get-TagAssignment -Entity $_).Tag.Name -notcontains "July2019Migrate" -and
 (Get-TagAssignment -Entity $_).Tag.Name -notcontains "July2019NOMigrate" -and
 $_.PowerState -eq "PoweredOn" }

In the week approaching the shutdown the migration was kicked off:

#Create a list of the VMs in the source cluster which are tagged to migrate
$MyTag = Get-Tag -Name "July2019Migrate"
$MyVMs = Get-Cluster "Cluster-Beta" | Get-VM | Where-Object { (Get-TagAssignment -Entity $_).Tag.Name -contains $MyTag.Name }
#Do the migration
$TargetCluster = "Cluster-Alpha" #Target cluster
$TargetDatastore = "vSANDatastore-Alpha" #Target datastore on target cluster
$MyVMs | Move-VM -Destination (Get-Cluster -Name $TargetCluster) -Datastore (Get-Datastore -Name $TargetDatastore) -DiskStorageFormat Thin -VMotionPriority High

At shutdown time, a quick final check of the remaining powered-on VMs was done and then all remaining VMs in Cluster-Beta were shut down. Once there were no running workloads on Beta it was time to shut down the vSAN cluster. This part I didn’t automate as I’m not planning on doing it a lot, and there’s comprehensive documentation on the VMware Docs site. The process is basically one of putting all the hosts into maintenance mode and then, once the whole cluster is done, powering them off.
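For reference, the per-host part of that process could be scripted too. A hedged sketch, assuming all workloads are already off and the whole cluster is coming down (so there is no point evacuating vSAN data to other hosts):

```powershell
# Put every host into maintenance mode without evacuating vSAN data,
# then power the hosts off once they are all in maintenance mode
$vsanHosts = Get-Cluster -Name "Cluster-Beta" | Get-VMHost
$vsanHosts | Set-VMHost -State Maintenance -VsanDataMigrationMode NoDataMigration
$vsanHosts | Stop-VMHost -Confirm:$false
```

The NoDataMigration mode keeps each host entering maintenance mode quick, at the cost of object availability – acceptable here precisely because everything is being switched off anyway.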

You are in a dark, quiet datacentre. There are many servers, all alike. There may be Grues here.

When power was restored, the process was largely reversed. I powered on the switches providing the network interconnect between the nodes, then powered on the vSAN hosts and waited for them to come up. Once all the hosts were visible to vCenter, it was just a case of selecting them all and choosing “Exit Maintenance Mode”.
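That last step has a one-line PowerCLI equivalent, should you prefer it:

```powershell
# Take all hosts in the cluster out of maintenance mode
Get-Cluster -Name "Cluster-Beta" | Get-VMHost | Set-VMHost -State Connected
```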


There was a momentary flash of alerts as nodes came up and wondered where their friends were, but in under a minute the cluster was passing the vSAN Health Check.


At this point it was all ready to power on the VMs that had been shut down and left on the cluster, and to vMotion the migrated virtual machines back across. Again, PowerCLI simplified this process:

#Create a list of the VMs which stayed on Cluster-Beta but need powering on
$MyTag = Get-Tag -Name "July2019NOMigrate"
$MyVMs = Get-Cluster "Cluster-Beta" | Get-VM | Where-Object { (Get-TagAssignment -Entity $_).Tag.Name -contains $MyTag.Name }
#Power on those VMs
$MyVMs | Start-VM

#Create a list of the VMs on Cluster-Alpha which are tagged to migrate (back)
$MyTag = Get-Tag -Name "July2019Migrate"
$MyVMs = Get-Cluster "Cluster-Alpha" | Get-VM | Where-Object { (Get-TagAssignment -Entity $_).Tag.Name -contains $MyTag.Name }
#Do the migration
$TargetCluster = "Cluster-Beta" #New target cluster
$TargetDatastore = "vSANDatastore-Beta" #Target datastore on target cluster
$MyVMs | Move-VM -Destination (Get-Cluster -Name $TargetCluster) -Datastore (Get-Datastore -Name $TargetDatastore) -DiskStorageFormat Thin -VMotionPriority High

Then it was just a case of waiting for the data to flow across the network and finally checking that everything had migrated successfully and normality had been restored.

we have normality, I repeat we have normality…Anything you still can’t cope with is therefore your own problem. Please relax.

Trillian, via the keyboard of Douglas Adams. The Hitchhiker’s Guide to the Galaxy

Hyper-Converged Cynicism

Or “How I’ve come to love my vSAN Ready Nodes”

I’ll admit it, some years ago I was very cynical about HyperConverged Infrastructure (HCI). Outside of VDI workloads I couldn’t see how it would fit in my environment – and this was all down to the scaling model.

With the building-block architecture of HCI, storage, compute, and memory are all expanded in a linear fashion. Adding an extra host to the cluster to expand the storage capacity also increases the available memory and CPU in the pool of resources. But my workloads were varied: one day we might get a new storage-intensive application, the next week it might be one which was memory-intensive. I was used to independently expanding storage through a SAN and compute/memory through the servers, and didn’t want to be either running up against a capacity wall or purchasing unnecessary compute just to cater for storage demands.

This opinion changed when my own HCI journey started in 2017 with the purchase of a VMware vSAN cluster built on Dell Ready Nodes. Whilst I’ll be writing about that particular technology here, the principles apply to other HCI infrastructures.

If the problem of HCI is scaling, the solution is scale. These imbalances in load and growth even out once a number of VMs are on the system, and this scale doesn’t have to be massive: even from the 4-host starting point of a vSAN cluster, I found that when the time came to install node 5 the demands on storage and memory were roughly matched to the relevant capacities of the new node.

The original hosts need to be sized correctly, but unless you’re starting in a totally greenfield environment then you will have existing hosts and storage to interrogate and establish a baseline on current usage requirements. Use these figures, allow appropriate headroom for growth, and then add a bit more (particularly when considering the storage) to prevent the new infrastructure from running near capacity. Remember you are trading a certain level of efficiency for resilience – the cluster needs to be able to withstand at least one host loss and still have plenty of capacity for manoeuvre.

If you are going down the vSAN route, I can thoroughly recommend the ReadyNode option. Knowing that hardware will arrive and just work with the software-defined storage layer, without spending hours digging in the Hardware Compatibility Lists, was a great time saver, and we’re confident that we can turn round to our vendors and say “this didn’t work” without getting told “it’s because you’ve got disk controller chipset X and that’s not compatible with driver Y on version Z”. There’s a reason I named this blog “IT Should Just Work”.

When expanding the cluster, I consider best practice to be expanding with hosts of as similar a configuration as possible to the originals. If larger nodes are added (for example, because storage/memory/CPU is now cheaper/bigger/faster) they can create a performance imbalance in the cluster: a process running on host A might get a 2.2GHz CPU, while the same process on host B with a 3GHz CPU finishes sooner, so performance varies depending on placement. Also worth considering is what happens when a host fails, or is taken into maintenance mode for patching. If this host is larger than its compatriots then (without very careful planning and capacity management) there might not be sufficient capacity on the remaining hosts to keep the workloads running smoothly.

It is possible in vSAN to add “storage-only” nodes, reducing the memory and possibly going single-socket (this saves on your license cost too!) and then using DRS rules to keep VMs off the host. Likewise “compute-only” nodes are possible, where the host doesn’t contribute any storage to the cluster. Whilst there are probably specific use-cases for both these types of nodes, the vast majority of the time I believe them to be best avoided. Without very careful consideration of workloads and operational practices these could easily land you in hot water.
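For completeness, fencing workloads off a storage-only node would look something like the sketch below, using DRS cluster groups and a “should not run” rule. The group, host, and rule names here are all hypothetical:

```powershell
# Group the workload VMs and the storage-only host, then keep the VMs away from it
$cluster   = Get-Cluster -Name "Cluster-Alpha"
$vmGroup   = New-DrsClusterGroup -Name "AllWorkloads" -Cluster $cluster -VM (Get-VM -Location $cluster)
$hostGroup = New-DrsClusterGroup -Name "StorageOnlyNodes" -Cluster $cluster -VMHost (Get-VMHost -Name "esx-storage-01")
New-DrsVMHostRule -Name "KeepOffStorageNode" -Cluster $cluster -VMGroup $vmGroup -VMHostGroup $hostGroup -Type ShouldNotRunOn
```

Note that a “should” rule is advisory – DRS can still place VMs on the node under resource pressure, which is exactly the sort of operational subtlety that makes these node types easy to get wrong.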

So, I’m a convert. Two years down the line here and HCI is the on-premises infrastructure I’d recommend to anyone who asks. And those clouds gathering on the horizon? Well, if you migrate to VMware Cloud on AWS then you’re going to be running vSAN HCI there too!

vSAN- Controller Driver is (not) VMware Certified

In the process of upgrading a vSAN ReadyNode cluster from ESXi 6.5 to 6.7 a warning appeared in the vSAN Health check. The first host in the cluster had gone through the upgrade and was now showing the warning “Controller driver is VMware certified” (Note 1 in the image below, click on it for a larger view). The Dell HBA330 card was using an older version of the driver (2 in the image below) than recommended (3).


All workloads were still online, but running VMware Update Manager (VUM) did not clear this warning. Looking in the VUM patch listing showed the driver for ESXi 6.5 (4) but not the version recommended for 6.7.



It was necessary to load the replacement driver manually. A quick Google showed it could be sourced from VMware’s download site. Extract the ZIP file from the download and then use the “Upload from File” option in VUM (5) to upload the ZIP file which was inside (in this case “VMW-ESX-6.7.0-lsi_msgpt3-“). The new driver should then appear in the list (6) and will automatically be added to the “Non-Critical Host Patches” baseline (7). Final remediation is then just a case of applying that baseline to the host.
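As an aside, for a single host the same driver bundle can be installed without VUM by copying it to a datastore and running esxcli in an SSH session on the host. A sketch, with a hypothetical datastore path standing in for wherever you upload the bundle:

```shell
# Install the offline bundle directly on the host; a reboot is needed afterwards
esxcli software vib update -d /vmfs/volumes/datastore1/driver-offline-bundle.zip
```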


In this particular instance the hosts were Dell PowerEdge R630 vSAN ReadyNodes with the HBA330 SAS HBA Controller option but the principles outlined in this post should apply to other configurations with the same symptoms.

VMworld 2018 Banner

vSAN Scalable File Services

One of the new developments that caught my eye at VMworld this year was the introduction of file services to the VMware vSAN software-defined storage platform. vSAN already offers VMDK storage to vSphere and the ability to host iSCSI volumes, but this feature will allow NFS and SMB file-shares to be hosted directly on the cluster without the need for a separate Windows Server or NFS provider.


Yanbing Lee and Duncan Epping discuss vSAN at VMworld Europe 2018


vSAN Scalable File Services is a layer that sits on top of vSAN to provide SMB, NFS (and others in future) file shares. It’s comprised of a vSAN Distributed File System (vDFS) which provides the underlying scalable filesystem by aggregating vSAN objects, a Storage Services Platform which provides resilient file server end points, and a control plane for deployment and management.

File shares are created using the vCenter GUI or via API calls from an automation platform, and the demos at VMworld included all the functionality you’d expect with permissions, quotas and so on.

An interesting point is that all the file shares are integrated into the existing vSAN Storage Policy Based Management, on a per-share basis. Therefore FTT, encryption, thin provisioning, and so on can all be defined at a pretty granular level. So if only one of your file shares has an encryption requirement, that’s just a case of setting the policy in a drop-down list; likewise if a particular file share must be configured to be site-failure resilient across a stretched cluster.


Why would you want to do this? Well, a couple of use cases immediately sprang to mind. Firstly, the small office/remote office/branch office scenario. A company wants to host both virtual machines and file services in a compact environment: currently the choice would be a NAS plus compute hosts, or possibly going hyper-converged but running a VM within it to serve the file data from a VMDK. vSAN File Services simplifies this by providing that NFS/SMB provision from within the hypervisor, which also means that all the benefits of resilience, deduplication, compression, and encryption can be extended to the file services.

The second case was for a SAN replacement- a traditional SAN is basically an expandable cluster of x86 servers loaded with disks running some file+disk management software. vSAN is the same thing, but can also run VM workloads. It would be an interesting price/feature comparison exercise to compare the two methodologies.


This offering is currently in Public Beta – details at the bottom of this article. NFS 4.1 with AD Authentication is expected at release, with SMB, OpenLDAP, vSAN Data Protection and other functionality to follow. Obviously this is all subject to change as VMware are still at the Beta stage, and a release date has not yet been confirmed.

Further Information

  • HCI3041BE – VMworld Europe 2018 session: Introducing Scalable File Storage on vSAN with Native File Services (Video and Slides)
  • HCI3728KE – VMworld Europe 2018 session:  Innovating Beyond HCI: How VMware is Driving the Next Data Center Revolution (Video)
  • www.vmware.com/go/vsan-beta – Sign up for the Beta. Phase 2 includes the ability to test vSAN File Services in your own lab environment.

VMworld Europe 2018

VMworld 2018 US: HCI1469BU- The Future of vSAN and Hyperconverged Infrastructure

This “HCI Futures” session at VMworld US was hosted by two VPs from the Storage and Availability Business Unit, plus a customer guest. It covered the new features recently added to the vSAN environment with the release of 6.7 Update 1, alongside discussion of the possible future direction of VMware in the Hyper-Converged Infrastructure space. I caught up with the session via the online recording.

HCI is a rapidly growing architecture, with both industry-wide figures from IDC and VMware’s own figures showing massive spending increases. In the week of this VMworld, the 4-year-old vSAN product was boasting 15,000 customers. We are told customers are embarking on journeys into the Hybrid Cloud and looking for operational consistency between their on-premises and public cloud environments.

The customer story incorporated into this breakout session was provided by Honeywell. They were an early adopter of vSAN in 2014, starting with the low-risk option of  hosting their management cluster on the technology. Since then they have replaced much of their traditional SAN infrastructure and are now boasting 1.7 Petabytes of data on vSAN, with compression and de-duplication giving them savings of nearly 700TB of disk.

VMware is pushing along several paths to enhance the product- the most obvious is including new storage technologies as they become available. All-flash vSAN is now commonplace, with SSDs replacing traditional spinning disk in the capacity tiers. Looking to the future, the session talked of the usage of NVMe and Persistent Memory (PMEM) developments – storage latency becoming significantly less than network latency for the first time. This prompts a move away from the current 2-tier model to one which incorporates “Adaptive Tiering” to make best use of the different storage components available.


In the public cloud – in particular the VMware Cloud on AWS offering – there have been customers who want to expand storage faster than compute. In the current model this hasn’t been possible due to the fixed-capacity building blocks that HCI is known for. This is being addressed in 6.7U1 by adding Amazon’s Elastic Block Store (EBS) as a storage target for the environment. vSAN Encryption using the Amazon KMS is also included, along with the ability to utilise the Elastic DRS features when using AWS as a DRaaS provider for a vSphere environment.

vSAN is also moving away from its position as “just” the storage for virtual machines. Future developments include the introduction of file storage, and the ability to do some advanced data management: classifying, searching, and filtering the data.

With all this data being stored, VMware is looking to enhance the data protection functionality in the platform. Incorporation of native snapshots, with replication to secondary storage (and cloud) for DR purposes, increases the challenge to “traditional” storage vendors, and (although it was played down in this talk) encroaches further into the backup space populated by a large group of VMware partners.

Cloud Native applications are also being catered for with Kubernetes integration- using application-level hooks to leverage snapshots, replication, encryption, and backups all through the existing vCenter interface.

If you want more information, the recording of this session is available on the VMworld site: https://videos.vmworld.com/searchsite/2018?search=HCI1469BU. To sign up to the vSAN Beta, which covers some of the Data Protection, Cloud Native Storage, and File Services functionality, visit http://www.vmware.com/go/vsan-beta