A video of the Cohesity vRetreat session from July 2020, courtesy of Patrick Redknap.
How Cohesity’s Approach to VM Backup Affects the Recovery Time Objective
This week I attended another vRetreat online, this time featuring data management vendor Cohesity, whom I saw presenting at the (in-person) event last year. These are great events, and the small panel of delegates works well in the virtual format.
One thing that stood out to me in their presentation was the focus on the Recovery Time Objective (RTO): in essence, how long it takes to recover from an incident. In this post I will briefly discuss how I understand the definition of RTO before looking at how the Cohesity products work to keep this time down when working with virtual machines.
Recovery Time Objective
There’s plenty of material out on the interwebs that explains RTO in great detail, but I’m taking the definition to be:
“the expected length of time between an incident occurring and users being able to work normally again”
As this diagram shows, the time can be split into a number of notable sections; I’ve chosen the following three:
Discovering the Incident. How long is it before we notice something is broken? Do we have to wait for a user to contact the service desk, or do we have responsive monitoring and alerting in place?
Starting the Restore. How long does it take to actually start the restore operation? Is there a clear process to be followed? There might be internal decisions to be made as to whether to kick off a backup restore or attempt an in-place repair. Does somebody need to physically power on some equipment or find and load some tapes before a backup restore can commence?
The Restore Operation. How long does it take between “Go” being pushed on the restore console and the service being usable again?
You’ll notice there’s also a fourth section on the diagram: the “Tidy Up”. This is all the processes that need to happen after the user is working again to get the system back into a normal state. This might include things like tidying up the original (broken) copies of the VM, returning a backup tape to the library, or investigating the root cause. In any of these cases, I’ve put this step outside the RTO because, by the definition above, the users are working normally again.
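To make the arithmetic concrete, here’s a minimal sketch (my own illustration, not a Cohesity tool, and the phase names are my own) of how the three phases add up to the RTO, with the tidy-up deliberately left outside the total:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    discovery_mins: int   # time until someone notices the break
    decision_mins: int    # time to decide on and start the restore
    restore_mins: int     # time from "Go" to users working again
    tidy_up_mins: int     # post-recovery work, outside the RTO

    @property
    def rto_mins(self) -> int:
        # Only the first three phases count towards the RTO; by the
        # definition above, users are already working during tidy-up.
        return self.discovery_mins + self.decision_mins + self.restore_mins

incident = Incident(discovery_mins=15, decision_mins=30,
                    restore_mins=45, tidy_up_mins=120)
print(incident.rto_mins)  # 90
```

Anything that shortens one of those three phases shortens the RTO, which is the lens I’ll use for the rest of this post.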
Recovery from ransomware attacks seems to be the current favoured feature pushed by backup vendors, and Cohesity are no exception. Their take here is that because the Cohesity Data Platform handles all the backups, it sees all the data. This position in the data flow gives the rest of the Cohesity stack an opportunity to spot both when an unusual number of files have been changed and when files suddenly can’t be indexed because they’ve been encrypted.
Tied to an alerting mechanism, this helps address our question in point 1 above: “Can we discover the incident quickly?”. The sooner someone in IT is aware that a ransomware infection has happened, the quicker a response can be started.
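Cohesity didn’t share their exact detection logic in the session, but one simplistic way to illustrate the “files suddenly can’t be indexed” idea is to measure Shannon entropy: encrypted data looks close to random, while normal documents don’t. This sketch is purely my own illustration of the principle, not Cohesity’s implementation:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; encrypted/compressed data approaches 8."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    # A crude heuristic: near-maximal entropy suggests ciphertext.
    return shannon_entropy(data) > threshold

plain = b"the quick brown fox jumps over the lazy dog " * 100
random_like = os.urandom(4096)  # stands in for a ransomware-encrypted file

print(looks_encrypted(plain), looks_encrypted(random_like))  # False True
```

A real product would combine signals like this with change-rate anomalies across many backups, which is exactly the vantage point the backup platform has.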
Additionally, regular point-in-time snapshot backups make it easier to spot when the infection started (or, if not the point of infection, at least when the malware started acting), and the more granular the snapshots, the less data is potentially lost between a backup and the incident. But we’re straying into RPO, not RTO, there.
Most of the time when responding to a major incident and orchestrating a restore operation, the user interface will be key to assessing the situation and bringing services back online. Cohesity offers a clean and tidy web-based UI, complete with the now-obligatory Dark Mode.
Whilst the platform isn’t going to make those go/no-go decisions on kicking off a restore, it can influence them. Because the restores are so quick (as we’ll see shortly), the discussion on whether to repair or restore might favour the latter. It’s also possible to bring up the VMs in a network-disconnected state without touching the production systems, so that once any discussions are complete the restore is even quicker (or, if the repair option is chosen, the restore can simply be cancelled).
Restoring User Service
Once recovery is started in Cohesity Data Protect, an NFS datastore is created on the Data Platform. The VMDK is already there, so there is no need to spend time at this point moving blocks across the network. The NFS datastore is mounted within vCenter and the VM registered; at this point the VM can be powered on and the users can get working again.
Once service has been restored, the longer process of putting the VM files back where they belong is achieved with the hypervisor’s own Storage vMotion technology (the fourth, “Tidy Up”, step above). Applications are available throughout this, and once the Cohesity datastore has been cleared, it is unmounted from vCenter.
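The ordering described above can be sketched as a simple sequence. The step names here are my own paraphrase of the workflow as presented, not a Cohesity API; the point is that user service returns before any blocks cross the network:

```python
steps_taken = []

def step(name):
    steps_taken.append(name)

# Recovery phase: serve the VM straight from the backup platform.
step("create NFS datastore on backup platform")  # VMDKs already live here
step("mount NFS datastore in vCenter")
step("register VM")
step("power on VM")                              # users back at work
# Tidy-up phase: runs while the application stays available.
step("Storage vMotion VM to production storage")
step("unmount NFS datastore")

users_working = steps_taken.index("power on VM")
data_moved = steps_taken.index("Storage vMotion VM to production storage")
print(users_working < data_moved)  # True
```

In RTO terms, everything after “power on VM” is tidy-up and falls outside the clock.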
As this slide extract from the Cohesity presentation shows, one of their big selling points is this quick recovery process. Notice how the “Recover data to target storage device” step is positioned after user access is restored.
Thanks to Patrick Redknap and the Cohesity team for hosting this informative event, and I look forward to the next one. For more information about Cohesity, check out their website: https://www.cohesity.com/
Please read my standard Declaration/Disclaimer and before rushing out to buy anything bear in mind that this article is based on a sales discussion at a sponsored event rather than a POC or production installation. I wasn’t paid to write this article or offered any payment, although Cohesity did sponsor a prize draw for delegates at the event.
During the recent #vRetreat event in London, Cohesity presented their latest release of DataPlatform, and with a launch happening the very weekend of the event (26 February 2019) this was a timely presentation. This release included a number of new features, and when following up on the vRetreat event, one which caught my attention was the Cohesity Marketplace.
The Marketplace is designed to allow third parties (plus your internal developers and Cohesity themselves) to release products that plug directly into the Cohesity framework: “bringing applications to the data, versus data to the applications”. From what I have seen of previous integrations, they have been focussed on automating the backup/recovery process, for example using ServiceNow to provide end-users with self-service restores. This marketplace, however, allows third-party applications to interact with and process the data on the secondary storage directly, without it leaving the appliance (or the public cloud storage). I see this as an interesting development, and visiting the website today you can get an idea of how this is going to grow.
Already in the list are analytics providers such as Splunk, and antivirus/threat protection providers such as SentinelOne and ClamAV. The potential here for not just data protection but also analysis and business intelligence operations is intriguing: all that old, dark data that companies hold but don’t make use of should be in this secondary storage, and the ability to tap into it directly opens up many possibilities.
This all sits alongside a new Developer Portal and the existing REST API and PowerShell frameworks provided for the DataPlatform. Apps can be developed in-house, but the big benefit I see is third-party products being presented to admins to deploy, simplifying the traditional method of liaising separately with all the vendors in your environment to try and achieve a level of integration. And because the data is being processed within the Cohesity platform, there are the benefits of additional security, less duplicated storage, reduced network costs, and potentially better performance because we’re not spending time shifting data around to process it.
It’s early days yet, so there’s only a handful of apps available (mid-March 2019), but it will be interesting to see how this develops and whether the work of developing apps falls to Cohesity, or whether partners and third-party vendors will take up the mantle.
For more information, check out this video from Cohesity.
Last week I had the pleasure of attending the latest #vRetreat blogger event. This edition featured a day of presentations and labs from enterprise storage vendor Cohesity held at Chelsea Football Club in London. In my first blog post from the event I look at what Cohesity are doing to distinguish “Secondary Storage” from “Backup Storage”.
There are a number of vendors on the market who can provide enterprises with a backup appliance and support for public cloud storage. Cohesity have looked at this and asked: what other business operations can leverage this (comparatively) cheap storage? I’ve heard their message of “we’re not backup, but secondary storage” before, but at this event the distinction really clicked with me.
Whilst front-line production services often demand the best-performing storage possible, storage for backups (hopefully) doesn’t need to be accessed regularly and doesn’t require the speed of access that front-line systems might. Where possible, organisations will purchase cheap(er) storage for this task, and this can lead to a separate backup storage silo.
If nearly 80 percent of stored data goes unused after 90 days, then the majority of data on NAS/SAN filers also fits these access and performance characteristics, so why not combine the two and reduce the silo count? The Cohesity platform offers SMB and NFS file services, and can also function as an object store. This also helps justify the outlay on backup storage which, like an insurance policy, you hope never to actually need.
Similarly, test and development workloads can often (but not always) run on lower-performance storage than their production counterparts. Again, these functions are looking for similar storage attributes to backup: keep the cost/GB low and don’t impact the performance of our primary production storage.
Cohesity’s DataPlatform consolidates the traditional backup storage platform with the ability to spin out test and dev workloads directly from the backup data, whilst also hosting file and object storage. For example, when the primary storage is upgraded to all-flash, the NAS shares or test workloads that don’t need this level of performance can use the Cohesity platform.
This was an interesting briefing, and for me this part definitely showed the potential in not thinking of your backup infrastructure solely as an insurance policy, but instead finding new ways to leverage that investment elsewhere in the IT function.
Please read my standard Declaration/Disclaimer and before rushing out to buy anything bear in mind that this article is based on a sales discussion at a sponsored event rather than a POC or production installation. I wasn’t paid to write this article or offered any payment, although Cohesity did sponsor the lunch, T-shirt, and stadium tour at the event. Attendees were also given a pair of bright green socks and matching branded shoelaces so you should be able to spot them.