Tag Archives: #vRetreat

Datrium @ vRetreat May 2020

Last week I received an invitation to the latest in the vRetreat series of events. These events bring together IT vendors and a selected group of tech bloggers - usually in venues like football clubs and racetracks, but in the current circumstances we were forced online. The second of the two briefings at the May 2020 event came from Datrium.

To paraphrase their own words, Datrium was founded to take the complicated world of Disaster Recovery and make it simpler and more reliable; they call this DR-as-a-service. The focus of this vRetreat presentation was their ability to protect an on-premises VMware virtual environment using a VMware Cloud on AWS Software-Defined Data Centre (SDDC) as the DR target.

These days the idea of backing up VMs to a cloud storage provider and then being able to quickly restore them is fairly commonplace in the market. Datrium, however, take this a step further, integrating with the VMware Cloud on AWS model to reduce RTO while also ensuring reliability through easy, automated test restores.

When Disaster Strikes

In the event of a disaster Datrium promises a 1-click failover to the DR site through its ControlShift SaaS portal. One of the great benefits here is that the DR site - or at least the compute side of it - doesn't exist until that failover is initiated. This means the business isn't paying for hardware to sit idly by just in case there's a disaster.

The backup data is pushed up to "cheap" AWS storage, and at the point the failover runbook is activated a vSphere cluster is spun up and the storage is mounted directly as an NFS datastore. VMs can then start to be powered on as soon as the hosts come online, with Datrium handling any required changes to IP addresses and the like.
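
For a rough sense of what such a runbook automates, here is a minimal sketch of the failover flow. The names and structure below are purely illustrative stand-ins, not Datrium's or VMware's actual API:

```python
# Illustrative only: these names are hypothetical stand-ins for the kind of
# orchestration a DR runbook performs; they are not Datrium's actual API.
from dataclasses import dataclass, field

@dataclass
class FailoverPlan:
    region: str
    host_count: int
    backup_export: str                 # cloud-backed NFS export holding the backup data
    boot_order: list = field(default_factory=list)
    ip_mappings: dict = field(default_factory=dict)

def execute_failover(plan: FailoverPlan) -> None:
    # 1. Provision the VMware Cloud on AWS SDDC only when failover starts,
    #    so no DR compute sits idle (or gets billed) beforehand.
    print(f"Provisioning {plan.host_count}-host SDDC in {plan.region}")

    # 2. Mount the backup data directly as an NFS datastore - no bulk
    #    restore copy is needed before VMs can start booting.
    print(f"Mounting NFS datastore from {plan.backup_export}")

    # 3. Power on VMs in runbook order, remapping IP addresses for the DR site.
    for vm in plan.boot_order:
        new_ip = plan.ip_mappings.get(vm, "unchanged")
        print(f"Powering on {vm} (IP: {new_ip})")

execute_failover(FailoverPlan(
    region="eu-west-2",
    host_count=3,
    backup_export="backups:/prod-vms",
    boot_order=["dc01", "sql01", "app01"],
    ip_mappings={"sql01": "10.50.0.21"},
))
```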

Whilst the system is running in this DR state, changes are tracked so that when the on-premises environment is restored, failback only requires the delta changes to be synchronised back from the cloud. At that point the VMware environment on AWS is removed until the next time one is required.

Testing – Practice Makes Perfect

This ability to spin up and decommission the entire DR site on demand enables realistic testing to be performed without risk to the production workloads. Test restores can be performed, and workload-specific tests run against them, but the SDDC built on AWS only exists for the duration of the test.

The Datrium platform contains runbooks, and these are not restricted to disaster events but can also be used to automate testing. On a schedule, the system will spin up some or all of the VMware environment in a temporary SDDC, run the specified tests, then shut down and destroy the test infrastructure when complete. The results of this testing are compiled into an audit report.
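
As a hypothetical sketch of that scheduled-testing pattern (again, nothing here is Datrium's real runbook format; the checks and function names are invented for illustration), the idea is roughly:

```python
# Hypothetical sketch of the scheduled DR-test pattern described above; the
# checks and function names are illustrative, not Datrium's runbook format.
import datetime

def run_dr_test(vms, checks):
    print("Spinning up temporary SDDC for the test restore")
    results = []
    for vm in vms:
        print(f"Test-restoring {vm} into an isolated network")
        # Run workload-specific checks against the restored VM.
        for name, check in checks.items():
            results.append((vm, name, "PASS" if check(vm) else "FAIL"))
    print("Shutting down and destroying the temporary SDDC")
    # Compile the results into an audit report.
    lines = [f"DR test report - {datetime.date.today()}"]
    lines += [f"{vm}: {name} = {status}" for vm, name, status in results]
    return "\n".join(lines)

print(run_dr_test(
    vms=["sql01", "app01"],
    checks={"boots": lambda vm: True, "service responds": lambda vm: True},
))
```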

Conclusion

As I've alluded to at the top of this post, there are plenty of "Backup" and "DR" products out there servicing enterprise IT and leveraging the public cloud to do so. Of those, I think Datrium is worth considering, particularly if you are focussed on protecting a vSphere environment with a short RTO and are interested in using VMware on AWS as a DR solution, but not keen on the not-insubstantial cost of running that DR SDDC 24/7.

Please read my standard Declaration/Disclaimer and before rushing out to buy anything bear in mind that this article is based on a sales discussion at a sponsored event rather than a POC or production installation. I wasn't paid to write this article or offered any payment, aside from being entered in a prize draw of delegates to win a chair (I was not a winner).

Snapt @ vRetreat May 2020

Last week I received an invitation to the latest in the vRetreat series of events. These events bring together IT vendors and a selected group of tech bloggers - usually in venues like football clubs and racetracks, but in the current circumstances we were forced online. The first of the two briefings at the May 2020 event came from Snapt.

Established in 2012, Snapt is built on a product range they refer to as "Load Balancing Plus" - taking in load balancing, web acceleration, and firewalling. Their recent flagship release, "Nova", enables the deployment and scaling of these load balancers across multiple environments.

It’s an interesting approach for anyone working in a multi-cloud environment, for example with workloads in vSphere, AWS, and Azure, who wants a consistent method of deploying, securing, and maintaining their load balancers in all of these clouds from one SaaS platform.

Snapt achieve this by separating their control and data planes. The SaaS control plane is managed through a clean web-based dashboard or API calls, and from there the nodes are deployed to the target infrastructures as VMs, containers, or cloud devices, depending on the platform.

This separation of the nodes from the control plane adds potential for scaling, helped by the nodes' stateless nature: logs are streamed directly out of the node, and its parameters are pulled down from the control plane. Nodes also expose interfaces to allow direct monitoring from third-party applications, in addition to the monitoring provided by the Nova dashboard.
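
To make the control-plane/data-plane idea concrete, here is a small, self-contained sketch of the pattern. The classes and fields are invented for illustration and are not Snapt's Nova API; in Nova the control plane is the SaaS service and the "node" is a VM or container deployed into each cloud:

```python
# Conceptual model of the split described above, simulated in-memory so it can
# run standalone. Nothing here is Snapt's actual API.

class ControlPlane:
    """Holds desired state centrally; nodes hold none of their own."""
    def __init__(self):
        self.configs = {}      # node_id -> load-balancer parameters
        self.log_stream = []   # logs pushed up from every node

    def get_config(self, node_id):
        return self.configs.get(node_id, {})

    def ingest_log(self, node_id, event):
        self.log_stream.append((node_id, event))

class Node:
    """A stateless data-plane node: pulls config down, streams logs out."""
    def __init__(self, node_id, control_plane):
        self.node_id = node_id
        self.cp = control_plane

    def reconcile(self):
        config = self.cp.get_config(self.node_id)          # pull parameters down
        self.cp.ingest_log(self.node_id,
                           {"status": "healthy",
                            "backends": len(config.get("backends", []))})

cp = ControlPlane()
cp.configs["node-aws-1"] = {"backends": ["10.0.0.5:443", "10.0.0.6:443"]}
cp.configs["node-vsphere-1"] = {"backends": ["192.168.1.20:443"]}

# Nodes in different clouds, all driven from the one control plane.
for node_id in cp.configs:
    Node(node_id, cp).reconcile()

print(cp.log_stream)
```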

Down on the node, the load balancing features are backed by a wide set of security tools. Traditional blacklists and whitelists are complemented by more advanced features such as geofencing and anomaly detection. Activity here is reported back up to the dashboard to give admins a clear, global view of load and threats across their environments.

Whilst there are plenty of other load balancing solutions on the market, based on this briefing I'd say Snapt are well worth a look, particularly if the requirement is for a multi-cloud environment.

There is a Community Edition of Nova available which allows up to 5 nodes free of charge - check out https://nova.snapt.net/pricing for details.

Please read my standard Declaration/Disclaimer and before rushing out to buy anything bear in mind that this article is based on a sales discussion at a sponsored event rather than a POC or production installation. I wasn't paid to write this article or offered any payment, aside from being entered in a prize draw of delegates to win a chair (I was not a winner).

Cohesity Marketplace

During the recent #vRetreat event in London, Cohesity presented their latest release of DataPlatform - and with a launch happening the very weekend of the event (February 26 2019), this was a timely presentation. The release included a number of new features, and one which caught my attention when following up on the vRetreat event is the Cohesity Marketplace.

The Marketplace is designed to allow third parties (plus your internal developers and Cohesity themselves) to release products that plug directly into the Cohesity framework - "bringing applications to the data, versus data to the applications". The previous integrations I have seen were focussed on automating the backup/recovery process - for example, using ServiceNow to provide end-users with self-service restores. This Marketplace, however, allows third-party applications to interact with and process the data on the secondary storage directly, without it leaving the appliance (or the public cloud storage). I see this as an interesting development, and visiting the website today gives you an idea of how this is going to grow.

Already in the list are analytics providers such as Splunk and antivirus/threat protection providers such as SentinelOne and ClamAV. The potential here for not just data protection but also analysis and business intelligence operations is intriguing - all that old, dark data that companies hold but don't make use of should be sitting in this secondary storage, and the ability to tap into it directly opens up many possibilities.

This all sits alongside a new Developer Portal and the existing REST API and PowerShell frameworks provided for the DataPlatform. Apps can be developed in-house, but the big benefit I see is third-party products being presented to admins ready to deploy - simplifying the traditional approach of liaising with each vendor in your environment separately to try to achieve a level of integration. And because the data is processed within the Cohesity platform, there are the benefits of additional security, less duplicated storage, reduced network costs, and potentially better performance, because we're not spending time shifting data around to process it.

It's early days yet, so there's only a handful of apps available (as of mid-March 2019), but it will be interesting to see how this develops and whether the work of developing apps falls to Cohesity or whether partners and third-party vendors take up the mantle.

For more information, check out this video from Cohesity.


vRetreat February 2019- Secondary Storage with Cohesity

Last week I had the pleasure of attending the latest #vRetreat blogger event. This edition featured a day of presentations and labs from enterprise storage vendor Cohesity, held at Chelsea Football Club in London. In my first blog post from the event I look at what Cohesity are doing to distinguish "Secondary Storage" from "Backup Storage".

There are a number of vendors on the market who can provide enterprises with a backup appliance and support for public cloud storage. Cohesity have looked at this and asked: what other business operations can leverage this (comparatively) cheap storage media? I've heard their message of "we're not backup, but secondary storage" before, but at this event the distinction really clicked with me.

Whilst front-line production services often demand the best-performing storage possible, storage for backups doesn't (hopefully) need to be accessed regularly and doesn't require the speed of access that those front-line systems might. Where possible, organisations will purchase cheap(er) storage for this task, and this can lead to a separate backup storage silo.

If nearly 80 percent of stored data goes unused after 90 days, then the majority of data on NAS/SAN filers also fits these access and performance characteristics, so why not combine the two and reduce the silo count? The Cohesity platform offers SMB and NFS file services, and can also function as an object store. This also helps justify the outlay on storage for backup which, like an insurance policy, you hope never to actually need.

Similarly, test and development workloads can often (but not always) be run on lower-performance storage than their production counterparts. Again, these functions are looking for similar attributes to backup when it comes to storage: keep the cost per GB low and don't impact the performance of the primary production storage.

Cohesity's DataPlatform consolidates the traditional backup storage platform with the ability to spin out test and dev workloads directly from that data, whilst also hosting file and object storage. For example, when the primary storage is upgraded to all-flash, the NAS shares or test workloads that don't need this level of performance can use the Cohesity platform.

This was an interesting briefing, and for me this part definitely showed the potential of not thinking of your backup infrastructure solely as an insurance policy, but instead continuing to find new ways to leverage that investment elsewhere in the IT function.

Please read my standard Declaration/Disclaimer and before rushing out to buy anything bear in mind that this article is based on a sales discussion at a sponsored event rather than a POC or production installation. I wasn’t paid to write this article or offered any payment, although Cohesity did sponsor the lunch, T-shirt, and stadium tour at the event. Attendees were also given a pair of bright green socks and matching branded shoelaces so you should be able to spot them.