Azure Site Recovery Deployment Planner

As announced in a recent Azure blog post, the Azure Site Recovery (ASR) Deployment Planner tool was released earlier this month, following previews earlier in the year. The tool aims to provide a friction-free way to assess your existing Hyper-V/VMware estates, helping you understand the compute, network, storage and licensing costs of protecting your workloads in the cloud (including the harder-to-estimate ones, such as initial replication costs).

I’ve blogged before that I think ASR is a great way to provide secondary or even tertiary instances of your on-premises workloads in another location with minimal effort and cost. Previously, it was fairly time-consuming and manual to gather the information required to estimate Azure costs accurately.

Let’s have a quick look at the tool from a Hyper-V perspective. The tool is command-line based and can be downloaded from here. Once downloaded, you’ll need to extract it onto a workstation that has access to the environment you’ll be assessing. My environment consists of a standalone device with Hyper-V enabled and a couple of VMs; the tool can also be executed against clusters if you are in a larger/production setup.

The following link provides additional detail and optional parameters that can be used.

Generate your list of VMs

The first thing I did was generate a .txt file containing the hostname of my Hyper-V host. This file can contain IP addresses, individual hostnames, or a cluster name. I then executed the following command to retrieve an export of the machines running on the host:

[Screenshot: GetVMList command and output]
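For reference, the command I ran was along these lines (the paths, directory and user account below are placeholders for my lab; the operation and parameter names are as per the Deployment Planner documentation):

```
REM Enumerate the VMs on the hosts listed in HyperVHostList.txt and write them to VMList.txt.
REM Paths and credentials are placeholders for my lab setup.
ASRDeploymentPlanner.exe -Operation GetVMList -ServerListFile "C:\ASR\HyperVHostList.txt" -User "contoso\administrator" -Directory "C:\ASR" -OutputFile "C:\ASR\VMList.txt"
```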

Profile your VMs

Once you have a list of VMs, it’s time to profile them. Profiling monitors your virtual machines to collect performance data. Again, it is command-line based, and you can configure settings such as how long the profiling runs (minutes, hours or days). Note: 30 minutes is the minimum duration.

In addition, you can connect an Azure storage account to profile the achievable throughput from the host(s) to Azure. As per the guidance in the Microsoft documentation, the general recommendation is 7 days; however, as with any sizing tool, 31 days is preferable so that monthly anomalies are captured. I used the generated list of VMs and executed the following command as a next step:

[Screenshot: StartProfiling command]
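The profiling command looked roughly like this (a sketch with placeholder names; the storage account parameters are optional and only needed for the host-to-Azure throughput measurement):

```
REM Profile the VMs in VMList.txt for the minimum 30 minutes.
REM The storage account name/key are optional placeholders, used to measure throughput to Azure.
ASRDeploymentPlanner.exe -Operation StartProfiling -VMListFile "C:\ASR\VMList.txt" -NoOfMinutesToProfile 30 -User "contoso\administrator" -StorageAccountName "asrplannersa" -StorageAccountKey "<storage-account-key>"
```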

Rather than leave the VM sitting static, I created an infinite loop in PowerShell to simulate some CPU load on one of the VMs:

[Screenshot: PowerShell CPU load loop]
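The load generator was nothing more sophisticated than something like this (a hypothetical sketch; any busy loop will do):

```powershell
# Burn CPU on one core until the window is closed or Ctrl+C is pressed.
while ($true) {
    $result = 1
    foreach ($i in 1..100000) {
        $result = ($result * $i) % 65521   # arbitrary arithmetic to keep the core busy
    }
}
```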

Report Generation

Once profiling has finished, you can execute the tool in report-generation mode. This creates an Excel file (.xlsm) providing a summary of all of the deployment recommendations. To complete this example, I executed the following command:

[Screenshot: GenerateReport command]
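The report-generation command was along these lines (again, paths and the account are placeholders; parameter names are from the Deployment Planner documentation):

```
REM Generate the .xlsm report for the profiled VMs into C:\ASR.
ASRDeploymentPlanner.exe -Operation GenerateReport -VMListFile "C:\ASR\VMList.txt" -User "contoso\administrator" -Directory "C:\ASR"
```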

Job done! The report is now saved in the location specified above. The report contains a number of areas, with the opening page looking as follows:

[Screenshots: opening summary pages of the generated report]

There are many tabs included which break down the individual details. One thing to bear in mind is the configured frequency and duration of DR drills, as these affect the costings. The default assumes 4 test failovers per year, lasting 7 days each time; you will want to adjust this accordingly.

The tool provides many useful recommendations above and beyond cost, e.g. the network bandwidth required to complete initial replication, the recommended VM type and storage tier (standard/premium) for each machine, and the measured throughput from the host platform to an Azure storage account. Give it a try!

Protect yourself from disaster with Azure Site Recovery!


Having a robust Disaster Recovery (DR) plan in place is a key foundation of any IT strategy and risk-management approach. Ensuring key services are protected, at a minimum through backup, or, cost permitting, through some form of replication, is of critical importance. This helps ensure that, in the event of a disaster, be it small or large, the organisation’s IT systems can be recovered, allowing the business to continue functioning.

This post will focus on the capabilities provided by Azure Site Recovery (ASR) and how it is a perfect solution to bolster an organisation’s protection. However, protecting yourself from disaster involves a much wider set of considerations, e.g. backup, plans, runbooks, automation and replication services. Each of these topics has its own unique considerations and is for another post.

A large number of organisations survive with a single datacentre, or perhaps two at a single location, mainly because of cost constraints. This leaves those organisations susceptible to many disasters, both natural (flooding, earthquake, fire, etc.) and man-made (electrical failure, structural collapse, etc.). A disaster like this could mean recovering to a temporary location, perhaps from backup (fingers crossed that the backups work!), with a lengthy lead time to rebuild and recover systems. That lead time can be the difference between an organisation recovering or suffering serious financial loss, reputational damage and worse. Enter ASR…


Azure Site Recovery (ASR) provides a great solution to help organisations overcome the risks outlined above. Put simply, ASR provides protection and replication of your on-premises (and cloud-based) services into the Microsoft datacentres. If you are an organisation with only a single datacentre, this can provide that much-sought-after secondary/tertiary location. In addition to replication, ASR also provides monitoring, recovery plans and testing services. As pointed out earlier, ASR can protect on-premises workloads as well as workloads already in Azure (ensuring they are replicated to another Azure region). Since the focus of this post is on-premises, how does this part work?

ASR supports the replication of services/virtual machines from all key platforms, including Hyper-V and VMware. If you do not use these virtualisation platforms, you can use the “physical” agent option, which can be installed on a physical server or as an in-VM agent (if using a different virtualisation platform, e.g. KVM). If you are using the VMware integration or the physical option, a configuration server is required. The following link provides more detail on the support matrix for replicating these workloads.

Dependent upon the on-premises capabilities, the following Microsoft links provide the architectural details for each configuration:

[Diagram: Hyper-V to Azure replication architecture]

Source: https://docs.microsoft.com/en-us/azure/site-recovery/concepts-hyper-v-to-azure-architecture

To get started with Hyper-V, you require an Azure subscription, a System Center Virtual Machine Manager (VMM) server, Hyper-V hosts, VMs and appropriate networking. A Site Recovery vault and storage account are configured in the Azure subscription, and the Site Recovery provider is installed on the VMM server, which is then registered with the vault. The Recovery Services agent is installed on all Hyper-V hosts (or cluster members). In this configuration, no agent is required on any of the Hyper-V virtual machines, as replication is proxied via the host connection.

Virtual machines can then be replicated as per the associated replication policy (e.g. sync intervals, retention, etc.). Initial syncs can take time to move the data up to the Azure platform (unless you use Import/Export and seeding).

Note: two key questions that come up with clients are as follows:

  1. Do I need Azure networking / a VPN in place?
  2. Can replication traffic be forced over the VPN?

1) Strictly speaking, no; however, if you want to provide client access to private virtual machine IPs then the answer is yes. If you have a client/server application that is internally addressed, that application will need to support an IP address change/DNS update, and the client will still need a network route to reach the application. If the application can be made publicly accessible then you may have alternatives.

2) In short, no – ASR is designed (like many Azure services) to be publicly accessible via a public URI. This means data is typically sent over the internet via HTTPS to that endpoint. There are some changes you can make if you have ExpressRoute; however, that is outside the scope of this post. This may change soon with the introduction of virtual network service endpoints, although that is currently a preview feature supported only on storage accounts and Azure SQL.

I hope this helps you to understand how ASR can help your organisation, and provide a brief overview of some of the typical considerations and architectures.

Recap of key Azure features from Ignite Part 1

I started writing a post about some of the Azure features I found interesting from the Ignite event, but put it on hold when I decided to present on the topic at the Microsoft Cloud User Group (MSCUG) in Manchester last week instead. Now that’s done, I thought it would be good to summarise the presentation!

This post will be split into two parts to avoid the article being too lengthy…

Azure Top 10 Ignite Features

First up…
10. Virtual Network Service Endpoints (Preview)

Virtual Network Service Endpoints is a new feature for customers who would prefer to access resources (Azure SQL databases and storage accounts in the preview) privately over their virtual network, as opposed to accessing them via the public URI.

Typically, when you create a resource in Azure it gets a public-facing endpoint; this is the case with storage accounts and Azure SQL. When you connect to these services you do so using this public endpoint, which is a concern for some customers who have compliance and regulatory requirements, or who simply want to optimise the route the traffic takes.

Configuring service endpoints is fairly simple – they are enabled on the virtual network (per subnet) first and foremost, and then when you configure the resource you select the VNet you would like to attach it to. The resource can then be locked down so it is reachable only from that VNet rather than from the public internet.
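As a sketch of the flow using today’s Azure CLI (resource names below are placeholders, and the exact syntax may change while the feature is in preview):

```
# 1. Enable the storage service endpoint on the subnet.
az network vnet subnet update -g myRG --vnet-name myVNet -n mySubnet --service-endpoints Microsoft.Storage

# 2. Deny public access to the storage account by default...
az storage account update -g myRG -n mystorageacct --default-action Deny

# 3. ...then allow traffic only from that subnet.
az storage account network-rule add -g myRG --account-name mystorageacct --vnet-name myVNet --subnet mySubnet
```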

This feature is currently only available as a preview in the US and Australian regions… I’d be interested to know when it will be publicly launched and rolled out across regions, as it looks to be a great enterprise feature!

9. Azure File Sync (Preview)

Azure File Sync is a new tool that complements Azure Files. Azure Files has been around for some time and essentially provides the capability to create an SMB 3.0 file share in Azure, running on Azure Storage. This is a great solution; however, performance can suffer when users on-premises, or connecting via the internet, try to access large files, due to latency and bandwidth constraints.

[Screenshot: an Azure File Sync sync group]

Step up Azure File Sync, currently in preview. Azure File Sync aims to address the performance concerns noted above by allowing you to synchronise files hosted in Azure to a local file server you host on-premises. This sounds fairly trivial, and perhaps unimpressive – surely the whole point of an Azure file share is to host the files in Azure? Why duplicate the storage? Well, this is where Azure File Sync impresses: it can tier the files, holding only the most frequently accessed ones on-premises whilst still providing access to all the other files through cloud recall.

More details on this feature can be found here… https://azure.microsoft.com/en-gb/blog/announcing-the-public-preview-for-azure-file-sync/?cdn=disable

8. Cost Management and Billing

This is a massive announcement, in my opinion, and if I’d ordered my top ten correctly it would be nearer to number one! Several customer concerns over the last 12-18 months have primarily been around controlling, understanding and forecasting cost across their cloud platforms. Partners have typically innovated in this space, and a number of third-party solutions have come to market, e.g. Cloud Cruiser, which can perform this function across multiple public cloud vendors (e.g. AWS, Azure and Google).

In response to customer concerns (in my opinion), and to increase the feature set of Azure, Microsoft acquired Cloudyn to help organisations manage their cloud spend. It provides tools to monitor, allocate and optimise cloud costs so you can further your cloud investments with greater confidence.

The tool is currently free for Azure customers and can be accessed directly from the Azure portal, under Cost Management in the Billing section. I’m looking forward to talking to customers about this, as it helps remove a potential (simple) barrier to cloud usage.

[Screenshot: Cloudyn – Cost Management and Billing]

7. Azure Availability Zones (Preview)

This feature is intended to provide parity with other vendors, such as AWS, by allowing organisations to select a specific “zone” to deploy their resources to within a region. Currently, when deploying resources in Azure, the only option you have is regional; for example, when deploying a virtual machine you choose “North Europe” or “UK South”. This means that if you want to plan DR/BCP for a specific application, you typically need to plan it cross-region, which brings key considerations around latency and bandwidth.

This feature allows you to stipulate a specific “zone” when deploying a supported resource. Supported resources include virtual machines, scale sets, disks and load balancers. When you deploy one of these resources you choose an “instance”, identified by a number; the instance corresponds to a zone. If you then deploy a secondary resource and select a different zone, it will be placed in a different datacentre. The round-trip time between such datacentres is generally very low (a design consideration Microsoft applies when designing its regions). This allows you to plan true DR for your applications without having to worry about regional latency.
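For example, with the Azure CLI you could pin two VMs to different zones in a supported region (the resource names are placeholders; the --zone parameter is the key part):

```
# Two VMs in the same region but different zones, i.e. physically separate datacentres.
az vm create -g myRG -n app-vm-1 --image Win2016Datacenter --location eastus2 --zone 1
az vm create -g myRG -n app-vm-2 --image Win2016Datacenter --location eastus2 --zone 2
```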

[Diagram: Availability Zone visual representation]

Source: https://azure.microsoft.com/en-gb/updates/azure-availability-zones/

This is a great feature and is currently in preview in a select number of locations: US East 2 and West Europe. For a region to qualify for Availability Zones, it must have 3 or more localised datacentres. For more information about this feature, please look here.

… and finally for Part 1:

6. Azure Gateway – 6x faster!

This was a raw performance improvement, increasing the potential throughput of an Azure VPN gateway by up to 6x! The gateways now come in four flavours:

  • Basic – suitable for test or development workloads, supporting up to 100Mbps with a 3-nines SLA (99.9%)
  • VpnGw1 – suitable for production workloads, with speeds up to 650Mbps and a 99.95% SLA
  • VpnGw2 – suitable for production workloads, with speeds up to 1Gbps and a 99.95% SLA
  • VpnGw3 – suitable for production workloads, with speeds up to 1.25Gbps and a 99.95% SLA!

This is important because, for some organisations, an ExpressRoute connection is neither the best fit nor cost-feasible; further investment in the standard gateways therefore opens up more performance, allowing even more organisations to fully leverage the power of the Azure cloud!

And that’s it for this post – I’ll summarise the remaining features I found interesting shortly in Part 2.

A quick look at Instant File Restore in an Azure VM…

Instant File Restore directly from an Azure VM is a feature release that caught my eye recently. I remember being excited in the early days of on-premises virtualisation when backup companies, e.g. Veeam, introduced the ability to mount VM backups and access the file system directly, allowing quick restores; I always thought it was a handy feature. Granted, it does not (always) replace proper in-VM backups of the application or service, but it does provide a quick and easy way to restore a file, a config, etc.

The feature went GA a couple of days ago; however, the portal still shows it as in preview. More info can be found in the Azure blog post:

https://azure.microsoft.com/en-us/blog/instant-file-recovery-from-azure-vm-backups-is-now-generally-available/

To start with, you need an Azure virtual machine and some files you’d like to protect. I created a simple VM from the marketplace running Windows Server 2016 Datacenter, then created some test files on one of the disks:

[Screenshot: test files created on the disk]

You’ll then need to log into the Azure portal and have a Recovery Vault already configured, with the virtual machine you want to protect added to it. The following screenshot shows the virtual machine ‘azurebackuptest’ added to the Recovery Vault:

[Screenshot: ‘azurebackuptest’ listed in the Recovery Vault]

If your machine is not yet added, use the ‘Add’ button and choose a backup policy. You also need to ensure you have a restore point from which you can recover.
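If you prefer to check for restore points from PowerShell rather than the portal, something like the following works with the AzureRM module of the day (the vault and VM names are placeholders):

```powershell
# List the restore points created for the protected VM over the last week.
$vault = Get-AzureRmRecoveryServicesVault -Name "myRecoveryVault"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
$container = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureVM -FriendlyName "azurebackuptest"
$item = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM
Get-AzureRmRecoveryServicesBackupRecoveryPoint -Item $item -StartDate (Get-Date).AddDays(-7).ToUniversalTime() -EndDate (Get-Date).ToUniversalTime()
```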

I’m now going to go back to my virtual machine and permanently delete some files (so they’re gone from the recycle bin, too), as you can see from the following screenshot:

[Screenshot: files permanently deleted from the VM]

We now have a virtual machine with missing files that we’d ordinarily need to recover via an in-VM backup agent – however, we’ll use the File Recovery tools. Navigate to the vault again and choose ‘File Recovery (Preview)’:

[Screenshot: the File Recovery (Preview) option in the vault]

From here you need to select your recovery point and download the executable – it can take a minute or two to generate. Once you’ve downloaded the .exe, an ‘Unmount Disks’ option will appear:

[Screenshot: recovery point selection and executable download]

Now simply run the .exe on the machine where you’d like the disks to be mounted so you can copy the files off. The executable launches PowerShell and mounts the disks as iSCSI targets (disconnecting any previous connections if you have any):

[Screenshot: PowerShell mounting the disks as iSCSI targets]

You can now browse the mounted disk and recover the files that were deleted earlier:

[Screenshot: browsing the mounted disk to recover the files]

Once complete, remember to close the PowerShell session as indicated in the previous screenshot, and don’t forget to click ‘Unmount Disks’ in the Azure console:

[Screenshot: unmounting the disks]

That’s it! A straightforward feature, but one that can be very handy on occasion, and it brings even more parity between Azure and equivalent on-premises capabilities.