A quick look at Instant File Restore in an Azure VM…

Instant File Restore directly from an Azure VM is a feature release that caught my eye recently. I remember being excited in the early days of on-premises virtualisation when backup vendors such as Veeam introduced the ability to mount VM backups and access the file system directly, allowing quick restores, and I always thought it was a handy feature. Granted, it does not always replace proper in-VM backups of the application or service, but it does provide a quick and easy way to restore a file, a config and so on.

The feature went GA a couple of days ago; however, the portal still shows it as in preview. More info can be found in the Azure blog post:


To start with you need an Azure virtual machine and some files you’d like to protect. I’ve created a simple VM from the marketplace running Windows Server 2016 Datacenter, and created some test files on one of the disks:


You’ll then need to log into the Azure Portal and have a Recovery Services vault already configured, with the virtual machine you want to protect added to it. The following screenshot shows the virtual machine ‘azurebackuptest’ added to the vault:


If you do not have your machine added, use the ‘Add’ button and choose a backup policy. You then need to ensure you have a ‘restore point’ from which you can recover.
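If you prefer scripting to the portal, the same protection steps can be driven from the Azure CLI. This is a sketch only, assuming the `az backup` command group is available; the resource group, vault and policy names below are placeholders for my lab environment, and the commands need a live subscription to run against:

```shell
# Placeholder names: myResourceGroup, myRecoveryVault and DefaultPolicy are illustrative.
# Enable backup for the VM using a policy from the vault.
az backup protection enable-for-vm \
  --resource-group myResourceGroup \
  --vault-name myRecoveryVault \
  --vm azurebackuptest \
  --policy-name DefaultPolicy

# Trigger an ad-hoc backup so a restore point exists straight away.
az backup protection backup-now \
  --resource-group myResourceGroup \
  --vault-name myRecoveryVault \
  --container-name azurebackuptest \
  --item-name azurebackuptest \
  --retain-until 01-07-2017

# Confirm a restore point has been created.
az backup recoverypoint list \
  --resource-group myResourceGroup \
  --vault-name myRecoveryVault \
  --container-name azurebackuptest \
  --item-name azurebackuptest \
  --output table
```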

I’m now going to go back to my virtual machine and permanently delete some files (so they’re gone from the Recycle Bin, too), as you can see from the following screenshot:


We now have a virtual machine with missing files that we’d ordinarily need to recover from an in-VM backup agent; however, we’ll use the File Recovery tools instead. Navigate to the vault again and choose ‘File Recovery (Preview)’:


From here you need to select your recovery point and download the executable; it can take a minute or two to generate. Once you’ve downloaded the .exe, an ‘Unmount Disks’ option will appear:


Now simply run the .exe file on the machine where you’d like the disks to be mounted so you can copy files off. The executable will launch PowerShell and mount the disks as iSCSI targets (disconnecting any previous connections if you have any):


You can now browse the mounted disk and recover the files that were deleted earlier:


Once complete, remember to close the PowerShell session as indicated in the previous screenshot, and don’t forget to click ‘Unmount Disks’ in the Azure portal:
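The download/mount/unmount cycle above can also be scripted. Again a hedged sketch, assuming the `az backup restore files` commands and re-using the placeholder names from my lab; the generated script itself still has to be run on a Windows machine to mount the disks:

```shell
# Generate and download the file-recovery script for a chosen recovery point
# (placeholder names; find the recovery point name via 'az backup recoverypoint list').
az backup restore files mount-rp \
  --resource-group myResourceGroup \
  --vault-name myRecoveryVault \
  --container-name azurebackuptest \
  --item-name azurebackuptest \
  --rp-name <recovery-point-name>

# ...run the script, copy the files you need, then unmount the disks when finished.
az backup restore files unmount-rp \
  --resource-group myResourceGroup \
  --vault-name myRecoveryVault \
  --container-name azurebackuptest \
  --item-name azurebackuptest \
  --rp-name <recovery-point-name>
```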


That’s it! A straightforward feature, but one that can be very handy on occasion, and it brings even more parity between Azure and equivalent on-premises capabilities.

Protecting the keys to your cloud

A hot topic when migrating to cloud providers is security: where is data stored, how is it stored, how is it accessed, who can access it, and all the other standard questions. However, there is a key theme that runs through all of these questions: they focus directly on the cloud provider and miss arguably one of the biggest attack vectors, the service administrators, aka those with the “keys to the kingdom”!

Working with clients over the last few years, I have been involved in several conversations about securing admin access to cloud providers. Whether the end service is a SaaS-based offering, e.g. Office 365, or an application built upon a subset of PaaS/IaaS services in Azure, it is key to think about how you are going to provide access to IT staff as well as end users so you can successfully manage and maintain those services on an ongoing basis.

The vast majority of organisations have a variety of roles within their IT department, e.g. Service Desk, Roaming Support, Network Engineers, Server Engineers, Architects, etc. Each of these roles has specific activities and functions that need to be performed, from unlocking accounts to creating new users or changing policies/settings.

For a long time organisations have been seeking the holy grail of RBAC (it’s not all negative, many succeed!), but it is a difficult journey. From an on-premises Microsoft perspective, many organisations still carry a legacy of over-provisioned “privileged accounts”, e.g. Domain/Enterprise Administrators, and these are usually allocated directly to a “user account” that rarely has any controls configured, e.g. two-factor authentication, logging or time-bound access.

The lack of proper controls over privileged accounts is a serious attack vector and becomes even more critical when moving to the cloud. Access to administrative consoles or portals for on-premises systems is often tightly controlled through perimeter protection, and they are rarely publicly exposed. With the cloud this is reversed, with portals primarily internet facing, e.g. portal.office.com or portal.azure.com. It is therefore crucial that you secure the identity used to access administrative features appropriately, through Role-Based Access Control (RBAC) and controls such as “Privileged Identity Management (PIM)”, “Time-Bound Access” and “Multi-Factor Authentication”, and through the allocation of specific roles as opposed to highly permissioned accounts (e.g. Global Administrator). These controls are a topic in their own right and there are many good articles on the web discussing what to consider and how to achieve this.
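To make the time-bound and MFA controls concrete, here is a minimal sketch in Python. It is purely illustrative, not an API of any real product: it models a privileged-role activation with an expiry window and an MFA flag, in the spirit of what services such as PIM provide.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch only: a privileged-role activation record with an expiry,
# mimicking time-bound access in the style of Privileged Identity Management.
@dataclass
class RoleActivation:
    role: str
    activated_at: datetime
    duration: timedelta
    mfa_verified: bool = False

    def is_active(self, now: datetime) -> bool:
        # Access is valid only if MFA was completed and the window has not lapsed.
        return self.mfa_verified and now < self.activated_at + self.duration

start = datetime(2017, 6, 1, 9, 0)
grant = RoleActivation("Exchange Administrator", start, timedelta(hours=4), mfa_verified=True)
print(grant.is_active(datetime(2017, 6, 1, 10, 0)))  # within the window -> True
print(grant.is_active(datetime(2017, 6, 1, 14, 0)))  # window expired -> False
```

The point of the sketch is simply that elevation is an event with a lifetime, not a permanent property of the account.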

As the journey to the cloud is still in its infancy (but progressing at speed!), I would heavily recommend investing time in getting your RBAC model correct before the legacy situation discussed earlier becomes a reality again. A few key areas you should consider on this journey include:

  • Develop a “Logical Model”. This should provide a logical view of your organisation: the standard users, operational teams, third-party engineers, partners/consultants and contractors. Ultimately this should cover any user interacting with any system, regardless of permission or right. This activity may require a high degree of business analysis to understand the environment.
  • Detail the roles that exist within each of the above categories, e.g. each operational team may contain several roles beneath it, your standard users will include different (potentially departmental) roles, and there may be several types of contractor you employ.
  • Understand the controls you want to apply, for example: are there any activation constraints on elevating permissions (e.g. a change request)? How are you applying the access/permission (PIM)? Is it a time-bound permission? Does the access need to be witnessed? Will the account perform a system operation? Does it require auditing, and/or does the session need to be recorded?
  • Design your account strategy, for example: what types of accounts are you creating? Are you using standard user accounts or creating specialist ADM accounts? Are specialist system accounts used and authorised through PIM, and are you applying MFA to these accounts?
  • Map out your “Physical Model”. This should include all available systems, technologies, etc., to whatever granularity you are trying to achieve. Some systems contain very in-depth RBAC controls, e.g. Office 365, Azure, System Center (to relate to Microsoft technologies); examples for Office 365 can be found here: https://support.office.com/en-gb/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d
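The mapping between the logical and physical models can be sketched in a few lines of Python. Everything here is illustrative: the logical role names, action names and the selection of built-in roles are placeholders I have invented for the example, not an official taxonomy.

```python
# Illustrative sketch: mapping a logical RBAC model onto per-system ("physical")
# built-in roles. All role and action names below are invented placeholders.
LOGICAL_ROLES = {
    "Service Desk": {"unlock_account", "reset_password"},
    "Server Engineer": {"manage_vm", "manage_backup"},
}

# Each physical system exposes its own built-in roles and the actions they allow.
PHYSICAL_ROLES = {
    "Office 365": {"Helpdesk Administrator": {"unlock_account", "reset_password"}},
    "Azure": {
        "Virtual Machine Contributor": {"manage_vm"},
        "Backup Contributor": {"manage_backup"},
    },
}

def map_logical_role(logical_role):
    """Return, per system, the built-in roles that cover any of the logical role's actions."""
    needed = LOGICAL_ROLES[logical_role]
    mapping = {}
    for system, roles in PHYSICAL_ROLES.items():
        granted = {name for name, actions in roles.items() if actions & needed}
        if granted:
            mapping[system] = granted
    return mapping

print(map_logical_role("Service Desk"))     # -> {'Office 365': {'Helpdesk Administrator'}}
print(map_logical_role("Server Engineer"))  # both Azure roles are needed for this one
```

Even a toy table like this makes gaps obvious: a logical role whose actions match no built-in role in a given system is exactly where custom role work (or a process control) is needed.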

Most importantly, an end-to-end RBAC model may take some time to achieve. It is important that you fully understand what degree of granularity and control you want to reach before embarking on a project of this nature. My personal recommendation is that organisations take a light-touch approach initially: develop the logical model, implement some controls and map these to the physical model/systems that contain built-in RBAC roles. The more advanced systems that require in-depth analysis to create “roles” can be left until you have a more mature model. As this topic primarily revolves around cloud systems, it is good news that the majority of the major providers operate good standards for administrative and end-user roles, which you can easily map your logical model against.

Some links for further reading: