Azure Site Recovery Deployment Planner

As per the following Azure blog post, the Azure Site Recovery (ASR) Deployment Planner tool was released earlier this month, following previews earlier in the year. The tool aims to provide a friction-free way to assess your existing Hyper-V/VMware estates, allowing you to understand the compute, network, storage and licensing costs of protecting your workloads to the cloud (including the harder-to-estimate ones, such as initial replication costs).

I’ve blogged before that I think the ASR solution is a great way to provide secondary or even tertiary instances of your on-premises workloads in another location with minimal effort and cost. Previously it has been fairly time-consuming and manual to gather the information required to correctly estimate costings in Azure.

Let’s have a quick look at the tool from a Hyper-V perspective. The tool is command-line based, and can be downloaded from here. Once downloaded you’ll need to extract it onto a workstation that has access to the environment you’ll be assessing. My environment consists of a standalone device with Hyper-V enabled and a couple of VMs, though the tool can also be executed against clusters in a larger/production setup.

The following link provides additional detail and optional parameters that can be used.

Generate your list of VMs

The first thing I did was generate a .txt file containing the hostname of my Hyper-V host. This file can contain IP addresses, individual hostnames, or a cluster name. I then executed the following command to retrieve an export of the machines running on the host:

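The original screenshot is gone, so here is a sketch of the command based on the documented GetVMList syntax; the paths and file names are illustrative:

    ASRDeploymentPlanner.exe -Operation GetVMList `
        -ServerListFile "E:\ASR\ServerList.txt" `
        -Directory "E:\ASR\ProfiledData" `
        -OutputFile "E:\ASR\VMList.txt"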

Profile your VMs

Once you have a list of VMs, it’s time to profile them. Profiling monitors your virtual machines to collect performance data. Again it is command-line based, and you have the option to configure settings such as how long the profiling will run (minutes, hours or days) if you wish. Note: 30 minutes is the minimum duration.

In addition, you can connect an Azure storage account to profile the achievable throughput from the host(s) to Azure for additional insight. As per the guidance in the Microsoft documentation, the general recommendation is 7 days; however, as always with any sizing tool, 31 days is preferred to capture monthly anomalies. I used the generated list of VMs and executed the following command as a next step:

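With the screenshot missing, here is a hedged sketch of the StartProfiling call based on the documented parameters; the storage account arguments are optional and illustrative, and I used the 30-minute minimum:

    ASRDeploymentPlanner.exe -Operation StartProfiling `
        -Directory "E:\ASR\ProfiledData" `
        -VMListFile "E:\ASR\VMList.txt" `
        -NoOfMinutesToProfile 30 `
        -StorageAccountName "asrplannerstore" `
        -StorageAccountKey "<storage-account-key>"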

I created an infinite loop in PowerShell to simulate some CPU load on one of the VMs, rather than it just staying static:

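The exact loop isn’t preserved, but a minimal stand-in along these lines will keep a core busy:

    # Spin a CPU core indefinitely to generate load (stop with Ctrl+C)
    while ($true) {
        $result = 1
        for ($i = 0; $i -lt 1000000; $i++) { $result = ($result + $i) % 97 }
    }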

Report Generation

Once you have finished profiling, you can execute the tool in report generation mode. This creates an Excel file (.xlsm) which provides a summary of all of the deployment recommendations. To complete this example, I executed the following command:

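A sketch of the GenerateReport call, again with illustrative paths; the VM list is the one produced and profiled in the earlier steps:

    ASRDeploymentPlanner.exe -Operation GenerateReport `
        -Directory "E:\ASR\ProfiledData" `
        -VMListFile "E:\ASR\VMList.txt"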

Job done! – The report is now saved in the location detailed above. The report contains a number of areas, starting with an opening summary page.


There are many tabs included which break down the individual details. One thing to bear in mind is configuring the frequency of DR drills and how long they last, as that will affect the costings. The default assumes 4 test failovers per year, lasting 7 days each time; you will want to adjust this accordingly.

The tool provides many useful recommendations above and beyond cost, e.g. the network bandwidth required to fulfil initial replication, the recommended VM type and storage placement (standard/premium), and the measured throughput from the host platform to an Azure Storage account. Give it a try!

Azure Virtual Datacentre – Free eBook


Governance in Azure is a hot topic and I often find myself talking to customers about the Azure Enterprise Scaffold, which is a prescriptive approach to subscription governance. I noticed today that a new (free) eBook has been released by the Azure Customer Advisory Team (AzureCAT). The book discusses how hosting on a cloud infrastructure is fundamentally different from hosting on a traditional on-premises infrastructure, and provides detail about how you can use the Azure Virtual Datacentre model to structure your workloads to meet your specific governance policies.

The first part of the eBook discusses three essential components: identity, encryption and software-defined networking, with compliance, logging, auditing and reporting running across all of these areas. It goes into detail about the technologies available in Azure that can help you to achieve this, for example Microsoft Compliance Manager, Availability Zones and other features such as Global VNet Peering, which I’ve discussed in other blog posts. It also covers new and upcoming features such as confidential computing through Trusted Execution Environments (TEEs), as well as virtual machine capabilities such as Secure Boot and Shielded VMs. There are many more areas discussed in the book, which is well worth reading.

The second part of the eBook brings this to life using Contoso as an example case study, which helps you to visualise and understand how you could interpret it for your own organisation. The final part of the book discusses the cloud datacentre transformation, and how this is an ongoing process to modernise an organisation’s IT infrastructure. It talks about the balance between agility and governance and discusses some common workload patterns.

This looks to be a great read (kudos to the AzureCAT team!) that makes a difficult area easier to understand, and it also provides a great model to pin design considerations against. I look forward to reading it in more detail later! The book can be downloaded at the following link: https://azure.microsoft.com/en-us/resources/azure-virtual-datacenter/en-us/

Azure Migration assessment tool when moving to CSP

More and more customers are moving to CSP as the preferred licensing model for their Azure platform. Azure CSP is explained here; in short, it is a licensing program for Microsoft partners, as well as a licence channel for various cloud services. It allows partners to offer added value to customers through many services, as well as to become a trusted advisor to those customers. CSP currently includes Office 365, Dynamics 365, Enterprise Mobility + Security and Azure, as well as other Microsoft online services.

When customers are moving from other agreements, e.g. EA or PAYG, they typically need to perform some form of assessment to ensure they can safely migrate their subscriptions to the CSP program. One part of that assessment entails understanding whether you have any services that are not available on CSP, as well as making sure you are running ARM-based resources as opposed to ASM (classic), which is not supported through CSP.

The following assessment tool provides a mechanism to analyse your resources: https://docs.microsoft.com/en-us/azure/cloud-solution-provider/migration/ea-payg-to-azure-csp/ea-open-direct-assessment

The CSP Status and Suggested Approach columns

Source: https://docs.microsoft.com/en-us/azure/cloud-solution-provider/migration/ea-payg-to-azure-csp/ea-open-direct-assessment

As you can see from the figure above, it will show you a suggested approach per resource ID/type and provide guidance as to whether there are any issues and what the next steps are.

It will also show you the estimated costings, etc. after moving to CSP, compared with your existing rate via EA:

View the subscription resource costs

Source: https://docs.microsoft.com/en-us/azure/cloud-solution-provider/migration/ea-payg-to-azure-csp/ea-open-direct-assessment

All in all, a fairly simple tool but if you are considering a migration to CSP then well worth running to get some quick details on your current status!

Recap of key Azure features from Ignite Part 2

… continuation of the Part 1 post which can be found here

The following post summarises the remaining five features that I found interesting from the announcements at the Ignite conference.

Azure Top 10 Ignite Features

5. Global Virtual Network Peering (preview)

Inter-VNet peering is a technology that allows you to connect a VNet directly to another VNet, without having to route that traffic via a gateway of some sort. Bear in mind that VNets are isolated until you connect them; this feature allows you to peer one VNet with another, removing the complexity of routing that traffic via a gateway and/or back on-premises. In addition, it allows you to take advantage of the Microsoft backbone with low-latency, high-bandwidth connectivity. Inter-VNet peering is available to use today, however it is constrained to a particular region (i.e. you can only peer VNets that both exist within UK South, for instance – not between UK South and UK West).

virtual network peering transit

Source: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview

Global VNet Peering addresses that constraint and allows you to peer between regions, gaining global connectivity without having to route via your own WAN. The feature is currently in preview in selected regions (US and Canada).
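As a rough sketch of what the configuration looks like with the AzureRM PowerShell module of the time (VNet and resource group names are illustrative), a peering link is created in each direction:

    # Fetch the two VNets to be peered (assumed to exist already)
    $vnetA = Get-AzureRmVirtualNetwork -Name "vnet-uksouth" -ResourceGroupName "rg-network"
    $vnetB = Get-AzureRmVirtualNetwork -Name "vnet-westus2" -ResourceGroupName "rg-network"

    # Peer A -> B and B -> A; both links are required before traffic will flow
    Add-AzureRmVirtualNetworkPeering -Name "uksouth-to-westus2" -VirtualNetwork $vnetA -RemoteVirtualNetworkId $vnetB.Id
    Add-AzureRmVirtualNetworkPeering -Name "westus2-to-uksouth" -VirtualNetwork $vnetB -RemoteVirtualNetworkId $vnetA.Id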

4. New Azure VM Sizes

Many new virtual machine sizes have been announced recently, factoring in differing workload types (e.g. databases) as well as more cost-effective virtual machines. A large number of organisations see Azure IaaS as a key platform, allowing them to scale workloads that still require complete control over the operating system.

The announcements around Ignite were mainly focused on SQL Server and Oracle type workloads that require high memory and storage, but are not typically CPU intensive. Some of the latest specifications (e.g. DS, ES, GS and MS) offer vCPU counts constrained to 1/4 or 1/2 of the original VM size.

An example of this would be the Standard GS5, which comes with 32 vCPUs, 448GB memory and 64 disks (up to 256TB total), alongside the new GS5-16 and GS5-8, which come with 16 and 8 active vCPUs respectively.

Another interesting VM type announced recently is the B-series (burstable VMs), which bank credits while CPU usage sits below a baseline and let you spend those credits to burst above it when needed. One to review!

3. Planned VM maintenance

Maintenance in Azure has long been a bugbear of many customers. If you are operating a single virtual machine (which, to be fair, you should think about architecting differently anyway…) then at any time Microsoft may perform updates on the underlying hypervisors that run the platform. If your virtual machine is in the affected update domain then it will be restarted… and certain data (i.e. that stored in cache) may be lost.

Planned VM maintenance helps greatly here, as it provides better visibility and control over when maintenance windows occur, even allowing you to proactively start maintenance early at a time that suits your organisation. You can create alerts, and discover which VMs are scheduled for maintenance ahead of time. In addition, you can choose between VM-preserving and VM-restarting/redeploy maintenance to better manage the recovery of the VM post maintenance.
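As a sketch with the AzureRM cmdlets of the era (resource names are illustrative), you can query the maintenance status of a VM and trigger self-service maintenance ahead of the platform window:

    # Inspect whether a maintenance window is scheduled for the VM
    $vm = Get-AzureRmVM -ResourceGroupName "rg-prod" -Name "vm-app01" -Status
    $vm.MaintenanceRedeployStatus

    # If a self-service window is available, start maintenance at a time that suits you
    Restart-AzureRmVM -ResourceGroupName "rg-prod" -Name "vm-app01" -PerformMaintenance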

As stated above, this problem largely goes away if you can re-architect your application with HA in mind. Plan to use Azure Availability Zones (AAZ) when they come out of preview; if not, look into regional availability and/or introducing Traffic Manager and load balancers into your application.

2. Azure Migrate (preview)

Another great announcement was the introduction of a new capability called Azure Migrate, which is currently in preview. This service is similar to the Microsoft Assessment and Planning (MAP) toolkit, but is very Azure focused (whereas MAP tended to be all about discovery and then lightweight Azure assessments).

The tool provides visibility into your applications and services and goes one step further to map the dependencies between applications, workloads and data. Historically, those who have worked with Azure for a while will remember using tools like OMS to map these inter-dependencies, or mapping them out themselves in painstaking fashion. A brief overview of the tool console can be seen in the announcement post linked below:


Source: https://azure.microsoft.com/en-gb/blog/announcing-azure-migrate/

The tool is currently in preview, and is free of charge for Microsoft Azure customers (at the time of writing). It is appliance based, discovers virtual machines, and applies intelligence such as “right-sizing” to the correct Azure VM type (thus saving costly IaaS overheads!). It maps multi-tier app dependencies and is a much deeper and richer capability set than MAP.

… and finally… drumroll please…

1. Azure Stack

I wrote a lengthy post on Azure Stack recently for the organisation I work for, Insight UK, and that post can be found here. Azure Stack was and is a big announcement from Microsoft and, in my opinion, demonstrates their commitment to the enterprise. Microsoft have firmly recognised the need to retain certain workloads on-premises for a variety of reasons, from security/compliance through to performance.

Azure Stack is Microsoft’s true hybrid cloud platform and is provided at present by four vendors: HPE, Dell EMC, Lenovo and Cisco. It provides a consistent management interface from the public Azure cloud to on-premises, ensuring your DevOps/IT teams can communicate with applications in the same way irrespective of location. It allows for consistent management of both cloud-native and legacy applications.


Source: https://blogs.technet.microsoft.com/uktechnet/2016/02/23/microsoft-azure-stack-what-is-it/

Provided as a four, eight or twelve node pre-configured rack, the software is locked down by Microsoft and only they can amend it or provide updates. In addition, the Stack firmware and drivers are controlled by the manufacturer and remain consistent with the software versions.

The hardware is procured directly from the vendor, and the resources are then charged in a similar way to the public Azure cloud. The Stack offers either a capacity-based model or pay-as-you-go, and can even operate in offline mode (a great example being Carnival cruise ships)…

.. thanks for reading! – that’s my top 10 summary of Azure-related announcements that came out of the Ignite conference in 2017. There are many more announcements and features, and I hope to get more time to lab and write about them in the near future!

Update: Azure VNet Service Endpoints – Public Preview Expanded

I blogged about Virtual Network Service Endpoints (VNSE) recently, after the feature was announced in preview in mid-September. From the earlier post:

Virtual Network Service Endpoints is a new feature to address situations whereby customers would prefer to access resources (Azure SQL DBs and Storage Accounts in the preview) privately over their virtual network as opposed to accessing them using the public URI.

Typically, when you create a resource in Azure it gets a public-facing endpoint. This is the case with storage accounts and Azure SQL. When you connect to these services you do so using this public endpoint, which worries some customers who have compliance and regulatory requirements, or who simply want to optimise the route the traffic takes.

Initially this feature was restricted to the US and Australian regions. I missed the announcement last week that it has been expanded into all Azure regions (still in preview), which is great news. I have introduced the preview of this feature to several customers recently; they saw great advantages in being able to address storage and SQL resources privately rather than via a public URI, and considered it something that would increase their opportunities in the Azure space.

Protect yourself from disaster with Azure Site Recovery!


Having a robust Disaster Recovery (DR) plan in place is a key foundation of any IT strategy and risk management approach. Ensuring key services are protected, at a minimum through backup, or cost permitting through some form of replication, is of critical importance. This will help to ensure that, in the event of a disaster, be it small or large, the organisation’s IT systems can be recovered, allowing the business to continue functioning from an IT perspective.

This post will focus on the capabilities provided by Azure Site Recovery (ASR), and how this is a perfect solution to bolster an organisation’s protection. However, protecting yourself from disaster involves a much wider set of considerations, e.g. backup, plans, runbooks, automation and replication services. Each of these topics has its own unique considerations and deserves a post of its own.

A large number of organisations survive with a single datacentre, or perhaps two but at a single location, mainly because of cost constraints. This leaves those organisations susceptible to many disasters, both natural (flooding, earthquake, fire, etc.) and man-made (electrical failure, collapse, etc.). A disaster like this could result in the need to recover to a temporary location, perhaps from backup (with fingers crossed that the backups work!), with a lengthy lead time to rebuild and recover systems. This lead time can be the difference between an organisation recovering or suffering serious financial loss, reputational damage, etc. Enter ASR…


Azure Site Recovery (ASR) provides a great solution to help organisations overcome the risks outlined above. Put simply, ASR protects and replicates your on-premises (and cloud-based) services to the Microsoft datacentres. If you are an organisation that only has a single datacentre then this can provide that much sought-after secondary/tertiary location. In addition to replication, ASR also provides monitoring, recovery plans and testing services. As pointed out earlier, ASR can protect on-premises workloads, as well as workloads already in Azure (ensuring they are replicated to another Azure region). Since the focus of this post is on-premises, how does this part work?

ASR supports the replication of services/virtual machines from all key platforms, including Hyper-V and VMware. If you do not use these virtualisation platforms you can use the “physical” agent option, which can be installed on a physical server or as an in-VM agent (if you are using a different virtualisation platform, e.g. KVM). If you are using the VMware integration, or the physical option, a configuration server is required. The following link provides more detail around the support matrix for replicating these workloads.

Dependent upon the on-premises capabilities, the following Microsoft links provide the architectural details for each configuration:

Architecture

Source: https://docs.microsoft.com/en-us/azure/site-recovery/concepts-hyper-v-to-azure-architecture

To get started with Hyper-V, you require an Azure subscription, a System Center Virtual Machine Manager (VMM) server, Hyper-V hosts, VMs and appropriate networking. A Site Recovery vault and storage account are configured in the Azure subscription, and the Site Recovery provider is installed onto the VMM server, which is then registered into the vault. The Recovery Services agent is installed on all Hyper-V hosts (or cluster members). In this configuration, no agent is required on any of the Hyper-V virtual machines, as replication is proxied via the host connection.
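As a minimal sketch of the Azure-side prerequisites using the AzureRM modules of the time (resource names are illustrative), the vault and storage account can be stood up as follows:

    # Create a resource group, Recovery Services vault and storage account for replicated data
    New-AzureRmResourceGroup -Name "rg-asr" -Location "UK South"
    New-AzureRmRecoveryServicesVault -Name "asr-vault" -ResourceGroupName "rg-asr" -Location "UK South"
    New-AzureRmStorageAccount -ResourceGroupName "rg-asr" -Name "asrreplicastore" `
        -Location "UK South" -SkuName "Standard_LRS"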

Virtual machines can then be replicated as per the associated replication policy (e.g. sync intervals, retention, etc.). Initial syncs can take time to move the data up to the Azure platform (unless you are using Import/Export and seeding).

Note: two key questions that come up with clients are as follows:

  1. Do I need Azure networking / a VPN in place?
  2. Can replication traffic be forced over the VPN?

1) Strictly speaking, no; however, if you want to provide client access to private virtual machine IPs then the answer is yes. If you have a client/server application that is internally addressed, that application will need to support IP address change/DNS update, and the client will still need a network route to connect to the application. If the application can be made publicly accessible then you may have alternatives.

2) In short, no – ASR is designed (like many Azure services) to be publicly accessible via a public URI. This means that data is typically sent via HTTPS to the endpoint over the internet. There are some changes you can make if you have ExpressRoute, however that is outside the scope of this post. This may change soon with the introduction of Virtual Network Service Endpoints, however that is currently a preview feature only supported on storage accounts and Azure SQL.

I hope this helps you to understand how ASR can help your organisation, and provide a brief overview of some of the typical considerations and architectures.

Recap of key Azure features from Ignite Part 1

I started writing a post about some of the Azure features I found interesting from the Ignite event, but then put it on hold as I decided to present on the topic instead at the Microsoft Cloud User Group (MSCUG) in Manchester last week. Now that’s done, I thought it’d be good to summarise the presentation!

This post will be split into two parts to avoid the article being too lengthy…

Azure Top 10 Ignite Features

First up…
10. Virtual Network Service Endpoints (Preview)

Virtual Network Service Endpoints is a new feature to address situations whereby customers would prefer to access resources (Azure SQL DBs and Storage Accounts in the preview) privately over their virtual network as opposed to accessing them using the public URI.

Typically, when you create a resource in Azure it gets a public-facing endpoint. This is the case with storage accounts and Azure SQL. When you connect to these services you do so using this public endpoint, which worries some customers who have compliance and regulatory requirements, or who simply want to optimise the route the traffic takes.

Configuring VNSE is fairly simple – it’s set up on the virtual network first and foremost, and then when you configure the resource you select the VNet that you would like to attach it to. The resource can then be locked down so that it is accessible from that VNet rather than from the wider internet.
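A sketch of the subnet-side configuration using the AzureRM module (network names and address prefix are illustrative):

    # Enable the Microsoft.Sql service endpoint on an existing subnet
    $vnet = Get-AzureRmVirtualNetwork -Name "vnet-prod" -ResourceGroupName "rg-network"
    Set-AzureRmVirtualNetworkSubnetConfig -Name "subnet-data" -VirtualNetwork $vnet `
        -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Sql"
    Set-AzureRmVirtualNetwork -VirtualNetwork $vnet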

This feature is currently only available as a preview in the US and Australian regions… I’d be interested to know when it will be publicly launched and rolled out across regions, as it looks to be a great enterprise feature!

9. Azure File Sync (Preview)

Azure File Sync is a new tool that complements Azure Files. Azure Files has been around for some time and essentially provides the capability to create an SMB 3.0 file share in Azure, running on Azure Storage. This is a great solution, however it can suffer from performance issues when on-premises users, or users connecting via the internet, try to access large files, due to latency and bandwidth constraints.


Step up Azure File Sync, which is currently in preview. Azure File Sync aims to address the performance concerns noted above by allowing you to synchronise files hosted in Azure to a local file server you host on-premises. This sounds fairly trivial, and perhaps unimpressive: surely the whole point of an Azure file share is to… host the files in Azure? Why duplicate the storage? Well, this is where Azure File Sync impresses, as it has the capability to tier the files and hold only the most frequently accessed files on-premises, whilst still providing the ability to view all the other files through cloud recall.

More details on this feature can be found here… https://azure.microsoft.com/en-gb/blog/announcing-the-public-preview-for-azure-file-sync/?cdn=disable

8. Cost Management and Billing

This is a massive announcement, in my opinion, and if I’d ordered my top ten correctly it would be nearer to number one! Several customer concerns over the last 12-18 months have primarily been around controlling, understanding and forecasting cost across their cloud platforms. Partners have typically innovated in this space, and a number of third-party solutions have come to market, e.g. Cloud Cruiser, which can perform this function across multiple public cloud vendors (e.g. AWS, Azure and Google).

In response to customer concerns (in my opinion), and to increase the feature set on Azure, Microsoft acquired Cloudyn to help organisations manage their cloud spend. It provides tools to monitor, allocate, and optimise cloud costs so you can further your cloud investments with greater confidence.

The tool is currently free for customers of Azure, and can be accessed directly from the Azure Portal, under Cost Management in the Billing section. Looking forward to talking to customers about this to help remove a potential (simple) barrier to cloud usage.

Cloudyn - Cost Management and Billing

7. Azure Availability Zones (Preview)

This feature is intended to provide parity with other vendors, such as AWS, by allowing organisations to select a specific “zone” to deploy their resources to within a region. Currently when deploying resources in Azure, the only option you have is regional. For example, when deploying a virtual machine you get to choose “North Europe” or “UK South”. This means that if you want to plan DR/BCP for a specific application you typically need to plan it cross-region, which can lead to key considerations around latency and bandwidth.

This feature allows you to stipulate a specific “zone” when deploying a supported resource. Supported resources include virtual machines, scale sets, disks and load balancers. When you deploy one of these resources you can choose an “instance”, identified by a number; the instance corresponds to a zone. If you then deploy a secondary resource and select a different zone, it will be placed in a different datacentre. Generally the round-trip time between such datacentres is very low (as part of the design considerations Microsoft apply when designing their regions). This allows you to plan true DR for your applications without having to worry about regional latency.
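A sketch of what a zonal deployment looked like with the AzureRM module during the preview (names are illustrative); the -Zone parameter pins the resource to a zone within the region:

    # Pin a VM configuration and its public IP to zone 1 of East US 2
    $vmConfig = New-AzureRmVMConfig -VMName "vm-app01" -VMSize "Standard_DS1_v2" -Zone 1
    $pip = New-AzureRmPublicIpAddress -ResourceGroupName "rg-zonal" -Name "pip-app01" `
        -Location "eastus2" -AllocationMethod Static -Sku Standard -Zone 1
    # (NIC, image and OS settings would follow as in any VM deployment)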

Availability Zone visual representation

Source: https://azure.microsoft.com/en-gb/updates/azure-availability-zones/

This is a great feature and is currently in preview in a select number of locations: US East 2 and West Europe. For a region to qualify for AAZ, it must have 3 or more localised datacentres. For more information about this feature, please look here.

… and finally for Part 1:

6. Azure Gateway – 6x faster!

This was a raw performance improvement, increasing the potential throughput of an Azure VPN gateway by up to 6x. The gateways now come in four flavours:

  • Basic – which is suitable for test or development workloads, supporting up to 100Mbps and a 3 9s SLA (99.9%)
  • VpnGw1 – suitable for production workloads, with speeds up to 650Mbps and a 99.95% SLA
  • VpnGw2 – suitable for production workloads, with speeds up to 1Gbps and a 99.95% SLA
  • VpnGw3 – suitable for production workloads, with speeds up to 1.25Gbps and a 99.95% SLA!

This is important because, for some organisations, an ExpressRoute connection is neither the best fit nor cost feasible; further investment in the standard gateways therefore opens up more performance and allows even more organisations to fully leverage the power of the Azure cloud!
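As a sketch of how the SKU is selected at gateway creation time with the AzureRM module (names are illustrative; the gateway subnet, public IP and IP configuration are assumed to exist already):

    # Create a route-based VPN gateway using the VpnGw1 SKU
    New-AzureRmVirtualNetworkGateway -Name "vpngw-prod" -ResourceGroupName "rg-network" `
        -Location "UK South" -IpConfigurations $gwIpConfig `
        -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1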

And that’s it for this post – I’ll summarise the remaining features I found interesting shortly in Part 2.

Azure Compute goes supernova…

Yesterday saw two key announcements on the Azure platform: the launch of the new Fv2 VM series (officially the fastest on Azure), as well as Microsoft sharing details of a partnership with Cray to provide supercomputing capabilities in Azure.


The Fv2 virtual machine addresses the need for large-scale computation and runs on the fastest Intel Xeon processors (codenamed Skylake). The specification comes in seven sizes, from 2 vCPUs/4GB through to 72 vCPUs and 144GB! The CPUs are hyper-threaded and operate at a 2.7GHz base frequency with a turbo frequency of 3.7GHz. These machines are generally targeted at organisations performing large-scale analysis that requires massive compute power. For more details see: https://azure.microsoft.com/en-gb/blog/fv2-vms-are-now-available-the-fastest-vms-on-azure/ Note: at this time the Fv2 is only available in specific regions (West US 2, West Europe and East US).

The more eye-catching announcement was with regards to the partnership with Cray to bring supercomputing capabilities to Azure. A supercomputer is defined as:

A supercomputer is a computer with a high level of computing performance compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). As of 2017, there are supercomputers which can perform up to nearly a hundred quadrillion FLOPS, measured in P(eta)FLOPS. The majority of supercomputers today run Linux-based operating systems. (Source: https://en.wikipedia.org/wiki/History_of_supercomputing)

Supercomputers have always felt a bit mythical to me, as they have always been out of reach of the general public and the vast majority of organisations. The raw speed of the world’s fastest supercomputers (with China currently leading the way with the Sunway TaihuLight, operating at an insane 93 PFLOPS!) will remain mythical in a sense, however Microsoft is going a long way towards bringing some of these capabilities to the Azure platform through its exclusive partnership with Cray.

This partnership is intended to provide access to supercomputing capabilities in Azure for key challenges such as climate modelling and scientific research. It will allow customers to leverage these capabilities as they would any other type of Microsoft Azure resource, helping to transform their businesses by harnessing the power of the cloud. Microsoft are hoping this will lead to significant breakthroughs for many organisations, as it opens the doors to supercomputing capabilities that would previously have been out of reach.

To read more, please refer to the Microsoft Announcement: https://azure.microsoft.com/en-gb/blog/cray-supercomputers-are-coming-to-azure/

On a closing note, the other key advantage, and one of the key tenets of any cloud computing platform, is that these resources are available on a consumption basis – you can use them for as long as you need – without any up-front capital investment, or having to pay for the time and effort required to build the capability on-premises! This is one of many compelling reasons you should be looking to build a platform like Microsoft Azure into your overall cloud or IT strategy moving forwards.

Azure ‘Just in time’ VM access

When I first saw “Just in time” (JIT) as a new preview announcement for Azure VMs, my mind was cast back to one of my first roles in IT, working for an automotive firm that supplied parts to car manufacturers. JIT was (or is!) a supply chain strategy whereby parts or products arrive at a specific point only when required. Although I wasn’t directly involved, it used to fill me with dread, as some of the penalties for holding up the supply chain were eye-watering. Anyway, I digress…

In relation to the Azure announcement, JIT is a security process, currently in preview, whereby access is granted to a virtual machine only when required. It is simple in its architecture, in that it uses ACLs/NSGs to lock down access to specific inbound ports until a request by an authorised user is made. This drastically reduces the attack surface of your virtual machines and cuts exposure to basic port scanning and brute-force attacks.
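JIT itself is driven from Security Center, but the underlying mechanism can be pictured as NSG rule manipulation. Here is a conceptual sketch with the AzureRM module (rule names, priorities and the source IP are illustrative, and this is not the actual JIT implementation):

    # Default posture: deny inbound RDP
    $nsg = Get-AzureRmNetworkSecurityGroup -Name "nsg-app" -ResourceGroupName "rg-prod"
    Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Deny-RDP" `
        -Access Deny -Protocol Tcp -Direction Inbound -Priority 4000 `
        -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389

    # On an approved request, a higher-priority allow rule scoped to the requester's IP is inserted,
    # then removed again when the approved time window expires
    Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "JIT-Allow-RDP" `
        -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
        -SourceAddressPrefix "203.0.113.10" -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389
    Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg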

Licensing wise, JIT is part of the standard tier of Azure Security Center, which comes in two tiers:

  1. Free tier – this is enabled by default on your subscription and provides access to key security recommendations to help protect your infrastructure
  2. Standard Tier – extends the cloud and on-premises capabilities of Azure Security Center. Additional features are noted below:
    • Hybrid Security
    • Advanced Threat Detection
    • JIT VM access
    • Adaptive application controls

The following link provides pricing details: https://docs.microsoft.com/en-us/azure/security-center/security-center-pricing (Note: the standard tier is free for the first 60 days to allow you to evaluate the components!)

Over the next few weeks I’ll aim to create blog posts for each of these areas to provide a rounded picture of what the standard security tier can provide for your organisation.

The preview of JIT can be found within the Azure Security Center pane, on the home page. As it is a preview, you need to activate it.


An overview of some of the alert types and an activation link can be found on the next page.


If you click through the following page you will receive information around pricing and have the ability to “on-board” into the standard tier.


Once you click through, you need to apply the plan to your resources, select upgrade and then save the applicable settings.


If you navigate back to the Security Center homepage, you will now see all of the new features activated within your subscription.


Enabling JIT is very simple. Click within the JIT tile in Security Center and you’ll be presented with the JIT pane. From within here, click the “Recommended” option, and you should see any VMs in your subscription not currently enabled for JIT.


From here simply click on the VM on the left-hand side, review the current state/severity and then click “Enable JIT on x VMs”… A pane will appear recommending key ports that should be locked down. It is important to note that from here you can add any additional ports, and also specify key things like the source IPs that will be allowed access upon request, and the duration for which access will be granted.


Following this, your VM will go into a “resolved” state and you will be able to head back to the main JIT screen, navigate to the “Configured” tab, and hit request access…


The request access screen will allow you to permit access to specific ports from either “MyIP” (which will detect the current IP you are accessing from) or a specific IP range. You can also specify a time frame, up to the maximum limit configured in the earlier step.

Note: the ability to request access will depend upon your user role and any configured RBAC settings in the Resource Group or Virtual Machine.


In summary? I think JIT is a great feature and another measure of the investment Microsoft are making in security on the Azure platform. It is a feature I will be talking about with clients, and in conjunction with the other features of Security Center I think it is an investment many organisations will look to make.

Azure Network Announcements at Ignite 2017

My blog has been very quiet recently, having taken a few weeks off to spend time with the family before joining Insight UK as a Cloud Architect in the Hybrid Cloud team. The new role is exciting, and with all of the innovation in the cloud space across all vendors, it’s a great time to join Insight to help advise clients and the community on leveraging it. However, enough of the excuses about why things have been quiet…


Ignite 2017 is like Christmas for anyone with an interest in the Microsoft ecosystem, and there have been a ton of announcements from a technical, strategy and business perspective to keep us all busy for some time to come. I’ve been collating my thoughts and plan on pulling together an all-up view of the event once it wraps up.

One of the key things to pique my interest (being heavily focused on Azure) is today’s announcements in the networking space. The following Microsoft Azure blog post by Yousef Khalidi, CVP of Azure Networking, provides a great overview:

https://azure.microsoft.com/en-gb/blog/azure-networking-announcements-for-ignite-2017/

At first glance at the above blog I expected a small number of changes/innovations, however there are 22 (by my very rough counting!) individual areas in the announcements, from general performance and better availability through to enhancements in monitoring and management. Some of the key areas that interested me include:

  • Virtual Network Service Endpoints – this is a very positive change. A number of customers questioned the need to publicly address Azure services, citing obvious security concerns and asking how this would be managed. Their key question was always “how do I turn this off?” From an architecture perspective I guess the key challenge for Microsoft was ongoing management, how the service would be accessed, etc. This new innovation removes the requirement for the public endpoint, instead allowing you (if you want to!) to restrict access to the service to your own VNet, not the internet. Awesome! As per the original MS blog, more info can be found here: https://docs.microsoft.com/azure/virtual-network/virtual-network-service-endpoints-overview
  • ExpressRoute peering changes – this interested me as one of the key topics I usually discuss with clients is the three different peering options available over ExpressRoute: private, public and Microsoft. As the blog notes, private peering carries traffic to your own VNets, public peering carries traffic to Azure VIPs, and Microsoft peering carries traffic to SaaS services, e.g. Office 365. Customers have had several challenges with Microsoft peering, namely around routing configurations within their own network and with the ExpressRoute provider. More recently, it was my understanding that Microsoft peering was actually not recommended unless specific compliance regulations demanded it. With the above announcements it will be interesting to dig into this in more detail to understand it better. One for the ExpressRoute black belt calls…
  • General monitoring improvements – it’s great to see that OMS is mentioned everywhere and is becoming a key focal point across lots of components in the Microsoft space. There are some great improvements that will help customers in this announcement, e.g. availability monitoring of your connections, monitoring of endpoints (e.g. PaaS or SaaS availability), and some cool innovations around real user measurements and traffic flow within Traffic Manager.

Each of the above topics deserves individual consideration, as evidently a lot of effort has gone in behind the scenes by the Azure team, and it’s great to see them listen to customers and act on the recommendations made. Big thumbs up, and I look forward to trying some of these out!