Increasing automation opportunities with Azure


Automation (and Orchestration, which is subtly different) is always a hot topic for many organisations, for a variety of reasons: freeing up time spent on repetitive tasks, reducing errors in execution (because we’re human!) and making the environment you manage much more efficient are all key benefits.

But what are Automation and Orchestration, and how do they differ? At the highest level, automation is the ability to take a task or procedure that you would otherwise execute manually and automate it. The task may involve several stages, and automation will perform those stages automatically given an input. Examples include restarting a service, deleting temporary files or creating accounts. Orchestration takes this a step further and allows you to take a series of tasks and orchestrate them into a workflow. Examples include joiner/leaver/transfer account management and virtual machine, application or service provisioning.

For most IT pros, automation has always been something we would like to do, but we either lack the time to develop the required script or Runbook, lack an environment from which to build the automation, or manage a mixed estate for which common automation tools have been hard to come by.

Since Microsoft acquired Opalis in late 2009, automation has become much more commonplace because of the ease with which it can be created. System Center Orchestrator (the evolution of Opalis) provided a platform from which automation “Runbooks” could be created and executed. Using this technology, organisations began to expand their automation capabilities to create highly automated, self-service driven environments. Alternative technologies exist outside the Microsoft ecosystem, e.g. VMware vRealize or Cisco UCS Director.

Whilst Orchestrator is still an excellent technology, it requires a fairly hefty server footprint, needing management, Runbook, web and database servers to function. In a highly available configuration (and hey, you’ll want your automation platform to be HA!) this can be a costly investment that requires ongoing management and maintenance even before you start to automate.

Azure has evolved this by providing automation technologies in the cloud, allowing you to automate on-premises workloads as well as cloud-based workloads. Two technologies are particularly relevant to automation in Azure: Azure Automation and Azure Functions.

I wanted to focus on some of the capabilities within Azure Automation for this particular blog post, but it is worth giving a quick mention to Azure Functions as a possible automation engine, as I’ve seen several customers using it to date. Azure Functions is one of the serverless technologies within Azure, providing an engine that can execute several programming languages (C#, JavaScript, PowerShell, etc.). Since the majority of operational automation is created using PowerShell scripts, Azure Functions can be a good option. Functions supports webhooks (HTTP), which can act as a trigger to run your code. In addition, Functions can use timers to execute at a particular date/time.
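To illustrate the webhook pattern, here is a minimal sketch of an HTTP-triggered PowerShell function. It assumes the PowerShell Functions programming model with the default “Request”/“Response” HTTP bindings from the template; the service name parameter and the task itself are purely hypothetical placeholders:

```powershell
# run.ps1 – a hypothetical HTTP-triggered function; assumes an httpTrigger input
# binding named "Request" and an http output binding named "Response" in function.json
using namespace System.Net

param($Request, $TriggerMetadata)

# Read an illustrative parameter from the query string or request body
$serviceName = $Request.Query.ServiceName
if (-not $serviceName) { $serviceName = $Request.Body.ServiceName }

# Placeholder for the operational task you would automate here,
# e.g. calling a management API or restarting a service on a target machine
$message = "Request received to restart service '$serviceName'"

# Return an HTTP response via the output binding
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = $message
})
```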

Whilst Functions provides a legitimate engine for your code, it lacks some of the Azure Automation features, e.g. the Hybrid Runbook Worker role for on-premises execution. Azure Automation is essentially the cloud/PaaS equivalent of System Center Orchestrator on-premises. It allows you to create your own PowerShell workflows or use one of the many available via the gallery. The service can be found under “Automation Accounts” in the portal, with the main functionality for this post under “Process Automation” and “Runbooks” (and the gallery), as seen in the figure below:

[Figure: Automation account blade showing Process Automation > Runbooks]
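To give a flavour of what a Runbook looks like, the following is a minimal, hypothetical PowerShell Workflow that restarts a stopped Windows service. The computer and service names are illustrative, and an on-premises target like this would normally be reached via a Hybrid Runbook Worker:

```powershell
# A minimal, hypothetical PowerShell Workflow Runbook – computer and service
# names are illustrative; on-premises targets run via a Hybrid Runbook Worker
workflow Restart-StoppedService
{
    param (
        [Parameter(Mandatory = $true)] [string] $ComputerName,
        [Parameter(Mandatory = $true)] [string] $ServiceName
    )

    # InlineScript runs ordinary PowerShell inside the workflow
    InlineScript {
        $svc = Get-Service -ComputerName $Using:ComputerName -Name $Using:ServiceName
        if ($svc.Status -ne 'Running') {
            Set-Service -ComputerName $Using:ComputerName -Name $Using:ServiceName -Status Running
            Write-Output "Started '$Using:ServiceName' on $Using:ComputerName"
        }
        else {
            Write-Output "'$Using:ServiceName' is already running on $Using:ComputerName"
        }
    }
}
```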

As you can see from the following figure, there are numerous pre-canned Runbooks available. These can either be used as-is, or could form the basis of your own Runbooks:

[Figure: Runbooks gallery showing the available pre-canned Runbooks]

Reviewing one of the options takes you to the PowerShell code for that particular script and provides an option to import it into your own Automation account for execution:

[Figure: Runbook source view with the option to import into your Automation account]

From here you can edit the script, configure webhooks, and deploy or publish it:

[Figure: Runbook edit view with options to publish and add webhooks]
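The same import/publish/start flow can also be scripted. The sketch below assumes the Az.Automation PowerShell module and uses made-up resource group, account, runbook and Hybrid Worker group names:

```powershell
# A hedged sketch using the Az.Automation module – all resource names are hypothetical
# Requires: Install-Module Az; Connect-AzAccount

$rg      = 'rg-automation'        # assumption: existing resource group
$account = 'aa-prod-automation'   # assumption: existing Automation account

# Import a local PowerShell Workflow runbook into the Automation account
Import-AzAutomationRunbook -ResourceGroupName $rg -AutomationAccountName $account `
    -Name 'Restart-StoppedService' -Path '.\Restart-StoppedService.ps1' `
    -Type PowerShellWorkflow

# Publish the draft so it can be started or linked to a schedule/webhook
Publish-AzAutomationRunbook -ResourceGroupName $rg -AutomationAccountName $account `
    -Name 'Restart-StoppedService'

# Start a job, optionally targeting an on-premises Hybrid Runbook Worker group
Start-AzAutomationRunbook -ResourceGroupName $rg -AutomationAccountName $account `
    -Name 'Restart-StoppedService' `
    -Parameters @{ ComputerName = 'SERVER01'; ServiceName = 'Spooler' } `
    -RunOn 'OnPremWorkerGroup'
```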

Azure Automation is not only concerned with the automation of bespoke activities via Runbooks; it also contains other great functionality, e.g.

  • Ability to perform update management (similar to traditional WSUS/SCCM tooling)
  • Ensure compliance of your workloads via Desired State Configuration (DSC), which can track configuration and ensure machines meet the desired state (a minimal configuration sketch follows this list)
  • Perform inventory management of your services, in a similar way to tools such as Configuration Manager or Operations Manager
  • Track and manage change related activities integrated into your existing ITSM processes
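As referenced in the DSC bullet above, here is a minimal sketch of a DSC configuration of the kind that could be managed from Azure Automation State Configuration. The node, feature and service names are illustrative, as are the import/compile cmdlet parameters shown in the comments:

```powershell
# A minimal, hypothetical DSC configuration – node, feature and service names are illustrative
Configuration WebServerBaseline
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }

        Service W3SVC
        {
            Name        = 'W3SVC'
            State       = 'Running'
            StartupType = 'Automatic'
            DependsOn   = '[WindowsFeature]IIS'
        }
    }
}

# The configuration can then be imported into Azure Automation and compiled before
# assigning nodes to it, e.g. (names again hypothetical):
# Import-AzAutomationDscConfiguration -ResourceGroupName 'rg-automation' `
#     -AutomationAccountName 'aa-prod-automation' -SourcePath '.\WebServerBaseline.ps1' -Published
# Start-AzAutomationDscCompilationJob -ResourceGroupName 'rg-automation' `
#     -AutomationAccountName 'aa-prod-automation' -ConfigurationName 'WebServerBaseline'
```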

It is worth noting that a recent preview announcement for Azure Automation is the introduction of watcher tasks. These rely on the Hybrid Runbook Worker role for on-premises integration and allow automation to be triggered when a specific activity occurs, e.g. a new ticket in a helpdesk or a new event in a SOC. More information can be found here.

From a pricing perspective, Azure Automation is very competitive. Process automation is priced per job execution minute, whilst configuration management is priced per managed node. Typically you get 500 minutes free per month, and each additional execution minute is charged at £0.002. You can also wrap pricing into the Operations Management Suite (OMS) technologies for further functionality and value.
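As a purely illustrative example based on the figures above: a runbook that runs for 10 minutes, three times a day, consumes roughly 900 job minutes per month; the first 500 minutes are free, so the remaining ~400 minutes would cost around 400 × £0.002 = £0.80. Rates and free grants change over time, so always check the current pricing page.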

In summary, Azure Automation is a mature, well developed and agile platform to satisfy your automation requirements. It provides great features and is continuously evolving. Even better, you can take advantage of pre-canned Runbooks, rather than having to write them from scratch!

Azure Virtual Datacentre – Free eBook


Governance in Azure is a hot topic, and I often find myself talking to customers about the Azure Enterprise Scaffold, which is a prescriptive approach to subscription governance. I noticed today that a new (free) eBook has been released by the Azure Customer Advisory Team (AzureCAT). The book discusses how hosting on a cloud infrastructure is fundamentally different from hosting on a traditional on-premises infrastructure, and details how you can use the Azure Virtual Datacentre model to structure your workloads to meet your specific governance policies.

The first part of the eBook discusses three essential components, Identity, Encryption and Software Defined Networking, with compliance, logging, auditing and reporting running across all of them. It goes into detail about the Azure technologies that can help you achieve this, for example Microsoft Compliance Manager, Availability Zones and features such as Global VNet peering, which I’ve discussed in other blog posts. It also covers new and upcoming features such as confidential computing through trusted execution environments (TEEs), as well as virtual machine capabilities such as Secure Boot and shielded VMs. There are many more areas discussed in the book, which is well worth reading.

The second part of the eBook brings this to life using Contoso as an example case study, which helps you to visualise and understand how you could interpret the model for your organisation. The final part discusses cloud datacentre transformation and how it is an ongoing process to modernise an organisation’s IT infrastructure. It talks about the balance between agility and governance and discusses some common workload patterns.

This looks to be a great read (kudos to the AzureCAT team!) that makes a difficult area easier to understand, and it provides a great model to pin design considerations against. I look forward to reading it in more detail later! The book can be downloaded at the following link: https://azure.microsoft.com/en-us/resources/azure-virtual-datacenter/en-us/

Azure Migration assessment tool when moving to CSP

More and more customers are moving to CSP as the preferred licensing model for their Azure platform. Azure CSP is explained here; in short, it is a licensing program for Microsoft partners, as well as a licence channel for various cloud services. It allows partners to offer added value to customers through many services, and to become a trusted advisor to those customers. CSP currently includes Office 365, Dynamics 365, Enterprise Mobility + Security, Azure and other Microsoft online services.

When customers are moving from other agreements, e.g. EAS or PAYG, they typically need to perform some form of assessment to ensure they can safely migrate their subscriptions to the CSP program. Part of this entails understanding whether you have any services that are not available in CSP, as well as making sure you are running ARM-based resources as opposed to ASM (classic) resources, which are not supported through CSP.
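As a quick first check before running the assessment tool below, you can list any classic (ASM) resources in a subscription with a few lines of PowerShell. This sketch assumes the Az modules and uses a hypothetical subscription name:

```powershell
# A hedged sketch: list any classic (ASM) resources in the current subscription,
# since these are not supported under CSP. Uses the Az.Resources module.
Connect-AzAccount
Set-AzContext -Subscription 'My-EA-Subscription'   # hypothetical subscription name

Get-AzResource |
    Where-Object { $_.ResourceType -like 'Microsoft.Classic*' } |
    Select-Object Name, ResourceType, ResourceGroupName |
    Format-Table -AutoSize
```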

The following assessment tool provides a mechanism to analyse your resources: https://docs.microsoft.com/en-us/azure/cloud-solution-provider/migration/ea-payg-to-azure-csp/ea-open-direct-assessment

[Figure: The CSP Status and Suggested Approach columns]

Source: https://docs.microsoft.com/en-us/azure/cloud-solution-provider/migration/ea-payg-to-azure-csp/ea-open-direct-assessment

As you can see from the figure above, the tool shows a suggested approach per resource ID/type and guides you as to whether there are any issues and what the next steps are.

It will also show you the estimated costs after moving to CSP, compared with your existing rates under EA:

[Figure: View the subscription resource costs]

Source: https://docs.microsoft.com/en-us/azure/cloud-solution-provider/migration/ea-payg-to-azure-csp/ea-open-direct-assessment

All in all, it is a fairly simple tool, but if you are considering a migration to CSP it is well worth running to get some quick details on your current status!

Recap of key Azure features from Ignite Part 2

… a continuation of the Part 1 post, which can be found here

The following post summarises the remaining five features that I found interesting from the announcements at the Ignite conference.

Azure Top 10 Ignite Features

5. Global Virtual Network Peering (preview)

VNet peering is a technology that allows you to connect a VNet directly to another VNet without having to route that traffic via a gateway of some sort. Bear in mind that VNets are isolated until you connect them; this feature allows you to peer one VNet with another, removing the complexity of routing that traffic via a gateway and/or back on-premises. In addition, it allows you to take advantage of the Microsoft backbone, with low latency and high bandwidth connectivity. VNet peering is available to use today, however it is constrained to a particular region (i.e. you can only peer VNets that exist within UK South, for instance, not between UK South and UK West).

[Figure: Virtual network peering transit]

Source: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview

Global VNet peering addresses this and allows you to peer VNets between regions, gaining global connectivity without having to route via your own WAN. This feature is currently in preview in selected regions (US and Canada).
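For reference, creating a peering (regional today, global once the preview reaches your regions) is only a few lines of PowerShell. This sketch assumes the Az.Network module and uses hypothetical VNet, resource group and region names:

```powershell
# A hedged sketch using Az.Network – VNet names, resource groups and regions are hypothetical
$vnetUk = Get-AzVirtualNetwork -Name 'vnet-uksouth' -ResourceGroupName 'rg-network-uk'
$vnetUs = Get-AzVirtualNetwork -Name 'vnet-eastus'  -ResourceGroupName 'rg-network-us'

# Peering must be created in both directions
Add-AzVirtualNetworkPeering -Name 'uk-to-us' -VirtualNetwork $vnetUk `
    -RemoteVirtualNetworkId $vnetUs.Id

Add-AzVirtualNetworkPeering -Name 'us-to-uk' -VirtualNetwork $vnetUs `
    -RemoteVirtualNetworkId $vnetUk.Id
```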

4. New Azure VM Sizes

Many new virtual machine sizes have been announced recently, catering for differing workload types (e.g. databases) as well as more cost-effective virtual machines. A large number of organisations see Azure IaaS as a key platform, allowing them to scale workloads that still require complete control over the operating system.

The announcements around Ignite were mainly focused on SQL Server and Oracle type workloads that require high memory and storage but are not typically CPU intensive. Some of the latest specifications, e.g. DS, ES, GS and MS, offer constrained vCPU counts of one quarter or one half of the original VM size.

An example of this is the Standard GS5, which comes with 32 vCPUs, 448 GB memory and 64 disks (up to 256 TB total), alongside the new GS5-16 and GS5-8, which come with 16 and 8 active vCPUs respectively.
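If you want to see which constrained-vCPU variants are available in a region, a rough sketch such as the following can help. The hyphen filter is a heuristic (constrained sizes carry a suffix such as GS5-16) and the region is illustrative:

```powershell
# A hedged sketch: list VM sizes in a region and filter for constrained-vCPU
# variants; the hyphen match is a heuristic and the region is illustrative
Get-AzVMSize -Location 'uksouth' |
    Where-Object { $_.Name -like '*-*' } |
    Select-Object Name, NumberOfCores, MemoryInMB |
    Sort-Object Name
```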

Another interesting VM type announced recently is the B-series (burstable VMs), which banks credits while the VM runs below its baseline CPU performance and spends them when the workload needs to burst. One to review!

3. Planned VM maintenance

Maintenance in Azure has long been a bugbear for many customers. If you are operating a single virtual machine (which, to be fair, you should think about architecting differently anyway!) then at any time Microsoft may perform updates on the underlying hypervisors that run the platform. If your virtual machine is in the affected update domain it will be restarted, and certain data (i.e. anything stored on the temporary/cache disk) may be lost.

Planned VM maintenance helps greatly here, as it provides better visibility of and control over when maintenance windows occur, even allowing you to proactively start maintenance early at a time that suits your organisation. You can create alerts and discover which VMs are scheduled for maintenance ahead of time. In addition, you can see whether the maintenance is VM-preserving (in-place) or requires a restart/redeploy, to better manage the recovery of the VM post maintenance.
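To illustrate, the maintenance status of a VM can be inspected, and self-service maintenance started, with a few lines of PowerShell. This sketch assumes the Az.Compute module and uses hypothetical VM and resource group names:

```powershell
# A hedged sketch using Az.Compute – VM and resource group names are hypothetical
$vm = Get-AzVM -ResourceGroupName 'rg-prod' -Name 'vm-app01' -Status

# The instance view exposes the maintenance window details (if maintenance is scheduled)
$vm.MaintenanceRedeployStatus | Format-List

# If self-service maintenance is currently allowed, start it proactively
# at a time that suits the business rather than waiting for the platform window
if ($vm.MaintenanceRedeployStatus.IsCustomerInitiatedMaintenanceAllowed) {
    Restart-AzVM -ResourceGroupName 'rg-prod' -Name 'vm-app01' -PerformMaintenance
}
```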

As stated above, this problem largely goes away if you can re-architect your application with HA in mind. Plan to use Azure Availability Zones when they come out of preview; failing that, look into availability sets within the region and/or the introduction of Traffic Manager and load balancers into your application.

2. Azure Migrate (preview)

Another great announcement was the introduction of a new capability called Azure Migrate, which is currently in preview. The service is similar to the Microsoft Assessment and Planning (MAP) toolkit but is very Azure focused (whereas MAP tended to be all about discovery with lightweight Azure assessments).

The tool provides visibility into your applications and services, and goes a step further by mapping the dependencies between applications, workloads and data. Those who have worked with Azure for a while will remember using tools like OMS to achieve this inter-dependency mapping, or mapping it out themselves in painstaking fashion. A brief overview of the tool console is provided in the figure below:

[Figure: Azure Migrate console overview]

Source: https://azure.microsoft.com/en-gb/blog/announcing-azure-migrate/

The tool is currently in preview and is free of charge for Azure customers (at the time of writing). It is appliance based: it discovers virtual machines and performs analysis such as right-sizing to the correct Azure VM type (thus saving costly IaaS overheads!). It maps multi-tier application dependencies and offers a much deeper and richer capability set than MAP.

… and finally… drumroll please…

1. Azure Stack

I wrote a lengthy post on Azure Stack recently for the organisation I work for, Insight UK, and that post can be found here. Azure Stack was, and is, a big announcement from Microsoft and in my opinion demonstrates their commitment to the enterprise. Microsoft have firmly recognised the need to retain certain workloads on-premises for a variety of reasons, from security/compliance through to performance.

Azure Stack is Microsoft’s true hybrid cloud platform and is provided by four vendors at present: HPE, Dell EMC, Lenovo and Cisco. It provides a consistent management interface from the public Azure cloud to on-premises, ensuring your DevOps/IT teams can interact with applications in the same way irrespective of location, and it allows for consistent management of both cloud-native and legacy applications.

[Figure: Azure Stack overview]

Source: https://blogs.technet.microsoft.com/uktechnet/2016/02/23/microsoft-azure-stack-what-is-it/

Provided as a four, eight or twelve node pre-configured rack, the software is locked down by Microsoft and only they can amend it or provide updates. In addition, the Stack firmware and drivers are controlled by the manufacturer and remain consistent with the software versions.

The hardware is procured directly from the vendor and the resources are then charged in a similar way to the public Azure cloud. The Stack offers either a capacity-based model or pay-as-you-go, and can even operate in disconnected (offline) mode (a great example being Carnival cruise ships).

.. thanks for reading! That’s my top 10 summary of Azure-related announcements that came out of the Ignite conference in 2017. There were many more announcements and features, and I hope to get more time to lab and write about them in the near future!

Update: Azure VNet Service Endpoints – Public Preview Expanded

I blogged about Virtual Network Service Endpoints (VNSE) recently after the feature was announced in preview in mid-September. From the earlier post:

Virtual Network Service Endpoints is a new feature to address situations whereby customers would prefer to access resources (Azure SQL DBs and Storage Accounts in the preview) privately over their virtual network as opposed to accessing them using the public URI.

Typically, when you create a resource in Azure it gets a public-facing endpoint; this is the case with storage accounts and Azure SQL. When you connect to these services you do so using this public endpoint, which is a concern for some customers who have compliance and regulatory requirements, or who simply want to optimise the route the traffic takes.

Initially this feature was restricted to the US and Australian regions. I missed the announcement last week that it has been expanded into all Azure regions (still in preview), which is great news. I have introduced the preview of this feature to several customers recently; they saw great advantages in being able to address storage and SQL resources privately rather than via a public URI, and considered it something that would increase their opportunities in the Azure space.
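For completeness, enabling a service endpoint and locking a storage account down to a subnet looks roughly like the sketch below. It assumes the Az.Network and Az.Storage modules, entirely hypothetical resource names, and that the subnet’s existing address prefix is re-specified correctly:

```powershell
# A hedged sketch using Az.Network / Az.Storage – all names are hypothetical
$vnet = Get-AzVirtualNetwork -Name 'vnet-uksouth' -ResourceGroupName 'rg-network'

# Enable the Microsoft.Storage service endpoint on an existing subnet
# (the address prefix must match the subnet's current prefix)
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'app-subnet' `
    -AddressPrefix '10.0.1.0/24' -ServiceEndpoint 'Microsoft.Storage' |
    Set-AzVirtualNetwork

# Then restrict the storage account so it only accepts traffic from that subnet
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'app-subnet'
Add-AzStorageAccountNetworkRule -ResourceGroupName 'rg-storage' `
    -Name 'mystorageacct' -VirtualNetworkResourceId $subnet.Id
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName 'rg-storage' `
    -Name 'mystorageacct' -DefaultAction Deny
```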

Protect yourself from disaster with Azure Site Recovery!


Having a robust Disaster Recovery (DR) plan in place is a key foundation of any IT strategy and risk management approach. Ensuring key services are protected, at a minimum through backup or, cost permitting, through some form of replication, is of critical importance. This helps to ensure that, in the event of a disaster, be it small or large, the organisation’s IT systems can be recovered, allowing the business to continue functioning.

This post will focus on the capabilities provided by Azure Site Recovery (ASR) and how it can bolster an organisation’s protection. However, protecting yourself from disaster involves a much wider set of considerations, e.g. backup, recovery plans, Runbooks, automation and replication services. Each of these topics has its own unique considerations and is a subject for another post.

A large number of organisations survive with a single datacentre, or perhaps two at a single location, mainly because of cost constraints. This leaves them susceptible to many disasters, both natural (flooding, earthquake, fire, etc.) and man-made (electrical failure, structural collapse, etc.). A disaster like this could result in the need to recover to a temporary location, perhaps from backup (fingers crossed that the backups work!), with a lengthy lead time to rebuild and recover systems. That lead time can be the difference between an organisation recovering or suffering serious financial loss and reputational damage. Enter ASR…


Azure Site Recovery (ASR) provides a great solution to help organisations overcome the risks outlined above. Put simply, ASR provides protection and replication of your on-premises (and cloud-based) services into the Microsoft datacentres. If your organisation has only a single datacentre, this can provide that much sought-after secondary/tertiary location. In addition to replication, ASR also provides monitoring, recovery plans and test failover capabilities. As pointed out earlier, ASR can protect on-premises workloads as well as workloads already in Azure (replicating them to another Azure region). Since the focus in this post is on-premises, how does that part work?

ASR supports the replication of services/virtual machines from the key platforms, including Hyper-V and VMware. If you do not use these virtualisation platforms you can use the “physical” option, where an agent is installed on a physical server or inside the VM (if using a different virtualisation platform, e.g. KVM). If you are using the VMware integration or the physical option, a configuration server is required. The following link provides more detail on the support matrix for replicating these workloads.

Dependent upon the on-premises capabilities, the following Microsoft links provide the architectural details for each configuration:

[Figure: Hyper-V to Azure Site Recovery architecture]

Source: https://docs.microsoft.com/en-us/azure/site-recovery/concepts-hyper-v-to-azure-architecture

To get started with Hyper-V, you require an Azure subscription, a System Center Virtual Machine Manager (VMM) server, Hyper-V hosts and VMs, and appropriate networking. A Recovery Services vault and a storage account are configured in the Azure subscription, and the Site Recovery provider is installed on the VMM server, which is then registered with the vault. The Recovery Services agent is installed on all Hyper-V hosts (or cluster members). In this configuration, no agent is required inside the Hyper-V virtual machines, as replication is proxied via the host connection.
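As a rough sketch of the Azure-side prerequisites mentioned above (the vault and the target storage account), assuming the Az.RecoveryServices and Az.Storage modules and entirely hypothetical names and regions:

```powershell
# A hedged sketch using Az.RecoveryServices / Az.Storage – names and regions are hypothetical
New-AzResourceGroup -Name 'rg-asr' -Location 'uksouth'

# Create the Recovery Services vault that Site Recovery will use
$vault = New-AzRecoveryServicesVault -Name 'asr-vault-uk' `
    -ResourceGroupName 'rg-asr' -Location 'uksouth'

# Set the vault context for subsequent Site Recovery (Asr) cmdlets
Set-AzRecoveryServicesAsrVaultContext -Vault $vault

# A storage account in the same region acts as the replication target
New-AzStorageAccount -ResourceGroupName 'rg-asr' -Name 'asrreplicauk01' `
    -Location 'uksouth' -SkuName 'Standard_LRS'
```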

Virtual machines can then be replicated as per the associated replication policy (e.g. sync intervals, retention, etc.). Initial synchronisation can take time as the data is moved up to the Azure platform (unless you use the Import/Export service to seed the data).

Note: two key questions that come up with clients are as follows:

  1. Do I need Azure networking / a VPN in place?
  2. Can replication traffic be forced over the VPN?

1) Strictly speaking, no. However, if you want to provide client access to private virtual machine IPs after failover then the answer is yes. If you have a client/server application that is internally addressed, that application will need to support an IP address change/DNS update, and the client will still need a network route to connect to it. If the application can be made publicly accessible then you may have alternatives.

2) In short, no. ASR is designed (like many Azure services) to be publicly accessible via a public URI, which means replication data is typically sent over the internet via HTTPS to that endpoint. There are some changes you can make if you have ExpressRoute, however that is outside the scope of this post. This may change with the introduction of virtual network service endpoints, however that is currently a preview feature supported only for storage accounts and Azure SQL.

I hope this helps you to understand how ASR can help your organisation, and provides a brief overview of some of the typical considerations and architectures.