Azure Migration assessment tool when moving to CSP

More and more customers are moving to CSP as the preferred licensing model for their Azure platform. Azure CSP is explained here; in short, it is a licensing program for Microsoft partners, as well as a licensing channel for various cloud services. It allows partners to offer added value to customers through many services, and to become a trusted advisor to those customers. CSP currently includes Office 365, Dynamics 365, Enterprise Mobility + Security, Azure and other Microsoft online services.

When customers are moving from other agreements, e.g. an EA or PAYG, they typically need to perform some form of assessment to ensure they can safely migrate their subscriptions to the CSP program. One of those assessments entails understanding whether you have any services that are not available on CSP, and making sure you are running ARM-based resources as opposed to ASM (classic) resources, which are not supported through CSP.
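As a quick illustration of the ARM-vs-ASM check, the sketch below flags classic (ASM) resources in a resource list. It is a minimal example, assuming you have exported your resources (e.g. via `az resource list`); the sample data and the `find_classic_resources` helper are illustrative only, not part of the assessment tool itself.

```python
# Sketch: flag classic (ASM) resources that would block a CSP migration.
# Classic resource providers carry a "Microsoft.Classic*" type prefix.

CLASSIC_PREFIX = "microsoft.classic"

def find_classic_resources(resources):
    """Return resources whose type indicates a classic (ASM) deployment."""
    return [r for r in resources if r["type"].lower().startswith(CLASSIC_PREFIX)]

# Illustrative sample data, shaped like `az resource list` output.
sample = [
    {"name": "web-vm-01", "type": "Microsoft.Compute/virtualMachines"},
    {"name": "legacy-vm", "type": "Microsoft.ClassicCompute/virtualMachines"},
    {"name": "legacy-store", "type": "Microsoft.ClassicStorage/storageAccounts"},
]

blockers = find_classic_resources(sample)
for r in blockers:
    print(f"{r['name']}: {r['type']} is classic (ASM) - migrate to ARM before CSP")
```

In practice you would feed this the JSON export of your real subscription rather than the sample list above.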

The following assessment tool provides a mechanism to analyse your resources:

The CSP Status and Suggested Approach columns


As you can see from the figure above, it will show you a suggested approach per resource id/type and give guidance as to whether there are any issues and what the next steps are.

It will also show you the estimated costs after moving to CSP, compared with your existing rate via EA:

View the subscription resource costs


All in all, it is a fairly simple tool, but if you are considering a migration to CSP it is well worth running to get some quick details on your current status!

Protecting the keys to your cloud

A hot topic when migrating to Cloud providers is Security; where data is stored, how it is stored, how it is accessed, who can access it, and all of the other standard questions. However, a key theme runs through all of these questions: they focus directly on the cloud provider and miss arguably one of the biggest attack vectors, the service administrators, aka those with "keys to the kingdom"!

Working with clients over the last few years, I have been involved in several conversations surrounding securing admin access to cloud providers. Whether the end service is a SaaS-based offering, e.g. Office 365, or an application built upon a subset of PaaS/IaaS services in Azure, it is key to think about how you are going to provide access to IT staff as well as end-users so you can successfully manage and maintain those services on an on-going basis.

The vast majority of organisations have a variety of roles within their IT department, e.g. Service Desk, Roaming Support, Network Engineers, Server Engineers, Architects, etc. Each of these has specific activities and functions that need to be performed to fulfil the role, from unlocking accounts, to creating new users or changing policies/settings.

For a long time, organisations have been seeking the holy grail of RBAC (it's not all negative; many succeed!), but it is a difficult journey. From an on-premises Microsoft perspective, many organisations still carry a legacy of over-provisioned "privileged accounts", e.g. Domain/Enterprise Administrators, and these are usually allocated directly to a "user account" which rarely has any "controls" configured, e.g. two-factor authentication, logging or time-bound access.

The lack of proper controls over privileged accounts is a serious attack vector, and even more so when moving to cloud. Access to administrative consoles or portals for on-premises systems is often tightly controlled through perimeter protection, and they are rarely publicly exposed. With Cloud this is reversed, with portals primarily internet facing. It is therefore crucial that you appropriately secure the identities used to access administrative features, through controls such as Privileged Identity Management (PIM), time-bound access, multi-factor authentication, and allocation of specific roles as opposed to high-permission accounts (e.g. Global Administrator). Role Based Access Controls (RBAC) are a topic in their own right, and there are many good articles on the web discussing what to consider and how to achieve this.
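To make the idea of time-bound access concrete, here is a minimal sketch of an elevation that is only valid inside an explicit activation window. The class and names are hypothetical, purely to illustrate the concept; they do not represent any vendor's PIM API.

```python
# Illustrative sketch: a privileged elevation that expires automatically,
# rather than a permanently-assigned admin account.
from datetime import datetime, timedelta, timezone

class Elevation:
    """A time-bound grant of a privileged role to a user."""

    def __init__(self, user, role, duration_hours=1):
        self.user = user
        self.role = role
        self.activated_at = datetime.now(timezone.utc)
        self.expires_at = self.activated_at + timedelta(hours=duration_hours)

    def is_active(self, at=None):
        """The grant is only usable inside its activation window."""
        at = at or datetime.now(timezone.utc)
        return self.activated_at <= at < self.expires_at

grant = Elevation("jo.bloggs", "User Administrator", duration_hours=2)
print(grant.is_active())                 # active immediately after activation
print(grant.is_active(grant.expires_at)) # inactive once the window closes
```

The point of the sketch is the shape of the control: the privilege is requested, scoped to a role, and falls away on its own, rather than sitting on a standing account.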

As the journey to the cloud is still in its infancy (but progressing at speed!), I would heavily recommend investing time in getting your RBAC model correct before the legacy situation discussed earlier becomes a reality again. A few key areas you should consider on this journey include:

  • Develop a “Logical Model”. This should provide a logical view of your organisation, the standard users, operational teams, third party engineers, partners/consultants and contractors. Ultimately this should cover any user interacting with any system, regardless of permission or right. This activity may require a high degree of business analysis to understand the environment
  • Detail the roles that exist within each of the above categories, e.g. for each operational team there may be several roles that exist underneath it, there will be different roles within your standard users (potentially departmental) and there may be several types of contractor you employ
  • Understand the controls you want to apply, for example: are there any activation constraints on elevating permissions (change request), how are you applying the access/permission (PIM), is it a time-bound permission, does the access need to be witnessed, will the account perform a system operation, does it require auditing and/or does the session need to be recorded
  • Design your account strategy, for example: what type of accounts are you creating, standard user accounts or specialist ADM accounts? Are specialist system accounts used and authorised through PIM, and are you applying MFA to these accounts?
  • Map out your "Physical Model". This should include all available systems, technologies, etc. to whatever granularity you are trying to achieve. Some systems contain very in-depth RBAC controls, e.g. Office 365, Azure, System Center (to relate to Microsoft technologies); examples for Office 365 can be found here.
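The logical-to-physical mapping step above can be sketched as a simple lookup: each logical team maps to a built-in role per system. This is a hedged illustration only; the team and role names below are examples, not a recommended model.

```python
# Sketch: mapping a logical RBAC model onto built-in roles of physical systems.
# All names are illustrative examples.
LOGICAL_MODEL = {
    "Service Desk": {
        "office365": "Helpdesk Administrator",
        "azure": "Reader",
    },
    "Server Engineers": {
        "office365": "Exchange Administrator",
        "azure": "Virtual Machine Contributor",
    },
}

def role_for(team, system):
    """Look up the built-in role a logical team maps to on a given system.

    Returns None where no mapping exists yet - a useful gap report when
    building the model incrementally.
    """
    return LOGICAL_MODEL.get(team, {}).get(system)

print(role_for("Service Desk", "azure"))       # the team's Azure role
print(role_for("Network Engineers", "azure"))  # None: mapping not yet defined
```

Even a table this simple is useful early on: the `None` results show exactly where the logical model has not yet been mapped to a physical system.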

Most importantly, an end-to-end RBAC model may take some time to achieve. It is important that you fully understand what degree of granularity and control you want to reach before embarking on a project of this nature. My personal recommendation is that organisations take a light-touch approach initially: develop the logical model, implement some controls and map these to the physical systems that contain built-in RBAC roles. The more advanced systems that require in-depth analysis to create "roles" can be left until you have a more mature model. As this topic primarily revolves around Cloud systems, it is good news that the majority of the major providers operate good standards with regards to administrative and end-user roles, which you can easily map your logical model against.


Servicing in a Cloud World

To kick things off on this rather derelict-looking blog ;-), I wanted to start with a topic that I've been discussing with several customers recently, namely 'Servicing in a Cloud World'. Why? Because it's super important and requires a step change in the way IT organisations react to change, and in how they communicate and engage with their end-users about it.

Widespread adoption of Cloud technologies is almost old news now; organisations have been, and currently are, actively migrating services to a variety of cloud models. This includes anything from IaaS to PaaS to SaaS, and they are therefore handing responsibilities over to the provider on a sliding scale. The most extreme end of this scale is any SaaS-based solution, as the provider not only manages/administers the solution for you, but also owns the roadmap, which includes both feature additions and deprecations.

The following diagram taken from the “Change Management for Office 365 clients” guide by Microsoft, illustrates the difference between a traditional release model vs a servicing release model:


In a traditional release model with infrastructure/platform or services managed and administered by the organisation, typically releases happen after several months of planning, developing and testing. The actual “release” is then usually governed through change/release management processes which would generally involve some form of impact assessment as well as a mechanism by which to alert the target user base that the release is coming. This may then be supported by engagement through training / user guides, etc.


In a servicing release model where the infrastructure/platform or service is managed and administered by the provider, releases typically happen much faster and much more aggressively. This is due to the provider being incentivised (generally through a PAYG subscription approach) to innovate on the platform to retain existing customers or attract new ones. As illustrated in the figure, this means each stage of the release lifecycle is generally much smaller. This model is advantageous for organisations as it means they can leverage capability much sooner, rather than having to invest in the planning, development and testing themselves. A well-understood benefit of Cloud, right?!

Onto the point of this post: organisations typically have well-defined and robust change and release management processes, grown through many years of managing and delivering services out to their userbase (as discussed earlier in reference to the traditional release model). They are experts at managing changes and releases that are under their own control. However, it is crucial that organisations adapt these processes in reaction to the "servicing world". This includes gaining a full understanding of the provider's roadmap in the following areas:

  • Minor / Major Changes
  • Feature additions
  • Feature modifications
  • Feature deprecations
  • New product releases (in the same portfolio)
  • Servicing (which can include downtime)
  • Maintenance
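One practical way to keep on top of that roadmap is to triage incoming provider notifications into the categories above so change management can route them. The sketch below is a minimal, assumption-laden example: the keywords and category strings are illustrative, not derived from any provider's notification schema.

```python
# Sketch: triage provider roadmap notifications into change-management
# categories by keyword. Keywords and categories are illustrative only.
CATEGORY_KEYWORDS = {
    "deprecat": "Feature deprecation",
    "retir": "Feature deprecation",
    "new feature": "Feature addition",
    "updated feature": "Feature modification",
    "maintenance": "Maintenance",
    "downtime": "Servicing",
}

def classify(notification_title):
    """Map a notification title to a change category, defaulting to a
    generic minor/major change when no keyword matches."""
    title = notification_title.lower()
    for keyword, category in CATEGORY_KEYWORDS.items():
        if keyword in title:
            return category
    return "Minor / Major change"

print(classify("We're retiring the classic admin centre"))  # Feature deprecation
print(classify("New feature: shared channels"))             # Feature addition
```

In a real process you would feed this from the provider's notification feed and route each category to the right team, but even crude keyword triage beats reading every message by hand.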

As noted earlier, these releases are typically much more frequent than before, and will require roles and responsibilities across the IT org to adapt, making sure that they not only focus on the actual release but also understand how it will impact end users, who will likely require support as the frequency of change and the adoption of new features/technologies will be unfamiliar.

To put some numbers behind this, the Office 365 Message Centre averages about 12-15 "notifications" per week (note, this changes frequently!), anything from a new product, to a feature change, to a service announcement. Ultimately, as more services move to Cloud, the roles and responsibilities within IT need to adapt accordingly as IT professionals work to support their organisations operating in a cloud servicing world.