Ditch Windows PCs for the Mac revolution…

A couple of articles caught my attention earlier this week (AppleInsider, Neowin) regarding Walmart and their plans to ditch Windows PCs in favour of Macs to “save on employee PC costs”. Whilst it is important to point out that Walmart have since refuted the claim, stating it had been misinterpreted, the underlying sentiment still applies, and in my opinion it is unfair for the reasons I’ll go into below.

Let’s take a look at what was originally quoted:

“We looked at TCO [total cost of ownership] for our technology,” Leacy said. “The cost of deploying and securing [a Mac] at this point is a lot cheaper than supporting a Windows box — it just makes good business sense.”

Source: https://www.cnet.com/news/walmart-plans-to-add-employee-macs-to-reduce-costs/

Whilst the articles do not go into detail about how the TCO was calculated, the claim is clearly that all the indirect costs that go into managing a Windows box are higher than those for a Mac.

Likely costs are in the following areas:

  1. Sourcing and procuring the hardware / peripherals
  2. Initially configuring and deploying a device to desk
  3. Providing ongoing configuration, security and reporting
  4. The ongoing support required to ensure a user remains productive

My thoughts.

Windows devices have a long, organic history of badly configured Group Policy. Most (not all) organisations have decades of GPOs accumulated in their Windows environment, which greatly affect the performance and usability of a Windows machine.

The drivers behind these configurations have typically been security related, and over-zealousness has led to devices being locked down to the point where they are unusable and slow. This is in contrast to the Mac, which has had much less time in the enterprise and has therefore had less time to accumulate over-the-top configuration.

In addition, Windows operating system images have typically accumulated over time, and it has become hip to include as much corporate bloatware as possible, with oodles of bespoke configuration – as opposed to Macs, which are typically delivered fairly vanilla.

Finally, there are many, many variants of manufacturer and device type that the Windows platform can run on. This has led to a proliferation of devices within organisations, meaning multiple deployment methods, driver packages, firmware upgrades, etc., whereas with Mac the catalogue is obviously much more constrained.

It’s like saying –

“I’m never buying a Ford car again, because the one I bought in 1995 only does 15mpg and kicks out black smoke every time I pull off. It’s 2017 and I’m going to replace it with a Vauxhall, as that does significantly more than 15mpg…”

(Although with Mac that’s more likely to be a Ferrari!)

So what am I saying here? Well firstly, compare Apples with Apples. Windows does not have to be all about badly configured GPOs, bloated images, hundreds of driver sets (because you can’t control the hardware catalogue) and security tighter than Fort Knox. There are many new technologies available in the Windows space that can enhance your device strategy and give you all the benefits of Windows whilst improving usability and performance, such as:

  • Windows 10 AutoPilot
  • Microsoft Intune
  • Azure Active Directory
  • Azure AD Premium
  • Enterprise Mobility + Security
  • Office 365

You don’t have to configure things the way you’ve always configured them. Take a look at the new technologies Microsoft has made available. Yes, they won’t fit all use cases, but they should form part of a new strategy. This will help you to compare fairly against Mac, and I’d bet my last dollar you can come out favourably in a Mac TCO battle.

…And in any case, even if you move to Mac, you’ll still have to fork out for Windows licensing, as everybody I know uses Parallels to run it as a VM anyway.

Recap of key Azure features from Ignite Part 1

I started writing a post about some of the Azure features I found interesting from the Ignite event, but put it on hold when I decided to present on the topic at the Microsoft Cloud User Group (MSCUG) in Manchester last week instead. Now that’s done, I thought it’d be good to summarise the presentation!

This post will be split into two parts to avoid the article being too lengthy…

Azure Top 10 Ignite Features

First up…
10. Virtual Network Service Endpoints (Preview)

Virtual Network Service Endpoints is a new feature to address situations whereby customers would prefer to access resources (Azure SQL DBs and Storage Accounts in the preview) privately over their virtual network as opposed to accessing them using the public URI.

Typically, when you create a resource in Azure it gets a public-facing endpoint; this is the case with storage accounts and Azure SQL. When you connect to these services you do so using this public endpoint, which is a concern for some customers who have compliance and regulatory requirements, or who simply want to optimise the route the traffic takes.

Configuring VNSE is fairly simple – it’s enabled on a subnet of the virtual network first and foremost – and then when you configure the resource you select the VNet/subnet you would like to allow access from. One point worth noting: the resource keeps its public URI, but its firewall can be restricted so that it is only reachable from that VNet, with the traffic travelling over the Azure backbone rather than the public internet.
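As a rough sketch of that flow using the AzureRM cmdlets of the time (all resource names here are illustrative, not from the announcement):

```powershell
# Enable the Microsoft.Sql service endpoint on an existing subnet
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "rg-demo" -Name "vnet-demo"
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet-apps" `
    -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Sql"
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

# Allow that subnet through the Azure SQL server firewall
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet-apps"
New-AzureRmSqlServerVirtualNetworkRule -ResourceGroupName "rg-demo" -ServerName "sql-demo" `
    -VirtualNetworkRuleName "allow-subnet-apps" -VirtualNetworkSubnetId $subnet.Id
```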

This feature is currently only available as a preview in the US and Australian regions… I’d be interested to know when it will be publicly launched and rolled out across regions, as it looks to be a great enterprise feature!

9. Azure File Sync (Preview)

Azure File Sync is a new tool that complements Azure Files. Azure Files has been around for some time and essentially provides the capability to create an SMB 3.0 file share in Azure, running on Azure Storage. This is a great solution; however, it can suffer performance problems when users on-premises, or connecting via the internet, try to access large files, due to latency and bandwidth constraints.
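For context, a minimal sketch of standing up an Azure Files share with the AzureRM-era cmdlets (the account and share names are purely illustrative):

```powershell
# Create a storage account and an SMB 3.0 file share on it
New-AzureRmStorageAccount -ResourceGroupName "rg-demo" -Name "stfilesdemo01" `
    -Location "westeurope" -SkuName Standard_LRS

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName "rg-demo" -Name "stfilesdemo01")[0].Value
$ctx = New-AzureStorageContext -StorageAccountName "stfilesdemo01" -StorageAccountKey $key
New-AzureStorageShare -Name "teamshare" -Context $ctx

# Clients then map the share over SMB 3.0 (outbound TCP 445 must be permitted), e.g.:
# net use Z: \\stfilesdemo01.file.core.windows.net\teamshare /u:AZURE\stfilesdemo01 <storage-key>
```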

[Image: Azure File Sync sync group]

Step up Azure File Sync, currently in preview. Azure File Sync aims to address the performance concerns noted above by allowing you to synchronise files hosted in Azure to a local file server you host on-premises. This sounds fairly trivial, and perhaps unimpressive – surely the whole point of an Azure file share is to host the files in Azure? Why duplicate the storage? Well, this is where Azure File Sync impresses: it can tier the files, holding only the most frequently accessed files on-premises whilst still providing the capability to view all the other files through cloud recall.

More details on this feature can be found here… https://azure.microsoft.com/en-gb/blog/announcing-the-public-preview-for-azure-file-sync/?cdn=disable

8. Cost Management and Billing

This is a massive announcement, in my opinion, and if I’d ordered my top ten correctly it would be nearer to number one! Customer concerns over the last 12-18 months have primarily been around controlling, understanding and forecasting cost across their cloud platforms. Partners have typically innovated in this space, and a number of third-party solutions have come to market, e.g. Cloud Cruiser, which can perform this functionality across multiple public cloud vendors (e.g. AWS, Azure and Google).

In response to these customer concerns (in my opinion), and to increase the feature set on Azure, Microsoft acquired Cloudyn to help organisations manage their cloud spend. It provides tools to monitor, allocate and optimise cloud costs so you can further your cloud investments with greater confidence.

The tool is currently free for Azure customers and can be accessed directly from the Azure Portal, under Cost Management in the Billing section. I’m looking forward to talking to customers about this to help remove a potential (and simple to solve) barrier to cloud usage.

[Screenshot: Cloudyn Cost Management and Billing]

7. Azure Availability Zones (Preview)

This feature is intended to provide parity with other vendors, such as AWS, by allowing organisations to select a specific “zone” within a region to deploy their resources to. Currently, when deploying resources in Azure, the only choice you have is regional; for example, when deploying a virtual machine you choose “North Europe” or “UK South”. This means that if you want to plan DR/BCP for a specific application, you typically need to plan it cross-region, which brings key considerations around latency and bandwidth.

This feature allows you to stipulate a specific “zone” when deploying a supported resource. Supported resources include virtual machines, scale sets, disks and load balancers. When you deploy one of these resources you choose an “instance”, identified by a number; the instance corresponds to a zone. If you then deploy a secondary resource and select a different zone, it will land in a different datacentre. Generally the round-trip time between such datacentres is very low (part of the design considerations Microsoft make when designing their regions). This allows you to plan true DR for your applications without having to worry about regional latency.
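As a hedged sketch of how the preview surfaces this in PowerShell – the -Zone parameter on the AzureRM cmdlets (resource names illustrative):

```powershell
# Pin a Standard-SKU public IP to zone 1 of the region
New-AzureRmPublicIpAddress -ResourceGroupName "rg-demo" -Name "pip-zonal" `
    -Location "westeurope" -AllocationMethod Static -Sku Standard -Zone 1

# The VM configuration object takes the same zone number; the rest of the
# VM build (NIC, image, OS profile) is omitted for brevity
$vmConfig = New-AzureRmVMConfig -VMName "vm-zonal-01" -VMSize "Standard_DS2_v2" -Zone 1
```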

[Image: Availability Zone visual representation]

Source: https://azure.microsoft.com/en-gb/updates/azure-availability-zones/

This is a great feature and is currently in preview in a select number of locations: US East 2 and West Europe. For a region to qualify for Availability Zones, it must have three or more localised datacentres. For more information about this feature, please look here.

… and finally for Part 1:

6. Azure Gateway – 6x faster!

This was a raw performance improvement, increasing the potential throughput of an Azure VPN gateway by up to 6x! The gateways now come in four flavours:

  • Basic – suitable for test or development workloads, supporting up to 100Mbps with a 99.9% (three nines) SLA
  • VpnGw1 – suitable for production workloads, with speeds up to 650Mbps and a 99.95% SLA
  • VpnGw2 – suitable for production workloads, with speeds up to 1Gbps and a 99.95% SLA
  • VpnGw3 – suitable for production workloads, with speeds up to 1.25Gbps and a 99.95% SLA!
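To illustrate, a sketch of creating a gateway on one of the new SKUs, and resizing an existing one, using the AzureRM cmdlets (names are illustrative, and $subnet/$pip are assumed to reference an existing GatewaySubnet and public IP):

```powershell
# Build the gateway IP configuration from an existing GatewaySubnet and public IP
$ipCfg = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwipcfg" `
    -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

# Create a route-based VPN gateway on the VpnGw1 SKU
New-AzureRmVirtualNetworkGateway -ResourceGroupName "rg-net" -Name "vgw-demo" `
    -Location "westeurope" -IpConfigurations $ipCfg `
    -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1

# Existing gateways can be moved between VpnGw1/2/3 without rebuilding
$gw = Get-AzureRmVirtualNetworkGateway -ResourceGroupName "rg-net" -Name "vgw-demo"
Resize-AzureRmVirtualNetworkGateway -VirtualNetworkGateway $gw -GatewaySku VpnGw2
```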

This is important because, for some organisations, an ExpressRoute connection is neither the best fit nor cost-feasible; further investment in the standard gateways therefore opens up more performance, allowing even more organisations to fully leverage the power of the Azure cloud!

And that’s it for this post – I’ll summarise the remaining features I found interesting shortly in Part 2.

Azure Compute goes supernova…

Yesterday saw two key announcements on the Azure platform: the launch of the new Fv2 VM series (officially the fastest on Azure), as well as Microsoft sharing details of a partnership with Cray to provide supercomputing capabilities in Azure.


The Fv2 virtual machine addresses the need for large-scale computation and runs on the fastest Intel Xeon processors (codenamed Skylake). The specification comes in seven sizes, from 2 vCPUs/4GB through to 72 vCPUs and 144GB! The CPUs are hyper-threaded and operate at a 2.7GHz base frequency with a turbo frequency of 3.7GHz. These machine types are generally targeted at organisations performing large-scale analysis that requires massive compute power. For more details see: https://azure.microsoft.com/en-gb/blog/fv2-vms-are-now-available-the-fastest-vms-on-azure/ Note: at this time the Fv2 is only available in specific regions (West US 2, West Europe and East US)
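If you want to experiment, here’s a quick sketch of checking which Fv2 sizes a supported region offers and selecting one for a VM configuration (AzureRM cmdlets; names illustrative):

```powershell
# List the Fv2 sizes available in West Europe
Get-AzureRmVMSize -Location "westeurope" |
    Where-Object { $_.Name -like "Standard_F*s_v2" }

# Start a VM configuration on the largest size; the remaining VM build
# steps (image, NIC, OS profile) are omitted for brevity
$vmConfig = New-AzureRmVMConfig -VMName "vm-fv2-demo" -VMSize "Standard_F72s_v2"
```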

The more eye-catching announcement was the partnership with Cray to bring supercomputing capabilities to Azure. A supercomputer is defined as:

A supercomputer is a computer with a high level of computing performance compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). As of 2017, there are supercomputers which can perform up to nearly a hundred quadrillion FLOPS, measured in PFLOPS. The majority of supercomputers today run Linux-based operating systems. (Source: https://en.wikipedia.org/wiki/History_of_supercomputing)

Supercomputers have always felt a bit mythical to me, as they have always been out of reach of the general public and the vast majority of organisations. The raw speed of the world’s fastest supercomputers (with China currently leading the way with the Sunway TaihuLight, operating at an insane 93 PFLOPS!) will remain mythical in a sense; however, Microsoft is going a long way to bringing some of these capabilities to the Microsoft Azure platform, through its exclusive partnership with Cray.

This partnership is intended to provide access to supercomputing capabilities in Azure for key challenges such as climate modelling and scientific research. It will allow customers to leverage these capabilities just as they would any other Azure resource, helping to transform their businesses by harnessing the power of the cloud. Microsoft are hoping this will lead to significant breakthroughs for many organisations, as it opens the door to supercomputing capabilities that would previously have been out of reach.

To read more, please refer to the Microsoft Announcement: https://azure.microsoft.com/en-gb/blog/cray-supercomputers-are-coming-to-azure/

On a closing note, the other key advantage – and one of the key tenets of any cloud computing platform – is that these resources are available on a consumption basis: you can use them for exactly as long as you need them, without any up-front capital investment or the time and effort required to build the capability on-premises! This is one of many compelling reasons to build a platform like Microsoft Azure into your overall cloud or IT strategy moving forwards.

Azure ‘Just in time’ VM access

When I first saw “Just in time” (JIT) as a new preview announcement for Azure VMs, my mind was cast back to one of my first roles in IT, working for an automotive firm that supplied parts to car manufacturers. JIT was (or is!) a supply chain strategy whereby parts or products arrive at a specific point only when required. Although I wasn’t directly involved, it used to fill me with dread, as some of the penalties for holding up the supply chain were eye-watering. Anyway, I digress…

In relation to the Azure announcement, JIT is a security feature currently in preview, whereby access is granted to a virtual machine only when required. It is simple in its architecture, in that it uses ACLs/NSGs to lock down access to specific inbound ports until a request by an authorised user is made. This drastically reduces the attack surface of your virtual machines and mitigates potential attacks based on basic port scanning and brute force.
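To make the mechanics concrete, here’s a sketch of what JIT effectively automates for you: a temporary NSG allow rule scoped to the requester’s IP, removed again when the access window expires (AzureRM cmdlets; the names and IP are illustrative):

```powershell
# Fetch the NSG protecting the VM and add a short-lived RDP allow rule
$nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName "rg-demo" -Name "nsg-vm"
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "jit-allow-rdp" `
    -Access Allow -Direction Inbound -Priority 100 -Protocol Tcp `
    -SourceAddressPrefix "203.0.113.10" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 3389
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg

# JIT removes the rule automatically once the requested window expires,
# returning the port to its default-deny state
```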

Licensing-wise, JIT is part of the standard tier of Azure Security Center, which comes in two tiers:

  1. Free tier – this is enabled by default on your subscription and provides access to key security recommendations to help protect your infrastructure
  2. Standard Tier – extends the cloud and on-premises capabilities of Azure Security Center. Additional features are noted below:
    • Hybrid Security
    • Advanced Threat Detection
    • JIT VM access
    • Adaptive application controls

The following link provides pricing details: https://docs.microsoft.com/en-us/azure/security-center/security-center-pricing (Note: the standard tier is free for the first 60 days to allow you to evaluate the components!)

Over the next few weeks I’ll aim to create blog posts for each of these areas to provide a rounded picture of what the standard security tier can provide for your organisation.

The preview of JIT can be found within the Azure Security Center pane, on the home page, as seen in the screenshot below. As it is a preview, you need to activate it:

[Screenshot: Azure Security Center home page]

An overview of some of the alert types and an activation link can be found on the next page:

[Screenshot: alert types and activation link]

If you click through to the following page, you will receive information around pricing and have the ability to “on-board” onto the standard tier:

[Screenshot: pricing information and standard tier on-boarding]

Once you click through, you need to apply the plan to your resources, select upgrade and then save the applicable settings:

[Screenshot: applying the standard tier plan]

If you navigate back to the Security Center home page, you will now see all of the new features activated within your subscription:

[Screenshot: Security Center home page with new features activated]

Enabling JIT is very simple. Click within the JIT context in the screenshot above and you’ll be presented with the JIT pane. From here, click the “Recommended” option and you should see any VMs in your subscription not currently enabled for JIT:

[Screenshot: JIT pane showing recommended VMs]

From here, simply click on the VM on the left-hand side, review the current state/severity and then click “Enable JIT on x VMs”… A pane will appear recommending key ports that should be locked down. It is important to note that from here you can add any additional ports, and also specify key things like the source IPs that will be allowed access upon request, and the duration for which access will be granted.

[Screenshots: enabling JIT and configuring the recommended ports]

Following this, your VM will go into a “resolved” state and you can head back to the main JIT screen, navigate to the “Configured” tab, and hit request access…

[Screenshot: JIT “Configured” tab with request access]

The request access screen will allow you to permit access to specific ports from either “MyIP” (which detects the current IP you are accessing from) or a specific IP range. You can also specify a time frame, up to the maximum limit configured in the earlier step.

Note: the ability to request access will depend upon your user role and any configured RBAC settings in the Resource Group or Virtual Machine.

[Screenshot: request access options]

In summary? I think JIT is a great feature and another sign of the investment Microsoft are making in security on the Azure platform. It is a feature I will be talking about with clients, and in conjunction with the other features of Security Center I think it is an investment many organisations will look to make.

Viewing and increasing your Azure subscription limits and quotas

Azure contains a number of default subscription limits that should be considered as part of any Azure design phase. The majority of these limits are soft, and you can raise a support case to have them increased. To avoid delays to any project, I always recommend this is one of the first areas of consultation, as there can be a lag between the request and it being actioned. The limits are there for a number of reasons, primarily to help Microsoft control and capacity-plan the environment whilst also ensuring usage is limited to protect the platform.

There is a very handy view in the Azure Portal, which you can reach by going to the new, preview “Cost Management + Billing” blade. From here, go to “Subscriptions” and click the subscription you would like to view. You get some great statistics, e.g. spending rate, credit remaining, forecast, etc., all of which hold great value in helping you plan and control your costs. If you navigate to “Usage and quotas” under the “Settings” menu you will see the following view:

[Screenshot: Usage and quotas view]

There are other ways to view your usage quotas via PowerShell; however, these are individual commands per resource type. It wouldn’t take much to create a quick script you could run regularly to expose this, although the view above deals with it nicely, I think. One example command is “Get-AzureRmVMUsage -Location <location>”.
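As an example, such a script could look something like this (note Get-AzureRmNetworkUsage requires a reasonably recent AzureRM module; the region is illustrative):

```powershell
# Dump current usage vs. limit for compute and network resources in one region
$location = "westeurope"

Get-AzureRmVMUsage -Location $location |
    Select-Object @{n = 'Resource'; e = { $_.Name.LocalizedValue }}, CurrentValue, Limit

Get-AzureRmNetworkUsage -Location $location |
    Select-Object @{n = 'Resource'; e = { $_.Name.LocalizedValue }}, CurrentValue, Limit
```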

[Screenshot: Get-AzureRmVMUsage output]

The bottom part of the page shows your current usage against your limits. It is now also very easy to “Request an Increase” – if you click this link you will be taken through a guided wizard to increase your limits.

[Screenshot: Request an Increase wizard]

Bear in mind this is resource- and type-specific, e.g. you will be asked to specify the deployment model you are using (Classic/ARM), the impact on your business (so a priority can be set), the location you need the increase in, and the SKU family.

[Screenshot: quota increase request details]

The final screen asks you to supply contact details and who should be notified throughout the ticket.

[Screenshot: contact details for the support ticket]

… and there you go – limits raised! Evidently this can take some time, and you are asked to be pretty specific about what you are requesting, so I highly recommend you take time to plan your deployments correctly to avoid any frustration.

Azure Portal – Ability to Quick Delete Resources

Very minor, and very finicky, but one thing that’s always frustrated me in the Azure Portal is the inability to clean up my test subscriptions without having to resort to PowerShell. Occasionally I’ll spend some time spinning up new services directly in the portal rather than using PowerShell, and when I’ve finished I like to clear things down to avoid costs, which usually involves just deleting any resource groups I’ve spun up.

Now, the PowerShell to do this is pretty straightforward… you can just use Get-AzureRmResourceGroup and Remove-AzureRmResourceGroup with a -Force flag, plus a loop so you don’t have to delete each group by hand – so why complain? Well, sometimes launching a script or PowerShell is just too much friction, which invariably means I’ll leave stuff running and then incur costs.
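For reference, the clean-up I’m describing looks something like this – destructive, so the “test-*” name filter (my own convention, purely illustrative) is a sensible safety net:

```powershell
# Remove every resource group whose name starts with "test-" (no confirmation prompts!)
Get-AzureRmResourceGroup |
    Where-Object { $_.ResourceGroupName -like "test-*" } |
    ForEach-Object { Remove-AzureRmResourceGroup -Name $_.ResourceGroupName -Force }
```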

I’m not sure when this feature change came about (last week during Ignite? earlier? or has it been there all along and I’ve missed it?!) – however, you can now select all your resources (rather than deleting them individually), type the word “delete”, and the portal will attempt to remove them. This doesn’t always work, as sometimes nested resources will block the deletion; however, it’s a quick way to tidy up if you don’t want to resort to PowerShell.

[Screenshot: multi-select delete in the Azure Portal]