Azure is a fast-moving platform. It is almost a full-time job just to keep up with the updates :-)

So at VSLive! I presented a session on *my* top ten announcements from the last ten ("T-10") months (June 2014 – March 2015). I hope to update this list each quarter.

So how did I come up with this list?

Firstly, I focused on services that are GA (Generally Available), i.e. the services that come with a financially backed SLA. The only exceptions are “meta” features such as Docker and Resource Manager – which may not be GA themselves but produce an end result that is GA.

Secondly, I picked services that I felt are architecturally significant – services that are both key enablers for Azure-based applications and cloud-native. In other words, these services have themselves come together as a result of capabilities enabled by the cloud, including elasticity, resilience, and composability. Please allow me to provide a couple of examples to explain my definition of “architecturally significant”:

  • Azure Automation is an excellent and beneficial service, but I picked Resource Manager as one of the IaaS announcements on my list instead. As you will see shortly, Azure Resource Manager is the definitive resource provisioning and management service going forward.

 

  • Azure Mobile Services is one of my favorite services and I have used it extensively. However, it GAed outside of the T-10 month period. In the last ten months the key enhancement to this service has been .NET support, which is very useful for .NET developers but not significant enough architecturally to make the list.

Thirdly, this list is based on *just* my personal opinion. That said, I would love to hear your thoughts – which features do you find most appealing and why? Please take a moment to leave a comment below. Thanks!

I broke down the list below into four categories – infrastructure, data and application tiers, and the tooling that goes with all the tiers. I find the aforementioned categories to be the most logical way to organize the ever-increasing set of Azure services.

 

Infrastructure Tier

#1 Bigger and Better Compute, Storage, and Networking

G-Series machines offer more memory (448 GB) and more local solid-state drive (SSD) storage (up to 6.59 TB) than any previous Azure virtual machine size. Additionally, Premium Storage allows you to attach up to 32 TB of persistent storage per virtual machine, with more than 50,000 IOPS per virtual machine at less than one millisecond latency for read operations.

Key enhancements to VNET now allow multiple site-to-site connections, enabling you to connect multiple on-premises locations to an Azure datacenter. Additionally, you can now connect two VNETs even if they are running in two different regions.

Finally, ExpressRoute allows you to create private (i.e. not routed over the public internet), high-throughput connections between Azure datacenters and your existing on-premises infrastructure. Furthermore, ExpressRoute connections offer a 99.9% SLA on connection uptime and up to 10 Gbps of bandwidth.

The above announcements will help remove most concerns related to cloud performance and hybrid setup.

#2 Better Support for Automation and Management

Azure Resource Manager *appears* to be the definitive approach for provisioning and managing resources in Azure. Azure resources such as Websites, Databases, and VMs can be organized into groups (called resource groups). Resource groups are units of scale, management, and access control (RBAC). Resource group members can talk to each other. Additionally, billing, monitoring, and quota information rolls up to a resource group.

Azure Resource Manager supports both a declarative model (JSON templates) and an imperative model (PowerShell). Azure Resource Group “templates” (definitions of the resources that make up a resource group) are idempotent, and the resources they describe are “tightly” coupled in a good way: deleting a Resource Group deletes all of its constituent resources as well.
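
To make the declarative model concrete, here is a minimal sketch of the general shape of a Resource Group template, built as a Python dictionary and serialized to JSON. The resource type, API version, and parameter name shown are illustrative placeholders rather than a complete, deployable template.

```python
import json

# A minimal sketch of the shape of an Azure Resource Manager template.
# The resource type, name, and apiVersion below are illustrative placeholders.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "siteName": {"type": "string"}
    },
    "resources": [
        {
            "type": "Microsoft.Web/sites",           # an Azure Website
            "apiVersion": "2015-08-01",               # illustrative API version
            "name": "[parameters('siteName')]",
            "location": "[resourceGroup().location]",
            "properties": {}
        }
    ]
}

print(json.dumps(template, indent=2))
```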

Based on the above, it is clear that Azure Resource Manager is a key underpinning for DevOps on Azure.

#3 Support for Containers

The Azure team has announced support for Docker, a popular technology that allows an application and its runtime dependencies to be packaged (aka “Dockerized”) so that the resulting container can run on any host with a compatible Docker engine. Currently, these Docker containers are Linux-based, but Windows-based containers are expected to be available in the future.

The Azure team has announced that Azure will support the Docker orchestration APIs. This means that you can use the Docker tooling (Docker Machine, Compose, and Swarm) to build custom orchestrations that create and manage Docker containers in Azure. The Azure team has also announced that organizations will be able to host private Docker repositories (collections of Dockerized application images) within Azure Storage.
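
To give a feel for what “Dockerized” means in practice, here is a minimal sketch that drives the standard Docker CLI from Python. The image name is a hypothetical placeholder, and the sketch assumes Docker is installed and a Dockerfile describing the application sits in the current directory.

```python
import subprocess

# Hypothetical image name; assumes the Docker CLI is installed and a
# Dockerfile describing the app and its runtime lives in the current folder.
IMAGE = "myregistry/myapp:1.0"

# Package the application and its dependencies into an image ("Dockerize" it).
subprocess.check_call(["docker", "build", "-t", IMAGE, "."])

# Run the container; the same image runs unchanged on any Docker host,
# including a Docker-enabled VM in Azure.
subprocess.check_call(["docker", "run", "-d", "-p", "80:80", IMAGE])
```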

Docker support in Azure has the potential to facilitate application migration to and from the cloud, improve the reliability of applications by packaging their dependencies with them, and increase the cost-effectiveness of Azure, since multiple Dockerized applications can be hosted within a single VM.

Data Tier

#4 Big Data Enhancements

HDInsight, Azure’s big-data-as-a-service offering, has grown significantly in recent months. This includes support for Apache Storm, HBase, Apache Mahout, and Hadoop 2.6. In addition, from an infrastructure perspective, you can now scale an HDInsight cluster up without deleting and recreating it, and you can choose from additional VM sizes. There is built-in integration between Azure Websites logs and HDInsight. Finally, HDInsight support for VNET is also GA.

The growing importance of big data cannot be overstated. HDInsight has added some significant capabilities that make it easier to set up, to integrate with services such as Azure Websites, and to process real-time events using Apache Storm.

#5 SQL Database Service Tiers and Performance Plans

SQL Database has added three service tiers (Basic, Standard, and Premium). These tiers let you choose from a range of options – size limits, self-service restore, DR options, and performance objectives. The three tiers map to seven performance levels. Think of levels as ways to quantify performance in terms of throughput (Database Throughput Units, or DTUs), benchmark transaction rate, and consistency of performance. For example, the Standard/S0 level offers 10 DTUs, a maximum database size of 250 GB, 521 transactions per minute, and “better” consistency. Similarly, the Premium/P3 level offers 800 DTUs, a maximum database size of 500 GB, 735 transactions per second, and the “best” consistency.
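
As a back-of-the-envelope illustration of how the levels quantify throughput, the snippet below encodes just the two levels cited above and picks the smallest one that meets a required DTU and size target. The figures are the ones quoted in this post, not a full pricing table, and the helper is purely illustrative.

```python
# Only the two levels cited above; not a complete list of the seven levels.
levels = [
    {"name": "Standard/S0", "dtu": 10,  "max_size_gb": 250},
    {"name": "Premium/P3",  "dtu": 800, "max_size_gb": 500},
]

def pick_level(required_dtu, required_size_gb):
    """Return the first (smallest) level that satisfies throughput and size needs."""
    for level in sorted(levels, key=lambda l: l["dtu"]):
        if level["dtu"] >= required_dtu and level["max_size_gb"] >= required_size_gb:
            return level["name"]
    return None

print(pick_level(required_dtu=50, required_size_gb=100))  # -> Premium/P3
```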

If you have looked at SQL Database in the past and liked its ease of use but weren’t so sure about its performance and throughput, then the SQL Database service tiers and performance levels are for you.

#6 Machine Learning for the Masses

There is a lot written about machine learning these days. Personally, I like Arthur Samuel’s definition below:

Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed. – Arthur Samuel (1959)

Arthur Samuel used machine learning to build a checkers program that over time became a better checkers player than Samuel himself. While machine learning concepts such as deep learning and neural networks have been around for a long time, it is the advent of the cloud that has finally made them practical to implement and put them within reach of the masses.

Azure Machine Learning takes this a step further. It makes it easy to create machine learning models using a browser-based UI, where you can select from a range of algorithms. Once you have created a machine learning model, it is also easy to operationalize it by publishing it as a web service.
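
Once a model is published, consuming it boils down to an authenticated HTTP call. Below is a hedged sketch using Python’s requests library; the endpoint URL, API key, and input column names are placeholders, and the exact request/response schema depends on the experiment you publish.

```python
import requests

# Placeholders: copy the actual URL and API key from your published web service.
ENDPOINT = "https://<region>.services.azureml.net/workspaces/<id>/services/<id>/execute"
API_KEY = "<your-api-key>"

# The input schema (column names and values) depends entirely on your experiment.
payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["feature1", "feature2"],
            "Values": [[1.0, 2.0]],
        }
    },
    "GlobalParameters": {},
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": "Bearer " + API_KEY},
)
response.raise_for_status()
print(response.json())  # scored results returned by the model
```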

If you need predictive analytics that tap into best-in-class algorithms already used in Xbox, Bing, and other Microsoft products, Azure Machine Learning is the place to be.

Application Tier

#7 Azure Websites – The Go-To Execution Host

If you are building a PaaS-based application on Azure, chances are that you are leveraging Azure Websites. Of course, Azure Websites has been GA for a long time. But this heavily used service (300,000+ instances and counting) continues to add features at a rapid clip. The last ten months have been no different. The Azure Websites team GAed capabilities including:

i) WebJobs – the ability to run batch jobs within your Website (a minimal sketch appears after this list)

ii) Migration Assistant – A tool that analyzes your on-premises IIS application and recommends steps for moving it to Azure Websites

iii) VNET Integration – the ability to talk to resources such as Azure-hosted or even on-premises VMs from within an Azure Website

iv) CDN Support – Integrate Websites content with Azure Content Delivery Network (CDN)

v) Deployment Slots – multiple slots (production + 4) give you the ability to easily swap between versions of your application in production
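
To give a flavor of WebJobs (item i above), a job can be as simple as a small script deployed alongside the Website. The sketch below is a hypothetical batch job; I am assuming a Python entry point purely for illustration, since WebJobs also accept .cmd, .bat, .exe, and other executables.

```python
# run.py - a hypothetical batch job deployed as an Azure WebJob.
# The work it does (purging stale records) is purely illustrative.
import datetime

def purge_stale_records():
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=30)
    # In a real job this would talk to your data store; here we just log.
    print("Purging records older than", cutoff.isoformat())

if __name__ == "__main__":
    purge_stale_records()
```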

Whether you are building a web application, hosting a REST endpoint, or hosting a pre-packaged app, Azure Websites is the place to start.

#8 Ensure a Successful API Program

Ensuring a successful API program is critical for any startup, ISV, or large enterprise that is increasingly looking to modernize and unify silos of APIs spread across the organization. Azure API Management can help you develop a successful API program by providing out-of-the-box features such as API discoverability, self-service enrollment, API documentation, policies, analytics, and security.
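
From a consumer’s point of view, an API published through API Management is just an HTTP endpoint plus a subscription key. The sketch below shows that call shape with Python’s requests library; the gateway URL, path, and key are placeholders, and Ocp-Apim-Subscription-Key is the header API Management conventionally uses for subscription keys.

```python
import requests

# Placeholders: the gateway hostname, API path, and key come from your
# API Management instance and developer-portal subscription.
GATEWAY_URL = "https://<your-service>.azure-api.net/orders/v1/orders"
SUBSCRIPTION_KEY = "<subscription-key>"

response = requests.get(
    GATEWAY_URL,
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
)
response.raise_for_status()
print(response.json())
```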

APIs are considered the silent engines of growth for any modern enterprise. Azure API Management makes it easy to get started.

#9 Outsource Your Identity and Access Control Using Azure AD

Every application needs to securely manage identity and control access for its users. Beyond the core authentication and authorization features, each application also needs a number of related capabilities, including single sign-on, self-service governance, seamless API access, high availability, DR, and the ability to support a growing set of authentication protocols including SAML, OAuth, OpenID, WS-Federation, etc.

This is why it makes so much sense to “outsource” this aspect of the application to a service like Azure Active Directory (or Azure AD in short).
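
As one concrete example of what “outsourcing” identity looks like, a server-side application can obtain a token from Azure AD using the standard OAuth 2.0 client credentials flow and then call a protected API with it. The sketch below uses plain HTTP via requests; the tenant, client ID, secret, resource URI, and API path are placeholders from a hypothetical app registration.

```python
import requests

# Placeholders: values come from your Azure AD tenant and app registration.
TENANT = "<tenant>.onmicrosoft.com"
CLIENT_ID = "<application-client-id>"
CLIENT_SECRET = "<client-secret>"
RESOURCE = "https://<your-protected-api>/"

# Request a token using the OAuth 2.0 client credentials grant.
token_response = requests.post(
    "https://login.microsoftonline.com/{}/oauth2/token".format(TENANT),
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "resource": RESOURCE,
    },
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# Use the token as a bearer credential against the protected API (path is hypothetical).
api_response = requests.get(
    RESOURCE + "api/values",
    headers={"Authorization": "Bearer " + access_token},
)
print(api_response.status_code)
```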

Even though the premium edition has been available for a while, the Azure AD team has GAed some key capabilities in the last ten months, including:

Azure Active Directory Application Proxy – This feature allows an on-premises application to be published via Azure AD and made available to external users as a SaaS service.

Azure Active Directory Basic Tier – The Azure AD premium edition GAed in April 2014. It focused on enterprise-oriented features such as multi-factor authentication, self-service group management, and write-back of password resets to on-premises directories. In the last ten months the Azure AD team has GAed the basic edition. The basic edition comes at a lower price point and is designed for the “deskless” worker. It has limited self-service capabilities and cannot write password resets back to on-premises directories. Interestingly, the basic edition comes with the same 99.9% uptime SLA and no limit on the number of directory objects.

If you are looking to outsource identity and access control for your Azure and on-premises applications, Azure AD can come in very handy.

Tooling Enhancements

#10 Load Testing Made Easy

While there have been a number of Azure-related tooling enhancements in Visual Studio, the one tool that stands out is the Visual Studio Load Testing Service. Technically this capability GAed in April 2014, outside of our “T-10” month timeframe, but I decided to include it anyway given its significance. Cloud Load Testing (CLT) offers the following features:

i) It is easy to simulate varying load, whether it is 200 or 200,000 concurrent users (a toy illustration of the idea follows this list). Of course, you only pay for what you use.

ii) The same tests that we have used on-premises in the past also work in CLT. The only difference is that the test execution happens in the cloud.
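
The Load Testing Service itself is driven from Visual Studio, but to illustrate what “simulating concurrent users” boils down to, here is a deliberately tiny, self-contained sketch that issues requests from many threads. It is an illustration of the concept only, not how CLT is actually used, and the target URL and user count are placeholders.

```python
import concurrent.futures
import requests

# Toy illustration only: generate concurrent requests against a test URL.
# TARGET_URL and the user count are placeholders; never point this at a
# system you do not own.
TARGET_URL = "https://example.com/"
CONCURRENT_USERS = 50

def one_user(_):
    try:
        return requests.get(TARGET_URL, timeout=10).status_code
    except requests.RequestException:
        return None

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    statuses = list(pool.map(one_user, range(CONCURRENT_USERS)))

print("Successful responses:", sum(1 for s in statuses if s == 200))
```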

Load testing has never been easy – the hardware, software and setup costs were non-trivial. With CLT you can conduct load tests easily, early, and often in your development lifecycle.

Public and private peering with Azure ExpressRoute is a topic that has come up a lot in my conversations recently, so I thought I would capture some thoughts here:

What is peering?

According to Wikipedia contributors, in computer networking, peering is a voluntary interconnection of administratively separate Internet networks for the purpose of exchanging traffic between the users of each network. An agreement by two or more networks to peer is instantiated by a physical interconnection of the networks and an exchange of routing information through the Border Gateway Protocol (BGP).

In Azure, peering translates into a private, dedicated, high-throughput connection between Azure and an on-premises data center via ExpressRoute. Note that Azure also offers Virtual Network point-to-site and site-to-site connectivity options, but those rely on static or dynamic routing over VPN. In contrast, ExpressRoute is based on BGP routing. For a detailed comparison of these options with guidance on choosing between them, please refer to Ganesh Srinivasan’s blog post.

Furthermore, peering can be private or public. Public peering, as the name suggests, is a peering arrangement where the interchange between the participating networks happens over a public exchange point. Likewise, a private peering is a peering arrangement where the interchange between participating networks happens over a private exchange point.

So what does private / public peering mean in terms of Azure?

Public and Private Peering with Azure

As stated earlier, ExpressRoute allows you to create a dedicated circuit between your on-premises network and an Azure datacenter. As part of this dedicated circuit, you get two independent routing domains (shown in green and orange in the diagram below).

The “orange” link depicts private IP-based traffic between a customer’s network and the VNETs and VMs running in Azure. There is no NAT in the path. Since the exchange point is completely private, this link represents a private peering based connection.

The “green” link depicts traffic between a customer’s network and Azure services that have a public endpoint (such as Azure Storage). Since the exchange point in this instance is indeed public, this link represents a public peering based connection. Now, since the traffic originates from a private (on-premises) IP address, ExpressRoute will NAT the traffic before it delivers the packets to the public endpoint of a service such as Azure Storage (ExpressRoute uses a Microsoft address range for the NAT pool). This means customers don’t have to go through their internet edge (proxy, firewall, NAT) to reach public Azure services, and thus do *not* take up a chunk of their internet bandwidth to communicate with Azure.
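
A rough mental model for which routing domain a given destination falls into is simply whether the target address is private (RFC 1918) or public. The sketch below uses Python’s ipaddress module to make that classification; it is a conceptual illustration only, not actual ExpressRoute routing logic, and the sample addresses are arbitrary.

```python
import ipaddress

def peering_domain(destination_ip):
    """Classify a destination by address type as a rough mental model:
    private (RFC 1918) addresses ride the private peering domain (VNETs/VMs),
    public addresses ride the public peering domain (e.g. Azure Storage)."""
    addr = ipaddress.ip_address(destination_ip)
    return "private peering" if addr.is_private else "public peering"

print(peering_domain("10.0.1.4"))      # a VM inside a VNET -> private peering
print(peering_domain("191.238.0.10"))  # a public Azure endpoint -> public peering
```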

[Figure: ExpressRoute public and private peering]
Exceptions

Please note that not all Azure public services are accessible via ExpressRoute public peering. The following services are not supported over ExpressRoute public peering at the time of writing:

http://msdn.microsoft.com/en-us/library/azure/2db6ef11-aa86-4091-adbd-21882e136f65#BKMK_ExpressRouteAzureServices

For more information, please visit the ExpressRoute FAQ.

Extending Your On-Premises Network into Azure Securely

Recently, I sat down with hosts Carl Franklin and Richard Campbell of .NET Rocks! to chat about the architectural patterns of cloud development. If you’re not familiar with .NET Rocks!, it is a weekly online talk show for anyone interested in programming on the Microsoft .NET platform. During this discussion I talk about how the cloud influences application design, focusing on more asynchronous, scalable, and flexible messaging-oriented architectures. While the patterns could be applied to any cloud technology, Microsoft Azure is particularly well-suited to them, providing services that cover each pattern for optimal results. Click here to listen to “Cloud Patterns with Vishwas Lele.”
http://blog.appliedis.com/2014/06/17/cloud-patterns-with-vishwas-lele/

In this blog post, I discuss several highlights from Build 2014, Microsoft’s annual conference for software and web developers. As you might expect, this year was filled with new Azure announcements, many of which will influence and expand developers’ cloud computing options and help simplify and speed up delivery. Read on for more information about the new enhancements, services, and products, and to find out which ones I found most exciting: http://blog.appliedis.com/2014/04/14/build-2014-what-is-new-in-azure-and-what-does-it-mean-to-you/

Let’s face it, keeping up with the latest on Windows Azure is hard. Whether it is a new feature announcement, a white paper, a code sample or just another attempt at “cloud washing,” it is difficult to keep up with the latest, no matter how adept you are at mining the various social media channels. This is why we built the “intelligent” twitter bot (@AzureUpdates) as a weekend project. @AzureUpdates is designed to keep you up to date with all things #WindowsAzure, or #Azure, in and around the Twitterverse. Read on to learn more about how @AzureUpdates works: http://blog.appliedis.com/2014/02/10/introducing-our-intelligent-twitter-bot-azureupdates/

In this post, I discuss AIS’ Windows Azure Media Services Manager (WAMS Manager), a desktop-based application that makes it easy to upload, tag, encode, and publish your media assets. It is designed to bring the benefits of Windows Azure Media Services to end users (typically business users responsible for managing media files) without the need to write any code. In this post, I provide a quick overview of the background of the application, explain why such a tool is necessary and beneficial, describe the high-level architecture of the app, and provide a quick tutorial on how to get started with it.
http://blog.appliedis.com/2013/11/19/manage-azure-media-services-assets-with-wams-manager-preview/

This post is not intended to compare the “JavaScript with HTML” and “C# with XAML” styles of building Windows Store apps; that is a choice you must make based on your own skill set, reuse considerations, whether the functionality you are targeting for the app is already available as a web app, and so on. Rather, in this blog post, I describe my own reasoning behind my preference for building Windows Store apps using HTML. http://blog.appliedis.com/2013/01/23/why-i-prefer-to-build-my-windows-store-apps-in-html/
