The third #AzureGov meetup was held at the Microsoft Technology Center in Reston on April 27th. This meetup was again well attended and featured three great presentations relating to next-gen cloud applications for government and microservices.
Mehul Shah got the session started with an overview of the Cortana Intelligence Suite, including the latest announcements coming out of BUILD 2016. The talk started with a lap around the platform, and then focused in on some key areas where features are rolling out quickly – including the Cognitive Services APIs. The session included some good discussion from the group related to potential use cases, and the ability to adapt the services to any number of business verticals. More about the Cortana Intelligence Suite here, and Build 2016 here.
The second session focused on building microservices on Azure. Chandra Sekar led the discussion, beginning with an overview of the options you have for building microservices on the platform, including container technologies and the Azure Service Fabric. Chandra then went into detail and showed a great demo of a sample microservice built on the Azure Service Fabric. The demo explained how stateful microservices can be built using the Service Fabric, and demonstrated the resiliency of this model by walking through a simulated “failure” of the primary service node and recovery of the service – which occurred very quickly and maintained its running state. Cool stuff!
The final speaker was Keith McClellan, a Federal Programs Technical lead with Mesosphere. Keith began by talking about what Mesosphere has been up to, its maturity in the market, and the recent announcement of the DC/OS project. DC/OS is an exciting open source technology (spearheaded by Mesosphere) which offers an enterprise grade data-center scale operating system – meant to be a single location for running workloads of almost any kind. Keith walked through provisioning containers and other interesting services (including a SPARK data cluster for big data analysis) on the platform – and actually provisioned the entire stack on Azure infrastructure. I was impressed with the number of services already available to run on the platform today. More about DC/OS here.
If you haven’t already, join the DC Azure Government meetup group here, and join us for the next meeting. You can also opt to be notified about upcoming events or when group members post content.
The second #AzureGov meetup was held in Reston on March 15th. It was well attended (see the picture below of Todd presenting to the audience; standing next to Todd is Karina Homme – Principal Program Manager for AzureGov and co-organizer of the meetup).
This meetup was kicked off by Todd Neison from NOAA. Todd gave an excellent presentation on his group’s journey into #Azure, covering the decision to move to the cloud, the challenges in getting the necessary approvals, and the group’s focus on #PaaS. Todd’s perspective on adopting the cloud as a federal government employee can be very valuable to anyone looking to move to the cloud.
The next segment was presented by Matt Rathbun. Matt reviewed the recent news on #AzureGov compliance, including: i) Microsoft’s cloud for government has been invited to participate in a pilot for the High Impact Baseline and expects to receive a provisional ATO by the end of the month; ii) Microsoft has also finalized the requirements to meet DISA Impact Level 4; iii) Microsoft is establishing two new physically isolated Azure Government regions for DoD and DISA Impact Level 5.
You can read more about the announcements here. What stood out for me was the idea of fast track compliance that is designed to speed up the compliance from the current annual certification cadence.
The final segment was presented by the Red Hat team and Andrew Weiss from Microsoft. They demonstrated the #OpenShift PaaS offering running on Azure. OpenShift is a container-based application development platform that can be deployed to Azure or on-premises. As part of the presentation, they took an ASP.NET Core 1.0 web application and hosted it on the OpenShift platform. Next, they scaled out the web application using OpenShift and the underlying Kubernetes container cluster manager.
Hope you can join us for the next meeting. Please register with the meetup here to be notified of the upcoming meetings.
February 29, 2016
Thanks to those who joined us for the inaugural #AzureGov meetup on Feb 22nd. For those who could not join, here is a quick update.
We talked about the services that are available in AzureGov today (see picture below).
Note that Azure Resource Manager (ARM) is now available in AzureGov. Why am I singling out ARM from the list above? Mainly because most of the services that are available today – such as Azure Cloud Services, Azure Virtual Machines and Azure Storage – are based on the classic model (also known as the Service Management model). ARM is the “new way” to deploy and manage the services that make up your application in Azure. The differences between ARM and the classic model are described here. Further note that the availability of ARM is only the first step. What we also need are the Resource Providers (RPs) for the various services like Compute and Storage v2 (ARM in turn calls the necessary RP to provision a service).
Since I could not find sample code to call ARM in AzureGov, here is a code snippet that you may find handy.
using System;
using System.Threading;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

class Program
{
    // Set well-known client ID for Azure PowerShell
    private static string ClientId = "1950a258-227b-4e31-a9cf-717495945fc2";
    private static string TenantId = "XXX.onmicrosoft.com";
    private static string LoginEndpoint = "https://login.microsoftonline.com/";
    private static string ServiceManagementApiEndpoint = "https://management.core.usgovcloudapi.net/";
    private static string RedirectUri = "urn:ietf:wg:oauth:2.0:oob";
    private static string SubscriptionId = "XXXXXXXX";
    private static string AzureResourceManagerEndpoint = "https://management.usgovcloudapi.net";

    static void Main(string[] args)
    {
        var token = GetAuthorizationHeader();
        var credentials = new Microsoft.Rest.TokenCredentials(token);

        // Note the Azure Government ARM endpoint rather than the default public cloud endpoint
        var resourceManagerClient = new Microsoft.Azure.Management.Resources.ResourceManagementClient(
            new Uri(AzureResourceManagerEndpoint), credentials)
        {
            SubscriptionId = SubscriptionId
        };

        Console.WriteLine("Listing resource groups. Please wait...");
        var resourceGroups = resourceManagerClient.ResourceGroups.List();
        foreach (var resourceGroup in resourceGroups)
        {
            Console.WriteLine("Resource Group Name: " + resourceGroup.Name);
            Console.WriteLine("Resource Group Id: " + resourceGroup.Id);
            Console.WriteLine("Resource Group Location: " + resourceGroup.Location);
        }

        Console.WriteLine("Press any key to terminate the application");
        Console.ReadKey();
    }

    private static string GetAuthorizationHeader()
    {
        AuthenticationResult result = null;

        // The token must be acquired against the Azure Government login and service endpoints
        var context = new AuthenticationContext(LoginEndpoint + TenantId);
        var thread = new Thread(() =>
        {
            result = context.AcquireToken(ServiceManagementApiEndpoint, ClientId, new Uri(RedirectUri));
        });
        thread.SetApartmentState(ApartmentState.STA);
        thread.Name = "AcquireTokenThread";
        thread.Start();
        thread.Join();

        if (result == null)
        {
            throw new InvalidOperationException("Failed to obtain the JWT token");
        }

        return result.AccessToken;
    }
}
Hope this helps and I do hope you will register for the #AzureGov meetup here – http://www.meetup.com/DCAzureGov/
Thanks to my friend Gaurav Mantri, fellow Azure MVP and developer of Cloud Portam – www.CloudPortam.com – an excellent tool for managing Azure services. Gaurav figured out that we need to set the well-known client ID for PowerShell.
February 8, 2016
Last week, Microsoft announced the availability of preview bits for Azure Stack, an offering which allows customers to run Azure Services such as storage, PaaS Services and DBaaS within their own data centers or hosted facilities.
How is Azure Stack different from its predecessors like the Windows Azure Platform Appliance (WAPA) and Azure Pack (WAP), and why does Microsoft think it has a better chance of success? The answers to these questions hold special significance given that the narrative on private cloud has turned decidedly negative in the last 2-3 years.
Past attempts at Azure-in-a-box
To get a better understanding of Azure Stack’s unique features, let’s first take a quick look at its predecessors, WAPA and Azure Pack.
WAPA was marketed as a combination of hardware (~1000 servers) and software (Azure services). The idea was that customers could drop this appliance into their own datacenter and benefit from greater geographical proximity to their existing infrastructure, physical control, regulatory compliance and data sovereignty. However, the timing of the release was less than ideal – WAPA was announced in 2010, during Azure’s early days, when there was no real IaaS story. Additionally, the lack of a standard control plane across the various Azure services, and the pace of change, made operating the appliance unviable. The size (and cost) of the appliance meant that it appealed to a very narrow segment of customers; only industry giants (eBay, Dell, HP) could reasonably afford to implement it.
WAP took a slightly different approach. Rather than trying to run Azure bits on-premises, it used a UI experience similar to that of the Azure Portal (classic) to manage on-premises resources. Internally, the Azure Pack Portal depended on System Center 2012 R2 VMM and SPF for servicing requests. However, the notion of first-class software-defined storage or software-defined networking did not yet exist. Finally, WAP made only one PaaS offering available – Azure Pack Websites, a technology that made it possible to host high-density web sites on-premises. The fact that other Azure PaaS offerings were not part of WAP was also a bit limiting.
How is Azure Stack different?
Azure Stack’s primary difference lies in the approach it takes toward bridging the gap between on-premises and the cloud. Azure Stack takes a snapshot of the Azure codebase and makes it available within on-premises data centers (Jeffrey Snover expects biannual releases of Azure Stack). Given that Azure Stack is the very same code, customers can expect feature parity and management consistency. For example, users can continue to rely on ARM whether they are managing Azure or Azure Stack resources (in fact, Azure Stack quick start templates are now available in addition to the Azure quick start templates).
The following logical diagram depicts how the control plane (ARM) and constructs such as vNets, VM Extensions and storage accounts (depicted below the ARM) remain consistent across Azure and Azure Stack. This consistency comes from the notion of Resource Providers. The Resource Providers in turn are based on Azure-inspired technologies that are now built into Windows Server 2016, such as S2D (Storage Spaces Direct) and Network Controller.
What about Windows Service Bus? Windows Service Bus is a set of installable components that provide Azure Service Bus-like capability on Windows servers and workstations, so in some sense it is similar to the Azure Stack concept. However, it should be noted that Windows Service Bus is based on the same foundation as the Azure Service Bus (not a snapshot of the Azure service code) and does not come with the full UI and control plane experience that Azure offers.
Azure Stack Architecture
The following diagram depicts the Azure Stack logical components installed across a collection of VMs.
ADVM hosts services like AD, DNS, DHCP.
ACSVM hosts Azure Consistent Storage Services (Blob, Table and Administration services).
NCSVM hosts the network controller component that is a key part of software-defined-networking.
xRPM hosts the core resource providers, including networking, storage and compute. These providers leverage Azure Service Fabric technology for resilience and scale-out.
This article provides a more detailed look at these building blocks.
The hybrid cloud is closer to the “plateau of productivity”
While the narrative on private clouds has turned decidedly negative, the demand for hybrid cloud continues to grow. This rise in demand is mainly due to the fact that public cloud adoption is set to spike in 2016 (just look at the strong growth numbers for public cloud providers across the board). This spike in usage is squarely based on enterprise IT’s growing interest in the public cloud. And as enterprise IT begins to adopt the public cloud, hybrid cloud is going to be at the center of this adoption. Why? Because despite all the virtues of the public cloud, legacy systems are going to be around for the foreseeable future – and this is not just due to security and compliance concerns: the enormity and pitfalls of legacy migration are well known.
So after languishing in the hype curve for years, hybrid IT may be finally reaching the proverbial Gartner plateau of productivity.
Azure Stack “hybrid” scenarios
Here are some example “hybrid” scenarios enabled by Azure Stack:
1) Dev/Test in public cloud and Prod on-premises – This model would enable developers to move quickly through proving out new patterns and applications, but would still allow them to deploy the production bits in a controlled on-premises setting.
2) Dev/Test in Azure Stack and Prod in Azure – This is the converse of scenario #1 above. The motivation for this scenario is that team development in the cloud is still evolving. Some challenges include: managing the costs of dev workstations in the cloud, collaborating across subscriptions (MSDN), and mapping all aspects of on-premises CI/CD infrastructure to services like VSO. This is why it may make sense for some organizations to continue development on Azure Stack on-premises and deploy the production bits to Azure. Of course, as with scenario #1 above, this second scenario will require dealing with the lag in features and services between Azure and Azure Stack.
3) Manage on-premises resources more efficiently – Having access to a modern portal experience (Azure Portal), an automation plane (ARM), and cloud-native services (Azure Consistent Services) as a layer over the existing on-premises infrastructure brings enterprises closer to the vision of FAST IT.
4) Seamlessly handle burst-out scenarios – Technologies like stretching on-premises database to Azure are beginning to appear. Azure Stack and its support for DBaaS make these hybrid setups even more seamless.
5) Comply with data sovereignty requirements – A regional subsidiary of a multi-national corporation may not have access to a local Azure DC. In such a situation, Azure Stack can help meet data sovereignty requirements while maintaining overall consistency.
Promising offering. But don’t expect a silver bullet solution.
Azure Stack is fundamentally a different approach to hybrid IT. The fact that it is a snapshot of the Azure codebase will ensure consistency between on-premises and cloud. A common control plane and API surface is an additional plus.
However, the lag in features between the two environments will need to be carefully considered (as is the case today with differences in the availability of services across various regions).
Even though customers are getting the same codebase, what they are *not* getting is seamless scalability and all the operational knowhow to run a complex underlying infrastructure. There is no getting around the deliberate capacity planning and operational excellence needed to efficiently run Azure Stack in your data center.
Finally, as organizations are increasingly realizing, adopting any form of cloud (public, private or hybrid) requires a cultural and mindset shift. No technology alone is going to make that transition successful.
With its unique approach to hybrid cloud, Azure Stack certainly looks to be a promising offering that will provide developers and IT Pros alike an Azure-consistent environment on-premises – something that was previously unavailable.
July 19, 2015
After countless hours over the course of this past year, this training course has finally gone live. Putting together a training course is like writing a book: each sentence undergoes multiple revisions, and even after publication it is hard to let go of the project.
The public cloud is tomorrow’s IT backbone. As cloud vendors introduce new capabilities, the application-building process is undergoing a profound transformation. The cloud is based on key tenets such as commodity hardware, usage-based billing, scale-out, and automation, all on a global scale. But how does the cloud impact what we do as programmers every day? What do we need to do at a program level that aligns us with the aforementioned tenets? This course is organized into three modules which discuss a total of nine techniques designed to help developers make more effective use of the cloud.
A brief summary of the course is as follows.
Module #1 Getting Started
This module outlines the key principles that make up cloud computing and introduces the discussion of cloud-oriented programming.
Module #2 Exception Handling and Instrumentation
Be very careful about “tight loops” in your logic. The consequences may be more than just an unresponsive UI – you are likely to run up a hefty cloud usage bill. So whether you are checking for a message in a queue or reading a table, it is essential to build in logic that exponentially decreases the rate of invocation (exponential back-off).
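The pattern can be sketched in a few lines. Python is used here purely for illustration, and `try_dequeue` and `handle` are hypothetical stand-ins for your queue client and message handler:

```python
import time

def poll_queue(try_dequeue, handle, base=1.0, cap=60.0,
               sleep=time.sleep, max_polls=None):
    """Poll a queue without a tight loop: on each empty read, sleep and
    double the delay (up to cap); on a message, reset the delay so a
    burst of work is drained quickly."""
    delay, polls = base, 0
    while max_polls is None or polls < max_polls:
        polls += 1
        message = try_dequeue()          # returns a message or None
        if message is not None:
            handle(message)
            delay = base                 # work arrived: poll eagerly again
        else:
            sleep(delay)                 # empty read: back off
            delay = min(delay * 2, cap)
```

Because the delay doubles on every empty read, an idle worker quickly settles at the cap instead of hammering the queue (and your bill) at a fixed rate.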
Reimagine the Exception Handler
It has long been a best practice to handle only those exceptions from which our program can recover. For all other exceptions, the guidance is to “rethrow” the exception up the call stack. But with the cloud (and DevOps), we now have the ability to provision new resources or, alternatively, redirect the request to another data center in a different geographical location.
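One minimal sketch of that idea, in Python for illustration (the `request` callable and region names are hypothetical placeholders for your real service client):

```python
class AllRegionsFailed(Exception):
    """Raised only after every configured region has been tried."""

def call_with_failover(request, regions):
    """Rather than rethrowing straight up the call stack, retry the same
    request against the next region in the list; the exception handler
    now exercises the recovery options the cloud makes available."""
    errors = []
    for region in regions:
        try:
            return request(region)
        except Exception as exc:   # real code would catch the transport-specific type
            errors.append((region, exc))
    raise AllRegionsFailed(errors)
```

The handler still rethrows eventually, but only after it has exhausted the recovery options that simply did not exist before the cloud.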
Logging Takes a New Meaning
We know the importance of logging in a complex distributed application. In the cloud, rich logging becomes critical. But don’t create a hotspot by logging to a single repository. Instead, use diagnostics frameworks that allow you to log locally and transfer the logs to a centralized repository asynchronously.
Module #3 Containers, Microservices and Reuse
Try to encapsulate as much logic as possible into a “container” that corresponds to the unit of scale and availability on the cloud platform – whether that is a Windows VM instance or a Linux group. Doing so will allow the platform to scale your application and make it more resilient.
Decomposition is a well-understood software design principle. The idea of breaking a business problem down into smaller parts promotes not only “separation of concerns” but also the notion of reuse, and is indeed very worthy. Microservices are seen as a way to decompose your cloud-based applications.
Reuse is often cited as the Holy Grail of software development. In the cloud it takes on a whole new meaning: prebuilt ML algorithms and marketplace offerings are increasingly available on cloud platforms and are very easy to use.
Module #4 Cost, Scale and Automation
Cost Aware Computing
Architects, PMs and IT folks are not the only ones who need to worry about the overall total cost of ownership (TCO). Programmers have an important role to play in keeping costs down – adequate “poison message” handling, exponential back-off and tight memory management can all go a long way in keeping the TCO low.
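As an example of the poison-message point: Azure Storage queue messages carry a dequeue count, and the common pattern is to park a message that keeps failing rather than burn compute retrying it forever. A sketch (Python for illustration; the field names, threshold, and `dead_letter` hook are hypothetical):

```python
MAX_DEQUEUES = 5   # threshold is illustrative; tune per workload

def process_message(message, handler, dead_letter):
    """If a message keeps failing (a 'poison message'), move it aside
    instead of retrying it forever and silently running up compute charges."""
    if message["dequeue_count"] > MAX_DEQUEUES:
        dead_letter(message)       # park it somewhere for offline inspection
        return "dead-lettered"
    try:
        handler(message)
        return "processed"
    except Exception:
        return "retry"             # leave it on the queue; its dequeue count grows
```

Without the threshold check, one malformed message can keep a worker busy (and billed) indefinitely.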
Partitioning is the key to scalability
“Sharding” is not just for databases. In the cloud, consider the use of horizontal partitioning logic for something as low-level as a singleton (i.e. if possible, use a partitioned set vs. a singleton set). The best way to scale in the cloud is to embrace “scale-out” at the lowest levels of your code.
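A sharded counter is the classic small-scale illustration of this idea (sketched in Python; a real implementation would put each shard in its own storage partition rather than a list slot):

```python
import random

class ShardedCounter:
    """A singleton counter is a write hotspot. Spreading increments across
    N shards lets writes scale out; reads pay the (cheap) cost of summing."""
    def __init__(self, shards=16):
        self._shards = [0] * shards

    def increment(self, amount=1):
        # any shard will do, so concurrent writers rarely contend on one slot
        self._shards[random.randrange(len(self._shards))] += amount

    def value(self):
        return sum(self._shards)
```

The same trade applies up and down the stack: slightly more work on read, in exchange for write throughput that grows with the number of partitions.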
Infrastructure as Code
A new category of programs that developers are likely to write when working in the cloud is code that automates the provisioning of the infrastructure needed to host the applications they build.
The cloud has ushered in an era of “software-defined everything” – network, storage, compute, etc. Consequently, every aspect of a modern data center is accessible via API, thus bridging the chasm between development and operations that is commonly referred to as DevOps.
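The core of declarative infrastructure-as-code is a converge loop: diff the declared state against what exists and make only the changes needed, so re-running the same spec is a no-op. A toy sketch (Python for illustration; `create`/`delete` are hypothetical stand-ins for real cloud API calls):

```python
def apply_spec(desired, existing, create, delete):
    """Converge `existing` (name -> spec) toward the declared `desired`
    list. Re-running the same spec changes nothing, which is the
    idempotency property template-driven provisioning relies on."""
    wanted = {r["name"]: r for r in desired}
    for name in list(existing):            # remove what is no longer declared
        if name not in wanted:
            delete(name)
            del existing[name]
    for name, spec in wanted.items():      # create or update the rest
        if existing.get(name) != spec:
            create(spec)
            existing[name] = spec
    return existing
```

Real provisioning engines add dependency ordering and error handling, but the declare-diff-converge shape is the same.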
April 12, 2015
Decomposition is a well-understood software design principle. The idea of breaking a business problem down into smaller parts that promote not only "separation of concerns" but also the notion of reuse is indeed very worthy.
In his highly acclaimed book, "Domain-Driven Design," Eric Evans talks about designing software by creating models that mimic the business problem. If you are interested in a detailed treatment of domain-driven design and its terminology and ubiquitous language, I highly recommend reading the book or watching the recordings of Eric’s talks on InfoQ.
Once you have modeled the business problem that you are trying to solve into objects and entities, you have a number of decomposition techniques available to you, depending on the language and platform of your choice.
One obvious choice is to start with shared libraries, which are available in almost any language. Shared libraries give you the ability to place some common code that represents an aspect of your solution into a library. This technique has been used extensively over the years. However, one downside of a shared library is tight coupling – because the library is linked (statically or dynamically) into the process space of the caller. This is not to say you cannot achieve loose coupling with libraries. You can, but it is largely up to you to maintain that discipline – there are no built-in language protections to safeguard against tighter coupling. As a result, we often see such a solution degenerate into a tightly coupled, brittle one.
The other downside of a shared library is that the language used to develop the library and the language of the consumer(s) is usually the same.
Component-based design (CBD) was developed as an enhancement to the idea of libraries. Examples of component-based design include COM and CORBA. CBD introduced the notion of a binary protocol that serves as a "firewall" between the caller and the component, thus enforcing more discipline in terms of sharing. Additionally, this firewall makes reuse across languages easier. For example, a C++-based COM component could be invoked from a program written in VB.
The other important benefit of component-based design is process isolation. In other words, the ability to host a component in its own process space. This process decoupling can improve modularity and can provide the ability to independently manage the lifecycle of a shared module – i.e., the ability to deploy newer versions without impacting the callers.
While component-based design was certainly an improvement over shared libraries, it is not without its own downsides. The binary protocol (often proprietary, due to the lack of industry standards) used for communication between components was not Web/Internet friendly and was widely considered too complex. Additionally, despite the process separation, the caller and the component often continued to run in the same security context. Finally, notions of scalability – scale-out, state management, discovery – were not first-class, or even industry-standard interoperable, concepts.
SOA was considered an improvement over CBD. In SOA, services became the fundamental elements for developing software, instead of components. However, it is important to note that SOA and CBD are not competing ideas. SOA can be thought of as a way to leverage the artifacts of CBD – components – in an attempt to make them easier to consume. The biggest advancement was the support for multiple protocols, including ones that work well over the Internet. Additionally, SOA came with a well-defined way for a service to describe its capabilities and its functional and non-functional characteristics, thus making services easier to discover and use. Unlike process isolation in CBD, the service boundary was well-defined. SOA ultimately engendered a series of specifications (referred to as the WS-* specs) that addressed many aspects including transactions, coordination, management, and quality of service.
Despite the promise, it is safe to say that SOA has not lived up to its potential. We will leave that debate – "why SOA has or has not lived up to its promise" to others.
In the last two to three years, a subset of SOA principles, referred to as "microservices," has gained popularity. Companies like Netflix and Twitter embody the principles of SOA at the service level, but with a focus on "micro"-sized services as opposed to more coarse-grained services. The idea of microservices is to apply the principle of "single responsibility" (part of Robert Martin’s five principles of OOD) to a service.
Since each microservice exhibits strong cohesion, it leads to a system that is loosely coupled, thus yielding a number of benefits:
i) Failure of one microservice does not cascade to other parts of the system
ii) Each microservice is autonomous – it can be deployed into a separate execution context, have its own data store of choice, and have its own caching tier
iii) Each microservice can be scaled independently
iv) Each microservice can be easily deployed or updated independently
v) Each microservice can in theory be implemented in a different (within reason of course) underlying technology of choice
vi) Given their granularity, microservices are composable – in other words, it is possible to compose a large business process from a collection of microservices
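Benefit (i) above – containing failures – is usually enforced in code with a circuit breaker between services. A minimal sketch (Python for illustration; the threshold and the fallback are hypothetical and a real breaker would also re-close after a timeout):

```python
class CircuitBreaker:
    """Stop calling a failing microservice after `threshold` consecutive
    errors, so its failure does not cascade into every caller; a fallback
    answers while the circuit is open."""
    def __init__(self, call, fallback, threshold=3):
        self._call, self._fallback = call, fallback
        self._threshold, self._failures = threshold, 0

    def __call__(self, *args):
        if self._failures >= self._threshold:
            return self._fallback(*args)     # circuit open: fail fast
        try:
            result = self._call(*args)
            self._failures = 0               # success closes the circuit
            return result
        except Exception:
            self._failures += 1
            return self._fallback(*args)
```

The point is structural: the caller degrades gracefully instead of queueing up requests against a service that is already down.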
All this sounds good, but microservices are not a "free lunch." As you probably guessed from the earlier description, the "micro" nature of these services can add significant operational overhead due to the proliferation of instances – execution environments (PaaS containers, VMs), networking stacks and even data stores. The only way to be successful in deploying a large number of instances and keeping them in sync is through automation. Automation is also key to making microservices cost-effective by spinning things up and down as needed.
So how do we deal with the overhead associated with microservices? It turns out that the cloud offers perhaps the only viable approach for dealing with these overheads while still enabling us to tap into the advantages of microservices. Here is how:
1. Infrastructure as code, leading up to automated deployment pipelines (coupled with continuous integration), offers a manageable approach to dealing with a potentially very large number of microservices.
2. The ability to spin up an execution context – from the smallest micro-sized VM to the largest VM, all the way to a multi-tenant compute service where you drop in your code and have it executed in response to a trigger – is perfectly suited for microservices. Add to this the support for containers such as Docker, and now you have a way to package and deploy a microservice (granted, Docker support is not limited to the cloud).
3. The ability to tap into the elasticity of the networking stack makes it simple to provision a large number of microservice endpoints.
4. The ability to tap into database-as-a-service (SQL and NoSQL stores) promotes autonomy for microservices.
5. An emphasis on a DevOps mindset in the cloud offers a cost-effective approach to managing microservices.
To summarize:
- Decompose the business problem into models – objects with bounded contexts
- Shared libraries, components, and services are all decomposition options. Each has its limitations; microservices seem promising.
- The cloud offers a way to implement microservices cost-effectively through VMs and Docker containers
March 20, 2015
Azure is a fast-moving platform. It is almost a full-time job just to keep up with the updates.
So at VSLive! I presented a session on *my* top ten announcements in “T-10” months (June 2014 – March 2015). I hope to update this list each quarter.
So how did I come up with this list?
Firstly, I focused on services that are GA (Generally Available), i.e. the services that come with a money-backed SLA. The only exceptions are “meta” features such as Docker and Resource Manager – which may not be GA themselves but produce an end result that is GA.
Secondly, I picked services that I felt are architecturally significant – services that are both key enablers for Azure based applications and cloud-native. In other words, these services have themselves come together as a result of capabilities enabled by the cloud, including elasticity, resilience and composability. Please allow me to provide a couple of examples to explain my definition of “architecturally significant:”
- Azure automation is an excellent and beneficial service, but I picked Resource Manager as one of the IaaS announcements in my list. As you will see shortly, Azure Resource Manager is the definitive resource provisioning and management service going forward.
Mobile Services is one of my favorite services and I have used it extensively. Of course, it GAed outside of the T-10 month window. In the last ten months the key enhancement to this service has been .NET support, which is very useful for .NET developers but not enough to be deemed architecturally significant.
Thirdly, this list is based on *just* my personal opinion. That said, I would love to hear your thoughts – which features do you find most appealing and why? Please take a moment to leave a comment below. Thanks!
I broke down the list below into four categories – infrastructure, data and application tiers, and the tooling that goes with all the tiers. I find the aforementioned categories to be the most logical way to organize the ever-increasing set of Azure services.
#1 Bigger and Better Compute, Storage, and Networking
G-Series machines offer more memory (448 GB) and more local solid-state drive (SSD) storage (up to 6.59 TB) than any previous Azure virtual machine size. Additionally, Premium Storage allows you to attach up to 32 TB of persistent storage per virtual machine, with more than 50,000 IOPS per virtual machine at less than one millisecond latency for read operations.
Key enhancements in VNET now allow multiple site-to-site connections, enabling you to connect multiple on-premises locations to Azure DC. Additionally, you can now connect two VNETs even if they are running in two different regions.
Finally, ExpressRoute allows you to create private (i.e. not routed over the public internet), high-throughput connections between Azure datacenters and your existing on-premises infrastructure. Furthermore, ExpressRoute connections offer a 99.9% SLA on connection uptime and up to 10 Gbps of bandwidth.
The above announcements will help remove most concerns related to cloud performance and hybrid setup.
#2 Better Support for Automation and Management
Azure Resource Manager *appears* to be the definitive approach for provisioning and managing resources in Azure. Azure resources such as Websites, Databases, and VMs can be organized into groups (called Resource Groups). Resource Groups are units of scale, management, and access control (RBAC). Resource Group members can talk to each other. Additionally, billing, monitoring and quota information rolls up to a Resource Group.
Azure Resource Manager supports both a declarative model (JSON templates) and an imperative model (PowerShell). Azure Resource Group “templates” (definitions of the resources that make up a resource group) are idempotent, and the resources in a group are “tightly” coupled in a good way – deleting a Resource Group deletes all of its constituent resources as well.
Based on the above, it is clear that Azure Resource Manager is a key underpinning for DevOps on Azure.
#3 Support for Containers
The Azure team has announced support for Docker, a popular technology that allows an application and its runtime dependencies to be packaged (aka “Dockerized”) so that the resulting container can run on any host with a Docker engine. Currently, these Docker containers are Linux-based, but Windows-based containers are expected to be available in the future.
The Azure team has announced that Azure will support the Docker orchestration APIs. This means that you can use the Docker tooling (Docker Machine, Compose, and Swarm) to create custom orchestrations that create and manage Docker containers in Azure. The Azure team has also announced that organizations will be able to host private Docker repositories (collections of Dockerized application images) within Azure Storage.
Docker support in Azure has the potential to facilitate application migration to and from the cloud, improve application reliability by packaging dependencies alongside the application, and increase the cost-effectiveness of Azure, since multiple Dockerized applications can be hosted within a single VM.
#4 Big Data Enhancements
HDInsight, the big-data-as-a-service offering, has grown significantly in recent months. This includes support for Apache Storm, HBase, Apache Mahout, and Hadoop 2.6. In addition, from an infrastructure perspective, you can now scale an HDInsight cluster up without deleting and recreating it, and you can choose from additional VM sizes. There is built-in integration between Azure Websites logs and HDInsight. Finally, HDInsight support for VNET is also GA.
The growing importance of big data cannot be overstated. HDInsight has added significant capabilities that make it easier to work with data from services such as Azure Websites and to process real-time events using Apache Storm.
#5 SQL Database Service Tiers and Performance Plans
SQL Database has added three service tiers (Basic, Standard, and Premium). These tiers offer you the ability to choose from a range of options for size limits, self-service restore, DR options, and performance objectives. The three tiers map to seven performance levels. Think of a level as a way to quantify performance in terms of throughput (DTUs), benchmark transaction rate, and consistency of performance. For example, the Standard/S0 level offers 10 DTUs, a maximum database size of 250 GB, 521 transactions per minute, and “better” consistency. Similarly, the Premium/P3 level offers 800 DTUs, a maximum database size of 500 GB, 735 transactions per second, and the “best” consistency.
If you have looked at SQL Database in the past and liked its ease of use but weren’t so sure about its performance and throughput, then the SQL Database service tiers and performance levels are for you.
#6 Machine Learning for the Masses
There is a lot written about machine learning these days. Personally, I like Arthur Samuel’s definition below:
Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed – Arthur Samuel (1959)
Arthur Samuel used machine learning to build a checkers program that over time became a better checkers player than Arthur Samuel himself. While machine learning concepts such as deep learning and neural networks have been around for a long time, it is the advent of the cloud that has finally made these concepts viable to implement and within the reach of the masses.
Azure Machine Learning takes this a step further. It makes it easy to create machine learning models using a browser-based UI, where you can select from a range of built-in algorithms. Once you have created a model, it is also easy to operationalize it by publishing it as a web service.
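Once published, such a web service is typically called by POSTing a JSON body to its scoring URL. The sketch below builds a request envelope of the “Inputs”/“GlobalParameters” shape used by these services; the input name, column names, and values are placeholders for illustration:

```python
import json

def build_request(columns, rows):
    """Build a JSON body of the kind a published Azure ML web service
    expects: named inputs, each with column names and rows to score."""
    return json.dumps({
        "Inputs": {
            "input1": {
                "ColumnNames": columns,  # feature names from the trained model
                "Values": rows           # one inner list per row to score
            }
        },
        "GlobalParameters": {}
    })

# The body would then be POSTed to the service's scoring URL with an
# "Authorization: Bearer <api key>" header; both are specific to your service.
body = build_request(["age", "income"], [["34", "52000"]])
```

The web service returns its predictions as JSON as well, so consuming a model from any language amounts to one authenticated HTTP call.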
If you need predictive analytics that tap into the best-in-class algorithms already used in Xbox, Bing, and other Microsoft products, Azure Machine Learning is the place to be.
#7 Azure Websites – The Go-To Execution Host
If you are building a PaaS-based application on Azure, chances are that you are leveraging Azure Websites. Of course, Azure Websites has been GA for a long time, but this highly used service (300,000+ instances and counting) continues to add features at a rapid clip. The last ten months have been no different. The Azure Websites team GAed capabilities including:
i) WebJobs – Ability to run batch jobs within a Website
ii) Migration Assistant – A tool that analyzes your on-premises IIS application and recommends steps for moving it to Azure Websites
iii) VNET Integration – Ability to talk to resources such as Azure-hosted or even on-premises VMs from within an Azure Website
iv) CDN Support – Integrate Websites content with Azure Content Delivery Network (CDN)
v) Multiple deployment slots (production + 4) – Ability to easily swap between multiple versions of your application in production
Whether you are building a web application, hosting a REST endpoint, or hosting a pre-packaged app, Azure Websites is the place to start.
#8 Ensure a Successful API Program
Ensuring a successful API program is critical for any startup, ISV, or large enterprise that is increasingly looking to modernize and unify silos of APIs spread across the enterprise. Azure API management can help you develop a successful API Program by providing out of the box features such as discoverability of your API, self-enrollment, API documentation, policies, analytics and security.
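The policies mentioned above are expressed as XML documents applied to inbound and outbound traffic. As a hedged sketch (the limit values here are illustrative, not from any specific sample), a per-subscription throttle looks roughly like this:

```xml
<policies>
  <inbound>
    <!-- Throttle each subscription to 100 calls per 60 seconds -->
    <rate-limit calls="100" renewal-period="60" />
    <base />
  </inbound>
  <outbound>
    <base />
  </outbound>
</policies>
```

Because the policy sits in front of your backend, you get throttling, security, and analytics without changing the API implementation itself.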
APIs are considered the silent engines of growth for any modern enterprise. Azure API Management makes it easy to get started.
#9 Outsource Your Identity and Access Control Using Azure AD
Every application needs to securely manage identity and control access for its users. Beyond the core authentication and authorization features, each application also needs a number of related capabilities, including single sign-on, self-service governance, seamless API access, high availability, DR, and the ability to support a growing set of authentication protocols including SAML, OAuth, OpenID, WS-Federation, etc.
This is why it makes so much sense to “outsource” this aspect of the application to a service like Azure Active Directory (or Azure AD in short).
Even though the premium version has been available for a while, the Azure AD team has GAed some key capabilities in the last ten months, including:
Azure Active Directory Application Proxy – This feature allows an on-premises application to be published via Azure AD and made available to external users as a SaaS service.
Azure Active Directory Basic Tier – The Azure AD Premium edition GAed in April 2014, focusing on enterprise-oriented features such as multi-factor authentication, self-service group management, and write-back of password resets to on-premises directories. In the last ten months, the Azure AD team has GAed the Basic edition. The Basic edition offers a lower price point and is designed for the “deskless” worker; it has limited self-service capabilities and cannot write password resets back to on-premises directories. Interestingly, the Basic edition comes with the same 99.9% uptime SLA and no limit on the number of stored objects.
If you are looking to outsource identity and access control for your Azure and on-premises applications, Azure AD can come in very handy.
#10 Load Testing Made Easy
While there have been a number of Azure-related tooling enhancements in Visual Studio, the one tool that stands out is the Visual Studio Load Testing Service. Technically this capability GAed in April 2014, outside of our “T-10” month timeframe, but I decided to include it anyway given its significance. Cloud Load Testing (CLT) offers the following features:
i) It is easy to simulate varying load (whether 200 or 200,000 concurrent users), and of course you only pay for what you use.
ii) The same tests that we have used on-premises in the past also work in CLT. The only difference is that the test execution is happening in the cloud.
Load testing has never been easy – the hardware, software and setup costs were non-trivial. With CLT you can conduct load tests easily, early, and often in your development lifecycle.