The sixth #AzureGov meetup took place on July 25th at Microsoft’s K Street office.

Like previous meetups, this one began with a networking session.

After the networking session, Andrew Stroup, Director of Product and Technology at the White House Presidential Innovation Fellows, provided insights and discussion on how the federal government is tackling the procurement of cloud technologies by building a two-sided marketplace, Apps.gov.

Apps.gov focuses on creating easier pathways for tech companies to enter the federal government market, and on giving federal employees a single place to discover, explore, and procure these products. Andrew took several questions about how products are registered with Apps.gov, the size of the market, and how cloud solutions like AzureGov are made available on Apps.gov.


Following Andrew’s session, Martin Heinrich, Director of Enterprise Content and Records Management at CGI, discussed CGI’s recently launched Records Management as a Service (RMaaS), which combines the Microsoft Azure Government cloud and OpenText’s Content Suite with CGI’s extensive consulting expertise and implementation services.

Hope you can join us for the next meetup on August 31st, which will feature Open Source in government. For more information, please visit:

http://www.meetup.com/DCAzureGov/events/233042129/

The fifth #AzureGov meetup took place on June 29th at Microsoft’s K Street office.

Like previous meetups, this one began with a networking session. A hot topic of conversation among the attendees was the recent announcement that AzureGov has achieved a High Impact provisional authority to operate (P-ATO), the highest impact level for FedRAMP accreditation. In other words, AzureGov can now host high-impact data and applications. Many saw this as a turning point for AzureGov adoption by federal and state agencies.

After the networking session, Nate Johnson, Senior Program Manager in the Azure Security Group, presented an insightful session on how apps can achieve an ATO (Authority to Operate). He walked through the six-step process for apps to achieve an ATO.

[Slide: the six-step ATO process]

Nate also talked about how the AzureGov team can help customers achieve an ATO for their apps, including Azure SME support, a customer responsibility matrix, security documentation, blueprints, and templates.


The next session was presented by Aaron Barth, Principal PFE, Microsoft Services. Aaron’s session built on the FedRAMP process Nate outlined earlier, providing a “practitioner’s perspective” based on his recent experience taking a client through the ATO process. As a developer himself, Aaron was able to demystify what appears to be an onerous process of documenting every security control in the application. He explained that by building on a FedRAMP-compliant platform, a large chunk of the documentation requirements is addressed by the cloud provider (AzureGov).

The following slide (also from Aaron’s deck) was very helpful in depicting i) how the various security controls map to the different tiers of the application, and ii) how the number of ATO controls you are responsible for (as an app owner) goes down significantly as you move from on-premises to IaaS to PaaS.

[Slide: security controls mapped to application tiers, showing the app owner’s share of ATO controls shrinking from on-premises to IaaS to PaaS]

The next presentation was from Brett Goldsmith of AIS. Brett gave a brief demonstration of how his team built nice-looking visualizations with Power BI Embedded using the FBI UCR dataset.

Finally, I tried to answer a question that has come up in previous meetups – “How can I leverage rich ARM templates in an AzureGov setting?” As you know, not all services (including Azure RM providers) are available in AzureGov today. So here is my brute-force approach for working around this *temporary* limitation (disclaimer: this is not an “official” workaround by any means, so please conduct your due diligence regarding licensing, etc.).

In a nutshell: provision the resources in Azure using ARM (for example, the sqlvm-always-on template), glean the metadata from the provisioned resources (AV set, ILBs, storage), copy the images to AzureGov, and then use the gleaned metadata to provision the resources in AzureGov using ASM.
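To make the copy-images step concrete, here is a minimal C# sketch of a server-side blob copy from a commercial Azure storage account into an AzureGov storage account (note the usgovcloudapi.net endpoint suffix), using the WindowsAzure.Storage client library. The account names, keys, container, and blob names below are placeholders; treat this as an illustration of the approach, not a finished tool.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class CopyVhdToGov
{
    static void Main()
    {
        // Source account in commercial Azure; destination in AzureGov.
        // Account names, keys, and blob names are placeholders.
        var source = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=srcaccount;AccountKey=XXX");
        var destination = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=govaccount;AccountKey=XXX;" +
            "EndpointSuffix=core.usgovcloudapi.net"); // AzureGov endpoint suffix

        var sourceBlob = source.CreateCloudBlobClient()
            .GetContainerReference("vhds")
            .GetPageBlobReference("sqlvm-image.vhd");

        // Short-lived read-only SAS so the Gov data center can pull the blob.
        var sas = sourceBlob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(4)
        });

        var destContainer = destination.CreateCloudBlobClient().GetContainerReference("vhds");
        destContainer.CreateIfNotExists();
        var destBlob = destContainer.GetPageBlobReference("sqlvm-image.vhd");

        // Kick off a server-side asynchronous copy across clouds.
        destBlob.StartCopy(new Uri(sourceBlob.Uri + sas));
        Console.WriteLine("Copy started: " + destBlob.Uri);
    }
}

Since the copy is asynchronous on the service side, you would poll the destination blob’s CopyState until it completes before registering the VHD as an image in AzureGov.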


That said, hopefully we will not need the aforementioned workaround for long. New ARM Resource Providers are being added to AzureGov at a fairly rapid clip. In fact, I just ran a console program to dump all the resource providers; the output is pasted below. Notice the addition of providers such as Storage V2.

Provider Namespace Microsoft.Backup
ResourceTypes:
BackupVault

Provider Namespace Microsoft.ClassicCompute
ResourceTypes:
domainNames
checkDomainNameAvailability
domainNames/slots
domainNames/slots/roles
virtualMachines
capabilities
quotas
operations
resourceTypes
moveSubscriptionResources
operationStatuses

Provider Namespace Microsoft.ClassicNetwork
ResourceTypes:
virtualNetworks
reservedIps
quotas
gatewaySupportedDevices
operations
networkSecurityGroups
securityRules
capabilities

Provider Namespace Microsoft.ClassicStorage
ResourceTypes:
storageAccounts
quotas
checkStorageAccountAvailability
capabilities
disks
images
osImages
operations

Provider Namespace Microsoft.SiteRecovery
ResourceTypes:
SiteRecoveryVault

Provider Namespace Microsoft.Web
ResourceTypes:
sites/extensions
sites/slots/extensions
sites/instances
sites/slots/instances
sites/instances/extensions
sites/slots/instances/extensions
publishingUsers
ishostnameavailable
sourceControls
availableStacks
listSitesAssignedToHostName
sites/hostNameBindings
sites/slots/hostNameBindings
operations
certificates
serverFarms
sites
sites/slots
runtimes
georegions
sites/premieraddons
hostingEnvironments
hostingEnvironments/multiRolePools
hostingEnvironments/workerPools
hostingEnvironments/multiRolePools/instances
hostingEnvironments/workerPools/instances
deploymentLocations
ishostingenvironmentnameavailable
checkNameAvailability

Provider Namespace Microsoft.Authorization
ResourceTypes:
roleAssignments
roleDefinitions
classicAdministrators
permissions
locks
operations
policyDefinitions
policyAssignments
providerOperations

Provider Namespace Microsoft.Cache
ResourceTypes:
Redis
locations
locations/operationResults
checkNameAvailability
operations

Provider Namespace Microsoft.EventHub
ResourceTypes:
namespaces
checkNamespaceAvailability
operations

Provider Namespace Microsoft.Features
ResourceTypes:
features
providers

Provider Namespace microsoft.insights
ResourceTypes:
logprofiles
alertrules
autoscalesettings
eventtypes
eventCategories
locations
locations/operationResults
operations
diagnosticSettings
metricDefinitions
logDefinitions

Provider Namespace Microsoft.KeyVault
ResourceTypes:
vaults
vaults/secrets
operations

Provider Namespace Microsoft.Resources
ResourceTypes:
tenants
providers
checkresourcename
resources
subscriptions
subscriptions/resources
subscriptions/providers
subscriptions/operationresults
resourceGroups
subscriptions/resourceGroups
subscriptions/resourcegroups/resources
subscriptions/locations
subscriptions/tagnames
subscriptions/tagNames/tagValues
deployments
deployments/operations
operations

Provider Namespace Microsoft.Scheduler
ResourceTypes:
jobcollections
operations
operationResults

Provider Namespace Microsoft.ServiceBus
ResourceTypes:
namespaces
checkNamespaceAvailability
premiumMessagingRegions
operations

Provider Namespace Microsoft.Storage
ResourceTypes:
storageAccounts
operations
usages
checkNameAvailability


Here is the code to print all the resource providers available in AzureGov:

static void Main(string[] args)
{
    var token = GetAuthorizationHeader();
    var credentials = new Microsoft.Rest.TokenCredentials(token);
    var resourceManagerClient = new Microsoft.Azure.Management.Resources.ResourceManagementClient(new Uri(AzureResourceManagerEndpoint), credentials)
    {
        SubscriptionId = SubscriptionId,
    };

    // Enumerate every resource provider visible to the subscription.
    foreach (var provider in resourceManagerClient.Providers.List())
    {
        Console.WriteLine(String.Format("Provider Namespace {0}", provider.NamespaceProperty));
        Console.WriteLine("ResourceTypes:");

        foreach (var resourceType in provider.ResourceTypes)
        {
            Console.WriteLine(String.Format("\t {0}", resourceType.ResourceType));
        }

        Console.WriteLine("");
    }
}


The fourth #AzureGov meetup took place on May 24th at Microsoft’s K Street office.  

After a short networking session, Jack Bienko, Deputy for Entrepreneurship Education at the US Small Business Administration, talked about the upcoming National Day of Civic Hacking on June 4th. You can find more information about the event in Jack’s recent blog post.

Next, Bill Meskill, Director of Enterprise Information Services, Office of the Under Secretary of Defense for Policy, kicked off a case study presentation of an Intelligent News Aggregator and Recommendation solution that his group has developed on Azure. He started by describing the motivations for building this solution, a summary of the business objectives, and why it made sense to develop it in the cloud.


Next, Brent Wodicka and Jim Strang from AIS presented an overview of the architecture, followed by a brief demo. At a high level, the system relies on Azure WebJobs to pull news feeds from a number of sources. The downloaded news stories are then used to train an Azure Machine Learning model, an LDA implementation of the topic model. The trained model is then made available as a web service to serve recommendations based on users’ preferences. There is also an Azure API Management facade that allows their on-premises systems (SharePoint) to talk to the Azure-hosted solution. API Management policies, such as JWT validation and incoming IP address restrictions, secure the connectivity between the on-premises systems and the Azure ML-based solution.
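As an aside, once an Azure ML model is published as a web service, serving a recommendation is just an authenticated REST call. Here is a minimal, hypothetical C# sketch of such a call; the endpoint URL, API key, and input schema below are placeholders and will differ for any real service.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class RecommendationClient
{
    static void Main()
    {
        // Placeholder endpoint and API key; each published Azure ML web
        // service supplies its own values and input schema.
        var endpoint = "https://ussouthcentral.services.azureml.net/workspaces/" +
                       "<workspace-id>/services/<service-id>/execute?api-version=2.0";
        var apiKey = "<api-key>";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", apiKey);

            // The input shape below is illustrative only.
            var payload = "{\"Inputs\":{\"input1\":{\"ColumnNames\":[\"StoryText\"]," +
                          "\"Values\":[[\"sample news story\"]]}},\"GlobalParameters\":{}}";

            var response = client.PostAsync(endpoint,
                new StringContent(payload, Encoding.UTF8, "application/json")).Result;

            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}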


Finally, Andrew Weiss, Azure Solution Architect at Microsoft, briefly talked about leveraging Azure services to help solve hackathon challenges.


This guest post is from Brent Wodicka at Applied Information Sciences.

The third #AzureGov meetup was held at the Microsoft Technology Center in Reston on April 27th. This meetup was again well attended and featured three great presentations relating to next-gen cloud applications for government and microservices.

Mehul Shah got the session started with an overview of the Cortana Intelligence Suite, including the latest announcements coming out of BUILD 2016. The talk started with a lap around the platform, and then focused on some key areas where features are rolling out quickly, including the Cognitive Services APIs. The session included some good discussion from the group related to potential use cases, and the ability to adapt the services to any number of business verticals. More about the Cortana Intelligence Suite here, and Build 2016 here.
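To give a flavor of what calling a Cognitive Services API looks like, here is a small C# sketch against the Text Analytics sentiment endpoint. The region and subscription key are placeholders, and these services are evolving quickly, so treat the exact URL and schema as illustrative.

using System;
using System.Net.Http;
using System.Text;

class SentimentDemo
{
    static void Main()
    {
        // Text Analytics v2.0 sentiment endpoint; region and key are placeholders.
        var endpoint = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment";
        var subscriptionKey = "<subscription-key>";

        using (var client = new HttpClient())
        {
            // Cognitive Services APIs authenticate with a subscription key header.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

            var payload = "{\"documents\":[{\"language\":\"en\",\"id\":\"1\"," +
                          "\"text\":\"The meetup was excellent.\"}]}";

            var response = client.PostAsync(endpoint,
                new StringContent(payload, Encoding.UTF8, "application/json")).Result;

            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}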

The second session focused on building microservices on Azure. Chandra Sekar led the discussion, which began with an overview of the options you have for building microservices on the platform, including container technologies and Azure Service Fabric. Chandra then went into detail and showed a great demo of a sample microservice built on Azure Service Fabric. The demo explained how stateful microservices can be built using Service Fabric, and demonstrated the resiliency of this model by walking through a simulated “failure” of the primary service node and the recovery of the service, which occurred very quickly and maintained its running state. Cool stuff!
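For readers who have not seen a stateful Service Fabric service, here is a minimal C# sketch (not Chandra’s demo code) of the pattern: state lives in a replicated reliable dictionary, which is why a failover of the primary replica preserves the running state. The service name and counter logic are illustrative, and the host registration boilerplate is omitted.

using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

// A stateful service whose counter survives failover because the reliable
// dictionary is replicated across the cluster. Host registration
// (ServiceRuntime.RegisterServiceAsync) is omitted for brevity.
internal sealed class CounterService : StatefulService
{
    public CounterService(StatefulServiceContext context) : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        var counts = await StateManager
            .GetOrAddAsync<IReliableDictionary<string, long>>("counts");

        while (!cancellationToken.IsCancellationRequested)
        {
            using (var tx = StateManager.CreateTransaction())
            {
                // Increment a replicated counter inside a transaction; the
                // update is quorum-committed before CommitAsync returns.
                await counts.AddOrUpdateAsync(tx, "heartbeats", 1, (key, value) => value + 1);
                await tx.CommitAsync();
            }

            await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
        }
    }
}

Because updates are committed transactionally to secondary replicas, the counter survives the kind of primary-node failure Chandra simulated.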

The final speaker was Keith McClellan, a Federal Programs Technical Lead with Mesosphere. Keith began by talking about what Mesosphere has been up to, its maturity in the market, and the recent announcement of the DC/OS project. DC/OS is an exciting open source technology (spearheaded by Mesosphere) which offers an enterprise-grade, datacenter-scale operating system, meant to be a single location for running workloads of almost any kind. Keith walked through provisioning containers and other interesting services (including a Spark cluster for big data analysis) on the platform, and actually provisioned the entire stack on Azure infrastructure. I was impressed with the number of services already available to run on the platform today. More about DC/OS here.

If you haven’t already, join the DC Azure Government meetup group here, and join us for the next meeting. You can also opt to be notified about upcoming events or when group members post content.


The second #AzureGov meetup was held in Reston on March 15th. It was well attended (see the picture below of Todd presenting to the audience; standing next to Todd is Karina Homme, Principal Program Manager for AzureGov and co-organizer of the meetup).

This meetup was kicked off by Todd Neison from NOAA. Todd gave an excellent presentation on his group’s journey into #Azure, starting with the decision to move to the cloud, the challenges in getting the necessary approvals, and the focus on #PaaS. Todd’s perspective on adopting the cloud as a federal government employee can be very valuable to anyone looking to move to the cloud.

[Photo: Todd presenting to the audience, with Karina Homme]

The next segment was presented by Matt Rathbun. Matt reviewed the recent news on #AzureGov compliance, including: i) the Microsoft cloud for government has been invited to participate in a pilot for the High Impact Baseline and expects to get a provisional ATO by the end of the month; ii) Microsoft has finalized the requirements to meet DISA Impact Level 4; and iii) Microsoft is establishing two new physically isolated Azure Government regions for DoD workloads at DISA Impact Level 5.


You can read more about the announcements here. What stood out for me was the idea of fast-track compliance, which is designed to speed up compliance from the current annual certification cadence.

The final segment was presented by the Red Hat team and Andrew Weiss from Microsoft. They demonstrated the #OpenShift PaaS offering running on Azure. OpenShift is a container-based application development platform that can be deployed to Azure or on-premises. As part of the presentation, they took an ASP.NET Core 1.0 web application and hosted it on the OpenShift platform. Next, they scaled out the web application using OpenShift and the underlying Kubernetes container cluster manager.

Hope you can join us for the next meeting. Please register with the meetup here to be notified of the upcoming meetings.   


Thanks to those who joined us for the inaugural #AzureGov meetup on Feb 22nd. For those who could not join, here is a quick update.

We talked about the services that are available in AzureGov today (see the picture below).

[Image: services currently available in AzureGov]


Note that Azure Resource Manager (ARM) is now available in AzureGov. Why am I singling out ARM from the list above? Mainly because most of the services available today, such as Azure Cloud Services, Azure Virtual Machines, and Azure Storage, are based on the classic model (also known as the Service Management model). ARM is the “new way” to deploy and manage the services that make up your application in Azure. The differences between ARM and the classic model are described here. Further note that the availability of ARM is only the first step. What we also need are the Resource Providers (RPs) for the various services, like Compute and Storage V2 (ARM in turn calls the necessary RP to provision a service).

Since I could not find sample code to call ARM in AzureGov, here is a code snippet that you may find handy.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using System.Threading;
using Microsoft.Azure.Management.Resources.Models;
using Microsoft.Azure.Management.Resources;
using Microsoft.Rest;
using Microsoft.Rest.Azure;

namespace AzureGovDemo
{
    class Program
    {
        // Set the well-known client ID for Azure PowerShell
        private static string ClientId = "1950a258-227b-4e31-a9cf-717495945fc2";
        private static string TenantId = "XXX.onmicrosoft.com";
        private static string LoginEndpoint = "https://login.microsoftonline.com/";
        private static string ServiceManagementApiEndpoint = "https://management.core.usgovcloudapi.net/";
        private static string RedirectUri = "urn:ietf:wg:oauth:2.0:oob";
        private static string SubscriptionId = "XXXXXXXX";
        private static string AzureResourceManagerEndpoint = "https://management.usgovcloudapi.net";
        static void Main(string[] args)
        {
            var token = GetAuthorizationHeader();
            var credentials = new Microsoft.Rest.TokenCredentials(token);
            var resourceManagerClient = new Microsoft.Azure.Management.Resources.ResourceManagementClient(new Uri(AzureResourceManagerEndpoint), credentials)
            {
                SubscriptionId = SubscriptionId,
            };
            Console.WriteLine("Listing resource groups. Please wait...");
            Console.WriteLine("----------------------------------------");
            var resourceGroups = resourceManagerClient.ResourceGroups.List();
            foreach (var resourceGroup in resourceGroups)
            {
                Console.WriteLine("Resource Group Name: " + resourceGroup.Name);
                Console.WriteLine("Resource Group Id: " + resourceGroup.Id);
                Console.WriteLine("Resource Group Location: " + resourceGroup.Location);
                Console.WriteLine("----------------------------------------");
            }
            Console.WriteLine("Press any key to terminate the application");
            Console.ReadLine();
        }

        private static string GetAuthorizationHeader()
        {
            AuthenticationResult result = null;

            var context = new AuthenticationContext(LoginEndpoint + TenantId);

            var thread = new Thread(() =>
            {
                result = context.AcquireToken(
                  ServiceManagementApiEndpoint,
                  ClientId,
                  new Uri(RedirectUri));
            });

            thread.SetApartmentState(ApartmentState.STA);
            thread.Name = "AcquireTokenThread";
            thread.Start();
            thread.Join();

            if (result == null)
            {
                throw new InvalidOperationException("Failed to obtain the JWT token");
            }

            string token = result.AccessToken;
            return token;
        }
    }
}

Hope this helps, and I do hope you will register for the #AzureGov meetup here: http://www.meetup.com/DCAzureGov/

Thanks to my friend Gaurav Mantri, fellow Azure MVP and developer of Cloud Portam (www.CloudPortam.com), an excellent tool for managing Azure services. Gaurav figured out that we need to set the well-known client ID for Azure PowerShell.

Last week, Microsoft announced the availability of preview bits for Azure Stack, an offering which allows customers to run Azure services such as storage, PaaS services, and DBaaS within their own data centers or hosted facilities.

How is Azure Stack different from its predecessors like the Windows Azure Platform Appliance (WAPA) and Azure Pack (WAP), and why does Microsoft think it has a better chance of success? The answers to these questions hold special significance given that the narrative on private cloud has turned decidedly negative in the last 2-3 years.

Past attempts at Azure-in-a-box

To get a better understanding of Azure Stack’s unique features, let’s first take a quick look at its predecessors, WAPA and Azure Pack.

WAPA was marketed as a combination of hardware (~1000 servers) and software (Azure services). The idea was that customers could drop this appliance into their own datacenter and benefit from greater geographical proximity to their existing infrastructure, physical control, regulatory compliance, and data sovereignty. However, the timing of the release was less than ideal – WAPA was announced in 2010, during Azure’s early days, when there was no real IaaS story. Additionally, the lack of a standard control plane across the various Azure services and the pace of change made operating the appliance unviable. The size (and cost) of the appliance meant that it appealed to a very narrow segment of customers; only industry giants (eBay, Dell, HP) could reasonably afford to implement it.

WAP took a slightly different approach. Rather than trying to run Azure bits on-premises, it used a UI experience similar to that of the Azure Portal (classic) to manage on-premises resources. Internally, the Azure Pack Portal depended on System Center 2012 R2 VMM and SPF for servicing requests. However, the notion of first-class software-defined storage or software-defined networking did not yet exist. Finally, WAP made available only one PaaS offering, Azure Pack Websites, a technology that made it possible to host high-density web sites on-premises. The fact that other Azure PaaS offerings were not part of WAP was also a bit limiting.

How is Azure Stack different?

Azure Stack’s primary difference lies in the approach it takes toward bridging the gap between on-premises and the cloud. Azure Stack takes a snapshot of the Azure codebase and makes it available within on-premises data centers (Jeffrey Snover expects biannual releases of Azure Stack). Given that Azure Stack is the very same code, customers can expect feature parity and management consistency. For example, users can continue to rely on ARM whether they are managing Azure or Azure Stack resources (in fact, Azure Stack quick start templates are now available in addition to the Azure quick start templates).
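To make that consistency concrete, here is a small, hypothetical C# sketch: the same ResourceManagementClient pattern used against AzureGov earlier on this page, pointed at an Azure Stack deployment instead. The endpoint URI below is deployment-specific and purely illustrative, and authentication details (tenant, token audience) will also differ per deployment.

// Hypothetical Azure Stack ARM endpoint; every deployment exposes its own URI.
private static string AzureStackArmEndpoint = "https://api.azurestack.local";

static void ListResourceGroupsOnAzureStack()
{
    // Reuses GetAuthorizationHeader() and SubscriptionId from the earlier
    // listing; only the ARM endpoint changes.
    var credentials = new Microsoft.Rest.TokenCredentials(GetAuthorizationHeader());
    var client = new Microsoft.Azure.Management.Resources.ResourceManagementClient(
        new Uri(AzureStackArmEndpoint), credentials)
    {
        SubscriptionId = SubscriptionId,
    };

    // The same call works unchanged against Azure, AzureGov, or Azure Stack.
    foreach (var resourceGroup in client.ResourceGroups.List())
    {
        Console.WriteLine(resourceGroup.Name);
    }
}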

The following logical diagram depicts how the control plane (ARM) and constructs such as vNets, VM extensions, and storage accounts (depicted below ARM) remain consistent across Azure and Azure Stack. This consistency comes from the notion of Resource Providers, which in turn are based on Azure-inspired technologies now built into Windows Server 2016, such as S2D (Storage Spaces Direct) and Network Controller.

[Diagram: the ARM control plane and constructs such as vNets, VM extensions, and storage accounts, consistent across Azure and Azure Stack]

What about Windows Service Bus? Windows Service Bus is a set of installable components that provide Azure Service Bus-like capabilities on Windows servers and workstations, so in some sense it is similar to the Azure Stack concept. However, it should be noted that Windows Service Bus is based on the same foundation as Azure Service Bus (not a snapshot of the Azure service code) and does not come with the full UI and control plane experience that Azure offers.

Azure Stack Architecture

The following diagram depicts the Azure Stack logical components installed across a collection of VMs.

ADVM hosts services like AD, DNS, DHCP.

ACSVM hosts Azure Consistent Storage Services (Blob, Table and Administration services).

NCSVM hosts the network controller component, a key part of software-defined networking.

xRPM hosts the core resource providers, including networking, storage, and compute. These providers leverage Azure Service Fabric technology for resilience and scale-out.

This article provides a more detailed look at these building blocks.

[Diagram: Azure Stack logical components (ADVM, ACSVM, NCSVM, xRPM) deployed across a collection of VMs]

The hybrid cloud is closer to the “plateau of productivity”

While the narrative on private clouds has turned decidedly negative, the demand for hybrid cloud continues to grow. This rise in demand is mainly due to the fact that public cloud adoption is set to spike in 2016 (just look at the strong growth numbers for public cloud providers across the board). This spike in usage is squarely based on enterprise IT’s growing interest in the public cloud. And as enterprise IT begins to adopt the public cloud, hybrid cloud is going to be at the center of this adoption. Why? Because despite all the virtues of the public cloud, legacy systems are going to be around for the foreseeable future – and this is not just due to security and compliance concerns: the enormity and pitfalls of legacy migration are well known.

So after languishing in the hype curve for years, hybrid IT may be finally reaching the proverbial Gartner plateau of productivity.

Azure Stack “hybrid” scenarios

Here are some example “hybrid” scenarios enabled by Azure Stack:

1) Dev/Test in the public cloud and Prod on-premises – This model would enable developers to move quickly through proving out new patterns and applications, but would still allow them to deploy the production bits in a controlled on-premises setting.

2) Dev/Test in Azure Stack and Prod in Azure – This is the converse of scenario #1 above. The motivation for this scenario is that team development in the cloud is still evolving. Some challenges include: managing the costs of dev workstations in the cloud, collaborating across subscriptions (MSDN), and mapping all aspects of on-premises CI/CD infrastructure to services like VSO. This is why it may make sense for some organizations to continue to develop on Azure Stack on-premises and deploy the production bits to Azure. Of course, as with scenario #1 above, this second scenario will require dealing with the lag in features and services between Azure and Azure Stack.

3) Manage on-premises resources more efficiently – Having access to a modern portal experience (Azure Portal), an automation plane (ARM), and cloud-native services (Azure Consistent Services) as a layer over the existing on-premises infrastructure brings enterprises closer to the vision of Fast IT.

4) Seamlessly handle burst-out scenarios – Technologies like stretching an on-premises database to Azure are beginning to appear. Azure Stack and its support for DBaaS make these hybrid setups even more seamless.

5) Comply with data sovereignty requirements – A regional subsidiary of a multinational corporation may not have access to a local Azure DC. In such a situation, Azure Stack can help meet data sovereignty requirements while maintaining overall consistency.

Promising offering. But don’t expect a silver bullet solution.

Azure Stack is fundamentally a different approach to hybrid IT. The fact that it is a snapshot of the Azure codebase will ensure consistency between on-premises and cloud. A common control plane and API surface is an additional plus.

However, the lag in features between the two environments will need to be carefully considered (as is the case today with differences in the availability of services across various regions).

Even though customers are getting the same codebase, what they are *not* getting is seamless scalability and all the operational knowhow to run a complex underlying infrastructure. There is no getting around the deliberate capacity planning and operational excellence needed to efficiently run Azure Stack in your data center.

Finally, as organizations are increasingly realizing, adopting any form of cloud (public, private or hybrid) requires a cultural and mindset shift. No technology alone is going to make that transition successful.

With its unique approach to hybrid cloud, Azure Stack certainly looks to be a promising offering, one that will provide developers and IT pros alike an Azure-consistent environment on-premises, something that was previously unavailable.
