Windows Azure Boot Camp at MIX

April 11, 2011

Later today I will be presenting the “Cloud computing with the Windows Azure Platform” boot camp at MIX. My session comprises three sections:

  • Overview of Cloud Computing / Introduction to the Windows Azure Platform
  • Review an on-premise MVC 3 app / Move the on-premise app to Windows Azure
  • Fault Tolerance and Diagnostics / Tips & Tricks – Building Scalable Apps in the Cloud

As part of the second session, I will go over the steps to “Azure-enable” an existing on-premise MVC 3-based web application. This blog post is an attempt to capture those steps for the benefit of the attendees and other readers.

Before we begin, let me state the key objectives of this exercise:

    • Demonstrate that apps built using currently shipping, on-premise technologies (.NET 4.0 / VS 2010) can easily be moved to the Windows Azure Platform. (Of course, the previous version of .NET, 3.5 SP1, is supported as well.)
    • Understand the subtle differences in code between running on-premise and running on the Windows Azure platform.
    • Reduce the number of moving parts within the application by “outsourcing” a number of functions to Azure-based services.
    • Refactor the application to take advantage of the elasticity and scalability characteristics inherent in a cloud platform such as Windows Azure.

 

Community Portal – an MVC 3-based on-premise application

The application is simple by design. It is a Community Portal (hereafter referred to as CP) that allows users to create tasks. To log on to the CP, users need to register themselves. Once a task has been created, users can kick off a workflow to guide the task to completion. The workflow is a two-step process that routes the request first to the assignee and then to the approver. In addition to this primary use case of task creation and workflow, the Community Portal also allows users to access reports such as “Tasks completed per month”.

[Screenshots: the Community Portal web UI]

The following diagram depicts the Community Portal architecture. It comprises two subsystems: 1) an MVC 3-based Community Portal site, and 2) a WF 4.0-based Task Workflow Service. This is a very quick overview; I assume you are familiar with core concepts such as MVC 3, EF, Code-First, and so on.

[Diagram: Community Portal architecture]

 

Community Portal is an MVC 3-based site. The bulk of the logic resides within the actions of the TasksController class. Razor is used as the view engine, the model is based on EF-generated entities, and data annotations are used to specify validation for individual fields. Community Portal uses the built-in membership provider to store and validate user credentials. CP also uses the built-in ASP.NET session state provider to store and retrieve data from the cache: to avoid hitting the database on each page refresh, we cache the TaskSummary (the list of tasks assigned to a user) in session state.
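To make this concrete, here is a minimal sketch of what such an action might look like. This is illustrative only – ITaskRepository, TaskSummary, and the member names are assumptions, not the actual Community Portal code:

    public class TasksController : Controller
    {
        // Hypothetical repository abstraction over the EF-generated model.
        private readonly ITaskRepository repository;

        public TasksController(ITaskRepository repository)
        {
            this.repository = repository;
        }

        public ActionResult Index()
        {
            // Check session state first so that a page refresh does not hit the database.
            var summary = Session["TaskSummary"] as TaskSummary;
            if (summary == null)
            {
                summary = repository.GetTaskSummary(User.Identity.Name);
                Session["TaskSummary"] = summary;
            }
            return View(summary);
        }
    }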

 


Tasks created by users are stored in SQL Server. EF 4.0, in conjunction with the Code-First library, is used to access the data stored in SQL Server. As part of task creation, users can upload an image. Uploaded images are stored in SQL Server using the FILESTREAM feature. Since FILESTREAM stores the BLOB data on an NTFS volume, it provides excellent support for streaming large BLOBs [as opposed to the 2 GB limit imposed by varbinary(max)]. When the user initiates a workflow, a WCF message is sent using BasicHttpBinding, which causes the WF instance to be created. This brings us to the second subsystem – the workflow host.
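To give a flavor of the model, here is a minimal Code-First sketch – the property names are illustrative, not the actual schema:

    // Data annotations (System.ComponentModel.DataAnnotations) drive the
    // field-level validation mentioned earlier.
    public class Task
    {
        public int TaskId { get; set; }

        [Required, StringLength(100)]
        public string Title { get; set; }

        public string Status { get; set; }
        public string AssignedTo { get; set; }
    }

    public class CommunityPortalContext : DbContext
    {
        public DbSet<Task> Tasks { get; set; }
    }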

As stated, the Task Workflow Service consists of a WF 4.0-based workflow program hosted inside a console application (we could have just as easily hosted it in IIS). The WF program is composed of the built-in Receive, SendReply, and InvokeMethod activities and, as stated, is a two-stage workflow. The first call into the workflow kicks off a new instance. Each new instance updates the task status to “Assigned” via the InvokeMethod activity and sends a reply to the caller. The workflow then waits for the “task completed” message to arrive; once it does, the status is marked “complete”, followed by a response to the caller. The last step is repeated for the “task reviewed” message.

[Screenshot: the two-stage task workflow]

In order to ensure that all messages are appropriately correlated (i.e., routed to the appropriate instance), a content-based correlation identifier is used. Content-based correlation is new in WF 4.0. It allows an element in the message (in this case taskId) to serve as the correlation token. Refer to the screenshot below: an XPath query is used to extract the TaskId from the message payload. The WF 4 runtime compares the result of the XPath expression to the variable pointed to by the CorrelatesWith attribute to determine the appropriate instance to which the message needs to be delivered.

[Screenshot: XPath-based correlation query in the WF designer]
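For reference, the correlation wiring in the workflow XAML looks roughly like the following. This is a hand-trimmed, illustrative fragment – the designer generates additional attributes and namespace declarations, and the exact XPath prefixes will differ:

    <Receive OperationName="CompleteTask"
             ServiceContractName="ITaskService"
             CorrelatesWith="[taskCorrelationHandle]">
      <Receive.CorrelatesOn>
        <!-- Extract taskId from the message body to locate the right instance -->
        <XPathMessageQuery x:Key="key1">sm:body()/xg0:CompleteTask/xg0:taskId</XPathMessageQuery>
      </Receive.CorrelatesOn>
    </Receive>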

 

A quick word on the reporting piece: again for the sake of simplicity, the Community Portal relies on client report definition files (.rdlc), using the local processing mode supported by the ReportViewer control. Since the ReportViewer control (a server-side control) cannot be used inside MVC views, an alternate route was created to serve the reports.

Finally, the following diagram depicts the on-premise solution. It consists of two projects: 1) an MVC 3-based CommunityPortal project, and 2) a console application that hosts the WF 4.0-based TaskWorkflow.

 

[Diagram: on-premise solution structure]

This concludes our whirlwind tour of the on-premise app.

Changing Community Portal to run on Windows Azure

Let us switch gears and discuss how the on-premise version of Community Portal can be moved to the Windows Azure Platform.

Even though our on-premise application is trivial, it includes a number of moving parts. If we were to move this application to an IaaS (Infrastructure as a Service) provider, we would have to set up and manage the various building blocks ourselves. For instance, we would need to install Windows Server AppFabric caching and set up a pool of cache servers; we would have to install SQL Server and set up high availability; we would have to install an identity provider (IP) and set up a user database; and so on. We have not even discussed the work involved in keeping the servers patched.

Fortunately, the Windows Azure platform is a PaaS offering. In other words, it provides a platform of building-block services that make it easier to develop cloud-based applications. Application developers can subscribe to a set of pre-built services as opposed to creating and managing their own. There is one other important benefit of PaaS – the ability to treat the entire application (including the virtual machines, executables, and binaries) as one logical service. For instance, rather than logging on to each server that makes up the application and deploying the binaries individually, Windows Azure allows developers to simply upload the binaries and a deployment model to the Windows Azure management portal (as you would expect, there is a way to automate this step using the management API). Windows Azure in turn allocates the appropriate virtual machines, storage, and network, based on the deployment model. It also takes care of installing the binaries on all the virtual machines that have been provisioned. For a more detailed discussion of the PaaS capabilities of Windows Azure, please refer to my previous blog post.

The following diagram illustrates the different pieces that make up a Windows Azure virtual machine (also referred to as a compute unit). The binaries supplied by the application developer are represented as the dark grey box. The rest of the components are supplied by the Windows Azure platform. The role runtime is bootstrapping code that we will discuss later in this post. Together, this virtual machine represents an instance of a software component (such as an MVC 3 site, a batch executable, etc.) and is commonly referred to as a role. Currently, there are three types of roles – Web role, Worker role, and VM role – each designed for a certain type of software component. For instance, a web role is designed for a web application or anything related to it; a worker role is designed for long-running, non-interactive tasks. An application hosted within Azure (commonly referred to as an Azure-hosted service) is made up of one or more roles. Another thing to note is that an Azure-hosted application can scale by adding instances of a given role.

[Diagram: anatomy of a Windows Azure compute unit (role instance)]

 

Now that you understand the core concepts (deployment model, role, and role instances), let us resume our discussion of converting the on-premise application to run on Windows Azure, beginning with the deployment model. It turns out that the deployment model is an XML file that describes things such as the various roles that make up our application, the number of instances of each role, configuration settings, port settings, etc. Fortunately, Visual Studio provides tooling so that we don’t have to hand-craft the XML file. Simply add a Windows Azure project to the existing solution as shown below.

[Screenshot: adding a Windows Azure project to the solution]

Select a worker role. Remember that a worker role is designed for long-running, non-interactive tasks; in this instance, we will use it to host instances of TaskWorkflow. There is no need to select a web role, as we will map the existing CommunityPortal project to a web role.

[Screenshot: selecting a worker role in the New Windows Azure Project dialog]

We do this by clicking on Add Web Role in the solution (as shown below) and selecting the CommunityPortal project.

[Screenshot: Add Web Role in Solution Explorer]

Here is a screenshot of our modified solution structure. Our solution has two roles. To mark the transition to an Azure service, I dropped the OnPremise suffix from the solution name – so our solution is now named CommunityPortal.

[Screenshot: the modified solution structure with two roles]

Through these steps, we have generated our deployment model!

The following snippet depicts the generated ServiceDefinition file. This file contains definitions of roles, endpoints, etc. As you can see, we have one web role and one worker role as part of our service. The Sites element describes the collection of web sites hosted in a web role. In this instance, we have a single web site bound to an external endpoint that listens for HTTP traffic on port 80 (lines 12 and 17).

   1:  <?xml version="1.0" encoding="utf-8"?>
   2:  <ServiceDefinition name="WindowsAzureProject5" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
   3:    <WorkerRole name="WorkerRole1">
   4:      <Imports>
   5:        <Import moduleName="Diagnostics" />
   6:      </Imports>
   7:    </WorkerRole>
   8:    <WebRole name="CommunityPortal">
   9:      <Sites>
  10:        <Site name="Web">
  11:          <Bindings>
  12:            <Binding name="Endpoint1" endpointName="Endpoint1" />
  13:          </Bindings>
  14:        </Site>
  15:      </Sites>
  16:      <Endpoints>
  17:        <InputEndpoint name="Endpoint1" protocol="http" port="80" />
  18:      </Endpoints>
  19:      <Imports>
  20:        <Import moduleName="Diagnostics" />
  21:      </Imports>
  22:    </WebRole>
  23:  </ServiceDefinition>


While the service definition file defines the settings, the values associated with these settings reside in the service configuration file. The following snippet shows the equivalent service configuration file. Note the Instances element (lines 4 and 10) that defines the number of instances for a given role. This tells us that changing the number of instances of a role is just a matter of changing a setting within the service configuration file. Also note that we have one or more configuration settings defined for each role. This seems a bit odd, doesn’t it? Isn’t Web.config the primary configuration file for an ASP.NET application? The answer is that here we are defining configuration settings that apply to every instance of a role. There is one other important reason for preferring the service configuration file over web.config, but to discuss that, we need to wait until we have talked about packaging and uploading the application binaries to Windows Azure.

 

   1:  <?xml version="1.0" encoding="utf-8"?>
   2:  <ServiceConfiguration serviceName="WindowsAzureProject5" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
   3:    <Role name="WorkerRole1">
   4:      <Instances count="1" />
   5:      <ConfigurationSettings>
   6:        <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
   7:      </ConfigurationSettings>
   8:    </Role>
   9:    <Role name="CommunityPortal">
  10:      <Instances count="1" />
  11:      <ConfigurationSettings>
  12:        <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
  13:      </ConfigurationSettings>
  14:    </Role>
  15:  </ServiceConfiguration>


 

Visual Studio makes it easy to work with the service configuration settings so that developers don’t have to edit the XML directly. The following screenshot shows the UI for adjusting the configuration settings.

[Screenshot: role configuration settings UI in Visual Studio]

Now that we have the deployment model, it’s time to discuss how we package up the binaries (recall that to host an application on Windows Azure, we need to provide a deployment model and the binaries). Before we create the package, we need to make sure that all the files needed to run our application in Windows Azure are indeed included. By default, all the application assemblies will be included. However, recall that Windows Azure provides the environment in which our application runs, so we can expect .NET 4.0 framework (or 3.5 SP1) assemblies, C Runtime components, etc. to be available to applications running in Windows Azure. There are, however, some add-on packages, such as MVC 3, that are not provided in the environment today. This means we need to explicitly include the MVC 3 assemblies in our package by setting CopyLocal=true on those assembly references.

Now that we have covered the concept of a service package and configuration file, let us revisit the discussion of where to store configuration settings that need to change often. The only way to change a setting that is part of the package (such as the settings included within web.config) is to redeploy it. In contrast, settings defined in the service configuration can easily be changed via the portal or the management API, without the need to redeploy the entire package. So, now that I have convinced you to store the configuration settings in the service configuration file, we must change the code to read settings from the new location: we use RoleEnvironment.GetConfigurationSettingValue to read the value of a setting from the service configuration file. The RoleEnvironment class gives us access to the Windows Azure runtime. In addition to accessing the configuration settings, we can use this class to find out about all the role instances running as part of the service, and to check whether the code is running in Windows Azure at all. Finally, we also need to set the global configuration setting publisher inside Global.asax.cs, as shown in the snippet below.

 

 

    CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
    {
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
    });
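Elsewhere in the code, reading a setting might then look like the following sketch, which falls back to web.config so that the same code also runs on-premise (the helper class and setting name are illustrative, not part of the original source):

    using System.Configuration;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class Config
    {
        public static string GetSetting(string name)
        {
            // RoleEnvironment.IsAvailable is true only when running under
            // Windows Azure (or the local development fabric).
            return RoleEnvironment.IsAvailable
                ? RoleEnvironment.GetConfigurationSettingValue(name)
                : ConfigurationManager.AppSettings[name];
        }
    }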

That’s all the changes we will make to the web role for now.

But we also need to modify the worker role we just added. The on-premise version relied on a console application to host the task workflow. The equivalent piece of code (shown below) can be moved to the Run method of the RoleEntryPoint. This begs the question – what is RoleEntryPoint and why do I need it? Recall that when discussing the Azure VM, we briefly alluded to the role runtime. This is the other piece of logic (besides the application files themselves) that an application developer can include as part of the package. It allows an application to receive callbacks (OnStart, OnStop, etc.) from the runtime environment and is, therefore, a great place to inject bootstrapping code such as diagnostics. Another important example is the callback that fires when a service configuration setting changes. We said earlier that the role instance count is a service configuration setting; so if the user decides to add or remove role instances, the application can receive a configuration change event and act accordingly.
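Here is a minimal sketch of the shape of such a class (names are illustrative); the workflow-host wiring discussed next plugs into OnStart and Run:

    public class TaskServiceWorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Subscribe to configuration-change notifications (e.g., instance
            // count or setting changes made through the portal).
            RoleEnvironment.Changing += (sender, e) =>
            {
                // Set e.Cancel = true to recycle the instance instead of
                // applying the change in place.
                e.Cancel = false;
            };
            return base.OnStart();
        }

        public override void Run()
        {
            // Long-running work goes here; see the loop shown later in this post.
            base.Run();
        }
    }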

Strictly speaking, including a class derived from RoleEntryPoint is not required if there is no initialization to be done. That is not the case with the worker role we added to the solution: here we need to use the OnStart method to set up the workflow service endpoint and open the WorkflowServiceHost, as shown in the snippet below. Once the OnStart method returns (with a return value of true), we will start receiving incoming traffic. This is why it is important to have the WorkflowServiceHost ready before we return from OnStart.

The first line of the snippet below requires some explanation. While running on-premise, the base address was simply localhost and a dynamically assigned port. When the code runs inside Windows Azure, however, we need to determine at runtime which address and port have been assigned to the role. The worker role needs to set up an internal endpoint to listen for incoming requests to the workflow service. Choosing an internal endpoint is a decision we made: we could just as easily have allowed the workflow host to listen on an external endpoint (available on the public internet). However, in deference to the defense-in-depth idea, we decided to limit the exposure of the workflow host and only allow other role instances running as part of the service to communicate with it.
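For this endpoint lookup to succeed, the internal endpoint must also be declared in the service definition. A minimal sketch of the relevant fragment (the endpoint name must match the lookup in the code below):

    <WorkerRole name="WorkerRole1">
      <Endpoints>
        <InternalEndpoint name="TaskEndpoint" protocol="http" />
      </Endpoints>
    </WorkerRole>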

 

    RoleInstanceEndpoint internalEndPoint =
        RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["TaskEndpoint"];

    string protocol = "http://";
    string baseAddress = string.Format("{0}{1}/", protocol, internalEndPoint.IPEndpoint);

    host = new WorkflowServiceHost(new TaskService(), new Uri(baseAddress));

    host.AddServiceEndpoint("ITaskService",
        new BasicHttpBinding(BasicHttpSecurityMode.None) { HostNameComparisonMode = HostNameComparisonMode.Exact },
        "TaskService");

    host.Open();

The Run method then simply runs a never-ending loop, as shown below. This is the equivalent of the static void Main method from our console application.

 
    Trace.WriteLine("TaskServiceWorkerRole entry point called", "Information");

    while (true)
    {
        Thread.Sleep(10000);
        Trace.WriteLine("Working", "Information");
    }

Once we have gone through and included the appropriate files, packaging the application binaries is just a matter of right-clicking the Azure project and selecting the Publish option. This brings up the publish dialog shown below. For now, we will simply select the “Create Service Package” option and click OK.

 

[Screenshot: the Publish dialog]

That’s it. At this point, we have both of the artifacts required to publish to Windows Azure. We can browse to windows.azure.com (shown below) and create a new hosted service called CommunityPortal. Notice the two highlighted text boxes – here we provide the package location and the deployment model. Let us go ahead and publish our application to Windows Azure.

[Screenshot: creating a new hosted service on the Windows Azure portal]

 

Not so fast …

We’re all done and the application is running on Windows Azure! Well, almost – except we don’t have a database available to the code running in Windows Azure (this includes the Task database and the membership database), unless of course we make our on-premise database instances accessible to the Windows Azure-hosted service. While doing so is technically possible, it may not be such a good idea because of latency and security concerns. The alternative is to move the databases close to the Windows Azure-hosted service. Herein lies an important concept: not only do we want to move these databases to Windows Azure, we also want to get out of the business of setting up high availability, applying patches, etc. for our database instances. This is where the PaaS capabilities offered by Windows Azure shine.

In the next few subsections, we will look at approaches that allow us to “get out of the way” and let the Windows Azure platform take care of as many moving parts as possible. One other thing – changing the on-premise code to take advantage of Azure-provided services sounds good, but it is only practical if there is semantic parity between the two environments. It is not reasonable to expect developers to maintain two completely different code bases. Fortunately, semantic parity is indeed a key design goal of the Windows Azure designers. The following diagram represents the symmetry between the on-premise and Windows Azure offerings. It is important to note that complete symmetry between on-premise and Windows Azure does not exist today and is unlikely to be there anytime soon. Frankly, achieving complete symmetry is a journey on which the entire ecosystem – including Microsoft, ISVs, SIs, and developers like us – has to embark together. Only then can we hope to realize the benefits that PaaS has to offer.

[Diagram: symmetry between on-premise and Windows Azure offerings]

Let us start with moving the database to the cloud.

Windows Azure includes a highly available database-as-a-service offering called SQL Azure. The following diagram depicts the architecture of SQL Azure. SQL Azure is powered by a fabric of SQL Server nodes that allows for setting up highly elastic (i.e., the ability to create as many databases as needed) and redundant (i.e., resilient against failures such as a corrupted disk) databases. While the majority of on-premise SQL Server functions are available within SQL Azure, there is not 100% parity between the two offerings. This is mainly due to the following reasons:

1) The SQL Azure team has simply not gotten around to building a given capability into SQL Azure yet, but a future release is planned to address it. An example of such a feature is CLR-based stored procedures: not available on SQL Azure today, but the team has gone on record to add this capability in the future.

2) It simply does not make sense to enable certain features in a hosted environment such as SQL Azure. For example, the ability to specify file groups would limit how the SQL Azure service can move databases around, so this feature is unlikely ever to be supported.

 

[Diagram: SQL Azure architecture]

As application developers, all we need to do is provision the database on the Windows Azure portal by specifying its size, as shown below. For this sample, we have a very modest size requirement, so we don’t have to worry about sharding our data across multiple databases. The next step is to replace the on-premise connection string with the SQL Azure connection string; an example follows the screenshot.

 

[Screenshot: provisioning a SQL Azure database]
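The SQL Azure connection string follows the familiar SQL Server format; what changes is the server name (the SQL Azure gateway), the user@server login convention, and the use of an encrypted connection. A sketch with placeholder server, user, and database names:

    <connectionStrings>
      <add name="CommunityPortalEntities"
           connectionString="Server=tcp:myserver.database.windows.net,1433;Database=CommunityPortal;User ID=admin@myserver;Password=...;Trusted_Connection=False;Encrypt=True;" />
    </connectionStrings>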

 

Remember from our earlier discussion that there is not 100% parity between on-premise SQL Server and SQL Azure. Reviewing our on-premise database reveals one aspect of our schema that cannot be migrated to SQL Azure – the FILESTREAM column that we used to efficiently store the uploaded image associated with a task. The FILESTREAM column type allows us to store the image on a mounted NTFS volume outside of the data file; this is something that SQL Azure is unlikely to support. Fortunately, Windows Azure provides Azure Blob storage, which allows us to store large BLOBs efficiently. Furthermore, we don’t have to worry about managing the NTFS volume (as is the case with FILESTREAM) – Azure Blob storage takes care of that for us. It provides a highly elastic store that is also fault tolerant (it keeps multiple copies of the data). Finally, should we need to, we can serve the images from a location closer to our users by enabling the Content Delivery Network (CDN) option on our Azure Blob containers. Hopefully, I have convinced you that Azure Blob storage is a much better place for our BLOB data.

Now, let’s look at the code change needed to store the images in Blob storage. The change is rather simple. First, we upload the image to Azure Blob storage. Next, we store the name of the blob in the Tasks table. Of course, this means altering the Tasks table to replace the FILESTREAM column with a varchar(255) column that holds the blob name. (The container-creation helper called by this code is sketched after the snippet.)

 

    public void SaveFile(HttpPostedFileBase fileBase, int taskId)
    {
        var ext = System.IO.Path.GetExtension(fileBase.FileName);

        // Generate a unique blob name so concurrent uploads cannot collide.
        var name = string.Format("{0:10}_{1}{2}", DateTime.Now.Ticks, Guid.NewGuid(), ext);

        CreateOnceContainerAndQueue();

        // Upload the image to the "imagelib" blob container.
        var blob = blobStorage.GetContainerReference("imagelib").GetBlockBlobReference(name);
        blob.Properties.ContentType = fileBase.ContentType;
        blob.UploadFromStream(fileBase.InputStream);

        // Save the blob name into the Tasks table.
        using (SqlConnection conn = new SqlConnection(RoleEnvironment.GetConfigurationSettingValue("CommunityPortalEntities")))
        {
            SqlCommand comm = new SqlCommand("UPDATE Tasks SET BlobName = @blobName WHERE TaskId = @TaskId", conn);
            comm.Parameters.Add(new SqlParameter("@blobName", name));
            comm.Parameters.Add(new SqlParameter("@TaskId", taskId));

            conn.Open();
            comm.ExecuteNonQuery();
        }
    }
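The CreateOnceContainerAndQueue helper called above is not shown in the original listing; here is a minimal sketch of what it might do, assuming the StorageClient library and that blobStorage is a CloudBlobClient field:

    // blobStorage can be created once, e.g.:
    //   blobStorage = CloudStorageAccount.FromConfigurationSetting("DataConnectionString")
    //                                    .CreateCloudBlobClient();
    private void CreateOnceContainerAndQueue()
    {
        var container = blobStorage.GetContainerReference("imagelib");

        // Create the container on first use; CreateIfNotExist is idempotent.
        container.CreateIfNotExist();

        // Allow public read access to blobs so the uploaded images can be
        // served directly from Blob storage (or through the CDN).
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });
    }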

With this change, we are ready to hook up the SQL Azure-based CommunityPortal database to the rest of our application. But what about the existing SSRS reports – what happens to them now that the database has moved to SQL Azure? We have two options: 1) continue to run the reports on an on-premise instance of SQL Server Reporting Services (SSRS), adding the SQL Azure database as a data source, or 2) move the reporting function to a Windows Azure-hosted SSRS service instance (currently in a CTP). The benefit of the latter should be obvious by now: we subscribe to a “reporting as a service” capability without being bogged down with installing and maintaining SSRS instances ourselves. The following screenshot depicts how, using Business Intelligence Development Studio (BIDS), we can deploy the existing report definition to a SQL Azure Reporting Services instance. Since our database has already been migrated to SQL Azure, we can easily add it as the report data source.

[Screenshot: deploying the report definition from BIDS]

Outsourcing the authentication setup

What about the ASP.NET membership database? We could migrate it to SQL Azure as well; in fact, there is a version of the ASP.NET membership database install scripts designed to work with SQL Azure. Once we migrate the data to a SQL Azure-based membership database, it is straightforward to rewire our Azure-hosted application to work with the new provider. While this approach works just fine, there is one potential issue: our users do not like the idea of registering a new set of credentials to access the Community Portal. Instead, they would like to use their existing credentials, such as Windows Live ID, Facebook, Yahoo!, and Google, to authenticate to the Community Portal. Since these credentials allow users to establish their identity, we commonly refer to their issuers as identity providers, or IPs. It turns out that IPs rely on a number of different underlying protocols, such as OpenID and OAuth, and these protocols tend to evolve constantly. The task of interacting with a number of distinct IPs seems daunting. Fortunately, Windows Azure provides a service called the Access Control Service (ACS) that can simplify this task. ACS acts as an intermediary between our application and the various IPs. Vittorio and Wade authored an excellent article that goes through the detailed steps required to integrate an application with ACS, so I will only provide a quick summary.

  • Log on to portal.appfabriclabs.com.
  • Create a namespace for the Community Portal application as shown below.

    [Screenshot: creating an ACS namespace]

  • Add the identity providers we want to support. As shown in the screenshot below, we have selected Google and Windows Live ID as the two IPs.

    [Screenshot: configuring the identity providers]

  • Notice that the role of our application has changed: rather than acting as its own IP (via the ASP.NET membership provider that was in place earlier), it now relies on well-known IPs. In security parlance, our Community Portal application has become a relying party, or RP. Our next step is to register Community Portal as an RP with ACS; this will allow ACS to pass us information about the authenticated users. The screenshot below illustrates how we set up Community Portal as an RP.

    [Screenshot: registering Community Portal as a relying party]

     

  • The previous step concludes the ACS setup. All we need to do now is copy the configuration settings that ACS has generated for us (based on the IP and RP setup) and plug them into the Community Portal source code. The configuration information that needs to be copied is highlighted in the screenshot below.

    [Screenshot: ACS-generated configuration settings]

  • Finally, inside Visual Studio, we add ACS as our authentication mechanism. We do this by selecting the “Add STS reference” menu option and plugging in the configuration metadata we copied from the ACS portal, as shown in the screenshot below.

    [Screenshot: Add STS reference wizard]

     

  • The Windows Identity Foundation (WIF) classes needed to interact with ACS are not pre-installed in the current versions of the OS available on Windows Azure. To get around this limitation, we need to install the WIF update as part of the web role initialization; one way to do this is sketched after this list.
  • That’s it! Users will now be prompted to select an IP of their choice, as shown below:

    [Screenshot: the identity-provider selection page]
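One common way to install the WIF update during role initialization is an elevated startup task in the service definition. A sketch, assuming a hypothetical InstallWIF.cmd script (included in the package) that runs the update installer silently:

    <WebRole name="CommunityPortal">
      <Startup>
        <!-- InstallWIF.cmd wraps the WIF update installer (e.g., wusa.exe /quiet) -->
        <Task commandLine="InstallWIF.cmd" executionContext="elevated" taskType="simple" />
      </Startup>
    </WebRole>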

Outsourcing the caching setup

Up until now, we have successfully migrated the task database to SQL Azure and “outsourced” our security scheme to ACS. Let us look at one last example of a building-block service provided by the Windows Azure platform: caching. The on-premise version of Community Portal relies on the default in-memory ASP.NET session state. The challenge with moving this default implementation to the cloud is that all web role instances are network load balanced, so storing the state in-proc on one web role instance does not help. We could get around this limitation by installing caching components such as memcached on each of our web role instances; however, it is our stated goal to minimize the number of moving parts. Fortunately, in what is by now a recurring theme of this post, we can leverage a Windows Azure platform service called the AppFabric Caching Service. The next set of steps illustrates how this can be accomplished:

  • Since the AppFabric Caching Service is in CTP, we first browse to the AppFabric Labs portal at portal.appfabriclabs.com.
  • We create a new cache service namespace as shown below.

     

     

    [Screenshot: creating a cache service namespace]

  • We are done creating the cache. All we need to do is grab the configuration settings from the portal (as shown below) and plug them into the web.config of the Community Portal web project.

    [Screenshot: cache configuration settings on the portal]

  • With this last step, our ASP.NET session state provider is now based on the AppFabric Caching Service. We can continue to use the familiar session syntax to interact with the session state; the resulting web.config section is sketched below.
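For reference, the web.config section ends up looking roughly like the following. This is a sketch based on the CTP-era cache session-state provider; the cache name and client settings come from the values copied off the portal:

    <sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
      <providers>
        <add name="AppFabricCacheSessionStoreProvider"
             type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
             cacheName="default" useBlobMode="true" dataCacheClientName="default" />
      </providers>
    </sessionState>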

Summary

We have taken an on-premise version of Community Portal built using currently shipping technologies and moved it to the Windows Azure Platform. In the process, we have eliminated many of the moving parts of the on-premise application. The Windows Azure-based version of Community Portal can now take advantage of the elasticity and scalability characteristics inherent in the platform.

I hope you found this description useful. The source code for the on-premise version can be found here. The source code for the “Azure-enabled” version can be found here. Also, if you would like to test drive Community Portal on Windows Azure, please visit http://ais.cloudapp.net (I will leave it up and running for a few days).
