Higher Order Software Development

January 14, 2010

I cannot claim it to be an act of volition, but over the last three years I have found myself involved in building software designed to allow business users to create the programs they need. I now have a somewhat presumptuous name for this approach – Higher Order Software Development. This approach is similar to the composite application style, wherein applications are assembled using existing software assets. The difference, however, is that composite applications such as mashups are about assembling assets that have been built independently. Higher order software development is about software building blocks, designed from the ground up, to allow business users to “develop”.

In the last few years, many approaches including BPM (business process management) have espoused empowering the end-users. Even though BPM systems have been successful in improving process agility, they have fallen short of making software development end-user ready. This is mainly because BPM tools are designed to address a broad set of application scenarios, and in most instances, BPM offerings represent a collection of tools (workflow designer, rules engine, modeling, and so on). This has inadvertently raised the level of complexity, making it harder for end-users to participate. Higher order software development, on the other hand, has a much narrower focus – it is about solving a specific business problem within a given business domain. This approach has been made possible by the rise in the level of abstraction available within the development platform itself. As we will discuss shortly, platforms such as SharePoint have served as a catalyst for higher order software development.

Why is this important? First, business users know the requirements best. Unfortunately, most business users are not very good at communicating them – hence the famous phrase “I will not know what I want until I see it working”. Furthermore, a portion of the requirements is invariably lost in translation as it is conveyed to development teams. Second, rather than learn new interfaces, business users want to continue to work with the tools they are already familiar with. What could be better than enabling business users to create programs using familiar tools like Microsoft Office? Third, IT departments, already backed up supporting existing operational systems, seldom have the resources to take on new application development projects. Thus, empowering business users to build their own software may be the only real way to scale and enable the business to adapt and react quickly to external market forces. Since business users are directly involved in building the software, the cost and scope of development can be managed more efficiently. Business users are in a much better position to respond to questions such as “Do we really need this feature? Is the customization really needed, or will the out-of-the-box functionality do?” Last, but not least, unlocking the true potential of IT and competing effectively in the next generation of business cycles will require a greater ability to adjust and change software systems cost-effectively. By evolving how programs are put together, businesses can implement a process of continuous improvement over time.

In the remainder of this post, I will describe four concrete examples of applications to which we have applied this paradigm.

Example #1: Custom Calculation Engine in the Professional Services Industry

A common requirement in this domain is to implement calculations and reports that adhere to Financial Accounting Standards Board (FASB) standards. These reports are often large and complex. The calculations in the reports are specific to a geographical region, so a multi-national company needs to implement different versions of them. Furthermore, over time these calculations have to be adjusted to comply with changing laws. Traditionally these calculations have been implemented using custom code and, as a result, suffer from the challenges outlined above, including the high cost of development and maintenance, requirements being lost in translation, the lack of traceability, and the lack of a robust mechanism for quickly changing a calculation in response to a change in a standard. In a nutshell, the analysts needed a flexible calculation engine that makes it easy to develop and maintain the calculations, while providing the necessary robustness and scalability.

Here is a quick overview of the Excel Services-based solution we developed. For an introduction to Excel Services, please refer to the MSDN Magazine article – http://msdn.microsoft.com/en-us/magazine/cc163374.aspx. We used XSDs to capture all the input and output data elements needed for implementing a given calculation. Using a custom Excel pre-compiler, we translated the XSD into named ranges. The generated template workbook has three sheets – one each for input, output, and calculation. Analysts could then use the generated template workbooks to develop the algorithms. As long as they worked within the contract – as defined by the input, output, and calculation sheets – they could use any Excel functions. Once a workbook was developed and tested, it was placed into an Excel Services trusted location for execution. To support high scalability, we used a cluster of Excel Services nodes. The following figure shows the basic architecture:

[Figure: Basic architecture of the Excel Services-based calculation engine]

Who built what

Development Team: built a framework based on Excel Services
Business Users: developed the calculation logic using Excel

As you can see from the above example, we built the software building blocks using Excel Services, which in turn enabled the analysts to implement the calculations as needed.
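To make the pre-compiler step a bit more concrete, here is a minimal sketch of how an XSD could be translated into named-range definitions for the input and output sheets of a template workbook. The schema file name, the element naming convention, and the cell layout are all assumptions made for illustration; the actual pre-compiler generated real Excel workbooks rather than a simple name-to-cell mapping.

```python
# Minimal sketch (illustrative only): derive named-range definitions for the
# Input and Output sheets of a template workbook from the top-level elements
# declared in an XSD. Schema name, naming convention, and cell layout are assumed.
import xml.etree.ElementTree as ET

XSD_NS = {"xs": "http://www.w3.org/2001/XMLSchema"}

def named_ranges_from_xsd(xsd_path):
    """Return {range_name: (sheet, cell)} for each top-level xs:element."""
    root = ET.parse(xsd_path).getroot()
    ranges = {}
    next_row = {"Input": 1, "Output": 1}
    for element in root.findall("xs:element", XSD_NS):
        name = element.get("name")
        # Assumed convention: elements prefixed with "Out_" belong to the
        # Output sheet; everything else is an input to the calculation.
        sheet = "Output" if name.startswith("Out_") else "Input"
        ranges[name] = (sheet, f"B{next_row[sheet]}")  # column A holds the label
        next_row[sheet] += 1
    return ranges

if __name__ == "__main__":
    # e.g. {'GrossRevenue': ('Input', 'B1'), 'Out_Fasb52Adjustment': ('Output', 'B1')}
    print(named_ranges_from_xsd("calculation_contract.xsd"))
```

In the real solution, these definitions are written into the generated workbook as named ranges, so analysts reference friendly names in their formulas instead of raw cell addresses, and the hosting framework reads inputs and outputs through the same names.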

Example #2: Incident Management System for Law Enforcement Agencies

Law enforcement agencies are looking for ways to improve the processes surrounding incident reporting and management. Managing an incident entails gathering a variety of information to aid in the investigation, including documentation about the case (date, location, type of incident, summaries from eyewitnesses, photographs, and evidence), as well as information about contacts and sources who can provide more information. In addition to providing an intuitive repository for this information, the system should enable agency-wide collaboration among the law enforcement personnel assigned to work on the cases. This requires the ability to assign and track tasks, initiate process steps such as review and approval, provide access to a group calendar and messaging, record the status of the case (open, active, closed, and archived), and generate centralized reports (such as the status of all cases or assignments). Information about cases also needs to be searchable and shareable with extra-agency organizations as necessary.

In addition to the high-level functional requirements, incident management systems need to be highly secure and able to protect confidential information. This typically implies a number of requirements to control access to information based on role, organization, and level of data sensitivity. These systems also typically need to support auditing and policy compliance monitoring, tracking of email correspondence, e-discovery requests, and enforcement of agency governance rules.

The above requirements highlight the need to store a large and variable amount of information about the case in one “container.” The different document types that make up the case can have unique metadata and lifecycles associated with them. Law enforcement personnel working on the case need the flexibility to add notes to cases, include ad hoc documents, and store different versions of a document. They also need to be able to initiate a number of workflows, such as approval and disposition, for individual documents or a group of documents. These workflows can be pre-defined or dynamic in nature.

We decided to build this system on the SharePoint platform. We provision the appropriate container (site collection, site, or document library) for each case. The provisioned container is based on a pre-defined blueprint we built. The blueprint includes document types, folder structure, web parts, and workflows. Users are then able to customize the provisioned container as needed, including changing the folder structure, adding new ad hoc workflows, and installing additional web parts for reports and data visualization. A nice side benefit of using a SharePoint-based container is the built-in support for archival and restoration of cases.
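To give a rough sense of what such a blueprint captures, here is a minimal sketch of a case-container blueprint expressed as a plain data structure, along with a provisioning stub. The field names, document types, and folder names are assumptions made for illustration; the actual system used SharePoint site definitions, content types, and features rather than Python.

```python
# Illustrative sketch only: a case-container "blueprint" and a provisioning stub.
# The real system was built from SharePoint site definitions and features;
# the names and fields below are assumptions made for this example.
CASE_BLUEPRINT = {
    "document_types": {
        "IncidentReport": ["Date", "Location", "IncidentType"],
        "WitnessStatement": ["WitnessName", "DateTaken"],
        "EvidenceItem": ["EvidenceTag", "ChainOfCustody"],
    },
    "folders": ["Reports", "Statements", "Evidence", "Correspondence"],
    "web_parts": ["CaseStatus", "TaskList", "GroupCalendar"],
    "workflows": ["ReviewAndApprove", "Disposition"],
}

def provision_case_container(case_id, blueprint=CASE_BLUEPRINT):
    """Create an in-memory representation of a newly provisioned case container."""
    return {
        "case_id": case_id,
        "status": "open",
        "folders": {name: [] for name in blueprint["folders"]},
        "content_types": dict(blueprint["document_types"]),
        "web_parts": list(blueprint["web_parts"]),
        "workflows": list(blueprint["workflows"]),
    }

container = provision_case_container("CASE-2010-0042")
print(sorted(container["folders"]))  # ['Correspondence', 'Evidence', 'Reports', 'Statements']
```

The point of the blueprint is that the development team defines it once, and every case container provisioned from it starts with a consistent structure that business users can then customize.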

[Figure: Snapshot of the SharePoint-based Incident Reporting System]

Who built what

Development Team: built SharePoint artifacts, including site definitions, web parts, and VS.NET-based workflows
Business Users: provisioned and customized the site, including content types, ad hoc workflows, custom lists, and views

As you can see from the above example, we built the software building blocks on the SharePoint platform, which in turn enabled law enforcement officials to manage incidents effectively.

Example #3: Data Analysis Tool for the Telecommunication Industry

Network optimization engineers often need to convert vast amounts of raw network performance data into useful data visualizations and data products that can assist them in identifying network performance bottlenecks. The goal is to allow non-programmers (in this instance, network engineers) to visually define complex multi-step algorithms that specify the steps and control dependencies for analysis of the raw data.

The key requirements are the flexibility and ease of use network engineers need in order to define custom data processing workflow steps. A solution built entirely by IT would be too expensive and, at the same time, unable to provide the flexibility needed in this instance.

We decided to build the solution using Windows Workflow Foundation (WF). The WF designer provides network engineers with a useful and flexible way to author the data processing algorithms. To make it easier for network engineers to author the workflows, we developed a set of custom domain-specific activities. A lot of attention was devoted to making the workflow authoring experience as simple as possible.
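To give a flavor of how such domain-specific activities compose, here is a minimal sketch of a data processing pipeline built from a few activity-like functions. The activity names, the metric, and the simple sequential execution are assumptions made for illustration; the actual solution implemented these as custom WF activities arranged visually in the designer and executed by the workflow runtime.

```python
# Illustrative sketch: domain-specific "activities" composed into a pipeline.
# The real solution implemented these as custom Windows Workflow Foundation
# activities; the names, metric, and sequential runner below are assumptions.
from typing import Iterable

Record = dict  # one row of raw network performance data

def load_drive_test_data(records: Iterable[Record]) -> list:
    """Activity: ingest raw performance samples."""
    return list(records)

def filter_by_threshold(records: list, *, metric: str, threshold: float) -> list:
    """Activity: keep only samples where the metric falls below a threshold."""
    return [r for r in records if r.get(metric, float("inf")) < threshold]

def aggregate_by_cell(records: list, *, metric: str) -> dict:
    """Activity: average the metric per cell site."""
    totals, counts = {}, {}
    for r in records:
        cell = r["cell_id"]
        totals[cell] = totals.get(cell, 0.0) + r[metric]
        counts[cell] = counts.get(cell, 0) + 1
    return {cell: totals[cell] / counts[cell] for cell in totals}

# The "workflow" the engineer assembles visually is just an ordered set of steps.
raw = [{"cell_id": "A1", "rssi": -95.0}, {"cell_id": "A1", "rssi": -101.0},
       {"cell_id": "B7", "rssi": -88.0}]
data = load_drive_test_data(raw)
weak = filter_by_threshold(data, metric="rssi", threshold=-90.0)
print(aggregate_by_cell(weak, metric="rssi"))  # {'A1': -98.0}
```

In the actual solution, each of these steps is a WF activity the engineer drags onto the design surface, and the hosting framework takes care of running the resulting workflow.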

Once the workflows are defined, they are executed asynchronously and results are made available for further analysis. The following diagram depicts the user interface for authoring the workflows.

[Figure: Snapshot of the Workflow Foundation-based designer used by network engineers]

Who built what

Development Team: built a framework to host WF programs, developed domain-specific WF activities, and customized the WF designer to make it business-user ready
Business Users: developed custom workflows

As you can see from the above example, by building a set of building blocks in the form of custom workflow activities, we were able to allow network engineers to develop the programs they needed to analyze vast amounts of raw data.

Example #4: Management of Policy Data in the Insurance Domain

A common requirement in the property and casualty (P&C) insurance industry is to store and retrieve “snapshots” of a customer policy or contract, such as an automobile policy or a homeowners policy, as it existed at any given point in time in the past. Clearly, there are a number of ways to implement this functionality. However, a key requirement is to be able to retrieve the policy snapshot very quickly – typically a sub-second response time is expected. Given the millions of rows of historical data that are common for such systems, it would be hard to retrieve a specific version of the policy data dynamically. An alternative approach is to pre-generate the snapshots and trade off disk space in favor of response time – in other words, accept the additional cost of storing the entire policy snapshot for every change to the policy. In addition to the response time requirement, there are two other distinct requirements related to the historical data. The first is to generate reports for verification of compliance with various state laws. As you can imagine, compliance reports tend to vary quite a bit based on the laws of each state (within the US). The second is to allow end-users to mine the historical data for interesting patterns, for instance: Why is there a higher rate of customer attrition in one county than in others? What types of changes are more common in a given state?

The key aspect of this scenario is that while there is a need to support high-performing historical queries, there is also a competing need for flexibility in reporting and data mining. In short, the system needs to allow self-service business intelligence (BI) for end-users.

Let us now take a look at the solution we decided to build. We used a SQL Server Analysis Services (SSAS) OLAP (online analytical processing) cube as an application building block. To store the policy data snapshots, we populate the OLAP cube in real time. The OLAP model we developed is depicted in the diagram below. The dimensions (dimensions are reference information about the data) are the obvious ones, including customer, geography, and time. The interesting aspect of this model is the fact tables (facts are generally the numeric aspects of the transaction data). Since we are dealing with events (a change in policy address, for example) as opposed to a classic business transaction that involves numbers, we ended up creating a “factless” fact table that captures a log of all events on a policy. Additionally, to be able to retrieve the policy as of a certain date quickly, we maintain another “factless” table that captures the snapshot of a given policy following any change to it.

[Figure: OLAP model with factless event-log and policy-snapshot tables]
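To illustrate the idea behind the two factless tables, here is a minimal sketch that records every policy change twice: once as an event-log row and once as a full snapshot keyed by effective date, so that a point-in-time query becomes a simple lookup rather than a reconstruction over millions of historical rows. The table and column names are assumptions made for illustration; in the actual solution these structures live in the relational store and the SSAS cube rather than in Python dictionaries.

```python
# Illustrative sketch: log each policy change as an event row and also store a
# full point-in-time snapshot, trading disk space for fast "as of" retrieval.
# Names are assumptions; the real solution used SQL Server and SSAS.
from datetime import date

policy_event_log = []   # "factless" fact table: one row per change event
policy_snapshots = {}   # (policy_id, effective_date) -> full policy snapshot

def apply_policy_change(policy_id, effective_date, current_policy, changes):
    """Apply a change, log the event, and store the resulting snapshot."""
    policy_event_log.append({
        "policy_id": policy_id,
        "effective_date": effective_date,
        "changed_fields": sorted(changes),
    })
    new_policy = {**current_policy, **changes}
    policy_snapshots[(policy_id, effective_date)] = new_policy
    return new_policy

def policy_as_of(policy_id, as_of):
    """Return the most recent snapshot of the policy on or before `as_of`."""
    candidates = [(eff, snap) for (pid, eff), snap in policy_snapshots.items()
                  if pid == policy_id and eff <= as_of]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[0])[1]

p = {"policy_id": "AUTO-123", "address": "12 Elm St", "premium": 900}
p = apply_policy_change("AUTO-123", date(2009, 3, 1), p, {"address": "98 Oak Ave"})
p = apply_policy_change("AUTO-123", date(2009, 9, 1), p, {"premium": 950})
print(policy_as_of("AUTO-123", date(2009, 6, 15)))  # snapshot with the Oak Ave address
```

The event log supports the compliance reporting and pattern-mining requirements, while the snapshot table is what makes the sub-second point-in-time retrieval possible.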

Once the OLAP cube is in place, analysts are able to perform queries using a tool such as Excel, as shown in the diagram below.

[Figure: Analysts querying the OLAP cube using Excel]

Who built what

Development Team: organized the data into an OLAP structure and developed a scheme to populate the OLAP structure in real time
Business Users: developed queries to respond to historization-related requests and built ad hoc reports to meet compliance and regulatory needs

Readers who are familiar with BI (business intelligence) systems are probably wondering how the solution described above is different from a traditional BI solution. Furthermore, why do I see this as an example of higher order software development? While this is indeed a BI-based solution, it differs from a traditional BI data warehouse / data mart in several important ways. First, the dimensional model, with its factless and snapshot tables, is different from a traditional BI data warehouse. In this instance, the data model has been designed in a manner that promotes “assembly” by business users. Second, the OLAP cube is very much part of the application architecture, in the sense that queries from the UI (user interface) layer are serviced by the OLAP cube in near real-time. This too is unlike a traditional BI system, which is usually a distinct system designed to serve as a decision support system. Finally, by combining a traditional relational database with the OLAP cube, we are able to offer the notion of a composable data services layer.

As you can see from the above example, the OLAP-based building blocks built by the development team enabled business users to perform dynamic queries to respond to historization requests from clients and to build ad hoc reports to meet compliance and regulatory needs.

In summary, in all of the above examples we built the software tools that allowed business users to create the programs they need. This approach has a number of benefits, including a reduced cost of development and increased agility to adjust and change software systems.

Responses to “Higher Order Software Development”



  1. Eerm Says:

    So…uh, you used SharePoint? Congratulations. This is hardly “higher order software development”. Go spend some time with Charles Simonyi before you go claim to be working in such a fashion.

  2. skylos Says:

    I’m reading System Design from Provably Correct Constructs: The Beginnings of True Software Engineering by James Martin, Prentice Hall, published in 1984 – which discusses exactly what you’re talking about. Not new, but yes, very valuable.

  3. numerator Says:

    Same comment as Kyle H: I worked for Margaret Hamilton at HOS from 1981 to 1984, should you have questions about it 🙂


  4. Dwayne Knirk Says:

    A note about your presumptuous name. The mathematical theory and a tool set for your method were published in 1976 by Margaret H. Hamilton. She was CEO of Higher Order Software, Inc. and was responsible for the first CASE tool product for it, “USE.IT” (HOS, 1980-1985). Because of its theoretical foundations, the approach could assert “correct by construction.” A summary is available in a paper she published at the Conference on Systems Engineering Research, 2007, “Universal Systems Language for Preventative Systems Engineering.”

