To ensure that cached objects on multiple web farm servers reflect the most recent changes (i.e. do not become stale), Adxstudio Portals can be configured to distribute cache invalidation notifications across a local network (web farm). This article uses the example of Azure load balancing between virtual machines to describe the process of setting up distributed cache invalidation.

This documentation applies to Adxstudio Portals 7.0.0007 and later versions.
For cache invalidation settings in a Windows Azure Cloud Services environment, refer to this article instead.

Prerequisites

A virtual network configured with the Azure load balancer should be set up and successfully hosting an Adxstudio Portals website. The site is hosted through an external endpoint that is load balanced across multiple IIS web servers on multiple virtual machines. The website on each web server specifies a common domain name (host header) site binding (e.g. portal.contoso.com) over port 80 and/or port 443.

Configuring IIS

The distributed cache invalidation component of Adxstudio Portals requires each website in the virtual network to be addressable through a unique IP address. For each load balanced IIS website, add an IP address-based site binding that is unique to that website. All websites can specify the same port number (avoiding the well-known ports). These IP address bindings define the internal endpoints used to distribute cache invalidation messages across the network.

For example, the virtual network may have the following site bindings:

Virtual Machine #1

Type   Host Name            Port    IP Address
http   portal.contoso.com   80      *
http                        10000   10.0.1.1

Virtual Machine #2

Type   Host Name            Port    IP Address
http   portal.contoso.com   80      *
http                        10000   10.0.1.2

Virtual Machine #3

Type   Host Name            Port    IP Address
http   portal.contoso.com   80      *
http                        10000   10.0.1.3

The choice of IP addresses and port number is arbitrary, so use values that suit the specific network environment. The port 80 binding represents the common external endpoint.
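
For reference, on Virtual Machine #1 the resulting bindings could look like the following applicationHost.config excerpt. This is only an illustrative sketch; the site name MasterPortal is a placeholder for whatever the portal site is named in IIS, and bindingInformation uses the standard "IP address:port:host header" format.

<!-- Illustrative site bindings for Virtual Machine #1 (site name is a placeholder). -->
<site name="MasterPortal" id="1">
  <bindings>
    <!-- Common external endpoint, balanced by the Azure load balancer. -->
    <binding protocol="http" bindingInformation="*:80:portal.contoso.com" />
    <!-- Internal endpoint used for distributed cache invalidation. -->
    <binding protocol="http" bindingInformation="10.0.1.1:10000:" />
  </bindings>
</site>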

Configuring your CRM organization for cache invalidation using Web Notification URLs

First, configure your CRM organization to send cache invalidation notifications to your Adxstudio Portals website; the steps are described in the documentation for Web Notification URLs. In this example, only a single web notification entry is needed, specifying the http://portal.contoso.com/cache.axd URL. The distributed cache invalidation component takes care of notifying the load balanced websites in the virtual network.
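
For reference, the single notification record in this example might look like the following (the exact entity and field layout are described in the Web Notification URLs documentation; the record name shown here is arbitrary):

Name   Portal Cache Invalidation
URL    http://portal.contoso.com/cache.axd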

Configure the application to use the remote endpoint service cache provider

This custom cache service provider adds the extended functionality needed to publish cache invalidation messages to the internal endpoints of the load balanced websites. Update the portal web.config to include the following elements.

<!-- This is an example configuration snippet, with most configuration sections omitted. -->
<configuration>
  <configSections>
    <section name="microsoft.xrm.client" type="Microsoft.Xrm.Client.Configuration.CrmSection, Microsoft.Xrm.Client"/>
  </configSections>
  <system.web>
    <httpHandlers>
      <add verb="*" path="Cache.axd" type="Adxstudio.Xrm.Web.Handlers.CacheInvalidationHandler, Adxstudio.Xrm"/>
    </httpHandlers>
  </system.web>
  <system.webServer>
    <handlers>
      <add name="CacheInvalidation" verb="*" path="Cache.axd" preCondition="integratedMode" type="Adxstudio.Xrm.Web.Handlers.CacheInvalidationHandler, Adxstudio.Xrm"/>
    </handlers>
  </system.webServer>
  <microsoft.xrm.client>
    <serviceCache default="Xrm">
      <add name="Xrm" type="Adxstudio.Xrm.Services.RemoteEndpointOrganizationServiceCache, Adxstudio.Xrm" internalEndpointName="XrmEndpoint" innerServiceCacheName="Inner" serviceDefinitionPath="~/App_Data/servicedefinition.json"/>
      <add name="Inner" type="Adxstudio.Xrm.Services.ContentMapOrganizationServiceCache, Adxstudio.Xrm"/>
    </serviceCache>
  </microsoft.xrm.client>
</configuration>
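
Note that the Cache.axd handler is registered twice: the system.web/httpHandlers entry applies when the application runs under the classic pipeline (or the development web server), while the system.webServer/handlers entry (with preCondition="integratedMode") applies under the IIS integrated pipeline. In the serviceCache section, the default "Xrm" provider publishes invalidation messages to the internal endpoints and wraps the "Inner" provider named by the innerServiceCacheName attribute.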

The value of the internalEndpointName attribute is used in the next step of creating the service definition. The value of the serviceDefinitionPath attribute specifies the location of the text file containing the service definition.

Create the service definition configuration file

The service definition is a JSON text file that describes the load balanced virtual network. The portal application uses the service definition to determine how to distribute the cache invalidation message. The location to create this file is specified by the serviceDefinitionPath attribute in the web.config.

{
  Roles: [
    {
      Name: "MasterPortal",
      IsCurrent: true,
      Instances: [
        {
          InstanceEndpoints: {
            "XrmEndpoint": {
              Protocol: "http",
              IPEndPoint: { Address: "10.0.1.1", Port: 10000 }
            }
          }
        },
        {
          InstanceEndpoints: {
            "XrmEndpoint": {
              Protocol: "http",
              IPEndPoint: { Address: "10.0.1.2", Port: 10000 }
            }
          }
        },
        {
          InstanceEndpoints: {
            "XrmEndpoint": {
              Protocol: "http",
              IPEndPoint: { Address: "10.0.1.3", Port: 10000 }
            }
          }
        }
      ]
    }
  ]
}

The first section of the configuration describes the top-level virtual network environment and can be copied exactly for each network (i.e. data center).

{
  Roles: [
    {
      Name: "MasterPortal",
      IsCurrent: true,
      Instances: [
      ]
    }
  ]
}

Inside the Instances array are the individual virtual machine definitions. Repeat the instance definition for each virtual machine in the network (separating each instance block with a comma).

{
  InstanceEndpoints: {
    "XrmEndpoint": {
      Protocol: "http",
      IPEndPoint: { Address: "[IP Address]", Port: [Port #] }
    }
  }
}

Replace the Address and Port values with the corresponding values defined in the IIS site bindings. Also note that the XrmEndpoint name in the instance definition matches the value defined by the internalEndpointName attribute in the web.config.

The service definition format is modeled after similar settings in Azure Cloud Services. This allows other aspects of a load balanced network to be described, but it also means that some settings are extraneous to cache invalidation itself.

At this point, when a change operation occurs on one virtual machine, invalidation messages are published to all of the other load balanced virtual machines that have the specified internal endpoint defined. Each virtual machine then invalidates its local cache item(s) according to the message.
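
As a quick sanity check that each web server can reach the internal endpoints of its peers (for example, to rule out firewall or network security group rules blocking port 10000), a small helper script can be run on each virtual machine. The script below is only an illustrative sketch and is not part of Adxstudio Portals; it assumes the servicedefinition.json file shown above and simply attempts a TCP connection to each Address/Port pair found in it.

#!/usr/bin/env python
"""Hypothetical helper: checks TCP reachability of the internal endpoints
listed in servicedefinition.json. Not part of Adxstudio Portals."""

import re
import socket
import sys

def read_endpoints(path):
    """Extract (address, port) pairs from the service definition file.

    A simple regex is used instead of a JSON parser because the file may use
    the relaxed syntax shown in this article (unquoted property names).
    """
    text = open(path).read()
    pattern = r'"?Address"?\s*:\s*"([^"]+)"\s*,\s*"?Port"?\s*:\s*(\d+)'
    return [(addr, int(port)) for addr, port in re.findall(pattern, text)]

def check(address, port, timeout=3):
    """Return True if a TCP connection to address:port succeeds."""
    try:
        socket.create_connection((address, port), timeout=timeout).close()
        return True
    except OSError:
        return False

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "servicedefinition.json"
    for address, port in read_endpoints(path):
        status = "reachable" if check(address, port) else "UNREACHABLE"
        print("%s:%d %s" % (address, port, status))

A successful connection only confirms that the internal endpoint port is reachable from that machine; the portal itself sends the actual invalidation messages to these endpoints when cache items change.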