Has Something Gone Wrong?

Generally, we choose Read-Access Geo-Redundant Azure Storage accounts (RA-GRS) because we can use them as part of our disaster recovery (DR) plan. And sometimes, we forget that the devil is in the details. DR plans are rarely tested, and they can cause headaches when they are. So let’s relieve some of those headaches.


The SLA for Storage defines Geo Replication Lag as follows:

“Geo Replication Lag” for GRS and RA-GRS Accounts is the time it takes for data stored in the Primary Region of the storage account to replicate to the Secondary Region of the storage account. Because GRS and RA-GRS Accounts are replicated asynchronously to the Secondary Region, data written to the Primary Region of the storage account will not be immediately available in the Secondary Region. Customers can query the Geo Replication Lag for a storage account, but Microsoft does not provide any guarantees as to the length of any Geo Replication Lag under this SLA.

The Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are the first items that come up in DR discussions. When we use RA-GRS, we control the RTO because we decide when to start reading from the secondary location. The RPO is a bit different, because it varies with physics and load. The best way to get the current Recovery Point (RP) is to query the Last Sync Time of the RA-GRS account in question. This post is all about getting the right information when we need it, because we need facts to make the right decisions.
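As a rough illustration, the current Recovery Point can be estimated as the gap between the current time and the reported Last Sync Time. The `EstimateRpo` helper below is hypothetical, not part of any SDK; it's just a sketch of the arithmetic.

```csharp
using System;

class RpoEstimate
{
    // Hypothetical helper: the approximate RPO is "now" minus the
    // Last Sync Time reported by the storage service. Anything written
    // after the Last Sync Time could be lost in a failover.
    static TimeSpan EstimateRpo(DateTimeOffset lastSyncTime, DateTimeOffset now)
    {
        return now - lastSyncTime;
    }

    static void Main()
    {
        // Sample values: a Last Sync Time a few minutes in the past.
        var lastSync = new DateTimeOffset(2016, 2, 21, 18, 34, 34, TimeSpan.Zero);
        var now      = new DateTimeOffset(2016, 2, 21, 18, 40, 0, TimeSpan.Zero);

        Console.WriteLine(EstimateRpo(lastSync, now)); // prints 00:05:26
    }
}
```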

Before we get technical, let’s take a moment to review RA-GRS.

What is Read-Access Geo-Redundant storage?

Read-access geo-redundant storage (RA-GRS) maximizes availability for your storage account by providing read-only access to the data in the secondary location, in addition to the replication across two regions provided by GRS. In the event that data becomes unavailable in the primary region, your application can read data from the secondary region.

When you enable read-only access to your data in the secondary region, your data is available on a secondary endpoint, in addition to the primary endpoint for your storage account. The secondary endpoint is similar to the primary endpoint, but appends the suffix -secondary to the account name. For example, if your primary endpoint for the Blob service is myaccount.blob.core.windows.net, then your secondary endpoint is myaccount-secondary.blob.core.windows.net. The access keys for your storage account are the same for both the primary and secondary endpoints.

What is the RA-GRS SLA?

Source: SLA for Storage – Last updated: May 2015

We guarantee that at least 99.99% of the time, we will successfully process requests to read data from Read Access-Geo Redundant Storage (RA-GRS) Accounts, provided that failed attempts to read data from the primary region are retried on the secondary region.

We guarantee that at least 99.9% of the time, we will successfully process requests to write data to Locally Redundant Storage (LRS), Zone Redundant Storage (ZRS), and Geo Redundant Storage (GRS) Accounts and Read Access-Geo Redundant Storage (RA-GRS) Accounts.

What is The Last Sync Time?

The Last Sync Time is a GMT date/time value, accurate to the second. All primary writes preceding this value are guaranteed to be available for read operations at the secondary. Primary writes after this point in time may or may not be available for reads.

The value may be empty if LastSyncTime is not available. This can happen if the replication status is bootstrap or unavailable.

Although geo-replication is continuously enabled, the LastSyncTime result may reflect a cached value from the service that is refreshed every few minutes.
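To make the guarantee concrete, here is a small hypothetical sketch (the helper name is mine, not part of the Azure Storage SDK): a write is only guaranteed to be readable from the secondary if it completed at or before the reported Last Sync Time.

```csharp
using System;

class SecondaryReadCheck
{
    // Hypothetical helper, not part of the Azure Storage SDK:
    // writes at or before the Last Sync Time are guaranteed readable
    // from the secondary; later writes may or may not have replicated yet.
    static bool IsGuaranteedOnSecondary(DateTimeOffset writeTime, DateTimeOffset lastSyncTime)
    {
        return writeTime <= lastSyncTime;
    }

    static void Main()
    {
        var lastSync = new DateTimeOffset(2016, 2, 21, 18, 34, 34, TimeSpan.Zero);

        // A write from before the sync point is guaranteed readable.
        Console.WriteLine(IsGuaranteedOnSecondary(lastSync.AddMinutes(-1), lastSync)); // True

        // A write after the sync point may not have replicated yet.
        Console.WriteLine(IsGuaranteedOnSecondary(lastSync.AddMinutes(1), lastSync));  // False
    }
}
```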

How do we Obtain The Last Sync Time?

Getting Started With Development

Cmdlets are created by inheriting from one of these classes:

  • Cmdlet: A simple cmdlet using a .NET class derived from the Cmdlet base
    class. This type of cmdlet does not depend on the Windows PowerShell runtime
    and can be called directly from a .NET language.
  • PSCmdlet: A more complex cmdlet based on a .NET class that derives from the
    PSCmdlet base class. This type of cmdlet depends on the Windows PowerShell
    runtime, and therefore executes within a runspace.

I decided to inherit from PSCmdlet, because I will not use these Cmdlets without Windows PowerShell. Once you’ve chosen a base class, the next challenge is finding the DLL that contains the System.Management.Automation namespace.

There are two ways to get this DLL. The first is to install the Windows SDK, which installs the DLL under Program Files (x86)\Reference Assemblies\Microsoft\WindowsPowerShell\3.0. The second option is to run the following PowerShell command, which copies the assembly to C:\.

Copy ([PSObject].Assembly.Location) C:\

From the Visual Studio flavor of your choice, create a Class Library project and add a reference to the WindowsAzure.Storage NuGet package. Then add a reference to the System.Management.Automation.dll that we extracted using the previous PowerShell command.

The following is the CmdLet implementation that uses the .NET Azure Storage APIs to get the service status for Blobs, Tables, and Queues. The collected service status objects are returned in a dictionary that can then be used in our PowerShell scripts.


using System;
using System.Collections.Generic;
using System.Management.Automation;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Queue;
using Microsoft.WindowsAzure.Storage.RetryPolicies;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;
using Microsoft.WindowsAzure.Storage.Table;

namespace StorageAccountStatusCmdlet
{
    [Cmdlet(VerbsCommon.Get, "AzureStorageAccountStatus")]
    public class GetAzureStorageAccountStatus : PSCmdlet
    {
        [Parameter(Mandatory = true,
            ValueFromPipeline = true,
            ValueFromPipelineByPropertyName = true,
            HelpMessage = "Storage Account Name")]
        public string Name { get; set; }

        [Parameter(Mandatory = true,
            ValueFromPipeline = true,
            ValueFromPipelineByPropertyName = true,
            HelpMessage = "Storage Account Key")]
        public string Key { get; set; }

        protected override void ProcessRecord()
        {
            var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;"
                                                    + "AccountName=" + Name
                                                    + ";AccountKey=" + Key);

            var stats = new Dictionary<string, GeoReplicationStats>();

            try
            {
                var cloudBlobClient = account.CreateCloudBlobClient();
                var blobRequestOptions = new BlobRequestOptions
                {
                    LocationMode = LocationMode.SecondaryOnly
                };

                var blobServiceStats = cloudBlobClient.GetServiceStats(blobRequestOptions);
                stats.Add("Blobs", blobServiceStats.GeoReplication);
            }
            catch (Exception exception)
            {
                // Swallow the error; the Blobs entry is simply omitted from the results.
            }

            try
            {
                var cloudTableClient = account.CreateCloudTableClient();
                var tableRequestOptions = new TableRequestOptions
                {
                    LocationMode = LocationMode.SecondaryOnly
                };

                var tableServiceStats = cloudTableClient.GetServiceStats(tableRequestOptions);
                stats.Add("Tables", tableServiceStats.GeoReplication);
            }
            catch (Exception exception)
            {
                // Swallow the error; the Tables entry is simply omitted from the results.
            }

            try
            {
                var queueClient = account.CreateCloudQueueClient();
                var queueRequestOptions = new QueueRequestOptions
                {
                    LocationMode = LocationMode.SecondaryOnly
                };

                var queueServiceStats = queueClient.GetServiceStats(queueRequestOptions);
                stats.Add("Queues", queueServiceStats.GeoReplication);
            }
            catch (Exception exception)
            {
                // Swallow the error; the Queues entry is simply omitted from the results.
            }

            // Return the dictionary to the PowerShell pipeline.
            WriteObject(stats);
        }
    }
}
At this point, we can build the project and find the artifacts in the project’s build folder. To use the newly created CmdLet, import it using the Import-Module CmdLet. The following is a sample of how I used it in my personal tests.

Import-Module 'C:\StorageAccountStatus.dll' -Force

$status = Get-AzureStorageAccountStatus -Name 'briseboisms' `
                                        -Key 'NUA/tzAJtAW4OWhtUyLyO8ffYUfnFgTHf5eH...7g=='

"Blobs Status: ( " + $status.Blobs.Status + " ) Last Sync Time: ( " + $status.Blobs.LastSyncTime + " )"

"Tables Status: ( " + $status.Tables.Status + " ) Last Sync Time: ( " + $status.Tables.LastSyncTime + " )"

"Queues Status: ( " + $status.Queues.Status + " ) Last Sync Time: ( " + $status.Queues.LastSyncTime + " )"

Blobs Status: ( Live ) Last Sync Time: ( 02/21/2016 18:34:34 +00:00 )
Tables Status: ( Live ) Last Sync Time: ( 02/21/2016 18:35:04 +00:00 )
Queues Status: (  ) Last Sync Time: (  )

The Status property reports the status of the secondary location. Possible values are:

  • live: Indicates that the secondary location is active and operational.
  • bootstrap: Indicates initial synchronization from the primary location to the secondary location is in progress. This typically occurs when replication is first enabled.
  • unavailable: Indicates that the secondary location is temporarily unavailable.


