Archives For TableQuery


Using Time-based Partition Keys in #Azure Table Storage

In a previous post about storing Azure Storage Table entities in descending order, I combined a time-based key with a GUID to create a unique key. This is practical when you need combined keys for the Row Key or Partition Key, but it’s not practical for logs.

A better solution for logs is to generate a Partition Key based on time. This allows you to query for logs by time period. There are many ways to generate time-based partitions, so I will cover the two that I use the most. Continue Reading…
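The full post covers the two strategies in detail; purely as a rough illustration of the idea, here is a minimal sketch of a time-based Partition Key combined with a descending RowKey. The hourly bucket format and helper names are assumptions for this example, not necessarily the ones used in the post.

// Sketch: time-based Partition Key plus a descending, unique RowKey.
// The hour-bucket format and names below are illustrative assumptions.
using System;

public static class TimeBasedKeys
{
    // e.g. "2013061318" groups every entity written during that hour,
    // so an hour of logs can be fetched with a single Partition Key
    public static string HourlyPartitionKey(DateTime utcNow)
    {
        return utcNow.ToString("yyyyMMddHH");
    }

    // Reversed ticks keep the newest entities first; the GUID keeps keys unique
    public static string DescendingRowKey(DateTime utcNow)
    {
        return string.Format("{0:D19}-{1:N}",
            DateTime.MaxValue.Ticks - utcNow.Ticks,
            Guid.NewGuid());
    }
}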


I found the problem of controlling access to specific resources, for a certain amount of time and for a certain number of downloads, to be quite interesting. So I decided to design a solution that would take advantage of the services made available by the Windows Azure platform.

The Challenge

Create single-use URIs that expire after 15 minutes.

Why?

Companies that sell digital products often try to limit the number of downloads per purchase. Consequently, discouraging customers from sharing their download links becomes a priority. Creating public URIs constrained by a time to live and by a limited number of downloads can help accomplish this goal.

The Concept

Building on the Valet Key Pattern explored in “Keep Your Privates Private!”, I used Shared Access Signatures to create a time to live for all public URIs. Then, to control access to the actual resources, I created a REST service using Web API, which tracks individual access to each URI using the Windows Azure Table Storage Service.
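As a minimal sketch of the Shared Access Signature part, assuming the Microsoft.WindowsAzure.Storage client library (the class and method names below are illustrative placeholders, not the solution’s actual code), a read-only URI with a 15-minute time to live can be produced like this:

// Sketch: produce a read-only blob URI that stops working after 15 minutes.
// Assumes the Microsoft.WindowsAzure.Storage NuGet package; names are illustrative.
using System;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ValetKey
{
    public static string MakeTemporaryUri(CloudBlockBlob blob)
    {
        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
        };

        // The SAS token is appended to the blob's public URI
        var sasToken = blob.GetSharedAccessSignature(policy);
        return blob.Uri + sasToken;
    }
}

Tracking the number of downloads per URI is the separate concern handled by the Web API service, which records each access in Table Storage.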

Continue Reading…


Building on top of the code from my post “Windows Azure Blob Storage Service – Migrating Blobs Between Accounts”, I added logic so that the Windows Azure Storage Account migration process recreates all the Tables from the source account in the target account. It then downloads the entities from the source Tables using segmented table queries and inserts (or replaces) them into the target Tables.

The process is surprisingly fast compared to the migration of blob containers. When we copy a blob from one container to another, a command is queued and it can take some time to complete. Migrating Tables, on the other hand, requires significantly more bandwidth, because we need to download the data from the source Tables and upload it into the target Tables located in the target Windows Azure Storage Account.

Entities are downloaded 1000 at a time. Then they are fed into my TableStorageWriter, which regroups the entities by Partition Key and inserts them in batches of 100.
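A minimal sketch of this download/insert loop, assuming the Microsoft.WindowsAzure.Storage table client (the grouping and batching shown here illustrate the general technique, not the exact TableStorageWriter implementation):

// Sketch: copy all entities from a source table to a target table.
// Downloads segments of up to 1,000 entities, then inserts (or replaces)
// them in batches of at most 100 per Partition Key, as the Table service requires.
// Assumes the Microsoft.WindowsAzure.Storage NuGet package.
using System.Linq;
using Microsoft.WindowsAzure.Storage.Table;

public static class TableMigration
{
    public static void CopyEntities(CloudTable source, CloudTable target)
    {
        var query = new TableQuery<DynamicTableEntity>();
        TableContinuationToken token = null;

        do
        {
            var segment = source.ExecuteQuerySegmented(query, token);
            token = segment.ContinuationToken;

            // Batches may only contain entities that share a Partition Key
            foreach (var group in segment.Results.GroupBy(e => e.PartitionKey))
            {
                foreach (var chunk in group.Select((e, i) => new { e, i })
                                           .GroupBy(x => x.i / 100))
                {
                    var batch = new TableBatchOperation();
                    foreach (var item in chunk)
                        batch.InsertOrReplace(item.e);

                    target.ExecuteBatch(batch);
                }
            }
        } while (token != null);
    }
}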

Continue Reading…



Windows Azure Table Storage Service can be queried using various approaches. I use the Windows Azure Storage NuGet package and reusable queries.

A query on the Windows Azure Table Storage Service usually completes in 200 milliseconds. On busy systems this is an eternity! To help my services perform better, and to reduce operating costs, I built the TableStorageReader to execute queries with a cache, reducing the number of transactions and the latencies created by communications over the network.

The reader executes reusable queries. Reusable queries greatly simplify an application’s design by encapsulating query logic in named concepts. They are like Stored Procedures in SQL Server: team members can discover functionality by reading the file names in the query folders. More details about reusable queries can be found in the reference section at the bottom of this post.

The TableStorageReader is a fluent API that allows you to create tables, query tables and apply caching over queries.
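The fluent API itself is shown in the full post; purely to illustrate the caching idea (the class and method names below are hypothetical, not the TableStorageReader’s actual surface), a query result can be held in memory for a short window to avoid repeated transactions:

// Illustration of caching a table query result to cut transactions and
// network latency. Hypothetical names; not the actual TableStorageReader API.
using System;
using System.Collections.Generic;
using System.Runtime.Caching;
using Microsoft.WindowsAzure.Storage.Table;

public class CachedTableQuery
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public IList<DynamicTableEntity> Execute(CloudTable table,
                                             TableQuery<DynamicTableEntity> query,
                                             string cacheKey,
                                             TimeSpan timeToLive)
    {
        var cached = Cache.Get(cacheKey) as IList<DynamicTableEntity>;
        if (cached != null)
            return cached;

        // Cache miss: hit the Table service and keep the results for a while
        var results = new List<DynamicTableEntity>(table.ExecuteQuery(query));
        Cache.Set(cacheKey, results, DateTimeOffset.UtcNow.Add(timeToLive));
        return results;
    }
}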

The code from this post is part of the Brisebois.WindowsAzure NuGet package

To install Brisebois.WindowsAzure, run the following command in the Package Manager Console

PM> Install-Package Brisebois.WindowsAzure

Get more details about the NuGet package.

Continue Reading…