Archives For Entity Framework

Since Entity Framework 6 achieved General Availability, I have been waiting for version 6.0.2 (now available on NuGet). This version of Entity Framework packs a ton of new features, including async queries and built-in retries to deal with the inevitable transient faults. On Windows Azure, these are to be expected. Having the retry logic built in is music to the ears of any seasoned Windows Azure developer who has had to implement this logic using the Transient Fault Handling Application Block.

Don’t get me wrong, the Transient Fault Handling Application Block is still very useful to implement your own error handling strategies. A good example can be found in my post about defining an HTTP Transient Error Detection Strategy for REST calls.

Entity Framework has changed a lot since version 5.0, so it may come as a surprise that the transient fault retry logic isn't activated by default. If you want to take advantage of this feature, you will need to activate it. The following piece of code shows what you need to know about configuring Entity Framework 6.0.x for services that use Windows Azure SQL Database.
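As a sketch of what that activation can look like (the class name, retry count and delay below are illustrative values, not the ones from the post), a code-based configuration registers the SqlAzureExecutionStrategy:

```csharp
using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF6 discovers a DbConfiguration subclass automatically when it lives
// in the same assembly as the DbContext.
public class AzureSqlConfiguration : DbConfiguration
{
    public AzureSqlConfiguration()
    {
        // Activate the built-in retry logic for transient SQL Database faults.
        // 5 retries with a 30 second maximum delay are illustrative values.
        SetExecutionStrategy(
            "System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(5, TimeSpan.FromSeconds(30)));
    }
}
```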

Continue Reading…


Now don’t get me wrong, I’m a HUGE fan of Entity Framework. What I don’t get though, is why Lazy Loading is on by default.

As one of my friends likes to quote “with great power comes great responsibility” and he’s absolutely right! Lazy loading is very powerful and should be used selectively. I’m not saying that you should never use it, but you should really think long and hard about how you use it.

If you forget that Lazy Loading is on by default, it's quite easy to create N+1 situations, where what could have been a single read from the database turns into a nightmare!
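As a minimal sketch (the Blog and Post entities here are placeholders, not from the post), you can opt out of lazy loading on the context and eager-load related data with Include, turning N+1 queries into a single round trip:

```csharp
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Post> Posts { get; set; }
}

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    public BlogContext()
    {
        // Opt out of the default: related entities must now be loaded explicitly.
        Configuration.LazyLoadingEnabled = false;
    }

    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
}

public class ConsumerExample
{
    public static void PrintPostCounts()
    {
        using (var context = new BlogContext())
        {
            // Eager-load posts in one query instead of one query per blog.
            var blogs = context.Blogs
                               .Include(b => b.Posts)
                               .ToList();

            foreach (var blog in blogs)
                Console.WriteLine("{0}: {1} posts", blog.Name, blog.Posts.Count);
        }
    }
}
```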

Continue Reading…

In 2010 I bought a Kindle and it changed my life! Since then I've been catching up on books I should have read years ago. Back then, reading technical books meant carrying bulky, heavy printed books in my bag. The day I got my first Kindle is the day I started reading again!

Since then, I have read quite a few books! Below are some of the books that helped me love and master Entity Framework!

Before you go through this list of books, I have to admit that I have a huge preference for Entity Framework Code First (a.k.a. Magic Unicorn) because I find it easy to work with. It is easier to maintain and evolve, and it allows you to use true Plain Old CLR Objects (POCOs). If you have any suggestions for good Entity Framework books, please share them through this post's comments.

Continue Reading…

Caching with Entity Framework has always been a challenge. Out of the box, Entity Framework doesn't have second level caching support. Even though many open source solutions like the Scalable Object Persistence (SOP) Framework exist, I decided to implement a query-level cache that uses the Transient Fault Handling Application Block to execute retry policies and transparently handle transient faults.

Keep in mind that transient faults are normal: it's not a question of if they will occur, it's really a question of when. SQL Database instances are continuously being shifted around to prevent the service's performance from degrading.

The Database fluent API I created executes reusable queries. Reusable queries greatly simplify the application’s design by encapsulating query logic in named concepts. They are like Stored Procedures in SQL Server, where team members can discover functionality by reading file names from the query folders. More details about reusable queries can be found in the reference section at the bottom of this post.
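As a rough sketch of the idea (the interface, context and entity types below are simplified stand-ins, not the actual Brisebois.WindowsAzure types), a reusable query encapsulates its logic behind a descriptive name:

```csharp
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for the package's reusable query abstraction.
public interface IQuery<in TModel, out TResult>
{
    TResult Execute(TModel model);
}

// The class name documents the query's intent, much like a stored procedure
// name: team members can discover functionality by browsing the query folder.
public class FindFlightLogsByAirport : IQuery<FlightContext, List<FlightLog>>
{
    private readonly string airportCode;

    public FindFlightLogsByAirport(string airportCode)
    {
        this.airportCode = airportCode;
    }

    public List<FlightLog> Execute(FlightContext context)
    {
        return context.FlightLogs
                      .Where(f => f.DepartureAirportCode == airportCode)
                      .ToList();
    }
}
```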


The code from this post is part of the Brisebois.WindowsAzure NuGet Package

To install Brisebois.WindowsAzure, run the following command in the Package Manager Console

PM> Install-Package Brisebois.WindowsAzure

Get more details about the NuGet Package.

Continue Reading…


The short answer is that you’re probably not using transactions.

I came across this issue a few times and decided to create the ReliableModel. When I modify the state of my database, it automatically wraps my code in a TransactionScope. By default, a TransactionScope times out after one minute, and the maximum timeout (10 minutes by default) is set in the machine.config, which cannot be modified on Windows Azure Roles. Therefore, if your statements need more than 10 minutes to execute, I strongly recommend using Stored Procedures. Be sure to use Transactions in your Stored Procedures.
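As a sketch of the pattern (the context, entity and timeout value are placeholders, not the actual ReliableModel implementation), explicit transaction handling looks like this:

```csharp
using System;
using System.Transactions;

public class ReliableModelSketch
{
    public static void SaveWithTransaction(MyContext context)
    {
        // An explicit timeout; values beyond machine.config's maxTimeout
        // are silently capped at that maximum.
        var options = new TransactionOptions
        {
            Timeout = TimeSpan.FromMinutes(5)
        };

        using (var scope = new TransactionScope(
            TransactionScopeOption.Required, options))
        {
            context.Widgets.Add(new Widget { Name = "example" });
            context.SaveChanges();

            // Commit; disposing the scope without Complete rolls everything back.
            scope.Complete();
        }
    }
}
```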

Continue Reading…

In a previous post I discussed using Table-Valued Parameters and Stored Procedures to insert large amounts of data into Windows Azure SQL Database without triggering excessive throttling by the SQL Database. Not long after that post, I rapidly came to the conclusion that it was a great solution only for reasonable amounts of data. Then I wrote about using SqlBulkCopy to overcome the limitations imposed by the stored procedure approach and provided the ReliableBulkWriter.

This article will show how to organize a database to enable the insertion of large amounts of relational data. To illustrate the changes that were required I will be using the following model.


Continue Reading…

When observing Query Plans through the Windows Azure Management Portal for SQL Database you may come across warnings. Some warnings are about missing Indexes and some are about performance issues related to the Query itself.


This specific warning occurs when you are comparing two strings that are not of the same type. By this I mean that one is NVARCHAR and the other is VARCHAR.

Addressing these warnings will have a great impact on query performance with large datasets. It may not be noticeable on tables that have a small number of records. Once you start querying over a few million records, performance gains will be noticeable by the end users.
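In Code First, one way to avoid the mismatch (the entity and property names here are illustrative) is to declare the column as non-Unicode, so the parameter Entity Framework generates matches a VARCHAR column:

```csharp
using System.Data.Entity.ModelConfiguration;

public class AirportConfiguration : EntityTypeConfiguration<Airport>
{
    public AirportConfiguration()
    {
        // Map the property to VARCHAR instead of the default NVARCHAR,
        // so string comparisons don't trigger an implicit conversion.
        Property(a => a.Code)
            .HasColumnType("varchar")
            .HasMaxLength(3);
    }
}
```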

Fixing these issues can go a long way when it comes to end user satisfaction! Remember, slow applications are applications to which people find alternatives!

Continue Reading…

There are times when entities don’t need navigation properties on both sides of the relationship. This is an example you can use to map to an existing database or generate a brand new database on Windows Azure SQL Database.


The ERD shows a Many to Many relationship between the FlightLog and the Airport entities. The requirements for the current model have no need for Airport to have a Navigation Property of FlightLogs.

Using Entity Framework Code First, the following statement can be used in the EntityTypeConfiguration to configure the relationship.

HasMany(r => r.Airports)
    .WithMany() // No navigation property here
    .Map(m =>
    {
        // Key column names are illustrative
        m.MapLeftKey("FlightLogId");
        m.MapRightKey("AirportId");
        m.ToTable("BridgeTableForAirportsAndFlightlogs");
    });

FlightLog has many Airports; the relationship is held in an intermediate table called BridgeTableForAirportsAndFlightlogs. By not specifying a property for the inverse Navigation Property, a unidirectional Many to Many relationship is created.

Continue Reading…

As some of you may have noticed, Windows Azure SQL Database is quite good at protecting itself. If you try to insert too many rows in a short period of time, it will simply close your connection and cause your transaction to roll back.

I was trying to insert close to 700 000 items into a table and was systematically getting cut off.

General Guidelines and Limitations (Windows Azure SQL Database)

Connection Constraints

Windows Azure SQL Database provides a large-scale multi-tenant database service on shared resources. In order to provide a good experience to all Windows Azure SQL Database customers, your connection to the service may be closed due to the following conditions:

  • Excessive resource usage
  • Connections that have been idle for 30 minutes or longer
  • Failover because of server failures

In my case, I was probably violating the first of the three conditions mentioned above. I tried to figure out how to get around this issue, and after toying with the problem for some time I came up with using a Table-Valued Parameter with a Stored Procedure. I quickly found an example and came up with the following solution.
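A sketch of that approach (the procedure name, table type name, columns and Item class are illustrative): rows are loaded into a DataTable and passed to the stored procedure as a single structured parameter, so the whole batch travels in one round trip:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class TableValuedParameterSketch
{
    public static void BulkInsert(string connectionString,
                                  IEnumerable<Item> items)
    {
        // The DataTable's columns must match the user-defined table type.
        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));

        foreach (var item in items)
            table.Rows.Add(item.Id, item.Name);

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.InsertItems", connection))
        {
            command.CommandType = CommandType.StoredProcedure;

            var parameter = command.Parameters.AddWithValue("@Items", table);
            parameter.SqlDbType = SqlDbType.Structured;
            parameter.TypeName = "dbo.ItemTableType";

            connection.Open();
            command.ExecuteNonQuery(); // one round trip for the whole batch
        }
    }
}
```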

More information can be found in the Windows Azure SQL Database Performance and Elasticity Guide.

The degree to which a subscriber’s connectivity is blocked ranges from blocking inserts and updates only, to blocking all writes, to blocking all reads and writes. The time span for which throttling occurs is referred to as the Throttling Cycle and the duration of a Throttling Cycle is referred to as the Throttling Sleep Interval which is 10 seconds by default.

Throttling severity falls into one of two categories, Soft Throttling for “mildly exceeded” types and Hard Throttling for “significantly exceeded” types. Because significantly exceeded types pose a greater risk to overall system health, they are handled more aggressively than mildly exceeded types. Engine Throttling follows these steps to reduce load and protect system health:

  1. Determines the load reduction required to return the system to a healthy state.
  2. Marks subscriber databases that are consuming excessive resources as throttling candidates. If Engine Throttling is occurring due to a mildly exceeded type then certain databases may be exempt from consideration as throttling candidates. If Engine Throttling is due to a significantly exceeded type then all subscriber databases can be candidates for throttling with the exception of subscriber databases that have not received any load in the Throttling Cycle immediately preceding the current Throttling Cycle.
  3. Calculates how many candidate databases must be throttled to return the system to a healthy state by evaluating the historical resource usage patterns of the candidate databases.
  4. Throttles the calculated number of candidate databases until system load is returned to the desired level. Depending on whether throttling is Hard Throttling or Soft Throttling, the degree of throttling applied or the throttling mode, as described in Understanding Windows Azure SQL Database Reason Codes can vary. Any databases that are throttled remain throttled for at least the duration of one throttling cycle but throttling may often persist for multiple throttling cycles to return the system to a healthy state.

Keep in mind that you are sharing the following Database Server configuration.

As of this writing, each SQL Database computer is equipped with 32 GB RAM, 8 CPU cores and 12 hard drives. To ensure flexibility, each SQL Database computer can host multiple subscribers at a time.

Continue Reading…

Microsoft released the Transient Fault Handling Application Block as part of the Microsoft Enterprise Library, which targets various Windows Azure services. An interesting aspect of this block is that it's easy to extend and use.

The Transient Fault Handling Application Block is a product of the collaboration between the Microsoft patterns & practices team and the Windows Azure Customer Advisory Team. It is based on the initial detection and retry strategies, and the data access support from the Transient Fault Handling Application Framework. The new block now includes enhanced configuration support, enhanced support for wrapping asynchronous calls, provides integration of the block’s retry strategies with the Windows Azure Storage retry mechanism, and works with the Enterprise Library dependency injection container. The new Transient Fault Handling Application Block supersedes the Transient Fault Handling Framework and is now a recommended approach to handling transient faults in the cloud.

Targeted services include Windows Azure SQL Database, Windows Azure Service Bus, Windows Azure Storage, and Windows Azure Caching Service. Although these are all cloud services, it is easy to define your own detection strategies to identify known transient error conditions.

The following is an example of the Exponential Back-Off retry strategy working with Entity Framework.

Retry four times, waiting two seconds before the first retry, then four seconds before the second retry, then eight seconds before the third retry, and sixteen seconds before the fourth retry.

This retry strategy also introduces a small amount of random variation into the intervals. This can be useful if the same operation is being called multiple times simultaneously by the client application.
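A sketch of such a policy (the context is a placeholder, and the exact namespace varies between versions of the block), matching the intervals described above:

```csharp
using System;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

public class RetryPolicySketch
{
    public static void SaveWithRetries(MyContext context)
    {
        // 4 retries, backing off roughly 2s, 4s, 8s, 16s (plus random jitter).
        var backOff = new ExponentialBackoff(
            retryCount: 4,
            minBackoff: TimeSpan.FromSeconds(2),
            maxBackoff: TimeSpan.FromSeconds(16),
            deltaBackoff: TimeSpan.FromSeconds(2));

        var policy =
            new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(backOff);

        // The action is retried whenever the detection strategy classifies
        // the thrown exception as a transient fault.
        policy.ExecuteAction(() => context.SaveChanges());
    }
}
```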

Continue Reading…