Archives For Big Data


I never would have imagined that the laws of physics would be so important in a world where virtualization is the new normal.

Data Locality is Important

Data Locality refers to the ability to move the computation close to the data. This matters because when performance is key, I/O quickly becomes our number one bottleneck. Data access times can vary from milliseconds to seconds depending on factors like hardware specifications and network capabilities.

Let’s explore Data Locality through the following scenario. I have eight files containing data about multiple trucks, and I need to identify trips. A trip consists of many segments, including short stops; if the driver stops for coffee and starts again, it’s still considered the same trip. The strategy depicted below is to read each file and group the data points by truck, which can be referred to as mapping the data. Then we compute the trips for each group in parallel over multiple threads, which can be referred to as reducing the data. Finally, we merge the results into a single CSV file so that they can easily be imported into other systems like SQL Server and Power BI.
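To make the strategy concrete, here is a minimal Python sketch of that map/group/reduce flow. The file pattern, the column names (truck_id, timestamp) and the 30-minute stop threshold are assumptions for illustration, not the original implementation.

```python
# Sketch of the map (group by truck) / reduce (trips per group) / merge flow.
# File layout, column names and the gap threshold are assumed for illustration.
import csv
import glob
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime, timedelta

STOP_GAP = timedelta(minutes=30)  # assumed: a longer gap starts a new trip

def map_by_truck(paths):
    """Read every file and group data points by truck id (the 'map' step)."""
    groups = defaultdict(list)
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["timestamp"] = datetime.fromisoformat(row["timestamp"])
                groups[row["truck_id"]].append(row)
    return groups

def compute_trips(truck_id, points):
    """Split one truck's ordered points into trips (the 'reduce' step).
    Short stops, like a coffee break, stay inside the same trip."""
    points.sort(key=lambda p: p["timestamp"])
    trips, current = [], [points[0]]
    for prev, point in zip(points, points[1:]):
        if point["timestamp"] - prev["timestamp"] > STOP_GAP:
            trips.append(current)
            current = []
        current.append(point)
    trips.append(current)
    return [(truck_id, i, t[0]["timestamp"], t[-1]["timestamp"])
            for i, t in enumerate(trips)]

def main():
    groups = map_by_truck(glob.glob("data/trucks-*.csv"))
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda kv: compute_trips(*kv), groups.items())
        # Merge every group's trips into a single CSV for SQL Server / Power BI.
        with open("trips.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["truck_id", "trip", "start", "end"])
            for trips in results:
                writer.writerows(trips)

if __name__ == "__main__":
    main()
```

The same structure carries over to the distributed version: each truck group becomes an independent unit of work that can be handed to a separate task.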

Single Machine

The single-machine configuration produced promising results, so I decided to break the process apart and distribute it across many task virtual machines (TVMs). Azure Batch is the perfect service for scheduling these jobs. Continue Reading…


Big Compute or Big Data?

This question comes up on a fairly regular basis, so I thought it would be interesting to share my understanding in the hope of helping you make the right decision.

Both are enablers, and they create opportunities through different approaches. When the problem is well understood and each run varies only by its parameters, Big Compute is definitely an approach to consider. When we know our input data and are still experimenting with various algorithms, Big Data is a clear winner.

This being said, let’s try to materialize this into something more concrete.

Big Compute shines at large scale. Easily parallelizable workloads are the best use cases because they can be broken into independent tasks, which is where large numbers of compute cores pay off the most. Big Compute is all about executing any software package, written in any language, by passing in parameters. This creates an amazing opportunity for developers to make their code extremely efficient, with optimizations ranging from concurrency and memory management to limiting IOPS and reducing network communication. Typical scenarios are well-known algorithms like Monte Carlo simulations, rendering, and workflows.
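As an illustration, here is a minimal sketch of an easily parallelizable workload: the same code runs many times, and each task differs only by its parameters. The pi estimation is just a stand-in for any Monte Carlo model, and the local process pool stands in for the scheduler (for example, Azure Batch tasks) that would actually fan the work out.

```python
# Sketch of an embarrassingly parallel parameter sweep.
# Each task is fully independent: (seed, sample count) in, partial result out.
import random
from multiprocessing import Pool

def simulate(task):
    """One independent task of a Monte Carlo pi estimation."""
    seed, samples = task
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return hits

if __name__ == "__main__":
    tasks = [(seed, 1_000_000) for seed in range(64)]  # 64 independent tasks
    with Pool() as pool:
        hits = pool.map(simulate, tasks)  # locally; a scheduler would fan these out
    total_samples = sum(samples for _, samples in tasks)
    print("pi ≈", 4 * sum(hits) / total_samples)
```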

Big Data is all about empowering us to experiment with our data, providing tools, query languages and scripting capabilities geared toward agility. Tinkering with algorithms is the perfect use case: we know our data and want to extract insights from it, which means we’re going to clean it, shape it and question it. Big Data is built for this; it makes it possible to iterate through multiple versions of our algorithms in a way that’s difficult with Big Compute.
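Here is a minimal sketch of what that iteration loop feels like, using plain Python as a stand-in for the query and scripting tools a Big Data platform provides. The data file and column names (truck_id, speed) are assumptions.

```python
# Sketch of the Big Data iteration loop: load once, then keep cleaning,
# reshaping and re-asking questions. Columns are assumed for illustration.
import csv
from collections import Counter

def load(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def clean(rows):
    # v1 drops rows with a missing speed; a later version might impute instead.
    return [r for r in rows if r.get("speed")]

def question_v1(rows):
    # First question: which trucks log the most data points?
    return Counter(r["truck_id"] for r in rows).most_common(5)

def question_v2(rows):
    # Next iteration: which trucks spend the most time stopped?
    stopped = (r["truck_id"] for r in rows if float(r["speed"]) == 0.0)
    return Counter(stopped).most_common(5)

rows = clean(load("data/trucks.csv"))
print(question_v1(rows))
print(question_v2(rows))  # tweak the question, rerun, repeat
```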

So now that we’ve nailed this down, which is right for your workload?

Share your thoughts in the comments below


Windows Azure HDInsight has been available for a little while now, but I haven’t had a chance to work with it. Tonight, as I was browsing the Patterns & Practices website, I noticed that they are working on a new book for the Cloud series. It’s an ongoing project about developing Big Data solutions using Windows Azure HDInsight and related technologies.

The book can be downloaded from the Patterns & Practices Windows Azure Guidance site.

Continue Reading…