Archives For Principles & Practices


We converse every day, but do we really know what makes a conversation good or bad? And do we know how to improve them?

Designing a Conversation

How do we start meaningful conversations? And how do we structure them? I find these questions difficult to answer… my background is in building software, and communicating with a human isn’t as straightforward as I expected. I mean, we’ve been talking for most of our lives. We learn and interact with others every day. So why is designing a conversation proving to be a challenge? Continue Reading…


We communicate through images, movement, sounds, text, symbols, objects…

Bots are evolving fast! Last October I wrote about how bots are the new apps and shared my thoughts about why bots matter and the opportunities they create. Over the past few months, interest has grown and we as a community have iterated on our approaches to building bots. For some, the hype has reached a point where everyone wants one, but do we really understand bots, or the effort required to build them?

So far, my initial thoughts have stood the test of time. 

The following are observations, experiences, and thoughts about what we need to consider when we set out to build a successful bot.

Bots are the new Apps – Part 2

Bots are defined by an exceptional user experience, not by the amount of Artificial Intelligence (AI) or Natural Language Processing (NLP) used to build them.

A bot is successful if users actually use it! This is important, because we must build our bots with meaningful telemetry, logging, and data collection mechanisms that empower us to measure, validate, and iterate. This data forms a foundation we can use to hypothesize and prioritize our efforts. Then, through A/B testing, we confirm that we solve the user’s need in the quickest and easiest way possible. This directly impacts our business model, pushing us to grow end-user consumption and to sell through microtransactions. In other words, a bot helps users be successful. Continue Reading…
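For A/B testing to produce trustworthy telemetry, variant assignment needs to be deterministic, so a returning user’s events always land in the same bucket. A minimal sketch of one way to do that (the function and experiment names are illustrative, not from the post):

```python
import hashlib

def ab_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to an experiment variant.

    Hashing user_id together with the experiment name keeps the
    assignment stable across sessions, so telemetry from repeat
    visits accumulates in a single bucket.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Logged alongside each interaction event, the variant label lets us compare completion rates between buckets before rolling a change out to everyone.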


What’s this all about?

In the old days, R&D was about features, marketing was about promoting products to prospective decision makers, sales were about getting big deals, and service was about implementing and fixing things. Today, it’s all about growing end-user consumption and selling microtransactions to consumers.

The risk is on us, and our reward only happens if our end-users are successful.

Success doesn’t magically happen… Product marketing must design for it, development must build for it, services must contribute heavily to consumption research, marketing must translate the findings into offers, offer management technology must deliver it, services must access it during every service transaction…

In short, consumption is everything. If end-users underutilize our software, chances are that at some point the company we code for won’t be able to pay for our services. We are all responsible for crafting successful software. From developers to sellers, everyone is expected to provide feedback, insights, and value to the end-users. This is a team effort and can be supported through practices like DevOps, Business Intelligence, and Artificial Intelligence. It requires communication and collaboration. It’s time to forget about silos and to move from a one-time sale to a pay-per-use model. Continue Reading…


Speaking at DevTeach 2016

Years ago, I attended the DevTeach conference and was fortunate to take part in conversations that helped me overcome many challenges over the years that followed. This week I had the opportunity to speak at DevTeach in Montreal. For this event, I chose a topic that I’m really passionate about, one that needed to cover a lot of ground in a short amount of time.

The talk progressed from a public cloud, to an architectural pattern, to a hyper-scale microservice platform, and finally to a programming model.

My goal with this talk was primarily to introduce Actors and Service Fabric, and then to provide attendees with additional information, in the downloadable slides, about the patterns that I feel are important to consider when building microservices.

To my surprise, I had a full room and a lot of great questions. Thanks, everyone, for making this a success. Continue Reading…


Never would have imagined that the laws of physics would be so important in a world where virtualization is the new normal.

Data Locality is Important

Data Locality refers to the ability to move the computation close to the data. This is important because when performance is key, I/O quickly becomes our number one bottleneck. Data access times vary from milliseconds to seconds because of many factors, like hardware specifications and network capabilities.

Let’s explore Data Locality through the following scenario. I have eight files containing data about multiple trucks, and I need to identify trips. A trip consists of many segments, including short stops. So if the driver stops for coffee and starts again, it’s still considered the same trip. The strategy depicted below is to read each file and group data points by truck. This can be referred to as mapping the data. Then we can compute the trips for each group in parallel over multiple threads. This can be referred to as reducing the data. Finally, we merge the results into a single CSV file so that we can easily import them into other systems like SQL Server and Power BI.
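The map and reduce steps described above can be sketched as follows. The stop-gap threshold and the record shape are assumptions for illustration; the original post doesn’t specify them:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

STOP_GAP_MINUTES = 30  # assumption: a stop longer than this starts a new trip

def map_by_truck(rows):
    """Map step: group raw data points by truck id."""
    groups = defaultdict(list)
    for row in rows:
        groups[row["truck_id"]].append(row)
    return groups

def compute_trips(points):
    """Reduce step: split one truck's time-ordered points into trips.

    Short stops (e.g. a coffee break) stay inside the same trip;
    only a gap longer than STOP_GAP_MINUTES starts a new one.
    """
    points = sorted(points, key=lambda p: p["minute"])
    trips, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        if cur["minute"] - prev["minute"] > STOP_GAP_MINUTES:
            trips.append(current)
            current = []
        current.append(cur)
    trips.append(current)
    return trips

def identify_trips(rows):
    """Map the rows, then reduce each truck's group on its own thread."""
    groups = map_by_truck(rows)
    with ThreadPoolExecutor() as pool:
        return dict(zip(groups, pool.map(compute_trips, groups.values())))
```

The merged result can then be written out with the standard csv module for import into SQL Server or Power BI.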

Single Machine

The single-machine configuration results were promising, so I decided to break the process apart and distribute it across many task virtual machines (TVMs). Azure Batch is the perfect service to schedule the jobs. Continue Reading…


What should I be looking for during Code Reviews?

For the longest time, I was wary of Code Reviews. I used to spend most of my time implementing features and felt that Code Reviews robbed me of my precious time. I was mistaken and have changed my approach.

Our team uses systematic Code Reviews to help us normalize our code base. They have helped us set expectations and identify major defects before our clients experience them in production. We require a minimum of two reviewers for each code contribution, and we encourage everyone to take part in Code Reviews. The practice has the added benefit of creating opportunities for developers to look at different parts of our solution.

Thus far, it’s been a great way to share knowledge and practices amongst our team members. I hardly hear anyone swearing during reviews, and I rarely see any code that strays from our common expectations. It has made onboarding new team members easier, because once they understand our standards, they’re able to contribute code and take part in Code Reviews.

Code Review tools don’t always provide enough context for each piece of code that we need to review. This makes it a challenge to perform effective Code Reviews in a timely manner. To make my life easier, I try to concentrate on things that don’t require me to understand the full context. For example, I look at standards, documentation, Unit Tests and try to use common sense to identify as many issues as I can. Continue Reading…


Size Matters… A LOT!

A few months ago I wrote about resource contention and how the size of data matters on the cloud. Today I decided to share a true story that happened to a friend of mine.

Things that work during development and quality assurance can yield unexpected results at scale. It’s no surprise that we often say Microsoft Azure is there to help your applications survive instant success. Going from 1 to 100,000 users overnight is a problem we all dream of having!

Let’s look at the following line item from my friend’s bill.


Last month, the bandwidth generated by his apps totaled 783 gigabytes. To some, this number looks really small; to others, it looks astronomical. So let’s put this story in perspective.

The apps that consumed this data were mobile apps. Think about it for a second: that’s a lot of mobile devices! The data in question was JSON. That’s not what you expected, right? So how did so much JSON go over the wire? Well, whenever an app started, it would download a JSON file from Azure Blob Storage that was then used to locate resources to display. Each JSON file wasn’t that big, but the sheer success of his apps generated a staggering amount of traffic.

To be honest, this is the kind of surprise we all dream of! We all want our apps to go viral, but are we ready?

In this specific situation, there are a couple of things that can be done to reduce the app’s monthly bandwidth requirements.

  1. Set Cache-Control headers on the blobs to help reduce bandwidth.
  2. Use ETags to identify whether a blob on the server has changed since it was last downloaded.
  3. Use a different serialization format; Protobuf could reduce the overall bandwidth consumed by his apps.
  4. Minify the JSON to reduce the size of each blob; remember, blank spaces eat precious bytes.
  5. Partition the data into smaller blobs; my guess is that most users won’t stray far from the first pages.
  6. If we can’t partition the blobs, we might want to consider compression.
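Items 4 and 6 are cheap to try. Here is a sketch of what minification and compression buy on a hypothetical resource-locator payload (the shape of the JSON is made up for illustration):

```python
import gzip
import json

# Hypothetical resource-locator blob, similar in spirit to the file
# each app downloads at startup.
payload = {"resources": [
    {"id": i, "url": f"https://example.invalid/img/{i}.png"} for i in range(200)
]}

pretty = json.dumps(payload, indent=2)                 # readable, but heavy
minified = json.dumps(payload, separators=(",", ":"))  # blank spaces eat bytes
compressed = gzip.compress(minified.encode("utf-8"))   # compress before upload

print(len(pretty), len(minified), len(compressed))
```

One common approach is to upload the gzipped bytes and set the blob’s Content-Encoding property to gzip, so clients transparently receive the smaller payload and the savings carry through to the bandwidth bill.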



If you are building your application on the cloud instead of building your application for the cloud, you’re doing it wrong!

Continue Reading…

Temporary

What’s your first impression when you look at this elevator control panel? To be honest, I find it confusing. What was the rush? I mean, they could have waited for the new control panel buttons or the new software update to change everything at once. Who knows how long this patch will stay in place, because it’s deemed “good enough”…

I see this all the time. Think about your projects, past and present: do any of them feel like this elevator control panel? It happens to all of us at some point. Typically, it happens when we’re rushed and think that temporarily commenting out a piece of code isn’t going to hurt anyone.

Continue Reading…

My Reading List for 2014

January 2, 2014 — 2 Comments

Every year I hunt for new books to read. 2013 was a wild year where I didn’t really have a plan in place and my reading was all over the place. Getting organized in 2014 is all about not missing out.

Last year I read some great books about SQL Server, Windows Azure, and methodology; this year I want to focus on catching up on the key books that greatly affected our industry. Are there any books that shouldn’t be on my list? What’s missing from the list?

I probably won’t get to read everything on my list, but with your help I can try to pick the best ones to read this year and continue working down the list next year.

2013 was a slow year for reading, because my focus was mostly geared towards sharing what I had learnt about Windows Azure through my blog. This year is going to be different: I will try to balance my efforts between writing and catching up on the past.

Continue Reading…