Archives For Infrastructure as a Service (IaaS)


Getting Around Blocked Ports

I regularly find myself in locations that block ports to the outside world, which means I can’t use Remote Desktop (RDP) sessions to connect to Virtual Machines hosted on Azure. The strategy described in this post is one of many possible solutions, and it also applies to Linux VMs and SSH sessions.

The Strategy

  • Using a Load Balancer and NAT rules to map port 443 to the RDP port (3389) of a Jumpbox Virtual Machine (VM), as sketched below
  • Using the Jumpbox to RDP into VMs deployed to the Azure Virtual Network
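
Here’s a minimal sketch of the first step using AzureRm PowerShell. The load balancer and resource group names are illustrative, and the new rule still has to be associated with the Jumpbox NIC’s IP configuration before it takes effect:

$lb = Get-AzureRmLoadBalancer -Name 'jumpbox-lb' -ResourceGroupName 'demo-rg'

# Map inbound TCP 443 on the public front end to RDP (3389) on the back end
$lb | Add-AzureRmLoadBalancerInboundNatRuleConfig -Name 'rdp-over-443' `
         -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
         -Protocol Tcp `
         -FrontendPort 443 `
         -BackendPort 3389 | Out-Null

# Push the updated configuration back to Azure
$lb | Set-AzureRmLoadBalancer | Out-Null

Continue Reading…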

Using PS to Add a Key to the Registry

In a recent experiment, I had to disable User Account Control (UAC) on a remote Virtual Machine through WinRM.

Note

To better protect users who are members of the local Administrators group, we implemented UAC restrictions on the network. This mechanism helps protect against “loopback” attacks and helps prevent local malicious software from running remotely with administrative rights.

Whenever I deal with the registry, I always feel a little dread. You never know if you’re going to regret making changes…

Anyway, this was an experiment, so please, use this wisely.

# Registry location and value that relax the UAC remote restriction
$registryPath = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
$name = "LocalAccountTokenFilterPolicy"
$value = 1

# Create the DWORD value, overwriting it if it already exists
New-ItemProperty -Path $registryPath `
                 -Name $name `
                 -Value $value `
                 -PropertyType DWORD `
                 -Force | Out-Null
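
# (Optional sanity check, not in the original snippet: read the value back
# before rebooting to confirm the write landed.)
Get-ItemProperty -Path $registryPath -Name $name |
    Select-Object -ExpandProperty $name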

# Restart the VM to apply the changes
Restart-Computer -Force

Deploying Azure Marketplace VMs

The first step is to gather information about the Marketplace Virtual Machine (VM) image that we want to deploy. For this example, I decided to deploy a Tableau Server image.

Login-AzureRmAccount

$location = 'eastus'

# Find the image publisher
Get-AzureRmVMImagePublisher -Location $location `
    | Where-Object -Property PublisherName -Like Tableau*

$publisherName = 'tableau'

# List the publisher's offers
Get-AzureRmVMImageOffer -Location $location `
                        -PublisherName $publisherName

$offer = 'tableau-server'

# List the SKUs available for the offer
Get-AzureRmVMImageSku -Location $location `
                      -PublisherName $publisherName `
                      -Offer $offer `
      | Select-Object -Property 'Skus'

Skus                  
----                  
bring-your-own-license

Now that we have the image information, it’s time to create an Azure Resource Manager (ARM) template.
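
As a side note, if you’d rather wire these values directly into a VM configuration with PowerShell, it would look something like this sketch ($vm is assumed to be an existing configuration created with New-AzureRmVMConfig):

$vm = Set-AzureRmVMSourceImage -VM $vm `
                               -PublisherName $publisherName `
                               -Offer $offer `
                               -Skus 'bring-your-own-license' `
                               -Version 'latest'

# Marketplace images also require purchase plan information
$vm = Set-AzureRmVMPlan -VM $vm `
                        -Publisher $publisherName `
                        -Product $offer `
                        -Name 'bring-your-own-license'

Continue Reading…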


Geo-HA Service Fabric Cluster

One of the biggest challenges that we face when we build an Internet-scale solution is high availability across geographic locations (Geo-HA). Why is this important? Well, there can be a few different reasons. The most common is the ability to survive data center outages. Another is to bring services closer to end users so that we can provide a good user experience.

Geo-HA brings challenges to the table. For example, should we use an Active-Passive or Active-Active strategy for data across regions? Keeping in mind that Active-Active is difficult to get right, we need to take the time to analyze and make the correct choices. We need to consider our Disaster Recovery (DR) plan and our target Recovery Point Objective (RPO) and Recovery Time Objective (RTO). Azure has a whole bunch of mechanisms for replication, backup, and monitoring, so how do we decide what’s the right combination?

Today’s Internet-scale services are built using microservices. Service Fabric is a next-generation middleware platform used for building enterprise-class, Tier-1 services. This microservices platform allows us to build scalable, highly available, reliable, and easy to manage solutions. It addresses the significant challenges in developing and managing stateful services. The Reliable Actors API is one of two high-level frameworks provided by Service Fabric, and it is based on the Actor pattern. This API gives us an asynchronous, single-threaded programming model that simplifies our code while still providing the advantages of scalability and reliability guarantees offered by Service Fabric.

A Service Fabric cluster is highly available within its geographic region by default. Thinking back to our heritage of on-premises data centers, we’ve poured thousands of man-hours into deploying Disaster Recovery sites in secondary physical locations, because we know that anything can happen. Over the past few years, we’ve experienced many interesting scenarios; for example, a cut cable or a faulty DNS entry has broken the Internet. So why should we do anything differently in the cloud? We must treat each region as we treat our own data centers and think about Geo-HA.

The rest of this post is about taking high availability to the next level by deploying a Geo-HA Service Fabric cluster. Continue Reading…


Big Compute or Big Data?

This question comes up on a fairly regular basis. So I thought it would be interesting to share my understanding, in the hope of helping you make the right decision.

Both are enablers, and they create opportunities through different approaches. When the problem is understood and the algorithms vary by parameter, Big Compute is definitely an approach to consider. When we know our input data and are experimenting with various algorithms, Big Data is a clear winner.

This being said, let’s try to materialize this into something more concrete.

Big Compute shines at large scales. Easily parallelizable workloads are the best use cases, because they allow us to break the work into independent tasks. This is where we can gain the most from large numbers of compute cores. Big Compute is all about executing any software package, written in any language, by passing in variables. This creates an amazing opportunity for developers to optimize their code to be extremely efficient. Optimizations range from concurrency and memory management to limiting IOPS and tuning network communication. Typical scenarios are well-known algorithms like Monte Carlo simulations, rendering, and workflows.
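
To make “independent tasks” concrete, here’s a toy PowerShell sketch that estimates Pi with Monte Carlo sampling split across four background jobs. It’s illustrative only; a real Big Compute run would distribute tasks across many compute nodes rather than local jobs:

$samplesPerJob = 250000

$jobs = 1..4 | ForEach-Object {
    Start-Job -ArgumentList $samplesPerJob -ScriptBlock {
        param($samples)
        $rand = New-Object System.Random
        $inside = 0
        for ($i = 0; $i -lt $samples; $i++) {
            $x = $rand.NextDouble()
            $y = $rand.NextDouble()
            if (($x * $x + $y * $y) -le 1) { $inside++ }
        }
        $inside   # each job returns its own, independent count
    }
}

# Aggregate the independent results and estimate Pi
$totalInside = ($jobs | Wait-Job | Receive-Job | Measure-Object -Sum).Sum
$jobs | Remove-Job
4 * $totalInside / (4 * $samplesPerJob)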

Big Data is all about empowering us to experiment with our data by providing us with tools, query languages and scripting capabilities that are geared at giving us a lot of agility. Tinkering with algorithms, is the perfect use case. We know our data, and want to extract insights from it. This means that we’re going to clean it, shape it and question it. Big Data is built for this; it makes it possible to iterate through multiple versions of our algorithms in a way that’s difficult with Big Compute.

So now that we’ve nailed this down, which is right for your workload?

Share your thoughts in the comments below


Based on the current builds, compared to Windows Server, Nano Server has a 93 percent smaller VHD size, 92 percent fewer critical bulletins, and 80 percent fewer reboots!

Deploying Nano Server to Azure

I’ve been curious about Nano Server for a while now, and I recently noticed that it was available on Microsoft Azure. This post is definitely from a developer’s point of view. It goes through the steps required to create a functional Nano Server Virtual Machine (VM) on Microsoft Azure.

Nano Server is ideal for many scenarios:

  • As a “compute” host for Hyper-V virtual machines, either in clusters or not
  • As a storage host for Scale-Out File Server
  • As a DNS server
  • As a web server running Internet Information Services (IIS)
  • As a host for applications that are developed using cloud application patterns and run in a container or virtual machine guest operating system

The Adventure

Nano Server is a remotely administered server operating system (OS). Wait. Let me repeat this because it’s important… Nano Server is a remotely administered server operating system (OS). Developers, Nano Server is a server OS optimized for clouds and data centers. It’s designed to take up far less disk space, to set up significantly faster, and to require far fewer restarts than Windows Server. So why does this matter? Well, it means more resources, availability, and stability for our apps. It also means that it’s time to learn new skills, because there is no local logon capability at all, nor does it support Terminal Services. However, we have a wide variety of options for managing Nano Server remotely, including Windows PowerShell, Windows Management Instrumentation (WMI), Windows Remote Management, and Emergency Management Services (EMS).
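
As a taste of that remote administration story, a minimal sketch of opening a PowerShell session to a Nano Server VM over WinRM might look like this (the IP address and credentials are placeholders for your own values):

# Trust the VM's public IP for WinRM client connections
$ip = '0.0.0.0'   # replace with the Nano Server VM's public IP
Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value $ip -Force

# Open an interactive remote session
Enter-PSSession -ComputerName $ip -Credential (Get-Credential)

Continue Reading…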


Upload a VHD to Storage Using AzCopy

What’s the best way to upload a VHD to Azure Storage? This question comes up on a regular basis, and I decided that it was time to share an example that uses PowerShell and AzCopy to upload a VHD to Azure Blob Storage. As you go through this post, remember that a VHD must be uploaded as a Page Blob for it to be usable as a Virtual Machine (VM) image.
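
For a sense of what that looks like, here’s a hedged sketch of a v5-era AzCopy invocation; the storage account, container, key, and file names are placeholders, and /BlobType:page is the switch that makes the upload usable as a VM image:

$destKey = '<storage account key>'   # placeholder

& 'C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe' `
    /Source:C:\vhds `
    /Dest:https://mystorageaccount.blob.core.windows.net/vhds `
    /DestKey:$destKey `
    /Pattern:myserver.vhd `
    /BlobType:page

Continue Reading…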


One Petaflop

In recent discussions around big compute, I was asked what a petaflop (PFlop) would look like on Azure. Now, I had heard the term before and knew it was used to describe a considerable amount of compute capacity. So what is a PFlop, and how do we convert it into something that we can all relate to?
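
As a quick back-of-the-envelope while you read: one PFlop is 10^15 floating-point operations per second, so assuming a (hypothetical) core that sustains 16 GFlops, you would need roughly 10^15 / (16 × 10^9) ≈ 62,500 such cores running flat out to reach it.

Continue Reading…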


Using ARM to Deploy Global Solutions

Imagine deploying your secure, load-balanced solution to three datacenters, putting a worldwide load balancer in place, and doing so in roughly 24 minutes. Did I mention that this deployment is predictable and repeatable?

Good, now that I’ve got your attention, it’s time to dive in!

Building on my previous post about managing compute resources on Azure, I decided to modify the Azure Resource Manager (ARM) template to deploy a real-world environment to three datacenters (yes, I know, the diagram shows two locations, but as I built the demo, I got greedy…). Using Azure Traffic Manager, we can positively affect a user’s experience by directing them to the closest datacenter.

It’s important to note that ARM does not support nested copy operations. This means that we have to use a different strategy to deploy identical environments in multiple Azure regions. After a bit of research, it became apparent that I had to use nested deployments. This technique requires us to break our template into multiple files. The parent template in this demo is the azuredeploy-multi-geo.json file; it contains the full list of parameters, a nested deployment that deploys instances of our environment to multiple Azure regions, and a Traffic Manager definition. The azuredeploy.json template file was refactored from the template used in my previous blog post and contains the networking, storage, and Virtual Machine definitions.
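
For reference, kicking off the parent template with AzureRm PowerShell might look something like this (the resource group name and location are placeholders):

New-AzureRmResourceGroup -Name 'global-demo-rg' -Location 'eastus'

New-AzureRmResourceGroupDeployment -ResourceGroupName 'global-demo-rg' `
                                   -TemplateFile .\azuredeploy-multi-geo.json `
                                   -Verbose

Continue Reading…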