Moving to a New Azure Datacenter
From time to time, I face interesting challenges. Azure is an exciting platform because it pushes me to learn about things I wouldn't have dreamed of a few years back.
This post is all about moving a CentOS Virtual Machine that has a RAID 0 to a new Microsoft Azure Datacenter.
The Setup
It all starts with setting up the PowerShell and Azure environment. The following script defines the session variables and selects a default Azure Subscription for the Azure cmdlets. Then it creates an Azure Storage Account in the East US 2 Azure Datacenter; this account will store the initial CentOS Virtual Machine VHDs. Finally, it sets the account as the session's default storage account.
# Switch to Azure Service Management (ASM)
Switch-AzureMode -Name AzureServiceManagement

$vmname = 'centos-classic'
$subscriptionName = 'Free Trial'
$storageAccountName = 'trie5d03826'
$location = 'East US 2'

# Select the Azure Subscription.
Set-AzureSubscription -SubscriptionName $subscriptionName

# Create a new Storage Account.
New-AzureStorageAccount -StorageAccountName $storageAccountName `
                        -Type 'Standard_LRS' `
                        -Location $location

$vmImageName = '5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-71-20150410'
$vmAdminPassword = 'Microsoft!'
$vmAdminUser = 'Brisebois'

# Switch to the proper storage account.
Set-AzureSubscription -SubscriptionName $subscriptionName `
                      -CurrentStorageAccount $storageAccountName
Create a CentOS VM
Using the variables defined in the previous script, we now create a Small CentOS Virtual Machine in the East US 2 Datacenter.
# Create the Virtual Machine using the OpenLogic CentOS VM Image
New-AzureQuickVM -Linux `
                 -ServiceName $vmname `
                 -Name $vmname `
                 -ImageName $vmImageName `
                 -InstanceSize 'Small' `
                 -Password $vmAdminPassword `
                 -Location 'East US 2' `
                 -LinuxUser $vmAdminUser `
                 -Verbose
Open an SSH Session
Once the Virtual Machine is ready, use the Azure portal to find out which public port was configured as the SSH port. Then use your favorite SSH client to open an SSH session. In this post, I will be using PuTTY.
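If you prefer a command-line client over PuTTY, the connection looks roughly like this sketch; the port number and cloud service DNS name below are placeholders, so substitute the values shown in the portal.

# Hypothetical endpoint: replace 56432 and the cloud service DNS name
# with the public SSH port and address shown in the Azure portal.
ssh -p 56432 Brisebois@centos-classic.cloudapp.net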
Now that we’re logged in, let’s prepare the Virtual Machine for the next steps.
Preparation and Getting to Know the VM
sudo yum install mdadm
Once the mdadm tool is installed, we can have a look at fstab to find the UUID of the OS Disk.
sudo cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Jul 22 19:41:26 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=427e4cf4-85d2-4b58-ac5b-a5c12d0b70dd /    ext4    defaults    1 1
This allows us to differentiate the OS Disk from the local Temp Disk. Ultimately, these UUIDs will come in handy once we start building the RAID 0.
sudo blkid

/dev/sdb1: UUID="c22749d1-371c-4c5d-bcb3-1639d0eaba1f" TYPE="ext4"
/dev/sda1: UUID="427e4cf4-85d2-4b58-ac5b-a5c12d0b70dd" TYPE="ext4"
/dev/sda2: UUID="89aabb77-9b57-40cd-8469-da8c6016cd5d" TYPE="swap"
From the results of blkid, we can identify sda1 as the OS Disk and sdb1 as the local Temp Disk.
Then, we can use df -h to get a sense of how the disks are mounted.
df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        29G  1.6G   26G   6% /
devtmpfs        1.7G     0  1.7G   0% /dev
tmpfs           1.7G     0  1.7G   0% /dev/shm
tmpfs           1.7G  8.3M  1.7G   1% /run
tmpfs           1.7G     0  1.7G   0% /sys/fs/cgroup
/dev/sdb1        50G   53M   47G   1% /mnt/resource
Add 2 Data Disks
The RAID 0 in this post is built using two Azure Data Disks. The following PowerShell gets a reference to our Virtual Machine, then creates and attaches the two Data Disks.
Both Data Disks are created in the session's default Azure Storage Account.
# Add Disks
$vm = Get-AzureVM -Name $vmname -ServiceName $vmname

$vm | Add-AzureDataDisk -CreateNew `
                        -DiskSizeInGB 10 `
                        -DiskLabel 'Disk 2' `
                        -LUN 0 `
                        -HostCaching None

$vm | Add-AzureDataDisk -CreateNew `
                        -DiskSizeInGB 10 `
                        -DiskLabel 'Disk 3' `
                        -LUN 1 `
                        -HostCaching None

$vm | Update-AzureVM
Review and Validate
Before we continue, use PowerShell, the CLI, or the Azure portal to restart the Virtual Machine. Then open a new SSH session.
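As a sketch, assuming the classic (ASM) cross-platform Azure CLI is installed and pointed at the same subscription, the restart can also be done from a shell; Restart-AzureVM in PowerShell or the portal works just as well.

# Hypothetical restart with the classic (ASM) xplat CLI; the cloud service
# and Virtual Machine share the name 'centos-classic' in this post.
azure vm restart centos-classic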
To verify that our new Data Disks have been added properly, use the sfdisk command. It should list the disks as available and ready to be partitioned.
sudo sfdisk -l

Disk /dev/sdc: 1305 cylinders, 255 heads, 63 sectors/track
Disk /dev/sdd: 1305 cylinders, 255 heads, 63 sectors/track

Disk /dev/sdb: 9137 cylinders, 255 heads, 63 sectors/track
Units: cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdb1   *     0+   9137-   9138-  73398272   83  Linux
/dev/sdb2         0       -       0          0    0  Empty
/dev/sdb3         0       -       0          0    0  Empty
/dev/sdb4         0       -       0          0    0  Empty

Disk /dev/sda: 3916 cylinders, 255 heads, 63 sectors/track
Units: cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1   *     0+   3788-   3789-  30432256   83  Linux
/dev/sda2      3788+   3916-    128-   1024000   82  Linux swap / Solaris
/dev/sda3         0       -       0          0    0  Empty
/dev/sda4         0       -       0          0    0  Empty
Prepare The First Data Disk For The RAID 0
The first disk to partition is /dev/sdc.
sudo fdisk /dev/sdc

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p

Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7193373d

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Prepare The Second Data Disk for The RAID 0
The second disk to partition is /dev/sdd.
sudo fdisk /dev/sdd

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x2a8edc29.

Command (m for help): p

Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x2a8edc29

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Building The RAID 0
Now that our Data Disks are partitioned, we can create our RAID 0. If the disks were previously formatted, you will be prompted to confirm, just as I was when I created this array.
Keep in mind that mdadm prompts because it detected existing file system signatures on the partitions. It's also important to note that any data on those partitions will be lost, since we are about to create a new file system that spans the RAID.
sudo mdadm --create /dev/md/data --level 0 --raid-devices 2 /dev/sdc1 /dev/sdd1

mdadm: /dev/sdc1 appears to contain an ext2fs file system
       size=10484736K  mtime=Thu Jan  1 00:00:00 1970
mdadm: /dev/sdd1 appears to contain an ext2fs file system
       size=10484736K  mtime=Thu Aug 20 14:12:12 2015
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/data started.
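The file system on the array is mounted by UUID later in this post, so pinning the array name is not strictly required. That said, if you want the array recorded under a stable name, a common optional step, shown here as an assumption rather than part of the original walkthrough, is to append its definition to /etc/mdadm.conf.

# Optional: persist the array definition so it assembles under a stable name.
# This is an extra step, not something the setup below depends on.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf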
Create The File System on The New RAID 0
Now that we have a RAID 0, it’s time to create a file system.
sudo mkfs -t ext4 /dev/md/data

mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
1310720 inodes, 5238272 blocks
261913 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Add The RAID to /etc/fstab
Then, we need to mount the RAID 0. In this example, we are mounting it to /data.
sudo mkdir /data
sudo /sbin/blkid

/dev/sdb1: UUID="ad81c102-2972-47f2-9705-ba5fdc95076c" TYPE="ext4"
/dev/sda1: UUID="427e4cf4-85d2-4b58-ac5b-a5c12d0b70dd" TYPE="ext4"
/dev/sda2: UUID="89aabb77-9b57-40cd-8469-da8c6016cd5d" TYPE="swap"
/dev/sdc1: UUID="61053e5c-2a40-6979-f986-5d6777f68a5f" UUID_SUB="1d8786a0-9104-c275-7cee-6e468e744e19" LABEL="centos-classic-img-inst:data" TYPE="linux_raid_member"
/dev/sdd1: UUID="61053e5c-2a40-6979-f986-5d6777f68a5f" UUID_SUB="49ffe456-ceb8-24f9-7c08-f40e13e07e12" LABEL="centos-classic-img-inst:data" TYPE="linux_raid_member"
/dev/md/data: UUID="f1b1efbd-bfc6-4ac8-90af-8c0f4b54dc29" TYPE="ext4"
Edit The fstab
By adding the UUID of our RAID 0 to fstab, we are making sure that it is mounted automatically when the Virtual Machine starts.
sudo vi /etc/fstab

[i]
UUID=f1b1efbd-bfc6-4ac8-90af-8c0f4b54dc29 /data ext4 defaults 0 2
[esc]
[:w]
[:q]
Verify and Test
Be sure to review and test the newly added configurations. Keep in mind that an invalid fstab can prevent you from successfully restarting the Virtual Machine.
When something goes wrong and the Virtual Machine is stuck in the boot cycle, attach the OS Disk to another Linux Virtual Machine and correct the fstab. Once everything is valid, you can rehydrate the Virtual Machine.
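As a rough sketch of that rescue path, assuming the broken OS Disk shows up as /dev/sdc on the helper Virtual Machine (the device name will vary, so confirm it with lsblk first), the repair could look like this:

# On the rescue VM: find the attached OS Disk, mount its root partition,
# and fix the offending fstab entry. /dev/sdc1 is an assumption.
lsblk
sudo mkdir -p /rescue
sudo mount /dev/sdc1 /rescue
sudo vi /rescue/etc/fstab
sudo umount /rescue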
sudo cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Jul 22 19:41:26 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=427e4cf4-85d2-4b58-ac5b-a5c12d0b70dd /     ext4    defaults    1 1
UUID=f1b1efbd-bfc6-4ac8-90af-8c0f4b54dc29 /data ext4    defaults    0 2

sudo mount -a
mount

proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=851076k,nr_inodes=212769,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/sda1 on / type ext4 (rw,relatime,seclabel,data=ordered)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=34,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
/dev/sdb1 on /mnt/resource type ext4 (rw,relatime,seclabel,data=ordered)
/dev/md/data on /data type ext4 (rw,relatime,seclabel,stripe=256,data=ordered)
The Big Move
Shutdown The Virtual Machine
Before you copy your Virtual Machine and RAID Data Disks, be sure to shut down the Virtual Machine!
$servicename = "centos-classic-img-inst"
$vmname = "centos-classic-img-inst"

Get-AzureVM -ServiceName $servicename `
            -Name $vmname `
            | Stop-AzureVM -Force

# Get a reference to the Azure VM
$vm = Get-AzureVM -Name $vmname -ServiceName $servicename

# Wait for the VM to reach the StoppedDeallocated state
Write-Output $('VM state is ' + $vm.InstanceStatus)

while($vm.InstanceStatus -ne 'StoppedDeallocated')
{
    Start-Sleep -s 10
    $vm = Get-AzureVM -Name $vmname -ServiceName $servicename
    Write-Output $('VM state is ' + $vm.InstanceStatus)
}
Setup The Target Environment
Then, make sure that you have an Azure Storage Account in the target Datacenter.
$destinationStorageAccountName = 'centosclassicwest'

New-AzureStorageAccount -StorageAccountName $destinationStorageAccountName `
                        -Location 'West US'

$container = 'vhds'

# Destination Storage Account Context
$destinationKey = (Get-AzureStorageKey -StorageAccountName $destinationStorageAccountName).Primary

$destinationContext = New-AzureStorageContext -StorageAccountName $destinationStorageAccountName `
                                              -StorageAccountKey $destinationKey

# Create the destination container
New-AzureStorageContainer -Name $container `
                          -Context $destinationContext
Copy Assets to The Target Environment
Copying the VHDs is done by starting asynchronous copy jobs on the Azure Storage service. The following example illustrates how to start such a job and monitor its progress using the PowerShell cmdlets.
# Source Storage Account Context
$sourceStorageAccountName = "trie5d03826"

$sourceKey = (Get-AzureStorageKey -StorageAccountName $sourceStorageAccountName).Primary

$sourceContext = New-AzureStorageContext -StorageAccountName $sourceStorageAccountName `
                                         -StorageAccountKey $sourceKey

# Find the VHDs to copy
Get-AzureStorageBlob -Container 'vhds' -Context $sourceContext

Container Uri: https://trie5d03826.blob.core.windows.net/vhds

Name
centos-classic-img-inst-disk2.vhd    PageBlob    10737418752
centos-classic-img-inst-disk3.vhd    PageBlob    10737418752
dknoyt0h.gjf201508191319450497.vhd   PageBlob    32212255232

# Copy dknoyt0h.gjf201508191319450497.vhd
$blobName = 'dknoyt0h.gjf201508191319450497.vhd'

$blobCopy = Start-AzureStorageBlobCopy -DestContainer $container `
                                       -DestContext $destinationContext `
                                       -SrcBlob $blobName `
                                       -Context $sourceContext `
                                       -SrcContainer $container

# Blob Copy Progress
while(($blobCopy | Get-AzureStorageBlobCopyState).Status -eq "Pending")
{
    Start-Sleep -s 30
    $blobCopy | Get-AzureStorageBlobCopyState
}

# Copy centos-classic-img-inst-disk2.vhd
$blobName = 'centos-classic-img-inst-disk2.vhd'

$blobCopy = Start-AzureStorageBlobCopy -DestContainer $container `
                                       -DestContext $destinationContext `
                                       -SrcBlob $blobName `
                                       -Context $sourceContext `
                                       -SrcContainer $container

# Blob Copy Progress
while(($blobCopy | Get-AzureStorageBlobCopyState).Status -eq "Pending")
{
    Start-Sleep -s 30
    $blobCopy | Get-AzureStorageBlobCopyState
}

# Copy centos-classic-img-inst-disk3.vhd
$blobName = 'centos-classic-img-inst-disk3.vhd'

$blobCopy = Start-AzureStorageBlobCopy -DestContainer $container `
                                       -DestContext $destinationContext `
                                       -SrcBlob $blobName `
                                       -Context $sourceContext `
                                       -SrcContainer $container

# Blob Copy Progress
while(($blobCopy | Get-AzureStorageBlobCopyState).Status -eq "Pending")
{
    Start-Sleep -s 30
    $blobCopy | Get-AzureStorageBlobCopyState
}
Restore The CentOS Virtual Machine
Bringing the Virtual Machine back to life is a standard process. We first need to create a VM Image from the OS Disk that we copied. Then, we can hydrate the Virtual Machine from this image.
Before we open an SSH session, it's important to reattach the RAID Data Disks.
Set-AzureSubscription -SubscriptionName (Get-AzureSubscription -Current).SubscriptionName `
                      -CurrentStorageAccountName 'centosclassicwest'

# Add the copied OS Disk VHD as a VM Image
Add-AzureVMImage -ImageName centos-classic-raid0-restore `
                 -OS Linux `
                 -MediaLocation "https://centosclassicwest.blob.core.windows.net/vhds/dknoyt0h.gjf201508191319450497.vhd"

# Verify that the VM Image has been added as expected
Get-AzureVMImage | Where-Object {$_.Location -eq 'West US'} `
                 | Where-Object {$_.ImageName -eq 'centos-classic-raid0-restore'} `
                 | Select-Object { $_.ImageName , $_.Location }

$servicename = "centos-classic-raid0"
$vmname = "centos-classic-raid0"

# Create the Virtual Machine using the newly created VM Image
New-AzureQuickVM -Linux `
                 -ServiceName $servicename `
                 -Name $vmname `
                 -ImageName centos-classic-raid0-restore `
                 -InstanceSize 'Small' `
                 -Password 'Microsoft!' `
                 -Location 'West US' `
                 -LinuxUser 'brisebois' `
                 -Verbose

# Stop the VM in order to add the data disks
Get-AzureVM -ServiceName $servicename `
            -Name $vmname `
            | Stop-AzureVM -Force

# Get a reference to the Azure VM
$vm = Get-AzureVM -Name $vmname -ServiceName $servicename

# Wait for the VM to reach the StoppedDeallocated state
Write-Output $('VM state is ' + $vm.InstanceStatus)

while($vm.InstanceStatus -ne 'StoppedDeallocated')
{
    Start-Sleep -s 10
    $vm = Get-AzureVM -Name $vmname -ServiceName $servicename
    Write-Output $('VM state is ' + $vm.InstanceStatus)
}

# Add the data disks
Get-AzureVM -ServiceName $servicename `
            -Name $vmname `
            | Add-AzureDataDisk -ImportFrom `
                                -MediaLocation "https://centosclassicwest.blob.core.windows.net/vhds/centos-classic-img-inst-disk2.vhd" `
                                -DiskLabel "Disk 0" `
                                -LUN 0 `
            | Add-AzureDataDisk -ImportFrom `
                                -MediaLocation "https://centosclassicwest.blob.core.windows.net/vhds/centos-classic-img-inst-disk3.vhd" `
                                -DiskLabel "Disk 1" `
                                -LUN 1 `
            | Update-AzureVM

# Start the Virtual Machine
Get-AzureVM -ServiceName $servicename `
            -Name $vmname `
            | Start-AzureVM
The Moment of Truth
We're now ready to open an SSH session. This is definitely the moment of truth. If something went wrong, we may not be able to log in. On the other hand, if the stars align, we will be presented with something similar to the following.

Notice that I connected to centos-classic-raid0, yet the prompt still reads centos-classic-img-inst. This comes from the fact that I did not generalize and capture the Virtual Machine as an Image; this was a lift and shift from one datacenter to another.
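To convince yourself that the RAID 0 really survived the move, a quick check from the new SSH session might look like the following sketch.

# Confirm that the md array was re-assembled and that /data is mounted.
cat /proc/mdstat
df -h /data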