Scaling Azure Cloud Services

When we build applications for the cloud, we usually favor scaling out over scaling up. Consequently, scaling in and out is supported out of the box on Microsoft Azure. On rare occasions, however, there is a business need to scale up during peak hours and scale down during quiet hours. This blog post will show you how to achieve this with the help of Microsoft Azure Automation and PowerShell.

Keep in mind that scaling up and down requires us to be creative.

Getting Ready

Azure Cloud Services define the Virtual Machine size in the ServiceDefinition.csdef file. Unfortunately, this file is part of the Cloud Service package, so scaling to a different Virtual Machine size requires a redeployment.

This limitation requires us to create an individual package for each Virtual Machine size that we will use to scale our Cloud Service up and down.

For this blog post, we will use the Cloud Service I created in my previous post about deploying a Console Application to Microsoft Azure to create my packages. The first package will be sized to an ExtraSmall (1 CPU, 768 MB RAM, 19 GB DISK) Virtual Machine. This will be the default deployment package. We will schedule this package to be deployed to the production slot between the hours of 8 PM and 6 AM. The second package will be sized to an ExtraLarge (8 CPU, 14 GB RAM, 2039 GB DISK) Virtual Machine. It will be deployed to the production slot of my Cloud Service between the hours of 6 AM and 8 PM.

To create the first package, we must edit the ServiceDefinition.csdef file and set the vmsize of the ConsoleWorkerRole to ExtraSmall. The following is the updated ServiceDefinition.csdef that we will use to create the scaled-down package.

<?xml version="1.0" encoding="utf-16"?>
<ServiceDefinition xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   name="console-lift-and-shift"
                   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="ConsoleWorkerRole" vmsize="ExtraSmall">
    <Startup>
      <Task commandLine="setup_worker.cmd &gt; log.txt" executionContext="elevated">
        <Environment>
          <Variable name="EMULATED">
            <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
          </Variable>
          <Variable name="RUNTIMEID" value="" />
          <Variable name="RUNTIMEURL" value="" />
        </Environment>
      </Task>
      <Task commandLine=".\startup.cmd &gt; startup_log.txt" executionContext="elevated" />
    </Startup>
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="tcp" port="80" />
    </Endpoints>
    <Runtime>
      <Environment>
        <Variable name="PORT">
          <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/Endpoints/Endpoint[@name='HttpIn']/@port" />
        </Variable>
        <Variable name="EMULATED">
          <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
        </Variable>
      </Environment>
      <EntryPoint>
        <ProgramEntryPoint commandLine="worker.cmd" setReadyOnProcessStart="true" />
      </EntryPoint>
    </Runtime>
  </WorkerRole>
</ServiceDefinition>

In order to create the scaled-down package for the Cloud Service, we must open Windows PowerShell ISE and navigate to the folder that contains the Cloud Service.

The Cloud Service for this blog post is at the following location:

C:\users\<user id>\Documents\demo-lift-and-shift\console-lift-and-shift
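From the ISE, moving to that folder is a one-liner (the <user id> segment is a placeholder for your own profile folder):

```powershell
# Move to the folder that contains the Cloud Service project
Set-Location 'C:\users\<user id>\Documents\demo-lift-and-shift\console-lift-and-shift'
```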

In the PowerShell console, we execute the following command to create a deployment package for the Cloud Service.

# Package the Cloud Service
Save-AzureServiceProjectPackage

This produces a package named cloud_package.cspkg. It is located in the folder that contains the Cloud Service.

Next, we must upload the cloud_package.cspkg file and the ServiceConfiguration.Cloud.cscfg file to Azure Blob Storage. This can be done from tools like Visual Studio, Azure Management Studio or PowerShell.

For this example, we will use PowerShell to rename and upload the Cloud Service artifacts (Package & Configurations) to Azure Blob Storage.

In order for this to work, we need to import the publish profile from our Microsoft Azure Subscription. PowerShell will then use it to connect to Microsoft Azure.

In the PowerShell console, execute the following commands.

# Opens a website that allows us to download the Publish Profile
# associated with our Microsoft Azure Subscription
Get-AzurePublishSettingsFile

# Save the publish settings file to your downloads folder,
# then import it using the following command.
Import-AzurePublishSettingsFile -PublishSettingsFile 'C:\Users\\Downloads\-credentials.publishsettings'

Then we make sure to set our Azure subscription as the default subscription for the local PowerShell environment.

Select-AzureSubscription -Default -SubscriptionName 'my subscription name'

Before we upload anything to Azure Storage, we must first make sure that we have a provisioned Storage Account.
Using the following commands I will create a new Storage Account that will be used to store the Cloud Service artifacts.

New-AzureStorageAccount -StorageAccountName 'scaleupdowndemopkgs' `
                        -Label 'scale up down demo deployment packages' `
                        -Location 'East US' `
                        -Type 'Standard_GRS' `
                        -Verbose -WarningAction SilentlyContinue `
                        -ErrorVariable e
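Provisioning can take a moment; before uploading anything, a quick sketch to confirm the account is ready (using the same classic Azure cmdlets):

```powershell
# Confirm the Storage Account exists and inspect its status
Get-AzureStorageAccount -StorageAccountName 'scaleupdowndemopkgs' |
    Select-Object StorageAccountName, Location, StorageAccountStatus
```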

Now using the following PowerShell function, we will upload the artifacts to our new Azure Storage Account.

function Set-BlobContent
{
    Param
    (
        [Parameter(Mandatory=$true)]
        [string]
        $StorageAccountName,

        [Parameter(Mandatory=$true)]
        [string]
        $StorageContainer,

        [Parameter(Mandatory=$true)]
        [string]
        $FilePath,

        [Parameter(Mandatory=$true)]
        [string]
        $BlobName
    )

    # Setup the Storage Context that is used for Storage Operations

    $AccountKeys = Get-AzureStorageKey -StorageAccountName $StorageAccountName `
                                       -Verbose `
                                       -WarningAction Stop `
                                       -ErrorVariable e

    $StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName `
                                              -StorageAccountKey $AccountKeys.Primary `
                                              -Verbose `
                                              -WarningAction Stop `
                                              -ErrorVariable e

    try
    {
        Write-Verbose('Try create Container')
        # -ErrorAction Stop turns a non-terminating error into one the
        # catch block below can actually handle
        $newcontainer = New-AzureStorageContainer -Context $StorageContext `
                                                  -Container $StorageContainer `
                                                  -Verbose `
                                                  -WarningAction Ignore `
                                                  -ErrorAction Stop `
                                                  -ErrorVariable e
    }
    catch
    {
        Write-Verbose('Container already exists')
    }

    # Upload and rename the Cloud Service package to Blob Storage

    Set-AzureStorageBlobContent -File $FilePath `
                                -Container $StorageContainer `
                                -Context $StorageContext `
                                -Blob $BlobName `
                                -Force `
                                -Verbose `
                                -WarningAction Stop `
                                -ErrorVariable e
}

Uploading the Cloud Service package provides us with the chance to give the package file a meaningful name. For the purpose of this example, I renamed the package file to extra_small_vm_cloud_package.cspkg.

Set-BlobContent -StorageAccountName 'scaleupdowndemopkgs' `
                -StorageContainer 'packages' `
                -FilePath 'C:\Users\\Documents\demo-lift-and-shift\console-lift-and-shift\cloud_package.cspkg' `
                -BlobName 'extra_small_vm_cloud_package.cspkg'

Then we must update the ServiceConfiguration.Cloud.cscfg file and set the instance count to 1. This allows the Cloud Service to scale in as it scales down.

<?xml version="1.0" encoding="utf-16"?>
<ServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" serviceName="lift-and-shift-demo" osFamily="4" osVersion="*" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="ConsoleWorkerRole">
    <ConfigurationSettings />
    <Instances count="1" />
    <Certificates />
  </Role>
</ServiceConfiguration>

Using the same PowerShell function, we will upload the updated ServiceConfiguration.Cloud.cscfg and give it a meaningful name. For the purpose of this example, I renamed the configuration file to ServiceConfiguration.Scaled-In-Cloud.cscfg.

Set-BlobContent -StorageAccountName 'scaleupdowndemopkgs' `
                -StorageContainer 'packages' `
                -FilePath 'C:\Users\\Documents\demo-lift-and-shift\console-lift-and-shift\ServiceConfiguration.Cloud.cscfg' `
                -BlobName 'ServiceConfiguration.Scaled-In-Cloud.cscfg'

To create the second package, we must edit the ServiceDefinition.csdef file and set the vmsize of the ConsoleWorkerRole to ExtraLarge. The following is the updated ServiceDefinition.csdef that will be used to create the scaled-up package.

<?xml version="1.0" encoding="utf-16"?>
<ServiceDefinition xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   name="console-lift-and-shift"
                   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="ConsoleWorkerRole" vmsize="ExtraLarge">
    <Startup>
      <Task commandLine="setup_worker.cmd &gt; log.txt" executionContext="elevated">
        <Environment>
          <Variable name="EMULATED">
            <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
          </Variable>
          <Variable name="RUNTIMEID" value="" />
          <Variable name="RUNTIMEURL" value="" />
        </Environment>
      </Task>
      <Task commandLine=".\startup.cmd &gt; startup_log.txt" executionContext="elevated" />
    </Startup>
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="tcp" port="80" />
    </Endpoints>
    <Runtime>
      <Environment>
        <Variable name="PORT">
          <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/Endpoints/Endpoint[@name='HttpIn']/@port" />
        </Variable>
        <Variable name="EMULATED">
          <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
        </Variable>
      </Environment>
      <EntryPoint>
        <ProgramEntryPoint commandLine="worker.cmd" setReadyOnProcessStart="true" />
      </EntryPoint>
    </Runtime>
  </WorkerRole>
</ServiceDefinition>

In the PowerShell console, we must execute the following command to create the deployment package for the Cloud Service.

# Package the Cloud Service
Save-AzureServiceProjectPackage

This will produce a package named cloud_package.cspkg. It will be created in the folder that contains the Cloud Service.

Before we upload the new package to Azure Storage, we update the ServiceConfiguration.Cloud.cscfg file and set the instance count to 5. This allows the Cloud Service to scale out as it scales up.

<?xml version="1.0" encoding="utf-16"?>
<ServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" serviceName="lift-and-shift-demo" osFamily="4" osVersion="*" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="ConsoleWorkerRole">
    <ConfigurationSettings />
    <Instances count="5" />
    <Certificates />
  </Role>
</ServiceConfiguration>

Uploading the Cloud Service package provides us with the chance to give the package file a meaningful name. For the purpose of this example, I renamed the package file to extra_large_vm_cloud_package.cspkg. I also renamed the updated ServiceConfiguration.Cloud.cscfg to ServiceConfiguration.Scaled-Out-Cloud.cscfg.

Set-BlobContent -StorageAccountName 'scaleupdowndemopkgs' `
                -StorageContainer 'packages' `
                -FilePath 'C:\Users\\Documents\demo-lift-and-shift\console-lift-and-shift\cloud_package.cspkg' `
                -BlobName 'extra_large_vm_cloud_package.cspkg'

Set-BlobContent -StorageAccountName 'scaleupdowndemopkgs' `
                -StorageContainer 'packages' `
                -FilePath 'C:\Users\\Documents\demo-lift-and-shift\console-lift-and-shift\ServiceConfiguration.Cloud.cscfg' `
                -BlobName 'ServiceConfiguration.Scaled-Out-Cloud.cscfg'

Automate and Schedule!

Microsoft Azure Automation is all about automating frequent, time-consuming, and error-prone cloud management tasks. Azure Automation helps you spend more of your time focused on work that adds business value. By reducing errors and boosting efficiency, it can also help lower your operational costs.

Setting this up for the first time can be tricky, but don’t worry, there are some great resources that will help us overcome this challenge.

By default, Microsoft Azure Automation does not have access to your Azure resources. It must be configured appropriately; to do so, we follow the steps described in Azure Automation: Authenticating to Azure using Azure Active Directory.

Note: Those who have a microsoft.onmicrosoft.com subscription may hit a wall using this method. If you know of a workaround please share it through the comment section on this post.

OK. Now we're ready to build a Windows PowerShell workflow (runbook). Having no prior experience with runbooks, I peeked at pre-built samples from the Runbook Gallery. This Gallery allows you to browse and import runbooks to your Automation account from the Azure Management Portal. It gave me a great deal of insight into what I was up against and got me asking a bunch of questions. Here are a few of the resources that answered most of my questions.

Armed with these resources, I wrote the following runbook. It deploys a Cloud Service package and the desired configuration from Azure Storage to the production slot of a Cloud Service. If the Cloud Service does not exist, it creates it. It then tries to update the Cloud Service; if that fails, it creates a brand new deployment.

workflow Invoke-DeployPackage
{
   param
   (
      [parameter(Mandatory=$true, `
                 HelpMessage = 'The name given to the Windows PowerShell Credentials located in the Assets of this Azure Automation instance')]
      [String]
      $AzureCredentialsName,

      [parameter(Mandatory=$true, `
                 HelpMessage = 'The name of the Microsoft Azure Subscription that contains the resources you wish to deploy')]
      [String]
      $AzureSubscriptionName,

      [parameter(Mandatory=$true, `
                 HelpMessage = 'The name of the service you wish to deploy')]
      [String]
      $ServiceName,

      [parameter(Mandatory=$true, `
                 HelpMessage = 'The location of the service you wish to deploy, e.g. "East US"')]
      [String]
      $ServiceLocation,

      [parameter(Mandatory=$true, `
                 HelpMessage = 'The name of the Microsoft Azure storage account that contains the Cloud Service Packages')]
      [String]
      $StorageAccountName,

      [parameter(Mandatory=$true, `
                 HelpMessage = 'The name of the Blob container that contains the Cloud Service Packages')]
      [String]
      $StorageContainerName,

      [parameter(Mandatory=$true, HelpMessage = 'The name of the Cloud Service Package file')]
      [String]
      $PackageBlobName,

      [parameter(Mandatory=$true, HelpMessage = 'The name of the Cloud Service Configurations file')]
      [String]
      $ConfigurationBlobName
    )

    $VerbosePreference = 'Continue'

    # Mark the start time of the script execution
    $StartTime = Get-Date

    $DeploymentLable = $ServiceName + ' (' + $StartTime +')'

    Write-Verbose ('Connecting to Microsoft Azure')

    $Credentials = Get-AutomationPSCredential `
                       -Name $AzureCredentialsName

    Add-AzureAccount `
       -Credential $Credentials

    InlineScript{

        Write-Verbose ('Selecting Azure Subscription')

        Select-AzureSubscription -SubscriptionName $Using:AzureSubscriptionName

        $StorageAccount = (Get-AzureStorageAccount -StorageAccountName $Using:StorageAccountName).Label

        Write-Verbose ('Setting the Azure Subscription and Storage Accounts')

        Set-AzureSubscription `
            -SubscriptionName $Using:AzureSubscriptionName `
            -CurrentStorageAccount $StorageAccount

        Write-Verbose ('[Start] Validating Azure cloud service environment {0}' -f $Using:ServiceName)

        try
        {
            $CloudService = Get-AzureService `
                                -ServiceName $Using:ServiceName

            Write-Verbose ('cloud service {0} in location {1} exists!' -f $Using:ServiceName, $Using:ServiceLocation)
        }
        catch
        {
            #Create
            Write-Verbose ('[Start] creating cloud service {0} in location {1}' -f $Using:ServiceName, $Using:ServiceLocation)

            New-AzureService `
                -ServiceName $Using:ServiceName `
                -Location $Using:ServiceLocation

            Write-Verbose ('[Finish] creating cloud service {0} in location {1}' -f $Using:ServiceName, $Using:ServiceLocation)
        }

        Write-Verbose ('[Finish] Validating Azure cloud service environment {0}' -f $Using:ServiceName)

        $TempFileLocation = "C:\$Using:ConfigurationBlobName"

        Write-Verbose ('Downloading Service Configurations from Azure Storage')

        Get-AzureStorageBlobContent `
            -Container $Using:StorageContainerName `
            -Blob  $Using:ConfigurationBlobName `
            -Destination $TempFileLocation `
            -Force

        Write-Verbose('Downloaded Configuration File: '+ $TempFileLocation)

        Write-Verbose('Getting Package Url from Azure Storage: '+ $Using:PackageBlobName)

        $blob = $(Get-AzureStorageBlob -Blob $Using:PackageBlobName -Container $Using:StorageContainerName)

        $PackageUri = $blob.ICloudBlob.Uri.AbsoluteUri

        Write-Verbose('Package Url: '+ $PackageUri)

        try
        {
            Write-Verbose('Attempting to Update an Existing Deployment')
            Set-AzureDeployment `
                -Package $PackageUri `
                -Configuration $TempFileLocation `
                -Slot Production `
                -Mode Simultaneous `
                -Label $Using:DeploymentLable `
                -ServiceName  $Using:ServiceName `
                -Upgrade `
                -Force `
                -Verbose

        }catch
        {
            Write-Output $error

            Write-Verbose('Attempting to Deploy the service')

            New-AzureDeployment `
                -Package $PackageUri `
                -Configuration $TempFileLocation `
                -Slot Production `
                -Label $Using:DeploymentLable `
                -ServiceName  $Using:ServiceName `
                -Verbose
        }
    }
}
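Before wiring the runbook to a schedule, it helps to see a concrete invocation. The values below reuse the artifact names from this post; the credential asset name is hypothetical and must match the asset you create in the next section:

```powershell
# Example invocation using the scaled-down artifacts uploaded earlier.
# 'AutomationAzureADUser' is a hypothetical credential asset name.
Invoke-DeployPackage `
    -AzureCredentialsName 'AutomationAzureADUser' `
    -AzureSubscriptionName 'my subscription name' `
    -ServiceName 'console-lift-and-shift' `
    -ServiceLocation 'East US' `
    -StorageAccountName 'scaleupdowndemopkgs' `
    -StorageContainerName 'packages' `
    -PackageBlobName 'extra_small_vm_cloud_package.cspkg' `
    -ConfigurationBlobName 'ServiceConfiguration.Scaled-In-Cloud.cscfg'
```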

Using your browser, let's navigate to http://manage.windowsazure.com and log into our Azure account. From this management portal, I will show you how to schedule your runbook to execute once a day at 8 PM to scale down and once a day at 6 AM to scale up.

From the bottom of the management portal, click on the + button and make the selections seen below. Then create a new runbook whose name matches the workflow listed above (for this to work, both names must be identical). For the purpose of this blog post, I also opted to create a new Automation Account. This account will be used to host the runbook, schedules and assets.

create-runbook

Once the Automation Account and Runbook have been created, navigate to the new Automation Account and click on the Assets tab.

adding-an-asset

Click on ADD SETTING and then on ADD CREDENTIAL.

add-runbook-credentials-0

Select Windows PowerShell Credential from the CREDENTIAL TYPE dropdown, then give this asset a name. It will be used by the runbook to retrieve the credentials that provide access to the Azure Storage Account containing the packages and configurations of the Cloud Service.

add-runbook-credentials

Enter the user name and password for the Azure Active Directory user that was created earlier in this post. This user must be co-administrator to your Azure Subscription.

add-runbook-credentials-2

Once the asset is created, it will appear in the assets list. This list is composed of Connections, Credentials, Variables and Schedules that can be used or linked to runbooks.

add-runbook-credentials-3

Navigate to the RUNBOOKS tab and select the Invoke-DeployPackage runbook.

navigate-to-runbooks

Navigate to the AUTHOR tab. The DRAFT tab should be active by default. At this point, the runbook consists of an empty workflow. Replace it with the full runbook script listed above. Click SAVE and then TEST. Provide all the necessary information and see whether the runbook is able to deploy the Cloud Service. Once everything looks OK, click PUBLISH.

runbook-draft
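If you prefer to skip the portal editor, the draft can also be uploaded and published with the Azure Automation cmdlets from the classic Azure module (a sketch; the account name and file path are hypothetical, and parameter names should be verified with Get-Help):

```powershell
# Upload the workflow file as the runbook draft, then publish it
Set-AzureAutomationRunbookDefinition `
    -AutomationAccountName 'MyAutomationAccount' `
    -Name 'Invoke-DeployPackage' `
    -Path 'C:\runbooks\Invoke-DeployPackage.ps1' `
    -Overwrite

Publish-AzureAutomationRunbook `
    -AutomationAccountName 'MyAutomationAccount' `
    -Name 'Invoke-DeployPackage'
```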

Navigate to the SCHEDULE tab and click on LINK TO A NEW SCHEDULE.

create-schedule

There are two schedules that need to be created. The first one is to scale-down the Cloud Service. Give the schedule a meaningful name and description.

create-schedule-1

Then configure the schedule. As originally stated, we want to scale down every day at 8 PM.

create-schedule-2

Provide the configuration values for the scheduled execution of the Invoke-DeployPackage runbook.

create-schedule-3

Once the schedule is created, it will be listed under the SCHEDULE tab.

create-schedule-4

It is possible to go back and view the schedule’s details. Click on VIEW DETAILS located at the bottom of the list.

Now it’s time to create the scale-up schedule. Click on LINK located at the bottom of the schedule list and select Link to a new Schedule.

create-schedule-5

To create the scale-up schedule, the wizard will walk through the same screens as it did for the scale-down schedule. Be sure to provide a meaningful name and description. These values can be quite useful when the list of schedules gets to be lengthy.

create-schedule-6

As previously stated, we want to scale up the Cloud Service every day at 6:00 AM.

create-schedule-7

Provide the configuration values for the scheduled execution of the Invoke-DeployPackage runbook. This time, specify the extra large package and configuration file names.

create-schedule-8

The schedule will appear in the schedule list. Note that this is a great place to get information about the schedule’s next execution time.

create-schedule-9
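For those who prefer scripting over the portal, the same schedule can be created and linked with the classic Azure Automation cmdlets (a sketch; the account and schedule names are hypothetical, and the parameter set should be double-checked with Get-Help):

```powershell
# Daily schedule that fires at 6 AM
New-AzureAutomationSchedule `
    -AutomationAccountName 'MyAutomationAccount' `
    -Name 'Scale-Up-6AM' `
    -StartTime '06:00' `
    -DayInterval 1

# Link the schedule to the runbook and pass the scale-up parameters
Register-AzureAutomationScheduledRunbook `
    -AutomationAccountName 'MyAutomationAccount' `
    -Name 'Invoke-DeployPackage' `
    -ScheduleName 'Scale-Up-6AM' `
    -Parameters @{
        AzureCredentialsName  = 'AutomationAzureADUser'
        AzureSubscriptionName = 'my subscription name'
        ServiceName           = 'console-lift-and-shift'
        ServiceLocation       = 'East US'
        StorageAccountName    = 'scaleupdowndemopkgs'
        StorageContainerName  = 'packages'
        PackageBlobName       = 'extra_large_vm_cloud_package.cspkg'
        ConfigurationBlobName = 'ServiceConfiguration.Scaled-Out-Cloud.cscfg'
    }
```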

Once you navigate back to the DASHBOARD tab, you will notice that two schedules are active.

Azure-Automation

Within 24 hours, the DASHBOARD will light up with useful information about the execution of the runbooks. In this screen capture, we can observe that I had two successful executions and consumed 2 of the 500 minutes available at the free tier. The account can easily be scaled up if I need more execution time.

runbook-dashboard

Wrapping it up

This was a long one, but I hope it helps you through the process. Getting this set up for the first time required that I read up on runbooks and PowerShell. It also required a lot of testing, and I found that using the portal slowed me down, so I decided to develop and test my runbook from Windows PowerShell ISE. This allowed me to go through development cycles at a much faster rate.
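A trick that sped up those cycles: shim the Automation-only cmdlet so the same workflow file runs unmodified in the ISE (a sketch; the file name is whatever you saved the workflow as):

```powershell
# Get-AutomationPSCredential only exists inside Azure Automation.
# Shim it locally so the workflow prompts for credentials instead.
function Get-AutomationPSCredential
{
    param([string] $Name)
    Get-Credential -Message $Name
}

# Dot-source the file that contains the Invoke-DeployPackage workflow,
# then call it with the same parameters Azure Automation would pass.
. .\Invoke-DeployPackage.ps1
```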

Configuring the Credentials necessary for Microsoft Azure Automation also took me some time to figure out because I was using the wrong Azure Active Directory.

In summary, here are the steps required to set up this scenario.

  1. Create the Cloud Service packages
    1. The first with the scaled-down VM size
    2. The second with the scaled-up VM size
  2. Prepare 2 Cloud Service configuration files
    1. The first with the scaled-in instance count
    2. The second with the scaled-out instance count
  3. Upload the Cloud Service packages and configuration files to Azure Storage
  4. Create an Automation account
  5. Set up Azure Automation: Authenticating to Azure using Azure Active Directory
  6. Author the runbook (draft, test and publish)
  7. Schedule the runbook
    1. Once to scale up every day at 6 AM
    2. Once to scale down every day at 8 PM

How does this scenario translate to lower operational costs?

The following comparison uses the pay-as-you-go pricing from December 2014.

SCENARIO: scaling down during quiet hours

VM          HOURS  INSTANCES  COST/HOUR  TOTAL
ExtraSmall  10     1          $0.022     $0.22
ExtraLarge  14     5          $0.675     $47.25
Daily Cost                               $47.47

SCENARIO: staying at maximum capacity

VM          HOURS  INSTANCES  COST/HOUR  TOTAL
ExtraSmall  0      0          $0.022     $0.00
ExtraLarge  24     5          $0.675     $81.00
Daily Cost                               $81.00

Let's take a look at the operational cost difference between both scenarios.

DIFFERENCE

SCENARIO       1 DAY    1 MONTH     1 YEAR
scaling down   $47.47   $1,424.10   $17,326.55
full capacity  $81.00   $2,430.00   $29,565.00
savings        $33.53   $1,005.90   $12,238.45
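The daily figures come out of simple arithmetic; this snippet reproduces them:

```powershell
# Scaled schedule: 10 hours of 1 ExtraSmall + 14 hours of 5 ExtraLarge
$scaledDown = (10 * 1 * 0.022) + (14 * 5 * 0.675)  # 47.47
# Full capacity: 24 hours of 5 ExtraLarge
$fullCapacity = 24 * 5 * 0.675                     # 81.00
$dailySavings = $fullCapacity - $scaledDown        # 33.53
"{0:C2} saved per day" -f $dailySavings
```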

I think the numbers speak for themselves. It's important to note that scaling dynamically, using features like Auto Scaling and schedule-based scaling, can dramatically affect your overall operational costs. Furthermore, Azure gives us the ability to scale up and down as well as in and out. Finding the right combination enables us to provide the appropriate amount of resources to satisfy the application's needs.

Be careful: over-provisioning means overspending. On the other hand, under-provisioning can result in decreased revenue and dissatisfied customers; it can even push customers to your competitors. Use telemetry data to identify the amount of resources required to strike a balance between operational costs, availability and responsiveness of your application.

4 responses to Scaling #Azure Cloud Services Up and Down Like Clockwork

  1. 

    Hi Alexandre,

    Great post! Want to add the runbook you created into the Azure Automation category on ScriptCenter (https://gallery.technet.microsoft.com/scriptcenter/site/search?f%5B0%5D.Type=RootCategory&f%5B0%5D.Value=WindowsAzure&f%5B0%5D.Text=Windows%20Azure&f%5B1%5D.Type=SubCategory&f%5B1%5D.Value=WindowsAzure_automation&f%5B1%5D.Text=Automation) so it shows up in the Runbook Gallery?

    Also, to answer your question on microsoft.onmicrosoft.com subscriptions, these are subscriptions hooked up to the Microsoft AD directory. This means any OrgID credential in the Microsoft AD directory will work, assuming it is a service admin on the Azure subscription. So Microsoft corporate credentials (for example xyz@microsoft.com) should work.


  2. 

    Great idea, I would be happy to apply it.
    Though I cannot make the runbook code to work.
    Is it possible to give examples for the parameters ?


