Run the Azure CLI inside Docker on a MacBook Pro

Laptop Setup

Boot Camp with Windows on one partition and OS X on the other.

A great way to manage your Windows Azure environment is to use a Docker container instead of PowerShell.
If you are new to automating your infrastructure and code, this is a great way to start on the right foot from day one.

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

Install Docker

Grab the latest version of Docker here.

After Installing Docker
1. Use Boot Camp to boot back into OS X.
2. In OS X, restart the machine (a warm restart).
3. Hold the Option key down to boot back into Windows.

This might look like a waste of time; however, it enables virtualisation in the firmware of the MacBook. OS X enables it by default and Windows does not, so a warm reboot from OS X back into Windows is a small hack to get virtualisation working.
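If you want to confirm that virtualisation really is enabled after the warm reboot, a rough check from a Windows PowerShell prompt is shown below (assuming Hyper-V is not yet installed; once a hypervisor is running, systeminfo reports that instead of the firmware flag):

# Look for "Virtualization Enabled In Firmware: Yes" under the Hyper-V Requirements section
systeminfo | Select-String "Virtualization", "hypervisor"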

Grab a Docker Image with the Azure CLI

Run the following command:

docker run -it microsoft/azure-cli

docker-install-azure

The above command connects to the Docker repository (Docker Hub) and downloads the image to run in a container. This gives you an isolated Linux environment from which you can manage your Windows Azure environment.

Azure Command Line Interface (CLI)

The Azure CLI comes pre-installed in this image. Run the following command to see what is available:

azure help

Look carefully at the image below. PowerShell was used to run Docker. However, once Docker is running, look at the prompt (root@a28193f1320d:/#). We are now inside a Linux environment (container a28193f1320d) and we have full control over our Azure resources from the command line.

Docker in Windows
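Before the CLI can actually manage anything, you need to authenticate it against your subscription. A minimal sequence from the container prompt (a sketch, using the interactive device login of this generation of the CLI) looks like this:

azure login          # follow the device-login instructions the CLI prints
azure account list   # confirm which subscription is active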

Now the Linux guys will start having some respect for us Windows guys. We are entering an age where we need to be technology agnostic.

Below, we are running a full-blown Linux environment from a Windows PowerShell prompt.

docker-linux

What is even cooler is that we are using a Linux environment to manage Azure, so we get all the usual Linux tools for free.

linuxtools

Good Habits
By using Docker with the Azure Command Line Interface, you put yourself in a good position to automate all your infrastructure and code requirements.

You will use the portal less and less to manage and deploy your Azure resources such as virtual machines, blobs and permissions.

Note: we are now using ARM (Azure Resource Manager). Some ARM features are not compatible with older, classic (Service Management) deployments. Read more about ARM.
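The CLI makes this distinction explicit: it runs in either ARM mode or the classic Service Management mode, and you can switch between them. A quick sketch:

azure config mode arm    # Azure Resource Manager mode (used throughout this post)
azure config mode asm    # classic Service Management mode, for older deployments
azure group list         # an ARM-mode command: list your resource groups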

Conclusion
You can deploy, update, or delete all the resources for your solution in a single, coordinated operation. You use a template for deployment and that template can work for different environments such as testing, staging, and production. Resource Manager provides security, auditing, and tagging features to help you manage your resources after deployment.
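As a rough sketch of what that looks like from the CLI (the resource group, deployment and template file names below are made up for illustration, and exact options can vary between CLI versions):

azure group create MyResourceGroup westus
azure group deployment create -g MyResourceGroup -n MyDeployment -f azuredeploy.json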

CLI Reference


help: Commands:
help: account Commands to manage your account information and publish settings
help: acs Commands to manage your container service.
help: ad Commands to display Active Directory objects
help: appserviceplan Commands to manage your Azure appserviceplans
help: availset Commands to manage your availability sets.
help: batch Commands to manage your Batch objects
help: cdn Commands to manage Azure Content Delivery Network (CDN)
help: config Commands to manage your local settings
help: datalake Commands to manage your Data Lake objects
help: feature Commands to manage your features
help: group Commands to manage your resource groups
help: hdinsight Commands to manage HDInsight clusters and jobs
help: insights Commands related to monitoring Insights (events, alert rules, autoscale settings, metrics)
help: iothub Commands to manage your Azure IoT hubs
help: keyvault Commands to manage key vault instances in the Azure Key Vault service
help: lab Commands to manage your DevTest Labs
help: location Commands to get the available locations
help: network Commands to manage network resources
help: policy Commands to manage your policies on ARM Resources.
help: powerbi Commands to manage your Azure Power BI Embedded Workspace Collections
help: provider Commands to manage resource provider registrations
help: quotas Command to view your aggregated Azure quotas
help: rediscache Commands to manage your Azure Redis Cache(s)
help: resource Commands to manage your resources
help: role Commands to manage role definitions
help: servermanagement Commands to manage Azure Server Managment resources
help: storage Commands to manage your Storage objects
help: tag Commands to manage your resource manager tags
help: usage Command to view your aggregated Azure usage data
help: vm Commands to manage your virtual machines
help: vmss Commands to manage your virtual machine scale sets.
help: vmssvm Commands to manage your virtual machine scale set vm.
help: webapp Commands to manage your Azure webapps
help:
help: Options:
help: -h, --help output usage information
help: -v, --version output the application version
help:
help: Current Mode: arm (Azure Resource Management)

Windows Azure Cloud Drive – Dealing with large VHDs and Blob Snapshot Restores

I ran into a scenario where I needed to transfer data from an Azure Cloud Drive VHD that was 250GB in size from production to pre-production, which means transferring the VHD from one Azure account to another.

The thing with VHDs and Cloud Drive is that the VHD is a page blob of fixed size, so even though I might only have 200MB of data in a 250GB VHD, I would need to download the full 250GB worth of data.
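As an aside, you can see how much of a page blob is actually allocated before deciding how to move it. Below is a minimal PowerShell sketch using the same v1.6 StorageClient assembly as the scripts later in this post; the account, container and blob names are placeholders:

Add-Type -Path 'C:\Program Files\Windows Azure SDK\v1.6\ref\Microsoft.WindowsAzure.StorageClient.dll'

$cred = New-Object Microsoft.WindowsAzure.StorageCredentialsAccountAndKey("youraccount", "yourkey")
$client = New-Object Microsoft.WindowsAzure.StorageClient.CloudBlobClient("https://youraccount.blob.core.windows.net", $cred)

$pageBlob = $client.GetPageBlobReference("drives/mydrive.vhd")
$pageBlob.FetchAttributes()

# Sum the valid page ranges to see how many bytes are actually in use
$usedBytes = 0
foreach ($range in $pageBlob.GetPageRanges())
{
	$usedBytes += ($range.EndOffset - $range.StartOffset + 1)
}

Write-Host ("Provisioned VHD size: {0:N0} bytes" -f $pageBlob.Properties.Length)
Write-Host ("Data actually in use: {0:N0} bytes" -f $usedBytes)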

The Solution?

Remote Desktop into the Azure virtual machine, mount the VHD manually, copy the data, zip it up and send it through other means. In essence, I only send or download the data that is actually used in the VHD, i.e. 200MB and not 250GB.

This utility can also restore from a blob snapshot backup of a VHD and mount it. This is ideal when you use blob snapshots as a backup mechanism and need to restore the snapshot VHD data to another Azure account as fast as possible.

Below is the code for the helper and you can download the source code for the project here:

NOTE: This application must be run in a Windows Azure Virtual Machine, it will NOT WORK on a development machine/emulator.

hg clone ssh://hg@bitbucket.org/romiko/mountclouddrive
hg clone https://romiko@bitbucket.org/romiko/mountclouddrive

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;
using MountCloudDrive.Properties;

namespace MountCloudDrive
{
    public class CloudDriveHotAssistant
    {
        private string restoreVHDName;
        private readonly StorageCredentialsAccountAndKey storageCredentials;
        private readonly CloudStorageAccount cloudStorageAccount;
        private CloudBlobClient blobClient;

        public CloudDriveHotAssistant(string accountName, string accountKey)
        {
            restoreVHDName = Settings.Default.DefaultRestoreDestination;
            storageCredentials = new StorageCredentialsAccountAndKey(accountName, accountKey);
            cloudStorageAccount = new CloudStorageAccount(storageCredentials, false);
            blobClient = new CloudBlobClient(cloudStorageAccount.BlobEndpoint, storageCredentials);
        }

        public CloudPageBlob GetBlobToRestore(string blobContainer, string blobFileName)
        {
            DateTime snapShotTime;
            var converted = DateTime.TryParse(Settings.Default.BlobSnapShotTime, out snapShotTime);

            CloudPageBlob pageBlob;

            if (converted)
                pageBlob = blobClient.GetPageBlobReference(string.Format(@"{0}\{1}", blobContainer, blobFileName), snapShotTime);
            else
                pageBlob = blobClient.GetPageBlobReference(string.Format(@"{0}\{1}", blobContainer, blobFileName));

            try
            {
                pageBlob.FetchAttributes();
            }
            catch (Exception ex)
            {
                Console.WriteLine("\r\nBlob Does Not Exist!");
                Console.WriteLine(ex);
                return null;
            }
            return pageBlob;
        }

        public void UnMountCurrentDrives()
        {
            foreach (var driveName in CloudDrive.GetMountedDrives())
            {
                var drive = new CloudDrive(driveName.Value, storageCredentials);
                Console.WriteLine(string.Format("\r\nUnmounting {0}", driveName.Key));
                Console.WriteLine("\r\nAre you sure Y/N");
                var key = Console.ReadKey();
                var decision = key.Key == ConsoleKey.Y;

                if (!decision)
                    continue;

                try
                {
                    drive.Unmount();
                }
                catch (Exception ex)
                {
                    Console.WriteLine(string.Format("\r\nUnmounting {0} FAILED.\r\n {1}", driveName.Key, ex));
                }
            }
        }

        public CloudPageBlob GetBlobReferenceToRestoreTo(string blobContainer)
        {
            return blobClient.GetPageBlobReference(string.Format(@"{0}\{1}", blobContainer, restoreVHDName));
        }

        public void MountCloudDrive(CloudPageBlob pageBlobSource, CloudPageBlob pageBlobDestination, string blobContainer)
        {
            pageBlobDestination.CopyFromBlob(pageBlobSource);
            Console.WriteLine(string.Format("\r\nAttempting to mount {0}", pageBlobDestination.Uri.AbsoluteUri));
            var myDrive = cloudStorageAccount.CreateCloudDrive(pageBlobDestination.Uri.AbsoluteUri);
            var drivePath = myDrive.Mount(0, DriveMountOptions.None);
            Console.WriteLine(string.Format("\r\nVHD mounted at {0}", drivePath));
        }
    }
}

Automate #WindowsAzure snapshot restores

Hi,
This is the last post in the series on automating the backup, purging and restoring of Azure blobs.

Below is a PowerShell script that takes a file containing snapshot URLs. It also supports the log output from the backup script, so you can simply paste that output into a file when you want to restore a complete backup set.

Remember, when using the backup script, ALWAYS save the output of the script as a reference so that you have the URLs of the snapshots you want to restore.

e.g. Sample restore.txt file.
[05:01:08]: [Publishing internal artifacts] Sending build.start.properties.gz file
[05:01:05]: Step 1/2: Command Line (14s)
[05:01:05]: [Step 1/2] in directory: C:\TeamCity\buildAgent\work\d9375448b88c1b75\Maintenance
[05:01:08]: [Step 1/2] Starting snapshot uniqueids
[05:01:08]: [Step 1/2] Found blob container uniqueids
[05:01:09]: [Step 1/2] https://uatmystory.blob.core.windows.net/uniqueids/agencies?snapshot=2012-04-22
[05:01:09]: [Step 1/2] T19:01:10.6488549Z
[05:01:09]: [Step 1/2] https://uatmystory.blob.core.windows.net/uniqueids/agency1-centres?snapshot=201
[05:01:09]: [Step 1/2] 2-04-22T19:01:10.8818083Z
[05:01:09]: [Step 1/2] https://uatmystory.blob.core.windows.net/uniqueids/agency1-clients?snapshot=201
[05:01:09]: [Step 1/2] 2-04-22T19:01:11.0257795Z
[05:01:09]: [Step 1/2] https://uatmystory.blob.core.windows.net/uniqueids/agency1-referrals?snapshot=2
[05:01:09]: [Step 1/2] 012-04-22T19:01:11.1717503Z

So the script will parse any restore file, find the URIs in it, and restore them.

#requires -version 2.0
param (
	[parameter(Mandatory=$true)] [string]$AzureAccountName,
	[parameter(Mandatory=$true)] [string]$AzureAccountKey,
	[parameter(Mandatory=$true)] [string]$FileContainingSnapshotAddresses
)

$ErrorActionPreference = "Stop"

if ((Get-PSSnapin -Registered -Name AzureManagementCmdletsSnapIn -ErrorAction SilentlyContinue) -eq $null)
{
	throw "AzureManagementCmdletsSnapIn missing. Install them from Https://www.cerebrata.com/Products/AzureManagementCmdlets/Download.aspx"
}

Add-PSSnapin AzureManagementCmdletsSnapIn -ErrorAction SilentlyContinue
Add-Type -Path 'C:\Program Files\Windows Azure SDK\v1.6\ref\Microsoft.WindowsAzure.StorageClient.dll'

$cred = New-Object Microsoft.WindowsAzure.StorageCredentialsAccountAndKey($AzureAccountName,$AzureAccountKey)
$client = New-Object Microsoft.WindowsAzure.StorageClient.CloudBlobClient("https://$AzureAccountName.blob.core.windows.net",$cred)

function RestoreSnapshot
{
	param ( $snapShotUri)
	Write-Host "Parsing snapshot restore for $SnapShotUri"

	$regex = new-object System.Text.RegularExpressions.Regex("https?://.*?/(devstoreaccount1/)?(?<containerName>.*?)/.*")
	$match = $regex.Match($snapShotUri)
	$container = $match.Groups["containerName"].Value
	$parsedUri = $match
	
	if($match.Value -eq "")
	{
		return
	}
		
	if ($container -eq $null)
	{
		Write-Host  "Could not determine the container for $snapShotUri, skipping snapshot restore"
	}
	else
	{
		Write-Host  "Restoring $snapShotUri" 
		Copy-Blob -BlobUrl $parsedUri -AccountName $AzureAccountName -AccountKey $AzureAccountKey -TargetBlobContainerName $container
		Write-Host  "Restore snapshot complete for $parsedUri"
	}
}

$fileContent = Get-Content $FileContainingSnapshotAddresses

foreach($uri in $fileContent)
{
	RestoreSnapshot $uri
}
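A typical invocation looks like this (the script file name is hypothetical, and restore.txt is the saved backup output shown above):

.\Restore-BlobSnapshots.ps1 -AzureAccountName uatmystory -AzureAccountKey yourStorageKey== -FileContainingSnapshotAddresses .\restore.txt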


Automate #Azure Blob Snapshot purging/deletes with @Cerebrata

The pricing model for snapshots can get rather complicated, so we need a way to automate the purging of snapshots.
Read how snapshots can accrue additional costs

Let's minimize these costs! We use this script to back up and manage snapshot retention for all our Neo4j databases hosted in the Azure cloud.

So the solution I have is:
We have a retention period in days for all snapshots, e.g. 30 days
We have a longer retention period for last-day-of-month backups, e.g. 180 days

Rules:
1. The purging will always ensure that there is always at least ONE snapshot, so it will never delete a backup if it is the only backup for a base blob.

2. The purging will delete snapshots greater than the retention period, respecting rule 1

3. The purging will delete snapshots greater than the last day month retention period, respecting rule 1

You can then schedule this script to run after the Backup Script in TeamCity or some other build server scheduler.
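For example, a TeamCity command-line build step might invoke the script below like this (the script file and container names are made up; the retention values match the 30 and 180 days mentioned above):

.\Purge-BlobSnapshots.ps1 -AzureAccountName romikoTown -AzureAccountKey yourStorageKey== -BlobContainers:@('container1','container2') -BackupRetentionDays 30 -BackupLastDayOfMonthRetentionDays 180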

param(
	[parameter(Mandatory=$true)] [string]$AzureAccountName,
	[parameter(Mandatory=$true)] [string]$AzureAccountKey,
	[parameter(Mandatory=$true)] [array]$BlobContainers, #Blob Containers to backup
	[parameter(Mandatory=$true)] [int]$BackupRetentionDays, #Days to keep snapshot backups
	[parameter(Mandatory=$true)] [int]$BackupLastDayOfMonthRetentionDays # Days to keep last day of month backups
)


if( $BackupRetentionDays -ge $BackupLastDayOfMonthRetentionDays )
{
	$message = "Argument Exception: BackupRetentionDays cannot be greater than or equal to BackupLastDayOfMonthRetentionDays"
	throw $message
}

Add-Type -Path 'C:\Program Files\Windows Azure SDK\v1.6\ref\Microsoft.WindowsAzure.StorageClient.dll'

$cred = New-Object Microsoft.WindowsAzure.StorageCredentialsAccountAndKey($AzureAccountName,$AzureAccountKey)
$client = New-Object Microsoft.WindowsAzure.StorageClient.CloudBlobClient("https://$AzureAccountName.blob.core.windows.net",$cred)

function PurgeSnapshots ($blobContainer)
{
	$container = $client.GetContainerReference($blobContainer)
	$options = New-Object  Microsoft.WindowsAzure.StorageClient.BlobRequestOptions
	$options.UseFlatBlobListing = $true
	$options.BlobListingDetails = [Microsoft.WindowsAzure.StorageClient.BlobListingDetails]::Snapshots

	$blobs = $container.ListBlobs($options)
	$baseBlobWithMoreThanOneSnapshot = $container.ListBlobs($options)| Group-Object Name | Where-Object {$_.Count -gt 1} | Select Name

	#Filter out blobs with more than one snapshot and only get SnapShots.
	$blobs = $blobs | Where-Object {$baseBlobWithMoreThanOneSnapshot  -match $_.Name -and $_.SnapshotTime -ne $null} | Sort-Object SnapshotTime -Descending

	foreach ($baseBlob in $baseBlobWithMoreThanOneSnapshot )
	{
		 $count = 0
		 foreach ( $blob in $blobs | Where-Object {$_.Name -eq $baseBlob.Name } )
		    {
				$count +=1
				$ageOfSnapshot = [System.DateTime]::UtcNow - $blob.SnapshotTime
				$blobAddress = $blob.Uri.AbsoluteUri + "?snapshot=" + $blob.SnapshotTime.ToString("yyyy-MM-ddTHH:mm:ss.fffffffZ")

				#Fail-safe double check to ensure we are only deleting a snapshot.
				if($blob.SnapShotTime -ne $null)
				{
					#Never delete the latest snapshot, so we always have at least one backup irrespective of retention period.
					if($ageOfSnapshot.Days -gt $BackupRetentionDays -and $count -eq 1)
					{
						Write-Host "Skipped Purging Latest Snapshot"  $blobAddress
						continue
					}

					if($ageOfSnapshot.Days -gt $BackupRetentionDays -and $count -gt 1 )
					{
					    #Not a last-day-of-month snapshot, so purge it based on the normal retention period
						if($blob.SnapshotTime.Month -eq $blob.SnapshotTime.AddDays(1).Month)
						{
							Write-Host "Purging Snapshot "  $blobAddress
							$blob.Delete()
							continue
						}
						#Purge last day of month backups based on monthly retention.
						elseif($blob.SnapshotTime.Month -ne $blob.SnapshotTime.AddDays(1).Month)
						{
							if($ageOfSnapshot.Days -gt $BackupLastDayOfMonthRetentionDays)
							{
							Write-Host "Purging Last Day of Month Snapshot "  $blobAddress
							$blob.Delete()
							continue
							}
						}
						else
						{
							Write-Host "Skipped Purging Last Day Of Month Snapshot"  $blobAddress
							continue
						}
					}
					
					if($count % 5 -eq 0)
					{
						Write-Host "Processing..."
					}
				}
				else
				{
					Write-Host "Skipped Purging "  $blobAddress
				}
		    }
	}
}

foreach ($container in $BlobContainers)
{
	Write-Host "Purging snapshots in " $container
	PurgeSnapshots $container
}

Automate #Azure Blob Snapshot backups with @Cerebrata

Hi,

Leveraging the Cerebrata cmdlets for Azure, we can easily back up our blob containers via snapshots. This is particularly useful for page blobs that are random access, i.e. VHDs on Cloud Drive.

Here is how Purging Snapshots works

#requires -version 2.0
param (
	[parameter(Mandatory=$true)] [string]$AzureAccountName,
	[parameter(Mandatory=$true)] [string]$AzureAccountKey,
	[parameter(Mandatory=$true)] [array]$BlobContainers
)

$ErrorActionPreference = "Stop"

if ((Get-PSSnapin -Registered -Name AzureManagementCmdletsSnapIn -ErrorAction SilentlyContinue) -eq $null)
{
	throw "AzureManagementCmdletsSnapIn missing. Install them from Https://www.cerebrata.com/Products/AzureManagementCmdlets/Download.aspx"
}

Add-PSSnapin AzureManagementCmdletsSnapIn -ErrorAction SilentlyContinue

function SnapShotBlobContainer 
{
	param ( $containers, $blobContainerName )
	Write-Host "Starting snapshot $blobContainerName"

	$container = $containers | Where-Object { $_.BlobContainerName -eq $blobContainerName }

	if ($container -eq $null)
	{
		Write-Host  "Container $blobContainerName doesn't exist, skipping snapshot"
	}
	else
	{
        Write-Host  "Found blob container $blobContainerName"
        Checkpoint-BlobContainer -Name $container.BlobContainerName -SaveSnapshotInformation -AccountName $AzureAccountName -AccountKey $AzureAccountKey
        Write-Host  "Snapshot complete for $blobContainerName"
	}
}

$containers = Get-BlobContainer -AccountName $AzureAccountName -AccountKey $AzureAccountKey
foreach($container in $BlobContainers)
{
	SnapShotBlobContainer $containers $container
}

Then just call the script with the params. Remember, an array of items is passed in like this:

-BlobContainers:@('container1', 'container2') -AzureAccountName romikoTown -AzureAccountKey blahblahblahblahblehblooblowblab==

AppFabric Topics – Pub/Sub Messaging Service Bus

Overview

We will build a very simple pub/sub messaging system using the Azure SDK 1.6. I built this for a demo, so for production-ready solutions you will need to alter how you define your message types, e.g. using interfaces.

Below is an idea of how topics work. They are basically similar to queues, except that a topic is broken down into subscriptions: the publisher pushes messages to the topic (just like you would with a queue), and each subscriber receives messages from a subscription.

image

So, in the above, we can see a publisher/subscriber messaging bus.

Filtering

Another nice feature is that you can attach metadata to messages via an IDictionary<string, object> property (BrokeredMessage.Properties), e.g. a MessageType entry.

This means you can create subscription rule filters based on properties of the message. All messages are wrapped in a BrokeredMessage before they are sent on the bus, and generics are used on the subscriber to cast the message body back to its original type.

The sample code I have provided has filtering on the CUSTOMER subscription, which only takes messages with MessageType = 'BigOrder'.

Domain Modelling Rules

I like the idea of the service bus filtering ONLY on MessageType and NEVER on business domain rules, e.g. Quantity > 100. Why? The service bus should never contain rules that belong to a business domain; all logic for deciding what TYPE of message it is should be done before it is pushed onto the bus. If you stick to this, you will keep your service bus lean and mean.

The Code

I have a sample application you can download from:

https://bitbucket.org/romiko/appfabricpubsubdemo

or clone it with Mercurial.

I also included asynchronous operations. Remember that Azure AppFabric Topics only support partial transactions. If you want full transaction support, check out NServiceBus. I highly recommend using NServiceBus with AppFabric for fully transactional and guaranteed message delivery for .NET publishers and subscribers.

hg clone https://bitbucket.org/romiko/appfabricpubsubdemo

Sample output

Below are screenshots of what you should see when running the code, after updating the AccountDetails.cs file with your AppFabric account details.

image

Notice:

One publisher – sending a small and big order

2 subscribers that get both messages

1 subscriber only getting the big order using the subscription filtering.

Subscription

You will need an Azure subscription to try out the demo. Once you have one, update the AccountDetails.cs file with your credentials: Namespace, Name and Key, all of which can be found in the Azure management portal. Check the default key.


image

On the right pane will be the properties for the namespace, including the management keys, which you can use to get started with the default issuer name of 'owner', or you can play with Access Control and create a user account and key.

image

Service Bus Explorer

I recommend you check out Service Bus Explorer.

Here I use the explorer tool to see the message filters on the customer subscription, which only takes big orders.

image

Message Factory

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

namespace Logic
{
    public class MyMessagingFactory : IDisposable
    {
        private MessagingFactory factory;
        public NamespaceManager NamespaceManager { get; set; }

        public MyMessagingFactory()
        {
            var credentials =
               TokenProvider.CreateSharedSecretTokenProvider
                   (AccountDetails.Name, AccountDetails.Key);

            var serviceBusUri = ServiceBusEnvironment.CreateServiceUri
                ("sb", AccountDetails.Namespace, string.Empty);

            factory  = MessagingFactory.Create
                (serviceBusUri, credentials);

            NamespaceManager = new NamespaceManager(serviceBusUri, credentials);
        }


        public TopicClient GetTopicPublisherClient()
        {
            var topicClient =
                factory.CreateTopicClient("romikostopictransgrid");

            return topicClient;
        }

        public SubscriptionClient GetTopicSubscriptionClient(SubscriptionName subscription)
        {
            var topicSubscriptionClient =
                factory.CreateSubscriptionClient("romikostopictransgrid", subscription.ToString(), ReceiveMode.ReceiveAndDelete);


            return topicSubscriptionClient;
        }

        public void Dispose()
        {
            factory.Close();
        }
    }
}

Publisher

using System;
using Microsoft.ServiceBus.Messaging;
using Messages;

namespace Logic.Publish
{
    public class PublisherClient
    {
        public void SendTransformerOrder(TopicClient topicClient)
        {
            const string format = "Publishing message for {0}, Quantity {1} Transformer {2}";
            var orderIn1 = new TransformerOrder
                               {
                    Name = "Transgrid",
                    Transformer = "300kv, 50A",
                    Quantity = 5,
                    MessageType = MessageType.SmallOrder
                };

            var orderInMsg1 = new BrokeredMessage(orderIn1);
            orderInMsg1.Properties["MessageType"] = orderIn1.MessageType.ToString();
            Console.WriteLine(format, orderIn1.Name, orderIn1.Quantity, orderIn1.Transformer);

            topicClient.Send(orderInMsg1);

            var orderIn2 = new TransformerOrder
                               {
                    Name = "Transgrid",
                    Transformer = "250kv, 50A",
                    Quantity = 200,
                    MessageType = MessageType.BigOrder
                };

            var orderInMsg2 = new BrokeredMessage(orderIn2);
            orderInMsg2.Properties["MessageType"] = orderIn2.MessageType.ToString();

            orderInMsg2.Properties["Quantity"] = orderIn2.Quantity;

            Console.WriteLine(format, orderIn2.Name, orderIn2.Quantity, orderIn2.Transformer);

            //topicClient.Send(orderInMsg2);
            topicClient.BeginSend(orderInMsg2, a => 
                Console.WriteLine(string.Format("\r\nMessage published async, completed is: {0}.", a.IsCompleted)), 
                topicClient);

        }
    }
}

Subscriber

using System;
using System.Linq;
using Microsoft.ServiceBus.Messaging;
using Messages;

namespace Logic.Subscription
{
    public class SubscriptionManager
    {
        public static void CreateSubscriptionsIfNotExists(string topicPath, MyMessagingFactory factory)
        {
            var sales = SubscriptionName.Sales.ToString();
            if (!factory.NamespaceManager.SubscriptionExists(topicPath, sales))
                factory.NamespaceManager.CreateSubscription(topicPath, sales);

            var customer = SubscriptionName.Customer.ToString();
            if (!factory.NamespaceManager.SubscriptionExists(topicPath, customer))
            {
                var rule = new RuleDescription
                {
                    Name = "bigorder",
                    Filter = new SqlFilter(string.Format("MessageType = '{0}'", MessageType.BigOrder))
                };
                factory.NamespaceManager.CreateSubscription(topicPath, customer, rule);
            }

            var inventory = SubscriptionName.Inventory.ToString();
            if (!factory.NamespaceManager.SubscriptionExists(topicPath, inventory))
                factory.NamespaceManager.CreateSubscription(topicPath, inventory);
        }

        public static void ShowRules(string topicPath, MyMessagingFactory factory)
        {
            var currentRules = factory.NamespaceManager.GetRules(topicPath, SubscriptionName.Customer.ToString()).ToList();

            Console.WriteLine(string.Format("Rules for subscription: {0}", "Customer"));
            foreach (var result in currentRules)
            {
                var filter = (SqlFilter)result.Filter;
                Console.Write(string.Format("RuleName: {0}\r\n Filter: {1}\r\n", result.Name, filter.SqlExpression));
            }
        }
    }
}

using System;
using System.Threading;
using Microsoft.ServiceBus.Messaging;
using Messages;

namespace Logic.Subscription
{
    public class Subscriber
    {
        public void ReceiveTransformerOrder(SubscriptionClient client)
        {
            GetMessages(client);
        }

        private static void GetMessages(SubscriptionClient client)
        {
                //var orderOutMsg = client.Receive(TimeSpan.FromSeconds(5));
                client.BeginReceive(ReceiveDone, client);
        }

        public static void ReceiveDone(IAsyncResult result)
        {
            var subscriptionClient = result.AsyncState as SubscriptionClient;
            if (subscriptionClient == null)
            {
                Console.WriteLine("Async Subscriber got no data.");
                return;
            }

            var brokeredMessage = subscriptionClient.EndReceive(result);

            if (brokeredMessage != null)
            {
                var messageId = brokeredMessage.MessageId;
                var orderOut = brokeredMessage.GetBody<
                    TransformerOrder>();

                Console.WriteLine("Thread: {0}{6}" +
                                  "Receiving orders for subscriber: {1}{6}" +
                                  "Received MessageId: {2}{6}" +
                                  "Quantity: {3}{6}" +
                                  "Transformer:{4} for {5}{6}", 
                                  Thread.CurrentThread.ManagedThreadId,
                                  subscriptionClient.Name, messageId,
                                  orderOut.Quantity, orderOut.Transformer, orderOut.Name, Environment.NewLine);
            }
            subscriptionClient.Close();
        }
    }
}

Message

using System.Runtime.Serialization;

namespace Messages
{
    [DataContract]
    public class TransformerOrder
    {
        [DataMember]
        public string Name { get; set; }

        [DataMember]
        public string Transformer { get; set; }

        [DataMember]
        public int Quantity { get; set; }

        [DataMember]
        public string Color { get; set; }

        [DataMember]
        public MessageType MessageType{ get; set; }
    }
}

Sample Publisher

using System;
using Logic;
using Logic.Publish;
using Logic.Subscription;

namespace Publisher
{
    class Program
    {
        const string TopicPath = "romikostopictransgrid";
        static void Main()
        {

            using (var factory = new MyMessagingFactory())
            {
                SubscriptionManager.CreateSubscriptionsIfNotExists(TopicPath, factory);
                SubscriptionManager.ShowRules(TopicPath, factory);
                PublishMessage(factory);
            }
        }

        private static void PublishMessage(MyMessagingFactory factory)
        {
            var queue = factory.GetTopicPublisherClient();
            var publisher = new PublisherClient();
            publisher.SendTransformerOrder(queue);
            Console.WriteLine("Published Messages to bus.");
            Console.WriteLine(Environment.NewLine);
            Console.WriteLine("Press any key to publish again");
            Console.ReadLine();
            PublishMessage(factory);
        }

    }
}

Sample Subscriber

using System;
using System.Threading;
using Logic;
using Logic.Subscription;

namespace SubscriberCustomer
{
    class Program
    {
        static void Main()
        {
            using (var factory = new MyMessagingFactory())
            while (true)
            {
                SubscribeToMessages(factory);
                Thread.Sleep(TimeSpan.FromMilliseconds(500));
            }
        }

        private static void SubscribeToMessages(MyMessagingFactory factory)
        {
            var subscriptionCustomer = factory.GetTopicSubscriptionClient(SubscriptionName.Customer);
            var subscriber = new Subscriber();
            subscriber.ReceiveTransformerOrder(subscriptionCustomer);
        }
    }
}

Windows #Azure configuration transformations

Hi,

I needed to update the transformation today to support different csdef files for UAT/Test etc. I found myself forgetting the process, so I thought it would be a good idea to log the entries needed.

What I wanted was a way to disable New Relic in our performance environment, since we do not have a license key.

Since we use Azure startup tasks to run batch jobs before the worker role starts, it made sense to create a task environment variable that my batch script can check to see whether it should install New Relic, e.g.

if "%MYSTORYENV%" == "PERF" goto :EOF

So, in the above, my batch file Startup.cmd will skip installing New Relic if the environment variable is PERF. However, we need to set this value in the csdef file.

So we go to the base ServiceDefinition.csdef file and add this entry to it.

<sd:Startup>
      <sd:Task commandLine="Startup.cmd" executionContext="elevated" taskType="background">
        <sd:Environment>
          <sd:Variable name="EMULATED">
            <sd:RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
          </sd:Variable>
          <sd:Variable name="MYSTORYENV" value="DEV" />
        </sd:Environment>
      </sd:Task>
    </sd:Startup>

Notice that I have namespace-qualified all my csdef entries with the sd: prefix; this is important for the transformations to work.

OK, the next step is to create a transformation file:

https://gist.github.com/1777060

Now that we have this transform, we need to edit the CSPROJ file. The parts added are shown below.

Item Groups

<ItemGroup>
<ServiceDefinition Include="ServiceDefinition.csdef" />
<ServiceConfiguration Include="ServiceConfiguration.cscfg" />
</ItemGroup>
<ItemGroup>
<EnvironmentDefinition Include="ServiceDefinition.uat.csdef">
<BaseConfiguration>ServiceDefinition.csdef</BaseConfiguration>
</EnvironmentDefinition>
<EnvironmentDefinition Include="ServiceDefinition.perf.csdef">
<BaseConfiguration>ServiceDefinition.csdef</BaseConfiguration>
</EnvironmentDefinition>
<EnvironmentConfiguration Include="ServiceConfiguration.uat.cscfg">
<BaseConfiguration>ServiceConfiguration.cscfg</BaseConfiguration>
</EnvironmentConfiguration>
<EnvironmentConfiguration Include="ServiceConfiguration.perf.cscfg">
<BaseConfiguration>ServiceConfiguration.cscfg</BaseConfiguration>
</EnvironmentConfiguration>
<None Include="@(EnvironmentConfiguration)" />
<None Include="@(EnvironmentDefinition)" />
</ItemGroup>

Notice I have the None includes at the bottom, so I can see these files in Visual Studio. I also have transformations for cscfg files, hence why you see them here.

Targets Validation

<Target Name="ValidateServiceFiles"
		Inputs="@(EnvironmentConfiguration);@(EnvironmentConfiguration->'%(BaseConfiguration)');@(EnvironmentDefinition);@(EnvironmentDefinition->'%(BaseConfiguration)')"
		Outputs="@(EnvironmentConfiguration->'%(Identity).transformed.cscfg');@(EnvironmentDefinition->'%(Identity).transformed.csdef')">

	<Message Text="ValidateServiceFiles: Transforming %(EnvironmentConfiguration.BaseConfiguration) to %(EnvironmentConfiguration.Identity).tmp via %(EnvironmentConfiguration.Identity)" />
	<TransformXml Source="%(EnvironmentConfiguration.BaseConfiguration)" Transform="%(EnvironmentConfiguration.Identity)" Destination="%(EnvironmentConfiguration.Identity).tmp" />
	<Message Text="ValidateServiceFiles: Transformation complete; starting validation" />

	<Message Text="ValidateServiceFiles: Transforming %(EnvironmentDefinition.BaseConfiguration) to %(EnvironmentDefinition.Identity).tmp via %(EnvironmentDefinition.Identity)" />
	<TransformXml Source="%(EnvironmentDefinition.BaseConfiguration)" Transform="%(EnvironmentDefinition.Identity)" Destination="%(EnvironmentDefinition.Identity).tmp" />
	<Message Text="ValidateServiceFiles: Transformation complete; starting validation" />

	<ValidateServiceFiles ServiceDefinitionFile="@(ServiceDefinition)" ServiceConfigurationFile="%(EnvironmentConfiguration.Identity).tmp" />
	<ValidateServiceFiles ServiceDefinitionFile="%(EnvironmentDefinition.Identity).tmp" ServiceConfigurationFile="@(ServiceConfiguration)" />
	<Message Text="ValidateServiceFiles: Validation complete; renaming temporary file" />

	<Move SourceFiles="%(EnvironmentConfiguration.Identity).tmp" DestinationFiles="%(EnvironmentConfiguration.Identity).transformed.cscfg" />
	<Move SourceFiles="%(EnvironmentDefinition.Identity).tmp" DestinationFiles="%(EnvironmentDefinition.Identity).transformed.csdef" />
</Target>

Notice above I have them for BOTH CSCFG and CSDEF files!

Move transforms to the app.publish folder for azure packaging

<Target Name="MoveTransformedEnvironmentConfigurationXml" AfterTargets="AfterPackageComputeService" Inputs="@(EnvironmentConfiguration->'%(Identity).transformed.cscfg')" Outputs="@(EnvironmentConfiguration->'$(OutDir)app.publish\%(filename).cscfg')">
<Move SourceFiles="@(EnvironmentConfiguration->'%(Identity).transformed.cscfg')" DestinationFiles="@(EnvironmentConfiguration->'$(OutDir)app.publish\%(filename).cscfg')" />
<Move SourceFiles="@(EnvironmentDefinition->'%(Identity).transformed.csdef')" DestinationFiles="@(EnvironmentDefinition->'$(OutDir)app.publish\%(filename).csdef')" />
</Target>

Summary

So there you have it, you will now have csdef and cscfg files for different environments.

Windows #Azure–Pre Role Startup Tasks

Hi,

Imagine you need to boot up a web role in the cloud, but before the Global.asax events kick in, or even lower, before the WebRole entry point events kick in, you need to install some prerequisite software.

The best way to do this is to register a task in the ServiceDefinition.csdef file. Let's imagine we need to run a batch file that does some sort of installation, say a monitoring service such as New Relic that must be installed BEFORE IIS starts our web application, so that it can get its hook point.

Below is a configuration example that will do this for you.

https://gist.github.com/1775222

You can also set elevated privileges, which are required if you are running PowerShell scripts, etc.

<Task commandLine="Startup.cmd" executionContext="elevated" taskType="background">

You can read more about this here:

http://msdn.microsoft.com/en-us/library/windowsazure/hh124132.aspx

http://msdn.microsoft.com/en-us/library/windowsazure/gg456327.aspx

http://msdn.microsoft.com/en-us/library/windowsazure/gg432991.aspx

So, I hope you now have a cool way to bootstrap your prerequisite software before IIS kicks in.

Windows Azure–Diagnosing Role Start-ups

Hi Guys,

I want to walk through three issues you can have with Windows Azure Diagnostics and a worker role. I assume you want to access the Windows Azure trace logs, since you use the Trace class to write out exceptions and status messages in your OnStart code.

e.g. Trace.WriteLine("Starting my service.")

Also, you have the WAD trace listener switched on, which it is by default.

https://gist.github.com/1757147

Scenario – Role fails to start very early on

You might have a custom worker role that starts some sort of background service, and perhaps it fails immediately due to some sort of configuration issue.

Symptoms

You notice the role keeps recycling and recovering.

On Demand Transfer or Scheduled Transfer of diagnostics logs does not work at all, so you cannot get any trace information whatsoever.

Solution

Put this method in your WorkerEntryPoint.cs file and call it at the beginning of OnStart() and in any catch exception block.

https://gist.github.com/1757083

e.g. Start

public override bool OnStart()
{
    // WaitForWindowsAzureDiagnosticsInfrastructureToCatchUp() comes from the gist linked above;
    // in essence it just sleeps briefly so the Windows Azure Diagnostics infrastructure has time
    // to catch up before the role recycles (see the Summary below).
    WaitForWindowsAzureDiagnosticsInfrastructureToCatchUp();
    try
    {

e.g. Exception

catch (Exception ex)
{
    TraceException(ex);
    Trace.Flush();
    WaitForWindowsAzureDiagnosticsInfrastructureToCatchUp();
    return true;
}

 

Scenario – Role fails to start a bit later

In this case you are able to diagnose the problem, since On Demand Transfer/Scheduled Transfer works and you can get to the trace logs to see the error messages you have written to Trace. Recall that Windows Azure has a setting to automatically switch on a trace listener that redirects trace output to its WAD table.

 

Scenario – Role fails to start a bit later, and On Demand Transfer does not work

Symptoms

You notice the role keeps recycling and recovering, or even starts up but is unresponsive.

On Demand Transfer does not work – you try, but it never completes or it hangs.

Below are screenshots of On Demand Transfers with the Cerebrata Diagnostics Manager.

image

image

Solution

If you cannot do an On Demand Transfer of trace logs, perhaps the role keeps recycling and recovering too fast for an On Demand Transfer to occur. In that case, temporarily configure a Scheduled Transfer of the trace logs.

If using Cerebrata Diagnostics Manager

Click Remote Diagnostics in the Subscriptions under your Data Sources

image

image

Once you have configured the scheduled transfer, this tool will basically UPLOAD a configuration file into your blob container: wad-control-container.

Azure will automatically detect changes in this container and apply them, hence on-the-fly configuration of Windows Azure Diagnostics.

image

Now that we have a scheduled transfer in place, REBOOT the role that is causing the issue, wait for it to try to start up and fail, and then just download the trace logs; the start-up errors should be there.

image

Summary

So, ensure you have a silly sleep command in your worker entry point OnStart and in areas where you catch exceptions, in case your worker role crashes before Windows Azure Diagnostics can catch up!

Try On Demand Transfers if there is an issue, and if that does not work,  configure a scheduled transfer on the fly and then reboot the role to get the start up logs.

WARNING!

Scheduled transfers will impact your storage billing. MAKE SURE you turn them OFF when you have finished diagnosing the issue, else you will keep getting BILLED for them.

Notice in my screenshot that I ALWAYS use a quota so I never overuse diagnostics storage – and Windows Azure trace logs are stored in TABLE storage:

image

Remember: the configuration of WAD lives in blob storage (wad-control-container) and the actual trace logs live in table storage.

image

Windows Azure, which affinity group or region is best for you?

A nice way to see which Azure affinity group or region to use, e.g. South America, North America or Asia, is to download this tool and run checks from where your clients will be based.

http://research.microsoft.com/en-us/downloads/5c8189b9-53aa-4d6a-a086-013d927e15a7/default.aspx

Once you have it installed, add your storage accounts and get started.

image

So above, we will test from Sydney, Australia to our UAT environment in North America.

Let's click "Run".

It will start executing the test. This is now a good time to plan a date, make a cup of coffee or write some JScript for your open source projects.

Results:

Sydney to North America

image

image

 

Sydney to South East Asia (Singapore)

image

image

Conclusion

For us, South East Asia was far better (for a web site, download is more important than upload), and the proof was in the pudding when we measured web site response times with MVC MiniProfiler.

However, this is not the ultimate conclusion. I bet these response times will vary depending on the time of day; perhaps when Asia is awake and the US is asleep it could be the other way round. So test at different times of day and pick the affinity group or region that is best for you!