Azure Database for PostgreSQL – Backup/Restore

PostgreSQL ships with two powerful tools, pg_dump and pg_restore, that can cut restore times significantly in the event of a recovery.

Scenario

User error – If a table was accidentally emptied at noon today:

• Restore the server to a point in time just before noon and retrieve the missing table and data from that new copy of the server (Azure Portal or CLI; PowerShell is not supported yet) – see the CLI sketch after this list
• Update firewall rules to allow traffic internally to the new PostgreSQL instance
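
For example, a point-in-time restore with the Azure CLI looks roughly like this (the resource group and server names are placeholders, and the timestamp is ISO 8601):

az postgres server restore --resource-group myresourcegroup --name source-restored --source-server source --restore-point-in-time "2018-05-21T11:55:00Z"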

Then log onto a JumpBox in the Azure cloud.
• Dump the restored database to file (pg_dump)


pg_dump -Fc -v -t mytable -h Source.postgres.database.azure.com -U pgadmin@source -d mydatabase -f c:\Temp\mydatabase.dump

The -Fc switch writes a custom-format archive, which gives us flexibility with pg_restore later (table-level restore).
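
Before restoring, you can sanity-check the archive by listing its table of contents with pg_restore – a quick verification that the table made it into the dump created above:

pg_restore -l c:\Temp\mydatabase.dump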

• Copy the table back to the original server using  pg_restore

You may need to truncate the target table first, as we are doing a data-only restore (-a); a minimal example follows.
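
A minimal truncate with psql might look like this, assuming the same target server, database, and credentials as the restore command below:

psql "host=Target.postgres.database.azure.com port=5432 dbname=mydatabase user=pgadmin@target sslmode=require" -c "TRUNCATE TABLE mytable;"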
Table Only (not schema):


pg_restore -v -a -t mytable -h Target.postgres.database.azure.com -p 5432 -U pgadmin@target -d mydatabase "c:\Temp\mydatabase.dump"

Note: Use a VM in the Azure cloud that has access to the PostgreSQL server via VNet rules or server firewall rules.

We currently use PostgreSQL 10, and there is a known issue with VNet rules on this version. Contact Microsoft if you need them; otherwise use firewall rules for now.

Issues

VNET Rules and PostgreSQL 10

Issue Definition: If using VNet rules to control access to the PostgreSQL 10.0 server, the customer gets the following error:

psql 'sslmode=require host=server.postgres.database.azure.com port=5432 dbname=postgres' --username=pgadmin@server

psql: FATAL: unrecognized configuration parameter 'connection_Vnet'

This is a known issue: https://social.msdn.microsoft.com/Forums/azure/en-US/0e99fb68-47fd-4053-a8be-5f8b87b3a660/azure-database-for-postgresql-vnet-service-endpoints-not-working?forum=AzureDatabaseforPostgreSQL

DNS CNAME

nslookup restoretest2.postgres.database.azure.com

Non-authoritative answer:

Name:    cr1.westus1-a.control.database.windows.net

Address:  23.9.34.71

Aliases:  restoretest2.postgres.database.azure.com

nslookup restoretest1.postgres.database.azure.com

Non-authoritative answer:

Name:    cr1.westus1-a.control.database.windows.net

Address:  23.9.34.71

Aliases:  restoretest1.postgres.database.azure.com

The host is the same physical server; the connection is routed to your instance based on the USERNAME.

These two commands will connect to the SAME server instance:


pg_restore -v -a -t mytable -h restoretest2.postgres.database.azure.com -p 5432 -U pgadmin@restoretest1 -d mydatabase  "c:\Temp\mydatabase.dump"


pg_restore -v -a -t mytable -h restoretest1.postgres.database.azure.com -p 5432 -U pgadmin@restoretest1 -d mydatabase   "c:\Temp\mydatabase.dump"

The hostname just gets us to the Microsoft PostgreSQL gateway; it is the username that points us to the correct instance. This is extremely important when updating connection strings on clients!
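
To illustrate, a client psql connection string for the restored server would look something like this (the database and admin names follow the earlier examples):

psql "host=restoretest1.postgres.database.azure.com port=5432 dbname=mydatabase user=pgadmin@restoretest1 sslmode=require"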

Happy Life – Happy Wife


Windows Azure – Restore Encrypted VM – BEK and KEK

When restoring an Encrypted Virtual Machine that is managed by Windows Azure, you will need to use PowerShell.

This is a two stage process.

Restore-BackupImageToDisk … | Restore-EncryptedBekVM …

Stage 1

Retrieve the backup container – it will contain the encrypted disks and JSON files with the secret keys:

  • keyEncryptionKey
  • diskEncryptionKey

This will allow you to retrieve the disks and JSON metadata in step 2.

Restore-BackupImageToDisk.ps1

param(
    [Parameter(Mandatory=$true)]
    [string]
    $SubscriptionName="RangerRom",

    [Parameter(Mandatory=$true)]
    [string]
    $ResourceGroup="Ranger-Rom-Prod",

    [Parameter(Mandatory=$true)]
    [string]
    $StorageAccount="RangerRomStorage",

    [Parameter(Mandatory=$true)]
    [string]
    $RecoveryVaultName="RecoveryVault",

    [Parameter(Mandatory=$true)]
    [string]
    $VMName,

    [Parameter(Mandatory=$true)]
    [datetime]
    $FromDate
)

Login-AzureRmAccount
Get-AzureRmSubscription -SubscriptionName $SubscriptionName | Select-AzureRmSubscription
$endDate = Get-Date

# Set the vault context so the Recovery Services backup cmdlets below query the correct vault.
Get-AzureRmRecoveryServicesVault -Name $RecoveryVaultName | Set-AzureRmRecoveryServicesVaultContext

$namedContainer = Get-AzureRmRecoveryServicesBackupContainer  -ContainerType "AzureVM" -Status "Registered" | Where { $_.FriendlyName -eq $VMName }
$backupitem = Get-AzureRmRecoveryServicesBackupItem -Container $namedContainer  -WorkloadType "AzureVM"

$rp = Get-AzureRmRecoveryServicesBackupRecoveryPoint -Item $backupitem -StartDate $FromDate.ToUniversalTime() -EndDate $enddate.ToUniversalTime()
$restorejob = Restore-AzureRmRecoveryServicesBackupItem -RecoveryPoint $rp[0] -StorageAccountName $StorageAccount -StorageAccountResourceGroupName $ResourceGroup
Wait-AzureRmRecoveryServicesBackupJob -Job $restorejob -Timeout 43200
$restorejob = Get-AzureRmRecoveryServicesBackupJob -Job $restorejob
$details = Get-AzureRmRecoveryServicesBackupJobDetails -Job $restorejob
$details
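
A sample invocation of Stage 1 on its own might look like the following (the values are the sample defaults from the param block above; the FromDate of a week back is purely illustrative):

.\Restore-BackupImageToDisk.ps1 -SubscriptionName "RangerRom" -ResourceGroup "Ranger-Rom-Prod" -StorageAccount "RangerRomStorage" -RecoveryVaultName "RecoveryVault" -VMName "Lisha" -FromDate (Get-Date).AddDays(-7)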

Stage 2

If job details are provided via the pipeline we use them; otherwise we find the earliest completed Restore job after the FromDate, and then initiate a restore of all the encrypted disks to the new virtual machine.

Restore-EncryptedBekVM.ps1

[CmdletBinding()]
param(
    [Parameter(ValueFromPipeline=$True, Mandatory=$false, HelpMessage="Requires a AzureVmJobDetails object e.g. Get-AzureRmRecoveryServicesBackupJobDetails. Optional.")]
    $JobDetails,

    [Parameter(Mandatory=$true, HelpMessage="Assumes osDisks, DataDisk, RestorePoints, AvailbilitySet, VM, Nic are all in same resource group.")]
    [string]
    $DefaultResourceGroup="Ranger-Rom-Prod",
    
    [Parameter(Mandatory=$true)]
    [string]
    $SourceVMName="Lisha",

    [Parameter(Mandatory=$true)]
    [string]
    $TargetVMName,

    [Parameter(Mandatory=$true)]
    [string]
    $BackupVaultName="RecoveryVault",

    [Parameter(Mandatory=$true)]
    [datetime]
    $FromDate
)

Begin {
    $FromDate = $FromDate.ToUniversalTime()
    Write-Verbose "Started. $(Get-Date)"
}

Process {
	Write-Verbose "Retrieving Backup Vault."
	if(-not $JobDetails)
	{
		Get-AzureRmRecoveryServicesVault -Name $BackupVaultName -ResourceGroupName $DefaultResourceGroup | Set-AzureRmRecoveryServicesVaultContext
		$Job = Get-AzureRmRecoveryServicesBackupJob -Status Completed -Operation Restore -From $FromDate | Sort -Property StartTime | Where { $_.WorkloadName -eq $SourceVMName} | Select -Last 1
		if($Job -eq $null) {
			throw "Job $workLoadName not found."
		}
		$JobDetails = Get-AzureRmRecoveryServicesBackupJobDetails -Job $Job
	}

	Write-Verbose "Query the restored disk properties for the job details."
	$properties = $JobDetails.properties
	$storageAccountName = $properties["Target Storage Account Name"]
	$containerName = $properties["Config Blob Container Name"]
	$blobName = $properties["Config Blob Name"]

	Write-Verbose "Found Restore Blob Set at $($Properties['Config Blob Uri'])"
	Write-Verbose "Set the Azure storage context and restore the JSON configuration file."

	Set-AzureRmCurrentStorageAccount -Name $storageaccountname -ResourceGroupName $DefaultResourceGroup
	$folder = New-Item "C:\RangerRom" -ItemType Directory -Force
	$destination_path = "C:\$($folder.Name)\$SourceVMName.config.json"
	Get-AzureStorageBlobContent -Container $containerName -Blob $blobName -Destination $destination_path
	Write-Verbose "Restore config saved to file. $destination_path"
	$restoreConfig = ((Get-Content -Path $destination_path -Raw -Encoding Unicode)).TrimEnd([char]0x00) | ConvertFrom-Json

	# 3. Use the JSON configuration file to create the VM configuration.
	$oldVM = Get-AzureRmVM | Where { $_.Name -eq $SourceVMName }
	$vm = New-AzureRmVMConfig -VMSize $restoreConfig.'properties.hardwareProfile'.vmSize -VMName $TargetVMName -AvailabilitySetId $oldVM.AvailabilitySetReference.Id
	$vm.Location = $oldVM.Location

	# 4. Attach the OS disk and data disks - managed, encrypted VM (BEK and KEK)
	$bekUrl = $restoreConfig.'properties.storageProfile'.osDisk.encryptionSettings.diskEncryptionKey.secretUrl
	$keyVaultId = $restoreConfig.'properties.storageProfile'.osDisk.encryptionSettings.diskEncryptionKey.sourceVault.id
	$kekUrl = $restoreConfig.'properties.storageProfile'.osDisk.encryptionSettings.keyEncryptionKey.keyUrl
	$storageType = "StandardLRS"
	$osDiskName = $vm.Name + "_Restored_" + (Get-Date).ToString("yyyy-MM-dd-hh-mm-ss") + "_osdisk"
	$osVhdUri = $restoreConfig.'properties.storageProfile'.osDisk.vhd.uri
	$diskConfig = New-AzureRmDiskConfig -AccountType $storageType -Location $restoreConfig.location -CreateOption Import -SourceUri $osVhdUri
	$osDisk = New-AzureRmDisk -DiskName $osDiskName -Disk $diskConfig -ResourceGroupName $DefaultResourceGroup
	Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $osDisk.Id -DiskEncryptionKeyUrl $bekUrl -DiskEncryptionKeyVaultId $keyVaultId -KeyEncryptionKeyUrl $kekUrl -KeyEncryptionKeyVaultId $keyVaultId -CreateOption "Attach" -Windows

	$count = 0
	foreach($dd in $restoreConfig.'properties.storageProfile'.dataDisks)
	{
		$dataDiskName = $vm.Name + "_Restored_" + (Get-Date).ToString("yyyy-MM-dd-hh-mm-ss") + "_datadisk" + $count ;
		$dataVhdUri = $dd.vhd.uri;
		$dataDiskConfig = New-AzureRmDiskConfig -AccountType $storageType -Location $restoreConfig.location -CreateOption Import -SourceUri $dataVhdUri
		$dataDisk = New-AzureRmDisk -DiskName $dataDiskName -Disk $dataDiskConfig -ResourceGroupName $DefaultResourceGroup
		Add-AzureRmVMDataDisk -VM $vm -Name $dataDiskName -ManagedDiskId $dataDisk.Id -Lun $dd.Lun -CreateOption "Attach"
		$count += 1
	}

	Write-Verbose  "Setting the Network settings."

	$oldNicId = $oldVM.NetworkProfile.NetworkInterfaces[0].Id
	$oldNic = Get-AzureRmNetworkInterface -Name $oldNicId.Substring($oldNicId.LastIndexOf("/")+ 1) -ResourceGroupName $DefaultResourceGroup
	$subnetItems =  $oldNic.IpConfigurations[0].Subnet.Id.Split("/")
	$networkResourceGroup = $subnetItems[$subnetItems.IndexOf("resourceGroups")+1]
	$nicName= $vm.Name + "_nic_" + (New-Guid).Guid
	$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $networkResourceGroup -Location $oldNic.location -SubnetId $oldNic.IpConfigurations[0].Subnet.Id
	$vm=Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
	$vm.DiagnosticsProfile = $oldVM.DiagnosticsProfile

	Write-Verbose "Provisioning VM."
	New-AzureRmVM -ResourceGroupName $oldVM.ResourceGroupName -Location $oldVM.Location -VM $vm

}

End {
    Write-Verbose "Completed. $(Get-Date)"
}


Usage

.\Restore-BackupImageToDisk … | .\Restore-EncryptedBekVM.ps1 …
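
A concrete end-to-end run might look like the following (all names are the sample values used throughout this post; the target VM name is made up for illustration):

.\Restore-BackupImageToDisk.ps1 -SubscriptionName "RangerRom" -ResourceGroup "Ranger-Rom-Prod" -StorageAccount "RangerRomStorage" -RecoveryVaultName "RecoveryVault" -VMName "Lisha" -FromDate (Get-Date).AddDays(-7) | .\Restore-EncryptedBekVM.ps1 -DefaultResourceGroup "Ranger-Rom-Prod" -SourceVMName "Lisha" -TargetVMName "Lisha-Restored" -BackupVaultName "RecoveryVault" -FromDate (Get-Date).AddDays(-7) -Verbose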

Summary

Ensure you have Azure VM backups and that they run on a regular schedule.

Use these scripts to restore a Restore Point from a Recovery Vault container.

Remember it will pick the earliest Restore Point relative to your From Date. If there are 4 restores in the recovery vault from 1 May, it will pick the first one.

Remember to:

  • Decommission the old VM
  • Update DNS with the new IP address
  • Update the load balancer sets (if using a load balancer)

Automate #Azure Blob Snapshot purging/deletes with @Cerebrata

The pricing model for snapshots can get rather complicated, so we need a way to automate the purging of snapshots.
Read how snapshots can accrue additional costs

Let's minimize these costs! We use this script to back up and manage snapshot retention for all our Neo4j databases hosted in the Azure cloud.

The solution works as follows:
We have a retention period in days for all snapshots, e.g. 30 days.
We have a longer retention period for last-day-of-month backups, e.g. 180 days.

Rules:
1. The purging will always ensure that there is at least ONE snapshot; it will never delete a snapshot if it is the only backup for a base blob.

2. The purging will delete snapshots older than the retention period, respecting rule 1.

3. The purging will delete last-day-of-month snapshots older than the last-day-of-month retention period, respecting rule 1.

You can then schedule this script to run after the backup script in TeamCity or some other build server scheduler; a sample invocation follows the script below.

param(
	[parameter(Mandatory=$true)] [string]$AzureAccountName,
	[parameter(Mandatory=$true)] [string]$AzureAccountKey,
	[parameter(Mandatory=$true)] [array]$BlobContainers, #Blob Containers to backup
	[parameter(Mandatory=$true)] [int]$BackupRetentionDays, #Days to keep snapshot backups
	[parameter(Mandatory=$true)] [int]$BackupLastDayOfMonthRetentionDays # Days to keep last day of month backups
)


if( $BackupRetentionDays -ge $BackupLastDayOfMonthRetentionDays )
{
	$message = "Argument Exception: BackupRentionDays cannot be greater than or equal to BackupLastDayOfMonthRetentionDays"
	throw $message
}

Add-Type -Path 'C:\Program Files\Windows Azure SDK\v1.6\ref\Microsoft.WindowsAzure.StorageClient.dll'

$cred = New-Object Microsoft.WindowsAzure.StorageCredentialsAccountAndKey($AzureAccountName,$AzureAccountKey)
$client = New-Object Microsoft.WindowsAzure.StorageClient.CloudBlobClient("https://$AzureAccountName.blob.core.windows.net",$cred)

function PurgeSnapshots ($blobContainer)
{
	$container = $client.GetContainerReference($blobContainer)
	$options = New-Object  Microsoft.WindowsAzure.StorageClient.BlobRequestOptions
	$options.UseFlatBlobListing = $true
	$options.BlobListingDetails = [Microsoft.WindowsAzure.StorageClient.BlobListingDetails]::Snapshots

	$blobs = $container.ListBlobs($options)
	$baseBlobWithMoreThanOneSnapshot = $container.ListBlobs($options)| Group-Object Name | Where-Object {$_.Count -gt 1} | Select Name

	#Keep only snapshots, restricted to base blobs that have more than one snapshot.
	$blobs = $blobs | Where-Object {$baseBlobWithMoreThanOneSnapshot  -match $_.Name -and $_.SnapshotTime -ne $null} | Sort-Object SnapshotTime -Descending

	foreach ($baseBlob in $baseBlobWithMoreThanOneSnapshot )
	{
		 $count = 0
		 foreach ( $blob in $blobs | Where-Object {$_.Name -eq $baseBlob.Name } )
		    {
				$count +=1
				$ageOfSnapshot = [System.DateTime]::UtcNow - $blob.SnapshotTime
				$blobAddress = $blob.Uri.AbsoluteUri + "?snapshot=" + $blob.SnapshotTime.ToString("yyyy-MM-ddTHH:mm:ss.fffffffZ")

				#Fail-safe double check to ensure we are only deleting a snapshot.
				if($blob.SnapShotTime -ne $null)
				{
					#Never delete the latest snapshot, so we always have at least one backup irrespective of retention period.
					if($ageOfSnapshot.Days -gt $BackupRetentionDays -and $count -eq 1)
					{
						Write-Host "Skipped Purging Latest Snapshot"  $blobAddress
						continue
					}

					if($ageOfSnapshot.Days -gt $BackupRetentionDays -and $count -gt 1 )
					{
					    #Not a last-day-of-month backup - purge it based on the standard retention.
						if($blob.SnapshotTime.Month -eq $blob.SnapshotTime.AddDays(1).Month)
						{
							Write-Host "Purging Snapshot "  $blobAddress
							$blob.Delete()
							continue
						}
						#Purge last day of month backups based on monthly retention.
						elseif($blob.SnapshotTime.Month -ne $blob.SnapshotTime.AddDays(1).Month)
						{
							if($ageOfSnapshot.Days -gt $BackupLastDayOfMonthRetentionDays)
							{
							Write-Host "Purging Last Day of Month Snapshot "  $blobAddress
							$blob.Delete()
							continue
							}
						}
						else
						{
							Write-Host "Skipped Purging Last Day Of Month Snapshot"  $blobAddress
							continue
						}
					}
					
					if($count % 5 -eq 0)
					{
						Write-Host "Processing..."
					}
				}
				else
				{
					Write-Host "Skipped Purging "  $blobAddress
				}
		    }
	}
}

foreach ($container in $BlobContainers)
{
	Write-Host "Purging snapshots in " $container
	PurgeSnapshots $container
}
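
For example, a scheduled TeamCity step might invoke the purge script like this (the script file name and container names are placeholders for your own; the retention values match the examples above):

.\Purge-BlobSnapshots.ps1 -AzureAccountName myaccount -AzureAccountKey myaccountkey== -BlobContainers @("neo4j-db","neo4j-backups") -BackupRetentionDays 30 -BackupLastDayOfMonthRetentionDays 180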

Windows Azure – Diagnosing Role Start-ups

Hi Guys,

I want to walk through three issues you can have with Windows Azure Diagnostics and a worker role. I assume you want to access the Windows Azure trace logs, since you use the Trace command to write out exceptions and status messages in your OnStart code.

e.g. Trace.WriteLine("Starting my service.")

I also assume you have the WAD trace listener on, which it is by default:

https://gist.github.com/1757147

Scenario – Role fails to start very early on

You might have a custom worker role that starts some sort of background service, and perhaps it fails immediately due to a configuration issue.

Symptoms

You notice the role keeps recycling and recovering.

On Demand Transfer or Scheduled Transfer of diagnostics logs does not work at all, so you cannot get any trace information whatsoever.

Solution

Put this method in your WorkerEntryPoint.cs file and call it at the beginning of OnStart() and in any catch (Exception) block:

https://gist.github.com/1757083

e.g. Start:

public override bool OnStart()
{
    WaitForWindowsAzureDiagnosticsInfrastructureToCatchUp();
    try
    {

e.g. Exception:

catch (Exception ex)
{
    TraceException(ex);
    Trace.Flush();
    WaitForWindowsAzureDiagnosticsInfrastructureToCatchUp();
    return true;
}

 

Scenario – Role fails to start a bit later

You are able to diagnose the problem, since On Demand Transfer/Scheduled Transfer works and you can get to the trace logs to see the error messages you have written to Trace. Recall that Windows Azure has a setting to automatically turn on a trace listener that redirects trace output to its WAD table.

 

Scenario – Role fails to start and On Demand Transfers hang

Symptoms

You notice the role keeps recycling and recovering, or even starts up but is unresponsive.

On Demand Transfer does not work – you try, but it just does not complete or it hangs.

You can see this when attempting On Demand Transfers with the Cerebrata Diagnostics Manager – the transfers never complete.

Solution

If you cannot do an On Demand Transfer of trace logs – perhaps the role keeps recycling and recovering too fast for an On Demand Transfer to occur – then temporarily configure a Scheduled Transfer of the trace logs.

If using Cerebrata Diagnostics Manager

Click Remote Diagnostics in the Subscriptions under your Data Sources


Once you have configured Scheduled Transfers, this tool will basically UPLOAD a configuration file into your blob container: wad-control-container.

Azure will automatically detect changes in this container and apply them to the diagnostics monitor – hence configuration of Windows Azure Diagnostics on the fly.


Now that we have a scheduled transfer in place, REBOOT the role that is causing the issue, wait for it to try to start up and fail, and then just go download the trace logs – they should be there.


Summary

So, ensure you have a silly sleep command in your worker entry point OnStart and in areas where you catch exceptions, in case your worker role crashes before Windows Azure Diagnostics is ready!

Try On Demand Transfers if there is an issue, and if that does not work, configure a scheduled transfer on the fly and then reboot the role to get the start-up logs.

WARNING!

Scheduled Transfers will impact your billing of Storage Services. MAKE SURE you turn them OFF when you have finished diagnosing the issue, else you will get BILLED for them.

I ALWAYS use a quota so I never over-use diagnostics storage.

Remember configuration of WAD is in Blob and the actual trace logs are in Tables.


Automating Windows Azure Deployments leveraging TeamCity and PowerShell

Hi Guys,

Introduction

We will cover:

  • Overview to configure multiple build projects on TeamCity
  • Configure one of the build projects to deploy to the Azure Cloud
  • Automated deployments to the Azure Cloud for Web and Worker Roles
  • Leverage the MSBuild target templates to automate generation of Azure package files
  • Alternative solution for generating the Azure package files
  • Using configuration transformations to manage settings e.g. UAT, Dev, Production

A colleague of mine, Tatham Oddie, and I currently use TeamCity to automatically deploy our Azure/MVC3/Neo4j based project to the Azure cloud. Let's see how this can be done with relative ease and full automation. The focus here will be the PowerShell scripts, which use the Cerebrata cmdlets, found here: http://www.cerebrata.com/Products/AzureManagementCmdlets/Default.aspx

The PowerShell script included here will automatically undeploy and redeploy your Azure service, and will even wait until all the services are in the ready state.

I will leave you to check those cmdlets out; they are worth every penny spent.

Now let's check how we get the deployment working.

The basic idea is that you have a continuous integration build configured on the build server in TeamCity, and you configure the CI build to generate artifacts – the outputs from the build that can be used by other build projects. For example, you can take the artifacts from the CI build and run functional or integration test builds that are totally separate from the CI build. This way, your functional and integration tests will NEVER interfere with the CI build and the unit tests, keeping CI builds fast and efficient.

Prerequisites on Build Server

  • TeamCity Professional Version 6.5.1
  • Cloud Subscription Certificate with Private key is imported into the User Certificate Store for the Team City service account
  • Cerebrata CMDLETS

TeamCity -Continuous Integration Build Project

Ok, so let's do a quick check of my CI build that spits out the Azure packages.


The CI build creates an artifact called AzurePackage.


The way we generate these artifacts is very easy: in the settings for the CI build project, we set up the artifacts path.


e.g. MyProject\MyProject.Azure\bin\%BuildConfiguration%\Publish => AzurePackage

So, we will look at the build steps to configure.


In the build steps, we just say where MSBuild is run from and where the unit test DLLs are.


Cool, now we need to set up the artifacts and configuration.

We just mention we want a release build.


Ok, now we need to tell our Azure Deployment project to have a dependency on the CI project we configured above.

Team City – UAT Deployment Build Project

So let's now go check out the UAT Deployment project.


This project has dependencies on the CI build, and we configure all the build parameters so it can connect to your Azure storage account and hosted service for automatic deployments. Once we are done here, we will have a look at the PowerShell script we use to automatically deploy to the cloud; the script supports un-deploying an existing deployment slot before deploying a new one, with retry attempts.

Ok, let's check the following for the UAT deployment project.


The build step executes the PowerShell script; the parameters (%whatever%) resolve from the build parameters configured in step 6 of the project settings.

Here is the command for copy/paste friendliness. Of course, if you are using some other database, then you do not need the Neo4j parameters.

-AzureAccountName "%AzureAccountName%" -AzureServiceName "%AzureServiceName%" -AzureDeploymentSlot "%AzureDeploymentSlot%" -AzureAccountKey "%AzureAccountKey%" -AzureSubscriptionId "%AzureSubscriptionId%" -AzureCertificateThumbprint "%AzureCertificateThumbprint%" -PackageSource "%AzurePackageDependencyPath%\MyProject.Azure.cspkg" -ConfigSource "%AzurePackageDependencyPath%\%ConfigFileName%" -DeploymentName "%build.number%-%build.vcs.number%" -Neo4jBlobName "%Neo4jBlobName%" -Neo4jZippedBinaryFileHttpSource "%Neo4jZippedBinaryFileHttpSource%"

This is the input for a deploy-package.cmd file, which is in our source repository.


Now we also need to tell the deployment project to use the artifact from our CI project, so we set up an Artifact Dependency in the dependencies section. Notice how we use a wildcard to get all files from AzurePackage (AzurePackage/**); these will be the cspkg files.


Notice that I also have a Snapshot Dependency; this forces the UAT deployment to use the SAME source code that the CI build project is using.

The build parameters themselves are described in the manual execution section below.

PowerShell Deployment Scripts

The deployment scripts consist of three files, and remember, I assume you have installed the Cerebrata Azure Management Cmdlets.

Ok, so let's look at the Deploy-Package.cmd file. I would like to express my gratitude to Jason Stangroome (http://blog.codeassassin.com) for this.

Jason wrote: "This tiny proxy script just writes a temporary PowerShell script containing all the arguments you're trying to pass to let PowerShell interpret them and avoid getting them messed up by the Win32 native command line parser."

@echo off
setlocal
set tempscript=%temp%\%~n0.%random%.ps1
echo $ErrorActionPreference="Stop" >"%tempscript%"
echo ^& "%~dpn0.ps1" %* >>"%tempscript%"
powershell.exe -command "& \"%tempscript%\""
set errlvl=%ERRORLEVEL%
del "%tempscript%"
exit /b %errlvl%

Ok, and now here is the PowerShell code, Deploy-Package.ps1. I will leave you to read it in detail; in summary, it demonstrates:

  • Un-Deploying a service deployment slot
  • Deploying a service deployment slot
  • Using the certificate store to retrieve the certificate for service connections via a cert thumbprint – the user certificate store under the service account that TeamCity runs as
  • Uploading Blobs
  • Downloading Blobs
  • Waiting until the new deployment is in a ready state

#requires -version 2.0
param (
	[parameter(Mandatory=$true)] [string]$AzureAccountName,
	[parameter(Mandatory=$true)] [string]$AzureServiceName,
	[parameter(Mandatory=$true)] [string]$AzureDeploymentSlot,
	[parameter(Mandatory=$true)] [string]$AzureAccountKey,
	[parameter(Mandatory=$true)] [string]$AzureSubscriptionId,
	[parameter(Mandatory=$true)] [string]$AzureCertificateThumbprint,
	[parameter(Mandatory=$true)] [string]$PackageSource,
	[parameter(Mandatory=$true)] [string]$ConfigSource,
	[parameter(Mandatory=$true)] [string]$DeploymentName,
	[parameter(Mandatory=$true)] [string]$Neo4jZippedBinaryFileHttpSource,
	[parameter(Mandatory=$true)] [string]$Neo4jBlobName
)

$ErrorActionPreference = "Stop"

if ((Get-PSSnapin -Registered -Name AzureManagementCmdletsSnapIn -ErrorAction SilentlyContinue) -eq $null)
{
	throw "AzureManagementCmdletsSnapIn missing. Install them from Https://www.cerebrata.com/Products/AzureManagementCmdlets/Download.aspx"
}

Add-PSSnapin AzureManagementCmdletsSnapIn -ErrorAction SilentlyContinue

function AddBlobContainerIfNotExists ($blobContainerName)
{
	Write-Verbose "Finding blob container $blobContainerName"
	$containers = Get-BlobContainer -AccountName $AzureAccountName -AccountKey $AzureAccountKey
	$deploymentsContainer = $containers | Where-Object { $_.BlobContainerName -eq $blobContainerName }

	if ($deploymentsContainer -eq $null)
	{
		Write-Verbose  "Container $blobContainerName doesn't exist, creating it"
		New-BlobContainer $blobContainerName -AccountName $AzureAccountName -AccountKey $AzureAccountKey
	}
	else
	{
		Write-Verbose  "Found blob container $blobContainerName"
	}
}

function UploadBlobIfNotExists{param ([string]$container, [string]$blobName, [string]$fileSource)

	Write-Verbose "Finding blob $container\$blobName"
	$blob = Get-Blob -BlobContainerName $container -BlobPrefix $blobName -AccountName $AzureAccountName -AccountKey $AzureAccountKey

	if ($blob -eq $null)
	{
		Write-Verbose "Uploading blob $blobName to $container/$blobName"
		Import-File -File $fileSource -BlobName $blobName -BlobContainerName $container -AccountName $AzureAccountName -AccountKey $AzureAccountKey
	}
	else
	{
		Write-Verbose "Found blob $container\$blobName"
	}
}

function CheckIfDeploymentIsDeleted
{
	$triesElapsed = 0
	$maximumRetries = 10
	$waitInterval = [System.TimeSpan]::FromSeconds(30)
	Do
	{
		$triesElapsed+=1
		[System.Threading.Thread]::Sleep($waitInterval)
		Write-Verbose "Checking if deployment is deleted, current retry is $triesElapsed/$maximumRetries"
		$deploymentInstance = Get-Deployment `
			-ServiceName $AzureServiceName `
			-Slot $AzureDeploymentSlot `
			-SubscriptionId $AzureSubscriptionId `
			-Certificate $certificate `
			-ErrorAction SilentlyContinue

		if($deploymentInstance -eq $null)
		{
			Write-Verbose "Deployment is now deleted"
			break
		}

		if($triesElapsed -ge $maximumRetries)
		{
			throw "Checking if deployment deleted has been running longer than 5 minutes, it seems the delployment is not deleting, giving up this step."
		}
	}
	While($triesElapsed -le $maximumRetries)
}

function WaitUntilAllRoleInstancesAreReady
{
	$triesElapsed = 0
	$maximumRetries = 60
	$waitInterval = [System.TimeSpan]::FromSeconds(60)
	Do
	{
		$triesElapsed+=1
		[System.Threading.Thread]::Sleep($waitInterval)
		Write-Verbose "Checking if all role instances are ready, current retry is $triesElapsed/$maximumRetries"
		$roleInstances = Get-RoleInstanceStatus `
			-ServiceName $AzureServiceName `
			-Slot $AzureDeploymentSlot `
			-SubscriptionId $AzureSubscriptionId `
			-Certificate $certificate `
			-ErrorAction SilentlyContinue
		$roleInstancesThatAreNotReady = $roleInstances | Where-Object { $_.InstanceStatus -ne "Ready" }

		if ($roleInstances -ne $null -and
			$roleInstancesThatAreNotReady -eq $null)
		{
			Write-Verbose "All role instances are now ready"
			break
		}

		if ($triesElapsed -ge $maximumRetries)
		{
			throw "Checking if all roles instances are ready for more than one hour, giving up..."
		}
	}
	While($triesElapsed -le $maximumRetries)
}

function DownloadNeo4jBinaryZipFileAndUploadToBlobStorageIfNotExists{param ([string]$blobContainerName, [string]$blobName, [string]$HttpSourceFile)
	Write-Verbose "Finding blob $blobContainerName\$blobName"
	$blobs = Get-Blob -BlobContainerName $blobContainerName -ListAll -AccountName $AzureAccountName -AccountKey $AzureAccountKey
	$blob = $blobs | findstr $blobName

	if ($blob -eq $null)
	{
	    Write-Verbose "Neo4j binary does not exist in blob storage. "
	    Write-Verbose "Downloading file $HttpSourceFile..."
		$temporaryneo4jFile = [System.IO.Path]::GetTempFileName()
		$WebClient = New-Object -TypeName System.Net.WebClient
		$WebClient.DownloadFile($HttpSourceFile, $temporaryneo4jFile)
		UploadBlobIfNotExists $blobContainerName $blobName $temporaryneo4jFile
	}
}

Write-Verbose "Retrieving management certificate"
$certificate = Get-ChildItem -Path "cert:\CurrentUser\My\$AzureCertificateThumbprint" -ErrorAction SilentlyContinue
if ($certificate -eq $null)
{
	throw "Couldn't find the Azure management certificate in the store"
}
if (-not $certificate.HasPrivateKey)
{
	throw "The private key for the Azure management certificate is not available in the certificate store"
}

Write-Verbose "Deleting Deployment"
Remove-Deployment `
	-ServiceName $AzureServiceName `
	-Slot $AzureDeploymentSlot `
	-SubscriptionId $AzureSubscriptionId `
	-Certificate $certificate `
	-ErrorAction SilentlyContinue
Write-Verbose "Sent Delete Deployment Async, will check back later to see if it is deleted"

$deploymentsContainerName = "deployments"
$neo4jContainerName = "neo4j"

AddBlobContainerIfNotExists $deploymentsContainerName
AddBlobContainerIfNotExists $neo4jContainerName

$deploymentBlobName = "$DeploymentName.cspkg"

DownloadNeo4jBinaryZipFileAndUploadToBlobStorageIfNotExists $neo4jContainerName $Neo4jBlobName $Neo4jZippedBinaryFileHttpSource

Write-Verbose "Azure Service Information:"
Write-Verbose "Service Name: $AzureServiceName"
Write-Verbose "Slot: $AzureDeploymentSlot"
Write-Verbose "Package Location: $PackageSource"
Write-Verbose "Config File Location: $ConfigSource"
Write-Verbose "Label: $DeploymentName"
Write-Verbose "DeploymentName: $DeploymentName"
Write-Verbose "SubscriptionId: $AzureSubscriptionId"
Write-Verbose "Certificate: $certificate"

CheckIfDeploymentIsDeleted

Write-Verbose "Starting Deployment"
New-Deployment `
	-ServiceName $AzureServiceName `
	-Slot $AzureDeploymentSlot `
	-PackageLocation $PackageSource `
	-ConfigFileLocation $ConfigSource `
	-Label $DeploymentName `
	-DeploymentName $DeploymentName `
	-SubscriptionId $AzureSubscriptionId `
	-Certificate $certificate

WaitUntilAllRoleInstancesAreReady

Write-Verbose "Completed Deployment"

Automating Cloud Package File without using CSPack and CSRun explicitly

We need to edit the cloud project file so that Visual Studio creates the cloud package files; it will then automatically run cspack for you, and the output can be consumed by the artifacts and hence by other build projects. This bakes the package generation into the MSBuild process without explicitly using cspack.exe and csrun.exe, resulting in fewer scripts – otherwise you would need a separate PowerShell script just to package the cloud project files.

Below are the changes for the .ccproj file of the cloud project. Notice the condition: we generate these package files ONLY when the build runs outside of Visual Studio, which keeps the development build experience fast by not creating the packages on every local build. For the condition below to work, you will need to build the project from the command line using MSBuild.

Here are the config entries for the project file.

  <PropertyGroup>
    <CloudExtensionsDir Condition=" '$(CloudExtensionsDir)' == '' ">$(MSBuildExtensionsPath)\Microsoft\Cloud Service\1.0\Visual Studio 10.0\</CloudExtensionsDir>
  </PropertyGroup>
  <Import Project="$(CloudExtensionsDir)Microsoft.CloudService.targets" />
  <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets" />
  <Target Name="AzureDeploy" AfterTargets="Build" DependsOnTargets="CorePublish" Condition="'$(BuildingInsideVisualStudio)'!='True'">
  </Target>

e.g.

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe MyProject.sln /p:Configuration=Release


Configuration Transformations

You can also leverage configuration transformations so that you can have configurations for each environment. This is discussed here:

http://blog.alexlambert.com/2010/05/using-visual-studio-configuration.html

However, in a nutshell, you can have something like this in place, which means you can have separate deployment config files, e.g.

ServiceConfiguration.cscfg

ServiceConfiguration.uat.cscfg

ServiceConfiguration.prod.cscfg

Just use the following config in the .ccproj file.

 <Target Name="ValidateServiceFiles"
          Inputs="@(EnvironmentConfiguration);@(EnvironmentConfiguration->'%(BaseConfiguration)')"
          Outputs="@(EnvironmentConfiguration->'%(Identity).transformed.cscfg')">
    <Message Text="ValidateServiceFiles: Transforming %(EnvironmentConfiguration.BaseConfiguration) to %(EnvironmentConfiguration.Identity).tmp via %(EnvironmentConfiguration.Identity)" />
    <TransformXml Source="%(EnvironmentConfiguration.BaseConfiguration)" Transform="%(EnvironmentConfiguration.Identity)"
     Destination="%(EnvironmentConfiguration.Identity).tmp" />

    <Message Text="ValidateServiceFiles: Transformation complete; starting validation" />
    <ValidateServiceFiles ServiceDefinitionFile="@(ServiceDefinition)" ServiceConfigurationFile="%(EnvironmentConfiguration.Identity).tmp" />

    <Message Text="ValidateServiceFiles: Validation complete; renaming temporary file" />
    <Move SourceFiles="%(EnvironmentConfiguration.Identity).tmp" DestinationFiles="%(EnvironmentConfiguration.Identity).transformed.cscfg" />
  </Target>
  <Target Name="MoveTransformedEnvironmentConfigurationXml" AfterTargets="AfterPackageComputeService"
          Inputs="@(EnvironmentConfiguration->'%(Identity).transformed.cscfg')"
          Outputs="@(EnvironmentConfiguration->'$(OutDir)Publish\%(filename).cscfg')">
    <Move SourceFiles="@(EnvironmentConfiguration->'%(Identity).transformed.cscfg')" DestinationFiles="@(EnvironmentConfiguration->'$(OutDir)Publish\%(filename).cscfg')" />
  </Target>

Here is a sample ServiceConfiguration.uat.cscfg transform that leverages the transformations. Note the transformation for the web and worker role sections. Our worker role is Neo4jServerHost and the web role is just called Web.

<?xml version="1.0"?>
<sc:ServiceConfiguration
    xmlns:sc="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <sc:Role name="Neo4jServerHost" xdt:Locator="Match(name)">
    <sc:ConfigurationSettings>
      <sc:Setting xdt:Transform="Replace" xdt:Locator="Match(name)" name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=myprojectname;AccountKey=myaccountkey"/>
      <sc:Setting xdt:Transform="Replace" xdt:Locator="Match(name)" name="Storage connection string" value="DefaultEndpointsProtocol=https;AccountName=myprojectname;AccountKey=myaccountkey"/>
      <sc:Setting xdt:Transform="Replace" xdt:Locator="Match(name)" name="Drive connection string" value="DefaultEndpointsProtocol=http;AccountName=myprojectname;AccountKey=myaccountkey"/>
      <sc:Setting xdt:Transform="Replace" xdt:Locator="Match(name)" name="Neo4j DBDrive override Path" value=""/>
      <sc:Setting xdt:Transform="Replace" xdt:Locator="Match(name)" name="UniqueIdSynchronizationStoreConnectionString" value="DefaultEndpointsProtocol=https;AccountName=myprojectname;AccountKey=myaccountkey"/>
    </sc:ConfigurationSettings>
  </sc:Role>
  <sc:Role name="Web" xdt:Locator="Match(name)">
    <sc:ConfigurationSettings>
      <sc:Setting xdt:Transform="Replace" xdt:Locator="Match(name)" name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=myprojectname;AccountKey=myaccountkey"/>
      <sc:Setting xdt:Transform="Replace" xdt:Locator="Match(name)" name="UniqueIdSynchronizationStoreConnectionString" value="DefaultEndpointsProtocol=https;AccountName=myprojectname;AccountKey=myaccountkey"/>
    </sc:ConfigurationSettings>
  </sc:Role>
</sc:ServiceConfiguration>

Manually executing the script for testing

Prerequisites:

  • You will need to install the Cerebrata Azure Management CMDLETS from: https://www.cerebrata.com/Products/AzureManagementCmdlets/Download.aspx
  • If you are running 64 bit version, you will need to follow the readme file instructions contained with the AzureManagementCmdlets, as it requires manual copying of files. If you followed the default install, this readme will be in C:\Program Files\Cerebrata\Azure Management Cmdlets\readme.pdf
  • You will need to install the Certificate and Private Key (Which must be marked as exportable) to your User Certificate Store. This file will have an extension of .pfx. Use the Certificate Management Snap-In, for User Account Store. The certificate should be installed in the personal folder.
  • Once the certificate is installed, you should note the certificate thumbprint, as it is used as one of the parameters when executing the PowerShell script. Ensure you remove all the spaces from the thumbprint when using it in the script! (A quick way to list thumbprints is shown after this list.)
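
To find the thumbprint after importing the .pfx, you can list your personal store from PowerShell instead of reopening the snap-in (a quick sketch; the Subject filter is a placeholder for your certificate's subject):

Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Subject -like "*Azure*" } | Format-Table Thumbprint, Subject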

1) First up, you'll need to make your own "MyProject.Azure.cspkg" file. To do this, run:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe MyProject.sln /p:Configuration=Release

(Adjust paths as required.)

You'll now find a package waiting for you at "C:\Code\MyProject\MyProject\MyProject.Azure\bin\Release\Publish\MyProject.Azure.cspkg".

2) Make sure you have the required management certificate installed on your machine (including the private key).

3) Now you’re ready to run the deployment script.

It requires quite a lot of parameters. The easiest way to find them is just to copy them from the last output log on TeamCity.

The script you will need to manually execute is Deploy-Package.ps1.

The Deploy-Package.ps1 file has input parameters that need to be supplied. Below is the list of parameters and description.

Note: These values can change in the future, so ensure you do not rely on this example below.

AzureAccountName: The Windows Azure Account Name e.g. MyProjectUAT

AzureServiceName: The Windows Azure Service Name e.g. MyProjectUAT

AzureDeploymentSlot: Production or Staging e.g. Production

AzureAccountKey: The Azure Account Key: e.g. youraccountkey==

AzureSubscriptionId: The Azure Subscription Id e.g. yourazuresubscriptionId

AzureCertificateThumbprint: The certificate thumbprint you note down when importing the pfx file e.g. YourCertificateThumbprintWithNoWhiteSpaces

PackageSource: Location of the .cspkg file e.g. C:\Code\MyProject\MyProject.Azure\bin\Release\Publish\MyProject.Azure.cspkg

ConfigSource: Location of the Azure configuration files .cscfg e.g. C:\Code\MyProject\MyProject\MyProject.Azure\bin\Release\Publish\ServiceConfiguration.uat.cscfg

DeploymentName: This can be a friendly name of the deployment e.g. local-uat-deploy-test

Neo4jBlobName: The name of the blob file containing the Neo4j binaries in zip format e.g. neo4j-community-1.4.M04-windows.zip

Neo4jZippedBinaryFileHttpSource: The http location of the Neo4j zipped binary files e.g. https://mydownloads.com/mydownloads/neo4j-community-1.4.M04-windows.zip?dl=1

-Verbose: You can use an additional parameter to get Verbose output which is useful when developing and testing the script, just append -Verbose to the end of the command.

Below is an example executed on my machine, this will be different on your machine, so use it as a guideline only:

.\Deploy-Package.ps1 -AzureAccountName MyProjectUAT `
-AzureServiceName MyProjectUAT `
-AzureDeploymentSlot Production `
-AzureAccountKey youraccountkey== `
-AzureSubscriptionId yoursubscriptionid `
-AzureCertificateThumbprint yourcertificatethumbprint `
-PackageSource "c:\Code\MyProject\MyProject\MyProject.Azure\bin\Release\Publish\MyProject.Azure.cspkg" `
-ConfigSource "c:\Code\MyProject\MyProject\MyProject.Azure\bin\Release\Publish\ServiceConfiguration.uat.cscfg" `
-DeploymentName local-uat-deploy-test -Neo4jBlobName neo4j-community-1.4.1-windows.zip `
-Neo4jZippedBinaryFileHttpSource https://mydownloads.com/mydownloads/neo4j-community-1.4.1-windows.zip?dl=1 -Verbose

Note: When running the 64-bit version of the scripts, ensure you are running the PowerShell version that you fixed per the readme file from Cerebrata; do not rely on the default shortcut links in the start menu!

Summary

Well, I hope this helps you automate Azure deployments to the cloud; it is a great way to keep UAT happy with agile deployments that meet the goals of every sprint.

If you do not like the way we generate the package files above, you can choose to use csrun and cspack explicitly; I have prepared that script already, and the code is below.

#requires -version 2.0
param (
	[parameter(Mandatory=$false)] [string]$ArtifactDownloadLocation
)

$ErrorActionPreference = "Stop"

$installPath= Join-Path $ArtifactDownloadLocation "..\AzurePackage"
$azureToolsPackageSDKPath="c:\Program Files\Windows Azure SDK\v1.4\bin\cspack.exe"
$azureToolsDeploySDKPath="c:\Program Files\Windows Azure SDK\v1.4\bin\csrun.exe"

$csDefinitionFile="..\..\Neo4j.Azure.Server\ServiceDefinition.csdef"
$csConfigurationFile="..\..\Neo4j.Azure.Server\ServiceConfiguration.cscfg"

$webRolePropertiesFile = ".\WebRoleProperties.txt"
$workerRolePropertiesFile=".\WorkerRoleProperties.txt"

$csOutputPackage="$installPath\Neo4j.Azure.Server.csx"
$serviceConfigurationFile = "$installPath\ServiceConfiguration.cscfg"

$webRoleName="Web"
$webRoleBinaryFolder="..\..\Web"

$workerRoleName="Neo4jServerHost"
$workerRoleBinaryFolder="..\..\Neo4jServerHost\bin\Debug"
$workerRoleEntryPointDLL="Neo4j.Azure.Server.dll"

function StartAzure{
	"Starting Azure development fabric"
	& $azureToolsDeploySDKPath /devFabric:start
	& $azureToolsDeploySDKPath /devStore:start
}

function StopAzure{
	"Shutting down development fabric"
	& $azureToolsDeploySDKPath /devFabric:shutdown
	& $azureToolsDeploySDKPath /devStore:shutdown
}

#Example: cspack Neo4j.Azure.Server\ServiceDefinition.csdef /out:.\Neo4j.Azure.Server.csx /role:$webRoleName;$webRoleName /sites:$webRoleName;$webRoleName;.\$webRoleName /role:Neo4jServerHost;Neo4jServerHost\bin\Debug;Neo4j.Azure.Server.dll /copyOnly /rolePropertiesFile:$webRoleName;WebRoleProperties.txt /rolePropertiesFile:$workerRoleName;WorkerRoleProperties.txt
function PackageAzure()
{
	"Packaging the azure Web and Worker role."
	& $azureToolsPackageSDKPath $csDefinitionFile /out:$csOutputPackage /role:$webRoleName";"$webRoleBinaryFolder /sites:$webRoleName";"$webRoleName";"$webRoleBinaryFolder /role:$workerRoleName";"$workerRoleBinaryFolder";"$workerRoleEntryPointDLL /copyOnly /rolePropertiesFile:$webRoleName";"$webRolePropertiesFile /rolePropertiesFile:$workerRoleName";"$workerRolePropertiesFile
	if (-not $?)
	{
		throw "The packaging process returned an error code."
	}
}

function CopyServiceConfigurationFile()
{
	"Copying service configuration file."
	copy $csConfigurationFile $serviceConfigurationFile
}

#Example: csrun /run:.\Neo4j.Azure.Server.csx;.\Neo4j.Azure.Server\ServiceConfiguration.cscfg /launchbrowser
function DeployAzure{param ([string] $azureCsxPath, [string] $azureConfigPath)
	"Deploying the package"
    & $azureToolsDeploySDKPath $azureCsxPath $azureConfigPath # use the function parameters rather than the globals
	if (-not $?)
	{
		throw "The deployment process returned an error code."
	}
}

Write-Host "Beginning deploy and configuration at" (Get-Date)

PackageAzure
StopAzure
StartAzure
CopyServiceConfigurationFile
DeployAzure $csOutputPackage $serviceConfigurationFile

# Give it 60s to boot up neo4j
[System.Threading.Thread]::Sleep(60000)

# Hit the homepage to make sure it's warmed up
(New-Object System.Net.WebClient).DownloadString("http://localhost:8080") | Out-Null

Write-Host "Completed deploy and configuration at" (Get-Date)

Note: if using .NET 4.0, which I am sure you all are, you will need to provide the text files for the web role and worker role with these entries.

WorkerRoleProperties.txt
TargetFrameWorkVersion=v4.0
EntryPoint=Neo4j.Azure.Server.dll

WebRoleProperties.txt
TargetFrameWorkVersion=v4.0

Thanks to Tatham Oddie for contributing and coming up with such great ideas for our builds.
Cheers

Romiko