ARM – Modular Templates – Reference resources already created

Hi,

I noticed the Microsoft documentation for the following function is a little vague.

reference(resourceName or resourceIdentifier, [apiVersion], ['Full'])

The second issue I see a lot of people having is how to reference a resource already created elsewhere and read some of that object's properties, e.g. the FQDN on a public IP that already exists.

The clue to solving this, so that ARM Template B can reference a resource created in ARM Template A, can be found here:

By using the reference function, you implicitly declare that one resource depends on another resource if the referenced resource is provisioned within the same template and you refer to the resource by its name (not resource ID). You don’t need to also use the dependsOn property. The function isn’t evaluated until the referenced resource has completed deployment.

Alternatively, you could use linked templates (a big rework, and you need to host the template files somewhere accessible over the network). Let's see if we can do it via resourceId instead.

Therefore, if we reference a resource by resourceId, we remove the implicit dependsOn, allowing ARM Template B to use a resource created in a totally different ARM template.

A great example might be the FQDN on an IP Address.

Imagine ARM Template A creates the public IP address:


"resources": [
{
"apiVersion": "[variables('publicIPApiVersion')]",
"type": "Microsoft.Network/publicIPAddresses",
"name": "[variables('AppPublicIPName')]",
"location": "[variables('computeLocation')]",
"properties": {
"dnsSettings": {
"domainNameLabel": "[variables('AppDnsName')]"
},
"publicIPAllocationMethod": "Dynamic"
},
"tags": {
"resourceType": "Service Fabric",
"scaleSetName": "[parameters('scaleSetName')]"
}
}]

Now imagine we need to get the FQDN of that IP address in ARM Template B.

What we are going to do is use this pattern:

reference(resourceIdentifier, [apiVersion]) ->
reference(resourceId(...), [apiVersion])

Here is an example where ARM Template B references a resource created in ARM Template A and reads one of its properties:


"managementEndpoint": "[concat('https://',reference(resourceId('Microsoft.Network/publicIPAddresses/',variables('AppPublicIPName')), variables('publicIPApiVersion')).dnsSettings.fqdn,':',variables('nodeTypePrimaryServiceFabricHttpPort'))]",

The important thing here is to always include the API version. This pattern is a very powerful way to create smaller, more modular ARM templates.

Note: in the above pattern, you do not need to define dependsOn in ARM Template B, as we are explicitly referencing an existing resource. ARM Template B is not responsible for creating the public IP; if you need it, you run ARM Template A.

So if you need to reference existing resources, use the above. If you need to reference resources created in the SAME ARM template, use:

reference(resourceName)
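
For completeness, here is a minimal sketch of the same-template case, reusing the variable name from the example above. Referencing the public IP by name creates the implicit dependency, so no dependsOn is needed:

"outputs": {
  "fqdn": {
    "type": "string",
    "value": "[reference(variables('AppPublicIPName')).dnsSettings.fqdn]"
  }
}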

Cheers


Service Fabric – Upgrading VMSS Disks, Operating System on Primary Node Type

How do you upgrade the existing Data Disk on a primary Node Type Virtual Machine ScaleSet in Service Fabric?

How do you upgrade the existing Operating System on a primary Node Type VMSS in Service Fabric?

How do you move the Data Disk on a primary Node Type VMSS in Service Fabric?

How do you monitor the status during the upgrade, so you know exactly how many seed nodes have migrated over to the new scale set?

Note: we successfully increased the VM SKU size as well, although this is not supported by Microsoft. Just increase your SKU in ARM and later, after the successful transfer to the new VMSS, run Update-AzureRmServiceFabricDurability.
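
As a rough sketch (the resource group, cluster and node type names are hypothetical):

# Re-align the durability tier after the SKU change (run only after the transfer completes).
Update-AzureRmServiceFabricDurability -ResourceGroupName "my-sf-rg" -Name "my-sf-cluster" `
    -NodeType "NT1VM" -DurabilityLevel Silver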

Considerations

  • You know how to use ARM to deploy an Azure Load Balancer
  • You know how to use ARM to deploy a Virtual Machine Scale Set (VMSS)
  • The Service Fabric Durability/Reliability tier must be at least Silver
  • Keep the original Azure DNS name on the Load Balancer that is used to connect to the Service Fabric endpoint. It is very important to write it down as a backup
  • Reduce the TTL of all your DNS records before the upgrade; the downtime will be roughly the TTL value, e.g. 10 minutes. (Ensure you have access to your primary DNS provider to do this)
  • Prepare an ARM template to add the new Azure Load Balancer that the new VMSS will attach to (backend pool)
  • Prepare an ARM template to add the new VMSS to the existing Service Fabric primary node type
  • Deploy the new Azure Load Balancer + VMSS to the Service Fabric primary node type
  • Run RemoveScaleSetFromClusterController.ps1 on one of the NEW nodes in the NEW VMSS. This script monitors and facilitates moving the primary node type to the new VMSS, showing the status of the seed nodes moving from the original primary node type to the new VMSS
  • When it completes, the last step is to update DNS
  • Run MoveDNSToNewPublicIPController.ps1

ARM Templates

You will need only two templates: one to deploy a new Azure Load Balancer, and one to deploy the new VMSS into the existing Service Fabric cluster.

You will also need a PowerShell script that will run as a custom script extension.
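
Both templates deploy with the standard resource group deployment cmdlet. A minimal sketch (the resource group and parameter file names are hypothetical):

New-AzureRmResourceGroupDeployment -ResourceGroupName "my-sf-rg" `
    -TemplateFile .\azuredeploy_servicefabric_loadbalancer.json `
    -TemplateParameterFile .\azuredeploy_servicefabric_loadbalancer.parameters.json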

Custom Script – prepare_sf_vm.ps1


# Find all raw (uninitialized) disks, in disk-number order.
$disks = Get-Disk | Where-Object PartitionStyle -eq 'raw' | Sort-Object Number

# Drive letters F..Y (ASCII 70..89); D: is temp storage and E: is the CD-ROM.
$letters = 70..89 | ForEach-Object { [char]$_ }
$count = 0
$label = "datadisk"

# Initialize, partition and format each raw disk as NTFS.
foreach ($disk in $disks) {
    $driveLetter = $letters[$count].ToString()
    $disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -DriveLetter $driveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "$label$count" -Confirm:$false -Force
    $count++
}

# Disable Windows Update to avoid unscheduled reboots on cluster nodes.
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU' -Name NoAutoUpdate -Value 1

 

Load Balancer – azuredeploy_servicefabric_loadbalancer.json

Use your particular Load Balancer ARM template. There is no need to attach a backend pool, as this will be done by the VMSS template below.

Service Fabric attach new VMSS – azuredeploy_add_new_VMSS_to_nodeType.json

Create your own VMSS template that you attach to Service Fabric. The important aspects are the following:

  • nodeTypeRef – attaches the VMSS to the existing primary node type
  • dataPath – points Service Fabric data at the new disk
  • dataDisk – adds the new managed data disk

We use F: onwards, as D: is reserved for temp storage and E: is reserved for the CD-ROM on Azure VMs.


{
    "name": "[concat('ServiceFabricNodeVmExt',variables('vmNodeType0Name'))]",
    "properties": {
        "type": "ServiceFabricNode",
        "autoUpgradeMinorVersion": true,
        "protectedSettings": {
            "StorageAccountKey1": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('supportLogStorageAccountName')),'2015-05-01-preview').key1]",
            "StorageAccountKey2": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('supportLogStorageAccountName')),'2015-05-01-preview').key2]"
        },
        "publisher": "Microsoft.Azure.ServiceFabric",
        "settings": {
            "clusterEndpoint": "[parameters('existingClusterConnectionEndpoint')]",
            "nodeTypeRef": "[parameters('existingNodeTypeName')]",
            "dataPath": "F:\\SvcFab",
            "durabilityLevel": "Silver",
            "enableParallelJobs": true,
            "nicPrefixOverride": "[variables('subnet0Prefix')]",
            "certificate": {
                "thumbprint": "[parameters('certificateThumbprint')]",
                "x509StoreName": "[parameters('certificateStoreValue')]"
            }
        },
        "typeHandlerVersion": "1.0"
    }
},
....
.......
.........
"storageProfile": {
                        "imageReference": {
                            "publisher": "[parameters('vmImagePublisher')]",
                            "offer": "[parameters('vmImageOffer')]",
                            "sku": "2016-Datacenter-with-Containers",
                            "version": "[parameters('vmImageVersion')]"
                        },
                        "osDisk": {
                            "managedDisk": {
                                "storageAccountType": "[parameters('storageAccountType')]"
                            },
                            "caching": "ReadWrite",
                            "createOption": "FromImage"
                        },
                        "dataDisks": [
                            {
                                "managedDisk": {
                                    "storageAccountType": "[parameters('storageAccountType')]"
                                },
                                "lun": 0,
                                "createOption": "Empty",
                                "diskSizeGB": "[parameters('dataDiskSize')]",
                                "caching": "None"
                            }
                        ]
                    }

...
....
.....
 "virtualMachineProfile": {
                    "extensionProfile": {
                        "extensions": [
                            {
                                "name": "PrepareDataDisk",
                                "properties": {
                                    "publisher": "Microsoft.Compute",
                                    "type": "CustomScriptExtension",
                                    "typeHandlerVersion": "1.8",
                                    "autoUpgradeMinorVersion": true,
                                    "settings": {
                                    "fileUris": [
                                        "[variables('vmssSetupScriptUrl')]"
                                    ],
                                    "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File prepare_sf_vm.ps1 ')]"
                                    }
                                }
                            },


 

Once you have the new VMSS attached to the existing node type, you should see the extra nodes in Service Fabric. The next step is to disable and remove the existing VMSS. This is an online operation, so you should be fine. However, later we will need to update DNS for the cluster endpoint; this is important so that the PowerShell admin tools can still connect to the Service Fabric cluster.
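
To sanity-check admin connectivity over the DNS name, a connection to a secure cluster looks roughly like this (the endpoint and thumbprint are placeholders):

Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.southeastasia.cloudapp.azure.com:19000" `
    -X509Credential -ServerCertThumbprint "<thumbprint>" `
    -FindType FindByThumbprint -FindValue "<thumbprint>" `
    -StoreLocation CurrentUser -StoreName My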

RemoveScaleSetFromClusterController.ps1

Remote into one of the NEW VMSS virtual machines and run the following script. It makes dead sure that your seed nodes migrate over. It can take a long time (the Microsoft docs say it takes a long time, but not how long); it depends. For a cluster with 5 seed nodes, it took nearly 4 hours! So be patient, and update the loop timeout to match your environment; increase the timeout if you have more than 5 seed nodes. My general rule is to allow 45 minutes per seed node transfer.


#Requires -Version 5.0
#Requires -RunAsAdministrator



param (
    [Parameter(Mandatory = $true)]
    [string]
    $subscriptionName,

    [Parameter(Mandatory = $true)]
    [string] 
    $scaleSetToDisable,

    [Parameter(Mandatory = $true)]
    [string]
    $scaleSetToEnable,

    [Parameter(Mandatory = $true)]
    [string] 
    $resourceGroupName
)

Install-Module AzureRM.Compute -Force

Import-Module ServiceFabric -Force
Import-Module AzureRM.Compute -Force

function Disable-InternetExplorerESC {
    $AdminKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}"
    $UserKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}"
    Set-ItemProperty -Path $AdminKey -Name "IsInstalled" -Value 0
    Set-ItemProperty -Path $UserKey -Name "IsInstalled" -Value 0
    Stop-Process -Name Explorer
    Write-Host "IE Enhanced Security Configuration (ESC) has been disabled." -ForegroundColor Green
}

function Enable-InternetExplorerESC {
    $AdminKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}"
    $UserKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}"
    Set-ItemProperty -Path $AdminKey -Name "IsInstalled" -Value 1
    Set-ItemProperty -Path $UserKey -Name "IsInstalled" -Value 1
    Stop-Process -Name Explorer
    Write-Host "IE Enhanced Security Configuration (ESC) has been enabled." -ForegroundColor Green
}

$ErrorActionPreference = "Stop"

Disable-InternetExplorerESC

Login-AzureRmAccount -SubscriptionName $subscriptionName

Write-Host "Before you continue:  Ensure IE Enhanced Security is off."
Write-Host "Before you continue:  Ensure your new scaleset is ALREADY added to the Service Fabric Cluster"
Pause

try {
    Connect-ServiceFabricCluster
    Get-ServiceFabricClusterHealth
} catch {
    Write-Error "Please run this script from one of the new nodes in the cluster."
}

Write-Host "Please do not continue unless the Cluster is healthy and both Scale Sets are present in the SFCluster."
Pause

$nodesToDisable = Get-ServiceFabricNode | Where NodeName -match "_($scaleSetToDisable)_\d+"
$OldSeedCount = ( $nodesToDisable | Where IsSeedNode -eq  $true | Measure-Object).Count
$nodesToEnable = Get-ServiceFabricNode | Where NodeName -match "_($scaleSetToEnable)_\d+"

if($OldSeedCount -eq 0){
    Write-Error "Node Seed count must be greater than zero."
    exit
}

if($nodesToDisable.Count -eq 0){
    Write-Error "No nodes to disable found."
    exit
}

if($nodesToEnable.Count -eq 0){
    Write-Error "No nodes to enable found."
    exit
}

If (-not ($nodesToEnable.Count -ge $OldSeedCount)) {
    Write-Error "The new VM Scale Set must have at least $OldSeedCount nodes in order for the Seed Nodes to migrate over."
    exit
}

Write-Host "Disabling nodes in VMSS $scaleSetToDisable. Are you sure?"
Pause

foreach($node in $nodesToDisable){
    Disable-ServiceFabricNode -NodeName $node.NodeName -Intent RemoveNode -Force
}

Write-Host "Checking node status..."
$loopTimeout = 360
$loopWait = 60
$oldNodesDeactivated = $false
$newSeedNodesReady = $false

while ($loopTimeout -ne 0) {
    Get-Date -Format o
    Write-Host
    Write-Host "Nodes To Remove"

    foreach($nodeToDisable in $nodesToDisable) {
        $state = Get-ServiceFabricNode -NodeName $nodeToDisable.NodeName
        $msg = "{0} NodeDeactivationInfo: {1} IsSeedNode: {2} NodeStatus {3}" -f $nodeToDisable.NodeName, $state.NodeDeactivationInfo.Status, $state.IsSeedNode, $state.NodeStatus
        Write-Host $msg
    }

    # Re-query each node: the objects captured before the loop are static snapshots.
    $oldNodesDeactivated = ($nodesToDisable | ForEach-Object { Get-ServiceFabricNode -NodeName $_.NodeName } | Where-Object { ($_.NodeStatus -eq [System.Fabric.Query.NodeStatus]::Disabled) -and ($_.NodeDeactivationInfo.Status -eq "Completed") } | Measure-Object).Count -eq $nodesToDisable.Count

    Write-Host
    Write-Host "Nodes To Add Status"

    foreach($nodeToEnable in $nodesToEnable) {
        $state = Get-ServiceFabricNode -NodeName $nodeToEnable.NodeName
        $msg = "{0} IsSeedNode: {1}, NodeStatus: {2}" -f $nodeToEnable.NodeName, $state.IsSeedNode, $state.NodeStatus
        Write-Host $msg
    }
    $newSeedNodesReady = ($nodesToEnable | ForEach-Object { Get-ServiceFabricNode -NodeName $_.NodeName } | Where-Object { ($_.NodeStatus -eq [System.Fabric.Query.NodeStatus]::Up) -and $_.IsSeedNode } | Measure-Object).Count -ge $OldSeedCount
    if($oldNodesDeactivated -and $newSeedNodesReady) {
        break
    }
    $loopTimeout -= 1
    Start-Sleep $loopWait
}

if (-not ($oldNodesDeactivated)) {
    Write-Error "A node failed to deactivate within the time period specified."
    exit
}

$loopTimeout = 180
while ($loopTimeout -ne 0) {
    Write-Host
    Write-Host "Nodes To Add Status"

    foreach($nodeToEnable in $nodesToEnable) {
        $state = Get-ServiceFabricNode -NodeName $nodeToEnable.NodeName
        $msg = "{0} IsSeedNode: {1}, NodeStatus: {2}" -f $nodeToEnable.NodeName, $state.IsSeedNode, $state.NodeStatus
        Write-Host $msg
    }
    $newSeedNodesReady = ($nodesToEnable | ForEach-Object { Get-ServiceFabricNode -NodeName $_.NodeName } | Where-Object { ($_.NodeStatus -eq [System.Fabric.Query.NodeStatus]::Up) -and $_.IsSeedNode } | Measure-Object).Count -ge $OldSeedCount
    if($newSeedNodesReady) {
        break
    }
    $loopTimeout -= 1
    Start-Sleep $loopWait
}

$NewSeedNodes = Get-ServiceFabricNode | Where-Object {($_.NodeName -match "_($scaleSetToEnable)_\d+") -and ($_.IsSeedNode -eq $True)}
Write-Host "New Seed Nodes are:"
$NewSeedNodes | Select NodeName
$NewSeedNodesCount = ($NewSeedNodes  | Measure-Object).Count

if($NewSeedNodesCount -ge $OldSeedCount) {
    Write-Host "Removing the scale set $scaleSetToDisable"
    Remove-AzureRmVmss -ResourceGroupName $ResourceGroupName -VMScaleSetName $scaleSetToDisable -Force
    Write-Host "Removed scale set $scaleSetToDisable"

    Write-Host "Removing Node State for old nodes"
    $nodesToDisable | Remove-ServiceFabricNodeState -Force
    Write-Host "Done"

    Get-ServiceFabricClusterHealth
    Get-ServiceFabricNode
} else {
    Write-Host "New Seed Nodes do not match the minimum requirements $NewSeedNodesCount."
    Write-Host "Manually run  Remove-AzureRmVmss"
    Write-Host "Then Manually run  Remove-ServiceFabricNodeState"
    Get-ServiceFabricClusterHealth
    Get-ServiceFabricNode
}

Enable-InternetExplorerESC
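
A typical invocation from one of the new nodes might look like this (all names hypothetical):

.\RemoveScaleSetFromClusterController.ps1 -subscriptionName "MySubscription" `
    -scaleSetToDisable "NT1VM" -scaleSetToEnable "NT1VMNEW" -resourceGroupName "my-sf-rg"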

This script is extremely useful: you can watch the progress of the seed node transfer and the disabling of the existing primary node type instances.

You know it is successful when the old nodes have ZERO seed nodes. All SEED nodes must transfer over to the new nodes, and IsSeedNode should be false for every node in the old scale set by the end of the script execution.

MoveDNSToNewPublicIPController.ps1

Lastly, you MUST update DNS to use the original CNAME. This script can help: it detaches the original internal Azure DNS name from the old public IP and moves it to the new public IP attached to the new load balancer.




param (
    [Parameter(Mandatory = $true)]
    [string]
    $subscriptionName,

    [Parameter(Mandatory = $true)]
    [string]
    $oldLoadBalancerName,

    [Parameter(Mandatory = $true)]
    [string]
    $resourceGroupName,

    [Parameter(Mandatory = $true)]
    [string]
    $oldPublicIpName,

    [Parameter(Mandatory = $true)]
    [string]
    $newPublicIpName
)

Install-Module AzureRM.Network -Force
Import-Module AzureRM.Network -Force

$ErrorActionPreference = "Stop"
Login-AzureRmAccount -SubscriptionName $subscriptionName

Write-Host "Are you sure you want to do this? There will be brief connectivity downtime."
Pause

$oldPrimaryPublicIP = Get-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $resourceGroupName
$primaryDNSName = $oldPrimaryPublicIP.DnsSettings.DomainNameLabel
$primaryDNSFqdn = $oldPrimaryPublicIP.DnsSettings.Fqdn

if ($primaryDNSName.Length -gt 0 -and $primaryDNSFqdn.Length -gt 0) {
    Write-Host "Found the Primary DNS Name" $primaryDNSName
    Write-Host "Found the Primary DNS FQDN" $primaryDNSFqdn
} else {
    Write-Error "Could not find the DNS settings attached to the old IP $oldPublicIpName"
    Exit
}

Write-Host "Moving the Azure DNS name to the new Public IP"

# Release the DNS label from the old public IP first; a domain name label
# must be unique within the region, so it cannot sit on both IPs at once.
$oldPrimaryPublicIP.DnsSettings = $null
Set-AzureRmPublicIpAddress -PublicIpAddress $oldPrimaryPublicIP

# Assign the label to the new public IP; Azure recomputes the FQDN from it.
$PublicIP = Get-AzureRmPublicIpAddress -Name $newPublicIpName -ResourceGroupName $resourceGroupName
$PublicIP.DnsSettings.DomainNameLabel = $primaryDNSName
Set-AzureRmPublicIpAddress -PublicIpAddress $PublicIP

Get-AzureRmPublicIpAddress -Name $newPublicIpName -ResourceGroupName $resourceGroupName
Write-Host "Transfer Done"

Write-Host "Removing Load Balancer related to old Primary NodeType."
Write-Host "Are you sure?"
Pause

Remove-AzureRmLoadBalancer -Name $oldLoadBalancerName -ResourceGroupName $resourceGroupName -Force
Remove-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $resourceGroupName -Force

Write-Host "Done"

Summary

In this article you followed the process to:

  • Configure ARM to add a new VMSS with OS and data disks
  • Add a new Virtual Machine Scale Set to an existing Service Fabric node type
  • Run a PowerShell controller script to monitor the outcome of the VMSS transfer
  • Transfer the original management DNS CNAME to the new public IP address

Conclusion

This project requires a lot of testing for your environment; allocate at least a few days to test the entire process before you try it out on your production services.

HTH

T-SQL – Uppercase the First Letter of a Word

I am amazed by the complex solutions out on the internet for upper-casing the first letter of a word in SQL. Here is a way I think is nice and simple.


-- Test Data

declare @word varchar(100); -- the semicolon is required before a CTE
with good as (select 'good' as a union select 'nice' union select 'fine')
select @word = (SELECT TOP 1 a FROM good ORDER BY NEWID())

-- Implementation

select substring(Upper(@word),1,1) + substring(@word, 2, LEN(@word))

Request.Browser.IsMobileDevice & Tablet Devices

Hi,

The problem with Request.Browser.IsMobileDevice is that it classifies a tablet as a mobile device.

If you need to discern between mobile, tablet and desktop, then use the following extension method.

using System.Text.RegularExpressions;
using System.Web;

namespace Web.Public.Helpers
{
    public static class HttpBrowserCapabilitiesBaseExtensions
    {
        public static bool IsMobileNotTablet(this HttpBrowserCapabilitiesBase browser)
        {
            // The empty-string key of the Capabilities dictionary holds the raw user agent.
            var userAgent = browser.Capabilities[""].ToString();

            // Common tablet markers; note "android" without "mobile" means a tablet.
            var r = new Regex("ipad|(android(?!.*mobile))|xoom|sch-i800|playbook|tablet|kindle|nexus|silk", RegexOptions.IgnoreCase);
            var isTablet = r.IsMatch(userAgent) && browser.IsMobileDevice;
            return !isTablet && browser.IsMobileDevice;
        }
    }
}

 

Then using it is easy: just import the namespace and call the method.

using Web.Public.Helpers;
...
if (Request.Browser.IsMobileNotTablet() && !User.IsSubscribed)
....

JWPlayer .NET Client – Management API

Hi,

We recently migrated all our content from Ooyala to JWPlayer Hosted Platform. We needed a .NET tool to perform the following:

  1. Create Videos from remote sources
  2. Update Videos later e.g. Thumbnails etc
  3. List Videos in a Custom Application
  4. Other cool ideas that come up after adopting a new tool

Currently the JWPlayer Management API only has PHP and Python 2.7 clients as examples for batch migration.

To use in Visual Studio:

  1. Open Package Manager Console
  2. Run – Install-Package JWPlayer.NET

I have created an open-source JWPlayer.NET library. Please feel free to improve on it, e.g. make it fluent.

Get Source Code (JWPlayer.NET)

Below is how you can use the API as of 29/06/2017.

Create Video

var jw = new Jw(ApiKey, ApiSecret);
var parameters = new Dictionary<string, string>
{
    {"sourceurl", "http://www.sample-videos.com/video/mp4/720/big_buck_bunny_720p_1mb.mp4"},
    {"sourceformat", "mp4"},
    {"sourcetype", "url"},
    {"title", "Test"},
    {"description", "Test Video"},
    {"tags", "foo, bar"},
    {"custom.LegacyId", Guid.NewGuid().ToString()}
};
var result = jw.CreateVideo(parameters);

Update Video

var jw = new Jw(ApiKey, ApiSecret);
var parameters = new Dictionary<string, string>
{
    {"video_key", "QxbbRMMP"},
    {"title", "Test Updated"},
    {"tags", "foo, bar, updated"},
};
var result = jw.UpdateVideo(parameters);

List Video

var jw = new Jw(ApiKey, ApiSecret);
var basicVideoSearch = new BasicVideoSearch {Search = "Foo", StartDate = DateTime.UtcNow.AddDays(-100)};
var result = jw.ListVideos(basicVideoSearch);
var count = result.Videos.Count;

 

Batch Migrations

Ensure you stick to the Rate Limit of 60 calls per minute, or call your JWPlayer Account Manager to increase it.

// One call per second keeps us under the 60 calls/minute rate limit.
for (var i = 0; i < lines.Count; i++)
{
   jw.CreateVideo(parameters);
   Thread.Sleep(TimeSpan.FromSeconds(1));
}

Clone JWPlayer.NET

Calculate Wind Direction and Wind Speed from Wind Vectors

Wind vectors have a U (eastward) and V (northward) component. The resultant speed is sqrt(u² + v²), and the meteorological direction (the direction the wind blows FROM, measured clockwise from north) is atan2(u, v) × 180/π + 180.

Below is the C# code to calculate the resultant wind:

public struct Wind
{
    public Wind(float speed, float direction)
    {
        Speed = speed;
        Direction = direction;
    }
    public float Speed { get; set; }
    public float Direction { get; set; }
}

public static Wind CalculateWindSpeedAndDirection(float u, float v)
{
    // Calm winds: report zero speed and direction.
    if (Math.Abs(u) < 0.001 && Math.Abs(v) < 0.001)
        return new Wind(0, 0);
    const double radianToDegree = (180 / Math.PI);

    return new Wind(
        Convert.ToSingle(Math.Sqrt(Math.Pow(u, 2) + Math.Pow(v, 2))),
        Convert.ToSingle(Math.Atan2(u, v) * radianToDegree + 180));
}

Test Code

[TestCase(-8.748f, 7.157f, 11.303f, 129.29f)]
[TestCase(-4.641f, -3.049f, 5.553f, 56.696f)]
[TestCase(10f, 0f, 10f, 270f)]
[TestCase(-10f, 0f, 10f, 90f)]
[TestCase(0f, 10f, 10f, 180f)]
[TestCase(0f, -10f, 10f, 360f)]
[TestCase(0f, 0f, 0f, 0f)]
[TestCase(0.001f, 0.001f, 0.0014142f, 225f)]
public void CanConvertWindVectorComponents(float u, float v, float expectedWindSpeed, float expectedWindDirection)
{
    var result = MetraWaveForecastLocationModel.CalculateWindSpeedAndDirection(u, v);
    Assert.AreEqual(Math.Round(expectedWindDirection, 2), Math.Round(result.Direction, 2));
    Assert.AreEqual(Math.Round(expectedWindSpeed, 2), Math.Round(result.Speed, 2));
}

Detect if User is idle

Scenario

You are running a Windows Forms application that sits in the system tray. You have a few notifications that you would like to show the user.

However, what good are notifications if the user is on the loo? She will not see them.

Solution

Run a timer that detects when the user is active on the machine, and then show the notification or run whatever task you need to.

Below is sample code that will do this for you. Of course, for your production environment you would use a proper timer or an event subscription service.

I have tested this by using other applications whilst the program monitors my input, and it is safe to say it works across all my applications, even when the screen is locked.

You might want to handle the case where the screen is locked but the user is still moving the mouse; however, that is an edge case that is unlikely to happen.

I know MSDN mentions that GetLastInputInfo is not system-wide, however on my Windows 10 machine it does appear to be system-wide.

using System;
using System.Runtime.InteropServices;
using System.Timers;

namespace MyMonitor
{
    class Program
    {
        private static Timer _userActivityTimer;
        static void Main()
        {
            _userActivityTimer = new Timer(500);
            _userActivityTimer.Elapsed += OnTimerElapsed;
            _userActivityTimer.AutoReset = true;
            _userActivityTimer.Enabled = true;
            Console.WriteLine("Press the Enter key to exit the program at any time... ");
            Console.ReadLine();
        }

        private static void OnTimerElapsed(object sender, ElapsedEventArgs e)
        {
            Console.WriteLine($"Last Input: {LastInput.ToShortTimeString()}");
            // Use TotalSeconds rather than Seconds, which wraps at 60.
            Console.WriteLine($"Idle for: {(int)IdleTime.TotalSeconds} Seconds");
        }

        [DllImport("user32.dll", SetLastError = false)]
        private static extern bool GetLastInputInfo(ref Lastinputinfo plii);
        private static readonly DateTime SystemStartup = DateTime.Now.AddMilliseconds(-Environment.TickCount);

        [StructLayout(LayoutKind.Sequential)]
        private struct Lastinputinfo
        {
            public uint cbSize;
            public readonly int dwTime;
        }

        public static DateTime LastInput => SystemStartup.AddMilliseconds(LastInputTicks);

        public static TimeSpan IdleTime => DateTime.Now.Subtract(LastInput);

        private static int LastInputTicks
        {
            get
            {
                var lii = new Lastinputinfo {cbSize = (uint) Marshal.SizeOf(typeof(Lastinputinfo))};
                GetLastInputInfo(ref lii);
                return lii.dwTime;
            }
        }
    }
}


Migrating to AWS CodeCommit

Hosting your code in AWS CodeCommit has several advantages, the main one being seamless integration with AWS CodeDeploy and AWS CodePipeline.

I use SourceTree as my repo tool of choice, with Git/Bitbucket as the back end.

If you have a team of many developers and want to slowly migrate your code to an AWS CodeCommit Git repo, you can set up your SourceTree config to push to both repos.

1. You will need an SSH-2 RSA 2048-bit public/private key pair; this is what AWS supports. Once you have generated/imported the key into AWS, you can import the same key into your GitHub or Bitbucket account. Then just add it to Pageant. Read Setting Up AWS CodeCommit.

2. In AWS, when you import your SSH key for an IAM user, it will give you an SSH Key ID. Write down this SSH Key ID; the password for it will be the private key password you generated with PuTTYgen. Always use a password for your private key file.

AWS IAM User SSH Key
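
If you prefer the command line over the console for this step, the key can also be uploaded with the AWS CLI (the user name and key file are hypothetical):

aws iam upload-ssh-public-key --user-name MyDeveloper --ssh-public-key-body file://codecommit_rsa.pub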

3. In SourceTree, go to Tools/Options and set the private key to your AWS SSH Key. Remember we added this to Bitbucket and Git, so we can now use the AWS SSH Key/Pairs for both repositories.

SourceTree Private Key

The last part, is to configure your local repo to post to both repositories, until you happy with the migration.

4. In SourceTree, select your repository and go to Repository/Repository Settings. Then add a new origin. It will be in this format: ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyCoolApp

5. When it prompts for a username and password, enter your SSH Key ID and SSH private key password

Source Tree Remote

Once you are happy with the migration, you can then set AWS CodeCommit as the default remote by ticking the checkbox. You may need to first rename the original remote “origin” to “old”, then set AWS as the default 🙂
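
If you prefer plain git over the SourceTree UI, the same dual-push setup is a few commands (the CodeCommit URL is from step 4; the Bitbucket URL is hypothetical):

git remote add codecommit ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyCoolApp
# Push to both remotes from "origin" during the migration window.
git remote set-url --add --push origin git@bitbucket.org:myteam/MyCoolApp.git
git remote set-url --add --push origin ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyCoolApp
git push origin master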

My only gripe with CodeCommit is that there are no built-in hooks to deploy directly to S3. This would be great for static assets.

#CodeCommit #AWS

 

Getting started with Amazon SQS

The data and metadata inflection point

We are nearing an inflection point regarding technology and data. Data is basically gold. In the next 30 years you will have a life-logger app and many connected/smart devices. You will be able to rewind back in time and listen to/look at a conversation you had with a random guy you met at a party.

You will punch into the Amazon Life Timeline service “Go to when I met a man wearing a shirt with Sponge Bob on it”. You wait a few seconds and a video appears of the exact moment you met the guy at the party.

Our lives will be transparent, our ego will be lowered, because we as a species are happy to share and be transparent. We will tip towards transparency and away from privacy, why? Because if we do not, we create a stumbling block in our technological evolution. Data is king, and this is the fundamental reason why companies pay so much for apps.

Google is not a search engine, it is going to be the most powerful artificial intelligent service offering in the world. One day you will use Google’s AI to optimize your life. It will track how you drive, when you sleep, when you come home; and by doing so, will have enough data collection points to run AI routines on your data and provide you with awesome benefits.

Likewise, Amazon Machine Learning services will be our AI friend.

As programmers, we are going to need to store data about data, or data about the bits that we send to the internet… metadata.

One way to do so is by asynchronous messaging. You will of course have an App/Smart Device that needs to send data or metadata about user behavior.

Queue Sender

Your app can send small data messages as a user is consuming your service to an Amazon Queue on the cloud.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Web;
using Amazon;
using Amazon.Runtime;
using Amazon.SQS;
using Amazon.SQS.Model;
using Amazon.Util;

namespace Wangle.Queue.Client
{
    public class AwsClient : IQueueClient
    {
        private AmazonSQSClient _client;
        private string defaultQueueUrl;

        public void Initialize(string url)
        {
            ProfileManager.RegisterProfile("Wangle", "myaccessKey", "mysecretkey");
            var amazonSqsConfig = new AmazonSQSConfig { ServiceURL = "http://sqs.us-east-1.amazonaws.com" };
            _client = new AmazonSQSClient(ProfileManager.GetAWSCredentials("Wangle"), amazonSqsConfig);
            defaultQueueUrl = url;
        }

        public void SendMessage(string message)
        {
            var sendMessageRequest = new SendMessageRequest
            {
                QueueUrl = defaultQueueUrl,
                MessageBody = $&quot;{message} + {DateTimeOffset.UtcNow}&quot; //Unicode Only!
            };

            _client.SendMessageAsync(sendMessageRequest);

        }

        public IList&amp;lt;string&amp;gt; ReceiveMessage()
        {
            var data = new List&amp;lt;string&amp;gt;();
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                QueueUrl = defaultQueueUrl,
                MaxNumberOfMessages = 10,
            };

            var receiveMessageResponse = _client.ReceiveMessage(receiveMessageRequest);

            receiveMessageResponse.Messages.ForEach(m =>
            {
                var receiptHandle = m.ReceiptHandle;
                data.Add(m.Body);
                _client.DeleteMessageAsync(defaultQueueUrl, receiptHandle);
            });
            return data;
        }
    }
}

Cloud Data Retention Receiver

Once the message is in the queue in the cloud, you will have a service in the cloud process it and store it in a big data service. Below is the code to get the message off the queue.

class Program
{
    static void Main(string[] args)
    {
        // Fake a worker role service running in the Amazon cloud that processes data storage.
        Console.WriteLine("Fetching data logs from queue to prepare for governance...");
        var queueClient = new AwsClient();
        queueClient.Initialize(Settings.Default.QueueURL);

        while (true)
        {
            queueClient.ReceiveMessage().ToList().ForEach(m => Console.WriteLine(m));

            //ToDo: Store the audit data in AmazonS3 or Big Data service: User, Url, DateTimeUtc, SourceIP, DestIP
            Thread.Sleep(TimeSpan.FromSeconds(2));
        }
    }
}

 

Summary

So that is basically the code you need. Of course, you will need to install the AWS SDK from NuGet.
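
In the Package Manager Console that is a one-liner (the package name assumes the current modular SDK; the older monolithic AWSSDK package also works):

Install-Package AWSSDK.SQS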

This should get you going in the right direction when you need to send data to the cloud over the wire for later processing.

I am sure Amazon SQS will be used to send data asynchronously for Fitbit information, how long you sleep, how you drive your car and much more. Soon all our devices will be smart, e.g. a cooking pot with a chip, your shirt with a chip…

See you soon in VR land…

PACS Server IntelePACS 4-2-1-P394 – Medical Connections – Inaccurate Image Counts

Hi,

When querying PACS at the study level, it is possible to get incorrect image counts, due to bugs in the IntelePACS software. I think it is caused by studies with mixed modalities.

I have written a .NET library to alleviate this issue, where the image counts can be correctly retrieved at the series level.

We need to query at the series level and just pass an empty string for the studyUid (to force it):

https://gist.github.com/Romiko/4dbba2d5ea37a99b368b


private void SetQueryResultsSeries()
{
    var seriesCount = Data.Count;
    if (seriesCount > 0)
    {
        // Sum the instance counts across all series in the study.
        for (var i = 0; i < seriesCount; i++)
        {
            var imagesInSeriesCount = Data[i][Keyword.NumberOfSeriesRelatedInstances];
            if (imagesInSeriesCount.ExistsWithValue)
                ImageCount += int.Parse(imagesInSeriesCount.Value.ToString());
        }
        SetIntrinsicProperties();
    }
    else
    {
        ImageCount = 0;
    }
}

Then to use my library, we just do this:

var query = new DicomQueryManager("AE_Romiko", "MYMasterPacsServer", "5000", "MyAccessionNumber", "").BuildMasterSeriesLevel();
// Notice the empty string above: it forces series-level enumeration so I can get the actual series collections.
query.Find();
var imageCount = query.ImageCount;
