T-SQL UpperCase first letter of word

I am amazed by the complex solutions out on the internet for upper-casing the first letter of a word in SQL. Here is a way I think is nice and simple.


-- Test Data

DECLARE @word varchar(100);

WITH good AS (SELECT 'good' AS a UNION SELECT 'nice' UNION SELECT 'fine')
SELECT @word = (SELECT TOP 1 a FROM good ORDER BY NEWID());

-- Implementation

SELECT SUBSTRING(UPPER(@word), 1, 1) + SUBSTRING(@word, 2, LEN(@word));
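An equivalent one-liner (a sketch, assuming the same non-empty `@word` variable) uses STUFF to replace just the first character in place:

```sql
-- Replace character 1 of @word with its upper-cased version
SELECT STUFF(@word, 1, 1, UPPER(LEFT(@word, 1)));
```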


Request.Browser.IsMobileDevice & Tablet Devices

Hi,

The problem with Request.Browser.IsMobileDevice is that it will classify a tablet as a mobile device.

If you need to discern between mobile, tablet, and desktop, then use the following extension method.

using System.Text.RegularExpressions;
using System.Web;

public static class HttpBrowserCapabilitiesBaseExtensions
{
 public static bool IsMobileNotTablet(this HttpBrowserCapabilitiesBase browser)
 {
  // The raw user agent string is stored under the empty-string key in the capabilities dictionary.
  var userAgent = browser.Capabilities[""].ToString();
  var r = new Regex("ipad|(android(?!.*mobile))|xoom|sch-i800|playbook|tablet|kindle|nexus|silk", RegexOptions.IgnoreCase);
  var isTablet = r.IsMatch(userAgent) && browser.IsMobileDevice;
  return !isTablet && browser.IsMobileDevice;
 }
}

 

Using it is easy: just import the namespace and call the method.

using Web.Public.Helpers;
...
if (Request.Browser.IsMobileNotTablet() && !User.IsSubscribed)
....

 

 

 

 

JWPlayer .NET Client – Management API

Hi,

We recently migrated all our content from Ooyala to the JWPlayer hosted platform. We needed a .NET tool to perform the following:

  1. Create videos from remote sources
  2. Update videos later, e.g. thumbnails
  3. List videos in a custom application
  4. Other cool ideas that come up after adopting a new tool

Currently the JWPlayer Management API only has PHP and Python 2.7 clients as examples for batch migration, so I have created an open-source JWPlayer.NET library. Please feel free to improve on it, e.g. make it fluent.

Get Source Code (JWPlayer.NET)

To use it in Visual Studio:

  1. Open the Package Manager Console
  2. Run – Install-Package JWPlayer.NET

Below is how you can use the API as of 29/06/2017.

Create Video

var jw = new Jw(ApiKey, ApiSecret);
var parameters = new Dictionary<string, string>
{
    {"sourceurl", "http://www.sample-videos.com/video/mp4/720/big_buck_bunny_720p_1mb.mp4"},
    {"sourceformat", "mp4"},
    {"sourcetype", "url"},
    {"title", "Test"},
    {"description", "Test Video"},
    {"tags", "foo, bar"},
    {"custom.LegacyId", Guid.NewGuid().ToString()}
};
var result = jw.CreateVideo(parameters);

Update Video

var jw = new Jw(ApiKey, ApiSecret);
var parameters = new Dictionary<string, string>
{
    {"video_key", "QxbbRMMP"},
    {"title", "Test Updated"},
    {"tags", "foo, bar, updated"},
};
var result = jw.UpdateVideo(parameters);

List Videos

var jw = new Jw(ApiKey, ApiSecret);
var basicVideoSearch = new BasicVideoSearch {Search = "Foo", StartDate = DateTime.UtcNow.AddDays(-100)};
var result = jw.ListVideos(basicVideoSearch);
var count = result.Videos.Count;

 

Batch Migrations

Ensure you stick to the rate limit of 60 calls per minute, or ask your JWPlayer account manager to increase it.

for (var i = 0; i < lines.Count; i++)
{
   jw.CreateVideo(parameters); // build parameters from lines[i]
   Thread.Sleep(TimeSpan.FromSeconds(1)); // keeps us under 60 calls per minute
}

Clone JWPlayer.NET

Calculate Wind Direction and Wind Speed from Wind Vectors

Wind Vectors have a U (Eastward) and V (Northward) Component.

Below is C# code to calculate the resultant wind speed and direction.

public struct Wind
{
    public Wind(float speed, float direction)
    {
        Speed = speed;
        Direction = direction;
    }
    public float Speed { get; set; }
    public float Direction { get; set; }
}

public static Wind CalculateWindSpeedAndDirection(float u, float v)
{
    // Treat near-zero vectors as calm to avoid a meaningless direction
    if (Math.Abs(u) < 0.001 && Math.Abs(v) < 0.001)
        return new Wind(0, 0);
    const double radianToDegree = 180 / Math.PI;

    return new Wind(
        Convert.ToSingle(Math.Sqrt(Math.Pow(u, 2) + Math.Pow(v, 2))),
        // +180 converts the "blowing towards" angle into the meteorological "coming from" direction
        Convert.ToSingle(Math.Atan2(u, v) * radianToDegree + 180));
}
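The conversion the code implements, in the meteorological convention (the direction the wind is coming from, measured clockwise from north), is:

```latex
\text{speed} = \sqrt{u^2 + v^2},
\qquad
\text{direction} = \operatorname{atan2}(u,\,v) \cdot \frac{180}{\pi} + 180
\]
```

For example, a pure easterly component (u = 10, v = 0) gives atan2(10, 0) = 90°, so the direction is 270° (wind coming from the west), matching the test cases below.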

Test Code

[TestCase(-8.748f, 7.157f, 11.303f, 129.29f)]
[TestCase(-4.641f, -3.049f, 5.553f, 56.696f)]
[TestCase(10f, 0f, 10f, 270f)]
[TestCase(-10f, 0f, 10f, 90f)]
[TestCase(0f, 10f, 10f, 180f)]
[TestCase(0f, -10f, 10f, 360f)]
[TestCase(0f, 0f, 0f, 0f)]
[TestCase(0.001f, 0.001f, 0.0014142f, 225f)]
public void CanConvertWindVectorComponents(float u, float v, float expectedWindSpeed, float expectedWindDirection)
{
    var result = MetraWaveForecastLocationModel.CalculateWindSpeedAndDirection(u, v);
    Assert.AreEqual(Math.Round(expectedWindDirection, 2), Math.Round(result.Direction, 2));
    Assert.AreEqual(Math.Round(expectedWindSpeed, 2), Math.Round(result.Speed, 2));
}

Detect if User is idle

Scenario

You are running a Windows Forms application that lives in the system tray, and you have a few notifications you would like to show the user.

However, what good are notifications if the user is on the loo? She will not see them.

Solution

Run a timer that detects when the user is active on the machine, and then show the notification or perform whatever task you like.

Below is sample code that will do this for you. Of course, in a production environment you would use a more robust timer or an event subscription service.

I have tested this by using other applications whilst the program monitors my input, and it is safe to say it works across all my applications, even when the screen is locked.

So you might want to handle the case where the screen is locked but the user is moving the mouse. However, that is an edge case that is unlikely to matter.

MSDN mentions that GetLastInputInfo is not system-wide; however, on my Windows 10 machine it does appear to be.

using System;
using System.Runtime.InteropServices;
using System.Timers;

namespace MyMonitor
{
    class Program
    {
        private static Timer _userActivityTimer;
        static void Main()
        {
            _userActivityTimer = new Timer(500);
            _userActivityTimer.Elapsed += OnTimerElapsed;
            _userActivityTimer.AutoReset = true;
            _userActivityTimer.Enabled = true;
            Console.WriteLine("Press the Enter key to exit the program at any time... ");
            Console.ReadLine();
        }

        private static void OnTimerElapsed(object sender, ElapsedEventArgs e)
        {
            Console.WriteLine($"Last Input: {LastInput.ToShortTimeString()}");
            // TotalSeconds, not Seconds, so idle periods over a minute are not truncated
            Console.WriteLine($"Idle for: {IdleTime.TotalSeconds:F0} Seconds");
        }

        [DllImport("user32.dll", SetLastError = false)]
        private static extern bool GetLastInputInfo(ref Lastinputinfo plii);
        private static readonly DateTime SystemStartup = DateTime.Now.AddMilliseconds(-Environment.TickCount);

        [StructLayout(LayoutKind.Sequential)]
        private struct Lastinputinfo
        {
            public uint cbSize;
            public readonly int dwTime;
        }

        public static DateTime LastInput => SystemStartup.AddMilliseconds(LastInputTicks);

        public static TimeSpan IdleTime => DateTime.Now.Subtract(LastInput);

        private static int LastInputTicks
        {
            get
            {
                var lii = new Lastinputinfo {cbSize = (uint) Marshal.SizeOf(typeof(Lastinputinfo))};
                GetLastInputInfo(ref lii);
                return lii.dwTime;
            }
        }
    }
}


Migrating to AWS CodeCommit

Hosting your code in AWS CodeCommit has several advantages, the main one being seamless integration with AWS CodeDeploy and AWS CodePipeline.

I use SourceTree as my repo tool of choice, with Git/Bitbucket as the back end.

If you have a team of many developers and want to slowly migrate your code to an AWS CodeCommit Git repo, you can set up your SourceTree config to push to both repos.

1. You will need an SSH-2 RSA 2048-bit public/private key pair; this is what AWS supports. Once you have generated/imported the keys into AWS, you can import the same key into your GitHub or Bitbucket account. Then just add them to Pageant. Read Setting Up AWS CodeCommit.

2. In AWS, when you import your SSH key for an IAM user, it will give you an SSH Key ID. Write down this SSH Key ID; the password for it will be the private-key password you generated with PuTTYgen. Always use a password for your private key file.

AWS IAM User SSH Key


3. In SourceTree, go to Tools/Options and set the private key to your AWS SSH key. Remember we added this key to Bitbucket and GitHub, so we can now use the AWS SSH key pair for both repositories.

SourceTree Private Key


The last part is to configure your local repo to push to both repositories until you are happy with the migration.

4. In SourceTree, select your repository, and go to Repository/Repository Settings. Then add a new origin. It will be in this format: ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyCoolApp

5. When it prompts for a username and password, enter your SSH Key ID and SSH private key password

Source Tree Remote

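If you prefer the command line to SourceTree, steps 4 and 5 can be sketched like this (the repo URL and local path are illustrative):

```shell
# Create a throwaway repo to demonstrate adding CodeCommit as a second remote
git init /tmp/demo-repo

# Step 4: add the CodeCommit remote alongside the existing origin
git -C /tmp/demo-repo remote add codecommit ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyCoolApp

# List the registered remotes; pushing will prompt for the SSH Key ID and private-key password (step 5)
git -C /tmp/demo-repo remote -v
```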

Once you are happy with the migration, you can then set AWS CodeCommit as the default remote by ticking the checkbox. You may need to first rename the original remote “origin” to “old”, then set AWS as the default 🙂

My only gripe with CodeCommit is that there are no built-in hooks to deploy directly to S3. This would be great for static assets.

#CodeCommit #AWS

 

Getting started with Amazon SQS

The data and metadata inflection point

We are nearing an inflection point regarding technology and data. Data is gold. In the next 30 years you will have a life-logger app and many connected/smart devices. You will be able to rewind back in time and listen to or look at a conversation you had with a random guy you met at a party.

You will punch into the Amazon Life Timeline service “Go to when I met a man wearing a shirt with SpongeBob on it”. You wait a few seconds and a video appears of the exact moment you met the guy at the party.

Our lives will be transparent and our egos will be lowered, because we as a species are happy to share and be transparent. We will tip towards transparency and away from privacy. Why? Because if we do not, we create a stumbling block in our technological evolution. Data is king, and this is the fundamental reason why companies pay so much for apps.

Google is not a search engine; it is going to be the most powerful artificial-intelligence service offering in the world. One day you will use Google’s AI to optimize your life. It will track how you drive, when you sleep, and when you come home; and by doing so, it will have enough data collection points to run AI routines on your data and provide you with awesome benefits.

Likewise, Amazon Machine Learning services will be our AI friend.

As programmers, we are going to need to store data about data, or data about the bits that we send to the internet… metadata.

One way to do so is by asynchronous messaging. You will of course have an App/Smart Device that needs to send data or metadata about user behavior.

Queue Sender

Your app can send small data messages to an Amazon queue in the cloud as the user consumes your service.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Web;
using Amazon;
using Amazon.Runtime;
using Amazon.SQS;
using Amazon.SQS.Model;
using Amazon.Util;

namespace Wangle.Queue.Client
{
    public class AwsClient : IQueueClient
    {
        private AmazonSQSClient _client;
        private string defaultQueueUrl;

        public void Initialize(string url)
        {
            ProfileManager.RegisterProfile("Wangle", "myaccessKey", "mysecretkey");
            var amazonSqsConfig = new AmazonSQSConfig { ServiceURL = "http://sqs.us-east-1.amazonaws.com" };
            _client = new AmazonSQSClient(ProfileManager.GetAWSCredentials("Wangle"), amazonSqsConfig);
            defaultQueueUrl = url;
        }

        public void SendMessage(string message)
        {
            var sendMessageRequest = new SendMessageRequest
            {
                QueueUrl = defaultQueueUrl,
                MessageBody = $"{message} + {DateTimeOffset.UtcNow}" // Unicode only!
            };

            _client.SendMessageAsync(sendMessageRequest);

        }

        public IList<string> ReceiveMessage()
        {
            var data = new List<string>();
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                QueueUrl = defaultQueueUrl,
                MaxNumberOfMessages = 10,
            };

            var receiveMessageResponse = _client.ReceiveMessage(receiveMessageRequest);

            receiveMessageResponse.Messages.ForEach(m =>
            {
                var receiptHandle = m.ReceiptHandle;
                data.Add(m.Body);
                _client.DeleteMessageAsync(defaultQueueUrl, receiptHandle);
            });
            return data;
        }
    }
}

Cloud Data Retention Receiver

Once the message is in the queue in the cloud, you will have a cloud service process it and store it in a big-data service. Below is the code to get the message off the queue.

class Program
    {
        static void Main(string[] args)
        {
            //Fake a worker role service running in Amazon Cloud that processes data storage.
            Console.WriteLine("Fetching data logs from queue to prepare for governance...");
            var queueClient = new AwsClient();
            queueClient.Initialize(Settings.Default.QueueURL);

            while (true)
            {
                queueClient.ReceiveMessage().ToList().ForEach(m => Console.WriteLine(m));

                //ToDo: Store the audit data in AmazonS3 or Big Data service: User, Url, DateTimeUtc, SourceIP, DestIP
                Thread.Sleep(TimeSpan.FromSeconds(2));
            }

        }
    }

 

Summary

So that is basically the code you need. Of course, you will need to install the AWS SDK from NuGet.

This should get you going in the right direction when you need to send data to the cloud over the wire for later processing.

I am sure Amazon SQS will be used to send data asynchronously: Fitbit information, how long you sleep, how you drive your car, and much more. Soon all our devices will be smart, e.g. a cooking pot with a chip, your shirt with a chip…

See you soon in VR land…

PACS Server IntelePACS 4-2-1-P394 – Medical Connections – Inaccurate Image Counts

Hi,

When querying PACS at the study level, it is possible to get incorrect image counts due to bugs in the IntelePACS software. I think it is caused by studies with mixed modalities.

I have written a .NET library to alleviate this issue, where the image counts can be correctly retrieved at the series level.

We need to query at the series level and just pass an empty string for the studyUid (to force it).

https://gist.github.com/Romiko/4dbba2d5ea37a99b368b


private void SetQueryResultsSeries()
{
    var seriesCount = Data.Count;
    if (seriesCount > 0)
    {
        for (var i = 0; i < seriesCount; i++)
        {
            var imagesInSeriesCount = Data[i][Keyword.NumberOfSeriesRelatedInstances];
            if (imagesInSeriesCount.ExistsWithValue)
                ImageCount += int.Parse(imagesInSeriesCount.Value.ToString());
        }
        SetIntrinsicProperties();
    }
    else
    {
        ImageCount = 0;
    }
}

Then to use my library, we just do this:

var query = new DicomQueryManager("AE_Romiko", "MYMasterPacsServer", "5000", "MyAccessionNumber", "").BuildMasterSeriesLevel();
// Notice the empty string above, which forces enumeration at the series level so I can get the actual series collections.
query.Find();
var imageCount = query.ImageCount;


Bound a Windows Form to WorkingArea on multiple display setup

If you want to ensure a Windows form cannot be dragged out of the viewable area on a multi-monitor setup, and you also want the option to dock it to the monitor it is actively on, this code might be helpful. It has a tolerance of 50%, so up to half of the form may be out of the viewable area.

You might think you do not need to enumerate the screens, but you do if you want to dock the form, especially if some screens are portrait and others landscape.

You can optimize the code by storing the left-most and right-most screens in a global static location.

 private void DockFormIfOutOfViewableArea()
        {
            var widthTolerance = Location.X + (Width / 2);
            var heightTolerance = Location.Y + (Height / 2);
            // ToList() is needed because IEnumerable<T> has no ForEach extension
            Screen.AllScreens.OrderBy(r => r.WorkingArea.X).ToList().ForEach(screen =>
            {
                if (!IsOnThisScreen(screen)) return;

                if (heightTolerance > screen.WorkingArea.Height)
                    Location = new Point(screen.WorkingArea.X, screen.Bounds.Height - Height + screen.Bounds.Y);
                if (Location.Y < screen.WorkingArea.Y )
                    Location = new Point(screen.WorkingArea.X, screen.WorkingArea.Y);
            });

            if (widthTolerance > SystemInformation.VirtualScreen.Right)
            {
                var closestScreen = Screen.AllScreens.OrderBy(r => r.WorkingArea.X).Last();
                Location = new Point(closestScreen.Bounds.Right - Width, closestScreen.Bounds.Height - Height + closestScreen.Bounds.Y);
            }

            if (widthTolerance < SystemInformation.VirtualScreen.Left)
            {
                var closestScreen = Screen.AllScreens.OrderBy(r => r.WorkingArea.X).First();
                Location = new Point(closestScreen.Bounds.Left, closestScreen.Bounds.Height - Height + closestScreen.Bounds.Y);
            }
        }

Download

Nancy Rest Services – GZIP IT!

When dealing with large JSON result sets, say larger than 1 MB or so, it is often worthwhile to compress the data before sending it to your client application.

The first step is to add zipping to the Nancy pipeline: we check that the content type of the response is JSON and that the client accepts GZIP encoding.

public static void AddGZip(IPipelines pipelines)
{
    pipelines.AfterRequest += ctx =>
    {
        if (!ctx.Response.ContentType.Contains("application/json") ||
            !ctx.Request.Headers.AcceptEncoding.Any(x => x.Contains("gzip"))) return;

        var jsonData = new MemoryStream();
        ctx.Response.Contents.Invoke(jsonData);
        jsonData.Position = 0;
        if (jsonData.Length < 4096)
        {
            ctx.Response.Contents = s =>
            {
                jsonData.CopyTo(s);
                s.Flush();
            };
        }
        else
        {
            ctx.Response.Headers["Content-Encoding"] = "gzip";
            ctx.Response.Contents = s =>
            {
                var gzip = new GZipStream(s, CompressionMode.Compress, true);
                jsonData.CopyTo(gzip);
                gzip.Close();
            };
        }
    };
}

Now, in the CLIENT application calling the REST service, we need to add a header to the request so the server knows it supports GZIP:
Accept-Encoding: gzip

So, we add this code to the client.

Request sent by client.

protected WebRequest AddHeaders(WebRequest request)
{
    request.Headers.Add("Accept-Encoding", "gzip");
    return request;
}

Response processed by client.

if (((HttpWebResponse)response).ContentEncoding == "gzip"
    && response.ContentType.Contains("application/json"))
{
    var gzip = new GZipStream(response.GetResponseStream(), CompressionMode.Decompress, true);
    var readerUnzipped = new StreamReader(gzip);
    Response = Deserialize(readerUnzipped);
}
else
{
    Response = Deserialize(reader);
}

Implement whatever deserializer you want, and then make sure you close the stream with reader.Close() 😉

Server No GZIP

GZIP-With Compression