Detect if a User is Idle


You are running a Windows Forms application that lives in the system tray, and you have a few notifications that you would like to show the user.

However, what good are notifications if the user is on the loo? She will not see them.


The solution is to run a timer that detects when the user is active on the machine, and only then show the notification or run whatever other task you would like to provide.

Below is sample code that will do this for you. Of course, for your production environment you would use a proper timer of some sort or an event subscription service.

I have tested this by using other applications whilst the program monitors my input, and it is safe to say it works across all my applications, even when the screen is locked.

So you might want to handle the case where the screen is locked but the user is still moving the mouse; however, that is an edge case that is unlikely to happen.

I know MSDN mentions that GetLastInputInfo is not system-wide; however, on my Windows 10 machine it does seem to be system-wide.

using System;
using System.Runtime.InteropServices;
using System.Timers;

namespace MyMonitor
{
    class Program
    {
        private static Timer _userActivityTimer;

        static void Main()
        {
            _userActivityTimer = new Timer(500);
            _userActivityTimer.Elapsed += OnTimerElapsed;
            _userActivityTimer.AutoReset = true;
            _userActivityTimer.Enabled = true;
            Console.WriteLine("Press the Enter key to exit the program at any time... ");
            Console.ReadLine(); // keep the process alive until Enter is pressed
        }

        private static void OnTimerElapsed(object sender, ElapsedEventArgs e)
        {
            Console.WriteLine($"Last Input: {LastInput.ToShortTimeString()}");
            // TotalSeconds, not Seconds, so idle periods longer than a minute report correctly.
            Console.WriteLine($"Idle for: {IdleTime.TotalSeconds:F0} Seconds");
        }

        [DllImport("user32.dll", SetLastError = false)]
        private static extern bool GetLastInputInfo(ref Lastinputinfo plii);

        private static readonly DateTime SystemStartup = DateTime.Now.AddMilliseconds(-Environment.TickCount);

        [StructLayout(LayoutKind.Sequential)]
        private struct Lastinputinfo
        {
            public uint cbSize;
            public uint dwTime; // tick count of the last input event
        }

        public static DateTime LastInput => SystemStartup.AddMilliseconds(LastInputTicks);

        public static TimeSpan IdleTime => DateTime.Now.Subtract(LastInput);

        private static uint LastInputTicks
        {
            get
            {
                var lii = new Lastinputinfo { cbSize = (uint)Marshal.SizeOf(typeof(Lastinputinfo)) };
                GetLastInputInfo(ref lii);
                return lii.dwTime;
            }
        }
    }
}
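
For the notification scenario in the introduction, here is a minimal sketch of how the timer callback could wait for the user to return before showing anything. This is a hedged variant of OnTimerElapsed above; the 60-second threshold and ShowPendingNotifications() are hypothetical placeholders for your own logic.

private static bool _wasIdle;

private static void OnTimerElapsed(object sender, ElapsedEventArgs e)
{
    var isIdle = IdleTime.TotalSeconds > 60; // hypothetical threshold: 60s without input counts as idle
    if (_wasIdle && !isIdle)
    {
        // The user has just returned to the machine, so the notification will actually be seen.
        ShowPendingNotifications(); // hypothetical method on your tray application
    }
    _wasIdle = isIdle;
}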


Run Azure CLI inside Docker on a MacBook Pro

Laptop Setup

Bootcamp with Windows on one partition and OSX on another.

A great way to manage your Windows Azure environment is to use a Docker container instead of PowerShell.
If you are new to automating your infrastructure and code, then this is a great way to start on the right foot from day one.

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

Install Docker

Grab the latest version of Docker here.

After Installing Docker
1. Use Boot Camp to boot back into OSX.
2. In OSX, restart the machine (a warm restart).
3. Hold the Option key down to boot back into Windows.

The above looks like a waste of time; however, it enables virtualisation in the BIOS of the MacBook, since OSX does this by default and Windows will not. So it is a small hack to get virtualisation working via a warm reboot from OSX back into Windows.

Grab a Docker Virtual Image with Azure CLI

Run the following command:

docker run -it microsoft/azure-cli


The above command will connect to the Docker repository and download the image to run in a container. This is basically a virtualised environment from which you can now manage your Windows Azure environment.

Run the Azure Command Line Interface (CLI)

The azure-cli image ships with the CLI preinstalled, so run the following command:

azure help

Look carefully at the image below. PowerShell was used to run Docker. However, once I run Docker, look at my prompt (root@a28193f1320d:/#). We are now in a Linux container (a28193f1320d), and we have total control over our Azure resources from the command line.

Docker in Windows

Now the Linux guys will start having some respect for us Windows guys. We are entering an age where we need to be agnostic about technology.

Below, we are running a full-blown Linux kernel from a Windows PowerShell prompt.


What is even cooler is that we are using a Linux VM to manage the Azure environment, so we get awesome tools for free.
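
To make that concrete, a first session inside the container looks something like the sketch below. This is hedged: the exact subcommands depend on your CLI version, and no real resource names are assumed.

azure config mode arm   # switch the CLI into Azure Resource Manager mode
azure login             # authenticate against your subscription
azure vm list           # list virtual machines, straight from the Linux container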


Good Habits

By using Docker with the Azure Command Line Interface, you put yourself in a good position by automating all your infrastructure and code requirements.

You will use the portal less and less to manage and deploy your Azure resources such as virtual machines, blobs and permissions.

Note that we are now using ARM – Azure Resource Manager. Some features in ARM will not be compatible with older (classic) Azure deployments. Read more about ARM.

You can deploy, update, or delete all the resources for your solution in a single, coordinated operation. You use a template for deployment and that template can work for different environments such as testing, staging, and production. Resource Manager provides security, auditing, and tagging features to help you manage your resources after deployment.
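
As a hedged example of what a template-based deployment looks like from this CLI (the resource group, location, template and deployment names below are hypothetical):

azure group create MyResourceGroup westus
azure group deployment create -f azuredeploy.json -g MyResourceGroup -n MyDeployment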

CLI Reference

Commands:
  account          Commands to manage your account information and publish settings
  acs              Commands to manage your container service
  ad               Commands to display Active Directory objects
  appserviceplan   Commands to manage your Azure app service plans
  availset         Commands to manage your availability sets
  batch            Commands to manage your Batch objects
  cdn              Commands to manage Azure Content Delivery Network (CDN)
  config           Commands to manage your local settings
  datalake         Commands to manage your Data Lake objects
  feature          Commands to manage your features
  group            Commands to manage your resource groups
  hdinsight        Commands to manage HDInsight clusters and jobs
  insights         Commands related to monitoring Insights (events, alert rules, autoscale settings, metrics)
  iothub           Commands to manage your Azure IoT hubs
  keyvault         Commands to manage key vault instances in the Azure Key Vault service
  lab              Commands to manage your DevTest Labs
  location         Commands to get the available locations
  network          Commands to manage network resources
  policy           Commands to manage your policies on ARM resources
  powerbi          Commands to manage your Azure Power BI Embedded Workspace Collections
  provider         Commands to manage resource provider registrations
  quotas           Command to view your aggregated Azure quotas
  rediscache       Commands to manage your Azure Redis Cache(s)
  resource         Commands to manage your resources
  role             Commands to manage role definitions
  servermanagement Commands to manage Azure Server Management resources
  storage          Commands to manage your Storage objects
  tag              Commands to manage your resource manager tags
  usage            Command to view your aggregated Azure usage data
  vm               Commands to manage your virtual machines
  vmss             Commands to manage your virtual machine scale sets
  vmssvm           Commands to manage your virtual machine scale set VMs
  webapp           Commands to manage your Azure web apps

Options:
  -h, --help       output usage information
  -v, --version    output the application version

Current Mode: arm (Azure Resource Management)

Migrating to AWS CodeCommit

Hosting your code in AWS CodeCommit has several advantages, the main one being seamless integration with AWS CodeDeploy and AWS CodePipeline.

I use SourceTree as my repo tool of choice, with Git/Bitbucket as the back end.

If you have a team of many developers and want to slowly migrate your code to an AWS CodeCommit Git repo, you can set up your SourceTree config to push to both repos.

1. You will need an SSH-2 RSA 2048-bit public/private key pair, as this is what AWS supports. Once you have generated/imported the keys into AWS, you can import the same key into your GitHub or Bitbucket account. Then just add them to Pageant. Read Setting Up AWS CodeCommit.

2. In AWS, when you import your SSH key for an IAM user, it will give you an SSH Key ID. Write down this SSH Key ID; the password for it will be the private key password you generated with PuTTYgen. Always use a password for your private key file.



3. In SourceTree, go to Tools/Options and set the private key to your AWS SSH key. Remember, we added this key to Bitbucket and GitHub, so we can now use the AWS SSH key pair for both repositories.

SourceTree Private Key

The last part is to configure your local repo to push to both repositories until you are happy with the migration.

4. In SourceTree, select your repository and go to Repository/Repository Settings. Then add a new origin. It will be in this format: ssh:// (a command-line sketch of the same setup follows below).

5. When it prompts for a username and password, enter your SSH Key ID and your SSH private key password.

Source Tree Remote
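
If you prefer the command line, the equivalent dual-remote setup is sketched below. The region and repository name are hypothetical; CodeCommit SSH remotes follow the ssh://git-codecommit.<region>.amazonaws.com/v1/repos/<repo> pattern.

git remote add aws ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyRepo
git push origin master   # existing Bitbucket/GitHub remote
git push aws master      # mirror the same branch to CodeCommit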

Once you are happy with the migration, you can set AWS CodeCommit as the default remote by ticking the checkbox. You may need to first rename the original remote “origin” to “old”, then set AWS as the default 🙂

My only gripe with CodeCommit is that there are no built-in hooks to deploy directly to S3. This would be great for static assets.

#CodeCommit #AWS


Getting started with Amazon SQS

The data and metadata inflection point

We are nearing an inflection point regarding technology and data. Data is basically gold. In the next 30 years you will have a life-logger app and many connected/smart devices. You will be able to rewind back in time and listen to or look at a conversation you had with a random guy you met at a party.

You will punch into the Amazon Life Timeline service, “Go to when I met a man wearing a shirt with SpongeBob on it”. You wait a few seconds and a video appears of the exact moment you met the guy at the party.

Our lives will be transparent and our egos lowered, because we as a species are happy to share and be transparent. We will tip towards transparency and away from privacy. Why? Because if we do not, we create a stumbling block in our technological evolution. Data is king, and this is the fundamental reason why companies pay so much for apps.

Google is not a search engine; it is going to be the most powerful artificial-intelligence service offering in the world. One day you will use Google’s AI to optimise your life. It will track how you drive, when you sleep and when you come home; and by doing so it will have enough data collection points to run AI routines on your data and provide you with awesome benefits.

Likewise, Amazon Machine Learning services will be our AI friend.

As programmers, we are going to need to store data about data, or data about the bits that we send to the internet… metadata.

One way to do so is asynchronous messaging. You will of course have an app/smart device that needs to send data or metadata about user behaviour.

Queue Sender

Your app can send small data messages to an Amazon queue in the cloud while a user is consuming your service.

using System;
using System.Collections.Generic;
using Amazon.SQS;
using Amazon.SQS.Model;
using Amazon.Util;

namespace Wangle.Queue.Client
{
    // IQueueClient is this project's own interface (Initialize/SendMessage/ReceiveMessage).
    public class AwsClient : IQueueClient
    {
        private AmazonSQSClient _client;
        private string defaultQueueUrl;

        public void Initialize(string url)
        {
            ProfileManager.RegisterProfile("Wangle", "myaccessKey", "mysecretkey");
            var amazonSqsConfig = new AmazonSQSConfig { ServiceURL = "" };
            _client = new AmazonSQSClient(ProfileManager.GetAWSCredentials("Wangle"), amazonSqsConfig);
            defaultQueueUrl = url;
        }

        public void SendMessage(string message)
        {
            var sendMessageRequest = new SendMessageRequest
            {
                QueueUrl = defaultQueueUrl,
                MessageBody = $"{message} + {DateTimeOffset.UtcNow}" // Unicode only!
            };
            _client.SendMessage(sendMessageRequest);
        }

        public IList<string> ReceiveMessage()
        {
            var data = new List<string>();
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                QueueUrl = defaultQueueUrl,
                MaxNumberOfMessages = 10,
            };
            var receiveMessageResponse = _client.ReceiveMessage(receiveMessageRequest);

            receiveMessageResponse.Messages.ForEach(m =>
            {
                data.Add(m.Body); // capture the payload before deleting it from the queue
                _client.DeleteMessageAsync(defaultQueueUrl, m.ReceiptHandle);
            });
            return data;
        }
    }
}

Cloud Data Retention Receiver

Once the message is in the message queue in the cloud, you will have a service in the cloud process the message and store it in a big data service. Below is the code to get the message off the queue.

// Requires: using System; using System.Linq;
class Program
{
    static void Main(string[] args)
    {
        // Fake a worker role service running in the Amazon cloud that processes data storage.
        Console.WriteLine("Fetching data logs from queue to prepare for governance...");
        var queueClient = new AwsClient();
        queueClient.Initialize("https://sqs.my-region.amazonaws.com/my-queue"); // hypothetical queue URL

        while (true)
        {
            queueClient.ReceiveMessage().ToList().ForEach(m => Console.WriteLine(m));

            // TODO: Store the audit data in Amazon S3 or a big data service: User, Url, DateTimeUtc, SourceIP, DestIP
        }
    }
}


So that is basically the code you need. Of course, you will need to install the AWS SDK from NuGet.
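
From the Package Manager Console, that is something like the sketch below; package names vary by SDK generation, so treat these as assumptions rather than gospel.

Install-Package AWSSDK       # the classic monolithic SDK this code was written against
Install-Package AWSSDK.SQS   # the modular v3-era equivalent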

This should get you going in the right direction when you need to send data to the cloud over the wire for later processing.

I am sure Amazon SQS will be used to send data asynchronously for Fitbit information, how long you sleep, how you drive your car and much more. Soon all our devices will be smart, e.g. a cooking pot with a chip, your shirt with a chip…

See you soon in VR land…

Next generation VPN software – Wangle

I have been working in IT for over 20 years. One of the issues we often have is:

How do we provide encryption while also saving data usage on mobile phone data bundle contracts?

How do we balance anonymous behaviour without compromising speed, thus improving security?

Finally the solution will be released within the next few days, possibly before 15 June 2016.
The software is a mobile app called Wangle.

Watch the video, showcasing how the software works.

Many mobile phone apps have huge flaws when communicating with their back-end servers. Wangle will provide another layer of security to ensure traffic cannot be intercepted by hackers.

What is even more exciting is that Wangle will reduce your data consumption, thus saving on mobile phone bills. Exactly by how much remains unknown until the app is used by the masses; however, during beta testing, PDF download speeds were 15-20 times faster.

Sign up for early discounts at Wangle’s website, with a 50% discount on the first year’s annual subscription.

The Chip in Cisco Routers and Hubs
This is the cream on top of the cake. Wangle is developing a chip with a tech company in Israel that will be installed into networking routers, hubs and multiplexers, so that encryption, compression and security can be applied at Layer 3/Layer 4 of the OSI networking model, which will be far superior to running it at the application level. This means we are looking at the first commercial VPN that provides Layer 3/4 features at the hardware level. This will blow away current SSL accelerators, which focus only on SSL encryption speeds.

Wangle is looking to be the next disruptive technology, one that will be leveraged by huge data consumers like Netflix.

I encourage those in South Africa and Australia to download the app as soon as it is available on the Google Play store and the Apple App Store. In a future blog post, I will be posting my own performance testing results.

With the advent of cloud computing, we can now combine software- and hardware-level features that leverage content delivery networks such as Amazon S3 and Amazon CloudFront.

Wangle is not magic; it is real. When a user first downloads a video via the Wangle VPN, it gets redistributed to all Wangle endpoints around the world.

When another user requests the same video, instead of serving it from the original location, Wangle will serve it to its VPN user from its own cloud distribution, which is much closer to the user.

Catch you all in a few weeks’ time, when I will post stats on the Wangle VPN and publish the performance reports. I will also do an in-depth analysis of the Wangle infrastructure via packet-level interception, using test encryption keys in Wireshark. This will allow us to look at which cloud technologies and endpoints are being leveraged to deliver content to the user faster, secure and compressed, with no compromise.

PACS Server IntelePACS 4-2-1-P394 – Medical Connections – Inaccurate Image Counts


When querying PACS at the study level, it is possible to get incorrect image counts due to bugs in the IntelePACS software. I think it is caused by studies with mixed modalities.

I have written a .NET library to alleviate this issue, in which the image counts can be correctly retrieved at the series level.

We need to query at the series level and just pass an empty string for the studyUid (to force it):

private void SetQueryResultsSeries()
{
    var seriesCount = Data.Count;
    if (seriesCount > 0)
    {
        // Sum NumberOfSeriesRelatedInstances across every series for an accurate study image count.
        for (var i = 0; i < seriesCount; i++)
        {
            var imagesInSeriesCount = Data[i][Keyword.NumberOfSeriesRelatedInstances];
            if (imagesInSeriesCount.ExistsWithValue)
                ImageCount += int.Parse(imagesInSeriesCount.Value.ToString());
        }
    }
    else
    {
        ImageCount = 0;
    }
}

Then, to use my library, we just do this:

var query = new DicomQueryManager("AE_Romiko", "MYMasterPacsServer", "5000", "MyAccessionNumber", "").BuildMasterSeriesLevel();
// Notice the empty string above to force study-level enumeration so I can get the actual series collections.
var imageCount = query.ImageCount;

Bind a Windows Form to the WorkingArea on a multiple-display setup

If you want to ensure a Windows form cannot be dragged out of the viewable area on a multiple-monitor setup, and also want the option to dock it to the monitor it was actively on, then this code might be helpful. It also has a tolerance level of 50%, so up to 50% of the form can be out of the viewable area.

You might think you do not need to enumerate the screens, but you do if you want to dock the form, especially if some screens are portrait and others are landscape.

You can optimise the code by storing the left-most and right-most screens in a global static location.

// Belongs inside a Form; requires System.Linq, System.Drawing and System.Windows.Forms.
private void DockFormIfOutOfViewableArea()
{
    var widthTolerance = Location.X + (Width / 2);
    var heightTolerance = Location.Y + (Height / 2);

    foreach (var screen in Screen.AllScreens.OrderBy(r => r.WorkingArea.X))
    {
        if (!IsOnThisScreen(screen)) continue;

        // Dock to the bottom of the current screen if more than half the form is below its working area.
        if (heightTolerance > screen.WorkingArea.Height)
            Location = new Point(screen.WorkingArea.X, screen.Bounds.Height - Height + screen.Bounds.Y);

        // Snap back down if the form has been dragged above the working area.
        if (Location.Y < screen.WorkingArea.Y)
            Location = new Point(screen.WorkingArea.X, screen.WorkingArea.Y);
    }

    // More than half the form is off the right edge of the virtual desktop: dock to the right-most screen.
    if (widthTolerance > SystemInformation.VirtualScreen.Right)
    {
        var closestScreen = Screen.AllScreens.OrderBy(r => r.WorkingArea.X).Last();
        Location = new Point(closestScreen.Bounds.Right - Width, closestScreen.Bounds.Height - Height + closestScreen.Bounds.Y);
    }

    // More than half the form is off the left edge: dock to the left-most screen.
    if (widthTolerance < SystemInformation.VirtualScreen.Left)
    {
        var closestScreen = Screen.AllScreens.OrderBy(r => r.WorkingArea.X).First();
        Location = new Point(closestScreen.Bounds.Left, closestScreen.Bounds.Height - Height + closestScreen.Bounds.Y);
    }
}
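
The code above calls IsOnThisScreen, which was not shown in the original snippet. A minimal sketch of that helper, assuming "on this screen" means the form's bounds intersect the screen's working area:

private bool IsOnThisScreen(Screen screen)
{
    // The form counts as being on a screen when its rectangle overlaps that screen's working area.
    return screen.WorkingArea.IntersectsWith(new Rectangle(Location, Size));
}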


Nancy Rest Services – GZIP IT!

When dealing with JSON data and large result sets, say larger than 1 MB or so, it is definitely feasible in many situations to zip the data before sending it to your client application.

The first step is to add zipping to the pipeline that Nancy uses. We check that the content type of the response is JSON, and that the client accepts the GZIP encoding.

// Requires: System.IO, System.IO.Compression, System.Linq, Nancy.Bootstrapper.
public static void AddGZip(IPipelines pipelines)
{
    pipelines.AfterRequest += ctx =>
    {
        if (!ctx.Response.ContentType.Contains("application/json") ||
            !ctx.Request.Headers.AcceptEncoding.Any(x => x.Contains("gzip"))) return;
        // Buffer the original response body so it can be measured and rewritten.
        var jsonData = new MemoryStream();
        ctx.Response.Contents.Invoke(jsonData);
        jsonData.Position = 0;
        if (jsonData.Length < 4096) // too small to be worth compressing
        {
            ctx.Response.Contents = s => jsonData.CopyTo(s);
            return;
        }
        ctx.Response.Headers["Content-Encoding"] = "gzip";
        ctx.Response.Contents = s =>
        {
            using (var gzip = new GZipStream(s, CompressionMode.Compress, true))
                jsonData.CopyTo(gzip);
        };
    };
}
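
Assuming a standard Nancy bootstrapper, the hook above would be registered at application startup; a minimal sketch:

public class Bootstrapper : DefaultNancyBootstrapper
{
    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
        base.ApplicationStartup(container, pipelines);
        AddGZip(pipelines); // wire the compression hook defined above into the pipeline
    }
}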

Perfect. Now what we also want to do, in the CLIENT application calling the REST service, is add a header to the request so the server knows the client supports GZIP:
Accept-Encoding: gzip

So, we add this code to the client.

Request sent by client.

protected WebRequest AddHeaders(WebRequest request)
{
    request.Headers.Add("Accept-Encoding", "gzip");
    return request;
}

Response processed by client.

if (((HttpWebResponse)response).ContentEncoding == "gzip"
    && response.ContentType.Contains("application/json"))
{
    var gzip = new GZipStream(response.GetResponseStream(), CompressionMode.Decompress, true);
    var readerUnzipped = new StreamReader(gzip);
    Response = Deserialize(readerUnzipped);
}
else
{
    Response = Deserialize(reader);
}

Implement whatever deserialiser you want, and then make sure you close the stream and the reader 😉

Server No GZIP

GZIP-With Compression

NServiceBus-ServiceMatrix Saga To Saga Request/Response Pattern


This document explains how to set up a saga-to-saga request/response pattern. Bus.Reply is used, as ReplyToOriginator is not supported. We will simulate a service receiving an order and sending it to an order service, which then uses a request/response pattern to process payment.

  • We will create 3 endpoints: OrderReceiver, OrderSaga and PaymentSaga
  • We will configure an initial order to be sent from OrderReceiver to OrderSaga
  • We will then configure OrderSaga to send a request/response to PaymentSaga
  • Note, I have message correlation as well. This is needed for ReplyToOriginator to work between sagas from a timeout.

Saga-to-saga request/response supports Bus.Reply; however, do not use it in timeout handlers, as it will try to reply to the timeout queue.

ReplyToOriginator also works when you need to call the originating saga; however, there were issues on the bug report. You can get it working by doing two things:

  1. Ensure the calling saga outlives the called saga (create a long timeout that marks the saga completed in the calling saga, and a shorter timeout in the called saga that calls MarkAsComplete)
  2. Add this code to ProcessOrderHandlerConfigureHowToFindSaga.cs

You can download the source code at:


I have just enabled NuGet package restore 🙂

TimeOuts and responding to the original calling (originator) saga

Never use Bus.Reply within a timeout handler in the saga, as it will try to reply to the timeout queue; Bus.Reply always responds to the source of the LAST incoming message.

To get ReplyToOriginator working between sagas, you need to:

  1. Ensure the calling saga (ProcessOrder) lives LONGER than the called saga (Payment), by using timeouts in both sagas
  2. You need to add a correlation



This is the message pattern with timeouts and a polling pattern, which you can run indefinitely if ever needed.


Create three endpoints

  1. Click New Endpoint
  2. Create an OrderReceiver as an NServiceBus host. Do the same for OrderSaga and PaymentSaga

  3. Your canvas will look like this


Send a message from OrderReceiver to OrderSaga

So now we will simulate a service (OrderReceiver) that receives orders on a back-end system and then sends them to our OrderSaga for long-running transaction processing

  1. Click the OrderReceiver and click “Send Command”
  2. Set the service name to Orders (the domain) and the command to ProcessOrder

    Your canvas should look like this

  3. Click the undeployed component and select Deploy
  4. Select OrderSaga as the destination and click Done

    Your canvas should look like this, with a bit of interior design 🙂

  5. Edit the ProcessOrder message and add the following properties

  6. Open the ProcessOrderSender.cs file under the Orders folder; we will configure it to send 3 orders by implementing IWantToRunWhenBusStartsAndStops (a sketch follows below)


Note that I am not editing the Infrastructure folder, as that contains generated code.
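
The sender code itself was a screenshot in the original post. A hedged sketch for step 6, assuming NServiceBus 4.x-era APIs and a hypothetical OrderId property on ProcessOrder:

public partial class ProcessOrderSender : IWantToRunWhenBusStartsAndStops
{
    public IBus Bus { get; set; }

    public void Start()
    {
        // Send three orders as soon as the endpoint starts.
        for (var i = 1; i <= 3; i++)
            Bus.Send(new ProcessOrder { OrderId = i });
    }

    public void Stop() { }
}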

  1. Build the solution


Configure the OrderSaga as a Saga and Message Correlation

Great, so now we have the minimum needed to convert the OrderSaga endpoint into a real saga, as a SAGA MUST have a message to handle; in this case, ProcessOrder.

  1. Click the ProcessOrderHandler and click “Convert To Saga”
  2. This will open the ProcessOrderHandlerConfigureHowToFindSaga.cs file. Build the solution, so that partial classes are generated.
  3. We want to correlate order messages to the correct saga instance based on the OrderId. So here we will set the properties on how to find it. Add the following code (sketched after this list):
  4. Open the file ProcessOrderHandlerSagaData.cs and add the OrderId property, set to Unique, as this is how the saga will correlate messages to the correct instance.
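
The code for steps 3 and 4 was shown as screenshots. A hedged reconstruction, assuming the ServiceMatrix-generated partial classes and NServiceBus 4.x mapping APIs:

// Step 3: ProcessOrderHandlerConfigureHowToFindSaga.cs
public partial class ProcessOrderHandlerConfigureHowToFindSaga
{
    protected override void ConfigureHowToFindSaga()
    {
        // Route incoming ProcessOrder messages to the saga instance with the same OrderId.
        ConfigureMapping<ProcessOrder>(message => message.OrderId)
            .ToSaga(sagaData => sagaData.OrderId);
    }
}

// Step 4: ProcessOrderHandlerSagaData.cs
public partial class ProcessOrderHandlerSagaData
{
    [Unique] // lets the persistence layer correlate messages to the correct saga instance
    public virtual int OrderId { get; set; }
}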

Excellent, so now we have correlation established between the OrderReceiver and the OrderSaga. If the saga ever receives order updates for the same order, the infrastructure will know which instance to send the ProcessOrder command to.



Configure Saga To Saga Command

Here we will configure the OrderSaga to send a message to the PaymentSaga; then we will convert the PaymentSaga endpoint into a saga.

  1. Click the ProcessOrderHandler and click SendCommand
  2. Name the command ProcessOrderPayment

  3. Click Copy to Clipboard. This will open the ProcessOrderHandler.cs file; paste the code there.
  4. Open the Canvas, it should look like this
  5. Click the ProcessOrderPaymentHandler, and Click Deploy.
  6. Select the PaymentSaga, as this will handle the ProcessOrderPayment request.
  7. Your canvas will look like this. BUILD SOLUTION
  8. Let’s CONVERT the PaymentSaga endpoint to a saga, as we now have the minimum needed to do this!
    WARNING: NEVER convert an endpoint to a saga unless it has at least one message handler, or else it cannot implement the IAmStartedByMessages interface. You would have to wire it up manually, since the infrastructure code generator will not know how.
  9. Click ProcessOrderPaymentHandler and click Convert to Saga…
  10. This will open ProcessOrderPaymentHandlerConfigureHowToFindSaga.cs
  11. Build the solution to auto-generate the saga partial classes and infrastructure
  12. We want the payment instance to correlate to the correct OrderId, so add this (sketched below):
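
Again, the snippet was a screenshot; a hedged sketch of the payment-side mapping, under the same assumptions as before:

// ProcessOrderPaymentHandlerConfigureHowToFindSaga.cs
public partial class ProcessOrderPaymentHandlerConfigureHowToFindSaga
{
    protected override void ConfigureHowToFindSaga()
    {
        ConfigureMapping<ProcessOrderPayment>(message => message.OrderId)
            .ToSaga(sagaData => sagaData.OrderId);
    }
}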



    Build the solution! We added properties so ConfigureHowToFindSaga will compile 🙂


Configure Saga To Saga Response and Bus.Reply

  1. Open the ServiceMatrix canvas and confirm your canvas looks like this

    Notice that the icon for sagas has a circle in it with a square.
  2. Click the ProcessOrderPaymentHandler in the payment Saga and click Reply with Message…
  3. Click Ok
  4. Copy the code to Clipboard
  5. Click the Copy To Clipboard; note that the mouse pointer will show as busy, however you can still click Copy to Clipboard.
  6. This will open ProcessOrderPaymentHandler.cs; paste the code here. Put in a Thread.Sleep to simulate a credit card payment (both handlers are sketched after this list).

  7. Your canvas will look like this now
  8. Add the following code to the ProcessOrderHandler.cs file (also sketched below)
  9. Build the solution
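
The pasted code in steps 6 and 8 was shown as screenshots. Hedged sketches of the two handlers, with hypothetical message properties:

// Step 6, PaymentSaga: handle the request and reply to the calling saga.
public void Handle(ProcessOrderPayment message)
{
    Thread.Sleep(5000); // simulate a credit card payment
    Bus.Reply(new ProcessOrderPaymentResponse { OrderId = message.OrderId });
}

// Step 8, OrderSaga: handle the response coming back from the PaymentSaga.
public void Handle(ProcessOrderPaymentResponse message)
{
    Console.WriteLine("Payment processed for order {0}", message.OrderId);
}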


Testing the solution

Follow the steps below in order, so the MSMQ queues are created in the correct order and you avoid race conditions the first time it starts.

  1. Start the SagaToSagaRequestResponse.PaymentSaga
  2. Start the SagaToSagaRequestResponse.OrderSaga
  3. Start the SagaToSagaRequestResponse.OrderReceiver
    You should see


In ServiceInsight we see:

Source Code

You can download the source code at:


I have just enabled NuGet package restore 🙂

Kill/Terminate processes for the currently logged-on user

Below is code that you can use to terminate processes that belong to the currently logged-on user. It uses WMI and will work for all authenticated users, even non-administrators.

You can use excludeMe to exclude a process, e.g. if you are running a program and want to guarantee a single instance on the machine, but you must not kill the current program:

KillProcesses(Process.GetCurrentProcess().ProcessName + ".exe", false, Process.GetCurrentProcess());

The above will kill all other processes with the same name, except the calling program.

Download Source Code

e.g. KillProcesses("chrome.exe", true, null);

// Requires a reference to System.Management; usings: System.Diagnostics, System.Management, System.Security.Principal.
public static void KillProcesses(string processName, bool currentUserOnly, Process excludeMe = null)
{
    var processes = new ManagementObjectSearcher(
        string.Format("SELECT * FROM Win32_Process WHERE Name='{0}'", processName)).Get();
    foreach (var o in processes)
    {
        var process = (ManagementObject)o;
        var processId = int.Parse(process["ProcessId"].ToString());
        if (process["ExecutablePath"] == null) continue;
        if (excludeMe != null && processId == excludeMe.Id) continue;

        // Ask WMI who owns the process; GetOwner returns the user name and domain.
        var ownerInfo = new object[2];
        process.InvokeMethod("GetOwner", ownerInfo);
        var owner = (string)ownerInfo[0];

        if (currentUserOnly)
        {
            var windowsIdentity = WindowsIdentity.GetCurrent();
            if (windowsIdentity == null) return;
            var currentUser = windowsIdentity.Name;
            if (currentUser.Contains(owner))
                process.InvokeMethod("Terminate", null);
        }
        else
        {
            process.InvokeMethod("Terminate", null);
        }
    }
}
Download Source Code