Author: Romiko Derbynew

Welcome

Hi Folks,

I have just started out on a home project for XNA Development. The game is going to be a 2-D platformer, which will be a cross between Wonder Boy and Super Mario. It will also be utilizing the Farseer physics engine, which I hope will add an extra dimension of play to the game.

What is the game called? The title for now is The Dark Yogi. Players will need to brace themselves for a fun-filled adventure in a world of fireballs, magic carpets and exotic worlds, some real, some in the world of consciousness... or is it? Every level will have its own unique set of rules that the Dark Yogi will need to master in order to conquer himself.

Helping seemingly innocent people in the game could cause more harm than good in the world of The Dark Yogi! Stay tuned.

Regards

Optimising SQL Server Disk subsystem – 64K

Hi Folks,

Nope this is not a job offer for 64K 🙂

Today I was having a chat with our SQL developer, as we are looking for a good DBA. One of the topics that came up was the B-Tree (Balanced Tree) structure of the SQL storage engine. I mentioned that the ideal DBA should at least understand how 64 KB extents, each containing 8 x 8 KB pages, can cause issues if your disk partitions are not aligned.

Anyway, we got into a heated debate about whether a DBA should know these principles. I thought to myself that if I was going to win this discussion, I would need to pull out something everyone has heard of and build my defence on it, and so I did!

If we run DBCC SHOWCONTIG for a table, we will find that the first two fields are the number of pages scanned and the number of extents scanned.

Now theoretically, for a reporting system, the fill factor will be high, since index references are mostly appended to the B-Tree structure, so let's assume we have a reporting system that never gets a page split.

Then for an index with a 100% fill factor, we should find that (number of pages / 8) = number of extents.

This is of course the starting point for how SQL Server detects logical fragmentation, whilst taking the fill factor into account.
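To make the arithmetic concrete, here is a small sketch (Python, purely illustrative; not SQL Server code) of the pages-to-extents relationship:

```python
import math

PAGE_SIZE_KB = 8          # SQL Server page size
PAGES_PER_EXTENT = 8      # 8 x 8 KB pages = one 64 KB extent

def expected_extents(page_count: int) -> int:
    """Minimum number of extents needed to hold page_count pages."""
    return math.ceil(page_count / PAGES_PER_EXTENT)

def extent_fragmentation(page_count: int, actual_extents: int) -> float:
    """Ratio of actual to ideal extents; 1.0 means perfectly packed."""
    return actual_extents / expected_extents(page_count)

# A 100% fill-factor index with 800 pages should occupy exactly 100 extents.
print(expected_extents(800))            # 100
print(extent_fragmentation(800, 150))   # 1.5 -> fragmented
```

If SHOWCONTIG reports far more extents than pages / 8, the index is spread over more extents than it needs, which is exactly the fragmentation signal described above.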

Of course all DBAs should know this, so I applied it to disk partition alignment and finally convinced my team mate 🙂 After all, a good DBA will alter index fill factors for reporting, OLAP and transactional systems respectively.

So, what about disk cluster alignment? Well, it is important to optimise the disk for the way SQL Server stores data: in 8 KB pages and 64 KB extents.

Microsoft discusses this in a lot of detail on MSDN, and any DBA wanting to optimise the disk subsystem should read it, especially if you forked out a lot of money for the disks, such as a RAID 10 configuration.

So, what is the cluster size of a SQL-optimised disk? It should of course be 64 KB!

Here is an example in which the F: drive is created on disk 3, aligned with an offset of 1,024 KB, and formatted with a file allocation unit (cluster) size of 64 KB.

However, aligning the partition is also a crucial factor in disk usage. We must start the partition at an offset where we can ensure a 64 KB extent FITS SNUGLY into a 64 KB cluster and does not overflow into another one.
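Here is the alignment argument itself as a sketch (again Python, illustrative only, assuming extents are allocated at 64 KB multiples relative to the partition start): with a 1,024 KB partition offset and 64 KB clusters, every extent lands exactly on a cluster boundary, whereas the old 31.5 KB (63-sector) default offset makes extents straddle two clusters.

```python
KB = 1024
EXTENT = 64 * KB  # one SQL Server extent = 64 KB

def extent_straddles_cluster(partition_offset_bytes, cluster_size_bytes, extent_start):
    """True if a 64 KB extent starting at extent_start (relative to the
    partition, assumed a multiple of 64 KB) crosses a cluster boundary."""
    absolute = partition_offset_bytes + extent_start
    return (absolute % cluster_size_bytes) + EXTENT > cluster_size_bytes

# Aligned: 1,024 KB offset, 64 KB clusters -> the extent fits snugly.
print(extent_straddles_cluster(1024 * KB, 64 * KB, 0))  # False
# Misaligned legacy offset of 63 sectors (31.5 KB) -> the extent overflows.
print(extent_straddles_cluster(63 * 512, 64 * KB, 0))   # True
```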

Command Line Syntax

C:\>diskpart
Microsoft DiskPart version 6.0.6001
Copyright (C) 1999-2007 Microsoft Corporation.
On computer: DevMachine

DISKPART> select disk 3
Disk 3 is now the selected disk.
DISKPART> create partition primary align=1024
DiskPart succeeded in creating the specified partition.
DISKPART> assign letter=F
DiskPart successfully assigned the drive letter or mount point.
DISKPART> format fs=ntfs unit=64K label="MyFastDisk" nowait


Want to know more?

The article can be read here.

http://msdn.microsoft.com/en-us/library/dd758814.aspx

I hope this will help any DBAs out there when planning a SQL-dedicated disk subsystem, and put the company's money in hardware to optimal use.

BizTalk: Dynamic SFTP, SOAP adapter and synchronised delivery notification within an orchestration using the Custom SQL send adapter

Hi Folks,

I recently had to build a dispatching system, where a message would be received from our web service. The message would then be saved to the database with:

Stage: IMPORT and Status: SUCCESS

A standard BizTalk SQL Receive location would pull the messages and change the message in the SQL table to:

Stage: BizTalk and Status: Processing

Envelopes and De-batching

The above pull is done in batch mode; this means that SQL will pull a batch of records using the FOR XML clause. However, when the message is built in SQL, there is NO root node. The SQL adapter can inject one; see the image below for how it does it:

image

Now the messages are DEBATCHED by using an ENVELOPE schema which recognises the XML data. How is this enforced? The receive pipeline on the SQL adapter is set to XMLReceive.

There is a lot of information out there regarding DEBATCHING and using envelopes. However, when you create an XSD to represent the XML data, just remember to ensure the Envelope property is set:

image

This property is found on the Schema Design in Visual Studio.

So, now the messages are de-batched into the BizTalk message box.
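To illustrate the idea (this is not BizTalk code, just a hedged Python sketch of what the adapter and pipeline do): the FOR XML fragments get a root node injected, and the envelope then lets them be split into one message per record.

```python
import xml.etree.ElementTree as ET

# Rowset fragments as SQL's FOR XML clause might emit them: no root node,
# so this string is not a well-formed XML document on its own.
fragments = '<Record><Id>1</Id></Record><Record><Id>2</Id></Record>'

# The SQL adapter injects a root node (here named after the envelope
# schema, BATCHRecords) so the stream becomes well-formed XML...
batch = '<BATCHRecords>' + fragments + '</BATCHRecords>'

# ...and the envelope lets the XMLReceive pipeline de-batch it into
# one message per Record.
root = ET.fromstring(batch)
messages = [ET.tostring(rec, encoding='unicode') for rec in root.findall('Record')]
print(len(messages))  # 2
```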

An orchestration will have a DIRECT receive port with a filter that subscribes to this type of de-batched record. So if the schema for the batch was BATCHRecords.xsd, containing a collection of Records.xsd schemas, then the orchestration will subscribe to the schema type of Records.xsd. I always define the individual record schema first and then the batch schema; in the batch schema I just IMPORT the individual record schema.

image

Ok, we are getting diverted, but understanding envelopes is crucial in BizTalk when pulling data from SQL.

Orchestration

Now that the message is in the orchestration we need to have the following goals in mind:

  • If the message succeeds, part of the message needs to be extracted using XPath and saved to the SQL database by calling a stored procedure with parameters that accept an XML document and other variables. The stage and status are then set to END, SUCCESS.
  • The default SQL adapter cannot do this very well, so I have a custom SQL adapter to do it; you can read about it here:

    Romiko-Custom SQL Adapter

  • If the message fails, then the original message is sent to SQL and Stage and Status are set to END, FAILED.
  • If the message fails being dispatched to an external SOAP or SFTP destination, then the orchestration must handle the delivery failure and MUST NOT suspend the SFTP or SOAP send port. The message is flagged in SQL as failed, so it can always be reset from the database using an interface for the help desk.
  • A way to store the ACKNOWLEDGEMENT from the SOAP or SFTP port so we can tell whether the delivery failed. The orchestration MUST NOT send to SFTP or SOAP asynchronously and then carry on executing shapes further down the line.
  • The custom SQL adapter can be asynchronous, since this is an in-house part; if it fails, we want to be able to resume the port. The SQL aspect MUST be resumable, else the help desk gets no answer and sees messages stuck with stage BIZTALK and status PROCESSING.

 

Now, let's dispel some of the gossip out there about delivery notification.

Delivery notification will always give you a NACK or ACK back when a message fails or succeeds respectively when sending to a port, provided the port is enabled for delivery notification.

Now let me get rid of some incorrect information out there:

  1. You DO NOT need a Long Running/ATOMIC Transaction scope to use Delivery Notification
  2. You CAN use delivery notification on Dynamic Ports 🙂

I say this since I see books and blogs saying that your delivery notification send shape must be in a scope that is set to Long Running or Atomic; not true at all…

So what do you need?

  1. A scope (Transaction = NONE), with the last shape being the send to the SOAP or SFTP port
  2. The SOAP and SFTP port must be setup for delivery notification
  3. The scope must be set to Synchronised
  4. A Delivery Notification exception handler, a SOAP handler (for SOAP failures) and a System.Exception handler (NOT General Exception; I have no clue why that is the default in the BizTalk orchestration designer. Always use a System.Exception type, so you have access to the exception variable to get the message, e.g. Trace.WriteLine(ex.Message))
  5. The ports enabled for delivery notification can be dynamic; however, in the message construct you must set the retry to 0. There is no point using retries if you are flagging an external system with a FAIL.

ANY shapes after the send shape IN THE SCOPE will execute asynchronously, so ensure those shapes do not depend on the delivery; perhaps a debug trace or something you like to have for debugging. Another nice thing to have is a global boolean variable that records whether a NACK was received. You can then check this value AFTER the scope containing the send shape: if an ACK was received, processing continues; else you know it was a NACK and can safely call the SQL code to send the message to SQL with a failure.

After the exception handling is where the above logic for the SQL persistence can be put, also in its own scope, one that is not set to synchronised and has no delivery notification, so we can resume the SQL send port, but never resume SFTP or SOAP.

Below are screen images of the orchestration I used for this.

image

image

We subscribe using promoted properties of STAGE and STATUS.

Above, we get the message and then detect whether it is for SOAP or SFTP sending using some distinguished fields.

Here is the if expression:

Record.MetaData.Protocol.Type == "SOAP"

Ok, so now let's check out the SFTP side, and then we will do the SOAP.

First thing to notice is that BizTalk does not support SFTP out of the box; however, some cool developers have made an adapter, and you can download it here:

Blogical.Shared.Adapters.SFTP:

http://sftpadapter.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=19784

Ok, so here is the control flow:

image

 

image

Another thing: SFTP is a request-only model, not request/response like SOAP. You will see in the SOAP branch later that I extract the response from the SOAP response message and use that, whereas with SFTP I use the original request for database storage.

In the message construct we ensure we set the port retry to 0:

Here is the code for the message construct that sets the SFTP properties dynamically. I used distinguished fields to access the metadata from the message; basically, when SQL built the message, it did a join with a control table to find the destination SFTP location depending on the data pulled 🙂

//Romiko van de dronker 14/01/2010
// Set Port Properties
Port_Dynamic_OneWay(Microsoft.XLANGs.BaseTypes.Address) = "SFTP://" + Record.MetaData.Protocol.FTPProperties.SSHHost + ":" + Record.MetaData.Protocol.FTPProperties.SSHPort + "/";

// Assign the ProcessSalesLead data to the new output message
ProcessSalesLeadMessage = xpath(Record, "/*[local-name()='Record' and namespace-uri()='http://Romiko.Dispatch.Schemas']/*[local-name()='Data' and namespace-uri()='http://Romiko.Dispatch.Schemas']/*[local-name()='ProcessSalesLead' and namespace-uri()='http://www.starstandard.org/STAR/5']");
ProcessSalesLeadMessage(BTS.RetryCount) = 0; // To support NACK from port.

// Set SFTP properties
ProcessSalesLeadMessage(BTS.OutboundTransportType) = "SFTP";
ProcessSalesLeadMessage(Blogical.Shared.Adapters.Sftp.Schemas.host) = Record.MetaData.Protocol.FTPProperties.SSHHost;
ProcessSalesLeadMessage(Blogical.Shared.Adapters.Sftp.Schemas.portno) = System.Int16.Parse(Record.MetaData.Protocol.FTPProperties.SSHPort);
ProcessSalesLeadMessage(Blogical.Shared.Adapters.Sftp.Schemas.user) = Record.MetaData.Protocol.FTPProperties.SSHUser;
ProcessSalesLeadMessage(Blogical.Shared.Adapters.Sftp.Schemas.password) = Record.MetaData.Protocol.FTPProperties.SSHPassword;
ProcessSalesLeadMessage(Blogical.Shared.Adapters.Sftp.Schemas.remotepath) = Record.MetaData.Protocol.FTPProperties.SSHRemotePath;
ProcessSalesLeadMessage(Blogical.Shared.Adapters.Sftp.Schemas.remotefile) = .Helper.OrchestrationHelper.BuildDateTime(System.DateTime.Now) + "." + Record.MetaData.Protocol.FTPProperties.SSHRemoteFile;

// Connection limit
ProcessSalesLeadMessage(Blogical.Shared.Adapters.Sftp.Schemas.connectionlimit) = System.Int16.Parse(Record.MetaData.Protocol.FTPProperties.SSHConnectionLimit);

// Optional settings are not yet supported, as null values can exist; need a helper class that skips assigning values when they are null.

// Debug trace
ProcessSalesLeadMessage(Blogical.Shared.Adapters.Sftp.Schemas.trace) = System.Boolean.Parse(Record.MetaData.Protocol.FTPProperties.SSHDebugTrace);

Ok, now that the message is constructed, all we do is create a scope that is SYNCHRONISED and ensure the SFTP port delivery notification is on:

image

image

So above we set Delivery Notification to Transmitted.

Now anything to do with SFTP goes into this scope and NOTHING else.

Notice my exception types; I have a Delivery Failure and a System.Exception handler:

image

image

In the exception, you can set the state to Failed 🙂

SendSuccess = false;
Stage = "END";
Status = "FAILED";
Error = "SFTP Failure, " + e.Message;
System.Diagnostics.Trace.WriteLine("BIZTALK: Throwing a Delivery Notification Error: " + e.Message);

After this we have a condition statement to check whether the delivery was a success; if it was, it executes the SQL branch that marks the record a success, else it executes the SQL that fails the record.

Notice this is a NEW SCOPE: (UPDATE SQL) transaction = none

Why? Well, delivery notification only works within its own scope, so a new scope is needed for anything that MUST continue afterwards. It can be a nested scope like ours or the orchestration's default scope; I like a nested scope for neatness and control.

if expression is:

SendSuccess == true

If this is the case, I create a new message type and send the data to SQL. This is done in the last message construct:

//Romiko van de dronker 14/01/2010
OutRecord = Record;
OutRecord(MMIT.Dispatch.Schemas.Stage) = Stage;
OutRecord(MMIT.Dispatch.Schemas.Status) = Status;

 

Also, with the CUSTOM SQL adapter, I send this data to the stored procedure; note I send XML directly to SQL 🙂

image 

For the SOAP aspect, the same logic applies. However there are some catches.

The URL address set in the orchestration is in this format SOAP://myaddress.com/mypage.asmx.

Now, when using a static adapter, this is not a problem, since in the adapter you set the URL in http:// format.

There is no shortcut; you need to make a custom pipeline component to handle this. The adapter will understand the SOAP format, and the pipeline will then do the magic and replace SOAP with HTTP.
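The rewrite rule itself is simple URI work. Here is a hedged sketch of the idea in Python (illustration only; in BizTalk this lives in the C# pipeline component):

```python
from urllib.parse import urlparse

def rewrite_soap_address(address: str) -> str:
    """Rewrite a SOAP:// address to http:// or https://.
    Mirrors the pipeline's rule of thumb: plain HTTP on ports 80/8080
    (or no explicit port), HTTPS for anything else."""
    uri = urlparse(address.replace('SOAP://', 'http://', 1))
    scheme = 'http' if uri.port in (None, 80, 8080) else 'https'
    netloc = uri.hostname + (f':{uri.port}' if uri.port else '')
    query = f'?{uri.query}' if uri.query else ''
    return f'{scheme}://{netloc}{uri.path}{query}'

print(rewrite_soap_address('SOAP://myaddress.com/mypage.asmx'))
# http://myaddress.com/mypage.asmx
```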

Another thing: you can even set the ASSEMBLY and proxy class for the SOAP adapter dynamically. The message that came into the orchestration has this information, and I set it at runtime 🙂 Another cool feature is that I have a SOAP header and a SOAP body, so my SOAP proxy has two method parameters; this means that my multipart message MUST match the parameter types of the SOAP proxy. Let's see how this works:

The MULTIPART message looks like this:

For my SOAP proxy method parameters, I use type XmlDocument for both, so the multipart message that I will send to the SOAP adapter is this:

image

The header uses a schema from:

<Security xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
  <UsernameToken>
    <Username>string</Username>
    <Password>string</Password>
  </UsernameToken>
  <KeyIdentifier>string</KeyIdentifier>
</Security>

It is a common schema from OASIS.

So what we are going to do is pass the message body and the header to the SOAP adapter, and the proxy will do the rest.

Here is my proxy class code:

using System;
using System.Net;
using System.Web.Services.Protocols;
using System.Xml;
using Romiko.Proxies.MMITGMDSConsumer;
using System.Xml.Serialization;

namespace Romiko.Proxies.Suppliers
{
    [System.Web.Services.WebServiceBindingAttribute(Name = "StarBodToMMITSoap", Namespace = "http://Romiko.WebServices/")]
    public partial class MMITConsumer : System.Web.Services.Protocols.SoapHttpClientProtocol
    {

        [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://Romiko.WebServices/SubmitStarProcessSalesLead", Use = System.Web.Services.Description.SoapBindingUse.Literal, ParameterStyle = System.Web.Services.Protocols.SoapParameterStyle.Bare)]
        public AcknowledgeSalesLeadType Dispatch(XmlDocument MessagePartBody, XmlDocument MessagePartSOAPHeader)
        {
            try
            {
                System.Diagnostics.Trace.WriteLine("ServiceProxy:Dispatch to " + this.Url);
                System.Diagnostics.Trace.WriteLine("ServiceProxy:Message in is: " + MessagePartBody.OuterXml);
                System.Diagnostics.Trace.WriteLine("ServiceProxy:Header is: " + MessagePartSOAPHeader.OuterXml);

                StarBodToMMIT ws = new StarBodToMMIT();
                XmlSerializer X = new XmlSerializer(typeof(ProcessSalesLeadType), "http://www.starstandard.org/STAR/5");
                ProcessSalesLeadType processSalesLead = (ProcessSalesLeadType)X.Deserialize(new XmlNodeReader(MessagePartBody));

                ws.Url = this.Url;

                XmlSerializer Y= new XmlSerializer(typeof(SecuritySoapHeader), "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd");
                ws.Security = (SecuritySoapHeader)Y.Deserialize(new XmlNodeReader(MessagePartSOAPHeader));

                AcknowledgeSalesLeadType response = ws.SubmitStarProcessSalesLead(processSalesLead);           
                return response;

            }
            catch (WebException)
            {
                // To use the retry functionality of the send port
                // This exception type includes time outs and service unavailable from the web service.
                throw;
            }
            catch (Exception e)
            {
                throw new Exception("Problems occurred in the proxy: " + e + e.InnerException);
            }
        }
    }
}

BizTalk will automatically detect the message parts, and their names are resolved to the proxy parameter names; as you can see, they are the same as in the multipart message 🙂

 

Ok, so that part was easy. How do you tell the SOAP adapter to do this dynamically? The magic is in the message construct:

//Romiko van de dronker 15/01/2010

// Assign the ProcessSaleslead data to the new output message
ProcessSalesLeadMessageRequest.MessagePartBody = xpath(DRONKYRecord, "/*[local-name()='DRONKYRecord' and namespace-uri()='http://Romiko.DRONKY.Dispatch.Schemas']/*[local-name()='Data' and namespace-uri()='http://Romiko.DRONKY.Dispatch.Schemas']/*[local-name()='ProcessSalesLead' and namespace-uri()='http://www.starstandard.org/STAR/5']");

//Assign SOAP Header to Context
ProcessSalesLeadMessageRequest.MessagePartSOAPHeader = Romiko.DRONKY.Helper.OrchestrationHelper.BuildSoapHeader(DRONKYRecord.MetaData.Protocol.SOAPProperties.SOAPHeaderUserName, DRONKYRecord.MetaData.Protocol.SOAPProperties.SOAPHeaderPassword);

// Retry Count MUST be 0 for NACK management, also we in a synchronisation scope.
ProcessSalesLeadMessageRequest(BTS.RetryCount) = 0; // To support NACK from port.

// Set Port Properties
Port_Dynamic_SOAP(Microsoft.XLANGs.BaseTypes.Address) = DRONKYRecord.MetaData.Protocol.SOAPProperties.SOAPURI;
System.Diagnostics.Trace.WriteLine("URI is: " + DRONKYRecord.MetaData.Protocol.SOAPProperties.SOAPURI);

// Set SOAP Proxy assembly
ProcessSalesLeadMessageRequest(SOAP.AssemblyName) = DRONKYRecord.MetaData.Protocol.SOAPProperties.SOAPAssemblyName;
ProcessSalesLeadMessageRequest(SOAP.TypeName) = DRONKYRecord.MetaData.Protocol.SOAPProperties.SOAPTypeName;
ProcessSalesLeadMessageRequest(SOAP.MethodName) = DRONKYRecord.MetaData.Protocol.SOAPProperties.SOAPMethodName;

//Set Misc SOAP settings
ProcessSalesLeadMessageRequest(SOAP.ClientConnectionTimeout) = System.Int32.Parse(DRONKYRecord.MetaData.Protocol.SOAPProperties.SOAPClientConnectionTimeOut);
ProcessSalesLeadMessageRequest(SOAP.UseSoap12) = System.Boolean.Parse(DRONKYRecord.MetaData.Protocol.SOAPProperties.SOAPUseSoap12);

You can see above that I dynamically set the assembly, class and method to call.

Cool!

Ok, the magic part I spoke of for the pipeline. I assume you can build pipeline components, so here is the code of the Execute method and the custom logic:

First set the logical port properties for the SEND Pipeline (ENCODER)

image

Ok, here is the partial code of the pipeline:

Note the part that replaces the SOAP string with HTTP; I do some extra checks so that ports 80 and 8080 go over plain HTTP and everything else defaults to HTTPS 🙂

public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
        {
            try
            {
                System.Diagnostics.Trace.WriteLine("Pipeline:Execute entry");

                string address = (string)inmsg.Context.Read("OutboundTransportLocation", "http://schemas.microsoft.com/BizTalk/2003/system-properties");
                string method = (string)inmsg.Context.Read("MethodName", "http://schemas.microsoft.com/BizTalk/2003/soap-properties");

                System.Diagnostics.Trace.WriteLine("Pipeline:Updating context");

                Uri uri = new Uri(address);

                string protocol = "https";

                if (uri.Port == 80 || uri.Port == 8080)
                    protocol = "http";

                //Send over HTTP or HTTPS
                string addr = protocol + "://" + uri.Host + ":" + uri.Port + uri.PathAndQuery;
                System.Diagnostics.Trace.WriteLine("HTTP url is: " + addr);
                //Here is the magic where we can now convert SOAP://…. to HTTP://…. and we are ready for some full contact generic web service proxy calls!
                inmsg.Context.Write("OutboundTransportLocation", "http://schemas.microsoft.com/BizTalk/2003/system-properties", addr);
                inmsg.Context.Write("Operation", "http://schemas.microsoft.com/BizTalk/2003/system-properties", method);

                System.Diagnostics.Trace.WriteLine("Pipeline:End, Number of messages parts: " + inmsg.PartCount);
                System.Diagnostics.Trace.WriteLine("Pipeline:End, Body Part name:" + inmsg.BodyPartName);
                return inmsg;
            }
            finally
            {
                System.Diagnostics.Trace.WriteLine("Pipeline:Execute exit");
            }
        }

 

Well, I hope this has shown how powerful orchestration development can be; you can CONTROL how an orchestration resumes by using scopes to control persistence points. In my case, I want the orchestration to NEVER suspend and always succeed, handling failures by updating status states in the SQL database for support to deal with 🙂

As soon as I get time, I will work on WCF and other cool features of BizTalk; I am itching to upgrade to BizTalk 2009! Which I am doing right now 🙂

Good Luck.

Web Services Security

Hi Folks,

When it comes to SOAP security, I use the following schema for the SOAP Header:

http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd

 

It is really easy to use. Just add this class to your web services project:

——–SecuritySoapHeader.cs

using System.Xml.Serialization;
using System.Web.Services.Protocols;

namespace Romiko.Hello.WebServices
{

    [XmlType(Namespace = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd")]
    [XmlRoot("Security", Namespace = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd", IsNullable = false)]
    public class SecuritySoapHeader : SoapHeader
    {
        public UsernameTokenType UsernameToken;
        public class UsernameTokenType
        {
            public string Username;
            public string Password;
            public string KeyIdentifier;
        }

    }
}

——————————-

Then just add a public property on the webservice.asmx.cs code-behind like:

public class HelloWorld : System.Web.Services.WebService
{
    private SecuritySoapHeader security;
    public SecuritySoapHeader Security
    {
        get { return security; }
        set { security = value; }
    }

    [WebMethod(Description = "Hello System.", EnableSession = false, TransactionOption = TransactionOption.Disabled, BufferResponse = false, MessageName = "", CacheDuration = 0)]
    [SoapDocumentMethod(ParameterStyle = SoapParameterStyle.Bare)]
    [SoapHeader("Security", Direction = SoapHeaderDirection.In)]
    public Data_Records PullRecords(int BatchSize)
    {
        ValidateInputCredentials();
        return GetDataRecord(BatchSize);
    }

Notice I do not use everything from the OASIS XSD, just a username, password and GUID key 🙂
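For reference, here is a small Python sketch (illustrative only, with made-up credential values) that builds the same Security header shape the class above serialises to, using the element names from the WSSE schema:

```python
import xml.etree.ElementTree as ET

WSSE = 'http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd'
ET.register_namespace('', WSSE)  # serialise WSSE as the default namespace

def build_security_header(username, password, key):
    """Build <Security><UsernameToken>...</UsernameToken><KeyIdentifier/></Security>."""
    sec = ET.Element(f'{{{WSSE}}}Security')
    token = ET.SubElement(sec, f'{{{WSSE}}}UsernameToken')
    ET.SubElement(token, f'{{{WSSE}}}Username').text = username
    ET.SubElement(token, f'{{{WSSE}}}Password').text = password
    ET.SubElement(sec, f'{{{WSSE}}}KeyIdentifier').text = key
    return ET.tostring(sec, encoding='unicode')

print(build_security_header('bob', 'secret', 'my-guid'))
```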

 

This will then be automatically generated in the WSDL types section 🙂

<s:schema elementFormDefault="qualified" targetNamespace="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
  <s:element name="Security" type="s2:SecuritySoapHeader" />
  <s:complexType name="SecuritySoapHeader">
    <s:sequence>
      <s:element minOccurs="0" maxOccurs="1" name="UsernameToken" type="s2:UsernameTokenType" />
    </s:sequence>
    <s:anyAttribute />
  </s:complexType>
  <s:complexType name="UsernameTokenType">
    <s:sequence>
      <s:element minOccurs="0" maxOccurs="1" name="Username" type="s:string" />
      <s:element minOccurs="0" maxOccurs="1" name="Password" type="s:string" />
      <s:element minOccurs="0" maxOccurs="1" name="KeyIdentifier" type="s:string" />
    </s:sequence>
  </s:complexType>
</s:schema>
</wsdl:types>

What is really nice as well is introducing an Enterprise Service Bus management system, which can centrally audit all web services and authenticate users; then just publish the ESB management pages in SharePoint!

If I get time, I will show you how to build cool ESB management systems 🙂

I just finished university in my spare time, so this will certainly free up some time for blogs!

XML SelectSingleNode or SelectNodes and dealing with namespaces the easy way

Hi Folks,

Sometimes namespaces can be a real pain, especially when child elements in an XML document have multiple namespaces.

Imagine a document like this:

<Root xmlns="http://www.example.com">
  <Child xmlns="http://www.romiko.com"/>
  <Child xmlns="http://www.romiko.com"/>
</Root>

 

To select the data for all children, you have two ways to do it:

XmlNamespaceManager ns = new XmlNamespaceManager(doc.NameTable);
ns.AddNamespace("ro", "http://www.example.com");
ns.AddNamespace("ch", "http://www.romiko.com");
XmlNodeList nodes = doc.SelectNodes("/ro:Root/ch:Child", ns);

OR

XmlNodeList nodes = doc.SelectNodes("/*[local-name()='Root']/*[local-name()='Child']");

As you can see, I prefer the latter: less code, no namespace manager, and it is still XPath compliant.
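Incidentally, the same namespace-agnostic trick exists outside .NET too. For example, Python's ElementTree (3.8+) accepts a `{*}` wildcard namespace, which plays the role of local-name():

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<Root xmlns="http://www.example.com">'
    '<Child xmlns="http://www.romiko.com"/>'
    '<Child xmlns="http://www.romiko.com"/>'
    '</Root>'
)

# {*} matches any namespace: the ElementTree analogue of local-name()
children = doc.findall('{*}Child')
print(len(children))  # 2
```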

Render XML as HTML

Hi Folks,

I had to make a report on a website where users could see data being sent into the Enterprise Service Bus. Basically, all requests sent to web services are stored as XML in the SQL database.

So, I created a GridView with a link to display the XML data if needed. However, IE will display it like plain text unless you use a Response.Write mechanism and modify the content type to text/xml, or load the XML file in IE directly. I find this a bit dirty, since you need to finish with a Response.End(), and this can stop the browser from navigating back through your pages. It is much better to have a button on the page to view the data; the user clicks it and a popup box shows a nicely formatted HTML view of the XML, just like IE does when opening an XML file!

Also, I can imagine a user getting annoyed if every time he views the XML data he needs to click the back button, so popups are cool here. You can of course also just put it on another page if you want.

So the solution is the following:

1. AJAX – modalpopupextender

2. System.Web.UI.WebControls.Xml

3. XSLT to transform the XML to HTML – http://grounding.co.za/files/folders/documents/entry2193.aspx

Ok, what you do is put an event handler on the GridView for the template column:

<asp:TemplateField>
  <ItemTemplate>
    <asp:LinkButton ID="linkButtonShowData" Text="Show" Font-Underline="true" Font-Bold="true"
      ForeColor="Pink" CommandArgument='<%# Bind("aib_RequestData") %>' runat="server"
      OnClick="linkButtonShowData_Click" />
  </ItemTemplate>
</asp:TemplateField>

Then the event handler will pass the XML data to a helper method which will call the modalpopupextender.

protected void linkButtonShowData_Click(object sender, EventArgs e)
{
    LinkButton showDataButton = (LinkButton)sender;
    bool isXml = false;
    XmlDocument doc = new XmlDocument();
    try
    {
        doc.LoadXml(showDataButton.CommandArgument);
        isXml = true;
    }
    catch
    {
        // Not valid XML; fall through and show the raw text instead.
    }

    if (isXml)
        showXml(doc);
    else
        showText(showDataButton.CommandArgument);
}

protected void showText(string text)
{
    if (text == string.Empty)
    {
        text = "No request data found.";
        lblRequestData.Text = @"<H5>" + text + @"</H5>";
        lblRequestData.ForeColor = System.Drawing.Color.Red;
    }
    else
    {
        lblRequestData.Text = text;
        lblRequestData.ForeColor = System.Drawing.Color.Black;
    }
    CreateModalPopupExtender.Show();
}

protected void showXml(XmlDocument xml)
{
    lblRequestData.Text = string.Empty;
    XmlRequestData.DocumentContent = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" + Environment.NewLine;
    XmlRequestData.DocumentContent += xml.OuterXml;
    XmlRequestData.TransformSource = @"~\XSLT\XMLToHtml.xslt";
    CreateModalPopupExtender.Show();
}

From the above, we have two methods; if the XML is not valid, then we just show the text data, as some web requests are not XML based.

Ok, so as you can see, we load the XML into XmlRequestData.DocumentContent. This is just a simple System.Web.UI.WebControls.Xml control.

<asp:Xml ID="XmlRequestData" runat="server" />

Which is located in the popup control of ajax, see below:

<asp:Button ID="HiddenPopupButton" Style="display: none" runat="Server" />
<ajaxtoolkit:modalpopupextender id="CreateModalPopupExtender" targetcontrolid="HiddenPopupButton"
popupcontrolid="CreateModelPanel" cancelcontrolid="Close" backgroundcssclass="ModalBackground"
dropshadow="true" runat="server" />
<asp:Panel ID="CreateModelPanel" Style="display: none;" CssClass="ModalPopup" ScrollBars="Both"
runat="server">
<div class="PopUpContent">
<div>
<asp:Label ID="lblRequestData" runat="server" Width="100px" />
<asp:Xml ID="XmlRequestData" runat="server" />
</div>
<div>
<asp:Button ID="Close" Text="Close" runat="server" /><br />
<br />
<br />
</div>
</div>
</asp:Panel>

Now here is the magic!

XmlRequestData.DocumentContent += xml.OuterXml;
XmlRequestData.TransformSource=@"~\XSLT\XMLToHtml.xslt";
CreateModalPopupExtender.Show();

This will transform the xml control data into HTML using the xslt.

Below is the xslt:

——————————————————————-

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output indent="no" method="html"/>

<xsl:template match="/">
<HTML>
<HEAD>
<SCRIPT>
<xsl:comment>
<![CDATA[
function f(e){
if (e.className=="ci") {
if (e.children(0).innerText.indexOf("\n")>0) fix(e,"cb");
}
if (e.className=="di") {
if (e.children(0).innerText.indexOf("\n")>0) fix(e,"db");
} e.id="";
}
function fix(e,cl){
e.className=cl;
e.style.display="block";
j=e.parentElement.children(0);
j.className="c";
k=j.children(0);
k.style.visibility="visible";
k.href="#";
}
function ch(e) {
mark=e.children(0).children(0);
if (mark.innerText=="+") {
mark.innerText="-";
for (var i=1;i<e.children.length;i++) {
e.children(i).style.display="block";
}
}
else if (mark.innerText=="-") {
mark.innerText="+";
for (var i=1;i<e.children.length;i++) {
e.children(i).style.display="none";
}
}
}
function ch2(e) {
mark=e.children(0).children(0);
contents=e.children(1);
if (mark.innerText=="+") {
mark.innerText="-";
if (contents.className=="db"||contents.className=="cb") {
contents.style.display="block";
}
else {
contents.style.display="inline";
}
}
else if (mark.innerText=="-") {
mark.innerText="+";
contents.style.display="none";
}
}
function cl() {
e=window.event.srcElement;
if (e.className!="c") {
e=e.parentElement;
if (e.className!="c") {
return;
}
}
e=e.parentElement;
if (e.className=="e") {
ch(e);
}
if (e.className=="k") {
ch2(e);
}
}
function ex(){}
function h(){window.status=" ";}
document.onclick=cl;
]]>
</xsl:comment>
</SCRIPT>
<STYLE>
BODY {font:x-small 'Verdana'; margin-right:1.5em}
.c {cursor:hand}
.b {color:red; font-family:'Courier New'; font-weight:bold;
text-decoration:none}
.e {margin-left:1em; text-indent:-1em; margin-right:1em}
.k {margin-left:1em; text-indent:-1em; margin-right:1em}
.t {color:#990000}
.xt {color:#990099}
.ns {color:red}
.dt {color:green}
.m {color:blue}
.tx {font-weight:bold}
.db {text-indent:0px; margin-left:1em; margin-top:0px;
margin-bottom:0px;padding-left:.3em;
border-left:1px solid #CCCCCC; font:small Courier}
.di {font:small Courier}
.d {color:blue}
.pi {color:blue}
.cb {text-indent:0px; margin-left:1em; margin-top:0px;
margin-bottom:0px;padding-left:.3em; font:small Courier;
color:#888888}
.ci {font:small Courier; color:#888888}
PRE {margin:0px; display:inline}
</STYLE>
</HEAD>
<BODY class="st">
<xsl:apply-templates/>
</BODY>
</HTML>
</xsl:template>

<xsl:template match="processing-instruction()">
<DIV class="e">
<SPAN class="b">
<xsl:call-template name="entity-ref">
<xsl:with-param name="name">nbsp</xsl:with-param>
</xsl:call-template>
</SPAN>
<SPAN class="m">
<xsl:text>&lt;?</xsl:text>
</SPAN>
<SPAN class="pi">
<xsl:value-of select="name(.)"/>
<xsl:value-of select="."/>
</SPAN>
<SPAN class="m">
<xsl:text>?></xsl:text>
</SPAN>
</DIV>
</xsl:template>

<xsl:template match="processing-instruction('xml')">
<DIV class="e">
<SPAN class="b">
<xsl:call-template name="entity-ref">
<xsl:with-param name="name">nbsp</xsl:with-param>
</xsl:call-template>
</SPAN>
<SPAN class="m">
<xsl:text>&lt;?</xsl:text>
</SPAN>
<SPAN class="pi">
<xsl:text>xml </xsl:text>
<xsl:for-each select="@*">
<xsl:value-of select="name(.)"/>
<xsl:text>="</xsl:text>
<xsl:value-of select="."/>
<xsl:text>" </xsl:text>
</xsl:for-each>
</SPAN>
<SPAN class="m">
<xsl:text>?></xsl:text>
</SPAN>
</DIV>
</xsl:template>

<xsl:template match="@*">
<SPAN>
<xsl:attribute name="class">
<xsl:if test="xsl:*/@*">
<xsl:text>x</xsl:text>
</xsl:if>
<xsl:text>t</xsl:text>
</xsl:attribute>
<xsl:value-of select="name(.)"/>
</SPAN>
<SPAN class="m">="</SPAN>
<B>
<xsl:value-of select="."/>
</B>
<SPAN class="m">"</SPAN>
</xsl:template>

<xsl:template match="text()">
<DIV class="e">
<SPAN class="b"> </SPAN>
<SPAN class="tx">
<xsl:value-of select="."/>
</SPAN>
</DIV>
</xsl:template>

<xsl:template match="comment()">
<DIV class="k">
<SPAN>
<A STYLE="visibility:hidden" class="b" onclick="return false"
onfocus="h()">-</A>
<SPAN class="m">
<xsl:text>&lt;!--</xsl:text>
</SPAN>
</SPAN>
<SPAN class="ci" id="clean">
<PRE>
<xsl:value-of select="."/>
</PRE>
</SPAN>
<SPAN class="b">
<xsl:call-template name="entity-ref">
<xsl:with-param name="name">nbsp</xsl:with-param>
</xsl:call-template>
</SPAN>
<SPAN class="m">
<xsl:text>--></xsl:text>
</SPAN>
<SCRIPT>f(clean);</SCRIPT>
</DIV>
</xsl:template>

<xsl:template match="*">
<DIV class="e">
<DIV STYLE="margin-left:1em;text-indent:-2em">
<SPAN class="b">
<xsl:call-template name="entity-ref">
<xsl:with-param name="name">nbsp</xsl:with-param>
</xsl:call-template>
</SPAN>
<SPAN class="m">&lt;</SPAN>
<SPAN>
<xsl:attribute name="class">
<xsl:if test="xsl:*">
<xsl:text>x</xsl:text>
</xsl:if>
<xsl:text>t</xsl:text>
</xsl:attribute>
<xsl:value-of select="name(.)"/>
<xsl:if test="@*">
<xsl:text> </xsl:text>
</xsl:if>
</SPAN>
<xsl:apply-templates select="@*"/>
<SPAN class="m">
<xsl:text>/></xsl:text>
</SPAN>
</DIV>
</DIV>
</xsl:template>

<xsl:template match="*[node()]">
<DIV class="e">
<DIV class="c">
<A class="b" href="#" onclick="return false" onfocus="h()">-</A>
<SPAN class="m">&lt;</SPAN>
<SPAN>
<xsl:attribute name="class">
<xsl:if test="xsl:*">
<xsl:text>x</xsl:text>
</xsl:if>
<xsl:text>t</xsl:text>
</xsl:attribute>
<xsl:value-of select="name(.)"/>
<xsl:if test="@*">
<xsl:text> </xsl:text>
</xsl:if>
</SPAN>
<xsl:apply-templates select="@*"/>
<SPAN class="m">
<xsl:text>></xsl:text>
</SPAN>
</DIV>
<DIV>
<xsl:apply-templates/>
<DIV>
<SPAN class="b">
<xsl:call-template name="entity-ref">
<xsl:with-param name="name">nbsp</xsl:with-param>
</xsl:call-template>
</SPAN>
<SPAN class="m">
<xsl:text>&lt;/</xsl:text>
</SPAN>
<SPAN>
<xsl:attribute name="class">
<xsl:if test="xsl:*">
<xsl:text>x</xsl:text>
</xsl:if>
<xsl:text>t</xsl:text>
</xsl:attribute>
<xsl:value-of select="name(.)"/>
</SPAN>
<SPAN class="m">
<xsl:text>></xsl:text>
</SPAN>
</DIV>
</DIV>
</DIV>
</xsl:template>

<xsl:template match="*[text() and not (comment() or processing-instruction())]">
<DIV class="e">
<DIV STYLE="margin-left:1em;text-indent:-2em">
<SPAN class="b">
<xsl:call-template name="entity-ref">
<xsl:with-param name="name">nbsp</xsl:with-param>
</xsl:call-template>
</SPAN>
<SPAN class="m">
<xsl:text>&lt;</xsl:text>
</SPAN>
<SPAN>
<xsl:attribute name="class">
<xsl:if test="xsl:*">
<xsl:text>x</xsl:text>
</xsl:if>
<xsl:text>t</xsl:text>
</xsl:attribute>
<xsl:value-of select="name(.)"/>
<xsl:if test="@*">
<xsl:text> </xsl:text>
</xsl:if>
</SPAN>
<xsl:apply-templates select="@*"/>
<SPAN class="m">
<xsl:text>></xsl:text>
</SPAN>
<SPAN class="tx">
<xsl:value-of select="."/>
</SPAN>
<SPAN class="m">&lt;/</SPAN>
<SPAN>
<xsl:attribute name="class">
<xsl:if test="xsl:*">
<xsl:text>x</xsl:text>
</xsl:if>
<xsl:text>t</xsl:text>
</xsl:attribute>
<xsl:value-of select="name(.)"/>
</SPAN>
<SPAN class="m">
<xsl:text>></xsl:text>
</SPAN>
</DIV>
</DIV>
</xsl:template>

<xsl:template match="*[*]" priority="20">
<DIV class="e">
<DIV STYLE="margin-left:1em;text-indent:-2em" class="c">
<A class="b" href="#" onclick="return false" onfocus="h()">-</A>
<SPAN class="m">&lt;</SPAN>
<SPAN>
<xsl:attribute name="class">
<xsl:if test="xsl:*">
<xsl:text>x</xsl:text>
</xsl:if>
<xsl:text>t</xsl:text>
</xsl:attribute>
<xsl:value-of select="name(.)"/>
<xsl:if test="@*">
<xsl:text> </xsl:text>
</xsl:if>
</SPAN>
<xsl:apply-templates select="@*"/>
<SPAN class="m">
<xsl:text>></xsl:text>
</SPAN>
</DIV>
<DIV>
<xsl:apply-templates/>
<DIV>
<SPAN class="b">
<xsl:call-template name="entity-ref">
<xsl:with-param name="name">nbsp</xsl:with-param>
</xsl:call-template>
</SPAN>
<SPAN class="m">
<xsl:text>&lt;/</xsl:text>
</SPAN>
<SPAN>
<xsl:attribute name="class">
<xsl:if test="xsl:*">
<xsl:text>x</xsl:text>
</xsl:if>
<xsl:text>t</xsl:text>
</xsl:attribute>
<xsl:value-of select="name(.)"/>
</SPAN>
<SPAN class="m">
<xsl:text>></xsl:text>
</SPAN>
</DIV>
</DIV>
</DIV>
</xsl:template>

<xsl:template name="entity-ref">
<xsl:param name="name"/>
<xsl:text disable-output-escaping="yes">&amp;</xsl:text>
<xsl:value-of select="$name"/>
<xsl:text>;</xsl:text>
</xsl:template>

</xsl:stylesheet>

——————————————————————-

That is all there is to it. Now your XML will be displayed as HTML, rendered the way Internet Explorer renders an XML file when you open it, with a nice collapsible format!

You can download the XSLT here:

http://grounding.co.za/files/folders/documents/entry2193.aspx

XSD Code Generation Xsd2Code vs XSD.exe

Hi Folks,

A quick post for those of you who need to generate code based on complex and rather large XSD schemas.

I found this tool very handy:

http://www.codeplex.com/Xsd2Code

It has a lot more customisation than the xsd.exe tool from Visual Studio. Xsd2Code can be customised to use Lists or Arrays, and it supports large schemas with a lot of import statements, which you do not need to include in the command line, very handy! I wish Microsoft would jack up their xsd.exe generation tool.

Here is a sample command line for a schema that references around 40 xsd files.

C:\Code\MMIT.GMDS\MMIT.GMDS.WebServices\Schemas\Star\Rev5.3.\BODs\Standalone>"C:\Program Files\Xsd2Code\Xsd2Code.exe" AcknowledgeSalesLead.xsd Romiko.GMDS.BOL.Schemas.Star /c Array

NOTE: Above I do not use /cb Array as stated in the help file; I use /c Array. This is a bug in either the help file or the code, and I will report it to CodePlex.

I am currently using this to manage code generation for this automotive schema:

http://www.starstandard.org/SIGXMLSTAR5/XMLSchemas

As you can see for AcknowledgeSalesLead, there are a lot of imports, which this tool handles nicely.

<?xml version="1.0" encoding="utf-8"?>
<xsd:schema xmlns:udt="http://www.openapplications.org/oagis/9/unqualifieddatatypes/1.1" xmlns:oacl="http://www.openapplications.org/oagis/9/codelists" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:sqdt="http://www.starstandard.org/STAR/5/qualifieddatatypes/1.0" xmlns="http://www.starstandard.org/STAR/5" xmlns:ccts="urn:un:unece:uncefact:documentation:1.1" xmlns:qdt="http://www.openapplications.org/oagis/9/qualifieddatatypes/1.1" xmlns:scl="http://www.starstandard.org/STAR/5/codelists" xmlns:oagis="http://www.openapplications.org/oagis/9" xmlns:star="http://www.starstandard.org/STAR/5" targetNamespace="http://www.starstandard.org/STAR/5" elementFormDefault="qualified" attributeFormDefault="unqualified" blockDefault="#all">
<xsd:import namespace="http://www.starstandard.org/STAR/5/qualifieddatatypes/1.0" schemaLocation="../../Resources/Components/Common/QualifiedDataTypes.xsd" />
<xsd:import namespace="http://www.starstandard.org/STAR/5/codelists" schemaLocation="../../Resources/Components/Common/CodeLists.xsd" />
<xsd:import namespace="http://www.openapplications.org/oagis/9/qualifieddatatypes/1.1" schemaLocation="../../Resources/Components/OAGIS/CoreComponents/QualifiedDataTypes.xsd" />
<xsd:import namespace="http://www.openapplications.org/oagis/9/unqualifieddatatypes/1.1" schemaLocation="../../Resources/Components/OAGIS/CoreComponents/UnqualifiedDataTypes.xsd" />
<xsd:import namespace="http://www.openapplications.org/oagis/9/codelists" schemaLocation="../../Resources/Components/OAGIS/Common/CodeLists.xsd" />
<xsd:import namespace="http://www.openapplications.org/oagis/9/currencycode/54217:2001" schemaLocation="../../Resources/Components/OAGIS/CoreComponents/CodeList_CurrencyCode_ISO_7_04.xsd" />
<xsd:import namespace="http://www.openapplications.org/oagis/9/languagecode/5639:1988" schemaLocation="../../Resources/Components/OAGIS/CoreComponents/CodeList_LanguageCode_ISO_7_04.xsd" />
<xsd:import namespace="http://www.openapplications.org/oagis/9/IANAMIMEMediaTypes:2003" schemaLocation="../../Resources/Components/OAGIS/CoreComponents/CodeList_MIMEMediaTypeCode_IANA_7_04.xsd" />
<xsd:import namespace="http://www.openapplications.org/oagis/9/unitcode/66411:2001" schemaLocation="../../Resources/Components/OAGIS/CoreComponents/CodeList_UnitCode_UNECE_7_04.xsd" />
<xsd:import namespace="http://www.openapplications.org/oagis/9" schemaLocation="../../Resources/Components/OAGIS/Common/Components.xsd" />

It beats XSD.exe from Microsoft hands down from a usability perspective.

SQL 2005 – Rebuilding Indexes

Hi,

Below is a script to rebuild indexes. It will try online operations, which can fail; in the event of a failure it will switch to offline mode.

SET ANSI_NULLS ON

GO

SET QUOTED_IDENTIFIER ON

GO

SET NOCOUNT ON

GO

CREATE procedure [dbo].[RebuildIndexes]

(

@DatabaseName VARCHAR(255)

)

as

DECLARE @Database VARCHAR(255)

DECLARE @Table VARCHAR(255)

DECLARE @cmd NVARCHAR(500)

DECLARE @fillfactor INT

SET @fillfactor = 75

DECLARE DatabaseCursor CURSOR FOR

SELECT name FROM master.dbo.sysdatabases

WHERE name IN (@DatabaseName)

ORDER BY 1

OPEN DatabaseCursor

FETCH NEXT FROM DatabaseCursor INTO @Database

WHILE @@FETCH_STATUS = 0

BEGIN

SET @cmd = 'DECLARE TableCursor CURSOR FOR SELECT table_catalog + ''.'' + table_schema + ''.'' + table_name as tableName

FROM ' + @Database + '.INFORMATION_SCHEMA.TABLES WHERE table_type = ''BASE TABLE'''

-- create table cursor

EXEC (@cmd)

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @Table

WHILE @@FETCH_STATUS = 0

BEGIN

-- SQL 2000 command

--DBCC DBREINDEX(@Table,' ',@fillfactor)

-- SQL 2005 command

print 'Rebuilding ' + @Table

Select @Table = Substring(@table, 0, LEN(@table) - PATINDEX('%.%', REVERSE(@table)) + 2) + '[' +

SUBSTRING(@table, LEN(@table) - PATINDEX('%.%', REVERSE(@table)) + 2, LEN(@table)) + ']'

SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ', ONLINE = ON)'

BEGIN TRY

EXEC (@cmd)

END TRY

BEGIN CATCH

print 'Table: ' + @table + ' has been skipped as ONLINE, switching to OFFLINE'

SELECT

ERROR_NUMBER() AS ErrorNumber

,ERROR_SEVERITY() AS ErrorSeverity

,ERROR_STATE() AS ErrorState

,ERROR_PROCEDURE() AS ErrorProcedure

,ERROR_LINE() AS ErrorLine

,ERROR_MESSAGE() AS ErrorMessage;

SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ', ONLINE = OFF)'

EXEC (@cmd)

END CATCH

FETCH NEXT FROM TableCursor INTO @Table

END

CLOSE TableCursor

DEALLOCATE TableCursor

FETCH NEXT FROM DatabaseCursor INTO @Database

END

CLOSE DatabaseCursor

DEALLOCATE DatabaseCursor
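The SUBSTRING/PATINDEX/REVERSE juggling in the procedure above only wraps the final part of the qualified `db.schema.table` name in square brackets, so ALTER INDEX tolerates table names with spaces or reserved words. A minimal Python sketch of the same string logic (the function name is mine, not part of the script):

```python
def bracket_table(qualified_name: str) -> str:
    """Wrap the last dot-separated part of a qualified name in [brackets]."""
    prefix, _, table = qualified_name.rpartition(".")
    return f"{prefix}.[{table}]" if prefix else f"[{table}]"

qualified = bracket_table("MyDb.dbo.Order Details")   # "MyDb.dbo.[Order Details]"
```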

Upgrading BizTalk 2006 to BizTalk 2009

Hi,

When you want to upgrade to BizTalk 2009 from BizTalk 2006 R2, it is a relatively easy process on Windows 2003.

MSMQT will no longer work, so if you implement message queuing then I advise building another solution rather than using MSMQT.

Here is the upgrade warning:

Upgrade Required

Setup has detected that BizTalk Server 2006 R2 is installed on this computer. Setup will upgrade BizTalk Server 2006 R2 to BizTalk Server 2009.

Prior to the upgrade, you must stop all BizTalk Services in the BizTalk Group and backup the BizTalk databases.

Warning: The following features are no longer available in BizTalk Server 2009.

BizTalk Deployment Command Line Tool
Human Workflow Services Runtime Components
Human Workflow Services Base Assemblies
Human Workflow Service Administration Tools
BizTalk Message Queuing

If you proceed with the upgrade current applications may not function correctly. If you do not wish to proceed with the upgrade, press Cancel.

Database Upgrade

Setup will upgrade the following databases. Please backup these databases before you click upgrade on the final summary screen.

  • Database: BizTalkMgmtDb on SQL Server: PP\WorkRomiko.
  • Database: BizTalkMsgBoxDb on SQL Server: PP\EUWorkRomiko.
  • Database: BizTalkDTADb on SQL Server: PP\EUWorkRomiko.
  • Database: BizTalkRuleEngineDb on SQL Server: PP\EUWorkRomiko.

Also you will need to install the redistribution cab files:

BTSRedistW2K3EN32.cab (32 bit servers)

or

BTSRedistW2K3EN64.cab (64 bit servers)

The best is:

1. Run the setup for the first time and choose the option: Download the redistributable prerequisites CAB file.

2. Rerun setup and choose the last option. Automatically install the redistributable prerequisites from a CAB file.

Doing it this way, gives you the opportunity to have the file and then you can reuse it for development, preproduction and production installs.

Also stop the Rule Engine service, the WWW service and any BizTalk host instances before the upgrade.

Once the upgrade is complete, I noticed that the owner of the BizTalk SQL jobs is changed to the Windows account that performed the upgrade. So if you would like them to run under SA, or not under Windows accounts, you must go to the SQL jobs and change the owner, or you might find your SQL jobs failing!

I was pretty happy with the upgrade and everything went smoothly.

However, I have one gripe, and I hate moaning, but this is really not cool. The BizTalk Administration Console is also upgraded; however, it can no longer administer BizTalk 2006 servers, only 2009 servers. This can pose a problem on an administration machine during an upgrade project cycle: if you want to administer 2006 servers, you need to log on locally or use another admin console.

I also noticed that the BizTalk install folder in Program Files is still BizTalk 2006, which is nice if you use the BTSNTSvc.exe.config file, as those settings are still kept, e.g. Enterprise Library profiles etc.

I have heard of an issue where maps and schemas can cause a problem in Visual Studio 2008. I have yet to see it, so if I do, I will write something up about it! I am just happy to get onto Visual Studio 2008, get rid of Visual SourceSafe and work on Team Server!

I must say, assemblies compiled with Visual Studio 2005 still work on BizTalk 2009 🙂 However, if you upgrade your development machine, the BizTalk 2006 projects will not be recognised; you will need to upgrade to Visual Studio 2008 and upgrade the BizTalk 2006 solution files etc.

Another thing to note: when you upgrade a BizTalk project with an orchestration using a web reference, the upgraded web reference will be updated in Visual Studio 2008, but the orchestration's odx.cs code-behind is not. The solution is to DELETE the web references, recreate them from scratch and then reconfigure the web port types 🙂

Another issue is that the Visual Studio 2008 COPY LOCAL setting for referenced DLLs is very buggy, since it is not actually boolean, so migrated settings will not work even if copy local is on. Be careful of this, or you will get manifest problems thinking it copied the recompiled referenced assemblies. The fix is to set COPY LOCAL to false in the interface and then set it back to true, thanks to this dude's blog:

http://underground.infovark.com/2009/09/09/visual-studio-2008-and-the-copylocal-setting/

Hope this helps for anyone considering an upgrade.

SQL Server 2005 Enterprise Reporting Solutions leveraging Service Broker Synchronization

Hi!

This blog will outline some high-level aspects of implementing Service Broker. It is targeted at people who are already familiar with the basics of Service Broker; if you need the basics, check this tutorial:

http://msdn.microsoft.com/en-us/library/bb839483.aspx

Scenario:

You have a production system that is processing millions of records a month.
Reporting requires frequent complex reports to run on the metrics and also on the actual data stored.
The actual data payload is stored as XML data type. This means any reporting metrics on data payload must be done on a fact table of some sort.
Data must be archived for 5 years. (This will be around 500 million records)

Objectives:

Decouple the reporting from the production system
Ensure up to date synchronization from production to the reporting server
Running reports will not affect the production system

Solution:

  • We will use service broker to define ONE message contract for all tables that need to be synced. Synchronization will be activated by a job which activates the SQL service broker process
  • A message type per table is created
  • Synchronization will work in batch mode, supporting 40 to 50 simultaneous SQL processes (optimal for a quad-core processor = 48)
  • After synchronization of the main transaction tables a job on the reporting server will update the FACTS table.
  • The initial queue on the sender side will have a PROC activation configured on the queue. When the scheduled SQL job runs, it will trigger this activation procedure.
  • We will have a service to Trigger the archiving process
  • We will create a Route to the destination system
  • How you use Service Broker security is up to you. In this sample I keep it simple: no encryption. You can use asymmetric encryption/security etc., but that is not in scope of this article! In this example I use a SQL user, MyUser; notice the service and route user names are swapped on either server.

The fact table is used to extract information from the XML data. This allows the data to be indexed and reports to run smoothly.

Do you have a transaction database and need a custom reporting solution, without the complexity of log shipping or replication? Then look no further, SQL Service broker is the ideal tool to synchronize two databases, based on simple tables.
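Before diving into the T-SQL, here is a minimal Python sketch of the batch-synchronisation protocol the solution uses: the sender flags up to a batch size of unsent rows, ships their ids, and the receiver acknowledges with a transfer status that the sender writes back. All names (`Row`, `sync_batch`, `acknowledge`) are illustrative, not part of the SQL solution.

```python
from dataclasses import dataclass

@dataclass
class Row:
    archive_id: int
    sent_to_report: int = 0   # 0 = pending, 1 = in flight, 2 = confirmed, 3 = error

def sync_batch(source_rows, batch_size):
    """Flag a batch as in-flight and return the ids to transmit
    (mirrors UPDATE TOP (@batch) ... SET sent_to_report = 1 OUTPUT inserted.id)."""
    batch = [r for r in source_rows if r.sent_to_report == 0][:batch_size]
    for r in batch:
        r.sent_to_report = 1
    return [r.archive_id for r in batch]

def acknowledge(source_rows, ids, status):
    """Receiver's answer: write the TransferStatus back (2 = success, 3 = error)."""
    by_id = {r.archive_id: r for r in source_rows}
    for i in ids:
        by_id[i].sent_to_report = status

rows = [Row(i) for i in range(1, 6)]
ids = sync_batch(rows, batch_size=3)   # -> [1, 2, 3]
acknowledge(rows, ids, status=2)       # rows 1-3 confirmed, 4-5 still pending
```

The key design point, which the stored procedures below implement in T-SQL, is that a row is only marked fully transferred once the receiver answers, so a failed batch can be retried.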

Source SQL Server

On this server you create a Service Broker database to activate sending the data contract payloads. For example, you can create a database called SSBSender; it will manage the starting and spawning of processes, queues etc.

Contract:

USE

[SSBSender]

GO

CREATE CONTRACT [ArchivingTables] AUTHORIZATION [dbo] ([CONF_AuditTrail] SENT BY ANY,

[Table_Error] SENT BY ANY,

[Table_Record] SENT BY ANY,

[Table_RecordData] SENT BY ANY,

[Table_Transaction] SENT BY ANY)

MessageTypes:

USE [SSBSender]

GO

CREATE MESSAGE TYPE [Table_Error] AUTHORIZATION [dbo] VALIDATION = WELL_FORMED_XML

CREATE MESSAGE TYPE [Table_Record] AUTHORIZATION [dbo] VALIDATION = WELL_FORMED_XML

CREATE MESSAGE TYPE [Table_RecordData] AUTHORIZATION [dbo] VALIDATION = WELL_FORMED_XML

CREATE MESSAGE TYPE [Table_Transaction] AUTHORIZATION [dbo] VALIDATION = WELL_FORMED_XML

Sender Queue:

CREATE QUEUE [dbo].[InitialQueue] WITH STATUS = ON , RETENTION = OFF , ACTIVATION ( STATUS = ON , PROCEDURE_NAME = [dbo].[SSB_ReportTransmission] , MAX_QUEUE_READERS = 20 , EXECUTE AS OWNER ) ON [PRIMARY]

Service:

CREATE SERVICE [TriggerArchiving] AUTHORIZATION [MyUser] ON QUEUE [dbo].[InitialQueue] ([ArchivingTables])

Route

CREATE ROUTE [SSBReceiver] AUTHORIZATION [dbo] WITH SERVICE_NAME = N'ReportingReceivingService' , ADDRESS = N'TCP://DestinationServer:4022'

Remote Service Binding

CREATE REMOTE SERVICE BINDING [ReceivingServiceBinding] AUTHORIZATION [dbo] TO SERVICE N'ReportingReceivingService' WITH USER = [MyUserName] , ANONYMOUS = OFF

Activation Job

Schedule a job to activate the process.

declare @datetime datetime

select @datetime = getdate()

EXEC [SSBSender].[dbo].[Start_Report_Transmission] @batch_size = 2500, @archive_date_limit = @datetime

Sender Procedures

USE [SSBSender]

GO

CREATE PROCEDURE [dbo].[Start_Report_Transmission] @batch_size INT = NULL, @archive_date_limit DATETIME = NULL

AS

DECLARE @xmldata XML

DECLARE @InitDlgHandle UNIQUEIDENTIFIER

SET @batch_size = ISNULL(@batch_size, 100)

SET @archive_date_limit = ISNULL(@archive_date_limit, GETDATE())

--@xmldata contains the batch size for every table

SET @xmldata = N'<batch>' + CAST(@batch_size AS VARCHAR(30)) + N'</batch><archive_date_limit>' + CONVERT(NVARCHAR(20), @archive_date_limit, 120) + N'</archive_date_limit>'

--create a dialog from TriggerArchiving to itself

;BEGIN DIALOG @InitDlgHandle

FROM SERVICE TriggerArchiving

TO SERVICE N'TriggerArchiving'

ON CONTRACT ArchivingTables

WITH ENCRYPTION = OFF;

--send as many messages as the tables that need to be sent to reporting

;SEND ON CONVERSATION @InitDlgHandle MESSAGE TYPE [Table_Error](@xmldata);

;SEND ON CONVERSATION @InitDlgHandle MESSAGE TYPE [Table_Transaction](@xmldata);

;SEND ON CONVERSATION @InitDlgHandle MESSAGE TYPE [Table_Record](@xmldata);

;SEND ON CONVERSATION @InitDlgHandle MESSAGE TYPE [Table_RecordData](@xmldata);

END CONVERSATION @InitDlgHandle;

The procedure above will TRIGGER the queue activation procedure below!
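The trigger payload built above is an XML fragment with two root elements, which the activation procedure reads back with `.value()` calls. A Python stand-in for that parsing step (the variable names are mine):

```python
import xml.etree.ElementTree as ET
from datetime import datetime

# Trigger payload as built by Start_Report_Transmission (CONVERT style 120 date format).
payload = "<batch>2500</batch><archive_date_limit>2010-03-10 11:00:00</archive_date_limit>"

# The payload has two root elements, i.e. it is an XML *fragment*
# (which the SQL xml type allows), so wrap it before parsing.
root = ET.fromstring("<r>" + payload + "</r>")

batch = int(root.findtext("batch"))                 # @RecvReqMsg.value('(batch/text())[1]', 'INT')
limit = datetime.strptime(root.findtext("archive_date_limit"), "%Y-%m-%d %H:%M:%S")
```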

USE [SSBSender]

GO

/****** Object: StoredProcedure [dbo].[SSB_ReportTransmission] Script Date: 03/10/2010 11:08:23 ******/

SET ANSI_NULLS ON

GO

SET QUOTED_IDENTIFIER ON

GO

CREATE PROCEDURE [dbo].[SSB_ReportTransmission]

AS

DECLARE @batch INT;

DECLARE @archive_date_limit DATETIME;

DECLARE @RecvReqDlgHandle UNIQUEIDENTIFIER;

DECLARE @RecvReqMsg XML;

DECLARE @RecvReqMsgName sysname;

DECLARE @ReportTransmissionHandle UNIQUEIDENTIFIER;

DECLARE @list XML;

DECLARE @far_service NVARCHAR(256)

WHILE 1=1

BEGIN

SELECT @RecvReqDlgHandle = NULL, @RecvReqMsg = NULL, @RecvReqMsgName = NULL, @far_service = NULL

WAITFOR

(

RECEIVE TOP(1)

@RecvReqDlgHandle = conversation_handle,

@RecvReqMsg = message_body,

@RecvReqMsgName = message_type_name

FROM InitialQueue

), TIMEOUT 10000

IF @RecvReqMsgName = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'

OR @RecvReqMsgName = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'

BEGIN

IF @RecvReqDlgHandle IS NOT NULL

BEGIN

END CONVERSATION @RecvReqDlgHandle;

END

RETURN 0

END

--at this point there is a message.

--check where the message is coming from (the far service)

SELECT @far_service = far_service

FROM sys.conversation_endpoints

WHERE conversation_handle = @RecvReqDlgHandle

--based on the source of the message perform different actions

IF @far_service = N'ReportingReceivingService' --that means this is an answer from the reporting server

BEGIN

--set the status of records

EXEC dbo.SetReportingFlag @tabletype = @RecvReqMsgName, @list = @RecvReqMsg

CONTINUE --to receiving loop

END

--at this point the message came from a local call (triggering repeatable transmission)

--get the batch size

SET @batch = @RecvReqMsg.value('(batch/text())[1]', 'INT')

SET @archive_date_limit = @RecvReqMsg.value('(archive_date_limit/text())[1]', 'DATETIME')

IF ISNULL(@batch, 0) = 0 OR @archive_date_limit IS NULL

BEGIN

RETURN 0

END

;BEGIN DIALOG @ReportTransmissionHandle

FROM SERVICE TriggerArchiving

TO SERVICE N'ReportingReceivingService'

ON CONTRACT ArchivingTables

WITH ENCRYPTION = OFF;

WHILE 1=1

BEGIN

SET @list = NULL

--call procedure that updates the appropriate table

--and returns a list of ids that need to be transferred

EXEC dbo.GetArchiveIdsforReporting @tabletype = @RecvReqMsgName, @batch = @batch, @archive_date_limit = @archive_date_limit, @list = @list OUTPUT

--break the loop when list is null (no records to be transferred)

IF @list IS NULL

BEGIN

BREAK

END

--transmit the list, with the initially received message type

;SEND ON CONVERSATION @ReportTransmissionHandle MESSAGE TYPE @RecvReqMsgName(@list);

END

--end the conversation when no more records are to be transmitted

END CONVERSATION @ReportTransmissionHandle;

--break receiving loop

BREAK

END --of receiving loop

RETURN 0

USE [SSBSender]

GO

/****** Object: StoredProcedure [dbo].[GetArchiveIdsforReporting] Script Date: 03/10/2010 11:09:22 ******/

SET ANSI_NULLS ON

GO

SET QUOTED_IDENTIFIER ON

GO

CREATE PROCEDURE [dbo].[GetArchiveIdsforReporting] @tabletype SYSNAME = NULL, @batch INT = NULL, @archive_date_limit DATETIME = NULL, @list XML = NULL OUTPUT

AS

--first NULLify the output variable

--this is important for always returning the right value (when output param is NULL)

--to the caller

SET @list = NULL

IF ISNULL(@tabletype, '') = '' OR ISNULL(@batch, 0) = 0 OR @archive_date_limit IS NULL

BEGIN

RETURN

END

DECLARE @tmp TABLE (archive_id BIGINT, tmpid INT IDENTITY(1,1) PRIMARY KEY CLUSTERED (archive_id, tmpid))

--trigger update of table and sending of records

--based on the message type

IF @tabletype = 'Table_Error'

BEGIN

UPDATE TOP (@batch) MyTransactionDatabase.dbo.tfe_TableError

SET sent_to_report = 1

OUTPUT inserted.tfe_id INTO @tmp(archive_id)

WHERE sent_to_report = 0

AND archive_date <= @archive_date_limit

END

ELSE IF @tabletype = 'Table_Transaction'

BEGIN

UPDATE TOP (@batch) MyTransactionDatabase.dbo.tft_TableTransaction

SET sent_to_report = 1

OUTPUT inserted.tft_id INTO @tmp(archive_id)

WHERE sent_to_report = 0

AND archive_date <= @archive_date_limit

END

ELSE IF @tabletype = 'Table_Record'

BEGIN

UPDATE TOP (@batch) MyTransactionDatabase.dbo.tfr_TableRecord

SET sent_to_report = 1

OUTPUT inserted.tfr_id INTO @tmp(archive_id)

WHERE sent_to_report = 0

AND archive_date <= @archive_date_limit

END

ELSE IF @tabletype = 'Table_RecordData'

BEGIN

UPDATE TOP (@batch) MyTransactionDatabase.dbo.trd_TableRecordData

SET sent_to_report = 1

OUTPUT inserted.trd_id INTO @tmp(archive_id)

WHERE sent_to_report = 0

AND archive_date <= @archive_date_limit

END

IF NOT EXISTS(SELECT * FROM @tmp)

BEGIN

RETURN

END

SET @list = NULL

--create the list of tbl_ids

SET @list = (SELECT TOP 100 PERCENT archive_id AS id

FROM @tmp

ORDER BY archive_id

FOR XML PATH('')

)
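The id list round-trips between the two servers: the sender serialises the ids as bare `<id>` elements with `FOR XML PATH('')`, and the receiver shreds them back out with `@list.nodes('id')`. A small Python sketch of that round trip (function names are mine):

```python
import xml.etree.ElementTree as ET

def build_list(ids):
    """Mirrors: SELECT archive_id AS id FROM @tmp ORDER BY archive_id FOR XML PATH('')."""
    return "".join(f"<id>{i}</id>" for i in sorted(ids))

def shred_list(fragment):
    """Mirrors: SELECT a.col.value('text()[1]', 'BIGINT') FROM @list.nodes('id') a(col)."""
    root = ET.fromstring("<r>" + fragment + "</r>")   # wrap: the list is an XML fragment
    return [int(e.text) for e in root.findall("id")]

fragment = build_list([42, 7, 99])   # "<id>7</id><id>42</id><id>99</id>"
ids = shred_list(fragment)           # [7, 42, 99]
```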

USE [SSBSender]

GO

/****** Object: StoredProcedure [dbo].[SetReportingFlag] Script Date: 03/10/2010 11:11:55 ******/

SET ANSI_NULLS ON

GO

SET QUOTED_IDENTIFIER ON

GO

CREATE PROCEDURE [dbo].[SetReportingFlag] @tabletype sysname = NULL, @list XML = NULL

AS

DECLARE @TransferStatus TINYINT

IF @tabletype IS NULL OR @list IS NULL

BEGIN

RETURN 0

END

--get the transmission flag value

SET @TransferStatus = @list.value('(TransferStatus/text())[1]', 'TINYINT')

IF @TransferStatus IS NULL

BEGIN

RETURN 0

END

--shred the @list and get id values

DECLARE @tmp TABLE (archive_id BIGINT, tmpid INT IDENTITY(1,1) PRIMARY KEY CLUSTERED (archive_id, tmpid))

--shred the @list of values

INSERT INTO @tmp(archive_id)

SELECT a.col.value('text()[1]', 'BIGINT')

FROM @list.nodes('id') a(col)

IF NOT EXISTS(SELECT TOP 1 * FROM @tmp)

BEGIN

RETURN 0

END

--update proper table with @TransferStatus

IF @tabletype = 'Table_Error'

BEGIN

UPDATE archive_tbl

SET sent_to_report = @TransferStatus

FROM MyTransactionDatabase.dbo.tfe_TableError archive_tbl

JOIN @tmp tmp ON archive_tbl.tfe_id = tmp.archive_id

END

ELSE IF @tabletype = 'Table_Transaction'

BEGIN

UPDATE archive_tbl

SET sent_to_report = @TransferStatus

FROM MyTransactionDatabase.dbo.tft_TableTransaction archive_tbl

JOIN @tmp tmp ON archive_tbl.tft_id = tmp.archive_id

END

ELSE IF @tabletype = 'Table_Record'

BEGIN

UPDATE archive_tbl

SET sent_to_report = @TransferStatus

FROM MyTransactionDatabase.dbo.tfr_TableRecord archive_tbl

JOIN @tmp tmp ON archive_tbl.tfr_id = tmp.archive_id

END

ELSE IF @tabletype = 'Table_RecordData'

BEGIN

UPDATE archive_tbl

SET sent_to_report = @TransferStatus

FROM MyTransactionDatabase.dbo.trd_TableRecordData archive_tbl

JOIN @tmp tmp ON archive_tbl.trd_id = tmp.archive_id

END

RETURN 0

Destination SQL Server

This is the server that will receive the data and process the messages in the queue. The database here that manages synchronization will be SSBReceiver; it handles the receiving queue and passes the message payload to the receiver procedure.

On the receiving database, you will have the SAME contract and message types.

Queue

CREATE QUEUE [dbo].[ReceivingQueue] WITH STATUS = ON , RETENTION = OFF , ACTIVATION ( STATUS = ON , PROCEDURE_NAME = [dbo].[SSBReceiveMessage] , MAX_QUEUE_READERS = 40 , EXECUTE AS OWNER ) ON [PRIMARY]

Service

CREATE SERVICE [ReportingReceivingService] AUTHORIZATION [User] ON QUEUE [dbo].[ReceivingQueue] ([ArchivingTables])

Route

CREATE ROUTE [TriggerArchiving] AUTHORIZATION [dbo] WITH SERVICE_NAME = N'TriggerArchiving' , ADDRESS = N'TCP://SourceServer:4022'

Remote Service Binding

CREATE REMOTE SERVICE BINDING [TriggerArchivingServiceBinding] AUTHORIZATION [dbo] TO SERVICE N'TriggerArchiving' WITH USER = [MyUserName] , ANONYMOUS = OFF

Now all you need to do is get the stored procedure that is activated on the receiving queue to manage the message payload. Easy as that!

USE [SSBReceiver]

GO

/****** Object: StoredProcedure [dbo].[SSBReceiveMessage] Script Date: 03/10/2010 11:20:17 ******/

SET ANSI_NULLS ON

GO

SET QUOTED_IDENTIFIER ON

GO

CREATE PROCEDURE [dbo].[SSBReceiveMessage]

AS

print 'SSBReceiveMessage started'

DECLARE @RecvReqDlgHandle UNIQUEIDENTIFIER;

DECLARE @RecvReqMsg XML;

DECLARE @RecvReqMsgName SYSNAME;

DECLARE @ReportAnswerHandle UNIQUEIDENTIFIER;

--begin an answering dialogue with the TriggerArchiving service

;BEGIN DIALOG @ReportAnswerHandle

FROM SERVICE ReportingReceivingService

TO SERVICE N'TriggerArchiving'

ON CONTRACT ArchivingTables

WITH ENCRYPTION = OFF;

WHILE 1=1

BEGIN

SELECT @RecvReqDlgHandle = NULL, @RecvReqMsg = NULL, @RecvReqMsgName = NULL;

WAITFOR

(

RECEIVE TOP(1)

@RecvReqDlgHandle = conversation_handle,

@RecvReqMsg = message_body,

@RecvReqMsgName = message_type_name

FROM [ReceivingQueue]

), TIMEOUT 10000

IF @RecvReqMsgName = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'

OR @RecvReqMsgName = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'

BEGIN

IF @RecvReqDlgHandle IS NOT NULL

BEGIN

END CONVERSATION @RecvReqDlgHandle;

END

BREAK

END

--in case the message is null, break looping

IF @RecvReqMsg IS NULL

BEGIN

BREAK

END

BEGIN TRY

--execute procedure that pulls archived records

EXEC dbo.PullArchiveRecords @tabletype = @RecvReqMsgName, @list = @RecvReqMsg

--apply Success Status element in @RecvReqMsg

SET @RecvReqMsg.modify('insert element TransferStatus {2} as first into .')

END TRY

BEGIN CATCH

--at this point an error occurred

print 'Error occurred during PullArchiveRecords: ' + ERROR_MESSAGE()

--apply Error Status element in @RecvReqMsg

SET @RecvReqMsg.modify('insert element TransferStatus {3} as first into .')

END CATCH

--send the action answer to TriggerArchiving

;SEND ON CONVERSATION @ReportAnswerHandle MESSAGE TYPE @RecvReqMsgName(@RecvReqMsg);

END

--end conversation with TriggerArchiving service

END CONVERSATION @ReportAnswerHandle

RETURN 0

From here, you add your own logic to the procedure called by the proc above to transfer the batches into the reporting/archive tables.

USE [SSBReceiver]

GO

SET ANSI_NULLS ON

GO

SET QUOTED_IDENTIFIER ON

GO

CREATE PROCEDURE [dbo].[PullArchiveRecords] @tabletype SYSNAME = NULL, @list XML = NULL

AS

IF @tabletype IS NULL OR @list IS NULL

BEGIN

RETURN 0

END

DECLARE @tmp TABLE (archive_id BIGINT, tmpid INT IDENTITY(1,1), PRIMARY KEY CLUSTERED (archive_id, tmpid))

--shred the @list of values

INSERT INTO @tmp(archive_id)

SELECT a.col.value('text()[1]', 'BIGINT')

FROM @list.nodes('id') a(col)

IF NOT EXISTS(SELECT TOP 1 * FROM @tmp)

BEGIN

RETURN 0

END

--pull records from Archiving server

--based on @tabletype

BEGIN TRY

IF @tabletype = 'Table_Error'

BEGIN

--PUT LOGIC HERE to insert from payload joining on @tmp

END

ELSE IF @tabletype = 'Table_Transaction'

BEGIN

--PUT LOGIC HERE to insert from payload joining on @tmp

END

ELSE IF @tabletype = 'Table_Record'

BEGIN

--PUT LOGIC HERE to insert from payload joining on @tmp

END

ELSE IF @tabletype = 'Table_RecordData'

BEGIN

--PUT LOGIC HERE to insert from payload joining on @tmp

END

END TRY

BEGIN CATCH

INSERT INTO dbo.myerrors(errormessage)

values(ERROR_MESSAGE())

END CATCH
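As a side note, the XML shredding pattern used in PullArchiveRecords is easy to test in isolation. This stand-alone snippet (sample data, not part of the procs above) shows it turning a list of <id> elements into relational rows:

```sql
-- Stand-alone test of the shredding pattern used in PullArchiveRecords.
DECLARE @list XML = N'<id>101</id><id>102</id><id>103</id>';

-- nodes('id') yields one row per top-level <id> element;
-- value('text()[1]', 'BIGINT') extracts its text content as BIGINT.
SELECT a.col.value('text()[1]', 'BIGINT') AS archive_id
FROM @list.nodes('id') a(col);
-- returns three rows: 101, 102, 103
```

This is why the procedure can join the shredded @tmp table against the payload: the message body is just a flat list of archive keys.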

Reporting Server Facts Job

You can then create a FACT table job to update the fact table with the data that was just imported. Use a logical flag, set in the procedure above where you implement your insert logic, so that imported rows are marked; the fact job can then use that flag value to update the facts, similar to processing a CUBE on an OLAP server.
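One possible shape for that fact job step is sketched below. All table and column names here (FactTransactions, Table_Transaction, IsProcessed, CreatedDate) are assumptions for illustration; substitute your own schema and aggregation:

```sql
-- Hypothetical fact-refresh step: aggregate only the rows flagged by the
-- import procedure, then clear the flag so the next run sees fresh data.
BEGIN TRANSACTION;

INSERT INTO dbo.FactTransactions (TransactionDate, TransactionCount)
SELECT CAST(t.CreatedDate AS DATE), COUNT(*)
FROM dbo.Table_Transaction t
WHERE t.IsProcessed = 0
GROUP BY CAST(t.CreatedDate AS DATE);

UPDATE dbo.Table_Transaction
SET IsProcessed = 1
WHERE IsProcessed = 0;

COMMIT TRANSACTION;
```

Wrapping both statements in one transaction keeps the flag in sync with the facts if the job fails partway through.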

Conclusion

You can develop some super fast and streamlined synchronization mechanisms with Service Broker. Transferring millions of records is a breeze: batch sizes are kept small, and since we spawn 48-50 parallel batching processes into the queue at once, your transaction log checkpoints stay efficient, especially on high processing load days. It is much more efficient than Replication and Log Shipping, and MUCH easier to support. This means we can always keep the queue size at a max of +- 50 🙂 Easy to manage on large transaction days.

I have transferred around 300 million records and not had a hiccup since.