PowerShell PSCX ADO sample

$Provider = "System.Data.SqlClient"
$ConnectionString = "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI"
$Connection = Get-AdoConnection $Provider $ConnectionString
$Query = "SELECT * FROM Orders"

Invoke-AdoCommand -ProviderName $Provider -Connection $Connection -CommandText $Query

$Connection.Close()

NVelocity

NVelocity is a simple templating tool – a .NET port of the Apache Velocity engine.

The Velocity Template Language (VTL) reference documents the template syntax.

This is what you need to get NVelocity to work:

using System;
using System.IO;
using NVelocity;

NVelocity.Context.IContext context = new VelocityContext();
context.Put("name", "Chris");

NVelocity.Runtime.RuntimeSingleton.Init();
NVelocity.Template template = NVelocity.Runtime.RuntimeSingleton.GetTemplate("helloworld.vm");

StringWriter writer = new StringWriter();
template.Merge(context, writer);

Console.WriteLine(writer.ToString());

Here is the template (helloworld.vm):

Hello $name!  Welcome to Velocity!
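
Templates can also iterate over collections with VTL's #foreach directive. Here is a minimal sketch, assuming the NVelocity.App.Velocity helper class (from the Apache Velocity port) and an inline template string rather than a .vm file:

using System;
using System.Collections;
using System.IO;
using NVelocity;
using NVelocity.App;

class VelocityLoopSample
{
    static void Main()
    {
        Velocity.Init();

        VelocityContext context = new VelocityContext();
        context.Put("names", new ArrayList { "Chris", "Pat" });

        // #foreach repeats the body once per element of $names.
        string template = "#foreach($n in $names)Hello $n!\n#end";

        StringWriter writer = new StringWriter();
        Velocity.Evaluate(context, writer, "sample", template);

        Console.WriteLine(writer.ToString());
    }
}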

Service Locators versus Inversion of Control Containers

Of late I have been using a "service locator" similar to the one below:

//=== class ServiceLocator ====

using System;
using System.Collections.Generic;
using System.Configuration;
using System.Reflection;
using System.Xml;

namespace PerfectStorm
{
    /// <summary>
    /// Decouples the creation of a class from its implementation.  The client only
    /// needs to know a name and a supported interface or base class.
    ///
    /// The only restriction this imposes on the created class is that it must have
    /// a parameterless constructor.
    ///
    /// The registry may be populated automatically from the appropriate config file,
    /// or the client may populate it itself.  By default the full name of the class
    /// is used as the key, but a client-populated registry may use any unique string.
    ///
    /// The beauty of this split is that the caller need not know any of the
    /// implementation details, so the implementation can be replaced with nothing
    /// more than a configuration change.
    /// </summary>
    public static class ServiceLocator
    {
        static Dictionary<string, Type> _dict = new Dictionary<string, Type>();

        // The static constructor runs before the first member access on the type.
        static ServiceLocator()
        {
            XmlNode configNode = (XmlNode)ConfigurationManager.GetSection("PerfectStorm.ServiceLocator");
            if (configNode != null)
            {
                foreach (XmlNode node in configNode.SelectNodes("//PerfectStorm.ServiceLocator/assembly"))
                {
                    LoadAssembly(node.Attributes["name"].InnerText);
                }
            }
        }

        /// <summary>
        /// Loads the named assembly and registers every concrete type in it that
        /// has a parameterless constructor.
        /// </summary>
        /// <param name="Name">The name of the assembly to load.</param>
        public static void LoadAssembly(string Name)
        {
            Assembly a = Assembly.Load(Name);
            Type[] types = a.GetTypes();
            foreach (Type t in types)
            {
                if (!t.IsAbstract)
                {
                    // Only add the type if it can be constructed.
                    if (t.GetConstructor(System.Type.EmptyTypes) != null)
                    {
                        AddType(t.FullName, t);
                    }
                }
            }
        }

        /// <summary>
        /// Creates an instance of a named class that conforms to the supplied interface or base type.
        /// </summary>
        /// <typeparam name="T">The interface or base type the instance is cast to.</typeparam>
        /// <param name="name">The key the class was registered under.</param>
        /// <returns>A new instance of the named class.</returns>
        public static T CreateInstance<T>(string name)
        {
            Type t = null;
            if (_dict.ContainsKey(name))
            {
                t = _dict[name];
            }
            else
            {
                throw new Exception(string.Format("ServiceLocator is unable to create {0}", name));
            }

            return (T)Activator.CreateInstance(t);
        }

        /// <summary>
        /// Registers a type under its full name.
        /// </summary>
        /// <typeparam name="T">The type to register.</typeparam>
        /// <remarks>Can only register classes with parameterless constructors.</remarks>
        public static void Register<T>() where T : new()
        {
            Type type = typeof(T);
            AddType(type.FullName, type);
        }

        /// <summary>
        /// Adds a registry entry, replacing any existing entry with the same key.
        /// </summary>
        /// <param name="Name">The key to register the type under.</param>
        /// <param name="T">The type to register.</param>
        private static void AddType(string Name, Type T)
        {
            if (_dict.ContainsKey(Name))
            {
                _dict[Name] = T;
            }
            else
            {
                _dict.Add(Name, T);
            }
        }

        /// <summary>
        /// Empties the registry.
        /// </summary>
        /// <remarks>This has been included to assist unit testing.</remarks>
        public static void Clear()
        {
            _dict.Clear();
        }
    }
}

This allows applications to load an assembly based upon config.
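
Before getting to the config plumbing, here is a minimal sketch of using the registry programmatically (IGreeter and ConsoleGreeter are hypothetical names, assumed to live in the PerfectStorm namespace so that the full-name key matches):

namespace PerfectStorm
{
    public interface IGreeter
    {
        void Greet();
    }

    public class ConsoleGreeter : IGreeter
    {
        public void Greet() { System.Console.WriteLine("Hello from ConsoleGreeter"); }
    }

    class Demo
    {
        static void Main()
        {
            // Register<T> keys the entry by the type's full name.
            ServiceLocator.Register<ConsoleGreeter>();

            // The caller only knows the interface and the registered name.
            IGreeter greeter = ServiceLocator.CreateInstance<IGreeter>("PerfectStorm.ConsoleGreeter");
            greeter.Greet();
        }
    }
}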

You will also need this trivial section handler:

using System;
using System.Configuration;
using System.Xml;
using System.Xml.Serialization;
using System.Xml.XPath;

namespace PerfectStorm
{
    /// <summary>
    /// Allows the config section to be read as an XML node.
    /// </summary>
    /// <remarks>
    /// This is identical code to that used in PerfectStorm.CodeGenLibrary, but I don't
    /// want to cause a dependency between the two.
    /// </remarks>
    public class ServiceLocatorConfig : IConfigurationSectionHandler
    {
        /// <summary>
        /// Returns the raw XmlNode for the section.
        /// </summary>
        /// <param name="parent">The parent section (unused).</param>
        /// <param name="configContext">The configuration context (unused).</param>
        /// <param name="section">The XML node for the section.</param>
        /// <returns>The section node itself.</returns>
        public object Create(
            object parent,
            object configContext,
            System.Xml.XmlNode section)
        {
            // This was based upon an idea that I got from:
            // http://alt.pluralsight.com/wiki/default.aspx/Craig/XmlSerializerSectionHandler.html
            return section;
        }
    }
}

This permits config-driven usage such as:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="PerfectStorm.ServiceLocator" type="PerfectStorm.ServiceLocatorConfig, PerfectStorm.ServiceLocator" />
  </configSections>
  <PerfectStorm.ServiceLocator>
    <assembly name="My fully qualified name" />
  </PerfectStorm.ServiceLocator>
</configuration>

This permits usage such as:

IYourInterface svc = ServiceLocator.CreateInstance<IYourInterface>("fully qualified name of class");

This means that the caller and implementer only need to agree on the interface, and everything else can be set up in config.

This has allowed me to make rapid changes to the architecture of an application (in one case to break a cyclic dependency, and in another to replace a set of pointless local WCF calls with a call to an interface).  It has been easy to retrofit to an existing application, probably because we had been working through some narrow interfaces (almost all remote calls went through one of three interfaces).  I have found it useful to move the service interfaces into a common assembly.

The only limitation of the ServiceLocator is that the classes created require a parameterless constructor.
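
One way to live within that restriction, sketched below, is to register a factory that itself has a parameterless constructor and builds the parameterful object (the Widget/IWidgetFactory names are hypothetical):

namespace PerfectStorm
{
    public class Widget
    {
        private readonly string _connectionString;
        public Widget(string connectionString) { _connectionString = connectionString; }
    }

    public interface IWidgetFactory
    {
        Widget Create(string connectionString);
    }

    // The factory satisfies the parameterless-constructor rule;
    // the object it builds does not have to.
    public class WidgetFactory : IWidgetFactory
    {
        public Widget Create(string connectionString)
        {
            return new Widget(connectionString);
        }
    }
}

// Usage:
// ServiceLocator.Register<WidgetFactory>();
// IWidgetFactory factory = ServiceLocator.CreateInstance<IWidgetFactory>("PerfectStorm.WidgetFactory");
// Widget widget = factory.Create("Data Source=.;Initial Catalog=Northwind");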

I have started looking at Windsor (from the Castle project) as an alternative.

The following is a usage sample of Windsor:

using System;
using Castle.Windsor;
using Castle.Windsor.Configuration.Interpreters;
using TestLib;

namespace TestProject
{
    class Program
    {
        static void Main(string[] args)
        {
            IWindsorContainer container = new WindsorContainer(new XmlInterpreter());
            IExecutor service = container.Resolve<IExecutor>();

            // Or, to be specific: IExecutor service = container.Resolve<IExecutor>("test");

            service.Execute();
            Console.ReadLine();
        }
    }
}

This was controlled by the following config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section
      name="castle"
      type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor" />
  </configSections>

  <castle>
    <components>
      <component id="test"
                 service="TestLib.IExecutor, TestLib"
                 type="TestLibImplementation.Executor2, TestLibImplementation" />
    </components>
  </castle>
</configuration>

There is no reason why you can't alter Windsor to use reflection to register all types from an assembly (in fact Binsor frequently does this) – something like the sketch below.
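
This is a rough, untested sketch using the older AddComponent overloads (the fluent registration API came later); the WindsorAssemblyRegistration name is mine:

using System;
using System.Reflection;
using Castle.Windsor;

static class WindsorAssemblyRegistration
{
    // Registers every concrete, constructible type in the assembly,
    // keyed by its full name, against its first interface (if any).
    public static void RegisterAssembly(IWindsorContainer container, string assemblyName)
    {
        Assembly assembly = Assembly.Load(assemblyName);
        foreach (Type type in assembly.GetTypes())
        {
            if (type.IsAbstract || type.GetConstructor(Type.EmptyTypes) == null)
                continue;

            Type[] interfaces = type.GetInterfaces();
            if (interfaces.Length > 0)
                container.AddComponent(type.FullName, interfaces[0], type);
            else
                container.AddComponent(type.FullName, type);
        }
    }
}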

Now this is very similar to the Service Locator above (with the right id the usage would be identical).

I have more investigation of Windsor to do…

Use of Third-Party Applications/Code in Yours

Quite often you need to use a third-party application in your code.  This may be a workflow tool or a reporting service.

If you need to integrate with it (you need to call it, or it needs to call you), keep these interactions to a small, well-defined interface.

Build a simple assembly for calling it and use that for all access.  For it to call you, produce a second small assembly.

Do not let any of its implementation details leak beyond that.  Those should be the only assemblies that have a reference to the third-party code.

Careful use of this can allow you to replace a product (possibly with some custom code) at a later time with minimal effort.
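
As a sketch of the shape this takes (the IReportService, MyApp.ReportingGateway, and VendorReporting names are hypothetical stand-ins for whatever product is being wrapped):

using System;

namespace MyApp.ReportingGateway
{
    // This gateway assembly is the only one that references the vendor's libraries.
    public interface IReportService
    {
        byte[] RenderReport(string reportName);
    }

    internal class ThirdPartyReportService : IReportService
    {
        public byte[] RenderReport(string reportName)
        {
            // All calls into the vendor API live here and nowhere else,
            // so replacing the product later touches only this assembly.
            // e.g. return VendorReporting.Engine.Render(reportName);
            throw new NotImplementedException("vendor call goes here");
        }
    }
}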

Use of Version Control

In my opinion the use of version control is what defines the professional programmer.

It forms one of the standard questions that I ask a prospective employer as part of the interview process (remember you are interviewing them as much as they are interviewing you).

I was listening to a recent DNR (.NET Rocks) show that covered Team System. In the show there was a discussion about version control systems and the use of branching.

Richard made a statement along the lines that people should keep their branches for as short a time as possible before merging.

I have been using version control in various forms for a number of years and have had only a single project where we needed to merge.  The majority of branches were long-lived (although they would change little from their branch point).

Branches, in my experience, are best used to provide bug fixes for a released version of a program when the current development version has moved on (and may not yet be sufficiently feature-complete/tested to release as a replacement).  New development should be performed on the "tip" of the branch tree.  A checkpoint was taken when a build was released, and a branch was only started from it when the first bug fix was required.  We would typically have five "live" branches per product.  New users would get the latest release, but some customers were reluctant to upgrade (especially Japanese banks).

The only time that I worked on a development project that required branching was when we had two teams of developers trying to add significant, distinct features to the same code-base.

Each team completed its development, and then a three-way merge was performed across the three branches (original, Team A, and Team B).

Each of the version control systems that I have worked with has its own quirks.

I have worked in some places that just had a directory structure that was backed up weekly.  We used Beyond Compare to merge changes into and out of the shared source.

rcs is the simplest that I have used. It had command-line tools to check files in and out.

mks source integrity added a GUI to the rcs system.  It had a useful (yet slow) concept of a checkpoint, which formed an immovable label; you used checkpoints as the jump-off point for a branch.  Since the storage was a file share it was possible to fix up errors easily, and it was also possible to move no-longer-used code away to an archive yet keep the history. Labels could be slow to apply to a large codebase (one I worked on was ten years old).

pvcs has a nice concept of promotion groups.  Each file has a promotion hierarchy (such as DEV, INT, BUILD, QA, RELEASE).  These are effectively labels that have to be applied to revisions in ascending order (all can be on the same file).  The idea is that you are free to check anything in at the DEV level – there is no guarantee even that it will build. INT is for passing code to other developers, BUILD indicates that it at least compiles and satisfies rudimentary tests, QA is the next version to be released, and RELEASE is production code.  You could check out all source at, say, BUILD, then check out the files locked to you at DEV over the top. This is a good way of checking that your code will not break the build before you promote it (to INT and then BUILD), and it encourages frequent check-ins and integrations. You could easily replace the built-in compare and merge tools (my preference is Beyond Compare from Scooter Software).

starteam just seemed to work. It was limited in its external tool integration.  It had nice features such as being able to get the source as it existed at a specified time.

svn is simple to use.  It integrates well into the Windows desktop via TortoiseSVN.  You can even use SvnBridge to let you use svn to store files in a TFS backend.  It was one of the earliest vcs systems to perform atomic commits – that is, a group of files must all commit or the whole commit fails.

TFS is a bit different to other version control systems (probably because it is relatively new – hopefully the nightmare that was SourceSafe has been forgotten).  Firstly, it is the only vcs that actually remembers where the source has been checked out to (and objects if you move it without using the IDE). It is highly integrated into Visual Studio (unless you use SvnBridge), which can make files that Visual Studio does not touch harder to manage (it can't see changes to files copied at the file-system level).  It is also an atomic-commit vcs, packaging code into changesets.

It is the only vcs that I know of that does not allow pulling of a label (the excuses for this are rather weak – I don't care that a label can be non-unique or moved; make a decision and pick the latest file version).  It is also lacking in macro support, a very useful feature found in almost all older systems – macros allow you to include the version number/labels/committer/timestamps from the check-in in the file when it is checked out.  It can also be a little odd when you ask it to "get latest" from within the IDE – it frequently fails to add new files.  It will ignore edited files when the latest source is grabbed, which is good since you don't get your work clobbered by mistake.  It does have good integration support.

To be fair to TFS, it is more than a vcs (but that is true of several of the others that I used) – it has an associated SharePoint site for task tracking, among other things.

I probably need to explain why I am so down on TFS's lack of macros.  In one project we used this feature to record the version number of every database object in a table.  The create scripts were separated into distinct files, and each called a UDF that was supplied a string populated by macro expansion with the filename and version.  This was really useful if you had a backup of a production database – upon a restore you could immediately see how old the database objects were.  This is not easy to implement without macro support.

VCS Nightmare

One place that I worked had an overseas development team that had entirely moved in from application support.  One application that they developed had an interesting naming conventions for sprocs. They would embed the version in the name: usp_MyProc_12_3.  This would be OK if they had bothered to replace all references but on a subsequent analysis of the database we found upto 5 different versions of the same proc in active use.  In addition they had been creating these customizations on each of three customer sites and did not always remember to bring the changes back so that it could be configured centrally.  It took us a while to recover the full source for that!