Pros and Cons of Stored Procedures

Recently I have been a heavy user of stored procedures.
There is a big debate going on about the benefits, or otherwise, of stored procedures.
I would not argue that they are faster than inline SQL (these days that is a moot point).

Pro
Individual stored procedures are easy to replace, which provides many points for system expansion and correction.
There are great security benefits: an app that exclusively accesses a database via
stored procedures can have much tighter control over its data access.
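
As a sketch of that security benefit (the table, procedure and role names here are all hypothetical), direct table access can be denied and only EXECUTE granted, so callers can do no more than the procedure allows:

```sql
-- Hypothetical schema: the app role can look a customer up,
-- but cannot read or modify the Customers table directly.
CREATE PROCEDURE GetCustomerByID
    @CustomerID INT
AS
BEGIN
    SELECT CustomerID, Name, Email
    FROM Customers
    WHERE CustomerID = @CustomerID;
END;
GO

DENY SELECT, INSERT, UPDATE, DELETE ON Customers TO AppRole;
GRANT EXECUTE ON GetCustomerByID TO AppRole;
```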

Con
Not portable between database vendors.
Sometimes limited error handling.
No support for constants.

What is three tier design

Recently I heard the preposterous claim that a three tier application that uses stored procedures is not really three tier.

A three tier application splits its work into distinct tiers, for example:

1. Presentation Layer
2. Business Layer
3. Database layer

While this is harder to set up initially, it does provide a range of deployment, security, development and maintenance benefits.

Just because the database layer uses stored procedures does not in any way invalidate this model.

For deployment, with a correct design it is possible to require detailed configuration only once, on the server, as opposed to on every client.

For security, there are more opportunities for security in depth (you only need to grant database access to the Business Layer or Database Layer, not to every user).

For development, there are more incentives to reuse code, since the layers provide some decoupling. In a two tier application there is far more room for code duplication. The layers also provide a point for performance tuning.

For maintenance, having distinct layers means that a patch can be better isolated:
if you don’t change the interface, only the single changed component needs to be deployed.
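
A minimal C# sketch of that point (all names here are invented for illustration): as long as the interface between the layers stays stable, the database layer can be swapped, for example for a stored-procedure-based version, without redeploying the other tiers.

```csharp
public class Customer
{
    public int Id;
    public string Name;
}

// Hypothetical contract between the business and database layers.
public interface ICustomerStore
{
    Customer GetById(int customerId);
}

// Database layer: this implementation could be replaced by a
// stored-procedure-based one without touching the tiers above.
public class SqlCustomerStore : ICustomerStore
{
    public Customer GetById(int customerId)
    {
        // ... ADO.NET / stored procedure call would go here ...
        Customer c = new Customer();
        c.Id = customerId;
        return c;
    }
}

// Business layer: depends only on the interface, not the implementation.
public class CustomerService
{
    private readonly ICustomerStore store;

    public CustomerService(ICustomerStore store)
    {
        this.store = store;
    }

    public Customer Lookup(int id)
    {
        return store.GetById(id);
    }
}
```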

Very useful .NET debugging aid

The following can be very useful when debugging a service or any other .NET app.

At an appropriate point in your code, add:

if ( !System.Diagnostics.Debugger.IsAttached )
{
    // Prompt for a debugger and attach it, then stop at this point.
    System.Diagnostics.Debugger.Launch();
    System.Diagnostics.Debugger.Break();
}

It will offer a choice of suitable debuggers, including an IDE that you already have open on the code.

Wireless at last

I have finally managed to get my laptop to work wirelessly!
When I set up the Belkin router I had switched off wifi as I did not have any wifi devices at the time.
Subsequently I had been trying to work out why the laptop could not see the network.
I have successfully configured and secured other people’s wifi networks, so it was getting embarrassing.

Model Driven Development: a Microsoft Definition

This is an episode of dotnetrocks that deals with Model Driven Development.
It defines MDD as an application that is code generated from a given definition file, or that interprets a data file at runtime. It seems to me that MS has finally picked up on the ideas expressed in the language TCL and is merely adding a user-friendly GUI to link them together.

Chris Sells did admit that MS did not invent MDD; they are just now trying to bring it to the fore.

MDD really is the practice of using meta-data where appropriate.
Delphi has had this for years in the separation of a form into its .dfm and .pas components.
It has been possible to change a Delphi .dfm to add static components without touching the code.
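
For example, a .dfm is just a declarative description of the form, which can be edited without recompiling the .pas unit (the component names below are invented):

```
object Form1: TForm1
  Caption = 'Example Form'
  object Label1: TLabel
    Caption = 'A static label added by editing only the .dfm'
  end
end
```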

Applying Working Effectively With Legacy Code

Having recently read “Working Effectively With Legacy Code” (WEWLC), my work provided me the opportunity to put it into practice. I had been handed a small program to maintain: it had a few issues that needed fixing. The product is written in VB.NET and uses a mixture of module level functions and classes to achieve its aims. There is no clear design or documentation, and it relies upon COM calls to an external API of a product that I do not have a licence for (and it would be expensive to obtain). It had no unit test coverage. By all definitions this is legacy code.

To be fair, the application did have a set of wrapper functions that were used to access the API.
The first stage was to put these methods into a class and use “Lean On The Compiler” to find where to create the class. Having done so, I was able to add some methods to the class so that it provided a better abstraction from the COM API (the same three COM calls were being made in sequence across the app). Once I had these I was able to remove a fair amount of duplication.

Having extracted a class that accessed the COM API (which I could not yet unit test), I was able to perform “Extract Interface”. Given that COM APIs are essentially collections of interfaces, they are very easy to mock (except that the interfaces are excessively wide). By mocking one of the COM interfaces I was able to re-implement the extracted interface. This finally allowed me to get some of the application under NUnit.
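
The shape of the result, sketched here in C# rather than the app’s VB.NET, with every interface, class and method name invented: the wrapper class gathers the repeated three-call sequence in one place, and a hand-rolled fake of the narrow extracted interface is then enough to get it under NUnit without touching COM.

```csharp
using NUnit.Framework;

// Hypothetical narrow interface extracted from the wide COM API.
public interface IExternalApi
{
    void Connect();
    string Lookup(string key);
    void Disconnect();
}

// The former free-standing wrapper functions, gathered into a class;
// the three-call sequence now lives in exactly one place.
public class ApiGateway
{
    private readonly IExternalApi api;

    public ApiGateway(IExternalApi api)
    {
        this.api = api;
    }

    public string FetchItem(string key)
    {
        api.Connect();
        string result = api.Lookup(key);
        api.Disconnect();
        return result;
    }
}

// Hand-rolled fake: records calls instead of touching COM.
public class FakeExternalApi : IExternalApi
{
    public bool Connected;
    public void Connect() { Connected = true; }
    public string Lookup(string key) { return "value-for-" + key; }
    public void Disconnect() { Connected = false; }
}

[TestFixture]
public class ApiGatewayTests
{
    [Test]
    public void FetchItemLooksUpAndDisconnects()
    {
        FakeExternalApi fake = new FakeExternalApi();
        ApiGateway gateway = new ApiGateway(fake);

        Assert.AreEqual("value-for-42", gateway.FetchItem("42"));
        Assert.IsFalse(fake.Connected);
    }
}
```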

Eventually the bug turned out to be in the data access code: there was a remove method with an optional second parameter, which controls whether the item is also removed from the underlying database.

By using the techniques in WEWLC I was able to get an unwieldy application at least partially under unit test. However, I was not quite able to meet the stringent requirement that unit tests take less than 0.01 seconds each. In addition, the policy of not touching the database in a unit test meant that I was not able to find the given bug directly; I did, however, manage to prove that the method that called it was otherwise working.

I can see WEWLC becoming a classic along the lines of Refactoring.
It provides the answers to the questions that would prevent most people from attempting to get their code under a test harness, and thereby allows TDD to be used on non-greenfield projects.