Error Handling in SQL 2000 is depressing

The whole business of error handling in SQL Server 2000 stored procedures is depressing.

For example, if a query refers to a table whose structure is incorrect, there is no way of handling the resulting error from within the stored procedure.

It is rather inconsistent; the best that can be achieved is a compromise: you have to check @@ERROR after every step.
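As a sketch of that compromise (the table and column names here are hypothetical), the pattern looks like this:

```sql
-- Sketch of the check-@@ERROR-after-every-step pattern.
-- @@ERROR is reset by every statement, so it must be captured immediately.
CREATE PROCEDURE usp_ApplySurcharge AS
BEGIN
    DECLARE @err INT

    BEGIN TRANSACTION

    UPDATE Orders SET Total = Total * 1.1   -- hypothetical table and column
    SELECT @err = @@ERROR
    IF @err <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RAISERROR ('Update failed with error %d', 16, 1, @err)
        RETURN @err
    END

    COMMIT TRANSACTION
    RETURN 0
END
```

Every statement that can fail needs its own capture-and-check block like the one above, which is what makes the whole approach so tedious.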

I hope that SQL Server 2005  will be better once I am allowed to use it.

SQL Server Oddity and a Fix

Recently I found that @@SERVERNAME does not actually return the name of the machine it is running on.

It returns the name that was last set using sp_addserver 'NAME', 'local'.

This makes a difference if you are  creating an image of a machine as a master copy.
All of the copied machines will think that they are the master machine.

The following script should detect and repair incorrect server names:

SET NOCOUNT ON

DECLARE @OLDNAME VARCHAR(200)
DECLARE @NEWNAME VARCHAR(200)

SELECT @OLDNAME = @@SERVERNAME
SELECT @NEWNAME = CAST(SERVERPROPERTY('ServerName') AS VARCHAR(200))

SELECT @OLDNAME, @NEWNAME

IF @OLDNAME <> @NEWNAME
BEGIN
    PRINT 'Server name differs'
    IF @NEWNAME IS NOT NULL
    BEGIN
        IF @OLDNAME IS NOT NULL
        BEGIN
            PRINT 'Dropping server ' + @OLDNAME
            EXEC sp_dropserver @OLDNAME
            PRINT 'Dropped server ' + @OLDNAME
        END
        PRINT 'Adding server ' + @NEWNAME
        EXEC sp_addserver @NEWNAME, 'local'
        PRINT 'Added server ' + @NEWNAME
        PRINT 'Now stop and restart SQL Server'
    END
END

Old School HTTP: Impersonate a browser with Telnet

I have been having fun lately at work getting a public website hosted on a web server inside a firewall, sharing the same IP address and port as other websites.

The IIS configuration is outside the scope of this post (hint: the Host Header Name should match the website name, for example www.google.com).

Also outside the scope is setting up a firewall/router to map the external IP address to the internal server address.

What I am going to discuss is the use of Telnet to impersonate a web browser. This can be very useful to determine whether your website is actually working while you are just waiting for the DNS entries to be propagated.

I am going to use Google as an example (you would need to substitute your own site and IP address).

First I will find the IP address of Google (it may have changed since this article was written).

At a command prompt type:

ping www.google.com

This should respond with something like: Pinging www.google.com [64.233.167.99]

 

At a command prompt type:

telnet 64.233.167.99 80

GET / HTTP/1.1

Host: www.google.com

{Hit enter x3}

This should respond with some text.

It will start with HTTP/1.1 and a status code, and then be followed (after some header details) by the HTML body of your website.

If this works then you know that all you have to do is wait for the DNS records to be updated. 

Writing an OLE-DB data provider in C#

I am investigating the difficulty of writing an OLE-DB data provider in C#.

Here is a link to the simple OLE-DB provider.

Here is an article on writing custom .NET data providers.

I need to start this discussion with a justification of why I would want to do such a thing.

Of late I have been working on a very pure object-oriented project with a design goal of eliminating duplication.

We have a MVC architecture that works fine with all of the business logic being in the Model component.

The problem comes when we get to reporting. We have a requirement to use an external reporting tool, and this is the cause of the duplication. As is typical with reporting tools, it is often easier to code the report as a view than to put too much detail into the report itself (this reduces duplication between similar reports).

However, this has resulted in calculated fields being defined in both the views and the model. We can always write unit tests to ensure that the database calculations match the model calculations, but the result is that we load from views and save to tables, which is far from ideal.

A better approach (which I have used before in a limited form) would be to have the reports run against our model. In order to do this we need to expose our model components as an OLE-DB provider.

Previously I have placed a hook layer between the reporting tool and the database so that I can extend SQL with custom commands. This allows the model code (and various common formatting objects) to be used from the reports.

This has the benefit that when you change the format of a custom object, the change is automatically consistent across all of the reports. This is a major time saver on a large project.

VS2005 and version control

VS2005 oddity.

When configuring VS2005 solutions under version control you should not check in the binary .suo file.
However, this is the file that records which project is the default.
The only way around this that I have found is to move the project that you want to be the default to the top (well, the first non-directory entry) of the Project section of the .sln file. You also need to move it to the top of the postSolution section.
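As a sketch (the project names and project GUIDs here are invented; the project-type GUID is the standard C# one), the relevant parts of the .sln file look like this, with the desired default project listed first in both places:

```
Microsoft Visual Studio Solution File, Format Version 9.00
# Visual Studio 2005
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyDefaultApp", "MyDefaultApp\MyDefaultApp.csproj", "{11111111-1111-1111-1111-111111111111}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "OtherProject", "OtherProject\OtherProject.csproj", "{22222222-2222-2222-2222-222222222222}"
EndProject
Global
	GlobalSection(ProjectConfigurationPlatforms) = postSolution
		{11111111-1111-1111-1111-111111111111}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{22222222-2222-2222-2222-222222222222}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
	EndGlobalSection
EndGlobal
```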

There does not appear to be any way to control this in the IDE that is not recorded in the .suo file.

This is not documented anywhere I can find.

It appears that the default project is the first project added to the solution. 

Weird Excel Behaviour

The following is the explanation of a very odd Excel behaviour:

http://support.microsoft.com/kb/194124/EN-US/

The problem comes when using SQL Server stored procedures to read data in from an Excel spreadsheet.

The symptom is that Excel will guess the datatype of the columns. If it guesses wrongly you get NULL instead of the data.

I have a general rule of thumb about Excel: if you let it guess it will almost always guess wrong.

Why could Microsoft not have added extra connection parameters that allow the caller to specify the data types?

This would not break existing code and would save a lot of heartache.
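The KB article linked above does describe an IMEX setting that makes the driver treat mixed-data columns as text rather than guessing, although it still does not let you specify a type per column. A sketch of using it from T-SQL, with a hypothetical file path:

```sql
-- Sketch: reading an Excel sheet via OPENROWSET with IMEX=1.
-- The path and sheet name are hypothetical.
-- IMEX=1 tells the Jet driver to treat columns with mixed data as text
-- instead of guessing a type and returning NULL for values that do not fit.
SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\data\test.xls;IMEX=1',
                'SELECT * FROM [Sheet1$]')
```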

I have found a workaround:

Export the sheet to tsv and then reimport it.

This loses all of the formatting that is getting in the way.

SQL Server 2000 Documentation Bug

I have found out that this is how SQL Server's internal functions work.

This is one of very few places that I have found Books Online to be wrong.

Books Online can be infuriating in that even a search for the exact term required will return 50 articles above the one you actually need!

SQL-92 niladic function    Value returned
CURRENT_TIMESTAMP          Current date and time.
CURRENT_USER               Name of user performing insert.
SESSION_USER               Name of user performing insert.
SYSTEM_USER                Name of user performing insert.
USER                       Name of user performing insert.

Books Online is wrong here.
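A quick way to see what each of these functions actually returns on a given connection (the output depends on your login):

```sql
-- Run on any connection to compare what the niladic functions return.
-- The aliases are bracketed because some of these names are reserved words.
SELECT CURRENT_TIMESTAMP AS [now],
       CURRENT_USER      AS [current_user],
       SESSION_USER      AS [session_user],
       SYSTEM_USER       AS [system_user],
       USER              AS [user]
```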

Using OPENROWSET to query csv and tsv files

The following will read a csv file. 

SELECT * FROM OPENROWSET('MSDASQL', 'Driver={Microsoft Text Driver (*.txt; *.csv)};DBQ=c:\data;', 'SELECT * FROM test.csv');

The following will read a tsv file (with a txt extension).

SELECT * FROM OPENROWSET('MSDASQL', 'Driver={Microsoft Text Driver (*.txt; *.csv)};DBQ=c:\data;', 'SELECT * FROM test.txt');

Edgy Eft Ubuntu

Previously I have posted upgrade instructions each time a new release of Ubuntu has come out.

This time due to the problems in the Dapper -> Edgy upgrade I would recommend a clean install.

Over the next few days I will be posting some of the extra configuration details that I needed to get my Linux server back to life. This will include details of mounting a hard drive and configuring Samba to share it with the Windows network.