In my opinion, the use of version control is what defines the professional programmer.
It is one of the standard questions that I ask a prospective employer as part of the interview process (remember, you are interviewing them as much as they are interviewing you).
I was listening to a recent DNR show that covered Team System. In the show there was a discussion about version control systems and the use of branching.
Richard made a comment along the lines that people should keep their branches alive for as short a time as possible before merging.
I have been using version control in various forms for a number of years and have only had a single project where we needed to merge. The majority of branches were long-lived (although they changed little from their branch point).
Branches, in my experience, are best used to provide bug fixes for a released version of a program when the current development version has moved on (and may not yet be sufficiently feature-complete or tested to release as a replacement). New development should be performed on the “tip” of the branch tree. A checkpoint was taken when a build was released, and a branch was only started when the first bug fix was required. We would typically have five “live” branches per product. New users would get the latest release, but some customers were reluctant to upgrade (especially Japanese banks).
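To make that policy concrete, here is a minimal Python sketch of the model (the class and method names are mine, purely for illustration): development lands on the tip, a release is a checkpoint of the tip, and a branch only comes into existence the first time that release needs a fix.

```python
# A toy model of the maintenance-branch policy described above (names are
# illustrative only). New work goes on the tip; a release only grows a
# branch the first time it needs a bug fix.

class Product:
    def __init__(self):
        self.tip = []                 # ongoing development (the "tip")
        self.releases = {}            # version -> checkpoint (frozen tip copy)
        self.branches = {}            # version -> list of bug-fix changes

    def commit(self, change):
        self.tip.append(change)       # all new development lands here

    def release(self, version):
        self.releases[version] = list(self.tip)   # checkpoint at release time

    def bug_fix(self, version, fix):
        # The branch is created lazily, only when the first fix is needed.
        self.branches.setdefault(version, []).append(fix)

    def live_branches(self):
        return sorted(self.branches)  # typically around five per product

p = Product()
p.commit("feature 1"); p.release("1.0")
p.commit("feature 2"); p.release("2.0")
p.bug_fix("1.0", "fix crash for customers still on 1.0")
print(p.live_branches())              # ['1.0']
```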
The only time that I worked on a development project that required branching was when we had two teams of developers trying to add significant, distinct features to the same code-base.
Each team completed its development and then a three-way merge was performed across the three branches (original, Team A and Team B).
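As a rough illustration of what that involved, here is a minimal Python sketch of the file-level decision a three-way merge makes (the function is hypothetical; the actual merge was done with a merge tool, which also merges within files): a side's change is taken automatically only when the other side left the file at its original content.

```python
# A minimal sketch of file-level three-way merge logic (hypothetical helper,
# not the tool we actually used). Each "tree" is a dict of path -> content.

def three_way_merge(base, team_a, team_b):
    """Merge two branches against their common ancestor, file by file."""
    merged, conflicts = {}, []
    for path in set(base) | set(team_a) | set(team_b):
        o = base.get(path)
        a = team_a.get(path, o)
        b = team_b.get(path, o)
        if a == b:                 # both sides agree (or neither touched it)
            result = a
        elif a == o:               # only Team B changed the file
            result = b
        elif b == o:               # only Team A changed the file
            result = a
        else:                      # both changed it differently: needs a
            conflicts.append(path) # content-level merge by hand or tool
            result = None
        if result is not None:
            merged[path] = result
    return merged, conflicts

if __name__ == "__main__":
    base   = {"app.c": "v1", "util.c": "v1"}
    team_a = {"app.c": "v1+featureA", "util.c": "v1"}
    team_b = {"app.c": "v1+featureB", "util.c": "v1+fix"}
    merged, conflicts = three_way_merge(base, team_a, team_b)
    print(merged)     # util.c merges cleanly from Team B
    print(conflicts)  # app.c needs manual attention
```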
Each of the version control systems that I have worked with has its own quirks.
I have worked in some places that just had a directory structure that was backed up weekly. We used Beyond Compare to merge changes into and out of the shared source.
rcs is the simplest that I have used: a command-line tool to check files in and out.
mks source integrity added a GUI on top of the rcs model. It had a useful (yet slow) concept of a checkpoint, which formed an immovable label; you used checkpoints as the jumping-off point for a branch. Since the storage was a file share it was possible to fix up errors easily, and it was also possible to move no-longer-used code away to an archive yet keep its history. Labels could be slow to apply to a large codebase (one I worked on was ten years old).
pvcs has a nice concept of promotion groups. Each file would have a promotion hierarchy (such as DEV, INT, BUILD, QA, RELEASE). These were effectively labels that had to be applied to revisions in ascending order (they could all be on the same revision). The idea is that you were free to check anything in at the DEV level – it carried no guarantee even that it would build. INT was for passing to other developers, BUILD indicated that it at least compiled and satisfied rudimentary tests, QA was the next version to be released and RELEASE was production code. You could check out all source at, say, BUILD and then check out the files locked to you at DEV over the top. This is a good way of checking that your code will not break the build before you promote it (to INT and then BUILD), and it encourages frequent check-ins and integrations. You could easily replace the built-in compare and merge tools (my preference is Beyond Compare from Scooter Software).
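Here is a toy Python model of the idea, assuming nothing about pvcs's actual commands or storage – it just shows labels that only ever move up the hierarchy, plus the “get BUILD, overlay my DEV files” trick described above.

```python
# A toy model of promotion groups (illustration only – not the pvcs API).

LEVELS = ["DEV", "INT", "BUILD", "QA", "RELEASE"]

class ControlledFile:
    def __init__(self, name):
        self.name = name
        self.revisions = []        # revision contents, oldest first
        self.labels = {}           # promotion level -> revision index

    def check_in(self, content):
        """Check-ins always land at DEV – no guarantee they even build."""
        self.revisions.append(content)
        self.labels["DEV"] = len(self.revisions) - 1

    def promote(self, level):
        """Copy the label up from the level immediately below it."""
        i = LEVELS.index(level)
        if i == 0:
            raise ValueError("check_in already labels DEV")
        below = LEVELS[i - 1]
        if below not in self.labels:
            raise ValueError(f"{self.name}: nothing at {below} to promote")
        self.labels[level] = self.labels[below]

    def get(self, level):
        return self.revisions[self.labels[level]]

def workspace(files, locked_by_me):
    """'Get everything at BUILD, then your own DEV check-outs over the top.'"""
    return {f.name: f.get("DEV") if f.name in locked_by_me else f.get("BUILD")
            for f in files}

f = ControlledFile("billing.c")
f.check_in("v1"); f.promote("INT"); f.promote("BUILD")
f.check_in("v2 - my half-done change")            # DEV moves on, BUILD stays at v1
print(workspace([f], locked_by_me={"billing.c"}))  # my DEV copy over BUILD
```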
starteam just seemed to work, although its integration with external tools was limited. It had nice features such as being able to get the source as it existed at a specified time.
svn is simple to use. It integrates well into the Windows desktop via TortoiseSVN, and you can even use SvnBridge to store files in a TFS backend while working with svn clients. It was one of the earliest vcs systems to perform atomic commits – that is, a group of files must all commit together or the whole commit fails.
TFS is a bit different to other version control systems (this is probably because it is relatively new – hopefully the nightmare that was SourceSafe has been forgotten). Firstly, it is the only vcs that actually remembers where the source has been checked out to (and objects if you move it without using the IDE). It is highly integrated into Visual Studio (unless you use SvnBridge), which can make files that Visual Studio does not touch harder to manage (it cannot see changes to files copied at the file-system level). It is also an atomic-commit vcs, packaging code into changesets. It is the only vcs I know of that does not allow pulling by label (the excuses for this are rather weak – I don’t care that a label can be non-unique or moved; make a decision and pick the latest file version). It is also lacking in macro support, a very useful feature found in almost all older systems – macros allow you to include the version number, labels, committer and timestamps from the check-in in the file when it is checked out. It can also be a little odd when you ask it to “get latest” from within the IDE – it frequently fails to add new files. It will ignore edited files when the latest source is grabbed, which is good since you don’t get your work clobbered by mistake. It does have good integration support.
To be fair to TFS, it is more than a vcs (though that is true of several of the others I have used) – it has an associated SharePoint site for task tracking, among other things.
I probably need to explain why I am so down on TFS’s lack of macros. On one project we used this feature to record the version number of each and every database object in a table. The create scripts were separated into distinct files, and I used a udf that was supplied a string populated with the filename and version by the macro expansion. This was really useful if you had a backup of a production database – upon a restore you could immediately see how old the database objects were. This is not easy to implement without macro support.
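Roughly, this is what keyword (macro) expansion buys you. The sketch below uses Python stand-ins for the real pieces: the $Workfile$/$Revision$ keyword names are the usual older-vcs style, and register_version plus the dictionary are hypothetical substitutes for the SQL udf and table we actually used.

```python
# A rough sketch of vcs "macro" (keyword) expansion and how we exploited it.
# The keyword names and register_version are illustrative stand-ins.
import re

def expand_keywords(text, workfile, revision, author, timestamp):
    """What an older vcs does on check-out: substitute values into keywords."""
    values = {"Workfile": workfile, "Revision": revision,
              "Author": author, "Date": timestamp}
    return re.sub(r"\$(\w+)\$",
                  lambda m: f"${m.group(1)}: {values.get(m.group(1), '')}$",
                  text)

# The checked-in create script carries unexpanded keywords...
script_in_vcs = "-- $Workfile$ $Revision$\nCREATE PROCEDURE usp_MyProc AS ...\n"

# ...and the checked-out copy carries the real values, which the script can
# hand to something like a register-version udf to log into a versions table.
checked_out = expand_keywords(script_in_vcs, "usp_MyProc.sql", "1.7",
                              "phil", "2008-03-01")

versions_table = {}   # stand-in for the database table the udf populated
def register_version(header_line):
    name = re.search(r"\$Workfile: (.+?)\$", header_line).group(1)
    rev = re.search(r"\$Revision: (.+?)\$", header_line).group(1)
    versions_table[name] = rev

register_version(checked_out.splitlines()[0])
print(versions_table)   # {'usp_MyProc.sql': '1.7'}
```

Restore a backup, query the table, and you can see at a glance which revision of every object that database was built from.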
VCS Nightmare
One place that I worked had an overseas development team that had moved across entirely from application support. One application that they developed had an interesting naming convention for sprocs: they would embed the version in the name, e.g. usp_MyProc_12_3. This would have been OK if they had bothered to replace all references, but on a subsequent analysis of the database we found up to 5 different versions of the same proc in active use. In addition, they had been creating these customizations on each of three customer sites and did not always remember to bring the changes back so that they could be configured centrally. It took us a while to recover the full source for that!