Of late I have found myself using the older unix tools more and more frequently (on windows…). They tend to solve the problem at hand in a minimal fashion (and without artificial limits or attempts to be too helpful).
For example there is awk. This solves the same kind of parsing problems that excel does with its text-to-columns option. awk is designed to operate on tabular data and perform repetitive operations on it.
This can effectively provide queries and transforms over, say, a CSV file, much as XSLT does for XML.
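A minimal sketch of the kind of one-liner I mean (the sample data is made up for illustration):

```shell
# Feed some colon-separated records to awk;
# -F: sets the field separator, $2 is the second field.
printf 'alice:accounts:1001\nbob:it:1002\n' |
awk -F: '{ print $2 }'
# → accounts
# → it
```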
The above takes a file separated by colons and prints the second column.
I have also been experimenting with the vim editor.
I started using vim a while ago when my eeepc linux netbook did not have emacs installed (and I was without an internet connection to get it).
I found it very useful when I was presented with a 2MB xml document that I needed to parse. Opening an xml document in VS2010 is fine provided that the document is not large and not all on a single line. The VS2010 editor tried to be clever and do graphical things with the document – but this took all of the processing power of the machine (which is itself impressive, since this was my quad core work desktop). vim, however, opened the file painlessly and permitted searches, which is all I really needed. I even found a vim script that allows the insertion of a newline after a given pattern. This allowed me to turn the monster xml into something that VS2010 could open without hanging.
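I no longer have the exact script to hand, but vim's built-in substitute command gets the same effect; the pattern here is an assumption (break the line after every `>`):

```vim
" Insert a newline (\r in the replacement text) after each '>',
" turning a single-line XML document into one tag per line.
:%s/>/>\r/g
```

Note that in a vim replacement string the newline is written `\r`, not `\n`.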
Subsequently I have been combining vim and powershell. Powershell can be started on UNC paths, and vim can be started from within powershell. This combination overcomes the historical dos limitation of forcing you to map a drive letter if you want a command prompt on a network share.
This may look like unix but is actually powershell:
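For instance (the server and file names are placeholders of my own):

```powershell
# Each of these unix-looking commands is an alias for a powershell cmdlet:
cd \\server\share   # Set-Location -- works on a UNC path, no drive letter needed
ls *.xml            # Get-ChildItem
cat notes.txt       # Get-Content
ps                  # Get-Process
```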