I can’t claim credit for this: I saw it yesterday whilst receiving a demo of a (very good) messaging architecture written by another developer.  It’s only a little scrap of code, but that makes it even better; its beauty is its simplicity.  So, to get parameters out of some parameter-holding class without having to (a) make everything the same type, or (b) write N methods for retrieving N types, you can do this:

    public class Params
    {
        Dictionary<string, object> _values = new Dictionary<string, object>();

        public T GetParam<T>(string name, T defaultValue)
        {
            if (_values.ContainsKey(name) == false)
            {
                return defaultValue;
            }
            return (T)_values[name];
        }
    }

Which you then call like this:

            Params p = new Params();
            int key1 = p.GetParam("key1", 39);
            string key2 = p.GetParam("key2", "defaultValue");
            double key3 = p.GetParam("key3", 44.2);
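For comparison, the same trick is nearly free in Python, since `dict.get` already takes a default. A rough sketch (the class and method names here are mine, not from the original code):

```python
from typing import Any, Dict, TypeVar

T = TypeVar("T")

class Params:
    """Holds named parameters of mixed types; dict.get supplies the default."""

    def __init__(self) -> None:
        self._values: Dict[str, Any] = {}

    def set_param(self, name: str, value: Any) -> None:
        self._values[name] = value

    def get_param(self, name: str, default: T) -> T:
        # As in the C# version, the stored value is assumed to match T.
        return self._values.get(name, default)

p = Params()
p.set_param("key1", 7)
print(p.get_param("key1", 39))         # 7
print(p.get_param("key2", "default"))  # default
```

The cast that C# needs, `(T)_values[name]`, simply disappears here because Python is dynamically typed; the type hints just document the intent.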

Programming Fonts

October 3, 2008

Is Courier New not good enough for you?  Give your IDE a new lease of life with one of these programming-specific fonts.  I’m currently testing Proggy Tiny, in 11 point size. 

http://keithdevens.com/wiki/ProgrammerFonts

Joining a table on itself

October 3, 2008

select  RTRIM(LTRIM(rl_1.ric)) 'Identifier',
        case
            when (rl_1.code like '%46%') THEN 'True'
            when (rl_1.code like '%47%') THEN 'True'
            ELSE 'False' END 'D1',
        case
            when (rl_1.hard_restriction = 1 AND rl_1.hard_to_borrow = 0) THEN 'Restricted'
            else '' end 'EventType'
from restricted_list rl_1
    inner join (select ric, max(entry_date) as MaxEntryDate
                from restricted_list
                group by ric) rl_2
        on  rl_1.ric = rl_2.ric
        and rl_1.entry_date = rl_2.MaxEntryDate
order by rl_1.ric
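The "latest row per key" shape of this query can be sanity-checked with Python's standard-library sqlite3 module. Everything below, including the sample rows and dates, is invented for illustration; only the join shape mirrors the query above.

```python
import sqlite3

# Invented sample data: two BMWG.DE rows, of which only the later one
# should survive the "max entry_date per ric" join.
conn = sqlite3.connect(":memory:")
conn.execute("create table restricted_list (ric text, code text, entry_date text)")
conn.executemany(
    "insert into restricted_list values (?, ?, ?)",
    [("BMWG.DE", "46", "2008-01-01"),
     ("BMWG.DE", "99", "2008-06-01"),   # latest BMWG.DE row
     ("CONG.DE", "47", "2008-03-01")],
)

# Join the table to itself: the derived table finds the latest entry_date
# per ric, and the join keeps only the full rows that match it.
rows = conn.execute("""
    select rl_1.ric, rl_1.code
    from restricted_list rl_1
        inner join (select ric, max(entry_date) as MaxEntryDate
                    from restricted_list
                    group by ric) rl_2
            on  rl_1.ric = rl_2.ric
            and rl_1.entry_date = rl_2.MaxEntryDate
    order by rl_1.ric
""").fetchall()
print(rows)  # [('BMWG.DE', '99'), ('CONG.DE', '47')]
```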

Interviewing

September 11, 2008

I’ve been heavily involved in interviewing recently, both as a candidate and offering advice from the “other side of the fence”.  I’ve been spending a lot of time working out in my head exactly what the best way of interviewing is.  Below is a summary of what I’ve come up with so far, with each point to be extended in later posts.

  • Decide exactly what you’re hiring for, the role and the tasks to be performed by that role. 
  • Test for aptitude, not specific knowledge.
  • Questions need to be fair: if you’re going to ask for answers to very specific things, then consider sending the candidate an outline or “reading list” before the interview. 
  • Remember that a good team has a mix of people, skills and backgrounds.
  • Expect no more or less of your candidate than you expect of yourself – that includes turning up on time, a certain level of dress, the ability to answer questions, particular technical knowledge, etc., etc.

More to come.

Whilst paging through a few tech blogs yesterday, I came upon the following code fragment:

if(a ? b : c)
{
  //  do something
}

nasty!
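If the fragment can't simply be rewritten away, naming the ternary at least makes the intent visible. A Python sketch with placeholder variables standing in for the original a, b and c:

```python
# Placeholder values standing in for the original a, b, c.
a, b, c = True, False, True

# The opaque original shape was: if (a ? b : c) { ... }
# Giving the condition a name makes the two cases explicit:
condition = b if a else c  # i.e. "b when a holds, otherwise c"
if condition:
    print("do something")
```

With a = True, the condition resolves to b, so nothing is printed here; the point is only that a named intermediate reads far better than a bare ternary inside an if.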

Another useful bit of SQL

August 27, 2008

The problem was to take this data

RIC      Visibility
BMWG.DE  1
BMWG.DE  0
CONG.DE  0

and return a distinct list of RICs, each with its lowest possible Visibility value. 
The SQL needed was

select Ric, min(Visibility) as Visibility
from Table
group by Ric
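A quick way to sanity-check the query is to run it against an in-memory SQLite database with the post's sample data. Here sqlite3 stands in for the real server, and the grouped min is one way to express "lowest Visibility per RIC":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (Ric text, Visibility integer)")
# The sample data from the post: BMWG.DE appears twice.
conn.executemany("insert into t values (?, ?)",
                 [("BMWG.DE", 1), ("BMWG.DE", 0), ("CONG.DE", 0)])

# One row per RIC, keeping the minimum Visibility for each.
rows = conn.execute(
    "select Ric, min(Visibility) from t group by Ric order by Ric"
).fetchall()
print(rows)  # [('BMWG.DE', 0), ('CONG.DE', 0)]
```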

Another quick tip: I used to get this functionality for free from an SQL plugin for Emacs years ago.  I wanted it again and had a quick think about how it must have been done; it’s trivial really, but I don’t have a huge background with SQL transactions.  Anyway, the snippet below wraps your update statement in a transaction so that it only takes effect if it touches one row (or as many rows as you specify), and rolls back otherwise.  It fixes the worst nightmare: seeing “250,000 rows affected” when you were expecting “1 row affected”, in a production system, with no backup.

begin tran x

update Table set Value = 'NewValue' where Name = 'Key'

if @@rowcount > 1
begin
    print 'error'
    rollback tran x
end
else
begin
    commit tran x
end
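The same guard translates directly to other databases. Below is a sketch in Python using the standard-library sqlite3 module as a stand-in (the table, data and helper name are mine): run the update, check the cursor's rowcount, and roll back if more rows were touched than expected.

```python
import sqlite3

def guarded_update(conn, sql, params, max_rows=1):
    """Run an UPDATE, rolling back if it affects more than max_rows rows."""
    cur = conn.execute(sql, params)
    if cur.rowcount > max_rows:
        conn.rollback()
        return False
    conn.commit()
    return True

conn = sqlite3.connect(":memory:")
conn.execute("create table t (Name text, Value text)")
conn.executemany("insert into t values (?, ?)",
                 [("Key", "a"), ("Key", "b"), ("Other", "c")])
conn.commit()

# Two rows match Name = 'Key', so this update is rolled back.
ok = guarded_update(conn, "update t set Value = 'NewValue' where Name = ?", ("Key",))
print(ok)  # False
```

The T-SQL version above does the same thing with @@rowcount; the only real difference is that sqlite3 starts the transaction implicitly on the first DML statement.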

I have been an enthusiastic user of reddit.com for nearly 2 years now.  I loved its simplicity, that it eschewed almost all of the user interface garbage that Digg suffered, and that it was filled with high-quality links.

Recently, as you may know, the new “2.0” version of reddit was released.  This incorporated many new features, and several changes to make the UI more AJAX-y.  Except they’ve broken it.  First of all, everything moved.  Whereas I used to know where everything in the UI was (and it only had about 6 features), things are now hidden under sub-menus for god knows what reason, items have been moved around, and nobody thought to add a “make it look like old reddit” option.  Surely that should have been the first item on the list?  The interface has also gone from being single-colour (blue-on-white) with a small logo to looking like a much more heavily CSS-stylised website; I liked that I could have the old version on my screen without it drawing attention to itself.

The biggest problem for me, though, is the breakage.  I never used to get server errors, submission problems, or timeouts.  Right now, and for the last 20 minutes, reddit has been down; I can’t even get the front page to load.  The site is still there, as it’s giving me an error message (“An error occurred while processing your request.  Reference #97.2cf50508.1212710545.26217039” – catchy) but no headlines, no links, no submissions, nada.  The very essence of reddit taken away.  I’m not annoyed because the site is down – these things happen – I’m annoyed because I strongly suspect it’s down to the new “reddit 2.0”, and I can’t figure out where the motivation for the massive changes came from.  Why change something that works?  My way of interacting with reddit didn’t need to change, so why change things just for the sake of it?

I think it might be time to look for a new, simpler news aggregator.

This started off as a blog-comment to this article, but then grew large enough to merit its own post.

These “this makes a good programmer” articles are very simplistic.  If you’re writing code that will “live” a long time, and will be worked on by many programmers long after you have left, then it’s no good “having the chops to bang out killer features” if the code you end up with is unmaintainable.  Conversely, if you’re working on the codebase for, say, an arcade game, then the ability to carefully analyse, document, implement and debug your code will count for nothing if the result isn’t fast enough.

I like to imagine the difference between John Carmack and the guys who write the Shuttle code for NASA (Nasa article here http://www.fastcompany.com/magazine/06/writestuff.html, read ‘Masters of Doom’ for an overview of Carmack). For Carmack, the ability to implement features that almost nobody else could even think of was/is his trademark. And the ability to implement them faster than anyone else was his golden ticket to fame and fortune. However, Carmack essentially threw away his code at the end of each game and started again. For the NASA guys, the code they write absolutely definitely has to work 100% all of the time, without exception. They spend huge amounts of time, and money, trying to ensure that no bugs slip into the final release. They will chase those bugs down to the detriment of speed, of “sexiness”, of “that’s a cool idea”, of “well I think my way of doing it is better”. They produce the 99.99% bug-free software, the type that Carmack doesn’t aim for. Carmack produces the 99.99% performance software, that the NASA guys don’t aim for.

Despite these completely different approaches, is anyone going to argue that Carmack is not a good programmer? Is anyone going to argue that the NASA guys (and girls) are not good programmers? No, but we can argue that “programmer” as a catch-all term is too imprecise. What makes one “programmer” good in one context does not necessarily make them good in a different context.

Optimising TortoiseSVN

December 28, 2006

As noted in other posts, I am currently using Subversion for source code control, with TortoiseSVN as the “GUI” client. Recently I’ve been having big performance problems on my machine, particularly with Windows Explorer. Getting rid of most of the network drives that I’ve added helped quite a bit, and then I turned my attention to TortoiseSVN.

Our SVN repository is located in New York, whilst we are in London, and the network between the two sites is not great. For example, when we moved offices our new network was only 10Mbit/s rather than 100Mbit/s. Gigabit? Tish and pshaw! Combine that with a 436MB project and SVN can sometimes crawl.

I began digging into the TSVNCache.exe process. TSVNCache determines which icons should be displayed in Windows Explorer, indicating modifications, conflicts, etc. I then read this article about possible optimisations. By switching on the TSVNCacheWindow I could see that tens of thousands of directories were being cached.

The first thing was to remove branches that were no longer being used. Despite having no modifications for several months, they were being repeatedly indexed by TSVNCache.

Next was to specify the directories that I wanted icon overlays for. Into the “Include Paths” I added my C:\dev\trunk and C:\dev\Branches\ directories, with “*” after each of them so as to get recursive info. Into the “Exclude Paths” I added C:\*, thereby excluding everything else.

Performance was now significantly better, but I wasn’t satisfied. The final act was to switch on “Show overlays only in explorer”, which disables the overlays in File Open dialogs and other non-Explorer windows. This was an area where I had had particular problems.

The machine is now lightning-fast, far better than it had become and much more like the dual-core, dual-socket 3.2GHz Xeon with 2GB of RAM that it’s supposed to be.

My settings are shown here.