Book review - Software Craftsmanship

I'm in a small book club and the last book we read was Software Craftsmanship: The New Imperative by Pete McBreen. The main argument of the book is that the Software Engineering concept was designed to handle huge defence projects and is not suitable for most projects, since they are a lot smaller. The solution, according to Pete, is a system where Apprentices, Journeymen and Craftsmen make up the team. In many ways it would be a return to the medieval ways of craftsmen, where you would work for a craftsman to learn how it's done. He also argues on many occasions that it's better to hire a few great developers than a lot of decent ones; it's the first time I've seen that in print, but it's something one often talks about. While I don't think it's mentioned in the book, it seems to me that Pete is often describing XP along with some of the concepts from The Pragmatic Programmer. The concept of the book appeals to me, but at the same time I think it might be hard to apply in a consultant/contractor organisation, as it requires a stable team; for a product/project development/delivery team, however, it could be a good way to increase quality. Closing: might not be the best book review, but maybe the next one's better :)

Reading Firefox cookies for fun and profit

So I write a few small applications when I have the time, among others an HTML scraper. It had worked fine for a couple of months, until the site I was scraping decided to require the 'user' to be logged in to access the data I wanted. At first it didn't seem like a hard thing to do: just post the username and password to the login page and store the session cookie. But they had added some guards against this, a double-hashed password in JavaScript (with multiple salts), so that option quickly became more work than I wanted.

Since I'm a programmer I'm rather lazy, so I started to google Internet Explorer cookies (ah yes, the site offers the option to stay logged in forever), but accessing them seemed messy and I couldn't find the cookie I wanted on my box. Next try was a real browser, Firefox, and a second quick trip to Google gave me this nice page (see, Stack Overflow ftw). If you're like me (lazy): Firefox stores its cookies in a SQLite database, nice and easy to read. That solved my problem; all I needed to do was open the database, read the cookies I wanted and use an HttpWebRequest with a CookieContainer to download the page I wanted. So on to the fun part (Code!1!). First a small method to get the path of the cookie db:
        // Requires: using System; using System.IO;
        private static string GetFireFoxCookiePath()
        {
            // Firefox keeps its profiles under %APPDATA%\Mozilla\Firefox\Profiles
            var path = Environment.GetFolderPath(
                             Environment.SpecialFolder.ApplicationData);
            path += @"\Mozilla\Firefox\Profiles\";
            var di = new DirectoryInfo(path);

            // The default profile directory has a name ending in ".default"
            var dir = di.GetDirectories("*.default");
            if (dir.Length != 1)
                return string.Empty;

            path += dir[0].Name + @"\" + "cookies.sqlite";

            if (!File.Exists(path))
                return string.Empty;

            return path;
        }
Nothing hard here: get the path to the ApplicationData folder, find the profile directory whose name ends in .default, append that and \cookies.sqlite to the string, and we have the path to the cookie file. Then some code to read the data we want:
        // Requires the System.Data.SQLite ADO.NET provider.
        // The dictionary caches a container per host so we don't hit the database on every request.
        private static Dictionary<string, CookieContainer> CookieYar = new Dictionary<string, CookieContainer>();

        private static CookieContainer GetCookieContainerForHost(string url)
        {
            var uri = new Uri(url);
            // Strip "www." so we also match cookies stored for ".domain.com"
            var host = uri.Host.Replace("www.", "");

            if (CookieYar.ContainsKey(host))
                return CookieYar[host];

            var cc = new CookieContainer();
            using (var conn = new SQLiteConnection("Data Source=" + GetFireFoxCookiePath()))
            {
                using (var cmd = conn.CreateCommand())
                {
                    // Parameterized so odd host names can't break the query
                    cmd.CommandText = "select * from moz_cookies where host like @host;";
                    cmd.Parameters.AddWithValue("@host", "%" + host + "%");
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            cc.Add(new Cookie(reader["name"].ToString(), reader["value"].ToString(),
                                              reader["path"].ToString(), reader["host"].ToString()));
                        }
                    }
                }
            }
            CookieYar.Add(host, cc);

            return cc;
        }
The Replace at the start is to handle those cases where the cookie is set for .domain, since it seems to work that way (there might be an RFC somewhere covering this, but it seems to work). Ah yes, the dictionary is just to avoid database access all the time. The rest of the code is quite simple: check if we already have a CookieContainer for the host and in that case return it, otherwise open the database and read the cookies associated with that host. And something that uses it :)
        internal static string DownloadData(string urlToDownload)
        {
            var req = (HttpWebRequest) WebRequest.Create(urlToDownload);
            // Attach the cookies Firefox has stored for this host
            req.CookieContainer = GetCookieContainerForHost(urlToDownload);
            using (var response = (HttpWebResponse) req.GetResponse())
            using (var strm = new StreamReader(response.GetResponseStream()))
            {
                return strm.ReadToEnd();
            }
        }
The only thing I've done here is to set the CookieContainer on the HttpWebRequest to a container that has all the cookies Firefox has stored for the host I'm downloading data from. Yes, the code has no error handling at all; it does in my app, but since I wanted something to share I removed those bits, and it needs some refactoring too =) Code (with example): cookiereader
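For completeness, a minimal usage sketch, assuming the three methods above live in the same class; the URL is just a made-up placeholder:
        // Hypothetical usage - the URL below is a placeholder, not a real site
        public static void Main()
        {
            string html = DownloadData("http://www.example.com/members/data");
            Console.WriteLine(html);
        }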

Killmail parser in C#

In EVE, when a player is killed in PVP, a killmail is created; the EVElopedia wiki has a nice page about killmails. A killmail looks something like this:
2009.05.11 02:37

Victim: Noob
Corp: Random Corp
Alliance: Random Alliance
Faction: NONE
Destroyed: Crow
System: Amarr
Security: 0.0
Damage Taken: 595

Involved parties:

Name: Killer (laid the final blow)
Security: 2.6
Corp: NONE
Alliance: NONE
Faction: NONE
Ship: Sabre
Weapon: 200mm AutoCannon II
Damage Done: 595

Destroyed items:

TE-2100 Standard Missile Bay
TE-2100 Standard Missile Bay
Bloodclaw Light Missile, Qty: 46
Warp Scrambler II
Overdrive Injector System II
Overdrive Injector System II

Dropped items:

Bloodclaw Light Missile, Qty: 92
TE-2100 Standard Missile Bay
1MN MicroWarpdrive I
Warp Disruptor II
Beta Reactor Control: Capacitor Power Relay I
Bloodclaw Light Missile, Qty: 150 (Cargo)
So, a bit of regular text with a rather simple formatting. Now on to the fun part: parsing! Usually I use regex to parse text, it's easy and fun, but for this it seemed a bit overkill since the format of the data is rather static; the only thing that might change is the number of 'nodes' under Involved parties. To make the whole thing simpler, for my current purposes I don't need the destroyed/dropped items, only the victim and the involved players. So I decided to go with the simplest solution: a StringReader. I started building a small class to handle the parsing for me, I even did it the TDD way; it's a rather simple flow. Load the text, read it line by line and create objects that can be returned as a Kill object containing the parsed killmail. So the only public method in my parser class looks like this:
        public Kill ParseKillMail(string killMail)
        {
            StringReader reader = new StringReader(killMail);
            Kill kill = new Kill();

            try
            {
                kill.IncidentTime = GetIncidentTime(reader);
                MoveReader(reader, 1);   // skip the blank line after the timestamp
                kill.Victim = GetVictim(reader);
                MoveReader(reader, 3);   // skip down past "Involved parties:"
                GetInvolvedPlayers(kill, reader, kill.Victim.SystemSeen);
            }
            catch (Exception)
            {
                // Anything unexpected in the text just flags the kill as incomplete
                kill.IsComplete = false;
            }
            return kill;
        }
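The Kill and Player types aren't shown in the post; they're simple data holders, roughly like this (my sketch, with property names taken from the snippets; the real versions are in the zip):
    // Sketch of the data holders used by the parser
    public class Kill
    {
        public Kill()
        {
            Involved = new List<Player>();
            IsComplete = true;   // assumed default; set to false on parse errors
        }

        public DateTime IncidentTime { get; set; }
        public Player Victim { get; set; }
        public List<Player> Involved { get; private set; }
        public bool IsComplete { get; set; }
    }

    public class Player
    {
        public string Name { get; set; }
        public string Corporation { get; set; }
        public string Alliance { get; set; }
        public string Faction { get; set; }
        public string Ship { get; set; }
        public string SystemSeen { get; set; }
    }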
The GetIncidentTime method looks almost the same as the one in the previous post on Twitter date parsing, and MoveReader just moves the reader X lines, handy since we do that a lot (sketches of these helpers follow after GetVictim). The GetVictim method reads the first large block of text and converts it into a Player object:
        private Player GetVictim(StringReader reader)
        {
            Player p = new Player();
            // The victim block is always 8 lines (Victim through Damage Taken)
            List<string> victim = ReadLines(reader, 8);
            p.Name = victim[0].Replace("Victim: ", "");
            p.Corporation = victim[1].Replace("Corp: ", "");
            p.Alliance = victim[2].Replace("Alliance: ", "");
            p.Faction = victim[3].Replace("Faction: ", "");
            p.Ship = victim[4].Replace("Destroyed: ", "");
            p.SystemSeen = victim[5].Replace("System: ", "");

            return p;
        }
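The helpers used here aren't shown in the post, so here are minimal sketches of what they might look like (my assumptions; the real versions are in the zip, and GetIncidentTime assumes the 'yyyy.MM.dd HH:mm' format from the sample above):
        // Sketches of the helpers referenced above.
        // Requires: using System.Globalization;
        private DateTime GetIncidentTime(StringReader reader)
        {
            // First line of a killmail, e.g. "2009.05.11 02:37"
            return DateTime.ParseExact(reader.ReadLine(), "yyyy.MM.dd HH:mm",
                                       CultureInfo.InvariantCulture);
        }

        private void MoveReader(StringReader reader, int lines)
        {
            // Just throw away the next X lines
            for (int i = 0; i < lines; i++)
                reader.ReadLine();
        }

        private List<string> ReadLines(StringReader reader, int count)
        {
            var result = new List<string>(count);
            for (int i = 0; i < count; i++)
                result.Add(reader.ReadLine());
            return result;
        }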
After this we move the reader 3 lines down to start reading the list of involved players; since there can be 1..n players in that list, that code is wrapped in a loop:
        private void GetInvolvedPlayers(Kill kill, StringReader reader, string seen)
        {
            kill.Involved.Add(GetInvolved(reader, seen));
            MoveReader(reader, 1);       // skip the blank line after the block
            while (HasMoreInvolved(reader))
            {
                kill.Involved.Add(GetInvolved(reader, seen));
                MoveReader(reader, 1);   // and after each following block too
            }
        }
The HasMoreInvolved call simply peeks at the next character to see if we have more players involved (a row starting with an 'N' for Name):
        private bool HasMoreInvolved(StringReader reader)
        {
            // Peek returns -1 at the end of the input, so guard before casting
            int next = reader.Peek();
            return next != -1 && (char) next == 'N';
        }
GetInvolved looks almost the same as GetVictim, just some slight changes to handle the data included; a sketch of it follows below. I've included the code + unit tests as a zip if anyone wants a peek at it. Code: killmailparser
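Since GetInvolved isn't shown above, here's my guess at it (the real version is in the zip); an involved-party block is also 8 lines, Name through Damage Done:
        // My guess at GetInvolved - the real version is in the zip
        private Player GetInvolved(StringReader reader, string seen)
        {
            Player p = new Player();
            List<string> lines = ReadLines(reader, 8);
            p.Name = lines[0].Replace("Name: ", "")
                             .Replace(" (laid the final blow)", "");
            p.Corporation = lines[2].Replace("Corp: ", "");
            p.Alliance = lines[3].Replace("Alliance: ", "");
            p.Faction = lines[4].Replace("Faction: ", "");
            p.Ship = lines[5].Replace("Ship: ", "");
            p.SystemSeen = seen;   // involved players were seen in the victim's system
            return p;
        }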

Stack Overflow

I've been lurking on Stack Overflow for a while now, and I really like it. If you don't know what it is: it's a site created by Jeff Atwood (who runs an excellent blog) and Joel Spolsky that is a "programming Q&A site". It uses meta moderation and reputation to rank answers, and the person asking the question has the ability to mark a certain post as the right one. So if you're stuck and need some help it's a good place to start (or well, after a trip to Google that is), but that's not the part I like the most about the site. Since it's open and offers a good way of moderating answers and voting the correct ones up, it can be used as a learning tool. Just browsing it 20 minutes per day lets me find something new I didn't know before. And also some good laughs: Jon Skeet Facts? Hm, this post turned into an ad for Stack Overflow, not that I think they need it but hey, great site!