Being Pragmatic about Bugs

In my Curriculum Vitae I have written that I value organisations that don't bargain with quality. In the same CV I have written that I consider myself to be pragmatic.

A few years ago I went to an interview where they asked me how I could consider myself to be pragmatic and at the same time not be willing to sacrifice quality. Apparently, they considered that to be a contradiction.

I later learned that the product the job opening was for was a 100 KLOC application that had hundreds of bugs. And this was after the team had spent the last three months doing nothing but bug fixing. Oh, and the team did very little testing, so those hundreds of bugs might only have been the tip of the iceberg.

I wonder, is that being pragmatic?


I guess few people would say it is. For obvious reasons, having hundreds of bugs in quite a small application can't be a good trade-off.

But what do I mean then when I say that I don't want to bargain with quality? How many bugs can you have without bargaining with quality?

The short answer is: zero bugs.

But that can't be pragmatic, right? It sounds more like dogmatism, doesn't it?

Which leads me to my long answer.

If you try to reach zero bugs using the same practices as the organisation I mentioned above, you will fail. You will fail because you will do nothing but bug fixing. You will spend most of your time in front of a debugger.

Bugs are a part of their system. They expect them. They are waiting for them. It's a natural thing to have bugs, and they can't imagine a world without them.

And herein lies the problem. The problem is that the organisation does not try to improve. They don't try to learn.

A bug is an opportunity to learn. And not seizing the opportunity to learn is a failure. That is, a bug is only a failure if you don't learn anything from it. And to reach zero bugs you have to seize every opportunity you get.

If, for every bug you discover, you try to figure out a way to prevent the same bug, or even better, the same kind of bug from ever happening again, then and only then do you have the chance to reach zero bugs.

That is, you need to change your practices from being reactive towards being more proactive.
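To make this concrete, here's a minimal sketch in Python of what seizing that opportunity can look like. The bug, the function and all the names are made up for illustration; the point is the two-step habit: first a test that reproduces the exact bug, then something that guards against the whole kind of bug.

    import pytest

    def average_rating(ratings):
        # Imaginary bug report: crashed with ZeroDivisionError when a product had no ratings.
        if not ratings:  # the fix: handle the empty case explicitly
            return None
        return sum(ratings) / len(ratings)

    # Step 1: a test that reproduces the exact bug, written before the fix.
    # It failed on the old code and now keeps that particular bug from ever coming back.
    def test_average_rating_of_unrated_product_is_none():
        assert average_rating([]) is None

    # Step 2: a test aimed at the kind of bug: no valid input should ever make it crash.
    @pytest.mark.parametrize("ratings", [[], [3], [1, 5, 4]])
    def test_average_rating_never_raises(ratings):
        average_rating(ratings)

The first test is the reactive part; the second one, together with asking "where else could this happen?", is where the proactive part begins.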

To be, or not to be... Pragmatic

I've always considered pragmatic to be a positive word. The way I see it, it's the opposite of dogmatism. You don't follow rules or principles just for the sake of it. You make your choices based on what's practical, not based on theories.

And I'm not alone in thinking that it's a positive word. After all, two of the greatest developers the world has seen have based their whole careers on the greatness of being pragmatic.

So therefore I strive to be pragmatic, and I often brag about how pragmatic I am.

But...

Then I see so many bad decisions being made in the name of pragmatism. Or rather, in the name of what some people consider to be pragmatic. The word is so commonly misused that I'm actually thinking about no longer using it when I describe my values.

I have plenty of examples of this misuse, and I will blog about some of them. Here's the result so far:

When to Refactor

There are two ways to test-drive your code.

Either you write your test, you write your code and you do a little bit of refactoring. You spend less time on the refactoring than you do writing the test and the code. Then you restart the red-green-refactor loop. When you're done with the whole feature your code is a bit messy, so you do some bigger refactorings, spending enough time for the code to be satisfactory.

Or you write your test, you write your code and you do quite a lot of refactoring. You actually do enough refactoring to make the code look satisfactory, which means you probably spend more time refactoring than you do writing the test and the code. Then you restart the red-green-refactor loop. When you're done with the whole feature you're done. No more refactorings are necessary!

One way is better than the other, for several reasons.


Let's say you write a test. Then you write enough code to make the test pass. You look at the solution and it looks horrible. But you say to yourself: "I might as well leave that mess, because that code has to change when I write test X32 anyway".

Sounds like a good plan, right? Let's not waste our time doing work up-front.

What you didn't think about was that up until the time you write test X32 you will have to cope with the mess. It will slow you down, since it's probably harder to understand than a cleaned-up version would be. This will be even more apparent once you get around to writing test X32, because you must understand the code in order to change it.

Code that is difficult to understand is also difficult to compare with other parts of the code base. Without the mess you might have found ways to simplify the design, ways that are later obscured by the fact that more code has been added. You'll end up with something that is no longer the simplest possible design.

Also, what happens if you never get around to test X32? What will happen then with the mess?

What if you run out of time just as you were about to clean up the mess? Will you be able to justify to the man with the money that you should spend some time next iteration cleaning up last iteration's mess? He'd probably prefer some new fancy features.

Will you be able to motivate yourself to clean up the mess? When the next feature is awaiting you around the corner, are you really going to put on the gloves and bring out the broom?


The above are all small things compared to the real problem: by postponing the refactoring you get a false sense of progress. A finished red-green-refactor loop is one notch on the ladder towards a completed feature. But how do you measure progress while doing that big final refactoring? Will you even remember to include it in your estimates?


So the better way is to do your homework directly. Always keep the code in a close to perfect shape.
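Here's a minimal sketch in Python of what I mean, with a made-up shopping cart example. The cleanup happens inside the same red-green-refactor loop, while the tests are still green, instead of being piled up for a big refactoring at the end of the feature.

    import unittest

    # RED: the next small test.
    class ShoppingCartTest(unittest.TestCase):
        def test_total_applies_ten_percent_discount_over_100(self):
            cart = ShoppingCart()
            cart.add(60)
            cart.add(60)
            self.assertEqual(cart.total(), 108)

    # GREEN: the quickest thing that passes might look like this...
    #
    #     def total(self):
    #         if sum(self._prices) > 100:
    #             return sum(self._prices) - sum(self._prices) * 0.1
    #         return sum(self._prices)
    #
    # REFACTOR: ...and it gets cleaned up now, while the tests are green,
    # before the next test is written.
    class ShoppingCart:
        DISCOUNT_THRESHOLD = 100
        DISCOUNT_RATE = 0.1

        def __init__(self):
            self._prices = []

        def add(self, price):
            self._prices.append(price)

        def total(self):
            subtotal = sum(self._prices)
            discount = subtotal * self.DISCOUNT_RATE if subtotal > self.DISCOUNT_THRESHOLD else 0
            return subtotal - discount

    if __name__ == "__main__":
        unittest.main()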

Solution Probleming

I have noticed that human beings are very solution-oriented. When we see a problem we immediately try to find a solution. Perhaps we have recently read a book or an article that comes with examples of good solutions and now we see a chance to apply it. Or perhaps we are clever enough to come up with a solution ourselves.

Quite often we have a solution ready even before we understand the problem. It's a nice solution and we are eager to apply it. So we start looking for problems that the solution solves...

Or perhaps we truly understand the problem. Perhaps we actually do start with the problem and try to find a solution for it. But once we find that solution, our instincts take over and we can't wait to start using it. Seldom do we care to find out whether it is the only solution, and even less often whether it is the best one.

I see this happen in every Retrospective session I attend, for almost every issue discussed. And I have obviously been guilty of it myself many times.

Since it's such a common behavior, almost like an instinct, there's no point in trying to change people. Instead the focus must be on controlling it. My trick is to change the goal of the session:

The Retrospective should be about finding problems, not solutions.

You should not even be allowed to come up with solutions in the Retrospective. If anyone mentions a possible solution, the facilitator should interrupt and steer the focus back to finding problems. Not even the last five minutes should be about solutions. The solving part should be handled elsewhere.

Unfortunately, this is not enough. The solution-finding instinct can haunt you even when solutions are banned. The next dilemma is that the problems people mention are usually very solution-like. For instance, a problem might be "our team is too big". That is just another way of saying "split the team into two", which is a solution.

What you need to do is locate the symptoms of the problem. Ask, in a sort of reverse-5-whys way: why was this a problem? Did the daily synchronization meetings take too long? Was the iteration planning session too long? Was there a lack of team spirit?

If the symptom was that the daily meetings were too long, then perhaps the root cause was that each member talked too much. Or perhaps there were non-team members present talking, taking up time.

If the problem was "solved" by splitting the team, all you've done is kill the symptom while the root problem remains. Actually, you've made things even worse: not only does the root problem remain, you've also hidden its symptoms, making it more difficult to find and fix.

Once you've found the symptoms you can use 5 whys or a fishbone diagram to find the root cause.


So when should you do the actual solving?

Most of the time, once you've found a problem the solution is obvious. All you need to do is make some small modifications to the problem description and you'll get a solution. Take the "our team is too big" problem as an example.

Sometimes the solution is not that easy to implement, though. If we decide to "split the team into two", questions will arise about how to actually implement it. Who should be in which team? Should we have one backlog or two? How do we overcome the communication barrier between the teams?

This is a discussion that takes longer than any Retrospective. Also, having the whole team brainstorm answers to these questions is probably not very efficient. Better, then, to form a small group of people who are assigned the task of thinking a bit longer about a solution.

But that's a different meeting with a different goal.

Disclaimer: I've impudently stolen the title for this post from Tim Ottinger :)

Three Minute Red

A couple of months ago when I attended a TDD coding dojo I managed to annoy most of the other participants. The reason was that I stubbornly argued that no refactoring should take more than three minutes.

Here's the reason for my stubbornness:
I have never been able to finish a 10-minute refactoring in less than half an hour.
Refactorings are really difficult to estimate. What feels like a five-minute fix often ends up lasting 20 minutes. And it's not just me: I've seen the same behavior in other developers when pair programming.

My general rule says that a refactoring takes about four times longer than estimated. A one-minute refactoring usually takes around four minutes. A two-week refactoring takes about two months. A total rewrite of the application, estimated at half a year, takes about two years.

So my stubbornness about staying below three minutes was actually an attempt not to be in the red zone for more than 10 to 15 minutes. The others thought that 10 minutes was OK. I disagreed, because I knew that such a refactoring would eat up most of the remaining time of the kata.

Why is it so bad to do a long refactoring? There are many reasons.

There is a big chance that you over-design. When doing a "big" rewrite lasting 20 minutes you might, by accident, add some new functionality that you don't have a test for. One of the biggest advantages of TDD is that you end up with the simplest possible design. Staying in the red for a long time jeopardizes this.

Another problem is that you get no sense of progress. When will you finish? If you estimated a refactoring at 10 minutes, you will probably still estimate the remaining part at 10 minutes after working on it for 15...

Last but certainly not least: you don't get any feedback. If the tests are red after a 30-minute refactoring, how will you know what caused the problem? Is it something you just did or was it that other thing you did 25 minutes ago? How long will it take you to find out? It might take hours.

The Three Minute Red rule doesn't only apply to refactorings; it goes for writing tests as well: if you estimate that the production code for a newly added test will take more than three minutes, you should either do some refactoring before adding the test or come up with a better test.
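To illustrate that last rule with a made-up example (a little expression evaluator; not something from the dojo): if the first test you think of would force you to write tokenizing, precedence and parentheses handling all at once, it fails the three-minute check. A better first test drives one tiny step that can be made green almost immediately.

    import unittest

    # Too big for Three Minute Red: this first test would need far more than
    # three minutes of production code to pass.
    #
    #     def test_evaluates_full_expression(self):
    #         self.assertEqual(evaluate("(1 + 2) * 3 - 4"), 5)
    #
    # A better first test: one tiny step, green within a minute or two.
    class EvaluatorTest(unittest.TestCase):
        def test_single_number_evaluates_to_itself(self):
            self.assertEqual(evaluate("7"), 7)

    def evaluate(expression):
        return int(expression)

    if __name__ == "__main__":
        unittest.main()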

About The Template

My Blogger template is done by me, myself and I.

It took me quite a while to finish it. Why? Because I had to learn how Blogger templates work, I had to refresh my HTML, CSS and JavaScript knowledge, and I had to do a lot of graphics tweaking to get the right look.

The result is not that beautiful, but it's got all those bonus features that I just can't do without. Also, it's pretty nice for being done by me.

Well, I didn't do it all by myself. I have actually stolen the graphics from the Grungy Blogger template, although I have modified them quite a lot.

I found the code for making a tag cloud on phydeaux3's blog. However, I didn't understand the JavaScript so I started refactoring (without a test safety net *argh*). Then I removed functionality that I didn't want and changed things to fit into my style sheet. The final result looks nothing like the original.

Another thing I wanted was expandable post summaries like the ones you find in WordPress. This was excellently described in the Blogger help sections.

Finally, I wanted a static (non-blog) page where I could write something about myself (and also about the template -- this page!). Unfortunately I couldn't find any description on how to do this. All I found was the trick of backdating your posts, making them appear last in the archive.

Not good enough for me! So I started tweaking the Blogger widget tags and JavaScript and finally came up with a solution. If you are interested in how I did it, post a comment and I will write about it here. If you are a real Blogger template whiz you can probably figure it out by reverse engineering the HTML source, though.

And that's about it!

Links

Here I will keep a list of links to other blogs and resources that I find interesting.