Browsing Posts in Agile

Thanks to everyone for attending Simple Patterns for Simple Problems last week. Source code – before and after – can be found here.* The “after” source is where we left off. The additional Utility methods are left as an exercise for you, dear reader. If you have any suggestions on how to improve this presentation in the future, I would love to hear from you.

* I haven’t included NUnit in the zip file as that would have bloated the download. Once you unzip, simply copy NUnit-Net-2.0 2.2.8\bin to SimplePatternsForSimpleProblems-[Before|After]\tools\NUnit.

After my presentation yesterday on Simple Patterns for Simple Problems, John Bristowe, a fellow plumber (and closet agilist), interviewed me about agile development on the .NET platform. We talked about why developers should be interested in learning more about agile development techniques, including unit testing, TDD, code coverage, continuous integration, and more. Check it out: A Chat with James Kovacs on Agile Development.

UPDATE: One thing I forgot to note was that when I said most agile developers use Visual Studio and ReSharper, I should have qualified that with most agile developers [on the .NET/Windows platform] use Visual Studio and ReSharper, though I’m sure that was obvious. Java and Ruby developers don’t typically use Visual Studio.

Jean-Paul Boodhoo and I will be presenting Simple Patterns for Simple Problems at the Calgary .NET User Group at noon on Thursday, July 19, 2007. Here’s the abstract:

Everyone has that little (or not so little) class called Utility that holds all kinds of interesting bits of business logic. It is a hodge-podge of code that you’re not sure where to put. This session will examine some common types of methods found in utility classes and how to refactor your design using simple patterns to eliminate these troublesome kitchen-sink classes.

Location:
330 – 5th Avenue SW, T2P 0L4
Calgary AB Canada
Conference Room CP1-1106
(the elevator will be open to the floor between 11:30 and 12:00 so no security pass will be required)

Food and beverages provided by Nexient.

Having done this a few times before, and having always ended up scouring the documentation for exactly how to enable the options I need, I’m hereby committing it to long-term memory…

Subversion 1.4.0 and later support running Subversion directly as a Windows Service. This allows you to access your repository via TortoiseSVN, svn.exe, etc. using:

svn://server/RepoName

You can find information in the Subversion FAQ as well as a link to a document describing exactly how to set it up. There is no tool provided to configure the Windows Service, so you’re stuck using sc.exe, the Service Control command-line tool, which ships with various versions of Windows. It is rather quirky, even for a command-line tool. Note that the name/value pairs take the form “Name=” followed by a space followed by “Value”. The equals sign is part of the name; the command won’t work if the equals sign is omitted or if you insert a space before it.*

Here’s how I typically configure Subversion to run as a service:

sc create <ServiceName> binPath= "\"<PathToSvnBin>\svnserve.exe\" --service -r <SrcRepoRoot>" DisplayName= "Subversion Service" depend= Tcpip start= auto obj= <ComputerOrDomain\ServiceAccount> password= <Password>

where

<ServiceName> is the name of the service (as used in commands such as net stop <ServiceName>).

<PathToSvnBin> is the fully-qualified path to svnserve.exe. N.B. You have to surround it with \" to escape the path if it contains spaces.

<SrcRepoRoot> is the fully-qualified path to the directory that contains all your repositories.

<ComputerOrDomain\ServiceAccount> is the user account under which you want to run the service. You need to include the computer or domain name (depending on whether it’s a local or domain account, respectively).

For my environment, it ends up looking like this:

sc create svnserve binPath= "\"C:\Program Files\Subversion\bin\svnserve.exe\" --service -r c:\SrcRepos" DisplayName= "Subversion Service" depend= Tcpip start= auto obj= Server\SvnDaemon password= P@ssw0rd
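If you mistype something, sc.exe can also show you what actually got registered and let you start over (these are standard sc subcommands):

sc qc svnserve
sc delete svnserve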

I usually use a local computer account, SvnDaemon, for running my repositories. After creating the account, I remove it from the Users group. By removing the account from Users, no one can log in with that account and it no longer appears on the main log-in screen in Windows XP and Vista. In Local Security Policy… Local Policies… User Rights Assignment…, grant SvnDaemon the “Log on as a service” privilege. Explicitly grant the account Full Control on c:\SrcRepos.
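If you prefer to script the ACL change, something along these lines should do it on XP/2003 (cacls ships with Windows; treat the exact switches as a sketch and double-check against cacls /? on your system):

cacls c:\SrcRepos /E /T /G SvnDaemon:F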

The last thing you need to do is punch a hole in your firewall to allow connections on the standard svn port, which is TCP 3690.
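On Windows XP SP2 or Server 2003, the built-in firewall can be opened from the command line with something like the following (a sketch; adjust for whichever firewall you’re actually running):

netsh firewall add portopening TCP 3690 svnserve ENABLE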

Now you can start the service via “net start svnserve”. Your firewall might prompt you to grant permission for svnserve.exe to listen on the Subversion port.

Time to test. Launch the TortoiseSVN RepoBrowser and enter “svn://server/RepoName”. You should now be able to browse your repository.

* Note that PowerShell doesn’t grok the “Name=” syntax. PowerShell tries to interpret it as some sort of assignment. I haven’t bothered digging in to find out how to instruct PowerShell to treat “Name=” literally. In the meantime, it works just fine from cmd.exe.

A few days ago, I read Kyle Baley’s post about learning Rhino Mocks. Like many folks, he initially had trouble understanding Rhino Mocks’ record/playback metaphor. Then during the Calgary Code Camp, an attendee kindly pointed out that I had forgotten mockRepository.ReplayAll() in my unit test – fortunately before I actually ran it. Then yesterday I was catching up on my blog reading and saw this post from Oren discussing various people forgetting to call mockRepository.ReplayAll(), including Oren himself.

I thought to myself, Rhino Mocks has a very nice fluent interface, but doesn’t make its Record/Playback explicit. What would I want my code to look like if it did support explicit Record/Playback? So I grabbed the source from Subversion, put on my TDD hat, and came up with this failing test:

[Test]
public void CanRecordPlayback() {
    MockRepository mockRepository = new MockRepository();
    IFoo mockedFoo = mockRepository.CreateMock<IFoo>();
    using(mockRepository.Record()) {
        Expect.Call(mockedFoo.Bar()).Return(42);
    }
    using(mockRepository.Playback()) {
        Fooz fooz = new Fooz(mockedFoo);
        fooz.Barz();
    }
}

A few minutes later, I had working code for the Record and Playback methods and all the unit tests passed. I created a patch and emailed it to Oren for his thoughts on the new syntax. He liked it so much that he included the new syntax in Rhino Mocks 3.0.5, which he just released today.

A few things to note about the new syntax. Expectations are grouped in the Record() block. Exercising the mocks occurs in the Playback() block. ReplayAll() and VerifyAll() are called implicitly at the end of the Record() and Playback() blocks, respectively.
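For comparison, here is roughly the same test written against the classic syntax, where remembering ReplayAll() and VerifyAll() is entirely up to you (IFoo, Fooz, and Barz() are the same hypothetical types as in the test above):

[Test]
public void CanRecordPlaybackClassicStyle() {
    MockRepository mockRepository = new MockRepository();
    IFoo mockedFoo = mockRepository.CreateMock<IFoo>();
    Expect.Call(mockedFoo.Bar()).Return(42);
    mockRepository.ReplayAll();   // easy to forget, which is what prompted the new syntax
    Fooz fooz = new Fooz(mockedFoo);
    fooz.Barz();
    mockRepository.VerifyAll();   // confirms that every expectation was satisfied
}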

Time between initial idea and production deployment: Less than 24 hours. How is that for great turn-around? Thanks, Oren. I’m proud to have committed to such a widely-used (and respected) project!

I would like to thank everyone who voted for my TechEd 2007 Bird of a Feather (BoF) session. The votes have been tallied and my session has been accepted. So I will be leading a lively discussion on:

Creating Flexible Software: TDD, Mocking, and Dependency Injection

There has been much discussion of agile development techniques, but what do they really mean? How can they help you develop better software that is more flexible in the face of change? What does it mean for software to be flexible? This Birds of a Feather session will promote a lively discussion of how to be successful in creating software that is resilient in the face of ever-changing business requirements.

Just like Spring, it seems that Agile is in the air as Jeffrey Palermo had his two agile BoFs accepted too. Congratulations, Jeffrey! I’m looking forward to Partying with Palermo at TechEd 2007 (and attending his BoFs).

Over the past few months, I’ve been doing test-driven development (TDD) using VstsUnit (aka MSTest), which ships with Visual Studio Team System. (A client’s choice to use VstsUnit, not mine.) I’m an avid ReSharper user and quite like their unit test runner, which allows you to run NUnit or csUnit tests directly from within Visual Studio. Unfortunately it doesn’t support VstsUnit. When Albert Weinert released MbUnit support for ReSharper a few months back, I realized that JetBrains had an undocumented plugin model for their unit test runner and I could integrate support for VstsUnit into ReSharper. Without further ado, I present to you the VstsUnit Plugin for ReSharper.

VstsUnit Plugin for ReSharper

Out of the box, JetBrains ReSharper Unit Test Runner supports NUnit and csUnit. This ReSharper plugin adds support for the Unit Testing Framework in Visual Studio Team System (VstsUnit), also known as MSTest.

N.B. The plugin will likely work with JetBrains UnitRun, but I haven’t tested it. Unfortunately you cannot have ReSharper and UnitRun installed at the same time, which means setting up a virtual machine with Visual Studio 2005 and JetBrains UnitRun, and I haven’t taken the time to do that yet.

Installation

  1. Close Visual Studio 2005.
  2. Extract the contents of the archive, including the VstsUnit folder, to:
    %ProgramFiles%\JetBrains\ReSharper\VS2005\Plugins
  3. Launch Visual Studio 2005.
  4. Open a project containing VstsUnit tests.
  5. Open a test file containing a TestClass with TestMethods. Standard ReSharper icons will appear in the left margin and allow you to run the unit tests.

Known Issues

ReSharper Unit Test Runner icons do not appear beside TestClass and TestMethod.
This is typically caused by having an assembly reference to another unit test framework. ReSharper Unit Test Runner (and UnitRun) only support a single unit test framework per test assembly. Remove references to NUnit.Framework, csUnit, and/or MbUnit.Framework from the References in your test project. This is a known limitation of JetBrains’ current unit test runner implementation. A plugin votes on whether the current assembly contains tests that it can run. If the plugin votes “yes”, the unit test runner stops querying additional plugins. NUnit and csUnit get queried before third-party plugins.
Poor performance when running full test suite.
Unit tests are run by invoking mstest.exe and parsing the resulting XML file (.trx). The Unit Test Runner invokes the plugin once for every TestClass (aka TestFixture in NUnit terms). Unfortunately MSTest has high overhead as it copies all files to a new directory to execute tests. There is no way to turn off this behaviour in MSTest. It may be possible to run the full suite of tests and cache the results, but there are numerous problems with this approach, including determining whether the cache is fresh and keeping TestClass or TestMethod runs responsive. Due to the architecture of MSTest, it is not possible (without what would likely be some truly awful System.Reflection work) to run the unit tests in-process as is done for NUnit and csUnit. (A rough sketch of the mstest.exe invocation appears at the end of this section.)
Cannot debug or profile VstsUnit tests.
I have not found a way to hook the debugger or profiler to the MSTest process. Hopefully this will be possible in a future version.
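As mentioned under the performance issue above, the plugin shells out to mstest.exe and parses the .trx results. The invocation looks roughly like this (a sketch based on the VS2005 command-line switches rather than the plugin’s exact command line):

mstest.exe /testcontainer:MyTests.dll /test:MyNamespace.MyTestClass /resultsfile:results.trx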

NUnit, MbUnit, and VSTSUnit (aka MSTest) have similar syntax, which makes it easy to switch from one framework to another. Switching from NUnit to MbUnit is as simple as replacing:

using NUnit.Framework;

with:

using MbUnit.Framework;

Switching the other way is just as easy as long as you haven’t used any MbUnit-specific features such as RowTest.
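To make that concrete, here is the sort of MbUnit-specific test that blocks a painless switch back to NUnit (RowTest and Row are MbUnit attributes; the Calculator class is invented for illustration):

[RowTest]
[Row(2, 3, 5)]
[Row(-1, 1, 0)]
public void ShouldAddTwoNumbers(int a, int b, int expected) {
    Assert.AreEqual(expected, new Calculator().Add(a, b));
}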

Switching to/from VSTSUnit is not as easy because Microsoft decided to rename the test-related attributes. (Fortunately the Assert class is largely the same, so switching is mostly an attribute-renaming exercise.) So here’s a snippet to place at the top of every test file that will allow you to switch between NUnit, MbUnit, and VSTSUnit via a simple #define in a code file, a compiler switch, or project properties. You then define your tests using the VSTSUnit attributes (i.e. TestClass, TestMethod, etc.).

#if NUNIT
using NUnit.Framework;
using TestClass = NUnit.Framework.TestFixtureAttribute;
using TestMethod = NUnit.Framework.TestAttribute;
using TestInitialize = NUnit.Framework.SetUpAttribute;
using TestCleanup = NUnit.Framework.TearDownAttribute;
using ClassInitialize = NUnit.Framework.TestFixtureSetUpAttribute;
using ClassCleanup = NUnit.Framework.TestFixtureTearDownAttribute;
#elif MBUNIT
using MbUnit.Framework;
using TestClass = MbUnit.Framework.TestFixtureAttribute;
using TestMethod = MbUnit.Framework.TestAttribute;
using TestInitialize = MbUnit.Framework.SetUpAttribute;
using TestCleanup = MbUnit.Framework.TearDownAttribute;
using ClassInitialize = MbUnit.Framework.TestFixtureSetUpAttribute;
using ClassCleanup = MbUnit.Framework.TestFixtureTearDownAttribute;
#else
using Microsoft.VisualStudio.TestTools.UnitTesting;
#endif
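With that block in place, a test written with the VSTSUnit attribute names compiles under whichever framework you select. Here’s a minimal sketch (the Calculator class is again hypothetical):

[TestClass]
public class CalculatorTests {
    private Calculator calculator;

    [TestInitialize]
    public void CreateCalculator() {
        calculator = new Calculator();
    }

    [TestMethod]
    public void ShouldAddTwoNumbers() {
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}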

Now you’re probably thinking to yourself, “James must really love VSTSUnit because that’s the default.” Not exactly. I use a number of tools including ReSharper’s built-in Unit Test Runner and VSTSUnit’s Test View window. ReSharper works against the compiled code model. So the naming of the attributes is irrelevant. Instead of TestFixture, I could attribute my test-containing class with “MonkeysWritingShakespeare” as long as I had the proper using alias:

using MonkeysWritingShakespeare = NUnit.Framework.TestFixtureAttribute;

ReSharper’s Unit Test Runner figures it out because the using alias is just C# syntactic sugar that saves you from typing fully qualified type names all the time. (The CLR only deals in fully qualified type names.)

How about VSTSUnit’s Test View? Not so good. It is apparently parsing the C# code file, not the code model, looking for tests. If you attribute a test with anything other than TestMethod (or its fully qualified equivalent), the test disappears from the Test View window even if you have the correct using alias to rename the attribute to TestMethodAttribute. Very lame. So that’s why I use VSTSUnit attributes and alias them to NUnit or MbUnit equivalents rather than the other way around.

Now should you use this unit test switching technique on every project? No, it’s not worth it. Pick a unit test framework and stick with it as long as it is not causing you pain. Because of differences between unit test frameworks, you need to run your test suite with all the test frameworks at least every few days. Otherwise you’ll accidentally use a feature specific to one framework and not realize it. (I’m assuming that you are running your test suite frequently throughout the day using your test framework of choice. My point here is that you need to run your test suite with all frameworks at least once in awhile to ensure that everything works.) There are occasions where you want to support multiple test frameworks and the snippet above will hopefully make life easier for you.

I’ve been a big believer in unit testing for years now, but was never serious about test-driven development (TDD) until a few months ago. It sounded like an interesting idea, but I didn’t understand why TDD practitioners were so zealous about writing the tests first. Why did it matter? I thought it was to ensure that some project manager doesn’t try to shave some time off the project by cutting the unit tests. Then I started doing some reading about TDD, poking around, asking questions, and trying it out myself. I discovered the real reason for writing tests first is that TDD isn’t about testing code, it’s about designing code. That was my first epiphany. You need to write the tests first because you’re designing how the code works. This isn’t some eggheaded “let’s draw pretty UML diagrams” exercise.* We’re actually specifying our object’s API and writing working code. At first our test will fail because the code to implement the behaviour we’re designing hasn’t been written yet. NUnit comes up RED! Now we implement the functionality we just specified.** NUnit comes up GREEN! How does this fit in with the rest of our design? Can we extract commonalities? Can we make our overall design better? REFACTOR! (NUnit should still be green because refactoring shouldn’t change behaviour.) So that’s where we get the TDD mantra: Red-Green-Refactor. When you get into the rhythm, it feels awesome. You’re writing working code quickly and, because of your growing suite of unit tests, you have the confidence to make sweeping changes to improve the design.
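Here’s the rhythm in miniature, using NUnit and a hypothetical ShippingCalculator (all names invented for illustration): write the test first and watch it fail, write just enough code to make it pass, then refactor with the green test as your safety net.

using NUnit.Framework;

[TestFixture]
public class ShippingCalculatorTests {
    [Test]
    public void ShouldChargeFlatRateForSmallOrders() {
        // RED: this won't even compile until ShippingCalculator exists
        ShippingCalculator calculator = new ShippingCalculator();
        Assert.AreEqual(7.95m, calculator.CalculateShipping(25.00m));
    }
}

// GREEN: the simplest implementation that satisfies the specification above
public class ShippingCalculator {
    public decimal CalculateShipping(decimal orderTotal) {
        return 7.95m;
    }
}

// REFACTOR: with the test green, reshape the design freely; the test keeps you honest.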

Next on my list of things to understand was mocking. I had seen it used, but never quite understood the point. Then I downloaded Rhino.Mocks, a very well-respected mock object framework by Oren Eini (aka Ayende Rahien), and decided to take mocking for a spin. Mocking is all about decoupling the class being designed from its dependencies. So if you’re writing a service layer, you can mock out the data layer. The data layer doesn’t even need to exist. You are basically substituting a dynamically generated stub. I had some unit tests that were too closely coupled to the database, which meant they were slow to run and failed miserably if the database wasn’t available. When you’re testing whether your data layer is mapping properly, you need to touch the database, but you can test your presentation, workflow, domain, and even large parts of your data layer without a live database. Unfortunately, with my design, I couldn’t. Let’s take a look at a test for ensuring that the data layer’s Identity Map is implemented correctly:

[Test]
public void ShouldReturnSameInstanceWhenRetrievingSameDepartment() {
    DepartmentRepository repository = new DepartmentRepository();
    IList<Department> departments = repository.FindAll();
    foreach(Department d1 in departments) {
        Assert.IsNotNull(d1);
        Department d2 = repository.FindById(d1.Id);
        Assert.IsNotNull(d2);
        Assert.AreSame(d1, d2);
    }
}

I wanted to mock out DepartmentRepository’s dependencies, but I couldn’t get to them. They were internal to DepartmentRepository’s implementation. What to do? Dependency injection to the rescue. Make DepartmentRepository take its dependencies through a constructor (or property setters).
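Roughly what that looks like (a sketch rather than the real implementation; I’m using the IDepartmentMapper interface that shows up in the later tests, and Department is the domain class from the tests):

using System.Collections.Generic;

public interface IDepartmentMapper {
    IList<Department> FindAll();
    Department FindById(int id);
}

public class DepartmentRepository {
    private readonly IDepartmentMapper mapper;

    // The mapper is injected rather than newed up internally,
    // so a test can hand in a mock or a stub.
    public DepartmentRepository(IDepartmentMapper mapper) {
        this.mapper = mapper;
    }

    public IList<Department> FindAll() {
        // The real class also layers database-agnostic behaviour
        // (such as the Identity Map) on top of the mapper calls.
        return mapper.FindAll();
    }

    public Department FindById(int id) {
        return mapper.FindById(id);
    }
}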

[Test]
public void ShouldReturnSameInstanceWhenRetrievingSameDepartment() {
    DepartmentRepository repository = new DepartmentRepository(new DepartmentMapper());
    IList<Department> departments = repository.FindAll();
    foreach(Department d1 in departments) {
        Assert.IsNotNull(d1);
        Department d2 = repository.FindById(d1.Id);
        Assert.IsNotNull(d2);
        Assert.AreSame(d1, d2);
    }
}

DepartmentMapper has all the database access logic – TSQL, stored proc names, NHibernate code, or whatever strategy you’re using. DepartmentRepository contains all the database agnostic stuff, such as whether an instance is currently loaded. So now I could mock the DepartmentMapper like so:

[Test]
public void ShouldReturnSameInstanceWhenRetrievingSameDepartment() {
    MockRepository mockery = new MockRepository();
    IDepartmentMapper mapper = mockery.CreateMock<IDepartmentMapper>();
    mockery.ReplayAll();
    DepartmentRepository repository = new DepartmentRepository(mapper);
    IList<Department> departments = repository.FindAll();
    foreach(Department d1 in departments) {
        Assert.IsNotNull(d1);
        Department d2 = repository.FindById(d1.Id);
        Assert.IsNotNull(d2);
        Assert.AreSame(d1, d2);
    }
    mockery.VerifyAll();
}

That still doesn’t do much because I need to tell the mocked mapper to pass back some dummy data. Not too hard to do…

[Test]
public void ShouldReturnSameInstanceWhenRetrievingSameDepartment() {
    MockRepository mockery = new MockRepository();
    IDepartmentMapper mapper = mockery.CreateMock<IDepartmentMapper>();
    IList<Department> mockDepartments = new List<Department>();
    for(int i = 0; i < 4; i++) {
        mockDepartments.Add(mockery.CreateMock<Department>());
    }
    Expect.Call(mapper.FindAll()).Return(mockDepartments);
    mockery.ReplayAll();
    DepartmentRepository repository = new DepartmentRepository(mapper);
    IList<Department> departments = repository.FindAll();
    foreach(Department d1 in departments) {
        Assert.IsNotNull(d1);
        Department d2 = repository.FindById(d1.Id);
        Assert.IsNotNull(d2);
        Assert.AreSame(d1, d2);
    }
    mockery.VerifyAll();
}

This was feeling all wrong. All I wanted to do was stub out my data layer and I was practically having to “explain” to the mock object the internal implementation of the DepartmentRepository class. I scratched my head for a while. Left it, came back to it, scratched my head some more. I was missing something about mocking and it felt like something big. Then my second epiphany hit. Unit testing in this way is all about blackbox testing. You are testing your API. When I call this method with these parameters, I get this result. You don’t care how the method is implemented, just that you get the expected result. Mocking is all about whitebox testing. You are designing how the method (and hence the class) interacts with its dependencies. You can’t simply drop in mocks to decouple a blackbox unit test from its dependencies. The types of unit tests that you write with and without mocks are fundamentally different and you need both. (If you simply need to decouple your unit tests from their dependencies, you can use stubs, which are objects that return dummy data.) Without mocks, you are writing blackbox tests that specify the inputs and outputs of an opaque API. With mocks, you are writing whitebox tests that specify how a class and its instances interact with its dependencies. A subtle difference, but an important one! So here is the completed, whitebox unit test:

[Test]
public void ShouldReturnSameInstanceWhenRetrievingSameDepartment() {
    MockRepository mockery = new MockRepository();
    IDepartmentMapper mapper = mockery.CreateMock<IDepartmentMapper>();
    IList<IDepartment> mockDepartments = new List<IDepartment>();
    for(int i = 0; i < 4; i++) {
        IDepartment department = mockery.CreateMock<IDepartment>();
        mockDepartments.Add(department);
        Expect.Call(department.Id).Return(i);
        Expect.Call(mapper.FindById(i)).Return(department);
    }
    Expect.Call(mapper.FindAll()).Return(mockDepartments);
    mockery.ReplayAll();
    DepartmentRepository repository = new DepartmentRepository(mapper);
    IList<IDepartment> departments = repository.FindAll();
    foreach(IDepartment d1 in departments) {
        Assert.IsNotNull(d1);
        IDepartment d2 = repository.FindById(d1.Id);
        Assert.IsNotNull(d2);
        Assert.AreSame(d1, d2);
    }
    mockery.VerifyAll();
}

Let me make a few comments in closing. The MockRepository is usually created in SetUp and mockery.VerifyAll() is called in TearDown. I typically have various helper methods for configuring the mock objects as I use them in multiple unit tests. ReplayAll is important because it switches Rhino.Mocks from record mode (these are the calls you should be expecting and this is how you should respond) to replay mode, where real calls are being made to the mock objects.
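In sketch form, the fixture boilerplate looks something like this (just the SetUp/TearDown plumbing; the individual tests record expectations, call ReplayAll(), and exercise the code under test as shown above):

using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class DepartmentRepositoryTests {
    private MockRepository mockery;

    [SetUp]
    public void SetUp() {
        // A fresh MockRepository for every test
        mockery = new MockRepository();
    }

    [TearDown]
    public void TearDown() {
        // Verifies that every expectation recorded during the test was satisfied
        mockery.VerifyAll();
    }
}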

Another thing to note is the heavy use of interfaces, because interfaces make mocking easier. With classes, you can only mock virtual methods/properties because the mocking framework dynamically creates a derived class. (The method/property needs to be virtual so the derived mock object can override the functionality.) This is one reason why TDD code typically makes such heavy use of interfaces. Lastly, note the use of dependency injection, which is necessary so that we can mock in the first place. You need a way to substitute your mock easily for the real object that it’s replacing. By using TDD with mocking, you have to create a loosely-coupled, easily tested design in order to use mock objects in the first place. Remember, mocking is useful for whitebox testing isolated classes while blackbox testing is useful for verifying the functionality of an API.
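A quick illustration of the virtual requirement (a hypothetical class, not from the project above):

public class PriceCalculator {
    // Mockable: Rhino.Mocks generates a subclass at runtime and overrides this member.
    public virtual decimal GetBasePrice(int productId) {
        return 10m; // stand-in for real pricing logic
    }

    // Not mockable: non-virtual, so the real implementation always runs, even on a mock.
    public decimal GetDiscountedPrice(int productId) {
        return GetBasePrice(productId) * 0.9m;
    }
}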

* This isn’t to say that UML is worthless. I use it frequently to whiteboard ideas. It’s good for communicating object structures in a common language. I do, however, think that days or weeks of writing nothing but UML is worthless, a view I didn’t hold a few years ago. The basic problem with UML is that it’s too easy to design something overly complex or completely unworkable. We’ve all had it happen: you start implementing a nice-looking UML diagram and you slap your head saying, “Oops, forgot about that small detail.” Or you realize that what looks like a perfectly reasonable assumption is completely false. TDD values writing working code. Working code tends to quickly identify those gotchas we always run into in software development.

** Jean-Paul Boodhoo does a great job explaining the difference between testing and behaviour specification here. He also mentions a new tool, NSpec, which is very similar in flavour to NUnit, but highlights the behaviour specification aspect of TDD, or behaviour driven design (BDD) as some people are now calling it.