
Earlier today, an unknown hacker exploited a security vulnerability in ScrewTurn Wiki and replaced the altnetpedia site with pron links. I have restored the site and upgraded to the latest version of ScrewTurn, which is v2.0.30. (We were running v2.0.21 and the vulnerability was fixed in v2.0.24.) My bad for not keeping the site updated with the latest version. I apologize to the ALT.NET community for not being more vigilant with patches to the wiki software. I’ve added the ScrewTurn RSS feed to my reader to keep me apprised of future fixes.

To the unknown hacker, I hope that you’re satisfied. I had booked the afternoon off to take my two boys (ages 3 and 5) to the Science Centre, but instead spent it undoing your evil.

During my geekSpeak screencast last week, one of the attendees asked:

Any recommendations for refactoring existing code to insert interfaces? (e.g., what’s the best dependency to break first, the database?)

Excellent question! Most of us do not have the luxury of working on greenfield projects, but instead work on brownfield projects – existing applications that could use some tender loving care. Brownfield projects are often afflicted with legacy code. What do I mean by legacy code? I agree with Michael Feathers’ definition:

Legacy code is simply code without tests.

Michael elaborates further saying:

Code without tests is bad code. It doesn’t matter how well written it is; it doesn’t matter how pretty or object-oriented or well-encapsulated it is. With tests, we can change the behavior of our code quickly and verifiably. Without them, we really don’t know if our code is getting better or worse.

If you haven’t read Michael’s book, Working Effectively with Legacy Code, you really must. It is on my short list of must-read development books. Michael provides excellent strategies and patterns for safely implementing tests around untested code, which is crucial in order to make non-breaking changes to an existing application AKA refactoring. If you can refactor your code, you can improve your code.

Now back to the question at hand… Which dependency to break first? Let me turn the question around. Which dependency is causing you the most pain?

In my experience, it is typically the database, as round-tripping to the database on every test to fetch or save data dramatically slows the test suite down. If your tests are slow, you’ll run them less often, and they start to lose their value as a safety net. (N.B. You still want integration tests that access the database. You just don’t want each and every unit test to do so.) The database also requires a lot of effort to keep test data consistent, often through setup scripts or by rolling back transactions at the end of each test.
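For example, a first cut at breaking the database dependency might look like this. (A rough sketch; the names Order, IOrderRepository, and OrderFulfillmentService are made up for illustration, not taken from any particular project.)

public class Order
{
    public int Id { get; set; }
    public bool Fulfilled { get; private set; }
    public void MarkAsFulfilled() { Fulfilled = true; }
}

// The interface is all most unit tests ever see; the SQL-backed
// implementation lives behind it and is exercised only by integration tests.
public interface IOrderRepository
{
    Order GetById(int orderId);
    void Save(Order order);
}

public class OrderFulfillmentService
{
    private readonly IOrderRepository repository;

    public OrderFulfillmentService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public void Fulfill(int orderId)
    {
        Order order = repository.GetById(orderId);
        order.MarkAsFulfilled();
        repository.Save(order);
    }
}

Unit tests hand OrderFulfillmentService an in-memory or mocked IOrderRepository, so they never wait on the database or worry about test data.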

Other areas that often cause pain are integration points – web services, DCOM/Enterprise Services, external text files, … anywhere your application relies on an external application. Integration points are a problem for tests because if your tests rely on them, they will fail whenever an external application is unavailable due to crashes, service outages, upgrades, external network failures, behaviour changes, …

Imagine that your e-commerce website integrates with 6 external systems (credit card processor, credit check, inventory, sales, address verification, and shipping), and your development environment integrates with DEV/QA versions of each of these services. Each service has 95% uptime, which translates into 1.5 days of downtime a month for maintenance, upgrades, and unexpected outages. The chance of all systems being available is the product of their availabilities: 95%*95%*95%*95%*95%*95% = 73.5% uptime for all 6 integration points. If your tests use these test systems directly, your test suite will fail over 25% of the time for reasons beyond your control. Is that because you introduced a breaking change, or because one of your integration points is temporarily unavailable or misbehaving? Life gets worse when you integrate with more systems or when the availability of those systems is lower. If you have to integrate with 12 integration points with 95% availability, your test suite will only pass 54% of the time. If your 6 test systems only have 90% availability, your test suite only passes 53% of the time. In each case, it’s a coin toss whether a failing suite means the change you just made actually broke the application. When you start getting a lot of false failures (tests that fail because an integration point is down or misbehaving, not because of your code), you stop trusting your tests and you’re essentially flying without a parachute.

By decoupling yourself from your integration points and using interfaces for the majority of your tests, you know that the code base itself is healthy, and you can test the interactions with those integration points separately.
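As a rough sketch of what that looks like in code (ICreditCardProcessor, FakeCreditCardProcessor, and CheckoutService are hypothetical names, and the test uses NUnit-style attributes as an assumption; substitute your test framework of choice):

using NUnit.Framework;

public interface ICreditCardProcessor
{
    bool Charge(string cardNumber, decimal amount);
}

public class CheckoutService
{
    private readonly ICreditCardProcessor processor;

    public CheckoutService(ICreditCardProcessor processor)
    {
        this.processor = processor;
    }

    public bool PlaceOrder(string cardNumber, decimal amount)
    {
        return processor.Charge(cardNumber, amount);
    }
}

// Hand-rolled fake used only by tests: no network calls, no downtime.
public class FakeCreditCardProcessor : ICreditCardProcessor
{
    public bool ChargeResult = true;
    public decimal LastAmountCharged;

    public bool Charge(string cardNumber, decimal amount)
    {
        LastAmountCharged = amount;
        return ChargeResult;
    }
}

[TestFixture]
public class CheckoutServiceTests
{
    [Test]
    public void Completes_order_when_charge_succeeds()
    {
        FakeCreditCardProcessor processor = new FakeCreditCardProcessor();
        CheckoutService checkout = new CheckoutService(processor);

        Assert.IsTrue(checkout.PlaceOrder("4111111111111111", 42.00m));
        Assert.AreEqual(42.00m, processor.LastAmountCharged);
    }
}

The real, web-service-backed implementation of ICreditCardProcessor still gets exercised, but only by a small number of integration tests that are allowed to fail when the test system is down.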

To figure out which dependency to break first, take a critical look at your codebase. What is causing you the most pain? Pain can manifest itself as long debug sessions, waiting for external services to be available, high bug counts, … Solve the pain by decoupling yourself from that dependency. Then look at what is now causing you the most pain and solve that. Lather, rinse, repeat. It might take a few weeks or months, but eventually you’ll have a codebase that is a pleasure to work with. If you don’t believe me, read this email that I received from Karl (originally posted here):

Hi James,

I’m writing you because I just wanted to thank you!

It was about two months ago and I attended TechEd/Orlando. I have to say that it was the first time for me and it really was an honor to be the one and only chosen from approximately 300 software developers working in my company to go to Orlando. I was very impressed about the good quality of the sessions, especially on the architecture track, but your tiny little discussion on Tuesday evening really opened my mind. At that time I had been working as a software developer for 7 years with about 5 years experience in software design and project management and 5 years of .NET experience. I was fighting with a 400,000 LOC .NET legacy monster that’s used by public safety agencies around the world in security related areas. I have a team of 12 developers and we were trying really hard to keep this beast up and running and extend it with new features. I think you can imagine that most of the time we were using the trial and error approach to get new features in (just hack it in and prepare for long debugging sessions hunting weird bugs in parts of the application you never expected to be affected by the new feature…). The main problem was – guess what – the dependencies. There were about 50 classes (all singleton “Managers”), and every single manager was tied to at least 10 other managers – no interfaces at all – and that’s just one of several layers of the app… We knew that dependencies were our problem, but we had no clue how to solve it – there was this chicken/egg problem – I want to decouple my system, which needs a lot of refactoring. To ensure that I don’t break anything I’d need unit tests but I can’t use them because my system is so highly coupled ;-) We have tried TypeMock, but my feeling was that this went in the wrong direction. Needless to say that this attempt failed.

During the discussion after your session you gave me some quite useful hints:

1. Read Applying Domain Driven Design and Patterns by Jimmy Nilsson
2. Read Working Effectively with Legacy Code by Michael Feathers
3. Use ReSharper (especially for Refactoring)
4. Use a Mock-Framework (preferably RhinoMocks)
5. Use a Dependency Injection Framework like Spring.NET

I bought Jimmy Nilsson’s book in the conference store and read it cover to cover on my flight back to Vienna. Then I bought the second book and read it within one week. I started to use ReSharper more extensively to apply some of the patterns from Michael Feathers’ book and get some unit tests in place. I extracted a lot of interfaces, brought Spring.NET into action, and used RhinoMocks and the VS2005 built-in unit test framework to write some useful unit tests. I also used the built-in code coverage functionality in combination with the unit tests. In addition, we have already started the design of a messaging-based service application that we want to develop in a very TDDish style.

As you can see there’s a lot going on since I attended your session. It was really this discussion about agile principles that gave me sort of a boost.

So again – thanks for opening my mind and keep on doing this great work!

Regards,
Karl

In my opinion, Karl and his team are the real heroes here. You can be a hero too by taming your software dependencies!

Today at lunch* I’ll be joining Glen Gordon and Lynn Langit on geekSpeak to talk about Taming Your Software Dependencies. Specifically I’ll be talking about moving from tightly-coupled to loosely-coupled architectures using dependency inversion, dependency injection, and inversion of control containers. geekSpeak is an interactive LiveMeeting driven by audience questions with no PowerPoint and lots of code. Come and geek out with me on geekSpeak! Register here.

* Today at lunch == Wednesday, March 26, 2008 from 12-1pm PST or 1-2pm MST or 2-3pm CST or 3-4pm EST or …

My latest article just hit the web in the March 2008 issue of MSDN Magazine. Loosen Up: Tame Your Software Dependencies for More Flexible Apps takes you on a journey from a highly-coupled architecture, which we’re all familiar with, to gradually more loosely-coupled ones. First stop is the problems inherent in highly-coupled applications. To start solving those problems, I look to dependency inversion and service location. Next stop is poor man’s dependency injection and then a simple, hand-rolled inversion of control (IoC) container. From there, I look at the features provided by full-fledged IoC containers and use Castle Windsor as an example, along with some Binsor thrown in for configuration goodness. My goal was to help developers understand the benefits of dependency injection and IoC containers by showing them the problems solved at each stage of the journey.
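To give a flavour of one of the early stops on that journey, here is roughly what poor man’s dependency injection looks like (illustrative names, not code lifted from the article):

public interface ICustomerRepository { /* data-access members elided */ }

public class SqlCustomerRepository : ICustomerRepository { }

public class CustomerService
{
    private readonly ICustomerRepository repository;

    // Poor man's dependency injection: the default constructor news up the
    // production dependency, while the overload lets tests (or an IoC
    // container later on) supply a different implementation.
    public CustomerService() : this(new SqlCustomerRepository()) { }

    public CustomerService(ICustomerRepository repository)
    {
        this.repository = repository;
    }
}

The article then shows how an IoC container such as Castle Windsor takes over the job that the default constructor is doing here.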

A big thanks to Howard Dierking, MSDN Magazine editor extraordinaire, for his encouragement and not having an issue with using Windsor for the advanced examples. Thanks to Oren Eini for help debugging a Binsor configuration problem in one of the examples. And an especially big thanks to my spouse and two young sons for their patience while I was writing.

Thanks in advance for reading the article. I welcome your feedback and questions.

DISCLAIMER: For the agilists reading this post, there is nothing new here. I just hear a lot of misconceptions around terms like YAGNI and wanted to provide my own take on things.

YAGNI is an acronym for “you ain’t gonna need it”. You often hear it bandied about agile shops. One developer suggests an over-architected solution and the other cries out YAGNI. For example:

Dev#1: We can transfer that file using BizTalk!

Dev#2: YAGNI. We can download it using HttpWebRequest.

I’m not knocking BizTalk here. My point is that if all you need to do is transfer a file from point A to point B, BizTalk is an awfully complicated and expensive way to do it.

Many agile critics see YAGNI as an excuse for developers to do stupid things. Critics claim that YAGNI encourages developers to hard-code connection strings and other silliness. “In development, you’re not going to need to change the connection string. So by the YAGNI principle, you should hard-code your connection strings. Isn’t agile stupid?” YAGNI is a principle like any other and should be applied intelligently. It’s about keeping simple problems simple. It’s about the right tools for the right job – don’t try to drive a finishing nail with a sledgehammer. Too often we as an industry implement “enterprise solutions” to solve otherwise trivial problems.

YAGNI isn’t an excuse to be sloppy with your architecture. If you’re building the next hot social bookmarking application, it better scale. You better design with caching, distribution, load-balancing, etc. in mind. I would want some architectural spikes demonstrating acceptable performance at high user loads. Similarly if you’re developing an internal application to be used by 10 users, I would bet that you don’t need the complexity of a load-balanced web farm, distributed middle-tier cache infrastructure, and a clustered database. YAGNI is about choosing the appropriate amount of complexity for the problem at hand.

YAGNI is about building software at the last responsible moment. The “responsible” part is key. You can’t slap an asynchronous architecture in after the fact. You need to design that in up front. At the same time you should avoid speculative generalization. Solve the problems at hand with a responsible eye toward future requirements. Keep yourself lean and nimble enough to respond to those future requirements, but don’t build them “just in case we need them”. What you will need in the future seldom has any resemblance to what you might build now. Even worse, unused features add complexity and rigidity to a codebase. You’re actually worse off than if you didn’t have those features at all! Keeping your architecture simple, lean, and nimble requires a lot of discipline.

So next time someone suggests using a SOA-based ESB to integrate the disparate workflows in the document lifecycle, ask yourself whether a wiki couldn’t work just as well. :^)

The Plumbers are back for another half-hour of mayhem. Get it here.

  • Heroes Happen {Here} Launch (http://www.microsoft.com/canada/heroeshappenhere)
  • DevTeach Toronto May 12-16 (http://www.devteach.com)
  • Discussing Hanselminutes #103 Quetzal Bradley on Testing after Unit Tests (http://tinyurl.com/2tby39)
  • Release It! by Michael Nygard (http://tinyurl.com/2vtbrj)
  • Bil {Hearts} IE 8 (http://tinyurl.com/yt8oh8)
  • John {Hearts} Polymorphic Podcast (http://polymorphicpodcast.com/)
  • MSDN Mag article – Loosen Up: Tame Software Dependencies for More Flexible Apps (Coming soon to http://msdn.microsoft.com/msdnmag)
  • James talking about Taming Software Dependencies on GeekSpeak – Wed. Mar. 26 @ 12-1pm PST (http://tinyurl.com/38a2xx)
  • John Guests on .NET Rocks (http://tinyurl.com/2uczpx)
  • Living Life on the Edge

The Plumbers are finally back after a long hiatus with a new 1/2 hour format. You can download it from here. In this episode we discuss:

  • Heroes Happen Here Launch
  • SQL Server 2008
  • Visual Studio 2008
  • Extension methods
  • JavaScript debugging and Intellisense
  • Lambdas, LINQ, and PLINQ
  • DevTeach past and future
  • ALT.NET Open Space Conference coming to Canada
  • ASP.NET MVC Framework
  • MVCContrib Project on CodePlex
  • 360Voice.com

As promised, Microsoft has made the source for the .NET Framework available for debugging purposes. You’ll need to be running Visual Studio 2008 and install this QFE (aka patch). Full instructions for how to enable .NET Framework source debugging can be found on Shawn Burke’s blog here. You can also read Scott Guthrie’s announcement here.

As luck would have it, I couldn’t get this working initially. There was no “Load Symbols” in the right-click menu of the call stack. (“Load Symbols” should appear right above “Symbol Load Information…”)

[Screenshot: the call stack context menu without a "Load Symbols" item]

Double-clicking in my call stack on any frames in the .NET Framework:

[Screenshot: call stack frames inside the .NET Framework]

resulted in:

[Screenshot: the resulting error]

After some quick investigation and reading through Shawn’s troubleshooting section at the bottom of his post, I realized that the _NT_SYMBOL_PATH environment variable was overriding the settings in Visual Studio. (I had set up _NT_SYMBOL_PATH to the Microsoft Symbol Server for WinDbg.) The problem is that the symbols provided by the Microsoft Symbol Server have their source information stripped out.

To solve this problem, you have two options.

  1. Delete the environment variable and just set the symbol paths in Visual Studio and WinDbg independently as noted in Shawn’s blog post above.
  2. Add the Reference Source Symbol Server to _NT_SYMBOL_PATH. (This has the advantage that the setting is shared by all debugging tools, including Visual Studio and WinDbg.)

Regardless of which option you choose, first close all instances of Visual Studio 2008 and delete all the files in your local symbol cache. Visual Studio has no way of knowing which version of the symbols you have. So if you already have System.Web.pdb downloaded from the Microsoft Symbol Server – the PDB file without source information – you won’t be able to debug into System.Web.

To add/modify the environment variable:

  1. Right-click Computer… Properties…
  2. On Vista, click Advanced System Settings…
  3. Click on the Advanced tab, then Environment Variables…
  4. Click New… under System Environment Variables (or Edit… if _NT_SYMBOL_PATH is already defined).
    • Variable name: _NT_SYMBOL_PATH
    • Variable Value: SRV*c:\dev\symbols*http://referencesource.microsoft.com/symbols;SRV*c:\dev\symbols*http://msdl.microsoft.com/download/symbols
  5. Click OK three times.

Maybe I’m just dense, but it took me a while to figure out the syntax for _NT_SYMBOL_PATH. Note that you have to separate symbol servers with a semi-colon. Specifying SRV*LocalCache*Server1*Server2 fails miserably. Symbols don’t download and no errors are shown. The four-part syntax is valid, but meant for caching symbols on your network for all developers to share: LocalCache is on each developer’s box, Server1 is a file share on your network with read-write access for all developers, and Server2 is the actual public symbol server. If you specify a public symbol server as Server1 in the four-part format, symbol loading just fails silently. Use the semi-colon separated syntax noted above to specify multiple public symbol servers and everything works as expected.
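To make the distinction concrete (the \\buildserver\symbols file share below is a made-up example):

Two public symbol servers, each cached locally (the form that works):
SRV*c:\dev\symbols*http://referencesource.microsoft.com/symbols;SRV*c:\dev\symbols*http://msdl.microsoft.com/download/symbols

Four-part form (local cache, then a read-write team file share, then a single public server):
SRV*c:\dev\symbols*\\buildserver\symbols*http://msdl.microsoft.com/download/symbols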

The local symbol cache can be the same for all public symbol servers. It can be anywhere that you have read/write access, such as c:\symbols, c:\users\james\symbols, or – my preferred location – c:\dev\symbols.

You should be sure that the Reference Source Symbol Server is before the Microsoft Symbol Server. The order of symbol servers is the order of search. If your debugger doesn’t find the correct PDB file in the local symbol cache, it will check the first symbol server. If the first symbol server doesn’t have the appropriate PDB file, it will proceed to the second. So if you have the Microsoft Symbol Server first, you’ll be downloading the PDB files without source information.

Which brings me to my last point. Right now, source has been released for:

  • .NET Base Class Libraries (including System, System.CodeDom, System.Collections, System.ComponentModel, System.Diagnostics, System.Drawing, System.Globalization, System.IO, System.Net, System.Reflection, System.Runtime, System.Security, System.Text, System.Threading, etc.)
  • ASP.NET (System.Web, System.Web.Extensions)
  • Windows Forms (System.Windows.Forms)
  • Windows Presentation Foundation (System.Windows)
  • ADO.NET and XML (System.Data and System.Xml)

LINQ, WCF, Workflow, and others will be following in the coming months, according to Scott Guthrie. So if I debug WCF today, I’ll download symbols from the Microsoft Symbol Server without source information since the Reference Source Symbol Server won’t have the PDB files. When the PDB files are released, I won’t be able to debug the source until I delete the old PDB files without source information and force a re-download. You can do this by deleting the appropriate folder in your local symbol cache – in this case, the c:\dev\symbols\System.ServiceModel.pdb folder – or you can just delete the entire contents of your local symbol cache and re-download everything. If you’re not able to view source on something that you know is available, the easiest solution is to just clear out your local cache and let Visual Studio download the symbols again from the correct location. Downloading symbols over a broadband connection doesn’t take that long and is a lot faster than trying to troubleshoot which PDB files are causing you problems.

Happy Debugging!!!

A great new feature of Visual Studio 2008 is multi-targeting, which allows VS 2008 to compile for .NET 2.0, 3.0, or 3.5 simply by changing a project property.

[Screenshot: the Target Framework dropdown in the project properties designer]

You might be thinking, now I don’t have to keep Visual Studio 2005 and 2008 installed. I can just use Visual Studio 2008 for all my projects! Well, yes, but with one big proviso. Code targeting .NET 2.0 and written in VS2008 may only compile in VS2008! The reality is that multi-targeting changes Intellisense, the project templates, and the assemblies that you’re offered, but your code is still compiled using the C# 3.0 or VB9 compilers regardless of which .NET Framework version you target. Compiling a project targeting .NET 2.0 using VS2008 results in this output:

------ Build started: Project: Multitargetting, Configuration: Debug Any CPU ------
C:\Windows\Microsoft.NET\Framework\v3.5\Csc.exe /noconfig /nowarn:1701,1702 /errorreport:prompt /warn:4 /define:DEBUG;TRACE /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /debug+ /debug:full /filealign:512 /optimize- /out:obj\Debug\Multitargetting.exe /target:exe Program.cs Properties\AssemblyInfo.cs

Compile complete -- 0 errors, 0 warnings
Multitargetting -> c:\dev\Examples\Multitargetting\Multitargetting\bin\Debug\Multitargetting.exe
========== Build: 1 succeeded or up-to-date, 0 failed, 0 skipped ==========

Note the path to csc.exe – C:\Windows\Microsoft.NET\Framework\v3.5. We’re using the C# 3.0 compiler that ships with .NET 3.5. (Don’t get me started about the renaming of WinFX to .NET 3.0 again. My friends at Microsoft already know how I feel about that one.) Generally this doesn’t matter because .NET 3.0 and .NET 3.5 are additive libraries to .NET 2.0. .NET 2.0, 3.0, and 3.5 all run on the CLR that shipped with .NET 2.0. (I originally read about this in Dustin Campbell’s C#2.5 post.)

So how do we get all the crazy goodness of lambda expressions, anonymous types, LINQ, extension methods, and more? In .NET 3.0, WCF, WPF, WF, and CardSpace were all just additive libraries on top of .NET 2.0. In .NET 3.5, the new features can be divided into two categories – additive libraries and compiler enhancements.

Feature                            | Library | Compiler
implicitly-typed locals (var)      |         | X
lambda expressions                 |         | X
automatic properties               |         | X
anonymous types                    |         | X
object and collection initializers |         | X
LINQ                               | X       | X
Expression Trees                   | X       | X
Extension Methods                  | X*      | X

* See below

Any feature not requiring library support is just syntactic sugar provided by our compilers. If you look under the covers using Lutz Roeder’s Reflector, you’ll see that the compilers are generating the same old CLR 2.0-compatible MSIL as they always were.

Implicitly-Typed Locals

Let’s take a look at implicitly-typed locals using the “var” keyword:

var foo = "Hello, World!";
 
Reflector says! (Harkening back to Family Feud and Richard Dawson.)
 
string foo = "Hello, World!";
 
The compiler was able to infer the type of “foo” from the type of its initializing expression. No magic MSIL instruction to infer type. It’s all in the compilers.

Lambdas

Let’s take a look at our new swanky lambda expressions:
 
Action<string> display = msg => Console.WriteLine(msg);
 
Reflector says!
 
Action<string> display = delegate(string msg) { Console.WriteLine(msg); };
 
Once again, it’s just compiler magic turning the terse lambda expression (that funky “params => body” syntax) into the anonymous delegate syntax that we’ve known and loved since .NET 2.0. (To be honest, that’s not quite what Reflector says. The C# compiler actually caches the anonymous delegate in a static field for performance reasons, but the code above is close enough for purposes of this discussion.)
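For the curious, the cached form looks roughly like this (the field name is simplified; the real compiler-generated name contains characters that aren’t legal in hand-written C#):

[CompilerGenerated]
private static Action<string> CachedAnonymousMethodDelegate1;

// ...and inside the method:
if (CachedAnonymousMethodDelegate1 == null)
{
    CachedAnonymousMethodDelegate1 = delegate(string msg) { Console.WriteLine(msg); };
}
Action<string> display = CachedAnonymousMethodDelegate1;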

Automatic Properties

What about automatic properties?

public string Hello { get; set; }

Reflector says!

[CompilerGenerated]
private string <Hello>k__BackingField;
[CompilerGenerated]
public string get_Hello()
{
    return this.<Hello>k__BackingField;
}
[CompilerGenerated]
public void set_Hello(string value)
{
    this.<Hello>k__BackingField = value;
}
Once again, more compiler magic.

Anonymous Types

var position = new { Lat=42, Long=42 };

Reflector says!

[CompilerGenerated, DebuggerDisplay(@"\{ Lat = {Lat}, Long = {Long} }", Type="<Anonymous Type>")]
internal sealed class <>f__AnonymousType0<<Lat>j__TPar, <Long>j__TPar> {
  // Compiler generated goo omitted for clarity
}

Compiler magic.

Object/Collection Initializers

Object and collection initializers allow us to create and initialize objects/collections in a single statement. If you don’t pass any constructor arguments (as in the example below), the object needs an accessible parameterless constructor.

Program program = new Program { Hello="World" };
 
Reflector says!

Program <>g__initLocal0 = new Program();
<>g__initLocal0.Hello = "World";
Program program = <>g__initLocal0;
 
The C# 3.0 compiler is changing our object initializer into a plain old object instantiation via the parameterless constructor followed by a bunch of property sets. Nothing else to see here. Move it along…
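Collection initializers get the same treatment; they lower to a sequence of Add calls. A sketch (again, the local name is simplified from the compiler-generated one):

List<string> names = new List<string> { "Alice", "Bob" };

// which Reflector shows (roughly) as:

List<string> initLocal = new List<string>();
initLocal.Add("Alice");
initLocal.Add("Bob");
List<string> names = initLocal;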

LINQ, Expression Trees, and Extension Methods

Each of these features requires library support from System.Core, a new assembly that ships with .NET 3.5. Without System.Core installed in the GAC, your code isn’t going to run. So you can’t use these features without .NET 3.5 installed on the client.

The one exception is extension methods. If you try to compile an extension method while targeting .NET 2.0, you will receive the following compile error:

Cannot define a new extension method because the compiler required type ‘System.Runtime.CompilerServices.ExtensionAttribute’ cannot be found. Are you missing a reference to System.Core.dll?

As noted by Jared Parsons here, you can simply define the attribute yourself:

namespace System.Runtime.CompilerServices {
    [AttributeUsage(AttributeTargets.Method)]
    public class ExtensionAttribute : Attribute {
    }
}

Extension methods now compile while targeting .NET 2.0.

N.B. If you target .NET 3.5 as well as .NET 2.0 with this code, you’ll end up with a duplicate definition warning under .NET 3.5 as noted by Dustin Campbell here. If you’re really going to use this trick, you should probably wrap the class in some conditional directives that omit the definition when compiled under .NET 3.5.
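A minimal sketch of that wrapping, assuming you define a NET20 conditional compilation symbol only in the build configurations that target .NET 2.0:

#if NET20
namespace System.Runtime.CompilerServices
{
    // The compiler only looks this attribute up by name, so a hand-rolled
    // definition is enough to let extension methods compile when targeting .NET 2.0.
    [AttributeUsage(AttributeTargets.Method)]
    public class ExtensionAttribute : Attribute
    {
    }
}
#endif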

Multi-targeting Proviso

What about that proviso I mentioned at the beginning of this post? If everyone on your project is using VS 2008 and targeting .NET 2.0, then all is good. You can use all the new syntactic sugary goodness that the C# 3.0 compiler (and VB9 compiler, if you swing that way) provides for you. If some team members are still using VS 2005, you need to be careful not to introduce C# 3.0 language constructs, because VS2008 will not warn you that you are using them even when you target .NET 2.0. The new constructs are different enough that you’re unlikely to use a lambda expression or automatic property by mistake, but it’s something to keep in mind. Besides, your CruiseControl.NET server is using a NAnt script that compiles with the C# 2.0 compiler to keep everyone honest, right? If not, this is yet another reason to set up a CI server for your project today…

Yes, it’s true. My CodeBetter friends have invited me to join their ranks and I’ve accepted. Like JP, I’ll still be cross-syndicating to both my CodeBetter blog and my own site. No need to subscribe to both. You’ll get the same content either way. To my old readers, sit back, relax, and keep enjoying yourselves. Let me assure you that my content and focus aren’t going to change. To my new CodeBetter readers, welcome to my humble little corner of the web. Like many of the other CodeBetter bloggers, I’ll be blogging about agile development on the .NET platform. Thanks again to the CodeBetter crew for having me.