Browsing Posts in .NET General

I don’t have to remind everyone that we’re in the middle of a world-wide economic downturn. When the economy is good, it is hard enough to convince your boss to re-build an application from scratch. When the economy is bad, it is bloody near impossible. In the coming months (and potentially years), I expect that as developers we’re going to be seeing more and more brownfield projects, rather than greenfield ones. We’re going to see more push for evolutionary development of applications rather than wholesale replacement. We will be called upon to improve existing codebases, implement new features, and take these projects in initially unforeseen directions. We will have to learn how to be Working Effectively with Legacy Code. (Took some effort to coerce the title of Michael Feathers’ excellent book into that last sentence.)

A lot of companies have a tremendous investment in existing “classic” ASP.NET websites, but there is a desire to evolve these sites rather than replace them, especially given these tough economic times. Howard Dierking, editor of MSDN Magazine, has asked me to write a 9-week series entitled From Web Dev to RIA Dev in which we will explore refactoring an existing “classic” ASP.NET site. We want to improve an existing ASP.NET site using new technologies, such as AJAX, jQuery, and ASP.NET MVC. We want to show that you can adopt better practices, such as continuous integration, web testing (e.g. WatiN, WatiR, Selenium), integration testing, separation of concerns, layering, and more.

So I have two questions for you, Dear Reader…

  1. Can you think of a representative “classic” ASP.NET website (or websites) for the project?
  2. What topics would you like to see covered?

I should clarify what I mean…

“Classic” ASP.NET Applications

I’m currently considering PetShop, IBuySpy, DasBlog, SubText, and ScrewTurn Wiki. I’m not looking for one rife with bad practices, just an ASP.NET project in need of some TLC – one that doesn’t have a decent build script, isn’t under CI, is a bit shy on the testing, has little to no AJAX, etc. The code should be representative of what you would see in a typical ASP.NET application. (For that reason, I am probably going to discount IBuySpy as it is built using a funky webpart-like framework, which is not typical of most ASP.NET applications.) Some of the ASP.NET applications that I just mentioned don’t exactly qualify because they do have build scripts, tests, and other features that I would like to demonstrate. I will get permission from the project owner(s) before embarking on this quest and plan to contribute any code back to the project. Needless to say, the project must have source available to be considered for this article series. So please make some suggestions!

Topics

I have a lot of ideas for technologies and techniques to explore, including proper XHTML/CSS layout, jQuery, QUnit, AJAX, HTTP Modules/Handlers, build scripts, continuous integration (CI), ASP.NET MVC, web testing (probably WatiN or Selenium), refactoring to separate domain logic from codebehind/sprocs, … I will cover one major topic per week over the 9-week series, so I’ve got lots of room for cool ideas. What would you like to see? What do you think is the biggest bang for your buck in terms of improving an existing ASP.NET application?

Depending on the topics covered (based on your feedback here), I might use one site for the entire series or different sites to cover each topic. It would add some continuity to the series to use a single site over the 9 weeks, but after a brief inspection of the codebases mentioned above, I am having my doubts about finding a single representative site. We’ll have to see. Please leave your suggestions in the comments below. Thanks in advance!

Coffee and Code

Joey deVilla (aka The Accordion Guy) from Microsoft’s Toronto office started Coffee and Code a few weeks ago in Toronto and John Bristowe is bringing the experience to Calgary. When John contacted me about the event, I thought to myself, “I like coffee. I like code. I want to be involved!” (Heck, I would order an Americano via intravenous drip if I could.) So John and I will be hanging out at the Kawa Espresso Bar this Friday for the entire day, drinking coffee, cutting code, and talking to anyone and everyone about software development.

John is broadly familiar with a wide variety of Microsoft development technologies, as am I. I’ll also be happy to talk about Castle Windsor (DI/IoC), NHibernate (ORM), OOP and SOLID, TDD/BDD, continuous integration, software architectures, ASP.NET MVC, WPF/Prism, build automation with psake, … Curious what ALT.NET is all about? I’ll be happy to talk about that too! I got my cast off today from my ice skating accident two weeks ago and am in a half-cast now, so I am hopeful that I’ll be able to demonstrate some ReSharper Jedi skills for those curious about the amazing tool that is ReSharper. (I am going to be daring and have a nightly build of ReSharper 4.5 on my laptop to show off some new features.) So come join John and me for some caffeinated coding fun at the Kawa Espresso Bar anytime between 9am and 4pm on Friday, March 13, 2009.

This post has been brought to you by the letter C and the number 4…

A friend, having recently upgraded to Rhino Mocks 3.5, expressed his confusion regarding when to use mocks vs. stubs. He had read Martin Fowler’s Mocks Aren’t Stubs (recommended), but was still confused about how to actually decide whether to use a mock or a stub in practice. (For a pictorial overview, check out Jeff Atwood’s slightly NSFW photo montage of dummies, fakes, stubs, and mocks.) I thought I’d share my response, which cleared up the confusion for my friend…

It’s easy to get confused. Basically, mocks specify expectations. Stubs are just stand-in objects that return whatever you give them. For example, if you were testing that invoices over $10,000 required a digital signature…

// Arrange
var signature = DigitalSignature.Null;
var invoice = MockRepository.GenerateStub<IInvoice>();
invoice.Amount = new Money(10001M);
invoice.Signature = signature;
var signatureVerifier = MockRepository.GenerateMock<ISignatureVerifier>();
signatureVerifier.Expect(v => v.Verify(signature)).Return(false);
var invoiceRepository = MockRepository.GenerateMock<IInvoiceRepository>();
var accountsPayable = new AccountsPayable(signatureVerifier, invoiceRepository);
 
// Act 
accountsPayable.Receive(invoice);
 
// Assert 
invoiceRepository.AssertWasNotCalled(r => r.Insert(invoice));
signatureVerifier.VerifyAllExpectations(); 

I don’t have a real invoice. It’s a proxy generated by Rhino Mocks using Castle DynamicProxy. You just set/get values on the properties. Generally I use the real object, but stubs can be handy if the real objects are complex to set up. (Then again, I would consider using an ObjectMother first.)
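If you haven’t run across the ObjectMother pattern, it’s nothing fancy: just a class with factory methods that hand back pre-canned domain objects for your tests. A rough sketch, assuming a concrete Invoice class with the same properties as the stub above (the types and amounts are purely illustrative):

// A rough ObjectMother sketch - Invoice, Money, and the amounts are
// illustrative stand-ins, not the actual types from the test above.
public static class InvoiceObjectMother
{
    public static Invoice Simple()
    {
        return new Invoice { Amount = new Money(100M) };
    }

    public static Invoice LargeAndUnsigned()
    {
        return new Invoice
                   {
                       Amount = new Money(10001M),
                       Signature = DigitalSignature.Null
                   };
    }
}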

Mocks on the other hand act as probes to detect behaviour. We are detecting whether the invoice was inserted into the database without requiring an actual database. We are also expecting the SignatureVerifier to be called and specifying its return value.

Now the confusing part… You can stub out methods on mocks too. If you don’t care whether a method/property on a mock is called (but you do care about other aspects of the mock), you can stub out just that part. You cannot, however, call Expect or Stub on stubs.
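For example, a quick sketch using the Rhino Mocks 3.5 AAA syntax (ITaxService and its members are made up for illustration):

// ITaxService, GetTaxRate, and RecordRemittance are made-up members for illustration.
var taxService = MockRepository.GenerateMock<ITaxService>();

// Stubbed: we don't care whether GetTaxRate is actually called, but if it is, return 5%.
taxService.Stub(t => t.GetTaxRate("AB")).Return(0.05M);

// Expected: we do care that the remittance gets recorded.
taxService.Expect(t => t.RecordRemittance());

// ... exercise the system under test here ...

taxService.VerifyAllExpectations();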

UPDATE: I’m including my comments inline as they respond to important points raised by Aaron and John in the comments here and many readers don’t bother looking through comments. :)

@Aaron Jensen – As Aaron points out in the comments, you are really mocking or stubbing a method or property, rather than an object. The object is just a dynamically generated proxy to intercept these calls and relay them back to Rhino Mocks. Whether it’s a mock/stub/dummy/fake doesn’t matter.

Like Aaron, I prefer AssertWasCalled/AssertWasNotCalled. I only use Expect/Verify if the API requires me to supply return values from a method/property as shown above.

I also have to agree that Rhino Mocks, while a great mocking framework that I use everyday, is showing its age. It has at least 3 different mocking syntaxes (one of which I contributed), which increases the confusion. It’s powerful and flexible, but maybe a bit too much. Rhino Mocks vNext would likely benefit from deprecating all but the AAA syntax (the one borrowed from Moq) and doing some house-cleaning on the API. I haven’t given Moq an honest try since its initial release so I can’t comment on it.

@John Chapman – Thanks for the correction. I’ve had Rhino Mocks throw an exception when calling Expect/Stub on a stub. I assumed it was expected behaviour that these methods failed for stubs, but it looks like a bug. (The failure in question was part of an overly complex test and I can’t repro the issue in a simple test right now. Switching from stub to mock did fix the issue though.) stub.Stub() is useful for read-only properties, but generally I prefer getting/setting stub.Property directly. Still, stub.Expect() and stub.AssertWasCalled() seem deeply wrong to me. :)

My first dnrTV episode went live today. I am talking with Carl Franklin about dependency inversion, dependency injection, and inversion of control. I demonstrate how to build a very simple IoC container. My intent is to show developers that there isn’t anything crazy or scary about it. I talk about how IoC facilitates decoupling dependencies and creating more easily testable software. Check it out!

dnrTV #126: James Kovacs’ roll-your-own IoC container
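If you want the flavour of it before watching, the core of a roll-your-own container is only a few dozen lines. Here is a bare-bones sketch (not the exact code from the episode, and it ignores lifetimes, value-type parameters, and error handling):

using System;
using System.Collections.Generic;
using System.Linq;

// A bare-bones container sketch - not the exact code from the episode.
public class TrivialContainer
{
    private readonly Dictionary<Type, Type> registrations = new Dictionary<Type, Type>();

    public void Register<TService, TImplementation>() where TImplementation : TService
    {
        registrations[typeof(TService)] = typeof(TImplementation);
    }

    public TService Resolve<TService>()
    {
        return (TService)Resolve(typeof(TService));
    }

    private object Resolve(Type service)
    {
        Type implementation;
        if (!registrations.TryGetValue(service, out implementation))
        {
            implementation = service; // fall back to the concrete type requested
        }

        // Pick the greediest public constructor and recursively resolve its arguments.
        var constructor = implementation.GetConstructors()
                                        .OrderByDescending(c => c.GetParameters().Length)
                                        .First();
        var arguments = constructor.GetParameters()
                                   .Select(p => Resolve(p.ParameterType))
                                   .ToArray();
        return constructor.Invoke(arguments);
    }
}

// Usage - IInvoiceRepository, SqlInvoiceRepository, and AccountsPayable are stand-ins:
// var container = new TrivialContainer();
// container.Register<IInvoiceRepository, SqlInvoiceRepository>();
// var accountsPayable = container.Resolve<AccountsPayable>();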

Feedback is always welcome.

Carl and I plan to do another show focused on IoC containers in the next few weeks. Specifically we’ll be talking about what a full-fledged container offers over and above the roll-your-own. If you have other IoC questions you would like answered on the next show, drop me an email.

I’ll be one of the speakers at the Calgary .NET User Group this Thursday. First up is Daryl Rasmussem…

Building ASP.NET/AJAX with Visual Studio 2008 by Daryl Rasmussem

AJAX is now built into ASP.NET with Visual Studio 2008 – and because there’s no separate download to install, the fully integrated nature of AJAX gives you better separation of the code from the design, and the generated JavaScript and HTML is cross-browser compatible, including support for both Firefox and Opera.

In this presentation, we will use a real world development scenario to explore the following AJAX techniques in Visual Studio 2008:

  • Partial Page Rendering and the use of Update Panels
  • Using the UpdateProgress control
  • Using Control Extenders from the AJAX Toolkit
  • Creating a new Extender Control
  • Working with web services – from both server and client side code

To C# 3.0… and Beyond by James Kovacs

C# 3.0 introduces lambda expressions, extension methods, automatic properties, and a host of other features. We will look at where C# is today, where it is going tomorrow, and what ideas we can borrow from languages like F# and Ruby to improve our C# code. Plus find out the real reason for the new “var” keyword.
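To give you a contrived taste of the kind of code we’ll be looking at (this is not the actual demo code from the talk):

using System.Collections.Generic;
using System.Linq;

// Contrived example - not code from the actual talk.
public class Invoice
{
    // Automatic properties - no hand-written backing fields
    public decimal Amount { get; set; }
    public bool IsPaid { get; set; }
}

public static class InvoiceExtensions
{
    // An extension method that hangs off any sequence of invoices
    public static decimal TotalOutstanding(this IEnumerable<Invoice> invoices)
    {
        // Lambda expressions doing the heavy lifting inside LINQ operators
        return invoices.Where(i => !i.IsPaid).Sum(i => i.Amount);
    }
}

public static class Demo
{
    public static void Print(IEnumerable<Invoice> invoices)
    {
        // "var" lets the compiler infer the type - indispensable for anonymous types
        var summary = new { Count = invoices.Count(), Outstanding = invoices.TotalOutstanding() };
        System.Console.WriteLine("{0} invoices, {1:C} outstanding", summary.Count, summary.Outstanding);
    }
}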

Date/time: Thursday, May 29, 2008 from 5-8pm

Location: Nexen Conference Centre (801 – 7th Avenue SW, Calgary)

Registration: Calgary .NET User Group Events Calendar (N.B. You must be logged in to see the registration link.)

This morning I was getting ready to record a screencast about ReSharper 4 EAP. To make it easier for people to follow along, I launched Roy Osherove’s excellent utility, Keyboard Jedi. Rather than the expected result, this friendly dialog box greeted me:

[Screenshot: error dialog]

Living La Vida x64

My main development machine is running Vista x64. I’ve used KeyJedi frequently on my Vista x86 laptop and never had a problem. So I immediately suspected a 64-bit compatibility issue. KeyJedi is a .NET 2.0 application, which means that it will execute on the 64-bit CLR if available. (The 64-bit CLR was introduced with .NET 2.0. You don’t have to do anything special. If you install .NET 2.0 or higher on 64-bit Windows, the 64-bit CLR is installed automatically.) The 64-bit CLR uses a 64-bit JIT compiler, which produces x64 (or IA64)* machine code. This is why things like Paint.NET’s Gaussian Blur, which involves a lot of 64-bit arithmetic, run faster on 64-bit Windows. The MSIL is identical, but on 64-bit platforms, the JIT produces 64-bit optimized code. For example, a 64-bit ADD can be done in a single instruction on 64-bit hardware, but requires multiple instructions on 32-bit. (N.B. If you’re running 32-bit Windows on 64-bit hardware, there is no way to access the 64-bit capabilities of the chip as the OS thinks it’s running on a 32-bit chip.)

* x64 is the 64-bit enhanced, x86-compatible instruction set introduced by AMD. IA64 is used by Intel’s Itanium processors and is incompatible with x86. So you have to recompile the world to use IA64. Once Intel realized that not everyone on the planet was willing to recompile their programs, they introduced EM64T for the Pentium and Core chips. EM64T is functionally identical to x64 from AMD.

WoW

So on Vista x64, KeyJedi is running on the 64-bit CLR. But that doesn’t explain why it fails. Most programs, like Paint.NET, just work. What is KeyJedi doing that is special and incompatible with 64-bit Windows? It is registering global system hooks to capture keyboard and mouse messages. Registering hooks and receiving messages involves Win32 calls, which are handled by Michael Kennedy’s Global System Hooks in .NET library. This library includes both managed (Kennedy.ManagedHooks.dll) and unmanaged (SystemHookCore.dll) code. SystemHookCore.dll makes 32-bit Win32 calls and receives 32-bit callbacks. You cannot make 32-bit Win32 calls directly to the 64-bit Windows kernel. You need a translation layer, which is the Windows-on-Windows or WoW layer.

The WoW layer marshals 32-bit Win32 calls to and from their 64-bit equivalents, translating data structures on the way in and out. This marvelous piece of magic is built into 64-bit Windows and allows most 32-bit programs to execute on 64-bit Windows without a problem. The WoW layer sits below all 32-bit processes, happily marshalling system calls back and forth. If you’re running in a 64-bit process, such as the 64-bit CLR host, there is no WoW layer beneath you. So you cannot load a 32-bit DLL, such as SystemHookCore.dll. (This is also why 32-bit kernel-mode drivers don’t work on 64-bit Windows. There is no WoW layer in the kernel. It exists between user and kernel mode.)

Now we know the problem. The question is how to fix it. We could create a 64-bit version of SystemHookCore.dll, but that would involve a lot of spelunking and debugging of unmanaged C++ code. Not exactly how I want to spend my morning. The other option is to force KeyJedi to run on the 32-bit CLR even on 64-bit Windows. Then we would have the WoW layer beneath us and SystemHookCore.dll could merrily assume that it is running on 32-bit Windows while the WoW layer takes care of all the 32-bit to 64-bit Win32 marshalling. So how do we force a managed application to run on the 32-bit CLR…

Any CPU vs. x86

The easiest technique is to modify your project settings in Visual Studio. Just go to Project Properties… Build… Platform target and change it from Any CPU to x86.

[Screenshot: Platform target setting in Visual Studio project properties]

Now you might think, “Wait a second. I’m compiling to MSIL, which is processor independent.” Yes, you are, but when you select Any CPU (the default), your program will load on the 64-bit CLR if available. By selecting x86, you are forcing the program to run on the 32-bit CLR. Then just recompile your program and the problem is solved. Just one problem… Roy never released the source. (Yes, I’ve emailed Roy and asked him to recompile with x86 as the target platform.)
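As an aside, if you ever want to check which CLR your process actually landed on, IntPtr.Size is the quick and dirty trick on .NET 2.0/3.5. A minimal example:

using System;

// IntPtr is 4 bytes on the 32-bit CLR and 8 bytes on the 64-bit CLR.
// (Environment.Is64BitProcess didn't arrive until .NET 4.)
class Bitness
{
    static void Main()
    {
        Console.WriteLine("Running as a {0}-bit process", IntPtr.Size * 8);
    }
}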

No Source, No Problem

We don’t want to modify the application, just ask it to run on the 32-bit CLR. Turns out that the project settings above just twiddle the CORFLAGS inside the PE file header of the executable. The .NET Framework SDK ships with a program called corflags.exe for just this purpose:

[Screenshot: corflags.exe output for KeyJedi.exe]

You’ll notice that 32BIT has a value of 0, which means Any CPU. We want to twiddle this to 1. One little problem. The assembly is signed. Twiddling even a single bit will invalidate the signature and the CLR loader will prevent the application from loading. If the assembly weren’t signed, you could just execute:

corflags KeyJedi.exe /32BIT+

ILDASM to the Rescue

Time to bring out the big guns. We’re going to disassemble KeyJedi.exe into MSIL. Hold onto your hats…

ildasm /all KeyJedi.exe /out=KeyJedi.il

This produces three files:

  • KeyJedi.il
  • KeyJedi.res
  • Osherove.KeyJedi.MainForm.resources

Time to hack some MSIL… Open KeyJedi.il in your text editor of choice and search for “.corflags”. Change the line to:

.corflags 0x0000000b    //  ILONLY 32BITREQUIRED

We can’t compile the code yet because we don’t have the private key that matches the public key embedded in the MSIL. Search for .publickey and delete it. (You could change it to your own public key generated with sn -k, but there’s no reason that we need to sign the assembly.) Now we can re-compile the MSIL using ilasm:

ilasm /res:KeyJedi.res KeyJedi.il /output:KeyJedi-x86.exe

If we execute KeyJedi-x86.exe, we get:

[Screenshot: Keyboard Jedi running on Vista x64]

Success!!! KeyJedi is now running on Vista x64.

Postscript

I’m not going to redistribute the recompiled binary because KeyJedi is Roy’s baby and the fix is really straightforward for him to make. Look to his blog for an update. My main purpose was to help people better understand 64-bit compatibility issues and some tricks that 64-bit Windows does so that, in most cases, you aren’t forced to recompile the world to run the programs you’ve come to depend on.

During my geekSpeak screencast last week, one of the attendees asked:

Any recommendations for refactoring existing code to insert interfaces? (e.g., what’s the best dependency to break first, the database?)

Excellent question! Most of us do not have the luxury of working on greenfield projects, but instead work on brownfield projects – existing applications that could use some tender loving care. Brownfield projects are often inflicted with legacy code. What do I mean by legacy code? I agree with Michael Feathers’ definition:

Legacy code is simply code without tests.

Michael elaborates further saying:

Code without tests is bad code. It doesn’t matter how well written it is; it doesn’t matter how pretty or object-oriented or well-encapsulated it is. With tests, we can change the behavior of our code quickly and verifiably. Without them, we really don’t know if our code is getting better or worse.

If you haven’t read Michael’s book, Working Effectively with Legacy Code, you really must. It is on my short list of must-read development books. Michael provides excellent strategies and patterns for safely implementing tests around untested code, which is crucial if you want to make non-breaking changes to an existing application, AKA refactoring. If you can refactor your code, you can improve your code.

Now back to the question at hand… Which dependency to break first? Let me turn the question around. Which dependency is causing you the most pain?

In my experience, it is typically the database, as round-tripping to a database on each test to fetch or save data dramatically slows the tests down. If you have slow tests, you’re unlikely to run them as often, and then the tests start to lose their value as a safety net. (N.B. You still want integration tests that access the database. You just don’t want each and every unit test to do so.) As well, it requires a lot of effort to keep consistent data for tests, often using test data setup scripts or rolling back transactions at the end of tests.
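To make that concrete, here is a rough sketch of hiding the database behind an interface (the types are hypothetical, and the test uses the Rhino Mocks syntax from my earlier posts):

// Hypothetical types - a sketch of breaking the database dependency with an interface.
public interface IInvoiceRepository
{
    Invoice GetById(int id);
    void Save(Invoice invoice);
}

// The domain logic depends on the abstraction, not on SqlConnection or sprocs.
public class PaymentService
{
    private readonly IInvoiceRepository repository;

    public PaymentService(IInvoiceRepository repository)
    {
        this.repository = repository;
    }

    public void ApplyPayment(int invoiceId, decimal amount)
    {
        var invoice = repository.GetById(invoiceId);
        invoice.ApplyPayment(amount);
        repository.Save(invoice);
    }
}

// A fast unit test that never touches the database (Rhino Mocks AAA syntax):
// var repository = MockRepository.GenerateMock<IInvoiceRepository>();
// repository.Stub(r => r.GetById(42)).Return(new Invoice());
// new PaymentService(repository).ApplyPayment(42, 100M);
// repository.AssertWasCalled(r => r.Save(Arg<Invoice>.Is.Anything));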

Other areas that often cause pain are integration points – web services, DCOM/Enterprise Services, external text files, … Anywhere your application is relying on an external application. Integration points are problems for tests because if you’re relying on them, your tests will fail when the external applications are unavailable due to crashes, service outages, integration point upgrades, external network failures, behaviour changes, …

Imagine that your e-commerce website integrates with 6 external systems (credit card processor, credit check, inventory, sales, address verification, and shipping). Your development environment integrates with DEV/QA versions of each of these services. Each service has 95% uptime, which translates into 1.5 days of downtime a month for maintenance, upgrades, and unexpected outages. The chance of all systems being available is the product of their availabilities or 95%*95%*95%*95%*95%*95%=73.5% uptime for all 6 integration points. If your tests directly use these test systems, your test suite will fail over 25% of the time for reasons beyond your control. Now is that because you introduced a breaking change or because one of your integration points is temporarily unavailable or misbehaving? Life gets worse when you integrate with more systems or when the availability of those systems is lower. Imagine you have to integrate with 12 integration points with 95% availability – your test suite will only pass 54% of the time. Or if your 6 test systems only have 90% availability, your test suite only passes 53% of the time. In each case, it’s a coin toss whether you know for certain whether the change you just made broke the application. When you start getting a lot of false negatives (failing tests when the problem is in an integration point), you stop trusting your tests and you’re essentially flying without a parachute.
By decoupling yourself from your integration points by using interfaces for the majority of your tests, you know that the code base is healthy and you can separately test the interactions with your integration points.

To figure out which dependency to break first, take a critical look at your codebase. What is causing you the most pain? Pain can manifest itself as long debug sessions, waiting for external services to be available, high bug counts, … Solve the pain by decoupling yourself from that dependency. Then look at what is now causing you the most pain and solve that. Lather, rinse, repeat. It might take a few weeks or months, but eventually you’ll have a codebase that is a pleasure to work with. If you don’t believe me, read this email that I received from Karl (originally posted here):

Hi James,

I’m writing you because I just wanted to thank you!

It was about two months ago and I attended TechEd/Orlando. I have to say that it was the first time for me and it really was an honor to be the one and only chosen from approximately 300 software developers working in my company to go to Orlando. I was very impressed about the good quality of the sessions, especially on the architecture track, but your tiny little discussion on Tuesday evening really opened my mind. At that time I had been working as a software developer for 7 years with about 5 years experience in software design and project management and 5 years of .NET experience. I was fighting with a 400,000 LOC .NET legacy monster that’s used by public safety agencies around the world in security related areas. I have a team of 12 developers and we were trying really hard to keep this beast up and running and extend it with new features. I think you can imagine that most of the time we were using the trial and error approach to get new features in (just hack it in and prepare for long debugging sessions hunting weird bugs in parts of the application you never expected to be affected by the new feature…). The main problem was – guess what – the dependencies. There were about 50 classes (all singleton “Managers”), and every single manager was tied to at least 10 other managers – no interfaces at all – and that’s just one of several layers of the app… We knew that dependencies were our problem, but we had no clue how to solve it – there was this chicken/egg problem – I want to decouple my system, which needs a lot of refactoring. To ensure that I don’t break anything I’d need unit tests but I can’t use them because my system is so highly coupled ;-) We have tried TypeMock, but my feeling was that this went in the wrong direction. Needless to say that this attempt failed.

During the discussion after your session you gave me some quite useful hints:

1. Read Applying Domain-Driven Design and Patterns by Jimmy Nilsson
2. Read Working Effectively with Legacy Code by Michael Feathers
3. Use ReSharper (especially for Refactoring)
4. Use a Mock-Framework (preferably RhinoMocks)
5. Use a Dependency Injection Framework like Spring.NET

I bought Jimmy Nilsson’s book in the conference store and read it cover to cover on my flight back to Vienna. Then I bought the second book and read it within one week. I started to use ReSharper more extensively to apply some of the patterns from Michael Feathers’ book to get some unit tests in place. I extracted a lot of interfaces, brought Spring.NET into action, and used RhinoMocks and the VS2005 built-in unit test framework to write some useful unit tests. I also used the built-in code coverage functionality in combination with the unit tests. In addition, we have already started the design for a messaging-based service application that we want to develop in a very TDDish style.

As you can see there’s a lot going on since I attended your session. It was really this discussion about agile principles that gave me sort of a boost.

So again – thanks for opening my mind and keep on doing this great work!

Regards,
Karl

In my opinion, Karl and his team are the real heroes here. You can be a hero too by taming your software dependencies!

Today at lunch* I’ll be joining Glen Gordon and Lynn Langit on geekSpeak to talk about Taming Your Software Dependencies. Specifically I’ll be talking about moving from tightly-coupled to loosely-coupled architectures using dependency inversion, dependency injection, and inversion of control containers. geekSpeak is an interactive LiveMeeting driven by audience questions with no PowerPoint and lots of code. Come and geek out with me on geekSpeak! Register here.

* Today at lunch == Wednesday, March 26, 2008 from 12-1pm PST or 1-2pm MST or 2-3pm CST or 3-4pm EST or …

My latest article just hit the web in the March 2008 issue of MSDN Magazine. Loosen Up: Tame Your Software Dependencies for More Flexible Apps takes you on a journey from a highly-coupled architecture, which we’re all familiar with, to gradually more loosely-coupled ones. First stop is the problems inherent in highly-coupled applications. To start solving those problems, I look to dependency inversion and service location. Next stop is poor man’s dependency injection and then a simple, hand-rolled inversion of control (IoC) container. From there, I look at the features provided by full-fledged IoC containers and use Castle Windsor as an example, along with some Binsor thrown in for configuration goodness. My goal was to help developers understand the benefits of dependency injection and IoC containers by showing them the problems solved at each stage of the journey.
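If you want the gist of the first couple of stops before reading the article, here it is compressed into a few lines (the types are illustrative only; the article walks through fuller examples):

// Illustrative types only - the article walks through fuller examples.

// First stop: tightly coupled - the service news up its own dependency.
public class TightlyCoupledOrderService
{
    private readonly SqlOrderRepository repository = new SqlOrderRepository();

    public void Place(Order order)
    {
        repository.Save(order);
    }
}

// A few stops later: dependency inversion plus constructor injection - depend on
// an abstraction and let the caller (or an IoC container) supply the implementation.
public class OrderService
{
    private readonly IOrderRepository repository;

    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    // Poor man's dependency injection: a default constructor wires up the
    // production dependency while tests use the injecting constructor above.
    public OrderService() : this(new SqlOrderRepository())
    {
    }

    public void Place(Order order)
    {
        repository.Save(order);
    }
}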

A big thanks to Howard Dierking, MSDN Magazine editor extraordinaire, for his encouragement and not having an issue with using Windsor for the advanced examples. Thanks to Oren Eini for help debugging a Binsor configuration problem in one of the examples. And an especially big thanks to my spouse and two young sons for their patience while I was writing.

Thanks in advance for reading the article. I welcome your feedback and questions.

As promised, Microsoft has made the source for the .NET Framework available for debugging purposes. You’ll need to be running Visual Studio 2008 and install this QFE (aka patch). Full instructions for how to enable .NET Framework source debugging can be found on Shawn Burke’s blog here. You can also read Scott Guthrie’s announcement here.

As luck would have it, I couldn’t get this working initially. There was no “Load Symbols” in the right-click menu of the call stack. (“Load Symbols” should appear right above “Symbol Load Information…”)

[Screenshot: call stack context menu]

Double-clicking in my call stack on any frames in the .NET Framework:

[Screenshot: call stack with .NET Framework frames]

resulted in:

[Screenshot: resulting dialog]

After some quick investigation and reading through Shawn’s troubleshooting section at the bottom of his post, I realized that the _NT_SYMBOL_PATH environment variable was overriding the settings in Visual Studio. (I had set up _NT_SYMBOL_PATH to the Microsoft Symbol Server for WinDbg.) The problem is that the symbols provided by the Microsoft Symbol Server have their source information stripped out.

To solve this problem, you have two options.

  1. Delete the environment variable and just set the symbol paths in Visual Studio and WinDbg independently as noted in Shawn’s blog post above.
  2. Add the Reference Source Symbol Server to _NT_SYMBOL_PATH. (This has the advantage that the setting is shared by all debugging tools, including Visual Studio and WinDbg.)

Regardless of which option you choose, first close all instances of Visual Studio 2008 and delete all the files in your local symbol cache. Visual Studio has no way of knowing which version of the symbols you have. So if you already have System.Web.pdb downloaded from the Microsoft Symbol Server – the PDB file without source information – you won’t be able to debug into System.Web.

To add/modify the environment variable:

  1. Right-click Computer… Properties…
  2. On Vista, click Advanced System Settings…
  3. Click on the Advanced tab, then Environment Variables…
  4. Click New… under System Environment Variables (or Edit… if _NT_SYMBOL_PATH is already defined).
    • Variable name: _NT_SYMBOL_PATH
    • Variable Value: SRV*c:\dev\symbols*http://referencesource.microsoft.com/symbols;SRV*c:\dev\symbols*http://msdl.microsoft.com/download/symbols
  5. Click OK three times.

Maybe I’m just dense, but it took me a while to figure out the syntax for _NT_SYMBOL_PATH. Note that you have to separate symbol servers with a semi-colon. Specifying SRV*LocalCache*Server1*Server2 fails miserably. Symbols don’t download and no errors are shown. The four-part syntax is valid, but meant for caching symbols on your network for all developers to share: LocalCache is on each developer’s box, Server1 is a fileshare on your network with read-write access for all developers, and Server2 is the actual public symbol server. If you specify a public symbol server as Server1 in the four-part format, symbol loading just fails silently. Use the semi-colon separated syntax noted above to specify multiple public symbol servers and everything works as expected.

The local symbol cache can be the same for all public symbol servers. It can be anywhere that you have read/write access, such as c:\symbols, c:\users\james\symbols, or – my preferred location – c:\dev\symbols.

You should be sure that the Reference Source Symbol Server is before the Microsoft Symbol Server. The order of symbol servers is the order of search. If your debugger doesn’t find the correct PDB file in the local symbol cache, it will check the first symbol server. If the first symbol server doesn’t have the appropriate PDB file, it will proceed to the second. So if you have the Microsoft Symbol Server first, you’ll be downloading the PDB files without source information.

Which brings me to my last point. Right now, source has been released for:

  • .NET Base Class Libraries (including System, System.CodeDom, System.Collections, System.ComponentModel, System.Diagnostics, System.Drawing, System.Globalization, System.IO, System.Net, System.Reflection, System.Runtime, System.Security, System.Text, System.Threading, etc.)
  • ASP.NET (System.Web, System.Web.Extensions)
  • Windows Forms (System.Windows.Forms)
  • Windows Presentation Foundation (System.Windows)
  • ADO.NET and XML (System.Data and System.Xml)

LINQ, WCF, Workflow, and others will be following in the coming months, according to Scott Guthrie. So if I debug WCF today, I’ll download symbols from the Microsoft Symbol Server without source information since the Reference Source Symbol Server won’t have the PDB files. When the PDB files are released, I won’t be able to debug the source until I delete the old PDB files without source information and force a re-download. You can do this by deleting the appropriate folder in your local symbol cache – in this case, the c:\dev\symbols\System.ServiceModel.pdb folder – or you can just delete the entire contents of your local symbol cache and re-download everything. If you’re not able to view source on something that you know is available, the easiest solution is to just clear out your local cache and let Visual Studio download the symbols again from the correct location. Downloading symbols over a broadband connection doesn’t take that long and is a lot faster than trying to troubleshoot which PDB files are causing you problems.

Happy Debugging!!!