Browsing Posts published in February, 2009

teamcity.codebetter.com

CodeBetter – in collaboration with JetBrains, IdeaVine, and Devlicio.us – is proud to announce the launch of TeamCity.CodeBetter.com – a continuous integration server farm for open source projects. JetBrains is generously supporting our community efforts by funding the monthly costs of the server farm and providing a TeamCity Enterprise license. Volunteers from CodeBetter, IdeaVine, and Devlicio.us are administering the servers and setting up OSS projects on the build grid. We are currently providing CI for the following projects (in alphabetical order):

We will be adding additional OSS projects in the coming weeks/months. You can register for an account here or log in as a guest. By default, new users can view all hosted projects. If you are a member of one of the hosted projects, you can email us at teamcity@codebetter.com and we will add you as a project member on TeamCity. (N.B. You only need to be a project member on TeamCity if you need to manage/modify the build.)

The current build grid consists of:

  • TeamCity – Dual CPU Quad-Core Xeon 5310 @ 1.60 GHz (Clovertown) with 4GB RAM & 2x250GB SATA II in RAID-1
  • Agents – Single CPU Dual-Core Xeon 5130 @ 2.00 GHz (Woodcrest) with 4GB RAM & 2x250GB SATA II

Both are physical servers hosted by SoftLayer. As we add more projects, we will add additional agent servers to distribute the load. Each agent will have the following software installed:

  • Microsoft Windows Server 2003 R2 Standard x64 Edition SP2
  • Microsoft .NET Framework 1.1, 2.0 SP2, 3.0 SP2, 3.5 SP1
  • Microsoft .NET Framework 2.0 SDK
  • Windows SDK 6.1
  • Microsoft SQL Server 2008 Express (64-bit)
  • Ruby 1.8.6-26 (including rake, rails, activerecord, and rubyzip)

Build scripts can be authored in NAnt, MSBuild, Rake, or any other build runner supported by TeamCity. The build farm monitors your current version control system – at SourceForge.net, Google Code, or elsewhere – for changes and supports Subversion, CVS, and other popular source control systems. (TeamCity 4.0.2 – the current version – does not support Git. Git support is planned for the 4.1 release, which should be released at the end of March. We will upgrade to TeamCity 4.1 as soon as it is released.)

Projects can use SQL Express for integration testing. N.B. We will not be manually setting up databases, virtual directories, or other services for projects. If you need a database created, your build script must include its creation/teardown.

If your build script includes unit/integration tests, TeamCity can display the results in the UI if they are in the correct format. We can work with individual projects to ensure that this is the case. TeamCity can archive build artifacts and make them available for download if projects want to make CI builds available to the community.

TeamCity has rich notification mechanisms for communicating the build status of projects, including email, IDE (VS, IntelliJ, Eclipse), and Windows Tray notifiers. Alternatively, you can subscribe to the build server's RSS feed for succeeded and failed builds, succeeded builds only, or failed builds only. You can use these tools to stay apprised of current build health as team members check in changes to source control. All notifiers can be downloaded and configured through the My Settings & Tools menu on the TeamCity server itself.

If you would like your OSS project considered for free CI hosting, you must meet the following requirements:

  • Active project with a commit in the last 3 months.
  • OSI-approved OSS license with a publicly available source.

We will prioritize requests for hosting solely at our discretion, though we will try to accommodate as many requests as possible. (We do have day jobs, you know.) :) We reserve the right to remove projects from the build farm that are monopolizing farm resources. (e.g., if a build script pegs all CPUs at 100% for an hour at a time, it's going to get disabled so as to be fair to other projects.)

To apply to have us host CI for your OSS project:

  • Register a user account here.
  • Email teamcity@codebetter.com with the following information:
    • Your user account name, which you created above.
    • Project name & URL.
    • Link to your OSI-approved OSS license.
    • URL and type (SVN, CVS, …) of your source control system.
    • Build runner (NAnt, MSBuild, Rake, etc.) and default target.
    • Any additional requirements you might have.

CodeBetter, JetBrains, IdeaVine, and Devlicio.us are looking forward to providing free continuous integration hosting for the open source community. Please email us at teamcity@codebetter.com if you have any questions or comments.

James in Cast

Unfortunately I'm going to have to postpone my presentation on Tuesday as I broke my left wrist late this afternoon while ice skating with my older son. (I was practicing skating backwards, slipped, and landed with all my weight on the one wrist.) It's a distal radial fracture, which means lots o' pain meds for a few days and a cast for 6-8 weeks. :( You can see the effects of the Percocet kicking in in the photo to the right. On a positive note, they let you pick the colour of the fibreglass cast. Glad to know that you can break your bones, but still be fashion conscious. Unfortunately they didn't have my corporate colour green, which would have been cool.

So coding is going to be excruciatingly slow for awhile. I’ll reschedule the presentation once the cast comes off.

Coming to a .NET User Group near you*… This Tuesday only…

Topic: Light Up Your Application with Convention-Over-Configuration
Date: Tuesday, February 24, 2009 Postponed
Time: 5:00 pm – 5:15 pm (registration)
  5:30 pm – ??? (presentation)
Location: Nexen Conference Center
801-7th Ave. S.W., Calgary, AB. (Plus 15 level)

Inversion of Control (IoC) containers, such as Castle Windsor, increase the flexibility and testability of your architecture by decoupling dependencies, but as an application grows, container configuration can become onerous. We will examine how convention-over-configuration can allow us to achieve simplicity in IoC configuration while still maintaining flexibility and testability. You can have your cake and eat it too!

* Assuming that you live in Calgary. :)

A friend, having recently upgraded to Rhino Mocks 3.5, expressed his confusion regarding when to use mocks vs. stubs. He had read Martin Fowler's Mocks Aren't Stubs (recommended), but was still confused about how to actually decide whether to use a mock or a stub in practice. (For a pictorial overview, check out Jeff Atwood's slightly NSFW photo montage of dummies, fakes, stubs, and mocks.) I thought I'd share my response, which cleared up the confusion for my friend…

It's easy to get confused. Basically, mocks specify expectations. Stubs are just stand-in objects that return whatever you give them. For example, if you were testing that invoices over $10,000 required a digital signature…

// Arrange
var signature = DigitalSignature.Null;
var invoice = MockRepository.GenerateStub<IInvoice>();                      // stub: just holds state
invoice.Amount = new Money(10001M);
invoice.Signature = signature;
var signatureVerifier = MockRepository.GenerateMock<ISignatureVerifier>();  // mock: expectation set below
signatureVerifier.Expect(v => v.Verify(signature)).Return(false);
var invoiceRepository = MockRepository.GenerateMock<IInvoiceRepository>();  // mock: behaviour asserted below
var accountsPayable = new AccountsPayable(signatureVerifier, invoiceRepository);

// Act
accountsPayable.Receive(invoice);

// Assert
invoiceRepository.AssertWasNotCalled(r => r.Insert(invoice));
signatureVerifier.VerifyAllExpectations();

I don’t have a real invoice. It’s a proxy generated by Rhino Mocks using Castle DynamicProxy. You just set/get values on the properties. Generally I use the real object, but stubs can be handy if the real objects are complex to set up. (Then again, I would consider using an ObjectMother first.)

Mocks on the other hand act as probes to detect behaviour. We are detecting whether the invoice was inserted into the database without requiring an actual database. We are also expecting the SignatureVerifier to be called and specifying its return value.

Now the confusing part… You can stub out methods on mocks too. If you don't care whether a method/property on a mock is called (but you do care about other aspects of the mock), you can stub out just that part. You cannot, however, call Expect or Stub on stubs.
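
To make that concrete, here is a minimal sketch that builds on the sample above. The IsOnline property on ISignatureVerifier is hypothetical, added purely for illustration; everything else is standard Rhino Mocks 3.5 AAA syntax.

var verifier = MockRepository.GenerateMock<ISignatureVerifier>();
// Stubbed member on a mock: canned return value, no expectation recorded,
// so the test doesn't care whether (or how often) it gets called.
verifier.Stub(v => v.IsOnline).Return(true);
// ... exercise the system under test ...
// Assert only the interaction we actually care about.
verifier.AssertWasCalled(v => v.Verify(signature));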

UPDATE: I’m including my comments inline as they respond to important points raised by Aaron and John in the comments here and many readers don’t bother looking through comments. :)

@Aaron Jensen – As Aaron points out in the comments, you are really mocking or stubbing a method or property, rather than an object. The object is just a dynamically generated proxy to intercept these calls and relay them back to Rhino Mocks. Whether it’s a mock/stub/dummy/fake doesn’t matter.

Like Aaron, I prefer AssertWasCalled/AssertWasNotCalled. I only use Expect/Verify if the API requires me to supply return values from a method/property as shown above.

I also have to agree that Rhino Mocks, while a great mocking framework that I use every day, is showing its age. It has at least 3 different mocking syntaxes (one of which I contributed), which increases the confusion. It's powerful and flexible, but maybe a bit too much. Rhino Mocks vNext would likely benefit from deprecating all but the AAA syntax (the one borrowed from Moq) and doing some house-cleaning on the API. I haven't given Moq an honest try since its initial release, so I can't comment on it.

@John Chapman – Thanks for the correction. I've had Rhino Mocks throw an exception when calling Expect/Stub on a stub. I assumed it was expected behaviour that these methods failed for stubs, but it looks like a bug. (The failure in question was part of an overly complex test and I can't repro the issue in a simple test right now. Switching from stub to mock did fix the issue though.) stub.Stub() is useful for read-only properties, but generally I prefer getting/setting stub.Property directly. Still, stub.Expect() and stub.AssertWasCalled() seem deeply wrong to me. :)
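
For instance, to stub a read-only property (the Total property on IInvoice is hypothetical, added only to illustrate the point):

var invoice = MockRepository.GenerateStub<IInvoice>();
// Total has no setter, so assign its value through Stub() instead of setting it directly.
invoice.Stub(i => i.Total).Return(new Money(500M));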

Last time, I discussed why you as a developer might be interested in PowerShell and gave you some commands to start playing with. I said we’d cover re-usable scripts, but I’m going to delay that until next post as I want to talk more about life in the shell…

PowerShell feels a lot like cmd.exe, but with a lot more flexibility and power. If you're an old Unix hacker like me, you'll appreciate the ability to combine (aka pipe) commands together to do more complex operations. And it is even more powerful than Unix command shells in one respect: rather than piping strings around as Unix shells do, PowerShell inputs and outputs objects. Let me prove it to you…

  1. At a PowerShell prompt, run “get-process” to get a list of running processes. (Remember that PowerShell uses singular nouns for consistency.)
  2. Use an array indexer to get the first process: “(get-process)[0]” (The parentheses tell PowerShell to run the command.)
  3. Now let’s get really crazy… “(get-process)[0].GetType().FullName”

As a .NET developer, you should recognize “.GetType().FullName”. You’re getting the class object (aka System.Type) for the object returned by (get-process)[0] and then asking it for its type name. What does this command return?

System.Diagnostics.Process

That’s right! The PowerShell command, get-process, returns an array of System.Diagnostics.Process objects. So anything you can do to a Process object, you can do in PowerShell. To figure out what else we can do with a Process object, you can look up your MSDN docs or just ask PowerShell itself.

get-member -inputObject (get-process)[0]

Out comes a long list of methods, properties, script properties, and more. Methods and properties are the ones defined on the .NET object. Script properties, alias properties, property sets, etc. are defined as object extensions by PowerShell to make common .NET objects friendlier for scripting.
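
For example (a quick sketch; ProcessName and WorkingSet64 are standard System.Diagnostics.Process members, and the get-member filter simply narrows the output to PowerShell-added script properties):

(get-process)[0].ProcessName                       # a plain .NET property
(get-process)[0].WorkingSet64 / 1MB                # use it like any .NET value
get-member -inputObject (get-process)[0] -memberType ScriptProperty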

Let’s try something more complex and find all processes using more than 200MB of memory:

get-process | where { $_.PrivateMemorySize -gt 200*1024*1024 }

Wow. We’ve got a lot to talk about. The pipe (|) takes the objects output from get-process and provides them as the input for the next command, where – which is an alias for Where-Object. Where requires a scriptblock denoted by {}, which is PowerShell’s name for a lambda function (aka anonymous delegate). The where command evaluates each object with the scriptblock and passes along any objects that return true. $_ indicates the current object. So we’re just looking at Process.PrivateMemorySize for each process and seeing if it is greater than 200 MB.
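
And because the output of where is still a stream of Process objects, you can keep piping. The following (using only the standard sort-object and select-object cmdlets) sorts the memory hogs and trims the output to two columns:

get-process | where { $_.PrivateMemorySize -gt 200*1024*1024 } |
    sort-object PrivateMemorySize -descending |
    select-object ProcessName, PrivateMemorySize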

Now why does PowerShell use -gt, -lt, -eq, etc. for comparison rather than >, <, ==, etc.? The reason is that for decades shells have been using > and < for input/output redirection. Let’s write to the console:

'Hello, world!'

Rather than writing to the console, we can redirect the output to a file like this:

'Hello, world!' > Hello.txt

You’ll notice that a file is created called Hello.txt. We can read the contents using Get-Content (or its alias, type).

get-content Hello.txt

Hello, world!

Since > and < already have a well-established use in the shell world, the PowerShell team had to come up with another syntax for comparison operators. They turned to Unix once again, this time to the test command. The operators that the Unix test command has used for 30 years are the same ones PowerShell uses.*
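
A few quick examples of the comparison operators in action (the values are made up; the operators themselves are built in):

10 -gt 5              # True
'abc' -eq 'ABC'       # True  - comparisons are case-insensitive by default
'abc' -ceq 'ABC'      # False - the c-prefixed variants are case-sensitive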

So helpful tidbits about piping and redirection…

  • Use pipe (|) to pass objects returned by one command as input to the next command.
    • ls | where { $_.Name.StartsWith('S') }
  • Use output redirection (>) to redirect the console (aka stdout) to a file. (N.B. This overwrites the destination file. You can use >> to append to the destination file instead.)
    • ps > Processes.txt
  • Do not use input redirection (<) as it is not implemented in PowerShell v1. :(

So there you have it. We can now manipulate objects returned by PowerShell commands just like any old .NET object, hook commands together with pipes, and redirect output to files. Happy scripting!
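
Putting the pieces together, here is a one-liner that combines a pipe with output redirection (the file name BigProcesses.txt is just an example):

get-process | where { $_.PrivateMemorySize -gt 200*1024*1024 } > BigProcesses.txt
get-content BigProcesses.txt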

* From Windows PowerShell in Action by Bruce Payette p101. This is a great book for anyone experimenting with PowerShell. It has lots of useful examples and tricks of the PowerShell trade. Highly recommended.

PowerShell Unix-style

Recently I was on the PowerScripting Podcast hosted by Hal Rottenberg and Jonathan Walz. I had a great time talking about PowerShell from a developer's perspective and psake, my PowerShell-based build system, in particular. You can find the interview on Episode 56 here. Thanks to Hal and Jonathan for having me on the show.

Now let’s talk PowerShell and scripting for developers. I don’t see a lot of developers using the command line and this surprises me. Maybe it’s my Unix background that attracts me to the command line. Maybe it’s my belief that sustainable, maintainable development is facilitated by a CI process, which necessitates being familiar with the command line. Maybe it’s because the only way to create reproducible results is to automate and the easiest way to automate is the command line. Whatever the reason, I believe that developers should become familiar with copying, building, deploying, and otherwise automating tasks via the command line.

So why PowerShell? PowerShell is a full-fledged programming language focused on object-based shell scripting. Note that I didn't say “object-oriented”. You cannot create class hierarchies or define polymorphic relationships, but you can instantiate and use objects defined in other .NET-based programming languages (or COM objects). PowerShell is a shell language, meaning that it serves the same role as cmd.exe, bash, tcsh, etc. Its raison d'être is automating the command prompt. Manipulating directories/files, launching applications, and managing processes is really straightforward.
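
A quick taste of that, using nothing but built-in cmdlets and aliases (the scratch directory and notes.txt file are made up for the example):

md scratch                                # create a directory (md is a built-in alias/function)
set-content scratch\notes.txt 'testing'   # write a small file
get-childitem scratch                     # list the directory
get-process powershell                    # inspect a running process
remove-item scratch -recurse              # clean up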

Microsoft is investing heavily in PowerShell and providing support for managing Windows Server 2008, IIS7, SQL Server 2008, Exchange 2007, and other Microsoft server products. This means that we are approaching the Nirvana that is end-to-end build-to-deploy automation. Imagine getting latest from your source repository, building the code, unit/integration testing the code, building the documentation, labelling the build, and deploying into a QA environment! The entire process is automated and reproducible. You know the exact version of the code that went into QA. When you get the green light to do a live deployment, you give the same PowerShell scripts to the IT deployment team, who use them to deploy the compiled (and QA'd) bits into the production environment! Who better to write these deployment scripts than your friendly neighbourhood IT Pro, who is intimately familiar with the production and QA environments? This is a great collaboration point to get the IT Pros, who will be deploying and maintaining your apps, involved in the development process.

Enough chitchat. Show me the code! Believe it or not, you probably already know a fair amount of PowerShell. Many of the commands you're familiar with in cmd.exe work in PowerShell. (More surprisingly, many of the common commands you know from bash, tcsh, or other Unix shells also work!) The command-line arguments are often different, but the basic familiar commands are there. So try out dir, cd, copy, del, move, pushd, popd, … (If you're an old Unix hacker, you can try out ls, man, kill, pwd, ps, lp, cp, … There is no grep equivalent built in, which is terribly unfortunate.) All of these commands are actually aliases for PowerShell commands, which are named Verb-Noun, where the noun is singular. For example, to get a list of running processes, you run Get-Process, which is aliased to “ps”.
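
You can ask the shell what an alias points to with get-alias:

get-alias ps          # ps  -> Get-Process
get-alias dir         # dir -> Get-ChildItem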

PowerShell is very conducive to experimentation. You can always find out more about a command or alias by typing “Get-Help [CmdName|Alias]” or simply “help [CmdName|Alias]”, since help is an alias for Get-Help. (N.B. PowerShell is case insensitive.) You can also look for commands by typing part of a command and pressing Tab repeatedly. For example, if you want to find all Set- commands, type “Set-[TAB][TAB]…” to cycle through Set-Acl, Set-Alias, etc. You can also look for commands using wildcards: typing “*-Acl[TAB][TAB]…” cycles through Get-Acl and Set-Acl.
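
A couple of examples of the discovery commands mentioned above (get-command is another built-in that pairs nicely with tab completion):

help get-childitem        # same as Get-Help Get-ChildItem; works for aliases too (help dir)
get-command *-acl         # lists Get-Acl and Set-Acl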

So start playing around with PowerShell. Learn what it can do for you. Next time, we’ll look at writing re-usable scripts for accomplishing common developer tasks. Until then, happy scripting!