Browsing Posts published in March, 2009

I don't have to remind everyone that we're in the middle of a world-wide economic downturn. When the economy is good, it is hard enough to convince your boss to re-build an application from scratch. When the economy is bad, it is bloody near impossible. In the coming months (and potentially years), I expect that as developers we're going to be seeing more and more brownfield projects rather than greenfield ones. We're going to see more push for evolutionary development of applications rather than wholesale replacement. We will be called upon to improve existing codebases, implement new features, and take these projects in initially unforeseen directions. We will have to learn how to be Working Effectively with Legacy Code. (Took some effort to coerce the title of Michael Feathers' excellent book into that last sentence.)

A lot of companies have a tremendous investment in existing "classic" ASP.NET websites, and there is a desire to evolve these sites rather than replace them, especially given these tough economic times. Howard Dierking, editor of MSDN Magazine, has asked me to write a 9-week series entitled From Web Dev to RIA Dev in which we will explore refactoring an existing "classic" ASP.NET site. We want to improve an existing ASP.NET application using new technologies, such as AJAX, jQuery, and ASP.NET MVC. We want to show that you can adopt better practices, such as continuous integration, web testing (e.g. WatiN, Watir, Selenium), integration testing, separation of concerns, layering, and more.

So I have two questions for you, Dear Reader…

  1. Can you think of a representative “classic” ASP.NET website (or websites) for the project?
  2. What topics would you like to see covered?

I should clarify what I mean…

“Classic” ASP.NET Applications

I'm currently considering PetShop, IBuySpy, DasBlog, SubText, and ScrewTurn Wiki. I'm not looking for one rife with bad practices, just an ASP.NET project in need of some TLC: one that doesn't have a decent build script, isn't under CI, is a bit shy on the testing, has little to no AJAX, etc. The code should be representative of what you would see in a typical ASP.NET application. (For that reason, I am probably going to discount IBuySpy, as it is built using a funky webpart-like framework that is not typical of most ASP.NET applications.) Some of the ASP.NET applications that I just mentioned don't exactly qualify because they already have build scripts, tests, and other features that I would like to demonstrate. I will get permission from the project owner(s) before embarking on this quest and plan to contribute any code back to the project. Needless to say, the project must have its source available to be considered for this article series. So please make some suggestions!

Topics

I have a lot of ideas for technologies and techniques to explore, including proper XHTML/CSS layout, jQuery, QUnit, AJAX, HTTP Modules/Handlers, build scripts, continuous integration (CI), ASP.NET MVC, web testing (probably WatiN or Selenium), refactoring to separate domain logic from codebehind/sprocs, … I will cover one major topic per week over the 9-week series, so I've got lots of room for cool ideas. What would you like to see? What do you think offers the biggest bang for your buck in terms of improving an existing ASP.NET application?

Depending on the topics covered (based on your feedback here), I might use one site for the entire series or different sites to cover each topic. It would add some continuity to the series to use a single site over the 9 weeks, but after a brief inspection of the codebases mentioned above, I am having my doubts about finding a single representative site. We’ll have to see. Please leave your suggestions in the comments below. Thanks in advance!

I’ve been having fun writing about my adventures in PowerShell. I would like to thank everyone for their encouragement and feedback. Something that I haven’t explicitly stated – which should go without saying as this is a blog – is that I am not a PowerShell expert. This is one man’s journey learning about PowerShell. I consider myself an expert on C#, .NET, and many other things, but as for PowerShell, I am a hacker. I learn enough to get the job done.

Yes, I wrote psake, which is a cool little PowerShell-based build tool, if I do say so myself. I wrote it in part to learn more about PowerShell and what was possible. (I surprised myself that I was able to write a task-based build system in a few hours with about 100 lines of PowerShell, ignoring comments.)
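If you're curious what that looks like, a psake build script is just a set of tasks with dependencies, declared in plain PowerShell. Here is a rough sketch from memory (the task names and property are made up for illustration; see the psake samples for the real syntax):

properties {
  $configuration = 'Debug'    # illustrative property, available inside every task
}

task default -depends Test

task Clean {
  # remove previous build output here
}

task Compile -depends Clean {
  # e.g. shell out to msbuild against the solution
}

task Test -depends Compile {
  # run the unit test suite here
}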

If you’re looking for PowerShell gospel, I would recommend checking out the Windows PowerShell Blog (the blog of Jeffrey Snover and the rest of the PowerShell team), Windows PowerShell in Action by Bruce Payette, the PowerScripting Podcast, or any of the myriad PowerShell MVP blogs. They are the experts. I’m just a hacker having fun.

With that disclaimer, I hope that by documenting my PowerShell learnings in public, I will help other developers learn PowerShell. I know that I am learning great things about PowerShell from my readers. In Getting Started with PowerShell – Developer Edition, I lamented the lack of grep. My friend, Chris Tavares – known for his work on Unity and ASP.NET MVC – pointed out that Select-String can perform similar functions. Awesome! Then in PowerShell, Processes, and Piping, Jeffrey Snover himself pointed out that PowerShell supports KB, MB, and GB – with TB and PB in v2 – so that you can write:

get-process | where { $_.PrivateMemorySize -gt 200MB }

rather than having to translate 200MB into 200*1024*1024 as I originally did. Fantastic!
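And to follow up on Chris's Select-String tip, a grep-like search is a one-liner. A couple of sketches (the folder and patterns are just for illustration):

# find every line containing TODO in my utility scripts
get-childitem c:\Utilities\Scripts -Filter *.ps1 | select-string 'TODO'

# or grep the formatted output of any command
get-process | out-string -stream | select-string 'WebDev'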

In Writing Re-usable Scripts with PowerShell, wekempf, Peter, and Josh discussed the merits of setting your execution policy to Unrestricted. I corrected the post to use RemoteSigned, which means that downloaded PowerShell scripts have to be unblocked before running, but local scripts can run without requiring signing/re-signing. Thanks, guys. I agree that RemoteSigned is a better option.

Let’s talk security for a second. I am careful about security. I run as a normal user on Vista and have a separate admin account. When setting up teamcity.codebetter.com, the build agent runs under a least privilege account, which is why we can’t run NCover on the build server yet. (NCover currently requires admin privs, though Gnoso is working on fixing that in short order.) (Imagine if we did run builds as an Administrator or Local System. Someone could write a unit test that added a new user with admin privs to the box, log in remotely and start installing bots, malware, and other evil.) So I tend to be careful about security.

Now for my real question… What is the threat model for PowerShell that requires script signing? Maybe I’m being really dense here, but I don’t get it. Let’s say I want to do something really evil like formatting your hard drive. I create a PowerShell script with “format c:” in it, exploit a security vulnerability to drop it onto your box, and exploit another security vulnerability to launch PowerShell to execute the script. (Or I name it the same as a common script, but earlier in your search path, and wait for you to execute it.) But you’ve been anal-retentive about security and only allow signed scripts. So the script won’t execute. Damn! Foiled again! But wait! Let me just rename it from foo.ps1 to foo.cmd or foo.bat and execute it from cmd.exe. If I can execute code on your computer, there are easier ways for me to do bad things than writing PowerShell scripts. Given that we can’t require signing for *.cmd and *.bat files as this would horribly break legacy compatibility, what is the advantage of requiring PowerShell scripts to be signed by default? Dear readers, please enlighten me!

UPDATE: Joel “Jaykul” Bennett provided a good explanation in the comments. I would recommend reading:

http://blogs.msdn.com/powershell/archive/2008/09/30/powershell-s-security-guiding-principles.aspx

as it explains the PowerShell Team's design decision. The intention wasn't to force everyone to sign scripts, but to disable script execution for the majority of users (who won't use PowerShell anyway) while allowing PowerShell users to opt into RemoteSigned or Unrestricted as they so choose. Script signing is meant for administrators who want to set group policy and use signed scripts for administration (one example use case of script signing).

Thanks again, Joel! That was faster than sifting through the myriad posts on script signing trying to find the reasoning behind it. Once again, the advantages of learning as a community!

Continuing on from last time, I will now talk about writing re-usable scripts in PowerShell. Any command that we have executed at the PowerShell command line can be dropped into a script file. I have lots of little PowerShell scripts for common tasks sitting in c:\Utilities\Scripts, which I include in my path. Let's say that I want to stop all running copies of Cassini (aka the Visual Studio Web Development Server aka WebDev.WebServer.exe).

Stop-Process -Name WebDev.WebServer -ErrorAction SilentlyContinue

This will terminate all running copies of the above-named process. (Note that Stop-Process takes the process name without the .exe extension.) -ErrorAction is a common parameter supported by all PowerShell cmdlets; the SilentlyContinue value tells PowerShell to ignore failures. (By default, Stop-Process would report an error if no processes with that name were found.)
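To see the difference -ErrorAction makes, try it against a process name that (hopefully) doesn't exist on your box. The name below is purely hypothetical:

# writes a non-terminating error because no matching process exists
Stop-Process -Name NoSuchProcess

# same command, but the error is suppressed and execution carries on quietly
Stop-Process -Name NoSuchProcess -ErrorAction SilentlyContinue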

We've got our command. Now we want to turn it into a script so that we don't have to type it every time. Simply create a new text file called "Stop-Cassini.ps1" on your desktop containing the above command, using the text editor of your choice. (The script can be in any directory, but we'll put it on our desktop to start.) Let's execute the script by typing the following at the PowerShell prompt:

Stop-Cassini

Current directory not in search path by default

What just happened? Why can’t PowerShell find my script? By default, PowerShell doesn’t include the current directory in its search path, unlike cmd.exe. To run a script from the current directory, type the following:

.\Stop-Cassini

Another option is to add the current directory to the search path by modifying Computer… Properties… Advanced… Environment Variables… Path. Or you can modify it for the current PowerShell session using:

$env:Path += ';.\'

($env: provides access to environment variables in PowerShell. Try $env:ComputerName, $env:OS, $env:NUMBER_OF_PROCESSORS, etc.)
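The same trick works for making a scripts folder available. For example, to add the c:\Utilities\Scripts folder I mentioned earlier to the path for the current session only (a quick sketch; adjust the folder to taste):

# append my scripts folder to the search path for this session
$env:Path += ';C:\Utilities\Scripts'

# confirm the change by listing one path entry per line
$env:Path.Split(';')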

You could also modify your PowerShell startup script, but we’ll talk about that in a future instalment. Let’s run our script again:

ExecutionPolicy error

No dice again. By default, PowerShell does not allow unsigned scripts to run. This is a good policy on servers, but it is a royal pain on your own machine. It means that every time you create or edit a script, you have to sign it, which doesn't promote the use of quick scripts for simplifying development and administration tasks. So I relax the script signing requirement by running the following command from an elevated (aka Administrator) PowerShell prompt:

Set-ExecutionPolicy Unrestricted    # my original choice
Set-ExecutionPolicy RemoteSigned    # the better option, as noted in the update below

Set-ExecutionPolicy succeeded

If this command fails with an access denied error:

Set-ExecutionPolicy failed

then make sure that you launched a new PowerShell prompt via right-click Run as administrator…

Third time is the charm…

Success!

We are now able to write and use re-usable scripts in PowerShell. In my next instalment, we’ll start pulling apart some more complicated scripts that simplify common developer tasks.

UPDATE: As pointed out by Josh in the comments, setting your execution policy to RemoteSigned (rather than Unrestricted) is a better idea. Downloaded scripts will require you to unblock them before execution (right-click… Properties… Unblock, or use ZoneStripper if you have a lot of them). Thanks for the correction.

Joey Devilla (aka The Accordion Guy) from Microsoft's Toronto office started Coffee and Code a few weeks ago in Toronto, and John Bristowe is bringing the experience to Calgary. When John contacted me about the event, I thought to myself, "I like coffee. I like code. I want to be involved!" (Heck, I would order an Americano via intravenous drip if I could.) So John and I will be hanging out at the Kawa Espresso Bar this Friday for the entire day, drinking coffee, cutting code, and talking to anyone and everyone about software development.

John is broadly familiar with a wide variety of Microsoft development technologies, as am I. I'll also be happy to talk about Castle Windsor (DI/IoC), NHibernate (ORM), OOP and SOLID, TDD/BDD, continuous integration, software architectures, ASP.NET MVC, WPF/Prism, build automation with psake, … Curious what ALT.NET is about? I'll be happy to talk about that too!

I got my cast off today from my ice skating accident two weeks ago and am in a half-cast now, so I am hopeful that I'll be able to demonstrate some ReSharper Jedi skills for those curious about the amazing tool that is ReSharper. (I am going to be daring and have a nightly build of ReSharper 4.5 on my laptop to show off some new features.) So come join John and me for some caffeinated coding fun at the Kawa Espresso Bar anytime between 9am and 4pm on Friday, March 13, 2009.

This post has been brought to you by the letter C and the number 4…