Browsing Posts published in March, 2006

I am currently involved in organizing the Calgary Code Camp, which will be held in downtown Calgary on Saturday, May 27, 2006. You are probably wondering, “What is a Code Camp?” Simply put, a Code Camp is a developer event by and for the developer community. You get a full day of developer-focused content created by fellow developers, not marketing wonks. You’ll see code, code, and more code. We are planning sessions on the usual suspects – ASP.NET, Windows Forms, and Web Services. We’ll also have sessions on upcoming technologies such as WinFX, Windows Workflow Foundation (WF), Windows Communication Foundation (WCF), and Windows Presentation Foundation (WPF), as well as the new Microsoft Command Shell (codenamed Monad). In addition to technology-focused sessions, we’ll have sessions on development techniques and tools such as unit testing, test-driven development (TDD), continuous integration, code coverage, and more!

You’re probably wondering, “How much will all this goodness cost me?” Well, as Dr. Nick from The Simpsons would say, “$199.95, and it comes with a free state of Kansas Jello mould!” Just kidding. This event is COMPLETELY FREE. A full day of code and fun with cool door prizes for absolutely nothing. We’ll be posting speakers and sessions on the website as they’re confirmed. Drop by the Calgary Code Camp website and sign up. See you there!!!

Call for Speakers!

If you would like to speak at the event, please send a brief description of
yourself and your topic to proposals@calgarycodecamp.com.

Call for Sponsors!

If you are interested in sponsoring the Calgary Code Camp, please contact us
at sponsorship@calgarycodecamp.com.

Fellow plumber, Bil Simser, asks how the heck someone debugs SharePoint as a non-admin. Elementary, my dear Simser, elementary…

The fundamental problem that Bil is experiencing occurs with SharePoint, ASP.NET, or any app that runs under a different security context than your own. A normal user can only debug applications running under his/her own security context.* Administrators have the SeDebug privilege, which allows them to debug processes running under any security context. Granting your user account the SeDebug privilege gives it tremendous power, which is exactly what you’re trying to avoid. (With SeDebug, you can open any process, including system processes, with full permissions. If you can do that, you own the box. I leave it as an exercise to the reader to figure out how, given only SeDebug, to elevate your normal user to be a member of the local Administrators group.) I know of a few solutions that allow debugging of server processes:

  1. Develop server apps in an isolated virtual machine and use an admin account.
  2. Run as admin when debugging server apps, but run as a normal user while developing them. (This can be done using MakeMeAdmin and then running devenv.)
  3. Run the server app under your user account, though this may mean placing your username/password in clear text, which is non-ideal. (This is the strategy used by the Visual Web Developer Web Server – aka Cassini – that ships with VS 2005.)

* Note that although you don’t require any special privileges to debug a process running under your own security context, Visual Studio does require that you be a member of the Debugger Users group.
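For option #2, the commands below are a sketch of launching an elevated copy of Visual Studio 2005 from your normal log-in session. The machine/account names and install path are assumptions (the path shown is the VS 2005 default); MakeMeAdmin is Aaron Margosis’ batch script that starts a command prompt with your own account temporarily added to the local Administrators group.

```
REM Launch devenv under the local Administrator account;
REM you will be prompted for the Administrator password.
runas /user:%COMPUTERNAME%\Administrator ^
  "C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\devenv.exe"

REM Alternatively, run MakeMeAdmin.cmd and start devenv
REM from the elevated command prompt it opens.
```

Either way, only that one copy of Visual Studio runs elevated; the rest of your session stays under your normal account.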

EDIT: Additional information added below related to Bil’s comment.

Bil is correct. If you run Visual Studio as a non-admin when developing server apps and you want to debug, you need to break stride and launch another copy of Visual Studio using MakeMeAdmin or runas. This is highly non-ideal. Is it a huge security risk to run Visual Studio under an admin account while the rest of your log-in session is running as a normal user? Somewhat, but it’s a lot better than running your entire log-in session as an admin.

Also remember one of the main reasons for developing apps as a non-admin – to ensure that you are running/debugging with credentials similar to what your end users will be using. (i.e. Your app isn’t writing to protected regions of the file system or registry to which normal users don’t have access.) With server apps, the story is a bit different. You want your server app to be running with different credentials – the credentials of the account that the application will be running under in production – NETWORK SERVICE or other service account. The safest solution is #1 above. Develop server apps as an admin in an isolated virtual machine. Second would be running only Visual Studio under elevated privileges using technique #2. Although technique #3 above works, you run the risk of developing your server code under unrealistic conditions – for instance, you’ll have a logged in user with a loaded HKCU hive. If you want to try option #3, you’ll have to configure your application pool and/or ASP.NET application to run as your current (non-admin) user. For the app pool identity, you can configure that using the IIS Manager MMC. For ASP.NET, you have to modify the following in machine.config:

<configuration>
  <system.web>
    <processModel userName="" password=""/>
  </system.web>
</configuration>

Although you can store this in cleartext, I would recommend against it for obvious reasons. Take a look at aspnet_setreg.exe and the following KB article on how to store this information securely:

How to use the ASP.NET utility to encrypt credentials and session state connection strings
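For the record, the recipe from that KB article looks roughly like this – the registry key name and account are placeholders of your choosing. You run aspnet_setreg.exe (as an admin) to encrypt the credentials into the registry, then point machine.config at the stored values:

```
aspnet_setreg.exe -k:SOFTWARE\MY_SECURE_APP\processModel
                  -u:"MYDOMAIN\appuser" -p:"password"
```

```
<processModel
  userName="registry:HKLM\SOFTWARE\MY_SECURE_APP\processModel\ASPNET_SETREG,userName"
  password="registry:HKLM\SOFTWARE\MY_SECURE_APP\processModel\ASPNET_SETREG,password"/>
```

You’ll also need to grant the worker process account read access to the registry key, as the KB article describes.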

First we had Joel Semeniuk speaking about VSTS in January. Now more Microsoft Regional Directors are descending on our fair city!

Juval Lowy of iDesign will be speaking about Windows Communication Foundation (WCF aka Indigo) at the Calgary .NET User Group on March 23, 2006 from 5 to 8pm. You can find details and registration on the Calgary .NET User Group website.

Also in March, Scott Hanselman of Corillian Corporation will be speaking about Anatomy of a Successful ASP.NET Application – Inside DasBlog at the Alberta .NET User Group on March 30, 2006 from 12 to 1pm. You can find details and registration on the Alberta .NET User Group website.

Coming in June, Richard Campbell of Campbell Associates will be speaking about SQL Querying Tips & Techniques at the Alberta .NET User Group on June 1, 2006 from 12 to 1pm. You can find details and registration on the Alberta .NET User Group website, though registration doesn’t appear to be open yet.

Juval, Scott, and Richard are all Microsoft Regional Directors, frequent speakers at conferences and user groups, and prolific authors. You don’t want to miss these opportunities to hear them speak!

In this final (for now) instalment, let me ask a rhetorical question: Is managed code the right choice for every application? Absolutely not! For example, .NET and Windows itself are not designed for use in real-time systems. There are no guarantees on worst-case latency during processing. If you’re writing software for a pacemaker or a nuclear reactor – both hard real-time systems, since failure results in loss of life – you have deadlines by which you have to complete computations, and if you don’t make those deadlines, your system may fail. You might think it trivial to meet a deadline (e.g. worst-case length of computation / instructions per second), but consider the fact that any device connected to the system can raise an interrupt, which can result in your code being preempted so that kernel-level driver code can run. So figuring out worst-case latency involves considering the impact of all connected peripherals (and how they interact) in addition to other factors. Not an easy problem, and the reason why many operating systems exist that are specifically designed for real-time applications. How much software truly has real-time constraints? Very little, to be honest.

But I digress. I think you get the point that .NET isn’t appropriate for all software, but then again, neither is Java or many other commonly used languages and frameworks. However, .NET is applicable to a broader class of software than you might imagine. What surprises many people is that .NET isn’t slower than unmanaged code in many cases. There are a lot of areas, such as raw numerical calculations, where the JITed MSIL code is essentially equivalent to what an optimising C++ compiler would produce. Games, which traditionally try to squeeze every last ounce of performance out of your hardware, are starting to be written in managed code, and we’re not talking Space Invaders and Pacman. Arena Wars, a real-time strategy game, is built using the .NET Framework 1.1. (I’ve honestly never played this game, but it does go to show you that it is possible to write a real game using managed code.) Games no longer require hand-optimized assembly language for critical loops. (For example, Quake by id Software was written in C with parts in hand-optimized assembly. Quake 2 was also written in C, but contained no assembly language. The slight performance gain of using assembly language was not deemed necessary.)

Look at the performance characteristics of your code. If the bulk of your CPU time is spent in third-party frameworks (like DirectX, a physics engine, or an AI engine), it’s rather irrelevant what language you write the code that drives the third-party framework in, assuming the overhead of calling it isn’t too high. Imagine a situation where you have a sort and you naively use a bubble sort, which is known to be slow. Should this concern you? That depends on how much time is spent in the sort. If the CPU spends 1% of the overall application time in the bubble sort, speeding it up will result in at most a 1% performance gain. Not worth your time. If, however, the application spends 50% of its time in the sort, then even a factor of 2 speed-up would result in 25% faster execution. The same is true with the choice between managed and unmanaged code. If the time spent in your code is a small fraction of the overall execution time, it’s rather irrelevant what that code is written in. Pick a language/framework that allows you to develop quickly and with the fewest number of errors. But as I pointed out earlier, managed code doesn’t equate to slow code.
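The arithmetic behind those numbers is just Amdahl’s law: if a fraction p of execution time is spent in the code you speed up by a factor s, then:

```
overall speedup = 1 / ((1 - p) + p/s)

p = 0.01, s → ∞:   1 / 0.99          ≈ 1.01   (at most ~1% faster, even if
                                               the bubble sort cost nothing)
p = 0.50, s = 2:   1 / (0.5 + 0.25)  ≈ 1.33   (total runtime drops by 25%)
```

So profile first, and spend optimization effort only where p is large.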

In summary, there are trade-offs in using managed code as there are with any runtime environment. You’re not going to get any faster than hand-optimized assembly (if you have infinite time to optimize), but who is going to write Windows applications in assembly language today? (I’m ignoring Steve Gibson for the moment. Amazing what one can do in assembly language these days, but not somewhere I want to live my developer life personally.) The key is to know your tools and know which ones are right for which job. With that in mind, I leave you with some links comparing managed to unmanaged code performance. I hope that they prove enlightening.

Episode 4: Life, the Universe, and Everything is complete. John and I hold down the fort in the latest episode of Plumbers @ Work. You can find the show notes here (as well as below) and the podcast here. It will be posted to MSDN Canada Community Radio shortly.

Show Notes

  • Introduction
  • News Bytes: Renaming of Office “12” to Office 2007
  • News Bytes: Release Date for Team Foundation Server (TFS)
  • News Bytes: WSCF 0.6
  • Developer Destination: HanselMinutes
  • Discussion about Reflector
  • Discussion about SysInternals
  • Developer Destination: .NET Kicks
  • Developer Destination: DNR TV
  • Discussion about Screencasting
  • Calgary .NET User Group
  • Site Navigation in ASP.NET 2.0
  • WebParts in ASP.NET 2.0
  • Upcoming Speakers for Calgary .NET User Group
  • Discussion about AJAX and ASP.NET “Atlas”
  • Test Driven Development (TDD) in AJAX
  • Dan Sellers’ WebCast Series – Security on the Brain
  • Canadian Developers Blog
  • Discussion about WinFX
  • Overview of Windows Communication Foundation (WCF)
  • Overview of Windows Presentation Foundation (WPF)
  • Overview of Extensible Application Markup Language (XAML)
  • Overview of Windows Workflow Foundation (WF)
  • Discussion about Workflows and Activities
  • Windows WorkFlow Foundation (WF) versus BizTalk Server (BTS)
  • Overview of the Microsoft Command Shell (aka “Monad”)
  • Don Box’s Weblog Post on SOAP versus REST in WCF
  • Overview of SOAP and REST
  • Multi-Core CPUs and the Future with Concurrency

Running time: 56:34

Show References

Discuss this episode in the forums.

After I wrote Common Pitfalls When Handling Exceptions in .NET, I had a few questions about exception handling techniques when you’re writing a Web Service. Of course the common pitfalls still apply. The questions were more around how you should expose those exceptions to the client of a Web Service. Should you let them percolate up and allow the SOAP stack to transform them into SoapFaults? Or should you catch all exceptions in each WebMethod and manually transform them into appropriate SoapFaults? I would definitely recommend using the second technique of catching all exceptions and manually transforming them into SoapFaults. What difference does it make, you might ask? There are two main reasons – the first related to your “contract” and the second related to security.

Let’s talk about Web Service contracts. Your contract – what you are promising to clients of your Web Service – is anything that you expose to the outside world. Typically developers think of this as the methods, parameters, and return values of your Web Service, but it also includes any SoapFaults that your Web Service can throw. You don’t want to let your implementation details (i.e. exceptions that can be thrown) leak into your Web Service contract. Does your client care if the Web Service is written in .NET, Java, C++, or hand-coded assembly language? They shouldn’t. If you allow the SOAP stack to transform exceptions into SoapFaults, you are allowing your underlying implementation details to leak into the contract. Your clients may start depending on the fact that a particular exception is being thrown and transformed into a particular SoapFault. What happens if you want to switch implementations later – for instance, switching from ASP.NET Web Services (aka ASMX) to Windows Communication Foundation (WCF)? You will end up breaking client code. The best case scenario (for you, the Web Service developer) is that all the clients update their code to handle the changed SoapFaults. The worst case scenario (from your point of view) is that the install base is too large and you have to replicate ASMX exception-to-SoapFault semantics in WCF.

The second concern is security-related. By allowing exceptions to escape your web service and allowing the SOAP stack plumbing to serialize the exception, you are exposing implementation details, such as stack traces, that could assist a hacker in abusing your service. A good practice is to catch all exceptions in the WebMethod and provide meaningful (and sanitized) SoapFaults to your clients, but record detailed exception information, such as stack traces, using your favourite logging framework.
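To sketch what this looks like in an ASMX service – the service, method, and helper names here are all hypothetical, and the EventLog call stands in for whatever logging framework you prefer – the pattern is: catch everything at the WebMethod boundary, log the details server-side, and throw a sanitized SoapException:

```
using System;
using System.Diagnostics;
using System.Web.Services;
using System.Web.Services.Protocols;

[WebService(Namespace = "http://example.org/orders")]
public class OrderService : WebService
{
    [WebMethod]
    public int SubmitOrder(string orderXml)
    {
        try
        {
            return ProcessOrder(orderXml);
        }
        catch (Exception ex)
        {
            // Full details (message, stack trace, inner exceptions)
            // stay on the server, in your log of choice.
            EventLog.WriteEntry("OrderService", ex.ToString(),
                                EventLogEntryType.Error);

            // The client sees only a sanitized fault that is part of
            // your contract, not your implementation.
            throw new SoapException("The order could not be processed.",
                                    SoapException.ServerFaultCode);
        }
    }

    // Hypothetical implementation detail.
    private int ProcessOrder(string orderXml)
    {
        // ... real work elided ...
        return 0;
    }
}
```

For faults the client can correct (bad input, say), you’d throw with SoapException.ClientFaultCode instead, so the SOAP fault code distinguishes caller errors from server errors.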