Browsing Posts published in January, 2006

Back to talking about performance and .NET. Let me give you an example from my own experience regarding skepticism about whether you can write highly performant applications using managed code. The thought is that if it ain’t C++, it can’t be fast. (The same skepticism applies to Java, too.) I can attest first-hand that it is possible. For example, I was brought in to assist with the development of a server application that was written in C++ and having perf problems. The performance goal was 2000 concurrent clients on reasonable hardware. The alpha version in C++ could handle 200 concurrent clients and fell over (read: the server crashed with out-of-memory errors) at 500. The team had spent approximately three months writing the C++ codebase and trying to bring it up to par. They had never written server software before and had made a few honest mistakes, such as firing up a thread per client (which exhausted memory as stack space was allocated for each thread) and writing to the database while a remote connection was held open. (The problem with holding the connection open was that the clients were just dropping off information and didn’t require an ACK. So you’re holding a TCP socket open for an extra 50-100 ms during the synchronous database write and then closing it, rather than freeing it up immediately.)


The team believed that you could only write a highly performant server in C++. So they weren’t too keen when I suggested using C# and .NET. With the assistance of another developer, we created a skunkworks prototype in C# in two days that serviced clients using a thread pool and dropped the results into a queue (implemented with MSMQ) so that we didn’t have to round-trip to the database on every connection. (The results were lazy-loaded into the database by a queue listener.) This allowed us to free up sockets quickly. The results… On the same hardware that was falling over at 500 clients with the C++ version, we handled approximately 10,000 clients with little to no perf tuning. (We used the same client load generator in both cases for a fair comparison.)
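The shape of that prototype can be sketched in a few lines. To be clear, this is a reconstruction, not the original code: the names and types are invented, and an in-memory Queue<T> stands in for MSMQ (and plain strings for the socket payloads) so that the decoupling idea stands on its own.

```csharp
using System;
using System.Collections.Generic;

namespace JamesKovacs.Samples.QueuedServer
{
    public static class Server
    {
        // Pool threads drop client payloads here; the real prototype used MSMQ.
        private static readonly Queue<string> s_resultQueue = new Queue<string>();
        private static readonly object s_sync = new object();

        // Stand-in for the real database table.
        public static readonly List<string> Database = new List<string>();

        // Runs on a pool thread per request: enqueue the payload and return
        // immediately, so the socket can be closed without waiting on a
        // synchronous database write.
        public static void HandleClient(string payload)
        {
            lock (s_sync)
            {
                s_resultQueue.Enqueue(payload);
            }
            // The socket would be closed here; the protocol requires no ACK.
        }

        // The queue listener lazily writes queued results to the database.
        public static void DrainQueue()
        {
            while (true)
            {
                string payload;
                lock (s_sync)
                {
                    if (s_resultQueue.Count == 0) return;
                    payload = s_resultQueue.Dequeue();
                }
                Database.Add(payload); // stand-in for the real database insert
            }
        }
    }
}
```

The key design point is that the expensive work (the database write) is moved off the connection’s critical path, so each socket is held open only as long as it takes to enqueue the payload.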


Why did we use C#? It wasn’t our intent to subvert their development standards. We just wanted to show that this alternate architecture could scale, and the most expedient way to implement it was using .NET. When we showed them the managed version, their jaws dropped. When I suggested that we could spend a few weeks (literally!) writing the same thing in C++, the immediate question was “Why bother?” They had spent more time tracking down memory leaks and bad pointer arithmetic in their C++ codebase than we had spent implementing ours. We far exceeded their performance goals and could implement new features faster. They didn’t look back and implemented a highly scalable server written ground-up in C# and .NET.


Next time, we’ll look at the performance characteristics of different aspects of managed code…

With Windows Forms 2.0 and a bit of ingenuity, you can easily make your application minimize to the system tray – that little area to the left of your clock. So let’s see how it’s done…


First stop – the NotifyIcon. The NotifyIcon is a new widget in Windows Forms 2.0, which displays a tray icon for your form. Simply drop a NotifyIcon on your form, set its Icon property to an .ico file of your choice, and set its Visible property to false. (I’m assuming that you want to either display the tray icon or your application, but not both.)


So far, so good. Now we want to find out when the form is minimized. We do this by finding out when the form is resized and examining its current WindowState, which we’ll do by overriding the OnResize method. (Overriding the OnXxx() methods is the preferred way to handle these types of Windows events if you’re creating a derived class, since you don’t incur the overhead of a multicast delegate. If you want to capture these events from outside the form, use the appropriate event – Resize, in our case.) If the WindowState is minimized, display your tray icon and hide your form.


Now when we double-click our tray icon, we want to reverse the process by hiding the tray icon and displaying the form again. We do this by restoring the previous WindowState, but how do we know what the previous WindowState was? We’d better keep track of it. Whenever a form is minimized, maximized, or restored, OnResize is called (and the Resize event is fired). So as long as the current WindowState isn’t minimized, we squirrel away the current value in a member variable. We then know what to restore the form to when the tray icon is double-clicked. There’s some additional logic in the constructor to set our restore state to a sane value when the application starts up.


That about wraps it up. You can download the sample code MinimizeToTray.zip (22.08 KB), though most is below for your viewing pleasure. Enjoy!

using System;
using System.Windows.Forms;

namespace JamesKovacs.Samples.MinimizeToTray {
    public partial class Form1 : Form {
        private FormWindowState m_previousWindowState;

        public Form1() {
            InitializeComponent();
            // Store the initial window state that we want to restore to when we double-click the tray icon
            m_previousWindowState = (this.WindowState == FormWindowState.Minimized ? FormWindowState.Normal : this.WindowState);
        }

        protected override void OnResize(EventArgs e) {
            base.OnResize(e);
            // We need to keep track of whether we're minimizing from a normal or maximized window
            if(this.WindowState != FormWindowState.Minimized) {
                m_previousWindowState = this.WindowState;
            }
            notifyIcon1.Visible = (this.WindowState == FormWindowState.Minimized);
            this.Visible = !notifyIcon1.Visible;
        }

        private void notifyIcon1_MouseDoubleClick(object sender, MouseEventArgs e) {
            this.Visible = true;
            notifyIcon1.Visible = false;
            this.WindowState = m_previousWindowState;
        }
    }
}

Continuing the series on life in a managed world, I promised you Developer Deathmatch. It was honestly more of a friendly challenge between Rico Mariani and Raymond Chen. This is only one application and your mileage will vary from application to application, but the main point here is to show that managed doesn’t necessarily mean slow. Now onto the main event…


Raymond wrote a series of articles about perf tuning an unmanaged application, a Chinese/English dictionary reader. Rico ported the same application to managed code and also perf tuned it. (Links to all of Rico’s and Raymond’s blog posts on the topic can be found here on Rico’s blog.) The results are staggering. Remember, both developers are at the top of their game. So this isn’t a naive unmanaged application versus a highly optimized managed application. Let’s look at the execution times:


Version                          Execution Time (seconds)
Unmanaged v1                     1.328
Unmanaged v2                     0.828
Unmanaged v3                     0.343
Unmanaged v4                     0.187
Unmanaged v5 With Bug            0.296
Unmanaged v5 Corrected           0.124
Unoptimized Managed port of v1   0.124
Optimized Managed port of v1     0.093
Unmanaged v6                     0.062


It was only after Raymond pulled out all the stops with unmanaged v6, writing a custom allocator to replace the new operator, that the unmanaged version surpassed Rico’s first try. This is not to suggest that Raymond is a poor coder – far from it – he’s one of the best.


In the end, Raymond won, but at what cost? It took six versions of the unmanaged code, and who knows how much effort analyzing and tuning, to beat the managed version. As for the managed version, at 0.093 seconds a large portion of the time is the startup overhead of the CLR. At that point you’re hitting a floor imposed by the runtime itself, but for most reasonable (and long-running) applications, spending ~50 milliseconds firing up your runtime at the beginning of execution isn’t going to make a dent in your overall perf.


Why do people think that managed code is slow? One reason is that .NET makes it really easy to do colossally stupid things. Things like sucking a 1 GB XML file off your hard disk and parsing it into an in-memory tree.


XmlDocument dom = new XmlDocument();
dom.Load("someHonkingHugeFile.xml");


Just because you can do this in two lines of code doesn’t absolve you of the fact that you need to understand what you’re doing. You need to understand your toolset. It is not the fault of the toolset designer if you use it improperly. Let me tell you that XmlDocument.Load isn’t going to be returning control to you anytime soon if someHonkingHugeFile.xml is 1 GB. I haven’t tried it, but you’ll probably throw an OutOfMemoryException on 32-bit Windows if you try something dumb like that.
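Part of understanding your toolset is knowing the streaming alternative: XmlReader walks the document forward-only, so memory use stays roughly constant regardless of file size. A minimal sketch (the element name "entry" is made up for illustration):

```csharp
using System;
using System.IO;
using System.Xml;

class StreamingCount
{
    // Count <entry> elements without ever holding the whole document in memory.
    // XmlReader pulls one node at a time, so a 1 GB file is no worse than a 1 KB one.
    public static int CountEntries(TextReader input)
    {
        int count = 0;
        using (XmlReader reader = XmlReader.Create(input))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "entry")
                    count++;
            }
        }
        return count;
    }
}
```

The trade-off is that you give up random access to the tree; for scan-once workloads like counting, filtering, or transforming records, that’s usually a trade worth making.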


Next time I’ll talk about some of my own personal experiences in developing performant managed applications.

This is the first in a series on life in the managed world. Originally it started as a single post and has been growing tentacles ever since. To tame the beast, I’ll only write about a few tentacles at a time…


The performance characteristics of managed code have been an interest of mine for a long time. I love writing C# code because it’s so much easier to write, maintain, and debug compared to unmanaged C++ code. But let’s be honest, if managed code doesn’t truly perform compared to C++, then you’re left with a nice prototyping language. Any serious work will still have to be done in C++.


So let’s start by looking at Microsoft’s use of managed code in its own products. This should be suggestive of whether it’s possible to write performant managed applications. (If anyone can do it, it should be the creators of the managed environment!) Dan Fernandez, Lead Product Manager of Visual Studio Express, released some interesting stats:


Product                            Estimated Managed Code
Visual Studio 2005                 7.50 million lines
SQL Server 2005                    3.00 million lines
BizTalk Server                     2.00 million lines
Visual Studio Team System          1.70 million lines
Windows Presentation Foundation    0.90 million lines
Windows Sharepoint Services        0.75 million lines
Expression Interactive Designer    0.25 million lines
Sharepoint Portal Server           0.20 million lines
Content Management Server          0.10 million lines

Take a closer look. Let’s look at the not-so-interesting ones first. WSS and SPS, though great products that both use .NET, are still very much unmanaged code bases at the core. (Read: a big C++ ISAPI filter that takes over IIS.) From what I’ve heard, this will be changing with WSS/SPS v3.0. (Bil, correct me if I’m wrong here.) The same goes for CMS, SQL Server, and VS 2005 – basically unmanaged applications/servers with .NET bolted on the side. Great for extensibility, but they really don’t tell us anything deep about the performance characteristics of managed code.


The interesting data points are BizTalk Server 2004, WPF, and Expression (aka Sparkle). These products are .NET from front-to-back.


BizTalk is a fantastic EAI tool and pivotal to Microsoft’s strategy in that space. Do you think Microsoft would “bet the farm” if .NET didn’t perform? They’ve got smart engineers. If managed code couldn’t perform up to its unmanaged counterpart, they’d write the core in unmanaged C++ and provide extensibility points by hosting the CLR. But they didn’t. They wrote the whole thing in .NET. That’s rather suggestive (though not conclusive) that managed code can perform well compared to C++.


Windows Presentation Foundation (WPF) is the new windowing framework that is set to replace Win32-style windows. We’re going from a bitmap/HWND-based windowing environment to a composable, vector-based one. At its core, it uses DirectX technology for drawing. Vista is getting a full UI facelift in the form of Aero, which is built on top of WPF. As developers, we love to hate our end users who buy apps based solely on visual appearance. The actual code can be a dog’s breakfast, but as long as it looks pretty, users will love it. So this is another huge bet for Microsoft. If WPF doesn’t perform, it will be immediately noticeable to end users.


Expression (codename Sparkle) is a new design tool for graphic designers to develop WPF applications. Developers will still work in Visual Studio, but graphic designers will author XAML using Expression. (You didn’t expect Microsoft to force VS onto the artist community, did you?) The nice thing is that the XAML will roundtrip between VS and Expression. Love it or hate it, what will drive most users to upgrade to Vista is a prettier interface. If applications are going to have nice Vista-style visuals, Expression had better be fast, easy, and intuitive for graphic designers. (Interesting tidbit: Expression makes only one P/Invoke call. Everything else is pure .NET. The one call? To show Help. The managed wrapper to the Help subsystem is in System.Windows.Forms, and it was deemed silly to take a dependency on Windows Forms solely for this reason.)


Microsoft is making some pretty serious bets on managed code and is baking it more and more deeply into Windows and its server products. From experience, managed code is just plain easier. Forget the machismo about whether you’re man (or woman) enough to handle pointers and pointer arithmetic. Pointers have a time and a place. Sure, break out the C, C++, and even assembly language when it is warranted, but only when warranted. Most of the time you should be living in a managed world.


Next time I’ll look at Developer Deathmatch: Rico “Managed” Mariani vs. Raymond “Unmanaged” Chen.