
I was recently asked how to configure a site to redirect automatically from HTTP to HTTPS. By this I mean when the user types in http://server.example.com/app/page.aspx,* the browser will automatically redirect to https://server.example.com/app/page.aspx. You can do it through code, but with a little ingenuity, you can do it strictly through IIS configuration. Let’s walk through the setup…

Open IIS Manager and select properties for the website for which you want to require SSL. For TCP port, enter any unused port other than port 80 (the default HTTP port). For example, use 8888. For SSL port, enter the default SSL port, which is 443. Now go to the Directory Security tab… Secure communications… Edit… Check “Require secure channel (SSL)” (required) and “Require 128-bit encryption” (optional, but recommended). Restart IIS. Browse to http://server.example.com:8888 now and you will get “The page must be viewed over a secure channel”. So far, so good.

Create a brand new IIS website by right-clicking… New… Web site… Click Next and give the website a name such as “Redirect to SSL”. Click Next… For TCP port, choose port 80, the default HTTP port. For path, point it to c:\inetpub\wwwroot. (It doesn’t really matter, as we’ll be changing this in a minute.) Click Next… Give it Read permissions. Click Next… Finish… to create the website.

Right-click the new website and select Properties. Select the Home Directory tab. Change “The content for this resource should come from:” to “A redirection to a URL”. In the “Redirect to:” textbox, enter https://server.example.com. You can also optionally select “A permanent redirection for this resource”, which will cause bookmarks to update to the new URL. DO NOT, I repeat, DO NOT select “The exact URL entered above” or “A directory below URL entered”. Restart IIS. Now browse to http://server.example.com and you’ll be redirected to the SSL site. Note that the path portion of the URL is preserved; only the protocol and server are modified. So http://server.example.com/some/path/deep/within/my/app.aspx will redirect to https://server.example.com/some/path/deep/within/my/app.aspx just by applying the redirection steps noted above.

N.B. The redirect URL is sent back to the client. So if you type https://localhost as the redirect, the client browser will try to redirect to localhost on the client’s machine, which probably won’t exist. Same thing goes for NetBIOS names. (e.g. https://server rather than https://server.example.com)
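
(For completeness, if you do want to go the code route mentioned above, a minimal sketch in Global.asax might look something like the following; server.example.com is just the example host name from above, so adjust it to your site.)

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Sketch only: redirect any non-SSL request to the HTTPS equivalent,
    // preserving the path and query string (Request.RawUrl).
    if (!Request.IsSecureConnection)
    {
        Response.Redirect("https://server.example.com" + Request.RawUrl, true);
    }
}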

* Little known fact: example.com, example.net, and example.org are reserved second-level domain names, as specified in RFC 2606, Section 3, and are intended for – surprise, surprise – examples. This allows authors to create bogus URLs in books, blog posts, and elsewhere that are guaranteed to go nowhere, which is generally the author’s intent when creating a bogus URL.

Ian Griffiths has an excellent post on why UAC exists and why we as developers shouldn’t turn it off in frustration. Like Ian, I ran Windows XP as a non-admin, and UAC is a welcome relief from the tedium of switching between accounts to configure things.* Now I’m not claiming that UAC is perfect. Notably, Vista Backup doesn’t play nicely: rather than giving you the option to elevate, you must log in as an admin. But honestly, most programs don’t cause elevation prompts, and the ones that do cause them only rarely. I would really like to know what people are doing that causes so many elevation prompts. I run Visual Studio 2005 as a regular user on Vista and can develop quite happily. Sure, there are some edge cases where VS doesn’t work, such as browsing MSMQ within the Server Explorer. Honestly, how often do I do that? Seldom. When I need to, I can run an MMC console elevated rather than running all of VS elevated. So stop your griping and turn UAC back on.

* The Windows Firewall in XP is notoriously unfriendly to non-admins: it blocks a program without offering the option to allow it through by providing admin credentials. That meant that if a new program was blocked, you had to log into the admin account and manually add an exception to the list of programs. Now you just type in your admin credentials.

Ever had the trial version of an application run just fine, but fail horribly when you register a license key? I’ve had this happen with two different and completely unrelated software products – FLStudio (awesome music authoring application) and FinalBuilder (a NAnt/MSBuild replacement). In both cases, the cause was a Data Execution Prevention (DEP) violation. (DEP is on by default in 64-bit versions of Windows, which is why it is most often encountered there. On 32-bit Windows, DEP is enabled only for critical operating system files, though it is simple to turn it on for all files.) I suspect that the licensing scheme generates code from the license key, which is then executed. The code will be generated in a data region, which is marked with the NX (No eXecute) flag on modern processors. When a modern processor tries to execute code in a portion of memory marked with the NX flag, DEP kicks in and tears down the process. The rationale behind this is that you’re not supposed to be executing data! (Apologies to the LISP folks out there.) If you need to dynamically generate code on the heap, clear the NX flag to mark it as executable.

Now you may be wondering about the .NET Framework because the CLR takes MSIL and JITs it to native code, which lives on the unmanaged heap. The CLR clears the NX flag for the regions of the heap where JIT’d methods live. The same applies to JVMs in the Java world and any other JIT’d language.

The Fix

If you’re an end user, there is no (easy) way to mark the generated code as executable. (I suppose you could hook a debugger up to the process every time you launch the application and manually clear the NX flag in the correct region. Thanks, but no.) So the solution is to exempt the executable in question from data execution prevention (DEP). On Vista, go to Computer… Properties… Advanced system settings… Performance Settings… Data Execution Prevention tab… and add the EXE to the exception list. (You should find the DEP tab in a similar place on Longhorn Server, Windows XP SP2, and Windows Server 2003.) The process doesn’t have DEP protection, but at least you can run it.

The Real Fix

If you actually wrote the software, you can really fix the issue. You must clear the NX bit in the regions where you generate code to prevent DEP from tearing down your process. Rather than using malloc or HeapAlloc, you must use VirtualAlloc or VirtualProtect with one of the following flags: PAGE_EXECUTE, PAGE_EXECUTE_READ, PAGE_EXECUTE_READWRITE, or PAGE_EXECUTE_WRITECOPY.
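
To illustrate the API, here is a rough sketch of allocating executable memory from C# via P/Invoke; the constants come from winnt.h, and a native VirtualAlloc call takes the same flags. This is an illustration only, not a drop-in fix for any particular product.

using System;
using System.Runtime.InteropServices;

class ExecutableMemorySketch
{
    // Win32 memory constants from winnt.h
    const uint MEM_COMMIT = 0x1000;
    const uint MEM_RESERVE = 0x2000;
    const uint MEM_RELEASE = 0x8000;
    const uint PAGE_EXECUTE_READWRITE = 0x40;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize, uint flAllocationType, uint flProtect);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool VirtualFree(IntPtr lpAddress, UIntPtr dwSize, uint dwFreeType);

    static void Main()
    {
        // Ask for a page that is writable *and* executable so that code
        // generated into it won't trip DEP when it runs.
        IntPtr codeBuffer = VirtualAlloc(IntPtr.Zero, (UIntPtr)4096,
                                         MEM_COMMIT | MEM_RESERVE,
                                         PAGE_EXECUTE_READWRITE);
        if (codeBuffer == IntPtr.Zero)
            throw new InvalidOperationException("VirtualAlloc failed");

        // ... emit native code into codeBuffer here ...

        VirtualFree(codeBuffer, UIntPtr.Zero, MEM_RELEASE);
    }
}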

Why Malware Can’t Sidestep DEP

Now you may think that this opens a security hole. Can’t malware simply clear the NX flag itself? Let’s walk through the hypothetical exploit:

  1. You have an unexploited application with an undiscovered buffer overflow bug.
  2. Your application gets a malformed request attempting to exploit the bug.
  3. The buffer is overflowed and the return address of the current function is set to point somewhere in the buffer. (This is how a typical buffer overflow exploit occurs.)
  4. The processor tries to execute the data in the buffer, which would contain the attack code. The buffer is marked NX because it is part of the current thread’s stack (or less often, the heap).
  5. DEP kicks in and tears down the process.

At no time has the attacker’s code been given a chance to run. The only way that the attacker’s code could run and call VirtualAlloc/Protect is if your code called VirtualAlloc/Protect on the buffer, which would be a pretty silly thing to do. So if malware can call VirtualAlloc/Protect, it can already run code in your process and do much worse things than dynamically generate code and mark it as executable.

Hopefully this helps some folks solve mysterious licensing failures and helps software authors make their code work on DEP-enabled systems, which are becoming more and more common.

I must bid a fond farewell to a fellow plumber, Dan Sellers. Today was Dan’s last day in the Big House. He is stepping into a well-deserved, early retirement. Dan and I have worked together many times over the years especially on Plumbers @ Work and more recently on the Canadian Developer Security Virtual Team. (I’ve got some posts and content coming up soon.) The developer scene in Canada will not be the same without him. Congratulations, Dan. You’ll be missed.

What is SecurityKicks.com?

SecurityKicks.com is a community-based news site edited by our members. It specialises in security information for developers, including writing secure code, authentication and authorization techniques, cryptography, and related topics.

Individual users of the site submit and review stories, the most popular of which make it to the homepage. Users are encouraged to ‘kick’ stories that they would like to appear on the homepage. If a story receives enough kicks, it will be promoted.

What is a kick?

Kicks are votes of approval from our members. If a story has 16 kicks, that means 16 users liked it. The more kicks a story receives, the more likely that it will appear on the homepage. If you don’t like a story, don’t give it a kick.

How do I submit stories?

You simply need to register for a new account. With an account you can submit stories by clicking the ‘Submit a story’ link on the right menu of the homepage.

How do I find stories?

To find stories that have not been promoted, click the ‘Find stories’ link on the right menu of the homepage. You can also click the ‘find’ link beside each category in the list of categories.

Who are the brains behind this operation?

The original idea for the site was mine, but Gavin Joyce, creator of DotNetKicks.com, deserves the bulk of the credit. He not only set up the site, but is also hosting it. Hats off to Gavin!

I hope you find SecurityKicks.com to be a useful resource for finding developer-related security information. Welcome and enjoy!

Thanks to everyone who attended my talk on Introducing Windows CardSpace. It was one of the liveliest talks I’ve given. I had lots of great questions throughout the presentation and some great discussions with people during and after. I’ve posted the slidedeck here (5086 KB). Following is a list of resources (reproduced from the slidedeck) for those of you who want to learn more about Windows CardSpace.

Windows CardSpace

Whitepapers

Identity Blogs

Presenting at EDMUG last week was a blast. The audience was great and people asked some fantastic questions. I presented Enterprise Architecture for Mere Mortals: Authentication where I discussed the major authentication mechanisms for enterprise applications – basic, NTLM, and Kerberos – and authentication topologies – trusted subsystem, delegation, constrained delegation, and protocol transition. It felt very strange doing a development presentation and never launching Visual Studio. I believe the audience got the point that, although not straightforward, constrained delegation isn’t that hard to configure and you don’t have to resort to basic authentication when you need to do a multi-server hop. (e.g. Sending credentials from the client to IIS to SQL Server.) Here is the slidedeck. (N.B. You’ll need PowerPoint 2007 to open it. Email me if you would like a version for 2003.)

I had Tools of the Trade: Must-Have .NET Utilities in my back pocket in case I ran out of things to talk about regarding security. As it turns out, lack of material wasn’t a problem. I always seem to arrive over-prepared. :^) If EDMUG wants to invite me back, I’ve got a presentation waiting. Or maybe I’ll present it at the Edmonton Code Camp…

Thanks again to EDMUG for inviting me to speak!

For our first event, the Calgary Code Camp was a huge success, if I do say so myself. We had over 80 developers attend and both tracks were constantly buzzing with great discussions. Thanks to everyone who presented for generously sharing their time and knowledge! Thanks also to everyone who attended and made the day a success. We’ll be posting slidedecks and demos to the Calgary Code Camp website in the coming week. In the meantime you can find mine here. You can find some extra goodies in the zip file that I didn’t have time to demo, such as the DatabaseImageHandler and the SiteAvailabilityModule, which I’ll be blogging about in the next while.

In the previous two parts (Part 1 and Part 2), I introduced the ImpostorHttpModule as a way to test intranet applications that use role-based security without having to modify your group memberships. (I’ll assume that you know what I’m talking about. If not, go back and re-read the first two parts.) In the final part, let’s look at what exactly is going on behind the scenes with ImpostorHttpModule…

The ImpostorHttpModule requires surprisingly little code to work its magic. Let’s think about exactly what we want to do. We want to intercept every HTTP request and substitute the list of roles defined for the incoming user in the ~/App_Data/Impostors.xml file instead of the user’s actual roles. (In an intranet scenario, a user’s roles are often just the local and domain groups to which the user belongs.) To do this, we need to implement a HttpModule. We’ll start with the simplest HttpModule, which we’ll call NopHttpModule for “No operation”.

using System.Web;

namespace JamesKovacs.Web.HttpModules {
    public class NopHttpModule : IHttpModule {
        public void Init(HttpApplication context) {
        }

        public void Dispose() {
        }
    }
}

To be a HttpModule, we simply need to implement IHttpModule and provide implementations for the two methods, Init() and Dispose(). We now have to register ourselves with the ASP.NET pipeline. We do this using the <httpModules> section of Web.config.

<?xml version="1.0"?>
<configuration>
    <system.web>
        <httpModules>
            <add name="NopHttpModule" type="JamesKovacs.Web.HttpModules.NopHttpModule, JamesKovacs.Web.HttpModules"/>
        </httpModules>
    </system.web>
</configuration>

That’s it. Not terribly interesting because it does absolutely nothing. So let’s move on and implement the HelloWorldHttpModule, which simply returns “Hello, world!” no matter what you browse to, whether it exists or not!

using System;
using System.Web;

namespace JamesKovacs.Web.HttpModules {
    public class HelloWorldHttpModule : IHttpModule {
        public void Init(HttpApplication context) {
            context.BeginRequest += new EventHandler(context_BeginRequest);
        }

        void context_BeginRequest(object sender, EventArgs e) {
            HttpContext.Current.Response.Write("<html><body><h1>Hello, World!</h1></body></html>");
            HttpContext.Current.Response.End();
        }

        public void Dispose() {
        }
    }
}

Try browsing to /Default.aspx, /Reports/Default.aspx, /ThisDoesNotExist.aspx, or even /ThisDoesNotExistEither.jpg. They all return “Hello, World!” (N.B. ASP.NET 1.X will return a 404 for the JPEG; ASP.NET 2.0 will return “Hello, World!” In 1.X, static files were served up directly by IIS without ASP.NET getting involved. Although this gave excellent performance for images, CSS, JavaScript files, etc., it also meant that those files were not protected by ASP.NET security. With ASP.NET 2.0, all unknown file types are handled by System.Web.DefaultHttpHandler, which allows non-ASP.NET resources to be protected by ASP.NET security as well. See here for more information.)

Now back to our regularly scheduled explanation… In our Init() method, we tell the HttpApplication which events we would like to be informed of. In this case, we grab the BeginRequest event, which is the first event of the ASP.NET pipeline. It occurs even before we determine if the URL is valid, hence our ability to serve up “missing content”.

ASP.NET provides many hooks into its processing pipeline. Here is an excerpt from MSDN2 on the sequence of events that HttpApplication fires during processing:

  1. BeginRequest
  2. AuthenticateRequest
  3. PostAuthenticateRequest
  4. AuthorizeRequest
  5. PostAuthorizeRequest
  6. ResolveRequestCache
  7. PostResolveRequestCache

    After the PostResolveRequestCache event and before the PostMapRequestHandler event, an IHttpHandler (a page or other handler corresponding to the request URL) is created.

  8. PostMapRequestHandler
  9. AcquireRequestState
  10. PostAcquireRequestState
  11. PreRequestHandlerExecute

    The IHttpHandler is executed.

  12. PostRequestHandlerExecute
  13. ReleaseRequestState
  14. PostReleaseRequestState

    After the PostReleaseRequestState event, response filters, if any, filter the output.

  15. UpdateRequestCache
  16. PostUpdateRequestCache
  17. EndRequest

The pipeline in ASP.NET 1.X had many, but not all, of these events. ASP.NET 2.0 definitely gives you much more flexibility in plugging into the execution pipeline. I’ll leave it as an exercise to the reader to investigate why you might want to capture each of the events.

Armed with this information, you can probably figure out which event we want to hook in the ImpostorHttpModule. Let’s walk through the thought process anyway… We are trying to substitute the actual user’s roles/groups for one that we’ve defined in the ~/App_Data/Impostors.xml file. To do this we need to know the user. So we need to execute after the user has been authenticated. We need to execute (and substitute the groups/roles) before any authorization decisions are made otherwise you might get inconsistent behaviour. For instance, authorization may take place against your real groups/roles and succeed, but then a PrincipalPermission demand for the same group/roles might fail because the new groups/roles have been substituted. So which event fits the bill? PostAuthenticateRequest is the one we’re after. In this event, we know the user, which was determined in AuthenticateRequest, but authorization has not been performed yet as it occurs in AuthorizeRequest.

public void Init(HttpApplication context) {
    context.PostAuthenticateRequest += new EventHandler(context_PostAuthenticateRequest);
}

We know which event we want to hook. Now, what to do once we hook it? In .NET, we have Identities and Principals. An Identity object specifies who has been authenticated, but does not indicate membership in groups/roles. A Principal object encapsulates the groups/roles and the identity. So what we want to do is construct a new Principal based on the authenticated Identity and populate it with the groups/roles that we read in from ~/App_Data/Impostors.xml. As it so happens, the built-in GenericPrincipal fits the bill quite nicely. It takes an IIdentity object and a list of roles (in the form of an array of strings). N.B. It doesn’t matter if the Identity is a WindowsIdentity, a FormsIdentity, a GenericIdentity, or any other. All that matters is that the Identity implements the IIdentity interface. This makes the group/role substitution code work equally well regardless of authentication technology.

IIdentity identity = HttpContext.Current.User.Identity;
string[] roles = lookUpRoleListFromXmlFile(identity);    // pseudo-code
IPrincipal userWithRoles = new GenericPrincipal(identity, roles);

Armed with userWithRoles, we just need to patch it into the appropriate places:

HttpContext.Current.User = userWithRoles;
Thread.CurrentPrincipal = userWithRoles;

We have discarded the original principal (but kept the original identity) and patched in our custom one. That’s about it. Any authorization requests are evaluated against the new GenericPrincipal and hence the group/role list that we substituted.
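
Putting those pieces together, the event handler ends up looking something like this. (A sketch rather than the exact module source: GetImpostorRoles is a hypothetical helper that looks the user up in ~/App_Data/Impostors.xml, and the snippet assumes using directives for System.Security.Principal, System.Threading, and System.Web.)

void context_PostAuthenticateRequest(object sender, EventArgs e)
{
    HttpContext context = HttpContext.Current;
    IIdentity identity = context.User.Identity;

    // Hypothetical helper: returns the roles listed for this user in Impostors.xml, or null.
    string[] roles = GetImpostorRoles(identity.Name);
    if (roles == null)
        return;

    // Keep the original identity, but substitute our own list of roles.
    IPrincipal userWithRoles = new GenericPrincipal(identity, roles);
    context.User = userWithRoles;
    Thread.CurrentPrincipal = userWithRoles;
}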

An additional feature I would like to point out is caching of the users/roles, as you probably don’t want to parse an XML file on every request. The users/roles list will auto-refresh if the underlying ~/App_Data/Impostors.xml file changes. Let’s see how this works. We store a Dictionary<string, string[]> in the ASP.NET Cache, which maps users to roles as parsed from the ~/App_Data/Impostors.xml file. If it doesn’t exist in the Cache, we parse the XML file and insert it into the Cache along with a CacheDependency like this:

HttpContext.Current.Cache.Insert("ImpostorCache", impostors, new CacheDependency(pathToImpostorsFile));

When the underlying file changes, the entry is flushed from the cache. The next time the code runs, the cache is re-populated with the contents of the updated ~/App_Data/Impostors.xml.
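
Put together, the lookup follows the usual get-or-parse pattern, roughly like this (ParseImpostorsFile is a hypothetical helper that does the actual XML parsing):

Dictionary<string, string[]> impostors =
    (Dictionary<string, string[]>)HttpContext.Current.Cache["ImpostorCache"];
if (impostors == null)
{
    string path = HttpContext.Current.Server.MapPath("~/App_Data/Impostors.xml");
    impostors = ParseImpostorsFile(path);   // hypothetical XML parsing helper
    HttpContext.Current.Cache.Insert("ImpostorCache", impostors, new CacheDependency(path));
}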

One last point… The ImpostorHttpModule is meant for development/testing purposes, which means that I haven’t optimized it for performance, but for ease of implementation and comprehension.

So there you have it – the ImpostorHttpModule. Hopefully you have a better appreciation for the power and extensibility built into ASP.NET as well as some cool ideas of what else you can implement using HttpModules. Full source code can be found here.

In our last cliff-hanger episode, I introduced the ImpostorHttpModule. I’m going to show how you can use it to implement and test a sitemap and navigation menu in ASP.NET. We’ll use the new ASP.NET 2.0 Master Pages feature because it’s the easiest way to ensure that the same menu ends up on every page. We’ll start from the previous solution with the ImpostorHttpModule registered in the Web.config. I’ve created three pages, ~/Default.aspx, ~/Reports/Default.aspx, and ~/Admin/Default.aspx. The Web.config files are set up as follows:

Path                      Allowed Roles
~/Default.aspx            User, Manager, Administrator
~/Reports/Default.aspx    Manager, Administrator
~/Admin/Default.aspx      Administrator

We’ll be using the security trimming feature of sitemaps, which removes nodes from the sitemap that are not accessible by the current user. Note that this is simply a UI nicety and not actual security – without the appropriate Web.config files and <authorization> sections, users could still access those areas by navigating directly to them. So let’s start by implementing the menu and then we’ll enable security trimming. We’ll see how ImpostorHttpModule can help us in testing these features.
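
For reference, the ~/Reports/Web.config implied by the table above might look something like this (a sketch; your actual files may differ):

<?xml version="1.0"?>
<configuration>
    <system.web>
        <!-- Hypothetical example: only Managers and Administrators may reach ~/Reports -->
        <authorization>
            <allow roles="Manager, Administrator"/>
            <deny users="*"/>
        </authorization>
    </system.web>
</configuration>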

Our first step is to add a sitemap, which we’ll leave with the default name of Web.sitemap. We’ll add three nodes – Home, Reports and Administration. One of the odd things about the sitemap XML schema is that the child of <siteMap> must be a single <siteMapNode>. So if you want to have multiple nodes, you need to create an empty <siteMapNode> with further child nodes like this:

<?xml version="1.0" encoding="utf-8" ?>
<siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0" >
    <siteMapNode>
        <siteMapNode url="~/Default.aspx" title="Home" description="Return to the Home page" />
        <siteMapNode url="~/Reports/Default.aspx" title="Reports" description="Go to reports" />
        <siteMapNode url="~/Admin/Default.aspx" title="Administration" description="Go to site administration" />
    </siteMapNode>
</siteMap>

One quick note is the use of ~/, which allows us to create links that are relative to the application’s virtual directory. This is a great ASP.NET feature, and it works in most places where links appear in server-side controls, as it allows you to deploy the site at http://servername/ or http://servername/someVdir/ or anywhere else.
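
(The same app-relative syntax can be resolved in code too; for example, inside a page or user control, something like this:)

// Resolves relative to the application root: "/Reports/Default.aspx" when the site
// is deployed at http://servername/, or "/someVdir/Reports/Default.aspx" when it is
// deployed at http://servername/someVdir/.
string reportsUrl = ResolveUrl("~/Reports/Default.aspx");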

Now that we have our sitemap set up, we can create a Menu that is driven by the sitemap. We’ll create a master page called Main.master and add the Menu to it in design mode. We’ll create a new SiteMapDataSource and point the Menu control at it. Since we have an empty starting node, we’ll set SiteMapDataSource.ShowStartingNode = false. (I also changed the menu to use the Classic style and horizontal orientation.) We should now see three nodes: Home, Reports, and Administration in the Menu. If we try to browse the site, we’ll get a 401 – Access Denied because our user account is not in any of the appropriate roles. At least the Web.config files are doing their jobs. Let’s see how to use the ImpostorHttpModule to give ourselves access to these directories without having to create a local or domain group or add our user account to it. (In a deployment situation, you would likely have a domain group that a domain admin would add users to. We’re trying to avoid all the overhead of calling up your friendly neighbourhood domain admin every time you want to test a different security context when accessing your app. Believe me – your domain admin will thank you.)

So let’s add our user, DOMAIN\Foo, to the User, Manager, and Administrator roles in the ~/App_Data/Impostors.xml file. (N.B. If you’re logged on using a local account, you’ll need to specify MACHINE\Bar. If you logged in using the alternate user name syntax, foo@example.com, you’ll need to use that form for the name.)

<?xml version="1.0" encoding="utf-8" ?>
<impostors>
    <impostor name="DOMAIN\Foo" roles="User, Manager, Administrator"/>
</impostors>

Clicking on the Menu links should allow you to see the Reports and Administration sections. All is good in the world. Now let’s implement security trimming! Our next stop is Web.config, where we’ll have to add a new sitemap provider, as the default one declared in Machine.config does not have security trimming enabled. The key attribute here is securityTrimmingEnabled="true".

<siteMap defaultProvider="XmlSiteMapProvider" enabled="true">
   <providers>
      <add name="XmlSiteMapProvider" description="Default SiteMap provider." type="System.Web.XmlSiteMapProvider" siteMapFile="Web.sitemap" securityTrimmingEnabled="true"/>
   </providers>
</siteMap>

With this change, the menu disappears entirely! We haven’t defined any roles that can see the nodes. So they are all trimmed off. Let’s update the Web.sitemap to add the roles:

<?xml version="1.0" encoding="utf-8" ?>
<siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0" >
    <siteMapNode roles="User, Manager, Administrator">
        <siteMapNode url="~/Default.aspx" title="Home" description="Return to the Home page" roles="User, Manager, Administrator" />
        <siteMapNode url="~/Reports/Default.aspx" title="Reports" description="Go to reports" roles="Manager, Administrator" />
        <siteMapNode url="~/Admin/Default.aspx" title="Administration" description="Go to site administration" roles="Administrator" />
    </siteMapNode>
</siteMap>

A quick note: you must specify all the roles on the empty <siteMapNode>. If you do not, it gets trimmed and none of the child nodes get displayed either. (It took me a while to realize this the first time I used sitemaps and security trimming.)

Now that we have everything in order, try removing your user account from the Administrator role in ~/App_Data/Impostors.xml and browse to ~/Default.aspx. The Administration link disappears. You can now happily add and remove roles from your user account to test what different kinds of users would see when they browse the site.

In summary, the ImpostorHttpModule allows us to easily test different security configurations for our site without having to change our local or domain group memberships. Since we’re replacing the list of roles that the incoming user is a member of, this technique not only works for testing sitemaps and website security using <authorization>, but it also works for declarative and imperative security demands in code. Full source code is available here. In our next episode, we’ll look at how the ImpostorHttpModule works under the covers.