Easy URL Rewriting With ASP.NET Routing

URL rewriting is a great way to avoid the problems typically associated with the standard .NET way of building web apps.  Take this typical URL, for example:


Most ASP.NET solutions wind up having URLs that look like this.  It works, but there are problems with these types of URLs:

  • They’re long and ugly.
  • They make things look complicated and scary to typical end users.
  • They don’t let power users understand and navigate the site through modifying the URL.
  • Search engines don’t like them.
  • Inner workings of your code are exposed, which could lead to security issues.
  • URLs are tied to the specific implementation.

What you really want is to have that same web page handling requests, but accessed through an URL like this:


There are two great tools in the ASP.NET world to rewrite URLs: the IIS URL Rewrite module, and the System.Web.Routing functionality built into the framework.  Unfortunately, they both have serious drawbacks.

The URL Rewrite module snaps into IIS and lets you configure rewriting without touching any code.  It’s worth researching this tool a bit, because depending on what you’re trying to accomplish, it might suit your needs exactly.  But, it does suffer some serious drawbacks for .NET developers:

  • The URL Rewrite module must be installed on the server.
  • Visual Studio is not aware of URL Rewrite.
  • URL Rewrite doesn’t work with the ASP.NET Development Server built into Visual Studio; you have to build and debug your application through IIS.
  • You can only configure rewrites through the limited URL Rewrite scheme; you can’t write code.

ASP.NET Routing doesn’t suffer from these drawbacks, but while it’s far more powerful, it’s also far more complicated to set up.  It does require code changes, and when used in the traditional manner, it requires each page in your site to be rewritten to use the Routing architecture.  This means a lot of work.  But, there’s a very easy way to harness the power of Routing and use it much like you would the URL Rewrite module.

Enable routing for your application

The first thing you’ll need to do is map a page route within your application.  This is done through code.  The Global.asax class contains a method called Application_Start which is run whenever your application starts, so it’s an ideal place for this.  Find this method, and add this code:

// Register routing
System.Web.Routing.RouteCollection routes = System.Web.Routing.RouteTable.Routes;
routes.Ignore("{resource}.axd/{*pathInfo}");
routes.MapPageRoute("Generic Routing", "{page}/{*id}", "~/routing.aspx");

There are three important arguments to the MapPageRoute call: a friendly name for the route you’re adding (call it whatever you’d like), the format of the URL that’s going to be caught, and the path to the ASP.NET page you’d like to handle matching URLs.  In this example, we’re going to catch URLs of the style used as an example at the top of this page, but you could easily change this to work with an MVC pattern or anything else you need.  Of course, you’re not limited to just one mapping, but that’s all we need for this example.  The asterisk is used to indicate that the {id} part of the URL pattern could contain slashes. The call to .Ignore prevents requests to WebResource.axd from being caught by your routing.

Build your routing pages

Now, add a page called “routing.aspx” to your project, and add this code to Page_Load:

// Get routing data
string page = (string)this.RouteData.Values["page"];
string id = (string)this.RouteData.Values["id"];

// Transfer to the appropriate page
if (page == "categories")
    Server.Transfer(string.Format("~/displaycategory.aspx?category={0}", id));
else if (page == "titles")
    Server.Transfer(string.Format("~/displaytitle.aspx?title={0}", id));

You can see what’s going on here: the code gathers the values that were used to build the URL, checks to see which page should handle the request, and then forwards the request on to the proper page.  In this example, we also remap requests to URLs like this:


From this point forward, you can configure as many rewrites as you’d like through this one little bit of code.  Because this is code, though, you can modify this to suit whatever needs you may have.  If you wanted, you could even write this to draw data from an XML file so you don’t need to touch code to edit your URL mappings.
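As a sketch of that XML-driven idea (the file name, element names, and attribute names here are all hypothetical), the Page_Load code could look something like this:

```csharp
// Hypothetical mapping file (~/App_Data/mappings.xml):
//
// <mappings>
//   <map page="categories" target="~/displaycategory.aspx?category={0}" />
//   <map page="titles" target="~/displaytitle.aspx?title={0}" />
// </mappings>

string page = (string)this.RouteData.Values["page"];
string id = (string)this.RouteData.Values["id"];

System.Xml.Linq.XDocument mappings =
    System.Xml.Linq.XDocument.Load(Server.MapPath("~/App_Data/mappings.xml"));

// Find the first mapping whose page attribute matches, and transfer to it
foreach (System.Xml.Linq.XElement map in mappings.Root.Elements("map"))
{
    if ((string)map.Attribute("page") == page)
    {
        Server.Transfer(string.Format((string)map.Attribute("target"), id));
        break;
    }
}
```

With something like this in place, adding or changing a mapping is just an edit to the XML file; no recompile required.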

Really, this is the best of both worlds: easy-to-configure, drop-in URL rewriting that works with any existing solution, doesn’t require anything to be installed, works with Visual Studio, and lets you write code wherever you need a bit more complexity.

Choosing Between C# and VB

Here’s a question most .NET developers have to deal with: C# or VB?

This can be a pretty heated debate; people love to defend the tools they love.  Once you get down to work, though, both languages are very similar.  They both have access to the same libraries and tools, they both have full support from Microsoft and enormous developer communities, and they both get the job done well.

But there are differences.  Let’s look at some of the more important ones:

C# Only: Better syntax

Let’s face it: C-style syntax is better than BASIC-style syntax.  You just can’t argue this one.  BASIC is too wordy; C lets you focus on what matters: your code.  Sure, both languages have code generation and IntelliSense and code snippets, and yes, you can come up with examples where VB code is shorter and more elegant than C#.  But for the most part, it’s pretty hard to argue that VB syntax is designed for experienced developers.

This isn’t as big a deal as you might think.  There’s no scenario where C# syntax is much faster to code in than VB syntax (assuming you have Visual Studio to back you up). But, C# is just a tiny bit faster in 500 different ways, and it adds up.  There are other factors to consider in choosing a language, of course, but this remains a very compelling argument.

C# Only: More advanced development community

VB is generally easier for new developers to pick up, and often allows faster development.  This might sound like an advantage to VB, but there’s a huge counter-argument: the C# community tends to be more advanced than the VB community, and is often more respected.  An experienced developer who prefers VB might have a hard time convincing others that VB can sometimes be a better choice, but a developer who only knows VB will be laughed right out of the room.

If you’re trying to decide on a single language to learn, don’t.  You need to understand at least half a dozen languages and technologies to get anything done in the real world: HTML, CSS, Java(script), XML, SQL, C(++), and more. And if you’re going to be a .NET developer, learn C# and VB.

C# Only: Unsafe code

The .NET world is wonderful, but sometimes you need to drop back to the frightening world of direct memory management.  You can usually accomplish the same tasks in VB through managed code, and even in the pre-.NET world, VB could still read and write to locations in memory directly, but there’s no getting around the fact that C# is a better choice if you can’t imagine a world without pointers.

C# Only: Checked / Unchecked

C# also lets you control exactly when overflows and underflows are caught and when they’re ignored.  In the managed world, it’s pretty tough to argue that overflow can actually be useful, but there’s a lot of legacy code – and legacy developers – out there who depend on things working the way they always have.

C# Only: Iterators

C# also lets you work with iterators.  Sure, VB knows how to iterate, but C# has a bit of extra power and flexibility here.  Check this out:

public IEnumerator<string> GetEnumerator()
{
    foreach (string s in strings)
        yield return s;
}

Iterators essentially let a function return values in the middle of the function.  This is a great tool, and one that’s hard to get used to not having when coding under VB.
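To see the idea in a self-contained form, here’s a minimal sketch (the class and field names are made up for illustration):

```csharp
using System.Collections.Generic;

public class WordList
{
    private string[] strings = { "alpha", "beta", "gamma" };

    // The method pauses at each yield return and resumes on the next
    // iteration; values are produced one at a time, on demand.
    public IEnumerator<string> GetEnumerator()
    {
        foreach (string s in strings)
            yield return s;
    }
}

// Usage:
//   WordList words = new WordList();
//   foreach (string s in words)
//       System.Console.WriteLine(s);   // alpha, beta, gamma
```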

C# Only: Refactoring

Only C# includes refactoring support built right into the IDE.  These are a collection of extra tools and commands that make development easier and faster, and C# developers are often shocked to learn that VB doesn’t include these features.  True, there are enhanced refactoring add-ins available for both languages that do a better job than what’s built into the C# IDE, but you can’t beat having something ready to go right out of the box.

VB Only: Handles and WithEvents

In C#, you have to hook up events through code.  Sure, there’s designer support available, but it makes for a more complicated project.  In VB, the Handles keyword does all this work for you.  When it comes to creating a UI for your application, this is a really big deal and makes VB developers significantly more productive: things are simpler, and you just don’t have to write as much code.  When it comes to writing business logic and other UI-less code, this doesn’t really matter very much.
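For comparison, here’s roughly what the explicit C# hookup looks like (the control and handler names here are illustrative); in VB, the handler would simply declare Handles saveButton.Click and the wiring code disappears:

```csharp
// Somewhere in the form's initialization code, the event must be
// explicitly wired to its handler:
this.saveButton.Click += new System.EventHandler(this.saveButton_Click);

// The handler itself:
private void saveButton_Click(object sender, System.EventArgs e)
{
    // ...
}
```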

VB Only: With

VB offers the With structure.  Not only is this convenient, it also improves performance.  Take this bit of C# code:

System.Text.StringBuilder sb = new StringBuilder();
sb.AppendLine("FileName: " + System.Diagnostics.Process.GetCurrentProcess().MainModule.FileName);
sb.AppendLine("Memory size: " + System.Diagnostics.Process.GetCurrentProcess().MainModule.ModuleMemorySize.ToString());
sb.AppendLine("Entry point: " + System.Diagnostics.Process.GetCurrentProcess().MainModule.EntryPointAddress);

And now look at it under VB:

Dim sb = New System.Text.StringBuilder
With System.Diagnostics.Process.GetCurrentProcess.MainModule
    sb.AppendLine("File name: " & .FileName)
    sb.AppendLine("Memory size: " & .ModuleMemorySize)
    sb.AppendLine("Entry point: " & .EntryPointAddress.ToString())
    sb.AppendLine("Debug: " & .FileVersionInfo.IsDebug)
End With

That’s just less code.  Less code is easier to write, read, and maintain.

This example might be a little contrived:  in the real world, you’d just declare a new variable (and give it a short name).  My point, though, is that with VB, you don’t have to do this.

VB Only: My

The My class is pure convenience.  There’s nothing under My that can’t be found elsewhere in the framework, but it makes it very easy to access a lot of calls in the framework that were previously difficult to find and use.  Have a look at this:

Try
    If My.User.IsInRole(ApplicationServices.BuiltInRole.Administrator) Then
        My.Computer.Network.DownloadFile("http://server.com/data.xml", My.Computer.FileSystem.SpecialDirectories.Desktop)
    End If
Catch ex As Exception
End Try

C# can do all this, of course, but it’s going to take more code.  That said, a lot of the functionality under My is just there to help beginners find what they’re looking for.  There are a few things here that are invaluable (such as the My.Settings class), but generally, C# developers won’t miss this too much.

VB Only: XML / Date Literals

Date literals have been around in VB forever, and while it’s debatable how often you should be hardcoding dates in code, it’s still nice to have the option (although it’s too bad the illogical American MM/dd/yyyy format is used).  XML literals, on the other hand, are a huge leap forward.  Once you work with XML in VB for a while, going back to C# will be pretty painful.  Have a look at this code, for example:

Dim allScreens = From s In Screen.AllScreens
                 Select <Screen>
                            <Device><%= s.DeviceName %></Device>
                            <Width><%= s.Bounds.Width %></Width>
                            <Height><%= s.Bounds.Height %></Height>
                            <BitsPerPixel><%= s.BitsPerPixel %></BitsPerPixel>
                        </Screen>

Dim document = <?xml version="1.0" encoding="utf-8"?>
               <Screens><%= allScreens %></Screens>


That’s insanely, ridiculously simple.  And the IntelliSense support here is amazing; you really have to try it to understand how beneficial this is.  If you work with XML much, this is a really compelling reason to pick VB over C#.

VB Only: Late-binding and COM

This is another big one.  VB allows developers to use late-binding.  Essentially, this means a developer can call a member on a variable declared simply as Object.  At run-time, the compiler looks at the object, and if the call makes sense, it runs.  If it doesn’t make sense, an error occurs.  In the theoretical world of pure managed code and beautifully designed classes, using such a feature would be considered poor code.  In the real world, though, it’s nice to have this option available.  And where it really makes a world of difference is when you’re working with COM objects.  Again, let’s compare.  Here’s some VB code that automates Microsoft Word a bit:

With CreateObject("Word.Application")
    With .Documents.Add()
        .Range.Text = Clipboard.GetText()
    End With
End With

And here’s the same code in C# (brace yourself!):

object app = Activator.CreateInstance(Type.GetTypeFromProgID("Word.Application"));
app.GetType().InvokeMember("Visible", System.Reflection.BindingFlags.SetProperty, null, app, new object[1] { true });
object docs = app.GetType().InvokeMember("Documents", System.Reflection.BindingFlags.GetProperty, null, app, null);
object doc = docs.GetType().InvokeMember("Add", System.Reflection.BindingFlags.InvokeMethod, null, docs, null);
object range = doc.GetType().InvokeMember("Range", System.Reflection.BindingFlags.InvokeMethod, null, doc, null);
range.GetType().InvokeMember("Text", System.Reflection.BindingFlags.SetProperty, null, range, new object[1] { Clipboard.GetText() });
doc.GetType().InvokeMember("SaveAs2", System.Reflection.BindingFlags.InvokeMethod, null, doc, new object[1] { "clipboard.docx" });
doc.GetType().InvokeMember("Close", System.Reflection.BindingFlags.InvokeMethod, null, doc, null);
app.GetType().InvokeMember("Quit", System.Reflection.BindingFlags.InvokeMethod, null, app, null);

As you can see, working with COM in this fashion is really, really painful under C#.  In fact, only through reflection is this even possible!  This has been improved somewhat with the recent addition of the dynamic type in C# 4.0, if you’re able to take advantage of the latest version.
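For the curious, here’s roughly how that automation could look with the dynamic type (a sketch; it assumes Word is installed, and member calls are resolved at run-time just like VB’s late-binding):

```csharp
// With dynamic, the compiler defers member resolution to run-time,
// so the reflection plumbing disappears.
dynamic app = Activator.CreateInstance(Type.GetTypeFromProgID("Word.Application"));
app.Visible = true;
dynamic doc = app.Documents.Add();
doc.Range.Text = Clipboard.GetText();
```

The result reads almost line-for-line like the VB sample above it.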

VB Only: Implicit Conversions

In C#, all type conversions must be performed explicitly.  In VB, most simple conversions are performed automatically by the compiler.  This means you can add 2 and 2.0, and it means you don’t need to type .ToString() anywhere near as often.  It can save a lot of time, but can also cause bugs if the conversion wasn’t expected.  Note that you don’t want to give this feature to new developers; they will only get themselves into trouble with it.  It’s great to have conversions done implicitly, but only if you already understand what’s going on under the hood.

VB Only: Better IntelliSense and Error List

In VB, the IDE is much faster at updating IntelliSense, the Error list, and other tools.  Under C#, you often need to rebuild your project to update the Error list and certain other features.  And, IntelliSense is just all-around better under VB.  This might not seem like a drastic difference, but it saves you a second or two countless times a day.  You’ll certainly notice this when moving between the languages frequently.

Other Differences

I think that’s about it for major features available in only one language.  There are a plethora of other small arguments to make, but none that really have much of an overall impact on choosing a language.  There are, of course, many other major differences that don’t really have a clear advantage one way or the other.  Namespaces are handled differently.  VB offers project-level Imports, while C# is better at helping you manage file-level ‘using’ statements.  C# offers static classes, while VB offers Modules.  Commenting works differently.


There is one area where C# is really the only sensible choice:

  • Unsafe code

There are three areas where VB has a clear advantage over C#:

  • Working with COM (although C# 4.0 narrows the gap)
  • Working with XML
  • Developing UI

Outside of these areas, it all comes down to personal preference.  C# has the better syntax and a more advanced community, while VB offers a range of features and aids not available to C#.

But remember: it’s not about the tool.  It’s about what you do with it.

OneNote not syncing–Windows Phone 7

It’s finally here, and it seems to be getting better and better each and every day! While the new Windows Phone 7 does follow an Apple-like model by excessively locking down the device, it does seem that underneath all of that new shininess is the ability to get under the hood and tweak the device just like any other Windows-based device ever created.

Overall, I love my LG Optimus Quantum, but I do have to report one small glitch that I have encountered and give our readers some pointers on how to fix this.


One of the first things I noticed on the device was the beautifully integrated Office 2010 components, which can be set to automatically sync to Windows SkyDrive. I have always loved OneNote, and having a fairly complete version of it running on a smartphone is a godsend.  Unfortunately, it isn’t as intuitive as it may look.

By default, when you set up a Windows Phone 7, it asks you for a Live ID to which it will sync itself. With OneNote, it will create a default notebook called Personal (Web). Logging in to your SkyDrive, you will also see this notebook.

I wasn’t a fan of this, as I like to create custom notebooks, and I really didn’t want the default location for my saved notes to be Personal (Web) (whoever chose this name for the default notebook should be tarred and feathered and/or sent to Guantanamo). I proceeded to delete that notebook. It was at this point that my brand new phone started generating errors, barking at me that it couldn’t sync and that the default location for saving unfiled notes was gone. The phone gave the helpful suggestion of creating a new location for these; however, there seemed to be no apparent way of creating a new notebook that would sync and allow itself to become the default.

The obvious solution would be to log into the SkyDrive account, create a new notebook called Personal (Web), and everything would be happy again, right? Well, no, not exactly.  I struggled for a long time and was almost ready to wipe the phone when I decided to browse to office.live.com on the phone’s browser.  What I saw there was actually some other notebooks that I thought I was having trouble syncing.  If you open one of these from here, they will automatically be added to your phone, and you can then select one of them as your default notebook. This will entirely fix the sync issue.

This is really counter-intuitive, and Microsoft doesn’t seem to mention anywhere that you have to do this. Instead, on their Windows Phone 7 site, they warn people about deleting the default notebook, with no mention of what to do if you already have.  This is also the method you will need to use to add your own existing OneNote notebooks to the phone.

Again, I love the phone, but Microsoft will hopefully start making things like this a little clearer. It’s not that any of this was difficult, it’s just that it was by complete happenstance that I found the answer to my problem. 

Hope this helps someone out there.

Speed up the Visual Studio Development Web Server

Here’s a fix to a problem many people don’t realize they’re having!  When you debug a web site project in Visual Studio, by default, a simple little web server called ‘Visual Studio Development Web Server’ (previously known as Cassini) fires up so you can test your site with whatever browser or tools you want:


You’d think that since this tiny little web server runs on the local machine, everything should be pretty speedy, right?

Well, it’s not.  Sometimes, it kind of works.  Sometimes, it times out.  And here’s the problem: IPv6.  I hate this technology.  Sure, it might be necessary, but it’s a pain-in-the-ass over-engineered solution that is going to cause everybody a LOT of grief over the next few years. By default, Windows tries to use IPv6 first.  Why?  Because it’s so much more awesome, I guess.  Unfortunately, the web server built into Visual Studio doesn’t play nice with IPv6.

I offer you several fixes here.  Pick the one you hate least!  Note that these fixes can on occasion be a bit finicky; you may have to restart your browsers, flush DNS caches, restart your computer, or scream and curse for a while.

The Quick Fix

When you start your project, your browser will be sent to an address like this (the port number will be random):


Simply change localhost to 127.0.0.1, so the address looks something like this (leaving the original port number):

Remember to include the ‘http://’ in the URL you type here.  Ugly, yes, but it works instantly, you don’t need to reconfigure anything, and you don’t need admin access.  The down side is that you have to do this every time you launch the project.

The Easy Fix

Want to fix this issue permanently?  The best way is to edit your hosts file, which you’ll find here:

C:\Windows\System32\drivers\etc\hosts
Towards the bottom, you’ll find this line:

#	127.0.0.1       localhost

Uncomment this line by removing the ‘#’.  Then save the file.

There’s another line right after this one that mentions ‘::1’; leave that one the way it is.  The hosts file is protected, so the easiest way to save it is to save a copy to your desktop and then move that copy to the original location; this way, Windows offers you the opportunity to elevate and overwrite rather than simply giving you a ‘read only’ error.

This fix should instantly take care of the problem machine-wide.  In theory, this shouldn’t break anything – IPv6 is still turned on, and resolution still works – but if this is a server, you might want to test things thoroughly.

The Browser-Specific Fixes

There are options within some browsers to disable IPv6.  Doesn’t seem like the best way of going about solving this problem, but hey, you do what you gotta do.

In Firefox, browse to about:config and toggle the network.dns.disableIPv6 preference:


In Chrome, start the browser with the “--disable-ipv6” argument.  Note that the dashes are a bit awkward; you have to get this exactly right.  See our (outdated) article Google Chrome on Windows 7 for more details on making this change in your shortcuts.

Other Fixes

There are other ways of fixing this out there.  These include various ways of disabling IPv6, registry hacks, and editing the web.config file.  None of these are particularly ideal, unless you know exactly what you’re doing (in which case, why are you reading this?).  Note that disabling IPv6 (as some existing articles out there will tell you to do) will break things!

Fix: Cannot import the following key file

Here’s another quick fix for a small issue you may encounter when upgrading your project to Visual Studio 2010.  You may find that the import works okay, but when you go to compile, you get the following error message:

Cannot import the following key file: keyfile.pfx. The key file may be password protected. To correct this, try to import the certificate again or manually install the certificate to the Strong Name CSP with the following key container name: VS_KEY_0123456701234567

The cause of the error is exactly as Visual Studio described: it can’t open the key file it needs to access because the key file is password protected.  The suggested fix, however, is not likely to set you on the right path.  In fact, Visual Studio should really just fix this itself.  In previous versions, it would.  Remember you’d occasionally get a password prompt when opening a project for the first time?

Well, all we need to do to fix this is trigger Visual Studio to ask you for the password.  Then, it will do its thing and you’ll be set.  Try this:

  1. Open Project Properties.
  2. Click on the Signing section.
  3. Where it says ‘Choose a strong name key file:’, reselect the current value from the drop-down box:

  4. Visual Studio will now prompt you for the password.  Enter it.

  5. You might get another error message:

    “An attempt was made to reference a token that does not exist”

    If so, just ignore it.
  6. Click the ‘Change Password’ button:

  7. Enter the original password in all three boxes and click OK. If you’d like to change your password (or if your old password doesn’t meet complexity requirements), you can do so now.
  8. Repeat for each key file in your project.
  9. Save your project and do a rebuild.

Of course, there are more formal ways of solving this, but they involve using the signtool.exe application and messing around with the certificate store.  This might not be the most impressive way of solving the problem, but it seems to work.

Fix IE9 Address Bar Search

So the IE9 beta is out.  The rest of the Internet is busy reviewing it today, so I won’t bother.  I will, however, show you how to fix one little problem.  It has to do with a little-used feature that I happen to use from time to time: searching.

IE8 had separate text boxes for entering an URL and entering a search query.  This made it simple for developers, but it kind of sucked for users.  Really, by 2010 I kind of thought my web browser would know how to figure out the difference between www.microsoft.com/msdn and “Microsoft MSDN” and choose whether to navigate directly or search accordingly.  Google Chrome really upped the game here: there’s one box, and it always seems to know exactly what you want to do.  It’s still the best implementation of an address bar out there, in my opinion, but IE9 is certainly catching up.

Under IE9, if you type an address, it works.  Wonderful.  But if you type something that’s not clearly an address, one of two things happens.  Either you’re brought to a search results page (from Bing, Google, or whoever else you’ve chosen to use) or – if there’s a really obvious ‘best’ search result – you’re taken right to the site you obviously wanted to go to.  This sounds nice, but when I type something that’s not an address, I want search results.  If I wanted to go to www.linux.org, that’s what I would have typed, so why doesn’t “linux” take me to my search results page where I can click on the Wikipedia article?

Luckily, this behavior can easily be changed.  When I first started looking into this, I expected something ugly… maybe even as bad as writing my own search provider.  But the solution is really simple.  Obscure, perhaps.  But simple.  Here’s how:

Click the “Tools” button (the gear at the top right of the window), and then click “Manage add-ons”:


Now click on “Search Providers”, and then select Google (or Bing, or whatever else you use):


See the “Disable top result in address bar” link I highlighted?  Click it.  Then click Close.  That’s it!  Now, you’ll always get search results (unless you typed an address, in which case it will go where you told it to).

When I first ran into this, I started to rant a bit.  But after figuring this one out, I have to admit that Microsoft got it right.  They set up the default setting the way the unwashed masses will like it, they made it nicely configurable for those who want it to work a specific way, and they kept the details out of the way until needed.

Choosing a .NET Framework Version

Whether you’re starting a new project or just releasing a new version of a tried-and-true application that’s been around forever, one major decision you need to make – and make right – is the framework version you choose to target.  This is more complicated a question today than ever before, but since the release of .NET 4, there are good choices available regardless of your scenario.  Here’s a quick guide:

.NET 1.0

This is now completely obsolete.  Don’t use this for new projects.  Ever.  If you’re supporting an application that uses this, at LEAST upgrade to 1.1, and ideally, 2.0.  It’s generally not that hard, and the 1.0 release of .NET has some ugly little surprises hidden away, just waiting for the opportunity to ruin your week.  Microsoft no longer supports this version, and yes, there are serious bugs.

When to use:

  • Never.

When to upgrade:

  • Long, long ago.

.NET 1.1

This is pretty old.  Don’t use this for anything new, and avoid active development on this platform. It’s still safe to use, though, so feel free to maintain code running on this for another few years.  In fact, this was the first version of .NET to be included with an OS (Windows Server 2003), so Microsoft will continue to support this until 2015 at least.

When to use:

  • Your application must support Windows NT 4.0.
  • Maintaining applications for which active development has ceased.

When to upgrade:

  • Immediately, if active development continues.
  • In the next few years, if long-term support is required.

.NET 2.0

I love this release.  This is when .NET came of age, and it’s still used all over the place.  As I’ll describe below, there are even reasons why you might want to base new development on this release.  Sure, it might not have all the fancy new features the newer releases include, but the core functionality is rock solid, and everything you need for a simple, timeless, reliable application is there.  It’s the last .NET release to run on Windows 2000, and it will even work under Windows 98 (do people still use that?).  The installer is just over 20 MB, and installation is pretty painless, but most modern computers out there will have this installed already.  Upgrading from .NET 1.x to 2.0 is usually pretty smooth.

This is also a great version of the framework to choose if you’ll be supporting Mono (which allows your code to run on a variety of devices and operating systems, including Linux and MacOS).

When to use:

  • Your application must support Windows 98, Windows ME, or Windows 2000.
  • You want to avoid requiring .NET Framework updates as part of your deployment process as much as reasonably possible.
  • Maintaining applications built on .NET 2.0 or earlier.
  • Your application must run under Mono.

When to upgrade:

  • Active development continues, and you want access to features available only in newer versions of .NET.

.NET 3.0

This is where things start to get ugly.  .NET 3.0 is actually not a full release of the .NET Framework.  It’s really just .NET 2.0 plus some new technologies thrown in (WPF, WCF, WF and a few other oddities).  This version was included with Windows Vista, but was never really popular with developers.  Unless you know exactly why this is the version you need, you should avoid this one.  It’s just… weird.

When to use:

  • You require features not available in .NET 2.0, AND your application must not require Framework updates, AND your application will only run on Windows Vista or newer.
  • Maintaining applications built on .NET 3.0.

When to upgrade:

  • Now, if active development continues (unless you really know why you’re using 3.0).

.NET 3.5

The Beast.  I really hate this release.  This version continues the weird existence of 3.0.  It’s really just good old .NET 2.0, plus a bunch of changes and additions.  As a developer, there’s a lot of new stuff here since 2.0 (LINQ is introduced, WCF and WPF are a bit more usable, ASP.NET includes AJAX support, and there are a bunch of other new toys and language improvements to play with).  But administrators have learned to hate this release.  The installer is over 230 MB, can take HOURS to run, and often requires several reboots.  Automated deployment is an absolute joke; it’s probably easier to upgrade the entire OS than get this release out over group policy (see http://msdn.microsoft.com/en-us/library/cc160717(VS.90).aspx, and check out the bitching in the comments).  I was involved in an upgrade project where updating one single server turned into an overnight ordeal, and pushing updates through group policy, WSUS, or any other modern management software was abandoned in favour of walking around to each and every machine.

This was the latest version of .NET for several years, and it presented a real dilemma for developers: a) stick with the tried and true .NET 2.0 and make do without any enhancements introduced since 2007, b) move to 3.5 and deal with the endless problems associated with the upgrade, deployment, and support processes, hoping the next version wouldn’t be even worse, or c) abandon all hope, give up on .NET, and move to a different development platform.  I struggled with this dilemma for a couple years myself – and don’t forget, this was the Windows Vista era.  Microsoft seemed to be losing ground on all fronts, alternatives looked better than ever, and the future was really tough to call.  I spent serious time playing around with alternatives to .NET, and decided I’d give Microsoft one more release to make things right.  If they didn’t, I would have to start moving away from Microsoft technologies.

As I said: this one is The Beast.

When to use:

  • You require features not available in .NET 2.0, AND your application must not require Framework updates, AND your application will only run on Windows 7.
  • Maintaining applications built on .NET 3.5.

When to upgrade:

  • Now, if active development continues (unless you really know what you’re doing and you don’t care about the pain you cause your users and administrators).

.NET 3.5 Client Profile

This was an attempt to deal with the horrific 3.5 framework size and updating process.  The Client Profile is a subset of .NET that includes just the functionality typically required for client applications, and does not include any server functionality.  Unfortunately, it doesn’t do much to solve the original problems, and brings a new range of pesky little quirks.  Also, there’s a good chance you’ll run into a situation in the middle of development that requires a feature not available under the Client Profile.  I generally advise ignoring this one.

When to use:

  • You must use .NET 3.5, and are certain you require only the reduced functionality included in this release.

When to upgrade:

  • Now, if active development continues.
  • Upgrading to the full version of .NET 3.5 is usually about three clicks, so feel free to do this if you run into a Client Profile-specific problem.

.NET 4.0

The latest, the greatest, and a long-overdue upgrade everybody should get behind.  This is the first true update to the CLR since .NET 2.0 was released.  It includes all the developer magic released in 3.5, adds more toys and polish, brings some very welcome language improvements, improves performance and security, makes WCF usable, makes WPF almost bearable, and generally makes life much happier for everyone.  Use it.  Love it.  Preach it.

The full installer for .NET 4.0 is less than 50 MB, and there’s also a web installer that will download just the required components.  Installation is generally pretty painless, but can occasionally require a reboot.  You can also get this update through Windows Update or WSUS.  Side-by-side installation with previous versions works, and works well.
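Since side-by-side behaviour trips people up, here’s a toy sketch of which installed frameworks satisfy which apps.  It’s purely illustrative (Python, and the rule below is a simplification, not real detection code); the version strings mimic the subkeys you’d find under HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP on a real machine:

```python
# Toy model of .NET side-by-side versioning -- not real registry detection.
FAMILY = {"2.0": "2.0", "3.0": "2.0", "3.5": "2.0", "4.0": "4.0"}
ORDER = ["2.0", "3.0", "3.5", "4.0"]

def can_run(required, installed):
    """True if an app targeting `required` will run with the given installs.

    3.0 and 3.5 are layered on top of the 2.0 CLR, so a later 2.0-family
    install satisfies an earlier 2.0-family app; 4.0 has its own CLR and
    runs side by side, so it does NOT satisfy a 3.5 app.
    """
    candidates = [v for v in installed
                  if v in ORDER and FAMILY[v] == FAMILY[required]]
    return any(ORDER.index(v) >= ORDER.index(required) for v in candidates)
```

In other words: a box with only 4.0 on it still can’t run your 3.5 app, which is exactly why side-by-side installs matter.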

When to use:

  • You don’t need to support pre-WinXP machines.
  • You don’t mind requiring your users to install a (simple and easy) framework update.

When to upgrade:

  • Not for close to a decade at least, I’m guessing.  This is the one to go with if you hear people start to talk about ‘Future Proofing’.

It’s not always a good idea to change tools in the middle of a project, so depending on your constraints, you might not be able to make the leap right away.  But upgrading between .NET Framework versions is usually easy, and .NET 4.0 is well worth it.  Do note that your clients will need to be running at least Windows XP or Server 2003 (with certain service pack requirements).

.NET 4.0 Client Profile

The Client Profile is also available under .NET 4.0, although the installation package is only about 10MB smaller.  I don’t really see the point to this, but it’s there if you want it.  It might be wise to start development under the Client Profile so you have the option to go both ways, and then move to the full version of .NET 4.0 if the need arises.  Or, you could just ignore it.

When to use:

  • Hell if I know.

When to upgrade:

  • You need something not available in the Client Profile.  Luckily, this is still a three-click task.

Of course, if you have a specific scenario that you think calls for a different version than this guide might suggest, go nuts.  Just be sure you know what you’re getting yourself into, and don’t forget: you might finish developing your application, but you’ll never finish supporting it, so be sure you consider deployment and security as part of your selection.

Quick Fix: “Find in Files” Button is Disabled

Okay, so this is a pretty simple one.  Perhaps I should have figured it out way, way faster than I did, and perhaps nobody else will ever have trouble with this, but I never promised to be helpful.  Or did I?

Anyway, here’s a pretty common scenario (around here, at least):

  1. Fire up Visual Studio 2010.
  2. Hit CTRL + SHIFT + F to bring up the Find and Replace dialog in “Find In Files” mode.
  3. Type a keyword that will bring up the area in your project you want to work with.
  4. Hit Enter.
  5. Wait.
  6. Finally notice that nothing is happening.
  7. Hit Enter again.
  8. And again.
  9. Mumble “What the hell…?”
  10. Reach for your mouse.
  11. Notice the “Find All” button is disabled.
  12. Say “What the hell?” a bit louder.
  13. Click the button anyway.  (Nothing happens.)
  14. Stare at monitor with angry / confused expression.


Here’s the problem: in Visual Studio 2010, the “Find All” button isn’t enabled until you’ve opened a text file of some description.  Once this happens, it will stay enabled until you close Visual Studio, even if you don’t have any documents open.  Yeah, it’s a bug.

Here’s a workaround:

  1. Open any text document (code file, XML file, whatever).
  2. Hit CTRL + SHIFT + F to re-open the Find and Replace dialog.

If you’re looking for a quick keyboard fix, try CTRL + N, ENTER, CTRL + SHIFT + F, CTRL + F4.

And please, if this helps you, leave a comment and let me know.  I’d really like to hear that I’m not the only person this bothers.

Visual Studio is Getting Expensive

Microsoft has made a lot of mistakes in its day, but one thing it’s always done right is to treat developers like royalty.  Giving developers really compelling reasons to choose Microsoft ensures that enterprises and consumers keep choosing Microsoft too, because that’s where all the programs are.

One of the best ways Microsoft does this is with Visual Studio.  It is the best development suite on the planet, bar none.  It lets developers make better products in less time, and it makes them not hate their jobs.  It keeps programming ‘fun’.  And traditionally, Microsoft has practically given this away.  Sure, if you walk into a store and look for a shrink-wrapped copy of Visual Studio, it’s pricy.  But nobody does that.  Through the Empower program, MSDN subscriptions, and more, Microsoft has kept Visual Studio very affordable, and takes off all the restrictions placed on mere mortals.  Which is good: developers are probably the least likely bunch of people to pirate software, and they don’t have time to worry about things like licenses and product activation.  They’re not using these products.  They’re building on them so other people can use them.

But this is changing.

Now, developers have to choose to either live with an ‘inferior’ version of Visual Studio, or pick which ‘Team’ edition to go with.  I’m sure the marketing department was really proud of the work they did identifying their market segments, but you know what?  That doesn’t work with developers.  Am I a Software Architect?  A Database Developer?  A Test Engineer?  It really depends on which day it is, but most often, I’m all of these.  And I really don’t like having to choose which features I want to live without, because this is exactly what is happening here.

Sure, I could go for the uber-premium ‘Team Suite’ edition, which does have it all.  But that costs many, many thousands, which is well beyond what any mere mortal can afford.  In fact, it’s a pretty good part of an annual salary most places in the world.  Visual Studio is great, but it ain’t that great.

And worse: the usual channels that developers used to avoid paying retail are slowly being closed, or at least weakened: most of these offerings no longer come with the top-tier edition.  More and more software is unavailable to developers through MSDN subscriptions (such as the Expression products), and often, developers can’t even get into beta programs.  I develop solutions that use Microsoft Office every day, and I didn’t see the web-enabled versions of these until public release.

All of this is happening at a time when the alternatives are getting harder and harder to ignore.  Macs are becoming a significant market segment again, much of the Linux world has rallied behind the fantastic Ubuntu distribution, and the development tools for non-Microsoft platforms are getting pretty damn good.  Sure, I’d rather use Visual Studio.  But by the time I’m spending more on Visual Studio than on rent, I think I can probably learn to live with Eclipse.  And if I do that, my products probably won’t require Windows anymore.  And if they don’t, neither will my customers.

I really hope Microsoft wakes up here.  These higher prices probably look good on a balance sheet; I’m sure the developer tools division at Microsoft is pulling in huge amounts of cash.  But this is a strategically damaging way of increasing revenue: you get a relatively small, yet concrete and immediate, gain in one corner, but you’re slowly, surely, weakening your entire foundation.

Microsoft: give us developers a break.  Try going in the other direction: give us more for less.  We’ll respond in kind.

Configuring Windows 7 AppLocker

Today, we have decided to post a video that will briefly go over how to use Windows 7 AppLocker. This is a great new Windows 7 feature that will make securing individual machines a lot easier.

Click below to learn more! Since the video is small, you will want to follow along on your own computer to see the settings better.

Offline Installer for Windows Live Essentials

I constantly find myself amazed at just how good some of the free applications included with Windows Live Essentials are. I particularly like Movie Maker, which is easily as good as any of Apple’s offerings.

That being said, I am also constantly amazed at how annoying the online installs of some of these free programs can be – especially if you are installing the package on multiple PCs.

However, in the case of Windows Live Essentials, Ed Bott recently published a link on ZDNet to the offline installer for all to use.  Please find the link below:

 Windows Live Essentials Offline

For those of you that have never used this package, I strongly encourage you to check it out. In fact, I have written all of these blog posts in Windows Live Writer, and it is an essential tool for the maintenance of this website.

Enjoy the link folks!

Working with the Auto-Complete Cache

“Fire!!” “Earthquake!!” “Tornado!!” – all of these desperate screams for help pale in comparison to the shriek heard from end users after getting a new mail profile. As the words “Hey, my contacts are gone!!” come bellowing down the hall, many techies simply feel like packing up, retiring and heading for a more rewarding career – perhaps that of a trash collector or road sweeper. Truly, the world has come to an end if [insert_name_annoying_user_here] has lost his contacts!

Or, maybe not. While it is true that few things irk users more, or are more readily noticed, than a missing auto-complete cache, the file itself is fairly easily managed. With a little bit of extra work, you will be able to avoid many of the pitfalls that plague IT administrators after a profile switch, and you will also be able to help end users clean up or reset their cache completely.

Before we get going, however, it is worth noting that the auto-complete cache is simply that – a cache. Microsoft offers very little in the way of tools for editing it, and sees it as more of a temporary data repository than some sort of proper database. Don’t expect to get stellar support from Microsoft should it become corrupt – the official line is that all addresses should be kept in your address book or Outlook contacts. As a matter of principle, I must admit that I too agree with this advice.

Nevertheless, users expect their cache to be manageable so let’s dive in and look at what can be done with this.

The cache itself is implemented as a file with the .NK2 extension, named after your Outlook profile. By default, this name is likely going to be “outlook.nk2”, but be aware of other names. In Vista and Windows 7, you will find it located at:

“C:\Users\UserName\AppData\Roaming\Microsoft\Outlook”

In Windows XP, you will find it at:

“C:\Documents and Settings\UserName\Application Data\Microsoft\Outlook”

In order to see this file, you will have to ensure that you have enabled the display of hidden files and folders.  (Learn how: Show Hidden Files)


So, once you have the directory open, it is likely that you will not see an auto-complete cache in that folder. Outlook only creates this after an email is sent and Outlook is closed and re-opened, so go ahead and send a test message to yourself, or whoever you see fit. This will cause an entry to be written to the .NK2 when Outlook is closed and reopened. After doing this, ensure that Outlook remains closed as having Outlook open will put a lock on this file and prevent you from renaming it.    

Now, take note of the name of the new .NK2 file. Find the cache file from the old profile and move it into the new directory. Rename the newly created file to profilename.bak, then rename the old .NK2 file to the exact name of the newly created one. Finally, reopen Outlook and you should have all of the old auto-complete entries available.
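If you do this often, the shuffle above can be scripted. Here’s a rough sketch in Python – the function and parameter names are my own inventions for illustration, so run it with Outlook closed and test on copies of the files first:

```python
import shutil
from pathlib import Path

def swap_nk2(outlook_dir, old_cache, new_cache_name):
    """Swap a freshly created .NK2 cache for one saved from an old profile.

    outlook_dir is the Outlook data folder, old_cache is the path to the
    cache file from the old profile, and new_cache_name is the file name
    of the cache Outlook just created for the new profile.  Outlook must
    be closed before running this, or the file will be locked.
    """
    outlook_dir = Path(outlook_dir)
    new_path = outlook_dir / new_cache_name
    # Keep the new (mostly empty) cache around as profilename.bak, just in case.
    new_path.rename(new_path.with_suffix(".bak"))
    # Bring the old cache in under the exact name Outlook expects.
    shutil.copy2(old_cache, new_path)
    return new_path
```

The same steps as the prose version: park the new cache as a .bak, then drop the old one in under the name Outlook is looking for.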

Now, given this information, one of the easiest ways of creating a brand new .NK2 cache, should the old one be corrupt, is to simply follow the procedure above and rename the corrupt .NK2 file out of the way.

Another useful tidbit: individual entries in an auto-complete cache can be deleted by simply hovering over the entry in the drop-down and pressing the Delete key while it is highlighted.


In Outlook 2010, Microsoft has also added a red X that you can click to delete an entry. You can also get a brand new cache in Outlook 2010 by using the Outlook.exe /CleanAutoCompleteCache switch. Simply paste this into the Run box and you will end up with a brand new cache.

If you need more extensive editing capabilities for the .NK2 cache, there are several free and inexpensive editors out there that will let you import and correct entries through an easy-to-use interface.

So don’t worry about finding a pair of overalls for your new career just yet; there really are ways of dealing with the auto-complete cache.  I hope this little tutorial helps!


Hosting and the Cloud

Hello folks.

It really seems like an eternity since I updated this blog. I have been busy implementing some projects at work, and Paul spent a fair chunk of time gallivanting around Europe and South America.  While I am sure that Paul has had a good chance to brush up on his French and Spanish, I have spent a good chunk of my time brushing up on DFS logs and learning more about The Cloud.

Much has been written about the cloud, and both the media and the big players in the IT industry seem to be having a virtual orgy espousing the virtues of server-less IT environments. I jumped into this love fest at first too, but that was with great naivety – I was but a virgin in this game.

I know better now, and let me tell you – don’t party in the cloud until you are sure you know what you are getting yourself into. While many of the benefits that big players like Microsoft and Google promote are true, they often fail to mention how difficult the actual logistics of moving into the cloud can be.  There are some real gotchas to watch out for. Put your party hats on, boys.

Let’s take the example of a hosted Exchange environment and dissect it a little. It seems like a great idea: the ability to take your email anywhere; no need for a VPN; 99.9 percent uptime; and no menacing Exchange limits or management to perform. All of this is true, but the question is, how do you get the data there?

Many organizations have spent years building up complex linkages in their Exchange environment, and have enormous amounts of email behind all of that. There are public folders, contacts, resource mailboxes and a plethora of other oddities.  Quite simply, users often don’t realize how complex their environment is and will easily consent to changes to it without actually realizing what the change itself means. This leads to disappointment when the new environment is rolled out, and a lot of headaches for IT support staff who were assured that everything was “kosher” before the transition.

Then, there are the logistics of actually moving data in the first place. The tools that exist for importing and exporting mail are quite labour intensive and demand that the user be pulled away from their computer. It can take literally hundreds of hours to export .pst files, re-import them, and then have that data spool down again and rebuild a user’s cached mail file. In theory, there would seem to be many tools at one’s disposal for doing this, but at the end of the day these tools prove utterly unreliable for the large mail stores of today’s users. For instance, it is not implausible to run across mailboxes that are over 15 GB in size today. How do you even manage getting such a mailbox into the cloud? How do you deal with exporting it? All of these operations actually require exporting mail to a .pst file. Exmerge is of little use, since it breaks the files into 2 GB chunks. So, you are left with exporting this out of Outlook and babysitting it to ensure that it completes. Then you have to physically open the user’s new mailbox in Outlook and import that data back in. Given average Internet speeds for small businesses, you are looking at a 4 or 5 day process just for one mailbox. The logistics are a nightmare. This definitely isn’t the no-strings-attached fun you were promised on Craigslist.
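To put some rough numbers behind that claim, here’s a quick back-of-envelope calculation (Python; the line speeds are assumptions for illustration only, and this ignores export time and babysitting entirely):

```python
def mailbox_move_days(size_gb, up_mbps, down_mbps):
    """Rough days to push a mailbox up to a hosted provider and then
    spool it back down into the user's rebuilt cached mail file.

    Speeds are illustrative assumptions; protocol overhead, export
    time, and retries are all ignored.
    """
    megabits = size_gb * 1024 * 8
    seconds = megabits / up_mbps + megabits / down_mbps
    return seconds / 86400  # convert seconds to days

# A 15 GB mailbox on an assumed small-business line (1 Mbps up, 5 Mbps
# down) spends well over a day and a half just moving bits -- before any
# export, import, or re-spooling overhead is even counted.
```

Halve that upload speed and you’re past three days on transfer alone, which is how a single mailbox turns into the 4 or 5 day ordeal described above.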

Now, after all of that data is finally up in the cloud, what happens when it all breaks? You are completely at the mercy of some company in New York that has very little accountability to the end customer. I guarantee you that they will not feel the same sense of urgency that your own personal systems administrator feels when Exchange has issues. Sure, it is more reliable – but let’s be honest: everybody has some downtime.

Anyway, this was all a bit of a ramble, but I guess my point in all of this is to make sure you carefully evaluate whether a hosted or in-the-cloud solution is really right for your business. If you do decide to take the plunge, carefully audit and, where possible, purge the amount of data that will need to be uploaded. Then ensure that your users understand the transition plan 100 percent, and make sure that the expectations of the outcome are crystal clear. Do not promise that the environment will look identical to the in-house Exchange; just promise that there will be a way of accessing the data that is needed in a timely fashion. Both your users and your IT team will reap the rewards of this careful planning and management of expectations.

Thanks for the read!



My SBS CALs Have Disappeared


So here I ask: is there anything more intensely irritating than a boy dragging his nails across a dry, dusty chalkboard? Probably not, but I must admit, every time I open up the licence manager on an SBS server and look at CALs, I get that same feeling of impending rage.

Let’s just say it: SBS 2003 CALs were a total pain and completely useless.  Yes, I understand that Microsoft wanted to protect their best interests and needed to make sure that the SBS package wasn’t used inappropriately, but surely it did not warrant a system so maddening.

So, with my thoughts on this known, you can imagine how I felt when one of my client’s SBS Servers for whatever reason decided to drop its licence database.

This shouldn’t have been such a big deal, except for the fact that nobody could find any records regarding these CALs, and even calling Microsoft was futile. The previous IT firm did not document anything, and it was not completely clear how they even purchased the CALs in the first place.

After discussing it with Microsoft and being told that they were “escalating” the issue, I set out to see if there was a technical solution to my problem.  I poked around and found that the licence database itself is implemented in two tiny files. These are: autolicstr.cpa and licstr.cpa.

Apparently, there is an automatic backup created of licstr.cpa, which is the main licence database.  The easiest way of fixing this is to copy these two files out of the C:\Windows\System32 directory, back them up to a file using NTBackup, and then use the restore wizard in the SBS licence manager to recover them from that file.
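If you’d rather script the copy step so it happens regularly, here’s a small sketch in Python. The function name and directory parameters are my own inventions for illustration; only the two file names come from the post, and on a real SBS box the source would be C:\Windows\System32:

```python
import shutil
from pathlib import Path

# The two files that make up the SBS licence database.
LICENCE_FILES = ("autolicstr.cpa", "licstr.cpa")

def backup_licence_db(source_dir, backup_dir):
    """Copy the SBS licence database files somewhere safe.

    Returns the names of the files that were actually found and copied,
    so you can tell if one of them was missing.
    """
    backup = Path(backup_dir)
    backup.mkdir(parents=True, exist_ok=True)
    copied = []
    for name in LICENCE_FILES:
        src = Path(source_dir) / name
        if src.exists():
            shutil.copy2(src, backup / name)
            copied.append(name)
    return copied
```

A scheduled copy like this, alongside the licence manager’s own backup function, means you’re never reconstructing CALs from memory again.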

If you get an error message that the CALs are no longer usable, simply write down the keys that are shown in the licence manager, re-enter them, and phone Microsoft to re-validate. Naturally, you should take the time to properly document the CALs and back them up using the backup licence function in the licence manager, so that you never have to deal with this again.

This little trick sure made my day, and I hope it helps some of you as well!


Outlook 2007 Inbox Renamed

We’ve been a little shy on updates lately, folks. It has been a busy time of year, Paul is down in South America having the time of his life, and spring is fast approaching. Still, none of this excuses our lack of writing, and it definitely doesn’t excuse the brevity of this post.

Nonetheless, this is all I have to offer for now.

A few days ago, one of my clients called me up and said that his Outlook 2007 inbox had suddenly been renamed to image001.jpg.  I have seen a lot of strange things, but this I really had to see.

As my LogMeIn session connected and I focussed on the tiny text on my laptop screen, there it was: image001.jpg. This was indeed bizarre, and I wasn’t really sure where to begin. The issue, in fact, doesn’t affect the end user in any way, nor does it impede the normal flow of mail. But still, it is an annoyance.

Historically, I have had great luck with the various switches that can be used to launch Outlook. So, I started digging in a bit, and sure enough I found a switch: /resetfoldernames.

I was a little worried about running some random switch that I knew very little about, but again, past experience with these Outlook commands has been really good, so away I went.  Five seconds later, the problem was gone.

So, for those of you who have never used an Outlook switch, here is the procedure: open the Run dialog box and enter the following command.

Outlook.exe /resetfoldernames

I always like to start this with Outlook closed, so that it opens one new instance of the program.

If anyone knows why this happens, I would love to hear about it. Please post your comments below.


Fixing RSS Feeds in Outlook 2010 Beta 2

I love RSS.  I can't believe I'm saying this, because for years, I never really 'got' it.  I knew how it worked technically, I saw how it could possibly be useful, but I just never really cared enough to actually use it, and I couldn't imagine why anyone else ever would, either.

That is, until Outlook added RSS support.

For me, Outlook is always running.  I manage half my life through that program, in fact (the other half being OneNote, which I like even better).  And as I’ve mentioned before, I really dislike the current state of the web, where each web site is more like a program, with its own interface and quirks and learning curve, not to mention bugs, advertisements, distractions, and security issues.  So if I can take the content from the sites I love and read that content in a program I already know, I’m a happy man.

Outlook lets me do just that:


By default, Outlook creates a folder for each feed to keep things organized, but I’ve changed that to have everything delivered into my RSS Feeds folder.  Outlook keeps every article around until I choose to delete it, shows me which I’ve read, and lets me manage them just like any other Outlook item.  I can even forward interesting stories to friends just like an e-mail.  You can also see that Outlook keeps an archive of all our Slick IT articles here, which is a very easy way of having a complete backup and searchable offline reference of all our content.  Happily, the feeds I choose to read are saved in my .PST file, so I don’t have to re-add them every time I rebuild my machine or reinstall Outlook.

The Problem

When I upgraded to Outlook 2010 Beta 2, however, the magic stopped.  Everything looked okay… my RSS Feeds folder was still there, the articles I hadn’t yet read were working just fine, but I wouldn’t get any new articles delivered.  It wouldn’t sync.  I’d hit Send / Receive all day, but the RSS Feeds just wouldn’t update.  I even checked under Accounts, where they seem to show up:


But despite this, Outlook wouldn’t actually check them.

The Solution

It turns out to be quite an easy fix.  Here’s what worked for me:

  1. Using Internet Explorer, add a new feed.  If you’re using IE now, just click the View Feeds button on the toolbar and then click “Subscribe to this feed”:
  2. Under the File menu, choose Open, and then click Import:
  3. Choose “Import RSS Feeds from the Common Feed List” and click Next:
  4. Check the feed you just added and click Next:
  5. Click Finish, and then try another Send / Receive.  You should find all your old feeds now begin to synchronize.  You’ll only have to do this once, and you can now remove the feed you just added.  But, if you just added Slick IT, why would you?

Hope this helps!

The Fall of DomainPeople

Like many of you, I manage a few domains.  Mostly, these are on behalf of clients, and as such, great support and reliability are more important to me than rock-bottom prices.  I want to keep all my domains with the same registrar, under one account, and since many of my domains are .ca, this somewhat limits my choice in registrars.

Well, let me summarize this lengthy article with a few choice words of wisdom:

Never use DomainPeople for anything.  If you’re using them now, run away.

For years, I’ve used DomainPeople.  They have been around forever, they promise 24/7 tech support that’s easy to reach (no waiting on hold), and they offer a good selection of “value-added” services, such as DNS hosting, e-mail forwarding, and so on.  Their prices might not be the cheapest, but they’re not quite as astronomical as a couple others you probably know about.  And, keep in mind, this goes back a few years, to a time when GoDaddy wasn’t an option, Network Solutions didn’t do .ca domains (or perhaps was even more expensive than it is today), and generally, the world of domain registration was a very different place.


By the time 2009 rolled around, DomainPeople’s site and control panel were looking a bit dated.  There had been essentially no change for years in either the function or the design of their services.  But that’s okay.  It all worked.  Sure, they weren’t the most dynamic and aggressively-growing company out there, but I didn’t want that.  I just wanted a good, stable registrar.  I didn’t mind putting up with a quirky interface from the 90’s so long as they delivered where it counts.  And for years, they did.  Mostly.  And when they didn’t, it was easy to reach them by phone and have everything sorted out.

Until everything changed.

The Full Story

Let’s do this chronologically.

January 11, 2010: The first sign of change arrives.  I receive an e-mail (well, five, actually… but DomainPeople has never been particularly organized when it comes to that sort of thing) describing some upcoming changes to their control panel.  Here’s essentially what they had to say:

  • Within two weeks, your account will be automatically upgraded.
  • You will gain access to new features.
  • Your username and password will not change.
  • An auto-renew feature has been added.  If you choose to keep a card on file with us, we’ll renew any domain automatically before it expires.
  • You will no longer be able to access the control panel on a per-domain basis using the domain’s password.  If you don’t want all your domains to be accessible by your master account password only, you must remove them from your master account within the next two days.

Okay, so fair enough.  New features are coming, and there’s nothing I need to do.  Wonderful!  Now, in the back of my mind, I was thinking “gee, it’s a good thing I don’t need to provide my clients with access to their domain control panel, because then I’d have only two days to set up new accounts for each and every domain, provide my clients with the new login information, and then lose the ability to manage the domains myself without keeping track of dozens of user accounts, hoping none of my clients changed their passwords”.  This should have set off a few alarms… I’m sure many of their customers were caught off-guard by this, and I doubt many of them actually responded to this in the two days DomainPeople gave.  But it didn’t affect me, there was nothing I needed to do, and I was looking forward to having a snazzy new interface with new features, so I didn’t much care.

Friday, January 15, 2010: I receive close to 10 e-mails from DomainPeople, all of which are notices of pending auto-renewals.  I also receive two notices of auto-renewal failures.  This is all a bit odd, but I figure that since I haven’t provided a credit card, this is just their auto-renewal system trying its best to do its thing.  It’s a bit odd that I was given no notice of pending auto-renewal for the two failed attempts, but that’s okay.  So far, none of this means anything; it’s just a bit of weird e-mail related to a feature I’m not using.

Saturday, January 16, 2010: I receive two notices of pending domain expiry.  This is weird… these are different domains.  I would have expected more auto-renewal notices.  So do domains renew automatically, or not?  Still, I put this down to glitches in the deployment of their new system.  Again, none of it affected me at the moment.

Sunday, January 17, 2010: Another five notices of auto-renewal failures.  This is getting a bit ridiculous.  By the way, at this time I was on a small island off the west coast, and due to a recent windstorm, had no power or Internet.  But this is all okay, because I don’t need to do anything.  I thought.

Monday, January 18, 2010: The calls start.  I’m not quite sure why so many of my clients have no e-mail or websites working at the moment, but my first thought is that it must be related to the power outage.  What weird, vital, forgotten thing do I have running through my workstation?  It’s tough to diagnose, of course, when all you have is a cell phone (thank God for Windows Mobile, and no, there is no iPhone App for that), but by that afternoon, I was eventually able to trace things back to a DNS issue.  Well, for the clients having trouble, I use DomainPeople for their DNS.  So, I call them.

They answer quickly.  This is a good sign.  I explain the problem, and they admit that yes, there was an issue transferring zone files to their new system, but there are technicians working on it right now, and the problem should be fixed in a couple hours.  So, I leave them to do their thing.  The notices of renewal failures and upcoming renewals continue.

Tuesday, January 19, 2010: Still no fix.  And still no power, unfortunately.  So, I call again.  I’m given pretty much the same story.  They’re aware of the issue, they’re working on it, and it should be fixed soon.  I’m also asked for a list of ‘high priority’ domains, which they promise to take care of right away.  After asking a few more questions, I find out that these issues would have started on Friday, and yes, many other customers are affected.  I’m a bit discouraged at this point… four days of downtime on a simple DNS server, and no estimated time of resolution?  This is not the DomainPeople I had grown to love and trust.  Still, shit happens, and it sounds like they’re doing everything they can to fix it, so I hang up feeling pretty lucky I’m not a DomainPeople admin today.

By the end of that day, I realize how much has gone wrong: no DNS for any of the domains they serve, no e-mail forwarding, nothing.  Web sites are down, e-mail is broken, and still, there’s no estimated time of resolution.

Wednesday, January 20, 2010: Still broken.  And another surprise: a bunch of my domains have auto-renewed!  How odd. I call tech support, and learn that there is a senior technician working on the issue, but this person will not accept my call or provide any further details to Tier 1 tech support.  No estimated time of resolution is available.  Apparently, I should just hang up, wait, and eventually, it will be fixed.

I inquire about the domain auto-renewals.  Well, it turns out that one of my clients had called DomainPeople directly, and was told the issue was caused by a domain name about to expire (this was entirely false), and if she didn’t renew the domain immediately, it would be gone forever.  In a panic, my client provided her credit card to renew the affected domain name.  Big mistake.  This didn’t fix the problem, of course, but it did allow the auto-renewal system to renew many domain names to her credit card.  The tech support representative was not sympathetic, but did suggest I call their billing department.

Billing agreed with my assessment, but didn’t see anything wrong.  I was told that the auto-renew process was explained in an e-mail, and that the system was working properly.  I should have turned off auto-renew if I didn’t want this to happen.  Well, here are the problems with that:

  • Notices of pending auto-renewals were first given only on Friday, the 15th.  This is, by the way, the same day everything went down and the first auto-renewal failure notices started to come in.  No credit card was on file at this time.
  • DomainPeople did not have authorization from me – the accountholder and domain registrant – to make changes to the billing configuration of my account.
  • DomainPeople did not have authorization from the cardholder to charge the credit card for the amounts or services billed.

Eventually, the billing representative was able to see this.  The new system, however, has no way to remove a credit card once it has been added, and the charges could not simply be removed; they could only be reversed and then applied to a different credit card.

So, at this point, DomainPeople has told my client the downtime was due to an expired domain (entirely untrue), has charged my client (without authorization) for the renewal of many other domains (none of which were immediately about to expire), and has still not fixed the technical issues.  No one is able to offer any additional help.

I then track down a manager at DomainPeople who listens patiently to my tale of woe, takes a bunch of information, and promises to have both the technical and billing issues resolved as soon as he can make it happen.  He promises to stay in touch with me, and tells me to expect a call from him the next morning.

Thursday, January 21, 2010: Still broken.  And no call from tech support or the manager I was working with.  I do, however, have a message from the billing department advising me that auto-renew has been turned off, they’re working on the refund, and I should call to provide a bit more information.  I do, and am told that they will be able to process the reversal in the next day or so.

However, the technical issues still exist.  And worse, the tech support department has no record of anything even being wrong.  Nobody is working on this.  The senior technician responsible for this problem is ‘not available’.  The manager I spoke with earlier is ‘not in today’.  And nobody knows anything.  I’m encouraged to open a ticket using their web site.

Again, I battle my way to a senior staff member who seems to be in a position to help things.  He is very friendly, understanding, and promises to get things resolved immediately.  He takes my contact information, a list of the domains in question, the support ticket numbers I have been provided, and other details.  He then tells me he will call back as soon as he’s able to get things resolved.

Friday, January 22, 2010: Still broken.  I’ve heard back from billing, and they are pleased to say they were able to reverse the charges.

But not a word from tech support or either of the higher level people I was working with.  I’m told to sit tight, that technicians are working on this, and I will be called.

Saturday, January 23, 2010: Still broken.  Still no word.  I’m out of town and unable to do much except call to see what’s happening.  I don’t learn much.  This marks one week since my clients had working e-mail.  I do not look good to them.

Sunday, January 24, 2010: Still broken.  Still no word.  Another call, but no more answers.  Only empty promises.  Nobody I’ve spoken with earlier is around, and nobody else seems to be in a position to help.

Monday, January 25, 2010: Still broken.  Still no word.  Things are getting really critical at this point.  I’m starting to ask myself how I’ve let things drag on for over a week.  I could have transferred the domains to a different registrar twice over by now, at least.

I call tech support.  No changes have been made.  They have no record of any work being done on this.  The tickets I provide have all been marked as resolved, one by the original senior-level technician working on my account.  This technician still refuses to speak with me directly, and does not return the messages I have left for him.  Neither of the two managers I spoke with are available, nor have they returned my calls.  I leave additional messages, explaining that I cannot wait any longer and that this issue, one way or another, must be resolved.  I am yet again promised immediate action.

I have also now come up with a strategy for moving away from DomainPeople.  It’s not pretty, or fast, but it will work.

Tuesday, January 26, 2010: Still broken.  Still no word.  At this point, I’m done with DomainPeople.  It’s been over 10 days since this problem arose, and I am no closer to fixing anything than when I started.  I’m going to reconfigure my DNS elsewhere and point my domains to these new nameservers.  I’ll also reconfigure e-mail forwarding, avoiding DomainPeople’s services and servers entirely.  This will all take time, of course, but a couple more days of downtime followed by a guaranteed resolution that I can control looks much better than anything DomainPeople could offer at this point.  There’s only one thing that could break this: in the unlikely event that someone at DomainPeople is still working on this, they might see the changes I make to my nameservers and undo them.

So, I call.  Nobody knows anything about my issues, and nobody cares.  All tickets are closed.  There is no one able to help.  Nothing.

I leave a message, stating that no one at DomainPeople is to edit my nameservers, or do anything else with any of my domains, without first speaking with me directly over the phone.  I’m assured this will be the case, but I don’t think the representative actually does anything.

I begin the process of rebuilding DNS records elsewhere.

Wednesday, January 27, 2010: Still broken.  Still no word.  Everything on my end is now set to go, though, and I’m just waiting for the nameserver changes to propagate.

Thursday, January 28, 2010:  All my domains are now working!  This is because I moved all services away from DomainPeople, though… still no word from them, of course.

The Aftermath

If I look back, here’s what I see: for many, many years, I have a great relationship with DomainPeople.  I pay them thousands of dollars, they provide me with everything I need to offer quick and reliable service to my clients.  And then one day, the world explodes.  With no useful warning whatsoever, they break half my domains, charge one of my clients’ credit cards with a long series of unauthorized charges, and offer no assistance whatsoever in fixing anything.  For over a week, they do nothing to fix the problems they have created.  They do not return calls.  Ultimately, I am held hostage by a company who doesn’t care and doesn’t respond, and I am left to my own devices to find a way to escape and restore service to my clients.

It’s important to realize that I was never rude or hostile with anyone at DomainPeople.  Every time I called, it was always easy to find someone to speak with.  When I spoke with managers, they were all very understanding and promised to do everything they could to resolve the matter.  And while all this sounds good, the fact remains: they didn’t fix what they broke.

Hosting domains and running DNS is not hard.  These are fundamental parts of the Internet, designed to withstand nuclear assault.  There’s just about nothing that needs to be done here.  And there’s absolutely no reason why this should have broken.  That it did, and for so long, is reason enough on its own to stay away from DomainPeople.

But to have a large, legitimate company (which charges premium rates for its services in exchange for promised reliability and support) completely ignore a customer like this and not work towards resolving these issues is entirely unacceptable.

So, everyone: be warned!  Go to great lengths to avoid DomainPeople, because you do not want to go through what I did to learn this.

And, of course, if anyone has a recommendation for a new registrar that does .CA domains, offers DNS hosting and e-mail forwarding, and actually helps out when there’s a problem, please leave a comment!

Copyright © 2010 Paul Guenette and Matthew Sleno.