General Ramblings

Or maybe it was thrown away for me, sort of. Regardless, it’s the end of Saturday, and while I guess I could say I got some stuff done, I’m not 100% sure that much of it was worth doing.

My mom came over to help out and watch Phoenix while Jenn worked on her Halloween costume and I got errands done. I definitely appreciate that help because, while I love Phoe with all my heart, she gets to be a handful and it gets hard to really get anything done when you have to watch her, too.

Jenn and I headed over to Jo-Ann, where we both picked up some additional materials for costumes. This is actually where things started to go awry. Jenn had a non-trivial amount of stuff to get and needed help picking things out, while I had like three things to get. This trip lasted close to two hours, all told, which ate up a bit of the day I wasn’t really anticipating getting eaten. Not Jenn’s fault, just that’s how it happened.

Then it was lunchtime, which normally I’ll skip if I’m out or get something on the run, but I had to take Jenn back home so we stopped and got some food, went home and ate… which was another, say, hour down.

At that point Jenn went upstairs to work on her costume and I finally left to get my errands done. I stopped at the dry cleaner to get a shirt cleaned, stopped at the comic store (which was a good point of the day), and then went to a couple of stores to get a little bit more for our costumes. Then - home.

By this time, we’re well into the afternoon. Jenn’s upstairs taking over the only baby-off-limits room up there with her work, so, since I can’t get my costume done, I decide to take care of comic book inventory since I’d been neglecting it.

I love collecting comics, truly. I have a great time reading the fun stories and love the art. There are two problems, though.

First, I don’t get much time to read them anymore. I can’t do it when Phoenix is around, and when I’m home, she’s around. So I get a pretty big backlog of comics to read and end up eating a full day catching up, which makes the reading feel more like a chore than the recreation it really should be.

Second, since the comics are actually worth something, I have them covered by insurance. But in order to keep the insurance up, I have to have a current inventory. I keep mine in ComicsPriceGuide.com. Problem is, it’s sort of a manual inventory process. No bar code scanner or any of that - it’s search for a title, find the issue, click the “add” button, enter some price data in. For each comic. This is where I get that “my stuff owns me” sort of a vibe and I’m not sure what to do about it.

Anyway, I spent a lot of time doing inventory because I bought each of the 52 new DC #1 issues, and with the manual inventory process, looking up and adding each issue takes about a minute. Given I had about 80 or so comics, plus a stop for a glass of water… we’re looking at another two hours of inventory.

Once I finished that, I noticed that I still had a ton of photos I hadn’t filed yet, all sitting on my computer desktop. We took advantage of a Groupon that let us send 1000 photos in to get scanned. It turns out doing this is both a blessing and a curse. It is nice - really nice - to have our photos in digital format. We can look at them, back them up, share them, and so on. However, physical photos don’t scan and magically get the right date metadata embedded. You have to name them, or set that data, or whatever, all by yourself. And if the photos aren’t written on (or imprinted with a timestamp), you realize you really don’t remember much beyond maybe a five-year window around when some photos were taken. That makes filing the photos and/or naming them so you can find them again… non-trivial.

So I work, iteratively, on filing these things so eventually they’ll all be done. I wish I could get Jenn’s help with that, but she’s still got photos sitting on her computer desktop from several months ago that she keeps promising me she’ll get to. So it’s all me. Since I was sitting here, I decided to do a few photos.

After putting them in the right folders on our Windows Home Server, I fired up Picasa so it could auto-discover them and scan for people in the photos, etc. Here’s where the real clusterfuck hit.

As the new photos were being scanned, I noticed that Jenn had two entries in the list of people. What that means is that Picasa thinks there are two different Jennifer Illigs and was splitting the pictures of her between the two. Try as I might, I couldn’t unify them. Every time I tried, other, different pictures would pop into this odd doppelganger contact.

Ignoring that, I saw that Picasa had some suggestions for face tags. It has this nice feature where it “learns” everyone’s face and can give intelligent recommendations on tagging. “Is this Travis Illig?” Usually it’s right.

Anyway, I saw the suggestions and I clicked to accept them. The suggestions were accepted… then reverted back to suggestions again. I tried several times to accept the suggestions to no avail.

This was very weird, so I researched what I could do to fix it. In several places in the help forums, you see that if Picasa is behaving oddly, you’re supposed to uninstall/reinstall so it can rebuild its internal database. Fine. I took a backup of everything and did the uninstall/reinstall.

The first problem I found was that about half of my albums were lost. It appeared to be arbitrary which half, but half. I followed several suggestions on how to restore these from backup, but none of them worked. Every time I tried to restore the albums, Picasa would delete them for me again. Thanks, Picasa. So I had to manually recreate all of the missing albums.

Once I recreated the albums, I noticed that the synchronization with Picasa Web Albums was broken. Figuring all I needed to do was turn it on again, I clicked the button to enable sync… and it turns out this creates a duplicate copy of the web album. Fantastic.

The way you re-attach an album is:

  1. Re-create the album with the exact same name, date, etc.
  2. Put the pictures into the album, ideally in the right order.
  3. Right-click on the album and select “Upload to web albums.”
  4. When the dialog pops up, scroll down to find the existing album that’s already been uploaded. Select that.
  5. Click OK and the upload should happen pretty quickly because Picasa will see the photos are already there.
  6. Now click the sync button and things should sync up right. Should.

Let me tell you the ridiculous amount of trial and error that went into that.

I had Picasa back up and running, I had my albums attached… but now I had facial recognition problems.

Any faces I had marked to be ignored had to be marked ignored again. Thousands and thousands of faces. All over again.

Some people in the pictures it basically forgot. It put “unknown person” for several people who used to be named. All of those had to be reassigned.

And remember the original issues? Where I couldn’t accept suggestions and I couldn’t get Jenn to unify? Still fucking there.

Doing a bunch more searching, I find a help forum where, it turns out, all of this appears to have started recently and is somehow tied to synchronizing albums with Picasa Web Albums. All that work I did to get it re-syncing? Turn that off.

After I turned off synchronizing, sure enough, I could get things filed right. After I had them filed right, I thought I’d be smart and turn synchronization back on. Big mistake. It broke everything again.

That means the last, say, three or four hours of fighting with Picasa were basically all for nothing. I’m still at square one.

And that’s where my Saturday went.

I’m at the “my stuff owns me” point with these photos, too. Kind of, “who cares if we ever look at these pieces of shit ever again?” style. I mean, I know academically that they are important, but my mood is saying, “fuck it.”

Now it’s close to midnight and I’m beat. Jenn went to bed a couple of hours ago. I’m at a stalemate with Picasa so I’m abandoning that for now… and I would love to do something fun before going to bed, but really, I’m so tired. Mentally drained, emotionally drained. If I sat to watch some TV or something I’d fall asleep. I don’t think I have the concentration to read a book or play a game. So I’ll just go to bed, Saturday wasted.

Tomorrow Jenn will be up super early because Phoenix can’t sleep past 5:00a and I just don’t hear her. Once I finally come out of my coma, I’ll be watching her while Jenn continues on her costume all day. Which means tomorrow I won’t really get to do anything relaxing, either.

I guess there goes the weekend.

General Ramblings

A while ago I did a time lapse video of me building the Lego Imperial Shuttle kit and I thought I blogged it, but I guess not. Hmm.

It took about 11 hours to build and I got it down to about three minutes in the film. I’m really pleased with the model - really well done and nice quality. I have it sitting on my desk at home now.

If you want to see it better, open it up full screen in HD.

dotnet, vs

Most of the DXCore/CodeRush plugins I write are Tool Window plugins like CR_Documentor or are plugins you’d bind to a hot key like CR_JoinLines. For Tool Windows, DXCore automatically gives you menu integration…

DXCore Tool Window plugin menu integration

…and for hot key plugins, you don’t need it. But sometimes your plugin isn’t really a tool window, or maybe you need to integrate into a different menu, like the standard “Tools” menu. I’ll show you how to do that.

Before you begin, consider whether you really need this. It’s not hard to add, but if you’re like me, you already have a ton of stuff flooding the various menus, so think twice before adding more. “With great power comes great responsibility,” right?

OK, so you want the top level integration. Let’s do it.

First, I’ll create a standard plugin project and call it TopLevelMenuDemo. If you already have a plugin project, you can add a new plugin to your existing project. The key here is you need a “standard plugin” rather than a “tool window plugin” for this.

The TopLevelMenuDemo plugin in Solution Explorer

Next, I’m going to drag an Action onto my plugin designer from the “DXCore” part of my toolbox. I’ll name it “menuAction” so we know what it’s for.

Select the “menuAction” Action and open the Properties window. You need to set…

  • ActionName - Set this to the name of the action. It won’t be visible, but I usually set this to the text I expect to see in the menu.
  • ButtonText - This is the text you’ll actually see in the menu.
  • CommonMenu - This is the top-level menu to which your item should be added.

For our demo integration, we’ll set “ActionName” and “ButtonText” to “Demo Menu Integration” and we’ll set “CommonMenu” to “DevExpress” so we appear in the top-level DevExpress menu.

Properties on the plugin action.
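
If you prefer to see what that amounts to in code, the designer ends up generating property assignments for the action along these lines. This is just a sketch for illustration - the property names come from the Properties window, but the exact generated code and the CommonMenu enum type are my assumptions and may differ between DXCore versions.

// Rough sketch of designer-generated configuration (not exact; the
// CommonMenu enum type name here is an assumption).
this.menuAction = new DevExpress.CodeRush.Core.Action(this.components);
this.menuAction.ActionName = "Demo Menu Integration";
this.menuAction.ButtonText = "Demo Menu Integration";
this.menuAction.CommonMenu = DevExpress.CodeRush.Menus.VsCommonBar.DevExpress;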

Switch the Properties window over to show events and set an Execute event handler. The Execute handler is what runs when someone clicks your menu item.

Events on the plugin action.

For our demo handler, we’ll just show a quick message box.

private void menuAction_Execute(ExecuteEventArgs ea)
{
  MessageBox.Show("Demo success!");
}

That’s it!

Hit F5 to start debugging. A new instance of Visual Studio will pop up and you should see your menu integration in place.

The plugin menu integration in action.

When you select the menu item, you should see the message box pop up. Success!

The demo success message box.

For the more advanced users… check out the other events you can handle on the action if you want even more control. For example, the PositionMenuButton event gives you more control over the positioning of your menu item within the parent menu. There’s not a lot of documentation out there on this, so you’ll need to experiment a bit, but you can achieve quite a lot.

dotnet, csharp

“Guard” classes - those little “convenience wrappers” around common argument checking and exception throwing. You know what I’m talking about, things like…

public static class Guard
{
  public static void AgainstNull(object value, string parameterName)
  {
    if(value == null)
    {
      throw new ArgumentNullException(parameterName);
    }
  }
}

Then, rather than the if/throw block in your method, you have something like…

public void MyMethod(string theParameter)
{
  Guard.AgainstNull(theParameter, "theParameter");
  // Do the rest of the work...
}

It seems like a good idea, right? Reduce the three lines of if/throw checking to a tiny, fluent-looking one-liner. There are some common reasons people seem to like them:

  • Makes the code tighter/more readable.
  • If you want to add common logging, you can do it in one place.

Both are totally legit. But there are a lot more reasons not to like them, and here are mine:

  1. Guard classes defeat static analysis like FxCop. I like FxCop. I treat it like it’s another set of unit tests that help me to make sure my code behaves consistently. I don’t use all the rules, but most of them are valuable. One of those valuable rules can analyze whether you validated an argument for null prior to sending it to another method. If you wrap that check in a Guard class, FxCop isn’t going to see it - it sees the Guard class validating the argument, but not the caller. FxCop can also validate that the name of the parameter in the exception being thrown matches exactly the name of the real parameter on the method - a lifesaver if you’re doing some refactoring that renames parameters and you forget to fix that. You either have to turn these FxCop rules off or write custom rules that understand your Guard class.
  2. Guard classes become giant validation dumping grounds. How many things can you imagine you need to Guard against? Null values, sure. Maybe strings that are null or empty. Collections that are null or empty. How about ranges? Things like “if this date is in the future?” What else? There are actually a lot of things you can possibly guard against. Unfortunately, unless you’re in a very small team, that means the Guard class quickly becomes hundreds of lines of code doing dozens of different validations that aren’t actually all that common, and there’s no real way to “draw the line” and say “this should be in, but this shouldn’t.”
  3. Guard classes mess up the call stack. The place where the exception gets thrown is now no longer actually the method that should be doing the validation - it’s one level deeper (possibly more if you call Guard methods from other Guard methods).
  4. Guard classes become a single point of failure. If someone messes up or tweaks the logic in one Guard check, it affects literally every method in the whole application. It also means you’d better check the performance in there because it’s totally central.
  5. Guard classes tend to get used in the wrong places. Say you have a check that validates for null, as seen in the example above. That’s great for validating arguments to a method… but what if you read in a configuration value and want to check it for null? Same Guard method? No! The configuration value isn’t an argument, so you shouldn’t throw an ArgumentNullException. Unfortunately, it’s very tempting to go shorthand everywhere and end up throwing the wrong exceptions just because it’s convenient. (See the sketch after this list.)
  6. Guard classes fool your unit test coverage. If you ship the validation of arguments or values off to a Guard class, then suddenly your unit test coverage is 100% whether or not you test the failure scenario of an invalid argument - it passed through the Guard class, so that line got covered. Done! Unless you’re doing strict TDD where you wrote the negative checks up front along with the positive checks, there’s going to be a pretty good chance you’ll forget to add all the negative test cases… and there’s no way to tell if you did or not.
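
To make the contrast concrete, here’s a minimal sketch of the inline approach, using made-up names (ReportGenerator, ReportOutputPath): the argument check throws directly from the method that owns the parameter, so FxCop and the call stack both see the validation where it belongs, and a missing configuration value gets a more appropriate exception than ArgumentNullException.

using System;
using System.Configuration; // requires a reference to System.Configuration.dll

public class ReportGenerator
{
  private readonly string _connectionString;

  public ReportGenerator(string connectionString)
  {
    // Argument check thrown by the method that owns the parameter, so
    // static analysis and the call stack both point at the right frame.
    if (connectionString == null)
    {
      throw new ArgumentNullException("connectionString");
    }
    _connectionString = connectionString;
  }

  public void Generate()
  {
    // Not an argument, so ArgumentNullException would be the wrong type here.
    string outputPath = ConfigurationManager.AppSettings["ReportOutputPath"];
    if (String.IsNullOrEmpty(outputPath))
    {
      throw new InvalidOperationException("The ReportOutputPath setting is missing.");
    }
    // Do the rest of the work...
  }
}
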

I’m not convinced that saving one (or three, depending on how you count) lines of code for an argument check is really worth all the downsides.

dotnet, build

When building a Sandcastle Help File Builder (SHFB) project in the GUI, you manually specify the list of assemblies you want to document. However, if you want to make the execution of it a little more flexible, you can do a bit of MSBuild magic to dynamically build up the list of documentation sources for the project prior to execution.

The reason this works is that SHFB has changed its command-line execution to run through MSBuild using build tasks rather than a standalone executable. (With the old executable you could go through some steps to fashion a response file, but now you can skip the temporary file creation entirely.)

First, create your SHFB project in the GUI. Set up all the various settings including a few assemblies and make sure it builds the documentation properly.

Next, run the SHFB build from the command line. SHFB projects are MSBuild projects now, so you can run them through MSBuild at a Visual Studio command prompt:

MSBuild Documentation.shfbproj

This establishes that everything is working properly from the command line. You’ll need that because when you make the project dynamic, you’ll have to run it from the command line.

Create a new MSBuild project file that you will use to dynamically build the list of documentation sources (assemblies/XML files) and add a target in it that will run the SHFB project.

If you are using SHFB 1.9.3.0 from .NET 4.0, you can use the MSBuild task directly, like this:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
  <Target Name="Build">
    <MSBuild ToolsVersion="4.0" Projects="Documentation.shfbproj" />
  </Target>
</Project>

On the other hand, if you’re using SHFB 1.9.1.0 from .NET 4.0, you’ll need to use the Exec task to run a .NET 3.5 MSBuild process manually, like this:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
  <Target Name="Build">
    <!--
      You have to use MSBuild 3.5 with SHFB or you get a warning
      telling you that any parameters you pass will be ignored.
    -->
    <Exec
      Command="&quot;$(windir)\Microsoft.NET\Framework\v3.5\MSBuild.exe&quot; Documentation.shfbproj"
      WorkingDirectory="$(MSBuildProjectDirectory)" />
  </Target>
</Project>

Execute your new MSBuild project and make sure the documentation still builds.

Open up the SHFB project in a text editor and find the DocumentationSources XML node. This is the bit that we’re going to make dynamic. The node in question looks like this:

<DocumentationSources>
  <DocumentationSource sourceFile="..\build_output\bin\Assembly1.dll" />
  <DocumentationSource sourceFile="..\build_output\bin\Assembly1.xml" />
  <DocumentationSource sourceFile="..\build_output\bin\Assembly2.dll" />
  <DocumentationSource sourceFile="..\build_output\bin\Assembly2.xml" />
</DocumentationSources>

The tricky part here is that DocumentationSources is a property, not an item, so when you make this value dynamic you actually need to build an XML string, not a list of files like you would for other tasks.

Remove the DocumentationSources node from the SHFB project file (or comment it out). We’ll be building that dynamically and passing it in as a parameter.

In your new MSBuild project file, use an ItemGroup to locate all of the .dll and .xml files that should be included in your documentation. These are the same files you saw earlier in the DocumentationSources list:

<ItemGroup>
  <DocTarget Include="..\build_output\bin\*.dll;..\build_output\bin\*.xml;" />
</ItemGroup>

Use the CreateProperty task to build the XML string that contains each DocumentationSource node:

<CreateProperty Value="@(DocTarget -> '&lt;DocumentationSource sourceFile=%27%(FullPath)%27 /&gt;', '')">
  <Output TaskParameter="Value" PropertyName="DocumentationSources" />
</CreateProperty>
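
For reference, after the CreateProperty task runs, DocumentationSources holds one long XML string - each matched file becomes a DocumentationSource node (the %27 entities expand to single quotes) and the nodes are concatenated with no separator. With hypothetical full paths it looks roughly like this, all on one line:

<DocumentationSource sourceFile='C:\project\build_output\bin\Assembly1.dll' /><DocumentationSource sourceFile='C:\project\build_output\bin\Assembly1.xml' /><DocumentationSource sourceFile='C:\project\build_output\bin\Assembly2.dll' />...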

Finally, pass the list of DocumentationSources to the SHFB project when you run it.

If you’re using the MSBuild task:

<MSBuild
  ToolsVersion="4.0"
  Projects="Documentation.shfbproj"
  Properties="DocumentationSources=$(DocumentationSources)" />

If you’re using the Exec task:

<Exec
  Command="&quot;$(windir)\Microsoft.NET\Framework\v3.5\MSBuild.exe&quot; Documentation.shfbproj /p:DocumentationSources=&quot;$(DocumentationSources)&quot;"
  WorkingDirectory="$(MSBuildProjectDirectory)" />

A complete MSBuild script that uses SHFB 1.9.3.0 and the MSBuild task looks like this:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
  <Target Name="Build">
    <ItemGroup>
      <DocTarget Include="..\build_output\bin\*.dll;..\build_output\bin\*.xml;" />
    </ItemGroup>
    <CreateProperty Value="@(DocTarget -> '&lt;DocumentationSource sourceFile=%27%(FullPath)%27 /&gt;', '')">
      <Output TaskParameter="Value" PropertyName="DocumentationSources" />
    </CreateProperty>
    <MSBuild
      ToolsVersion="4.0"
      Projects="Documentation.shfbproj"
      Properties="DocumentationSources=$(DocumentationSources)" />
  </Target>
</Project>

Done! A dynamic Sandcastle Help File Builder project that you’ll never have to update as you add or change the assemblies you want documented.