subtext, blog, sql, downloads

Tim Heuer figured out the right stored procedure to modify in Subtext 2.5.2.0 to disable tracking of referrals altogether. I’m all for this since it means less need to monitor my database and remove/shrink the referral table.

I updated my Subtext Database Maintenance page so you can fix your DB up with a single click. Enable/disable, push-button style. All yours, free, YMMV. Do note that it does actually modify the stored proc, so if you’ve got your DB locked down or you’ve customized stuff, this may not be something you want to do. You have been warned.

dotnet, coderush, vs

I’m pleased to announce I’ve got the CR_Documentor 3.0.0.0 release up and running in the Visual Studio Gallery!

CR_Documentor in the Extension Manager

You can get it from the gallery, or you can search the Visual Studio Extension Manager for “documentor,” “dxcore,” “coderush,” or other related terms and it’ll come up.

It’s a full VSIX installer, so it’ll install right into Visual Studio without you needing to download, unzip, or do anything additional.

(I think I’m the first DXCore plugin with a VSIX installer to appear in the gallery, so… I have to say, I’m a little proud. Huge props to the DevExpress folks who made this possible.)

CR_Documentor in the Recently Added list of the VS Gallery

downloads, vs, coderush

The latest version of CR_Documentor, 3.0.0.0, has been released.

This version is an update to .NET 4 in preparation for a VSIX-based installer (think Visual Studio Gallery), so it only supports Visual Studio 2010.

It also resolves a small issue where some interfaces changed in DXCore 11.2.8 and the plugin was throwing exceptions. You will need the latest CodeRush/Refactor/DXCore (11.2.8) or things may not work. (I admittedly haven’t tried it on earlier versions.)

Free, as always, so go get it! And watch for the VSIX installer, coming soon!

dotnet, testing

I love Typemock Isolator. I do. The power it gives me to deal with legacy code interaction testing is phenomenal.

However, every once in a while, I’ll get an odd failure that doesn’t make sense. Today’s error message looks like this:

    SetUp method failed. SetUp : TypeMock.ArrangeActAssert.NestedCallException :
    *** WhenCalled does not support using a property call as an argument.
    -   To fix this pass null instead of LoggerWrapperImpl.Logger

    ***
    * Example - this would work:
    -   MyObj argument = Something.Other().GetStuff();
    -   Isolate.WhenCalled(() => ObjUnderTest.MethodUnderTest(argument))...;
    ***
    * Example - this would not work:
    -   Isolate.WhenCalled(() => ObjUnderTest.MethodUnderTest(Something.Other().GetStuff()))...;
    at cv.a()
    at hg.a()
    at dj.a(Boolean A_0)
    at do.b(Boolean A_0)
    at iz.b(Boolean A_0)
    at iz.a(Object A_0, Boolean A_1, Func`1 A_2, Action A_3, Action A_4, Action A_5)
    at iz.b(Object A_0)
    ...

We weren’t doing anything odd in the test that failed, and we have other tests that do very similar stuff. What gives?

Doing a little poking around in the TeamCity build logs, I found that the failing test…

  • Was the first one to run in the given test assembly, and
  • Had a test fixture setup that mocked a static method on a static class.

That meant static construction hadn’t happened on the static class yet, and that was causing some weird problems.

I fixed the issue by adding a real call to a static method on the class – just enough to get static construction to run first. Then everything worked perfectly.
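
Roughly, the fix looked like this; a minimal sketch with hypothetical names (SomeStaticClass stands in for our real class):

    using NUnit.Framework;
    using TypeMock.ArrangeActAssert;

    [TestFixture]
    public class SomethingUnderTestFixture
    {
        [TestFixtureSetUp]
        public void FixtureSetUp()
        {
            // Call any real static member first so the static constructor
            // runs before Typemock starts intercepting calls.
            // SomeStaticClass is a hypothetical stand-in for the real class.
            SomeStaticClass.Touch();

            // With static construction already done, mocking the static
            // method no longer throws the NestedCallException.
            Isolate.WhenCalled(() => SomeStaticClass.GetLogger()).WillReturn(null);
        }
    }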

Why did it pass on my dev box and not on the build box? Tests were getting run in a different order. Static construction was happening on the class in a different test.

Like I said, I love Typemock, but sometimes… sometimes there are some gotchas.

personal

I think getting feedback on your software is valuable. It’s nice to know people are using what you put out there and it’s interesting to see what they think.

Usually.

The thing I’m sort of stewing over is something I’ll call, for the sake of discussion, “unjustified negative feedback.” Let me dive into an example.

I have a free, open-source add-on for Firefox that makes it easier to manage the list of sites to which you allow Windows pass-through authentication. It’s called Integrated Authentication for Firefox. The description is as follows:

Most people don’t realize it, but Firefox will do integrated authentication like NTLM (Windows pass-through) just like Internet Explorer. Some people solve the issue by going around Firefox and hosting IE right in Firefox. The other way to do it is to keep Firefox as the rendering engine and tell Firefox it’s OK to use integrated authentication with a given site.

The problem is that managing the list of sites you allow Firefox to pass-through authenticate with is not straightforward and involves manually manipulating configuration settings.

This add-on makes it easier to manage this list, allowing you to stick with Firefox but still use pass-through authentication like Windows/NTLM or Kerberos.

NOTE: This add-on does not actually DO the authentication. Firefox itself already has built-in integrated authentication, it’s just not obvious how to get it to work. This add-on makes it easy to configure Firefox to use its already-existing features, but it does not do the authentication proper.

The problem this add-on solves is in that second paragraph: “managing the list of sites… is not straightforward and involves manually manipulating configuration settings.” If you’re a power user (and not everyone out there is), you can hit the “about:config” page in Firefox and tweak the “network.negotiate-auth.trusted-uris,” “network.negotiate-auth.delegation-uris,” and “network.automatic-ntlm-auth.trusted-uris” settings in the appropriate format and accomplish the same thing. But it’s not pretty and it’s not really a first-class interface.
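
For reference, here’s roughly what the manual route looks like as a user.js snippet (the hostnames are hypothetical; each pref takes a comma-separated list of sites):

    // Hypothetical intranet hosts; each pref is a comma-separated list
    // of sites Firefox is allowed to pass-through authenticate with.
    user_pref("network.automatic-ntlm-auth.trusted-uris", "intranet.example.com,tfs.example.com");
    user_pref("network.negotiate-auth.trusted-uris", "tfs.example.com");
    user_pref("network.negotiate-auth.delegation-uris", "tfs.example.com");

You can type the same values straight into the “about:config” entries; the add-on just puts a friendlier face on those three settings.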

The point is, there’s a certain audience for this software. Most people seem to like it because they didn’t know about those buried settings and this makes it nice and easy to deal with. And then there are folks who leave reviews like this:

Disappointing. http or https is required before each entry when it really isn’t required when set manually. You can’t paste in a ton of coma separated items to add them quickly. This isn’t any better than going to about:config unfortunately.

That got me a one-star. Or this one:

This is just a GUI for the “network.automatic-ntlm-auth.trusted-uris” value of the “about:config” page… if anything, it makes adding the URL’s more difficult, since you can’t add them comma-separated or without “http” prefix. For this to be actually useful a content menu like “Add this URL” or automatic/wildcard addition of an HTTP Auth protected pages (not a good idea from security perspective) would need to be added; as of right now this could very well be replaced with a link to above configuration value.

That one was two stars.

Notice any common theme? “This isn’t any better than going to about:config…” “This is just a GUI for the … value of the ‘about:config’ page…” I feel like the folks giving the bad reviews didn’t actually read what the plugin did. Of course it’s not any better than going to “about:config” – that’s all the plugin does.

The folks leaving the reviews were obviously not the target audience for the plugin. If you’re comfortable messing around in “about:config” then just do it. But I don’t have a ton of ratings over there, so just a few bad reviews take my overall average down pretty easily.

I use my plugin as an example, but this sort of thing is pretty common in software feedback. Here’s a one-star review from a recent Free App of the Day in the Amazon AppStore:

I haved played a lot of drawing apps for my kindle fire but this one is the ultimate worst it needs to be User friendly and I hate it so take that people who made this app you just got pwned lookout swaggar alert……. so yea u guys suk

That’s not even worth the electrons it’s printed on. Maybe you didn’t like the app… but you sort of got what you paid for it (nothing), and I can’t imagine it was worth only one star when there are actual thoughtful reviews that give it much higher ratings.

And that’s what I’m talking about: unjustified negative feedback.

I feel like there’s always that guy who has to say, “I know this is a screen shot app, but how come it doesn’t make coffee, too? ONE STAR!” It’s like there needs to be a screener before people leave feedback. “Did you actually use the software? Did you read the description of what the software was intended to do? Did it do what it said?”

I know there’s not really any way to solve it because there are always going to be trolls, people who use the software who aren’t the target for it, and folks just generally having a bad day who will take it out on the developer.

At the same time, I’m not entirely sure the people leaving the feedback, especially for free/open source software, realize there’s probably only one or two people working on it… in their spare time… for no money… and maybe something more constructive would be beneficial.

Maybe this is what Jeff Atwood was talking about when he said, “90% of all community feedback is crap.” I wish we could reduce that percentage. It’d be a far more motivating experience to build better products.

Note: I think there’s a strong inverse correlation between “valuable feedback” and “community breadth.” The wider the audience for a product, the larger the percentage of crap feedback. The user base for Firefox is huge, with a wide range of skill levels, so you get a higher crap percentage. The user base for Android apps in the Amazon AppStore is huge and, again, spans a wide range of skill levels, so, again, a high percentage of crap. On the other hand, I find that smaller, more focused communities, like the user base for CodeRush/DXCore plugins, have a vastly lower percentage of crap feedback. Everyone seems willing to work together to make the ecosystem better. Something to think about.