dotnet, aspnet

I just spent a day fighting these so I figured I’d share. You may or may not run into them. They do get pretty low-level, like, “not the common use case.”

PROBLEM 1: Why Isn’t My Data Serializing as XML?

I had set up my media formatters so the XML formatter would kick in and provide some clean looking XML when I provided a querystring parameter, like http://server/api/something?format=xml. I did it like this:

// Map ?format=xml on the query string to the XML formatter.
var fmt = configuration.Formatters.XmlFormatter;
fmt.MediaTypeMappings.Add(new QueryStringMapping("format", "xml", "text/xml"));
// Use XmlSerializer (rather than the default DataContractSerializer) and indent the output.
fmt.UseXmlSerializer = true;
fmt.WriterSettings.Indent = true;

It worked on super simple stuff, but then it seemed to arbitrarily stop - I’d get XML for some things, but others would always come back as JSON no matter what.

The problem was the fmt.UseXmlSerializer = true; line. I picked the XmlSerializer option because it creates prettier XML without all the extra namespaces and cruft of the standard DataContractSerializer.

UPDATE: I just figured out it’s NOT IEnumerable<T> that’s the problem - it’s an object way deep down in my hierarchy that doesn’t have a parameterless constructor.

When I started returning IEnumerable<T> values, that’s when it stopped working. I thought it was because of the IEnumerable<T>, but it turned out that I was enumerating an object that had a property with an object that had another property that didn’t have a default constructor. Yeah, deep in the sticks. No logging or exception handling to explain that one. I had to find it by stepping into the bowels of the XmlMediaTypeFormatter.
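To illustrate, here’s a hypothetical shape of the kind of object graph that causes it - these types are made up for the example, not my actual model:

public class ReportModel
{
  public ReportSection Section { get; set; }
}

public class ReportSection
{
  public AuditInfo Audit { get; set; }
}

public class AuditInfo
{
  // No parameterless constructor. XmlSerializer can't handle this type, so
  // the XmlMediaTypeFormatter declines to serialize anything that
  // (transitively) contains it and content negotiation silently falls back
  // to JSON.
  public AuditInfo(string user)
  {
    this.User = user;
  }

  public string User { get; private set; }
}

(The DataContractSerializer doesn’t have the parameterless constructor requirement, which is part of why this only shows up when UseXmlSerializer is switched on.)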

PROBLEM 2: Why Aren’t My Format Configurations Being Used?

Somewhat related to the first issue - I had the XML serializer set up for that query string mapping, and I had JSON set up to use camelCase and nice indentation, too. But for some weird reason, none of those settings were getting used at all when I made my requests.

Debugging into it, I could see that on some requests the configuration associated with the inbound request message was all reset to defaults. What?

This was because of some custom route registration stuff.

When you use attribute routes…

  1. The attribute routing mechanism gets the controller selector from the HttpConfiguration object.
  3. The controller selector gets the controller type resolver from the HttpConfiguration object (the selector also holds a reference to that configuration).
  3. The controller type resolver locates all the controller types for the controller selector.
  4. The controller selector builds up a cached list of controller name-to-descriptor mappings. Each descriptor gets passed a reference to the HttpConfiguration object.
  5. The attribute routing mechanism gets the action selector from the HttpConfiguration object.
  6. The action selector uses type descriptors from the controller type selector and creates a cached set of action descriptors. Each action descriptor gets passed a reference to the HttpConfiguration object and gets a reference back to the parent controller descriptor.
  7. The actions from the action selector get looked at for attribute route definitions and routes are built from the action descriptor. Each route has a reference to the descriptor so it knows what to execute.
  8. Execution of an action corresponding to one of these specific routes will use the exact descriptor to which it was tied.

Basically. There’s a little extra complexity in there I yada-yada’d away. The big takeaway here is that you can see all the bajillion places references to the HttpConfiguration are getting stored. There’s some black magic here.

I was trying to do my own sort of scanning for attribute routes (like on plugin assemblies that aren’t referenced by the project), but I didn’t want to corrupt the main HttpConfiguration object so I created little temporary ones that I used during the scanning process just to help coordinate things.

Yeah, you can’t do that.

Those temporary, mostly-default configurations were the ones getting used when my scanned routes executed, rather than the configuration I had set up with OWIN.
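Here’s a rough sketch of the mistake versus the fix - PluginAssembliesResolver is a hypothetical stand-in for my plugin-scanning code, not a real Web API type:

// WRONG: scanning with a throwaway configuration. The controller and action
// descriptors created during the scan capture references to scanConfig, so
// routes built from them use scanConfig's default formatters and services.
var scanConfig = new HttpConfiguration();
scanConfig.Services.Replace(
  typeof(IAssembliesResolver),
  new PluginAssembliesResolver()); // hypothetical - locates plugin assemblies
scanConfig.MapHttpAttributeRoutes();

// BETTER: run the scan against the one configuration you actually customized
// (the one handed to OWIN) so every descriptor holds a reference to it.
config.Services.Replace(
  typeof(IAssembliesResolver),
  new PluginAssembliesResolver());
config.MapHttpAttributeRoutes();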

Once I figured all that out I was able to work around it, but it ate most of the day. It’d be nice if things like the action descriptor would automatically chain up to the parent controller descriptor (if present) to get configuration rather than holding its own reference - and so on, all the way up the stack, such that routes get their configuration from the route table, which is owned by the root configuration object. Set it and forget it.

dotnet, aspnet, autofac

I’m working on a new Web API project where I want to use AutoMapper for some type conversion. As part of that, I have a custom AutoMapper type converter that takes in some constructor parameters so the converter can read configuration values. I’m using Autofac for dependency injection (naturally).

Historically, I’ve been able to hook AutoMapper into dependency injection using the ConstructServicesUsing method and some sort of global dependency resolver, like:

Mapper.Initialize(cfg =>
{
  cfg.ConstructServicesUsing(t => DependencyResolver.Current.GetService(t));
  cfg.CreateMap<Source, Destination>(); // placeholder types - your map registrations go here
});

That works great in MVC or in other applications where there’s a global static like that. In those cases, the “request lifetime scope” either doesn’t exist or it’s managed by the implementation of IDependencyResolver the way it is in the Autofac integration for MVC.

Retrieving the per-request lifetime scope is much more challenging in Web API because the request lifetime scope is managed by the inbound HttpRequestMessage. Each inbound message gets a lifetime scope associated with it, so there’s no global static from which you can get the request lifetime. You can get the global dependency resolver, but resolving from that won’t be per-request; it’ll be at the application level.
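For reference, the request scope literally hangs off the message - something like this, assuming you even have the HttpRequestMessage in hand (which a type converter generally doesn’t; ISomeService here is just a made-up example):

// Inside something that can see the current HttpRequestMessage:
IDependencyScope requestScope = request.GetDependencyScope();
var service = requestScope.GetService(typeof(ISomeService));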

It’s also a challenging situation because AutoMapper really leans you toward using the static Mapper object to do your mapping and you can’t really change the value of ConstructServicesUsing on the static because, well, you know, threading.

So… what to do?

The big step is to change your mindset around the static Mapper object. Instead of using Mapper to map things, take an IMappingEngine as a dependency in your class doing mapping. Yes, that’s one more dependency you’d normally not have to take, but there’s not really a better way given that the way IMappingEngine has to resolve dependencies is actually different per request.

This frees us up to now think about how to register and resolve a per-request version of IMappingEngine.

Before I show you how to do this, standard disclaimers apply: Works on my machine; I’ve not performance tested it; It might not work for you; etc.

Oooookay.

First, we need to understand how the IMappingEngine we build will come together.

  1. The implementation of AutoMapper.IMappingEngine we’ll be using is AutoMapper.MappingEngine (the only implementation available).
  2. MappingEngine takes in an IConfigurationProvider as a constructor parameter.
  3. IConfigurationProvider has a property ServiceCtor that is the factory we need to manipulate to resolve things out of a per-request lifetime scope.
  4. The main AutoMapper.Mapper has a Configuration property of type IConfiguration… but the backing store for it is really an AutoMapper.ConfigurationStore, which is also an IConfigurationProvider. (This is where the somewhat delicate internal part of things comes in. If something breaks in the future, chances are this will be it.)

Since we need an IConfigurationProvider, let’s make one.

We want to leverage the main configuration/initialization that the static Mapper class provides because there’s a little internal work there that we don’t want to copy/paste. The only thing we really want to change is that ServiceCtor property, but that’s not a settable property, so let’s write a quick wrapper around an IConfigurationProvider that lets us override it with our own method.

public class ConfigurationProviderProxy : IConfigurationProvider
{
  private IComponentContext _context;
  private IConfigurationProvider _provider;

  // Take in a configuration provider we're going to wrap
  // and an Autofac context from which we can resolve things.
  public ConfigurationProviderProxy(IConfigurationProvider provider, IComponentContext context)
  {
    this._provider = provider;
    this._context = context;
  }

  // This is the important bit - we use the passed-in
  // Autofac context to resolve dependencies.
  public Func<Type, object> ServiceCtor
  {
    get
    {
      return this._context.Resolve;
    }
  }

  //
  // EVERYTHING ELSE IN THE CLASS IS JUST WRAPPER/PROXY
  // CODE TO PASS THROUGH TO THE BASE PROVIDER.
  //
  public bool MapNullSourceCollectionsAsNull { get { return this._provider.MapNullSourceCollectionsAsNull; } }

  public bool MapNullSourceValuesAsNull { get { return this._provider.MapNullSourceValuesAsNull; } }

  public event EventHandler<TypeMapCreatedEventArgs> TypeMapCreated
  {
    add { this._provider.TypeMapCreated += value; }
    remove { this._provider.TypeMapCreated -= value; }
  }

  public void AssertConfigurationIsValid()
  {
    this._provider.AssertConfigurationIsValid();
  }

  public void AssertConfigurationIsValid(TypeMap typeMap)
  {
    this._provider.AssertConfigurationIsValid(typeMap);
  }

  public void AssertConfigurationIsValid(string profileName)
  {
    this._provider.AssertConfigurationIsValid(profileName);
  }

  public TypeMap CreateTypeMap(Type sourceType, Type destinationType)
  {
    return this._provider.CreateTypeMap(sourceType, destinationType);
  }

  public TypeMap FindTypeMapFor(ResolutionResult resolutionResult, Type destinationType)
  {
    return this._provider.FindTypeMapFor(resolutionResult, destinationType);
  }

  public TypeMap FindTypeMapFor(Type sourceType, Type destinationType)
  {
    return this._provider.FindTypeMapFor(sourceType, destinationType);
  }

  public TypeMap FindTypeMapFor(object source, object destination, Type sourceType, Type destinationType)
  {
    return this._provider.FindTypeMapFor(source, destination, sourceType, destinationType);
  }

  public TypeMap[] GetAllTypeMaps()
  {
    return this._provider.GetAllTypeMaps();
  }

  public IObjectMapper[] GetMappers()
  {
    return this._provider.GetMappers();
  }

  public IFormatterConfiguration GetProfileConfiguration(string profileName)
  {
    return this._provider.GetProfileConfiguration(profileName);
  }
}

That was long, but there’s not much logic to it. You could probably do some magic to make this smaller with Castle.DynamicProxy but I’m keeping it simple here.

Now we need to register IMappingEngine with Autofac so that it:

  • Creates a per-request engine that
  • Uses a per-request lifetime scope to resolve dependencies and
  • Leverages the root AutoMapper configuration for everything else.

That’s actually pretty easy:

// Register your mappings here, but don't set any
// ConstructServicesUsing settings.
Mapper.Initialize(cfg =>
{
  cfg.AddProfile<SomeProfile>();
  cfg.AddProfile<OtherProfile>();
});

// Start your Autofac container.
var builder = new ContainerBuilder();

// Register your custom type converters and other dependencies.
builder.RegisterType<DemoConverter>().InstancePerApiRequest();
builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

// Register the mapping engine to use the base configuration but
// a per-request lifetime scope for dependencies.
builder.Register(c =>
{
  var context = c.Resolve<IComponentContext>();
  var config = new ConfigurationProviderProxy(Mapper.Configuration as IConfigurationProvider, context);
  return new MappingEngine(config);
}).As<IMappingEngine>()
.InstancePerApiRequest();

// Build the container.
var container = builder.Build();

Now all you have to do is take an IMappingEngine as a dependency and use that rather than AutoMapper.Mapper for mapping.

public class MyController : ApiController
{
  private IMappingEngine _mapper;

  public MyController(IMappingEngine mapper)
  {
    this._mapper = mapper;
  }

  [Route("api/myaction")]
  public SomeValue GetSomeValue()
  {
    // Do some work (elided) to produce otherValue, then use the injected
    // IMappingEngine - not the static Mapper - for the map.
    return this._mapper.Map<SomeValue>(otherValue);
  }
}

Following that pattern, any mapping dependencies will be resolved out of the per-request lifetime scope rather than the application root container and you won’t have to use any static references or fight with request contexts. When the API controller is resolved (out of the request scope) the dependent IMappingEngine will be as well, as will all of the chained-in dependencies for mapping.

While I’ve not tested it, this technique should also work in an MVC app to allow you to get away from the static DependencyResolver.Current reference. InstancePerApiRequest and InstancePerHttpRequest do effectively the same thing internally in Autofac, so the registrations are cross-compatible.
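If you do try it in MVC, I’d expect the registration to be identical except for the lifetime scope extension - an untested sketch:

// Same proxy as above; only the per-request scoping extension changes.
builder.Register(c =>
{
  var context = c.Resolve<IComponentContext>();
  var config = new ConfigurationProviderProxy(Mapper.Configuration as IConfigurationProvider, context);
  return new MappingEngine(config);
}).As<IMappingEngine>()
.InstancePerHttpRequest();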

media

Back in 2008 when I originally was looking at the various solutions for archiving my movies, I weighed the pros and cons of things and decided to rip my movies to VIDEO_TS format. I did that for a few reasons:

  • I wanted to keep the fidelity of the original movie. I didn’t want a whole bunch of additional compression artifacts that would detract from the watching experience.
  • I wanted to keep the sound. I didn’t want everything downmixed to stereo; I wanted the full surround experience you’d normally get with the movie.
  • I wanted a backup of the original disc. In the event a disc goes bad, I wanted to be able to re-burn the disc.

Well, six years (!) have passed since I made that decision and a lot has changed, not only with technology, but with my own thoughts on what I want.

  • Full DVD rips take a lot of space. That native MPEG-2 compression is really not great. Not to mention the digital files some DVDs come with for “interactive features” and things.
  • We don’t use the extra features. After running the media center for this long, our usage pattern with it has become pretty clear – we watch the main movie but we generally don’t make use of the behind-the-scenes featurettes, audio commentary, or other features of the movies.
  • The FBI warning, menus, previews, and other “up front stuff” are annoying. We’ve known that all along, but it’s like a five-minute tax you just accept for watching a movie. I’m tired of paying that tax.
  • Discs don’t go bad as often as you think. Of the literally hundreds of discs I have, I think I’ve had like two go bad. I know I’ve jinxed it now that I’ve said it.
  • Disk space isn’t free. It’s cheap, but not free. The real challenge is that if you have a NAS with all full bays and a RAID 5 array, it’s not really that easy or cheap to expand. You have to move all the data off the giant array (to where?), upgrade the disks, and move it all back. (Basically. Yeah, there are other ways to swap one disk out at a time, etc., but the idea is that it’s painful and not free.)
  • Video containers are way better and more compatible now. Originally it was nigh unto impossible to get actual surround sound out of a compressed video in an MKV or MP4 container. I say “nigh unto” because some people had figured out this magic incantation and had it working, but finding the right spells to make it happen was far less straightforward than you might think. I tried for a long time to no avail. Plus, compatibility in general was not great – one device would play MP4 but not MKV; one device wouldn’t play any of them; one device would only play MP4 but only certain bit-rates of audio. It was horrible. Now pretty much everything plays MP4 and DLNA servers stream it nicely.
  • Compression is way better. Handbrake has changed a lot since I originally looked and the filters it uses are way better. You don’t notice the difference in a converted movie the way you once did, and it’s way easier to get “the right” settings for things.

What really got me thinking about it was this Slashdot article talking about how a person lost 20TB of data because it’s basically impossible to back all that up at home. I don’t have 20TB of data, but I have 5TB and my NAS is close to 80% full. I don’t have much room to continue just adding movies and, as noted, disk space isn’t free. It got me thinking about looking at video formats again.

I ended up switching to:

  • Handbrake’s “High Profile” preset modified with…
  • The primary audio channel updated from 160kbps to 256kbps
  • The “x264 Preset” set to “Slower”
  • An “x264 Tune” of “Film,” “Animation,” or “Grain,” chosen based on the content type

These settings yield results that are visually comparable to the original DVD source and include both a stereo mixdown (for iPads and mobile devices that don’t support surround) and surround passthrough audio (for media servers and players that do).

I chose the higher quality sound because my primary use case is still high-fidelity home theater speakers and while I don’t need lossless audio, I wanted really good quality, too. It didn’t seem to affect the file size in any significant way.

I chose the “slower” x264 preset because in some areas I could tell the difference between “medium” (the default) and the slower settings, but from a time-to-encode perspective, “slow” and “slower” took about the same amount of time. I tried “very slow” but it nearly doubled the encode time (not feasible for hundreds of discs).

The file size is roughly 25% – 50% of the original source content, so for a 4GB DVD I see about 1.5GB – 2.0GB compressed movies; for an 8GB DVD I see 2.0 – 3.5GB compressed movies. This is great from a space perspective because it means I can put off expanding my RAID array for a while longer.

On my current (not great) computers, I can encode a two-hour movie in about 8 – 10 hours. Thank goodness for the queue feature in Handbrake - I can just queue up a ton of movies and let it run around the clock. I have a couple of computers going all the time now. Of course, with the number of discs I have to go through, it’s still going to take a couple of months.

This has opened a lot of new doors from a compatibility standpoint at home. My Synology NAS comes with a DLNA server that streams these perfectly to any device at home, so I can watch from my phone or tablet. The XBMC media center plays them beautifully and gets the full surround sound. I can put these on the iPad and take them traveling with us. I don’t have the full backup anymore… but for the cost/benefit on that, I may as well just re-purchase discs that go bad if I have to.

Some documents that helped me determine this new format:

  • The Handbrake Guide

  • This amazingly well done “best settings” guide for Handbrake 0.9.9

  • A comparison of the x264 “RF” settings in Handbrake

If you’re interested in the rest of my media center solution, check out the main article.

media

Over the last six months, give or take, I’ve noticed that my Netflix streaming performance over my Comcast internet has taken a dive.

If I stream a show during the day, I can get a full 1080P HD signal. Watching through my PS3, I get Super HD. It’s awesome.

However, watching that same show once it hits “prime time” – between, say, 7:30p and 10:30p – I get a maximum of 480SD.

I saw this article that went around about a fellow who caught Verizon supposedly doing some traffic shaping around Amazon Web Services and it got me wondering if Comcast was doing the same thing.

I called Comcast support and got the expected runaround. The internet people sent me to the TV people (because you watch Netflix on your TV?!); the TV people sent me to the home network support people (because no way is it a Comcast issue), then the home network people said they would transfer me to their dedicated Netflix support team…

…which transferred me to actual Netflix support. No affiliation with Comcast at all.

Netflix support ran me through the usual network troubleshooting steps - which basically amount to “reboot everything and use a wired connection,” all of which I’d already done - and then we ended up at “call your ISP.” That’s how I got here in the first place. Sigh.

I reached out this time to @comcastcares on Twitter and had a much better result. I got in touch with a very helpful person, @comcastcamille, who did a few diagnostics on their end and then got me in touch with their “executive support” department.

The executive support department sent a tech to my house who replaced my aging cable modem. That actually improved my speed tests – I used to only occasionally make it to ~25Mbps down, but with the new modem I consistently get between 25Mbps and 30Mbps. Unfortunately, that didn’t fix the Netflix issue, so I called back.

This time I got ahold of a network tech named Miguel who not only spoke very clearly but also knew what he was talking about. A rare find in tech support.

First we did speed tests on two different sites, and the results looked good. On that same computer, I then tried streaming Netflix. 480SD. Lame.

Then he mentioned something I hadn’t considered: Amazon Prime streaming is also backed by AWS. On the same computer I streamed an Amazon Prime video… full HD after less than three seconds of buffering.

For giggles, we tried streaming Netflix and running the speed test at the same time and got results similar to the first speed test. I also ran the Net Neutrality speed test and got great results.

Of course, as mentioned on the Net Neutrality test site, much of the Netflix traffic doesn’t actually flow from AWS, but through Open Connect peering agreements. Ars Technica has a nice article about how several providers are having trouble keeping up with Netflix and it may not necessarily be intentional traffic shaping so much as sheer volume.

In the end, Miguel convinced me that it may not be entirely a Comcast problem. He also mentioned that he, himself, switched from Netflix to Amazon Prime because the quality is so much better. Something to consider.

Of course, Google Fiber is now looking at Portland, so that may be a good alternative.

For the record, I’ve never really had the problems with Comcast that many people have. I admit I’m possibly an exception. Other than the phone runaround, which you often get with any type of service provider, Comcast service has been reliable and good for me. Netflix aside, the TV works, the phone works, the internet gets good speed and is always up… I can’t complain. (Well, the prices do continue to go up, which sucks, but that’s only peripherally related to the service quality discussion.) We tried Frontier, the primary local competitor, and I had the experiences with them that other people seem to report with Comcast. Frontier (when I was with them) had constant outages, pretty much refused to help… and actually did things that required me to periodically reset my router and fully reconfigure the network.

But, you know, Google Fiber…