sublime, xml, gists

I already have my build scripts tidy up my XML configuration files but sometimes I’m working on something outside the build and need to tidy up my XML.

There are a bunch of packages that have HTML linting and tidy, but there isn’t really a great XML tidy package… and it turns out you don’t really need one.

  1. Get a copy of Tidy and make sure it’s in your path.
  2. Install the Sublime package “External Command” so you can pipe text in the editor through external commands.
  3. In Sublime, go to Preferences -> Browse Packages... and open the “User” folder.
  4. Create a new file in there called ExternalCommand.sublime-commands. (The name isn’t actually important as long as it ends in .sublime-commands but I find it’s easier to remember what the file is for with this name.)

Add the following to the ExternalCommand.sublime-commands file:

[
    {
        "caption": "XML: Tidy",
        "command": "filter_through_command",
        "args": { "cmdline": "tidy --input-xml yes --output-xml yes --preserve-entities yes --indent yes --indent-spaces 4 --input-encoding utf8 --indent-attributes yes --wrap 0 --newline lf" }
    }
]

Sublime should immediately pick this up, but sometimes it requires a restart.

Now when you’re working in XML and want to tidy it up, go to the command palette (Ctrl+Shift+P) and run the XML: Tidy command. It’ll be all nicely cleaned up!

The options I put here match the ones I use in my build scripts. If you want to customize how the XML looks, you can change up the command line in the ExternalCommand.sublime-commands file using the options available to Tidy.

aspnet, rest, json

Here’s the situation:

You have a custom object type that you want to use in your Web API application. You want full support for it just like a .NET primitive:

  • It should be usable as a route value like api/operation/{customobject}.
  • You should be able to GET the object and it should serialize the same as it does in the route.
  • You should be able to POST an object as the value for a property on another object and that should work.
  • It should show up correctly in ApiExplorer generated documentation like Swashbuckle/Swagger.

This isn’t as easy as you might think.

The Demo Object

Here’s a simple demo object that I’ll use to walk you through the process. It has some custom serialization/deserialization logic.

public class MyCustomObject
{
  public int First { get; set; }

  public int Second { get; set; }

  public string Encode()
  {
    return String.Format(
        CultureInfo.InvariantCulture,
        "{0}|{1}",
        this.First,
        this.Second);
  }

  public static MyCustomObject Decode(string encoded)
  {
    var parts = encoded.Split('|');
    return new MyCustomObject
    {
      First = int.Parse(parts[0]),
      Second = int.Parse(parts[1])
    };
  }
}

We want the object to serialize as a pipe-delimited string rather than a full object representation:

var obj = new MyCustomObject
{
  First = 12,
  Second = 345
};

// This will be "12|345"
var encoded = obj.Encode();

// This will decode back into the original object
var decoded = MyCustomObject.Decode(encoded);

Here we go.

Outbound Route Value: IConvertible

Say you want to generate a link to a route that takes your custom object as a parameter. Your API controller might do something like this:

// For a route like this:
// [Route("api/value/{value}", Name = "route-name")]
// you generate a link like this:
var url = this.Url.Link("route-name", new { value = myCustomObject });

By default, you’ll get a link that looks like this, which isn’t what you want: http://server/api/value/MyNamespace.MyCustomObject

We can fix that. When it converts route values to strings, UrlHelper uses, in this order:

  • IConvertible.ToString()
  • IFormattable.ToString()
  • object.ToString()

So, if you implement one of these interfaces, you can control how the object appears in the URL. I like IConvertible because IFormattable also gets picked up by things like String.Format calls, where you might not want the object serialized the same way.

Let’s add IConvertible to the object. You really only need to handle the ToString method; for everything else, just bail with an InvalidCastException. You also have to provide a GetTypeCode implementation and a simple ToType implementation.

using System;
using System.Globalization;

namespace SerializationDemo
{
  public class MyCustomObject : IConvertible
  {
    public int First { get; set; }

    public int Second { get; set; }

    public static MyCustomObject Decode(string encoded)
    {
      var parts = encoded.Split('|');
      return new MyCustomObject
      {
        First = int.Parse(parts[0]),
        Second = int.Parse(parts[1])
      };
    }

    public string Encode()
    {
      return String.Format(
        CultureInfo.InvariantCulture,
        "{0}|{1}",
        this.First,
        this.Second);
    }

    public TypeCode GetTypeCode()
    {
      return TypeCode.Object;
    }

    public override string ToString()
    {
      return this.ToString(CultureInfo.CurrentCulture);
    }

    public string ToString(IFormatProvider provider)
    {
      return String.Format(provider, "<{0}, {1}>", this.First, this.Second);
    }

    string IConvertible.ToString(IFormatProvider provider)
    {
      return this.Encode();
    }

    public object ToType(Type conversionType, IFormatProvider provider)
    {
      return Convert.ChangeType(this, conversionType, provider);
    }

    /* ToBoolean, ToByte, ToChar, ToDateTime,
       ToDecimal, ToDouble, ToInt16, ToInt32,
       ToInt64, ToSByte, ToSingle, ToUInt16,
       ToUInt32, ToUInt64
       all throw InvalidCastException */
  }
}

There are a couple of interesting things to note here:

  • I explicitly implemented IConvertible.ToString. I did that so the value you get from a String.Format call or a standard ToString call will be different than the encoded value. To get the encoded value, you have to explicitly cast the object to IConvertible. This allows you to differentiate where the encoded value shows up (there’s a quick example after this list).
  • ToType pipes to Convert.ChangeType. Convert.ChangeType uses IConvertible where possible, so you kind of get this for free. That’s another reason IConvertible works better here than IFormattable.
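
To see the difference in action, here’s a quick sketch using the demo object from above. The public ToString overloads give you the display format, while the cast to IConvertible gets you the encoded form (which is what UrlHelper ends up using):

var custom = new MyCustomObject
{
  First = 12,
  Second = 345
};

// The public overloads return the display format: "<12, 345>"
var display = custom.ToString();

// The explicit IConvertible implementation returns the encoded value: "12|345"
var encoded = ((IConvertible)custom).ToString(CultureInfo.InvariantCulture);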

Inbound Route Value, Action Parameter, and ApiExplorer: TypeConverter

When ApiExplorer is generating documentation, it needs to know whether the action parameter can be converted into a string (so it can go in the URL). It does this by getting the TypeConverter for the object and querying CanConvertFrom(typeof(string)). If the answer is false, ApiExplorer assumes the parameter has to be in the body of a request - which wrecks any generated documentation because that thing should be in the route.

To satisfy ApiExplorer, you need to implement a TypeConverter.

When your custom object is used as a route value coming in or otherwise as an action parameter, you also need to be able to model bind the encoded value to your custom object.

There is a built-in TypeConverterModelBinder that uses TypeConverter, so implementing the TypeConverter will address model binding as well.

Here’s a simple TypeConverter for the custom object:

using System;
using System.ComponentModel;
using System.Globalization;

namespace SerializationDemo
{
  public class MyCustomObjectTypeConverter : TypeConverter
  {
    public override bool CanConvertFrom(
        ITypeDescriptorContext context,
        Type sourceType)
    {
      return sourceType == typeof(string) ||
             base.CanConvertFrom(context, sourceType);
    }

    public override bool CanConvertTo(
        ITypeDescriptorContext context,
        Type destinationType)
    {
      return destinationType == typeof(string) ||
             base.CanConvertTo(context, destinationType);
    }

    public override object ConvertFrom(
        ITypeDescriptorContext context,
        CultureInfo culture,
        object value)
    {
      var encoded = value as String;
      if (encoded != null)
      {
        return MyCustomObject.Decode(encoded);
      }

      return base.ConvertFrom(context, culture, value);
    }

    public override object ConvertTo(
        ITypeDescriptorContext context,
        CultureInfo culture,
        object value,
        Type destinationType)
    {
      var cast = value as MyCustomObject;
      if (destinationType == typeof(string) && cast != null)
      {
        return cast.Encode();
      }

      return base.ConvertTo(context, culture, value, destinationType);
    }
  }
}

And, of course, add the [TypeConverter] attribute to the custom object.

[TypeConverter(typeof(MyCustomObjectTypeConverter))]
public class MyCustomObject : IConvertible
{
  //...
}
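
If you want to sanity-check the converter outside of a web request, you can exercise it through TypeDescriptor, which is roughly the same lookup ApiExplorer and the TypeConverterModelBinder go through. A quick sketch:

// Resolves MyCustomObjectTypeConverter via the [TypeConverter] attribute.
var converter = TypeDescriptor.GetConverter(typeof(MyCustomObject));

// This is the question ApiExplorer asks to decide whether
// the parameter can go in the URL.
var canConvert = converter.CanConvertFrom(typeof(string)); // true

// Model binding converts the encoded route value back into the object.
var bound = (MyCustomObject)converter.ConvertFromInvariantString("12|345");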

Setting Swagger/Swashbuckle Doc

Despite all of this, generated Swagger/Swashbuckle documentation will still show an expanded representation of your object, which is inconsistent with how a user will actually work with it from a client perspective.

At application startup, you need to register a type mapping with the Swashbuckle SwaggerSpecConfig.Customize method to map your custom type to a string.

SwaggerSpecConfig.Customize(c =>
{
  c.MapType<MyCustomObject>(() =>
      new DataType { Type = "string", Format = null });
});

Even More Control: JsonConverter

Newtonsoft.Json should handle converting your type automatically based on the IConvertible and TypeConverter implementations.

However, if you’re doing something extra fancy like implementing a custom generic object, you may need to implement a JsonConverter for your object.

There is some great doc on the Newtonsoft.Json site so I won’t go through that here.
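
For completeness, a minimal converter for the demo object might look something like the following. This is just a sketch of the standard Newtonsoft.Json JsonConverter pattern; the scenario above doesn’t strictly require it:

using System;
using Newtonsoft.Json;

namespace SerializationDemo
{
  public class MyCustomObjectJsonConverter : JsonConverter
  {
    public override bool CanConvert(Type objectType)
    {
      return objectType == typeof(MyCustomObject);
    }

    public override object ReadJson(
      JsonReader reader,
      Type objectType,
      object existingValue,
      JsonSerializer serializer)
    {
      // Expect the encoded "First|Second" string and decode it.
      var encoded = reader.Value as string;
      return encoded == null ? null : MyCustomObject.Decode(encoded);
    }

    public override void WriteJson(
      JsonWriter writer,
      object value,
      JsonSerializer serializer)
    {
      // Write the object out in the same encoded form.
      writer.WriteValue(((MyCustomObject)value).Encode());
    }
  }
}

You’d register it either with a [JsonConverter(typeof(MyCustomObjectJsonConverter))] attribute on the class or by adding it to the serializer settings.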

Using Your Custom Object

With the IConvertible and TypeConverter implementations, you should be able to work with your object like any other primitive and have it properly appear in route URLs, model bind, and so on.

// You can define a controller action that automatically
// binds the string to the custom object. You can also
// generate URLs that will have the encoded value in them.
[Route("api/increment/{value}", Name = "increment-values")]
public MyCustomObject IncrementValues(MyCustomObject value)
{
  // Create a URL like this...
  var url = this.Url.Link("increment-values", new { value = value });

  // Or work with an automatic model-bound object coming in...
  return new MyCustomObject
  {
    First = value.First + 1,
    Second = value.Second + 1
  };
}

Bonus: Using Thread Principal During Serialization

If, for whatever reason, your custom object needs the user’s principal on the thread during serialization, you’re in for a surprise: While the authenticated principal is on the thread during your ApiController run, HttpServer restores the original (unauthenticated) principal before response serialization happens.

It’s recommended you use HttpRequestMessage.GetRequestContext().Principal instead of Thread.CurrentPrincipal, but by the time you get down into type conversion there’s no request in sight and no real way to pass it around.

The way you can work around this is by implementing a custom JsonMediaTypeFormatter.

The JsonMediaTypeFormatter has a method GetPerRequestFormatterInstance that is called when serialization occurs. It does get the current request message, so you can pull the principal out then and stick it on the thread long enough for serialization to happen.

Here’s a simple implementation:

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Net.Http.Headers;
using System.Security.Principal;
using System.Text;
using System.Threading;

public class PrincipalAwareJsonMediaTypeFormatter : JsonMediaTypeFormatter
{
  // This is the default constructor to use when registering the formatter.
  public PrincipalAwareJsonMediaTypeFormatter()
  {
  }

  // This is the constructor to use per-request.
  public PrincipalAwareJsonMediaTypeFormatter(
    JsonMediaTypeFormatter formatter,
    IPrincipal user)
    : base(formatter)
  {
    this.User = user;
  }

  // For per-request instances, this is the authenticated principal.
  public IPrincipal User { get; private set; }

  // Here's where you create the per-user/request formatter.
  public override MediaTypeFormatter GetPerRequestFormatterInstance(
    Type type,
    HttpRequestMessage request,
    MediaTypeHeaderValue mediaType)
  {
    var requestContext = request.GetRequestContext();
    var user = requestContext == null ? null : requestContext.Principal;
    return new PrincipalAwareJsonMediaTypeFormatter(this, user);
  }

  // When you deserialize an object, throw the principal
  // on the thread first and restore the original when done.
  public override object ReadFromStream(
    Type type,
    Stream readStream,
    Encoding effectiveEncoding,
    IFormatterLogger formatterLogger)
  {
    var originalPrincipal = Thread.CurrentPrincipal;
    try
    {
      if (this.User != null)
      {
        Thread.CurrentPrincipal = this.User;
      }

      return base.ReadFromStream(type, readStream, effectiveEncoding, formatterLogger);
    }
    finally
    {
      Thread.CurrentPrincipal = originalPrincipal;
    }
  }

  // When you serialize an object, throw the principal
  // on the thread first and restore the original when done.
  public override void WriteToStream(
    Type type,
    object value,
    Stream writeStream,
    Encoding effectiveEncoding)
  {
    var originalPrincipal = Thread.CurrentPrincipal;
    try
    {
      if (this.User != null)
      {
        Thread.CurrentPrincipal = this.User;
      }

      base.WriteToStream(type, value, writeStream, effectiveEncoding);
    }
    finally
    {
      Thread.CurrentPrincipal = originalPrincipal;
    }
  }
}

You can register that at app startup with your HttpConfiguration like this:

// Copy any custom settings from the current formatter
// into a new formatter.
var formatter = new PrincipalAwareJsonMediaTypeFormatter(config.Formatters.JsonFormatter);

// Remove the old formatter, add the new one.
config.Formatters.Remove(config.Formatters.JsonFormatter);
config.Formatters.Add(formatter);

Conclusion

I have to admit, I’m a little disappointed in the different ways the same things get handled here. Why do some things allow IConvertible but others require TypeConverter? It’d be nice if it was consistent.

In any case, once you know how it works, it’s not too hard to implement. Knowing is half the battle, right?

Hopefully this helps you in your custom object creation journey!

autofac, aspnet

We’ve been silent for a while, but we want you to know we’ve been working diligently on trying to get a release of Autofac that works with ASP.NET 5.0/vNext.

When it’s released, the ASP.NET vNext compatible version will be Autofac 4.0.

Here’s a status update on what’s been going on:

  • Split repositories for Autofac packages. We had been maintaining all of the Autofac packages - Autofac.Configuration, Autofac.Wcf, and so on - in a single repository. This made it easier to work with but also caused trouble with independent package versioning and codeline release tagging. We’ve split everything into separate repositories now to address these issues. You can see the repositories by looking at the Autofac organization in GitHub.
  • Switched to Gitflow. Previously we were just working in master and it was pretty easy. Occasionally we’d branch for larger things, but not always. We’ve switched to using Gitflow so you’ll see the 4.0 work going on in a “develop” branch in the repo.
  • Switched the build. We’re trying to get the build working using only the new stuff (.kproj/project.json). This is proving to be a bit challenging, which I’ll discuss more below.
  • Switched the tests to xUnit. In order to see if we broke something we need to run the tests, and the only runner in town for vNext is xUnit, so… we switched, at least for core Autofac.
  • Working on code conversion. Most of the differences we’ve seen in the API have to do with the way you access things through reflection. Of course, IoC containers do a lot of that, so there’s a lot of code to update and test; there’s a rough illustration of the change after this list. The new build system also handles things like resources (.resx) slightly differently, so we’re working on making sure everything comes across and tests out.
  • Moved continuous integration to AppVeyor. You’ll see build badges on all of the README files in the respective repos. The MyGet CI NuGet feed is still live and where we publish the CI builds, but the build proper is on AppVeyor. I may have to write a separate blog entry on why we switched, but basically - we had more control at AppVeyor and things are easier to manage. (We are still working on getting a CI build for the vNext stuff going on there.)
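
To give a feel for the sort of mechanical change involved in the code conversion work, here’s a rough, hypothetical illustration (not actual Autofac code) of how reflection access moves from Type over to TypeInfo:

using System;
using System.Linq;
using System.Reflection;

internal static class ReflectionComparison
{
  // Hypothetical helper for illustration only.
  public static void Inspect(Type type)
  {
    // Old style: members hang directly off System.Type.
    // var properties = type.GetProperties();

    // New style: you go through TypeInfo. Note the semantics differ
    // slightly - DeclaredProperties only returns properties declared
    // on this type, not inherited ones.
    var typeInfo = type.GetTypeInfo();
    var properties = typeInfo.DeclaredProperties.ToList();
    Console.WriteLine(
        "{0} is generic: {1}, declared properties: {2}",
        type.Name,
        typeInfo.IsGenericType,
        properties.Count);
  }
}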

Obviously at a minimum we’d like to get core Autofac out sooner rather than later. Ideally we’d get a few other items like Autofac.Configuration out, too, so folks can see things in a more “real world” scenario.

Once we can get a reliable Autofac core ported over, we can get the ASP.NET integration piece done. That work is going on simultaneously, but it’s hard to get integration done when the core bits are still moving.

There have, of course, been some challenges. Microsoft’s working hard on getting things going, but things still aren’t quite baked. Most of it comes down to “stuff that will eventually be there but isn’t quite done yet.”

  • Portable Class Library support isn’t there. We switched Autofac to PCL to avoid having a ton of #if ASPNETCORE50 sorts of code in the codebase. We had that early on with things like Silverlight and PCL made this really nice. Unfortunately, the old-style .csproj projects don’t have PCL support for ASP.NET vNext yet (though it’s supposed to be coming) and we’re not able to specify PCL target profiles in project.json. (While net45 works, it doesn’t seem that .NETPortable,Version=v4.6,Profile=Profile259 does, or anything like it.) That means we’re back to a couple of #if items and still trying to figure out how to get the other platforms supported. UPDATE: Had a Twitter conversation with Dave Kean and it turns out we may need to switch the build back to .csproj to get PCL support, but PCL should allow us to target ASP.NET vNext.
  • Configuration isn’t quite baked. Given there’s no web.config or ConfigurationElement support in ASP.NET, configuration is handled differently - through Microsoft.Framework.ConfigurationModel. Unfortunately, they don’t currently support the notion of arrays/collections, so for Autofac.Configuration if you wanted to register a list of modules… you can’t with this setup. There’s an issue for it filed but it doesn’t appear to have any progress. Sort of a showstopper and may mean we need to roll our own custom serialization for configuration.
  • The build structure has a steep learning curve. I blogged about this before so I won’t recap it, but suffice to say, there’s not much doc and there’s a lot to figure out in there.
  • No strong naming. One of the things they changed about the new platform is the removal of strong naming for assemblies. Personally, I’m fine with that - it’s always been a headache - but there’s a lot of code access security stuff in Autofac that we’d put into place to make sure it’d work in partial trust; we had [InternalsVisibleTo] attributes in places… and that all has to change. You can’t have a strong-named assembly depend on a not-strong-named assembly, and as they move away from strong naming, it basically means everything has to either maintain two builds (strong named and not strong named) or we stop strong naming. I think we’re leaning toward not strong naming - for the same reason we tried getting away from the #if statements. One codeline, one release, easy to manage.

None of this is insurmountable, but it is a lot like dominos - if we can get the foundation stuff up to date, things will just start falling into place. It’s just slow to make progress when the stuff you’re trying to build on isn’t quite there.

aspnet, dotnet, autofac, github

Alex and I are working on switching Autofac over to ASP.NET vNext and as part of that we’re trying to figure out what the proper structure is for a codeline, how a build should look, and so on.

There is a surprisingly small amount of documentation on the infrastructure bits. I get that things are moving quickly, but the lack of any detailed docs makes for a steep learning curve and a lot of frustration. I mean, you can read about the schema for project.json, but even that is out of date and incomplete, so you end up diving into the code, trying to reverse-engineer how things come together.

Below is a sort of almost-stream-of-consciousness braindump of things I’ve found while working on sorting out build and repo structure for Autofac.

No More MSBuild - Sake + KoreBuild

If you’re compiling only on a Windows platform you can still use MSBuild, but if you look at the ASP.NET vNext repos, you’ll see there’s no MSBuild to be found.

This is presumably to support cross-platform compilation of the ASP.NET libraries and the K runtime bits. That’s a good goal and it’s worth pursuing - we’re going that direction for at least core Autofac and a few of the other core libs that need to change (like Autofac.Configuration). Eventually I can see all of our stuff switching that way.

The way it generally works in this system is:

  • A base build.cmd (for Windows) and build.sh (for Linux) use NuGet to download the Sake and KoreBuild packages.
  • The scripts kick off the Sake build engine to run a makefile.shade which is platform-agnostic.
  • The Sake build engine, which is written in cross-platform .NET, handles the build execution process.

The Sake Build System

Sake is a C#-based make/build system that appears to have been around for quite some time. There is pretty much zero documentation on this, which makes figuring it out fairly painful.

From what I gather, it is based on the Spark view engine and uses .shade view files as the build scripts. When you bring in the Sake package, you get several shared .shade files that get included to handle common build tasks like updating assembly version information or running commands.

It enables cross-platform builds because Spark, C#, and the overall execution process all work on both Mono and Windows .NET.

One of the nice things it has built in, and a compelling reason to use it beyond the cross-platform support, is a convention-based standard build lifecycle that runs clean/build/test/package targets in a standard order. You can easily hook into this pipeline to add functionality but you don’t have to think about the order of things. It’s pretty nice.

The KoreBuild Package

KoreBuild is a build system layered on top of Sake that is used to build K projects. As with Sake, there is zero doc on this.

If you’re using the new K build system, though, and you’re OK with adopting Sake, there’s a lot of value in the KoreBuild package. KoreBuild layers in Sake support for automatic NuGet package restore, native compile support, and other K-specific goodness. The _k-standard-goals.shade file is where you can see the primary set of things it adds.

The Simplest Build Script

Assuming you have committed to the Sake and KoreBuild way of doing things, you can get away with an amazingly simple top-level build script that will run a standard clean/build/test/package lifecycle automatically for you.

var AUTHORS='Your Authors Here'

use-standard-lifecycle
k-standard-goals

At the time of this writing, the AUTHORS value must be present or some of the standard lifecycle bits will fail… but since the real authors for your package are specified in project.json files now, this is really just a placeholder that has to be there. It doesn’t appear to matter what the value is.

Embedded Resources Have Changed

The documentation on project.json currently makes no mention of how embedded resources are handled, but if you look at the schema you’ll see that you can specify a resources element in project.json the same way you can specify code.

A project with embedded resources might look like this (minus the frameworks element and all the dependencies and such to make it easier to see):

{
    "description": "Enables Autofac dependencies to be registered via configuration.",
    "authors": ["Autofac Contributors"],
    "version": "4.0.0-*",
    "compilationOptions": {
        "warningsAsErrors": true
    },
    "code": ["**\\*.cs"],
    "resources": "**\\*.resx"
    /* Other stuff... */
}

Manifest Resource Path Changes

If you include .resx files as resources, they correctly get converted to .resources files without doing anything. However, if you have other resources, like an embedded XML file…

{
    "code": ["**\\*.cs"],
    "resources": ["**\\*.resx", "Files\\*.xml"]
}

…then you get an odd generated path. Easiest to see with an example. Say you have this:

~/project/
  src/
    MyAssembly/
      Files/
        Embedded.xml

In old Visual Studio/MSBuild, the file would be embedded and the internal manifest resource stream path would be MyAssembly.Files.Embedded.xml - the folders would represent namespaces and path separators would basically become dots.

However, in the new world, you get a manifest resource path Files/Embedded.xml - literally the relative path to the file being embedded. If you have unit tests or other stuff where embedded files are being read, this will throw you for a loop.
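
Here’s a hypothetical example of the kind of code that breaks. The helper and file names are made up for illustration; the only thing that actually changes is the manifest resource name string:

using System.IO;
using System.Reflection;

internal static class EmbeddedFileReader
{
  public static string ReadEmbeddedXml()
  {
    var assembly = typeof(EmbeddedFileReader).GetTypeInfo().Assembly;

    // Old VS/MSBuild embedding used the namespace-style name:
    // assembly.GetManifestResourceStream("MyAssembly.Files.Embedded.xml");

    // New project.json embedding uses the relative file path instead:
    using (var stream = assembly.GetManifestResourceStream("Files/Embedded.xml"))
    using (var reader = new StreamReader(stream))
    {
      return reader.ReadToEnd();
    }
  }
}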

No .resx to .Designer.cs

A nice thing about the resource system in VS/MSBuild was the custom tool that would run to convert .resx files into strongly-typed resources in .Designer.cs files. There’s no automatic support for this anymore.

However, if you give in to the KoreBuild way of things, they do package an analogous tool inside KoreBuild that you can run as part of your command-line build script. It won’t pick up changes if you add resources to the file in VS, but it’ll get you by.

To get .resx building strongly-typed resources, add it into your build script like this:

var AUTHORS='Your Authors Here'

use-standard-lifecycle
k-standard-goals

#generate-resx .resx description='Converts .resx files to .Designer.cs' target='initialize'

What that does is add a generate-resx build target to your build script that runs during the initialize phase of the standard lifecycle. The generate-resx target depends on a target called resx which does the actual conversion to .Designer.cs files. The resx target comes from KoreBuild and is included when you include the k-standard-goals script, but it doesn’t run by default, which is why you have to include it yourself.

Gotcha: The way it’s currently written, your .resx files must be in the root of your project (it doesn’t use the resources value from project.json). They will generate the .Designer.cs files into the Properties folder of your project. This isn’t configurable.

ASP.NET Repo Structure is Path of Least Resistance

If you give over to Sake and KoreBuild, it’s probably good to also give over to the source repository structure used in the ASP.NET vNext repositories. KoreBuild in particular has hardcoded assumptions in certain tasks that you’re using that repo structure.

The structure looks like this:

~/MyProject/
  src/
    MyProject.FirstAssembly/
      Properties/
        AssemblyInfo.cs
      MyProject.FirstAssembly.kproj
      project.json
    MyProject.SecondAssembly/
      Properties/
        AssemblyInfo.cs
      MyProject.SecondAssembly.kproj
      project.json
  test/
    MyProject.FirstAssembly.Test/
      Properties/
        AssemblyInfo.cs
      MyProject.FirstAssembly.Test.kproj
      project.json
    MyProject.SecondAssembly.Test/
      Properties/
        AssemblyInfo.cs
      MyProject.SecondAssembly.Test.kproj
      project.json
  build.cmd
  build.sh
  global.json
  makefile.shade
  MyProject.sln

The key important bits there are:

  • Project source is in the src folder.
  • Tests for the project are in the test folder.
  • There’s a top-level solution file (if you’re using Visual Studio).
  • The global.json points to the src folder as the place for project source.
  • There are build.cmd and build.sh scripts to kick off the cross-platform builds.
  • The top-level makefile.shade handles build orchestration.
  • The folder names for the source and test projects are the names of the assemblies they generate.
  • Each assembly has…
    • A Properties folder with an AssemblyInfo.cs, where the AssemblyInfo.cs doesn’t include any versioning information, just other metadata.
    • A .kproj file (if you’re using Visual Studio) that is named after the assembly being generated.
    • A project.json that spells out the authors, version, dependencies, and other metadata about the assembly being generated.

Again, a lot of assumptions seem to be built in that you’re using that structure. You can save a lot of headaches by switching.

I can see this may cause some long-path problems. Particularly if you are checking out code into a deep file folder and have a long assembly name, you could have trouble. Think…

C:\users\myusername\Documents\GitHub\project\src\MyProject.MyAssembly.SubNamespace1.SubNamespace2\MyProject.MyAssembly.SubNamespace1.SubNamespace2.kproj

That’s 152 characters right there. Add in those crazy WCF-generated .datasource files and things are going to start exploding.

Assembly/Package Versioning in project.json

Part of what you put in project.json is your project/package version:

{
    "authors": ["Autofac Contributors"],
    "version": "4.0.0-*",
    /* Other stuff... */
}

There doesn’t appear to be a way to keep multiple assemblies in a solution consistently versioned. That is, you can’t put the version info in the global.json at the top level and I’m not sure where else you could store it. You could probably come up with a custom build task to handle centralized versioning, but it’d be nice if there was something built in for it.

XML Doc Compilation Warnings

The old compiler csc.exe had a thing where it would automatically output compiler warnings for XML documentation errors (syntax or reference errors). The K compiler apparently doesn’t do this by default so they added custom support for it in the KoreBuild package.

To get XML documentation compilation warnings output in your build, add it into your build script like this:

var AUTHORS='Your Authors Here'

use-standard-lifecycle
k-standard-goals

#xml-docs-test .clean .build-compile description='Check generated XML documentation files for errors' target='test'
  k-xml-docs-test

That adds a new xml-docs-test target that runs during the test part of the lifecycle (after compile). It requires the project to have been cleaned and built before running. When it runs, it calls the k-xml-docs-test target to manually write out XML doc compilation warnings.

Runtime Update Gotchas

Most build.cmd or build.sh build scripts have lines like this:

CALL packages\KoreBuild\build\kvm upgrade -runtime CLR -x86
CALL packages\KoreBuild\build\kvm install default -runtime CoreCLR -x86

Basically:

  • Get the latest K runtime from the feed.
  • Set the latest K runtime as the ‘default’ one to use.

While I think this is fine early on, I can see a couple of gotchas with this approach.

  • Setting the ‘default’ modifies the user profile. When you call kvm install default the intent is to set the alias default to refer to the specified K runtime version (in the above example, that’s the latest version). When you set this alias, it modifies a file attached to the user profile containing the list of aliases - it’s a global change. What happens if you have a build server environment where lots of builds are running in parallel? You’re going to get the build processes changing aliases out from under each other.
  • How does backward compatibility work? At this early stage, I do want the latest runtime to be what I build against. Later, though, I’m guessing I want to pin a revision of the runtime in my build script and always build against that to ensure I’m compatible with applications stuck at that runtime version. I guess that’s OK, but is there going to be a need for some sort of… “binding redirect” (?) for runtime versions? Do I need to specify some sort of “list of supported runtime versions?”

Testing Means XUnit and aspnet50

At least at this early stage, XUnit seems to be the only game in town for unit testing. The KoreBuild stuff even has XUnit support built right in, so, again, path of least resistance is to switch if you’re not already on it.

I did find a gotcha, though, where if you want k test to work your assemblies must target aspnet50.

Which is to say… in your unit test project.json you’ll have a line to specify the test runner command:

{
    "commands": {
        "test": "xunit.runner.kre"
    },
    "frameworks": {
        "aspnet50": { }
    }
}

Specifying that will allow you to drop to a command prompt inside the unit test assembly’s folder and run k test to execute the unit tests.

In early work for Autofac.Configuration I was trying to get this to work with the Autofac.Configuration assembly only targeting aspnetcore50 and the unit test assembly targeting aspnetcore50. When I ran k test I got a bunch of exceptions (which I didn’t keep track of, sorry). After a lot of trial and error, I found that if both my assembly under test (Autofac.Configuration) and my unit test assembly (Autofac.Configuration.Test) both targeted aspnet50 then everything would run perfectly.

PCL Support is In Progress

It’d be nice if there was a portable class library profile that just handled everything rather than all of these different profiles + aspnet50 + aspnetcore50. There’s not. I gather from Twitter conversations that this may be in the works but I’m not holding my breath.

Also, there’s a gotcha with Xamarin tools: If you’re using a profile (like Profile259) that targets a common subset of a lot of runtimes including mobile platforms, then the output of your project will change based on whether or not you have Xamarin tools installed. For example, without Xamarin installed you might get .nupkg output for portable-net45+win+wpa81+wp80. However, with Xamarin installed that same project will output for portable-net45+win+wpa81+wp80+monotouch+monoandroid.

Configuration Changes

Obviously with the break from System.Web and some of the monolithic framework, you don’t really have web.config as such anymore. Instead, the configuration system has become Microsoft.Framework.ConfigurationModel.

It’s a pretty nice and flexible abstraction layer that lets you specify configuration in XML, JSON, INI, or environment variable format. You can see some examples here.
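
As a rough sketch of what consuming it looks like, based on my reading of the Microsoft.Framework.ConfigurationModel bits as they stand today (the API is still moving, so treat this as illustrative), you compose sources and read values back by colon-delimited keys:

using Microsoft.Framework.ConfigurationModel;

public static class ConfigurationExample
{
  public static string GetConnectionString()
  {
    // Sources are composed; later sources can override earlier ones.
    var config = new Configuration();
    config.AddJsonFile("config.json");
    config.AddEnvironmentVariables();

    // Values come back as strings keyed by a colon-delimited path
    // rather than as strongly-typed ConfigurationElement objects.
    return config.Get("Data:DefaultConnection:ConnectionString");
  }
}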

That said, it’s a huge change and takes a lot to migrate.

  • No appSettings. I’m fine with this because appSettings always ended up being a dumping ground, but it means everything you originally had tied to appSettings needs to change.
  • No ConfigurationElement objects. I can’t tell you how much I have written in the old ConfigurationElement mechanism. It had validation, strong type parsing, serialization, the whole bit. All of that will not work in this new system. You can imagine how this affects things like Autofac.Configuration.
  • List and collection support is nonexistent. I’ve actually filed a GitHub issue about this. A lot of the configuration I have in both Autofac.Configuration and elsewhere is a list of elements that are parameterized. The current XML and JSON parsers for config specifically disallow list/collection support. Everything must be a unique key in a tree-like hierarchy. That sort of renders the new config system, at least for me, pretty much unusable except for the most trivial of things. Hopefully this changes.
  • Everything is file or in-memory. There’s no current support for pulling in XML or JSON configuration that comes from, say, a REST API call from a centralized repository. Even in unit testing, all the bits that actually run the configuration parsing on a stream of XML/JSON are internals rather than exposed - you have to load config from a file or manually create it yourself in memory by adding key/value pairs. There’s a GitHub issue open for this, too.

As a workaround, I’m considering using custom object serialization and bypassing the new configuration system altogether. I like the flexibility of the new system but the limitations are pretty overwhelming right now.

android

A couple of years back I bought some Samsung TecTiles for use with my Galaxy S3. I created a tag that would easily switch my phone to vibrate mode at work - get to work, scan it, magic.

Within the last couple of weeks I upgraded to a Galaxy Note 4 and when I tried to use the Note 4 on my TecTile I got a message saying the tag type wasn’t supported.

A little research revealed that the TecTiles I bought are “MIFARE Classic” format, which are apparently not universally compatible. So… crap. If I want to mess around with NFC, I’m going to need to get some different tags. These TecTiles can’t be read by any device I have in my home anymore.