personal, culture

There has been a lot of push lately for people to learn to code. From Hour of Code to the President of the United States pushing for more coders, the movement towards everyone coding is on.

What gets lost in the hype, drowned out by the fervor of people everywhere jamming keys on keyboards, is that simply being able to code is not software development.

OK, sure, technically speaking when you write code that executes a task you have just developed a piece of software. Also, technically speaking, when you fumble out Chopsticks on the keyboard while walking through Costco you just played the piano. That doesn’t make you a pianist any more than taking an hour to learn to code makes you a software developer.

Here’s where my unpopular opinion comes out. Here’s where I call out the elephant in the room and the politically-correct majority gasp at how I can be so unencouraging to these folks learning to code.

Software development is an art, not a science.

Not everyone can be a software developer in the same way not everyone can be a pianist, a painter, or a sculptor. Anyone can learn to play the piano well; anyone can learn to paint or sculpt reasonably. That doesn’t mean just anyone can make a living doing these things.

It’s been said that if you spend 10,000 hours practicing something, you can become great at it. 10,000 hours is basically five years of a full-time job. So, ostensibly, if you spent five years coding full-time, you’d be a developer.

However, we’ve all heard that argument about experience: Have you had 20 years of experience? Or one year of experience 20 times? Does spending 10,000 hours coding make you a developer? Or does it just mean you spent a lot of time coding?

Regardless of your time in any field you have probably run across both of these people - the ones who really have 20 years’ experience and the ones who have been working for 20 years and you wonder how they’ve advanced so far in their careers.

I say that to be a good developer - or a good artist - you need three things: skills, aptitude, and passion.

Skills are the rote abilities you learn when you start with that Hour of Code or first take a class on coding. Pretty much anyone can learn a certain level of skill in nearly any field. It’s learned ability that takes brainpower and dedication.

Aptitude is a fuzzier quality meaning your natural ability to do something. This is where the “art” part of development starts coming in. You may have learned the skills to code, but do you have any sort of natural ability to perform those skills?

Passion is your enthusiasm - in this case, the strong desire to execute the skills you have and continue to improve on them. This is also part of the “art” of development. You might be really good at jamming out code, but if you don’t like doing it you probably won’t come up with the best solutions to the problems with which you’re faced.

Without all three, you may be able to code but you won’t really be a developer.

A personal anecdote to help make this a bit more concrete: When I went to college, I told my advisors that I really wanted to be a 3D graphics animator/modeler. My dream job was (and still kind of is) working for Industrial Light and Magic on special effects. As a college kid, I didn’t know any better, so when the advisors said I should get a Computer Science degree, I did. Only later did I find out that wouldn’t get me into ILM or Pixar. Why? In their opinion (at the time, in my rejection letters), “you can teach computer science to an artist but you can’t teach art to a computer scientist.”

The first interesting thing I find there is that, at least at the time, the thought there was that art “isn’t teachable.” For the most part, I agree - without the skills, aptitude, and passion for art, you’re not going to be a really great artist.

The more interesting thing I find is the lack of recognition that solving computer science problems, in itself, is an art.

If you’ve dived into code, you’re sure to have seen this, though maybe you didn’t realize it.

  • Have you ever seen a really tough problem solved in an amazingly elegant way that you’d never have thought of yourself? What about the converse - a really tough problem solved in such a brute force manner that you can’t imagine why that’s good?
  • Have you ever picked up someone else’s code and found that it’s entirely unreadable? If you hand someone else your code, can they make heads or tails of it? What about code that was so clearly written you didn’t even need any comments to understand how it worked?
  • Have you ever seen code that’s so deep and unnecessarily complicated that if anything went wrong with it you could never fix it? What about code that’s so clear you could easily fix anything with it if a problem was discovered?

We’ve all seen this stuff. We’ve all written this stuff. I know I have… and still do. Sometimes we even laugh about it.

The important part is that those three factors - skill, aptitude, and passion - work together to improve us as developers.

I don’t laugh at a beginner’s code because their skills aren’t there yet. However, their aptitude and passion may help to motivate them to raise their skill level, which will make them overall better at what they do.

The art of software development isn’t about the quantity of code churned out, it’s about quality. It’s about constant improvement. It’s about change. These are the unquantifiable things that separate the coders from the developers.

Every artist constantly improves. I’m constantly improving, and I hope you are, too. It’s the artistic aspect of software development that drives us to do so, to solve the problems we’re faced with. Don’t just be a software developer, be a software artist. And be the best artist you can be.

dotnet, aspnet

Here’s the situation:

  • I have a .NET Core / ASP.NET Core (DNX) web app. (Currently it’s an RC1 app.)
  • When I start it in Visual Studio, I get IIS Express listening for requests and handing off to DNX.
  • When I start the app from a command line, I want the same experience as VS - IIS Express listening and handing off to DNX.

Now, I know I can just run dnx web and get Kestrel working from a simple self-host perspective, but I really want IIS Express here. Searching around, I’m not the only one who does, though everyone’s reasons are different.

Since the change to the IIS hosting model you can’t really do the thing that the ASP.NET Music Store was doing where you copy the AspNet.Loader.dll to your bin folder and have magic happen when you start IIS Express.

When Visual Studio starts up your application, it actually creates an all-new applicationhost.config file with some special entries that allow things to work. I’m going to tell you how to update your per-user IIS Express applicationhost.config file so things can work outside VS just like they do inside.

There are three pieces to this:

  1. Update your applicationhost.config (one time) to add the httpPlatformHandler module so IIS Express can “proxy” to DNX.
  2. Use appcmd.exe to register your applications with IIS Express.
  3. Set environment variables and start IIS Express using the application names you configured with appcmd.exe.

Let’s walk through each step.

applicationhost.config Updates

Before you can host DNX apps in IIS Express, you need to update your default IIS Express applicationhost.config to know about the httpPlatformHandler module that DNX uses to start up its child process.

You only have to do this one time. Once you have it in place, you’re good to go and can just configure your apps as needed.

To update the applicationhost.config file I used the XML transform mechanism you see in web.config transforms - those web.Debug.config and web.Release.config deals. However, I didn’t want to go through MSBuild for it so I did it in PowerShell.

First, save this file as applicationhost.dnx.xml - this is the set of transforms for applicationhost.config that the PowerShell script will use.

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <configSections>
        <sectionGroup name="system.webServer"
                      xdt:Locator="Match(name)">
            <section name="httpPlatform"
                     overrideModeDefault="Allow"
                     xdt:Locator="Match(name)"
                     xdt:Transform="InsertIfMissing" />
        </sectionGroup>
    </configSections>
    <location path=""
              xdt:Locator="Match(path)">
        <system.webServer>
            <modules>
                <add name="httpPlatformHandler"
                     xdt:Locator="Match(name)"
                     xdt:Transform="InsertIfMissing" />
            </modules>
        </system.webServer>
    </location>
    <system.webServer>
        <globalModules>
            <add name="httpPlatformHandler"
                 image="C:\Program Files (x86)\Microsoft Web Tools\HttpPlatformHandler\HttpPlatformHandler.dll"
                 xdt:Locator="Match(name)"
                 xdt:Transform="InsertIfMissing" />
        </globalModules>
    </system.webServer>
</configuration>

I have it structured so you can run it over and over without corrupting the configuration - so if you forget and accidentally run the transform twice, don’t worry, it’s cool.

Here’s the PowerShell script you’ll use to run the transform. Save this as Merge.ps1 in the same folder as applicationhost.dnx.xml:

function script:Merge-XmlConfigurationTransform
{
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$True)]
        [ValidateNotNullOrEmpty()]
        [String]
        $SourceFile,

        [Parameter(Mandatory=$True)]
        [ValidateNotNullOrEmpty()]
        [String]
        $TransformFile,

        [Parameter(Mandatory=$True)]
        [ValidateNotNullOrEmpty()]
        [String]
        $OutputFile
    )

    Add-Type -Path "${env:ProgramFiles(x86)}\MSBuild\Microsoft\VisualStudio\v14.0\Web\Microsoft.Web.XmlTransform.dll"

    $transformableDocument = New-Object 'Microsoft.Web.XmlTransform.XmlTransformableDocument'
    $xmlTransformation = New-Object 'Microsoft.Web.XmlTransform.XmlTransformation' -ArgumentList "$TransformFile"

    try
    {
        $transformableDocument.PreserveWhitespace = $false
        $transformableDocument.Load($SourceFile) | Out-Null
        $xmlTransformation.Apply($transformableDocument) | Out-Null
        $transformableDocument.Save($OutputFile) | Out-Null
    }
    finally
    {
        $transformableDocument.Dispose();
        $xmlTransformation.Dispose();
    }
}

$script:ApplicationHostConfig = Join-Path -Path ([System.Environment]::GetFolderPath([System.Environment+SpecialFolder]::MyDocuments)) -ChildPath "IISExpress\config\applicationhost.config"
Merge-XmlConfigurationTransform -SourceFile $script:ApplicationHostConfig -TransformFile (Join-Path -Path $PSScriptRoot -ChildPath applicationhost.dnx.xml) -OutputFile "$($script:ApplicationHostConfig).tmp"
Move-Item -Path "$($script:ApplicationHostConfig).tmp" -Destination $script:ApplicationHostConfig -Force

Run that script and transform your applicationhost.config.
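Assuming both files are in the same folder, running the transform looks like this (the execution policy bypass is only needed if your machine restricts unsigned scripts):

# Run from the folder containing Merge.ps1 and applicationhost.dnx.xml.
powershell.exe -ExecutionPolicy Bypass -File .\Merge.ps1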

Note that the HttpPlatformHandler isn’t actually a DNX-specific thing. It’s an IIS 8+ module that can be used for any sort of proxying/process management situation. However, it doesn’t come set up by default on IIS Express so this adds it in.

Now you’re set for the next step.

Configure Apps with IIS Express

I know you can run IIS Express with a bunch of command line parameters, and if you want to do that, go for it. However, it’s just a lot easier if you set it up as an app within IIS Express so you can launch it by name.

Set up applications pointing to the wwwroot folder.

A simple command to set up an application looks like this:

"C:\Program Files (x86)\IIS Express\appcmd.exe" add app /site.name:"MyApplication" /path:/ /physicalPath:C:\some\folder\src\MyApplication\wwwroot

Whether you use the command line parameters to launch every time or set up your app like this, make sure the path points to the wwwroot folder.
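If you want to double-check what got registered, or remove an application you added by mistake, appcmd can do that, too. This is just a sketch using the MyApplication example above; appcmd identifies an application by its site name plus application path:

# List the sites and applications IIS Express knows about.
& "${env:ProgramFiles(x86)}\IIS Express\appcmd.exe" list site
& "${env:ProgramFiles(x86)}\IIS Express\appcmd.exe" list app

# Remove the example application if you need to start over (identifier is "SiteName/path").
& "${env:ProgramFiles(x86)}\IIS Express\appcmd.exe" delete app "MyApplication/"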

Set Environment Variables and Start IIS Express

If you look at your web.config file in wwwroot you’ll see something like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <system.webServer>
        <handlers>
            <add name="httpPlatformHandler"
                 path="*"
                 verb="*"
                 modules="httpPlatformHandler"
                 resourceType="Unspecified" />
        </handlers>
        <httpPlatform processPath="%DNX_PATH%"
                      arguments="%DNX_ARGS%"
                      stdoutLogEnabled="false"
                      startupTimeLimit="3600" />
    </system.webServer>
</configuration>

The important bits there are the two variables DNX_PATH and DNX_ARGS.

  • DNX_PATH points to the dnx.exe executable for the runtime you want for your app.
  • DNX_ARGS are the arguments to dnx.exe, as if you were running it on a command line.

A very simple PowerShell script that will launch an IIS Express application looks like this:

$env:DNX_PATH = "$($env:USERPROFILE)\.dnx\runtimes\dnx-clr-win-x86.1.0.0-rc1-update1\bin\dnx.exe"
$env:DNX_ARGS = "-p `"C:\some\folder\src\MyApplication`" web"
Start-Process "${env:ProgramFiles(x86)}\IIS Express\iisexpress.exe" -ArgumentList "/site:MyApplication"

Obviously you’ll want to set the runtime version and paths accordingly, but this is basically the equivalent of running dnx web and having IIS Express use the site settings you configured above as the listening endpoint.

windows, azure, security

I’ve been experimenting with Azure Active Directory Domain Services (currently in preview) and it’s pretty neat. If you have a lot of VMs you’re working with, it helps quite a bit in credential management.

However, it hasn’t all been “fall-down easy.” There are a couple of gotchas I’ve hit that folks may be interested in.

Active Directory Becomes DNS Control for the Domain

When you join an Azure VM to your domain, you have to set the network for that VM to use Azure AD Domain Services as its DNS server. This means any DNS entries for the domain - for machines on that network - are resolved only by Active Directory.

This is clearer with an example: Let’s say you own the domain mycoolapp.com and you enable Azure AD Domain Services for mycoolapp.com. You also have…

  • A VM named webserver.
  • A cloud service responding to mycoolapp.cloudapp.net that’s associated with the VM.

You join webserver to the domain. The full domain name for that machine is now webserver.mycoolapp.com. You want to expose that machine to the outside (outside the domain, outside of Azure) to serve up your new web application. It needs to respond to www.mycoolapp.com.

You can add a public DNS entry mapping www.mycoolapp.com to the mycoolapp.cloudapp.net public IP address. You can now get to www.mycoolapp.com correctly from outside your Azure domain. However, you can’t get to it from inside the domain. Why not?

You can’t because Active Directory is serving DNS inside the domain and there’s no VM named www. It doesn’t proxy external DNS records for the domain, so you’re stuck.
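You can see this split-brain behavior right from a domain-joined VM. This is purely an illustration using the hypothetical names from the example above (Resolve-DnsName ships with Windows 8 / Server 2012 and later):

# Inside the vnet, DNS is served by Azure AD Domain Services. There's no machine
# named 'www' in the domain, so this lookup fails.
Resolve-DnsName -Name www.mycoolapp.com

# Asking a public resolver directly shows the public record is fine - it's only
# the domain's internal DNS that doesn't know about 'www'.
Resolve-DnsName -Name www.mycoolapp.com -Server 8.8.8.8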

There is not currently a way to manage the DNS for your domain within Azure Active Directory.

Workaround: Rename the VM to match the desired external DNS entry. Which is to say, call the VM www instead of webserver. That way you can reach the same machine using the same DNS name both inside and outside the domain.

Unable to Set User Primary Email Address

When you enable Azure AD Domain Services you get the ability to start authenticating against joined VMs using your domain credentials. However, if you try managing users with the standard Active Directory MMC snap-ins, you’ll find some things don’t work.

A key challenge is that you can’t set the primary email address field for a user. It’s totally disabled in the snap-in.

This is really painful if you’re trying to manage a cloud-only domain. Domain Services sort of assumes you’re synchronizing an on-premises AD with the cloud AD and that the workaround would be to change the user’s email address in the on-premises AD. However, if you’re trying to go cloud-only, you’re stuck. There’s no workaround for this.

Domain Services Only Connects to a Single ASM Virtual Network

When you set up Domain Services, you have to associate it with a single virtual network (the vnet your VMs are on), and it must be an Azure Service Management (classic) style network. If you created a vnet with Azure Resource Manager, you’re kinda stuck. If you have ARM VMs you want to join (which must be on ARM vnets), you’re kinda stuck. If you have more than one virtual network on which you want Domain Services, you’re kinda stuck.

Workaround: Join the “primary vnet” (the one associated with Domain Services) to other vnets using VPN gateways.

There is not a clear “step-by-step” guide for how to do this. You need to sort of piece together the information from a few different articles on connecting Azure virtual networks with VPN gateways.

Active Directory Network Ports Need to be Opened

Just attaching Azure AD Domain Services to your vnet and setting it as the DNS server may not be enough. Especially once you start connecting things through VPN, you need to make sure the right ports are open in the network security group or you won’t be able to join the domain (or you may be able to join but not authenticate).

Microsoft documents the full list of ports required by Domain Services. That’s not to say you need all of them open, just that you’ll want it for reference.

I found that enabling these ports outbound for the network seemed to cover joining and authenticating against the domain. YMMV. There is no specific guidance (that I’ve found) to explain exactly what’s required.

  • LDAP: Any/389
  • LDAP SSL: TCP/636
  • DNS: Any/53
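If your network security group blocks outbound traffic by default, something like this would open those ports. This is only a sketch: it assumes the classic (ASM) Azure PowerShell module and a hypothetical NSG named MyVnetNsg attached to the Domain Services vnet, so treat the names, priorities, and prefixes as placeholders.

# Sketch only - classic (ASM) Azure PowerShell module; the NSG name, priorities,
# and address prefixes are placeholders for your own environment.
$nsg = Get-AzureNetworkSecurityGroup -Name "MyVnetNsg"

# LDAP (389) - the list above says "Any", so allow all protocols with '*'.
$nsg | Set-AzureNetworkSecurityRule -Name "Allow-LDAP-Out" -Type Outbound -Priority 200 `
    -Action Allow -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "389" -Protocol "*"

# LDAP over SSL (636) - TCP only.
$nsg | Set-AzureNetworkSecurityRule -Name "Allow-LDAPS-Out" -Type Outbound -Priority 210 `
    -Action Allow -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "636" -Protocol "TCP"

# DNS (53) - again "Any", so '*' for the protocol.
$nsg | Set-AzureNetworkSecurityRule -Name "Allow-DNS-Out" -Type Outbound -Priority 220 `
    -Action Allow -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "53" -Protocol "*"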

personal, gaming, toys, xbox

This year for Christmas, Jenn and I decided to get a larger “joint gift” for each other since neither of us really needed anything. That gift ended up being an Xbox One (the Halo 5 bundle), the LEGO Dimensions starter pack, and a few expansion packs.

LEGO Dimensions Starter Pack

Never having played one of these collectible toy games before, I wasn’t entirely sure what to expect beyond similar gameplay to other LEGO video games. We like the other LEGO games so it seemed like an easy win.

LEGO Dimensions is super fun. If you like the other LEGO games, you’ll like this one.

The story is, basically, that a master bad guy is gathering up all the other bad guys from the other LEGO worlds (which come from the licensed LEGO properties like Portal, DC Comics, Lord of the Rings, and so on). Your job is to stop him from taking over these “dimensions” (each licensed property is a “dimension”) by visiting the various dimensions and saving people or gathering special artifacts.

With the starter pack you get Batman, Gandalf, and Wildstyle characters with which you can play the game. These characters will allow you to beat the main story.

So why get expansion packs?

  • There are additional dimensions you can visit that you can’t get to without characters from that dimension. For example, while the main game lets you play through a Doctor Who level, you can’t visit the other Doctor Who levels unless you buy the associated expansion pack.
  • As with the other LEGO games, you can’t unlock certain hidden areas or collectibles unless you have special skills. For example, only certain characters have the ability to destroy metal LEGO bricks. With previous LEGO games you could unlock these characters by beating levels; with LEGO Dimensions you unlock characters by buying the expansion packs.

Picking the right packs to get the best bang for your buck is hard. IGN has a good page outlining the various character abilities, which pack contains each, and some recommendations on which ones will get you the most if you’re starting fresh.

The packs Jenn and I have (after getting some for Christmas and grabbing a couple of extras) are:

  • Portal level pack
  • Back to the Future level pack
  • Emmet fun pack
  • Zane fun pack
  • Gollum fun pack
  • Eris fun pack
  • Wizard of Oz Wicked Witch fun pack
  • Doctor Who level pack
  • Unikitty fun pack

Admittedly, this is a heck of an investment in a game. We’re suckers. We know.

This particular combination of packs unlocks just about everything. There are still things we can’t get to - levels we can’t enter, a few hidden things we can’t reach - but this is a good 90%. Most of the stuff we can’t get to is locked behind abilities that only a single character has. For example, Aquaman (for whatever reason) seems to have one or two unique abilities we’ve run across the need for. Unikitty is also a character with unique abilities (which is why we ended up getting that pack). As you purchase packs, I’d encourage you to keep consulting the character ability matrix to determine which ones will best help you.

I have to say… There’s a huge satisfaction in flying the TARDIS around or getting the Twelfth Doctor driving around in the DeLorean. It may make that $15 or whatever worth it.

If you’re a LEGO fan anyway, the packs actually include minifigs and models that are detachable - you can play with them with other standard LEGO sets once you get tired of the video game. It’s a nice dual-purpose that other collectible games don’t provide.

Finally, it’s something fun Jenn and I can play together to do something more interactive than just watch TV. I don’t mind investing in that.

In any case, if you’re looking at one of the collectible toy games, I’d recommend LEGO Dimensions. We’re having a blast with it.

personal

It’s been a busy year, and in particular a pretty crazy last three months, so I’m rounding out my 2015 by finally using up my paid time off at work and effectively taking December off.

What that means is I probably won’t be seen on StackOverflow or handling Autofac issues or working on the Autofac ASP.NET 5 conversion.

I love coding, but I also have a couple of challenges if I do that on my time off:

  • I stress out. I’m not sure how other people work, but when I see questions and issues come in I feel like there’s a need I’m not addressing or a problem I need to solve, somehow, immediately right now. Even if that just serves as a backlog of things to prioritize, it’s one more thing on the list of things I’m not checking off. I want to help people and I want to provide great stuff with Autofac and the other projects I work on, but there is a non-zero amount of stress involved with that. It can pretty quickly turn from “good, motivating stress” to “bad, overwhelming stress.” It’s something I work on from a personal perspective, but taking a break from that helps me regain some focus.
  • I lose time. There are so many things I want to do that I don’t have time for. I like sewing and making physical things - something I don’t really get a chance to do in the software world. If I sit down and start coding stuff, pretty soon the day is gone and I may have made some interesting progress on a code-related project, but I just lost a day I could have addressed some of the other things I want to do. Since I code for a living (and am lucky enough to be able to get Autofac time in as part of work), I try to avoid doing much coding on my time off unless it’s helping me contribute to my other hobbies. (For example, I just got an embroidery machine - I may code to create custom embroidery patterns.)

I don’t really take vacation time during the year so I often end up in a “use it or lose it” situation come December, which works out well because there are a ton of holidays to work around anyway. Why not de-stress, unwind, and take the whole month off?

I may even get some time to outline some of the blog entries I’ve been meaning to post. I’ve been working on some cool stuff from Azure to Roslyn code analyzers, not to mention the challenges we’ve run into with Autofac/ASP.NET 5. I’ve just been slammed enough that I haven’t been able to get those out. We’ll see. I should at least start keeping a list.