costumes, personal, halloween

For Halloween 2019 I decided to make a Star Trek II “Monster Maroon” costume. This is the dress uniform you see folks wearing in the Star Trek II through VI movies.

October 26, 2019: The complete costume (front)

I’ve wanted one for quite some time, so I went for it. I figured folks might be interested in some of the process to get this done.

Research

First, I did a lot of research. I track my research in a Google Doc and I have a folder where I save relevant images and things that I find online that can help. This takes a lot of time: searching, finding images, following links, reading forum posts, looking at movie frames for reference.

I determined that my complete costume would have:

  • Jacket
  • Pants
  • Shirt (the white puffy neck/arm one)
  • Chest insignia
  • Shoulder strap rank pin
  • Shoulder strap security device
  • Left sleeve rank pin
  • Left sleeve “pips and squeaks”
  • Belt
  • Boots
  • Phaser

I figured I’d want a phaser but I didn’t need a communicator. Too many handheld props and nowhere to put them.

I decided this broke down as follows:

  • Items to sew:
    • Jacket
    • Pants
    • Shirt
    • Belt
  • Buy or 3D print:
    • Shoulder strap rank pin
    • Shoulder strap security device
    • Left sleeve rank pin
    • Left sleeve “pips and squeaks”
    • Belt buckle
    • Phaser

I skipped the boots because I already had some I planned on reusing. Not perfect, but good enough, and this already looked like a lot of work.

The Anovos version of this costume proved invaluable from a reference photo standpoint. It’s not “screen worn” but it does show how things should roughly line up and helped a lot.

This forum also has some great photos and info that I used while figuring things out.

Cost Breakdown

The probably-incomplete itemized parts list so I can scare myself when I see what got spent:

Item | Cost
---- | ----
Jacket pattern | $24.95
Pants pattern | $18.95
Shirt pattern | $14.95
Shoulder strap clasp | $7.95
S&H for patterns and clasp | $12.95
Fabric swatches for jacket, pants | $11.75
Burgundy gabardine, 4 yd (jacket, pants stripes) | $26.21
White gabardine (2.75 yd) | $9.99
Black gabardine (4 yd) | $15.98
Ivory four-way stretch fabric (shirt, 5 yd) | $59.97
Thread | $25.95
Black bias tape | $15.54
Gold bias tape | $2.79
Interfacing | $1.74
Muslin | $22.71
Batting (for shirt and jacket puffy sleeves) | $4.97
Snaps | $1.99
Stitch Witchery | $2.49
Invisible zipper for shirt | $2.99
Invisible zipper for pants | $4.99
Shoulder pads | $4.49
Silver chain | $6.38
Spandex | $23.78
Knit ribbing for pants cuff | $5.99
Black broadcloth (pocket interior, 0.5 yd) | $1.00
Black lining (1.5 yd) | $5.99
Belt buckle (eBay) | $20.99
2” elastic (pants waist, 2 yd) | $5.99
Gold soutache cord | $7.48
Spray paint | $50.00
TOTAL | $421.90

To soften the blow, I have to consider that I have some fabric left over and I didn’t use all of the thread or bias tape, so I have some I can use on other projects. I also had a few things already that I didn’t have to buy, like white thread, so it ends up kind of evening out.

The shirt fabric would have been half that price but I hosed it up and had to make it twice. Yeah - I’ll get to that.

Unless otherwise listed, the vast majority of this money went to Joann Fabric and Crafts. I didn’t break out how much I “saved” using coupons or whatever, so it’s not precise. And, of course, there’s stuff you don’t think about - the black bias tape finishes all the seams on the inside, so I used way more than I thought I’d need; I really screwed things up a couple of times and had to remake a couple of pieces, so I used more fabric than I thought. That sort of thing.

I didn’t count my time as something that costs, but there was a lot of time here. Calendar time isn’t equal to effort time, though - I work on these things mostly for a few hours on the weekend or on a night or two each week after work. Getting that “me time” is really helpful for me psychologically and I think it makes me less generally cranky. But I don’t know how much “effort time” went in here.

Finally, the patterns - these are as close to screen accurate as you can find. They’re built from real screen-worn costumes as much as possible. However, the instructions are terrible. I don’t care if you’ve done one of the “advanced” patterns from Vogue - this isn’t what you’re used to. There are missing steps, duplicate steps, one small paragraph where you really need a lot… there’s one step on the pants which is basically, “OK, now put in the waistband.” No photos, no description. It’s like someone handing you an Ikea cabinet kit with nothing labeled - just sorta figure it out! This caused a lot of difficulty. Now that I’ve done it once, I see how it should go, so I could do it again if needed. I also see how to make it better.

The Phaser

I couldn’t find an affordable phaser and Thingiverse didn’t have quite what I wanted, so I ended up having to make my own using Autodesk Fusion 360. I used reference photos and tried to trace the shape as close as possible.

My first draft looked OK, but was pretty plain.

June 10, 2019: First draft of my phaser

I started adding some details, moving some things around.

June 17, 2019: Iterating over the phaser

After a month or so, in mid-June I finally got something I was happy with and it looked pretty good painted up. If you’d like to make one, I put the model on Thingiverse for free.

July 1, 2019: Final version of the phaser

The Security Device

I followed a similar process for the security device as I did for the phaser. Research tells me the actual screen-used one is a kit-bashed tank wheel from some plastic model kit. I used photos of the screen-used props as well as a commercially available replica to create my model. If you want to make one of these, I put the model on Thingiverse for free.

The security device was done about a month after the phaser at the end of July.

July 26, 2019: Iterating over the security device

The Shirt

I started the shirt around the same time as the security device (early July), right after the phaser was finished. I made a muslin version first to determine the right size, and while the size I picked fit fine around me, it turned out the shirt wasn’t long enough for my torso. I moved too quickly with the muslin version to realize I had forgotten to hem it, which shortens things up by an inch or so. Dammit. I had to throw out the whole shirt and start over because you can’t “add length” back to the bottom of a shirt.

I also remade the sleeves and neck several times to try to get the “puffiness” right. Luckily these are separate pieces, so I noticed they weren’t right before they were attached and saved myself having to remake the shirt even more times.

One of the items in the parts list is Stitch Witchery. This is sort of like “fabric tape” - you put it between two pieces of fabric, iron them, and they stick together. That’s how the hem in the shirt works and looks “stitch-free.”

It took about a month (end of July) to get the shirt done.

July 28, 2019: The finished shirt

The Belt

Making the belt was actually pretty easy. I had some vinyl left over from a different costume, so I wrapped the vinyl around some batting and stitched it up. I think this took maybe an hour.

However, the belt buckle was a struggle. I really wanted to 3D print one since I was so deep into making things, but I found that the PLA plastic can’t hold up to the strain of keeping a belt together. The little hook that keeps the belt closed simply breaks, even if it’s solid PLA. I ended up getting a metal belt buckle from eBay and calling it good.

August 2, 2019: The finished belt

The Pants

I started with muslin for the pants which saved a lot of effort. I found that the size I thought I’d need was actually just a little too small.

The pants were my first run-in with how problematic the pattern instructions really are. There are no pattern markings, so when you’re trying to put in pockets which are “optional” you really don’t know how that’s supposed to go. The leg stripes ended up getting made a couple of times because the description of how to make them didn’t actually yield a result that matched the required measurements.

August 24, 2019: Sewing stripes for the pants

The waistband is my biggest gripe in all of this. There are fully four different ways you could make the waistband, each of which is a single paragraph that doesn’t explain enough about what has to happen, and there are no pictures. I sort of muscled something functional into place and I’m absolutely not happy with it. But I decided to skip remaking them (for now) in favor of getting the rest of the costume done.

It took another month (end of August) to get the pants done.

August 25, 2019: Finished shirt and pants

The Jacket

I also started the jacket with a muslin version and, knowing what I know about how short the shirt turned out, decided to add some length both to the arms and the torso.

As noted in the pants section, there are no pattern markings at all, so figuring out where to lengthen pieces (as well as which pieces need to be lengthened) was entirely manual. The pattern also assumes you know how the whole thing comes together, which I only sort of did. I guessed pretty well and ended up only having to re-cut one piece.

September 6, 2019: Cutting pieces for the jacket

By the end of the first week of September I had all the jacket pieces cut.

September 8, 2019: All jacket pieces cut

The black piping that runs down the sides of the back is all handmade by wrapping cord with bias tape.

September 10, 2019: Starting the jacket piping

Once it’s done, it looks pretty good.

September 14, 2019: The jacket piping is done

There’s some quilting that gets done on the end of each sleeve. My sewing machine doesn’t have one-inch markings up to the four inches required so I used some painter’s tape to make some temporary lines.

September 15, 2019: Quilting the sleeve ends

There’s a white stripe that goes on the left sleeve. This was another instructions challenge. The instructions say to assemble this with Stitch Witchery and attach it to the sleeve.

Problem 1: The stripe has batting in it to make it puffy. You can’t iron batting or it flattens out. Stitch Witchery needs to be ironed on high to adhere. Soooooo that’s not going to work.

Problem 2: The stripe pattern only gives you about 0.25” - 0.5” of working length on each end of the stripe. You rip open the seam on the back of the sleeve, insert one end of the stripe, wrap it around the sleeve (which is all puffy and quilted), and insert the other end of the stripe. With that small amount of working length, it doesn’t really work.

I ended up creating a “tube” of white cloth, using Stitch Witchery to adhere the black and gold bias tape on the sides, then taking the puffy batting and sliding it into the cloth tube. Totally not the way the directions explain it (or roughly diagram it) but the only way I could get it to work.

September 18, 2019: Making the left sleeve stripe

After all the stripes and quilting, the sleeves were done.

September 21, 2019: Sleeves are done (front view)

September 21, 2019: Sleeves are done (back view)

The next part is the front facings - those white bits that are on the inside of the jacket.

September 22, 2019: Starting the front jacket facings

The gabardine frays really badly on the edges, so you can’t see how cool it really is until it’s finished with the bias tape.

All the chain detail and snaps to hold the front together are hand sewn.

September 22, 2019: Stitching chain details to the facing edge

Once all that work is done, the jacket flap is complete! I added a snap close to the left armpit to hold the right side of the jacket in place as you close it. It seemed to be hard to keep in place otherwise; the pattern didn’t call for any of that, though.

September 28, 2019: The front jacket flap is complete

The shoulder strap is another “tube filled with batting” situation. Once you have that done, gold soutache gets sewn to the edges and the clasp goes on.

September 29, 2019: Making the shoulder strap

Looks pretty good once it’s attached.

September 29, 2019: The shoulder strap attached

There’s very little description in the pattern of how to put the lining in. There’s actually a whole step missing - it seems to assume the lining gets inserted “somewhere around here” but there’s no explanation of how it gets attached.

There’s no pattern requirement for shoulder pads, but the shoulders didn’t sit right without them so I added some. I can’t imagine there weren’t shoulder pads in the screen-worn costumes.

Finally, because the gabardine frays so horribly, I finished it off using a Hong Kong seam with bias tape. I think it looks really good - finished without a full lining.

October 3, 2019: Jacket lining is complete

After the inside was done, it was time to attach pins, starting with the shoulder strap security device.

October 6, 2019: Security device attached to the shoulder strap

I made these “pips and squeaks” in a couple of hours on the 3D printer based on some reference measurements and photos. If you want to make these, I put the models on Thingiverse for free.

They’re placed in the same manner as Kirk had them in Star Trek II. However, you may also notice my rank pins and the rest of the uniform are Captain, not Admiral, while Kirk was an Admiral in that movie. I’m a little Spock and a little Kirk here. It’s not intended to be an exact replica of either.

October 7, 2019: Pips and squeaks attached to the sleeve

Overall, the jacket ran from early September to early October, so about a month.

Chest Badge

The chest badge gave me a lot of grief and was where a lot of paint and time went.

I found a pretty good Thingiverse model for the badge and figured I’d “just need to paint it.”

I learned a lot about painting here.

  • Don’t try to dilute acrylic enamel with water if you’re hand-painting it. It’ll leave air bubbles and look really lumpy.
  • Reflective metallic spray paint is really hard to work with. It can take days to set, and even then may still leave fingerprints you can’t buff out if you touch it.
  • You can’t really coat a metallic spray paint with anything. Polyurethane causes the paint to get darker because it changes the reflective properties. Lacquer doesn’t change the color but dulls it a lot.
  • Frog Tape is amazing masking tape. It’s expensive but worth it.

What I did was paint the inside bits - ivory and gray - then mask those parts off for the final metallic coat.

October 17, 2019: Masking the chest badges for painting

As you can see, I did several tests. The top one is a glittery but not too shiny paint and hand-painted insides. It’s not too smooth, but the paint doesn’t leave fingerprints. The second one has an airbrushed interior but I tried covering the reflective gold with a clear coat. The third one is an airbrushed interior with that original not-too-shiny metallic paint. The fourth one is the one I ended up using and just choosing to “not touch it” - airbrushed interior, reflective metal paint.

By the time I’d dealt with all this, I realized I probably should have just bought the damn badge for $20 and called it good. I did that with my rank pins. I dunno. I guess it turned into a mission.

October 19, 2019: Iterating over the chest badges

Final Product

When all was said and done, from a calendar perspective I spent four-and-a-half months on this (early June to mid-October) but I think it turned out fantastic.

October 26, 2019: The complete costume (front)

October 26, 2019: The complete costume (back)

autofac, dotnet

Back in April 2018 I posted a request for help owning some of the Autofac extension packages.

As I mentioned then, Autofac has effectively two owners: me and Alex Meyer-Gleaves. We maintain core Autofac along with the 20+ extension packages that integrate with different application types (ASP.NET Core, WCF, web forms, and more) as well as feature support packages (configuration, multitenancy). We put out the call for owners to help lighten the ever-growing load.

Since then, we’ve received a small handful of pull requests (thanks to the folks who submitted!) and, unfortunately, no takers to help out on ownership.

When it comes to pull requests, we generally get one of two flavors:

  • Very small fixes - between one and five lines, something that corrects a small error condition or fixes a documentation error.
  • Incredibly large changes - adjusting the way memory gets allocated, changing the way the container gets built, that sort of thing.

In the case of the small changes, these aren’t hard to review or accept, but in the majority of cases they’re also not addressing any of the issues that users have filed.

In the case of the large changes, it’s more challenging:

  • These are very hard to review. They’re time intensive and they generally include some breaking API changes we need to consider.
  • The person submitting the change isn’t going to come back and own it if something goes wrong. It’s a “drive-by submission.” Maybe it introduces a memory leak in an application because it inadvertently holds onto references it shouldn’t. Maybe it adds a few milliseconds on every resolve operation and now under load things are failing. The original submitter isn’t going to fix that.

Finally, there are a lot of things that seem small but are still time sinks. A good recent example is the change in the hosting model for the .NET Core conforming container. Good changes for .NET Core, but for Autofac we have to update docs, come up with examples, adjust how some things get handled, answer StackOverflow questions on it, and so on. What seems like a small change can become a non-trivial time sink.

Unfortunately, we’ve reached a point where a combination of life events, work pressure, and general OSS maintainer burnout has set in and we can’t keep up. We need someone (or several someones) who can come on board and help OWN Autofac. Someone who can review the PRs coming in, understands the challenges with breaking API changes, can help out with support, documentation… all the things I mentioned in our original request for help.

It’s not just extension packages anymore. We need help on all of Autofac - extensions, core Autofac, the whole shmear.

I can’t promise a deep mentoring experience. I apologize in advance for that. The current owners will still be involved and doing everything we can. We’ll be collaborating with anyone new and working to get new team members on-boarded. However, this is likely not a good fit for someone new to C#, .NET, or dependency injection.

Hello… Is it you we’re looking for? Take a second to check out the original post outlining what it means to be an owner. If you think that’s you, tweet us at @AutofacIoC or say hello in the Autofac Google Group.

What Helm Is

Helm is a tool used to create and deploy templates that define entities in Kubernetes. It’s kind of like taking Kubernetes YAML and adding handlebars-style template support. For example, you might see something like this:

apiVersion: v1
kind: ReplicationController
metadata:
  name: deis-database
  namespace: deis
  labels:
    app.kubernetes.io/managed-by: deis
spec:
  replicas: 1
  selector:
    app.kubernetes.io/name: deis-database
  template:
    metadata:
      labels:
        app.kubernetes.io/name: deis-database
    spec:
      serviceAccount: deis-database
      containers:
        - name: deis-database
          image: {{.Values.imageRegistry}}/postgres:{{.Values.dockerTag}}
          imagePullPolicy: {{.Values.pullPolicy}}
          ports:
            - containerPort: 5432
          env:
            - name: DATABASE_STORAGE
              value: {{default "minio" .Values.storage}}

In this case, you can see there are some values poked in from another YAML document that has some configuration parameters. The values document might look like this:

imageRegistry: "quay.io/deis"
dockerTag: "latest"
pullPolicy: "Always"
storage: "gcs"

When Helm does an installation or upgrade, it takes the parameters, fills in the appropriate blanks in the larger YAML deployment templates, and does the Kubernetes work to execute the deployment.
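To make that concrete, a typical install-or-upgrade from the command line looks something like this (the release name, chart path, and values file here are hypothetical):

helm upgrade --install my-app ./my-app-chart --namespace test -f values-test.yaml

Helm renders every template in the chart with the supplied values, applies the result to the cluster, and records the whole thing as a numbered revision of the my-app release.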

Since there are a lot of pieces to putting things in Kubernetes - maybe you have some deployments, services, etc. that all need to go in at the same time - Helm uses a concept called “charts” to bundle these up as an atomic entity. You can think of a Helm chart as a zip file with a bunch of Kubernetes YAML in it and a small manifest that explains what the zip file installs.
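If it helps to visualize, an unpacked chart is just a directory with a well-known layout - roughly this (chart name hypothetical; requirements.yaml is the Helm 2 way to declare dependencies):

my-app-chart/
  Chart.yaml          # the manifest: chart name, version, description
  values.yaml         # default configuration parameters
  requirements.yaml   # dependencies on other charts (Helm 2)
  templates/          # the templated Kubernetes YAML
    deployment.yaml
    service.yaml
  charts/             # bundled copies of dependency charts

Running helm package over that directory produces the versioned archive that actually gets installed.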

There are some great benefits to using Helm to deploy things:

  • Separation of deployment template from configuration. You don’t need to keep rafts of YAML around for different environments/clusters/whatever. You can have different parameters and one larger template.
  • Ability to roll back a deployment. Helm tracks the installations and upgrades you make along with the values. If you deploy something that doesn’t work, you can roll it back to the previous version. (There’s a command sketch after this list.)
  • Ability to list deployments and versions. You can use Helm to list out the charts and versions that have been deployed to Kubernetes. This makes things that are logically spread around namespaces into a nice, central list.
  • Charts can have dependencies. Let’s say your application needs a Redis instance when you deploy it. Cool! You can set up your chart to have a dependency on installing/upgrading Redis at the same time using the Redis Helm chart.
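Here’s the command sketch promised above for the list/rollback workflow (my-app is a hypothetical release name):

helm list                 # releases, with chart version and revision number
helm history my-app       # the revision history for one release
helm rollback my-app 2    # return the release to revision 2

Every install or upgrade bumps the revision number, which is what makes the rollback possible.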

Things to be aware of that will come into play later:

  • Helm installations are global. When you do a Helm installation of a chart, even if it’s into a specific namespace, the installation itself is a global concept. Let’s say you want to install Redis twice (separately from your applications) - once into a test namespace and once into a prod namespace. Make sure you name those installations in a unique way - the output of the installation may go into the namespace but the installation itself is a global thing outside the namespace. (Unclear if this will change with Helm 3.0.)
  • Helm isn’t using kubectl. For version 2.0 Helm uses a service running in the cluster called tiller to do its installations. Tiller is going away in Helm 3.0 but it’s still not using kubectl - it’s using the Kubernetes API directly. What that means is if you’re looking to use kubectl later (even in an automated fashion) to modify things, you’ll get messages like Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply.
  • Helm installation tracks chart version and install version, not container version. When you install something with Helm, later you can do helm list to get the installations. Doing that, you’ll see the chart version and the date last updated but no information about what’s in that installation.

For the purposes of this article, I’m not going into exactly how Helm does its work. If you’re interested in diving deep, the docs are a good place to start. The important things to understand are some of the benefits and limitations.

Continuous Delivery vs. Continuous Deployment

Let’s talk about “CD.” Some folks differentiate between “continuous delivery” and “continuous deployment.” The apparent differentiator is that “delivery” implies software that’s been tested and is ready to be deployed to production but isn’t; while “deployment” implies another step - the software is actually automatically deployed into production and verified.

I guess you can do that, but I don’t really separate those things in my mind. If it’s ready to go to production… why isn’t it going there? I’d argue that if you differentiate, maybe your pipeline (or your org, or your process, or whatever) just isn’t ready for a complete check-in-code to deploy-code-in-production execution. It’s an incomplete pipeline, not “two different things.”

Folks Love Helm for CD

There are a lot of articles like this one pointing out how Helm is the missing link in the CI/CD chain for Kubernetes.

It sort of is, but not in the way those articles explain. At least, not the way I see it.

The process most of the articles describe roughly follows this:

  • Create a Helm chart for your application.
  • In the CD pipeline, create a set of parameters that can be used to helm upgrade your app in Kubernetes.
  • If anything fails, use helm rollback to return to the previous state.

Seems simple enough, right? And it is simple. But this is really the “continuous delivery” sort of pipeline - the pipeline where you’re not actually trying to automate things into production. I can’t imagine you’d ever want to stomp your production deployment like this, hope that it works, and depend on helm rollback to return to a previous state if it doesn’t.

That’s a hugely important differentiator. If all you need to do is get things deployed into some sort of development/testing environment and that’s the limit of your pipeline, I guess that works. I feel like the goal should be bigger than that. I’m a fan of test in production and I’m not a fan of trying to replicate environments across dev, test, perf, and production (or however you break it up).

Given that…

Why I Don’t Like Helm for CD

If I’m thinking about actually deploying into production on a regular basis, I want support for more complex scenarios like canary testing. Let’s think about how that works for Kubernetes.

  • Existing deployment of the service in production is taking traffic.
  • New deployment of the service in production goes in alongside the existing deployment, but takes no traffic.
  • Testing against the new deployment runs internal to the cluster.
  • Traffic handling is tweaked to allow some small amount of production traffic to the new deployment whilst the majority of the traffic is still going to the original deployment. This may be accomplished in a few different ways. For example…
    • With a standard Kubernetes service, you can adjust how much traffic goes to old vs. new based on the number of deployed pods. If you want 10% of the traffic on the new and 90% on old, you’d need that proportion of pods - 1 new, 9 old.
    • With a service mesh like Istio, you can use the traffic management built in to control the percentage of traffic routed to each set of pods. This is a lot cleaner than the standard mechanism but means you have some complexity with a service mesh in the mix. (There’s a YAML sketch of this after the list.)
  • Testing runs on the service and both old and new versions of the deployment are monitored. If the new version starts misbehaving, all traffic is routed back to the original version of the service and the “canary” new version is killed. If the new version behaves, more traffic is directed to the new version and removed from the old version until either all traffic is pointed to the new version or the canary is killed.
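As promised, here’s a rough sketch of the Istio approach - a VirtualService splitting traffic 90/10 between two subsets. This assumes you’ve already defined a DestinationRule with stable and canary subsets; all the names here are hypothetical:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
    - my-service
  http:
    - route:
        - destination:
            host: my-service
            subset: stable   # existing deployment
          weight: 90
        - destination:
            host: my-service
            subset: canary   # new deployment under test
          weight: 10

Promoting the canary means nudging those weights - and if those weights live in a Helm chart, every nudge is another helm upgrade, which is exactly the problem described next.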

How do you do something like that with Helm? In its stock form, you really can’t… or your Helm chart is going to be pretty complicated.

  • It’ll have to allow for parameterization of both existing and new deployments or you’re going to have two deployments side-by-side. Remember how Helm installations are a global concept? That makes it complicated.
  • You’ll have to figure out how to only have one Kubernetes service that can route to both sets of pods (old and new) or you’ll need something external to the Kubernetes load balancer to handle traffic control across the Kubernetes services.
  • If your chart has the Istio bits in it, again, you want one set of Istio load balancing/ingress controls across both pods, so the canary installation wouldn’t want to deploy those.
  • If you control traffic using Kubernetes constructs (either by adjusting the ratio of deployed pods or tweaking Istio values) those are all helm upgrade operations. If the canary fails, how many helm rollback operations do you need to perform to get back to the original state?

There are other reasons Helm isn’t great for CD, too. Here’s a pretty good article that talks about some of the challenges and shortcomings you can hit.

Helm is Great for Infrequent Deployments

Helm is great for installing things that you don’t deploy often. Need to deploy Istio? Sweet, Helm to the rescue. You don’t update Istio every day. Need to get an ElasticSearch instance installed that your services can share? Boom! Helm, baby! You won’t be upgrading that every day.

Helm as a way to manage infrastructure or shared services is awesome. Things that don’t require canary testing, continuous rollout, that sort of thing.

Helm is Great for Templating in CD Pipelines

Helm is great as a way to package up a set of YAML and handle parameterization and some calculations to generate a final Kubernetes YAML for deployment… and in a continuous delivery/continuous deployment context, that’s what I would recommend using it for.

A great example of this is the way Spinnaker uses Helm to deploy things. It’s not using helm install or helm upgrade - instead it can take a Helm chart or Helm-formatted YAML document and it uses helm template to generate the final template with the parameters all populated. It then takes that output and executes the deployment in Kubernetes. The note at the top of the Spinnaker “Deploy Helm Charts” page pretty much says it all:

Note: This stage is intended to help you package and deploy applications that you own, and are actively developing and redeploying frequently. It is not intended to serve as a one-time installation method for third-party packages. If that is your goal, it’s arguably better to call helm install once when bootstrapping your Kubernetes cluster.
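You can approximate the Spinnaker approach in any pipeline: render the chart locally, then hand the output to your deployment step. A minimal sketch using Helm 2’s template command (release name, chart path, and values file hypothetical):

helm template --name my-release ./my-app-chart -f values-prod.yaml > rendered.yaml
kubectl apply -f rendered.yaml

Nothing goes through tiller here, so there’s no Helm release record - which is exactly the trade-off the rest of this section describes.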

If you think about what you get with helm list, you’re getting the chart version, right? Honestly, once you have the chart down, the chart version has no meaning in continuous deployment. The important stuff is the version(s) of the container(s) that are deployed and making up the application. That stuff isn’t tracked, so helm list becomes pretty useless.

Things like Helm chart repositories are also really not interesting. In fact, the YAML for the Helm template may just be embedded right in the CD pipeline itself, not a separate thing stored elsewhere. This allows you to separate things like tweaking Istio load balancer settings from the concept of helm install. It also avoids needing to ensure deployment names are unique.

Finally, it means helm rollback won’t save you. You’re not helm installing anything, so there’s nothing to roll back. You’ll need to ensure your CD pipeline can appropriately kill the canary. However, if you’re doing canary testing and not just stomping your existing production deployment with a new untested deployment… you should be able to easily kill the canary and leave the existing deployment untouched, with no one the wiser that something went awry.

docker, vs

Container Tools in Visual Studio offers the ability to develop an application and have it run inside a container while Visual Studio debugs it on your host machine. It’s a cool way to see how your app will behave in the container environment with the flexibility of edit-and-refresh functionality that doesn’t require the overhead of rebuilding the container each time.

I ran into a bunch of trouble getting some things working the other day which caused me to dive a little deeper into how this works and I found a few interesting things. Gotchas? Tips? Sure. That.

I’m primarily using the single-container support - not the Docker Compose multi-container support. If you’re all Docker Compose up in there, this may or may not be helpful to you.

You Only Need ‘base’ for VS

The Dockerfile that gets generated has a bunch of named intermediate stages in it - base, build, publish. This is helpful if you don’t already have a Dockerfile, but if you’re really just trying to get debugging working with VS, you only need the base stage. You can delete or modify the others.

UPDATE August 16, 2019: Microsoft has some documentation now on how Container Tools builds Dockerfiles. It’s not necessarily base that’s a magic target - it’s “the first stage found in the Dockerfile.”
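So if VS debugging is all you’re after, a sketch of a trimmed-down Dockerfile could be as small as this (assuming an ASP.NET Core 2.x app - adjust the image tag to your runtime):

# VS fast mode only builds this first stage; your source and build
# output get volume-mounted into the running container instead of copied in.
FROM mcr.microsoft.com/dotnet/core/aspnet:2.1 AS base
WORKDIR /app
EXPOSE 80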

When VS builds the container, you can see in the Container Tools output window it’s running a command like this:

docker build -f "C:\src\solution\project\Dockerfile" -t project:dev --target base --label "com.microsoft.created-by=visual-studio" "C:\src\solution"

The --target base is the key - it’s not going to build the rest.

(You can change this using <DockerfileFastModeStage> in your project - see below.)

VS Controls Container Startup and Teardown

In Visual Studio the container for debugging will get built and will start as soon as you select the Docker configuration for running. Even if you don’t actually start a debug session, the container will be pulled, built, and run in the background. The container will continue to run until VS shuts down.

docker run -dt -v "C:\Users\yourname\vsdbg\vs2017u5:/remote_debugger:rw" -v "C:\src\solution\project:/app" -v "C:\Users\yourname\AppData\Roaming\Microsoft\UserSecrets:/root/.microsoft/usersecrets:ro" -v "C:\Users\yourname\.nuget\packages\:/root/.nuget/fallbackpackages2" -v "C:\Program Files\dotnet\sdk\NuGetFallbackFolder:/root/.nuget/fallbackpackages" -e "DOTNET_USE_POLLING_FILE_WATCHER=1" -e "ASPNETCORE_ENVIRONMENT=Development" -e "NUGET_PACKAGES=/root/.nuget/fallbackpackages2" -e "NUGET_FALLBACK_PACKAGES=/root/.nuget/fallbackpackages;/root/.nuget/fallbackpackages2" -p 58260:80 --entrypoint tail project:dev -f /dev/null

There are some interesting things to note here:

  • A ton is mounted through volumes. Look at all those -v commands. There’s a remote debugger; your application code/source; user secrets; your local NuGet package cache; and your local installation of fallback packages. You’ll get a warning that pops up if you don’t have volume sharing enabled in Docker. You have to allow the drive with your source code to be mounted. VPNs and firewalls can really mess this up by blocking the SMB port.
  • The remote debugger isn’t associated with a VS 2017 install. The path to the remote debugger you see is C:\Users\yourname\vsdbg\vs2017u5 but this isn’t part of a VS 2017 install. Even if you only have VS 2019, it’s still this path. It could change later, but don’t be fooled.
  • The default environment is Development. The Container Tools put the ASPNETCORE_ENVIRONMENT=Development thing in there. You can override this by updating launchSettings.json (see below).
  • The entrypoint is not your application. Notice the entrypoint is tail -f /dev/null. This just ensures the container keeps running but isn’t tied to your application. Your app gets started by a separate call when it’s time to debug (see the update below).

During build, you’ll see something like this in the Build output window:

docker exec -i 40b49d8d963bb682a08fed17248212bcfd939456c8030689e9a28f17f5b067e3 /bin/sh -c "if PID=$(pidof dotnet); then kill $PID; fi"

What this is doing is killing the running dotnet command in the container so any files that might be getting regenerated by Visual Studio or whatever won’t mess up the running process.

When you start debugging, the remote debugger starts in the container. I used Process Explorer and Process Monitor to look for docker commands going by. I see that the command to start the remote debugger is:

"docker" exec -i 40b49d8d963bb682a08fed17248212bcfd939456c8030689e9a28f17f5b067e3 /bin/sh -c "ID=.; if [ -e /etc/os-release ]; then . /etc/os-release; fi; if [ $ID = alpine ] && [ -e /remote_debugger/linux-musl-x64/vsdbg ]; then VSDBGPATH=/remote_debugger/linux-musl-x64; else VSDBGPATH=/remote_debugger; fi; $VSDBGPATH/vsdbg --interpreter=vscode"

UPDATE June 18, 2019: After publishing this post I found out that Visual Studio communicates the dotnet startup command directly to the remote debugger. The debugger is what launches the dotnet command and provides the additional environment variables from launchSettings.json. This allows VS to catch any startup errors.

Using ps -axwwe on a running container being debugged, I can see the command line and the environment for the running dotnet process. The command line looks like:

/usr/bin/dotnet --additionalProbingPath /root/.nuget/fallbackpackages2 --additionalProbingPath /root/.nuget/fallbackpackages bin/Debug/netcoreapp2.1/project.dll

The environment is big so I won’t paste it all here, but I can see environmentVariables things (from launchSettings.json) show up.

launchSettings.json Gets Extra Stuff

Once you’ve right-clicked in VS and added Docker support to your ASP.NET Core project, launchSettings.json will be updated to include a Docker configuration that looks something like this:

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "https://localhost:44308",
      "sslPort": 44308
    }
  },
  "$schema": "http://json.schemastore.org/launchsettings.json",
  "profiles": {
    "IIS Express (Development)": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "launchUrl": "",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "Kestrel (Development)": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      },
      "applicationUrl": "https://localhost:44308"
    },
    "Docker": {
      "commandName": "Docker",
      "launchBrowser": true,
      "launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}/",
      "httpPort": 58260,
      "useSSL": false,
      "sslPort": 44308
    }
  }
}

There are some things to note in here.

  • environmentVariables will work, but just in the app. Just like in the other configurations that have an environmentVariables node, you can add that to the Docker node and the environment variables there will be available in your application. However, they won’t be added as global environment variables to the container - they’ll instead get passed in to your application. If you launch a separate shell process in there to poke around, you won’t see them.
  • iisSettings is still important. Even if you don’t use it, the iisSettings.iisExpress.sslPort and iisSettings.iisExpress.applicationUrl values are still important.
  • Docker as the command name is the key. This seems to be a magic thing interpreted by the add-in to know to launch Docker. The name of the configuration doesn’t appear to matter.
  • There are curly-brace magic strings that work in the launchUrl field. I can’t find any beyond {Scheme}, {ServiceHost}, and {ServicePort} that do anything, though in the Microsoft.Docker assembly I see a definition for {ServiceIPAddress} that doesn’t seem used.

You Can Affect docker run Through Project Settings

UPDATE August 16, 2019: Microsoft has added documentation about the available MSBuild properties that can influence the Container Tools behavior. I’ve updated my doc below based on the official docs, but there’s more to be seen over there.

There are some magic XML elements you can put in your project properties that will affect the docker run command. These are found in the .targets files in the Microsoft.VisualStudio.Azure.Containers.Tools.Targets package, which you can find in your local cache in a spot like C:\Users\yourname\.nuget\packages\microsoft.visualstudio.azure.containers.tools.targets\1.4.10\build.

Ones that seem interesting, pulled straight out of the Container.targets file:

  • <DockerfileTag>: The default tag used when building the Docker image. When using the container debugging tools this defaults to dev.
  • <DockerDefaultTargetOS>: The default target OS used when building the Docker image.
  • <DockerfileBuildArguments>: Additional arguments passed to the Docker build command. I’m not sure what you might do with that, but it may be an interesting hook.
  • <DockerfileRunArguments>: Additional arguments passed to the Docker run command. You can use this to add volume mounts and such to the VS process that starts up the container for your project. You can add environment variables this way, too, if you don’t want to use launchSettings.json.
  • <DockerfileRunEnvironmentFiles>: Semicolon-delimited list of environment files applied during Docker run.
  • <DockerfileFastModeStage>: The Dockerfile stage (i.e. target) to be used when building the Docker image in fast mode for debugging. This defaults to the first stage found in the Dockerfile, which is usually base.

I’ve only really tried the DockerfileRunArguments to see if I could use that for environment variables (before I figured out the launchSettings.json part) and it seemed to work. On the others, YMMV.
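For illustration only - the values here are hypothetical - wiring a few of these into your .csproj might look like:

<PropertyGroup>
  <!-- Tag and stage used for the fast-mode debug image -->
  <DockerfileTag>dev</DockerfileTag>
  <DockerfileFastModeStage>base</DockerfileFastModeStage>
  <!-- Extra args appended to the docker run command VS issues -->
  <DockerfileRunArguments>-v "C:\temp\data:/data" -e "MY_SETTING=some-value"</DockerfileRunArguments>
</PropertyGroup>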

Troubleshooting Container Startup

If you have an error starting the container, restart Visual Studio after resolving it. I can’t figure out a way to manually force the container to restart and still have it controlled (start/stop/cleanup) by VS.

If you see…

docker: Error response from daemon: driver failed programming external connectivity on endpoint hungry_nash (09d5705dc88b7afc229be8c3ed8c92bc30c3c4a2e11fdc9ece423cfb4bfe50b3): Error starting userland proxy:.

…then close VS and restart the Docker daemon. I’ve seen this a lot when my machine starts up and maybe has a race condition between networking getting established and the Docker daemon needing networking. A simple restart usually fixes it.

If you see…

Launching failed because the directory '/remote_debugger' in the container is empty. This might be caused by the Shared Drives credentials used by Docker Desktop being out of date. Try resetting the credentials in the Shared Drives page of the Docker Desktop Settings and then restarting Docker.

or…

Error response from daemon: error while creating mount source path '/host_mnt/c/Users/yourname/vsdbg/vs2017u5': mkdir /host_mnt/c: file exists.

…then there’s a problem with drive mounting. Make sure your drive sharing settings in Docker allow the drive with your source and the drive with the remote debugger to both be mounted. Further, if you’re on a VPN like Cisco Anyconnect, chances are the SMB sharing port 445 is blocked. Try getting off the VPN. You’ll need to close VS and restart the Docker daemon once you’ve resolved that.

No, I haven’t found a fix for the VPN thing. I wish I had.

If the container fails to start for whatever reason, you may be left with zombie containers or images.

  • Use docker ps -a to see the containers that are created but will never be used again. When VS is closed these will remain in Created state. Use docker rm container_name_here to clean them up.
  • Use docker images to see the images on your machine. If you see one that’s named like your project with the tag dev, that’s the dev container. Clean that up with docker rmi project:dev (using the appropriate project name in there).
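Putting that together, a quick cleanup session might look like this (container ID and project name hypothetical):

docker ps -a --filter "status=created"   # find zombie containers stuck in Created
docker rm 40b49d8d963b                   # remove one by ID or name
docker images                            # look for a project:dev image
docker rmi project:dev                   # remove the dev image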

The Actual Documentation

UPDATE August 16, 2019: I opened an issue on June 18, 2019 to request some documentation. There is now some official doc on how Container Tools builds the container as well as the MSBuild properties available to control that process. They are working on additional docs.

docker, linux, windows

Working in an environment of mixed containers - both Windows and Linux - I wanted to run them all on my dev machine at the same time if possible. I found some instructions from a while ago about this but following them didn’t work.

Turns out some things have changed in the Docker world so here are some updated instructions.

As of this writing, I’m on Docker Desktop for Windows 2.0.0.3 (31259) Community Edition. The instructions here work for that; I can’t guarantee more won’t change between now and whenever you read this.

  1. Clean up existing containers before switching to Windows containers. Look to see if you’re using Windows containers. Right-click on the Docker icon in the system tray. If you see “Switch to Windows containers…” then you are not currently using Windows containers. Stop any running containers you have and remove all images. You’ll need to switch to Windows containers and the image storage mechanism will change. You won’t be able to manage the images once you switch.

  2. Switch to Windows Containers. Right-click on the Docker icon in the system tray and select “Switch to Windows containers…” If you’re already using Windows containers, great!

  3. Enable experimental features. Right-click on the Docker icon in the system tray and select “Settings.” Go to the “Daemon” tab and check the box marked “Experimental features.”

Enable experimental features.

That’s it! You’re ready to run side-by-side containers.

The big key is to specify --platform as linux or windows when you run a container.

Open up a couple of PowerShell prompts.

In one of them, run docker images just to make sure things are working. The list of images will probably be empty if you had to switch to Windows containers. If you were already on Windows containers, you might see some.

In that same PowerShell window, run:

docker run --rm -it --platform windows microsoft/nanoserver:1803

This is a command-prompt-only version of Windows Server. You should get a C:\> prompt once things have started up.

Leave that there, and in the other PowerShell window, run:

docker run --rm -it --platform linux ubuntu

This will get you to an Ubuntu shell.

See what you have there? Windows and Linux running side by side!

Windows and Linux containers - side by side!

Type exit in each of these containers to get out of the shell and have the container clean itself up.

Again, the big key is to specify --platform as linux or windows when you run a container.

If you forget to specify the --platform flag, it will default to Windows unless you’ve already downloaded the container image. Once you have used the image, the correct version will be found and used automatically:

# Works because you already used the image once.
docker run --rm -it ubuntu

If you try to run a Linux container you haven’t already used, you may get a message like this:

no matching manifest for windows/amd64 10.0.18362 in the manifest list entries

I’m not sure of the particulars of why sometimes --platform is required and sometimes it’s not. Even after removing all my container images, I was able to run an Ubuntu container without specifying the platform, like some cache was updated to indicate which platform should be used by default. YMMV. It doesn’t hurt to include it; however, if you try to use --platform on another machine it may not work - you can only use it when experimental features are enabled.

UPDATE June 14, 2019

I’ve found since working in this mixed environment that there are things that don’t work as one might entirely expect.

  • Networking: With Linux-only containers on Windows you get a DockerNAT virtual network switch you can tweak if needed to adjust network connectivity. Under mixed containers, you use the Windows container network switch, nat, and you really can’t do too much with that. I’ve had to reset my network a few times while trying to troubleshoot things like network connections whilst on VPN.
  • Building container images that reference files from other images: A standard .NET Core build-in-container situation is to create, in one Dockerfile, two container images - the first builds and publishes the app, the second copies the published app into a clean, minimal image. (There’s a sketch of this pattern after the list.) In mixed container world, I get a lot of errors like, “COPY failed: file does not exist.” I can look in the intermediate containers and the files are all there, so there’s something about being unable to mount the filesystem from one container to copy into the other container.
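The pattern that fails for me is the standard two-stage build - something like this sketch (image tags and app name hypothetical):

# Stage 1: build and publish the app using the full SDK image.
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Stage 2: copy the published output into a minimal runtime image.
# This COPY --from is the step that fails under mixed containers.
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]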

Unrelated to mixed containers, it seems I can’t get any container to connect to the internet when I’m on my company’s VPN. VPN seems to be a pretty common problem with Docker on Windows. I haven’t solved that.

It appears there’s still a lot to work out here in mixed container land. You’ve been warned.