personal comments edit

While I have made a lot of costumes and clothing, I’ve never previously sewn a bag. When I saw the YouTube video of Adam Savage building his EDC TWO bag, it made me want to build a bag, too.

In the video he mentioned that the EDC TWO is slightly smaller than the EDC ONE and I wanted to make the full-sized bag. He sells the plans for both, but I bought the EDC ONE pattern. I see that since the time I bought it he’s now offering it in your choice of physical/paper plans or a PDF download. I’m glad I bought the paper plans; it’s huge and I don’t have a printer that would accommodate such a thing. Taping a million sheets of paper together isn’t my style.

Big thanks to Adam Savage and Mafia Bags for making this pattern and publishing it.

The pattern is licensed by Adam Savage under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. I think that means you could technically distribute the pattern for free as long as you give credit, but it’s all of $15 and I feel good contributing to someone who put something cool together. I don’t think my explanation here really needs a license, but assuming it does, let’s also use that same license with the same restrictions, etc.

OK, let’s get this going.

Assumptions

I will make some assumptions as I go along here about a few things. You may have to Google some stuff if you don’t know what I’m talking about. Here’s what I assume:

  • You have the EDC ONE pattern. I’m not going to give you the pattern here. I’m going to show you how I used it and call things out about it.
  • You have some sewing experience. If I mention that I basted two pieces together, you should know it doesn’t have anything to do with a turkey baster. Tons of YouTube videos and articles out there to catch you up if you don’t recognize what I’m talking about.

Improvements and Errata

Along the way I’ll point out the improvements that could be made in the pattern as well as some of the “bugs” I encountered in the pattern or provided instructions. If you’re building along here, it’d be good to read through all of this first so you might know what’s coming.

General Tips

  • Watch your sewing machine tension. I had a heck of a time getting this right and some of my stitching from the inside looks bad. If you normally sew through a couple of layers of medium weight fabric, using heavier-weight fabric means you’re going to have to adjust. For example, my machine usually sits on a tension level 4, but for something close to decent I had to almost double that to 7.75.
  • Use the right needle for the job. I used duck canvas for the body of my bag. I had some leather and denim needles but they really didn’t work out in thicker areas. When putting the handle attachments on the bag, for example, you may be sewing through upwards of six thicknesses of fabric. Six layers of duck canvas bends a leather needle. I didn’t find until the end that there are “heavy duty needles” (size 110/18) that are made for this sort of job and I wish I had known earlier.
  • Clip corners to reduce bulk. Especially if you’re using thick fabric, you need to clip some of the corners to make sure things lay flat. This includes some of the things like the handle attachments that get sewn to the bag body - without clipping, you see fabric folds and weirdness along the edges.
  • Baste things in place. I had a bad time holding super thick fabric with pins. It was pretty good to machine baste things in place close to the edge of the fabric / inside the seam allowance to keep the parts together while assembling. This was especially helpful when attaching the lining, zipper, bag outside, and metal frame holder pieces in one go.
  • Watch your seam allowances. This pattern doesn’t work like a Simplicity or Vogue pattern you might buy at the store, which was sort of hard for me. Common patterns use a 5/8” seam allowance. The instructions on this pattern say it’s a 1/2” seam allowance unless marked. However, they may only change the seam allowance on one side of a pattern piece, so a 3/8” marking may apply only to that side. When in doubt, measure.
  • There’s no pattern layout for the fabric. You have to kind of figure out how to place the pattern pieces yourself to make the best use of your fabric.
  • Use the drawings and reference photos. Given the instructions are a little vague in places, use all the photos and drawings you can to fill in the gaps. I wouldn’t have been able to finish this without the detailed photos for their finished bag.
  • There are bugs in the pattern. I will call them out shortly, but you’re not crazy.

Initial References

  • EDC ONE bag - This is what the bag looks like fully put together. Really helpful for reference photos to see if you’re building things right.
  • EDC ONE pattern - This is the pattern I bought.

When looking at the pictures of the finished bag and comparing to the pattern, I took some notes.

Front:

Notes on the EDC ONE reference photo - front

  • The pattern calls for lobster claws to hold the shoulder strap. The lobster claw they use isn’t like anything I’ve seen. I searched and couldn’t find anything like that.
  • You can see the stitching that attaches the metal frame holder channel inside the bag. The pattern doesn’t specifically say you need to sew through all thicknesses when attaching that, so it’s good to know that’s what’s happening.

Back:

Notes on the EDC ONE reference photo - back

  • The pattern calls for “tube magnets” to put in the handles. I’ll get into this later, but I couldn’t figure out what this was and the picture didn’t make it any more clear.
  • The “D rings” used to hold the handles on look more square here than standard D rings. I ended up using rectangle rings instead of D rings.
  • The shadow on the bag shows that the metal frame you insert in each side of the bag opening may not go the full width of the side - there’s space between where the two frame halves end to allow the bag opening to bend freely.

Inside:

Notes on the EDC ONE reference photo - inside

  • Again, you can see the frame holder is stitched to the bag through all thicknesses. There’s no crafty seam hiding here.
  • Also, again, you can see the metal frame appears slightly shorter than the width of the bag to allow the bag to open and close easily.

Pattern Bugs

Here are the things I found that are wrong on the pattern. Keep these things in mind as you are cutting and sewing.

  • The “strap loop” piece is marked “cut one.” However, you need two of these - one for each side of the bag. Cut two.
  • The pattern calls for 5” of 3/4” webbing used with the strap loops. Again, this is only enough for one and you need two loops… so you need 10” of 3/4” webbing. (I’ll update that in my parts list below.)
  • The bottom piece says you need one for the lining, one for the bottom, implying a total of two pieces. However, you need one for the bottom of the “A panels,” one for the bottom of the “B panels,” and one for the lining - a total of three pieces.
  • The instructions say to use 3” Velcro pieces on the main panel for your patch but the pattern drawing measures out at 3.5” for the Velcro pieces. I don’t know which is right, but given the 70” Velcro measurement I think it’s supposed to be 3.5”.
  • The handle tab pattern piece is slightly larger than the placeholder shown on Main Panel A. I think the drawing on Main Panel A is wrong.
  • There are no instructions explaining what to do with the shoulder pad pieces. For that matter, there’s no “pad” in the shoulder pad in this pattern. I’ll explain what I did in my instructions.
  • There are no instructions explaining when you should attach the D rings or the strap loops to the main bag.
  • There is no measurement for the metal frame. I used two 22” pieces of 3/16” steel rod. I bent 3-5/8” legs on each end of the rod pieces for the U-shape of the frame.

Parts List

The parts list that ships with the pattern is amazingly vague. For example, they list “self fabric.” Uh… how much? “Lining.” Again, how much?

Here’s a more detailed parts list based on what I actually used. Note I used a different fabric color for the bottom than I did for the main bag body so I list those things separately. Also, I didn’t really pay attention to fabric grainlines since I used duck canvas and it generally looks and works the same regardless of direction. You may need to factor that in.

  • 18” of 60” duck for body
  • 12” of 60” duck for bottom / accent
  • 18” of 60” rip stop nylon for lining
  • 29” continuous zipper with two sliders meeting in the middle
  • 4 - D rings (rectangle rings) 1” inside width
  • 2 - lobster clasps: 1” inside width for strap, clasp big enough for 3/4” webbing
  • 1 - webbing slide, 1” inside width
  • 70” of 1” wide Velcro
  • The pattern calls for 2 “tube magnets” for handles. I used…
  • 60” of 1” wide webbing
  • 7-3/4” x 15” plastic board for the bottom (corrugated sign)
  • 44” of 3/16” steel rod for the metal frame
  • 10” of 3/4” webbing
  • 3” x 12” craft foam for the shoulder pad if you decide to follow my enhanced instructions

My supplies, laid out and ready to start

I have no idea what a “tube magnet” is, but that’s used in the handle of the bag. I figured I wanted something nice and strong that would still let the magnets do their job, so I put neodymium magnets inside carbon fiber tubes. To keep the magnets centered in the tubes I put some padding in each end and sealed them up with some silicone sealant.

In the parts list I show using two magnets (one for each handle) and 8” of tube (4” for each handle). You may see in my pictures that I actually went with four magnets (two in each handle) and 12” of tube (6” per handle). This was too much. My handles are a bit too long and the magnet power is a bit stronger than my liking. I may later redo the handles with shorter tubes.

A note on neodymium magnets: They are powerful and they will jump out of your hands to attach to each other. If they do this, they may hit each other with enough force to break. This absolutely happened to me. Luckily I could just jam them into the carbon fiber tubes and it wasn’t a big deal, but be aware this could happen.

Preparation

Cut out all the parts. I didn’t bother transferring pattern markings onto every part because this pattern is very square - you can just measure where things need to be at the time you need to place them.

All my pieces, cut out and organized

Attach seven strips of Velcro, each 9” long, to the bottom of the bag lining.

Enhancement: Either don’t use Velcro at all, or ensure you use only the soft half of the Velcro. I’m not sure what value having Velcro at the bottom of the bag provides. I put it in, but probably wouldn’t on future versions. If you do this step, make sure you use the soft half of the Velcro. If you use the rough/stiff half, then you can’t really put any clothing into the bag because it’ll snag.

Lining bottom with Velcro attached

Enhancement: The next step on the official instruction list is to attach the main panel B to main panel A; and side panel B to side panel A. Don’t do that. It doesn’t make any sense. You’re going to have two “bags” - a short outer bag (from the “B” panels) and a larger inner bag (from the “A” panels). You’re going to need to sandwich the plastic board between those two bag bottoms. Attaching the main panels/side panels like this will just make the sewing way harder. Hard pass.

Fold back and finish the edges for the pen holder. You may want to trim the corners so you don’t have weird extra folds showing at the top.

Pen holder edges finished with corner trimmed

Instead of marking the pen holder pattern on the pen holder, press the pen holder at the locations where you need to sew. This serves two purposes - first, you won’t have to deal with pattern markings; second, the pen holder will bend a little cleaner at the places that get sewn down.

Pieces for the pen holder and pocket

Attach the pen holder to the pocket.

Pen holder attached to pocket

Enhancement: More pockets, more pen holders, pocket closures. The pattern shows a single pocket and pen holder on the inside of the bag. The pocket is open on the top. I have a lot of small things that I cart around with me, so one pocket is not enough. I’d also really like to have a pocket that can close - snap, Velcro, zip, whatever. Before you sew the pocket down to the lining, consider adding more pockets, having better/bigger pockets, etc.

Attach the pocket + pen holder to the lining.

Pen holder attached to lining

Cut two lengths of Velcro that are 3.5” long. Attach these to one of the main panel A pieces.

Fold the handle tabs, press, and trim the corners. Attach these with the D rings to the main panel A pieces of the bag.

Fold and press the handle tabs

Cut the 3/4” webbing in half so you have two 5” long pieces.

Five inches of webbing for the loop

Fold and press the strap loop attachments. Trim the corners as needed.

Strap loops ready to attach to the bag

Attach the shoulder strap loops to the main panel A pieces of the bag.

Strap loops attached to main panel

You should have something that looks like this. Note my Velcro is longer because I wanted a longer patch attachment.

Handle tabs and Velcro attached

Cut the carbon fiber tube into two 4” lengths. Each one of these will be a handle.

Put the neodymium cylinder magnets into the carbon fiber tubes. Pack each side of the tube so the magnets sit in the center and won’t shake around. However, don’t glue the magnets in because, depending on the orientation, you may need them to be able to rotate in place inside the tube such that the poles will line up when you put the two handles together.

Seal each side of the tube with something waterproof like silicone sealant. Using a waterproof sealant will ensure you can throw the bag in the washing machine if you need to.

Filling the end of the tube with sealant

I took it a step further and also dipped the handle ends in Plasti-Dip; however, if you do that, the carbon fiber tube won’t slide into the handle as easily because the Plasti-Dip is very grippy.

Drying the Plasti-Dip on the handle tube

Enhancement: Multi-colored handles. I added a layer of fabric to my handles so they’re multi-colored. The part you hold onto has an extra layer. This also adds some padding around the extra strong magnets.

Lay out your handle pieces and press the long seam allowances in. The pattern/instructions call for topstitching to close the handle.

Handle pieces, pressed and ready

If you’re doing the multi-color handles, sew the top layer down. Make sure when you lay this out, you pin the top layer to the bottom layer while the bottom layer is wrapped around your handle. If you don’t, especially if you’re using thick fabric, the handle won’t lay flat around the carbon fiber tube.

Once you have that set, sew the handle closed. If you didn’t use Plasti-Dip, you should be able to sew the handle and then slide the carbon fiber tube into the middle of it. I did use Plasti-Dip, so I had to sew the handle around the tube. This means I had to slipstitch it closed by hand instead of doing a machine topstitch. I like how it turned out, but it was more effort.

Handles laid out for assembly

Enhancement: The official instructions say you should attach the handles to the main panel with the D rings now. Don’t do that. It’s going to make it much harder to get the main panels through your machine if you have the handles getting in the way, magnets attaching to the side of the machine, etc.

Fold and press the zipper ends. Trim as needed.

Zipper ends, ready to attach

Attach the zipper ends to the zipper.

Zipper ends attached to the zipper

Construction

I’m not sure why the official instructions differentiate between “preparation” and “construction” since you’ve been constructing things all along, but it is what it is.

Sew the side panels and the main panels together to form a sort of “tube.” Do this for both the A panels and the B panels.

I attached both sides onto one main panel first, then I attached the final main panel.

Main panel A with side panels A

Here’s what the bottom (main/side B panels) “tube” looks like.

Main panels B attached to side panels B

Attach the bottom to each respective tube. You should end up with a short bag (made out of panels B) and a tall bag (made out of panels A). Here’s the bottom attached to my lower/B bag.

Bottom attached to panels B

Trim your corners where the bottom attaches to the main bag. There’s a lot of fabric here and it’s going to be lumpy otherwise.

Trim corners on bottom

Flip the two bags right-side out. Here’s my main bag “A” flipped right-side out.

Main bag A

Sew the lining panels together into a tube.

Lining sides attached

Attach the lining bottom to the lining sides.

Lining bottom attached to sides

Fold the seam allowance on the bottom “B” bag over and press it.

Fold B bag seam allowance and press

Enhancement: Seal the plastic board. I used a corrugated plastic board that I got in the sign section of Home Depot. If you put the bag in the washing machine, that board is going to fill up and retain water. I used some silicone sealant along the edges of the board to ensure it’s waterproof.

Place the plastic board inside the short bag “B.” Slide the larger bag “A” down inside bag B so the board is sandwiched between the two bags. Topstitch along the top of bag B to attach the two bags.

Enhancement: Use Velcro on one of the short edges and only do topstitching on that side for the visual effect. If you want to put the bag in the washer, sewing it in means the board is stuck there. Next time I make this bag, I’m going to use Velcro to close one of the short sides between bag A and bag B so I can open the Velcro and pull the board out.

From the outside it will look like this:

Bottom attached to top - outside

Main body with bottom attached

From the inside it will look like this:

Bottom attached to top - inside

Put the lining into the main bag and baste really close to the top edge.

Lining basted in place

Mark the sides where the zipper should end (i.e., where the zipper should stop being attached to the bag). I used pins to do this. Center the zipper on the sides and baste it in place really close to the top of the bag, ending the basting about an inch before your mark.

Zipper basted in place

Press the seam allowances on the channel for the metal frame.

Pin the frame channel to the edge of the bag over the zipper. This part is sort of a geometric mind-bender.

  • When the zipper/channel is folded into the bag the channel will end at the top of the bag and you’re going to sew down the bottom of the channel later.
  • You want the zipper to stop attaching to the bag at the intersection where the zipper end mark is and where the seam allowance is. This is why you didn’t baste the zipper all the way to the zipper end mark. You’ll see in my picture how the zipper curves down from the edge of the bag to about 3/8” below the edge of the bag when it meets the zipper end mark. This makes a nice smooth transition as the zipper exits the bag.

This picture shows where I started the frame channel - right at the bag midpoint. I have it pinned over the top of the zipper, but the zipper makes that gradual transition/curve I mentioned under the channel. I haven’t finished pinning the frame channel down so you can see what the zipper looks like underneath it.

The zipper end pinned in place, half the frame channel pinned

Sew through all thicknesses along the bag seam allowance (3/8” as marked on the pattern) - lining, bag A, zipper, frame channel top. Once it’s all sewn down you can flip the frame channel up and it should look like this from the outside:

Zipper and frame channel - outside

Here’s a view of the same thing from the inside:

Zipper and frame channel - inside

Fold the frame channel all the way around to the inside. Pin it down, then sew through all thicknesses - lining, bag A, frame channel bottom. When you’re done, you’ll see the stitching from the outside and it’ll look like this:

Frame channel sewn down

Now is a good time to actually attach the handles to the bag. You’re done pushing the bag through the machine so you won’t have the problems of the handles and the tube magnets making life hard.

Handles attached to the bag

You’ll note my handles don’t hang totally straight. I used 6” carbon fiber tubes but the D rings on the bag are only 4” apart. I will likely redo my handles at some point to make them better. In my instructions here, I’ve been pretty consistent about the 4” carbon fiber tube lengths to avoid this issue.

Cut the 3/16” steel rod in half so you have two lengths of 22”. On each end of each length, bend the rod down so you have 3-5/8” “legs”. You should end up with two steel rod pieces roughly in the shape of a really wide “U.” File any sharp edges off the ends so they won’t poke through the bag.

Enhancement: Coat the ends of the frame pieces in Plasti-Dip. This will stop them from sliding around in the frame channel, and it’ll also add more protection from the frame poking through the bag.

Frame with the ends coated in Plasti-Dip

Put the frame pieces into the frame channel along the top of the bag. This gives the bag a sort of “hinged opening” like an old-school doctor’s bag.

Insert the frame into the channel

We’re now at a point where the official instructions just stop. There’s no description of what to do with the shoulder pad, and the shoulder pad shown in the reference photos really doesn’t have any pad. So… here’s what I did. Note that a lot of this was eyeballing and hoping, so I may not have exact measurements here.

My basic idea was to make a shoulder pad that has an actual piece of padding sewn inside and a channel for the shoulder strap to pass through. I didn’t want it attached to the shoulder strap so I could adjust things as needed.

I cut three of the octagonal shoulder pad pieces from the official pattern. I also cut a piece of craft foam a little smaller than the octagon.

Shoulder pad pieces

On all of the three pieces I sewed down the ends so they were finished. I also pressed along where the shoulder strap channel would be.

Pad ends finished, press markings

I took two of the finished pieces and sewed them together down the pad channel marking. This makes a sort of tube where I can push the shoulder strap through.

Channel for the shoulder strap

Here it gets a little confusing. Stick with me.

I took the third piece and stuck it on top - right-side to right-side - of the pieces with the channel. I then sewed along the edges with about a 1/4” seam allowance. In the photo below, you can see that sewing along the edge in black. You can see in the photo where I started to flip the whole thing right-side out.

Flipping the pad right-side out

Flip it right-side out and the larger pocket you’ll have is where you’re going to put the foam.

The pad flipped right-side out

Cut a piece of craft foam and slide it into the shoulder pad. It took me a few tries to get the right size and shape so it’d fill out the shoulder pad but will also sit nice and flat.

Cut foam for the shoulder pad

Slipstitch the ends of the shoulder pad closed so the foam doesn’t slip out. Be careful not to sew the channel shut where you need to put the strap!

Attach the slide on the end of the shoulder strap and sew it down.

Slide attached to shoulder strap

Put one of the lobster claws on the shoulder strap and thread it back through the slide.

Lobster claw on one end of the shoulder strap

Thread the shoulder strap through your custom shoulder pad.

Slide the pad onto the strap

Attach the other lobster claw on the other end of the strap.

Finished shoulder strap

There’s also no description of how to make a patch. Presumably most people will just buy one, but the 2” x 3.5” size doesn’t seem standard to me.

I wanted to make a custom patch, so here’s how I did that.

I made my bag Velcro 2” x 8” to accommodate for the larger patch I had planned. I used my embroidery machine to create the design for the patch.

Embroidering the patch

The design/front half of the patch is 2” x 8” and I created a back half that has the Velcro on it.

Front and back halves of the patch

I attached the two halves and finished the edge of the patch with a very dense zig-zag machine stitch.

The finished patch

The Finished Product

After attaching the shoulder strap and patch, here’s what my bag looks like:

The finished outside of the EDC ONE

The finished inside of the EDC ONE

I’m pretty happy with how this turned out. Hopefully this tutorial helped you in making a bag you’re happy with, too!

spinnaker, kubernetes comments edit

I’m using Spinnaker to manage microservice deployments in one of my projects. One of the nice features in Spinnaker is pipeline expressions - a way to do a little dynamic calculation based on the current state of the pipeline.

I’m somewhat new to Spinnaker and I just got through trying to figure out a way to do some conditional stage execution using pipeline expressions, but I found the learning curve steep. I don’t plan on repeating everything already in the docs… but hopefully I can help you out.

The Layers: Spinnaker pipeline expressions use the Spring Expression Language syntax to expose the currently executing Spinnaker pipeline object model along with some whitelisted Java classes so you can make your pipeline stages more dynamic.

That’s a lot to build on: The Spinnaker object model, Spring Expression Language, and Java. If you’re, say, a Node or .NET developer, it’s a bit of an uphill battle.

Resources:

The Data: The best way you can see what pipeline data you have available is to run a pipeline. Once you have that, expand the pipeline and click the “Source” link in the bottom right. That will get you a JSON document.

Get the pipeline source

The JSON doc is an “execution.” In there you’ll see the “stages” each of which has a “context” object. This is the stuff you’ll get when you use the helper functions like #stage("My Stage") - it’ll find the JSON stage in the execution with the appropriate name and expose the properties so you can query them.

Troubleshooting: There are two ways I’ve found to troubleshoot.

  1. Create a test pipeline. Actually just run the expression in a test pipeline. You’ll get the most accurate results from this, but you may not be able to test the queries or operations against a “real pipeline execution” this way.
  2. Use the REST API. Spinnaker has a REST API and one of the operations on a pipeline is evaluateExpression. This runs against already-complete executions but is the fastest way to iterate over most queries. I’ll show you how to use that.

If you want to use the REST API to test an expression, first find a pipeline execution you want to use for testing. As before, get the JSON doc for a pipeline run by clicking the “Source” link on a complete pipeline. Now look at the URL. It’ll look like http://spin-api.your.com/pipelines/01CMAJQ6T8XC0NY39T8M3S6906. Grab that URL.

Now with your favorite tool (curl, Postman, etc.) - or a few lines of code like the sketch after this list…

  • Create a POST request to the evaluateExpression endpoint for that pipeline, like: http://spin-api.your.com/pipelines/01CMAJQ6T8XC0NY39T8M3S6906/evaluateExpression
  • In the body of the request, send a single parameter called expression. The value of expression should be the expression you want to evaluate, like ${ 1 + 1 } or whatever.
  • The response will come back with the result of the expression, like { result: 2 }. If you get an error, read the message carefully! The error messages will usually help you know where to look to solve the issue. For example, if you see the error Failed to evaluate [expression] EL1004E: Method call: Method length() cannot be found on java.util.ArrayList type - you know that whatever you’re calling length() on is actually an ArrayList so go look at the ArrayList documentation and you’ll find out it’s size() not length(). The errors will help you search!
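If you’d rather script it, here’s a rough sketch in C#. The host and execution ID are placeholders taken from the example URL above, and the expression goes in the body as a single parameter named expression, as described in the list.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Placeholder URL - swap in the pipeline execution URL you grabbed earlier.
            var url = "http://spin-api.your.com/pipelines/01CMAJQ6T8XC0NY39T8M3S6906/evaluateExpression";

            // Single body parameter called "expression" containing the expression to evaluate.
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["expression"] = "${ 1 + 1 }"
            });

            var response = await client.PostAsync(url, body);

            // On success you should see something like { result: 2 }; on failure, read
            // the error message carefully - it usually points at the problem.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}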

Example: The pipeline expression I needed had to determine whether any prior stage in the pipeline had a failure. The expression:

${ #stage("My Stage").ancestors().?[status.isFailure() && name != "My Stage"].isEmpty() }

Breaking that down: #stage("My Stage") grabs the named stage out of the current execution; .ancestors() returns the stages upstream of it (which includes the stage itself, hence the name check); the SpEL selection .?[status.isFailure() && name != "My Stage"] filters that list down to failed stages; and .isEmpty() is true when that filtered list is empty - in other words, when nothing upstream has failed.

How did I figure all that out? Lots of trial and error, lots of poking around through the source. That REST API evaluation mechanism was huge.

media, personal comments edit

I listen to a lot of music at work and thought it’d be nice to have a decent setup for that. I’m not what you’d call an “audiophile,” but I have opinions and I know what I think sounds good vs. what doesn’t.

Listening to low-bandwidth streaming audio off my phone with $10 Bluetooth headphones is not good.

Decent headphones with my iPod Classic playing lossless directly is pretty good.

My first step was getting some decent headphones. Not being, as I said, an audiophile, I went with some good-bang-for-the-buck Sony MDR7506 over-the-ear headphones.

Sony MDR7506 Headphones

That was a definite improvement, but then I started looking at amps.

Headphone amps are expensive. Wow. In my opinion, the cost is usually unjustifiable, like the $1000 HDMI cable that you know won’t outperform the $6 version.

Anyway, I started looking at these things and realized… I need to take it a little slower. I’m not interested in dropping $100 on something I don’t know is going to make a difference. But I could dip my toe in the pool for, say, under $50 and see what the fuss is about, and if I really do like it maybe step up at some later time.

I picked up a DIY 6J1 tube preamp kit for $15 off wish.com both as a step toward this and as a fun little project. I like putting stuff together and I figured, why not? To be clear, I know this isn’t a super high-quality deal, but it’s at the right price point and will at least tell me if I like it.

Anyway, this particular preamp is said to “make the sound more warm.” It doesn’t help with volume; it just changes the character of the sound coming in. Sooo….

I also got a Behringer HA400 amplifier. That’s the thing that will help with the volume.

Behringer HA400 amplifier

All put together, here’s the whole setup:

My headphone amp setup

My audio source (phone, iPod, whatever) runs into the 6J1 preamp, then into the HA400 amp, then to the headphones.

It sounds pretty darn good.

I’ve done some totally non-scientific testing:

  • Plug headphones directly to audio source (bypass the amp setup)
  • Headphones to HA400 to audio source
  • Headphones to HA400 to preamp to audio source (the whole amp setup)

I listened to a few different songs, somewhat arbitrarily chosen yet representative of the sort of thing I listen to more often than not:

I listened to others throughout the day; these are just the ones I started with and used for A/B testing.

Notes:

  • I definitely notice a difference in the sound between when the 6J1 preamp is in place vs. when it’s not. I guess that’s the “warmth” of the sound. It sounds to me like it’s a bit of bass boost and maybe a bit in the mid area as well. I wonder if you could accomplish the same thing with a decent graphic equalizer.
  • If you have the preamp, you definitely need the amp. Running through the preamp without the amp reduces the volume/power enough that you have to max out the volume on the source and the preamp to get back to a standard listening level.
  • The tubes that came with the DIY kit were crappy and introduced a lot of buzz so I had to replace them. I got some $1 replacements, none of this “tested audio matched pair” crap. Maybe those would be even better, but the replacements cost me a total of $2 and removed the buzz. There’s still a weird spot where there’s a very light buzz when the preamp is turned to exactly 50% power. I dunno. Fine. Turn it up a little and it goes away.
  • The amplifier by itself really doesn’t do anything but add volume. It doesn’t process the sound in any way. I didn’t expect it to, but, you know, to be clear, just having an amplifier doesn’t really do anything. It’s the preamp that changes the sound.

I think I’d like to try one of these more expensive all-in-one units where the preamp and amp are all in the same box, but I’ll admit I’d go in a bit biased. I paid about $40 for my setup and it sounds good enough for me. I can’t imagine the difference in sound would be enough to justify $100 or more.

dotnet, json, xml comments edit

When .NET Core was released, a new configuration mechanism came with it: Microsoft.Extensions.Configuration. It’s an improvement over the System.Configuration namespace in a lot of ways and much simpler to use, but there is still a lot to know to effectively take advantage of the features. This post tries to clarify some of the usage patterns and how the new system works based on questions and common issues I’ve seen “in the wild.”

Also, it’s definitely worth looking at the official docs since there are great examples in there, too.

As of this writing, .NET Core 2.1.1 is out. That’s the version I’ll be writing about here. If you show up in a year or three, this could be out of date.

Everything is Key/Value

The most important thing to know about the new configuration system is that everything boils down to key/value pairs. You may have a pseudo-hierarchy of these key/value pairs so you can walk it like a tree, but in the end it’s still key/value pairs, like a dictionary. No matter the input format, it all gets normalized.

  • Keys are case-insensitive.
  • Values are strings.
  • The hierarchy delimiter is : when querying parsed configuration.
  • Every configuration provider flattens their structure down to the same normalized format.

Let’s look at some samples and see how they flatten out.

Here’s a JSON file with some configuration:

{
  "logging": {
    "enabled": true,
    "level": "Debug"
  },
  "components": {
    "database": {
      "connection": "connection-string"
    },
    "files": {
      "path": "/etc/path"
    }
  }
}

When this flattens out, you get:

components:database:connection = "connection-string"
components:files:path = "/etc/path"
logging:enabled = "True"
logging:level = "Debug"

I’ve sorted the configuration keys for easier reading. As you can see, it’s all strings. The Boolean true has been converted to its string representation True.

Let’s look at the same configuration but in XML format:

<?xml version="1.0" encoding="utf-8" ?>
<root>
    <logging enabled="True">
        <level>Debug</level>
    </logging>
    <components>
        <database connection="connection-string" />
        <files path="/etc/path" />
    </components>
</root>

Things to notice in the XML format:

  • The <root> element is throwaway. Configuration ignores the root element.
  • You can specify child configuration items as attributes or as child elements with text.
  • The logging:enabled setting in XML is True to generate the same output as the JSON. If it had been true, since it’s a string, the parsed output would have had logging:enabled = "true".

Something important to note is that name has a special meaning in XML configuration. If you add a name attribute to an XML element it uniquely identifies that element. We’ll get more into that later with Ordinal Collections and Advanced XML.

Now that we’ve seen XML, how about INI format?

[logging]
enabled=True
level=Debug

[components]
files:path=/etc/path

[components:database]
connection=connection-string

Things to notice in the INI format:

  • You can put a : in headings or in keys and it’ll generate the proper flattened format.
  • As with XML, the logging:enabled setting is True since it’s a string by default and won’t be seen as a Boolean.

You can specify configuration as environment variables! Since : doesn’t work in environment variable names on all systems, you use __ (a double underscore) in the actual environment variable and it will get converted to the : delimiter.

set COMPONENTS__DATABASE__CONNECTION=connection-string
set COMPONENTS__FILES__PATH=/etc/path
set LOGGING__ENABLED=True
set LOGGING__LEVEL=Debug

This will generate the keys in all caps, but that’s OK because keys are case insensitive. You can still access them using lower case names like logging:enabled and you’ll get the right thing.

Note by default the environment variable configuration source will bring all environment variables in. Maybe you want that, maybe you don’t. I’ll talk later about Environment Variable Prefixes to show you how to filter and only get what you want.

And let’s finish up with command line parameters:

mycommand.exe --components:database:connection=connection-string --components:files:path=/etc/path --logging:enabled=True --logging:level=Debug

Each switch gets converted to be a key and the value after the equals sign is the value. You can also do space delimited:

mycommand.exe --components:database:connection connection-string --components:files:path /etc/path --logging:enabled True --logging:level Debug

Don’t mix and match. If some things have equals and some don’t, weird things happen. For example, this:

mycommand.exe --badswitch --goodswitch=value

Yields this config:

badswitch = "--goodswitch=value"

I’ll talk more about this later in the “Advanced Command Line” section.

Overriding Values

One of the coolest things (I think) about the way the Microsoft Configuration system works is that you can use all these providers together and set up a configuration precedence to layer different config sources on top of each other.

For example, a common way to go in ASP.NET Core is:

  • Base appsettings.json
  • Environment-specific appsettings.{EnvironmentName}.json
  • Environment variables
  • Command line parameters

In this manner, the application can ship with some core defaults (appsettings.json). If you have environment-specific settings that don’t need to change on the fly (re-deploying a file to change them is fine), you could have an environment-specific JSON file like appsettings.Staging.json or appsettings.Production.json. Layer some environment variables on top for things that get wired up on the fly (like URLs to services) or maybe API keys/secrets that need to be used by the app. Finally, last-minute must-have overrides can go on the command line, like which port to listen on.

Building that up might look like this:

var config = new ConfigurationBuilder()
  .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
  .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true)
  .AddEnvironmentVariables()
  .AddCommandLine(args)
  .Build();

Let’s see what happens when we layer some configuration on. Pretend we’re in the Staging environment and here’s what’s out there:

appsettings.json has…

{
  "logging": {
    "includeScopes": false,
    "logLevel": {
      "default": "Debug"
    }
  }
}

appsettings.Staging.json has…

{
  "logging": {
    "logLevel": {
      "default": "Warning"
    }
  }
}

Environment variables:

set ASPNETCORE_ENVIRONMENT=Staging
set LOGGING__LOGLEVEL__MICROSOFT=Information

Command line arguments:

dotnet run --urls=http://*:5005

What does that yield?

aspnetcore_environment = "Staging"
logging:includescopes = "False"
logging:loglevel:default = "Warning"
logging:loglevel:microsoft = "Information"
urls = "http://*:5005"

There are a few interesting things here.

  • If you use the same key in an override configuration, it will replace the corresponding value.
  • You can add configuration at override time, but you can’t remove things. The best you can do is override a value with an empty string.
  • The ASPNETCORE_ENVIRONMENT value is Staging, which is a sort of magic value for ASP.NET Core. I’ll talk more about that later in Environment Variable Prefixes. That said, you’ll also see it in the config as a flattened value if you choose to import everything. It keeps the underscore in its name because only a double underscore __ (the usual delimiter) gets converted to :.
  • The urls value will be picked up by the ASP.NET Core web host and that’s what port it’ll listen on. If you add command line arguments to your config, you’ll see them there, too.

I mentioned overriding with an empty string as a “fake way” to remove configuration. Specifying null as a value, even in JSON config, doesn’t work - the value gets read in as an empty string instead of null. Further, there’s no XML equivalent of null, nor is there one in most other providers.

Given everything in the config system is a key/value string pair, the closest you can get, then, is to set things to empty strings. When you read values, check for String.IsNullOrEmpty() before using the value.

Since you can’t remove things, specify as little configuration as possible and fall back to sensible defaults in your code when configuration isn’t there. This will save you a lot of time when some base configuration specifies a value you don’t want and you can’t figure out how to override and “remove” it.
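For example, here’s a minimal sketch of that defensive read, using the config built in the earlier builder example (the key and fallback value are just illustrative):

// A key that was never set and a key that was "removed" by an empty-string
// override both need the same treatment.
var connection = config["components:database:connection"];
if (String.IsNullOrEmpty(connection))
{
    // Hypothetical fallback default.
    connection = "Data Source=localhost";
}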

Ordinal Collections (Arrays)

Ordinal collections (think arrays) are a sort of interesting special case in configuration. It’s pretty easy to think about it when using JSON like this:

{
    "components": [{
        "database": {
            "enabled": true
        }
    }, {
        "files": {
            "enabled": false
        }
    }]
}

It’s an array of two objects. But how does that flatten out into key/value pairs?

This is a big one, since JSON, INI, XML, environment variables, command line parameters, and other config sources all need to work together. You don’t have “arrays” in environment variables. So what does it look like?

The answer is that numeric 0-based keys get generated for each element. The flattened config looks like this:

components:0:database:enabled = "True"
components:1:files:enabled = "False"

Knowing how this works is huge because when you try to intermingle different configuration formats and override values, you have to generate the same key structure.

Let’s look at the same thing in XML:

<?xml version="1.0" encoding="utf-8" ?>
<root>
    <components name="0">
        <database enabled="True" />
    </components>
    <components name="1">
        <files enabled="False" />
    </components>
</root>

Notice in XML we had to manually specify the numeric key. As mentioned earlier, name has a special meaning in XML configuration. If you add a name attribute to an XML element it uniquely identifies that element. For ordinal collections the name is the index in the collection.

For reference let’s look at some bad XML configuration for ordinal collections:

<?xml version="1.0" encoding="utf-8" ?>
<!-- THIS WON'T WORK! MISSING NAMES! -->
<root>
    <components>
        <database enabled="True" />
    </components>
    <components>
        <files enabled="False" />
    </components>
</root>

This version, missing the name attributes, will generate:

components:database:enabled = "True"
components:files:enabled = "False"

Notice it’s missing the index part of the key. If you had JSON and XML config in the same system, the overrides would fail.

Let’s look at INI:

[components:0]
database:enabled=True

[components:1]
files:enabled=False

INI files don’t have a notion of ordinal collections so you need to manually specify the indexes in the keys.

Environment variables also get manual specification:

set COMPONENTS__0__DATABASE__ENABLED=True
set COMPONENTS__1__FILES__ENABLED=False

So do command line parameters:

mycommand.exe --components:0:database:enabled=True --components:1:files:enabled=False

What happens if you skip an index?

<?xml version="1.0" encoding="utf-8" ?>
<root>
    <components name="1">
        <database enabled="True" />
    </components>
    <components name="4">
        <files enabled="False" />
    </components>
</root>

It doesn’t matter. It’s all just string key/value. The above XML will become:

components:1:database:enabled = "True"
components:4:files:enabled = "False"

Given that, how do we override things?

Let’s say we start with a JSON file like we had before:

{
    "components": [{
        "database": {
            "enabled": true
        }
    }, {
        "files": {
            "enabled": false
        }
    }]
}

We want to enable the files component at runtime via the environment. To do that, we’d set an environment variable like this:

set COMPONENTS__1__FILES__ENABLED=True

If you then layer environment variables over the JSON configuration you’ll get the desired effect:

components:0:database:enabled = "True"
components:1:files:enabled = "True"

How would you layer two JSON files to get this to work?

You have to fake out the indexing mechanism. You can do this one of two ways. First, you can use empty objects to pad out your overrides file:

{
    "components": [{}, {
        "files": {
            "enabled": true
        }
    }]
}

The presence of the empty object there pushes the index of the “files” object forward so it matches the original index.

The other option is to just specify the index right in a key:

{
    "components:1": {
        "files": {
            "enabled": true
        }
    }
}

Notice the index is in the key and there’s no array at all. Either way it will flatten out and get the desired result.

The complexity around ordinal collections is something to consider when you’re picking a configuration format. Especially if you want to override something in the environment or via command line at runtime, you’ll have to know which index to use in your override.
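To make the indexing concrete, here’s a small sketch that walks the ordinal collection from the JSON example back out of a built configuration. It uses GetSection and GetChildren, which I’ll cover more in the object model section below; the config variable name is just whatever you called your built configuration.

// Each child of "components" is keyed by its index ("0", "1", ...).
foreach (var component in config.GetSection("components").GetChildren())
{
    // Each index has a single child named for the component ("database", "files").
    foreach (var item in component.GetChildren())
    {
        // Prints, for example: [0] database enabled = True
        Console.WriteLine("[{0}] {1} enabled = {2}", component.Key, item.Key, item["enabled"]);
    }
}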

No Built-In Validation

One of the things that made the old System.Configuration mechanism nice was the ability to put pretty rich type converters, default values, and configuration validation into the ConfigurationSection you write. The new Microsoft.Extensions.Configuration mechanism doesn’t have any of this. You can sort of fake it by binding configuration to objects (which I’ll cover later) but there’s no notion of configuration “schema” or any sort of validation/annotation you can provide to ensure values are in a form you expect.

To that end, make sure you validate your configuration values before using them. Maybe that’s parsing things into strong object models (see below), maybe it’s checking that values are in an expected format. It’s not built in so it’s up to you. In the next section I’ll show you an example of how to use Data Annotations validation with configuration binding.

Also, use String.IsNullOrEmpty() to check for presence/absence of values. If you only check against null then you won’t be able to “remove” configuration later if you need to by setting it to an empty string.

Configuration Object Model

It really helps to know how to navigate around the configuration object model if you’re going to do more than read just a small set of values.

You start your configuration with a ConfigurationBuilder. This is the object to which you’ll attach the various things providing configuration.

var builder = new ConfigurationBuilder();

builder.AddJsonFile("appsettings.json")
       .AddXmlFile("appsettings.xml")
       .AddIniFile("appsettings.ini")
       .AddEnvironmentVariables()
       .AddCommandLine(args);

The builder doesn’t actually invoke any of the things that read configuration. It just gives you an opportunity to specify your configuration sources. It also has a Properties dictionary on it that you can use to coordinate how sources get registered. For example, if you wanted to ensure a particular source only gets registered one time with a ConfigurationBuilder, you could track that in the Properties dictionary.
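Here’s a rough sketch of what that might look like - an extension method that only registers its source the first time it’s called. The extension name, marker key, and file name are all made up for illustration:

using Microsoft.Extensions.Configuration;

public static class MyConfigurationExtensions
{
    public static IConfigurationBuilder AddMySettingsOnce(this IConfigurationBuilder builder)
    {
        // Use the builder's Properties dictionary to remember we already ran.
        const string marker = "mysettings:registered";
        if (!builder.Properties.ContainsKey(marker))
        {
            builder.Properties[marker] = true;
            builder.AddJsonFile("mysettings.json", optional: true);
        }

        return builder;
    }
}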

Each time you call one of the extensions (AddJsonFile, AddXmlFile), it adds an IConfigurationSource to the ConfigurationBuilder. The ConfigurationBuilder keeps track of these until you tell it to Build().

Each IConfigurationSource is responsible for building an IConfigurationProvider. When ConfigurationBuilder.Build() is called, each IConfigurationSource.Build() is called in turn to build the configuration providers.

An IConfigurationProvider is what actually reads in and parses the configuration. It provides a normalized view on top of the configuration so the system can query the key/value pairs and have them all nice, flat, and colon-delimited.

You have a set of these configuration providers, so you need something to handle the “merging” of all the providers and provide that single, unified view. This is where the IConfigurationRoot comes in. It keeps track of the final, built set of providers. This is what comes out of ConfigurationBuilder.Build().

When you ask an IConfigurationRoot for a configuration item, it iterates through the set of providers (in reverse registration order - that’s how the “override” works) and returns the first value it finds. If no value is found, it returns null.

From the IConfigurationRoot you can ask directly for a key like logging:level or you can ask for an IConfigurationSection, which gives you a localized view of a sub-tree of the configuration. root.GetSection("logging") will get you the part of configuration that starts with logging:.

Everything under the IConfigurationRoot is an IConfigurationSection. This is where you’ll spend most of your time. An IConfigurationSection has these methods and properties:

  • Key: The local config key based on the current section. If you were looking at the logging:level section, the Key would be level.
  • Path: The full path to the key from the root of config. This would be like logging:level.
  • Value: If the configuration key has a value, this is it. Otherwise this value will be null.
  • this[key]: The configuration value of a child of this section.
  • GetSection(key): Gets a child IConfigurationSection of this section.
  • GetChildren(): Gets the set of all child IConfigurationSection values from this section.
  • GetReloadToken(): Gets the change token that the configuration system is watching for configuration changes.

This is easier to see if we look at some code. Let’s say we have the following configuration:

appsettings.json:

{
  "debug": true,
  "logging": {
    "includeScopes": false,
    "logLevel": {
      "default": "Debug"
    }
  }
}

overrides.json:

{
  "logging": {
    "logLevel": {
      "default": "Warning"
    }
  }
}

The flattened configuration will look like:

debug = "True"
logging:includescopes = "False"
logging:loglevel:default = "Warning"

Let’s build the configuration and wander around.

// This will track the configuration sources we'll merge.
var builder = new ConfigurationBuilder();

// Add two JsonConfigurationProviders to the builder.
builder.AddJsonFile("appsettings.json")
       .AddJsonFile("overrides.json");

// Run through all the providers and build up the sources.
// This will actually read the JSON files and parse them.
var configRoot = builder.Build();

// Ask for a value by absolute path. This will be
// the string "False"
var includeScopes = configRoot["logging:includescopes"];

// Grab the logging section so we can look at it.
var loggingSection = configRoot.GetSection("logging");
foreach(var child in loggingSection.GetChildren())
{
    // Inspect the path, key, and value of each child.
    // This will output:
    //
    // logging:includeScopes (includeScopes) = False
    // logging:logLevel (logLevel) = (null)
    //
    // Notice the path is the absolute path, the key is
    // relative to the parent section. Also notice it's
    // only _immediate_ children you get.
    Console.WriteLine(
      "{0} ({1}) = {2}",
      child.Path,
      child.Key,
      child.Value ?? "(null)");
}

// You can get the logging:loglevel section by absolute path...
var logLevelSection = configRoot.GetSection("logging:loglevel");
// ...or you can get it as a child of the logging section we got earlier...
logLevelSection = loggingSection.GetSection("loglevel");

// Now we can look at the children again:
foreach(var child in logLevelSection.GetChildren())
{
    // This will output:
    //
    // logging:logLevel:default (default) = Warning
    Console.WriteLine(
      "{0} ({1}) = {2}",
      child.Path,
      child.Key,
      child.Value ?? "(null)");
}

// What happens if we ask for something that doesn't exist?
// WE GET BACK AN EMPTY SECTION, NOT AN EXCEPTION. You can't
// ask for a specific section by name to test if it was defined
// in configuration.
var doesNotExist = configRoot.GetSection("does:not:exist");

// This will output:
//
// does:not:exist (exist) = (null)
Console.WriteLine(
  "{0} ({1}) = {2}",
  doesNotExist.Path,
  doesNotExist.Key,
  doesNotExist.Value ?? "(null)");

// If you absolutely need to check to see if something _exists_
// rather than if it's just null (undefined), you need to look at
// the _parent section_ to see if it has a child with the name.
// Make sure you do a case-insensitive comparison! Keys aren't
// case sensitive.
var checkSectionExists =
    configRoot.GetSection("does:not")
              .GetChildren()
              .Any(c =>
                   c.Key.Equals("exist", StringComparison.OrdinalIgnoreCase));

As you can see, there is a lot of power in the new system, but also a little challenge with respect to checking for values or existence of an item. Also, since everything comes out as strings, it means a lot of parsing… but hold on, check this next section out.

Binding to Objects

Since everything comes out as strings one of the first things you’re likely going to do is parse the strings into strongly typed objects. You’re going to have a lot of Int32.TryParse() and stuff all over the place. But wait! There’s another package for you!

Microsoft.Extensions.Configuration.Binder brings the parsing to you. If you don’t want to stay in string-land, this package adds helpful configuration binding/conversion to the mix.

Let’s say we have a configuration that looks like this:

debug = "True"
logging:includescopes = "False"
logging:loglevel:default = "Warning"
logging:maxmessagelength = "255"

Cool. Once we’ve built up the configuration root, we can start getting parsed values pretty easily:

// maxMessageLength will be the integer 255
// Note keys are case-insensitive so we can use camelCase here
// in our code if we really want.
var maxMessageLength = configRoot.GetValue<int>("logging:maxMessageLength");

// debug will be the Boolean true
var debug = configRoot.GetValue<bool>("debug");

// includeScopes will be the Boolean false
var loggingSection = configRoot.GetSection("logging");
var includeScopes = loggingSection.GetValue<bool>("includeScopes");

OK, that works well for getting one-off values. What if we want to get a whole section of values at the same time? Sure. You can create a couple of object classes like this:

public class LoggingSection
{
    public bool IncludeScopes { get; set; }
    public int MaxMessageLength { get; set; }
    public LogLevelSettings LogLevel { get; set; }
}

public class LogLevelSettings
{
    public Microsoft.Extensions.Logging.LogLevel Default { get; set; }
    public Microsoft.Extensions.Logging.LogLevel Identity { get; set; }
}

Now you can read and parse the whole thing at once.

var logSettings = configRoot.GetSection("logging").Get<LoggingSection>();

// This will output:
//
// Max message length: 255
// Include scopes: False
// Default level: Warning
// Identity level: Trace
Console.WriteLine("Max message length: {0}", logSettings.MaxMessageLength);
Console.WriteLine("Include scopes: {0}", logSettings.IncludeScopes);
Console.WriteLine("Default level: {0}", logSettings.LogLevel.Default);
Console.WriteLine("Identity level: {0}", logSettings.LogLevel.Identity);

That’s pretty cool, right?

But, wait, how did logSettings.LogLevel.Identity become LogLevel.Trace?

You may have noticed there’s no configuration for that. Since it’s not defined, the value becomes default(T) - in this case, the default value for the Microsoft.Extensions.Logging.LogLevel enumeration, which is LogLevel.Trace.

Under the covers, the configuration binder is mostly using type converters obtained from System.ComponentModel.TypeDescriptor.GetConverter(Type) to convert values from string. If you need more robust support, like deserializing list values or custom types, you’ll need to implement .NET type conversion on your type so it can be converted from a string.
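If you go that route, a rough sketch might look like this. The Endpoint type and its “host:port” string format are hypothetical - the point is just that a TypeConverter attribute on your type lets the binder convert the raw string for you:

using System;
using System.ComponentModel;
using System.Globalization;

// Hypothetical setting type stored in config as a single string like "localhost:5000".
[TypeConverter(typeof(EndpointConverter))]
public class Endpoint
{
    public string Host { get; set; }
    public int Port { get; set; }
}

public class EndpointConverter : TypeConverter
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
        => sourceType == typeof(string) || base.CanConvertFrom(context, sourceType);

    public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
    {
        // Split the "host:port" string and build the strongly typed value.
        var parts = ((string)value).Split(':');
        return new Endpoint { Host = parts[0], Port = Int32.Parse(parts[1]) };
    }
}

With that in place, a config value like endpoints:primary = "localhost:5000" should bind straight into an Endpoint property on your settings class.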

Anyway, once you have it deserialized, you can validate on the values if needed. For example, you could use the System.ComponentModel.DataAnnotations validation pretty easily with some attributes…

public class LoggingSection
{
    public bool IncludeScopes { get; set; }

    // The max length has to be between 10 and 100...
    // but the configuration is set at 255 right now!
    [Range(10, 100)]
    public int MaxMessageLength { get; set; }
    public LogLevelSettings LogLevel { get; set; }
}

Then run the validation yourself:

var loggingSection = configRoot.GetSection("logging").Get<LoggingSection>();
var validationResults = new List<ValidationResult>();

// This will output:
//
// Logging configuration is invalid!
// - The field MaxMessageLength must be between 10 and 100.
if (!Validator.TryValidateObject(
      loggingSection,
      new ValidationContext(loggingSection),
      validationResults,
      true))
{
  Console.WriteLine("Logging configuration is invalid!");
  foreach (var validationResult in validationResults)
  {
    Console.WriteLine("- {0}", validationResult.ErrorMessage);
  }
}

Default Providers

Out of the box there are several providers shipped (you can always see the latest in the repo):

  • Azure Key Vault (bind secrets from the vault into your app)
  • Command Line
  • Environment Variables
  • INI Files
  • JSON Files
  • Key-Per-File (bind a directory of files as config - file name is the key, file contents are the value)
  • User Secrets (dev-time secrets that normally get replaced by something like Azure Key Vault in production)
  • XML Files

Any or all of these can be used together to layer configuration into your application. You can also write your own custom provider to, for example, read configuration out of Redis or SQL.
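
As a quick sketch of layering (assuming the JSON file, environment variable, and command line provider packages are all referenced), later sources win whenever the same key shows up more than once:

// appsettings.json provides the defaults, environment variables override it,
// and anything on the command line wins over everything else.
var configRoot = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddEnvironmentVariables("CONFIGURATION_")
    .AddCommandLine(args)
    .Build();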

Refreshing on Change

When configuration is initially built the usual pattern is to read all the configuration from the underlying source and store it in-memory in a dictionary for later retrieval.

Some (but not all) configuration providers allow for the configuration source to change at runtime and have that source reload its configuration to pick up changes.

Out of the box, the providers that allow for reload on change are the file-based providers:

  • INI Files
  • JSON Files
  • XML Files
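
For those providers, a minimal sketch of picking up changes (assuming an appsettings.json on disk) is to opt in with reloadOnChange and register a callback through ChangeToken.OnChange:

using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Primitives;

var configRoot = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    .Build();

// The reload token gets replaced after each reload, so the factory re-fetches it every time.
ChangeToken.OnChange(
    () => configRoot.GetReloadToken(),
    () => Console.WriteLine("Configuration reloaded; debug is now: {0}", configRoot["debug"]));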

Conversely, some providers do not handle change events. In this case, you'd have to manually rebuild configuration or handle the change/update yourself. Usually that's either because there's not a reasonable way to receive a change event, or because it doesn't make sense for the values to change at runtime:

  • Azure Key Vault
  • Command Line
  • Environment Variables
  • Key-Per-File
  • User Secrets (the underlying provider here is JSON files but the code specifically turns off the reload on change flag)

Environment Variable Prefixes

When you use environment variables as a configuration source, you may get a lot of junk you didn’t really want. To avoid that, you can signal the provider to only include variables that have a given prefix.

Let’s say you have these environment variables:

set RANDOM_VALUE=BlipBlipBlip
set COMPONENTS__DATABASE__CONNECTION=connection-string
set COMPONENTS__FILES__PATH=/etc/path
set LOGGING__ENABLED=True
set LOGGING__LEVEL=Debug

If you added all the environment variables like this…

var config = new ConfigurationBuilder()
    .AddEnvironmentVariables()
    .Build();

You’d get everything, maybe even stuff you don’t want:

components:database:connection = "connection-string"
components:files:path = "/etc/path"
logging:enabled = "True"
logging:level = "Debug"
random_value = "BlipBlipBlip"

Uh oh, that random_value snuck in. Instead, let’s prefix the variables we like:

set RANDOM_VALUE=BlipBlipBlip
set CONFIGURATION_COMPONENTS__DATABASE__CONNECTION=connection-string
set CONFIGURATION_COMPONENTS__FILES__PATH=/etc/path
set CONFIGURATION_LOGGING__ENABLED=True
set CONFIGURATION_LOGGING__LEVEL=Debug

Now specify a prefix in the configuration build:

var config = new ConfigurationBuilder()
    .AddEnvironmentVariables("CONFIGURATION_")
    .Build();

The prefix gets trimmed off and the correct environment variables make it in:

components:database:connection = "connection-string"
components:files:path = "/etc/path"
logging:enabled = "True"
logging:level = "Debug"

Some prefixes have special meaning. Out of the box, several prefixes automatically map into connection string configuration:

  • MYSQLCONNSTR_ - MySQL connection string
  • SQLAZURECONNSTR_ - SQL Azure connection string
  • SQLCONNSTR_ - SQL connection string
  • CUSTOMCONNSTR_ - Custom provider connection string

If you use these, the values come through differently. Take these environment variables:

set MYSQLCONNSTR_MYSQLDATABASE=connect-to-mysql
set SQLAZURECONNSTR_SQLAZUREDATABASE=connect-to-sqlazure
set SQLCONNSTR_SQLDATABASE=connect-to-sql
set CUSTOMCONNSTR_CUSTOMRESOURCE=connect-to-custom

If you bring all of the environment variables in (don’t specify a prefix) you get:

connectionstrings:customresource = "connect-to-custom"
connectionstrings:mysqldatabase = "connect-to-mysql"
connectionstrings:mysqldatabase_providername = "MySql.Data.MySqlClient"
connectionstrings:sqlazuredatabase = "connect-to-sqlazure"
connectionstrings:sqlazuredatabase_providername = "System.Data.SqlClient"
connectionstrings:sqldatabase = "connect-to-sql"
connectionstrings:sqldatabase_providername = "System.Data.SqlClient"

Notice everything landed under the connectionstrings key, and each known prefix also added a companion _providername value identifying which database provider to use (the custom connection string doesn't get one). This appears to be primarily useful for pulling connection strings that Azure exposes as environment variables into application configuration.
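
Once they're in, there's a helper for reading them back. A quick sketch, assuming configRoot was built with AddEnvironmentVariables() and the variables above are set:

// Reads connectionstrings:sqldatabase, so this is "connect-to-sql".
var sqlConnection = configRoot.GetConnectionString("sqldatabase");

// The provider name, if you need it, sits right next to it.
var sqlProvider = configRoot["connectionstrings:sqldatabase_providername"];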

ASP.NET Core also has several things it uses in the ASPNETCORE_ prefix. The complete list is in the documentation but some common ones are:

  • ASPNETCORE_ENVIRONMENT: Defines the environment name. Typically Development, Staging, or Production.
  • ASPNETCORE_URLS: Defines the addresses/ports on which the server should listen.
  • ASPNETCORE_WEBROOT: If your web root isn’t wwwroot, this override can be used to point to the new location.

Advanced Command Line

When you add command line parameters to configuration, the parser lets you use one of two formats:

  • Each argument is a pair separated by = like --key=value
  • Argument pairs can alternate key/value/key/value with spaces between each, like /key value

It doesn’t matter which format you use, but you have to pick one. You can’t mix and match.

Rules for specifying keys:

  • Keys can start with -, --, or / (and you can mix/match these)
  • If a key starts with - it’s considered to be a shortened version of a longer key so you need to provide a mapping.
  • If you specify the same key on the command line twice, last in wins.

Here are some examples:

# YES
myapp.exe --key1 value1 /key2 value2 -k3 value3
myapp.exe --key1=value1 --key2=value2 /key3=value3

# NO (can't use both space and equals for delimiters)
myapp.exe --key1 value1 /key2=value2

Notice in that first example line there’s a short switch -k3. To use that, we’d need to provide a mapping to a longer key or we’ll get an exception.

var mappings = new Dictionary<string, string>
{
    // Be sure to include the leading dash on the key
    // but leave it off the value!
    { "-k3", "key3" }
};
var config = new ConfigurationBuilder()
    .AddCommandLine(args, mappings)
    .Build();

Finally, be aware that, like with environment variable prefixes, some systems you work with will find special meaning in some command line switches. For example, if you’re working with ASP.NET Core, the web host accepts a --urls command line parameter as the set of locations on which it should listen.
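
For example, using the same hypothetical myapp.exe from earlier, launching an ASP.NET Core app like this tells the web host where to listen:

myapp.exe --urls "http://localhost:5005"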

There’s no built-in filtering for command line arguments the way there is for environment variables with prefixes, so be aware when you’re building your app.

Advanced XML

Given the previous .NET configuration system was so rooted in XML it’s understandable that many folks coming in start out looking at how to transform existing XML config into new XML config.

It’s your call if you decide to do that, but be aware of a few things when using XML:

  • You can’t include a DTD or parsing will fail.
  • You can’t include a namespace or parsing will fail.
  • You can’t include elements that may be perceived as “duplicates” or parsing will fail. You must disambiguate with the name attribute.

That last one can easily bite you. Let’s look at an XML file with some gotchas:

<?xml version="1.0" encoding="utf-8" ?>
<!-- THIS IS AN INVALID FILE -->
<root>
  <components route="100">
    <database enabled="True" />
  </components>
  <components route="200">
    <database enabled="False" />
    <files enabled="False" />
  </components>
</root>

We don’t have a DTD or any namespaces, so that’s fine. But what else is wrong?

There are two components elements that the parser will see as identical. Even though the route values are different, the parser will still see them as redefining the same thing twice. We need to disambiguate with names. Let’s change route to name.

<?xml version="1.0" encoding="utf-8" ?>
<!-- THIS IS A VALID FILE -->
<root>
  <components name="100">
    <database enabled="True" />
  </components>
  <components name="200">
    <database enabled="False" />
    <files enabled="False" />
  </components>
</root>

Note even though there’s a database element in both components elements, they’re not seen as identical because the parent on each has a name. It’ll flatten out to this:

components:100:database:enabled = "True"
components:100:name = "100"
components:200:database:enabled = "False"
components:200:files:enabled = "False"
components:200:name = "200"

As you can see, name shows up in the key hierarchy and becomes a key/value of its own. In any case, you need that to disambiguate.

If you have multiple XML files, you can add a prefix on the elements by setting a name on the root element:

<?xml version="1.0" encoding="utf-8" ?>
<root name="settings">
  <components>
    <database enabled="False" />
    <files enabled="False" />
  </components>
</root>

This will become:

settings:components:database:enabled = "False"
settings:components:files:enabled = "False"
settings:name = "settings"

One thing you can do in XML that you can’t in the other providers is encrypt the XML. Encrypted XML generally looks like this:

<?xml version="1.0" encoding="utf-8" ?>
<root name="settings">
  <components>
    <database enabled="False" />
    <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
                   xmlns="http://www.w3.org/2001/04/xmlenc#">
      <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#aes256-cbc" />
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <EncryptedKey xmlns="http://www.w3.org/2001/04/xmlenc#">
          <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#kw-aes128" />
          <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
            <KeyName>myKey</KeyName>
          </KeyInfo>
          <CipherData>
            <CipherValue>KPE...SNg==</CipherValue>
          </CipherData>
        </EncryptedKey>
      </KeyInfo>
      <CipherData>
        <CipherValue>Vwl8...64a</CipherValue>
      </CipherData>
    </EncryptedData>
  </components>
</root>

The encrypted XML portion entirely replaces the element being encrypted along with all its contents. This is commonly used to encrypt passwords or other secrets in XML.
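
If you're curious what producing that looks like, here's a minimal sketch using the standard System.Security.Cryptography.Xml APIs. This is generic .NET XML encryption rather than anything specific to the configuration provider, and the key handling (wrapping and storing the AES key so the value can be decrypted later) is omitted entirely:

using System.Security.Cryptography;
using System.Security.Cryptography.Xml;
using System.Xml;

var doc = new XmlDocument { PreserveWhitespace = true };
doc.Load("settings.xml");

// Encrypt the <database> element and everything inside it.
var element = (XmlElement)doc.GetElementsByTagName("database")[0];

using (var aes = Aes.Create())
{
    var encryptedXml = new EncryptedXml();
    var cipherText = encryptedXml.EncryptData(element, aes, false);

    var encryptedData = new EncryptedData
    {
        Type = EncryptedXml.XmlEncElementUrl,
        EncryptionMethod = new EncryptionMethod(EncryptedXml.XmlEncAes256Url),
        CipherData = new CipherData(cipherText)
    };

    // Swap the original element out for the <EncryptedData> block.
    EncryptedXml.ReplaceElement(element, encryptedData, false);
}

doc.Save("settings.encrypted.xml");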

A full example showing both how to encrypt and decrypt XML in configuration is seen in the XML configuration provider unit tests.

Testing Using launchSettings.json

You will no doubt want to test your configuration setup to ensure things are being read in/parsed correctly. That’s easy when it’s file based, but what about command line parameters and environment variables?

Visual Studio and .NET Core allow you to put a file called launchSettings.json in the Properties folder under your project. You’ll get one for free with an ASP.NET project, but it works fine with .NET console apps, too.

The full JSON schema for launchSettings.json is available but here’s one that works for a .NET console app:

{
  "profiles": {
    "No Config": {
      "commandName": "Project",
      "environmentVariables": {
      }
    },
    "Environment Config": {
      "commandName": "Project",
      "environmentVariables": {
        "CONFIGURATION_COMPONENTS__DATABASE__CONNECTION": "connection-string",
        "CONFIGURATION_COMPONENTS__FILES__PATH": "/etc/path",
        "MYSQLCONNSTR_MYSQL": "MySQL"
      }
    },
    "Command Line Config": {
      "commandName": "Project",
      "commandLineArgs": "--components:database:connection=connection-string /components:files:path=/etc/path"
    }
  }
}

When you add that to your project, Visual Studio will detect that you have three different run configurations you can choose from. They’ll appear as a dropdown next to the green “Play” button so you can select one before you start debugging.

This is a really great way to do quick configuration changes without modifying base configuration files or code.

In an ASP.NET Core project, you’ll likely see there’s an element under environmentVariables called ASPNETCORE_ENVIRONMENT. You can use that to switch your local dev work to emulate staging or production.
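
For example, adding a profile like this (the profile name is made up; ASPNETCORE_ENVIRONMENT is the standard variable) runs your app as though it were in the Staging environment:

"Emulate Staging": {
  "commandName": "Project",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Staging"
  }
}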

Key Takeaways

It’s a long article, so there are a lot of takeaways.

  • It’s all colon-delimited key/value pairs.
  • Override values by using the same key as a previously defined configuration item.
  • Specify as little configuration as possible and have your app function with reasonable defaults when the configuration isn’t present.
  • Ordinal collections (arrays) use numbered indexes for names.
  • Make sure you validate/parse configuration before using it.
  • Use String.IsNullOrEmpty() to check for presence of a config value.
  • Use object binding to handle conversion to strong types.
  • Consider data annotations validation to deal with configuration validation.
  • Only file-based providers handle reloading on change events.
  • Use environment variable prefixes to import subsets of the environment.
  • Don’t mix-and-match command line argument specs if you can avoid it.
  • With great XML power comes a great many gotchas.
  • Test environment variables and command line parameters using launchSettings.json.

powershell, windows comments edit

Two things crossed my path in a relatively short period of time that got me thinking:

  1. I read Scott Hanselman’s article on dotnet CLI completion for PowerShell and I liked it a lot. I didn’t know you could have custom completions.
  2. I have been working a lot with Kubernetes and kubectl in PowerShell and wanted completion for that… but you can only get it for bash, not PowerShell.

Challenge accepted.

A lot of Google-fu and some trial-and-error later, and I have a bridge that takes a PowerShell command line, passes it to bash, manually runs the completion function in bash, and hands the set of completions back to PowerShell.

Introducing: PSBashCompletions - a PowerShell module to enable bash completions to surface in PowerShell.

I published it as a module on the PowerShell Gallery so you can install it nice and easy:

Install-Module -Name PSBashCompletions -Scope CurrentUser

(I always install to my own profile because I don’t run as admin.)

You can also go grab it right from GitHub if you want.

To use it:

  1. Make sure bash.exe is in your path. If it’s not, the module will fall back to the location of Git for Windows (assuming git.exe can be found) and try to use the packaged bash.exe there. Failing that… you’re stuck. You need bash.
  2. Locate your bash completion script. Sometimes you can export this from the command itself (like kubectl - see the example just after this list); other times you can download it from the project (like git when using Git for Windows).
  3. Register the completer using Register-BashArgumentCompleter. Tell the completer which command you want to complete (kubectl) and where the completion script is (C:\completions\kubectl_completions.sh).
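
For example, kubectl can generate its own bash completion script (assuming kubectl is on your path; the output location is up to you):

kubectl completion bash > C:\completions\kubectl_completions.sh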

A registration looks like:

Register-BashArgumentCompleter kubectl C:\completions\kubectl_completions.sh

After that, in PowerShell you should be able to use the command and hit tab at the end of the line to get completions.

kubectl c<TAB>

That will complete all the commands starting with ‘c’ for kubectl.

I tried to test it a bunch, but I can’t guarantee it’ll work for every completion or every workstation.

I put troubleshooting instructions in the source readme so if it’s not working there are ways to figure it out. Using the -Verbose option when calling Register-BashArgumentCompleter is the first step to seeing what’s up. If completers in PowerShell encounter any errors, the messages get swallowed. The -Verbose option will tell you the basic bash command line the completer is going to try using so you can run it and see what happens.
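
That call looks something like this (-Verbose is the standard PowerShell common parameter):

Register-BashArgumentCompleter kubectl C:\completions\kubectl_completions.sh -Verbose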

I do have some demo/example completions I’ve exported (for git and kubectl) so you can try it out by grabbing those if you want.

Find something wrong? I’d love a PR to fix it.