ndepend, dotnet, vs

I love NDepend and I’ve been using it for a long time. It’s a great way to get a big-picture view of your application code whilst still being able to drill in for good details. However, I have to say, the latest version - v2020.1 - has the coolest dependency graph functionality. It’s a total overhaul of the feature and I think this is going to be my new go-to view in reports.

First, it’s worth checking out the video that walks you through how to use it. The help videos are great and super valuable.

I watched this and had to try it out. I went and grabbed the Orchard Core codebase, which at the time of this writing is one solution with 156 projects in it.

I got the new version of NDepend and installed the VS extension. It’s interesting to note there’s still a VS 2010 version of the extension available. Are you still using VS 2010? Please upgrade. Please. Also stop using IE.

Install NDepend extension for VS

I loaded up the Orchard Core solution, fired up a new NDepend project for it, and ran the analysis. (If you don’t know how to do this, there is so much help on it, just waiting for you.)

After that, I dived right into the new dependency graph. And… wow. Orchard Core is big. Here’s the initial eye chart you get, but don’t be overwhelmed - it’s a high-level view, right, like looking at a map of the world. I didn’t even bother letting you click to zoom in because it’s just too much.

The Orchard Core map of the world, in dependency graph form

I decided to start by looking at things the main OrchardCore.Cms.Web application references, to get a more specific picture of what the app is doing. I went to Solution Explorer and dragged the OrchardCore.Cms.Web project right into the Dependency Graph. This highlighted the project on the graph so I could zoom in.

OrchardCore.Cms.Web dependency graph

Looking at that, I noticed the red double-ended arrows under OrchardCore.DisplayManagement.Liquid. Hmmm. Let’s zoom in and check that out.

OrchardCore.DisplayManagement.Liquid dependency graph

Hmmm. There are three codependencies there, but let’s just pick one to figure out. Seems the OrchardCore.DisplayManagement.Liquid namespace uses stuff from OrchardCore.DisplayManagement.Liquid.Tags, but the Liquid.Tags namespace also uses stuff from the parent Liquid namespace. What exactly is that?

Easy enough to find out. Double-click on that double-arrow or right-click and select “Build a Graph made of Code Elements involved in this dependency.”

Dive into the OrchardCore.DisplayManagement.Liquid codependency graph

Whoa! Another really tall graph. You can see OrchardCore.DisplayManagement.Liquid.LiquidViewTemplate calls a bunch of stuff in the OrchardCore.DisplayManagement.Liquid.Tags namespace… but… what’s that little arrow at the bottom being called by OrchardCore.DisplayManagement.Liquid.Tags.HelperStatement.WriteToAsync?

OrchardCore.DisplayManagement.Liquid codependency graph

That looks like the culprit. Let’s just zoom in.

OrchardCore.DisplayManagement.Liquid.ViewBufferTextWriterContent is the culprit!

Aha! There is one class being referenced from the Liquid.Tags namespace back to the Liquid namespace - OrchardCore.DisplayManagement.Liquid.ViewBufferTextWriterContent. That’s something that may need some refactoring.
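
So the cycle boils down to something like this - a schematic in C#, not the actual Orchard Core source:

namespace OrchardCore.DisplayManagement.Liquid
{
    public class ViewBufferTextWriterContent { }

    public class LiquidViewTemplate
    {
        // The parent namespace uses the child namespace...
        private Tags.HelperStatement _statement;
    }
}

namespace OrchardCore.DisplayManagement.Liquid.Tags
{
    public class HelperStatement
    {
        // ...and the child namespace reaches back up into the parent.
        private ViewBufferTextWriterContent _content;
    }
}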

All done here? Click the “application map” button to get back to the top level world map view.

The application map button

That’s a really simple example showing how easy it is to start at the world view of things and explore your code. There’s so much more to see, too. You can…

  • Change the clustering and grouping display to get as high or low level as you want - maybe in simpler apps you don’t want to group as much but in more complex apps you want to be more specific about the grouping.
  • Show or hide third-party code - by default it’s hidden, but have you ever wondered how many dependencies you have on something like log4net?
  • Run a code query using CQL, put your cursor over a result, and see the corresponding item(s) highlighted in the dependency graph - see the example query just after this list.
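
For instance, a simple CQLinq query like this (a rough sketch from memory - check the NDepend CQLinq docs for exact syntax) lists large methods, and hovering over each result lights up the corresponding node on the graph:

from m in Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select m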

Truly, check out the video - it’s six minutes of your time well spent.

I’m going to go start loading up some source from some other projects (things I maybe can’t provide screenshots of?) and see if there are some easy refactorings I can find to improve them.

azure, git

I use Azure DevOps wikis a lot and I love me some PlantUML diagrams - they’re far easier to maintain than dragging lines and boxes around.

Unfortunately, Azure DevOps wiki doesn’t support PlantUML. There’s Mermaid.js support, but it’s a pretty old version that doesn’t support newer diagram types, so it’s very limited. They’ve also been very slow to update to the latest Mermaid.js version, which kind of leaves you stuck. Finally, it doesn’t seem like there’s any traction on getting PlantUML into Azure DevOps, so… we have to bridge that gap.

I bridged it by creating an automatic image generation script for PlantUML files. If you’re super anxious, here’s the code. Otherwise, let me explain how it works.

First, I made sure my wiki was published from a Git repository. I need to be able to access the files.

I used a combination of node-plantuml for generating PNG files from PlantUML diagrams along with watch to notify me of filesystem changes.
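
The full watcher script is in the example repo linked at the end of this post; here’s a trimmed-down sketch of the idea. It’s based on my reading of the node-plantuml generate() and watch createMonitor() APIs, so double-check those against the package docs:

// watch-diagrams.js - a minimal sketch, not the full script from the repo.
// npm install node-plantuml watch
const fs = require('fs');
const path = require('path');
const plantuml = require('node-plantuml');
const watch = require('watch');

const isPuml = (f) => path.extname(f) === '.puml';
const pngFor = (f) => f.replace(/\.puml$/, '.png');

function render(pumlFile) {
  // Generate a PNG next to the .puml file, with the same base name.
  const gen = plantuml.generate(pumlFile, { format: 'png' });
  gen.out.pipe(fs.createWriteStream(pngFor(pumlFile)));
}

watch.createMonitor('.', (monitor) => {
  monitor.on('created', (f) => { if (isPuml(f)) render(f); });
  monitor.on('changed', (f) => { if (isPuml(f)) render(f); });
  monitor.on('removed', (f) => {
    // Keep things in sync - when a .puml goes away, remove its image.
    if (isPuml(f)) fs.unlink(pngFor(f), () => {});
  });
});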

Once that script is running, I can create a .puml file with my diagram. Let’s call it Simple-Diagram.puml:

@startuml
Bob->Alice : hello
@enduml

When I save that file, the script will see the new .puml file and kick off the image generator. It will generate a .png file with the same name as the .puml (so Simple-Diagram.png in this case). As the .puml file changes the generated image will update. If you delete the .puml file, the image will also be removed.

Simple diagram example from PlantUML

Now in your wiki page, you just include the image.

![PlantUML Diagram](Simple-Diagram.png)

The drawback to this approach is that you have to render the images on your editing client - it’s not something that happens via a build. I didn’t want a build generating images and committing them back to the repo anyway, so I don’t mind that too much; you can look at integrating it into a build if you like.

The benefit is that it doesn’t require a PlantUML server, and it doesn’t require you to run things in a container to get it working… it just works. Now, I think under the covers the node-plantuml module is running a PlantUML .jar file to do the rendering, but I’m fine with that.

The editing experience is pretty decent. Using a Markdown preview extension, you can see the diagram update in real-time.

VS Code and PlantUML

I have an example repo here with everything wired up! It has the watcher script, VS Code integration, the whole thing. You could just take that and add it to any existing repo with an Azure DevOps wiki and you’re off to the races.

azure, build, javascript

I’m in the process of creating some custom pipeline tasks for Azure DevOps build and release. I’ve hit some gotchas that hopefully I (and you!) can avoid. Some of these are undocumented, some are documented but maybe not so easy to find.

Your Task Must Be In a Folder

You can’t put your task.json in the root of the repo. Convention assumes that your task is in a folder of the same name as the task. Azure DevOps won’t be able to find your task in the extension if it’s in the root.

# THIS DOESN'T WORK

/
+- task.json
+- vss-extension.json

I messed with this for a really long time - you really do just need that task folder. They show it that way in the tutorial, but they never explain the significance.

# THIS WORKS

/
+- YourTaskName/
|  +- task.json
+- vss-extension.json
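
The reason that folder matters: the task contribution in vss-extension.json points at the folder by name. Here’s a trimmed sketch of the relevant manifest bit (IDs and names are placeholders):

{
  "contributions": [
    {
      "id": "your-task-name",
      "type": "ms.vss-distributed-task.task",
      "targets": ["ms.vss-distributed-task.tasks"],
      "properties": {
        "name": "YourTaskName"
      }
    }
  ]
}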

You Can Have Multiple Task Versions in One Extension

You may have seen tasks show up like NuGetInstaller@0 and NuGetInstaller@1 in Azure DevOps. You can create a versioned task using a special folder structure where the name of the task is at the top and each version is below:

/
+- YourTaskName/
|  +- YourTaskNameV0/
|     +- task.json
|  +- YourTaskNameV1/
|     +- task.json
+- vss-extension.json

This repo has a nice example showing it in action.

Don’t Follow the Official Azure Pipeline Tasks Repo

The official repo with the Azure DevOps pipeline tasks is a great place to learn how to use the task SDK and write a good task… but don’t follow their pattern for your repo layout or build. Their tasks don’t get packaged as a VSIX or deployed as an extension the way your custom tasks will, so looking for hints on packaging will lead you down a pretty crazy path.

This repo is a better example, with some tasks that… help you build task extensions. It’s a richer sample of how to build and package a task extension.

You Have Two Versions to Change

In a JavaScript/TypeScript-based project, there are like four different files that have versions:

  • The root package.json
  • The VSIX manifest vss-extension.json
  • The task-level package.json
  • The task-level task.json

The only versions that matter in Azure DevOps are the VSIX manifest vss-extension.json and the task-level task.json. The Node package.json versions don’t matter.

In order to get a new version of your VSIX published, the vss-extension.json version must change. Even if the VSIX fails validation on the Marketplace, that version is considered “published” and you can’t retry without updating the version.

Azure DevOps appears to cache versions of your tasks, too, so if you publish a VSIX and forget to update your task.json version, your build may not get the latest version of the task. You need to increment that task.json version for your new task to be used.
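
For reference, here’s where those two versions live (the numbers are just examples). In vss-extension.json the version is a plain string:

"version": "1.2.3"

In task.json it’s an object with separate parts:

"version": {
  "Major": 1,
  "Minor": 2,
  "Patch": 3
}

Bump both together and you stay out of trouble.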

You’re Limited to 50MB

Your VSIX package must be less than 50MB in size or the Visual Studio Marketplace will reject it.

This is important to know, because it combines with…

Each Task Gets Its Own node_modules

Visual Studio Marketplace extensions pre-date the loader mechanism so your extension, as a package, isn’t actually used as an atomic entity. Let’s say you have two tasks in an extension:

/
+- FirstTask/
|  +- node_modules/
|  +- package.json
|  +- task.json
+- SecondTask/
|  +- node_modules/
|  +- package.json
|  +- task.json
+- node_modules/
+- package.json
+- vss-extension.json

Your build/package process might have some devDependencies like TypeScript in the root package.json and each task might have its own set of dependencies in its child package.json.

At some point, you might see that both tasks need mostly the same stuff and think to move the dependency up to the top level - Node module resolution will still find it, TypeScript will still compile things fine, all is well on your local box.

Don’t do that. Keep all the dependencies for a task at the task level. It’s OK to share devDependencies; don’t share dependencies.

The reason is that, even if Node module resolution works on your dev box and build machine, when the Azure DevOps agent runs the task, the “root” is wherever that task.json file is. No modules will be found above there.
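
So a safe split looks something like this - package names and versions here are just examples. In FirstTask/package.json, runtime dependencies live with the task:

{
  "dependencies": {
    "azure-pipelines-task-lib": "^2.9.3"
  }
}

In the root package.json, keep only shared build-time tooling:

{
  "devDependencies": {
    "typescript": "^3.7.4"
  }
}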

What that means is you’re going to have a lot of duplication across tasks (and task versions). At a minimum, you know every task requires the Azure Pipeline Task SDK, and a pretty barebones “Hello World” level task packs in at around 1MB in the VSIX if you’re not being careful.

This means you can pretty easily hit that 50MB limit if you have a few tasks in a single extension.

It also means if you have some shared code, like a “common” library, it can get a little tricky. You can’t really resolve stuff outside that task folder.

You might think “I can Webpack it, right?” Nope.

You Can’t Webpack Build Tasks

I found this one out the hard way. It may have been possible in the past, but as of right now the localization functions in the Azure Pipeline Task SDK are hard-tied to the filesystem. If the SDK needs to display an error message, it goes to look up its localized message data, which requires several .resjson files in the same folder as the SDK module. Failing that, it tries some fallback… and eventually you get a stack overflow. The Azure Pipelines Tool SDK also has localized stuff, so even if you figured out how to move all the Task SDK localization files to the right spot, you’d also have to merge in the localization files from other libraries.

The best you can do is make use of npm install --production and/or npm prune --production to reduce the node_modules folder as much as possible. You can also selectively delete files - for example, remove all the *.ts files from the node_modules folder. A few of these tricks can save a lot of space.
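
As a rough sketch, a packaging flow might look like this (assuming tfx-cli to create the VSIX; adjust for your own build):

cd YourTaskName
npm install              # full install so the TypeScript compile works
npx tsc                  # build the task
npm prune --production   # strip devDependencies before packaging
cd ..
npx tfx extension create --manifest-globs vss-extension.json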

It’s Best to Have One Task Per Repo Per Extension

All of the size restrictions, complexity around trimming node_modules, and so on means that it’s really going to make your life easier if you stick to one task per extension. Further, it’ll be even simpler if you keep that in its own repo. It could be a versioned task, but one VSIX, one task [with all its versions], one repo.

  • You won’t exceed the 50MB size limit.
  • You can centralize/automate the versioning - the VSIX vss-extension.json and the task task.json versions can stay in sync to make it easier to track and manage.
  • Your build will generally be less complex. You don’t have to recurse into different tasks/versions to build things, repo size will be smaller, fewer moving pieces.
  • VS Code integration is easier. Having a bunch of different tasks with their own tests and potentially different versions of Mocha (or whatever) all over… it makes debugging and running tests harder.
  • Shared code will have to be published as a Node module, possibly on an internal feed, so it can be shipped along with the extension/task.
  • GitHub Actions require one task per repo.

Wait, what? GitHub Actions? We’re talking about Azure DevOps Pipelines.

With Microsoft’s acquisition of GitHub, more and more integration is happening between GitHub and Azure DevOps. You can trigger Azure Pipelines from GitHub Actions and a lot of work is going into GitHub Actions. Some of the Microsoft tasks are looking at ways to share logic between both pipeline types - write once, use both places. Having one task per repo will make it easier to support this sort of functionality without having to refactor or pull your extension apart.

autofac, dotnet

Today the Autofac team and I are pleased to announce the release of Autofac 5.0!

This is the first major-version release we’ve had in about three years (Autofac 4.0 was released in August 2016). There are some breaking changes and new features you should know about as you decide your upgrade strategy. Hopefully this will help you navigate those waters.

Breaking Changes

Framework Version Targeting Changes

Starting with Autofac 5.0 there is no longer support for .NET 4.5.x. .NET 4.5.2, the last release in that line, follows the same support lifecycle as Windows Server 2012 R2, which ended mainstream support in September 2018.

Autofac 5.0 now targets:

  • netstandard2.0
  • netstandard2.1
  • net461

Containers are Immutable

The container registry can no longer be updated after it has been built.

The ContainerBuilder.Update method was marked obsolete in November 2016 and there has been a robust discussion to answer questions about how to get the container contents to adjust as needed at runtime.

ContainerBuilder.Update has now been removed entirely.

If you need to change registration behavior at runtime, there are several options available to you including the use of lambdas or child lifetime scopes. See this discussion issue for examples and ideas. We will work to add some documentation based on this issue.
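
For example, rather than updating the container, you can layer registrations into a child lifetime scope. A minimal sketch - IRuntimeService and RuntimeOnlyService are placeholder names:

using (var scope = container.BeginLifetimeScope(builder =>
{
    // These registrations exist only in this child scope;
    // the built container itself never changes.
    builder.RegisterType<RuntimeOnlyService>().As<IRuntimeService>();
}))
{
    var service = scope.Resolve<IRuntimeService>();
}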

[PR #948 - thanks @weelink!]

Lifetime Scope Disposal Hierarchy Enforced

Resolving a service from a lifetime scope will now check all parent scopes to make sure none of them have been disposed.

If you dispose a lifetime scope, all children of that lifetime scope will stop resolving objects. In cases like this you’ll start getting ObjectDisposedException instead.
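
A minimal sketch of the new behavior (SomeService stands in for any registered service):

var parentScope = container.BeginLifetimeScope();
var childScope = parentScope.BeginLifetimeScope();

parentScope.Dispose();

// The child's parent is gone, so this now throws ObjectDisposedException.
childScope.Resolve<SomeService>();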

If you have a custom application integration that involves creating/destroying lifetime scopes (e.g., custom per-request support) this may cause issues where proper disposal ordering is not occurring.

[Fixes #1020; PR #1061 - thanks @alistairjevans!]

Prevent Auto-Injecting onto Static Properties

Autofac will no longer do property injection on static properties when auto-wiring of properties is enabled.

If your application behavior depends on static property injection you will need to do some additional work like adding a build callback to populate the property.
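
For example, a container build callback can populate the static property once the container is ready. A sketch - StaticHolder and SomeDependency are placeholder names:

var builder = new ContainerBuilder();
builder.RegisterType<SomeDependency>();
builder.RegisterBuildCallback(container =>
{
    // Autofac won't auto-inject static properties anymore, so set it here.
    StaticHolder.Dependency = container.Resolve<SomeDependency>();
});
var built = builder.Build();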

[Fixes #1013; PR #1021 - thanks @alistairjevans!]

Features and Fixes

Asynchronous Disposal Support

Autofac lifetime scopes now implement the IAsyncDisposable interface so they can be disposed asynchronously.

await using (var scope = container.BeginLifetimeScope())
{
   var service = scope.Resolve<ServiceThatImplementsIAsyncDisposable>();
   // When the scope disposes, any services that implement IAsyncDisposable
   // will be disposed using DisposeAsync rather than Dispose.
}

[PR #1037 - thanks @alistairjevans!]

Nullable Reference Type Annotations

Autofac is now built using nullable reference type annotations. This allows developers who opt in to get sensible compiler warnings, avoiding NullReferenceException instances where possible.

Nullable reference type warnings
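
If you want those warnings in your own code, opting in is a one-line flag in an SDK-style project file (C# 8 and later):

<PropertyGroup>
  <Nullable>enable</Nullable>
</PropertyGroup>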

[PR #1037 - thanks @alistairjevans!]

Build Callbacks in Lifetime Scopes

One method of running code at container build time is by registering a build callback. Previously this only worked at the container level, but we’ve added the ability to register callbacks that run at lifetime scope creation as well.

var scope = container.BeginLifetimeScope(cfg =>
{
    // The callback receives the newly built scope (named differently here
    // so it doesn't shadow the outer "scope" variable).
    cfg.RegisterBuildCallback(newScope => { /* do something */ });
});

The callback will be invoked just prior to BeginLifetimeScope exiting, after any startable components are instantiated.

[Fixes #985; PR #1054 - thanks @alistairjevans!]

Still TODO

Now that Autofac 5.0 is out, there is still a lot to do. We’ll be working on these things as fast as we can:

  • Updating integration packages we support so they ensure compatibility with Autofac 5.
  • Updating the documentation to reflect the above changes.

Some of this is sitting in branches ready to go, other things need to be done now that we have this core package out there.

If your favorite integration isn’t ready yet, we’re doing our best. Rather than filing “When will this be ready?” issues, consider pull requests with the required updates.

Thank You

On a more personal note, I’d like to thank all the folks that threw code at Autofac in the past few months. We appreciate the effort and the contributions. NuGet tells me we’re at 41,587,203 total downloads as I write this, #27 on the list of top package downloads in the last six weeks. We have 25 integration packages we maintain along with documentation, examples, and support.

Without your contributions this wouldn’t be possible. Thank you so much!

Hopefully git shortlog and the GitHub contributors page didn’t fail me - I don’t want to miss anyone!

personal

I’ve been having trouble of late when trying to repair robots at home. This is leading me to the belief that if anything you own has a battery, that battery must be user-serviceable. I’m going to start looking for things with this specific feature.

Robot repair is, admittedly, not the most common occurrence, but within the last few months there have been some robotic mishaps.

The first issue came along with my Sphero. I had one of the originals and it was working pretty well, but I left it unused for a few months while I was doing other things. When I came back it wouldn’t hold a battery charge anymore.

The body on a Sphero is a sealed plastic ball. You can’t open it or do anything with it. Once it dies, you throw it out. Which is a shame, because with a new battery it’d be good as new.

Luckily, despite the age of the robot, the super amazing awesome folks at Sphero replaced it at no cost. I got a newer robot which paired easier with my phone (the old one was giving me problems, likely due to an old firmware) and it’s sweet.

Of course… I figured I’d try to replace the battery in the old one anyway. Why not? Worst case scenario I’d be in the same position I was already in - one working robot, one non-working robot.

I took a Dremel and opened the sphere. I bought some 70mm clear plastic ornaments to serve as a replacement body. I looked at the battery in there and bought a replacement of the same type. Easy enough to unscrew the body part that covers the battery, unplug the old one, plug in the new one, and put it all back together.

I did all that and stuck it on the charger for a day. It appeared to charge, but… it just wouldn’t wake up. Hmmm. I unplugged the battery, plugged it back in, put it back on the charger for a day. Next day I tried again and it woke up… but wouldn’t pair with my phone. Or the iPad. Or anything else. It just blinked until the pairing timed out.

I have no idea what went wrong there. I’m guessing there was something else messed up, or the new battery wasn’t as identical as I thought it was… or something.

That robot went to the trash and now I have a bag of clear ornaments and a battery pack that doesn’t go to anything.

The second issue was with a “BB-8 Hero Droid”. This is a gift I got my wife a few years back. It follows you around, responds to voice commands… pretty fun.

Just like with my Sphero, BB-8 went unused for a few months. After we tried charging him up, he wouldn’t hold a charge.

I found this video where a guy takes one of these apart. Seemed involved but easy enough.

Today, being on vacation for the holidays, I got around to doing that. I was hoping to get it fixed up for my wife for Christmas.

With no small effort I got the shell of the BB-8 body off. All the parts were labeled in the order in which I removed them, I had a bunch of photos, and the video also shows the exact steps I did.

Once the body is off, though… that’s where the video ends. There aren’t instructions for actually replacing the battery. It’s buried right in the center of the thing, too. In for a penny, in for a pound. Let’s do this.

I got two or three screws off the motor and realized I needed to detach the power switch on the side because it was holding the two halves together. I unscrewed one part, then started unscrewing the other…

…and I guess my screwdriver was in the power charging port or something rather than on an actual screw because fire shot out of the side of the motor and smoke started rolling out. FUUUUUUUUUUUUUUUUUUUUUUUUUUUU

I just about ran for the fire extinguisher but yanking all the cables out of the main board stopped the power flow and the fire. It did not, however, leave things in a functional state.

I feel really bad. I didn’t mean to kill Jenn’s BB-8 but… I guess it’s not a worse situation than it was before - the robot didn’t work before, now it still doesn’t work. But I still feel bad.

I had a small funeral for the BB-8 as I put his parts in the trash can. He is survived by the R-unit that Jenn made when we were in Disney World last month. I guess next trip we make to a Disney theme park she’ll have to make a new BB-unit to replace ol’ Hero Droid.

Anyway, if it has batteries, I need to be able to replace them without breaking into the device, lighting anything on fire, or otherwise causing the device to stop functioning. That should be a thing. I would also accept “lifetime warranty on the battery” such that I can take a unit that isn’t functioning due to the battery and have the factory replace either the battery or the whole unit.

(Yes, I realize there’s some interesting coincidence/irony about this post following the post about grounding the yellow wire to hack a fan into working, but none of my fans have started a fire. Yet.)