Towards a New Adventure

Time for a Change

10 years ago, I decided to hang out my shingle and start my own consulting business. Soon after, I became a partner at StoneDonut. Over these past years, I have had an amazing adventure as a consultant focusing on integration. I am blessed to have two awesome partners in Chuck Loughry and Michael Schenck, and I have learned so much from working with them. I had the privilege of designing and implementing BizTalk middleware solutions for enterprise customers both large and small. It has been a wild ride, and I am very proud of the work we have done.

However, I have become interested in a number of newer technology areas, like DevOps, cloud-native distributed systems development, and data science. These areas do not really fit into StoneDonut’s core business focus of integration, service orientation, and business process management. Add to that the feeling over the past few years that I have plateaued in my personal growth as a developer, and it became clear that something needed to change. After much deliberation, and with more than a little trepidation, I have decided it is time for me to move on and pursue a new direction in my career.

So, What is Next?

I do not have an answer to this question yet. Right now I am open to new opportunities, and I am talking to people in my network to get an idea of what is out there at the moment. I have already had a number of recruiters contact me with interesting positions, so I do not think it will take too long to find something that interests me. I am equally comfortable in Windows and Linux environments, and I am looking forward to applying my development experience to new challenges. If you are reading this and think that my experience would be beneficial to your company in one of the three areas above, feel free to contact me via Twitter, LinkedIn, or email.

As always, thanks for reading!

MediatR Custom Behavior: Logging

Warning!

Before I begin, a brief word of warning. I have not yet decided if the technique described below is a good idea or a terrible idea. So far it seems to be working well, but I have not exercised it enough to confidently recommend that others use it. Now that the safety advisory is out of the way, on with the show.

ASP.NET Web API Logging

I have been working on a Web API project where I wanted the API to log some basic information on every web request. What I did not want was logging code splattered in my API controllers. I am using MediatR to decouple my application logic from the Web API framework, but having logging code splattered all over my message handlers did not feel like an improvement. After considering my available options, I decided to try building a custom MediatR behavior and adding it to a MediatR pipeline. To facilitate this, I created an interface to define the metadata that I wanted to be logged.

The interface simply references another class that contains all of my data elements:

public interface IAPIRequestContext
{
    APIRequestMetadata APIRequestMetadata { get; set; }
}

public class APIRequestMetadata
{
    public Guid RequestId { get; set; }
    public string CurrentUser { get; set; }
    public string Controller { get; set; }
    public string Method { get; set; }
    public Dictionary<string, object> Parameters { get; set; }
}

This interface is then added to my MediatR message definitions:

public class Query : IRequest<Result>, IAPIRequestContext
{
    public string Status { get; set; }
    public APIRequestMetadata APIRequestMetadata { get; set; }
}
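
For completeness, here is a rough sketch of how a controller action might populate this metadata before dispatching the request through MediatR. The controller and its contents are hypothetical, and in a real project something like an action filter could fill this in less repetitively:

public class OrdersController : ApiController
{
    private readonly IMediator _mediator;

    public OrdersController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpGet]
    public async Task<IHttpActionResult> Get(string status)
    {
        var query = new Query
        {
            Status = status,
            APIRequestMetadata = new APIRequestMetadata
            {
                RequestId = Guid.NewGuid(),
                CurrentUser = User?.Identity?.Name,
                Controller = "Orders",           // hypothetical controller name
                Method = nameof(Get),
                Parameters = new Dictionary<string, object> { ["status"] = status }
            }
        };

        return Ok(await _mediator.Send(query));
    }
}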

Finally, I have a custom MediatR pipeline behavior that casts the request to IAPIRequestContext and logs the data in the APIRequestMetadata object using Serilog:

public class LoggingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    public async Task<TResponse> Handle(TRequest request, RequestHandlerDelegate<TResponse> next)
    {
        Log.Debug("Entering LoggingBehavior with request {Name}", typeof(TRequest).Name);
        var reqCtxt = request as IAPIRequestContext;
        if (reqCtxt != null)
        {
            if (reqCtxt.APIRequestMetadata != null)
            {
                var metadata = reqCtxt.APIRequestMetadata;
                Log.Information("Request Id: {RequestId}, Current User: {User}, Controller: {Controller}, Method: {Method}", metadata.RequestId, metadata.CurrentUser, metadata.Controller, metadata.Method);
                if (metadata.Parameters != null)
                {
                    foreach (var param in metadata.Parameters)
                    {
                        Log.Debug("Request Id: {RequestId}, {ParameterName}: {ParameterValue}", metadata.RequestId, param.Key, param.Value);
                    }
                }
            }
        }

        var response = await next();
        Log.Debug("Leaving LoggingBehavior with request {Name}", typeof(TRequest).Name);
        return response;
    }
}
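
For the behavior to actually run, it has to be registered with your container as an open generic alongside the MediatR handlers. The exact syntax depends on the container; as a hypothetical example, with the ASP.NET Core container and the MediatR.Extensions.Microsoft.DependencyInjection package the registration would look roughly like this:

public void ConfigureServices(IServiceCollection services)
{
    // Scan this assembly for MediatR handlers (AddMediatR comes from the
    // MediatR.Extensions.Microsoft.DependencyInjection package).
    services.AddMediatR(typeof(Startup).Assembly);

    // Register the behavior as an open generic so it wraps every
    // request/response pair that flows through the mediator.
    services.AddTransient(typeof(IPipelineBehavior<,>), typeof(LoggingBehavior<,>));

    services.AddMvc();
}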

With this setup, every controller in my API project emits a standard logging event, without having the logging code located in every one of my MediatR handler methods. My plan is to combine this with a metrics pipeline behavior so I can track which API methods get used the most and see how well they perform.

So, do you think this is a pretty good idea for handling logging, or do you think this is the worst idea you have ever seen? Either way, feel free to send me comments on Twitter or LinkedIn.

Version Gotcha When Using Enzyme with React

The Gotcha

I recently ran into a little gotcha with the Enzyme testing library and I want to document the fix in case I run into the same issue later on. I was working through the TypeScript React Starter tutorial, and noticed some odd output warnings in my console when running my unit tests:

Warning: ReactTestUtils has been moved to react-dom/test-utils. Update references to remove this warning.

Warning: Shallow renderer has been moved to react-test-renderer/shallow. Update references to remove this warning.

My tests were executing successfully, but I wanted to figure out why these warnings were occurring. It turns out that there are additional modules needed when using Enzyme with React and Jest, and these modules differ based on which version of React you are using.

The Solution

If you are using a React version >= 15.5, you will need to install the react-test-renderer module in addition to Enzyme:

npm install react-test-renderer --save-dev

If you are using a React version older than 15.5, you will need to install the react-addons-test-utils module:

npm install react-addons-test-utils --save-dev

The tutorial I was following had instructions to install the latter, which was probably correct at the time it was written, but React v15.6.1 was installed as part of the project setup. Once I uninstalled react-addons-test-utils and replaced it with react-test-renderer, my tests ran successfully without the additional warnings.

Running xUnit Tests with VSTS

Introduction

A few weeks ago I set up my first automated build pipeline in Visual Studio Team Services. For the most part it was fairly easy to set up and configure, but I ran into some issues getting my xUnit tests to run. The fix is simple, but since I will not be setting up these builds very often, I am documenting it here so I do not have to figure it out again in the future.

The Problem

Note: These instructions apply to the full .NET Framework, not .NET Core.

I followed the instructions in the xUnit documentation for configuring the test runner for VSTS. The documentation said to set the Test Assembly field to the following:

**\bin\$(BuildConfiguration)\*test*.dll;-:**\xunit.runner.visualstudio.testadapter.dll

However, when the test step of the pipeline executed, it raised a warning that it could not find any test assemblies conforming to the above pattern. I tried fiddling with the other options, but the pipeline still could not locate the test assemblies.

The Solution

Thankfully, the solution is very simple. Instead of setting the Test Assembly field to a one-line expression, break it into two lines and remove the trailing semicolon:

**\bin\$(BuildConfiguration)\*test*.dll
-:**\xunit.runner.visualstudio.testadapter.dll

Once I made this change, the test step was able to find the test assembly and execute the tests. My best guess is that a VSTS update made changes to the Test Assembly field and the xUnit documentation has not been updated yet.

Hopefully this blog post will help others who run into this issue, as well as future me the next time I need to set up a VSTS build.

Hexo Global License Plugin

Hexo Global License

One of the Pelican features I really liked was the global license plugin. It took a configurable string representing a license statement and placed it in the footer of every page on the site. In my case this was pretty handy, as I license all of my blog content as CC-BY-SA, and this way I did not have to remember to include an explicit license statement in each post.

Hexo did not have this functionality, so I sat down and built a Hexo plugin to mimic the behavior of the Pelican plugin. Thus, hexo-global-license was born. Since I use a Creative Commons license for all of my content, I built in support for all of the latest Creative Commons licenses. Add the appropriate settings to the _config.yml file and the plugin will add the matching license text, image, and link to the Creative Commons website. If you want to use a different license, you can set the license type to custom and include the text you want in the configuration.

The plugin is available on npm, and once I have used it some more and shaken out any major bugs I will submit it for inclusion in the Hexo plugins registry. If you try it and encounter problems, or if you have ideas to improve it, please file an issue on the GitHub page.

Live on Hexo

Hexo is Live

If you are reading this, then the transition to Hexo has completed successfully. Hopefully the redirects are all working properly and new articles are showing up in your favorite feed reader. This should remove a lot of my blogging friction and make it easier for me to write articles more often.

Thanks for reading, and I look forward to posting some new material soon.

Moving to Hexo

Moving to Hexo

Just a heads-up for anybody who subscribes to this blog via RSS or Atom. Sometime within the next week I will be migrating this blog from Pelican to Hexo. Pelican has served me well these past several years, but it is starting to cause enough problems that I want to try another static site generator.

First, Pelican is a little awkward to run under Windows. For Python programs like Pelican, I typically install them inside some form of virtual environment, like virtualenv or an Anaconda environment. This means I have to remember to switch environments, which I usually forget to do until I get strange errors trying to work with Pelican.

Second, managing the plugins and themes is somewhat of a chore. I do not change either very often, and I end up having to read the documentation every time I want to add or update something.

The final reason I am moving away from Pelican is that it is rather slow. It takes a good 30-40 seconds to regenerate the site every time I hit save in my editor, which is really painful when experimenting with page layout for a new article.

After trying out a few different static site generators, I have settled on Hexo for the next version of my blog. Hexo is a Node.js application and is a much smoother experience on Windows. During development, my site regenerates in 3-5 seconds, which I really like. Plus, plugins and themes are all managed through npm, which makes it easy to keep things updated.

The one downside to Hexo is that it will not allow me to have both RSS and Atom feeds. Going forward, the site will have an Atom feed and a new JSON Feed. I will have automatic redirects configured that should route requests from the old feeds to the Atom feed. Most feed readers that I know of understand both RSS and Atom, so this should be seamless for most readers. I have a test post queued up for next week after I make the move, so if you are not seeing new content by then, I suggest manually updating your subscription.

Hopefully this will be a relatively smooth transition.

Thanks for reading!

Raw Input with Azure Functions HTTP Triggers

Introduction

This is a quick blog post to document my experience with building serverless
web APIs with Azure Functions. Recently, I was building an API where I
wanted to receive an XML message in the body of the HTTP trigger, but I did
not want the Functions framework to attempt to deserialize the data. I
wanted to receive the raw input so I could pass it as-is to another API.

HTTP Trigger Template

When you create a new Azure Function, the default template will use the
following statement to read the body of the HTTP request:

dynamic data = await req.Content.ReadAsAsync<object>();

This statement will attempt to deserialize the HTTP request body by using the
Content-Type header to choose a serializer component. By default it supports
the JSON and XML serializers, but you can define and register your own custom
serializer too. In order for the serializer to work properly, you need to have
a class with the appropriate attributes so the serializer knows how to map
the data to the class properties.
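
For example, assuming the default XML formatter (which is backed by the
DataContractSerializer), a hypothetical payload class might be decorated
like this:

using System.Runtime.Serialization;

// Hypothetical payload class; the attributes tell the serializer how to map
// the XML elements in the request body onto the class properties.
[DataContract(Name = "Order", Namespace = "")]
public class Order
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Customer { get; set; }
}

With a class like this in place, the template statement above would become:

Order data = await req.Content.ReadAsAsync<Order>();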

In my use case, I was not going to do anything with the data other than pass
it on to another API, so I did not want to incur the overhead of
deserializing and re-serializing the data. Instead, you can change the above
statement to read the body as a string. When reading the data as a string,
the framework will not attempt to deserialize the data.

To read the body as a string, replace the above statement with this:

string data = await req.Content.ReadAsStringAsync();
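
Putting it together, here is a minimal sketch of what the whole function
might look like with this change, assuming the 2017-era C# script signature
with HttpRequestMessage and TraceWriter. The commented-out forwarding call
is a hypothetical placeholder for the downstream API:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Read the body as a raw string; no serializer is invoked.
    string data = await req.Content.ReadAsStringAsync();

    log.Info($"Received {data.Length} characters of raw input");

    // Hypothetical helper standing in for the call to the downstream API.
    // await ForwardToBackendAsync(data);

    return req.CreateResponse(HttpStatusCode.OK);
}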

Conclusion

It turns out getting the raw input is nothing more than a simple statement
change: reading the body as a string instead of as an object. Now I have it
documented for the next time I need to do something similar in Azure
Functions.

This article was originally posted at https://scottbanwart.com/blog/2017/07/azure-functions-raw-input

Yeoman Generator for Morepath

Yeoman Generator for Morepath

I have been playing around a bit with the Morepath micro web framework for Python lately. I noticed in the Morepath documentation that there is a Cookiecutter template for scaffolding new projects. On a lark, I wanted to see how hard it would be to build a Yeoman generator for Morepath. I have wanted to learn about building custom Yeoman generators to set up new projects with the proper structure and dependencies for my work projects. Since I did not see an existing Morepath generator on the npm site, I figured I would try to build one myself.

In its current state it just sets up a basic skeleton project, but as I learn more about Morepath I plan on adding more features. If anybody is interested in trying it out, it is available on npm. Anybody interested in how I built it, and how bad my JavaScript code is, can check out the project repository on GitHub.

Implementing Auto-Save with React

Over the past few months I have been slowly learning about modern web development. The last time I worked on a web project, .NET 2.0 was the new hotness, so I have a lot of catching up to do. After checking out a few of the more popular frameworks, I chose React for building web-based user interfaces because of its popularity and support, its tight focus on views, and its affinity for functional programming.

A common feature in modern applications is automatic saving. It shows up in desktop applications like Microsoft Office, and in nearly every mobile application out there. I wanted to see if I could replicate that behavior inside of a React application.

Challenge Accepted

The systems I have seen that implement this feature send a save event after every change. In a web application this would generate a lot of network activity, so I wanted to try implementing this feature using an idle timer. The application starts the timer after any user activity and tracks changes to the form state. The timer resets every time there is user activity, to prevent triggering a save while the user is still actively changing the form data. Once the timer expires, it fires an event where the application performs the save. In the case of my test application, it resets the tracking state and outputs a message stating that it saved successfully. The full application code can be found on GitHub in my react-auto-save repository.