Weekly Journal 2 - rcm

Introduction

A short post this time around. I have been experimenting with ways to manage and back up my dotfiles. In the past I have copied the files into a git repository and stored them on GitHub, but that approach requires me to remember to refresh those copies whenever I make changes. I have been looking for a better option that requires less manual work to keep things properly backed up.

rcm

rcm is a tool for managing dotfiles. You create a separate directory to hold your dotfiles (I called mine .dotfiles) and create a git repository in it. Then you use the rcm command mkrc to add your configuration files to the dotfiles directory. mkrc moves each file into the directory and creates a symlink in its original location. Once all of the configuration files have been moved to the dotfiles directory, you can add them to the git repository and push them to GitHub, or your favorite remote git hosting service.
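
Here is a minimal sketch of the workflow, assuming the default ~/.dotfiles location (the file name is only an example):

    $ mkdir ~/.dotfiles && cd ~/.dotfiles && git init
    $ mkrc ~/.vimrc            # mkrc moves the file into ~/.dotfiles and symlinks it back
    $ git add -A && git commit -m 'Add vimrc'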

The really neat feature is that you can easily move your dotfiles to a different computer by cloning the git repository and running the rcup command. This will automatically replace the dotfiles on the new computer with symlinks to the dotfiles in the git repository. There are additional commands and advanced configuration options to play with. Check out the GitHub site and the man pages for all the details.
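
On a new machine, the whole setup is roughly this (the repository URL is a placeholder):

    $ git clone https://github.com/<username>/dotfiles.git ~/.dotfiles
    $ rcup -v                  # creates symlinks in your home directory for everything in ~/.dotfiles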

Limitations

Like direnv, rcm is built for Unix-like systems such as Linux. It probably works with WSL on Windows, but I have not had a chance to try it there yet.

Weekly Journal 1

Introduction

One of the challenges I have had as an operations manager is managing configuration settings for the many tools and applications we use on a daily basis. I have been looking for a sane way of managing configuration settings that did not involve abusing my .zshrc/.bashrc files, or creating dozens of one-off aliases. I think I have found a pretty good solution in the form of direnv.

direnv

direnv is a tool for automatically setting and switching environment variables. It works by adding a special configuration file, .envrc, to each project directory containing the environment variables for that project. When you enter that directory in a shell like bash or zsh, direnv automatically sets those values in the current session. It also unsets the environment variables when you leave the project directory, which is nice from a security/credentials standpoint. It makes it more difficult to accidentally use the wrong settings because you have forgotten what was last set.
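
Basic usage looks something like this (the path and variable are only examples):

    $ cd ~/projects/myapp
    $ echo 'export DATABASE_URL=postgres://localhost/myapp_dev' > .envrc
    $ direnv allow             # direnv refuses to load a new or changed .envrc until you approve it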

It also supports cascading configuration files. This means you can set global settings in the project root directory, and have more specific settings in a sub-directory.
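
For example, a sub-directory's .envrc can pull in its parent's settings with direnv's source_up function and then layer on its own values (the paths here are hypothetical):

    # ~/projects/myapp/.envrc
    export APP_ENV=development

    # ~/projects/myapp/worker/.envrc
    source_up                  # load the parent .envrc first
    export WORKER_QUEUE=jobs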

Git Configuration

Since I tend to add things like API keys and other secrets to my .envrc files, I have a global ignore rule in git to prevent them from being accidentally added to a GitHub repository. In my case, I have a .gitignore_global file in my home directory that contains .envrc as a pattern to ignore, and git is configured to use that as the global ignore file.

$ git config --global core.excludesfile '~/.gitignore_global'
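
Creating the ignore file and verifying the rule works looks like this (run the check from inside any git repository):

    $ echo '.envrc' >> ~/.gitignore_global
    $ git check-ignore -v .envrc   # should report the matching rule from ~/.gitignore_global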

Limitations

As near as I can tell, direnv only supports Unix-like systems such as Linux or macOS; it does not support Windows PowerShell. It may work with the Windows Subsystem for Linux, but I have not had a chance to try that yet. I use it on Linux, and it works very well there.

If you primarily work in a Linux or Unix environment I highly recommend giving it a shot.

Weekly Journal 0

Introduction

Over the last several years, I have struggled to maintain consistency with this blog. To try and improve my posting regularity, I am going to try something a little different. This weekly journal will be a short entry that I can put together once a week or so documenting something interesting I have learned. In the past I have had trouble getting motivated to work on longer articles, but I am hopeful this will help me establish a more frequent posting schedule. The idea is that if I can establish a regular posting cadence, maybe I can use that to motivate myself to write longer posts too.

Personal Productivity

For my first journal post, I am going to write about changes to my personal project and task tracking system. For the past several months I have been using a Bullet Journal for tracking projects, tasks, and my personal journal. It has been working surprisingly well, and I have decided to adopt it over the mix of apps I was using previously for this purpose.

Bullet Journal

Bullet journaling is a methodology that uses a small notebook and a system of simple templates to quickly capture and organize tasks and notes. The full methodology is documented in the book by the system’s creator, Ryder Carroll, but there is a reference and getting-started guide available for free on the website.

The basic idea behind bullet journaling is to capture tasks, events, and notes in the notebook during the day, and then to organize and process these items at regular intervals, called reflection in the Bullet Journal lingo. During reflection, you process items by marking them complete, rescheduling them for a later time, or crossing them off if they are no longer relevant. At the beginning of each month, you perform a larger reflection and recopy any open tasks to a new page in the journal. At first glance, it may seem redundant to go through the copying exercise, but it forces you to focus on the highest-priority tasks. The review process helps to prevent accumulating a large list of low-priority tasks that will never get done.

What I really like about the system is the flexibility it offers. Since it is a paper notebook, it is easy to adjust the templates or create new ones as needed. It is also easy to add bits and pieces from other productivity methodologies if they make sense for your situation. For example, I like many of the ideas in Getting Things Done, and I incorporate a few of them into my system. My notebook is fairly spartan, but other people create some spectacular layouts in their journals. You can find examples by searching for Bullet Journal using your favorite search engine, or by checking something like Pinterest. I lack the artistic talent necessary to make more elaborate layouts, and I would much rather spend time on my projects and tasks than on my notebook.

Prior to getting hooked on bullet journaling, I used a combination of an app like Todoist or Microsoft Todo to capture my projects and tasks, and OneNote to record my notes. Now I do most everything in my paper notebook instead. I still copy some of my information into OneNote so I can easily reference it on multiple devices and so I can use the search tools. However, everything initially gets created and managed in my notebook.

If you are looking to make changes or improvements to your personal productivity system, I would encourage you to give bullet journaling a try. With that, I am going to close this entry. Hopefully it is the first of many to come. Thanks again for reading.

Note: In the spirit of full disclosure, I receive no compensation for this blog post. The links are not affiliate links; I am just a vocal fan of the system.

A New Direction

[Image: Changes Ahead]

A New Direction

One-Year Anniversary

At the end of 2017, I left my role as part-owner of an IT consulting firm to take an in-house engineering manager role at a local software company. I have been in my new role as a manager for almost one year now. My employer affords me many of the amenities and perks that I used to enjoy when I was self-employed, so I do not feel like I have lost too much. The biggest adjustment so far is that I do not spend as much time doing actual technical work. I spend around half of my time performing my management duties, and with the remaining time I get to do some DevOps engineering work.

In a way, I get to experience the best of both worlds. I still continue to work on my technical skills, both at work and on personal projects at home, but I do not get to spend as much time in the trenches as I used to. So far, I like being a manager and I plan on sticking with it. It has its own set of challenges, and I look forward to learning and doing more in the coming year. I am still a developer at heart, so I would like to stay close to the engineering side of the business.

The Blog

Focus Shift

My overall blogging output has been way down since I stopped the “Distributed Weekly” list back in 2013. I think I have wasted more time feeling guilty about not blogging than I have actually spent writing articles. I am at a crossroads where I need to decide whether I wish to continue this blog. I have never had any sort of advertisements or other financial support for it. While hosting is not overly expensive, there is no sense in spending money on a blog I am not going to keep updated.

2019 will be the year where I make the hard decision. If I cannot keep a reasonable posting schedule through the first half of this year, it will be time to retire the blog. I have an idea to create a weekly journal around a combination of interesting links I have seen, and new things I have learned. I am hoping this will give me new material to blog about, and an incentive to write the articles.

The Alternative

If I am unable to maintain a regular schedule, my plan is to archive the blog in my GitHub account. My blog articles are stored as Markdown files, so they should render as-is in the GitHub web UI. If I am feeling motivated, I will attempt to deploy the rendered output using GitHub Pages. Either way, the articles themselves will be preserved on the off chance somebody other than myself wants to view them.

The first of the new journal articles should appear within the next week or so. If not, then it should be easy to see which direction this blog will go. Thanks for reading.

2018 Favorite Books

So, not my most active year for blogging, but I figure I can at least put together a quick post with my favorite books for the year. My reading this year reflects my transition to a dual-role DevOps engineer and manager. I also have a new plan for my blog in 2019, but I will cover that in a separate post.

Note: None of the links below are affiliate links. I get nothing from linking to these books. I try to link to the author or publisher site when possible, and Amazon as the default.

Favorite Books 2018

  • The DevOps Handbook - The definitive guide to implementing DevOps practices. If you are interested in getting into or learning more about DevOps, this book should be high on your list.
  • WTF? What’s the Future and Why It’s Up to Us - An interesting read on how modern technology is shaping the economy. Lots of questions and discussion around using technology to augment human workers instead of using it to replace them.
  • Building Evolutionary Architecture - An interesting study in how to design systems that will live for a long time. It covers architectural concepts and ideas that enable systems to adapt and change over time.
  • Kubernetes: Up and Running - A solid introduction to the world of container development and running containers on Kubernetes. It covers setting up and configuring a basic Kubernetes cluster all the way through rolling deployments and scaling.
  • The First 90 Days - This was assigned as required reading by my employer. It is all about leaders in transition, from starting a new job or a promotion. Lots of strategies around getting up to speed quickly so you can be an effective leader.
  • Practical Monitoring - A solid monitoring strategy is a cornerstone of DevOps. This book offers strategies around metrics, logging, dashboards and much more without focusing on specific tools.
  • The Lean Startup - The classic book on agile and lean practices to accelerate innovation in startups and in new ventures in established businesses.
  • Faith Alone: The Doctrine of Justification - I enjoy reading about theology and church history. This is a great walk through the history of justification, and a systematic defense of it from the Reformed point of view.

Gobot and Arduino

[Image: Gobots]

Go and Arduino

I have been trying to get back into working with my Arduino again, but I find myself wishing I could use a higher-level language like Python instead of C. While I have a decent grasp of C, I would much rather work with a language with modern affordances like garbage collection. Luckily, I stumbled upon a framework for Go that fits the bill. While Go is definitely a lower-level language than Python, it is higher level than C and, at least in my opinion, nicer to work with.

Enter Gobot

The framework I discovered is called Gobot. Gobot is a framework for building drones, robots, and IoT devices. It has support for numerous devices and platforms, including the Arduino and the Raspberry Pi boards that I own. Just in case I need to set up another Arduino board in the future, here is how I installed the Firmata library on the board.

Getting Started

Install Firmata

Gobot uses the Firmata protocol to communicate with the Arduino, so the first step is to load the Firmata library onto the board. The Firmata library ships with the latest version of the Arduino IDE, so I downloaded and installed the IDE package, and followed these instructions for installing Firmata.

  1. Download the latest version of the Arduino IDE and install it
  2. Connect the Arduino board to the laptop
  3. Configure the IDE with the board type and port
    1. For the board type, go to the Tools | Board menu and select the proper board from the list
    2. For the port, go to the Tools | Port menu and select the port that has the board name next to it
  4. Upload the Firmata sketch to the Arduino board
    1. Go to the File | Examples | StandardFirmata menu to open the sketch
    2. Click the Upload button and wait for the sketch to build and transfer
    3. When the process is complete, you should see the message, “Done Uploading” in the status window

The board should be ready to work with Gobot now.

Install Gobot

For our purposes here, I am assuming you already have the Go language and environment installed.

  1. Install Gobot into your local environment
    $ go get -d -u gobot.io/x/gobot/...
  2. Go grab a snack and drink while you wait for the framework to download and install

Execute a Sample Project

  1. Connect an LED to the Arduino using your favorite sample (I am using the directions that came with my Adafruit Arduino kit), and make a note of which pin it is connected to.
  2. Enter the following code sample from the Gobot Getting Started documentation and save it in a file called blink.go; if necessary, change the pin number to correspond to the pin the LED is connected to
    package main

    import (
        "time"

        "gobot.io/x/gobot"
        "gobot.io/x/gobot/drivers/gpio"
        "gobot.io/x/gobot/platforms/firmata"
    )

    func main() {
        // Connect to the board over the serial port; adjust the device
        // path to match your system.
        firmataAdaptor := firmata.NewAdaptor("/dev/ttyACM0")

        // Change "13" to the pin your LED is connected to.
        led := gpio.NewLedDriver(firmataAdaptor, "13")

        // Toggle the LED once per second.
        work := func() {
            gobot.Every(1*time.Second, func() {
                led.Toggle()
            })
        }

        robot := gobot.NewRobot("bot",
            []gobot.Connection{firmataAdaptor},
            []gobot.Device{led},
            work,
        )

        robot.Start()
    }
  3. Run the sample program
    $ go run blink.go

After the code compiles and starts running, you should see the LED blink. (The program runs on your computer and talks to the board over the serial port via Firmata; nothing new is uploaded to the Arduino.) At this point everything should be ready for more advanced projects.

Next Steps

Now that I have my Arduino up and running with Gobot, my next challenge is to go back and redo the sample projects that came with my kit and see if I can get them all working in Go. Once I finish, I figure I will have had enough practice to start working on something more elaborate.

Until next time, thanks for reading!

New Year, New Adventure

[Image: DevOps gears]

New Job

In my last post, I mentioned that I was looking for a new job. Late last year I accepted an offer to become the DevOps Manager at VHT here in Akron, OH. I held off on announcing it until now to allow StoneDonut to communicate my exit to their customers and partners. I took a much-needed vacation for the first two weeks of this year, and I have spent the past two weeks getting settled into my new position in DevOps.

Devving the Ops

I am very excited to be joining the DevOps team at VHT. I really enjoy working on tough technical problems and building tools and platforms for other developers. I think the DevOps space will be a great fit for my interests and my career growth. The opportunity at VHT was especially attractive as it will give me the opportunity to grow as a technical manager, while still allowing me to perform hands-on work as well. I am looking forward to using my development and integration experience to build the next-generation CI/CD platform for VHT’s development teams.

The Only Constant is Change

As one might expect, this career move will bring changes to my personal projects, including this blog. For most of my career, I have primarily worked with .NET, BizTalk, Azure, and other parts of the Microsoft stack. However, VHT is primarily a Linux and Amazon Web Services shop. As a result, this blog will shift to focus more on Continuous Delivery, AWS, and open source tools like Ansible and Jenkins. I plan on keeping somewhat current with C#, .NET Core and parts of Azure, but I will not be writing blog posts about BizTalk and enterprise integration any time soon. (Though maybe I will write up some posts about integrating open source tools together as part of a CD pipeline!) I am also hoping to write some posts about the custom tools and platforms we build at VHT.

On one hand, I am sad to leave StoneDonut. After 10 years, I definitely consider my former partners Chuck and Michael as mentors and friends. I most certainly would not be here without their help. However, I am also really excited to see where this new road leads. Thanks again for reading.

2017 Favorite Books

Not sure how it happened so fast, but all of a sudden we are near the end of 2017. For the first time, I have more non-technical books than purely technical ones on this list. I am interested to see if this is an outlier or the beginning of a trend. Tune in to my favorite books post next year to find out!

Note: None of the links below are affiliate links. I get nothing from linking to these books. I try to link to the author or publisher site when possible, and Amazon as the default.

Favorite Books 2017

  • The Mythical Man Month - The classic book on software engineering and project management. Despite its age, there are still many relevant insights for modern software projects.
  • Data Science from Scratch - This book provides an interesting introduction to data science. Instead of starting with specific tools and libraries, it teaches data science fundamentals by having the reader implement them in Python.
  • Microservices in .NET Core - This book covers the fundamentals of building microservices and assembling them into complete systems. Examples are built using .NET Core, OWIN and the NancyFX web framework. Definitely worth a look if you are interested in building microservices using the next generation .NET framework.
  • Think Like a Data Scientist - This book presents an end-to-end process for applying data science to solve problems. It starts with defining goals and initial analysis, proceeds to the technical stages of development, and ends with final presentation to end users. Perfect for people like me who are comfortable with the technical aspects of data science, but have little experience in the realms of business analysis and presentation.
  • The Monuments Men - A history narrative about the men and women who worked to protect and secure the greatest art treasures in the world during World War II. If you liked the movie, the book goes into a lot more detail about the mission of the monuments division.
  • Console Wars - This book chronicles the history of the video game industry, starting in the mid-1980s with the rise of Nintendo and Sega. It is full of behind-the-scenes information and is a really fun read for somebody like me who is a big fan of video games and grew up during this time period.
  • Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins - The epic story of the battle between former world chess champion Garry Kasparov and IBM’s supercomputer Deep Blue. Kasparov uses his personal experience to discuss the future of artificial intelligence and creativity.

Towards a New Adventure

[Image: Adventure for the Atari 2600]

Time for a Change

10 years ago, I decided to hang out my shingle and start my own consulting business. Soon after, I became a partner at StoneDonut. Over these past years, I have had an amazing adventure as a consultant focusing on integration. I am blessed to have two awesome partners in Chuck Loughry and Michael Schenck, and I have learned so much from working with them. I had the privilege of designing and implementing BizTalk middleware solutions for enterprise customers both large and small. It has been a wild ride, and I am very proud of the work we have done.

However, I have become interested in a number of newer technology areas, like DevOps, cloud-native distributed systems development, and data science. These areas do not really fit into StoneDonut’s core business focus of integration, service orientation, and business process management. Add to that the feeling over the past few years that I have plateaued in my personal growth as a developer, and it became clear that something needed to change. After much deliberation, and with more than a little trepidation, I have decided it is time for me to move on and pursue a new direction in my career.

So, What is Next?

I do not have an answer to this question yet. Right now I am open to new opportunities, and I am talking to people in my network to get an idea of what is out there at the moment. I have already had a number of recruiters contact me with interesting positions, so I do not think it will take too long to find something that interests me. I am equally comfortable in Windows and Linux environments and I am looking forward to applying my development experience to new challenges. If you are reading this and think that my experience would be beneficial to your company in one of the three areas above, feel free to contact me on Twitter, LinkedIn or email.

As always, thanks for reading!

Mediatr Custom Behavior: Logging

Warning!

Before I begin, a brief word of warning. I have not yet decided if the technique described below is a good idea or a terrible idea. So far it seems to be working well, but I have not exercised it enough to confidently recommend that others use it. Now that the safety advisory is out of the way, on with the show.

ASP.NET Web API Logging

I have been working on a Web API project where I wanted to have the API log some basic information on every web request. What I did not want was logging code splattered in my API controllers. I am using Mediatr to decouple my application logic from the Web API framework, but having logging code splattered all over my message handlers did not feel like an improvement either. After considering my available options, I decided to try building a custom Mediatr behavior and adding it to a Mediatr pipeline. To facilitate this, I created an interface to define the metadata that I wanted logged.

The interface simply references another class that contains all of my data elements:

public interface IAPIRequestContext
{
    APIRequestMetadata APIRequestMetadata { get; set; }
}

public class APIRequestMetadata
{
    public Guid RequestId { get; set; }
    public string CurrentUser { get; set; }
    public string Controller { get; set; }
    public string Method { get; set; }
    public Dictionary<string, object> Parameters { get; set; }
}

This interface is then added to my Mediatr message definitions:

public class Query : IRequest<Result>, IAPIRequestContext
{
    public string Status { get; set; }
    public APIRequestMetadata APIRequestMetadata { get; set; }
}
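
For context, here is a rough sketch of how the metadata might be populated in a controller action before the request is dispatched; the controller, field values, and parameter names here are all illustrative, not from my actual project:

// Hypothetical controller action; _mediator is an injected IMediator.
[HttpGet]
public async Task<IActionResult> Get(string status)
{
    var query = new Query
    {
        Status = status,
        APIRequestMetadata = new APIRequestMetadata
        {
            RequestId = Guid.NewGuid(),
            CurrentUser = User?.Identity?.Name,
            Controller = "Orders",
            Method = nameof(Get),
            Parameters = new Dictionary<string, object> { { "status", status } }
        }
    };

    var result = await _mediator.Send(query);
    return Ok(result);
}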

Finally, I have a custom Mediatr pipeline behavior that casts the request object to IAPIRequestContext and logs (using Serilog) the data in the APIRequestMetadata object:

public class LoggingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    public async Task<TResponse> Handle(TRequest request, RequestHandlerDelegate<TResponse> next)
    {
        Log.Debug("Entering LoggingBehavior with request {Name}", typeof(TRequest).Name);

        // Only requests that implement IAPIRequestContext carry metadata to log.
        var reqCtxt = request as IAPIRequestContext;
        if (reqCtxt != null && reqCtxt.APIRequestMetadata != null)
        {
            var metadata = reqCtxt.APIRequestMetadata;
            Log.Information("Request Id: {RequestId}, Current User: {User}, Controller: {Controller}, Method: {Method}",
                metadata.RequestId, metadata.CurrentUser, metadata.Controller, metadata.Method);

            if (metadata.Parameters != null)
            {
                foreach (var param in metadata.Parameters)
                {
                    Log.Debug("Request Id: {RequestId}, {ParameterName}: {ParameterValue}",
                        metadata.RequestId, param.Key, param.Value);
                }
            }
        }

        var response = await next();
        Log.Debug("Leaving LoggingBehavior with request {Name}", typeof(TRequest).Name);
        return response;
    }
}
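
To wire the behavior into the pipeline, I register it with the DI container. Here is a minimal sketch using the ASP.NET Core container and the MediatR.Extensions.Microsoft.DependencyInjection package; the exact registration may differ depending on your MediatR version and container:

// In Startup.ConfigureServices: the open generic registration applies
// LoggingBehavior to every request/response pair that Mediatr handles.
services.AddTransient(typeof(IPipelineBehavior<,>), typeof(LoggingBehavior<,>));
services.AddMediatR(typeof(Startup).Assembly);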

With this setup, every controller in my API project emits a standard logging event, without having logging code in every one of my Mediatr handler methods. My plan is to combine this with a metrics pipeline behavior so I can track which API methods get used the most, and see how well they perform.

So, do you think this is a pretty good idea for handling logging, or is it the worst idea you have ever seen? Feel free to send me comments on Twitter or LinkedIn either way.