Last week I mentioned I was looking to move my cloud backups from AWS to another provider due to costs and the risk of a surprise bill. Both Wasabi and Backblaze B2 are substantially cheaper than AWS S3 and offer much simpler billing. I’m trying out Backblaze first, as their billing is a little less complicated. Wasabi has a 90-day minimum storage charge that makes estimating costs more difficult, and for personal use I don’t want the added complexity. Pricing for more general cloud providers like DigitalOcean and Linode was similar to S3, so in this case I think the cost savings are worth having an additional vendor to manage.
Backblaze is supported by MSP360, my backup software, so it was easy to configure a new storage account for Backblaze and let it push some data. My plan is to let the backups run for a few weeks, perform some integrity tests, and if everything checks out retire my S3 buckets.
In my quest to migrate off of AWS for my personal needs, the first project is to find a new place for my cloud backups. One of my over-arching goals for my homelab is to balance complexity vs. the cost of the hardware, software, and services I use. In this case, “complexity” is linked to the number of cloud service vendors I have to deal with. Based on my research, I could save a bit on costs by using a dedicated S3-compatible backup storage service like Wasabi or Backblaze Cloud Storage instead of using the object storage from a more general cloud service provider like DigitalOcean or Linode. I’m debating whether the savings is worth the complexity of having an additional vendor to keep track of. I have other things in AWS that will require more general cloud services, so I will have to choose a general cloud services provider regardless. To add another issue to the mix, the backup software I currently use, MSP360, supports Wasabi and Backblaze but doesn’t work with either DigitalOcean’s or Linode’s S3-compatible object storage, so I would have to find new backup software too.
My employer has standardized on Apple hardware for most roles in the company, including engineering. I’ve had a Windows machine for the past two years that was grandfathered in when the original company was acquired, but that has recently been replaced with an M1 MacBook. This is the first Mac I’ve used since System 7 was a thing, so now I’m struggling to unlearn 30+ years of muscle memory built up around Windows and Linux keyboard shortcuts. I suppose I’ll get used to it in time, but right now I am not a fan.
Google’s Gemini AI was involved in controversy recently for generating images of historical figures that didn’t make any sense. Some of the more egregious examples include images of Black Vikings and a female Pope. This is a stark reminder that any closed-source, proprietary AI system will embody the biases, both implicit and explicit, of its creators. I would caution against putting a lot of trust in these AI systems. If you think companies like Google, OpenAI, and Tesla aren’t going to build their own biases and agendas into their AI products, I would think again.
AWS continues to make changes that are designed to extract more money from customers’ wallets. Their latest move is to charge a fee for all IPv4 Elastic IPs. Previously, you would be charged $5.00/month if you provisioned an EIP and didn’t use it. Now they are charging for every IPv4 address you have allocated, whether you are using it or not. This move adds zero value for customers, but will likely add tens or hundreds of millions of dollars to their revenue numbers.
As a casual user of AWS for some of my personal projects, the risks of being hit by a surprise bill are becoming too high. I will be exploring other options and migrating off of AWS as soon as I can.
In a similar vein, Medium continues to do their best to ruin blogging. I’ve been noticing more and more articles are being locked behind a paywall. Either you need to have a paid subscription, or you have to be logged into a Medium account in order to read them. It’s getting to the point where I’m starting to pass on links when I see medium.com in the URL. It’s just not worth the hassle.
Some days I wonder if the modern web is too far gone, and whether it’s time to embrace newer protocols like Gemini and just leave the garbage web behind. So-called “platform” companies continue to “innovate” by taking open services, turning them into walled gardens, and then charging money. I don’t think this is what anyone envisioned the web would be like back in 1995.
This week I took my first baby steps with the Nix package manager. I’m really interested in building reproducible environments using Nix. I’m currently running Nix on top of openSUSE Tumbleweed instead of going all the way down the rabbit hole and installing NixOS. One of the concepts I find fascinating is that Nix isn’t tightly coupled to NixOS, and the environments defined using Nix exist separately from the base system. This allows me to create a custom Nix environment for every project, and it allows me to easily share that environment definition with others so that we’re all using the same tools and components.
The downside to Nix is that it’s a complicated beast. There are a lot of knobs to turn and it’s going to take a while to learn how it all works. I would also like to see if Nix can be combined with containers, as that feels like it could be an interesting combination.
This week I went to my first tech meetup since COVID. Many of the meetups in my area shut down during the pandemic and never started back up. Thankfully, a couple of technologists in the area decided to restart the DevOpsCLE group. For this first meeting there was a really good presentation and discussion around building a great engineering culture.
I wanted to check back and see how Dagger has changed and improved since the last time I looked at it. Unfortunately, I still can’t use it due to a dependency on Docker. I use alternative container tooling like Podman and Rancher, and I don’t really want to spend time trying to hack support for these tools into Dagger myself. On the plus side, I noticed they have added a Developer Certificate of Origin (DCO) requirement to their contributing guide. A DCO attests that the developer making the contribution has the necessary rights to make that contribution. It doesn’t require the contributor to sign over their copyright. Many projects with a corporate backer like Dagger instead require a Contributor License Agreement, which does force the contributor to sign over their copyright. This can then be used for a license bait-and-switch from open source to proprietary, like Hashicorp did last year. I’ll check back again in a few months to see if Dagger supports additional container tools.
I’m still working on how to make note taking and processing a regular habit during the week. Lately I’ve been really busy with work, and as a result I haven’t been taking notes and putting them into Logseq. I’m not sure how to fix that, but it is something I noticed.
Another quick comment about my digital notes with Logseq. I’ve been reading through the “How to Take Smart Notes” book. This book is all about creating a Zettelkasten, with a nod to the use of modern digital tools. However, I’m not an academic researcher, so I don’t really need a full Zettelkasten. I think I can get a lot of mileage out of taking relatively simple notes and linking them together in Logseq. I am interested in the concepts of small, atomic notes and the process of rewriting notes in my own words to help reinforce learning and retention, but I don’t necessarily see value in following the complete Zettelkasten system. Getting hung up on the system leads to friction, and friction leads to procrastination. At least in my case.
Just a quick comment on Logseq this week. I’m not currently using the journal for anything in Logseq, but since it defaults to the journal whenever it starts up, I feel like I’m holding it wrong. I suppose I could try entering my notes for the day into the journal, and then create new pages from there and link topics together, but it feels like that would make it more difficult to search and recall notes in the future. I’m not sure what to do about that.
I was able to figure out why my Logseq integration with Omnivore broke again late last week. Once again the culprit is the Omnivore API, more specifically the format of the query used by the Logseq plugin to import my highlights and notes from Omnivore. This time around the query engine has decided that tag values have to be quoted, and if they aren’t it returns an empty set. It also looks like the logical AND I’ve been using may be optional. The last time this happened the query engine changed to accept AND but not and. Now it appears that I could send the query as in:archive label:"logseq-import" instead of in:archive AND label:"logseq-import".
Next time the plugin import suddenly stops working I’ll be sure to check for query changes first.
Logseq allows you to link to pages and even individual blocks. The page links are easy enough to work with, but the block links have a side effect that I don’t like. Inside of Logseq the block links are a seamless experience, but in the Markdown file all you will see is a UUID similar to {a8524411-8193-4010-8f77-4fe19f4c16b9}. So far I haven’t been able to determine where those identifiers are kept within Logseq’s data. Sometimes I like to use tools outside of Logseq to work with my notes, and inscrutable identifiers like this make that difficult. My solution is easy enough: I just won’t use block links in my notes. I’ll use page links or tags instead to link notes and ideas together.
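If I did want to work with block links outside of Logseq, a bit of plain Python could at least locate them. This is a rough sketch that matches the brace-wrapped UUID format from the example above; whether Logseq always emits links in exactly this form is an assumption on my part.

```python
# Rough sketch: find Logseq-style block-link identifiers in Markdown text.
# The {uuid} pattern mirrors the example in this post; it is an assumption,
# not a documented Logseq format guarantee.
import re

UUID_LINK = re.compile(
    r"\{[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\}"
)

def find_block_links(markdown: str) -> list[str]:
    """Return all block-link identifiers found in a Markdown string."""
    return UUID_LINK.findall(markdown)
```

Pointing something like this at a notes directory would at least surface which files depend on block links before deciding to rip them out.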
I’m still experimenting some with how I take notes in Logseq. I keep my daily journal in a paper notebook, so the digital journal in Logseq isn’t all that useful to me. I’ve been experimenting with using it as a landing place for new ideas that I haven’t figured out how to tag/categorize yet. I’m trying to be more intentional about taking my fleeting notes, or notes I jot down while reading or watching something, and converting them into permanent notes by summarizing and organizing them in my own words. While it’s more work, so far it seems to be leading me towards a better organized digital notebook.
Bruce Perens is looking at how Free Software/Open Source should evolve to meet the needs and challenges of today. I’m interested to see if he can figure out how to balance developers’ need to make money from their work while simultaneously protecting end users’ freedoms. As one of the founders of the Open Source Initiative, he is well positioned to do something interesting in this space.
For several years I switched almost exclusively to reading eBooks, but over the last year I’ve been switching back to paper books. I really want to like eBooks, but there isn’t a good reading experience outside of DRM-laden, data-collecting platforms like Amazon Kindle. I’m not about to buy into that garbage ecosystem, so my next best option is paper books. Hopefully we’ll see a good, open eBook reading experience hit the market in the future.
I’ve been playing around with a new blogging platform “for hackers” called Pico Prose. I really like the tool, and if I were starting a new blog it would be a no-brainer for me to use it. Unfortunately it would break all my existing URLs, including the link to my ATOM feed, so it’s a no-go. For now I will continue working on migrating this blog to Hugo.
If you are interested in starting a small blog and are familiar with tools like SSH and rsync, I think it makes for a wonderful experience.
Sigh. I have been making decent progress with using Logseq as my tool of choice for digital notes. However, earlier today I discovered the Omnivore integration is mysteriously broken again. This is really beginning to sour my opinion of Logseq and its plugins. I don’t feel I can trust my notes to a tool that is consistently broken like this.
Logseq is still fairly new at version 0.10.5, and I don’t want to make a knee-jerk move to another tool, so I’m going to wait a bit and see if the issue gets fixed. Hopefully this will stop happening as frequently as both tools continue to mature.
This week I learned my work laptop is going to be replaced by an Apple laptop of some sort. Apparently that’s the standard, and somehow I slipped through the cracks with my existing Windows laptop. Sadly, that means I’ll lose access to my openSUSE Tumbleweed installation on WSL. One of the reasons I chose Tumbleweed is so I would have a consistent distribution across my home and work machines. Looks like that plan is dead, so maybe it’s time to look at other distributions for home. I had trouble previously installing NixOS, but I’d like to try it again.
This week I started working on my personal project. So far I’ve been experimenting with FastAPI and Pydantic data models. I’m still learning how everything fits together, and so far all I’ve done is build a few simple data models and some basic API routes to run tests with.
I’ve also installed Pre-commit and Ruff as part of my development toolchain. I’m a big fan of opinionated code formatters, and the linter is helping me write more idiomatic Python. At some point I would like to compare Ruff to other tools like Black and Flake8 to see if it’s missing any rules.
As I get more comfortable with these packages, I will be looking for a logging package next. I figure I may need one to help with debugging the API as it grows.
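While I evaluate third-party packages, the standard library’s logging module is my baseline for comparison. A minimal sketch of how I might wire it into the API, with an illustrative logger name:

```python
# Baseline logging setup using only the standard library.
# The "myapi" logger name and handler function are illustrative.
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
)
logger = logging.getLogger("myapi")

def handle_request(path: str) -> str:
    # Lazy %-style formatting avoids building the string when DEBUG is off.
    logger.debug("handling request for %s", path)
    return f"ok: {path}"
```

Anything fancier, like structured JSON logging, would be the selling point for a dedicated package over this baseline.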
This year, I want to get back to doing more hands-on programming, and I want to improve my Python skills. I would like to get to the point where I could function as either an engineering manager/director, or a staff+ engineer. To that end, I’m starting a small personal Python project. It’s nothing of consequence, though if it turns out to be useful I may release it on GitHub under an open source license. I’ll also try and blog about the tools and libraries I’m using to build it along the way.
2023 has been a challenging year for me. We had a lot of churn and turmoil at work. Our CEO left the company, and with a new CEO comes a whole new executive team and the expected reorganizations and changes in strategy. Combine that with all the normal busyness and challenges of family life, and I spent a good portion of the year feeling exhausted and burnt out. It feels like our new leadership team has figured out the direction for the company, and I’m looking forward to getting into a more sustainable groove in 2024.
As I mentioned above, I felt very tired and run down most of this year. I don’t do a good job of managing my energy levels. I tend to have two operating states, either I’m going full bore/top speed, or I’m sleeping. I also do a lousy job of managing my stress levels. Put those together and it’s a recipe for exhaustion.
In 2024 I’m going to work on managing my energy levels better. I am going to make time for breaks in my work day, like dedicating time for lunch. I am also going to be more disciplined about taking care of my physical health through diet and exercise. I don’t want to end up having a heart attack or other stress-induced condition before I’m 50 years old.
I learned this year that building a personal knowledge base (PKB) is much harder than it looks. It takes a great deal of discipline and practice to become good at taking and organizing notes for future reference. While I definitely didn’t do as well as I would have liked, I feel like I have learned a lot about the process and I’ve started to develop the habits and practices necessary to make this a useful tool. I’m looking to make my PKB a regular part of my life and workflow in 2024.
Surveillance capitalism is only increasing. I’m becoming more and more uncomfortable with putting my personal data into cloud services, especially those that are “free”. In 2024 I’m looking to build out a small home lab and to start self hosting my own digital services. For myself, this will be things like calendars, tools related to my PKB, and possibly a home media center.
I currently use AWS for some personal cloud computing resources. The services themselves are fine, but AWS’ pricing does not have home users in mind. I would like to experiment more with using cloud services to assist with my self-hosting objective, but it is just too cost prohibitive for me to do that in AWS. I would like to change to a cloud provider with pricing and services that are designed to serve small developers and self hosting, but there are economic and compatibility issues that I need to watch out for. I tried using alternative S3-compatible object storage for offsite backups, but I quickly discovered that not all S3-compatible APIs are truly S3-compatible. Likewise, I expect to see more consolidation in the cloud provider market. Akamai acquiring Linode this past year is an example, and I expect more providers who are not AWS or Azure will either be acquired or shut down.
2023 has been an interesting year for the broader open source community. Companies like Hashicorp pulled the rug out from under their users and customers, and even stalwarts like Red Hat made moves that have people questioning the long-term viability of open source. I am still all in on open source, especially for my personal use. However, I will be less and less likely to take a gamble on open source companies, like Hashicorp, that are backed by venture capital or private equity and use permissive licenses. It feels like more and more companies are deliberately using the open source angle to build up a following with the intention of switching to a proprietary license once they reach critical mass.
I think that will wrap it up for 2023. It’s been a challenging year, but I also think I learned a lot. Here’s to hoping 2024 will be better and that I will continue to learn and grow. For the few of you out there that read this blog, I pray you and your families will have a Merry Christmas, a Happy New Year, and a better 2024.
The main thing I wanted to share this week is the new home of FLOSS Weekly. After being cancelled on the TWiT network, a few of the hosts got together and found a new home for the podcast at Hackaday. FLOSS Weekly has been a podcast I’ve enjoyed listening to for over a decade now, and I’m happy to see it will continue.
With that I think I will sign off for the rest of 2023.
I enjoy watching content on YouTube, but I’m not a fan of the Google data black hole. For a few years now I’ve used YouTube with my search and viewing history turned off. YouTube has always nagged me about it periodically, but recently they updated all their apps so that they no longer provide recommendations, including for Shorts, when history is off. They seem to think they’re penalizing people like me, but in reality it prevents me from wasting time scrolling through videos that I don’t really care about. Instead of making me want to enable history, it’s actually reinforcing my decision to leave it disabled.
I just saw a notice that the JetPorch project will be discontinued. JetPorch was a new systems automation tool that was started by the original author of Ansible. He was taking many of the lessons learned from Ansible and using them to build a new tool. I haven’t had much time to look into it yet, but I was interested in trying it out as it matured. Sadly, it looks like there wasn’t a large amount of interest in the project and so it will be discontinued.
With that, I want to wish everyone a Merry Christmas and a Happy New Year.
Note: None of the links below are affiliate links. I get nothing from linking to these books. I try to link to the author or publisher site when possible, and Amazon as a default.
Favorite Books 2023
That’s it for this year. I haven’t managed to do these more often than annually, so this year I’ll just wish everyone a Merry Christmas and a Happy New Year and I’ll most likely do this again next December.
Last week I was lamenting that the Omnivore plugin for Logseq had stopped working. A couple of days ago I figured out why. In the plugin, you can configure a custom query to select which articles get imported into Logseq. My query was defined as label:logseq-import and in:archive, and had been working well for several months. Apparently they’ve changed the query language a little bit, so that query now returns an empty set. The fix was to change the query to label:logseq-import AND in:archive. Now everything is working again as expected.
I’ve been playing around a little with how I import articles and highlights into Logseq. I tried creating a separate page for each article, but that feels like a lot of unnecessary work. After thinking about it for a bit, I ended up moving all my newly imported articles into the Logseq journal. Along with putting articles in the journal, I add a few metadata items, which Logseq calls properties, to each article to make it easier to find later. This works well since Logseq automatically creates pages for tags and has a really solid search system.
From there I can create new pages to record my own thoughts and insights, and link that page to the article block in the journal for reference.
Today I was listening to one of my favorite podcasts, FLOSS Weekly, and they announced that the episode recorded this week would be the last. Apparently TWiT is having some financial struggles and is ending some of their shows that don’t make enough money from advertisers and supporters. I’ve been a listener of FLOSS Weekly for well over a decade now. I guess all good things must come to an end.
This is just a quick thought this week. Over the past several years I’ve noticed some recurring issues with larger infrastructure as code (IaC) projects. Namely, they become harder and harder to maintain over time in ways that are different from application code projects. As a project grows, I’ve noticed that it’s very easy to introduce changes that prevent using the IaC to recreate the environment. For example, it’s very easy in Terraform to introduce a circular dependency. Since the system is growing incrementally, Terraform won’t see it as an issue unless you try to recreate the entire system from scratch.
As IaC projects grow they become harder and harder to understand. I wish I knew of a way to better tame that complexity, as now what would take 5-10 minutes to do manually in the AWS console can take several days to accomplish in Terraform due to the complexity and interactions of all the resources. It makes it very difficult to onboard new developers and engineers, as well as slows down development velocity.
My experience is primarily with Terraform and Ansible, so this may be different for other IaC tools. I don’t have answers to these issues, but I find them interesting to think about.
In an unexpected turn of events, I’ve discovered today that the plugin that integrates Omnivore with Logseq isn’t working any more. As much as I appreciate the features of a dedicated notes tool like Logseq, I think the additional complexity leads to situations like this where things suddenly stop functioning. I don’t know if it’s a bug in Logseq or the Omnivore plugin, but it’s making me rethink my choice of note tools yet again.
Maybe it’s time to abandon backlinks and fancy visualizations in favor of simplicity and plain text.
For the longest time I struggled with the feeling that I wasn’t getting the right projects completed. The Bullet Journal method has been a big help in that area by helping me to ensure that I’m capturing information and tasks throughout the day so that I don’t forget anything. Even with that I was still struggling with prioritization and a feeling that I was stagnating. At the time I wasn’t really doing the weekly and monthly reflections that the Bullet Journal method describes and now I think that was the missing link.
Weekly and monthly reflections are like a review. It’s about taking a little time, I typically spend 15-30 minutes per week, to review the events of the previous week or month and figure out what worked and what didn’t. If you can be honest with yourself, this is an opportunity to identify problem areas that you can work on to make small improvements in yourself. For example, I use it to review things like my diet and exercise progress, my spiritual disciplines like Bible study, prayer, and fasting, and to gauge progress on my personal projects. A review can be as structured as you like it to be. It can be as simple as doing some free-form journaling or asking yourself a few guiding questions.
I personally like Matt Ragland’s WRAP method. I list out my Wins, Result, Alignment, and Pivot items each week in my journal, and use those to inform my planning for the next week and beyond. There are countless other methods out there to choose from. I would recommend trying out several and finding one that works for you.
Regardless of what productivity system you use, if you feel like you are still missing something I would highly recommend adding some form of regular reviews to your practice. It’s been a real boost in my life.
Canonical, the company behind Ubuntu Linux, has a new private cloud offering called MicroCloud. MicroCloud bills itself as a lightweight private cloud solution that is easy to deploy and manage, but is capable of scaling to large workloads. As the economics of public cloud continue to shift towards provider profits, it may make sense to spin up a private cloud using local resources. While I’m not super keen on it being delivered as a Snap package, it looks like it’s very simple to install and set up versus other private cloud platforms like OpenStack.
I’m not a huge Ubuntu fan, but this has me intrigued. I might try to spin this up on a Raspberry Pi or two to kick the tires.
I have been wanting to get more into home automation and incorporating “smart” devices like light switches and lights that I can combine with scripts and programs. However, when I look at most of the consumer devices on the market they all require signing up for some sort of cloud service account and they want to gather data from the devices in my home and sell it. Jeff Geerling published an excellent video this past week that sums up the issues with smart devices.
I found myself nodding vigorously in agreement at several points in the video. Like Jeff, I think I would only trust devices that I either build myself or that function fully without internet access. I’m also very much against devices that break the normal use of a device. Jeff’s example is “smart” light switches that only work with an app after they are installed vs. a smart switch that works in conjunction with the normal light switch. If I’m standing next to the switch, I just want to flip it on or off without needing to use my phone.
This is where I wish I knew more about electronics so that I could build more devices myself.
I’m still working on getting Hugo fully up and running. The last thing I need to figure out is how to generate an ATOM feed. I’ve seen a couple of forum and blog posts that show how to generate an ATOM feed using a custom layout, and I also found a Hugo ATOM module on GitHub that I would like to try out.
Alternatively, I think I could use the default RSS feed and configure a 302 redirect from the old ATOM URL to the RSS URL. I think most modern feed readers would be able to follow the redirect and load the RSS feed. That feels sloppy to me though, so I’m trying to avoid doing that if I can. On the other hand, the module I found is a fork of a module that was last updated in 2022. I’m not sure that I want to take a dependency on something that is likely to be abandoned vs. using a standard feature in the main distribution.
This week I watched a Bullet Journal video about augmenting the rapid logging technique with something called interstitial journaling. The idea is that you use your written journal to record context switches throughout the day so that you can better learn about what is getting your attention each day.
When you have to switch contexts, you record three things in your journal: the current time, a quick note about where you left off on the task at hand, and what you are switching to and why.
The notes in the second step can help you resume the task when you get back to it, and the notes in the third step can record the context around why you are changing tasks. My work day is full of interruptions and I’m going to try using this technique to help me stay focused on the important work and to not succumb to the tyranny of the urgent.
This week I spent some time learning how Hugo’s routing works. Part of this was due to struggling to figure out why my test posts weren’t rendering with some themes. Turns out I forgot to add the -D flag to render draft posts, and all my test posts were marked draft = true. Now I have a good understanding of how the routing works and what I need to do to migrate my existing blog posts without changing the links. I’ve also picked out a theme that I think will look nice for both my blog journal and the new digital garden entries. It’s the Nightfall theme in the Hugo theme gallery, in case anybody is interested.
I’ve encountered one limitation so far. I don’t know of a way to have Hugo automatically generate backlinks to files. I’ll have to generate and manage those manually. It shouldn’t be too much trouble initially to keep up with, and I’ll continue looking for some way to do this automatically.
I’ve settled on Hugo as the static site generator for the next iteration of my blog. Now begins the arduous task of migrating all my existing posts to the new engine. I need to find a theme that will work well with my plans to add a digital garden, and I need to learn more about how Hugo does request routing.
I have been wanting to try out Atuin for enhanced command history management. I finally installed it and set it up, and it has been really great. Atuin takes your Linux command history from the terminal and puts it into a database so you can perform advanced searches and filtering. It also enhances the Ctrl-r experience by replacing it with a new terminal UI that lets you view and search your history. So far I really like it, and I would recommend trying it out if you frequently search your history for commands.
Finally, for this week, I learned about the Small Technology Foundation from the FLOSS Weekly podcast. They are a non-profit dedicated to building web tools and applications for individuals that respect your privacy and security. They are looking to build out what they call the Small Web using simple tools that enable end-to-end encryption and federated access. It looks like the technology is still in the early stages, but it’s something I want to keep an eye on. It reminds me in a lot of ways of Gemini, without the need to learn a whole new protocol and markup language.