
Apprenticeship pattern: Familiar Tools [featuring the Concrete Skills pattern]

Since beginning this undergrad program (really, the CS transfer program at my previous school) and starting my internship, I’ve found myself working in several different environments with several different tools. I’ve been able to scrape by in most of them, but haven’t stuck around with any one of them for long enough to become really familiar with it. I can specialize template classes and manage pointers in C++, but I don’t know how to use most of the standard library. I’ve grown from using a couple of Linux commands in a Manjaro VM to writing increasingly useful Bash scripts, but I’m not rapidly adding Linux commands to my toolbox. I’ve programmed in at least 10 different languages in the last year, but I’m not happy with my proficiency in a single one of them.

I think that this means that I need to start making use of the related Familiar Tools and Concrete Skills patterns, or at least my own dot-product reinterpretation of the two. The Familiar Tools pattern is all about developing consistency with a tool that you already trust yourself with, and Concrete Skills is about developing a proficiency foundation to build on. I have some foundation, but it’s split across a bunch of different technologies, and I need to rebuild it starting from a single tool that can become familiar to me.

The tool that I’d like to become familiar with is Rust. I share a lot of the general and technical priorities of that project. It’s great that it’s flexible and able to progress and fix mistakes without being paralyzed by a need to forever remain backwards-compatible. More importantly, I want to eventually work mostly with statically typed languages, and I love that it lets you opt into fewer levels of abstraction. For example, how cool is it that you can inline assembly in the same language that considers C-style undefined behavior to be a design failure?

Now that my formal education is ending, I’m going to use some of my spare time to develop my concrete skills in Rust by working on small projects. Hopefully, one of those will be working through Sedgewick & Wayne’s Algorithms again in Rust, some will be collaborative projects on fun things that I can’t yet get paid to do, and some will be projects where I figure out how to integrate a Rust backend with other tools that I need to learn more about anyway such as web APIs and AWS CloudFormation and Timestream.

From the blog CS@Worcester – Tasteful Glues by tastefulglues and used with permission of the author. All other rights reserved by the author.

Apprenticeship pattern: Craft over Art

Craft over Art is a pattern I’d like to start keeping in mind as I program. Because every functional specification can be implemented in more ways than any of us can imagine, I tend to think up the single best way I can imagine to solve a problem, and then continue solving the problems that come up on the way to that result.

There are two problems with this approach, both of which can be addressed by following the Craft over Art pattern. The first issue is that often, when trying to do a new thing, what I learn along the way indicates that there is some easier or more useful solution. When I treat programming as an art, these concerns are secondary to finishing the beautiful thing that I first set my mind on. In many cases, though, these realizations are a sign that it’s time to change plans in order to achieve some other goal that will be more useful or achieved more quickly.

The second problem with my process is that the best way I choose to set out on is often one of the artistically best solutions I can think of, not the way that most quickly results in a product with acceptable quality. As the author says, the professional goal of software development is to make useful things, not to write beautiful code.

I do think that, to some extent, writing programs in the most perfect way that you can is good exercise for developing mastery in an area of programming, as is continuing to learn and implement new ways to do things. After all, some problems really are suited to specific solutions, especially when some form of performance is important. For example, determining a superior route between nodes is a problem that should often be solved after considering the methods of graph theory. Sometimes, though, the stakes are low, and there really are no other concerns besides a solution’s approximate correctness and the speed with which it is implemented.

One of the main reasons that I like programming is because there is so much opportunity to be paid to work in a way that’s artistically fulfilling to me. But as an apprentice who wants to improve, I think that limiting this impulse to appropriate times is one of the most valuable skills that I can learn over the next few years.


Sprint 3 retrospective

As sprint 3 was a short cleanup sprint, we spent it doing some cleanup on the epics and issues board. During the sprint we were also trying to get a testing CI pipeline running. We ended up finally figuring that out after the technical end of the sprint, but I’ll be talking about that work here anyway.

As far as cleanup, I deleted a few remote branches in the GuestInfoBackend repo that had not been deleted when they were merged. I only deleted the branches that I knew had been merged. Now that I look at the other branches, though, GitLab indicates whether a branch has been merged into the main branch, so this week I will know which branches are clearly safe to delete in the backend, API, and frontend repositories.

I did some additional cleanup related to the status of features that remain in the product backlog. For example, I removed an indication that the upcoming Kubernetes conversion epic is blocked by the GuestInfoIntegration verification epic that we completed.

I helped Kelvin’s effort to get the GitLab test job to run our backend tests. For example, I made sure that a copy of an openapi.yaml file we knew would actually exist got into the testing/test-runner/ directory while the test script executes.

It wound up being a bit harder than expected to get a value from Bash into the YAML definition file used to network the various docker containers. There seem to be a few more ways to do that when using docker run than with docker-compose, so I figured out how to get the right backend image name substituted into the docker composition both for the GitLab CI test job and when executing in a dev container.

I also verified that the test job is doing all the right docker stuff by making sure that broken unit tests and broken endpoints both result in failed tests in the test job.

I think our team did a good job getting the tests to run in the CI pipeline this sprint. At the same time, putting a bunch of energy into a branch other than main may have divided our attention a bit. Probably more important was that this was a short sprint with more modest goals than the first two. Given that during sprint 1 we didn’t really know what we were doing or how most of the architecture fit together, it might have been nice to have a shorter first sprint and then more time to work on this one, where maybe half or a third of the goals were cleaning things up.

Honestly, I don’t think it’s bad that trying to gradually wrap your head around a codebase is a big part of this class, as that’s going to be one of the challenges that people face in new working environments. The software construction & architecture prerequisite class does use the same architecture, but it’s easy in that class to build better understanding of how some of the individual parts work and an understanding of microservices and REST APIs. However, the devil is in the details, and understanding small integration details enough to be confident that contributions are correct is not easy.

One other thing I wish I had done differently to help our team as a whole this sprint is collaborating a bit more with everyone. I feel a bit like I locked myself in the test-runner branch and wasn’t very present to help with other things.


Apprenticeship Pattern: Breakable Toys

With the breakable toys pattern, the learner conducts experiments and observes the work of others to improve their own understanding of how to build complex systems. Apprenticeship Patterns suggests a few ways to go about this. You might build a simplification of an existing problem that you’re already facing. By reducing uncontrollable variables or emulating a single possible case, you can come to understand one key interaction, tool, or data flow at a time until you become capable of composing a solution to a broader problem.

Another possibility is building your own projects that are not focused on understanding one specific issue you are already facing. Doing so can build familiarity with your tools, and can lead to unexpected creativity or an interest in learning how others solved problems you might never have otherwise encountered.

The third variant of this pattern is the analysis of external source code. The abundance of open-source projects provides an effectively infinite learning resource for studying how others design systems and resolve issues.

I think that this pattern is a very important part of long-term learning. It can be easy to build habits when programming: you can usually come to a desirable result either by brute-force trial and error from the bottom up, or by drawn-out requirements planning that gets done before any prototype is built. But I’ve found myself far more invested in one of these two methods than I should have been, when remaining flexible and incorporating aspects of the other would have led me to a better solution faster.

In general, I think it’s a shame how often the daily work of software development can feel detached from the playful curiosity of the scientific method. Much of the most rewarding learning happens as a result of developing a question and finding out how you can reach a confident answer to it. Considering we’re working in the one field where rapid iteration and a near-zero cost of entry make the scientific method universally accessible, I’d like to think I’ll take advantage of that by putting myself in many situations that generate spontaneous curiosity.

That’s what I like about making a breakable toy. It’s usually more fun to play with than the projects that we have real responsibility for, and that fun makes us want to understand everything about the toy and how the toy could grow into something else.


Sprint 2 Retrospective

One issue our team was having during this sprint was that getting or deleting a guest, after creating the guest, was returning a 404 not found response. We were having a difficult time identifying the cause of this reversion of good behavior, so I learned how to use the git bisect subcommand to find the commit, found the change within the commit that caused the bug, and reverted that change. It ended up being that we could not refer specifically to the /guests endpoint token within the OpenAPI server URL value without other changes.


The fix continues to exist in the frontend (we just need to make sure the right API YAML file is being used in all the repos now).

Chai has very nice syntax: together with chai-http, it lets you call should on an object, such as object.should.have.status(200), or call chai.expect(object) when the object might or might not exist. I wrote some simple unit tests for the get-API-version endpoint, such as verifying that the version string’s length is in the range [5, 8] (from ‘0.0.0’ to ‘99.99.99’). As with the other tests, being able to run them quickly with cd src && npm run test while the server is running on localhost is a useful holdover until we get the CI pipeline working.

I did some pre-sprint-3 cleanup by removing the Chai (not Chai-HTTP) example testGuest.js file.

I cleaned up our set of unit tests for the create-guest endpoint, which I think is now the first (not counting the get-API-version endpoint) to be in a really good state in terms of test coverage. The suite includes a cleanup step at the end (so tests can be run repeatedly without rebuilding the server) and several HTTP requests, both valid and invalid, that examine many of the properties of the returned responses. One trick I learned was to assign a const object to store guest data and then repeatedly copy it with the spread (...) operator. Without the copy, JavaScript assigns object references, so modifying a “copy” would mutate the object behind the const binding (const prevents reassignment, not mutation). What a lovely language feature.
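A minimal sketch of that spread trick (the guest properties here are made up for illustration, not our real schema):

```javascript
// A template guest shared by many tests; the binding is const,
// but const only prevents reassignment, not mutation.
// Property names are illustrative, not the actual guest schema.
const baseGuest = { firstName: "Ada", lastName: "Lovelace", age: 36 };

// Spread makes a fresh shallow copy, so each test can tweak its
// own guest without touching the template.
const badAgeGuest = { ...baseGuest, age: -1 };

// Plain assignment would only alias the same object:
const alias = baseGuest;
alias.age = 99;     // this DOES mutate baseGuest...
baseGuest.age = 36; // ...so undo it to show the copy is independent.

console.log(badAgeGuest.age); // -1: the copy kept its own value
console.log(baseGuest.age);   // 36: untouched by changes to the copy
```

Note that spread is a shallow copy: nested objects would still be shared between the template and the copy, so deeply nested test data needs a deeper copy.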

Overall, I think I did decently. I helped Kelvin a little with some of his issues. He individually, and our team as a whole, did a great job this sprint. We began and completed work on multiple epics. We ended in a state where now, 1 week from the end of sprint 3, we think we can finish up everything that we wanted and agreed to get to in our sprint 3 planning meeting.

One challenge I faced that I couldn’t resolve was an unfortunate technical difficulty. Despite multiple restarts, re-cloning the repos, and checking my installed program versions, my computer stopped being able to run the server. I eventually gave up after a few hours of trial and error and moved to a different computer, but I suspect the problem was an incompatibility involving the rolling-release software update model used by Arch Linux. I’ve now had 2 confirmed and 2 other suspected significant issues that might have been avoided by using LTS software.

Our team could probably improve our communication. It’s hard to find time after Thursday and before Tuesday to plan or collaborate on changes. Early this semester I supported working asynchronously, but in retrospect having an hour or two per week of planned collaboration probably would have led to more productivity and confidence without an unreasonable cost.


Sprint 1 retrospective

For sprint 1, I did insufficient work. The only changes that I ended up contributing are in this commit, which adds a Node script that runs an example unit test on GuestInfoBackend.

Before this class, I had not written any JavaScript unit tests, and I was not aware of any testing frameworks for the language. I assumed that implementing simple unit tests would be trivial. I ended up spending a few hours learning how to use the mocha testing tool and chai assertion library. Because I needed to make HTTP requests, I spent the rest of the sprint trying to get chai-http working in order to do something similar to the previous testing implementation that I wasn’t aware of at the time.

Many projects will export an Express app as a node module. This makes it easy to use the same module both as a listening server and to serve requests from a parallel testing environment. Because we instead define our own entrypoint in src/index.js, I was still exploring how to define a second testing entrypoint when sprint 1 ended.
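As a sketch of that export-the-app idea (using plain Node instead of Express, with made-up routes and file names), the app module exports a request handler and only the entrypoint calls listen, so a test can drive the handler directly without binding a port:

```javascript
// app.js (sketch): build and export the request handler, no listening.
// Routes here are illustrative, not our actual backend.
const handler = (req, res) => {
  if (req.url === "/health") {
    res.statusCode = 200;
    res.end("ok");
  } else {
    res.statusCode = 404;
    res.end("not found");
  }
};

// index.js (sketch): only the entrypoint binds a port.
// require("node:http").createServer(handler).listen(3000);

// A testing entrypoint can instead call the handler with fake
// req/res objects, so no socket is needed at all.
function fakeResponse() {
  return { statusCode: 0, body: "", end(s) { this.body = s ?? ""; } };
}

const res = fakeResponse();
handler({ url: "/health" }, res);
console.log(res.statusCode, res.body); // 200 ok
```

With Express the same split applies: module.exports = app in one file, app.listen(...) in the entrypoint, and chai-http can then take the exported app directly.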

Many of the changes that both I and the team need to make for sprint 2 are circumstantial. We understood very little about many of the issues early in sprint 1. Some were small syntax changes, but others required quite a bit of domain knowledge. We’ve since developed enough of that knowledge. Especially now that we’ve agreed to delay significant front-end changes to sprint 3, we have most of the information needed to be confident we can address most of the sprint 2 issues.

Personally, I was much busier leading up to the end of February, and didn’t always reserve enough of my weekend time to work on issues, which obviously just needs to change for sprints 2 & 3. I was also slow to bring my chai-http woes to our professor, when he could have shown me the branch with most of this work already done for me.

One change that I’m interested in is finalizing a collection of changes before creating a merge request branch. Sometimes we may want to swap to another issue without committing to the current branch or using git stash, and sometimes it became clear that we should have split an issue into multiple issues. For sprint 2, we’ve found a work-in-progress workflow where we instead work on a local branch and then, once we’re ready to push code, create the merge request and set the remote to that merge request branch.


Apprenticeship Patterns – A review before having read any of the patterns

From the outset, Apprenticeship Patterns by Dave Hoover and Adewale Oshineye demands that you interpret its abstractions. Apprentices are to be seen as those who are learning a craft but not yet spreading it or responsible for the success of their workshop. Craftsmanship is not participation in the historical guild system, but engaging with shared values of self-reliance and free information.

This vague framing of the process of learning to write software is a bit off-putting, but a brief skim of the early content of the book shows why it is necessary for this to be an adhesive work. The language is that of self-help books and MBTI questionnaires because that is the vocabulary that exists to describe an infinitely vague process of self-directed improvement, and the technologies to learn and career paths to take in software are so varied that no one book can describe them in concrete terms. This one book can be forgiven for trying to describe them all in the terms that remain.

The introductions of Patterns‘ early chapters pose a category of situations in which all learners will find themselves, from the need in chapter 2 to destroy the habits preventing an exploration into the field, to the challenge in chapter 6 of selecting from a near-infinite pool of available learning resources. These problems are all sufficiently broad that any dedicated learner either will face them in the near future or has adapted to each at some point in the past. The structure seems compelling: as with the pseudoscientific identity assessments it parallels, everyone can identify with one or two of the patterns presented in each chapter. I hope that readers think more about the solutions to problems not yet solved than about the solutions that they are most proud of using in the past.

Overall, I’m surprised by how easily I’m drawn in by the book. Every section headlined by ‘Context’ is a generous platitude unwrapped from a self-centric fortune cookie. Each ‘Problem’ a horoscope personalized for any human on the planet. But I want to read about me, and what challenges I am already facing, and the strategies I can use to optimize my learning. Maybe all together that is worse advice than just ‘keep making new things in new ways,’ but Apprenticeship Patterns knows the power of framing information around the reader.


Some APInions on REST and GraphQL

Whenever you’re new to a thing, a comparative look at different tools can help you understand the problem by learning how each tool approaches a solution. As someone new to consuming and designing APIs for the web, I’m interested in understanding APIs by looking at the difference in approaches of the REST specification and the GraphQL query language. This post is based on Viktor Lukashov’s GraphQL vs. REST blog post, which explores some GraphQL basics from the perspective of a REST API user.

Priority: server-defined endpoints or client-defined queries

The largest difference mentioned by most sources is that a well-built REST API relies on extensive backend definitions of endpoints, while GraphQL puts the onus on the consumer to carefully query the correct data.

In REST, accessing multiple entities requires visiting an endpoint for each entity. These endpoints expose data through predefined parameters and responses. Meanwhile, GraphQL exposes a single endpoint while only returning data that corresponds to the consumer-defined query. This approach requires higher effort from the user, but allows them to construct tailored queries without the need for forethought from the API designer.
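To make that contrast concrete (with a hypothetical guest API, not any real schema), a REST consumer makes one request per predefined endpoint, while a GraphQL consumer sends one query that names exactly the fields it wants:

```javascript
// REST (sketch): one predefined endpoint per entity; the server
// decides the response shape. Paths here are hypothetical.
const restRequests = [
  "GET /guests/42",
  "GET /guests/42/events",
];

// GraphQL (sketch): a single endpoint; the client's query decides
// the shape, so related data can come back in one round trip.
const graphqlRequest = {
  method: "POST",
  url: "/graphql",
  body: JSON.stringify({
    query: "{ guest(id: 42) { name events { title date } } }",
  }),
};

console.log(restRequests.length); // 2 round trips for REST
console.log(graphqlRequest.url);  // one endpoint for everything
```

The trade is visible even in this toy: the REST side is trivial to call but fixed in shape, while the GraphQL side asks the client to know the schema well enough to write the query.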

As a fan of query languages, I think this comparison is very favorable to GraphQL. For any interesting or useful dataset, a user exploring the data should have more ideas about how to observe it than its maintainers and gatekeepers will. Providing flexibility for query writers lets your interface be used in ways you can’t predict.

Implication: caching and performance

One upside of REST’s focus on a planned set of endpoints and parameters is that expected frequent responses can use HTTP caching to improve performance. Responses to GET requests can be cached automatically, reducing server requirements and potentially improving response speed for future identical requests.

In GraphQL, the query writer is responsible for using a unique identifier to generate useful cache data. That said, the consumer may be able to use single queries across multiple tables that would require more than one REST API call to access.

Relying on architecture over following best practices is probably the better way to make performance accessible, which is a point in favor of REST.

Consequence: rules and security

Another difference Viktor mentions is how developers implement security rules. Because REST defaults to an expanding set of endpoint and HTTP-method combinations, rules are typically set on that basis. In GraphQL, with its single endpoint, rules are instead set per field. This may mean a steeper learning curve, but it also provides more flexibility for easily making some attributes of an entity more available than others.

Conclusion: rigid or demanding

One recurring theme of this comparison is that REST APIs are built to be rigid, and another is that GraphQL requires higher effort from the client. This is how I would decide between the tools. If writing something that I expect to be querying frequently myself in new ways, I’d want the query freedom offered by GraphQL. If I wanted a small and fixed feature set, REST seems like the spec to follow.


Gluing functions together in F#

Often when writing a function, we want it to be generic enough that it can operate in different ways for different inputs. Different languages will provide different ways to do this. Object-oriented solutions might include interfaces and polymorphism. Procedural languages might suggest template specialization or function pointers.

I’m going to talk about how functional programming languages use first-class functions to solve this problem in a satisfying way without any pointer syntax. Here’s some F#:

let rec filter fn lst =
    match lst with
        | [] -> []
        | head::tail -> 
            if fn head then [head] @ filter fn tail
            else filter fn tail

We’re defining a function named filter here which takes a boolean function and a list. The filter function will pass each element of the list to the function and eventually return a list of the elements for which the boolean function returns true. Let’s look at the result of passing some simple arguments.

let numbers = [0; 1; 2; 3; 4; 5; 6; 7; 8; 9]

let isEven i = 
    i % 2 = 0

filter isEven numbers
[0; 2; 4; 6; 8]

Usually when we see functions in the argument lists of other functions, we expect them to be glorified values. But in this case, isEven is not returning some value that gets glued to a parameter of filter. When a language has first-class functions, it’s perfectly fine to have functions themselves be the inputs or outputs of other functions. Here, isEven is itself an argument, and it operates on the head of each sub-list as we recursively cut our list down to one with zero elements.

In functional programming languages, it’s harder to write programs that aren’t reusable. If we want our filter to cut out any numbers that aren’t part of the Fibonacci sequence, we just write another boolean function and pass it instead.

let rec isFibHelper n x y = 
    if n = x then true 
    else if (y < 0) || (x > n) then false
    else isFibHelper n y (x+y)

let isFib n =
    if n < 0 then false
    else isFibHelper n 0 1

filter isFib numbers
[0; 1; 2; 3; 5; 8]

filter isFib (filter isEven numbers)
[0; 2; 8]

Because filter operates on a list, to filter a list of some other type we can just steal some documentation code for another boolean function.

let strings = ["abc"; "DEF"; "gHi"; "jkl"]

open System

let allLower s =
    s |> String.forall Char.IsLower

filter allLower strings
["abc"; "jkl"]

Functional programming is all about composition and modularity. You start with tiny Lego pieces and need to attach them until you have a gigantic ninja castle. In its most dogmatic form, that means sacrificing creature comforts like loops and mutable data that we use to quickly slap together solutions that can never be reused.

If you learned to write procedures or call constructors, I recommend giving one of these languages a try. It even makes recursion fun, so what’s not to like?


What’s glued for the goose

Hello there. This is a blog about the sticky things that keep stuff together.

Recently I had the chance to reapply a layer of GRUB, which is a kind of glue called a boot loader. When GRUB stays sticky, it will hold your hardware and software mess in a neat little bundle. But when this glue comes unstuck, the surfaces just don’t fit together the same.

GRUB is glue

My mess was quite unstuck. Turning on the hardware just kept bringing me back to the BIOS.

How is a fan of glues to restick what’s unstuck?

First, you can make a bootable USB of your flavor of operating system. When you plug it in and turn on your hardware, I recommend pressing all of the keys at the same time as fast as you can: this is the only guaranteed way to find the hidden BIOS button. After you’ve taken matters into your own hands or your computer has otherwise dropped you into the BIOS, you may have to move this little USB device higher in the boot priority.

When our hardware succumbs to the live USB, we can take a look at what partitions we have access to.


In my case, I had a root directory partition called nvme0n1p2, and an EFI partition named nvme0n1p1. As a user of Arch Linux (no applause is necessary), the next step let me hop on over to my unbootable-from root directory.

mount /dev/nvme0n1p2 /mnt
arch-chroot /mnt

And then all that’s left to apply the glue is to mount the EFI partition and set GRUB back up.

mount /dev/nvme0n1p1 /boot/efi
grub-install --efi-directory=/boot/efi
grub-mkconfig -o /boot/grub/grub.cfg

Of course, no two surfaces are the same, so your journey back to the land of Adhesion may require a slightly different gluing process.

The charm of GRUB is its malleability. It’s hard to appreciate life without a few glues that fail just often enough to keep you on your toes.
