Author Archives: Thomas Clifford

Read Constantly

This is the first in a series of short blog posts discussing individual patterns from the book “Apprenticeship Patterns” by Dave Hoover and Adewale Oshineye. This week I am looking at the pattern called “Read Constantly” from chapter six.

The scenario this pattern covers is one in which a programmer feels overwhelmed by new information, even in spite of a good amount of proficiency and enthusiasm. The proposal is to read constantly in order to catch up on older developments in the field and stay ahead of new ones as much as possible. This also means prioritizing denser sources of information, such as books or the occasional research article, over things like blog posts. The authors also suggest keeping a small (and thus easily portable) book on your person at all times to read whenever you have downtime.

I don’t really read as much as I would like to. It’d be good to do more reading, especially about the software field, so I think that is useful advice. I have some issues with the framing of this pattern, though.

I don’t fully agree with the outlook of “catching up” to people like Linus Torvalds, whom the authors name-drop here, or with how the authors view people like him. Taking Torvalds as an example, I don’t think he got where he is purely through effort. This isn’t to say that he’s lacking in talent in any way – rather that he got where he is through a combination of being a highly motivated person and being in the right place at the right time in the industry. I don’t think you can make up the latter part through sheer effort alone. I view it as sort of like the lottery – it would be nice to win, and you can increase your chances by buying tickets regularly, but it’s misguided to have winning the lottery as a goal when it’s ultimately out of your control.

I think it’s good to read more, of course. I just don’t agree with constant reading specifically as a way to stay “competitive” in the software industry. I don’t have the background to make this kind of claim, but I also suspect there are diminishing returns when you try to cram as much information into your head as possible.

Having read this section, I think I will actually read more, or at least make some effort to. I’ll also probably try to read more about programming and technology specifically. I just don’t think I will take it as far as the authors recommend.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Software as a Craft

Over the next few months, I’m going to be taking a look at the book “Apprenticeship Patterns” by Dave Hoover and Adewale Oshineye. Here are my initial thoughts based on skimming through it a little.

I found the authors’ idea of applying the “medieval craft model” to software interesting. On the surface it seems like there is no common ground there, but writing software is as much a craft as a science. I don’t think this is due to some deep parallel between programming and something like blacksmithing; rather, it’s because the way of thinking this book is trying to outline has more to do with learning and working in general than with specific technical details. I also like the attitude the authors take towards the individual – everyone has something to contribute, but in order to contribute, the most important thing to focus on is yourself.

I found the idea of a “growth mindset” as it was laid out in the book useful. To the authors, effort is the key to success, while failure is merely an incentive to try something different. I already more or less believed this, but I like the way it was articulated here. Other ideas I found useful from the first chapter were choosing to be pragmatic over dogmatic, adapting to feedback from one’s environment, and focusing on skills over process, since processes and tools will not necessarily make you or your work better.

One idea that stood out to me was that it’s better to share what we know than to hoard knowledge for ourselves. While it seems pretty straightforward on the surface, it runs counter to the way of thinking I’ve slowly become accustomed to. It’s easy to feel like the only value I really add is the various bits of arcane knowledge I’ve picked up over the years, and it’s tempting not to let them go so easily.

I haven’t really come up with any major disagreements with the book or things I want to change about how I work yet, but I suspect this may change as I keep reading. Most of the chapters feel relevant to me, in particular chapter six, which is about constructing a curriculum for yourself, and chapter five, which is about staying motivated to learn and grow in the absence of ideal conditions (which I think is very good general life advice as well).

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Looking at the Thea’s Pantry project

Looking at the workflow page for the Thea’s Pantry documentation, the first thing I noticed was “conventional commits.” I’d seen them in the commit logs of the projects but wasn’t aware of the name of this format or the motivation behind it.
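
As I understand the format, a type (and optional scope) comes before a short description of the change. Here are a couple of made-up examples in that style – these are my own illustrations, not messages from the actual Thea’s Pantry history:

feat(inventory): add expiration date field to guest intake form
fix: correct pound-to-ounce conversion in reports
docs: clarify branching workflow in the contributing guide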

There’s also the system of having one branch per feature being worked on. In my college courses, none of the projects I’ve worked on have really been large enough to need branches, unless the point of the project was learning about branches. The last time I actually had to use branches was back in high school in my vocational classes (I’m pretty lazy, so my personal Git projects tend not to have branches), so hopefully my Git skills come back to me.
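
In case I do need to refresh my memory, the basic one-branch-per-feature flow would look roughly like this (the branch name here is made up):

git checkout -b feature/add-intake-form    # create and switch to a feature branch
# ...make changes and commit as usual...
git push -u origin feature/add-intake-form # publish the branch so a merge request can be opened
git checkout main                          # switch back to the main branch once it’s merged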

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Looking at the LibreFoodPantry project

In the next few months, I’m going to be doing my capstone course for my computer science degree. It’s going to (possibly) involve working on the LibreFoodPantry project, so I decided to look over the website.

The first thing that stood out to me was the Coordinating Committee. Previously I had been picturing this project as essentially just something a handful of people were working on on the side. In reality it has slightly more people directing it than I imagined and also a much more rigorous organization and schedule than I had assumed.

More specifically, I had assumed that because there are three versions of the software, there were three people managing a shop of developers, as the page describes, but there are actually six.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Thinking About Finite State Machines

https://www.dossier-andreas.net/software_architecture/fsm.html

A finite state machine is an abstract model of computation. In the context of software development, it can be used as a way to control and conceptualize the flow of a program.

Finite state machines consist of a number of states, each representing a particular behavior of the system. These states have transitions between them, defining when the system may change from one state to another. Transitions can be thought of as connecting the states like a graph (specifically a directed graph, with two states that can change to each other represented by two separate transitions). The graph is not necessarily complete or acyclic. Each transition also defines what actions must be taken to change the system from one state to another.

I like most of the articles from this blog, but I found this one a little lacking. In particular, the “Examples” and “When should you use it?” sections really don’t say anything meaningful.

The examples section gives two examples without elaborating: a coffee machine and “Games” (just in general, I guess). Here’s how I would model a coffee machine (knowing nothing about how they actually work and just going off how I’m used to them behaving):

The coffee machine begins off. In this state, it can only be turned on, which starts it in the “awaiting input” state. From there, you can press a button to have it actually make the coffee. For the sake of simplicity, giving it water and coffee grounds is handled by a human, and heating the water is rolled into the “dispensing water” step (if it were its own state, I would place it in between “dispensing water” and “awaiting input”, connected to both of them and also to “off”). A sensor would detect whether there is actually enough water to make coffee and move to either the “dispensing water” or “error” state accordingly. The “dispensing water” state just pours the coffee and either moves to the “awaiting input” state when finished or the “error” state if it runs into some kind of problem. The error state simply displays a message and then returns to the awaiting input state. At any point, the machine can be turned off, but once turned on it can only start at the awaiting input state.

Note that this is not really helpful at all for building an actual coffee machine. It is, however, helpful for simulating a coffee machine programmatically, and a similar thought process can be used to break down almost any kind of behavior.
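
To make that concrete, here is a rough sketch of what such a simulation might look like. This is just my own illustration in TypeScript, with the water-level check simplified into two separate button events; the state and event names are mine:

// States and events from the description above.
type State = "off" | "awaitingInput" | "dispensingWater" | "error";
type Event = "powerOn" | "powerOff" | "brewWithWater" | "brewNoWater" | "finished" | "fault" | "errorShown";

// Transition table: for each state, which events apply and where they lead.
// Events not listed for a state are simply ignored.
const transitions: Record<State, Partial<Record<Event, State>>> = {
  off: { powerOn: "awaitingInput" },
  awaitingInput: { brewWithWater: "dispensingWater", brewNoWater: "error", powerOff: "off" },
  dispensingWater: { finished: "awaitingInput", fault: "error", powerOff: "off" },
  error: { errorShown: "awaitingInput", powerOff: "off" },
};

function step(state: State, event: Event): State {
  return transitions[state][event] ?? state;
}

// Example run: turn on, brew successfully, then turn off.
let state: State = "off";
for (const event of ["powerOn", "brewWithWater", "finished", "powerOff"] as Event[]) {
  state = step(state, event);
  console.log(`${event} -> ${state}`);
}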

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Exploring Pipe-And-Filter Architecture

https://www.dossier-andreas.net/software_architecture/pipe_and_filter.html

The Pipe-And-Filter architecture is conceptually very simple. It essentially consists of breaking down one operation into a sequence of smaller operations, in which the input of each is the output of the previous. These operations are called “filters” and the connectors linking them are called “pipes”. Sometimes the terms “pump” and “sink” are used to refer to the initial input and final output, respectively. I think making up those last two terms is a little excessive, but overall I like the metaphor – it makes me think of an actual physical machine, which is similar to how I prefer to think about computing in general.

The most well-known example of this pattern is seen in Unix and Unix-like operating systems, which are also the most obvious example of this pattern’s utility. There is no need for any program to have word counting functionality, because its output can be piped into wc, a program that only does that. Similar functionality for pattern matching is provided by grep. With this ecosystem of programs in place, a skilled user of a Unix shell has a great deal of functionality available to them through composing these programs in different ways, as opposed to creating new programs from scratch that do slightly different things than what the programs on their machine already do. Another common example is compilers, which also function in a similar way in order to streamline and simplify the process of translating between languages. The OpenGL rendering pipeline has similar motivations, only for processing graphics instead of programs.
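
As a small, made-up illustration of that composability, counting how many lines of a log file mention an error takes one pipeline and zero new programs (the filename here is hypothetical):

grep -i error server.log | wc -l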

One drawback of a pipe-and-filter system is that it has the potential to introduce too much overhead. Being able to pipe something into wc rather than counting words yourself is a more flexible solution, but it does involve running a separate program. At small scales (i.e. most use cases) this isn’t an issue, but if your data is large enough you may need to abandon this approach.

Before looking into this, I wasn’t really aware of the pipe-and-filter architecture as a distinct pattern. I was aware of how the Unix ecosystem worked, but I thought the practice of piping together small programs in sophisticated ways was just a quirk of that operating system. I didn’t connect the dots that it was also the same basic concept being used in graphics pipelines, even though I was aware of them. I also have always regarded compilers as a little magical, and seeing that their workflow can be decomposed like this makes them seem a little more approachable.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Solo Project Management Using Government-Grade Advice

https://derisking-guide.18f.gov/state-field-guide/basic-principles/

Something I struggle with quite a bit is directing my personal software projects. Staying committed, composing large pieces of code with each other correctly, and even figuring out what I want to do in the first place are all things that I just sort of play by ear, with limited success. In an attempt to gain some insight, I found this article published by a technology and design consultancy for the US government, directed at non-technical project managers. The gist of it is that these managers must understand six concepts: user-centered design, Agile software development, DevOps, building with loosely coupled parts, modular contracting, and product ownership. I won’t go into all of these, since some are still more technical than others, but I want to highlight a few.

One of my favorite points in the article is something that I’ve believed for a long time, which is that all development should be centered on the needs of the end user, rather than stakeholders. Project risk is reduced by ensuring the software is solving actual problems for actual people, and the problems are identified via research tactics like interviews and testing for usability. It would be kind of silly to interview myself, but I think this is a good mindset to have. It kind of sounds meaningless when stated so directly, but if you want a product you have to focus on creating the product, rather than daydreaming about every possible feature it could have.

Another point I liked was the discussion of Agile software development. Without getting too far into the weeds, the basic problem it seeks to solve is that detailed, long-term plans for major custom software projects generally become incorrect as the project proceeds and new technical considerations are discovered. To combat this, agile developers plan in broad strokes, filling in details only as necessary, and the primary metric of progress is how quickly documented and tested functionality is delivered. In a way, it reminds me of an analogy I heard once to describe Turing machines – an artist at their desk, first drawing a broad outline and then focusing on specific sections and filling in details (it may or may not be obvious how this is related to Turing machines, but that’s not relevant here).

I found two other somewhat related points useful as well, both of which deal with modularity. The first is the idea of “building with loosely coupled parts”, which essentially boils down to the idea that if one agile team is in over their head, they should split the work into two or more distinct products, each with its own dedicated team. Modular contracting is just applying this concept before even beginning a project. Together, I think this is a helpful way of possibly connecting all the small, fleeting app ideas I have – rather than one unfocused monolith, I could work on a small ecosystem with a shared API that I add and remove things from as needed.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Comparing Vue and React

https://www.monterail.com/blog/vue-vs-react-2021

Web development has never been my main area of interest. It’s not the environment I’m the most comfortable in, especially when dealing with projects involving many different frameworks and libraries. In order to prepare myself to contribute something useful to the LibreFoodPantry project, I basically picked a framework out of a hat from what I roughly remember in class. As a result, I’m reading up on Vue.js.

Vue (pronounced “view,” which I am still struggling to pronounce correctly when reading) is a frontend framework, meaning it is used as a way to streamline creating the user interface of a website (as opposed to the backend, which refers to work done on the server’s end, or the API layer, which connects the frontend and the backend).

Vue was created by a solo developer named Evan You and is currently maintained by him and a small team. It allows users to define the site’s interface either in JSX, as one would with React.js, or in HTML templates. It is well-documented and considered easy to learn by many developers. It is suitable for small projects but can also scale to larger ones. It is progressive, meaning that features can be introduced to a project incrementally as they are required.
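
To give a rough idea of what the HTML-template style looks like, here is a tiny, made-up counter component using Vue 3. This is only a sketch, and it assumes a build of Vue that includes the template compiler (needed for string templates):

import { createApp, ref } from "vue";

const Counter = {
  setup() {
    // a reactive value; changing it automatically updates the template
    const count = ref(0);
    return { count };
  },
  // the interface is written as an HTML-style template rather than JSX
  template: `<button @click="count++">Clicked {{ count }} times</button>`,
};

createApp(Counter).mount("#app");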

As someone without a lot of intuition around tech stacks, I figured it would be a good idea to find an article that compared Vue to another framework, rather than just reading about it on its own. The article linked above compares it to React, an older and more well-established frontend framework created and maintained by Facebook. React is not as thoroughly documented, but it has a much larger community of developers. Unlike Vue, you can only use JSX with it. It is also primarily meant for large projects, and developers of smaller projects using React might find themselves with a lot of unnecessary boilerplate.

Both Vue and React use the concept of a Virtual DOM, meaning that changes are first applied to an in-memory representation of the document and only the parts that actually changed are updated on the page, rather than re-rendering the entire thing. They both have large libraries of components that can be added for additional functionality (Vue’s is smaller but has more officially maintained components). Vue and React also show similar performance.

Vue and React both have pros and cons. My instinct is to avoid boilerplate as much as possible, and I can’t imagine I will be working on lots of very large web apps, so I think I would generally prefer Vue over React if it were up to me to choose, especially since you can pick and choose how much of its complexity you want to worry about.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Thinking about software testing

blog.ndepend.com/10-reasons-why-you-should-write-tests/

For as long as I have been aware of it, I have been skeptical of the value of software testing. It has always struck me as unnecessary busywork, mostly because that is how writing tests for my classes feels (granted, that’s generally how writing code for any class feels in my experience). Either the program works or it doesn’t, right? Why bother writing a test when you could use that time to tighten the ratchet on the code you’d otherwise be testing, or even move on to something else?

In an attempt to broaden my horizons, I sought out some arguments in favor of testing. One argument, from Avaneesh Dubey (which he discusses in the article above), is probably the one I personally find the most compelling. He argues that the hallmark of a poorly constructed test case is one that is too narrow in its scope or the functionality it covers. Proper tests, he argues, must reflect “usage patterns and flows.”

Jumping off of that, I would articulate it slightly differently. I think that proper testing methodology necessarily forces developers to be aware of the boundaries they want to encapsulate things behind. For example, it would be kind of absurd to write tests just to make sure a factory class works correctly, because whether or not you’re even using the factory pattern is almost certainly too technical a detail for non-technical product managers to care about. My understanding of testing is that it’s primarily a way for this kind of person to make judgments about the development process independently of the actual developers.

When you write software tests, you are, or at least should be, asking yourself questions about the high-level flow of the program – what it’s actually doing in physical reality rather than very tiny implementation details – and that is ultimately where your head should be at all times during the development process, in my opinion.
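
To illustrate, here is the kind of flow-level test I have in mind, written as a sketch in TypeScript with a Jest-style API and a made-up cart module (none of this comes from the article linked above):

// Exercises what a user actually does, rather than how the code is built.
import { createCart, checkout } from "./cart"; // hypothetical module

test("adding two items and checking out produces the right total", () => {
  const cart = createCart();
  cart.add({ name: "coffee", price: 4 });
  cart.add({ name: "filter", price: 1 });
  expect(checkout(cart).total).toBe(5);
});

// By contrast, a test asserting that createCart() went through a particular
// factory class would be pinned to an implementation detail nobody outside the code cares about.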

Though test writing is obviously an important skill for actual work in the industry, I previously had no intention of ever writing tests for my personal projects. Now I’m not sure I’m entirely sold on it for personal use, and I’m still a little skeptical about the efficacy of test-driven development in single-person projects, but I think it might be of some use to me. In particular, I hope it can help me make some sense of the WebGL code I’m planning to write for a project in the near future, which is certain to contain many fine technical details that could quickly become a headache if not managed properly.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Notes on “Notes on the Errors of TeX”

Having just read a short keynote address* by Donald Knuth from 1989 about the development of his typesetting system, TeX, I’m struck by how little the core methodology of computer programming has changed in the last thirty years.

Students and new programmers can struggle with the difference in scale between real-world applications and textbook examples. Knuth feels that his typesetting system, TeX, is a useful illustration to beginners because it is a “medium-size” project: small enough for one person to understand given a reasonable amount of effort, but large enough to draw meaningful architecture and design lessons from.

Here, he outlines a number of lessons he took from the process of writing TeX. There are nine, and they are only discussed briefly here (this is a summary of a much larger paper he wrote, which is simply called “The Errors of TeX”). I think they’re generally all very good, but for the sake of brevity I’m only going to focus on one of them.

Knuth’s first lesson here, which I think is the most important, is that it is not enough to merely specify a design. The implementation isn’t just physically necessary to create the product; the actual process of implementing it is where the most important information about the design comes from. This may seem so obvious that it’s not worth saying, but I think it’s important and easy to lose sight of. Looking back, I think that personally, my greatest stumbling block in programming has been what I would call overdesign. I would sit down and construct a conceptually perfect model in my head, and then it would fall apart on me as I attempted to implement it, without my really understanding why.

At the time, anyway. Looking back now, I have a pretty clear idea, and Knuth expresses a similar idea here. The problem is that I viewed implementation as an externality to the design process rather than as one of the most significant components of it. Fundamentally, the purpose of thinking about software architecture is to minimize difficulty in implementation, and so actually writing the code is the most important source of data in the process. For example, it’s one thing to know, vaguely, that Microsoft’s DirectSound library can provide audio playback functionality to an application. But knowing specifically that it needs a ring buffer and understanding all the small, tedious details in setting it up might push you in the direction of architecting your program a specific way, or using a different audio library if you’d rather not. Granted, I guess it’s possible that you might come to the same conclusions simply looking at the documentation, but my mind doesn’t work that way, and I suspect that experience is generally a better teacher.

*Knuth, Donald E. (1989). “Notes on the Errors of TeX” (PDF).

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.