Author Archives: ctshadis

First Sprint Retrospective

Christian Shadis

2/26/2022

This past week, my development team in my Software Development capstone completed our first Scrum sprint of the semester and recently held our Sprint Retrospective meeting. This was my first exposure to an Agile development environment, so I have many thoughts about the experience. In this post I will review and critique my own contributions to the project, reflect on the team dynamics, and suggest strategies to improve my work in the next sprint.

The sprint got off to a rocky start since none of the six of us had worked in a Scrum environment or with GitLab issues and boards, but we settled in quickly enough to complete nearly all the work we set out to do for the three-week period. Our greatest difficulty as a team was the process of branching and merging: we had multiple instances of changes being made on main instead of on a feature branch, and we had a surplus of merge conflicts to deal with because outdated code was pushed to the main branch. These difficulties can be attributed to lack of experience, but as a team we overcame them quickly, and by the end of the sprint we had a much smoother process in place for merging code into the main branch.
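As a rough illustration of the flow we ended up following (the branch name and commit message below are hypothetical, not taken from our actual issues):

```bash
# Start from an up-to-date copy of main
git checkout main
git pull origin main

# Do all work for an issue on its own feature branch, never directly on main
git checkout -b issue-12-short-description
git add .
git commit -m "Add first draft of the feature for issue 12"
git push origin issue-12-short-description

# Before opening a merge request, pull the latest main into the branch so
# any conflicts are resolved locally rather than in the merge request
git pull origin main
```

Keeping every change on a branch like this is what finally cut down on our merge conflicts.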

As a team we communicated efficiently and were able to help each other with issues when needed. We divided work evenly such that no team member worked significantly more or less than any other member. We were also effective in the work itself, as nearly every issue agreed upon during the Sprint Planning meeting was completed.

The team has agreed to implement numerous changes in our workflow for the upcoming sprint. We have restructured our merging process to make updating the main branch as seamless as possible: instead of approving changes to the code during class time, we will transition to having two team members approve any merge request before it is merged. This will help prevent logjams of merge requests and will allow us to better utilize our in-class meetings. We also agreed to improve the efficiency of our standup meetings. In the previous sprint we often got sidetracked during standups, talking about code that had been written or discussing implementation details; we have resolved to keep the standup meeting as concise and focused as possible and hold all other discussions afterward.

I contributed to the team by creating the Documentation repository in the Foodkeeper group and populating it with the necessary files (https://gitlab.com/LibreFoodPantry/common-services/foodkeeper/food-keeper-backend/-/issues/1), by creating the index file in the API repository (https://gitlab.com/LibreFoodPantry/common-services/foodkeeper/food-keeper-backend/-/issues/24), and by creating the schema for the Category of a data point (https://gitlab.com/LibreFoodPantry/common-services/foodkeeper/food-keeper-backend/-/issues/6). I also wrote several files of code for Issue 17, which wound up not being implemented (see below for further explanation).

There are several improvements I can make as an individual as well. I noticed my shortcomings in keeping the issue boards organized: I often forgot to move issues between labels, and at one point I unassigned myself from an issue I had already completed but not yet merged (Issue 17, mentioned above), wasting time when another team member picked up and re-completed the issue. Keeping my own issues organized, properly labeled, and assigned is important to the overall function of the team.

I would consider the sprint successful overall. We had some rough patches in the beginning, but our team excelled at addressing problems as they came up, and we are producing work that all of us can be proud of. I am looking forward to the next sprint and to further improving my ability to work in an Agile development team.

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Introduction to Apprenticeship Patterns

Christian Shadis

As a young developer soon graduating from university, it is essential that I remain dissatisfied with the skills and knowledge I have already acquired. Dave Hoover and Adewale Oshineye explore this process in Apprenticeship Patterns, guiding those relatively new to the field toward becoming ‘masters’ of their craft.

In the introduction to the book and each subsequent chapter’s introduction, the authors discuss several major themes such as continuous learning, humility, and dedication. Independent study has been the largest component of my development education so far, and upon completion of my degree I fully intend to continue teaching myself new languages and technologies. Practicing humility is essential to keep an open mind to the vast amount of information that I have yet to discover. No matter how many years I spend as a software craftsman, there will always be an overwhelming, ever-growing amount of information to learn. Remaining dedicated to learning this new information regardless of my experience level will be vital to continue to grow as a craftsman.

Hoover and Oshineye particularly grabbed my attention when they discussed their definitions of ‘Software Craftsmanship’ and ‘Software Apprenticeship’. They are careful not to refer to programmers as “developers” or “engineers”, favoring instead a more inclusive term that reflects the blurred edges between the disciplines. I found this distinction both surprising and refreshing; I had been concerned about the prospect of finding a job as an engineer given my software development degree, but the book’s choice of terminology implies that any programmer can venture into development or engineering, regardless of background, if they are willing to learn the technologies and techniques needed.

The chapters I am most interested in exploring are Emptying the Cup and Walking the Long Road. I expect to gain valuable insight from these chapters specifically because of my perfectionistic tendencies and my impatience when it comes to learning new things. Setting aside what I already know in favor of learning new technologies, and embracing my ignorance rather than expecting instant success (another main theme in the book), will ensure I never stop learning.

As a young developer wholly unsure of his place in the software world, I will use the strategies provided in this book to make myself a better learner, teammate, and craftsman to prepare for my imminent career in software.

Hoover, D. H., & Oshineye, A. (2010). Apprenticeship patterns: Guidance for the aspiring software craftsman. O’Reilly.

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Getting Familiar with Thea’s Food Pantry

Christian Shadis

Worcester State University’s food pantry, Thea’s Pantry, is a member of the LibreFoodPantry community, a Humanitarian Free and Open-Source Software (HFOSS) project. To prepare to contribute to this project, I familiarized myself with the project’s documentation. Of particular note was the project’s workflow, which resolved several questions I previously had about large-scale projects and version control. I had wondered whether many small commits would overload the repository, or whether professional developers were expected to commit much less frequently than I was used to. In the Thea’s Pantry workflow, small inconsequential commits are ‘squashed’ when a feature is merged into the main branch of the repository. I had also questioned how semantic versioning could be kept consistent, but Conventional Commits and the automated semantic-release tool [1] addressed those concerns. Examining the documentation offered me some extra insight into how smaller projects become larger, distributed ones.
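As a loose illustration of what those Conventional Commit messages look like (the scopes and descriptions below are hypothetical, not taken from the Thea’s Pantry repositories), with the default configuration the commit type determines how semantic-release bumps the version:

```bash
# type(scope): description — a "fix" commit triggers a patch release
git commit -m "fix(inventory): return an error when the item id does not exist"

# a "feat" commit triggers a minor release
git commit -m "feat(inventory): add endpoint for listing recent donations"

# a "!" after the type (or a BREAKING CHANGE footer) triggers a major release
git commit -m "feat(api)!: rename the weight field to weightInOunces"
```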

[1] https://github.com/semantic-release/semantic-release

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Getting Started with LibreFoodPantry

Christian Shadis

I will be spending my final semester at Worcester State University contributing to the LibreFoodPantry Humanitarian Free and Open-Source Software (HFOSS) project [1]. While reviewing the mission statement, values, and other background sections of their website, I was directed to a list of Heidi Ellis’ sixteen ‘FOSSisms’ [2], advice aimed primarily at developers new to open-source projects. There was a recurrent theme among several of the FOSSisms: a developer should contribute as much as they can, regardless of the size of the fix they provide or how polished the code is. Version control makes mistakes easy to fix, and creating new branches allows code to be edited without affecting the main branch. I have been a perfectionist throughout my college career, so I found that perspective both surprising and helpful. I plan to apply these (and the rest of the) FOSSisms in the coming months.

[1] https://librefoodpantry.org/
[2] https://opensource.com/education/14/6/16-foss-principles-for-educators

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Getting Started with JavaScript and the DOM

Christian Shadis

As I transition into the final semester of my undergraduate degree, I plan to learn JavaScript. In my Software Construction, Design, and Architecture class, we briefly explored and edited some frontend code in Node.js and Vue.js, and using my prior knowledge of Java, I was able to understand how most of the code functioned. As JavaScript continues to rise in popularity, it would be a valuable language to add to my skillset. One of the fundamental aspects of learning JavaScript, it seems, is understanding the Document Object Model (DOM). Having used Jon Duckett’s HTML & CSS: Design and Build Web Sites to supplement my knowledge of web design, I decided to consult the fifth chapter of his JavaScript & jQuery: Interactive Front-End Web Development to learn more about the DOM.

Duckett first describes the DOM as a set of rules, separate from the HTML and CSS of the website, which “specifies how browsers should create a model of an HTML page and how JavaScript can access and update the contents of a web page while it is in the browser window” (Duckett 184). In other words, the DOM acts analogously to an API in that it enables real-time communication between JavaScript code and the HTML page. Duckett proceeds to describe the DOM tree, the benefits of caching DOM queries, and how to traverse the tree. He also explains how to access, update, add, and delete HTML content from the page using the DOM. The chapter includes a bit of bonus information as well, such as preventing cross-site scripting attacks and viewing the DOM in each of the major web browsers.
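To make that concrete, here is a small sketch (my own, not taken from the book) of the kind of DOM access and updating Duckett covers; the element ids, classes, and content are hypothetical:

```javascript
// Access: select an existing element by its id and update its content
var heading = document.getElementById('pageTitle');
heading.textContent = 'Updated with JavaScript';

// Cache a DOM query: store the collection once instead of re-querying the DOM
var items = document.getElementsByTagName('li');
for (var i = 0; i < items.length; i++) {
  items[i].className = 'highlight';
}

// Add: create a new node, give it content, and attach it to the tree
var newItem = document.createElement('li');
newItem.textContent = 'a brand new list item';
document.querySelector('ul').appendChild(newItem);

// Delete: remove a node by asking its parent to remove it
var oldItem = document.querySelector('li.obsolete');
if (oldItem) {
  oldItem.parentNode.removeChild(oldItem);
}
```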

Reading this chapter was useful in contextualizing how the different parts of a webpage interact with each other. In addition to learning the fundamentals of the DOM, I also gained experience reading JavaScript code, which is a great way to learn how a language works. What stood out to me most was the parallel between DOM nodes and the LinkedList data structure. Having coded an implementation of that data structure in a previous course, I found it intuitive to see how traversal of the DOM tree works.

I plan to continue learning JavaScript and to read the remainder of the book. Web design is becoming more ubiquitous by the day, and thus a more valuable tool for developers to have. I would highly recommend this book and others by Jon Duckett to developers: the design is pleasing, he provides sample code, and his explanations are clear and accessible.

Duckett, J. (2014). Document Object Model. In JavaScript & jQuery: Interactive front-end web development (pp. 183–242). John Wiley & Sons.

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

The Difference Between Frontend and Backend

Christian Shadis

Over the past month, my Software Construction, Design, and Architecture course has shifted gears into a sort of crash course in full-stack web development, as I mentioned in the last blog post. We have focused primarily on the backend, creating REST API endpoints in the api.yaml file and implementing those endpoints in JavaScript. In the final two weeks of class, we are transitioning to implementing a frontend using Vue.js. In the process of switching contexts, my understanding of the difference between a frontend and a backend became a bit muddled.

In Frontend vs Backend, Maximilian Schwarzmüller starts from the common refrain that every web app is built from a frontend and a backend, then describes the differences between them. In short, the frontend refers to the content displayed in the browser, while the backend refers to the portion of the program that runs on a server. He gives the helpful example of Amazon: it consists of a frontend (the catalog, shopping cart, and other visual elements of the webpage) and a backend (all data and database management). He is careful to specify that the difference does not lie only in where data is stored, because a backend has non-database responsibilities as well, such as file system interaction and input validation. He also lists several skills and languages a programmer may need to build the frontend (HTML, CSS, JavaScript, Vue.js / React.js / Angular) and the backend (Node.js, PHP, Python).

One area of confusion for me was exactly how the frontend and backend communicate with each other: via HTTP requests. We had discussed and implemented HTTP requests in class, but I thought the communication occurring was between the backend and the database. In fact, the user performs some action on the frontend, which results in an HTTP request being sent to the backend (and, by extension, the database).
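A minimal sketch of that interaction from the frontend’s side (the endpoint URL, element ids, and field names here are hypothetical):

```javascript
// Runs in the browser: a button click triggers an HTTP GET to the backend,
// which in turn queries the database and responds with JSON.
async function loadItems() {
  const response = await fetch('http://localhost:3000/items');
  if (!response.ok) {
    throw new Error('Request failed with status ' + response.status);
  }
  const items = await response.json();

  // Render the backend's data into the page
  const list = document.querySelector('#item-list');
  items.forEach((item) => {
    const li = document.createElement('li');
    li.textContent = item.name;
    list.appendChild(li);
  });
}

document.querySelector('#load-button').addEventListener('click', loadItems);
```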

Knowing how to code both the frontend and backend of a web app is vital to become a web developer, but it is also important to be able to differentiate between them, and to understand why they are considered separate components of an app. I plan to work on developing the full stack of a web application from scratch, and the information from this article will enable me to begin development with a stronger understanding of how to build the frontend and backend properly to communicate with each other.

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Fundamentals of APIs

Christian Shadis

Over the past month, my Software Construction, Design, and Architecture course has shifted gears into a sort of crash course in full-stack web development. Our initial focus was on the backend of web apps, and specifically on APIs. In class I learned how to edit API endpoints for different HTTP methods such as GET, POST, and PUT. I understood how to code the API and the differences between the HTTP methods, but I still did not understand the very basics of what an API is beyond its acronym. For this blog post, I set out to learn the fundamental nature of APIs to supplement my technical understanding. While it is helpful to know how to code an API, that knowledge is of little use without an understanding of what an API is or when to use one.

An API (Application Programming Interface) is a technology developed so that applications can communicate despite being built differently. For example, a person using Google Chrome to view the available items on a store website requires the web browser and the store’s server to be able to communicate. The store’s API allows Chrome to send a request for information to the server, and allows the server to return the requested data (or an error message) in a particular format such as JSON. The API facilitates the exchange of information between a client application and an unrelated server. This example illustrates the necessity of APIs: without them, communication between differently built systems would be far more difficult, and web browsing and web-based phone applications as we know them would not exist.
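To ground the idea, here is a minimal sketch (my own, not from the article) of what the server side of that exchange could look like using Node.js and Express; the routes and data are hypothetical:

```javascript
const express = require('express');
const app = express();

// Hypothetical data that would normally live in a database
const items = [
  { id: 1, name: 'canned beans', inStock: true },
  { id: 2, name: 'rice', inStock: false },
];

// GET /items — any client (a browser, a phone app) receives the list as JSON
app.get('/items', (req, res) => {
  res.json(items);
});

// GET /items/:id — return one item, or an error message if it does not exist
app.get('/items/:id', (req, res) => {
  const item = items.find((i) => i.id === Number(req.params.id));
  if (!item) {
    return res.status(404).json({ error: 'Item not found' });
  }
  res.json(item);
});

app.listen(3000);
```

The client never needs to know how the server stores its data; it only needs the agreed-upon request and response format.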

In What is an API and How Does It Work?, Thomas Davis first describes the omnipresence of APIs in everyday life, even for non-programmers, and contrasts that ubiquity with the fact that few people understand how an API works, let alone what it does. He describes the API as an intermediary between two parties that do not speak each other’s language and provides everyday examples of situations in which one would want two computers to communicate. Once he has explained the basic concept of an API, he describes different types of APIs, such as REST, SOAP, and XML-RPC APIs, comparing and contrasting each.

Knowing how to code APIs is an important aspect of backend development, but I feel much more confident in my ability to code an API now that I more fully understand the concept. I expect to build backends for web applications in the future, so this knowledge should be helpful in my development career.

References:

https://towardsdatascience.com/what-is-an-api-and-how-does-it-work-1dccd7a8219e

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Creating Docker Images With DockerFiles

Christian Shadis

Docker, a main focus in my Software Construction, Design, and Architecture course, is an essential tool for the modern developer: it allows the application being built to be completely separated from the developer’s local machine. Essentially, all of the application’s dependencies and configuration are packaged with the application, independent of the machine the application runs on. A Docker container is built from an image, which is basically the ‘blueprint’ of the container. Images are easily reusable, and many can be found on Docker Hub. Images can also be created from scratch using Dockerfiles, which I previously did not understand. To improve my Docker skills and gain the ability to create my own Docker images, I chose to research and write about the structure and use of Dockerfiles.

There are situations in which a developer would want to create their own Docker image: maybe they need a specific version of the Ubuntu operating system, or specific modifications must be made to that operating system before the application can run, or many slightly modified versions of the same container must be deployed for the application to run. The developer can address these scenarios by creating a Dockerfile, which specifies all the software to be included in the image. Any time the image is needed, a container can be created from it with a single command, removing the need to install dependencies or change the machine’s configuration repeatedly.

In Docker Basics: How to Use Dockerfiles, Jack Wallen first describes the necessity and use cases of Dockerfiles. He then supplies a list of keywords that can be used in a Dockerfile, such as CMD, which sets the command that runs when a container starts from the image, ENV, which sets environment variables, and EXPOSE, which declares the networking ports the container listens on. From there, Wallen demonstrates the process of creating a Dockerfile from scratch in a text editor. Finally, he outlines the process for building a usable image from the Dockerfile. The article concludes with a short section containing a second worked example, this time building a Dockerfile for a different operating system.
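The following is not Wallen’s example, just a minimal Dockerfile sketch for a hypothetical Node.js application that uses a few of the keywords mentioned above:

```dockerfile
# Start from an existing image that already contains Node.js
FROM node:16

# ENV sets an environment variable inside the image
ENV NODE_ENV=production

# Copy the application into the image and install its dependencies
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .

# EXPOSE documents the port the application listens on
EXPOSE 3000

# CMD sets the default command that runs when a container starts from this image
CMD ["node", "server.js"]
```

Building and running it is then a matter of running “docker build -t my-app .” followed by “docker run -p 3000:3000 my-app”.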

Knowing how to create containers using pre-built images from Docker Hub is an important first step for a developer getting started with Docker, but the true power of Docker images is realized when the developer can create a new image with the exact configuration needed. Creating custom images is a skill I expect to use often in my development career, since a large portion of applications in development today use Docker containers.

Reference:

https://thenewstack.io/docker-basics-how-to-use-dockerfiles/

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

REST API Security

Christian Shadis

For the past couple of weeks, my Software Construction, Design, and Architecture course has been focusing on the anatomy of simple REST APIs. When we learned how to create endpoints to retrieve data, it was as simple as extracting the data from the database on the backend, which seemed too basic to me. I was sure there was still plenty of unexplored detail regarding security, so I was curious to see how security measures can be specified in a REST API like the ones we have been working with.

In his 2019 article REST API Security Design, Chris Wood discusses Security Scheme Objects and their role in a REST API definition, listing the supported security mechanisms (Basic Authentication, API Key, JWT Bearer, and OAuth 2.0). He begins by defining what a Security Scheme Object is: a Component object (like Schemas or Responses) which “[describes] the security requirements for a given operation.” There is not a separate object type for each of the mechanisms listed above; rather, a single Security Scheme Object can represent any of the four. The object is defined in the top-level index.yaml file under the components section, the desired mechanism is specified, and any additional arguments specific to that mechanism are passed. Once it is defined, the Security Scheme Object can be applied to individual endpoints or operations. For example, for some path /users/, we define a get operation, and underneath the parameters and responses sections, a security section can be added containing an array of Security Scheme Objects to apply. If we define a BasicAuth object and assign it to the get /users/ operation, other developers know that the operation should require basic authentication.
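As a rough sketch of what that could look like in the OpenAPI definition (the path, summary, and response description here are hypothetical):

```yaml
components:
  securitySchemes:
    BasicAuth:        # a single Security Scheme Object representing Basic Authentication
      type: http
      scheme: basic

paths:
  /users/:
    get:
      summary: Returns a list of users.
      security:
        - BasicAuth: []   # this operation is documented as requiring basic authentication
      responses:
        '200':
          description: A JSON array of users
```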

My main takeaway from the article was that my perception of API security was flawed. I had assumed security measures were implemented directly in the API itself; the article instead explains that security measures in an API definition serve primarily as a guide, a statement of security requirements for other developers to uphold. Authentication itself is not performed inside our REST API by implementing one of these Security Scheme Objects. Rather, the API designer specifies that certain operations require authentication by declaring the scheme in the API definition.

While security schemes in an API definition may not do as much on their own as I believed, the article asserts that they are a vital part of API design. As I continue my career as a developer, I plan to develop all my applications in the most secure way possible. Since API design is such a fundamental aspect of web application development, I am glad to have gained some exposure to how security requirements are expressed.

Reference: https://blog.stoplight.io/rest-api-security

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Container Orchestration

Christian Shadis               

Docker has been the focus of my Software Construction, Design, and Architecture class for the past couple of weeks. In a software development environment, it is paramount that applications can run on different operating systems and in different runtime environments. When applications rely on several dependencies, it becomes cumbersome to install all the needed libraries on every machine the application runs on. This is where Docker comes in handy, allowing the programmer to create a self-sufficient container holding all parts of the application along with all of its dependencies. Anybody can then run the application from its container without having to install any dependencies on their local computer.

Applications, however, are often designed as microservices, with each microservice in its own container. A software company may have tens, hundreds, or thousands of containers that need to be deployed and monitored at once, and it is plain to see how this becomes a scaling issue. Container orchestration emerged to address the scalability of containerized applications. Container orchestrators, like Kubernetes or Docker Swarm, automate the repetitive work of deploying and maintaining containers: configuration, scheduling, provisioning, deployment, resource allocation, scaling, load balancing, monitoring, and securing the interactions between containers. This is why I chose to read the article “What is Container Orchestration?” from Red Hat.

The article goes into detail on what container orchestration is used for and why it is necessary, and lists its major functions. It also describes how container orchestration tools like Kubernetes are configured, along with a basic overview of the anatomy and behavior of a Kubernetes cluster.
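To make the idea concrete, here is a minimal sketch (not taken from the article) of a Kubernetes Deployment manifest; the names and image are hypothetical. The manifest declares the desired state, and the cluster does the repetitive work of scheduling, restarting, and scaling containers to match it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-backend            # hypothetical name
spec:
  replicas: 3                        # keep three copies of the container running
  selector:
    matchLabels:
      app: inventory-backend
  template:
    metadata:
      labels:
        app: inventory-backend
    spec:
      containers:
        - name: inventory-backend
          image: example/inventory-backend:1.0.0   # hypothetical image
          ports:
            - containerPort: 3000
```

Applying it with “kubectl apply -f deployment.yaml” hands the rest of the work over to the cluster.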

Using Docker to containerize applications is a pivotal skill for developers to have. In a world where so much computing is moving to the cloud, however, it is also important to be able to use Docker Swarm or Kubernetes, because a large portion of the applications a developer works on will be deployed on the cloud in some way. In those situations, knowledge of standalone Docker containers only goes so far; the developer should be able to leverage a Kubernetes cluster or a Docker Swarm to work with large, containerized, cloud-based applications.

Before reading this article and writing this entry, I had no exposure to container orchestration, though I had wondered about the scalability of the small Docker containers we have been working with in class. I learned the basics of the subject and gathered a list of further references for reading about container orchestration in an enterprise setting.

https://www.redhat.com/en/topics/containers/what-is-container-orchestration

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.