Author Archives: robiciampa

Thea’s Pantry

While reading the documentation for Thea’s Pantry, I thought the User Stories section was really interesting. I think it is a valuable resource to have on hand while working on Thea’s Pantry since it gives a very clear description of how this software is intended to be used. This gives me a better idea of how the program should flow during use, and it especially helps me understand the difference between the responsibilities of a staff member and those of an administrator at the food pantry. I also thought it was interesting that only one guest was allowed into the pantry at a time; I did not know this previously.

From the blog CS@Worcester – Ciampa's Computer Science Blog by robiciampa and used with permission of the author. All other rights reserved by the author.

LibreFoodPantry

Something from LibreFoodPantry’s website that I found interesting was the mission it lists. This page states that LibreFoodPantry’s mission is to “expand a community of students and faculty across multiple institutions who believe software can be used to help society.” I really like that this mission is centered around bettering our community. I think a lot of modern software projects are focused on commercialization and profit, and I don’t like that. Software can be a very powerful tool for any cause; I believe software should be designed to help people. I appreciate the opportunity to work on a project that has a real impact on the community.


Frontend Architecture

My experience with HTML outside of classwork pretty much begins and ends with making forum posts look fancy on 2012-era message boards, and I have even less experience designing frontends with HTML. I wanted to use this blog post to learn more about how to build frontends. Matias Lorenzo’s “Frontend Architecture and Best Practices for Consuming APIs” seemed like a very good start.

Lorenzo notes that issues begin to arise when designing a frontend because that frontend is often dependent on a backend API. This means that whenever the backend or the API undergoes a change, the frontend must also be changed. Lorenzo describes some other issues one might encounter when integrating an API into their frontend. APIs may use lengthy key names that necessitate mass replacement. APIs might structure data in a way that is difficult to fully utilize, or they might include too much data.

Lorenzo proposes a frontend architecture, titled model-controller-serializer, that further isolates the frontend from the backends it depends on so that it remains stable when the backend changes. Below is a diagram of how this architecture fetches information from an API (in this example, products for a storefront):

Controllers in this architecture are the point of contact between the rest of the app and the API. This portion of the app should be able to create any relevant request. Information comes in through the controller, is deserialized by the serializer, then is passed to the model to create instances of that information.

Serializers in this architecture decipher information coming in from the API. If an API includes information that is not relevant to the app, the serializer can remove this information. The serializer can also change the type of a field (e.g., from a string to a boolean). Similarly, if the API uses long or clunky names for fields, the serializer can rename the field before passing it along.

Models in this architecture are representations of information from the API that can be read by the app. These store information as objects in JavaScript (or whatever language is being used in the app) so that they are easy to access and understand.
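The three pieces described above can be sketched in plain JavaScript. This is only a minimal illustration of the pattern, not Lorenzo’s exact implementation; the endpoint URL, field names, and class names below are all made up for the example:

```javascript
// Model: a plain representation of a product that the rest of the app uses.
class Product {
  constructor({ id, name, inStock }) {
    this.id = id;
    this.name = name;
    this.inStock = inStock;
  }
}

// Serializer: deciphers the API's raw payload before it reaches the model.
// Here it shortens lengthy keys, converts a string field to a boolean,
// and drops any fields the app does not need.
const productSerializer = {
  deserialize(raw) {
    return {
      id: raw.product_identifier,               // rename a lengthy key
      name: raw.product_display_name,           // rename a lengthy key
      inStock: raw.availability === "in_stock", // string -> boolean
      // any extra fields in `raw` are simply not copied over
    };
  },
};

// Controller: the only point of contact between the app and the API.
// Assumes the (hypothetical) endpoint returns a JSON array of products.
async function fetchProducts(fetchFn = fetch) {
  const response = await fetchFn("https://api.example.com/products");
  const payload = await response.json();
  return payload.map((raw) => new Product(productSerializer.deserialize(raw)));
}
```

Because only the serializer knows the API’s key names, a backend rename would mean editing one function instead of every component that touches product data.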

I chose this source because I wanted a more in-depth look at frontends. I know what a frontend does, but I would be at a loss for how to design one. The frontend architecture Lorenzo describes seems really useful for keeping the frontend and backend separate, and I will be keeping this reference in mind for future projects.


Docker Swarm

Deploying a large application with Docker often requires a number of Docker containers to be running at the same time and on the same network. Rather than orchestrating this manually, Docker offers an orchestration tool that makes this more manageable: Docker Swarm. I’ve come across the phrase “Docker Swarm” on occasion, but I never had reason to really look into it. It seems like a very useful tool, and I wanted to explore it this week with Gabriel Tanner’s “Definitive Guide to Docker Swarm.”

A Docker Swarm is composed of a number of Docker hosts running in “swarm mode.” Each host acts either as a manager, which manages relationships between nodes, or as a worker, which runs services.

Tanner defines a node as “an instance of the Docker engine participating in the swarm.” A device may run one or more nodes, but it is typical in deployment environments to distribute Docker nodes across multiple devices. Manager nodes take incoming tasks and distribute them to worker nodes. They also maintain the state of the cluster and manage orchestration. A swarm should have multiple manager nodes in order to maintain availability and to avoid downtime in the case of a manager node failure. Worker nodes exist to carry out commands from the manager nodes such as spinning up a container or starting a service. A manager node can also be a worker node and is both by default.
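Setting up a small swarm follows directly from this manager/worker split. As a sketch (the IP address is a placeholder, and `<worker-token>` stands in for the one-time token that `docker swarm init` actually prints):

```shell
# On the machine that should become the first manager node
# (--advertise-addr is the address other nodes will use to reach it):
docker swarm init --advertise-addr 192.168.1.10

# docker swarm init prints a ready-made join command with a token.
# Run that command on each machine that should join as a worker:
docker swarm join --token <worker-token> 192.168.1.10:2377

# Back on the manager, list every node in the swarm and its role:
docker node ls
```

These commands require a running Docker daemon on each machine, so they are shown here only as the general shape of the workflow.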

Services in a Docker Swarm are the definitions of tasks to be carried out by nodes, and defining a service is the primary way a user interacts with a swarm. Creating a service requires the user to specify a container image to be used and a set of commands to be executed.
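As an illustration of defining a service (the service name, image, and port mapping are chosen for the example), a replicated web server might be created like this:

```shell
# Create a service from an image; the swarm schedules the three
# replicas across the available worker nodes:
docker service create \
  --name web \
  --replicas 3 \
  --publish published=8080,target=80 \
  nginx:latest

# See which nodes the replicas ended up on:
docker service ps web
```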

Docker Swarm offers a number of benefits. It contains a built-in load balancer that allows you to dictate how services and containers are distributed between nodes. Swarm is integrated directly into the Docker command-line interface and does not require the installation of any additional software. It is easy to scale and allows you to apply updates incrementally.
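Scaling and incremental updates are each a single command against a running service. The service name `web` and the image tag below are placeholders for this sketch:

```shell
# Scale a running service up to five replicas:
docker service scale web=5

# Roll out a new image version incrementally, replacing one task
# at a time and waiting 10 seconds between replacements:
docker service update \
  --image nginx:1.25 \
  --update-parallelism 1 \
  --update-delay 10s \
  web
```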

I chose this source because I wanted to look more in-depth at some features of Docker I don’t have much experience with. I was not really sure what Docker Swarm was used for, so the context I gained from this article will surely be useful. I did not have space to cover it in this post, but Tanner’s article also details how to set up a Docker Swarm. I will definitely be saving this information for future use.


Docker Bridge Networking

I have worked before with Docker and Docker Compose, but I am a bit shaky when it comes to networking either of them. I know it is possible, and I know it is often a necessary step in deploying applications. However, I have never had luck trying to network containers together. I wanted to spend some time learning more about this.

Docker comes with several networking drivers: bridge, host, overlay, ipvlan, and macvlan. Bridge is the default networking driver used by Docker and is the most popular. Ranvir Singh’s article “Docker Compose Bridge Networking” outlines how to use that bridge networking driver to network containers together using either Docker or Docker Compose.

A network for Docker containers can be created using the following command:

$ docker network create -d bridge my-network

This creates a network called ‘my-network’ which uses the bridge driver. To add containers to this network, one would run commands similar to the following:

$ docker run -dit --name container1 --network my-network ubuntu:latest
$ docker run -dit --name container2 --network my-network ubuntu:latest

These commands would create Ubuntu containers ‘container1’ and ‘container2’, respectively, and add them to the ‘my-network’ network. Singh advises that a network should be dedicated to a single application; this keeps applications secure and isolated from one another.
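Once both containers are on the same user-defined bridge network, Docker’s embedded DNS resolves container names, which can be checked from inside one of the containers (assuming `ping` is installed in the image — the stock Ubuntu image may need the `iputils-ping` package first):

```shell
# Containers on the same user-defined bridge reach each other by name:
docker exec container1 ping -c 2 container2

# Inspect the network to see its driver, subnet, and attached containers:
docker network inspect my-network
```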

Networking with the bridge driver works slightly differently when using Docker Compose. By default, Docker Compose creates a network with the bridge driver and deploys the services defined in the Compose file to that network. This network is named after the directory containing the docker-compose.yml file. This setup does not require ports to be exposed for containers to talk to one another; a container can reach another simply by using its service name as a hostname. Running ‘docker-compose down’ stops the containers and removes the network created from the docker-compose.yml file.

Instead of using the default Docker Compose network, networks can be defined inside the docker-compose.yml file. This definition happens at the top level of the Compose file, similar to how a service is defined. This way, multiple networks can be defined and used by a single Compose file.
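A sketch of such a Compose file (the service names, images, and network names here are invented for illustration) might look like this:

```yaml
version: "3.8"

services:
  web:
    image: nginx:latest
    networks:
      - frontend-net
  api:
    image: my-api:latest        # placeholder image name
    networks:
      - frontend-net
      - backend-net
  db:
    image: postgres:latest
    networks:
      - backend-net

networks:
  frontend-net:
    driver: bridge
  backend-net:
    driver: bridge
```

In this sketch, `web` can reach `api` but not `db`, because those two services share no network.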

I selected this source because it was something I was interested in. I thought networking was exclusive to Docker Compose, and I was not aware that you could create a network for Docker images to communicate over. This does not help my problem of not being able to get Docker Compose services to talk to each other, but it does help to clarify some information I had been missing. I will internalize the information in this article for future work, but I also want to keep looking into this topic.



Microservices vs Monolithic

Microservices seem to be a very popular concept in software design, and it is one I have trouble fully understanding. Because of this, I wanted to spend some time developing my understanding. Chris Richardson’s “Introduction to Microservices” is the first article in a series of seven discussing microservices; it compares microservices architecture to monolithic architecture, and it is where I began looking.

Monolithic architecture typically involves one application at its core built to handle a multitude of tasks. This application may branch out into APIs, adapters, or UIs that allow the application to access objects outside of its scope. This application, along with its more modular pieces, is packaged and deployed as a single monolith.

This approach to software design has some inherent problems. Applications tend to grow in size, scope, and lines of code over time. In a monolithic application, the codebase can easily become too large and too complex to deal with efficiently. The resulting applications are often too complicated for a single developer to fully understand, which also complicates further development. Larger applications also suffer from longer start-up times. Continuous deployment becomes difficult, since the entire project needs to be redeployed for every change. Monolithic applications are also difficult to scale, and adopting new frameworks or languages within them is costly.

Many of the issues monolithic architecture is prone to can be solved by instead adopting microservices architecture. Microservices architecture involves splitting an application into connected but smaller services. These services are typically dedicated to a single feature or functionality and are usually each run in an individual Docker container or VM.
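As an illustration (the service names and images are made up), a small store application split along these lines might be deployed with one container per service:

```yaml
services:
  accounts:
    image: store/accounts:latest    # handles users and authentication
  inventory:
    image: store/inventory:latest   # tracks product stock
  orders:
    image: store/orders:latest      # places orders; calls accounts and inventory
  gateway:
    image: store/gateway:latest     # single entry point that routes requests
    ports:
      - "8080:8080"
```

Each service here owns one feature and can be developed, deployed, and scaled on its own.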

Using microservices architecture comes with a number of benefits. Splitting a complicated, monolithic application into smaller pieces makes that application significantly less complicated. It becomes easier for individual developers to understand the services, thus making development faster. This also allows a team of developers to focus on a single service. Teams can become more familiar with the service they are working on, and maintenance becomes more manageable. Splitting an application into microservices also makes that application easier to scale.

Despite its benefits, microservices architecture is not without drawbacks. Splitting an application into microservices introduces new complexity. The services need to talk to each other, so a mechanism to send and receive these messages must be put in place. Testing is also more complicated: a test class for a monolithic application only needs to start that application, but a test class for a microservices application must know which services it depends on and start each of them.

I chose this article because I appreciated how thorough the information was. I plan on reading the remaining six articles in Richardson’s series. I had not thought about how connecting microservices together might complicate a project. I will be thinking about the pros and cons of microservices as I continue into my career.


REST APIs

My experience with APIs outside of classwork is very limited, so I wanted to take this opportunity to further familiarize myself with them. After a discussion in class, I am especially eager to learn more about REST APIs and how they might differ from other APIs. Jamie Juviler’s “REST APIs: How They Work and What You Need to Know” is a very good start.

Introduced in 2000 by Roy Fielding, REST APIs are application programming interfaces that follow REST guidelines. REST, which stands for representational state transfer, enables software to communicate over the internet in a way that is scalable and easily integrated. According to Juviler, “when a client requests a resource using a REST API, the server transfers back the current state of the resource in a standardized representation.” REST APIs are a popular solution for many web apps. They can handle a wide variety of requests and data, are easy to scale, and are simple to build, since they utilize web technologies that already exist.

REST guidelines offer APIs increased functionality. Juviler states that in order for an API to properly take advantage of REST’s functionality, it must abide by a set of rules:

- Client-server separation addresses the way a client communicates with a server. A client sends the server a request, and the server sends that client a response. Communication cannot happen in the other direction; a server cannot send a request, nor can a client send a response.
- Uniform interface addresses the formatting of requests and responses. This standardizes the formatting and makes it easier for servers and software to talk to each other.
- Stateless requires each request to the server to be handled completely independently of other requests. This eases the use of the server’s memory.
- Layered system states that, regardless of the possible existence of intermediate servers, messages between the client and main server should have the same format and method of processing.
- Cacheable requires that a server’s response to a client include information about whether, and for how long, the response can be cached.
- Code on demand is an optional rule. It allows a server to send code as part of a response so that the client can run it.
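Several of these rules are visible in an ordinary HTTP exchange. The endpoint below is hypothetical, and `<token>` is a placeholder; the point is only the shape of the request and response:

```shell
# Stateless: the request carries everything the server needs,
# including credentials -- no server-side session is assumed.
curl -H "Accept: application/json" \
     -H "Authorization: Bearer <token>" \
     https://api.example.com/users/42

# A REST-style response; the Cache-Control header satisfies the
# "cacheable" rule by saying how long the client may reuse it:
#
#   HTTP/1.1 200 OK
#   Content-Type: application/json
#   Cache-Control: max-age=3600
#
#   { "id": 42, "name": "Ada" }
```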

I picked this source because I thought the information was valuable. Again, I have very limited experience with APIs, even less with REST APIs, and I struggled somewhat to understand them. Jamie Juviler provides a very thorough yet easily understood overview of REST APIs. I was not aware that what sets a REST API apart from a regular API was its adherence to a set of rules. This article helped me better understand REST APIs, and I am eager to put this knowledge to use in future coursework and projects.
