Author Archives: vrotimmy

Apprenticeship Pattern: Your First Language

For my first blog post on these patterns, I wanted to start with Your First Language. I have only been exposed to a handful of languages since starting my computer science journey, and though I have honed my skills somewhat in those languages, I am by no means a master of them. I can relate to the problem in the pattern, so I was interested in what it had to say. This pattern addresses those who are looking to start learning a programming language, as well as those who feel their skills are not up to par with what a job is looking for. Selecting your first language could influence your future career, so it is important to choose wisely. The solution suggests choosing a language based on the people around you, for example, an expert in the subject who can mentor you. Becoming proficient in a language involves building up knowledge through reading specifications and solving problems.

One of my main takeaways was to choose a language based on the people you know, specifically an expert. The other takeaway was to look at the community built around a language and see if you want to belong to that community. I found those two points to be the largest deciding factors when it comes to picking up a new language. For me, I would consider my first language to be Java, since that was the first language I learned at school. There were plenty of professors who aided in my learning process, and the community surrounding Java is extensive. It would only make sense for me to continue learning about Java and working through sample problems until I consider myself fluent. I think starting a professional career is a bit daunting, but what the pattern explains makes a lot of sense. The only way to become proficient at a language is to do a lot of reading and try out problems myself. A work environment where you can call on a more experienced team member is also beneficial when it comes to learning a language. That being said, I don’t think there is anything I disagree with in the pattern; it more so emphasizes what it means to be a good learner.

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns Chapter 1 and Chapter 2-6 Introductions

The introduction of the book set the tone for the rest of the reading: everything that was being said came from a place of experience. The values listed to define software craftsmanship were really insightful, and some of them can be applied to other aspects of life as well. I found Dave’s story encouraging because it hit a bit close to home. I sometimes feel like I am at a spot where I can’t fully apply my knowledge. It is inspiring to read about his journey as a software developer and how he put his mind toward the things he wanted. I commend him for being able to read through books on topics that he wanted to learn more about. Personally, I find myself absorbing information better when I am working alongside someone who is proficient in the topic, which is something that Dave brings up. Other than that, the chapter does a good job of introducing the topic of apprenticeship, and it guides readers away from the clichéd image of a blacksmith and their apprentice.

The introduction to chapter 2 was a fun read and the lesson of “emptying your cup” is something that I will remember. In order to be successful in the field, I should keep in mind that there is a lot of room to grow. One of the main ideas of apprenticeship is allowing yourself to take in the knowledge of experienced colleagues. Even in general, opening yourself up to other ways of doing things can prove beneficial and it allows you to grow.

The introduction of chapter 6 somewhat ties in with my take on Dave’s story. What I gathered from it was that I would eventually have to push myself to learn and gain experience. One way of doing so is reading books written by experienced practitioners as opposed to blogs on the internet. I am guilty of not utilizing the plethora of free resources available to me, but at the same time, a textbook’s worth of information is a bit overwhelming. That being said, I can’t argue with the fact that at the end of the day, the only person who can motivate you is you. Nobody is going to force you to read and learn, but it is only beneficial to read what experts in the field have to share.

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

Thea’s Pantry User Stories

I took a look at the user stories documentation to familiarize myself with how everything works. The user stories are organized in a way that lets me understand the step-by-step flow of each process. I believe that this document will be pretty useful to refer to when working on the project. That being said, I noticed the pantry log entry form has been modified, with all of the donation options crossed off. It would make sense to have separate forms for guests and donors. One thing I am curious about, though, is how they transfer the information from the Google Forms to the database; I will probably have to look into it some more.
Link: https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/documentation/-/blob/main/UserStories.md

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

LibreFoodPantry

Before taking a look at the LibreFoodPantry user story map, I read through the linked article. I was able to understand the importance of a user story map when it comes to developing a project. I found the section, “Your software has a backbone and a skeleton – and your map shows it,” interesting because it actually makes sense when you look at the user story map. My initial impression of the food pantry project was that it was simple, but after looking at the story map, I realized that there are a lot of moving pieces that are all part of a bigger picture. I think that creating the story map is a good way of laying out all of the clients’ ideas, and it allows us to visualize and prioritize the features we need to work on.

Links: https://librefoodpantry.org/docs/user-story-map
https://www.jpattonassociates.com/the-new-backlog/

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

Understanding SOLID principles

This week I took a look at some of the popular design acronyms out there and wanted to focus on the SOLID principles. SOLID is a set of design principles introduced by Robert C. Martin in the essay, “Design Principles and Design Patterns,” written in 2000. Martin argues that without good design principles, a number of design smells will occur. I thought that this would be a fitting topic to cover since I am learning about clean code in my Software Process Management class. SOLID is an acronym for a set of five principles commonly used among software developers:

Single Responsibility Principle – A class should have one, and only one, reason to change. In order to follow this principle, a class or any module should simply serve one purpose. It is important to prevent functions from doing more than one thing, which can be done by keeping them small. By utilizing this principle, code is easier to test and maintain in the future. 
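To make this concrete, here is a minimal sketch in Java (the class names are my own hypothetical examples, not from Martin’s essay): the report class only knows how to render itself, while persistence lives in a separate class, so each one has exactly one reason to change.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical example: each class has a single responsibility.
class Report {
    private final String body;

    Report(String body) {
        this.body = body;
    }

    // Only concerned with how the report is rendered.
    String render() {
        return "=== Report ===\n" + body;
    }
}

class ReportSaver {
    // Only concerned with how a report is persisted.
    void save(Report report, Path target) throws IOException {
        Files.writeString(target, report.render());
    }
}
```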

Open-Closed Principle – Changing a class can lead to problems or bugs. Martin argues that you should be able to extend a class’s behavior without modifying it. In order to have a class that is open for extension but closed for modification, use abstractions. Through the use of interfaces and inheritance that allow polymorphic substitution, one can follow this principle. In doing so, code is easier to maintain and revise in the future.
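As a rough illustration (my own hypothetical shapes, not from the essay), new behavior is added by writing a new class that implements the abstraction, while the existing calculator never has to change:

```java
import java.util.List;

// Hypothetical example: AreaCalculator is closed for modification but open for extension.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;

    Circle(double radius) { this.radius = radius; }

    public double area() { return Math.PI * radius * radius; }
}

class Rectangle implements Shape {
    private final double width, height;

    Rectangle(double width, double height) { this.width = width; this.height = height; }

    public double area() { return width * height; }
}

class AreaCalculator {
    // Adding a Triangle later means adding a new class, not editing this one.
    double totalArea(List<Shape> shapes) {
        return shapes.stream().mapToDouble(Shape::area).sum();
    }
}
```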

Liskov Substitution Principle – Named after Barbara Liskov, this principle requires that every derived class be substitutable for its base or parent class. This is a way of ensuring that derived classes extend the base class without changing its behavior. Implementing this principle is similar to implementing the open-closed principle, as it prevents problems or bugs brought about by changes to a class.
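A small hypothetical sketch of what substitutability looks like in practice: any code written against the base class should keep working when handed a subclass.

```java
// Hypothetical example: a Sparrow can stand in anywhere a Bird is expected.
class Bird {
    String describe() {
        return "a bird";
    }
}

class Sparrow extends Bird {
    @Override
    String describe() {
        return "a sparrow"; // Refines the behavior without breaking the base class's promise.
    }
}

class Aviary {
    // If a subclass threw an exception here or returned null, it would violate the principle.
    static String label(Bird bird) {
        return "This enclosure holds " + bird.describe();
    }
}
```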

Interface Segregation Principle – The idea of this principle is that it is better to have multiple smaller interfaces as opposed to a few bigger ones. Developers should be inclined to build new client-specific interfaces, instead of starting with an existing interface and making changes. 
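For instance (again my own hypothetical sketch), a simple printer should not be forced to implement scanning just because some devices can do both:

```java
// Hypothetical example: small, client-specific interfaces instead of one large one.
interface Printer {
    void print(String document);
}

interface DocumentScanner {
    String scan();
}

// A basic device only implements what it actually supports...
class BasicPrinter implements Printer {
    public void print(String document) {
        System.out.println("Printing: " + document);
    }
}

// ...while a multifunction device opts into both interfaces.
class MultiFunctionDevice implements Printer, DocumentScanner {
    public void print(String document) {
        System.out.println("Printing: " + document);
    }

    public String scan() {
        return "scanned contents";
    }
}
```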

Dependency Inversion Principle – Martin states, “Depend upon Abstractions. Do not depend upon concretions,” meaning every dependency in a design should target an interface or abstract class. No dependency should target a concrete class. Being able to utilize this principle will make your code more flexible, agile, and reusable.
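Here is a minimal hypothetical sketch of what that looks like: the high-level service depends only on an interface, and any concrete store can be plugged in.

```java
// Hypothetical example: MessageService depends on an abstraction, not a concrete class.
interface MessageStore {
    void save(String message);
}

class ConsoleMessageStore implements MessageStore {
    public void save(String message) {
        System.out.println("Saving: " + message);
    }
}

class MessageService {
    private final MessageStore store; // Only the interface is referenced here.

    MessageService(MessageStore store) {
        this.store = store;
    }

    void post(String message) {
        store.save(message);
    }
}
```

Swapping ConsoleMessageStore for, say, a database-backed store later would not require touching MessageService at all.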

While these principles are certainly not a foolproof method of avoiding code smells, they offer many benefits when followed correctly. It has become good practice for developers to follow these principles in order to keep code clean, so as not to introduce any problems with future changes. I think this is a very useful set of principles to follow, and I plan to refer to it for anything I develop in the future. These principles draw on the concepts of code smells and clean code, so it was interesting to finally connect the topics that I’ve been learning about in two different classes.

The Importance of SOLID Design Principles

Design Principles and Design Patterns

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

Understanding RESTful APIs and what makes an API “RESTful”

This week, I wanted to solidify my understanding of the concept of REST or RESTful APIs, what they are, and how they work. There is a plethora of information on this topic, but I’ll be referring to a blog post by Jamie Juviler and a website on REST APIs that I found helpful.

Before we get into what REST is, let’s first go over what an API is. API stands for application programming interface, and APIs provide a way for two applications to communicate. Some key terms include client, resource, and server. A client is a person or program that sends requests to the API to retrieve information, a resource is any piece of information that can be returned to the client, and a server is what the application uses to receive requests and maintain the resources that the client is asking for.

REST is an acronym for Representational State Transfer, and RESTful is the term used to describe an API that conforms to the principles of REST. REST APIs work by receiving requests and returning all relevant information in a format that is easily interpretable by the client. Clients are also able to modify or add items on the server through a REST API.

Now that we have an understanding of how REST APIs work, let’s go over what makes an API a RESTful API.

There are six guiding principles of REST that an API must follow:

  1. Client-Server separation

The REST architecture only allows for the client and server to communicate in a single manner: all interactions are initiated by the client. The client sends a request to the server and the server sends a response back but not vice versa.

  2. Uniform Interface

This principle states that all requests and responses must follow a uniform protocol. The most common protocol used by REST APIs is HTTP. The client uses HTTP to send requests to a target resource, and the four basic HTTP methods are GET, POST, PUT, and DELETE (a minimal example of such a request is sketched just after this list).

  3. Stateless

All calls made to a REST API are independent of each other. The server will not remember past requests, meaning that each request must include all of the information required to process it.

  4. Layered System

There can be additional servers, or layers, between the client and server that can provide security and handle traffic.

  5. Cacheable

REST APIs are created with cacheability in mind: when a client revisits a site, cached data can be loaded from local storage instead of being fetched from the server again.

  6. Code on Demand

In some cases, an API can send executable code as a response to a client. This principle is optional.
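To tie the client-server and uniform interface ideas together, here is a minimal sketch of a client issuing a GET request using Java’s built-in HTTP client; the URL is a placeholder for illustration, not a real API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestClientExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The client initiates the interaction with a GET request for a resource.
        // The URI below is a placeholder used only for illustration.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/items/42"))
                .GET()
                .build();

        // Each request is self-contained (stateless), and the response comes back
        // in a client-readable format such as JSON.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```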

If an API follows these principles, then it is considered RESTful. These rules still leave plenty of room for developers to decide how their API works, which is a big reason why REST APIs are preferred: they are flexible. RESTful APIs offer a lot of benefits that I can hopefully cover in my next blog post.

What is REST

What is an API?

REST APIs: How They Work and What You Need to Know

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

The Basic Docker Compose File Structure

In last week’s blog, I covered Docker Compose files and their significance. For today’s blog, I would like to go in depth on docker-compose files and explain how they are structured. To give a quick review: Docker Compose is a tool that allows developers to create and run multi-container Docker applications. Once again, I am referring to The Definitive Guide to Docker Compose (https://gabrieltanner.org/blog/docker-compose), posted by Gabriel Tanner. He does a good job of breaking down the structure of a docker-compose.yml file.

Docker Compose File Structure

In a docker-compose file, you can list the containers you want, along with all of the specifications you would normally pass to a docker run command. As explained in last week’s blog, almost every docker-compose file should include the following:

  • The version of the compose file
  • The services which will be built
  • All used volumes
  • The networks which connect the different services

Docker Compose files are structured by indentation; consider the idea of levels of abstraction.

On the top level are the version, services, and volumes tags. The next level down is where the containers are listed, then parameter tags such as image, volumes, and ports. And finally, on the lowest level, the respective rules for each parameter.

Here is a simple docker-compose.yml file:
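Something along these lines, as a minimal sketch built around the web1 nginx container used throughout this post:

```yaml
version: "3.8"

services:
  web1:
    image: nginx:mainline
    ports:
      - "10000:80"
    volumes:
      - ./web1:/usr/share/nginx/html
```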

Let’s take a look at each tag individually.

Version:

The version tag is used to specify the version of our compose file, based on the Docker engine we are using. For example, if we are using Docker engine release 19.03 or later:
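Compose file format 3.8 is one version that matches that engine release:

```yaml
version: "3.8"
```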

Services:

The services tag acts as a parent tag, as all of the containers are listed underneath it.

Base Image/Build:

The image tag is where you define the base image of each container. You can build on preexisting images available on Docker Hub. In this case, we are defining a web1 container using the nginx:mainline image.
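A sketch of how that looks:

```yaml
services:
  web1:
    image: nginx:mainline
```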

You can even define the base image by pointing to a custom Dockerfile:
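For example, assuming the Dockerfile lives in a local ./web1 directory:

```yaml
services:
  web1:
    build: ./web1
```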


Volumes:

The volumes tag allows you to designate a directory where a container’s persistent data is managed. Here is an example of a normal volume being specified.
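A sketch of that, with a named volume declared at the top level and attached to the container:

```yaml
services:
  web1:
    image: nginx:mainline
    volumes:
      - html-data:/usr/share/nginx/html

volumes:
  html-data:
```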


Another way of defining a volume is through path mapping, linking a directory on the host machine to a container destination separated by a : operator.
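For example, mapping a web1 folder next to the compose file into nginx’s html directory:

```yaml
services:
  web1:
    image: nginx:mainline
    volumes:
      - ./web1:/usr/share/nginx/html
```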

Ports:

Specifying ports allows you to map a port on the host machine to a port inside the running container. In this case, we are using port 10000 on the host and mapping it to port 80 in the container.
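A sketch of that mapping:

```yaml
services:
  web1:
    image: nginx:mainline
    ports:
      - "10000:80"
```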

Those are all of the main components that you need to structure a Docker Compose file. In this blog post, I simply covered the basic tags and structure. Compose files allow you to manage multiple containers and specify properties in finer detail. There are tags that I haven’t covered, such as depends_on, command, and environment, that Gabriel explains really well.

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

Understanding Docker Compose and its Benefits

Recently, I worked with bash scripts and docker-compose files in order to run a set of containers. While both seemed to be valid ways of running multiple containers in Docker, I wanted to look into docker-compose files further to understand their possible advantages and use cases. A resource that I found quite helpful was The Definitive Guide to Docker Compose, a blog post by Gabriel Tanner. In it, he explains why we should care about Docker Compose and covers the potential use cases.

Docker Compose

Let us first go over what Docker Compose is. According to the Docker Compose documentation, “Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.” Instead of listing each docker run command in a script, Compose utilizes a docker-compose.yml file to handle multiple docker containers at once.

Here’s a sample docker-compose.yml file:
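A minimal sketch of what such a file might look like, matching the web1 container described just below (the ./web1 path here is equivalent to ${PWD}/web1 in the docker run command):

```yaml
version: "3.8"

services:
  web1:
    image: nginx:mainline
    ports:
      - "10000:80"
    volumes:
      - ./web1:/usr/share/nginx/html
```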

You might be able to recognize some of the labels, such as image, ports, and volumes, all of which would normally be specified in a docker run command. And as you can see, each individual container is listed under the services tag. The docker command equivalent to running web1 would be something like: docker run -it --name web1 -p 10000:80 -v ${PWD}/web1:/usr/share/nginx/html -d nginx:mainline

Tanner explains that almost every compose file should include:

  • The version of the compose file
  • The services which will be built
  • All used volumes
  • The networks which connect the different services

Now that we have a brief understanding of the docker-compose file structure, let’s talk about the use cases for Compose and their benefits.

Portable Development Environments

As opposed to running multiple containers with a separate docker run command, you can simply use docker-compose up to deploy all the containers specified in your docker-compose.yml file. And it is just as easy to stop all of the containers by running docker-compose down. This provides developers the ability to run an application and configure the services all within a single environment. Since the compose file manages all of the dependencies, it is possible to run an application on any machine with Docker installed.

Automated Testing

A beneficial use case for a docker-compose file is automated testing environments. Compose offers an isolated testing environment that closely resembles your local OS and can easily be created or destroyed.

Single Host Deployments

Compose can be used to deploy and manage multiple containers on a single system. Because applications are maintained in an isolated environment, it is possible to run multiple copies of the same environment on one machine. And when running through Docker Compose, interference between different projects is prevented.

Conclusion

Hopefully this blog post helped you learn as much about Docker Compose as it helped me. For the most part, docker-compose files make it possible to run multi-container applications with a single command. While researching this topic, I’ve come to believe that docker-compose files will be the standard for running applications through Docker, if they aren’t already. I’d like to write more about this topic, and I think a blog post going in-depth on the structure of docker-compose files would be useful.

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

What is Semantic Versioning and why is it important?

Semantic Versioning 2.0.0, SemVer for short, is a format widely used by developers to determine version numbers. When implementing Semantic Versioning, you will need to declare an API (Application Programming Interface). This is because the way the version number is incremented depends on the changes made against the previous API. A semantic version consists of three numbers separated by periods and is formatted like so: MAJOR.MINOR.PATCH. There are rules to follow when incrementing each of the numbers, and the SemVer documentation provides a very helpful summary of when to do so:

MAJOR version is incremented when API incompatible changes have been made.

MINOR version is incremented when backwards compatible functionality has been added.

PATCH version is incremented when backwards compatible bug fixes have been made.
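As a small illustration (my own sketch, not part of the SemVer spec or any library), here is how each kind of bump plays out on a MAJOR.MINOR.PATCH number in Java:

```java
// Hypothetical sketch of applying the SemVer bump rules.
record Version(int major, int minor, int patch) {
    Version bumpMajor() { return new Version(major + 1, 0, 0); }         // breaking change: reset minor and patch
    Version bumpMinor() { return new Version(major, minor + 1, 0); }     // backwards compatible feature: reset patch
    Version bumpPatch() { return new Version(major, minor, patch + 1); } // backwards compatible bug fix

    @Override
    public String toString() {
        return major + "." + minor + "." + patch;
    }
}

class VersionDemo {
    public static void main(String[] args) {
        Version v = new Version(1, 17, 4);
        System.out.println(v.bumpMajor()); // 2.0.0
        System.out.println(v.bumpMinor()); // 1.18.0
        System.out.println(v.bumpPatch()); // 1.17.5
    }
}
```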

Next let’s take a look at each version number specifically. I came across this blog post on the Web Dev Simplified Blog and it does a good job of explaining each of the versions further. 

MAJOR Version 

The major version number is only to be incremented when an API-breaking change has been introduced. The blog provides examples: a major change could be anything from an entire library rewrite to a rework of a single component that still breaks the API. The version MUST be incremented if any backwards incompatible changes have been made. It is possible that minor and patch level changes have been made as well, but they must be reset to 0 when the major version is incremented. For example, when a major change is made to version 1.17.4, the new version number will be 2.0.0.

MINOR Version

The minor version number is incremented when backwards compatible changes have been made that don’t break the API. According to the SemVer documentation, it MUST be incremented if any public API functionality is marked as deprecated and MAY be incremented if substantial new functionality or improvements are introduced within the private code. When incrementing the minor version number, the patch level must be reset to 0 as well. For example, when a minor change is made to version 1.14.5, the new version number will be 1.15.0.

PATCH Version

The patch version number is incremented simply when backwards compatible bug fixes have been introduced. This number is commonly updated, and the only change that should be made is the fixing of incorrect behavior. For example, a bug fix for version 0.4.3, would make the new version 0.4.4.

Hopefully this blog was helpful in understanding Semantic Versioning. Its wide usage makes it evident that Semantic Versioning is a useful tool when creating new versions. For the most part, the process of deciding the version number is straightforward, but it is neat to see that there is an actual guideline that many developers follow.

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

What is Docker and why are we using it?

For the past few weeks in class, we have been working with something called Docker. I have been working on projects that used Docker, and we recently did an activity on Docker commands. With all this work revolving around Docker, I wanted to familiarize myself with it further. I did some research on what Docker is, how it works, and why we use it. There is an abundance of sources and blogs that go in depth on how Docker works. That being said, this blog post will just relay most of that information, and you may find it useful if you have been confused about Docker up until now.

Let’s first understand what Docker is. A very informative source that I found was an article by IBM that explains this topic very well. Docker is an open source platform that utilizes containerization to package applications, their dependencies, and the required operating system libraries into containers. This in turn allows software developers like us to write code and build applications no matter the environment. Though it took a bit of time to get set up, I found that it made the whole process of writing programs more convenient.

For our second homework assignment, to get the project running in Visual Studio Code, we needed to reopen the folder in a dev container. Docker revolves around the process of containerization, a variation of virtualization. When you hear the term virtualization, you may think of virtual machines, which emulate a physical machine by virtualizing the underlying hardware, the OS, and the application along with its dependencies. Containers, on the other hand, virtualize only the OS and package just the application and its dependencies. As a result, containers offer more portability because “unlike a virtual machine, containers do not need to include a guest OS in every instance and can, instead, simply leverage the features and resources of the host OS,” as stated in another article by IBM.

Now that we have a better understanding of how containers differ from virtual machines, I just wanted to conclude by listing the benefits of using Docker and containers. IBM mentions that containers are more lightweight. I have definitely noticed the difference in system usage between running a virtual machine and just running Docker. Another benefit I have seen is the increase in development efficiency, especially for the second homework assignment, where we were required to run the code against tests several times as we made changes. Overall, I found that writing this blog post helped me get a better understanding of what Docker is, how containers work, and their benefits to the software development process. It allowed me to weigh the pros and cons of using virtual machines as opposed to containers, and now I can understand why we are using Docker.

Sources:

https://www.ibm.com/cloud/learn/docker

https://www.ibm.com/cloud/blog/containers-vs-vms

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.