Author Archives: cloudtech360

Sprint 1 Retrospective

Working in a SCRUM team has proven to be a task, to say the least. For our first sprint, I was our team’s assigned SCRUM master. Our project? To create a mobile application that runs on Android and Apple devices to help aspiring U.S. citizens prepare for their citizenship exam. Our project was the only one of the eight that had no prior work done before it was assigned to us… aka we were starting from scratch.

The Bad

Before this semester, I wasn’t a stranger to working in a team to complete a project. Despite this, working in a SCRUM team felt a bit different. We were working together as a team, but at the same time we were each working independently. At points during this sprint, our communication wasn’t great. A lot of the same questions kept reemerging after they had already been answered. I feel like we were slow to realize how important effective communication and active listening are. While we did conduct our daily standups, I feel like we only did them for ritual’s sake. I also sensed that everyone wanted to take the project in a different direction, and it showed in how our conversations went.

The Good

Although I had feelings of stagnation within our team, I do not fault anyone in the group, because developing an application while learning so many new things about getting it started is a lot, especially since it’s everyone’s first time. And that feeling of stagnation was anything but accurate: our team’s progress never stopped moving forward, even if it was moving slowly, and that is an amazing display of my teammates’ willingness to find solutions in new and unfamiliar territory. During our retrospective meeting we all came to realize that we completed all the essential project-setup tasks for Sprint 1. I am also hopeful because I feel like my teammates are going to be great to work with once we all learn how to work with one another.


I think as a team we need to put more value into the daily standup meetings. Although they’re short in comparison to the work we’re doing during the rest of our meetings, they are super important to our success. Making sure that everyone is active in the meeting, whether speaking or listening, is something we can improve upon. Another improvement we can make is being comfortable with having our ideas challenged. Instead of blindly agreeing with an idea one of us has, we should be able to hold respectful debates on why something may not be great for our project and be alright with the outcomes of those debates.

How Did I Contribute?

Much of the early part of the sprint was dedicated to figuring out which framework we would use to create our application. We broke into three teams of two. Eric and I were assigned to investigate what the Flutter framework would bring in its arsenal to help us complete our project. A few of the things I spent my time researching include:

  • Learning about what type of application Flutter is.

  • Creating a sample “Hello World” -like application in Flutter.

  • Deciding whether to install Flutter locally on our systems or use Docker containers.

Once we decided that Flutter was not going to be the route we were going to take, I used the rest of my time during the sprint to work on the writing portion of our application.

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns Chapter 1-6 introductions

The authors use the introduction of this book to instill the passion of programming in the reader. From my reading, I believe the authors convey that as an active participant in the field, one should carry themselves with a sense of pride in their work, become immersed in it, and keep growing over the span of their career.

One particular part in the introduction piqued my interest by sharing how Agile development changed the way the authors think about software development. When developers come across a situation that isn’t “covered by the rules, they’re lost”. There have been many cases in my own journey of becoming a software developer where I have been lost because a solution I was told would work didn’t (OK, maybe the reason was that I was a novice and didn’t implement the solution properly). Solutions do not come in a one-size-fits-all category, and that’s no different in the realm of computing. But as the authors continue to explain, if developers hold values that go deeper than the rules that have been set forth, they can create new rules that apply to any situation they find themselves in. For me, these words gave me the permission I’ve been seeking to be as creative as I desire in this field.

In the “What Does It Mean To Be An Apprentice” section, so many gems resonated with me that it is my favorite part of the entire introduction. The beginning phase of the journey to becoming anything you want to be good at requires that you take a good look inward and figure out exactly what works for you and how you learn. Being able to focus on yourself and what you need in order to grow is crucial to how well your journey goes.

While these were my main takeaways from the entire first chapter, the introductions to the other patterns I should begin practicing also seemed to me like things I’ve already thought about doing (I just needed to be reassured to do them).

The second chapter encourages you to forget what you know when learning something new. While I understand the intent here, I feel it can be more helpful to make connections between aspects of a new programming language and what you already know.

The fourth chapter seems like it is going to be a stark reminder to be humble in your journey as a software craftsman. While it is good to be proud of your accomplishments along the way, you should not become so full of yourself that your growth becomes stunted because you feel you have reached your peak.

Never. Stop. Learning. These are the words that rang through my head as I read the fifth chapter’s introduction. No matter what you’ll be doing as a software craftsman, the field will always be moving forward. You do not know everything and you never will, but you can always try.

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.

Thea’s Pantry Intro

Thea’s Pantry is an open source software tool used by Worcester State University to help manage their on-site food pantry. The software is part of the LibreFoodPantry open source project. While taking a look at the architecture of the system on the food pantry’s repository on GitLab, I noticed that it is composed of multiple systems that make it whole. Each of the features the system offers breaks down into smaller, specialized parts. I’m the type of person who likes to do things alone, and although this project seems like a relatively small one that could probably be done by a single developer, with a team working on it, better solutions are likely to be discovered, and discovered faster. Looking at the architecture really put things into perspective for me in terms of the importance of having a team to work on software.

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.


LibreFoodPantry Intro

LibreFoodPantry is an open source software project for food pantries with a growing community of developers, clients, and users. Upon perusing the website, I found their values to be particularly interesting, especially their value of FOSSisms. In my opinion, the belief system of FOSS (Free and Open Source Software) contradicts the traditional teaching system that one experiences throughout their academic career. For instance, a course at a university is typically led by a single professor, from whom the students receive all their direction. In an Open Source community, the community tends to agree upon the direction in which the project goes. This can be a bit overwhelming for newcomers like myself to become accustomed to.

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.

Npm and Yarn

When looking at Node, I was confused about what type of software it is. It seemed like a framework to me. As I did some research, I came across some articles and found that it is a common misconception that Node is a framework or even a programming language. Node.js is actually a JavaScript runtime environment. It is typically used in backend development, but it sees pretty good use on the front end as well.

Over the course of the semester we have been using node modules in our projects. In order to get one of these modules into a project, we would need to use the npm install command. Npm stands for Node Package Manager. Its registry contains hundreds of thousands of packages that developers can use to inject dependencies into their applications. At the time I didn’t really understand what npm was used for, so I would just blindly install packages into my projects so I could get them working.
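For illustration (the project name and version numbers here are hypothetical), running `npm install` for a package downloads its code into node_modules and records the dependency in the project’s package.json:

```json
{
  "name": "demo-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.0"
  }
}
```

That recorded entry is what lets anyone else run a plain `npm install` later and get the same dependencies.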

During the last few projects we were working on in class, we started using Yarn, another type of package manager. Once again, not knowing exactly what Yarn was or why I needed it, I blindly installed the resources I needed for my project and got to work. At the time, I didn’t notice any subtle differences. The packages I installed were also put into my node modules and I was able to work on my projects as needed. So what was the difference?

Yarn was developed as a response to some of the shortcomings of npm in the past. Over time, the developers of the two package managers have been copying each other’s homework in terms of staying relevant in the developer community. One of the glaring differences between the two is that when installing packages, Yarn installs multiple packages in parallel while npm installs packages sequentially. In the grand scheme of things, this saves some time when setting up your projects. Both package managers give the node modules the same file structure, although their lockfiles differ: Yarn records installed versions in yarn.lock, while npm records them in package-lock.json. Yarn has made it easy to convert a package-lock.json into a yarn.lock file in case users want to make the switch from npm to Yarn. Npm, however, doesn’t seem to offer the same ease when migrating from Yarn back to npm.

Which package manager is better will depend on the developer. It is important to take into consideration, though, that Yarn is the later package manager, and it has still gained about as much traction as npm has in its entirety, though this could just be due to the increasing demand for package managers in the present day.


NPM vs. Yarn: Which Package Manager Should You Choose?

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.

Angular (No .js)

Looking at more frontend frameworks, this blog post was originally going to take a look at Angular.js, but with its EOL soon approaching on December 31, 2021, I thought it may be better to look at a framework that is still receiving support from its developers. Angular (no js) is another framework that allows you to develop front-end applications. Angular works off of four main concepts: Components, Templates, Directives, and Dependency Injection.

Components in Angular (like in other frameworks and libraries) are the bread and butter of the framework. You need them in order to design your application. Every component in Angular has three pieces that make it whole. The first piece is an HTML template, which is responsible for what can be seen on the page. The second piece involves the use of TypeScript. TypeScript works like JavaScript but adds support for static types like those in object-oriented programming; a class written in TypeScript defines the behavior of the component. The third piece is a CSS selector, which determines how the component is used within other templates.

Templates are the working HTML of your application. A template can be attached to a component in one of two ways: “template”, which lets you define the content of the component inline, or “templateUrl”, which lets you define the content of the component through a reference to another file. It’s important to note that only one of these declarations can be used in a component at a time. Because a template only represents the fragment of the page its component is responsible for, the developer can omit the usual boilerplate HTML tags while creating it.
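As a sketch of the two declarations (shown as plain objects rather than real Angular @Component decorators, and with a hypothetical file name):

```javascript
// Sketch only: real Angular attaches these fields through the @Component
// decorator in TypeScript. A component uses one form or the other, never both.
const inlineComponent = {
  selector: 'app-greeting',
  template: '<p>Hello, {{ name }}!</p>', // content defined right here
};

const fileComponent = {
  selector: 'app-greeting',
  templateUrl: './greeting.component.html', // content lives in another file
};

console.log('template' in inlineComponent);  // true
console.log('templateUrl' in fileComponent); // true
```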

Directives are classes that add extra functionality to the elements in your application. Angular currently uses three types of directives. Components are the first type, defined as a directive with a template. The second type is the attribute directive, whose purpose is to change the appearance and/or behavior of an entity within the application. The third type is the structural directive, which changes the Document Object Model (DOM) layout by adding or removing DOM elements.

Dependency Injection is a design pattern. Angular uses this pattern whenever a class needs an outside service in order to carry out its functions. Instead of creating a new instance every time a service is needed, which can be wasteful of resources, the needed service(s) can be requested at runtime and shared.
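The pattern itself doesn’t need Angular to demonstrate. Here is a minimal, framework-free sketch (class and method names are my own invention) of handing a shared service to a class instead of letting the class build its own:

```javascript
// A shared service, created once instead of once per consumer.
class LoggerService {
  log(message) {
    return `[log] ${message}`;
  }
}

class CheckoutComponent {
  // The dependency arrives through the constructor; the component never
  // constructs its own LoggerService.
  constructor(logger) {
    this.logger = logger;
  }
  checkout() {
    return this.logger.log('checkout complete');
  }
}

// One instance can now be injected into many components.
const logger = new LoggerService();
const checkout = new CheckoutComponent(logger);
console.log(checkout.checkout()); // "[log] checkout complete"
```

Angular’s injector automates exactly this wiring: deciding when to create a service and handing the same instance to everything that asks for it.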

Angular even has its own Command Line Interface (CLI) to help make using the framework simpler for the developer.

I learned about the Angular framework in the link below. You can use it to find out more about things I didn’t get to mention in this post.

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.

To-may-to : To-mah-to; Po-tay-to : Po-tah-to ; Framework : Library?

At the end of my previous blog post, I incorrectly referred to React.js as a framework. It is actually a JavaScript library. Although the two can be used to achieve a common goal, the two terms are not exactly interchangeable. Allow me to use the following analogy to explain my understanding of the two.

The main difference is that when you’re using a library, you’re asking it to assist you with completing your code. A framework, on the other hand, can also be used to write your code, but it requires you to “relinquish ownership” and abide by its rules. To discern the two, let’s look at the code to be written in terms of sharing information with one another.

Scenario A.

You’re browsing StackOverflow and you come across a user who is asking a question about how to use various functions/methods in a particular programming language. You, being a well-seasoned programmer and active user in the StackOverflow community, wish to give this user a bit of assistance. So you decide to do some research on said programming language and functions/methods. Once you’ve gotten a firm understanding of the concepts, you give a friendly and in-depth response to the user, which helps to solve their problem. 

Scenario B. 

You’ve been assigned by your professor to write a paper explaining how to use various functions/methods in a particular programming language. They require the paper to be written in an accepted formatting style (MLA/APA) of your choosing. You, being a top student in your class, do some research to produce a high-quality paper that reflects your standing. As you write, to adhere to the formatting guidelines, you use in-text citations. Once the paper is complete, you also cite your sources on your works cited page. Your professor gives you perfect marks due to the accuracy and proper formatting of your paper.

In both of these scenarios, we were able to relay the information (write our code) in different ways. While the method of finding the information was roughly the same, the end product is what differed.

In scenario A, the user was able to answer the question in any manner that suited their needs, with no restriction. In scenario B, however, the student was not granted the same leeway and was required to structure their response according to a specific set of guidelines. Scenario A represents the usage of a library in your code, while scenario B represents the use of a framework. While the tools used were essentially the same, the control over the end product was not. It is this control over the code that highlights one of the main differences between how libraries and frameworks operate.
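In code, this same contrast is often described as “inversion of control.” The toy example below (neither a real library nor a real framework, just my own illustration) shows the difference in who calls whom:

```javascript
// Library style: your code stays in charge and calls the library.
const libSort = (items) => [...items].sort((a, b) => a - b);
const mine = libSort([3, 1, 2]); // you decided when this ran

// Framework style: you hand over handlers, and the framework decides
// when (and whether) each one gets called.
function miniFramework(handlers) {
  const lifecycle = ['start', 'render'];
  return lifecycle.map((event) =>
    handlers[event] ? handlers[event](event) : null
  );
}

const results = miniFramework({ render: (event) => `drew UI on ${event}` });
console.log(mine);    // [1, 2, 3]
console.log(results); // [null, 'drew UI on render']
```

With the library you kept full control of the flow; with the framework you only filled in the blanks it offered you.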

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.

SPA treatment. How’s the Vue over there?

With front-end development being one of the last topics we cover this semester, I decided to take a deeper look into things that can be built using the frameworks we’ve utilized. Vue.js, for example, is a framework that can be used to develop a front-end application. One type of front-end application that many of us are very familiar with is the Single Page Application (SPA). An SPA is an application that displays all of its information, well, on a single page. It sounds like a lot, but usually only the requested and necessary information is displayed on the page at any one time.

For example, most email service providers have been developed in the SPA style. If you’re like me, you have a ton of unread emails that you are just too lazy to go through and delete. All of these messages can span a few “pages”. As you click through the pages, you can see how quick the response is while sifting through them. Although this design gives you the impression that you are navigating through many different pages, the browser is just updating the very same page that you navigated to.

While some of this implementation is back-end stuff, Vue can help make it possible with the use of components; that’s really all Vue is. Instead of having to use hundreds of unmaintainable lines of JavaScript to add functionality to an application, Vue makes it really simple and easy to maintain your code under Don’t Repeat Yourself (DRY) standards. It starts with a root component that brings the Vue framework into your code. From there, everything else is just a series of components to get everything working.

Components come in two different flavors. Global components, as the name might suggest, are registered once and usable everywhere within the application. Local components are only usable wherever they are registered. To register a component as global you would use the Vue.component method. Registering a component locally requires that you list it in the “components” option of the component that uses it. Figuring out whether to register a component globally or locally calls for careful consideration when making your application.
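Here’s a toy model of that lookup rule (my own simplification, not Vue’s actual implementation): globals are visible everywhere, while local registrations only exist inside the component that declares them:

```javascript
const globalComponents = {}; // visible to every component in the app

function registerGlobal(name, definition) {
  globalComponents[name] = definition;
}

// Resolution: a component's own local registry wins, globals are the
// fallback, and anything else is simply unknown.
function resolve(name, localComponents = {}) {
  return localComponents[name] ?? globalComponents[name] ?? null;
}

registerGlobal('app-button', { template: '<button>Click</button>' });
const page = { 'app-sidebar': { template: '<aside>menu</aside>' } };

console.log(resolve('app-button', page) !== null); // true: global, usable anywhere
console.log(resolve('app-sidebar'));               // null: local to `page` only
```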

There are other great front-end tools like React.js and Angular. Along with Vue, these three are among the most popular choices that developers like to use in the present day.

The sources I used for this post:

Understand VueJS in 5 minutes

Global & Local Components

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.

So why is it called REST Again…?

REST stands for REpresentational State Transfer. It was introduced by Roy Fielding as part of his doctoral dissertation. Its purpose is to allow the use of data without tying it to any specific entity. This allows the data to be represented differently through the various mediums we refer to as hypermedia.

A RESTful interface is centered on resources. In order for an application to follow REST guidelines, it must adhere to a set of constraints. The first constraint states that an interface should interact on a client-server basis. The client machine should be able to make requests to the server, and in return the server will respond according to the information it received.

The second constraint states the client-server interaction must utilize a uniform interface. In order for the client and server to interact RESTfully, the use of Uniform Resource Identifiers (URIs) is imperative. Any resource involved between the client and server must be identifiable in order for the interaction to be successful.

Thirdly, all requests between the client and server must be stateless. This means that a request made from the client side must carry all the necessary information for the server to complete it. This keeps the workload on the server to a minimum as it handles various requests from different clients. The burden of keeping track of the application’s session state is the responsibility of the client, which essentially gives the server a snapshot of the current state whenever it requests additional resources.
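A toy handler (not a real server; the token and field names are made up) makes the statelessness constraint concrete: every request carries its own credentials and paging info, so the server remembers nothing between requests:

```javascript
// Stateless: the server keeps no session table. Each request must bring
// everything needed to process it, including proof of who is asking.
function handleRequest(request) {
  if (request.headers['authorization'] !== 'Bearer demo-token') {
    return { status: 401, body: 'missing or invalid credentials' };
  }
  return { status: 200, body: `page ${request.query.page} of results` };
}

// Without credentials the request fails, no matter what came before.
console.log(handleRequest({ headers: {}, query: { page: 1 } }).status); // 401

// With everything included, the server can answer immediately.
console.log(handleRequest({
  headers: { authorization: 'Bearer demo-token' },
  query: { page: 2 },
}).body); // "page 2 of results"
```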

The fourth constraint states that any response from the server must be labeled as either cacheable or non-cacheable. This allows the client to reuse the data from a request for a certain period of time (if the server allows it) without having to resend the request to the server.

The fifth constraint states that the client and server should be able to have layers in between them. This allows legacy systems to have continued support as improvements and new features are added to the system, and everything will continue to work as long as the implementation of the interface has not been changed.

The last constraint is an optional one and it’s called code on demand. This constraint states that the functionality of a client can be extended by allowing code to be downloaded and executed. This allows the client to be simpler.

While I found all of this to be informative, I was mostly taken aback that the formulation of this architecture is credited to a student pursuing their doctorate. It puts things in perspective for me: any assignment that I am given does not have to be completed only for a grade; it can be an opportunity to change the way the world interacts with things.

The information that I conveyed in this post is all thanks to the following two links.

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.

Docker Explained.

This week in class we’ve gone over UML diagrams and the importance of being able to translate back and forth between writing code from a diagram and making a diagram based on code. The professor told us to download Visual Studio and Docker, which I’m assuming will be used for the entirety of the semester. I didn’t have a single clue as to what Docker was or why it might be needed. After a brief explanation, prof told me to do a little bit of reading myself, and so I did. I’m by no means a Docker expert, but the picture has become a bit clearer.

Docker is a container-based application that allows you to run services independently of each other. Containers tend to be pretty compact and only carry the information necessary for a service to work. Docker containers are created from docker images. An image is basically just a template that tells the system how to make the container. An image can consist of many layers, where each layer is just a previous working version of the image. It’s important to note that an image is read-only; its purpose is to load the container. The topmost layer (when the container is created) is what the user works with, whether that’s making changes to the container itself or using the tools that come with the container.

While reading about how this technology works, the thought of how something like this could be secure kept swimming through my mind, but as each layer of the image is created, it becomes a completely new and immutable image. I’m still not entirely sure how this works and will have to spend more time trying to understand, but for now I’ll take it for what it’s worth.
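As a rough sketch of how those layers come about (the base image and file names here are hypothetical), each instruction in a Dockerfile adds one read-only layer on top of the previous ones:

```dockerfile
# Each instruction produces a new immutable layer on top of the last.
# Base image layer:
FROM node:18-alpine
# Layer recording the working directory:
WORKDIR /app
# Layer holding the dependency manifest:
COPY package.json .
# Layer containing the installed dependencies:
RUN npm install
# Layer with the application source:
COPY . .
# Metadata recorded in the image: what the container runs on start.
CMD ["node", "index.js"]
```

When a container starts, a thin writable layer goes on top of this read-only stack, which is the layer the user actually works in.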

Where Docker really becomes a useful tool is in its portability and reusability. For example, the use of a virtual machine to run certain programs or applications isn’t frowned upon, but it does tend to be costly in terms of space and memory. A 500MB application could take heaps of memory to run because the guest OS and libraries would need to run before the desired application could be used. If you wanted to run multiple instances of that application, you would need to run multiple VMs. That’s where Docker delivers and gives the user what they need in terms of reusability.

Now, Docker containers are not a one-stop shop when it comes to solving issues. If a user is trying to run multiple servers and tries to administer them using only Docker containers, they will find themselves in a pinch due to the stripped-down capabilities of a container. In the name of portability, a container only holds enough to ensure its task gets completed. In a scenario like this you would probably want to stick with using a VM to get the full use of the OS and all its resources to maintain multiple servers.

Here are two videos that brought me up to speed on just what type of software Docker is and why it is extremely useful, in just over 15 minutes. The explanations are given in an accessible manner that allows people like me, who couldn’t even begin to understand the concept, to grasp it better. I hope you enjoy the content, I did!

Containers vs VMs
Containerization Explained

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.