Author Archives: murtazan

Sprint Retrospective 1

What worked well:

Creating issues went as expected. We had open communication within the team and helped each other create issues, connect them with epics, and assign team members for work and for reviewing.

We helped each other figure out Git again. Dahwal introduced me to the GitHub Desktop application, which made the whole process of adding files, committing, pushing branches, creating merge requests, creating new branches, and switching branches fairly simple.


I had to revise Scrum techniques and how to use the Scrum board, but with the team's help I quickly grasped the concepts again. I was quickly able to assign an issue to myself in the API part of the project (links to the issues below). Most of the code was very similar to a homework project from last semester and was therefore easier to complete. I created a src folder with path, responses, and schema subfolders, and created schemas for view, product, shelflife, and EvoError. I had to do a good amount of research on OpenAPI, and also on regex for the patterns and formats used within the schemas, along with properties like min, max, and temp.
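As a rough sketch of what one of these schemas can look like, here is a minimal OpenAPI fragment. The field names, the regex pattern, and the enum values below are my own illustration, not the project's actual schema:

```yaml
# Hypothetical OpenAPI 3 schema sketch; names, pattern, and values are
# illustrative only, not the real project schema.
components:
  schemas:
    Product:
      type: object
      properties:
        name:
          type: string
          maxLength: 100
        upc:
          type: string
          pattern: '^[0-9]{12}$'   # regex: exactly twelve digits
        shelflife:
          type: object
          properties:
            min:
              type: integer
              minimum: 0           # numeric bound, distinct from the regex pattern
            max:
              type: integer
            temp:
              type: string
              enum: [frozen, refrigerated, room]
      required:
        - name
        - upc
```

Keywords like pattern, minimum, and enum are where most of the regex and format research came in.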

What didn’t work so well:

Figuring out the workings of Git took longer than expected. Brushing up on Scrum and trying to use the Scrum board also proved harder than expected. Communication in the beginning was limited to class time only, which proved to be a big obstacle in this sprint. We also ran into a lot of problems with merge requests, which failed due to branches being behind on commits, improper branch creation, and generally not having a clean working tree.


I spent a lot of time in Git Bash until Dahwal showed me the GitHub Desktop application, which saved me a lot of time. I had to do a fair amount of reading of the OpenAPI documentation to figure out proper patterns and formats, and even after that I needed to ask team members and the professor for help. I also confined myself to issues related to schemas and did not get the opportunity to create paths for the API.


The most important issue we need to deal with is communication. We need to meet outside class time, or at least meet via Discord video calls. We need to attach the issues we create to the appropriate epics and understand how weights work and how they are assigned. We also need to communicate difficulties or problems we face regarding issues in the comment threads instead of only on Discord. Finally, we need to come up with a system for approvals and merge requests so that most of our class time is not spent approving and merging requests.


I need to communicate with team members working on issues similar to mine so that we can agree on some conventions. For example, my schemas, branches, commits, and merge requests had a slightly different naming convention than others'. I also need to keep up with emails, Discord messages, and other notifications and respond in a timely manner. Most importantly, I need to make an effort to stay up to date on issues I am not assigned to and keep up with the flow of the code.

Links to some issues:


Product Schema

View Schema

Shelflife Schema

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns

Dave’s story introduces both motivation and fear: fear that the next big thing might make all our efforts wasted, but motivation to work hard at learning every new skill and language. We see many versions of this idea from researchers, like “Failure is merely an incentive to try a different approach” by Carol Dweck, or “recognize the inadequacies in what you do and to seek out solutions” by Atul Gawande.

However, Etienne Wenger said it best with ‘situated learning’: the “best way to learn is to be in the same room with people who are trying to achieve some goal using the skills you wish to learn”. Situated learning is also the key to apprenticeship and to achieving mastery. A person who is always working to find a better, smarter, faster way to accomplish their goals, using connections between practitioners and the communication channels within and outside the team, is one who will enhance the skills of their apprentices and journeymen.

That said, situated learning is no small feat. It is a long, hard journey for an apprentice to become a journeyman and then a master. Learning new skills throughout one’s career is vital, but perfecting those skills requires a concrete approach to learning itself, which is why the Perpetual Learning pattern must be applied as early as possible in our career path.

The biggest mistake I have heard graduates make is overestimating themselves. Coming out of university with a bachelor’s in computer science doesn’t mean we have four years of experience; it means we have just started, which is why the author writes in the very first paragraph that “this book is for people at the beginning of the journey.” To avoid this, one must have accurate self-assessment skills. We must recognize how limited our knowledge is compared to the limitless treasure of knowledge available, and strive to acquire it by learning about other teams, organizations, journeymen, and master craftsmen. We can also acquire a great amount of knowledge and skill through the internet via text, audio, and video; however, we must recognize that the vast wisdom captured in the books of experienced software craftsmen cannot be replaced by blog posts on the web.

“Walking the Long Road” teaches us patience and hard work: we cannot learn every language by working hard for only a month or a year. We need to keep practicing and polishing our skills, as we learn in “Perpetual Learning”, including skills like self-assessment (“Accurate Self-Assessment”) and learning not only through the internet but also through the wisdom and experience of the software craftsmen who came before us. However, I do not agree with “Emptying the Cup”. We should not forgo our existing skills and knowledge but build on them, while never being arrogant or foolish enough to think we have learned everything we possibly can.

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Thea’s Pantry: Technology

It is exciting to learn about the software used to build an entire working system from bottom to top. I will briefly research and talk about Kubernetes, RabbitMQ, and Keycloak, since these are the pieces of software I have not personally used before.

Keycloak is an open-source Identity and Access Management solution for modern applications and services. It offers features such as Single-Sign-On (SSO), Identity Brokering and Social Login, User Federation, Client Adapters, an Admin Console, and an Account Management Console.

RabbitMQ is a message broker that implements the Advanced Message Queuing Protocol (AMQP). Essentially, it is a messaging system used for decoupling software components, and it is also used to run background and asynchronous operations.

As complexity increases with containers, features like automated deployment, container orchestration, app scheduling, high availability, and management of a cluster of several app instances become necessary. Kubernetes is a tool that provides all of these features.
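As a rough illustration of what “managing a cluster of several app instances” looks like in practice, here is a minimal Kubernetes Deployment manifest. The names, image, and port are placeholders I made up, not anything from Thea’s Pantry:

```yaml
# Hypothetical Deployment sketch; app name, image, and port are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3              # Kubernetes keeps three instances running for availability
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: example/api:1.0
          ports:
            - containerPort: 8080
```

Given a manifest like this, Kubernetes schedules the containers onto cluster nodes and restarts or replaces instances that fail, which is the orchestration work described above.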

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Libre Food Pantry: Mission

The mission statement of LibreFoodPantry is very similar to my personal life goals. I too hope to use the computer science knowledge and skills I have obtained for the betterment of not just humans, but also human society, our standard of living, the environment, and planet Earth in general.

There is also a false belief among the general public that people in computer science just have to write their part of the code and move on, and that they therefore do not work well with other people, or simply never have to. The mission statement of LibreFoodPantry states right from the start that they are a “vibrant, welcoming community of clients, users, and developers” working together as a unit and a team to achieve their goals.

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Why Vue

There are several frontend frameworks to pick from, so why do we use Vue? To learn about Vue and its benefits, I decided to read blogs from Vue Mastery, specifically one written by Lauren Ramirez.

  • Vue does not use much memory. Vue allows us to import only the pieces of the library that we need; whatever we don’t use is removed for us via tree shaking.
  • Vue’s virtual DOM (Document Object Model) uses compiler-based optimization, resulting in faster rendering times.
  • To work with Vue, we did not have to learn much beyond basic HTML, CSS, and JavaScript. It was surprisingly easy to learn as we went.
  • Vue has many libraries that can be added as needed. Some of which are:
    • Vue Router (client-side routing)
    • Vuex (state management)
    • Vue Test Utils (unit testing)
    • vue-devtools (debugging browser extension)
    • Vue CLI (for rapid project scaffolding and plugin management)
  • One of Vue’s best features is the Composition API:
    • We can group features into composition functions and call them in setup(), instead of having large, unreadable, unmaintainable code directly in setup().
    • We can export features from components into these functions, which means we don’t have to keep rewriting code and can avoid useless repetition.
  • Vue has enhanced support for TypeScript users as well.
  • In Vue, we can use multi-root components. In most frontend frameworks, a component template must contain exactly one root element, because sibling root elements aren’t allowed. The old workaround was functional components: components that receive no reactive data, meaning the component does not watch for data changes or update itself when something in the parent component changes. However, they are instance-less; you cannot reference them, and everything is passed via context. With the multi-root component support of Vue 3, there are no such restrictions, and we can use any number of tags inside the template section.
  • Vue 3 gives us the Teleport component, which lets us specify template HTML to be sent to another part of the DOM. Sometimes a piece of a component’s template belongs there logically, but it would be preferable to render it somewhere else. This is useful for things like modals, which may need to be placed outside the body tag or outside the Vue app.
  • Most importantly, Vue is open source. Vue is free to be community-driven, and its bottom line is the satisfaction of its end users. It doesn’t have to answer to company-specific feature demands or corporate bureaucracy.
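As a small illustration of the multi-root point above, a Vue 3 single-file component template can have sibling root elements where Vue 2 would have required a single wrapper. The content here is made up for illustration:

```html
<!-- Hypothetical Vue 3 SFC template: two root-level siblings are valid,
     no wrapper <div> needed thanks to multi-root (fragment) support. -->
<template>
  <header>Page header</header>
  <main>Main content</main>
</template>
```

In Vue 2, the same template would fail to compile unless both elements were nested under one root element.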


From the blog CS-WSU – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

UML Diagrams – PlantUML

We learned PlantUML at the beginning of the semester and even did an entire homework assignment creating UML diagrams of a project. The reason I am writing about UML diagrams now is that, while going through previous activities, I realized that UML diagrams can be used not only for code built on OOP but also to create a simple, easy-to-read diagram of a microservices architecture.

Since we are going to use a microservices architecture to build LibreFoodPantry, I decided to look for a resource that makes PlantUML simpler and easier to use and understand. I found a great blog by solutions architect Alex Sarafian. Below are a few features I found very useful:

  • We can color-code arrows for the multiple flows of the diagram and add a legend specifying which color represents which flow.

Format example:        A -> B #Blue : text

Legend:

    | Color  | Flow   |
    |<#Red>  | Flow 1 |
    |<#Blue> | Flow 2 |


For example, we could color-code the arrows inside PlaceOrderService and ApproveOrderService for easier understanding of the services.

  • The autonumber feature automatically adds a number alongside the text of every event, giving a linear sequence of the events that take place: for example, 1st the UI placing the order to the API, then 2nd the API to the database, and so on.

Format example 1: numbers in front of the event text


Bob -> Alice : Authentication Request

Bob <- Alice : Authentication Response

Format example 2: numbers 2-digit padded and bold

autonumber “<B>[00]”

Bob -> Alice : Authentication Request

Bob <- Alice : Authentication Response

  • PlantUML limits image width and height to 4096 pixels. When a diagram exceeds this limit, the image is cut off, taking away the advantage of an efficient flow diagram. To fix this, we can use command-line parameters to increase the width and height limit of the image. Another way is to use the ‘skinparam dpi X’ parameter; the downside is that we must find the value of ‘X’ by experimenting until the diagram fits best.
  • We can display the text for an event below its arrow for a cleaner UML diagram, using a skin parameter. For example:

skinparam responseMessageBelowArrow true

autonumber “<B>[00]”

Bob -> Alice : Authentication Request

Bob <- Alice : Authentication Response

  • PlantUML supports a lot of different colors. A neat trick is to use the ‘colors’ command to render a picture of all of them.
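Putting a few of these features together, a small end-to-end sketch might look like the following. The participants and messages are invented for illustration, and the arrows use PlantUML’s bracketed color syntax:

```plantuml
@startuml
' Hypothetical sequence; participants and flows are made up for illustration.
skinparam responseMessageBelowArrow true
autonumber "<B>[00]"

legend top
  |Color|Flow|
  |<#Red>|Place order|
  |<#Blue>|Confirm order|
endlegend

UI -[#Red]> API : place order
API -[#Red]> Database : insert order
API -[#Blue]> UI : order confirmation
@enduml
```

This combines the color-coded flows with a legend, auto-numbered events, and response text rendered below the arrows.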


From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.


REST APIs

We have been using a REST API for most of the semester, but have not really read up on it. We have been reading a lot of MongoDB documentation — its operators, commands, methods, and collections — but nothing about REST, even though we will be using REST APIs again in a future semester, i.e., in the capstone when we work on LibreFoodPantry. Therefore, I wanted to research REST, and I found a very interesting blog by Adam DuVander.

REST stands for REpresentational State Transfer. REST APIs are a form of web service used to run websites (like the LibreFoodPantry example we built), mobile applications, and most enterprise integrations.

An important thing to know about REST is that it is not a standard; it is built on top of the HTTP standard. The information can come in several formats: JSON, HTML, XLT, Python, PHP, or even plain text. JSON is the one we have been using, and will probably keep using, because it is easy to read for both people and machines.

Developers use HTTP methods, or HTTP verbs, to define the requests being made. GET, PUT, POST, and DELETE are the ones we have used so far. PATCH is another commonly used HTTP method, which updates a subset of existing data.

A REST resource is data that is accessed or modified using HTTP methods. For example, when we worked on the backend, we defined paths to access or modify data using the GET, PUT, or POST methods. An example of a request would be:

POST /order/{id}/items

{id} is an identifier used to find the order with that specific id value. Identifiers can be integers or hashes.
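To make this concrete, here is a small Python sketch that builds the request path and a JSON body for the example above. The item fields are invented for illustration and are not part of any real API:

```python
import json

# Hypothetical order id; the /order/{id}/items path follows the example above.
order_id = 42
path = f"/order/{order_id}/items"   # the {id} placeholder filled with a real value

# JSON body a client might send with: POST /order/42/items
# (field names here are made up for illustration)
body = json.dumps({"name": "canned beans", "quantity": 3})

print("POST", path)   # POST /order/42/items
print(body)
```

Because the request carries everything the server needs (method, path, and body), it stands alone, which is exactly the statelessness property described below.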

REST architecture is made up of these resources and requests, and requests are made using HTTP methods. REST architecture also states that information should not be stored after a request is executed, meaning every request is independent of the others. However, resources should still be accessible and modifiable by the user, so a uniform interface between components is needed: the resources requested are identified separately from the representations sent to the user, and each representation carries enough information for the user to access and modify the resource. REST architecture also calls for caching of interactions between user and server, and for a layered system that organizes the different servers processing requested information into hierarchies.

Built on these principles, REST is very versatile, able to work in a large variety of environments and with multiple data types, which makes REST APIs fast and lightweight.


From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Visual Studio Code: Docker Extension

We have used Visual Studio Code and Docker extensively in our software design learning process. Even though the Docker extension wasn’t required, I have been using it for a while now. Since Docker is one of the biggest open-source platforms providing virtual containers, I wanted to explore further what benefits an extension would bring to Visual Studio Code. For this I am focusing on a Microsoft blog post by Mike Morton.

Using the extension, we can easily add Docker files through the command palette with the Docker: Add Docker Files to Workspace command. This generates ‘Dockerfile’ and ‘.dockerignore’ files and adds them directly to our workspace. The command also gives us the option to add Docker Compose files. The extension can scaffold Docker files for more than ten popular development languages, and we can then set up one-click debugging of Node.js, Python, and .NET Core inside a container.

The extension builds Docker commands for managing images, networks, volumes, image registries, and Docker Compose right into the command palette. We no longer have to go to the terminal and meticulously type docker system prune -a, or search for the IDs of the specific containers we want to stop, start, remove, and so on.

Moreover, the extension lets us customize many of the commands. For example, when you run an image, you can now have the extension put the resulting container on a specific network.

Docker Explorer, another feature of the extension, lets us examine and manage Docker containers, images, volumes, networks, and container registries, and we can use the context menu to hide or show them in the explorer panel.

The best feature is the extension’s ability to select multiple containers or images and execute commands on all of the selected items. For example, we can select the ‘nginx’ and ‘mongodb’ containers and stop or start them at the same time, without affecting other containers and without having to run the start or stop command twice. Similarly, we can run or remove multiple images of our choice. Moreover, when running the start command through the command palette, we can see a list of all the containers that can be started, with a checkbox next to each.

When we are working on, say, the LibreFoodPantry microservices with multiple development containers running, issuing commands through the command palette will be quick and concise, the explorer will give us a simple, organized way to manage Docker assets, and running or stopping multiple containers at the same time will be an extreme time saver. All of these features combined should increase development productivity considerably.


From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Working in containers

Today we will focus on containers and why they have become the future of DevOps. For this we will look at a blog by Rajeev Gandhi and Peter Szmrecsanyi, which highlights the benefits of containerization and what it means for developers like us.

Containers are isolated units of software running on top of the OS, i.e., only the applications and their dependencies. Containers do not need to run a full operating system; we have been using the Linux kernel through Docker while relying on the capabilities of our local hardware and OS (Windows or macOS). When we remotely accessed SPSS and LogicWorks through a virtual machine, the VM came loaded with a full operating system and its associated libraries, which is why VMs are larger and much slower compared to containers, which are smaller and faster. Containers can also run anywhere, since the Docker (container) engine supports almost all underlying operating systems, and they work consistently across local machines and the cloud, making them highly portable.

We have been building containers for our DevOps work by building and publishing container (Docker) images. We have been working on projects like the API and backend in development containers preloaded with libraries and extensions like Swagger preview. Making direct changes to the code and pushing them into containers can lead to potential functionality and security risks. Therefore, we can change the Docker image itself: instead of making code changes on the backend, we build an image with the working backend code and then code on the frontend. This helps us avoid accidental changes to a working backend, but we must rebuild the container whenever we change the container image.
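As a sketch of what baking a working backend into an image can look like, here is a minimal Dockerfile. The base image, port, and commands are placeholders for illustration, not the actual project’s Dockerfile:

```dockerfile
# Hypothetical image for a Node-based backend; base image, paths, port,
# and start command are placeholders.
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci            # install dependencies into the image, not the host
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```

Once built and published, anyone on the team can run this image and get the same working backend, with no risk of accidentally editing its code while working on the frontend.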

Containers are also highly suitable for deploying and scaling microservices, that is, applications broken into small, independent components. When we work on the LibreFoodPantry microservices architecture, we will have five or six teams working independently on different components in different containers, giving us more development freedom. Once an image is created, we can deploy a container in a matter of seconds and replicate containers, giving developers more freedom to experiment. We can try out minor bug fixes, new features, and even major API changes without fear of permanent damage to the original code. Moreover, we can also destroy a container in a matter of seconds. The result is a faster development process, which leads to quicker releases and upgrades that fix minor bugs.


From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Microservices Architecture: Uses and Limitations

Using information from Narcisa Zysman and Claudia Söhlemann, we take a closer look at microservices architecture: why it is so widely used, and what its limitations are.


Benefits:

Better fault isolation: If one microservice fails, others will likely continue to work.

Optimized scaling decisions: Scaling decisions can be made at a more granular level, allowing more efficient system optimization and organization.

Localized complexity: Owners of a service need to understand the complexity of only what is within their service, not the whole system.

Increased business agility: The failure of a microservice affects only that service, not the whole application, so enterprises can afford to experiment with new processes, algorithms, and business logic.

Increased developer productivity: It’s easier to understand a small, isolated piece of functionality than an entire monolithic application.

Better alignment of developers with business users: Because microservice architectures are organized around business capabilities, developers can more easily understand the user perspective and create microservices that are better aligned with the business.

Future-proofed applications: Microservice architectures make it easier to replace or upgrade individual services without impacting the whole application.

Smaller and more agile development teams: Teams involve fewer people, and they are more focused on the part of the microservices they work on.


Limitations:

Can be complex: While individual microservices may be easier to understand and manage, the application as a whole may have significantly more components with more interconnections. These interdependencies increase the application’s overall complexity.

Requires careful planning: Because all the microservices in an application must work together, developers and software architects must carefully plan out how to break down the functionality and dependencies. There can be data challenges when starting an application from scratch or modifying a legacy monolithic application, and multiple iterations can be required until it works.

Proper sizing is critical and difficult: If microservices are too big, they might have all the drawbacks of monoliths. If they are too small, the complexity of the individual services moves into the dependency maps, which makes the application harder to understand and manage at scale.

Third-party microservices: Third-party services can change their APIs (or dependencies) at any time, and in ways that may break your application.

Downstream dependencies: The application must be able to survive failures of individual microservices, yet downstream problems often happen. Building fault-tolerance into an application built with microservices can be more complex than in a monolithic system.

Security risks: The growth in popularity of microservices may increase an application’s vulnerability to hackers and cybercriminals. Because microservice architectures allow the use of multiple operating systems and languages when building an application, there are potentially more targets for malicious intrusions. We are also unaware of the vulnerabilities of the third-party services being used.

When complexity increases, we should make sure it is warranted and well understood, and we should regularly examine the interconnected set of microservices so the application does not crash. Learning about these limitations helps us come up with solutions we can apply in our future work and when we work on the LibreFoodPantry system.


From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.