Sprint 2

During this sprint I had a major blocker: access to the server from home. As a commuter, I had to rely on the school’s VPN to reach the server from my house, and for some reason the VPN was not configured properly. I overcame this issue by going to campus a couple of extra times throughout the week and doing the work there, because I knew for a fact that I had access on campus.

The major goal of this sprint was to update the docker-compose file to configure the proper containers for the frontend, backend, MongoDB, and RabbitMQ.
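
For context, a compose file for that kind of stack takes roughly the following shape. This is only a hypothetical sketch; the images, versions, ports, and environment variables below are stand-ins, not the actual Thea’s Pantry configuration.

    # docker-compose.yml -- hypothetical sketch, not the project's real file
    version: "3.8"
    services:
      frontend:
        image: example-registry/pantry-frontend:1.0.0    # hypothetical image and tag
        ports:
          - "8080:80"
        depends_on:
          - backend
      backend:
        image: example-registry/pantry-backend:1.0.0     # hypothetical image and tag
        ports:
          - "3000:3000"
        environment:
          MONGO_URL: mongodb://mongodb:27017/pantry      # service names double as hostnames
        depends_on:
          - mongodb
          - rabbitmq
      mongodb:
        image: mongo:6
        volumes:
          - mongo-data:/data/db                          # persist data across restarts
      rabbitmq:
        image: rabbitmq:3-management
        ports:
          - "15672:15672"                                # management UI
    volumes:
      mongo-data: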

These are the links to the two tickets related to this sprint:

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/deployment/deployment-server/-/issues/6

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/deployment/deployment-server/-/issues/7

The goals of this sprint were not inherently difficult in and of themselves. The major challenge was in learning and understanding what it means to update a docker-compose file, where to find the proper links and versions for the updates, how to know what to update and what to leave alone, and how to work and navigate inside the server through my local terminal. I felt that this pushed me to become more comfortable and familiar with terminal commands, and more inquisitive about how things work throughout the server. While this sprint technically had fewer actionable steps, I spent more time really thinking about how things worked, which helped me better understand the system as a whole.

The apprenticeship pattern I chose for this sprint is Breakable Toys. This pattern emphasizes the importance of failure, even over success. It pushes the reader to understand that failure is necessary for complete learning and for the ability to adapt and face challenges in the future.

I chose this apprenticeship pattern because throughout my experience learning computer science, and programming in particular, there has been a looming fear of failure. I have found that this fear is often directly linked to a fear of trying, which ultimately leads to less learning and prevents me from putting in full effort. During this particular sprint, I tried multiple different links, paths, and versions, many of which did not run or did not start up the system. I was pushed to sit with the discomfort of failing to get the server up and running while still acknowledging that all of that failure was pushing me closer to a complete understanding of the system.

If I had read this pattern prior to beginning this sprint, I would have been less hesitant to use trial and error and really dig into the full process of working with the server. There was something very intimidating about using sudo and knowing that I had the ability to make permanent changes with the potential to break the entire server, and I think I let that intimidation hold me back from making more progress during the sprint.

From the blog CS@Worcester – The Struggle of Being a Female Student in CS by Noam Horn and used with permission of the author. All other rights reserved by the author.

Good Software Design and the Single Responsibility Principle

The single responsibility principle is simple but critical to good software design. As Robert Martin puts it in his blog post The Single Responsibility Principle, “The Single Responsibility Principle (SRP) states that each software module should have one and only one reason to change.” He also does a great job of relating this to cohesion and coupling, stating that cohesion should increase between things that change for the same reason, and coupling should decrease between things that change for different reasons. Funnily enough, while I was reading a post on Stack Overflow I ran into this line: “Good software design has high cohesion and low coupling.”

Designing software is complicated, and programs are typically quite complex. The single responsibility principle not only creates a stronger and smarter structure for your software but one that is more future-proof as well. When changes must be made to your program, only the pieces of your software related to that change will be modified. The low coupling I mentioned earlier prevents the possibility of breaking something completely unrelated. I couldn’t count the number of times my entire program broke after I modified one line when I first started coding, because my one class was doing a hundred different things.

This directly relates to what we’re working on in class right now. We are currently working with REST APIs, specifically creating endpoints in our specification.yaml file. Our next step will be to implement the JavaScript execution for these endpoints. When we begin that work, keeping the single responsibility principle in mind will be incredibly important. It can be very easy to couple functions that look related but change for completely different reasons, for example, validating new guest data and inserting the new guest into the database. While they seem very related, they may change for very different reasons. Maybe validation requirements change, causing the validation process to be modified but not the insertion of a new guest. Or the database structure or storage system changes, leading to modifications in how the new guest is inserted but not how they’re validated. Keeping in mind that related things may change for different reasons will be key for my group heading into the next phase of our REST API work.
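
As a sketch of what that separation could look like, here is some hypothetical TypeScript; the Guest fields and function names are my own illustration, not our actual project code:

    // Hypothetical guest shape; the real project's fields may differ.
    interface Guest {
      id: string;
      name: string;
    }

    // Validation module: changes only when validation *rules* change.
    function validateGuest(guest: Guest): string[] {
      const errors: string[] = [];
      if (!/^\d{7}$/.test(guest.id)) errors.push("id must be 7 digits");
      if (guest.name.trim().length === 0) errors.push("name is required");
      return errors;
    }

    // Persistence module: changes only when *storage* changes.
    // The array stands in for whatever database the project actually uses.
    async function insertGuest(store: Guest[], guest: Guest): Promise<void> {
      store.push(guest); // stand-in for a real database insert
    }

    // The endpoint handler composes the two; each can change independently.
    async function createGuest(store: Guest[], guest: Guest) {
      const errors = validateGuest(guest);
      if (errors.length > 0) return { status: 400, body: errors };
      await insertGuest(store, guest);
      return { status: 201, body: guest };
    }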

This principle is one that I plan on carrying with me throughout my career as a programmer. It will help me create more future-proof programs in a world where things must be ready to constantly evolve and adapt. Uncle Bob’s blog was incredibly useful in helping me understand this principle on a deeper level. I feel like a stronger programmer after just reading it. I look forward to implementing what I’ve learned as soon as we start working with the JavaScript of our REST API.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.

More on Clean Code

For this quarter’s blog, I decided to research more into the book Clean Code by Robert C. Martin and found a blog post discussing the good, the bad, and the ugly regarding the book. I chose this article because we have spent the last few classes working through POGILs related to the book. The author writes about how Clean Code has had both a positive and a negative impact on software development. For new programmers, the author highlights useful practices, such as good naming techniques, not repeating your code, and having functions do only one thing. On the other side, the author describes how the book’s age means some of its techniques can be considered obsolete. Clean Code was written back in 2008 and is heavily focused on Java programming and outdated extensions that “[limit] the applicability for modern programming practices.” Another criticism by the author is that applying the rules of the book all the time can result in harmful code, such as excessive abstraction and code that is harder to maintain over time. The author argues that programmers should learn when these rules should be broken and apply them on a case-by-case basis.
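
As a quick, hypothetical illustration of those first few practices (this example is mine, not from the book or the article):

    // Before: a vague name propped up by a comment.
    // calc(d)  // calculates the average for the report

    // After: the name says what the function does, and it does one thing.
    function averageDailyVisitors(dailyCounts: number[]): number {
      const total = dailyCounts.reduce((sum, count) => sum + count, 0);
      return total / dailyCounts.length;
    }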

This article was certainly helpful in giving a further opinion on Clean Code and its subject matter. Going through the Clean Code POGILs in class, I learned many things that I was not previously taught about programming. They helped correct some bad practices that I was guilty of, such as commenting in place of poorly written code. However, some topics, such as the levels of abstraction or how to use classes and methods properly, were initially confusing to me. It seems the author of the article expresses similar frustrations, suggesting that many of the book’s rules can be summed up in one phrase: “it depends.” Overall, though, I felt it necessary to dive deeper into Clean Code for my own benefit. Even though I do not plan on pursuing a career in software development, many of these rules and structures can be applied to other disciplines within computer science and information-related fields. When the time comes for me to work on a personal project or something needed for my career, I feel better equipped to handle such a task knowing what I know now. Even if some of the advice is dated, most of it can still be applied and result in better software development.

Original blog post: https://gerlacdt.github.io/blog/posts/clean_code/

From the blog CS@Worcester – zach goddard by Zach Goddard and used with permission of the author. All other rights reserved by the author.

Understanding Software Architecture Through Martin Fowler’s Lens

Software architecture is one of those concepts that students hear about often but rarely get a clear definition of. This week, I chose to read Martin Fowler’s Software Architecture Guide because it goes beyond the surface-level definitions of architectural thinking that we usually hear. Since our course is so strongly focused on building maintainable and scalable systems, this resource fit perfectly with the themes we have discussed around design decisions and long-term maintainability in software projects.

Fowler opens the guide by addressing one of the most debated questions in the software community: what, really, is architecture? He explains how many definitions focus on “high-level components” or “early design decisions,” but argues these views are incomplete. Referring to an email exchange with Ralph Johnson, Fowler insists that architecture is about “the important stuff.” Architecture is not about big diagrams or early-stage structural choices; it is about experienced developers sharing a common understanding of the parts of a system that matter for its long-term health. This makes architecture dynamic and evolving rather than merely static documentation.

Fowler also describes why architecture matters even when end users never directly see it: a poor architecture leads to “cruft,” the buildup of confusing, tangled code that slows down development. Instead of enabling fast delivery, weak internal quality ultimately hurts productivity. Fowler’s argument is that paying attention to internal structure actually increases delivery speed, because developers spend less time fighting the codebase and more time building features. What struck a chord with me is how architecture is coupled with practical results: maintainability, reliability, and team productivity.

I chose this article because I really enjoy the topic and wanted to learn more about software architecture in depth. Fowler’s explanation helped me understand that architectural thinking is something developers grow into by learning to identify what is truly important in a system. This directly connects with the principles we’ve discussed in class around clean code, modularity, and design patterns. Reflecting on the material, I realized that in future software projects, including class assignments and internships, I will have to think about how my design decisions today will affect my ability, and my team’s ability, to maintain or extend the system later. Good architecture supports future evolution, as Fowler puts it, and this is something I want to actively apply as I head toward more complex development work.

Resource: https://martinfowler.com/architecture/

From the blog Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Polymorphism and Inheritance: Building Flexible Game Characters

This topic explores object-oriented programming (OOP) concepts like polymorphism, inheritance, and design patterns, showing how these core concepts create reusable code. In particular, the Gamma et al. book demonstrates the practical use of polymorphism and abstract classes to define flexible software structures, while the OpenGL guide shows examples of implementing modular systems, such as game engines, where different objects share common behaviors but have distinct implementations. I chose these materials because developing flexible and scalable gaming systems requires an understanding of polymorphism and inheritance. Video games frequently include multiple character types, enemies, weapons, or objects that behave differently yet share similar functions. These resources make it easier for developers to build clear, modular code while handling complex interactions between game objects.

With polymorphism, game developers can treat many different objects uniformly while each behaves uniquely. For instance, a role-playing game (RPG) may have several character types: Warrior, Mage, and Archer. They inherit from a common Character class that declares methods like Attack(), Move(), or TakeDamage(). Each subclass overrides Attack() to implement its own behavior: the Mage casts spells, the Warrior swings a sword, and the Archer shoots arrows. Without polymorphism, coders would need a pile of conditional statements like if (characterType == “Mage”) … else if (characterType == “Warrior”) …; this violates the Open-Closed Principle (OCP) and makes adding a new character difficult. Using inheritance and polymorphism, adding a Rogue class would require only implementing its Attack() method, while the existing code remains the same.
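
A minimal TypeScript sketch of that idea, with hypothetical class and character names, might look like this:

    // Common base class: declares the shared interface.
    abstract class Character {
      constructor(public name: string) {}
      abstract attack(): string; // each subclass supplies its own behavior
      move(): string {
        return `${this.name} moves.`;
      }
    }

    class Warrior extends Character {
      attack(): string { return `${this.name} swings a sword!`; }
    }

    class Mage extends Character {
      attack(): string { return `${this.name} casts a spell!`; }
    }

    class Archer extends Character {
      attack(): string { return `${this.name} shoots an arrow!`; }
    }

    // The game loop never inspects character types, so adding a Rogue
    // later means writing one new subclass and changing nothing here.
    const party: Character[] = [new Warrior("Brom"), new Mage("Elara"), new Archer("Finn")];
    for (const c of party) {
      console.log(c.attack());
    }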

I find the contrast between conditional logic and polymorphism in game AI instructive. In simple projects, using conditional statements to handle various opponent actions can work, but as the number of characters, skills, and interactions increases, the code rapidly gets crowded and challenging to maintain. In contrast, polymorphism enables any type of enemy, such as a dragon, goblin, or mage, to implement its own behavior while still being handled by the game engine as a generic enemy object. With this approach, AI behavior becomes versatile, modular, and simpler to expand, allowing new enemy types or unique attacks to be added without changing the current code.

In the future, I want to use these ideas to develop generic avatar and item systems for my personal projects so that new content can be added without rewriting the logic. Seeing how these concepts are applied in professional tools like the Unity engine and OpenGL demonstrates the usefulness of proper object-oriented design in real-world game production and closes the gap between theory and practical application.

References

  1. Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson & John Vlissides — Addison-Wesley, 1994. Link: https://www.oreilly.com/library/view/design-patterns-elements/0201633612/
  2. OpenGL® Programming Guide: The Official Guide to Learning OpenGL® Version 4.5 with SPIR‑V by John Kessenich, Graham Sellers & Dave Shreiner — Addison‑Wesley Professional, 2016. Link: https://www.oreilly.com/library/view/opengl-r-programming-guide/9780134495514/

From the blog CS@Worcester – Pre-Learner —> A Blog Introduction by Aksh Patel and used with permission of the author. All other rights reserved by the author.

From Ideas to Licenses: My First Deep-Dive into Software Licensing

This semester I finally sat down and untangled the confusing world of software licenses, starting from a basic question: what can you actually protect when you write code? One of the first things I learned is that you cannot copyright a mere idea for a program; copyright law only protects the concrete expression, like source … Read more

From the blog CS@Worcester – BforBuild by Johnson K and used with permission of the author. All other rights reserved by the author.

My Journey Learning REST API Calls

Getting to grips with making REST API calls felt like finally being able to have a real conversation with the internet. At first the whole thing was a bit too much, but once I started to see the patterns emerging, it just became second nature. It all clicked when I realised that a REST API … Read more

From the blog CS@Worcester – BforBuild by Johnson K and used with permission of the author. All other rights reserved by the author.

Express.js and its association with REST APIs

For this third quarter’s professional development, I read the Treblle post “How to Structure an Express.js REST API with Best Practices” (https://treblle.com/blog/egergr). The article talks about how to properly organize and build RESTful APIs using Express.js while maintaining clean, scalable, and modular code. It covers key principles like separating app and server logic, implementing a layered architecture, structuring routes and controllers clearly, and designing the API in a way that’s easy to maintain and expand. It also mentions the importance of using Docker for containerization and environment consistency, which is essential for deploying APIs reliably across different systems.

I picked this specific resource because I think it ties closely with what we’ve learned in class, taking into account things like modularity, separation of concerns, and maintainable software design. It helps that we have been doing class exercises on building small web services and understanding how system components interact (at least tangentially, with the guestinfobackend work), so reading about how Express.js projects are structured in real-world settings gave a lot of context for people like myself who are interested in backend or full-stack development. It’s the kind of practical foundation that helps turn classroom principles into actual coding habits.

From the article, I learned that Express.js isn’t just about setting up endpoints quickly; it’s about creating a clear, layered structure where each part of the API has its own responsibility. For instance, it recommends dividing the project into three layers: a web layer (for routes and middleware), a service layer (for handling logic and validation), and a data layer (for managing database access). This structure keeps your code modular and easier to debug. Another useful reminder was to containerize the API using Docker, which helps standardize development and production environments so you can avoid those “it does/doesn’t work on my machine” problems.
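
To make the three layers concrete, here is a stripped-down sketch in TypeScript; the route, names, and logic are hypothetical and much simpler than what the article presents:

    import express from "express";

    // Data layer: only this part changes if storage changes.
    const guests: { id: number; name: string }[] = [];
    async function saveGuest(name: string) {
      const guest = { id: guests.length + 1, name };
      guests.push(guest); // stand-in for a real database call
      return guest;
    }

    // Service layer: business logic and validation.
    async function registerGuest(name: unknown) {
      if (typeof name !== "string" || name.trim() === "") {
        throw new Error("name is required");
      }
      return saveGuest(name.trim());
    }

    // Web layer: routes and middleware only.
    const app = express();
    app.use(express.json());
    app.post("/guests", async (req, res) => {
      try {
        res.status(201).json(await registerGuest(req.body.name));
      } catch (err) {
        res.status(400).json({ error: (err as Error).message });
      }
    });

    app.listen(3000, () => console.log("API listening on port 3000"));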

I’d say the article reinforces many of the software architecture concepts we’ve referenced in class, such as modularity, abstraction, and loose coupling. A modular API design definitely makes it easier to scale, test, and maintain, which at the end of the day is really the heart of software construction. It also reminded me that tools like Docker play a key role in supporting architecture by making deployment consistent and repeatable, which is just as important as how the code itself is structured.

As a whole, I’d say this article helped me better understand what good backend architecture looks like in practice. It gave me a clearer sense of how to build modular, scalable APIs using Express.js and Docker, and I can see how those principles might carry over into future coursework and professional projects I might be part of.

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

Understanding Code Review

Code reviews are a vital step in the process of putting out not only software but any form of work. This blog post by Kimmo Brunfeldt outlines the need for code reviews on projects as well as some of the best practices for them. The role of a code review is to share knowledge and ownership of the project among the team members, discuss development, and do overall quality control. A standard flow runs from a draft, to submission for review and suggested changes (a cycle often occurs between these two steps), and then to final approval and merging. Maintaining this flow with positive feedback and clear, constant communication is key to consistently good reviews.

I chose this topic because of our class’s recent focus on clean code. Clean code is a key element of the review process as well as something to keep in mind while writing code. How easily someone else can read and understand the code is vital for projects and a central concern of code review. Group review sessions and collaboration let others look over the work and point out potential issues in naming or complexity.

The blog also gave some really great tips and ideas for me to use in future projects. Some of the best takeaways were not only simple and obvious in hindsight, but ones that I myself often overlook in discussions and reviews of my work. The biggest one was to keep code submissions small and concise so that reviews are quick and feedback or implementation can happen rapidly. Along with this, feedback should be given in a public setting or documented so that others can hear it or gain from it. These two points were the biggest takeaways I found in the blog to apply to my own work, as they are easy steps to implement in group work. By doing them, communication increases across the entire project, and so do learning and overall scrutiny of the code, in a way that is easy and not intrusive. Previous approaches I have seen or attempted involved large sit-downs or meetings that often drag on, but these seem to be much better alternatives that I look forward to trying.

Works Cited:

Brunfeldt, K. (2025, May 12). A complete guide to code reviews. Swarmia. https://www.swarmia.com/blog/a-complete-guide-to-code-reviews/

From the blog CS@Worcester – Dan's Blog by Daniel Fung-A-Fat and used with permission of the author. All other rights reserved by the author.

Let’s Keep Expanding!!

In the workplace, not only does the number of projects continue to increase, but the ways in which we approach them evolve rapidly as well. Tech professionals are leading the way in developing cutting-edge procedures and processes for daily work, tracking, and planning. Tools evolve rapidly too, so project management has become a lot more interesting, to say the least.

Adapting to new technologies and routines is the best way to stay efficient. With the changes between roles and systems of project management, there need to be solutions (or at least one). The Forbes Technology Council has detailed emerging trends and strategies to inform various organizations. By counting off the actions on its list, we get to see how different companies and organizations could evolve if they feel encouraged enough to take them into consideration.

Starting off, project managers need to establish a VMO (value management office). The idea is to shift the focus from project delivery to value maximization: for example, an organization aligns each initiative with its goals and efficient resource use to allow for continuous improvement. This is absolutely essential, especially in today’s climate of business and organizational behavior.

Never be afraid to rely on professionals in times of need. The article highlights the importance of relying on “senior talent.” Since they have broader experience, they are capable of making quick, well-informed decisions. This proves especially useful for more complex projects, helping project managers understand business goals by placing emphasis on exceptional product ownership skills.

I told you AI is not always a bad thing. A lot of individuals, including myself, believe in the power of AI used as a tool. It’s no secret that several organizations use it, but it is very important to note how it’s being used. This is called resource optimization. By allocating resources accordingly, complexities can be better managed. Artificial intelligence can assist with communication and task automation and provide data insights, leading to greater outcomes and output.

For me, reading just the title alone, I was expecting more teamwork-driven strategies and trends, similar to the Scrum and sprint conversations during lectures. Of course, I judged a book by its cover, and I knew immediately that I had to read and dissect this article. It is important to acknowledge that the article is from a year ago, meaning that even more trends have been created since then. These strategies, however, are still very applicable to today’s climate of process management.

As a tech person, or any business person, it is crucial to stay optimistic and see the value in making changes. I believe that my generation and I have the potential and resources to evolve areas of our career fields, with ideas that we want to see unfold. It will be very interesting to see that growth.

Source: 20 Emerging Strategies And Trends In Project Management

From the blog CS@Worcester – theJCBlog by Jancarlos Ferreira and used with permission of the author. All other rights reserved by the author.