Quarter-3 blog post

For this week’s blog post, I decided to write about object-oriented programming (OOP). During our in-class activities, we reviewed and practiced ways to improve our object-oriented software design skills. OOP can sound overwhelming with words like inheritance, encapsulation, polymorphism, and abstraction, but it becomes easier to understand once it is broken down! The blog I chose today was written by Arslan Ahmad, titled “Beginner’s Guide to Object-Oriented Programming (OOP)”, and I chose it because of how he brings these topics together, explains each one, and shows how they can all work together in code.

Ahmad begins by tackling the first pillar, inheritance. Inheritance is a way for one class to receive attributes or methods from another. He includes a fun and memorable example he calls the “Iron Man analogy,” describing how all of Iron Man’s suits share core abilities, while certain models add their own specialized features. As a fan of the movies, I found this example useful, and it is also a great visual for really understanding the idea of inheritance. Developers can use this idea to define basic behavior once, extend it as needed, and reuse it elsewhere without rewriting the same code over and over again. It is a powerful tool for keeping code organized and limiting duplicated logic.
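To make the analogy concrete, here is a small TypeScript sketch I put together (the class and method names are my own inventions, not from Ahmad’s post): a base class defines the shared abilities, and a subclass adds a specialized feature without rewriting them.

```typescript
// Base class: the core abilities every suit shares.
class Suit {
  fly(): string {
    return "Engaging thrusters";
  }
  fireRepulsors(): string {
    return "Firing repulsors";
  }
}

// Subclass: inherits fly() and fireRepulsors(), adds its own feature.
class HulkbusterSuit extends Suit {
  heavyArmorMode(): string {
    return "Activating heavy armor";
  }
}

const mark44 = new HulkbusterSuit();
console.log(mark44.fly());            // inherited behavior, no code rewritten
console.log(mark44.heavyArmorMode()); // specialized behavior
```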

The next pillar is encapsulation, which focuses on bundling attributes and the methods that operate on them inside a single class. Ahmad uses the example of a house with private and public areas, showing how access to certain areas is limited. I had thought encapsulation was only about hiding information, but the post explains how it also plays a key role in security and in preventing accidental changes. This is definitely something I can see myself using when working on larger programs where multiple classes need to interact safely.
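As a rough sketch of the house idea (the class and its rules are my own invention for illustration), a class can keep a field private and expose only controlled, public ways to touch it:

```typescript
// The "private room": the balance cannot be reached directly from outside.
class BankAccount {
  private balance = 0;

  // The "front door": the only way in, with a guard against misuse.
  deposit(amount: number): void {
    if (amount <= 0) {
      throw new Error("Deposit must be positive");
    }
    this.balance += amount;
  }

  getBalance(): number {
    return this.balance;
  }
}

const account = new BankAccount();
account.deposit(50);
// account.balance = 1000000; // compile-time error: 'balance' is private
console.log(account.getBalance()); // 50
```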

Polymorphism was the pillar I found the most interesting. He describes it as “the ability of objects belonging to different classes to be treated as objects of a common superclass or interface.” This allows code to become more flexible and reusable. Whether through method overriding or overloading, polymorphism lets developers write cleaner, more adaptable programs.
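Here is a small sketch of method overriding (my own example, not Ahmad’s): different classes are treated through a common superclass, and each supplies its own behavior.

```typescript
// Common superclass with a default behavior.
class Shape {
  area(): number {
    return 0;
  }
}

class Circle extends Shape {
  constructor(private radius: number) { super(); }
  area(): number {
    return Math.PI * this.radius ** 2; // overrides Shape.area()
  }
}

class Rectangle extends Shape {
  constructor(private w: number, private h: number) { super(); }
  area(): number {
    return this.w * this.h;
  }
}

// One loop handles every Shape; each object picks its own area() at runtime.
const shapes: Shape[] = [new Circle(2), new Rectangle(3, 4)];
for (const s of shapes) {
  console.log(s.area());
}
```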

Finally, the last pillar is abstraction, which focuses on simplifying complex systems by deliberately hiding unnecessary details and showing only what the user needs to see and interact with. He compares this to a laptop: you click on icons and press keys without having to understand the hardware working behind the scenes. Abstraction is very useful for keeping programs organized and easy to use.
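As a sketch of the laptop analogy (the names and internals here are invented for illustration), the caller uses one simple public method while the messy details stay hidden behind it:

```typescript
// The simple surface the user sees: one method, no hardware details.
class Laptop {
  pressKey(key: string): string {
    const scanCode = this.readScanCode(key); // hidden complexity
    return this.renderCharacter(scanCode);
  }

  // Internal details the user never has to think about.
  private readScanCode(key: string): number {
    return key.charCodeAt(0);
  }
  private renderCharacter(scanCode: number): string {
    return String.fromCharCode(scanCode);
  }
}

console.log(new Laptop().pressKey("a")); // "a", hardware hidden
```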

In summary, this source helped me connect these concepts and gain further insight into ideas I had only partially understood before. His examples were fun and easy to follow, which made the material more digestible. In the future, I expect to use these principles when writing class-based programs, organizing code, and designing software that is easy to maintain!

Source: https://www.designgurus.io/blog/object-oriented-programming-oop

From the blog CS@Worcester – Mike's Byte-sized by mclark141cbd9e67b5 and used with permission of the author. All other rights reserved by the author.

The Importance of User Experience in Game Testing

While looking at internships, I saw a couple of job postings for quality assurance at video game studios, read through the qualifications and skills they required, and started to look into the job itself. Looking at a couple of resources, I noticed that this work follows a few guidelines meant to help a video game deliver an amazing experience for players. These important concepts are functional evaluations, regression assessments, and user experience analysis. On the job, companies use Agile methodology to help QA teams solve issues as they come up throughout the game’s lifespan.

This job requires employees to be skilled at fixing technical problems and to think critically in order to solve them. Let me explain the guidelines of game QA and why they matter. The first guideline is functional evaluation: a series of tests that makes sure the game and its features work as intended.

Functional evaluations are divided into: 

  • Gameplay Mechanics: Do player characters interact with objects correctly, can players use game mechanics (for example, special universal abilities) correctly, and do characters scale correctly?
  • User Interface: Can players activate buttons like pause and settings, and can they see features like health bars or ability cooldowns?
  • Missions and Objectives: If a player completes a mission, do they get a reward? Is it possible … (a minimal sketch of the reward check follows this list).
  • Multiplayer Features: Can players join the server correctly and over an encrypted connection, etc.?
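To picture what one of these functional checks might look like in code, here is a hypothetical TypeScript sketch (the game objects and the reward rule are invented for illustration, not from any real studio’s test suite):

```typescript
// Hypothetical game objects for illustration only.
class Player {
  gold = 0;
}

class Mission {
  constructor(private reward: number) {}

  complete(player: Player): void {
    player.gold += this.reward; // the behavior under test
  }
}

// A bare-bones functional check: completing a mission must grant the reward.
function testMissionGrantsReward(): void {
  const player = new Player();
  new Mission(100).complete(player);
  console.assert(player.gold === 100, "Mission reward was not granted");
}

testMissionGrantsReward();
```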

Moving on to regression assessments: these retest key features in the game after patches have been implemented. The purpose of these tests is to

  •  identify vulnerabilities,
  •  allocate resources, and
  •  enhance reporting accuracy.

I mention these purposes because QA teams need to weigh multiple factors in order to maintain customers’ enjoyment of the game. In addition, QA has to consider whether changes could make the code more complex or introduce performance drops, user friction, and extra costs.

The last point is user experience analysis, which can make or break a game’s success. When players face friction, such as a game that is not optimized for certain hardware or constant disconnects from the server, more of them will return the game or stop playing it entirely. I noticed that some game companies do not know how to filter good suggestions from bad ones when deciding what to fix in the next patch. Regardless, it takes time for a company to set a clear roadmap for how they want to shape their game.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Sprint 2

This was the second sprint of the course. During this sprint I had a major blocker in terms of access to the server from home. As a commuter, I had to rely on the school’s VPN to reach the server from my house, and for some reason the VPN was not configured properly. I overcame this issue by coming to campus a couple of extra times each week and doing the work from there, because I knew for a fact that I had access on campus.

The major goal of this sprint was to update the docker-compose file to configure the proper containers for the frontend, backend, MongoDB, and RabbitMQ.
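For context, the general shape of such a file looks something like the sketch below. This is only a rough illustration: the image names and versions here are placeholders I made up, not the actual Thea’s Pantry configuration.

```yaml
services:
  frontend:
    image: example-registry/guestinfofrontend:1.0.0  # placeholder image/version
    ports:
      - "8080:80"
  backend:
    image: example-registry/guestinfobackend:1.0.0   # placeholder image/version
    ports:
      - "3000:3000"
    depends_on:
      - mongodb
      - rabbitmq
  mongodb:
    image: mongo:6                # pin a specific version rather than 'latest'
  rabbitmq:
    image: rabbitmq:3-management  # management tag adds the web admin UI
```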

These are the links to the two tickets related to this sprint:

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/deployment/deployment-server/-/issues/6

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/deployment/deployment-server/-/issues/7

In and of themselves, the goals of this sprint were not inherently difficult. The major challenge was in learning and understanding what it meant to update a docker-compose file: where to find the proper links and versions for the updates, how to know what to update and what to leave alone, and how to work and navigate inside the server through my local terminal. I felt that this pushed me to become more comfortable and familiar with terminal commands and to be more inquisitive about how things work throughout the server. While this sprint technically had fewer actionable steps, I spent more time really thinking about how things worked, which helped me better understand the system as a whole.

The apprenticeship pattern I chose for this sprint is Breakable Toys. This pattern emphasizes the importance of failure, even over success. It pushes the reader to understand that failure is necessary for complete learning and for the ability to adapt and face challenges in the future.

I chose this apprenticeship pattern because throughout my experience learning computer science, and programming in particular, there has been a looming fear of failure. I have found that this fear is often directly linked to a fear of trying, which ultimately leads to less learning and prevents me from putting in full effort. During this particular sprint, I tried many different links, paths, and versions, which often did not run or did not start up the system. I was pushed to sit with the discomfort of failing to get the server up and running while still acknowledging that all of that failure was pushing me closer to a complete understanding of the system.

If I had read this pattern before beginning this sprint, I would have been less hesitant to use trial and error and to really dig into the full process of working with the server. There was something very intimidating about using sudo and knowing that I had the ability to make permanent changes that could break the entire server, and I think I let that intimidation hold me back from making more progress during the sprint.

From the blog CS@Worcester – The Struggle of Being a Female Student in CS by Noam Horn and used with permission of the author. All other rights reserved by the author.

Good Software Design and the Single Responsibility Principle

The single responsibility principle is simple but critical in good software design. As Robert Martin puts it in his blog The Single Responsibility Principle, “The Single Responsibility Principle (SRP) states that each software module should have one and only one reason to change.” He also does a great job of comparing this to cohesion and coupling, stating that cohesion should increase between things that change for the same reason, and coupling should decrease between things that change for different reasons. Funnily enough, while I was reading a post on Stack Overflow I ran into this line: “Good software design has high cohesion and low coupling.”

Designing software is complicated, and the resulting programs are typically quite complex. The single responsibility principle not only creates a stronger and smarter structure for your software but also one that is more future-proof. When changes must be made to your program, only the pieces of your software related to that change will be modified. The low coupling I mentioned earlier prevents the possibility of breaking something completely unrelated. I couldn’t count the number of times my entire program broke from modifying a single line when I first started coding, because my one class was doing a hundred different things.

This directly relates to what we’re working on in class right now. We are currently working with REST APIs, specifically creating endpoints in our specification.yaml file. Our next step will be to implement the JavaScript execution for these endpoints. When we begin that work, keeping the single responsibility principle in mind will be incredibly important. It can be very easy to couple functions that look related but change for completely different reasons. For example, consider coupling the validation of new guest data with inserting the new guest into the database. While they seem very related, they may change for very different reasons. Maybe validation requirements change, causing the validation process to be modified but not the insertion of a new guest. Or the database structure or storage system may change, leading to modifications to how the new guest is inserted but not how they’re validated. Keeping in mind that related things may change for different reasons will be key for my group heading into the next phase of our REST API work.
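As a rough sketch of that separation (the type and function names here are my own, not our actual project code), validation and persistence live in separate functions so each can change for its own reason:

```typescript
// Invented Guest type for illustration.
interface Guest {
  id: string;
  name: string;
}

// Reason to change #1: validation rules.
function validateGuest(guest: Guest): boolean {
  return guest.id.length > 0 && guest.name.trim().length > 0;
}

// Reason to change #2: how and where guests are stored.
// An in-memory stand-in for a real database call.
const guestStore: Guest[] = [];
function insertGuest(guest: Guest): void {
  guestStore.push(guest);
}

// The caller composes the two, but neither knows about the other.
const newGuest: Guest = { id: "0000001", name: "Jane Doe" };
if (validateGuest(newGuest)) {
  insertGuest(newGuest);
}
```

Because the two functions are decoupled, a new validation rule touches only validateGuest, and a switch to a real database touches only insertGuest.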

This principle is one that I plan on carrying with me during my career as a programmer. It will help me create more future-proof programs in a world where things must be ready to constantly evolve and adapt. Uncle Bob’s blog was incredibly useful for understanding this principle on a deeper level. I feel like a stronger programmer after just reading it. I look forward to implementing what I’ve learned as soon as we start working with the JavaScript of our REST API.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.

More on Clean Code

For this quarter’s blog, I decided to research more into the book Clean Code by Robert C. Martin and found a blog discussing the good, the bad, and the ugly regarding the book. I chose this article because we have spent the last few classes working through POGILs related to the book. The author writes about how Clean Code has had both a positive and a negative impact on software development. For new programmers, the author highlights practices that are good for new software developers, such as good naming techniques, not repeating your code, and having functions do only one thing. On the other side, the author describes how the book’s age makes some of its techniques feel obsolete: Clean Code was published back in 2008, is heavily focused on Java programming, and leans on dated practices that “[limit] the applicability for modern programming practices.” Another criticism is that applying the book’s rules all the time can result in harmful code, such as excessive abstraction and code that is harder to maintain over time. The author argues that programmers should learn when these rules should be broken and apply them on a case-by-case basis.

This article was certainly helpful in giving a further opinion on Clean Code and its subject matter. After going through the Clean Code POGILs in class, I learned many things that I had not previously been taught about programming. They helped me correct some bad practices I was guilty of, such as commenting in place of poorly written code. However, some topics, such as the levels of abstraction or how to use classes and methods properly, were initially confusing to me. It seems the author expresses similar frustrations. The author of the article says that many of the things from the book can be summed up in one phrase: “it depends.” Overall, though, I felt it necessary to dive deeper into Clean Code for my own benefit. Even though I do not plan on pursuing a career in software development, many of these rules and structures can be applied to other disciplines within computer science and information-related fields. When the time comes for me to work on a personal project or something needed for my career, I will feel better equipped to handle the task knowing what I know now. Even if some of the advice is dated, most of it can still be applied and result in better software development.

Original blog post: https://gerlacdt.github.io/blog/posts/clean_code/

From the blog CS@Worcester – zach goddard by Zach Goddard and used with permission of the author. All other rights reserved by the author.

Understanding Software Architecture Through Martin Fowler’s Lens

Software architecture is one of those concepts that students hear often but rarely get a clear definition of. This week, I chose to read Martin Fowler’s Software Architecture Guide because it goes beyond the surface-level definitions of architectural thinking that we usually hear. Since our course is so strongly focused on building maintainable and scalable systems, this resource fit perfectly with the themes we have discussed around design decisions and long-term maintainability in software projects.

Fowler opens the guide by addressing one of the most debated questions in the software community: What, really, is architecture? He explains how many definitions focus on “high-level components” or “early design decisions,” but argues these views are incomplete. Referring to an email exchange with Ralph Johnson, Fowler insists that architecture is about “the important stuff.” Architecture is not about big diagrams or an early-stage structural choice; it is about experienced developers having a common understanding of the parts of a system that matter for its long-term health. This makes architecture dynamic and evolving rather than merely static documentation.

Fowler also describes why architecture matters, even when end users never directly see it: a poor architecture leads to “cruft,” the buildup of confusing, tangled code that slows down development. Instead of enabling fast delivery, weak internal quality ultimately hurts productivity. Fowler’s argument is that paying attention to internal structure actually increases delivery speed, because developers spend less time fighting the codebase and more time building features. What struck a chord with me is how architecture is tied to practical results: maintainability, reliability, and team productivity.

I chose this article because I really enjoy the topic and wanted to learn more about software architecture in depth. Fowler’s explanation helped me understand that architectural thinking is something developers grow into by learning to identify what is truly important in a system. This directly connects with the principles we’ve discussed in class around clean code, modularity, and design patterns. Reflecting on the material, I realized that in future software projects, including class assignments and internships, I will have to think about how my design decisions today will affect my ability, and my team’s ability, to maintain or extend the system later. Good architecture supports future evolution, as Fowler put it, and this is something I want to actively apply as I head toward more complex development work.

Resource: https://martinfowler.com/architecture/

From the blog Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Polymorphism and Inheritance: Building Flexible Game Characters

This topic explores object-oriented programming (OOP) concepts like polymorphism, inheritance, and design patterns, showing how these core concepts create reusable code. In particular, the Gamma et al. book demonstrates practical uses of polymorphism and abstract classes to define flexible software structures, while the OpenGL guide shows examples of implementing modular systems, such as game engines, where different objects share common behaviors but have distinct implementations. I chose these materials because developing flexible and scalable game systems requires an understanding of polymorphism and inheritance. Video games frequently include multiple character types, enemies, weapons, or objects that behave differently yet share similar functions. These resources show how developers can build clear, modular code while handling complex interactions between game objects.

With polymorphism, game developers can treat different objects uniformly while each behaves uniquely. For instance, a role-playing game (RPG) may have several character types: Warrior, Mage, and Archer. They inherit from a common Character class that defines methods like Attack(), Move(), or TakeDamage(). Each subclass overrides Attack() to implement its own behavior: the Mage casts spells, the Warrior swings a sword, and the Archer shoots arrows. Without polymorphism, coders would need a lot of conditional statements like if (characterType == “Mage”) … else if (characterType == “Warrior”) …; this goes against the Open-Closed Principle (OCP) and makes adding a new character difficult. Using inheritance and polymorphism, adding a Rogue class would require only implementing its Attack() method, while existing code would remain the same, as the sketch below shows.
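Here is a small TypeScript sketch of that hierarchy (a rough illustration with my own naming, not code from the referenced books):

```typescript
// Common superclass defining shared behavior.
abstract class Character {
  move(): string { return "Moving"; }
  abstract attack(): string; // each subclass supplies its own version
}

class Warrior extends Character {
  attack(): string { return "Swings a sword"; }
}
class Mage extends Character {
  attack(): string { return "Casts a spell"; }
}
class Archer extends Character {
  attack(): string { return "Shoots an arrow"; }
}

// Adding a Rogue later means one new subclass; this loop never changes (OCP).
const party: Character[] = [new Warrior(), new Mage(), new Archer()];
for (const c of party) {
  console.log(c.attack()); // dispatched to the right subclass at runtime
}
```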

I find the contrast between conditional logic and polymorphism in game AI instructive. In simple projects, using conditional statements to handle various opponent actions can work, but as the number of characters, skills, and interactions increases, the code rapidly gets crowded and challenging to maintain. In contrast, polymorphism enables any type of enemy, such as a dragon, goblin, or mage, to implement its own actions while still being handled by the game engine as a generic enemy object. This approach makes AI behavior versatile, modular, and simpler to expand, allowing new enemy types or unique attacks to be added without requiring changes to the existing code.

In the future, I want to use these ideas to develop generic avatar and item systems for my personal projects so that new content can be added without having to rewrite the core logic. Observing how these concepts are applied in professional tools like the Unity engine and the OpenGL API demonstrates the usefulness of proper object-oriented design in real-world game production and closes the gap between theory and practical application.

References

  1. Gamma, E., Helm, R., Johnson, R., & Vlissides, J. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1994. Link: https://www.oreilly.com/library/view/design-patterns-elements/0201633612/
  2. Kessenich, J., Sellers, G., & Shreiner, D. OpenGL Programming Guide: The Official Guide to Learning OpenGL, Version 4.5 with SPIR-V. Addison-Wesley Professional, 2016. Link: https://www.oreilly.com/library/view/opengl-r-programming-guide/9780134495514/

From the blog CS@Worcester – Pre-Learner —> A Blog Introduction by Aksh Patel and used with permission of the author. All other rights reserved by the author.

From Ideas to Licenses: My First Deep-Dive into Software Licensing

This semester I finally sat down and untangled the confusing world of software licenses, starting from a basic question: what can you actually protect when you write code? One of the first things I learned is that you cannot copyright a mere idea for a program; copyright law only protects the concrete expression, like source … Read more

From the blog CS@Worcester – BforBuild by Johnson K and used with permission of the author. All other rights reserved by the author.

My Journey Learning REST API Calls

Getting to grips with making REST API calls felt like finally being able to have a real conversation with the internet. At first the whole thing was a bit too much but once I started to see the patterns emerging, it just became second nature. It all clicked when I realised that a REST API … Read more

From the blog CS@Worcester – BforBuild by Johnson K and used with permission of the author. All other rights reserved by the author.

Express.js and its association with REST APIs

For this third quarter’s professional development, I read the Treblle post “How to Structure an Express.js REST API with Best Practices” (https://treblle.com/blog/egergr). The article talks about how to properly organize and build RESTful APIs using Express.js while maintaining clean, scalable, and modular code. It covers key principles like separating app and server logic, implementing a layered architecture, structuring routes and controllers clearly, and designing the API in a way that is easy to maintain and expand. It also mentions the importance of using Docker for containerization and environment consistency, which is essential for deploying APIs reliably across different systems.

I picked this resource because it ties closely with what we have learned in class, particularly modularity, separation of concerns, and maintainable software design. It helps that our class exercises involved building small web services and understanding how system components interact (at least tangentially, with the guestinfobackend work), so reading about how Express.js projects are structured in real-world settings gave a lot of context. For people like me who are interested in backend or full-stack development, it is the kind of practical foundation that helps turn classroom principles into actual coding habits.

From the article, I learned that Express.js isn’t just about setting up endpoints quickly; it’s about creating a clear, layered structure where each part of the API has its own responsibility. For instance, it recommends dividing the project into three layers: a web layer (for routes and middleware), a service layer (for handling logic and validation), and a data layer (for managing database access). This structure keeps your code modular and easier to debug. Another useful reminder was to containerize the API using Docker, which helps standardize development and production environments so you can avoid those “it does/doesn’t work on my machine” problems.
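To picture that layering, here is a minimal TypeScript sketch of an Express service (the route path, function names, and in-memory “database” are placeholders of my own, not code from the Treblle post):

```typescript
import express, { Request, Response } from "express";

// Data layer: owns storage access (an in-memory stand-in for a real database).
const users: { id: number; name: string }[] = [{ id: 1, name: "Ada" }];
function findUserById(id: number) {
  return users.find((u) => u.id === id);
}

// Service layer: logic and validation, no HTTP details.
function getUser(id: number) {
  if (!Number.isInteger(id) || id <= 0) {
    throw new Error("Invalid user id");
  }
  return findUserById(id);
}

// Web layer: routes and middleware only; delegates to the service layer.
const app = express();
app.get("/users/:id", (req: Request, res: Response) => {
  try {
    const user = getUser(Number(req.params.id));
    user ? res.json(user) : res.status(404).json({ error: "Not found" });
  } catch {
    res.status(400).json({ error: "Invalid user id" });
  }
});

app.listen(3000); // the app/server split the article recommends is omitted for brevity
```

Each layer can be swapped out, for example replacing the in-memory array with a real database client, without touching the route handlers above it.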

I’d say the article reinforces many of the software architecture concepts we’ve referenced in class, such as modularity, abstraction, and loose coupling. A modular API design definitely makes it easier to scale, test, and maintain, which at the end of the day is really the heart of software construction. It also reminded me that tools like Docker play a key role in supporting the architecture by making deployment consistent and repeatable, which is just as important as how the code itself is structured.

As a whole, I’d say this article helped me better understand what good backend architecture looks like in practice. It gave me a clearer sense of how to build modular, scalable APIs using Express.js and Docker, and I can see how those principles might carry over into future coursework and professional projects I am part of.

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.