UML Class Diagrams

Link: https://www.jointjs.com/blog/uml-class-diagrams

In this blog post, I decided to look at Martin Kaněra’s blog post titled “UML Class Diagrams: All you need to know.” It gives a comprehensive overview of UML class diagrams, covering what they are, why they matter in object-oriented development, and how they compare to other UML diagrams. The post explains that UML class diagrams model the static structure of systems by showing classes, attributes, and methods, as well as the relationships between them, such as association, inheritance, and aggregation. Kaněra says that each class rectangle is divided into three compartments: name, attributes, and methods. There are visibility symbols such as “+” for public, “#” for protected, and “-” for private. I liked how Kaněra discussed abstract classes and interfaces, as we see those concepts show up often and they can sometimes be a little confusing unless you see them drawn out. The blog also mentions that class diagrams should be used for structure, sequence diagrams for object interactions over time, and activity diagrams for flows of control.
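The three-compartment class box maps fairly directly onto code. Below is a minimal sketch of that mapping in JavaScript; the `Account` class and its members are hypothetical examples of mine, not from the blog, and note that JavaScript’s `#` marks a *private* field (UML “-”), while UML’s “#” (protected) has no direct JavaScript equivalent:

```javascript
// Hypothetical Account class illustrating the three UML compartments:
//   name:       Account
//   attributes: + owner : string,  - balance : number
//   methods:    + deposit(amount : number) : number
class Account {
  #balance = 0;          // "-" private attribute (JS private field)
  owner;                 // "+" public attribute

  constructor(owner) {
    this.owner = owner;
  }

  deposit(amount) {      // "+" public method
    this.#balance += amount;
    return this.#balance;
  }
}

const account = new Account("Ada");
console.log(account.deposit(50)); // 50
```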

I chose this blog post because we went over the UML class diagrams earlier this semester, so I thought it would be nice to get a quick refresher and also do a deeper dive into the topic. We’ve done assignments involving UML diagrams, but there were always a few small things I would get confused about so it was nice to get clarification. I also think that it is very important to know this stuff, as this is a part of how real teams design and discuss systems. This blog post does a very good job of showing how class diagrams play a role in real life software projects as well.

Reading this blog post made me realize how important UML class diagrams are. Kaněra does a good job placing emphasis on identifying design problems early, such as classes that have too many dependencies. This just further shows how useful these diagrams are for projects. In the future, I can apply this by sketching a quick class diagram before I start coding a feature, and then I could revisit it while I am working on the project. I can also do this for group projects, as using a shared diagram could help keep everyone on the same page for terminology and boundaries between components.

Overall, this blog post helped refresh my memory on UML class diagrams, and also gave me some further clarification on certain principles. After reading this blog, I feel much more confident in implementing UML class diagrams into future projects.

From the blog CS@Worcester – Coding Canvas by Sean Wang and used with permission of the author. All other rights reserved by the author.

API

The software landscape is dynamic and constantly changing. Application Programming Interfaces (APIs) have evolved from what was once an optional tool into a vital pillar of software architecture. APIs are the joining forces between different systems, allowing mobile devices, third-party services, backend microservices, and frontend interfaces to interact in an efficient, structured manner.

Simply put, an API defines the rules and protocols by which software components interact. In backend development, APIs are used to expose system functionality. Whether it’s authenticating users or retrieving data, an API can expose flaws or confirm functionality. This rewards systems that have flexibility in their development, because the internal implementation can be abstracted away from the external interface.

There are several types of APIs that have specific implementations for system architecture.

  • REST APIs: REST (Representational State Transfer) is the most prevalent style for web-based services. These use the standard HTTP methods (GET, POST, PUT, and DELETE) and a stateless client-server architecture: the server stores none of the client’s session information, and instead treats each request as a self-contained, independent transaction.
  • GraphQL APIs: These enable clients to request exactly the data they need from a single endpoint. This can cut back on network overhead and simplify client code, which is useful for client-driven backends.
  • WebSocket APIs: These are useful for real-time, bidirectional communication, including chat, gaming, live updates, etc. They maintain a persistent connection between the client and server, and using WebSockets entails an event-driven design.
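To make statelessness concrete, here is a minimal sketch of a REST-style handler for a hypothetical `/guests` resource (the resource name and request shape are my own illustration): every request carries everything the server needs, and no per-client session is kept between calls.

```javascript
// Stateless REST-style handler sketch for a hypothetical /guests resource.
// The array below is the data store itself, not per-client session state.
const guests = [];

function handle(request) {
  const { method, path, body } = request;
  if (method === "GET" && path === "/guests") {
    return { status: 200, body: guests };   // each call is self-contained
  }
  if (method === "POST" && path === "/guests") {
    guests.push(body);
    return { status: 201, body };
  }
  return { status: 404 };
}

console.log(handle({ method: "POST", path: "/guests", body: { name: "Ada" } }).status); // 201
console.log(handle({ method: "GET", path: "/guests" }).body.length);                    // 1
```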

APIs are essential for maintainable back-end architecture, and there are best practices to follow to get the most out of them. Clear naming conventions should be used so consumers understand the API; intuitive, consistent resource names make the overall architecture clearer. The next best practice is versioning: as systems change, versioning ensures that people running older clients can still run them, and this backwards compatibility is essential. Keeping good documentation is also very necessary, as it helps developers understand how to use the API and how it interacts with the larger system. Security is needed to safeguard data and API endpoints. Finally, error handling with meaningful messages keeps everything clear while testing during development and dealing with bugs.
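As a small illustration of the last practice, here is a sketch of an error response that carries a meaningful message rather than a bare status code; the helper name and response shape are hypothetical, not from any particular framework:

```javascript
// Hypothetical helper: a 404 response whose body explains *what* was missing,
// instead of leaving the client to guess from the status code alone.
function notFound(resource, id) {
  return {
    status: 404,
    body: {
      error: "NotFound",
      message: `No ${resource} with id ${id} exists`,
    },
  };
}

console.log(notFound("guest", 7).body.message); // "No guest with id 7 exists"
```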

Choosing the correct backend framework can influence how you create your API. Express.js on Node.js is lightweight and suited for RESTful services. Django REST Framework is built on Python and supports rapid API development with built-in features. Spring Boot is Java-based and is well suited for microservices architecture. This isn’t just picking a syntax; it shapes how you test, modularize, secure, and scale the architecture.

My personal experience has now shown me how useful API design is. When building a simple REST API for guest data, I defined clear endpoints, selected correct HTTP verbs, and used tools like Swagger and Spectral to validate my work. This showed me how important consistent status codes are. Even small mistakes, like forgetting a field or misnaming a route, broke client calls. Working through this has shown me how good design can save time by reducing confusion, bugs, and rework.

APIs are not just endpoints; they are crucial pieces of backend software architecture. They encapsulate complexity and allow for scalable, maintainable systems. The API layer is not just a bridge, but a foundational piece of architecture.

From the blog CS@Worcester – Aaron Nanos Software Blog by Aaron Nano and used with permission of the author. All other rights reserved by the author.

Blog 3

Collaboration Tools

I am Dipesh Bhatta, and I am writing this blog entry for CS-348 Software Process Management for Blog Quarter 3. For this blog, I chose to write about collaboration tools and how they support software process management. My chosen resource is an article titled “What is collaboration?” published by IBM (https://www.ibm.com/topics/collaboration). This article explains what collaboration means, how teams use digital platforms to work together, and why collaboration tools have become essential for modern organizations.

The article defines collaboration as the process of people working together toward shared goals through communication, coordination, and the use of shared tools. IBM describes collaboration tools as digital technologies that support messaging, file sharing, project management, real-time communication, and content creation. These tools help teams stay connected, maintain organization, and share information efficiently. The article emphasizes that digital collaboration is especially important for hybrid and remote teams who rely on virtual workspaces to stay productive and aligned.

I used this resource because collaboration is a central part of software process management, which we focus on in CS-348. Software development requires communication among developers, testers, designers, and project managers, and collaboration tools help streamline this teamwork. By providing shared workspaces and organized communication channels, these tools reduce confusion and make it easier for teams to track progress, share updates, and resolve issues. Understanding how these tools function helps me see the connection between technical teamwork and the structured processes we learn in this course.

This resource helped me realize that collaboration tools are more than just messaging apps—they create clarity and accountability. IBM’s explanation of digital workspaces reminded me of how our CS-348 project groups rely on tools such as shared documents and group chats to stay organized. When team members can access updated files, communicate instantly, and understand their responsibilities, the entire workflow becomes smoother and more efficient.

The article also made me reflect on my own collaboration habits. Keeping documents updated, communicating clearly, checking in with teammates, and using tools responsibly all contribute to better teamwork. I learned that collaboration tools only work effectively when team members engage with them consistently and respectfully. These habits will help me in future group projects, internships, and professional settings where digital teamwork is a daily requirement.

In short, collaboration tools play a major role in software process management. They strengthen communication, improve coordination, and support teamwork—key themes in CS-348. By applying these collaboration practices during Blog Quarter 3, I am building valuable technical and interpersonal skills that will support my future career in the software industry.

From the blog CS@Worcester – dipeshbhattaprofile by Dipesh Bhatta and used with permission of the author. All other rights reserved by the author.

The Clean Code Debate: General Practices vs. Uncle Bob

For this quarter’s blog post, I decided to dig deeper into the principles of clean code. The resources I discovered when exploring this topic surprised me. I have understood the general concept of writing code that is easy to read, maintain, and understand since the start of my time here at Worcester State University. However, now I have been introduced to the more specific and controversial perspective explained by Robert C. Martin (a.k.a. Uncle Bob). My goal with the resources I chose was to compare Uncle Bob’s approach against how clean code is viewed in more general practice.

This resource, Clean Code: The Good, the Bad and the Ugly, explores Uncle Bob’s perspective on clean code, with the author identifying components they see as good, bad, and ugly. The resource “Clean” Code, Horrible Performance tests Uncle Bob’s clean code guidelines and explains how they impact performance. Lastly, two other resources give examples of general clean code practices.

From what I understand, the core idea of clean code is that people read code far more often than they write it, so it is important to prioritize clarity. On the other hand, Uncle Bob’s perspective advocates very small functions and almost no comments. Some of Uncle Bob’s advice is similar to general clean code practices, but this specific advice leads to some conflicts.

The first two resources showed that sticking too closely to Uncle Bob’s principles can actually lead to decreased performance. This happens because highly fragmented code (following his principles on small, single-purpose functions) can be less efficient for the machine to run. The main takeaway from these conflicting views is that Uncle Bob’s ideas are helpful guidelines, but they are not universal rules. It is important to understand the trade-off between absolute readability and optimal performance based on the focus and needs of the project you are working on.

The advice against using comments genuinely surprised me. Comments are a great way for beginner coders to track the purpose of their code and to build their understanding of certain components (i.e., loops, methods, functions, classes, etc.). At least, that is what I often used comments for when practicing coding alone. Now I understand that, when working on a team, those initial comments can quickly become confusing and unhelpful. If a comment is not deleted or updated when the code changes, it becomes misleading. Your code should describe itself through good variable, function, and method names. If you need a comment to explain what each part of the code does, that may be a sign of poor design.
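The point about self-describing names can be shown with a tiny sketch of my own (the function name and 30-day threshold are hypothetical examples, not from the resources): the name carries the meaning a comment would otherwise have to carry, so there is nothing to go stale when the code changes.

```javascript
// Before: the comment carries the meaning, and can silently rot.
//   function f(d) {
//     // check if guest visited in the last 30 days
//     return (Date.now() - d) < 2592000000;
//   }

// After: the names carry the meaning instead.
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

function visitedWithinThirtyDays(lastVisitMs) {
  return (Date.now() - lastVisitMs) < THIRTY_DAYS_MS;
}

console.log(visitedWithinThirtyDays(Date.now() - 1000)); // true
```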

I had intended to use clean code as I continue practicing coding. However, now I have more tools under my belt to make sure my code is not just digestible for me, but also digestible to someone who may need to read or update my code without me there to explain it.

Main Resources:
“Clean” Code, Horrible Performance. Many programming “best practices” taught today are performance disasters waiting to happen – https://www.computerenhance.com/p/clean-code-horrible-performance

Clean Code: The Good, the Bad and the Ugly – https://gerlacdt.github.io/blog/posts/clean_code/

Messy Code V/S Clean Code in MVC context – https://medium.com/highape-tech/messy-code-v-s-clean-code-in-mvc-context-9ad99079a4f8

What Is Clean Code? A Guide to Principles and Best Practices – https://blog.codacy.com/what-is-clean-code

Additional Resources:
Polymorphism in Java – https://www.geeksforgeeks.org/java/polymorphism-in-java/

From the blog CS@Worcester – Vision Create Innovate by Elizabeth Baker and used with permission of the author. All other rights reserved by the author.

Understanding REST Endpoint Naming and Why It Matters

For my third blog, I read “Best Practices for Naming REST API Endpoints” from the DreamFactory blog. The article explains why clear and consistent endpoint naming makes APIs easier to understand, maintain, and scale. It focuses on something every developer deals with when building REST systems: how to structure resources so the API feels predictable and easy to navigate. Even though naming seems like a small detail, the article shows how much it affects the overall design of a system.

It explains that good endpoint naming starts with using nouns instead of verbs and keeping the focus on resources, not actions. Instead of naming an endpoint something like /createGuest, the blog says you should use /guests and let the HTTP method determine what action is being taken. So POST creates a guest, GET lists them, PUT updates one, and DELETE removes one. Reading that made me think back to what we’ve been doing in class with our Model 5 work, where we looked inside the src/endpoints directory and saw how each file maps to a resource. All of our endpoints follow that same pattern, which helped me see why the structure feels clean.
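That noun-plus-method pattern can be sketched as a tiny route table; the paths and action names below are hypothetical illustrations of mine, loosely modeled on a guest resource, not the actual class code.

```javascript
// Resource-oriented routing sketch: the noun stays the same,
// and the HTTP method decides the action.
const actions = {
  "GET /guests":        "listGuests",
  "POST /guests":       "createGuest",
  "GET /guests/:id":    "retrieveGuest",
  "PUT /guests/:id":    "replaceGuest",
  "DELETE /guests/:id": "deleteGuest",
};

function resolve(method, path) {
  // Normalize a concrete id like /guests/42 to the pattern /guests/:id.
  const pattern = path.replace(/^\/guests\/[^/]+$/, "/guests/:id");
  return actions[`${method} ${pattern}`] ?? null;
}

console.log(resolve("GET", "/guests"));    // listGuests
console.log(resolve("PUT", "/guests/42")); // replaceGuest
```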

The article also talks about keeping paths simple and consistent. It mentions using plural nouns, avoiding unnecessary words, and sticking to predictable patterns like /guests/:id. When I went back to look at our endpoint files listGuests.js, retrieveGuest.js, replaceGuest.js, and so on, I noticed how everything lines up with what the blog recommends. Each file handles one resource and uses the method, path, and handler structure to keep things organized. That connection made the blog feel way more relevant, because it matched exactly what we’re practicing.

I picked this article because it ties directly into the work we’ve been doing in class with REST API implementation. We’ve been learning how to structure endpoints, read OpenAPI specs, and understand how operationIDs match the code. This blog basically explains the reasoning behind those design choices. It also fits with the design principles we’ve been talking about, like keeping things modular and easy to maintain as the project grows.

After reading it, I realized that endpoint naming isn’t just a style preference. It affects how fast developers can read the code, how easy it is to extend the system, and how clearly the API communicates its purpose. When the names and paths make sense, everything else falls into place. My main takeaway is that good API design starts with simple, consistent patterns, and naming is a big part of that foundation.

Link: https://blog.dreamfactory.com/best-practices-for-naming-rest-api-endpoints

From the blog CS@Worcester – Harley Philippe's Tech Journal by Harley Philippe and used with permission of the author. All other rights reserved by the author.

Frontend development problems and rules

I was just curious about frontend development. After reading a couple of articles, I learned that frontend development is how the customer interacts with a website or program. The key aspects of frontend development are user experience, visual feedback, optimization, responsiveness across devices, and integrating backend APIs into the frontend. First, let me explain user experience, which means the website is accessible, usable, and visually well designed. Next is visual feedback: the frontend reacts to user input, and animations appear on time. Then there is optimization, which means reducing the loading time from one page to another, or of a response to the user.

Another aspect we need to consider is whether the website or program works across multiple devices, like a phone or desktop. Finally, integrating the backend APIs means data can be sent to the user, or from the user to the backend.

These five goals are meant to keep users from feeling any friction between the frontend and the backend. Users want a program or website they can use how they want, without it taking too much time. For example, companies like YouTube want users to be on the platform as long as possible to sell more advertising. Many other platforms are trying to incorporate more features so users can stay on the platform for everything.

To keep users on the platform, no one should have to wait a long time to move to the next page or get the response they want. Another issue frontend developers can face is a website that is inconsistent in its responses or animations. Even though these problems are maintenance-related, it is important to get the website or program functional again as quickly as possible, so users do not notice if it went offline. Another issue users do not notice initially is whether the program or website works across multiple operating systems and browsers. Each browser and operating system will react to a program or website differently, depending on multiple factors.

In addition, frontend developers have to consider how the website looks in different browsers. If I, as a user, saw visual differences in the same website between my MacBook and my desktop, it would make me not want to use the platform at all. If a website can look exactly the same across platforms, there is less friction for users, and they know where everything is.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Quarter-3 blog post

For this week’s blog post, I decided to write about Object-Oriented Programming (OOP). During our in-class activities, we reviewed and learned how to improve our object-oriented software design skills. OOP can sound overwhelming with words like inheritance, encapsulation, polymorphism, and abstraction, but it becomes easier to understand after this blog post breaks it down! The blog I chose today, written by Arslan Ahmad and titled “Beginner’s Guide to Object-Oriented Programming (OOP)”, stood out to me for how he brings these topics together, explains each one, and shows how they all work together in code.

Ahmad begins by tackling the first pillar, inheritance. Inheritance is a way for one class to receive attributes or methods from another. He includes a fun and interesting example he calls the “Iron Man analogy,” describing how all of Iron Man’s suits share core abilities, while certain models add their own specialized features. I found this example useful as a fan of the movies, but also a great visual for really understanding the idea of inheritance. Developers can use this idea to define basic behavior once, expand it as needed, and reuse it elsewhere without rewriting the same code over and over again. This is a strong tool for keeping code organized and limiting duplicated logic.
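The Iron Man analogy can be sketched in a few lines of JavaScript; the class and method names here are hypothetical, my own stand-ins for the idea, not Ahmad’s code. The base suit defines the shared abilities once, and a specialized model inherits them while adding its own.

```javascript
// Base class: abilities every suit shares.
class Suit {
  fly() { return "flying"; }
  fireRepulsors() { return "pew"; }
}

// Specialized model: inherits fly() and fireRepulsors(),
// adds its own feature without rewriting the shared code.
class HulkbusterSuit extends Suit {
  heavyLift() { return "lifting"; }
}

const suit = new HulkbusterSuit();
console.log(suit.fly());       // "flying" (inherited)
console.log(suit.heavyLift()); // "lifting" (its own)
```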

The next pillar was encapsulation, which focuses on bundling attributes and the methods that operate on them inside a single class. Ahmad uses the example of a house with private and public areas, showing how access to certain areas is limited. I thought encapsulation was only about hiding information, but the post explains how it plays a key role in security and in preventing accidental changes. This is definitely something I can see myself using when working on larger programs where multiple classes need to interact safely.
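The house analogy might look like this in code, using JavaScript’s `#` private fields; the class and members are a hypothetical sketch of mine. The public method is the front door, while private members stay off-limits to outside callers.

```javascript
class House {
  #safeContents = "deed";      // private: inaccessible from outside the class

  visitLivingRoom() {          // public: the intended way in
    return "welcome";
  }

  #openSafe() {                // private helper, hidden from callers
    return this.#safeContents;
  }
}

const h = new House();
console.log(h.visitLivingRoom()); // "welcome"
console.log(typeof h.openSafe);   // "undefined" — no public access to the safe
```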

Polymorphism was the pillar that I found the most interesting. He describes it as “the ability of objects belonging to different classes to be treated as objects of a common superclass or interface.” This allows code to become more flexible and reusable. Whether through method overriding or overloading, polymorphism lets developers write cleaner, more adaptable programs.
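A minimal polymorphism sketch (the shape classes are hypothetical examples of mine): the caller works through the common superclass and never checks the concrete type, yet each object supplies its own behavior via method overriding.

```javascript
class Shape {
  area() { return 0; }
}

class Square extends Shape {
  constructor(side) { super(); this.side = side; }
  area() { return this.side * this.side; }      // overrides Shape.area
}

class Circle extends Shape {
  constructor(r) { super(); this.r = r; }
  area() { return Math.PI * this.r * this.r; }  // overrides Shape.area
}

// One loop, many behaviors: no instanceof checks anywhere.
const shapes = [new Square(3), new Circle(1)];
const areas = shapes.map(s => s.area());
console.log(areas[0]); // 9
```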

Finally, the last pillar is abstraction, which focuses on simplifying complex systems by deliberately hiding unnecessary details and showing only what the user needs to see and interact with. He compares this to a laptop: you click on icons and press keys without having to understand the hardware behind the scenes. Abstraction is very useful for keeping programs organized and easy to use.

In summary, this source helped me connect these concepts and gain further insight into ideas I had only partially understood before. His examples were fun and easy to understand, which made the material more digestible. In the future, I expect to use these principles when writing class-based programs, organizing code, and designing software that is easy to maintain!

Source: https://www.designgurus.io/blog/object-oriented-programming-oop

From the blog CS@Worcester – Mike's Byte-sized by mclark141cbd9e67b5 and used with permission of the author. All other rights reserved by the author.

The Importance of User Experience in Game Testing

While looking at internships, I saw a couple of job postings for quality assurance at video game studios, along with the qualifications and skills needed for the job, so I started to look into it. Looking at a couple of resources, I noticed that this job follows a few guidelines that help a video game deliver an amazing experience for players. These important concepts are functional evaluation, regression assessment, and user experience analysis. Within the job, companies use Agile methodology to help QA teams solve upcoming issues throughout the game’s lifespan.

This job requires employees to be skilled at fixing technical problems and to think critically in order to solve them. Let me explain the guidelines of game QA and why they matter. The first guideline is functional evaluation: a series of tests that makes sure the game’s features work as intended.

Functional evaluations are divided into: 

  • Gameplay mechanics: Do player characters interact with objects correctly? Can players use game mechanics correctly (for example, special universal abilities)? Do characters scale correctly?
  • User interface: Can player controls activate certain buttons, like pause and settings? Can players see features like health bars or ability cooldowns?
  • Missions and objectives: If a player completes a mission, do they get a reward? Is it possible…
  • Multiplayer features: Can players join the server correctly and with encryption, etc.?
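Functional evaluations like these are often scripted as automated checks. Here is a hedged sketch of what that might look like, assuming a hypothetical `Player` class of my own invention:

```javascript
// Hypothetical Player class standing in for a game under test.
class Player {
  constructor() { this.health = 100; this.inventory = []; }
  takeDamage(n) { this.health = Math.max(0, this.health - n); }
  pickUp(item) { this.inventory.push(item); }
}

const p = new Player();

// Gameplay mechanics: does damage apply correctly, never below zero?
p.takeDamage(30);
console.assert(p.health === 70, "damage should reduce health");
p.takeDamage(999);
console.assert(p.health === 0, "health should never go negative");

// Missions and objectives: does completing a mission grant the reward?
p.pickUp("gold");
console.assert(p.inventory.includes("gold"), "reward should be granted");
```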

Moving on to regression assessments: these retest key features in the game after patches have been implemented. The purpose of these tests is to:

  • Identify vulnerabilities
  • Allocate resources
  • Enhance reporting accuracy

The reason I mention these purposes is that QA needs to weigh multiple factors in order to maintain customers’ enjoyment of the game. In addition, QA has to consider whether changes could make the code more complex, cause performance drops, create user friction, or increase costs.

Moving on to the last point: user experience analysis. This can either make or break the success of a game. Players face friction when the game is not optimized for certain hardware, or when there are constant disconnects from the server; as a result, more players will return the game or stop playing it entirely. I have noticed that some game companies do not know how to separate good suggestions from bad suggestions when deciding what to fix in the next patch. Regardless, it will take time for a company to set a clear roadmap for how they want to shape their game.

From the blog CS@Worcester – Site Title by Ben Santos and used with permission of the author. All other rights reserved by the author.

Sprint 2

This was the second sprint of the course, Sprint 2. During this sprint I had a major blocker in terms of access to the server from home. As a commuter, I had to rely on the school’s VPN to be able to access the server from my house and for some reason the VPN was not configured properly. I overcame this issue by going to campus a couple extra times throughout the week and doing work from there because I knew for a fact that I had access from there. 

The major goal of this sprint was to update the docker-compose file to configure the proper containers for the frontend, backend, MongoDB, and RabbitMQ.
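For context, a docker-compose file for this kind of four-service setup might look roughly like the sketch below; the service names, image tags, ports, and volumes here are hypothetical placeholders of mine, not the actual Thea’s Pantry configuration.

```yaml
# Hypothetical sketch of a docker-compose layout with the four services.
services:
  frontend:
    image: example-frontend:latest     # placeholder image name
    ports:
      - "80:80"
  backend:
    image: example-backend:latest      # placeholder image name
    depends_on:
      - mongodb
      - rabbitmq
  mongodb:
    image: mongo:7
    volumes:
      - mongo-data:/data/db
  rabbitmq:
    image: rabbitmq:3-management
volumes:
  mongo-data:
```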

These are the links to the two tickets related to this sprint:

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/deployment/deployment-server/-/issues/6

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/deployment/deployment-server/-/issues/7

The goals of this sprint were not inherently difficult in and of themselves. The major challenge was in learning and understanding what it meant to update a docker-compose file: where to find the proper links and versions for the updates, how to know what to update and what to leave, and how to work and navigate inside the server through my local terminal. I felt that this pushed me to become more comfortable and familiar with terminal commands, and to be more inquisitive about how things work throughout the server. While this sprint had technically fewer actionable steps, I spent more time really thinking about how things worked, which helped me better understand the system as a whole.

The apprenticeship pattern I chose for this sprint is Breakable Toys. This pattern emphasizes the importance of failure, even over success. The pattern pushes the reader to understand that failure is necessary for a complete learning and for the ability to adapt and face challenges in the future.

I chose this apprenticeship pattern because throughout my experience learning computer science, and programming in particular, there has been this looming fear of failure. I have found that this fear is often directly linked to a fear of trying which ultimately leads to less learning and prevents me from really putting in full effort. With this particular sprint, I tried multiple different links and paths and versions which many times did not run or did not start up the system. I was pushed to sit with the discomfort of failing to get the server up and running while still acknowledging that all of the failure was pushing me closer to a complete understanding of the system. 

If I had read this pattern prior to beginning this sprint, I would have been less hesitant to use trial and error and really dig into the full process of working with the server. There was something very intimidating about the use of sudo and knowing that I had the ability to make permanent changes with the potential to break the entire server, and I think that I let that intimidation hold me back from making more progress during the sprint.

From the blog CS@Worcester – The Struggle of Being a Female Student in CS by Noam Horn and used with permission of the author. All other rights reserved by the author.

Good Software Design and the Single Responsibility Principle

The single responsibility principle is simple but critical in good software design. As Robert Martin puts it in his blog The Single Responsibility Principle, “The Single Responsibility Principle (SRP) states that each software module should have one and only one reason to change.” He also does a great job of comparing this to cohesion and coupling, stating that cohesion should increase between things that change for the same reason, and coupling should decrease between things that change for different reasons. Funnily enough, while I was reading a post on Stack Overflow I ran into this line: “Good software design has high cohesion and low coupling.”

Designing software is complicated, and the program is typically quite complex. The single responsibility principle not only creates a stronger and smarter structure for your software but one that is more future-proof as well. When changes must be made to your program, only the pieces of your software related to that change will be modified. The low coupling I mentioned earlier will now prevent the possibility of breaking something completely unrelated. I couldn’t count the number of times my entire program would break by modifying one line when I first started coding, because my one class was doing a hundred different things.

This directly relates to what we’re working on in class right now. We are currently working with REST API, specifically creating endpoints in our specification.yaml file. Our next step will be to implement JavaScript execution for these endpoints. When we begin work on this keeping the single responsibility principle in mind will be incredibly important. It can be very easy to couple functions that look related but change for completely different reasons. For example, coupling validating new guest data and inserting the new guest into the database. While they seem very related, they may change for very different reasons. Maybe validation requirements change, causing the validation process to be modified but not the inserting of a new guest. The database structure or storage system may change leading to modifications to how the new guest is inserted but not how they’re validated. Keeping in mind that related things may change for different reasons will be key for my group leading into the next phase of our REST API work.
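A minimal sketch of that separation (the helper names are hypothetical, and a plain array stands in for the database): each function has exactly one reason to change, so a new validation rule never touches the storage code, and a storage change never touches validation.

```javascript
// Changes only when validation *rules* change.
function validateGuest(guest) {
  return typeof guest.name === "string" && guest.name.length > 0;
}

// Changes only when the *storage* mechanism changes.
function insertGuest(db, guest) {
  db.push(guest);
  return db.length;
}

const db = [];
const guest = { name: "Ada" };
if (validateGuest(guest)) {
  insertGuest(db, guest);
}
console.log(db.length); // 1
```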

This principle is one that I plan on carrying with me during my career as a programmer. It will help me create more future-proofed programs in a world where things must be ready to constantly evolve and adapt. Uncle Bob’s blog was incredibly useful in my understanding of this principle on a deeper level. I feel like a stronger programmer after just reading it. I look forward to implementing what I’ve learned as soon as we start working with the JavaScript of our REST API.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.