Category Archives: Quarter-3

My Journey Learning REST API Calls

Getting to grips with making REST API calls felt like finally being able to have a real conversation with the internet. At first the whole thing was a bit overwhelming, but once I started to see the patterns emerging, it became second nature. It all clicked when I realised that a REST API … Read more

From the blog CS@Worcester – BforBuild by Johnson K and used with permission of the author. All other rights reserved by the author.

Express.js and its association with REST APIs

For this third quarter’s professional development, I read the Treblle post “How to Structure an Express.js REST API with Best Practices” (https://treblle.com/blog/egergr). The article focuses on how to organize and build RESTful APIs with Express.js while keeping the code clean, scalable, and modular. It covers key principles like separating app and server logic, implementing a layered architecture, structuring routes and controllers clearly, and designing the API so it’s easy to maintain and expand. It also mentions the importance of using Docker for containerization and environment consistency, which is essential for deploying APIs reliably across different systems.

I picked this specific resource because I think it ties closely with what we’ve learned in class, particularly modularity, separation of concerns, and maintainable software design. It helps that our class exercises involved building small web services and understanding how system components interact (at least tangentially, with the guestinfobackend work), so reading about how Express.js projects are structured in real-world settings gave a lot of context. For people like me who are interested in backend or full-stack development, it’s the kind of practical foundation that helps turn classroom principles into actual coding habits.

From the article, I learned that Express.js isn’t just about setting up endpoints quickly; it’s about creating a clear, layered structure where each part of the API has its own responsibility. For instance, it recommends dividing the project into three layers: a web layer (for routes and middleware), a service layer (for handling logic and validation), and a data layer (for managing database access). This structure keeps your code modular and easier to debug. Another useful reminder was to containerize the API using Docker, which helps standardize development and production environments so you can avoid those “it works on my machine” problems.
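To make that three-layer idea concrete for myself, here is a minimal sketch in TypeScript of how an Express.js project might separate those responsibilities. This is my own illustration, not code from the article, and names like userService and userRepository are just hypothetical placeholders.

import express, { Request, Response } from "express";

// Data layer: the only part that knows how records are stored (a hypothetical in-memory map).
const users = new Map<string, { id: string; name: string }>([["1", { id: "1", name: "Ada" }]]);
const userRepository = {
  findById: (id: string) => users.get(id) ?? null,
};

// Service layer: business logic and validation, with no knowledge of HTTP or storage details.
const userService = {
  getUserById(id: string) {
    if (!id.trim()) throw new Error("id is required");
    return userRepository.findById(id);
  },
};

// Web layer: routes and middleware translate HTTP requests into service calls.
const app = express();
app.get("/users/:id", (req: Request, res: Response) => {
  const user = userService.getUserById(req.params.id);
  return user ? res.json(user) : res.status(404).json({ error: "not found" });
});

// In a real project the listen call would live in a separate server file,
// so tests can import the app without starting a server.
app.listen(3000);

Keeping the layers separate like this means the storage could change without touching the routes, which is exactly the kind of modularity the article is after.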

I’d say the article reinforces many of the software architecture concepts we’ve referenced in class, such as modularity, abstraction, and loose coupling. A modular API design makes it easier to scale, test, and maintain, which at the end of the day is really the heart of software construction. It also reminded me that tools like Docker play a key role in supporting architecture by making deployment consistent and repeatable, which is just as important as how the code itself is structured.

As a whole, I’d say this article helped me better understand what good backend architecture looks like in practice. It gave me a clearer sense of how to build modular, scalable APIs using Express.js and Docker, and I can see how those principles might carry over into future coursework and professional projects I’m part of.

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

Understanding Code Review

Code reviews are a vital step in the process of putting out not only software but any form of work. This blog post by Kimmo Brunfeldt outlines the need for code reviews on projects as well as some of the best practices for them. The role of a code review is to share knowledge and ownership of the project among team members, discuss development, and perform overall quality control. A standard flow looks like this: a draft, submission for review with suggested changes (these steps often cycle), and then final approval and merging. Keeping this flow moving with positive feedback and clear, constant communication is key to consistently good reviews.

I chose this topic because the class has recently focused on clean code. Clean code is a key element of the review process as well as something to keep in mind while writing code in the first place. How easily someone else can read and understand your code is vital for projects and a central concern of code review. Group review sessions and collaboration let others look over the work and point out potential issues in naming or complexity.

The blog also gave some really great tips and ideas for me to use on future projects. Some of the best takeaways were not only simple and obvious in hindsight, but also things I often overlook in discussions and reviews of my own work. The biggest one was to keep code submissions small and concise so that reviews can happen quickly and feedback can be acted on rapidly. Along with this, feedback should be given in a public setting or documented so that others can hear it and learn from it. These two points were my biggest takeaways from the blog because they are easy steps to implement in group work; by doing them, communication increases across the entire project, and so do learning and overall scrutiny of the code, in a way that is not intrusive. Previous approaches I have seen or attempted include large sit-down meetings that often drag on, so these seem like much better alternatives that I look forward to trying.

Works Cited:

Brunfeldt, K. (2025, May 12). A complete guide to code reviews. Swarmia. https://www.swarmia.com/blog/a-complete-guide-to-code-reviews/

From the blog CS@Worcester – Dan's Blog by Daniel Fung-A-Fat and used with permission of the author. All other rights reserved by the author.

Let’s Keep Expanding!!

In the workplace, not only does the number of projects continue to increase, but the way in which we approach them is evolving just as quickly. Tech professionals are leading the way in developing cutting-edge procedures and processes for daily work, tracking, and planning. Tools evolve rapidly as well, so project management has become a lot more interesting, to say the least.

Adapting to new technologies and routines is the best way to stay efficient. With the changes between roles and systems of project management, there need to be some solutions (or at least one). The Forbes Technology Council has detailed emerging trends and strategies to inform organizations. By walking through the actions on this list, we can see how different companies and organizations might evolve if they feel encouraged enough to take them into consideration.

Starting off, project managers need to establish a VMO (value management office): the idea is to shift the focus from project delivery to value maximization. For example, when an organization takes on an initiative, it aligns it with broader goals and efficient resource use to allow for continuous improvement. This is absolutely essential, especially in today’s climate of business and organizational behavior.

Never be afraid to rely on the professionals in times of need. The article highlights the importance of relying on “senior talent.” Since they have broader experience, they’re capable of making quick, well-informed decisions. This proves especially useful on complex projects, where strong product ownership skills help project managers stay aligned with business goals.

I told you AI is not always a bad thing. A lot of individuals, myself included, believe in AI being used as a tool. It’s no secret that several organizations use it, but it is very important to note how it’s being used. This is called resource optimization. By allocating these resources accordingly, complexity can be managed much better. Artificial intelligence can assist with communication, task automation, and data insight, leading to greater outcomes and output.

Reading just the title alone, I was expecting more teamwork-driven strategies and trends, similar to the Scrum and sprint conversations from lectures. Of course, I judged a book by its cover, and I knew immediately that I had to read and dissect this piece. It is important to acknowledge that this article is from a year ago, meaning even more trends have emerged since then. These strategies, however, are still very applicable to today’s climate of process management.

As a tech person, or any business person, it is crucial to stay optimistic and see the value in making changes. I believe that my generation and I have the potential and resources to evolve areas of our career fields, with ideas that we want to see unfold. It will be very interesting to see the growth.

Source: 20 Emerging Strategies And Trends In Project Management

From the blog CS@Worcester – theJCBlog by Jancarlos Ferreira and used with permission of the author. All other rights reserved by the author.

OOP: Encapsulation

I wrote my blog based on “What is encapsulation in object-oriented programming (OOP)?” from the Cincom Systems blog. The blog defines encapsulation as bundling data (attributes) and the methods that operate on that data into a single unit, while restricting access to some of the class’s components. The author explains well what we learned in previous classes: by using access modifiers like private and protected, a class can hide its internal state while exposing only controlled interfaces to the outside world. It goes on to describe the benefits of encapsulation: improved modularity, a protected internal state, easier debugging and testing, and cleaner interfaces. The article also gives good practices based on the idea of encapsulation.

This blog directly relates to many things that we do in this class, practicing OOP and clean coding. I picked this article because it not only defines the concept of encapsulation but also connects it with practical ideas in software design and maintenance. I felt this article would help remind me of the theory behind encapsulation and OOP.

Reading this article reminded me that encapsulation isn’t simply making variables private and adding setters and getters; it’s about letting the class control its own state and hide implementation details from the outside world. This blog made me review some of my past projects, where I had too many exposed public fields that made the code less clean. My past projects are definitely not written with encapsulation in mind, so reviewing this topic was extremely helpful.
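To remind myself what that looks like in practice, here is a small sketch in TypeScript (my own hypothetical BankAccount class, not an example from the article) of a class that guards its own state instead of exposing a public field.

class BankAccount {
  // Private: outside code cannot put the balance into an invalid state directly.
  private balance = 0;

  deposit(amount: number): void {
    if (amount <= 0) throw new Error("deposit must be positive"); // the class guards its own invariants
    this.balance += amount;
  }

  getBalance(): number {
    return this.balance; // read access only through a controlled interface
  }
}

const account = new BankAccount();
account.deposit(50);
// account.balance = -100;  // compile error: 'balance' is private
console.log(account.getBalance()); // 50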

Going forward, in personal coding and team projects, I expect to apply encapsulation and OOP in general by designing classes so that their internal data is private or protected and only the necessary operations are public, ensuring that external code interacts with objects via meaningful methods rather than manipulating internals directly.

To summarize: this blog helped me remember what encapsulation actually means as a core OOP design principle rather than an afterthought. Applying what I learned from it, I expect to write code that is cleaner and less error-prone.

https://www.cincom.com/blog/smalltalk/encapsulation-in-object-oriented-programming/

From the blog CS@Worcester – Coding with Tai by Tai Nguyen and used with permission of the author. All other rights reserved by the author.

Code Reviews

I recently read the blog post “Why Code Reviews Are My Highest-Priority Work” by Jordan Edmunds. In the article, Edmunds explains why he regards code reviews as his highest-priority work and outlines several reasons. First, doing reviews early reduces the team’s merge conflicts and speeds up cycle time. He also notes that if a review happens soon after the author writes the code, the author still has fresh context, making fixes and tests easier. He points out that timely reviews improve overall team productivity and support shared responsibility. He discusses common objections and shares how he integrates reviews into his day.

This blog post directly relates to our course topic of code review. I chose it because, rather than just defining what code review is, Edmunds gives practical reasons why it matters, based on real experience in a programming context. Since our course material covers this topic, the article provides a good example of how these code review ideas are applied. I felt it would help me connect theory with real practice.

Reading the article made me realize that code reviews aren’t just an afterthought to check off but a genuine practice for improving code quality. I learned that timing matters: a review done soon after code submission is far more effective than one that is delayed. I also like the emphasis on context; for the programmer, having a fresh understanding of their changes makes review feedback much easier to act on. These ideas from Edmunds help shape how I think about code reviews now. I’ll aim to review often, and hopefully the fresh context will help me pinpoint any problems present.

Personally, this article made me reflect on my own habits, where I often skip reviews or wind up doing them late, only when I encounter a problem running my programs. I’m now aware that this can slow feedback and, especially, development. After reading this blog, in any coding assignment or personal project I plan to start reviewing written code quickly. Reviews are an important process through which we uphold standards, share progress, and improve code maintainability.

In summary, this resource has helped me understand the value of code review and given me good ideas for applying it in team and personal work. By participating actively and thoughtfully in code reviews, I’ll hopefully write better code and contribute to smoother team processes.

https://medium.com/%40jordan.l.edmunds/why-code-reviews-are-my-highest-priority-work-8e9b4c410887

From the blog CS@Worcester – Coding with Tai by Tai Nguyen and used with permission of the author. All other rights reserved by the author.

Software Frameworks and REST APIs: Building Scalable, Maintainable Systems

Hello everyone, and welcome to my blog entry for this quarter.

For this quarter’s self-directed professional development, I selected the article “What frameworks are commonly used by REST API developers?” by MoldStud (moldstud.com), which surveys popular frameworks for building REST APIs and outlines why they matter.
Because in our classes we’ve been learning about software architecture, design patterns, and object-oriented design, I wanted to explore how frameworks help bring those concepts into real projects, especially when implementing REST APIs.

Summary of the Article

The article begins by explaining that when developers build REST APIs, choosing the right framework is critical. It reviews several top frameworks, such as:

  • Express.js (for Node.js): praised for its simplicity, flexibility, and modular middleware system.
  • Spring Boot (Java): known for its strong ecosystem (Spring Data, Spring Security, etc.) and ability to rapidly build production-ready REST APIs.
  • Python frameworks such as FastAPI and Flask, which also permit building RESTful services with less boilerplate and good developer productivity.

The article emphasizes that frameworks provide built-in features like routing, serialization, input validation, authentication/authorization, and documentation support, which means developers can focus more on business logic rather than boilerplate.
It also notes that frameworks differ in trade-offs (simplicity vs. features, performance vs. flexibility) so choosing depends on project size, team skill, performance expectations, and ecosystem.
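As a quick illustration of what the article means by built-in features, here is a rough sketch of an Express.js endpoint where the framework handles body parsing, routing, and response serialization, leaving only a bit of validation and business logic to write. The /orders endpoint and its fields are hypothetical, not taken from the article.

import express from "express";

const app = express();
app.use(express.json()); // built-in middleware: the framework deserializes JSON bodies for you

// Hypothetical endpoint: routing and parsing come from the framework,
// so only the validation and business logic are left to write.
app.post("/orders", (req, res) => {
  const { item, quantity } = req.body ?? {};
  if (typeof item !== "string" || typeof quantity !== "number" || quantity < 1) {
    return res.status(400).json({ error: "item (string) and quantity (positive number) are required" });
  }
  res.status(201).json({ item, quantity, status: "created" }); // res.json serializes the response
});

app.listen(3000);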

Why I Selected This Resource

I chose this article because, in our coursework and my professional development (including my internship at The Hanover Insurance Group), I have seen frameworks play a key role in making software more maintainable and scalable. Given that we have covered design principles and object-oriented design, understanding how frameworks support those principles (and how REST APIs fit into that) felt like a natural extension of our learning. I wanted a resource that bridges theory (design, architecture) with practice (framework usage, API development), and this article did just that.

Personal Reflections: What I Learned and Connections to Class

Several thoughts stood out for me:

  • Frameworks help enforce design discipline. For example, while in class we’ve talked about abstraction, encapsulation, and modular design, using a framework like Spring Boot means that the structure (controllers, services, repositories) often mirrors those concepts. The separation of concerns is built in.
  • When building a REST API, using a framework means you benefit from standard patterns (e.g., routing endpoints, serializing objects, handling errors) so you can spend more time thinking about how your code relates to design principles, not reinventing infrastructure.
  • I’ve seen in projects (including my internship) how choosing a framework that aligns with the team’s language, domain, and architecture reduces friction. For instance, if you need to scale to many services, choose a framework that supports microservices or lightweight deployments. The article’s discussion about trade-offs reminded me of that.
  • One connection to our class: We’ve drawn UML diagrams to model systems, show how classes relate, and plan modules. Framework usage is like the next step: once the design is set, frameworks implement those modules, enforce contracts, and provide the infrastructure. In particular, when those modules expose REST APIs, the design decisions we make (interface boundaries, class responsibilities) reflect directly in how endpoints are designed.
  • It also made me reflect on how REST APIs themselves are more than just endpoints, they represent system architecture, and frameworks help in realizing that architecture. For example, using a framework that supports versioning, middleware, and layered architecture helps make the API maintainable as it evolves.

Application to Future Practice

Going forward, I plan to apply these lessons in both academic and professional work:

  • When building a project (in class or internship) that uses REST APIs, I’ll choose a framework early in the design phase and consider how the framework’s structure maps to my design model (classes, modules, responsibilities).
  • I’ll evaluate trade-offs consciously: If I need speed and simplicity, maybe a lightweight framework; if I need enterprise features (security, data access, microservices), maybe a full-featured one like Spring Boot.
  • I’ll use the framework’s features (routing, validation, middleware) to enforce design principles like modularity, readability, and maintainability rather than writing everything by hand.
  • From the API perspective, I’ll ensure that endpoint design aligns with our design models: models reflecting resources, controllers respecting single responsibility, services encapsulating business logic, all supported by the framework.
  • Finally, I’ll treat the framework as part of the architecture, not just a tool, meaning I’ll reflect on how framework conventions influence design decisions, and how my design decisions influence framework usage.

Citation / Link
Crudu, Vasile & MoldStud Research Team. “What frameworks are commonly used by REST API developers?” MoldStud, October 30, 2024. Available online: https://moldstud.com/articles/p-what-frameworks-are-commonly-used-by-rest-api-developers

From the blog Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

Code Reviews: Writing Better Software Through Collaboration and Feedback

Hello everyone, and welcome to my third blog entry of the semester!

For this week’s self-directed professional development, I read the article “Best Practices for Peer Code Review” from SmartBear Software (smartbear.com). This article provides practical guidelines and research-backed insights on how to conduct effective code reviews in a professional setting. Reading it gave me a new appreciation for how structured review processes can transform not only the quality of code but also team communication and learning.

Summary of the Article

The article begins by explaining that code review is one of the most powerful tools for improving software quality. It cites studies showing that even small, well-structured reviews can significantly reduce bugs and improve maintainability.

Some key practices stood out to me:

  • Keep reviews small: Review no more than 400 lines of code at a time.
  • Limit review sessions: Spend no more than 60 minutes per review to stay focused.
  • Encourage collaboration: Authors should add comments and explanations to help reviewers understand their changes.
  • Focus on learning, not blame: Code review is most effective when it fosters shared ownership and continuous improvement.

The article also introduces metrics like inspection rate and defect rate, which can be used to measure how effective a review process is. Overall, the main message is that a good review culture combines process discipline with respect, clarity, and open communication.
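To get a feel for those metrics, here is a tiny sketch of how they could be computed. The formulas below are common definitions (lines reviewed per hour, defects per thousand lines reviewed), not code or numbers quoted from SmartBear.

// Inspection rate: lines of code reviewed per hour of review.
function inspectionRate(linesReviewed: number, hoursSpent: number): number {
  return linesReviewed / hoursSpent;
}

// Defect density: defects found per thousand lines reviewed.
function defectDensity(defectsFound: number, linesReviewed: number): number {
  return (defectsFound / linesReviewed) * 1000;
}

// Example: a 350-line change reviewed in one focused hour, with 4 issues found.
console.log(inspectionRate(350, 1)); // 350 lines/hour
console.log(defectDensity(4, 350));  // roughly 11.4 defects per 1000 lines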

Why I Selected This Resource

I chose this article because it connects directly to my real-world experience at The Hanover Insurance Group, where I worked as a PL Automation Developer intern. During my time there, code reviews were a core part of our workflow. Every piece of automation code had to go through review before deployment. I noticed that following consistent guidelines, like those mentioned in the SmartBear article, made a huge difference in maintaining quality and avoiding errors.

Since we’ve been focusing on software design and collaboration in class, this article helped me bridge what I’ve learned in theory with what professionals practice daily.

Personal Reflections: What I Learned and Connections to Class

Reading this article helped me connect classroom concepts like clean design, modularity, and readability with the real-world practice of peer review. At Hanover, I experienced firsthand how detailed feedback from senior developers helped me understand why small changes, like naming conventions or modularizing functions, mattered in the long run.

This article reminded me that code review isn’t just about technical correctness, it’s also about communication. Explaining your decisions helps others understand your design thinking, just like how UML diagrams or documentation clarify structure in class projects.

Application to Future Practice

Going forward, I plan to adopt SmartBear’s recommendations in both academic and professional work. I’ll keep my commits small, make my code clear and documented before review, and always focus on learning from feedback rather than defending my work. I’ve learned that humility and collaboration are just as essential to great software as technical skill.

Citation / Link
SmartBear Software. “Best Practices for Peer Code Review.” SmartBear, 2024. Available online: https://smartbear.com/learn/code-review/best-practices-for-peer-code-review/

From the blog CS@Worcester – Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

Software Frameworks: an Introduction

For this post I listened to the Sourcetoad podcast called Leveraging Frameworks for Your Software Development Project. The podcast features three software developers who work for Sourcetoad, a software consulting and development firm, discussing software frameworks. https://www.youtube.com/watch?v=ik4d2Jf7Rik&t=1539s

To begin with frameworks, we must define exactly what a framework is. A framework is a collection of pre-built code that forms the blueprint of an app, without you needing to write that code yourself. Frameworks are tremendously useful because with any application you are trying to solve a problem, yet starting any project has a kind of “price of entry”: login, authentication, security, a database, and a server, to name a few. The framework gives you these things right off the bat as pre-built modules, which lets you start on the actual solution faster.

Now, the best part about frameworks is that the vast majority are free, in the sense that the code is open source: software made by the software community for the software community. Anyone can view, edit, and modify it.

Now, when would you not want to use a framework? Say you have a simple application that uses a framework, but while running it you notice it renders slower than expected. This is because any framework gives you a lot of pre-built code, much of which you might not use, and that can slow down rendering time. As the podcast wonderfully puts it, “the great thing about a framework you get a lot of stuff, but you also get a lot of stuff.”

You can build on top of frameworks, and one popular way of doing this is a CMS: a Content Management System (frontend and backend). A CMS enables users to manage the content on a website themselves without needing to know how to code; it gives non-technical people the ability to make changes on the website instantly. A con is vendor lock-in, meaning the site cannot transition to another system easily.

There is also the headless CMS. It is responsible for editing and storing content, but not for how the content is presented visually; it has no frontend, only a backend. Some pros of a headless CMS are that content management is easier, developers have more freedom to develop code at scale, and content can be created once and published everywhere.
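To picture what that means in code, here is a small TypeScript sketch of a frontend pulling content from a headless CMS over its API. The URL and field names are hypothetical; the point is that the CMS returns raw content and the presentation lives entirely on the client side.

interface Article {
  title: string;
  body: string;
}

// The headless CMS only serves content over an API; it sends back data, not HTML.
async function loadArticles(): Promise<Article[]> {
  const response = await fetch("https://cms.example.com/api/articles"); // hypothetical content endpoint
  if (!response.ok) throw new Error(`CMS request failed: ${response.status}`);
  return (await response.json()) as Article[];
}

// The same content could be rendered by a website, a mobile app, or a newsletter:
// create once, publish everywhere.
loadArticles().then((articles) => articles.forEach((a) => console.log(a.title)));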

Overall, I’ve heard the word “framework” get tossed around in the computer science world, but never truly had a grasp on what it really was. After listening to this podcast, I feel great about what it is and eager to start a project using a framework, and even more so to explore the world of CMS and headless CMS once I feel comfortable with frameworks.

From the blog CS@Worcester – Programming with Santiago by Santiago Donadio and used with permission of the author. All other rights reserved by the author.

Effective API Design

I have been reading one of Martin Fowler’s articles, titled APIs: Principles and Best Practices to Design Robust Interfaces. It discusses how APIs (Application Programming Interfaces) act as small bridges that enable various software systems to communicate and keep up with one another. Fowler emphasizes points such as clear naming, simplicity, consistency, and not breaking old versions, and supports them with code demos and real-life scenarios. It is a combination of theory and practical tips, so anyone interested in software design can dive in.

I picked this read because API design is one of the foundations of software engineering and intersects with my course on Software Development and Integration. I prefer scalable apps that stay clean and are easy for other developers to connect to. Exploring Fowler’s work was my way of learning how to create interfaces built on sound principles, ones that people can jump into and extend without hassle. Much writing on this subject stays theory-heavy, but this article instead presents actual, practical tactics, just what one needs at school and on the job.

The importance of versioning and maintaining backward compatibility was one of the biggest things I took away from the article. Fowler reminds us that APIs must evolve without breaking existing clients, which requires planning, testing, and communicating with your users. That resonated with me because in group projects I had done before, a minor change to our module could break things down the line. Upon reflection, well-planned API design rules seem like the natural way of preventing such headaches and wasting less time.
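To make that concrete for myself, here is a small sketch of URL-based versioning in Express.js, one common approach (not necessarily the one Fowler prescribes) to letting an API evolve while old clients keep working. The routes and response shapes are hypothetical.

import express from "express";

const app = express();

// v1 keeps the response shape that existing clients already depend on.
app.get("/v1/users/:id", (req, res) => {
  res.json({ id: req.params.id, name: "Ada Lovelace" });
});

// v2 is free to change its shape because v1 clients are left untouched.
app.get("/v2/users/:id", (req, res) => {
  res.json({ id: req.params.id, firstName: "Ada", lastName: "Lovelace" });
});

app.listen(3000);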

I also liked that Fowler emphasized intuitive naming and consistency. According to him, the more predictable the method names, parameters, and endpoints are, the friendlier an API is. Establishing a proper structure and hierarchy saves a fair deal of time, significantly reduces mix-ups, accelerates integration, and makes the whole development process more enjoyable. I took away that considerate design serves not only the end user but also the people who actually build with the API, and the whole ecosystem becomes more efficient and simpler to maintain.

In the future, I will apply these lessons to my class projects and whatever work I do professionally. Whenever I create an API to support a web application or integrate third-party services, I will focus on clean documentation, predictability, and retaining older versions. Following these rules, I will deliver interfaces that are easy to maintain, that assist other developers, and that survive updates. This article has made me even more respectful of the discipline of API design, and I am ready to put these tangible strategies to immediate use.

From the blog CS@Worcester – Site Title by Yousef Hassan and used with permission of the author. All other rights reserved by the author.