Category Archives: Quarter-3

Let’s Keep Expanding!!

In the workplace, not only does the number of projects continue to increase, but the ways in which we approach them keep multiplying as well. Tech professionals are leading the way in developing cutting-edge procedures and processes for daily work, tracking, and planning. The tools evolve just as rapidly, so project management has become a lot more interesting, to say the least.

Adapting to new technologies and routines is the best way to stay efficient. With roles and systems of project management changing, there need to be solutions (or at least one). The Forbes Technology Council has detailed emerging trends and strategies to inform organizations. Counting off the actions on this list, we can see how different companies and organizations might evolve if they feel encouraged enough to take them into consideration.

Starting off, project managers need to establish a VMO (value management office), the idea being to shift the focus from project delivery to value maximization. For example, if an organization has an initiative, it aligns that initiative with broader goals and efficient resource use to allow for continuous improvement. This is absolutely essential in today’s climate of business and organizational behavior.

Never be afraid to rely on the professionals in times of need. This article highlights the importance of relying on “senior talent”. Since they have broader experience, they’re capable of making quick, well-informed decisions. On more complex projects this proves useful, helping project managers understand business goals through an emphasis on exceptional product-ownership skills.

I told you AI is not always a bad thing. A lot of individuals, including myself, believe in the power of AI used as a tool. It’s no secret that several organizations use it, but it is very important to note how it’s being used. One such use is resource optimization: by allocating resources accordingly, complexity can be managed far better. Artificial intelligence can assist with communication and task automation and provide data insights, leading to greater outcomes and output.

For me, reading just the title alone, I was expecting more teamwork-driven strategies and trends, similar to the Scrum and sprint conversations from our lectures. Of course, I judged a book by its cover, and I knew immediately that I had to read and dissect this story. It is important to acknowledge that this article is a year old, meaning even more trends have emerged since then. These strategies, however, are still very applicable to today’s climate of process management.

As a tech person, or any business person, it is crucial to stay optimistic and see the value in making changes. I believe that my generation and I have the potential and the resources to evolve areas of our career fields, with ideas we want to see unfold. It will be very interesting to see the growth.

Source: 20 Emerging Strategies And Trends In Project Management

From the blog CS@Worcester – theJCBlog by Jancarlos Ferreira and used with permission of the author. All other rights reserved by the author.

OOP: Encapsulation

I wrote my blog based on “What is encapsulation in object-oriented programming (OOP)?” from the Cincom Systems Blog. The blog defines encapsulation as bundling data, specifically attributes, and the methods that operate on that data into a single unit, while restricting access to some of the class’s components. The author explains well what we learned in previous classes: that by using access modifiers like private and protected, a class can hide its internal state while exposing only controlled interfaces to the outside world. It goes on to describe the benefits of encapsulation: improved modularity, a protected internal state, easier debugging and testing, and cleaner interfaces. The article also gives good practices based on the idea of encapsulation.
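
As a quick illustration of that definition, here is a minimal Python sketch (a hypothetical BankAccount class of my own, not from the blog): the internal state is hidden behind methods that enforce the class’s rules, instead of being a public field anyone can overwrite.

```python
class BankAccount:
    """Encapsulation: the balance is internal state, changed only via methods."""

    def __init__(self, opening_balance=0):
        self._balance = opening_balance  # leading underscore marks it internal

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self):
        # read-only view of the internal state; no setter is exposed
        return self._balance


account = BankAccount(100)
account.deposit(50)
account.withdraw(30)
print(account.balance)  # 120
```

Because callers can only go through `deposit` and `withdraw`, the class can guarantee its balance never goes negative, which is exactly the “controlled interface” the article describes.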

This blog directly relates to many things that we do in this class, practicing OOP and clean coding. I picked this article because it not only defines the concept of encapsulation but also connects it with practical ideas in software design and maintenance. I felt this article would help remind me of the theory of encapsulation and OOP.

Reading this article reminded me that encapsulation isn’t simply about making variables private and adding setters and getters; it’s about letting the class control its own state and hide implementation details from the outside world. This blog made me review some of my past projects where I had too many exposed public fields, which made the code less clean. My past projects are definitely not written with encapsulation in mind, so reviewing this topic was extremely helpful.

Going forward, in personal coding and team projects, I expect to apply encapsulation and OOP in general by designing classes so that their internal data is private or protected and only the necessary operations are public, ensuring that external code interacts with objects via meaningful methods rather than manipulating internals directly.

To summarize: this blog helped me remember what encapsulation actually means as a core OOP design principle rather than just an afterthought. Applying what I learned from it, I expect to write code that is cleaner and less error-prone.

https://www.cincom.com/blog/smalltalk/encapsulation-in-object-oriented-programming/

From the blog CS@Worcester – Coding with Tai by Tai Nguyen and used with permission of the author. All other rights reserved by the author.

Code Reviews

I recently read the blog post titled “Why Code Reviews Are My Highest-Priority Work” by Jordan Edmunds. In the article, Edmunds explains why he regards code reviews as his highest-priority work, and he outlines several reasons. First, doing reviews early reduces the team’s merge conflicts and speeds up cycle time. He also notes that if a review happens soon after the author writes code, the author still has fresh context, making fixes and tests easier. He points out that timely reviews improve overall team productivity and support shared responsibility. He discusses common objections and shares how he integrates reviews into his day.

This blog post directly relates to our course topic of code review. I chose it because, rather than just defining what code review is, Edmunds gives practical insight into why it matters, based on real experience in a programming context. Since our course material covers this topic, this article provides a good example of how Edmunds’s code review ideas are applied. I felt it would help me connect theory with real practice.

Reading the article made me realize that code reviews aren’t just an afterthought to check off but a genuine practice for improving code quality. I learned that timing matters: a review done soon after code submission is far more effective than one delayed. I also like the emphasis on context; for the programmer, having a fresh understanding of their changes makes review feedback much easier to act on. These ideas from Edmunds help shape how I think about code reviews now. I’ll aim to review often, and hopefully the fresh context will help me pinpoint any and all problems present.

Personally, this article made me reflect on my own habits: I often skip reviews or wind up doing them late, only when I encounter a problem running my programs. I’m now aware that this can slow feedback and, especially, development. After reading this blog, in any coding assignment or personal project, I plan to start reviewing written code quickly. Reviews are an important process through which we uphold standards, share progress, and improve code maintainability.

In summary, this resource has helped me understand the value of code review and given me good ideas for applying it in team and personal work. By participating actively and thoughtfully in code review, I’ll hopefully write better code and contribute to smoother team processes.

https://medium.com/%40jordan.l.edmunds/why-code-reviews-are-my-highest-priority-work-8e9b4c410887

From the blog CS@Worcester – Coding with Tai by Tai Nguyen and used with permission of the author. All other rights reserved by the author.

Software Frameworks and REST APIs: Building Scalable, Maintainable Systems

Hello everyone, and welcome to my blog entry for this quarter.

For this quarter’s self-directed professional development, I selected the article “What frameworks are commonly used by REST API developers?” by MoldStud (moldstud.com), which surveys popular frameworks for building REST APIs and outlines why they matter.
Because in our classes we’ve been learning about software architecture, design patterns, and object-oriented design, I wanted to explore how frameworks help bring those concepts into real projects, especially when implementing REST APIs.

Summary of the Article

The article begins by explaining that when developers build REST APIs, choosing the right framework is critical. It reviews several top frameworks, such as:

  • Express.js (for Node.js): praised for its simplicity, flexibility, and modular middleware system.
  • Spring Boot (Java): known for its strong ecosystem (Spring Data, Spring Security, etc.) and its ability to rapidly build production-ready REST APIs.
  • Python frameworks such as FastAPI and Flask, which also permit building RESTful services with less boilerplate and good developer productivity.

The article emphasizes that frameworks provide built-in features like routing, serialization, input validation, authentication/authorization, and documentation support, which means developers can focus more on business logic rather than boilerplate.
It also notes that frameworks differ in trade-offs (simplicity vs. features, performance vs. flexibility) so choosing depends on project size, team skill, performance expectations, and ecosystem.
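
To make the routing-and-serialization point concrete, here is a toy Python sketch (my own illustration, not from the article) of the plumbing a framework handles for you, so that your own code is only the handler’s business logic. Express.js, Spring Boot, and Flask each ship a production-grade version of this dispatch-and-serialize loop.

```python
import json

routes = {}  # (method, path) -> handler function

def route(method, path):
    """Register a handler, the way @app.get("/users") does in a real framework."""
    def decorator(handler):
        routes[(method, path)] = handler
        return handler
    return decorator

@route("GET", "/users")
def list_users():
    # This is the only part the application developer writes: business logic.
    return [{"id": 1, "name": "Ada"}]

def dispatch(method, path):
    """Framework plumbing: look up the route and serialize the result to JSON."""
    handler = routes.get((method, path))
    if handler is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(handler())

print(dispatch("GET", "/users"))  # (200, '[{"id": 1, "name": "Ada"}]')
```

Everything outside `list_users` is boilerplate a framework would provide, which is exactly the article’s argument for using one.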

Why I Selected This Resource

I chose this article because, in our coursework and my professional development (including my internship at The Hanover Insurance Group), I have seen frameworks play a key role in making software more maintainable and scalable. Given that we have covered design principles and object-oriented design, understanding how frameworks support those principles (and how REST APIs fit into that) felt like a natural extension of our learning. I wanted a resource that bridges theory (design, architecture) with practice (framework usage, API development), and this article did just that.

Personal Reflections: What I Learned and Connections to Class

Several thoughts stood out for me:

  • Frameworks help enforce design discipline. For example, while in class we’ve talked about abstraction, encapsulation, and modular design, using a framework like Spring Boot means that the structure (controllers, services, repositories) often mirrors those concepts. The separation of concerns is built in.
  • When building a REST API, using a framework means you benefit from standard patterns (e.g., routing endpoints, serializing objects, handling errors) so you can spend more time thinking about how your code relates to design principles, not reinventing infrastructure.
  • I’ve seen in projects (including my internship) how choosing a framework that aligns with the team’s language, domain, and architecture reduces friction. For instance, if you need to scale to many services, choose a framework that supports microservices or lightweight deployments. The article’s discussion about trade-offs reminded me of that.
  • One connection to our class: We’ve drawn UML diagrams to model systems, show how classes relate, and plan modules. Framework usage is like the next step: once the design is set, frameworks implement those modules, enforce contracts, and provide the infrastructure. In particular, when those modules expose REST APIs, the design decisions we make (interface boundaries, class responsibilities) reflect directly in how endpoints are designed.
  • It also made me reflect on how REST APIs themselves are more than just endpoints, they represent system architecture, and frameworks help in realizing that architecture. For example, using a framework that supports versioning, middleware, and layered architecture helps make the API maintainable as it evolves.
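
The controller/service/repository layering mentioned above can be sketched in a few lines. This is a minimal Python analogy with hypothetical names (Spring Boot enforces the same separation in Java via annotations); each layer only talks to the one below it.

```python
class UserRepository:
    """Data-access layer: only knows how to store and fetch."""
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)


class UserService:
    """Business-logic layer: validation rules live here, not in the controller."""
    def __init__(self, repo):
        self._repo = repo

    def register(self, user_id, name):
        if not name:
            raise ValueError("name is required")
        self._repo.save(user_id, name)

    def get_user(self, user_id):
        return self._repo.find(user_id)


class UserController:
    """Interface layer: translates a request into a service call and a response."""
    def __init__(self, service):
        self._service = service

    def post_user(self, user_id, name):
        self._service.register(user_id, name)
        return {"status": "created", "id": user_id}


controller = UserController(UserService(UserRepository()))
print(controller.post_user(1, "Ada"))  # {'status': 'created', 'id': 1}
```

The separation of concerns is built in: swapping the repository for a real database touches one class, not the whole API.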

Application to Future Practice

Going forward, I plan to apply these lessons in both academic and professional work:

  • When building a project (in class or internship) that uses REST APIs, I’ll choose a framework early in the design phase and consider how the framework’s structure maps to my design model (classes, modules, responsibilities).
  • I’ll evaluate trade-offs consciously: If I need speed and simplicity, maybe a lightweight framework; if I need enterprise features (security, data access, microservices), maybe a full-featured one like Spring Boot.
  • I’ll use the framework’s features (routing, validation, middleware) to enforce design principles like modularity, readability, and maintainability rather than writing everything by hand.
  • From the API perspective, I’ll ensure that endpoint design aligns with our design models: models reflecting resources, controllers respecting single responsibility, services encapsulating business logic, all supported by the framework.
  • Finally, I’ll treat the framework as part of the architecture, not just a tool, meaning I’ll reflect on how framework conventions influence design decisions, and how my design decisions influence framework usage.

Citation / Link
Crudu, Vasile & MoldStud Research Team. “What frameworks are commonly used by REST API developers?” MoldStud, October 30, 2024. Available online: https://moldstud.com/articles/p-what-frameworks-are-commonly-used-by-rest-api-developers

From the blog Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

Code Reviews: Writing Better Software Through Collaboration and Feedback

Hello everyone, and welcome to my third blog entry of the semester!

For this week’s self-directed professional development, I read the article “Best Practices for Peer Code Review” from SmartBear Software (smartbear.com). This article provides practical guidelines and research-backed insights on how to conduct effective code reviews in a professional setting. Reading it gave me a new appreciation for how structured review processes can transform not only the quality of code but also team communication and learning.

Summary of the Article

The article begins by explaining that code review is one of the most powerful tools for improving software quality. It cites studies showing that even small, well-structured reviews can significantly reduce bugs and improve maintainability.

Some key practices stood out to me:

  • Keep reviews small: Review no more than 400 lines of code at a time.
  • Limit review sessions: Spend no more than 60 minutes per review to stay focused.
  • Encourage collaboration: Authors should add comments and explanations to help reviewers understand their changes.
  • Focus on learning, not blame: Code review is most effective when it fosters shared ownership and continuous improvement.

The article also introduces metrics like inspection rate and defect rate, which can be used to measure how effective a review process is. Overall, the main message is that a good review culture combines process discipline with respect, clarity, and open communication.
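
Those two metrics are simple ratios; here is a quick sketch of how I would compute them (my own illustration, not SmartBear’s code):

```python
def inspection_rate(loc_reviewed, hours_spent):
    """Lines of code reviewed per hour of review time."""
    return loc_reviewed / hours_spent

def defect_rate(defects_found, hours_spent):
    """Defects found per hour of review time."""
    return defects_found / hours_spent

# A 400-line review done in one focused hour, surfacing 6 issues:
print(inspection_rate(400, 1.0))  # 400.0 LOC/hour
print(defect_rate(6, 1.0))        # 6.0 defects/hour
```

Tracking these over time shows whether reviews are being rushed (inspection rate climbing while defect rate drops), which connects back to the 400-line and 60-minute guidelines above.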

Why I Selected This Resource

I chose this article because it connects directly to my real-world experience at The Hanover Insurance Group, where I worked as a PL Automation Developer intern. During my time there, code reviews were a core part of our workflow. Every piece of automation code had to go through review before deployment. I noticed that following consistent guidelines, like those mentioned in the SmartBear article, made a huge difference in maintaining quality and avoiding errors.

Since we’ve been focusing on software design and collaboration in class, this article helped me bridge what I’ve learned in theory with what professionals practice daily.

Personal Reflections: What I Learned and Connections to Class

Reading this article helped me connect classroom concepts like clean design, modularity, and readability with the real-world practice of peer review. At Hanover, I experienced firsthand how detailed feedback from senior developers helped me understand why small changes, like naming conventions or modularizing functions, mattered in the long run.

This article reminded me that code review isn’t just about technical correctness, it’s also about communication. Explaining your decisions helps others understand your design thinking, just like how UML diagrams or documentation clarify structure in class projects.

Application to Future Practice

Going forward, I plan to adopt SmartBear’s recommendations in both academic and professional work. I’ll keep my commits small, make my code clear and documented before review, and always focus on learning from feedback rather than defending my work. I’ve learned that humility and collaboration are just as essential to great software as technical skill.

Citation / Link
SmartBear Software. “Best Practices for Peer Code Review.” SmartBear, 2024. Available online: https://smartbear.com/learn/code-review/best-practices-for-peer-code-review/

From the blog CS@Worcester – Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

Software Frameworks: an Introduction

For this post I listened to the Sourcetoad podcast called Leveraging Frameworks for Your Software Development Project. This podcast features three software developers who work for Sourcetoad, a software consulting and development firm, in which they discuss software frameworks. https://www.youtube.com/watch?v=ik4d2Jf7Rik&t=1539s

To begin, we must define what exactly a framework is. A framework is a collection of pre-built code that forms the blueprint of an app, without your needing to write that code yourself. Frameworks are tremendously useful because any coding application is trying to solve a problem, and to start any project there is a kind of “price of entry”: login, authentication, security, a database, and a server, to name a few. A framework gives you these things right off the bat as pre-built modules, which lets you get to work on the actual solution faster.

Now, the best part about frameworks is that the vast majority are free, free meaning the code is open source: software made by the software community for the software community. Anyone can view, edit, and modify it.

So when would you not want to use a framework? Say you have a simple application built on a framework, but when you run it you notice it renders slower than expected. That is because with any framework you get a lot of pre-built code, some of which you might never use, and that unused code slows down rendering time. As the podcast wonderfully puts it, “the great thing about a framework you get a lot of stuff, but you also get a lot of stuff.”

You can also build on top of frameworks, and one popular way of doing this is with a CMS, or content management system (frontend and backend). A CMS enables users to manage the content on a website themselves, without needing to know how to code; it gives non-technical people the ability to make changes to the website instantly. A con is vendor lock-in, meaning the site cannot transition easily to another platform.

There is also the headless CMS. This handles the editing and storing of content but is not responsible for how the content is presented visually; it has no frontend, only a backend. Some pros of a headless CMS are that it is an easier content manager, it gives developers more freedom to write code at scale, and content can be created once and published everywhere.

Overall, I’ve heard the word “framework” get tossed around in the computer science world, but I never truly had a grasp on what it really was. After listening to this podcast, I feel good about what it is and am eager to start a project using a framework, and even more eager to explore the world of CMSs and headless CMSs once I feel comfortable with frameworks.

From the blog CS@Worcester – Programming with Santiago by Santiago Donadio and used with permission of the author. All other rights reserved by the author.

Effective API Design

I have been reading one of the articles by Martin Fowler titled APIs: Principles and Best Practices to Design Robust Interfaces. It discusses how APIs, the small bridges known as application programming interfaces, enable various software systems to communicate and stay in sync with one another. Fowler emphasizes points such as clarity of naming, simplicity, consistency, and not breaking old versions, and supports them with real code demos and real-life scenarios. It is a combination of theory and practical tips that anyone interested in software design can dive into.

I picked this read because API design is one of the foundations of software engineering and intersects with my course on Software Development and Integration. I want to build scalable apps that stay clean and are easy for other developers to connect to. Exploring Fowler’s work was my way of learning how to create interfaces on sound principles, so that others can jump in and extend them without hassle. Unlike material that stays theory-heavy, this article presents actual, practical tactics, just what one needs at school and on the job.

The importance of versioning and maintaining backward compatibility was one of my biggest takeaways from the article. Fowler reminds us that APIs must evolve without breaking existing clients, which requires you to plan, test, and communicate with your users. That resonated with me: in group projects I had done before, a minor change to our module could break things down the line. On reflection, well-planned API design rules seem like the natural means of preventing such headaches and wasting less time.
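
A tiny example of the backward-compatibility idea (my own illustration, not Fowler’s code): evolve an interface by adding an optional parameter with a default, so every existing caller keeps working unchanged.

```python
# Version 1 of the interface: every caller uses get_orders(user_id).
# Version 2 adds filtering without breaking those callers,
# because the new parameter defaults to the old behavior.
def get_orders(user_id, status=None):
    orders = [
        {"user": user_id, "status": "shipped"},
        {"user": user_id, "status": "pending"},
    ]
    if status is None:
        return orders  # old behavior preserved for existing clients
    return [o for o in orders if o["status"] == status]


print(len(get_orders(42)))               # 2: old call sites still work
print(get_orders(42, status="shipped"))  # new callers can filter
```

The same pattern applies to REST endpoints: add optional query parameters or new fields rather than renaming or removing what clients already depend on.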

I also liked the fact that Fowler emphasized intuitive naming and consistency. According to him, the more predictable the method names, parameters, and endpoints are, the friendlier an API is. Establishing a proper structure and hierarchy saves a fair deal of time, results in far fewer mix-ups, accelerates integration, and makes the whole dev process enjoyable. I was reminded that considerate design serves not only the end user but also the people who actually build with the API, making the whole ecosystem more efficient and simpler to maintain.

In the future, I will apply these tricks to my class projects and whatever professional work I happen to do. Whenever I create an API to support a web application or integrate third-party services, I will focus on clean documentation, predictability, and retaining older versions. Following these rules, I will deliver interfaces that are easy to maintain, that assist other developers, and that survive updates. This article has made me even more respectful of the discipline of API design, and I am ready to put these tangible strategies to immediate use.

From the blog CS@Worcester – Site Title by Yousef Hassan and used with permission of the author. All other rights reserved by the author.

Importance of version control in the process of development

[Image: an infographic illustrating version control processes in Git, showcasing key operations like fork, merge, and pull request.]

As a software developer you will undoubtedly run into version control on any project you work on. Eventually every developer has to fix bugs or add a feature to a product. To learn more about version control, there is no better website to learn from than GitHub.

What is Version Control?

[Image: an illustration of a distributed version control system, showing interactions between developers and the main repository.]

GitHub gives an amazing analogy: imagine you’re a violinist in a 100-piece orchestra, but you and the other musicians can’t see the conductor or hear one another. Instead of synchronized instruments playing music, the result is just noise.

Version control is a tool used to prevent this noise from happening. It helps streamline development, keep track of changes, and allow projects to scale.

Version Control tool factors

Version control may not be necessary depending on the scale of your project, but most of the time it is useful to have it set up. Some factors in deciding on a version control tool include:

  • Scalability: Large projects with many developers and files benefit from VC
  • Ease of Use: User friendly UI helps manage learning curves and adoption.
  • Collaboration features: Supporting multiple contributors and communication between them.
  • Integration with existing tools: Using tools everyone already has access to.
  • Supports branching: Ability for developers to work on different parts of development benefits a project greatly.

Common Version Control Applications

  • Git: Git is an open-source distributed version control tool preferred by developers for its speed, flexibility, and because contributors can work on the same codebase simultaneously.
  • Subversion (SVN): Subversion is a centralized version control tool used by enterprise teams and is known for its speed and scalability.
  • Azure DevOps Server: Previously known as Microsoft Team Foundation Server (TFS), Azure DevOps Server is a set of modern development services, a centralized version control, and reporting system hosted on-premises.
  • Mercurial: Like Git in scalability and flexibility, Mercurial is a distributed version control system.
  • Perforce: Used in large-scale software development projects, Perforce is a centralized version control system valued for its simplicity and ease of use.

Final thoughts

Every developer has at one point heard of Git, and without a doubt it may be one of the best developer tools ever invented. I have prior experience using version control, but this research was an important refresher. If you wish to learn directly from GitHub, you can read the article this blog was inspired by here.

From the blog CS@Worcester – Petraq Mele blog posts by Petraq Mele and used with permission of the author. All other rights reserved by the author.

Refactoring your program

Sometimes when a program undergoes constant updates it can get messy; in cases like this it can be useful to refactor it. I’ve had a few experiences cleaning up a program, but I have never refactored an entire one. Luckily, the developers over at Refactoring Guru have a website dedicated to this subject.

[Image: an illustrated depiction of the refactoring process, highlighting the importance of clean code.]

Purpose for refactoring

When you refactor a program you are fighting something they call technical debt and creating clean code. Clean code brings a few benefits:

  • Obvious for other programmers
  • Doesn’t contain duplicate code
  • Minimal number of classes and other moving parts
  • Passing of all tests
  • Easier and cheaper to maintain

What is technical debt?

“Technical debt” as a metaphor was originally suggested by Ward Cunningham using bank loans as an example.

Getting a loan from a bank lets you make purchases faster, but on top of the principal you now owe interest, and with time you can rack up so much interest that it exceeds your total income, making full repayment impossible.

The same concept can be applied to code: speeding ahead on new features without testing will gradually slow your progress.

Some causes of technical debt include:

  • Business pressure
  • Lack of understanding the consequence
  • Failing to combat the strict coherence of components
  • Lack of tests, documentation, communication.
  • Long-term simultaneous development in several branches
  • Delayed refactoring
  • Incompetence

So when should one refactor?

Refactoring guru comes up with a few instances on when to refactor.

  • Rule of three:
    • When doing something for the first time, just get it done.
    • When doing something similar for the second time, cringe at having to repeat but do the same thing anyway.
    • When doing something for the third time, start refactoring.
  • Adding a feature:
    • If you have to deal with someone else’s dirty code, try refactoring it first; it makes future features easier.
  • Fixing a bug:
    • Clean the code and errors will discover themselves
  • Code reviews:
    • Last chance to tidy up the code
    • It’s best to perform these reviews in a pair with the author

We know when, but how?

Refactoring is done via a series of small changes, each making the existing code slightly better while leaving the program in working order.

Here is a checklist on refactoring done the right way:

  • The code is cleaner
  • There should not be new functionality
  • All existing tests pass
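
A small before-and-after in Python (my own example, not from Refactoring Guru) shows that checklist in action: duplicated logic is extracted into one function, no functionality changes, and the same checks still pass.

```python
# Before: the same tax calculation duplicated in two places
# (the "rule of three" says the next copy triggers a refactor).
def food_total_before(prices):
    total = 0
    for p in prices:
        total += p + p * 0.07
    return total

def book_total_before(prices):
    total = 0
    for p in prices:
        total += p + p * 0.07
    return total

# After: the duplication extracted into one cleaner function.
# Behavior is identical, so existing tests keep passing.
def total_with_tax(prices, rate=0.07):
    return sum(p * (1 + rate) for p in prices)

# Same result from old and new code (allowing for float rounding):
assert abs(food_total_before([10, 20]) - total_with_tax([10, 20])) < 1e-9
```

Each such change is small and leaves the program in working order, which is exactly how the site says refactoring should proceed.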

Final Thoughts:

Overall, I found this website really informative and would recommend Refactoring Guru as a starting point. The most important thing I got out of it is that developers should always try to write clean code, or clean their code as it’s being developed. Unfortunately, software development can be very time-consuming and that’s not always possible, which is why refactoring is important.

From the blog Petraq Mele blog posts by Petraq Mele and used with permission of the author. All other rights reserved by the author.