Quarter 2 Blog

Git bisect turns the often painful process of “when did this bug creep in?” into a systematic detective job. With just two reference commits — one known good, one known bad — it lets you binary-search through your project’s history, test just a handful of commits, and land on the exact commit that caused the regression.

For developers working on nontrivial projects — multiple contributors, long histories, frequent changes — git bisect is one of those under-appreciated but powerful tools. It doesn’t replace careful code review or good commit hygiene — but when something breaks, it can save you hours, even days, compared to manual debugging.

What makes git bisect so valuable is not just the time you save, but the confidence it gives you in understanding when a bug was introduced. Instead of guessing, scrolling through commit logs, or trying to recall what you changed three weeks ago, you rely on a proven algorithm: binary search. This means that whether your repository has ten commits or ten thousand, you’ll narrow down the culprit quickly. Many developers are surprised to discover they only need to test a handful of commits before git bisect pinpoints the exact moment things went wrong.

The workflow itself is remarkably simple. After identifying your known good and known bad points in history, you start bisecting. Git takes you to the midpoint commit, and you run whatever test—or manual check—reveals the bug. Based on whether the bug appears, you mark the commit good or bad. Git narrows the range and takes you to the next candidate. This process continues until only one commit remains: the one where the bug was introduced. The elegance is in the repetition. Git does the heavy lifting while you simply provide yes/no answers.

Beyond its practical benefits, git bisect also reinforces better development habits. When you see exactly how helpful small, atomic commits are—or how painful vague or bloated commits can be—you naturally start writing cleaner, more meaningful commit histories. And when a bug is tracked down quickly and precisely, it encourages the whole team to trust the process rather than fear broken code.
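
The yes/no loop can even be fully automated with `git bisect run`, which marks each commit good or bad based on a test command's exit code. Here is a minimal, self-contained sketch in a throwaway repository (the commit messages and the grep-based "test" are hypothetical, just to make the demo reproducible):

```shell
# Build a five-commit history where commit 4 introduces a "bug",
# then let `git bisect run` binary-search for the first bad commit.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo ok > app.txt
git add app.txt
git commit -qm "c1: good"
for i in 2 3; do echo ok >> app.txt; git commit -qam "c$i: good"; done
echo bug >> app.txt
git commit -qam "c4: introduces bug"      # <- the regression
echo more >> app.txt
git commit -qam "c5: still broken"

# The yes/no check: exit non-zero when the bug string is present.
printf '#!/bin/sh\n! grep -q bug app.txt\n' > test.sh
chmod +x test.sh

git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"  # bad = HEAD, good = first commit
result=$(git bisect run ./test.sh)        # git answers good/bad at each midpoint for us
git bisect reset >/dev/null
echo "$result" | tail -n 6                # ends with "... is the first bad commit"
```

With only five commits the search takes two or three steps; with ten thousand it would still take only around fourteen.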

In a world where software grows more complex every day, debugging tools that bring clarity are invaluable. Git bisect isn’t flashy, but it’s one of the most reliable debugging companions a developer can have. If you’ve never used it before, learning it just once can save you countless hours in the future—and make you feel like a code detective each time a mystery bug appears.
Citation: YouTube, http://www.youtube.com/watch?v=Q-kqm0AgJZ8. Accessed 10 Dec. 2025.

From the blog cs@worcester – Software Development by Kenneth Onyema and used with permission of the author. All other rights reserved by the author.

Principle of Least Knowledge (AKA Law of Demeter)

Hello everyone, today I will be talking about the Principle of Least Knowledge (AKA the Law of Demeter). When first looking into this topic I was unsure of exactly what it was and how it would be applied to programming. While doing my research I found that the Khoury College of Computer Sciences at Northeastern University, where this law was first introduced, has a page dedicated to the topic.

General Formulation

Illustration of the Law of Demeter, highlighting the principle of limiting interactions between objects.

The LoD is essentially a simple style rule for designing object-oriented systems.

“Only talk to your immediate friends” is the motto. 

Professor Lieberherr, the author, states the formulation: “Each unit should have only limited knowledge about other units: only units ‘closely’ related to the current unit.” Its main motivation is to control information overload, thus helping memory management, as each item is closely related.

You can informally summarize the Law with these three formulations:

  • Each method can only send messages to a limited set of objects, namely to the argument objects and to the immediate subparts of the class to which the method is attached.
  • Each method is “dependent” on a limited set of objects (organize dependencies)
  • Each method “collaborates” with a limited set of objects (organize collaborations)

To formulate the Law we can choose from the following independent possibilities:

  • Object/Class
    • Class formulation is intended to be compile-time
    • Object formulation is intended to be followed conceptually
  • Messages/Generic functions/Methods
  • Weak/Strong
    • If we interpret it as all instance variables, including the inherited ones, we get the weak form of the Law. If we exclude the inherited instance variables, we get the strong form of the Law.
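
A minimal Java sketch of the “only talk to your immediate friends” rule (the Wallet/Customer names are hypothetical, not from the original page): instead of a caller reaching through a Customer to its Wallet, the Customer exposes a method of its own, so the caller only ever talks to its immediate friend.

```java
// Hypothetical Wallet/Customer classes illustrating the Law of Demeter.
class Wallet {
    private double balance = 20.0;

    double getBalance() { return balance; }
    void deduct(double amount) { balance -= amount; }
}

class Customer {
    private final Wallet wallet = new Wallet();

    // A LoD violation would be: shop.getCustomer().getWallet().deduct(price).
    // Instead, Customer exposes pay(), so callers talk only to Customer,
    // and the method itself sends messages only to its immediate subpart.
    boolean pay(double price) {
        if (wallet.getBalance() < price) return false;
        wallet.deduct(price);
        return true;
    }
}

public class LodDemo {
    public static void main(String[] args) {
        Customer customer = new Customer();
        System.out.println(customer.pay(5.0));   // true: balance covers the price
        System.out.println(customer.pay(100.0)); // false: insufficient balance
    }
}
```

The payoff is exactly the renaming benefit described below: if Wallet’s protocol changes, only Customer needs to change, not every caller.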

Benefits

In a paper written by Lieberherr, a couple of facts are stated about the benefits:

  • If the weak or strong LoD is followed and if class A’s protocol is renamed, then at most the preferred client methods of A and A’s subclasses require modification.
  • If the weak or strong LoD is followed and if the protocol of class A changes, then only preferred client methods of A and its subclasses need to be modified, and only methods in the set of potential preferred clients of A and its subclasses need to be added.
  • There are even more benefits highlighted in the paper pertaining to limiting information overload.

Final Thoughts:

Prior to reading this webpage I knew nothing about this law/principle; now I understand that it is a fairly useful rule for its respective use cases. The law teaches you a way to program with classes, inheritance, abstraction, and a few other techniques. In fact, there is more depth to this than I can fully fit into this blog post. I would highly recommend you check out the page, as it contains all the information you need, along with sources, to learn this.

From the blog Petraq Mele blog posts by Petraq Mele and used with permission of the author. All other rights reserved by the author.

How to become a SOLID software developer.

By Petraq Mele

Hello again to those reading this blog. This time I want to talk about an extremely relevant topic in the programming atmosphere, a concept known as SOLID. I managed to find a great section written by Manoj Phandis on these principles.

SOLID is an acronym for five OOP design principles intended to make software designs more understandable, flexible, and maintainable.

What are the five main design principles?

SINGLE RESPONSIBILITY PRINCIPLE: This principle states a class should have one, and only one, reason to change. Let’s take an Animal class as an example: as opposed to the Animal class handling both sound and feeding, separate those responsibilities into separate classes.

Some benefits include:

  • more readable, that is easier to understand
  • less error prone
  • more robust
  • better testable
  • better maintainable and extendable
  • maximizes the cohesion of classes.
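
A minimal Java sketch of that Animal example (the class names are hypothetical): the sound and feeding responsibilities live in their own classes, so each class has exactly one reason to change.

```java
// Hypothetical split of the Animal example: one responsibility per class.
class Animal {
    private final String name;

    Animal(String name) { this.name = name; }
    String getName() { return name; }
}

class AnimalSoundPlayer {   // changes only when sound logic changes
    String makeSound(Animal a) { return a.getName() + " makes a sound"; }
}

class AnimalFeeder {        // changes only when feeding logic changes
    String feed(Animal a) { return a.getName() + " has been fed"; }
}

public class SrpDemo {
    public static void main(String[] args) {
        Animal rex = new Animal("Rex");
        System.out.println(new AnimalSoundPlayer().makeSound(rex));
        System.out.println(new AnimalFeeder().feed(rex));
    }
}
```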

OPEN CLOSED PRINCIPLE: “Open for extension” means the behavior of a module can be extended. “Closed for modification” means that adding to or extending a module’s behavior should not result in changes to the module’s source or binary code.

Demonstration of the Open/Closed Principle in object-oriented programming.

An example Manoj gives is a credit card company wanting to introduce a new preferred credit card product with double reward points. Instead of using conditionals, you create an extension via implementation inheritance or interface abstraction.
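
A minimal Java sketch of that idea (the card names and reward rates are hypothetical): the new preferred product is added as a new implementation of an abstraction, rather than by editing existing code with conditionals.

```java
// Hypothetical credit card products: behavior is extended via a new subtype.
interface CreditCard {
    double rewardPoints(double purchase);
}

class StandardCard implements CreditCard {
    public double rewardPoints(double purchase) { return purchase; }     // 1x points
}

class PreferredCard implements CreditCard {                              // new product, added
    public double rewardPoints(double purchase) { return purchase * 2; } // without touching StandardCard
}

public class OcpDemo {
    public static void main(String[] args) {
        // Existing client code handles both products unchanged.
        for (CreditCard card : new CreditCard[] { new StandardCard(), new PreferredCard() }) {
            System.out.println(card.rewardPoints(100.0));
        }
    }
}
```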

LISKOV SUBSTITUTION PRINCIPLE: LSP states functions that use references to base classes must be able to use objects of the derived class without knowing it. For LSP compliance we need to follow some rules that can be categorized into 2 groups:

  • Contract rules
    • Preconditions cannot be strengthened in a subtype, and postconditions cannot be weakened in a subtype
    • Invariants must be maintained
  • Variance rules
    • There must be contravariance of the method arguments in the subtype and covariance of the return type in the subtype
    • No new exceptions can be thrown by the subtype unless they are part of the existing exception hierarchy.
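
A minimal Java sketch of the core idea (the Bird/Sparrow classes are hypothetical): client code written against the base class keeps working, unchanged and unaware, when handed the derived class.

```java
// Hypothetical base class and a substitutable subtype.
class Bird {
    String move() { return "flies"; }
}

// Sparrow honors Bird's contract: no strengthened preconditions,
// no new exceptions, the same kind of return value.
class Sparrow extends Bird { }

public class LspDemo {
    // The client references the base class only and never checks the subtype.
    static String describe(Bird b) { return "This bird " + b.move(); }

    public static void main(String[] args) {
        System.out.println(describe(new Bird()));
        System.out.println(describe(new Sparrow())); // works without knowing it's a Sparrow
    }
}
```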

INTERFACE SEGREGATION PRINCIPLE: Clients should not be forced to depend on methods they do not use. Interface segregation violations result in classes depending on things they don’t need, increased coupling, and reduced flexibility/maintainability.

Tips to follow:

  • Prefer small, cohesive interfaces to “fat” interfaces
  • Creating smaller interfaces with just what we need
  • Have the fat interface implement your new interface.
  • Dependency of one class to another should depend on the smallest possible interface.
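
A minimal Java sketch of those tips (the interface and class names are hypothetical): with small, cohesive interfaces, the simple printer never has to stub out a scan method it does not use, while the multifunction device composes both.

```java
// Hypothetical small, cohesive interfaces instead of one "fat" Machine interface.
interface Printer {
    String print(String doc);
}

interface Scanner {
    String scan();
}

class BasicPrinter implements Printer {            // not forced to stub scan()
    public String print(String doc) { return "printed: " + doc; }
}

class MultiFunctionDevice implements Printer, Scanner {
    public String print(String doc) { return "printed: " + doc; }
    public String scan() { return "scanned page"; }
}

public class IspDemo {
    public static void main(String[] args) {
        Printer p = new BasicPrinter();            // this client depends only on Printer
        System.out.println(p.print("report"));
        System.out.println(new MultiFunctionDevice().scan());
    }
}
```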

DEPENDENCY INVERSION PRINCIPLE: This principle has two parts. The first part says high-level modules should not depend on low-level modules; both should depend on abstractions. The second part says abstractions should not depend on details; details should depend on abstractions.
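
A minimal Java sketch covering both parts (the class names are hypothetical): the high-level NotificationService and the low-level EmailSender both depend on the MessageSender abstraction, and the abstraction knows nothing about email details.

```java
// Hypothetical modules: both sides depend on the abstraction in the middle.
interface MessageSender {
    String send(String text);
}

class EmailSender implements MessageSender {       // low-level detail
    public String send(String text) { return "email: " + text; }
}

class NotificationService {                        // high-level module
    private final MessageSender sender;

    NotificationService(MessageSender sender) {    // abstraction injected, not constructed
        this.sender = sender;
    }

    String notifyUser(String text) { return sender.send(text); }
}

public class DipDemo {
    public static void main(String[] args) {
        NotificationService service = new NotificationService(new EmailSender());
        System.out.println(service.notifyUser("order shipped"));
    }
}
```

Swapping EmailSender for, say, an SMS implementation would require no change to NotificationService.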


Final thoughts:

Overall, these principles are very useful when it comes to object-oriented software development. I learned quite a good amount, and I want to thank Manoj Phandis for their amazing outline of the SOLID principles. I would advise you to check out his website in case you’re interested in learning more.

From the blog Petraq Mele blog posts by Petraq Mele and used with permission of the author. All other rights reserved by the author.

Blog Post for Quarter 2

October 21st, 2025

Recently, my class has been going over material regarding teamwork and ways to approach building software or a product. For example, the waterfall method, agile methodology, and scrum have come up in discussion. This has reminded me of POGIL, since POGIL was a group-work method used in the classroom semi-frequently.

Because of this correlation, I decided to look at blogs about POGIL. However, I noticed something interesting about the blog I chose, so I chose two, just because I found some things interesting. The first was made with WordPress.com, much like the one I’m writing here. It was about POGIL. The blog appeared to just be called “The POGIL Project,” or that’s what I surmised after looking at the web address. Additionally, there were some interesting notes regarding how it appears to be designed for faculty teaching or implementing POGIL-based team activities. However, the last post appears to be from 2015, which is not as recent as I’d like. (There also appears to be someone with the same name as the author of this blog who is cited as impactful to the development of POGIL, which is pretty cool, though I couldn’t find concrete evidence that they were the same person.)

So, I looked for an alternative. The author was not listed, which isn’t great, but it is recent, and it also appears to be about POGIL. The most interesting part was how POGIL was applied to science as opposed to actual computer science; actually, both blogs do that.

This new blog I picked was basically an overview of how POGIL works and why it is good to use. It overviewed the reasons why POGIL is used and what it is intended to do. It basically overlaps with what I know about POGIL already.

In a way, it is interesting how this means POGIL is both universal and useful. It isn’t just a weird computer science class thing we do; it’s an actual science thing. That is definitely more interesting to know, considering I rarely encountered POGIL before college. It probably won’t really affect my opinion on POGIL, but it is mildly interesting that it is something I’ll see around. I guess I can keep that in mind.

FIRST INITIAL BLOG: https://thepogilproject.wordpress.com/

SECOND, REVIEWED BLOG POST: https://www.transtutor.blog/pogil-guide-high-school-biology

From the blog CS@Worcester – Ryan's Blog Maybe. by Ryan N and used with permission of the author. All other rights reserved by the author.

Application Programming Interfaces

Source: altexsoft

Application Programming Interfaces, or APIs, are the communicating code between a client and a server, kind of like a user interface for the computer. They receive information from the user interface, send it to the server to get a result, and send that result back to the user interface. APIs can be private, partnered, or public. Private means the API is only available for one organization to use. Partnered means the API is available to a group of organizations that are in partnership with each other. Public means the API is available to everyone. In our setting, the Thea’s Pantry API is public; it is mainly used by food pantry volunteers, but the code is openly available on GitLab.

There are also different types/formats of APIs, like Remote Procedure Call (RPC), Simple Object Access Protocol (SOAP), and Representational State Transfer (REST). RPC is when the client requests data from the server and the server sends it back. SOAP is a protocol for exchanging data in a decentralized environment; it uses XML messaging between the client and server over HTTP or SMTP, and it is known for its high data security. REST is a style where resources are given URLs that can be used to request data using HTTP methods. REST is simpler and more versatile than SOAP because it requires less complex code and supports many formats beyond XML.

In class and in the homework assignments, we focused on REST APIs with JSON messaging. We practiced sending requests for GET, POST, PUT, PATCH, and DELETE and saw some of their different responses, like success, entry error, and server error. We also learned how to write our own requests for creating guests, changing a guest’s information, retrieving a guest’s information, and deleting a guest from the system. Some methods, like POST, require arguments as well. To post a new guest, the request needs to have the new UUID and all the information their user will have, like whether they are a resident and whether they receive financial assistance. To receive a list of users that fit certain criteria, the GET request needs a specific tag in the URL to specify the endpoint.
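
As a sketch, such a request/response exchange for creating a guest might look like the following (the path, field names, and values here are hypothetical illustrations, not the actual Thea’s Pantry API):

```http
POST /guests HTTP/1.1
Content-Type: application/json

{
  "uuid": "00000000-0000-0000-0000-000000000000",
  "isResident": true,
  "receivesAssistance": false
}

HTTP/1.1 201 Created
Content-Type: application/json

{ "uuid": "00000000-0000-0000-0000-000000000000" }
```

A malformed body would instead come back with a 4xx entry error, and a backend failure with a 5xx server error.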

Learning APIs will be extremely helpful for the future. In software development, APIs are everywhere. While REST is the most common, it is still useful to know the other kinds as well, like SOAP, and how to use them. Everyone uses APIs: Google, Amazon, Microsoft, Netflix, and many, many more. Getting first-hand experience using and building APIs allows me to expand my skill set for real-life applications.

From the blog ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Understanding the Twelve Principles of Agile Software

The 12 Principles of Agile Software explained

For this professional development entry, I chose to read the article titled The Twelve Principles of Agile Software Explained from the Agile Alliance website. The article provides an overview of the core ideas that shape the Agile Manifesto and explains how they guide the way modern software is built. What immediately caught my attention was how the principles focus on people, teamwork, and adaptability rather than strict processes or heavy documentation. The article highlights that Agile is not simply a development framework but a philosophy centered on collaboration and continuous improvement. It emphasizes that successful teams listen to their customers, respond to change quickly, and work together to deliver valuable software frequently rather than saving everything for one big release.

I found this resource helpful because it connects directly with what we have been studying in CS-343 about software processes and team communication. In many group projects, I have experienced situations where rigid planning or lack of communication slowed progress. Reading this article helped me see that Agile’s emphasis on flexibility and open dialogue could have prevented some of those problems. The principle that stood out to me most was “responding to change over following a plan.” This idea made me realize that while planning is important, being adaptable is even more valuable. Real-world projects rarely go exactly as expected, and being able to adjust quickly is a skill that separates good teams from great ones.

Another key takeaway for me was the focus on sustainable development. The article explained that teams should maintain a consistent pace and avoid burnout, which is something I think every computer science student can relate to. It is easy to fall into a cycle of late nights and last-minute fixes, but this principle reminded me that long-term quality depends on balance and discipline. The principle about motivated individuals also resonated with me. It stated that the best results come from trusting team members and giving them the environment and support they need to succeed. I have noticed this in my own coursework; when everyone feels respected and valued, collaboration becomes smoother and creativity increases.

The article also touched on the importance of reflection, encouraging teams to pause regularly to discuss what went well and what could be improved. This aligns perfectly with the concept of continuous improvement that we discuss in class. I learned that retrospectives are not just about fixing mistakes but about strengthening the team’s process as a whole. Moving forward, I plan to apply these ideas in future projects by promoting open communication, being willing to adjust plans when needed, and supporting my teammates in maintaining a healthy work rhythm.

Overall, this resource gave me a deeper understanding of what it truly means to work in an Agile environment. It showed me that Agile is not about speed but about building smarter, more collaborative, and more human-centered teams. The twelve principles serve as a strong foundation for both professional development and teamwork, and I believe they will continue to guide me as I grow in my career as a software developer.

From the blog CS@Worcester – Life of Chris by Christian Oboh and used with permission of the author. All other rights reserved by the author.

BPM On the Up & Up

Technological advances and innovations in business continue to grow exponentially. The biggest challenge is all the pressure companies face to stay afloat, from climbing costs to the fiercest competition with other companies. Long term, this endangers viability, and businesses cannot afford to screw up continuously. To address these issues, software has been implemented that eventually became the foundation of “digital transformation,” according to Digital Journal.

This is called Business Process Management (BPM), software providing tools to, quote, “streamline processes, manage compliance, and adapt to shifting market conditions.” BPM touches various areas: different departments, locations, and IT systems. A revolutionary tactic has been made here, as this process helps managers efficiently track and pinpoint errors and inefficiencies. Team members, too, can have a clear understanding of their goals and mission(s).

Speaking of efficiency, this is essentially the first and (arguably) most important benefit that users will recognize: a lot more tasks can be completed at a higher level. Let’s use a couple of examples: healthcare and finance. In healthcare, increased automation creates higher capacity to treat and prevent illnesses. In finance, automation keeps companies secure against fraud and error. This translates to transparency, where these companies get to view their status and performance, in turn increasing accountability.

Despite the pros, every idea comes with its cons. BPM is not a straightforward process that is easily implemented whatsoever. Some businesses and organizations are held back by skill gaps, fear of change, and redesign complexities. To resolve this, companies need to broaden their vision by being open to the idea of BPM. Seeing the benefit will lead to greater outcomes in performance as a whole.

Why is this important to discuss? The business industry is constantly changing, from the amount of businesses being formed, their structure outlines, idea creations, improving outcomes, etc. Depending on the size of the business (whether it’s yours or you work for one), there needs to be some sort of structure and process to have smooth outputs and less friction. For me personally, I knew that all businesses had their own processes in terms of how they’re run, but BPM is a new term I had never really learned or tapped into.

Moving forward, with the knowledge of programming and software development that I’m currently learning through a curriculum, I can hopefully bring a new/different perspective into a business when I am ready to delve deeper into the work field. Maintaining business process management will improve productivity, so it is not something to ignore at all. 

Source: BPM software and the move toward smarter business practices – Digital Journal

From the blog CS@Worcester – theJCBlog by Jancarlos Ferreira and used with permission of the author. All other rights reserved by the author.

Development Environment

From the blog CS@Worcester – dipeshbhattaprofile by Dipesh Bhatta and used with permission of the author. All other rights reserved by the author.

Blog Entry: Duck Simulator

Summary of the Resource

The resource I explored is the Duck Simulator project from the article “Design Patterns: The Strategy Pattern in Duck Simulations” by Head First Design Patterns (https://www.oreilly.com/library/view/head-first-design/0596007124/ch06.html). The simulator features different types of ducks like Mallard, Redhead, and Rubber ducks with behaviours such as flying and quacking. What’s particularly interesting is that these behaviours aren’t hard-coded into the duck classes; instead, they can be assigned or changed dynamically at runtime. This design highlights important object-oriented programming concepts, including polymorphism, encapsulation, and code reusability. It also demonstrates how the strategy design pattern allows developers to build flexible, scalable, and maintainable programs. The simulation is not only educational but also fun, giving a visual and interactive way to understand abstract programming concepts.

Why I Chose This Resource

I chose the Duck Simulator because it is a hands-on, practical example that clearly demonstrates OOP principles we are currently learning in class. I was looking for a resource that is engaging, easy to follow, and yet illustrates advanced programming concepts like abstraction, interfaces, and composition. The simulator is particularly appealing because it shows how separating behaviours from the main duck classes makes it easy to add new features or modify existing ones without rewriting the core code. This approach mirrors how professional software projects are structured, and I wanted to see an example that connects what we learn in theory to practical programming.

What I Learned and Reflected On

Working through the Duck Simulator helped me understand how to design flexible and maintainable code. Previously, I often relied on inheritance to share behaviours, but this project demonstrated how composition provides more adaptability and control. For example, I could give a Mallard duck a “fly with rocket” behaviour without touching the original class—something that would have been difficult or messy using only inheritance.
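
That runtime swap is the Strategy pattern in action. A condensed Java sketch (simplified from the book’s duck classes; the exact names here are illustrative):

```java
// Behaviours live in their own classes and can be swapped at runtime.
interface FlyBehavior {
    String fly();
}

class FlyWithWings implements FlyBehavior {
    public String fly() { return "flying with wings"; }
}

class FlyRocketPowered implements FlyBehavior {
    public String fly() { return "flying with a rocket"; }
}

class Duck {
    private FlyBehavior flyBehavior;          // composition, not inheritance

    Duck(FlyBehavior fb) { this.flyBehavior = fb; }
    void setFlyBehavior(FlyBehavior fb) { this.flyBehavior = fb; } // swap at runtime
    String performFly() { return flyBehavior.fly(); }
}

public class DuckDemo {
    public static void main(String[] args) {
        Duck mallard = new Duck(new FlyWithWings());
        System.out.println(mallard.performFly());
        mallard.setFlyBehavior(new FlyRocketPowered()); // Duck's class stays untouched
        System.out.println(mallard.performFly());
    }
}
```

Adding a new behaviour means adding one small class, never editing Duck itself.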

The project also helped me see the value of modular thinking, treating behaviours as separate, reusable components that can be mixed and matched across objects. This makes it much simpler to extend the program, add new duck types, or implement additional actions. Experimenting with the simulation gave me a tangible way to understand polymorphism and modular design, which made abstract class concepts from lectures much easier to grasp. It also reinforced the idea that writing clean, reusable code is as important as writing code that just works.

How I’ll Use This in the Future

In my future projects, I plan to apply the lessons from the Duck Simulator by designing programs in which behaviours can be swapped, updated, or extended independently of the main code. This will be especially useful in games, simulations, or any software where features may change over time. The project reinforced the importance of thinking ahead about software structure and planning for flexibility, rather than just focusing on making the code functional. Overall, the Duck Simulator showed me that good software design is a skill that complements programming ability, and it’s something I will carry forward in both my academic and professional projects.

From the blog CS@Worcester – Site Title by Yousef Hassan and used with permission of the author. All other rights reserved by the author.

Team management in software development

As a software developer there is a significant chance that you will develop software in a team environment. I know as an entry level developer gaining this experience beforehand would be a massive boost for my career but what exactly does team management entail?

The importance of team management

In a perfect world, a team of developers works together perfectly and synchronously and completes each task in the best way possible. In reality, each team will have people with different skill sets, creativity, and ideas for development. Therefore, teams need to be managed in order to optimize development as much as possible.

Diagram illustrating the roles within a software project development team.

Creating a team

Before assembling a team for a project, it’s important to highlight the scope and needs in order to figure out how many developers you need, and of what type. According to itrex, some examples of developers you may need would be:

– Software Developer: Engineers and stabilizes the product and solves any technical problems emerging during the development lifecycle
– Software Architect: Designs a high-level software architecture, selects appropriate tools and platforms to implement the product vision, and sets up code quality standards and performs code reviews
– UI/UX Developer: Transforms a product vision into user-friendly designs and creates user journeys for the best user experience and highest conversion rates
– QA (Quality Assurance) Engineer: Makes sure an application performs according to requirements and spots functional and non-functional defects
– Test Automation Engineer: Designs a test automation ecosystem and writes and maintains test scripts for automated testing
– DevOps Engineer: Facilitates cooperation between development and operations teams and builds continuous integration and continuous delivery (CI/CD) pipelines for faster delivery
– Business Analyst: Understands the customer’s business processes and translates business needs into requirements
– Project Manager: Makes sure a product or its part is delivered on time and within budget and manages and motivates the software development team
– Product Owner: Holds responsibility for the product vision and evolution and makes sure the final product meets customer requirements

Infographic illustrating the challenges of managing software development teams, including communication, role clarity, and meeting deadlines.

Post-team assembly

Depending on your project you now have an idea on what team you have, the next step is actually managing them. This entails setting clear objectives/goals, creating a timeline, allocating resources, setting communication strategies, delegating, implementing, tracking progress, monitoring project, managing risks/challenges & maintaining flexibility.

Overview of a project manager’s essential roles and responsibilities in software development.

Final thoughts

I now have a better understanding of the importance of team management in software development. In order to maximize efficiency toward a project/goal, you definitely need to manage a significant number of aspects related to development. The ability of a team to work together is also valuable and must be taken into account. Overall, I really enjoyed researching this topic; the main sources I used in my research were this section on Atlassian’s website as well as this section on the itrexgroup website.

From the blog Petraq Mele blog posts by Petraq Mele and used with permission of the author. All other rights reserved by the author.