Category Archives: Week 8

Working in Agile and Scrum Teams

Source: Hapticmedia and The Scrum Guide

Agile is a methodology built around iterative development, where the product is delivered in small increments and continuously improved for quality and efficiency. Those who use Agile follow the Agile Manifesto, which consists of 4 values and 12 principles of best practice.

The values are: 

  1. Individuals and Interactions Over Processes and Tools
  2. Working Software Over Comprehensive Documentation
  3. Customer Collaboration Over Contract Negotiation
  4. Responding to Change Over Following a Plan

The first value prioritizes how the team works together as a whole over following a strict set of protocols that may hinder productivity. The second value focuses on getting a working product in front of the customer rather than spending too much time on documentation that does not move the project forward. The third value involves the customer in the development process, allowing for constant feedback and a product the customer will love. The fourth value is similar to the first: reacting to changes the team needs to make in order to be more efficient and deliver a working product is more important than sticking to a plan created at the beginning.

One framework within the Agile methodology is called Scrum. Scrum breaks a project down into short “sprints” in which the team works on a small increment of the whole project. Each team is made up of the Developers, a Product Owner, and a Scrum Master. The Product Owner acts as the communicator between the developers and the customer and maintains a prioritized list of what needs to be done. The Scrum Master supports the developers and ensures they are being as effective as they can be. Each sprint has four main events: sprint planning, the daily scrum, the sprint review, and the sprint retrospective. Sprint planning happens at the beginning of the sprint and is where the team decides what they will accomplish during that sprint. The daily scrum is a short daily meeting where everyone decides what they will do that day and what they will do better than the day before. The sprint review is a meeting with the customers and stakeholders where everything that was accomplished is presented. The sprint retrospective is a meeting within the Scrum team where they discuss what went well overall and what needs to change for the next sprint.

Agile is a very effective methodology for software development. According to the source, over 85% of developers use it, and it improves both delivery time and team morale. It also puts all team members on the same level, where everyone is important and always making valuable progress. I hope to be on a team that uses Agile because it is more effective than other approaches to software development, like Waterfall. I am looking forward to experiencing the Scrum process first-hand in the Software Development Capstone next semester, and I have high hopes for what it will do for my long-term career.

From the blog ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Blog Post for Quarter 2

October 21st, 2025

Recently, my class has been going over topics regarding teamwork and ways to approach building software or a product. For example, the Waterfall method, the Agile methodology, and Scrum have come up in discussion. This reminded me of POGIL, since POGIL was a group activity used in the classroom semi-frequently.

Because of this connection, I decided to look at blogs about POGIL. However, I noticed something interesting about the blog I chose, so I ended up choosing two. The first was made with WordPress.com, much like the one I’m writing here, and it was about POGIL. The blog appeared to just be called “The POGIL Project,” or at least that’s what I surmised after looking at the web address. There were also some interesting notes regarding how it appears to be designed for faculty teaching or implementing POGIL-based team activities. However, the last post appears to be from 2015, which is not as recent as I’d like. (There also appears to be someone with the same name as the author of this blog who is cited as being influential in the development of POGIL, which is pretty cool, though I couldn’t find concrete evidence that they are the same person.)

So, I looked for an alternative. The author was not listed, which isn’t great, but the post is recent. It also appears to be about POGIL. The most interesting part was how POGIL was applied to science rather than to computer science specifically; actually, both blogs do that.

The new blog I picked was basically an overview of how POGIL works and why it is good to use: it covered the reasons POGIL is used and what it is intended to do. It mostly overlaps with what I already know about POGIL.

In a way, this is interesting because it means POGIL is both universal and useful. It isn’t just a weird computer science class thing we do; it’s an actual science thing. That is definitely more interesting to know, considering I rarely encountered POGIL before college. It probably won’t really affect my opinion of POGIL, but it is mildly interesting that it is something I’ll see around. I guess I can keep that in mind.

FIRST INITIAL BLOG: https://thepogilproject.wordpress.com/

SECOND, REVIEWED BLOG POST: https://www.transtutor.blog/pogil-guide-high-school-biology

From the blog CS@Worcester – Ryan's Blog Maybe. by Ryan N and used with permission of the author. All other rights reserved by the author.

Application Programming Interfaces

Source: altexsoft

Application Programming Interfaces, or APIs, are the communicating code between a client and a server, kind of like a user interface for the computer. They receive information from the user interface, send it to the server to get a result, and send that result back to the user interface. APIs can be private, partnered, or public. Private means the API is only available for one organization to use. Partnered means the API is available to a group of organizations that are in partnership with each other. Public means the API is available to everyone. In our setting, the Thea’s Pantry API is public: it is mainly used by food pantry volunteers, but the code is openly available on GitLab.

There are also different types and formats of APIs, like Remote Procedure Call (RPC), Simple Object Access Protocol (SOAP), and Representational State Transfer (REST). RPC is a style in which the client calls a procedure on the server, which runs it and sends back the result. SOAP is a protocol for exchanging data in decentralized environments; it uses XML messaging between the client and server over HTTP or SMTP, and it is known for its high data security. REST is an architectural style where resources are given URLs that can be used to request data using HTTP methods. REST is simpler and more versatile than SOAP because it requires less complex code and supports many formats beyond XML.

In class and in the homework assignments, we focused on REST APIs with JSON messaging. We practiced sending GET, POST, PUT, PATCH, and DELETE requests and examined some of their different responses, like success, entry error, and server error. We also learned how to write our own requests for creating guests, changing a guest’s information, retrieving a guest’s information, and deleting a guest from the system. Some methods, like POST, require a request body as well: to post a new guest, the request needs the new UUID and all the information that guest will have, like whether they are a resident and whether they receive financial assistance. To receive a list of guests that fit certain criteria, the GET request needs a specific parameter in the URL to specify the endpoint.
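To make this concrete, here is a minimal sketch in Java of what two of these requests might look like from a client. The port, endpoint path, and JSON field names are placeholders of my own, not the actual Thea’s Pantry API definition.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GuestApiSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // POST a new guest: the body carries the UUID and the guest's information.
        String newGuest = "{\"uuid\":\"123e4567-e89b-12d3-a456-426614174000\","
                + "\"isResident\":true,\"receivesAssistance\":false}";
        HttpRequest post = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/guests"))   // placeholder endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(newGuest))
                .build();
        HttpResponse<String> postResponse = client.send(post, HttpResponse.BodyHandlers.ofString());
        System.out.println("POST status: " + postResponse.statusCode()); // e.g. 201 on success

        // GET guests matching a criterion, using a query parameter in the URL.
        HttpRequest get = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/guests?isResident=true"))
                .GET()
                .build();
        HttpResponse<String> getResponse = client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println("GET status: " + getResponse.statusCode() + ", body: " + getResponse.body());
    }
}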

Learning about APIs will be extremely helpful for the future. In software development, APIs are everywhere. While REST is the most common, it is still useful to know the other kinds, like SOAP, and how to use them. Everyone uses APIs: Google, Amazon, Microsoft, Netflix, and many, many more. Getting firsthand experience using and building APIs lets me expand my skill set for real-life applications.

From the blog ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Path Testing

This week in class we learned about path testing, which is a white box method that examines code to find all possible paths. Path testing uses control flow graphs to illustrate the different paths that could be executed in a program. In the graph, the nodes represent lines of code, and the edges represent the order in which the code is executed. Path testing appealed to me as a testing method because it gives a visual representation of how the source code should execute given different inputs. I took a deeper dive into path testing after this week’s classes and found a blog that gave me a better understanding of it.

Steps

Once you have decided that you want to perform path testing, you must create a control flow graph that matches the source code. For example, a node that branches into two edges should represent an if-else statement, and a while loop should appear as a node toward the end of the program with an edge pointing back to an earlier node.

Next, pick out a baseline path for the program. This is the path you define as the original path of your program. After the baseline is created, continue generating paths until every possible outcome of the execution is represented.

How many Test Cases?

For a lengthy piece of source code, the possible paths could seem endless, and enumerating them manually could be a difficult, time-consuming task. Luckily, there is an equation that determines how many test cases a program will need for path testing.

C = E – N + 2P

Where C stands for cyclomatic complexity. The cyclomatic complexity is equivalent to the number of linearly independent paths, which in turn equals the number of required test cases. E represents the number of edges, N is the number of nodes, and P is the number of connected components. Note that for a single program or source of code, P = 1 always.
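As a small worked example (my own, not from the blog), consider a method with a single if-else and count its nodes and edges:

public class MaxExample {
    // Returns the larger of two integers.
    static int max(int a, int b) {
        int result;          // node 1: entry
        if (a > b) {         // node 2: decision
            result = a;      // node 3: true branch
        } else {
            result = b;      // node 4: false branch
        }
        return result;       // node 5: both branches rejoin here
    }
    // Edges: 1->2, 2->3, 2->4, 3->5, 4->5, so E = 5, N = 5, and P = 1.
    // C = E - N + 2P = 5 - 5 + 2 = 2 linearly independent paths,
    // so two test cases are enough: one with a > b and one with a <= b.

    public static void main(String[] args) {
        System.out.println(max(7, 3)); // exercises the a > b path
        System.out.println(max(2, 9)); // exercises the a <= b path
    }
}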

Benefits

Path testing reveals outcomes that otherwise may not have been known without examining the code. As stated before, it can be difficult for a tester to know all the possible outcomes in a class. Path testing provides a solution to that with control flow graphs, which let the tester examine the different paths. Path testing also ensures branch coverage. Because the tests are designed from the code itself, developers can work in their own branch without needing to merge with an existing repository first. Unnecessary and overlapping tests are another thing developers don’t have to worry about.

Drawbacks

Path testing can also be time consuming. Quicker testing methods exist and take less time away from further developing the project. In many cases, path testing may also be unnecessary. It is often used in DevOps setups that require a certain amount of unit coverage before deploying to the next environment; outside of that, it may be considered inefficient compared to other testing methods.

Blog: https://blog.testlodge.com/basis-path-testing/

From the blog Blog del William by William Cordor and used with permission of the author. All other rights reserved by the author.

Enhancing Software Testing Efficiency with Equivalence Partitioning

I chose this blog because I was interested in learning how to optimize the selection of test cases without sacrificing thorough coverage. During my search, I came across an article on TestGrid.io about equivalence partitioning testing, which does a great job of explaining how to remove redundancy from testing. As I develop my programming and software testing skills, I have found this method especially useful for making the testing process simpler.

Equivalence partitioning is a testing technique that divides input data into partitions, or groups, based on the assumption that values in each partition will behave similarly. Instead of testing each possible input, testers choose a sample value from each partition and assume the entire group will produce the same outcome. This reduces the number of test cases while still providing sufficient coverage.

For example, if a program accepts input values ranging from 1 to 100, equivalence partitioning allows testers to categorize inputs into two sets: valid values (1-100) and invalid values (less than 1 or more than 100). Rather than testing every number in the valid set, a tester would choose representative values like 1, 50, and 100. Similarly, they would test the invalid range with 0 and 101. This is time-efficient while still catching errors.
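A quick sketch of how those representative values might turn into tests (my own example, assuming JUnit 5 and a hypothetical isValid method):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class RangeValidatorTest {
    // Hypothetical method under test: accepts values from 1 to 100 inclusive.
    static boolean isValid(int value) {
        return value >= 1 && value <= 100;
    }

    @Test
    void validPartitionRepresentatives() {
        // One partition: valid inputs (1-100). A few representatives stand in for the whole class.
        assertTrue(isValid(1));
        assertTrue(isValid(50));
        assertTrue(isValid(100));
    }

    @Test
    void invalidPartitionRepresentatives() {
        // Two invalid partitions: below the range and above the range.
        assertFalse(isValid(0));
        assertFalse(isValid(101));
    }
}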

I chose the TestGrid.io article because it explains equivalence partitioning in an understandable and systematic manner. Much of the other testing material out there is too complex or ambiguous for newcomers, but this article simplifies the topic and incorporates real-world examples. That made it easy not only to understand the theory, but also to apply the method to real-life situations. The article also discusses the advantages of equivalence partitioning, including reducing redundant test cases, improving efficiency, and offering complete coverage. As someone interested in improving my testing methods, I found this useful because it corresponds with my goal of producing better, more efficient test cases without needless repetition.

Equivalence partitioning testing is a sound approach to optimizing test case selection. It enables the tester to focus on representative cases rather than testing all possible inputs, which saves time and effort. The TestGrid.io article provided a clear understanding of how to implement this method and why it is significant. For me, learning effective test methods like equivalence partitioning will make my coding, debugging, and software development more efficient and prepare me for internships, projects, and software engineering positions.

Blog: https://testgrid.io/blog/equivalence-partitioning-testing/

From the blog CS@Worcester – Matchaman10 by tam nguyen and used with permission of the author. All other rights reserved by the author.

Week 8: Path Testing

This week, our class learned about path testing. It is a white box testing method (meaning test cases are designed by examining the source code itself) that checks that every specification is met and helps create an outline for writing test cases. Each line of code is represented by a node, and each point the program can move to next is represented by a directed edge. Test cases are written for each possible path the program can take.

For example, if we are testing the code block below, a path testing diagram can be created. Each node represents a line of code and each directed edge represents where the program goes next, depending on the conditions. Test cases are written for each condition: running through the while loop and leaving when value is more than 5, or bypassing it entirely if value starts out greater than 5.
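The class example and diagram are not reproduced in this post, so here is a minimal sketch with the same shape (my own reconstruction; the variable name value and the cutoff of 5 come from the description above, everything else is a placeholder). Drawing the one-line loop as a single node with an edge looping back to itself gives five nodes and five edges, which matches the calculation below.

public class PathTestingExample {
    public static void main(String[] args) {
        int value = Integer.parseInt(args[0]);     // node 1: read the starting value
        int doubled = value * 2;                   // node 2
        System.out.println(doubled);               // node 3: always runs right after node 2
        while (value <= 5) value = value + 1;      // node 4: loops until value is more than 5,
                                                   //         or is skipped if value starts above 5
        System.out.println(value);                 // node 5
        // Edges: 1->2, 2->3, 3->4, 4->4 (loop back), 4->5, so E = 5 and N = 5.
    }
}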

I wanted to learn more about path testing, so I did some research and found a blog that mentioned cyclomatic complexity. Cyclomatic complexity is a number that classifies how complex the code is based on how many nodes, edges, and conditionals you have. This number will relate to how many tests you need to run, but is not always the same number. The cyclomatic complexity of the example above would be (5-5)+2(1) = 2.

Cyclomatic Complexity = Edges – Nodes + 2(Number of Connected Components)

The blog also explores the advantages and disadvantages of path-based testing. Some advantages are thorough testing of all paths, testing for errors in loops and branching points, and ensuring potential bugs in pathing are found. Some disadvantages are that it fails to test input conditions or runtime behavior, that a lot of tests need to be written to cover every edge and path, and that the number of test cases grows exponentially as code becomes more complex.

Another exercise we did in class was condensing nodes that do not branch. In the example above, nodes 2 and 3 can be condensed into one node. This is because there is no alternative path that can be taken between the nodes. If line 2 is run, line 3 is always run right after, no matter what number value is. Condensing nodes would be helpful in slightly more complex programs to make the diagram more readable, though if you are working with a program that is a couple hundred lines long, the savings seem negligible.

When I am writing tests for a program in the future, I would probably choose a more time-conscious method. Cyclomatic complexity is a good and useful metric to have, but basing test cases on a path testing diagram does not seem practical for complex code under tight time constraints.

Blog post referenced: https://www.testbytes.net/blog/path-coverage-testing/

From the blog CS@Worcester – ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Combining Testing Methods

The blog post that I chose to write about this week gives an overview of equivalence class and boundary analysis testing. The main reason to use these techniques is to reduce the number of tests you run for a program while still testing full functionality and not sacrificing coverage. They do this by sectioning the range of inputs into different equivalence classes. Equivalence classes are groups of inputs that, in theory, should behave identically when put into the tested function. The blog then shows a helpful diagram of what this looks like plotted on a number line. This way, tests give better information by exercising the function only where problems may arise, and they describe the behavior of the function near edge cases better than other methods.

The blog post also details how you can describe the classes themselves, for example as valid or invalid for where the inputs would be true or false, by defining ranges of values in interval notation. After going over boundary test cases, the author explains how these two methods can be used together to efficiently test around the limits of the function’s behavior. The blog concludes with another example, plotted in a table, that shows how equivalence classes and boundary testing can be combined to use a minimum number of tests while ensuring that you test the function at its most important points, where its behavior changes based on inputs.
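As a brief illustration of the combination (my own sketch, not from the blog, assuming JUnit 5 and a hypothetical isValid method whose valid class is 1 to 100):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class CombinedTechniquesTest {
    // Hypothetical function under test: valid equivalence class is [1, 100],
    // invalid classes are everything below 1 and everything above 100.
    static boolean isValid(int value) {
        return value >= 1 && value <= 100;
    }

    @Test
    void boundariesOfTheValidClass() {
        // Boundary value analysis: test right at and just outside each limit.
        assertFalse(isValid(0));    // just below the lower boundary
        assertTrue(isValid(1));     // lower boundary
        assertTrue(isValid(100));   // upper boundary
        assertFalse(isValid(101));  // just above the upper boundary
    }

    @Test
    void oneRepresentativePerClass() {
        // Equivalence partitioning: one interior value stands in for each class.
        assertTrue(isValid(50));    // valid class
        assertFalse(isValid(-20));  // invalid class below the range
        assertFalse(isValid(500));  // invalid class above the range
    }
}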

I selected this blog to help refresh myself for the upcoming test about different testing methods and to reinforce what I had learned in class. One of the more important takeaways from this blog is the emphasis the author puts on combining the two methods, not just because they are two different methods but because together they strengthen the overall testing procedure. This will make me think about how new testing methods can be combined to produce better, more efficient test cases. Demonstrating the testing with models on number lines and as graphs helps visualize what is actually happening and why it works, similar to the models taught in class, but the added element of real numbers with example values helps demonstrate the importance of this kind of testing and how it can be useful in real-world situations. As an introductory post to the topic, and in my case a review, it works well, but from here I would like to look more into which combinations of testing methods work well together and which may not, as I learn more methods through the rest of the class.

https://www.testbench.com/blog/equivalence-class-partioning-and-limit-value-analysis/

From the blog CS@Worcester – Computer Science Blog by dzona1 and used with permission of the author. All other rights reserved by the author.

Understanding SOLID Principles: A Guide

As a student learning software design, I’ve heard about the SOLID principles in class, but I wanted to dive deeper to understand how to actually use them. I came across a blog post called “SOLID Principles — The Definitive Guide” by Midhun Vincent on Medium, which breaks down each of the five principles in a way that makes sense for someone new to object-oriented design. The guide was really helpful and lined up well with what we’re covering in my course, so I thought it would be a good opportunity to see how these principles could improve my coding now and in the future.

The article explains the SOLID principles, which are five important guidelines for creating object-oriented software that’s easier to understand, maintain, and extend. The first principle, the Single Responsibility Principle (SRP), says that each class should do only one thing, making it easier to maintain and modify. The Open/Closed Principle (OCP) suggests that classes should be open for extension but closed for modification, meaning you can add features without changing the original code. The Liskov Substitution Principle (LSP) ensures that subclasses can replace their parent class without breaking the system. The Interface Segregation Principle (ISP) advises creating small, specific interfaces rather than large, general ones. Finally, the Dependency Inversion Principle (DIP) suggests that high-level modules should depend on abstractions, not low-level modules, which makes the code more flexible. These principles help make code cleaner, more modular, and easier to adapt over time.
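To make one of these concrete, here is a small sketch of the Single Responsibility Principle (my own illustration, not from the article): a class that both holds report data and saves it to disk has two reasons to change, so the responsibilities get split apart.

// Before: one class has two reasons to change, the report's content and how it is saved.
class ReportBefore {
    private final String body;
    ReportBefore(String body) { this.body = body; }

    void saveTo(java.nio.file.Path target) throws java.io.IOException {
        java.nio.file.Files.writeString(target, body);  // persistence mixed into the data class
    }
}

// After: each class now has a single responsibility.
class Report {
    private final String body;
    Report(String body) { this.body = body; }
    String getBody() { return body; }
}

class ReportSaver {
    void save(Report report, java.nio.file.Path target) throws java.io.IOException {
        java.nio.file.Files.writeString(target, report.getBody());
    }
}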

I picked this article because, while the SOLID principles are useful, they can seem pretty abstract at first. The post explains them in a way that feels practical, with examples that make it easier to apply the principles to real-world coding problems. Plus, the examples connected well with the projects I’ve worked on in my course, especially when it comes to organizing code and making it easier to debug. Seeing how these principles prevent code from becoming too messy gave me a new way of thinking about my own assignments.

My Takeaways and Reflection

Before reading this post, I knew the basic ideas behind SOLID, but I wasn’t sure how to apply them in my own code. Now, I get why each principle is important and how they can save time by reducing debugging and refactoring. For example, the Single Responsibility Principle made me realize that I often give classes too many responsibilities, which complicates fixing bugs. By applying SRP, I can keep things simpler and reduce errors.

Looking ahead, I plan to use these principles in my projects, especially the Open/Closed Principle and Interface Segregation Principle. I can see how they’ll help me write code that’s easier to update and adapt. Understanding SOLID will definitely give me a strong foundation as I take on more complex projects in the future.

Resource:

“SOLID Principles — The Definitive Guide” by Midhun Vincent on Medium.

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Introduction to REST APIs: A Beginner’s Insight

REST (Representational State Transfer) APIs have become a cornerstone in modern software development, enabling seamless communication between different systems. For those new to the field, understanding REST APIs is essential as it forms the foundation for integrating various services and building scalable applications. The blog post “Rest API Tutorial — A Complete Beginner’s Guide” from Moesif provides an excellent introduction to this topic. Here, I will summarize the content of the blog, explain why I selected this resource, and reflect on what I learned from it.

Summary of the Blog Post

The Moesif blog post starts by explaining what an API (Application Programming Interface) is and introduces REST as an architectural style for designing networked applications. It highlights that RESTful APIs are stateless, meaning each request from a client to a server must contain all the information the server needs to fulfill it. The post further discusses key concepts such as HTTP methods (GET, POST, PUT, DELETE), status codes, and the importance of endpoints in API structure.

The guide provides practical examples to illustrate these concepts, making it easier for beginners to grasp. It also touches on best practices, such as the use of proper status codes to indicate request outcomes and keeping URLs clean and intuitive. The post ends by emphasizing the importance of good documentation and consistent API versioning to ensure ease of use and maintainability.
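Here is a minimal sketch of those ideas in Java (my own example; the endpoint URL is a placeholder): a stateless GET request against a versioned path, with the outcome read from the status code.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StatusCodeSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Stateless request: everything the server needs is carried in the request itself.
        // The "/v1/" segment is one common way to version an API.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/v1/users/42"))  // placeholder endpoint
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The status code tells the client the outcome of the request.
        if (response.statusCode() == 200) {
            System.out.println("Found: " + response.body());
        } else if (response.statusCode() == 404) {
            System.out.println("No user with that id");
        } else {
            System.out.println("Unexpected status: " + response.statusCode());
        }
    }
}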

Why I Selected This Resource

I chose this particular blog post because of its comprehensive yet beginner-friendly approach. As a computer science student, I am currently learning about software architecture and development practices. This guide stood out for its clarity in explaining complex concepts and its practical examples, which help bridge the gap between theory and real-world application. I wanted to understand REST APIs better, not just from a theoretical standpoint but in a way that I could apply in future projects, making this resource ideal.

Reflections on the Content

Reading through the blog post was eye-opening. It clarified the purpose and usage of REST APIs and reinforced my understanding of HTTP methods and status codes. The emphasis on statelessness and how each request must be self-sufficient was particularly insightful, as I previously struggled with this concept. Additionally, I learned the significance of designing intuitive endpoints and properly using status codes to indicate different outcomes, such as 200 for success or 404 for not found.

This resource has given me the confidence to start building simple RESTful APIs. I now appreciate why good API documentation and versioning are critical — they help developers maintain and scale services effectively. Moving forward, I intend to apply this knowledge in my coursework and future software projects, ensuring that the APIs I develop are well-structured, easy to use, and maintainable.

Conclusion

Overall, the “Rest API Tutorial — A Complete Beginner’s Guide” was a valuable resource that provided me with a solid foundation in RESTful API development. I highly recommend it to any beginner looking to understand how APIs work and how to implement them in practical projects. For those interested, the full post is available on the Moesif blog.

From the blog Discoveries in CS world by mgl1990 and used with permission of the author. All other rights reserved by the author.

Software Licenses

Telling the Do’s and Don’ts of your own code

Photo by RDNE Stock project on Pexels.com

Hello, Debug Ducker here again, and it’s time to talk about the legal side of software. Now, I am not a lawyer, but I am a coder, and this is related to that. It may still require some independent research on your behalf, but what I have to say is important. Copyright, love it or hate it, is here to stay. Copyright is involved in a lot of things, such as movies, products, and even software. Yes, software can have a copyright applied to it. How can this be the case? Well, it is rather simple: if you make the code, as in you wrote it, you are the sole copyright holder of that code. “So why does this matter,” you are probably thinking. It matters because copyright laws then apply, meaning there are restrictions on someone else using your code. “Well, I don’t care, let them use it.” They can’t, because copyright won’t allow them to, and some people don’t want to risk legal issues. So this is where licenses come in.

A license is a set of guidelines for how a person can use and redistribute the software, and it is an essential tool for the field. Licenses are something you can grab and put in a section of your software, not something you write yourself. You shouldn’t ever write your own license, as that can cause legal trouble. If you want your code to be free to use with almost no restrictions, there is a license for that; if you want a bit of restriction on what it can be used for, there is also a license for that. You can pick what suits your needs and be all set. I won’t go too in-depth about all the licenses because there are quite a few, but there are a few resources I can share that can help you find the right license for your work.
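To give a sense of what “putting a license in a section of your software” looks like, here is a minimal sketch of a source file header using the common SPDX convention (the MIT identifier, name, and year below are placeholders, assuming that is the license you picked; the full license text would also live in a LICENSE file in the repository):

// SPDX-License-Identifier: MIT
// Copyright (c) 2025 Example Author
public class Library {
    // Regular code goes here; the notice above tells users what they may do with it.
}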

https://choosealicense.com/

The site above can help guide you through the many different licenses that are out there and gives the gist of the guidelines in each one. It is also helpful for developers who just need a quick way to find the right license for what they want to achieve with their software. I can see myself using such a tool in the future. I hope you enjoyed this talk about licenses and found it helpful. Thank you for your time.

From the blog Debug Duck by debugducker and used with permission of the author. All other rights reserved by the author.