Category Archives: Week 8

Path Testing

This week in class we learned about path testing, a white box method that examines source code to find all possible paths through a program. Path testing uses control flow graphs to illustrate the different paths that could be executed. In the graph, the nodes represent lines of code, and the edges represent the order in which those lines are executed. Path testing appealed to me as a testing method because it gives a visual representation of how the source code should execute given different inputs. I took a deeper dive into path testing after this week's classes and found this blog, which gave me a deeper understanding of the technique.

Steps

When you have decided that you want to perform path testing, you must first create a control flow graph that matches the source code. For example, a node that splits into two edges should represent an if-else statement, and a while loop should appear as a node that receives an edge pointing back from a later node, so the flow can repeat.
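To make that mapping concrete, here is a minimal Java sketch of my own (a hypothetical method, not an example from the blog). The if statement is a node that splits into two edges, and the while condition is a node that receives a back edge from the end of the loop body:

    public class FlowGraphExample {
        // Hypothetical method used only to illustrate the graph shapes described above.
        public static int sumBelowLimit(int[] values, int limit) {
            int sum = 0;                  // entry node
            int i = 0;
            while (i < values.length) {   // loop-condition node; receives a back edge
                if (values[i] < limit) {  // branch node: splits into a true edge and a false edge
                    sum += values[i];     // executed only on the true edge
                }                         // the two edges rejoin here
                i++;                      // back edge returns to the loop condition
            }
            return sum;                   // exit node
        }
    }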

Secondly, pick out a baseline path for the program. This is the path you define to be the original path through your program. After the baseline is created, continue generating paths until all possible outcomes of execution are represented.

How many Test Cases?

For lengthy source code, the possible paths can seem endless, so enumerating them manually could be a difficult, time-consuming task. Luckily, there is an equation that determines how many test cases a program will need under path testing.

C = E – N + 2P

Where C stands for cyclomatic complexity. The cyclomatic complexity is equivalent to the number of linearly independent paths, which in turn equals the number of required test cases. E represents the number of edges, N is the number of nodes, and P is the number of connected components. Note that for a single program or source of code, P = 1 always.
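For instance, a hypothetical control flow graph for a single program (so P = 1) with 8 edges and 7 nodes would give C = 8 – 7 + 2(1) = 3, meaning three linearly independent paths, and therefore three test cases, are needed.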

Benefits

Path testing reveals outcomes that otherwise may not have been known without examining the code. As stated before, it can be difficult for a tester to know all the possible outcomes in a class. Path testing provides a solution to that by using control flow graphs, where the tester can examine the different paths. Path testing also ensures branch coverage: every branch in the control flow graph lies on at least one tested path, so no separate branch-coverage effort is needed. Unnecessary and overlapping tests are another thing developers don't have to worry about, since the linearly independent paths do not duplicate one another.

Drawbacks

Path testing can also be time consuming. Quicker testing methods exist and leave more time for further developing projects. In many cases, path testing may also be unnecessary. It is often used in DevOps setups that require a certain amount of unit coverage before deploying to the next environment; outside of that, it may be inefficient compared to other testing methods.

Blog: https://blog.testlodge.com/basis-path-testing/

From the blog Blog del William by William Cordor and used with permission of the author. All other rights reserved by the author.

Enhancing Software Testing Efficiency with Equivalence Partitioning

I chose this blog because I was interested in learning how to optimize the selection of test cases without sacrificing thorough coverage. During my search, I came across an article on TestGrid.io about equivalence partitioning testing, a technique that is fantastic at removing redundancy from testing. As I develop my programming and software testing skills, I have found this method especially useful in making the testing process simpler.

Equivalence partitioning is a testing technique that divides input data into partitions, or groups, based on the assumption that values in the same partition will behave similarly. Instead of testing every possible input, testers choose a sample value from each partition and rely on the entire group producing the same outcome. This reduces the number of test cases while still providing sufficient coverage.

For example, if a program accepts input values ranging from 1 to 100, equivalence partitioning allows testers to categorize inputs into two sets: valid values (1-100) and invalid values (less than 1 or more than 100). Rather than testing every number in the valid set, a tester would choose representative values like 1, 50, and 100. Similarly, they would test the invalid partitions with 0 and 101. This saves time while still identifying errors.
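As a sketch of how those sample values might turn into tests (my own illustration using a hypothetical isValid method and JUnit 5, not code from the article):

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    class RangeValidatorTest {
        // Hypothetical method under test: accepts only values from 1 to 100.
        static boolean isValid(int value) {
            return value >= 1 && value <= 100;
        }

        @Test
        void validPartitionRepresentatives() {
            assertTrue(isValid(1));    // lower boundary of the valid partition
            assertTrue(isValid(50));   // interior representative
            assertTrue(isValid(100));  // upper boundary of the valid partition
        }

        @Test
        void invalidPartitionRepresentatives() {
            assertFalse(isValid(0));   // just below the valid range
            assertFalse(isValid(101)); // just above the valid range
        }
    }

Five inputs stand in for the entire range of possible values, which is the whole point of the technique.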

I chose the TestGrid.io article because it explains equivalence partitioning in an understandable and systematic manner. Much other testing material is too complex or ambiguous for newcomers, but this article simplifies the topic and incorporates real-world examples. That made it easy not only to understand the theory, but also to apply the method to real-life situations. The article also discusses the advantages of equivalence partitioning, including reducing redundant test cases, improving efficiency, and offering complete coverage. As someone interested in improving my testing methods, I found this useful because it corresponds with my goal of producing better, more efficient test cases without needless repetition.

Equivalence partitioning testing is a sound approach to maximizing test case selection. It enables the tester to focus on representative cases rather than testing all possible inputs, which saves time and effort. The TestGrid.io article provided a clear understanding of how to implement this method and why it is significant. For me, learning effective test methods like equivalence partitioning will sharpen my coding, debugging, and software development abilities, and leave me better prepared for internships, projects, and software engineering positions.

Blog: https://testgrid.io/blog/equivalence-partitioning-testing/

From the blog CS@Worcester – Matchaman10 by tam nguyen and used with permission of the author. All other rights reserved by the author.

Week 8: Path Testing

This week, our class learned about path testing. It is a white box testing method (meaning the tests are designed by examining the source code's internal structure) that makes sure every specification is met and helps create an outline for writing test cases. Each line of code is represented by a node, and each step execution can follow is represented by a directed edge. Test cases are written for each possible path execution can take.

For example, if we are testing the small block of code described below, a path testing diagram can be created. Each node represents a line of code, and each directed edge represents where execution goes next, depending on the conditions. Test cases are written for each condition: running through the while loop and leaving when value is more than 5, or bypassing the loop entirely if value starts as more than 5.
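The original post showed the code and diagram as images, which did not carry over into this archive, so here is a minimal Java sketch of my own that is consistent with the description above (not the author's exact code):

    public class PathExample {
        public static int raiseAboveFive(int value, int bonus) {
            while (value <= 5) {       // node 1: run the loop, or bypass it entirely
                value++;               // node 2
                bonus++;               // node 3: always follows node 2, then back to node 1
            }
            System.out.println(bonus); // node 4
            return value;              // node 5
        }
    }

Counting the five numbered nodes and the five edges between them (including the back edge from node 3 to node 1 and the bypass edge from node 1 to node 4) gives the figures used in the cyclomatic complexity calculation below, and the two test cases cover running through the loop versus bypassing it.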

I wanted to learn more about path testing, so I did some research and found a blog that mentioned cyclomatic complexity. Cyclomatic complexity is a number that classifies how complex code is based on how many nodes, edges, and connected components its graph has. This number relates to how many tests you need to run, but it is not always the same number. The cyclomatic complexity of the example above would be (5-5)+2(1) = 2.

Cyclomatic Complexity = Edges – Nodes + 2(Number of Connected Components)

The blog also explores the advantages and disadvantages of path-based testing. Some advantages are thorough testing of all paths, testing for errors in loops and at branching points, and ensuring that potential bugs in pathing are found. Some disadvantages are that it fails to test input conditions or runtime behavior, that a lot of tests need to be written to cover every edge and path, and that the number of test cases grows exponentially as code becomes more complex.

Another exercise we did in class was condensing nodes that do not branch. In the example above, nodes 2 and 3 can be condensed into one node, because there is no alternative path between them: if line 2 is run, line 3 is always run right after, no matter what value holds. Condensing nodes would be helpful in slightly more complex programs to make the diagram more readable, though for a program of a couple hundred lines the savings seem negligible.

When I am writing tests for a program in the future, I would probably choose a more time-conscious method. Cyclomatic complexity is a good and useful metric to have, but basing test cases on the path testing diagram does not seem practical for complex code under tight time constraints.

Blog post referenced: https://www.testbytes.net/blog/path-coverage-testing/

From the blog CS@Worcester – ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Combining Testing Methods

The blog post that I chose to write about this week gives an overview of equivalence class and boundary value analysis testing. The main reason to use these techniques is to reduce the number of tests you run for a program while still testing full functionality and not sacrificing coverage. Equivalence class testing does this by sectioning the range of inputs into equivalence classes: groups of inputs that, in theory, should behave identically when put into the tested function. The blog then shows a helpful diagram of what this looks like plotted on a number line. This way, tests give better information by probing the function only where problems may arise, and they describe its behavior near edge cases better than other methods do.

The blog post also details how you can describe the classes themselves, labeling inputs as, for example, true, false, or valid, by defining ranges of values with interval notation. After going over boundary test cases, the author explains how these two methods can be used together to efficiently test around the limits of the function's behavior. The blog concludes with another example, laid out in a table, that shows how equivalence classes and boundary testing can be combined to use a minimum number of tests while still exercising the function at its most important points, where its behavior changes based on inputs.
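To make that combination concrete, here is a small Java sketch of my own (a hypothetical discountApplies function, not the blog's example). Equivalence partitioning yields three classes, and boundary analysis adds tests on each side of the two limits:

    public class CombinedTestingExample {
        // Hypothetical rule: a discount applies for ages 18 through 65 inclusive.
        static boolean discountApplies(int age) {
            return age >= 18 && age <= 65;
        }

        public static void main(String[] args) {
            // Classes: (-inf, 18) invalid, [18, 65] valid, (65, +inf) invalid.
            // Boundary values 17/18 and 65/66 probe the edges where behavior changes.
            int[] inputs =       {17,    18,   40,   65,   66};
            boolean[] expected = {false, true, true, true, false};
            for (int i = 0; i < inputs.length; i++) {
                System.out.printf("age=%d expected=%b actual=%b%n",
                        inputs[i], expected[i], discountApplies(inputs[i]));
            }
        }
    }

Five inputs cover all three classes plus both boundaries, which is exactly the kind of minimal-but-targeted test set the blog describes.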

I selected this blog to help refresh myself for the upcoming test on different testing methods and to reinforce what I had learned in class. One of the more important takeaways is the emphasis the author puts on combining the two methods, not just because they are two different methods but because together they strengthen the overall testing procedure; this will make me think about how new testing methods can be combined to produce better and more efficient test cases. Demonstrating the testing with models on number lines and as graphs helps visualize what is actually happening and why it works, similar to the models taught in class, but the added element of real numbers with example values shows the importance of this kind of testing and how useful it can be in real-world situations. As an introductory post on the topic, and in my case a review, it works well. From here I would like to look more into which combinations of testing methods work well together, and which may not, as I learn more methods through the rest of the class.

https://www.testbench.com/blog/equivalence-class-partioning-and-limit-value-analysis/

From the blog CS@Worcester – Computer Science Blog by dzona1 and used with permission of the author. All other rights reserved by the author.

Understanding SOLID Principles: A Guide

As a student learning software design, I’ve heard about the SOLID principles in class, but I wanted to dive deeper to understand how to actually use them. I came across a blog post called “SOLID Principles — The Definitive Guide” by Midhun Vincent on Medium, which breaks down each of the five principles in a way that makes sense for someone new to object-oriented design. The guide was really helpful and lined up well with what we’re covering in my course, so I thought it would be a good opportunity to see how these principles could improve my coding now and in the future.

The article explains the SOLID principles, which are five important guidelines for creating object-oriented software that’s easier to understand, maintain, and extend. The first principle, the Single Responsibility Principle (SRP), says that each class should do only one thing, making it easier to maintain and modify. The Open/Closed Principle (OCP) suggests that classes should be open for extension but closed for modification, meaning you can add features without changing the original code. The Liskov Substitution Principle (LSP) ensures that subclasses can replace their parent class without breaking the system. The Interface Segregation Principle (ISP) advises creating small, specific interfaces rather than large, general ones. Finally, the Dependency Inversion Principle (DIP) suggests that high-level modules should depend on abstractions, not low-level modules, which makes the code more flexible. These principles help make code cleaner, more modular, and easier to adapt over time.
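To ground a couple of these, here is a minimal Java sketch of my own (a hypothetical report-export feature, not an example from the article). ReportService has a single responsibility, and new formats are added by extension rather than by modifying existing classes:

    // New export formats are added as new classes; existing code never changes.
    interface ReportExporter {
        String export(String reportBody);
    }

    class PlainTextExporter implements ReportExporter {
        public String export(String reportBody) {
            return reportBody;
        }
    }

    class HtmlExporter implements ReportExporter {
        public String export(String reportBody) {
            return "<html><body>" + reportBody + "</body></html>";
        }
    }

    // This class's single responsibility is coordinating an export;
    // introducing a new format never forces it to change.
    class ReportService {
        private final ReportExporter exporter;

        ReportService(ReportExporter exporter) {
            this.exporter = exporter;
        }

        String publish(String reportBody) {
            return exporter.export(reportBody);
        }
    }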

I picked this article because, while the SOLID principles are useful, they can seem pretty abstract at first. The post explains them in a way that feels practical, with examples that make it easier to apply the principles to real-world coding problems. Plus, the examples connected well with the projects I’ve worked on in my course, especially when it comes to organizing code and making it easier to debug. Seeing how these principles prevent code from becoming too messy gave me a new way of thinking about my own assignments.

My Takeaways and Reflection

Before reading this post, I knew the basic ideas behind SOLID, but I wasn’t sure how to apply them in my own code. Now, I get why each principle is important and how they can save time by reducing debugging and refactoring. For example, the Single Responsibility Principle made me realize that I often give classes too many responsibilities, which complicates fixing bugs. By applying SRP, I can keep things simpler and reduce errors.

Looking ahead, I plan to use these principles in my projects, especially the Open/Closed Principle and Interface Segregation Principle. I can see how they’ll help me write code that’s easier to update and adapt. Understanding SOLID will definitely give me a strong foundation as I take on more complex projects in the future.

Resource:

https://medium.com/android-news/solid-principles-the-definitive-guide-75e30a284dea

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Introduction to REST APIs: A Beginner’s Insight

REST (Representational State Transfer) APIs have become a cornerstone in modern software development, enabling seamless communication between different systems. For those new to the field, understanding REST APIs is essential as it forms the foundation for integrating various services and building scalable applications. The blog post “Rest API Tutorial — A Complete Beginner’s Guide” from Moesif provides an excellent introduction to this topic. Here, I will summarize the content of the blog, explain why I selected this resource, and reflect on what I learned from it.

Summary of the Blog Post

The Moesif blog post starts by explaining what an API (Application Programming Interface) is and introduces REST as an architectural style for designing networked applications. It highlights that RESTful APIs are stateless, meaning each request from a client to a server must contain all the information the server needs to fulfill it. The post further discusses key concepts such as HTTP methods (GET, POST, PUT, DELETE), status codes, and the importance of endpoints in API structure.
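As a small illustration of these ideas (my own sketch using Java's built-in HttpClient and a made-up endpoint, not code from the blog post):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RestExample {
        public static void main(String[] args) throws Exception {
            // Stateless request: everything the server needs travels with the call.
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/users/42")) // hypothetical endpoint
                    .GET()   // the HTTP method states the intent: retrieve a resource
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // The status code reports the outcome, e.g., 200 (OK) or 404 (Not Found).
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }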

The guide provides practical examples to illustrate these concepts, making it easier for beginners to grasp. It also touches on best practices, such as the use of proper status codes to indicate request outcomes and keeping URLs clean and intuitive. The post ends by emphasizing the importance of good documentation and consistent API versioning to ensure ease of use and maintainability.

Why I Selected This Resource

I chose this particular blog post because of its comprehensive yet beginner-friendly approach. As a computer science student, I am currently learning about software architecture and development practices. This guide stood out for its clarity in explaining complex concepts and its practical examples, which help bridge the gap between theory and real-world application. I wanted to understand REST APIs better, not just from a theoretical standpoint but in a way that I could apply in future projects, making this resource ideal.

Reflections on the Content

Reading through the blog post was eye-opening. It clarified the purpose and usage of REST APIs and reinforced my understanding of HTTP methods and status codes. The emphasis on statelessness and how each request must be self-sufficient was particularly insightful, as I previously struggled with this concept. Additionally, I learned the significance of designing intuitive endpoints and properly using status codes to indicate different outcomes, such as 200 for success or 404 for not found.

This resource has given me the confidence to start building simple RESTful APIs. I now appreciate why good API documentation and versioning are critical — they help developers maintain and scale services effectively. Moving forward, I intend to apply this knowledge in my coursework and future software projects, ensuring that the APIs I develop are well-structured, easy to use, and maintainable.

Conclusion

Overall, the “Rest API Tutorial — A Complete Beginner’s Guide” was a valuable resource that provided me with a solid foundation in RESTful API development. I highly recommend it to any beginner looking to understand how APIs work and how to implement them in practical projects. For those interested, you can read the full post here.

From the blog Discoveries in CS world by mgl1990 and used with permission of the author. All other rights reserved by the author.

Software Licenses

Telling the Do's and Don'ts of your own code


Hello, Debug Ducker here again, and it's time to talk about legal stuff involving software. Now, I am not a lawyer, but I am a coder, and this is related to that. It may still require some independent research on your behalf, but what I have to say is still important. Copyright, love it or hate it, is here to stay. Copyright is involved in a lot of things, such as movies, products, and even software. Yes, software can have a copyright applied to it. How can this be the case? It's rather simple: if you wrote the code, you are the sole copyright holder of that code. "So why does this matter," you are probably thinking. It matters because copyright law places restrictions on someone else using your code. "Well, I don't care, let them use it." They can't, because copyright won't allow them to, and some won't risk the legal issues. So this is where licenses come in.

A license is a set of guidelines for how a person can use and redistribute software, and it is an essential tool for the field. A license is something you grab and put in a section of your software, not something you write yourself. You should never write your own license, ever, as that can cause legal trouble. If you want your code to be free to use with almost no restrictions, there is a license for that; if you want some restrictions on what it can be used for, there is a license for that too. Pick what suits your needs and you should be all set. I won't go too in-depth about all the licenses, because there are quite a few, but there are resources I can share that can help you find the right license for your work.

https://choosealicense.com/
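Whichever license you choose, one common convention (a general practice I'm adding here, not something from the site) is to put the license's full text in a LICENSE file at the root of the repository and mark each source file with a short SPDX identifier comment, for example:

    // SPDX-License-Identifier: MIT
    // Copyright (c) <year> <copyright holder>
    //
    // The full license text lives in the LICENSE file at the repository root.
    public class Example {
    }

That way, both people and automated tools can tell at a glance what terms apply to the code.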

The choosealicense.com site can guide you through the many different licenses that are out there and gives the gist of the guidelines in each. It is helpful for developers who just need a quick way to find the right license for what they want to achieve with their software. I can see myself using such a tool in the future. I hope this talk about the use of licenses was helpful. Thank you for your time.

From the blog Debug Duck by debugducker and used with permission of the author. All other rights reserved by the author.

Another Look Through Someone Else’s Code

In my last post I looked through an independent developer's code. It is messy, so I applied clean code principles to it. Unfortunately, I only made a few fixes, and there was a lot more to say. So let us take a second dive into the rocky waters known as this developer's code.

Just to note, I will continue to use an uploaded replication of his code from GitHub: LordEnma/YandereSimulatorDecompiled (decompiled code from the game Yandere Simulator).

The first thing I want to jump into is the Timer.cs file. It is part of the minigame where the player works in a cafe to earn in-game currency. The minigame is timed: it starts when the player presses start, and when time runs out the minigame stops, calculates the in-game money earned, and gives it to the player. One fix is to change the function named "Awake" to "GameTurnedOn" or "GameBegin". Awake is supposed to tell the file that the player has started the game, but the name given makes no sense, since there is no sleeping in this minigame.

The next thing I want to discuss is the ServingCounter.cs file, also part of the cafe minigame. The counter is where the player sets down their empty tray and picks up their tray filled with food. My first fix is to remove chefMask and trashMask from the SetMask function. Next, rename SetMask to "SetServingCounterMask". The whole file pertains to the serving counter; the chef AI and the trash can should not be affected. It would be better to have something that calls a function, so that the functions in their respective files would execute.

The next thing I want to discuss is the KnifeDetectorScript.cs file. The whole thing is a mess. I think it checks whether the player has a knife, but it also includes code for hiding a knife and for interacting with a blowtorch. Maybe the code is for the player heating the knife with a blowtorch. Firstly, the name of the file is bad; it does nothing to describe what the code is doing. It would be better to name the file "HeatingKnife". Second, blowtorch is a bad name in general, because the item looks exactly like a Bunsen burner, and it should be renamed accordingly. Lastly, why does the file name mention a knife when the code does not specify a weapon? This would be confusing for anyone jumping into the code. Either change the code to only handle knives or change the file name to "HeatingWeapon".

I have the same thoughts as in my last post: the code is messy, and any outside party trying to change it will have a lot of trouble. I also learned something new: the importance of naming things properly. Coding without proper naming is like trying to put furniture together with no labels. Even if you get two pieces together, you run the risk of joining the wrong pieces, getting them stuck, and wasting time trying to fix it.

From the blog CS@Worcester – My Journey through Comp Sci by Joanna Presume and used with permission of the author. All other rights reserved by the author.

Understanding SOLID Principles: A Guide 

As a student learning software design, I’ve come across the SOLID principles in a few lectures, but I wanted a deeper dive to really understand how to apply them. I recently read a blog post titled “SOLID Principles — The Definitive Guide” by Midhun Vincent on Medium. This guide breaks down each of the five SOLID principles in a straightforward way, with examples and explanations that actually make sense for someone still new to object-oriented design. The article is totally in line with what we’re covering in my course, so I figured it was a great chance to see how these principles could improve my coding style now and in the future.

Summary of the Selected Resource

The article explains the SOLID principles, which are five key guidelines for designing object-oriented software that is easier to understand, extend, and maintain. The first principle, the Single Responsibility Principle (SRP), emphasizes that each class should focus on a single task, making the code simpler to maintain and update. Next is the Open/Closed Principle (OCP), which suggests that classes should be open for extension but closed for modification, allowing developers to add new features without altering the original code structure. The Liskov Substitution Principle (LSP) follows, which ensures that objects of a superclass can be replaced with objects of subclasses without causing issues in the application. Then there's the Interface Segregation Principle (ISP), which advises against creating large, general-purpose interfaces and instead encourages smaller, more specific ones that suit the exact needs of different clients. Finally, the Dependency Inversion Principle (DIP) recommends that high-level modules should not rely on low-level modules but rather on abstractions, which reduces dependency and enhances flexibility. Together, these principles form a strong foundation for writing clean, modular code that can handle future changes more gracefully.
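As one concrete illustration, here is a minimal Java sketch of dependency inversion, using a hypothetical notification feature of my own rather than an example from the article:

    // The high-level policy depends only on this abstraction.
    interface MessageSender {
        void send(String recipient, String message);
    }

    // A low-level detail: one possible implementation of the abstraction.
    class EmailSender implements MessageSender {
        public void send(String recipient, String message) {
            System.out.println("Emailing " + recipient + ": " + message);
        }
    }

    // OrderNotifier never references EmailSender directly, so swapping
    // email for SMS or push notifications requires no change here.
    class OrderNotifier {
        private final MessageSender sender;

        OrderNotifier(MessageSender sender) {
            this.sender = sender;
        }

        void orderShipped(String customer) {
            sender.send(customer, "Your order has shipped.");
        }
    }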

Why I Chose This Resource

I chose this post because the SOLID principles are really useful in building better code but can feel abstract at first. The article breaks down each principle in a way that makes them feel practical and achievable. Also, the examples in the post connect well with coding challenges we’ve faced in our course projects, especially in terms of keeping code organized and easy to debug. Seeing how SOLID principles can prevent code from becoming a tangled mess gave me a new perspective on how I approach my own assignments.

My Takeaways and Reflection

Before reading this post, I understood the theory behind the SOLID principles but not really how to implement them in my own code. Now, I can see why each principle matters and how they can actually save time by reducing the need for debugging and refactoring down the line. The Single Responsibility Principle, for example, made me think about how I often give one class way too many jobs, which then makes fixing issues complicated. By applying SRP, I can keep my classes simpler and less error-prone.

Moving forward, I’m planning to use these principles as I work on my projects, especially with the Open/Closed Principle and the Interface Segregation Principle. I can see how they’ll help me write code that’s easier to adapt if requirements change or if I add new features later. In the future, I think understanding SOLID will give me a solid foundation (pun intended!) as I move into more complex software development work.

https://medium.com/android-news/solid-principles-the-definitive-guide-75e30a284dea

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Tackling Merge Conflicts with GitKit: A Student’s Guide to Smoother Collaboration

Working on team projects in class has really brought out how tricky merge conflicts can be. Nothing quite like seeing “conflict” pop up after a pull request to slow things down! For this blog entry, I looked into a post called “Mastering Merge Conflicts with GitKit”, which breaks down why merge conflicts happen and shows how to tackle them using GitKit’s built-in tools. Since our course covers version control and team-based coding, I figured learning to manage these conflicts more effectively would make a big difference, not just now but for any future projects.

Summary of the Selected Resource

The post explains why merge conflicts occur in collaborative projects, like when multiple team members edit the same file or branch in different ways. The author points out that conflicts are actually pretty normal in team coding—it’s just part of working with a shared codebase. GitKit’s approach to handling conflicts was the real game-changer for me here. It uses interactive conflict markers, visual diffs, and a guided merge workflow to help developers see exactly where conflicts happen and resolve them without a lot of guesswork. It’s clear from the blog that these features simplify what’s often a frustrating process, making it more manageable and, honestly, less intimidating.
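For anyone who has only seen this at the command line, the markers Git writes into a conflicted file look like the following (a generic Git example of my own, with a made-up branch name; GitKit's visual tools present the same information more readably):

    <<<<<<< HEAD
    int timeout = 30;   // the version on your current branch
    =======
    int timeout = 60;   // the version coming from the branch being merged
    >>>>>>> feature/tune-timeouts

After editing the file to keep the intended version and deleting the three marker lines, running git add on the file and then git commit (or git merge --continue) completes the merge.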

Why I Chose This Resource

I picked this post because merge conflicts have been a big obstacle for me and my project teammates. They always seem to come up at the worst times—right when you think you’re wrapping up! Learning more about practical strategies to handle them seemed like a solid move. Plus, I hadn’t really explored GitKit’s full range of features before, so this gave me a chance to see how it can streamline conflict resolution. With team coding becoming more common in projects, internships, and industry work, knowing about these tools feels pretty essential.

My Takeaways and Reflection

Before reading this, I mostly just knew the basics of handling conflicts through the command line. But after seeing what GitKit offers, I realized how helpful visual tools and conflict markers can be. They make it so much easier to understand what’s causing the conflict and to feel more confident about fixing it. Having a clearer view of what’s happening in the code feels like it will help me avoid mistakes and keep our project moving forward without so much stress.

Looking ahead, I’m definitely going to use these GitKit techniques in my future work. I plan to keep practicing conflict resolution so it becomes second nature and doesn’t disrupt my flow as much. I can see how this will really come in handy, especially when I start working on larger projects or in a professional setting where team collaboration is essential.

Link to the Resource

https://dev.to/htsagara/handling-merge-conflicts-in-git-how-to-fix-and-prevent-them-1m62

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.