Use-Testing Graphs

In the previous week we spoke about use-testing, which models a program as a graph of nodes and edges: each node is a line of the program, and the edges show the flow between them. These graphs can get very big for a large project, but luckily we can always shrink them by taking out unnecessary nodes. That also means redirecting the edges correctly, but after working with the full model it is much simpler to shrink it down. The reduced graphs are called decision-to-decision (DD) path graphs; they remove redundancy while keeping the important parts of the graph, and the flow of the program, still visible. These graphs pair well with decision tables, where each condition is a true or false scenario with a “don’t care” option that can be used together with the path graph.

We also learned to define the paths by splitting the variables, the nodes where they are defined, and the nodes where they are used into separate columns. This simplified things, letting us assign a node number to each variable. For example, numCells is defined at node 1 and used at nodes 4 and 10. With this method we can see where each variable is defined and used, so when we shrink our graphs we know how to organize them better.
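As a minimal sketch of how this looks in code (the method and the node numbers are invented here to mirror the numCells example above):

```java
// Hypothetical code annotated with program-graph node numbers.
public class DefUseSketch {
    public static int liveTotal(int rows, int cols) {
        int numCells = rows * cols;          // node 1: numCells is DEFINED
        int total = 0;                       // node 2
        for (int i = 0; i < numCells; i++) { // node 4: numCells is USED (in a predicate)
            total += i % 2;                  // node 5: stand-in cell computation
        }
        return total + numCells;             // node 10: numCells is USED (in a computation)
    }
}
```

Reading the annotations off the code gives exactly the columns from class: variable numCells, defined at node 1, used at nodes 4 and 10.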

Overall, these graphs help us organize test cases for the entire system. They help us determine whether something is useful or not: an empty line can be counted as a node, because the IDE is still going to check whether there is code on that line, but in the final graph empty lines and else statements can be erased or shown as an edge. The graphs also help with noticing what we need for the test cases, not only what we do not need. This makes it easier to visualize the structure of the code while clearly communicating what is required to build the test cases, because the last thing anyone wants is to write unnecessary test cases or an overabundance of test cases that should not be there.

Also, by using path testing you can easily visualize which node interacts with a certain edge or another node. If you need to explain it to a team of engineers, it will be easy and organized for anyone to look at, while still holding all the valid and important information they need to create the test cases for the software. I also learned about the cases that come along with these graphs, cases 1 through 5. Case 1 is a node with no edge going into it, which is always the first node of the graph. Case 2 is a node with no edge going out to another node, in most cases the last node. Case 3 is a node with an indegree of at least 2 or an outdegree of at least 2. Case 4 is a node with indegree 1 and outdegree 1, which usually appears in variable declarations or in sequential code. Case 5 is a chain of nodes with a single entry and a single exit. These cases help define the nodes further, in order to dive deeper into the flow of the program.
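To tie the cases back to code, here is a hedged sketch (the fragment and its node numbering are my own invention):

```java
// Hypothetical fragment with each node labeled by its case.
public class NodeCasesSketch {
    public static int clamp(int x) {
        int y = x + 1;    // node 1: case 1 (indegree 0, the first node of the graph)
        if (y > 10) {     // node 2: case 3 (outdegree of at least 2, a decision)
            y = 10;       // node 3: case 4 (indegree 1 and outdegree 1, sequential)
        }
        int z = y * 2;    // node 4: case 3 (indegree of at least 2, where branches rejoin)
        return z;         // node 5: case 2 (outdegree 0, the last node of the graph)
    }
}
// Nodes 4-5 also form a chain with a single entry and a single exit (case 5).
```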

Source: Software Testing – Use Case Testing | GeeksforGeeks

From the blog Cinnamon Codes by CinCodes and used with permission of the author. All other rights reserved by the author.

Week 10: Stubs and Mocks

During week 10, we experimented with stubs and mocks. Stubs and mocks are two forms of test doubles that allow a team to write tests before the whole program has been written. They simulate the code’s fleshed-out methods and allow testing methods to be written in advance.

Stubs are methods or classes that are either empty or return a set value so they can run and be tested. Stubs are state testers and focus on testing the outcome of methods. Mocks are more dynamic: the test block can define what it wants the outcome of any method to be and then test for that outcome. It can also test for multiple set outcomes in the same block. Mocks are behavior testers and test the interactions between methods rather than the outcome.
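To see the difference in code, here is a rough sketch (the PaymentGateway interface is invented, and the mock half uses Mockito-style calls as one common way to build mocks):

```java
import static org.mockito.Mockito.*;

// Invented collaborator that has not been written yet.
interface PaymentGateway {
    boolean charge(double amount);
}

// Stub: hand-written, always returns a canned value, good for state testing.
class PaymentGatewayStub implements PaymentGateway {
    public boolean charge(double amount) { return true; }
}

class StubVsMockSketch {
    void stubStyle() {
        PaymentGateway gateway = new PaymentGatewayStub();
        // State test: we only care about the outcome the stub lets us produce.
        assert gateway.charge(9.99);
    }

    void mockStyle() {
        // Mock: script the outcome, then verify the interaction afterwards.
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge(9.99)).thenReturn(true);

        gateway.charge(9.99); // the code under test would make this call

        // Behavior test: fails if charge() was never called with these arguments.
        verify(gateway).charge(9.99);
    }
}
```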

The reference I found was a blog by BairesDev that compares and contrasts stubs and mocks, which gave me a better idea of their use cases. They explained what stubs and mocks are, the advantages and disadvantages of each, and the situations in which to use both of them.

Advantages of stubs are their predictability and their ability to isolate the method under test. They will always return what you are expecting because of how simple they are. Since stubs do not involve any other calls or methods, they are great at isolating testing to just that method. Disadvantages are user error and the lack of behavior testing. The user might have a discrepancy between what they return in the stub and what they expect in the test. And if your method needs to interact with other methods, stubs are not great for testing that behavior because they only look at the outcome.

Advantages of mocks are being better at catching subtle bugs and issues, and testing the interactions between methods in your code. Since mocks are behavioral tests, if the methods don’t interact how they’re expected to, the test will not pass, which goes beyond what stubs do. Disadvantages are increased complexity and brittle tests. Mocks make your tests more complex than if you were testing the fully written code, which may take more time to adapt once the program is finished. Brittle tests occur when tests are too tightly tied to the mock’s expectations, so small changes can cause a lot of errors.

Overall, stubs are useful when testing independent methods and those that only need to be tested for the outcome. Mocks are useful when methods are dependent on others and can find errors that might not show up if you were just testing outcomes. Both are great when writing tests, but have different applications and both should be used when testing programs. 

Source: https://www.bairesdev.com/blog/stub-vs-mock/

From the blog CS@Worcester – ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Decision Table Testing

Decision table testing is a software testing technique that connects inputs to outputs. It utilizes a structured table in which all possible test cases can be analyzed. A condition represents a variable or input that impacts a process, and an action is the outcome influenced by a condition. Condition alternatives define all possible values for a condition. Each rule in a decision table connects conditions to an action, with values typically represented as true/false or yes/no. For instance, when testing a login system, the conditions are the username and the password, and the action is a successful or failed login attempt.

A switch table is a type of decision table in which a single condition decides the outcome. A traffic light system is an example: the light’s color is the condition, and the action is cars stopping or continuing through traffic depending on that color. In a rule-based decision table, the conditions and actions are laid out against the rules, and each entry indicates whether the conditions and actions for a column match those of a rule. Another type of decision table is the limited decision table, which has simple, independent conditions; a login system is again an example. The result of a login attempt is known from whether the username and password were correctly entered, and username and password are independent conditions, since the system checks whether each is correct regardless of the other.

The first step in creating a decision table is identifying the conditions; a login system will again serve as the example, with a valid username and a valid password as the conditions. Once the conditions are identified, the next step is defining the possible condition alternatives. In a class activity, the conditions were gpa and credits, and a condition such as gpa can take multiple values, for example gpa > 2.5 or gpa < 4.0. Once the conditions and condition alternatives have been created, the actions have to be defined, and then the rules for the decision table are set up. After the rules have been set, the table holds conditions, actions, rules, and condition alternatives, and it can be filled in. The last step in creating a decision table is identifying and deleting redundant rules. Prior to taking a software testing class, I had little knowledge of decision table testing, so I chose this blog as a chance to expand my understanding. After reading the article, I understand decision table testing much better than when I first learned it.
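Here is a minimal sketch of the class activity as code (the exact thresholds and the graduation action are my own assumptions):

```java
// Hedged sketch: conditions are gpa > 2.5 and credits >= 120 (assumed values),
// and the action is whether the student may graduate.
public class GraduationDecisionTable {
    public static boolean mayGraduate(double gpa, int credits) {
        boolean gpaOk = gpa > 2.5;          // condition 1
        boolean creditsOk = credits >= 120; // condition 2
        return gpaOk && creditsOk;          // action fires only for the true/true rule
    }

    public static void main(String[] args) {
        // One call per rule in the table:
        System.out.println(mayGraduate(3.2, 125)); // T,T -> true
        System.out.println(mayGraduate(3.2, 90));  // T,F -> false
        System.out.println(mayGraduate(2.0, 125)); // F,T -> false
        System.out.println(mayGraduate(2.0, 90));  // F,F -> false
    }
}
```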

article: https://www.browserstack.com/guide/decision-table

From the blog CS@Worcester – jonathan&#039;s computer journey by Jonathan Mujjumbi and used with permission of the author. All other rights reserved by the author.

Basis Path Testing in Software Testing

Software testing is a significant part of confirming the functionality, reliability, and performance of software products. Among the many types of tests, Basis Path Testing is an essential technique for verifying the control flow of a program. In this blog, we share the concept of Basis Path Testing, its importance, and how it is applied in software testing.

What is Basis Path Testing?
Basis Path Testing is a white-box testing method that focuses on software control flow. It was formulated by Thomas J. McCabe together with the Cyclomatic Complexity metric, which counts the number of linearly independent paths in a program’s control flow. The approach is designed to test the software by executing all the independent paths through the program at least once, providing complete coverage of the code.

The goal of Basis Path Testing is to locate all the potential paths within a program and ensure that each of them is tested for possible issues. It helps detect logical defects that are not obvious through other testing techniques, such as functional or integration testing.

Key Elements of Basis Path Testing
Control Flow Graph: The first step in Basis Path Testing is to design a control flow graph (CFG) for the program. This graph represents the control structure of the program, including decision points, loops, and function calls.

Cyclomatic Complexity: The second step is to compute the cyclomatic complexity of the program, which gives the number of linearly independent paths. The metric is calculated as:
V(G) = e - n + 2P

where e is the number of edges, n is the number of nodes, and P is the number of connected components.
The cyclomatic complexity gives the minimum number of test cases required to exercise all the independent paths.

Independent Paths: After calculating the cyclomatic complexity, the independent paths in the control flow graph must be determined. These are paths that each introduce at least one edge not covered by the other paths.

Test Case Design: Once independent paths are identified, test cases are created to execute each path such that all aspects of the program’s logic are exercised.
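As a small worked example (the method below is invented for illustration, not taken from the article), consider a method with two decisions:

```java
// Hypothetical method for walking through basis path testing.
public class GradeClassifier {
    public static String classify(int score) {
        String label;                 // node 1
        if (score >= 60) {            // node 2 (decision)
            label = "pass";           // node 3
        } else {
            label = "fail";           // node 4
        }
        if (score == 100) {           // node 5 (decision)
            label = "perfect";        // node 6
        }
        return label;                 // node 7
    }
}
// CFG edges: 1-2, 2-3, 2-4, 3-5, 4-5, 5-6, 5-7, 6-7  =>  e = 8, n = 7, P = 1
// V(G) = 8 - 7 + 2*1 = 3, so at least three independent paths (one test case each):
//   P1: 1-2-3-5-7    (e.g., score = 75)
//   P2: 1-2-4-5-7    (e.g., score = 40)
//   P3: 1-2-3-5-6-7  (score = 100)
```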

Importance of Basis Path Testing
Basis Path Testing is particularly useful for revealing intricate logical errors that result from complex control flow. By executing all independent paths, it ensures that no part of the program’s logic is left untested, which reduces the chances of undiscovered defects.

The approach is used widely in unit testing and integration testing, especially for programs with intricate decision structures and loops. It is also a good approach for regression testing, where changes to the codebase may introduce flaws into previously tested paths.

Conclusion
Basis Path Testing is a highly valuable method for thorough testing of software using independent paths through the control flow of a program. By understanding and applying this method, software developers are able to improve the quality of applications, reduce errors, and deliver improved software to end-users.

Personal Reflection
Having studied Basis Path Testing, I can see how this approach is essential to checking the strength of software systems. As a computer science major, what I have learned from my studies is that testing is not just about checking whether the code runs but, more importantly, about verifying the logic and correctness of what runs. Basis Path Testing’s focus on cyclomatic complexity provides a clear, mathematical way to ensure that all possible execution paths are considered.

In my experience, applying this technique detects logical flaws in programs that would otherwise not be easily seen through normal debugging or functional testing.

Citation:
“Basis Path Testing in Software Testing.” GeeksforGeeks, https://www.geeksforgeeks.org/basis-path-testing-in-software-testing/.

From the blog Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Understanding FIRST Principles: Remastered

This week I am diving into an article by David Rodenas, PhD, a software engineer with a wide array of knowledge on the topic of software testing. I found the article “Improve Your Testing #13: When F.I.R.S.T. Principles Aren’t Enough”, one of many posts from Rodenas that offer insights into software testing that only a pro could provide. Through this article, Rodenas walks through each letter of the FIRST acronym with new meaning, not replacing what we know but enhancing it. As the author teaches us these new ways of understanding, examples are provided so we can look for ways to apply them in our own work.

The acronym can be read as: Fast, Isolated, Repeatable, Self-Verifying, and Timely, but taking this one step further we can acknowledge the version that builds on top of it: Focused, Integrated, Reliable, Significant, and Thoughtful. These definitions are not opposites of each other; they should exist cohesively in our quest for trustworthy software.

One pro-tip that sticks out to me is keeping your tests focused by avoiding the word “and” in the test name. When I first read it, it seemed kind of silly, but in a broad sense it really does work. The writer relates this to the Single Responsibility Principle: by writing tests with a clear focus, our tests are fast and purposeful. Another takeaway is the importance of writing reliable and significant tests. Tests should not only validate but also provide meaningful information about what went wrong to cause them to fail. A test that passes all the time but fails to catch real-world issues is not significant. Similarly, flaky tests, ones that pass or fail inconsistently, break trust in the testing suite and should be avoided. Rodenas also emphasizes that integrating tests properly is just as important as isolating them. While unit tests should be isolated for precision, integration tests should ensure that components work together seamlessly. A good balance between both approaches strengthens the overall reliability of a software system.
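Here is a quick sketch of that naming advice in JUnit (the Cart class and its flat price are invented for the example):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Minimal invented Cart so the sketch is self-contained.
class Cart {
    private int count = 0;
    private double total = 0.0;
    void add(String item) { count++; total += 10.00; } // flat price for the sketch
    int itemCount() { return count; }
    double total() { return total; }
}

class CartTest {
    // Unfocused (avoid): void addsItemAndUpdatesTotal() { ... }
    // The "and" hints the test has two responsibilities.

    @Test
    void addingAnItemIncreasesItemCount() {
        Cart cart = new Cart();
        cart.add("book");
        assertEquals(1, cart.itemCount());
    }

    @Test
    void addingAnItemUpdatesTotal() {
        Cart cart = new Cart();
        cart.add("book");
        assertEquals(10.00, cart.total(), 0.001);
    }
}
```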

Ultimately, this article challenges us to go beyond simply following FIRST principles and to think critically about our testing strategy. Are our tests truly adding value? Are they guiding us toward our goal of thoroughly tested software, or are they just passing checks in a pipeline? By embracing this enhanced approach to testing, we can ensure that our tests serve their true purpose: to build confidence in the software we deliver.

From the blog cameronbaron.wordpress.com by cameronbaron and used with permission of the author. All other rights reserved by the author.


Best UAT Testing Tools in 2025: Deliver Software People Actually Want to Use

User Acceptance Testing (UAT) is one of those final, crucial steps before launching a product. It’s where real users step in to make sure everything works the way it’s supposed to, and more importantly, the way they actually need it to. To make this process smoother and way less painful, there are tons of UAT tools out there. The real challenge? Picking the right one.

In 2025, UAT tools aren’t just about checking off requirements. They help teams stay aligned, find bugs early, and make sure what’s being built is actually useful. Whether you’re working on a web app or a full-on enterprise system, the right tool can seriously level up your process.

Why UAT Tools Matter

UAT isn’t just testing for the sake of it. It’s about making sure the product works in the real world, not just in a controlled dev environment. UAT tools help you organize test cases, manage feedback, track bugs, and bring in users to validate everything. The good ones also make it easier for everyone (devs, QA, stakeholders, and end users) to stay in sync.

If you care about shipping high-quality, user-friendly software, UAT tools are a must.

Top Tools Worth Checking Out

Here are some standout UAT tools in 2025 that I think are really worth a look:

  • LambdaTest: Solid choice for cross-browser testing and real device testing. Supports both manual and automated workflows, plus integrates with tools like Testim.io and QMetry.
  • TestRail: Great for keeping things organized. Helps you manage test cases, track progress, and integrates nicely with Jira and GitHub.
  • Maze: This one’s perfect if you care about user experience. It gives you heatmaps, click tracking, and real-time user feedback.
  • SpiraTest: A full suite that combines test case management, bug tracking, and requirement linking all in one place.
  • UserTesting: Lets you test with actual users from around the world and get feedback via video, audio, and written responses.
  • Testim.io: If you’re leaning into automation, this one’s got AI-powered test creation and a visual editor that makes building and updating tests way easier.
  • Hotjar: Not your typical test management tool, but super helpful for visualizing user behavior with session replays, heatmaps, and feedback polls.

Final Thoughts

No single UAT tool is perfect for everyone. The best one for you depends on your team, your workflow, and your budget. Sometimes mixing a couple of tools gives you the coverage you need, from test case management to user feedback and performance insights.

At the end of the day, UAT tools exist to help you build better products—and getting that right can make a huge difference.

From the blog CS@Worcester – The Bits &amp; Bytes Universe by skarkonan and used with permission of the author. All other rights reserved by the author.

Improving Test Case Design with Decision Table Testing

I chose this blog because I wanted to learn about methods for ensuring thorough test coverage while minimizing effort. During my research, I stumbled upon an interesting article on TestSigma about decision table testing. I was particularly intrigued because it offers a structured approach to managing complex decision-making scenarios in software applications. Because I am still working on my programming and testing skills, I found this technique especially useful for systematically identifying the various input conditions and their expected outputs.

Decision table testing is a black-box testing technique that allows testers to organize input conditions and associated output values in a structured table format. This method is especially useful in software applications where multiple variables affect the output. By organizing all of the factors into a decision table, testers can ensure that all possible combinations of inputs and expected outcomes are thoroughly covered, with no gaps in test coverage.

For example, consider an online banking system in which a user can only transfer money if his or her account balance is sufficient and the authentication process is successful. In a decision table, this scenario would have four different combinations:

  • Sufficient balance and successful authentication
  • Sufficient balance and failed authentication
  • Insufficient balance and successful authentication
  • Insufficient balance and failed authentication

Each combination is associated with its expected outcome, ensuring that all scenarios are tested. This structured approach ensures that no critical test case is overlooked, helping testers avoid missing edge cases or rare conditions that could lead to system failures.
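As a minimal sketch of that table as an executable test (JUnit 5 parameterized-test syntax; the transfer rule is simplified to just the two conditions):

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class TransferDecisionTableTest {
    // Each CSV row is one rule of the decision table:
    // sufficientBalance, authenticated, expected outcome
    @ParameterizedTest
    @CsvSource({
        "true,  true,  true",   // rule 1: transfer succeeds
        "true,  false, false",  // rule 2: authentication failed
        "false, true,  false",  // rule 3: balance too low
        "false, false, false"   // rule 4: both conditions fail
    })
    void transferFollowsDecisionTable(boolean balanceOk, boolean authOk, boolean expected) {
        assertEquals(expected, balanceOk && authOk); // stand-in for the real transfer logic
    }
}
```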

I chose the TestSigma article because it explains decision table testing in an understandable, simplified manner. Even though the technique can get involved, the article breaks it down in depth and includes real-life examples, making it easy even for beginners to understand. The other articles I found were either too technical or too shallow and disorganized, whereas this one managed to strike a balance between the two.

Decision tables are a simple way to handle complex decision-making scenarios in software, structuring the input conditions and expected outcomes to provide more comprehensive test coverage while avoiding redundancy. The article above provides a lucid, applicable treatment of the technique, making it an extremely useful tool for students like me. As I continue honing my programming and testing skills, learning structured testing methods like the decision table will help me create efficient, thoroughly organized, error-free tests, preparing me for a future in software development and testing.

Blog: https://testsigma.com/blog/decision-table-testing/

From the blog CS@Worcester – Matchaman10 by tam nguyen and used with permission of the author. All other rights reserved by the author.

Understanding Static and Dynamic Testing in Software Development

Intro

Software testing is an essential part of the development lifecycle, ensuring that applications function correctly and efficiently. The blog post “Static Testing vs. Dynamic Testing” by TatvaSoft provides an insightful comparison and description of these two fundamental testing approaches.

Summary Of The Source

The blog post explains the key differences between static and dynamic testing, their benefits, and when to use each approach:

  1. What is Static Testing? This type of testing is performed without executing the code. It involves reviewing documents and conducting code inspections to detect errors early in the development process.
  2. What is Dynamic Testing? Unlike static testing, dynamic testing requires running the code to identify issues related to performance, security, and functionality. It involves techniques such as unit testing, integration testing, and system testing.
  3. Advantages of Static Testing: Helps detect issues early, reduces debugging costs, and ensures adherence to coding standards.
  4. Advantages of Dynamic Testing: Identifies runtime errors, ensures the software behaves as expected in real-world scenarios, and validates functional correctness.
  5. When to Use Each Method? Static testing is best used in the early stages of development to catch errors before execution, while dynamic testing is crucial before deployment to validate real-world performance.
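To make the contrast in those points concrete, here is a hedged sketch (the discount example is invented): a code review or static analysis pass could flag the missing guard below without running anything, while only a dynamic test observes the bad value at runtime.

```java
public class Discounts {
    // Static testing (review/inspection) can flag that "rate" is never
    // validated, without ever executing this method.
    public static double apply(double price, double rate) {
        return price - price * rate; // no guard: rate = 1.5 silently yields a negative price
    }
}

// Dynamic testing actually executes the code and observes runtime behavior.
class DiscountsTestSketch {
    public static void main(String[] args) {
        double result = Discounts.apply(100.0, 1.5);
        System.out.println("result = " + result); // prints -50.0
        assert result >= 0 : "discounted price should never be negative";
    }
}
```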

Why I Chose This Blog

I chose this blog because it breaks down the question of static vs. dynamic testing really well. It has clear sections that answer the important questions: what each approach is, its benefits and disadvantages, and how the two compare. It keeps everything clear and understandable, making it a great read for anyone new to the topic.

Reflection

The blog reinforced how skipping dynamic testing can lead to undetected runtime issues, which can be costly and damaging after release. One key takeaway for me is the necessity of balancing both testing methods. Relying solely on static testing may overlook execution-related issues, while dynamic testing alone can let avoidable errors slip into the later stages of development. A combination of both is needed, or at least most effective; practicing good static testing early makes the dynamic testing that comes later less prone to errors.

Future Application

In the future, when testing software, I will definitely keep these two methodologies in mind and incorporate them both: static testing is very valuable for preventing errors and bugs before the code runs, while dynamic testing is really useful for seeing the actual end functionality as you test. It’s an important topic to know, since correct testing methodologies and practices keep code clean and working properly.

Citation

TatvaSoft. (2022, November). Static Testing vs. Dynamic Testing. Retrieved from https://www.tatvasoft.com/outsourcing/2022/11/static-testing-vs-dynamic-testing.html.

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

The Importance of Quality Testing

By Daniel Parker

When I say quality testing, many software developers and others in the computer science field may think I am referring to testing the quality of software. While in many instances this may be the case, here I would like to point out the importance of quality testing: creating tests with a high level of quality to ensure working software.

Before delving into why it’s so important, it might be beneficial to highlight just how disastrous dysfunctional code can be. According to Harry Ngyuen in his blog “Top 8 Reason Why Software Testing Is Important In 2025”, fixing a bug that has made it to the maintenance phase can cost 10 times more than fixing it during the design phase.

As he also mentions, many customers expect fast, high-quality applications, and if these expectations are not met, many customers may switch providers. Beyond customer satisfaction, buggy software also carries many risks, creating security hazards.

Clearly having buggy code that was haphazardly tested is incredibly detrimental to any software being released by a single developer or even an entire organization. So how do we ensure this doesn’t happen?

There is a multitude of ways to test code, and I don’t think listing them out will help our cause, as there is no single best way. To ensure high-quality software, you must write high-quality tests.

Your tests have to be able to accurately track the flow of your program, highlight where something failed and why it failed, and ensure all boundaries are tested so that a successful output is what you’re left with. This means writing as many tests as needed, whether it’s five or fifty. It means writing tests that check whether your application can handle anything thrown at it. Testing takes time and effort, but it’s much better to spend that time while writing your code, handling the bugs one at a time, than to have to deal with many of them after a release has occurred.
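As a small sketch of what testing the boundaries can look like (JUnit-style; the percentage validator is invented):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Invented example: a validator with boundaries at 0 and 100.
class PercentValidatorTest {
    static boolean isValidPercent(int p) { return p >= 0 && p <= 100; }

    @Test void acceptsLowerBoundary()  { assertTrue(isValidPercent(0)); }
    @Test void acceptsUpperBoundary()  { assertTrue(isValidPercent(100)); }
    @Test void rejectsJustBelowLower() { assertFalse(isValidPercent(-1)); }
    @Test void rejectsJustAboveUpper() { assertFalse(isValidPercent(101)); }
    @Test void acceptsTypicalValue()   { assertTrue(isValidPercent(50)); }
}
```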

I plan on continuing down the road of software development both in my student career and my professional career. With this in mind, I will ensure I apply my quality testing skills to produce the highest quality software I can.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.

Stubs vs. Mocks

This week in class, we learned about test doubles. Test doubles are make-believe objects in code that take the place of actual objects for testing purposes. Test doubles are a great resource if you are working on a team project and your collaborator’s code is incomplete or not there at all. The test doubles we went over in class are dummies, fakes, stubs, and mocks. And being honest, I didn’t have the best understanding of it at first. But creating stubs in Activity 12 gave me a greater comprehension, as well as interest in it. Upon doing a little more research on stubs, I found a blog all the way from 2007 called Unit Testing with Stubs and Mocks. The blog compared stubs and mocks to each other, as well as which test double is more useful.

Stubs

Stubs are used by implementing part of your peer’s code as a concrete class. How much of the collaborator code is needed depends on the minimum required to properly test something in the class. A stub’s job is to do nothing more than return the expected value needed for the test to pass. Stubs can also be implemented inside the test class as separate, anonymous declarations, which saves the tester a lot of time. As a result, the same stub does not have to be reused across multiple unit tests, and the number of stubs in a project becomes significantly lower.

A drawback of this, however, is that the separately declared stub does not account for any changes your peer might make to the code. This can mean a lot of work for the tester, because new methods and variables may need to be declared. As a solution, it is recommended that a base class be created in the test directory of your project. That way, if a collaborator makes an update to the code, you limit the amount of work you do to updating the base class.
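Here is a rough sketch of a stub declared inline in a test, in the spirit the blog describes (the AccountRepository interface and its values are invented):

```java
// Invented collaborator interface for the sketch.
interface AccountRepository {
    double balanceFor(String accountId);
}

class BalanceReportTestSketch {
    public static void main(String[] args) {
        // Anonymous stub: declared right where it is used, returning a canned value.
        AccountRepository stub = new AccountRepository() {
            @Override
            public double balanceFor(String accountId) {
                return 250.0; // fixed value so the test stays predictable
            }
        };

        double balance = stub.balanceFor("any-id"); // the code under test would call this
        assert balance == 250.0;
        System.out.println("stubbed balance = " + balance);
    }
}
```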

Mocks

Mock objects are used to obtain a high level of control over testing the given implementation. Mock object generators such as EasyMock and JMock are great tools that create mock objects based on the class being tested. If a test fails in EasyMock, it will produce an obvious message, whereas when testing with stubs it can be unclear why a test failed. Mocks do, however, lead to constant changes being made to the test cases as the implementation evolves; but through that constant updating, the unit tester gains a greater knowledge of the implementation.
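A minimal sketch of that EasyMock flow (the MailSender collaborator is invented; createMock, expect, replay, and verify are EasyMock’s standard calls):

```java
import static org.easymock.EasyMock.*;

// Invented collaborator for the sketch.
interface MailSender {
    boolean send(String to, String body);
}

class WelcomeMailTestSketch {
    void sendsWelcomeMail() {
        // 1. Create the mock and script the expected interaction.
        MailSender sender = createMock(MailSender.class);
        expect(sender.send("user@example.com", "Welcome!")).andReturn(true);
        replay(sender);

        // 2. Exercise the code under test (called directly here for brevity).
        sender.send("user@example.com", "Welcome!");

        // 3. Verify the interaction happened exactly as scripted;
        //    EasyMock fails with an explicit message if it did not.
        verify(sender);
    }
}
```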

When to Use

As for which is better between stubs and mocks, there is no wrong answer. There are scenarios where using stubs fits the test better, but there are also scenarios where mocks are the better option. For example, if you need to nest a method call on the collaborator, then stub objects are the way to go. Mocks are more robust than stubs, but stubs are easier to read and maintain. However, mock users say that it is worth using mocks over stubs in the long term.

Reference

https://spring.io/blog/2007/01/15/unit-testing-with-stubs-and-mocks

From the blog CS@Worcester – Blog del William by William Cordor and used with permission of the author. All other rights reserved by the author.