For a lot of the projects I’ve worked on, we didn’t really do testing in the traditional sense of actually writing tests. For each method or function we created, we would just add debugging statements to the code itself to figure out which parts of the method were working correctly, which fail-safes were activating, and which errors were being thrown. There didn’t seem to be much reason to write tests with JUnit or something similar when all code went to a dev environment first and was only then promoted to production.
Just recently I started using JUnit a little to find out how useful it could be when practicing certain coding problems. It definitely has some perks: instead of compiling and building the entire project each time, I could just run a test to check whether a particular piece of code works properly. But for me, that isn’t really enough to warrant using it all that much. If you have access to a dev environment (which you should), then even though it isn’t the “proper” way to do things, I believe writing debug statements in the code instead of tests is more efficient.
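To give a concrete picture of what I mean by that isolated check, here is roughly what it looks like. This is just a minimal sketch assuming JUnit 5, and the applyDiscount method is a made-up stand-in for whatever piece of code is being exercised:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceCalculatorTest {

    // The method under test; in a real project this would live in its own class.
    static int applyDiscount(int price, int percent) {
        return price - (price * percent / 100);
    }

    @Test
    void discountOfTenPercentIsApplied() {
        // Runs in isolation, without building or deploying the whole project,
        // which is the main convenience mentioned above.
        assertEquals(90, applyDiscount(100, 10));
    }
}
```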
If you look at it from a different perspective, though, writing tests can also be worthwhile if you’re more into the automation side of development. Since tests are run automatically when your project builds, you wouldn’t need to keep adding debug statements manually. This could be even more efficient than what I previously described, but you may not get as much information as you need from a test. If one of your methods isn’t working properly and the test keeps returning an exception or error whose cause you don’t understand, the earlier approach of writing debug statements will be more efficient and, in turn, will help you understand where you went wrong.
All in all, I personally believe that unit testing (and testing in general) and writing debug statements each have their place when writing software. Obviously I don’t know too much about software testing yet, and as I learn more my opinion could definitely change. As of now, writing debug statements when there is an issue, or even when there isn’t, helps me understand each part of what I’m writing at a deeper level, and I look forward to learning more about testing so that it can help me reach that point too.
This week in class we learned about path testing, which is a white box method that examines the code to find all possible execution paths. Path testing uses control flow graphs to illustrate the different paths that could be executed in a program. In the graph, the nodes represent lines of code, and the edges represent the order in which the code is executed. Path testing appealed to me as a testing method because it gives a visual representation of how the source code should execute given different inputs. I took a deeper dive into path testing after this week’s classes and found a blog that gave me a better understanding of it.
Steps
When you have decided that you want to perform path testing, you must first create a control flow graph that matches the source code. For example, a node that splits into two outgoing edges represents an if-else statement, and a loop (for or while) is represented by a node later in the program with an edge pointing back to an earlier node.
Secondly, pick out a baseline path for the program. This is the path you define to be the original path of your program. After the baseline is created, continue generating paths representing all possible outcomes in the execution.
How many Test Cases?
For lengthy source code, the possible paths could seem endless, and enumerating them manually could become a difficult, time-consuming task. Luckily, there is an equation that determines how many test cases a program will need for path testing.
C = E – N + 2P
Where C stands for cyclomatic complexity. The cyclomatic complexity is equivalent to the number of linearly independent paths, which in turn equals the number of required test cases. E represents the number of edges, N is the number of nodes, and P is the number of connected components. Note that for a single program or source of code, P = 1 always.
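To make the formula concrete, here is a small worked example of my own (not taken from the blog I read), with the nodes labeled in comments and the counts worked out below the method:

```java
public class CyclomaticExample {

    // Hypothetical method used only to count E, N, and P.
    static int sumPositives(int[] values) {
        int total = 0;                 // node 1
        for (int v : values) {         // node 2: loop decision
            if (v > 0) {               // node 3: if decision
                total += v;            // node 4
            }
        }
        return total;                  // node 5
    }
    // Edges among the nodes: 1->2, 2->3, 3->4, 4->2, 3->2, 2->5, so E = 6.
    // N = 5 nodes and P = 1 connected component, so
    // C = E - N + 2P = 6 - 5 + 2 = 3 linearly independent paths, e.g.
    // an empty array, an array with a positive value, and an array with a non-positive value.
}
```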
Benefits
Path testing reveals outcomes that otherwise may not have been known without examining the code. As stated before, it can be difficult for a tester to know all the possible outcomes in a class. Path testing provides a solution to that by using control flow graphs, where the tester can examine the different paths. Path testing also ensures branch coverage. Developers don’t need to merge code with an existing repository because they can test in their own branch, and unnecessary or overlapping tests are another thing developers don’t have to worry about.
Drawbacks
Path testing can also be time-consuming. Quicker testing methods exist that take less time away from further developing the project. In many cases, path testing may also be unnecessary. It is often used in DevOps setups that require a certain amount of unit coverage before deploying to the next environment; outside of that, it may be considered inefficient compared to other testing methods.
I chose this blog because I was interested in learning how to optimize the selection of test cases without sacrificing thorough coverage. During my search, I came across an article on TestGrid.io on equivalence partitioning testing, which was fantastic in removing redundancy from testing. As I develop my programming and software testing skills, I have found this method especially useful in making the testing process simpler.
Equivalence partitioning is a testing technique that divides input data into partitions, or groups, based on the assumption that values in the same partition will behave similarly. Instead of testing every possible input, testers choose a sample value from each partition and treat it as representative of the whole group. This reduces the number of test cases while still providing sufficient coverage.
For example, if a program accepts input values ranging from 1 to 100, equivalence partitioning lets the tester categorize inputs into two sets: valid values (1-100) and invalid values (less than 1 or more than 100). Rather than testing every number in the valid set, a tester would choose representative values like 1, 50, and 100. Similarly, they would cover the invalid set with values like 0 and 101. This is time-efficient while still identifying errors.
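As a rough sketch of how that plays out in code (assuming JUnit 5, with a hypothetical isValid method standing in for the program under test), the whole 1-to-100 range boils down to a handful of representative values:

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class InputRangeTest {

    // Hypothetical method under test: accepts values from 1 to 100 inclusive.
    static boolean isValid(int value) {
        return value >= 1 && value <= 100;
    }

    @ParameterizedTest
    @CsvSource({
        "1, true",    // lower boundary of the valid partition
        "50, true",   // representative from the middle of the valid partition
        "100, true",  // upper boundary of the valid partition
        "0, false",   // representative of the invalid partition below the range
        "101, false"  // representative of the invalid partition above the range
    })
    void representativeValuesCoverEachPartition(int value, boolean expected) {
        assertEquals(expected, isValid(value));
    }
}
```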
I chose the TestGrid.io article because it explains equivalence partitioning in an understandable and systematic manner. A lot of other testing material is too complex or ambiguous for newcomers, but this article simplifies the topic and incorporates real-world examples. That made it easy not only to understand the theory, but also to apply the method to real-life situations. The article also discusses the advantages of equivalence partitioning, including reducing redundant test cases, improving efficiency, and still offering complete coverage. As someone interested in improving my testing methods, I found this useful because it corresponds with my goal of producing better, more efficient test cases without redundant repetition.
Equivalence partitioning testing is a sound approach to optimizing test case selection. It enables the tester to focus on representative cases rather than testing all possible inputs, which saves time and effort. The TestGrid.io article provided a clear understanding of how to implement this method and why it matters. For me, learning effective test methods like equivalence partitioning will make me more efficient in my coding, debugging, and software development abilities, and better prepared for internships, projects, and software engineering positions.
This week, our class learned about path testing. It is a white box testing method (meaning we design tests by examining the internal structure of the source code) that checks that every specification is met and helps create an outline for writing test cases. Each line of code is represented by a node, and each point execution can move to next is represented by a directed edge. Test cases are written for each possible path the program can take.
For example, if we are testing a code block like the one sketched below, a path testing diagram can be created for it. Each node represents a line of code, and each directed edge represents where execution goes next, depending on the conditions. Test cases are written for each path: running through the while loop and leaving once value exceeds 5, or bypassing the loop entirely if value starts out greater than 5.
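The original code block and diagram from class aren’t reproduced here, so the snippet below is only my reconstruction of a similar example based on the description (a while loop that keeps running until value passes 5, with lines 2 and 3 forming a straight-line pair):

```java
import java.util.Scanner;

public class WhileLoopExample {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);        // line 1 -> node 1
        int value = input.nextInt();                   // line 2 -> node 2
        System.out.println("starting at " + value);    // line 3 -> node 3
        while (value <= 5) {                           // line 4 -> node 4 (branch point)
            value = value + 1;                         // line 5 -> node 5 (loops back to node 4)
        }
    }
}
```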
I wanted to learn more about path testing, so I did some research and found a blog that mentioned cyclomatic complexity. Cyclomatic complexity is a number that classifies how complex the code is based on how many nodes, edges, and conditionals you have. This number will relate to how many tests you need to run, but is not always the same number. The cyclomatic complexity of the example above would be (5-5)+2(1) = 2.
The blog also explores the advantages and disadvantages of path based testing. Some advantages are performing thorough testing of all paths, testing for errors in loops and branching points, and ensuring any potential bugs in pathing are found. Some disadvantages are failing to test input conditions or runtime compilations, a lot of tests need to be written to test every edge and path, and exponential growth in the number of test cases when code is more complex.
Another exercise we did in class was condensing nodes that do not branch. In the example above, node 2 and 3 can be condensed into one node. This is because there is no alternative path that can be taken between the nodes. If line 2 is run, line 3 is always run right after, no matter what number value is. Condensing nodes would be helpful in slightly more complex programs to make the diagram more readable. Though if you are working with a program with a couple hundred lines, this seems negligible.
When I am writing tests for a program in the future, I would probably choose a more time-conscious method. Cyclomatic complexity is a good and useful metric to have, but basing test cases off of the path testing diagram does not seem practical for complex code under tight time constraints.
If you’re like me, getting into software testing might feel overwhelming at first. There are countless methods and techniques, each with its own purpose, and it’s easy to feel lost. But when I first learned about Equivalence Class Testing, something clicked for me. It’s a simple and efficient way to group similar test cases, and once you get the hang of it, you’ll see how much time and effort it can save.
So, what exactly is Equivalence Class Testing? Essentially, it’s a method that helps us divide the input data for a program into different categories, or equivalence classes, that are expected to behave the same way when tested. Instead of testing every single possible input value, you select one or two values from each class that represent the rest. It’s like saying, “if this value works, the others in this group will probably work too.” This approach helps you avoid redundancy and keeps your testing efficient and focused.
Now, why should you care about Equivalence Class Testing? Well, let me give you an example. Imagine you’re writing a program that processes numbers between 1 and 1000. It would be impossible (and impractical!) to test all 1000 values, right? With Equivalence Class Testing, you can group the numbers into a few categories, like numbers in the lower range (1-200), the middle range (201-800), and the upper range (801-1000). You then pick one or two values from each group to test, confident that those values will tell you how the whole range behaves. It’s a major time-saver.
When I started using this method, I realized that testing every possible input isn’t just unnecessary—it’s counterproductive. Instead, I learned to focus on representative values, which allowed me to be much more efficient. For instance, let’s say you’re testing whether a student is eligible to graduate based on their GPA and number of credits. You could create equivalence classes for the GPA values: below 2.0, which would likely indicate the student isn’t ready to graduate; between 2.0 and 4.0, which might be acceptable; and anything outside the 0.0 to 4.0 range, which is invalid. Testing just one GPA value from each class will give you a pretty good sense of whether your function is working properly without overloading you with unnecessary cases.
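Here is a small sketch of what that might look like in code. The graduation rule itself (GPA of at least 2.0 and at least 120 credits) is invented for illustration; only the equivalence classes mirror what I described above:

```java
public class GraduationCheck {

    // Hypothetical rule for illustration: eligible if GPA >= 2.0 and credits >= 120.
    static boolean canGraduate(double gpa, int credits) {
        if (gpa < 0.0 || gpa > 4.0) {
            throw new IllegalArgumentException("GPA must be between 0.0 and 4.0");
        }
        return gpa >= 2.0 && credits >= 120;
    }

    public static void main(String[] args) {
        // One representative GPA per equivalence class:
        System.out.println(canGraduate(1.5, 120)); // class: valid GPA below 2.0  -> false
        System.out.println(canGraduate(3.0, 120)); // class: valid GPA 2.0 to 4.0 -> true
        try {
            canGraduate(4.5, 120);                 // class: GPA outside 0.0-4.0  -> rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```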
Another thing I love about Equivalence Class Testing is that it naturally leads into both Normal and Robust Testing. Normal testing focuses on valid inputs—values that your program should accept and process correctly. Robust testing, on the other hand, checks how your program handles invalid inputs. For example, in our GPA scenario, testing GPAs like 2.5 or 3.8 would be normal testing, but testing values like -1 or 5 would fall under robust testing. Both are essential for making sure your program is strong and can handle anything users throw at it.
Lastly, when I first heard about Weak and Strong Equivalence Class Testing, I was a bit confused. But the difference is straightforward. Weak testing means you’re testing just one value from each equivalence class at a time for a single variable. On the other hand, Strong testing means you’re testing combinations of values from multiple equivalence classes for different variables. The more variables you have, the more comprehensive your tests will be, but it can also get more complex. I usually start with weak testing and move into strong testing when I need to be more thorough.
Overall, learning Equivalence Class Testing has made my approach to software testing more strategic and manageable. It’s a method that makes sense of the chaos and helps me feel more in control of my testing process. If you’re new to testing, or just looking for ways to make your tests more efficient, I highly recommend giving this method a try. You’ll save time, energy, and still get great results.
Hello everyone, and welcome back to my weekly blog post! This week, we’re diving into an essential software testing technique: Pairwise and Combinatorial Testing. These methods help testers create effective test cases without needing to check every single possible combination of inputs, similar to most of the test case selection methods we’ve learned about.
To make things more relatable, let’s start with a real-life example from the insurance industry.
A Real-Life Problem: Insurance Policy Testing
Imagine you are working for an insurance company that sells car insurance. Customers can choose different policy options based on:
Car Type: Sedan, SUV, Truck
Driver’s Age: Under 25, 25–50, Over 50
Coverage Type: Basic, Standard, Premium
If we tested every possible combination, we would have:
3 × 3 × 3 = 27 test cases!
This is just for three factors. If we add more, such as driving history, location, or accident records, the number of test cases grows exponentially, making full testing impossible.
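To make that growth concrete, here is a tiny sketch (my own, not from any testing tool) that simply enumerates the exhaustive set:

```java
public class PolicyCombinations {
    public static void main(String[] args) {
        String[] carTypes  = {"Sedan", "SUV", "Truck"};
        String[] ageGroups = {"Under 25", "25-50", "Over 50"};
        String[] coverages = {"Basic", "Standard", "Premium"};

        int count = 0;
        for (String car : carTypes) {
            for (String age : ageGroups) {
                for (String coverage : coverages) {
                    count++;
                    System.out.println(count + ": " + car + ", " + age + ", " + coverage);
                }
            }
        }
        // Prints 27 combinations; each additional three-valued factor multiplies this by 3.
    }
}
```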
So how can we test efficiently while ensuring that all critical scenarios are covered? That’s where Pairwise and Combinatorial Testing come in!
What is Combinatorial Testing?
Combinatorial Testing is a technique that selects test cases based on different input combinations. Instead of testing all possible inputs, it chooses a smaller set that still covers key interactions between variables.
Example: Combinatorial Testing for Insurance Policies
Instead of testing all 27 cases, we can use combinatorial testing to reduce the number of test cases while still covering important interactions.
A possible set of test cases could be:
| Test Case | Car Type | Driver’s Age | Coverage Type |
|-----------|----------|--------------|---------------|
| 1 | Sedan | Under 25 | Basic |
| 2 | SUV | 25–50 | Standard |
| 3 | Truck | Over 50 | Premium |
| 4 | Sedan | 25–50 | Premium |
| 5 | SUV | Over 50 | Basic |
| 6 | Truck | Under 25 | Standard |
This method reduces the number of test cases while ensuring that each factor appears in multiple meaningful combinations.
What is Pairwise Testing?
Pairwise Testing is a type of combinatorial testing where all possible pairs of input values are tested at least once. Research has shown that most defects in software are caused by the interaction of just two variables, so testing all pairs ensures good coverage with fewer test cases.
Example: Pairwise Testing for Insurance Policies
Instead of testing all combinations, we can create a smaller set where every pair of values appears at least once:
| Test Case | Car Type | Driver’s Age | Coverage Type |
|-----------|----------|--------------|---------------|
| 1 | Sedan | Under 25 | Basic |
| 2 | Sedan | 25–50 | Standard |
| 3 | Sedan | Over 50 | Premium |
| 4 | SUV | Under 25 | Standard |
| 5 | SUV | 25–50 | Premium |
| 6 | SUV | Over 50 | Basic |
| 7 | Truck | Under 25 | Premium |
| 8 | Truck | 25–50 | Basic |
| 9 | Truck | Over 50 | Standard |
Here, every pair of values across (Car Type, Driver’s Age), (Car Type, Coverage Type), and (Driver’s Age, Coverage Type) appears at least once. This means we cover all the important two-way interactions with just 9 test cases instead of 27!
Permutations and Combinations in Testing
To understand combinatorial testing better, we need to understand permutations and combinations. These are ways of arranging or selecting elements from a set.
What is a Combination?
A combination is a selection of elements where order does not matter. The formula for combinations is: C(n, r) = n! / [r! * (n – r)!]
where:
n is the total number of items
r is the number of selected items
! (factorial) means multiplying all whole numbers from that number down to 1
Example of Combination in Insurance
If an insurance company wants to offer 3 different discounts from a list of 5 available discounts, the number of ways to choose these discounts is: C(5, 3) = 5! / (3! * (5 − 3)!) = 120 / (6 * 2) = 10
So, there are 10 different ways to choose 3 discounts.
What is a Permutation?
A permutation is an arrangement of elements where order matters. The formula for permutations is: P(n, r) = n! / (n – r)!
where:
n is the total number of items
r is the number of selected items
Example of Permutation in Insurance
If an insurance company wants to assign 3 priority levels (High, Medium, Low) to 5 claims, the number of ways to arrange these claims is: P(5, 3) = 5! / (5 − 3)! = 120 / 2 = 60
So, there are 60 different ways to assign the three priority levels across the 5 claims.
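If you want to verify those numbers yourself, a small helper like this (plain Java, nothing specific to any testing tool) reproduces both formulas:

```java
public class CountingExamples {

    static long factorial(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    static long combinations(int n, int r) {
        return factorial(n) / (factorial(r) * factorial(n - r));
    }

    static long permutations(int n, int r) {
        return factorial(n) / factorial(n - r);
    }

    public static void main(String[] args) {
        System.out.println(combinations(5, 3)); // 10 ways to choose 3 of the 5 discounts
        System.out.println(permutations(5, 3)); // 60 ways to rank 3 of the 5 claims
    }
}
```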
Why Use Pairwise and Combinatorial Testing?
Saves Time and Effort – Testing fewer cases while maintaining coverage.
Covers Critical Scenarios – Ensures every important combination is tested.
Finds Defects Faster – Most bugs are caused by two interacting factors, so pairwise testing helps detect them efficiently.
Reduces Costs – Fewer test cases mean lower testing costs and faster releases.
When Should You Use These Techniques?
When a system has many input variables
When full exhaustive testing is impractical
When you need to find bugs quickly with limited resources
When testing insurance, finance, healthcare, and other complex systems
Tools for Pairwise and Combinatorial Testing
To make the process easier, you can use tools like:
PICT (Pairwise Independent Combinatorial Testing Tool) – Free from Microsoft
Hexawise – A combinatorial test design tool
ACTS (Automated Combinatorial Testing for Software) – Developed by NIST
These tools help generate optimized test cases automatically based on pairwise and combinatorial principles.
Conclusion
Pairwise and Combinatorial Testing are powerful techniques that allow testers to find defects efficiently without having to test every possible combination. They save time, reduce costs, and improve software quality.
Next time you’re dealing with multiple input variables, try using Pairwise or Combinatorial Testing to make your testing smarter and more effective!
The blog post discusses why developers should use API testing and how it is becoming increasingly important, particularly as microservices architecture gains popularity. This architecture requires application components to work independently, each with its own data storage and operations. As a result, individual components can be updated quickly without disrupting the whole system, allowing consumers to continue using the application without interruption.
Most microservices are based on application programming interfaces (APIs), which specify how to connect with them. APIs usually use REST calls over HTTP to simplify data sharing. Despite this, many testers still rely on user interface (UI) testing, particularly using the popular Selenium automation tool. While UI testing is required to ensure interactive functioning, API testing is more efficient and dependable. It enables testers to edit information in real time and detect flaws early in the development process, even before the user interface is constructed. API testing is also important for identifying security flaws.
To effectively test APIs, it is critical to understand the fundamentals. API requests are typically REST calls that retrieve or update data in a database. Each REST request consists of an HTTP verb (which specifies the action), a URL (which indicates the target), HTTP headers (which provide additional information to the server), and a request body (which contains the data, usually in JSON or XML). Common HTTP methods are GET (retrieving a record), POST (creating a new record), PUT (replacing a record), PATCH (partially updating a record), and DELETE (removing a record). The URL specifies which data is affected, whereas the request body applies to actions such as POST, PUT, and PATCH.
When a REST request is made, the server responds with HTTP headers defining the response, a response code indicating if the request was successful, and, in certain cases, a response body containing extra data. The response codes are categorized as follows: 200-level codes represent success, 400-level codes indicate client-side issues, and 500-level codes signify server-side faults.
To effectively test APIs, testers must first understand the types of REST queries supported by the API and any limitations on their use. Developers can use tools like Swagger to document their APIs. Testers should ask clarifying questions about available endpoints, HTTP methods, authorization requirements, needed data, validation limits, and expected response codes.
API testing often begins with creating requests via a user-friendly tool like Postman, which allows for easy viewing of results. The initial tests should focus on “happy paths,” or typical user interactions. These tests should include assertions to ensure that the response code is proper and that the delivered data is accurate. Negative tests should then be run to confirm that the application handles problems correctly, such as erroneous HTTP verbs, missing headers, or illegal requests.
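Here is a minimal sketch of what such a happy-path check could look like in code, assuming JUnit 5 and Java’s built-in HttpClient; the endpoint URL and the JSON field it asserts on are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class UserApiHappyPathTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void gettingAnExistingUserReturns200AndItsData() throws Exception {
        // GET an existing record: the classic happy path described above.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users/42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Assert on the response code and on the returned data.
        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("\"id\":42"));
    }
}
```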
Finally, the blog underlines the necessity of API testing and encourages engineers to transition from UI testing to API testing. This shift enables faster and more reliable testing, which aids in the detection of data manipulation issues and improves security.
Path Testing is a structural testing method used in software engineering to design test cases by analyzing the control flow graph of a program. This method helps ensure thorough testing by focusing on linearly independent paths of execution within the program. Let’s dive into the key aspects of path testing and how it can benefit your software development process.
The Path Testing Process
Control Flow Graph: Begin by drawing the control flow graph of the program. This graph represents the program’s code as nodes (each representing a specific instruction or operation) and edges (depicting the flow of control from one instruction to the next). It’s the foundational step for path testing.
Cyclomatic Complexity: Calculate the cyclomatic complexity of the program using McCabe’s formula: C = E − N + 2P, where E is the number of edges, N is the number of nodes, and P is the number of connected components. This complexity measure indicates the number of independent paths in the program.
Identify All Possible Paths: Create a set of all possible paths within the control flow graph. The cardinality of this set should equal the cyclomatic complexity, ensuring that all unique execution paths are accounted for.
Develop Test Cases: For each path identified, develop a corresponding test case that covers that particular path. This ensures comprehensive testing by covering all possible execution scenarios.
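As a small illustration of that last step (my own example, assuming JUnit 5), a method with cyclomatic complexity 2 has two linearly independent paths, and each path gets its own test case:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ShippingCostTest {

    // Hypothetical method under test: one decision node gives two independent paths.
    static double shippingCost(double orderTotal) {
        if (orderTotal >= 50.0) {
            return 0.0;       // path 1: the free-shipping branch
        }
        return 5.99;          // path 2: the standard branch
    }

    @Test
    void ordersOverThresholdShipFree() {     // covers path 1
        assertEquals(0.0, shippingCost(75.0));
    }

    @Test
    void smallOrdersPayStandardShipping() {  // covers path 2
        assertEquals(5.99, shippingCost(20.0));
    }
}
```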
Path Testing Techniques
Control Flow Graph: The initial step is to create a control flow graph, where nodes represent instructions and edges represent the control flow between instructions. This visual representation helps in identifying the structure and flow of the program.
Decision to Decision Path: Break down the control flow graph into smaller paths between decision points. By isolating these paths, it’s easier to analyze and test the decision-making logic within the program.
Independent Paths: Identify paths that are independent of each other, meaning they cannot be replicated or derived from other paths in the graph. This ensures that each path tested is unique, providing more thorough coverage.
Advantages of Path Testing
Path Testing offers several benefits that make it an essential technique in software engineering:
Reduces Redundant Tests: By focusing on unique execution paths, path testing minimizes redundant test cases, leading to more efficient testing.
Improves Test Case Design: Emphasizing the program’s logic and control flow helps in designing more effective and relevant test cases.
Enhances Software Quality: Comprehensive branch coverage ensures that different parts of the code are tested thoroughly, leading to higher software quality and reliability.
Challenges of Path Testing
While path testing is advantageous, it does come with its own set of challenges:
Requires Understanding of Code Structure: To effectively perform path testing, a solid understanding of the program’s code and structure is essential.
Increases with Code Complexity: As the complexity of the code increases, the number of possible paths also increases, making it challenging to manage and test all paths.
May Miss Some Conditions: There is a possibility that certain conditions or scenarios might not be covered if there are errors or omissions in identifying the paths.
Conclusion
Path Testing is a valuable technique in software engineering that ensures thorough coverage of a program’s execution paths. By focusing on unique and independent paths, this method helps reduce redundant tests and improve overall software quality. However, it requires a deep understanding of the code and may become complex with larger programs. Embracing path testing can lead to more robust and reliable software, ultimately benefiting both developers and end-users.
I have learned a lot about Boundary Value Testing and Equivalence Class Testing these past few weeks. Equivalence class testing can be divided into two categories: normal and robust. The best way I can explain this is through an example. Let’s say you have a favorite shirt, and you lose it. You would have to look for it, but where? Under the normal method you would look in normal, or valid, places like under your bed, in your closet, or in the dresser. Using the robust method, you would look in those usual spots but also include unusual ones. For example, you would look under your bed but then also under the kitchen table. You are looking in spots where you should find a shirt (valid) but also in spots where you should not find a shirt (invalid). Now, in equivalence class testing, robust and normal can each be part of two other categories: weak and strong. Going back to the shirt example, a weak search would have you looking in a few spots, but a strong one would have you look everywhere. To summarize, a weak normal equivalence class test would have you look in a few usual spots, and a strong normal equivalence class test would have you look in a lot of usual spots. The weak robust and strong robust equivalence class tests act similarly to the previous two, but they also have you look in unusual spots.
Boundary value testing casts a smaller net when it comes to testing. It is similar to equivalence class testing but it does not include weak and strong testing. It does have nominal and robust testing. It also has worst-case testing which is unique to boundary testing. I don’t know much about it, so I looked online.
Worst-case testing removes the single fault assumption. This means more than one fault can be causing failures, which leads to more tests. It can be robust or normal. It is more comprehensive than regular boundary value testing because of its coverage: while normal boundary value testing results in 4n + 1 test cases, normal worst-case testing results in 5^n test cases, one for every combination of the five boundary values of each of the n variables. Think of worst-case testing as putting a magnifying glass on something: from afar you only see one thing, but up close you can see that there is a lot going on. As a result, worst-case testing is used in situations that require a higher degree of testing.
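To see where those counts come from, here is a small sketch with two hypothetical variables; it enumerates the worst-case combinations of the five boundary values for each variable:

```java
public class WorstCaseBoundaryValues {
    public static void main(String[] args) {
        // Hypothetical variable ranges: x in [1, 10], y in [1, 100].
        int[] xValues = {1, 2, 5, 9, 10};    // min, min+1, nominal, max-1, max
        int[] yValues = {1, 2, 50, 99, 100};

        int count = 0;
        for (int x : xValues) {
            for (int y : yValues) {
                count++;
                System.out.println(count + ": x=" + x + ", y=" + y);
            }
        }
        // Prints 5^2 = 25 worst-case test inputs, versus 4(2) + 1 = 9 inputs
        // for normal boundary value testing of the same two variables.
    }
}
```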
I have learned a lot in these past few weeks. I have learned about boundary testing and how it differs when it is robust or normal. I have learned about equivalence class testing and how it varies when it is a combination of weak, normal, robust or strong. I have also learned about edge and worst-case testing. This is all very interesting.
This week, in my last class, we did a POGIL activity on Equivalence Class Testing (ECT). For this week’s source, I watched a YouTube video titled “Equivalence Class Testing Explained,” which covers the essentials of this black-box testing method.
The host of the video defines ECT as a technique for partitioning input data into equivalence classes: partitions whose inputs are expected to yield similar results. Testing just one value per class reduces redundant cases without sacrificing coverage. To demonstrate this, the presenter tested a function that accepts integers between 1 and 100. The classes in this example are an invalid lower class (≤ 0), an invalid upper class (≥ 101), and a valid class (1–100). The video also emphasized boundary value testing, in which values like 0, 1, 100, and 101 are used to catch common problems at partition boundaries.
I chose this video because our course covered ECT and I wanted more information about the topic. The course textbook was difficult to follow, and while the class activity made me work through the material, the video clarified it better for me. Its visual illustrations and step-by-step discussion clarified the practical application of ECT. The speaker’s observation about maintaining a balance between being thorough and being efficient resonated with me, especially after spending hours writing duplicate test cases for a recent project.
Before watching, I thought that thorough testing had to cover all possible inputs. The video rebutted this by demonstrating how ECT reduces effort without losing effectiveness, and I realized that my previous method of testing each edge case individually was not sustainable. Another fascinating point was the difference between valid and invalid classes. In a previous project I had neglected how the system handled bad data, dealing primarily with “correct” inputs. After watching the video’s demonstration, I realize how crucial both kinds of testing are for ensuring robustness. Going forward, I will adopt this approach in my future projects where it fits.
My perception of testing has changed because of this video, from a boring chore to a sensible activity. It serves the needs of our course directly by promoting efficient, scalable engineering practices. With ECT I can create fewer, yet stronger, tests, and that will surely help me as a software developer. Equivalence class testing is a tool for smarter problem-solving, not just theory, and I want to keep practicing it.