Category Archives: CS-443

Week 8: Path Testing

This week, our class learned about path testing. It is a white box testing method (meaning tests are designed from the internal structure of the source code, not just its specification) that makes sure every execution path is exercised and helps to create an outline for writing test cases. Each line of code is represented by a node and each possible transfer of control from one line to another is represented by a directed edge. Test cases are written for each possible path the program's execution can take.

For example, if we are testing the code block below, a path testing diagram can be created, also shown below. Each node represents a line of code and each directed edge represents where execution goes next, depending on the conditions. Test cases are written for each condition: running through the while loop and leaving when value is more than 5, or bypassing the loop entirely if value starts as more than 5.
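The original post's screenshot of the code and diagram is not reproduced here, but a minimal sketch of the kind of loop described (the method and variable names are my own assumptions) might look like:

```java
public class PathExample {
    /** Increments value until it exceeds 5; two paths exist:
        skip the loop entirely, or iterate until value > 5. */
    public static int incrementToLimit(int value) {
        while (value <= 5) {      // decision node: loop entry/exit
            value = value + 1;    // loop body node
        }
        return value;             // exit node
    }
}
```

A test case for each path would call the method once with a starting value of 5 or less (the loop runs) and once with a value above 5 (the loop is bypassed).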

I wanted to learn more about path testing, so I did some research and found a blog that mentioned cyclomatic complexity. Cyclomatic complexity is a number that measures how complex code is based on its nodes, edges, and connected components; it equals the number of linearly independent paths through the code. This number relates to how many tests you need to run, but is not always the same number. The cyclomatic complexity of the example above would be (5 − 5) + 2(1) = 2.

Cyclomatic Complexity = Edges – Nodes + 2(Number of Connected Components)

The blog also explores the advantages and disadvantages of path based testing. Some advantages are thorough testing of all paths, testing for errors in loops and at branching points, and ensuring any potential bugs in pathing are found. Some disadvantages are that it fails to test input conditions or runtime behavior, that a lot of tests need to be written to cover every edge and path, and that the number of test cases grows exponentially as code becomes more complex.

Another exercise we did in class was condensing nodes that do not branch. In the example above, nodes 2 and 3 can be condensed into one node. This is because there is no alternative path that can be taken between the nodes: if line 2 is run, line 3 is always run right after, no matter what number value is. Condensing nodes would be helpful in slightly more complex programs to make the diagram more readable, though if you are working with a program of only a couple hundred lines, the savings seem negligible.

When I am writing tests for a program in the future, I would probably choose a more time-conscious method. Cyclomatic complexity is a good and useful metric to have, but basing test cases off of the path testing diagram does not seem practical for complex code under tight time constraints.

Blog post referenced: https://www.testbytes.net/blog/path-coverage-testing/

From the blog CS@Worcester – ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Mastering Software Testing: The Magic of Equivalence Class Testing

If you’re like me, getting into software testing might feel overwhelming at first. There are countless methods and techniques, each with its own purpose, and it’s easy to feel lost. But when I first learned about Equivalence Class Testing, something clicked for me. It’s a simple and efficient way to group similar test cases, and once you get the hang of it, you’ll see how much time and effort it can save.

So, what exactly is Equivalence Class Testing? Essentially, it’s a method that helps us divide the input data for a program into different categories, or equivalence classes, that are expected to behave the same way when tested. Instead of testing every single possible input value, you select one or two values from each class that represent the rest. It’s like saying, “if this value works, the others in this group will probably work too.” This approach helps you avoid redundancy and keeps your testing efficient and focused.

Now, why should you care about Equivalence Class Testing? Well, let me give you an example. Imagine you’re writing a program that processes numbers between 1 and 1000. It would be impossible (and impractical!) to test all 1000 values, right? With Equivalence Class Testing, you can group the numbers into a few categories, like numbers in the lower range (1-200), the middle range (201-800), and the upper range (801-1000). You then pick one or two values from each group to test, confident that those values will tell you how the whole range behaves. It’s a major time-saver.

When I started using this method, I realized that testing every possible input isn’t just unnecessary—it’s counterproductive. Instead, I learned to focus on representative values, which allowed me to be much more efficient. For instance, let’s say you’re testing whether a student is eligible to graduate based on their GPA and number of credits. You could create equivalence classes for the GPA values: below 2.0, which would likely indicate the student isn’t ready to graduate; between 2.0 and 4.0, which might be acceptable; and anything outside the 0.0 to 4.0 range, which is invalid. Testing just one GPA value from each class will give you a pretty good sense of whether your function is working properly without overloading you with unnecessary cases.

Another thing I love about Equivalence Class Testing is that it naturally leads into both Normal and Robust Testing. Normal testing focuses on valid inputs—values that your program should accept and process correctly. Robust testing, on the other hand, checks how your program handles invalid inputs. For example, in our GPA scenario, testing GPAs like 2.5 or 3.8 would be normal testing, but testing values like -1 or 5 would fall under robust testing. Both are essential for making sure your program is strong and can handle anything users throw at it.
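The GPA scenario can be sketched in code (the class name, method name, and the choice to throw an exception for out-of-range GPAs are all my own assumptions): normal testing feeds it valid GPAs, while robust testing feeds it invalid ones.

```java
public class GraduationEligibility {
    /** Returns true when gpa is in the valid range [0.0, 4.0]
        and meets the assumed 2.0 graduation cutoff. */
    public static boolean isEligible(double gpa) {
        if (gpa < 0.0 || gpa > 4.0) {
            // Robust testing targets this branch with inputs like -1 or 5.
            throw new IllegalArgumentException("GPA out of range: " + gpa);
        }
        return gpa >= 2.0;
    }
}
```

One representative value per class (say 1.5, 2.5, and -1) exercises every branch without enumerating all possible GPAs.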

Lastly, when I first heard about Weak and Strong Equivalence Class Testing, I was a bit confused. But the difference is straightforward. Weak testing means you’re testing just one value from each equivalence class at a time for a single variable. On the other hand, Strong testing means you’re testing combinations of values from multiple equivalence classes for different variables. The more variables you have, the more comprehensive your tests will be, but it can also get more complex. I usually start with weak testing and move into strong testing when I need to be more thorough.
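The weak/strong distinction can be sketched generically (an illustrative helper, not a standard API): weak testing picks one representative per class per row, so the number of tests equals the largest class count, while strong testing takes the full cross product of representatives.

```java
import java.util.ArrayList;
import java.util.List;

public class EquivalenceTestSets {
    /** Weak testing: one row per representative, cycling through classes,
        giving max(class counts) test cases. */
    static List<List<String>> weak(List<List<String>> classesPerVariable) {
        int rows = 0;
        for (List<String> c : classesPerVariable) rows = Math.max(rows, c.size());
        List<List<String>> tests = new ArrayList<>();
        for (int i = 0; i < rows; i++) {
            List<String> row = new ArrayList<>();
            for (List<String> c : classesPerVariable) row.add(c.get(i % c.size()));
            tests.add(row);
        }
        return tests;
    }

    /** Strong testing: every combination of representatives (cross product). */
    static List<List<String>> strong(List<List<String>> classesPerVariable) {
        List<List<String>> combos = new ArrayList<>();
        combos.add(new ArrayList<>());
        for (List<String> classes : classesPerVariable) {
            List<List<String>> next = new ArrayList<>();
            for (List<String> combo : combos) {
                for (String rep : classes) {
                    List<String> extended = new ArrayList<>(combo);
                    extended.add(rep);
                    next.add(extended);
                }
            }
            combos = next;
        }
        return combos;
    }
}
```

For the GPA example with three GPA classes and two credit classes, weak testing yields 3 rows while strong testing yields 3 × 2 = 6.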

Overall, learning Equivalence Class Testing has made my approach to software testing more strategic and manageable. It’s a method that makes sense of the chaos and helps me feel more in control of my testing process. If you’re new to testing, or just looking for ways to make your tests more efficient, I highly recommend giving this method a try. You’ll save time, energy, and still get great results.

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.

Understanding Pairwise and Combinatorial Testing

Hello everyone, and welcome back to my weekly blog post! This week, we’re diving into an essential software testing technique: Pairwise and Combinatorial Testing. These methods help testers create effective test cases without needing to check every single possible combination of inputs, much like most of the test case selection methods we’ve learned.

To make things more relatable, let’s start with a real-life example from the insurance industry.

A Real-Life Problem: Insurance Policy Testing

Imagine you are working for an insurance company that sells car insurance. Customers can choose different policy options based on:

  1. Car Type: Sedan, SUV, Truck
  2. Driver’s Age: Under 25, 25–50, Over 50
  3. Coverage Type: Basic, Standard, Premium

If we tested every possible combination, we would have:

3 × 3 × 3 = 27 test cases!

This is just for three factors. If we add more, such as driving history, location, or accident records, the number of test cases grows exponentially, making full testing impossible.

So how can we test efficiently while ensuring that all critical scenarios are covered?
That’s where Pairwise and Combinatorial Testing come in!

What is Combinatorial Testing?

Combinatorial Testing is a technique that selects test cases based on different input combinations. Instead of testing all possible inputs, it chooses a smaller set that still covers key interactions between variables.

Example: Combinatorial Testing for Insurance Policies

Instead of testing all 27 cases, we can use combinatorial testing to reduce the number of test cases while still covering important interactions.

A possible set of test cases could be:

Test Case | Car Type | Driver’s Age | Coverage Type
1 | Sedan | Under 25 | Basic
2 | SUV | 25–50 | Standard
3 | Truck | Over 50 | Premium
4 | Sedan | 25–50 | Premium
5 | SUV | Over 50 | Basic
6 | Truck | Under 25 | Standard

This method reduces the number of test cases while ensuring that each factor appears in multiple meaningful combinations.

What is Pairwise Testing?

Pairwise Testing is a type of combinatorial testing where all possible pairs of input values are tested at least once. Research has shown that most defects in software are caused by the interaction of just two variables, so testing all pairs ensures good coverage with fewer test cases.

Example: Pairwise Testing for Insurance Policies

Instead of testing all combinations, we can create a smaller set where every pair of values appears at least once:

Test Case | Car Type | Driver’s Age | Coverage Type
1 | Sedan | Under 25 | Basic
2 | Sedan | 25–50 | Standard
3 | Sedan | Over 50 | Premium
4 | SUV | Under 25 | Standard
5 | SUV | 25–50 | Premium
6 | SUV | Over 50 | Basic
7 | Truck | Under 25 | Premium
8 | Truck | 25–50 | Basic
9 | Truck | Over 50 | Standard

Here, every pair (Car Type, Driver’s Age), (Car Type, Coverage Type), and (Driver’s Age, Coverage Type) appears at least once. For three factors with three values each, each pair of factors has 3 × 3 = 9 value pairs and each test case covers only one of them, so nine test cases is the minimum. This means we cover all important two-way interactions with just 9 test cases instead of 27!
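Pair coverage can be checked mechanically. The sketch below (class name and the particular nine-row test set are my own, built as an orthogonal-array-style arrangement) counts the distinct value pairs each pair of factors contributes; full pairwise coverage means 9 distinct pairs per factor pair.

```java
import java.util.HashSet;
import java.util.Set;

public class PairwiseCoverageCheck {
    // A candidate nine-row pairwise set: {car type, driver's age, coverage}.
    static final String[][] TESTS = {
        {"Sedan", "Under 25", "Basic"},    {"Sedan", "25-50", "Standard"},
        {"Sedan", "Over 50", "Premium"},   {"SUV",   "Under 25", "Standard"},
        {"SUV",   "25-50", "Premium"},     {"SUV",   "Over 50", "Basic"},
        {"Truck", "Under 25", "Premium"},  {"Truck", "25-50", "Basic"},
        {"Truck", "Over 50", "Standard"}
    };

    /** Counts distinct (factor a, factor b) value pairs seen across the rows. */
    static int coveredPairs(int a, int b) {
        Set<String> seen = new HashSet<>();
        for (String[] row : TESTS) {
            seen.add(row[a] + "|" + row[b]);
        }
        return seen.size();
    }
}
```

Running coveredPairs for factor pairs (0,1), (0,2), and (1,2) returns 9 each, confirming every two-way interaction is exercised.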

Permutations and Combinations in Testing

To understand combinatorial testing better, we need to understand permutations and combinations. These are ways of arranging or selecting elements from a set.

What is a Combination?

A combination is a selection of elements where order does not matter. The formula for combinations is: C(n, r) = n! / [r! * (n – r)!]

where:

  • n is the total number of items
  • r is the number of selected items
  • ! (factorial) means multiplying all whole numbers from the given number down to 1

Example of Combination in Insurance

If an insurance company wants to offer 3 different discounts from a list of 5 available discounts, the number of ways to choose these discounts is: C(5, 3) = 5! / (3! * (5 − 3)!) = 120 / (6 × 2) = 10

So, there are 10 different ways to choose 3 discounts.


What is a Permutation?

A permutation is an arrangement of elements where order matters. The formula for permutations is: P(n, r) = n! / (n – r)!

where:

  • n is the total number of items
  • r is the number of selected items

Example of Permutation in Insurance

If an insurance company wants to assign 3 priority levels (High, Medium, Low) to 5 claims, the number of ways to arrange these claims is: P(5, 3) = 5! / (5 − 3)! = 120 / 2 = 60

So, there are 60 different ways to assign the three priority levels to 3 of the 5 claims.
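Both formulas can be verified with a small sketch (the class name is made up for illustration):

```java
public class Counting {
    static long factorial(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) result *= i;
        return result;
    }

    /** C(n, r) = n! / (r! * (n - r)!) -- order does not matter. */
    static long combinations(int n, int r) {
        return factorial(n) / (factorial(r) * factorial(n - r));
    }

    /** P(n, r) = n! / (n - r)! -- order matters. */
    static long permutations(int n, int r) {
        return factorial(n) / factorial(n - r);
    }
}
```

Counting.combinations(5, 3) gives 10 and Counting.permutations(5, 3) gives 60, matching the two insurance examples above.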


Why Use Pairwise and Combinatorial Testing?

  1. Saves Time and Effort – Testing fewer cases while maintaining coverage.
  2. Covers Critical Scenarios – Ensures every important combination is tested.
  3. Finds Defects Faster – Most bugs are caused by two interacting factors, so pairwise testing helps detect them efficiently.
  4. Reduces Costs – Fewer test cases mean lower testing costs and faster releases.

When Should You Use These Techniques?

  • When a system has many input variables
  • When full exhaustive testing is impractical
  • When you need to find bugs quickly with limited resources
  • When testing insurance, finance, healthcare, and other complex systems

Tools for Pairwise and Combinatorial Testing

To make the process easier, you can use tools like:

  • PICT (Pairwise Independent Combinatorial Testing Tool) – Free from Microsoft
  • Hexawise – A combinatorial test design tool
  • ACTS (Automated Combinatorial Testing for Software) – Developed by NIST

These tools help generate optimized test cases automatically based on pairwise and combinatorial principles.

Conclusion

Pairwise and Combinatorial Testing are powerful techniques that allow testers to find defects efficiently without having to test every possible combination. They save time, reduce costs, and improve software quality.

Next time you’re dealing with multiple input variables, try using Pairwise or Combinatorial Testing to make your testing smarter and more effective!

From the blog Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

Why API Testing Matters: Ensuring Robust Software Performance

The blog post discusses why developers should use API testing and how it is becoming increasingly important, particularly as microservices architecture gains popularity. In this architecture, application components operate independently, each with its own data storage and operations. As a result, software components can be updated quickly without disrupting the whole system, allowing consumers to continue using the application without interruption.

Most microservices are based on application programming interfaces (APIs), which specify how to connect with them. APIs usually use REST calls over HTTP to simplify data sharing. Despite this, many testers still rely on user interface (UI) testing, particularly using the popular Selenium automation tool. While UI testing is required to ensure interactive functioning, API testing is more efficient and dependable. It enables testers to edit information in real time and detect flaws early in the development process, even before the user interface is constructed. API testing is also important for identifying security flaws.

To effectively test APIs, it is critical to understand the fundamentals. APIs are REST calls that retrieve or update data from a database. Each REST request consists of an HTTP verb (which specifies the action), a URL (which indicates the target), HTTP headers (which provide additional information to the server), and a request body (which contains the data, usually in JSON or XML). Common HTTP methods are GET (retrieving a record), POST (creating a new record), PUT (replacing a record), PATCH (partially updating a record), and DELETE (removing a record). The URL specifies which data is affected, whereas the request body applies to actions such as POST, PUT, and PATCH.

When a REST request is made, the server responds with HTTP headers defining the response, a response code indicating if the request was successful, and, in certain cases, a response body containing extra data. The response codes are categorized as follows: 200-level codes represent success, 400-level codes indicate client-side issues, and 500-level codes signify server-side faults.
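The response-code categories just described can be captured in a tiny helper, the kind of utility an API test suite might assert against (an illustrative sketch; the class and method names are my own):

```java
public class HttpResponseCategory {
    /** Maps an HTTP status code to the broad categories used in API testing. */
    static String categorize(int code) {
        if (code >= 200 && code < 300) return "success";
        if (code >= 400 && code < 500) return "client error";
        if (code >= 500 && code < 600) return "server error";
        return "other";
    }
}
```

A test would then assert, for example, that a happy-path GET yields a "success" code and an unauthorized request yields a "client error" code.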

To effectively test APIs, testers must first understand the types of REST queries supported by the API and any limitations on their use. Developers can use tools like Swagger to document their APIs. Testers should ask clarifying questions about available endpoints, HTTP methods, authorization requirements, needed data, validation limits, and expected response codes.

API testing often begins with creating requests via a user-friendly tool like Postman, which allows for easy viewing of results. The initial tests should focus on “happy paths,” or typical user interactions. These tests should include assertions to ensure that the response code is proper and that the delivered data is accurate. Negative tests should then be run to confirm that the application handles problems correctly, such as erroneous HTTP verbs, missing headers, or illegal requests.

Finally, the blog underlines the necessity of API testing and encourages engineers to transition from UI testing to API testing. This shift enables faster and more reliable testing, which aids in the detection of data manipulation issues and improves security.

Blog: https://simpleprogrammer.com/api-testing/

From the blog CS@Worcester – Matchaman10 by tam nguyen and used with permission of the author. All other rights reserved by the author.

Path Testing in Software Engineering

Path Testing is a structural testing method used in software engineering to design test cases by analyzing the control flow graph of a program. This method helps ensure thorough testing by focusing on linearly independent paths of execution within the program. Let’s dive into the key aspects of path testing and how it can benefit your software development process.

The Path Testing Process

  1. Control Flow Graph: Begin by drawing the control flow graph of the program. This graph represents the program’s code as nodes (each representing a specific instruction or operation) and edges (depicting the flow of control from one instruction to the next). It’s the foundational step for path testing.
  2. Cyclomatic Complexity: Calculate the cyclomatic complexity of the program using McCabe’s formula: E − N + 2P, where E is the number of edges, N is the number of nodes, and P is the number of connected components. This complexity measure indicates the number of independent paths in the program.
  3. Identify Independent Paths: Create a set of linearly independent paths within the control flow graph. The cardinality of this set should equal the cyclomatic complexity, ensuring that all unique execution paths are accounted for.
  4. Develop Test Cases: For each path identified, develop a corresponding test case that covers that particular path. This ensures comprehensive testing by covering all possible execution scenarios.

Path Testing Techniques

  • Control Flow Graph: The initial step is to create a control flow graph, where nodes represent instructions and edges represent the control flow between instructions. This visual representation helps in identifying the structure and flow of the program.
  • Decision to Decision Path: Break down the control flow graph into smaller paths between decision points. By isolating these paths, it’s easier to analyze and test the decision-making logic within the program.
  • Independent Paths: Identify paths that are independent of each other, meaning they cannot be replicated or derived from other paths in the graph. This ensures that each path tested is unique, providing more thorough coverage.

Advantages of Path Testing

Path Testing offers several benefits that make it an essential technique in software engineering:

  • Reduces Redundant Tests: By focusing on unique execution paths, path testing minimizes redundant test cases, leading to more efficient testing.
  • Improves Test Case Design: Emphasizing the program’s logic and control flow helps in designing more effective and relevant test cases.
  • Enhances Software Quality: Comprehensive branch coverage ensures that different parts of the code are tested thoroughly, leading to higher software quality and reliability.

Challenges of Path Testing

While path testing is advantageous, it does come with its own set of challenges:

  • Requires Understanding of Code Structure: To effectively perform path testing, a solid understanding of the program’s code and structure is essential.
  • Increases with Code Complexity: As the complexity of the code increases, the number of possible paths also increases, making it challenging to manage and test all paths.
  • May Miss Some Conditions: There is a possibility that certain conditions or scenarios might not be covered if there are errors or omissions in identifying the paths.

Conclusion

Path Testing is a valuable technique in software engineering that ensures thorough coverage of a program’s execution paths. By focusing on unique and independent paths, this method helps reduce redundant tests and improve overall software quality. However, it requires a deep understanding of the code and may become complex with larger programs. Embracing path testing can lead to more robust and reliable software, ultimately benefiting both developers and end-users.

All of this comes from:

Path Testing in Software Engineering – GeeksforGeeks

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.

Boundary, Equivalence, Edge and Worst Case

I have learned a lot about Boundary Value Testing and Equivalence Class Testing these past few weeks. Equivalence Class testing can be divided into two categories: normal and robust. The best way I can explain this is through an example. Let’s say you have a favorite shirt, and you lose it. You would have to look for it, but where? Under the normal method you would look in normal, or in a way valid, places like under your bed, in your closet, or in the dresser. Using the robust way, you would look in those usual spots but also include unusual spots. For example, you would look under your bed but then look under the kitchen table. You are looking in spots where you should find a shirt (valid) but also looking in spots where you should not find a shirt (invalid). Now, in equivalence class testing, robust and normal can be part of two other categories: weak and strong. Going back to the shirt example, a weak search would have you looking in a few spots, but a strong one would have you look everywhere. To summarize, a weak normal equivalence class test would have you look in a few usual spots. A strong normal equivalence class test would have you look in a lot of spots. Weak and strong robust equivalence class tests would act similarly to the earlier two, but they would also have you look in unusual spots.

Boundary value testing casts a smaller net when it comes to testing. It is similar to equivalence class testing but it does not include weak and strong testing. It does have nominal and robust testing. It also has worst-case testing which is unique to boundary testing. I don’t know much about it, so I looked online.

I used this site: Boundary Value Analysis

Worst-case testing removes the single fault assumption. This means it allows for more than one fault causing a failure, which leads to more tests. It can be robust or normal. It is more comprehensive than boundary testing due to its coverage. While normal boundary testing results in 4n+1 test cases, normal worst-case testing results in 5^n test cases, because it tests every combination of the five boundary values across all n variables. Think of worst-case testing as putting a magnifying glass on something. From afar you only see one thing, but up close you can see that there is a lot going on. This results in worst-case testing being used in situations that require a higher degree of testing.
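The gap between the two counts grows quickly with the number of variables. A small sketch comparing the standard formulas (4n+1 for nominal boundary value testing, 5^n for worst-case testing; class and method names are my own):

```java
public class BoundaryTestCounts {
    /** Nominal boundary value testing: 4n + 1 test cases for n variables. */
    static long boundaryCount(int n) {
        return 4L * n + 1;
    }

    /** Worst-case testing drops the single-fault assumption and takes
        every combination of the five boundary values per variable: 5^n. */
    static long worstCaseCount(int n) {
        long result = 1;
        for (int i = 0; i < n; i++) result *= 5;
        return result;
    }
}
```

With one variable the two match (5 tests each), but with three variables boundary testing needs 13 tests while worst-case testing needs 125.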

I have learned a lot in these past few weeks. I have learned about boundary testing and how it differs when it is robust or normal. I have learned about equivalence class testing and how it varies when it is a combination of weak, normal, robust or strong. I have also learned about edge and worst-case testing. This is all very interesting.

From the blog My Journey through Comp Sci by Joanna Presume and used with permission of the author. All other rights reserved by the author.

Equivalence Class Testing

Week 7 – 3/7/2025

This week, in my last class we had an activity for Equivalence Class Testing (ECT) as part of a POGIL activity. As my source for this week, I watched a YouTube video titled “Equivalence Class Testing Explained,” which covers the essentials of this black-box testing method.

The host of the video defines ECT as a technique for partitioning input data into equivalence classes: partitions whose inputs are expected to yield similar results. Because of this, testing one value per class is enough to reduce redundant cases without sacrificing coverage. To demonstrate, the presenter tested a function that takes in integers between 1 and 100. The classes in this example are invalid lower (≤0), valid (1–100), and invalid upper (≥101). The video also emphasized boundary value testing, in which values like 0, 1, 100, and 101 are used to check for common problems at partition boundaries.
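The function from the video might be sketched like this (my own reconstruction; the class and method names are assumptions), with one representative value per class plus the boundary values 0, 1, 100, and 101 as test inputs:

```java
public class InputPartition {
    /** Partitions an integer input into the three equivalence classes:
        invalid lower (<= 0), valid (1-100), invalid upper (>= 101). */
    static String classify(int value) {
        if (value <= 0) return "invalid lower";
        if (value >= 101) return "invalid upper";
        return "valid";
    }
}
```

Testing 0, 1, 100, and 101 hits both sides of each boundary with only four cases.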

I chose this video because our course covered ECT and I wanted more information about the topic; I found the course textbook difficult to follow. The class activity introduced me to the topic, but the video clarified it better for me. The video’s visual illustrations and step-by-step discussion clarified the practical application of ECT. The speaker’s observation about maintaining a balance between being thorough and being effective resonated with me, especially after spending hours writing duplicate test cases for a recent project.

Before watching, I thought that thorough testing had to test all possible inputs. The video rebutted this by demonstrating how ECT reduces effort without losing effectiveness. I understood that my previous method of testing each edge case individually was not practical. Another fascinating thing was the difference between valid and invalid classes. In a previous project I had neglected how the system handled wrong data, dealing primarily with “correct” inputs. After watching the video’s demonstration, I realize how crucial both kinds of testing are for ensuring robustness. In the future, I will adopt this approach in my projects where needed.

My perception of testing has changed because of this video, from a boring activity to a sensible one. It serves the needs of our course directly by teaching efficient, scalable engineering practices. With the help of ECT I can create fewer, yet stronger, tests, and that will surely help me as a software programmer. Equivalence class testing is a toolkit for wiser problem-solving, not just theory, and I want to keep practicing it.

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Combining Testing Methods

The blog post that I chose to write about this week is one that gives an overview of equivalence class and boundary analysis testing. The main reason why you would use these is to reduce the number of tests you run for a program while still testing full functionality and not sacrificing coverage. It does this by sectioning the range of inputs into different equivalency classes. Equivalency classes are groups of inputs that in theory should behave identically when put into the tested function. The blog then shows a helpful diagram showcasing what this looks like plotted on a number line. This way, tests will give better information by only testing the function where problems may arise and will detail the behavior of the function near edge cases better than other methods.

The blog post also details how you can represent the classes as functions themselves for where the inputs would be, for example, true, false, or valid, by defining ranges of values with interval notation. After then going over boundary test cases, the author explains how these two methods can be used together to efficiently test around the limits of the function behavior. The blog concludes with another example plotted on a table that shows how equivalence classes and boundary testing can be combined to use a minimum number of tests while also ensuring that you test the function at its most important parts where the process will change based on inputs.
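Combining the two methods as the blog describes, the representative inputs around a valid interval [min, max] can be generated mechanically (a sketch with assumed names; the seven values mirror robust boundary testing for a single variable: both invalid neighbors, both boundaries, their inner neighbors, and a nominal value):

```java
import java.util.List;

public class BoundaryValues {
    /** Test inputs around a valid interval [min, max]: one value from each
        invalid class, the boundaries, values just inside them, and a nominal value. */
    static List<Integer> around(int min, int max) {
        int nominal = min + (max - min) / 2;
        return List.of(min - 1, min, min + 1, nominal, max - 1, max, max + 1);
    }
}
```

For the interval [1, 100] this yields 0, 1, 2, 50, 99, 100, and 101: seven tests that probe every point where the function's behavior can change.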

I selected this blog to help refresh myself for the upcoming test about different testing methods and to reinforce what I had learned in class. I think one of the more important takeaways from this blog is the emphasis the author puts on combining the two methods, not just because they are two different methods but because together they strengthen the overall testing procedure; this will make me think about how new testing methods can be combined to create better and more efficient test cases. Demonstrating the testing in terms of models on number lines and as graphs helps visualize what is actually happening and why it works, similar to the models taught in class, but the added element of real numbers with example values helps demonstrate the importance of this kind of testing and how it can be useful in real-world situations. As an introductory post to the topic, and in my case a review, it works well, but from here I would like to look more into which combinations of testing methods work well together and which may not, as I learn more methods through the rest of the class.

https://www.testbench.com/blog/equivalence-class-partioning-and-limit-value-analysis/

From the blog CS@Worcester – Computer Science Blog by dzona1 and used with permission of the author. All other rights reserved by the author.

On Structuring and Managing Test Cases

 In this post, I’ll be discussing a recent article I came across on the TestRail website, which can be found here. The post interested me because it dives deep into the importance of organizing and managing test cases effectively, a topic that we have been covering closely in class. As someone who does a lot of tests in various stages, this article gave me some good notes about how proper test case management can streamline the testing process and reduce the risk of overlooked issues.

One of the key takeaways from the article was the concept of structuring test cases with clear, concise steps and expected outcomes. This was notable because I’ve often seen situations where poorly written test cases lead to confusion or unnecessary delays. The article emphasized that each test case should be easily understandable, even for someone unfamiliar with the project, which makes a lot of sense. Clear test cases not only make the process smoother for current testers, but they also provide better documentation for future test cycles. I’ve personally benefited from this approach, especially when revisiting a project after some time has passed, well-written test cases make it easier to pick up where I left off, and they can even give hints (though these shouldn’t be needed) as to what the code is intending to do, and where some logical boundaries may exist.

The article also discussed the importance of categorizing test cases based on their purpose—whether they’re functional, regression, or exploratory tests. This structure helps ensure that each test type is executed at the appropriate stage and that nothing gets missed. In my experience, this kind of organization is crucial, particularly for large-scale projects where test cases can easily become scattered. I’ve found that when I categorize my tests according to at least some standard, I’m able to prioritize them better and avoid redundant testing, ultimately saving time and effort. It’s a simple but effective way to maintain focus on what really matters. My personal default is to follow the code chronologically / in the order of execution, as that is what feels most natural to me.

Another point I appreciated was the article’s advice on using test management tools, like TestRail itself, to keep track of test cases, execution results, and bugs. Granted, they are going to try to sell their own software, but it is still notable. Managing test cases manually in a spreadsheet or document can quickly become cumbersome, especially as projects grow, and using a product or software to handle this for you can be very beneficial.

Overall, this article reaffirmed the importance of a well-organized approach to test case management. As I continue testing processes and software, I’ll be more mindful of how I structure, categorize, and track my test cases, ensuring that testing is as efficient and effective as possible.

From the blog Mr. Lancer 987's Blog by Mr. Lancer 987 and used with permission of the author. All other rights reserved by the author.

JUnit 5 Testing

Recently, I dove into unit testing with JUnit 5 as part of my software development journey. Unit testing helps ensure that individual parts of a program work correctly by writing small, focused tests. JUnit 5 is the framework that is used to write these tests for Java applications, and it makes the process simple and efficient.

First things first: JUnit 5 uses something called annotations to define test methods. The most important one is @Test, which marks a method as a test case. These methods test small units of code, like individual methods in a class, to make sure they return the expected results.

Here’s a simple example of a test method I wrote to check the area of a rectangle:

import static org.junit.jupiter.api.Assertions.assertEquals;

@Test
void testRectangleArea() {
    Rectangle r1 = new Rectangle(2, 3);
    int area = r1.getArea();
    assertEquals(6, area); // Checking if the area is correct
}

The idea is to write small test cases like this that check specific outputs; if something doesn’t match what you expect, JUnit will let you know right away.

The Structure of a Test Case

There are three simple steps you can follow for each test case:

Arrange: Set up the objects or data you are testing.

Act: Call the method you want to test.

Assert: Compare the result with what you expect using something called an “assertion.”

For example, here is another test to check if a rectangle is a square:

import static org.junit.jupiter.api.Assertions.assertFalse;

@Test
void testRectangleNotSquare() {
    Rectangle r1 = new Rectangle(2, 3);
    boolean isSquare = r1.isSquare();
    assertFalse(isSquare); // Checking if it’s not a square
}

In this case, using assertFalse helps to confirm that the rectangle is not a square.

Common JUnit Assertions

JUnit 5 offers several assertion methods, and I quickly got the hang of using them. Here are a few that I used the most:

  • assertEquals(expected, actual): Checks if two values are equal.
  • assertFalse(condition): Checks if a condition is false.
  • assertTrue(condition): Checks if a condition is true.
  • assertNull(object): Verifies if something is null.

These assertions make it easy to confirm whether a piece of code behaves as expected.

Managing Test Execution

One thing that surprised me was that test methods don’t run in any specific order by default. This means each test should be independent of the others, which encourages better organization. I also learned about lifecycle methods like @BeforeEach and @AfterEach, which allow you to run setup and cleanup code before and after each test case. For example, @BeforeEach can be used to initialize objects before each test:

@BeforeEach
void setup() {
    // Runs before each test, e.g. to create fresh objects under test
}

In conclusion, learning unit testing with JUnit 5 has been a great experience. It helps me write reliable code and catch bugs early. By writing small tests and using assertions, I can quickly confirm that my programs work as they should. JUnit 5 makes testing simple, and I look forward to improving my skills even more in the future!

If you’re new to testing like I was, JUnit 5 is definitely a great place to start!

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.