Category Archives: CS-443

Static vs Dynamic Testing

Source:
https://www.browserstack.com/guide/static-testing-vs-dynamic-testing

This article, titled “Static vs Dynamic Testing,” explains the differences between the two approaches and how each contributes to developing quality software. Static testing is testing performed without executing the application: code is read through manually in search of errors. A computer is not strictly required for this form of testing, since design documents containing the code can also be reviewed. This kind of testing is done before the code is executed, early in the development process. The benefits of static testing are that defects are found earlier in the process, it is usually more cost-effective than other testing techniques, it leads to more maintainable code, and it encourages collaboration between team members. However, some disadvantages are that certain issues cannot be found until the program or application actually runs, its effectiveness depends on the experience of the reviewers, and it usually has to be done alongside dynamic testing to uncover the remaining issues.

Dynamic testing involves giving an application input and analyzing the output. The code is compiled and executed in a run-time environment. This form of testing also relies on the expertise of the reviewers, since deep knowledge of the system is required to understand how and why it reacts to a given input. The advantages of dynamic testing are that it reveals runtime errors, memory leaks, and other issues that only surface during code execution, it helps verify that the software works as the developers intended, and it ensures that all parts of the system work together appropriately. However, some disadvantages are that it can be time-consuming, it may not cover all possible scenarios, and uncommon cases in the program can be difficult to exercise.
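To make the idea concrete, here is a minimal sketch of a dynamic test in JUnit 5. The add method is a hypothetical example of mine, not something from the article; the point is that the test runs the code with an input and checks the output at runtime.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    public class CalculatorTest {
        // Hypothetical method under test: returns the sum of two integers.
        static int add(int a, int b) {
            return a + b;
        }

        @Test
        public void addReturnsSumAtRuntime() {
            // Dynamic testing: supply an input, execute the code, check the output.
            assertEquals(5, add(2, 3));
        }
    }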

Overall, static and dynamic testing are each valuable in their own ways, and together they emphasize the importance of applying multiple kinds of testing to ensure an application works as intended. I chose this article because we discussed these topics in class, and I figured learning more about them would be beneficial.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Boundary, Equivalence, Edge and Worst Case

I have learned a lot about Boundary Value Testing and Equivalence Class Testing. Equivalence Class testing can be divided into two categories: normal and robust. The best way I can explain this is through an example. Let’s say you have a favorite shirt, and you lose it. You would have to look for it, but where? Under the normal method you would look in normal, or in a way valid, places like under your bed, in your closet, or in the dresser. Using the robust way, you would look in those usual spots but also include unusual spots. For example, you would look under your bed but then also under the kitchen table. You are looking in spots where you should find a shirt (valid) but also looking in spots where you should not find a shirt (invalid). Now, in equivalence class testing, robust and normal can each be a part of two other categories: weak and strong. Going back to the shirt example, a weak search would have you looking in a few spots, but a strong one would have you look everywhere. To summarize, a weak normal equivalence class test would have you look in a few usual spots, while a strong normal equivalence class test would have you look in many of them. The weak and strong robust equivalence class tests act similarly to the earlier two, but they also have you look in unusual spots. A code sketch of the idea follows.
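To connect the analogy to actual code, here is a minimal sketch of my own (a hypothetical isAdult method, not from the coursework). The valid ages form two classes, minors and adults; the robust test also looks where a valid age should not be found.

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    public class AgeClassesTest {
        // Hypothetical method under test: valid ages are 1-120; 18 and up is an adult.
        static boolean isAdult(int age) {
            if (age < 1 || age > 120) throw new IllegalArgumentException("invalid age");
            return age >= 18;
        }

        @Test
        public void weakNormal() {
            // One representative value from each valid class (the "usual spots").
            assertFalse(isAdult(10)); // minors class
            assertTrue(isAdult(35));  // adults class
        }

        @Test
        public void robust() {
            // Also look in spots where a valid age should NOT be found.
            assertThrows(IllegalArgumentException.class, () -> isAdult(0));
            assertThrows(IllegalArgumentException.class, () -> isAdult(121));
        }
    }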

Boundary value testing casts a smaller net when it comes to testing. It is similar to equivalence class testing, but it does not include weak and strong variants; it has nominal and robust testing instead. It also has worst-case testing, which is unique to boundary testing. I didn’t know much about worst-case testing, so I looked online.

I used this site: Boundary Value Analysis

Worst-case testing removes the single fault assumption. This means more than one fault may be causing failures, which leads to more tests. It can be robust or normal. It is more comprehensive than plain boundary testing due to its coverage: while normal boundary testing results in 4n+1 test cases, normal worst-case testing results in 5^n test cases, so for two variables that is 9 tests versus 25. Think of worst-case testing as putting a magnifying glass on something: from afar you only see one thing, but up close you can see that there is a lot going on. As a result, worst-case testing is used in situations that require a higher degree of testing. The sketch below shows where the 5^n count comes from.
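This is a small illustration of my own, not from the article: it enumerates worst-case test inputs for two variables over a hypothetical 1-100 range by combining the five standard boundary values (min, min+, nominal, max-, max) of each.

    public class WorstCaseEnumeration {
        public static void main(String[] args) {
            // Five standard boundary values for a variable ranging over 1..100:
            // min, min+, nominal, max-, max
            int[] values = {1, 2, 50, 99, 100};
            int count = 0;
            // Without the single fault assumption, every combination of the two
            // variables' boundary values is tested: 5 * 5 = 5^2 = 25 cases.
            for (int a : values) {
                for (int b : values) {
                    System.out.println("test case: a=" + a + ", b=" + b);
                    count++;
                }
            }
            System.out.println(count + " test cases"); // prints 25
        }
    }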

I have learned a lot: about boundary testing and how it differs when it is robust or normal, about equivalence class testing and how it varies as a combination of weak or strong with normal or robust, and about edge and worst-case testing. This is another step towards my coding career.

From the blog My Journey through Comp Sci by Joanna Presume and used with permission of the author. All other rights reserved by the author.

Test Doubles: Enhancing Testing Efficiency

When developing robust software systems, ensuring reliable and efficient testing is paramount. Yet, testing can become challenging when the System Under Test (SUT) depends on components that are unavailable, slow, or impractical to use in the testing environment. Enter Test Doubles—a practical solution to streamline testing and simulate dependent components.

What are Test Doubles?

In essence, Test Doubles are placeholders or “stand-ins” that replace real components (known as Depended-On Components, or DOCs) during tests. Much like a stunt double in a movie scene, Test Doubles emulate the behavior of the real components, enabling the SUT to function seamlessly while providing better control and visibility during testing.

The implementation of Test Doubles is tailored to the specific needs of a test. Rather than perfectly mimicking the DOC, they replicate its interface and critical functionalities required by the test. By doing so, Test Doubles make “impossible” tests feasible and expedite testing cycles.

Key Variations of Test Doubles

Test Doubles come in several forms, each designed to address distinct testing challenges (a minimal sketch of the first variation follows the list):

  1. Test Stub: Facilitates control of the SUT’s indirect inputs, enabling tests to explore paths that might not otherwise occur.
  2. Test Spy: Combines Stub functionality with the ability to record and verify outputs from the SUT for later evaluation.
  3. Mock Object: Focuses on output verification by setting expectations for the SUT’s interactions and validating them during the test.
  4. Fake Object: Offers simplified functionality compared to the real DOC, often used when the DOC is unavailable or unsuitable for the test environment.
  5. Dummy Object: Provides placeholder objects when the test or SUT does not require the DOC’s functionality.
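Here is a hand-rolled Test Stub in Java, a hypothetical clock example of mine in the spirit of the xUnit Patterns material rather than code from the article. The stub replaces the DOC so the test controls the SUT’s indirect input.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    public class GreeterTest {
        // The DOC's interface: normally backed by the real system clock.
        interface Clock {
            int currentHour();
        }

        // The SUT: its output depends indirectly on the Clock.
        static String greeting(Clock clock) {
            return clock.currentHour() < 12 ? "Good morning" : "Good afternoon";
        }

        @Test
        public void morningGreeting() {
            // Test Stub: a controlled stand-in for the clock that forces the
            // "morning" path, which the real clock would only allow sometimes.
            Clock stubClock = () -> 9;
            assertEquals("Good morning", greeting(stubClock));
        }
    }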

When to Use Test Doubles

Test Doubles are particularly valuable when:

  • Testing requirements exceed the capabilities of the real DOC.
  • Test execution is hindered by slow or inaccessible components.
  • Greater control over the test environment is necessary to assess specific scenarios.

That said, it’s crucial to balance the use of Test Doubles. Excessive reliance on them may lead to “Fragile Tests” that lack robustness and diverge from production environments. Therefore, teams should complement Test Doubles with at least one test using real DOCs to ensure alignment with production configurations.

Conclusion

Test Doubles are indispensable tools for efficient and effective software testing. By offering flexibility and enhancing control, they empower developers to navigate complex testing scenarios with ease. However, judicious use is key: striking the right balance ensures tests remain meaningful and closely aligned with real-world conditions.

This information comes from this article:
Test Double at XUnitPatterns.com

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.

Learning Boundary Value Analysis in Software Testing

Software testing is one of the most significant ways of ensuring that an application is reliable and efficient before deployment. Boundary Value Analysis (BVA) is one of the most powerful functional testing techniques, focusing on the boundary cases of a system: it finds potential defects that are apt to show themselves at the boundaries of input partitions.

What is Boundary Value Analysis?

Boundary Value Analysis is a black-box testing method that tests the boundary values of valid and invalid partitions. Instead of testing all possible values, testers focus on the minimum, maximum, and edge-case values, as these are the most error-prone. Defects tend to occur at the extremities of the input ranges rather than at arbitrary points within the range.

For example, if a system accepts values between 18 and 56, instead of testing all the values, testers would test the following:

Valid boundary values: 18, 19, 37, 55, 56

Invalid boundary values: 17 (below minimum) and 57 (above maximum)

By running these primary test cases, testers can catch boundary-related faults without unnecessarily repeating tests of in-between values.
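A minimal sketch of what those cases could look like in JUnit 5, assuming a hypothetical validator for the 18-56 range:

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    public class RangeBoundaryTest {
        // Hypothetical system under test: accepts values from 18 to 56 inclusive.
        static boolean accepts(int value) {
            return value >= 18 && value <= 56;
        }

        @Test
        public void validBoundaryValues() {
            // min, min+, nominal, max-, max
            for (int v : new int[] {18, 19, 37, 55, 56}) {
                assertTrue(accepts(v), "expected " + v + " to be accepted");
            }
        }

        @Test
        public void invalidBoundaryValues() {
            assertFalse(accepts(17)); // just below the minimum
            assertFalse(accepts(57)); // just above the maximum
        }
    }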

Implementing BVA: A Real-World Example

To represent BVA through an example, let us take a system processing dates under the following constraints:

Day: 1 to 31

Month: 1 to 12

Year: 1900 to 2000

Under the single fault assumption, where one variable is varied while the others are held at nominal values, test cases like the following can be written:

Boundary value checking for years (e.g., 1900, 1960, 2000)

Boundary value checking for days (e.g., 1, 31, invalid cases like 32)

Checking boundary values for months (i.e., 1, 12)

By limiting test cases to boundary values, we get strong test coverage with minimal test effort. A sketch of the enumeration follows.
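This is my own illustration of the idea, not code from the article: it enumerates the date cases by varying one variable at a time while the others stay at nominal values.

    public class DateBvaCases {
        public static void main(String[] args) {
            // Boundary values for each variable: min, min+, nominal, max-, max.
            int[] days   = {1, 2, 15, 30, 31};
            int[] months = {1, 2, 6, 11, 12};
            int[] years  = {1900, 1901, 1950, 1999, 2000};
            int nomDay = 15, nomMonth = 6, nomYear = 1950;

            // Single fault assumption: vary one variable, hold the others at nominal.
            for (int d : days)   System.out.println(d + "/" + nomMonth + "/" + nomYear);
            for (int m : months) System.out.println(nomDay + "/" + m + "/" + nomYear);
            for (int y : years)  System.out.println(nomDay + "/" + nomMonth + "/" + y);
            // 3 x 5 = 15 printed cases; the all-nominal date appears three times,
            // so there are 13 unique cases, matching 4n + 1 for n = 3.
        }
    }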

Equivalence Partitioning and BVA together

Another helpful technique is combining BVA with Equivalence Partitioning (EP). EP divides input data into equivalence classes, where every value in a class is expected to behave in the same way. By using the two techniques together, testers can reduce the number of test cases while still maintaining complete coverage.

For instance, if a system would only accept passwords of 6 to 10 characters long, test cases can be:

0-5 characters: Not accepted

6-10 characters: Accepted

11-14 characters: Not accepted

This mix makes the testing more efficient, especially when more than one variable is involved. A sketch of these partitions as a test follows.
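Assuming a hypothetical password checker, those three partitions reduce to three representative cases:

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    public class PasswordLengthTest {
        // Hypothetical system under test: passwords must be 6 to 10 characters long.
        static boolean isValid(String password) {
            return password.length() >= 6 && password.length() <= 10;
        }

        @Test
        public void onePickPerPartition() {
            assertFalse(isValid("abc"));          // 0-5 characters: not accepted
            assertTrue(isValid("abcdefgh"));      // 6-10 characters: accepted
            assertFalse(isValid("abcdefghijkl")); // 11-14 characters: not accepted
        }
    }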

Limitations of BVA

Although BVA is powerful, it does have some limitations:

It only works well when the system has properly defined numeric input ranges.

It has no regard for functional dependencies of variables.

It may be less effective for free-form languages like COBOL, which have more flexible input processing.

Conclusion

Boundary Value Analysis is a very important test method that helps testers target the most probable fault sites in a system. Combined with Equivalence Partitioning, it achieves high test effectiveness while eliminating duplicated test cases and giving up very little coverage. While BVA isn’t a “catch-all,” it is an essential technique for software quality and dependability.

Personal Reflection

Learning Boundary Value Analysis has helped me understand more about software testing and how it makes the software reliable. It has shown me that by focusing on boundary values, defects can be detected with higher efficiency without generating surplus test cases. It is a very practical approach to apply in real-world scenarios, such as form validation and number input testing, where boundary-related errors are likely to be found. In the future, I will include BVA in my testing approach to offer more test coverage in software projects that I undertake.

Citation

GeeksforGeeks. (n.d.). Software Testing – Boundary Value Analysis. Retrieved from https://www.geeksforgeeks.org/software-testing-boundary-value-analysis/

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

SMURF Testing

The blog I chose to write about this week details what different types of tests do and how they should be prioritized to test efficiently. The post starts with the writer talking about their own experience testing and how early on they tested their program only through the user interface, which quickly showed the downsides of this method: it was slow, couldn’t be run on all devices, and needed manual checks. Testing like this is called end-to-end testing, and it is slow, expensive, and not always the most revealing about potential problems. Unit tests, by contrast, are much preferred because they test the most basic functionality and are quick. A middle ground between these two is integration testing, which can cover most of the program without having to go through a limited user interface. These three types of testing form a pyramid, showing that a majority of your cases should be unit tests, with integration tests the next most common, and end-to-end tests the fewest.

The distribution pyramid is based on five principles that make up the SMURF mnemonic. The first is Speed, which has you prioritize many quick tests that will catch problems sooner. Next is Maintainability, which gives importance to tests that scale well and are not subject to many dependencies. Utilization means keeping in mind the cost of running your code repeatedly and minimizing the resources used. Reliability states that you should have tests that only return an error when something is actually wrong, and use them as indicators of crucial issues that need to be addressed. Lastly, Fidelity is testing that recreates the user’s experience from start to finish, or end to end. Each type of test in the pyramid scores differently across these five factors, each showing its use.

I chose this blog because I wanted to learn more about writing test cases for a complete program in a work environment. I thought the blog did well in that respect and helped insofar as providing an outline to begin with when starting to write test code. One addition that could have improved the post would be some examples, but those are easily accessible elsewhere. This will be a helpful resource to reference in the future when I am in the position of needing to write tests from scratch, as well as something to keep in mind when looking at prewritten tests for comparison.

Google Testing Blog: SMURF: Beyond the Test Pyramid

From the blog CS@Worcester – Computer Science Blog by dzona1 and used with permission of the author. All other rights reserved by the author.

Software Testing

For a lot of the projects I’ve worked on, we didn’t really do testing in the traditional way of actually writing tests. For each method or function we created, we would just use debug statements in the code itself to figure out which parts of the method were working correctly, which fail-safes were activating, and which errors were being thrown. There didn’t seem to be much reason for writing tests using JUnit or something similar when all code went to a dev environment first before being put into production.

Just recently I started using JUnit a little bit, just to find out how useful it could be when practicing coding problems. It definitely has some perks: without compiling and building the entire project each time, I could just run the test to check whether a certain piece of code functions properly or not. But for me, that isn’t really enough to warrant using it all that much. If you have access to a dev environment (which you should), even though it isn’t the “proper” way to do things, I believe that writing debug statements in code instead of tests is way more efficient. The sketch below shows the same check written both ways.
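As a rough illustration (a made-up divide method, not from any specific project), here is a debug-statement check next to its JUnit equivalent:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    public class DivideTest {
        // Hypothetical method under test.
        static int divide(int a, int b) {
            return a / b;
        }

        // Debug-statement style: build and run the project, then read the output.
        public static void main(String[] args) {
            System.out.println("divide(10, 2) = " + divide(10, 2)); // expect 5
        }

        // JUnit style: run just this test, no full build of the project needed.
        @Test
        public void divideSplitsEvenly() {
            assertEquals(5, divide(10, 2));
        }
    }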

If you look at it from a different perspective, though, writing tests can also be worthwhile if you’re more into the automation part of developing. Since tests are run automatically when building your project, you wouldn’t need to keep adding more debug statements manually. This could increase efficiency beyond what I previously mentioned, but you may not get as much information as you need from a test. If one of your methods isn’t working properly and the test keeps returning an exception or error without revealing the cause, the approach of writing debug statements will be more efficient and, in turn, will help you understand where you went wrong.

All in all, I personally believe that both unit testing (and testing in general) and writing debug statements have their own places when writing software. Obviously I don’t know too much about software testing yet, but the more I learn, the more my opinion could change. As of now, writing debug statements, whether there is an issue or not, helps me a lot in understanding each part of what I’m writing at a deeper level, and I look forward to learning more about testing to the point where it can also help me get there.

From the blog CS@Worcester – CS Blog by Mike and used with permission of the author. All other rights reserved by the author.

Path Testing

This week in class we learned about path testing, a white-box method that examines code to find all possible execution paths. Path testing uses control flow graphs to illustrate the different paths that could be executed in a program. In the graph, the nodes represent lines of code, and the edges represent the order in which the code is executed. Path testing appealed to me as a testing method because it gives a visual representation of how the source code should execute given different inputs. I took a deeper dive into path testing after this week’s classes and found this blog, which gave me a deeper understanding of it.

Steps

When you have decided that you want to perform path testing, you must create a control flow graph that matches up with the source code. For example, a split in direction between nodes should represent an if-else statement, and for while loops, a node towards the end of the loop should have an edge pointing back to an earlier node.

Secondly, pick out a baseline path for the program. This is the path you define to be the typical path through your program. After the baseline is created, continue generating paths until they represent all possible outcomes of the execution.

How many Test Cases?

For lengthy source code, the possible outcomes can seem endless, making it a difficult, time-consuming task to enumerate them manually. Luckily, there is an equation that determines how many test cases a program will need with path testing.

C = E – N + 2P

Where C stands for cyclomatic complexity. The cyclomatic complexity is equivalent to the number of linearly independent paths, which in turn equals the number of required test cases. E represents the number of edges, N is the number of nodes, and P is the number of connected components. Note that for a single program or source of code, P = 1 always.
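As a quick worked example of my own (not from the blog), consider a method with a single if-else. Once the two branches rejoin, its flow graph has four nodes and four edges, so C = 4 - 4 + 2(1) = 2, meaning two test cases.

    public class SignExample {
        // Hypothetical example: one if-else decision.
        static String sign(int x) {
            String result;
            if (x >= 0) {                // node 1: the decision
                result = "non-negative"; // node 2
            } else {
                result = "negative";     // node 3
            }
            return result;               // node 4: the branches rejoin here
        }
        // Edges: 1->2, 1->3, 2->4, 3->4, so E = 4, N = 4, and P = 1.
        // C = E - N + 2P = 4 - 4 + 2(1) = 2 linearly independent paths,
        // so two test cases: one with x >= 0 and one with x < 0.
    }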

Benefits

Path testing reveals outcomes that otherwise may not have been known without examining the code. As stated before, it can be difficult for a tester to know all the possible outcomes in a class. Path testing provides a solution to that with control flow graphs, which let the tester examine all the different paths. Path testing also ensures branch coverage, since every outcome of each decision point lies on at least one path. Developers also don’t need to merge code into a shared repository to run these tests, since they can test within their own branch. Unnecessary and overlapping tests are another thing developers don’t have to worry about.

Drawbacks

Path testing can also be time-consuming. Quicker testing methods exist and take less time away from further development. In many cases, path testing may also be unnecessary. It is often used in DevOps setups that require a certain amount of unit coverage before deploying to the next environment. Outside of that, it may be considered inefficient compared to other testing methods.

Blog: https://blog.testlodge.com/basis-path-testing/

From the blog Blog del William by William Cordor and used with permission of the author. All other rights reserved by the author.

Enhancing Software Testing Efficiency with Equivalence Partitioning

I chose this blog because I was interested in learning how to optimize the selection of test cases without sacrificing thorough coverage. During my search, I came across an article on TestGrid.io about equivalence partitioning testing, which does a fantastic job of showing how to remove redundancy from testing. As I develop my programming and software testing skills, I have found this method especially useful in simplifying the testing process.

Equivalence partitioning is a testing technique that divides input data into partitions, or groups, based on the assumption that values in each partition will behave similarly. Instead of testing each possible input, testers choose a sample value from each partition, expecting the entire group to produce the same result. This reduces the number of test cases while still providing sufficient coverage.

For example, if a program accepts input values ranging from 1 to 100, equivalence partitioning lets testers categorize inputs into two sets: valid values (1-100) and invalid values (less than 1 or more than 100). Rather than testing every number in the valid set, a tester would choose representative values like 1, 50, and 100. Similarly, they would test the invalid ranges with 0 and 101. This is time-efficient while still catching errors. A sketch of the idea follows.
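Here is that example as a small JUnit 5 sketch, assuming a hypothetical validator for the 1-100 range:

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    public class InputRangeTest {
        // Hypothetical system under test: accepts values from 1 to 100 inclusive.
        static boolean accepts(int value) {
            return value >= 1 && value <= 100;
        }

        @Test
        public void representativeValidValues() {
            // 1, 50, and 100 stand in for the whole valid partition.
            assertTrue(accepts(1));
            assertTrue(accepts(50));
            assertTrue(accepts(100));
        }

        @Test
        public void representativeInvalidValues() {
            // One value from each invalid partition, below and above the range.
            assertFalse(accepts(0));
            assertFalse(accepts(101));
        }
    }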

I chose the TestGrid.io article because it explains equivalence partitioning in an understandable and systematic manner. Much other testing material is too complex or ambiguous for newcomers, but this article simplifies the topic and incorporates real-world examples. This made it simple not only to understand the theory, but also to apply the method to real-life situations. The article also discusses the advantages of equivalence partitioning, including reducing redundant test cases, improving efficiency, and offering complete coverage. As someone interested in improving my testing methods, I found this useful because it corresponds with my goal of producing better, more efficient test cases without needless repetition.

Equivalence partitioning testing is a sound approach to optimizing test case selection. It enables the tester to focus on representative cases rather than testing all possible inputs, which saves time and effort. The TestGrid.io article provided a clear understanding of how to implement this method and why it is significant. For me, learning effective test methods like equivalence partitioning will make me more efficient in coding, debugging, and software development, preparing me for internships, projects, and software engineering positions.

Blog: https://testgrid.io/blog/equivalence-partitioning-testing/

From the blog CS@Worcester – Matchaman10 by tam nguyen and used with permission of the author. All other rights reserved by the author.

Week 8: Path Testing

This week, our class learned about path testing. It is a white box testing method (meaning the tests are designed from the internal structure of the source code) that makes sure every specification is met and helps to create an outline for writing test cases. Each line of code is represented by a node, and each progression point the control flow follows is represented by a directed edge. Test cases are written for each possible path the program can take.

For example, if we are testing a short program with a while loop (a sketch appears below), a path testing diagram can be created. Each node represents a line of code and each directed edge represents where control goes next, depending on the conditions. Test cases are written for each condition: running through the while loop and leaving when value is more than 5, or bypassing it entirely if value starts as more than 5.
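The post’s original code block and diagram aren’t reproduced here, but a minimal sketch of mine consistent with the description and the counts used below (five nodes, five edges) could look like this:

    public class LoopExample {
        public static void main(String[] args) {
            int value = Integer.parseInt(args[0]);      // node 1: read the input
            int doubled = value * 2;                    // node 2
            System.out.println("doubled = " + doubled); // node 3
            while (value <= 5) {                        // node 4: the branch point
                value = value + 1;                      // node 5: loops back to node 4
            }
        }
    }

With edges 1->2, 2->3, 3->4, 4->5, and 5->4, there are two paths: one that runs through the loop until value passes 5, and one that bypasses the loop because value starts above 5.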

I wanted to learn more about path testing, so I did some research and found a blog that mentioned cyclomatic complexity. Cyclomatic complexity is a number that classifies how complex the code is based on how many nodes, edges, and connected components you have. This number relates to how many tests you need to run, but is not always the same number. The cyclomatic complexity of the example above would be (5-5)+2(1) = 2.

Cyclomatic Complexity = Edges – Nodes + 2(Number of Connected Components)

The blog also explores the advantages and disadvantages of path-based testing. Some advantages are thorough testing of all paths, testing for errors in loops and at branching points, and ensuring any potential bugs in pathing are found. Some disadvantages are that it fails to test input conditions or runtime compilations, that many tests need to be written to cover every edge and path, and that the number of test cases grows exponentially as code becomes more complex.

Another exercise we did in class was condensing nodes that do not branch. In the example above, nodes 2 and 3 can be condensed into one node. This is because there is no alternative path that can be taken between them: if line 2 is run, line 3 always runs right after, no matter what number value holds. Condensing nodes is helpful in slightly more complex programs to make the diagram more readable, though if you are working with a program of a couple hundred lines, the savings seem negligible.

When I am writing tests for a program in the future, I would probably choose a more time-conscious method. Cyclomatic complexity is a good and useful metric to have, but basing test cases off of the path testing diagram does not seem practical for complex code under tight time constraints.

Blog post referenced: https://www.testbytes.net/blog/path-coverage-testing/

From the blog CS@Worcester – ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Mastering Software Testing: The Magic of Equivalence Class Testing

If you’re like me, getting into software testing might feel overwhelming at first. There are countless methods and techniques, each with its own purpose, and it’s easy to feel lost. But when I first learned about Equivalence Class Testing, something clicked for me. It’s a simple and efficient way to group similar test cases, and once you get the hang of it, you’ll see how much time and effort it can save.

So, what exactly is Equivalence Class Testing? Essentially, it’s a method that helps us divide the input data for a program into different categories, or equivalence classes, that are expected to behave the same way when tested. Instead of testing every single possible input value, you select one or two values from each class that represent the rest. It’s like saying, “if this value works, the others in this group will probably work too.” This approach helps you avoid redundancy and keeps your testing efficient and focused.

Now, why should you care about Equivalence Class Testing? Well, let me give you an example. Imagine you’re writing a program that processes numbers between 1 and 1000. It would be impossible (and impractical!) to test all 1000 values, right? With Equivalence Class Testing, you can group the numbers into a few categories, like numbers in the lower range (1-200), the middle range (201-800), and the upper range (801-1000). You then pick one or two values from each group to test, confident that those values will tell you how the whole range behaves. It’s a major time-saver.

When I started using this method, I realized that testing every possible input isn’t just unnecessary—it’s counterproductive. Instead, I learned to focus on representative values, which allowed me to be much more efficient. For instance, let’s say you’re testing whether a student is eligible to graduate based on their GPA and number of credits. You could create equivalence classes for the GPA values: below 2.0, which would likely indicate the student isn’t ready to graduate; between 2.0 and 4.0, which might be acceptable; and anything outside the 0.0 to 4.0 range, which is invalid. Testing just one GPA value from each class will give you a pretty good sense of whether your function is working properly without overloading you with unnecessary cases.
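Here is that GPA example sketched as code, with a hypothetical eligibility method of my own:

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    public class GraduationTest {
        // Hypothetical method under test: a GPA must be 0.0-4.0 to be valid,
        // and 2.0 or above to be eligible to graduate.
        static boolean canGraduate(double gpa) {
            if (gpa < 0.0 || gpa > 4.0) throw new IllegalArgumentException("invalid GPA");
            return gpa >= 2.0;
        }

        @Test
        public void oneValuePerEquivalenceClass() {
            assertFalse(canGraduate(1.5)); // class: below 2.0, not ready to graduate
            assertTrue(canGraduate(3.0));  // class: 2.0 to 4.0, acceptable
            assertThrows(IllegalArgumentException.class,
                         () -> canGraduate(5.0)); // class: outside 0.0-4.0, invalid
        }
    }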

Another thing I love about Equivalence Class Testing is that it naturally leads into both Normal and Robust Testing. Normal testing focuses on valid inputs—values that your program should accept and process correctly. Robust testing, on the other hand, checks how your program handles invalid inputs. For example, in our GPA scenario, testing GPAs like 2.5 or 3.8 would be normal testing, but testing values like -1 or 5 would fall under robust testing. Both are essential for making sure your program is strong and can handle anything users throw at it.

Lastly, when I first heard about Weak and Strong Equivalence Class Testing, I was a bit confused. But the difference is straightforward. Weak testing means you’re testing just one value from each equivalence class at a time for a single variable. On the other hand, Strong testing means you’re testing combinations of values from multiple equivalence classes for different variables. The more variables you have, the more comprehensive your tests will be, but it can also get more complex. I usually start with weak testing and move into strong testing when I need to be more thorough.
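A quick sketch of my own showing the difference for two hypothetical variables, GPA and credits, each with two classes: weak testing covers each class at least once, while strong testing covers every combination of classes.

    public class WeakVsStrong {
        public static void main(String[] args) {
            double[] gpaPicks = {1.5, 3.0}; // one value from each GPA class
            int[] creditPicks = {90, 120};  // one value from each credits class

            // Weak: each class appears in at least one test, paired up: 2 tests.
            System.out.println("weak: (1.5, 90) and (3.0, 120)");

            // Strong: the cross product of the classes: 2 x 2 = 4 tests.
            for (double g : gpaPicks)
                for (int c : creditPicks)
                    System.out.println("strong: (" + g + ", " + c + ")");
        }
    }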

Overall, learning Equivalence Class Testing has made my approach to software testing more strategic and manageable. It’s a method that makes sense of the chaos and helps me feel more in control of my testing process. If you’re new to testing, or just looking for ways to make your tests more efficient, I highly recommend giving this method a try. You’ll save time, energy, and still get great results.

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.