Category Archives: Week 9

Understanding Static and Dynamic Testing in Software Development

Intro

Software testing is an essential part of the development lifecycle, ensuring that applications function correctly and efficiently. The blog post “Static Testing vs. Dynamic Testing” by TatvaSoft provides an insightful comparison and description of these two fundamental testing approaches.

Summary Of The Source

The blog post explains the key differences between static and dynamic testing, their benefits, and when to use each approach:

  1. What is Static Testing? This type of testing is performed without executing the code. It involves reviewing documents and conducting code inspections to detect errors early in the development process.
  2. What is Dynamic Testing? Unlike static testing, dynamic testing requires running the code to identify issues related to performance, security, and functionality. It involves techniques such as unit testing, integration testing, and system testing.
  3. Advantages of Static Testing: Helps detect issues early, reduces debugging costs, and ensures adherence to coding standards.
  4. Advantages of Dynamic Testing: Identifies runtime errors, ensures the software behaves as expected in real-world scenarios, and validates functional correctness.
  5. When to Use Each Method? Static testing is best used in the early stages of development to catch errors before execution, while dynamic testing is crucial before deployment to validate real-world performance.
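To make the contrast concrete, here is a minimal sketch of my own (not from the TatvaSoft post) using a hypothetical Calculator class: reading the code in a review is static testing, while the JUnit tests below only expose the divide-by-zero behavior when the code is actually executed.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical class under test.
class Calculator {
    int divide(int a, int b) {
        return a / b; // throws ArithmeticException at runtime when b == 0
    }
}

public class CalculatorTest {
    @Test
    public void dividesEvenly() {
        assertEquals(5, new Calculator().divide(10, 2));
    }

    @Test
    public void divisionByZeroFails() {
        // A code review (static testing) might miss this; running the code (dynamic testing) exposes it.
        assertThrows(ArithmeticException.class, () -> new Calculator().divide(10, 0));
    }
}
```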

Why I Chose This Blog

I chose this blog because it breaks down the question of static vs. dynamic testing really well. It has clear sections that answer the important questions: what each approach is, its benefits and drawbacks, and how the two compare. By keeping things clear and understandable, it's a great blog for anyone new to the topic to check out.

Reflection

The blog reinforced how skipping dynamic testing can lead to undetected runtime issues, which can be costly and damaging after release. One key takeaway for me is the necessity of balancing both testing methods. Relying solely on static testing may overlook execution-related issues, while dynamic testing alone can let avoidable errors slip into the later stages of development. A combination of both is needed, or at least most effective, and practicing good static testing early means the dynamic testing that comes later is less prone to errors.

Future Application

In the future, when testing software, I will definitely keep both of these methodologies in mind and probably incorporate them together: static testing is very valuable for preventing errors and bugs before the code runs, while dynamic testing is really useful for seeing the end functionality as you test. It's an important topic to know, as correct testing methodologies and practices keep code clean and working properly.

Citation

TatvaSoft. (2022, November). Static Testing vs. Dynamic Testing. Retrieved from https://www.tatvasoft.com/outsourcing/2022/11/static-testing-vs-dynamic-testing.html.

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

The Importance of Quality Testing

By Daniel Parker

When I say quality testing, many software developers and others in the computer science field may think I am referring to testing the quality of software. While in many instances this may be the case, here I would like to point out the importance of quality testing: creating tests with a high level of quality to ensure working software.

Before delving into why it's so important, it might be beneficial to highlight just how disastrous dysfunctional code can be. According to Harry Ngyuen in his blog “Top 8 Reason Why Software Testing Is Important In 2025”, a bug that has made it to the maintenance phase can cost 10 times more to fix than it would during the design phase.

As he also mentions, many customers expect fast, high-quality applications, and if those expectations are not met, many customers may switch their provider. Beyond customer satisfaction, buggy software also carries many risks, including security hazards.

Clearly having buggy code that was haphazardly tested is incredibly detrimental to any software being released by a single developer or even an entire organization. So how do we ensure this doesn’t happen?

There is a multitude of ways to test code, and I don't think listing them out will help our cause, as there is no single best way. To ensure high-quality software, you must write high-quality tests.

Your tests have to accurately track the flow of your program, highlight where something failed and why it failed, and ensure all boundaries are tested so that a successful output is what you're left with. This means writing as many tests as needed, whether it's five or fifty. It means writing tests that check whether your application can handle anything thrown at it. Testing takes time and effort, but it's much better to spend that time while writing your code, handling the bugs one at a time, than to have to deal with many of them after a release has occurred.
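As a small illustration of what that can look like in practice (my own sketch, using a hypothetical Account class and JUnit), each test below exercises a boundary and carries a message that says exactly what failed:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical class under test.
class Account {
    private int balance;
    Account(int balance) { this.balance = balance; }
    int withdraw(int amount) {
        if (amount <= 0 || amount > balance) {
            throw new IllegalArgumentException("invalid withdrawal: " + amount);
        }
        balance -= amount;
        return balance;
    }
}

public class AccountTest {
    @Test
    public void withdrawalAtTheBoundaryIsAllowed() {
        // Boundary case: withdrawing exactly the balance should succeed and leave zero.
        assertEquals(0, new Account(100).withdraw(100),
                "withdrawing the full balance should leave an empty account");
    }

    @Test
    public void overdraftIsRejected() {
        // Invalid case: one unit past the balance must fail, with a message that explains the rule.
        assertThrows(IllegalArgumentException.class, () -> new Account(100).withdraw(101),
                "withdrawing more than the balance should be rejected");
    }
}
```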

I plan on continuing down the road of software development both in my student career and my professional career. With this in mind, I will ensure I apply my quality testing skills to produce the highest quality software I can.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.

Stubs vs. Mocks

This week in class, we learned about test doubles. Test doubles are make-believe objects in code that take the place of actual objects for testing purposes. Test doubles are a great resource if you are working on a team project and your collaborator's code is incomplete or not there at all. The test doubles we went over in class are dummies, fakes, stubs, and mocks. To be honest, I didn't have the best understanding of them at first, but creating stubs in Activity 12 gave me a greater comprehension, as well as an interest in the topic. Upon doing a little more research on stubs, I found a blog all the way from 2007 called Unit Testing with Stubs and Mocks. The blog compared stubs and mocks to each other and weighed which test double is more useful.

Stubs

Stubs are used by implementing part of your collaborator's code as a concrete class. How much of the collaborator code is needed depends on the minimum required to properly test something in the class. A stub's job is to do nothing more than return the expected value needed for the test to pass. Stubs can also be implemented inside the test class as anonymous declarations. This saves the tester a lot of time, and the same stub does not have to be reused across multiple unit tests. Since the stub is declared inside the test, the number of separate stub classes in a project becomes significantly lower.
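A minimal sketch of what such a stub might look like, assuming a hypothetical WeatherService collaborator that has not been implemented yet:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical collaborator interface whose real implementation is incomplete.
interface WeatherService {
    int currentTemperature(String city);
}

// The stub's only job is to return the canned value the test needs.
class WeatherServiceStub implements WeatherService {
    @Override
    public int currentTemperature(String city) {
        return 72; // hard-coded, no real lookup
    }
}

// The class under test, which depends on the unfinished collaborator.
class ReportGenerator {
    private final WeatherService service;
    ReportGenerator(WeatherService service) { this.service = service; }
    String report(String city) {
        return city + ": " + service.currentTemperature(city) + "F";
    }
}

public class ReportGeneratorTest {
    @Test
    public void buildsReportFromStubbedTemperature() {
        ReportGenerator generator = new ReportGenerator(new WeatherServiceStub());
        assertEquals("Boston: 72F", generator.report("Boston"));
    }
}
```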

A drawback of this, however, is that the declared stub does not account for any changes that your peer might make to the code. This can mean a lot of work for the tester, because new methods and variables may need to be declared. As a solution, it is recommended that a base stub class be created in the test directory of your project. This way, if a collaborator makes an update to the code, you limit the amount of work you do by only updating the base class.

Mocks

Mock objects are used to obtain a high level of control over testing the given implementation. Mock object generators such as EasyMock and JMock are great tools that create mock objects based on the class being tested. If a test fails in EasyMock, it produces an obvious message; when testing with stubs, it can be much less clear why the test failed. Mocks do lead to more frequent changes to the test cases, but through that constant updating the unit tester gains a greater knowledge of the implementation.
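As a rough sketch of what an EasyMock-style test looks like (the PaymentGateway and CheckoutService classes are my own hypothetical examples; the calls follow EasyMock's documented createMock/expect/replay/verify flow):

```java
import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

// Hypothetical collaborator to be mocked.
interface PaymentGateway {
    boolean charge(String customerId, int cents);
}

// Hypothetical class under test.
class CheckoutService {
    private final PaymentGateway gateway;
    CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }
    boolean checkout(String customerId, int cents) {
        return gateway.charge(customerId, cents);
    }
}

public class CheckoutServiceTest {
    @Test
    public void chargesTheGatewayExactlyOnce() {
        PaymentGateway gateway = createMock(PaymentGateway.class);
        expect(gateway.charge("alice", 500)).andReturn(true); // expectation set up front
        replay(gateway);

        Assertions.assertTrue(new CheckoutService(gateway).checkout("alice", 500));

        // verify() fails with a descriptive message if the expected call never happened.
        verify(gateway);
    }
}
```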

When to Use

In the debate over which is better, stubs or mocks, there is no wrong answer. There are scenarios where using stubs fits the test better, but there are also scenarios where mocks are the better option. For example, if you need to test a nested method call on the collaborator, stub objects are the way to go. Mocks are more robust than stubs, but stubs are easier to read and maintain. Mock users, however, say that mocks are worth it over stubs in the long term.

Reference

https://spring.io/blog/2007/01/15/unit-testing-with-stubs-and-mocks

From the blog CS@Worcester – Blog del William by William Cordor and used with permission of the author. All other rights reserved by the author.

Boundary, Equivalence, Edge and Worst Case

I have learned a lot about Boundary Value Testing and Equivalence Class Testing. Equivalence class testing can be divided into two categories: normal and robust. The best way I can explain this is through an example. Let's say you have a favorite shirt, and you lose it. You would have to look for it, but where? Under the normal method you would look in normal, or in a way valid, places like under your bed, in your closet, or in the dresser. Using the robust way, you would look in those usual spots but also include unusual spots. For example, you would look under your bed but then also look under the kitchen table. You are looking in spots where you should find a shirt (valid) but also looking in spots where you should not find a shirt (invalid). Now, in equivalence class testing, robust and normal can each be part of two other categories: weak and strong. Going back to the shirt example, a weak search would have you looking in a few spots, but a strong one would have you look everywhere. To summarize, a weak normal equivalence class test would have you look in a few usual spots, while a strong normal equivalence class test would have you look in a lot of usual spots. Weak robust and strong robust equivalence class tests act similarly to the earlier two, but they also have you look in unusual spots.
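To put numbers on the shirt analogy, here is a small sketch of my own (the age and score ranges are made up) comparing how many cases weak and strong robust equivalence class testing produce for two inputs:

```java
import java.util.List;

public class EquivalenceClassDemo {
    public static void main(String[] args) {
        // Age classes: below 18 (invalid), 18-65 (valid), above 65 (invalid).
        // Score classes: below 0 (invalid), 0-100 (valid), above 100 (invalid).
        List<Integer> ageRepresentatives = List.of(10, 40, 70);    // one value per age class
        List<Integer> scoreRepresentatives = List.of(-5, 50, 150); // one value per score class

        // Weak robust: cover every class at least once by pairing values up -> 3 tests.
        for (int i = 0; i < ageRepresentatives.size(); i++) {
            System.out.println("weak test: age=" + ageRepresentatives.get(i)
                    + ", score=" + scoreRepresentatives.get(i));
        }

        // Strong robust: every combination of classes -> 3 x 3 = 9 tests.
        for (int age : ageRepresentatives) {
            for (int score : scoreRepresentatives) {
                System.out.println("strong test: age=" + age + ", score=" + score);
            }
        }
    }
}
```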

Boundary value testing casts a smaller net when it comes to testing. It is similar to equivalence class testing, but it does not include weak and strong variants; it does have nominal and robust testing. It also has worst-case testing, which is unique to boundary testing. I didn't know much about worst-case testing, so I looked online.

I used this site: Boundary Value Analysis

Worst-case testing removes the single fault assumption. This means more than one fault may be causing failures, which leads to more tests. It can be robust or normal, and it is more comprehensive than standard boundary testing due to its coverage. While normal boundary testing results in 4n+1 test cases, normal worst-case testing results in 5^n test cases. Think of worst-case testing as putting a magnifying glass on something: from afar you only see one thing, but up close you can see that there is a lot going on. As a result, worst-case testing is used in situations that require a higher degree of testing.
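Here is a quick sketch of my own (not taken from the linked site; the example values are illustrative) that shows where the 5^n count comes from for two variables:

```java
import java.util.List;

public class WorstCaseDemo {
    public static void main(String[] args) {
        // Two variables, each taking its five boundary values (min, min+1, nominal, max-1, max)
        // -> 5^2 = 25 worst-case tests, versus 4n + 1 = 9 tests under the single fault assumption.
        List<Integer> day = List.of(1, 2, 15, 30, 31);
        List<Integer> month = List.of(1, 2, 6, 11, 12);

        int count = 0;
        for (int d : day) {
            for (int m : month) {
                count++;
                System.out.println("test " + count + ": day=" + d + ", month=" + m);
            }
        }
        System.out.println("total worst-case tests: " + count); // prints 25
    }
}
```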

I have learned a lot. I have learned about boundary testing and how it differs when it is robust or normal. I have learned about equivalence class testing and how it varies when it is a combination of weak, normal, robust or strong. I have also learned about edge and worst-case testing. This is another step towards my coding career.

From the blog My Journey through Comp Sci by Joanna Presume and used with permission of the author. All other rights reserved by the author.

Testing and Testing (One of them is a Fake).

 Hello!

This week in class we’ve been discussing testing using fakes, more specifically with stubs. Our first assignment this week told us about the different kinds of fakes used in testing, which I found a little confusing at first, since I would have liked to see a more literal example of all the different variants. That is why, for the sake of improving my knowledge of the subject going forward (since this is something we’ll keep talking about), I decided to do some reading on stubs and mocks from a blog post written by Raphael F. on Medium.

The article spoke at length about the differences between mocks and stubs, and gave some meaningful examples of both. I appreciated the use of diagrams in the article, as they show what each of them interacts with and how (i.e., a stub doesn’t interact with a database, and is instead a hard-coded value to be grabbed for testing). That wasn’t something that was immediately obvious to me in the assignments, and during our assignment on stubs specifically it began to make more sense, but I appreciate the way the article laid out the concept in plain text and made it easier to understand. I also liked that the blog post went over real-world applications for each of the fake types, such as using stubs for read/write actions to keep the code and files separate, or using mocks for API testing. Admittedly I am still a little hazy on mocks, but I think by the time we go over them in class, it will all make sense.

In closing, I really do see stubs as valuable pieces of testing equipment, since they allow me to test code without having every intricate detail finished. They make sense for confirming that methods got used and that a specific path through the program is being followed. Stubs can’t do everything; you can’t really test complex operations on a piece of code that doesn’t work. But for basic probing and testing, I could see myself using stubs a lot more often. It feels like one of those things that I could have used before without thinking about it, which makes sense, as I am the kind of programmer who likes taking things one step at a time and making sure one piece works before moving on to the next. But now that I have a name connected to the action, I really appreciate it as a tool that will play a big role in my programming career going forward.

 

 Link to the blog post in question: https://medium.com/@fideraphael/a-comprehensive-guide-to-stub-and-mock-testing-unveiling-the-essence-of-effective-software-testing-7f7817e3eab4

 

 

From the blog Camille's Cluttered Closet by Camille and used with permission of the author. All other rights reserved by the author.


Test Doubles: Enhancing Testing Efficiency

When developing robust software systems, ensuring reliable and efficient testing is paramount. Yet, testing can become challenging when the System Under Test (SUT) depends on components that are unavailable, slow, or impractical to use in the testing environment. Enter Test Doubles—a practical solution to streamline testing and simulate dependent components.

What are Test Doubles?

In essence, Test Doubles are placeholders or “stand-ins” that replace real components (known as Depended-On Components, or DOCs) during tests. Much like a stunt double in a movie scene, Test Doubles emulate the behavior of the real components, enabling the SUT to function seamlessly while providing better control and visibility during testing.

The implementation of Test Doubles is tailored to the specific needs of a test. Rather than perfectly mimicking the DOC, they replicate its interface and critical functionalities required by the test. By doing so, Test Doubles make “impossible” tests feasible and expedite testing cycles.

Key Variations of Test Doubles

Test Doubles come in several forms, each designed to address distinct testing challenges:

  1. Test Stub: Facilitates control of the SUT’s indirect inputs, enabling tests to explore paths that might not otherwise occur.
  2. Test Spy: Combines Stub functionality with the ability to record and verify outputs from the SUT for later evaluation.
  3. Mock Object: Focuses on output verification by setting expectations for the SUT’s interactions and validating them during the test.
  4. Fake Object: Offers simplified functionality compared to the real DOC, often used when the DOC is unavailable or unsuitable for the test environment.
  5. Dummy Object: Provides placeholder objects when the test or SUT does not require the DOC’s functionality.
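As a condensed sketch of my own (following the article's definitions rather than any code from it), here is how a few of these variations might look in Java for a hypothetical MailService depended-on component:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Depended-On Component (DOC) used by some SUT.
interface MailService {
    void send(String to, String body);
}

// Test Stub: supplies canned behavior so the SUT can run; here it simply does nothing.
class MailServiceStub implements MailService {
    @Override public void send(String to, String body) { /* no-op */ }
}

// Test Spy: records the SUT's indirect outputs so the test can inspect them afterwards.
class MailServiceSpy implements MailService {
    final List<String> sentTo = new ArrayList<>();
    @Override public void send(String to, String body) { sentTo.add(to); }
}

// Fake Object: a lightweight working implementation, e.g. an in-memory "outbox"
// standing in for a real SMTP-backed service.
class InMemoryMailService implements MailService {
    final List<String> outbox = new ArrayList<>();
    @Override public void send(String to, String body) { outbox.add(to + ": " + body); }
}
```

A test would hand one of these to the System Under Test in place of the real DOC, depending on whether it needs no behavior, recorded calls, or a simplified working substitute.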

When to Use Test Doubles

Test Doubles are particularly valuable when:

  • Testing requirements exceed the capabilities of the real DOC.
  • Test execution is hindered by slow or inaccessible components.
  • Greater control over the test environment is necessary to assess specific scenarios.

That said, it’s crucial to balance the use of Test Doubles. Excessive reliance on them may lead to “Fragile Tests” that lack robustness and diverge from production environments. Therefore, teams should complement Test Doubles with at least one test using real DOCs to ensure alignment with production configurations.

Conclusion

Test Doubles are indispensable tools for efficient and effective software testing. By offering flexibility and enhancing control, they empower developers to navigate complex testing scenarios with ease. However, judicious use is key: striking the right balance ensures tests remain meaningful and closely aligned with real-world conditions.

This information comes from this article:
Test Double at XUnitPatterns.com

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.

Learning Boundary Value Analysis in Software Testing

One of the most significant ways of ensuring that an application is reliable and efficient before deployment is through software testing. One of the most powerful functional testing techniques that focuses on testing the boundary cases of a system is Boundary Value Analysis (BVA). Boundary Value Analysis finds potential defects that are apt to show themselves on input partition boundaries.

What is Boundary Value Analysis?

Boundary Value Analysis is a black-box testing method which tests the boundary values of valid and invalid partitions. Instead of testing all the possible values, the testers focus on minimum, maximum, and edge-case values, as these are the most error-prone. This is because defects often occur at the extremities of the input ranges rather than at any point within the range.

For example, if a system accepts values between 18 and 56, instead of testing all the values, testers would test the below-mentioned values:

Valid boundary values: 18, 19, 37, 55, 56

Invalid boundary values: 17 (below minimum) and 57 (above maximum)

By running these primary test cases, the testers can easily determine boundary-related faults without unnecessary repetition of in-between value testing.
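As a minimal sketch of how those seven values might become JUnit tests (the AgeValidator class is my own hypothetical example; only the boundary values come from the article):

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical system that accepts values between 18 and 56.
class AgeValidator {
    boolean isValid(int age) {
        return age >= 18 && age <= 56;
    }
}

public class AgeValidatorBoundaryTest {
    private final AgeValidator validator = new AgeValidator();

    @Test
    public void valuesOnAndJustInsideTheBoundariesAreAccepted() {
        assertTrue(validator.isValid(18)); // minimum
        assertTrue(validator.isValid(19)); // just above minimum
        assertTrue(validator.isValid(37)); // nominal
        assertTrue(validator.isValid(55)); // just below maximum
        assertTrue(validator.isValid(56)); // maximum
    }

    @Test
    public void valuesJustOutsideTheBoundariesAreRejected() {
        assertFalse(validator.isValid(17)); // below minimum
        assertFalse(validator.isValid(57)); // above maximum
    }
}
```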

Implementing BVA: A Real-World Example

To represent BVA through an example, let us take a system processing dates under the following constraints:

Day: 1 to 31

Month: 1 to 12

Year: 1900 to 2000

Under Single Fault Assumption, where one of the variables is tested while others are at nominal values, test cases like below can be written:

Boundary value checking for years (e.g., 1900, 1960, 2000)

Boundary value checking for days (e.g., 1, 31, invalid cases like 32)

Checking boundary values for months (e.g., 1, 12)

By limiting test cases to boundary values, we are able to have maximum test coverage with minimum test effort.
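To see how few cases that actually is, here is a small sketch of my own (the nominal values are just illustrative) that enumerates the single-fault tests for the date example:

```java
import java.util.List;

public class SingleFaultDateDemo {
    public static void main(String[] args) {
        // Under the single fault assumption, one variable at a time takes its
        // boundary values while the other two stay at nominal values.
        int nominalDay = 15, nominalMonth = 6, nominalYear = 1950;

        for (int year : List.of(1900, 1901, 1950, 1999, 2000)) {
            System.out.println("day=" + nominalDay + ", month=" + nominalMonth + ", year=" + year);
        }
        for (int day : List.of(1, 2, 15, 30, 31)) {
            System.out.println("day=" + day + ", month=" + nominalMonth + ", year=" + nominalYear);
        }
        for (int month : List.of(1, 2, 6, 11, 12)) {
            System.out.println("day=" + nominalDay + ", month=" + month + ", year=" + nominalYear);
        }
        // 15 printed cases, of which the all-nominal case repeats, matching 4n + 1 = 13 distinct tests.
    }
}
```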

Equivalence Partitioning and BVA together

Another helpful technique is combining BVA with Equivalence Partitioning (EP). EP divides input data into equivalence classes, where every value in a class is expected to behave in the same way. By using these techniques together, testers can reduce the number of test cases while still maintaining complete coverage.

For instance, if a system would only accept passwords of 6 to 10 characters long, test cases can be:

0-5 characters: Not accepted

6-10 characters: Accepted

11-14 characters: Not accepted

This mix makes the testing more efficient, especially when using more than one variable.
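As a rough sketch (the PasswordPolicy class is my own hypothetical example, not from the source), the combined EP and BVA cases above might be written as a JUnit test like this:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical system that only accepts passwords of 6 to 10 characters.
class PasswordPolicy {
    boolean isAcceptable(String password) {
        return password.length() >= 6 && password.length() <= 10;
    }
}

public class PasswordPolicyTest {
    private final PasswordPolicy policy = new PasswordPolicy();

    @Test
    public void boundaryLengthsFromEachPartition() {
        assertFalse(policy.isAcceptable("a".repeat(5)));  // 0-5 characters: rejected
        assertTrue(policy.isAcceptable("a".repeat(6)));   // lower boundary of the valid class
        assertTrue(policy.isAcceptable("a".repeat(10)));  // upper boundary of the valid class
        assertFalse(policy.isAcceptable("a".repeat(11))); // 11-14 characters: rejected
    }
}
```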

Limitations of BVA

Although BVA is strong, it does face some limitations:

It only works well when the system has properly defined numeric input ranges.

It does not account for functional dependencies between variables.

It may not be as effective for free-form languages like COBOL, which have more flexible input processing.

Conclusion

Boundary Value Analysis is a very important testing method that helps testers target the most probable fault sites in a system. Merged with Equivalence Partitioning, it achieves high test effectiveness while eliminating redundant test cases and sacrificing little coverage. While BVA isn't a catch-all, it represents an essential technique for delivering quality, dependable software.

Personal Reflection

Learning Boundary Value Analysis has helped me understand more about software testing and how it makes the software reliable. It has shown me that by focusing on boundary values, defects can be detected with higher efficiency without generating surplus test cases. It is a very practical approach to apply in real-world scenarios, such as form validation and number input testing, where boundary-related errors are likely to be found. In the future, I will include BVA in my testing approach to offer more test coverage in software projects that I undertake.

Citation

Geeks for Geeks. (n.d.). Software Testing – Boundary Value Analysis. Retrieved from https://www.geeksforgeeks.org/software-testing-boundary-value-analysis/

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

SMURF testing

The blog I chose to write about this week details what different types of tests do and how they should be prioritized to test efficiently. The post starts with the writer talking about their own experience testing: early on they tested their program only through the user interface, which quickly showed the downsides of that method. It was slow, couldn't be run on all devices, and needed manual checks. Testing like this is called end-to-end testing, and it is slow, expensive, and not always the most revealing about potential problems. Unit tests are much preferred instead, as they test the most basic functionality and are quick. A middle ground between these two is integration testing, which is able to cover most of the program without having to go through a limited user interface. These three types of testing create a pyramid showing that a majority of your cases should be unit tests, with integration tests being the next most common, and finally end-to-end tests.

The distribution pyramid is based on five principles that make up the SMURF mnemonic. The first is Speed, which has you prioritize many quick tests that will catch problems sooner. Next is Maintainability, which gives importance to tests that scale well and are not subject to many dependencies. Utilization is about keeping in mind the cost of running your code repeatedly and minimizing the resources used. Reliability states that you should have tests that only return an error when something is actually wrong, and use these as indicators of crucial issues that need to be addressed. Lastly, Fidelity is testing that recreates the user's experience from start to finish, or end to end. Each type of test in the pyramid is distributed across these five factors in varying amounts, each showing its use.

I chose this blog because I wanted to learn more about writing test cases for a complete program in a work environment. I thought this blog did well in that respect and helped insofar as providing an outline to start from when beginning to write test code. One addition that could have improved the post would be some examples, but those are easily accessible elsewhere. This will be a helpful resource and reference to use in the future when I am in the position of needing to write tests from scratch, as well as something to keep in mind when looking at prewritten tests to compare.

Test Pyramid Google Testing Blog: SMURF: Beyond the Test Pyramid

From the blog CS@Worcester – Computer Science Blog by dzona1 and used with permission of the author. All other rights reserved by the author.

Software Testing

For a lot of the projects I’ve worked on, we didn’t really do testing in the traditional way of actually writing tests. For each method or function we created, we would just use debugging in the actual code itself to figure out which parts of the method were working correctly, which fail-safes were activating, and which errors were being thrown. There didn’t seem to be much reason for writing tests using JUnit or something similar when all code went to a dev environment first and then to production.

Just recently I started using JUnit a little bit, just to find out how useful it could be when practicing coding certain problems. It definitely has some perks: without compiling and building the entire project each time, I could just run the test to check whether a certain piece of code functions properly or not. But for me, that isn’t really enough to warrant using it all that much. If you have access to a dev environment (which you should), then even though it isn’t the “proper” way to do things, I believe that writing debug statements in code instead of tests is way more efficient.
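For comparison, here is a small sketch of my own showing the two styles side by side; the PriceCalculator class and its numbers are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical class under discussion.
class PriceCalculator {
    double applyDiscount(double price, double percent) {
        double result = price - (price * percent / 100.0);
        // Debug-in-code style: print intermediate values and read them while running in a dev environment.
        System.out.println("DEBUG applyDiscount(" + price + ", " + percent + ") = " + result);
        return result;
    }
}

public class PriceCalculatorTest {
    @Test
    public void twentyPercentOffOneHundred() {
        // Test style: the same check runs automatically on every build, with no manual reading required.
        assertEquals(80.0, new PriceCalculator().applyDiscount(100.0, 20.0), 0.0001);
    }
}
```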

If you look at it from a different perspective, though, writing tests can also be worthwhile if you’re more into the automation side of development. Since tests are run automatically when compiling your projects, you wouldn’t need to keep adding more debug statements manually. This could increase efficiency more than what I previously mentioned, but you may not get as much information as needed from the test. If one of your methods isn’t working properly but the test keeps returning an exception or error and you don’t understand why it’s happening, the previous approach of writing debug statements will be more efficient and, in turn, will help you understand where you went wrong.

All in all, I personally believe that both unit testing (and testing in general) and writing debug statements have their own places when writing software. Obviously I don’t know too much about software testing yet, and the more I learn, there is definitely potential that my opinion on it will change. As of now, writing debug statements when there is an issue, or even when there isn’t, helps me a lot in understanding each part of what I’m writing at a deeper level, and I look forward to learning more about testing to the point where it can help me reach that same understanding.

From the blog CS@Worcester – CS Blog by Mike and used with permission of the author. All other rights reserved by the author.