Category Archives: CS@Worcester

Dataflow testing overview

White box testing uses all of a program’s internal structure to the tester’s advantage during the testing phase. One component of this internal structure that usually makes up a small percentage of the code body, but can contribute a large share of problem cases, is variable and data type declarations. Testing for these cases is called dataflow testing. In the blog post “All about dataflow testing in software testing” by Prashant Bisht, the author details how dataflow testing is implemented and gives some examples of what it might look like.

Dataflow testing begins, before any interaction with the code itself, with a control flow graph, which tracks where variables are defined and where they are used. This organization enables the first important component of dataflow testing: tracking unused variables. Removing these unused variables can help narrow the search for the source of other problem cases. The second anomaly commonly tested for in dataflow testing is the undefined variable. These are more obvious than unused variables, since they almost always produce an error, because the program relies on nonexistent data. The final anomaly tested for is multiple definitions of the same variable; the redundancy this anomaly introduces can lead to unexpected results or output.
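
As a rough sketch (my own illustration, not taken from the blog), here is what those three anomalies might look like in a small Java method:

```java
public class DataflowAnomalies {

    public static int compute(int input) {
        int unused = 42;     // defined but never used: an "unused variable" anomaly

        int result;
        // return result;    // reading result here would rely on an undefined
                             // (never assigned) variable; Java refuses to compile this

        result = input * 2;  // first definition...
        result = input * 3;  // ...overwritten before the first value is ever read:
                             // a redundant "multiple definition" anomaly
        return result;
    }
}
```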

Subtypes of dataflow testing exist, each specialized for a different kind of analysis. For example, static dataflow testing tracks the flow of variables without running the tested code; it relies only on analysis of the code’s structure. Dynamic dataflow testing, by contrast, focuses on how the data held in variables changes throughout the code’s execution.

To show how dataflow testing works in practice, the author provides an example involving variables num1, num2, and num3. First, initialization of these variables is checked; for instance, if num1 is initialized as int nuM1 = some_int, the testing phase would catch the mismatch. Next, it ensures that the use of these variables doesn’t cause errors, which depends on the program specifications, such as whether the program is meant to add each variable. The data flow is then analyzed, ensuring that operations involving multiple variables function properly: if num1 + num2 = result1 and num2 + num3 = result2, the dataflow phase would ensure that the operation result1 + result2 = result3 functions properly (though whether result3 has been defined is a concern handled by the first phase). The final phase is the data update phase, where the values produced by operations are verified against their expected values.
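
Here is a minimal sketch of how the author’s example might be checked with JUnit; the concrete values and the test name are my own assumptions:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class DataflowExampleTest {

    @Test
    void valuesFlowThroughTheOperationsCorrectly() {
        int num1 = 2;                     // definitions verified in the initialization phase
        int num2 = 3;
        int num3 = 4;

        int result1 = num1 + num2;        // uses of num1 and num2
        int result2 = num2 + num3;        // uses of num2 and num3
        int result3 = result1 + result2;  // operation built from earlier results

        // data update phase: confirm each computed value is what we expect
        assertEquals(5, result1);
        assertEquals(7, result2);
        assertEquals(12, result3);
    }
}
```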

From the blog CS@Worcester – My first blog by Michael and used with permission of the author. All other rights reserved by the author.

Benefits of gray box testing

In many software testing situations, white box testing, where the internal code of what is being tested is visible to the tester, and black box testing, where it isn’t, are among the most often used methods. However, a method that integrates elements from both into a single approach, known as gray box testing, is slowly seeing more widespread use. In the blog “Exploring gray box testing techniques” by Dominik Szahidewicz, the author details different examples of gray box testing and the benefits of those examples compared to using only white or black box testing.

Gray box testing has noticeable benefits that are absent from both white and black box testing. By combining the principles of white box testing (knowledge of internal structure and design) with those of black box testing (observing output without the context of internal structure), the testing process becomes more robust and better able to account for problem cases.

A specific testing example where gray box testing can be implemented is pattern testing, where recurring patterns are leveraged to improve programs. With the use of gray box testing, the internal structure of the software can be related to the output to create more helpful and efficient test cases.

Another testing example where gray box testing can be implemented is orthogonal array testing, where data for testing is organized into specific test cases. This method is commonly used when exhaustive testing with every possible input is unreasonable because of the number of inputs. By using both the internal structure of the program and its outputs, more efficient test cases can be created.
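
As a rough illustration of the idea (my own sketch, not from the blog), an L4 orthogonal array covers every pair of values across three two-level parameters with only four test cases instead of all eight combinations. In JUnit 5 it might look like this, where startApp is a hypothetical stand-in for whatever the real program exposes:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class OrthogonalArrayTest {

    // Hypothetical stand-in for the real system under test
    static boolean startApp(String browser, String role, String theme) {
        return browser != null && role != null && theme != null;
    }

    // L4(2^3) orthogonal array: every pair of columns covers all four value combinations,
    // so pairwise interactions are exercised with 4 cases instead of all 8
    @ParameterizedTest
    @CsvSource({
        "chrome, guest, light",
        "chrome, admin, dark",
        "firefox, guest, dark",
        "firefox, admin, light"
    })
    void appStartsForEachConfiguration(String browser, String role, String theme) {
        assertTrue(startApp(browser, role, theme));
    }
}
```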

A basic guide to implementing gray box testing includes four steps detailed by the author. The first step is acquiring system knowledge, which includes documenting the internals available for use in testing, as well as the available documentation of the tested program’s outputs. The second step is to define test cases according to known information and specifications. The third step is to apply both functional and non-functional testing. The fourth step is to analyze the results of the testing.

From the blog CS@Worcester – My first blog by Michael and used with permission of the author. All other rights reserved by the author.

My Perspective on Risk Based Testing in Software Quality Assurance

As a computer science student getting more into the details of software development, I’ve started to realize how much goes into making sure software actually works the way it’s supposed to. I recently read the article “13 QA Testing Best Practices For 2024” from Testlio (testlio.com), and one part that stood out to me was the idea of risk based testing.

Risk based testing is all about using your time and effort wisely. Instead of trying to test every single feature equally, it focuses on the stuff that matters most. You look at what parts of the app are most likely to break or cause problems for users if they fail. Then you make sure those are tested thoroughly before anything else.

The article explains that identifying risky areas early helps teams put their energy in the right place. If you’ve only got so many people and so much time, this method helps avoid wasting those resources. It also means the most important features are solid by the time the app goes live.

This reminded me of a group project I worked on where we made a class management web app. We spent way too much time testing features like color themes and user bios. But when it came to the assignment submission tool, which was probably the most important part, we barely tested it. Sure enough, after we deployed it, users had issues uploading their files. If we had used risk based testing, we probably would’ve caught that.

Now that I know about this approach, I’m going to start using it in future projects. I’ll take time up front to figure out which features are most essential or most likely to go wrong, and make sure we focus testing there first. It’s a simple idea, but it makes a big difference.

In the end, risk based testing is about being smart with your time and making sure what matters most actually works. If you’re also learning software testing, this is a great thing to start thinking about. I definitely recommend checking out the full article if you’re curious:
13 QA Testing Best Practices For 2024

From the blog Zacharys Computer Science Blog by Zachary Kimball and used with permission of the author. All other rights reserved by the author.

My Experience with Software Testing and My Future: A Reflection


I never thought software testing would teach me many new things. I had experience with it at a previous college I attended, so when I transferred, I assumed I would just be relearning what had already been taught. Now, after taking the class, I realize my previous lessons were a mere microcosm compared to the vast range of testing methods, which makes sense, as my testing back then was done out of necessity, as a way to auto-grade my assignments. I won’t go too deep into the past, as today I will discuss the present and my future instead.

Hi, this is Debug Ducker, and I want to tell you what I have learned about software testing. I would also like to share my thoughts and feelings on my upcoming graduation and my future in computer science. I hope you enjoy.

Now, software testing is more than just testing; there are methods to it, different ways to approach it. One approach I didn’t really understand until later was black box testing. Basically, you don’t see the code, but you still run it. My first thought was, “Wow, that doesn’t make sense to me.” Why would I test something that I can’t see? Then, after a while, I understood perfectly: you don’t have bias when you don’t see the code. The developer has an idea of how the software works based on what they wrote, so there is a possibility that they didn’t account for something. A person who doesn’t know what the code looks like can test based on assumptions and find flaws without bias. QA testing does this regularly, and I understand why it helps developers save time.

I feel this is important because it opened my eyes to a lot of things about software testing and how useful its methods can be: node paths to see how the code progresses and to spot potential issues based on the structure of the code, and the many range testing methods that can help detect potential functionality issues and show what does or doesn’t need to be tested. There is so much to share but so little time.

I have learned a lot and hope to use this knowledge in the future. Speaking of which, what about my future? Well, I think that is hard to say. Once I graduate, I plan to apply to some software development positions and see what happens. This is a very strange moment in my life, like I am reaching a major conclusion. I can only see a small part of what life has in store for me, and I hope it is good and without issue. I just have to apply all the skills I have learned throughout my four years in college and hope I succeed.

Thank you for your time.

From the blog CS@Worcester – Debug Duck by debugducker and used with permission of the author. All other rights reserved by the author.

Manual Versus Automated Tests

Manual and automated testing are the two ways to run tests. One involves a human touch, while the other needs very little from a third party to work. While one might think automated testing is better in almost every case, that’s not necessarily true. To start, in most cases automated tests are just better: they are more efficient, they save people a lot of time, they can be run over and over again, and they can be run every time code is pushed instead of having to be run manually. Oftentimes the only time manual testing is useful is when things are tested for use by humans, meaning things like how an app feels to use or how it functions in practice. These areas require testing things that are hard for a computer or code to test.

Manual testing can be more cost effective depending on the circumstances, but it is also subject to more error due to the nature of human involvement. Manual tests are more adaptable because they can be changed more easily, while changing automated tests might take more time to make sure they still work with the code. Automated testing offers more coverage, since automated tests can be kept small and can cover various areas of the code, and they can also handle larger test cases that span a wide area, something manual testing struggles to handle. Overall, I’d say that automated tests seem better to use in general. Aside from things like testing for human feel, automated tests seem to handle most things better.

https://www.testrail.com/blog/manual-vs-automated-testing/

From the blog CS@Worcester – Code Craft by Kyle Tucker and used with permission of the author. All other rights reserved by the author.

The Importance Of Security Testing

Security testing is a major area of testing that is very important. In today’s world, security is imperative to a piece of software’s effectiveness. Without security, software will be targeted and used against people, and data breaches result in enormous monetary losses. Some of the goals of security testing are to find weaknesses in code, determine the impact of security breaches, report findings, and eliminate risks. Some of its principles are: realistic tests that reflect real-world use; tests that are thorough and wide-spanning; continuous testing, because the nature of security and attacks is always changing; and testing as a collaboration among all parties involved in the software development process.

We always hear on the news about data breaches at some company that cost billions of dollars. It’s hard to put into perspective how much money that is and how it actually affects people. The security of software has real-world consequences for people; it’s not something to take lightly. We have to protect software in order to protect the people using it, and that is just as important as testing to make sure the software works. The blog notes that negligence in security breaches leads to higher fines, which makes sense: if you willingly ignore security flaws, you’re putting people’s livelihoods at stake, not just the company’s. There are many different areas of security testing, including API testing, HTTPS, and the cloud; basically, any area that requires communication is subject to hackers.

https://fluidattacks.com/blog/security-testing-fundamentals/

From the blog CS@Worcester – Code Craft by Kyle Tucker and used with permission of the author. All other rights reserved by the author.

Understanding Equivalence Partitioning and Boundary Value Analysis

While doing an activity related to Software Quality Assurance concepts in class, I came across an article that clearly explained two crucial black-box testing techniques: Equivalence Partitioning (EP) and Boundary Value Analysis (BVA). The article, “Equivalence Partitioning and Boundary Value Analysis” by Alan Liew, stood out to me because of its simple examples and approachable language. I appreciated how it used realistic scenarios like age and email validation to make the concepts easier to understand.

In summary, the article defines Equivalence Partitioning as a technique that divides input data into partitions or sets that are treated similarly by the system. Inputs from the same partition are expected to behave the same way. For example, if users are allowed to register only when their age is between 1 and 21, then that range is a valid partition, while any value outside it is considered invalid. The article also introduces the idea that only valid partitions should be combined in testing, whereas invalid ones should be tested individually to catch specific error messages or bugs.

Boundary Value Analysis builds on this by emphasizing that input values at the edges of partitions, like 1 and 21 in the age example, are more likely to uncover boundary-related bugs. It explains the 2-value and 3-value BVA methods. A 2-value BVA tests the boundary and its neighbor (e.g., 0, 1, 21, 22), while a 3-value BVA goes even further (e.g., -1, 0, 1, 2, 20, 21, 22, 23). This distinction is important for thorough testing and for avoiding bugs that slip by due to limited test coverage.
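
To make this concrete, here is a minimal JUnit 5 sketch of the 2-value BVA cases for the age example, assuming a hypothetical isValidAge method whose valid partition is 1 through 21:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class AgeValidationTest {

    // hypothetical method under test: the valid partition is 1..21
    static boolean isValidAge(int age) {
        return age >= 1 && age <= 21;
    }

    // 2-value BVA: each boundary plus its nearest neighbor in the invalid partition
    @ParameterizedTest
    @CsvSource({
        "0, false",   // just below the lower boundary
        "1, true",    // lower boundary
        "21, true",   // upper boundary
        "22, false"   // just above the upper boundary
    })
    void boundaryValuesBehaveAsExpected(int age, boolean expected) {
        assertEquals(expected, isValidAge(age));
    }
}
```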

I chose this topic because it was one of the activities during class that initially confused me. I struggled to understand its purpose and how it applied in real testing scenarios. I wanted to learn more about why this technique matters and how it fits into the bigger picture of software quality assurance.

From the article, I learned that testing isn’t just about checking if a system works, it’s about designing the right test cases to catch errors early. Testing with both valid and invalid inputs, along with carefully chosen boundary values, helps ensure robust software. I also realized how combining invalid inputs in one test can lead to overlooked issues because one error may hide another.

Moving forward, I plan to use these strategies in future development and testing projects, especially where user input validation is involved. I hope to explore more QA topics like this to gain deeper insight into the role of a software tester.

Reference:
Liew, A. (2024, July 14). Equivalence partitioning and boundary value analysis. Medium. https://alanliew.medium.com/equivalence-partitioning-and-boundary-value-analysis-c940a0c120f5 

From the blog CS@Worcester – CodedBear by donna abayon and used with permission of the author. All other rights reserved by the author.

Learning About Spies in Unit Testing

In my software testing class, we’ve been learning a lot about unit testing and how to make sure our tests are clean and focused. For our group project, I needed to learn more about spies specifically. I came across a blog post on testRigor called “Mocks, Spies, and Stubs” that seemed to offer everything I wanted. I already knew a good bit about mocks and stubs, but spies were still kind of confusing to me, and it doesn’t hurt to review.

Summary of the Blog Post

The post explains how testing tools like mocks, stubs, and spies help isolate the code you’re testing. That just means you’re testing one piece of code without depending on other stuff like a real database or API.

Spies are used when you want to track what happens during a test. For example, you can use a spy to see if a method was called, how many times it was called, and what it was called with. What’s different about spies is they don’t change what the function does unless you want them to. They just track what happens for you.
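
For example, here is a minimal sketch of a spy using Mockito (my own example; the blog post explains the concept without tying it to one library). The spy wraps a real ArrayList, so the list keeps its normal behavior while the spy records the calls made to it:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class SpyExampleTest {

    @Test
    void spyTracksCallsWithoutChangingBehavior() {
        // wrap a real object in a spy
        List<String> names = spy(new ArrayList<>());

        names.add("Alice");   // the real add() still runs
        names.add("Bob");

        // the spy recorded what was called, how many times, and with what arguments
        verify(names, times(2)).add(anyString());
        verify(names).add("Alice");

        // ...but the underlying behavior is unchanged
        assertEquals(2, names.size());
    }
}
```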

Why I Picked This

I picked this blog because we’ve been working on our spies POGIL, and we haven’t covered these ourselves in class. I figured now was a good time to figure it out. It also helped me understand how spies are different from mocks and stubs, which I didn’t fully get before.

What I Learned

The main thing I learned is that spies are great when you want to see what a method did without actually changing how it works. That sounds really useful for stuff like tracking clicks or making sure a method only runs once. It also helped me realize that mocks and stubs have different purposes too, as mocks check behavior and stubs give fake data.
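
A quick sketch of that difference, again assuming Mockito (the names here are my own):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

class StubVsMockTest {

    @Test
    void stubsSupplyDataWhileMocksCheckBehavior() {
        @SuppressWarnings("unchecked")
        List<String> fakeList = mock(List.class);

        // stubbing: hand back fake data when the code under test asks for it
        when(fakeList.get(0)).thenReturn("fake value");
        assertEquals("fake value", fakeList.get(0));

        // mock-style verification: check that the interaction actually happened
        verify(fakeList).get(0);
    }
}
```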

How I’ll Use Spies

I think I’ll try using spies when I need to test things that happen in the background or when I just want to see if something got called. They seem useful when you don’t want to mess with the actual code but still want to make sure it’s doing what it’s supposed to, and in a pretty safe manner.

Conclusion

After reading this blog, I understand spies way better. They’re another helpful tool for writing good tests, and now I know when to use them instead of just guessing.

From the blog CS@Worcester – KeepOnComputing by CoffeeLegend and used with permission of the author. All other rights reserved by the author.

JUnit Blues

Hello!

 

With the semester wrapping up pretty quickly, and our last homework assignment being to design an in-class assignment, I’ve been doing some brushing up on JUnit testing, specifically assertions, which are the crux of how JUnit tests work: they tell a given test what qualifies as a pass or a fail. The assignment my groupmates and I designed revolved around the use of various kinds of assertions, ones that we didn’t cover heavily in class.
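
As a quick refresher, here is a small sketch of a few JUnit 5 assertions beyond a plain assertEquals; the example itself is mine, not from the article or our assignment:

```java
import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;
import org.junit.jupiter.api.Test;

class AssertionSamplerTest {

    @Test
    void assortedAssertions() {
        List<String> items = List.of("a", "b");

        // assertAll groups checks so every failure is reported, not just the first
        assertAll(
            () -> assertEquals(2, items.size()),
            () -> assertTrue(items.contains("a"))
        );

        // assertThrows passes only if the expected exception is thrown
        assertThrows(UnsupportedOperationException.class,
            () -> items.add("c"));   // List.of(...) returns an immutable list
    }
}
```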

As such, this week my blog of choice to read revolves around, what else, JUnit testing. Specifically, the article comes from Medium, which I’ve looked at before and which seems to be quite a useful resource covering both broad and specific computer science topics. I wanted to take a look at this particular article mostly because I wanted to see some of the other topics involved with JUnit that we didn’t cover in class. I intend on running Linux on my main PC once the semester ends, and seeing how to install JUnit on specific hardware instead of importing it as a library is pretty interesting! I am very used to just pulling a library from the top of a piece of code; I am not very well versed in actually installing libraries. Granted, this is the kind of thing that Docker and VS Code are made to circumvent, as you can set them to automatically install or include certain dependencies. I also enjoyed reading some of the specific recommendations for writing JUnit-specific tests. Some of them we touched on in class already, but it is always nice to keep myself fresh on these kinds of things: keep tests simple and focused, avoid possible edge cases, the list goes on. Something we didn’t touch on at all in class is the various debugging modes found in JUnit, like JDWP and Standard Streams, which can be useful in troubleshooting a program. Standard Streams, for instance, takes every print that would normally go to the main console and redirects it to the output stream, which can be useful for seeing exactly what is going on with a program. This kind of angle is interesting to me, as I strongly associate testing with debugging, but we didn’t cover debugging very thoroughly in class, so perhaps that is something I can look up on my own time.

I’ve thoroughly enjoyed my time in this class. Some things were a little dry, like the boundary testing near the beginning of the semester, but a lot of the things we learned, like JUnit testing and unit testing in general, I can see myself using regularly in industry, and I don’t think I am wrong in thinking that.

Thank you for reading my blog!

Camille

 

Blog Post: https://medium.com/@abhaykhs/junit-a-complete-guide-83470e717dce

 

From the blog Camille's Cluttered Closet by Camille and used with permission of the author. All other rights reserved by the author.
