Category Archives: CS-443

Decision Table-Based Testing, a Game Changer for Software Bugs.

Today, the next meal on my menu of headaches is Decision Table-Based Testing, which, as the name suggests, organizes conditions and expected actions into a table of test rules to ensure that your software is working as intended and not printing “Hello World!” when you try to generate your salary. I may be downplaying it somewhat, but the truth is that it might be one of the best weapons against bugs in software development.


This approach is all about making sure your app or software doesn’t throw a tantrum under different situations by planning out every possible scenario in a neat, organized table. It’s a bit like planning a massive party and making sure you’ve thought of everything, so nothing goes wrong (well, almost nothing).

Imagine you’ve got a bunch of switches and dials that can be turned on, off, or dialed up to eleven. Decision tables help you figure out what happens to your software when you mess with those controls in every possible way. It’s a clear, visual way to lay out the “if this, then that” of your app’s behavior. This is very handy because it turns the headache of thinking through a million combinations of inputs and outcomes into something manageable.

What’s awesome about this is how it simplifies the chaos. You get this big-picture view of how different inputs play together and affect your software, making it easier to spot where things might go wrong. It’s like having a map when you’re in a maze, showing you all the paths you can take.

Starting to use Decision Table-Based Testing is pretty straightforward. You write down all the things that could change or affect your software (conditions) and what should happen in response (actions). Then, you mix and match these conditions to cover all your bases. This method is a fantastic way to find those sneaky bugs that only show up under specific conditions and to make sure your software is rock solid.
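To make that concrete, here is a tiny, made-up example of what such a table can look like for a login form (two conditions, two actions; each column is one rule):

```
Conditions              Rule 1  Rule 2  Rule 3  Rule 4
Username is valid?      T       T       F       F
Password is valid?      T       F       T       F
Actions
Log the user in         X
Show an error message           X       X       X
```

Every column becomes at least one test case, so nothing slips through the cracks.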

“But Ano, what if you update the app and add new stuff?” As your app grows and gets more features, you can just update your decision table to keep up. It’s a flexible, scalable way to keep your testing game strong, no matter how advanced or complex your software gets.

Sure, it might sound a bit daunting, especially with super complicated apps. But, with the right tools and a bit of practice, it becomes a lot less scary. It’s about making the effort now to save a ton of headaches later when you’re not chasing down weird bugs half an hour before a project is due.

In the end, Decision Table-Based Testing is all about making your life easier and your software better. It’s a way to tackle the complexity head-on, with a clear plan and a cool head. And who doesn’t want that? So, if you’re in the business of making software, give it a whirl. It might just be the thing you need to keep those bug boogeymen at bay.

Till next time,

Ano out.

References:

https://testsigma.com/blog/decision-table-testing

https://www.guru99.com/decision-table-testing.html

From the blog CS@Worcester – Anairdo's WSU Computer Science Blog by anairdoduri and used with permission of the author. All other rights reserved by the author.

Simplifying Software Testing: Decision Tables and Program Graphs

In the vast world of computer science, there are various techniques employed to ensure the reliability and efficiency of software systems. Two such techniques that play a crucial role in software testing are Decision Tables and Program Graphs. Let’s delve into what they are and how they contribute to the realm of computer science.

Decision Tables: Decision Tables are a systematic and structured way of representing complex decision-making processes. Imagine a scenario where a software program needs to make different decisions based on various conditions. These conditions can lead to different outcomes or actions. Decision Tables provide a visual representation of all possible combinations of conditions and their corresponding actions, making it easier to analyze and test different scenarios.

To understand Decision Tables better, think of a flowchart but with a more organized and concise format. Each column represents a condition, and each row represents a combination of conditions along with the corresponding action to be taken. By systematically analyzing all possible combinations, testers can ensure that the software behaves as expected under different circumstances.
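As a rough sketch of how such a table can drive tests, the hypothetical example below encodes each rule of a small decision table (membership status and order size determining a discount) as one row of a JUnit 5 parameterized test; the function and values are invented purely for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class DiscountDecisionTableTest {

    // Hypothetical rule set: members get 10% off, and large orders get an extra 5% off.
    static double discount(boolean isMember, boolean largeOrder) {
        double d = 0.0;
        if (isMember) d += 0.10;
        if (largeOrder) d += 0.05;
        return d;
    }

    // Each CSV row is one rule of the decision table: conditions first, expected action last.
    @ParameterizedTest
    @CsvSource({
        "true,  true,  0.15",
        "true,  false, 0.10",
        "false, true,  0.05",
        "false, false, 0.00"
    })
    void discountMatchesDecisionTable(boolean isMember, boolean largeOrder, double expected) {
        assertEquals(expected, discount(isMember, largeOrder), 1e-9);
    }
}
```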

Program Graphs: Program Graphs, on the other hand, offer a graphical representation of the control flow within a program. They depict how the program transitions from one state to another based on different inputs or conditions. Program Graphs help testers visualize the execution path of a program, identifying potential areas of concern such as loops, branches, or unreachable code segments.

These graphs aid in understanding the program’s behavior and facilitate the creation of comprehensive test cases to ensure thorough testing coverage. By traversing the program graph, testers can validate different paths and verify the correctness and robustness of the software.
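As a small, invented illustration, the method below contains a single decision, and its program graph immediately exposes the two execution paths a tester would want to cover:

```java
// Hypothetical example: a pass/fail check and the paths its program graph reveals.
class ScoreClassifier {
    static String classify(int score) {
        String result;          // node 1
        if (score >= 60) {      // node 2 (decision)
            result = "pass";    // node 3
        } else {
            result = "fail";    // node 4
        }
        return result;          // node 5
    }
    // Edges: 1->2, 2->3, 2->4, 3->5, 4->5
    // Paths to cover: 1-2-3-5 (score >= 60) and 1-2-4-5 (score < 60)
}
```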

DD Path Testing: DD Path Testing, short for Decision-to-Decision Path Testing, uses the program graph to identify and test the paths a program can take. A DD path is a chain of statements that begins right after one decision point and runs up to the next, so covering every DD path exercises the program’s control flow in a systematic way. By analyzing this flow, testers can identify potential vulnerabilities, errors, or inefficiencies.

By integrating Decision Tables, Program Graphs, and DD Path Testing into the software testing process, developers and testers can enhance the quality and reliability of software systems. These techniques enable thorough testing coverage, helping to identify and address potential issues early in the development lifecycle.

Here are two web links where you can find more information about Decision Tables, Program Graphs, and DD Path Testing:

  1. Decision Tables – Geeks for Geeks
  2. What is Graph in Data Structure & Types of Graph?

Talking about these topics is essential because they form the backbone of effective software testing strategies. By understanding and implementing these techniques, developers and testers can ensure that software systems meet the desired quality standards, resulting in enhanced user satisfaction and trust.

From the blog Discoveries in CS world by mgl1990 and used with permission of the author. All other rights reserved by the author.

spec based testing

As we move onto more code-based testing in class, I wanted to review some of the black box testing techniques we’ve gone over in class, especially since the most recent homework was somewhat confusing for me.

I’ll start off with boundary value testing. According to a blog post on SDET Unicorns, boundary value testing exercises the valid inputs at the edges of the domain (the minimum and the maximum), the invalid inputs just outside those edges (one below the minimum and one above the maximum), and any special inputs, such as empty strings or null pointers. This technique’s main use is testing boundaries; that is, it’s mostly concerned with whether invalid inputs are properly rejected and valid inputs are processed as valid. The drawback, as we discussed in class, is that it doesn’t really describe the different cases of valid inputs if there is branching taking place.

Equivalence class testing addresses this issue. From the same post above, equivalence class tests (or partitions, in the author’s words) divide inputs into, well, equivalence classes, or groups of input where behavior is expected to be the same. This also means that there are multiple groups of valid inputs, meaning this approach can effectively test different cases of valid inputs based on the specifications, rather than just testing if valid and invalid inputs behave as expected.
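To tie the two techniques together, here’s a rough sketch (the age range and validator are made up for illustration) of what the resulting test cases might look like in JUnit:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class AgeValidatorTest {

    // Hypothetical spec: ages 18 through 65 inclusive are valid.
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 65;
    }

    @Test
    void boundaryValues() {
        assertFalse(isValidAge(17)); // just below the minimum
        assertTrue(isValidAge(18));  // minimum
        assertTrue(isValidAge(65));  // maximum
        assertFalse(isValidAge(66)); // just above the maximum
    }

    @Test
    void equivalenceClasses() {
        assertFalse(isValidAge(5));  // class: below the valid range
        assertTrue(isValidAge(40));  // class: inside the valid range
        assertFalse(isValidAge(90)); // class: above the valid range
    }
}
```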

The reason I wanted to look at these two specifically is that they are vital to understanding the decision table-based approach. I’m fairly confident in this approach because I found it fun to work with in class. It’s essentially a visualization and simplification of both boundary value and equivalence class testing (mostly equivalence class testing, at least in my interpretation). I find decision tables easier to work with because they are much more efficient with regard to the space you use, even if the amount of mental work you have to do is larger.

It’s interesting because I found, in the homework at least, that writing out test cases for the non-table-based approaches was somewhat frustrating, because you have to consider each case even if they do the same thing, and write them all out. With decision tables, you can mark values as ‘don’t cares’: if the output depends on only one of several inputs for a given rule, you don’t have to care which equivalence class the other values fall into. I really enjoy how this cleans up the entire process of black box testing. That being said, I understand that this can become very difficult as the complexity of a project increases.
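For example (a made-up rule set), if a request is rejected whenever the user is not logged in, the other condition becomes a don’t care, and two columns collapse into one:

```
Conditions             Rule 1  Rule 2  Rule 3
Logged in?             F       T       T
Has permission?        -       T       F
Actions
Reject the request     X               X
Process the request            X
```

The dash in Rule 1 is the don’t care: whatever ‘Has permission?’ is, the outcome is the same, so a single test covers it.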

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

Testing Paths

https://web.dev/articles/ta-test-cases

Last week, we tracked the path a program would take using program graphs. A program graph is a good way to represent the overall structure of a program; it shows all the ways the code can go.

Afterwards, I wondered about the other types of paths a program could take, whether it be the best case, worst case, etc. That’s when I found Ramona Schwering’s article about test cases and happy paths.

Although I primarily read this article for its definitions and information on the different paths a program could take, it also covers the idea of a test case and how to prioritize the right ones. Schwering first reminds us what a test case does while checking the results of the program: it verifies that the program does what it’s intended to do, and it validates that the program is what the customer or user wants.

Next, Schwering explains what drew me to the article in the first place: test paths and their outcomes. The first path is known as the happy path, since it applies to the most common use case of your program; this is what should happen and what we want to happen. The second path is the scary path, which is used to catch errors and ugly outcomes. 

Schwering lists other test paths that aren’t used as frequently as the first two but could be important to recognize. I’ll talk about a few I find interesting.

  • An angry path is supposed to trigger an error, making sure error handling works.
  • A desolate path would be important for programs dealing with input data, as it tests how the program behaves when it isn’t given enough data to function correctly.

Lastly, Schwering goes over the best practices for writing test cases, and how there are two patterns used to structure the cases: Arrange, act, assert and Given, when, then.
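As a quick, hypothetical sketch of the first pattern, a happy-path test written in the arrange, act, assert style might look like this (the ShoppingCart class is invented for illustration):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class ShoppingCartTest {

    // Hypothetical class under test.
    static class ShoppingCart {
        private final List<Double> prices = new ArrayList<>();
        void add(double price) { prices.add(price); }
        double total() { return prices.stream().mapToDouble(Double::doubleValue).sum(); }
    }

    @Test
    void totalAddsUpItemPrices() {
        // Arrange: set up the object under test with some items
        ShoppingCart cart = new ShoppingCart();
        cart.add(2.50);
        cart.add(1.25);

        // Act: perform the behavior being tested (the happy path)
        double total = cart.total();

        // Assert: check the observable result
        assertEquals(3.75, total, 1e-9);
    }
}
```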

I selected this article because it covered my topic of interest, and I also enjoyed Schwering’s straight-to-the-point, easy-to-understand writing style. I decided to do a deeper dive on Schwering herself and found she’s a software developer and tester who speaks at international conferences worldwide. Reading through the post, I found the organization of the information helpful to my learning, and the information itself reflected my thoughts on our recent classes.

The content Schwering provides in this article is great for a quick read for anyone wanting to learn more about testing. I personally believe the paths she went over are important to prioritize, especially happy and scary paths. I refreshed myself on the duties of a test case and learned about multiple kinds of test paths and how their importance varies from situation to situation. I enjoyed this article, especially Schwering’s little illustrations and how easy it was to understand the paths. I hope to create test cases in the future with these paths in mind, using them to my advantage when possible.

From the blog CS@Worcester – Josh's Coding Journey by joshuafife and used with permission of the author. All other rights reserved by the author.

Testing Strategies

Once you learn to test your code, another thing you have to worry about is testing it as efficiently as possible. You can just write tests and hope they ensure your code works, but that may not catch everything that can go wrong. Adopting some strategies and methods will be important in helping you determine how to go about testing your code and managing that process.

There are a number of strategies that you can use and adapt. Early on in class, we quickly went over some strategies that can be used or serve as a good base. Some of these were black box and white box testing, dynamic and static testing, and a few others. Each has its own way of testing, and some can be better suited than others. Ultimately, it’s up to you to determine which one best suits you, the way you want to do things, and your code.

In this blog post, Janki Mehta writes about testing strategies, their key components, general approaches, and some specific examples. They describe a testing strategy as a plan for “how software testing will be approached, executed, and managed throughout the software development life cycle.” They then describe the key components of a strategy: scope and overview, testing methodology, test environment specification, release control, risk analysis, testing tools, and review and approval. They go into detail on the two approaches: proactive, which focuses on detecting and catching bugs and defects before they even occur, and reactive, which focuses on catching them afterwards. Some strategies they list are black box and white box testing, but also others, like integration testing and regression testing, with descriptions of what each of them is. At the end of the article, there are some final tips that can help ease the testing process and create a comfortable environment when it comes to testing.

Knowing good strategies is important for testing code, and knowing multiple ones is even more important. It allows you to look around, figure out which fits best for your code and software, and adapt to it. Testing code can and probably will be difficult to get right all the time, so learning all you can and having additional resources will be beneficial. I think Mehta wrote a good article; it covers a lot of material, and while some of it can be seen as extra, it doesn’t hurt to know more, even if you can’t use it now. It’s clear the article is meant for a team to use as a reference, but anyone can use it, not just a team.

From the blog CS@Worcester – Cao's Thoughts by antcao and used with permission of the author. All other rights reserved by the author.

Postman Setup: TestProject

After exploring more testing strategies, I wondered how testing could be automated. I came across a blog post about Postman on the TestProject website explaining Postman’s use in automated API testing. This blog is focused on providing a tutorial for how to use Postman to automate testing, rather than on understanding various testing practices as seen in my previous blogs. The article’s title is Getting Started with using Postman for API Testing, by Dave Westerveld.

The blog first defines what an API is, as Postman is specifically for API testing: API stands for Application Programming Interface. APIs are extremely important in facilitating communication between software systems, so testing them is paramount to assure they are working properly. This is where Postman comes in. It is an automated API tester with a fully fledged application download. The article outlines the setup of Postman and explains how to get it to cooperate with your system: first open the Postman app that you’ve downloaded and sign in. After this you are able to make API requests; the New button is used to create an API request. You name the request and create a collection, where requests are stored. Following setup, the article explains how to execute a Postman request. During setup, the article created a collection named JsonPlaceHolder and then used the JSONPlaceholder demo API’s address as the request URL. After this, Postman will send the request for you. This can be used to compare results from different requests of an API and assure they are the same, although some manual input IS required in automated testing. More intricate API testing is shown in further tutorials with Postman.
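Postman handles all of this through its own interface, but as a rough sketch outside of Postman, the same basic check (send a GET request to the public JSONPlaceholder demo API and confirm it responds with status 200) could be written in plain Java; the endpoint and assertion here are just an illustration, not part of the tutorial:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

class JsonPlaceholderSmokeTest {

    @Test
    void usersEndpointResponds() throws Exception {
        // Build and send a GET request to the public demo API
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://jsonplaceholder.typicode.com/users"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The equivalent of Postman's status check: expect HTTP 200
        assertEquals(200, response.statusCode());
    }
}
```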

This is a topic that interested me greatly, as I have been given many suggestions to look into API testing. Through the CS-443 course, I had been looking for more options to test and explore code. Automated testing is immensely interesting, and I appreciate the comprehensive guide TestProject provides for Postman. This is of great value to me, as I will continue to look through this guide and potentially post more blogs on this topic as I explore automated testing. This first blog mainly involves setup and the basic concept of requests, but I look forward to further automated API testing using Postman, as these articles are very well structured.

Source:
https://blog.testproject.io/2020/07/15/getting-started-with-using-postman-for-api-testing/

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

Week 6: CS-443

Black box vs. White box vs. Gray box Testing

Black box testing

Black box testing focuses on the behavior of the software at a surface level. This means the tester cannot see any technical details such as the code or the structure. Testers are not concerned with the inner workings of the software, only with the inputs and outputs. Therefore, black box testing comes from the perspective of the user, which allows testers to uncover any bugs and/or design flaws that would directly affect the user’s experience.

Some black box testing procedures include boundary value analysis, equivalence partitioning, and decision table testing. Boundary value analysis tests whether the software produces the correct output when the input is at the lower or upper limit of the input range. Equivalence partitioning groups all possible inputs into categories, which can reduce the total number of test cases. Finally, decision table testing involves organizing inputs into a table and testing how the software behaves when those inputs are combined.

White box testing

White box testing is the opposite of black box testing: testers are given the internals of the system, such as the code, the structure, and how it’s implemented. White box testing allows testers to fully understand how the software is executed. Having access to the internals of the software can help testers find issues related to security vulnerabilities, broken data paths, etc., which black box testing cannot focus on.

White box testing can be completed with a few techniques. These techniques include statement, branch, and path coverage. Statement coverage ensures that every statement in the code is run and tested at least once. Branch coverage executes all possible branches (i.e. conditional branches) in the code to be tested. Path coverage uses test cases to cover all possible execution paths throughout the software.
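As a small illustration (the method and tests below are hypothetical), branch coverage for the function shown requires at least one test where the condition is true and one where it is false; a single test would leave one branch unexecuted:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ShippingCalculatorTest {

    // Hypothetical method under test: orders of $50 or more ship free.
    static double shippingCost(double orderTotal) {
        if (orderTotal >= 50.0) {   // true branch / false branch
            return 0.0;
        }
        return 5.99;
    }

    @Test
    void freeShippingBranch() {
        assertEquals(0.0, shippingCost(75.0), 1e-9);  // covers the true branch
    }

    @Test
    void paidShippingBranch() {
        assertEquals(5.99, shippingCost(20.0), 1e-9); // covers the false branch
    }
}
```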

Gray box testing

Gray box testing combines black and white box testing into one. Therefore testing can be done from the perspective of the user, while still having access to the software’s internal code. Gray box testing is helpful when identifying problems that may otherwise be hard to find with other types of testing.

A couple of techniques used in gray box testing are matrix and regression testing. Matrix testing looks at all variables in the code and assesses their risk levels. Doing this can identify unused or under-optimized variables. Regression testing checks that software changes or bug fixes do not create new errors.

Reflection

This article was chosen because it clearly described the different types of box testing and what each does. While reading the material, I realized that the techniques we have used in class (equivalence class and boundary value testing) are actually black box testing techniques. I enjoyed learning about the different techniques that are used and their benefits. Although developers and testers are two different roles, developers sometimes play the role of the end user when black box testing, so having a basic understanding of the different types of box testing will be helpful in the future.

References:

https://www.shakebugs.com/blog/black-vs-white-vs-grey-box-testing

From the blog CS@Worcester – Zack's CS Blog by ztram1 and used with permission of the author. All other rights reserved by the author.

Front End Testing

As my journey to becoming a full stack developer continues, I have begun to encounter the need for front-end testing. When finding an article, I realized that I did not have very much knowledge on what front-end testing entailed or how to implement it into my coding. I found the blog “Front End Testing: A Comprehensive Overview” by Kiruthika Devaraj (https://testsigma.com/blog/front-end-testing/) on my search for information on this topic.

The blog starts off by explaining that front-end testing is focused on the user’s experience. While past experiences have taught me that back-end testing specifically tests the functionality of the code, front-end testing (testing of the user-facing end of a website) has a higher focus on the user’s interactions. The author also delves into eight different types of front-end testing and the elements that make up these different types:

  • Unit Testing is the analysis and testing of individual components to ensure they each work as intended.
  • Acceptance Testing involves testing to ensure permissions, such as account accesses, are working properly.
  • Visual Regression Testing tests for visual changes in the front-end by comparing the current rendering against a baseline reference image. This can create brittle tests that fail due to the slightest changes in an element’s location on a page.
  • Performance Testing measures an application’s stability and responsiveness under different levels of simulated stress and traffic.
  • Accessibility Testing ensures that individuals with visual impairments or other additional needs can still use and access the application to the fullest of their capabilities.
  • End-to-End (E2E) Testing involves testing the application from start to finish, checking how all components and systems work together.
  • Integration Testing checks that individual modules have been combined correctly and work together as expected.
  • Cross-Browser Testing confirms that the application works correctly and is compatible across multiple browsers.

Devaraj also provides tips for better front-end testing; the first tip is to always use a testing framework. These frameworks, also called front-end tools, are what actually run the tests. Although countless front-end tools exist, the blog provides information on three popular options: Testsigma, Selenium, and Ranorex. Testsigma and Selenium are both open-source front-end testing frameworks that may be worth investigating further, while Ranorex is a commercial framework that may be more than what I am currently looking for.

Front-End Testing Moving Forward

As I move to implement automated front-end testing in my projects, it is important to remember that front-end testing is focused on ensuring the project is user-friendly, responsive, and visually appealing by testing the layout, design, and functionality across multiple browsers. Doing so will provide users with the best possible experience.

From the blog CS@Worcester – CS Learning by kbourassa18 and used with permission of the author. All other rights reserved by the author.

Boundary Value Testing

Exploring the Edges: A Deep Dive into Boundary Value Testing


In the realm of software testing, Boundary Value Testing (BVT) emerges as a cornerstone, spotlighting the crucial junctures at the extremities of software modules. This method, celebrated for its strategic focus, eschews the intricacies of internal code to scrutinize the behavior at pivotal points: the minimum, maximum, and nominal value inputs. Its essence captures the high propensity for errors lurking at the fringes, thereby underscoring its indispensability, particularly for modules sprawling with vast input ranges.

BVT transcends mere error detection; it embodies efficiency. By tailoring test cases to boundary conditions — including the crucial spots just above and below the extremes — it ensures a comprehensive examination without the exhaustive effort of covering every conceivable input. This approach not only conserves resources but also sharpens the focus on those areas most likely to be fraught with defects.

The methodology behind BVT is meticulous yet straightforward. It commences with the creation of equivalence partitions, a critical step that segments input data into logically similar groups. Within this framework, the boundaries are meticulously identified, crafting a testing landscape where both valid and invalid inputs undergo rigorous scrutiny. The beauty of BVT lies in its precision — it’s a targeted strike on the software’s most vulnerable fronts.
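As a brief, hypothetical illustration of that methodology, for an input field that accepts values from 1 to 100 the boundary-focused test set would look something like this:

```
Partitions: valid values 1..100; invalid values below 1 and above 100
Boundary test inputs:
  0    -> invalid (just below the minimum)
  1    -> valid   (minimum)
  2    -> valid   (just above the minimum)
  50   -> valid   (nominal)
  99   -> valid   (just below the maximum)
  100  -> valid   (maximum)
  101  -> invalid (just above the maximum)
```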

However, the path of BVT is not devoid of obstacles. Its lens, sharply focused on boundaries, may overlook the vast terrain of non-boundary inputs, a limitation particularly pronounced in Boolean contexts where only two states exist. Moreover, its efficacy is inextricably linked to the adeptness in defining equivalence partitions — a misstep here can skew the entire testing trajectory, transforming a potential asset into a liability.

Yet, the allure of BVT remains undiminished. In the high-stakes arena of software development, it stands as a sentinel at the gates of functionality, guarding against the incursion of boundary-related errors. Its role is pivotal, especially when the clock ticks against the backdrop of tight deadlines and expansive test fields.

BVT is more than a testing technique; it is a strategic imperative. As software landscapes continue to evolve, the demand for precision, efficiency, and effectiveness in testing methodologies will only escalate. BVT, with its focused approach and proven efficacy, is poised to meet this challenge, proving itself an invaluable ally in the quest for flawless software. In the hands of skilled testers, armed with a deep understanding and meticulous execution, it transforms from a mere method into a beacon of quality assurance. In short, Boundary Value Testing is a key black box method that targets software’s extreme limits to identify errors without delving into internal code, and it requires careful planning and execution, especially in defining equivalence partitions, to succeed.

I read the blog on this link:
https://testgrid.io/blog/boundary-value-testing-explained/

From the blog CS@Worcester – Coding by asejdi and used with permission of the author. All other rights reserved by the author.

Levels of Testing

https://testsigma.com/blog/levels-of-testing/

Testing software at different levels is very important for quality assurance, and each level focuses on a different scope of the system to help ensure the best conditions and functionality of your software. The four levels of testing are as follows: unit testing, which tests an individual component; integration testing, which tests integrated components; system testing, which tests the entire system; and acceptance testing, which tests the final system, otherwise known as the final product.

Unit testing outlines the expectations of a program and its functionality, which in essence makes sure that certain parts of the program function as intended. Integration testing allows you to test groups of functions to make sure they interact properly with each other, which is a step above unit testing. System testing is similar to integration testing in that it tests a ‘group’ of functions, but this group now comprises every part of the program or application. Acceptance testing then tests all functional and non-functional aspects of the program to ensure proper functionality, security, etc.
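As a rough sketch of the difference between the first two levels (the classes here are invented for illustration), a unit test checks one component in isolation, while an integration test checks that two components work together:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class LevelsOfTestingSketch {

    // Two hypothetical components.
    static class TaxCalculator {
        double taxOn(double amount) { return amount * 0.0625; }
    }

    static class CheckoutService {
        private final TaxCalculator calculator = new TaxCalculator();
        double totalWithTax(double subtotal) { return subtotal + calculator.taxOn(subtotal); }
    }

    @Test
    void unitTestSingleComponent() {
        // Unit level: the tax calculator by itself
        assertEquals(6.25, new TaxCalculator().taxOn(100.0), 1e-9);
    }

    @Test
    void integrationTestComponentsTogether() {
        // Integration level: checkout and tax calculator working together
        assertEquals(106.25, new CheckoutService().totalWithTax(100.0), 1e-9);
    }
}
```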

I selected the levels of testing as a topic because it directly relates to the types of testing we have been doing in class. We have been testing the functionality of different bodies of code so far this semester, and we have used different techniques including, but not limited to, equivalence class testing, edge case testing, worst case testing, and boundary value testing. I would say that most if not all of these testing types fall under the unit testing level, as we have, for the most part, tested individual system components. Being able to learn about the different levels of testing makes me excited to continue on through them.

This source was very helpful when it came to learning about the levels of testing we have not used in class yet, and I would recommend this article to any of my fellow students. Although the article does not go extremely in depth on the different levels of testing, it does give you a good general understanding of each level so that you can go on to learn more without getting confused between them. The diagrams and explanations of the testing sequence are valuable resources for a software testing course.

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.