Category Archives: CS-443

Cultivating Insight: “Reflect As You Work” Week-5

Introspection in Action:

The “Reflect As You Work” pattern, from “Apprenticeship Patterns” by Dave Hoover and Adewale Oshineye, emphasizes the importance of ongoing reflection during your software development journey. This pattern encourages developers to continually assess their experiences, decisions, and outcomes. It’s about developing a habit of introspective thinking that allows you to learn from your actions and continuously improve your skills and approaches.

A Personal Acknowledgment:

While I haven’t yet embarked on a professional software development career, this pattern resonates with me for its universal applicability. Reflective practice is a concept that I find valuable in any learning process. “Reflect As You Work” aligns with my belief in the power of self-awareness and learning from one’s experiences, whether in academic, personal, or future professional settings.

The Power of Self-Reflection:

What stands out to me about this pattern is its focus on the transformative power of reflection. By regularly taking stock of what works and what doesn’t, and why certain approaches succeed or fail, one can gain deeper insights into their work and personal growth. This practice turns every task and challenge into a learning opportunity.

Shaping a Reflective Mindset:

Though I am yet to apply this in a professional context, “Reflect As You Work” shapes how I view future work and learning. It instills the idea that real growth stems from not just doing but understanding and analyzing the process and outcomes of one’s actions. This continuous cycle of action and reflection is what drives deeper learning and skill development.

Embracing Reflection, Balancing Action:

I wholeheartedly embrace this pattern’s message, but I also recognize the need for a balance between reflection and action. Constant reflection should not impede progress or lead to over-analysis. The challenge lies in integrating reflection effectively into the workflow without it becoming an obstacle to productivity.

In conclusion, “Reflect As You Work” is a pivotal pattern for anyone who seeks not just to work in the field of software development but to excel in it. It encourages a mindset where every experience is a source of learning and every challenge is a stepping stone to improvement. This pattern is a reminder that the journey to becoming a skilled software developer is as much about introspection and learning from one’s own journey as it is about acquiring new technical skills.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

Code Review

For this week’s blog post, I chose the article “Code Review Best Practices – Lessons from the Trenches” by Drazen Zaric. I chose this article because its topic fits perfectly with the code review segment of the syllabus. This article discusses why you should do code reviews, how code reviews act as quality assurance, how code reviews function as a team improvement tool, how to prepare a pull request for review, and, of course, how to review code. In this post, I will be discussing why you should do code reviews and how they work as quality assurance.

Reviewing code is one of the most essential parts of the development process. “It should be obvious that the primary purpose of code review is to assess the quality of the changes being introduced. I mean, the dictionary definition of review says precisely that ‘review (noun) – a formal assessment of something with the intention of instituting change if necessary.’ Of course, code being code, there’s a lot of things that can be checked and tested automatically, so there’s nuance to what needs to be checked in an actual code review.” As the article mentions, there are many things that can and should be tested. Because of this, many people will need to review your code, and you will need to review many other people’s code, to make sure the best possible software is being developed. Quality assurance done well is a significant part of making sure your software is the best it can be.

In this section of the blog post, I will discuss how the article describes code review’s usefulness in quality assurance. “There are many ways in which code reviews help maintain the quality bar for the codebase and the product. In the end, it comes down to catching mistakes at a level that can hardly be automatically tested, such as architectural inconsistencies. Also, the code for automated tests should be reviewed, so there’s a meta-level at which reviews help with QA.” As mentioned, code review’s main boon to quality assurance is finding issues that can’t be caught, or are rarely caught, through traditional testing methods like automated testing. The article also mentions using checklists to record what needs to be checked, how to check it, and what the results of those checks should be. “You can have your own checklist or make it a shared list for the team or a project. There’s a ton of material written on the usefulness of checklists. In Getting Things Done, David Allen puts forward a simple idea – our minds are great at processing information but terrible at storing and recalling it. That’s why checklists are a great way of externally storing and breaking down a planned or repetitive task.” Having a method of keeping track of what is done, what needs to be done, and what is incomplete is essential when working on any large project, let alone a software development project.

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.

Static Testing VS Dynamic Testing

The blog post highlights the qualities of, as well as the differences between, two types of testing: static testing and dynamic testing. I chose this blog post because this course covers software quality assurance and testing and we have spent time in class covering these two types of testing, so I believe it can highlight and reinforce core concepts that will assist in gaining further knowledge within this class. In addition, the blog post explains static and dynamic testing in a simple, easy-to-understand format, which makes it all the more useful as a resource for reinforcing these topics.

The blog post, as previously discussed, covers static testing vs. dynamic testing. We learn that static testing involves examining the code without actually running it, while dynamic testing involves running the code to test its outcomes under various circumstances. From those two descriptions we can tell that static testing relies on reviewing the software documentation as well as the design of the code itself. Dynamic testing, however, executes the program, allowing testers to observe how the code behaves in an assortment of scenarios. This lets testers see how the code will work once it is released to the public and verify that it works as intended. The two types of testing differ in their aims as well: static testing wants to identify problems and improve on them early in development, while dynamic testing wants to validate the performance and functionality of the code once it is in an executable state. Since they have different intents, your project’s requirements determine which type of testing you choose.
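To make the difference concrete, here is a minimal sketch (my own example, not one from the blog post) of the same method being checked both ways, assuming JUnit 5 is on the classpath:

```java
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class NameLengthTest {

    // Static testing: a reviewer or a static-analysis tool can flag the
    // possible null dereference below just by reading the code.
    static int nameLength(String name) {
        return name.length(); // flagged statically: 'name' may be null
    }

    // Dynamic testing: the test actually executes the code and observes
    // its behavior for a concrete input.
    @Test
    void throwsOnNullInput() {
        assertThrows(NullPointerException.class, () -> nameLength(null));
    }
}
```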

From what I have learned, I believe the blog post was very helpful and reinforced core concepts that will help me further in this class. I believe learning more about static and dynamic testing will help me in this class as well as in knowing how to test in a professional setting. Knowing the core differences between the two will let me choose the type of testing best suited to the circumstances of a given project. In conclusion, this blog post was very helpful and will be utilized in the future.

https://testsigma.com/blog/static-testing-and-dynamic-testing

From the blog CS@Worcester – Giovanni Casiano – Software Development by Giovanni Casiano and used with permission of the author. All other rights reserved by the author.

Equivalence Class Testing

A Critical Component of Software Quality Assurance

Among software testing methods, Equivalence Class Testing stands out as a highly efficient and systematic approach. This blog post delves into the concept of Equivalence Class Testing, its significance in SQA, and how it fits into the broader context of software testing.

Understanding Equivalence Class Testing

Equivalence Class Testing is a black box testing method used to divide the input data of a software application into partitions of equivalent data from which test cases can be derived. An equivalence class represents a set of valid or invalid states for input conditions.

The main advantage of Equivalence Class Testing is its efficiency. Instead of testing every possible input individually, which can be impractical or impossible for systems with a vast range of inputs, testers can cover more ground by focusing on one representative per equivalence class.

Identifying Equivalence Classes

Equivalence classes are typically divided into two types: valid and invalid. Valid equivalence classes correspond to sets of inputs that are expected to be accepted by the software system, leading to a correct output, while invalid equivalence classes correspond to inputs the system should reject or handle as errors. The process of identifying these classes involves analyzing the software specifications and requirements to understand the input data’s boundaries and constraints.
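As a simple illustration (a hypothetical example, not drawn from the source article), consider an input field specified to accept ages from 1 to 120. There is one valid class and two invalid classes, and each class needs only one representative value in a test:

```java
public class AgeValidator {
    // Hypothetical spec: an age field accepts whole numbers from 1 to 120.
    // Valid equivalence class:     1 <= age <= 120  -> accepted
    // Invalid equivalence classes: age < 1 and age > 120 -> rejected
    public static boolean isValidAge(int age) {
        return age >= 1 && age <= 120;
    }
}
```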

The Role of Equivalence Class Testing in SQA

Software Quality Assurance encompasses a wide array of activities designed to ensure that the developed software meets and maintains the required standards and procedures throughout its lifecycle. Equivalence Class Testing fits into the SQA framework as a key component of the testing phase, contributing to the overall goal of identifying and mitigating defects.

By integrating Equivalence Class Testing into the SQA process, organizations can achieve several objectives:

  1. Enhanced Test Coverage: Equivalence Class Testing allows teams to systematically cover a wide range of input scenarios, thereby increasing the likelihood of uncovering hidden bugs.
  2. Efficiency and Cost-Effectiveness: By reducing the number of test cases without sacrificing the breadth of input conditions tested, teams can optimize their resources and save significant time and costs.
  3. Improved Software Quality: By ensuring that different categories of input are adequately tested, teams can enhance the robustness and reliability of the software product.

Implementing Equivalence Class Testing

To effectively implement Equivalence Class Testing, teams should follow a structured approach:

  1. Review Requirements and Specifications: Begin by thoroughly analyzing the software requirements and design documents to identify all possible input conditions.
  2. Identify and Define Equivalence Classes: Classify these input conditions into valid and invalid equivalence classes.
  3. Design and Execute Test Cases: Develop test cases based on representative values from each equivalence class and execute them to verify the behavior of the application.
  4. Evaluate and Document Results: Record the outcomes of the test cases and analyze them to identify any deviations from the expected results.
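Putting the steps together, here is a minimal JUnit 5 sketch (continuing the hypothetical AgeValidator example above) that designs one test case per equivalence class, each using a single representative value:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class AgeValidatorTest {
    // One representative value per equivalence class:
    // 35 (valid: 1..120), -5 (invalid: below 1), 200 (invalid: above 120)
    @ParameterizedTest
    @CsvSource({ "35, true", "-5, false", "200, false" })
    void oneRepresentativePerClass(int age, boolean expected) {
        assertEquals(expected, AgeValidator.isValidAge(age));
    }
}
```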


This post was based on this blog: https://www.celestialsys.com/blogs/software-testing-boundary-value-analysis-equivalence-partitioning

From the blog CS@Worcester – Coding by asejdi and used with permission of the author. All other rights reserved by the author.

Equivalence Class Testing

Among various testing techniques, equivalence class testing stands out as an efficient method for cutting down the number of test cases required while maintaining thorough test coverage. 

Equivalence class testing is based on the principle that inputs can be grouped into equivalence classes that exhibit similar behavior. By selecting representative test cases from these classes, testers can efficiently cover various scenarios without testing every possible input value individually. This technique is the best of both worlds, optimizing test case selection all while maintaining thorough test coverage; as those from ProfessionalQA.com put it, both the quality of test cases and testing as a whole are enhanced “by removing the vast amount of redundancy and gaps that appear in the boundary value testing.”

Equivalence class testing has four variations, each of which has its own benefits, downsides, and uses. They are determined by combining two factors: the number of test cases (whether classes are covered one at a time or in combination) and whether only valid values are tested or both valid and invalid values. Thus, in terms of equivalence classes, we have weak-normal, strong-normal, weak-robust, and strong-robust. Weak-normal has few but effective tests and only covers the valid equivalence classes; strong-normal covers every combination of valid equivalence classes; weak-robust is like weak-normal but includes invalid equivalence classes as well; and strong-robust covers every combination of valid and invalid equivalence classes. One thing to note about strong-robust equivalence class testing is that there is some redundancy when it comes to testing the invalid equivalence classes.
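To illustrate the variations with a made-up spec (my own example, not one from the ProfessionalQA article), suppose x is valid in [1..10] and y is valid in [1..5], with everything outside those ranges invalid:

```java
class EquivalenceVariantExamples {
    // Each {x, y} pair below is one test case.
    static final int[][] WEAK_NORMAL = {
        {5, 3}               // one case covers each variable's valid class
    };
    static final int[][] WEAK_ROBUST = {
        {5, 3},              // valid baseline
        {0, 3}, {11, 3},     // one invalid x class at a time, y held valid
        {5, 0}, {5, 6}       // one invalid y class at a time, x held valid
    };
    // The strong variants take the cross product of the classes, so
    // strong-robust also pairs invalid with invalid, e.g. {0, 0} --
    // which is where the redundancy mentioned above comes from.
}
```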

Equivalence class testing was a bit hard to pick up initially, but it really clicked thanks to some visual aid: the graphs of the variations of equivalence class testing. With this visual, I was able to understand how effective equivalence class testing is and why some would want to use it. It allows testers to “focus on smaller data sets, which increases the probability to uncovering more defects in the software product” and may reduce the possibility of error on the tester’s part. Compared with other testing techniques that become difficult or time-consuming on larger data sets, equivalence class testing is a great alternative.

https://www.professionalqa.com/equivalence-class-testing

From the blog CS@Worcester – Kyler's Blog by kylerlai and used with permission of the author. All other rights reserved by the author.

Interesting Features of JUnit 5

Since beginning to work with code in CS443 – Software Quality Assurance and Testing, we’ve used the JUnit framework for designing and running our test cases. So I decided to search for a blog post discussing some interesting features that I may not have come across yet but that could be useful, and landed upon Exploring the Exciting New Features of JUnit 5. This post is from December 2023, so it should be relatively up-to-date, and I recall a conversation with Dr. Wurst at one point where he briefly mentioned considering switching to a newer version of JUnit for some attractive features – hopefully we can delve into some of these.

Several feature additions come with JUnit 5, and specifically version 5.4. One that immediately stood out to me was support for more/new annotations and assertions, like @Nested. We’ve looked at some basic annotations like @BeforeEach and @AfterAll in class, but the idea of nesting tests is newer – however, it makes perfect sense from a practical perspective. Depending on the outcome of an initial test, testers may want to run further tests on one branch, or two different tests depending on which branch is followed. Proper annotation likely helps the test engine recognize the nested structure of the tests and manage potentially complex webs of nested tests efficiently.
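As a rough sketch of what this looks like in practice (my own minimal example, not code from the post), @Nested marks an inner class whose tests share the setup and context of the outer class:

```java
import static org.junit.jupiter.api.Assertions.*;

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;

class StackTest {
    Deque<String> stack;

    @BeforeEach
    void createStack() {
        stack = new ArrayDeque<>();
    }

    @Test
    void isEmptyWhenNew() {
        assertTrue(stack.isEmpty());
    }

    // Tests in this @Nested class only make sense after a push, so they
    // get an extra layer of setup on top of the outer @BeforeEach.
    @Nested
    class AfterPushingAnElement {
        @BeforeEach
        void pushElement() {
            stack.push("element");
        }

        @Test
        void isNotEmpty() {
            assertFalse(stack.isEmpty());
        }

        @Test
        void popReturnsTheElement() {
            assertEquals("element", stack.pop());
        }
    }
}
```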

There are also improvements to the assertEquals() functions and overall flexibility through enhancements to the API and assertion features for handling lambda functions. This goes hand in hand with a new feature of JUnit 5 – the ability for tests to be dynamically generated during test runtime and implemented (if needed) using a factory class/method. Last semester in Software Construction, Design and Architecture, we learned about the Factory architecture and methodology, so it was cool to see it applied to enhance features in professional software.
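Here is a minimal sketch of a dynamically generated test (my own example, using JUnit 5’s @TestFactory and DynamicTest API): the factory method builds one test per input at runtime, and each appears as its own case in the test report:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.DynamicTest.dynamicTest;

import java.util.stream.Stream;

import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;

class PalindromeDynamicTests {
    // Generates one DynamicTest per word at runtime.
    @TestFactory
    Stream<DynamicTest> palindromesAreDetected() {
        return Stream.of("racecar", "level", "noon")
                .map(word -> dynamicTest("isPalindrome(" + word + ")",
                        () -> assertTrue(isPalindrome(word))));
    }

    static boolean isPalindrome(String s) {
        return new StringBuilder(s).reverse().toString().equals(s);
    }
}
```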

Another cool feature of JUnit 5, which represents a considerable change from JUnit 4, is the transition to a modular structure, meaning there is a separate test runner and test classes that operate independently from the main program. I could imagine that this separation isolates any issues that arise during testing and protects the main program, while also preventing unintended interactions between the main program and properly designed tests.

JUnit 5 offers some major features and enhancements over previous versions, with the ability to tag and implement nested tests, improved lambda function support, and factory methods for dynamic test creation and implementation. Considering these, I can see how JUnit could be effective for designing automated test runs. I’m looking forward to implementing more of these features in our class and homework activities for CS443, and to trying some extra tests and methods that I read about in this post.

Source: 

https://blog.machinet.net/post/exploring-the-exciting-new-features-of-j-unit-5

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

Pairwise and Combinatorial Testing

The article “Combinatorial Testing” focuses on the insights of software testing methods. It explores the evolution of combinatorial testing, discussing advancements in algorithm performance and constraint representation, the importance of detecting interaction failures within software systems, and the effectiveness of t-way combinations for fault detection across various domains. The article “Pairwise Testing” describes pairwise testing as a permutation-and-combination technique aimed at testing each pair of input parameters to ensure that the system is functioning properly across all possible combinations. It also addresses the many benefits of pairwise testing and its role in reducing test execution time and cost while maintaining test coverage, as well as the challenges associated with pairwise testing, including its limitations in detecting interactions beyond pairwise combinations.

Pairwise Testing

Pairwise testing is a software testing method that aims to comprehensively validate the behavior of a system by testing all possible pairs of input parameter values. This method is mainly used because many of the defects in software systems are triggered by interactions between pairs of input parameters, rather than by individual parameters in isolation.
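For example (a hypothetical setup of my own, not one from the articles), three parameters with two values each would take 8 test cases to cover exhaustively, but the 4 rows below already cover every pair of parameter values at least once:

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class CheckoutPairwiseTest {
    // OS x Browser, OS x Network, and Browser x Network: every pair of
    // values appears in at least one row of this 4-row covering set.
    @ParameterizedTest
    @CsvSource({
        "Windows, Chrome,  WiFi",
        "Windows, Firefox, Ethernet",
        "Mac,     Chrome,  Ethernet",
        "Mac,     Firefox, WiFi"
    })
    void checkoutWorksForEachPair(String os, String browser, String network) {
        // launchCheckout(...) is a hypothetical stand-in for the real system
        // under test; the point here is the pairwise row selection.
        // assertTrue(launchCheckout(os, browser, network).succeeded());
    }
}
```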

Benefits & Challenges

Some benefits that pairwise testing offers include efficiency: by testing combinations of two input parameters at a time, it reduces the number of test cases required compared to exhaustive testing. Pairwise testing also offers effective defect detection: by systematically exploring pairs of parameters, it finds defects that are triggered by interactions between pairs of input parameters and helps to identify scenarios that would otherwise be missed. One challenge pairwise testing may face is parameter selection. Selecting the right parameters is crucial and requires a lot of knowledge of the software and its potential interaction scenarios. If the wrong parameters are selected, this can lead to incomplete test coverage and missed defects.

Combinatorial Testing

Combinatorial testing is a software testing technique that focuses on efficiently testing the interactions between different input parameters of a system. This method involves generating a set of test cases that cover various combinations of specific input parameter values.

Benefits & Challenges

Some benefits of combinatorial testing include improved software quality: by testing various combinations of input parameters, it can identify and address interaction failures early in the development process and find defects that could impact the system’s performance. A challenge that combinatorial testing may face is scalability. Combinatorial testing is effective for small to medium-sized systems, but when scaling it to large, complex systems with a high number of input parameters and values, you may run into problems.

Why did I pick this Article?

I picked these two articles, which talk about pairwise and combinatorial testing, because both of these test methods stand at the forefront of software testing. The articles go into detail about how both methods offer an efficient way to ensure comprehensive test coverage while minimizing redundancy. Both of these articles have taught me a lot about pairwise and combinatorial testing.

Reflection

After reading both of these articles, I have gained a greater understanding of both of these test methods. With this newfound knowledge, I aspire to apply pairwise and combinatorial testing techniques in my future projects. Both methods offer practical solutions to common testing challenges, and by incorporating them into my future endeavors I aim to contribute to the development of reliable software systems.

Article link is here: https://www.sciencedirect.com/science/article/abs/pii/S0065245815000352

https://testsigma.com/blog/pairwise-testing/

From the blog CS@Worcester – In's and Out's of Software Testing by Jaylon Brodie and used with permission of the author. All other rights reserved by the author.

Decision Table-Based Testing, a Game Changer for Software Bugs.

Today, the next meal on my menu of headaches is Decision Table-Based Testing, which as the name suggests is a table of tests to ensure that your software is working as intended and not printing “Hello World!” when you try to generate your salary. I may be downplaying it somewhat but the truth is that it might be one of the best weapons against bugs in software development.


This approach is all about making sure your app or software doesn’t throw a tantrum under different situations by planning out every possible scenario in a neat, organized table. It’s a bit like planning a massive party and making sure you’ve thought of everything, so nothing goes wrong (well, almost nothing).

Imagine you’ve got a bunch of switches and dials that can be turned on, off, or dialed up to eleven. Decision tables help you figure out what happens to your software when you mess with those controls in every possible way. It’s a clear, visual way to lay out the “if this, then that” of your app’s behavior. This is very handy because it turns the headache of thinking through a million combinations of inputs and outcomes into something manageable.

What’s awesome about this is how it simplifies the chaos. You get this big-picture view of how different inputs play together and affect your software, making it easier to spot where things might go wrong. It’s like having a map when you’re in a maze, showing you all the paths you can take.

Starting to use Decision Table-Based Testing is pretty straightforward. You write down all the things that could change or affect your software (conditions) and what should happen in response (actions). Then, you mix and match these conditions to cover all your bases. This method is a fantastic way to find those sneaky bugs that only show up under specific conditions and to make sure your software is rock solid.
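To make that concrete, here’s a tiny sketch of my own (a made-up login example, not something from the referenced posts), written as a JUnit 5 parameterized test where each row exercises one rule of the decision table:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class LoginDecisionTableTest {
    // Rule | registered | passwordOk | action
    //  R1  |    yes     |    yes     | GRANT
    //  R2  |    yes     |    no      | DENY
    //  R3  |    no      | don't care | DENY  (password is irrelevant)
    @ParameterizedTest
    @CsvSource({
        "true,  true,  GRANT", // R1
        "true,  false, DENY",  // R2
        "false, true,  DENY",  // R3
        "false, false, DENY"   // R3
    })
    void everyRuleProducesItsAction(boolean registered, boolean passwordOk,
                                    String expected) {
        assertEquals(expected, login(registered, passwordOk));
    }

    // Toy implementation of the behavior the table describes.
    static String login(boolean registered, boolean passwordOk) {
        return (registered && passwordOk) ? "GRANT" : "DENY";
    }
}
```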

“But Ano, what if you update the app and add new stuff?” As your app grows and gets more features, you can just update your decision table to keep up. It’s a flexible, scalable way to keep your testing game strong, no matter how advanced or complex your software gets.

Sure, it might sound a bit daunting, especially with super complicated apps. But, with the right tools and a bit of practice, it becomes a lot less scary. It’s about making the effort now to save a ton of headaches later when you’re not chasing down weird bugs half an hour before a project is due.

In the end, Decision Table-Based Testing is all about making your life easier and your software better. It’s a way to tackle the complexity head-on, with a clear plan and a cool head. And who doesn’t want that? So, if you’re in the business of making software, give it a whirl. It might just be the thing you need to keep those bug boogeymen at bay.

Till next time,

Ano out.

References:

https://testsigma.com/blog/decision-table-testing

https://www.guru99.com/decision-table-testing.html

From the blog CS@Worcester – Anairdo's WSU Computer Science Blog by anairdoduri and used with permission of the author. All other rights reserved by the author.

Simplifying Software Testing: Decision Tables and Program Graphs

In the vast world of computer science, there are various techniques employed to ensure the reliability and efficiency of software systems. Two such techniques that play a crucial role in software testing are Decision Tables and Program Graphs. Let’s delve into what they are and how they contribute to the realm of computer science.

Decision Tables: Decision Tables are a systematic and structured way of representing complex decision-making processes. Imagine a scenario where a software program needs to make different decisions based on various conditions. These conditions can lead to different outcomes or actions. Decision Tables provide a visual representation of all possible combinations of conditions and their corresponding actions, making it easier to analyze and test different scenarios.

To understand Decision Tables better, think of a flowchart but with a more organized and concise format. Each column represents a condition, and each row represents a combination of conditions along with the corresponding action to be taken. By systematically analyzing all possible combinations, testers can ensure that the software behaves as expected under different circumstances.

Program Graphs: Program Graphs, on the other hand, offer a graphical representation of the control flow within a program. They depict how the program transitions from one state to another based on different inputs or conditions. Program Graphs help testers visualize the execution path of a program, identifying potential areas of concern such as loops, branches, or unreachable code segments.

These graphs aid in understanding the program’s behavior and facilitate the creation of comprehensive test cases to ensure thorough testing coverage. By traversing the program graph, testers can validate different paths and verify the correctness and robustness of the software.

DD Path Testing: DD Path Testing, short for Decision-to-Decision Path Testing, utilizes program graphs to identify and test the paths that run between the decision points in a program. By analyzing the control flow of the program in this way, testers can achieve comprehensive testing coverage and identify potential vulnerabilities, errors, or inefficiencies.
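As a small illustration (a hypothetical example, not taken from the linked resources), consider this method with the nodes of its program graph marked in comments; covering every path requires one input per path:

```java
class GradeClassifier {
    static String classify(int score) {
        String grade;              // node 1
        if (score >= 90) {         // node 2 (decision)
            grade = "A";           // node 3
        } else if (score >= 60) {  // node 4 (decision)
            grade = "pass";        // node 5
        } else {
            grade = "fail";        // node 6
        }
        return grade;              // node 7
    }
    // Paths through the graph: 1-2-3-7, 1-2-4-5-7, and 1-2-4-6-7.
    // Covering all three takes one input per path, e.g. 95, 75, and 40.
}
```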

By integrating Decision Tables, Program Graphs, and DD Path Testing into the software testing process, developers and testers can enhance the quality and reliability of software systems. These techniques enable thorough testing coverage, helping to identify and address potential issues early in the development lifecycle.

Here are two web links where you can find more information about Decision Tables, Program Graphs, and DD Path Testing:

  1. Decision Tables – Geeks for Geeks
  2. What is Graph in Data Structure & Types of Graph?

Talking about these topics is essential because they form the backbone of effective software testing strategies. By understanding and implementing these techniques, developers and testers can ensure that software systems meet the desired quality standards, resulting in enhanced user satisfaction and trust.

From the blog Discoveries in CS world by mgl1990 and used with permission of the author. All other rights reserved by the author.

Spec-Based Testing

As we move onto more code-based testing in class, I wanted to review some of the black box testing techniques we’ve gone over in class, especially since the most recent homework was somewhat confusing for me.

I’ll start off with boundary value testing. According to a blog post on SDET Unicorns, boundary value testing tests valid inputs at the edges of the domain (the minimum, the maximum, and the values just inside each), invalid inputs that sit just outside the domain, and any special inputs, such as empty strings or null pointers. This technique’s main use is testing boundaries; that is, it’s mostly concerned with whether or not invalid inputs are properly dealt with and valid inputs are processed as valid. The drawback, as we discussed in class, is that it doesn’t really describe the different cases of valid inputs if there is branching taking place.
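As a quick refresher for myself, here’s what that might look like as a JUnit 5 parameterized test (my own sketch, assuming a made-up isValidQuantity whose valid domain is 1 to 100):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class QuantityBoundaryTest {
    @ParameterizedTest
    @CsvSource({
        "0,   false", // just below the minimum (invalid)
        "1,   true",  // minimum
        "2,   true",  // just above the minimum
        "50,  true",  // nominal value
        "99,  true",  // just below the maximum
        "100, true",  // maximum
        "101, false"  // just above the maximum (invalid)
    })
    void quantityBoundariesBehave(int qty, boolean expected) {
        assertEquals(expected, isValidQuantity(qty));
    }

    // Hypothetical system under test with a valid domain of 1..100.
    static boolean isValidQuantity(int qty) {
        return qty >= 1 && qty <= 100;
    }
}
```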

Equivalence class testing addresses this issue. From the same post above, equivalence class tests (or partitions, in the author’s words) divide inputs into, well, equivalence classes, or groups of input where behavior is expected to be the same. This also means that there are multiple groups of valid inputs, meaning this approach can effectively test different cases of valid inputs based on the specifications, rather than just testing if valid and invalid inputs behave as expected.

The reason why I wanted to look at these two specifically is because they are vital to understanding the decision table-based approach. I’m fairly confident in this approach because I found it fun to work with in class. It’s essentially a visualization and simplification of both boundary value and equivalence class testing, mostly equivalence class testing though, at least in my interpretation. The reason why I find it easier to work with decision tables is because they are much more efficient with regards to the space you use, even if the amount of mental work you have to do is larger.

It’s interesting because I found, in the homework at least, that writing out test cases for the non-table-based approaches was somewhat frustrating, because you have to consider each case and write it out even if several cases do the same thing. With decision tables, you can optimize values into ‘don’t cares’: if the output depends on only one of multiple inputs in a given rule, you don’t have to care which class the other values fall into. I really enjoy how this cleans up the entire process of black box testing. That being said, I understand that this can become very difficult as the complexity of a project increases.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.