Category Archives: CS-443

WEEK 8

PATH TESTING.

Path testing is an approach to testing that aims to execute every path through a program at least once. Covering all paths, however, does not guarantee that you will find every bug in the program. Path coverage testing involves a few steps:

  1. Code interpretation: carefully understand the code you want to test.
  2. Constructing a control flow graph: the graph shows nodes representing code blocks and edges for the movement of control between them.
  3. Determining the paths: follow the flow of control from its point of entry to its point of exit while considering all potential branch outcomes, including loops, nested conditions, and recursive calls. List every route, giving each path a special name or label, so you can keep track of which paths have been tested.
  4. Test case design: create a test for each path that has been determined, with inputs that will make the program take each path in turn. Make sure the test cases are thorough and cover all potential paths.
  5. Examining the results: confirm that all possible paths have been taken and that the code responds as anticipated.
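To make these steps concrete, here is a small, hypothetical Java example of my own (not drawn from the sources): a method with two decisions, followed by the labeled paths a tester would enumerate for it.

    // Hypothetical method with two sequential decisions.
    public class Discount {
        public static double price(double base, boolean member, int qty) {
            double total = base * qty;
            if (member) {        // decision 1: member discount
                total *= 0.9;
            }
            if (qty > 10) {      // decision 2: bulk rebate
                total -= 5.0;
            }
            return total;
        }
    }

    // Labeled paths through the control flow graph:
    //   P1: member=false, qty<=10  (neither branch taken)
    //   P2: member=true,  qty<=10  (discount branch only)
    //   P3: member=false, qty>10   (rebate branch only)
    //   P4: member=true,  qty>10   (both branches taken)
    // One test case per path executes every route at least once.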

Some advantages of path testing are that it helps reduce redundant tests, it focuses on the logic of the program, and it is useful in test case design. Some drawbacks are that the number of test cases grows as code complexity increases, it is difficult to create test paths if the application has highly complex code, and some test paths may skip some of the conditions in the code. There are three path testing techniques:

  • Control Flow Graph (CFG): the program is converted into a flow graph by representing the code as nodes, regions, and edges.
  • Decision-to-Decision path (D-D): the CFG can be broken into various decision-to-decision paths and then collapsed into individual nodes.
  • Independent (basis) paths: an independent path is a path through a DD-path graph that cannot be reproduced from other paths by other methods.

I chose these two resources because they go more in depth about path testing and explain it well. One of the sources covers the pros and cons of using path testing and the types of path testing, which I didn't know about before this.

References.

https://www.geeksforgeeks.org/path-testing-in-software-engineering

https://www.tutorialspoint.com/software_testing_dictionary/path_testing.htm

From the blog CS@Worcester – Site Title by lynnnsubuga and used with permission of the author. All other rights reserved by the author.

Unlocking the Power of Stubs in Software Testing


In the realm of software development, testing is a critical phase that ensures the quality and reliability of the product. This week, my exploration led me to a compelling resource that sheds light on an integral part of testing methodologies: the use of stubs. Stubs are simplified, replaceable components that mimic the behavior of real software modules, allowing testers to isolate and test individual parts of a program. The resource, an insightful article titled “A Comprehensive Guide to Stub and Mock Testing: Unveiling the Essence of Effective Software Testing,” provided a comprehensive overview and practical advice that I found particularly enlightening.

The reason I selected this resource was its direct relevance to our current course material on software testing methodologies. As we delve into the complexities of ensuring software reliability, understanding the role of stubs becomes indispensable. This article not only introduces the concept but also illustrates its application with clarity and precision, making it an invaluable tool for beginners and seasoned developers alike.

Upon reading, I was struck by the depth of information presented. The article begins by defining stubs and differentiating them from other testing techniques such as mocks and drivers. It then delves into practical scenarios where stubs can significantly enhance the testing process, such as in unit testing and integration testing. The step-by-step guide on implementing stubs, complete with examples in popular programming languages, was particularly useful.

Reflecting on the content, I realized the importance of stubs in creating a controlled test environment. By simulating specific components, stubs enable testers to pinpoint errors more efficiently and focus on testing the functionality of individual units without the complexity of the entire system. This not only streamlines the testing process but also improves the accuracy of test results.
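To picture what that looks like in practice, here is a minimal sketch of my own; the exchange-rate scenario and all names are invented for illustration, not taken from the article.

    // The real dependency: slow or unavailable during unit tests.
    interface ExchangeRateService {
        double rateFor(String currency);
    }

    // A stub: returns a canned value so the unit under test is isolated.
    class StubExchangeRateService implements ExchangeRateService {
        @Override
        public double rateFor(String currency) {
            return 1.25; // fixed, predictable answer for the test
        }
    }

    // The unit under test depends only on the interface.
    class PriceConverter {
        private final ExchangeRateService rates;
        PriceConverter(ExchangeRateService rates) { this.rates = rates; }
        double convert(double amount, String currency) {
            return amount * rates.rateFor(currency);
        }
    }

Passing new StubExchangeRateService() to PriceConverter lets a test verify convert() quickly and deterministically, without touching a real service.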

The application of what I learned from this article to my future practice is clear. I anticipate using stubs to conduct more effective and efficient testing, particularly in complex software systems where isolating components can be challenging. The hands-on examples provided will serve as a reference guide as I implement stubs in my projects.

For those interested in diving deeper into the subject, I highly recommend reading “A Comprehensive Guide to Stub and Mock Testing: Unveiling the Essence of Effective Software Testing.” This resource has significantly enhanced my understanding of stubs in software testing and equipped me with practical skills that I look forward to applying in my future endeavors.

Resource Link: https://medium.com/@fideraphael/a-comprehensive-guide-to-stub-and-mock-testing-unveiling-the-essence-of-effective-software-testing-7f7817e3eab4

As we continue to explore the vast landscape of software testing, I am excited to share more discoveries and insights. Stay tuned for more reflections and learning experiences.

From the blog CS@Worcester – Abe's Programming Blog by Abraham Passmore and used with permission of the author. All other rights reserved by the author.

Sprint retrospective

Our team has reflected on our recent project experiences and identified areas where we can strengthen our collaboration to improve project outcomes. Here’s a breakdown of what worked well and what didn’t, along with proposed changes for improvement:

Successes:

  • We successfully completed a significant portion (75%) of small tasks.
  • Our meetings were productive, fostering meaningful discussions.
  • Communication among team members was clear and effective.

Challenges:

  • Difficulty with Docker Compose and documentation tasks due to inaccurate task weighting.
  • Essential details were not easily accessible, often buried within overarching epics.
  • Task distribution among team members was uneven.
  • Overreliance on one team member for handling GitLab logistics.
  • Limited collaborative assistance on individual issues.

Proposed Team Improvements:

  • Foster a culture of teamwork by encouraging mutual assistance on tasks.
  • Assign task weights more accurately based on lessons learned from previous sprints.
  • Improve team proficiency with GitLab and provide support as needed.
  • Ensure that issue details are readily accessible within individual tasks.
  • Distribute task assignments more evenly among team members.

Proposed Individual Improvements:

  • Communicate problems to teammates or the professor promptly for timely resolution.

From the blog CS@Worcester – THE SOLID by isaacstephencs and used with permission of the author. All other rights reserved by the author.

Understanding Mock Objects


Understanding Mock Objects: A Journey from Confusion to Clarity

When I first stumbled upon the concept of “mock objects,” it was during my foray into the Extreme Programming (XP) community. The term has since become more prevalent, particularly among those versed in XP-influenced testing literature. Yet, mock objects are frequently misconstrued, often mixed up with stubs, which serve as basic aids in testing environments. This confusion is understandable, since both stand in for real collaborators during a test, but the two differ in important ways.

Mock objects represent a nuanced divergence in the realm of software testing, embodying both a shift in test result verification—state versus behavior verification—and an ideological split in testing and design methodology: classical versus mockist Test Driven Development (TDD).

Diving into Testing Styles

To elucidate, let’s consider a straightforward example: testing an order system interacting with a warehouse. In traditional state verification tests, we’re primarily concerned with the end-state of the system under test (SUT) and its collaborators after the exercise phase. Here, both the SUT (Order) and a real collaborator (Warehouse) are employed, focusing on the system’s final state to verify test success.

Conversely, tests utilizing mock objects—like those in the jMock library—adopt behavior verification, emphasizing the interactions between the SUT and its collaborators. Instead of a real warehouse, a mock warehouse is used, setting expectations for how the SUT should behave. This approach focuses not on the final state but on ensuring the SUT makes the correct calls to its collaborators.
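Fowler's article demonstrates this with jMock; the rough sketch below recreates the same behavior-verification idea using Mockito instead (a substitution on my part), with the Order and Warehouse method names paraphrased from his example rather than copied exactly.

    import static org.mockito.Mockito.*;
    import org.junit.jupiter.api.Test;

    interface Warehouse {
        boolean hasInventory(String product, int quantity);
        void remove(String product, int quantity);
    }

    class Order {
        private final String product;
        private final int quantity;
        Order(String product, int quantity) {
            this.product = product;
            this.quantity = quantity;
        }
        void fill(Warehouse warehouse) {
            if (warehouse.hasInventory(product, quantity)) {
                warehouse.remove(product, quantity);
            }
        }
    }

    class OrderTest {
        @Test
        void fillingRemovesInventoryFromWarehouse() {
            // A mock warehouse stands in for the real collaborator.
            Warehouse warehouse = mock(Warehouse.class);
            when(warehouse.hasInventory("Talisker", 50)).thenReturn(true);

            new Order("Talisker", 50).fill(warehouse);

            // Behavior verification: assert which calls the SUT made,
            // not what the final state of a real warehouse would be.
            verify(warehouse).remove("Talisker", 50);
        }
    }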

Exploring Classical vs. Mockist TDD

The distinction doesn’t stop at test execution. It extends into the philosophy behind the testing approach. Classical TDD practitioners utilize real objects where feasible, employing stubs or mocks primarily for cumbersome collaborators.

Mock objects are born from the XP community’s focus on TDD, where design evolves through test iterations. This “need-driven development,” particularly championed by mockists, advocates for outside-in programming, starting from the topmost user interface layer and working inwards, designing the system piece by piece.

Fixture Setup and Test Isolation

Fixture setup and test isolation further differentiate the two approaches. Classic TDD often involves extensive fixture setup, creating the SUT along with all necessary real collaborators. Mockist TDD, by contrast, requires only the SUT and its direct mock collaborators, potentially simplifying test setup.

Design Implications and Personal Reflections

The decision between classic and mockist TDD extends beyond mere testing strategy; it influences design philosophy and system architecture. Mockist TDD tends to encourage more decoupled, modular designs, as each component’s interactions are explicitly defined and isolated.

As someone who initially grappled with understanding mock objects, I’ve come to appreciate their value in elucidating system behaviors and fostering thoughtful design. Yet, the choice between classical and mockist TDD ultimately depends on individual project needs, team preferences, and the specific challenges at hand. By understanding the nuances between these approaches, developers can make informed decisions that best suit their projects, fostering environments where quality software can thrive.

Source link: https://martinfowler.com/articles/mocksArentStubs.html

From the blog CS@Worcester – Coding by asejdi and used with permission of the author. All other rights reserved by the author.

Another Look at Boundary Value Analysis and Equivalence Class Partitioning

Recently in CS443 – Software Quality Assurance and Testing, we’ve been learning some of the conceptual aspects of code testing that are required to identify the relevant points of programs to test as likely break points. We’ve primarily learned about Boundary Value and Equivalence Class testing strategies, so I decided to find a blog to learn more about each of these from a third-party perspective. I landed upon a (relatively) recent blog post from June 2023 on TestSigma, an automated testing platform.

The post discusses the overall importance of software testing in ensuring the functionality and reliability of software products, focusing on the defining aspects of the two methods we’ve been learning: Boundary Value Analysis (BVA) and Equivalence Class Partitioning (ECP). BVA concentrates on testing the boundaries of a system to identify vulnerabilities, while ECP groups similar items into equivalence classes, helping testers target specific areas with a higher likelihood of containing bugs.

Benefits of applying BVA and ECP in software testing include improved understanding of the system, simplified test design, better test coverage, prioritization, and risk management. The applications of these techniques extend to various scenarios, such as database testing, network testing, hardware testing, time-based functionality, and UI testing. An interesting point that the article emphasizes is that BVA and ECP are often used together, providing an example of testing a form that accepts age as a number. It suggests partitioning the age range into groups for more effective testing while also considering likely break points.
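As a rough sketch of how that age example might look in JUnit (the 18 to 60 range and the AgeValidator class are my own illustrative assumptions, not from the article):

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    class AgeValidator {
        // Hypothetical rule: the form accepts ages 18 through 60 inclusive.
        static boolean isValid(int age) {
            return age >= 18 && age <= 60;
        }
    }

    class AgeFieldTest {
        @Test
        void coversBoundariesAndPartitions() {
            // BVA: values at and just beyond each boundary.
            assertFalse(AgeValidator.isValid(17)); // just below lower bound
            assertTrue(AgeValidator.isValid(18));  // lower bound itself
            assertTrue(AgeValidator.isValid(60));  // upper bound itself
            assertFalse(AgeValidator.isValid(61)); // just above upper bound
            // ECP: one representative value per equivalence class.
            assertTrue(AgeValidator.isValid(35));  // valid class
            assertFalse(AgeValidator.isValid(5));  // "too young" class
            assertFalse(AgeValidator.isValid(90)); // "too old" class
        }
    }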

Common challenges discussed to avoid when using BVA and ECP include restricting testing to input values alone, making assumptions about limits and classes, ignoring user behavior, over-relying on these techniques, and neglecting edge cases. The post concludes by comparing BVA and ECP, highlighting their differences in testing approaches and summarizing them as thought processes that enhance testers’ understanding of the system, leading to improved test coverage and strategy.

Test automation for BVA and ECP using tools like TestSigma (or other software) is also discussed, highlighting the potential benefits of saving time, ensuring accuracy, and achieving better test coverage. However, the decision to automate tests should be made by weighing the cost and benefit of automation and set-up.

Overall, this post taught me some interesting differences between BVA and ECP and reinforced the benefits and basics we learned in class. One interesting aspect of this blog that I noticed in review is that it was written by author Apoorva Ram, a non-white woman in the computer science and specifically software engineering industry. This demographic represents a sparse minority in the computer science field and is worth recognizing alongside her contribution with this and other blogs.

Sources:

https://testsigma.com/blog/boundary-value-analysis-and-equivalence-class-partitioning

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

Mastering Advanced Unit Testing: Test Doubles and Code Coverage for Beginners

As developers, writing robust, reliable code is a top priority. And when it comes to ensuring the quality of our codebase, unit testing plays a pivotal role. However, as we delve deeper into the realm of unit testing, we encounter advanced concepts like test doubles and code coverage, which might seem intimidating at first glance. But fear not, for in this beginner’s guide, we’ll demystify these concepts and explore why they are essential for writing high-quality code.

Understanding Test Doubles

Test doubles, which come in varieties such as mocks, stubs, and fakes, are objects used in place of real dependencies in unit tests. They simulate the behavior of these dependencies, allowing us to isolate the code under test and verify its interactions with its collaborators.

For instance, imagine you’re testing a class that relies on an external API. Instead of making actual API calls, you can use a test double to mimic the API’s responses, ensuring your tests run swiftly and independently of external factors.
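As a minimal sketch of that idea (every name here is hypothetical, invented for illustration):

    // The real dependency would call a remote weather API.
    interface WeatherApi {
        int currentTempCelsius(String city);
    }

    // Test double: mimics the API's response entirely in memory.
    class FakeWeatherApi implements WeatherApi {
        @Override
        public int currentTempCelsius(String city) {
            return -5; // canned response; no network call made
        }
    }

    class FrostAlert {
        private final WeatherApi api;
        FrostAlert(WeatherApi api) { this.api = api; }
        boolean shouldWarn(String city) {
            return api.currentTempCelsius(city) <= 0;
        }
    }

A test constructing new FrostAlert(new FakeWeatherApi()) runs instantly and deterministically, with no external API involved.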

Test doubles help in:

  1. Isolation: By replacing real dependencies with test doubles, we can focus solely on testing the behavior of the unit under scrutiny without worrying about the intricacies of its collaborators.
  2. Speed: Since test doubles operate in-memory and don’t involve external resources, tests run faster, contributing to quicker feedback loops during development.
  3. Determinism: Test doubles allow us to create predictable test scenarios, ensuring consistent and reliable test results across different environments.

Code Coverage

Code coverage measures the proportion of a codebase that is exercised by automated tests. It provides insights into areas of code that lack sufficient test coverage, enabling developers to identify potential bugs and improve overall code quality.

While achieving 100% code coverage doesn’t guarantee bug-free software, it serves as a valuable metric for assessing the thoroughness of our test suite.
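Here is a small hypothetical of my own showing the kind of gap a coverage report exposes (the Shipping class is invented; JaCoCo is one example of a real Java coverage tool):

    public class Shipping {
        public static double cost(double weightKg) {
            if (weightKg <= 1.0) {
                return 4.99;            // executed by the test below
            }
            return 4.99 + weightKg;     // never executed: flagged as uncovered
        }
    }

    // Suppose the suite contains only this one assertion:
    //     assertEquals(4.99, Shipping.cost(0.5), 0.001);
    // A coverage tool such as JaCoCo would report the heavy-package
    // branch as untested, pointing directly at the missing test case.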

Code coverage aids in:

  1. Identifying Untested Code: It highlights parts of the codebase that lack test coverage, prompting developers to write additional tests for those areas, thus reducing the likelihood of undetected bugs.
  2. Improving Confidence: Higher code coverage instills confidence in the codebase, indicating that most critical paths and edge cases are adequately tested, thereby reducing the risk of regressions.
  3. Refactoring Safely: With comprehensive test coverage, developers can refactor code with confidence, knowing that any unintended changes are likely to be caught by existing tests.

In conclusion, mastering advanced unit testing techniques like test doubles and code coverage is crucial for any developer striving to deliver high-quality software. By leveraging test doubles, we can isolate units under test, while code coverage empowers us to assess the thoroughness of our test suite. Incorporating these practices into our development workflow not only enhances code quality but also fosters a culture of test-driven development, ultimately leading to more robust and maintainable software.

For further reading, check out this article from Christian Findlay on writing testable code and its importance in software development. Happy testing!

From the blog Discoveries in CS world by mgl1990 and used with permission of the author. All other rights reserved by the author.

Path Testing Demystified

Hello, It’s me, your favorite computer science student ready to once again complain about the career path I chose myself.

Today’s menu of minor headaches (I’ve got to stop using this) consists of Path Testing, which is the same as checking every corner of your room for monsters before going to bed to ensure your beauty sleep doesn’t get interrupted.

Imagine you’re playing a video game where you choose paths to reach the exit of a maze. Some paths are straightforward, others are mazes with obstacles. Path testing is the same principle but with your code. You need to check every route your code can take to catch bugs hidden off the beaten path.

Think of your code as a map, with each part representing a stop or a crossroad. The goal is to explore all the stops and paths without an endless journey. We use a Control Flow Graph as a map for your code to ensure that we are not missing any hidden detours.

To implement Path Testing you only need to follow a few key steps:

  1. Create the Control Flow Graphs: This graph maps out all the possible routes through the program.
  2. Calculate Cyclomatic Complexity: This metric is the guide for the number of test cases needed for adequate coverage.
  3. Identify Independent Paths: Determine the set of paths that cover all the edges and nodes in the graph.
  4. Design Test Cases: Create test cases that will traverse each identified path.

That’s pretty much it.

Now you may say, “But Ano, why even bother with Path Testing?” Well, Path Testing is your code’s ultimate test drive. It uncovers sneaky bugs that hide in specific conditions and gives you a deep understanding of your code, making it easier to add features without issues.

And yet there is a catch. While Path Testing may be great, it can also be tricky for complex apps. Trying to test every path can feel like planning a road trip to Mars. The key is to smartly select which paths to test, covering as much ground as possible without getting lost in the details.

Just like our previous entry on software testing, Path Testing is another secret weapon for robust code. It’s meant to bring you peace of mind, ensuring your app or program performs without any flaws. So, before you deliver or push your code, be sure to take it on this essential road trip that guarantees your code does what it is supposed to.

Till next time,

Ano out.

References:

https://www.geeksforgeeks.org/path-testing-in-software-engineering

https://www.guru99.com/basis-path-testing.html

From the blog CS@Worcester – Anairdo's WSU Computer Science Blog by anairdoduri and used with permission of the author. All other rights reserved by the author.

Black Box Testing

https://www.practitest.com/resource-center/article/black-box-vs-white-box-testing/

Black box testing is a style of testing that does not take the internal workings of the code into consideration; in other words, the internals of the code are not known to the tester, and tests are formed strictly from its external behavior. This is very different from white box testing, which operates under the idea that the tester has extensive knowledge of how a system was created along with its inner workings. Some of the differences between black and white box testing: black box testing is carried out by testers while white box testing is left to software developers; black box testing is considered behavior testing while white box testing is considered logic testing; and black box testing is typically used in system testing while white box testing is used in unit testing. The two types of testing are also similar, mainly in their purpose, which is to ensure that a system is working correctly and that you have the best version of the software available.
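To get a feel for the difference, here is a tiny sketch of a black-box style test I put together (the leap-year rule is my own example, not from the article). The test is derived entirely from the written specification, never from the code's internals:

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    class LeapYearTest {
        // In real black box testing the tester never sees this method;
        // it is included only so the example compiles.
        static boolean isLeapYear(int year) {
            return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
        }

        // Written purely from the spec: "divisible by 4, except
        // century years, except years divisible by 400."
        @Test
        void followsTheWrittenSpecification() {
            assertTrue(isLeapYear(2024));   // divisible by 4
            assertFalse(isLeapYear(1900));  // century year, not leap
            assertTrue(isLeapYear(2000));   // divisible by 400
            assertFalse(isLeapYear(2023));  // ordinary year
        }
    }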

I chose this article because I liked how it not only explained black box testing but also compared it to white box testing, allowing a deeper understanding of the relationship between the two types of testing. I am very interested in seeing how different methods of black box testing work; for the most part we have been practicing white box testing methods in class so far this semester, so the concept of testing without extensive knowledge of a system or access to its internal workings and code seems both interesting and challenging to me.

With such a large gap in knowledge of a system between the two types of testing, it seems as though black box testing is something done by people other than full-blown software developers, since developers are expected to spend their time on white box testing. It makes me wonder how exactly these tests are written, and how results are measured for success, compared to the more straightforward nature of white box testing. Since black box testing can be used for just about every type of testing, even some of the same types white box testing is used for, it would be nice to see and compare how tests are carried out under both approaches on a similar component of a system, in order to differentiate the information used by each.

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.

The Logic of Software Testing

Hello everyone,

Although we have been studying software testing for a few weeks, I still want to talk about the “logic” of software testing. Because I think it’s important, to test software comprehensively, we must understand the “logic” of testing. So I want to share a blog with you today:

“What is the Underlying Logic of the Software Testing? ”

by ZenTao

Link: https://www.zentao.pm/blog/underlying-logic-of-the-software-testing-1-1249.html

As a student new to software testing, I find the topic of test logic particularly interesting. Recognizing its critical role in ensuring software reliability and functionality, I sought out some resources that could help me develop an understanding of the underlying logic behind the testing process. This blog post does a great job of explaining this to me, providing insight into the complex mechanisms that support effective software testing.

In the first chapter of the article, we learned about the basic requirements for software testing:

  • Software testing is to verify whether the functional characteristics of software meet the requirements;
  • Software testing is to find the defects in the software;
  • Software testing includes static testing – requirement, design, and code review;
  • Software testing is to systematically and completely evaluate the quality of software products and provide quality information;
  • Software testing is to expose and reveal product quality risks;
  • Software testing is not only a technical activity but also a comprehensive social and psychological activity;
  • Software testing is to greatly reduce the cost of poor quality by investing in quality assurance costs.

These guidelines will play an important role in our future software testing path; all our testing will be conducted based on them. The article also mentioned working “based on the understanding of the real needs of users, to obtain the true and comprehensive quality information of software products through various means.” If we become professional software testers in the future, then during the testing process we must always ask ourselves whether we are being comprehensive.

What impressed me most was the second chapter in the blog, “The Underlying Logic of the Software Testing,” and its three questions:

  • Why?
  • What?
  • How?

Why: Because humans are not machines, and even machines can make mistakes, human work will inevitably contain errors and imperfections for various reasons (personal habits, time, abilities, etc.).

What: Be clear about the goals, scope, and specific data of the test. Prioritize your testing and do not do tests that are not relevant to your goals.

How: Sometimes obtaining existing test data will help us speed up and improve the efficiency of testing.

Overall, this blog provides a great foundation for software testing. During our testing process, we need to ask ourselves whether the problems above have been addressed.

From the blog CS@Worcester – Ty-Blog by Tianyuan Wang and used with permission of the author. All other rights reserved by the author.

Code Reviews

Reviewing your code is important: it makes sure that the code is correct and functions the way you want it to. Code review helps cover many areas of your code. You want to make sure that your code is clear and understandable for anybody else who looks at it, and by reviewing it you can go back and fix anything that could be making it more complicated. Code review is especially important when working with a team, since your team should understand what your code is trying to do without being confused. So much of your work can change throughout the coding process, which is normal, but with so many changes happening it can get confusing. Code review helps pick out which parts are important, and it is a helpful practice for making sure you have clean code and that, when working with a team, everyone’s comments are heard and changes are made.

Code reviews can be tough if the team isn’t together, because questions and comments can’t be answered. Another risk is overlapping code: missing a team meeting can leave team members confused about what they should be writing, ending in similar code. If that happens, the next person reviewing the code will find duplicates, and the team’s work will fall behind. Making sure your team communicates will help prevent more code from having to be fixed and reviewed, so you don’t add more work on top of what you already have. Code review makes sure your work is good quality and organized; sloppy issues and mistakes don’t look good for you, your team, or your job. Especially as a team, you want to meet those standards and follow them. Always review your code, because there could be small errors or even confusing comments that make it more complicated than it needs to be.

https://blog.pragmaticengineer.com/good-code-reviews-better-code-reviews/

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.