Category Archives: Week-14

Exploring Testing Techniques

Testing is an essential aspect of software development, ensuring that our applications meet quality standards and perform as expected. However, the world of testing can be vast and intricate, with various techniques and methodologies to choose from. In this guide, we will delve into six key testing approaches: Pairwise, Combinatorial, Mutation, Fuzzing, Stochastic, and Property-Based Testing.

Pairwise Testing:

Pairwise testing, also known as all-pairs testing, is a method for testing the interactions between pairs of input parameters. By selecting a small set of test cases that covers every possible pair of parameter values, this technique efficiently identifies interaction defects without exhaustive testing. It’s particularly useful when dealing with large input domains. For more information, you can explore Software Testing Fundamentals.
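
To make this concrete, here is a minimal sketch in JUnit 5 (launchApp and all parameter values are hypothetical, not from the article): three two-valued parameters would need 2 × 2 × 2 = 8 exhaustive combinations, but the four rows below cover every pair of values.

```java
// Pairwise sketch: 4 hand-picked cases cover every pair of values across
// three two-valued parameters (exhaustive testing would need 8 cases).
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertTrue;

class PairwiseConfigTest {

    @ParameterizedTest
    @CsvSource({
        "windows, chrome,  mysql",
        "windows, firefox, postgres",
        "linux,   chrome,  postgres",
        "linux,   firefox, mysql"
    })
    void appStartsUnderEachPairOfSettings(String os, String browser, String db) {
        // Every pair of values (os/browser, os/db, browser/db) appears
        // in at least one row above.
        assertTrue(launchApp(os, browser, db));
    }

    // Hypothetical stand-in for the real system under test.
    private boolean launchApp(String os, String browser, String db) {
        return true;
    }
}
```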

Combinatorial Testing:

Combinatorial testing extends pairwise testing by considering interactions among multiple parameters simultaneously. Instead of testing every possible combination, it focuses on covering a representative subset of combinations. This approach helps in reducing the number of test cases required while still providing comprehensive coverage. Learn more at National Institute of Standards and Technology (NIST).
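
As a rough illustration of why this pays off (a sketch, not output from a real tool such as NIST’s ACTS), the snippet below counts the exhaustive combinations for five small parameter domains; a pairwise covering array for the same domains needs at least 12 rows (the product of the two largest domain sizes), far fewer than the full product.

```java
// Why combinatorial (t-way) testing pays off: the full Cartesian product
// grows multiplicatively, while a t-way covering array only needs enough
// rows to hit every t-way combination of values.
import java.util.List;

public class CombinationCount {
    public static void main(String[] args) {
        // Number of possible values for each of five parameters (illustrative).
        List<Integer> domainSizes = List.of(3, 3, 4, 2, 2);
        long exhaustive = domainSizes.stream()
                                     .mapToLong(Integer::longValue)
                                     .reduce(1L, (a, b) -> a * b);
        System.out.println("Exhaustive combinations: " + exhaustive); // prints 144
        // A pairwise (2-way) covering array for these domains needs at least
        // 3 * 4 = 12 rows; tools such as NIST's ACTS compute near-minimal ones.
    }
}
```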

Mutation Testing:

Mutation testing involves making small modifications (mutations) to the source code and running the test cases to check whether these mutations are detected. It assesses the effectiveness of a test suite by measuring its ability to detect changes in the code. By simulating faults in the program, mutation testing helps identify weaknesses in test cases. You can find further insights at Stryker.
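
As a minimal sketch of the idea in JUnit 5 (mutation tools such as PIT for Java, or Stryker for JavaScript, apply these operator flips automatically; the code here is illustrative):

```java
// A mutation tool makes a small change to the code under test and reruns
// the suite. If every test still passes, the mutant "survives" and exposes
// a gap in the tests.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class MutationSketchTest {

    int max(int a, int b) {
        return a > b ? a : b;              // original code
        // mutant the tool might generate: return a < b ? a : b;
    }

    @Test
    void killsTheNegatedConditionalMutant() {
        // Under the mutant, max(5, 3) would return 3, so this assertion
        // fails and the mutant is "killed": evidence the test has power.
        assertEquals(5, max(5, 3));
    }
}
```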

Fuzzing:

Fuzz testing, or fuzzing, is a technique where inputs are fed into a system in an automated and randomized manner, often with invalid, unexpected, or malformed data. The goal is to uncover vulnerabilities such as crashes, memory leaks, or security flaws that may not be apparent under normal testing conditions. To explore more about fuzzing, visit OWASP.
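
Here is a hand-rolled sketch of the core loop in plain Java (real fuzzers such as AFL or Jazzer add coverage feedback and input mutation; parse is a hypothetical stand-in for the code under test):

```java
// A tiny fuzzing loop: feed randomized bytes at a parser and watch for
// unexpected crashes, as opposed to clean rejections of bad input.
import java.util.Random;

public class MiniFuzzer {
    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed so failures reproduce
        for (int i = 0; i < 100_000; i++) {
            byte[] input = new byte[rng.nextInt(64)];
            rng.nextBytes(input);
            try {
                parse(input);
            } catch (IllegalArgumentException expected) {
                // cleanly rejecting malformed input is acceptable behavior
            } catch (RuntimeException crash) {
                System.err.println("Crash on input #" + i + ": " + crash);
            }
        }
    }

    // Hypothetical parser under test.
    static void parse(byte[] input) {
        if (input.length == 0) throw new IllegalArgumentException("empty");
        // placeholder for real parsing logic
    }
}
```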

Stochastic Testing:

Stochastic testing involves using random or probabilistic techniques to generate test cases. Unlike deterministic testing, where inputs are predefined, stochastic testing introduces variability, mimicking real-world scenarios. It’s particularly useful in systems where inputs are inherently unpredictable or when exhaustive testing is impractical. Dive deeper into stochastic testing at Investopedia.
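
A small sketch of what this might look like in Java; the 90/10 request-size split and handleRequest are illustrative assumptions, not from any source. Logging the seed keeps the random run reproducible.

```java
// Stochastic testing sketch: draw inputs from a distribution that mimics
// real traffic rather than using fixed, predefined inputs.
import java.util.Random;

public class StochasticLoadSketch {
    public static void main(String[] args) {
        long seed = System.nanoTime();
        System.out.println("seed=" + seed); // log so failures can be replayed
        Random rng = new Random(seed);
        for (int i = 0; i < 1_000; i++) {
            // ~90% small requests, ~10% large ones: an assumed stand-in
            // for an observed production distribution.
            int size = rng.nextDouble() < 0.9 ? rng.nextInt(100)
                                              : rng.nextInt(100_000);
            handleRequest(size); // hypothetical system under test
        }
    }

    static void handleRequest(int size) { /* placeholder */ }
}
```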

Property-Based Testing:

Property-based testing focuses on defining properties or specifications that the system should satisfy and then generating test cases automatically to verify these properties. Instead of specifying individual test cases, developers define general rules, and the testing framework generates inputs to validate these rules. Learn more about property-based testing from Medium.
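
Here is a hand-rolled sketch of the idea in plain Java (frameworks such as jqwik for Java or QuickCheck for Haskell automate the generation and shrink failing inputs): for any list, reversing it twice should yield the original.

```java
// Property-based sketch: instead of fixed cases, check a general rule
// ("reverse twice is the identity") against many generated inputs.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ReverseProperty {
    public static void main(String[] args) {
        Random rng = new Random(7);
        for (int trial = 0; trial < 1_000; trial++) {
            // Generate a random list of random length.
            List<Integer> original = new ArrayList<>();
            int n = rng.nextInt(50);
            for (int i = 0; i < n; i++) original.add(rng.nextInt());

            List<Integer> copy = new ArrayList<>(original);
            Collections.reverse(copy);
            Collections.reverse(copy);

            if (!copy.equals(original)) {
                throw new AssertionError("Property failed for: " + original);
            }
        }
        System.out.println("Property held for 1000 random lists");
    }
}
```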

In conclusion, understanding different testing techniques empowers software developers and testers to choose the most appropriate methods for their projects. Whether it’s ensuring thorough coverage, detecting defects, or improving resilience, these approaches play a crucial role in delivering high-quality software products.

From the blog Discoveries in CS world by mgl1990 and used with permission of the author. All other rights reserved by the author.

Use Of AI in Software Testing


The recent explosion of AI has reached almost every industry. It has become something of a buzzword, with many companies loudly proclaiming how they are using the emergent technology to benefit their customer bases. ChatGPT and other kinds of AI have already started creating all sorts of problems in academic settings, giving many students an easy out on writing essays. Not only that, but AI is also now being cited as one of the main driving forces behind massive layoffs within the tech industry and beyond.

All of that being said, how can AI be utilized to improve software testing? I know that trying to think of ways for AI to replace even more jobs within the software industry can be a bit jarring right after bringing up the problems it has already created, but I wanted to look into how the future may look if we were to use this technology to expedite the testing process. It is entirely possible that we could teach proper testing etiquette to an AI model and have it automatically produce test cases. Future IDEs could add an auto-generated test file feature to help developers quickly create basic test cases. Well, I didn’t have to speculate for long: one Google search later, I had already found a website for using AI to create test cases. This does pose a rather worrying question about the speed at which AI is developing and whether our modern society can keep up with it, but I would rather not dwell on such topics. There have also been concerns about the proliferation of AI potentially poisoning the well of data that these models train on, and I do believe certain measures will need to be taken to prevent another event like the dot-com bubble burst from happening again today.

https://www.taskade.com/generate/programming/test-case 


Another proposed use case for artificial intelligence is the generation of “synthetic data”: data created to mimic real-life data in order to test and train programs. DataCebo is one such company; its system, called the Synthetic Data Vault, or SDV for short, is typically sold to data scientists, health care companies, and financial companies. The purpose of creating realistic synthetic data is so companies can train programs on a wide range of scenarios without relying on historical data, which is limited to what has already happened. It also sidesteps the privacy issues that come with companies using people’s private data unethically.

https://news.mit.edu/2024/using-generative-ai-improve-software-testing-datacebo-0305

From the blog CS@Worcester Alejandro Professional Blog by amontesdeoca and used with permission of the author. All other rights reserved by the author.

Integration Testing

After recently reading up on unit testing and having some exposure to it in class, I figured the next step would be to look at integration testing. With plenty of blogs to choose from, all giving pretty similar information on integration testing, I chose to highlight the blog published by Katalon (https://katalon.com/resources-center/blog/integration-testing). This blog digs deeper into certain aspects of integration testing, such as explaining some differences between low- and high-level modules. This extra insight helped answer a lot of the questions I had while reading other blogs on the topic.

Integration testing is a software testing method that involves combining individual units together and testing how they interact. The main purpose of integration testing is to ensure that each module works together as intended and does not create any unforeseen bugs. Each module should be individually unit tested before integration testing.

Integration testing helps identify bugs that may not be seen in unit testing. These bugs can come from a multitude of places: inconsistent logic between different programmers, modified code, data being transferred incorrectly, poor exception handling, and incompatible versions may all lead to bugs that only appear during integration testing. Integration testing helps catch these bugs and, depending on which model is being implemented, can make it much easier to localize them.

There are two main approaches to performing integration testing: the big bang approach and the incremental approach. The big bang approach involves integrating and testing all modules at once. This is great for small systems without much complexity; however, it makes finding the location of a bug harder in a complex system. When systems become larger and more complex, it may be best to switch to incremental testing, an integration testing method where modules are combined and tested in smaller groups. These groups continue to combine with other groups until the whole system is tested. Three common incremental integration testing methods are the bottom-up approach, the top-down approach, and a hybrid approach. Bottom-up starts by integrating the lowest-level modules together before adding higher-level modules. Top-down does the opposite: it starts by integrating the highest-level modules before adding the lower ones in as tests pass. The hybrid, also known as the sandwich approach, is a combination that may alternate between testing the top-level and bottom-level components.
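
As a sketch of the bottom-up style described above (all class names are hypothetical, not from Katalon’s blog), the JUnit 5 test below wires a small service to an already unit-tested in-memory repository and checks that data survives the trip across the module boundary:

```java
// Bottom-up integration sketch: the low-level module (repository) is
// integrated into the higher-level module (service), and the pair is
// tested together rather than in isolation.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;
import java.util.HashMap;
import java.util.Map;

class UserServiceIntegrationTest {

    interface UserRepository {
        void save(String id, String name);
        String find(String id);
    }

    static class InMemoryUserRepository implements UserRepository {
        private final Map<String, String> store = new HashMap<>();
        public void save(String id, String name) { store.put(id, name); }
        public String find(String id) { return store.get(id); }
    }

    static class UserService {
        private final UserRepository repo;
        UserService(UserRepository repo) { this.repo = repo; }
        String register(String id, String name) {
            repo.save(id, name.trim()); // data crosses the module boundary here
            return repo.find(id);
        }
    }

    @Test
    void serviceAndRepositoryAgreeOnSavedData() {
        UserService service = new UserService(new InMemoryUserRepository());
        // Bugs like mis-transferred data between modules surface here,
        // even when each module passed its own unit tests.
        assertEquals("Ada", service.register("u1", " Ada "));
    }
}
```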

From the blog CS@Worcester – CS Learning by kbourassa18 and used with permission of the author. All other rights reserved by the author.

AI In Testing

For the summer, I will be working on a project revolving around machine learning and AI. I have a distinct interest in AI, and I have often wondered how it may affect the jobs of computer scientists. Testing specifically has been moving toward automation, but there is still the matter of the human intervention it requires. This gave me a lot of interesting thoughts about how testing might be COMPLETELY automated using AI as the intermediary. My most-used blog for testing information, StickyMinds, had an article about this exact thought: Examining the Impact of AI on Software Testing: A Conversation with Jeff Payne and Jason Arbon by Todd Kominiak.

The integration of AI into various industries, including software development and testing, has sparked discussion about its potential impact. This article is an interview about the software testing community and its mix of anticipation and apprehension over how AI will reshape roles and practices. Jeff Payne, CEO of Coveros, and Jason Arbon, CEO of TestersAI, were interviewed to delve into the implications of AI for software testing. Arbon challenges the notion that AI will make testers obsolete, and this is a sentiment I agree with. He argues that AI will instead increase the importance of testers in ensuring software quality, as AI can often err. Arbon states that the current response from testers toward AI has been somewhat ‘lackluster’, with many primarily experimenting with AI tools rather than fully utilizing their potential. One key point Arbon offers is that testers will need to reevaluate their approach to testing if AI emerges as a driving force in it. Even as AI technology evolves to automate certain testing tasks, human testers will be required to ensure the reliability and appropriateness of software solutions. This is something I think will be essential for all professions that go the route of automation through AI. Arbon further suggests that the complexity and scale of software testing will increase alongside the improvement of AI-driven testing. In the future, he envisions a pivotal role change for testers, wherein they ask essential questions about the structure and content of the code beyond mere functionality. This would also be beneficial for addressing diversity concerns in coding and keeping AI bias in check.

This interview opened my eyes greatly about software testing. It changed not just how I think AI will alter the roles of testers, but how I think about testers as a whole. Going into this article I held the belief that a tester’s role was to assure the functionality of the code; being able to also address ethical concerns throughout the code is another role entirely. I am less apprehensive now about the role of AI in testing, and in software development for that matter, and I am more optimistic to see how my job will evolve as time goes on.

Source: https://www.stickyminds.com/interview/examining-impact-ai-software-testing-conversation-jeff-payne-and-jason-arbon

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

Technical Review

When working on a big project, on a big team, with a lot of other people working together, things can become confusing. Sometimes code may not work, or you will not entirely understand what is happening. The best thing to do is step back and conduct a technical review. Review the code, the goals, and any other areas that may be in need of improvement or assistance. Take the time to straighten things out so you and the team can get back to work efficiently and effectively. But how do you go about a technical review?

In this blog post, Tony Karrer talks about what a technical review is, some ways to identify when you need one, some strategies, and some areas of review. He describes a technical review as “a deep-dive assessment of your software, infrastructure, team and processes,” and says that “it provides findings and recommendations intended to foster a mutual understanding between business and software leaders, shedding light on the current state of your technology and your team.” Some signs that you are in need of a technical review are slow or late delivery, random or persistent bugs, and sleepless nights from strategic worries. However, a technical review shouldn’t just be a response to malfunction; it can be prompted by scaling and new markets, keeping up with competition, outgrowing your stack, changes to the tech team, or simply wanting to be on top of things. Karrer describes four strategies, each different from the others, ranging from general to specific and in-depth: straightforward analysis, pragmatic assessment, expert recommendations, and finding sessions. After determining which strategy you are comfortable with, you can go ahead and start reviewing. He provides some examples of where you may want to review, including background information to get a general idea of the project, architecture, targeted code, or process and team. If there are areas that need work or are struggling, then that is one hundred percent a spot you want to review. While you are doing that, create a summary and list your findings, and include some recommendations or solutions if you have any. Finally, bring them to the review meeting, where you will review the project together with your team, sort out the issues, and find possible solutions.

Doing this in class was actually fairly helpful. I feel like if it had been my code, I would have found more benefit in it, but I understand the premise: it’s good to have multiple people look at the code, come together, and see what kinds of issues we found.

From the blog CS@Worcester – Cao's Thoughts by antcao and used with permission of the author. All other rights reserved by the author.

Testable code

In my last post, I highlighted a blog that gives an overview of unit testing and how we can use it to increase the quality and efficiency of our testing or debugging processes. The point of unit testing is to test “units” or isolated methods, classes, or modules of our code to determine if there are any issues. Writing code with unit testing in mind makes it simpler for developers to debug their code. I felt this next blog, “TDD: Writing Testable Code,” by Eric Elliott, would help further readers’ understanding of writing this kind of code and the benefits that come from it.

Elliott’s blog discusses many aspects of writing testable code, including tight coupling, test-driven development, separation of concerns, and an overview of different data management strategies. He describes how tight coupling limits testability and provides strategies to reduce it, including TDD. He also discusses the benefits of testing first vs. testing after, with test-first being the main focus of test-driven development. He goes on to describe data management strategies.

Reading this post brought my attention to an important aspect of software development: fostering a developer culture that improves the quality of our software as a whole. Writing with testability in mind makes our code easier for others to work with, increases its adaptability, and allows us to fix issues without revamping the whole system.

The breakdown of tight coupling and the different forms it can take was comprehensive, giving me a straightforward explanation of what I should consider when writing code. Elliott gives 11 forms and causes of tight coupling, including parent class dependencies, temporal coupling, and event chains. TDD was something I was already aware of, but the benefits of using it, and the costs of not using it, were still insightful for how we should take our development step by step, failure by failure, until we have testable code. The separation of concerns discussion was also interesting. He says we should isolate our code by concern, including business and state logic, user interface, and I/O & effects; separating these into separate modules allows us to test and understand each independently.
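
As one concrete illustration of breaking such a dependency (a sketch with invented names, not code from Elliott’s post), the class below receives a clock through its constructor instead of reading the system time directly, so a test can inject a fixed instant and assert on deterministic output:

```java
// Loosening tight coupling via dependency injection: the report generator
// depends on a small Clock interface (a "seam") rather than the system clock.
import java.time.Instant;

public class ReportGenerator {

    interface Clock { Instant now(); }       // seam for injecting time

    private final Clock clock;

    public ReportGenerator(Clock clock) {    // dependency injected,
        this.clock = clock;                  // not hard-coded
    }

    public String header() {
        return "Report generated at " + clock.now();
    }

    public static void main(String[] args) {
        // Production wiring uses the real system clock...
        ReportGenerator prod = new ReportGenerator(Instant::now);
        System.out.println(prod.header());
        // ...while a test injects a fixed instant for a deterministic check.
        ReportGenerator test = new ReportGenerator(() -> Instant.EPOCH);
        System.out.println(test.header()); // always "Report generated at 1970-01-01T00:00:00Z"
    }
}
```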

I plan to consider all of these strategies when developing projects in the future. I will use test-driven development and limit the tight coupling of my modules, classes, methods, etc., to ensure that my code is readable and testable and that each of its concerns is independent of the others.

From the blog CS@Worcester – KindlCoding by jkindl and used with permission of the author. All other rights reserved by the author.

Blog #4: First Exposure to Testing

Before being exposed to JUnit, my only experience with automated testing was through CxxTest while I was learning C++. Once I started to learn JUnit, both the syntax and general format seemed to ring a bell. This prompted me to check back through my previous C++ programs, where I found that the assertion-based testing was nearly identical to JUnit’s. After seeing the two side by side, I was curious about how these testing frameworks compare and whether CxxTest has any advantages over JUnit.

While looking for an article discussing the full capabilities of CxxTest, I stumbled upon a blog post, Exploring the C++ Unit Testing Framework Jungle by user @noel_llopis, which provides extensive explanations of each popular C++ testing framework at the time. Do note that this post was written in 2010, so popular frameworks from then may have faded into obscurity and new frameworks may be used in their place. My main draw to this article was Llopis’s section describing his experience with CxxTest and how testing frameworks required a little more work from the user back in 2010. Llopis praised CxxTest for its relative simplicity in how it is imported into a program and how it requires far fewer dependencies. From his explanation, I learned that testing frameworks used to require certain formatting within the file and potentially other libraries for the tests to function. CxxTest, similar to JUnit, can operate by itself with far fewer dependencies than its competitors (at the time). One feature CxxTest contains natively that JUnit lacks is the ability to mock objects: JUnit can gain this ability, but only by adding another resource, whereas CxxTest has this functionality immediately. One downside the author does mention is that CxxTest required “use of a scripting language as part of the build process” (Llopis), which may create a barrier to entry for less experienced developers.
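
For reference, this is roughly what the JUnit 5 side of that comparison looks like; CxxTest’s equivalent assertion, TS_ASSERT_EQUALS(add(2, 2), 4), reads almost identically (add is an illustrative function, not from either source).

```java
// A minimal JUnit 5 test, mirroring the assertion style CxxTest uses
// with its TS_ASSERT_EQUALS macro.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class AdditionTest {

    int add(int a, int b) { return a + b; }

    @Test
    void addsTwoNumbers() {
        assertEquals(4, add(2, 2)); // expected value first in JUnit
    }
}
```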

While comparing these two testing frameworks, I found myself asking a new question: how accessible or inaccessible were the testing frameworks of the past? Llopis seemed enthusiastic about features I had held to be common to all frameworks. Additionally, writing about this made me wish I had spent more time programming in C++ outside of classes. Reading this did help expand my knowledge of how CxxTest operates, so when I inevitably go back to refine my C++ skills I’ll be ready to pick up this framework once more. Between JUnit and CxxTest, there are many surface-level similarities, as both are unit testing frameworks. The differences seem to lie in smaller features that some developers may depend on, such as mocking. Having experience in both, I find it hard to choose one or the other, as they both generally function the same and have similar levels of accessibility.

-AG

Source: https://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.