Category Archives: CS-443

Use Of AI in Software Testing


 The recent explosion of AI has reached almost every industry.
It has become something of a buzzword, with many companies loudly
proclaiming how they are using the emergent technology to benefit
their customers. ChatGPT and other AI tools have already
started creating all sorts of problems in academic settings, giving
many students an easy out on writing essays. Not only that, but AI is also
now cited as one of the main driving forces behind massive
layoffs within the tech industry and beyond.

All of that being said, how can AI be utilized to improve software testing?
I know that trying to think of ways for AI to replace even more
jobs within the software industry can be a bit jarring after bringing up the
problems it has already created, but I wanted to look into how the future
might look if we used this technology to expedite the testing
process. It is entirely possible that we could teach proper testing
practices to an AI model and have it automatically produce test cases.
Future IDEs could add an auto-generated test file feature to
help developers quickly create basic test cases. Well, I didn’t have to
speculate for long: one Google search later, I had already found a website
that uses AI to create test cases. This does pose a rather worrying
question about the speed at which AI is developing and whether modern
society can keep up with it, but I would rather not dwell on such topics.
There have also been concerns about the proliferation of AI-generated
content poisoning the well of data that models train on, and I believe
certain measures will need to be taken to prevent an event like the
dot-com bubble burst from happening again.
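To make the idea concrete, here is a minimal sketch of the kind of basic test file such a tool or IDE feature might generate. The `calculate_discount` function and all names here are hypothetical, invented purely for illustration:

```python
# Illustrative sketch: a basic auto-generated test file for a hypothetical
# calculate_discount() function. Run with: python -m unittest <filename>
import unittest


def calculate_discount(price, percent):
    """Hypothetical function under test."""
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid input")
    return price * (1 - percent / 100)


class TestCalculateDiscount(unittest.TestCase):
    def test_typical_case(self):
        self.assertAlmostEqual(calculate_discount(100, 25), 75.0)

    def test_zero_discount(self):
        self.assertAlmostEqual(calculate_discount(50, 0), 50.0)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            calculate_discount(10, 150)
```

Even if an AI drafted such a file, a developer would still need to review it: generated tests can assert the wrong behavior just as confidently as the right one.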

https://www.taskade.com/generate/programming/test-case 


 Another proposed use case for artificial intelligence is the
generation of “synthetic data”: data created to mimic real-life data in
order to test and train programs. DataCebo is one such company; its
system, the Synthetic Data Vault (SDV), is typically sold to
data scientists, health care companies, and financial firms. The purpose
of creating realistic synthetic data is to let companies exercise programs
in a wide range of scenarios without relying on historical data, which is
limited to what has already happened. It also sidesteps the privacy issues
of companies using people’s private data unethically.
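The concept can be illustrated with a toy generator. This is not DataCebo's SDV API, just a minimal sketch: pretend the distribution parameters below were learned from a real, private dataset, then sample fake records that follow them:

```python
# Toy illustration of synthetic data (NOT the SDV library's API):
# sample fake "patient" records whose fields follow distributions that
# would, in a real system, be estimated from private historical data.
import random

# Pretend these statistics were learned from a real dataset.
AGE_MEAN, AGE_STDDEV = 52.0, 14.0
DIAGNOSES = ["healthy", "hypertension", "diabetes"]
DIAGNOSIS_WEIGHTS = [0.6, 0.25, 0.15]


def synthesize_record(rng):
    """Draw one synthetic record from the learned distributions."""
    return {
        "age": max(0, round(rng.gauss(AGE_MEAN, AGE_STDDEV))),
        "diagnosis": rng.choices(DIAGNOSES, weights=DIAGNOSIS_WEIGHTS)[0],
    }


def synthesize_dataset(n, seed=0):
    rng = random.Random(seed)
    return [synthesize_record(rng) for _ in range(n)]
```

The synthetic records can then be fed to a program under test without ever exposing a real person's data.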

https://news.mit.edu/2024/using-generative-ai-improve-software-testing-datacebo-0305

From the blog CS@Worcester Alejandro Professional Blog by amontesdeoca and used with permission of the author. All other rights reserved by the author.


Exploring Testing Techniques

Testing is an essential aspect of software development, ensuring that our applications meet quality standards and perform as expected. However, the world of testing can be vast and intricate, with various techniques and methodologies to choose from. In this guide, we will delve into six key testing approaches: Pairwise, Combinatorial, Mutation, Fuzzing, Stochastic, and Property-Based Testing.

Pairwise Testing:

Pairwise testing, also known as all-pairs testing, is a method used to test the interactions between pairs of input parameters. By selecting a minimal set of test cases that cover all possible combinations of pairs, this technique efficiently identifies defects without exhaustive testing. It’s particularly useful when dealing with large input domains. For more information, you can explore Software Testing Fundamentals.
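To make this concrete, here is a minimal greedy sketch of pairwise selection. It is not an optimal covering-array algorithm, and the parameter names are purely illustrative; it just keeps picking whole combinations until every pair of parameter values has appeared in at least one test case:

```python
# Greedy sketch of pairwise (all-pairs) test selection.
from itertools import combinations, product


def uncovered_pairs(case, params, covered):
    """Pairs of (parameter, value) settings in `case` not yet covered."""
    items = list(zip(params, case))
    return {frozenset(p) for p in combinations(items, 2)} - covered


def pairwise_suite(parameters):
    """Greedily pick whole combinations until all value pairs are covered."""
    params = list(parameters)
    all_cases = list(product(*(parameters[p] for p in params)))
    remaining = set()                      # every pair we must cover
    for case in all_cases:
        remaining |= uncovered_pairs(case, params, set())
    covered, suite = set(), []
    while covered != remaining:
        best = max(all_cases,
                   key=lambda c: len(uncovered_pairs(c, params, covered)))
        covered |= uncovered_pairs(best, params, covered)
        suite.append(dict(zip(params, best)))
    return suite


# Illustrative parameters: exhaustive testing needs 2 * 3 * 2 = 12 cases.
params = {"browser": ["chrome", "firefox"],
          "os": ["linux", "macos", "windows"],
          "locale": ["en", "de"]}
suite = pairwise_suite(params)  # covers every pair with far fewer cases
```

With three parameters of sizes 2, 3, and 2, the greedy suite covers all pairs in roughly half the 12 exhaustive combinations, and the savings grow rapidly as more parameters are added.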

Combinatorial Testing:

Combinatorial testing extends pairwise testing by considering interactions among multiple parameters simultaneously. Instead of testing every possible combination, it focuses on covering a representative subset of combinations. This approach helps in reducing the number of test cases required while still providing comprehensive coverage. Learn more at National Institute of Standards and Technology (NIST).

Mutation Testing:

Mutation testing involves making small modifications (mutations) to the source code and running test cases to check if these mutations are detected. It assesses the effectiveness of test suites by measuring their ability to detect changes in the code. By simulating faults in the program, mutation testing helps in identifying weaknesses in test cases. You can find further insights at Stryker.
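Here is a hand-made sketch of the idea. Real tools such as Stryker or mutmut generate mutants automatically; this example just shows one mutant "surviving" a weak test suite, which is exactly the signal mutation testing provides:

```python
# Sketch of mutation testing: mutate one operator and see whether the
# test suite notices. The mutant below is hand-made for illustration.

def is_adult(age):           # original code under test
    return age >= 18


def is_adult_mutant(age):    # mutant: '>=' changed to '>'
    return age > 18


def run_suite(fn):
    """A deliberately weak test suite with no boundary-value check."""
    return fn(30) is True and fn(5) is False


# The weak suite passes for both versions, so the mutant "survives",
# revealing that the suite is missing a test at the boundary age == 18.
survives = run_suite(is_adult) and run_suite(is_adult_mutant)
```

Adding a test for `age == 18` would "kill" this mutant, since the original returns `True` at the boundary while the mutant returns `False`.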

Fuzzing:

Fuzz testing, or fuzzing, is a technique where inputs are fed into a system in an automated and randomized manner, often with invalid, unexpected, or malformed data. The goal is to uncover vulnerabilities such as crashes, memory leaks, or security flaws that may not be apparent under normal testing conditions. To explore more about fuzzing, visit OWASP.
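A tiny fuzzing loop can be sketched in a few lines. The toy parser below (an invented example, not from any real library) has a deliberate bug: it indexes the first character without checking for empty input, and random inputs flush it out:

```python
# Minimal fuzzing sketch: hammer a parser with short random strings and
# collect any exception other than the expected ValueError.
import random


def parse_age(text):
    """Toy parser under test. Deliberate bug: indexing text[0] without
    checking for empty input raises IndexError on the empty string."""
    sign = -1 if text[0] == "-" else 1
    return sign * int(text.lstrip("-"))


def fuzz(target, runs=1000, seed=0):
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = "".join(chr(rng.randrange(32, 127))
                       for _ in range(rng.randrange(0, 12)))
        try:
            target(data)
        except ValueError:
            pass                          # graceful rejection: fine
        except Exception as exc:          # anything else is a finding
            crashes.append((data, repr(exc)))
    return crashes


crashes = fuzz(parse_age)  # the empty-string crash shows up here
```

A hand-written test suite might never think to pass an empty string, but a fuzzer stumbles onto it almost immediately.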

Stochastic Testing:

Stochastic testing involves using random or probabilistic techniques to generate test cases. Unlike deterministic testing, where inputs are predefined, stochastic testing introduces variability, mimicking real-world scenarios. It’s particularly useful in systems where inputs are inherently unpredictable or when exhaustive testing is impractical. Dive deeper into stochastic testing at Investopedia.

Property-Based Testing:

Property-based testing focuses on defining properties or specifications that the system should satisfy and then generating test cases automatically to verify these properties. Instead of specifying individual test cases, developers define general rules, and the testing framework generates inputs to validate these rules. Learn more about property-based testing from Medium.
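In Python this is usually done with a framework like Hypothesis, which also shrinks failing inputs to minimal examples. The hand-rolled sketch below shows the core idea without any dependency: state properties of `sorted()` and check them against many random inputs:

```python
# Hand-rolled property-based check (frameworks like Hypothesis automate
# this). Properties of sorting: idempotent, ordered, reverse-consistent.
import random


def check_sort_properties(trials=200, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-1000, 1000)
              for _ in range(rng.randrange(0, 50))]
        out = sorted(xs)
        assert sorted(out) == out                          # idempotent
        assert all(a <= b for a, b in zip(out, out[1:]))   # ordered
        assert sorted(xs, reverse=True) == out[::-1]       # reverse-consistent
    return True
```

Note that no individual test case is written down; the rules themselves are the specification, and the generator supplies the inputs.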

In conclusion, understanding different testing techniques empowers software developers and testers to choose the most appropriate methods for their projects. Whether it’s ensuring thorough coverage, detecting defects, or improving resilience, these approaches play a crucial role in delivering high-quality software products.

From the blog Discoveries in CS world by mgl1990 and used with permission of the author. All other rights reserved by the author.

Decoding Software Technical Reviews: A Practical Guide for Developers

Hello again and welcome back to this week’s update. Today, I want to discuss something very critical in software development but perhaps many find it quite intimidating (me included) – Software Technical Review. For any developer, whether they are a professional coder or just a beginner, getting familiar with this process will completely change the game. So, grab your coffee or another favorite brew, and let’s break STR down.

Firstly, what is it?

To describe it in simple terms, a Software Technical Review is a sort of code brainstorming. The procedure brings together a team of developers (and sometimes other stakeholders) to look through the code, the design of the project, and any other technical artifacts. It is a kind of peer review of your code, where team members check each other's homework to make sure everything is on course before proceeding further.

Secondly, why is it important?

Well, an STR helps identify bugs early, ensures the software complies with quality requirements, and keeps everyone involved aligned with the project's aims. All in all, it's like double-checking an essay before submitting it: it would be quite irritating to receive a failing grade because of small mistakes that could easily have been fixed. In software, those small details can have a far more severe aftermath.

What actually happens in the review?

The team analyzes the code to detect blunders, potential security risks, and overall quality.
They examine the design document to see whether the planned architecture makes sense and is achievable.
The team also looks at how development is progressing and whether it can satisfy the requirements of the intended users.
The members offer their own ideas for what to upgrade or alter.
Who is involved (and not just developers)?

Finally, here is the STR’s cast:

Project managers, who ensure the project is on the right path to its goals and timeline.
Quality assurance engineers, who are responsible for quality and for making sure no rough edges or half-working features slip through.
Other developers, who may catch an error the original coder missed during review.

Some review suggestions?

You should be able to explain what you have done without stammering, keep an open mind toward revision, remember the review is for everyone's benefit, and avoid subjective opinions: stick to points that make the code better, not ones that make the author feel worse.

As they say, two minds are better than one, so take advantage of the other minds working on the same project to improve yourself and your programming skills.

Till next time,

Ano out.

References

https://www.geeksforgeeks.org/software-technical-reviews-in-software-testing

https://www.softwaretestinggenius.com/understanding-software-technical-reviews-strs/

From the blog CS@Worcester – Anairdo's WSU Computer Science Blog by anairdoduri and used with permission of the author. All other rights reserved by the author.

Exploring Continuous Integration: A Pillar of Modern Software Quality Assurance (Week-14)

In the rapidly evolving world of software development, Continuous Integration (CI) has emerged as a key practice that significantly enhances the quality and efficiency of products. This week, we explore the critical role of CI in modern software development, highlighting its benefits, best practices, and essential tools.

What is Continuous Integration?

Continuous Integration is a development strategy where developers frequently merge code changes into a central repository, followed by automatic builds and tests. The main aim is to provide quick feedback so that if a problem arises, it can be addressed at the earliest opportunity.

Benefits of Continuous Integration

1. Early Bug Detection: Frequent integration tests help identify defects early, reducing the costs and efforts needed for later fixes.

2. Reduced Integration Issues: Regular merging prevents the integration hell typically associated with the ‘big bang’ approach at the end of projects.

3. Enhanced Quality Assurance: Continuous testing ensures that quality is assessed and maintained throughout the development process.

4. Faster Releases: CI allows more frequent releases, making it easier to respond to market conditions and customer needs promptly.

Implementing Continuous Integration

1. Version Control: A robust system like Git is essential for handling changes and facilitating seamless integrations.

2. Build Automation: Tools such as Jenkins, Travis CI, and CircleCI automate building, testing, and deployment tasks.

3. Quality Metrics: Code coverage and static analysis help maintain high standards of code quality.

4. Automated Testing: A suite of tests, including unit, integration, and functional tests, are crucial for immediate feedback on the system’s health.

5. Infrastructure as Code: Tools like Docker and Kubernetes ensure consistent environments from development to production.

Best Practices for Continuous Integration

1. Maintain a Single Source Repository: Centralizing code in one repository ensures consistency and simplicity.

2. Automate Everything: From compiling, testing to packaging, automation speeds up the development process and reduces human error.

3. Ensure Builds Are Self-Testing: Builds should include a comprehensive test suite, and only successful tests should lead to successful build completions.

4. Prioritize Fixing Broken Builds: Addressing failures in the build/test process immediately keeps the system stable and functional.

5. Optimize Build Speed: A quick build process promotes more frequent code integration and feedback.

Tools for Continuous Integration

  • Jenkins: Manages and automates software development processes.
  • Travis CI: A hosted CI service for building and testing projects hosted on GitHub.
  • CircleCI: Integrates with several cloud environments for CI/CD practices.

Conclusion

Continuous Integration is essential, not just as a technical practice but as part of the culture within high-performing teams. It fosters a disciplined development environment conducive to producing high-quality software efficiently and effectively. For those seeking to delve deeper, Continuous Integration: Improving Software Quality and Reducing Risk by Paul M. Duvall et al. is an excellent resource.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

Integration Testing

After recently reading up on unit testing and having some exposure to it in class, I figured the next step would be to look at integration testing. With plenty of blogs to choose from, all giving pretty similar information on integration testing, I chose to highlight the blog published by Katalon (https://katalon.com/resources-center/blog/integration-testing). This blog goes deeper into certain aspects of integration testing, such as explaining some differences between low- and high-level modules. That extra insight helped answer a lot of the questions I had while reading other blogs on the topic.

Integration testing is a software testing method that involves combining individual units together and testing how they interact. The main purpose of integration testing is to ensure that each module works together as intended and does not create any unforeseen bugs. Each module should be individually unit tested before integration testing.

Integration testing helps identify bugs that may not be seen in unit testing. These bugs can come from a multitude of places. Inconsistent logic between different programmers, modifying code, data being transferred incorrectly, poor exception handling, and incompatible versions may all lead to bugs that only appear during integration testing. Integration testing helps identify these bugs, and depending on what model is being implemented, may easily help find the locality of the bug.

There are two main approaches to performing integration testing: the big bang approach and the incremental approach. The big bang approach involves integrating and testing all modules at once. This works well for small, low-complexity systems; however, it makes locating a bug much harder in a complex one. When systems become larger and more complex, it may be best to switch to incremental testing, an integration testing method where modules are combined and tested in smaller groups. These groups continue to combine with other groups until the whole system is tested. Three common incremental integration testing methods are the bottom-up approach, the top-down approach, and a hybrid approach. Bottom-up starts by integrating the lowest-level modules together before adding higher-level modules. Top-down does the opposite: it starts by integrating the highest-level modules before adding the lower ones in as tests pass. The hybrid, also known as the sandwich approach, is a combination that may alternate between testing the top-level and bottom-level components.
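As a small illustration of the bottom-up idea, here is a sketch where a low-level repository module and a higher-level checkout module, each presumed unit-tested in isolation, are exercised together with no mocks. All class and method names are invented for the example:

```python
# Tiny integration-test sketch (bottom-up): the low-level repository is
# integrated first, then exercised through the higher-level service.
import unittest


class InMemoryProductRepo:
    """Low-level module: stores product prices."""
    def __init__(self):
        self._prices = {}

    def add(self, name, price):
        self._prices[name] = price

    def price_of(self, name):
        return self._prices[name]


class CheckoutService:
    """Higher-level module that depends on the repository."""
    def __init__(self, repo):
        self.repo = repo

    def total(self, names):
        return sum(self.repo.price_of(n) for n in names)


class CheckoutIntegrationTest(unittest.TestCase):
    def test_total_uses_real_repository(self):
        repo = InMemoryProductRepo()
        repo.add("pen", 2.50)
        repo.add("pad", 4.00)
        service = CheckoutService(repo)   # real module, not a mock
        self.assertAlmostEqual(service.total(["pen", "pad", "pen"]), 9.00)
```

A unit test would replace the repository with a stub; the integration test keeps both real modules in play, so mismatches in how they pass data to each other can surface.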

From the blog CS@Worcester – CS Learning by kbourassa18 and used with permission of the author. All other rights reserved by the author.

AI In Testing

For the summer, I will be working on a project revolving around machine learning and AI. I have a distinct interest in AI, and I have often wondered how it may affect the jobs of computer scientists. Testing in particular has been moving toward automation, but human intervention is still required. This gave me a lot of interesting thoughts about how testing might be completely automated using AI as the intermediary. My most-used blog for testing information, StickyMinds, had an article about this exact thought: “Examining the Impact of AI on Software Testing: A Conversation with Jeff Payne and Jason Arbon” by Todd Kominiak.

The integration of AI into various industries, including software development and testing, has sparked discussions about its potential impact. This article is an interview with members of the software testing community about the mix of anticipation and apprehension over how AI will reshape their roles and practices. Jeff Payne, CEO of Coveros, and Jason Arbon, CEO of TestersAI, were interviewed to delve into the implications of AI for software testing. Arbon challenges the notion that AI will make testers obsolete, and this is a sentiment I agree with. He argues that AI will instead increase the importance of testers in ensuring software quality, as AI can often err. Arbon states that the current response from testers toward AI has been somewhat “lackluster,” with many primarily experimenting with AI tools rather than fully utilizing their potential. One key point Arbon offers is that testers would need to reevaluate their approach to testing if AI emerged as a driving force in the field. Even as AI technology evolves to automate certain testing tasks, human testers will be required to ensure the reliability and appropriateness of software solutions. This, I think, will be essential for all professions that go the route of automation through AI. Arbon further suggests that the complexity and scale of software testing will increase alongside improvements in AI-driven testing. In the future, he envisions a pivotal role change for testers, wherein they ask essential questions about the structure and content of the code beyond mere functionality. This would also be beneficial for addressing diversity concerns in coding and keeping AI bias in check.

This interview opened my eyes greatly about software testing. It changed not just how I see AI altering the roles of testers, but how I think about testers as a whole. Going into this article, I held the belief that a tester's role was to assure the functionality of the code; being able to address ethical concerns throughout the code is another role entirely. I am now less apprehensive about the role of AI in testing, and in software development for that matter, and more optimistic to see how my job will evolve as time goes on.

Source: https://www.stickyminds.com/interview/examining-impact-ai-software-testing-conversation-jeff-payne-and-jason-arbon

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.