Category Archives: Week-14

Use Of AI in Software Testing


The recent explosion of AI has reached almost every industry. It has become something of a buzzword, with many companies loudly proclaiming how they are using the emergent technology to benefit their customer bases. ChatGPT and other AI tools have already started creating all sorts of problems in academic settings, giving many students an easy out on writing essays. Not only that, but AI is also now cited as one of the main driving forces behind massive layoffs within the tech industry and beyond.

All of that being said, how can AI be utilized to improve software testing? I know that immediately trying to think of ways for AI to replace even more jobs within the software industry can be a bit jarring after bringing up the problems it has already created, but I wanted to look into how the future might look if we used this technology to expedite the testing process. It is entirely possible that we could teach proper testing practices to an AI model and have it automatically produce test cases. Future IDEs could add an auto-generated test file feature to help developers quickly create basic test cases. Well, I didn't have to speculate for long: one Google search later, I had already found a website for using AI to create test cases. This does pose a rather worrying question about the speed at which AI is developing and whether our modern society can keep up with it, but I would rather not dwell on such topics. There have also been concerns about the proliferation of AI potentially poisoning the well of data that these models train on, and I do believe certain measures will need to be taken to prevent another event like the dot-com bubble burst from happening again.

https://www.taskade.com/generate/programming/test-case 
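To make this concrete, below is a hedged sketch of the kind of basic, auto-generated JUnit test such a tool might produce for a simple method. The Calculator class and its divide method are hypothetical names invented for illustration; this is not actual output from the Taskade generator.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical class under test, used only for illustration.
class Calculator {
    int divide(int dividend, int divisor) {
        return dividend / divisor;
    }
}

// The kind of basic, auto-generated cases an AI tool might emit:
// one typical input, one truncation case, and one error case.
class CalculatorTest {
    @Test
    void dividesEvenly() {
        assertEquals(5, new Calculator().divide(10, 2));
    }

    @Test
    void dividesWithTruncation() {
        assertEquals(3, new Calculator().divide(7, 2)); // integer division truncates
    }

    @Test
    void rejectsDivisionByZero() {
        assertThrows(ArithmeticException.class, () -> new Calculator().divide(1, 0));
    }
}
```

Even if a tool generated tests like these, a developer would still need to review whether the cases actually match the intended behavior.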


Another use case that has been proposed for artificial intelligence is the generation of “synthetic data”. This is data created to mimic real-life data in order to test and train programs. DataCebo is one such company; its system, called the Synthetic Data Vault, or SDV for short, uses AI to create synthetic data. These systems are usually sold to data scientists, health care companies, and financial companies. The purpose of creating realistic synthetic data is that companies can train programs on a range of scenarios without relying on historical data, which is limited to events that have already happened. It also sidesteps the privacy issues raised when companies use people’s private data.
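To illustrate the underlying idea, here is a minimal conceptual sketch in Java: fit simple statistics to real records, then sample new records that mimic them. This is not DataCebo's actual API (SDV is a Python library); the Transaction record and its field are invented for illustration.

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Conceptual sketch of synthetic data generation: learn simple statistics
// from real records, then sample new records that mimic them. This is NOT
// the SDV API; the record shape here is invented for illustration.
public class SyntheticDataSketch {
    record Transaction(double amount) {}

    public static void main(String[] args) {
        List<Transaction> real = List.of(
                new Transaction(12.50), new Transaction(99.99),
                new Transaction(45.00), new Transaction(23.75));

        // "Fit" the model: estimate mean and standard deviation of amounts.
        double mean = real.stream().mapToDouble(Transaction::amount).average().orElse(0);
        double variance = real.stream()
                .mapToDouble(t -> Math.pow(t.amount() - mean, 2))
                .average().orElse(0);
        double stdDev = Math.sqrt(variance);

        // "Sample" synthetic records from the fitted distribution. No real
        // record is copied, so the output carries no individual's data.
        Random rng = new Random(42);
        List<Transaction> synthetic = IntStream.range(0, 5)
                .mapToObj(i -> new Transaction(mean + stdDev * rng.nextGaussian()))
                .collect(Collectors.toList());

        synthetic.forEach(t -> System.out.printf("synthetic amount: %.2f%n", t.amount()));
    }
}
```

Real systems such as SDV model much richer structure than a single column of numbers, but the principle is the same: the synthetic output mimics the real data without copying any actual record.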

https://news.mit.edu/2024/using-generative-ai-improve-software-testing-datacebo-0305

From the blog CS@Worcester Alejandro Professional Blog by amontesdeoca and used with permission of the author. All other rights reserved by the author.

Integration Testing

After recently reading up on unit testing and having some exposure to it in class, I figured the next step would be to look at integration testing. With plenty of blogs to choose from, all giving pretty similar information on integration testing, I chose to highlight the blog published by Katalon (https://katalon.com/resources-center/blog/integration-testing). This blog provides deeper insight into certain aspects of integration testing, such as explaining some differences between low- and high-level modules. This extra insight helped answer a lot of the questions I had while reading other blogs on the topic.

Integration testing is a software testing method that involves combining individual units and testing how they interact. The main purpose of integration testing is to ensure that the modules work together as intended and do not create any unforeseen bugs. Each module should be individually unit tested before integration testing begins.

Integration testing helps identify bugs that may not be seen in unit testing. These bugs can come from a multitude of places: inconsistent logic between different programmers, modified code, data being transferred incorrectly, poor exception handling, and incompatible versions may all lead to bugs that only appear during integration testing. Integration testing helps surface these bugs and, depending on which model is being implemented, may help pinpoint where a bug is located.

There are two main approaches to performing integration testing: the big bang approach and the incremental approach. The big bang approach involves integrating and testing all modules at once. This works well for small systems without much complexity; however, it makes finding the location of a bug much harder in a complex system. When systems become larger and more complex, it may be best to switch to incremental testing. This is an integration testing method where modules are combined and tested in smaller groups, and these groups continue to combine with other groups until the whole system is tested. Three common incremental integration testing methods are the bottom-up approach, the top-down approach, and a hybrid approach. Bottom-up starts by integrating the lowest-level modules together before adding higher-level modules. Top-down does the opposite: it starts by integrating the highest-level modules before adding the lower ones in as tests pass. The hybrid, also known as the sandwich approach, is a combination that may alternate between testing the top-level and bottom-level components.
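To ground this, here is a hedged sketch of what a small bottom-up integration test might look like in JUnit. The TaxCalculator and OrderService modules are hypothetical names invented for this example; the point is that the real lower-level module is wired in rather than mocked, so the test exercises the interaction between units.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical modules for illustration: a low-level TaxCalculator and a
// higher-level OrderService that depends on it. In a bottom-up approach,
// TaxCalculator is unit tested first, then the two are tested together.
class TaxCalculator {
    double taxOn(double subtotal) {
        return subtotal * 0.0625; // flat 6.25% rate, for the sake of the example
    }
}

class OrderService {
    private final TaxCalculator tax;

    OrderService(TaxCalculator tax) {
        this.tax = tax;
    }

    double total(double subtotal) {
        return subtotal + tax.taxOn(subtotal);
    }
}

class OrderServiceIntegrationTest {
    // Unlike a unit test, no mock is used: the real TaxCalculator is wired
    // in, so the test checks how the two modules actually interact.
    @Test
    void totalIncludesTaxFromRealCalculator() {
        OrderService service = new OrderService(new TaxCalculator());
        assertEquals(106.25, service.total(100.00), 0.001);
    }
}
```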

From the blog CS@Worcester – CS Learning by kbourassa18 and used with permission of the author. All other rights reserved by the author.

AI In Testing

For the summer, I will be working on a project revolving around machine learning and AI. I have a distinct interest in AI, and I have often wondered how it may affect the jobs of computer scientists. Testing specifically has been moving toward automation, but there is still the matter of the human intervention required. This gave me a lot of interesting thoughts about how testing might be COMPLETELY automated using AI as the intermediary. My most-used blog for testing information, StickyMinds, had an article about this exact thought: Examining the Impact of AI on Software Testing: A Conversation with Jeff Payne and Jason Arbon by Todd Kominiak.

The integration of AI into various industries, including software development and testing, has sparked discussions regarding its potential impact. This article is an interview aimed at the software testing community, covering the mixed anticipation and apprehension over how AI will reshape testers' roles and practices. Jeff Payne, the CEO of Coveros, and Jason Arbon, CEO of TestersAI, were interviewed to delve into the implications of AI for software testing. Arbon challenges the notion that AI will make testers obsolete, and this is a sentiment that I agree with. He argues that AI will instead increase the importance of testers in ensuring software quality, as AI can often err. Arbon states that the current response from testers towards AI has been somewhat 'lackluster', with many primarily focusing on experimenting with AI tools rather than fully utilizing their potential. One key point Arbon offers is that testers would need to reevaluate their approach to testing if AI emerged as a driving force in the field. Even as AI technology evolves to automate certain testing tasks, human testers will be required to ensure the reliability and appropriateness of software solutions. This is something I think will be essential for all professions that go the route of automation through AI. Arbon further suggests that the complexity and scale of software testing will increase alongside improvements in AI-driven testing. In the future, he envisions a pivotal role change for testers, in which they ask essential questions about the structure and content of the code beyond mere functionality. This would also be beneficial, especially for addressing diversity concerns in coding and keeping AI bias in check.

This interview opened my eyes greatly about software testing. Not only did it change how I think AI will affect the roles of testers, it also changed how I think about testers as a whole. Going into this article I held the belief that a tester's role was simply to assure the functionality of the code; being able to address ethical concerns throughout the code is another role entirely. I am less apprehensive now about the role of AI in testing, and in software development for that matter, and I am more optimistic about seeing how my job will evolve as time goes on.

Source:https://www.stickyminds.com/interview/examining-impact-ai-software-testing-conversation-jeff-payne-and-jason-arbon

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

Technical Review

When working on a big project, in a big team, with a lot of other people working together, things can become confusing. Sometimes code may not work, or you will not entirely understand what is happening. The best thing to do is step back and conduct a technical review: review the code, the goals, and any other areas that may be in need of improvement or assistance. Take the time to straighten things out so you and the team can get back to work efficiently and effectively. But how do you go about a technical review?

In this blog post, Tony Karrer talks about what a technical review is, some ways to identify when you need one, some strategies, and some areas of review. He describes a technical review as “a deep-dive assessment of your software, infrastructure, team and processes,” adding that “it provides findings and recommendations intended to foster a mutual understanding between business and software leaders, shedding light on the current state of your technology and your team.” Some signs that you are in need of a technical review are slow or late delivery, random or persistent bugs, and sleepless nights from strategic worries. However, a technical review shouldn't just be a response to malfunction; it can be prompted by scaling and new markets, keeping up with competition, outgrowing your stack, changes to the tech team, or simply wanting to stay on top of things. Karrer describes four strategies, each different from the others, ranging from general to specific and in-depth: straightforward analysis, pragmatic assessment, expert recommendations, and finding sessions. After determining which strategy you are comfortable with, you can go ahead and start reviewing. He provides some examples of where you may want to review, including background information to get a general idea of the project, architecture, targeted code, or process and team. If there are areas that need work or are struggling, then that is one hundred percent a spot you want to review. While you are doing that, create a summary, list your findings, and include recommendations or solutions if you have any. Finally, bring them to the review meeting, where you will review the project together with your team, sort out the issues, and find possible solutions.

Doing this in class was actually fairly helpful. I feel like if it had been my own code I would have gotten more benefit out of it, but I understand the premise: it's good to have multiple people look at the code, come together, and see what kinds of issues were found.

From the blog CS@Worcester – Cao's Thoughts by antcao and used with permission of the author. All other rights reserved by the author.

Testable code

In my last post, I highlighted a blog that gives an overview of unit testing and how we can use it to increase the quality and efficiency of our testing or debugging processes. The point of unit testing is to test “units” or isolated methods, classes, or modules of our code to determine if there are any issues. Writing code with unit testing in mind makes it simpler for developers to debug their code. I felt this next blog, “TDD: Writing Testable Code,” by Eric Elliott, would help further readers’ understanding of writing this kind of code and the benefits that come from it.

Elliott’s blog discusses many aspects of writing testable code, including tight coupling, test-driven development, separation of concerns, and an overview of different data management strategies. He describes how tight coupling limits testability and provides strategies to reduce it, including TDD. He also discusses the benefits of testing first vs. testing after, with test-first being the main focus of test-driven development. He goes on to describe data management strategies.

Reading this post brought my attention to an important aspect of software development: fostering a developer culture that improves the quality of our software as a whole. Writing with testability in mind makes things easier for users, increases the adaptability of our code, and allows us to fix issues without revamping the whole system.

The breakdown of tight coupling and the different forms it can take was comprehensive, giving me a straightforward explanation of what I should consider when writing code. Elliott gives 11 forms and causes of tight coupling, including parent class dependencies, temporal coupling, and event chains. TDD was something I was already aware of, but the benefits of using it, and the detriments of not using it, were still insightful for how we should take our development step by step, failure by failure, until we have testable code. The separation of concerns discussion was also interesting. He says we should isolate our code by concern, including business and state logic, user interface, and I/O & effects. Separating these into separate modules allows us to test and understand each independently.
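As a minimal sketch of one of these ideas, the Java example below shows dependency injection, a common way to loosen tight coupling. The Report class and the clock dependency are invented for illustration, not taken from Elliott's post.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

import org.junit.jupiter.api.Test;

// Tightly coupled version (what to avoid): Report would construct its own
// clock internally, so a test could never control the timestamp.
// Loosely coupled version (below): the dependency is injected instead.
class Report {
    private final Clock clock;

    Report(Clock clock) {
        this.clock = clock; // injected, so a test can substitute a fake
    }

    String header() {
        return "Generated at " + Instant.now(clock);
    }
}

class ReportTest {
    @Test
    void headerUsesInjectedClock() {
        // A fixed clock stands in for the real one: no mocking library needed.
        Clock fixed = Clock.fixed(Instant.parse("2024-01-01T00:00:00Z"), ZoneOffset.UTC);
        assertEquals("Generated at 2024-01-01T00:00:00Z", new Report(fixed).header());
    }
}
```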

I plan to consider all of these strategies when developing projects in the future. I will use test-driven development to limit the tight coupling of my modules, classes, and methods, to ensure that my code is readable and testable and that each of its concerns is independent of the others.

From the blog CS@Worcester – KindlCoding by jkindl and used with permission of the author. All other rights reserved by the author.

Blog #4: First Exposure to Testing

Before being exposed to JUnit, my only experience with automated testing was through CxxTest while I was learning C++. Once I started to learn JUnit, both the syntax and the general format seemed to ring a bell. This caused me to check back on my previous C++ programs, and I found that the assertion-based testing was identical to that of JUnit. After seeing the two side by side, I was curious how these testing frameworks compare and whether CxxTest has any advantages over JUnit.

While looking for an article discussing the full capabilities of CxxTest, I stumbled upon a blog, Exploring the C++ Unit Testing Framework Jungle by user @noel_llopis, which provides extensive explanations of each popular C++ testing framework at the time. Do note that this post was written in 2010, so popular frameworks from then may have faded into obscurity and new frameworks may be used in their place. What drew me to this article was Llopis's section describing his experience with CxxTest and how testing frameworks required a little more work from the user back in 2010. Llopis praised CxxTest for its relative simplicity in how it's imported into a program and how it requires far fewer dependencies. From his explanation, I learned that testing frameworks used to require certain formatting within the file, and potentially other libraries, for the tests to function. CxxTest, similar to JUnit, can operate by itself with far fewer dependencies than its competitors (at the time). One feature CxxTest has that JUnit lacks out of the box is the ability to natively mock objects; JUnit can support mocking, but it requires the user to add another resource, whereas CxxTest has this functionality immediately. One downside the author does mention is that CxxTest required "use of a scripting language as part of the build process" (Llopis), which may create a barrier to entry for less experienced developers.
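For a sense of what "adding another resource" looks like on the JUnit side, here is a hedged sketch using Mockito, a separate mocking library commonly paired with JUnit (assuming it is on the classpath). The PriceFeed interface is invented for illustration.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical collaborator, used only to illustrate mocking.
interface PriceFeed {
    double latestPrice(String symbol);
}

class PriceFeedMockTest {
    // JUnit has no built-in mocking, so a separate library (here, Mockito)
    // supplies the mock; CxxTest ships comparable functionality natively.
    @Test
    void valueCalculationUsesMockedFeed() {
        PriceFeed feed = mock(PriceFeed.class);
        when(feed.latestPrice("ACME")).thenReturn(10.0);

        // Five shares at the mocked price of 10.0 each.
        assertEquals(50.0, feed.latestPrice("ACME") * 5, 0.001);
    }
}
```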

While comparing these two testing frameworks, I found myself asking a new question: how accessible or inaccessible were the testing frameworks of the past? Llopis seemed enthusiastic about features that I held to be common to all frameworks. Additionally, writing about this made me wish I had spent more time programming in C++ outside of classes. Reading this did help expand my knowledge of how CxxTest operates, so when I inevitably go back to refine my C++ skills, I'll be ready to pick up this framework once more. Between JUnit and CxxTest there are many surface-level similarities, as both are unit testing frameworks. The differences seem to lie in smaller features that some developers may depend on, such as mocking. Having experience with both, I find it hard to choose one or the other, as they generally function the same and have similar levels of accessibility.

-AG

Source: https://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Blog #3: Testing Beyond JUnit

Throughout my experience with software quality assurance, I've used two different testing frameworks. The most recent is JUnit, which I just learned how to use earlier this year; the other is CxxTest, which I had forgotten most of (including its name) until writing this. Each operates on a different language: JUnit for Java, and CxxTest for C++. The differences between these two made me curious whether a single framework could work for more than one language at once. Note that I do not count frameworks such as CTest that work for both C and C++, as both languages exist within the C family. As I searched for a framework that answered my question, I stumbled upon Selenium and subsequent articles comparing it to JUnit. These comparisons drew my attention and sent me down a path to understand what Selenium is.

Before getting into specifics, I must introduce a few definitions that help differentiate these two frameworks. Unit testing is the method of testing small increments of source code to ensure that each 'unit' works as intended and meets the developers' specifications. This lays the foundation for later development. End-to-end testing focuses on testing the components a user interacts with directly, such as the UI and web applications. JUnit focuses on unit testing of code written in Java; Selenium, meanwhile, focuses on end-to-end testing in multiple languages, including Java. Fundamentally, these frameworks test different aspects of software development, so any comparison between the two must be taken with a bit of nuance.
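To make the contrast concrete, here is a hedged sketch of both styles in Java: a plain JUnit unit test next to a Selenium end-to-end test. It assumes the Selenium Java bindings and a local ChromeDriver are installed; the URL and element name are illustrative examples, not taken from the StackShare post.

```java
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class EndToEndSketch {
    // Unit-style test: exercises one small piece of logic, no browser needed.
    @Test
    void unitTestOfPlainLogic() {
        Assertions.assertEquals(4, Math.addExact(2, 2));
    }

    // End-to-end-style test: drives a real browser through the UI.
    // Assumes ChromeDriver is installed; the page and search box name
    // are examples chosen for illustration.
    @Test
    void endToEndTestThroughBrowser() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://duckduckgo.com");
            driver.findElement(By.name("q")).sendKeys("JUnit vs Selenium");
            Assertions.assertFalse(driver.getTitle().isEmpty());
        } finally {
            driver.quit(); // always release the browser session
        }
    }
}
```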

With definitions aside, we can now talk about what makes these frameworks unique. A community post on StackShare, JUnit Vs Selenium, gives a concise view of how these frameworks differ. My experience with testing frameworks has existed only within IDEs; Selenium, however, is supported through browsers and WebDriver tools. Additionally, Selenium tests run within a browser, while JUnit tests require a Java Virtual Machine to be created. One downside of Selenium is its dependency requirements, as opposed to JUnit, which can simply be imported into your program. I'm more partial to JUnit, as I only have experience with back-end development, so Selenium isn't directed towards a developer like me. With that being said, those more experienced in front-end development may find the dependency requirements and browser configurations a small cost for its flexibility in testing.

I have a little experience with front-end development, so I can understand how the tools provided by Selenium could be invaluable. A lesson I've learned from this dive into Selenium is that all aspects of development (e.g., front end vs. back end) require some form of automated testing. Additionally, testing that may be easy for one department may be more complex for another. With these different areas of testing also come different methods: end-to-end testing is noticeably different from unit testing, as each method focuses on a specific function of the software.

-AG

Source: https://stackshare.io/stackups/junit-vs-selenium

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

Security Testing

Security testing is an important part of software testing: it finds weaknesses in the software and verifies that our data is protected from intruders. Security testing matters because it helps us ensure that our software has no weak spots and that our data and information are safe. There are many forms of security testing, such as password cracking, penetration testing, and vulnerability testing. These three topics are some of the main ideas of security testing and of understanding the importance of keeping personal information safe. Password cracking testing identifies weak passwords and helps ensure that people use stronger ones. Penetration testing assesses the system using different techniques; its purpose is to protect the system's important data from those who don't have authorized access. Vulnerability testing identifies the weakest attributes of the system, which could be exploited by malicious software. Many things can make software vulnerable, such as bugs, inadequate testing, and poorly written code.
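As a small illustration of the weak-password idea, here is a hedged sketch of a password-strength check expressed as a JUnit test. The policy rules and class names are invented for this example; real password auditing goes much further (dictionary attacks, breach lists, and so on).

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// A toy password policy, invented for illustration: length plus
// character-class checks of the kind a weak-password audit looks for.
class PasswordPolicy {
    static boolean isStrong(String password) {
        return password.length() >= 12
                && password.chars().anyMatch(Character::isUpperCase)
                && password.chars().anyMatch(Character::isLowerCase)
                && password.chars().anyMatch(Character::isDigit)
                && password.chars().anyMatch(c -> !Character.isLetterOrDigit(c));
    }
}

class PasswordPolicyTest {
    @Test
    void rejectsCommonWeakPasswords() {
        assertFalse(PasswordPolicy.isStrong("password123"));
        assertFalse(PasswordPolicy.isStrong("letmein"));
    }

    @Test
    void acceptsLongMixedPassword() {
        assertTrue(PasswordPolicy.isStrong("Tr0ub4dor&3xyz!"));
    }
}
```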

I chose this topic because security testing is important in our software testing class, and this blog has a lot of information on how it works. It's important to know more about how our private data is kept safe from hackers and how safe our software is. I believe this helped me learn more about security testing, like how penetration testing uses different techniques, such as black-box and white-box testing methods, to detect vulnerabilities in our software. Security testing makes sure that applications do not leak private information and can handle threats like hackers. I already knew before reading how important it is to protect our information and to make sure our passwords are not easy for hackers to figure out, but I found penetration testing interesting to learn about because I didn't know it used so many different techniques to find vulnerabilities. Security testing is about keeping our information and our software safe from intruders, and I believe this blog gave the best explanation and information about it.

https://www.idexcel.com/blog/tag/security-testing/

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.