Category Archives: CS-443

A Beginner’s Guide to Software Quality Assurance and Testing

In today’s fast-paced digital world, software is at the core of nearly everything we do—whether it’s managing bank accounts, connecting with friends, or working from home. With so many people depending on technology, ensuring that the software we use is safe, reliable, and user-friendly is more important than ever. This is where Software Quality Assurance and Testing come in.

What is Software Quality Assurance?

Software Quality Assurance is all about making sure that the software developed by companies meets a certain standard of quality. It’s not just about finding bugs after the software has been built; it involves creating guidelines, processes, and checks to ensure software is being built the right way from the start.

Here’s a simple way to think about it: Software Quality Assurance and Testing is like quality control in a factory. Just as a factory ensures that each product coming off the line meets specific standards, Software Quality Assurance and Testing ensures that software does the same.

Key Functions of Software Quality Assurance and Testing:

  • Process Monitoring: Ensuring that software development follows defined processes and standards.
  • Code Review: Examining the code to catch errors before the software is released.
  • Defect Prevention: Putting measures in place that reduce the chance of defects occurring in the first place.

What is Software Testing?

Testing, on the other hand, comes after the development process. It focuses on checking the actual software product to make sure it works as expected. Think of it like test-driving a car before it hits the market.

Software Testing involves running the software through various scenarios to make sure everything functions smoothly and to catch bugs before they reach users. It is crucial because even a small bug can cause significant problems for users, and companies could lose their reputation or customers if their software doesn’t work well.

Types of Software Testing:

  • Manual Testing: Testers use the software like a real user would, performing various actions to check for bugs.
  • Automated Testing: Automated scripts run tests on the software to save time and effort on repetitive tasks.
  • Functional Testing: Ensures the software behaves correctly according to requirements.
  • Performance Testing: Verifies how well the software performs under pressure (for example, when thousands of users are using it at once).
  • Security Testing: Identifies vulnerabilities that could expose users to data breaches or other risks.
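
To make the Automated and Functional Testing types above more concrete, here is a minimal JUnit 5 sketch. It is only an illustration: the DiscountTest class and its discounted method are invented for this example, not taken from any real product.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountTest {

    // Hypothetical function under test: applies a 10% discount to a price.
    static double discounted(double price) {
        return price * 0.9;
    }

    // A functional check: does the behavior match the stated requirement?
    @Test
    void tenPercentDiscountIsApplied() {
        assertEquals(90.0, discounted(100.0), 0.001);
    }
}

Because a test like this runs automatically every time the code changes, it covers the same ground a manual check would, but without the repeated effort.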

Why Software Quality Assurance and Testing Are Important

You may wonder, why go through all this trouble? Well, poor-quality software can lead to disastrous results for both users and companies. Imagine if an e-commerce website crashed during Black Friday sales or a banking app exposed sensitive user data—that’s a nightmare scenario!

Here’s why SQA and Testing are critical:

  1. Minimizing Bugs: Testing catches problems early, so developers can fix them before they impact users.
  2. Improving Security: Testing helps find security holes that hackers could exploit, protecting users from cyber threats.
  3. Enhancing User Experience: Reliable, bug-free software creates a better user experience and increases user satisfaction.
  4. Cost Efficiency: Fixing bugs early is much cheaper than addressing problems after software has been released.
  5. Building Trust: Well-tested software builds trust with users, boosting brand reputation and customer loyalty.

How Software Quality Assurance and Testing Affect Everyday Software

Every time you open an app or visit a website, there’s a good chance that it has gone through rigorous quality assurance and testing processes. From banking apps ensuring secure transactions to streaming platforms delivering smooth experiences, Software Quality Assurance and Testing plays a major role in the seamless digital experiences we enjoy daily.

Even the smallest error—like a slow-loading webpage or a glitchy feature—can ruin the user experience, which is why companies invest heavily in making sure their software is as close to perfect as possible. The result? Fewer complaints, better user retention, and a competitive edge in the marketplace.

The Growing Demand for Quality Software

With the continuous rise of new apps, websites, and technologies, the need for high-quality software is more significant than ever. As businesses shift to digital solutions, software development teams face the challenge of delivering robust, reliable software in increasingly shorter timelines.

This demand for quality, combined with the complexity of modern applications, has led to growing opportunities in the field of Software Quality Assurance and Testing. Whether you’re a developer, a project manager, or someone interested in tech, understanding why quality assurance matters and how testing works can be a valuable skill in today’s job market.

Conclusion

Software Quality Assurance and Testing are essential for delivering reliable, secure, and user-friendly products in today’s tech-driven world. From preventing bugs to ensuring smooth performance, these processes ensure that the software we depend on every day works as it should.

As technology continues to evolve, the demand for well-tested, high-quality software will only grow. Whether you’re a tech enthusiast or just someone who relies on apps and websites, Software Quality Assurance and Testing ensures a safer, smoother digital experience for everyone.

So, next time you use a glitch-free app or enjoy a seamless online shopping experience, you can thank the Software Quality Assurance and Testing teams working behind the scenes to make it possible!

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.

Mastering Software Quality: Path Testing and Decision-Based Testing in Real-World Applications

Introduction (GREAT NEWS!!!!)

Hello everyone, I apologize for the delay in posting this week’s blog. I’ve been balancing a lot lately, but I’m excited to share some great news with you. I recently secured a summer internship at Hanover Insurance Group as an automation developer, and I couldn’t be more thrilled! As I dive into this amazing opportunity, I want most of my projects to focus on solving, exploring, or even just addressing challenges within the insurance industry. That’s why this week’s post on Path Testing and Decision-Based Testing will highlight real-world applications in insurance software. Let’s jump in!

How Proven Testing Techniques Ensure Reliability in Insurance Software Systems

In the insurance industry, software systems play a critical role in managing claims, processing policies, and ensuring compliance. Given the complexity of insurance workflows, robust testing is essential to avoid costly errors and enhance customer satisfaction. Two effective testing methods, Path Testing and Decision-Based Testing, are invaluable in achieving high-quality software. Let’s explore these techniques with simple examples and see how they apply to real-world insurance applications.


What is Path Testing?

Ensuring Every Route in the Code is Tested

Path Testing involves checking all possible execution paths within a program to ensure each one functions as expected. This technique is particularly useful in complex systems where different inputs and scenarios lead to various execution routes.

Example:
Consider an insurance claims processing system where a claim can go through multiple steps:

  1. Eligibility Check: Is the policy active?
  2. Coverage Validation: Does the claim fall under covered incidents?
  3. Fraud Check: Are there any red flags?
  4. Approval Process: Does the claim meet all criteria for approval?

Path Testing would generate test cases to ensure every possible scenario is covered, such as:

  • Active policy, valid coverage, no fraud, claim approved (Success)
  • Inactive policy (Failure)
  • Valid policy but uncovered incident (Failure)
  • Fraud detected (Failure)
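
Here is a hedged JUnit 5 sketch of what those path-based cases might look like. The processClaim method is a hypothetical stand-in for the real claims workflow (its name, parameters, and result strings are assumptions made only for this illustration), and the parameterized test requires the junit-jupiter-params artifact.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class ClaimPathTest {

    // Hypothetical stand-in for the claims workflow described above.
    static String processClaim(boolean policyActive, boolean covered, boolean fraudulent) {
        if (!policyActive) return "Denied: Inactive Policy";
        if (!covered) return "Denied: Uncovered Incident";
        if (fraudulent) return "Denied: Fraud Detected";
        return "Approved";
    }

    // One row per execution path through the workflow.
    @ParameterizedTest
    @CsvSource({
        "true,  true,  false, Approved",
        "false, true,  false, Denied: Inactive Policy",
        "true,  false, false, Denied: Uncovered Incident",
        "true,  true,  true,  Denied: Fraud Detected"
    })
    void everyPathProducesTheExpectedResult(boolean active, boolean covered,
                                            boolean fraud, String expected) {
        assertEquals(expected, processClaim(active, covered, fraud));
    }
}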

What is Decision-Based Testing?

Validating Every Decision Made by the Software

Decision-Based Testing (also known as Branch Testing) focuses on testing each decision point in the code, such as conditional statements and logic branches.

Example:
In the same insurance claims system, the decision to approve or deny a claim might depend on multiple conditions:

void claimCheck(boolean isPolicyActive, boolean isCoveredIncident, boolean isFraudulent) {
    if (isPolicyActive) {
        if (isCoveredIncident) {
            if (!isFraudulent) {
                System.out.println("Claim Approved");
            } else {
                System.out.println("Claim Denied: Fraud Detected");
            }
        } else {
            System.out.println("Claim Denied: Uncovered Incident");
        }
    } else {
        System.out.println("Claim Denied: Inactive Policy");
    }
}

Decision-Based Testing would create test cases to cover all possible outcomes:

  • Active policy, covered incident, no fraud (Approved)
  • Active policy, covered incident, fraud detected (Denied)
  • Active policy, uncovered incident (Denied)
  • Inactive policy (Denied)
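
Because the claimCheck example above prints its result rather than returning it, one way to drive each decision outcome from JUnit 5 is to capture System.out. This is only a sketch under assumptions: it presumes claimCheck is exposed as a static method on a class named Claims, a name invented here for illustration, and only two of the four outcomes are shown; the remaining branches follow the same pattern.

import static org.junit.jupiter.api.Assertions.assertTrue;

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class ClaimDecisionTest {

    private final ByteArrayOutputStream out = new ByteArrayOutputStream();
    private final PrintStream originalOut = System.out;

    @BeforeEach
    void captureOutput() {
        System.setOut(new PrintStream(out));
    }

    @AfterEach
    void restoreOutput() {
        System.setOut(originalOut);
    }

    // Targets the branch where every condition is favorable.
    @Test
    void approvedWhenPolicyActiveIncidentCoveredAndNoFraud() {
        Claims.claimCheck(true, true, false);
        assertTrue(out.toString().contains("Claim Approved"));
    }

    // Targets the outermost decision: an inactive policy short-circuits everything else.
    @Test
    void deniedWhenPolicyInactive() {
        Claims.claimCheck(false, true, false);
        assertTrue(out.toString().contains("Claim Denied: Inactive Policy"));
    }
}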

Real-World Application: Insurance Claims Processing Systems

Why Insurance Software?
Insurance software involves complex business rules and multiple decision points. Errors in these systems can lead to mismanagement of claims, financial losses, or regulatory issues.

How Are These Testing Techniques Used?

  1. Path Testing: Ensures that all possible claim processing scenarios are thoroughly tested, including edge cases like expired policies or unusual claim amounts.
  2. Decision-Based Testing: Validates that all critical decisions, such as fraud detection or policy eligibility, are handled accurately by the system.

Example Scenario:
An insurance company uses a claims management system that automatically processes thousands of claims daily. Path Testing would ensure that every possible claim scenario is tested, while Decision-Based Testing would verify that all decision points, such as flagging a claim for manual review, function correctly.


Key Differences Between Path Testing and Decision-Based Testing

  • Focus: Path Testing covers all possible execution paths, while Decision-Based Testing covers each decision point (branch).
  • Best for: Path Testing suits complex insurance workflows, while Decision-Based Testing suits policy validation and claim decisions.
  • Example application: Path Testing fits claims processing systems, while Decision-Based Testing fits fraud detection and approvals.

Conclusion: Delivering Reliable Insurance Software with Strategic Testing

For software engineers working in the insurance industry, combining Path Testing and Decision-Based Testing is crucial. These techniques ensure the software is well-equipped to handle every possible scenario and make accurate decisions in policy and claims management. By implementing these robust testing strategies, insurance companies can boost efficiency, reduce errors, and maintain compliance with regulatory standards.

For further exploration, consider:

Software-testing-laboon-ebook

From the blog Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

Equivalence Class Testing

In the realm of software testing, equivalence class testing stands out as an efficient black-box testing technique. Unlike its counterparts—boundary value analysis, worst-case testing, and robust case testing—equivalence class testing excels in both time efficiency and precision. This methodology logically divides input and output into distinct classes, enabling comprehensive risk identification.

To illustrate its effectiveness, consider the next-date problem. Given a date in day-month-year format, the task is to determine the next date, applying boundary value analysis and equivalence class testing. The conditions for this problem are:

  • Day (D): 1 ≤ Day ≤ 31
  • Month (M): 1 ≤ Month ≤ 12
  • Year (Y): 1800 ≤ Year ≤ 2048

Boundary Value Analysis

Boundary value analysis generates 13 test cases by applying the formula:

No. of test cases = 4n + 1, where n is the number of variables (here n = 3, so 4(3) + 1 = 13)

For instance, the test cases might include:

  1. Date: 1-6-2000, Expected Output: 2-6-2000
  2. Date: 31-6-2000, Expected Output: Invalid Date
  3. Date: 15-6-2048, Expected Output: 16-6-2048

While this technique effectively captures boundary conditions, it often overlooks special cases like leap years and the varying days in February.

Equivalence Class Testing

Equivalence class testing addresses this gap by creating distinct input classes:

  • Day (D): 1-28, 29, 30, 31
  • Month (M): 30-day months, 31-day months, February
  • Year (Y): Leap year, Normal year

With these classes, the technique identifies robust test cases for each partition. For example:

  • Date: 29-2-2004 (Leap Year), Expected Output: 1-3-2004
  • Date: 29-2-2003 (Non-Leap Year), Expected Output: Invalid Date
  • Date: 30-4-2004, Expected Output: 1-5-2004

This approach ensures comprehensive test coverage, capturing edge cases missed by boundary value analysis.
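
As a hedged illustration, those partition-based cases could be written as JUnit 5 tests against a nextDate(day, month, year) method. The NextDate class, the method name, and the returned string format are assumptions invented for this sketch; the expected values come from the examples above.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class NextDateEquivalenceTest {

    // One representative test per equivalence class from the partitions above.
    @Test
    void feb29OnLeapYearRollsOverToMarch1() {
        assertEquals("1-3-2004", NextDate.nextDate(29, 2, 2004));
    }

    @Test
    void feb29OnNonLeapYearIsInvalid() {
        assertEquals("Invalid Date", NextDate.nextDate(29, 2, 2003));
    }

    @Test
    void lastDayOfThirtyDayMonthRollsOverToNextMonth() {
        assertEquals("1-5-2004", NextDate.nextDate(30, 4, 2004));
    }
}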

Conclusion

Equivalence class testing offers a systematic approach to software testing, ensuring efficient and thorough risk assessment. By logically partitioning inputs and outputs, it creates robust test cases that address a wide array of scenarios. Whether dealing with complex date calculations or other software functions, equivalence class testing is a valuable tool in any tester’s arsenal.

In essence, this method not only saves time but also enhances the precision of test cases, making it an indispensable step in the software development lifecycle.

All of this can be found from this link:

Equivalence Class Testing- Next date problem – GeeksforGeeks

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.

Software Quality

Software quality is something I discuss with a lot of my friends who are also interested in software engineering and development. One of the main points all of us make sure to implement in our projects is clean, readable code. Working on each other’s code-bases has become much easier as our styles have converged in terms of quality and cleanness. Obviously not everyone will have the same coding style, but people should definitely follow certain guidelines when working on a product.

At the start of all of my projects now, I immediately lay things out in a specific way so that anyone who has to view or navigate the project knows exactly where to look. All packages are named descriptively enough to indicate what they contain, and all classes follow proper naming conventions: if a class creates something it is usually a factory, if it is an object class it is a wrapper, and core functionality goes in services. There have been plenty of times when I’ve had to work on someone else’s code-base and was immediately lost: everything was all over the place, packages didn’t have good naming conventions, classes didn’t belong in their packages, and even the code itself was just nasty.

This leads me to the next part of software quality, which is usually the most important part: code quality. If you’re coding and you use obscure or weird naming conventions, that should probably stop. Working in production usually means someone else will eventually have to look at your work, and if they can’t figure out what you were doing, you did a poor job. Everything needs to be readable. Obviously someone may not initially understand what the code does, but that may just be a difference in experience; someone at your skill level reading your code should be able to identify what each variable, method, class, etc. does for the product and how they can test it to make sure it all works.

Since I started learning more about code quality, building any sort of project for production has become much more streamlined for me. I’m thankful I had friends there to help me understand where I was going wrong in the first place with poorly structured code. Instead of having to take time remembering how a project should look, it’s now basically ingrained in my head, and I hope other developers get there as well.

From the blog CS@Worcester – CS Blog by Mike and used with permission of the author. All other rights reserved by the author.

Week 6: Boundary Value vs Equivalence Class Testing

This week we learned about boundary value testing and equivalence class testing. Boundary value testing focuses on making sure the values in, out of, and around the expected boundary work as they should. Equivalence class testing does the same, but also tests the function itself.

I wanted to know more about the two methods and found a blog post that explains them a little more in depth. The author, Apoorva Ram, says they are really more thought processes than testing methods. The thought process of boundary value testing is self-explanatory: testing the edge boundaries of the function. The thought process of equivalence class testing is organizing every possible input into groups of expected outputs and testing the result from each.

Ram also explains the benefits of the methods and how they can be used in software testing. The two seem to go hand in hand. Planning your tests before writing them and knowing the expected output makes the testing process a lot smoother. You know what points you need to hit and have a plan to execute them. Additionally, knowing all the points you need to hit allows you to prioritize ones that are more important.

For example, say you have a boolean function that looks for the input value to be between 15 and 30, but accepts values from 0 to 100. Boundary testing would test the values of xmin-, xmin, xmin+, xnom, xmax-, xmax, and xmax+. In this case: -1, 0, 1, 50, 99, 100, and 101. It mostly makes sure the 0 and 100 boundaries work. But equivalence class testing breaks down the function into classes of values that will give every result: invalid inputs (under 0 and above 100), false cases (between 0-14 and between 31-100), and the true case (between 15-30). In this case: -1, 12, 20, 45, and 101. This method tests the valid ranges as well as the function ranges.

In my opinion, equivalence class testing is better than boundary value testing because it actually tests the function and not just the illegal argument exception, and it eliminates redundant tests like xmin+, xnom, and xmax-, all testing for the same output without actually testing the function. Though ideally, a mix of both would probably be the method I choose. For this example, I would test each equivalence class and its boundaries: xmin-, xmin, xmin+, xtruemin-, xtruemin, xtruemin+, xtruemax-, xtruemax, xtruemax+, xmax-, xmax, and xmax+ (-1, 0, 1, 14, 15, 16, 29, 30, 31, 99, 100, 101).
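
To show what that mixed approach could look like in code, here is a hedged JUnit 5 sketch. The inRange method is a hypothetical stand-in for the boolean function described above (true for 15-30, false elsewhere in 0-100, and an exception outside 0-100), and the parameterized tests need the junit-jupiter-params artifact.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import org.junit.jupiter.params.provider.ValueSource;

class InRangeTest {

    // Hypothetical function under test.
    static boolean inRange(int x) {
        if (x < 0 || x > 100) {
            throw new IllegalArgumentException("input must be between 0 and 100");
        }
        return x >= 15 && x <= 30;
    }

    // Boundaries of the accepted input range and of the true/false partitions.
    @ParameterizedTest
    @CsvSource({
        "0, false", "1, false", "14, false", "15, true", "16, true",
        "29, true", "30, true", "31, false", "99, false", "100, false"
    })
    void boundariesAndPartitionsReturnTheExpectedValue(int input, boolean expected) {
        assertEquals(expected, inRange(input));
    }

    // Invalid inputs just outside the accepted range.
    @ParameterizedTest
    @ValueSource(ints = {-1, 101})
    void valuesOutsideZeroToOneHundredAreRejected(int input) {
        assertThrows(IllegalArgumentException.class, () -> inRange(input));
    }
}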

Blog post referenced: https://testsigma.com/blog/boundary-value-analysis-and-equivalence-class-partitioning/

From the blog ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

JUnit 5 Testing

2/27/2025

This week we learned about JUnit test cases. Coming from a C++ background, it was a bit difficult to order my tests and to use one global object that is created before all other tests run. In C++ the tests run in the order you write them, and I learned that in Java that is not the case unless you specifically use @Order(N), N being the order number. This lets you order your test cases however you want. I also learned that you can make a setup function that creates a single object and, by using “@BeforeAll”, have that setup run before any other test, which helps with the repetitive task of making objects over and over. I personally would rather create a fresh object for each test, but it was a nice experience learning that Java does not run tests in a guaranteed order unless stated otherwise.

Looking through the JUnit 5 User Guide while doing the homework, I also learned that you can use “@BeforeEach” for setup that you want to run before each test. One very interesting thing I had never encountered is that you can use lambdas to compare values inside objects. This keeps simpler tests nice and compact because it can all be done within one line.
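
Here is a hedged sketch that puts the features mentioned above into one small JUnit 5 test class; the nested Counter class is invented purely for illustration.

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class CounterTest {

    // Hypothetical class under test.
    static class Counter {
        private int value;
        void increment() { value++; }
        int value() { return value; }
    }

    static Counter shared;
    Counter fresh;

    // Runs once before all tests in the class.
    @BeforeAll
    static void createSharedCounter() {
        shared = new Counter();
    }

    // Runs before each individual test.
    @BeforeEach
    void createFreshCounter() {
        fresh = new Counter();
    }

    @Test
    @Order(1)
    void sharedCounterStartsAtZero() {
        assertEquals(0, shared.value());
    }

    @Test
    @Order(2)
    void freshCounterIsIndependentOfTheSharedOne() {
        fresh.increment();
        // Lambdas let assertAll group several related checks in one statement.
        assertAll(
            () -> assertEquals(1, fresh.value()),
            () -> assertEquals(0, shared.value())
        );
    }
}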

In class I also learned that Gradle will give different output for the test cases compared to VS Code. I also found that when I hit the checkmark in VS Code for individual test cases, they would sometimes pass and other times not, and the same happened when running them all together. This confused me and took a few hours to figure out, but I realized it was due to the global object I had created. Gradle would give me a different result as well, but it was more consistent than running the tests individually.

Source: JUnit 5 User Guide
Source: Writing Templates for Test Cases Using JUnit 5 – GeeksforGeeks

From the blog CS@Worcester – Cinnamon Codes by CinCodes and used with permission of the author. All other rights reserved by the author.

JUnit, Test, and Repeat

I’ve decided I want to practice making more JUnit tests. I did well on my last homework assignment, but I feel like I still need more practice; it took me some time to finish. I may have to write my own JUnit tests on the midterm, so I will need to make them at a faster rate. Anyway, practice makes perfect, so there is no such thing as too much practice.

For this post I will be using this website: Inheritance in Java – GeeksforGeeks

This website contains two examples of code but I will use the first one. The code does not allow for user input but I’ll be drafting tests as if it does.

Before I write any test I would write:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

They will be useful later.

The first test I would make is a constructor test. I believe it should be one of the first tests: it is good to know the constructor works as it is supposed to, since if it does not, it will need to be revised immediately. So, I would do this:

@Test
public void testMountainBikeConstructor() {
    MountainBike mb = new MountainBike(30, 10, 45);
    assertEquals(30, mb.gear);
    assertEquals(10, mb.speed);
    assertEquals(45, mb.seatHeight);
}

This test takes in three values and checks whether the constructor initializes them correctly. I used the name “mb” because it was already used in the example class. It just made sense to me.

Another test I created tests the setHeight method.

@Test
public void testSetHeight() {
    mountainBike.setHeight(40);
    assertEquals(40, mountainBike.seatHeight);
}

This tests if the height of the mountain bike can handle user input.

@Test
public void testMountainBikeMethods() {
    mountainBike.applyBrake(5);
    assertEquals(10, mountainBike.speed);
    mountainBike.speedUp(10);
    assertEquals(20, mountainBike.speed);
}

This tests if the mountain bike can speed up and brake.

@Test
void testToString() {
    String expected = "No of gears are 6\nspeed of bicycle is 25\nseat height is 10";
    assertEquals(expected, mountainBike.toString());
}

This is supposed to test that the code has the expected output.

For the final test, I wanted to up the ante. What if I could test the limits of the code?

@Test
public void testSpeedLimits() {
    MountainBike mb = new MountainBike(3, 100, 25);
    mb.setSpeed(0);
    assertEquals(0, mb.getSpeed(), "No negative speed!");
    assertThrows(IllegalArgumentException.class, () -> mb.setSpeed(-10), "No negative speed!");
}

Overall, this was an interesting challenge. The main difficulty was finding code to base this project on. Some code was too simple, making it difficult to generate meaningful tests; other code was too complicated, making it difficult to write a significant number of tests. In the end, it was nice to get some practice.

From the blog My Journey through Comp Sci by Joanna Presume and used with permission of the author. All other rights reserved by the author.

On CI/CD Pipelining

 In this post, I’ll be discussing my thoughts on an article I found on the Ministry of Testing website titled “An introduction to Continuous Integration (CI) and Continuous Delivery (CD) pipelines for software testers.” This piece really stood out to me because it highlighted the importance of integrating testing into the continuous integration and continuous delivery (CI/CD) pipeline. I’ve been learning about automated testing and CI/CD practices, and this article helped me better understand how testing can be embedded into each phase of the development cycle to ensure high-quality software and faster release times.

One key point that really resonated with me was the idea of shifting left, which means testing early in the development process. The author explained that integrating tests into the CI/CD pipeline allows teams to detect bugs and issues earlier, rather than waiting until the end of the development cycle. This makes perfect sense to me because I’ve seen firsthand in my career how much more efficient the development process becomes when tests are automated and run continuously. Instead of waiting for a bug to be discovered during a manual testing phase late in the process, CI/CD testing enables teams to catch those issues as they happen, significantly reducing the risk of production bugs and minimizing the effort needed to fix them. When things build up, business units accrue a lot of technical debt, and I end up having to hound them to fix 20 different things at the same time, instead of them being able to handle them as they appear, which CI/CD pipeline testing may help them with.

By incorporating automated tests into the pipeline, I can quickly get feedback about the code I’ve written, allowing me to catch mistakes early. However, I also realized that the article pointed out a very important note that I agree with: not all tests can be fully automated. There are still areas, such as user experience or complex edge cases, that may require more manual or exploratory testing. This balance of automated and manual testing within CI/CD pipelines is something I’ve experienced while developing a public facing status page, where it is not just functionality that needs to be tested, but also human elements, like how the page looks.

The article also discussed how testing within the CI/CD pipeline encourages a mindset of continuous improvement. Each time a test fails or catches an issue, it provides an opportunity to address potential gaps in the process and refine both the tests and the code. I think this aligns perfectly with the idea of being a “Software Apprentice,” always looking for ways to improve and enhance the quality of the product, no matter how far along in the development cycle it may be. Overall, this article reinforced the idea that CI/CD testing is not just about speeding up development—it’s about focusing on quality, where testing is an integral part of every stage.

From the blog Mr. Lancer 987's Blog by Mr. Lancer 987 and used with permission of the author. All other rights reserved by the author.
