Category Archives: CS-443

On CI/CD Pipelining

In this post, I’ll be discussing my thoughts on an article I found on the Ministry of Testing website titled “An introduction to Continuous Integration (CI) and Continuous Delivery (CD) pipelines for software testers.” This piece really stood out to me because it highlighted the importance of integrating testing into the continuous integration and continuous delivery (CI/CD) pipeline. I’ve been learning about automated testing and CI/CD practices, and this article helped me better understand how testing can be embedded into each phase of the development cycle to ensure high-quality software and faster release times.

One key point that really resonated with me was the idea of shifting left, which means testing early in the development process. The author explained that integrating tests into the CI/CD pipeline allows teams to detect bugs and issues earlier, rather than waiting until the end of the development cycle. This makes perfect sense to me because I’ve seen firsthand in my career how much more efficient the development process becomes when tests are automated and run continuously. Instead of waiting for a bug to be discovered during a manual testing phase late in the process, CI/CD testing enables teams to catch those issues as they happen, significantly reducing the risk of production bugs and minimizing the effort needed to fix them. When issues pile up, business units accrue a lot of technical debt, and I end up having to hound them to fix twenty different things at once instead of handling each one as it appears; testing in the CI/CD pipeline could help prevent exactly that.
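The article itself doesn’t include code, but a minimal sketch helps show what “catching issues as they happen” means in practice: a fast pytest suite that the pipeline runs on every push, so a regression fails the build minutes after the commit. The pricing function below is hypothetical, purely for illustration.

```python
# test_pricing.py -- a fast unit-test file a CI pipeline could run on
# every push (e.g., a "test" stage that simply invokes pytest).
# apply_discount and its rules are made up for this example.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_happy_path():
    assert apply_discount(100.0, 25) == 75.0


def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

With a suite like this wired into the pipeline, any commit that breaks the discount rules is flagged immediately instead of surfacing in a late manual pass.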

By incorporating automated tests into the pipeline, I can quickly get feedback about the code I’ve written, allowing me to catch mistakes early. However, the article also made a very important point that I agree with: not all tests can be fully automated. There are still areas, such as user experience or complex edge cases, that may require more manual or exploratory testing. This balance of automated and manual testing within CI/CD pipelines is something I’ve experienced while developing a public-facing status page, where it is not just functionality that needs testing but also human elements, like how the page looks.

The article also discussed how testing within the CI/CD pipeline encourages a mindset of continuous improvement. Each time a test fails or catches an issue, it provides an opportunity to address potential gaps in the process and refine both the tests and the code. I think this aligns perfectly with the idea of being a “Software Apprentice,” always looking for ways to improve and enhance the quality of the product, no matter how far along in the development cycle it may be. Overall, this article reinforced the idea that CI/CD testing is not just about speeding up development—it’s about focusing on quality, where testing is an integral part of every stage.

From the blog Mr. Lancer 987's Blog by Mr. Lancer 987 and used with permission of the author. All other rights reserved by the author.

The Key Pillars of API Testing

Application Programming Interfaces (APIs) provide the framework for how pieces of software communicate with each other and with their users. Therefore, proper API testing is key to delivering a reliable product. In a blog post on Medium, Andrey Enin estimates that around three-quarters of product functionality depends on designing the API correctly. He also lists some of what he considers the core ideology behind how these tests are designed and conducted.

Firstly, an API, like any program, must be tested to ensure it works properly. Functionality testing involves sending requests to the API and comparing the results with the intended specifications. End-to-end tests are useful here: multiple calls are chained together, often across multiple APIs, to replicate the end-user experience and bring bugs to the surface.
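To make that concrete, here is a small sketch of my own (not from Enin’s post) of a functionality test that chains two calls against a hypothetical REST endpoint and compares each response against the spec:

```python
# Functionality/end-to-end sketch: create a resource, then read it
# back. The host, endpoints, and response fields are assumptions.
import requests

BASE_URL = "https://api.example.com"  # placeholder host


def test_create_and_fetch_user():
    # Step 1 of the chain: create a user and check the documented
    # status code for creation.
    created = requests.post(f"{BASE_URL}/users", json={"name": "Ada"}, timeout=5)
    assert created.status_code == 201
    user_id = created.json()["id"]

    # Step 2: fetch the same user back and compare the payload with
    # what the specification promises.
    fetched = requests.get(f"{BASE_URL}/users/{user_id}", timeout=5)
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "Ada"
```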

Performance testing is another pillar. Measuring how the software and API behave under heavy use or high traffic loads improves stability by surfacing issues in a controlled environment. Compatibility testing falls under a similar umbrella: differing and often “unexpected” requests are sent to the API to see how the program responds, and the interface is exercised in different environments. While performance testing leans on stress tests and compatibility testing on varied inputs and environments, the philosophy is the same: better for it to break now, when fixes are cheap to implement, than after release.
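Serious performance testing belongs in a dedicated tool such as Locust or k6, but the core idea can be sketched in a few lines of Python: fire a burst of concurrent requests at one endpoint and confirm nothing breaks or slows to a crawl. The URL and numbers below are placeholders.

```python
# Rough load-test sketch: 200 concurrent-ish GETs against one endpoint.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://api.example.com/health"  # placeholder endpoint


def timed_get(_):
    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    return response.status_code, time.perf_counter() - start


with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(timed_get, range(200)))

# Break it now, in a controlled environment: every request should
# still succeed, and the slowest one should stay within budget.
assert all(code == 200 for code, _ in results)
print(f"slowest response: {max(t for _, t in results):.3f}s")
```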

In a similar vein is error testing. Quality software should handle errors without suffering catastrophic failure. With API testing, this generally consists of sending intentionally incorrect or malformed requests to ensure the interface returns the relevant error responses.
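For instance (endpoint and payloads assumed), negative tests can assert that malformed input yields a clean 4xx error rather than a crash or a generic 500:

```python
# Error-testing sketch: intentionally bad requests should come back
# with the documented error responses.
import requests

BASE_URL = "https://api.example.com"  # placeholder host


def test_missing_required_field_returns_400():
    response = requests.post(f"{BASE_URL}/users", json={}, timeout=5)
    assert response.status_code == 400


def test_malformed_body_returns_400():
    response = requests.post(
        f"{BASE_URL}/users",
        data="not json at all",
        headers={"Content-Type": "application/json"},
        timeout=5,
    )
    assert response.status_code == 400
```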

Testing the usability of the API is somewhat different, in that it refers to the developer experience when working with the interface. Some of the examples listed in the aforementioned Medium post include adhering to OpenAPI specifications, using uniform naming conventions, and making sure documentation is comprehensive and unambiguous. Treating developers as the end users in this way helps create developer-friendly interfaces that stay popular and remain relevant in the industry.
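Parts of usability are subjective, but some of it can be checked mechanically. Teams often use a spec linter such as Spectral for this; as a homemade illustration, the sketch below loads an OpenAPI document (file name assumed) and verifies that every path segment follows one naming convention:

```python
# Usability sketch: lint OpenAPI paths for consistent lowercase,
# kebab-case naming. "openapi.json" is an assumed file name.
import json
import re

SEGMENT = re.compile(r"^([a-z0-9]+(-[a-z0-9]+)*|\{[a-zA-Z]+\})$")

with open("openapi.json") as f:
    spec = json.load(f)

for path in spec.get("paths", {}):
    trimmed = path.strip("/")
    if not trimmed:  # skip the root path
        continue
    for segment in trimmed.split("/"):
        assert SEGMENT.match(segment), f"inconsistent segment {segment!r} in {path}"
print("all paths follow the naming convention")
```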

Finally, security testing involves checking that the API carries data securely and is protected from unauthorized or malicious use. Transmitting data over HTTPS is a common method of protecting sensitive information and is generally considered standard for most web traffic. Testing the interface’s authentication methods for vulnerabilities guards against outside access to that sensitive information and prevents unauthorized use of the interface.
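A couple of basic authentication checks illustrate the idea; the endpoint, header scheme, and status codes here are assumptions for the sketch:

```python
# Security sketch: requests without valid credentials must be refused.
import requests

BASE_URL = "https://api.example.com"  # placeholder host


def test_unauthenticated_request_is_rejected():
    response = requests.get(f"{BASE_URL}/admin/users", timeout=5)
    assert response.status_code in (401, 403)


def test_forged_token_is_rejected():
    headers = {"Authorization": "Bearer not-a-real-token"}
    response = requests.get(f"{BASE_URL}/admin/users", headers=headers, timeout=5)
    assert response.status_code in (401, 403)
```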

As a final note, it’s helpful to consider these testing approaches as the foundation of a quality-assurance standard rather than a checklist to follow. A single test may address several of these areas, and some approaches will matter more than others depending on the project. Considering each aspect of API testing throughout the development cycle, not just in the testing environment, helps ensure the quality of the final product.

References:

Enin, Andrey. “Field Notes in API Testing, Part 1: Areas of Focus.” Medium, 25 Jan. 2025, adequatica.medium.com/field-notes-in-api-testing-part-1-areas-of-focus-46b516ccacf4#4439.

From the blog Griffin Butler Computer Science Blog by Griffin Butler and used with permission of the author. All other rights reserved by the author.

Unit Testing: Decision Tables

Week 5 – 2/23/2025

For this week’s blog, I recently came across an insightful article titled “Decision Table Testing: A Comprehensive Guide” on the Testsigma website. This article provided a detailed overview of decision table testing, a technique for testing system behavior for various input combinations. The article not only defined the concept but also went over its applicability, benefits, and practical applications. 

I chose this resource because we had just done a POGIL activity on decision table testing in class, and I wanted to learn more about how it works in real-world circumstances. The article stood out to me because it was well organized, simple to understand, and contained practical examples to make the subject more approachable. As someone who is still learning about software testing, I found this material to be both instructive and accessible.

The article begins by defining decision table testing as a black-box testing technique for determining how a system responds to different input combinations. It then describes the structure of a decision table, which is made up of conditions (inputs) and actions (outputs), as well as how to generate one. What I found most useful was the step-by-step illustration of how to apply decision table testing to a login system. This example helped me visualize how the strategy works in practice.
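To make the structure concrete, here is my own miniature version of that login example (the Testsigma article describes the table; the login() helper below is a stand-in I wrote). Each parametrized row is one rule: a combination of conditions mapped to its expected action.

```python
# Decision-table sketch for a login check, one pytest case per rule.
# Conditions: username valid? password valid?  Action: the message.
import pytest

VALID_USERS = {"ada": "secret123"}  # assumed credential store


def login(username: str, password: str) -> str:
    if username not in VALID_USERS:
        return "unknown user"
    if VALID_USERS[username] != password:
        return "wrong password"
    return "logged in"


# Rule: (user valid, password valid) -> expected action
@pytest.mark.parametrize("username,password,expected", [
    ("ada", "secret123", "logged in"),       # R1: T, T -> success
    ("ada", "oops",      "wrong password"),  # R2: T, F -> error
    ("bob", "secret123", "unknown user"),    # R3: F, T -> error
    ("bob", "oops",      "unknown user"),    # R4: F, F -> error
])
def test_login_decision_table(username, password, expected):
    assert login(username, password) == expected
```

Covering all four condition combinations in one table is exactly what makes the edge cases hard to miss.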

One of the most important takeaways for me was the emphasis on the value of decision table testing in dealing with complex business logic. The article explained how this technique ensures that all conceivable scenarios are examined, lowering the likelihood of missing key edge cases. This spoke to me because, in my limited experience, I’ve seen how easy it is to overlook specific input combinations, particularly in systems with several decision points. The blog also covered decision table testing’s limits, such as its inefficiency for systems with a large number of inputs, which helped me understand when to utilize this technique and when to look into alternatives.

Reading this article has greatly increased my understanding of decision table testing. I’m now more confident in my ability to apply this strategy to future projects. For example, I envision myself utilizing decision tables to evaluate systems with well-defined criteria, such as payment processing or eligibility verification. In addition, the blog emphasized the need for thorough testing and considering all possible circumstances, which I will include in my testing methods.

Overall, this article was a helpful resource for my learning experience. It not only simplified a subject that I was having trouble understanding, but it also provided practical insights that I may use in the future. This article is highly recommended to anyone looking for a clear and practical explanation of decision table testing.

https://testsigma.com/blog/decision-table-testing/

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

On Integrating Automated Testing

In this post, I’ll be discussing my thoughts on a recent article I read on the Software Testing Help website. The piece really struck me because it reinforced many of the ideas I’ve come to believe about the role of testing in the software development lifecycle, particularly how automation can improve both speed and quality. I’ve always been a fan of automated testing, but this article helped me think more deeply about how it should fit into the broader testing strategy.

One of the key points in the article was the idea of balancing automation with manual testing. While automation is critical for repetitive tasks and quick feedback, the author pointed out that certain aspects of testing—like user experience—cannot be fully captured by automated scripts. This really resonated with me, as I’ve encountered situations where automation was great for catching functional issues, but it missed some of the nuance that a manual tester might be able to spot. I think it’s a reminder that we should never rely too heavily on automation, and that human insight still has an important role to play.

In my own experience, automated testing has been a huge time-saver, especially for regression testing. It helps ensure that previously working functionality remains intact as new features are added. But I’ve also seen the limitations, particularly when automated tests don’t cover edge cases or fail to reflect real-world scenarios. I’ve learned that a good testing strategy needs to integrate both approaches—automation for efficiency and manual testing for critical thinking and creativity. I’ve gotten in the habit of giving my automated tests a mental once-over to make sure they still cover everything I can think of, instead of just blindly assuming they do.

The article also emphasized the importance of writing testable code to support automation. This is something I think I can improve on in my own work. By considering testability from the start, we can avoid technical debt and create more maintainable, reliable systems. Writing code with testing in mind encourages good design practices and ensures that automated tests are effective.
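One concrete habit that makes code testable is depending on interfaces rather than concrete services, so a test can substitute a fake. This sketch is my own illustration (the mailer and its API are hypothetical):

```python
# Testability sketch: inject the dependency, then fake it in the test.
from typing import Protocol


class Mailer(Protocol):
    def send(self, to: str, body: str) -> None: ...


def notify_user(mailer: Mailer, email: str, message: str) -> None:
    # Depending on the Mailer interface (not a concrete SMTP client)
    # lets tests pass in a fake and assert on what was "sent".
    mailer.send(to=email, body=message)


class FakeMailer:
    def __init__(self) -> None:
        self.sent = []

    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))


def test_notify_user_sends_one_message():
    fake = FakeMailer()
    notify_user(fake, "a@example.com", "hello")
    assert fake.sent == [("a@example.com", "hello")]
```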

Lastly, the article touched on continuous integration (CI) and how automated tests play a vital role in CI pipelines. This is something I’ve been trying to implement more consistently, and I’m seeing the value of catching bugs early, before they make it to production. It’s a mindset of constant improvement that aligns well with the idea of being a “Software Apprentice”—always refining and enhancing our process.

In conclusion, this article reaffirmed the importance of finding the right balance between automated and manual testing. As I continue my journey as a developer, I’ll be more mindful of how I integrate both into my workflow to ensure quality and efficiency.


From the blog Mr. Lancer 987's Blog by Mr. Lancer 987 and used with permission of the author. All other rights reserved by the author.
