CS-443 Blog: Shift Down testing

 CS@Worcester CS-443

 Testing software smarter, not harder: the shift-down strategy | Ministry of Testing

The article “Testing software smarter, not harder: the shift-down strategy” by Manish Saini discusses the importance of focusing test automation efforts closer to the code. This approach, known as shift-down testing, moves away from traditional UI-centric testing to thoroughly testing core code components such as APIs, business logic, and container-based integration. By addressing issues at the code level, the strategy aims to create more reliable and maintainable test suites, ultimately improving software quality.


The shift-down approach involves using mocks for external interactions to isolate and test individual code units effectively. This ensures that potential issues are identified and resolved early in the development process, reducing the overall testing effort required. The article highlights how shift-down testing can lead to more efficient and robust test automation, ultimately resulting in better software outcomes.


I found this article helpful in explaining the benefits of shift-down testing. It does a good job of showing how the method identifies potential issues early in the development process, reducing overall testing effort and improving software quality.

From the blog George C Blog by George Chude and used with permission of the author. All other rights reserved by the author.

Sprint #1: Retrospective

Gitlab Deliverables:

Sprint #1 was a rocky path, but through our group’s collective efforts, we all learned new tools and skills we can implement in future sprints. At the beginning of this sprint, our group agreed that I would be the Scrum Master and, by extension, manage our GitLab activity. In hindsight, this was a perfect designation as I’ve had managerial experience in the past and continue to be a very organizationally driven worker. With these traits, I sought to make our workspace as clear and accommodating as possible, which would provide a strong foundation for our team to begin working. One area I could have improved upon is making sure everyone’s voice is heard. During Sprint #1, my confidence with Linux started at a 2/10, so over the next couple of weeks, I had to refresh my memory while learning new skills. Due to this lack of confidence, I was much more reserved during group conversations as I was trying to soak in as much information as possible. Consequently, I did not explicitly check if everyone felt as if their voice was being heard. Towards the end of the sprint, my confidence with Linux grew greatly. As a result, I was able to participate in group discussions and ensure that everyone was on the same page. 

As a team, our biggest developmental challenge was understanding when problems should be taken on individually or as a group. At the beginning of Sprint #1, my bias weighed heavily towards solving problems as a team, as it would grant everyone equal opportunities to learn from hands-on experience. As we soon found out, this approach is costly in time and does not let people engage with topics that specifically interest them. The task that gave us the most hassle was getting the FrontEnd certified so that the Backend could connect with it. This was our “group task”, which we used our working classes to try to resolve. Beyond this “group task”, we each had individual tasks to look into. This approach to distributing work across teammates and holding each other accountable for learning unique material has been effective so far. After learning a new tool or skill, I ask that the individual create documentation listing the steps or sources they referred to, so that all team members have access to what was learned. There have been no issues with this approach, and we will continue to refine it in Sprint #2.

As previously mentioned in my review of Apprenticeship Patterns, the pattern that has resonated with me the most this sprint was “Be the Worst”. This self-assessment pattern has allowed me to refresh my knowledge of Linux through tasks such as organizing and documenting our completed work. The pattern has the individual actively listen as their teammates discuss current issues; from this, the individual can learn from the shared knowledge of the group and slowly catch up to its level of proficiency. In the case of Sprint #1, my Linux skills were beyond rusty, making me the least proficient in the group. Although I was not able to start helping on the server immediately, I was able to help record our steps. By doing so, I could educate myself on how we approached problems as they arose. Towards the end of Sprint #1, I found my confidence in using Linux and began contributing to the server.

Moving forward, I plan on interacting with the server more, and I have already begun that process by researching encrypted volumes. Additionally, I will continue to refine my skills as our group’s Scrum Master. Now that I am nearly as proficient as the other members of my team, I can focus on making sure we all understand our current goals and that everyone’s voice is heard. In terms of teamwork distribution, we struck a fair balance by the end of Sprint #1. If any issue arises in Sprint #2, I will reconsider how we organize and assign tasks. Fortunately, we now have assigned days for backlog refinement, so we can discuss what changes we would like to make during those periods.

-AG

From the blog CS@Worcester – Computer Science Progression by ageorge4756 and used with permission of the author. All other rights reserved by the author.

The Software Craftsman’s Journey: Embracing Learning and Growth

Reading Apprenticeship Patterns was an eye-opening experience. Initially, I was confused by the book’s structure, particularly the sections labeled Context, Problem, Solution, and Actions. I assumed the book would be highly technical, filled with complex concepts. However, as I kept reading, I realized it was quite the opposite. The book is structured around real-life scenarios that apprentices (interns) or newcomers in the software field might encounter, each followed by practical solutions and guidance.

One thing that stood out to me was how much I wished I had read this book earlier in my academic journey. As a senior in Computer Science who transitioned from Nursing, my early years in this major felt overwhelming. Learning my first programming language felt like being thrown into the ocean with just one pool noodle. Each new concept learned added another pool noodle, but the struggle to stay afloat was real. This book, had it been introduced in my freshman year, could have served as a guide for navigating challenges, understanding the learning process, and overcoming self-doubt.

What truly changed my perspective was how the book reframes the way we approach learning and problem-solving. It’s not just about memorizing syntax or mastering algorithms but about adopting the right mindset when facing challenges. One of the most impactful insights for me was the idea that learning is an ongoing journey, and it’s okay to struggle. I often get frustrated when I forget concepts I previously learned, but the book reassured me that this is normal. The phrase “You must unlearn what you have learned” resonated with me deeply. It reminded me that forgetting is not failure; it’s an opportunity to re-learn with better understanding. This realization has helped me become more forgiving toward myself when struggling with new skills or concepts.

Chapter two, in particular, resonated with me because it addressed the common issue of struggling to acquire new skills and knowledge retention. I often worry that if I don’t practice something immediately, I will forget it, leading to self-doubt. This chapter provided strategies to combat this issue, encouraging a more structured and patient approach to learning. By applying these principles, I feel more confident in my ability to retain knowledge over time.

Overall, Apprenticeship Patterns is an invaluable resource that I believe all Computer Science students should read early in their studies. It doesn’t just teach technical skills but also offers a roadmap for navigating the emotional and intellectual challenges of becoming a Software Craftsman. While I didn’t find anything in the book that I strongly disagreed with, I do wish it had been a part of my curriculum earlier. This book has reshaped how I view my learning journey, making me more comfortable with the idea that mastery takes time, patience, and continuous effort.

From the blog CS@Worcester – CodedBear by donna abayon and used with permission of the author. All other rights reserved by the author.

Why API Testing Matters: Ensuring Robust Software Performance

The blog post discusses why developers should use API testing and how it is becoming increasingly important, particularly as microservices architecture gains popularity. This architecture requires application components to work independently, each with its own data storage and commands. As a result, individual components can be updated quickly without disrupting the whole system, allowing users to continue using the application seamlessly.

Most microservices are based on application programming interfaces (APIs), which specify how to connect with them. APIs usually use REST calls over HTTP to simplify data sharing. Despite this, many testers still rely on user interface (UI) testing, particularly using the popular Selenium automation tool. While UI testing is required to ensure interactive functioning, API testing is more efficient and dependable. It enables testers to edit information in real time and detect flaws early in the development process, even before the user interface is constructed. API testing is also important for identifying security flaws.

To effectively test APIs, it is critical to understand the fundamentals. REST requests retrieve or update data in a database. Each request consists of an HTTP verb (which specifies the action), a URL (which indicates the target), HTTP headers (which provide additional information to the server), and sometimes a request body (which contains the data, usually in JSON or XML). Common HTTP methods are GET (retrieve a record), POST (create a new record), PUT (replace a record), PATCH (partially update a record), and DELETE (remove a record). The URL specifies which data is affected, whereas the request body applies to actions such as POST, PUT, and PATCH.
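As a quick illustration of that anatomy, here is a minimal Python sketch that builds (but does not send) a POST request; the endpoint and payload are hypothetical, invented for this example rather than taken from the blog post:

```python
import json
import urllib.request

# Hypothetical endpoint and payload, for illustration only.
payload = {"name": "Ada", "email": "ada@example.com"}

req = urllib.request.Request(
    url="https://api.example.com/users",           # URL: which data is affected
    method="POST",                                 # HTTP verb: create a record
    headers={"Content-Type": "application/json"},  # extra info for the server
    data=json.dumps(payload).encode("utf-8"),      # request body, as JSON
)

print(req.get_method())      # POST
print(json.loads(req.data))  # {'name': 'Ada', 'email': 'ada@example.com'}
```

All four parts of the request described above appear as keyword arguments, which makes the verb/URL/headers/body breakdown easy to see at a glance.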

When a REST request is made, the server responds with HTTP headers defining the response, a response code indicating if the request was successful, and, in certain cases, a response body containing extra data. The response codes are categorized as follows: 200-level codes represent success, 400-level codes indicate client-side issues, and 500-level codes signify server-side faults.

To effectively test APIs, testers must first understand the types of REST queries supported by the API and any limitations on their use. Developers can use tools like Swagger to document their APIs. Testers should ask clarifying questions about available endpoints, HTTP methods, authorization requirements, needed data, validation limits, and expected response codes.

API testing often begins with creating requests via a user-friendly tool like Postman, which allows for easy viewing of results. The initial tests should focus on “happy paths,” or typical user interactions. These tests should include assertions to ensure that the response code is proper and that the delivered data is accurate. Negative tests should then be run to confirm that the application handles problems correctly, such as erroneous HTTP verbs, missing headers, or illegal requests.
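To make that assertion pattern concrete, here is a sketch using a toy in-process stand-in for an API (the endpoint, fields, and validation rules are all invented for illustration); real tests would make the same kinds of checks against live responses in Postman or a test framework:

```python
# Toy in-memory "API" so happy-path and negative assertions can run
# without a live server. Endpoint and fields are hypothetical.
USERS = {1: {"id": 1, "name": "Ada"}}

def handle_request(method, path, body=None):
    """Return (status_code, response_body), mimicking a minimal REST server."""
    if not path.startswith("/users"):
        return 404, {"error": "not found"}
    if method == "GET":
        user_id = int(path.rsplit("/", 1)[-1])
        if user_id in USERS:
            return 200, USERS[user_id]
        return 404, {"error": "no such user"}
    if method == "POST":
        if not body or "name" not in body:
            return 400, {"error": "name is required"}
        new_id = max(USERS) + 1
        USERS[new_id] = {"id": new_id, "name": body["name"]}
        return 201, USERS[new_id]
    return 405, {"error": "method not allowed"}

# Happy path: assert both the response code and the returned data.
status, data = handle_request("GET", "/users/1")
assert status == 200
assert data["name"] == "Ada"

# Negative tests: bad input and an unsupported verb are handled gracefully.
assert handle_request("POST", "/users", body={})[0] == 400
assert handle_request("PATCH", "/users/1")[0] == 405
```

The happy-path assertions mirror the blog's advice (check the code, then check the data), and the negative tests probe exactly the failure modes it lists: missing data and erroneous HTTP verbs.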

Finally, the blog underlines the necessity of API testing and encourages engineers to transition from UI testing to API testing. This shift enables faster and more reliable testing, which aids in the detection of data manipulation issues and improves security.

Blog: https://simpleprogrammer.com/api-testing/

From the blog CS@Worcester – Matchaman10 by tam nguyen and used with permission of the author. All other rights reserved by the author.

Path Testing in Software Engineering

Path Testing is a structural testing method used in software engineering to design test cases by analyzing the control flow graph of a program. This method helps ensure thorough testing by focusing on linearly independent paths of execution within the program. Let’s dive into the key aspects of path testing and how it can benefit your software development process.

The Path Testing Process

  1. Control Flow Graph: Begin by drawing the control flow graph of the program. This graph represents the program’s code as nodes (each representing a specific instruction or operation) and edges (depicting the flow of control from one instruction to the next). It’s the foundational step for path testing.
  2. Cyclomatic Complexity: Calculate the cyclomatic complexity of the program using McCabe’s formula: V(G) = E − N + 2P, where E is the number of edges, N is the number of nodes, and P is the number of connected components. This complexity measure indicates the number of independent paths in the program.
  3. Identify All Possible Paths: Create a set of all possible paths within the control flow graph. The cardinality of this set should equal the cyclomatic complexity, ensuring that all unique execution paths are accounted for.
  4. Develop Test Cases: For each path identified, develop a corresponding test case that covers that particular path. This ensures comprehensive testing by covering all possible execution scenarios.
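The steps above can be sketched for a tiny function; the control flow graph counts below are worked out by hand for this hypothetical example:

```python
# McCabe's formula for cyclomatic complexity: V(G) = E - N + 2P.
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# A hypothetical function with one decision point:
def absolute(x):
    if x < 0:    # decision node
        x = -x   # executed only on the "then" path
    return x

# Its control flow graph has 3 nodes (the test, the negation, the return)
# and 3 edges (test->negation, test->return, negation->return), so:
assert cyclomatic_complexity(edges=3, nodes=3) == 2  # two independent paths

# Step 4: one test case per independent path.
assert absolute(-5) == 5   # path through the "then" branch
assert absolute(3) == 3    # path that skips the branch
```

The complexity of 2 tells us in advance that two test cases suffice to cover every linearly independent path through this function.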

Path Testing Techniques

  • Control Flow Graph: The initial step is to create a control flow graph, where nodes represent instructions and edges represent the control flow between instructions. This visual representation helps in identifying the structure and flow of the program.
  • Decision to Decision Path: Break down the control flow graph into smaller paths between decision points. By isolating these paths, it’s easier to analyze and test the decision-making logic within the program.
  • Independent Paths: Identify paths that are independent of each other, meaning they cannot be replicated or derived from other paths in the graph. This ensures that each path tested is unique, providing more thorough coverage.

Advantages of Path Testing

Path Testing offers several benefits that make it an essential technique in software engineering:

  • Reduces Redundant Tests: By focusing on unique execution paths, path testing minimizes redundant test cases, leading to more efficient testing.
  • Improves Test Case Design: Emphasizing the program’s logic and control flow helps in designing more effective and relevant test cases.
  • Enhances Software Quality: Comprehensive branch coverage ensures that different parts of the code are tested thoroughly, leading to higher software quality and reliability.

Challenges of Path Testing

While path testing is advantageous, it does come with its own set of challenges:

  • Requires Understanding of Code Structure: To effectively perform path testing, a solid understanding of the program’s code and structure is essential.
  • Increases with Code Complexity: As the complexity of the code increases, the number of possible paths also increases, making it challenging to manage and test all paths.
  • May Miss Some Conditions: There is a possibility that certain conditions or scenarios might not be covered if there are errors or omissions in identifying the paths.

Conclusion

Path Testing is a valuable technique in software engineering that ensures thorough coverage of a program’s execution paths. By focusing on unique and independent paths, this method helps reduce redundant tests and improve overall software quality. However, it requires a deep understanding of the code and may become complex with larger programs. Embracing path testing can lead to more robust and reliable software, ultimately benefiting both developers and end-users.

All of this comes from:

Path Testing in Software Engineering – GeeksforGeeks

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.

Boundary, Equivalence, Edge and Worst Case

I have learned a lot about Boundary Value Testing and Equivalence Class Testing these past few weeks. Equivalence class testing can be divided into two categories: normal and robust. The best way I can explain this is through an example. Let’s say you lose your favorite shirt and have to look for it, but where? Under the normal method you would look in normal, or in a way valid, places like under your bed, in your closet, or in the dresser. Using the robust method, you would look in those usual spots but also include unusual ones. For example, you would look under your bed but then also under the kitchen table. You are looking in spots where you should find a shirt (valid) but also in spots where you should not find a shirt (invalid). Now, in equivalence class testing, robust and normal can each be part of two other categories: weak and strong. Going back to the shirt example, a weak search would have you look in a few spots, but a strong one would have you look everywhere. To summarize, a weak normal equivalence class test has you look in a few usual spots, and a strong normal equivalence class test has you look in many spots. The weak robust and strong robust tests act like the earlier two, but they also have you look in unusual spots.

Boundary value testing casts a smaller net when it comes to testing. It is similar to equivalence class testing but it does not include weak and strong testing. It does have nominal and robust testing. It also has worst-case testing which is unique to boundary testing. I don’t know much about it, so I looked online.

I used this site: Boundary Value Analysis

Worst-case testing removes the single-fault assumption. This means that more than one fault can cause a failure, which leads to more tests. It can be robust or normal, and it is more comprehensive than boundary testing due to its coverage. While normal boundary testing of a function with n variables results in 4n+1 test cases, normal worst-case testing results in 5^n test cases. Think of worst-case testing as putting a magnifying glass on something: from afar you only see one thing, but up close you can see that there is a lot going on. As a result, worst-case testing is used in situations that require a higher degree of testing.
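Those counts can be checked with a short sketch (the input ranges are made up for illustration). Normal boundary value testing varies one variable at a time around its boundaries while holding the others at nominal values, giving 4n+1 cases for n variables; worst-case testing instead takes the full cross product of the five values per variable, giving 5^n cases:

```python
from itertools import product

def boundary_values(lo, hi):
    """The five classic values for one variable: min, min+, nominal, max-, max."""
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

def normal_bva_cases(ranges):
    """4n+1 cases: vary one variable at a time, others held at nominal."""
    nominals = [(lo + hi) // 2 for lo, hi in ranges]
    cases = {tuple(nominals)}                 # the all-nominal case (the +1)
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):    # four boundary values per variable
            case = list(nominals)
            case[i] = v
            cases.add(tuple(case))
    return cases

def worst_case_cases(ranges):
    """5^n cases: cross product of all five values for every variable."""
    return set(product(*(boundary_values(lo, hi) for lo, hi in ranges)))

ranges = [(1, 100), (1, 12)]   # two hypothetical input variables, n = 2
assert len(normal_bva_cases(ranges)) == 4 * 2 + 1   # 9 test cases
assert len(worst_case_cases(ranges)) == 5 ** 2      # 25 test cases
```

The gap widens quickly: at n = 4 the counts are 17 versus 625, which is why worst-case testing is reserved for code that warrants the extra effort.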

I have learned a lot in these past few weeks. I have learned about boundary testing and how it differs when it is robust or normal. I have learned about equivalence class testing and how it varies when it is a combination of weak, normal, robust or strong. I have also learned about edge and worst-case testing. This is all very interesting.

From the blog My Journey through Comp Sci by Joanna Presume and used with permission of the author. All other rights reserved by the author.

Equivalence Class Testing

Week 7 – 3/7/2025

This week, in my last class, we had a POGIL activity on Equivalence Class Testing (ECT). For this week’s source, I watched a YouTube video titled “Equivalence Class Testing Explained,” which covers the essentials of this black-box testing method.

The host of the video defines ECT as a technique for partitioning input data into equivalence classes: partitions whose inputs are expected to yield similar results. To reduce redundant cases without sacrificing coverage, it is sufficient to test one value per class. To demonstrate, the presenter tested a function that takes in integers between 1 and 100. The classes in this example are the invalid lower class (≤0), the invalid upper class (≥101), and the valid class (1–100). The video also emphasized boundary value testing, in which values like 0, 1, 100, and 101 are used to catch common problems at partition boundaries.
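A minimal sketch of that example in Python (the function name is mine, not the video's): one representative value per equivalence class, plus the boundary values:

```python
def accept(n):
    """Hypothetical function under test: valid inputs are integers 1..100."""
    return 1 <= n <= 100

# One representative per equivalence class is enough for class coverage.
assert accept(50)        # valid class (1..100)
assert not accept(-7)    # invalid lower class (<= 0)
assert not accept(250)   # invalid upper class (>= 101)

# Boundary values, where off-by-one bugs tend to hide.
assert not accept(0)
assert accept(1)
assert accept(100)
assert not accept(101)
```

Seven assertions cover all three partitions and all four boundaries, instead of a test per possible input.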

I chose this video because our course covers ECT and I wanted more information about the topic, since the course textbook was difficult to follow. The class activity introduced me to the material, but the video clarified it better. Its visual illustrations and step-by-step discussion made the practical application of ECT clear. The speaker’s observation about maintaining a balance between being thorough and being effective resonated with me, especially after spending hours writing duplicate test cases for a recent project.

Before watching, I thought that thorough testing had to cover all possible inputs. The video rebutted this by demonstrating how ECT reduces effort without losing effectiveness, and I realized that my previous method of testing each edge case individually was not practical. Another fascinating point was the difference between valid and invalid classes. In a previous project I had dealt primarily with “correct” inputs and neglected how the system handled wrong data. After watching the video’s demonstration, I realize how crucial both kinds of testing are for ensuring robustness, and I will adopt this approach in future projects where needed.

My perception of testing has changed because of this video, from a boring activity to a sensible one. It serves the needs of our course directly by teaching efficient, scalable engineering practices. With ECT I can create fewer, yet stronger, tests, which will surely help me as a software programmer. Equivalence class testing is a kit for wiser problem-solving, not just theory, and I want to keep practicing it.

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.