Category Archives: Week 5

Learn how to write unit tests

Hello everyone,

Today I want to share a tutorial that explains unit testing with the JUnit 5 framework (JUnit Jupiter). It covers creating JUnit 5 tests with the Maven and Gradle build systems. After reading it, you will definitely gain something new.

Here is the link:

https://www.vogella.com/tutorials/JUnit/article.html

“JUnit 5 tutorial – Learn how to write unit tests”

By Vogella GmbH

As a new software testing student, I recently stumbled upon the JUnit tutorial on the Vogella website while browsing for information about software testing. In this blog post, I will share my insights and discuss how this knowledge can be applied to our development practices.

This tutorial is a comprehensive guide that covers many aspects of JUnit, including “What is software testing”, “Using Maven in the Eclipse IDE”, “Using the Eclipse IDE for creating and running JUnit tests”, “JUnit 5 extensions”, and so on. In chapters 1.2 and 1.3, the article demonstrates how to use assertions to check expected results against actual results. Because assertions can express almost every check a test scenario requires, being proficient with them is necessary for a software tester. In chapters 1.5 and 1.6, the article explains that unit tests are created in a separate source folder to keep the test code separate from the real code. This is exactly the problem I ran into in HW1: Gitpod could not identify my JUnit 5 commands because my test code and the real code were in the same path. Instead, we should have a main path and a test path, as follows:

src/main/java – for Java classes

src/test/java – for test classes
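To make that layout concrete, here is a minimal sketch following the Maven convention; the Calculator class, its add method, and the test class are hypothetical names used only for illustration:

// src/main/java/Calculator.java (production code lives under src/main/java)
public class Calculator {
    public int add(int a, int b) {
        return a + b;
    }
}

// src/test/java/CalculatorTest.java (test code lives under src/test/java)
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class CalculatorTest {
    @Test
    void addReturnsSum() {
        // the build tool runs this from the test source folder,
        // while only src/main/java ends up in the packaged application
        assertEquals(5, new Calculator().add(2, 3));
    }
}

With this split, the test sources are compiled and run during the test phase but are kept out of the final artifact.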

Chapter 2 of the article explains how to use assertions and assumptions. There are also examples of testing for exceptions and testing multiple assertions (grouped assertions) with assertAll, which we need to be proficient in and will use in the following assignments.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

@Test
void exceptionTesting() {
    // set up user

    Throwable exception = assertThrows(IllegalArgumentException.class, () -> user.setAge("23"));

    assertEquals("Age must be an Integer.", exception.getMessage());
}
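The grouped assertions from chapter 2 look roughly like the following self-contained sketch; the test class and the values in it are my own, not taken from the tutorial. The point of assertAll is that every assertion is evaluated and all failures are reported together instead of the test stopping at the first one.

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

public class GroupedAssertionsTest {
    @Test
    void groupedAssertions() {
        String name = "JUnit Jupiter";

        // all assertions run, and any failures are reported together under the "name" heading
        assertAll("name",
            () -> assertEquals(13, name.length()),
            () -> assertTrue(name.startsWith("JUnit"))
        );
    }
}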

Anyway, that’s why I recommend this tutorial. As a student who has just started learning software testing, I need not only theoretical knowledge about JUnit but also examples and explanations of how to apply that knowledge in practice. This tutorial is exactly what we need: high quality and easy to understand. It left a lasting impression on me, and I hope everyone will learn something from it.

From the blog CS@Worcester – Ty-Blog by Tianyuan Wang and used with permission of the author. All other rights reserved by the author.

Exploring Boundary Value Testing and Equivalence Class Testing in JUnit

Software testing is a crucial aspect of the software development lifecycle. Among various testing techniques, Boundary Value Testing and Equivalence Class Testing stand out for their effectiveness in identifying defects early in the development process. In this blog post, we delve into these two techniques and discuss their implementation using JUnit, a popular Java testing framework.

Boundary Value Testing: Boundary Value Testing focuses on testing the boundaries of input ranges. The rationale behind this technique is that bugs often lurk around the edges of acceptable input values. By testing values at the boundaries, we increase the likelihood of uncovering potential defects. For example, if a function accepts input within the range of 1 to 100, we would test values such as 0, 1, 100, and 101.

In JUnit, implementing Boundary Value Testing involves writing test cases that specifically target boundary values. By using assertions to verify the behavior of the function at these critical points, developers can gain confidence in the robustness of their code.
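As a hedged illustration (the accept method and its 1-to-100 rule are assumptions made up for this sketch, not code from any particular project), a boundary value test in JUnit 5 might look like this:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

public class BoundaryValueTest {
    // hypothetical validator: accepts values in the range 1 to 100 inclusive
    private boolean accept(int value) {
        return value >= 1 && value <= 100;
    }

    @Test
    void valuesJustOutsideTheBoundariesAreRejected() {
        assertFalse(accept(0));
        assertFalse(accept(101));
    }

    @Test
    void valuesOnTheBoundariesAreAccepted() {
        assertTrue(accept(1));
        assertTrue(accept(100));
    }
}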

Equivalence Class Testing: Equivalence Class Testing aims to reduce redundancy in test cases by partitioning the input domain into equivalence classes. Each equivalence class represents a set of input values that should produce the same output when processed by the function under test. By selecting representative values from each equivalence class, testers can ensure adequate coverage without testing every possible input value.

In JUnit, implementing Equivalence Class Testing involves creating test cases that cover each equivalence class. Test inputs are chosen strategically to represent the entire range of possible inputs within each class. This approach not only improves test coverage but also makes test suites more manageable and maintainable.
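A rough sketch of equivalence class testing in JUnit 5, assuming a hypothetical shippingCost method whose input domain splits into three classes (negative weight is rejected, 0 to 10 kg gets a flat rate, anything heavier gets a higher rate); one representative value per class is enough:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

public class EquivalenceClassTest {
    // hypothetical function under test: flat rate up to 10 kg, higher rate above that
    private double shippingCost(double weightKg) {
        if (weightKg < 0) {
            throw new IllegalArgumentException("Weight must be non-negative.");
        }
        return weightKg <= 10 ? 5.0 : 12.0;
    }

    @Test
    void oneRepresentativeValuePerEquivalenceClass() {
        assertThrows(IllegalArgumentException.class, () -> shippingCost(-3)); // invalid class
        assertEquals(5.0, shippingCost(4));   // valid class: 0 to 10 kg
        assertEquals(12.0, shippingCost(25)); // valid class: above 10 kg
    }
}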

Combining Boundary Value Testing and Equivalence Class Testing: While both techniques offer unique benefits, they are most effective when used together. By combining Boundary Value Testing to test the edges of input ranges and Equivalence Class Testing to cover representative values within each range, testers can achieve thorough test coverage with minimal redundancy.

Boundary Value Testing and Equivalence Class Testing are powerful techniques for improving the quality of software. By leveraging JUnit, developers can easily implement these techniques within their Java projects. By understanding the principles behind these testing strategies and applying them effectively, teams can build more robust and reliable software products.

Incorporating these testing techniques into your development workflow can help catch bugs early, reduce the risk of defects slipping into production, and ultimately enhance the overall quality of your software.

Link: JUnit 5 User Guide – https://junit.org/junit5/docs/current/user-guide/

From the blog Discoveries in CS world by mgl1990 and used with permission of the author. All other rights reserved by the author.

week 5 blog

Individual apprenticeship pattern blog post - Use Your Title.

Hi all, and welcome to my blog, where today I will be talking about the pattern “Use Your Title.” This pattern caught my attention for several reasons that I will go through in detail in this post. First, let’s explain the pattern itself and give a little context about where a title comes from. Say you work at a job and they give you a certain title, like “assistant pharmacist”. Some time later they will want to promote you: the title of your position changes, but you may still get paid the same amount. For example, I was known as a “CAD assistant designer” at the beginning of my job. I worked hard for about three years, then I got “promoted” to “CAD designer”, which means I am no longer an assistant designer. My responsibilities changed with the title, and even if the pay stays the same, others in the office, or even outside the office, will respect you more because of it.

If the job title doesn’t match the description of what you see yourself doing, there are things you can do to change that. The pattern states that the job description is a distraction and should be kept on the outskirts of your consciousness, but I don’t think I agree with that. For me, a job title is something you should be proud of. If I am not proud of my job title, that just gives me more fire to work harder and earn the title I deserve. To address this, the pattern suggests writing down a description of your job and making sure it fits the work you do in the office, with enough detail that a stranger reading it would understand your role. Sometimes the work you do goes beyond your title: you could hold a position of authority on your team even though the job description doesn’t state it, which is fine as long as everyone respects you and sees the work you are putting in.

From the blog CS@Worcester – Farouk's blog by afarouk1 and used with permission of the author. All other rights reserved by the author.

Bugs: Severity vs. Priority, and Why It Matters

While looking for more articles online about testing, I came across Sauce Labs. This website is another compendium of blogs, similar to StickyMinds, which I mentioned in my last blog. The layout and articles on this website were an excellent selection of resources dedicated to testing and quality assurance. Having run into many bugs in my most recent projects and assignments, I was searching for an article dedicated to programming bugs. I found one that fit my needs, dedicated to the differences between the severity and the priority of bugs, which is something I had not previously considered. The article’s title is Understanding Bug Severity vs. Priority: Key Differences and Best Practices, by Chris Tozzi.

The article provides (as the title indicates) a breakdown of bug severity and bug priority in software testing. It explains that bug severity is primarily concerned with a bug’s impact on the system’s functionality. Issues are categorized into the following severity levels: critical, major, minor, and trivial, each with a different level of impact on the system. For example, critical bugs may cause system crashes, while trivial bugs have only a small impact on functionality and performance. Bug priority focuses on the order in which bugs should be addressed, considering factors that affect the work as well as the program, such as deadlines or bugs that are caused by higher-priority bugs. Bugs with high priority require immediate attention, while those with low priority can be addressed later as the program develops and is tested further. Being able to identify the severity and priority of bugs throughout a system’s development is an extremely important skill, as it facilitates a smoother and more efficient development cycle. The article also provides examples, such as a critical-severity bug that causes data loss; this bug would ALSO have high priority because it is causing a lot of damage to the system’s structure. The article offers very useful recommendations for bug reports, such as establishing clear classification guidelines (priority/severity levels), communicating efficiently within teams, and reassessing bug statuses frequently.

I chose this blog because of my most recent experience in my assignments and projects. I had found that many bugs relied on each other, and I had never previously taken the initiative to learn more about bugs. When working with a team in industry, I will now have an understanding of how bugs are classified, as well as how to manage bug reports. This is something I had very little experience with prior to this article, and I will make sure to put extra effort into bug reports in my own projects with peers. I will continue to look into articles about bugs, testing, and QA to further improve my knowledge.

Source:
https://saucelabs.com/resources/blog/bug-severity-vs-priority

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

Foundations of Unit Testing in Software Development

In software development, Unit Testing turns out to be a crucial phase within the software testing lifecycle, ensuring that each component or “unit” of the software performs as intended and designed. I chose this topic mainly because it is all we have been doing for the past two weeks in class, and the whole course is about quality testing and assurance, so it makes sense. Also, since I am not a software developer (I’m a data analytics kind of guy), I have been trying really hard to stay motivated and enjoy the struggle of doing this course’s assignments.

The GeeksforGeeks article provides a comprehensive overview of Unit Testing, defining it as a level of software testing where the individual units or components of a piece of software are tested. The primary goal is to validate that each unit of the software code performs as expected. Unit Testing is typically performed by the developers themselves or by QA engineers, emphasizing the importance of testing small parts of the project independently for errors. The article outlines the process, benefits, and challenges of Unit Testing, along with examples of tools that facilitate this testing method, such as JUnit for Java and NUnit for .NET Framework applications.
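To ground that definition, here is about the smallest possible JUnit 5 example of testing one unit in isolation; the isEven method is a made-up example of mine, not something taken from the article:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

public class IsEvenTest {
    // the "unit" under test: a single small method checked independently of the rest of the system
    private static boolean isEven(int n) {
        return n % 2 == 0;
    }

    @Test
    void evenNumberIsRecognized() {
        assertTrue(isEven(4));
    }

    @Test
    void oddNumberIsRejected() {
        assertFalse(isEven(7));
    }
}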

I went with this specific post because of its clarity and depth in explaining Unit Testing. It aligns perfectly with our course’s focus on software quality and testing, providing a solid foundation for understanding the complexities and nuances of unit testing within the software development lifecycle.

Upon reflecting on the content, the article really highlighted the significance of Unit Testing in early bug detection, which saves not only time but also costs in the later stages of development. The emphasis on Unit Testing as a foundation for more comprehensive testing methods resonated with my understanding of a layered testing strategy. It was really interesting to learn about the various tools and frameworks that support Unit Testing (though I may not know how to use them yet) across different programming languages, showing the universality and critical nature of this testing approach.

The knowledge gained from the article reinforces my commitment to integrating Unit Testing into future development projects, recognizing its role in maintaining high-quality code and facilitating agile development processes. I am motivated to delve deeper into Unit Testing frameworks.

Unit Testing emerges as an indispensable practice in software development. The article serves as a valuable resource for understanding the principles and practices of Unit Testing, leading to a deeper appreciation for meticulous testing methodologies. As I move forward, the principles outlined in this resource will guide my approach to quality assurance and testing.

From the blog CS@Worcester – Josies Notes by josielrivas and used with permission of the author. All other rights reserved by the author.

Behavioral Testing

For this week’s blog post, I chose the article “Behavior Testing | What it is, Why & How to Automate?” from testsigma.com. I selected this article because it fits within the behavioral testing section of the course topics in the syllabus. The article goes into great detail about behavioral testing, from discussing what it is to explaining how AI could be used in its implementation. For this blog post, however, I will discuss the sections on what behavioral testing is and a couple of methods that can be used to catch errors with behavioral testing.

The article describes behavioral testing as a form of functional testing designed to test the external functionality of a system. “Behavior testing or behavioral testing is a type of testing that focuses on testing the external behavior of a software application. It is a type of functional testing. It helps ensure that software systems meet the expectations and requirements of end-users, making it a valuable part of the software development and testing process. Behavior testing is also known as black-box testing.” As described by the article, behavioral testing is essential to ensure that the systems or products you are designing work well enough so your customers can use them efficiently. There are many different methods when using behavioral testing to find errors, such as equivalence partitioning.

According to the article, one method that can be used with behavioral testing and is good at finding errors is equivalence partitioning. “The equivalence partitioning testing technique involves dividing the input data into different classes or partitions, such as valid and invalid data, assuming the system will behave the same for both inputs. Example – For a login form, if the password requires at least eight characters, you might test one case with a 6-character password (invalid) and another with a 10-character password (valid).” When using equivalence partitioning, because you are dividing inputs into separate groups, you can in a way do two things at once: confirm that the system functions as it should with valid inputs and catch invalid inputs. Another way behavioral testing can be implemented is through boundary value analysis.
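Before turning to boundary value analysis, here is a rough JUnit 5 sketch of the article’s login-form partitioning example; the isValidPassword helper is an assumption of mine standing in for the application’s real validation, with one representative value from the invalid partition and one from the valid partition:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

public class PasswordPartitionTest {
    // hypothetical rule from the article's example: passwords need at least eight characters
    private boolean isValidPassword(String password) {
        return password != null && password.length() >= 8;
    }

    @Test
    void sixCharacterPasswordFallsInTheInvalidPartition() {
        assertFalse(isValidPassword("abc123"));    // 6 characters
    }

    @Test
    void tenCharacterPasswordFallsInTheValidPartition() {
        assertTrue(isValidPassword("abc123xyz0")); // 10 characters
    }
}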

According to the article, boundary value analysis is a form of behavioral testing focusing on the possible range of inputs, specifically numerical inputs. “It focuses on testing the boundaries of input ranges, as errors often occur at the edges of these ranges. Test cases are designed for values at the lower and upper boundaries and just above and below. Example – If an input field accepts values from 1 to 100, the test data can be 0, 1, 2, 99, 100, and 101.” This kind of testing can be very helpful in making sure that you have accounted for the possible range of inputs that a user may enter, both valid and invalid.
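One way to exercise exactly that test data in JUnit 5 is a parameterized test; this is a sketch that assumes the junit-jupiter-params module is on the classpath and uses a hypothetical inRange check for the 1-to-100 field:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

public class InputFieldBoundaryTest {
    // hypothetical validation for a field that accepts values from 1 to 100
    private boolean inRange(int value) {
        return value >= 1 && value <= 100;
    }

    @ParameterizedTest
    @CsvSource({"0, false", "1, true", "2, true", "99, true", "100, true", "101, false"})
    void boundaryValuesBehaveAsExpected(int value, boolean expected) {
        // one test method covers all six boundary values from the article's example
        assertEquals(expected, inRange(value));
    }
}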

Article: https://testsigma.com/guides/behavior-testing/

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.

Mastering JUnit

Dive into the world of JUnit, the leading Java testing framework. Learn how JUnit streamlines writing, running, and managing tests for robust Java applications.

Introduction to JUnit: Elevating Java Testing to New Heights

Testing is the backbone of any robust software development process, and when it comes to Java, JUnit is the name of the game. As a pivotal Java testing framework, JUnit simplifies the creation and management of tests, ensuring your code stands up to the rigors of use. But what makes JUnit the go-to framework for Java developers? Let’s dive in and uncover the essentials of JUnit, from its core functionalities to setting up your first test suite, ensuring you’re well-equipped to harness the full power of this testing framework.

A Deep Dive into JUnit’s Capabilities

JUnit, inspired by the xUnit architecture, provides a structured way to write and run automated tests. This flexibility extends to various types of tests, including unit, integration, and functional tests, each serving a unique purpose in the development lifecycle. Unit tests scrutinize individual components for correctness, integration tests ensure components work seamlessly together, and functional tests validate the system’s operation against requirements.

At its core, JUnit facilitates test creation through annotations, enabling straightforward test case structuring. Assertions play a critical role here, allowing developers to validate expected outcomes. Additionally, JUnit’s test runners and suites offer a streamlined approach to execute and organize tests, complemented by comprehensive reporting tools that shed light on test outcomes.
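As a small, hedged illustration of those building blocks (the list-based example here is my own, not taken from any particular tutorial): @BeforeEach handles shared setup, @Test marks each test case, and assertions state the expected outcomes.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

public class ListTest {
    private List<String> names;

    @BeforeEach
    void setUp() {
        // runs before every @Test, giving each test a fresh fixture
        names = new ArrayList<>();
    }

    @Test
    void newListStartsEmpty() {
        assertTrue(names.isEmpty());
    }

    @Test
    void addingAnElementGrowsTheList() {
        names.add("JUnit");
        assertEquals(1, names.size());
    }
}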

Setting the Stage for JUnit Testing

Getting started with JUnit is a breeze, especially within popular IDEs like Eclipse. Installation is straightforward, involving the addition of JUnit to your project’s build path. Once set up, creating a standard test file is your first step toward leveraging JUnit’s testing prowess. This involves defining test methods, utilizing JUnit’s annotations, and employing assertions to verify code behavior.

Crafting Your First Test Class

A well-structured test class is your blueprint for effective testing. Adherence to best practices, such as minimizing class size and focusing on relevant tests, is paramount. Utilize assertions to enforce expected outcomes, and maintain regular test runs to catch and rectify issues early. This iterative process not only enhances code quality but also bolsters your confidence in the software you develop.

Conclusion: Unlocking JUnit’s Full Potential

JUnit’s significance in Java development cannot be overstated. By facilitating efficient, reliable testing, JUnit empowers developers to produce higher-quality code. Whether you’re new to JUnit or looking to refine your testing strategy, understanding and applying JUnit’s features will undoubtedly elevate your development process. So, why not take the leap and integrate JUnit into your next Java project? With the right approach, you’re set to unlock the full potential of this powerful testing framework.

From the blog CS@Worcester – Coding by asejdi and used with permission of the author. All other rights reserved by the author.

TestProject Tutorial Conclusion – Advanced API Testing and Scheduling

We return again this week to the online TestProject blog tutorial, looking at the final two chapters, 5 and 6, which cover Advanced API Testing Automation and Scheduling API Automation Flows and CI/CD Execution, respectively. In class, we’ve been working with JUnit integrated with VS Code, so it has been interesting to see a very different user interface in TestProject.

Chapter 5 looks at “advanced” API testing automation, which primarily covers more complex interactions and tests involving JSON objects and schemas. It also goes into some other, more complicated calls and tests, such as formatting URL-encoded requests, reading and using predefined user data sets, and tests involving dynamic parameters. This chapter references public NASA APIs and tools; a key component that stuck out to me was the error report file generation it shows, which easily identifies and organizes issues. Compared to previous chapters, I found this one less relatable and applicable to the things we’re doing in class, but I still learned a lot and was intrigued by the methodologies for test situations with multiple JSON paths to one target.

Chapter 6 focuses on the scheduling aspect of test automation in TestProject and interactions with CI/CD/CMD pipelines. As always, there is an abundance of screenshots and images to walk readers through an example test set-up, beginning with the interface for scheduling tests and TestProject’s system of creating and assigning ‘jobs’. Tests are aggregated into jobs (typically as a bundle), which can then be executed as a one-time event or assigned a recurring time frame. The interface for doing so is clear and intuitive, and it reminds me a lot of CS383 – Cloud Computing, where we are working on modules in AWS Academy and learning about Amazon Web Services. AWS uses a similar interface and logical structure for assigning roles, jobs, permissions, and many other facets, which made it intuitive for me to follow this TestProject tutorial. This chapter also discusses testing within a Docker container, which we used for Dr. Wurst’s assignments before recently switching to Gitpod.

With this reading, we conclude the TestProject tutorial I originally found at the beginning of this semester. There’s been a lot of really valuable material and examples within these tutorials, particularly in chapters 1-4 as they focus on beginner concepts and I’ve just been getting started with learning about software testing and quality assurance. Probably most interesting and encouraging from this set of tutorials was how frequently concepts came up from other courses like Database Design and Cloud Computing. Software that interacts with those areas must be tested too so it’s important to know how to work my way around them and see visual examples of tests being designed and executed. In conclusion – TestProject seems like a great platform with many features, particularly an intuitive scheduling component, however JUnit’s interface remains my current favorite.

Sources:

Tutorial Intro: https://blog.testproject.io/2020/11/10/automating-end-to-end-api-testing-flows/

Chapter 5: https://blog.testproject.io/2020/11/10/advanced-api-test-automation-and-validation-flows/

Chapter 6: https://blog.testproject.io/2020/11/10/scheduling-api-automation-flows-and-ci-cd-execution/

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

Exploring the Classical Waterfall Model in Software Development

In most respects, the classical waterfall model serves as the foundational software development life cycle (SDLC) model, embodying a structured and sequential approach to project management and software development that can prove effective for a variety of projects. While it may not be as commonly employed today, its significance lies in being the basis upon which other SDLC models have evolved, with a process whose steps and details are planned in advance. The model finds its relevance in large, complex projects: it is characterized by a rigorous, phase-driven progression, making it suitable for scenarios where requirements are well defined and stakeholders seek a high level of confidence in the outcome.

Although the waterfall model is now less prevalent in contemporary software development, given its lesser effectiveness compared to more agile methodologies, it remains a foundational framework for understanding software development life cycles. Its structured, sequential approach entails phases such as requirements gathering and analysis, design, implementation, testing, deployment, and maintenance, each building upon the preceding one. It is a document-driven model, placing high importance on quality control and rigorous planning, thus ensuring that the project is well defined and the team operates with clarity and precision.

It becomes pretty clear that the simplicity and linear progression of the waterfall technique offer advantages for specific project scenarios. This approach favors discipline, with a focus on defining requirements before design and completing the design before coding. For smaller, well-understood projects, it can be effective in maintaining clarity and ensuring milestones are met.

At the same time though, the rigidity and limitations of the waterfall model become apparent in more complex, dynamic projects. Its lack of flexibility to accommodate changing requirements and late defect detection pose significant challenges. The sequential nature of the model restricts stakeholder involvement in later phases, potentially leading to misunderstandings and costly revisions.

In practice, project managers and development teams should carefully assess project requirements, size, complexity, and the degree of uncertainty to select the most appropriate SDLC model, since the waterfall method is not always effective and can prove unwieldy for projects better suited to adaptability. Moreover, hybrid approaches that combine elements from multiple models can offer the best of both worlds, allowing for both structure and adaptability.

In conclusion, the classical waterfall model, while valuable for certain projects, is not a one-size-fits-all solution. Its use should be considered in situations where requirements are well defined and change is unlikely, such as large-scale, safety-critical, or government projects, which tend to have big budgets and therefore need to be mapped out carefully given the money being spent. In today’s rapidly evolving software landscape, more adaptive SDLC models have gained prominence, offering flexibility and responsiveness to changing needs.

https://www.geeksforgeeks.org/software-engineering-classical-waterfall-model/

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

YAGNI

YAGNI is an acronym for You Ain’t Gonna Need It. It’s a principle from Extreme Programming that says programmers should only add functionality once it is definitely necessary. Even if you are sure that you will need a piece of code or a feature later on, you don’t need to implement it now; you might never need it, or you might end up needing something else instead. This is why you don’t want developers wasting their time creating extra elements that might not end up being necessary and that can slow down the process. YAGNI helps save time by avoiding work on features that might never be used, the main features of the program get developed better, and less time is spent on each release. When you are facing a problem you can’t yet solve, you won’t be capable of making the best choices when coming up with a solution; on the other hand, when you know what is causing the problem, you can come up with a better plan to solve it. In software development, you could imagine building a system that can deal with everything, yet only a few of its features would ever be used while the rest would still need attention and upgrades.

YAGNI can be applied by development teams from small to large, so it isn’t limited to small projects or large enterprises. The principle can help set up a task list of do’s and don’ts. Always try to implement the main selling features first and get the app ready for end users; after the app is functional, you can start adding extra features in the next version. Waiting to add additional features saves developers a lot of time and effort and helps them meet project deadlines. Once your app is live, you should keep up with its updates and keep making the app better, but delaying those updates just to add more features can give competitors a chance to take your users. The first version of the app doesn’t need to be perfect; if it can do the simple things and still fulfill its intended purpose, that is enough. Over time you can add the extras you need instead of cramming everything into one version. The You Ain’t Gonna Need It principle is very time-effective and efficient for developers: it helps us get projects done on time, avoids adding anything that isn’t necessary at the moment, and reduces stress because not every add-on feature has to be built right away. This principle is time-, stress-, and cost-efficient for developers, which is why it should be used consistently.

https://www.techtarget.com/whatis/definition/You-arent-gonna-need-it

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.