Category Archives: CS-443

Software Quality Assurance and Testing Blog Post #3 (Black-Box vs. Gray-Box vs. White/Clear-Box Testing)

On the first exam for my Software Quality Assurance and Testing course, and in the activities leading up to it, Black-Box, Gray-Box, and White/Clear-Box Testing were important topics to understand thoroughly. Not only did we have to know the meanings of these terms, but we had to be able to compare them and know how each testing method is used.

White/Clear-Box Testing is when the tester can see the contents of a function or method. This comes with its advantages and disadvantages, of course. The advantages are that it is easier to cover the code's complexity, write legible test cases, and debug smoothly. The disadvantages include tester bias and potentially longer, more expensive testing overall.

Black-Box Testing, on the other hand, is quite the opposite. The tester cannot view the inner workings of the function or method and can only test based on what inputs are given and what outputs are received. Although this seems counterintuitive for a testing method, it has advantages and disadvantages that make it a viable option. The advantages are that it takes less time and expense and that it greatly reduces tester bias. The disadvantages are that, because the tester cannot see the inner workings of the function or method, it is harder to debug, to cover the code's complexity, and to write easy-to-read test cases. The two methods are essentially opposites.

Lastly, Gray-Box Testing sits somewhere in between the two. The tester knows a little about the inner workings of the methods and functions but is not focused on them completely, as in White/Clear-Box Testing. This evens out the advantages and disadvantages overall, which could be good in some cases but might not make it a valid testing option in others. Before this semester, I had actually never even heard of these terms, and it was interesting to go through and research them for this post and for my course!
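
To make the contrast concrete, here is a minimal sketch of what a black-box style test might look like in JUnit 5 (the discount method and all the names here are my own invention for illustration). The test relies only on inputs and outputs; a white-box tester reading the method body might instead add one case per branch of the conditional:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical example: a black-box test treats discount() as opaque,
// checking only that given inputs produce the expected outputs.
public class DiscountBlackBoxTest {

    // The method under test; a black-box tester never reads this body.
    static double discount(double price, boolean isMember) {
        return isMember ? price * 0.9 : price;
    }

    @Test
    public void memberGetsTenPercentOff() {
        assertEquals(90.0, discount(100.0, true), 0.001);
    }

    @Test
    public void nonMemberPaysFullPrice() {
        assertEquals(100.0, discount(100.0, false), 0.001);
    }
}
```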

From the blog CS@Worcester – Tim Drevitch CS Blog by timdrevitch and used with permission of the author. All other rights reserved by the author.

Improved Testing Methods

As a beginner programmer, testing my code meant putting in a few inputs, and if the code ran then I had myself a successful program. Recently, I've learned of two better testing methods that give you the information needed to ensure that the program behaves correctly in any particular scenario. These are Boundary Value Testing and Equivalence Class Testing. While they work differently, both methods are similar in that they choose representative inputs from the pool of values allowed by the given conditions.

In Equivalence Class Testing, the focus is on the conditions. Looking at the given conditions, we can determine which values are valid and which are invalid. The valid values and the invalid values together form the pool of values to be tested, and this pool is divided into intervals. For each interval, if a given input passes, then it stands to reason that all inputs within that interval will pass. By the same reasoning, if an input does not pass, all inputs within the interval will not pass.

For example, consider a program for a vending machine, with a variable named cash for the amount of money the machine can accept. The range of valid values for cash is 0 <= cash <= 100. Now, if we put in a value for cash that is between 0 and 100 and it passes, that means all values between 0 and 100 will pass. Likewise, if the value does not pass, then all values in that range will not pass. All values below 0 and above 100 are invalid, so testing those numbers should result in an error.
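
Here's a rough sketch of how those classes might translate into JUnit 5 tests. The acceptsCash method is a stand-in I made up for the vending machine check, with one representative value per interval:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

public class VendingMachineEquivalenceTest {

    // Hypothetical method under test: valid amounts are 0 <= cash <= 100.
    static boolean acceptsCash(int cash) {
        return cash >= 0 && cash <= 100;
    }

    @Test
    public void valueFromValidClassIsAccepted() {
        // One representative from the valid interval [0, 100]
        // stands in for the whole class.
        assertTrue(acceptsCash(50));
    }

    @Test
    public void valueFromInvalidLowerClassIsRejected() {
        // One representative from the invalid interval below 0.
        assertFalse(acceptsCash(-5));
    }

    @Test
    public void valueFromInvalidUpperClassIsRejected() {
        // One representative from the invalid interval above 100.
        assertFalse(acceptsCash(150));
    }
}
```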

Boundary Value Testing is similar, in that we take valid and invalid values and test them. The difference is that there are 5 particular values we test. Say that the pool of values is between 0 and 100 inclusive. The values that will be tested are:
1. the minimum valid value, 0
2. the maximum valid value, 100
3. a nominal value between the minimum and maximum, 20
4. a value just below the minimum, -1
5. and a value just above the maximum, 101.

These inputs cover the scenarios where mistakes are most likely, so if they all pass, we can be confident the program handles its boundaries correctly. The minimum, nominal, and maximum values test valid inputs, while the values just below the minimum and just above the maximum test invalid inputs.
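
And here's what those five values could look like as a JUnit 5 parameterized test. Again, acceptsCash is my own stand-in, and the sketch assumes the junit-jupiter-params module is available:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

public class CashBoundaryValueTest {

    // Same hypothetical validator as above: 0 <= cash <= 100 is valid.
    static boolean acceptsCash(int cash) {
        return cash >= 0 && cash <= 100;
    }

    // The five boundary values from the list above.
    @ParameterizedTest
    @CsvSource({
        "0, true",    // minimum valid value
        "100, true",  // maximum valid value
        "20, true",   // nominal value
        "-1, false",  // just below the minimum
        "101, false"  // just above the maximum
    })
    public void boundaryValuesBehaveAsExpected(int cash, boolean expected) {
        assertEquals(expected, acceptsCash(cash));
    }
}
```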

Equivalence testing and boundary testing are both great methods to use when testing your program. They can both be used to test valid and invalid values, and by doing so they catch the errors that are most likely to appear in and around the edges of the input domain.

Helpful Source:

https://www.guru99.com/equivalence-partitioning-boundary-value-analysis.html

From the blog CS@Worcester – CSBlogger by mjaber54 and used with permission of the author. All other rights reserved by the author.

Erockwood Post #3

Today we will be discussing some things I find interesting in one of my classes. In this class, we are learning about cloud, parallel, and distributed computing. I find it very interesting that we can use one fast computer to send tasks to a bunch of slower computers to process data quickly and efficiently, rather than running all the data through one singular computer. I also find it interesting how systems like Hadoop maintain data integrity by storing all information in three different places, so that if a computer dies, the data still exists in two other places; the system then copies it to another computer to keep three copies going.

We are currently discussing machine learning, which I personally think is a little overrated. All it really is is telling a computer whether its guess was right or wrong, so that it adjusts its guesses accordingly until it gets around 99% of them correct. This is called training. Once the computer is “trained”, it is good to go. Statistics and machine learning go hand in hand, as both take data and use it to predict outcomes based on the input data. Closely related to machine learning is data mining, which comprises the various methods used to extract insights from data.
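
Just to make that concrete, here is a toy sketch of that guess-and-adjust loop: a single perceptron learning the logical AND function. Everything here is my own minimal invention, not how production machine learning systems are built:

```java
// Toy illustration of "guess, check, adjust": a perceptron learning AND.
public class TinyPerceptron {
    public static void main(String[] args) {
        double[][] inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        int[] labels = {0, 0, 0, 1};      // AND truth table
        double w0 = 0, w1 = 0, bias = 0;  // the model's parameters
        double rate = 0.1;                // how much to adjust per mistake

        for (int epoch = 0; epoch < 100; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                int guess = (w0 * inputs[i][0] + w1 * inputs[i][1] + bias) > 0 ? 1 : 0;
                int error = labels[i] - guess; // right guess: 0, wrong: +/-1
                // Adjust the parameters in the direction that fixes the mistake.
                w0 += rate * error * inputs[i][0];
                w1 += rate * error * inputs[i][1];
                bias += rate * error;
            }
        }
        for (double[] in : inputs) {
            int guess = (w0 * in[0] + w1 * in[1] + bias) > 0 ? 1 : 0;
            System.out.printf("AND(%d, %d) -> %d%n", (int) in[0], (int) in[1], guess);
        }
    }
}
```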

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.

Unit Testing: What and Why

For this class I felt the most fitting first post was one explaining what unit testing is, as this will be key for understanding more complicated topics later in the semester. I have only had brief experiences with unit tests in the past, usually when they were included in code I got from a professor. Thus this is still a relatively foreign concept to me, granted the general purpose is pretty self-explanatory. To tackle this subject I found a blog post that goes sufficiently in depth, explaining what unit testing is and why it's useful.


So first things first, what exactly is unit testing? As it's defined in the post on the testingxperts.com blog, this is a type of testing done in the earlier stages of software development. This is advantageous as it allows testers and developers to isolate specific modules for any necessary fixes, rather than having to deal with the whole system. Unit tests usually consist of three phases: arrange, act, and assert. The arrange phase sets up the part of the program to be tested in your testing environment. The act phase applies the test's stimuli, exercising the behavior that could or could not break the function. Finally there is the assert phase, where the behavior of the program is checked against expectations and observed for any abnormalities. So now that we have a common understanding of these tests, we can ask ourselves why we should even bother using them.
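
To picture the three phases, here is a minimal sketch of an arrange/act/assert test in JUnit 5 (the ShoppingCart class is invented just for this illustration):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class ShoppingCartTest {

    // Hypothetical class under test, defined inline to keep the sketch self-contained.
    static class ShoppingCart {
        private double total = 0;
        void addItem(double price) { total += price; }
        double getTotal() { return total; }
    }

    @Test
    public void addingAnItemIncreasesTheTotal() {
        // Arrange: set up the unit under test in a known state.
        ShoppingCart cart = new ShoppingCart();

        // Act: apply the stimulus we want to observe.
        cart.addItem(9.99);

        // Assert: check that the observed behavior matches expectations.
        assertEquals(9.99, cart.getTotal(), 0.001);
    }
}
```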

Admittedly, on the few past occasions I had seen these, I thought unit tests were mostly superfluous and not very useful. In reality, however, these tests can be an invaluable asset for larger systems and, used properly, can save a lot of the time and pain that would otherwise go into bug fixing further into development. The primary, and pretty major, benefit is that components of a program are tested early in the process and individually. This is great for the immediately obvious reason that you do not have to worry about progress halting later in the project, because you are testing long before then. Besides this, it also helps development teams understand the code better, as the tests must be tailored to the specific component being worked on. This familiarity has other benefits as well, giving certain members knowledge of how code in some components can be changed or reused. There is also the fact that this debugging process is simpler than a more traditional approach, since it happens before the more complex code that interconnects components is written. These are the major reasons listed in the article, and that I could deduce, but there could very well be more! I do not quite have enough space to discuss the rest, but I would recommend reading the article if you have any interest in unit testing, as it goes into further detail.

Thanks for reading!


Source:

From the blog CS@Worcester – My Bizarre Coding Adventures by Michael Mendes and used with permission of the author. All other rights reserved by the author.

Erockwood Post #2

Software testing is very important, as it allows one to test their own or others' programs for mistakes, whether logical mistakes or syntax mistakes, to ensure as far as possible that software performs as expected. A common testing framework for Java programs is JUnit 5. JUnit 5 is composed of three sub-projects: JUnit Platform, JUnit Jupiter, and JUnit Vintage. The Platform serves as the foundation for launching testing frameworks on the Java Virtual Machine (JVM). Jupiter provides the programming model for writing tests and extensions in JUnit 5. Lastly, Vintage is used for running JUnit 3 and JUnit 4 based tests on the newer JUnit 5 platform.

Projects like JUnit are very important because they allow you to automate testing as much as possible. This saves a tester time: instead of checking one thing at a time by hand, they can run all sorts of tests at once. It also keeps testing code separate from the main code so the two do not accidentally mix and mess things up. JUnit lets you write tests using methods like assertTrue, assertFalse, assertEquals, and assertThrows, each useful in its own way. For example, assertEquals can be used to check that a customer created with a customer ID of 1 actually reports an ID of 1: we state that we expect the ID to be 1, pass in the customer object, and verify the two are equal. We can also use assertThrows to test that invalid parameters throw the correct error, or that an error is thrown at all.
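
Here is a small sketch of those assertions in JUnit 5 (the Customer class is invented for the example):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

public class CustomerTest {

    // Hypothetical Customer class, defined inline for this sketch.
    static class Customer {
        private final int id;
        Customer(int id) {
            if (id < 0) {
                throw new IllegalArgumentException("id must be non-negative");
            }
            this.id = id;
        }
        int getId() { return id; }
    }

    @Test
    public void customerKeepsItsId() {
        Customer customer = new Customer(1);
        // Expect the ID we constructed the customer with.
        assertEquals(1, customer.getId());
    }

    @Test
    public void negativeIdThrowsCorrectError() {
        // Invalid parameters should throw the expected exception.
        assertThrows(IllegalArgumentException.class, () -> new Customer(-1));
    }
}
```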

Source:

Bechtold, S., et al. (n.d.). JUnit 5 User Guide. Retrieved March 30, 2021, from https://junit.org/junit5/docs/current/user-guide/

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.

Scrum Quality Assurance

Given that we are working in a Scrum team in our Software Development capstone, and that we got ample practice with Scrum in our Software Project Management course, I thought it would be interesting to see the crossover between quality assurance and the Scrum workflow. Scrum is also widely used, so this should be important to know for the future as well. I found this article: https://medium.com/serious-scrum/how-does-qa-fit-with-scrum-4a92f86bec5b which talks about the role of quality assurance on a Scrum team.

It’s important to remember that the members of a development team do not have pre-defined roles. It’s assumed each member of the team can collaborate on any part of a project, even if certain members have more specialization in certain areas, like product testing in this case. With that in mind, the Definition of Done becomes all the more important. Requirements for testing should be documented and understood by the development team and the product owner. This prevents conflicts during the sprint review, such as when the development team thinks something like compatibility testing needs more work while the product owner is ready to deploy. With a more comprehensive Definition of Done, these conflicts are avoided since they would have been discussed ahead of time.

Quality assurance on a Scrum team is a large part of the process, and its requirements are developed during sprint planning. It's important that there is close collaboration between the development team and the product owner during the sprint, even though the development team is delegated most of the decision-making responsibility for how work will be completed. This keeps the whole team in tune and avoids conflict. It also makes for a closer, more efficient work environment, as understanding is shared throughout the team, including the product owner.

From the blog CS@Worcester – Marcos Felipe's CS Blog by mfelipe98 and used with permission of the author. All other rights reserved by the author.

Software Testing Life Cycle

I was interested in learning more about how software testing works in the professional world on a software development team. Since we are learning about the crafting of software tests in our class, I thought it would be interesting to learn how the pieces are put together, like how developing requirements leads into crafting tests and then executing them. It's an essential part of a software development team, of course, and I'm sure I'll be doing plenty of it in my future. I found this resource: https://www.tutorialspoint.com/stlc/stlc_overview.htm from Tutorials Point that gave an overview of the “Software Testing Life Cycle” (STLC), which helped put the pieces together for me.

First, what is the STLC? It deals strictly with testing and “starts as soon as requirements are defined… by stakeholders.” Sidenote: this reminds me of test-driven development, which, by my understanding, is a common practice in software development these days. The STLC consists of six phases:

1. Requirement analysis, where the team analyzes the application under test (AUT) at a high level.
2. Test planning, where a strategy for testing is devised.
3. Test case designing, where the requirements are applied and tests are crafted according to the plan.
4. Test environment setup for integrated testing; this is the last step before actual testing.
5. Test execution, which yields defect reporting; this either validates the tests or finds bugs.
6. Test closure, where testing is finished and the metrics, reports, and results are completed.

It was great to see the pieces come together. This very basic overview helps me see what working in software testing in the professional world would be like. Seeing a timeline also provides context for some of the things we've been learning in class.

From the blog CS@Worcester – Marcos Felipe's CS Blog by mfelipe98 and used with permission of the author. All other rights reserved by the author.

Boundary Value Analysis and Equivalence Partitioning Testing

When it comes to a large pool of input data, it is not possible to perform exhaustive testing for each set of test data. There should be an easy way to select test cases from the pool so that all scenarios are covered. This is why the Equivalence Partitioning and Boundary Value Analysis testing techniques were introduced. In today's blog, I want to do further research on Boundary Value Analysis and Equivalence Partitioning testing. The two techniques are linked to each other and can be used together at all levels of software testing. To start, boundary value testing is the process of testing at the extreme boundaries between the partitions: for example, the start, end, lower, upper, maximum, minimum, just-inside, and just-outside values. Normally, boundary value analysis is part of stress and negative testing. Using the Boundary Value Analysis technique, the tester creates test cases for the required input field.

Equivalence partitioning, or equivalence class testing, is a type of black-box testing technique in which the input data is divided into equivalent partitions from which test cases can be derived. This helps reduce the time required for testing, since only a small number of test cases is needed. The technique can be applied at all levels of software testing, like unit, integration, and system. One example that is widely used in many of the resources I looked at is this: say a password field accepts a minimum of 6 characters and a maximum of 10 characters. That means results for values in the partitions 0-5, 6-10, and 11-14 should be equivalent. The three testing scenarios will be:

1. Enter 0 to 5 characters: the system should not accept.
2. Enter 6 to 10 characters: the system should accept.
3. Enter 11 to 14 characters: the system should not accept.
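
Here is a minimal sketch of those three scenarios as a JUnit 5 parameterized test. The isAccepted validator is my own stand-in, and the sketch assumes the junit-jupiter-params module plus Java 11+ for String.repeat:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

public class PasswordLengthTest {

    // Hypothetical validator: passwords of 6 to 10 characters are accepted.
    static boolean isAccepted(String password) {
        return password.length() >= 6 && password.length() <= 10;
    }

    // One representative length from each partition: 0-5, 6-10, 11-14.
    @ParameterizedTest
    @CsvSource({
        "3, false",  // partition 0-5: should not accept
        "8, true",   // partition 6-10: should accept
        "12, false"  // partition 11-14: should not accept
    })
    public void onePasswordPerPartition(int length, boolean expected) {
        assertEquals(expected, isAccepted("x".repeat(length)));
    }
}
```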

In the end, both testing techniques are used to reduce a very large number of test cases to a manageable set, and both are appropriate for calculation-intensive applications with a large number of variables and input data.

https://www.guru99.com/equivalence-partitioning-boundary-value-analysis.html

https://www.softwaretestingclass.com/boundary-value-analysis-and-equivalence-class-partitioning-with-simple-example/

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

The Power of Decision Table Testing

In POGIL activity 8, we worked with decision table-based testing and applied the concept to the ongoing graduation problem. Unlike the previous activities, we did not go over the advantages of using a decision table as opposed to any of the other testing frameworks. So for this blog post, I want to look more at the pros of using decision table-based testing.

The first article gives a complete step-by-step rundown of what a decision table is, why we use it for testing, and how to conduct decision table-based tests.

It is basically a review of what we covered in class with pictures/diagrams at each step.

The second article talks about the characteristics of decision tables and why some of these characteristics may make decision table-based testing preferable to other testing frameworks.

https://www.edureka.co/blog/decision-table-in-software-testing/#advantages

Decision table-based testing is similar to Equivalence Class testing in the sense that it divides the tests into cases and aims for complete test coverage. Unlike Equivalence Class testing, though, decision tables are more versatile. One aspect that I really liked about the second article was that the author collapsed multiple action rows into just one row by defining a key. This made the table smaller and easier to read while not obscuring its meaning.
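
To show how a decision table turns into tests, here is a sketch in JUnit 5 using the classic login example (all names are invented, and it assumes the junit-jupiter-params module). Each CsvSource row corresponds to one rule, i.e., one column, of the table:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

public class LoginDecisionTableTest {

    // Hypothetical function under test: only a valid username AND a
    // valid password lead to the home page.
    static String login(boolean validUser, boolean validPassword) {
        return (validUser && validPassword) ? "HOME" : "ERROR";
    }

    // Conditions on the left, the expected action on the right.
    @ParameterizedTest
    @CsvSource({
        "true,  true,  HOME",   // rule 1: both conditions true
        "true,  false, ERROR",  // rule 2
        "false, true,  ERROR",  // rule 3
        "false, false, ERROR"   // rule 4
    })
    public void everyRuleProducesItsAction(boolean validUser, boolean validPassword, String action) {
        assertEquals(action, login(validUser, validPassword));
    }
}
```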

From the blog CS@Worcester – Just a Guy Passing By by Eric Nguyen and used with permission of the author. All other rights reserved by the author.

Boundary Value Testing and Equivalence Class Testing

Due to there being a large amount of data used by software, there needs to be a test implementation that can cover the range of values without testing each individual number. For this we use testing methods such as boundary value testing and equivalence class testing. The purpose of boundary value testing is to test the extreme ends, also known as the boundaries, hence the name. The most common way to implement this test case is to use input variable values such as the minimum value, a value just above the minimum, a middle value known as the nominal, a value just below the maximum, and lastly the maximum value. One common example that most have seen is setting up a password. Let's say that a valid length for the password has to be between 10 and 15. Using this guide, our boundary value testing would consider 10 (the minimum), 11 (just above the minimum), 12 (a nominal value), 14 (just below the maximum), and 15 (the maximum). Since the valid inputs are passwords between 10 and 15 characters, boundary value testing implementations are a good way to catch input errors near the boundaries of the valid range.

Equivalence class testing is a method that divides the input data into different equivalence classes. This partitioning of the values to be tested is the step that comes before boundary testing. The values considered acceptable inputs are grouped into a certain range, and the values under and over that range are considered invalid, or unacceptable to the software. The valid class partition holds all the valid values and inputs, while the invalid class partitions contain the invalid ones. The example is the same as the password one: a password in the acceptable range of 10 to 15 characters will be valid and accepted, whereas passwords of fewer than 10 or more than 15 characters will be invalid and won't be accepted. Overall, boundary and equivalence class testing are good testing implementations for checking input values without having to test all the individual values.
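
Here is a minimal sketch of the five boundary values from the password example as a JUnit 5 parameterized test (isValidLength is my own invented helper, and it assumes the junit-jupiter-params module), with the just-outside values added the way a robust variant of the technique would:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

public class PasswordBoundaryValueTest {

    // Hypothetical validator for the example above: lengths 10 to 15 are valid.
    static boolean isValidLength(int length) {
        return length >= 10 && length <= 15;
    }

    // The five classic boundary values (min, min+1, nominal, max-1, max),
    // plus the just-outside values a robust variant would add.
    @ParameterizedTest
    @CsvSource({
        "10, true",   // minimum
        "11, true",   // just above minimum
        "12, true",   // nominal
        "14, true",   // just below maximum
        "15, true",   // maximum
        "9,  false",  // robust extension: just below minimum
        "16, false"   // robust extension: just above maximum
    })
    public void lengthsAroundTheBoundaries(int length, boolean expected) {
        assertEquals(expected, isValidLength(length));
    }
}
```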

Resources:

Boundary Value Analysis and Equivalence Class Partitioning With Simple Example

https://www.guru99.com/equivalence-partitioning-boundary-value-analysis.html

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.