Category Archives: CS-443

Erockwood Post #3

Today we will be discussing some things I find interesting in one of my classes. In this class, we are learning about cloud, parallel, and distributed computing. I find it very interesting that we can use one fast computer to send tasks to a bunch of slower computers and calculate data quickly and efficiently, rather than running everything through a single computer. I also find it interesting how systems like Hadoop maintain data integrity by storing every piece of information in three different places, so that if one computer dies, the data still exists in two other places; the system then copies it to another computer so that three copies are always kept.

We are currently discussing machine learning, which I personally think is a little overrated. All it really is, is telling a computer whether each of its guesses is right or wrong and letting it adjust its guesses accordingly until it is about 99% correct. This is called training. Once the computer is “trained,” it is good to go. Statistics and machine learning go hand in hand, as both take data and use it to predict outcomes based on the input data. One subset of machine learning is data mining, which covers many different methods used to extract insights from data.

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.

Unit Testing: What and Why

For this class I felt the most fitting first post was one that explains what unit testing is, as this will be key for understanding more complicated topics later in the semester. I have only had brief experiences with unit tests in the past, usually when they were included in code I got from a professor. Thus this is still a relatively foreign concept to me, granted the general purpose is pretty self-explanatory. To tackle this subject I found a blog post that goes sufficiently in depth about the topic, explaining what it is and why it’s useful.


So first things first, what exactly is unit testing? As it’s defined in the post on the testingxperts.com blog, this is a type of testing done in the earlier stages of software development. This is advantageous as it allows testers and developers to isolate specific modules for any necessary fixes, rather than having to deal with the whole system. Unit tests usually consist of three phases: arrange, act, and assert. The arrange phase involves setting up the part of the program that is to be tested in your testing environment. The act phase is where you define the test’s stimuli, what could or could not break the function. Finally, there is the assert phase, where the behavior of the program is observed for any potential abnormalities. So now that we have a common understanding of these tests, we can ask ourselves why we should even bother using them.
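To make the three phases a little more concrete, here is a minimal sketch of what they might look like as a JUnit 5 test. The Calculator class and its add method are made up purely for illustration; they are not from the blog post, just an assumption to show the arrange/act/assert shape.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical class under test, used only to illustrate the three phases.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {
    @Test
    void addReturnsSumOfTwoNumbers() {
        // Arrange: set up the unit under test in the testing environment.
        Calculator calculator = new Calculator();

        // Act: apply the stimulus we want to test.
        int result = calculator.add(2, 3);

        // Assert: observe the behavior and check for abnormalities.
        assertEquals(5, result);
    }
}
```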

Admittedly, on the few past occasions I had seen these, I thought the unit tests were mostly superfluous and not very useful. In reality, however, these tests can be an invaluable asset for larger systems and, if used properly, can save a lot of time and pain that would otherwise go into bug-fixing later in development. The primary, and pretty major, benefit is that components of a program are tested early in the process and individually. This is great for the immediately obvious reason that you do not have to worry about progress halting later in the project, because you are testing long before then. Besides this, it also helps the development team understand the code better, as the tests must be tailored to the specific component they are working on. This familiarity can have other benefits as well, giving certain members knowledge of how code in some components can be changed or reused for further benefit. Then there is also the fact that this debugging process is simpler than a more traditional approach, as it is done before some of the more complex code is written to interconnect components. These are the major reasons listed in the article, and that I could deduce, but there could very well be more! I do not quite have enough space to discuss the rest, but I would recommend reading the rest of the article if you have any interest in unit testing, as it goes into further detail.

Thanks for reading!


Source:

From the blog CS@Worcester – My Bizarre Coding Adventures by Michael Mendes and used with permission of the author. All other rights reserved by the author.

Erockwood Post #2

Software testing is very important, as it allows one to test their own or others’ programs for mistakes, whether they are logical mistakes or syntax mistakes, to ensure that software performs as expected as much as possible. A common testing framework for Java programs is JUnit 5. JUnit 5 comprises three sub-projects: JUnit Platform, JUnit Jupiter, and JUnit Vintage. The differences between the three are that Platform serves as the foundation for launching testing frameworks on the Java Virtual Machine (JVM), Jupiter is the programming model for writing tests and extensions in JUnit 5, and Vintage is used for running JUnit 3 and JUnit 4 based tests on the newer JUnit 5 platform.

Projects like JUnit are very important because they allow you to automate testing as much as possible. This saves a tester time: instead of testing one thing at a time, they can test all sorts of things at once. It also lets the tester keep testing methods separate from the main code, so the two do not accidentally get mixed up. JUnit allows you to write tests using methods like assertTrue, assertFalse, assertEquals, and assertThrows, and each of these is useful in its own way. For example, assertTrue can be used to check a condition we expect to hold: if a customer is created with a customer ID of 1, we can assert that the ID it reports is in fact equal to 1. Another thing we can use is assertThrows, to test that invalid parameters throw the correct error, or to make sure an error is thrown at all.
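As a rough sketch of how these assertions might look in practice, here is a small JUnit 5 test using a made-up Customer class. The class, its constructor, and the getId method are assumptions for illustration, not code from the JUnit user guide.

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Hypothetical class used only to demonstrate the assertions.
class Customer {
    private final int id;

    Customer(int id) {
        if (id < 0) {
            throw new IllegalArgumentException("id must not be negative");
        }
        this.id = id;
    }

    int getId() {
        return id;
    }
}

public class CustomerTest {
    @Test
    void customerKeepsItsId() {
        Customer customer = new Customer(1);
        assertEquals(1, customer.getId());   // expected value vs. actual value
        assertTrue(customer.getId() > 0);    // a condition we expect to be true
        assertFalse(customer.getId() == 0);  // a condition we expect to be false
    }

    @Test
    void negativeIdThrows() {
        // assertThrows checks that invalid parameters raise the right error.
        assertThrows(IllegalArgumentException.class, () -> new Customer(-1));
    }
}
```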

Source:

Bechtold, S., et al. (n.d.). JUnit 5 User Guide. Retrieved March 30, 2021, from https://junit.org/junit5/docs/current/user-guide/

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.

Scrum Quality Assurance

Given that we are working in a Scrum team in our Software Development capstone and that we got ample practice with Scrum in our Software Project Management course, I thought it would be interesting to see the crossover between quality assurance and the Scrum workflow. Scrum is also widely used, so this should be important to know for the future as well. I found this article: https://medium.com/serious-scrum/how-does-qa-fit-with-scrum-4a92f86bec5b which talks about the role of quality assurance on a Scrum team.

It’s important to remember that the members of a development team do not have pre-defined roles. It’s assumed each member of the team can contribute to any part of a project, even if certain members have more specialization in certain areas, like product testing in this case. With that in mind, the Definition of Done becomes all the more important. Requirements for testing should be documented and understood by the development team and the product owner. This prevents conflicts during the sprint review, for example when the development team thinks something like compatibility testing needs more work but the product owner is ready to deploy. With a more comprehensive Definition of Done, these conflicts are avoided since they would have been discussed ahead of time.

Quality assurance on a Scrum team is a large part of the process, and its requirements are developed during sprint planning. It’s important that there is close collaboration between the development team and the product owner during the sprint, even though the development team is delegated most of the decision-making responsibility for how work will be completed. This keeps the whole team in tune and avoids conflict. It also makes for a closer, more efficient work environment, as understanding is enhanced throughout the team and with the product owner.

From the blog CS@Worcester – Marcos Felipe's CS Blog by mfelipe98 and used with permission of the author. All other rights reserved by the author.

Software Testing Life Cycle

I was interested in learning more about how software testing works in the professional world on a software development team. Since we are learning about the crafting of software tests in our class, I thought it would be interesting to learn how the pieces are put together, like how developing requirements leads into crafting tests and then executing them. It’s an essential part of a software development team, of course, and I’m sure I’ll be doing plenty of it in my future. I found this resource: https://www.tutorialspoint.com/stlc/stlc_overview.htm from Tutorials Point that gave an overview of the “Software Testing Life Cycle” (STLC), which helped put the pieces together for me.

First, what is the STLC? It deals strictly with testing and “starts as soon as requirements are defined… by stakeholders.” Sidenote: this reminds me of test-driven development, which, by my understanding, is a common practice in software development these days. The STLC consists of six phases. The first is requirement analysis, at which point the team analyzes the application under test (AUT) at a high level. Then comes test planning, where a strategy for testing is devised. Then is test case design, which applies the requirements and creates tests according to the plan. Then comes the test environment setup for integrated testing; this is the last step before actual testing. Next is test execution, which yields defect reporting and either validates the tests or finds bugs. Last is test closure, where testing is finished and the metrics, reports, and results are completed.

It was great to see the pieces come together. This very basic overview helps me see what working on software testing in the professional world would be like. Seeing a timeline also provides context for some of the things we’ve been learning in class.

From the blog CS@Worcester – Marcos Felipe's CS Blog by mfelipe98 and used with permission of the author. All other rights reserved by the author.

Boundary Value Analysis and Equivalence Partitioning Testing

When it comes to a large pool of input data, it is not possible to perform exhaustive testing for each set of test data. There should be an easy way to select test cases from the pool so that all scenarios are covered. This is why the Equivalence Partitioning and Boundary Value Analysis testing techniques were introduced. In today’s blog, I want to do further research on the techniques of Boundary Value Analysis and Equivalence Partitioning testing. Equivalence Partitioning and Boundary Value Analysis are linked to each other and can be used together at all levels of software testing. To start, boundary value testing is the process of testing at the extreme boundaries of the partitions, for example the start, end, lower, upper, maximum, minimum, just-inside, and just-outside values. Normally Boundary Value Analysis is part of stress and negative testing. Using the Boundary Value Analysis technique, the tester creates test cases for the required input field.

Now, equivalence partitioning, or equivalence class testing, is a type of black box testing technique in which the input data is divided into equivalent partitions that can be used to derive test cases. This helps reduce the time required for testing, since only a small number of test cases is needed. This technique can be applied to all levels of software testing, like unit, integration, and system testing. One example that was widely used in many of the resources I have looked at is this: let us say a password field accepts a minimum of 6 characters and a maximum of 10 characters. That means results for values in the partitions 0-5, 6-10, and 11-14 should be equivalent. The three testing scenarios (with a small sketch after the list of how they might look in code) will be:

1. Enter 0 to 5 characters: the system should not accept.

2. Enter 6 to 10 characters: the system should accept.

3. Enter 11 to 14 characters: the system should not accept.
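Here is a minimal sketch of those three scenarios as a JUnit 5 test, with one representative value per partition. The PasswordValidator class is hypothetical; I am just assuming a method that accepts passwords of 6 to 10 characters, matching the example above.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Hypothetical validator: accepts passwords of 6 to 10 characters.
class PasswordValidator {
    boolean isValid(String password) {
        return password.length() >= 6 && password.length() <= 10;
    }
}

public class PasswordPartitionTest {
    private final PasswordValidator validator = new PasswordValidator();

    @Test
    void oneRepresentativeValuePerPartition() {
        assertFalse(validator.isValid("abc"));           // 3 characters, partition 0-5: should not accept
        assertTrue(validator.isValid("abcdefgh"));       // 8 characters, partition 6-10: should accept
        assertFalse(validator.isValid("abcdefghijkl"));  // 12 characters, partition 11-14: should not accept
    }
}
```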

In the end, both testing techniques are used to reduce a very large number of test cases to a manageable set, and both are appropriate for calculation-intensive applications with a large number of variables and inputs.

https://www.guru99.com/equivalence-partitioning-boundary-value-analysis.html

https://www.softwaretestingclass.com/boundary-value-analysis-and-equivalence-class-partitioning-with-simple-example/

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

The Power of Decision Table Testing

In POGIL activity 8, we worked with decision table-based testing and applied this concept to the ongoing graduation problem. Unlike the previous activities, we did not go over the advantages of using a decision table as opposed to any of the other testing techniques. So for this blog post, I want to look more at the pros of using decision table-based testing.

This first article gives a complete step-by-step rundown of what a decision table is, why we use it for testing, and how to conduct decision table-based tests.

It is basically a review of what we covered in class with pictures/diagrams at each step.

This second article talks about the characteristics of decision tables and why some of these characteristics may make decision table-based testing preferable to the other testing techniques.

https://www.edureka.co/blog/decision-table-in-software-testing/#advantages

Decision table-based testing is similar to equivalence class testing in the sense that it divides the tests into cases and aims for complete test coverage. Unlike equivalence class testing, though, decision tables are more versatile. One aspect that I really liked about this second article was the fact that the author combined multiple action rows into just one row by defining a key. This made the table smaller and easier to read while not obscuring the meaning within the table.
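To show how a decision table turns into tests, here is a hedged sketch in which each rule (column) of the table becomes one test case. The graduation-style conditions below, a minimum number of credits and a minimum GPA, are hypothetical placeholders, not the actual rules from the class activity or the articles.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Hypothetical rules: a student may graduate only if both conditions are true.
class GraduationChecker {
    boolean mayGraduate(boolean enoughCredits, boolean gpaHighEnough) {
        return enoughCredits && gpaHighEnough;
    }
}

public class GraduationDecisionTableTest {
    private final GraduationChecker checker = new GraduationChecker();

    // Rule 1: credits = T, GPA = T -> may graduate
    @Test
    void rule1BothConditionsTrue() {
        assertTrue(checker.mayGraduate(true, true));
    }

    // Rule 2: credits = T, GPA = F -> may not graduate
    @Test
    void rule2GpaTooLow() {
        assertFalse(checker.mayGraduate(true, false));
    }

    // Rule 3: credits = F, GPA = "don't care" -> may not graduate
    // (two columns of the full table collapsed into one rule)
    @Test
    void rule3NotEnoughCredits() {
        assertFalse(checker.mayGraduate(false, false));
    }
}
```

Collapsing the “don’t care” columns into one rule, like the third test above, is the same idea as the article’s trick of shrinking the table without losing its meaning.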

From the blog CS@Worcester – Just a Guy Passing By by Eric Nguyen and used with permission of the author. All other rights reserved by the author.

Boundary Value Testing and Equivalence Class Testing

Because there is a large amount of data used by software, there need to be test implementations that can cover a range of values without testing every single number. Therefore, we use testing methods such as boundary value testing and equivalence class testing. The purpose of boundary value testing is to test the extreme ends, also known as the boundaries, hence the name. The most common way to implement this test case is to use input values such as the minimum value, a value just above the minimum, a middle value known as the nominal, a value just below the maximum, and the maximum value. One common example that most have seen is when we have to set up a password. Let’s say that a valid password length has to be between 10 and 15 characters. Using this guide, our boundary value testing would consider values that are less than 10, exactly 10, between 10 and 15 (such as 12), exactly 15, and greater than 15. The valid inputs will be passwords that are between 10 and 15 characters, so boundary value testing implementations are a good way to catch any input errors near the boundaries of the valid range.
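As a minimal sketch of those boundary values in a test, here is what the password example could look like in JUnit 5. The PasswordRule class, its isValid method, and the test names are assumptions made up for illustration, not code from the sources.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Hypothetical rule: accepts passwords of 10 to 15 characters.
class PasswordRule {
    boolean isValid(String password) {
        return password.length() >= 10 && password.length() <= 15;
    }
}

public class PasswordBoundaryTest {
    private final PasswordRule rule = new PasswordRule();

    @Test
    void valuesAroundTheBoundaries() {
        assertFalse(rule.isValid("a".repeat(9)));   // just below the minimum
        assertTrue(rule.isValid("a".repeat(10)));   // the minimum
        assertTrue(rule.isValid("a".repeat(12)));   // the nominal (middle) value
        assertTrue(rule.isValid("a".repeat(15)));   // the maximum
        assertFalse(rule.isValid("a".repeat(16)));  // just above the maximum
    }
}
```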

Equivalence class testing is a method of testing that divides the input data into various different equivalence classes. This partitioning of the values that need to be tested is the step that comes before boundary testing. The values that are considered acceptable inputs are grouped into a certain range, and the values under as well as over that range are considered invalid or unacceptable to the software. The valid class partition contains all the valid values or inputs, and the invalid class partitions contain the invalid ones. The example is the same as the password one: a password in the acceptable range of 10 to 15 characters will be accepted and valid, whereas passwords that are shorter than 10 or longer than 15 characters will be invalid and won’t be accepted. Overall, boundary value and equivalence class testing are good testing implementations for checking input values without having to test all the individual values.

Resources:

Boundary Value Analysis and Equivalence Class Partitioning With Simple Example

https://www.guru99.com/equivalence-partitioning-boundary-value-analysis.html

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.

Difference Between Black-Box, White-Box

White-box testing, also called glass-box testing, tests a program from its source code without going through the user interface. This type of testing looks at the code itself to find flaws or errors in the internal logic, such as in algorithms, overflows, paths, and conditions, and then fixes them.

Black-box testing, by contrast, rigorously tests the entire software or a single software function without examining the program’s source code or having a clear understanding of how the program or the function was designed. Testers understand how the software works by entering their own data and seeing the results. Typically, testers run tests using not only input data that is guaranteed to give correct results but also input data that is challenging and may cause errors, in order to understand how the software handles various types of data.

The program under test is treated as a black box, without considering its internal structure and characteristics. The tester only knows the relationship between the program’s inputs and outputs, or the program’s function, and determines the test cases and judges the correctness of the test results by relying on the requirement specification, which describes that relationship and function.

Black-box testing of software is used to verify the correctness and operability of software functions. The program is treated as a black box, without considering its internal structure or processing. Testing at the program’s interface simply checks whether each function behaves, under normal use, in accordance with the specification. Black-box testing is also called functional testing or data-driven testing.

White-box testing is exhaustive path testing, and black-box testing is exhaustive input testing. These two methods are based on completely different points of view and represent two extremes. Each has its own emphasis and advantages, but neither can replace the other. In the modern view of testing, the two methods are not separate but intersect.

White-box testing relies on careful examination of the details of the program, the design of test cases for specific conditions, and the testing of the software’s logical paths. It checks the “state of the program” at various points to see whether the actual state corresponds to the expected state. In short, white-box testing is used to analyze the internal structure of a program.
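As a hedged sketch of the difference, here are two tests of the same made-up absoluteValue method: the black-box test chooses inputs only from the specification, while the white-box test is written after reading the code so that each branch of the if statement is exercised. The method and test names are assumptions for illustration, not from the cited article.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical method under test.
class MathUtil {
    static int absoluteValue(int x) {
        if (x < 0) {
            return -x;
        }
        return x;
    }
}

public class AbsoluteValueTest {
    // Black-box style: inputs chosen from the specification
    // ("returns the absolute value"), without looking at the code.
    @Test
    void blackBoxInputsAndOutputs() {
        assertEquals(5, MathUtil.absoluteValue(5));
        assertEquals(5, MathUtil.absoluteValue(-5));
        assertEquals(0, MathUtil.absoluteValue(0));
    }

    // White-box style: inputs chosen after reading the source so that both
    // paths of the if statement (x < 0 and x >= 0) are executed.
    @Test
    void whiteBoxBothBranchesCovered() {
        assertEquals(7, MathUtil.absoluteValue(-7)); // takes the x < 0 branch
        assertEquals(7, MathUtil.absoluteValue(7));  // takes the x >= 0 branch
    }
}
```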

sources:

Difference Between Black-Box, White-Box, and Grey-Box Testing

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.

Static vs Dynamic Testing

Two common methods used for software testing are static and dynamic testing. Although they are both testing methods, there are a lot of differences between them. First we need to define what each of them is. Static testing is when we examine the software to look for errors or defects without actually executing the code. Dynamic testing, on the other hand, tests the software by executing the code and checking for errors in the inputs, outputs, and overall function of the software. Static testing is conducted on documents related to the software, and the goal is to find errors early in the development cycle, before the software gets too far along. Dynamic testing executes the code and analyzes things such as the input and output of the software to determine whether the results are correct. The goal of dynamic testing is to check the functional behavior of the code, and it also takes into account memory, CPU usage, and the performance of the software as a whole. Static testing uses manual or automated review of the documents, examining things such as the requirements for the software, the source code, the necessary test cases, or anything related to the overall design. Dynamic testing checks more directly whether the software works, using techniques such as black-box or white-box testing, and it confirms that the code works the way it is supposed to.

The main differences between static and dynamic testing include the one we stated earlier: static testing does not execute the code, whereas dynamic testing does. Another important factor is the stage at which these tests occur; static testing happens early in the process of developing software, whereas dynamic testing happens toward the end or at the completion stage. The goal of static testing is to prevent bugs or errors from being produced, while dynamic testing finds bugs or errors that were created during the development of the software. Static testing is often described as the verification process, whereas dynamic testing is more about the validation process. Static testing generally takes less time, whereas dynamic testing takes a bit longer due to the variety of test cases that need to be implemented. Overall, both of these testing methods are important in the development of software; they occur at different stages, and doing both efficiently leads to the least amount of error in the software.

Resources:

https://www.geeksforgeeks.org/difference-between-static-and-dynamic-testing/

https://www.guru99.com/static-dynamic-testing.html

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.