Category Archives: CS-443

Gradle: Test test

Testing, testing, testing… A few weeks ago we focused on JUnit testing using Gradle. I thought I would share a few things I learned along the way while getting my projects set up for JUnit testing with Gradle. Since our testing is centered around Jupiter (JUnit 5) there are a few unique things you need to do to get Gradle to behave as expected. If you are using Jupiter for your JUnit testing you need Gradle version 4.6 or later installed, so let's start there. Verify the version of Gradle you have installed by opening a bash shell in your project's root directory and running:

    ./gradlew --version

If you are not running a version greater than 4.6, update to the latest version before proceeding.

Now let's set up our project to use Gradle. Open a bash shell in your project's root folder and run this:

    gradle init --type java-library --dsl groovy --test-framework junit

This tells Gradle that we are creating a new Java project and that we will be testing with JUnit. It will take a few seconds to run, and once complete you should see a message like:

    BUILD SUCCESSFUL in xx seconds
    2 actionable tasks: 2 executed

Now check out your project folder. You will see three new folders:

    gradle
    .gradle
    src

and the following new files:

    .gitignore
    build.gradle
    gradlew
    gradlew.bat
    settings.gradle

We are going to start off making changes to the build.gradle file. Using your favorite editor (I prefer Notepad++), open build.gradle and verify that the following is the first entry after the commented docs:

    plugins {
        // Apply the java-library plugin to add support for Java Library
        id 'java-library'
    }

This tells Gradle that it is going to be building a Java program. Now we need to make sure that Gradle gets the required dependencies, so add the following to build.gradle:

    dependencies {
        // This dependency is exported to consumers, that is to say found on their compile classpath.
        api 'org.apache.commons:commons-math3:3.6.1'

        // This dependency is used internally, and not exposed to consumers on their own compile classpath.
        implementation 'com.google.guava:guava:27.0.1-jre'

        // Use JUnit test framework
        testImplementation 'junit:junit:4.12'
        testImplementation 'org.junit.jupiter:junit-jupiter-api:5.5.0-M1'
        testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.5.0-M1'
    }

This tells build.gradle which frameworks to include for the JUnit testing. Jupiter is backwards compatible, but if we want to run any JUnit 4 tests we include the junit:junit:4.12 dependency. This just ensures the correct flavor of JUnit is used for the testing. Now we'll add one more block to build.gradle to make sure we enable Gradle's native JUnit 5 support. Add the following lines after the dependencies:

    test {
        useJUnitPlatform()
    }

Now hop back into your IDE and work on your project. Save all of your changes and navigate back to your project folder. Open up the src folder. You will see the following sub-folders:

    main
    test

Both of these contain a folder called java. You will move your Java files into the java folders in the main and test sub-folders. EXAMPLE: Let's say I am working on a project that has Duck.java, Pond.java, and DuckTest.java. Both Duck.java and Pond.java should be moved to ../src/main/java and DuckTest.java would be moved to ../src/test/java.

Once you've moved your files into the correct location, run this in your bash shell:

    gradle build

Once this succeeds, run this in your bash shell:

    gradle test

Once this finishes and you get a success message, navigate to your project folder, go to /build/reports/tests/test/, and open up index.html. This will give you a breakdown of how your Gradle test run went. Now that you've successfully set up Gradle you need to go back into your IDE and clean up your project's paths so that you are working in the src/main/java folder and the src/test/java folder. See, easy peasy lemon squeezy! Next week we'll go over how to integrate our projects with GitLab so that GitLab does the testing for us. In the meantime, check out the docs up on Gradle.org related to testing with Java & the JVM: https://docs.gradle.org/5.2.1/userguide/java_testing.html#using_junit5

#CS@Worcester #CS443
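To round out the Duck example, here is a minimal sketch of what a Jupiter test in src/test/java might look like; the Duck class and its quack() method are placeholders made up for illustration:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    public class DuckTest {

        // Duck and quack() are hypothetical stand-ins for your own classes.
        @Test
        public void duckSaysQuack() {
            Duck duck = new Duck();
            assertEquals("Quack!", duck.quack());
        }
    }

Because of the useJUnitPlatform() block added above, gradle test will discover and run a test like this automatically.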

From the blog Michael Duquette by Michael Duquette and used with permission of the author. All other rights reserved by the author.

Test Driven Development: Formal Trial-and-Error


Test Driven Development (TDD), like many concepts in Computer Science, is very familiar to even newer programming students, but they lack the vocabulary to formally describe it. However, in this instance they could probably informally name it: trial-and-error. Yes, very much like the social sciences, computer science academics love giving existing concepts fancy names. If we were to humor them, they would describe it in five-ish steps:

  1. Add a test
  2. Run tests, check for failures
  3. Change code to address failures/add another test
  4. Run tests again, refactor code
  5. Repeat
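In JUnit terms, one trip around that loop might look like the minimal sketch below; the Calculator class and its add() method are hypothetical placeholders, not from any real project:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    public class CalculatorTest {

        // Step 1: add a test. Steps 2-4: run it and watch it fail before
        // Calculator.add() works, change the code until it passes, then
        // re-run and refactor. Step 5: repeat with the next test.
        @Test
        public void addSumsTwoNumbers() {
            Calculator calculator = new Calculator();   // hypothetical class
            assertEquals(7, calculator.add(3, 4));
        }
    }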

The TDD process comes with some assumptions as well, one being that you are not building the system to test while writing tests; these tests are for functionally complete projects. This technique is also used to verify that code achieves some valid outcome outlined for it, with a successful test being one that fails, rather than the “successful” tests that reveal an error as in traditional testing. Related to our most recent classwork, TDD should achieve complete coverage by testing every single line of code, which in the parlance of said classwork would be complete node and edge coverage.

Additionally, TDD has different levels, two to be precise: Acceptance TDD and Developer TDD. The first, ATDD, involves creating a test to fulfill the specifications of the program and correcting the program as necessary to allow it to pass this test. This form of testing is also known as Behavior-Driven Development. The latter, DTDD, is usually referred to as just TDD and involves writing tests and then code to pass them in order to, as mentioned before, test the functionality of all aspects of a program.

As it relates to our coursework, the second assignment involved writing tests to check functionality based on the project specifications. While we did not modify the given program code, or at least modified very little of it, we used the iterative process of writing and re-writing tests in order to verify the correct functioning of whatever method or feature we were hoping to test. In this way, the concept is very simple, though it remains to be seen whether it stays that way given different code to test.

Sources:

Guru99 – Test-Driven Development

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

Follow the Yellow Brick Road

Path testing piqued my interest when it was discussed in my CS-443 Software Testing class, so I decided to dig deeper into the topic and see what others said about the testing method. I found an article on GeeksforGeeks that focused on path testing. This type of testing focuses on the paths through the code itself, calculating complexity with McCabe's Cyclomatic Complexity, V(G) = E - N + 2P, where E = the number of edges in the control flow graph, N = the number of vertices in the control flow graph, and P = the program factor (the number of connected components). The advantages of path testing are reducing redundant tests and focusing on the logic of the program.
Path testing seems to focus on the specified program and create the most appropriate test cases based on that program, which in turn allows for the best possible tests to be performed. Understanding code in a node-graph way allows the tester to accurately understand the program, what needs to be tested, and what can be tested individually or as a group. I really like the way that path testing views code because it is easy to understand and follow. Path testing, to me, is a directed style of testing that most people do without realizing it on a much simpler scale, and because of its complexity calculations it has more concrete evidence to support the style of testing.
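As a quick worked example of the formula (my own, not from the article), consider a method with a single if/else:

    // Hypothetical method used only to illustrate McCabe's formula.
    public static int absoluteValue(int x) {
        if (x < 0) {       // decision node
            return -x;     // true branch
        } else {
            return x;      // false branch
        }
    }
    // Control flow graph: decision -> true branch -> exit,
    //                     decision -> false branch -> exit.
    // E = 4 edges, N = 4 nodes, P = 1, so V(G) = 4 - 4 + 2(1) = 2,
    // meaning two linearly independent paths: one test with x < 0
    // and one with x >= 0 cover them both.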

Link to Article Referenced: https://www.geeksforgeeks.org/path-testing/

From the blog CS@Worcester – Tyler Quist’s CS Blog by Tyler Quist and used with permission of the author. All other rights reserved by the author.

Integration Testing

Integration testing is the second step in your overall testing process. First you perform your unit tests, testing each individual component to see if it passes. However, at that point your testing process is just getting started and is not ready for a full system test. After the unit tests and before the system test you must run an integration test. This is the process in which you combine all your units to test for any faults in the interactions between one another. Yes, each individual unit might pass on its own, but having them work together simultaneously is an integral part of the program.

There are many approaches one can take to integration testing:

  • Big Bang is an approach to integration testing where all or most of the units are combined and tested in one go. This approach is taken when the testing team receives the entire software in a bundle. So what is the difference between Big Bang integration testing and system testing? The former tests only the interactions between the units, while the latter tests the entire system.
  • Top Down is an approach to integration testing where top-level units are tested first and lower-level units are tested step by step after that. This approach is taken when a top-down development approach is followed. Test stubs are needed to simulate lower-level units which may not be available during the initial phases (a minimal stub sketch follows this list).
  • Bottom Up is an approach to integration testing where bottom-level units are tested first and upper-level units step by step after that. This approach is taken when a bottom-up development approach is followed. Test drivers are needed to simulate higher-level units which may not be available during the initial phases.
  • Sandwich/Hybrid is an approach to integration testing which combines the Top Down and Bottom Up approaches.
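To make the test stub idea concrete, here is a minimal sketch in Java; all of the names (PaymentService, CheckoutProcessor) are hypothetical and only illustrate the top-down approach:

    // Interface for a lower-level unit that has not been built yet.
    interface PaymentService {
        boolean charge(double amount);
    }

    // Test stub simulating the missing lower-level unit.
    class PaymentServiceStub implements PaymentService {
        @Override
        public boolean charge(double amount) {
            return true; // always succeeds so the upper level can be tested now
        }
    }

    // Upper-level unit under test, handed the stub during integration testing.
    class CheckoutProcessor {
        private final PaymentService payments;

        CheckoutProcessor(PaymentService payments) {
            this.payments = payments;
        }

        boolean checkout(double total) {
            return payments.charge(total);
        }
    }

A test would construct CheckoutProcessor with the stub and assert on checkout(); once the real PaymentService exists, it is swapped in and the same tests are re-run.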

I think this part of the testing process is the most interesting. Once you have individually working components it’s like making sure the puzzle pieces fit. 

Integration Testing. (2018, March 3). Retrieved from http://softwaretestingfundamentals.com/integration-testing/.

From the blog cs@worcester – Zac's Blog by zloureiro and used with permission of the author. All other rights reserved by the author.

Docker and Automated Testing

Last week, in my post about CI/CD, I brought up Docker. Docker can be used to create an “image”, which is a series of layers that are built upon each other. For example, you can create an image of the Ubuntu operating system. From there, you can define your own image with Python pre-installed as a second layer. When you run this image, you create a “container”. This is isolated and has everything installed already so that you, or anyone else on your development team, can use the image and know reliably that it has all necessary dependencies and is consistent.
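That layering can be sketched as a minimal Dockerfile; this is my own illustration of the Ubuntu-plus-Python example, and the tag and package names are placeholders:

    # Layer 1: start from an Ubuntu base image (tag is a placeholder).
    FROM ubuntu:18.04

    # Layer 2: install Python on top of the base.
    RUN apt-get update && apt-get install -y python3

    # Default command for containers created from this image.
    CMD ["python3", "--version"]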

Of course, Docker will get much more complicated and images will tend to have many more layers. In projects that run on various platforms, you will also have images that are built differently for different versions.

So how does this apply to CI/CD? Docker images can be used to run your pipeline, build your software, and run your tests.

The company released a Webinar discussing how CI/CD can be integrated with Docker. They discuss the three-step process of developing with Docker and GitLab: Build, Ship, Run. These are the stages they use in the .gitlab-ci.yml file, but remember you can define other intermediate stages if needed. The power of CI/CD and Docker is apparent, because “from a developer’s perspective, all you have to do is a ‘git push’ — and that’s it”. The developer needs to write the code and upload to version control and the rest is automated, with the exception being human testers who give feedback on the deployed product. However, good test coverage should prevent most issues and these tests are more about overall experience.
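As a rough sketch (my own, not the webinar's actual file), a Build/Ship/Run pipeline in .gitlab-ci.yml might look something like this, assuming a runner that can execute Docker commands and a configured registry; the image name my-app is a placeholder:

    # .gitlab-ci.yml - hypothetical minimal pipeline with Build/Ship/Run stages
    stages:
      - build
      - ship
      - run

    build_image:
      stage: build
      script:
        - docker build -t my-app:latest .

    ship_image:
      stage: ship
      script:
        - docker push my-app:latest

    run_app:
      stage: run
      script:
        - docker run -d my-app:latest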

Docker CI and Delivery Workflow (from the Docker Demo Webinar, 4:49)

Only five lines of added code in .gitlab-ci.yml are necessary to automate the entire process, although the Dockerfile contains much more detail about which containers to make. The Dockerfile defines the images to create and the code that needs to be run. In the demo, the latest Ubuntu image is pulled from a server to create a container on which the code will be run. Then variables are defined, and Git is automated to pull source code from the GitLab repository within this container.

Then, a second container is created from an image with Python pre-installed. This container is automated to copy the code from a directory in the first container, explained above. Next, dependencies are automatically installed for Flask, and Flask is run to host the actual code that was copied from the first image.

This defines the blueprint for what is to be done when changes are uploaded to GitLab. When the code is pushed, each stage in the pipeline from the .gitlab-ci.yml file is run, each stage passes, and the result is a simple web application already hosted from the Docker image. Everything is done.

In the demo, as should usually be done in practice, this was done on a development branch. Once the features are complete, they can be merged with the master branch and deployed to actual users. And again, from the developer’s perspective, this is done with a simple ‘git push’.

From the blog CS@Worcester – Inquiries and Queries by ausausdauer and used with permission of the author. All other rights reserved by the author.

Out of Bounds

While searching for more information about boundary value testing I stumbled across “Boundary Value Analysis & Equivalence Partitioning with Examples” on the website Guru99, which gave a great explanation of what each is, along with an interactive part where you could type in values to see where they would fall in the specific examples. Boundary testing checks the extreme values and the in-between values, and allows the tester to focus on the important values to be tested rather than go through and try to test every possible input. The great thing about this source is that it provides not only a textual explanation but a visual and interactive one as well.
While boundary testing focuses on the minimum and maximum and the values between them, equivalence partitioning also checks the values that are invalid and do not meet the requirements set by the program. For example, an equivalence partition would check for exceptions, like a person giving a negative value for money to be withdrawn from a bank account. I found this website extremely helpful because I was having trouble distinguishing between the two types, since they are similar in some ways. Personally, I believe that equivalence testing is the more sensible approach because it covers the values that would not and should not work for a system/program and makes sure that they are dealt with appropriately.
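Borrowing the post's bank example, a few JUnit cases along these lines might look like the sketch below; the Account class, its withdraw() method, and the 1-to-500 limits are all made up for illustration:

    import static org.junit.jupiter.api.Assertions.assertThrows;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    public class AccountWithdrawTest {

        // Boundary tests: the minimum and maximum allowed withdrawal.
        @Test
        public void acceptsBoundaryValues() {
            Account account = new Account(1000);      // hypothetical class
            assertTrue(account.withdraw(1));          // lower boundary
            assertTrue(account.withdraw(500));        // upper boundary
        }

        // Equivalence test: the invalid partition of negative amounts.
        @Test
        public void rejectsNegativeAmounts() {
            Account account = new Account(1000);
            assertThrows(IllegalArgumentException.class,
                    () -> account.withdraw(-50));
        }
    }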

Link to Website Referenced: https://www.guru99.com/equivalence-partitioning-boundary-value-analysis.html

From the blog CS@Worcester – Tyler Quist’s CS Blog by Tyler Quist and used with permission of the author. All other rights reserved by the author.

Path of Most Resistance


In my last blog, I sought to cover Integration Testing, and in doing so we covered the two distinct types outlined by Mr. Fowler. Of these, Broad Integration Testing (BIT to save time) is most relevant to the next subject I wish to cover: Path Testing. BIT covers the interactions between all ‘services’ within a program, meaning a program’s completed modules are tested to ensure that their interactions match expectations and do not fail the tests created for them. In this way, Path Testing is very similar, but with a focus on how paths through various aspects/modules of a program, hopefully, work or do not.

As opposed to BIT, Path Testing (PT) seeks to identify not just the interactions between modules, but any and all possible paths through an application, and to discover those parts of the application that have no path. The ultimate goal is to find and test all “linearly independent paths”, a linearly independent path being one that covers a partition that has yet to be covered. PT is made up of, and can integrate, other testing techniques as well, including what we’ve covered most recently: equivalence testing. Using this technique, paths can be grouped by their shared functionality into classes, in order to eliminate repetition in testing.

When determining which paths to take, one could be mistaken for wanting to avoid the same module more than once; as stated previously, we are seeking paths we have yet to take. However, very often the same path must be taken, at least initially, to reach several modules. In fact, a path might be nearly or actually identical to one that has come before it, but if several values must be tested along this path then it is considered distinct as well. An excellent example of this made by the article I chose states that loops or recursive calls are very often dictated by data, and will necessarily require multiple test values.

However, after this point the author begins to move away from the purely conceptual to actual graphs representing these paths, specifically directed graphs. While it was painful to see these again after thinking I had long escaped discrete math, they provide a perfect illustration of the individual modules you expect a path to trace through, as well as possible breaking points. Directed graphs represent tightly coupled conditions, and in this way they express the order in which a program runs and the cause and effect of certain commands upon execution. In this way, path testing offers a much more concise visual presentation of the testing process than something like equivalence testing. These graphs are quite self-explanatory, but I look forward to applying these concepts in class to actual code.
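To illustrate the article's point about loops being dictated by data, here is a small sketch of my own; even this tiny method needs several input values to cover its paths:

    // Hypothetical method: the paths taken depend entirely on the input data.
    public static int sumPositives(int[] values) {
        int sum = 0;
        for (int v : values) {   // decision: loop again or exit
            if (v > 0) {         // decision: add or skip
                sum += v;
            }
        }
        return sum;
    }
    // Path 1: values = {}    -> the loop body never runs
    // Path 2: values = {5}   -> loop once, take the if branch
    // Path 3: values = {-5}  -> loop once, skip the if branch
    // The "same" loop must be re-walked with different data to cover all three.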

Sources

Path Testing: The Theory

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

Edge Testing

This past week in my Software Testing course I learned about edge testing. I think when I was first learning about this subject I was more apt to lean towards boundary testing, because my thought process was that everything that is supposed to work in a program should work, and that is what is important to test. Anything outside of that can just be blocked, and checking it is time consuming and tedious because those values will fail. However, I was intrigued by edge testing because it places importance on all values across the domain, as well as out of bounds. I saw that it is not enough to just assume the failures; you also have to know why they cause a failure in order to combat them. Some problems are not easy to find, but it is just as important to test the failures as it is to test the correct values and functions. Edge testing is a much more thorough and logical approach. This article was a nice pep-talk and helped persuade my thinking, check it out!

From the blog cs@worcester – Zac's Blog by zloureiro and used with permission of the author. All other rights reserved by the author.

Equivalence VS. Boundary

For a majority of testing cases, you will want to use equivalence class testing over boundary class testing. Boundary class testing tests the highest and lowest possible values in a function. Equivalence class testing is one of the most well known and effective ways of testing a function. Equivalence class testing is essentially an extension of boundary class testing in which values around the highs and lows of a function are tested. The difference with equivalence class testing is that we go a step further and partition the function into sections of values that would make sense to test for the function itself. For example, with a function that checks whether a student has enough credits to graduate, equivalence class testing would check to make sure the function rejects outliers such as values near 0 or extremely high values; this would be, in essence, boundary class testing. After that, since the function is checking for a certain number of credits, a user would want to test values just higher and lower than that value. This checking of values near the function's purpose is why equivalence class testing is much more beneficial. A person does not want to write tests only for the boundaries, because that would mean that the actual functionality of the function is never tested. The main time boundary class testing is used is when partitioning the function into smaller sections would not make logical sense.
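A minimal JUnit sketch of that graduation example, assuming a hypothetical GraduationCheck.hasEnoughCredits() method and a made-up 120-credit requirement:

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    public class GraduationCheckTest {

        // Outlier partition, essentially boundary class testing.
        @Test
        public void rejectsOutliers() {
            assertFalse(GraduationCheck.hasEnoughCredits(0));
        }

        // Partition just below the (made-up) 120-credit requirement.
        @Test
        public void rejectsJustUnderRequirement() {
            assertFalse(GraduationCheck.hasEnoughCredits(119));
        }

        // Partition at and just above the requirement.
        @Test
        public void acceptsAtAndAboveRequirement() {
            assertTrue(GraduationCheck.hasEnoughCredits(120));
            assertTrue(GraduationCheck.hasEnoughCredits(121));
        }
    }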

From the blog CS@Worcester – Journey Through Technology by krothermich and used with permission of the author. All other rights reserved by the author.

Jupiter, But Not the Planet

One of the most important parts of testing is the testing tools used, so it's best to see what JUnit 5, also known as Jupiter, has to offer the software tester. I read a post by Eugen Paraschiv titled “A Look at JUnit 5's Core Features & New Testing Functionality”. This post explained the new features and strengths of the latest JUnit testing library. One thing that is new in JUnit 5 is assertAll, which allows multiple assertions to be grouped and executed together, reporting every failure in the group rather than stopping at the first one. assertThrows allows the tester to test for exceptions and the conditions that will trigger those exceptions to be thrown. Jupiter has many more great testing tools, which makes me excited to write more tests and see what I can test. One of my favorite things is that Jupiter has a vintage engine which allows JUnit 4 and JUnit 3 tests to be run as well with no issues. This article/post will help me a lot moving forward, since I'm sure I will be using these functions in my testing classes in the near future.
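A minimal sketch of those two features (the values asserted here are arbitrary examples of my own):

    import static org.junit.jupiter.api.Assertions.assertAll;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    public class JupiterFeaturesTest {

        // assertAll executes every grouped assertion and reports all failures together.
        @Test
        public void groupedAssertions() {
            assertAll("string checks",
                    () -> assertEquals(5, "hello".length()),
                    () -> assertEquals("HELLO", "hello".toUpperCase()));
        }

        // assertThrows verifies that a specific exception is thrown by the code.
        @Test
        public void expectedException() {
            assertThrows(ArithmeticException.class, () -> {
                int impossible = 1 / 0;
            });
        }
    }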

Link to Article mentioned: https://stackify.com/junit-5/

From the blog CS@Worcester – Tyler Quist’s CS Blog by Tyler Quist and used with permission of the author. All other rights reserved by the author.