Category Archives: CS443

Path Testing

Hello, and welcome back to my blog.

This week I will be sharing what I have been learning about path testing. Path testing is a white-box testing method that uses a program's source code to design test cases covering every possible executable path. The underlying bug assumption is that a defect causes the program to follow a different path than the one intended.

Path testing uses cyclomatic complexity to establish the number of independent paths, and test cases are then generated for each path. Developers can choose to execute some or all of these paths when performing this kind of testing. Path testing techniques are among the oldest structural testing methods, and path testing is most useful for unit testing new applications.

It provides full branch coverage without requiring every possible control flow graph path to be covered. The four-step process begins with drawing a control flow graph of the software under test. Next, the cyclomatic complexity of the program is calculated from the number of edges (E), the number of nodes (N), and the number of connected components, or program factor (P), using V(G) = E - N + 2P. That value tells us how many linearly independent paths to select: the computed cyclomatic complexity equals the cardinality of the path set. Finally, we develop test cases for each of the paths determined in the previous steps.
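
To make the calculation concrete, here is a small toy example of my own (it is not drawn from the sources below): a Java method with two decision points, annotated with one reasonable way of counting its control flow graph.

// Toy example: classify a shipment (two decision points).
public static String shippingCategory(int weight, boolean express) {
    String category;
    if (weight > 50) {            // decision 1
        category = "freight";
    } else if (express) {         // decision 2
        category = "priority";
    } else {
        category = "standard";
    }
    return category;
}

Drawing the control flow graph with one node per statement or decision gives roughly E = 8 edges, N = 7 nodes, and P = 1 connected component, so V(G) = 8 - 7 + 2(1) = 3. That means three linearly independent paths, and therefore at least three test cases: weight over 50, weight at or under 50 with express shipping, and weight at or under 50 without it.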


Path testing is beneficial because it focuses test cases on program logic. It helps to uncover faults in the code, reduces redundant tests, and ensures that every program statement is executed at least once. Thank you for taking the time to visit my blog and join me on my growth as a software developer.

Sources:

http://www.mcr.org.in/sureshmudunuri/stm/unit2.php

https://ifs.host.cs.st-andrews.ac.uk/Books/SE9/Web/Testing/PathTest.html

https://www.geeksforgeeks.org/path-testing-in-software-engineering/

From the blog cs@worcester – Coding_Kitchen by jsimolaris and used with permission of the author. All other rights reserved by the author.

Silent Killers of Quality

Candice Coetzee's article "Eliminating the Silent Quality Killers in Your Organization" on Software Testing News covers several corners that companies often cut in order to bring their product to market, which inevitably creates bad organizational habits and sets them back in the long term. These silent killers all link back to the quality assurance process, and the article outlines the steps an organization can take to remediate these failures.

The first failure the author points out is not having a software quality assurance strategy in the first place. She suggests making sure that every facet of your business has a QA strategy, from project management to development to operations. Another silent killer, one that could be business-destroying, is skimping on the software architecture process. The author insists that with thorough planning and implementation of practices such as regression testing, debugging, and logging, many downstream failures will be reduced or virtually eliminated.

We hear more and more about products trying to be agile but launching prematurely; Coetzee suggests that "building in some fat" around the QA time window can pay dividends down the road, minimizing panel beating and the ever-growing fires of customer dissatisfaction that your customer service teams would otherwise have to fight. She describes how these post-launch quality failures are not only costly to a company's bottom line but can cause irreparable brand damage. Sounds like a reason to always put quality first!

Read more about the article here!

From the blog CS@Worcester – Cameron Boyle's Computer Science Blog by cboylecsblog and used with permission of the author. All other rights reserved by the author.

Cloudy Future for Mobile Testing?

Eran Kinsbruner's article at DevOpsOnline.co.uk titled "Testing Mobile Apps in the Cloud" discusses the recent push toward, advantages of, and necessity of testing mobile applications in the cloud. Prior to the pandemic there was already a struggle to test comprehensively across all of the existing operating systems. Between vendor-specific Android variants (Samsung, LG, Google, etc.) and Apple's platforms (iOS, iPadOS), it was difficult enough to get these devices into the hands of every developer for testing in a single office; with office spaces shut down, physical testing on these systems became even less practical. Cue cloud-based testing.

Of the many benefits of cloud-based, virtualized testing, the one I found most impactful was the scalability of the operation. Being able to effortlessly spin up thousands of device instances for stress testing is far more practical than buying that many devices, to say nothing of testing interoperability across operating system versions. Rather than keeping multiple devices on older operating systems or rolling devices back to test compatibility, one can simply boot up an instance running the older version.

Kinsbruner also points out the improvement to team communication and handoffs that comes with cloud collaboration. With a centralized cloud repository, it is much easier for members of the production, testing, or QA teams to observe and monitor program changes in real time using synchronized development environments, so they can estimate what their task will look like by the time it reaches them. It seems that remote development and testing in the cloud may well be an integral part of the future of software.

Read the full article here

From the blog CS@Worcester – Cameron Boyle's Computer Science Blog by cboylecsblog and used with permission of the author. All other rights reserved by the author.

Getting Started With Python Unit Testing After Learning JUnit

Christian Shadis

CS-443 Self-Directed Blog Post #5

This past semester I have been spending a lot of time in my QA class working with JUnit 5. I want to take my familiarity with JUnit and apply the same principles to unit testing in Python. I am leaning toward a data-centric career path, and Python is widely used for data analytics, so this would be valuable information for me.

This post is not an expert-authored tutorial on Python unit testing, because I myself am just getting started with it. Instead, I will give tiny, bite-sized examples of just the basics, translating them from JUnit to Python's unittest. I built identical small classes in Java and Python and will build tests to go with them. Below are the classes in Java and Python, respectively.

/**
 * Simple class with basic methods, written in Java
 * @author Christian Shadis
 */
public class main {
    public static void main(String[] args){
        int i = 0; // dummy code to keep compiler happy
    }

    public static int addTwoNumbers(int x, int y){
        return x + y;
    }

    public static String toCapital(String str){
        return str.toUpperCase();
    }
}

# Simple class with basic functions, written in Python
# Author: Christian Shadis

class main:
    def add_two_numbers(x, y):
        return x+y

    def to_capital(string):
        return string.upper()

Writing the unit tests in JUnit is simple: we import the JUnit assertions and the @Test annotation, then create the test class and each of the two tests, following the usual setup, exercise, and verify steps.

/**
 * Test Class for main.java
 * @author: Christian Shadis
 */
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

public class maintest {

    @Test
    void testAddTwoNumbers(){
        int result; // setup
        result = main.addTwoNumbers(566, 42); // exercise
        assertEquals(608, result); // verify
    }

    @Test
    void testToCapital(){
        String result; // setup
        result = main.toCapital("string"); // exercise
        assertEquals("STRING", result); // verify
    }
}

Luckily, writing unit tests in Python is just as easy as in Java. We import the unittest library and the class under test, then define the test class with unittest.TestCase as its parameter, that is, as a subclass of TestCase.

import unittest
from main import *

class maintest(unittest.TestCase):

    def test_add_two_numbers(self):
        pass

    def test_to_capital(self):
        pass

Now we need a test case for each of our two functions, add_two_numbers and to_capital. Python's unittest.TestCase offers assertions very similar to those in JUnit. Use assertEqual(x, y) to check that x == y. This is the assertion we will use in this example, but any of the following are commonly used unittest assertions:

  • assertEqual
  • assertNotEqual
  • assertTrue
  • assertFalse
  • assertIs
  • assertIsNot
  • assertIsNone
  • assertIsNotNone
  • assertIn
  • assertNotIn
  • assertIsInstance
  • assertNotIsInstance

assertEqual takes two arguments and, as the name suggests, asserts their equality. See the implementation below:

    def test_add_two_numbers(self):
        result = main.add_two_numbers(33, 44)
        self.assertEqual(result, 77)

    def test_to_capital(self):
        result = main.to_capital("hi")
        self.assertEqual(result, "HI")

Run the tests and you will see them both pass. Below is a screenshot of the tests executing in Python and then in JUnit. As you can see, the Python tests run slightly faster than the Java tests, but not by much. I used IntelliJ IDEA and the PyCharm IDE.

I have found that the most helpful way to learn testing is to just play around with the unit tests: see what works and what doesn't, what causes failures, and what those failures look like. I would suggest that any other beginner QA student do the same. Playing around with different assertions and reading the unittest documentation is a great way to learn this library. I hope this post gave you some insight into how to get started with unit testing your Python modules if you have worked with JUnit in the past.

4/28/2021

Works Cited:
unittest - Unit testing framework. (n.d.). Retrieved April 28, 2021, from https://docs.python.org/3/library/unittest.html

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Mockito mocking

On the Vogella Mockito tutorial page, there is a brief overview and explanation of what mocking is and what a mock object is, along with a simple diagram showing the sequence of what typically happens when you use Mockito in your tests. The tutorial also mentions that the Mockito libraries are best used with either Maven or Gradle, which are supported by all modern IDEs. Here you can also find code examples, and even guidance on where to put them in your code. The tutorial is packed with visual representations of the information given, which I find extremely helpful.

I would say this particular article/tutorial can be very helpful in improving one's understanding of how to use Mockito. It is filled with tips, easy-to-understand diagrams, explanations, and code examples. It even dives into using the spy() method to wrap real Java objects, mocking static methods, creating mock objects (in the exercise provided, you can create a sample Twitter API), and testing an API using Mockito.
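
As a rough sketch of the style of test the tutorial walks through (the OrderService and PaymentGateway names here are my own invented stand-ins, not from the Vogella exercise), a Mockito-based unit test might look like this:

import static org.mockito.Mockito.*;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Hypothetical collaborator and class under test, used only for illustration.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

class OrderService {
    private final PaymentGateway gateway;
    OrderService(PaymentGateway gateway) { this.gateway = gateway; }

    boolean placeOrder(String account, double amount) {
        return gateway.charge(account, amount);
    }
}

class OrderServiceTest {
    @Test
    void placeOrderChargesTheGateway() {
        // Create a mock so no real payment system is needed.
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("acct-1", 9.99)).thenReturn(true);

        OrderService service = new OrderService(gateway);
        assertTrue(service.placeOrder("acct-1", 9.99));

        // Verify the interaction happened exactly once.
        verify(gateway, times(1)).charge("acct-1", 9.99);
    }
}

The when(...).thenReturn(...) stubbing and the verify(...) interaction check are the kind of core Mockito calls the tutorial covers in much more depth.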

I have included in this blog post two other articles I found interesting and informative on the subject of Mocks/Mocking

From the blog cs@worcester – Coding_Kitchen by jsimolaris and used with permission of the author. All other rights reserved by the author.

Static Testing VS Dynamic Testing

Hello and welcome back to my Blog. Today I am going to be sharing about Static and Dynamic Testing.

What is Static Testing?

Static Testing is a method of software testing in which the software is examined without executing the code. Static Testing's main objective is to find errors early in the development process in order to improve the software's quality. This form of testing reduces the time spent finding bugs and therefore reduces the cost of having developers track them down later, which means fewer defects as we near deployment. Static Testing also increases communication among teams.

Below, I will give a brief overview of Review and Static Analysis, the two main ways in which Static Testing is performed.

Review is a process performed to find errors and defects in the software requirement specification; developers inspect the documents and sort out errors and ambiguities.

In an informal review, the developer shares their documents and code design with colleagues to get their thoughts and find defects early.

After the work has passed the informal review, it moves on to a walkthrough, which is performed by more experienced developers looking for defects.

Next, a peer review is performed with colleagues to check for defects.

Below is a list of free, open-source Static Testing tools:

VisualCodeGrepper

Risp

Brakeman

Flawfinder

Bandit

What is Dynamic Testing?

Dynamic Testing is a software testing method that is performed while the code executes. It examines the software's behavior at runtime, including its use of resources such as RAM and CPU, and checks inputs and outputs for errors and bugs. One common technique for dynamic testing is Unit Testing, in which code is exercised in small units or modules; in Java this is what JUnit is used for. Another common approach is Integration Testing, the process of testing multiple pieces of code together to ensure that they work with each other correctly.
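
As a quick illustration of the difference (a simplified sketch of my own, with made-up TaxCalculator and InvoiceTotaler classes, not taken from the sources), a unit test exercises one module in isolation while an integration test exercises two modules working together:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical modules used only for illustration.
class TaxCalculator {
    double taxOn(double amount) { return amount * 0.05; }
}

class InvoiceTotaler {
    private final TaxCalculator calc;
    InvoiceTotaler(TaxCalculator calc) { this.calc = calc; }
    double total(double subtotal) { return subtotal + calc.taxOn(subtotal); }
}

class DynamicTestingExamples {
    @Test
    void unitTest_taxCalculatorAlone() {
        // Unit test: one module, checked in isolation.
        assertEquals(5.0, new TaxCalculator().taxOn(100.0), 0.0001);
    }

    @Test
    void integrationTest_totalerWithRealCalculator() {
        // Integration test: two modules exercised together.
        InvoiceTotaler totaler = new InvoiceTotaler(new TaxCalculator());
        assertEquals(105.0, totaler.total(100.0), 0.0001);
    }
}

The class names are invented; the distinction is simply scope, with the first test isolating one module and the second checking that two modules cooperate.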

Below is a list of open-source Dynamic Testing tools:

Selenium

Carina

Cucumber

Watir

Appium

Robotium

I hope you find as much value as I do in learning about these testing methods.

From the blog cs@worcester – Coding_Kitchen by jsimolaris and used with permission of the author. All other rights reserved by the author.

Code Coverage Testing

Cover your tests

Welcome back. I am going to explain why you should be running most of your tests with some sort of code coverage. Code coverage is a software testing metric that analyzes how effective your tests are when run against your code. It shows the degree to which your code is actually executed, and it helps you determine how thorough your unit tests are by checking how much of the code they cover. The coverage percentage is calculated by dividing the number of elements exercised by your tests by the total number of elements found; for example, if your tests execute 80 of 100 lines, line coverage is 80%.

Code coverage enables you to improve the efficacy of your tests by giving you a statistical picture of what they actually exercise. The test cases you're currently running without code coverage may be less effective than you realize. Running with code coverage allows you to identify parts of your software that have not been examined by your test cases. This not only makes for better code but also saves time, because it gives you a clearer picture of where you need to refactor.

In short, code coverage identifies the parts of the software that are not exercised by a given set of test cases.

How is code coverage calculated?

Well, that depends on which tools you're using. However, most common tools combine some of the following types:

  • Line coverage: percent of lines tested
  • Function coverage: percent of functions called
  • Condition coverage: percent of boolean expressions covered
  • Branch coverage: percent of branches executed

Are all code coverage types created equal?

When I first learned about code coverage, I assumed that line coverage must be superior: hitting every line sounds most important and will probably include most methods. While line coverage is important, it misses a few things. Consider the following code from a Stack Overflow answer:

public int getNameLength(boolean isCoolUser) {
    User user = null;
    if (isCoolUser) {
        user = new John();  // user is only assigned on the true branch
    }
    return user.getName().length();  // NullPointerException when isCoolUser is false
}

When you call this method with the boolean variable isCoolUser set to true, your line coverage comes back 100%. However, this method has two paths:

  1. boolean is true
  2. boolean is false.

If the boolean is true, all lines are executed and line coverage reports 100%. What line coverage doesn't show is that calling this method with the boolean set to false produces a NullPointerException. Line coverage wouldn't reveal this bug, whereas branch coverage would. For these reasons, best practice is to use a coverage tool that examines multiple types of code coverage.
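
To make that concrete, here is a minimal JUnit 5 sketch of what covering both branches might look like. The User, John, and UserNames classes are minimal stand-ins of my own so the snippet compiles; the article's real classes may differ, and I am assuming John's name is "John".

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Minimal stand-ins so the sketch is self-contained (assumptions, not from the article).
class User { String getName() { return null; } }
class John extends User { @Override String getName() { return "John"; } }

class UserNames {
    public int getNameLength(boolean isCoolUser) {
        User user = null;
        if (isCoolUser) {
            user = new John();
        }
        return user.getName().length();
    }
}

class GetNameLengthTest {

    @Test
    void trueBranchReturnsNameLength() {
        // Covers the if-branch: a John is created and its name is measured.
        assertEquals(4, new UserNames().getNameLength(true));
    }

    @Test
    void falseBranchThrowsNullPointerException() {
        // Covers the other path: user stays null, exposing the bug.
        assertThrows(NullPointerException.class,
                () -> new UserNames().getNameLength(false));
    }
}

Running only the first test gives the 100% line coverage described above; the second test is the one branch coverage pressures you to write.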

You’ve convinced me, so how do I start?

Code coverage comes built into most IDEs. In IntelliJ, when you open the Run drop-down menu, "Run with Coverage" is two entries below the normal Run action. I have found this tool incredibly useful in my coding; since learning of it, I have not run tests without coverage. There have been a few instances where I created test classes that passed without realizing just how weak they really were. Using code coverage has helped me learn more and see my mistakes much more easily: lines I thought were testing the code, I could comment out, check whether coverage dropped, and learn from that. In IntelliJ's coverage view, lines that have been executed are shown in green, while lines that haven't are shown in red.

Other code coverage tools are listed below:

SonarQube

JaCoCo

Clover

Good coverage can’t fix bad test writing practices.

If your tests aren't comprehensive and don't cover multiple possibilities, coverage won't save you.

Thanks for taking the time to stop by and read my blog.

I learned quite a bit of information from the following links:

https://ardalis.com/which-is-more-important-line-coverage-or-branch-coverage/

https://stackoverflow.com/questions/8229236/differences-between-line-and-branch-coverage/8229711#8229711

https://www.softwaretestinghelp.com/code-coverage-tutorial/#Methodologies

From the blog cs@worcester – Coding_Kitchen by jsimolaris and used with permission of the author. All other rights reserved by the author.

A Review of: You Still Don’t Know How to Do Unit Testing (and Your Secret is Safe with Me)

Erik Dietrich's article "You Still Don't Know How to Do Unit Testing (And Your Secret is Safe with Me)" was an amusing and informative read about what unit testing is, what it isn't, and how the average developer can go from writing incorrect tests to effective, bona fide unit tests. He leads the article with the remark that, since 2002, he has seen unit testing go from an esoteric skill for quirky developers to something of an industry standard, and that as it was ushered into popularity it also pushed some developers into the cracks; this article is for them.

Dietrich starts by making several important distinctions about what a unit test is not, beginning with tests that he describes as being able to "fail when run on a machine without the 'proper setup'." He next brings up a concept I had not heard of before called "smoke testing," which seems to mean running test cases that cover the most essential parts of a program to determine whether it is launch-ready. Whatever the case, this is certainly not unit testing. Lastly, Dietrich states that a unit test should under no circumstance work your system from end to end; he gives the example of a user opening a terminal and testing the output, which he classifies as an end-to-end test.

In the author's words, the definition of unit testing is by most accounts circular, but with some good reason (to transcend any particular programming language). The example given is a set of C# methods and their respective test methods, starting with an "add" method from a calculator whose tests check that the arguments passed are added successfully or, if they are invalid, that the method properly throws an error, in this case an InvalidOperationException. For more practical detail, check out the original article here.
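
Dietrich's examples are written in C#, which I won't reproduce here; a rough JUnit analogue of the same idea (a happy-path test plus an exception-path test, using a made-up Calculator class of my own) might look something like this:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Hypothetical calculator standing in for the article's C# example.
class Calculator {
    int add(int x, int y) {
        if (x < 0 || y < 0) {
            throw new IllegalArgumentException("negative operands not supported");
        }
        return x + y;
    }
}

class CalculatorTest {
    @Test
    void addsTwoValidOperands() {
        assertEquals(7, new Calculator().add(3, 4));
    }

    @Test
    void rejectsInvalidOperands() {
        // The C# original throws InvalidOperationException;
        // IllegalArgumentException plays the analogous role here.
        assertThrows(IllegalArgumentException.class, () -> new Calculator().add(-1, 4));
    }
}

The names and the validation rule are mine; the point is simply that the normal result and the error behavior each get their own focused test.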

From the blog CS@Worcester – Cameron Boyle's Computer Science Blog by cboylecsblog and used with permission of the author. All other rights reserved by the author.

Test Precisely and Concretely: A Review

While much of the content was beyond my comprehension, I found Kevlin Henney's article "Test Precisely and Concretely" at Medium.com to be elucidating and enjoyable. Kevlin is known for his efficacious brevity in software, so it should come as no surprise that this translates to his writing. He comes right out of the gate with a strong opening sentence:

“It is important to test for the desired, essential behavior of a unit of code, rather than for the incidental behavior of its particular implementation.”

– Kevlin Henney @ https://medium.com/analytics-vidhya/test-precisely-and-concretely-810a83f6309a

While the first half of the quote describes the obvious end result we as programmers aim to achieve, our desire to write effective tests certainly doesn't preclude us from making the mistake described in the latter half. There may be diminishing returns on the profundity of this statement, but as an amateur programmer who has only just begun to learn proper software testing, I've found myself writing tests that produce passing results without actually verifying that the program performs accurate, verifiable work. For instance, when creating additional instances of an object in a recent assignment, I tested that the objects were constructed one after another, in the sense that memory was allocated for each, but I was not verifying that the objects were given consecutive, uniquely identifying numbers upon creation, which is where the real meaning of their creation lay. Simply put, it was the software-testing equivalent of begging the question.
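
As a sketch of that difference (using a hypothetical Ticket class of my own that is supposed to hand out consecutive ID numbers, not the actual class from my assignment), the weak test only confirms the objects exist, while the precise test checks the property that actually matters:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Hypothetical class under test: hands out consecutive ID numbers.
class Ticket {
    private static int nextId = 1;
    private final int id = nextId++;
    int getId() { return id; }
}

class TicketNumberingTest {

    @Test
    void weakTest_onlyChecksThatObjectsExist() {
        Ticket first = new Ticket();
        Ticket second = new Ticket();
        // Passes whenever construction succeeds; says nothing about numbering.
        assertNotNull(first);
        assertNotNull(second);
    }

    @Test
    void preciseTest_checksConsecutiveUniqueIds() {
        Ticket first = new Ticket();
        Ticket second = new Ticket();
        // Verifies the behavior the class actually exists to provide.
        assertEquals(first.getId() + 1, second.getId());
    }
}

The second test is barely longer, but it pins down the meaning of "consecutive" instead of merely confirming that construction succeeded.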

From my understanding, Kevlin's prescription for the aforementioned issue is to recognize that the conditions necessary for your test to pass are not always sufficient for the real use cases, and from this comes the realization that programmers ought to examine more carefully the literal meaning of the postconditions they test for. Furthermore, he states that an accurate condition alone does not make for a good test: a "good test should be comprehensible and simple enough that you can readily see that it is correct." In the spirit of the article, and perhaps even of Kevlin himself, I will avoid verbosity and recommend that everyone check out what makes for a good software test here.

From the blog CS@Worcester – Cameron Boyle's Computer Science Blog by cboylecsblog and used with permission of the author. All other rights reserved by the author.

RefinementCodeReview: A Review

Go check out the original article here! It’s great!

In Martin Fowler's blog post titled "RefinementCodeReview," he emphasizes the need to rethink how frequently developers review code. He points out that pre-integration reviews and pull requests, while important, should not be the only point in the development process at which code gets reviewed. Fowler highlights the etymological reason software is accurately named: it is soft, and like all malleable media it should be worked through from end to end. He suggests that a shift in metaphor, from architecture to Erik Dörnenburg's town planning, would pay dividends in inspiring more frequent refactoring of older code. While this seems obvious, it's a powerful distinction: if we approach old code like a large stone monolith carved with divine intentionality rather than a living, revisable set of rules used to build and expand, then we are certainly adding unnecessary apprehension to the development process.

“As soon as I have an understanding about the code that wasn’t immediately apparent from reading it, I have the responsibility to (as Ward Cunningham so wonderfully said) take that understanding out of my head and put it into the code. That way the next reader won’t have to work so hard.”

Martin Fowler, 1/28/2021 @ https://martinfowler.com/bliki/RefinementCodeReview.html

I found this quote personally compelling, and particularly salient when Fowler addresses the dreaded yet somehow inevitable scenario where a high-threat vulnerability in the code is exploited. He calls these rare, high-impact concerns and concedes that no amount of testing will ever eliminate them entirely, but the practice of continuous and rigorous testing can make accepting a certain amount of risk from older or inherited code much more palatable. While the author doesn't mention the pattern by name, he implies that good practitioners of effective refactoring, who know the code base better, who are better at creating self-testing code, and who are better still at continuous integration, create a virtuous cycle of software development.

From the blog CS@Worcester – Cameron Boyle's Computer Science Blog by cboylecsblog and used with permission of the author. All other rights reserved by the author.