Category Archives: CS-443

Difference Between Black-Box, White-Box

White-box or glass-box testing is testing based on a program’s source code rather than its user interface. This type of testing examines the code itself to find flaws or errors in the internal logic, such as problems in algorithms, overflows, paths, and conditions, so that they can be fixed.

Black-box testing exercises the entire software, or a single software function, without examining the program’s source code or having a clear understanding of how it was designed. Testers learn how the software behaves by supplying inputs and observing the results. Typically, testers run tests not only with input data that is guaranteed to give correct results, but also with challenging input data that may cause errors, in order to understand how the software handles various types of data.

The program under test is treated as a black box, without considering its internal structure or characteristics. The tester only knows the relationship between the program’s inputs and outputs, or the program’s function, and designs test cases and judges the correctness of the results by relying on the requirement specifications that describe that relationship and function.

Black-box testing of software is used to verify the correctness and operability of software functions. The program is treated as a black box, without considering how it is structured internally; testing at the program’s interface simply checks whether each function behaves as the specification says it should. Black-box testing is also called functional testing or data-driven testing.

Taken to the extreme, white-box testing is exhaustive path testing, and black-box testing is exhaustive input testing. The two methods are based on completely different points of view and represent two opposite extremes. Each has its own emphasis and advantages, and neither can replace the other. In the modern view of testing, the two methods are not kept separate but overlap.

White-box testing of software is used to analyze the internal structure of a program. It relies on careful examination of the program’s details, designing test cases for specific conditions and testing the logic paths of the software, and checking the state of the program at various points to see whether the actual state matches the expected state.
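As a small illustration of the two viewpoints, here is a minimal JUnit 5 sketch built around a hypothetical absoluteValue method (the method and test names are invented for this example, not taken from the sources). The black-box test picks its inputs purely from the specification, while the white-box test picks inputs by reading the code so that every branch is exercised.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class AbsoluteValueTest {

   // hypothetical code under test, included only so the sketch is self-contained
   static int absoluteValue(int n) {
      if (n < 0)
         return -n;
      return n;
   }

   @Test
   void blackBoxFromSpecification() {
      // inputs chosen from the requirement "returns the absolute value",
      // without looking at the implementation
      assertEquals(7, absoluteValue(7));
      assertEquals(7, absoluteValue(-7));
      assertEquals(0, absoluteValue(0));
   }

   @Test
   void whiteBoxBranchCoverage() {
      // inputs chosen by reading the code so both paths of the if-statement run
      assertEquals(5, absoluteValue(-5)); // covers the n < 0 branch
      assertEquals(5, absoluteValue(5));  // covers the n >= 0 branch
   }
}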

sources:

Difference Between Black-Box, White-Box, and Grey-Box Testing

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.

Static vs Dynamic Testing

Two common methods used for software testing are static and dynamic testing. Although they are both testing methods, there are many differences between them. First we need to define what each of them is. Static testing is when we examine the software to look for errors or defects without actually executing the code. Dynamic testing, on the other hand, tests the software by executing the code and checking for errors in the inputs and in the overall behavior of the software. Static testing is conducted with documents related to the software, and the goal is to find errors early in the development cycle before the software gets too far along. Dynamic testing executes the code and analyzes things such as the inputs and outputs of the software to determine whether the results are correct. The goal of dynamic testing is to check the functional behavior of the code, and it also takes into account memory, CPU, and the overall performance of the software. Static testing uses manual or automated review of the documents, examining things such as the requirements for the software, the source code, the necessary test cases, or anything related to the overall design. Dynamic testing checks more directly whether the software works, using techniques such as black-box or white-box testing, and confirms that the code behaves the way it is intended to.

The main difference between static and dynamic testing is the one stated earlier: static testing does not execute the code, whereas dynamic testing does. Another important factor is the stage at which each occurs: static testing happens early in the process of developing software, whereas dynamic testing happens toward the end or completion stage. The goal of static testing is to prevent bugs or errors from being produced, while dynamic testing finds bugs or errors that were created during the development of the software. Static testing is often described as the verification process, whereas dynamic testing is more about the validation process. Static testing generally takes less time, whereas dynamic testing takes a bit longer due to the variety of test cases that need to be implemented. Overall, both of these testing methods are important in the development of software; they occur at different stages, and doing both well leads to the least amount of error in the software.
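As a rough sketch of the difference (my own example, not from the articles below), consider a hypothetical divide method. A static review of the code, whether a manual inspection or an automated static-analysis tool, can flag the unhandled zero divisor without ever running the program; a dynamic JUnit test finds problems by actually executing the code with concrete inputs.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class DivisionExample {

   // code under review: a static inspection would note that b == 0 is never handled
   static int divide(int a, int b) {
      return a / b; // throws ArithmeticException when b is 0
   }

   @Test
   void dynamicTestExecutesTheCode() {
      // dynamic testing: run the code and check its actual output
      assertEquals(4, divide(8, 2));
   }

   @Test
   void dynamicTestRevealsTheDefect() {
      // executing with the problematic input exposes what the review predicted
      assertThrows(ArithmeticException.class, () -> divide(8, 0));
   }
}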

Resources:

https://www.geeksforgeeks.org/difference-between-static-and-dynamic-testing/
https://www.guru99.com/static-dynamic-testing.html

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.

Boundary Value and Equivalence Class Testing

This week in the Software Quality Assurance & Testing class we got the third assignment, which was about Boundary Values and Equivalence Class Testing. We had worked during class time on different activities that covered this material, so I would say I really enjoyed working on this assignment. We had to complete three parts, each with a different level of difficulty. I learned to understand more about the way testing works and what exactly gets tested.

What we covered in this homework:

Boundary value testing is the process of testing at the extreme ends, or boundaries, of the partitions of the input values.

Robust Boundary Value Testing: the same idea, but it also introduces values just outside the boundaries.

Equivalence Class Testing, also known as Equivalence Class Partitioning (ECP) or Equivalence Partitioning, is an important software testing technique in which the team of testers groups the test input data into a number of different classes, or partitions, which are then used to test the software product.

Weak Normal Equivalence Class Testing: In this first type of equivalence class testing, one value from each equivalence class is tested by the team, and the values are identified in a systematic manner. Weak normal equivalence class testing is also known as the single fault assumption.

Strong Normal Equivalence Class Testing: Termed the multiple fault assumption, in strong normal equivalence class testing the team selects test cases from each element of the Cartesian product of the equivalence classes. This ensures a notion of completeness in testing, as it covers all equivalence classes and gives the team one of each possible combination of inputs.

Worst-Case boundary value analysis is a Black Box software testing technique.

In worst-case boundary value testing, we make all combinations of each boundary value of one variable with each boundary value of the other variable.

Edge Testing is a combination of Boundary Value Analysis and Equivalence Class Testing.

Weak Robust Equivalence Class Testing: Like weak normal equivalence, weak robust testing also tests one value from each equivalence class. However, unlike the former method, it also includes test cases for invalid values.

Strong Robust Equivalence Class Testing: Another type of equivalence class testing, strong robust testing produces test cases for all valid and invalid elements of the Cartesian product of the equivalence classes. However, it is incapable of reducing redundancy in testing.
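To make some of these concepts more concrete, here is a small JUnit 5 sketch contrasting weak normal and strong normal equivalence class testing. The shippingCost function, its two weight classes, two distance classes, and the costs are all made up purely for illustration; they are not part of the assignment.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ShippingCostEquivalenceTest {

   // hypothetical code under test: weight classes 1-10 (light) and 11-50 (heavy),
   // distance classes 1-100 (local) and 101-1000 (long-distance)
   static int shippingCost(int weightKg, int distanceKm) {
      int base = (weightKg <= 10) ? 5 : 15;
      return (distanceKm <= 100) ? base : base * 2;
   }

   // Weak normal: one representative value from each class, so the number of tests
   // equals the largest number of classes for any single variable (two here)
   @Test
   void weakNormalLightLocal() { assertEquals(5, shippingCost(5, 50)); }

   @Test
   void weakNormalHeavyLongDistance() { assertEquals(30, shippingCost(25, 500)); }

   // Strong normal: one test per element of the Cartesian product of the classes
   // (2 weight classes x 2 distance classes = 4 combinations)
   @Test
   void strongNormalLightLocal() { assertEquals(5, shippingCost(5, 50)); }

   @Test
   void strongNormalLightLongDistance() { assertEquals(10, shippingCost(5, 500)); }

   @Test
   void strongNormalHeavyLocal() { assertEquals(15, shippingCost(25, 50)); }

   @Test
   void strongNormalHeavyLongDistance() { assertEquals(30, shippingCost(25, 500)); }
}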

From the blog CS@Worcester – Tech, Guaranteed by mshkurti and used with permission of the author. All other rights reserved by the author.

The Best of Both Worlds: Edge Testing over Boundary-Value Testing or Equivalence Partition Testing

Christian Shadis

Self-Directed Blog Post #3

Two common testing techniques in Black-Box Software Testing are Boundary Value Testing (or Boundary Value Analysis) and Equivalence Partition Testing (or Equivalence Class Testing). Both techniques examine the output of a program given its inputs.

Consider a function eligibleForScholarship which accepts two inputs, SAT score (for simplicity, only considering the 1600 scale) and GPA (on a 4.0 scale), and returns the total reward in dollars a student is eligible to receive. At CompUniversity, a student will receive a scholarship if their SAT score is at least 1200 and their GPA is at least 3.5. The amount of that scholarship is determined by the criteria in the table below.

SAT Score Award
0-1199 $0
1200-1399 $1000
1400-1600 $2500

In addition to the SAT reward, additional money is given based on GPA.

GPA % of SAT award earned
0-3.49 0%
3.5-3.74 100%
3.75-3.89 120%
3.90-4.0 150%

For example, a student with an SAT score of 1450 and GPA of 3.8 will receive $2500 plus an additional 20% for a total of $3000. A student with an SAT score of 1600 and a GPA of 3.49 will receive $0.

If a software tester wanted to write test cases based on Robust Boundary Value Analysis, they would arrive at the following set of test cases:

Boundaries:
SAT Score upper bound: 1600 (physical limit)
SAT Score lower bound: 0 (physical limit)
GPA upper bound: 4.00 (physical limit)
GPA lower bound: 0.00 (physical limit)

Test Case Boundary Values SAT Score GPA Expected Output
1 <x1min-, x2nom> -1 3.50 Exception
2 <x1min, x2nom> 0 3.50 0
3 <x1min+, x2nom> 1 3.50 0
4 <x1max-, x2nom> 1599 3.50 2500
5 <x1max, x2nom> 1600 3.50 2500
6 <x1max+, x2nom> 1601 3.50 Exception
7 <x1nom, x2nom> 1200 3.50 1000
8 <x1nom, x2min-> 1200 -0.01 Exception
9 <x1nom, x2min> 1200 0.00 0
10 <x1nom, x2min+> 1200 0.01 0
11 <x1nom, x2max-> 1200 3.99 1500
12 <x1nom, x2max> 1200 4.00 1500
13 <x1nom, x2max+> 1200 4.01 Exception

Disadvantages of Boundary Value Testing:
There are possible outputs that this set of test cases never checks (e.g. $3000, $3750). Additionally, this set of test cases fails to test the behavior inside the program, instead focusing on outermost limits.

If the software tester instead wanted to write test cases based on Strong Normal Equivalence Partitioning, they would arrive at the following set of test cases:

Test Case Equivalence Classes SAT Score GPA Expected Output ($)
1 (0 <= sat < 1200, 0 <= gpa < 3.50) 1000 2.0 0
2 (1200 <= sat < 1400, 0 <= gpa < 3.50) 1300 2.0 0
3 (1400 <= sat <= 1600, 0 <= gpa < 3.50) 1500 2.0 0
4 (0 <= sat < 1200, 3.50 <= gpa < 3.75) 1000 3.6 0
5 (1200 <= sat < 1400, 3.50 <= gpa < 3.75) 1300 3.6 1000
6 (1400 <= sat <= 1600, 3.50 <= gpa < 3.75) 1500 3.6 2500
7 (0 <= sat < 1200, 3.75 <= gpa < 3.90) 1000 3.8 0
8 (1200 <= sat < 1400, 3.75 <= gpa < 3.90) 1300 3.8 1200
9 (1400 <= sat <= 1600, 3.75 <= gpa < 3.90) 1500 3.8 3000
10 (0 <= sat < 1200, 3.90 <= gpa <= 4.00) 1000 3.95 0
11 (1200 <= sat < 1400, 3.90 <= gpa <= 4.00) 1300 3.95 1500
12 (1400 <= sat <= 1600, 3.90 <= gpa <= 4.00) 1500 3.95 3750

Disadvantages of Equivalence Partitioning:
This set of test cases does not successfully account for behavior at the boundaries of valid inputs, nor does it account for the behavior of the program with every possible combination of inputs.

The disadvantages of boundary value testing and equivalence partitioning can be addressed by instead implementing edge testing.

Edge Testing

In Edge testing, we want to examine any point at which behavior changes, and create test cases for all of those ‘boundaries’, along with values close to the boundaries on either side.

The SAT edge cases can be listed as {-1, 0, 1, 1199, 1200, 1201, 1399, 1400, 1401, 1599, 1600, 1601}.
The GPA edge cases can be listed as {-0.01, 0.00, 0.01, 3.49, 3.50, 3.51, 3.74, 3.75, 3.76, 3.89, 3.90, 3.91, 3.99, 4.00, 4.01}.

Test cases can now be created by taking all pairwise combinations of edge cases (Worst-case). There are too many test cases to list here without taking up unnecessary space, but now our tests account for each and every possible combination of inputs.
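One way to generate those combinations without writing 180 tests by hand is a JUnit 5 parameterized test over the Cartesian product of the two edge-case sets. The sketch below is only an outline: eligibleForScholarship is the function described above, but its exact signature (an int award in dollars and an IllegalArgumentException for out-of-range inputs) and the expectedAward oracle are assumptions made here so the example is self-contained.

import java.util.Arrays;
import java.util.stream.Stream;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class ScholarshipEdgeTest {

   static final int[] SAT_EDGES =
      {-1, 0, 1, 1199, 1200, 1201, 1399, 1400, 1401, 1599, 1600, 1601};
   static final double[] GPA_EDGES =
      {-0.01, 0.00, 0.01, 3.49, 3.50, 3.51, 3.74, 3.75, 3.76, 3.89, 3.90, 3.91, 3.99, 4.00, 4.01};

   // Cartesian product of the two edge-case sets: 12 * 15 = 180 combinations
   static Stream<Arguments> edgeCombinations() {
      return Arrays.stream(SAT_EDGES).boxed()
         .flatMap(sat -> Arrays.stream(GPA_EDGES).boxed()
            .map(gpa -> Arguments.of(sat, gpa)));
   }

   @ParameterizedTest
   @MethodSource("edgeCombinations")
   void allEdgeCombinations(int sat, double gpa) {
      if (sat < 0 || sat > 1600 || gpa < 0.0 || gpa > 4.0) {
         // assumed behavior: out-of-range inputs throw
         assertThrows(IllegalArgumentException.class,
                      () -> eligibleForScholarship(sat, gpa));
      } else {
         assertEquals(expectedAward(sat, gpa), eligibleForScholarship(sat, gpa));
      }
   }

   // reference oracle encoding the two award tables above
   static int expectedAward(int sat, double gpa) {
      int base = (sat >= 1400) ? 2500 : (sat >= 1200) ? 1000 : 0;
      double multiplier = (gpa >= 3.90) ? 1.5 : (gpa >= 3.75) ? 1.2 : (gpa >= 3.50) ? 1.0 : 0.0;
      return (int) Math.round(base * multiplier);
   }

   // stand-in for the real function so the sketch compiles on its own
   static int eligibleForScholarship(int sat, double gpa) {
      if (sat < 0 || sat > 1600 || gpa < 0.0 || gpa > 4.0)
         throw new IllegalArgumentException();
      return expectedAward(sat, gpa);
   }
}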

Edge testing combines the best attributes of Boundary Value testing and Equivalence Partitioning into one comprehensive technique. Edge testing successfully tests the upper and lower bounds of each input, and it successfully examines the inner behavior at each partition.

Edge testing has a major drawback: the number of test cases needed. In this example, there are 12 * 15 = 180 test cases, but it has unrivaled behavior coverage. The decision of which testing technique to use depends largely on the program for which the tests are being developed, but for a smaller number of partitions in the inputs, edge testing is a relatively simple but comprehensive technique for testing all possible behavior.

Articles used for my research:
https://professionalqa.com/equivalence-class-testing
https://artoftesting.com/boundary-value-analysis
https://www.mindfulqa.com/edge-cases/
Class materials from CS-443 at Worcester State University

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Black-Box vs. White-Box Testing

In class, we have been learning about the different types of testing methods. Today I want to focus on Black-Box vs. White-Box testing. Let us start by looking at how each test method differs from the other. Black-box testing is a software testing method in which the internal structure, design, and implementation of the item being tested are not known to the tester. In white-box testing, however, the internal structure, design, and implementation are known to the tester.

Let us start with the diagram example that was provided in one of my resources for black-box testing: the system under test can be any software system, for example a website like Google or an Amazon database. In black-box testing, you test the application by focusing only on its inputs and outputs, without knowing the internal code implementation. There are many types of black-box testing, but the main ones are functional, non-functional, and regression testing. Now let us look at some of the techniques used in black-box testing. The main ones are equivalence class testing, boundary value testing, and decision table testing. I know we went over these in depth in class, but I had no idea that they were related to black-box testing.

Unlike black-box testing, white-box testing requires knowledge of the implementation to carry out. One of the main goals of white-box testing is to verify the working flow of an application. It mainly involves testing a series of inputs against the expected or desired outputs, so that when a result does not match the expected output, you have encountered a bug. One of the main techniques used in white-box testing is code coverage analysis, which eliminates gaps in the test case suite. These tests can be easily automated. While researching the disadvantages, I found that white-box testing can be quite complex and expensive. It can also be very time-consuming, since bigger applications take time to test fully. Overall, both testing methods are important and necessary for successful software delivery.

https://www.geeksforgeeks.org/differences-between-black-box-testing-vs-white-box-testing/

https://www.guru99.com/black-box-testing.html

https://www.guru99.com/white-box-testing.html

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

Implementing Boundary-Value Testing and Equivalence Partitioning in JUnit 5

By Christian Shadis

CS-443 Self-Directed Blog Post #2

In this post, I will compare the implementations of Robust Boundary Value Testing and Equivalence Partitioning using simple examples and actual JUnit code.

For the purposes of this post, you have been elected ChairBee of HoneyComb Council in your neighborhood’s local beehive. Your main job is to determine whether a worker bee has produced enough honey to retire. There are two criteria by which you base your decision, age and number of flowers pollinated. As a bee, your brain was incapable of making this complex of a decision every time a bee requests retirement, so you decided to write a Java program to do the heavy lifting for you.

Your function needs two inputs: the number of flowers pollinated, and the age of the bee in days. Your decision is based on these conditions: A bee is eligible to retire at age 18, but only if they have pollinated at least 20 flowers.

boolean readyToRetire(int age, int flowersPollinated) throws IllegalArgumentException{

   // age must be between 0 and 50 days (0 is a physical limit, 50 an arbitrary upper limit)
   if(age < 0 || age > 50)
      throw new IllegalArgumentException();
   // flowers pollinated must be between 0 and 200 (0 physical, 200 arbitrary)
   if(flowersPollinated < 0 || flowersPollinated > 200)
      throw new IllegalArgumentException();
   // a bee of retirement age may retire only if it has pollinated at least 20 flowers
   if(age >= 18 && age <= 50) {
      if(flowersPollinated < 20)
         return false;
      else
         return true;
   }

   else
      return false;
}

Being the good little former working bee you are, you decide that no program is worth its weight in beetles (joke of an insect, really) if it doesn’t have a JUnit Test Suite to go along with it. You decide that you want to develop test cases with one of two methods: Boundary Value Testing or Equivalence Partitioning.

Boundary Value Testing

There are physical and arbitrary limits implemented on each input of the function. The age of a bee, obviously, cannot be a negative number, and for a creature with a life expectancy of 28 days, you cannot expect it to live past 50 days. Our acceptable inputs for age are 0 <= age <= 50. Likewise, a negative number of flowers being pollinated is impossible, and a bee in our hive is extremely unlikely to pollinate close to 150 flowers, let alone 200. Our acceptable inputs for flowers pollinated are 0 <= flowers <= 200. Thus, we implement a physical limit of 0 to both age and flowers pollinated, and arbitrary upper limits of 50 and 200, respectively.

In Robust Boundary Value testing, we take each boundary of an input, and one step in the positive and negative directions, and test each against the nominal (or ideal) value of the other input. We arrive at the following test cases:

Test Case Boundary Values Age Flowers Pollinated Expected Output
1 <x1min-, x2nom> -1 30 Exception
2 <x1min, x2nom> 0 30 False
3 <x1min+, x2nom> 1 30 False
4 <x1max-, x2nom> 49 30 True
5 <x1max, x2nom> 50 30 True
6 <x1max+, x2nom> 51 30 Exception
7 <x1nom, x2nom> 20 30 True
8 <x1nom, x2min-> 20 -1 Exception
9 <x1nom, x2min> 20 0 False
10 <x1nom, x2min+> 20 1 False
11 <x1nom, x2max-> 20 199 True
12 <x1nom, x2max> 20 200 True
13 <x1nom, x2max+> 20 201 Exception

Now we write JUnit 5 Test cases for each row in the table above. We will assume the readyToRetire function is inside a Bee class.

@Test
void test1(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(-1, 30); } );
}

@Test
void test2(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(0, 30);
   assertFalse(result);
}

@Test
void test3(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(1, 30);
   assertFalse(result);
}

@Test
void test4(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(49, 30);
   assertTrue(result);
}

@Test
void test5(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(50, 30);
   assertTrue(result);
}

@Test
void test6(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(51, 30); } );
}

@Test
void test7(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(20, 30);
   assertTrue(result);
}

@Test
void test8(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(20, -1); } );
}

@Test
void test9(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(20, 0);
   assertFalse(result);
}

@Test
void test10(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(20, 1);
   assertFalse(result);
}

@Test
void test11(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(20, 199);
   assertTrue(result);
}

@Test
void test12(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(20, 200);
   assertTrue(result);
}

@Test
void test13(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(20, 201); } );
}

Equivalence Class Partitioning

Equivalence class partitioning arises from the foundational concept that with this function, there will be one of three results. An Exception will be thrown, true will be returned, or false will be returned. Based on our information about the criteria for deciding if a bee can retire, the behavior of the function can be modeled with a table.

                  FP < 0   0 <= FP < 20   20 <= FP <= 200   FP > 200
age > 50          E        E              E                 E
18 <= age <= 50   E        False          True              E
0 <= age < 18     E        False          False             E
age < 0           E        E              E                 E

Each cell in this table is an equivalence class and can represent a test case in the JUnit Test Suite. With your bee work ethic, you decide to implement Strong Robust Equivalence Testing, which is the most thorough method, and uses every cell in the table above as a test case.

Test Case Equivalence Classes Age Flowers Pollinated Expected Output
1 (age > 50, FP < 0) 51 -1 Exception
2 (18 <= age <= 50, FP < 0) 20 -1 Exception
3 (0 <= age < 18, FP < 0) 15 -1 Exception
4 (age < 0, FP < 0) -1 -1 Exception
5 (age > 50, 0 <= FP < 20) 51 10 Exception
6 (18 <= age <= 50, 0 <= FP < 20) 20 10 False
7 (0 <= age < 18, 0 <= FP < 20) 15 10 False
8 (age < 0, 0 <= FP < 20) -1 10 Exception
9 (age > 50, 20 <= FP <= 200) 51 30 Exception
10 (18 <= age <= 50, 20 <= FP <= 200) 20 30 True
11 (0 <= age < 18, 20 <= FP <= 200) 15 30 False
12 (age < 0, 20 <= FP <= 200) -1 30 Exception
13 (age > 50, FP > 200) 51 201 Exception
14 (18 <= age <= 50, FP > 200 ) 20 201 Exception
15 (0 <= age < 18, FP > 200 ) 15 201 Exception
16 (age < 0, FP > 200) -1 201 Exception

@Test
void test1(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(51, -1); } );
}

@Test
void test2(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(20, -1); } );
}

@Test
void test3(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(15, -1); } );
}

@Test
void test4(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(-1, -1); } );
}

@Test
void test5(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(51, 10); } );
}

@Test
void test6(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(20, 10);
   assertFalse(result);
}

@Test
void test7(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(15, 10);
   assertFalse(result);
}

@Test
void test8(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(-1, 10); } );
}

@Test
void test9(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(51, 30); } );
}


@Test
void test10(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(20, 30);
   assertTrue(result);
}

@Test
void test11(){
   Bee testBee = new Bee();
   boolean result = testBee.readyToRetire(15, 30);
   assertFalse(result);
}

@Test
void test12(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(-1, 30); } );
}

@Test
void test13(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(51, 201); } );
}

@Test
void test14(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(20, 201); } );
}

@Test
void test15(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(15, 201); } );
}

@Test
void test16(){
   Bee testBee = new Bee();
   assertThrows(IllegalArgumentException.class, 
               () -> { testBee.readyToRetire(-1, 201); } );
}

Conclusion

We now have JUnit 5 Test Suites for both testing methods (boundary value testing and equivalence class testing). You are now prepared to serve your term as ChairBee knowing every decision will be made correctly, and having peace of mind that your test cases verified all desired behavior.

Equivalence class testing, at least with examples like this one, is a more thorough and robust technique that tests the program’s behaviors as well as its input bounds, whereas boundary value testing focuses only on the extremes of each input. Each technique has its pros and cons, but I hope this simple example shed some light on implementing the theory behind boundary value testing and equivalence class testing in JUnit 5.

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

From C to Shining Sea

The Snake

Recently, I’ve been spending most of my personal coding time in Python. I enjoy a lot of languages and Python certainly isn’t for everything, but when you can use Python, boy is it a joy. As someone who strictly indents in any language, I love the indentation style of denoting blocks. Curly braces have their use, but the vast majority of the time, they’re purely redundant. The same goes for semicolons. I completely agree with the movement of programming languages towards spoken language. The main downfall of Python comes from how high-level a language it is.

Being a high-level language allows it to be as convenient to write in as it is; however, you are completely unable to use low-level features. It also means Python’s performance is often much lower than that of C++ or other languages. Of course, everyone says that each language has its own use and that Python isn’t meant for performance-intensive programs. But why not? Wouldn’t it be nice if there were a single modular language that had Python-like simple syntax with the features of JS, Python, C++, etc.?

The Sea

Before I take on the task of creating such a language, I want to start smaller. Introducing Sea: it’s C, just written differently. I am currently working on a language called Sea which is effectively C, but with Python-like syntax. I say Python-like because much of Python’s syntax relies on internal data types. My goal is to keep Sea true to C. That is, no added performance penalty; all of the penalty should be paid at compile time. That’s phase one: start off with a more concise face for C. Then I want to create libraries for Sea that take it one step further, introducing data types and functions innate to Python like range, enumerate, tuples, etc. Lastly, I want to use the knowledge I’ve gained to create the language to end all languages, as described above.

I’m starting off with a Sea-to-C Transpiler, which is available on Github. In its present state, I am able to transpile a few block declarations and statements. I’m currently working on a data structure for representing and parsing statements. Once that’s made, I can add them one by one. The final result should look something like this:

include <stdio.h>
include "my_header.hea"

define ten as 10
define twelve as 12

void func():
    pass

int main():
    if ten is defined and twelve is defined as 12:
        undefine twelve
        // Why not

    c block:
        // Idk how to do this in Sea so I'll just use C
        printf("Interesting");

    do:
        char *language = "Python"

        print(f"This is an f-string like in {language}")

        for letter in language:
            pass

        break if size(language) == 1
    while true and ten == 11

    return 0

Once the transpiler is done, I want to create an actual compiler. I’ll also want to make a C-to-Sea transpiler eventually, along with syntax highlighting for VS Code, a linter, etc. The project has come a surprisingly long way in such a short while, and I’ve learned so much Python because of it. I’m also learning a good amount about C. I’m hoping that once I create this, there will never be any reason to use C over Sea. There are reasons why certain languages aren’t used in certain scenarios; however, I see no reason why certain syntaxes should be limited in the same way. Making indentation part of the language forces developers to write more readable code while removing characters to type. Languages should be made simpler without compromising on functionality. That is my goal.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

Boundary Value Testing: General Advantages & Disadvantages


Boundary value testing is often seen as a less effective testing methodology compared to other techniques. And while other techniques such as equivalence class testing might be better suited to some situations than boundary value testing, I believe there are situations where boundary value testing is the better choice.

For instance, suppose you want to test one particular variable (e.g., the price of a product in a product catalog) which should stay within a definable range (say, between 28.99 and 49.99 depending on sales or coupons). You then have both a minimum and a maximum value already specified, and a nominal value can be as simple as the value halfway between them. In situations like this, boundary value testing can be much easier than some other testing methods, such as creating partitions and going through equivalence class testing.
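A small JUnit 5 sketch of what that might look like, assuming a hypothetical validatePrice method that accepts prices in the 28.99 to 49.99 range (the method name and implementation are invented here for illustration):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class PriceBoundaryTest {

   // hypothetical code under test, included only so the sketch is self-contained
   static boolean validatePrice(double price) {
      return price >= 28.99 && price <= 49.99;
   }

   // boundary values: min, min+, nominal, max-, max
   @Test void minPrice()     { assertTrue(validatePrice(28.99)); }
   @Test void justAboveMin() { assertTrue(validatePrice(29.00)); }
   @Test void nominalPrice() { assertTrue(validatePrice(39.49)); }
   @Test void justBelowMax() { assertTrue(validatePrice(49.98)); }
   @Test void maxPrice()     { assertTrue(validatePrice(49.99)); }

   // robust cases just outside the boundaries
   @Test void belowMin()     { assertFalse(validatePrice(28.98)); }
   @Test void aboveMax()     { assertFalse(validatePrice(50.00)); }
}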

On the other hand, in a situation where you want to evaluate a boolean value, boundary value testing makes no sense whatsoever. Since the output will only ever be true or false, there is no obvious minimum or maximum, and without a minimum or maximum value it would be difficult to find a nominal value that represents typical output for the variable or function being tested.

Additionally, when working with variables, sometimes the output may have an obvious boundary in one direction (e.g., the maximum age of a bond is 10 years) but no specified boundary in the other direction (no specified minimum age of a bond). In these cases, a limit will often have to be chosen as a minimum or maximum based on the functionality of the software or its purpose (e.g., a bond cannot be less than 0 years old, so we can say the minimum is 0).

This can be confusing when the upper limit is not bounded, forcing you to choose an arbitrary upper limit (e.g., price is the amount someone pays for a product, specified to be greater than 0 but with no specified maximum) such as 10,000, 50, etc. This choice can be somewhat subjective, might vary significantly depending on the context of the program being tested, and can often introduce complications.

In terms of concrete advantages and disadvantages, this article provides a good summary of some of the major upsides and downsides (https://qtp.blogspot.com/2009/07/boundary-value-analysis-bva-problems.html).

Regarding disadvantages, boundary value testing is a poor choice for boolean values, as previously discussed, and is also ineffective when working with dependent variables (if you try to use BVT with a dependent variable, you would likely also have to test the independent variable associated with it, which could be better accomplished through other testing procedures).

Conversely, the advantages of boundary value testing largely involve how simple it is to implement and execute, the relative ease of finding lower and upper boundaries (often specified within the program being evaluated), and the ability to use more or fewer test cases as needed (robust boundary value testing can provide a more comprehensive analysis than normal BVT, and worst-case robust BVT can provide even more detailed information).

Because it often depends on there being obvious upper and lower limits, and can’t be used effectively with boolean values, the applications of BVT are not universal; but when it can be applied effectively, boundary value testing is easy to use and can represent the majority of expected inputs.

Article referenced: https://qtp.blogspot.com/2009/07/boundary-value-analysis-bva-problems.html

From the blog CS@Worcester – CodeRoad by toomeymatt1515 and used with permission of the author. All other rights reserved by the author.

What makes a good test?


Software testing often involves the process of writing tests in order to ascertain the proper functioning and capabilities of software, within expected operating conditions. When coming up with test cases for a calculator program for instance, you might find it important to test that the addition of two numbers results in a correct sum following the operation.

Testing against the expected value of two arbitrarily chosen numbers (e.g., 5 + 4 should equal 9) would provide insight into whether the calculator is functioning as intended, but you could also choose to test the addition of negative numbers, of a negative and a positive number, addition resulting in 0, and so on. Because there could be a large number of potential cases to check, it might seem unclear what exactly would make for an optimal or objectively “good” test in this situation.
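For instance, a few of those cases written as JUnit 5 tests might look like the sketch below, assuming a hypothetical Calculator class with an add method (both invented here for illustration):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CalculatorAdditionTest {

   // hypothetical code under test, included only so the sketch is self-contained
   static class Calculator {
      static int add(int a, int b) {
         return a + b;
      }
   }

   @Test void addsTwoPositiveNumbers()  { assertEquals(9, Calculator.add(5, 4)); }
   @Test void addsTwoNegativeNumbers()  { assertEquals(-9, Calculator.add(-5, -4)); }
   @Test void addsNegativeAndPositive() { assertEquals(-1, Calculator.add(-5, 4)); }
   @Test void additionResultingInZero() { assertEquals(0, Calculator.add(5, -5)); }
}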

When looking into this concept, I found an article produced by Ministry of Testing (https://www.ministryoftesting.com/) a community focused around software testing and related subjects. The article (https://www.ministryoftesting.com/dojo/lessons/designing-tests-what-s-the-difference-between-a-good-test-and-a-bad-test) discusses some attributes to look out for which could be indicative of good or bad tests during the process of creating tests for a piece of software.

The process described in this article follows a series of steps, first identifying and considering risks involved with the program, things which could go wrong and which could affect the program as well as people or entities dependent on the program (users, other programs which access the one being tested).

Following the identification of risks, forming test ideas is described in terms of asking questions. Good tests would need to answer questions about the software and its functionality, for example by entering a negative number, or something that is not a number at all, in the calculator example. This allows unexpected behaviors to be considered even when they might not seem readily apparent.

Next, the test should be conducted or executed, and depending on whether or not the test reveals any relevant information (do you learn anything from it?) the information gained can be applied to improve the program, or further testing may be required if the test did not reveal anything.

I would say that this process makes sense as a basic framework to follow when writing test cases. In some cases, such as with very simple applications or non-commercial development, the risk can be practically nonexistent; the risk aspect may not always apply to every situation. But in general, this article brings up some good points and provides a helpful sequence of processes which can be followed and repeated as needed.

Article referenced:

https://www.ministryoftesting.com/dojo/lessons/designing-tests-what-s-the-difference-between-a-good-test-and-a-bad-test

From the blog CS@Worcester – CodeRoad by toomeymatt1515 and used with permission of the author. All other rights reserved by the author.

Blog #3 Decision Tables

Decision tables have been one of the trickier concepts for me to understand and work with, and were one of the tougher sections on the recent exam for me. I think part of it is that I mix up some of the terms, such as the conditions and the rules, which leads to weird-looking tables, or I would just get stumped. I wanted to read more articles about decision tables to better understand what else I might be missing and how to improve.

I understand the main concept of using the tables as another way to show different combinations of values for the variables, with the conditions acting similarly to the boundary and equivalence classes we have been working with, and the actions simply being the output. So I think it’s the idea of the rules, and what goes in those columns, that I am having trouble with.

At first I thought that the rules were the classes and was confused about what the conditions were, but after understanding the difference between them, I had a better grasp of making the tables. I was able to set up the conditions and actions after this, and then I wanted to focus on understanding the rules and what is put in their columns.

The rules themselves are similar to the case numbers from boundary and equivalence class testing, in that each one identifies a combination of input and output values. Because of this, we are usually able to group together rules that have several input values matching a condition and that share an output, such as multiple values that fall in the range between 0 and 10 and trigger the same action.
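As a made-up example (not from the article), consider an input value and a feature flag. The don’t-care entry in Rule 3 shows exactly this kind of grouping: an out-of-range value is rejected regardless of the flag, so two rules collapse into one.

Conditions              Rule 1   Rule 2   Rule 3
Value in range 0-10?    T        T        F
Flag enabled?           T        F        -
Actions
Process value           X        -        -
Reject value            -        X        X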

I have a better understanding of decision table testing now after doing more reading on the subject from other articles and activity 8, and after seeing my mistakes in my understanding of the rules, the possible input and output values, and the conditions and actions.

https://www.guru99.com/decision-table-testing.html

Decision Table Testing

From the blog Jeffery Neal's Blog by jneal44 and used with permission of the author. All other rights reserved by the author.