This week I chose to write about the Craft over Art pattern. So, what is this pattern about? The book explains that craftsmanship is built upon strong relationships. This pattern is not merely about doing what is expedient; it also encompasses the idea that a useful craft artifact always displays at least a minimal level of quality. When using this pattern, you have to balance your customer’s desire for the immediate resolution of their problem with the internal standards that make you a craftsman. There are many things developers should keep in mind while building a program, but one thing we should never forget is that we are primarily building something that serves the needs of others, not indulging in artistic expression. Developers think creatively, experiment with different solutions to fit user needs, and are generally comfortable with design systems. I have been in the position many times where I had to build something I didn’t feel like building, but it is the customer you need to satisfy.

Something I really like about this pattern is where the book explains that you need to do your best work in ways that place the interests of your customers over your desire to display skill or pad your resume, while still adhering to the minimum standards of competence provided by the software development community. Walking the Long Road means you must balance these conflicting demands. If you starve because you are too much of an artist and your creations are too beautiful to be delivered in the real world, then you have left the craft. If your desire to do beautiful work forces you out of professional software development and away from building useful things for real people, then you have left the craft.

When you are making decisions about software, you should guide yourself by always keeping one question in mind: how can we help? You can even prioritize feature requests this way. Remember: "the purpose of the software is not to show off how intelligent you are. Your purpose is to help people" (Max Kanat-Alexander, Code Simplicity).
Two common techniques in black-box software testing are Boundary Value Testing (also called Boundary Value Analysis) and Equivalence Partition Testing (also called Equivalence Class Testing). Both techniques examine the output of a program given its inputs.
Consider a function eligibleForScholarship which accepts two inputs, an SAT score (for simplicity, only considering the 1600 scale) and a GPA (on a 4.0 scale), and returns the total award, in dollars, that a student is eligible to receive. At CompUniversity, a student will receive a scholarship if their SAT score is at least 1200 and their GPA is at least 3.5. The amount of that scholarship is determined by the criteria in the table below.
SAT Score | SAT Score Award
0-1199    | $0
1200-1399 | $1000
1400-1600 | $2500
In addition to the SAT-based award, the GPA determines what percentage of that award is actually earned.
GPA       | % of SAT award earned
0-3.49    | 0%
3.50-3.74 | 100%
3.75-3.89 | 120%
3.90-4.00 | 150%
For example, a student with an SAT score of 1450 and GPA of 3.8 will receive $2500 plus an additional 20% for a total of $3000. A student with an SAT score of 1600 and a GPA of 3.49 will receive $0.
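Here is one possible Java sketch of eligibleForScholarship based on the two tables. The Scholarship class name, the rounding, and the IllegalArgumentException for out-of-range inputs are my own assumptions, since the description only defines behavior for valid scores.

public class Scholarship {

    public static int eligibleForScholarship(int satScore, double gpa) {
        // Assumed input validation; the description only covers in-range values
        if (satScore < 0 || satScore > 1600 || gpa < 0.0 || gpa > 4.0)
            throw new IllegalArgumentException("Input out of range");

        // SAT award table
        int satAward;
        if (satScore >= 1400)      satAward = 2500;
        else if (satScore >= 1200) satAward = 1000;
        else                       satAward = 0;

        // GPA multiplier table (% of SAT award earned)
        double multiplier;
        if (gpa >= 3.90)      multiplier = 1.50;
        else if (gpa >= 3.75) multiplier = 1.20;
        else if (gpa >= 3.50) multiplier = 1.00;
        else                  multiplier = 0.00;

        return (int) Math.round(satAward * multiplier);
    }
}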
If a software tester wanted to write test cases based on Robust Boundary Value Analysis, they would test each input at its minimum, just above the minimum, nominal, just below the maximum, and maximum values, plus one value just outside each boundary, while holding the other input at its nominal value.
Disadvantages of Boundary Value Testing: There are possible outputs that this set of test cases never checks (e.g. $3000, $3750). Additionally, this set of test cases fails to test the behavior inside the program, instead focusing on outermost limits.
If the software tester instead wanted to write test cases based on Strong Normal Equivalence Partitioning, they would arrive at the following set of test cases:
Test Case                                    | SAT Score | GPA  | Expected Output ($)
1 (0 <= sat < 1200, 0 <= gpa < 3.50)         | 1000      | 2.0  | 0
2 (1200 <= sat < 1400, 0 <= gpa < 3.50)      | 1300      | 2.0  | 0
3 (1400 <= sat <= 1600, 0 <= gpa < 3.50)     | 1500      | 2.0  | 0
4 (0 <= sat < 1200, 3.50 <= gpa < 3.75)      | 1000      | 3.6  | 0
5 (1200 <= sat < 1400, 3.50 <= gpa < 3.75)   | 1300      | 3.6  | 1000
6 (1400 <= sat <= 1600, 3.50 <= gpa < 3.75)  | 1500      | 3.6  | 2500
7 (0 <= sat < 1200, 3.75 <= gpa < 3.90)      | 1000      | 3.8  | 0
8 (1200 <= sat < 1400, 3.75 <= gpa < 3.90)   | 1300      | 3.8  | 1200
9 (1400 <= sat <= 1600, 3.75 <= gpa < 3.90)  | 1500      | 3.8  | 3000
10 (0 <= sat < 1200, 3.90 <= gpa <= 4.00)    | 1000      | 3.95 | 0
11 (1200 <= sat < 1400, 3.90 <= gpa <= 4.00) | 1300      | 3.95 | 1500
12 (1400 <= sat <= 1600, 3.90 <= gpa <= 4.00)| 1500      | 3.95 | 3750
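To make the table concrete, a couple of its rows could be written as JUnit 5 tests against the hypothetical Scholarship sketch above:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class ScholarshipPartitionTest {

    @Test
    void testCase6() {
        // 1400 <= sat <= 1600, 3.50 <= gpa < 3.75 -> full $2500 SAT award
        assertEquals(2500, Scholarship.eligibleForScholarship(1500, 3.6));
    }

    @Test
    void testCase12() {
        // 1400 <= sat <= 1600, 3.90 <= gpa <= 4.00 -> $2500 * 150% = $3750
        assertEquals(3750, Scholarship.eligibleForScholarship(1500, 3.95));
    }
}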
Disadvantages of Equivalence Partitioning: This set of test cases does not successfully account for behavior at the boundaries of valid inputs, nor does it account for the behavior of the program with every possible combination of inputs.
The disadvantages of boundary value testing and equivalence partitioning can be addressed by instead implementing edge testing.
Edge Testing
In Edge testing, we want to examine any point at which behavior changes, and create test cases for all of those ‘boundaries’, along with values close to the boundaries on either side.
The SAT edge cases can be listed as {-1, 0, 1, 1199, 1200, 1201, 1399, 1400, 1401, 1599, 1600, 1601}. The GPA edge cases can be listed as {-0.01, 0.00, 0.01, 3.49, 3.50, 3.51, 3.74, 3.75, 3.76, 3.89, 3.90, 3.91, 3.99, 4.00, 4.01}.
Test cases can now be created by taking all pairwise combinations of edge cases (worst-case). There are too many test cases to list here without taking up unnecessary space, but these tests now account for every combination of edge values for the two inputs.
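As a sketch of what that generation might look like in JUnit 5, assuming the hypothetical Scholarship class from earlier (including its assumed IllegalArgumentException for out-of-range inputs), the 180 pairs can be produced with a @TestFactory rather than written out by hand. The expectedAward helper simply restates the award tables as the test oracle.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;

class ScholarshipEdgeTest {

    // Edge values taken directly from the lists above
    static final int[] SAT_EDGES = {-1, 0, 1, 1199, 1200, 1201, 1399, 1400, 1401, 1599, 1600, 1601};
    static final double[] GPA_EDGES = {-0.01, 0.00, 0.01, 3.49, 3.50, 3.51, 3.74, 3.75,
                                       3.76, 3.89, 3.90, 3.91, 3.99, 4.00, 4.01};

    // Oracle restating the award tables; the expected values could also come
    // from a hand-checked table of 180 rows
    static int expectedAward(int sat, double gpa) {
        int satAward = sat >= 1400 ? 2500 : sat >= 1200 ? 1000 : 0;
        double multiplier = gpa >= 3.90 ? 1.5 : gpa >= 3.75 ? 1.2 : gpa >= 3.5 ? 1.0 : 0.0;
        return (int) Math.round(satAward * multiplier);
    }

    @TestFactory
    List<DynamicTest> allEdgeCombinations() {
        List<DynamicTest> tests = new ArrayList<>();
        for (int sat : SAT_EDGES) {
            for (double gpa : GPA_EDGES) {
                boolean outOfRange = sat < 0 || sat > 1600 || gpa < 0.0 || gpa > 4.0;
                tests.add(DynamicTest.dynamicTest("sat=" + sat + ", gpa=" + gpa, () -> {
                    if (outOfRange) {
                        // Assumed behavior for invalid inputs (see the sketch above)
                        assertThrows(IllegalArgumentException.class,
                                () -> Scholarship.eligibleForScholarship(sat, gpa));
                    } else {
                        assertEquals(expectedAward(sat, gpa),
                                Scholarship.eligibleForScholarship(sat, gpa));
                    }
                }));
            }
        }
        return tests; // 12 * 15 = 180 dynamic tests
    }
}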
Edge testing combines the best attributes of boundary value testing and equivalence partitioning into one comprehensive technique. It successfully tests the upper and lower bounds of each input, and it successfully examines the behavior within each partition.
Edge testing has a major drawback: the number of test cases needed. In this example, there are 12 * 15 = 180 test cases, but it has unrivaled behavior coverage. The decision of which testing technique to use depends largely on the program for which the tests are being developed, but for a smaller number of partitions in the inputs, edge testing is a relatively simple but comprehensive technique for testing all possible behavior.
In class, we have been learning about the different types of testing methods. Today I want to focus on black-box vs. white-box testing. Let us start by looking at how each method differs from the other. Black-box testing is a software testing method in which the internal structure, design, and implementation of the item being tested are not known to the tester. In white-box testing, by contrast, the internal structure, design, and implementation are known to the tester.
One of my resources illustrated black-box testing with a diagram: the system under test can be any software system, for example a website like Google or an Amazon database. Under black-box testing, you test the application by focusing only on its inputs and outputs, without knowing its internal code or implementation. There are many types of black-box testing, but the main types are functional, non-functional, and regression testing. Now let us look at some of the techniques used in black-box testing. The main ones are equivalence class testing, boundary value testing, and decision table testing. I know we went over these in depth in class, but I had no idea that they were related to black-box testing.
Unlike black-box testing, white-box testing requires knowledge of the implementation to carry out. One of the main goals of white-box testing is to verify the working flow of an application. It mainly involves testing a series of inputs against expected or desired outputs, so that when the actual output does not match the expected output, you have encountered a bug. One of the main techniques used in white-box testing is code coverage analysis, which eliminates gaps in the test suite. These tests can be easily automated. While researching the disadvantages, I found that white-box testing can be quite complex and expensive. It can also be very time-consuming, since bigger applications take longer to test fully. Overall, both testing methods are important and necessary for successful software delivery.
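To make the white-box mindset concrete, here is a tiny made-up example (the classify method is invented purely for illustration): knowing the implementation has two branches, we pick one input for each branch, which gives full branch coverage.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class WhiteBoxSketch {

    // Made-up method under test; a white-box tester reads its two branches
    // and chooses inputs that exercise each one
    static String classify(int n) {
        if (n < 0) return "negative";
        return "non-negative";
    }

    @Test
    void coversNegativeBranch() {
        assertEquals("negative", classify(-1));
    }

    @Test
    void coversNonNegativeBranch() {
        assertEquals("non-negative", classify(0));
    }
}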
In this post, I will compare the implementations of Robust Boundary Value Testing and Equivalence Partitioning using simple examples and actual JUnit code.
For the purposes of this post, you have been elected ChairBee of the HoneyComb Council in your neighborhood’s local beehive. Your main job is to determine whether a worker bee has produced enough honey to retire. There are two criteria on which you base your decision: age and number of flowers pollinated. As a bee, your brain is incapable of making this complex a decision every time a bee requests retirement, so you decided to write a Java program to do the heavy lifting for you.
Your function needs two inputs: the number of flowers pollinated, and the age of the bee in days. Your decision is based on these conditions: A bee is eligible to retire at age 18, but only if they have pollinated at least 20 flowers.
public class Bee {

    boolean readyToRetire(int age, int flowersPollinated) throws IllegalArgumentException {
        // Reject impossible or out-of-range inputs
        if (age < 0 || age > 50)
            throw new IllegalArgumentException();
        if (flowersPollinated < 0 || flowersPollinated > 200)
            throw new IllegalArgumentException();

        // A bee may retire at 18 or older, but only with at least 20 flowers pollinated
        if (age >= 18 && age <= 50) {
            if (flowersPollinated < 20)
                return false;
            else
                return true;
        }
        else
            return false;
    }
}
Being the good little former working bee you are, you decide that no program is worth its weight in beetles (joke of an insect, really) if it doesn’t have a JUnit Test Suite to go along with it. You decide that you want to develop test cases with one of two methods: Boundary Value Testing or Equivalence Partitioning.
Boundary Value Testing
There are physical and arbitrary limits imposed on each input of the function. The age of a bee, obviously, cannot be a negative number, and for a creature with a life expectancy of 28 days, you cannot expect it to live past 50 days. Our acceptable inputs for age are 0 <= age <= 50. Likewise, a negative number of flowers pollinated is impossible, and a bee in our hive is extremely unlikely to pollinate close to 150 flowers, let alone 200. Our acceptable inputs for flowers pollinated are 0 <= flowers <= 200. Thus, we impose a physical lower limit of 0 on both age and flowers pollinated, and arbitrary upper limits of 50 and 200, respectively.
In Robust Boundary Value testing, we take each boundary of an input, and one step in the positive and negative directions, and test each against the nominal (or ideal) value of the other input. We arrive at the following test cases:
Test Case          | Age | Flowers Pollinated | Expected Output
1 <x1min-, x2nom>  | -1  | 30                 | Exception
2 <x1min, x2nom>   | 0   | 30                 | False
3 <x1min+, x2nom>  | 1   | 30                 | False
4 <x1max-, x2nom>  | 49  | 30                 | True
5 <x1max, x2nom>   | 50  | 30                 | True
6 <x1max+, x2nom>  | 51  | 30                 | Exception
7 <x1nom, x2nom>   | 20  | 30                 | True
8 <x1nom, x2min->  | 20  | -1                 | Exception
9 <x1nom, x2min>   | 20  | 0                  | False
10 <x1nom, x2min+> | 20  | 1                  | False
11 <x1nom, x2max-> | 20  | 199                | True
12 <x1nom, x2max>  | 20  | 200                | True
13 <x1nom, x2max+> | 20  | 201                | Exception
Now we write JUnit 5 Test cases for each row in the table above. We will assume the readyToRetire function is inside a Bee class.
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class BeeRetirementTest {

    @Test
    void test1() {
        Bee testBee = new Bee();
        assertThrows(IllegalArgumentException.class,
                () -> { testBee.readyToRetire(-1, 30); });
    }

    @Test
    void test2() {
        Bee testBee = new Bee();
        boolean result = testBee.readyToRetire(0, 30);
        assertFalse(result);
    }

    @Test
    void test3() {
        Bee testBee = new Bee();
        boolean result = testBee.readyToRetire(1, 30);
        assertFalse(result);
    }

    @Test
    void test4() {
        Bee testBee = new Bee();
        boolean result = testBee.readyToRetire(49, 30);
        assertTrue(result);
    }

    @Test
    void test5() {
        Bee testBee = new Bee();
        boolean result = testBee.readyToRetire(50, 30);
        assertTrue(result);
    }

    @Test
    void test6() {
        Bee testBee = new Bee();
        assertThrows(IllegalArgumentException.class,
                () -> { testBee.readyToRetire(51, 30); });
    }

    @Test
    void test7() {
        Bee testBee = new Bee();
        boolean result = testBee.readyToRetire(20, 30);
        assertTrue(result);
    }

    @Test
    void test8() {
        Bee testBee = new Bee();
        assertThrows(IllegalArgumentException.class,
                () -> { testBee.readyToRetire(20, -1); });
    }

    @Test
    void test9() {
        Bee testBee = new Bee();
        boolean result = testBee.readyToRetire(20, 0);
        assertFalse(result);
    }

    @Test
    void test10() {
        Bee testBee = new Bee();
        boolean result = testBee.readyToRetire(20, 1);
        assertFalse(result);
    }

    @Test
    void test11() {
        Bee testBee = new Bee();
        boolean result = testBee.readyToRetire(20, 199);
        assertTrue(result);
    }

    @Test
    void test12() {
        Bee testBee = new Bee();
        boolean result = testBee.readyToRetire(20, 200);
        assertTrue(result);
    }

    @Test
    void test13() {
        Bee testBee = new Bee();
        assertThrows(IllegalArgumentException.class,
                () -> { testBee.readyToRetire(20, 201); });
    }
}
Equivalence Class Partitioning
Equivalence class partitioning starts from the observation that this function can only produce one of three results: an exception is thrown, true is returned, or false is returned. Based on our information about the criteria for deciding whether a bee can retire, the behavior of the function can be modeled with a table.
                 | FP < 0 | 0 <= FP < 20 | 20 <= FP <= 200 | FP > 200
age > 50         | E      | E            | E               | E
18 <= age <= 50  | E      | False        | True            | E
0 <= age < 18    | E      | False        | False           | E
age < 0          | E      | E            | E               | E

(E = Exception thrown)
Each cell in this table is an equivalence class and can represent a test case in the JUnit Test Suite. With your bee work ethic, you decide to implement Strong Robust Equivalence Testing, which is the most thorough method, and uses every cell in the table above as a test case.
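As a sketch (not the full suite of all 16 cells), a few representative cells could be written like this, reusing the Bee class and the JUnit 5 assertions from the boundary value suite above; the specific in-class values (30, 10, 100, -5) are arbitrary picks from each partition.

@Test
void retirementAgeWithEnoughFlowers() {
    // Cell: 18 <= age <= 50, 20 <= FP <= 200 -> True
    Bee testBee = new Bee();
    assertTrue(testBee.readyToRetire(30, 100));
}

@Test
void tooYoungEvenWithEnoughFlowers() {
    // Cell: 0 <= age < 18, 20 <= FP <= 200 -> False
    Bee testBee = new Bee();
    assertFalse(testBee.readyToRetire(10, 100));
}

@Test
void negativeFlowersThrow() {
    // Cell: 18 <= age <= 50, FP < 0 -> E
    Bee testBee = new Bee();
    assertThrows(IllegalArgumentException.class,
            () -> testBee.readyToRetire(30, -5));
}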
We now have JUnit 5 Test Suites for both testing methods (boundary value testing and equivalence class testing). You are now prepared to serve your term as ChairBee knowing every decision will be made correctly, and having peace of mind that your test cases verified all desired behavior.
Equivalence class testing, at least with examples like this one, is a more thorough and robust technique that tests the program’s behavior as well as its input bounds, whereas boundary value testing focuses only on the extremes of each input. Each technique has its pros and cons, but I hope this simple example shed some light on implementing the theory behind boundary value testing and equivalence class testing in JUnit 5.
Lately, I’ve been spending most of my personal coding time in Python. I enjoy a lot of languages, and Python certainly isn’t for everything, but when you can use Python, boy is it a joy. As someone who strictly indents in any language, I love the indentation style of denoting blocks. Curly braces have their use, but the vast majority of the time they’re purely redundant. The same goes for semicolons. I completely agree with the movement of programming languages towards spoken language. The main downfall of Python comes from how high-level a language it is.
Being a high-level language allows it to be as convenient to write in as it is; however, you are largely unable to use low-level features. It also means Python’s performance is often much lower than that of C++ or other languages. Of course, everyone says that each language has its own use and Python isn’t meant for performance-intensive programs. But why not? Wouldn’t it be nice if there were a single modular language that had Python-like simple syntax with the features of JS, Python, C++, etc.?
The Sea
Before I take on the task of creating such a language, I want to start smaller. Introducing Sea: it’s C, just written differently. I am currently working on a language called Sea which is effectively C, but with Python-like syntax. I say Python-like because much of Python’s syntax relies on its built-in data types. My goal is to keep Sea true to C. That is, no added runtime performance penalty; all of the cost should be paid at compile time. That’s phase one: start off with a more concise face for C. Then, I want to create libraries for Sea that take it one step further, introducing data types and functions innate to Python like range, enumerate, tuples, etc. Lastly, I want to use the knowledge I’ve gained to create the language to end all languages, as described above.
I’m starting off with a Sea-to-C transpiler, which is available on GitHub. In its present state, I am able to transpile a few block declarations and statements. I’m currently working on a data structure for representing and parsing statements. Once that’s made, I can add them one by one. The final result should look something like this:
include <stdio.h>
include "my_header.hea"
define ten as 10
define twelve as 12
void func():
pass
int main():
if ten is defined and twelve is defined as 12:
undefine twelve
// Why not
c block:
// Idk how to do this in Sea so I'll just use C
printf("Interesting");
do:
char *language = "Python"
print(f"This is an f-string like in {language}")
for letter in language:
pass
break if size(language) == 1
while true and ten == 11
return 0
Once the transpiler is done, I want to create an actual compiler. Eventually, I’ll also want to make a C-to-Sea transpiler, along with syntax highlighting for VS Code, a linter, and so on. The project has come a surprisingly long way in such a short while, and I’ve learned so much Python because of it. I’m also learning a good amount about C. I’m hoping that once I create this, there will never be any reason to use C over Sea. There are reasons why certain languages aren’t used in certain scenarios; however, I see no reason why certain syntaxes should be limited in the same way. Making indentation part of the language forces developers to write more readable code while reducing the number of characters to type. Languages should be made simpler without compromising on functionality. That is my goal.
The Your First Language pattern, discussed in chapter 2 of Apprenticeship Patterns, concerns the idea of someone picking their first programming language, how they should choose it, what they should do with it once they become proficient, and how it might affect future efforts to learn other languages. I thought that this aspect in particular was very relevant to my own experiences. I first started learning to program in Java, and as a result I find that I tend to look at things in relation to how they might work in Java rather than from the perspective of whatever language I might be working in.
Regarding the text, Apprenticeship Patterns discusses the idea in terms of how it might play into someone’s career over time. If you learn one language first and put the majority of your development time into that language, it makes sense that this would shape your views and understanding of other languages. Various techniques are offered for actually learning the language, including building toy programs (similar to those discussed in the Breakable Toys pattern), learning through testing frameworks (implementing test cases and using them to understand how the language works), or finding mentors who are already experienced in the language to provide guidance and advice.
In terms of how this is relevant to me, I certainly feel that starting with an object-oriented language has made me more predisposed to prefer this style of programming over others. This can at times make me feel out of my depth when working with other styles of programming languages (functional, scripting, procedural), as object-oriented programming is the most familiar to me. The authors of Apprenticeship Patterns recommend trying to learn as many different styles of programming languages as possible to avoid getting “stuck” in one language.
Going outside of the “comfort zone”, so to speak, is a good way to broaden your range of experience in any topic. In terms of using this approach myself, I would likely start by taking a smaller step away from more familiar Java-like languages (Python, Ruby, etc.) before moving on to something completely dissimilar to Java (F#, Lua, R, etc.). This would help broaden my area of knowledge and allow for a greater range of potential solutions when developing software.
Boundary value testing is often seen as an ineffective testing methodology in comparison to other techniques. And while other techniques, such as equivalence class testing, might be better suited to some situations than boundary value testing, I believe there are situations where boundary value testing is the better solution.
For instance, consider a situation where you want to test one particular variable (e.g., the price of a product in a product catalog) which should stay within a definable range (suppose the price should stay between 28.99 and 49.99, depending on sales or coupons). Then you have both a minimum and a maximum value already specified, and a nominal value can be as simple as the value halfway between the minimum and maximum. In situations like this, boundary value testing has the potential to be much easier than some other testing methods, such as creating partitions and going through equivalence class testing.
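A quick sketch of what that might look like in JUnit 5, with a made-up isValidPrice method standing in for the product-catalog code and the 28.99-49.99 range taken from the example above:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class PriceBoundaryTest {

    // Made-up system under test for illustration: accepts prices in the
    // assumed 28.99-49.99 range and rejects everything else
    static boolean isValidPrice(double price) {
        return price >= 28.99 && price <= 49.99;
    }

    // Robust boundary values: min-, min, min+, nominal, max-, max, max+
    @ParameterizedTest
    @CsvSource({
            "28.98, false",
            "28.99, true",
            "29.00, true",
            "39.49, true",
            "49.98, true",
            "49.99, true",
            "50.00, false"
    })
    void priceBoundaries(double price, boolean expected) {
        assertEquals(expected, isValidPrice(price));
    }
}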
On the other hand, in a situation where you want to evaluate a boolean value, boundary value testing makes no sense whatsoever. Since the output will only ever be true or false, there is no obvious minimum or maximum. And without a minimum or maximum value, it would be difficult to find a nominal value which represents typical output for the variable or function being tested.
Additionally, when working with variables, sometimes the output may have an obvious boundary in one direction (e.g., the maximum age of a bond is 10 years) but no specified boundary in the other direction (no specified minimum age of a bond). In these cases, a limit will often have to be chosen as the minimum or maximum based on the functionality or purpose of the software (e.g., a bond cannot be less than 0 years old, so we can say the minimum is 0).
This can be confusing at times when the upper limit is not bounded, forcing you to choose an arbitrary upper limit (e.g., price is the amount someone pays for a product, specified to be greater than 0, but with no specified maximum price), such as 10,000, 50, etc. This can be somewhat subjective, might vary significantly depending on the context of the program being tested, and can often introduce complications.
Regarding disadvantages, boundary value testing is a poor choice for boolean values, as previously discussed, and it is also ineffective when working with dependent variables (if you try to use BVT with a dependent variable, you would likely also have to test the associated independent variable, which could be better accomplished through other testing procedures).
Conversely, the advantages of boundary value testing largely involve how simple it is to implement and execute, the relative ease of finding lower and upper boundaries (often specified within the program being evaluated), and the ability to use more or fewer test cases as needed (robust boundary value testing could provide a more comprehensive analysis than normal, and worst-case robust BVT could provide even more detailed information).
Because it is often dependent on the inclusion of obvious upper and lower limits, and can’t be used effectively with boolean values, the applications of BVT are not universal, but when it can be applied effectively, boundary value testing is easy to use and can provide representation of the majority of expected inputs.
Software testing often involves the process of writing tests in order to ascertain the proper functioning and capabilities of software, within expected operating conditions. When coming up with test cases for a calculator program for instance, you might find it important to test that the addition of two numbers results in a correct sum following the operation.
Testing against the expected value of two arbitrarily chosen numbers (e.g., 5 + 4 should equal 9) would provide insight into whether the calculator is functioning as intended, but you could also choose to test addition of negative numbers, negative and positive numbers, addition resulting in 0, and so on. Because there could be a large number of potential cases to check for, it might seem unclear what exactly would make for an optimal or objectively “good” test in this situation.
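To make that concrete, the handful of candidate cases above might look like this in JUnit 5 (the add method here is a stand-in for the calculator code, defined only for illustration):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CalculatorAdditionTest {

    // Made-up system under test for illustration
    static int add(int a, int b) {
        return a + b;
    }

    @Test
    void addsTwoArbitraryPositives() {
        assertEquals(9, add(5, 4));
    }

    @Test
    void addsNegativeAndPositiveToZero() {
        assertEquals(0, add(-4, 4));
    }

    @Test
    void addsTwoNegatives() {
        assertEquals(-9, add(-5, -4));
    }
}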
The process described in this article follows a series of steps. The first is identifying and considering the risks involved with the program: things which could go wrong and which could affect the program, as well as the people or entities that depend on it (users, or other programs which access the one being tested).
Following the identification of risks, forming test ideas is described in terms of asking questions. Good tests would need to answer questions about the software and its functionality, such as what happens when a negative number, or something which is not a number at all, is entered in the calculator example. This allows unexpected behaviors to be considered even when they might not seem readily apparent.
Next, the test should be conducted or executed, and depending on whether or not the test reveals any relevant information (do you learn anything from it?) the information gained can be applied to improve the program, or further testing may be required if the test did not reveal anything.
I would say that this process makes sense as a basic framework to follow when writing test cases. In some cases, such as with very simple applications or non-commercial development, the risk can be practically nonexistent; the risk aspect may not always apply to every situation. But in general, this article brings up some good points and provides a helpful sequence of processes which can be followed and repeated as needed.
The “Reflect As You Work” pattern in Apprenticeship Patterns has much to do with developing a methodical and concrete approach to introspection within one’s software development career, on both a macro and micro level. On the micro level, Hoover and Oshineye recommend considering your day-to-day practices when programming and taking notes on the practices of more senior team members to see what makes them so effective. On the macro level, they describe what amounts to a hotwash or after-action review of the operation (to borrow government idioms), but caveat that a certain level of trust from management is necessary for the approach they detail, and that this may not always be the case.
One of the exercises in this Apprenticeship Pattern that I finally found to have immediate utility in my life is accounting for the things that I do or do not do adequately in regard to programming. The exercise, as described by the authors:
“If there is something particularly painful or pleasant about your current work, ask yourself how it got that way, and if things are negative, how can they be improved? The goal is to extract the maximum amount of educational value from every experience by taking it apart and putting it back together in new and interesting ways.”
Like many of the Apprenticeship Patterns and the exercises contained within them, much of the immediate applicability is lost because the patterns presuppose that the reader is currently writing a meaningful amount of code and has established personal patterns. Despite that, I am able to use this exercise to dissect my habits from before I entered college, such as poor commenting, not adapting to the bracketing style of the language, and not using tests, Git, or CI/CD. What was pleasant about my past work was my consistent use of the Microsoft Visual Studio IDE to debug and step through my programs. Unfortunately, I’ve written substantially less code since I made the choice to return to college, but I look forward to using the exercises I learned about in this apprenticeship pattern to maximize the value of learning from my mistakes.
Decision tables have been one of the trickier concepts for me to understand and work with, and they were one of the tougher sections on the recent exam for me. I think part of it is me mixing up some of the terms, such as the conditions and rules, which leads to weird-looking tables, or I would just get stumped. I wanted to read more articles about decision tables to better understand what else I might be missing and how to improve.
I understand the main concept of using the tables as another way to show different combinations of values for the variables, with the conditions acting similarly to the boundary and equivalence classes we have been working with, and the actions simply being the output. So I think it’s the idea of the rules, and what goes in those columns, that I am having trouble with.
At first I thought that the rules were the classes and was confused about what the conditions were, but after understanding the difference between them, I had a better grasp of making the tables. I was able to set up the conditions and actions after this, and next I wanted to focus on understanding the rules and what is put in their columns.
The rules themselves are similar to the case numbers from boundary and equivalence class testing, in that they identify each combination of input and output values. Because of this, we are usually able to group together rules that have several input values matching a condition and that share an output, such as multiple values that fall in the range between 0 and 10 and trigger the same action.
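A small made-up example helped me see this grouping. Suppose a discount is applied only when the quantity is between 0 and 10 and the customer is a member, and any out-of-range quantity is rejected. A “don’t care” entry (-) lets one rule cover every value of a condition that doesn’t change the outcome:

Conditions           | Rule 1 | Rule 2 | Rule 3
quantity in 0-10     | T      | T      | F
customer is a member | T      | F      | -
Actions              |        |        |
apply discount       | X      |        |
charge full price    |        | X      |
reject request       |        |        | X

Without the don’t-care entry, Rule 3 would have to be written as two separate rules (F/T and F/F) that share the same action, which is exactly the kind of grouping described above.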
After doing more reading on the subject in other articles and in activity 8, and after seeing my mistakes in understanding the rules, the possible input and output values, and the conditions and actions, I now have a better understanding of decision table testing.