This week in the Software Quality Assurance & Testing class we received the third assignment, which was about Boundary Values and Equivalence Class Testing. We had worked on activities during class time that covered this material, so I would say I really enjoyed working on this assignment. We had to complete three parts, each with a different level of difficulty. I learned to understand more about the way testing works and what exactly gets tested.
What we covered in this homework:
Boundary value testing is the process of testing at the extreme ends, or boundaries, of the partitions of the input values.
Robust Boundary Value Testing: boundary value testing that also introduces values just outside the valid boundaries.
Equivalence Class Testing, also known as Equivalence Class Partitioning (ECP) or Equivalence Partitioning, is an important software testing technique in which the test input data is grouped and partitioned into a number of different classes, which are then used for testing the software product.
Weak Normal Equivalence Class Testing: In this first type of equivalence class testing, one value from each equivalence class is tested by the team, and the values are identified in a systematic manner. Weak normal equivalence class testing relies on the single fault assumption.
Strong Normal Equivalence Class Testing: Based on the multiple fault assumption, in strong normal equivalence class testing the team selects test cases from each element of the Cartesian product of the equivalence classes. This ensures a notion of completeness in testing, as it covers all equivalence classes and includes one of every possible combination of inputs.
Worst-Case boundary value analysis is a Black Box software testing technique.
In worst-case boundary value testing, we form every combination of the boundary values of one variable with the boundary values of the other variables.
Edge Testing is a combination of Boundary Value Analysis and Equivalence Class Testing.
Weak Robust Equivalence Class Testing: Like weak normal equivalence class testing, weak robust testing also tests one value from each equivalence class. Unlike the former method, however, it also includes test cases for invalid values.
Strong Robust Equivalence Class Testing: Another type of equivalence class testing, strong robust testing produces test cases for all valid and invalid elements of the Cartesian product of the equivalence classes. However, it is incapable of reducing redundancy in testing.
Two common techniques in black-box software testing are Boundary Value Testing (or Boundary Value Analysis) and Equivalence Partition Testing (or Equivalence Class Testing). Both techniques examine the output of a program given its inputs.
Consider a function eligibleForScholarship which accepts two inputs, an SAT score (for simplicity, only considering the 1600 scale) and a GPA (on a 4.0 scale), and returns the total reward, in dollars, that a student is eligible to receive. At CompUniversity, a student will receive a scholarship if their SAT score is at least 1200 and their GPA is at least 3.5. The amount of that scholarship is determined by the criteria in the table below.
SAT Score | Award
0-1199 | $0
1200-1399 | $1000
1400-1600 | $2500
In addition to the SAT reward, additional money is given based on GPA.
GPA | % of SAT award earned
0-3.49 | 0%
3.50-3.74 | 100%
3.75-3.89 | 120%
3.90-4.00 | 150%
For example, a student with an SAT score of 1450 and GPA of 3.8 will receive $2500 plus an additional 20% for a total of $3000. A student with an SAT score of 1600 and a GPA of 3.49 will receive $0.
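A straightforward implementation of these criteria might look something like the sketch below. This is my own illustration of the two tables, not code provided with the assignment, and the exact signature is an assumption.

int eligibleForScholarship(int satScore, double gpa) {
    // Base award from the SAT score table
    int satAward;
    if (satScore >= 1400) {
        satAward = 2500;
    } else if (satScore >= 1200) {
        satAward = 1000;
    } else {
        satAward = 0;
    }

    // Percentage of the SAT award earned, from the GPA table
    int percentEarned;
    if (gpa >= 3.90) {
        percentEarned = 150;
    } else if (gpa >= 3.75) {
        percentEarned = 120;
    } else if (gpa >= 3.50) {
        percentEarned = 100;
    } else {
        percentEarned = 0;
    }

    // Total reward in dollars: the SAT award scaled by the GPA percentage
    return satAward * percentEarned / 100;
}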
If a software tester wanted to write test cases based on Robust Boundary Value Analysis, they would test each input at its boundaries, one step inside and one step outside each boundary, and at a nominal value, while holding the other input at its nominal value.
Disadvantages of Boundary Value Testing: There are possible outputs that this set of test cases never checks (e.g. $3000, $3750). Additionally, this set of test cases fails to test behavior in the interior of the input ranges, instead focusing on the outermost limits.
If the software tester instead wanted to write test cases based off of Strong Normal Equivalence Partitioning, they would arrive at the following set of test cases:
Test Case | SAT Score | GPA | Expected Output ($)
1 (0 <= sat < 1200, 0 <= gpa < 3.50) | 1000 | 2.0 | 0
2 (1200 <= sat < 1400, 0 <= gpa < 3.50) | 1300 | 2.0 | 0
3 (1400 <= sat <= 1600, 0 <= gpa < 3.50) | 1500 | 2.0 | 0
4 (0 <= sat < 1200, 3.50 <= gpa < 3.75) | 1000 | 3.6 | 0
5 (1200 <= sat < 1400, 3.50 <= gpa < 3.75) | 1300 | 3.6 | 1000
6 (1400 <= sat <= 1600, 3.50 <= gpa < 3.75) | 1500 | 3.6 | 2500
7 (0 <= sat < 1200, 3.75 <= gpa < 3.90) | 1000 | 3.8 | 0
8 (1200 <= sat < 1400, 3.75 <= gpa < 3.90) | 1300 | 3.8 | 1200
9 (1400 <= sat <= 1600, 3.75 <= gpa < 3.90) | 1500 | 3.8 | 3000
10 (0 <= sat < 1200, 3.90 <= gpa <= 4.00) | 1000 | 3.95 | 0
11 (1200 <= sat < 1400, 3.90 <= gpa <= 4.00) | 1300 | 3.95 | 1500
12 (1400 <= sat <= 1600, 3.90 <= gpa <= 4.00) | 1500 | 3.95 | 3750
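Written as JUnit 5 tests against a function like the sketch above (assuming the tests live alongside that method), rows 9 and 12 might look like this:

@Test
void strongNormalCase9() {
    // 1400 <= sat <= 1600 and 3.75 <= gpa < 3.90: $2500 plus 20% = $3000
    assertEquals(3000, eligibleForScholarship(1500, 3.8));
}

@Test
void strongNormalCase12() {
    // 1400 <= sat <= 1600 and 3.90 <= gpa <= 4.00: $2500 plus 50% = $3750
    assertEquals(3750, eligibleForScholarship(1500, 3.95));
}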
Disadvantages of Equivalence Partitioning: This set of test cases does not successfully account for behavior at the boundaries of valid inputs, nor does it account for the behavior of the program with every possible combination of inputs.
The disadvantages of boundary value testing and equivalence partitioning can be addressed by instead implementing edge testing.
Edge Testing
In Edge testing, we want to examine any point at which behavior changes, and create test cases for all of those ‘boundaries’, along with values close to the boundaries on either side.
The SAT edge cases can be listed as {-1, 0, 1, 1199, 1200, 1201, 1399, 1400, 1401, 1599, 1600, 1601}. The GPA edge cases can be listed as {-0.01, 0.00, 0.01, 3.49, 3.50, 3.51, 3.74, 3.75, 3.76, 3.89, 3.90, 3.91, 3.99, 4.00, 4.01}.
Test cases can now be created by taking all pairwise combinations of edge cases (Worst-case). There are too many test cases to list here without taking up unnecessary space, but now our tests account for each and every possible combination of inputs.
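As a rough sketch of how those combinations could be enumerated rather than written out by hand, the loop below counts every worst-case pair (the edge values are the ones listed above; the expected award for each pair would still have to be read off the scholarship tables):

public class WorstCaseCombinations {
    public static void main(String[] args) {
        int[] satEdges = {-1, 0, 1, 1199, 1200, 1201, 1399, 1400, 1401, 1599, 1600, 1601};
        double[] gpaEdges = {-0.01, 0.00, 0.01, 3.49, 3.50, 3.51, 3.74, 3.75, 3.76,
                             3.89, 3.90, 3.91, 3.99, 4.00, 4.01};
        int count = 0;
        for (int sat : satEdges) {
            for (double gpa : gpaEdges) {
                // Each (sat, gpa) pair becomes one worst-case test case
                count++;
            }
        }
        System.out.println(count); // prints 180, i.e. 12 * 15 combinations
    }
}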
Edge testing combines the best attributes of Boundary Value Testing and Equivalence Partitioning into one comprehensive technique. Edge testing successfully tests the upper and lower bounds for each input, and it successfully examines the inner behavior of each partition.
Edge testing has a major drawback: the number of test cases needed. In this example, there are 12 * 15 = 180 test cases, but it has unrivaled behavior coverage. The decision of which testing technique to use depends largely on the program for which the tests are being developed, but for a smaller number of partitions in the inputs, edge testing is a relatively simple but comprehensive technique for testing all possible behavior.
In class, we have been learning about the different types of testing methods. Today I want to focus on Black-Box vs. White-Box testing. Let us start by looking at how each test method differs from the other. Black-box testing is a software testing method in which the internal structure, design, and implementation of the item being tested are not known to the tester. In white-box testing, however, the internal structure, design, and implementation are known to the tester.
Let us look at a diagram example that was provided in one of my resources for black-box testing. In that diagram, the black box can be any software system, for example a website like Google or an Amazon database. Under black-box testing, you can test the application by focusing only on the inputs and outputs, without knowing its internal code implementation. There are many types of black-box testing, but the main types are functional, non-functional, and regression testing. Now let us look at some of the techniques used in black-box testing. The main ones are equivalence class testing, boundary value testing, and decision table testing. I know we went over these in depth in class, but I had no idea that these were related to black-box testing.
Now, unlike black-box testing, white-box testing requires knowledge of the implementation to carry out. One of the main goals of white-box testing is to verify the working flow of an application. It mainly involves testing a series of inputs against expected or desired outputs, so that when a result does not match the expected output, you have encountered a bug. One of the main techniques used in white-box testing is code coverage analysis, which eliminates gaps in the test case suite. These tests can be easily automated. While researching the disadvantages, I found that white-box testing can be quite complex and expensive. It can also be very time-consuming, because bigger applications take time to test fully. Overall, both testing methods are important and necessary for successful software delivery.
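As a tiny illustration of the difference (my own example, not one from the class materials), a black-box test only checks inputs against expected outputs, while a white-box tester would also read the code and make sure every branch is exercised:

// Black-box style: rely only on the documented contract of Math.max,
// checking inputs against expected outputs without looking at its code.
@Test
void maxReturnsTheLargerArgument() {
    assertEquals(7, Math.max(3, 7));
    assertEquals(7, Math.max(7, 3));
    assertEquals(-3, Math.max(-3, -7));
}
// A white-box tester would also inspect the implementation and add whatever
// inputs are needed to cover every branch, often guided by a coverage report.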
In this post, I will compare the implementations of Robust Boundary Value Testing and Equivalence Partitioning using simple examples and actual JUnit code.
For the purposes of this post, you have been elected ChairBee of the HoneyComb Council in your neighborhood's local beehive. Your main job is to determine whether a worker bee has produced enough honey to retire. There are two criteria on which you base your decision: age and number of flowers pollinated. As a bee, your brain is incapable of making this complex a decision every time a bee requests retirement, so you decided to write a Java program to do the heavy lifting for you.
Your function needs two inputs: the number of flowers pollinated, and the age of the bee in days. Your decision is based on these conditions: A bee is eligible to retire at age 18, but only if they have pollinated at least 20 flowers.
boolean readyToRetire(int age, int flowersPollinated) throws IllegalArgumentException {
    // Reject ages outside the accepted range of 0 to 50 days
    if (age < 0 || age > 50)
        throw new IllegalArgumentException();
    // Reject flower counts outside the accepted range of 0 to 200
    if (flowersPollinated < 0 || flowersPollinated > 200)
        throw new IllegalArgumentException();
    // A bee of retirement age may retire only with at least 20 flowers pollinated
    if (age >= 18 && age <= 50) {
        if (flowersPollinated < 20)
            return false;
        else
            return true;
    }
    else
        return false;
}
Being the good little former worker bee you are, you decide that no program is worth its weight in beetles (joke of an insect, really) if it doesn't have a JUnit test suite to go along with it. You decide that you want to develop test cases with one of two methods: Boundary Value Testing or Equivalence Partitioning.
Boundary Value Testing
There are physical and arbitrary limits on each input of the function. The age of a bee obviously cannot be a negative number, and for a creature with a life expectancy of 28 days, you cannot expect it to live past 50 days. Our acceptable inputs for age are 0 <= age <= 50. Likewise, a negative number of flowers pollinated is impossible, and a bee in our hive is extremely unlikely to pollinate close to 150 flowers, let alone 200. Our acceptable inputs for flowers pollinated are 0 <= flowers <= 200. Thus, we place a physical lower limit of 0 on both age and flowers pollinated, and arbitrary upper limits of 50 and 200, respectively.
In Robust Boundary Value testing, we take each boundary of an input, and one step in the positive and negative directions, and test each against the nominal (or ideal) value of the other input. We arrive at the following test cases:
Test Case | Age | Flowers Pollinated | Expected Output
1 <x1min-, x2nom> | -1 | 30 | Exception
2 <x1min, x2nom> | 0 | 30 | False
3 <x1min+, x2nom> | 1 | 30 | False
4 <x1max-, x2nom> | 49 | 30 | True
5 <x1max, x2nom> | 50 | 30 | True
6 <x1max+, x2nom> | 51 | 30 | Exception
7 <x1nom, x2nom> | 20 | 30 | True
8 <x1nom, x2min-> | 20 | -1 | Exception
9 <x1nom, x2min> | 20 | 0 | False
10 <x1nom, x2min+> | 20 | 1 | False
11 <x1nom, x2max-> | 20 | 199 | True
12 <x1nom, x2max> | 20 | 200 | True
13 <x1nom, x2max+> | 20 | 201 | Exception
Now we write JUnit 5 Test cases for each row in the table above. We will assume the readyToRetire function is inside a Bee class.
@Test
void test1(){
    Bee testBee = new Bee();
    assertThrows(IllegalArgumentException.class,
        () -> { testBee.readyToRetire(-1, 30); } );
}

@Test
void test2(){
    Bee testBee = new Bee();
    boolean result = testBee.readyToRetire(0, 30);
    assertFalse(result);
}

@Test
void test3(){
    Bee testBee = new Bee();
    boolean result = testBee.readyToRetire(1, 30);
    assertFalse(result);
}

@Test
void test4(){
    Bee testBee = new Bee();
    boolean result = testBee.readyToRetire(49, 30);
    assertTrue(result);
}

@Test
void test5(){
    Bee testBee = new Bee();
    boolean result = testBee.readyToRetire(50, 30);
    assertTrue(result);
}

@Test
void test6(){
    Bee testBee = new Bee();
    assertThrows(IllegalArgumentException.class,
        () -> { testBee.readyToRetire(51, 30); } );
}

@Test
void test7(){
    Bee testBee = new Bee();
    boolean result = testBee.readyToRetire(20, 30);
    assertTrue(result);
}

@Test
void test8(){
    Bee testBee = new Bee();
    assertThrows(IllegalArgumentException.class,
        () -> { testBee.readyToRetire(20, -1); } );
}

@Test
void test9(){
    Bee testBee = new Bee();
    boolean result = testBee.readyToRetire(20, 0);
    assertFalse(result);
}

@Test
void test10(){
    Bee testBee = new Bee();
    boolean result = testBee.readyToRetire(20, 1);
    assertFalse(result);
}

@Test
void test11(){
    Bee testBee = new Bee();
    boolean result = testBee.readyToRetire(20, 199);
    assertTrue(result);
}

@Test
void test12(){
    Bee testBee = new Bee();
    boolean result = testBee.readyToRetire(20, 200);
    assertTrue(result);
}

@Test
void test13(){
    Bee testBee = new Bee();
    assertThrows(IllegalArgumentException.class,
        () -> { testBee.readyToRetire(20, 201); } );
}
Equivalence Class Partitioning
Equivalence class partitioning arises from the foundational concept that, with this function, there will be one of three results: an exception will be thrown, true will be returned, or false will be returned. Based on our information about the criteria for deciding whether a bee can retire, the behavior of the function can be modeled with the table below.
                | FP < 0 | 0 <= FP < 20 | 20 <= FP <= 200 | FP > 200
age > 50        | E      | E            | E               | E
18 <= age <= 50 | E      | False        | True            | E
0 <= age < 18   | E      | False        | False           | E
age < 0         | E      | E            | E               | E
(E = an exception is thrown)
Each cell in this table is an equivalence class and can represent a test case in the JUnit test suite. With your bee work ethic, you decide to implement Strong Robust Equivalence Class Testing, which is the most thorough method and uses every cell in the table above as a test case.
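As a sketch, a few of those cells written as JUnit 5 tests might look like this (one invalid-age cell, the single True cell, and one of the False cells):

@Test
void ageAboveFiftyThrows() {
    // Cell: age > 50 and 0 <= FP < 20 -> Exception
    Bee testBee = new Bee();
    assertThrows(IllegalArgumentException.class,
        () -> { testBee.readyToRetire(55, 10); } );
}

@Test
void eligibleAgeWithEnoughFlowersReturnsTrue() {
    // Cell: 18 <= age <= 50 and 20 <= FP <= 200 -> True
    Bee testBee = new Bee();
    assertTrue(testBee.readyToRetire(30, 100));
}

@Test
void underageBeeReturnsFalse() {
    // Cell: 0 <= age < 18 and 20 <= FP <= 200 -> False
    Bee testBee = new Bee();
    assertFalse(testBee.readyToRetire(10, 100));
}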
We now have JUnit 5 Test Suites for both testing methods (boundary value testing and equivalence class testing). You are now prepared to serve your term as ChairBee knowing every decision will be made correctly, and having peace of mind that your test cases verified all desired behavior.
Equivalence class testing, at least with examples like this one, is a more thorough and robust technique that tests the program's behaviors as well as its input bounds, whereas boundary value testing focuses only on the extremes of each input. Each technique has its pros and cons, but I hope this simple example shed some light on implementing the theory behind boundary value testing and equivalence class testing in JUnit 5.
Recently, I've been spending most of my personal coding time in Python. I enjoy a lot of languages, and Python certainly isn't for everything, but when you can use Python, boy is it a joy. As someone who strictly indents in any language, I love the indentation style of denoting blocks. Curly braces have their use, but the vast majority of the time they're purely redundant. The same goes for semicolons. I completely agree with the movement of programming languages towards spoken language. The main downfall of Python comes from how high-level of a language it is.
Being a high-level language allows it to be as convenient to write in as it is; however, you are completely unable to use low-level features. It also means Python's performance is often much lower than that of C++ or other languages. Of course, everyone says that each language has its own use and Python isn't meant for performance-intensive programs. But why not? Wouldn't it be nice if there were a single modular language that had Python-like simple syntax with the features of JS, Python, C++, etc.?
The Sea
Before I take on the task of creating such a language, I want to start smaller. Introducing Sea: it's C, just written differently. I am currently working on a language called Sea which is effectively C, but with Python-like syntax. I say Python-like because much of the syntax of Python relies on internal data types. My goal is to keep Sea true to C; that is, no added runtime performance penalty, with all of the cost paid at compile time. That's phase one: start off with a more concise face for C. Then, I want to create libraries for Sea that take it one step further by introducing data types and functions innate to Python, like range, enumerate, tuples, etc. Lastly, I want to use the knowledge I've gained to create the language to end all languages, as described above.
I’m starting off with a Sea-to-C Transpiler, which is available on Github. In its present state, I am able to transpile a few block declarations and statements. I’m currently working on a data structure for representing and parsing statements. Once that’s made, I can add them one by one. The final result should look something like this:
include <stdio.h>
include "my_header.hea"
define ten as 10
define twelve as 12
void func():
pass
int main():
if ten is defined and twelve is defined as 12:
undefine twelve
// Why not
c block:
// Idk how to do this in Sea so I'll just use C
printf("Interesting");
do:
char *language = "Python"
print(f"This is an f-string like in {language}")
for letter in language:
pass
break if size(language) == 1
while true and ten == 11
return 0
Once the transpiler is done, I want to create an actual compiler. Eventually I'll also want to make a C-to-Sea transpiler, and to create syntax highlighting for VS Code, a linter, etc. It has come a surprisingly long way in such a short while, and I've learned so much Python because of it. I'm also learning a good amount about C. I'm hoping that once I create this, there will never be any reason to use C over Sea. There are reasons why certain languages aren't used in certain scenarios; however, I see no reason why certain syntaxes should be limited in the same way. Making indentation part of the language forces developers to write more readable code while removing characters to type. Languages should be made simpler, without compromising on functionality. That is my goal.
Boundary value testing is often seen as a less effective testing methodology in comparison to other techniques. And while other techniques, such as equivalence class testing, might be better suited to some situations than boundary value testing, I believe there are situations where boundary value testing is the better solution.
For instance, in a situation where you want to test one particular variable (e.g. the price of a product in a product catalog) which should exist within a certain definable range (suppose the price should stay between 28.99 and 49.99 depending on sales or coupons), you have both a minimum and a maximum value already specified, and a nominal value can be as simple as the value halfway between the minimum and maximum values. In situations like this, boundary value testing has the potential to be much easier than some other testing methods, such as creating partitions and going through equivalence class testing.
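To make that concrete, the five standard boundary values for that price range would be 28.99, 29.00, 39.49, 49.98, and 49.99 (min, min+, nominal, max-, and max, assuming a step size of one cent), and robust boundary value testing would add the out-of-range values 28.98 and 50.00.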
On the other hand, in a situation where you want to evaluate a boolean value, boundary value testing makes no sense whatsoever. Since the output will only ever be true or false, there is no obvious specified minimum or maximum. And without a minimum or maximum value, it would be difficult to find a nominal value which represents typical output for the variable or function being tested.
Additionally, when working with variables, sometimes a variable may have an obvious boundary in one direction (e.g. the maximum age of a bond is 10 years) but no specified boundary in the other direction (no specified minimum age of a bond). In these cases, a limit will often have to be chosen as a minimum or maximum based on the functionality or purpose of the software (e.g. a bond cannot be less than 0 years old, so we can say the minimum is 0).
This can be confusing at times when the upper limit is not bounded, forcing you to choose an arbitrary upper limit (ie: price is the amount someone pays for a product, specified must be greater than 0, but no specified maximum price) such as 10,000, 50, etc. This can be somewhat subjective and might vary significantly depending on the context of the program being tested, and can often introduce complications.
Regarding disadvantages, boundary value testing is a poor choice for boolean values, as previously discussed, and it is also ineffective when working with dependent variables (if you try to use BVT with a dependent variable, you would likely also have to test the associated independent variable, which could be better accomplished through other testing procedures).
Conversely, the advantages of boundary value testing largely involve how simple it is to implement and execute, the relative ease of finding lower and upper boundaries (which are often specified within the program being evaluated), and the ability to use more or fewer test cases as needed (robust boundary value testing can provide a more comprehensive analysis than normal BVT, and worst-case robust BVT can provide even more detailed information).
Because it is often dependent on the inclusion of obvious upper and lower limits, and can’t be used effectively with boolean values, the applications of BVT are not universal, but when it can be applied effectively, boundary value testing is easy to use and can provide representation of the majority of expected inputs.
Software testing often involves the process of writing tests in order to ascertain the proper functioning and capabilities of software, within expected operating conditions. When coming up with test cases for a calculator program for instance, you might find it important to test that the addition of two numbers results in a correct sum following the operation.
Testing against the expected value of two arbitrarily chosen numbers (ie: 5 + 4 should equal 9) would be able to provide insight into whether the calculator is functioning as intended, but you could also choose to test for addition of negative numbers, negative and positive numbers, addition resulting in 0 and so on. Because there could be a large number of potential cases to check for, it might seem unclear what exactly would make for an optimal or objectively “good” test in this situation.
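For instance, a handful of those addition checks written as JUnit 5 tests might look like the following (the Calculator class and its add method are hypothetical):

@Test
void addsTwoPositiveNumbers() {
    assertEquals(9, new Calculator().add(5, 4));
}

@Test
void addsANegativeAndAPositiveNumber() {
    assertEquals(-1, new Calculator().add(-5, 4));
}

@Test
void additionCanSumToZero() {
    assertEquals(0, new Calculator().add(5, -5));
}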
The process described in this article follows a series of steps, first identifying and considering risks involved with the program, things which could go wrong and which could affect the program as well as people or entities dependent on the program (users, other programs which access the one being tested).
Following the identification of risks, forming test ideas is described in terms of asking questions. Good tests would need to answer questions about the software and its functionality, such as what happens when a negative number, or something which is not a number at all, is entered in the calculator example. This allows unexpected behaviors to be considered even when they might not seem readily apparent.
Next, the test should be conducted or executed, and depending on whether or not the test reveals any relevant information (do you learn anything from it?) the information gained can be applied to improve the program, or further testing may be required if the test did not reveal anything.
I would say that this process makes sense as a basic framework to follow when writing test cases. In some cases, such as with very simple applications or non-commercial development, the risk can be practically nonexistent; the risk aspect may not always apply to every situation. But in general, this article brings up some good points and provides a helpful sequence of processes which can be followed and repeated as needed.
Decision tables have been one of the trickier concepts for me to understand and work with, and they made up one of the tougher sections on the recent exam for me. I think part of it is that I mix up some of the terms, such as the conditions and the rules, which leads to weird-looking tables, or I simply get stumped. I wanted to read more articles about decision tables to better understand what else I might be missing and how to improve.
I understand the main concept of using the tables as another way to show different combinations of values for the variables, with the conditions acting similarly to the boundary and equivalence classes that we have been working with, and the actions simply being the output. So I think it's the idea of the rules, and what goes in those columns, that I am having trouble with.
At first I thought that the rules were the classes and was confused about what the conditions were, but after understanding the difference between them, I had a better grasp on making the tables. I was able to set up the conditions and actions after this, and then I wanted to focus on understanding the rules and what goes in their columns.
The rules themselves are similar to the case numbers from boundary and equivalence class testing, in that each rule identifies one combination of input and output values. Because of this, we are usually able to group together rules that have several input values matching a condition and sharing an output, such as multiple values that fall in the range between 0 and 10 and trigger the same action.
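As a small example, here is a hypothetical decision table I put together for the bee retirement function from an earlier post (ignoring the invalid-input rules); the dash in rule R3 shows how rules get grouped once a condition no longer matters:

Conditions | R1 | R2 | R3
age >= 18 | T | T | F
flowers pollinated >= 20 | T | F | -
Actions | | |
ready to retire | X | |
not ready to retire | | X | X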
After doing more reading on the subject from other articles and activity 8, and after seeing the mistakes in my understanding of rules, possible input and output values, and the conditions and actions, I now have a better understanding of decision table testing.
During my introductory programming classes, I tended to think that simply running my code and using random values as input to see if any errors popped up was more than enough to ensure that my code was (at least at the time) sound. Perhaps, in ideal circumstances this could be an ideal testing method. After all, if I had an inkling of an idea of what is considered a valid or invalid input, I could simply use some somewhat random input values, see what happens, and call it a day. It works, right?
As tempting as this may be, I am still working on understanding how functional software testing works, what techniques exist and can be used, and how to implement the tools that are created specifically for functional testing. In fact, I did talk about JUnit testing in a previous post, which can automate the process of testing for valid and invalid inputs that we provide to the software. If we were to just use random values without having a specific pattern in mind while testing software, what conclusions we would end up with by the end of testing would not mean much in the long run.
One efficient technique for testing valid and invalid inputs is Boundary Value Testing or Boundary Value Analysis (two terms that I may be using interchangeably), a software testing technique in which test cases are written based on what the program defines as an acceptable range of inputs. To be more specific, when using boundary value testing, a tester uses the values at the boundaries of a range as inputs. In essence, we use the range or set of acceptable values for any variables in the code to derive the test cases, rather than simply coming up with random valid or invalid values at our own discretion. Moreover, to conduct boundary value testing we take values near and at the boundaries of a defined range [min, max], with some such values being:
1) min and max: the values at the extremes of the range
2) max-: the value just below the upper limit of the range
3) min+: the value just above the lower limit of the range
Boundary value analysis can include more values near the extremes of a range based on the needs of specific test cases, thus increasing the testing complexity, though the values above are common among all variations of boundary value testing, some of which even include values that fall outside of the valid range. The values above fall within the acceptable (or valid) range, so if they are provided as inputs during testing, we can expect to get reasonable outputs and behaviors from the software in return.
Here is an example of boundary value analysis using mathematical sets: suppose we have a variable x of type int that has acceptable values within the range [a, b]. This essentially means that we can define the valid range of x as {x : x ≥ a and x ≤ b}, meaning that any value between and including a and b will result in the program behaving in a reasonable manner. Any value in the invalid range {x : x < a or x > b}, on the other hand, may result in the program throwing an exception.
A simple example of boundary value testing being used in Java might look like the following sketch, which uses a hypothetical isValidPercentage method that accepts values in the range [0, 100] and also exercises the out-of-range values mentioned above:
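// Hypothetical helper with a valid range of [0, 100]
boolean isValidPercentage(int value) {
    return value >= 0 && value <= 100;
}

@Test
void lowerBoundaryValues() {
    assertFalse(isValidPercentage(-1)); // min- : just outside the valid range
    assertTrue(isValidPercentage(0));   // min  : the lower extreme
    assertTrue(isValidPercentage(1));   // min+ : just inside the valid range
}

@Test
void upperBoundaryValues() {
    assertTrue(isValidPercentage(99));   // max- : just inside the valid range
    assertTrue(isValidPercentage(100));  // max  : the upper extreme
    assertFalse(isValidPercentage(101)); // max+ : just outside the valid range
}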
Though boundary value analysis may not absolutely prevent errors and bugs from occurring while testing software, it is an extremely good point of reference that a tester can use when they are testing code. If one expects the software to behave in certain ways depending on specific value intervals, it can be fairly easy for the tester to adjust their approach around such specific intervals and have a better understanding of the software they are testing and how it should work.
In this second assignment for the Software Quality Assurance & Testing class, we had to practice writing more JUnit 5 test cases. We were given three Java classes, Product, Customer, and Order, and our job was to write JUnit 5 test classes for them. Unlike the week before, this time we had more test cases to write. I liked and disliked this assignment: I enjoyed writing the test cases, but the part that was not my favorite was running Gradle on the project. It gave me a lot of trouble until I figured out what was wrong with it.
I believe that, like me, many students and developers like to write JUnit 5 test cases. In our assignments we have only scratched the surface of the features offered by JUnit 5. To find out more, go to the JUnit 5 documentation; it covers a huge host of topics in detail, including the features we have used so far and many more. What I personally like about writing in JUnit is that it follows a modular approach, which makes extending the API easier. It provides a separation of concerns, where writing tests and discovering/running them are served by different APIs. In essence, three main modules exist within JUnit 5: JUnit Platform + JUnit Jupiter + JUnit Vintage.
Another obstacle that I had in this assignment was running Gradle on the project. Gradle is an open-source build automation tool that is designed to be flexible enough to build almost any type of software. What I like about it is that Gradle avoids unnecessary work by only running the tasks whose inputs or outputs have changed. When I ran Gradle on my computer, some of the test cases failed even though they had passed in my development environment. The main problem was in one test case. My solution was to think of another way to write that test case, and that's what I did: I changed the test cases and added some import statements that were missing, and there were no issues after that.
Overall, I enjoyed this assignment. I like writing JUnit 5 tests, but I don't enjoy Gradle; I have used it before and it has always caused trouble for me. I hope that in the future things will run more smoothly.