Category Archives: Week 10

Advanced Test Design Techniques

We learned a few testing techniques in class, such as Boundary Value Testing, Decision Table-Based Testing, and Path Testing, but there are others. The article "Test Design Techniques overview" covers additional techniques like the Classification Tree Method, State Transition Testing, Cause-Effect Graphing, and Scenario Testing. Different techniques are useful in different scenarios.

Classification Tree Method. When to use it: we are testing hierarchically structured data, or data in the form of a hierarchical tree. First identify the test-relevant aspects and their corresponding values, then combine the different classes into test cases. Draw the hierarchical classes as a graph, as shown below, then project the tests onto a horizontal line using one of the combination strategies.

[Figure: classification tree diagram]
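The article stops at diagrams, but as a rough sketch of the "combine classes into test cases" step, something like the following Java snippet could enumerate the combinations. The shipping-cost aspects and values here are my own invented example, not from the article:

```java
import java.util.ArrayList;
import java.util.List;

public class ClassificationTreeDemo {
    // Hypothetical test-relevant aspects for a shipping-cost calculator,
    // each with its classes (values) from the classification tree.
    static String[] destinations = { "domestic", "international" };
    static String[] weights      = { "light", "heavy" };
    static String[] priorities   = { "standard", "express" };

    public static void main(String[] args) {
        // Project every combination of classes onto concrete test cases.
        // Real classification-tree tools offer smarter combination
        // strategies than this full cross-product.
        List<String> testCases = new ArrayList<>();
        for (String d : destinations)
            for (String w : weights)
                for (String p : priorities)
                    testCases.add(d + " / " + w + " / " + p);
        testCases.forEach(System.out::println); // 2 x 2 x 2 = 8 test cases
    }
}
```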

State Transition Testing. When to use it: the tester is testing the application for a finite set of input values and also needs to test the sequences of events that occur in the application under test. This allows the tester to test the application's behavior for a sequence of input values.
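As a rough illustration of what this looks like in practice (my own example, not from the article), a JUnit test can drive a simple state machine through a sequence of events and check each transition; the Turnstile class here is hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class Turnstile {
    enum State { LOCKED, UNLOCKED }
    private State state = State.LOCKED;

    // Events drive the transitions: a coin unlocks, a push locks again.
    void coin() { state = State.UNLOCKED; }
    void push() { state = State.LOCKED; }
    State state() { return state; }
}

public class TurnstileTest {
    @Test
    public void sequenceOfEventsFollowsTheTransitions() {
        Turnstile t = new Turnstile();
        assertEquals(Turnstile.State.LOCKED, t.state());   // initial state
        t.coin();
        assertEquals(Turnstile.State.UNLOCKED, t.state()); // coin unlocks
        t.push();
        assertEquals(Turnstile.State.LOCKED, t.state());   // push locks again
    }
}
```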

[Figure: state transition diagram]

Cause-Effect Graphing. When to use it: to identify the possible root causes, that is, the reasons for a specific effect, problem, or outcome. To set it up, identify the conditions and effects, then draw the graph with all logical dependencies and constraints. Convert the graph into a decision table by tracing back from each effect every combination of causes that leads to it. This technique helps us determine the root causes of a problem or quality issue using a structured approach.
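To make that concrete, here is a minimal, made-up sketch: two causes (valid username, valid password) and one effect (login succeeds), with the resulting decision table expressed as assertions. The loginSucceeds function is invented for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

public class CauseEffectDemo {
    // Causes: C1 = username is valid, C2 = password is valid.
    // Effect: E1 = login succeeds, which requires C1 AND C2.
    static boolean loginSucceeds(boolean validUser, boolean validPassword) {
        return validUser && validPassword;
    }

    @Test
    public void decisionTableDerivedFromTheGraph() {
        // Each assertion is one row of the decision table, tracing a
        // combination of causes to the presence or absence of the effect.
        assertTrue(loginSucceeds(true, true));
        assertFalse(loginSucceeds(true, false));
        assertFalse(loginSucceeds(false, true));
        assertFalse(loginSucceeds(false, false));
    }
}
```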

Scenario Testing. When to use it: to help understand a complex system so as to test it better; the scenarios should be credible and easy to evaluate. The tester puts themselves in the end user's shoes and figures out real-world scenarios and use cases. Before the scenarios are used to create test cases, they are laid out and described using a template. Specific test cases are then created using the equivalence partitioning and boundary value techniques, as sketched below.
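As a small, invented illustration of that last step, suppose a scenario says "a senior customer receives a discount"; boundary value analysis then pins down the concrete test cases. The age rule and method name here are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

public class DiscountScenarioTest {
    // Hypothetical rule behind the scenario "a senior customer gets a
    // discount": customers are eligible from age 65 upward.
    static boolean seniorDiscount(int age) {
        return age >= 65;
    }

    @Test
    public void boundaryValuesAroundTheEligibilityAge() {
        assertFalse(seniorDiscount(64)); // just below the boundary
        assertTrue(seniorDiscount(65));  // exactly on the boundary
        assertTrue(seniorDiscount(66));  // just above the boundary
    }
}
```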

I like this article because of its simple structure and the way it pairs information with examples. In the "Specification-Based Testing Techniques" section, the different techniques are divided into groups, along with which scenarios fit each technique best. I think it would be better if there were examples with source code; seeing the test techniques applied to real-life scenarios would give a better understanding.

From the blog CS@Worcester – Nhat's Blog by Nhat Truong Le and used with permission of the author. All other rights reserved by the author.

Facade Design Pattern

For this week's blog post I will be looking at another design pattern, this time called Façade, once again from the handy site SourceMaking.com. The overall intent of the pattern is to provide a unified interface to a set of interfaces in a subsystem: it defines a higher-level interface that makes the subsystem easier to use, wrapping a complicated subsystem with a simpler interface. The problem it addresses is that a segment of the client community needs a simplified interface to the overall functionality of a complex subsystem. Essentially, Façade encapsulates a complex subsystem within a single interface object, reducing the learning curve necessary to successfully leverage the subsystem. That single access point does limit the features and flexibility that power users may need, so the façade object should be a fairly simple facilitator. As the site puts it, "Façade takes a 'riddle wrapped in an enigma shrouded in mystery' and injects a wrapper that tames the amorphous and inscrutable mass of software", and with that quote the name really comes into play. A good example of this would be a catalog company, where the consumer calls one number from the catalog and speaks with customer service. The customer service representative acts as a façade, providing an interface to the order fulfillment department, the billing department, and the shipping department. Façade defines a new interface, whereas the Adapter design pattern reuses an old one. The article then goes on to explain some general rules of thumb that can prove useful to those wanting to learn more or follow the design pattern even more closely. All in all, the site is an excellent resource for anyone looking to learn more about design patterns, and this article proves it with valuable information on a design pattern I have not seen or used before.
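To make the catalog example concrete, here is a minimal sketch of what that might look like in Java; the department classes and method names are my own stand-ins, not code from the article:

```java
// The complex subsystem: departments a customer never calls directly.
class OrderFulfillment {
    void reserve(String item) { System.out.println("Reserved " + item); }
}

class Billing {
    void charge(String item) { System.out.println("Charged for " + item); }
}

class Shipping {
    void ship(String item) { System.out.println("Shipped " + item); }
}

// The facade: one simple entry point, like the customer service rep.
class CustomerServiceFacade {
    private final OrderFulfillment fulfillment = new OrderFulfillment();
    private final Billing billing = new Billing();
    private final Shipping shipping = new Shipping();

    // A single call hides the whole subsystem behind it.
    void placeOrder(String item) {
        fulfillment.reserve(item);
        billing.charge(item);
        shipping.ship(item);
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        new CustomerServiceFacade().placeOrder("catalog item #42");
    }
}
```

Power users who need the extra flexibility can still talk to the subsystem classes directly; the facade just keeps the common path simple.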

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

Acceptance Testing

For this week's blog post I will be discussing another form of testing called acceptance testing. Acceptance testing is a form of testing where a system is tested for overall acceptability. The purpose is to assess the system's compliance with certain business requirements and whether it is acceptable for delivery. The site shows a helpful diagram illustrating when you use this testing, with unit testing on the bottom, integration testing above that, system testing above that, and acceptance testing on top. The site also gives a good analogy of what exactly is happening in the illustration: during the production of a ballpoint pen, the cap, the body, the tail and clip, the ink cartridge, and the ballpoint are all produced and unit tested separately. After this they are assembled and integration testing begins, with system testing following, and of course acceptance testing last to confirm once more that the pen is ready to function properly. Black-box testing is the most common method used in performing acceptance testing. The testing does not follow any strict procedure and is not scripted, but is rather ad-hoc testing. Ad-hoc testing is random testing, or monkey testing: essentially throwing stuff at a wall to see what sticks and what does not. You perform this testing after system testing, as mentioned above, so that the system can be made available for actual use. Internal acceptance testing is performed by members of the organization that developed the software but who are not directly involved in the project, usually members of product management, sales, or customer support. The opposite end of this is known as external acceptance testing, which is performed by those who are not employees of the organization that developed the software. This is then broken up into two different fields: customer acceptance testing and user acceptance testing. Customer acceptance testing is pretty self-explanatory: it is performed by those who purchased the software or are customers of the company that developed it. User acceptance testing is performed by the end users of the software: the customers' customers, essentially.
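Acceptance testing is often manual and ad hoc, as described above, but when it is automated it tends to assert business requirements end to end. A minimal, invented sketch in JUnit (the OrderSystem class and its tax rule are hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class AcceptanceTest {
    // Hypothetical system under test, reduced to one business rule.
    static class OrderSystem {
        double totalWithTax(double subtotal) {
            return subtotal * 1.0625; // assumed 6.25% sales tax
        }
    }

    @Test
    public void totalsIncludeSalesTaxAsTheBusinessRequires() {
        // Business requirement: a $100.00 order totals $106.25 with tax.
        assertEquals(106.25, new OrderSystem().totalWithTax(100.00), 0.001);
    }
}
```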

All in all, this website was pretty well organized and explained everything clearly, in a manner that made it extremely simple to read and understand.

http://softwaretestingfundamentals.com/acceptance-testing/

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

Test Driven Development

Summary

In the article Introduction to Test Driven Development (TDD), Scott Ambler talks about what Test Driven Development (TDD) is, as well as many other topics related to it, such as traditional testing, documentation, test-driven database development, and scaling TDD via Agile Model-Driven Development (AMDD). He also goes over why you would want to use TDD, as well as some of the myths and misconceptions that people may have about it.

A basic description of TDD is that it is a development technique where you must first write a test that fails before you write new functional code.

A simple formula that Ambler included to help understand TDD is as follows:

TDD = Refactoring + Test-First Development (TFD)

Here's a quick overview of the TFD cycle he provided: add a test, run the tests and see the new one fail, make a little change, run the tests again until they all pass, then start over with the next test.
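As a quick, made-up illustration of a single TFD iteration in Java with JUnit: the test is written first (it fails, since Counter does not yet exist), and only then is just enough code written to make it pass before refactoring:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class CounterTest {
    // Step 1: write the test before any functional code. It won't even
    // compile until Counter exists, which counts as the failing run.
    @Test
    public void incrementAddsOne() {
        Counter c = new Counter();
        c.increment();
        assertEquals(1, c.value());
    }
}

// Step 2: write just enough code to make the test pass, then refactor.
class Counter {
    private int value = 0;
    void increment() { value++; }
    int value() { return value; }
}
```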

Reaction to Content

I chose to search for articles on this topic because I wanted to really understand how to properly perform test-driven development. For whatever project I decide to work on next, I want to be able to follow an effective process so as to improve and avoid writing "spaghetti code". I imagine there are other development methods that may be better suited for what I want to do, but this is one of the ones that I hadn't had a great understanding of until reading this article. Hopefully I'll be able to properly use this method to write clean, well-documented code in the future.

While I knew of TDD before reading this article, I never really took the time to actually learn how to do it. I think this article was very useful in helping me understand what TDD is and what the advantages of using it are, but while I believe I understand it conceptually, I'll probably need to look up examples, and maybe follow a tutorial, to make sure that I don't do it wrong and end up building bad habits.

One of the things in the article that I had some trouble understanding was when he referred to the tests used in developer TDD as "developer tests" and said that they are often inaccurately referred to as unit tests. I think he may be treating unit tests as one subset of developer tests as a whole, but I'm not completely sure.

Source: http://agiledata.org/essays/tdd.html

From the blog CS@Worcester – Andy Pham by apham1 and used with permission of the author. All other rights reserved by the author.

Decorator Design Pattern

For this week's blog on software design, I decided to watch a short tutorial on one of the design patterns I didn't pick for a previous assignment. I covered the Proxy design pattern before, and now I'm going back to learn about the Decorator design pattern. It is only a thirteen-minute video, so I won't be going as deep as I would have had I picked it for the assignment. I am also going to share my reflections on it rather than create a tutorial, so I won't be reteaching it to the person reading this blog post.
The tutorial I chose was made by Derek Banas on YouTube. He used the example of a pizza parlor to illustrate the wrong way to code it, using inheritance. He shows the problem with this: you would have to create a very large number of subclasses for all your objects (in this case, pizzas).
Composition, on the other hand, is a dynamic way of modifying objects. Instead of creating as many subclasses, you add functionality at run time. It has the benefits of inheritance, but is better to implement in many cases because it is more flexible. Instead of rewriting old code, you extend with new code.
This seems like a more intuitive way of doing things. In the example in the video, he uses the example of creating different kinds of pizza. If you were to design a menu for the same pizza place and there was twenty ingredients and different prices for each, it would make more sense to list the cost of the sizes “Small $5, Medium $7, etc.” and the prices of the toppings in a similar way, with Pepperoni and Sausage being 50¢ and 65¢ respectively. The customer would then choose the size and toppings and add all those together to get the total price.
It seems like the Decorator pattern does exactly this. It would be silly for a pizza shop to list every permutation of toppings and sizes; that would only make sense if there were a handful of pizzas to choose from, not when there are too many combinations to list. The same logic applies to inheritance versus the Decorator design pattern, as the sketch below shows. This seems like an incredibly useful and versatile tool, and I think learning about it will be invaluable in the future.
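Here is a rough sketch of how that menu logic maps onto the pattern; the prices follow the pizza example above, but the code itself is mine, not Banas's:

```java
// Component: the common interface for plain and decorated pizzas.
interface Pizza {
    String description();
    double cost();
}

class PlainPizza implements Pizza {
    public String description() { return "small pizza"; }
    public double cost() { return 5.00; }
}

// Decorators: each wraps another Pizza and adds to it at run time.
class Pepperoni implements Pizza {
    private final Pizza inner;
    Pepperoni(Pizza inner) { this.inner = inner; }
    public String description() { return inner.description() + ", pepperoni"; }
    public double cost() { return inner.cost() + 0.50; }
}

class Sausage implements Pizza {
    private final Pizza inner;
    Sausage(Pizza inner) { this.inner = inner; }
    public String description() { return inner.description() + ", sausage"; }
    public double cost() { return inner.cost() + 0.65; }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        Pizza order = new Sausage(new Pepperoni(new PlainPizza()));
        // Prints: small pizza, pepperoni, sausage costs $6.15
        System.out.printf("%s costs $%.2f%n", order.description(), order.cost());
    }
}
```

No subclass explosion: any combination of toppings is just another chain of wrappers built at run time.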
http://www.newthinktank.com/2012/09/decorator-design-pattern-tutorial/

From the blog Sam Bryan and used with permission of the author. All other rights reserved by the author.

AntiPatterns: The God Class

We can all remember writing programs back in our Intro to Programming course, where (if you were learning an OOP language like Java) you placed nearly everything into the main method just for the sake of learning. Projects continued to be approached like that for a while, until you learned about inheritance and class hierarchies and why putting everything into main is actually not the best idea: it throws away the basic principles of OOP and essentially turns the project into a procedural program.

That is essentially what the God Class, AKA the Blob, does. Any related classes are generally there just to store data, while one big central class is core to the functionality of the program. The God Class contains both operations and data relating to an extremely large number of other classes, and acts as a controller for all of them. God Classes generally arise when time constraints and specification demands are placed on people who already have a shaky foundation in OOP principles. Either that, or oftentimes when developers add small bits and pieces to existing software they are using, some classes (especially controller-type classes) end up receiving a disproportionate number of these small changes over time.

In terms of drawbacks, there are a lot. A Blob makes updating infrastructure far more complex than necessary, and it makes seamlessly fixing bugs in the system extremely difficult. Blobs are hard to test, since so many operations work together within them, and when you create one in memory there may be a significant portion of its functionality that you're neglecting, which soaks up running time.

So what can we do to refactor a God Class when we see one? First, find methods within the God Class that deal with one another and group them together. If developers have been slowly adding functionality to a controller until it became a Blob-type class, there should be a good number of operations within it that can be grouped. Then either move those groups of operations into the classes they call upon in some way, or create new, smaller classes that perform the same duties; a sketch follows below. Through doing this, you extend the functionality of existing classes and reduce the complexity of your Blob. In the long term, reducing God Class complexity will ease future testing and refactoring, and will prevent annoying bugs from popping up that intrude on your beautiful OOP design.
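Here is a minimal, invented sketch of that refactoring in Java: related operations are grouped and extracted out of the Blob into their own classes (the store domain and all names are hypothetical):

```java
// Before: a Blob that owns unrelated data and operations.
class StoreManager {
    void addItemToInventory(String item) { /* ... */ }
    void removeItemFromInventory(String item) { /* ... */ }
    void chargeCustomer(double amount) { /* ... */ }
    void refundCustomer(double amount) { /* ... */ }
    // ...dozens more loosely related methods accumulate here over time...
}

// After: related operations are grouped and moved into their own classes.
class Inventory {
    void add(String item) { /* ... */ }
    void remove(String item) { /* ... */ }
}

class Payments {
    void charge(double amount) { /* ... */ }
    void refund(double amount) { /* ... */ }
}

// The former God Class shrinks to a thin coordinator, if it survives at all.
class Store {
    private final Inventory inventory = new Inventory();
    private final Payments payments = new Payments();
}
```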

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.

Levels of Testing – Unit, Integration, System

Hello reader!

Today's blog will discuss the differences between unit, integration, and system testing. I will briefly explain the benefits of each test and point out their distinct features. I will also share my analysis of the Software Testing Fundamentals article "Software Testing Levels", which I read for this post. Let's begin!

Software testing levels are the various phases of the development sequence in which testing is conducted. The main goal of testing is to assess the system's agreement with the specified needs. The different testing levels help check behavior and performance, and they are designed to expose missing areas and gaps between the states of the development lifecycle. There are three levels of software testing that I am going to talk about: unit, integration, and system testing.

Unit testing is the level of the testing process that tests individual units of a software system; the goal is to show that each unit works as designed. Integration testing combines individual units and tests them as a group; the goal of this level is to expose faults in the interaction between integrated units. System testing tests the program as a whole for acceptability; the goal is to evaluate the program's agreement with the business requirements and to see whether it is acceptable for delivery. A small sketch of the first two levels follows below.
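As a tiny, invented illustration of the difference between the first two levels in JUnit (both classes are hypothetical): the unit test exercises Tokenizer alone, while the integration test exercises CsvParser and Tokenizer working together.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class Tokenizer {
    String[] split(String line) { return line.split(","); }
}

class CsvParser {
    private final Tokenizer tokenizer = new Tokenizer();
    int fieldCount(String line) { return tokenizer.split(line).length; }
}

public class LevelsOfTestingDemo {
    @Test
    public void unitTestExercisesOneClassInIsolation() {
        assertEquals(2, new Tokenizer().split("a,b").length);
    }

    @Test
    public void integrationTestExercisesTwoUnitsTogether() {
        assertEquals(3, new CsvParser().fieldCount("a,b,c"));
    }
}
```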

These three testing types cannot just be applied randomly during the development process; there is a sequence that should be adhered to in order to lessen the risk of failures. By testing the simpler things in the system first and progressively moving on to the more complex things, testers can know that they are carefully examining the software in the most efficient way possible.

Testing early and often is definitely worth the effort. Having an efficient approach to testing allows the tester to find any faults in the system sooner, which leads to less time and less money wasted later on.

Software Testing Fundamentals' "Software Testing Levels" was written very well and was very easy to understand. It taught me a great amount about these three types of testing. I hope I was able to pass on what I learned in an informative way.

From the blog CS@Worcester – dekeh4 by dekeh4 and used with permission of the author. All other rights reserved by the author.

B8: Test Cases

https://qacomplete.com/resources/articles/test-scripts-test-cases-test-scenarios/

I found a blog post this week that talked about test cases and the differences between test cases and test scenarios. It starts by explaining test scripts and how they are the most detailed way to document testing: a test script is usually a line-by-line description of the actions to take in the code, and it holds a lot of information to relay to the tester. It goes on to say that test cases are the second most detailed way of documenting testing work. Test cases hold less information than test scripts because they only describe a specific function or idea to be tested, which gives the tester more flexibility in choosing inputs, and allows more detailed testing by testers who are already familiar with the application under test. The blog then introduces test scenarios, the least detailed type of documentation. A test scenario holds only a simple description of a function, or of an intent a user might have when using the program. Like test cases, this gives the tester more flexibility, but it also leaves more areas likely to go untested due to the undetermined scope of the problem.
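To illustrate the three levels with a quick example of my own (not from the blog), consider a login feature. A test scenario might say only "verify that a registered user can log in." A test case would add specifics: enter a valid username, enter the matching password, click Log In, and expect the account page to load. A test script would spell out every single step line by line, including which page to open, what exact data to type into each field, and precisely what the screen should show afterward.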

I chose this article because test cases were the most interesting subject when we learned about the testing process, and I wanted to understand the difference between cases and scenarios, which is what sparked my initial curiosity. I found this blog really interesting because it explains how testing can be structured based on the amount of information given. I enjoyed how the post explained the levels while also explaining how they benefit the tester through expanded flexibility, while also noting that this can lead to more grey areas in testing. I was able to grasp how these testing terms differ from each other, and it expanded my view of the testing process as a whole. The most interesting part of the blog post was when it explained that testers can choose any of these approaches depending on their knowledge of the product; understanding the application being tested can have great effects on how the tests are created. I found that there was a lack of diagrams within the post, but the information and definitions were easy to understand. I enjoyed this source very much, as it clearly explained the testing terms and how they differ from each other.

From the blog CS@Worcester – Student To Scholar by kumarcomputerscience and used with permission of the author. All other rights reserved by the author.

Time for Testing

Michael Bolton's post How Long Will the Testing Take? on his blog DevelopSense goes into depth on how long testing takes. Bolton claims that good software testing, as a process, takes just as long as development, because the two are actually interwoven and should happen simultaneously.

I think a lot of people brush off testing systems and software because of the idea that it takes additional time and can make the release of a project come later. However, as Bolton states, testing actually occurs simultaneously to development, as things are tested as they are implemented into the system to make sure they interact correctly. With this in mind, the program should be fully tested not long after it is completely developed. It doesn’t really cost much extra time, just the extra costs of developing tests. However, doing it this way also allows you to fix issues that come up along the way, before they become larger issues.

In some of my earlier programming classes, students were often encouraged to write one test and make sure it works before moving onto the next test. We were also encouraged to write tests after every method we made in our programs. In a way, we were clearly being prepared for this approach to development and testing, and it was clear even then why it is a good way to work on a program. It is logical, as incremental development seems a lot safer. You avoid getting overwhelmed with errors and issues that all compound on each other at once when you write your tests this way.

The blog post also covers the silliness of the question of how long testing will take. If your developers and testers are working together and doing what they’re supposed to, testing will finish as soon as development does. Although you can test the system in post-production, it is entirely optional, and is only done with good reason. So in the future, if I am ever managing the development of a system, I will be able to recognize what the testers are doing and ask more productive questions.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Use Case Diagrams

In the post Grouping Components by Use Case, Robert Annet discusses a strategy he uses when making context diagrams. After determining the use cases of his program, the components involved in each of the seven use cases are divided into groups within the diagram with different colored barriers. Structurizr, the web application Annet used for his diagrams, allowed him to filter the diagrams by the use cases. This makes observing which components are involved even easier visually.

The benefit of dividing your context diagrams by use case is clear if you are working with a program complex enough to warrant it: one with a lot of different components that are all involved in several different interactions. If you have to change a certain interaction in your program, you can look at the components involved in that use case and identify what you have to change to get the desired effect. Since the program has many interlaced interactions and components, filtering the diagram into smaller parts by use case makes it easier to understand the interactions and what's happening with the involved components, without being overwhelmed by a ton of components that have no involvement. In my opinion, it is much clearer when you only have to see two or three components when you want to change an interaction in the program.

Using this strategy can also help you find other issues or dependencies in your program. For instance, you may find that most or all of your use cases rely on the same component. Since that component is so integral to the interactions in your program, it is likely not one you are going to want to change much. This strategy can also help you find components that are not used in any use case, and thus may be unnecessary, or may mean you need to reassess your use cases.

Overall, I thought Robert Annet's idea of dividing diagrams up by use case was very interesting and clearly useful as a concept, so I'm sure I'll use it in the future. Whenever you are dealing with a large enough system, you should consider looking at the diagram from this angle to see if you can identify any problems or vulnerabilities.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.