Category Archives: Week 6

QA & TESTING – Episode 1

AB Testing – Episode 1 by Brent Jenson and Allen Page.

In this week's testing episode, I went back to episode 1 so I could address the topics that Allen and Brent found necessary to start their testing podcast with. Both Allen and Brent are high-end software developers and testers who have worked for many big companies and taken on many big tasks in the world of software development and testing. Allen Page was a software testing manager who has contributed to many books in the world of software testing. (His books are really good if you want information on software testing.) Brent Jenson worked at Microsoft for over 20 years and accumulated a great deal of experience holding the position of software testing director.

They went on to talk about a meeting format used at Microsoft that I thought would benefit the software testing industry if we all decided to utilize it. They called it Lean Coffee. As comical as this sounds, Lean Coffee is a structured, but agenda-less, meeting. Participants gather, build an agenda, and begin talking. Conversations are directed and productive because the agenda for the meeting was democratically generated. These agendas usually start with the items viewed as most important and work all the way down to the items viewed as less important. This sounded like something we could bring to our testing team meetings!

As quickly as the podcast began, Allen dove into real software testing concepts. He began by emphasizing the great difference between testing and quality. With constant changes and improvisation, system and program bugs quickly lose value: finding bugs in software that changes constantly, sometimes daily, says little about the quality of the software product. Today's bug can be fixed in tomorrow's code change, and that change can introduce a new bug that gets fixed in the next modification. It is the job of software testers to find bugs and errors in the program, and the job of a test manager to schedule test runs and passes for the specific product in development. How could a test manager work successfully in an environment where code changes and modifications are made on a daily basis? The proposed solution goes back to the beginning of the project, where planning and thought have to be put in place. To put forth a great product, time for testing has to be allocated in the project timeline. You can't put out quality without considering all aspects of possible challenges and inputs.

From the blog CS@Worcester – Le Blog Spot by houtyr and used with permission of the author. All other rights reserved by the author.

The Dangers of Relying on Automated Testing

After listening to Jean Ann Harrison’s discussion about how important critical thinking is in the context of software testing and quality assurance on an episode of Test Talks, I wrote a post about The Limits of Automated Testing. Although Harrison’s explanation was great, I had a few remaining questions and this week chose to look for more information on automated testing. I came across a post by Martin Jansson from March 2017 titled Implication of emphasis on automation in CI, and it seemed to provide me with the more comprehensive view of testing automation that I was looking for.

Jansson starts out on a positive note, stating that he “less frequently see[s] the argumentation that testing is not needed.” To me it is almost comical to think about someone arguing that testing is unnecessary. While I completely understand that managers and executives are enticed by the possibility of saving time and money by not testing software, this is an extremely risky and careless method of creating a product. I doubt that anyone releasing untested software lasts very long or makes any money in the industry.

So if not testing at all is not an option, what are the options? Going with the bare minimum would be running only automated tests, a method that Jansson says is actually used. I have to agree with Jansson, however, when he says that this is not testing; rather, it is simply checking. Instead of exploring parts of the code that are likely to contain bugs, you are simply checking acceptance criteria. By not exploring the code fully, you fail to find anything that lies outside the scope of the specification or the requirements. I feel that the following graphic provides an excellent representation of how few tests are actually performed when a testing strategy relies solely on automation.

(Source: http://thetesteye.com/blog/2017/03/implication-of-emphasis-on-automation-in-ci/)
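
As a quick illustration of the difference (my own hypothetical example, not from Jansson's post), consider an automated JUnit 4 check for a discount rule. The test below verifies exactly the acceptance criterion and nothing more:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountTest {

    // Hypothetical method under test: orders over $100 get 10% off
    static double discountedPrice(double price) {
        return price > 100 ? price * 0.9 : price;
    }

    // The automated "check" confirms the stated requirement and passes
    @Test
    public void ordersOver100GetTenPercentOff() {
        assertEquals(180.0, discountedPrice(200.0), 0.001);
    }

    // Nothing here asks what happens for a negative price, an enormous
    // price, or a price of exactly 100 -- the kinds of questions an
    // exploring human tester would raise.
}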

What constitutes the perfect blend of automated and manual testing may be impossible to know. What is certain, however, is that automated testing cannot be relied upon as the sole method of testing. Jansson puts it in layman's terms when he says that “you rarely automate serendipity.” Just as Jean Ann Harrison points out in the Test Talks podcast mentioned earlier, automation is not and will never be a replacement for thought. It is a bit of a relief to know that software development companies are maturing and beginning to understand the importance of having testers who use a combination of automated and manual testing. As long as there continue to be humans writing code, there will need to be humans who test that code.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Post #7

I began researching good JUnit practices as a follow-up to our discussions of it in class.  I found a post on the codecentric Blog by Tobias Goeschel entitled “Writing Better Tests With JUnit” that addresses the pros and cons of JUnit and provides tips on how to improve your own testing.  It is the most thorough article I've found on JUnit testing (and possibly the longest), so it seems fitting to summarize it in a blog post of my own while we cover the subject in class.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Post #6

Toward the end of our discussion about the Strategy design pattern, we briefly talked about the open/closed principle; I wanted to further my understanding of this concept, so I decided to do some research of my own.  Today, I will summarize an article by Swedish systems architect Joel Abrahamsson entitled “A simple example of the Open/Closed Principle”.

Abrahamsson begins the article by summarizing the open/closed principle as the object-oriented design principle that software entities should be open for extension, but closed for modification.  This means that programmers should write code that doesn't need to be modified every time the program's specifications change.  He then explains that, when programming in Java, this principle is most often adhered to through polymorphism and inheritance.  We followed this principle in our first assignment of the class, when we refactored the original DuckSimulator program to use the Strategy design pattern.  We realized, in our in-class discussion of the DuckSimulator, that adding behaviors to Ducks would force us to update the implementation of the main class as well as each Duck subclass.  By refactoring the code to implement an interface in independent behavior classes – and then applying those behaviors to Ducks in the form of “setters” – we opened the program for extension and left it closed for modification.  Abrahamsson then gives his own example of how the open/closed principle can improve a program that calculates the area of shapes.  The idea is that, if the open/closed principle is not adhered to in a program like this, the code is susceptible to rapid growth as functionality is added to calculate the area of more and more shapes.

(Note: Abrahamsson’s examples are written in C#, not Java.)

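// Every new shape forces another branch to be added here, so this
// method is never closed for modification -- and anything that is
// not a Rectangle gets blindly cast to Circle.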
public double Area(object[] shapes)
{
    double area = 0;
    foreach (var shape in shapes)
    {
        if (shape is Rectangle)
        {
            Rectangle rectangle = (Rectangle) shape;
            area += rectangle.Width*rectangle.Height;
        }
        else
        {
            Circle circle = (Circle)shape;
            area += circle.Radius * circle.Radius * Math.PI;
        }
    }

    return area;
}

(Abrahamsson’s implementation of an area calculator that does not adhere to the open/closed principle.)


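// Each shape now computes its own area, so supporting a new shape
// only requires a new subclass; the Area method below never changes.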
public abstract class Shape
{
    public abstract double Area();
}
public class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }
    public override double Area()
    {
        return Width*Height;
    }
}
public class Circle : Shape
{
    public double Radius { get; set; }
    public override double Area()
    {
        return Radius*Radius*Math.PI;
    }
}
public double Area(Shape[] shapes)
{
    double area = 0;
    foreach (var shape in shapes)
    {
        area += shape.Area();
    }

    return area;
}

(Abrahamsson’s implementation of an area calculator that adheres to the open/closed principle.)

Abrahamsson ends the article by sharing his thoughts on when the open/closed principle should be adhered to.  He believes that the primary focus of any good programmer should be to write code well enough that it doesn’t need to be repeatedly modified as the program grows.  Conversely, he says that the context of each situation should be considered because unnecessarily applying the open/closed principle can sometimes lead to an overly complex design.  I have always known that it is probably good practice to write code that is prepared for the requirements of the program to change, and this principle confirmed that idea.  From this point forward, I will take the open/closed principle into consideration when tackling new projects.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Predictive Applications and the ‘Datafication’ of Everything

We live in a world where we are constantly being bombarded with information. Not only do we consume insane amounts of data, we also provide other people and businesses with information about ourselves. When we sign up for online mailing lists, order magazine subscriptions, or even make dinner reservations, we constantly leave behind information about our habits and preferences, a phenomenon that Charlie Berger refers to as data exhaust in a podcast from October 10, 2017 on Software Engineering Radio. The larger concept he is describing is known as ‘datafication’, a buzzword in the data science and big data spheres that refers to collecting and storing information about social actions that can be used to perform predictive analyses and targeted marketing.

Specific to the computer science discipline, datafication has implications for the development of predictive applications. In the podcast episode, Berger presents the simple yet extremely effective example of an ATM as an application lacking any predictive behavior. Berger wonders why, each time he uses the ATM, he is asked which language he would like to use, and why such preferences are not tracked and stored, making for a more seamless and personalized ATM experience. Berger even suggests that the ATM track more than language preferences, offering withdrawal suggestions based on previous transaction data from a similar day of the week or time of day.
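
To make the idea concrete, here is a small, entirely hypothetical sketch of my own (none of this code comes from the podcast) of what tracking those preferences might look like:

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the "predictive" ATM Berger imagines
public class PredictiveAtm {
    // In a real system these would live in the bank's database, keyed by card
    private final Map<String, String> languageByCard = new HashMap<>();
    private final Map<String, Integer> lastWithdrawalByCard = new HashMap<>();

    // Record what the customer did so the next visit can be personalized
    public void rememberSession(String cardId, String language, int amount) {
        languageByCard.put(cardId, language);
        lastWithdrawalByCard.put(cardId, amount);
    }

    // Skip the language prompt for returning customers
    public String languageFor(String cardId) {
        return languageByCard.getOrDefault(cardId, "ask the customer");
    }

    // Offer the previous amount as a one-touch withdrawal suggestion
    public int suggestedWithdrawal(String cardId) {
        return lastWithdrawalByCard.getOrDefault(cardId, 0);
    }
}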

While it may not be terribly inconvenient to have to choose a language each time you use the ATM, the advantages of creating and using predictive applications become much more apparent when considering larger-scale operations. Retailers can use predictive applications to make important decisions about things like advertising and merchandising. Berger mentions the well-known “parable of the beer and diapers,” where an interesting and entirely unexpected correlation was found between purchases of diapers and beer. While some versions of the tale have the retailer moving the two correlated items next to one another in order to drive increases in sales, this may or may not be factual. Regardless, generating useful information by querying data in this way is a perfect example of the power that predictive applications have.

Berger repeatedly stresses the importance of moving the algorithm to the data, not vice-versa. By moving the algorithm to the data, we avoid the dangers that come with moving data around and bypassing its security and encryption. Developing applications that perform queries and compile information that is usable and useful not only to data scientists but to ordinary people as well is a perfect example of how machine learning and predictive applications can make everyone's jobs easier.

As a student, I took one of Berger’s closing remarks under careful consideration. Berger states that it is much easier for a programmer to learn how to make a program that interprets data than for a data scientist to translate his specific, one-off analyses into programs. With a newfound understanding of why predictive applications are so important to our data-obsessed society, I look forward to exploring how I can begin developing applications that take advantage of machine learning.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Software Design Patterns

Software Design Patterns

Depending on the type of project you are designing, there will always be a need to adopt a particular pattern that clearly represents your design. There are many design patterns that a programmer can rely on to structure his or her project. For the purposes of this post, I will be exploring and discussing the three most useful categories of design patterns: creational, structural, and behavioral. Below is a brief description of each.

Creational Design Pattern: These patterns create objects as needed within the code and allow for abstract factories, which group objects of similar functionality. They also use polymorphism to let one reference take on multiple behaviors. With this kind of pattern, you do not need to declare the exact class or type up front, because polymorphism is ultimately used to assign the behavior. Usually, an abstract prototype is created, and the base classes that inherit from it are defined.
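
As a tiny sketch of that idea (my own example, not from the source article), client code can ask a simple factory for an object through an abstract type and let polymorphism supply the behavior:

// Hypothetical example: the caller never names a concrete class
interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    public void send(String message) { System.out.println("Email: " + message); }
}

class SmsNotifier implements Notifier {
    public void send(String message) { System.out.println("SMS: " + message); }
}

class NotifierFactory {
    static Notifier create(String channel) {
        switch (channel) {
            case "email": return new EmailNotifier();
            case "sms":   return new SmsNotifier();
            default:      throw new IllegalArgumentException(channel);
        }
    }
}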

Structural Design Pattern: Structural patterns deal largely with the composition of classes and objects, using inheritance and interfaces to enable objects to provide new functionality. They often involve an abstract class that defines method signatures and behaviors for the classes that will implement the interface. In structural patterns, objects are grouped according to their behavior and what they inherit, and modifications to objects are made before you finalize your code.

Behavioral Design Pattern: These patterns allow the behavior of a class to change based on its current state. Even though these states may change throughout the execution of the program, the implementation of each state is defined behind a shared interface. Behavioral patterns also allow new operations to be added to an object without modifying its original implementation structure.
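
A minimal sketch of that state-dependent behavior, in the style of the State pattern (the Document and state names are hypothetical, not from the source):

// Each state implements the same interface, and the Document's
// behavior changes as its current state changes
interface PublishState {
    void publish(Document doc);
}

class Document {
    private PublishState state = new Draft();
    void setState(PublishState state) { this.state = state; }
    void publish() { state.publish(this); }
}

class Draft implements PublishState {
    public void publish(Document doc) {
        System.out.println("Draft submitted for review");
        doc.setState(new Published());
    }
}

class Published implements PublishState {
    public void publish(Document doc) {
        System.out.println("Already published; nothing to do");
    }
}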

In general, when you are talking about code responsibility, you really want each of your classes and methods to do one thing and do it well. As a real-life comparison, it is like texting and driving: there is no way to do both effectively at the same time. You will either drive off the road while texting well, or drive well and not text much. Likewise, having your code do two or more things usually means it does one of them well and the rest badly. Also, changing one thing in a base class might have side effects elsewhere. As a designer, you will therefore want to extend your classes rather than modify them. Even though you will end up creating more classes so that each does one thing, it is worth it, because it gives a clearer presentation of your code. This is going to help me a lot in the future: I am going to have each class in my code perform one task, which makes for clearer and more maintainable designs.

References:

https://airbrake.io/blog/design-patterns/software-design-patterns-guide

From the blog CS@Worcester – Computer Science Exploration by ioplay and used with permission of the author. All other rights reserved by the author.

AssertThat over Assert Methods

This week I read an article about the benefits of using assertThat over other assert methods in unit tests. Since Prof. Wurst uses assertThat instead of the traditional asserts in our reading materials, I figured it was time to familiarize myself with the new assertion style.

According to the article, the assertThat method incorporates the Hamcrest library and is a much improved way to write assertions. It uses matchers, which are self-contained classes with static methods that are used together with assertThat. These static methods can be chained, which gives a lot of flexibility compared with the old assert methods.

Until now I have been using assertEquals while writing my test cases. The biggest issue I had was remembering the correct ordering of (expected, actual) in an assertEquals(expected, actual) statement. The parameter names ‘expected’ and ‘actual’ were not informative enough to determine what exactly goes where. With the assertThat method I don’t see such complications. The first benefit is that assertThat is more readable than the other assert methods. Writing the same assertion with assertThat looks like:

assertThat(actual, is(equalTo(expected)))

It reads more like a sentence. Assert that the actual value is equal to the expected value, which make more sense.

Similarly, to check for not equals, used to be:

assertFalse(expected.equals(actual))

Now with the use of assertThat it will be:

assertThat(actual, is(not(equalTo(expected))))

The “not” method can surround any other matcher, which makes it a negation of any matcher. Also, the matcher methods can be chained to create any number of possible assertions. There’s an equivalent short-hand version of the above equality methods which saves on typing:

assertThat(actual, is(expected))

assertThat(actual, is(not(expected)))

Another benefit is that assertThat is generic and type-safe. The assertEquals example below compiles, but fails:

assertEquals("abc", 123)

The assertThat method does not allow this, as it is typed, so the following would not compile:

assertThat(123, is("abc"))

This is very handy as it does not allow comparing of different types. I find this a welcome change.

Another benefit of using assertThat is much better error messages. Below is a common example of the use of assertTrue and its failure message.

assertTrue(expected.contains(actual))

java.lang.AssertionError at …

The problem here is that the assertion error doesn’t report the values for expected and actual.  Granted the expected value is easy to find, but a debugging session will probably be needed to figure out the actual value.

The same test using assertThat:

assertThat(actual, containsString(expected))

java.lang.AssertionError: Expected: a string containing “abc”

In this case, both values are returned in the error message. I think this is much better, since in many cases a developer can look at the error message and figure out right away what they did wrong rather than having to debug to find the answer. That saves time and hassle. I am excited to get hands-on with the assertThat method in the coming days.
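
Putting the pieces together, a minimal, self-contained test class using these matchers might look like the following (a sketch of my own, assuming JUnit 4 and the Hamcrest library are on the classpath):

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.is;
import static org.hamcrest.Matchers.not;

import org.junit.Test;

public class AssertThatExamplesTest {

    @Test
    public void readsLikeASentence() {
        String actual = "abc";

        assertThat(actual, is(equalTo("abc")));  // long form
        assertThat(actual, is("abc"));           // short-hand equality
        assertThat(actual, is(not("def")));      // not() wraps any matcher
        assertThat(actual, containsString("b")); // descriptive failure message
    }
}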

Source: (https://objectpartners.com/2013/09/18/the-benefits-of-using-assertthat-over-other-assert-methods-in-unit-tests/)

From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.

A Closer Look into JUnit 5

For my blog post this week I wanted to take a closer look at what to expect with JUnit 5. Last class, Professor Wurst gave us a brief rundown of some of the nifty features and functionalities introduced in JUnit 4, such as testing for exceptions and the assertThat() method. Seeing as the new JUnit framework, JUnit 5, was just released this past August, I thought it would be interesting to look into what additional features were added in this new testing framework. I found a blog post, A Look at JUnit 5’s Core Features & Testing Functionality, written by Eugen Paraschiv, a software engineering professional, and thought it gave a pretty good rundown of what to expect with JUnit 5.

Paraschiv points out a few new and useful assertions implemented in the JUnit 5 testing framework: assertAll(), assertArrayEquals(), assertIterableEquals(), and assertThrows(). assertAll() is pretty useful because it allows you to group all the assertions within one test case together and report back the expected vs. actual results for each assertion using a MultipleFailuresError object, which makes understanding why your test case failed easier. Next, assertArrayEquals() and assertIterableEquals() are also highly useful, as they allow you to test whether or not a particular data structure (array, list, etc.) contains the elements that you expect it to. In order to use these assertions, however, the objects in your data structure must implement the equals() method. Finally, assertThrows() pretty much does what the “@Test(expected = SomeException.class)” annotation did in JUnit 4. I like this way of checking for exceptions much better, though, because it seems more intuitive and makes the test case easier to read.
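
For instance, a small test combining a couple of these might look like the following (a sketch of my own, assuming JUnit 5 is on the classpath):

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertArrayEquals;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class Junit5AssertionExamples {

    @Test
    void groupedAssertionsAndExceptions() {
        int[] expected = {1, 2, 3};
        int[] actual = {1, 2, 3};

        // assertAll runs every assertion and reports all failures together
        assertAll("array checks",
            () -> assertArrayEquals(expected, actual),
            () -> assertEquals(3, actual.length)
        );

        // assertThrows replaces JUnit 4's @Test(expected = ...) style
        assertThrows(NumberFormatException.class,
            () -> Integer.parseInt("not a number"));
    }
}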

In his blog post, Eugen brings up a lot of cool new features implemented in JUnit 5, but the two that really stood out to me were (1) the concept of assumptions and (2) conditional test execution. First, I think assumptions could prove extremely useful in practice. Essentially, assumptions are syntactically similar to assertions (the assumption methods are assumeTrue(), assumeFalse(), and assumingThat()), but they do not cause a test to pass or fail; instead, if an assumption within a test case fails, the test case simply does not get executed. Second, conditional test execution is another cool new feature: JUnit 5 allows you to define custom annotations which can then be used to control whether or not a test case gets executed. I thought the idea of writing your own test annotations was really interesting, and I could definitely see it being useful in practice.

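To give a feel for assumptions, here is a small sketch of my own (the environment-variable check is just a hypothetical condition):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assumptions.assumeTrue;

import org.junit.jupiter.api.Test;

class AssumptionExamples {

    @Test
    void onlyRunsOnTheCiServer() {
        // If the assumption fails, the test is skipped rather than failed
        assumeTrue("CI".equals(System.getenv("ENV")));

        assertEquals(2, 1 + 1);
    }
}
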
From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

The Builder Pattern

Today I will be talking about an article called “Builder Design Pattern” put out by JournalDev.com. According to the article, the builder design pattern is used to fix some issues that arise when the factory and simple factory design patterns are implemented. The article points out three major problems that come up when using the factory patterns. The first is that there can be too many arguments to pass to the factory, which invites errors because it is hard to keep the arguments in the right order. The next is that all of the parameters must be passed to the factory; if you don’t need a parameter, you still have to pass null for it. The last problem occurs when object creation is complex: the factory itself becomes complex and difficult to handle. So what’s the solution to all of this? The builder pattern.

So what is the builder pattern? The builder pattern builds objects step by step and uses a separate method to return the object once it has been created. This is a great way to get factory-like behavior when the object you are trying to create has a large number of parameters. The article demonstrates the pattern with a Java program that builds computers, using two classes, Computer and ComputerBuilder. The Computer class has a private constructor that takes the builder as its argument and sets all of the parameters, including the optional ones. The ComputerBuilder class is nested inside Computer, and it is static because it belongs to the Computer class. ComputerBuilder has a public constructor that sets the required parameters, two other methods that set the optional parameters, and finally a build method – public Computer build() – that uses those values to construct and return the Computer object.

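Here is a condensed sketch along the lines of the article’s example (I have trimmed it to one optional parameter; see the article for the full version):

public class Computer {
    // required parameters
    private final String hdd;
    private final String ram;
    // optional parameter, set through the builder
    private final boolean graphicsCardEnabled;

    // private constructor: the only way to get a Computer is via the builder
    private Computer(ComputerBuilder builder) {
        this.hdd = builder.hdd;
        this.ram = builder.ram;
        this.graphicsCardEnabled = builder.graphicsCardEnabled;
    }

    // static nested builder class
    public static class ComputerBuilder {
        private final String hdd;
        private final String ram;
        private boolean graphicsCardEnabled; // defaults to false

        public ComputerBuilder(String hdd, String ram) {
            this.hdd = hdd;
            this.ram = ram;
        }

        public ComputerBuilder setGraphicsCardEnabled(boolean enabled) {
            this.graphicsCardEnabled = enabled;
            return this; // returning this lets calls be chained
        }

        public Computer build() {
            return new Computer(this);
        }
    }
}

Client code then reads naturally and never passes null for options it doesn’t care about: new Computer.ComputerBuilder("500 GB", "16 GB").setGraphicsCardEnabled(true).build().
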
I chose this topic because I have experienced the problems mentioned above when using the factory pattern. If there are a lot of parameters to pass, it can become extremely tedious to code, and it becomes very difficult to keep track of what’s happening as the code grows more cumbersome. I will definitely have to try the builder pattern, because it seems to function like the factory pattern but in a simpler, easier-to-understand way. I really like the idea of only having to worry about the required parameters and being able to set optional parameters outside of the constructor; this eliminates having to pass null to the constructor, which should help avoid errors. The article’s Java example helped me really understand the code as well as the idea behind it.

Here’s the link: https://www.journaldev.com/1425/builder-design-pattern-in-java

From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

9 Anti-Patterns You Should Be Aware Of

http://sahandsaba.com/nine-anti-patterns-every-programmer-should-be-aware-of-with-examples.html

This blog post covers 9 anti-patterns that are common in software development.

  1. Premature Optimization – Optimizing before you have enough information to make conclusions about where and how to do the optimization. This is bad because it is hard to know exactly what the bottleneck will be before you have empirical data.
  2. Bikeshedding – Spending excessive amounts of time on subjective issues that are not important in the grand scheme of things. This anti-pattern can be avoided by prioritizing reaching a decision when you notice it happening.
  3. Analysis Paralysis – Over-analyzing so much that it prevents action and progress. A sign that this is happening is spending long periods of time on deciding things like a project’s requirements, a new UI, or a database design.
  4. God Class – Classes that control many other classes and have many dependencies and responsibilities. These can be hard to unit-test, debug, and document.
  5. Fear of Adding Classes – Fear of adding new classes or breaking large classes into smaller ones because of the belief that more classes make a design more complicated. In many situations, adding classes can actually reduce complexity significantly.
  6. Inner-platform Effect – Tendency for complex software systems to re-implement features of the platform they run in or the programming language they are implemented in, usually poorly. Doing this is often not necessary and tends to introduce bottlenecks and bugs.
  7. Magic Numbers and Strings – Using unnamed numbers or string literals instead of named constants in code. This makes the code harder to understand, and if the constant ever needs to change, refactoring it can introduce subtle bugs (see the sketch just after this list).
  8. Management by Numbers – Strict reliance on numbers for decision making. Measurements and numbers should be used to inform decisions, not determine them.
  9. Useless (Poltergeist) Classes – Classes with no real responsibility of their own, often used to just invoke methods in another class or add an unneeded layer of abstraction. These can add complexity and extra code to maintain and test, and can make the code less readable.
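
As a quick illustration of the magic-number item above (the tax rate here is a made-up example of my own):

public class Checkout {
    // Named constant: the intent is clear, and the value lives in one
    // place if the rate ever needs to change
    private static final double SALES_TAX_RATE = 0.0825;

    // Magic-number version: 0.0825 means nothing to the next reader
    static double totalWithMagicNumber(double subtotal) {
        return subtotal * (1 + 0.0825);
    }

    // Named-constant version of the same calculation
    static double totalWithNamedConstant(double subtotal) {
        return subtotal * (1 + SALES_TAX_RATE);
    }
}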

I chose this blog because anti-patterns are one of the topics on the concept map for this class and I think they are an interesting and useful concept to learn about. I thought this blog was a very good introduction to some of the more common anti-patterns. Each one was explained well and had plenty of examples. The quotes that are used throughout the blog were a good way of reinforcing the ideas behind each anti-pattern. I will definitely be keeping in mind the information that I learned from this blog whenever I code from now on. I think this will help me write better code that is as understandable and bug-free as possible.

From the blog CS@Worcester – Computer Science Blog by rydercsblog and used with permission of the author. All other rights reserved by the author.