Category Archives: Week 6

Post #6

Toward the end of our discussion about the Strategy design pattern, we briefly talked about the open/closed principle; I wanted to further my understanding of this concept, so I decided to do some research of my own.  Today, I will summarize an article by Swedish systems architect Joel Abrahamsson entitled “A simple example of the Open/Closed Principle”.

Abrahamsson begins the article by summarizing the open/closed principle as the object-oriented design principle that software entities should be open for extension, but closed for modification.  This means that programmers should write code that doesn't need to be modified when the program's specifications change.  He then explains that, when programming in Java, this principle is most often adhered to through inheritance and polymorphism.  We followed this principle in our first assignment of the class, when we refactored the original DuckSimulator program to use the Strategy design pattern.  We realized, in our in-class discussion of the DuckSimulator, that adding behaviors to Ducks would force us to update the implementation of the main class as well as each Duck subclass.  By refactoring the code so that independent behavior classes implement a common interface – and then applying those behaviors to Ducks through setters – we opened the program for extension and left it closed for modification.  Abrahamsson then gives his own example of how the open/closed principle can improve a program that calculates the area of shapes.  The idea is that, if the open/closed principle is not adhered to in a program like this, the implementation is susceptible to rapid growth as functionality is added to calculate the area of more and more shapes.
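Before turning to Abrahamsson's example, it is worth making our DuckSimulator refactoring concrete.  A minimal Java sketch of the Strategy approach (the class and method names here are my own, not from the assignment) might look like this:

interface FlyBehavior {
    void fly();
}

class FlyWithWings implements FlyBehavior {
    public void fly() { System.out.println("Flying with wings!"); }
}

class FlyNoWay implements FlyBehavior {
    public void fly() { System.out.println("I can't fly."); }
}

class Duck {
    private FlyBehavior flyBehavior;

    // New behaviors are added as new FlyBehavior classes;
    // Duck and the main class never need to be modified.
    public void setFlyBehavior(FlyBehavior flyBehavior) {
        this.flyBehavior = flyBehavior;
    }

    public void performFly() {
        flyBehavior.fly();
    }
}

( A hypothetical sketch of the Strategy pattern applied to the DuckSimulator. )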

(Note: Abrahamsson’s example below is clearly not a Java implementation.)

public double Area(object[] shapes)
{
    double area = 0;
    foreach (var shape in shapes)
    {
        if (shape is Rectangle)
        {
            Rectangle rectangle = (Rectangle) shape;
            area += rectangle.Width*rectangle.Height;
        }
        else
        {
            Circle circle = (Circle)shape;
            area += circle.Radius * circle.Radius * Math.PI;
        }
    }

    return area;
}

( Abrahamsson’s implementation of an area calculator that does not adhere to the open/closed principle. )


public abstract class Shape
{
    public abstract double Area();
}
public class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }
    public override double Area()
    {
        return Width*Height;
    }
}
public class Circle : Shape
{
    public double Radius { get; set; }
    public override double Area()
    {
        return Radius*Radius*Math.PI;
    }
}
public double Area(Shape[] shapes)
{
    double area = 0;
    foreach (var shape in shapes)
    {
        area += shape.Area();
    }

    return area;
}

( Abrahamsson’s implementation of an area calculator that adheres to the open/closed principle. )

Abrahamsson ends the article by sharing his thoughts on when the open/closed principle should be adhered to.  He believes that the primary focus of any good programmer should be to write code well enough that it doesn’t need to be repeatedly modified as the program grows.  At the same time, he says that the context of each situation should be considered, because unnecessarily applying the open/closed principle can lead to an overly complex design.  I have always suspected that it is good practice to write code that is prepared for the program’s requirements to change, and this principle confirmed that idea.  From this point forward, I will take the open/closed principle into consideration when tackling new projects.


From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Predictive Applications and the ‘Datafication’ of Everything

We live in a world where we are constantly being bombarded with information. Not only do we consume insane amounts of data, we also provide other people and businesses with information about ourselves. Whether we are signing up for online mailing lists, ordering magazine subscriptions, or even making dinner reservations, we are constantly leaving behind information about our habits and preferences, a trail that Charlie Berger refers to as data exhaust in an October 10, 2017 podcast on Software Engineering Radio. The larger concept that he is describing is what is known as ‘datafication’, a buzzword in the data science and big data spheres that refers to collecting and storing information about social actions so that it can be used for predictive analyses and targeted marketing.

Specific to the computer science discipline, datafication has implications for the development of predictive applications. In the podcast episode, Berger presents the simple yet extremely effective example of the ATM as an application that is lacking in the predictive sense. Berger wonders why, each time he uses the ATM, he is asked which language he would like to use, and why such preferences are not tracked and stored to make for a more seamless and personalized ATM experience. Berger even suggests that the ATM track more than language preferences, offering withdrawal suggestions based on previous transaction data from a similar day of the week or time of day.

While it may not be terribly inconvenient to have to choose a language each time you use the ATM, the advantages of creating and using predictive applications become much more apparent when considering larger-scale operations. Retailers can use predictive applications to make important decisions about things like advertising and merchandising. Berger mentions the well-known “parable of the beer and diapers,” where an interesting and entirely unexpected correlation was found between purchases of diapers and beer. While some versions of the tale include the retailer moving the two correlated items next to one another in order to drive increases in sales, this may or may not be factual. Regardless, generating useful information by querying data in this way is a perfect example of the power that predictive applications have.

Berger repeatedly stresses the importance of moving the algorithm to the data, not vice-versa. By moving the algorithm to the data, we avoid the dangers that come with pulling data out from behind its security and encryption. Developing applications that perform queries and compile information that is usable and useful not only to data scientists but to ordinary people as well is a perfect example of how machine learning and predictive applications can make everyone’s jobs easier.

As a student, I took one of Berger’s closing remarks under careful consideration. Berger states that it is much easier for a programmer to learn how to make a program that interprets data than for a data scientist to translate his specific, one-off analyses into programs. With a newfound understanding of why predictive applications are so important to our data-obsessed society, I look forward to exploring how I can begin developing applications that take advantage of machine learning.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Software Design Patterns

Depending on the type of project you are designing, there will always be a need to adopt a particular pattern that clearly represents your design. There are many design patterns that a programmer can rely on to structure his or her project. For the purposes of my studies, I will be exploring and discussing the three most useful categories of design patterns: creational design patterns, structural design patterns, and behavioral design patterns. Below is a brief description of each category.

Creational Design pattern: Patterns of this type create objects as needed within the code, and they allow for abstract factories that group objects of similar functionality. They also use polymorphism, so that one reference can take on multiple concrete behaviors. With these patterns, you do not need to declare the exact class or type up front, because polymorphism is used in the end to assign behavior. Usually, an abstract prototype is created and the classes that inherit from it are defined.
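As a rough illustration (this sketch and its names are my own, not from a specific source), a creational factory in Java lets the caller ask for an object without ever naming its concrete class:

interface Button {
    void render();
}

class WindowsButton implements Button {
    public void render() { System.out.println("Windows-style button"); }
}

class MacButton implements Button {
    public void render() { System.out.println("Mac-style button"); }
}

// The factory groups related products; polymorphism assigns the
// concrete behavior, so callers never name a concrete class.
interface ButtonFactory {
    Button createButton();
}

class WindowsFactory implements ButtonFactory {
    public Button createButton() { return new WindowsButton(); }
}

class MacFactory implements ButtonFactory {
    public Button createButton() { return new MacButton(); }
}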

Structural Design pattern: Structural patterns deal largely with the composition of classes and objects, using inheritance and interfaces to enable objects to provide new functionality. They often have an abstract class that defines method signatures and behaviors for the classes that will implement the interfaces. In structural patterns, objects are grouped according to their behavior and what they inherit, and objects are modified before your code is finalized.
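A small adapter is one hypothetical example of a structural pattern (again, the names are my own): composition and an interface give an existing class a new face without modifying it:

interface MediaPlayer {
    void play(String file);
}

// An existing class whose interface does not match what clients expect.
class LegacyAudioLibrary {
    void openAndPlay(String path) { System.out.println("Playing " + path); }
}

// The adapter composes the legacy class and exposes the expected interface.
class LegacyPlayerAdapter implements MediaPlayer {
    private final LegacyAudioLibrary library = new LegacyAudioLibrary();

    public void play(String file) {
        library.openAndPlay(file);
    }
}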

Behavioral Design pattern: This type of pattern allows the behavior of a class to change based on its current state; even though these states change throughout the program, the implementation of each state is defined by a unique interface. Behavioral patterns also allow new operations to be added to an object without having to modify its original implementation structure.
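The State pattern is one hypothetical example of a behavioral pattern (names are my own): each state implements the same interface, and the object's behavior changes as its state object changes:

interface DoorState {
    DoorState next();
}

class ClosedDoor implements DoorState {
    public DoorState next() { System.out.println("Opening"); return new OpenDoor(); }
}

class OpenDoor implements DoorState {
    public DoorState next() { System.out.println("Closing"); return new ClosedDoor(); }
}

class Door {
    private DoorState state = new ClosedDoor();

    // The door's behavior depends on its current state; new states can be
    // added without modifying Door itself.
    public void advance() {
        state = state.next();
    }
}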

In general, when you are talking about code responsibility, you really want each method and class to do one thing and do it well. To bring in a real-life example, it is like texting and driving; there is no way to do both effectively at the same time. You will either drive off the road while texting well, or drive well and barely text. Likewise, having your code do two or more things usually means it does one thing well and the rest badly. Also, changing one thing in a base class might have side effects elsewhere. For this reason, as a designer, you will want to extend your classes rather than modify them. Even though you will create more classes in order to have each one do one thing, it is worth doing because it gives your code a clear structure. This is going to help me a lot in the future because I am going to have the classes in my code perform one task each, which provides higher efficiency.


References:

https://airbrake.io/blog/design-patterns/software-design-patterns-guide

From the blog CS@Worcester – Computer Science Exploration by ioplay and used with permission of the author. All other rights reserved by the author.

AssertThat over Assert Methods

This week I read an article about the benefits of using assertThat over other assert methods in unit tests. Since Prof. Wurst uses assertThat instead of the traditional asserts in our reading materials, I think it is time to familiarize myself with the new assertion statements.

According to the article, the assertThat method incorporates the Hamcrest library and is a much improved way to write assertions. It uses what are called matchers, which are self-contained classes with static methods that get used with the assertThat method. These static methods can be chained, which gives a lot of flexibility over using the old assert methods.

Until now I have been using assertEquals while writing my test cases. The biggest issue I had was remembering the correct ordering of (expected, actual) in an assertEquals(expected, actual) statement. The parameter names ‘expected’ and ‘actual’ were not even informative enough to determine what exactly goes where. With the assertThat method I don’t see such complications. The first benefit is that assertThat is more readable than the other assert methods. Writing the same assertion with assertThat looks like:

assertThat(actual, is(equalTo(expected)))

It reads more like a sentence: assert that the actual value is equal to the expected value, which makes more sense.

Similarly, a check for not-equals used to be written:

assertFalse(expected.equals(actual))

Now with the use of assertThat it will be:

assertThat(actual, is(not(equalTo(expected))))

The “not” method can surround any other method, which makes it a negation of any matcher. Also, the matcher methods can be chained to create any number of possible assertions. There’s an equivalent short-hand version of the above equality methods which saves on typing:

assertThat(actual, is(expected))

assertThat(actual, is(not(expected)))

Another benefit is that assertThat is generic and type-safe. The example of assertEquals given below compiles, but fails:

assertEquals("abc", 123)

The assertThat method does not allow this because it is typed, so the following would not compile:

assertThat(123, is("abc"))

This is very handy as it does not allow comparing of different types. I find this a welcome change.

Another benefit of using assertThat is much better error messages.  Below is a common example of the use of assertTrue and its failure message.

assertTrue(expected.contains(actual))

java.lang.AssertionError at …

The problem here is that the assertion error doesn’t report the values for expected and actual.  Granted, the expected value is easy to find, but a debugging session will probably be needed to figure out the actual value.

The same test using assertThat:

assertThat(actual, containsString(expected))

java.lang.AssertionError: Expected: a string containing "abc"

In this case, both values are returned in the error message. I think this is much better, since in many cases a developer can look at the error message and figure out right away what they did wrong rather than having to debug to find the answer. That saves time and hassle. I am excited to get hands-on with the new assertThat method in the coming days.
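To tie these snippets together, here is a minimal, self-contained test class using JUnit 4 and Hamcrest (my own example, not from the article), showing the static imports these assertions rely on:

import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.CoreMatchers.not;
import static org.junit.Assert.assertThat;

import org.junit.Test;

public class GreetingTest {

    @Test
    public void greetingIsCorrect() {
        String actual = "hello world";
        // Reads like a sentence: assert that actual is equal to the expected value.
        assertThat(actual, is(equalTo("hello world")));
        assertThat(actual, is(not("goodbye")));
        // On failure, the error message reports both values.
        assertThat(actual, containsString("hello"));
    }
}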

Source: (https://objectpartners.com/2013/09/18/the-benefits-of-using-assertthat-over-other-assert-methods-in-unit-tests/)


From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.

A Closer Look into JUnit 5

For my blog post this week I wanted to take a closer look into what to expect with JUnit 5. Last class, Professor Wurst gave us a brief rundown of some of the nifty features and functionalities that were introduced in JUnit 4, such as testing for exceptions and the assertThat() method. Seeing as the new JUnit framework, JUnit 5, was just released this past August, I thought it would be interesting to look into what additional features were added to this new testing framework. I found this blog post, A Look at JUnit 5’s Core Features & Testing Functionality, written by Eugen Paraschiv, a software engineering professional, and I thought it gave a pretty good rundown of what to expect with JUnit 5.

Paraschiv points out a few new and useful assertions that are implemented in the JUnit 5 testing framework: assertAll(), assertArrayEquals(), assertIterableEquals(), and assertThrows(). Assert-all is a pretty useful assertion because it allows you to group all the assertions within one test case together and report back the expected vs. actual results for each assertion using a MultipleFailuresError object, which makes understanding why your test case failed easier. Next, the assert-array-equals and assert-iterable-equals assertions are also highly useful, as they allow you to test whether or not a particular data structure (array, list, etc.) contains the elements that you expected it to. In order to use these assertions, however, the objects in your data structure must implement the equals() method. Finally, the assert-throws assertion does pretty much what the “@Test(expected = SomeException.class)” annotation did in JUnit 4. I like this way of checking for exceptions much better, though, because it seems more intuitive and makes the test case easier to read.
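As a quick sketch of how these look in practice (my own example, based on the method names above), a JUnit 5 test class might read:

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertArrayEquals;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

public class JUnit5AssertionsTest {

    @Test
    public void groupedAssertionsAreReportedTogether() {
        // assertAll runs every assertion and reports all failures at once
        // through a MultipleFailuresError.
        assertAll(
            () -> assertEquals(4, 2 + 2),
            () -> assertArrayEquals(new int[] {1, 2, 3}, new int[] {1, 2, 3})
        );
    }

    @Test
    public void assertThrowsReplacesTheExpectedAnnotation() {
        // Replaces JUnit 4's @Test(expected = NumberFormatException.class).
        assertThrows(NumberFormatException.class,
            () -> Integer.parseInt("not a number"));
    }
}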

In his blog post, Eugen brings up a lot of cool new features implemented in JUnit 5, but the two that really stood out to me were (1) the introduction of the concept of assumptions and (2) conditional test execution. First, assumptions are new to JUnit 5 and I think that they could prove extremely useful in practice. Essentially, assumptions are syntactically similar to assertions (the assumption methods are assumeTrue(), assumeFalse(), and assumingThat()), but they do not cause a test to pass or fail. Instead, if an assumption within a test case fails, the test case simply does not get executed. Second, conditional test execution is another cool new feature introduced in JUnit 5. JUnit 5 allows you to define custom annotations which can then be used to control whether or not a test case gets executed. I thought the idea of writing your own test annotations was really interesting and I could definitely see it being useful in practice.
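Here is a short, hypothetical example of an assumption (my own, not from the post); the environment variable name is made up for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assumptions.assumeTrue;

import org.junit.jupiter.api.Test;

public class AssumptionsTest {

    @Test
    public void onlyRunsOnTheCiServer() {
        // If the assumption fails, the rest of the test does not execute,
        // and the test is reported as skipped rather than failed.
        assumeTrue("CI".equals(System.getenv("ENV")));
        assertEquals(4, 2 + 2);
    }
}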


From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

The Builder Pattern


Today I will be talking about an article called “Builder Design Pattern” put out by JournalDev.com. According to the article, the builder design pattern is used to fix some issues that arise when the factory and simple factory design patterns are implemented. The article points out three major problems that come up when using the factory patterns. The first problem is that there can be too many arguments to pass to the factory, which causes errors because the order of the arguments is easy to mix up. The next problem is that all the parameters must be passed to the factory; if you don’t need a parameter, you still need to pass null for it. The last problem occurs when object creation is complex: the factory itself becomes complex and difficult to handle. So what’s the solution to all of this? The builder pattern.


So what is the builder pattern? The builder pattern builds objects step by step, and uses a separate method to return the object when it has been created. This is a great way to implement a factory-like pattern when the object you are trying to create has a large number of parameters. The article demonstrates the builder pattern with a Java program that builds computers. Two classes, Computer and ComputerBuilder, are used. The Computer class has a private constructor that sets all of the parameters, including the optional ones. The ComputerBuilder class is nested inside Computer; in addition to being nested, it is also static because it belongs to the Computer class. The ComputerBuilder class has a public ComputerBuilder constructor, which sets the required parameters as this.parameter, and two other methods used to set the optional parameters. The final method is a builder method (in this case public Computer build()), which uses the stored parameters to build a Computer object and then returns it.
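A condensed sketch along the lines of the article's example (trimmed to two required parameters and one optional one for brevity) might look like:

public class Computer {

    // Required parameters.
    private final String ram;
    private final String hdd;
    // Optional parameter.
    private final boolean graphicsCardEnabled;

    // Private constructor: a Computer can only be created through its builder.
    private Computer(ComputerBuilder builder) {
        this.ram = builder.ram;
        this.hdd = builder.hdd;
        this.graphicsCardEnabled = builder.graphicsCardEnabled;
    }

    // Static nested builder class.
    public static class ComputerBuilder {
        private final String ram;
        private final String hdd;
        private boolean graphicsCardEnabled;

        // Only the required parameters appear in the builder's constructor.
        public ComputerBuilder(String ram, String hdd) {
            this.ram = ram;
            this.hdd = hdd;
        }

        // Optional parameters are set through separate methods.
        public ComputerBuilder setGraphicsCardEnabled(boolean enabled) {
            this.graphicsCardEnabled = enabled;
            return this;
        }

        // The builder method assembles and returns the finished object.
        public Computer build() {
            return new Computer(this);
        }
    }
}

Building a computer then reads naturally, with no nulls for unused options:

Computer pc = new Computer.ComputerBuilder("16GB", "1TB")
        .setGraphicsCardEnabled(true)
        .build();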


I chose this topic because I have experienced the problems mentioned above when using the factory pattern. If there are a lot of parameters to be passed, the code can become extremely tedious to write, and it becomes very difficult to keep track of what’s happening as the code grows more cumbersome. I will definitely have to try the builder pattern, because it seems to function like the factory pattern but in a simpler, easier to understand way. I really like the idea of only having to worry about required parameters and being able to set optional parameters outside of the constructor. This eliminates having to pass null to the constructor, which should help avoid errors. The article uses a Java example, and it helped me really understand the code as well as the idea behind the code.


Here’s the link: https://www.journaldev.com/1425/builder-design-pattern-in-java

From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

9 Anti-Patterns You Should Be Aware Of

http://sahandsaba.com/nine-anti-patterns-every-programmer-should-be-aware-of-with-examples.html

This blog post covers 9 anti-patterns that are common in software development.

  1. Premature Optimization – Optimizing before you have enough information to make conclusions about where and how to do the optimization. This is bad because it is hard to know exactly what the bottleneck will be before you have empirical data.
  2. Bikeshedding – Spending excessive amounts of time on subjective issues that are not important in the grand scheme of things. This anti-pattern can be avoided by prioritizing reaching a decision when you notice it happening.
  3. Analysis Paralysis – Over-analyzing so much that it prevents action and progress. A sign that this is happening is spending long periods of time on deciding things like a project’s requirements, a new UI, or a database design.
  4. God Class – Classes that control many other classes and have many dependencies and responsibilities. These can be hard to unit-test, debug, and document.
  5. Fear of Adding Classes – Fear of adding new classes or breaking large classes into smaller ones because of the belief that more classes make a design more complicated. In many situations, adding classes can actually reduce complexity significantly.
  6. Inner-platform Effect – Tendency for complex software systems to re-implement features of the platform they run in or the programming language they are implemented in, usually poorly. Doing this is often not necessary and tends to introduce bottlenecks and bugs.
  7. Magic Numbers and Strings – Using unnamed numbers or string literals instead of named constants in code. This makes the code harder to understand, and if it becomes necessary to change the constant, refactoring can introduce subtle bugs (see the short example after this list).
  8. Management by Numbers – Strict reliance on numbers for decision making. Measurements and numbers should be used to inform decisions, not determine them.
  9. Useless (Poltergeist) Classes – Classes with no real responsibility of their own, often used to just invoke methods in another class or add an unneeded layer of abstraction. These can add complexity and extra code to maintain and test, and can make the code less readable.
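As a quick illustration of anti-pattern 7 (my own example, not from the blog), compare a magic number with a named constant:

public class CircleArea {

    // Named constant: the intent is obvious, and there is one place to change it.
    private static final double DEFAULT_RADIUS = 5.0;

    public static double withConstant() {
        return Math.PI * DEFAULT_RADIUS * DEFAULT_RADIUS;
    }

    // Magic numbers: what does 5.0 mean here, and how many copies of it exist?
    public static double withMagicNumbers() {
        return 3.14159 * 5.0 * 5.0;
    }
}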

I chose this blog because anti-patterns are one of the topics on the concept map for this class and I think they are an interesting and useful concept to learn about. I thought this blog was a very good introduction to some of the more common anti-patterns. Each one was explained well and had plenty of examples. The quotes that are used throughout the blog were a good way of reinforcing the ideas behind each anti-pattern. I will definitely be keeping in mind the information that I learned from this blog whenever I code from now on. I think this will help me write better code that is as understandable and bug-free as possible.

From the blog CS@Worcester – Computer Science Blog by rydercsblog and used with permission of the author. All other rights reserved by the author.

A Structured QA Process

The blog post I chose this week comes from stickyminds.com and discusses how the quality assurance testing process is changing.  In the post (https://www.stickyminds.com/article/4-strategies-structured-qa-process), Praveena Ramakrishnan gives a general overview of the old way of testing, where the tester was just focused on finding bugs, and then discusses how her next job gave her a different perspective: her job wasn’t just to find the bugs and try to break the program, but to work as a team toward the overall improvement of the software.  I think she has a positive view of her role as a tester, and she employs some strategies to maintain that positive mentality.

Her first strategy is to review documentation.  This is a good reminder for testers at all levels.  When approaching a project we need to remember not to rush into writing tests before we read the documentation and have a solid understanding of not only what the program is doing, but also what the designers want the program to do.  The second strategy is to research past defects.  When we look at past issues we can try to identify whether there are any patterns, which could speed up future testing by improving its efficiency.  She then emphasizes that it is important to triage defects.  When we as testers find a bug we should report it as soon as possible, but that is only the beginning.  After that we should look into what caused the issue and which version of the code it was introduced in.  This again helps paint a fuller picture of the defects and the code in general so that we can identify patterns and improve in the future.

This leads into the last strategy, which is to go beyond the reported issue.  Try to look beyond just your tests.  If you have logs, review them as well; the tests may pass, but you may notice other errors occurring.  Catching these can improve future performance as well as prevent future defects.  Going above and beyond the minimum also typically results in higher pride in your work.  Employing these strategies will have a snowball effect on your job.  While you may not see a clear difference overnight, keep working on implementing them and over time your skills will improve leaps and bounds over your peers.  Remember that it’s not just about breaking the application until the developers fix all the bugs; it’s about being part of a team that strives to create the best product possible.

From the blog CS@Worcester – Tim's Blog by nbhc24 and used with permission of the author. All other rights reserved by the author.

Object Oriented Testing

Link to Blog: http://it.toolbox.com/blogs/enterprise-solutions/object-oriented-testing-issues-21274

This blog explains the issues of object-oriented testing. Craig Borysowich identifies the strategies of object-oriented testing, the strategies for selecting test cases, and the levels of testing, all of which are involved in analyzing the testing process. At the beginning of his blog, Borysowich states that testing in an object-oriented context must address the basics of testing a base class along with the code that uses the base class. Factors that affect this testing are inheritance and dynamic binding, which is also known as dynamic dispatch. Because of these factors, some systems are harder to test than others. For example, systems that use inheritance of implementation are harder to test than systems that use inheritance of interfaces.
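One common way to handle the inheritance problem (a sketch of my own, not from Borysowich's post) is to write tests against the base type and re-run them for each subclass by overriding a factory method, so that dynamically bound behavior is exercised in every class of the hierarchy:

import static org.junit.Assert.assertEquals;

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedList;

import org.junit.Test;

public class StackContractTest {

    // Subclasses override this to supply their own implementation.
    protected Deque<Integer> makeStack() {
        return new ArrayDeque<>();
    }

    @Test
    public void pushThenPopReturnsTheSameElement() {
        Deque<Integer> stack = makeStack();
        stack.push(7);
        assertEquals(Integer.valueOf(7), stack.pop());
    }
}

// Inherits every test above and runs it against a different implementation.
class LinkedListStackTest extends StackContractTest {
    @Override
    protected Deque<Integer> makeStack() {
        return new LinkedList<>();
    }
}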

Object-oriented testing strategies include white-box and black-box testing. Two assessments that determine the strategies for selecting test cases are the assessments of “likely faults” and “likely usage.”

“Likely faults” tests are based on practical experience of the areas in which errors are most likely to occur. For example, errors can arise from certain syntax in a particular programming language, or from boundaries such as beginning and end conditions. “Likely usage” tests exercise the system in the ways that users will be likely to use it, or aim to completely test the elements of the system most likely to be used. These strategies apply to both structural and behavioral testing.

The levels of testing that object-oriented testing undergoes are the Unit, Integration, and Acceptance levels. Unit tests of classes are more effective for the overall system than procedural unit tests. Integration testing focuses on interactions among classes; it is recommended that units be integrated in an incremental fashion at a steady rate. Acceptance tests ensure that every use case appears in a test.

I chose this blog because I wanted to know the necessary processes of object-oriented testing along with its pros and cons. Borysowich briefly explains what these processes and steps are and how object-oriented testing involves the Unit, Integration, and Acceptance levels of testing. Knowing the two assessments of “likely faults” and “likely usage” helps determine what strategies to use when choosing test cases. Object-oriented testing will be useful to know, especially when it comes to video games. Many games are implemented with object-oriented programming, and it is important to understand the issues and solutions that come with testing in the object-oriented environment.

From the blog CS@Worcester – Ricky Phan by Ricky Phan CS Worcester and used with permission of the author. All other rights reserved by the author.

Software Design Principles

Link to blog: http://www.programmr.com/blogs/5-solid-principles-object-oriented-software-design

This blog gives a description of five different design principles: the Single Responsibility Principle, Open-Closed Principle, Liskov Substitution Principle, Interface Segregation Principle, and Dependency Inversion Principle. The acronym “SOLID” represents these principles in the given order:

S – Single Responsibility Principle

O – Open-Closed Principle

L – Liskov Substitution Principle

I – Interface Segregation Principle

D – Dependency Inversion Principle

Single Responsibility Principle:

“A class should only have one reason to change” is how the author of this blog describes this principle. This principle states that every class in your software should have one and only one responsibility.

Open-Closed Principle:

“Software entities should be open for extensions, but closed for modification.” This means that software systems should be able to accommodate change. Customers will request new features and changes to existing features. Designing a system so that changes or extensions to the requirements can be handled by adding subclasses instead of changing existing code is a way to avoid rewriting an entire system.

Liskov Substitution Principle:

“Derived classes must be substitutable for their base classes.” This means that code written against a base class must keep working correctly when it is handed any of that class’s derived classes. Understanding inheritance hierarchies matters here, because certain hierarchies contain traps that can cause the open/closed principle to fail; this principle guards against such violations.
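The classic illustration of a violation (my own sketch, not from the blog) is a Square that extends Rectangle and silently changes its contract:

class Rectangle {
    protected int width;
    protected int height;

    void setWidth(int w)  { width = w; }
    void setHeight(int h) { height = h; }
    int area() { return width * height; }
}

// Square keeps its sides equal, which changes Rectangle's expected behavior.
class Square extends Rectangle {
    @Override void setWidth(int w)  { width = w;  height = w; }
    @Override void setHeight(int h) { width = h;  height = h; }
}

class Client {
    // Correct for every true Rectangle, but returns 16 instead of 12
    // when passed a Square, so Square is not substitutable.
    static int resize(Rectangle r) {
        r.setWidth(3);
        r.setHeight(4);
        return r.area();
    }
}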

Interface Segregation Principle:

“Make fine grained interfaces that are client specific.” This means that client code should not have to be aware of a large, non-cohesive class as one unit. The class should have multiple interfaces, and each piece of client code should only be aware of the interface that is specific to its needs.

Dependency Inversion Principle:

“Depend on abstractions, not on concretions.” This principle attempts to prevent a tangle of dependencies between modules by stipulating that entities and high-level modules must not depend on concrete implementations, but should depend only on abstractions.
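A brief sketch of dependency inversion (the names here are my own): the high-level module depends only on an interface, and any concrete implementation can be supplied:

interface MessageStore {
    void save(String message);
}

class DatabaseStore implements MessageStore {
    public void save(String message) { System.out.println("DB: " + message); }
}

class FileStore implements MessageStore {
    public void save(String message) { System.out.println("File: " + message); }
}

// The high-level module depends on the MessageStore abstraction only;
// swapping DatabaseStore for FileStore requires no change here.
class MessageService {
    private final MessageStore store;

    MessageService(MessageStore store) {
        this.store = store;
    }

    void send(String message) {
        store.save(message);
    }
}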

The author of this blog describes the five design principles in a way that is easy to understand. He highlights the main concepts by providing a brief one-sentence description of each principle. The acronym S.O.L.I.D. also makes it easier to remember which design principles are which. I chose this blog because I wanted to know more about certain design principles. I previously knew the Single Responsibility and Open-Closed principles, but didn’t know the remaining three. Understanding these principles will help me in my future career, because as a video game developer I will need to apply many different design principles when coding games.


From the blog CS@Worcester – Ricky Phan by Ricky Phan CS Worcester and used with permission of the author. All other rights reserved by the author.