Category Archives: Week 7

Why use Mocking

After becoming familiar with JUnit testing, I wanted to look more into mocking. The idea of mock objects was confusing at first. In class our professor covered some examples of how to use various mock object frameworks, such as EasyMock and JMockit, so I thought I would do some more reading on the topic. The article I read focuses on the concept of mocking in general: What is a mock object? What is it used for? Why can’t I mock object XYZ?

In the real world, software has dependencies. We have action classes that depend on services, and services that depend on data access objects. The idea of unit testing is that we want to test our code without testing the dependencies. The code below is an example of this:

import java.util.ArrayList;

public class Counter {

    public Counter() {
    }

    // Returns the number of items in the list; the loop itself is the
    // code under test, not ArrayList.
    public int count(ArrayList items) {
        int results = 0;

        for (Object curItem : items) {
            results++;
        }

        return results;
    }
}

If we wanted to test the method count, we would write a test that addresses how the count method works. We aren’t trying to test that ArrayList works, because we assume that it has already been tested and works as designed. Our only goal is to test our use of ArrayList.
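To make that concrete, here is a minimal JUnit 4 test for count, a sketch of my own rather than code from the article:

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;

import org.junit.Test;

public class CounterTest {

    // Exercises count() itself; we simply trust ArrayList to hold what we add.
    @Test
    public void countReturnsNumberOfItems() {
        ArrayList<String> items = new ArrayList<>();
        items.add("a");
        items.add("b");
        items.add("c");

        assertEquals(3, new Counter().count(items));
    }
}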

Let’s look at a slightly more realistic example:

// A Struts-style action. ActionSupport and StringUtils come from the web
// framework and Apache Commons; LookupService is the dependency we will mock.
public class Action extends ActionSupport {

    private LookupService service;

    private String key;

    public void setKey(String curKey) {
        key = curKey;
    }

    public String getKey() {
        return key;
    }

    public void setService(LookupService curService) {
        service = curService;
    }

    public String doLookup() {
        if (StringUtils.isBlank(key)) {
            return FAILURE;
        }

        List results = service.lookupByKey(key);

        if (results.size() > 0) {
            return SUCCESS;
        }

        return FAILURE;
    }
}

If we wanted to test the doLookup method in the above code, we would want to be able to test it without testing the lookupByKey method. For the sake of this test, we assume that lookupByKey has been tested and works as designed: as long as we pass in the correct key, we get back the correct results. (In reality, we would make sure that lookupByKey is also tested if it is code we wrote.) So how do we test doLookup without executing lookupByKey? The idea behind mock objects is to create an object that takes the place of the real object. The mock object expects a certain method to be called with certain parameters, and when that happens, it returns an expected result. Using the above code as an example, let’s say that when we pass in “1234” as the key to the service.lookupByKey call, we should get back a List with four values in it. Our mock object should expect lookupByKey to be called with the parameter “1234” and, when that occurs, return a List with four objects in it.
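To sketch that idea without any framework (my own code; it assumes LookupService is an interface whose lookupByKey takes a String and returns a List, which the article doesn’t actually show):

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// A hand-rolled mock standing in for the real lookup service.
public class MockLookupService implements LookupService {

    @Override
    public List lookupByKey(String key) {
        // Expect the call with "1234" and return the canned four-element result.
        if ("1234".equals(key)) {
            return Arrays.asList("a", "b", "c", "d");
        }
        return Collections.emptyList();
    }
}

A test could then call setService(new MockLookupService()), set the key to “1234”, and assert that doLookup() returns SUCCESS, without ever executing the real lookup code. Frameworks such as EasyMock and JMockit generate this kind of stand-in for you and can also verify that the expected call actually happened.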

After reading the article, the key takeaway for me was that mock objects are a very valuable tool in testing. They give us the ability to test what we write without having to address dependency concerns. In the coming weeks I will learn about implementing a specific mocking framework.

Source: http://www.michaelminella.com/testing/the-concept-of-mocking.html

From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.

Command Design Pattern

In today’s blog, I will be discussing a design pattern called the Command Design Pattern.

What is a Command Design Pattern?

The Command Design Pattern is a behavioral design pattern in which an object is used to represent and encapsulate all the information needed to perform an action or trigger an event at a later time.

How does it work?

Requests are wrapped as commands and passed to an invoker object. The invoker object then looks for the appropriate object that can handle the command and hands the command to that object for execution. The base command class contains an execute() method that simply calls the action on the receiver.

A command class includes some of the following: an object, the method to be applied to the object, and the arguments to be passed when the method is applied.

The Command Design Pattern allows you to store lists of code that are executed at a later time or many times. Clients do not know what the encapsulated objects are; they just ask the Command to run when execute() is called. An object called the Invoker transfers the command to another object called the receiver to execute the right code.

How to Implement the Command Pattern?


First off, you have to create an interface that acts as a command. The command object knows about the receiver and invokes a method of the receiver.

Second, create your objects (the client) that will serve as requests. The client decides which receiver objects it assigns to the command objects, and which commands it assigns to the invoker.

Third, create concrete command classes that implement the command interface; these call into the receiver to do the actual work when the execute() method is called.

Lastly, create an invoker object that identifies which object will execute which command based on the type of command. The invoker object does not know anything about the concrete command; it only knows the command interface.
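Putting those four steps together, here is a short sketch of my own in Java (the Light and RemoteControl names are illustrations, not from the tutorial):

// Step 1: the command interface.
interface Command {
    void execute();
}

// The receiver that does the real work.
class Light {
    void turnOn() {
        System.out.println("Light is on");
    }
}

// Step 3: a concrete command that calls into its receiver.
class LightOnCommand implements Command {
    private final Light light;

    LightOnCommand(Light light) {
        this.light = light;
    }

    @Override
    public void execute() {
        light.turnOn();
    }
}

// Step 4: the invoker only knows the Command interface.
class RemoteControl {
    private Command command;

    void setCommand(Command command) {
        this.command = command;
    }

    void pressButton() {
        command.execute();
    }
}

// Step 2: the client wires receiver, command, and invoker together.
public class CommandDemo {
    public static void main(String[] args) {
        RemoteControl remote = new RemoteControl();
        remote.setCommand(new LightOnCommand(new Light()));
        remote.pressButton(); // prints "Light is on"
    }
}

The invoker (RemoteControl) runs whatever command it is given, without knowing what the command does or who the receiver is.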

[Diagram: the Command pattern, from the source post]

I selected this post because I wanted to learn more about different patterns that are not covered in class. This post shows you what the Command Pattern is and how it works. It also walks through the steps, with example code, for using the pattern. The diagram above is from the post.

The Command Pattern seems to be very useful when you find yourself writing code that looks like:

if (…)
    do();
else if (…)
    do();
else if (…)
    do();
else if
    …

I think the Command pattern is very useful when you are creating an interface where objects are waiting to be executed, such as a menu interface.

https://www.tutorialspoint.com/design_pattern/command_pattern.htm

From the blog cs-wsu – Computer Science by csrenz and used with permission of the author. All other rights reserved by the author.

Protected: Post #8

This post is password protected. You must visit the website and enter the password to continue reading.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

10/30/2017–blog assignment week 7 CS 443

http://www.kyleblaney.com/junit-best-practices/
This week we turn to best testing practices in JUnit. This article lists some of the most effective JUnit testing practices, which I hope to incorporate into my own testing. That is the main reason I chose this blog post this week. Testing has two core requirements. First, there are a lot of unit tests to run, so testing needs to be lightning fast. Second, tests are used to indicate problems in the production code, and a test should fail if and only if the production code is broken. Therefore, testing needs to be extremely reliable.
The article argues that unit tests need to run completely in memory: a unit test should not make HTTP requests, access a database, or read from a file system. Those kinds of tests take too much time, or are too unreliable, to belong in a unit test suite; they should be left to other types of tests, such as functional tests. Filesystems in particular are too complicated to involve in unit testing. The article lists the following complications:
Filesystem tests depend on the location of the current working directory, which can differ between developer machines and build machines. They often require the files being read to be stored in source control, and it can be difficult to keep those files up to date.

My favorite recommendation of all is to not use static members in a test class. Static members create hidden dependencies in unit testing. For example, if a class depended on a DatabaseUtils.createConnection() method, then that method, and whatever depends on it, would be almost impossible to test: you would need a real database, or a testing flag in DatabaseUtils, just to exercise the function. Another issue is that a static method’s behavior applies to all callers. To alter its behavior, a flag must either be passed in as a parameter to the method or be set as static state. The main issue with passing flags in as parameters is that it changes the signature for every caller, which becomes cumbersome as more and more flags are added. The problem with the second approach is that the static flag is global state whose effects can be felt everywhere.
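To see why, here is a sketch of my own contrasting a static dependency with an injected one (UserDao and ConnectionFactory are hypothetical names; DatabaseUtils is the example from the article):

import java.sql.Connection;

// Hypothetical static utility like the one the article warns about.
class DatabaseUtils {
    static Connection createConnection() {
        throw new UnsupportedOperationException("would open a real database connection");
    }
}

// Hard to test: the dependency is hidden behind a static call.
class UserDaoStatic {
    int countUsers() {
        Connection conn = DatabaseUtils.createConnection(); // needs a real database
        return 0; // ... query using conn ...
    }
}

// Easier to test: the dependency is injected, so a test can pass in a stub.
interface ConnectionFactory {
    Connection create();
}

class UserDao {
    private final ConnectionFactory factory;

    UserDao(ConnectionFactory factory) {
        this.factory = factory;
    }

    int countUsers() {
        Connection conn = factory.create(); // a test supplies a fake factory
        return 0; // ... query using conn ...
    }
}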
The last recommendation discussed in the article is to not skip unit tests. The ways of skipping tests that should be avoided are JUnit’s @Ignore annotation, Maven’s maven.test.skip property, the Maven Surefire Plugin’s skipTests property, and the Maven Surefire Plugin’s excludes property. Skipped unit tests provide no benefit but still have to be checked out of source control and compiled, so instead of skipping unit tests, they should be removed from source control.
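For reference, this is the kind of skipped test the article says to avoid (a minimal JUnit 4 example of my own):

import org.junit.Ignore;
import org.junit.Test;

public class SkippedTest {

    // The article's advice: delete tests like this rather than leaving them ignored.
    @Ignore("flaky; skipped instead of fixed")
    @Test
    public void brokenBehavior() {
        // never runs, but still gets checked out and compiled on every build
    }
}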

From the blog CS@Worcester – Site Title by myxuanonline and used with permission of the author. All other rights reserved by the author.

10/30/2017–blog assignment week 7

https://jobs.zalando.com/tech/blog/design-patterns-redux/?gh_src=4n3gxh1
This week I introduce one of my favorite JavaScript libraries for building interfaces for web development: Redux. React allows for the development of large web applications that use data and can change state over time without having to reload the page. Its main aims are speed, simplicity, and scalability. Together, React and Redux provide powerful libraries for building UIs. What makes Redux so powerful? Redux is a predictable state container for UI development. It helps programmers write applications that behave consistently, that can run in different environments, and that are easily testable. In addition, Redux provides for live code editing.

What I have always wondered is what makes Redux so powerful. What internal code and software design make it better than other libraries? This article discusses the different design patterns used in Redux. Redux, as explained, is a container for organizing data on the frontend. Its strict guideline for how data flows through a project is known as unidirectional data flow.

This article discusses the design patterns behind the state tree and the connect method in Redux. The state tree uses my favorite design pattern, the singleton pattern, which restricts instantiation of a class to one object. In Redux there can only be one state tree, so there is exactly one place to look for the different states and changes within the application. This reduces unnecessary instantiation and helps speed up development. The singleton pattern is one of the major differences between Redux and Flux.
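As a reminder of what the singleton pattern looks like in code, here is a Java sketch of my own (Redux achieves the same effect in JavaScript by creating a single store object):

public final class AppState {
    private static final AppState INSTANCE = new AppState();

    // Private constructor: no other instances can ever be created.
    private AppState() {
    }

    public static AppState getInstance() {
        return INSTANCE;
    }
}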

Finally, the connect method in Redux uses the observer pattern. The observer pattern is a software design pattern in which an object maintains a list of dependents, called observers, and notifies them automatically of state changes. The pattern describes how to solve recurring design problems in order to build flexible and reusable object-oriented software, which makes objects easier to implement, change, test, and reuse. The observer pattern helps to solve three problems in software engineering (a small sketch of the pattern follows the list):

A one-to-many dependency between objects is defined without making the objects tightly coupled.
It is ensured that when one object changes state an open-ended number of dependent objects are updated automatically.
It is possible for one object to notify an open-ended number of other objects.
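Here is a minimal Java sketch of the observer pattern, my own illustration (Redux exposes the same idea in JavaScript through store.subscribe):

import java.util.ArrayList;
import java.util.List;

// The subject keeps a list of observers and notifies them on every state change.
class Store {
    private final List<Runnable> observers = new ArrayList<>();
    private String state = "";

    void subscribe(Runnable observer) {
        observers.add(observer);
    }

    String getState() {
        return state;
    }

    void setState(String newState) {
        state = newState;
        for (Runnable observer : observers) {
            observer.run(); // dependents update automatically
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        Store store = new Store();
        store.subscribe(() -> System.out.println("Component sees: " + store.getState()));
        store.setState("logged-in"); // prints "Component sees: logged-in"
    }
}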

In Redux the observers are the components and the subject is the state tree. The observer pattern makes it possible for components to listen, or connect, to any part of the state tree, and when the state changes the components update automatically. Together, the singleton and observer patterns provide a powerful container for organizing data.

Redux has many advantages. It provides a predictable state for understanding how data flows through an application. Its use of reducer functions makes testing easier. Its use of the singleton pattern provides a centralized state, making implementation easier, and it helps keep data persistent between page refreshes. From my experience working with Redux, I chose this post to learn more about the internal workings of the library.

From the blog CS@Worcester – Site Title by myxuanonline and used with permission of the author. All other rights reserved by the author.

Effective Data Migration Testing

When considering the term software quality assurance and testing, what comes to mind? For me, I think of developing test cases that exercise program functionality and aim to expose flaws in the implementation. In my mind, this type of testing comes mainly before a piece of software is released, and often occurs alongside development. After the product is released, the goals and focus of software quality assurance and testing change.

My views were challenged, however, when I recently came across an interesting new take on software testing. The post by Nandini and Gayathri titled “Data Migration Testing Tutorial: A Complete Guide” provides helpful advice and a process to follow when testing the migration of data. These experienced testers draw on their experiences to point out specific places in the migration of software where errors are likely to occur, and effective methods of exposing these flaws before they impact end-users and the reputation of the company.

The main point that Nandini and Gayathri stress is that there are three phases of testing in data migration. The first phase is pre-migration testing, which occurs, as the name would suggest, before migration. In this phase, the legacy state of the data is observed and provides a baseline to which the new system can then be compared. During this phase, differences between the legacy application and the new application are also noted, and methods of dealing with these differences in implementation are developed and implemented to ensure a smooth transfer of the data.

The second phase is the migration testing phase, where a migration guide is followed to ensure that all of the necessary tasks are performed in order to accurately migrate the data from the legacy application to the new application. The first step of this phase is to create a backup of the data, which can serve as a rollback point in case of disaster. Also during this phase, metrics including downtime, migration time, time to complete n transfers, and other relevant information are recorded to later evaluate the success of the migration.

The final phase of data migration testing occurs post-migration. During this phase, many of the tests can be automated. These tests compare the data from the legacy application to the data in the new application and alert testers to any abnormalities or inconsistencies in the data. The tutorial lists 24 categories of post-migration tests that should be completed satisfactorily in order to say that the migration was successful.
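As an illustration of what one such automated post-migration check might look like, here is a sketch of my own (not from the tutorial; the JDBC URLs and table name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Compares a record count between the legacy and new databases; a mismatch
// signals that the migration dropped or duplicated data.
public class MigrationCountCheck {

    static long countRows(String jdbcUrl, String table) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM " + table)) {
            rs.next();
            return rs.getLong(1);
        }
    }

    public static void main(String[] args) throws Exception {
        long legacy = countRows("jdbc:h2:./legacy", "customers");
        long migrated = countRows("jdbc:h2:./migrated", "customers");

        if (legacy != migrated) {
            throw new AssertionError("Row counts differ: " + legacy + " vs " + migrated);
        }
        System.out.println("customers table migrated: " + legacy + " rows");
    }
}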

Reading this tutorial on data migration testing has certainly changed my views on what testing means. The actual definition seems much broader than what I would have thought in the past. Seeing testing from the perspective of migrating applications gave me insight into the capabilities of, and responsibilities placed on, software testers. If something in the migration does not go according to plan, it may be easy to blame the testers for not considering that case. I enjoyed reading about software testing from this new perspective and learning some of the most important things to consider when performing data migration testing.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Software Development Paths

https://simpleprogrammer.com/2017/07/17/software-development-career-paths/

This detailed article by John Sonmez is an excerpt of one chapter from the book “The Complete Software Developer’s Career Guide.” The chapter explains the different career choices available in the software development field. This is important to me because I am a junior in college, on my way to a bachelor’s degree in computer science with a software concentration, so planning out my future from here on out is vital. There are three types of software developers described here: career developers, freelancers, and entreprogrammers.

Career developers are the most common type, where the developer works for a company and is paid regularly. All programmers either already are career developers or are working towards establishing themselves as career developers. This path is pretty standard: a developer works at a company, gets promoted within that company or switches to a new one, then eventually retires.

Freelancing is a riskier choice, as a freelancer’s income is tied to the amount of work they can find for themselves. A freelancer doesn’t work for one particular company, and you could say a freelancer is his own boss; however, to be successful you need to provide a quality service to anybody who hires you, which means noting their requirements very carefully.

Entreprogrammer is a term for a mixture of a programmer and an entrepreneur. On this path, a programmer uses his skills to develop some marketable software to sell to clients. An entreprogrammer could write their own application, create training videos or tutorials, write a book, or even run a successful blog.

After choosing a broad type of software development, there are several specializations within it to choose from. Some of these include web development, video games, data science, and automation. Some developers choose multiple specializations, though it is imperative to choose at least one.

Web development is the most populated specialization for programmers, where developers create various web-based applications. Within this specialization there are front-end, middleware, and back-end technologies for the developer to work on. More specifically, these pertain to the user interface, business logic, and databases of the application. Developers who are skilled in all three of these areas are called “full stack developers.”

Software development for video games is a viable career option, though it sounds like a dream to most. As a result, there is a lot of competition, which makes it a challenging field to be successful in. The trade-off for the amount of hard effort put into this specialization is the satisfaction of working in video games.

Data science is a new and lucrative path where a data scientist uses various skills and tools to analyze large amounts of data in order to draw conclusions. Often, software programming experience is helpful because a data scientist can write specific programs to organize the data and make better predictions.

From the blog CS@Worcester – CS Mikes Way by CSmikesway and used with permission of the author. All other rights reserved by the author.

B7: High Cohesion and Loose Coupling

Cohesion and Coupling

      This week, I chose to write about a blog post on high cohesion and loose coupling. The post went over the definitions of both terms and used examples to help solidify the ideas. It started with high cohesion, defining it as the degree to which the parts of a module correlate or belong with each other, essentially comparing the functionality of the different parts of a software module to see how similar they are. The blog then described different types of cohesion, listed from worst to best: coincidental, logical, temporal, and even functional, which allowed a wider understanding of the subcategories within cohesion. The blog then defined coupling as how much one module knows about the inner workings of another. It divided coupling into the subcategories of loose and tight, where loose coupling makes modules as independent of each other as possible, while tight coupling makes it so you can’t change one module without changing the other. The blog then went into more detail about the Law of Demeter, a well-known specific case of loose coupling that limits which objects’ fields and methods a given object should access.
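      As a quick Java sketch of the Law of Demeter (my own illustration, not from the blog), the first version below reaches through a customer’s wallet, while the second only talks to its direct collaborator:

// Violates the Law of Demeter: the caller reaches through Customer's internals.
class PaymentViolating {
    void charge(Customer customer, int amount) {
        customer.getWallet().deduct(amount); // talks to a "stranger"
    }
}

// Follows the Law of Demeter: the caller only talks to its direct collaborator.
class Payment {
    void charge(Customer customer, int amount) {
        customer.pay(amount);
    }
}

class Customer {
    private final Wallet wallet = new Wallet();

    Wallet getWallet() {
        return wallet;
    }

    void pay(int amount) {
        wallet.deduct(amount);
    }
}

class Wallet {
    private int balance = 100;

    void deduct(int amount) {
        balance -= amount;
    }
}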

      I chose this article because it covered a main topic within our class discussion of design patterns. I thought that it would be a good idea to review this topic more, as it seemed important to know for the future projects given to us in class. The post was a great source of information that balanced the amount of text given with visual examples. It stuck to basic definitions with applications to code, while also using visual drawings to engage the reader. The tables and code used to explain the ideas were very helpful, as they illustrated how each example differed from the others. I now understand that high cohesion reflects how closely the parts of a module belong together, and loose coupling allows as much freedom as possible within the dependency relationship between modules. Since these fall under the category of design patterns, I think it would be very helpful to fully understand the subcategories as well, since there is a best and worst type for a given situation. This could help out a lot in future coding practice when dealing with multiple relationships between modules in a piece of software that need to be optimized using a design pattern. Once again, since we learned this material in class, I thought it would be important to review this topic before we have to apply it again in our upcoming classes.

From the blog CS@Worcester – Student To Scholar by kumarcomputerscience and used with permission of the author. All other rights reserved by the author.

10 Object Oriented Design Principles

http://javarevisited.blogspot.com/2012/03/10-object-oriented-design-principles.html

For my blog post this week I chose another topic on the concept map that I thought looked interesting: design principles. These are helpful guidelines to follow that will make your code cleaner and more modular. This blog post describes 10 design principles that are useful in object-oriented programming.

  1. DRY (Don’t Repeat Yourself) – Means don’t write duplicate code. It is better to abstract common things in one place as this will make your code easier to maintain.
  2. Encapsulate What Changes –  This will make it easier to modify your code in the future. One good way to implement this is by making variables and methods private by default and increasing access step by step.
  3. Open/Closed Principle – Classes and methods should be open for extension and closed for modification. Following this principle means that you will not have to change much existing code when new functionality is added.
  4. Single Responsibility Principle – A class should always handle a single functionality. Introducing more functionality to a class will increase coupling which makes it hard to modify a portion of code without breaking another part.
  5. Dependency Injection or Inversion Principle – High level modules should not depend on low-level modules; both should depend on abstractions.
  6. Favor Composition Over Inheritance – Composition is a lot more flexible than inheritance and allows changes to the behavior of a class at run-time.
  7. Liskov Substitution Principle – Subtypes should be able to be substituted for their supertypes without any issues. To follow this principle, subclasses must enhance functionality and not reduce it.
  8.  Interface Segregation Principle (ISP) – Avoid monolithic interfaces that have multiple functionalities. This is intended to keep a system decoupled which makes it easier to change.
  9. Program For an Interface, Not an Implementation – Leads to flexible code that can work with any new implementation of an interface.
  10. Delegation Principle – Delegate tasks to specific classes. An example of this design principle is the equals() method in Java (see the sketch after this list).
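As a small Java sketch of delegation via equals() (my own example, not from the blog), Team doesn’t compare members itself; it delegates the comparison to Member’s own equals() method:

import java.util.Objects;

class Member {
    private final String name;

    Member(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof Member)) return false;
        return name.equals(((Member) other).name);
    }

    @Override
    public int hashCode() {
        return name.hashCode();
    }
}

class Team {
    private final Member lead;

    Team(Member lead) {
        this.lead = lead;
    }

    @Override
    public boolean equals(Object other) {
        if (!(other instanceof Team)) return false;
        return Objects.equals(lead, ((Team) other).lead); // delegate to Member.equals
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(lead);
    }
}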

I chose this blog because I wanted to learn more about design principles. We have covered some of them in class, such as the open/closed principle and composition over inheritance. However, this blog introduced me to a few new ones, like the Liskov substitution principle and the interface segregation principle. I thought this was a decent rundown of the most common design principles, but I could tell the author is not a native English speaker, which made some parts not entirely clear. This encouraged me to look up other resources, and now I feel like I have a firm grasp on all of these principles. I will be applying what I’ve learned to all future code I write because, like the design patterns, these principles will make my code more flexible, readable, and easy to maintain.

From the blog CS@Worcester – Computer Science Blog by rydercsblog and used with permission of the author. All other rights reserved by the author.

What Is Needed to Make Automated Testing Successful? Some Basics…

Testing software can take up a lot of time. One way to reduce the time spent on testing is to automate it, so that a person doesn’t have to run each individual test manually. Automated testing is the way of the future, and there is information everywhere on it. Bas Dijkstra of StickyMinds.com discusses some of the basic principles of automated testing in his blog post, “5 Pillars of a Successful Test Automation Implementation.”

The first of the five pillars is the automation tool. Everybody has to have some sort of tool to help organize and run automated tests. Testing teams often overlook the tool and just go with whatever is available on hand, and by doing this, they may be making their own lives more challenging. Take the time to make sure you have a tool that fits your needs. I agree that this is an important first step: if you pick, or are forced to use, a tool that is poorly designed or doesn’t meet your needs, you are putting yourself behind the eight ball from the start.

The second and third pillars discuss test data and test environments. Depending on how broad the scope of the tests being run is, test data can become a pain to maintain. You want to make sure that you have this situation under control, or you are asking for trouble; it is easy to imagine how out of hand and disorganized this could get in large-scale testing. To go along with test data is the test environment. Have an environment that is as realistic as possible. Make sure it has everything you need to complete your testing and, if possible, make it easy to replicate. This allows you to run multiple tests in independent environments and/or continue development in one of the environments. Nothing is more frustrating than not having an environment to do your work on, whether because another team member is using it or it is down for maintenance, and an environment that is easy to duplicate can help eliminate this problem.

Next are reporting and craftsmanship. Reporting is vital, as it allows others and yourself to analyze test results. Dijkstra suggests that a good report should show what went wrong, where it went wrong, and the error message that went with it. This relates directly to craftsmanship, since testing can be challenging if the right skills aren’t available. There should be someone who has experience creating reports, for example. Make sure the correct developers, engineers, etc. are on hand to answer questions and help when needed.

My experience with automated testing is limited, which is why I have started investigating it. From experience with manual testing, I can say that what Dijkstra discusses certainly applies there, so I see no reason why it wouldn’t apply to automated testing too. I hope to continue reading about automated testing, as I feel it is an important and necessary tool and skill to have.

Link:

https://www.stickyminds.com/article/5-pillars-successful-test-automation-implementation


From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.