Category Archives: Week 7

Protected: Post #8

This post is password protected. You must visit the website and enter the password to continue reading.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

10/30/2017–blog assignment week 7 CS 443

http://www.kyleblaney.com/junit-best-practices/
This week we turn to best testing practices in JUnit. The article lists some of the most effective JUnit testing practices, which I hope to incorporate into my own work; this is the main reason I chose this post. Unit testing faces two challenges. First, there are a lot of unit tests to run, so each test needs to be lightning fast. Second, tests exist to indicate problems in the production code: a test should fail if and only if the production code is broken. Therefore, tests need to be extremely reliable.
The article argues that unit tests need to run completely in memory. This means that unit tests should not make HTTP requests, access a database, or read from a file system. These operations are too slow or too unreliable for unit tests; they should be left to other types of tests, such as functional tests. In addition, filesystems are too complicated to be part of unit testing. The article lists the following complications:
Tests that read from the filesystem must know the current working directory, which can differ between developer machines and build machines. They often require the files being read to be stored in source control, and it can be difficult to keep those files up to date.
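The in-memory alternative can be sketched in Java; the class and method names below are my own invention, not from the article. The idea is to depend on a Reader rather than a file, so the unit test never touches the filesystem:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.UncheckedIOException;

// Hypothetical example class: it accepts any Reader, so production code can pass
// a FileReader while a unit test passes an in-memory StringReader.
class LineCounter {
    static int countLines(Reader source) {
        try (BufferedReader br = new BufferedReader(source)) {
            int count = 0;
            while (br.readLine() != null) count++;
            return count;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

public class InMemoryTestDemo {
    public static void main(String[] args) {
        // The "test" runs entirely in memory: fast and reliable on any machine.
        int lines = LineCounter.countLines(new StringReader("a\nb\nc"));
        System.out.println(lines); // prints 3
    }
}
```

Because nothing here depends on a working directory or on files checked into source control, the test behaves identically on developer and build machines.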

My favorite recommendation of all is to not use static members in a test class. Static members create hidden dependencies in unit testing. For example, if a class depends on a DatabaseUtils.createConnection() method, then that method and everything that depends on it are almost impossible to test; testing would require a real database or a testing flag inside DatabaseUtils. Another issue is that a static method's behavior applies to every caller. To alter that behavior, a flag must either be passed in as a parameter or set as static state. The problem with passing flags as parameters is that it changes the signature for every caller, which becomes cumbersome as more flags are added. The problem with static flags is that the code that sets them can be anywhere.
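The fix this recommendation points toward is dependency injection. Here is a minimal sketch (DatabaseUtils.createConnection() is the example from the article; ConnectionFactory, ReportGenerator, and their methods are my own invented stand-ins): instead of calling a static factory directly, the class receives a factory through its constructor, so a test can pass a fake.

```java
// Hedged sketch: all class names here are invented for illustration.
// The hard-to-test version would call a static factory directly:
//   Connection c = DatabaseUtils.createConnection();  // cannot be faked in a unit test
// Injecting the dependency through an interface makes the same logic testable in memory.

interface ConnectionFactory {
    String connect();  // simplified stand-in for opening a real connection
}

class ReportGenerator {
    private final ConnectionFactory factory;

    ReportGenerator(ConnectionFactory factory) {  // dependency injected, not static
        this.factory = factory;
    }

    String makeReport() {
        return "report via " + factory.connect();
    }
}

public class StaticDependencyDemo {
    public static void main(String[] args) {
        // A unit test can pass a fake factory that never touches a database.
        ReportGenerator gen = new ReportGenerator(() -> "fake-connection");
        System.out.println(gen.makeReport()); // prints "report via fake-connection"
    }
}
```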
The last recommendation discussed in the article is to not skip unit tests. The ways tests commonly get skipped, all of which should be avoided, are JUnit's @Ignore annotation, Maven's maven.test.skip property, and the Maven Surefire Plugin's skipTests and excludes properties. Skipped unit tests provide no benefit yet still have to be checked out of source control and compiled. So, instead of being skipped, such tests should be removed from source control.

From the blog CS@Worcester – Site Title by myxuanonline and used with permission of the author. All other rights reserved by the author.

10/30/2017–blog assignment week 7

https://jobs.zalando.com/tech/blog/design-patterns-redux/?gh_src=4n3gxh1
This week I introduce one of my favorite JavaScript libraries for building user interfaces for web development: Redux. React allows for the development of large web applications that consume data and can change state over time without reloading the page; its main aims are speed, simplicity, and scalability. Together, React and Redux provide powerful libraries for building UIs. What makes Redux so powerful? Redux is a predictable state container for JavaScript apps. It helps programmers write applications that behave consistently, run in different environments, and are easy to test. In addition, Redux supports live code editing.

What I have always wondered is what makes Redux so powerful. What internal code and software design make it better than other state-management libraries? This article discusses the different design patterns used in Redux. Redux, as explained, is a container for organizing data on the frontend. Its strict guideline for how data flows through a project is known as unidirectional data flow.

This article discusses the design patterns behind the state tree and the connect method in Redux. The state tree uses my favorite design pattern, the singleton pattern, which restricts instantiation of a class to a single object. Its use organizes the data and reduces instantiation by creating one state tree: in Redux there can be only one. The importance of this is that there is exactly one place to look for the different states or changes within the application. Therefore, it reduces the need for instantiation and helps speed up development. The singleton pattern is one of the major differences between Redux and Flux.
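As an illustration, a singleton can be sketched in Java (Redux itself is JavaScript, and the names here are invented for illustration): a private constructor makes a second state tree impossible, mirroring Redux's single store.

```java
// Hypothetical sketch of a singleton "state tree": the private constructor and
// single static instance guarantee there is exactly one place holding state.
final class StateTree {
    private static final StateTree INSTANCE = new StateTree();
    private String state = "initial";

    private StateTree() {} // no outside code can construct a second tree

    static StateTree getInstance() { return INSTANCE; }

    String getState() { return state; }
    void setState(String s) { state = s; }
}

public class SingletonDemo {
    public static void main(String[] args) {
        // Every lookup returns the same object, so there is one place to read state.
        System.out.println(StateTree.getInstance() == StateTree.getInstance()); // prints true
    }
}
```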

Finally, the connect method in Redux uses the observer pattern. The observer pattern is a software design pattern in which an object maintains a list of dependents, called observers, and notifies them automatically of state changes. The pattern describes how to solve recurring design problems and build flexible, reusable object-oriented software, making objects easier to implement, change, test, and reuse. The observer pattern helps to solve three problems in software engineering:

A one-to-many dependency between objects is defined without making the objects tightly coupled.
It is ensured that when one object changes state, an open-ended number of dependent objects are updated automatically.
It is possible for one object to notify an open-ended number of other objects.

In Redux, the observers are the components and the subject is the state tree. The observer pattern makes it possible for components to listen, or connect, to any part of the state tree; when the state changes, the components update automatically. Together, the singleton and observer patterns provide a powerful container for organizing data.
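The subscribe-and-notify mechanism can likewise be sketched in Java (Redux is JavaScript; the Store class and its methods here are invented for illustration, loosely mirroring Redux's subscribe):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch: components subscribe to a store and are notified when
// state changes, mirroring Redux's connect/subscribe idea.
class Store {
    private final List<Consumer<String>> observers = new ArrayList<>();
    private String state = "initial";

    void subscribe(Consumer<String> observer) { observers.add(observer); }

    void setState(String newState) {
        state = newState;
        for (Consumer<String> o : observers) o.accept(state); // automatic update
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        Store store = new Store();
        store.subscribe(s -> System.out.println("component A sees: " + s));
        store.subscribe(s -> System.out.println("component B sees: " + s));
        store.setState("updated"); // both observers print the new state
    }
}
```

Note the one-to-many shape from the list above: the store never knows which concrete components are listening, only that they implement the observer callback.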

Redux has many advantages. It provides a predictable state for understanding how data flows through an application. Its use of reducer functions makes testing easier. Finally, its use of the singleton pattern provides a centralized state, which makes implementation easier and keeps data persistent across page refreshes. From my experience working with Redux, I chose this post to learn more about the internal workings of the library.

From the blog CS@Worcester – Site Title by myxuanonline and used with permission of the author. All other rights reserved by the author.

Effective Data Migration Testing

When considering the term software quality assurance and testing, what comes to mind? For me, I think of developing test cases that exercise program functionality and aim to expose flaws in the implementation. In my mind, this type of testing comes mainly before a piece of software is released, and often occurs alongside development. After the product is released, the goals and focus of software quality assurance and testing change.

My views were challenged, however, when I recently came across an interesting new take on software testing. The post by Nandini and Gayathri titled “Data Migration Testing Tutorial: A Complete Guide” provides helpful advice and a process to follow when testing the migration of data. These experienced testers draw on their experiences to point out specific places in the migration of software where errors are likely to occur, and effective methods of exposing these flaws before they impact end-users and the reputation of the company.

The main point that Nandini and Gayathri stress is that there are three phases of testing in data migration. The first is pre-migration testing, which occurs, as the name suggests, before the migration itself. In this phase, the legacy state of the data is observed and provides a baseline to which the new system can then be compared. Differences between the legacy application and the new application are also noted, and methods of dealing with these differences in implementation are developed and implemented to ensure a smooth transfer of data.

The second phase is the migration testing phase, where a migration guide is followed to ensure that all of the necessary tasks are performed to accurately migrate the data from the legacy application to the new application. The first step of this phase is to create a backup of the data, which can be relied upon as a rollback point in case of disaster. Also during this phase, metrics including downtime, migration time, time to complete n transfers, and other relevant information are recorded to later evaluate the success of the migration.

The final phase of data migration testing occurs post-migration. Many of the tests used during this phase can be automated. These tests compare the data from the legacy application to the data in the new application and alert testers to any abnormalities or inconsistencies. The tutorial lists 24 categories of post-migration tests that should be completed satisfactorily in order to call the migration successful.
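A tiny sketch of what one automated post-migration check might look like (the method name and the hard-coded records are my own invention; a real check would load the records from the legacy and the new databases):

```java
import java.util.Map;

// Hedged sketch: an automated post-migration check that compares record counts
// and individual rows between a legacy and a migrated data set.
public class MigrationCheckDemo {
    static boolean migrationLooksConsistent(Map<String, String> legacy,
                                            Map<String, String> migrated) {
        if (legacy.size() != migrated.size()) return false;      // count check
        for (Map.Entry<String, String> e : legacy.entrySet()) {  // row-by-row check
            if (!e.getValue().equals(migrated.get(e.getKey()))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, String> legacy = Map.of("id1", "Alice", "id2", "Bob");
        Map<String, String> migrated = Map.of("id1", "Alice", "id2", "Bob");
        System.out.println(migrationLooksConsistent(legacy, migrated)); // prints true
    }
}
```

Checks like this are what make the post-migration phase so automatable: the comparison is mechanical once both data sets are accessible.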

Reading this tutorial on data migration testing has certainly changed my views on what testing means. The actual definition seems much broader than what I had thought in the past. Seeing testing from the perspective of migrating applications gave me insight into the capabilities of and responsibilities placed on software testers. If something in the migration does not go according to plan, it may be easy to place blame on the testers for not considering that case. I enjoyed reading about software testing from this new perspective and learning some of the most important things to consider when performing data migration testing.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Software Development Paths

https://simpleprogrammer.com/2017/07/17/software-development-career-paths/

This detailed article by John Sonmez is an excerpt of one chapter from the book "The Complete Software Developer's Career Guide." The chapter explains the different career choices available in the software development field. This is important to me because I am a junior in college, on my way to a bachelor's degree in computer science with a software concentration, so planning out my future from here on out is vital. There are three types of software developers described here: career developers, freelancers, and entreprogrammers.

Career developers are the most common type: the developer works for a company where they are paid regularly. All programmers either already are career developers or are working toward establishing themselves as one. This path is pretty standard: a developer works at a company, gets promoted within that company or switches to a new one, then eventually retires.

Freelancers are a riskier choice, as their income is tied to the amount of work they can find for themselves. A freelancer doesn't work for any one company in particular; you could say a freelancer is his own boss. However, to be successful you need to provide a quality service to anybody who hires you, which means noting their requirements very carefully.

Entreprogrammer is a term for a mixture of programmer and entrepreneur. In this path, a programmer uses his skills to develop some marketable software to sell to clients. An entreprogrammer could write their own application, create training videos or tutorials, write a book, or even run a successful blog.

After choosing a broad type of software development, there are several specializations to choose from, including web development, video games, data science, and automation. Some developers choose multiple specializations, though it is imperative to choose at least one.

Web development is the most populated specialization, where developers create various web-based applications. Within this specialization there are front-end, middleware, and back-end technologies for the developer to work on; more specifically, these pertain to the user interface, business logic, and databases of the application. Developers who are skilled in all three of these areas are called "full stack developers."

Software development in video games is a viable career option, though it sounds like a dream to most. As a result, there is a lot of competition, which makes it a challenging field to succeed in. The trade-off for the hard effort this specialization demands is the satisfaction of working in video games.

Data science is a new and lucrative path where a data scientist uses various skills and tools to analyze large amounts of data and draw conclusions. Oftentimes, software programming experience is helpful because a data scientist can write specific programs to organize the data in order to make better predictions.

From the blog CS@Worcester – CS Mikes Way by CSmikesway and used with permission of the author. All other rights reserved by the author.

B7: High Cohesion and Loose Coupling

Cohesion and Coupling

      This week, I chose to write about a blog post that discussed high cohesion and loose coupling. The post went over the definitions of both terms and used examples to help solidify the ideas. It started with high cohesion, defining it as the degree to which the elements of a module correlate or belong with each other; essentially, it compares the functionality of different parts of a software module to see how similar they are. The blog then talks about different types of cohesion, listing them from worst to best: coincidental, logical, temporal, and functional, which allowed a wider understanding of the subcategories within cohesion. The blog then defines coupling as how much one module knows about the inner workings of another. It divides coupling into loose and tight: loose coupling makes modules as independent of each other as possible, while tight coupling means you can't change one module without changing the other. The blog then goes into more detail about the Law of Demeter, a well-known specific case of loose coupling that specifies which fields and methods an object should access.
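Loose coupling can be sketched in Java (all names here are my own invention, not from the blog): OrderService knows only the Notifier interface, so either implementation can be swapped in without touching OrderService.

```java
// Hedged sketch of loose coupling via an interface. OrderService depends only on
// the Notifier abstraction, never on a concrete class.
interface Notifier {
    String notifyUser(String message);
}

class EmailNotifier implements Notifier {
    public String notifyUser(String message) { return "email: " + message; }
}

class SmsNotifier implements Notifier {
    public String notifyUser(String message) { return "sms: " + message; }
}

class OrderService {
    private final Notifier notifier; // depends on the abstraction only

    OrderService(Notifier notifier) { this.notifier = notifier; }

    String placeOrder() { return notifier.notifyUser("order placed"); }
}

public class CouplingDemo {
    public static void main(String[] args) {
        System.out.println(new OrderService(new EmailNotifier()).placeOrder());
        System.out.println(new OrderService(new SmsNotifier()).placeOrder());
    }
}
```

A tightly coupled version would construct EmailNotifier inside OrderService, so switching to SMS would force a change to OrderService itself.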

      I chose this article because it was a main topic within our class discussion of design patterns. I thought it would be a good idea to review this topic more, as it seemed important to know for the future projects given to us in class. The post was a great source of information that balanced the amount of text given with visual examples. It stuck to basic definitions with applications to code while also using visual drawings to engage the reader. The tables and code used to explain the ideas were very helpful, as they illustrated how each example differed from the others. I now understand that high cohesion measures how well the parts of a module belong together, and loose coupling allows as much freedom as possible within the dependent relationship between modules. Since these fall under the category of design patterns, I think it would be very helpful to fully understand the subcategories as well, since there is a best and worst type for a given situation. This could help out a lot in future coding practice when dealing with multiple relationships between modules in software that needs to be optimized using a design pattern. Once again, since we learned this material in class, I thought it would be important to review this topic before we have to apply it again in our upcoming classes.

From the blog CS@Worcester – Student To Scholar by kumarcomputerscience and used with permission of the author. All other rights reserved by the author.

10 Object Oriented Design Principles

http://javarevisited.blogspot.com/2012/03/10-object-oriented-design-principles.html

For my blog post this week I chose another topic on the concept map that I thought looked interesting: design principles. These are helpful guidelines to follow that will make your code cleaner and more modular. This blog post describes 10 design principles that are useful in object oriented programming.

  1. DRY (Don’t Repeat Yourself) – Means don’t write duplicate code. It is better to abstract common things in one place as this will make your code easier to maintain.
  2. Encapsulate What Changes – This will make it easier to modify your code in the future. One good way to implement this is by making variables and methods private by default and increasing access step by step.
  3. Open/Closed Principle – Classes and methods should be open for extension and closed for modification. Following this principle means that you will not have to change much existing code when new functionality is added.
  4. Single Responsibility Principle – A class should always handle a single functionality. Introducing more functionality to a class will increase coupling which makes it hard to modify a portion of code without breaking another part.
  5. Dependency Injection or Inversion Principle – High-level modules should not depend on low-level modules; both should depend on abstractions. Likewise, abstractions should not depend on details; details should depend on abstractions.
  6. Favor Composition Over Inheritance – Composition is a lot more flexible than inheritance and allows changes to the behavior of a class at run-time.
  7. Liskov Substitution Principle – Subtypes should be able to be substituted for their supertypes without any issues. To follow this principle, subclasses must enhance functionality and not reduce it.
  8. Interface Segregation Principle (ISP) – Avoid monolithic interfaces that combine multiple functionalities. This keeps a system decoupled, which makes it easier to change.
  9. Program For an Interface, Not an Implementation – Leads to flexible code that can work with any new implementation of an interface.
  10. Delegation Principle – Delegate tasks to specific classes. An example of this design principle is the equals() method in Java.
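As one illustration, principle 6 (favor composition over inheritance) can be sketched in Java; the Duck example and its names are my own invention. Because the fly behavior is composed in rather than inherited, it can be swapped at run time:

```java
// Hedged sketch: the Duck's fly behavior is a composed dependency, so it can be
// replaced at run time, which a fixed inheritance hierarchy cannot do.
interface FlyBehavior {
    String fly();
}

class Duck {
    private FlyBehavior flyBehavior; // composed, not inherited

    Duck(FlyBehavior flyBehavior) { this.flyBehavior = flyBehavior; }

    void setFlyBehavior(FlyBehavior fb) { flyBehavior = fb; } // change at run time

    String performFly() { return flyBehavior.fly(); }
}

public class CompositionDemo {
    public static void main(String[] args) {
        Duck duck = new Duck(() -> "flying with wings");
        System.out.println(duck.performFly());
        duck.setFlyBehavior(() -> "grounded"); // behavior swapped at run time
        System.out.println(duck.performFly());
    }
}
```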

I chose this blog because I wanted to learn more about design principles. We have covered some of them in class, such as the open/closed principle and composition over inheritance. However, this blog introduced me to a few new ones, like the Liskov substitution principle and the interface segregation principle. I thought this was a decent rundown of the most common design principles, but I could tell the author is not a native English speaker, which made some parts not entirely clear. This encouraged me to look up other resources, and now I feel I have a firm grasp on all of these principles. I will be applying what I've learned to all future code I write because, like the design patterns, these principles will make my code more flexible, readable, and easy to maintain.

From the blog CS@Worcester – Computer Science Blog by rydercsblog and used with permission of the author. All other rights reserved by the author.

What Is Needed to Make Automated Testing Successful? Some Basics…

Testing software can take up a lot of time. One way to reduce the time spent on testing is to automate it, meaning that a person doesn't have to run each individual test manually. Automated testing is the way of the future, and there is information everywhere on it. Bas Dijkstra of StickyMinds.com discusses some of the basic principles of automated testing in his blog post, "5 Pillars of a Successful Test Automation Implementation."

The first of the five pillars is the automation tool. Everybody has to have some sort of tool to help organize and run automated tests. Testing teams often overlook the tool and just go with whatever is available on hand, and by doing so they may be making their own lives more challenging. Take the time to make sure you have a tool that fits your needs. I agree that this is an important first step: if you pick, or are forced to use, a tool that is poorly designed or doesn't meet your needs, you are putting yourself behind the eight ball from the start.

The second and third pillars discuss test data and test environments. Depending on how broad the scope of the tests being run is, data can become a pain to maintain. You want to make sure you have this situation under control, or you are asking for trouble; it is easy to imagine how disorganized this could get in large-scale testing. Going along with test data is the test environment. Have an environment that is as realistic as possible. Make sure it has everything you need to complete your testing and, if possible, make it easy to replicate. This allows you to run multiple tests in independent environments and/or continue development in one of them. Nothing is more frustrating than not having an environment to do your work on, whether because another team member is using it or it is down for maintenance, and one that is easy to duplicate can help eliminate this problem.

Next are reporting and craftsmanship. Reporting is vital, as it allows you and others to analyze test results. Dijkstra suggests that a good report should show what went wrong, where it went wrong, and the error message that went with it. This relates directly to craftsmanship, as testing can be challenging if the right skills aren't available; there should be someone who has experience creating reports, for example. Make sure the right developers, engineers, etc. are on hand to answer questions and help when needed.

My experience with automated testing is limited, which is why I have started investigating it. From my experience with manual testing, I can say that what Dijkstra discusses certainly applies, so I see no reason why it wouldn't apply to automated testing too. I hope to continue reading about automated testing, as I feel it is an important and necessary tool/skill to have.

Link:

https://www.stickyminds.com/article/5-pillars-successful-test-automation-implementation


From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.

S.O.L.I.D. Design Principles – What Are They and What Purpose Do They Serve?

This week I went back to Professor Wurst's concept map looking for some fresh material to research. This week's topic of choice is the S.O.L.I.D. design principles, a set of principles discussed and promoted by Uncle Bob (Robert C. Martin). The goal of having design principles is to make software easier to work with, maintain, and expand upon. Samuel Oloruntoba does a great job of giving a general overview of these principles in his blog.

First things first: what does S.O.L.I.D. stand for? It stands for the Single-responsibility principle, Open-closed principle, Liskov substitution principle, Interface segregation principle, and Dependency inversion principle.

Single-Responsibility Principle: A class should only have one responsibility. In other words, a class should perform only one job. This can be applied to help make your program more modular. If you have a class that performs many tasks, it can become challenging to make changes to it.

Open-Closed Principle: It should be easy to extend a class without having to make changes within the class being extended. In other words, be prepared for the future; don't assume that the class or program will never need to do additional things or serve a different purpose than it does today.
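A minimal Java sketch of the open-closed idea (the shape classes here are my own invention): AreaCalculator never needs to change; supporting a new shape only means adding a new class that implements the interface.

```java
import java.util.List;

// Hedged sketch: AreaCalculator is closed for modification but open for
// extension; adding, say, a Triangle requires a new class, not edits here.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class AreaCalculator {
    static double total(List<Shape> shapes) {
        return shapes.stream().mapToDouble(Shape::area).sum(); // never needs editing
    }
}

public class OcpDemo {
    public static void main(String[] args) {
        System.out.println(AreaCalculator.total(List.of(new Square(2), new Square(3)))); // prints 13.0
    }
}
```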

Liskov Substitution Principle: Every subclass should be able to act as a substitute for its parent class. This once again promotes the ability to extend a program if need be.

Interface Segregation Principle: Don't force people to implement methods or interfaces that they don't need. In my opinion, violating this creates unneeded work for the user and can cause confusion, because it may not be clear what they actually need to do.

Dependency Inversion Principle: Items should depend on abstractions rather than concretions; don't pigeonhole yourself. This is probably best explained with an example. Say you have a class LandscapeWorker with several methods, including one that assigns a piece of equipment to the worker. One could simply hard-code the equipment in the LandscapeWorker class, but then switching equipment would require changing a class that should not have to change. Instead, have an interface called Equipment with a separate class for each piece of equipment; that way the appropriate class can simply be swapped in.
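That example can be sketched as follows (the Equipment interface and LandscapeWorker class come from the paragraph above; the concrete equipment classes and method bodies are my own invention):

```java
// Hedged sketch of the landscape-worker example: LandscapeWorker depends on the
// Equipment abstraction, so equipment can be swapped without editing the worker.
interface Equipment {
    String use();
}

class Mower implements Equipment {
    public String use() { return "mowing"; }
}

class LeafBlower implements Equipment {
    public String use() { return "blowing leaves"; }
}

class LandscapeWorker {
    private Equipment equipment; // depends on the abstraction, not a concrete tool

    LandscapeWorker(Equipment equipment) { this.equipment = equipment; }

    void switchEquipment(Equipment e) { equipment = e; } // no class change needed

    String work() { return equipment.use(); }
}

public class DipDemo {
    public static void main(String[] args) {
        LandscapeWorker w = new LandscapeWorker(new Mower());
        System.out.println(w.work());
        w.switchEquipment(new LeafBlower()); // swap without editing LandscapeWorker
        System.out.println(w.work());
    }
}
```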

I feel design principles are important because they help paint a picture of the logic and power behind the programming languages they are designed to be used in. They cause you to think of new and better ways to design code that you may not have considered beforehand, which is why I chose to discuss some of them this week. I feel that the S.O.L.I.D. design principles are a good place to start for anyone, including those who are just starting out. I understand that I have barely scratched the surface of design principles in general and the purpose behind the ones discussed here. In the coming weeks, I hope to dive deeper into design principles and perhaps go in depth into some of the ones discussed here. Stay tuned.

Link:

https://scotch.io/bar-talk/s-o-l-i-d-the-first-five-principles-of-object-oriented-design


From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.

The Clean Coder 13 & 14 Week 7

The first part of this reading discusses teams. Teams are a very important part of completing a project. "The gelled team" is a very efficient kind of team; it usually consists of about a dozen people, some programmers, some testers, and a project manager. Such a team needs to learn each other's habits and learn to work with them; this can take a long time but is necessary. Once gelled, the team can take on multiple projects and is much more efficient.

The second part of the reading focuses on learning to program outside of school. A lot of programmers haven't gone to school at all; they teach themselves. The ones who do go to school are not ready to start coding right away, as it takes years of learning how an office works and how to perform efficiently.

From the blog CS@Worcester – Software Testing by kyleottblog and used with permission of the author. All other rights reserved by the author.