Author Archives: computingfinn

Santa Uses Java?


Since Christmas just passed, I was wondering how Santa could handle so much data so quickly. This blog post suggests that Java and Speedment are how he accesses his database of children and presents. Speedment is a software tool for enterprises that run Java applications against relational databases. It works by creating a virtual data object, retrieving only the data the application needs when it needs it. This reduces development time, speeds up data access, and builds a better infrastructure. How would Santa use Speedment, though? He would integrate Speedment into his Java code and query his database of children to gather the information he needed.

An Example:

// "children" is the Speedment manager for the Child table
var niceChildren = children.stream()
        .filter(Child.NICE.isTrue())
        .sorted(Child.COUNTRY.comparator())
        .collect(Collectors.toList());


This stream will yield a long list containing only the kids who have been nice. To let Santa optimize his delivery route, the list is sorted by country of residence. What is exciting about a Java application like this is that you can write code that easily queries a database. I wrote a SQLite program earlier in the year, and the way that code was built felt clunky; with Speedment the same process is smoother. When looking into database access in software development, Speedment seems like a valuable tool to adopt. Their website has a code generator for different databases and languages to make sure you can quickly integrate it into any program. I believe this is truly how Santa is able to access his list so quickly to make sure he can check it twice.

From the blog CS@Worcester – Computing Finn by computingfinn and used with permission of the author. All other rights reserved by the author.

Lights Camera Action – Model View Controller


A lot of the focus at the end of the year is on the Model View Controller pattern. Our projects have been based on it as well, so it is something we put a lot of focus on. While studying for exams, I like to find analogies that help me remember things, and the Model View Controller reminds me of a movie production: all the little pieces fitting into their spots so that the whole thing works flawlessly.

MVC, Model View Controller, is like the crew hierarchy of a coding project. The View is like the camera used in a film: it doesn't know about the controller except through the methods it provides for presenting information to the user. The View displays everything the user needs to see, basically the front end of the code. The Controller is like the director: when the cameras are rolling, the director commands what happens next. Similarly, the controller responds when the user interacts with the view, and it contains the logic needed to control the operation and make everything run smoothly. Finally, the Model is like the set and actors of a movie: it holds all the data the production, or program, needs, along with the constructors and the methods to be called upon. The article I read explains the structure in simple terms: “The idea is to separate the user interface (the Presentation in the previous example) into a View (creates the display, calling the Model as necessary to get information), and Controller (responds to user requests, interacting with both the View and Controller as necessary).”
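To make the movie analogy concrete, here is a minimal sketch of the three roles in TypeScript. The class and method names are my own invention, not from the article; it is only one way the separation can look.

```typescript
// Model: the "set and actors", holds the data and the methods to access it.
class MovieModel {
  private scenes: string[] = [];
  addScene(scene: string): void { this.scenes.push(scene); }
  getScenes(): string[] { return [...this.scenes]; }
}

// View: the "camera", only knows how to display what it is handed.
class MovieView {
  render(scenes: string[]): string {
    return scenes.map((s, i) => `Scene ${i + 1}: ${s}`).join("\n");
  }
}

// Controller: the "director", responds to requests and coordinates Model and View.
class MovieController {
  constructor(private model: MovieModel, private view: MovieView) {}
  shoot(scene: string): void { this.model.addScene(scene); }
  premiere(): string { return this.view.render(this.model.getScenes()); }
}

const controller = new MovieController(new MovieModel(), new MovieView());
controller.shoot("Opening credits");
controller.shoot("Car chase");
console.log(controller.premiere());
```

Notice the view never touches the model directly: the controller fetches the data and passes it along, which is exactly the separation the quote describes.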

From the blog CS@Worcester – Computing Finn by computingfinn and used with permission of the author. All other rights reserved by the author.

It’s Alive! Mutation Testing

Mutation testing is something we learned on the last day of classes. In a short and simple definition: mutations are seeded into your code, and then your tests are run against that mutated code. If the mutated code fails the tests, the mutation is killed; if it passes, the mutation survived. From there you can gauge the quality of your test suite and how well it responds to mutations: the higher the percentage of mutations killed, the better. However, what are mutations exactly? Mutations are automatically modified versions of your code, changed with the intention of making your code fail; each changed version is called a mutant. The reason you would use mutation-based testing is to make sure your tests are capable of finding faults. Traditional test coverage measures only which code is executed by your tests. Mutation testing is particularly useful for catching tests with no assertions and partially tested code, meaning code where only certain branches are tested and not all of them. A leading and commonly used mutation testing tool is PIT. What puts PIT ahead of the competition is a fast, easy-to-use system that is actively developed and supported, and PIT's UI is simple, producing an easy-to-read report of mutations.
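To see the idea without any particular tool (this sketch is my own, not PIT output), consider a mutant that flips a conditional boundary:

```typescript
// Original rule: a child is "nice" when good deeds strictly outnumber bad ones.
function isNice(goodDeeds: number, badDeeds: number): boolean {
  return goodDeeds > badDeeds;
}

// A mutation tool might flip > to >= (a "conditional boundary" mutant).
function isNiceMutant(goodDeeds: number, badDeeds: number): boolean {
  return goodDeeds >= badDeeds;
}

// A weak test such as isNice(5, 1) === true passes for BOTH versions, so the
// mutant survives it. The two versions only disagree on the boundary case:
// isNice(3, 3) is false while isNiceMutant(3, 3) is true, so a test asserting
// isNice(3, 3) === false kills the mutant.
console.log(isNice(3, 3), isNiceMutant(3, 3)); // prints "false true"
```

A suite that kills this mutant proves it actually checks the boundary, which is exactly the kind of weakness plain line coverage cannot reveal.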


Mutation testing has a fun gimmick of a name, and it is also a useful tool for testing the validity of your tests, expanding beyond expected results to unexpected ones.

From the blog CS@Worcester – Computing Finn by computingfinn and used with permission of the author. All other rights reserved by the author.

Be Lean Don’t Unit Test

Eugen Kiss writes in his blog about some of the disadvantages of unit testing and offers an alternative: lean testing. His focus is the “return on investment” of lean testing over unit testing, that is, what you get back for the work that testing costs. The argument is that you pour resources into creating a unit test for one specific outcome, while lean testing creates end-to-end tests that cover the critical paths a user would actually take.

He continues by saying that the problem with unit testing lies in coverage. He argues that even with 100 percent coverage, your tests may effectively verify nothing: since you write tests to match what you expect, the result can differ from the path a user actually follows. Another issue he brings up is the fragility of the tests: “Imagine you have three components, A, B and C. You have written an extensive unit test suite to test them. Later on you decide to refactor the architecture so that functionality of B will be split among A and C. You now have two new components with different interfaces. All the unit tests are suddenly rendered useless. Some test code may be reused but all in all the entire test suite has to be rewritten.” This creates extra turnaround: since you are rewriting the tests, you are investing still more time into unit testing, raising the cost to “pay back.” A final point on cost efficiency concerns designing your code: you are coding with the tests in mind, potentially writing code you don't need just to satisfy the tests.
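A small sketch of the refactoring problem he quotes; the component letters match his example, but the interfaces and names are my own invention:

```typescript
// Before the refactor: component B and a unit test written against its interface.
class B {
  formatName(first: string, last: string): string {
    return `${last}, ${first}`;
  }
}
const testB = () => new B().formatName("Ada", "Lovelace") === "Lovelace, Ada";

// After the refactor, B's job moves into a new component with a different
// interface. Once B is deleted, testB no longer even compiles; the behavior
// is identical, yet the test has to be rewritten against the new shape:
class C {
  format(parts: { first: string; last: string }): string {
    return `${parts.last}, ${parts.first}`;
  }
}
const testC = () =>
  new C().format({ first: "Ada", last: "Lovelace" }) === "Lovelace, Ada";

console.log(testB(), testC()); // prints "true true"
```

An end-to-end test that only checked the final formatted output would have survived this refactor untouched, which is the trade-off Kiss is pointing at.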

The blog offers insight that unit testing isn't always the best use of time and suggests that time can be better spent focusing on the paths users actually take through the interface.

From the blog CS@Worcester – Computing Finn by computingfinn and used with permission of the author. All other rights reserved by the author.

Typescript vs JavaScript

Given that we have been working with TypeScript, I wanted to find a blog post that compares and contrasts these languages to get a better grasp of their benefits. First we should define what the languages are. TypeScript is an open-source “syntactic superset” of JavaScript that compiles to JavaScript; it offers the developer static type checking on top of the JavaScript it compiles. JavaScript is almost the opposite: it uses typeless variables that can be assigned any data. That is a basic understanding of the two languages (note that despite the names, neither is based on Java).

From here, let's discuss the benefits of using TypeScript. As stated above, TypeScript allows the assignment of static types. This prevents complications like data being changed when it shouldn't be, or a faulty variable assignment in the first place.

An Example:

TypeScript, when a is declared as a number:

let a: number = 1;
a = "1";   // compile-time error

JavaScript, when a is typeless:

let a = 1;
a = "1";   // allowed

JavaScript lets you reassign a from the number 1 to the string "1", but TypeScript rejects the same assignment at compile time. Additionally, TypeScript adds other features on top of JavaScript, such as interfaces, generics, namespaces, null checking, and access modifiers.
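As a quick sketch of a few of those extra features together, interfaces, generics, and access modifiers, here is an example of my own (not from the blog):

```typescript
// Interface: describes the shape a value must have.
interface Present {
  name: string;
  price: number;
}

// Generic function: works for any element type T while staying type-checked.
function firstItem<T>(items: T[]): T | undefined {
  return items[0];
}

// Access modifier: `price` is private, so it cannot be touched from outside.
class Sled {
  constructor(private price: number) {}
  getPrice(): number { return this.price; }
}

const presents: Present[] = [{ name: "train", price: 20 }];
const first = firstItem(presents);   // inferred as Present | undefined
const sled = new Sled(100);
console.log(first?.name, sled.getPrice()); // prints "train 100"
// sled.price = 0;  // compile-time error: 'price' is private
```

All of these checks disappear at compile time: the emitted JavaScript is plain, typeless code, which is what "syntactic superset" means in practice.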

Interestingly, the blog notes that type checking is not the only reason to use TypeScript, or at least not the exclusive one. When deciding whether to use TypeScript, the blog offers this advice:


  • Prefer Compile Time Type Checking: It is entirely possible to perform runtime type verification using vanilla JavaScript. However, this introduces additional runtime overhead that could be avoided by performing compile-time validation
  • Working with a New Library or Framework: Let’s suppose you’re taking up React for a new project. You are not familiar with React’s APIs, but since they offer type definitions, you can get intellisense that will help you navigate and discover the new interfaces.
  • Large Projects or Multiple Developers: TypeScript makes the most sense when working on large projects or you have several developers working together. Using TypeScript’s interfaces and access modifiers can be invaluable in communicating APIs (which members of a class are available for consumption).


On the other side, the blog lists reasons you might hold off on TypeScript:
  • Build Tools Required: TypeScript necessitates a build step to produce the final JavaScript to be executed. However, it is becoming increasingly rare to develop JavaScript applications without build tools of any kind.
  • Small Projects: TypeScript may be overkill for small teams or projects with a small code surface area.
  • Strong Testing Workflow: If you have a strong JavaScript team who is already implementing test-driven development, switching to TypeScript may not give you enough to make it worth the associated costs.
  • Added Dependencies: In order to use libraries with TS, you will need their type definitions. Every type definition means an extra npm package. By depending on these extra packages you are accepting the risk that these may go un-maintained or may be incorrect. If you choose not to import the type definitions, you are going to lose much of the TS benefit. Note that the DefinitelyTyped project exists to mitigate these risks. The more popular a library is, the more likely the type definitions are to be maintained for the foreseeable future.
  • Framework Unsupported: If your framework of choice does not support TS, such as EmberJS (although this is planned and is the language of choice for Glimmer), then you may be unable to take advantage of its features.

The author, Jared Nance, goes on to say that TypeScript isn't simply the best tool out there, but presents the information as evenhandedly as possible. When considering whether to use TypeScript or JavaScript, the points above are good guidelines for making that choice.

From the blog CS@Worcester – Computing Finn by computingfinn and used with permission of the author. All other rights reserved by the author.

What You Need To Node

For me, working with Node.js is a new and exciting journey we have ventured down. Even from simple web apps you can see the influence Node.js has on JavaScript code. According to Dale Knauss in his blog about NodeSummit 2016, that opinion is backed by fact. Walmart, one of the largest retailers in the world, primarily uses Node for its traffic: 98% of that traffic is directed through Node APIs. Its sister company Sam's Club routes 100% of its traffic through Node. Such heavy use by a massive company surely speaks to the viability of Node. The reason Walmart relies on Node is so that its development team can all develop for the entire stack, allowing faster updates and web retrieval. Dale states the major change: “Where before they needed dedicated front end, back end, mobile, and devops developers, they now are able to have each member of their team work on any of those positions.” Another exciting prospect is that NASA is beginning to use Node in non-mission-critical systems. The government agency that sends people to the moon is using a platform that we are just beginning to learn. Major companies are adopting Node or fully switching over, and for good reasons.

Those reasons, according to Dale, are that Node empowers developers, is perfect for microservices, is lightweight and scalable, is constantly improving, and performs well. Developers are empowered because the front end is handled by JavaScript, with reusable tools all developers can share. Some of the best tooling for microservices in the industry is developed for and implemented in Node, and microservices paired with rapid development and an open-source library ecosystem are what really streamline development with Node. Node is also consistently updated and maintained, so the platform stays current. To attest to the performance, the example Dale gives really puts it into perspective: “PayPal, moving their mobile servers from Rails to Node resulted in going from 30 servers to just 3 and performance that was up to 20x faster.” After reading this article and seeing how relevant and powerful Node is, I am excited to dive in and explore its uses for myself.
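To get a feel for how lightweight Node is, here is a minimal sketch of an API endpoint using only Node's built-in http module; the handler, response shape, and port are my own invention:

```typescript
import { createServer } from "node:http";

// The request-handling logic, kept as a plain function so it is easy to test.
function handle(url: string | undefined): { path: string; message: string } {
  return { path: url ?? "/", message: "Hello from Node" };
}

// A tiny JSON API server built on nothing but the standard library.
const server = createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify(handle(req.url)));
});

// server.listen(3000);  // uncomment to serve on http://localhost:3000
```

A handful of lines gives you a working HTTP service, which helps explain why teams can cover the "entire stack" with one language.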

From the blog CS@Worcester – Computing Finn by computingfinn and used with permission of the author. All other rights reserved by the author.

Angular Testing

Recently, as we have been working more with Angular, I was interested in what kind of testing is available for Angular code. The blog post on the Angular website does a good job of explaining how to get these tests done. To get started you mimic the application environment and set up the object you are testing; in the case of this blog post they use a tree. Then it is time to actually write the tests, and that begins with an “it” statement.


it('fails with missing tree', () => {
      expect(() => testRunner.runSchematic(
            // an empty Tree here stands in for the missing/invalid tree
            'simple-schematic', { name: 'test' }, Tree.empty()
      )).toThrow();
});



This code example asserts that an error was thrown because of an invalid tree. You can also assert on invalid parameters with the additional code shown in the blog post. The heart of each test is the expect() function; inside the parentheses goes the value or call you are checking. In this example we also verify that all the expected files were created, using expect(tree.files).toEqual([...]) to make sure every created file was added to the array.

You can follow the blog for a direct example of testing in Angular. It is a bit more complex than Java because you have to do a lot of setup instead of just creating a JUnit test. Overall I can see the benefit of testing parts of your web app without loading the app every time, but without a bit of practice I'll be hitting the refresh button a lot.


-Computing Finn

From the blog CS@Worcester – Computing Finn by computingfinn and used with permission of the author. All other rights reserved by the author.


What is a REST API?
What is a REST API? Well, first let's discuss what an API is. An Application Programming Interface, API, enables two applications to communicate with each other. The example given in the blog post: when you visit a webpage, your browser sends a request to the server where the site is located, and the server's API is what receives the request and interprets it. What is useful about APIs is that large companies can expose their code through an API for developers to use and build into their own programs; an example would be using the Google Maps API to build a navigation application. Essentially, APIs are a public extension of code that developers can use.

Then what is a REST API? REST stands for Representational State Transfer, which means building an API that follows a specific set of rules for developers to follow. According to the blog post, those five rules are:

  1. Client-server architecture. The API should be built so that the client and the server remain separate from one another. That way they can continue to develop on their own, and can be used independently.
  2. Statelessness. REST APIs must follow a ‘stateless’ protocol. In other words, they can’t store any information about the client on the server. The client’s request should include all the necessary data upfront, and the response should provide everything the client needs. This makes each interaction a ‘one and done’ deal, and reduces both memory requirements and the potential for errors.
  3. Cacheability. A ‘cache’ is the temporary storage of specific data, so it can be retrieved and sent faster. RESTful APIs make use of cacheable data whenever possible, to improve speed and efficiency. In addition, the API needs to let the client know if each piece of data can and should be cached.
  4. Layered system. Well-designed REST APIs are built using layers, each one with its own designated functionality. These layers interact, but remain separate. This makes the API easier to modify and update over time, and also improves its security.
  5. Uniform interface. All parts of a REST API need to function via the same interface, and communicate using the same languages. This interface should be designed specifically for the API and able to evolve on its own. It should not be dependent on the server or client to function.

When using a REST API you will use predefined methods like GET, POST, PUT, or DELETE, followed by a resource path to call them on, plus a request body if the method requires one. With these you can easily fill a database or call on other features of an API. The blog post uses the WordPress API, which I will look into and see if I can get working.
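As a sketch of what those calls look like, here are GET, POST, and DELETE requests described as plain objects you could hand to fetch. The example.com URL follows the WordPress REST API path convention the post mentions, but it is only a placeholder I made up:

```typescript
// Each REST call is an HTTP method, a resource path, and sometimes a body.
const baseUrl = "https://example.com/wp-json/wp/v2/posts";

// GET: read the collection of posts (GET requests carry no body).
const getPosts = { method: "GET", url: baseUrl };

// POST: create a resource; the body carries the data the feature requires.
const createPost = {
  method: "POST",
  url: baseUrl,
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ title: "Hello", content: "My first post" }),
};

// DELETE: remove one resource, addressed by its path.
const deletePost = { method: "DELETE", url: `${baseUrl}/42` };

// To actually send one with the standard fetch API:
// const response = await fetch(createPost.url, createPost);
```

The uniform shape of these calls is the "uniform interface" rule in action: every resource is manipulated the same way, only the path and method change.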

From the blog CS@Worcester – Computing Finn by computingfinn and used with permission of the author. All other rights reserved by the author.


The Decorator Design Pattern
Well, Halloween has come and gone, and you know what holiday is next: Thanksgiving, that's right, and you need to decorate quickly before it is time to switch to Christmas decorations! If you had to do something similar in the coding world, you would use the Decorator design pattern. The decorator pattern is a design blueprint that seeks to eliminate complex hierarchies and lets the programmer define an object at run time. Its upside is the ease of adding new components. You essentially define the object as an interface and then create extensions of the object through decorator classes that implement that interface. This lets you declare the methods or information you want the object to have, like price or description, and define each additional price or description in a decorator object. To visualize this pattern, think of a rubber band ball: it has one bouncy ball at the core, which is the interface, and each rubber band added to the ball wraps around that core object. As you add more rubber bands, the object is still a ball, but it is “decorated” in rubber bands. Each rubber band is an additional quality of the ball, an extension of it.

Now consider the dining table in your house. If you wanted to decorate the table for the next two holidays, you could do it easily with the decorator pattern. Your base object, the interface, would be the table. From there, how you decorate it is up to you: if you create a Thanksgiving tablecloth decorator, you can apply it when you create the table. Then, when you are ready for Christmas on November 23 at 00:01, you can create a Christmas tablecloth decorator object and apply it at run time to add it to the table. You can keep creating other decorations to add, and you never need to delete a class for the code to work; you simply don't apply it at run time.
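Here is how the tablecloth example might look as code, a minimal sketch assuming the table exposes a price and a description; the class names and prices are mine:

```typescript
// The component interface: everything on (or under) the cloth is still a Table.
interface Table {
  description(): string;
  price(): number;
}

// The concrete component: the plain dining table.
class DiningTable implements Table {
  description(): string { return "dining table"; }
  price(): number { return 200; }
}

// A decorator wraps another Table and adds to it instead of subclassing it.
class ThanksgivingCloth implements Table {
  constructor(private wrapped: Table) {}
  description(): string { return this.wrapped.description() + " with Thanksgiving cloth"; }
  price(): number { return this.wrapped.price() + 15; }
}

class ChristmasCloth implements Table {
  constructor(private wrapped: Table) {}
  description(): string { return this.wrapped.description() + " with Christmas cloth"; }
  price(): number { return this.wrapped.price() + 20; }
}

// Decorations are chosen at run time, not fixed in an inheritance tree:
let table: Table = new DiningTable();
table = new ThanksgivingCloth(table);   // November
table = new ChristmasCloth(table);      // swap in more decoration later
console.log(table.description(), table.price());
// prints "dining table with Thanksgiving cloth with Christmas cloth 235"
```

Because each decorator implements the same interface it wraps, callers never need to know how many layers of "rubber bands" are on the ball.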

Overall, the decorator pattern helps avoid an inheritance tree that relies heavily on parent classes. It lets you define behaviors that may or may not be used and apply them at run time to decorate your object. For additional information, check out the link below for further research I have done on the decorator pattern.

From the blog CS@Worcester – Computing Finn by computingfinn and used with permission of the author. All other rights reserved by the author.

The Automatic Test Pyramid

Ham Vocke in his blog about The Practical Test Pyramid says “Traditionally software testing was overly manual work done by deploying your application to a test environment and then performing some black-box style testing e.g. by clicking through your user interface to see if anything’s broken. Often these tests would be specified by test scripts to ensure the testers would do consistent checking.



It’s obvious that testing all changes manually is time-consuming, repetitive and tedious. Repetitive is boring, boring leads to mistakes and makes you look for a different job by the end of the week.”



From this thought you ask yourself: how can you make something boring not boring? Well, as a great computer scientist, you decide to “pawn” that work off on the computer. The classic way to organize this automation was laid out by Mike Cohn in his book Succeeding with Agile.

Mike Cohn’s original test pyramid consists of three layers that your test suite should contain (bottom to top):

  1. Unit Tests
  2. Service Tests
  3. User Interface Tests

Although this pyramid can be overly simplistic, it still serves as a good rule of thumb when establishing your own tests. The layer names may not resonate with everyone, particularly “Service Tests,” but given the shortcomings of the original names, it's totally okay to come up with other names for your test layers, as long as you keep them consistent within your codebase and your team's discussions.

Ham Vocke offers an extremely detailed, long blog entry that goes into much more depth. I plan to introduce more ideas from it in a continuation, but the main ideas to grasp from this first post are automated testing and the metaphor of the pyramid. If you want to read ahead of my posts, check out his entry at the link below.

-Computing Finn

Andrew Finneran

From the blog CS@Worcester – Computing Finn by computingfinn and used with permission of the author. All other rights reserved by the author.