Category Archives: Week 9

Firmly GRASP it (cohesion and coupling)

This week, for professional development, I chose to read and learn about two of the GRASP principles of software design: high cohesion and low coupling.  One of the best resources I found was this blog post: https://thebojan.ninja/2015/04/08/high-cohesion-loose-coupling/

The post starts by mentioning the need for code to be maintainable and changeable, especially in business, as a backdrop for how useful these principles are.  The author then explains cohesion and goes through what seemed like an exhaustive list of the common types of cohesion.  After giving a few visual examples (class diagrams and code), the same is done for coupling, along with even more code and diagrams.

Now, other resources cover the topic in basically the same way, but I felt this one was the best at explaining both how and why we try to achieve loose coupling and high cohesion.  Along with the diagrams and code, the examples provided to explain the concepts were easy to understand as well.  A good example is the explanation provided for loose and tight coupling:

iPods are a good example of tight coupling: once the battery dies you might as well buy a new iPod because the battery is soldered fixed and won’t come loose, thus making replacing very expensive. A loosely coupled player would allow effortlessly changing the battery.  The same, 1:1, goes for software development.

The example’s simple language and nature make the subject a bit easier and quicker to learn for me.  It’s important, too, that I understand this topic well, because nearly everything I write will need to be refactored.  If not by me, then by someone else, and having to change someone else’s code is already difficult enough.  So, it’s in my interest, for both simplicity and professionalism, to make sure I value loose coupling and high cohesion in my work.
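The iPod analogy maps naturally onto code. Here is a minimal sketch of my own (the Player and Battery names are mine, not from the post): the tightly coupled player constructs its own battery internally, while the loosely coupled one accepts any Battery from outside, so the battery can be “replaced” without touching the player.

```typescript
// Tight coupling: the battery is "soldered in" -- replacing it
// means opening up and modifying the player itself.
class SolderedPlayer {
  private battery = { charge: 100 };
  play(): string {
    return this.battery.charge > 0 ? "playing" : "dead";
  }
}

// Loose coupling: the player depends only on a Battery interface,
// so any conforming battery can be swapped in from the outside.
interface Battery {
  charge: number;
}

class ModularPlayer {
  constructor(private battery: Battery) {}
  play(): string {
    return this.battery.charge > 0 ? "playing" : "dead";
  }
  replaceBattery(fresh: Battery): void {
    this.battery = fresh;
  }
}

const player = new ModularPlayer({ charge: 0 }); // battery has died
player.replaceBattery({ charge: 100 });          // effortless swap
console.log(player.play()); // "playing"
```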

 

Now I can see more clearly how my past object-oriented class assignments have all stressed keeping classes independent and ignorant of each other. Loose coupling and high cohesion are instrumental to keeping code manageable, and now I’ve got a much better grasp of these concepts.

Beyond the general concepts, I appreciated the overview of the different types of cohesion.  They describe different ways of grouping responsibilities within a module, and knowing the most common ones will improve both the speed and quality of my own designs.
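To make the idea concrete, here is a small sketch of my own (not from the post): the first class exhibits coincidental cohesion, lumping unrelated tasks together, while the second is functionally cohesive, with every member serving a single purpose.

```typescript
// Coincidental cohesion: unrelated responsibilities grouped by accident.
class Utils {
  formatDate(d: Date): string { return d.toISOString(); }
  milesToKm(miles: number): number { return miles * 1.60934; }
  sendEmail(to: string): void { /* unrelated to the other two methods */ }
}

// Functional cohesion: every member serves one well-defined purpose.
class DistanceConverter {
  milesToKm(miles: number): number { return miles * 1.60934; }
  kmToMiles(km: number): number { return km / 1.60934; }
}

const conv = new DistanceConverter();
console.log(conv.milesToKm(10)); // roughly 16.09 km
```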

I know I’ll use the information from this post enough that it should become second nature to me, hopefully subconscious.

From the blog CS@Worcester – W.I.P. (Something catchy) by aguillardcsblog and used with permission of the author. All other rights reserved by the author.

Testing Really Does Matter

This week I read an article called “Why Testing Matters” (https://blog.smartbear.com/software-testing/why-testing-matters/).  It’s easy to take testing for granted and start to think it is less important when everything is going right.  When testing is not taken seriously, though, serious problems can arise.  The article touches on a few recent issues to further cement its point that testing plays a critical role.

The first issue is the iOS 11.1 update, which caused the letter “I” not to type and a crackling sound during calls.  This can be detrimental to a company, especially one like Apple that has strong competition from Android, since people are quick to switch phone types after a few issues.  Android is not without its own testing issues, though: the OnePlus 5 disconnected users who tried calling 911.  It’s important to remember that faults and errors are not only a slight annoyance to the end user, but can also cause dangerous situations for your users.

The issue that stood out to me in the article is a software bug that revealed the names of over 1,000 Facebook moderators to hate groups that were being watched for posting inappropriate content, including potential terrorist organizations.  This puts Facebook’s own employees and their families at risk.  Beyond that, it is also a reminder of how easily our personal information can be exposed just by ignoring or overlooking a few simple tests.  A few additional hours of testing and work can save your end users many months or even years of headaches and problems dealing with identity theft.

There were also issues found with the global positioning system ground software.  Due to software issues there was a timing error of 13 microseconds.  While that doesn’t seem like a lot, it works out to just under four kilometers off course.  This is a very serious error in positioning and navigation and can cause problems for millions of users.
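The GPS figure is easy to sanity-check: receivers convert signal travel time into distance, so multiplying a 13-microsecond timing error by the speed of light gives roughly the four-kilometer error the article describes.

```typescript
const SPEED_OF_LIGHT_M_PER_S = 299_792_458; // meters per second
const timingErrorSeconds = 13e-6;           // the 13-microsecond error

// A GPS timing error translates directly into a positioning error.
const positionErrorKm = (timingErrorSeconds * SPEED_OF_LIGHT_M_PER_S) / 1000;
console.log(positionErrorKm.toFixed(2) + " km"); // just under 4 km (~3.90)
```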
Since software testing was not seen as a priority, a new fleet of passenger train cars in Oakland, intended to deal with overcrowded trains, was delayed by a delay in testing.  While the exteriors of the cars are being built on schedule, the delay in testing is bottlenecking the process.  I like this blog because it shows a broad spectrum of issues across different industries.  That’s good because it helps promote the idea that software testing is needed everywhere, not just at companies that produce commercial software.

From the blog CS@Worcester – Tim's Blog by nbhc24 and used with permission of the author. All other rights reserved by the author.

Post #11

This week, I thought it would be useful to look into how testing is conducted in TypeScript, to follow up on Post #8.  I found a blog post by Sudarsan Balaji, entitled “Unit testing node applications with TypeScript — using mocha and chai”, that describes the process of using the assertion library chai in conjunction with the testing framework mocha to conduct unit testing on node applications written in TypeScript.  I believe that knowing how to conduct proper unit testing in TypeScript will give me an advantage in the upcoming assignment as well as the final project of the semester.

Balaji begins the article by explaining his reasons for advocating the use of mocha and chai specifically, and also how to install them.  As I explained in the introduction to this post, mocha is a notable JavaScript testing framework and chai is an assertion library (a collection of tools to assert that things are true/correct).  Balaji believes these two tools work well together because they are simple yet effective enough to get the job done.  I’m not going to explain the installation here, but once you have mocha and chai installed, you simply create a new TypeScript file and add some import statements.  Balaji then provides an example of a TypeScript test (I added some comments for clarity):

/** hello-world.spec.ts */
import { hello } from './hello-world';
import { expect } from 'chai';
import 'mocha';

describe('Hello function', () => {          /** test group name */
  it('should return hello world', () => {   /** specific test */
    const result = hello();                 /** run the hello() function from the hello-world program */
    expect(result).to.equal('Hello world!'); /** compare result to expectation */
  });
  /** to test something else we would just add more it() statements here */
});

Sample expected output:
Hello function
√ should return hello world

1 passing (8ms)

To run this unit test, Balaji recommends creating an npm script that calls mocha and passes in the path as a parameter.  For this example, this can be done by adding the following to the project’s package.json file:

/** package.json */
{
  "scripts": {
    "test": "mocha -r ts-node/register src/**/*.spec.ts"
  }
}

(For those unaware, a JSON file is a JavaScript Object Notation file.  JSON files are written in JavaScript object notation and are used for storing and exchanging data.  In this example, we are using one to store a script that we will run from the console using npm, i.e. npm run test.)

The remainder of Balaji’s post is a discussion of using mocha and chai to conduct unit testing on client-side applications.  The final project may require me to delve into this aspect of TypeScript testing, but because I have yet to begin work on it, I think I will refer back to that section of his post and summarize it later if I feel the need to.  Given the nature of the projects we are working on right now, I think it is sufficient to stop this post here.  I now have an introductory understanding of how to conduct unit testing in TypeScript, which I can use to assure that I am producing quality JavaScript applications.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

11/13/2017–Assignment 9 CS 343

http://javarevisited.blogspot.com/2016/08/adapter-design-pattern-in-java-example.html
This week we turn to the adapter design pattern, one of the most useful of the GoF patterns. This article is a good summation of its intent and purpose in software engineering. As stated in the blog post, the adapter design pattern, also known as the Wrapper pattern, helps to bridge the gap between two classes. Like the decorator design pattern, it is a structural design pattern. Its main intent is to connect two classes with different interfaces, allowing them to work together without modifying the internal code of either class.
A good analogy provided in the article is that of electrical adapters in different countries. For example, the US has rectangular sockets while India has cylindrical ones. The main point of the socket example is that the sockets of the visiting country and the plug of the laptop never change; the adapter bridges the gap between them. Altogether, the adapter design pattern makes incompatible interfaces work together without changing either interface’s properties.
An interesting aspect of the adapter design pattern is that it can be implemented in two different ways. The first uses inheritance and is called the Class Adapter pattern. The second uses composition and is known as the Object Adapter pattern. As we learned with the strategy design pattern, composition provides more flexibility and code reusability, and is favored over inheritance. So the Object Adapter pattern is often preferred over the Class Adapter pattern.
The main use of the adapter design pattern is to reuse an existing class when its interface does not match the one a client expects. It lets classes collaborate that otherwise could not because of incompatible interfaces, without modifying either one. It is especially useful when changing the existing class directly is impractical, such as when it belongs to a third party.
The main reason for introducing the adapter design pattern this week is to learn about its usefulness in software development and to incorporate it into future designs. The adapter pattern, like all of the other design patterns encountered thus far in the course, allows for code reusability. It acts as a wrapper around the connected interfaces, letting what would otherwise be incompatible interfaces work together, and provides loose coupling between the two. It can also be used for conversion classes. A good example provided is needing to do calculations in miles when the library used only expects kilometers. In this case, the adapter class can take miles from the client, convert them to kilometers, and leverage the external library’s methods for all of the calculations; the kilometers can then be converted back to miles and the results sent back to the client. Finally, to close off this week’s blog post, my favorite use of the adapter design pattern is with third-party libraries. Because the pattern decouples the client from the library, it gives you the flexibility and control to replace a third-party library with a better-performing API. This is the reason I chose this topic this week: to learn the advantages of the Adapter design pattern and incorporate them into my own coding style.
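The miles-to-kilometers example translates naturally into code. Here is a hedged sketch of the Object Adapter variant (composition); the class and method names are mine, not from the article.

```typescript
// The "third-party library" we cannot modify: it only understands kilometers.
class MetricRouteLibrary {
  travelTimeHours(distanceKm: number, speedKmh: number): number {
    return distanceKm / speedKmh;
  }
}

// The interface the client expects: everything in miles.
interface RouteCalculator {
  travelTimeHours(distanceMiles: number, speedMph: number): number;
}

// Object Adapter: composes the library and converts units at the boundary,
// so neither the client nor the library has to change.
class MilesToMetricAdapter implements RouteCalculator {
  private static readonly KM_PER_MILE = 1.60934;
  constructor(private library: MetricRouteLibrary) {}

  travelTimeHours(distanceMiles: number, speedMph: number): number {
    const km = distanceMiles * MilesToMetricAdapter.KM_PER_MILE;
    const kmh = speedMph * MilesToMetricAdapter.KM_PER_MILE;
    return this.library.travelTimeHours(km, kmh);
  }
}

const calc: RouteCalculator = new MilesToMetricAdapter(new MetricRouteLibrary());
console.log(calc.travelTimeHours(120, 60)); // about 2 hours -- the units cancel
```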

From the blog CS@Worcester – Site Title by myxuanonline and used with permission of the author. All other rights reserved by the author.

#5_343

From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.

Unit Testing: Fakes, Mocks, and Stubs, Oh My!

Have you ever watched a movie where the main character is involved in some crazy action scene, like getting thrown through a glass window or rolling over a car? The actor playing the main character is not actually doing that stuff. It is a stunt double, made to look like the actor and specialized in doing the crazy stunts.  What does this have to do with unit testing? Well, sometimes we work on a project with other people, and the part we need to finish our portion of the program is not done yet. That’s when we need a stunt double. The stunt doubles we use in unit testing are fakes, mocks, and stubs.

What’s the difference?

Fakes-  A fake is an object that has working implementations that are usually different from the production code. A fake might cut corners to get the data and return it to the calling class.  Using an in-memory database is a good way to take advantage of fakes.

Mocks-  Mocks are objects that register the calls they receive. They are pre-programmed with specifications for what is supposed to happen when a method is called. Mocks are useful when you need to test the behavior of code rather than a result. For example, if you are testing an instant messaging application on the iPad, you don’t want to send out a message for every single test you run. Instead, you create a mock object that verifies that the program instructed the message to send. This is testing the behavior of the program.

Stubs-  A stub holds predefined data and returns it when called. We use stubs when we don’t want to touch the real data, or can’t. You create stubs and have them return a value that isn’t going to change, so you know exactly what the results should be when testing. For example, in class we talked a lot about classes named “Student”, “Grade”, and “Transcript”. We acted as though the Transcript class was being written by a classmate, and we were required to write tests for the Student and Grade classes without the real Transcript class. We were able to do this by creating a stub Transcript class that returned the information we would expect in our tests. This is a way of checking that the method is still being called and that it works the way we want it to.
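The Transcript stub from class might be sketched like this (a hypothetical reconstruction; the interface and values are mine, not the actual assignment code):

```typescript
// The real Transcript is being written by a classmate;
// we only agree on its interface.
interface Transcript {
  getGpa(studentId: string): number;
}

// Stub: returns predefined data so Student can be tested in isolation.
class StubTranscript implements Transcript {
  getGpa(_studentId: string): number {
    return 3.5; // a fixed value, so we always know what to expect
  }
}

class Student {
  constructor(private id: string, private transcript: Transcript) {}
  isHonors(): boolean {
    return this.transcript.getGpa(this.id) >= 3.2;
  }
}

// The test depends only on the stub's canned data, not the real class.
const student = new Student("s001", new StubTranscript());
console.log(student.isHonors()); // true, since the stub always reports 3.5
```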

 

You can read more about this here:

http://www.softwaretestingmagazine.com/knowledge/unit-testing-fakes-mocks-and-stubs/

 

From the blog CS@Worcester – Rookey Mistake by Shane Rookey and used with permission of the author. All other rights reserved by the author.

Angular and the OnPush Detection Strategy

Today I will be discussing a blog post by Thoughtram on how to make Angular applications fast. The article describes the term ‘fast’ as dependent on the context of the situation; typically, fast means best performance. The author has a program that contains two components: AppComponent (runs the application) and BoxComponent (draws 10,000 boxes with randomized coordinates). This is what he uses as his default case. To measure the application’s performance, the author wants to know how long it takes Angular to perform a task once it has been triggered. To measure this, he uses Chrome DevTools, specifically the Timeline tool, which can profile JavaScript execution. When the author measured the performance of his code, the results ranged from 40ms to 61ms. To make the code run faster (optimize the code), the author suggests a few different Angular strategies. Due to word limits, in this blog I will only discuss the OnPush strategy.

 

Angular’s OnPush option changes the change detection strategy. It is used to reduce the number of checks Angular makes when there is a change in an application. When the author applies this to his application, he is able to reduce the number of checks. How does he apply it? All he has to do is add a few lines of code in the BoxComponent (the part of the application that draws the boxes), along the lines of “…changeDetection: ChangeDetectionStrategy.OnPush…”. He then exports his components, which now implement the OnPush detection strategy. After rerunning his code, the optimized runtimes range from 21ms to 44ms, a drastic improvement over the default code.
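In code, the change the author describes is a single property in the component’s decorator. A minimal sketch of an Angular component using it (the selector, template, and inputs here are placeholders of mine, not the author’s actual component):

```typescript
import { Component, ChangeDetectionStrategy, Input } from '@angular/core';

@Component({
  selector: 'app-box',
  template: `<div class="box"></div>`, // placeholder template
  // Only re-check this component when one of its @Input references changes
  // (or an event originates inside it), instead of on every application tick.
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class BoxComponent {
  @Input() x = 0;
  @Input() y = 0;
}
```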

 

I chose this blog because I have a project to do in Angular, which I am very new to. I have always been a fan of optimized, clean, and readable code. Nobody likes code that takes forever to run, and nobody can understand code that is a big mess of spaghetti. I have always strived to make my code minimal, clear, and concise, because it makes it easier for me to go back and fix, review, or do whatever else I need to do to my code. I think optimizing code is super important, because slow programs aren’t practical. I hope to implement this strategy when I make my Angular project. Even if I don’t get the chance to tinker with the detection strategy, I would at least like to look into Chrome’s DevTools and measure my project’s performance.

 

Here’s the link: https://blog.thoughtram.io/angular/2017/02/02/making-your-angular-app-fast.html

From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

Code review guidelines

Since we are doing an assignment on “Software Technical Review”, every reviewer is required to follow certain guidelines while reviewing code. So, this week I read an article on code review guidelines written by Madalin Ilie.

Ilie starts the article by explaining why code reviews are important. As per the article, software testing alone has limited effectiveness: the average defect detection rate is only 25 percent for unit testing, 35 percent for function testing, and 45 percent for integration testing. In contrast, the average effectiveness of design and code inspections is 55 and 60 percent, respectively. Case studies of review results have been impressive.

Then the author lists some useful tips for the reviewer:

Critique code instead of people – be kind to the coder, not to the code. 

Treat people who know less than you with respect, deference, and patience. Nontechnical people who deal with developers on a regular basis almost universally hold the opinion that we are prima donnas at best and crybabies at worst. Don’t reinforce this stereotype with anger and impatience.

The only true authority stems from knowledge, not from position. Knowledge engenders authority, and authority engenders respect – so if you want respect in an egoless environment, cultivate knowledge.

Note that review meetings are NOT problem-solving meetings.

Ask questions rather than make statements.

Avoid the “Why” questions. Although extremely difficult at times, avoiding the “Why” questions can substantially improve the mood. Just as a statement is accusatory, so is a “Why” question. Most “Why” questions can be reworded into questions that don’t include the word “Why”, and the results can be dramatic.

Remember to praise. Code reviews should not focus only on telling developers how they can improve; they should also acknowledge what was done well. Human nature is such that we want and need to be acknowledged for our successes, not just shown our faults. Because development is necessarily creative work that developers pour their soul into, it is often close to their hearts. This makes the need for praise even more critical.

Make sure you have good coding standards to reference. Code reviews find their foundation in the coding standards of the organization. Coding standards are supposed to be the shared agreement that the developers have with one another to produce quality, maintainable code. If you’re discussing an item that isn’t in your coding standards, you have some work to do to get the item in the coding standards. You should regularly ask yourself whether the item being discussed is in your coding standards.

Remember that there is often more than one way to approach a solution. Although the developer might have coded something differently from how you would have, it isn’t necessarily wrong. The goal is quality, maintainable code. If it meets those goals and follows the coding standards, that’s all you can ask for.

I very much agree with everything in this list. While every item is important, some have more resonance for me. As much as possible, I will try to make all of my comments positive and oriented toward improving the code. Overall, I believe this article will definitely help me be more effective during code reviews.

Source: https://www.codeproject.com/Articles/524235/Codeplusreviewplusguidelines

From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.

Progressive Web Apps

When you visit a website, web apps are mainly what you will use. Believe it or not, we use web apps almost every day: from online wikis to video hosting websites, these are all part of the wide world of web apps. Today, I want to discuss Google’s developer program and their developer tools for progressive web apps. But what are progressive web apps? According to Google’s development page, progressive web apps are applications that are reliable, fast, and engaging. These are very interesting points because they apply to other aspects of computer science as well, whether it is programming or deciding which algorithm is best for a certain scenario. These three key factors can help our understanding and visualization of future projects we may want to work on, which is why I chose this article. It details each important aspect of the user experience and describes why these aspects need to be present.

First, let’s start off with reliability. By Google’s definition of reliable, “When launched from the user’s home screen, service workers enable a Progressive Web App to load instantly, regardless of the network state.”

This is a great viewpoint because you wouldn’t want your web app to load slowly. A slow-loading web app hurts the user’s experience, which is precisely what we are trying to enhance. Determining ways to make things load faster can be a great challenge in itself. The article explains that pre-caching key resources can increase stability and enhance the user’s experience because it eliminates the app’s dependence on the network. An example of this would be a service worker, written in JavaScript, that acts as a client-side proxy.

Google’s statistics mention that approximately 53% of users will abandon a website if it takes longer than 3 seconds to load. This data is interesting because it shows how far loading and caching algorithms and optimization have come. This can also have a big impact on monetized web pages: if the page doesn’t load fast enough, the user could leave, resulting in potential profit loss.

The final key point is engagement. An example of this would be the push notifications that you receive on a smartphone. Whenever the web app wants to notify you of a change or a message, depending on what the web app is, it sends a notification to the home screen of your phone, which in turn lessens the burden of opening the app itself. Small quality-of-life enhancements such as push notifications can really immerse a user in your product, and with a progressive web app, that is our main goal. Knowing these main design principles of web apps really helped me understand why and how we can further enhance the user experience. Most of the time when we are developing something, it will be for the use of others; whether it’s an internal tool or a client-facing operation, reliability, speed, and engagement are all key aspects of creating a great web app.

Source: https://developers.google.com/web/progressive-web-apps/

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Don’t be an Outlaw

Source: https://haacked.com/archive/2009/07/14/law-of-demeter-dot-counting.aspx/

The Law of Demeter Is Not A Dot Counting Exercise by Phil Haack on haacked.com is a great read on the applications of the Law of Demeter. Phil starts off by analyzing a code snippet to see if it violates the “sacred Law of Demeter”, then gives a short briefing of the Law by referencing a paper by David Bock. He then proceeds to clear up a misunderstanding of the Law of Demeter by people who do not know it well, hence the title of his post: “dot counting” does not necessarily tell you that there is a violation of the law. He closes out with an example by Alex Blabs showing that when you apply a fix to multiple dots in one call, you can effectively lose out on code maintainability. Lastly, he explains that digging deeper into new concepts is all well and good, but being able to explain the disadvantages alongside the advantages shows a better understanding of the topic.

Encapsulation, as the concept was introduced to me, is about encapsulating what varies. The Law of Demeter is a specific application of this idea to methods. It is formally written as: each unit should have only limited knowledge about other units, and should talk only to units “closely” related to the current unit. The example in the paper by David Bock, with the Paperboy and the Wallet, makes it easy to understand where this is coming from. Giving a method access to more information than it needs is unnecessary, and letting a method have direct access to changes made by another method is a bad idea. By applying the Law of Demeter, you encapsulate this information, which simplifies the calling code even though it can add methods to the class being called. Overall, you end up with a product that is easily maintainable, in the sense that if you change values in one place, the change applies across the board to wherever they are used.
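The Paperboy and Wallet example might look like this in code (a hedged sketch; the names follow Bock’s paper, but the implementation is my own):

```typescript
class Wallet {
  constructor(private money: number) {}
  getTotalMoney(): number { return this.money; }
  subtractMoney(amount: number): void { this.money -= amount; }
}

class Customer {
  constructor(private wallet: Wallet) {}

  // Exposing the wallet invites Demeter violations like
  // customer.getWallet().subtractMoney(due) -- reaching into the pocket.
  getWallet(): Wallet { return this.wallet; }

  // Demeter-friendly: the customer handles the payment personally.
  requestPayment(amount: number): number {
    if (this.wallet.getTotalMoney() < amount) return 0;
    this.wallet.subtractMoney(amount);
    return amount;
  }
}

class Paperboy {
  // The paperboy only talks to his direct collaborator, the customer;
  // he never touches the wallet and knows nothing about its internals.
  collectPayment(customer: Customer, due: number): number {
    return customer.requestPayment(due);
  }
}

const customer = new Customer(new Wallet(10));
console.log(new Paperboy().collectPayment(customer, 3)); // 3
```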

Although encapsulation is not a new topic, knowing how to properly apply it to methods through the Law of Demeter is good practice. This means remembering that “a method of an object may only call methods of the object itself, an argument of the method, any object created within the method, and any direct properties/fields of the object”. For example, applying the Law of Demeter to chained get statements is a good idea, while importing many classes that you won’t use is a bad one. With this understanding, although incomplete, I will hopefully avoid violating the Law of Demeter and share it with my fellow colleagues.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.