Author Archives: Ryan Marcelonis

Post #13

While researching AngularJS for the final project, I came across a really useful blog post written by Todd Motto, owner of Ultimate Angular, entitled "Ultimate guide to learning AngularJS in one day".  I was intrigued by this post because it contains a section devoted solely to defining the terminology commonly used in AngularJS development, which I found incredibly helpful.  In this blog post, I will reiterate the definitions of the most important terms, in the hope that doing so will aid me in the development of my team's application.

The article begins by defining what exactly AngularJS is and what it is used for.  Motto defines AngularJS as a "client-side MVC/MVVM framework built in JavaScript, essential for modern single page web applications (and even websites)".  MVC is short for Model-View-Controller, a structure used in many programming languages as a means of organizing software.  The Model is the data structure behind a specific portion of an application, usually delivered as JSON.  The View is the HTML and/or rendered output of the application; in an MVC application, you pull down Model data, which updates the View and displays the relevant data in the HTML.  The Controller provides direct access between the server and the View so that data can be updated on the fly via communication between the server and the client(s).

Motto then explains how to set up an AngularJS project with the bare essentials.  The essential elements that make up an AngularJS application are a module definition, controllers, and the inclusion and binding of AngularJS within an HTML file.
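Motto's post walks through the actual setup code; as a rough illustration only, a bare-bones application might look something like the sketch below (the module and controller names are my own assumptions, not Motto's):

/** app.ts */
declare const angular: any; /** the AngularJS global, assuming angular.js is loaded via a <script> tag */

/** definition: create a module for the application */
const app = angular.module('myApp', []);

/** controller: registered on the module */
app.controller('MainCtrl', ['$scope', function ($scope: any) {
  /** controller logic and view data go here */
}]);

/** binding and inclusion: in the HTML, ng-app and ng-controller wire the module and controller to the view, e.g.
    <html ng-app="myApp">
      <body ng-controller="MainCtrl">
        ...
      </body>
    </html>
*/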

Controllers, as defined by Motto, are the direct access points between the server and the view that are used to update data on the fly.  The HTML of an AngularJS application should contain little to no physical text or hard-coded values, because all of that data should be pushed into the view from a controller.  Web applications should be as dynamic as possible and, by pushing values into the view from a controller in the back end, we can achieve this.  Motto then emphasizes that controllers are to be used for data only, and for creating functions that communicate with the server to push and pull JSON data.
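To illustrate the point about keeping hard-coded values out of the markup, here is a small hedged sketch (the data, module, and controller names are made up by me) in which a controller pushes a list into the view and ng-repeat renders it:

/** emails.ts */
declare const angular: any; /** assumes angular.js is loaded on the page */

angular.module('myApp', [])
  .controller('EmailsCtrl', ['$scope', function ($scope: any) {
    /** the data lives in the controller (and would normally come from the server), not in the markup */
    $scope.emails = [
      { subject: 'Team meeting', from: 'alice@example.com' },
      { subject: 'Code review notes', from: 'bob@example.com' }
    ];
  }]);

/** the view then stays free of hard-coded values:
    <ul ng-controller="EmailsCtrl">
      <li ng-repeat="email in emails">{{ email.subject }} ({{ email.from }})</li>
    </ul>
*/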

Directives, as defined by Motto, are small pieces of templated HTML that should be reused multiple times throughout an application.  Directives are the easiest way to push data into the view.  Directives are configured through a list of properties, including: restrict (restricts how the directive may be used, e.g. as an element or attribute), replace (replaces the markup in the view that declares the directive), transclude (allows the existing content of the element to be passed into the directive's template), template (allows markup to be declared inline and injected into the view), and templateUrl (similar to template, but the markup is kept in its own file).
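As a rough sketch (the directive name and markup are my own, not from Motto's guide), a simple directive using a few of these properties could look like this:

/** my-greeting.directive.ts */
declare const angular: any; /** assumes angular.js is loaded on the page */

angular.module('myApp', [])
  .directive('myGreeting', function () {
    return {
      restrict: 'E',  /** usable only as an element: <my-greeting></my-greeting> */
      replace: true,  /** replace the <my-greeting> tag with the template below */
      template: '<div class="greeting">Welcome back!</div>'
      /** for larger markup, templateUrl: 'my-greeting.html' would point to a separate file */
    };
  });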

Services, as defined by Motto, are more of a stylistic design pattern: Services are used for singleton-style objects, while Factories are used to return objects and functions.  Filters are used in conjunction with arrays to loop through data and return only the entries that match a given criterion.
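As a quick hedged sketch (the names and the endpoint URL are my inventions), a factory that wraps server calls and a filter that narrows an array might look like this:

/** user.factory.ts */
declare const angular: any; /** assumes angular.js is loaded on the page */

angular.module('myApp', [])
  .factory('UserFactory', ['$http', function ($http: any) {
    return {
      /** talk to the server and return the promise so controllers can use the data */
      getUsers: function () {
        return $http.get('/api/users'); /** hypothetical endpoint */
      }
    };
  }])
  .filter('startsWithA', function () {
    /** usable in markup as: ng-repeat="user in users | startsWithA" */
    return function (items: any[]) {
      return (items || []).filter(item => item.name && item.name.charAt(0) === 'A');
    };
  });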

Two-way data binding is a full circle of synchronized data: update the Model and it updates the View; update the View and it updates the Model.  This means that data is kept in sync without any extra work on the developer's part.
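A tiny sketch of this (the markup is shown in the comment; the controller and property names are mine):

/** binding-demo.ts */
declare const angular: any; /** assumes angular.js is loaded on the page */

angular.module('myApp', [])
  .controller('GreetCtrl', ['$scope', function ($scope: any) {
    $scope.name = 'world'; /** changing this in code updates the input and the greeting (Model -> View) */
  }]);

/** in the view, typing in the input updates $scope.name automatically (View -> Model):
    <div ng-controller="GreetCtrl">
      <input type="text" ng-model="name">
      <p>Hello, {{ name }}!</p>
    </div>
*/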

At first, a lot of the results of my research into AngularJS were of little use to me because I had little understanding of the terminology and concepts they described.  I believe that this post gave me a good understanding of the fundamental concepts of AngularJS and I now feel more confident as I continue development of my own application.  I will likely refer back to this article as I make progress in my project and, inevitably, conduct more research.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Post #12

Last class, we were divided into teams to begin working on the software technical code review.  I thought it would be useful to look up which practices are the most effective when it comes to reviewing code.  I found a post by Gareth Wilson entitled "Effective Code Reviews – 9 Tips from a Converted Skeptic", which explains the intuition behind the process of conducting code reviews and then provides a list of nine tips to help readers get started with their own reviews.  I hope that this article will help me be more effective in my contribution to the team's overall review.

Wilson begins the post by listing the benefits of conducting code reviews: they help you find bugs, they ensure the readability and sustainability of code, they spread understanding of the implementation throughout the team, they help new teammates get up to speed with the team's process, and they expose each member of the team to different approaches.  He explains that many entry-level software engineers don't appreciate the value of code reviews at first, and that he was no different until he realized that the code reviews he was a part of were being conducted poorly.  After gaining some experience and developing new working strategies at different companies, Wilson now feels confident enough to provide advice on how to conduct code reviews effectively.  His nine tips for performing effective code reviews are:

  • Review the right things, let tools do the rest – IDEs and developer tools are in place not only to help you troubleshoot problems and assure quality, but also to catch the small style and syntax errors that are detrimental to the readability and uniformity of code.  Knowing this, you should let the tools flag those problems instead of including them in a code review.  Ensuring that code is correct, understandable, and maintainable is the goal, so that should be the focus of any code review.
  • Everyone should code review – Conducting code reviews spreads the understanding of code implementation and helps new developers get up to speed with the working practices of an organization, so the involvement of veteran reviewers and newcomers is equally valued.
  • Review all code – A proper code review is one done as thoroughly as possible.  Wilson believes that no code is too short or simple to review; reviewing every line means that nothing is overlooked, and you can feel more confident about the quality and reliability of code after a rigorous review.
  • Adopt a positive attitude – Code reviews are meant to be beneficial experiences and, by heading into them with a positive attitude and responding to constructive criticism appropriately, you can maximize the potential for this.
  • Review code often and in short sessions – Wilson believes that the effectiveness of a code review begins to dwindle after about an hour, so it is best to refrain from reviewing for any longer than that.  A good way to keep sessions short is to set aside time throughout the day to coincide with breaks, which helps form a habit and resolves issues more quickly while the code is still fresh in the developers' heads.
  • It’s OK to say “It’s all good” – Not all code reviews will contain an issue.
  • Use a checklist to ensure consistency – A checklist will help to keep reviewers on track.

I think these tips are useful to consider and I hope to apply them in the upcoming code review.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Post #11

This week, I thought it would be useful to look into how testing is conducted in TypeScript, as a follow-up to Post #8.  I found a blog post by Sudarsan Balaji, entitled "Unit testing node applications with TypeScript — using mocha and chai", that describes the process of using the assertion library chai in conjunction with the testing framework mocha to conduct unit testing on node applications written in TypeScript.  I believe that knowing how to conduct proper unit testing in TypeScript will give me an advantage in the upcoming assignment as well as the final project of the semester.

Balaji begins the article by explaining his reasons for advocating the use of mocha and chai specifically, and also how to install them.  As I mentioned in the introduction to this post, mocha is a notable JavaScript testing framework and chai is an assertion library (a collection of tools for asserting that things are true/correct).  Balaji believes that these two tools work well together because they are simple yet effective enough to get the job done.  I'm not going to explain the installation here, but once you have mocha and chai installed, you simply create a new TypeScript file and add a few import statements.  Balaji then provides an example of a TypeScript test (I added some comments for clarity):

/** hello-world.spec.ts */
import { hello } from './hello-world';
import { expect } from 'chai';
import 'mocha';

describe('Hello function', () => { /** test group name */
  it('should return hello world', () => { /** specific test */
    const result = hello(); /** run the hello() function from the hello-world module */
    expect(result).to.equal('Hello world!'); /** compare result to expectation */
  });
  /** to test something else we would just add more it() statements here */
});

Sample expected output:
Hello function
√ should return hello world

1 passing (8ms)
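For reference, the hello-world module under test is not shown above; a minimal version that would satisfy this test (my own sketch) might look like this:

/** hello-world.ts */
export function hello(): string {
  return 'Hello world!';
}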

To run this unit test, Balaji recommends creating an npm script that calls mocha and passes in the path to the test files as a parameter.  For this example, that can be done by creating a package.json file with the following contents:

/** package.json */
{
  "scripts": {
    "test": "mocha -r ts-node/register src/**/*.spec.ts"
  }
}

(For those unaware, a JSON file is a JavaScript Object Notation file.  JSON files are written in JavaScript object notation and are used for storing and exchanging data.  In this example, we are using one to store a script that we can run from the console using npm, i.e. npm run test.)

The remainder of Balaji's post is a discussion of using mocha and chai to conduct unit testing on client-side applications.  The final project may require me to delve into this aspect of TypeScript testing, but because I have yet to begin work on it, I think I will refer back to that section of his post and summarize it later if I feel the need to.  Given the nature of the projects we are working on right now, I think it is sufficient to stop this post here.  I now have an introductory understanding of how to conduct unit testing in TypeScript, which I can use to assure that I am producing quality JavaScript applications.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Post #10

This week, I will be preparing a tutorial on the Observer design pattern, and I figured I could assist my own understanding of the pattern by writing a blog post to accompany it.  This post will be a summary of the Observer pattern as it is described on this page of the Object Oriented Design website.  The motivation behind the Observer pattern is the frequent need, in object-oriented programming, for objects to be informed about changes that occur in another object.  A real-world example of this concept is a stock system that provides data to several types of clients; the subject (the stock server) needs to be separated from its observers (the client applications) in such a way that adding a new observer is transparent to the server.

The intent of the Observer pattern is to define a one-to-many dependency between objects so that when an object's state is altered, its dependents are notified and updated automatically.  The pattern is typically implemented with four kinds of classes:

  • Observable/Subject (GoF) – interface or abstract class; defines the operations for attaching and detaching observers to the client
  • ConcreteObservable – concrete Observable class; maintains state of an object and notifies Observers when a change occurs
  • Observer – interface or abstract class; defines notification operations
  • ConcreteObserverA, ConcreteObserverB, etc. – concrete Observer implementations

How it Works:

  • ConcreteObservable object is instantiated in the main framework
  • ConcreteObserver objects are instantiated and attached to ConcreteObservable object via methods defined in the Observable interface
  • ConcreteObserver objects are notified each time the state of the ConcreteObservable object is changed
  • ConcreteObservers are added to the application by simply instantiating them in the main framework and attaching them to the ConcreteObservable object

This allows classes that are already written to remain unchanged.
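To make these roles concrete, here is a minimal TypeScript sketch of the pattern (the class names follow the list above, but the state, method signatures, and messages are my own assumptions):

interface Observer {
  update(state: number): void; /** notification operation */
}

interface Observable {
  attach(observer: Observer): void;
  detach(observer: Observer): void;
  notifyObservers(): void;
}

class ConcreteObservable implements Observable {
  private observers: Observer[] = [];
  private state = 0;

  attach(observer: Observer): void { this.observers.push(observer); }
  detach(observer: Observer): void {
    this.observers = this.observers.filter(o => o !== observer);
  }
  setState(state: number): void {
    this.state = state;
    this.notifyObservers(); /** every state change notifies the attached observers */
  }
  notifyObservers(): void {
    for (const observer of this.observers) { observer.update(this.state); }
  }
}

class ConcreteObserverA implements Observer {
  update(state: number): void { console.log('Observer A saw state ' + state); }
}

/** main framework: instantiate, attach, and change state */
const subject = new ConcreteObservable();
subject.attach(new ConcreteObserverA());
subject.setState(42); /** prints "Observer A saw state 42" */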

The Observer pattern is applicable in situations where a change of state in one object must be reflected in another object without keeping the two objects coupled, and in situations where the main framework needs to introduce new observers with minimal changes.

The Observer pattern is often used in conjunction with other design patterns:

  • Factory – a Factory can be used to create observers, so no changes are required in the main framework
  • Template Method – can be used to make sure the Subject's state is self-consistent before the observers are notified
  • Mediator – a Mediator can be used in cases with many subjects and many observers

The Observer pattern is so well known because its implementation has proven to be quite powerful in the event handling systems of languages such as Java and C#, and in the model-view-controller architectural pattern.  I am excited to put together a tutorial on how to use this pattern, and I believe that having a good grasp of its motivations and intentions will aid me in the future if I ever consider using it.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Post #9

Today, I will be summarizing an interesting article I found on www.softwaretestinghelp.com about software quality assurance and how to prevent defects in a timely fashion.  The article, entitled "Defect Prevention Methods and Techniques", was last updated in October 2017, but the author is not credited.  I selected this article as a topic for a blog post because I found it interesting how it provides a definition of "quality assurance" as well as methods and techniques for preventing software defects during development.

The article begins by differentiating between quality assurance and quality control: quality assurance activities are targeted toward preventing and identifying defects, while quality control activities are targeted toward finding defects after they have already happened.  The reason that the emphasis of this article is on quality assurance and defect prevention is that, according to the article, defects found during the testing phase or after release are costlier to find and fix.  Preventing defects motivates the staff and makes them more aware, improves customer satisfaction, increases reliability and manageability, and enhances continuous process improvement.  Taking measures to prevent defects early on not only improves the quality of the finished product but also helps companies achieve the highest Capability Maturity Model Integration (CMMI) level.

The process of preventing defects begins after implementation is complete and consists of four main stages:

  • Review and Inspection – review of all work products
  • Walkthrough – compare the system to its prototype
  • Defect Logging and Documentation – record key information, such as arguments and parameters, that can be used to support the analysis of defects
  • Root Cause Analysis – identify the root cause of problems and prioritize problem resolution for maximum impact

This process often requires the involvement of three critical groups within an organization, each with their own set of roles and responsibilities:

Managers are responsible for:

  • Support in the form of resources, training, and tools
  • Definitions of appropriate policy and procedure
  • Promotion of discussion, distribution, and changes

Testers are responsible for:

  • Maintenance of the defect database
  • Planning implementation of changes

Clients are responsible for:

  • Providing feedback

The article concludes by emphasizing the importance of defect prevention and its effect on the final product.  I believe that actively participating in a process of avoiding defects before the final stages and release of a product is good practice and should be followed just like any other beneficial development strategy.  The article certainly contributed to my repertoire of development techniques and I think I will refer to these concepts in the future.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Post #7

I began researching good JUnit practices as a follow-up to our discussions of it in class.  I found a post on the codecentric Blog by Tobias Goeschel entitled "Writing Better Tests With JUnit" that addresses the pros and cons of JUnit and provides tips on how to improve your own testing.  This is the most thorough article I've found on JUnit testing (and possibly the longest), so it seems fitting to summarize it in a blog post of my own while we cover the subject in class.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Post #6

Toward the end of our discussion about the Strategy design pattern, we briefly talked about the open/closed principle; I wanted to further my understanding of this concept, so I decided to do some research of my own.  Today, I will summarize an article by Swedish systems architect Joel Abrahamsson entitled “A simple example of the Open/Closed Principle”.

Abrahamsson begins the article by summarizing the open/closed principle as the object-oriented design principle that software entities should be open for extension but closed for modification.  In other words, programmers should write code that does not need to be modified every time the program's specifications change.  He then explains that, when programming in Java, this principle is most often adhered to through the use of polymorphism and inheritance.  We followed this principle in our first assignment of the class, when we refactored the original DuckSimulator program to use the Strategy design pattern.  We realized, in our in-class discussion of the DuckSimulator, that adding behaviors to Ducks would force us to update the implementation of the main class as well as each Duck subclass.  By refactoring the code so that independent behavior classes implement a common interface, and then applying those behaviors to Ducks through "setters", we opened the program for extension while leaving it closed for modification.  Abrahamsson then gives his own example of how the open/closed principle can improve a program that calculates the area of shapes.  The idea is that, if the open/closed principle is not adhered to in the implementation of a program like this, it is susceptible to rapid growth as functionality is added to calculate the area of more and more shapes.
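To connect this back to the DuckSimulator refactor described above, here is a rough TypeScript sketch of the Strategy-style approach (the class and method names are my own assumptions, not taken from the assignment or from Abrahamsson's article):

interface QuackBehavior {
  quack(): void;
}

class LoudQuack implements QuackBehavior {
  quack(): void { console.log('QUACK!'); }
}

class Squeak implements QuackBehavior {
  quack(): void { console.log('Squeak.'); }
}

class Duck {
  constructor(private quackBehavior: QuackBehavior) {}

  /** the "setter" swaps the behavior at runtime without modifying Duck itself */
  setQuackBehavior(quackBehavior: QuackBehavior): void {
    this.quackBehavior = quackBehavior;
  }

  performQuack(): void { this.quackBehavior.quack(); }
}

/** new behaviors can be added by writing new QuackBehavior classes; Duck and the main program stay closed for modification */
const mallard = new Duck(new LoudQuack());
mallard.performQuack(); /** QUACK! */
mallard.setQuackBehavior(new Squeak());
mallard.performQuack(); /** Squeak. */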

(Note: Abrahamsson's examples below are written in C#, not Java.)

public double Area(object[] shapes)
{
    double area = 0;
    foreach (var shape in shapes)
    {
        if (shape is Rectangle)
        {
            Rectangle rectangle = (Rectangle) shape;
            area += rectangle.Width*rectangle.Height;
        }
        else
        {
            Circle circle = (Circle)shape;
            area += circle.Radius * circle.Radius * Math.PI;
        }
    }

    return area;
}

( Abrahamsson’s implementation of an area calculator that does not adhere to the open/closed principle. )


public abstract class Shape
{
    public abstract double Area();
}
public class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }
    public override double Area()
    {
        return Width*Height;
    }
}
public class Circle : Shape
{
    public double Radius { get; set; }
    public override double Area()
    {
        return Radius*Radius*Math.PI;
    }
}
public double Area(Shape[] shapes)
{
    double area = 0;
    foreach (var shape in shapes)
    {
        area += shape.Area();
    }

    return area;
}

( Abrahamsson’s implementation of an area calculator that adheres to the open/closed principle. )

Abrahamsson ends the article by sharing his thoughts on when the open/closed principle should be adhered to.  He believes that the primary focus of any good programmer should be to write code well enough that it doesn't need to be repeatedly modified as the program grows.  Conversely, he says that the context of each situation should be considered, because unnecessarily applying the open/closed principle can sometimes lead to an overly complex design.  I have always suspected that it is good practice to write code that is prepared for the program's requirements to change, and this principle confirmed that idea.  From this point forward, I will take the open/closed principle into consideration when tackling new projects.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.