Category Archives: Week 9

Unit Testing: Fakes, Mocks, and Stubs, Oh My!

Have you ever watched a movie where the main character is involved in some crazy action scene, like getting thrown through a glass window or rolling over a car? The actor playing the main character is not actually doing that stuff. It is a stunt double, made to look like the actor, who specializes in doing the crazy stunts. What does this have to do with unit testing? Well, sometimes we work on a project with other people, and the part we need to finish our portion of the program is not done yet, so we need a stunt double. The stunt doubles we use in unit testing are fakes, mocks, and stubs.

What’s the difference?

Fakes-  A fake is an object that has a working implementation, but one that is usually different from the production code. A fake cuts corners to get to the data and return it to the calling class. An in-memory database is a good example of a place to take advantage of fakes.
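To make the idea concrete, here is a minimal TypeScript sketch of a fake. The repository interface and class names are made up for illustration, not taken from any real project:

```typescript
// Hypothetical repository interface the production code depends on.
interface StudentRepository {
  save(id: number, name: string): void;
  findName(id: number): string | undefined;
}

// Fake: a working implementation that cuts corners by keeping
// everything in memory instead of talking to a real database.
class InMemoryStudentRepository implements StudentRepository {
  private rows = new Map<number, string>();

  save(id: number, name: string): void {
    this.rows.set(id, name);
  }

  findName(id: number): string | undefined {
    return this.rows.get(id);
  }
}
```

Tests can pass an InMemoryStudentRepository anywhere the real repository is expected, and the code under test never knows the difference.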

Mocks-  Mocks are objects that register the calls they receive. They are pre-programmed with expectations about what is supposed to happen when their methods are called. Mocks are useful when you need to test the behavior of the code rather than a result. For example, if you are testing an instant messaging application on the iPad, you don’t want to actually send out a message for every single test you run. Instead, you would create a mock object that verifies that the program instructed the message to be sent. This is testing the behavior of the program.
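A rough sketch of that messaging example, assuming a Jasmine-style test runner; the class and method names here are hypothetical:

```typescript
// Hypothetical messenger service the app would normally call.
class MessageSender {
  send(recipient: string, text: string): void {
    // In production this would hit the network.
  }
}

class ChatWindow {
  constructor(private sender: MessageSender) {}

  submit(recipient: string, text: string): void {
    if (text.trim().length > 0) {
      this.sender.send(recipient, text);
    }
  }
}

// Behavior test: verify that send was called, not what it returned.
describe('ChatWindow', () => {
  it('instructs the sender to send the message', () => {
    const sender = new MessageSender();
    spyOn(sender, 'send'); // mock: records calls, sends nothing

    new ChatWindow(sender).submit('alice', 'hello');

    expect(sender.send).toHaveBeenCalledWith('alice', 'hello');
  });
});
```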

Stubs-  A stub holds predefined data and returns it when called. We use stubs when we don’t want to touch the real data, or can’t. You create stubs and have them return a value that isn’t going to change, so you know exactly what the results should be when testing. For example, in class we talked a lot about classes named “Student”, “Grade”, and “Transcript”. We acted as though the Transcript class was being written by a classmate, and we were required to write tests for the Student and Grade classes without the real Transcript class. We were able to do this by creating a stub Transcript class that returns the information we would expect in our tests. This is a way of checking that the method is still being called and that it works the way we want it to.
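The class example was written in Java, but the same idea looks roughly like this in TypeScript. The Transcript interface and its method are assumptions for illustration, not the actual class assignment:

```typescript
// Hypothetical interface the Student class expects from a Transcript.
interface Transcript {
  getCredits(courseId: string): number;
}

// Stub: returns canned data so Student can be tested without the
// classmate's real Transcript implementation.
class StubTranscript implements Transcript {
  getCredits(courseId: string): number {
    return 3; // predefined value, so the expected result is known
  }
}

class Student {
  constructor(private transcript: Transcript) {}

  totalCredits(courseIds: string[]): number {
    return courseIds
      .map(id => this.transcript.getCredits(id))
      .reduce((sum, credits) => sum + credits, 0);
  }
}

// With the stub, the expected result is always 3 per course.
const student = new Student(new StubTranscript());
console.log(student.totalCredits(['CS443', 'CS343'])); // 6
```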

 

You can read more about this here:

http://www.softwaretestingmagazine.com/knowledge/unit-testing-fakes-mocks-and-stubs/

 

From the blog CS@Worcester – Rookey Mistake by Shane Rookey and used with permission of the author. All other rights reserved by the author.

Angular and the OnPush Detection Strategy

Today I will be discussing a blog post by Thoughtram on how to make Angular applications fast. The article describes the term ‘fast’ as dependent on the context of the situation; typically, fast means best performance. The author has a program that contains two components: AppComponent (runs the application) and BoxComponent (draws 10,000 boxes with randomized coordinates). This is what he uses as his default case. To measure the application’s performance, the author wants to know how long it takes Angular to perform a task once it has been triggered. To measure this, the author uses Chrome DevTools, specifically the Timeline tool, which can be used to profile JavaScript execution. When the author measured the performance of his code, his timings ranged from 40ms to 61ms. To make the code run faster (optimize the code), the author suggests a few different Angular strategies. Due to word limits, in this blog I will only discuss the OnPush strategy.

 

Angular’s OnPush strategy changes the change detection strategy. It is used to reduce the number of checks Angular makes when there is a change in an application. When the author applies this to his application, he is able to reduce the number of checks. How does he apply this to the code? All he has to do is change the detection strategy by adding a line to the BoxComponent (the part of the application that draws the boxes). He uses something along the lines of “…changeDetection: ChangeDetectionStrategy.OnPush…”. He then exports his components, which now implement the OnPush detection strategy. After rerunning his code, the optimized runtimes range from 21ms to 44ms, a drastic improvement over the default code.
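The author’s actual BoxComponent is not reproduced here, but wiring OnPush into a component looks roughly like this; the selector, template, and inputs below are placeholders, not his code:

```typescript
import { ChangeDetectionStrategy, Component, Input } from '@angular/core';

// Minimal sketch: with OnPush, Angular only re-checks this component
// when one of its @Input references changes or an event fires inside it.
@Component({
  selector: 'box',
  template: `<div class="box" [style.left.px]="x" [style.top.px]="y"></div>`,
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class BoxComponent {
  @Input() x = 0;
  @Input() y = 0;
}
```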

 

I chose this blog because I have a project to do in Angular, which I am very new to. I have always been a fan of optimized, clean, and readable code. Nobody likes code that takes forever to run, and nobody can understand code that is a big mess of spaghetti. I have always strived to make my code minimal, clear, and concise, because it makes it easier for me to go back and fix, review, or do whatever I need to my code. I think optimizing code is super important, because slow programs aren’t practical. I hope to implement this strategy when I make my Angular project. Even if I don’t get the chance to tinker with the detection strategy, I would at least like to look into Chrome’s DevTools and measure my project’s performance.

 

Here’s the link: https://blog.thoughtram.io/angular/2017/02/02/making-your-angular-app-fast.html

From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

Code review guidelines

Since we are doing an assignment on “Software Technical Review”, every reviewer is required to follow certain guidelines while reviewing the code. So, this week I read an article on code review guidelines written by Madalin Ilie.

Ilie starts the article by explaining why code reviews are important. As per the article, software testing alone has limited effectiveness: the average defect detection rate is only 25 percent for unit testing, 35 percent for function testing, and 45 percent for integration testing. In contrast, the average effectiveness of design and code inspections is 55 and 60 percent, respectively. Case studies of review results have been impressive.

The author then lists some useful tips for reviewers:

Critique code instead of people – be kind to the coder, not to the code. 

Treat people who know less than you with respect, deference, and patience. Nontechnical people who deal with developers on a regular basis almost universally hold the opinion that we are prima donnas at best and crybabies at worst. Don’t reinforce this stereotype with anger and impatience.

The only true authority stems from knowledge, not from position. Knowledge engenders authority, and authority engenders respect – so if you want respect in an egoless environment, cultivate knowledge.

Note that review meetings are NOT problem-solving meetings.

Ask questions rather than make statements.

Avoid the “Why” questions. Although extremely difficult at times, avoiding the “Why” questions can substantially improve the mood. Just as a statement can be accusatory, so can a why question. Most “Why” questions can be reworded into a question that doesn’t include the word “Why”, and the results can be dramatic.

Remember to praise. The purpose of a code review is not only to tell developers how they can improve; it should also acknowledge when they did a good job. Human nature is such that we want and need to be acknowledged for our successes, not just shown our faults. Because development is necessarily creative work that developers pour their soul into, it can often be close to their hearts. This makes the need for praise even more critical.

Make sure you have good coding standards to reference. Code reviews find their foundation in the coding standards of the organization. Coding standards are supposed to be the shared agreement that the developers have with one another to produce quality, maintainable code. If you’re discussing an item that isn’t in your coding standards, you have some work to do to get the item in the coding standards. You should regularly ask yourself whether the item being discussed is in your coding standards.

Remember that there is often more than one way to approach a solution. Although the developer might have coded something differently from how you would have, it isn’t necessarily wrong. The goal is quality, maintainable code. If it meets those goals and follows the coding standards, that’s all you can ask for.

I very much agree with everything in this list. While every item on the list is important, some resonate with me more than others. As much as possible, I will try to make all of my comments positive and oriented toward improving the code. Overall, I believe this article will help me be more effective during my code review time.

Source: https://www.codeproject.com/Articles/524235/Codeplusreviewplusguidelines

From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.

Progressive Web Apps

When you visit a website, web apps are mainly what you will use. Believe it or not, we use web apps almost every day: online wikis, video-hosting websites, and more are all part of the wide world of web apps. Today, I want to discuss Google’s developer program and their developer tools for progressive web apps. But what are progressive web apps? Progressive web apps are applications that are reliable, fast, and engaging, according to Google’s development page. These are interesting points because they carry over to other aspects of computer science, whether it is programming or deciding which algorithm is best for a certain scenario. These three key factors can help our understanding and visualization of future projects we may want to work on, which is why I chose this article. It details each important aspect of the user experience and describes why these aspects need to be present.

First, let’s start off with reliability. By Google’s definition of reliable, “When launched from the user’s home screen, service workers enable a Progressive Web App to load instantly, regardless of the network state.”

This is a great point because you wouldn’t want your web app to load slowly. A slow-loading web app degrades the user’s experience, which is exactly what we are trying to enhance. Finding ways to make things load faster can be a great challenge in itself. The article explains that pre-caching key resources can increase stability and make the experience reliably fast because it removes the app’s dependence on the network. An example of this is a service worker, written in JavaScript, that acts as a client-side proxy.
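Here is a rough sketch of that pre-caching idea, written in TypeScript; the cache name and file list are placeholders, and a real setup would also need to register the worker from the page:

```typescript
// sw.ts - sketch of a service worker acting as a client-side proxy.
const CACHE = 'app-shell-v1';
const PRECACHE = ['/', '/index.html', '/styles.css', '/app.js'];

self.addEventListener('install', (event: any) => {
  // Pre-cache key resources while the service worker installs.
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(PRECACHE)));
});

self.addEventListener('fetch', (event: any) => {
  // Answer from the cache first, fall back to the network if needed.
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});
```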

Google’s statistics mention that approximately 53% of users will abandon a website if it takes longer than 3 seconds to load. This data is interesting because it shows how far loading and caching algorithms and optimization have come. It can also have a big impact on monetized web pages: if the page doesn’t load fast enough, the user could leave, resulting in potential profit loss.

The final key point is engagement. An example of this is the push notifications you receive on a smartphone. Whenever the web app wants to notify you of a change or a message, depending on what the web app is, it sends a notification to the home screen of your phone, which in turn lessens the burden of opening the app itself. Small quality-of-life enhancements such as push notifications can really immerse a user in your product, and with a progressive web app, that is our main goal. Knowing these main design principles of web apps really helped me understand why and how we can further enhance the user experience. Most of the time when we develop something, it will be used by others, and whether it is an internal or client-facing operation, reliability, speed, and engagement are all key aspects of creating a great web app.

Source: https://developers.google.com/web/progressive-web-apps/

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Don’t be an Outlaw

Source: https://haacked.com/archive/2009/07/14/law-of-demeter-dot-counting.aspx/

The Law of Demeter Is Not A Dot Counting Exercise by Phil Haack on haacked.com is a great read on the applications of the Law of Demeter. Phil starts off by analyzing a code snippet to see if it violates the “sacred Law of Demeter”. He then gives a short briefing on the Law by referencing a paper by David Bock, and proceeds to clear up a misunderstanding of the Law of Demeter by people who do not know it well, hence the title of his post: “dot counting” does not necessarily tell you that there is a violation of the law. He closes with an example by Alex Blabs showing that when you blindly apply a fix to multiple dots in one call, you can effectively lose out on code maintainability. Lastly, he explains that digging deeper into new concepts is all well and good, but being able to explain the disadvantages alongside the advantages shows a better understanding of the topic.

Encapsulation, as a concept introduced to me, is about encapsulating what varies. There are, however, more specific applications of it, like the Law of Demeter, which applies to methods. It is formally written as “Each unit should have only limited knowledge about other units: only units ‘closely’ related to the current unit”. The example in the paper by David Bock, the paperboy and the wallet, makes it easy to understand where this is coming from. Giving methods access to more information than they need is unnecessary, and letting a method reach directly into state managed by another object is a bad idea. By applying the Law of Demeter, you encapsulate this information, which simplifies the calling code even though it can add methods to the called class. Overall, you end up with a product that is easily maintainable, in the sense that if you change values in one place, the change applies across the board to wherever they are used.
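A quick TypeScript sketch of the paperboy-and-wallet idea follows; this is not Bock’s original code, and the amounts and method names are made up:

```typescript
class Wallet {
  constructor(private balance: number) {}
  subtract(amount: number): void { this.balance -= amount; }
  getBalance(): number { return this.balance; }
}

class Customer {
  private wallet = new Wallet(20);

  // Violation-prone: exposes the wallet so callers reach through it.
  getWallet(): Wallet { return this.wallet; }

  // Demeter-friendly: the customer handles the payment itself.
  pay(amount: number): boolean {
    if (this.wallet.getBalance() < amount) { return false; }
    this.wallet.subtract(amount);
    return true;
  }
}

const customer = new Customer();
customer.getWallet().subtract(5.75); // paperboy digging into the wallet
customer.pay(5.75);                  // ask, don't dig
```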

Although encapsulation is not a new topic, knowing how to properly apply it to methods through the Law of Demeter is good practice. That means remembering that “a method of an object may only call methods of the object itself, an argument of the method, any object created within the method, and any direct properties/fields of the object”. For example, applying the Law of Demeter to chained get statements is a good idea, while importing many classes that you won’t actually use is a bad sign. With this understanding, although incomplete, I will hopefully avoid violating the Law of Demeter and share it with my fellow colleagues.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

What is microservices architecture?

I chose this topic after googling the different types of software architecture and realizing I’d never read about microservices. I chose this specific article, titled “Microservices”, because I know Martin Fowler’s blog contains reputable information, as he’s also been referenced in class. Microservices architecture is a way of designing applications as suites of services that can be deployed independently. While he mentions there is no exact definition, this style usually has common characteristics around business capability, automated deployment, intelligence in the endpoints, and decentralized control of data.

The microservice style is a newer and increasingly common approach for enterprise applications. It structures a single application as a suite of small services, where each service runs in its own process and usually communicates over an HTTP resource API.
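As a toy illustration of one such small service exposing its own HTTP resource API, here is a sketch using Express (not from Fowler’s article; the endpoint, data, and port are made up, and it assumes Express and its type definitions are installed):

```typescript
import express from 'express';

// A single small service with its own HTTP resource API.
const app = express();

const products = [{ id: 1, name: 'Widget' }];

app.get('/products', (_req, res) => {
  res.json(products);
});

app.get('/products/:id', (req, res) => {
  const product = products.find(p => p.id === Number(req.params.id));
  product ? res.json(product) : res.sendStatus(404);
});

// Runs in its own process and can be deployed independently of other services.
app.listen(3000, () => console.log('product service listening on 3000'));
```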

One key feature of microservices architecture is componentization via services. The different services of an application are a way to break the software down into components. These services are out-of-process components that communicate using web service requests. An advantage of using services as components instead of libraries is that they can be deployed independently, so only a single service needs to be redeployed when it changes.

A second key feature of microservices architecture is that it is organized around business capabilities. This approach combats the negative effects of splitting teams along technology layers, which leads to UI teams, server-side logic teams, and database teams. Services allow teams to be cross-functional and include a full range of skills, since the product can be split up by individual services that communicate via a message bus.

Some of the briefer features of microservices architecture include the idea that a team should own a product over its lifetime and not just treat it as a project. Additionally, microservices usually follow decentralized governance, which is less constricting and allows each service to take advantage of the technology that best suits it. Lastly, decentralized data management is common with microservices: typically each service manages its own database.

After reading this article I definitely have an idea of what microservices architecture is, but I think I’d need a more beginner-level article to fully explain it. There were definitely some terms and concepts that Martin referred to that I wasn’t familiar with. One thing I did like about the article was that he mentioned how well-known companies such as Amazon and Netflix use some of the technology he was talking about. Seeing as microservices are mainly used for enterprise applications, I have yet to gain any experience with them, but I most likely will in the near future.

 

 

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

What is Functional Testing?

For this blog I’ll be covering the basics of functional testing based on the article I read titled “Functional Testing Tutorial”. Although this is a pretty broad topic, it will complement my previous blog covering “What is non-functional testing?” Functional testing is when each function of an application is tested and verified to satisfy the specifications and requirements. Most functional testing is black box testing and does not deal with any of the actual code. To test all of the functions in an application, testers provide input to each function and verify the output against what is expected. This is carried out either by manual effort or automated testing.

Aside from testing each individual function, functional testing also checks system usability, system accessibility, and error conditions. Basically, a user should not have difficulty using a system, and in an error condition the correct error message or procedure should be followed. In order to carry out functional testing there is a basic process that must be followed: first identify the test data or input, then calculate the expected outcome and values, execute your test cases, and lastly conduct a comparative analysis to make sure all expected outputs match the actual outputs.
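That four-step process maps almost directly onto a simple automated test. Here is a small sketch, assuming a Jasmine/Jest-style test runner and a made-up function under test:

```typescript
// Hypothetical function under test: applies a percentage discount to an order total.
function applyDiscount(total: number, percent: number): number {
  return Math.round(total * (1 - percent / 100) * 100) / 100;
}

describe('applyDiscount', () => {
  it('matches the expected output for the chosen input', () => {
    // 1. Identify the test data / input.
    const total = 200;
    const percent = 10;

    // 2. Calculate the expected outcome.
    const expected = 180;

    // 3. Execute the test case, and 4. compare actual against expected.
    expect(applyDiscount(total, percent)).toBe(expected);
  });
});
```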

The article moves on to compare functional and non-functional testing to give an idea of when each is used. Functional testing is usually done first and can be manual or automated. Test coverage is used to ensure business requirements are met based on the inputs, and functional testing is more a description of what the product or system actually does. There are a lot of ways to implement functional testing for a system. Some of the most popular types are Unit Testing, Smoke Testing, Sanity Testing, Integration Testing, White Box Testing, Black Box Testing, User Acceptance Testing, and Regression Testing. A few of the popular tools used to execute these tests include Selenium, JUnit, and QTP.

After reading this article I’m confident I’ll be able to classify a type of testing as functional or non-functional. The article didn’t go into much detail on the execution of functional testing, but that’s because there are just too many types to generalize the implementation. When I actually want to execute and implement functional testing, I’ll have to read more in depth about one specific type in order to test a system. I think the most important concept of functional testing is to remember that the business requirements matter most. That being said, it will be important to make sure the business requirements are fully understood throughout the development cycle to ensure proper test coverage.

 

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Why TypeScript?

We’re beginning to work on our final class projects using Angular and TypeScript, both of which I was unfamiliar with before this semester. Since our projects will be implemented using the Angular framework and the TypeScript programming language, I want to learn more about the concepts behind these technologies. I found an informative blog on the subject, entitled Angular: Why TypeScript? by Victor Savkin. The main topic describes the benefits of using TypeScript in general, and how it is an efficient means of producing quality Angular projects.

Victor points out that while using TypeScript for Angular projects is not required, it is encouraged within the framework for several reasons, many of which I will summarize here. I will also offer my personal thoughts and takeaways regarding the content.

Prior to reading Victor’s blog, I was not familiar with the fact that Angular was written in TypeScript. Now it makes more sense to me why we are going to be using TypeScript to code our projects; since Angular was written in TypeScript, it ought to have high compatibility with that particular language.

Another encouraging aspect brought up by Victor is the fact that TypeScript is equipped with many useful tools, such as “auto-completion, navigation and refactoring.” Based on his explanations, I feel that TypeScript is a very flexible language; for example, if we need to rename an instance, it seems that TypeScript tooling can automatically update every reference to that instance to its new name without having to do it manually. I find this to be a very powerful and efficient feature that I look forward to using while coding future projects.

Victor also alludes to the fact that TypeScript is actually a superset of JavaScript, something I was completely unaware of. I find this fact very interesting in the sense that it suggests that anything that can be done in JavaScript can be implemented in TypeScript. Victor even suggests we can rename a JavaScript .js file with the .ts extension, and with the proper annotations the program could run completely as TypeScript.

Victor further explains that TypeScript simply makes code “easier to understand” than other languages such as JavaScript. I tend to agree with him, in the sense that TypeScript offers explicit declarations, such as parameter types, interfaces, and abstractions, while JavaScript does not offer much of this functionality. Victor offers sample code, first implemented in JavaScript and then again in TypeScript. In my opinion, the TypeScript code is much easier for me to comprehend due to the explicit declarations that are lacking in JavaScript.
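This is not Victor’s actual sample, but a small sketch in the same spirit, showing how explicit declarations change the reading experience (the function and interface names are made up):

```typescript
// In plain JavaScript the reader has to guess what 'config' contains:
//   function connect(config) { return config.host + ':' + config.port; }

// In TypeScript the expectations are declared explicitly.
interface ConnectionConfig {
  host: string;
  port: number;
}

function connect(config: ConnectionConfig): string {
  return `${config.host}:${config.port}`;
}

connect({ host: 'localhost', port: 8080 }); // OK
// connect({ host: 'localhost' });          // compile-time error: 'port' is missing
```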

Reading Victor’s blog has made me further intrigued with the capabilities of TypeScript; I learned that it is even more flexible and powerful than I originally thought. Based on Victor’s insights, I feel that learning TypeScript will be useful to me during my professional career, since he states that it is widely used in the field of Computer Science.

 

From the blog CS@Worcester – Jason Knowles by Jason Knowles and used with permission of the author. All other rights reserved by the author.

Writing Great Test Cases and Becoming a Great Software Tester

I completely agree with Kyle McMeekin when he states, in a blog post titled “5 Manual Test Case Writing Hacks” from April 11th, 2016, that it should come as no surprise that great software testers should have an eye for detail. What may not be as obvious, however, is that great software testers should also be able to write great and effective test cases. McMeekin goes on to observe that writing effective test cases requires both talent and experience. In an attempt to begin my journey to become a great software tester, I decided that I should pay close attention to the advice offered by experienced testers as they reflect on the skills they have gained from their time in the industry. Hopefully, by following the tips of more experienced testers, I too will someday be able to contribute highly valuable test cases that improve productivity and help to create high quality software.

The first step to writing great test cases is knowing what components make up the test case. While many of the components were obvious to me, there were others that I had not thought of. The test steps, for example, are important because the person performing the test may not be the same person who wrote the test. Knowing how the test should be performed is important to obtaining a valuable result from the test.

What I found most valuable about McMeekin’s post, however, were his tips on how to “write better test cases that will lead to better quality software for your company.” His first piece of advice is to keep test cases simple. They should be in simple language, and follow the company’s template. Although not specifically mentioned in this guide, I remember reading that if a test case seems to become too complex, you should begin considering breaking it up into smaller pieces. Second, McMeekin recommends making test cases reusable. Taking into consideration that your test cases could be adapted to other scenarios or reused in another application should help to develop test cases that are reusable. Third, McMeekin suggests placing yourself in the shoes of the tester or developer rather than the test-case writer, and being your own critic. Considering what parts of your test-case may be ambiguous or frustrating for others using them will often help to create better tests. This goes hand in hand with the fourth recommendation, which is to think about the end user. Understanding the expectations and desires of the end user will certainly help to create test cases that lead to better, more successful software. The last recommendation that McMeekin gives is to stay organized. This suggestion could apply just about anywhere, but with hundreds or possibly even thousands of potential test cases, staying organized is certainly essential to being a great tester.

Although I am sure there is a great deal more to consider in my quest to one day become a great software tester, I think that keeping these things in mind will certainly improve the quality of the test cases that I write. In the rapidly advancing field of computer science, I don’t feel that I will ever stop learning new and improved ways of doing things or further developing my skills.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Thoughts on the Angular Material Datepicker

While researching how to make my dream of developing a countdown clock Angular application for the final project of Software Construction, Design, and Architecture a reality, I came across an interesting writeup on the Angular Material Datepicker by one of the Angular Material developers, Miles Malerba. With plans of creating a user-inputted countdown timer, a datepicker component sounded like a welcome alternative to making one from scratch. I decided to look further into the Material Datepicker to see if it would be something that could prove useful.

The Material Datepicker includes support for the required attribute, which is used for data validation when a form is submitted. This seems like a worthwhile feature, as it would make little sense to allow the user to create a countdown timer without inputting a date to count down to. The datepicker also has an additional mdDatepickerFilter attribute, which allows for “finer grained control of what’s considered a valid date.” This also seems like an important feature for the countdown timer input, as I would want to keep users from selecting a date in the past, which would be invalid to count down to.
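A sketch of the kind of filter I have in mind is below. The md-prefixed names follow the article’s naming (newer Angular Material releases use a mat prefix instead), and the selector and template are placeholders rather than real project code:

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'countdown-date-input',
  // Placeholder template; a real app would also import the datepicker module.
  template: `
    <input [mdDatepicker]="picker" [mdDatepickerFilter]="noPastDates" required>
    <md-datepicker #picker></md-datepicker>
  `
})
export class CountdownDateInputComponent {
  // Reject any date earlier than today, so the countdown target is valid.
  noPastDates = (d: Date | null): boolean => {
    if (!d) { return false; }
    const today = new Date();
    today.setHours(0, 0, 0, 0);
    return d.getTime() >= today.getTime();
  };
}
```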

While I had not previously thought about supporting mobile users with my countdown timer application, the Material Datepicker’s mention of a specific “touch UI mode” made me reconsider. I think that mobile users would be an important audience to appeal to with a countdown timer tailored to them. Perhaps mobile users would have more use for a countdown timer on their phones than on the computer. I will have to look into the possibility of supporting mobile users.

While it does not apply to my project, I thought that the Material Datepicker’s DateAdapter and support for any locale was an interesting addition. The DateAdapter is an abstract class that allows developers to specify the formatting of dates, which allows a representation like 1/2/2017 to mean January 2nd, 2017 in America and February 1st, 2017 just about anywhere else. Since my project will only need to support the American date representation, the included NativeDateAdapter class should fill my needs. This class uses the JavaScript Date object to represent dates, which follows the American convention mentioned earlier.

In conclusion, I think that Angular Material Datepicker will certainly help in the development of my Angular Countdown Timer Single Page Application (SPA). Having a datepicker component that is already written will allow me to focus on the more important aspects of design, such as allowing users to save their countdown timers by implementing database calls. While there is certainly still much work to be done on my Angular SPA, reading about the Angular Material Datepicker has me excited to get started developing.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.