WSU Blog #6 for CS-443

URL: http://www.ambysoft.com/essays/floot.html

The Full Life Cycle Object-Oriented Testing (FLOOT) Method

Part of my recent assignment had me analyze object-oriented code and provide adequate test coverage.

From the blog Rick W Phillips - CS@Worcester by rickwphillips and used with permission of the author. All other rights reserved by the author.

WSU Blog #7 for CS-343

URL: http://www.bambielli.com/posts/2017-04-02-decorator/

The Decorator Pattern

I am choosing to use the decorator pattern as the framework for my final project in CS-343.

From the blog Rick W Phillips - CS@Worcester by rickwphillips and used with permission of the author. All other rights reserved by the author.

Progressive Web Apps

When you visit a website, web-apps are mainly what you will use. Believe it or not, we use web-apps almost every day. From online wikis to video hosting websites, these are all included in the wide world of web-apps. Today, I want to discuss Google’s developer program and their developer tools for progressive web apps. But what are progressive web apps? According to Google’s developer page, progressive web-apps are applications that are reliable, fast, and engaging. These are very interesting points because they can be applied to other aspects of computer science, whether it is programming or deciding which algorithm is best for a certain scenario. These three key factors can help our understanding and visualization of future projects we may want to work on, which is why I chose this article. It helps detail each important aspect of the user experience and describes why these aspects need to be present.

First, let’s start off with reliability. By Google’s definition of reliable, “When launched from the user’s home screen, service workers enable a Progressive Web App to load instantly, regardless of the network state.”

This is a great point because you wouldn’t want your web-apps to load slowly. A slow-loading web-app can hurt the user’s experience, which is exactly what we are trying to enhance. Determining ways to make things load faster can be a great challenge in itself. The article explains that pre-caching key resources can increase stability and make the experience more reliable because it removes the app’s dependence on the network. An example of this is a service worker written in JavaScript that acts as a client-side proxy.
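To make that idea concrete, here is a minimal sketch of what such a pre-caching service worker might look like; the cache name and list of pre-cached files are my own placeholders, not from Google’s article, and I’ve written it as TypeScript for consistency with the rest of this blog.

```typescript
// A rough sketch of a pre-caching service worker acting as a client-side proxy.
const CACHE_NAME = 'app-shell-v1';
const PRECACHE_URLS = ['/', '/index.html', '/styles.css', '/app.js'];

self.addEventListener('install', (event: any) => {
  // Pre-cache the key resources so the app shell can load without the network.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
  );
});

self.addEventListener('fetch', (event: any) => {
  // Answer requests from the cache first, then fall back to the network
  // for anything that was not pre-cached.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```

With the key resources cached at install time, the app shell can load instantly even when the device is offline or on a flaky connection.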

Google’s statistics mention that approximately 53% of users will abandon a website if it takes longer than 3 seconds to load. This data is interesting because it shows how far loading and caching algorithms and optimization have come. It can also have a big impact on monetized web pages: if the page doesn’t load fast enough, the user may leave, resulting in potential lost profit.

The final key point is engagement. An example of this would be the push notifications you receive on a smartphone. Whenever the web-app wants to notify you of a change or a message, depending on what the web-app is, it sends a notification to the home screen of your phone, which in turn lessens the burden of opening the app itself. Small quality-of-life enhancements such as push notifications can really immerse a user in your product, and with a progressive web-app, that is our main goal. Knowing these main design principles of web-apps really helped me understand why and how we can further enhance the user experience. Most of the time, what we develop will be used by others, whether internally or by clients, and reliability, speed, and engagement are all key aspects of creating a great web-app.

Source: https://developers.google.com/web/progressive-web-apps/

From the blog CS@Worcester – Amir Adelinia's Computer Science Blog by aadelinia1 and used with permission of the author. All other rights reserved by the author.

Don’t be an Outlaw

Source: https://haacked.com/archive/2009/07/14/law-of-demeter-dot-counting.aspx/

The Law of Demeter Is Not A Dot Counting Exercise by Phil Haack on haacked.com is a great read on the applications of the Law of Demeter. Phil starts off by analyzing a code snippet to see if it violates the “sacred Law of Demeter,” then gives a short briefing on the Law by referencing a paper by David Bock. He then proceeds to clear up a misunderstanding of the Law of Demeter common among people who do not know it well, hence the title of his post: “dot counting” does not necessarily tell you that there is a violation of the law. He closes out with an example by Alex Blabs showing that when you apply a fix to multiple dots in one call, you can effectively lose out on code maintainability. Lastly, he explains that digging deeper into new concepts is all well and good, but being able to explain the disadvantages alongside the advantages shows a better understanding of the topic.

Encapsulation, as the concept was introduced to me, is about encapsulating what varies. However, it has different applications, like the Law of Demeter, which is specific to methods. The Law is formally written as “Each unit should have only limited knowledge about other units: only units ‘closely’ related to the current unit.” The Paperboy and the Wallet example in the paper by David Bock makes it easy to understand where this is coming from. Giving methods access to more information than they need is unnecessary and should be avoided, and letting a method have direct access to changes made by another method is a bad idea. By applying the Law of Demeter, you encapsulate this information, which simplifies the code that uses a class but increases the complexity of the class itself. Overall, you end up with a product that is more easily maintainable, in the sense that if you change values in one place, the change applies everywhere they are used.
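Here is a quick sketch of that Paperboy and Wallet idea; the class and method names are my own illustration (Bock’s paper uses Java, but I am using TypeScript for consistency with the rest of this blog).

```typescript
class Wallet {
  constructor(private money: number) {}
  subtract(amount: number): void { this.money -= amount; }
}

class Customer {
  private wallet = new Wallet(20);

  // Exposing the wallet invites callers to dig into its internals.
  getWallet(): Wallet { return this.wallet; }

  // Better: encapsulate the interaction behind one method the caller can use.
  pay(amount: number): number {
    this.wallet.subtract(amount);
    return amount;
  }
}

const customer = new Customer();

// Violates the Law of Demeter: the paperboy reaches through the customer
// into the wallet, coupling himself to the wallet's structure.
customer.getWallet().subtract(5);

// Respects the Law of Demeter: the paperboy only talks to his direct collaborator.
customer.pay(5);
```

If Customer later stores money somewhere other than a Wallet, only the pay method has to change, which is exactly the maintainability benefit described above.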

Although encapsulation is not a new topic, knowing how to properly apply encapsulation to methods by following the Law of Demeter should be good practice. This means remembering that “a method of an object may only call methods of the object itself, an argument of the method, any object created within the method, and any direct properties/fields of the object.” For example, applying the Law of Demeter to chained get statements is a good idea, while importing many classes that you won’t use is a bad one. With this understanding, although incomplete, I will hopefully avoid violating the Law of Demeter and share it with my fellow colleagues.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

What is microservices architecture?

I chose the topic after googling the different types of software architecture and realizing I’d never read about microservices. I chose this specific article, titled “Microservices,” because I know Martin Fowler’s blog contains reputable information, as he has also been referenced in class. Microservices architecture is a way of designing an application as a suite of services that can be deployed independently. While he mentions there is no exact definition, this style usually shares characteristics around organization by business capability, automated deployment, intelligence in the endpoints, and decentralized control of data.

The microservice style is a newer and increasingly common approach for enterprise applications. It structures a single application as a suite of small services, where each service runs in its own process and usually communicates through an HTTP resource API.

One key feature of microservices architecture is componentization via services. The different services of an application are a way to break the software down into components. These services are out-of-process components that communicate using web service requests. An advantage of using services as components instead of libraries is that they can be deployed independently, so only a single service needs to be redeployed when it changes.
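As a rough illustration of a service as an out-of-process component, here is a tiny hypothetical “orders” service exposing an HTTP resource API; the resource, port, and data are my own placeholders, not from Fowler’s article.

```typescript
// Runs in its own process; other services or UIs call it over the network
// instead of linking it in as a library.
import { createServer } from 'http';

const orders = [{ id: 1, item: 'book', quantity: 2 }];

const server = createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/orders') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(orders));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000, () => console.log('orders service listening on port 3000'));
```

Because it is deployed on its own, a change to how orders are stored or served only requires redeploying this one service.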

A second key feature of microservices architecture is that it is organized around business capabilities. This approach combats the negative effects of splitting teams by technology layer, where management ends up with UI teams, server-side logic teams, and database teams. Services allow teams to be cross-functional and include a full range of skills, since the product can be split up by individual services that communicate via a message bus.

Some of the briefer features of microservices architecture include the idea that a team should own a product over its lifetime and not just treat it as a project. Additionally, microservices usually follow decentralized governance, which is less constricting and allows each service to take advantage of the technology that best suits it. Lastly, decentralized data management is common with microservices: typically each service will manage its own database.

After reading this article I definitely have an idea of what microservices architecture is, but I think I’d need a more beginner-level article to explain it fully. There were definitely some terms and concepts Martin referred to that I wasn’t familiar with. One thing I did like about the article was that he mentioned how well-known companies such as Amazon and Netflix use some of the technology he was talking about. Seeing as microservices are mainly used for enterprise applications, I have yet to gain any experience with them, but I most likely will in the near future.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

What is Functional Testing?

For this blog I’ll be covering the basics of functional testing based on an article I read titled “Functional Testing Tutorial.” Although this is a pretty broad topic, it will complement my previous blog, “What is non-functional testing?” Functional testing is testing in which each function of an application is tested and verified to satisfy the specifications and requirements. Most functional testing is black box testing and does not deal with any of the actual code. To test all of the functions in an application, testers provide input to each function and verify the output against what is expected. This is carried out either by manual effort or through automated testing.

Aside from testing each individual function, functional testing also checks system usability, system accessibility, and error conditions. Basically, a user should not have difficulty using a system, and in an error condition, the correct error message or procedure should be followed. In order to carry out functional testing, there is a basic process to follow: first identify the test data or input, then calculate the expected outcomes and values, execute your test cases, and finally conduct a comparative analysis to make sure all expected outputs match the actual outputs.
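Here is a small sketch of that process in code form, treating the function under test as a black box; the applyDiscount function and its expected values are my own hypothetical example, not from the tutorial.

```typescript
// Function under test (its internals don't matter for black box testing).
function applyDiscount(price: number, percent: number): number {
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

// 1. Identify the test data (inputs) and 2. calculate the expected outcomes.
const cases = [
  { price: 100, percent: 10, expected: 90 },
  { price: 59.99, percent: 0, expected: 59.99 },
];

// 3. Execute the test cases and 4. compare actual outputs to expected outputs.
for (const c of cases) {
  const actual = applyDiscount(c.price, c.percent);
  console.assert(
    actual === c.expected,
    `applyDiscount(${c.price}, ${c.percent}) returned ${actual}, expected ${c.expected}`
  );
}
```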

The article moves on to compare functional and non-functional testing to give an idea of when each is used. Functional testing is usually done first and can be manual or automated. Its test coverage is used to ensure business requirements are met based on inputs, and it is more a description of what the product or system actually does. There are a lot of ways to implement functional testing for a system. Some of the most popular types are Unit Testing, Smoke Testing, Sanity Testing, Integration Testing, White Box Testing, Black Box Testing, User Acceptance Testing, and Regression Testing. A few of the popular tools used to execute these tests include Selenium, JUnit, and QTP.

After reading this article I’m confident I’ll be able to classify a type of testing as functional or non-functional. The article didn’t go into much detail about the execution of functional testing, but that’s because there are just too many types to generalize the implementation. When I want to actually execute and implement functional testing, I’ll have to read more in depth about one specific type in order to actually test a system. I think the most important concept of functional testing is to remember that the business requirements matter most. That being said, it will be important to make sure the business requirements are fully understood throughout the development cycle to ensure proper test coverage.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Why TypeScript?

We’re beginning to work on our final class projects using Angular and TypeScript, both of which I was unfamiliar with before this semester. Since our projects will be implemented using the Angular framework and the TypeScript programming language, I want to learn more about the concepts behind these technologies. I found an informative blog on the subject, entitled Angular: Why TypeScript? by Victor Savkin. The main topic describes the benefits of using TypeScript in general, and how it is an efficient means of producing quality Angular projects.

Victor points out that while using TypeScript for Angular projects is not required, it is encouraged within the framework for several reasons, many of which I will summarize here. I will also offer my personal thoughts and takeaways regarding the content.

Prior to reading Victor’s blog, I was not familiar with the fact that Angular was written in TypeScript. Now it makes more sense to me why we are going to be using TypeScript to code our projects; since Angular was written in TypeScript, it ought to have high compatibility with that particular language.

Another encouraging aspect brought up by Victor is the fact that TypeScript is equipped with many useful tools, such as “auto-completion, navigation and refactoring.” Based on his explanations, I feel that TypeScript is a very flexible language; for example, if we need to rename an instance, TypeScript’s tooling can automatically change all usages of that instance to its new name without our having to do it manually. I find this to be a very powerful and efficient feature that I look forward to using while coding future projects.

Victor also alludes to the fact that TypeScript is actually a superset of JavaScript, something I was completely unaware of. I find this fact very interesting in the sense that it suggests that anything that can be done in JavaScript can be implemented in TypeScript. Victor even suggests we can rename a JavaScript .js file with the .ts extension and, with the proper annotations, the program could run completely in TypeScript.
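As a small illustration of that point (my own example, not Victor’s): plain JavaScript is already valid TypeScript, and type annotations can then be added incrementally.

```typescript
// Plain JavaScript: with the compiler's default settings, renaming the file
// from .js to .ts would let this compile as-is.
function greet(name) {
  return 'Hello, ' + name;
}

// The same function with TypeScript's explicit annotations added.
function greetTyped(name: string): string {
  return `Hello, ${name}`;
}

console.log(greet('Angular'));      // Hello, Angular
console.log(greetTyped('Angular')); // Hello, Angular
```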

Victor further explains that TypeScript simply makes code “easier to understand” than other languages such as JavaScript. I tend to agree with him, in the sense that TypeScript offers explicit declarations, such as those of parameters, interfaces, and abstractions, functionality that JavaScript does not offer much of. Victor offers sample code, first implemented in JavaScript and then again in TypeScript. In my opinion, the TypeScript code is much easier to comprehend due to the explicit declarations that are lacking in JavaScript.
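For instance, here is a hypothetical sketch (not Victor’s actual sample code) of the kind of explicit interface and parameter declarations that make a function’s intent obvious at a glance.

```typescript
// The interface states exactly what shape of data the function expects.
interface Person {
  name: string;
  age: number;
}

// Parameter and return types are declared up front, so a reader does not
// have to infer them from the function body or its call sites.
function describe(person: Person): string {
  return `${person.name} is ${person.age} years old`;
}

console.log(describe({ name: 'Ada', age: 36 }));
```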

Reading Victor’s blog has made me further intrigued with the capabilities of TypeScript; I learned that it is even more flexible and powerful than I originally thought. Based on Victor’s insights, I feel that learning TypeScript will be useful to me during my professional career, since he states that it is widely used in the field of Computer Science.

From the blog CS@Worcester – Jason Knowles by Jason Knowles and used with permission of the author. All other rights reserved by the author.

Writing Great Test Cases and Becoming a Great Software Tester

I completely agree with Kyle McMeekin when he states, in a blog post titled “5 Manual Test Case Writing Hacks” from April 11th, 2016, that it should come as no surprise that great software testers should have an eye for detail. What may not be as obvious, however, is that great software testers should also be able to write great and effective test cases. McMeekin goes on to observe that writing effective test cases requires both talent and experience. In an attempt to begin my journey to become a great software tester, I decided that I should pay close attention to the advice offered by experienced testers as they reflect on the skills they have gained from their time in the industry. Hopefully, by following the tips of more experienced testers, I too will someday be able to contribute to highly valuable test cases that improve productivity and help to create high-quality software.

The first step to writing great test cases is knowing what components make up the test case. While many of the components were obvious to me, there were others that I had not thought of. The test steps, for example, are important because the person performing the test may not be the same person who wrote the test. Knowing how the test should be performed is important to obtaining a valuable result from the test.
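As a rough sketch of those components, here is one way a test case record could be structured so that someone other than its author can execute it; the field names and sample test are my own illustration, not from McMeekin’s post.

```typescript
interface TestCase {
  id: string;             // unique identifier for traceability
  title: string;          // short summary of what is being verified
  preconditions: string;  // the state the system must be in before starting
  steps: string[];        // the test steps, written so anyone can follow them
  expectedResult: string; // what the tester should observe if the test passes
}

const loginTest: TestCase = {
  id: 'TC-042',
  title: 'User can log in with valid credentials',
  preconditions: 'A registered account exists',
  steps: ['Open the login page', 'Enter valid credentials', 'Click Log In'],
  expectedResult: 'The user is taken to their dashboard',
};

console.log(`${loginTest.id}: ${loginTest.title} (${loginTest.steps.length} steps)`);
```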

What I found most valuable about McMeekin’s post, however, was his tips on how to “write better test cases that will lead to better quality software for your company.” His first piece of advice is to keep test cases simple: they should be written in simple language and follow the company’s template. Although not specifically mentioned in this guide, I remember reading that if a test case seems to become too complex, you should consider breaking it up into smaller pieces. Second, McMeekin recommends making test cases reusable. Taking into consideration that your test cases could be adapted to other scenarios or reused in another application should help you develop test cases that are reusable. Third, McMeekin suggests placing yourself in the shoes of the tester or developer rather than the test-case writer, and being your own critic. Considering which parts of your test case may be ambiguous or frustrating for others will often help to create better tests. This goes hand in hand with the fourth recommendation, which is to think about the end user. Understanding the expectations and desires of the end user will certainly help to create test cases that lead to better, more successful software. The last recommendation McMeekin gives is to stay organized. This suggestion could apply just about anywhere, but with hundreds or possibly even thousands of potential test cases, staying organized is certainly essential to being a great tester.

Although I am sure there is a great deal more to consider in my quest to one day become a great software tester, I think that keeping these things in mind will certainly improve the quality of the test cases that I write. In the rapidly advancing field of computer science, I don’t feel that I will ever stop learning new and improved ways of doing things or further developing my skills.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Thoughts on the Angular Material Datepicker

While researching how to realize my plans of developing a countdown clock Angular application for the final project of Software Construction, Design, and Architecture, I came across an interesting write-up on the Angular Material Datepicker by one of the Angular Material developers, Miles Malerba. With plans of creating a countdown timer driven by user input, a datepicker component sounded like a welcome alternative to making one from scratch. I decided to look further into the Material Datepicker to see if it would be something that could prove useful.

The Material Datepicker includes support for the required attribute, which is used for data validation when a form is submitted. This seems like a worthwhile feature, as it would make little sense to allow the user to create a countdown timer without inputting a date to count down to. The datepicker also has an additional mdDatepickerFilter attribute, which allows for “finer grained control of what’s considered a valid date.” This also seems like an important feature for a countdown timer input, as I would want to prevent users from selecting a date in the past, since that would be invalid to count down to.
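Here is a rough sketch of how those two attributes might be wired up, following the md-prefixed attribute names used in the article’s version of Angular Material (newer versions use mat- prefixes); the component and filter function are my own hypothetical illustration, not code from the write-up.

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'countdown-date-input',
  template: `
    <md-input-container>
      <!-- required: the form is invalid until the user picks a date -->
      <input mdInput [mdDatepicker]="picker"
             [mdDatepickerFilter]="noPastDates"
             placeholder="Count down to" required>
      <md-datepicker #picker></md-datepicker>
    </md-input-container>
  `,
})
export class CountdownDateInputComponent {
  // Finer-grained control over what counts as a valid date: reject past dates.
  noPastDates = (date: Date | null): boolean => {
    if (!date) { return false; }
    const today = new Date();
    today.setHours(0, 0, 0, 0);
    return date >= today;
  };
}
```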

While I had not previously thought about supporting mobile users with my countdown timer application, the Material Datepicker’s mention of a specific “touch UI mode” made me reconsider. I think that mobile users would be an important audience to appeal to with a countdown timer tailored to them. Perhaps mobile users would have more use for a countdown timer on their phones than on a computer. I will have to look into the possibility of supporting mobile users.

While it does not apply to my project, I thought that the Material Datepicker’s DateAdapter and its support for any locale were an interesting addition. The DateAdapter is an abstract class that allows developers to specify the formatting of dates, which allows a representation like 1/2/2017 to mean January 2nd, 2017 in America and February 1st, 2017 just about anywhere else. Since my project will only need to support the American date representation, the included NativeDateAdapter class should fill my needs. This class uses the JavaScript Date object to represent dates, which follows the American convention mentioned earlier.

In conclusion, I think that the Angular Material Datepicker will certainly help in the development of my Angular Countdown Timer Single Page Application (SPA). Having a datepicker component that is already written will allow me to focus on the more important aspects of design, such as allowing users to save their countdown timers by implementing database calls. While there is certainly still much work to be done on my Angular SPA, reading about the Angular Material Datepicker has me excited to get started developing.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Stubs, Mocks and Service Virtualization

We’ve been discussing the differences between stubs and mocks, along with the fact that many software testers may initially assume the two are the same concept. But as we have learned, mocks and stubs are not synonyms; they are two different techniques used within unit testing.

I wanted to learn more about some specific differences between stubs and mocks, such as potential advantages and disadvantages of using each of these techniques. Wojciech Bulaty has an informative article on the subject. Before getting into the pros and cons of each aforementioned technique, Bulaty explains the main concepts of stubs, mocks, and service virtualization.

He asserts that stubs use hardcoded data within the testing program itself, and the tests depend on that data. Based on Bulaty’s description of stubs, it seems it would be most beneficial to apply stubs when testing simpler code where no more than a minimal implementation of an interface is required.

Mocks, on the other hand, seem to be most useful in larger, more complex tests that “verify output against exceptions.” In my opinion, mocks ought to be used in addition to stubs to further satisfy the integrity of our testing. For instance, mocks may produce findings that stubs could not, due to the limitations of the hardcoded data within stubs. At the same time, I feel it may be useful to use stubs as well, to help compare their findings with those of the mocks within our tests.
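To make the distinction concrete, here is a minimal hand-rolled example of my own (not from Bulaty’s article) showing a stub that returns hardcoded data and a mock that records how it was called.

```typescript
interface MailService {
  send(to: string, body: string): boolean;
}

class Order {
  constructor(private mail: MailService) {}
  confirm(email: string): boolean {
    return this.mail.send(email, 'Your order is confirmed');
  }
}

// Stub: hardcoded return value, used to check the output (state verification).
class MailServiceStub implements MailService {
  send(): boolean { return true; }
}

// Mock: records its calls, used to verify the interaction (behavior verification).
class MailServiceMock implements MailService {
  calls: Array<{ to: string; body: string }> = [];
  send(to: string, body: string): boolean {
    this.calls.push({ to, body });
    return true;
  }
}

// Stub-based test: assert on what confirm() returns.
console.assert(new Order(new MailServiceStub()).confirm('a@example.com') === true);

// Mock-based test: assert on how the dependency was used.
const mock = new MailServiceMock();
new Order(mock).confirm('a@example.com');
console.assert(mock.calls.length === 1 && mock.calls[0].to === 'a@example.com');
```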

Bulaty introduces a third technique to use within our testing: service virtualization. I found this particularly interesting because we haven’t discussed much about service virtualization in class yet, and this article helped me learn more about it. He explains that service virtualization supports testing through multiple protocols and employs “test doubles” that are usually offered as Software as a Service (SaaS). It seems to me that service virtualization relies more on recorded data from a wide scope of different systems, whereas stubs and mocks are typically written by developers for their own projects.

My main takeaway from this article was recognizing some differences between mocks, stubs, and service virtualization that were previously a bit unclear to me. Whereas stubs seem to concentrate on state verification, mocks tend to be more focused on behavior and interactions within our code. I also learned which testing techniques may be most appropriate for the particular code we are testing. For instance, if I am writing simpler tests with limited resources, based on what I’ve learned from this article, I will likely use stubs. But for larger, more complex tests, I feel I ought to implement mocks within my testing as well. Finally, when I am involved in large-scale projects during my professional career, Bulaty’s article has convinced me that I should look into using service virtualization in addition to stubs and mocks when performing unit testing.

From the blog CS@Worcester – Jason Knowles by Jason Knowles and used with permission of the author. All other rights reserved by the author.