Category Archives: Week 9

Future of Testing Continued

A while ago (a long while), I talked about an interesting post on something called Test Ops in this post: https://fusfaultyfunctions.wordpress.com/2017/09/20/the-future-of-testing-taking-an-interesting-turn/.

Now I’d like to talk about a post by Awesome Testing describing an important topic in Test Ops: testing in production. Essentially, it’s a set of testing practices that utilize real users and the different ideas and implementations that arise in a production environment. So how do you test a new feature or update produced for a service?

Obviously, one metric is that it works without errors for the users. But the next most important metric is the number of users it retains. The number of people using the service, and continuing to use it, is the most important thing for these applications. And this needs to be tested.

Now what do you do when you produce a new feature and need to test it? You could just throw it out into the wild and see how the statistics work out: if it works, keep it; otherwise, throw it away. But that can annoy users and lose you people.

There is no one best way, but there are several different methods in use, each with risks that need to be mitigated. The first method outlined is Blue-Green Deployment, also called Canary Deployment. You deploy the new feature or software on a separate set of servers, the blue pool. Preliminary tests are done with internal users, and if everything looks good, 5% of users are redirected to it from the original servers, the green pool. Then you can see how well the new software is working. If it doesn’t look good, you move everyone back to the green pool.
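The 5% redirection can be sketched as deterministic routing on a user id. This is my own illustration (the hashing scheme and names are assumptions, not code from the post): the same user always lands in the same pool, and the threshold controls what fraction of users see the new servers.

```java
// Sketch of canary routing: a stable hash of the user id decides whether a
// user is sent to the new "blue" pool or stays on the original "green" pool.
public class CanaryRouter {
    private final int percentToBlue;

    public CanaryRouter(int percentToBlue) {
        this.percentToBlue = percentToBlue;
    }

    // Deterministic: the same user always gets the same pool.
    public String poolFor(String userId) {
        int bucket = Math.floorMod(userId.hashCode(), 100); // 0..99
        return bucket < percentToBlue ? "blue" : "green";
    }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter(5); // send ~5% of users to blue
        System.out.println("alice -> " + router.poolFor("alice"));
        System.out.println("bob   -> " + router.poolFor("bob"));
    }
}
```

If the blue pool misbehaves, rolling back is just setting the percentage to zero.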

Test Flights are similar. You hide a new feature behind one code path, alongside another code path without the feature. By changing a config file, you show the new feature to users in the same manner as in Canary Deployment: first internal users, then, let’s say, 5%. The feature can always be reverted with a change to the config file. A/B testing is a bit more extreme: essentially you have, say, two variations of an application. Half of the users see one variation, half see the other, and the one that retains the most users becomes the finalized version.
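A test flight like this can be sketched as a config-driven flag. This is a minimal sketch of my own (the property name newCheckout is illustrative, not from the post): the feature ships in the code, but a config value decides which path users see.

```java
import java.util.Properties;

// Sketch of a "test flight" toggle: both code paths exist, and a config
// entry picks one. Reverting the feature is a one-line config change.
public class FeatureFlag {
    private final Properties config;

    public FeatureFlag(Properties config) {
        this.config = config;
    }

    // True when the named feature is enabled in the config; off by default.
    public boolean isEnabled(String feature) {
        return Boolean.parseBoolean(config.getProperty(feature, "false"));
    }

    public static void main(String[] args) {
        Properties config = new Properties();
        config.setProperty("newCheckout", "true"); // flip to "false" to revert

        FeatureFlag flags = new FeatureFlag(config);
        if (flags.isEnabled("newCheckout")) {
            System.out.println("Showing new checkout flow");
        } else {
            System.out.println("Showing old checkout flow");
        }
    }
}
```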

There’s also a technique where faults are intentionally injected into software; this pushes the design toward resilience and security. And then there’s one popularized by Microsoft, where developers are required to use the applications they’re developing locally, to ensure the program provides a reasonably good user experience.

Overall, it’s really interesting to see the considerations required when testing new software. I never considered that testing would check not only for a working product, but for one that works well too. It makes testing a much more complicated, yet exciting field. It also makes the job of a tester much more integral to the success of an application.

Original Post: http://www.awesome-testing.com/2016/09/testops-2-testing-in-production.html

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-11-13 23:23:40

The resource I discovered this week is partially a blog post, but mostly the mobile app that the blog post is about. The app is called Enki. It’s similar to the language-learning app Duolingo, but its purpose is to help people learn new software development skills in small chunks every day. The blog post explains why the app was developed: the options that currently exist for ongoing learning about software development take a lot of time, something developers tend to be short on. They can also be boring and inefficient. The app creators wanted something fun, engaging, useful, and quick. The mobile platform was selected so that users would always be able to have it with them, and therefore be able to squeeze learning into limited free time like a work commute or the time it takes for their code to compile. The daily lessons, called workouts, stick to small tips and bits of applicable information instead of getting bogged down in details that people who’ve passed the beginner stage probably already know. They’re also designed specifically with avoiding boredom in mind, so the workouts contain engaging challenges and game-like elements.

I chose this resource because it’s one of the only options for this kind of learning that isn’t a book or video course. The blog post explained why the app was developed: the current options for continuous learning take too much time, and the creators wanted something that could help you practice a little bit each day.

It affected me immediately because I downloaded and started using the app. So far I have only brushed up on HTML skills, but it seems interesting, and I intend to keep using it. It seems like a fun way to learn new things or get a refresher.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

Web Application Testing

Source: http://www.softwaretestinghelp.com/web-application-testing/

This week I decided to look up different types of testing, and I found a blog that talked about web testing. This website, called Software Testing Help, had a post about web application testing that gave a complete guide to testing web applications. I tried to find the author of the post but could not find a name. The author goes through a web testing checklist that programmers should work through before deploying their website, which includes:

1) Functionality testing:

You should test your website to make sure that all of the links, forms, cookies, HTML, CSS, and database interactions work fine. It is important to make sure that there are no broken links and that the content you want to display on the website appears correctly and in the right position.
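One piece of functionality testing, checking for broken links, can be sketched in a few lines. This is my own illustration (a real test would collect every link on a page and check each one): a HEAD request is sent and the status code decides whether the link counts as alive.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of a broken-link check: a link is considered alive when the server
// answers a HEAD request with a non-error status code (below 400).
public class LinkChecker {

    public static boolean isLinkAlive(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("HEAD"); // headers only, no body download
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            return conn.getResponseCode() < 400;
        } catch (Exception e) {
            return false; // malformed URLs and unreachable hosts count as broken
        }
    }

    public static void main(String[] args) {
        System.out.println("alive? " + isLinkAlive("https://example.com"));
    }
}
```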

2) Usability testing:

Usability testing means that the website should be easy to use, and any instructions on the website should be clear. There should be a main menu link on every page, and it should be consistent throughout the website. You should also make sure that there are no spelling errors and that all the fonts, colors, frames, etc. are set correctly.

3) Interface testing:

Check that the interaction between the website and the servers works correctly and that any errors that occur are handled properly.

4) Compatibility Testing:

Make sure that the website is compatible with all sorts of browsers and operating systems: browsers such as Firefox, Google Chrome, and AOL, and operating systems such as Windows, Mac, and Linux, plus the other browsers and OSes available today.

5) Performance testing:

You should make sure that the website can handle a heavy load of traffic without crashing and can serve a large number of user requests simultaneously. A web stress test pushes the site to its specific limits and sees how the system reacts when it crashes and how it recovers from those crashes.
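The "many simultaneous requests" part of a load test can be sketched with a thread pool. This is my own stand-in example (a real stress test would fire HTTP requests at the site itself): many simulated requests run concurrently and the successes are counted.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a load test: fire `total` simulated requests across `threads`
// workers and count how many succeed.
public class LoadTest {

    // Stand-in for one user request; a real handler might fail under load.
    static boolean handleRequest() {
        return true;
    }

    static int run(int threads, int total) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger successes = new AtomicInteger();
        for (int i = 0; i < total; i++) {
            pool.submit(() -> {
                if (handleRequest()) successes.incrementAndGet();
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return successes.get();
    }

    public static void main(String[] args) {
        System.out.println(run(10, 1000) + "/1000 simulated requests succeeded");
    }
}
```

Ramping up the request count until the success rate drops is the basic shape of a stress test.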

6) Security testing:

Finally, you want to make sure the website is secure. Especially for a website that takes personal data and saves it in its database, security needs to be strong enough that people can log in and feel safe that their personal information is protected.

I like this blog because in our class right now we are going to be developing our own website, and even though it won’t be deployed to the public, there are tests in here that are good to remember when writing our code. In future situations, when I am writing programs for websites, this blog is a good reminder of what kinds of tests I should run to make sure I have done everything before I deploy a website.

From the blog CS@Worcester – The Road of CS by Henry_Tang_blog and used with permission of the author. All other rights reserved by the author.

Refactoring Code

Source: https://snipcart.com/blog/tips-on-code-refactoring-from-a-former-addict

This week I decided to write about refactoring code. I found a blog written by Charles Ouellet called “Tips on Code Refactoring, from a Former Addict”; in it he talks about his own experience refactoring code. Refactoring is rewriting existing code to make it more readable and more maintainable. When Charles first started his career as a developer, he had a serious refactoring addiction that hindered his own work and delayed him from shipping his code to the consumer. In this blog, he gives his own perspective on what are good reasons to refactor code and what are some bad ones.
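To make "more readable and more maintainable" concrete, here is a tiny before-and-after of my own (the pricing example is invented, not from the post): the behavior stays identical, but the duplicated formula and magic number get pulled into one named helper.

```java
// Sketch of a small refactoring: same result before and after, but the
// "after" version names the tax rate and removes the duplicated formula.
public class PriceCalculator {
    private static final double TAX_RATE = 0.05;

    // Before refactoring: the tax formula is repeated inline with a magic number.
    public double checkoutTotalBefore(double subtotal, double shipping) {
        return (subtotal + subtotal * 0.05) + (shipping + shipping * 0.05);
    }

    // After refactoring: one well-named helper holds the formula.
    public double checkoutTotalAfter(double subtotal, double shipping) {
        return withTax(subtotal) + withTax(shipping);
    }

    private double withTax(double amount) {
        return amount * (1 + TAX_RATE);
    }
}
```

The key property of a refactoring is visible here: both methods return the same totals, so a test written against the old code still passes against the new code.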

One of the good reasons to refactor code is to avoid technical debt. When you write code and the business takes off, new problems may arise as it grows. Refactoring can be a cheaper way of fixing the program than writing a brand-new program to do the same thing. Another good reason to refactor is that it can help new people joining the project halfway through learn the base of the code: it lets a new coder see every class related to the application and understand most of the code that was written before they arrived. Finally, with technology evolving every day, refactoring code to adapt to new technologies is a much cleaner and easier option than writing brand-new code for whatever is on the market today.

One of the bad reasons to refactor code is just to make the code look prettier. Coders will often look back after a few days or weeks, decide the code they wrote is ugly, and want to go back and fix it. However, that is a waste of time and money for both the coder and the company: if the code works fine, you should not refactor just for the sake of refactoring. Another bad reason is integrating new but useless technology. Before refactoring your code to integrate a new technology, you should take a step back and think about what it actually brings to your product, and whether it is worth refactoring for something that may be a one-hit wonder.

I like this blog because it comes from an experienced developer giving his own opinion on when it is good to refactor code. I think this topic is very important to learn because in school we are always changing our code to make it look nice while making sure it works, so we are constantly refactoring; in the real world we would not have the luxury of endlessly changing our code to make it look nice or reworking it altogether. This blog gives you a good sense of when you should refactor your code and when you shouldn’t.

From the blog CS@Worcester – The Road of CS by Henry_Tang_blog and used with permission of the author. All other rights reserved by the author.

API Design with Java 8

In this blog post, Per-Åke Minborg talks about the fundamentals of good API design. His inspiration for writing the article is a blog post by Ferenc Mihaly, which is essentially a checklist for good Java API design.

“API combines the best of two worlds, a firm and precise commitment combined with a high degree of implementation flexibility, eventually benefiting both the API designers and the API users.”

Do not Return Null to Indicate the Absence of a Value

Using the Optional class introduced in Java 8 can alleviate Java’s problems with handling nulls. The example below shows the use of the Optional class.

Without Optional class

public String getComment() {

    return comment; // comment is nullable

}


With Optional class

public Optional<String> getComment() {

    return Optional.ofNullable(comment);

}

Do not Use Arrays to Pass Values to and From the API

“In the general case, consider exposing a Stream, if the API is to return a collection of elements. This clearly states that the result is read-only (as opposed to a List which has a set() method).”

Not exposing a Stream

public String[] comments() {

    return comments; // Exposes the backing array!

}

Exposing a Stream

public Stream<String> comments() {

    return Stream.of(comments); // Read-only view; the backing array stays hidden

}

Consider Adding Static Interface Methods to Provide a Single Entry Point for Object Creation

Adding static interface methods allows the client code to create objects that implement the interface. “For example, if we have an interface Point with two methods int x() and int y(), then we can expose a static method Point.of(int x, int y) that produces a (hidden) implementation of the interface.”

Not using static interface methods

Point point = new PointImpl(1,2);

Using static interface methods

Point point = Point.of(1,2);
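The Point example above can be fleshed out into a complete sketch. This is my own rendering of the pattern the article describes (the implementation class name and placement are assumptions): the static of() method is the single entry point, and clients never see the implementing class.

```java
// Sketch of a static interface factory: callers use Point.of(x, y) and the
// implementation class stays hidden behind the interface.
public interface Point {
    int x();
    int y();

    static Point of(int x, int y) {
        // Local class: invisible outside this method, so the API surface
        // is just the Point interface itself.
        final class PointImpl implements Point {
            private final int px, py;
            PointImpl(int px, int py) { this.px = px; this.py = py; }
            public int x() { return px; }
            public int y() { return py; }
        }
        return new PointImpl(x, y);
    }
}
```

Because clients only ever name Point, the hidden implementation can be swapped or optimized later without breaking any caller.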

I selected this resource because it relates to design principles, and it’s something that I do not know well. I would like to learn more about APIs and good practices for designing them using Java. The article had useful information and topics that I didn’t know well. A few more pointers from the article I learned:

  • Favor Composition With Functional Interfaces and Lambdas Over Inheritance – Avoid API inheritance; instead use static interface methods that take “one or several lambda parameters and apply those given lambdas to a default internal API implementation class.”
  • Ensure That You Add the @FunctionalInterface Annotation to Functional Interfaces – signals that API users may use lambdas to implement the interface
  • Avoid Overloading Methods With Functional Interfaces as Parameters – Can cause lambda ambiguity on the client side.
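The @FunctionalInterface pointer from the list above can be shown in a few lines. This is my own sketch (the CommentFilter name is invented): the annotation signals that the interface is meant to be implemented with a lambda, and the compiler will reject a second abstract method.

```java
// Sketch of the @FunctionalInterface annotation: exactly one abstract method,
// so API users can implement it with a lambda.
@FunctionalInterface
interface CommentFilter {
    boolean keep(String comment);
}

public class FilterDemo {
    public static void main(String[] args) {
        // Clients supply behavior as a lambda instead of a named class.
        CommentFilter nonEmpty = c -> !c.isEmpty();
        System.out.println(nonEmpty.keep("hello")); // prints "true"
        System.out.println(nonEmpty.keep(""));      // prints "false"
    }
}
```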

I would like to learn more about using AWS Lambdas, and learning more about APIs in general. I plan to use this information if I am ever to work on Java APIs which is possible. I want to make sure I have the best practices and the check list mentioned in the start of the article seems like a good starting point.

The post API Design with Java 8 appeared first on code friendly.

From the blog CS@Worcester – code friendly by erik and used with permission of the author. All other rights reserved by the author.

What is Model-Based Testing?

Link to blog: https://blogs.msdn.microsoft.com/specexplorer/2009/10/27/what-is-model-based-testing/

There are many different ways to test software. This blog written by Nico Kicillof defines what model-based testing is and what it is used for.

Model-Based Testing is a lightweight formal method used to validate software systems. It is considered “formal” because it works out of a formal specification or model of the software system. It is “lightweight” since it doesn’t aim at mathematically proving that the implementation matches the specification under all possible circumstances. Here is an image below from the link:

(Kicillof includes a diagram at this point in the post that illustrates model-based testing in a nutshell.)

The difference between a lightweight method and a heavyweight method, as Kicillof likes to put it, is the difference between sufficient confidence and complete certainty.

The first step of the process starts with a set of requirements, which could be written by the development team or by you, the developer. The next step is to create a readable model that shows all the possible behaviors of a system meeting the requirements you were given. Keep the model manageable by choosing the right level of abstraction. Kicillof emphasizes this because it makes sense: if a model isn’t manageable, it is basically a bad model to use. It would be difficult to edit or reconfigure your model if you ran into a problem that your model didn’t clearly show how to fix.

Kicillof demonstrates model-based testing through Microsoft Spec Explorer. He explains that in Spec Explorer, models are written as a set of rules in the mainstream language C#, which makes them less difficult to learn compared to ad-hoc formal languages.
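To give a rough feel for the idea (Spec Explorer itself uses C# rules; this Java example is entirely my own sketch, not Spec Explorer syntax): a model is a set of rules describing which actions are allowed in which states. A test generator can then walk the model to produce test sequences and compare the implementation against it.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of a tiny behavioral model: a login system with rules that say
// which actions are enabled in which states.
public class LoginModel {
    enum State { LOGGED_OUT, LOGGED_IN }

    private State state = State.LOGGED_OUT;
    private final List<String> trace = new ArrayList<>();

    // Rule: login is only enabled when logged out.
    public boolean login() {
        if (state != State.LOGGED_OUT) return false;
        state = State.LOGGED_IN;
        trace.add("login");
        return true;
    }

    // Rule: logout is only enabled when logged in.
    public boolean logout() {
        if (state != State.LOGGED_IN) return false;
        state = State.LOGGED_OUT;
        trace.add("logout");
        return true;
    }

    // The sequence of actions taken so far; a generated test case, in effect.
    public List<String> trace() {
        return trace;
    }
}
```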

Kicillof’s explanation of how model-based testing works made me understand the concept better than before. The diagram he provides illustrates the process of model-based testing, and the numbers near each part of the process made it easy to follow. I chose this blog because I wanted to know a little more about model-based testing. I also wanted to think about how it could be used in my future career in video game development, since it is another great way to test software. Depending on what games I’ll be creating in the workplace, knowing model-based testing is important because I think it is an easier formal way of testing compared to other types.

From the blog CS@Worcester – Ricky Phan by Ricky Phan CS Worcester and used with permission of the author. All other rights reserved by the author.

TS>JS (For Scaling)

Before this class, I had heard of Angular and had always heard it referred to as a JavaScript framework. I guess technically it is, because it still compiles to JavaScript, but the framework itself is written in the JavaScript superset TypeScript. I had never heard of TypeScript before this class, and the idea of a superset language was also a new concept. I was curious why Angular was written in TypeScript over JavaScript or other supersets. I know from outside sources that the first iteration of Angular was written in JavaScript, but every iteration of the framework since has been written in TypeScript, and I wanted to know why. This post by Victor Savkin lays out the benefits of using TypeScript as a way of explaining why Google chose TypeScript for Angular.

Victor clears up my confusion about what a superset is relatively simply. He explains that because TypeScript is a superset of JavaScript, everything you can do in JavaScript, you can do in TypeScript. With small changes to the code, a “.js” file can be renamed to a “.ts” file and compile properly.

Victor emphasizes the lack of scalability in vanilla JavaScript. With JavaScript, large projects are much more difficult to manage for many reasons, including weak typing and the lack of interfaces. TypeScript solves these problems, and its syntax helps create better-organized code that is much easier to read through than JavaScript.

The lack of interfaces in JavaScript is a major reason why it doesn’t scale well for large projects. Victor uses a code example showing that aspects of a class can get lost in JavaScript because of its lack of explicitly typed interfaces. The code is then reshown in TypeScript and is much clearer because you can actually see the interface. It’s that simple: in JavaScript, you have to really focus on how every object interacts with every other object because there are no explicitly typed interfaces. TypeScript lets you create these interfaces, so future code maintenance is less stressful and less bug-filled.

TypeScript also offers explicit declarations, illustrated through more code samples in the article. The JavaScript code is a simple function signature that lacks any typing or clear information about the variables involved. In the TypeScript version, the types of the variables are visible, making the function much clearer.

There are other reasons defined in this article on why TypeScript was chosen and what makes it so great, but these are a few of the ones that stuck out to me. I’ve written JavaScript projects before and wondered how some of these concepts would affect larger projects, and now I know how they can be resolved. This article and information will prove useful for writing my final project and any future TypeScript-based projects that I decide to take on in the future.

Original Post: https://vsavkin.com/writing-angular-2-in-typescript-1fa77c78d8e8

 

From the blog CS@Worcester – Learning Software Development by Stephen Burke and used with permission of the author. All other rights reserved by the author.

Revisiting Polymorphism

This week I found this article on JavaWorld by Jeff Friesen that is entirely about polymorphism. Polymorphism is a core concept for all software development, and a lack of strong understanding will lead to weak applications and poorly written code; one can never revisit a topic like polymorphism too much. The article discusses the four types of polymorphism, along with upcasting and late binding, abstraction, downcasting, and runtime type identification (RTTI). It presents everything in the form of code examples and constantly revisits them in the explanation process, which I find incredibly useful.

The first part of the article is an overview of the four types of polymorphism: coercion, overloading, parametric, and subtype. Of these four, some bits stuck out more to me than others. For coercion, the obvious example is passing a subclass type to a method’s superclass parameter: at compilation, the subclass object is treated as the superclass to restrict operations. The simple example that is forgotten is performing math operations on different numeric types like ints and floats, where the compiler converts the ints to floats for the same reason as in the subclass-to-superclass case. Another piece in this first section that stood out to me was the overloading type of polymorphism. The obvious example here is overloading methods: in a class, you can have different method signatures for the same method name, and each will be called depending on the context. Again, there is a simple example that is overlooked in operators like “+”, which can add ints or floats, or concatenate Strings, depending on the types of the operands.
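Both of those overlooked examples fit in a few lines. This small illustration is mine, not from the article: coercion widens an int operand to double in mixed arithmetic, and overloading resolves the same method name by parameter type at compile time.

```java
// Sketch of coercion and overloading, two of the four types of polymorphism.
public class PolyKinds {
    static String describe(int n)    { return "int: " + n; }
    static String describe(double d) { return "double: " + d; }

    public static void main(String[] args) {
        // Coercion: the int 1 is widened to 1.0 before the addition.
        double sum = 1 + 2.5;
        System.out.println(sum);               // prints "3.5"

        // Overloading: the compiler picks the matching signature.
        System.out.println(describe(42));      // prints "int: 42"
        System.out.println(describe(4.2));     // prints "double: 4.2"
    }
}
```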

The upcasting section uses our favorite example, different types of shapes, and is almost entirely written in reference to one line of code: Shape s = new Circle();. This upcasts from Circle to Shape. After this happens, the reference has no knowledge of the Circle-specific methods and only knows about the superclass Shape’s methods. When a shared method (implemented in both Shape and Circle) is called, late binding means the Circle implementation is what actually runs at runtime, even though the reference type is Shape. Similar to this section is the one on abstraction. The article uses the Shape and Circle example again but points out that having a draw() method that does something in Shape does not make sense from a realistic standpoint. Abstraction resolves this: in an abstract class, a method like draw() can be declared without an implementation, and the superclass Shape cannot be instantiated at all. This also means that every abstract superclass needs subclasses to provide its behaviors.
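The shapes example can be sketched end to end. This is my own rendering of the pattern the article walks through (the draw() return value is invented for demonstration): the reference type is Shape, but late binding runs the Circle override at runtime.

```java
// Sketch of upcasting, late binding, and abstraction with the Shape example.
abstract class Shape {
    abstract String draw(); // no sensible default, so Shape stays abstract
}

class Circle extends Shape {
    @Override
    String draw() { return "drawing a circle"; }
}

public class UpcastDemo {
    public static void main(String[] args) {
        Shape s = new Circle();        // upcast: the Circle is treated as a Shape
        System.out.println(s.draw());  // prints "drawing a circle" (late binding)
        // Through s, only Shape's methods are visible; reaching a
        // Circle-specific method would require a downcast: ((Circle) s).
    }
}
```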

These are just a few parts of the article that stood out to me personally. The article as a whole is incredibly useful and worthy of a bookmark by every developer. I will continue to revisit this article if I find myself lacking confidence about the topic of polymorphism in the future!

Original Article: https://www.javaworld.com/article/3033445/learn-java/java-101-polymorphism-in-java.html

From the blog CS@Worcester – Learning Software Development by Stephen Burke and used with permission of the author. All other rights reserved by the author.

Blog 5

Soft Qual Ass & Test

From the blog CS@Worcester – BenLag's Blog by benlagblog and used with permission of the author. All other rights reserved by the author.

Abstraction

From the blog CS@Worcester – Computer Science Exploration by ioplay and used with permission of the author. All other rights reserved by the author.