Author Archives: Nathan Posterro

Gradle Build

Today I will be talking about Gradle! For this I will be referencing an article called "Why Build Your Java Projects With Gradle Rather than Ant or Maven?". The article discusses the pros and cons of building Java projects with Gradle instead of similar tools such as Maven and Ant. It starts by pointing out that project builds used to be a fairly simple process that did not require dedicated tools like Gradle, Ant, or Maven; most builds just entailed compiling and packaging software. As the article mentions, with the rise of more complex, agile-style development, building, compiling, and packaging software has become a much larger process. So why does the article prefer Gradle to another build tool such as Ant or Maven?

One of the top reasons the article prefers Gradle is that it is simple and user friendly. The article points out that frustration is a common experience when working with build tools, and I personally had that experience with Maven about two or three semesters ago, when we were using build tools in our software class. Building with Maven was a giant headache: it had to be done through Linux, so if you happened to be one of the few people who don't primarily run Linux (yes, that is sarcasm), you had to set up a Linux environment inside Windows and run Maven from there. That environment did not have access to the Windows file system unless you mounted your C drive every time you wanted to browse files. This was a huge hassle for me, and it is a problem I have not encountered with Gradle.

Gradle can be run from a Git Bash terminal, or any other terminal on your host operating system for that matter. I really enjoy how easy Gradle is to use. Building with Gradle is as simple as running "gradle build" on your master branch; it builds the project and tells you whether there are any errors and what they are. You can also generate HTML versions of your test reports for very detailed debugging without having to do everything in the terminal. I have learned a lot from using Gradle, and I look forward to using it more in the future to build my projects with ease.

Here’s the link: http://www.drdobbs.com/jvm/why-build-your-java-projects-with-gradle/240168608

From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

Path Testing in Software

Hello! Today's topic of discussion is path testing in software development. The article up for discussion is "Path Testing: The Coverage" by Jeff Nyman, so let's get right into it. What exactly is path testing? Path testing is a method of testing that involves traversing paths through the code to ensure that the entire program gets test coverage. The idea is to build program graphs that your tests can be mapped against. This is done by graphing your program with nodes, which represent individual lines of code, methods, or bodies of code; the edges between the nodes represent the flow from one to the next. If a program graph is made correctly, you should easily be able to identify the flow of the program, and structures such as loops should be distinctly represented. So what's the big deal with testing using these program graphs? When we test, we want to make sure we cover all of our code and the relationships between its different parts. If we treat those parts as nodes, we can test against the program graph by making sure that every node is traversed during testing, as well as every relationship (edge) between the nodes. This is an important concept for working toward complete test coverage and for making sure that the different parts of the code work together as they should.

I have drafted a few program graphs for programs in my testing class, and I have to say that they make the objective of the code entirely clear. By objective, I mean I can tell exactly what is supposed to happen in the code and the exact order of execution. Loops stand out in these graphs because they are represented by an edge going from one node back up to a node above it. If there are two nodes between the first node and the node that loops back up to it, then I know that nodes 1 through 4 form a loop of some sort: nodes 2 and 3 are some sort of method body that performs an action within the loop, while node 1 is the beginning of the loop and node 4 is the exit-condition node. You can test the edges between the nodes to ensure the relationships between nodes are correct, and test the nodes themselves to ensure they work as desired. I think program graph testing is a great way to visualize testing, and I hope to use it a lot in the future.
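
To make this concrete, here is a small hypothetical Java method with node numbers added as comments. The example and the numbering scheme are mine, not Nyman's, but they show how lines of code map onto graph nodes and how a loop shows up as an edge going back to an earlier node.

```java
public class PathTestingExample {

    // Sums the elements of an array; the node labels are comments only.
    public static int sum(int[] values) {
        int total = 0;              // node 1: entry and initialization
        int i = 0;                  // node 1 (still part of the setup)
        while (i < values.length) { // node 2: loop condition (decision node)
            total += values[i];     // node 3: loop body
            i++;                    // node 4: increment; edge loops back up to node 2
        }
        return total;               // node 5: exit node
    }

    public static void main(String[] args) {
        // Two test paths: one that never enters the loop and one that takes the
        // loop-back edge, so every node and every edge gets traversed.
        System.out.println(sum(new int[] {}));        // path: 1 -> 2 -> 5
        System.out.println(sum(new int[] {1, 2, 3})); // path: 1 -> 2 -> 3 -> 4 -> 2 -> ... -> 5
    }
}
```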

Here’s the link: http://testerstories.com/2014/06/path-testing-the-coverage/


From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

Testing With Mocking

For today's blog I will be discussing the topic of testing with mocking. I recently read an article called "Mock? What, When, How?" by Lovis Moller, so let's jump right in. What is mocking in terms of testing? Think about your code as a giant puzzle. All the pieces fit together, but they fit together in specific ways. Say you're missing a piece of the puzzle because your partner hasn't finished their part of the code yet. The problem becomes: how can I test my code without my partner's piece of the puzzle? The answer is mocking. Mocking lets us emulate, or "mock", the puzzle piece we are missing so that we can test our code without actually having the rest of the pieces. So mocking seems like a pretty good idea, something we should use all the time, right? Wrong. The article talks about some of the pitfalls of mocking. First, it says to mock only code you own: do not mock third-party code, simply because it is not yours and you do not know exactly how it works or how it should work. Another rule of thumb is to avoid mocking values and concrete classes. You should avoid mocking values because the objective of mocking is not to test for specific values; it is meant to test the relationships and interactions between different classes (pieces of the puzzle). Concrete classes should not be mocked because of all the extras (methods, unused lines of code, etc.) that come along with a concrete class. It is more effective to mock against an interface in this case.
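
As a concrete sketch, here is what that might look like in Java with JUnit 4 and Mockito (a common mocking library; the article does not prescribe a specific tool). The PaymentGateway interface and OrderService class are hypothetical names I made up for the missing "puzzle piece" and the code under test.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class OrderServiceTest {

    // Hypothetical collaborator interface: the "puzzle piece" a teammate is still writing.
    interface PaymentGateway {
        boolean charge(String account, double amount);
    }

    // Hypothetical class under test that depends on the interface above.
    static class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }
        boolean placeOrder(String account, double amount) {
            return gateway.charge(account, amount);
        }
    }

    @Test
    public void placingAnOrderChargesTheAccount() {
        // Mock the missing piece instead of waiting for the real implementation.
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("alice", 20.0)).thenReturn(true);

        OrderService service = new OrderService(gateway);
        assertTrue(service.placeOrder("alice", 20.0));

        // Verify the interaction between the classes, not a value inside the mock.
        verify(gateway).charge("alice", 20.0);
    }
}
```

Note that the mock targets an interface we own, in line with the article's advice about not mocking third-party code or concrete classes.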

In the past few weeks we have been doing testing with mocking in our CS class. I think it is a really neat and useful way to test because you can get work done on your end without having to wait for someone else to finish. In a world where we work in groups to complete projects, human error is always an issue. We could have our part of the code done weeks before a deadline while a partner waits until the night before. The problem is that we might not be able to proceed with testing and fixing our code because what our code does relies on our partner's code, and we certainly do not want to wait until the night before a deadline to test and fix everything. Mocking helps us with this by emulating the code we depend on. I hope to use it more in the future.

Here’s the link to the blog: https://blog.codecentric.de/en/2018/03/mock-what-when-how/


From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

Boundary Value and Equivalence Partitioning

For today's blog I am going to be referencing another blog post called "What is Boundary Value Analysis and Equivalence Partitioning?" by Ulf Eriksson. The article describes boundary value testing as a way to test the values at the edges of the valid input ranges in your test cases. Boundary testing is important so that we as testers can figure out where the valid input range lies and avoid mistakenly using inputs outside of that range (the invalid range). It also matters because if an end user does enter an invalid input, the program should know how to handle it rather than crashing and burning. The other technique the article covers is equivalence partitioning. This is a little different from boundary value testing in that it divides the input domain into ranges of values and selects one input from each range. It is a form of black-box testing because you are testing values without knowing what is going on inside the program and, in turn, without knowing the exact output. There is, however, an expected output that you determine before testing, and it is proved either true or false based on the actual output of the test. The best way to differentiate between the two techniques is summed up very nicely in the article: boundary value testing probes the edges of the valid range of inputs, whereas equivalence partitioning slices that valid input area (as determined by the boundary value tests) into equal parts and selects one value from each partition to test.

I think this is a very constructive way to test within a valid value range because you get a good spread of test values. In other words, the values you test come from across the whole spectrum of valid inputs, because you partition the range equally and select a value from each partition. I have been using this idea in class by selecting test values such as min, min+, nom, max-, and max. We have learned in my software class that testing with each of these values gives a good showing of how accurate and thorough your tests are. Thinking about these five test values as partitions within the valid boundary range will help me find them more easily in the future.
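
Here is a small, hypothetical JUnit example of what this looks like in practice. The validator and its valid range of 1 to 100 are assumptions of mine, not from the article; they just show the min, min+, nom, max-, and max values plus the inputs just outside the boundaries.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class QuantityValidatorTest {

    // Hypothetical validator with a valid range of 1 to 100, used only for illustration.
    static boolean isValidQuantity(int quantity) {
        return quantity >= 1 && quantity <= 100;
    }

    @Test
    public void boundaryValuesInsideTheValidRangePass() {
        assertTrue(isValidQuantity(1));    // min
        assertTrue(isValidQuantity(2));    // min+
        assertTrue(isValidQuantity(50));   // nom (one value from the middle partition)
        assertTrue(isValidQuantity(99));   // max-
        assertTrue(isValidQuantity(100));  // max
    }

    @Test
    public void valuesJustOutsideTheBoundariesAreRejected() {
        assertFalse(isValidQuantity(0));   // just below min
        assertFalse(isValidQuantity(101)); // just above max
    }
}
```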

Link: https://reqtest.com/testing-blog/what-is-boundary-value-analysis-and-equivalence-partitioning/


From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

How to Write Better Unit Tests

Today I will be talking about a blog post called "Unit Testing, How to Write Testable Code and Why it Matters". The post talks about the importance of unit testing for anyone who is a software developer: what unit testing is, what it consists of, and something I had never heard of until today, the three A's of unit testing: Arrange, Act, Assert. We will talk about the three A's in more detail later. Another thing the post covers is unit testing versus integration testing. It sums up the difference as follows: unit tests have a narrow scope and test just one small part of the program, whereas integration tests check how the "pieces" of code fit together and work hand in hand. Essentially, integration testing is a larger-scale version of unit testing. So what exactly makes a good unit test? According to the post, good unit tests are easy to write, readable, reliable, fast, and truly unit tests (not integration tests). This is where the three A's come into play. Once your tests adhere to those rules of good unit testing, you can structure each one around the three A's. First we Arrange: we set up the object under test and any inputs or dependencies it needs. Next we Act: we call the function we are testing with some input to exercise its behavior. Lastly, we Assert: we state what we know the output should be and compare it to the actual output of the function we are testing.
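
A minimal sketch of that structure in JUnit, using a made-up Calculator class (the post does not provide this code):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {

    // Hypothetical class under test, included only to make the example self-contained.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    @Test
    public void addReturnsTheSumOfItsArguments() {
        // Arrange: set up the object and the inputs the test needs.
        Calculator calculator = new Calculator();

        // Act: call the one unit of behavior we are testing.
        int result = calculator.add(2, 3);

        // Assert: compare the actual output to the expected output.
        assertEquals(5, result);
    }
}
```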

I think this is an extremely useful article because it has given me a lot of insight into how to write good unit tests and how to make the testing cycle easier by remembering the three A's of unit testing: Arrange, Act, Assert. I hope to use the three A's in my future programs so that my testing goes smoothly and quickly. Another thing the article cleared up for me is unit testing versus integration testing. For me, the two have always been somewhat interchanged and intermingled, since they are very closely related. I think many people know there is a difference between the two but do not know exactly what that difference is. It helps to think of integration tests as larger-scale tests made up of smaller unit tests, showing how those units work together. I think this article was overall a really good read, and it will change my approach to writing tests for the better.

Here's the link: https://www.toptal.com/qa/how-to-write-testable-code-and-why-it-matters

From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

Agile Testing

For today's blog I am writing about agile testing. I found a blog post on the subject called "A Coach Guide to Agile Testing". The post talks about the different kinds of agile testing and the different types of teams, groups, and personalities that use agile testing in different ways. It mentions that people who use traditional testing approaches often do not like agile testing because they consider it a threat to their job, which is to identify discrepancies between the working system and the specifications. Agile is different because it does not produce specification documents detailed enough for a tester to do that job in the traditional way. Something I found interesting was the discussion of traditional testers and their approach of "follow the specifications and report how the system differs from them". The article says that checking how closely a program follows its specifications does not actually say anything about the quality of the program. I think this point really hit home, because you can make a program that follows the specifications to a tee and it can still be a poorly designed or poorly running program.

For example, say you are building a program to manage someone's bank account. You would need methods to deposit, withdraw, check the balance, transfer money, and so on. Now let us say the only specification for the program is that it compiles and runs. We could write a program that compiles and runs, and in the sense of traditional software testing we would have a passing program. However, what if the deposit method didn't actually add money to the account, or the withdraw method added money instead of taking it out? According to traditional software testing we have a functional program, when in reality we know it does not do what it is supposed to. This is where agile testing comes in handy: it tests every method in our program (not just the stated specifications) to make sure our methods actually do what they are supposed to do. This is why I like agile testing and try to apply it in all my programs. It is a good way to ensure you have a quality product that functions as designed and has been tested. I hope to continue using agile testing in my future programs, and reading this article has given me some new tips and tricks to think about when doing it.
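
As a rough sketch of what such behavior-level tests might look like, here is a JUnit example. The BankAccount class is my own hypothetical version of the scenario above, not something from the article.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class BankAccountTest {

    // Hypothetical account class matching the scenario described above.
    static class BankAccount {
        private double balance;
        void deposit(double amount) { balance += amount; }
        void withdraw(double amount) { balance -= amount; }
        double getBalance() { return balance; }
    }

    @Test
    public void depositAddsMoneyToTheBalance() {
        BankAccount account = new BankAccount();
        account.deposit(100.0);
        assertEquals(100.0, account.getBalance(), 0.001);
    }

    @Test
    public void withdrawRemovesMoneyFromTheBalance() {
        // This test would fail immediately if withdraw accidentally added money,
        // even though a "compiles and runs" specification would still be satisfied.
        BankAccount account = new BankAccount();
        account.deposit(100.0);
        account.withdraw(40.0);
        assertEquals(60.0, account.getBalance(), 0.001);
    }
}
```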

Here’s the link: http://www.softwaretestingmagazine.com/knowledge/a-coach-guide-to-agile-testing/


From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

JUnit Best Practices

For this week, I read an article called "JUnit Best Practices" by Kyle Blaney. The article talks about how to use JUnit in the most effective way and get the most out of your tests. According to the article, the two main goals of unit testing are that tests be "extremely fast" and "extremely reliable": your tests should run as quickly as possible while also producing correct, repeatable results. The article then points out a number of important practices for unit testing, such as ensuring unit tests run completely in memory (tests should not read from the filesystem), not skipping unit tests, having each test check only one thing, using the strongest assertions possible, and much more.
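
Here is a small, hypothetical JUnit example of two of those practices, strong assertions and one check per test. The Greeter class is a placeholder of mine, not something from the article.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class GreeterTest {

    // Hypothetical class under test.
    static class Greeter {
        String greet(String name) { return "Hello, " + name + "!"; }
    }

    @Test
    public void greetingContainsTheExactExpectedText() {
        Greeter greeter = new Greeter();
        // Strong assertion: check the exact value rather than something weaker
        // like assertNotNull or assertTrue(result.contains("Hello")).
        assertEquals("Hello, Alice!", greeter.greet("Alice"));
    }

    @Test
    public void greetingHandlesAnEmptyName() {
        // Each test checks exactly one thing and touches no files or network,
        // so it stays fast and reliable.
        Greeter greeter = new Greeter();
        assertEquals("Hello, !", greeter.greet(""));
    }
}
```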

I thought the article was extremely informative and very useful. We will be looking at unit testing in CS 443 this semester, and having this information should help me produce better unit tests, whether I write them from scratch or simply use JUnit as effectively as possible. I found it useful that the article emphasized using the strongest assertions possible and making sure your tests are reliable. A strong assertion means that if the unit test passes, the code should hold up in the long run. However, none of that matters if your tests are unreliable and do not produce correct results, so making sure your tests are reliable is a huge part of the testing process. With the combination of correct, fast tests and strong assertions, your unit tests should be very strong. I like that idea, because if my code can get through the strongest, hardest tests, then the code itself should be solid overall.

I agree with all the points made in the article, and I hope to use the information it presented the next time I do unit testing. I would like to see how applicable these practices are in real-life testing and whether they all hold up as they are supposed to. I am looking forward to unit testing because I now know some very important aspects to look out for.

Here’s the link: http://www.kyleblaney.com/junit-best-practices/


From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

Introductory Post

This is my introductory blog post for CS-443 Fall 2018.

From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

The Decorator Pattern

Today I will be talking about the decorator pattern. I found a post on the decorator pattern on Bambielli's Blog. According to the post, the decorator pattern allows objects to take on new responsibilities at runtime without changing any code in their underlying classes. The author then makes a point of favoring composition over inheritance, because it can reduce the frequency of runtime errors and conforms to the "open for extension, closed for modification" principle of good programming. Decorating a class is essentially extending objects to add functionality at runtime, which keeps things current and up to date with user demands. In the UML diagram provided in the post, the abstract decorator class extends the abstract component, which gives the abstract decorator the implementations to be used in place of the component's implementations. The concrete decorator classes extend the abstract decorator class and provide it with the overridden methods.


The blog uses a pizza shop as an example. Pizza can come with a variety of toppings and crusts, and the decorator pattern allows us to construct any combination of them. The concrete classes DeepDish and ThinCrust extend the abstract class Pie. The decorator pattern comes into effect with the topping decorator: the abstract class ToppingDecorator also extends Pie, and it has two concrete decorators, PepperoniDecorator and CheeseDecorator.
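
Here is a minimal Java sketch of the classes the post describes. The class names come from the post, while the method bodies and output strings are placeholders of my own.

```java
// Abstract component: any kind of pie.
abstract class Pie {
    abstract String description();
}

class ThinCrust extends Pie {
    String description() { return "thin crust pie"; }
}

class DeepDish extends Pie {
    String description() { return "deep dish pie"; }
}

// Abstract decorator: extends Pie, so a decorated pie can be used
// anywhere a plain Pie is expected, and wraps the pie it decorates.
abstract class ToppingDecorator extends Pie {
    protected final Pie pie;
    ToppingDecorator(Pie pie) { this.pie = pie; }
}

class CheeseDecorator extends ToppingDecorator {
    CheeseDecorator(Pie pie) { super(pie); }
    String description() { return pie.description() + " + cheese"; }
}

class PepperoniDecorator extends ToppingDecorator {
    PepperoniDecorator(Pie pie) { super(pie); }
    String description() { return pie.description() + " + pepperoni"; }
}

class PizzaShop {
    public static void main(String[] args) {
        // Toppings are composed at runtime in any combination.
        Pie order = new PepperoniDecorator(new CheeseDecorator(new DeepDish()));
        System.out.println(order.description()); // deep dish pie + cheese + pepperoni
    }
}
```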


The blog points out some disadvantages of the decorator pattern, one being that decorating objects manually is a hassle because of the large number of parameters to pass. The solution? Use the factory pattern in combination with the decorator pattern to build the objects for you, eliminating the need to pass all the parameters manually.
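
For illustration, here is a tiny, hypothetical factory along those lines, reusing the pizza classes from the sketch above; nothing like this appears verbatim in the post.

```java
// Hypothetical factory that hides the nested decorator construction, reusing
// the Pie, DeepDish, CheeseDecorator, and PepperoniDecorator classes defined earlier.
class PizzaFactory {
    static Pie pepperoniCheeseDeepDish() {
        return new PepperoniDecorator(new CheeseDecorator(new DeepDish()));
    }
}

class FactoryDemo {
    public static void main(String[] args) {
        // The caller asks for a named combination instead of wiring decorators by hand.
        System.out.println(PizzaFactory.pepperoniCheeseDeepDish().description());
    }
}
```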


I chose this pattern because I am always looking to improve my code and make it more efficient, and learning different design patterns is a great way to do that. The only catch with design patterns is that they aren't always widely applicable; most are specific to a small number of situations, so learning more of them helps me handle more of the situations I face when coding. I think this post really helped my understanding of the decorator pattern. I had looked at a few others before this one, and the decorator pattern still seemed pretty vague to me; the pizza example here really helped solidify my understanding. The other posts had lots of code samples and not a lot of explanation. Though simple and basic, I really liked the pizza topping example, and I hope to apply this pattern in future practice. I would like to build a program that lets users make a number of different combinations (such as a sandwich topping program, or a car customizer for color, size, etc.), and this pattern seems really effective for creating whatever combination of choices the user asks for.


Here’s the link: http://www.bambielli.com/posts/2017-04-02-decorator/


From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

Angular and the OnPush Detection Strategy

Today I will be discussing a post from the Thoughtram blog on how to make Angular applications fast. The article notes that "fast" depends on the context of the situation; typically, fast means best performance. The author has a sample application that contains two components: AppComponent (which runs the application) and BoxComponent (which draws 10,000 boxes at randomized coordinates). This is the default case he measures against. To measure the application's performance, the author wants to know how long Angular takes to perform a task once it has been triggered. For this he uses Chrome DevTools, specifically the Timeline tool, which can profile JavaScript execution. When the author measured the performance of the default code, the runs ranged from 40 ms to 61 ms. To make the code run faster, the author suggests a few different Angular strategies. Due to word limits, in this blog I will only discuss the OnPush strategy.


Angular's OnPush setting changes a component's change detection strategy. It is used to reduce the number of checks Angular makes when something in the application changes, and when the author applies it to his application, he is able to reduce those checks. How does he apply it? All he has to do is change the detection strategy by adding a line to the component configuration of BoxComponent (the part of the application that draws the boxes), something along the lines of "changeDetection: ChangeDetectionStrategy.OnPush". He then exports his components, which now use the OnPush detection strategy. After rerunning the code, the optimized run times range from 21 ms to 44 ms, a drastic improvement over the default.


I chose this post because I have a project to do in Angular, which I am very new to. I have always been a fan of optimized, clean, and readable code: nobody likes code that takes forever to run, and nobody can understand code that is a big mess of spaghetti. I have always strived to keep my code minimal, clear, and concise, because it makes it easier for me to go back and fix, review, or otherwise change it. I think optimizing code is super important, because slow programs aren't practical. I hope to apply this strategy when I build my Angular project. Even if I don't get the chance to tinker with the detection strategy, I would at least like to look into Chrome's DevTools and measure my project's performance.


Here’s the link: https://blog.thoughtram.io/angular/2017/02/02/making-your-angular-app-fast.html

From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.