Author Archives: calebscomputerscienceblog

What to Consider When Doing a Technical Review on Object Oriented Code

For this week’s blog post I decided to write about important things to consider when doing a technical review on object-oriented code. I chose this topic because I figured it would serve as a good refresher on key object-oriented concepts that we should keep in mind going forward with the next software testing project. I thought that revisiting good practices of object-oriented design would make us more aware of what to look for while writing our software technical review of the Sir-Tommy Solitaire program.

I found a short book entitled Object Oriented Programming, written by a software professional named Carl Erickson. Erickson is also the CEO and founder of a software company called Atomic Object. Although the book was written in 2009, I think it is still relevant to object-oriented design today. There are many sections in Erickson’s book, but I plan on looking at just the few that I thought were most relevant to writing our software technical review: section 4 (OO Naming Conventions), section 8 (Encapsulation & Modularity), and section 9 (Object-Oriented Hierarchy).

In section 4, Naming Conventions, Erickson briefly touches upon good practice for naming classes, methods, and variables. He explains that class names should have the first letter of every word capitalized, whereas object variable and method names should begin with a lowercase first word, with every following word beginning with a capital letter. This probably isn’t news to any of us in CS-443, but it’s definitely something to consider when writing our software technical reviews.
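As a quick illustration, a class following these conventions might look like this (the class and its members are made up for the example):

```java
// Class name: first letter of every word capitalized ("PascalCase")
public class SolitaireGame {

    // Variable name: lowercase first word, capitals afterward ("camelCase")
    private int cardsRemaining;

    // Method name: same camelCase convention as variables
    public int getCardsRemaining() {
        return cardsRemaining;
    }
}
```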

Next, in sections 8 and 9, Erickson explores the ideas of class encapsulation and hierarchy. Encapsulation is an important idea to consider when writing software technical reviews because improper encapsulation, such as declaring a variable public when it should be private, can lead to serious headaches down the road. For example, if something in the code goes wrong with regard to a public variable, it may be very difficult to track down the culprit of the bug, since any part of the program could have modified that variable. Erickson also brings up the notion of the “is a” relationship between classes, sub-classes, and interfaces, which is significant in determining the flexibility of the code in question.
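Here is a minimal sketch of the contrast, assuming a hypothetical Deck class:

```java
public class Deck {
    // Declared public: any class in the program can modify this directly,
    // so if the value ever becomes invalid, the bug could be anywhere.
    public int cardCount;

    // Declared private: every change must go through the class's own
    // methods, so there is exactly one place to look when a bug appears.
    private int cardsLeft;

    public void dealCard() {
        if (cardsLeft > 0) {
            cardsLeft--;
        }
    }
}
```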

It’s also worth reading over the rest of the sections in Erickson’s book, in particular sections 10–14, as they go over essential concepts of object-oriented design. Section 13 does a good job of explaining how to evaluate whether a piece of object-oriented code is designed and implemented efficiently by focusing on whether the code adheres to user specifications and object-oriented design techniques. Keeping in mind the foundational concepts of object-oriented design may allow us to make more insightful suggestions and write better technical reviews.

From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

Best Practices for Naming Your JUnit Test Cases

Over the past couple of weeks in our Software Quality Assurance and Testing course, we’ve been working on writing code using Test-Driven Development as well as coming up with JUnit test cases based on code that was already written. Over this time, I’ve come to notice that naming conventions for test cases can prove to be a little challenging in some situations. This is especially the case when writing tests in a Behavior-Driven Development (BDD) manner. BDD, I feel, has become the norm when it comes to naming conventions for JUnit tests. What this means is that tests are written specifically to test certain expected behaviors of methods and classes, and are named as such. To clarify this, let me give an example using the code we were working on in class. Imagine you write a test case for the readyToGraduate() method in our Student class. You will most likely have multiple test cases for this method, since there are multiple factors to consider when determining if a student is ready to graduate. One of your JUnit tests might check that the method returns false when the student’s LASC and major requirements are complete and they have obtained enough credits, but their GPA is less than 2.0. A possible name for this test, following the BDD practice of naming, might be something like this (the Student setters below are assumed for illustration):
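```java
import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class StudentTest {

    @Test
    public void testReadyToGraduateReturnsFalseWhenLascCompleteAndMajorCompleteAndEnoughCreditsButGpaBelowTwoPointZero() {
        // hypothetical setters; the real Student class from class may differ
        Student student = new Student();
        student.setLascComplete(true);
        student.setMajorComplete(true);
        student.setCredits(120);
        student.setGpa(1.9);
        assertFalse(student.readyToGraduate());
    }
}
```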


As you can see, this is kind of a mouthful. Even though the name describes the behavior we should expect with the given input, the readability of this code is pretty lousy. That’s why I decided to do some research on different naming conventions for JUnit test cases for this week’s blog post.

I found a blog post titled Getting JUnit Test Names Right, written by Frank Appel, a software engineering professional, in which he addresses the same problem I have just described. Essentially, what Frank suggests in his blog post is that we keep naming simple, which may mean making our test names less descriptive, and use good naming conventions for the methods and variables inside the test case to enhance readability. He explains that test names can be kept simple by describing only state information. Hence, the test case I described earlier may look something like this (again assuming a hypothetical Student API):
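```java
// in the same StudentTest class as the earlier example
@Test
public void readyToGraduateIsFalseWhenGpaTooLow() {
    // descriptive names inside the test carry the detail the name leaves out
    Student studentWithRequirementsMetButLowGpa = new Student();
    studentWithRequirementsMetButLowGpa.setLascComplete(true);
    studentWithRequirementsMetButLowGpa.setMajorComplete(true);
    studentWithRequirementsMetButLowGpa.setCredits(120);
    studentWithRequirementsMetButLowGpa.setGpa(1.9);
    assertFalse(studentWithRequirementsMetButLowGpa.readyToGraduate());
}
```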


Although the name of this test may apply to multiple tests with different inputs, we could address this by simply adding a number at the end of the name for each input value being tested. Frank addresses this issue as well in his explanation. He says that although this naming convention may lead to names that could apply to a variety of tests, using good naming conventions inside your test method will clear up some of the vagueness. Also, as Frank points out, since the JUnit reporting view provides pretty descriptive messages when a test fails, this will also help clear up some of the ambiguity.

From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

A Closer Look into JUnit 5

For my blog post this week I wanted to take a closer look into what to expect with JUnit 5. Last class, Professor Wurst gave us a brief rundown of some of the nifty features and functionalities that were introduced in JUnit 4, such as testing for exceptions and implementing the assertThat() function. Seeing as the new JUnit framework, JUnit 5, was just released this past August, I thought it would be interesting to take a look into what additional features were added in this new JUnit testing framework. I found a blog post, A Look at JUnit 5’s Core Features & Testing Functionality, written by Eugen Paraschiv, a software engineering professional, and thought it gave a pretty good rundown of what to expect with JUnit 5.

Paraschiv points out a few new and useful assertions that are implemented in the JUnit 5 testing framework: assertAll(), assertArrayEquals(), assertIterableEquals(), and assertThrows(). Assert-all is a pretty useful assertion because it allows you to group all the assertions within one test case together and report back the expected vs. actual results for each assertion using a MultipleFailuresError object, which makes understanding why your test case failed easier. Next, the assert-array-equals and assert-iterable-equals assertions are also highly useful, as they allow you to test whether or not a particular data structure (array, list, etc.) contains the elements you expected it to. In order to use these assertions, however, the objects in your data structure must implement the equals() method. Finally, the assert-throws assertion pretty much does what the “@Test(expected = SomeException.class)” annotation did in JUnit 4. I like this way of checking for exceptions much better, though, because it seems more intuitive and makes the test case easier to read.
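Here is a small sketch of what these assertions look like in use (the values are arbitrary):

```java
import static org.junit.jupiter.api.Assertions.*;
import java.util.Arrays;
import org.junit.jupiter.api.Test;

class JUnit5AssertionsSketch {

    @Test
    void groupedAssertions() {
        // assertAll runs every assertion and reports all failures
        // together via a MultipleFailuresError
        assertAll("arithmetic",
            () -> assertEquals(4, 2 + 2),
            () -> assertTrue(3 < 5));
    }

    @Test
    void collectionAssertions() {
        // the element types must implement equals() for these to work
        assertArrayEquals(new String[] {"a", "b"}, new String[] {"a", "b"});
        assertIterableEquals(Arrays.asList(1, 2, 3), Arrays.asList(1, 2, 3));
    }

    @Test
    void exceptionAssertion() {
        // replaces JUnit 4's @Test(expected = ArithmeticException.class)
        assertThrows(ArithmeticException.class, () -> {
            int result = 1 / 0;
        });
    }
}
```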

In his blog post, Eugen brings up a lot of cool new features implemented in JUnit 5, but the two that really stood out to me were (1) the introduction of the concept of assumptions and (2) conditional test execution. First, assumptions are new to JUnit 5 and I think they could prove extremely useful in practice. Essentially, assumptions are syntactically similar to assertions (the assumption methods are assumeTrue(), assumeFalse(), and assumingThat()), but they do not cause a test to pass or fail. Instead, if an assumption within a test case fails, the test case simply does not get executed. Second, conditional test execution is another cool new feature introduced in JUnit 5. JUnit 5 allows you to define custom annotations which can then be used to control whether or not a test case gets executed. I thought the idea of writing your own test annotations was really interesting, and I could definitely see this being useful in practice.
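A rough sketch of both ideas (the environment-variable check is made up, and @EnabledOnOs assumes a JUnit 5 release that ships the built-in condition annotations; custom annotations can be composed on top of these):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assumptions.assumeTrue;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.EnabledOnOs;
import org.junit.jupiter.api.condition.OS;

class AssumptionsSketch {

    @Test
    void onlyMeaningfulOnCiServer() {
        // if this assumption fails, the test is skipped rather than failed
        assumeTrue("CI".equals(System.getenv("ENV")));
        assertEquals(4, 2 + 2);
    }

    @Test
    @EnabledOnOs(OS.LINUX)  // a built-in conditional-execution annotation
    void onlyRunsOnLinux() {
        assertEquals(4, 2 + 2);
    }
}
```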

From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

Code Coverage Alone Probably Won’t Ensure Your Code is Fully Tested.

For this week’s CS-443 self-directed professional development blog entry I read a blog post written by Mark Seemann, a professional programmer and software architect from Copenhagen, Denmark. The blog post I read is entitled “Code coverage is a useless target measure,” and I found it to be quite relevant to the material we’ve been discussing in class the past couple of weeks, especially path testing and data-flow testing. In this blog post, Seemann urges project managers and test developers not to set a “code coverage goal” as a means of measuring whether their code is completely tested. Seemann explains that he finds this to be a sort of “perverse incentive,” as it could encourage developers to write bad unit tests for the simple purpose of covering code as the project’s deadline approaches and the pressure on them increases. He provides some examples of what this scenario might look like using the C# programming language.

In his examples, Seemann shows that it is pretty easy to achieve 100% code coverage for a specific class even though that doesn’t mean the code in the class is sufficiently tested for correct functionality. In his first example, Seemann shows that it is possible to write an essentially useless test, consisting of a try/catch block and no assertions, that exists solely for the purpose of covering code. Next, he gives an example of a test with an assertion that might seem legitimate, but Seemann shows that “[the test] doesn’t prevent regressions, or [prove] that the System Under Test works as intended.” Finally, Seemann gives a test in which he uses multiple boundary values and explains that even though it is a much better test, it hasn’t increased code coverage over the previous two tests. Hence, Seemann concludes that in order to show that software works as it is supposed to, you need to do more than just make sure all the code is covered by unit tests; you need to write multiple tests for certain portions of code and ensure correct outputs are generated for boundary-value inputs as well.
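To make the first example concrete, here is a Java analogue of the kind of assertion-free test Seemann warns about (his originals are in C#, and the Calculator class here is hypothetical):

```java
import org.junit.Test;

public class CalculatorCoverageTest {

    // This test executes -- and therefore "covers" -- every line of
    // divide(), yet it asserts nothing, so it passes no matter what
    // divide() actually returns or throws.
    @Test
    public void uselessButFullyCoveringTest() {
        try {
            Calculator calc = new Calculator();
            calc.divide(10, 2);
            calc.divide(10, 0);
        } catch (Exception e) {
            // swallowed purely to keep the test green
        }
    }
}
```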

The reason I chose Mark’s blog post for my entry this week was that I thought it related to the material we’ve been discussing recently in class, especially data-flow testing. It is important for us to remember, when we are using code-based testing techniques, that writing unit tests simply to cover the code is not sufficient to ensure the software is completely functional. Therefore, it’s probably a good idea to use a combination of code-based and specification-based techniques when writing unit tests.

From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

Finding & Testing Independent Paths

Since we have been going over path testing in class this past week, I decided to find a blog post relating to that material. The post I found, “Path Testing: Independent Paths,” is a continuation of two previous posts, Path Testing: The Theory and Path Testing: The Coverage, written by the same author, Jeff Nyman. In this blog post, Nyman offers an explanation of what basis path testing is as well as how to determine the number of linearly independent paths in a chunk of code.

Nyman essentially describes a linearly independent path as any unique path in the graph that does not contain the same combination of nodes as any other linearly independent path. He also brings up the point that even though path testing is mainly a code-based approach to testing, by assessing what the inputs and outputs of a certain piece of code should be, it is still possible “to figure out and model paths.” He gives the specific example of a function that takes in arbitrary values and determines their Greatest Common Divisor, and uses a flow-graph diagram of processes and decisions to show how he is able to determine each linearly independent path.
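For reference, a typical implementation of such a function might look something like this (Nyman’s exact code may differ):

```java
// Euclid's algorithm: the loop test is the decision node in the flow
// graph, and the statements in the loop body are the process nodes
static int gcd(int a, int b) {
    while (b != 0) {
        int temp = b;
        b = a % b;
        a = temp;
    }
    return a;
}
```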

I really liked how he was able to break down the logic in the form of processes, edges, and decisions without looking at the code. I feel like sometimes when we are building our graphs strictly based on code, it’s easy to get confused and forget about the underlying logic that determines the number of tests necessary to ensure our code is completely tested. It also helped me understand how basis path testing should work and how it should be implemented.

Nyman goes on to show that he is able to calculate the number of independent paths using the graph and the formula for cyclomatic complexity. First he points out that the number of nodes is equal to the sum of the number of decisions and the number of processes, which in this case is 6. Then, by plugging the numbers into the cyclomatic complexity formula, V(G) = e − n + 2p, Nyman obtains the number of linearly independent paths.

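To give a rough idea of how the arithmetic works (the exact edge count depends on Nyman’s diagram): with n = 6 nodes, a single connected component (p = 1), and, say, e = 7 edges, the formula gives V(G) = 7 − 6 + 2(1) = 3, meaning three linearly independent paths would need to be covered by tests.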

Finally, Nyman ends the post by showing that the same results are obtained when going over the actual code for the Greatest Common Divisor function. He also shows that the same graph could apply to something like an Amazon shopping cart/wishlist program. I think the biggest take-away from this post is that there is a strong relationship between cyclomatic complexity and testing, which can prevent bugs by determining each linearly independent path and making sure each one produces the desired functionality.

October 1, 2017

-Caleb Pruitt


From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

Could Robotics Process Automation (RPA) Be the Future of Testing?

According to blogger Swapnil Bhukan, robotic process automation is indeed the future of software testing. In his blog post, Robotic Process Automation(RPA) evolution and it’s impact on Testing, he predicts that RPA will perform about “50 to 60% of testing tasks” by the year 2025. If you are not familiar with RPA, Bhukan also made a blog post describing what it is, how it works, and some of its benefits. Essentially, RPA is a way to automate any repetitive task using bots that are taught how to do those tasks. Currently, according to Bhukan, RPA is used to perform only about 4% of software testing tasks, but that is sure to change as RPA technology advances. Today, RPA’s main use cases involve pretty basic data-entry tasks.

The reason I chose to write about Bhukan’s blog post is that I found it quite interesting, especially since I was able to relate to the growth of RPA through past experience. Over the summer I had the opportunity to work an internship at an insurance company, and all the IT interns had the pleasure of sitting down with the EVP/Chief Innovation Technology Officer and asking him some questions. I asked him what kinds of new technologies the company was looking to invest in as well as which new technologies he was most excited about. His answer to both of these questions, requiring little time to think, was, hands down, RPA. Companies today are striving harder and harder to automate as many tasks as possible in order to save money.

One of the downsides to RPA, as Bhukan points out, is that it could potentially put many software testers out of a job. Some of the things keeping RPA from taking over the field of software testing at the moment are budgetary issues (RPA software is pretty expensive), companies being reluctant to adopt such new technology, and apprehension due to the possibility of losing customers if the tests aren’t done correctly. However, I believe that software testers may just have to realign their expertise as RPA technology evolves. By this I mean that software testing professionals and developers should begin to learn how to teach these bots and leverage the bots’ usefulness in completing repetitive tasks; after all, the bots can only be as smart as those who teach them. I think Bhukan shares this view when he says at the end of his blog post, “sooner or later we (Software testing professionals) need to upgrade our skill set to train the Robots.”


September 24, 2017

-Caleb Pruitt


From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

First Blog Post for CS-443

This is the Blog I will be using for Dr. Wurst’s CS-443: “Software Quality Assurance & Testing” course.

From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.