Author Archives: gmatthew

Making Testing More Effective Through Increased Testability

While there are few people who would argue that testing is easy, it also should not be prohibitively difficult. The difficult part of testing software should be deciding what to test, not how to test it. In a post from late September 2017, Michael Bolton describes how important testability is in the creation of a stable project with less risk of bugs at the time of delivery. If releasing a project untested or with insufficient depth of testing sounds risky to you, you are in good company. Making testing easier, known as increasing testability, allows for more thorough testing and (hopefully) a more polished, bug-free finished product.

Bolton describes testability in terms of visibility and controllability. The examples that he gives for visibility are log files and continuous monitoring. For controllability, Bolton cites application programming interfaces, or APIs, as the most common means of easily manipulating the product. An important takeaway from the post is that while it is certainly helpful to the tester if a product has things like log files and an API, this is not all that testability encompasses. Bolton presents the idea of testability as a set of relationships between multiple elements in the design process, including the product, the tester, the development team, and the development environment. The overall testability of a product is the result of the complex interactions between all of the people and things involved.
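
Bolton’s categories are conceptual rather than code-level, but a small sketch may help show what visibility and controllability can look like in practice. The example below is my own and entirely hypothetical (the OrderProcessor class and its methods are not from Bolton’s post): logging and a state-query method give the tester visibility, while a small programmatic API gives controllability without going through a user interface.

import java.util.logging.Logger;

// Hypothetical example: a class written with testability in mind.
public class OrderProcessor {
    private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());
    private String lastStatus = "NONE";

    // Controllability: behavior can be driven directly through a simple API,
    // so a test does not need to click through a UI to exercise it.
    public boolean process(int quantity) {
        if (quantity <= 0) {
            lastStatus = "REJECTED";
            LOG.warning("Rejected order with quantity " + quantity); // visibility via logs
            return false;
        }
        lastStatus = "ACCEPTED";
        LOG.info("Accepted order with quantity " + quantity);
        return true;
    }

    // Visibility: internal state is exposed for inspection instead of hidden away.
    public String getLastStatus() {
        return lastStatus;
    }
}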

The first category that Bolton mentions is epistemic testability. It is impossible for a tester to know all of the bugs in the code before performing any testing; if that were possible, there would be no need for software testers at all. The act of testing explores what Bolton calls a “risk gap,” the areas of the project that the tester is uncertain about or unfamiliar with. Next, Bolton considers value-related testability, which refers to knowledge of what the intended user of a program is looking to gain. Understanding what is valuable to others allows a tester to focus his or her efforts where they will have the most significant impact. Intrinsic testability refers to the product’s ability to be easily understood by the tester. If a program’s behavior is easy to follow and its state is transparent to the tester, he or she will have a far easier time properly testing it. Since most projects are assigned to teams of people in many different positions, with different tools and knowledge, access to these people and resources is essential for project-related testability. Finally, subjective testability refers to how well the skills of the tester or testing team match the requirements of the project.

Bolton’s more literary treatment of testing was a welcome change from the testing material that I’ve read online and in textbooks. Bolton focuses more on the people and the environment in which testing is conducted than on which specific tests are used. As a student, I think many of the points that he makes are worth carrying with me into any future professional position. Evaluating the testability of products through Bolton’s methods will allow me to better manage risk and deliver products with fewer bugs.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Turning the Big Ball of Mud into Modular Code

What Konrad Gadzinowski describes in the opening paragraphs of his post on “Creating Truly Modular Code with No Dependencies,” the “emotional rollercoaster” of developing software, is something that I’m sure anyone who has ever written a program has experienced. I certainly encounter it each time I write code for a project, whether academic or professional. Eager to begin a project, I often dive in and complete the simpler parts first. During this time, progress seems to move very quickly. After all of the easy, simple pieces are done, however, progress seems to slow or stall. As the requirements become more complex, I often find myself going back to previous code and rewriting things so that they integrate more seamlessly with the new element that I am adding. This problem is what Gadzinowski describes as the “big ball of mud.” Gadzinowski offers Apache Hadoop as an example of a program whose ball-of-mud interdependencies slow further development and make tracing the source of bugs more difficult. In the image below, each class is represented as a point on the outside of the circle, and each line between points represents a dependency.

(Image source: https://www.toptal.com/software/creating-modular-code-with-no-dependencies)

With so many interdependent classes, I imagine that untangling the web to trace bugs in Apache Hadoop would be a nightmarish task. Gadzinowski offers a solution to the ball of mud, however, that seems like sound advice. His suggestion is to use the element design pattern when developing software. This modular pattern aims to create reusable pieces of code that are independent of other classes. This is done through the use of element classes and element listener interfaces. In this way, all of the required dependencies for an element are encapsulated within that element. Outside classes that wish to use the element are not concerned with its underlying design; they interact only with the element’s listener. Gadzinowski presents this as a way to increase the flexibility of the element, allowing it, for example, to output to any number of different external environments through an identical listener call.
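
Gadzinowski’s post contains its own examples; the sketch below is only my minimal reading of the idea, with hypothetical names. The element encapsulates its own logic and reports results solely through a listener interface, so outside code depends on the listener contract rather than on the element’s internals.

// Hypothetical sketch of the element pattern described in the post.

// The listener is the only contract the outside world depends on.
interface DownloadListener {
    void onProgress(int percent);
    void onFinished(String content);
}

// The element encapsulates its own logic; it knows nothing about whether
// results go to a console, a GUI, or a test double.
class DownloaderElement {
    private final DownloadListener listener;

    DownloaderElement(DownloadListener listener) {
        this.listener = listener;
    }

    void download(String url) {
        // Real work would happen here; this sketch only simulates it.
        listener.onProgress(50);
        listener.onProgress(100);
        listener.onFinished("contents of " + url);
    }
}

public class ElementDemo {
    public static void main(String[] args) {
        // Any environment can plug in by supplying its own listener.
        DownloaderElement element = new DownloaderElement(new DownloadListener() {
            public void onProgress(int percent) {
                System.out.println("Progress: " + percent + "%");
            }
            public void onFinished(String content) {
                System.out.println("Done: " + content);
            }
        });
        element.download("https://example.com");
    }
}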

While I was immediately willing to listen to the post’s advice after it described a miserable situation that I’ve encountered countless times, reading Gadzinowski’s explanations and examples of the element design pattern has certainly made me a believer. I think that what makes him so credible is his willingness to acknowledge the value in initially jumping into design without worrying too much about the big ball of mud that you may be creating. While this may not be the solution for a final release, it can get the ball rolling and allow the element pattern to make your code more reusable and stable for production releases later on. I will keep Gadzinowski’s advice in mind the next time I begin to worry that my classes are too interdependent to remain reusable or easily maintainable.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Which Came First… The Test or the Code?

In episode 31 of Brian Okken’s podcast “Test and Code,” Brian has a discussion with guest Paul Merrill about the testing pyramid and why Brian is frustrated with certain test-driven development (TDD) models. The discussion began as a Twitter disagreement between the two test enthusiasts and blossomed into a full-fledged “civil discussion.” While both Brian and Paul agree on the importance of testing and see the value in test-driven development, they disagree about how extensive testing needs to be in order to be effective. The two also disagree about how tests should be written and to what extent code should be based on tests versus tests written based on code. Although they never seem to reach a consensus, each of them makes some excellent points and gives examples from personal experience to support his opinions.

Brian, for example, is fed up with the sheer number of redundant unit tests that are written to test the same thing in traditional pyramid testing. Brian presents an example of a hypothetical method that has a test written for handling negative numbers, even though the higher-level method that passes values to it will never pass a negative value. Brian sees testing the first method’s ability to handle negative numbers as unnecessary and a waste of time. If these same types of tests are written for hundreds of methods, Brian argues, an extraordinary amount of time is wasted on useless testing. As a rebuttal, Paul argues that if changes are made to the higher-level method that allow negatives to be passed down, the tests would already be written. Clearly not convinced, Brian scoffs at this justification for a test he sees no value in.
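
To make the disagreement concrete, here is a hypothetical rendering of Brian’s example in Java with JUnit 4 (the names and numbers are mine, not from the podcast). The low-level applyDiscount method has a test for negative input even though its only caller, checkout, guards against negatives ever reaching it.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountTest {

    // Low-level helper under test.
    static double applyDiscount(double price) {
        if (price < 0) {
            return 0.0; // defensive handling of a value that never actually arrives
        }
        return price * 0.9;
    }

    // Higher-level caller: it filters out negatives before delegating.
    static double checkout(double price) {
        if (price < 0) {
            throw new IllegalArgumentException("price must be non-negative");
        }
        return applyDiscount(price);
    }

    // Brian's complaint: this test exercises a path that checkout() can never reach.
    @Test
    public void applyDiscountHandlesNegativePrice() {
        assertEquals(0.0, applyDiscount(-5.0), 0.0001);
    }

    // Paul's counterpoint: if checkout() ever changes to allow negatives through,
    // the low-level behavior is already pinned down by the test above.
    @Test
    public void checkoutAppliesTenPercentDiscount() {
        assertEquals(90.0, checkout(100.0), 0.0001);
    }
}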

Paul seems to have some personal experience in all of the obscure areas that Brian dismisses as rare or unusual scenarios. Paul seems to support a bottom-up approach to test-driven development, where tests are written that outline every last detail of how a program will perform. Paul argues that this is the best way for tests to effectively drive development. He seems to think that tests cannot aid the development process if they are written after development has taken place. Brian, on the other hand, sees this issue from a top-down perspective. He argues that higher, user-level tests should be written first. When the high-level tests are insufficient to further drive development or are too ambiguous to write code for, then lower-level unit tests should be written.

It was extremely interesting to listen to two experienced testers discuss such a controversial topic. While I can see where both men are coming from, I lean toward Brian’s side in the argument over test-driven development. I think the points he makes about a pragmatic approach to testing are important. Rather than generating some ridiculous number of unit tests that may have no bearing on the actual functioning of the program, the effort should be put into doing what was originally promised by the program specification. I think that I will follow Brian’s advice and take the pragmatic approach to writing tests, in the hope of avoiding rewriting tests and code. In the end, it’s not about how many tests can be written; it’s about testing for the correct things in order to deliver a bug-free program.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Is ‘Agile’ really agile?

The Agile software development methodology is based on the “Manifesto for Agile Software Development,” which outlines the values and goals of the platform. For many software development teams, an Agile methodology has replaced the dated Waterfall method. I think that the diagram below does an excellent job of highlighting the key differences between the two methodologies.

(Image source: https://www.seguetech.com/waterfall-vs-agile-methodology/)

The Agile method allows developers more flexibility and involvement in stages of development that were previously dominated by managers and other higher-ups with no connection to the code itself. In cases where getting a working prototype deployed quickly is of primary importance, the Agile method is the clear choice. In Agile development, responding to changes in the program specification can be done relatively simply through regular meetings and discussions of progress.

The more traditional Waterfall methodology follows a linear sequencing, where each step must be completed in order before the next step is begun. This means that there is often a longer period of development before any product is ready to be deployed. When the product is deployed, however, it will often be more polished and complete. The Waterfall methodology does not respond well to changes in the specification, as this will often require backing up in the process and then reworking each of the steps.

Now, with a general idea of the two methodologies, I could begin to understand where user ayasin is coming from in his rather intense post titled “Agile Is The New Waterfall.” The post on Medium.com generated quite a buzz of controversy and even attracted the attention of well-known computer science figures, including Uncle Bob. In his post, ayasin argues that Agile has become the tiresome, outdated successor of Waterfall. While he does not offer any solutions, he certainly presents a lot of problems with Agile. Ayasin describes the Agile development process as follows: “You just throw stuff together as quickly as possible because you know it’s mostly trash anyway.” This hardly seems like a way to produce quality software. What’s more, ayasin argues, more of the responsibility (and potentially blame) is placed on the developers themselves, as they are given the illusion of involvement in the process without any real control over the outcome.

Before finding ayasin’s post on Medium.com, I had only a vague idea of the Waterfall and Agile methodologies. After a bit of research into the two strategies, the post seems to make some excellent points. While I agree with some of them, I can’t help but wonder whether ayasin is being a bit harsh on Agile. It would seem that, when properly implemented and followed, the Agile methodology has significant advantages over the traditional Waterfall method. Reading about the two methods has given me insight into some of the challenges I can expect to face when working on a project in the future. I feel nervous but prepared for these potential challenges and look forward to someday working on projects like the ones described in my research.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

When Object Oriented Programming (OOP) becomes POO

Something that has certainly been ingrained in my programming brain is that code should be as easy to reuse as possible, and that this is accomplished through the use of objects in what is known as object-oriented programming. In my very first programming class I used Java, and most of the academic programming that I have done since then has also been in Java. Java is a self-described “general-purpose, concurrent, strongly typed, class-based object-oriented language.” As a result of its object-oriented nature, one cannot learn to program effectively in Java without learning how to program in an object-oriented manner. While object-oriented programming often allows for the efficient reuse and maintenance of code, it may also overcomplicate things in certain instances. Knowing when and where to step away from an object-oriented approach can be important to creating something that is easy for others to understand and build from.

In a Coding Horror post titled “Your Code: OOP or POO?” from March 2007, Jeff Atwood explains why programming in a way that considers fellow programmers who may work with your code after you is more important than mindlessly creating objects for the sake of creating them. Atwood goes on to explain why it is the principles of object-oriented design that are truly important. These are things like encapsulation, simplicity, and the reusability of your code. Atwood stresses that if you attempt to “object-ify” every concept in your code, you will often be introducing unnecessary complexity. He uses an interesting metaphor that compares adding objects to adding salt to a dish – “a little goes a long way.”
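
As a purely hypothetical illustration of the kind of over-objectification Atwood warns about (my example, not his): wrapping a trivial calculation in layers of objects adds ceremony without adding encapsulation or reuse, while a plain method says the same thing in one line.

// Over-objectified version: an operand object, a getter, and an operation object
// just to add two numbers. Hypothetical example of "object-ifying" a concept
// that never needed to be an object.
class AdditionOperand {
    private final int value;
    AdditionOperand(int value) { this.value = value; }
    int getValue() { return value; }
}

class AdditionOperation {
    private final AdditionOperand left;
    private final AdditionOperand right;
    AdditionOperation(AdditionOperand left, AdditionOperand right) {
        this.left = left;
        this.right = right;
    }
    int execute() { return left.getValue() + right.getValue(); }
}

public class SaltDemo {
    // The simpler alternative: no objects required at all.
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        int viaObjects = new AdditionOperation(
                new AdditionOperand(2), new AdditionOperand(3)).execute();
        int viaMethod = add(2, 3);
        System.out.println(viaObjects + " == " + viaMethod);
    }
}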

It must be made clear that Jeff Atwood and the other programmers he mentions in his post are not against OOP. Rather, they are against the abuse and misuse of OOP by those who do not understand where creating objects is beneficial and where it is simply cumbersome or clumsy. Object-oriented programming is an extremely powerful tool for creating projects that are reusable and easily maintained or changed. What is important to take away from Atwood’s post is that what actually causes problems is the way new programmers are brainwashed into thinking that every piece of code they write must somehow become an object, lest it be poor programming. Although he never directly states it, I took Atwood’s post as a call to educate new programmers about the pitfalls of writing overly complex object-oriented code in place of a simpler alternative that does not involve objects.

From the blog CS@Worcester – ./George Matthew by gmatthew and used with permission of the author. All other rights reserved by the author.

On Exploratory Testing and ‘Playing About’

A thought that occurred to me recently, while following the procedures for boundary value testing, was, “What would happen if I tried input values of an incorrect type rather than whatever values the particular testing method called for?” While simple test procedures such as normal boundary value testing clearly leave much to be desired, more comprehensive methods such as robust equivalence class testing seem to cover most of the potential sources of input errors, right? I think not. It was often difficult to fight my intuition and the desire to choose test values that did not line up with the boundary value or equivalence class testing procedures. This desire to choose what to test and how to test it based simply on experience and intuition is what’s known in the software quality assurance field as exploratory testing, and it is an extremely powerful method of testing when used properly and appropriately.
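
To show what I mean, here is a hypothetical sketch in Java with JUnit 4 (the function under test and its 0-to-100 range are invented for illustration). The first test follows the normal boundary value recipe of min, min+, nominal, max-, and max; the second chases my off-script itch about inputs of the wrong type, which in a statically typed language like Java tends to surface at whatever layer parses raw input.

import org.junit.Test;
import static org.junit.Assert.*;

public class GradeBoundaryTest {

    // Hypothetical function under test: valid scores are 0..100 inclusive.
    static boolean isPassing(int score) {
        if (score < 0 || score > 100) {
            throw new IllegalArgumentException("score out of range: " + score);
        }
        return score >= 60;
    }

    // Normal boundary value testing: min, min+, nominal, max-, max.
    @Test
    public void boundaryValues() {
        assertFalse(isPassing(0));
        assertFalse(isPassing(1));
        assertFalse(isPassing(50));
        assertTrue(isPassing(99));
        assertTrue(isPassing(100));
    }

    // The exploratory itch: what if the input is not even a number?
    // Here that question lands on the parsing step rather than on isPassing itself.
    @Test(expected = NumberFormatException.class)
    public void nonNumericInputFailsDuringParsing() {
        isPassing(Integer.parseInt("eighty"));
    }
}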

In a Let’s Talk About Tests podcast titled “Players gonna play play play play play,” from September 21, 2017, Gem Hills talks about the benefits of becoming familiar with and gaining a better understanding of a program simply by “playing about” with the program itself. Hills argues that simply using the program and applying knowledge from previous experience with similar programs, or with the development team itself, can often provide a software tester with potentially revealing test cases that would be missed if the tester relied solely on a testing framework. In her post, Hills references Richard Bradshaw, perhaps better known as the “Friendly Tester.” Specifically, Hills talks about attending one of Bradshaw’s talks at the BBC in August, where he spoke about how “having a play” with a program can become far more impactful simply by recording the results and tests, effectively turning that play into exploratory testing.

One of the reasons that Hills cites for playing with a program with no real agenda is that “[She] is not confident that [she’s] tested everything but [she’s] not sure what [she’s] missing.” I found this point especially telling. It is exactly how I felt when I was writing test cases for the boundary value testing assignment: sure that I was missing something, but unsure exactly what it was. It would therefore seem to make sense that I should begin playing around in an attempt to discover what I am missing. Perhaps I am simply lacking the appropriate testing method, and performing some exploratory testing will provide insight into an effective one. It is also possible, however, that a bug exists in the program that cannot be found by applying any number of formal testing procedures. In these cases, exploratory testing, or “playing about,” is the only effective testing tool.

From the blog CS@Worcester – ./George Matthew by gmatthew and used with permission of the author. All other rights reserved by the author.

Why Repeatedly Repeating Code is Bad Programming Practice

After a discussion with a friend about the recent eclipse, the subject of the apocalyptic end of the world came up. I was reminded of Y2K and decided that it might be worth some research, as I was too young at the time to really understand what was going on. As a student of computer science, perhaps it would provide me with some important examples of things not to do in my own coding. In a blog post written for Microsoft Developer, Steve Rowe shares what he learned from an instructor about the true cause of the Y2K scare: a failure to follow DRY, the Don’t Repeat Yourself principle. Y2K was caused not by the mistake of representing a four-digit year with too few digits, but by making this mistake over and over across multiple files. Unless absolutely necessary, code with identical or near-identical functionality should not be duplicated. Following the DRY principle makes maintaining and repairing code easier and simpler; it is important that those striving to become excellent programmers follow this principle.
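
A hypothetical, Y2K-flavored sketch of Rowe’s point (my own example, not his): when the two-digit-year assumption is pasted into several methods, every copy has to be hunted down and fixed, while the DRY version needs exactly one change.

// Hypothetical illustration of the DRY principle with a Y2K-style bug.
public class DryDemo {

    // WET version: the same two-digit-year assumption is pasted into
    // multiple methods, so fixing it means finding every copy.
    static int invoiceYearWet(int twoDigitYear)  { return 1900 + twoDigitYear; }
    static int shippingYearWet(int twoDigitYear) { return 1900 + twoDigitYear; }

    // DRY version: the conversion lives in exactly one place,
    // so the windowing fix is made once.
    static int toFourDigitYear(int twoDigitYear) {
        return (twoDigitYear < 50 ? 2000 : 1900) + twoDigitYear;
    }

    static int invoiceYearDry(int twoDigitYear)  { return toFourDigitYear(twoDigitYear); }
    static int shippingYearDry(int twoDigitYear) { return toFourDigitYear(twoDigitYear); }

    public static void main(String[] args) {
        System.out.println("WET says the year '00 is " + invoiceYearWet(0));  // 1900 -- the Y2K bug
        System.out.println("DRY says the year '00 is " + invoiceYearDry(0));  // 2000
    }
}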

While my mistakes are not going to cause the same devastation as those of the developers behind the Y2K scare (yet), they have certainly caused me a great deal of frustration while programming for assignments and personal projects. On more than one occasion, I’ve found myself repeatedly trying to remedy a certain piece of code, only to find out later that the error was caused by similar code implemented elsewhere. It was this duplicated code that was actually responsible for the error, not the unused or irrelevant piece that I had been wasting time attempting to correct. My failure to follow (or even be aware of) the DRY principle, which I had never encountered before looking over the syllabus for Software Construction, Design and Architecture, has resulted in countless hours of wasted time and energy. Any programmer, no matter how good he or she may be, could always stand to improve. Not only will following the DRY principle allow your code to be more easily understood by others, it will also make writing documentation and performing maintenance much simpler. Steve Rowe makes an interesting comment before closing his post, stating that, if duplicating code is deemed necessary, “It might not be a bad idea to put a comment in the code to let future maintainers know that there’s similar code elsewhere that they should fix.” If we all attempt to better follow DRY and Rowe’s advice, maybe we can avoid future Y2K-esque scares.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Introductory Post

My name is George Matthew and I am currently a junior in the Computer Science program at Worcester State.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.