Category Archives: Software Testing

Test Driven Development: Formal Trial-and-Error


Test Driven Development (TDD), like many concepts in Computer Science, is very familiar to even newer programming students, but they lack the vocabulary to formally describe it. However, in this instance they could probably informally name it: trial-and-error. Yes, very much like the social sciences, computer science academics love giving existing concepts fancy names. If we were to humor them, they would describe it in five-ish steps (see the code sketch after the list):

  1. Add a test
  2. Run tests, check for failures
  3. Change code to address failures / add another test
  4. Run tests again, refactor code
  5. Repeat
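
To make the cycle concrete, here is a minimal, hypothetical sketch of one pass through it, assuming JUnit 4; the Calculator class and its stub are invented for illustration:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class TddCycleExample {
        // Step 1: add a test for behavior that does not exist yet.
        @Test
        public void addReturnsSum() {
            assertEquals(5, Calculator.add(2, 3));
        }
    }

    class Calculator {
        // Step 2: running the test fails while the method is a stub.
        // Step 3: change the code until the test passes.
        static int add(int a, int b) {
            return a + b; // began life as "return 0;" and failed the test
        }
        // Steps 4 and 5: rerun the tests, refactor, and repeat.
    }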

The TDD process comes with some assumptions as well, one being that you are not building the system under test while writing tests; these tests are for functionally complete projects. This technique is also used to verify that code achieves some valid outcome outlined for it, with a successful test being one that fails, rather than “successful” tests that reveal an error as in traditional testing. Related to our most recent classwork, TDD should achieve complete coverage by testing every single line of code – which in the parlance of said classwork would be complete node and edge coverage.
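
As a hypothetical illustration of that last point (again assuming JUnit 4): a method with a single if statement has two outgoing edges, so complete node and edge coverage requires at least two tests.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class CoverageExample {
        // Two edges leave the if: condition true and condition false.
        static int clamp(int value, int max) {
            if (value > max) {
                return max;   // edge 1: condition true
            }
            return value;     // edge 2: condition false
        }

        @Test
        public void valueAboveMaxIsClamped() {
            assertEquals(10, clamp(15, 10)); // exercises edge 1
        }

        @Test
        public void valueAtOrBelowMaxIsUnchanged() {
            assertEquals(7, clamp(7, 10));   // exercises edge 2
        }
    }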

Additionally, TDD has different levels, two to be more precise: Acceptance TDD and Developer TDD. The first, ATDD, involves creating a test to fulfill the specifications of the program and correcting the program as necessary to allow it to pass this test. This testing is also known as Behavior-Driven Development. The latter, DTDD, is usually referred to as just TDD and involves writing tests and then code to pass them in order to, as mentioned before, test the functionality of all aspects of a program.

As it relates to our coursework, the second assignment involved writing tests to verify functionality based on the project specifications. While we modified the given program code very little, if at all, we used the iterative process of writing and re-writing tests to verify the correct functioning of whatever method or feature we were hoping to test. In this way, the concept is very simple, though it remains to be seen if it stays that way given different code to test.

Sources:

Guru99 – Test-Driven Development

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

Path of Most Resistance


In my last blog, I sought to cover Integration Testing, and in doing so we covered the two distinct types outlined by Mr. Fowler. Of these, Broad Integration Testing (BIT, to save time) is most relevant to the next subject I wish to cover: Path Testing. BIT covers the interactions between all ‘services’ within a program – meaning a program’s completed modules are tested to ensure that their interactions match expectations and do not fail the tests created for them. Path Testing is very similar, but focuses on whether the paths through the various aspects/modules of a program work or do not.

As opposed to BIT, Path Testing (PT) seeks to identify not just the interactions between modules, but any and all possible paths through an application – and to discover those parts of the application that no path reaches. The ultimate goal is to find and test all “linearly independent paths”, a linearly independent path being one that covers a partition that has yet to be covered. PT is made up of, and can integrate, other testing techniques as well, including what we’ve covered most recently: equivalence testing. Using this technique, paths can be grouped by their shared functionality into classes, in order to eliminate repetition in testing.

When determining which paths to take, one could be mistaken for wanting to avoid the same module more than once; as stated previously, we are seeking paths we have yet to take. However, very often the same path must be taken, at least initially, to reach several modules. In fact, a path might be nearly or actually identical to one that has come before it, but if several values must be tested along that path, then it too is considered distinct. An excellent example from the article I chose: loops or recursive calls are very often dictated by data, and so will necessarily require multiple test values.

However, after this point the author moves away from the purely conceptual to actual graphs representing these paths, specifically directed graphs. While it was painful to see these again after thinking I had long escaped discrete math, they provide a perfect illustration of the individual modules you expect a path to trace through, as well as possible breaking points. Directed graphs represent tightly coupled conditions, and in this way they express the order in which a program runs and the cause and effect of certain commands upon execution. They offer a much more concise visual presentation of the testing process than something like equivalence testing. The graphs themselves are quite self-explanatory, but I look forward to applying these concepts in class to actual code.
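
For a hypothetical flavor of what those graphs describe, here is a small invented Java method with its nodes and paths enumerated in comments; the node numbering is mine, not the article’s:

    public class PathExample {
        public static String classify(int score) {
            String result;            // node 1: entry
            if (score < 0) {          // node 2: decision
                result = "invalid";   // node 3
            } else if (score < 60) {  // node 4: decision
                result = "fail";      // node 5
            } else {
                result = "pass";      // node 6
            }
            return result;            // node 7: exit
        }
        // Paths through the directed graph:
        //   1-2-3-7    (score < 0)
        //   1-2-4-5-7  (0 <= score < 60)
        //   1-2-4-6-7  (score >= 60)
        // One test per path exercises every node and edge.
    }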

Sources

Path Testing: The Theory

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

Static VS Dynamic Testing


In Software Testing, the two most popular and important methods are static and dynamic testing. While both are plainly named, it’s good to go over the distinctions and, more importantly, the actual implementations of both.

Static testing involves review of project documents, including source code, to weed out any actual or potential security flaws and general mistakes. This takes the form of informal and technical reviews, inspections, walkthroughs, and more. The process can involve an inspection tool or be performed manually, searching for potential run-time problems without actually running the code. Dynamic testing, conversely, involves actually executing code and examining its functionality, resource usage, and performance. We have been using dynamic testing, specifically unit testing, in our class so far. Dynamic testing is used mainly to verify that the program runs in accordance with the specifications outlined for it, specifically in what is called System Testing.
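
A unit test like the hypothetical JUnit 4 sketch below is dynamic testing: the code under test actually executes, and the assertion checks its behavior. The Account class is invented here so the example is self-contained.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class AccountTest {
        // Minimal class under test, defined inline for the example.
        static class Account {
            private int balance;
            Account(int opening) { balance = opening; }
            void deposit(int amount) { balance += amount; }
            int getBalance() { return balance; }
        }

        @Test
        public void depositIncreasesBalance() {
            Account account = new Account(100);
            account.deposit(50);
            assertEquals(150, account.getBalance()); // the code runs: dynamic
        }
    }

A static review of the same class, by contrast, would only read it, and might flag that deposit() silently accepts negative amounts.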

Of course, both have their own advantages and disadvantages. With static testing you have the potential to find bugs early in the process that would have made later development more troublesome, which can save a lot of time and frustration in the future. However, there may exist some flaw that only a running program can reveal, and then only in the code that is actually executed. It seems to me that these should not be exclusive; in fact, they have a logical order.

You would begin with static testing, like proofreading a paper: searching for misspelled words (poor variable/method names), run-on sentences (needless complexity or repetition), and reorganizing to best get your point across (logical order of declarations/methods, documentation). Like writing a paper, it is best to do this ahead of run-time (the read-through), so that you don’t have to constantly stop to bandage small errors. In the same way, it seems essential to ‘proofread’ your code before execution to make sure it is free of identifiable errors before you move on to seeking out run-time errors. You wouldn’t want to have to fix both at the same time, just as with proofreading.

In sum, these two could be grouped as proactive and reactive – static and dynamic, respectively – and that grouping best explains their use. As mentioned, it also makes sense to use them in a specific order: ensure the code is as correct as possible upon inspection, then run it to see where flaws undetectable in a static environment have arisen. Together they ensure quality software that is optimized and secure.

Static Testing vs Dynamic Testing: What’s the Difference?
Static Testing vs. Dynamic Testing

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

Evaluating Software Testing Strategies

https://www.mitre.org/publications/systems-engineering-guide/se-lifecycle-building-blocks/test-and-evaluation

From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

Software Testing With Security in Mind

For this week’s blog post I wanted to take a look at the security aspect of software testing. I feel that we have discussed many aspects of software testing in our CS 443 class, but one we haven’t really gone over is how to determine whether the code we write is secure. Many of us, at some point in our careers, will probably write software that will be used in a web/desktop environment. By testing such code for vulnerabilities before it is released, we can save ourselves and the companies we work for from falling victim to data breaches and stolen information. I found an article titled How to Test Application Security – Web and Desktop Application Security Testing Techniques, and it discusses the issues I have just introduced.

The author of the article defines security as meaning “that authorized access is granted to protected data and unauthorized access is restricted.” They then go on to distinguish between desktop and web-based software and the different security needs of each. Essentially, they suggest that both types of software require similar security measures to protect sensitive data; however, most web-based software will require some extra measures, since that type of software is accessible to anyone on the internet.

In the article the author brings up a number of interesting points about testing how secure a piece of software is, but I would like to focus on three of their main points, as I feel they are really important: data protection, brute-force attacks, and SQL injections/XSS. To test for data protection in your software, the author suggests, you should ensure all passwords in your DB are encrypted when they are transmitted. Also, if your software is web-based, you should be using the HTTPS protocol rather than HTTP, and you should test certificate validity on the server side. When it comes to testing whether your software is vulnerable to brute-force attacks, the author says you should include “some mechanism of account suspension” in your software. Finally, in order to test for SQL injections and XSS attacks, we must treat any part of the code that accepts user input as a vulnerability. The author advises making sure there is a maximum length of characters for valid input, as well as a checking mechanism for basic SQL injection techniques.
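
As a hypothetical sketch of those last two suggestions (assuming JDBC; the table and column names are made up), a parameterized query keeps user input out of the SQL itself, and a length check rejects oversized input before it reaches the database:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class LoginDao {
        private static final int MAX_INPUT_LENGTH = 64;

        // The username is bound as data via a placeholder, never
        // concatenated into the SQL string.
        public static boolean userExists(Connection conn, String username)
                throws SQLException {
            if (username == null || username.length() > MAX_INPUT_LENGTH) {
                return false; // basic input validation before touching the DB
            }
            String sql = "SELECT 1 FROM users WHERE username = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, username);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }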

From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

What to Consider When Doing a Technical Review on Object Oriented Code

For this week’s blog post I decided to write about important things to consider when doing a technical review of object-oriented code. I chose this topic as I figured it would serve as a good refresher on key object-oriented concepts that we should keep in mind going forward with the next software testing project. I thought that revisiting good practices of object-oriented design would enable us to be more aware of what to look for while writing our software technical review of the Sir-Tommy Solitaire program.

I found a short book entitled Object-Oriented Programming, written by a software professional named Carl Erickson. Erickson is also the CEO and founder of a software company called Atomic Object. Although the book was written in 2009, I still think it is relevant in the world of object-oriented design today. There are many sections to Erickson’s book, but I plan on looking at just a few that I thought were most relevant to writing our software technical review: section 4 – OO Naming Conventions, section 8 – Encapsulation & Modularity, and section 9 – Object-Oriented Hierarchy.

In section 4, Naming Conventions, Erickson briefly touches upon good practice for naming classes, methods, and variables. He explains that class names should have the first letter of every word capitalized, whereas variable and method names should have the first word begin with a lowercase letter and all following words beginning with capital letters. This probably isn’t news to any of us in CS-443, but it’s definitely something to consider when writing our software technical reviews.
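
A tiny hypothetical Java illustration of those conventions, with invented names:

    // Class name: first letter of every word capitalized.
    public class CardDeck {
        // Variable and method names: lowercase first word,
        // capitalized words afterward.
        private int cardsRemaining = 52;

        public int getCardsRemaining() {
            return cardsRemaining;
        }
    }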

Next, in sections 8 and 9, Erickson explores the ideas of class encapsulation and hierarchy. Encapsulation is an important idea to consider when writing software technical reviews because improper use of it, such as declaring a variable public when it should be private, can lead to serious headaches down the road. For example, if something in the code were to go wrong with regard to a public variable, it may be very difficult to track down the culprit of the bug, since any class could have modified the variable. Also, Erickson brings up the notion of the “is a” relationship between classes, sub-classes, and interfaces, which is significant in determining the flexibility of the code in question.
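
A small hypothetical example of that encapsulation point (the class is invented): with a private field, every change is funneled through one method, so a bad value has only one place it could have come from.

    public class Player {
        // Private: the only way to change score is addPoints(),
        // which makes a corrupted value easy to trace.
        private int score;

        public void addPoints(int points) {
            if (points < 0) {
                throw new IllegalArgumentException("points must be non-negative");
            }
            score += points;
        }

        public int getScore() {
            return score;
        }
    }
    // Had score been public, any class could have written to it,
    // and the culprit of a bad value would be much harder to find.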

It’s also worth reading over the rest of the sections in Erickson’s book, in particular sections 10–14, as they go over essential concepts of object-oriented design. Section 13 does a good job of explaining how to evaluate whether or not a piece of object-oriented code is designed and implemented efficiently, by focusing on whether the code adheres to user specifications and object-oriented design techniques. Keeping in mind the foundational concepts of object-oriented design may allow us to make more insightful suggestions and write better technical reviews.

From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

Code Coverage Alone Probably Won’t Ensure Your Code is Fully Tested.

For this week’s CS-443 self-directed professional development blog entry I read a blog post written by Mark Seemann, a professional programmer and software architect from Copenhagen, Denmark. The post, entitled “Code coverage is a useless target measure,” is quite relevant to the material we’ve been discussing in class the past couple of weeks, especially path testing and data-flow testing. In it, Seemann urges project managers and test developers not to just set a “code coverage goal” as a means of measuring whether their code is completely tested. Seemann finds this to be a sort of “perverse incentive”, as it could encourage developers to write bad unit tests for the simple purpose of covering the code as the project’s deadline approaches and the pressure on them increases. He provides some examples of what this scenario might look like using the C# programming language.

In his examples, Seemann shows that it is pretty easy to achieve 100% code coverage for a specific class even though that doesn’t mean the code in the class is sufficiently tested for correct functionality. In his first example test, Seemann shows that it is possible to write an essentially useless test, using a try/catch block and no assertions, existing solely for the purpose of covering code. Next, he gives an example of a test with an assertion that might seem legitimate, but Seemann shows that “[the test] doesn’t prevent regressions, or [prove] that the System Under Test works as intended.” Finally, Seemann gives a test in which he uses multiple boundary values and explains that even though it is a much better test, it hasn’t increased code coverage over the previous two. Hence, Seemann concludes that in order to show that software works as it is supposed to, you need to do more than just make sure all the code is covered by unit tests; you need to write multiple tests for certain portions of code and ensure correct outputs are generated for boundary-value inputs as well.
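
Seemann’s examples are in C#, but a hypothetical Java analogue (my sketch, not his code) makes the same point: both tests below cover the add() line, yet only the second can ever fail and catch a regression.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class CoverageVsCorrectnessTest {
        static int add(int a, int b) {
            return a + b; // the line both tests "cover"
        }

        // Covers the code but asserts nothing; it can never fail,
        // so it proves nothing about correctness.
        @Test
        public void uselessCoverageTest() {
            try {
                add(2, 2);
            } catch (Exception e) {
                // swallowed: even an exception would not fail this test
            }
        }

        // Identical coverage, but the assertion can catch a regression.
        @Test
        public void meaningfulTest() {
            assertEquals(4, add(2, 2));
        }
    }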

The reason I chose Mark’s blog post for my entry this week is that it relates to the material we’ve been discussing recently in class, especially data-flow testing. It is important for us to remember, when using code-based testing techniques, that writing unit tests simply to cover the code is not sufficient to ensure software is completely functional. Therefore, it’s probably a good idea to use a combination of code- and specification-based techniques when writing unit tests.

From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

Finding & Testing Independent Paths

Since we have been going over path testing in class this past week, I decided to find a blog post relating to that material. The post I found, titled “Path Testing: Independent Paths,” is a continuation of a couple of previous posts, Path Testing: The Theory & Path Testing: The Coverage, written by the same author, Jeff Nyman. In this blog post, Nyman offers an explanation of what basis path testing is, as well as how to determine the number of linearly independent paths in a chunk of code.

Nyman essentially describes a linearly independent path as any unique path in the graph that does not contain the same combination of nodes as any other linearly independent path. He also brings up the point that even though path testing is mainly a code-based approach to testing, by assessing what the inputs and outputs of a certain piece of code should be, it is still possible “to figure out and model paths.” He gives the specific example of a function that takes in arbitrary values and determines their Greatest Common Divisor. Nyman uses the following diagram to show how he is able to determine each linearly independent path:

I really liked how he was able to break down the logic in the form of processes, edges, and decisions without looking at the code. Sometimes, when we build our graphs strictly from code, it’s easy to get confused and forget about the underlying logic that determines the number of tests necessary to ensure our code is completely tested. It also helped me understand how basis path testing should work and how it should be implemented.

Nyman goes on to show that he is able to calculate the number of independent paths using the above graph and the formula for cyclomatic complexity. First he points out that the number of nodes is equal to the sum of the number of decisions and the number of processes, which in this case is 6. Then, by plugging the numbers into the cyclomatic complexity formula, V(G) = e – n + 2p, Nyman obtained the following results:

[screenshot of Nyman’s cyclomatic complexity calculation]
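
To see how the formula behaves, here is a hypothetical calculation; the node count of 6 comes from the post, but the edge count and the number of connected components below are invented for illustration:

    V(G) = e – n + 2p
         = 7 – 6 + 2(1)
         = 3

Three linearly independent paths would then mean at least three test cases, one exercising each path.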

Finally, Nyman ends the post by showing that the same results are obtained when going over the actual code of the Greatest Common Divisor function. He also shows that the same graph could apply to something like an Amazon shopping cart/wishlist program. I think the biggest takeaway from this post is that there is a strong relationship between cyclomatic complexity and testing, which can prevent bugs by determining each linearly independent path and making sure each one produces the desired functionality.
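
Since the post is built around a GCD function, here is a hypothetical Java version of one (my own sketch, not Nyman’s code), with the data-driven decision point marked:

    public class Gcd {
        // Euclid's algorithm: the loop condition is the decision node
        // that input data drives, which is why multiple test values
        // are needed to exercise its paths.
        public static int gcd(int a, int b) {
            while (b != 0) {      // decision: loop body taken or not
                int temp = b;
                b = a % b;
                a = temp;
            }
            return a;             // exit node
        }

        public static void main(String[] args) {
            System.out.println(gcd(12, 8)); // loop taken: prints 4
            System.out.println(gcd(7, 0));  // loop skipped: prints 7
        }
    }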

October 1, 2017

-Caleb Pruitt

From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

Could Robotic Process Automation (RPA) Be the Future of Testing?

According to blogger Swapnil Bhukan, robotic process automation is indeed the future of software testing. In his blog post, Robotic Process Automation (RPA) evolution and it’s impact on Testing, he predicts that RPA will perform about “50 to 60% of testing tasks” by the year 2025. If you are not familiar with RPA, Bhukan has also made a blog post describing what it is, how it works, and some of its benefits. Essentially, RPA is a way to automate any repetitive task using bots that are taught how to do said tasks. Currently, according to Bhukan, RPA is used to perform only about 4% of software testing tasks, but that is sure to change as RPA technology advances. Today, the main use case for RPA is pretty basic data-entry tasks.

The reason I chose to write about Bhukan’s blog post is that I found it quite interesting, especially since I was able to relate to the growth of RPA through past experience. Over the summer I had the opportunity to work an internship at an insurance company, and all the IT interns had the pleasure of sitting down to talk with the EVP/Chief Innovation Technology Officer and ask him some questions. I asked him what kinds of new technologies the company was looking to invest in, as well as what new technologies he was most excited about. His answer to both questions, requiring little time to think, was, hands down, RPA. Companies today are striving harder and harder to automate as many tasks as possible in order to save money.

One of the downsides to RPA, as Bhukan points out, is that it could potentially put many software testers out of a job. Some of the things keeping RPA from taking over the field of software testing at the moment are budgetary issues (RPA software is pretty expensive), companies being reluctant to adopt such new technology, and apprehension due to the possibility of losing customers if the tests aren’t done correctly. However, I believe that software testers may just have to realign their expertise as RPA technology evolves. By this I mean that software testing professionals/developers should begin to learn how to teach these bots and leverage their usefulness in completing repetitive tasks; after all, the bots can only be as smart as those that teach them. I think Bhukan shares this view when he says at the end of his blog post, “sooner or later we (Software testing professionals) need to upgrade our skill set to train the Robots.”

September 24, 2017

-Caleb Pruitt

From the blog CS@Worcester – Caleb's Computer Science Blog by calebscomputerscienceblog and used with permission of the author. All other rights reserved by the author.

Week 4 (2/7 – 2/13) Clean Coder Ch. 7 & 8

When I decided that I wanted to become a programmer, I thought it would be 99% program code and 1% test code. I thought I could mindlessly write as much code as I wanted and just know whether it worked by running it. I was wrong. Completely wrong. After gaining more insight into being a software developer, I realize now that testing is underrated and overlooked. Reading chapters 7 and 8 reinforced this concept even more.

If you really think about it, it’s quite simple. You want your program to work, and the only way to know whether it works correctly is to test it. This is where acceptance testing comes into play, telling you when a requirement is done. That is obviously important, because if you don’t know the requirements, how would you even know what to test?

Chapter 8 was somewhat broad, since it covers a wide array of topics regarding which tests to use depending on the level of the system you are working on, but it was nonetheless a good review. It’s always good to review testing strategies because, at the end of the day, your program is only as good as your tests.

From the blog CS@Worcester – Tan Trieu's Blog by tanminhtrieu and used with permission of the author. All other rights reserved by the author.