Author Archives: orscsblog

Thoughts on “AI Test Automation: The AI Test Bots Are Coming”

In this article, Greg Sypolt talks in brief about the role of AI as a software testing assistant.  I chose this piece because it combines a field I’m interested in (AI and machine learning) with the content of my software testing course.  I am interested in AI task automation already, so a piece that dovetails these two topics is a perfect fit.  The author has chops as well — he oversaw the conversion from manual to automated testing at his company, and offers training to help other teams transition to automation.

Sypolt starts off by outlining three uses of AI in testing:

  1. Automatic test case generation that reduces the level of effort (which he abbreviates “LOE”) while maintaining consistent standards.
  2. Generating test code or pseudocode from user stories; a user story is a use case or use sequence for some kind of product (software or hardware).
  3. Codeless test automation, or testing without writing tests as code.

He then outlines the necessity of properly training the testing bots, and some of the difficulties that may be involved:

  1. Identifying the proper training algorithms.
  2. Collecting a massive quantity of useful data.
  3. Ensuring that bots behave in a reasonable fashion from a given set of inputs, and ensuring that they exhibit new behavior (generate new tests) when the inputs change.
  4. The training process never really ends; each new set of tests and results gives the bots another data point to learn from.

I firmly believe that we are at the very start of a revolution in machine learning.  Why not apply those principles to software testing?  There are, of course, a couple of issues that arise which Sypolt didn’t mention: quality of tests and accuracy of tests.

A point firmly pushed by other articles and texts I’ve read is that quality is more important than quantity.  There can be little difference between running ten tests and running one hundred if the extra ninety don’t teach us much of anything.  The trick isn’t to create an AI that merrily runs thousands upon thousands of unit tests; it’s to create one that identifies the important tests, the ones which reveal faults we don’t know about, and confines itself to executing exactly those.
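To make that concrete with an example of my own (not one from the article): a pile of tests for Java’s Math.abs on assorted positive integers all walk the same code path, while a couple of well-chosen cases actually probe behavior we might not know about.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class AbsTest {

    @Test
    void redundantCasesTeachNothingNew() {
        assertEquals(7, Math.abs(7));
        assertEquals(12, Math.abs(12));   // same code path as the line above
    }

    @Test
    void revealingCasesProbeWhatWeDontKnow() {
        assertEquals(5, Math.abs(-5));    // the negative branch
        // Surprise at the boundary: Integer.MIN_VALUE has no positive
        // counterpart in two's complement, so abs returns it unchanged.
        assertEquals(Integer.MIN_VALUE, Math.abs(Integer.MIN_VALUE));
    }
}

An AI that generated ninety more variations on the first test would look productive while adding nothing; one that found the second test would earn its keep.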

It’s also very important to ensure that the AI has learned properly and is up to standards — and that means testing the AI.  Which means testing the AI that tests the AI, and it’s digital turtles all the way down.

I can take away two things from this article.  Firstly, it’s reasonable to combine two fields that I’m interested in (AI and testing), and resources exist or will exist to support the two together.  Secondly, the field of testing is constantly and rapidly changing.  Continued learning is crucial, just as AI systems continue to learn from each new piece of data.

Article link

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Thoughts on “Six Things That Go Wrong With Discussions About Testing”

In this article, James Bach talks about the ways in which conversations about testing skew away from the reality of what it means to actually do software testing, and what it means to be a skilled tester.  As is clear from the title, he divides this broad topic into six smaller pieces:

  1. Size doesn’t matter.  The number of test cases that you run is not a meaningful number, just like the number of lines of code in a project is not meaningful.  What matters is what they cover and what they can teach us.
  2. Tests are performances.  Testing is an activity, not an object.  The person implementing test cases is more important than the cases themselves.
  3. Testing strategy needs to evolve.  Testing is a process of constant interrogation (I’ve written about this before in response to other articles) of both the code and the tester.
  4. Automation does not define testing.  Automation is a tool with which to run tests.  It takes human judgment and skill to design and implement quality tests.  Automation is a way to do a lot, but it takes a tester’s skills to do a lot with as little as possible.
  5. There are many kinds of test coverage.  No one type of coverage is truly comprehensive, and making changes to tests to give additional coverage by one metric may change what is covered by another metric.
  6. Testing is not static.  It’s an activity that’s fundamentally about learning.  Some things are predictable, but many things are not.  Testing is essentially formulating and then running experiments.  Much can be gained by deviating from established procedures.

The biggest takeaway for me from this article is the notion of testing as design and execution of experiments.  It’s not something I’d ever really thought about before, and it makes a lot of sense.  Testing is the process of formulating a hypothesis (these inputs in this context should result in this output) and then trying it out to see whether it’s right.  It even involves fairly thorough hypothesis testing, so that we can claim with confidence that a particular outcome of use is guaranteed (or close enough).
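Seen this way, even a simple unit test is a small experiment: the assertion states the hypothesis, and running the test either corroborates or refutes it.  A minimal sketch in Java, where the Pricing class and its method are hypothetical names of my own:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class PricingExperimentTest {

    // Hypothesis: applying a 10% discount to a $100.00 order yields $90.00.
    // Running the test is the experiment; a failing assertion refutes it.
    @Test
    void tenPercentDiscountYieldsNinetyDollars() {
        assertEquals(90.00, Pricing.applyDiscount(100.00, 0.10), 0.001);
    }
}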

This article is also a discussion of the ways in which one can have a useful conversation about testing, and the ways in which those conversations can turn useless.  I think this is important to take away too, when I’m looking for jobs in the software development field.  I want to avoid making conversational mistakes in interviews by going down the wrong path, and I also may want to avoid working at a company where those conversational mistakes are part of the corporate culture.

Article link

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Thoughts on “Top Ten Factors to Consider When Choosing a Testing Technique”

In this article, Manoj Verma describes (in his own words) “an exhaustive list of factors to consider” in the choice of testing technique.

But let’s back up a bit.

I chose this article to write about (and this topic) because in our work last week we were tasked with testing a small toy problem, and shortly after starting I realized that for all of my practice with different testing methods, I didn’t know where to begin.  So when I went to write my weekly blog post, that’s where I looked.

Verma’s list of factors is as follows:

  1. Risk assessment — how tolerant of failure is the product?
  2. Client requirements — did the client specify a testing technique?
  3. Time/budget constraints — how long do we have to test?  How much money can we spend on testing?
  4. Industry guidelines — does the product need to comply with some kind of regulation?  If so, which ones?
  5. Documentation — does documentation for the product exist?  Does it contain testing history?  Does it contain logical constructions such as decision tables or state graphs?
  6. Objective of test — what are we testing for?  What does the product need to do, or need to never do?
  7. Software development lifecycle — how is the development process managed?
  8. Models used to develop the system — how was the software system built?  Are there logical models that can be adapted into testing techniques or cases?
  9. Tester experience — how experienced is the tester?  How accurate is their assessment likely to be?
  10. Flexibility — does the development process model involve a lot of flexibility?

Verma also concludes that choosing the right technique “is not child’s play”.  It isn’t easy, and he doesn’t offer hard metrics, but his guidelines serve as a way for me to ask myself a series of questions and discover the method that may work the best.

My search for answers, as it turns out, did not give me any that are truly satisfying.  It led me to more questions.  Software testing in general can be seen as the act of asking questions of both the design and implementation.  It seems natural that software testing techniques and methods should also be subject to questioning; in fact, software testers should subject themselves to similar questions.

So what can I take away from this?  How can it help me move past the brand-new-problem paralysis?  That I should start by asking questions of the problem I’ve been presented with, and of myself and my own testing experience.

Article link.

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Thoughts on “TDD Guided by Zombies”

In “TDD Guided by Zombies”, James Grenning shares a seasonally appropriate acronym for developing unit tests — just in time for our class to wrap up its unit testing section!  ZOMBIES stands for Zero, One, Many/More, Boundary behaviors, Interface definition, Exercise exceptional behavior, and Simple scenarios/Simple solutions.  The first three letters (Zero, One, Many) describe, in order, the complexity of the behavior under test — for example, a queue with zero, one, and then many entries.  The second set of letters (Boundary, Interface, Exercise exceptions) gives the order in which to build test cases — start with boundaries, design the interface under test (or ensure that the code meets the interface requirements), and then exercise exceptional behavior.  The last letter tells the tester to create simple scenarios with simple solutions — add the minimal possible behavior to production code in order to pass the tests generated by the first six parts of the acronym.
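To make the progression concrete, here is a rough sketch of my own in Java with JUnit 5 (not Grenning’s example, which he develops later in C++) of how Zero, One, Many, and a Boundary test might look against a hypothetical CircularQueue class:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// CircularQueue is a hypothetical fixed-capacity queue; the names here
// are illustrative, not taken from the article.
class CircularQueueTest {

    @Test
    void zero_newQueueIsEmpty() {
        CircularQueue q = new CircularQueue(3);
        assertTrue(q.isEmpty());
    }

    @Test
    void one_singleItemComesBackOut() {
        CircularQueue q = new CircularQueue(3);
        q.enqueue(42);
        assertEquals(42, q.dequeue());
    }

    @Test
    void many_itemsComeBackInFifoOrder() {
        CircularQueue q = new CircularQueue(3);
        q.enqueue(1);
        q.enqueue(2);
        q.enqueue(3);
        assertEquals(1, q.dequeue());
        assertEquals(2, q.dequeue());
    }

    @Test
    void boundary_enqueueOnFullQueueThrows() {
        CircularQueue q = new CircularQueue(1);
        q.enqueue(1);
        assertThrows(IllegalStateException.class, () -> q.enqueue(2));
    }
}

Each test drives out one new behavior in production code, which is exactly the minimalism the S at the end of the acronym asks for.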

Grenning spends the bulk of the article on an example of a circular queue implementation in C++, showing how he progresses through each step.  He gives clear examples of both what to do and not to do.  However, I’m more concerned with the underlying principles and process, so in the interest of brevity I’ll skip a detailed review of his example and simply say that it’s thorough and worth a longer period of study.

I chose this article not just because of the season (or that it coincides neatly with our in-class work), but because it helps to answer the question of how.  We’ve learned many methods for generating unit test cases, but how do we pick the order?  How do we work backwards from tests to code in a way that makes sense and satisfies the specification that we’re handed?  This confirms what I learned in 343 last year: ZOMBIES are the key to TDD and to successful unit testing.

And what can I, as a learner, take away from this?  Firstly, and perhaps most trivially, it can help to wrap important concepts in cute acronyms.  It makes sharing and remembering knowledge easier.  Secondly, it clarifies the most important sets of values to test, and the most important times at which to test newly-instantiated objects (or other language-appropriate constructs): when they are fresh and empty, when they have a single value, and when they contain many values.  Thirdly, the article really drives home the importance of minimalism in testing and test-driven coding.  If a simple solution is all that it takes to meet the specification, then use it.  If a complex solution can be simplified, simplify it.

This semester and onward, when I need to develop unit test cases, I’ll be thinking ZOMBIES.

Article link.

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Thoughts on “Deeper Testing (3): Testability”

In his article “Deeper Testing (3): Testability”, Michael Bolton defines the testability of a product not only in terms of how it can be manipulated (although that’s one of his categories), but as “a set of relationships between the product, the team, the tester, and the context in which the product is being developed and maintained”.  He breaks this into five somewhat overlapping regions:

  • Epistemic testability — how we, as testers and developers, find the “unknown unknowns” related to the product.
  • Value-related testability — understanding the goals of everyone with a stake in the product and what they intend to get out of it.
  • Intrinsic testability — designing a product to facilitate easier and more meaningful testing.  This includes the ability to manipulate and view the product and its environment (see the sketch after this list for what that can mean in code).
  • Project-related testability — how the testing team is supported by and supports other teams or team members attached to the product.
  • Subjective testability — the skills of the tester or testers.
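To illustrate intrinsic testability with a small sketch of my own (Bolton’s article stays at the conceptual level): a class that reads the system clock directly is hard to manipulate in a test, while one that accepts a Clock can be put into any state we like.  The SessionChecker class is a hypothetical example.

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

// Accepting a Clock instead of calling Instant.now() directly makes the
// class's time-dependent behavior controllable from a test.
class SessionChecker {
    private final Clock clock;
    private final Instant expiry;

    SessionChecker(Clock clock, Instant expiry) {
        this.clock = clock;
        this.expiry = expiry;
    }

    boolean isExpired() {
        return Instant.now(clock).isAfter(expiry);
    }
}

A test can then pin “now” wherever it needs it, for example with Clock.fixed(Instant.parse("2030-01-01T00:00:00Z"), ZoneOffset.UTC), and check expiry behavior deterministically.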

Bolton then details how he thinks a tester on an agile team should operate: by bringing up and advocating for the testability of the product, and trying to keep the growth rate of the product sustainable in relation to testing.

As I learn more in the course, I find it increasingly important to understand not just how we test, but why and in what context.  Bolton’s article is especially helpful in understanding the context, and that is why I chose to write about it.  The article highlights aspects of the environment that surrounds a product, and the ways that those aspects contribute to or detract from the feasibility of testing.  It also speaks to a tester’s role in an agile team, which is useful to know about in practical terms, as many companies use some form of agile development.

With any resource I find in relation to my coursework, I look at what I can take away and what I can apply in my life (sometimes not limited to software development or computer science).  This piece gives me a better understanding of how to begin testing a product — not by writing test cases, or even determining the optimal testing tools to use, but by looking at the bigger picture.  I need to ask myself what I don’t know about the product, how it is going to be used, how it can be designed (or was designed) to facilitate testing, how I as a tester can and should engage other team members, and what skills I have that can make testing easier or what skills I need to hone that I don’t already possess.

Article link. 

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Thoughts on “Rethinking Equivalence Class Partitioning, Part 1”

In his blog post, James Bach picks apart the Wikipedia article on equivalence class testing (ECT) to explain how ECT acts as a heuristic founded in logic rather than a rigid procedure that applies only to computer science.

His main points are:

A) That ECT is not purely based on input partitioning, nor is it purely based in computer science.  It comes from logical partitioning of sets.

B) That ECT is useful for dividing inputs (and outputs) into groups which are likely to expose the same bug in a system.

C) That ECT is about prioritization of bugs rather than exhaustive exposure.

In essence, he states that equivalence class testing is not an algorithm used to generate test cases, but a heuristic technique used to partition conditions such that they reveal the most important bugs first.
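As a quick sketch of my own (not an example from Bach’s post): suppose a registration form accepts ages 0 through 120.  Rather than hammering every value, we partition the input domain into classes whose members would plausibly expose the same bug, and test one representative from each.  AgeValidator and its range are hypothetical names.

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// One representative per equivalence class, not exhaustive input coverage.
class AgeValidatorTest {

    @Test
    void typicalValidAge() {
        assertTrue(AgeValidator.isValid(35));    // stands in for all "ordinary" valid ages
    }

    @Test
    void negativeAgesRejected() {
        assertFalse(AgeValidator.isValid(-5));   // any negative likely fails the same way
    }

    @Test
    void impossiblyLargeAgesRejected() {
        assertFalse(AgeValidator.isValid(200));  // any too-large value likely fails the same way
    }
}

If a bug report later showed that ages 0 and 120 behave differently from 35, the classes would be repartitioned, which matches Bach’s point that the groupings are a revisable model rather than an algorithm’s output.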

I chose this article because it relates to equivalence class testing (which we covered recently in class) and also refutes points made by a resource commonly used for basic knowledge.  I (and I think many other students) frequently use Wikipedia for summaries of subjects, and getting an expert opinion on what’s wrong with that kind of source helps deepen understanding of why they need to at the very least be supplemented by more nuanced sources.  It moves my understanding beyond the high school teacher’s decree of “DO NOT USE WIKIPEDIA IT IS NOT A GOOD SOURCE”.

While I think that picking apart the phrasing and wording of specific passages from a Wikipedia article is needlessly pedantic, I also think it’s important to critically interrogate sources of “common knowledge”.  When it comes to testing, it is necessary to understand the ways in which techniques are grounded in needs and conditions outside of just the code.  A tester should not only know how to test but also why those tests are used and what they are best at uncovering.  Bach’s article helped me think about why we test things, why we use the techniques that we use, and what we hope to get out of them.  When it comes to ECT, we partition conditions into classes in order to make a model of the ways in which the product under testing behaves.  The model can and should be modified through the testing process; conditions can be grouped differently in order to reveal different bugs.  Bach presents ECT, like other techniques, as a fallible but important tool, and I think that’s good to keep in mind as I learn more about testing.

Link to blog post.

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

HelloWorld.java

class HelloWorld {
    public static void main(String[] args) {
        // Print a greeting to standard output.
        System.out.println("Hello World!");
    }
}

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.