Category Archives: CS-443

The Future of Testing Is Taking an Interesting Turn

So, I was reading a blog called Awesome Testing the other day (really clever name, I know; it’s what caught my attention), and I saw a whole section of pages titled “TestOps”. Intrigued, I ventured further by reading the posts, which led me to the blog post that originally coined the term, by Seth Eliot. The short and sweet of it is that testing has traditionally been represented by this diagram:


The tests are created, they are run on the system, the results are obtained and checked against the ‘oracle’, and an assessment is made. However, the interesting thing is the current trend in software products. The most popular products nowadays are not simple programs but services: Facebook, Amazon, Twitter, etc. With this change, testing becomes quite different. Big Data concepts are used, and new features are tested with exposure control and monitoring. This becomes the new model:


The testing system becomes a whole architecture that testers are tasked with maintaining and using. This whole post was incredibly interesting to me. As someone new to software testing, my experience up until now has been discrete test cases and suites. With TestOps, however, testing becomes a whole new beast. Using Big Data techniques to test how well a service is doing, not just whether it runs correctly, was a definite surprise. As someone who is only now dipping their toes into Big Data while taking a Software Quality Assurance course, I didn’t expect the two areas to meet like this.
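The contrast with the traditional model is easy to sketch: classic testing compares the system’s output against an oracle, one discrete case at a time. Here is a minimal illustration of my own (the system under test and the oracle are invented stand-ins, not anything from the post):

```python
# Minimal sketch of the traditional testing loop: create tests, run
# them against the system, and check each result against an oracle.

def system_under_test(x):
    """Invented stand-in for the program being tested: doubles x."""
    return x * 2

def oracle(x):
    """Independent source of expected results."""
    return x + x

def run_tests(inputs):
    """Run every case and record whether it matched the oracle."""
    return [(x, system_under_test(x) == oracle(x)) for x in inputs]

assessment = run_tests([0, 1, -3, 10])
print(all(passed for _, passed in assessment))  # True: every case matches
```

TestOps keeps this loop but wraps it in a much larger pipeline of deployment, exposure control, and production monitoring.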

The final paragraphs of the post, describing how this development is also blurring the line between tester and developer, are exciting. There was a worry in the back of my head that software testing and development were two divided areas, and that one had to choose one or the other. Knowing how much they will intermingle now and in the future alleviates this to a great extent.

It also makes a large amount of sense. As the software being tested becomes more dynamic, testing must as well. Testing not just whether software is working, but whether it is working well, is an interesting distinction that requires more complicated solutions. These tests will require a process similar to software development to create: choosing the right kind of architecture, using tools similar to software process management tools to cut down on the time QA needs to write new tests or change existing ones, and keeping up continuous updates and integration.
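The “working versus working well” distinction can be sketched concretely. In this made-up example (the latency samples and the 200 ms threshold are invented for illustration), a functional check only asks whether requests succeed, while a quality check also asks whether the service is performing acceptably:

```python
# Sketch: "is it working?" vs. "is it working well?"
# The samples stand in for monitoring data from a running service.

def percentile(samples, pct):
    """Return the value at the given percentile of the samples."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[index]

latencies_ms = [35, 40, 42, 55, 60, 61, 70, 80, 150, 190]
all_requests_succeeded = True  # the functional result: it "works"

# Quality check on top: 95% of requests must finish within 200 ms
# (an invented service-level threshold) to count as "working well".
working = all_requests_succeeded
working_well = working and percentile(latencies_ms, 95) <= 200

print(working, working_well)  # True True
```

The first check is a traditional test; the second is the kind of continuous, data-driven assertion TestOps runs against live metrics.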

The incredibly interesting blog post can be found here:

The original blog post that sent me to it:

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

The Business Case for Unit Testing

In this post, Erik Dietrich talks about the importance of making sure developers are well-trained in writing unit tests.

“If you have developers writing good unit tests, you’ll develop a nice, robust unit test suite over the course of time. Developers will run this test suite constantly, and they’ll even have the team build to run it in a production-like environment.”

Making sure your unit tests are well written and easy for maintainers to understand helps prevent regressions, such as a developer editing code and breaking functionality elsewhere.
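As a small illustration of the idea (my own sketch; the function and tests are invented, not from Dietrich’s post), a unit test that guards against such a regression might look like:

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_full_discount_is_free(self):
        self.assertEqual(apply_discount(80.0, 100), 0.0)

# Developers run this suite constantly; if a later edit breaks
# apply_discount, the failing test pinpoints the regression.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
)
```

Because the tests name the expected behavior directly, a maintainer who breaks it finds out immediately rather than in production.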

“Testable code is better code, as measured by how easy it is to change that code.”

Writing better unit tests allows bugs to be caught earlier and improves modularity, saving businesses money and making you a better developer on a more productive team.

The post The Business Case for Unit Testing appeared first on code friendly.

From the blog CS@Worcester – code friendly by erik and used with permission of the author. All other rights reserved by the author.

Intro To Interoperability Testing

As the Fall semester kicks off and I begin to dive into the curriculum of Software QA and Testing, I have quickly realized how little I actually know and how in-depth testing really needs to be. That being said, I want to use my blog posts as an opportunity to learn about different types of testing. That led me to the article “A Simple Guide to Interoperability Testing,” written by the team over at ThinkSys. I really had no idea what interoperability testing was before reading it, so I decided to learn. I also liked that it was written in 2017; I know a lot of software topics have remained the same for decades, but there’s something refreshing about reading something published more recently.

First off, interoperability testing is a type of non-functional testing, meaning it tests the way a system operates and the readiness of the system. In the most general sense, interoperability is how well a system interacts with other systems and applications. A great example provided is a banking application (seen below) where a user is transferring money: data is exchanged on both sides of the transfer, without an interruption in functioning, to finish the transaction.

Banking Application Interoperability

The testing of the functionality that allows this fluent interaction between systems is what interoperability testing really is. It ensures end-to-end functionality between systems based on protocols and standards. The article covers five steps for performing this type of testing:

  1. Planning/Strategy: Understand each application that will be interacting in the network
  2. Mapping/Implementing: Each requirement should have appropriate test cases associated with it. All test plans and test cases are developed.
  3. Execution: Running all test cases and logging and correcting defects. Also retesting and regression testing after patches have been applied.
  4. Evaluate: Determine what the test results mean and ensure you have complete coverage of requirements.
  5. Review: Document and review the testing approach, outline all test cases and practices so further testing can improve on what has been done.
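The core idea behind the execution step can be sketched in miniature. In this toy example of my own (the two “systems” are invented stand-ins, not ThinkSys’s banking example), an interoperability test drives a transaction across both systems and checks the end-to-end result rather than either system in isolation:

```python
# Toy interoperability check: two independent "systems" (a sending and
# a receiving bank) must agree on the outcome of a money transfer.

class BankA:
    def __init__(self, balance):
        self.balance = balance

    def send(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        # In a real system this would be a network message following an
        # agreed protocol; here it is just a dict with the same fields.
        return {"amount": amount, "currency": "USD"}

class BankB:
    def __init__(self, balance):
        self.balance = balance

    def receive(self, message):
        self.balance += message["amount"]

def test_transfer_end_to_end():
    a, b = BankA(100), BankB(0)
    b.receive(a.send(30))
    # End-to-end assertions: both sides reflect the same transaction.
    assert a.balance == 70
    assert b.balance == 30

test_transfer_end_to_end()
print("transfer interoperability test passed")
```

A unit test of BankA or BankB alone would miss protocol mismatches between them; the interoperability test exists precisely to catch those.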

Some of the challenges that can arise with interoperability testing include the wide range of interactions that can take place between systems, the difficulty of testing multiple system environments, and the fact that root causes of defects are harder to track down when multiple systems are involved.

After reading this article I can definitely see the complexity in interoperability testing. Taking an iterative approach seems like the best method, because you can use the results of each iteration to create better test cases and more coverage. Trying to tackle all the test cases in one execution would be overwhelming, and it would be difficult to get close to full coverage. It also seems like interoperability testing needs to take place any time an application is updated, to make sure the systems that interact with it are still compatible. Now that I have a general understanding of interoperability testing, I am certain it will play a role in future jobs and work I do. With today’s technology, it is rare to have a completely standalone application that doesn’t interact with additional systems.

In conclusion, I enjoyed this article because it was simple and to the point, and I was able to retain the information.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Effective Code Reviews

While searching for a podcast for my first assignment, I came across this one about code review. I decided to choose it for my first blog entry because it had some interesting opinions about code reviews. By listening to it, I learned about other benefits of code reviews besides bug detection and code improvement. The link to the podcast is below:

In this podcast, hosted by Michael Kennedy, Dougal Matthews shared his thoughts and experience about the benefits of code reviews and the elements effective code reviews should have. Dougal gave an interesting scenario in which code reviews could save the day: two people, one who knows more about C++ and one who knows more about Python. Even though they might not deeply understand what the other person is doing, they should still do a lighter-level review of each other’s code, in case one of them falls ill or leaves the company. He also brought up one of the reasons he thinks many people dislike doing code reviews: they expect bug detection, but for valid code they usually receive suggestions for improvement more often than bug reports. Michael and Dougal also had some interesting ideas about what makes code reviews effective.

To be honest, I hadn’t thought that code reviews could be used as a tool to back up basic knowledge of a project, as Dougal mentioned. Usually, the first thing that comes to my mind when I hear “code review” is bug detection. But his scenario pointed out a special case, which some small groups might face, where code reviews can help.

As for why many people dislike doing code reviews, I understand the feeling of just wanting to check whether your code has bugs, not to redo the whole project because you received a better solution to the problem. But I think we should be more flexible about that. If another solution is better than our original one in one or many ways, we should choose it. Besides improving the code, we also gain experience from it. The next time we encounter something similar, we can jump directly to the better solution and skip the time we might otherwise have spent on the worse solutions we used before.

I think the code review checklist that Michael and Dougal mentioned would be very helpful. Like they said, it keeps the reviewers and the developers on the same page, which reduces the time everyone spends asking the same questions about progress. I also agree with their idea of having more than one person review the same project. Different people have different points of view, and people make mistakes. By having more people involved in the reviewing process, we can increase the quality of the review and share our knowledge with each other.

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

9/18/2017 — 1st Assignment Blog Post
Staying current with the course topic and keeping things as simple as possible, this blog post looks at some interesting examples of boundary value testing that I believe are helpful for understanding the course material.
Boundary value testing is the simple concept of checking functions that accept ranges of values for errors at the boundaries of those ranges. As stated in the article, it is one of the most important test case design techniques, one that every software developer will come across. It is a black box testing technique, meaning knowledge of the source code is not required.
Boundary value testing checks for off-by-one errors, a common mistake in computer programming. The most common off-by-one errors are the following:
misplacing < with <=, or > with >=
under-indexing an array or exceeding the limit of its storage space
incrementing or decrementing variables when it is not needed, or failing to do so when it is
incorrect loop conditionals
The simplest example given is that of N apples arranged in a straight line, with each apple assigned a number corresponding to its position. Counting all the apples from position p to position q inclusive, q − p apples is off by one, whereas q − p + 1 includes every apple that should be counted.
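The apple-counting example can be sketched directly (a quick illustration of my own of the fencepost error):

```python
# Sketch of the fencepost (off-by-one) counting error: counting
# apples at positions p through q inclusive.

def count_off_by_one(p, q):
    return q - p        # wrong: misses one apple

def count_correct(p, q):
    return q - p + 1    # right: an inclusive range has q - p + 1 items

p, q = 3, 7  # apples at positions 3, 4, 5, 6, 7 -> five apples
print(count_off_by_one(p, q))  # 4
print(count_correct(p, q))     # 5
print(len(range(p, q + 1)))    # 5, matches the correct count
```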
Another example given is a function that colors a rectangle, given the coordinates of its upper-left and bottom-right corners. If the coloring loops compare with < where <= is needed, the last column and row of the rectangle are not colored in, a typical off-by-one error; the fix is changing < to <=. The article gives a similar grading example, where each boundary comparison has to be chosen carefully:

if (points > 90) grade = 'A';
if (points > 80 && points <= 90) grade = 'B';
if (points > 70 && points <= 80) grade = 'C';
if (points > 60 && points <= 70) grade = 'D';
if (points > 50 && points <= 60) grade = 'E';
if (points <= 50) grade = 'F';

Here too, writing < where <= is needed would leave boundary scores such as exactly 50 or exactly 90 with no grade assigned.
The final example is simple boundary ranges, say from 1 to 10. Some good test cases are 1 and 10, 0 and 11, or perhaps 2 and 9. This is much like the exercises done in class and on the homework assignments.
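Those test cases can be written out as a minimal sketch (the validator here is an invented example): for a function accepting values from 1 to 10, test the boundaries themselves, the values just outside them, and the values just inside.

```python
# Boundary value tests for a hypothetical validator accepting 1..10.

def in_range(n):
    return 1 <= n <= 10

# On the boundary: should be accepted.
assert in_range(1) and in_range(10)
# Just outside the boundary: should be rejected.
assert not in_range(0) and not in_range(11)
# Just inside the boundary: should be accepted.
assert in_range(2) and in_range(9)
print("all boundary cases behave as expected")
```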
This blog post gives a variety of interesting examples of a simple concept in boundary value testing, the off-by-one error, that were not shown in the examples in class or on the homework assignment, and that I think are worth noting for any entry-level programmer. It gives a brief overview of different mistakes any programmer should try to avoid, and a decent overview of boundary value testing: testing the boundary values of functions that take ranges of inputs. Those are the reasons I chose this post for last week’s blog post assignment.

From the blog CS@Worcester – Site Title by myxuanonline and used with permission of the author. All other rights reserved by the author.

What’s Up World!

First Post of the blog! Exciting stuff.

The main focus for now is going to be Quality Assurance and Software Testing. The faulty functions in question are hopefully not going to be ones I myself create.

The picture is actually really symbolic, by the way. Let’s see if the Anteater symbolizes me and the blog or if the glass does.

From the blog CS@Worcester – Fu’s Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

Intro Post

Hello, my name is Tim Kmiec and this is my new blog for CS-443 at Worcester State University.

From the blog CS@Worcester – Tim’s Blog by nbhc24 and used with permission of the author. All other rights reserved by the author.