Category Archives: quality assurance

CS443 – Introduction

Hello everyone, we are back and doing it again this semester. Tech. Worth Talking About returns with posts about two courses: CS443 – Software Quality Assurance and Testing, and CS448 – Software Development Capstone.
CS443 will focus on strategies and tools for automating code testing, as well as how testing fits into the overall development process. This goes far beyond simply debugging.

Welcome, and stay tuned!

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

The Clean Coder: Chapter 7 & 8

This week in The Clean Coder I read chapters 7 and 8. These chapters covered a lot about testing, specifically acceptance testing and testing strategies. One of the more interesting topics was estimates. This is an interesting topic to me because a few months back I listened to a podcast on the exact same subject. Uncle Bob didn’t go into as much detail here as Steve McConnell did on the podcast, but he made the most important point:

Professional developers understand that estimates can, and should, be made
based on low precision requirements, and recognize that those estimates are
estimates.

The latter half of that quote is the important part: estimates are estimates. In industry, estimates are too often taken as absolute fact, and that has become poor practice. Once we remember that estimates are estimates, we start taking uncertainty back into account, and everyone is happier for it.


In chapter 8 Uncle Bob began talking about testing strategies. The first point he hit was actually a reiteration of something he said in an earlier chapter: “QA Should Find Nothing”. My initial reading was that as a developer you should make sure QA has NO role. However, this is not the case; the issue was my view of the role of QA engineers. I assumed their job was to catch the bugs I missed. This is wrong. According to Uncle Bob, their role should consist of creating automated acceptance tests and characterizing the true behavior of an application.
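To picture what that might look like, here is a minimal sketch of an automated acceptance test using Python’s unittest. The Cart class is entirely hypothetical, a stand-in for whatever feature the business actually cares about; the point is that the test encodes an agreed-upon behavior rather than an implementation detail.

```python
import unittest

# Hypothetical system under test: a tiny shopping-cart module standing in
# for whatever feature the business actually cares about.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class CheckoutAcceptanceTest(unittest.TestCase):
    """Encodes an agreed-upon business rule, not an implementation detail."""

    def test_total_reflects_everything_in_the_cart(self):
        # Given a customer with two items in their cart
        cart = Cart()
        cart.add("notebook", 3.50)
        cart.add("pen", 1.25)
        # Then checkout charges exactly what the business agreed on
        self.assertEqual(cart.total(), 4.75)

if __name__ == "__main__":
    unittest.main()
```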

The rest of chapter 8 covers the different types, or stages, of automated testing: unit testing, component testing, integration testing, system testing, and exploratory testing. I’ve talked about all of these in previous blog posts, so I won’t spend too much time on them here. I do, however, want to note one thing Uncle Bob mentioned,

Unit tests provide as close to 100% coverage as is practical. Generally this
number should be somewhere in the 90s. And it should be true coverage as
opposed to false tests that execute code without asserting its behavior.

It’s interesting that when he talks about code coverage he makes it a point to say that our tests should assert something about >90% of the code we’ve written.
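To make the distinction concrete, here is a small sketch in Python’s unittest of a “false test” next to one that provides true coverage. The discount() function is hypothetical, invented purely for illustration.

```python
import unittest

def discount(price, percent):
    """Hypothetical code under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class FalseTest(unittest.TestCase):
    def test_discount_runs(self):
        # Executes discount(), so every line counts as "covered"...
        # but asserts nothing, so a broken discount() would still pass.
        discount(100.0, 20)

class TrueCoverageTest(unittest.TestCase):
    def test_discount_asserts_behavior(self):
        # True coverage: the same lines execute AND the behavior is checked.
        self.assertEqual(discount(100.0, 20), 80.0)
        self.assertEqual(discount(19.99, 0), 19.99)

if __name__ == "__main__":
    unittest.main()
```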


That’s all for this week. I look forward to next week’s chapters, which talk about management and go into more depth about estimation!

From the blog CS@Worcester – Tyler Lundstrom by Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.

What is Reasonable Test Coverage?

Earlier this week I was reading through some blogs and stumbled across a particular piece that I found highly amusing, but that also answered a question I had been trying to answer.

I found an article called The Way of Testivus. It’s essentially Confucius meets programming. While reading through and chuckling at most of what’s on the page, I stopped getting sidetracked and went back to looking for a good answer to my question: what is a reasonable amount of test coverage? As I continued to google, I landed back on the artima.com forums, and there was the very question I had been searching for. Funnily enough, the author of “The Way of Testivus” had replied with an answer to the poster’s question:

Testivus On Test Coverage
Early one morning, a programmer asked the great master:

“I am ready to write some unit tests. What code coverage should I aim for?”
The great master replied:

“Don’t worry about coverage, just write some good tests.”
The programmer smiled, bowed, and left.

Later that day, a second programmer asked the same question.

The great master pointed at a pot of boiling water and said:

“How many grains of rice should I put in that pot?”
The programmer, looking puzzled, replied:

“How can I possibly tell you? It depends on how many people you need to feed, how hungry they are, what other food you are serving, how much rice you have available, and so on.”
“Exactly,” said the great master.

The second programmer smiled, bowed, and left.

Toward the end of the day, a third programmer came and asked the same question about code coverage.

“Eighty percent and no less!” replied the master in a stern voice, pounding his fist on the table.
The third programmer smiled, bowed, and left.

After this last reply, a young apprentice approached the great master:

“Great master, today I overheard you answer the same question about code coverage with three different answers. Why?”
The great master stood up from his chair:

“Come get some fresh tea with me and let’s talk about it.”
After they filled their cups with smoking hot green tea, the great master began to answer:

“The first programmer is new and just getting started with testing. Right now he has a lot of code and no tests. He has a long way to go; focusing on code coverage at this time would be depressing and quite useless. He’s better off just getting used to writing and running some tests. He can worry about coverage later.”

“The second programmer, on the other hand, is quite experienced both at programming and testing. When I replied by asking her how many grains of rice I should put in a pot, I helped her realize that the amount of testing necessary depends on a number of factors, and she knows those factors better than I do – it’s her code after all. There is no single, simple, answer, and she’s smart enough to handle the truth and work with that.”
“I see,” said the young apprentice, “but if there is no single simple answer, then why did you answer the third programmer ‘Eighty percent and no less’?”

The great master laughed so hard and loud that his belly, evidence that he drank more than just green tea, flopped up and down.

“The third programmer wants only simple answers – even when there are no simple answers … and then does not follow them anyway.”
The young apprentice and the grizzled great master finished drinking their tea in contemplative silence.


At this early stage as a programmer, I have decided to dedicate myself not to chasing a particular coverage number, but to better understanding how to get there with quality tests.
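For anyone who does want to see a number while they learn, here is a minimal sketch of measuring coverage with the third-party coverage.py package; the "tests" directory name is just an assumption about project layout.

```python
# Requires the third-party coverage.py package: pip install coverage
import unittest

import coverage

cov = coverage.Coverage()
cov.start()

# Discover and run every test module under the (hypothetical) tests/ directory.
suite = unittest.TestLoader().discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report()  # prints a per-file line-coverage summary to stdout
```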

From the blog CS@Worcester – Tyler Lundstrom by Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.

Why Good Management Leads to Better Software

This week I was listening to a few podcasts and one in particular jumped out at me. It was from Software Engineering Radio and it pertained to software quality (SE Radio Episode 262: Software Quality with Bill Curtis). In this podcast Bill Curtis began by discussing what can go wrong when systems for ensuring good software quality are not in place. One of the companies he talked about was Knight Capital Group. If you’ve never heard of Knight Capital Group, let me give you some background.

Who is Knight Capital Group?

Knight Capital Group was one of the largest stock market trading companies in 2012, with a U.S. market share of approximately 17% on the NYSE and NASDAQ. One day Knight Capital Group was gearing up to upgrade a piece of their software to integrate with the NYSE’s Retail Liquidity Program. In the process of deploying the new code to all of their servers, one of their technicians forgot about a server. Because that server never had the old code removed and the new code added, the company lost over 440 million dollars in approximately 45 minutes. Because they did not have a simple procedure in place to double- and triple-check all of their systems, Knight Capital Group is no longer one of the top trading companies in the U.S. Instead, they are the subject of many articles, blogs, and podcasts on the internet. The event was investigated by the SEC, and there is an SEC filing on public record. This was the most relevant part of it:

15. Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.
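A written procedure doesn’t have to be elaborate; even an automated sanity check comparing the build on every server against the expected one would have flagged the stale machine. Here is a minimal sketch in Python; the server names and the idea that each host reports a version string are assumptions for illustration (a real check might query each host over SSH or a health endpoint).

```python
# Minimal sketch of a post-deployment verification, in the spirit of the
# review Knight lacked. Hostnames and version strings are hypothetical.

EXPECTED_VERSION = "rlp-1.0"

def verify_deployment(deployed_versions):
    """Return the servers that do not report the expected build."""
    return sorted(
        host for host, version in deployed_versions.items()
        if version != EXPECTED_VERSION
    )

if __name__ == "__main__":
    # Simulated report: the eighth server still runs the old code.
    report = {f"smars-{n}": "rlp-1.0" for n in range(1, 8)}
    report["smars-8"] = "power-peg-legacy"
    stale = verify_deployment(report)
    if stale:
        raise SystemExit(f"Deployment incomplete on: {', '.join(stale)}")
    print("All servers on expected build.")
```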


Where to Go from Here?

The conversation on the podcast went from what happens when you don’t have controls in place to assure software quality to how to obtain them. As I listened, Bill Curtis kept bringing up the subject of CMM. I had no idea what CMM was, and since I was driving, I couldn’t fully grasp the podcast until I did a little digging.

What is CMM? I had no clue.

After a little digging I found out that CMM stands for the Capability Maturity Model for Software. It’s a development model created at Carnegie Mellon University’s Software Engineering Institute in Pittsburgh, PA. It is a required standard for many DoD software/technology contracts because its strict guidelines push vendors toward producing higher-quality software.

CMM is more of a project management model than anything else. It certainly helps the engineers: prior to CMM, engineers were given impossible deadlines and were unable to write elegant code and test it. CMM assists management in setting realistic deadlines while still producing stable, working software. It consists of 5 levels: Initial, Repeatable, Defined, Managed, and Optimizing. Each project gets assigned a level as it meets each level’s requirements; the ideal is to reach level 5. Here’s a brief overview of each level:

  1. Initial – Your organization typically isn’t a very stable environment. These organizations have a really hard time making deadlines, which usually leads to cutting corners and producing poor products.
  2. Repeatable – At this level, organizations have policies and procedures in place at the engineering level to produce good quality code and maintainable projects.
  3. Defined – This level is similar to level 2, but you now have policies and procedures in place for both engineering and management.
  4. Managed – At this level, data is constantly being collected to refine the policies and procedures that have been put into place.
  5. Optimizing – This is the ideal level: everything runs cleanly, and the organization is able to “cut off the fat” of unneeded processes.

The take away?

Bill Curtis’s major point in the podcast was that engineering managers or project managers can be the biggest obstacle when integrating CMM. Even if the executives put all these policies into place, no change will happen if the managers don’t follow them. A lot of training needs to happen to make sure managers know how to handle the new policies.

I can tell you from experience that having non-compliant managers can make your QA/QC efforts disappear. Earlier this year I was promoted from technician to engineer and moved departments. My previous role included a large amount of QA/QC work on all of our equipment. Before I took on this role, the QA/QC plans were very lax and ineffective; looking through the records, I noticed periods when equipment was uncalibrated and being improperly used. To counteract this, I put controls in place for all of our equipment and inventories. When I moved departments, the lead technician I handed my responsibilities to neglected to continue those policies. A few months later that individual quit and I was called back to that department. It took me weeks to clean up all of the QA messes he left, and I found out that we had done some work with uncalibrated equipment. That’s no good. Moving forward, management has decided to take a bigger role in QA/QC.


From the blog CS@Worcester – Tyler Lundstrom by Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.