Category Archives: testing

The Clean Coder: Chapters 7 & 8

This week in The Clean Coder I read chapters 7 and 8. These chapters covered a lot about testing, acceptance testing and testing strategies to be specific. One of the more interesting topics discussed was estimates. This is an interesting topic to me because a few months back I listened to a podcast on the exact same subject. Now, Uncle Bob didn’t go into as much detail here as Steve McConnell did on the podcast, but he made the most important point:

Professional developers understand that estimates can, and should, be made
based on low precision requirements, and recognize that those estimates are
estimates.

The latter half of that quote is the important part: estimates are estimates. All too often in industry, estimates are taken as absolute fact, and that has become a poor practice. Once we remember that estimates are estimates, we start taking uncertainty back into account, and everyone is happier for it.


In chapter 8 Uncle Bob began talking about testing strategies. The first point he decided to hit was actually a reiteration of something he said in an earlier chapter: “QA Should Find Nothing”. My initial understanding was that, as a developer, you should make sure QA has no role. However, this is not the case. The issue was my view of the role of QA engineers. I assumed that their job was to catch the bugs I missed. This is wrong. According to Uncle Bob, their role should consist of creating automated acceptance tests and characterizing the true behavior of an application.
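To make that concrete, here is a minimal acceptance-style sketch in plain PHPUnit (the Account class and names are hypothetical, and a real QA team would more likely use a dedicated acceptance testing tool). The point is that the test characterizes behavior from the user’s point of view:

<?php
use PHPUnit\Framework\TestCase;

// Hypothetical system under test, defined here only to keep the sketch self-contained.
class Account
{
    private float $balance = 0.0;

    public function deposit(float $amount): void
    {
        $this->balance += $amount;
    }

    public function balance(): float
    {
        return $this->balance;
    }
}

// Acceptance-style test: it states the expected behavior in business terms,
// rather than probing the internals of the implementation.
class DepositAcceptanceTest extends TestCase
{
    public function testDepositingFundsIncreasesTheBalance()
    {
        $account = new Account();
        $account->deposit(50.0);
        $this->assertSame(50.0, $account->balance());
    }
}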

The rest of chapter 8 continues on to the different types, or stages, of automated testing: unit testing, component testing, integration testing, system testing, and exploratory testing. These are all things I’ve talked about in previous blog posts, so I won’t spend too much time on them. I do, however, want to note one thing Uncle Bob mentioned:

Unit tests provide as close to 100% coverage as is practical. Generally this
number should be somewhere in the 90s. And it should be true coverage as
opposed to false tests that execute code without asserting its behavior.

It’s interesting that when he talks about code coverage, he makes it a point to say that our tests should assert something about more than 90% of the code we’ve written.
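Here is a minimal sketch of that distinction in PHPUnit (the applyDiscount function is made up for illustration): both tests execute the same code and raise coverage, but only one of them asserts anything about its behavior.

<?php
use PHPUnit\Framework\TestCase;

// Hypothetical function under test.
function applyDiscount(float $price): float
{
    return $price * 0.9;
}

class DiscountTest extends TestCase
{
    // A "false" test: it executes the code, so coverage goes up,
    // but it asserts nothing about what the code actually does.
    public function testDiscountRuns()
    {
        applyDiscount(100.0);
        $this->assertTrue(true); // always passes
    }

    // A "true" test: it asserts the behavior we expect.
    public function testDiscountTakesTenPercentOff()
    {
        $this->assertSame(90.0, applyDiscount(100.0));
    }
}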


That’s all for this week. I look forward to next week’s chapters, which talk about management and go into more depth on estimation!

From the blog CS@Worcester – Tyler Lundstrom by CS@Worcester – Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.

What is Reasonable Test Coverage?

Earlier this week I was reading through some blogs and stumbled across a particular piece that I found highly amusing, and that also answered a question I had been trying to answer.

The article is called The Way of Testivus. It’s essentially Confucius meets programming. While reading through and chuckling at most of the things mentioned on the page, I stopped letting myself be sidetracked and went back to looking for a good answer to my question: what is a reasonable amount of test coverage? As I continued to Google, I landed back on the artima.com forums and found the very question I had been searching for. Funnily enough, the writer of The Way of Testivus replied with an answer to the poster’s question.

Testivus On Test Coverage
Early one morning, a programmer asked the great master:

“I am ready to write some unit tests. What code coverage should I aim for?”
The great master replied:

“Don’t worry about coverage, just write some good tests.”
The programmer smiled, bowed, and left.

Later that day, a second programmer asked the same question.

The great master pointed at a pot of boiling water and said:

“How many grains of rice should I put in that pot?”
The programmer, looking puzzled, replied:

“How can I possibly tell you? It depends on how many people you need to feed, how hungry they are, what other food you are serving, how much rice you have available, and so on.”
“Exactly,” said the great master.

The second programmer smiled, bowed, and left.

Toward the end of the day, a third programmer came and asked the same question about code coverage.

“Eighty percent and no less!” replied the master in a stern voice, pounding his fist on the table.
The third programmer smiled, bowed, and left.

After this last reply, a young apprentice approached the great master:

“Great master, today I overheard you answer the same question about code coverage with three different answers. Why?”
The great master stood up from his chair:

“Come get some fresh tea with me and let’s talk about it.”
After they filled their cups with smoking hot green tea, the great master began to answer:

“The first programmer is new and just getting started with testing. Right now he has a lot of code and no tests. He has a long way to go; focusing on code coverage at this time would be depressing and quite useless. He’s better off just getting used to writing and running some tests. He can worry about coverage later.”

“The second programmer, on the other hand, is quite experienced both at programming and testing. When I replied by asking her how many grains of rice I should put in a pot, I helped her realize that the amount of testing necessary depends on a number of factors, and she knows those factors better than I do – it’s her code after all. There is no single, simple answer, and she’s smart enough to handle the truth and work with that.”
“I see,” said the young apprentice, “but if there is no single simple answer, then why did you answer the third programmer ‘Eighty percent and no less’?”

The great master laughed so hard and loud that his belly, evidence that he drank more than just green tea, flopped up and down.

“The third programmer wants only simple answers – even when there are no simple answers … and then does not follow them anyway.”
The young apprentice and the grizzled great master finished drinking their tea in contemplative silence.

 

In this early stage as a programmer, I have decided to dedicate myself not to chasing a particular code coverage number, but to better understanding how I can get there with quality tests.

From the blog CS@Worcester – Tyler Lundstrom by CS@Worcester – Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.

Creating Tests with Laravel

Laravel, Artisan and Testing

By: Tyler Lundstrom

The past few weeks I have been working on a project using the Laravel PHP framework. So far I love it, and it has immensely increased my abilities and knowledge for developing PHP. One of the many great things about Laravel is its Artisan tool. Laravel’s Artisan tool creates a lot of the boilerplate code for you and puts the generated files in the proper directories.

The command I’d like to talk about today is php artisan make:test. When given a name, this command auto-generates the test boilerplate you need to run unit tests in Laravel. This saves you from having to manually type out the file, the class name, and the extension. As a side note about the extension: the generated class extends a class called TestCase, an abstract class that already fills in much of the boilerplate-style syntax, and you simply build upon that.
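For example (the class name here is hypothetical, and the exact stub, namespace, and file location vary between Laravel versions), running php artisan make:test UserTest generates a file along these lines:

<?php
// tests/UserTest.php (the exact location depends on your Laravel version)

class UserTest extends TestCase
{
    /**
     * A basic test example.
     */
    public function testExample()
    {
        $this->assertTrue(true);
    }
}

From there, assuming a standard Composer setup, the whole suite runs with vendor/bin/phpunit.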

Once your file is created and you are ready to start writing your tests, PHPUnit code is as simple to write as JUnit code:

public function testExample()
{
    $this->assertTrue(true);
}

That’s all you need to know to get started testing with Laravel. Thank you, Taylor Otwell, for making PHP development a delight!

From the blog CS@Worcester – Tyler Lundstrom by CS@Worcester – Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.

Ugh! PHP Unit Testing

I spend a lot of my free time programming in PHP. One of the things I have been looking at, since I am taking a software testing class, is PHPUnit and unit testing in PHP. I was getting discouraged by how odd PHPUnit seemed in comparison to the ease of use of something like JUnit. Then I stumbled upon a blog post that is a conversation between two people about PHPUnit.

The conversation starts with Ed explaining how he doesn’t like PHPUnit specifically:

Ed: I guess realistically my complaints are aimed at PHPUnit. It’s very powerful and very complete from what I can tell, but I think it’s difficult to pick up and I think that difficulty makes people less likely to use it. Because it’s by far the best known testing tool, I think that tends to limit the use of unit testing, period, in PHP.

He goes on to state that other languages have unit testing support that is far easier to use. Java and Python, for example, have extremely easy-to-use unit testing frameworks and built-in features.

After a little while he continues to explain the reasons he believes PHPUnit is so intimidating:

Ed: I think boilerplate is part of the issue. I think that’s intimidating. Tools can mitigate that to some extent, but I don’t think it eliminates the problem entirely. I just don’t think writing a simple test should be anything more than a couple lines of code.

Prior to that quote, the other host, Chris, gave a snippet of a simple unit test, and it required 17 lines of code to write a single assertion. That seems absurd!
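I don’t have their exact snippet, but here is a hypothetical sketch of how a single assertion can balloon once mocks get involved (the Transport and Mailer types below are made up purely for illustration):

<?php
use PHPUnit\Framework\TestCase;

// Made-up collaborators, defined only so the sketch is self-contained.
interface Transport
{
    public function deliver(string $message): bool;
}

class Mailer
{
    private Transport $transport;

    public function __construct(Transport $transport)
    {
        $this->transport = $transport;
    }

    public function send(string $message): bool
    {
        return $this->transport->deliver($message);
    }
}

class MailerTest extends TestCase
{
    public function testSendDeliversTheMessage()
    {
        // The mock setup is where most of the boilerplate lives.
        $transport = $this->createMock(Transport::class);
        $transport->expects($this->once())
                  ->method('deliver')
                  ->with('hello')
                  ->willReturn(true);

        $mailer = new Mailer($transport);

        // The single assertion all of the above exists to support.
        $this->assertTrue($mailer->send('hello'));
    }
}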

At the end of the post Ed notes something that is very important and relates closely to TDD:

To write testable code, you really have to be thinking about testing when you write your code. It takes a bit of time to get used to that, but I think it’s very doable.

The number one reason PHP developers have a tough time thinking in this manner is that most of them are self-taught. PHP has a gentle learning curve, which is great for people who want to get into web development, but in order for PHP unit testing to move forward and for things to get better, more developers have to get into the TDD mindset.

 

P.S. I find it amusing that in my WordPress (mainly written in PHP) post editor, the acronym PHP isn’t included in the base dictionary.

From the blog CS@Worcester – Tyler Lundstrom by CS@Worcester – Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.

Lessons to Learn from Kent Beck, the Father of TDD

In a podcast I listened to earlier this week, from PythonTesting’s Test & Code podcast, I was able to hear from the “father” of TDD himself, Kent Beck. In this episode, the host, Brian, took the snippets from a Software Engineering Radio podcast (Episode 167) that he thought would have the most impact on the listener.

These are the 5 things Brian decided to look at:

  1. Your tests should tell a story.

  2. Be careful of DRY, inheritance, and other software development practices that might get in the way of keeping your tests easy to understand.

  3. All tests should help differentiate good programs from bad programs, and not be redundant.

  4. Test at multiple levels and multiple scales where it makes sense.

  5. Differentiating between TDD, BDD, ATDD, etc. isn’t as important as testing your software to learn about it. Who cares what you call it?

One interesting thing I noted from this was point number 2: being careful of tests written around common software design practices. This goes hand-in-hand with point 1, that your tests should tell a story. Each individual test should tell the person reading it what is being accomplished at that given point. For example, if you apply the Don’t Repeat Yourself (DRY) philosophy to your tests, the story becomes harder to see, because consolidating the repetition hides it, as in the sketch below.
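Here is a hypothetical PHPUnit sketch of that trade-off (the Cart class is made up to keep things self-contained). The DRY version funnels every scenario through one loop, while the story-telling version names each scenario explicitly:

<?php
use PHPUnit\Framework\TestCase;

// Made-up class under test.
class Cart
{
    private array $prices = [];

    public function add(string $name, float $price): void
    {
        $this->prices[$name] = $price;
    }

    public function total(): float
    {
        return array_sum($this->prices);
    }
}

class CartTest extends TestCase
{
    // DRY version: no repetition, but the scenarios are anonymous.
    public function testTotals()
    {
        $cases = [
            [[], 0.0],
            [[['tea', 2.0], ['rice', 3.0]], 5.0],
        ];
        foreach ($cases as [$items, $expected]) {
            $cart = new Cart();
            foreach ($items as [$name, $price]) {
                $cart->add($name, $price);
            }
            $this->assertSame($expected, $cart->total());
        }
    }

    // Story-telling version: each test says which behavior it covers.
    public function testEmptyCartTotalsToZero()
    {
        $cart = new Cart();
        $this->assertSame(0.0, $cart->total());
    }

    public function testTotalIsTheSumOfItemPrices()
    {
        $cart = new Cart();
        $cart->add('tea', 2.0);
        $cart->add('rice', 3.0);
        $this->assertSame(5.0, $cart->total());
    }
}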

From the blog CS@Worcester – Tyler Lundstrom by CS@Worcester – Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.

To Test it All

As I learn more about software development and testing, I’ve come to find that testing 100% of a major software system is very difficult. Earlier this week I read a blog post from Sauce Labs (you can read it here) that covered the topic of code coverage vs. available resources. The main focus of the post was not worrying about testing 100% of the software system, but testing it smartly.

The writer looked at a few different ways to approach testing smartly. One of them was looking at what the “typical” user input is. I found this interesting because it makes total sense: instead of testing for everything you might expect the user to input, thoroughly test the most common things you expect the user to input.

The next way the writer looked at testing smarter is the Pareto Principle. This principle states that roughly 80% of the effects come from 20% of the causes; in testing, that translates to 20% of your tests finding 80% of your bugs. This is quite an interesting topic, and intuitive when you think about it: 20% of your tests should cover everything you can easily think of that could be an issue. That leaves 80% of your tests and time to worry about the many edge cases that affect a software system.

The last way the writer looked at testing smartly was by using big data. With big data, we are able to look at trends in what hardware and software users are running, and in what users are clicking on and using most often. That way you are able to focus your efforts on the more important aspects of your software system.

From the blog CS@Worcester – Tyler Lundstrom by CS@Worcester – Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.

Why Good Management Leads to Better Software

This week I was listening to a few podcasts, and one in particular stood out to me. It was from Software Engineering Radio and it pertained to software quality (SE Radio Episode 262: Software Quality with Bill Curtis). In this podcast, Bill Curtis began by discussing what can go wrong when systems for good software quality are not in place. One of the companies he talked about was Knight Capital Group. If you’ve never heard of Knight Capital Group, let me give you some background.

Who is Knight Capital Group?

Knight Capital Group was one of the largest stock market trading companies in 2012, with a U.S. market share of approximately 17% on the NYSE and NASDAQ. One day, Knight Capital Group was gearing up to upgrade a piece of their software to integrate with the NYSE’s Retail Liquidity Program. In the process of deploying the new code to all of their servers, one of their technicians forgot about a server. Because that server never had the old code removed or the new code added, the company lost over 400 million dollars in approximately 45 minutes. Because they did not have a simple procedure in place to double- and triple-check all of their systems, Knight Capital Group is no longer one of the top trading companies in the U.S. Instead, they are the subject of many articles, blogs, and podcasts on the internet. The event was investigated by the SEC, and there is an SEC filing on public record. This was the most relevant part of it:

15. Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.

 

Where to Go from Here?

The conversation on the podcast went from what happens when you don’t have controls in place to assure software quality, to how to obtain that quality. Listening to Bill Curtis speak, he kept bringing up the subject of CMM. I had no idea what CMM was, and since I was driving, I wouldn’t be able to fully grasp the podcast until I did a little digging.

What is CMM? I had no clue.

After a little digging I found out that CMM stands for the Capability Maturity Model for Software. It’s a development model created at Carnegie Mellon University in Pittsburgh, PA. It is a required standard for many DoD software and technology contracts due to its strict guidelines, which push for higher-quality software.

CMM is more of a project management model than it is anything else. It certainly helps the engineers: prior to CMM, engineers faced impossible deadlines and were unable to write elegant code and test that code. CMM assists management in creating realistic deadlines while still delivering stable, working software. It consists of 5 levels: Initial, Repeatable, Defined, Managed, and Optimizing. An organization is assessed at a level as it meets each set of new requirements, and the ideal is to reach level 5. Here’s a brief description of each level:

  1. Initial – Your organization typically isn’t a very stable environment. These organizations have a really hard time making deadlines, and that usually leads to cutting corners and producing poor products.
  2. Repeatable – At this level, organizations have policies and procedures in place at the engineering level to produce good quality code and maintainable projects.
  3. Defined – This level is similar to level 2, but you now have policies and procedures in place for both engineering and management.
  4. Managed – At this level, data is constantly being collected to refine the policies and procedures that have been put into place.
  5. Optimizing – This is the ideal level: the organization is running cleanly and is able to “cut off the fat” of unneeded processes.

The takeaway?

Bill Curtis’s major point in the podcast was that engineering managers or project managers can be the biggest issue when integrating CMM. Even if the executives put all these policies into place, if the managers do not follow them, then no change will happen. In order to make sure that the managers know how to handle these new policies, a lot of training needs to happen.

I can tell you from experience that having non-compliant managers can make your QA/QC efforts disappear. Earlier this year I was promoted from technician to engineer and moved departments. My previous role included a large amount of QA/QC on all of our equipment. Before I took on that role, the QA/QC plans were very lazy and ineffective. Looking through the records, I noticed there were periods when equipment was uncalibrated and being improperly used. To counteract this, I implemented controls for all of our equipment and inventories. When I moved departments, the lead technician I handed my responsibilities off to neglected to continue the policies I had put into place. A few months later, that individual quit and I was called back to that department. It took me weeks to clean up all of the QA messes he left, and I found out that we had done some work with uncalibrated equipment. That’s no good. Moving forward, management has decided they need to take a bigger role in QA/QC.

 

From the blog CS@Worcester – Tyler Lundstrom by CS@Worcester – Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.