Author Archives: CS@Worcester – Tyler Lundstrom

Lessons to Learn from Kent Beck, the Father of TDD

In a podcast I listened to earlier this week, the Test & Code podcast from PythonTesting, I was able to hear from the “father” of TDD himself, Kent Beck. In this episode, the host, Brian, took the snippets from a Software Engineering Radio podcast (Episode 167) that he thought would have the most impact on the listener.

These are the 5 things Brian decided to look at:

  1. Your tests should tell a story.

  2. Be careful of DRY, inheritance, and other software development practices that might get in the way of keeping your tests easy to understand.

  3. All tests should help differentiate good programs from bad programs and not be redundant.

  4. Test at multiple levels and multiple scales where it makes sense.

  5. Differentiating between TDD, BDD, ATDD, etc. isn’t as important as testing your software to learn about it. Who cares what you call it?

One interesting thing I noted from this was point number 2: being careful about writing tests that simply follow common software design practices. This goes hand in hand with point 1, which says your tests should tell a story. Each individual test should tell the person reading it what is being accomplished at that point. For example, if you apply the Don’t Repeat Yourself (DRY) philosophy too aggressively, you won’t be able to see that story as well in your tests, because the readable repetition has been consolidated away, as in the sketch below.
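
Here is a minimal pytest-style sketch of that trade-off. The apply_discount function and the specific discount codes are hypothetical examples of mine, not anything Kent Beck or Brian discussed:

    def apply_discount(price, code):
        """Hypothetical function under test (not from the podcast)."""
        if code == "SAVE10":
            return round(price * 0.90, 2)
        if code == "SAVE25":
            return round(price * 0.75, 2)
        return price

    # Over-DRY version: the loop hides which case is being exercised, so a
    # failure only tells you "one of these broke."
    def test_discounts_dry():
        cases = [("SAVE10", 90.0), ("SAVE25", 75.0), ("BOGUS", 100.0)]
        for code, expected in cases:
            assert apply_discount(100.0, code) == expected

    # Story-telling version: each test names the behavior it checks, and a
    # failure points straight at the scenario that broke.
    def test_save10_takes_ten_percent_off():
        assert apply_discount(100.0, "SAVE10") == 90.0

    def test_unknown_code_leaves_price_unchanged():
        assert apply_discount(100.0, "BOGUS") == 100.0

The DRY version is shorter, but when it fails you have to dig through the loop to find out which case broke; the story-telling versions read like small statements about the behavior.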

From the blog CS@Worcester – Tyler Lundstrom by CS@Worcester – Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.

To Test it All

As I learn more about software development and testing, I’ve come to find that testing 100% of a major software system is very difficult. Earlier this week I read a blog post from Sauce Labs (you can read it here) that covered the topic of code coverage vs. available resources. The main focus of the post was not worrying about testing 100% of the software system, but testing it smartly.

The writer looked at a few different ways to approach testing smartly. One of the ways was by looking at what the “typical” user input is. I found this interesting because it makes total sense: instead of testing everything a user might conceivably input, thoroughly test the most common things you expect the user to input, as in the small sketch after this paragraph.
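
For example, a pytest sketch like the one below concentrates the effort on the formats users actually type most often. The normalize_phone function and the sample inputs are my own hypothetical illustration, not something from the Sauce Labs post:

    import pytest

    def normalize_phone(raw):
        """Hypothetical function under test: keeps the last ten digits of a U.S. phone number."""
        digits = "".join(ch for ch in raw if ch.isdigit())
        return digits[-10:] if len(digits) >= 10 else digits

    # Thoroughly test the formats real users type most often, rather than
    # trying to enumerate every conceivable input.
    @pytest.mark.parametrize("raw", [
        "5085551234",        # plain digits
        "508-555-1234",      # dashes
        "(508) 555-1234",    # parentheses and a space
        "1-508-555-1234",    # leading country code
    ])
    def test_most_common_formats_normalize_the_same_way(raw):
        assert normalize_phone(raw) == "5085551234"

Rarer inputs, like international numbers or letters mixed into the field, still matter, but under this approach they get the remaining time rather than the first pass.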

The next way the writer looked at testing smarter is the Pareto Principle. This principle states that roughly 20% of your inputs account for 80% of your gains, which in testing translates to 20% of your tests finding 80% of your bugs. This is intuitive when you think about it: that 20% of your tests should cover everything you can easily think of that could be an issue, leaving the other 80% of your tests and time for the many edge cases that affect a software system.

The last way the writer looked at approaching testing smartly was by using big data. With big data, you can look at trends in what hardware and software your users are running and which features they click on and use most often. That way you can focus your efforts on the more important aspects of your software system.

From the blog CS@Worcester – Tyler Lundstrom by CS@Worcester – Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.

Why Good Management Leads to Better Software

This week I was listening to a few podcasts, and one in particular sprung out to me. It was from Software Engineering Radio and pertained to software quality (SE Radio Episode 262: Software Quality with Bill Curtis). In this podcast, Bill Curtis began by discussing what can go wrong when systems for good software quality are not in place. One of the companies he talked about was Knight Capital Group. If you’ve never heard of Knight Capital Group, let me give you some background.

Who is Knight Capital Group?

Knight Capital Group was one of the largest stock market trading companies in 2012, with a U.S. market share of approximately 17% on the NYSE and NASDAQ. One day, Knight Capital Group was gearing up to upgrade a piece of its software to integrate with the NYSE’s Retail Liquidity Program. In the process of deploying the new code to all of their servers, one of their technicians forgot about a server. Because that server still had the old code and never received the new code, the company lost approximately 440 million dollars in about 45 minutes. Because they did not have a simple procedure in place to double- and triple-check all of their systems, Knight Capital Group is no longer one of the top trading companies in the U.S. Instead, it is the subject of many articles, blogs, and podcasts on the internet. The event was investigated by the SEC, and there is an SEC filing on public record. This was the most relevant part of it:

15. Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.

 

Where to Go from Here?

The conversation on the podcast went from what happens when you don’t have controls in place to assure software quality to how you obtain it. As I listened to Bill Curtis speak, he kept bringing up the subject of CMM. While listening, I had no idea what CMM was, and since I was driving I wouldn’t be able to fully grasp the podcast until I did a little digging.

What is CMM? I had no clue.

After a little digging I found out that CMM actually stands for the Capability Maturity Model for Software. It’s a development model created at Carnegie Mellon University’s Software Engineering Institute in Pittsburgh, PA. It has been a required standard for many DoD software and technology contracts because its strict guidelines help produce higher quality software.

CMM is more of a project management model than anything else. It certainly helps the engineers: prior to CMM, engineers often had impossible deadlines and were unable to write elegant code and test it. CMM assists management in setting realistic deadlines while still producing stable, working software. It consists of 5 levels: Initial, Repeatable, Defined, Managed, and Optimizing. An organization is assessed at a level as it meets each new set of requirements, and the ideal is to reach level 5. Here’s a brief understanding of each level:

  1. Initial – Your organization typically isn’t a very stable environment. These organizations have a really hard time making deadlines, which usually leads to cutting corners and producing poor products.
  2. Repeatable – At this level, organizations have policies and procedures in place at the engineering level to produce good quality code and maintainable projects.
  3. Defined – This level is similar to level 2, but you now have policies and procedures in place for both engineering and management.
  4. Managed – At this level, data is constantly being collected to refine the policies and procedures that have been put into place.
  5. Optimizing – This is the ideal level; the organization is running cleanly and is able to “cut off the fat” of unneeded processes.

The takeaway?

Bill Curtis’s major point in the podcast was that engineering managers or project managers can be the biggest issue when integrating CMM. Even if the executives put all of these policies into place, if the managers do not follow them, then no change will happen. In order to make sure that the managers know how to handle these new policies, a lot of training needs to happen.

I can tell you from experience that having non-compliant managers can make your QA/QC efforts disappear. Earlier this year I was promoted from technician to engineer and moved departments. My previous role included a large amount of QA/QC on all of our equipment. Before I took on this role, the QA/QC plans were lazy and ineffective; as I looked through the records, I noticed periods when equipment was uncalibrated and being used improperly. To counteract this, I implemented controls for all of our equipment and inventories. When I moved departments, the lead technician I handed my responsibilities off to neglected to continue the policies I had put into place. A few months later that individual quit and I was called back to that department. It took me weeks to clean up all of the QA messes he left, and I found out that we had done some work with uncalibrated equipment. That’s no good. Moving forward, management has decided they need to take a bigger role in QA/QC.

 

From the blog CS@Worcester – Tyler Lundstrom by CS@Worcester – Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.

console.log("Hello world!");

Hello World! This is the first of (hopefully) many posts about all things software related.

The next few months are going to be focused on software testing, to go along with a class I am taking: CS443 at WSU – Software Quality Assurance and Testing. I am really excited for this class because a big part of my job at work is QA/QC (non-software related).

 

From the blog CS@Worcester – Tyler Lundstrom by CS@Worcester – Tyler Lundstrom and used with permission of the author. All other rights reserved by the author.