Category Archives: Worcester State Blogs

Unit Testing Tips and Tricks

First of all, what are unit tests and why are they important? The meaning is kind of given away in the name: they are designed to test individual features or components (a.k.a. units) of code to make sure each one works the way it is supposed to. This helps eliminate external factors, like other features affecting the one you want to test. Unit tests are used anywhere and everywhere, whether you are aware that you are using them or not. I can say that we use them on a regular basis at work, and I use them when writing any sort of software, even if I might not specifically think, "Let's unit test this piece." With unit testing being so important, I thought why not see if there are ways to improve them, and luckily Stormpath has a blog post with tips that I found quite useful and intend to use in the future.

The first thing they suggest is to use a testing framework such as JUnit. To put it simply, these frameworks make life easier. They help set up, organize, and run your tests, and I cannot agree more that frameworks both make tests easier to write and help improve testing. Next on their list is test-driven development, which basically means writing the tests before you write the code. Assuming the tests are written based on the requirements, this forces you to make sure you hit those requirements, and it produces a more complete, modular product in the end. I generally agree, but I do believe there are times when this can be difficult. Sometimes the customer may not know what they want the end product to be. What happens if the design shifts in the middle of development? Then you might have to rewrite all of the tests. When possible, however, test-driven development is a good approach to use.

They then suggest checking how much code you are covering. In other words, are you testing every line of the unit under test? If not, you are leaving yourself open to bugs. Stormpath offers a couple of other suggestions as well, but there is one that I feel is particularly important, so I am going to devote the rest of my time to it: test negative situations and edge cases. Sure, you've tested all of your code and it works as long as valid data has been entered. However, your program doesn't live under a rock, so it is going to be exposed to all sorts of situations, including users entering invalid data. You have no control over what they are going to do, so your program needs to handle those situations. I would argue that testing edge cases and invalid input might be more important than testing the normal cases, simply because you can probably predict how the program will behave when the user stays in bounds, but who knows what could happen when they don't.
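To make the negative and edge case idea concrete, here is a minimal JUnit 5 sketch of my own (the Calculator class and its divide method are hypothetical, not from the Stormpath post). It pairs one normal test with one invalid-input test:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical unit under test: a trivial calculator.
class Calculator {
    int divide(int dividend, int divisor) {
        if (divisor == 0) {
            throw new IllegalArgumentException("divisor must not be zero");
        }
        return dividend / divisor;
    }
}

class CalculatorTest {

    @Test
    void dividesValidInput() {
        // Normal case: valid data behaves as expected.
        assertEquals(4, new Calculator().divide(8, 2));
    }

    @Test
    void rejectsDivisionByZero() {
        // Negative case: invalid input should fail loudly, not silently.
        assertThrows(IllegalArgumentException.class,
                () -> new Calculator().divide(8, 0));
    }
}
```

The second test is the one the "happy path only" habit tends to skip, and it is often the one that catches the real-world bug.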

Link:

https://stormpath.com/blog/7-tips-writing-unit-tests-java

 

From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.

DRY, KISS, and YAGNI Design Principles

This week I have decided to continue the discussion about design principles from last week. I like design principles because they capture important things to keep in mind while coding and are usually written in a fashion that is easy to remember (typically an acronym of some sort), which is why I've continued to discuss them. This week there are three I want to cover: KISS, YAGNI, and DRY. Now, you might be asking, why three? Well, these principles all focus on keeping things as simple as possible, so they go together nicely. Jonathan San Miguel has a nice blog post that covers the basics of all three.

First in the queue is DRY: Don't Repeat Yourself. Another way of putting this would be "don't reinvent the wheel." Have you ever been looking through your code, or someone else's, and realized that there are similar or identical pieces of code in different parts of the program? Following this principle helps eliminate that. You shouldn't have to repeat code, and if you do, the original piece should be redesigned so you don't have to waste your time writing the same thing over and over again. I always find it frustrating when I come across near-duplicate code, because I have to spend time figuring out whether the copies are exactly the same, why it needed to exist twice, what little variation made someone write it again, and so on. So, to summarize: unless you have a clear, distinct reason for repeating code, don't do it.
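As a quick illustration of my own (the invoice example is made up, not from San Miguel's post), the price-formatting rule below lives in exactly one helper, so both report methods reuse it instead of repeating it:

```java
// Two report methods that used to duplicate the same price-formatting
// logic now share a single helper.
public class InvoiceReport {

    public String lineItem(String name, double price) {
        return name + ": " + formatPrice(price);
    }

    public String total(double price) {
        return "Total: " + formatPrice(price);
    }

    // The formatting rule exists in exactly one place (DRY),
    // so a change to it only has to be made once.
    private String formatPrice(double price) {
        return String.format("$%.2f", price);
    }
}
```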

Next up is KISS: Keep It Simple, Stupid. This one is mostly self-explanatory; it means don't make your code more complicated than it needs to be. Keeping code simple makes it easier for others to read, easier to modify, and easier to come back to later on. I especially agree with the point about coming back to modify it later. At the time, the complicated code you wrote probably made perfect sense to you, but if you return to it a year later to make some adjustments, you may spend the next two hours asking yourself what on earth you were trying to do with that piece of code. I know I've done it. San Miguel also suggests avoiding language-specific tricks; doing so lets people who aren't familiar with the language still understand what is going on.
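Here is a tiny made-up example of what I mean; both methods below behave the same, but the second one is the version I would want to come back to in a year:

```java
public class NumberDescriber {

    // The "clever" version: a nested ternary that makes readers stop and think.
    public static String describeTerse(int n) {
        return n == 0 ? "zero"
                : (n > 0 ? (n % 2 == 0 ? "positive even" : "positive odd")
                         : (n % 2 == 0 ? "negative even" : "negative odd"));
    }

    // The KISS version: same behavior, but each step reads at a glance.
    public static String describeSimple(int n) {
        if (n == 0) {
            return "zero";
        }
        String sign = n > 0 ? "positive" : "negative";
        String parity = n % 2 == 0 ? "even" : "odd";
        return sign + " " + parity;
    }

    public static void main(String[] args) {
        System.out.println(describeTerse(-7));   // negative odd
        System.out.println(describeSimple(-7));  // negative odd
    }
}
```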

Last on the list is YAGNI: You Aren't Gonna Need It. If you don't need a feature at the time you are writing the code, don't include it. You don't need it now, and you probably won't need it in the future either. You are simply wasting your own time and other people's, and probably causing some confusion along the way.

I hope you found these principles insightful and useful. I know I did, and I will keep them in mind when writing code.

Link:

http://www.itexico.com/blog/bid/99765/software-development-kiss-yagni-dry-3-principles-to-simplify-your-life

 

From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.

What Is Needed to Make Automated Testing Successful? Some Basics…

Testing software can take up a lot of time. One way to reduce the time spent on testing is to automate it, meaning that a person doesn't have to run each individual test manually. Automated testing is the way of the future, and there is information everywhere on it. Bas Dijkstra of StickyMinds.com discusses some of the basic principles of automated testing in his blog post, "5 Pillars of a Successful Test Automation Implementation."

The first of the five pillars is the automation tool. Everybody needs some sort of tool to help organize and run automated tests. Testing teams often overlook this choice and just go with whatever is on hand, and by doing so they may be making their own lives harder. Take the time to make sure you have a tool that fits your needs. I agree that this is an important first step: if you pick, or are forced to use, a tool that is poorly designed or doesn't meet your needs, you are putting yourself behind the eight ball from the start.

The second and third pillars cover test data and test environments. Depending on how broad the scope of the tests being run is, test data can become a pain to maintain. You want to have this situation under control, or you are asking for trouble; it is easy to imagine how disorganized it could get in large-scale testing. Going along with test data is the test environment. Have an environment that is as realistic as possible, make sure it has everything you need to complete your testing, and, if possible, make it easy to replicate. That lets you run multiple tests in independent environments and/or continue development in one of them. Nothing is more frustrating than not having an environment to do your work in, whether because another team member is using it, it is down for maintenance, or something else; an environment that is easy to duplicate helps eliminate this problem.
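To tie this back to the test data pillar, here is one small, hedged example of my own (not something Dijkstra shows): feeding the data into a single parameterized JUnit 5 test keeps it in one place instead of scattered across copies of the same test method.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class EmailValidatorTest {

    // Hypothetical unit under test: a very rough email check.
    static boolean looksLikeEmail(String s) {
        return s != null && s.contains("@") && s.indexOf('@') > 0;
    }

    // All of the test data lives in one annotation, so adding or
    // removing a case is a one-line change.
    @ParameterizedTest
    @ValueSource(strings = {"a@b.com", "user@example.org", "x@y"})
    void acceptsPlausibleAddresses(String candidate) {
        assertTrue(looksLikeEmail(candidate));
    }
}
```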

The last two pillars are reporting and craftsmanship. Reporting is vital, as it lets you and others analyze test results. Dijkstra suggests that a good report should show what went wrong, where it went wrong, and the error message that went with it. This ties directly into craftsmanship, because testing can be challenging if the right skills aren't available; there should be someone who has experience creating reports, for example. Make sure the right developers, engineers, and so on are on hand to answer questions and help when needed.

My experience with automated testing is limited, which is why I have started investigating it. From my experience with manual testing, I can say that what Dijkstra discusses certainly applies there, so I see no reason why it wouldn't apply to automated testing too. I hope to continue reading about automated testing, as I feel it is an important and necessary tool and skill to have.

Link:

https://www.stickyminds.com/article/5-pillars-successful-test-automation-implementation

 

From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.

S.O.L.I.D. Design Principles – What Are They and What Purpose Do They Serve?

This week I went back to Professor Wurst's concept map looking for some fresh material to research. This week's topic of choice is the S.O.L.I.D. design principles, a set of principles discussed and promoted by Uncle Bob (Robert C. Martin). The goal of having design principles is to make software easier to work with, maintain, and extend. Samuel Oloruntoba does a great job of giving a general overview of these principles in his blog.

First things first – what does S.O.L.I.D. stand for? Well, it stands for the Single-responsibility principle, the Open-closed principle, the Liskov substitution principle, the Interface segregation principle, and the Dependency inversion principle.

Single-Responsibility Principle: A class should only have one responsibility. In other words, a class should perform only one job. This can be applied to help make your program more modular. If you have a class that performs many tasks, it can become challenging to make changes to it.
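A quick hypothetical sketch of my own (not from Oloruntoba's post): the data class and the report formatting each get their own class, so each one has exactly one reason to change.

```java
// Each class has a single responsibility: Order holds data,
// OrderReportPrinter formats it.
class Order {
    private final String item;
    private final int quantity;

    Order(String item, int quantity) {
        this.item = item;
        this.quantity = quantity;
    }

    String getItem() { return item; }
    int getQuantity() { return quantity; }
}

class OrderReportPrinter {
    // Formatting lives here, so report changes never touch Order.
    String print(Order order) {
        return order.getQuantity() + " x " + order.getItem();
    }
}
```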

Open-Closed Principle: It should be easy to extend a class without having to make changes inside the class being extended. In other words, be prepared for the future; don't assume the class or program will never need to do additional things or serve a different purpose than it does today.
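Here is a rough sketch of how I picture it (my own hypothetical shapes example): adding a new shape means adding a class, not editing the calculator.

```java
// Open for extension, closed for modification: adding a new shape
// means adding a class, not editing AreaCalculator.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class AreaCalculator {
    double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area();   // no if/else on concrete types
        }
        return total;
    }
}
```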

Liskov Substitution Principle: Every subclass should be able to act as a substitute for its parent class. This once again promotes the ability to extend a program if need be.
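A minimal sketch, again my own example: any subclass can be handed to code that expects the parent, and the caller never has to care which one it got. A subclass that broke the parent's expectations would violate the principle.

```java
// Any Bird subclass can stand in wherever Bird is expected;
// the caller does not need to know the concrete type.
class Bird {
    String describe() { return "a bird"; }
}

class Sparrow extends Bird {
    @Override
    String describe() { return "a sparrow"; }
}

public class LspDemo {
    // Works the same whether it receives a Bird or a Sparrow.
    static void introduce(Bird bird) {
        System.out.println("This is " + bird.describe());
    }

    public static void main(String[] args) {
        introduce(new Bird());
        introduce(new Sparrow());   // substitutes cleanly for Bird
    }
}
```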

Interface Segregation Principle: Don't force clients to depend on methods or interfaces they don't need. In my opinion, violating this creates unneeded work for the user and can cause confusion, or even be deceptive, because it may not be clear what they actually need to implement.
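My own small illustration of the idea: splitting a "fat" machine interface into two focused ones means a basic printer never has to stub out a scan method it cannot support.

```java
// Splitting one "fat" interface into focused ones: a device only
// implements the capabilities it actually has.
interface Printer {
    void print(String document);
}

interface DocumentScanner {
    String scan();
}

class BasicPrinter implements Printer {
    public void print(String document) {
        System.out.println("Printing: " + document);
    }
    // No empty scan() stub forced on this class.
}

class MultiFunctionMachine implements Printer, DocumentScanner {
    public void print(String document) {
        System.out.println("Printing: " + document);
    }

    public String scan() {
        return "scanned page";
    }
}
```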

Dependency Inversion Principle: Items should depend on abstractions rather than concretions. Don't pigeonhole yourself. This is probably best explained with an example: say you have a LandscapeWorker class with several methods, including one that assigns a piece of equipment to the worker. One could simply hard-code the equipment inside the LandscapeWorker class, but then switching equipment would require changing a class that shouldn't have to change. Instead, have an Equipment interface with a separate class for each piece of equipment, so the worker only depends on the abstraction.
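Here is roughly how I picture that example in code. This is my own sketch of the idea, so the method names are mine, but the LandscapeWorker class and Equipment interface are the ones described above:

```java
// The worker depends only on the Equipment abstraction, so handing
// it a Mower, a LeafBlower, or anything new never requires touching
// LandscapeWorker itself.
interface Equipment {
    String use();
}

class Mower implements Equipment {
    public String use() { return "mowing the lawn"; }
}

class LeafBlower implements Equipment {
    public String use() { return "blowing leaves"; }
}

class LandscapeWorker {
    private Equipment equipment;

    void assignEquipment(Equipment equipment) {
        this.equipment = equipment;
    }

    String work() {
        return "Worker is " + equipment.use();
    }
}

public class DipDemo {
    public static void main(String[] args) {
        LandscapeWorker worker = new LandscapeWorker();
        worker.assignEquipment(new Mower());
        System.out.println(worker.work());      // Worker is mowing the lawn
        worker.assignEquipment(new LeafBlower());
        System.out.println(worker.work());      // Worker is blowing leaves
    }
}
```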

I feel design principles are important because they help paint a picture of the logic and power behind the programming languages they are meant to be used with. They push you to think of new and better ways to design code that you may not have considered beforehand. This is why I chose to discuss some of them this week. I feel that the S.O.L.I.D. design principles are a good place to start for anyone, including those who are just starting out. I understand that I have barely scratched the surface of design principles in general and of the purpose behind the ones discussed here. In the coming weeks, I hope to dive deeper into design principles and perhaps go in-depth on some of the principles discussed here. Stay tuned.

Link:

https://scotch.io/bar-talk/s-o-l-i-d-the-first-five-principles-of-object-oriented-design

 

From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.

Bug Taxonomy – Classifying Software Bugs

This week I have decided to change things up a bit. Seeing as we are now past the halfway point in the semester, I decided to start exploring some blogs other than the group I normally browse, just to try and find a different voice, a different point of view. I am happy to report I have succeeded in this mission, and have found a blog post by Michael Stahl on stickyminds.com that clicked with me.

One of the recurring problems we face as testers is making sure that we have covered everything that could possibly happen in a piece of software, good or bad. Stahl suggests using a bug taxonomy as a way to come up with new ideas about what needs to be covered. This type of taxonomy is not about mapping types of testing to types of bugs; it is about putting software bugs into categories. If you have a list of categories to go to each time you run through the testing gauntlet, it may help you think of new tests that need to be written for your product. A few bug items he cites under the performance category come from Testing Computer Software, by Cem Kaner, Hung Q. Nguyen, and Jack Falk: slow program, poor responsiveness, and no progress reports. The list goes on, but you get the point.

Once I saw this list of categories, the strategy made total sense to me. Basically, have a checklist titled "Have I covered:" or something to that effect. Running through the list forces you to think of scenarios you may not have covered but should. And since the list is categorized (performance, user experience, etc.), it lets you focus on one testing area at a time. I can tell you that after reading this post I made a list of things I need to go back and check for something I am working on at work, so this strategy has already paid dividends for me.

Although other testers' lists can be useful and are a good way to share ideas, Stahl strongly suggests making your own list to reference again and again. This is because you may not agree with how another person's list is laid out. For example, in the list from Testing Computer Software, Stahl mentions that "no progress reports" sits under the performance category; he feels it belongs under a user experience category. I agree that it should be under user experience, but these lists all come down to the tester's opinion, so it is not wrong that the book has it in a different spot. Making your own list avoids the disagreement entirely.

I really enjoyed this blog post because making a bug taxonomy list seems like a relatively simple way of finding new tests. It's practical and can be applied in everyday use without a big hit on time. We always talk about how important time is in testing, so if there is a quick and efficient way to make my tests better, I am in.

Link:

https://www.stickyminds.com/article/using-bug-taxonomy-design-better-software-tests

Link to picture: https://images-na.ssl-images-amazon.com/images/I/710avuIF12L._SY550_.jpg

 

 

From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.

What is Smelly Code and Why Your Code May be Stinking Up the Room – Continued…

So, we meet again with what makes code stink. As I mentioned in my blog last week, this is a continuation of the post I covered last week, as there were simply too many points I wanted to hit upon to fit in one post. For those who missed last week's post or can't remember, smelly code is basically a set of patterns in code that are commonly known to cause problems. The goal is to make sure your code doesn't stink by the time you are done with it, or you'll regret it later down the line.

The next item on the list is comments. Comments can be great: they provide insight into how the developer was thinking, why they designed things the way they did, what is going on in that quadruple-nested for-loop below the comment, and so on. The thing is, if you are really a good developer, an argument can be made that the code should speak for itself. Personally, I've always found comments useful and insightful, but the point here is that your code should not be so ambiguous or so complicated that you need an entire paragraph just to explain what is going on. While I don't necessarily agree that there should be no comments at all, I do agree that comments should be kept to a minimum, and if you can't do that, something is probably wrong with your code.
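As a made-up illustration of code speaking for itself (mine, not Atwood's): in the second method the names carry the information that the paragraph-long comment had to supply in the first.

```java
import java.util.List;

public class PayrollExample {

    // Needs a comment because nothing in the code says what it means:
    // "d is hours per day, r is the hourly rate, x is the overtime cutoff."
    static double calc(List<Integer> d, double r, int x) {
        double t = 0;
        for (int v : d) {
            t += Math.min(v, x) * r + Math.max(0, v - x) * r * 1.5;
        }
        return t;
    }

    // Explains itself: the names carry the information the comment did.
    static double weeklyPay(List<Integer> hoursPerDay, double hourlyRate,
                            int overtimeThreshold) {
        double total = 0;
        for (int hours : hoursPerDay) {
            int regular = Math.min(hours, overtimeThreshold);
            int overtime = Math.max(0, hours - overtimeThreshold);
            total += regular * hourlyRate + overtime * hourlyRate * 1.5;
        }
        return total;
    }
}
```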

Pressing further down Jeff Atwood's list of code smells, we run into the category of duplicate code. Please don't duplicate your code. The reasons are self-explanatory, but I will reiterate: it is bad, wasteful, and inefficient. If you need to perform a task more than once, put it in a function, simple as that; you'll save everyone a few headaches. Similarly, he mentions dead code, which is code sitting there wasting its life away as a comment, or performing some task whose result is never used, wasting resources. There should never be a need to leave unused code around. The wonderful invention of version control eliminated the need to leave old code sitting in a program.

The last item on his list that I want to touch upon is the bad habit of making items public when they shouldn't be, or as he calls it, indecent exposure. There should be a strong effort to keep everything as private as possible, and there should be a damn good reason if something is not private. Exposing the internals of a class is dangerous and unwarranted. Unless something absolutely has to be public, it should stay private.
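A small sketch of my own to show the difference: with the field kept private, the class controls every change to its own state instead of exposing it for anyone to mangle.

```java
// Keeping the field private means the class controls every change to it;
// callers can read the balance but can never put it into a bad state.
public class BankAccount {
    private double balance;   // not public: internals stay hidden

    public double getBalance() {
        return balance;
    }

    public void deposit(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        balance += amount;
    }
}
```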

There were items on Atwood's list that I felt were unnecessary or that I didn't completely agree with, but overall I found his list of code warning signs useful, and it brought up many valid points. As developers we want our code to be clean, efficient, and easy to read, and I feel that going through this list would help anyone reach that goal. A lot of the items on the list are things experienced developers should know better than to do anyhow, but it can't hurt to double-check when you are done. I know I will certainly try to run through his list in the future.

Link:

https://blog.codinghorror.com/code-smells/

From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.

Quality Assurance – The Most Important Aspect of Testing

A big part of testing is making sure that the product you are working on is going to be a highly reliable, quality piece of software. Quality assurance is central to that, as a poorly built product can mean bad news later on down the line. Softwaretestinghelp.com recently posted a blog entry that dives into this subject.

The first thing that stood out to me in the post was a formula: quality assurance = quality control + defect prevention. This formula makes a lot of sense to me. One of the main goals of testing is making sure the product works as it should and, if it doesn't, making sure someone knows it needs to be fixed; that is the quality control portion of the equation. The second part, defect prevention, is keeping bugs out of the software in the first place, or recognizing a problem before it happens. I feel that if you finish testing confident that both of those have been accomplished, you have done your job as a tester.

Now, how might one go about making sure they hit the mark with that formula? First of all, reviews are very important: design reviews, specification reviews, code reviews, and so on. I cannot overstate the importance of reviews; getting other sets of eyes on things is crucial to making sure nothing is missed. From my experience at work, reviews are done for anything and everything, and if a review isn't done, whatever is being sent out usually carries a disclaimer that it is a draft. It will bite you if you skip them. Another important step toward meeting the formula is logging any issues found while testing. Every issue, no matter how small, should be logged and investigated to determine the cause and whether any action is needed. This relates to the next item on the list, which is finding the root of the problem. Often a bunch of little issues that keep having to be fended off are really due to some single underlying issue, so it is important to find the real problem and not just cover it up. Lastly, make sure that as a tester you utilize the resources available to you, especially your manager. They have the ability to get you what you need and probably know the quickest way to get it, and most of the time they are more than willing to help, since your work reflects on them as well.

This post provided a nice introduction to testing with quality assurance in mind. I found their thoughts intriguing and will be on the lookout for more posts like this one, considering how important quality assurance is to a piece of software.

Link:

http://www.softwaretestinghelp.com/defect-prevention-methods/

From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.

What is Smelly Code and Why Your Code May be Stinking Up the Room

Code Smells. Kind of a strange phrase, isn’t it? At least I thought so while perusing Professor Wurst’s concept diagram. I decided to look it up out of curiosity and found some interesting information on it.

Well, what exactly does it mean? In short, a code smell is a piece of code, a trend, or a pattern that indicates there may be an underlying issue that could cause problems later down the line, or is already causing them. Now, that doesn't mean the "smelly" code will always cause problems; it could be there for a specific reason, perhaps to handle an odd scenario or something similar. The point of knowing what smelly code looks like is to be able to spot common issues and bad habits. In his blog at codinghorror.com, Jeff Atwood dives into some signs of stinky code…

Going through his list, there are several that stuck out to me. The first couple involve length. Long methods are trouble for several reasons, but perhaps the most important is that they can be hard to read and troubleshoot. Atwood mentions that methods significantly longer than the rest of the methods in the class or program are often trouble, and it is a good idea to break them into smaller methods if possible. He also mentions lengthy parameter lists: the more parameters, the longer and more complex the method is going to be. I have to say I agree with both of these. Nothing is worse than trying to read through an endless block of code; things get lost, and readers can easily get confused while trying to decipher what is going on.
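As a hypothetical example of trimming a long parameter list (my sketch, not Atwood's), the related address arguments can travel together as one small parameter object:

```java
// A long parameter list collapsed into a small parameter object,
// which keeps call sites short and hard to get wrong.
class ShippingAddress {
    final String street;
    final String city;
    final String state;
    final String zip;

    ShippingAddress(String street, String city, String state, String zip) {
        this.street = street;
        this.city = city;
        this.state = state;
        this.zip = zip;
    }
}

class ShippingService {
    // Before: ship(String name, String street, String city, String state,
    //              String zip, boolean express, ...)
    // After: the related fields travel together as one object.
    String ship(String recipient, ShippingAddress address, boolean express) {
        return "Shipping to " + recipient + " at " + address.street + ", "
                + address.city + (express ? " (express)" : "");
    }
}
```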

Next on the list are "oddball solutions." If there is a problem that needs to be solved multiple times, there should be only one way of getting to that solution within the code. There may be multiple ways to reach an answer (e.g., 5*1 = 5 and 1*5 = 5), but the way of getting there should be consistent throughout your code. I agree with his thoughts here: seeing two different routes to the same result could certainly confuse the reader, and if the process is the same, it should probably be put into a single method anyhow.

Last on the list this week are temporary fields. Make sure that all the fields are actually needed. Unnecessary fields cause, you guessed it, confusion: the reader may think they are needed for some reason and may conclude the program doesn't work properly, or something similar. Not to mention that fields that have to be filled in each time the program is run can be extremely annoying while testing, so you are doing yourself a favor by keeping the number of fields down as well.

Since I have found this blog particularly useful and insightful, I plan to continue with a part two next week, so stay tuned.

Link:

https://blog.codinghorror.com/code-smells/

 

From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.

Anti-Patterns Part 2

So this week I have decided to pick up where I left off two weeks ago and discuss some more anti-patterns I have found. I find them quite useful because they are often easy to recognize, which makes them easy to avoid. This blog post by Sahand Saba discusses several of them, but once again I will pick and choose a few, as space is limited.

Have you ever been looking through some code and seen random, ambiguous numbers everywhere? I'm sure you have, so I am sure you know how annoying it is. These "magic numbers" make code hard to understand and can cause problems when trying to modify it later down the line. Always try to give a number a named variable, or at the very least some sort of description to go along with it; it will prevent headaches and problems in the future. Magic numbers have always been a pet peeve of mine, so this just drives the point home further for me.
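A quick before-and-after of my own (the checkout numbers are invented for illustration): once the numbers get names, the code explains itself and a future change touches one line.

```java
// Named constants explain what 0.07 and 50 meant, and changing the
// tax rate later is a one-line edit.
public class CheckoutExample {

    private static final double SALES_TAX_RATE = 0.07;
    private static final double FREE_SHIPPING_THRESHOLD = 50.0;
    private static final double SHIPPING_FEE = 4.99;

    // Before: return subtotal * 1.07 + (subtotal > 50 ? 0 : 4.99);
    static double orderTotal(double subtotal) {
        double shipping = subtotal > FREE_SHIPPING_THRESHOLD ? 0 : SHIPPING_FEE;
        return subtotal * (1 + SALES_TAX_RATE) + shipping;
    }

    public static void main(String[] args) {
        // Prints the total for a $40 order (tax plus shipping).
        System.out.println(orderTotal(40.0));
    }
}
```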

The next one I'll admit I have a tendency toward, but after reading what Saba had to say about it, I am going to try to avoid it in the future. This anti-pattern is being afraid to add more classes. The story goes that developers are sometimes afraid to add classes because they think it will increase complexity, when in reality it is just the opposite. Saba likens it to a tangled ball of yarn versus single strands of yarn: which is easier to work with? I think you know the answer. That comparison really made it clear to me that smaller, simpler classes do make code easier to follow and less complex.

Over-analyzing is the last one I want to mention. This is basically thinking about and discussing a problem so much that no work ever actually gets done on it. To avoid this, Saba suggests breaking the problem into smaller pieces or iterations (a la Agile software development) and addressing them one step at a time. This way the overall problem is far less overwhelming and easier to deal with. I feel this is a very important skill to have, and it makes life a lot easier when dealing with a massive, overwhelming issue. I always try to take this approach, although sometimes it is easier said than done.

There are several other anti-patterns discussed, including ones about team meetings, management, and so on, but since I am running out of space, I will leave them for another time. However, I want to reiterate that I feel anti-patterns are a great way to learn good coding practices. It is basically learning from other people's mistakes and experiences, so hopefully their mistakes will save you some time and headaches in the future.

Link:

http://sahandsaba.com/nine-anti-patterns-every-programmer-should-be-aware-of-with-examples.html#magic-numbers-and-strings

 

From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.

Excuses Can Be the Death of a Product

As we find ourselves getting into the thick of the semester, I still find myself going back to the same site for material to write about for this class. I've found softwaretestinghelp.com to be really useful, and it has provided me with some valuable advice. Now, on to this week's topic: "5 Excuses Every Software Tester Must Stop Giving."

Excuses are a wonderful thing when testing software, as they often provide an out when tests fail or don't go quite as expected. It often takes a realist to get to the truth of the matter when it comes to finding the real bugs in a piece of software. The article discusses five excuses in total, but two stood out to me, and since space is limited, I will focus on those.

The first excuse discussed is not having full control over the test environment. This lets a tester blame the environment when a test doesn't go as planned (for example, claiming it wasn't configured correctly). Although this can be frustrating, it can also be beneficial, because it gets your product out of the ideal, perfect-scenario sandbox it has been developed in. It allows both the testers and the developers to see how the product handles an environment closer to the one it will actually run in once it is released. Yes, technically the environment may not be configured correctly, but guess what: it probably won't be in the real world either, so it is important to see how the product handles those situations. The article also mentions that if you really need control over the environment, you should work closely with whoever owns it. Either way, saying you don't have control is a poor excuse, and personally I will try not to use it.

The other excuse that stood out to me is not having the ability to do anything other than manual testing; in other words, you only have time for, or are only allowed to do, basic, by-the-book testing, and cannot collect metrics, run stress tests, and so on. I agree that this is often the case, but the article points out some good ways around it that I will keep in mind for the future. The main point of the workarounds is that a lot of this "non-functional" testing can be done while you are doing the regular testing. Instead of just running through the procedure, watch performance metrics at the same time, for example, and maybe grab someone else to help rather than asking them afterward or doing it all yourself. In other words, kill two birds with one stone. It may be a bit more challenging, but it can save time and allow the tester to collect the information they need.

This post provided some good detail on how to stop making excuses while testing. Excuses can kill a product, as they are often used to avoid the truth. You may be able to push the truth further down the road, but eventually it is going to come out, and one can only hope there will still be time to address the real problem.

Link:

http://www.softwaretestinghelp.com/5-excuses-every-software-tester-must-stop-giving/

 

 

From the blog CS@Worcester – README by Matthew Foley and used with permission of the author. All other rights reserved by the author.