Category Archives: Week 5

GUI Testing

When I read the Wikipedia definition of one of our course topics, Test Automation, Graphical User Interface (GUI) testing made me curious, since I had never tested a GUI “formally” before. In the past, I only tested the basics, like the behaviors of the components, the expected outputs, and the layout of the GUI. In order to better understand what I should do when testing a GUI, I chose a blog that had a complete guide to GUI testing. I hoped that after reading this blog, I would learn how to test a GUI correctly. Below is the URL of the blog.

https://www.guru99.com/gui-testing.html

This blog explained the definition of GUI and GUI testing, as well as the purpose and importance of GUI testing. It also provided a checklist to ensure thorough GUI testing: checking all the GUI elements, the expected executions, correctly displayed error messages, the demarcation, the font, the alignment, the colors, the pictures’ clarity and alignment, and the GUI elements’ positions for different screen resolutions. Moreover, it mentioned the three approaches to GUI testing: Manual Based, Record and Replay, and Model Based. It also included a list of test cases and testing tools for GUI testing, along with its challenges.

After reading the checklist, I recognized that there was one test case I had never paid attention to before: checking the positioning of GUI elements for different screen resolutions. This means that neither images nor content should shrink, crop, or overlap when the user resizes the screen. When I thought about it, it really made sense: users probably would not use the application again if it was hard to read the content or see the pictures just because their phones, PCs, or laptops did not have the same screen resolution as the default one the developers set up. Therefore, it is important to remember to test the GUI elements’ positions for different screen resolutions, as in the sketch below. I will remember this and apply it the next time I test a GUI.
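
To make this concrete, here is a minimal sketch of how such a resolution check might be automated with Selenium WebDriver and JUnit 5. The page URL and element ID are hypothetical placeholders, not from the blog; the idea is simply to resize the browser window to a few common sizes and assert that the same element is still visible at each one.

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.Dimension;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertTrue;

public class ResolutionLayoutTest {

    private WebDriver driver;

    @BeforeEach
    public void setUp() {
        driver = new ChromeDriver();
        driver.get("https://example.com/login"); // hypothetical page under test
    }

    @Test
    public void logoStaysVisibleAcrossResolutions() {
        // Try a few common screen sizes and check the same element each time.
        Dimension[] sizes = {
            new Dimension(1920, 1080), // desktop
            new Dimension(1366, 768),  // laptop
            new Dimension(375, 667)    // phone-sized viewport
        };
        for (Dimension size : sizes) {
            driver.manage().window().setSize(size);
            WebElement logo = driver.findElement(By.id("logo")); // hypothetical id
            assertTrue(logo.isDisplayed(),
                "Logo should remain visible at " + size);
        }
    }

    @AfterEach
    public void tearDown() {
        driver.quit();
    }
}
```

A fuller layout test could also compare element bounding boxes to detect overlap or cropping, but a simple visibility check already catches many resolution bugs.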

As for the approaches to GUI testing, I had only done manual based testing and model based testing. The Record and Replay approach was interesting in my opinion: I could “record” all the test steps the first time, then “replay” them whenever I wanted to test the application again afterward. The blog did not provide a lot of information about this approach besides a brief introduction and a link to one of the tools used for it, called QTP. Since I have not used it before, I do not know its advantages and disadvantages. Therefore, I will have to research it and try it first before deciding whether to use it in the future.

 

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

YAGNI

This week I decided to look up blogs about YAGNI, also known as “You Ain’t Gonna Need It.” A blog called DevIQ had a post about YAGNI. This was the first time I had ever heard of the acronym. The post talks about how, most of the time, people code things for the future of the development, and why that is not a good thing to do. Many programmers follow this rule because it maximizes the amount of unnecessary work that is left undone: features that are not necessary are a huge source of waste.

In the post, they compare YAGNI to Just-In-Time manufacturing. Instead of ordering a bunch of parts based on what you think you are going to need, you should order the parts based on what the customers need. This ensures that you don’t waste money and time. It is the same with programming: you should keep things basic, never try to do anything over the top, and keep things to what the customer wants done. The blog talks about keeping code simple and avoiding extra features and complex designs unless they are needed, as in the small example below. There is also a good quote in the blog that says “every line of code we don’t write is dollars we didn’t spend, and time on the calendar we get back for free.” This to me is simple but powerful, because it says we should not overcomplicate things; it would not be worth the time and money we put into trying to make them “better.”
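
As a hypothetical illustration (mine, not the blog’s), here is what YAGNI can look like in Java. The first class speculatively supports formats and options nobody has asked for; the second does only what the customer actually requested.

```java
// Speculative version: formats, compression, localization, and retries
// that no customer has asked for yet. Every extra parameter is code to
// write, test, and maintain.
class ReportGenerator {
    String generate(String data, String format, boolean compress,
                    String locale, int retryCount) {
        // ...branches for PDF, CSV, XML, compression, localization, retries...
        return data; // placeholder
    }
}

// YAGNI version: the customer only asked for a plain-text report.
class SimpleReportGenerator {
    String generate(String data) {
        return "REPORT: " + data;
    }
}
```

If a customer later asks for, say, CSV output, that is the moment to add it, and not before.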

To me, this blog is something I should take to heart, because when I was doing Assignment 1 for a class, I was reading too much into the assignment and making things way too complicated for myself. There were times I wanted to delete the whole code and rewrite it; luckily, the instructions said not to change anything and just to add a few lines, which kept me on track. So I think I will keep YAGNI in mind for as long as I code, because it is something many programmers, especially people who are programming for the first time, should hear about.

From the blog CS@Worcester – The Road of CS by Henry_Tang_blog and used with permission of the author. All other rights reserved by the author.

Thoughts on “TDD Guided by Zombies”

In “TDD Guided by Zombies”, James Grenning shares a seasonally-appropriate acronym for developing unit tests — just in time for our class to wrap up its unit testing section!  ZOMBIES stands for Zero, One, Many/More, Boundary behaviors, Interface definition, Exercise exceptional behavior, Simple scenarios/Simple solutions.  The first three letters (Zero, One, Many) describe, in order, the complexity of the behavior undergoing testing — for example, a queue with zero, one, and then many entries.  The second set of letters (Boundary, Interface, Exercise Exceptions) gives the order in which to build test cases — start with boundaries, design the interface undergoing testing (or ensure that the code meets the interface requirements), and then test exceptional behavior.  The last letter tells the tester to create simple scenarios with simple solutions — add the minimal possible behaviors to production code in order to pass the tests generated by the first six parts of the acronym.
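
To see the Zero/One/Many progression concretely, here is a minimal JUnit 5 sketch against a plain Java queue. Grenning’s own example is a circular queue in C++, so this is only an illustrative stand-in, not his code.

```java
import java.util.ArrayDeque;
import java.util.Queue;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

public class ZombiesQueueTest {

    // Z - Zero: a fresh queue is empty.
    @Test
    public void newQueueIsEmpty() {
        Queue<Integer> q = new ArrayDeque<>();
        assertTrue(q.isEmpty());
        assertNull(q.poll()); // removing from an empty queue yields nothing
    }

    // O - One: a single element goes in and comes back out.
    @Test
    public void oneElementRoundTrips() {
        Queue<Integer> q = new ArrayDeque<>();
        q.add(42);
        assertEquals(1, q.size());
        assertEquals(42, q.poll());
    }

    // M - Many: several elements come out in FIFO order.
    @Test
    public void manyElementsPreserveOrder() {
        Queue<Integer> q = new ArrayDeque<>();
        q.add(1);
        q.add(2);
        q.add(3);
        assertEquals(1, q.poll());
        assertEquals(2, q.poll());
        assertEquals(3, q.poll());
    }
}
```

The remaining letters would then drive boundary tests (a full queue), interface checks, and exception cases, each backed by the simplest production code that passes.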

Grenning spends the bulk of the article on an example of a circular queue implementation in C++, showing how he progresses through each step.  He gives clear examples of both what to do and what not to do.  However, I’m more concerned with the underlying principles and process, so in the interest of brevity I’ll skip a detailed review of his example and simply say that it’s thorough and worth a longer period of study.

I chose this article not just because of the season (or that it coincides neatly with our in-class work), but because it helps to answer the question of how.  We’ve learned many methods for generating unit test cases, but how do we pick the order?  How do we work backwards from tests to code in a way that makes sense and satisfies the specification that we’re handed?  This confirms what I learned in 343 last year: ZOMBIES are the key to TDD and to successful unit testing.

And what can I, as a learner, take away from this?  Firstly, and perhaps most trivially, it can help to wrap important concepts in cute acronyms.  It makes sharing and remembering knowledge easier.  Secondly, it clarifies the most important sets of values to test, and the most important times at which to test newly-instantiated objects (or other language-appropriate constructs): when they are fresh and empty, when they have a single value, and when they contain many values.  Thirdly, the article really drives home the importance of minimalism in testing and test-driven coding.  If a simple solution is all that it takes to meet the specification, then use it.  If a complex solution can be simplified, simplify it.

This semester and onward, when I need to develop unit test cases, I’ll be thinking ZOMBIES.

Article link.

From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.

Flaky Tests

Flaky tests are tests that could pass or fail for the same code. This is a problem because the failure of a flaky test doesn’t always indicate a problem with the code, but you can’t just ignore the test because you could be ignoring a bug.

In the blog post “Where do our flaky tests come from?”, Jeff Listfield, a Senior Software Engineer at Google, talks about the potential causes of flaky tests and what can be done to avoid creating them. He demonstrates a correlation between the objective size of a test (binary size, memory usage) and its likelihood of being flaky. He also shows a correlation between certain tools and a higher rate of flaky tests; however, this is because larger tests are more commonly written using those tools. The tools themselves contribute only a small amount to the likelihood of a flaky test being created. When writing tests, you should think about what code you are testing and what a minimal test would look like, in order to minimize the likelihood of creating a flaky test. The sketch below shows the kind of timing assumption that often causes flakiness.
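
The blog post itself doesn’t include code, so here is a hypothetical Java/JUnit sketch of one classic source of flakiness: a test that races a background thread against a fixed sleep, next to a deterministic version of the same test. The Counter class and the timing numbers are mine, for illustration only.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class FlakyExampleTest {

    // Flaky: assumes the background work finishes within a fixed sleep.
    // Passes on a fast machine, can fail on a slow or heavily loaded one,
    // even though the code under test never changed.
    @Test
    public void flakyVersion() throws InterruptedException {
        Counter counter = new Counter();
        new Thread(counter::increment).start();
        Thread.sleep(10); // hoping the thread has finished by now
        assertEquals(1, counter.value());
    }

    // Deterministic: wait for the work to actually complete.
    @Test
    public void stableVersion() throws InterruptedException {
        Counter counter = new Counter();
        Thread worker = new Thread(counter::increment);
        worker.start();
        worker.join(); // no guessing about timing
        assertEquals(1, counter.value());
    }

    static class Counter {
        private int value = 0;
        synchronized void increment() { value++; }
        synchronized int value() { return value; }
    }
}
```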

I chose this topic because effective use of test cases is very important in software development: it allows you to make sure that you’ve addressed and tested all of the product requirements, allows future testers to run your test cases when needed, and makes it possible to build automated scripts to run as many tests as possible. By writing your test cases out, you also won’t need to constantly repeat the process and remember what values you’re testing every time, as the cases will already contain all the necessary variables, allowing you to maintain consistency in your tests.

This blog post in particular was interesting because it uses data gathered from real tests run in Google’s continuous integration system to show where flaky tests come from and how to avoid them. Before reading this blog I didn’t realize how important writing test cases really was, but seeing just how many automated tests Google runs (4.2 million!) led me to do more research on their importance. It also reminded me that the best solution is usually the simplest: keep your test cases as simple as possible to avoid creating a flaky test and giving yourself a headache in the future trying to figure out what’s wrong.

Source: https://testing.googleblog.com/2017/04/where-do-our-flaky-tests-come-from.html

From the blog CS@Worcester – Andy Pham by apham1 and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Le Blog Spot 2017-10-16 19:24:48

Week 5

AB Testing – Episode 68, by Brent Jensen and Alan Page.

 

In this episode, Brent and Alan begin by talking about their stressful lives as software QA engineers, working late hours and trying to stay up to date with the latest trends in the industry. From this brief intro I was able to get a look at the potential challenges that exist in the field of software testing: one needs the ability to learn and implement new technologies regardless of how qualified or advanced one already is at software QA and testing. As the podcast continues, Alan talks about his position and how much work goes into generating and creating substantial testing procedures that get the job done and optimize the testing process. As a student majoring in computer science with a focus in software development, it is easy for me to understand what Alan is talking about. Through my class studies, I understand the importance of software testing and how a bad testing process can be of little to no value to a product, while a good testing procedure can make or break one. He emphasized all the intangibles that only a person in the industry could understand, and described how useless his position appears to be to the average person. Brent expanded on this by saying that organizations generally see the QA department as a cost without realizing its true benefits: QA is often not given credit when things are going well and consistently, yet it is often the first department in software companies to face layoffs and budget cuts should there be a need for them.

On this topic, Alan and Brent went off on a tangent and began comparing and contrasting traditional testing and modern testing. Prior to this podcast, I didn’t know there were a “traditional testing era” and a “modern testing era.” According to them, the old way of testing followed the sequence Requirement => Design => Code & Build => Testing => Maintenance, while the modern way of testing follows the sequence below.

Requirement => Testing => Design => Testing => Build & Execution => Testing =>Test => Testing => Installation => Testing => Maintenance => Testing.

As we can see, the modern way adds more testing stages, which might appear more costly than the old way, but over time it saves more and enables modular development of components: the software gets built in parts instead of as one giant project.

Link: https://testingpodcast.com/category/ab-testing/

 

From the blog CS@Worcester – Le Blog Spot by houtyr and used with permission of the author. All other rights reserved by the author.

Blog 1

Software Quality Assurance & Testing

From the blog CS@Worcester – BenLag's Blog by benlagblog and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Not just another CS blog 2017-10-16 16:57:49

Second blog

From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.
