Author Archives: ziyuan1582

Documentation Testing

When I was searching for different types of testing, I found one that I had never heard of: documentation testing. I was curious, since the only documentation I knew of was the comments that testers and developers add at the beginning of methods, classes, and test cases to give others basic information about them. I wanted to know more about this type of testing, so I chose a basic introduction to documentation testing to read. Below is the URL of the post.

http://blog.e-zest.com/why-is-documentation-important-in-software-testing/

In this blog, Kirti Mansabdar explained the definition of documentation testing, some common documents that should be used and maintained regularly, and the importance of those documents. According to her, documentation testing is a non-functional type of software testing, and poor-quality documentation reflects badly on the quality of the product and the vendor. She also listed advantages and disadvantages of preparing documentation. Among the advantages were making project testing easy and systematic, saving time and cost, maintaining a good relationship with the client, and keeping the client satisfied.

Kirti said that in many cases, projects can be rejected in the proposal/acceptance phase for lack of documentation. This deserves attention, since being rejected simply for missing documentation seems like a needless loss. At the beginning of the blog, she mentioned that one reason people do not talk much about documentation in software testing is that they do not want to waste time preparing documents; they want to spend all their time on the more functional aspects of their jobs. I disagree with them. In my opinion, even though testers know what they are doing, their projects will be presented to and used by people who know little or nothing about them. Without documents, it is hard for others to fully understand what a project is and how useful it is.

Among the important documents that Kirti recommended using and maintaining regularly, there were several I had never heard of, never written, and wanted to try when I had the chance, such as the Test Plan Document, the Weekly Status Report, and the User Acceptance Document. The Test Plan Document includes the testing schedule, team structure, hardware/software specifications, environment specifications, risk analysis, and scope of testing, among other points. The Weekly Status Report includes the status of all bugs and requirements and can help improve the quality of the product. This one would be especially useful for me, since I rarely report anything weekly. It was also good to learn that these documents can serve as evidence for a delay when there are problems with product delivery.

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

Code Coverage

“Did my tests cover all the code?” – this is a question that often pops up in testers’ minds while they are writing tests. Code coverage can answer it: it helps testers understand how much of their code is exercised by their tests. Since I had not used any code coverage tools before, I thought it would be a good idea to start learning about them through an introductory post. Below is the URL of the post.

https://www.atlassian.com/continuous-delivery/introduction-to-code-coverage

In this post, Sten Pittet, who has been in the software business for ten years in roles ranging from development to product management, introduced the definition of code coverage, the common metrics in coverage reports, and a tip for choosing the right tool for different projects. He also discussed what percentage of coverage testers should aim for. Moreover, he advised that testers focus on unit testing first, use coverage reports to identify critical misses in testing, and make code coverage part of their continuous integration flow.

Sten mentioned a few metrics that testers should pay attention to when reading coverage reports: function coverage, statement coverage, branch coverage, condition coverage, and line coverage. Function coverage shows how many of the defined functions have been called. Statement coverage shows how many of the statements in the program have been executed. Branch coverage shows how many branches of the control structures (if statements, for instance) have been executed. Condition coverage shows how many of the Boolean sub-expressions have been tested for both a true and a false value. Line coverage shows how many lines of source code have been tested. The example given in the post helped me understand these terms more easily.
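
To make these metrics concrete, here is a minimal sketch of my own in Java (it assumes JUnit 4; the method and test names are hypothetical, not from Sten's post). With only the first test, every statement in `ticketPrice` executes, yet half of its branch outcomes go untested:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountTest {

    // A tiny method with a single branch, used as the subject under test.
    static int ticketPrice(int age) {
        int price = 10;
        if (age >= 65) {
            price = 5; // senior discount
        }
        return price;
    }

    @Test
    public void seniorGetsDiscount() {
        // This test alone executes every statement above (100% statement
        // coverage), but only the "true" side of the if is taken,
        // so branch coverage sits at 50%.
        assertEquals(5, ticketPrice(70));
    }

    @Test
    public void adultPaysFullPrice() {
        // This second test takes the "false" side of the branch,
        // bringing branch coverage for ticketPrice to 100%.
        assertEquals(10, ticketPrice(30));
    }
}
```

A condition-coverage report would flag the same gap: the Boolean expression `age >= 65` is never evaluated to false until the second test is added.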

In Sten’s opinion, 80% code coverage is a good goal to aim for. Trying to reach higher coverage may turn out to be costly while not necessarily producing enough benefit. He also said that it is normal to have low coverage on the first run, and testers should not feel pressured to reach 80% coverage right away. To be honest, I was really surprised that he only recommended 80% coverage. But when I thought about it more, it made sense that chasing higher coverage can cost more while delivering less benefit, since real-life projects are usually much bigger than school projects. He also highlighted that testers should write tests based on the business requirements of their applications rather than write tests that simply hit every line of code.

Furthermore, Sten emphasized that good coverage does not equal good tests. Code coverage tools can help testers understand where to focus next, but they cannot tell whether the existing tests are robust enough against unexpected behaviors. Therefore, besides achieving good coverage, testers should maintain a robust test suite and verify the integrity of the system. I agreed with him: looking at his example, I could see clearly how bad it would be to rely only on the tools when writing tests. Beyond the information about code coverage itself, I can apply his advice whenever I write tests.

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

Test-Driven Development

This week, I chose Test-Driven Development (TDD) as my topic. After reading many articles, I realized that I only knew a little bit about it, namely that tests must be created before the functional code is written. Therefore, I thought I should research it further through an introductory article. I chose the article below because it contains detailed information about TDD. Below is the URL of the article.

http://agiledata.org/essays/tdd.html

In this article, Scott Ambler explained the definition of TDD, the two levels of TDD, the advantages and disadvantages of TDD, the importance of documentation, Test-Driven Database Development, Agile Model-Driven Development, and the myths and misconceptions about TDD. He also provided adoption rates for TDD and a list of TDD tools.

The basic idea of TDD is to divide the coding process into smaller steps: write one test first for a small bit of corresponding functional code at a time. According to Scott, taking a TDD approach requires great discipline because it is easy to “slip” and write functional code without first writing a new test. I agreed with him, since I still sometimes forget to create the test first when coding. There are two levels of TDD: Acceptance TDD (ATDD) and Developer TDD. The goal of ATDD, also known as Behavior-Driven Development (BDD), is to specify detailed, executable requirements for the solution on a just-in-time basis; developers write an acceptance test and then just enough functional code to fulfill that test. The goal of developer TDD, often simply called TDD, is to specify a detailed, executable design for the solution on a just-in-time basis; developers write a developer test (unit test) and then just enough functional code to fulfill it. Developers can apply both levels or just one of them to their projects. I usually applied only developer TDD to my projects, so I will consider using the other level as well.
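
To illustrate that rhythm, here is a minimal, hypothetical developer-TDD example in Java with JUnit 4 (the `Counter` class and test names are mine, not from Scott's article). The test is written first and fails; then just enough functional code is written to make it pass:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Step 1 (red): write the test first. It will not even compile
// until a Counter class exists - which is the point: the test
// drives the design of the functional code.
public class CounterTest {
    @Test
    public void incrementAddsOne() {
        Counter c = new Counter();
        c.increment();
        assertEquals(1, c.value());
    }
}

// Step 2 (green): write just enough functional code to pass the test.
class Counter {
    private int value = 0;

    void increment() {
        value++;
    }

    int value() {
        return value;
    }
}

// Step 3 (refactor): clean up while keeping the test green,
// then repeat the cycle with the next small test.
```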

According to Scott, three challenges keep Test-Driven Database Development (TDDD) from working as smoothly as application TDD: the lack of tool support, the lack of motivation to take a TDD approach, and the popularity of the model-driven approach. I understand the challenges TDDD faces while it is still a “new” approach in many data professionals’ eyes. In my opinion, if we could overcome the lack of tool support, the situation would improve; with enough tool support, we could encourage people to try the approach.

Scott also said that TDD works for both small projects and “real” projects, using his own experience and Beck’s report to support his statement. Personally, I doubt that I could use it for “real” projects yet, since the adoption rate of TDD (mentioned in the article) is not high. Moreover, most of my previous partners did not use TDD even for our small projects. Therefore, it might not be immediately useful for me, but I will consider it in the future.

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

Record and Playback Advantages and Disadvantages

Since last week’s blog did not have much information about Record and Replay (or Record and Playback), I did not know whether I should use it for Graphical User Interface (GUI) testing. Therefore, I decided to learn more about it and its advantages and disadvantages. After reading several blogs and articles related to Record and Playback, I chose this particular article because it clearly states the problems testers can run into when using Record and Playback tools and the scenarios in which Record and Playback can be useful. Below is the URL of the article:

https://www.cio.com/article/3077286/application-testing/record-playback-automation-its-a-trap.html

In this article, Troy T. Walsh, a principal consultant at Magenic in St. Louis Park, shared his thoughts on Record and Playback as a trap that many projects fall into. He laid out the disadvantages of these tools: high maintenance cost, limited test coverage, poor understanding of what the tools do, poor integration, limited features, high price, and vendor lock-in. He also gave some scenarios in which Record and Playback might be a good option, such as learning the underlying automation framework from the generated code, load testing, and proving a concept.

According to Troy, Record and Playback has limited test coverage. Since it follows the exact steps the testers recorded, it is limited to testing against the user interface. That explains why Record and Playback was recommended for GUI testing in last week’s post; for broader test automation, though, it does not offer great value. He also thought that most testers have an incomplete understanding of what exactly these tools are doing, which can leave huge gaps in test coverage. In my opinion, this disadvantage could be fixed if testers studied the tools before using them. Furthermore, Record and Playback tools lack features that are important for test automation, like remote execution, parallelization, configuration, data driving, and test-management integration, and the feature-rich options require paying a lot of money every year. Despite those disadvantages, Record and Playback can be used to study the underlying automation framework by recording steps and observing what gets generated.

After reading about the advantages and disadvantages of Record and Playback, I could see that it is not a good tool for general test automation, since it is limited in many aspects: high price, high maintenance cost, limited features, limited test coverage, etc. However, in my opinion, it is good enough as a GUI testing tool. Since GUI testing checks whether the expected executions, the error messages, the layout of GUI elements, the fonts, the colors, etc. are rendered correctly, testers only need to “record” the steps that users would take and “play them back” to see the results. Therefore, I would try Record and Playback for GUI testing but not for general test automation.
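
As an illustration, here is a hedged sketch of what a recorded login scenario might look like once exported as a Selenium WebDriver script in Java (the URL, element IDs, and expected text are hypothetical; real record-and-playback tools generate code roughly along these lines):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class RecordedLoginTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Each line replays one recorded user action, step by step.
            driver.get("https://example.com/login");           // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("demo");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();

            // The playback check: did the GUI end up where the recording did?
            String heading = driver.findElement(By.tagName("h1")).getText();
            if (!"Welcome".equals(heading)) {
                throw new AssertionError("Unexpected heading: " + heading);
            }
        } finally {
            driver.quit();
        }
    }
}
```

The fragility Troy describes is visible here: if any ID or label changes, the recorded steps break and must be re-recorded or hand-edited.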

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

GUI Testing

When I read the definition of one of the course topics, Test Automation, on Wikipedia, Graphical User Interface (GUI) testing caught my curiosity, since I had never tested a GUI “formally” before. In the past, I only tested the basics: the behaviors of the components, the expected outputs, and the layout of the GUI. To understand better what I should do when testing a GUI, I chose a blog post that offers a complete guide to GUI testing. I hoped that after reading it, I would learn how to test GUIs correctly. Below is the URL of the blog.

https://www.guru99.com/gui-testing.html

This blog explained the definition of GUI and GUI testing, the purpose of GUI testing, and its importance. It also provided a checklist to ensure thorough GUI testing: checking all the GUI elements, the expected executions, correctly displayed error messages, the demarcation, the fonts, the alignment, the colors, the pictures’ clarity and alignment, and the positions of GUI elements at different screen resolutions. Moreover, it described the three approaches to GUI testing: Manual Based, Record and Replay, and Model Based. It also included a list of test cases and testing tools for GUI testing, along with its challenges.

After reading the checklist, I noticed one test case I had never paid attention to before: checking the positioning of GUI elements at different screen resolutions. This means that neither images nor content should shrink, crop, or overlap when the user resizes the screen. When I thought about it, it made sense: users probably would not use an application again if it was hard to read the content or see the pictures just because their phone/PC/laptop did not match the default screen resolution the developers designed for. Therefore, it is important to remember to test GUI elements’ positions at different screen resolutions. I will remember this and apply it the next time I test a GUI.
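
As a concrete sketch of that test case, here is a hedged Selenium WebDriver example in Java that resizes the browser window to several resolutions and checks that a key element stays visible (the URL and the `logo` element ID are hypothetical):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.Dimension;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ResolutionLayoutCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com");  // hypothetical page under test

            // Common viewport sizes to verify the layout against.
            Dimension[] resolutions = {
                new Dimension(1920, 1080),
                new Dimension(1366, 768),
                new Dimension(375, 667)   // small phone-sized viewport
            };

            for (Dimension size : resolutions) {
                driver.manage().window().setSize(size);
                // The logo (hypothetical id) should remain visible and
                // not be cropped or pushed off-screen at any size.
                boolean visible = driver.findElement(By.id("logo")).isDisplayed();
                System.out.println(size + " -> logo visible: " + visible);
            }
        } finally {
            driver.quit();
        }
    }
}
```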

Of the GUI testing approaches, I had only done manual-based and model-based testing. The Record and Replay approach is interesting in my opinion: I could “record” all the test steps the first time, then “replay” them whenever I wanted to test the application again. The blog did not provide much information about this approach beyond a brief introduction and a link to one of the tools used for it, called QTP. Since I have not used it before, I do not know its advantages and disadvantages, so I will have to research it and try it out before deciding whether to use it in the future.

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

Integration Testing

When I read the blog entry about unit testing last week, the author mentioned integration testing as a tool for detecting regressions. Since the topic of that entry was unit testing, I did not have a chance to research this type of testing, so I chose integration testing as the topic for this week’s entry. Because I did not know much about it, I found a blog entry with a detailed introduction to integration testing, written by Shilpa C. Roy, a member of the STH team who has been working in the software testing field for the past nine years. Below is the URL of the blog entry:

http://www.softwaretestinghelp.com/what-is-integration-testing/

In this blog, Shilpa introduced the definition of integration testing, its approaches, and its purpose, along with the steps to create an integration test. In Shilpa’s opinion, integration testing is a “level” of testing rather than a “type” of testing. She also believes that the concept of integration testing can be applied not only with the White Box technique but also with the Black Box technique. Besides the two classic approaches to this testing, Bottom Up and Top Down, she also mentioned an approach called “Sandwich Testing”, which combines the features of both. Moreover, Shilpa gave an example of how integration testing can be applied with the Black Box technique.

I found the “third” approach, Sandwich Testing, interesting. I always thought I could only test either top down or bottom up, but this really changed my way of thinking. By starting at the middle layer and moving simultaneously up and down, the job is divided into two smaller parts, which is more efficient. Unfortunately, this technique is complex and requires more people with specific skill sets, so I doubt I will be able to use it soon. Still, the general idea that I can start the process at the middle layer will be useful.

According to Shilpa, validating XML files can be considered part of integration testing for a product whose architecture is all we know. Since the users’ inputs are converted into an XML format and then transferred from one module to another, validating the XML files tests the behavior of the product. In my opinion, this example made it easy to understand how integration testing can be applied with the Black Box technique. I had always thought it was only available for the White Box technique, but this example proved me wrong. Now that I know about it, I can try to apply it the next time I encounter a similar scenario.
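
As a small sketch of what such a check might look like, here is an example in Java using the standard `javax.xml.validation` API to validate a document against a schema (the `order.xml` and `order.xsd` file names are hypothetical):

```java
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class XmlValidation {
    public static void main(String[] args) throws Exception {
        // Load the XML Schema that defines the expected message format.
        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("order.xsd"));    // hypothetical schema

        // Validate the XML produced by one module before it is passed
        // on to the next - a black-box integration check on the data
        // flowing between components.
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new File("order.xml"))); // hypothetical file

        System.out.println("order.xml is valid against order.xsd");
        // An invalid document would throw org.xml.sax.SAXException instead.
    }
}
```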

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

Writing Great Unit Tests

Unit testing is one of the types of testing I had already played with a little. Therefore, when I found this blog entry about writing great unit tests, covering best and worst practices, I decided to choose it to understand unit testing better and to check whether the unit tests I wrote in the past were good or bad. Below is the URL of the blog entry:

http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/

In this blog, Steve Sanderson discussed unit testing and tips for writing great unit tests. He mentioned that it is overwhelmingly easy to write bad unit tests that add very little value to a project while inflating the cost of code changes astronomically. In his opinion, unit testing is not an effective way to find bugs or detect regressions; it is more about designing software components. He also compared good unit tests with bad ones and provided some tips for writing great unit tests.

Steve explained why he thinks unit testing is not an effective way to detect bugs or regressions. I agreed with him: proving that components X and Y both work independently does not prove that they are compatible with one another or configured correctly, so bugs can still occur when the application runs, which reduces the effectiveness of bug detection. Before reading this blog entry, I had never realized that unit testing is not well suited to finding bugs like that. Furthermore, Steve provided a table identifying which type of test to use for different purposes, such as finding bugs, detecting regressions, or designing software components.
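
A tiny, hypothetical Java example of that gap (all names are mine, not Steve's): each class below passes its unit-level expectations in isolation, yet wiring them together fails at runtime, which only an integration-style test would catch:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Component X: formats dates one way.
class DateFormatter {
    String format(int day, int month, int year) {
        return year + "-" + month + "-" + day;       // e.g. "2017-5-4"
    }
}

// Component Y: expects a different format.
class DateParser {
    int parseDay(String date) {
        return Integer.parseInt(date.split("/")[0]); // expects "4/5/2017"
    }
}

public class CompatibilityTest {
    @Test
    public void eachComponentPassesInIsolation() {
        // Both unit-level expectations hold on their own.
        assertEquals("2017-5-4", new DateFormatter().format(4, 5, 2017));
        assertEquals(4, new DateParser().parseDay("4/5/2017"));
    }

    @Test(expected = NumberFormatException.class)
    public void togetherTheyAreIncompatible() {
        // Feeding X's output straight into Y blows up: the components
        // pass their unit tests independently but are not compatible.
        new DateParser().parseDay(new DateFormatter().format(4, 5, 2017));
    }
}
```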

Steve also gave some tips for writing good unit tests. In my opinion, some of them are very basic, but Steve still included them; I suspect that is because he has seen many bad unit tests make exactly those mistakes. He recommended not making unnecessary assertions, because unit tests are a design specification of how a certain behavior should work, and including multiple unrelated assertions only increases the frequency of pointless failures. Moreover, unit tests’ names should be clear and consistent. Personally, I liked the way he named his example unit tests: it helps you quickly identify the subject, the scenario, and the expected result of the test. This way of naming is more descriptive than the one I have always used, and I will apply it the next time I write tests.
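
Here is a short, hypothetical sketch of that naming pattern in JUnit 4: the test name reads as subject, scenario, then expected result, and each test makes a single focused assertion (the `Cart` class is my own illustration, not Steve's code):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CartTest {

    // Name pattern: Subject_Scenario_Result - the failure message alone
    // tells you what broke, under what conditions, and what was expected.
    @Test
    public void total_emptyCart_isZero() {
        assertEquals(0, new Cart().total());
    }

    @Test
    public void total_afterAddingOneItem_equalsItemPrice() {
        Cart cart = new Cart();
        cart.add(25);
        // One focused assertion per test: no unrelated checks that
        // could cause pointless failures when other behavior changes.
        assertEquals(25, cart.total());
    }
}

// A minimal subject under test for the sketch above.
class Cart {
    private int total = 0;

    void add(int price) {
        total += price;
    }

    int total() {
        return total;
    }
}
```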

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

Effective Code Reviews

While searching for a podcast for my first assignment, I came across this podcast about code reviews. I decided to choose it for my first blog entry because it contains some interesting opinions about code reviews. By listening to it, I learned about benefits of code reviews beyond bug detection and code improvement. Below is the link to the podcast:

https://talkpython.fm/episodes/show/102/effective-code-reviews

In this podcast, hosted by Michael Kennedy, Dougal Matthews shared his thoughts and experience about the benefits of code reviews and the elements an effective code review should have. Dougal gave an interesting scenario in which code reviews could save the day: suppose there are two people, one who knows more C++ and one who knows more Python. Even though they might not deeply understand what the other person is doing, they should still review each other’s code at a lighter level, in case one of them falls ill or leaves the company. He also brought up one reason he thinks many people dislike doing code reviews: they expect bug detection for their code, but for valid code they usually receive suggestions for improvement more often than bug reports. Michael and Dougal also shared some interesting ideas for effective code reviews.

To be honest, I had not thought of code reviews as a way to back up basic knowledge of a project, as Dougal described. Usually, the first thing that comes to my mind when I hear “code review” is bug detection. But his scenario points out a special case, one that small teams might well face, where code reviews can help.

Regarding one of the reasons many people dislike code reviews, I understand how they feel when they just want to check whether their code has bugs, not redo the whole project because they received a better solution to their problem. But I think we should be more flexible about that. If another solution is better than our original one in one or many ways, we should adopt it. Besides improving the code, we gain experience from it: the next time we encounter something similar, we can jump directly to the better solution and save the time we might otherwise have spent on the worse solutions we used before.

I thought the code review checklist that Michael and Dougal mentioned would be very helpful. As they said, it keeps the reviewers and the developers on the same page, which certainly reduces the time everyone spends asking the same questions about progress. I also agreed with their idea of having more than one person review the same project. Different people have different points of view, and people make mistakes; by involving more people in the review process, we can increase the quality of the review and share our knowledge with each other.

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

First blog post

This is my first post for the CS-443 class.

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.