Category Archives: CS-443

Continuous Development

Continuing on with the TestOps posts from the, well, awesome Awesome Testing blog is Continuous Development. This is actually very interesting, as it was a large part of what was taught in my Software Process Management course last year, so it was an enjoyable surprise to see it as the next covered topic.

Generally speaking, Continuous Development is, according to Wikipedia, “the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with software development.”

The first step is Continuous Integration and unit tests. After every single commit by a developer, the main branch app should be compiled and built, and then unit tests should be executed to give the quickest feedback possible. The post suggests using mutation testing – testing that deliberately injects small faults into your code to see how well your tests catch them – to evaluate how good the unit tests themselves are. After that, the developer should be made aware of how their commit changed the overall code coverage statistics.
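To make the mutation-testing idea concrete, here is a small sketch of my own (it is not from the original post). A mutation tool such as PIT can, for example, flip the >= in the method below to >; a unit test that checks the boundary value kills that mutant, while a weaker test suite would let it survive:

    class Grading {
        // Returns true when a score is high enough to pass.
        static boolean passes(int score) {
            return score >= 60;
        }
    }

    class GradingTest {
        // A mutation tool might change ">=" to ">" inside passes().
        // This boundary-value test fails against that mutant, so the mutant
        // is "killed" - evidence that the test is doing real work.
        @org.junit.Test
        public void boundaryScorePasses() {
            org.junit.Assert.assertTrue(Grading.passes(60));
        }
    }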

The next step is Continuous Delivery, or Automated Deployment. One should do numerous test environment deployments to test the deployment process of the application as well. After this comes testing of higher-level things, such as functionality at the integration or API level. End-to-end testing is very expensive, resource-wise, and should be done sparingly.
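As a rough sketch of what an API-level check in such a pipeline might look like (my own example, with a hypothetical endpoint and base URL), a test can hit the deployed test environment over HTTP and assert on the response, without driving any UI:

    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.Assert;
    import org.junit.Test;

    public class HealthCheckApiTest {
        // Hypothetical test-environment URL; in a real pipeline this would be injected.
        private static final String BASE_URL = "https://test-env.example.com";

        @Test
        public void healthEndpointRespondsWithOk() throws Exception {
            HttpURLConnection connection =
                    (HttpURLConnection) new URL(BASE_URL + "/health").openConnection();
            connection.setRequestMethod("GET");

            // An API-level assertion: no browser or UI involved, just the service contract.
            Assert.assertEquals(200, connection.getResponseCode());
            connection.disconnect();
        }
    }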

After that is performance testing, using a testing environment as close to the production environment as possible. You want to see how the application handles heavy loads. Then comes security testing, to make sure the application is as safe from being hacked as you can manage, and then the hardest step: exploratory testing. This is a manual exploration of the application that takes a lot of time and resources, so it should be done sparingly as well.
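As a very simplified sketch of the performance-testing step (my own illustration; real teams would reach for a dedicated load-testing tool such as JMeter or Gatling), a check might just time repeated calls against the production-like environment and fail when the average latency blows a budget:

    public class SimpleLoadCheck {
        public static void main(String[] args) throws Exception {
            int requests = 100;
            long totalMillis = 0;

            for (int i = 0; i < requests; i++) {
                long start = System.nanoTime();
                doRequest();   // stand-in for a call to the production-like environment
                totalMillis += (System.nanoTime() - start) / 1_000_000;
            }

            long average = totalMillis / requests;
            System.out.println("Average latency: " + average + " ms");
            if (average > 200) {   // hypothetical budget of 200 ms per request
                throw new AssertionError("Application is too slow under load");
            }
        }

        // Placeholder so the sketch runs on its own; a real check would issue HTTP requests.
        private static void doRequest() throws InterruptedException {
            Thread.sleep(5);
        }
    }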

Overall, this was another nice intersection between software development and testing. It was also a good reminder of concepts I learned in the very recent past, which I found very interesting at the time. The ability to streamline the process for a developer and to give them feedback as quickly as possible is incredibly important, and its potential to foster greater productivity is readily apparent. To that end, there are many useful tools out there for testers and developers alike. It's a very straightforward example of testing directly helping developers, which is nice to see.

Original Post: http://www.awesome-testing.com/2016/10/testops-3-continuous-testing.html

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

AB Testing – Episode 5 by Brent Jenson and Allen Page.

In this week’s testing episode, Brent and Allen begin by addressing end-to-end automation testing. It seemed that the original purpose of automation testing was being bypassed: automation testing is best suited for short, quick tests and regression checks, but by implementing dev architecture in testing, we are able to create a more organized and more structured development environment.

Brent continues by addressing an issue that happened at Amazon while he was there. They didn’t seem to have enough testers, because whenever an update was made it was reverted due to bugs and collisions with other programs that were found later. The reversion process caused developers to place program signals and interrupts that would be triggered when parts of the apps or project were breaking. This ended up educating the team about the need for, and importance of, more testers to find bugs and faults in the programs and updates. Project rollouts and changes often have a drastic effect on overall product quality in the eyes of the users. It is often overlooked that creating proper checkpoints in a program creates a great barrier against loss of service, since a checkpoint would be triggered by an update that affects the performance of the program.

Teaching programmers testing techniques forces them to refactor their code and build it to withstand updates that could break it. They also tend to write code that can be easily tested for bugs and holes. This practice creates a unique optimization of cost, but it can also produce very complex code that is not easily tested using automation, since outputs cannot be predicted.

Another tool that was introduced in the podcast was automated GUI testing. This is a testing approach that is often used by developers to build proper test cases and scenarios. Automated GUI testing reduces testing effort, speeds up delivery time, and improves test coverage. This is the main reason why teams that adopt agile testing methodologies and continuous integration practices continue to invest in automated testing tools that can be used to perform front-end testing. Implementing GUI testing becomes more complex as time progresses and is almost never a linear process; it is a demanding part of the development lifecycle that forces QA teams to dedicate a large amount of time to it.

To sum things up, the best automated testing tools not only have strong record-and-replay capabilities and flexible testing frameworks, but they also help you cut down on testing time and increase the speed to delivery.
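To make the automated GUI testing idea a little more concrete, here is a minimal sketch of what such a check could look like using Selenium WebDriver in Java (my own illustration; the podcast does not show code, and the URL, element ids, and expected page title below are made up):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginGuiCheck {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();   // needs a local ChromeDriver binary
            try {
                driver.get("https://example.com/login");               // hypothetical page
                driver.findElement(By.id("username")).sendKeys("testuser");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("submit")).click();

                // The GUI-level regression check: did we land on the dashboard?
                if (!driver.getTitle().contains("Dashboard")) {
                    throw new AssertionError("Login did not reach the dashboard");
                }
            } finally {
                driver.quit();   // always release the browser
            }
        }
    }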

LINK

https://testingpodcast.com/?powerpress_pinw=4538-podcast

https://smartbear.com/learn/automated-testing/manual-vs-automated-gui-testing/

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

Code Review: What is it and Why is it Important?

Link to blog: http://thinkapps.com/blog/development/what-is-code-review/

In this blog, written by Dario Macchi, Macchi explains what code review is and why it is necessary. He identifies what it is, its purpose, what a peer review is, what peer reviewers look for, what an external review is, what external reviewers look for, and a few scenarios on what code reviewers should do if something goes wrong or something is missed within the code review process.

Code Review: “systematic examination … of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers’ skills.”

Purpose: to validate the design and implementation of features within the code. Macchi identifies that there are two levels of code review: peer review and external review.

Peer Review: focused on functionality, design, implementation, and usefulness of proposed fixes for stated problems. Macchi explains why it is necessary to perform a peer review at his company: they expect developers to talk to each other about their design intentions and receive feedback throughout the design and implementation process. Macchi’s real-life experience gives us an example from the working field of software development and testing and shows how peer review is necessary. Peer reviewers look for feature completion, potential side effects, readability and maintenance, consistency, performance, exception handling, simplicity, reuse of existing code, and test cases.

External Review: addresses different issues and focuses on how to increase code quality, promote the best practices, and remove “code smells” or poorly written code. This review process looks at the quality of the code itself and its effects on the overall project. External reviewers look for readability and maintenance, coding style, and code smells.

A scenario that Macchi illustrates is “What if an external reviewer misses something?” The answer he gives is that we should not expect the external reviewer to make everything perfect; there is always something that will be missed. In this case, it is better to have more than one external reviewer. Another set of eyes always helps.

I chose this blog because I wanted to know the right process for reviewing code. I also chose it because it relates to my Software Quality Assurance and Testing class, since I was given an assignment to review code with a group of classmates, which emulates the peer and external review process and the real-life workplace. Macchi definitely outlines the aspects of code review very well when he identifies the two levels, peer review and external review. Knowing how to do code reviews will definitely help me apply myself in the future at many software development and testing jobs, as well as in my video game development career, because there is always a 100% chance that I will review code with a team of other programmers no matter where I work, especially when it comes to creating video game software.


From the blog CS@Worcester – Ricky Phan by Ricky Phan CS Worcester and used with permission of the author. All other rights reserved by the author.

Code Coverage


“Did my tests cover all the code?” – this is a question that often pops up in testers’ minds when they are writing tests. Code coverage can answer this question for them: it helps testers understand how much of their code is exercised by their tests. Since I had not used any code coverage tools before, I thought it would be a good idea to start learning about them through an introduction post. Below is the URL for the post.

https://www.atlassian.com/continuous-delivery/introduction-to-code-coverage

In this post, Sten Pittet, who has been in the software business for ten years in various roles from development to product management, introduces the definition of code coverage, the common metrics in coverage reports, and a tip for choosing the right tool for different projects. He also discusses what percentage of coverage testers should aim for. Moreover, he thinks that testers should focus on unit testing first, use coverage reports to identify critical misses in testing, and make code coverage part of their continuous integration flow.

Sten mentioned a few metrics that testers should pay attention to when reading coverage reports. They are function coverage, statement coverage, branch coverage, condition coverage, and line coverage. Function coverage shows how many of the defined functions have been called. Statement coverage shows how many of the statements in the program have been executed. Branch coverage shows how many branches of the control structures (if statements, for instance) have been executed. Condition coverage shows how many of the Boolean sub-expressions have been tested for both a true and a false value. Line coverage shows how many lines of source code have been tested. The example given in the post helped me understand the terms more easily.
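As a quick illustration of the difference between these metrics (my own example, not one from Sten’s post), consider a small Java method with a single if statement. One test that only takes the discount path executes every statement, so statement and line coverage read 100%, but branch coverage is only 50% until the other branch is exercised:

    class Discounts {
        // Applies a 10% discount to orders of $100 or more.
        static double price(double amount) {
            double total = amount;
            if (amount >= 100) {
                total = amount * 0.9;
            }
            return total;
        }
    }

    class DiscountsTest {
        // Runs every statement in price(), so statement and line coverage are 100%,
        // but the "amount < 100" branch is never taken: branch coverage is only 50%.
        @org.junit.Test
        public void largeOrderGetsDiscount() {
            org.junit.Assert.assertEquals(180.0, Discounts.price(200.0), 0.001);
        }

        // Adding this second test exercises the other branch as well.
        @org.junit.Test
        public void smallOrderPaysFullPrice() {
            org.junit.Assert.assertEquals(50.0, Discounts.price(50.0), 0.001);
        }
    }

A coverage tool such as JaCoCo reports these numbers separately, which is why the same test suite can look great by one metric and mediocre by another.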

In Sten’s opinion, 80% code coverage is a good goal to aim for. Trying to reach higher coverage might turn out to be costly while not necessarily producing enough benefit. He also says that it is normal to have low coverage on the first run, and testers should not feel pressured to reach 80% coverage right away. To be honest, I was really surprised that he only recommended 80% coverage. But when I thought about it harder, it made sense that reaching higher coverage might be costlier with less benefit, since real-life projects are usually much bigger than school projects. He also highlights that testers should write tests based on the business requirements of the application rather than tests that simply hit every line of the code.

Furthermore, Sten emphasizes that good coverage does not equal good tests. Code coverage tools can help testers understand where they should focus next, but they will not tell you whether your existing tests are robust enough for unexpected behaviors. Therefore, besides achieving great coverage, testers should have a robust test suite and verify the integrity of the system. I agree with him. Looking at his example, I could see clearly how bad it would be if we only relied on the tools to write tests. Besides the information about code coverage, I can apply his advice whenever I write tests.
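To see how coverage can be misleading, here is another small example of my own: a test that executes both branches of the Discounts.price() method from the earlier sketch but never checks the results. A coverage tool would happily report those lines as covered, yet the test can never fail, so it verifies nothing:

    class WeakDiscountsTest {
        // Calls the method for both branches, so every line shows up as "covered"
        // in the report, but there are no assertions: this test passes no matter
        // what price() actually returns.
        @org.junit.Test
        public void callsPriceButChecksNothing() {
            Discounts.price(200.0);
            Discounts.price(50.0);
        }
    }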


From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

Creating Your Code Review Checklist

In this blog post, Erik Dietrich goes over creating a code review checklist. He lists the kinds of items that may show up if you were to Google “code review checklist”:

  • Does every method have an XML comment?
  • Do classes have a copyright header?
  • Do fields, methods, and types follow our standard naming convention?
  • Do methods have too many parameters?
  • Are you checking validity of method parameters?
  • Does the code have “magic” values instead of named constants?

He then goes on to list two problems with going through a sometimes lengthy checklist:

  • You can’t keep 100+ items in your head as you look at every method or clause in a code base, so you’re going to have to read the code over and over, looking for different things.
  • None of the checks I listed above actually require human intervention. They can all be handled via static analysis.

His suggestion for streamlining the process of going through a big checklist is to automate the easy stuff: “Get static analysis tools that developers can install in their IDEs and run prior to delivering code, which will flag violations as errors or warnings. Get static analysis tools that run on the build machine and fail the build for violations.”
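As a small illustration of the kind of check that can be automated instead of eyeballed (my own example, not from the article), take the “magic value” item from the list above. A magic-number rule, such as the one Checkstyle ships, can flag the bare literal automatically, which frees the human reviewer to focus on design-level questions:

    public class SessionPolicy {
        // Flagged by a magic-number rule: what does 1800 mean?
        public boolean isExpiredMagic(long idleSeconds) {
            return idleSeconds > 1800;
        }

        // Preferred form: the constant names the intent, so a human reviewer
        // no longer needs to ask what the number represents.
        private static final long SESSION_TIMEOUT_SECONDS = 1800;   // 30 minutes

        public boolean isExpired(long idleSeconds) {
            return idleSeconds > SESSION_TIMEOUT_SECONDS;
        }
    }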

Code Review for the Important Stuff

The author lists an example checklist for a code author:

  • Does my code compile without errors and run without exceptions in happy path conditions?
  • Have I checked this code to see if it triggers compiler or static analysis warnings?
  • Have I covered this code with appropriate tests, and are those tests currently green?
  • Have I run our performance/load/smoke tests to make sure nothing I’ve introduced is a performance killer?
  • Have I run our suite of security tests/checks to make sure I’m not opening vulnerabilities?

The author lists an example checklist for a code reviewer:

  • Does this code read like prose?
  • Do the methods do what the name of the method claims that they’ll do? Same for classes?
  • Can I get an understanding of the desired behavior just by doing quick scans through unit and acceptance tests?
  • Does the understanding of the desired behavior match the requirements/stories for this work?
  • Is this code introducing any new dependencies between classes/components/modules and, if so, is it necessary to do that?
  • Is this code idiomatic, taking full advantage of the language, frameworks, and tools that we use?
  • Is anything here a re-implementation of existing functionality the developer may not be aware of?

I chose this resource because it had very useful information on code review, which ties into QA. I feel the content of the article is very informational and useful to my future career. It is a good starting point for making sure you have thoroughly checked the quality of your code, and since automated checks are only going to increase in use, it makes sense to automate what you can.


From the blog CS@Worcester – code friendly by erik and used with permission of the author. All other rights reserved by the author.

Blog 6

Soft Qual Ass & Test

From the blog CS@Worcester – BenLag's Blog by benlagblog and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-11-20 19:03:56

The blog post this is written about can be found here.

I picked this blog post because we’ve been utilizing mock objects in class lately, and this post explains in depth the logic behind using them, in addition to succinctly summarizing the different types of mock objects.

Using mock objects focuses a test on the specific code we want to test, eliminating its dependencies on other pieces of code we don’t care about at the moment. This way, if a test fails, we can be sure it’s because of a problem in the code under test and not in something called by it. This greatly simplifies searching for faults and reduces time spent looking for them.

Mock objects also serve to keep the test results consistent, especially when the real object you’re creating a mock of can undergo unpredictable changes. If you utilize a changing database, for instance, your test might pass one time and then fail the next, which gives you no useful information.

Mock objects can also reduce the time necessary to run tests. If code would normally call outside resources, running hundreds of tests which utilize the actual code could take a long while. Mocks of these resources would respond much more quickly. Obviously we want to test calls to the actual resources at some point, but they aren’t necessary in every instance.

“Mock” is also used as a generic term for any kind of imitation object used to replace a real object during testing, of which there are several. Fakes return a predictable result, but the result isn’t based on the logic used to obtain a result in the real object. Stubs return a specific result in response to specific input, but they aren’t equipped to handle other inputs. Stubs can also retain information about how they were called, such as how many times and with what data. Mocks are far more sophisticated versions of stubs which will return values in similar ways, but can also hold expectations about how many times each method should be called, in which order, and with what data. Mocks can ensure that the code we’re testing is using its dependencies the exact way we want it to. Spies replace the methods of the real object a test wants to call instead of acting as a stand-in for the object. Dummies are objects that are passed in place of another object, but never used.
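As a rough sketch of the difference between stub-style and mock-style usage (my own example built with the Mockito framework, not code from the post), suppose an OrderService depends on a PriceRepository and an EmailSender. The stubbed repository returns canned data, while the verify call at the end states an expectation about exactly how the dependency should have been used:

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class OrderServiceTest {

        // Hypothetical collaborators, defined here so the sketch is self-contained.
        interface PriceRepository { double priceOf(String item); }
        interface EmailSender { void sendReceipt(String customer, double total); }

        static class OrderService {
            private final PriceRepository prices;
            private final EmailSender emails;

            OrderService(PriceRepository prices, EmailSender emails) {
                this.prices = prices;
                this.emails = emails;
            }

            void placeOrder(String customer, String item) {
                emails.sendReceipt(customer, prices.priceOf(item));
            }
        }

        @Test
        public void placingAnOrderSendsAReceipt() {
            // Stub-style use: the repository returns a canned value for a known input.
            PriceRepository prices = mock(PriceRepository.class);
            when(prices.priceOf("book")).thenReturn(12.50);

            // Mock-style use: we will verify how this dependency was called.
            EmailSender emails = mock(EmailSender.class);

            new OrderService(prices, emails).placeOrder("alice", "book");

            // The expectation: exactly this interaction should have happened.
            verify(emails).sendReceipt("alice", 12.50);
        }
    }

The framework generates both doubles from the interfaces, which is exactly the kind of busywork the post says mocking frameworks can take off your hands.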

Creating the most sophisticated type of mock seems like it might take more time than it’s worth, but existing mocking frameworks can take care of most of the work of creating mock objects for your tests.

In the future I expect to write tests that utilize mocking. This post, along with Martin Fowler’s article, has given me a good starting point in being able to utilize them effectively as well as decide how elaborate a mock needs to be for a particular test.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.