Author Archives: Johnny To

Mutation Testing

Source: https://www.guru99.com/mutation-testing.html

This week’s reading is on mutation testing. It is described as a technique that changes certain statements in the source code to see if the test cases are able to find the errors. The overarching objective of mutation testing is assessing the quality, or robustness, of the test cases. Mutations generally fall into several categories: operand replacement, expression modification, and statement modification. For operand replacement, an example would be simply replacing a variable in an if-statement with a constant. For expression modification, an example would be replacing a less-than-or-equal-to operator with a greater-than-or-equal-to operator. Lastly, for statement modification, an example would be deleting lines of code, adding code, or modifying data types in the program. This type of testing allows testers to uncover errors that would otherwise have remained undetected. By comprehensively testing the tests, it also drives a large amount of code coverage of the source code. It is a very useful white-box testing technique.
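To make those categories concrete, here is a minimal sketch in Java (the Discount class and its numbers are made-up examples, not taken from the article) of an original method and one possible mutant:

    // Original method under test (hypothetical example, not from the article).
    class Discount {
        double apply(double price, int quantity) {
            if (quantity >= 10) {        // original condition
                return price * 0.9;
            }
            return price;
        }
    }

    // Expression modification mutant: ">=" replaced with "<=".
    // An operand replacement mutant might instead replace "quantity" with a constant.
    class DiscountMutant {
        double apply(double price, int quantity) {
            if (quantity <= 10) {        // mutated operator
                return price * 0.9;
            }
            return price;
        }
    }

If the existing tests still pass when run against DiscountMutant, the mutant survives and the test suite needs strengthening; if a test fails, the mutant is killed.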

I found that the article provides great depth on mutation testing: steps on how the technique works, advantages, disadvantages, and examples. This serves as a great refresher for the activity done in class. Before that, I would never have thought about a software technique that provides such coverage of the tests already created. I agree with the article that it is a powerful tool that brings an adequate amount of error detection. It will improve code quality, and early bug detection will save costs later in development if bugs are caught using mutation testing. What I found most interesting about mutation testing in general is the mutation score. It is defined as the percentage of killed mutants out of the total number of mutants (written out just below). By observing the percentage of killed mutants, we can see whether the test cases are effective against the mutations. Unlike other white-box testing techniques, it is a unique way of testing the tests themselves. This is similar to the black-box technique of fuzzing, which creatively generates unexpected inputs for software to uncover bugs that would otherwise have been missed. In conclusion, this exhaustive technique is very useful for comprehensively testing a program and is very effective.
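As the article defines it, the score is simply the ratio of killed mutants to the total number of mutants generated:

    Mutation Score = (Killed Mutants / Total Mutants) x 100%

A suite that kills every mutant scores 100%, which is the target the article describes.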

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Path or No Path!

Source: http://www.professionalqa.com/path-testing

This week’s reading is about Path Testing. A vital part of software engineering is ensuring that proper tests are in place so that errors or issues within a software product are resolved before they can turn into a potentially costly threat to the product. Path testing helps evaluate and verify the structural components of the product to ensure the quality meets the standards. This is done by checking every possible executable path in the software product or application. Simply put, it is another structural testing method used when the source code is available. Within this method, many techniques are available, for example control flow charts, basis path testing, and decision-to-decision path testing. These types of testing come with their fair share of advantages and disadvantages. However, path testing is considered a vital part of unit testing and will likely improve the functionality and quality of the product.
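As a rough illustration (the GradeClassifier method below is a made-up example, not from the article), even a tiny method already contains several executable paths that path testing would enumerate and exercise:

    // Hypothetical method with two decisions, giving four executable paths.
    class GradeClassifier {
        static String classify(int score, boolean bonus) {
            String grade;
            if (score >= 60) {           // decision 1
                grade = "pass";
            } else {
                grade = "fail";
            }
            if (bonus) {                 // decision 2
                grade = grade + "+";
            }
            return grade;
        }
        // Paths to cover: (score >= 60, bonus), (score >= 60, no bonus),
        //                 (score < 60, bonus),  (score < 60, no bonus).
    }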

What I found thought-provoking about the content is the section on the significance of a path. Providing an understanding of what the term “path” means certainly breaks down the importance of this test: a path describes a program’s execution path, from initialization to termination. As a white-box testing technique, we can be sure that the tests cover a large portion of the source code. It is also useful that the article acknowledges the problems that can be found while doing path testing. These errors are often caused by processes being out of order or by code that has yet to be refactored, for example leftover code from a previous revision or variables being initialized in places where they should not be. Utilizing path testing will reveal these error paths and will greatly improve the quality of the code base. I also agree that path testing, like most white-box testing techniques, requires individuals who know the code base well enough to contribute to these types of tests. Another downside is that it will not catch issues that can only be found through black-box testing. This article allowed me to reinforce what I had learned in class about Path Testing and DD-Path Testing.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Black-Box vs White-Box Testing

Source: https://www.guru99.com/back-box-vs-white-box-testing.html

This week’s reading is about the differences between black-box and white-box testing. For starters, it states that in black-box testing the tester does not have any information about what goes on inside the software, so it mainly focuses on tests from the outside, at the level of the end user. In comparison, white-box testing allows the tester to check within the software. The tester has access to the code; for this reason, another name for white-box testing is code-based testing. In this article, the differences between each type of test are listed in a table format. The basis of testing, usage, automation, objective, and many other categories all differ. For example, black-box testing is stated to be ideal for system testing and acceptance testing, while white-box is much better suited for unit testing and integration testing. The many advantages and disadvantages of each method are clearly defined and provide a clear consensus on how each method will pan out.
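To make the contrast concrete, here is a small JUnit-style sketch (the Shipping class and its rules are my own hypothetical example, not from the article) of how the same method might be approached from each side:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical class under test.
    class Shipping {
        static double cost(double orderTotal) {
            return orderTotal >= 100.0 ? 0.0 : 5.99;   // internal branch
        }
    }

    public class ShippingTest {
        // Black-box style: derived from the requirement
        // "orders of $100 or more ship free", without reading the code.
        @Test
        public void freeShippingOverThreshold() {
            assertEquals(0.0, Shipping.cost(120.00), 0.001);
        }

        // White-box style: the tester has read the source and deliberately
        // targets the boundary of the internal if-branch.
        @Test
        public void boundaryOfShippingBranch() {
            assertEquals(5.99, Shipping.cost(99.99), 0.001);
        }
    }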

What I found useful about this article is the clear and concise language it uses to describe each category. Unlike other articles I’ve come across on the topic, which beat around the bush and make it difficult to discern the importance of each type of testing, this one gets to the point. Much of the information provided by this article can be supported by activities done in class. One of the categories, Time, labeled black-box testing as less exhaustive and less time-consuming, while white-box is the very opposite. I somewhat agree with this description, as with white-box testing you have much more information to work with. Every detail of the code can, as deemed necessary, be turned into a test in some way; the overall quality of the code, as stated, is being checked during this kind of testing. In black-box testing, the main objective is to test the functionality, which means it is not as extensive a test in general. What also struck me as interesting was the category Granularity. A single Google search yielded the meaning “the scale or level of detail present in a set of data”. Low for black-box and high for white-box, which rings true for both kinds of tests. In conclusion, this article reinforces prior knowledge on the differences between black-box and white-box testing.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Dynamic Test Process

Source: https://www.guru99.com/dynamic-testing.html

This week’s reading is about a dynamic testing tutorial written by Radhika Renamala. Dynamic testing is described as a software testing technique where the dynamic behavior of the code is analyzed. The example provided is a simple login page which requires input from the end user for a username and password. When the user enters a username or password, there is an expected behavior based on that input. By comparing the actual behavior to the expected behavior, you are working with the system to find errors in the code. The article also lays out a dynamic testing process in the order of test case design and implementation, test environment setup, test execution, and bug reporting. The first step is simply identifying the features to be tested and deriving test cases and conditions for them. Then the test environment is set up, the tests are executed, and the findings are documented. Using this method can reveal hidden bugs that can’t be found by static testing.
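Mirroring the article’s login example, a minimal sketch of comparing actual behavior against expected behavior while the code runs might look like this (LoginPage and its hard-coded credentials are my own stand-ins, not from the tutorial):

    // Hypothetical page object with hard-coded credentials for the sketch.
    class LoginPage {
        boolean login(String username, String password) {
            return "alice".equals(username) && "correct-password".equals(password);
        }
    }

    public class LoginDynamicTest {
        public static void main(String[] args) {
            LoginPage page = new LoginPage();

            // Expected behavior: a valid username/password pair is accepted.
            System.out.println(page.login("alice", "correct-password") ? "PASS" : "FAIL");

            // Expected behavior: a wrong password is rejected.
            System.out.println(!page.login("alice", "wrong-password") ? "PASS" : "FAIL");
        }
    }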

This reading was interesting because I thought the process would be simpler than what is written in this article. Personally, I thought randomly inputting values and observing the output would be sufficient. However, the simplified steps aren’t as simple as they look, because there are necessary considerations. In the article, the author warns the reader that other factors should be considered before jumping into dynamic testing. Two of the most important are time and resources, as they will make or break the efficiency of running these tests. This is unlike static testing, which, as I have learned, is based more around creating tests directly from the code provided. That makes it easy to create tests that are clearly related to the code, but it does not push testers to think outside the box the way dynamic testing does. This type of testing will certainly create abnormal situations that can surface a bug. As stated in the article, this type of testing is useful for increasing the quality of your product. After reading this article, I can see why the author concluded that using both static and dynamic testing in conjunction with each other is a good way to properly deliver a quality product.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Developer Ego Begone!

Source: https://blog.lelonek.me/how-should-we-do-code-reviews-ced54cede375

This week’s reading is a blog post about conducting code reviews properly by Kamil Lelonek. It gives a general overview of code review as a process of giving feedback about another person’s code. By utilizing this process of rejecting and approving certain changes to the codebase, it generates improvements as a whole. However, there is much more to code review than it seems. The benefits include catching bugs early and ensuring that the code is legible and maintainable moving forward, but the post makes it clear that developers are very protective of the code they write and will attempt to defend it against criticism. So it provides different approaches to mitigating problems that could arise during code review. These approaches should allow feedback to reach the author without appearing as a threat. Some of the techniques are distinguishing opinions from facts, avoiding sarcasm, and being honest with yourself. By understanding the ten tips provided, code reviews should become more effective for everyone involved.

What I found interesting about the article is how straightforward it is in addressing one’s ego. The author is right that developers would like to say they have written good code, but sometimes they need to leave the ego behind. Not opening themselves up to criticism and treating it as a threat is detrimental to the team as a whole. Also, when actively reviewing code, I can see that providing evidence when nitpicking certain lines of code should make it easier for the reviewee to understand what you are addressing specifically. However, I believe that avoiding jokes and sarcasm should be remembered as a top priority, especially when you are reviewing code for a friend. Recalling from personal experience, I believe I did not help a peer to the best of my abilities because I used sarcasm during code review. This also applies to distinguishing opinions from facts, where sometimes through practice you are led to believe that one technique is better than another. In conclusion, these tips are great for improving code review sessions.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Test Automation, are you doing it right?

Source: https://www.softwaretestinghelp.com/automation-testing-tutorial-1/

This week’s reading was a test automation tutorial. It defined test automation as a technique to test software and compare the actual outcome with the expected outcome, mainly used for automating repetitive tasks which are difficult to perform manually. Test automation allows testers to achieve consistent accuracy and consistent steps in their testing, which in turn reduces the overall time spent testing the same thing over and over. Since the tests do not become obsolete, new tests can be added on top of the current scripts as a product evolves. The tutorial also suggests that these tests should be planned so that maintenance is minimal; otherwise time will be wasted fixing automation scripts. The benefits are huge, but there are challenges, risks, and other obstacles, such as knowing when not to automate and to turn to manual testing instead, which allows a more analytical approach in certain situations. This is directly related to the mistaken perception that no bugs exist as long as the automation scripts are running smoothly. It is concluded that test automation is only right for certain types of tests.
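As a small sketch of that repetitive-task point (Calculator and the test data below are made up, not from the tutorial), an automated script can run the same comparison of actual versus expected output over many inputs with consistent accuracy:

    // Hypothetical code under test.
    class Calculator {
        static int add(int a, int b) { return a + b; }
    }

    public class RegressionRunner {
        public static void main(String[] args) {
            // Each row: input a, input b, expected result.
            int[][] cases = { {2, 3, 5}, {0, 0, 0}, {-1, 1, 0} };
            for (int[] c : cases) {
                int actual = Calculator.add(c[0], c[1]);
                System.out.println("add(" + c[0] + ", " + c[1] + ") -> "
                        + (actual == c[2] ? "PASS" : "FAIL"));
            }
        }
    }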

I found this tutorial to be incredibly helpful as it provided real-life situations as examples for many of the topics covered. It is effective at making the reader see the reality behind test automation through the five W’s – who, what, when, where, and why – even if they are not stated explicitly. I can conclude that I took test automation for granted, as I assumed that all tests would be automated regardless. That way of thinking is a wrong step for a tester to make, because not all bugs can be discovered through pre-defined, static test cases. Manual testing is necessary to nudge bugs into appearing through manual intervention, as it pushes the limits of the product. Overall, the main takeaway for me is the planning phase of test automation. By splitting different tests into different groups, we can easily set a path for testing in an ordered way. For example, it would be best to test basic functionality, then integration, before testing specific features and functionality, since it is logically more difficult to solve complex bugs before the smaller ones. It goes to show that test automation is not as easy as it looks.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Differences in Integration Testing

Source: http://www.satisfice.com/blog/archives/1570

The blog post Re-Inventing Testing: What is Integration Testing? by James Bach takes an interview-like approach to exploring integration testing. It starts with the simple question “What is integration testing?” and goes from there. As the respondent answers, James leaves a descriptive analysis of the answer and what he is thinking at that point in time. The objective of the interview is to test both his own knowledge and the interviewee’s.

This was an interesting read, as it relates to a topic from a previous course that discussed coupling, the degree of interdependence between software modules, which is why I chose this blog post. What I found interesting about this blog post is that his chosen interviewee was a student, so the entire conversation can be viewed from a teacher-and-student perspective. This is useful because it allows me to see how a professional would like an answer to be crafted and presented in a clear manner. For example, the interviewee’s initial answer is textbook-like, which prompted James to press for details.

Some useful information about integration testing is also provided as a result of this conversation. Integration testing is used when multiple pieces of software are combined and tested together as a group. In this blog post, it is noted that not all levels of integration are the same. Sometimes “weak” forms of integration exist; an example provided in the blog is when one system creates a file for another system to read. There is a slight connection between the two systems because they interact with the same file, but as they are independent systems, neither of the two knows that the other exists.
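A minimal sketch of that “weak” integration might look like the following (the class and file names are my own, not from the blog post): the two systems only meet at the shared file, and neither knows the other exists.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    // System A: writes a file and knows nothing about who will read it.
    class ReportWriter {
        void export(Path file) throws Exception {
            Files.write(file, List.of("order-1,42.00", "order-2,13.50"));
        }
    }

    // System B: reads the same file and knows nothing about who wrote it.
    class ReportReader {
        List<String> load(Path file) throws Exception {
            return Files.readAllLines(file);
        }
    }

    public class WeakIntegrationDemo {
        public static void main(String[] args) throws Exception {
            Path shared = Path.of("report.csv");   // the only point of contact
            new ReportWriter().export(shared);
            System.out.println(new ReportReader().load(shared));
        }
    }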

From this blog post, I can tell that any testing requires much more than textbook knowledge of the subject. As mentioned briefly in the blog, there are risks involved with integrating two independent systems, and there is a certain amount of communication between them. The amount of communication determines the level of integration: the stronger the communication between the two systems, the more they depend on one another to execute certain functions.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Experiencing something new…

This semester, as a Senior, I wish to have a wonderful experience learning and retaining all the skills taught to me in CS-443.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Liskov Substitution Principle

Source: https://www.tomdalling.com/blog/software-design/solid-class-design-the-liskov-substitution-principle/

SOLID Class Design: The Liskov Substitution Principle, written by Tom Dalling on Tomdalling.com, is part of a five-part series about the SOLID class design principles in OOP. He starts off with a problem about inheritance. For instance, a penguin is a bird, so it falls under an “is a” relationship; however, when the penguin inherits from the bird class, it also inherits the fly method. As soon as you make the fly method do nothing, it violates the LSP. Tom then explains that, following from the Open/Closed Principle, subclasses must follow the interface of the abstract base class. If the base class has to be altered to account for certain subclasses, then it also violates the Open/Closed Principle of being able to extend a class’s behavior without modifying it. In conclusion, two solutions are presented: one is adding a method to check whether a bird is flying or non-flying; the other, which he states is the better solution, is to create separate classes to account for a flightless type, so that the fly method is not inherited from the superclass.
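A rough sketch of the problem and of that second, preferred solution (class names simplified from his bird example, not his exact code) might look like this:

    // Violating design: Penguin inherits fly() and is forced to neuter it.
    class Bird {
        void fly() { /* flap wings */ }
    }
    class Penguin extends Bird {
        @Override
        void fly() { /* do nothing -- violates the LSP */ }
    }

    // Preferred solution: separate the flightless type so fly() is never inherited.
    class BirdBase { /* shared bird behavior, no fly() here */ }
    class FlyingBird extends BirdBase {
        void fly() { /* flap wings */ }
    }
    class FlightlessBird extends BirdBase { /* no fly() at all */ }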

After doing assignments for CS-343 that revolved around refactoring pre-existing, poorly implemented code by applying design patterns, I did not realize that we had touched upon the Liskov Substitution Principle until reading this blog post. Choosing this post therefore serves as a great source of review for topics covered in class. The assignment that incorporated multiple design patterns started off as a clear case for applying the LSP, as the original design had two instances of overriding a method to do nothing, and the criteria for our first refactor required applying the LSP along with other inheritance reworks. Since we were working with ducks, you can imagine a QuackBehavior and a FlyBehavior that accounted for both real and inanimate ducks. So the LSP application is similar to the second solution presented earlier, in which the fly and quack methods aren’t inherited from a superclass but rather come from an interface implemented by the duck class. Even though a method may still do nothing, it isn’t overriding an inherited implementation, so it does not violate the LSP.
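In the duck assignment that approach roughly amounted to something like the sketch below (simplified from memory, not the actual assignment code):

    // Behavior pulled out into an interface instead of a superclass method.
    interface FlyBehavior {
        void fly();
    }

    class FlyWithWings implements FlyBehavior {
        public void fly() { System.out.println("Flying!"); }
    }

    class FlyNoWay implements FlyBehavior {
        public void fly() { /* inanimate duck: intentionally does nothing */ }
    }

    class Duck {
        private final FlyBehavior flyBehavior;
        Duck(FlyBehavior flyBehavior) { this.flyBehavior = flyBehavior; }
        void performFly() { flyBehavior.fly(); }
    }

Because nothing is being overridden to do nothing, substituting any FlyBehavior into a Duck keeps the class hierarchy honest.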

Like other OOP principles, these exist to help achieve code that is maintainable and reusable. By acknowledging the existence of these SOLID class design principles, I will hopefully prevent future projects from exhibiting these types of code smells. Also, by understanding them and utilizing them wherever they fit best, my code will be cleaner, easier to maintain, and easier to extend with new features.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Don’t be an Outlaw

Source: https://haacked.com/archive/2009/07/14/law-of-demeter-dot-counting.aspx/

The Law of Demeter Is Not A Dot Counting Exercise by Phil Haack on haacked.com is a great read on the applications of the Law of Demeter. Phil starts off by analyzing a code snippet to see whether it violates the “sacred Law of Demeter”, then gives a short briefing of the Law by referencing a paper by David Bock. He goes on to clear up a misunderstanding of the Law of Demeter held by people who do not know it well, hence the title of his post: “dot counting” does not necessarily tell you that there is a violation of the law. He closes with an example from Alex Blabs showing that when you apply a fix to multiple dots in one call, you can effectively lose out on code maintainability. Lastly, he explains that digging deeper into new concepts is all well and good, but being able to explain the disadvantages alongside the advantages shows a better understanding of the topic.

Encapsulation, as a concept introduced to me, is about encapsulating what varies. The Law of Demeter is a specific application of it for methods. It is formally written as “Each unit should have only limited knowledge about other units: only units “closely” related to the current unit”. The example in the paper by David Bock, the Paperboy and the Wallet, makes it easy to understand where this is coming from. Giving methods access to more information than they need is unnecessary and should be avoided; likewise, letting a method have direct access to state changed by another method is a bad idea. By applying the Law of Demeter, you encapsulate this information, which simplifies the calling code even if it adds a method or two to the class itself. Overall, you end up with a product that is more easily maintainable, in the sense that if you change values in one place, the change applies across the board wherever they are used.
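A minimal sketch of the Paperboy and Wallet idea (simplified class names, not Bock’s exact code) shows both the violation and the fix:

    // Violation: the paperboy reaches through the customer into the wallet.
    class Paperboy {
        void collectViolating(Customer customer, double amount) {
            customer.getWallet().subtract(amount);   // chaining into a stranger
        }

        // Law of Demeter applied: the paperboy only talks to the customer.
        void collect(Customer customer, double amount) {
            customer.pay(amount);
        }
    }

    class Customer {
        private final Wallet wallet = new Wallet();
        Wallet getWallet() { return wallet; }
        void pay(double amount) { wallet.subtract(amount); }
    }

    class Wallet {
        private double cash = 20.0;
        void subtract(double amount) { cash -= amount; }
    }

With the pay method in place, how the customer stores money can change without the paperboy ever knowing.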

Although encapsulation is not a new topic, knowing how to properly apply encapsulation to methods through the Law of Demeter should be a good practice. This means remembering that “a method of an object may only call methods of the object itself, an argument of the method, any object created within the method, and any direct properties/fields of the object”. For example, applying the Law of Demeter to chained get statements is a good idea, while importing many classes that you won’t use is a bad one. With this understanding, although incomplete, I will hopefully avoid violating the Law of Demeter and share it with my colleagues.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.