Journey into Mocking in Testing

As I take another step toward Software Quality Assurance Testing, I decided to research a topic discussed in class the other day: mocking in testing using Mockito. For this blog I will be talking about a blog that relates to my topic of interest, "What is Mocking in Testing?" by Piraveena Paralogarajah.

This blog was very interesting and does a good job of introducing readers to what mocking is and why it is used. I suggest this blog to readers with no experience with or knowledge of the software framework known as Mockito and/or the mocking concept.

Here is a brief summary of the blog "What is Mocking in Testing?" by Piraveena Paralogarajah.

What is mocking?

Mocking is the notion of making “a replica or imitation of something”.

Where and why is mocking used?

Mocking is used in unit testing. When an object depends on other objects, those dependencies are replaced with mocks that simulate the behavior of the original objects, so that the object under test can be isolated. Mocking is done using a few general types of mocking frameworks. Some of the frameworks mentioned by the blog are the following:

  • “Proxy based ( eg: EasyMock, JMock, Mockito)”
  • “Byte code Manipulation / Classloader remapping ( eg: jMockit, PowerMock)”

What is Proxy-based Mocking? How does it work?

“A proxy is just an object which will be used instead of the original object. If a method of the proxy object is called than the proxy object can decide what it will do with this call:

  • delegate it to the original object
  • handles the call itself”

The blog goes on to list the limits of proxies and the fact that a "proxy does not require an instance of an interface /class if the proxy handles all method invocations itself".
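To make the proxy idea more concrete, here is a minimal Mockito sketch of my own (it is not code from the blog): the mocked List is a proxy that decides on its own how to answer a call instead of delegating to a real list.

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.Test;

public class ProxyMockingExample {

    @Test
    public void proxyHandlesTheCallItself() {
        // Mockito creates a proxy object that stands in for a real List
        List<String> mockedList = mock(List.class);

        // Tell the proxy how to handle this particular method call itself
        when(mockedList.get(0)).thenReturn("hello");

        assertEquals("hello", mockedList.get(0));

        // We can also check how the proxy was interacted with
        verify(mockedList).get(0);
    }
}
```

Notice that no real List instance is ever created; the proxy handles every method invocation on its own, which matches the point quoted above.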

What is Classloader remapping based Mocking?

It is the notion of telling "the class loader to remap a class reference to the class file it loads".

 

ALL QUOTES ARE FROM THE BLOG “What is Mocking in Testing?” by Piraveena Paralogarajah

I enjoyed the blog and also learned that there are other types of mocking frameworks, not just Mockito, which I will have to look into more. That being said, if you take the time to read the blog I am writing about, you will get more detailed information on what mocking is and why it is used, as well as a few other mocking-related details such as the general types of mocking frameworks.

Thank you for your time. This has been YessyMer in the World Of Computer Science, until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Angular Testing

Angular Unit Testing

For this week's blog post I thought I would take a look at Angular testing, considering in class we have been discussing it and are going to use it for our final project. The site I found goes into great detail about why it is good to test Angular code and how to conduct it properly. They first start off with an example of code that compiles properly and "works," but under a more complex situation, with dozens of tests in a suite, it may begin to fall apart. They explain that every run recompiles the components and modules, so up to 75% of your time is spent recompiling and not actually running the tests themselves. Next, they show an Angular TestBed monkey patch that patches the testing framework so the testing module is reset before each run and each test. The TestBed.resetTestingModule function they use cleans up all the overrides, modules, and module factories and disposes of all active fixtures as well, essentially cleaning up what you have. An _initIfNeeded function then comes into play, preserving the factories from the previous run. If the flag is false, TestBed re-creates the components required for the test, creating a new zone and testing module, but it does not recompile anything if the moduleFactory is already in place. After this they run the code they have, clocking in at a 24-second run time to complete the test suite. Then they apply the patch by calling a setupTestSuite function and replacing beforeEach with beforeAll, causing the run time to drop to 8 seconds, a noticeable 3x improvement in time efficiency for this example. This patch allows them to preserve the compilation results and reuse them for multiple tests per suite.

After this they set up Karma parallel tests, allowing the tests to be run in parallel. The website shows all the code, which is quite lengthy, and I wouldn't want to subject you all to a blog post that takes up too much space. By clicking the link below you can see all the code they have and more. All in all, this website was pretty handy and interesting in what it described and how it showed Angular testing to its potential.

https://blog.angularindepth.com/angular-unit-testing-performance-34363b7345ba

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

Why you need to know these 9 automated testing tools for Java

Hi everyone and welcome to my sixth CS 443 blog post. Today I'm going to go over an article I am currently reading titled "Why you need to know these 9 automated testing tools for Java". This article explains how Selenium, JUnit, the Grinder, and other automated testing tools work and how to use them. Testing Java applications is important because you must ensure that your program does what it is supposed to do. There are many automated testing tools for Java, but this article goes over the most common and useful of them. First, there are different types of tools for different kinds of situations. For example, unit testing is for newly written code before it is incorporated into the code base. Next, integration testing prevents the application from crashing because of newly written code. Also, to test the performance of the application you would use performance and user experience testing. Finally, to test the vulnerabilities of the application you would use security testing. These are some of the testing methods; there are many more, and some of them are used for multiple purposes. The article lists the best automated testing tools for Java, which are:

  1. JUnit, which is the most popular and is usually used for unit testing (a minimal example appears after this list).
  2. TestNG, which is a versatile tool because it can be used for unit tests, integration tests, and many others.
  3. JTest, which is good for security testing.
  4. The Grinder, which is for performance testing.
  5. Gatling, which is also a performance testing tool.
  6. Selenium, which is for interface and user experience testing.
  7. Mockito, which is good because it makes it fast and easy to write automated Java tests.
  8. PowerMock, which is a unit testing framework.
  9. Arquillian, which allows you to write tests that execute in real runtime environments.
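Since JUnit came up first on the list, here is a minimal hedged sketch of what a JUnit 4 unit test looks like (the Calculator class is hypothetical and is included only so the example is self-contained; this is not code from the article):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {

    // Hypothetical class under test, included only for illustration
    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    @Test
    public void addReturnsTheSumOfTwoNumbers() {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
    }
}
```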

 

All in all, this article is very helpful and changed the way I think about this subject, because I did not know there were so many testing tools out there or what the purpose of each one was. I now know which automated tools to use and when, depending on the situation.

From the blog CS@Worcester – Phan's CS by phancs and used with permission of the author. All other rights reserved by the author.

Mocking

In the post Mock? What, When, How? by Lovis Möller, he discusses mocking and when to use it. Mocking is a strategy used in test-driven development where a mock object is created to test the interactions and dependencies a class has with other classes. Lovis talks about looking at his own and other people's code and identifies situations in which he would or would not use mocking, and why.

Lovis's first point is that you should only mock types that are internal to your program and avoid mocking external types. External types may have dependencies of their own that you don't know about and could change in a later version of the code. Avoiding mocks of external types makes your code more adaptable for future versions. The next thing Lovis recommends is that you do not mock values. If you mock values, you aren't actually testing any useful part of the code. Lovis states:

“Mocking is a technique that is used to make the relationships and interactions between objects visible.”

The last big example Lovis gives is not to mock concrete classes. The example and associated code he gives make it pretty clear, as the result depends on a method that has not been covered by the mocked methods within the code. It is easy to see how this could get out of hand if you had to cover each method of a mocked class for your mock testing.

I had a lot of trouble understanding mocking and when you would want to use and and why before reading this post. Although it is still a difficult concept to understand why we are making these mock objects, seeing circumstances in which you would or wouldn’t want to use mocking, and alternatives to mocking make it a little easier to see how it is testing a program and why it is a useful concept. The quote I selected above made this even more apparent to me. Mocking isn’t testing the implementation or your program so much as the relationships between objects and their methods and how they interact.
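To illustrate that last point with a hedged sketch of my own (the Notifier and MessageGateway types are hypothetical and do not come from Lovis's post), the test below never checks a return value; it only checks the interaction between the object under test and its mocked, program-internal dependency.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class NotifierTest {

    // Hypothetical internal types, used only to show interaction-based testing
    interface MessageGateway {
        void send(String recipient, String message);
    }

    static class Notifier {
        private final MessageGateway gateway;

        Notifier(MessageGateway gateway) {
            this.gateway = gateway;
        }

        void notifyUser(String user) {
            gateway.send(user, "You have a new alert");
        }
    }

    @Test
    public void notifyUserSendsTheMessageThroughTheGateway() {
        MessageGateway gateway = mock(MessageGateway.class);
        Notifier notifier = new Notifier(gateway);

        notifier.notifyUser("alice");

        // The assertion is about the relationship between the objects,
        // not about a computed value
        verify(gateway).send("alice", "You have a new alert");
    }
}
```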

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Journey into the Top 12 Tools, Frameworks, and Libraries for Software Development in 2018

Another blog means another step in my journey in Software C.D.A. This blog post is a review of the blog I read, "Top 12 Tools, Frameworks, and Libraries for Software Development in 2018" by Archna Oberoi.

This blog talks about the top 12 software tools, frameworks, and libraries of 2018. Personally, I found the blog extremely well written and insightful, and the writer clearly knows what she is talking about. I suggest that readers interested in this topic read it. I will provide a very brief summary of its main points.

In the blog "Top 12 Tools, Frameworks, and Libraries for Software Development in 2018," Archna Oberoi mentions how the following 12 tools, frameworks, and libraries are pretty much considered to be the best for software development in 2018. They are the following:

  1. NodeJS – "the javascript runtime framework built on Chrome V8 engine" is best for applications that require data input and output to be available to users in real time.
  2. Angularjs – "Introduced by Google in 2012, this javascript framework for front-end development is great for building Single Page Applications (SPAs)."
  3. React – "a javascript library by Facebook for building user interfaces (for web)."
  4. .NET Core – "an open-source, next-gen .NET framework by Microsoft."
  5. Spring – "a dependency injection framework (Inversion of Control) that assigns dependencies to the object during run-time."
  6. Django – "an open-source framework for web app development, written in Java."
  7. TensorFlow – "a machine learning framework by Google, meant for creating Deep Learning models."
  8. Xamarin – "Xamarin offers an edge over the proprietary and hybrid development models as it allows developing full-fledged mobile apps using single language, i.e. C#. Moreover, Xamarin offers a class library and runtime environment, which is similar to rest of the development platforms (iPhone, Android, and Windows)."
  9. Spark – "an open-source, micro framework, meant for creating web applications in Kotlin and Java."
  10. Cordova – "(formly Phonegap) is a hybrid app development framework that uses HTML, CSS, and Javascript for building mobile apps."
  11. Hadoop – "an open-source framework by Apache that stores and distributes large data sets across several servers, operating parallely."
  12. Torch/PyTorch – "a machine learning library for Python."
ALL QUOTES ARE FROM THE BLOG “Top 12 Tools, Frameworks, and Libraries for Software Development in 2018” by Archna Oberoi.

 

From this blog, I have to say that I found both the TensorFlow and Xamarin frameworks very interesting, to the point that I am going to do my own research on them. It was also my first time hearing about the TensorFlow framework. That being said, software development has very good tools, frameworks, and libraries worth learning about, especially if you aspire to be a great software developer.

Thank you for your time. This has been YessyMer in the World Of Computer Science, until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Design Pattern Limit

The post Are Patterns like Mummies? by Michael Stal discusses the patterns from the book Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, otherwise known as the Gang of Four. This is the book containing the design patterns that we have been studying in class. He discusses how groundbreaking the book was for the software engineering community when it first came out. Michael also discusses how, after the Gang of Four book came out, nothing that followed had quite as much impact. Michael questions whether more patterns exist and whether they've been documented, or if the book contains most if not all of the worthwhile software design patterns.

The Gang of Four design patterns are basically ways to organize and structure your code that solve certain issues you would run into and offer certain advantages that lend themselves to whatever you are doing. The authors break down the design patterns into three categories, which are:

  • Creational
  • Structural
  • Behavioral

Creational patterns deal with the creation of objects, structural patterns are executed through inheritance and interfaces, and behavioral patterns concern themselves with the communication between objects. There are several design patterns within each of these categories.
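As a quick hedged illustration of the creational category (my own minimal sketch, not an example from Michael's post), here is the Singleton pattern, which controls object creation by guaranteeing a single shared instance:

```java
// Singleton, one of the Gang of Four creational patterns:
// the class ensures only one instance exists and provides a global access point to it.
public class Logger {

    private static Logger instance;

    private Logger() {
        // private constructor prevents outside code from creating instances
    }

    public static synchronized Logger getInstance() {
        if (instance == null) {
            instance = new Logger();
        }
        return instance;
    }

    public void log(String message) {
        System.out.println("[LOG] " + message);
    }
}
```

Anywhere in the program, Logger.getInstance().log("...") talks to the same single object, which is exactly the kind of object-creation concern this category addresses.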

It is interesting to think about whether the patterns covered in this book are the only design patterns (or at least the strongest) in software design. If so, that would mean every large-scale program is ultimately composed of many components using only the design patterns contained in this book. Is there even a need to define more software design patterns, or can any given program or implementation issue be solved with the patterns found within the Gang of Four book?

I agree with Michael's closing thoughts about design patterns; I think using them correctly and consistently leads to much more functional code and encourages best practices among software engineers. Whether or not there are more design patterns to uncover, or we have reached our limit, the patterns we are aware of are still important for making communication between software engineers easier.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Boundary Value Testing, Equivalence Class Testing, Decision Table-Based Testing

Greetings reader!

This blog will give a short summary of topics that are essential in the Computer Science field: Boundary Value Testing, Equivalence Class Testing, and Decision Table-Based Testing. I will be expressing my reaction to the content by sharing what I find useful and interesting. Without any further introduction, let's begin.

Equivalence Class Testing is a black box method that can be used at all levels of testing (unit, integration, system). In this technique, the tester divides the set of test conditions into partitions. Equivalence class testing is used to reduce a large number of test cases into clusters that are much easier to manage. It also gives clear guidance for determining test cases without compromising the efficiency of testing.
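As a hedged sketch of my own (the isValidAge method and its 18-to-65 range are hypothetical, not from the source), the input domain is divided into partitions and a single representative value from each partition stands in for the whole class:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class EquivalenceClassExampleTest {

    // Hypothetical method under test: ages 18 through 65 are valid
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 65;
    }

    @Test
    public void oneRepresentativeValuePerPartition() {
        assertFalse(isValidAge(10)); // partition: below the valid range
        assertTrue(isValidAge(40));  // partition: inside the valid range
        assertFalse(isValidAge(80)); // partition: above the valid range
    }
}
```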

Boundary value testing is testing at the extreme ends of the input values. The idea of boundary value testing is to select input values at the minimum, just above the minimum, the nominal value, just below the maximum, and the maximum. Equivalence class testing plays a huge role in boundary value testing because boundary testing comes after equivalence class testing. Boundary testing is used when it is almost impossible to test a large pool of test cases individually.
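Continuing the hypothetical 18-to-65 example from above, a boundary value selection focuses on the edges of the valid partition rather than one value per partition, using the minimum, just above the minimum, the nominal value, just below the maximum, and the maximum:

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class BoundaryValueExampleTest {

    // Same hypothetical method under test as in the previous sketch
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 65;
    }

    @Test
    public void valuesAtAndAroundTheBoundaries() {
        assertTrue(isValidAge(18)); // minimum
        assertTrue(isValidAge(19)); // just above the minimum
        assertTrue(isValidAge(40)); // nominal value
        assertTrue(isValidAge(64)); // just below the maximum
        assertTrue(isValidAge(65)); // maximum
    }
}
```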

Decision table testing is a technique that is used to test system behavior for many different input combinations. It is a strategic approach where the different input combinations and their outputs are captured in a table format. Decision table testing is also called cause-and-effect testing. This testing method is important when it is necessary to test different combinations. One advantage of decision table testing is that when the system behavior differs from input to input, rather than being the same across a range of inputs, equivalence testing and boundary value testing wouldn't help, but a decision table can be used.

Decision tables are simple enough that they can be easily interpreted and used for development as well as business. The table helps build effective combinations and ensures better coverage for testing. When a tester is aiming for 100% coverage and the number of input combinations is low, this technique can typically ensure that coverage.
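As a hedged sketch of how a decision table drives test cases (the loan-approval rule is hypothetical, not taken from the source), every combination of conditions becomes one rule, and each rule becomes one check:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class DecisionTableExampleTest {

    // Hypothetical rule under test: approve only with sufficient income AND good credit
    static boolean approveLoan(boolean hasIncome, boolean goodCredit) {
        return hasIncome && goodCredit;
    }

    /*
     * Decision table:
     *   Rule 1: income = T, credit = T -> approve
     *   Rule 2: income = T, credit = F -> reject
     *   Rule 3: income = F, credit = T -> reject
     *   Rule 4: income = F, credit = F -> reject
     */
    @Test
    public void oneCheckPerRuleInTheTable() {
        assertTrue(approveLoan(true, true));    // Rule 1
        assertFalse(approveLoan(true, false));  // Rule 2
        assertFalse(approveLoan(false, true));  // Rule 3
        assertFalse(approveLoan(false, false)); // Rule 4
    }
}
```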

 

From the blog CS@Worcester – dekeh4 by dekeh4 and used with permission of the author. All other rights reserved by the author.

The Iterator Design Pattern

Ever since taking Data Structures, where we had to implement different projects and include an iterator object for each of them, I was curious about why exactly they were so pervasive. As it turned out, the Gang of Four had defined the iterator design pattern, and a helpful article on oodesign.com taught me about the motivation/intent of this pattern as well as how to apply it.

The Gang of Four classify the iterator design pattern as a behavioral pattern, as it contains functions that handle accessing the objects in a collection. The motivation behind this pattern is the ability to control and navigate through a variety of different data structures: arrays, lists, trees, stacks, queues, etc. Also, if the type of the objects within these different kinds of collections is the same, an iterator could be used to handle accessing these objects in the same way. Finally, according to the Gang of Four, "a collection should provide a way to access its elements without exposing its internal structure". This is important for security as well as providing a single access point for users.

When implementing an iterator, there should be an interface containing useful functions such as hasNext(), next(), remove(), and other methods that might be necessary. The iterator object itself is implemented as a nested class inside a collection class, so the iterator has access to the local variables and functions of the collection. Structuring it in this way allows different types of iterator objects which handle different collection types to still implement the same interface, and users will be able to process elements in different collections in your program using the same functions.
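Here is a minimal hedged sketch along those lines (my own example, not code from the article): a tiny collection class whose iterator is a nested class implementing Java's Iterator interface, so it can read the collection's private fields directly.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// A tiny fixed-capacity collection with its iterator defined as a nested class
public class NumberBag implements Iterable<Integer> {

    private final int[] items;
    private int size;

    public NumberBag(int capacity) {
        items = new int[capacity];
    }

    public void add(int value) {
        items[size++] = value;
    }

    @Override
    public Iterator<Integer> iterator() {
        return new BagIterator();
    }

    // The nested class has access to the enclosing collection's private state
    private class BagIterator implements Iterator<Integer> {
        private int cursor;

        @Override
        public boolean hasNext() {
            return cursor < size;
        }

        @Override
        public Integer next() {
            if (!hasNext()) {
                throw new NoSuchElementException();
            }
            return items[cursor++];
        }
    }
}
```

Because NumberBag implements Iterable, callers can write a simple for-each loop over it without ever seeing the internal array, which is exactly the "single access point" idea described above.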

Reading this article definitely cleared up a lot of misunderstandings I had about the iterator, and definitely made me appreciate its usefulness in the context of cohesively tying together different types of data structures. This article in particular was helpful because it contained examples as well as UML diagrams, and further reading and applications that were slightly beyond my scope at this point. Still, I highly recommend it for anyone who is curious about what the iterator design pattern does.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Commonly Used Software Testing Strategies

In this article, The 7 Common Types of Software Testing, founder of the Simple Programmer blog John Sonmez uses his experience to describe common strategies of software testing, explain some of their benefits and liabilities, and show how they are applied in the world of software development. Sonmez stresses that there are many other methods and that none of these are used in isolation.

The first software testing method Sonmez describes is black box testing, where we are only concerned with the output of the program, and no actual code is given to the tester. The benefits of this method are that test cases simplify to inputs and outputs and that tests come from a user perspective. The downsides are that the underlying reasons behind errors cannot be known and that cases can be hard to design.

Naturally, the next method the author discusses is white box testing. The pros of having the source code to test are discovering hidden bugs, optimizing code, and faster problem solving. The cons are that the tester must have programming knowledge and access to the code, and that it only works on existing code, so missing functionality may be overlooked.

Sonmez next describes specification-based or acceptance testing. This methodology is one where preset specifications guide the development process. The main benefit of this strategy is that errors are discovered and fixed early in development. The main drawback is that its effectiveness relies on well-defined and complete specifications, which are time consuming to produce, to say the least.

The next strategy the author describes is automated testing and regression testing. This style of testing is designed to make sure changes in software do not cause problems, and it is used where manual tests are slow and costly. Sonmez explains how vital these testing strategies are in Agile frameworks, where software is constantly added to. Automated regression tests are integral to this methodology because the same test cases have to be applied frequently, so automation is more advantageous than manual testing.

While there are more strategies Sonmez describes in this article, and many more in general, I chose to focus on the ones we discussed in class, as the author provides good context and a broad overview of these commonly used testing methodologies that further cement my understanding of these concepts and when to use them.

From the blog CS@Worcester – Bit by Bit by rdentremont58 and used with permission of the author. All other rights reserved by the author.

Figuring out Continuous Integration

So for this week, I have decided to read "Introduction to Continuous Integration Testing" from the TestLodge blog. The reason I have chosen to read this is that it is crucial to know the concepts of Continuous Integration and how this practice gives development teams flexibility. It helps in understanding the workflow and why it lets developers build cohesive software even in a short amount of time.

This blog post goes over what Continuous Integration is, its advantages, and concepts to understand before embarking on Continuous Integration. Continuous Integration is an approach within software development in which the developer pushes code into a repository, such as Git, several times a day during the development phase. There are tools available that developers can use to set up Continuous Integration. They can be either licensed or open source, and by using them, simultaneous builds can be run on multiple platforms. Once a build is initiated, solutions can be built, unit and functional tests can be run, and the application being worked on can be deployed automatically to a server. There are benefits such as ensuring full integrity across code and build before deployment, along with simple setup and configuration. These tools consist mainly of a build server and build agents. To name a few parts of the automatic build process, there are the build server, build configuration, build source path, and build step. Depending on the requirements and the budget available to the development team, the choice between open source and licensed tools can go either way.

What I think is intriguing about this blog is that it goes out of its way to explain the automatic build parts. Usually, when it comes to Continuous Integration, there can be some difficulty getting the concepts down before setting up the automated testing. I understand that automation plays a critical part in the process, which is why it is appreciated when the concepts are laid out before explaining it to others. The content of this blog has changed my way of thinking about this practice.

Based on the content of this blog, I would say it is a great read for understanding the general ideas of Continuous Integration. I do not disagree with its content, since it gives an understanding of the goals of continuous testing. For future practice, I shall try to perform load tests for projects that have response-time requirements. That way, finding problem code or bugs can be much faster.

 

Link to the blog: https://blog.testlodge.com/continuous-integration-testing/

From the blog CS@Worcester – Onwards to becoming an expert developer by dtran365 and used with permission of the author. All other rights reserved by the author.