Category Archives: CS-443

Early Testing

https://www.softwaretestinghelp.com/early-testing/

 

This week I read the post from the site linked above. It talks about an effective quality assurance strategy for the early stages of a project. Uncaught bugs or fundamental flaws in the foundation of the software life cycle can result in setbacks for later stages. Ultimately, it is more efficient to test early and test often than to spend resources going back to fix an early issue. When an early issue does slip through, sometimes the only solution is to rewrite many already-built features of a software project just to change the foundation, which is the worst-case scenario in a time-crunched environment.

There is a balance to be struck between spending enough time testing early so as not to bake in fundamental mistakes, and spending so much time testing that not enough resources go toward producing a working prototype. Software projects are typically planned around time-based releases (quarterly, half-yearly, or yearly), depending on the size of the project and the goals in mind. Because time is of the essence, defects need to be prioritized based on severity, the time required to fix them, and their expected collateral impact on the rest of the code.

This cycle can be broken down into steps: the developer creates, the tester tests the creation and ranks the severity of the defects found, the developer responds by fixing the most important issues, and the tester evaluates the fixes. This cycle never really ends as long as the product is being developed. There are constantly new features and new bugs, and since it is impossible to discover every bug in every new feature, the product is never defect free. In this way, testing is especially important in the early stages of development, since early quality pays off over a long term that we expect to never end.

Now that I’ve summarized why and how early testing occurs, the final segment of the site reviews the primary targets for early testing. To begin, stakeholders determine which features will be the most effective by generating the most revenue, complying with standards, catching up to a competitor, or surpassing a competitor. These selected features are the focus of early testing, since they usually involve many lines of code with a high chance of intersecting with other parts of the system. QA and development leaders need to work together on these high-priority features in the early stages of development. This collaboration is especially important to the work efficiency of the entire project.

From the blog CS@Worcester – CS Mikes Way by CSmikesway and used with permission of the author. All other rights reserved by the author.

Software Testing Principles

I am writing in response to the blog post at https://www.guru99.com/software-testing-seven-principles.html titled “7 Software Testing Principles: Learn with Examples”.

This blog post highlights some useful general guidelines for software testing. In order to write effective test cases, it is important to follow some logical approach toward determining what to test for and how to test it, and this guide describes a set of principles well suited to capturing the logic involved in software testing.

The first principle is that exhaustive testing is not possible. I am not sure I believe this. In general it is useful to assume there exists no perfect test, but for simple enough applications, where the number of possible interactions is enumerable, I would think it would be possible to achieve exhaustive testing, much like an exhaustive proof, where every possible path is covered and verified. Maybe I am missing the implausible event that the test is correct but the computer running the tests is corrupted in such a way that certain tests are not run. This is related to another point made in this area, which is risk assessment.

The second and third principles make similar points. Always running the same tests will eventually fail to cover certain issues. If the same testing methods are always applied in exactly the same way, then eventually there will be some scenario the particular method is not suited for, and it will miss something. This leads into the later principles: the absence of a failure is not proof of success, and context is important. Developing tests suited to the particular application is necessary to strengthen the correlation between tests passing and the program functioning correctly, and just because every test passed does not mean the program is going to work perfectly.

This set of software testing principles can be summarized in a few basic points. Develop test cases that are well suited specifically for the application that is being tested, consider the risk of certain operations causing a failure, and do not assume that everything works perfectly just because every test case passed.

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Mocking & Mockito

For the final blog entry for this class, I decided to cover a topic that I’ve had personal experience with but wanted to review for further understanding: mocking, and specifically Mockito. I found an article on toptal.com by Ivan Pavlov about Mockito that is a guide for everyday use of the tool, exactly what I was looking for! The article has sections with in-depth analysis that could serve as more detailed documentation for several aspects of Mockito. It also includes insight into what mocks and spies are in the context of testing, unit testing specifically, as well as some examples of the Mockito tool in action.

For unit tests, we are testing the smallest unit of code possible for validity. The article states that real dependencies are not actually needed because we can create empty implementations of an interface for specific scenarios, known as stubs. A stub is one kind of test double, along with mocks and spies, which are the two test doubles relevant to Mockito. According to the article, “mocking for unit testing is when you create an object that implements the behavior of a real subsystem in controlled ways.” Mocks basically serve as a replacement for a dependency in a unit test. Mockito is used to create mocks, and you can tell Mockito what you want to happen when a method is called on a mocked object.
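As a minimal sketch of what that looks like (the PriceService interface and its values are hypothetical, not taken from the article), Mockito lets you create a mock and define its behavior with when/thenReturn:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Hypothetical dependency we do not want to implement for the test
interface PriceService {
    double priceOf(String item);
}

public class MockExample {
    public static void main(String[] args) {
        // Create a mock that stands in for the real PriceService
        PriceService prices = mock(PriceService.class);

        // Tell Mockito what to return when the method is called
        when(prices.priceOf("coffee")).thenReturn(2.50);

        // The unit under test can now use the mock as if it were the real dependency
        System.out.println(prices.priceOf("coffee")); // prints 2.5
    }
}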

A spy is another test double used by Mockito. A spy, unlike a mock, requires a non-mocked instance to “spy on”. The spy “delegates all method calls to the real object and records what method was called and with what parameters”. It works as you would expect a spy to: it spies on the instance. In general, mocks are more useful than spies, as they require no real implementation of the dependency the way a spy does. Needing a spy is also an indicator that a class is doing too much, “thus violating the single responsibility principle” of clean code.
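A rough illustration of the difference, assuming a plain ArrayList as the real instance being spied on:

import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;

import java.util.ArrayList;
import java.util.List;

public class SpyExample {
    public static void main(String[] args) {
        // A spy wraps a real, non-mocked instance
        List<String> spiedList = spy(new ArrayList<>());

        // Calls are delegated to the real object...
        spiedList.add("hello");
        System.out.println(spiedList.size()); // prints 1, the real list changed

        // ...while Mockito records which methods were called and with what arguments
        verify(spiedList).add("hello");
    }
}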

Mocking with Mockito can be used when you want to test that an object returns a specific value when a method is called on it and certain parameters are met. A mock does not require the implementation of the methods to be written yet, since you can assign return values for a method call through Mockito. This is why Mockito is such a popular testing tool and mocking is such a popular testing strategy, and why I’ll continue to use it whenever it is relevant.

Link to original article: https://www.toptal.com/java/a-guide-to-everyday-mockito

 

From the blog CS@Worcester – The Road to Software Engineering by Stephen Burke and used with permission of the author. All other rights reserved by the author.

Boundary Value Testing

For this week’s blog post, I wanted to cover a topic that we covered earlier this semester: boundary value testing. I found an article on guru99, seemingly my go-to site for articles for this blog. The article has several sections, including a brief definition of what boundary testing is, a definition of equivalence class partitioning and the relationship between the two, a couple of examples, and an analysis section that explains why we should use equivalence and boundary testing.

The article describes boundary testing as “the process of testing between extreme ends of boundaries between partitions of the input values.” These extreme values include the lower and upper bounds, or the minimum and maximum acceptable values for a given variable. For boundary testing you need more than just the minimum and maximum values, though. For the actual testing, you also need a value just above the minimum, a nominal value that lies somewhere in the middle of the range, and a value just below the maximum.

Equivalence class partitioning is explained by the article as “a black box technique (code is not visible to tester) which can be applied to all levels of testing.” It also states that with equivalence class partitioning “you divide the set of test condition into a partition that can be considered the same”. This may sound a bit confusing, but it is actually pretty simple. The article uses the example of a ticket system where values 1-10 are the only acceptable values and values 11-99 are invalid. You could break this set of numbers into two equivalence class partitions and then, returning to our boundary testing concept, test values like 0, 1, 5, 9, and 10.
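A small sketch of what those boundary tests might look like in JUnit; the isValidTicketCount method here is a hypothetical implementation of the article’s 1-10 rule, not code from the article:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

public class TicketBoundaryTest {

    // Hypothetical system under test: only 1 to 10 tickets may be ordered
    static boolean isValidTicketCount(int tickets) {
        return tickets >= 1 && tickets <= 10;
    }

    @Test
    public void valuesAtAndInsideTheBoundaries() {
        assertTrue(isValidTicketCount(1));   // minimum
        assertTrue(isValidTicketCount(2));   // just above minimum
        assertTrue(isValidTicketCount(5));   // nominal value
        assertTrue(isValidTicketCount(9));   // just below maximum
        assertTrue(isValidTicketCount(10));  // maximum
    }

    @Test
    public void valuesOutsideTheBoundaries() {
        assertFalse(isValidTicketCount(0));  // just below minimum
        assertFalse(isValidTicketCount(11)); // just above maximum
    }
}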

The best part about boundary testing is that, because it is a form of black box testing, you do not need to see how the actual code works. You only need to know the specification for which values are valid and invalid. Another reason to use boundary testing is that it eliminates the need to test every value and thus minimizes the number of tests needed. I have used, and will continue to use, this testing strategy throughout my software engineering career.

Link to original article: https://www.guru99.com/equivalence-partitioning-boundary-value-analysis.html

 

From the blog CS@Worcester – The Road to Software Engineering by Stephen Burke and used with permission of the author. All other rights reserved by the author.

Test Plan

In this blog, we are talking about the test plan. A test plan is one of the important testing deliverables offered during the release of a product. A software product, once developed and tested completely, is prepared for its release, during which various documents, reports, screenshots, etc. are also delivered to the client and other stakeholders of the project. Known as deliverables, these documents and reports are an integral part of the software development life cycle (SDLC), discussed in my last blog, as they provide the necessary information about the product to the concerned individuals.

A test plan encompasses all the activities performed during the testing process. A test plan document offers all the necessary and relevant information to the developers, business managers, as well as the customers. There are different types of test plans:

  • Level-specific test plans: These include the unit test plan, integration test plan, and system test plan.
  • Type-specific test plans: These include plans for major parameters, such as a performance testing plan.
  • Master test plan: This is one single big plan combining all the other plans to be carried out on the software product.

Test plan template: the testing team or test management team ensures that it follows a set template, which allows them to log all the necessary details about the testing process in the document. There is a fixed set of parameters, such as test items, testing approach, pass/fail criteria, and approvals, all defined by the IEEE 829 standard.

To create a test plan accurately, it is important for the team to follow a few guidelines that help them record all the necessary information in the document with precision. Create a concise test plan with all the necessary information; the information provided should not be redundant or superfluous. While preparing a test plan it is necessary for the team members to be specific and precise. Create points, lists, and tables wherever necessary to increase the readability of the document. Review the document constantly before it is released with the product. Update the test plan with all recent changes and modifications.
We know that to make sure testing is of good quality, we need to know what we are looking for when testing. A well-written test plan helps with this by ensuring that all aspects of the software are covered and tested. Combined with a checklist, this would further ensure the accuracy of our tests.

From the blog CS@Worcester – Nhat's Blog by Nhat Truong Le and used with permission of the author. All other rights reserved by the author.

White Box Testing Blog 2

Welcome back to Dcanton’s blog. This week I’m going to be talking about a blog posted on Apiumhub, a tech hub that specializes in software architecture. The blog talks about the benefits of unit testing. The author starts by giving the goal of unit testing, which is to segregate each part of the program and test that the individual parts are working correctly. He then continues to list why it is beneficial. One of the benefits is that it improves the design of the program: it makes you think about whether your code is well defined and whether it reflects what the outcome is supposed to be. Another benefit of unit testing is that it reduces cost in the long term; since bugs are found earlier in the process, they are much easier and cheaper to fix than bugs found later.
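As a minimal sketch of the idea (the add method here is hypothetical, only meant to show a single unit being checked in isolation):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class CalculatorTest {

    // The smallest unit we want to check in isolation
    static int add(int a, int b) {
        return a + b;
    }

    @Test
    public void addReturnsTheSumOfItsArguments() {
        // If this fails, the problem is localized to add(), not the whole program
        assertEquals(5, add(2, 3));
    }
}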

I chose this blog to discuss this week because it has a lot of information on how effective unit testing is. Since there are other kinds of testing, such as black box testing and grey box testing, this gave me a good understanding of unit testing and the benefits that come from using this method. One thing I learned is that white box testing is usually done by developers, while black box testing is done by testers. Thank you; the Dcanton blog will be back next week with another post.

 

https://apiumhub.com/tech-blog-barcelona/top-benefits-of-unit-testing/

From the blog CS@Worcester – Dcanton Blog by dcantonblog and used with permission of the author. All other rights reserved by the author.

Technical Interview Tips

Summary

In the article 5 things you need to know in a programming interview, Zhia Hwa Chong gives some useful tips for those starting their programming careers or those who are preparing for an interview. A quick summary of these tips is as follows:

  • “Always Think Ahead” – Referring to making sure when solving a problem to always look ahead and think about potential improvements. For example, he specifically says to think about edge cases, scaling issues, problem areas, and other topic-specific issues (e.g. handling collisions in a hash table).
  • “There’s more than one answer” – Each interview problem always has more than one solution, however, some of these solutions may not be optimal. It’s important to be able to write a working solution, but you should also look to improve upon it.
  • “OOP is not dead” – Make sure to think object-oriented (e.g. don’t cram everything into one method, reuse code instead of repeating yourself, etc.). Following these practices creates cleaner, simpler code that is easier to understand.
  • “Craft your résumé” – Make sure to not skip preparing a great resume.
  • “Communicate early and communicate often” – Talk through the problem with your interviewer so they can understand your thought process and push you in the right direction.
  • “Use abstraction” – Using abstraction to hide complicated implementation details creates clean and easy to understand code. Afterwards if requested, you can implement any abstracted details.

Reaction to Content

I chose this topic because it’s something that is currently very relevant to me, as I’ll be graduating next May and hope to have something lined up before then. I had already seen many variations of these tips before, but I think reading this is useful for reinforcing them. While not necessarily applicable to all interviews, most popular tech companies follow the whiteboard interview process that this blog is giving tips for. For anyone looking to work for any of those companies, following these tips would definitely be valuable. However, they only cover things you should do during your interview, not topics that you would need to prepare for long before it, such as data structures, algorithms, and general problem-solving skills built by solving similar problems.

 

Source: https://medium.freecodecamp.org/the-most-important-things-you-need-to-know-for-a-programming-interview-3429ac2454b

From the blog CS@Worcester – Andy Pham by apham1 and used with permission of the author. All other rights reserved by the author.

Software Development Life Cycle (SDLC)

The Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop, and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates. SDLC refers to the whole process of software development; it defines and describes each and every phase that contributes to the development of the software.

What is the need for SDLC? Software development is a tedious and complex job. As such, standard guidelines and an established framework work well to carry out the development process in an effectively organized manner, repeatably for each unique software product. The SDLC segregates the development life cycle into separate phases for their independent and smooth implementation, and helps minimize failures in a software project. There are six phases of the Software Development Life Cycle, carried out in sequence:

Requirement Gathering and Analysis: This phase covers the gathering of business requirements, followed by analysis to study and validate the feasibility of those requirements for implementation in the system. The client and project manager are the key persons in this phase.

Design: A blueprint, or software design, is prepared based on the inputs provided by the requirement gathering and analysis phase. This blueprint helps in determining the requirements needed in the development of the software, such as hardware and system requirements. The outcome of this phase is the software design.

Implementation: The software design is implemented in this phase through coding and programming. This phase generally involves modules and code. The output of this phase is a developed, working software product that acts as the input for the next stage.

Testing: The developed software is handed over to the testing team to evaluate and validate the functioning of the software product, in accordance with its pre-defined requirements and the end users’ expectations.

Deployment: After successfully getting through the testing phase, the software product is ready to be deployed on the customer’s side for use.

Maintenance: The maintenance phase is all about resolving defects or issues occurring on the customer’s side while using the software product. It ensures the fixing of all issues after the deployment of the software product at the customer’s site.

These are the main steps of the software development life cycle. It is a standard practice that empowers organizations to follow a systematic, well-defined approach for carrying out development in an effective way, so as to achieve the desired software product of the highest quality. I find this blog helpful to follow to make sure a product is going in the right direction. These steps need to be applied to future products, as I have seen so many products ship with issues and have to be recalled.

From the blog CS@Worcester – Nhat's Blog by Nhat Truong Le and used with permission of the author. All other rights reserved by the author.

Data Flow Testing

What is data flow testing? Related to the path testing we looked at in class, data flow testing is a testing strategy that focuses on the data variables and their values used in the programming logic of the software product, making use of the control flow graph. Data flow testing is a form of white box testing and structural testing, which generally checks the points where data values are received by variables and the points where those values are used. It is used to fill the gap between path testing and branch testing.

Data flow testing keeps a check on coding errors and mistakes that may result in improper implementation and usage of data variables or data values in the programming code. It checks whether all the data variables present in the code have been initialized, whether variables that are put into use were initialized beforehand, and whether the initialized data variables have been used at least once in the programming code. The data used in the programming code goes through a life cycle of three phases:

  • Definition: Data variables are defined, created, and initialized, along with the allocation of memory to that particular data object.
  • Usage: Declared data variables may be used in the programming code in two forms: in a computation (a c-use) or in a predicate (a p-use); see the sketch after this list.
  • Deletion or Kill: The memory allocated to the variable is freed and put to some other use.
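A small illustration of these phases in code (the method itself is hypothetical; the comments mark where each variable is defined, used in a computation, used in a predicate, and finally released):

public class DataFlowExample {

    static int applyDiscount(int price) {
        int discount = 10;               // definition: discount is created and initialized
        if (price > 100) {               // p-use: price is used in a predicate (branch condition)
            price = price - discount;    // c-use: price and discount are used in a computation
        }
        return price;                    // c-use: price is used to produce the result
    }                                    // kill: local variables go out of scope and their storage is released
}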

There are two types of data flow testing. In static data flow testing, study and analysis of the code is done without actually executing it, catching issues such as use of the wrong header or library files, or syntax errors. Dynamic data flow testing involves executing the code to monitor and observe intermediate results; it basically looks after the coverage of data flow properties.

The coverage of data flow in terms of “sub-paths” and “complete paths” may be categorized under the following types: all-definitions coverage, all-definition-c-use coverage, all-definition-p-use coverage, all-uses coverage, and all-definition-use coverage.

This blog goes over basically everything we looked at in class, although there is not a lot of new information. There is more information on this website, especially about testing.

From the blog CS@Worcester – Nhat's Blog by Nhat Truong Le and used with permission of the author. All other rights reserved by the author.

Test Doubles — Fakes, Mocks and Stubs.

We are looking at testing with fakes, mocks, and stubs. A test double comes from automated testing, where it is common to use objects that look and behave like their production equivalents but are simplified. This reduces complexity, allows us to verify code independently from the rest of the system, and is sometimes even necessary to execute self-validating tests at all. Three implementation variations of test doubles are:

Fakes are objects that have working implementations, but not the same as the production ones. Usually they take some shortcut and are simplified versions of the production code. A fake implementation can come in handy for prototyping and spikes: we can quickly implement and run our system with an in-memory store, deferring decisions about database design. Another example is a fake payment system that will always return successful payments.
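A sketch of that fake payment system; the PaymentGateway interface here is hypothetical, and the fake always reports success so tests never contact a real payment provider:

// Hypothetical production interface
interface PaymentGateway {
    boolean charge(String account, double amount);
}

// Fake: a working but simplified implementation used only in tests
class AlwaysSuccessfulPaymentGateway implements PaymentGateway {
    @Override
    public boolean charge(String account, double amount) {
        // No network call, no real money: every payment "succeeds"
        return true;
    }
}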

Command Query Separation: Methods that return some result and do not change the state of the system are called queries. A query returns a value and is free of side effects. There is also another category of methods, called commands. A command performs some action that changes the system state, but we don’t expect any return value from it. For testing query-type methods we should prefer the use of stubs, as we can verify the method’s return value.
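A quick sketch of the distinction, using a hypothetical bank-account class:

class BankAccount {
    private double balance;

    // Query: returns a value and has no side effects
    double getBalance() {
        return balance;
    }

    // Command: changes the state of the system but returns nothing
    void deposit(double amount) {
        balance += amount;
    }
}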

A stub is an object that holds predefined data and uses it to answer calls during tests. It is used when we cannot or don’t want to involve objects that would answer with real data or have undesirable side effects. For example, consider an object that needs to grab some data from the database in order to respond to a method call; instead of the real object, we introduce a stub and define what data should be returned.
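A hand-written stub for that database example might look like this (the UserRepository interface is hypothetical); it simply answers calls with predefined data instead of querying a real database:

// Hypothetical interface normally backed by a database
interface UserRepository {
    String findNameById(int id);
}

// Stub: holds predefined data and uses it to answer calls during tests
class StubUserRepository implements UserRepository {
    @Override
    public String findNameById(int id) {
        return "Test User"; // canned answer, no database involved
    }
}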

Mocks are objects that register the calls they receive. In the test assertion we can verify on mocks that all expected actions were performed. We use mocks when we don’t want to invoke production code or when there is no easy way to verify that the intended code was executed, because there is no return value and no easy way to check for a system state change.
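A minimal sketch of that verification step with Mockito, assuming a hypothetical EmailSender dependency:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

// Hypothetical dependency whose real implementation we do not want to invoke
interface EmailSender {
    void send(String to, String message);
}

public class MockVerifyExample {
    public static void main(String[] args) {
        EmailSender sender = mock(EmailSender.class);

        // The code under test would call the mock here
        sender.send("user@example.com", "Welcome!");

        // Assert that the expected action was performed on the mock
        verify(sender).send("user@example.com", "Welcome!");
    }
}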

There are more test doubles, such as the dummy object and the test spy. I thought this blog explained each kind of test double clearly, with simple examples.

From the blog CS@Worcester – Nhat's Blog by Nhat Truong Le and used with permission of the author. All other rights reserved by the author.