Category Archives: CS@Worcester

Test Doubles: Enhancing Testing Efficiency

When developing robust software systems, ensuring reliable and efficient testing is paramount. Yet, testing can become challenging when the System Under Test (SUT) depends on components that are unavailable, slow, or impractical to use in the testing environment. Enter Test Doubles—a practical solution to streamline testing and simulate dependent components.

What are Test Doubles?

In essence, Test Doubles are placeholders or “stand-ins” that replace real components (known as Depended-On Components, or DOCs) during tests. Much like a stunt double in a movie scene, Test Doubles emulate the behavior of the real components, enabling the SUT to function seamlessly while providing better control and visibility during testing.

The implementation of Test Doubles is tailored to the specific needs of a test. Rather than perfectly mimicking the DOC, they replicate its interface and critical functionalities required by the test. By doing so, Test Doubles make “impossible” tests feasible and expedite testing cycles.

Key Variations of Test Doubles

Test Doubles come in several forms, each designed to address distinct testing challenges (a short sketch of the Stub variation follows the list):

  1. Test Stub: Facilitates control of the SUT’s indirect inputs, enabling tests to explore paths that might not otherwise occur.
  2. Test Spy: Combines Stub functionality with the ability to record and verify outputs from the SUT for later evaluation.
  3. Mock Object: Focuses on output verification by setting expectations for the SUT’s interactions and validating them during the test.
  4. Fake Object: Offers simplified functionality compared to the real DOC, often used when the DOC is unavailable or unsuitable for the test environment.
  5. Dummy Object: Provides placeholder objects when the test or SUT does not require the DOC’s functionality.
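As a rough illustration of the Stub variation, the sketch below replaces a hypothetical ExchangeRateService DOC with a stub that returns a canned value. The class and method names are invented for this example, and JUnit 5 is assumed; it is a sketch, not a prescribed implementation.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Hypothetical Depended-On Component (DOC): in production this might call a slow external service.
    interface ExchangeRateService {
        double rateFor(String currency);
    }

    // System Under Test (SUT): converts a USD amount using the DOC.
    class PriceConverter {
        private final ExchangeRateService rates;

        PriceConverter(ExchangeRateService rates) {
            this.rates = rates;
        }

        double convert(double amountUsd, String currency) {
            return amountUsd * rates.rateFor(currency);
        }
    }

    class PriceConverterTest {
        @Test
        void convertsUsingStubbedRate() {
            // Test Stub: a canned answer gives the test full control of the SUT's indirect input,
            // so no real exchange-rate service is needed while the test runs.
            ExchangeRateService stub = currency -> 2.0;

            PriceConverter converter = new PriceConverter(stub);

            assertEquals(20.0, converter.convert(10.0, "EUR"), 0.0001);
        }
    }

A Mock Object version of the same test would additionally set an expectation that rateFor is called (for example, exactly once with "EUR") and fail the test if that interaction does not occur.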

When to Use Test Doubles

Test Doubles are particularly valuable when:

  • Testing requirements exceed the capabilities of the real DOC.
  • Test execution is hindered by slow or inaccessible components.
  • Greater control over the test environment is necessary to assess specific scenarios.

That said, it’s crucial to balance the use of Test Doubles. Excessive reliance on them may lead to “Fragile Tests” that lack robustness and diverge from production environments. Therefore, teams should complement Test Doubles with at least one test using real DOCs to ensure alignment with production configurations.

Conclusion

Test Doubles are indispensable tools for efficient and effective software testing. By offering flexibility and enhancing control, they empower developers to navigate complex testing scenarios with ease. However, judicious use is key: striking the right balance ensures tests remain meaningful and closely aligned with real-world conditions.

This information comes from this article:
Test Double at XUnitPatterns.com

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.

Learning Boundary Value Analysis in Software Testing

Software testing is one of the most significant ways of ensuring that an application is reliable and efficient before deployment. Boundary Value Analysis (BVA) is a powerful functional testing technique that focuses on testing the boundary cases of a system, finding potential defects that tend to show themselves at the boundaries of input partitions.

What is Boundary Value Analysis?

Boundary Value Analysis is a black-box testing method that tests the boundary values of valid and invalid partitions. Instead of testing every possible value, testers focus on the minimum, maximum, and edge-case values, as these are the most error-prone: defects tend to occur at the extremes of an input range rather than somewhere in the middle.

For example, if a system accepts values between 18 and 56, instead of testing every value in that range, testers would focus on the following values:

  • Valid boundary values: 18, 19, 37, 55, 56
  • Invalid boundary values: 17 (below minimum) and 57 (above maximum)

By running these few key test cases, testers can detect boundary-related faults without redundantly testing the values in between.
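As a rough sketch of how these cases might look in JUnit 5 (the range check here is invented purely for illustration):

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class AgeRangeBoundaryTest {

        // Hypothetical check for the 18–56 range described above.
        static boolean isAccepted(int value) {
            return value >= 18 && value <= 56;
        }

        @Test
        void acceptsValuesOnAndJustInsideTheBoundaries() {
            // Valid boundary values: minimum, minimum + 1, a nominal value, maximum - 1, maximum.
            for (int value : new int[] {18, 19, 37, 55, 56}) {
                assertTrue(isAccepted(value), "expected " + value + " to be accepted");
            }
        }

        @Test
        void rejectsValuesJustOutsideTheBoundaries() {
            // Invalid boundary values: one below the minimum and one above the maximum.
            assertFalse(isAccepted(17));
            assertFalse(isAccepted(57));
        }
    }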

Implementing BVA: A Real-World Example

To illustrate BVA with an example, consider a system that processes dates under the following constraints:

  • Day: 1 to 31
  • Month: 1 to 12
  • Year: 1900 to 2000

Under the Single Fault Assumption, where one variable is tested at its boundaries while the others are held at nominal values, test cases like the following can be written:

  • Boundary value checks for years (e.g., 1900, 1960, 2000)
  • Boundary value checks for days (e.g., 1, 31, and invalid cases like 32)
  • Boundary value checks for months (e.g., 1, 12)

By limiting test cases to boundary values, we get strong test coverage with minimal test effort.
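A minimal sketch of one such single-fault test, again assuming JUnit 5 and an invented isValidDate helper that only checks the ranges above:

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class DateBoundaryTest {

        // Hypothetical range check matching the constraints above (it deliberately ignores
        // month lengths and leap years, since the point here is only the boundary values).
        static boolean isValidDate(int day, int month, int year) {
            return day >= 1 && day <= 31
                    && month >= 1 && month <= 12
                    && year >= 1900 && year <= 2000;
        }

        @Test
        void variesYearWhileDayAndMonthStayNominal() {
            // Single Fault Assumption: only the year moves to its boundaries;
            // day and month are held at nominal values (15 and 6).
            assertTrue(isValidDate(15, 6, 1900));
            assertTrue(isValidDate(15, 6, 1960));
            assertTrue(isValidDate(15, 6, 2000));
            assertFalse(isValidDate(15, 6, 1899));
            assertFalse(isValidDate(15, 6, 2001));
        }
    }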

Equivalence Partitioning and BVA together

Another helpful approach is to combine BVA with Equivalence Partitioning (EP). EP divides input data into equivalence classes, where every value in a class is expected to behave the same way. Used together, these techniques let testers reduce the number of test cases while still maintaining good coverage.

For instance, if a system only accepts passwords that are 6 to 10 characters long, the test cases can be:

  • 0-5 characters: Not accepted
  • 6-10 characters: Accepted
  • 11-14 characters: Not accepted

This combination makes testing more efficient, especially when more than one variable is involved.
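A sketch of how the partitions and their boundaries might be exercised together, again with an invented length check and JUnit 5:

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class PasswordLengthPartitionTest {

        // Hypothetical rule: passwords of 6 to 10 characters are accepted.
        static boolean isAcceptedLength(String password) {
            return password.length() >= 6 && password.length() <= 10;
        }

        @Test
        void coversEachPartitionAndItsBoundaries() {
            assertFalse(isAcceptedLength("abc"));         // 3 chars: inside the 0-5 (invalid) partition
            assertFalse(isAcceptedLength("abcde"));       // 5 chars: just below the valid range
            assertTrue(isAcceptedLength("abcdef"));       // 6 chars: lower boundary of the valid range
            assertTrue(isAcceptedLength("abcdefgh"));     // 8 chars: inside the valid partition
            assertTrue(isAcceptedLength("abcdefghij"));   // 10 chars: upper boundary of the valid range
            assertFalse(isAcceptedLength("abcdefghijk")); // 11 chars: just above the valid range
        }
    }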

Limitations of BVA

Although BVA is strong, it does face some limitations:

  • It only works well when the system has clearly defined numeric input ranges.
  • It does not account for functional dependencies between variables.
  • It may be less effective for free-form languages like COBOL, which handle input more flexibly.

Conclusion

Boundary Value Analysis is an important testing method that helps testers identify the most probable fault sites in a system. Combined with Equivalence Partitioning, it maximizes test effectiveness by eliminating redundant test cases while giving up very little coverage. Although BVA isn’t a “catch-all,” it remains an essential technique for delivering quality, dependable software.

Personal Reflection

Learning Boundary Value Analysis has helped me understand more about software testing and how it makes the software reliable. It has shown me that by focusing on boundary values, defects can be detected with higher efficiency without generating surplus test cases. It is a very practical approach to apply in real-world scenarios, such as form validation and number input testing, where boundary-related errors are likely to be found. In the future, I will include BVA in my testing approach to offer more test coverage in software projects that I undertake.

Citation

GeeksforGeeks. (n.d.). Software Testing – Boundary Value Analysis. Retrieved from https://www.geeksforgeeks.org/software-testing-boundary-value-analysis/

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

SMURF testing

The blog I chose to write about this week details what different types of tests do and how they should be prioritized to test efficiently. The post starts with the writer describing their own experience testing: early on, they tested their program only through the user interface, which quickly showed the downsides of this method. It was slow, couldn’t be run on all devices, and needed manual checks. Testing like this is called end-to-end testing, and it is slow, expensive, and not always the most revealing about potential problems. Unit tests are much preferred instead, as they test the most basic functionality and are quick. A middle ground between these two is integration testing, which can cover most of the program without having to go through a limited user interface. These three types of testing form a pyramid: the majority of your cases should be unit tests, with integration tests the next most common and end-to-end tests the fewest.

The distribution in the pyramid is based on five principles that make up the SMURF mnemonic. The first is speed, which has you prioritize many quick tests that catch problems sooner. Next is maintainability, which values tests that scale well and are not tied to many dependencies. Utilization is about keeping in mind the cost of running your tests repeatedly and minimizing the resources they use. Reliability says that a test should only report an error when something is actually wrong, so failures can be treated as indicators of real issues that need to be addressed. Lastly, fidelity is how closely a test recreates the user’s experience from start to finish, or end to end. Each type of test in the pyramid trades off these five factors in different amounts, each showing where it is most useful.

I chose this blog because I wanted to learn more about writing test cases for a complete program in a work environment. I thought the blog did well in that respect and helped insofar as it provides an outline to start from when writing test code. One addition that could have improved the post would be some examples, but those are easily accessible elsewhere. This will be a helpful resource and reference in the future when I am put in the position of writing tests from scratch, as well as something to keep in mind when comparing against prewritten tests.

Test Pyramid | Google Testing Blog: SMURF: Beyond the Test Pyramid

From the blog CS@Worcester – Computer Science Blog by dzona1 and used with permission of the author. All other rights reserved by the author.

Software Testing

For a lot of the projects I’ve worked on, we didn’t really do testing in the traditional way of actually writing tests. For each method or function we created, we would just add debugging statements in the code itself to figure out which parts of the method were working correctly, which fail-safes were activating, and which errors were being thrown. There didn’t seem to be much reason to write tests using JUnit or something similar when all code went to a dev environment first and then to production.

Just recently I started using JUnit a little, to find out how useful it could be when practicing certain coding problems. It definitely has some perks: without compiling and building the entire project each time, I could just run a test to check whether a certain piece of code functions properly. But for me, that isn’t really enough to warrant using it all that much. If you have access to a dev environment (which you should), then even though it isn’t the “proper” way to do things, I believe that writing debug statements in the code instead of tests is more efficient.
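As a rough illustration of the kind of quick check described above (the class and method here are invented for the example, and JUnit 5 is assumed), a single test can exercise one piece of code on its own without rebuilding the whole project:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class ClampTest {

        // Hypothetical method under test.
        static int clamp(int value, int min, int max) {
            return Math.max(min, Math.min(max, value));
        }

        @Test
        void clampKeepsValuesInsideTheRange() {
            // Instead of sprinkling print statements through the code and rebuilding,
            // this one test can be rerun by itself whenever clamp() changes.
            assertEquals(5, clamp(5, 0, 10));
            assertEquals(0, clamp(-3, 0, 10));
            assertEquals(10, clamp(42, 0, 10));
        }
    }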

If you look at it from a different perspective though, writing tests can also be worthwhile if you’re more into the automation side of development. Since tests are run automatically when building your projects, you wouldn’t need to keep adding more debug statements manually. This could increase efficiency beyond what I previously mentioned, but you may not get as much information as you need from the test. If one of your methods isn’t working properly but the test keeps returning an exception or error and you don’t understand why it’s happening, the previous approach of writing debug statements will be more efficient and, in turn, will help you understand where you went wrong.

All in all, I personally believe that both unit testing (and testing in general) and writing debug statements have their own places when writing software. Obviously I don’t know too much about software testing yet, but as I learn more there is definitely potential for my opinion to change. As of now, writing debug statements, whether there is an issue or not, seems to help me a lot in understanding each part of what I’m writing at a deeper level, and I look forward to learning more about testing to see whether it can also help me reach that point.

From the blog CS@Worcester – CS Blog by Mike and used with permission of the author. All other rights reserved by the author.

Agile Project Management

There is a video on YouTube from a channel called The Digital Project Manager about Project Management called, “Agile project management methodology explained (with burgers?!)”. This video talks about Project Management, a topic from our course, but more specifically, agile project management. I found this video particularly educational and informative because instead of explaining agile project management in typical terms, the methodology is described using burgers. The video explains that agile project management is all about flexibility and adaptability. The agile project management style runs through multiple cycles of project production and release, taking feedback and input from the customer and users after each release cycle. This is a very effective and efficient method of project management because it allows a smooth flow of progress while making a more accurate product based on the customers’ desires.

The way that the video compared this method of project management to burgers is by suggesting that there could be a goal of making the world’s best burger and in order to do that the chef would have to make a burger, get feedback from those who try it, then make changes to the burger in order to improve it. Agile project management implies that after every round of changes, the product should (hypothetically) get better and better. This made me wonder, in a professional setting, how each of these phases of the project would be tested and improved. Further, at what point could the project/program be determined as complete and who has the authority to make that decision?

Agile project management is also inherently iterative. This means that multiple iterations or cycles can be completed throughout the process of completing the project. This method is particularly successful with dynamic projects and environments such as game development and software development because the requirements for these types of projects tend to be constantly changing and developing. Agile project management is also a more flexible form of project management. Because multiple cycles of development will occur, not all of the planning for the project has to be done prior to the developing stage of the project. This allows for more flexibility and a more robust and high quality product to come out of the process.

During agile project management, each sprint can be broken down into five parts: Define, Craft, Develop, Deploy, Evaluate. The evaluate phase is what allows for this method to be iterable and flexible to multiple rounds of work.

From the blog CS@Worcester – The Struggle of Being a Female Student in CS by Noam Horn and used with permission of the author. All other rights reserved by the author.

Sprint 1 Retrospective Blog Post

As a Computer Science major in my final year of college, I thought I had learned everything I needed to know. However, my Software Development Capstone class has proven me wrong. I’ve realized there’s still so much to learn, especially when it comes to working in a team and building a project from scratch. My team and I were tasked with creating a system to improve the University’s food pantry inventory management. This project has been both challenging and rewarding, and Sprint 1 was a great starting point for our journey.

GitLab Evidence

Throughout Sprint 1, my partner and I focused on creating the frontend prototype for the barcode scanner. Below are the links to our GitLab activities, along with a brief description of each:

  • Initial Front Page:
    GitLab Issue #7
    Description: Created the initial front page using HTML and CSS.
  • Scanner Page:
    GitLab Issue #9
    Description: Created the scanner page using HTML, CSS, and JavaScript to implement the barcode scanner.
  • Barcode Scanning Functionality:
    GitLab Issue #10
    Description: Implemented the html5-qrcode library to enable barcode scanning functionality.
    Library Link: html5-qrcode GitHub
  • Displaying UPC Results:
    GitLab Issue #11
    Description: Modified the scanner page to display the results of the UPC.
  • Aesthetic Improvements:
    GitLab Issue #12
    Description: Improved the aesthetics of the front page (index.html) and scanner page (scanner.html) by updating the color palette and fonts.

Despite being strangers at the beginning of the semester, my team has worked together seamlessly. We communicate effectively, both in and out of class, and support each other not only in this project but also in our other classes. Everyone feels comfortable asking questions and sharing ideas without fear of judgment, which has created a chill and productive atmosphere. Additionally, our scrum master assigned us to sub-teams, and each member knew exactly what they needed to do. This clarity helped us stay organized and focused.

While we used GitLab to track issues, we could have utilized it more effectively. For example, we sometimes forgot to update issues or document progress in detail. Also, since my partner and I were new to frontend development, we spent a lot of time learning the basics of HTML, CSS, and JavaScript, which slowed down our progress initially. In Sprint 2, we plan to use GitLab more effectively to track issues, as we discussed during sprint planning. My team and I will make a conscious effort to update GitLab issues regularly and document our progress more thoroughly. This will help us track our work more effectively, avoid confusion, and ensure that everyone is aligned on the tasks and their status.

I want to dedicate more time outside of class to learn frontend technologies. This will help me contribute more effectively to the project and build a stronger foundation for my future career. In Apprenticeship Patterns by Dave Hoover and Adewale Oshineye, the pattern “The Long Road” emphasizes the importance of committing to a lifelong journey of learning and mastery. It encourages aspiring software craftsmen to focus on long-term growth rather than chasing quick success, promotions, or material rewards. The pattern reminds us that mastery takes time and that we should embrace the journey, even if it means being seen as unconventional. I selected this pattern because it resonated deeply with my experience during Sprint 1. As someone who is new to frontend development, I felt overwhelmed at times by how much I still needed to learn. However, this pattern reminded me that mastery is a gradual process and that I shouldn’t compare myself to others who may seem further ahead. It also reinforced my desire to focus on frontend development as a long-term career path, even if it means taking the time to build a strong foundation. If I had read this pattern earlier, I would have approached Sprint 1 with more patience and confidence. Instead of feeling pressured to produce quick results, I would have focused on learning and improving my skills at a sustainable pace. This mindset would have helped me enjoy the process more and reduce the stress of trying to catch up to others.

From the blog CS@Worcester – CodedBear by donna abayon and used with permission of the author. All other rights reserved by the author.

Software Licensing

I happened to stumble across a YouTube video by a channel called Licenseware by the title of, “A Brief History of Software Licensing – Why it exists, and why it is so important”. This video covered information about how software licensing came to be, how it is used, and why it is important in the world of software and coding. The video explains how our world has developed to become, ultimately, reliant on software for organizing and controlling everything from transportation to communication. Because of all of this, the software industry has gained many rules and regulations around when, where, why, and how software can be used. These rules and regulations are more commonly known as Software Licenses.

Software Licensing is one of the topics we covered relatively early on in our course, going over the different types of licenses and their uses as well as when and in what scenarios they worked or didn’t work. The history of licensing stems from the history of copyright which came from The Copyright Act of 1710 (The Statute of Anne). This act, passed by the British Parliament, stated that authors had the right to publish and sell their work for a renewable period of time. This is relevant because Software Licensing falls under the category of Copyright Law. This is because software is ultimately seen as a type of literary work. The video goes on to talk about how the Free Software Foundation (FSF) introduced the concept of open source software which led to the development of the open source software movement and the creation of the General Public License.

I found this video very interesting because, prior to watching it, I hadn’t realized to what extent and depth software licensing was similar to and fell under the umbrella of a legally documented copyright. This confusion came from the fact that open source software creates an atmosphere of open communication and sharing that doesn’t typically appear in other copyrighted pieces of work. I find it interesting that software seems to be the only field, at least to my knowledge, that has this type of sharing of information and work between other users and developers. The collaboration between maintainers, leaders, collaborators and users allows the field to expand and advance quicker and more efficiently than other fields.

This video helped solidify my understanding of different licenses and the importance of not only having an active license in place but also the correct license in place. This also gave me a better concept of what to do in the future if I should choose to post any of my code online as open source material.

From the blog CS@Worcester – The Struggle of Being a Female Student in CS by Noam Horn and used with permission of the author. All other rights reserved by the author.


Sprint Retrospective: Learning to love Group Projects

Hi Debug Ducker here, and I just recently finished my first sprint with a group project. I have to say it went well, better than I expected. This coming from someone who has had poor experiences with group work.

Let’s begin with what exactly I was working on for these past months. You see, I was assigned to work on a project based on my college campus’s food pantry. We were tasked with building an inventory culling system based on the expiration dates of the products on the shelves. To say that I had very different expectations of what needed to be done would be an understatement, but I am getting ahead of myself.

Back to the main project: I am in a group of five, and we came up with several ways to approach this project. We decided to split the work: two of us would work on a scanner that would check the items’ barcodes for identifying product information, and the other three would work on the backend for culling the inventory.

I found that working on separate parts of the project worked well in the long run, allowing each person to focus on one of its many aspects, especially given how much got done by the end. I would know, as my part of the project was going well… sorta.

There was some trouble, such as using an already established code base as the foundation for the project. It made me realize something: I wasn’t sure how to approach the issues, since the code base was made with JavaScript and, due to my lack of knowledge of JavaScript, that was going to be problematic. Fortunately, I had two other companions who could assist me, and they did a great job. From this, I want to improve my overall knowledge of JavaScript and find ways to make better use of the code base.

Recently I read a bit of a programmer mentoring book called Apprenticeship Patterns by Dave Hoover and Adewale Oshineye. This experience reminds me of a pattern that resonated with me: Accurate Self-Assessment, which is basically identifying what you know and what you don’t. After reflecting on my skills, I found out that there is more I can learn. I want to see this project succeed, so I think I need to brush up on some skills that I am lacking so the project can come out great. That pattern is a good encouragement for me to study further.

Near the end of the project, I was worried that it wasn’t going to be complete by our standards. Fortunately, the other group got the scanner working, and we made some progress on the backend, though I found that it didn’t reach the goal of what we wanted it to do. In the end, we were satisfied with our progress and hope to continue integrating the rest of the work.

Here is most of the work I have done; it was mostly focused on figuring out testing for our culling system and integrating the product schema.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/guestinfobackend/-/tree/main/specification?ref_type=heads

Here is the backend for the rest of the work done in collaboration with the others

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/guestinfobackend/-/tree/main/src?ref_type=heads

Thank you for your time, Have a nice one.

From the blog CS@Worcester – Debug Duck by debugducker and used with permission of the author. All other rights reserved by the author.

Path Testing

This week in class we learned about path testing, a white-box method that examines code to find all possible execution paths. Path testing uses control flow graphs to illustrate the different paths that could be executed in a program. In the graph, the nodes represent lines of code, and the edges represent the order in which the code is executed. Path testing appealed to me as a testing method because it gives a visual representation of how the source code should execute given different inputs. I took a deeper dive into path testing after this week’s classes and found this blog, which gave me a better understanding of it.

Steps

When you have decided that you want to perform path testing, you must create a control flow graph that matches the source code. For example, a split in direction between nodes should represent an if-else statement, and a while loop should appear as a node toward the end of the program with an edge pointing back at an earlier node.

Secondly, pick out a baseline path for the program. This is the path you define to be the original path of your program. After the baseline is created, continue generating paths until all possible outcomes of execution are represented; a small sketch follows.
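As a rough illustration (the method below is invented for this sketch), consider a small routine with one loop and one branch; the comments list the linearly independent paths a tester might derive from its control flow graph:

    // Hypothetical method used to sketch path testing: one while loop and one if branch.
    public class PathTestingSketch {

        public static int sumPositives(int[] values) {
            int total = 0;
            int i = 0;
            while (i < values.length) {   // decision: loop entered or not
                if (values[i] > 0) {      // decision: element counted or skipped
                    total += values[i];
                }
                i++;
            }
            return total;
        }

        // Linearly independent paths one might choose:
        //   Path 1 (baseline): empty array  -> loop body never executes
        //   Path 2: array {5}               -> loop executes, if-branch taken
        //   Path 3: array {-5}              -> loop executes, if-branch skipped
        public static void main(String[] args) {
            System.out.println(sumPositives(new int[] {}));    // expected 0
            System.out.println(sumPositives(new int[] {5}));   // expected 5
            System.out.println(sumPositives(new int[] {-5}));  // expected 0
        }
    }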

How many Test Cases?

For lengthy source code, the possible paths can seem endless, and enumerating them manually can be a difficult, time-consuming task. Luckily, there is an equation that determines how many test cases a program will need with path testing.

C = E – N + 2P

Here C stands for the cyclomatic complexity. The cyclomatic complexity equals the number of linearly independent paths, which in turn equals the number of required test cases. E represents the number of edges, N the number of nodes, and P the number of connected components. Note that for a single program or piece of source code, P is always 1.
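For the sumPositives sketch above, one way to draw the control flow graph uses 6 nodes (initialization, the while condition, the if condition, the addition, the increment, and the return) and 7 edges, with P = 1. That gives C = 7 − 6 + 2(1) = 3, which matches the three linearly independent paths listed in the sketch, so three test cases are enough for basis path coverage of that method. The exact node and edge counts depend on how the graph is drawn, but the resulting complexity stays the same.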

Benefits

Path testing reveals outcomes that might otherwise not be known without examining the code. As stated before, it can be difficult for a tester to know all the possible outcomes in a class; path testing addresses this with control flow graphs, which let the tester examine the different paths. Path testing also ensures branch coverage. Developers don’t need to merge code into an existing repository because they can test in their own branch. Unnecessary and overlapping tests are another thing developers don’t have to worry about.

Drawbacks

Path testing can also be time consuming. Quicker testing methods exist and take less time away from further developing the project. In many cases, path testing may also be unnecessary. It is often used in DevOps setups that require a certain amount of unit coverage before deploying to the next environment; outside of that, it may be considered inefficient compared to other testing methods.

Blog: https://blog.testlodge.com/basis-path-testing/

From the blog Blog del William by William Cordor and used with permission of the author. All other rights reserved by the author.