Category Archives: Week 9

Unit Testing

This week I decided to discuss unit testing because, although we have finished a few activities on the subject, I would like to read up on it to get a better understanding. While searching for a blog to discuss, I found a post from Testim called “Unit Testing Best Practices: 9 to Ensure You Do It Right”. The blog discusses what a unit test is, why we write them, the benefits of unit testing, how to achieve testable code, who creates them, and the difference between unit testing and integration testing.

The text begins by explaining that unit tests focus on very small parts of the application in complete isolation and compare their actual behavior with the expected behavior. The idea of complete isolation is explained further with the point that “you don’t typically connect your application with external dependencies,” which is what makes unit tests so fast and stable.
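To make that concrete, here is a minimal sketch of such a test in JUnit 5 (the `Calculator` class and its method are my own illustration, not from the Testim post): a tiny unit with no external dependencies, whose actual output is compared against the expected value.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical class under test -- small, with no external dependencies.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

class CalculatorTest {
    @Test
    void addReturnsSumOfTwoNumbers() {
        Calculator calculator = new Calculator();

        // Actual behavior...
        int actual = calculator.add(2, 3);

        // ...compared against expected behavior, in complete isolation.
        assertEquals(5, actual);
    }
}
```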

The next section explains why unit tests are created, using bullet points with the main ideas bolded. This structuring helps simplify the ideas for the reader and makes sure the main points stand out. These were the bolded parts of the bullet points in the section: unit tests help you find and fix bugs earlier, your suite of unit tests becomes a safety net for developers, unit tests can contribute to higher code quality, unit tests might contribute to better application architecture, unit tests can act as documentation, and detect code smells in your codebase. I think a lot of the ideas presented were easy to comprehend based on the bolded part alone, but I had to read more about the last bullet point because the term “code smells” (signs that something is wrong with your code) was not obvious to me.

The section that explained the difference between integration testing and unit testing was fairly short because they narrowed the main idea down to a few sentences: “It’s all about the level of isolation. While unit tests have to be completely isolated, integration tests don’t shy away from using external dependencies….integration tests offer a more high-level view of the application than unit tests do. Because of that, the feedback they provide is both more realistic and less focused.”

The 9 best practices at the end are titled: Tests Should Be Fast, Tests Should Be Simple, Tests Shouldn’t Duplicate Implementation Logic, Tests Should Be Readable, Tests Should Be Deterministic, Make Sure They’re Part of the Build Process, Distinguish Between The Many Types of Test Doubles and Use Them Appropriately, Adopt a Sound Naming Convention for Your Tests, and Don’t Couple Your Tests With Implementation Details. These tips are easy to understand based on the titles alone, but throughout each section the author uses bits of code as examples and bullet points to thoroughly explain each idea. In the future, I will refer to these tips when I need to write tests for projects I work on.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

A Deeper Dive into Mocking

In class, we began learning about the use of mocking in software testing. After doing an activity, I realized I only slightly understood the purpose of it. So, I decided to investigate, which is when I found The Art of Mocking by Gil Zilberfeld and Dror Helper.

The article begins by giving an introduction to unit tests for those who might not be familiar with them. The authors also give an example in C#, which is the language they use for the rest of the article; however, this does not affect the information about mocking beyond the syntax of the examples.

Once unit tests are covered, hand-rolled mocks are introduced: classes that the developers code themselves to take the place of the real objects for testing purposes. A hand-rolled mock might not accurately test the whole system, but it ensures the business logic of the class under test works.
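To illustrate (my own sketch in Java, whereas the article’s examples are in C#, and with hypothetical class names), a hand-rolled mock is just an ordinary class the developer writes to stand in for a real dependency so the test never touches the real thing:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Real dependency the class under test would normally talk to.
interface PaymentGateway {
    boolean charge(String accountId, double amount);
}

// Hand-rolled mock: written by hand, it records calls and returns a
// canned answer instead of contacting a real payment system.
class FakePaymentGateway implements PaymentGateway {
    int timesCalled = 0;

    @Override
    public boolean charge(String accountId, double amount) {
        timesCalled++;
        return true; // always "succeeds" so the business logic can be tested
    }
}

// Class under test, which depends only on the interface.
class OrderService {
    private final PaymentGateway gateway;

    OrderService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    boolean placeOrder(String accountId, double total) {
        return gateway.charge(accountId, total);
    }
}

class OrderServiceTest {
    @Test
    void placeOrderChargesTheAccountExactlyOnce() {
        FakePaymentGateway fake = new FakePaymentGateway();
        OrderService service = new OrderService(fake);

        assertTrue(service.placeOrder("acct-1", 20.0));
        assertEquals(1, fake.timesCalled);
    }
}
```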

The types of mocking listed are those manually made by the developers and those auto-generated by mocking frameworks (such as Mockito or Moq). The article goes quite in depth into mocking frameworks: their types, what they do, and what to look for when choosing one. Zilberfeld and Helper also discuss the benefits of mocking, which generally boil down to the fact that mocking cuts loose extra requirements for testing, such as databases and other external or complex resources.

Lastly, the best practices and potential hazards are listed. From what I gathered, the important ones were:

  • Only use fake objects when necessary, as too many can create weak/fragile tests
  • Understand what you’re testing and the dependency
  • At most two assertions/verifications per test

I selected this article because I was struggling a little bit with what mocking was and why it was useful. I had somewhat understood the concept and the idea behind it in class, but needed to go on a deeper dive to get a better grip on it. This resource also feels genuine, in the sense that it is homemade and focused; all the important material is in the article itself, not behind a subscription or anything.

This article is superb with its examples and clear, in-depth explanations of unit tests and mocking. The formatting is easy to follow and allows for smooth learning and transitions from concept to concept. I personally believe that mocking is quite useful when applied lightly. I learned that Verify should only be used in fake objects when it’s the only way your program can pass or fail; otherwise, it might not matter whether every method actually calls the expected method. 

This affected me in a positive way, as it gave me greater insight into mocking and more confidence in my personal understanding of why we use it. I expect to apply this in a future job when I might need to test a method that uses database information, but it would take too many resources to use the database for testing purposes.

From the blog CS@Worcester – Josh's Coding Journey by joshuafife and used with permission of the author. All other rights reserved by the author.

Week 9 – Better late than never.

Jesus, yeah, I’ve kinda been slacking on my blog writing, huh? So sorry for the radio silence coming from this place. Going forward, I hope to write every week.

Anyways, for my first post for this class with Professor Wurst, Software Quality and Testing, I wanted to talk more in depth about something I had already sent him earlier this semester in our course’s Discord server: a video by Matt McMuscles about Sonic The Hedgehog (2006).

I have personal experience with this game. I specifically remember going to Blockbuster when it was still around, wanting to get the game LittleBigPlanet, and my father reaching into the bargain bin to find Sonic 06, as it’s better known to fans (the real title is quite the mouthful, and the nickname differentiates it from the original game on the Sega Genesis), and handing it to me. Unfortunately, I left the store with only Sonic 06.

The reason I relate this to Software Quality and Testing is that this game is a prime example of what happens when development is rushed, as the video shows. The game was rushed onto shelves for Christmas 2006 so SEGA could make a good profit, benefitting from the increased demand that comes with the holiday season.

Many things in this game show that it was improperly tested, with bugs, glitches, and even crashes throughout. The game is held back by this, as there is a genuinely interesting idea underneath the surface. However, the failure to catch these extremely prevalent bugs and glitches led to this being one of the worst-selling titles in the Sonic franchise.

This goes to show why testing and quality assurance are so important, even in the gaming field. Video games are software, after all, just built specifically for entertainment and enjoyment.

I specifically wanted to cover this for this class because it shows why making sure the testing is as solid as the programming is necessary to create a polished product. This is especially true in the gaming field, which I hope to have a future in.

Needless to say, I don’t own my own copy anymore. I snapped my PlayStation 3 disc a long time ago. And months after I got it, I did end up getting LittleBigPlanet, and played it way more than this travesty.

tl;dr don’t get the game from the bargain bin.

From the blog CS@Worcester – You're Telling Me A Shrimp Wrote This Code?! by tempurashrimple and used with permission of the author. All other rights reserved by the author.

Understanding Integration Testing, System Testing, Requirements, Test Plans, and Defects in JUnit

In the world of software development, ensuring the quality of a product is paramount. This necessitates comprehensive testing methodologies that cover various aspects of the software development lifecycle. Among these methodologies, Integration Testing and System Testing play crucial roles in ensuring that software meets its requirements and functions as expected. In this blog post, we’ll delve into Integration Testing, System Testing, the role of requirements and test plans, and how JUnit, a widely-used testing framework for Java, assists in detecting defects.

Integration Testing: Integration Testing involves testing the interfaces and interactions between different components or modules of a software application. It verifies that integrated units work together as expected. This testing phase is crucial as it identifies defects that arise from the interaction between integrated components. JUnit provides a framework to write and execute integration tests efficiently, facilitating seamless integration between components.
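As a hedged illustration (my own hypothetical classes, not from any particular project), an integration test written with JUnit wires two real units together and checks their combined behavior rather than either one alone:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.Test;

// Hypothetical components used only to illustrate an integration test.
class InMemoryUserRepository {
    private final Map<String, String> users = new HashMap<>();

    void save(String username, String email) {
        users.put(username, email);
    }

    String findEmail(String username) {
        return users.get(username);
    }
}

class UserService {
    private final InMemoryUserRepository repository;

    UserService(InMemoryUserRepository repository) {
        this.repository = repository;
    }

    void register(String username, String email) {
        repository.save(username, email.toLowerCase());
    }
}

class UserRegistrationIntegrationTest {
    @Test
    void serviceAndRepositoryWorkTogether() {
        // Both real units are wired together: the test checks their
        // interaction, not either component in isolation.
        InMemoryUserRepository repository = new InMemoryUserRepository();
        UserService service = new UserService(repository);

        service.register("shrimp", "Shrimp@Example.com");

        assertEquals("shrimp@example.com", repository.findEmail("shrimp"));
    }
}
```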

System Testing: System Testing is a comprehensive testing phase that evaluates the entire system’s behavior against specified requirements. Unlike Integration Testing, which focuses on component interactions, System Testing examines the system’s functionality, performance, security, and other quality attributes. JUnit enables developers to write system tests that validate the system’s behavior as a whole, ensuring that it meets the defined requirements.

Requirements and Test Plans: Requirements serve as the foundation for testing activities. They outline the expected behavior and functionality of the software system. Test Plans are derived from requirements and define the approach, scope, resources, and schedule for testing activities. JUnit allows developers to align test cases with requirements, ensuring comprehensive test coverage. By mapping test cases to specific requirements, teams can verify that each requirement is adequately tested, thereby reducing the risk of undetected defects.

Defects in JUnit: Defects, or bugs, are inevitable in software development. JUnit plays a crucial role in identifying and addressing defects through its testing capabilities. When a test case fails, JUnit provides detailed information about the failure, including the location and nature of the defect. This information helps developers quickly identify and fix the issue, ensuring the software’s reliability and stability.
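As a small, hypothetical illustration of that reporting (my own example, not from any cited source): if the assertion below failed, JUnit 5 would report the expected and actual values and point to the failing test, which is usually enough to locate the defect quickly.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {

    // Hypothetical method under test: supposed to apply a 10% discount.
    private double applyDiscount(double price) {
        return price * 0.9;
    }

    @Test
    void tenPercentDiscountOnOneHundred() {
        // If applyDiscount were buggy (say it returned price * 0.8),
        // JUnit 5 would fail this test with a message along the lines of
        //   expected: <90.0> but was: <80.0>
        // naming this test and assertion, which helps pinpoint the defect.
        assertEquals(90.0, applyDiscount(100.0), 0.001);
    }
}
```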

Conclusion: Integration Testing, System Testing, requirements, test plans, and defect management are essential components of the software testing process. JUnit simplifies and streamlines these activities by providing a robust framework for writing and executing tests. By leveraging JUnit effectively, developers can ensure that their software meets requirements, functions as intended, and delivers a seamless user experience.

Websites:

Link to JUnit Documentation

Get started with JUnit 5

From the blog Discoveries in CS world by mgl1990 and used with permission of the author. All other rights reserved by the author.

The Happy Path

Testing code and software can come in many different forms, and some may be better suited to a given situation than others. In this post, we will look at path testing, and specifically happy path testing. Path testing represents your code as a graph of nodes and arrows: the nodes represent lines of code, and the arrows dictate the flow of the code or program. It’s a fairly straightforward way of testing, depicting how you want your code to flow and how the code actually flows, and it can help you visualize the execution of your program.

In this blog post, happy path testing is described as “a technique that tests the application through a positive flow to generate a default output,” or “a type of software testing that focuses on the most common and expected scenarios that a user will encounter when using an application.” Essentially, it allows you to see how your code executes in a typical environment. The post uses the example of an online shopping site, where the typical flow would be a user visiting the website, browsing through the products, adding some products to their cart, going to checkout, entering their shipping address and payment details, and finally receiving a confirmation and an email. That’s the happy path the website takes when a normal user goes to shop, and this kind of testing ensures that nothing goes wrong during a normal execution. The same idea applies when using this strategy on your own code: going through it in a normal, typical situation and making sure you will not run into bugs and errors. Some steps to perform happy path testing effectively are defining the scope and objectives of the testing, designing the test cases and scenarios, executing them, analyzing and reporting the results and outcomes, and fixing and retesting the issues and defects. The post also talks about the opposite of happy path testing, and some challenges that come with this kind of testing, such as overlooking negative and edge cases and relying on the happy path as a final verdict.
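As a hedged sketch of what a happy-path test for that shopping flow might look like (the `Cart` class and its methods are hypothetical, invented only for illustration):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical shopping cart used only to illustrate a happy-path test.
class Cart {
    private double total = 0.0;

    void add(double price) {
        total += price;
    }

    String checkout(String shippingAddress, String paymentDetails) {
        if (shippingAddress.isEmpty() || paymentDetails.isEmpty()) {
            return "ERROR";
        }
        return "ORDER CONFIRMED";
    }
}

class HappyPathCheckoutTest {
    @Test
    void typicalShopperCanCheckOut() {
        Cart cart = new Cart();

        // The "happy path": browse, add products, check out with valid details.
        cart.add(19.99);
        cart.add(5.50);
        String result = cart.checkout("1 Main St, Worcester MA", "4111-1111-1111-1111");

        assertEquals("ORDER CONFIRMED", result);
        // Note: this says nothing about edge cases (empty cart, bad card, etc.).
    }
}
```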

Although happy path testing is an effective testing strategy, it only covers the main path through your code, leaving some areas vulnerable to bugs and errors that may go undetected. Even so, it is a good initial testing strategy: it allows you to confirm that your code works as intended and expected in the most common scenarios. Personally, I’m a fan of this kind of testing; being able to visualize the way my code works is nice. However, I know its limits, and when it is effective and when it is not.

From the blog CS@Worcester – Cao's Thoughts by antcao and used with permission of the author. All other rights reserved by the author.

Success Begins By Sweeping The Floor

In the Software Development industry, every journey begins with a first step, often in the form of menial tasks. The Sweeping the Floor pattern emphasizes the importance of starting off small and embracing humbling tasks as a newcomer to a team. This means volunteering for simple yet essential tasks to help contribute to the team’s overall success and grow as a developer. These tasks may not seem exciting, but they help form the backbone of the team and provide valuable learning opportunities.

While learning more about the pattern, I found myself reflecting on my own experiences. This pattern brings up the idea that we should try to tackle not just simple tasks, but challenging ones as well. It helped me realize that regardless of our level of knowledge, every contribution matters and serves as a stepping stone toward future accomplishments.

In a field that is often associated with complexity and innovation, I found it interesting for a pattern to highlight the importance of starting from the ground up and acknowledging that mastery is a journey that comes with time, rather than a destination. Also, by taking on these tasks within our teams, we gain additional knowledge. The pattern’s focus on filling knowledge gaps through hands-on experience and learning underscores the importance of practical learning in the field.

Sweeping the Floor has led me to reevaluate my approach to the field. While I have always recognized the importance of continuous learning, this pattern has reinforced the idea that no task is beneath us as craftsmen and apprentices. This helped inspire me to contribute more to any team I’m on, even if it means taking on some of the more challenging tasks.

While I agree with the message of the pattern, I believe there’s a fine line between taking on humbling tasks and being pigeonholed into a role with limited growth potential. I believe it is essential to seek opportunities for growth and development beyond just menial tasks. While Sweeping the Floor is part of the apprenticeship journey, it’s also crucial to strike a balance and demonstrate readiness for more significant challenges and roles within a team.

With this being said, the Sweeping the Floor pattern can serve as a reminder that with dedication and continuous learning, we can follow our own paths to mastery. By embracing the humbling tasks from the beginning rather than pushing them away, and reaching for various opportunities for growth, we as apprentices can then lay a solid foundation that’ll set us up for success in the future.

From the blog CS@Worcester – Conner Moniz Blog by connermoniz1 and used with permission of the author. All other rights reserved by the author.

Decision Tables from a Template

Over the past few weeks in CS443 – Software Quality Assurance and Testing, we’ve been learning how to apply our boundary test classes to create Decision Tables, and to apply somewhat similar logic to create Program and DD-Path Graphs for code segments. Decision tables are visual tools used in software testing and analysis to specify actions based on given conditions. The strategy we learned in class of listing out all possibilities and then systematically combining them based on the decision outcomes, particularly the “Don’t care” scenarios, seems like a useful and interesting way to map out test designs.

So, I decided to look into blogs discussing Decision Tables and their implementation in software testing and found a great post on ShiftAsia with abstract and specific examples alongside general discussion. This post is also quite recent – posted on January 9, 2024 – which is something I always appreciate as the software/tech world is constantly changing. It opens by describing how to create a Decision Table by representing it with the following matrix:

Condition Stub  |  Condition Entries
Action Stub     |  Action Entries

Condition stub: List of all conditions in consideration

Condition entries: Filled out with Y/N (or X) to cover all possible combinations of conditions

Action stub: List of all possible actions/output

Action Entries: Marked (generally with X or blank) to show outcome and an association between a condition and result.

This is then illustrated with an example of whether a user can register, based on the conditions of having a valid email, a registered email, and a valid password. I found this template and example helpful for better understanding Decision Tables in general by comparing them to the steps we did in our In-Class Assignment 7. And using the example of an altogether invalid email forcing all results to be “Invalid” makes sense logically for the column consolidation.
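As a hedged sketch (my own, not from the ShiftAsia post) of how such a table can drive tests, each column of condition entries can become one case of a JUnit 5 parameterized test, so the table and the test suite stay in step; the `register` rule below is a hypothetical stand-in for the blog’s registration logic:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class RegistrationDecisionTableTest {

    // Hypothetical rule under test, loosely mirroring the blog's conditions:
    // registration succeeds only if the email is valid, not already
    // registered, and the password is valid.
    private String register(boolean validEmail, boolean alreadyRegistered, boolean validPassword) {
        if (validEmail && !alreadyRegistered && validPassword) {
            return "REGISTERED";
        }
        return "INVALID";
    }

    // One row per column of the decision table, including the
    // "don't care" columns collapsed under an invalid email.
    @ParameterizedTest
    @CsvSource({
            "true,  false, true,  REGISTERED",
            "true,  true,  true,  INVALID",
            "true,  false, false, INVALID",
            "false, false, true,  INVALID"   // invalid email forces INVALID regardless
    })
    void outcomeMatchesDecisionTable(boolean validEmail, boolean alreadyRegistered,
                                     boolean validPassword, String expected) {
        assertEquals(expected, register(validEmail, alreadyRegistered, validPassword));
    }
}
```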

The process of combining columns and simplifying Decision Tables is reminiscent of CS254 – Computer Architecture and Organization concepts, particularly using K-Maps to calculate Sum of Products and Product of Sums. Based on similar responses to a variety of inputs, we are able to combine and simplify the K-Map table and, in turn, the expression it produces. While K-Map logic works based on binary math laws rather than actual outcomes, there’s a clear correlation here, as we represent outcomes with boolean values that can easily be represented in binary, as either 0 (false) or 1 (true). My personal experience in CS254 wasn’t the best; I didn’t totally understand how many of the concepts we learned are applicable in practical situations, so it’s cool and exciting to see one applied in software testing, an area where I probably would have least expected it.

Sources:

https://blog.shiftasia.com/use-decision-table-in-software-development

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

Navigating the Nuances of Mock Testing: A Reflection

In the realm of software engineering, particularly within the course content of CS-401, the concept of mock testing stands out as a pivotal technique in the landscape of software testing methodologies. Recently, I delved into an insightful resource on mock testing (https://www.geeksforgeeks.org/software-testing-mock-testing/), which offered a comprehensive exploration of its applications, benefits, and best practices.

Why This Resource?

Choosing this article stemmed from my quest to understand the intricacies of unit testing, especially how mock objects can simulate the behavior of real dependencies. The clarity and depth of the article provided a solid foundation, aligning perfectly with our coursework on advanced software development practices.

Insights Gained

The article elucidates mock testing as a technique where simulated objects, or “mocks,” replace system dependencies. This isolation allows for the rigorous testing of individual components without the overhead or unpredictability of their real counterparts. Notably, the piece highlighted the distinction between mocks, stubs, and fakes, demystifying their respective roles in a testing environment.

Personal Reflection

Engaging with the material, I was struck by the elegance of mock testing in decoupling code, facilitating a cleaner, more modular design. The practice of defining expectations for mock objects not only enforces a contract between different parts of a system but also embeds a level of documentation within the test itself. Reflecting on past projects, I recognize instances where a lack of isolation complicated both the development and testing phases. Moving forward, I’m keen to apply mock testing more judiciously, ensuring each component can be tested in isolation, thus enhancing test reliability and code quality.
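As a small, hedged illustration of that idea (my own Java/Mockito sketch, not an example from the GeeksforGeeks article; the `EmailService` and `WelcomeNotifier` names are hypothetical), defining an expectation on a mock and then verifying it both isolates the component and documents its contract inside the test:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import org.junit.jupiter.api.Test;

// Hypothetical dependency and class under test, for illustration only.
interface EmailService {
    boolean send(String address, String message);
}

class WelcomeNotifier {
    private final EmailService emailService;

    WelcomeNotifier(EmailService emailService) {
        this.emailService = emailService;
    }

    boolean welcome(String address) {
        return emailService.send(address, "Welcome!");
    }
}

class WelcomeNotifierTest {
    @Test
    void welcomeSendsExactlyOneEmail() {
        // The mock stands in for the real email service...
        EmailService emails = mock(EmailService.class);
        // ...and the expectation (the "contract") is stated up front.
        when(emails.send("a@b.com", "Welcome!")).thenReturn(true);

        WelcomeNotifier notifier = new WelcomeNotifier(emails);
        assertTrue(notifier.welcome("a@b.com"));

        // Verifying the interaction documents what the component must do.
        verify(emails).send("a@b.com", "Welcome!");
    }
}
```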

Applying What Was Learned from this Resource

In future software projects, I plan to leverage mock testing to streamline the development process. By isolating external dependencies and focusing on the behavior of the system under test, I anticipate a more efficient debugging and validation process. Furthermore, the insights gained on best practices will be instrumental in avoiding common pitfalls, such as over-mocking, which can obscure the clarity and purpose of tests.

Conclusion

The exploration of mock testing through the GeeksforGeeks article has been both enlightening and validating, reinforcing the relevance of mock testing within our CS-401 curriculum. As software complexity grows, so does the necessity for sophisticated testing methodologies. Mock testing, with its promise of isolation and focused validation, is a technique I look forward to mastering and applying in my journey as a software developer.

From the blog CS@Worcester – Abe's Programming Blog by Abraham Passmore and used with permission of the author. All other rights reserved by the author.

Finding Your Path with “Craft over Art”: A Balance of Purpose and Passion

Summary of the Pattern:
“Craft over Art” is a pattern that addresses the tension between pursuing personal artistic aspirations and delivering work that serves a practical, often communal purpose. It suggests that while software development allows for creativity and self-expression, the primary goal should be to craft solutions that meet the needs of users, clients, or the community. This pattern encourages developers to find a balance between their artistic ambitions and the craftsmanship required to build reliable, usable, and maintainable software.

My Reaction:
The “Craft over Art” pattern deeply resonated with me. It articulates a dilemma I’ve often encountered: the desire to innovate and create freely versus the responsibility to deliver functional, user-centric solutions. This pattern has helped me appreciate the beauty and satisfaction that come from craftsmanship – the meticulous attention to detail and the joy of solving real-world problems. It underscores the importance of empathy and utility in our work, which I find both humbling and motivating.

Insights and Changes in Perspective:
Reflecting on this pattern prompted me to reevaluate how I approach my projects. I’ve started to see my work not just as a platform for personal expression but as an opportunity to impact others positively. This shift in perspective has made me more conscious of the users’ needs and the broader implications of my work. It’s a reminder that at the heart of technology lies the potential to improve lives, and this purpose should guide our creative and technical decisions.

Disagreements and Critiques:
While I agree with the core message of “Craft over Art,” I believe there’s room for a nuanced view that doesn’t see art and craft as opposing forces but as complementary aspects of creative work. The best solutions often come from a fusion of innovative thinking (art) and practical application (craft). Encouraging a dialogue between these aspects can lead to more holistic and innovative outcomes. Hence, while the pattern is valuable, it’s important not to diminish the role of artistic creativity in problem-solving.

Conclusion:
“Craft over Art” has offered me a fresh lens through which to view my role as a developer. It has emphasized the importance of balancing personal creative aspirations with the responsibility to deliver practical, effective solutions. As I continue my journey in software development, I am inspired to embrace this balance, ensuring that my work not only satisfies a technical or aesthetic urge but also serves a greater purpose. This pattern is a powerful reminder of the impact our choices as developers can have on the world around us.

From the blog CS@Worcester – Abe's Programming Blog by Abraham Passmore and used with permission of the author. All other rights reserved by the author.

Understanding Object-Oriented Testing

Testing, in the context of software development, is a critical process that involves systematically checking a program or system to ensure it performs as intended. It is really important to check our work and make sure everything works as it should. When we write code using object-oriented programming (OOP), a common way to organize and write software, we need a special kind of checking called Object-Oriented Testing (OOT). This blog dives into what OOT is, inspired by the detailed article from GeeksforGeeks, showing why it is different and important.
Summary of the resource

The article from GeeksforGeeks explains how testing object-oriented programs differs from traditional testing. OOP deals with concepts like classes and objects (which are basically groups of functions and data that model real-world things). OOT then focuses on checking these classes and objects, along with how they interact with each other, which is not something traditional testing emphasizes. The article talks about the challenges of doing OOT, like making sure objects work well together, and the need for different tools and strategies to do it right.
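As a hedged sketch of what that means in practice (my own Java example, not from the GeeksforGeeks article), an object-oriented test often exercises a whole class hierarchy, checking that subclasses honor the behavior promised by their base class rather than checking a single function in isolation:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical class hierarchy, used only to illustrate object-oriented testing.
abstract class Shape {
    abstract double area();
}

class Rectangle extends Shape {
    private final double width, height;

    Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    @Override
    double area() {
        return width * height;
    }
}

class Square extends Rectangle {
    Square(double side) {
        super(side, side);
    }
}

class ShapeTest {
    // The test checks behavior across the hierarchy, not just one method
    // in one class, since subclasses must keep the base class's promises.
    @Test
    void subclassesHonorTheBaseClassContract() {
        Shape rectangle = new Rectangle(2, 3);
        Shape square = new Square(4);

        assertEquals(6.0, rectangle.area(), 0.001);
        assertEquals(16.0, square.area(), 0.001);
    }
}
```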

Reason for selection

I picked this article because it does a great job of showing how testing object-oriented code is different from the usual way of testing code. It fits well with what we are learning in class about how to build software, giving us a clear picture of how to make sure our OOP projects work well.

Reflection:

Reading about OOT made me realize that checking our code in OOP needs more than just looking at each part by itself; we need to see how all the parts work together. It was an eye-opener to learn about the different tools we can use for OOT and how they help us find and fix problems early on.

Looking forward

This article made me more aware of how important it is to use OOT in my future projects. Knowing how to do this kind of testing means I can make sure my software is solid and works well, which is very important for any software developer.

Conclusion

Object-Oriented Testing is a key skill for software developers, especially as we build more complex and interconnected software. The insights from the GeeksforGeeks article highlight the unique aspects of OOT and remind us why adapting our testing to match our coding style is crucial. As we tackle bigger projects, keeping these OOT principles in mind will help us build better and more reliable software.

From the blog CS@Worcester – Josies Notes by josielrivas and used with permission of the author. All other rights reserved by the author.