Category Archives: Week 6

Equivalence Class Testing

In the realm of software testing, equivalence class testing stands out as an efficient black-box testing technique. Unlike its counterparts (boundary value analysis, worst-case testing, and robustness testing), equivalence class testing excels in both time efficiency and precision. This methodology logically divides input and output into distinct classes, enabling comprehensive risk identification.

To illustrate its effectiveness, consider the next-date problem. Given a day in the format of day-month-year, the task is to determine the next date while performing boundary value analysis and equivalence class testing. The conditions for this problem are:

  • Day (D): 1 ≤ Day ≤ 31
  • Month (M): 1 ≤ Month ≤ 12
  • Year (Y): 1800 ≤ Year ≤ 2048

Boundary Value Analysis

Boundary value analysis generates 13 test cases (with n = 3 variables here) by applying the formula:

Number of test cases = 4n + 1, where n is the number of variables

For instance, the test cases might include:

  1. Date: 1-6-2000, Expected Output: 2-6-2000
  2. Date: 31-6-2000, Expected Output: Invalid Date
  3. Date: 15-6-2048, Expected Output: 16-6-2048

While this technique effectively captures boundary conditions, it often overlooks special cases like leap years and the varying number of days in February.
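To make the 4n + 1 count concrete, here is a small sketch of my own (not from the source article) that enumerates the 13 boundary-value cases, assuming the inclusive ranges above and a nominal date of 15-6-2000:

public class BoundaryValueCases {
    public static void main(String[] args) {
        // inclusive ranges for day, month, year as listed above
        int[][] ranges = { { 1, 31 }, { 1, 12 }, { 1800, 2048 } };
        int[] nominal = { 15, 6, 2000 };

        // the single all-nominal case
        System.out.println(java.util.Arrays.toString(nominal));

        // for each variable: min, min+1, max-1, max while the others stay nominal
        for (int v = 0; v < ranges.length; v++) {
            int min = ranges[v][0], max = ranges[v][1];
            for (int value : new int[] { min, min + 1, max - 1, max }) {
                int[] testCase = nominal.clone();
                testCase[v] = value;
                System.out.println(java.util.Arrays.toString(testCase));
            }
        }
        // prints 1 + 4 * 3 = 13 test cases in day-month-year order
    }
}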

Equivalence Class Testing

Equivalence class testing addresses this gap by creating distinct input classes:

  • Day (D): 1-28, 29, 30, 31
  • Month (M): 30-day months, 31-day months, February
  • Year (Y): Leap year, Normal year

With these classes, the technique identifies robust test cases for each partition. For example:

  • Date: 29-2-2004 (Leap Year), Expected Output: 1-3-2004
  • Date: 29-2-2003 (Non-Leap Year), Expected Output: Invalid Date
  • Date: 30-4-2004, Expected Output: 1-5-2004

This approach ensures comprehensive test coverage, capturing edge cases missed by boundary value analysis.
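As a rough illustration (my own sketch, not code from the GeeksforGeeks article), these equivalence-class cases could be written as JUnit 5 tests; the NextDate helper below is a hypothetical implementation included only so the example is self-contained:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class NextDate {
    static String nextDate(int day, int month, int year) {
        int[] daysInMonth = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
        boolean leap = (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
        if (leap) {
            daysInMonth[1] = 29;
        }
        if (month < 1 || month > 12 || day < 1 || day > daysInMonth[month - 1]) {
            return "Invalid Date";
        }
        if (day < daysInMonth[month - 1]) {
            return (day + 1) + "-" + month + "-" + year;    // same month
        }
        if (month < 12) {
            return "1-" + (month + 1) + "-" + year;         // next month
        }
        return "1-1-" + (year + 1);                         // next year
    }
}

class NextDateEquivalenceTest {
    @Test
    void leapYearFebruary29RollsOverToMarch() {
        assertEquals("1-3-2004", NextDate.nextDate(29, 2, 2004));
    }

    @Test
    void nonLeapYearFebruary29IsInvalid() {
        assertEquals("Invalid Date", NextDate.nextDate(29, 2, 2003));
    }

    @Test
    void thirtyDayMonthRollsOverToNextMonth() {
        assertEquals("1-5-2004", NextDate.nextDate(30, 4, 2004));
    }
}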

Conclusion

Equivalence class testing offers a systematic approach to software testing, ensuring efficient and thorough risk assessment. By logically partitioning inputs and outputs, it creates robust test cases that address a wide array of scenarios. Whether dealing with complex date calculations or other software functions, equivalence class testing is a valuable tool in any tester’s arsenal.

In essence, this method not only saves time but also enhances the precision of test cases, making it an indispensable step in the software development lifecycle.

All of this can be found at the link below:

Equivalence Class Testing- Next date problem – GeeksforGeeks

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.

Software Quality

Software quality is something that I discuss with a lot of my friends who are also interested in software engineering and development. One of the main points that all of us make sure to implement in our projects is clean, readable code. Working on each other's code bases has become much easier as our styles have grown more similar in terms of quality and cleanliness. Obviously not everyone will have the same coding style, but people should definitely follow certain guidelines when working on a product.

At the start of all of my projects now, I immediately lay things out in a specific way so that anyone who has to view or navigate the project will know exactly where to look. All packages are named descriptively enough to show what they contain, and all classes follow proper naming conventions as well: if it creates something it is usually a factory, if it is an object class it is a wrapper, and for core functionality we use services (see the small sketch below). There have been plenty of times where I've had to work on someone else's code base and was immediately lost: everything was all over the place, packages didn't have good naming conventions, classes didn't belong in their packages, and even the code itself was just nasty.
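A tiny sketch of what I mean by those conventions; the names here are hypothetical examples rather than code from any real project:

// Factory: it creates things
class ReportFactory {
    Report createDailyReport() {
        return new Report("daily");
    }
}

// Wrapper: an object class that wraps a piece of data
class Report {
    private final String type;

    Report(String type) {
        this.type = type;
    }

    String getType() {
        return type;
    }
}

// Service: core functionality that pulls the pieces together
class ReportService {
    private final ReportFactory factory = new ReportFactory();

    Report generate() {
        return factory.createDailyReport();
    }
}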

This leads me to the next part of software quality, which is usually the most important part: code quality. If you're coding and you use obscure or weird naming conventions, that should probably stop. Working in production usually means someone else will eventually have to look at your work, and if they can't even figure out what you were doing, that means you did a terrible job. Everything needs to be readable. Obviously someone may not initially understand what the code does, and that may just be a difference in experience, but if someone at your skill level reads your code, they should be able to identify what each variable, method, class, etc. does for the product and how they can test it to make sure it all works.

Since I started learning more about code quality, any project I make for production has become way more streamlined for me. I'm thankful that I had friends there to help me understand where I was going wrong in the first place with poorly structured code. Instead of having to take time remembering how a project should look, it's now basically ingrained in my head, which I hope other developers experience as well.

From the blog CS@Worcester – CS Blog by Mike and used with permission of the author. All other rights reserved by the author.

Week 6: Boundary Value vs Equivalence Class Testing

This week we learned about boundary value testing and equivalence class testing. Boundary value testing focuses on making sure the values in, out of, and around the expected boundary work as they should. Equivalence class testing does the same, but also tests the function itself.

I wanted to know more about the two methods and found a blog post that explains them a little more in depth. The author, Apoorva Ram, says they are more thought processes than testing methods, really. The thought process of boundary value testing is self-explanatory: testing the edge boundaries of the function. The thought process of equivalence class testing is organizing every possible input into groups of expected outputs and testing the result from each.

Ram also explains the benefits of the methods and how they can be used in software testing. The two seem to go hand in hand. Planning your tests before writing them and knowing the expected output makes the testing process a lot smoother. You know what points you need to hit and have a plan to execute them. Additionally, knowing all the points you need to hit allows you to prioritize ones that are more important.

For example, say you have a boolean function that looks for the input value to be between 15 and 30, but accepts values from 0 to 100. Boundary testing would test the values of xmin-, xmin, xmin+, xnom, xmax-, xmax, and xmax+. In this case: -1, 0, 1, 50, 99, 100, and 101. It mostly makes sure the 0 and 100 boundaries work. But equivalence class testing breaks down the function into classes of values that will give every result: invalid inputs (under 0 and above 100), false cases (between 0-14 and between 31-100), and the true case (between 15-30). In this case: -1, 12, 20, 45, and 101. This method tests the valid ranges as well as the function ranges.

In my opinion, equivalence class testing is better than boundary value testing because it actually tests the function and not just the illegal-argument exception, and it eliminates redundant tests like xmin+, xnom, and xmax-, which all check for the same output without actually exercising the function. Though ideally, a mix of both would probably be the method I choose. For this example, I would test each equivalence class and its boundaries: xmin-, xmin, xmin+, xtruemin-, xtruemin, xtruemin+, xtruemax-, xtruemax, xtruemax+, xmax-, xmax, and xmax+ (-1, 0, 1, 14, 15, 16, 29, 30, 31, 99, 100, 101).
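Here is a minimal JUnit 5 sketch of that mixed approach; RangeCheck.inRange is a hypothetical stand-in for the boolean function described above, included only so the tests have something to run against:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class RangeCheck {
    // true for 15-30, false for the rest of 0-100, exception outside 0-100
    static boolean inRange(int value) {
        if (value < 0 || value > 100) {
            throw new IllegalArgumentException("value must be between 0 and 100");
        }
        return value >= 15 && value <= 30;
    }
}

class RangeCheckTest {
    @Test
    void valuesOutsideZeroToHundredThrow() {
        assertThrows(IllegalArgumentException.class, () -> RangeCheck.inRange(-1));
        assertThrows(IllegalArgumentException.class, () -> RangeCheck.inRange(101));
    }

    @Test
    void falseClassesAndTheirBoundaries() {
        for (int value : new int[] { 0, 1, 14, 31, 99, 100 }) {
            assertFalse(RangeCheck.inRange(value), "expected false for " + value);
        }
    }

    @Test
    void trueClassAndItsBoundaries() {
        for (int value : new int[] { 15, 16, 29, 30 }) {
            assertTrue(RangeCheck.inRange(value), "expected true for " + value);
        }
    }
}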

Blog post referenced: https://testsigma.com/blog/boundary-value-analysis-and-equivalence-class-partitioning/

From the blog ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

JUnit 5 Testing

2/27/2025

This week we learned about JUnit test cases. Coming from a C++ background it was a bit difficult to order my tests and to use one global object that is set up before all other tests. In C++ the tests run in the order that you write them, and I learned that in Java that is not the case unless you specifically use @Order(N), N being the order number. This lets you order your test cases in whatever way you want. I also learned that you can make a setup method for a single shared object and mark it with @BeforeAll, which means the setup runs before any other test and helps with the repetitive task of making objects over and over. I personally would rather create a fresh object for each test, but it was a nice experience learning that Java runs tests in no particular order unless stated otherwise.

Looking through the JUnit 5 User Guide while doing the homework, I also learned that you can use @BeforeEach for setup that you want to run before each test, a bit like a loop around the tests. One very interesting thing I had never encountered is that you can use a lambda expression to compare values held in objects. This keeps simpler tests nice and compact because the whole check can be done within one line (a small sketch of these features is below).
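Here is a minimal sketch pulling those pieces together; the Counter class is just a hypothetical example so the tests have something to exercise:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

class Counter {
    private int value;

    Counter(int start) {
        if (start < 0) {
            throw new IllegalArgumentException("no negative start");
        }
        value = start;
    }

    void increment() { value++; }

    int value() { return value; }
}

@TestMethodOrder(MethodOrderer.OrderAnnotation.class)   // run tests in @Order sequence
class CounterTest {

    static Counter shared;   // built once for the whole test class
    Counter fresh;           // rebuilt before every single test

    @BeforeAll
    static void setUpOnce() {
        shared = new Counter(100);   // runs a single time, before all tests
    }

    @BeforeEach
    void setUp() {
        fresh = new Counter(0);      // runs again before each test
    }

    @Test
    @Order(1)
    void sharedAndFreshObjectsStartWhereExpected() {
        assertEquals(100, shared.value());
        assertEquals(0, fresh.value());
    }

    @Test
    @Order(2)
    void incrementAddsOne() {
        fresh.increment();
        assertEquals(1, fresh.value());
    }

    @Test
    @Order(3)
    void negativeStartIsRejected() {
        // the lambda keeps this whole check on one line
        assertThrows(IllegalArgumentException.class, () -> new Counter(-1));
    }
}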

In class I also learned that Gradle will give different output for the test cases compared to VS Code. I also encountered times when I would hit the checkmark in VS Code for individual test cases and they would pass, and other times when they would not, and the same when running them all together. This confused me and took me a few hours to figure out, but I realized it was due to the global object that I created. I also noticed that Gradle would give me a different result as well, but it was much more consistent than running the tests individually.

Source: JUnit 5 User Guide
Source: Writing Templates for Test Cases Using JUnit 5 – GeeksforGeeks

From the blog CS@Worcester – Cinnamon Codes by CinCodes and used with permission of the author. All other rights reserved by the author.

JUnit, Test, and Repeat

I’ve decided that I want to practice making more JUnit tests. I did well on my last homework assignment, but I feel like I still need more practice, since it took me some time to do it. I may have to generate my own JUnit tests on the midterm, so I would need to make them at a faster rate. Anyway, practice makes perfect, so there is no such thing as too much practice.

For this post I will be using this website: Inheritance in Java – GeeksforGeeks

This website contains two examples of code but I will use the first one. The code does not allow for user input but I’ll be drafting tests as if it does.

Before I write any test I would write:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

They will be useful later.

The first test I would make is a constructor test. I believe it should be one of the first tests: it is good to know the constructor works as it is supposed to, since if it cannot do that, the class will need to be revised immediately. So, I would do this:

@Test
public void testMountainBikeConstructor() {
    MountainBike mb = new MountainBike(30, 10, 45);
    assertEquals(30, mb.gear);
    assertEquals(10, mb.speed);
    assertEquals(45, mb.seatHeight);
}

This is supposed to take in three values and test whether the constructor initializes them correctly. I used the name “mb” because it was already used in a class that tested inputs. It just made sense to me.
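Several of the tests below call methods on a shared mountainBike object rather than constructing one inside the test. A minimal sketch of a fixture that would supply it is shown here; the starting values are hypothetical, picked so that the brake and speed-up assertions later add up:

import org.junit.jupiter.api.BeforeEach;

class MountainBikeTestSetup {

    MountainBike mountainBike;   // shared object the later tests refer to

    @BeforeEach
    public void setUp() {
        // hypothetical starting values: gear 3, speed 15, seat height 25
        mountainBike = new MountainBike(3, 15, 25);
    }
}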

Another test I created exercises the setHeight method.

@Test
public void testSetHeight() {
    mountainBike.setHeight(40);
    assertEquals(40, mountainBike.seatHeight);
}

This tests if the height of the mountain bike can handle user input.

@Test
public void testMountainBikeMethods() {
    mountainBike.applyBrake(5);
    assertEquals(10, mountainBike.speed);
    mountainBike.speedUp(10);
    assertEquals(20, mountainBike.speed);
}

This tests if the mountain bike can speed up and brake.

@Test
void testToString() {
    String expected = "No of gears are 6\nspeed of bicycle is 25\nseat height is 10";
    assertEquals(expected, mountainBike.toString());
}

This is supposed to test that the code has the expected output.

For the final test, I wanted to up the ante. What if I could test the limits of the code?

@Test
public void testSpeedLimits() {
    MountainBike mb = new MountainBike(3, 100, 25);
    mb.setSpeed(0);
    assertEquals(0, mb.getSpeed(), "No negative speed!");
    assertThrows(IllegalArgumentException.class, () -> mb.setSpeed(-10), "No negative speed!");
}

Overall, this was an interesting challenge. The main difficulty was finding code to do this exercise on. Some code was too simple, which made it difficult to generate meaningful tests, and some was too complicated, which made it difficult to write a significant number of tests. In the end, it was nice to get some practice.

From the blog My Journey through Comp Sci by Joanna Presume and used with permission of the author. All other rights reserved by the author.

Exploring REST API Calls

In the realm of web development, the importance of REST (Representational State Transfer) APIs cannot be overstated. They serve as the backbone of communication between client and server applications, facilitating the seamless exchange of data. My recent exploration of the article “Understanding REST APIs: A Comprehensive Guide” on Medium provided a deep dive into the intricacies of RESTful architecture, the principles behind it, and practical examples of its implementation.

I chose this article because it offers a holistic view of REST APIs, making it suitable for both beginners and seasoned developers looking to refine their understanding. The author does a commendable job of breaking down complex concepts into digestible sections, ensuring readers can follow along easily. The article not only covers the technical aspects of REST APIs but also emphasizes best practices and common pitfalls, which are crucial for anyone working with APIs.

One of the key takeaways from the article was the emphasis on statelessness in REST. Each API call is independent; the server does not store any client context between requests. This design choice simplifies scalability and reliability, allowing systems to handle multiple requests without the overhead of session management. Understanding this principle has reshaped my approach to API design. I now recognize the importance of making each API call self-contained and meaningful.

Additionally, the article highlighted the significance of HTTP methods—GET, POST, PUT, DELETE—and their respective roles in interacting with resources. This reinforced my understanding of how to use these methods appropriately to perform CRUD (Create, Read, Update, Delete) operations effectively. The practical examples provided illustrated how to structure requests and handle responses, making the learning experience both informative and applicable.
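As a quick illustration of my own (not taken from the article), here is a minimal sketch of a GET and a POST call using Java's built-in HttpClient against a placeholder https://api.example.com endpoint:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestCallDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // GET: read a resource; each request is self-contained, with no session state
        HttpRequest get = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users/42"))
                .GET()
                .build();
        HttpResponse<String> getResponse = client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println(getResponse.statusCode() + " " + getResponse.body());

        // POST: create a resource by sending a JSON body
        HttpRequest post = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"Ada\"}"))
                .build();
        HttpResponse<String> postResponse = client.send(post, HttpResponse.BodyHandlers.ofString());
        System.out.println(postResponse.statusCode());
    }
}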

This material significantly impacted my perspective on API integration in my projects. Previously, I approached APIs with a surface-level understanding, often overlooking essential details that could enhance my applications. Now, I feel equipped to design and implement more robust and efficient RESTful services. In my future practice, I plan to apply these principles not just in personal projects but also in collaborative environments, where clear communication with APIs is crucial for successful integration.

In conclusion, “Understanding REST APIs: A Comprehensive Guide” served as an invaluable resource that deepened my understanding of RESTful architecture. The insights gained will undoubtedly influence my future work as I continue to navigate the complex world of web development and API integration.

For more details, you can read the article here: Understanding REST APIs: A Comprehensive Guide.

From the blog Discoveries in CS world by mgl1990 and used with permission of the author. All other rights reserved by the author.

Looking Through Someone Else’s Code

Recently, I have been learning about clean code. It is a way of coding that prevents clutter and confusion, which is very important since coding is a collaborative effort. Without clean code, mistakes would constantly be made, time would be wasted trying to understand each teammate’s code, and most importantly, nothing would get done.

It got me thinking about a certain developer. He’s infamous for his poor coding skills. It has gotten so bad that he has wasted years coding one game, even with volunteers’ aid. So, I would like to use my newfound knowledge to see how bad his code is and offer ways to clean it up.

I will be using an uploaded replication of his code from GitHub: LordEnma/YandereSimulatorDecompiled (decompiled code from the game Yandere Simulator).

There are many parts of the code that need to be fixed, but I will start with the ActiveAnimation.cs file. It dictates when a game cutscene is supposed to happen. One thing that stands out to me is the void function Play. It contains a lot of if statements (10, in fact). First, Play is a vague name; it does not clearly explain what the code does. One can conclude that it plays the cutscene, but there is already a function that does this. Second, this function should be separated into smaller functions: one could play the cutscene, another could adjust animation components, and another could allow for earlier execution, which would get rid of the else-if statements. The computer would go: these conditions are met, execute this; or another set of conditions are met, execute that. It would also make changing code easier, which tends to be a problem for the developer (a rough sketch of this kind of split is below).
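As a rough sketch of that kind of split (written in Java rather than the game's C#, with hypothetical names), guard clauses and small, well-named methods replace a long else-if chain:

interface Cutscene {
    boolean isReady();
}

class CutsceneController {

    void playCutscene(Cutscene cutscene) {
        // guard clauses allow earlier execution instead of nesting else-if branches
        if (cutscene == null) {
            return;
        }
        if (!cutscene.isReady()) {
            return;
        }
        startAnimation(cutscene);
        adjustAnimationComponents(cutscene);
    }

    private void startAnimation(Cutscene cutscene) {
        // one small method just for playing the cutscene's animation
    }

    private void adjustAnimationComponents(Cutscene cutscene) {
        // another small method just for adjusting animation components
    }
}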

Next, I want to discuss the AIControllers.cs file. It is part of a minigame where the player works in a cafe to earn in-game currency. Looking at the name, one can assume that it only deals with player controls, but that is not true: it also controls non-playable characters, which does not make sense. If something goes wrong with the non-playable characters, it will be a pain to find where their code is. It is a little amusing, since there are two files for the chairs of the minigame and all they do is move left and right. The main fix would be to take the non-playable characters’ code out and put it with the chairs’ code, since they rely on each other. Overall, this file seems to be an improvement over the previous one, but in general it should just focus on player controls; remove everything else and put it in its own files.

To conclude, I understand why this game is taking a while to complete. From looking at just two files, one can see that the code is a mess. I think the developer and his volunteers should stop and focus on organizing the code. So in the end I learned one thing: do not code like this developer.

From the blog CS@Worcester – My Journey through Comp Sci by Joanna Presume and used with permission of the author. All other rights reserved by the author.

On the subject of Development Environments…

This week’s blog post is about the differences between Visual Studio and Visual Studio Code. I wrote about this topic because we just started exploring development environments, and prior to Thursday’s class, I didn’t know these were two different software packages. I had previously installed Visual Studio on my personal computer, and when asked to install Visual Studio Code, I thought I already had it. Upon realizing they are different suites, it occurred to me that this distinction may have been the source of a lot of the confusion and issues I ran into during a previous project, and is therefore important enough to discuss in this blog post. The resource I will be referencing is an entire thread I read through on Stack Overflow, which was wildly helpful in understanding some of the key differences between the two. For anybody who (somehow) isn’t familiar with Stack Overflow, it is a public forum where all kinds of tech people discuss code, useful concepts from the field, or really anything. This specific thread was a discussion of the two applications and the differences between them, which I chose because Stack Overflow has never failed me in a time of technical need. Here are some of the key distinctions I found.

Firstly, Visual Studio is a comprehensive Integrated Development Environment (IDE) designed  for larger applications, specifically those built with .NET languages like C# and VB.NET. It offers a ton of built-in tools for debugging, checking performance, and designing user interfaces. In contrast, Visual Studio Code is a lightweight, open-source code editor that supports a bunch of programming languages through extensions. It trades off some of the advanced features found in Visual Studio in exchange for flexibility and simplicity. VS Code is well-suited for web development, Python scripting, or small coding projects. It runs efficiently on less powerful machines, which makes it accessible for developers who may not need everything provided by Visual Studio.

One of the largest distinctions is in their intended use cases. Visual Studio is much better suited for working on complex applications that require extensive debugging and testing. For example, if you’re building a massive application with multiple dependencies and a need for thorough testing, Visual Studio’s comprehensive toolset can be extremely useful. Visual Studio Code is much better suited towards working with a variety of programming languages or with a more minimalistic setup. Visual Studio Code allows you to tailor your environment exactly to your needs through extensions and customizable settings.

Additionally, they differ largely in resource requirements. Visual Studio is bulky and requires a significant amount of storage and resources. Visual Studio Code is smaller and quick to install, making it a practical choice if you don’t need a full IDE.

In conclusion, choosing between Visual Studio and Visual Studio Code largely depends on your project requirements and personal preferences. Each has its strengths and drawbacks. Moving forward, I will personally be using Visual Studio Code until a need for Visual Studio arises.

From the blog CS@Worcester – Mr. Lancer 987’s Blog by Mr. Lancer 987 and used with permission of the author. All other rights reserved by the author.

The Strategy Pattern

Christopher Okhravi explains in this video what the Strategy design pattern is and how it works, using UML class diagrams and a little bit of pseudo-code, stopping at each point to explain each piece of the pattern. All of this comes from the book “Head First Design Patterns” by Eric and Elisabeth Freeman. I wanted another viewpoint on the Strategy pattern outside of class and in a more succinct form, so this video seemed like a good resource.

Mr. Okhravi actually cleared up a lot of the misconceptions I had about the Strategy pattern, as the entire video is essentially one big example walking through each step of the process, even using the same duck-class example we had in class. He explains each detail of the pattern thoroughly, including how composition matters even more than inheritance here. It also led to the revelation for me that you may not even need subclasses, and introduced the idea that favoring composition over inheritance leads to more flexibility than straight inheritance does (a small sketch of the duck example is below).

Learning about this design pattern really helped me better understand how I would apply it when actually writing code. It also put the other design patterns we’ve learned about into better perspective, showing how powerful they really are. Beyond that, it showed me that I didn’t fully understand how the Strategy pattern worked or even what a design pattern completely was; thanks to this video I’ve developed a better appreciation and understanding of how these patterns work. From this point forward I’ll want more supplemental material for every pattern we learn about so I can actually engage with them properly, and I think in the future I’ll spend time learning more about other design patterns and their uses, as they seem highly valuable to understand and know how to use. For any further delves into other design patterns I’ll probably go back to this creator, as he seems to have a good grasp on them and explains them clearly enough, with a visual component, for me to really understand. I highly recommend giving this and his other videos a look, as he sorts them into playlists for easy viewing.
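As a small sketch of the duck example discussed in the video (my own simplified version, not Okhravi's exact code), the flying behavior is composed into the Duck instead of being inherited:

interface FlyBehavior {
    void fly();
}

class FlyWithWings implements FlyBehavior {
    public void fly() { System.out.println("Flying with wings!"); }
}

class FlyNoWay implements FlyBehavior {
    public void fly() { System.out.println("I can't fly."); }
}

class Duck {
    private FlyBehavior flyBehavior;   // the behavior is composed, not inherited

    Duck(FlyBehavior flyBehavior) {
        this.flyBehavior = flyBehavior;
    }

    void performFly() {
        flyBehavior.fly();
    }

    void setFlyBehavior(FlyBehavior flyBehavior) {
        this.flyBehavior = flyBehavior;   // the behavior can even be swapped at runtime
    }
}

public class StrategyDemo {
    public static void main(String[] args) {
        Duck mallard = new Duck(new FlyWithWings());
        mallard.performFly();                     // Flying with wings!
        mallard.setFlyBehavior(new FlyNoWay());
        mallard.performFly();                     // I can't fly.
    }
}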

Here’s the video on the Strategy Pattern.

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.

Uncle Bob – Lesson 1

In this presentation we are presented with a veteran of the IT industry, an author of the Agile Manifesto and the Manifesto for Software Craftsmanship, and a man who hates that Agile has been kidnapped by consultants and conference organizers and has abandoned programmers: Robert C. Martin, better known as Uncle Bob. I wanted to watch this because I wanted to understand who this Uncle Bob person was and what he really meant by “clean code.”

Uncle Bob’s first lesson starts off by giving us some examples of the scope and reality of software in our current day and age. He tells us how a person generally “can’t go more than 60 seconds without interacting with a software system” and then imparts some quick knowledge about why programmers are so slow. After this he begins to break down the concept of clean code, showing us what established and respected programmers think it is, and giving us some examples of horrible first drafts of code that are nearly unreadable. He then shows us what refactoring that code looks like and explains some essential rules to follow when making your code clean.

First and foremost, I really like listening to Uncle Bob; he presents a lot of this information in an easily digestible manner and has excellent oratory skills that helped me stay focused on the topic. The actual material was rather interesting, such as his explanation of how teams get progressively slower as they add features to a program because it becomes a bit of a monolithic mess, or his rule that functions should be as small as possible because “long code is bad code.” Some of the topics Uncle Bob talked about were things I had no idea about, like the lambda feature in Java code he kept mentioning, but that will have to be something I look into in the future.

I think all of this has given me a better perspective on what I should expect to be doing as a developer and how I should think about my code as I’m writing it: mainly how my code should be organized and structured, and especially what my workflow should look like, where I first write the code and get it working and then go back and make it clean. In the future I fully expect to apply these principles to my code and career, and it will probably make sense to do so continually, as it will make all my work easier not only for myself but also for my colleagues.

-Antonio Romeo

Also, here’s a link to Uncle Bob’s first lesson.

From the blog CS@Worcester – aRomeoDev by aromeo4f978d012d4 and used with permission of the author. All other rights reserved by the author.