Category Archives: CS-443

HOW DECISION TABLES CHANGED MY SOFTWARE TESTING MINDSET.

If you’ve ever written test cases based on your gut feeling, you’re not alone. I used to write JUnit tests by simply thinking, “What might go wrong?” While that’s a decent start, I quickly realized that relying on intuition alone isn’t enough, especially for complex systems where logical conditions stack up fast.

That’s when I understood the magic of Decision Table-Based Testing.

What Is Decision Table Testing?

A decision table is like a truth table, but for real-world logic in your code. It lays out different conditions and maps them to the actions or outcomes your program should take. By organizing conditions and results in a table format, it becomes much easier to identify which combinations of inputs need to be tested and which ones don’t. It’s especially helpful when you want to reduce redundant or impossible test cases, when you have multiple input variables (like GPA, credits, user roles, etc.), and when your program behaves differently depending on combinations of those inputs.

Applying Decision Tables in Real Time

For a project I worked on, we analyzed a simple method: boolean readyToGraduate(int credits, double gpa). This method is meant to return true when credits ≥ 120 and GPA ≥ 2.0. We had to figure out what inputs would cause a student to graduate, not graduate, or throw an error—such as when the GPA or credit values were outside of valid ranges.

Instead of testing random values like 2.5 GPA or 130 credits, we created a decision table with all the possible combinations of valid, borderline, and invalid values.

We even simplified the process using equivalence classes, like:

  • GPA < 0.0 → invalid
  • 0.0 ≤ GPA < 2.0 → not graduating
  • 2.0 ≤ GPA ≤ 4.0 → eligible to graduate
  • GPA > 4.0 → invalid

By grouping these ranges, we reduced a potential 256 test cases to a manageable 68, and then reduced them even further after combining rules with similar outcomes.
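To make this concrete, here is a minimal sketch of how those rules might translate into JUnit 5 parameterized tests. The StudentChecker class, its exception behavior, and the specific rows are my own assumptions for illustration; the original project only specified the readyToGraduate signature and the graduation conditions.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Hypothetical class under test; the post only gives the method signature.
class StudentChecker {
    boolean readyToGraduate(int credits, double gpa) {
        if (credits < 0 || gpa < 0.0 || gpa > 4.0) {
            throw new IllegalArgumentException("credits or GPA out of range");
        }
        return credits >= 120 && gpa >= 2.0;
    }
}

class GraduationDecisionTableTest {

    private final StudentChecker checker = new StudentChecker();

    // One row per decision-table rule: a representative value from each
    // equivalence class of credits and GPA, with the expected outcome.
    @ParameterizedTest
    @CsvSource({
        "120, 2.0, true",   // both conditions exactly at their boundaries -> graduate
        "130, 3.5, true",   // comfortably inside both valid ranges -> graduate
        "119, 3.5, false",  // credits just below 120 -> not graduating
        "130, 1.9, false",  // GPA in [0.0, 2.0) -> not graduating
        "90,  1.0, false"   // both conditions fail -> not graduating
    })
    void validCombinations(int credits, double gpa, boolean expected) {
        assertEquals(expected, checker.readyToGraduate(credits, gpa));
    }

    // Rules whose inputs fall outside the valid ranges should raise an error.
    @ParameterizedTest
    @CsvSource({
        "130, -0.5",   // GPA below 0.0 -> invalid
        "130, 4.5",    // GPA above 4.0 -> invalid
        "-10, 3.0"     // negative credits -> invalid
    })
    void invalidCombinations(int credits, double gpa) {
        assertThrows(IllegalArgumentException.class,
                () -> checker.readyToGraduate(credits, gpa));
    }
}

Each row of the CsvSource is one column of the decision table, so the table itself becomes the test data.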

Well, you must be wondering why this even matters in real projects. It matters because in real-world applications, time and efficiency are everything. Decision tables help you cover all meaningful test scenarios. They also help cut down on unnecessary or duplicate test cases. Decision tables also reduce human error and missed edge cases, and they provide a clear audit trail of your testing logic.

If you’re working in QA, development, or just trying to pass that software testing class, mastering decision tables is a must-have skill. Switching from intuition-based testing to structured strategies like decision tables has completely shifted how I write and evaluate test cases. It’s no longer a guessing game—it’s a methodical process with justifiable coverage. And the best part? It saves a ton of time. Next time you’re designing tests, don’t just hope you’ve covered the edge cases. Prove it—with a decision table.

Have you used decision tables in your projects? Drop a comment below and share your experience!

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.

Black Box Testing

URL: https://www.testscenario.com/black-box-testing/

Black box testing is a great tool for improving functionality, identifying issues within the user interface, and, in many cases, it does not require any programming knowledge. It also plays a key role in validating system acceptance, pre-launch stability, security, and third-party integrations. These aspects make black box testing a highly valuable method not only for developers but also for testers and stakeholders.

It offers great advantages when it comes to presenting results to the customer in order to gain approval. Because it follows a more user-oriented approach, black box testing produces results that are of interest to clients and stakeholders. This approach is referred to as Functional Testing. Additionally, black box testing can be used to assess user-friendliness, usability, and reliability—all of which help ensure that the software runs smoothly and provides meaningful feedback. This type of testing is known as Non-Functional Testing.

Regression Testing, another form of black box testing, ensures that new features or updates do not break any existing functionalities. This is especially important when a software release includes significant or breaking changes. User Acceptance Testing (UAT) typically takes place in the final phase of the testing cycle, where end users verify whether the software meets the necessary business requirements before its official release. Lastly, Security Testing serves as a method of vulnerability assessment, aiming to expose a system’s weaknesses and protect it against potential cyber threats.

The main reason I chose this article is because black box testing, to me, always seemed a little meaningless. Why would anyone test something without reading the code? But after reading the article, I realized that developers actually perform this kind of testing quite often—especially in web development. We constantly test various inputs without necessarily diving into the source code. The article also helped me understand that black box testing is an excellent tool for non-developers, allowing them to effectively test and better understand the product without having to read hundreds of lines of code.

From the blog CS@Worcester – CS Today by Guilherme Salazar Almeida Nazareth and used with permission of the author. All other rights reserved by the author.

Code Reviews

Source: https://about.gitlab.com/topics/version-control/what-is-code-review/

A code review is a peer review of code, which helps developers validate the code’s quality before it is merged and shipped to production. Code reviews are done to identify bugs, increase the overall quality of the code, and ensure that the developers of the product understand the source code. Code reviews allow for a “second opinion” on the functionality of code before it is actually implemented in the systems. This prevents non-functional code from being implemented in the product and potentially causing issues or bottlenecks in performance. Ensuring that code is always reviewed before merging encourages developers to think more critically about their own code, and allows reviewers to gain more domain knowledge regarding the systems of the product. Code reviews prevent unstable code from being used by customers, which would lead to poor credibility and overall act as a detriment to the business. The benefits of code reviews are as follows: knowledge is shared among developers, bugs are discovered earlier, establishment of a shared development style/environment, enhanced security, increased collaboration, and most importantly, improved code quality. As with everything, there still are disadvantages. Code reviews lead to longer shipping times, pull focus/manpower from other aspects of the process, and larger code reviews mean longer review times. But the benefits far outweigh these disadvantages.

Code reviews can be implemented in multiple ways, through pair programming, over-the-shoulder reviews, tool-assisted reviews, or even email pass-around. GitLab offers an interesting feature where developers can require approval from reviewers before their code can be merged. I chose this article because I use this feature frequently in my capstone class. My teammates and I review each other’s changes in the codebase through this GitLab feature and, if needed, go over these changes in class, whether it be through pair programming or over-the-shoulder reviews.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

CS443: A Wishlist for Automation and Productivity

You ever think about how being a software engineer is kind of like working in a factory?

Mill & Main in Maynard, where I did a summer fellowship a few years ago. Fun fact: this building and the rest of the town feature prominently in Knives Out (2019). True story!

I mean that quite literally. Especially here in Massachusetts, where primo office space quite frequently gets hollowed out of old textile mills. (The old David Clark building by the intermodal port, and a slew of defense contractors in Cambridge-Braintree, my old workplace included, come to mind.)

In some ways, the comparison isn’t unmerited. I don’t think it’s far-fetched to say that the focus of industry is to deliver product.

Okay, but how?

Last week, I wrote about the failure of the Spotify model — specifically, their implementation of large-scale Agile-based DevOps. You can read more about that here.

The impetus for this week’s blog is ‘what-if’; if, instead of Spotify’s focus on large-scale Agile integration, we approached DevOps (in a general sense) from the bottom-up, with a clear emphasis on software tools and freeform, ad-hoc team structure. What can we use, what can we do to effect a stable and logical working environment?

Just one quick disclaimer: this is bound to be biased, especially in terms of what I’ve seen work in industry. Small, tight-knit teams and relatively flat hierarchies. This won’t work for every situation or circumstance — and by sidestepping the issue of Agile at scale, I feel like I’m ignoring the issues endemic to Spotify’s structure.

Still, I figure it’s worth a shot.

Issue Hub: Atlassian Jira

The first thing we’ll need is an issue tracker. Atlassian doesn’t do a very good job at marketing its products to the non-corporate world, but it’s very likely that almost everyone reading this post has used an Atlassian product at some point or another: Trello, Bitbucket, and, best of them all, Jira. Think of it as a team whiteboard, where we can report on bugs, update our wikis, and view the overall health of our build, all within one web server.

Version Control: Subversion

Subversion is going to be our version control software. Although this doesn’t have all of the downstream merging capability of Git, its centralized nature actually works to our benefit; the specific combination of Jenkins, Jira, and SVN forms a tightly-knit build ecosystem, as we will see.

CI Automation: Jenkins

Jenkins is a continuous integration (CI) and build automation utility which will run health checks on downstream builds before they’re committed to the nightly build, and then to the master build overnight. We’ll implement all of our tests and sanity checks within it, to ensure that no one pushes bad code. If, by some miracle, something does get through, we can revert those changes—another handy feature.

How does this work?

SVN repo → Jenkins (throughout-day staging, then end-of-day nightly build, then overnight master) → Jira (for reports and long-term progress tracking).

Does this all work?

In a word, hopefully. The social contract between you and a team of four or five people is much simpler to fulfill than that of you and the Tribe in the Spotify model. (You only have to track the work of several people, as opposed to almost everyone on-campus with the Tribal model).

There are commitments and onboarding requirements to a system like this, too, as there was with the Tribal model, but they’re not as pronounced, especially since we aren’t scaling our structure beyond this one team.

I think what is especially true of the workplace is that no two teams are alike, and it’s kind of crazy to assume that they are, which is exactly what Spotify did. How is it worthwhile to tell people who they should be working with, instead of letting them figure that out on their own?

Rather, by placing constraints on how the work is done (which is what we’re doing here—the emphasis on software as opposed to structure), we can get better results by letting people figure out how to get from Point A to Point B, assuming we properly define both A and B.

Between last week and now: a lot of thoughts to digest.

Kevin N.

From the blog CS-443 – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.

What I Learned About QA: A Computer Science Student’s Take on Real-World Testing Practices

I recently read the article “Streamlining the QA Process: Best Practices for Software Quality Assurance Testing” published by KMS Technology. As a college student studying computer science and still learning the ins and outs of software testing, I found this article especially helpful. It gave me a clearer understanding of what quality assurance (QA) really looks like in real-world software projects.

I chose this article because I’ve been trying to get a better grasp on how testing fits into the bigger picture of software development. A lot of what we learn in class focuses on writing code, but not always on making sure that code actually works the way it’s supposed to. This article breaks down what can go wrong in the testing process and how to avoid those issues, which is something I know I’ll need as I continue learning and working on team projects.

The article talks about a few key challenges that QA teams run into:

Unclear Requirements – This one really stood out to me. The article explains that if the project requirements aren’t clearly defined, testing becomes almost impossible. How can you verify if something works if you’re not even sure what it’s supposed to do? It made me realize how important it is to ask questions early on and make sure everyone’s on the same page before writing code.

Lack of Communication – The article also highlights how communication gaps can mess up testing. If developers and testers aren’t talking regularly, bugs can slip through the cracks. As someone who’s worked on class group projects where communication wasn’t great, I totally see how this could happen on a larger scale.

Skipping or Rushing Testing – The article warns against rushing through testing or treating it like an afterthought. I’ve definitely been guilty of this in my own assignments—leaving testing until the last minute, which usually results in missing bugs. The article suggests integrating testing throughout development, not just at the end, and that’s something I want to start practicing more.

Reading this article made me reflect on my own experience so far. In one of my programming classes, our final project had a vague prompt and my group didn’t ask enough questions. We ended up spending extra time rewriting parts of our code because the requirements kept changing. After reading this article, I see how important it is to define everything early and communicate often.

I also plan to be more intentional about testing as I continue to build projects. Instead of waiting until the code is “done,” I want to get into the habit of testing as I go and making sure I understand the expected behavior before writing a single line.

Overall, this article helped me understand why QA is such a critical part of software development—not just something to tack on at the end. If you’re also a student learning about testing, I recommend giving it a read: Streamlining the QA Process: Best Practices for Software Quality Assurance Testing.

From the blog CS@Worcester – Zacharys Computer Science Blog by Zachary Kimball and used with permission of the author. All other rights reserved by the author.

JUnit Testing

Hello everyone,

For this week’s blog topic I will talk about JUnit: what it is, why it is important, why it is used, the features it offers, and more. First of all, what even is JUnit? JUnit is an open source testing framework for Java that allows programmers to write and then run automated tests. It is very useful for catching bugs early in development, when they are the least expensive to fix. Among its key features are its powerful testing abilities. It has simple annotations, making writing the tests even easier. It is intuitive, and with just a little practice anyone can get the hang of it. Similar to the happy path tests we learned about with behavioral testing, JUnit encourages normal operations to be tested first. It also supports negative cases and boundary tests.

The blog that I read was really useful, as it not only explained what JUnit is but also recommended some good practices for new programmers. For example, they advise testing one behavior at a time. This is important because you want to test a single aspect of the code and then move on to the other parts of it. You should also use descriptive test names. This is helpful because a clear name explains directly what you are testing for, eliminating confusion and the chance of writing the same test twice. Another good piece of advice from the author of the blog is to write tests that are independent: different tests should not depend on each other’s results in order to run correctly. Lastly, you should always try to test the edge cases: the boundary conditions of the code and unexpected inputs. Your project should be ready to handle anything; even if an input does not make sense, the program should handle it correctly and guide the user in the right path.

The blog also gives a detailed tutorial on how to install JUnit, with step-by-step instructions and examples included, and it teaches how to perform automated testing, even in the cloud. At the end, it offers a FAQ section clearing up any bit of confusion that readers might have. This is a great blog that I recommend everyone read. It is useful for programmers of all levels, from beginners to the more experienced.
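As a rough sketch of what those practices can look like in JUnit 5 (the Account class below is invented purely for illustration and is not taken from the blog):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical class under test, used only to illustrate the practices above.
class Account {
    private int balance;

    void deposit(int amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        balance += amount;
    }

    int getBalance() {
        return balance;
    }
}

class AccountTest {

    // Descriptive name: it says what behavior is expected, not just "test1".
    @Test
    void depositIncreasesBalanceByDepositedAmount() {
        Account account = new Account();   // fresh object: no shared state between tests
        account.deposit(50);
        assertEquals(50, account.getBalance());
    }

    // One behavior per test: rejection of bad input gets its own method.
    @Test
    void depositOfZeroIsRejected() {
        Account account = new Account();
        assertThrows(IllegalArgumentException.class, () -> account.deposit(0));
    }

    // Edge case: the smallest valid deposit still works.
    @Test
    void depositOfOneIsAccepted() {
        Account account = new Account();
        account.deposit(1);
        assertEquals(1, account.getBalance());
    }
}

Each test creates its own Account, so the tests stay independent and can run in any order.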

In conclusion, JUnit testing is a fundamental skill to learn if you want to become a great Java developer. It helps you verify how your code behaves and helps you catch and fix any bugs that might come up at any point during development. Mastering JUnit will not only improve your code quality but also give you a boost of confidence when you make changes, knowing that JUnit will be there to catch any bugs.

Source:
https://testgrid.io/blog/junit-testing/

From the blog Elio's Blog by Elio Ngjelo and used with permission of the author. All other rights reserved by the author.

Static Testing

Article: https://www.browserstack.com/guide/static-software-testing-tools

This blog will focus on static testing. Static testing is the inspection of a program’s code without executing it. Static testing happens at an early stage of creating a program, while the program is still being developed and code can be adjusted before the final product. Reviewing a program’s files before its release saves a company money, because the program does not have to be reworked. Review analysis and static analysis are two different methods of static testing. An informal review is a type of review analysis in which team members provide code feedback, while static code analysis uses software tools to detect coding errors. Static testing is used multiple times while coding a program. When a project is first assigned, whether in a professional or academic setting, programmers need to understand the requirements of the project. Usually after the instructions have been reviewed, coding would be the next step, but static testing adds the extra step of checking whether the program has the documents needed for coding. Throughout the development of a program, a common practice is running the program, whether with unit testing or by running the whole program, so a programmer knows whether the program is error free. Static testing at the coding stage can come either from feedback from team members or from software tools such as Soot and Checkstyle. The BrowserStack Code Quality tool is one software tool for static testing. In my programming experience, I am used to having to manually fix my errors. This past week, I was introduced to new Visual Studio Code software tools for finding coding errors. The BrowserStack Code Quality tool is one example of automated static testing, where static testing is done through software tools.

BrowserStack Code Quality has an assistant that recommends how large classes in a program can be split into smaller classes. BrowserStack Code Quality can be downloaded for Android Studio, VS Code, or IntelliJ, and provides a quick program scan with feedback. Another software tool is Checkstyle, which only works with Java. Developers using Checkstyle learn about errors while writing code, rather than after a program has executed. Developers who are using Checkstyle can define coding conditions, and the program is checked for following those defined conditions. Recently, I learned how to use PMD in Visual Studio Code. PMD detects logical errors in code such as uninitialized variables and unused code. PMD has a copy-paste detector that identifies duplicated code. PMD supports more than 10 different programming languages.
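For a rough illustration, here is a small made-up Java class containing the kinds of issues static analysis tools like PMD or Checkstyle typically report before the code is ever run (the exact rule names and messages vary by tool and configuration):

public class ReportPrinter {

    public void printReport(String report) {
        // Assigned but never used below: commonly flagged as an unused local variable.
        String header = "=== Report ===";

        try {
            System.out.println(report.trim());
        } catch (NullPointerException e) {
            // Empty catch block: the error is silently swallowed, which most
            // static analyzers warn about as a likely logic mistake.
        }
    }
}

Neither issue stops the code from compiling or running, which is exactly why a static check at the coding stage is useful.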

From the blog jonathan's computer journey by Jonathan Mujjumbi and used with permission of the author. All other rights reserved by the author.

Comprehending Program Logic with Control Flow Graphs

This week I am discussing a blog post titled “Control Flow Graph In Software Testing” by Medium user Amaralisa. When I first read through this post, it immediately clicked with what we have been studying in class about different path testing types, which capture program logic in a similar way. The comparison between CFGs and a map used to explore the world or get from point A to point B is incredibly useful, as it explains the need for a guide to the many execution paths of a program. The writer made the topic easy to understand while still including the technical information required to apply these techniques moving forward.

This post helped me see the bigger picture in terms of the flow of a program and how the logic is truly working behind the code we write. It tied directly into what we’ve covered about testing strategies, especially white-box testing, which focuses on knowing the internal logic of the code. The connection between the CFG and how it helps test different code paths felt like a practical application of what we’ve been reading about in our course.

It also made me think about how often bugs or unexpected behavior aren’t because the output is flat-out wrong, but because a certain path the code takes wasn’t anticipated. Seeing how a Control Flow Graph can lay out those paths visually gives me a better sense of how to test and even write code more deliberately. It’s one thing to read through lines of code and think you understand what’s going on, but when you actually map it out, you might catch paths or branches you hadn’t considered before. I could definitely see this helping with debugging too—like, instead of blindly poking around trying to find what’s breaking, I can trace through the flow and pinpoint where things start to fall apart.
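To sketch what that looks like, here is a tiny invented Java method annotated with the nodes of its control flow graph and the four paths a tester would want to consider (the example is mine, not from the blog post):

public class ShippingCalculator {

    public double shippingCost(double orderTotal, boolean expedited) {
        double cost = 5.0;                 // node 1: entry

        if (orderTotal >= 100.0) {         // node 2: decision
            cost = 0.0;                    // node 3: free shipping branch
        }

        if (expedited) {                   // node 4: decision
            cost += 10.0;                  // node 5: expedited surcharge branch
        }

        return cost;                       // node 6: exit
    }

    // Paths through the graph:
    //   1 -> 2 -> 4 -> 6              (no free shipping, no surcharge)
    //   1 -> 2 -> 3 -> 4 -> 6         (free shipping only)
    //   1 -> 2 -> 4 -> 5 -> 6         (surcharge only)
    //   1 -> 2 -> 3 -> 4 -> 5 -> 6    (both)
}

Only by drawing the graph did I notice that two independent decisions already produce four distinct paths, each of which deserves its own test.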

I also really liked that the blog didn’t try to overcomplicate anything. It stuck to the fundamentals but still gave enough technical depth that I felt like I could walk away and try it on my own. It gave me the confidence to try using CFGs as a tool not just during testing but also during planning, especially for more complex logic where things can easily go off track.

Moving forward, I am going to spend time practicing using CFGs as a part of my development process to ensure that I am taking advantage of tools that are designed to help. Whether it’s for assignments, personal projects, or even during team collaboration, I think having this extra layer of structure will help catch mistakes early and improve the quality of the final product. It feels like one of those concepts that seems small at first, but it shifts the way you approach programming altogether when applied properly.

From the blog cameronbaron.wordpress.com by cameronbaron and used with permission of the author. All other rights reserved by the author.

Spies and Their Role in Software Testing

As I was doing some at-home research on stubs and mocking for one of my courses, I came across the idea of spies. Unlike stubs and mocks, which allow the program and tests to run by giving canned answers or standing in for unfinished code, spies fill a much needed but previously unfilled role.

Spies are used to ensure a function was called. It’s of course more in-depth than this, but that’s its basic function.

On a deeper level, a spy can tell not only whether a call to a function was made, but how many calls were made, what arguments were passed, and whether a specific argument was passed to the function.

Abby Campbell has great examples of these in her blog, “Spies, Stubs, and Mocks: An Introduction to Testing Strategies,” where she displays easy-to-understand code. I would definitely recommend taking a look at them; her blog also goes in depth on stubs and mocking.

When writing test cases, the value of adding a spy to ensure a thorough case can’t be overstated. Imagine a simple test case that uses a stub: without a spy, you can’t be sure the correct function was called unless every function returns a different value, which would be inefficient to set up. By using a spy, the function called is checked, the arguments passed are checked, and the output can even be checked as well, leaving little to no room for error in the test case aside from human error.
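Here is a minimal sketch of that idea using Mockito in Java, spying on a plain ArrayList (this example is my own, not taken from Abby Campbell’s post):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class SpyExampleTest {

    @Test
    void spyRecordsCallsAndArguments() {
        // Wrap a real ArrayList in a spy: it still behaves like a normal list,
        // but Mockito records every call made to it.
        List<String> names = spy(new ArrayList<String>());

        names.add("Alice");
        names.add("Bob");

        // The real behavior still happened.
        assertEquals(2, names.size());

        // The spy can confirm which method was called, how many times,
        // and with which arguments.
        verify(names, times(2)).add(anyString());
        verify(names).add("Bob");
    }
}

The assertions on the real behavior and the verify calls on the recorded interactions work side by side, which is exactly the gap spies fill.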

With the addition of spies to our arsenal of software testing tools, we check off the need for a reliable way to verify correct function calls and arguments. I plan on carrying this new tool with me throughout the rest of my career. It allows for much more efficient, effective, and sound testing.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.

Behavioral Testing

Source: https://keploy.io/blog/community/understanding-different-types-of-behavioral-unit-tests

Behavioral unit tests validate how code units operate under certain conditions, allowing developers to ensure that the software or application is working as it should. Behavioral unit tests focus on specific pieces of code. They help developers find bugs early and work based on real scenarios. They lead to improved code quality because this kind of unit testing ensures that the software meets the user’s expectations, and it allows for easier refactoring. The key types of behavioral unit tests include happy path tests, negative tests, boundary tests, error handling tests, state transition tests, performance-driven tests, and integration-friendly tests. The one that caught my attention was the performance-driven test. These tests validate performance under specified constraints, such as handling 10,000 queries. The test is run to ensure that performance remains acceptable under various loads. This test caught my attention because in my cloud computing class I was loading files with millions of data entries and the performance suffered, which highlights the importance of unit testing under conditions such as these.
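As a small sketch of what a few of those types look like in practice, here is a made-up AgeValidator class with one JUnit test per behavior (happy path, negative, boundary, and error handling); the class and its thresholds are assumptions for illustration only:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical unit under test, invented for illustration.
class AgeValidator {
    boolean isAdult(int age) {
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("age out of range");
        }
        return age >= 18;
    }
}

class AgeValidatorBehavioralTest {

    private final AgeValidator validator = new AgeValidator();

    @Test
    void happyPath_typicalAdultIsAccepted() {
        assertTrue(validator.isAdult(30));
    }

    @Test
    void negativeCase_minorIsRejected() {
        assertFalse(validator.isAdult(12));
    }

    @Test
    void boundaryCase_exactlyEighteenCountsAsAdult() {
        assertTrue(validator.isAdult(18));
    }

    @Test
    void errorHandling_impossibleAgeThrows() {
        assertThrows(IllegalArgumentException.class, () -> validator.isAdult(-5));
    }
}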

The difference between functional and behavioral unit tests is that functional tests validate the system’s function overall, whereas behavioral tests focus on specific pieces of code to make sure that they behave as expected under various conditions. Behavioral unit tests should be run every time code is built to make sure changes in the code don’t cause problems. Tools that can be used for this kind of testing include JUnit/Mockito for Java, pytest for Python, and Jest for JavaScript. I chose this article because we use JUnit/Mockito in class and thought it’d be wise to expand my knowledge of other unit tests. It’s good to get reassurance from all of the sources I’ve been learning from that unit testing is important, because it is very apparent that many different scenarios can cause many different problems in the production of code, software, and applications.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.