Category Archives: CS-443

Static and Dynamic Testing

Recently, I have been learning a lot about testing code, specifically static and dynamic testing methods. For this blog post, I wanted to research static and dynamic testing more deeply, along with their applications, especially given my limited experience with them up until now. While researching this topic, I came across a well-made podcast titled “Static Testing vs Dynamic Testing: What is the Difference?” by CTSS Academy on Spotify. I found this podcast to be a perfect choice to aid my research for the software quality assurance and testing course I am taking this semester, especially because of the speakers’ prior experience in a workplace focused on software testing, and the way they address an intended audience of prospective software testers.

STATIC TESTING: 

This involves examining software without ever running it, much like studying a blueprint. Static testing places a heavy emphasis on preventing bugs from existing in the first place, much like proofreading instructions you have been given before investing unrecoverable time and resources into the task. Initial requirement documents are reviewed here, along with all other outlines for how the project will be constructed. Source code can also be included, as well as the test cases and scripts needed when testing the software: really, anything generated during development except the execution of the code itself. The most basic form of static review is the “informal review,” something as simple as one co-worker looking over another co-worker’s code and giving feedback. A technical review is more formal and is essentially a peer review of the technical procedures or specifications used. Static testing also includes “walkthroughs,” or presentations of a project to peers that cover the step-by-step process taken when creating it. Static code review itself, a white-box technique, can include checking that the code follows the proper syntax of the language it is written in, as well as looking for any obvious security flaws.
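As a small illustration of analyzing code without ever running it, the Python sketch below parses a snippet of source with the standard ast module and flags a possible hard-coded secret. The check is deliberately crude and the snippet is my own invented example, but it shows the key idea: the code under review is only read, never executed.

```python
import ast

# Source code to inspect; note we never execute it, only parse it.
source = """
def divide(a, b):
    return a / b

password = "hunter2"
"""

tree = ast.parse(source)  # syntax errors would surface here, before any run

findings = []
for node in ast.walk(tree):
    # Flag assignments to names containing "password" -- a crude
    # stand-in for the security checks a real static analyzer performs.
    if isinstance(node, ast.Assign):
        for target in node.targets:
            if isinstance(target, ast.Name) and "password" in target.id.lower():
                findings.append(f"Possible hard-coded secret: {target.id}")

print(findings)  # → ['Possible hard-coded secret: password']
```

Real static analysis tools (linters, type checkers, security scanners) work on the same principle, just with far more rules.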

DYNAMIC TESTING:

This involves executing software and observing how it processes information and behaves. Dynamic testing is all about running code and tracking how it responds to specific situations: whether it performs as efficiently as it should, and whether it uses the resources it should. If static testing asks whether we are building something the right way, dynamic testing asks whether we are building the right thing. Unit testing is a basic place to start with dynamic testing: it covers individual units of code in isolation, using unit tests to make sure each part of the code does its one job properly. Integration testing is one step more complex than unit testing, involving multiple units and the ways they interact with each other. System testing refers to testing the entire system and how it functions as a single machine, assessing whether it meets all established criteria. Security testing, performance testing, and any other form of testing that requires running the software are considered dynamic testing as well.
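To make the unit-versus-integration distinction concrete, here is a minimal Python sketch. The two functions are hypothetical stand-ins of my own invention, not from the podcast; the point is that both tests actually run the code, which is what makes them dynamic.

```python
# Hypothetical "units" of an application, invented for illustration.
def add_tax(price, rate):
    """Unit: compute price plus tax, rounded to cents."""
    return round(price * (1 + rate), 2)

def total_with_tax(prices, rate):
    """Integration point: combines summing with add_tax."""
    return add_tax(sum(prices), rate)

# Unit test: exercises one unit in isolation by actually running it.
assert add_tax(100.0, 0.0625) == 106.25

# Integration test: exercises the two units working together.
assert total_with_tax([10.0, 20.0], 0.10) == 33.0

print("all dynamic checks passed")
```

A system test would go one level further still, driving the whole application end to end rather than individual functions.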

From the blog CS Blogs with Aidan by anoone234 and used with permission of the author. All other rights reserved by the author.

Test Driven Development

After learning about test driven development in class a few sessions ago, I found myself confused as to why this process would prove successful in software development, so I did some research and found a YouTube video from a channel called Fireship titled “Test-Driven Development // Fun TDD Introduction with JavaScript.” While, of course, JavaScript is not currently one of the programming languages that I primarily work with, the video proved informative and educational nonetheless. The video begins by emphasizing the importance of the phrase “Red, Green, Refactor” and describes test driven development as a technique where a programmer describes the behavior of the code before proceeding to implement it. “Red, Green, Refactor” describes the process of first writing a failing test, then writing just enough code to get the test to pass, then going back and optimizing the code in a manner that suits the project requirements.
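The cycle is easier to see in code. Here is a minimal sketch in Python rather than the video’s JavaScript, using a fizzbuzz function of my own choosing as the behavior under test:

```python
# RED: write the test first. At this point fizzbuzz does not exist,
# so running test_fizzbuzz() would fail with a NameError -- that's the "red".
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# GREEN: write the minimum implementation that makes the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# REFACTOR: with the test green, reshape the code freely and
# re-run the test after every change to make sure it stays green.
test_fizzbuzz()
print("green")
```

The test written in the red step is the "description of behavior" the video talks about; the implementation only exists to satisfy it.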

Although I have no doubt that the majority of the confusion around this topic comes from inexperience and being in the process of learning, I certainly had questions and confusion around how and why it makes sense to put forth an implementation process for code before even writing a single line. Conceptually it makes sense to lay out the general steps to a project and attempt to anticipate some of the issues that may come up beforehand, but something about testing code that doesn’t exist is a confusing concept for me.

The video talked about different types of testing at the different levels starting with unit testing and working into more complex testing methods. Each testing method is built to cater to a different type of program that is being tested.

Generally, after watching this, it sparked curiosity about what software development would look like without testing, and how the continual development of the testing process has allowed programmers not only to improve the way they work but to create more robust and successful outcomes in their code.

I think it’s interesting looking back at my first three or so years of computer science classes, learning to code in multiple languages with different use cases, styles, intents, and outcomes. Not one class, between the two schools I attended and the three years I was in school, worked with testing before the higher-level senior-year classes. This made me wonder why something as important and integral to software development as testing was left out of the learning process until so late.

From the blog CS@Worcester – The Struggle of Being a Female Student in CS by Noam Horn and used with permission of the author. All other rights reserved by the author.

Week 12- Test Driven Development

During week 12, we learned about test driven development. Test driven development is a testing method that outlines what functions need to be tested, writes the tests, then writes the code. It is an iterative process that goes through each test one at a time, building on what is already written. 

In class, we practiced the method by building a Mars Rover. We read what the rover should do and made a list of what we needed to test and build. We started with the easiest item on the list and wrote a test for it. Then, we wrote the minimum amount of code to make the test pass. We tweaked the test block if needed, and then ran the single test. Next, we repeated the process for the next easiest item on the list, and we will keep repeating it until all of the tests are written.
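As a rough illustration of that first step, here is a minimal, hypothetical slice of a rover in Python. The class name and behavior are my own stand-ins, not our actual class code: the assertions below were "written first," and the class contains only enough code to satisfy them.

```python
# A hypothetical minimal rover: just enough code to pass the first test.
class Rover:
    DIRECTIONS = ["N", "W", "S", "E"]  # counterclockwise order

    def __init__(self, facing="N"):
        self.facing = facing

    def turn_left(self):
        # Step counterclockwise through the direction ring.
        i = self.DIRECTIONS.index(self.facing)
        self.facing = self.DIRECTIONS[(i + 1) % 4]

# The test written before the class above existed:
rover = Rover("N")
rover.turn_left()
assert rover.facing == "W"
rover.turn_left()
assert rover.facing == "S"
print("turn_left passes")
```

The next item on the list (say, turning right or moving forward) would get its own failing test before any new code is added to the class.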

Test driven development focuses on just the tests before the actual code is written. It was a little confusing at first to think about how to write a test before the code, but once it clicked, it was easy. 

I found a website on test driven development and it described the process in a slightly different way. BrowserStack explains test driven development as a “red-green-refactor” cycle. The red phase is when just the test is written and it doesn’t pass because there’s no code. The green phase is when the bare minimum code is written for the test to run and it passes. The refactor phase is tweaking the tests and code so everything runs and passes. The cycle is run through every time a new test is written and repeats until all of the tests are written and pass. 

Test driven development is a very useful method for debugging. Since the tests are run with every new addition, bugs are found as they appear and do not slip through the cracks as easily. The method is also very efficient and allows the programmer to easily add new functions later in development. The iterative process is easy for the programmer to maintain, and provides a much larger testing scope than other methods.

I liked working with test driven development. I think the process is very organized and straightforward. I would definitely use this in the future for projects that require thorough tests and with functions that build on each other. 

Source referenced: https://www.browserstack.com/guide/what-is-test-driven-development

From the blog ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Write Code that’s Readable. Please.

Let’s talk about something that affects programmers at all levels, on all codebases and projects: readability. Specifically, the balance between code that is fast and resource efficient and code that is easily understood and maintained.

First, readability. It’s a given that code produced and delivered in any professional capacity is almost certainly not being written by a single developer. Therefore, good developers write code whose intended behavior others can understand and pick up quickly. Commenting, appropriate use of whitespace and indentation, and effective developer documentation can make code easy for anyone to work with.

Here’s an example of two Python code snippets that both return the summation of a given array, but are structured quite differently:

Code 1:

# Returns summation of given array of numbers
def sum_numbers_readable(nums):
   total = 0    # Initialize starting total
   for num in nums:   # Sum all numbers in array nums
      total += num
   return total   # Return sum

print(sum_numbers_readable([1, 2, 3, 4]))

Code 2:

def sum_numbers_fast(nums):
   return sum(nums)
print(sum_numbers_fast([1, 2, 3, 4]))

See the difference? Both pieces of code do the same thing and achieve the same output. Code 1, however, contains comments describing the intended behavior of the function and the expected parameter type, as well as what each line in the function does. Blank lines and proper use of whitespace also divide the code into relevant sections. Code 2, on the other hand, has no comments, poor use of whitespace (or as poor as Python will allow), and relies on Python’s built-in sum() function. While the intended behavior of this function is pretty clear, that may not always be the case.

Take a more extreme example, just for the fun of it: go write or refactor some piece of Java code to be all on the same line. It’ll compile and run fine. But then go ask someone else to take the time to review your code, and see what their reaction is.

On the other hand, we have code efficiency and speed, generally measured in runtime or operations per second. Comparing our two code snippets with a tool like perfpy.com, we find that the second, less readable snippet executes faster, with a higher operations-per-second figure (931 nanoseconds vs. 981 nanoseconds, and 1.07 vs. 1.02 op/sec; not a huge difference at all, but one that scales with program complexity).
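Tools like perfpy.com aside, the same comparison can be run locally with Python’s standard timeit module. The data size and run count below are arbitrary choices of mine, so absolute numbers will differ from the ones above, but the relative ordering is what matters:

```python
import timeit

def sum_numbers_readable(nums):
    total = 0
    for num in nums:
        total += num
    return total

def sum_numbers_fast(nums):
    return sum(nums)

data = list(range(1000))  # arbitrary test input

# timeit returns total seconds for the given number of calls; smaller is faster.
t_readable = timeit.timeit(lambda: sum_numbers_readable(data), number=10_000)
t_fast = timeit.timeit(lambda: sum_numbers_fast(data), number=10_000)

print(f"loop: {t_readable:.4f}s  builtin: {t_fast:.4f}s")
```

Measuring on your own machine, with your own data sizes, is always more representative than a one-off benchmark site number.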

This gives us some perspective on the balance between performance and readability/maintainability. It also helps to keep in mind that performance and readability are both relative.

Looking back at our two Python snippets, most developers with experience would opt for the second design style. It’s faster, but also easily readable provided that you understand how Python methods work. However, people just learning programming would probably opt for the more cumbersome but readable first design. With regards to performance, the difference in runtime could be a nonissue or it could be a catastrophic slowdown. It depends entirely on the scope of the product and the needs of the development team.

As a final rule of thumb, the best developers balance readability and efficiency as best they can, while above all else remaining consistent in their style. When looking at possible code optimizations, consider how the balance between readability and performance could shift. The phrase “good enough” tends to have a negative connotation, but if your code is readable enough that other team members can work on it, and fast enough that it satisfies the product requirements, “good enough” is perfect.

References:
Lazarev-Zubov, Nikita. “Code Readability vs. Performance: Shipbook Blog.” Shipbook Blog RSS, 16 June 2022, blog.shipbook.io/code-readability-vs-performance.

From the blog Griffin Butler Computer Science Blog by Griffin Butler and used with permission of the author. All other rights reserved by the author.

Understanding Smoke Testing in Software Development

In software development, a build has to be stable before more comprehensive testing can begin if a project is to succeed. One way of guaranteeing this is smoke testing, otherwise known as Build Verification Testing or Build Acceptance Testing. Smoke testing is an early checkpoint that verifies the major features of the software are functioning as desired before other, more comprehensive testing is done.

What is Smoke Testing?
Smoke testing is a form of software testing that involves a quick, surface-level check of an application’s most crucial features to determine whether the build is stable enough for further testing. It is a minimal set of tests created to verify that the core features of the application are functioning. Smoke tests are generally executed once a new build is promoted to a quality assurance environment, and they act as an early warning system for whether the application is ready for further testing or requires immediate correction.

Important Features of Smoke Testing

-Level of Testing: Smoke tests focus on the most important and basic features of the software, without exploring every functionality.
-Automation: Smoke testing is commonly automated, especially under time constraints, to allow quick, repeatable runs.
-Frequency: Smoke testing is normally run after every build or significant code change, allowing early identification of major issues.
-Time Management: The testing itself is quick by nature, so it saves valuable time by catching critical issues early.
-Environment: Smoke testing is typically performed in an environment that mimics production, so that test results are as realistic as possible.
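To make these features concrete, here is a sketch of what a tiny automated smoke suite might look like in Python. The three checks are invented stand-ins for a real database connection, login flow, and homepage render; a real suite would call the actual application.

```python
# Hypothetical stand-ins for an application's critical paths.
def connect_db():
    return True  # stand-in for "database reachable"

def login(user, password):
    return user == "admin" and password == "secret"  # stand-in for auth

def load_homepage():
    return "<html><body>Welcome</body></html>"  # stand-in for page render

def smoke_test():
    """Run every critical-path check; report PASS or the failed checks."""
    checks = {
        "database": connect_db(),
        "login": login("admin", "secret"),
        "homepage": load_homepage().startswith("<html"),
    }
    failed = [name for name, ok in checks.items() if not ok]
    # The build is rejected immediately if any critical path is broken.
    return ("PASS", []) if not failed else ("FAIL", failed)

print(smoke_test())  # → ('PASS', [])
```

Wired into a CI pipeline, a suite like this runs on every build and gates whether deeper test phases are even attempted.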

Goal of Smoke Testing

The primary objectives of smoke testing are:

-Resource Optimization: Avoid wasting resources and time on testing when core functionalities are broken.
-Early Detection of Issues: Identify significant issues early so that they can be fixed more quickly.
-Refined Decision-Making: Provide a clear basis for deciding whether the build is ready for thorough, detailed testing.
-Continuous Integration: Ensure every new build meets basic quality standards before it is added to the master codebase.
-Pragmatic Communication: Give rapid feedback to development teams, allowing them to communicate clearly about build stability.

Types of Smoke Testing
There are several types of smoke tests, depending on the methodology chosen and the setting in which it is practiced:
-Manual Testing: Test cases are written and executed manually by testers for each build.
-Automated Testing: Automation tools run the process on their own, which is best suited to projects with tight deadlines.
-Hybrid Testing: Combines automated and manual tests to capitalize on the strengths of each approach.
-Daily Smoke Testing: Conducted on a daily basis, especially in projects with frequent builds and continuous integration.
-Acceptance Smoke Testing: Focused specifically on verifying whether the build meets the key acceptance criteria defined by stakeholders.
-UI Smoke Testing: Tests only the user interface of an application to verify that basic interactions work.

Applying Smoke Testing at Various Levels
Smoke testing can be applied at various levels of software testing:
-Acceptance Testing Level: Ensures that the build meets the minimum acceptance criteria established by the stakeholders or client.
-System Testing Level: Ensures that the system as a whole behaves as expected when all modules work together.
-Integration Testing Level: Ensures that integrated modules work and communicate as expected when combined.

Advantages of Smoke Testing
Smoke testing has several advantages, including:
-Quick Execution: It is easy and quick to run, making it ideal for frequent builds.
-Early Detection: It helps detect defects at the earliest stage, preventing money from being wasted on faulty builds.
-Improved Software Quality: By detecting issues early, smoke testing leads to better software quality.
-Minimized Risk of Failure: Detecting core faults in earlier phases minimizes the risk of failure in subsequent testing phases.
-Time and Effort Conservation: Both time and effort are conserved, as unstable builds are not put through pointless testing.

Disadvantages of Smoke Testing
Although smoke testing is useful in many respects, it has some disadvantages too:
-Limited Coverage: It checks only the most critical functions and does not cover other potential issues.
-Manual Testing Drawbacks: Done manually, it can be time-consuming, especially for larger projects.
-Inadequate for Negative Tests: Smoke testing typically does not involve negative testing or invalid input scenarios.
-Minimal Test Cases: Since it only checks basic functionality, it may fail to identify all possible issues.

Conclusion
In conclusion, smoke testing is an important practice at the early stages of software development. It decides whether a build is stable enough to go for further testing, saving time and resources. By identifying major issues early in the development stage, it facilitates an efficient and productive software testing process. However, it should be remembered that smoke testing is not exhaustive and has to be supported by other forms of testing in order to ensure complete quality assurance. 

Personal Reflection

Looking at the concept of smoke testing, I see the importance of catching issues early in the software development process.
It’s easy to get swept up in the excitement of rolling out new features and fully testing them, but if the foundation is unstable, all the subsequent tests and optimizations can be pointless. Smoke testing, in this sense, serves as a safety net, getting the critical functions running before delving further into more rigorous tests. I think the idea of early defect detection resonates with my own working style.

Since I like to fix small issues as they arise rather than letting them escalate into big problems, I appreciate that smoke testing allows development teams to solve “show-stoppers” early on, preventing wasted time, effort, and resources down the line. Though it does not pick up everything, its simplicity and speed of execution can save developers from spending time testing a defective product, resulting in a smooth and efficient workflow. The process seems imperative for maintaining a rock-solid, healthy product, especially when new builds are rolled out frequently.
The benefits of early problem detection not only make software better, but also stimulate a positive feedback loop of constant improvement within the development team.

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

HOW DECISION TABLES CHANGED MY SOFTWARE TESTING MINDSET.

If you’ve ever written test cases based on your gut feeling, you’re not alone. I used to write JUnit tests by simply thinking, “What might go wrong?” While that’s a decent start, I quickly realized that relying on intuition alone isn’t enough, especially for complex systems where logical conditions stack up fast.

That’s when I understood the magic of Decision Table-Based Testing.

What Is Decision Table Testing?

A decision table is like a truth table, but for real-world logic in your code. It lays out different conditions and maps them to the actions or outcomes your program should take. By organizing conditions and results in a table format, it becomes much easier to identify which combinations of inputs need to be tested and which don’t. It’s especially helpful when you want to reduce redundant or impossible test cases, when you have multiple input variables (like GPA, credits, user roles, etc.), and when your program behaves differently depending on combinations of those inputs.

Applying Decision Tables in Real Time

For a project I happened to work on, we analyzed a simple method: boolean readyToGraduate(int credits, double gpa). This method is meant to return true when credits ≥ 120 and GPA ≥ 2.0. We had to figure out which inputs would cause a student to graduate, not graduate, or throw an error—such as when the GPA or credit values were outside of valid ranges.

Instead of testing random values like 2.5 GPA or 130 credits, we created a decision table with all the possible combinations of valid, borderline, and invalid values.

We even simplified the process using equivalence classes, like:

  • GPA < 0.0 → invalid
  • 0.0 ≤ GPA < 2.0 → not graduating
  • 2.0 ≤ GPA ≤ 4.0 → eligible to graduate
  • GPA > 4.0 → invalid

By grouping these ranges, we reduced a potential 256 test cases to a manageable 68 and even further after combining rules with similar outcomes.
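The method in the post has a Java signature, but the same idea can be sketched in Python, with a small decision table driving the tests. The valid credit range (0 to 200) is my own assumption for illustration; only the GPA classes above come from our actual analysis.

```python
# Python sketch of readyToGraduate; the 0-200 credit range is an
# assumed validity bound, not part of the original spec.
def ready_to_graduate(credits, gpa):
    if not (0 <= credits <= 200) or not (0.0 <= gpa <= 4.0):
        raise ValueError("input out of valid range")
    return credits >= 120 and gpa >= 2.0

# Decision table: one row per combination of equivalence classes,
# using a representative value from each class.
table = [
    (130, 3.0, True),        # enough credits, eligible GPA
    (130, 1.5, False),       # enough credits, GPA too low
    (100, 3.0, False),       # too few credits, eligible GPA
    (100, 1.5, False),       # both insufficient
    (130, 5.0, ValueError),  # invalid GPA (> 4.0)
    (-10, 3.0, ValueError),  # invalid credits (negative)
]

for credits, gpa, expected in table:
    if expected is ValueError:
        try:
            ready_to_graduate(credits, gpa)
            raise AssertionError("expected a ValueError")
        except ValueError:
            pass
    else:
        assert ready_to_graduate(credits, gpa) == expected

print(f"{len(table)} decision-table rows verified")
```

Each row of the table is one test case, and the table itself documents exactly which combinations are covered, which is the audit trail mentioned below.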

Well, you must be wondering why this even matters in real projects. It matters because in real-world applications, time and efficiency are everything. Decision tables help you cover all meaningful test scenarios, cut down on unnecessary or duplicate test cases, reduce human error and missed edge cases, and provide a clear audit trail of your testing logic.

If you’re working in QA, development, or just trying to pass that software testing class, mastering decision tables is a must-have skill. Switching from intuition-based testing to structured strategies like decision tables has completely shifted how I write and evaluate test cases. It’s no longer a guessing game—it’s a methodical process with justifiable coverage. And the best part? It saves a ton of time. Next time you’re designing tests, don’t just hope you’ve covered the edge cases. Prove it—with a decision table.

Have you used decision tables in your projects? Drop a comment below and share your experience!

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.

Black Box Testing

URL: https://www.testscenario.com/black-box-testing/

Black box testing is a great tool for improving functionality, identifying issues within the user interface, and, in many cases, it does not require any programming knowledge. It also plays a key role in validating system acceptance, pre-launch stability, security, and third-party integrations. These aspects make black box testing a highly valuable method not only for developers but also for testers and stakeholders.

It offers great advantages when it comes to presenting results to the customer in order to gain approval. Because it follows a more user-oriented approach, black box testing produces results that are of interest to clients and stakeholders. This approach is referred to as Functional Testing. Additionally, black box testing can be used to assess user-friendliness, usability, and reliability—all of which help ensure that the software runs smoothly and provides meaningful feedback. This type of testing is known as Non-Functional Testing.

Regression Testing, another form of black box testing, ensures that new features or updates do not break any existing functionalities. This is especially important when a software release includes significant or breaking changes. User Acceptance Testing (UAT) typically takes place in the final phase of the testing cycle, where end users verify whether the software meets the necessary business requirements before its official release. Lastly, Security Testing serves as a method of vulnerability assessment, aiming to expose a system’s weaknesses and protect it against potential cyber threats.

The main reason I chose this article is because black box testing, to me, always seemed a little meaningless. Why would anyone test something without reading the code? But after reading the article, I realized that developers actually perform this kind of testing quite often—especially in web development. We constantly test various inputs without necessarily diving into the source code. The article also helped me understand that black box testing is an excellent tool for non-developers, allowing them to effectively test and better understand the product without having to read hundreds of lines of code.

From the blog CS@Worcester – CS Today by Guilherme Salazar Almeida Nazareth and used with permission of the author. All other rights reserved by the author.

Code Reviews

Source: https://about.gitlab.com/topics/version-control/what-is-code-review/

A code review is a peer review of code that helps developers validate the code’s quality before it is merged and shipped to production. Code reviews are done to identify bugs, increase the overall quality of the code, and ensure that the developers of the product understand the source code. They allow for a “second opinion” on the functionality of code before it is actually implemented in the systems. This prevents non-functional code from being implemented in the product and potentially causing issues or bottlenecks in performance. Ensuring that code is always reviewed before merging encourages developers to think more critically about their own code, and allows reviewers to gain more domain knowledge about the product’s systems. Code reviews prevent unstable code from reaching customers, which would hurt credibility and overall act as a detriment to the business. The benefits of code reviews are as follows: knowledge is shared among developers, bugs are discovered earlier, a shared development style/environment is established, security is enhanced, collaboration increases, and most importantly, code quality improves. As with everything, there are still disadvantages. Code reviews lead to longer shipping times, pull focus and manpower from other parts of the process, and large code reviews mean longer review times. But the benefits far outweigh these disadvantages.

Code reviews can be implemented in multiple ways, through pair programming, over-the-shoulder reviews, tool-assisted reviews, or even email pass-around. Gitlab offers an interesting feature where developers can require approval from reviewers before their code can be merged. I chose this article because I use this feature frequently in my capstone class. My teammates and I review each other’s changes in the codebase through this Gitlab feature and, if needed, go over these changes in class whether it be through pair programming or over-the-shoulder reviews.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

CS443: A Wishlist for Automation and Productivity

You ever think about how being a software engineer is kind of like working in a factory?

Mill & Main in Maynard, where I did a summer fellowship a few years ago. Fun fact: this building and the rest of the town feature prominently in Knives Out (2019). True story!

I mean that quite literally. Especially here in Massachusetts, where primo office space quite frequently gets hollowed out of old textile mills. (The old David Clark building by the intermodal port, and a slew of defense contractors in Cambridge-Braintree, my old workplace included, come to mind.)

In some ways, the comparison isn’t unmerited. I don’t think it’s far-fetched to say that the focus of industry is to deliver product.

Okay, but how?

Last week, I wrote about the failure of the Spotify model — specifically, their implementation of large-scale Agile-based DevOps. You can read more about that here.

The impetus for this week’s blog is ‘what-if’; if, instead of Spotify’s focus on large-scale Agile integration, we approached DevOps (in a general sense) from the bottom-up, with a clear emphasis on software tools and freeform, ad-hoc team structure. What can we use, what can we do to effect a stable and logical working environment?

Just one quick disclaimer: this is bound to be biased, especially in terms of what I’ve seen work in industry. Small, tight-knit teams and relatively flat hierarchies. This won’t work for every situation or circumstance — and by sidestepping the issue of Agile at scale, I feel like I’m ignoring the issues endemic to Spotify’s structure.

Still, I figure it’s worth a shot.

Issue Hub: Atlassian Jira

The first thing we’ll need is an issue tracker. Atlassian doesn’t do a very good job at marketing its products to the non-corporate world, but it’s very likely that almost everyone reading this post has used an Atlassian product at some point or another: Trello, Bitbucket, and, best of them all, Jira. Think of it as a team whiteboard, where we can report on bugs, update our wikis, and view the overall health of our build, all within one web server.

Version Control: Subversion

Subversion is going to be our version control software. Although this doesn’t have all of the downstream merging capability of Git, its centralized nature actually works to our benefit; the specific combination of Jenkins, Jira, and SVN form a tightly-knit build ecosystem, as we will see.

CI Automation: Jenkins

Jenkins is a continuous integration (CI) and build automation utility which will run health checks on downstream builds before they’re committed to the nightly build, and then to the master build overnight. We’ll implement all of our tests and sanity checks within it, to ensure that no one pushes bad code. If, by some miracle, something does get through, we can revert those changes—another handy feature.

How does this work?

SVN repo → Jenkins (throughout-day staging, then end-of-day nightly build, then overnight master) → Jira (for reports and long-term progress tracking).

Does this all work?

In a word, hopefully. The social contract between you and a team of four or five people is much simpler to fulfill than that of you and the Tribe in the Spotify model. (You only have to track the work of several people, as opposed to almost everyone on-campus with the Tribal model).

There are commitments and onboarding requirements to a system like this, too, as there was with the Tribal model, but they’re not as pronounced, especially since we aren’t scaling our structure beyond this one team.

I think what is especially true of the workplace is that no two teams are alike, and it’s kind of crazy to assume that they are, which is exactly what Spotify did. How is it worthwhile to tell people who they should be working with, instead of letting them figure that out on their own?

Rather, by placing constraints on how the work is done (which is what we’re doing here—the emphasis on software as opposed to structure) we can get better results by letting people figure out how to get from Point A to Point B, assuming we properly define both A and B.

Between last week and now: a lot of thoughts to digest.

Kevin N.

From the blog CS-443 – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.

What I Learned About QA: A Computer Science Student’s Take on Real-World Testing Practices

I recently read the article “Streamlining the QA Process: Best Practices for Software Quality Assurance Testing” published by KMS Technology. As a college student studying computer science and still learning the ins and outs of software testing, I found this article especially helpful. It gave me a clearer understanding of what quality assurance (QA) really looks like in real-world software projects.

I chose this article because I’ve been trying to get a better grasp on how testing fits into the bigger picture of software development. A lot of what we learn in class focuses on writing code, but not always on making sure that code actually works the way it’s supposed to. This article breaks down what can go wrong in the testing process and how to avoid those issues, which is something I know I’ll need as I continue learning and working on team projects.

The article talks about a few key challenges that QA teams run into:

Unclear Requirements – This one really stood out to me. The article explains that if the project requirements aren’t clearly defined, testing becomes almost impossible. How can you verify if something works if you’re not even sure what it’s supposed to do? It made me realize how important it is to ask questions early on and make sure everyone’s on the same page before writing code.

Lack of Communication – The article also highlights how communication gaps can mess up testing. If developers and testers aren’t talking regularly, bugs can slip through the cracks. As someone who’s worked on class group projects where communication wasn’t great, I totally see how this could happen on a larger scale.

Skipping or Rushing Testing – The article warns against rushing through testing or treating it like an afterthought. I’ve definitely been guilty of this in my own assignments—leaving testing until the last minute, which usually results in missing bugs. The article suggests integrating testing throughout development, not just at the end, and that’s something I want to start practicing more.

Reading this article made me reflect on my own experience so far. In one of my programming classes, our final project had a vague prompt and my group didn’t ask enough questions. We ended up spending extra time rewriting parts of our code because the requirements kept changing. After reading this article, I see how important it is to define everything early and communicate often.

I also plan to be more intentional about testing as I continue to build projects. Instead of waiting until the code is “done,” I want to get into the habit of testing as I go and making sure I understand the expected behavior before writing a single line.

Overall, this article helped me understand why QA is such a critical part of software development—not just something to tack on at the end. If you’re also a student learning about testing, I recommend giving it a read: Streamlining the QA Process: Best Practices for Software Quality Assurance Testing.

From the blog CS@Worcester – Zacharys Computer Science Blog by Zachary Kimball and used with permission of the author. All other rights reserved by the author.