Category Archives: CS-443

Starting My Journey in Software Process Management

Hello everyone, my name is Rick Djouwe, and this semester I am also taking Software Process Management. While some of my other computer science courses focus on the technical side of software development, like design, coding, and architecture, this class emphasizes the processes, management strategies, and professional practices that ensure software projects succeed.

What This Course is About

Software Process Management is designed to explore the methods and tools used to manage software projects from start to finish. Topics include:

  • Version control and collaboration tools for effective teamwork.
  • Software process models (from agile to large-scale iterative methodologies).
  • Project management skills such as planning, measuring progress, estimating costs, and managing risks.
  • Software licensing and contracts, and an introduction to intellectual property.
  • Coding standards, documentation standards, and code reviews to ensure consistency and quality.
  • Software maintenance and testing as ongoing parts of the development lifecycle.

In short, this course highlights the practices that make the difference between a project that simply “works” and one that is well-managed, scalable, and sustainable.

Skills and Outcomes

By the end of this course, I will be able to:

  • Gather and prioritize requirements through communication and negotiation with stakeholders.
  • Develop project plans and track progress to ensure goals are met on time and within budget.
  • Apply management techniques in both agile and larger-scale development contexts.
  • Analyze needs and goals to make informed decisions about software solutions.
  • Understand contracts, licensing, and professional ethics within the software industry.

These skills go hand-in-hand with the Computer Science program outcomes, such as analyzing problems, applying ethical reasoning, and demonstrating leadership and effective teamwork.

Why This Matters to Me

As I prepare for a career as a software engineer, this course will strengthen my ability not only to contribute technically, but also to lead and manage software projects effectively. Understanding process management is critical in real-world environments, where collaboration, deadlines, and accountability are just as important as writing clean code.

I also see a strong connection to my current role at The Hanover Insurance Group, where teamwork, version control, documentation, and project management practices are essential to delivering quality solutions. What I learn in this class will help me bring even more value to my work, both now and in the future.

I look forward to exploring how different methodologies shape the software development lifecycle, and how project management skills complement technical expertise. My goal is to come out of this course not only as a better developer, but also as someone prepared to guide teams, manage projects, and ensure successful outcomes.

I’m also excited to meet everyone in this class and learn from each other’s perspectives and experiences as we grow together throughout the semester.

From the blog Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

Welcome to My Journey in CS 343: Software Construction, Design & Architecture

Hello everyone, my name is Rick Djouwe, and this semester I am beginning CS 343: Software Construction, Design & Architecture. I am truly excited for this class because it represents the next step in strengthening my ability to think beyond coding and focus on building well-structured, scalable, and maintainable software systems.

What This Course is About

CS 343 covers a wide range of essential topics in modern software development, including:

  • Design principles such as abstraction, encapsulation, inheritance, and polymorphism.
  • Best practices like SOLID, DRY (“Don’t Repeat Yourself”), and YAGNI (“You Ain’t Gonna Need It”).
  • Design patterns that provide reusable solutions to common problems.
  • Software architectures and frameworks, including REST API design.
  • Refactoring, code smells, and concurrency, which improve software quality and longevity.
  • Modeling and documentation tools like UML, which ensure clear communication of design decisions.

In short, this course is not just about writing code; it’s about learning to think like a software engineer who can approach problems critically, design solutions thoughtfully, and work effectively with others.

Skills and Outcomes

Through CS 343, I will gain valuable experience in:

  • Collaborating with stakeholders to design, test, and deliver software systems.
  • Applying professional judgment and staying current with evolving tools and practices.
  • Organizing projects using proven methodologies and team processes.
  • Communicating complex technical concepts clearly, both in writing and orally.

These outcomes connect directly to the broader goals of my Computer Science major: analyzing problems, building solutions, and developing the professional skills needed to succeed in the field.

Why This Matters to Me

As someone pursuing a career as a software engineer specializing in artificial intelligence, this course will help me strengthen the foundations of software design and architecture that are critical in building intelligent, scalable systems. Beyond my academic goals, I also see a strong connection to my current role as an Automation Developer at The Hanover Insurance Group, where I contribute to projects that rely on thoughtful design, testing, and collaboration. The principles and practices I learn here will make me more effective in my work today while preparing me for even greater responsibilities in the future.

I am eager to reflect on my progress throughout the semester, connect this material with experiences across my other courses, and apply these lessons directly to both my professional role and long-term career.

For me, CS 343 is more than a class; it’s a bridge between where I am now and the kind of innovative, responsible, and skilled software engineer I strive to become. I am also excited to meet everyone in this course and learn from each other as we move forward together. Feel free to reach out if you’d like to connect, collaborate, or study together this semester!

From the blog Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

AI Incorporation in Software Testing

For this week’s log entry, I wanted to cover a topic that relates to the class but was not covered in it. I wanted to do some research of my own into how AI is changing the ways in which people test code, as well as some of the new testing methods being used thanks to AI. While researching this topic, I came across a podcast titled “The Role Of AI In Software Testing” by Test & Code on Spotify. I chose this podcast both because of its popularity and because of its recency, given that it was posted just over one week ago.

Over the past few years, AI has exploded in popularity. Not only can AI process basic information with relatively high accuracy, it can do so automatically. One thing AI is now being used for in the software testing space is generating tests. Many people entering the software testing field, or programming in general, are not very comfortable or practiced at writing tests. AI is beginning to be used to fill those gaps in knowledge, theoretically letting people make more progress in the same amount of working time, with less debugging needed.

The technology has developed far enough that some people use it to replace most of their role in writing code and tests. Not long ago, people were using AI to help write tests and code to meet a specification; now people can have AI generate a specification for itself, write competent code to fulfill that specification, and then write and run tests for it. AI has become remarkably good at writing almost all kinds of tests and code. For now, though, the code it writes is merely competent: it can complete a task, but often not in ways that we, as humans, would consider a logical solution to the given specification, and often not in the way we intend.

One way AI is being quickly incorporated into the workplace is through tools that write, or describe how to write, certain things for programmers or testers who lack expertise in some aspect of the job and need help getting started. For those who are more informed in the field, the responses AI generates to questions such as how to perform certain tasks can be jarring and often unsatisfactory. In a strange way, though, when the person using the AI is also the person who doesn’t know how to complete the task themselves, the output looks perfectly satisfactory to someone who simply finished their testing earlier than they expected.

From the blog CS Blogs with Aidan by anoone234 and used with permission of the author. All other rights reserved by the author.

Understanding QA Testing

I’ve always wondered what other roles exist in the tech industry besides software engineers or developers. Then I discovered QA testers, and I wanted to know exactly what they do. I chose to reflect on the article What is QA Testing? because it provides a comprehensive introduction to Quality Assurance (QA) testing. This is a topic we’ve explored in our Software Quality Assurance course. I found this resource particularly useful because it clearly explained the entire QA testing process, from requirement analysis to test execution and verification, while also explaining why QA is essential in software development.

The article defines QA testing as a process used to ensure that a software product meets customer requirements and functions correctly before it is released. Traditionally, QA testing happened at the end of the development cycle. However, modern practices now include QA throughout the entire process. This shift helps QA teams detect and resolve issues earlier, leading to improved efficiency and better teamwork.

The article outlines the QA process in six major stages: analyzing requirements, planning, test case development, test execution, verification, and documentation. Each step is explained in detail, showing how important it is to follow a structured and thoughtful approach to maintain high software quality. The article also introduces best practices such as combining manual and automated testing, using crowdtesting, adopting DevOps workflows, and applying predictive analytics. These practices help teams maintain high standards without slowing down delivery.

I chose this article because I’ve always been curious about how software is tested before it is released. After taking this course, I now understand that QA testing is more than just finding bugs. It involves improving user experience, ensuring reliability, and supporting the development team in delivering better products. This article helped me better understand those ideas and the important role QA plays in every project.

What stood out to me the most was the idea that QA should be an ongoing part of development rather than something saved for the end. This supports what we’ve learned in class, that early testing saves time, money, and effort in the long run. I also learned about tools like bug trackers and test scenario checklists, which help organize the QA process and make it more efficient.

After reading this article, I feel encouraged to explore QA roles further. Even if I am not working as a developer, I now see how I can still make meaningful contributions to a tech team. I’ve learned that skills like analytical thinking, attention to detail, and strong documentation are essential in QA, and these are skills I am actively working to improve. In the future, I plan to apply what I’ve learned by incorporating test planning and QA thinking into every project I work on.

Reference:
Upwork Team. “QA Testing: Beginner’s Guide to Quality Assurance.” Upwork, 6 Sept. 2022, http://www.upwork.com/resources/what-is-qa-testing.

From the blog CS@Worcester – CodedBear by donna abayon and used with permission of the author. All other rights reserved by the author.

Real-World Testing

This week, I read a blog post called “Netflix App Testing at Scale,” which is based on an interview with Ken Yee, a Senior Engineer at Netflix. It takes a look at how Netflix tests its Android app, which is one of the most widely used streaming apps in the world. With over a million lines of code, 400+ modules, and support for all kinds of devices (including foldables and Android Go phones), testing at Netflix isn’t just about making sure the app works—it’s about making sure it works everywhere. I chose this article because we’ve been covering testing frameworks and strategies in class, and this felt like the real-world version of everything we’ve been learning. I also use Netflix a lot, so it is interesting to learn how they keep it running smoothly through so many updates and features. This blog helped me connect the theory from class to an actual large-scale product.

Netflix used to have a separate team of SDETs (Software Development Engineers in Test), but now every feature team handles its own testing. That includes unit tests, screenshot tests, and end-to-end tests. They still have two SDETs who help across teams, but quality is everyone’s job now. I thought that was cool—it encourages developers to think about testing earlier and more often, rather than just tossing it over to QA at the end.

They also go into the frameworks they use. For unit tests, they use tools like Strikt (for fluent assertions), Turbine (to help with Kotlin Flows), and Mockito (for mocks). They also use Hilt for dependency injection and Robolectric when they need to test Android-specific logic. What stood out to me was how conscious they are of performance—each layer of test framework (plain unit → Hilt → Robolectric → device tests) adds more time, so they encourage developers to keep tests as fast and simple as possible. That’s a great tip I’ll definitely remember for my own projects.

I also learned a lot from their section on flakiness. I hadn’t realized how much flaky tests (tests that pass or fail inconsistently) could undermine a test suite, and how much fixing them makes everything more reliable. Finally, Netflix uses screenshot testing heavily. They use Paparazzi for Jetpack Compose UI, localization testing to check designs across different languages, and even visual accessibility checks. It is interesting to find out how much they care about accessibility and localization.
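To make the “keep tests fast and simple” advice concrete, here is a minimal sketch of what a plain JVM unit test with Strikt assertions might look like. The canDownload function and its rules are hypothetical, invented purely for illustration; only the Strikt calls (expectThat, isTrue, isFalse) come from the library the post names, and this is not Netflix’s actual code.

```kotlin
import org.junit.jupiter.api.Test
import strikt.api.expectThat
import strikt.assertions.isFalse
import strikt.assertions.isTrue

// Hypothetical piece of pure logic: no Android APIs involved, so no Robolectric or device needed.
fun canDownload(isMember: Boolean, downloadsUsed: Int, downloadLimit: Int): Boolean =
    isMember && downloadsUsed < downloadLimit

class DownloadPolicyTest {
    @Test
    fun `member under the limit can download`() {
        expectThat(canDownload(isMember = true, downloadsUsed = 3, downloadLimit = 10)).isTrue()
    }

    @Test
    fun `non-member cannot download regardless of limit`() {
        expectThat(canDownload(isMember = false, downloadsUsed = 0, downloadLimit = 10)).isFalse()
    }
}
```

Because this stays at the plain-unit layer, it runs in milliseconds, which is exactly the kind of test the post says developers should prefer when they can.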

This blog gave me a better understanding of how layered and thoughtful good testing needs to be—especially at scale. I’ll definitely use what I learned about speed, flakiness, and strategy in my future development work.

From the blog cameronbaron.wordpress.com by cameronbaron and used with permission of the author. All other rights reserved by the author.

Legacy Tests: A Problem of Mindset

The blog post “What Do You Fix When You Fix a Test?” by Joep Schuurkes explores the nuanced decisions developers and testers face when a test fails. The central question is whether the issue lies in the test itself, the code under test, or possibly even the expectations behind the test. Schuurkes starts by bringing up legacy tests: legacy test code that makes “more decisions about what and how things are tested than the team.” He encourages readers to reflect before blindly “fixing” a test by editing it until it passes again. That habit treats tests as disposable code to be patched, rather than approaching them with the mindset he calls “tests-as-code,” where a failing test is treated as a source of information: each test tells you something about the system, and any change to the test must preserve the information it provides. Part of keeping tests as code is sticking to a naming scheme so that each test makes obvious what result or behavior you are expecting, which is exactly what I was taught in class, but now I understand the reasoning a bit more.
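As a small illustration of that naming idea (my own sketch, not from the post, using a made-up applyDiscount function), the test name itself states the behavior being promised, so a failure reads as “this documented behavior no longer holds” rather than just “a test broke”:

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// Hypothetical function under test.
fun applyDiscount(price: Double, isLoyaltyMember: Boolean): Double =
    if (isLoyaltyMember) price * 0.9 else price

class DiscountTest {
    @Test
    fun `loyalty members get ten percent off`() {
        assertEquals(90.0, applyDiscount(100.0, isLoyaltyMember = true), 0.001)
    }

    @Test
    fun `non-members pay full price`() {
        assertEquals(100.0, applyDiscount(100.0, isLoyaltyMember = false), 0.001)
    }
}
```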

This post made me rethink what it means to “fix a test.” I’ve been guilty of tweaking test code just to get everything green in the test runner again, without stopping to think whether the test was telling me something important. Joep’s approach feels like a call for discipline and care in testing—treating tests as important resources in the codebase rather than disposable tools. The idea that tests should be maintained with this way of thinking resonated with me, especially since I’ve seen how neglected or misleading tests can erode trust in automated test suites.

Going forward, I want to adopt a more mindful approach when a test fails. Instead of rushing to “fix” it, I’ll start by asking why it failed. Is the requirement outdated? Is the test too brittle? Has the functionality truly changed? Also, I want to be more deliberate in writing tests—designing them to clearly document behavior and to be resilient against irrelevant changes. This is especially relevant for integration tests, which are often more vulnerable to external factors and instability. By treating each failed test as an opportunity to learn, not just a checklist item to resolve, I hope to contribute to codebases that are easier to maintain and trust.

Source:
https://smallsheds.garden/blog/2024/what-do-you-fix-when-you-fix-a-test/

From the blog Coder's First Steps by amoulton2 and used with permission of the author. All other rights reserved by the author.

Testers Aren’t Developers! Their Role Is There for a Reason

The article “The Difference in Perspective of Testers and Developers” by Vijay provides a comparison of how software testers and developers approach software QA and testing. Developers are often focused on building features that work as intended, with an optimistic mindset geared toward implementation. Testers, on the other hand, adopt a critical mindset, aiming to uncover flaws, edge cases, and unintended consequences. The article explains that while developers ensure that the software does what it’s supposed to do, testers ensure it doesn’t do what it’s not supposed to do. This difference in thought processes is not a conflict; the two perspectives complement each other in producing robust, reliable software. The article encourages improved collaboration, mutual respect, and open communication between these roles to produce higher-quality software.

Reading this article gave me a better understanding of how vital both roles are in the software development lifecycle. As a CS student who has done more programming than testing, I’ve often written code with the assumption that if it runs without errors and produces the expected result, it’s good to go. But throughout my QA and Testing class, and through reading different blogs, I now see how much that perspective misses. I haven’t always taken the time to think about how a user might unintentionally (or maliciously) misuse the software, or how fragile assumptions can be unless they’re tested thoroughly. It’s easy to feel like bugs are personal flaws, but this article helped me appreciate the tester’s role and the stark difference in mindsets; before reading it, I saw testing as just another tool in the developer toolkit rather than the fully fledged role it is.

This new understanding is something I plan to apply in future group projects. I want to make a conscious effort to invite feedback from teammates who are testing my code, and more importantly, not take that feedback defensively. I also want to improve my own testing practices by thinking like a tester during development. That means writing unit tests that go beyond the “happy path” and trying to break my own code before someone else does. In larger projects, I now see the importance of collaboration between testers and developers early in the development process, rather than waiting until the end to start thinking about quality. Encouraging open communication can lead to better designs and fewer bugs downstream.
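To make the “beyond the happy path” point concrete, here is a small sketch of my own (the average function is hypothetical, not from the article): one test covers the expected case, and the others deliberately probe inputs a developer might not think to try.

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Assertions.assertThrows
import org.junit.jupiter.api.Test

// Hypothetical function under test: average of a list, rejecting empty input.
fun average(values: List<Int>): Double {
    require(values.isNotEmpty()) { "values must not be empty" }
    return values.sum().toDouble() / values.size
}

class AverageTest {
    @Test
    fun `happy path - average of typical values`() {
        assertEquals(2.0, average(listOf(1, 2, 3)), 0.001)
    }

    // Tester mindset: what happens with input the developer did not plan for?
    @Test
    fun `empty list is rejected instead of silently dividing by zero`() {
        assertThrows(IllegalArgumentException::class.java) { average(emptyList()) }
    }

    @Test
    fun `negative values are averaged correctly`() {
        assertEquals(-3.0, average(listOf(-1, -5)), 0.001)
    }
}
```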

Source:
https://www.softwaretestinghelp.com/the-difference-in-perspective-of-testers-and-developers/

From the blog Coder's First Steps by amoulton2 and used with permission of the author. All other rights reserved by the author.

Dataflow testing overview

White box testing makes use of all of a program’s internal structure to the tester’s advantage during the testing phase. One component of this internal structure usually makes up only a small percentage of the code body but can contribute a large share of problem cases: variable and data type declarations. Testing for these cases is called dataflow testing. In the blog “All about dataflow testing in software testing,” Prashant Bisht details how dataflow testing is implemented and gives some examples of how it might look.

Before any interaction with the code itself, dataflow testing starts with a control flow graph, which tracks where variables are defined and where they are used. This organization enables the first important check in dataflow testing: finding unused variables. Removing unused variables can help narrow the search for the source of other problem cases. The second anomaly commonly tested for is the undefined variable. These are more obvious than unused variables, since they almost always produce an error, because the program is relying on nonexistent data. The final anomaly tested for is multiple definitions of the same variable; the redundancy this introduces can lead to unexpected results or output.
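To make the three anomalies concrete, here is a short Kotlin sketch of my own (not from the blog) that contains all of them, marked in comments. The computeTotal function and its variables are hypothetical.

```kotlin
// Hypothetical function containing the three dataflow anomalies described above.
fun computeTotal(prices: List<Double>): Double {
    val taxRate = 0.0625     // anomaly 1: defined but never used (unused variable)

    var total = 0.0
    for (p in prices) {
        total += p           // 'total' defined above, then used and updated here
    }

    // anomaly 2: using a variable that was never defined; Kotlin rejects this at
    // compile time, but in looser languages it only surfaces at runtime.
    // total -= discount

    total = prices.sum()     // anomaly 3: 'total' redefined, making the loop above redundant
    return total
}
```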

Subtypes of dataflow testing exist, each specialized for a different way of examining the data. For example, static dataflow testing tracks the flow of variables without running the tested code; it involves only analysis of the code’s structure. Dynamic dataflow testing, by contrast, focuses on how the data held by variables changes throughout the code’s execution.

To show how dataflow testing works in practice, the author provides an example involving the variables num1, num2, and num3. First, initialization of these variables is checked; for instance, if num1 is mistakenly declared as int nuM1 = some_int, leaving num1 undefined, the testing phase would catch this. Then it ensures that uses of these variables don’t cause errors, which depends on the program’s specification, such as whether the program is meant to add the variables together. The data flow is then analyzed to ensure that operations involving multiple variables function properly: if num1 + num2 = result1 and num2 + num3 = result2, this phase would check that the operation result1 + result2 = result3 works as expected (an undefined result3 would already have been flagged in the first phase). The final phase is the data update phase, where the values produced by these operations are verified to be what they are expected to be.
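Here is roughly how that example could look in code, with the definition (def) and use points that a dataflow test would pair up marked as comments (my own sketch of the blog’s scenario):

```kotlin
fun dataflowExample(num1: Int, num2: Int, num3: Int): Int {  // def: num1, num2, num3
    val result1 = num1 + num2        // use: num1, num2    def: result1
    val result2 = num2 + num3        // use: num2, num3    def: result2
    val result3 = result1 + result2  // use: result1, result2    def: result3
    return result3                   // use: result3
}
```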

From the blog CS@Worcester – My first blog by Michael and used with permission of the author. All other rights reserved by the author.

Benefits of gray box testing

In many software testing situations, the most commonly used methods are white box testing, where the internal code of what is being tested is visible to the tester, and black box testing, where it isn’t. However, gray box testing, which integrates elements of both methods into a single approach, is slowly seeing more widespread use. In the blog “Exploring gray box testing techniques,” Dominik Szahidewicz details different examples of gray box testing and the benefits of those examples compared to using only white or black box testing.

Gray box testing has noticeable benefits that neither white nor black box testing offers on its own. By using the principles of white box testing (knowledge of internal structure and design) together with those of black box testing (observing output without that internal context), the testing process becomes more robust and able to account for a wider range of problem cases.

A specific example where gray box testing can be applied is pattern testing, where recurring patterns are leveraged to improve programs. With gray box testing, the internal structure of the software can be related to its output to create more helpful and efficient test cases.

Another example where gray box testing can be applied is orthogonal array testing, where test data is organized into specific test cases. This method is commonly used when exhaustive testing of every possible input is unreasonable because of the number of inputs. By using both the internal structure of the program and its outputs, more efficient test cases can be created, as sketched below.
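As a small sketch of the idea (the checkout parameters are hypothetical, not from the blog): three boolean settings would need 2^3 = 8 combinations to test exhaustively, but the four cases below, arranged like an L4 orthogonal array, still cover every pair of values across any two settings.

```kotlin
data class CheckoutCase(val guestUser: Boolean, val giftCard: Boolean, val expressShipping: Boolean)

// Four rows instead of eight, yet every pairwise combination of values
// for any two parameters still appears at least once.
val orthogonalCases = listOf(
    CheckoutCase(guestUser = false, giftCard = false, expressShipping = false),
    CheckoutCase(guestUser = false, giftCard = true,  expressShipping = true),
    CheckoutCase(guestUser = true,  giftCard = false, expressShipping = true),
    CheckoutCase(guestUser = true,  giftCard = true,  expressShipping = false),
)
```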

The author details a basic four-step guide to implementing gray box testing. The first step is acquiring system knowledge, which includes documenting the internals available for use in testing as well as the available documentation of the program’s outputs. The second step is to define test cases according to the known information and specifications. The third step is to perform both functional and non-functional testing. The fourth step is to analyze the results of testing.

From the blog CS@Worcester – My first blog by Michael and used with permission of the author. All other rights reserved by the author.

My Perspective on Risk Based Testing in Software Quality Assurance

As a computer science student getting more into the details of software development, I’ve started to realize how much goes into making sure software actually works the way it’s supposed to. I recently read the article “13 QA Testing Best Practices For 2024” from Testlio, and one part that stood out to me was the idea of risk based testing.
(testlio.com)

Risk based testing is all about using your time and effort wisely. Instead of trying to test every single feature equally, it focuses on the stuff that matters most. You look at what parts of the app are most likely to break or cause problems for users if they fail. Then you make sure those are tested thoroughly before anything else.

The article explains that identifying risky areas early helps teams put their energy in the right place. If you’ve only got so many people and so much time, this method helps avoid wasting those resources. It also means the most important features are solid by the time the app goes live.

This reminded me of a group project I worked on where we made a class management web app. We spent way too much time testing features like color themes and user bios. But when it came to the assignment submission tool, which was probably the most important part, we barely tested it. Sure enough, after we deployed it, users had issues uploading their files. If we had used risk based testing, we probably would’ve caught that.
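One common way to make this concrete (my own sketch, not something from the Testlio article) is to score each feature by likelihood of failure times impact of failure, then spend testing effort from the top of the list down. Using the features from that group project as an example, with made-up ratings:

```kotlin
// Rough risk scoring: likelihood and impact rated 1..5, risk = likelihood * impact.
data class Feature(val name: String, val likelihood: Int, val impact: Int) {
    val risk: Int get() = likelihood * impact
}

fun main() {
    val features = listOf(
        Feature("Assignment submission", likelihood = 4, impact = 5),
        Feature("User bios",             likelihood = 2, impact = 2),
        Feature("Color themes",          likelihood = 2, impact = 1),
    )

    // Highest-risk features get tested first and most thoroughly.
    features.sortedByDescending { it.risk }
        .forEach { println("${it.name}: risk score ${it.risk}") }
}
```

Even a rough ranking like this would have pushed the submission tool to the top of our testing list instead of the cosmetic features.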

Now that I know about this approach, I’m going to start using it in future projects. I’ll take time up front to figure out which features are most essential or most likely to go wrong, and make sure we focus testing there first. It’s a simple idea, but it makes a big difference.

In the end, risk based testing is about being smart with your time and making sure what matters most actually works. If you’re also learning software testing, this is a great thing to start thinking about. I definitely recommend checking out the full article if you’re curious:
13 QA Testing Best Practices For 2024

From the blog Zacharys Computer Science Blog by Zachary Kimball and used with permission of the author. All other rights reserved by the author.