Author Archives: cameronbaron

Sprint Two Retrospective

This sprint pushed us out of the setup phase and into the real struggles of implementation. Our focus was on certificate configuration, domain integration, and establishing a stable server environment capable of supporting frontend and backend communication through Docker Compose. We encountered technical challenges that required a lot of trial and error, especially with NGINX, RabbitMQ, and SSL certificates, but we made real progress and learned a lot along the way.

Key Work Accomplished:
  • SSL Certificates: After experimenting with self-signed certs, we shifted direction and enabled Let’s Encrypt with Certbot. I verified auto-renewal functionality using a dry run, ensuring long-term reliability.
  • Domain & NGINX Setup: Switching from IP to domain access opened the door for proper HTTPS handling and better routing. We spent time researching NGINX as a reverse proxy and adjusted our configuration to support this change.
  • RabbitMQ Troubleshooting: I spent time debugging why connection.createChannel in messageBroker.js fails during Docker Compose builds. The issue appears tied to the container configuration or startup timing, which I plan to isolate in the next sprint.
  • Documentation: Added notes and resources for Docker Compose Watch (for future CI/CD use) and contributed to team documentation related to server setup and troubleshooting steps.
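The createChannel failure described above is a common symptom of the app container starting before RabbitMQ is ready to accept connections. Our broker code lives in messageBroker.js (Node), but the retry idea can be sketched generically in Python; the attempt counts and delays below are illustrative assumptions, not our actual configuration.

```python
import time

def connect_with_retry(connect, attempts=5, delay=2.0):
    """Call `connect` until it succeeds, retrying while the broker boots.

    `connect` is any zero-argument callable that returns a connection
    (e.g. a lambda wrapping pika.BlockingConnection for RabbitMQ)
    and raises on failure.
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception as err:  # broker not ready yet
            last_error = err
            time.sleep(delay * attempt)  # simple linear backoff
    raise ConnectionError(f"broker unreachable after {attempts} attempts") from last_error
```

Docker Compose can also express this ordering declaratively with a healthcheck on the RabbitMQ service plus `depends_on` with `condition: service_healthy`, which is one of the options I plan to investigate next sprint.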

This sprint tested our patience and problem-solving skills. Even when progress felt slow, I stayed focused on learning and finding answers. I also made it a point to keep the energy positive in the team, especially when the same problem stretched across multiple weeks. I think maintaining that morale helped keep us all engaged and moving forward.

One area I’d like to improve is how I manage my research time. It’s easy to get stuck digging through forums or chasing edge cases, so next sprint I want to be more intentional about how I explore solutions and when to pivot. I also want to get better at guiding conversations to a technical conclusion more efficiently during group work.

My top priority going forward will be testing subsystems and verifying proper communication across containers. I also want to finalize our server hosting and make sure the front end is accessible through the domain without errors. Overall, I’m proud of our persistence. This sprint was about learning and solving problems within the systems we’re building.

Apprenticeship Patterns: The Long Road

This sprint reinforced the mindset behind the Apprenticeship Pattern “The Long Road”: that true growth in software development comes from persistence, patience, and a long-term commitment to learning. While troubleshooting issues like Docker container problems and RabbitMQ errors was frustrating at times, I stayed focused on understanding the root causes, and each challenge became an opportunity to learn. I’ve started recognizing that even slow progress is part of the journey toward the goal. This pattern will help me stay motivated and positive, even when things don’t go as expected. Moving forward, I want to manage my time better when diving into technical problems and continue building habits that support my learning and my team. There is still a lot of work left for our team, but we expect this, and we will hit the ground running in the last sprint.

From the blog cameronbaron.wordpress.com by cameronbaron and used with permission of the author. All other rights reserved by the author.

Comprehending Program Logic with Control Flow Graphs

This week I am discussing a blog post titled “Control Flow Graph In Software Testing” by Medium user Amaralisa. When I first read the post, it immediately clicked with what we have been studying in class, since the path testing types we have covered capture program logic in a similar way. The writer’s comparison of a CFG to a map used to explore the world or get from point A to point B is especially useful, as it explains the need for a guide to the many execution paths of a program. The writer made the topic easy to understand while still including the technical information required to apply these techniques moving forward.

This post helped me see the bigger picture in terms of the flow of a program and how the logic is truly working behind the code we write. It tied directly into what we’ve covered about testing strategies, especially white-box testing, which focuses on knowing the internal logic of the code. The connection between the CFG and how it helps test different code paths felt like a practical application of what we’ve been reading about in our course.

It also made me think about how often bugs or unexpected behavior aren’t because the output is flat-out wrong, but because a certain path the code takes wasn’t anticipated. Seeing how a Control Flow Graph can lay out those paths visually gives me a better sense of how to test and even write code more deliberately. It’s one thing to read through lines of code and think you understand what’s going on, but when you actually map it out, you might catch paths or branches you hadn’t considered before. I could definitely see this helping with debugging too—like, instead of blindly poking around trying to find what’s breaking, I can trace through the flow and pinpoint where things start to fall apart.
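To make that concrete, here is a small sketch of my own (not from the blog post) that represents a function’s CFG as an adjacency list and enumerates every entry-to-exit path, the same paths a path-testing strategy would need to cover:

```python
def all_paths(cfg, node, exit_node, path=None):
    """Enumerate every entry-to-exit path in a control flow graph.

    `cfg` maps each node to the list of nodes it can branch to.
    """
    path = (path or []) + [node]
    if node == exit_node:
        return [path]
    paths = []
    for succ in cfg.get(node, []):
        paths.extend(all_paths(cfg, succ, exit_node, path))
    return paths

# CFG for: if (x > 0) { A } else { B }; return
cfg = {
    "entry": ["A", "B"],  # the two branch outcomes
    "A": ["exit"],
    "B": ["exit"],
}
```

For this graph there are 4 edges and 4 nodes, so the cyclomatic complexity E - N + 2 = 2 matches the two paths the function enumerates, which is exactly the kind of check a CFG makes easy.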

I also really liked that the blog didn’t try to overcomplicate anything. It stuck to the fundamentals but still gave enough technical depth that I felt like I could walk away and try it on my own. It gave me the confidence to try using CFGs as a tool not just during testing but also during planning, especially for more complex logic where things can easily go off track.

Moving forward, I am going to spend time practicing using CFGs as a part of my development process to ensure that I am taking advantage of tools that are designed to help. Whether it’s for assignments, personal projects, or even during team collaboration, I think having this extra layer of structure will help catch mistakes early and improve the quality of the final product. It feels like one of those concepts that seems small at first, but it shifts the way you approach programming altogether when applied properly.

From the blog cameronbaron.wordpress.com by cameronbaron and used with permission of the author. All other rights reserved by the author.

Understanding FIRST Principles: Remastered

This week I am diving into an article by David Rodenas, PhD, a software engineer with a wide array of knowledge on the topic of software testing. I found the article “Improve Your Testing #13: When F.I.R.S.T. Principles Aren’t Enough”, one of many posts from Rodenas offering the kind of insight into software testing that only a pro can provide. In the article, Rodenas walks through each letter of the FIRST acronym with new meaning, not replacing what we know but enhancing it. As he teaches these new ways of understanding, he provides examples so we can look for ways to apply them in our own work.

The acronym is usually read as: Fast, Isolated, Repeatable, Self-Verifying, and Timely, but taking this one step further we can acknowledge the version that builds on top of it: Focused, Integrated, Reliable, Significant, and Thoughtful. These definitions are not opposites of each other; they should exist cohesively in our quest for trustworthy software.

One pro-tip that sticks out to me is keeping each test focused by avoiding the word “and” in the test name. When I first read it, it seemed kind of silly, but in a broad sense it really does work. The writer relates this to the Single Responsibility Principle: by writing tests with a clear focus, our tests stay fast and purposeful. Another takeaway is the importance of writing reliable and significant tests. Tests should not only validate but also provide meaningful information about what went wrong to cause them to fail. A test that passes all the time but fails to catch real-world issues is not significant. Similarly, flaky tests, ones that pass or fail inconsistently, break trust in the testing suite and should be avoided. Rodenas also emphasizes that integrating tests properly is just as important as isolating them. While unit tests should be isolated for precision, integration tests should ensure that components work together seamlessly. A good balance between both approaches strengthens the overall reliability of a software system.
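Rodenas provides his own examples; the sketch below is just my illustration of the “and” rule, using a hypothetical Cart class. The unfocused test bundles two behaviors, while the focused versions each verify exactly one, so a failure points at a single cause:

```python
# Hypothetical cart API, used only for illustration.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self, prices):
        return sum(prices[i] for i in self.items)

# Unfocused: the "and" in the name hints at two responsibilities.
def test_add_item_and_compute_total():
    cart = Cart()
    cart.add("apple")
    assert cart.items == ["apple"]
    assert cart.total({"apple": 2}) == 2

# Focused: one behavior per test.
def test_add_item_stores_it():
    cart = Cart()
    cart.add("apple")
    assert cart.items == ["apple"]

def test_total_sums_item_prices():
    cart = Cart()
    cart.add("apple")
    cart.add("pear")
    assert cart.total({"apple": 2, "pear": 3}) == 5
```

A test runner like pytest would collect each focused test separately, so the failure report itself names the broken behavior.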

Ultimately, this article challenges us to go beyond simply following FIRST principles and to think critically about our testing strategy. Are our tests truly adding value? Are they guiding us toward our goal of thoroughly tested software, or are they just passing checks in a pipeline? By embracing this enhanced approach to testing, we can ensure that our tests serve their true purpose: to build confidence in the software we deliver.

From the blog cameronbaron.wordpress.com by cameronbaron and used with permission of the author. All other rights reserved by the author.

Sprint One Retrospective

Retro

This first sprint was a deep dive into new territory. Our team focused on understanding our project, setting up the proper tools and environment, and managing challenges as they arose. We created a working agreement, completed multiple issues on GitLab as individuals and as a team, and ensured collaboration was a priority throughout each task.

GitLab Activity:

  • README.md – Updated the README document for our GitLab group to detail the major goals of our project.
  • Docker Compose Watch Documentation – Drafted documentation for Docker Compose Watch for possible CI/CD use later on.

Our team dynamic has been great from the beginning. With a strong working agreement in place from day one, we were able to focus on getting the work done without worrying about team cohesion. We completed a lot of research during this sprint to gather information about Docker and its many variants, NGINX, and CI/CD options, and turned that research into detailed documentation within our GitLab group. We worked successfully toward solving problems like debugging Docker containers, using SSL certificates properly, and scheduling our tasks based on priority. Overall, flexibility in our team has been vital, and we have adapted to every challenge we have faced.

I believe we have already learned a lot through our research and the time spent digging around in the server for answers, but we can still improve our outcomes by approaching our research differently and keeping a clear focus on what we want to accomplish. Personally, I plan to improve my time management around troubleshooting and research, since I can sometimes find myself down a rabbit hole, and to make sure my teammates and I stay on the same page through clear communication.

Apprenticeship Pattern: Retreat into Competence

“Retreat into Competence” is a strategy for regaining confidence when feeling overwhelmed by complex challenges. It involves stepping back into an area of expertise before pushing forward again. The key is to avoid staying in the comfort zone for too long and instead use the retreat as a way to launch forward with renewed energy. During this sprint, there were moments of troubleshooting that felt a bit discouraging, particularly with Docker and SSL certificates, where progress seemed a little slow and confusing. The feeling of being stuck highlighted the need to step back into something familiar—whether it was revisiting basic Docker configurations or focusing on smaller, more manageable tasks like completing documentation on GitLab—before tackling the larger issues again. Had I known about this pattern sooner, I would have structured my work differently to maximize results. Moving forward, I plan to:

  • Time box troubleshooting sessions to avoid going down the rabbit hole
  • Focus my research to ensure that the resulting information is useful to our project
  • Seek help from teammates during moments of need
  • Continue completing the smaller tasks that can be finished while working on larger issues

This sprint was a valuable learning experience in both technical and team collaboration aspects. While challenges arose, the team adapted, and I gained insight into how to manage difficult situations more effectively. By refining research strategies, improving troubleshooting workflows, and applying patterns like “Retreat into Competence,” I can optimize future sprints for even greater success.

From the blog cameronbaron.wordpress.com by cameronbaron and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns Intro

After reading through the assigned sections in the text Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman by Dave Hoover and Adewale Oshineye, I feel as though I have already learned lessons that are vital to my progression as a computer science student. One lesson that stood out to me was the story of the Zen master; I really enjoyed the symbolism in the story and its alignment with real experience. In all aspects of life it is important to feel confident in one’s knowledge, but the other half of this is knowing when to be humble enough to learn and accept new information when it is presented. As many of us are familiar, the world of computer science is a realm of continuous improvement, and this story reminds me how hard it would be to succeed in practice if I do not continue to leave space for learning.

Another thought-provoking section was the introduction to Chapter Three, relating to Dave’s realization of the extraordinarily long road of learning that lies in front of all of us. Though the passage was short, it was notable to me because I remember having this realization myself. When I came to terms with the fact that there is (likely) no one who knows everything, and that most of us are on the same path just at different points along the way, I was able to learn freely from my honest starting point in any given topic, wherever that may be, without the shame of being a newbie. Another thought I had while reading this section was that although certificates can be helpful and feel like an accomplishment, they are only as valuable as you make them. I think the majority of the real learning occurs when you take what was learned in a certificate course (or similar) and build upon it.

Apprenticeship Patterns has really helped me rethink how I approach learning. The Zen master story reminded me that it’s important to stay humble and always be open to new ideas. I also liked the reminder that learning is a never-ending journey. These insights will definitely help me stay focused on growing and applying what I learn as I move forward in my studies.

From the blog cameronbaron.wordpress.com by cameronbaron and used with permission of the author. All other rights reserved by the author.

CS-443: Self-Introduction

Hi everyone, I am Cameron Baron – a senior Computer Science student at Worcester State University. This is an introductory post for my Software Quality Assurance and Testing course to ensure all blog posts will be posted as expected. Upcoming blog posts will be centered around finding useful information and various tutorial overviews related to this topic as I continue to learn.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

Discovering LFP and Thea’s Pantry

The LibreFoodPantry (LFP) website and Thea’s Pantry GitLab Group are full of knowledge about the open-source project itself and related information to support its users and developers. After reading through these resources, I feel I gained a more thorough understanding of the purpose of this software and its potential trajectory. Specifically relating to the LFP website, I found the Values section extremely useful, as it provides clear expectations for all community members, with links to further understand the Code of Conduct and to learn more about Agile values and FOSSisms. Regarding the information about Thea’s Pantry in GitLab, there are many useful subsections within this group, but I was particularly impressed by the Architecture section, as it presents the microservices architecture clearly through diagrams with well-defined systems, features, and components. An additional useful link in the Thea’s Pantry GitLab Group is User Stories. After reading through the different situations expressing the intended use of the software, I had a better understanding of the role that this project plays throughout every step of the process, on both the staff side and the guest side. I was surprised by the User Story titled “A Pantry Administrator Requests a Monthly Report for the Worcester County Food Bank”, as I was unaware of the link between the two systems. Overall, these webpages provide a simple and clear interface for learning about the project’s values and community expectations, as well as technical details.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

Reliability in Development Environments

This week, the blog that caught my attention was “How to Make Your Development Environment More Reliable” by Shlomi Ratsabbi and David Gang from the Lightricks Tech Blog. This work highlights common challenges encountered in software development and provides actionable solutions — specifically, how to optimize development environments. The insights offered in this blog align perfectly with our coursework, as we have continuously learned about and worked within our own development environments. This post emphasizes the importance of ensuring consistency and reliability when creating these environments, offering practical advice to achieve this goal.

The writers begin by outlining the necessity of development environments, describing the challenges that arise during software releases. These challenges include discrepancies between local and production configurations, mismatched data, permission conflicts, and system interaction issues. While creating a shared development environment may seem like the obvious solution, the authors point out that this approach introduces its own set of problems, such as debugging difficulties due to parallel testing, interruptions caused by developer collisions, and divergence between shared and production environments.

To address these challenges, the authors advocate for the implementation of branch environments. Branch environments are independent, isolated setups for developing and testing specific features or issues. These environments, when paired with tools like Terraform, Argo CD, and Helm, enable integration with Infrastructure as Code (IaC) and GitOps, automate infrastructure management, and ensure automatic cleanup of unused resources. This approach promotes consistent documentation of dependencies and application details within version control systems like GitHub. The blog includes a clear diagram and code snippets that effectively demonstrate how to set up branch environments using these tools, making it accessible and actionable for readers.

Branch environments offer several key advantages. By isolating changes, they ensure that all updates are properly tracked, simplifying debugging and maintaining consistency across development efforts. This isolation also eliminates conflicts inherent in shared environments, reducing the risk of outdated configurations or data interfering with new testing and development efforts. Tools like Terraform and Argo CD further enhance this process by automating repetitive tasks such as infrastructure provisioning and application deployment, saving developers time and reducing the likelihood of human error.

Additionally, branch environments improve resource efficiency. Since these environments are ephemeral, they are cleaned up automatically when no longer needed, freeing up valuable system resources and lowering costs. The inclusion of tools like Helm simplifies configuration management, even for complex architectures, ensuring a streamlined, manageable workflow.
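As an illustration of the naming and cleanup bookkeeping that tools like Terraform and Argo CD automate, here is a small Python sketch of my own (the naming convention and prefix are assumptions, not from the blog): it derives a DNS-safe environment name from a git branch and flags environments whose source branch has been deleted as ready for teardown.

```python
import re

def env_name(branch, prefix="dev"):
    """Derive a DNS-safe environment name from a git branch name,
    the way branch-environment tooling typically labels namespaces."""
    slug = re.sub(r"[^a-z0-9-]+", "-", branch.lower()).strip("-")
    return f"{prefix}-{slug}"[:63]  # DNS labels max out at 63 characters

def stale_envs(active_branches, deployed_envs, prefix="dev"):
    """Environments whose source branch is gone are ready for cleanup."""
    live = {env_name(b, prefix) for b in active_branches}
    return sorted(e for e in deployed_envs if e not in live)
```

In a real setup this comparison runs continuously inside the GitOps tooling rather than in application code, but the sketch shows why ephemeral environments stay cheap: anything without a matching branch is simply torn down.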

Overall, this blog provides a thorough and practical framework for tackling one of the most common challenges in software development: creating reliable and consistent environments. The adoption of branch environments combined with IaC and GitOps principles enhances scalability, collaboration, and efficiency. As I continue to develop my own projects, I plan to incorporate these strategies and tools to build environments that are both robust and resource-efficient.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

Essential Software Architecture Design Patterns

This week, I am focusing on the blog “The Software Architect: Demystifying 18 Software Architecture Patterns” by Amit Verma. Verma is an architect who focuses on building Java applications, among other work. This blog aligns with our coursework, as we have been studying software architecture and recently learned the details of some of the design patterns mentioned here. The post introduced me to new architecture patterns to study and provided a wealth of knowledge, such as when to apply them and how to use them properly in my future projects.

Verma begins by specifying what software architecture is and how it differs from software design. I appreciate how this was explained because it makes the concepts simple to understand: software architecture focuses on the high-level structure, the blueprint of the project, while software design translates that blueprint into real project specifications which developers can implement. A concept in this post that was new to me was the C4 model, an approach to architecture documentation. The model gives documentation an implementable structure by addressing four main levels: context, containers, components, and code. It encourages architects to create diagrams and written documents to ensure comprehensive project documentation.

The beauty of architectural patterns lies in their repeatability; they provide reusable solutions that address common goals and challenges across projects, enabling architects to achieve a specific, predetermined outcome efficiently. This post covers 18 different architecture patterns that are commonly used and necessary for software developers to be familiar with. Some of the patterns were entirely new to me, like the Layered Pattern, the Pipe-Filter Pattern, and the Microkernel Pattern. Beginning with the Layered Pattern, this architecture separates different functionalities into different layers. One common representation uses three main layers: the presentation layer, which is responsible for the UI and other user-facing components; the business logic layer, which holds the business rules and the actual code to manage data and make calculations; and the data access layer, which handles database and external data source interactions. Next, the Pipe-Filter Pattern processes data through independent components, each performing a separate processing task: data begins at the source, moves through pipes that connect the components, is transformed by filters that each apply a given function, and finally arrives at the sink, the endpoint that receives the processed output. Lastly, the Microkernel Pattern, also known as the Pluggable Architecture, allows modular, flexible systems to be built: the system’s core services are handled in the microkernel, and all other components are kept separate in a plug-and-play fashion.
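The Pipe-Filter flow described above can be sketched with Python generators, where each filter is an independent stage connected to the others only by the pipe (this is my own minimal illustration, not code from Verma’s post):

```python
def source(lines):
    """Data source: emit raw records into the pipe."""
    yield from lines

def strip_blank(records):
    """Filter: drop empty records."""
    for r in records:
        if r.strip():
            yield r

def uppercase(records):
    """Filter: transform each record independently of the others."""
    for r in records:
        yield r.upper()

def sink(records):
    """Data sink: collect the processed output."""
    return list(records)

# Filters touch only the pipe, never each other, so they can be
# reordered, swapped, or extended without changing any other stage.
result = sink(uppercase(strip_blank(source(["a", "", "b"]))))
```

Swapping `uppercase` for a different filter, or inserting a new stage between the two, requires no change to the surrounding code, which is the pattern’s main selling point.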

This post is rich with vital information about software architectures and provides knowledge far beyond what I have included here. I know I will return to this blog repeatedly as a resource to learn from and as a basis for how these patterns can be implemented.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

A Strategic Approach to Code Review

A blog that recently caught my attention was GitHub staff engineer Sarah Vessels’ “How to review code effectively: A GitHub staff engineer’s philosophy”. Vessels works on code review every single day, which has led her to develop her own strategies for successful code review to ensure we are building good software. Though code review can be done in different ways, I selected this blog because it focuses on code review via pull request reviews on GitHub, which aligns perfectly with our classwork over the past couple of weeks learning to manage Git properly.

One large part of Vessels’ job as a code reviewer is having an open discussion with the author of the code, posing questions that may not yet have been considered. The phrase “two sets of eyes are better than one” comes to mind here, as we can frequently catch others’ coding mishaps but miss our own. The writer stresses that acting as a reviewer for a teammate benefits both parties: the reviewer is constantly seeing someone else’s logic and new code, while the author of the code gains a new perspective. This exchange of knowledge is extremely valuable.

This blog also provides tips and tricks for managing a queue of pull requests, including simple Slack queries to surface new requests by team or outstanding requests that require attention. Another tip from the writer is to keep the reviewer team small; this benefits the development team overall because there is clear accountability for who is to review the changes. Vessels also notes the benefit of specifying code review requirements and frameworks to ensure a seamless, consistent review process across teams.

The writer also provides samples of good and poor code review feedback to highlight the main differences between them. Good feedback includes specific details, references specific issues or lines, proposes a possible solution, and gives reasoning. The post also offers vital information on how to give a good review. Some of the tips seem like common sense, like offering affirmations and asking questions, but an important tip Vessels shares is to be aware of biases and assumptions. She highlights that even the most senior programmers can make mistakes, so you as the reviewer may be the last opportunity to validate the work and catch issues before deployment.

GitHub staff engineer Sarah Vessels shares her invaluable experience with code review through this blog post, which discusses fine-tuning the review process, good versus bad reviews, how to give good reviews, and how to get the most out of a review. As a student, it is often my own code that I must turn back to and review, but after this reading I feel encouraged to seek opportunities to study others’ code, with a focus on the exchange of knowledge and on gaining the experience I will need to review code for a development team in the future.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.