
Sprint One Retrospective


This first sprint was a deep dive into new territory. Our team focused on understanding our project, setting up the proper tools and environment, and managing challenges as they arose. We created a working agreement, completed multiple issues on GitLab both individually and as a team, and made collaboration a priority throughout each task.

GitLab Activity:

  • README.md – Updated the README document for our GitLab group to detail the major goals of our project.
  • Docker Compose Watch Documentation – Drafted documentation for Docker Compose Watch for possible use in CI/CD later on.

Our team dynamic has been great from the beginning; with a strong working agreement in place from day one, we were able to focus on getting the work done without worrying about team cohesion. We completed a lot of research during this sprint about Docker and its many variants, NGINX, and CI/CD options, and turned that research into detailed documentation within our GitLab group. We worked successfully through problems like debugging Docker containers, using SSL certificates properly, and scheduling our tasks by priority. Overall, flexibility has been vital for our team, and we have adapted to every challenge we have faced.

I believe we have already learned a lot through our research and the time spent digging around in the server for answers, but we can still improve our outcomes by approaching our research differently and keeping a clear focus on what we want to accomplish. Personally, I plan to improve my time management around troubleshooting and research, since I can sometimes find myself down a rabbit hole, and to make sure my teammates and I stay on the same page through clear communication.

Apprenticeship Pattern: Retreat into Competence

“Retreat into Competence” is a strategy for regaining confidence when feeling overwhelmed by complex challenges. It involves stepping back into an area of expertise before pushing forward again. The key is to avoid staying in the comfort zone for too long and instead use the retreat as a way to launch forward with renewed energy. During this sprint, there were moments of troubleshooting that felt a bit discouraging, particularly with Docker and SSL certificates, where progress seemed a little slow and confusing. The feeling of being stuck highlighted the need to step back into something familiar—whether it was revisiting basic Docker configurations or focusing on smaller, more manageable tasks like completing documentation on GitLab—before tackling the larger issues again. Had I known about this pattern sooner, I would have structured my work differently to maximize results. Moving forward, I plan to:

  • Time box troubleshooting sessions to avoid going down the rabbit hole
  • Focus my research to ensure that the resulting information is useful to our project
  • Seek help from teammates during moments of need
  • Continue completing the smaller tasks that can be finished while working on larger issues

This sprint was a valuable learning experience in both technical and team collaboration aspects. While challenges arose, the team adapted, and I gained insight into how to manage difficult situations more effectively. By refining research strategies, improving troubleshooting workflows, and applying patterns like “Retreat into Competence,” I can optimize future sprints for even greater success.

From the blog cameronbaron.wordpress.com by cameronbaron and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns Intro

After reading through the assigned sections of Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman by Dave Hoover and Adewale Oshineye, I feel as though I have already learned lessons that are vital to my progression as a computer science student. One lesson that stood out to me was the story of the Zen master; I really enjoyed its symbolism and how well it aligns with real experience. In all aspects of life it is important to feel confident in one's knowledge, but the other half of this is knowing when to be humble enough to learn and accept new information when it is presented. As many of us are familiar, computer science is a realm of continuous improvement, and this story reminds me how hard it would be to succeed in practice if I did not continue to save space for learning.

Another thought-provoking section was the introduction to Chapter Three, describing Dave's realization of the extraordinarily long road of learning that lies in front of all of us. Though the passage was short, it was notable to me because I remember having this realization myself. Once I came to terms with the fact that (likely) no one knows everything, and that most of us are on the same path just at different points along the way, I was able to learn freely from my honest starting point in any given topic, without the shame of being a newbie. Another thought I had while reading this section was that although certificates can be helpful and feel like an accomplishment, they are only as valuable as you make them. I think the majority of real learning occurs when you take what was learned within a certificate course (or similar) and build upon it.

Apprenticeship Patterns has really helped me rethink how I approach learning. The Zen master story reminded me that it’s important to stay humble and always be open to new ideas. I also liked the reminder that learning is a never-ending journey. These insights will definitely help me stay focused on growing and applying what I learn as I move forward in my studies.

From the blog cameronbaron.wordpress.com by cameronbaron and used with permission of the author. All other rights reserved by the author.

CS-443: Self-Introduction

Hi everyone, I am Cameron Baron – a senior Computer Science student at Worcester State University. This is an introductory post for my Software Quality Assurance and Testing course to ensure all blog posts will be posted as expected. Upcoming blog posts will be centered around finding useful information and various tutorial overviews related to this topic as I continue to learn.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

Discovering LFP and Thea’s Pantry

The LibreFoodPantry (LFP) website and Thea's Pantry GitLab group are full of knowledge about the open-source project itself, plus related information to support its users and developers. After reading through these resources, I feel I gained a more thorough understanding of the purpose of this software and its potential trajectory. On the LFP website specifically, I found the Values section extremely useful, as it provides clear expectations for all community members, with links to the Code of Conduct and to more about Agile values and FOSSisms. Regarding Thea's Pantry in GitLab, there are many useful subsections within the group, but I was particularly impressed by the Architecture section, which presents the microservices architecture clearly through diagrams of systems, features, and components. Another useful link in the Thea's Pantry GitLab group is User Stories. After reading through the different situations expressing the intended use of the software, I had a better understanding of the role this project plays at every step of the process, on both the staff side and the guest side. I was surprised by the user story titled "A Pantry Administrator Requests a Monthly Report for the Worcester County Food Bank," as I was unaware of the link between the two systems. Overall, these webpages provide a simple, clear interface for learning the project's values and community expectations, as well as its technical details.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

Reliability in Development Environments

This week, the blog that caught my attention was “How to Make Your Development Environment More Reliable” by Shlomi Ratsabbi and David Gang from the Lightricks Tech Blog. This work highlights common challenges encountered in software development and provides actionable solutions — specifically, how to optimize development environments. The insights offered in this blog align perfectly with our coursework, as we have continuously learned about and worked within our own development environments. This post emphasizes the importance of ensuring consistency and reliability when creating these environments, offering practical advice to achieve this goal.

The writers begin by outlining the necessity of development environments, describing the challenges that arise during software releases. These challenges include discrepancies between local and production configurations, mismatched data, permission conflicts, and system interaction issues. While creating a shared development environment may seem like the obvious solution, the authors point out that this approach introduces its own set of problems, such as debugging difficulties due to parallel testing, interruptions caused by developer collisions, and divergence between shared and production environments.

To address these challenges, the authors advocate for the implementation of branch environments. Branch environments are independent, isolated setups for developing and testing specific features or issues. These environments, when paired with tools like Terraform, Argo CD, and Helm, enable integration with Infrastructure as Code (IaC) and GitOps, automate infrastructure management, and ensure automatic cleanup of unused resources. This approach promotes consistent documentation of dependencies and application details within version control systems like GitHub. The blog includes a clear diagram and code snippets that effectively demonstrate how to set up branch environments using these tools, making it accessible and actionable for readers.
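To make the idea concrete for myself, I sketched what one of these branch environments might look like using Argo CD's pull request generator. This is only a minimal sketch under my own assumptions; the organization, repository, and chart path are placeholders, not details from the blog.

```yaml
# Hypothetical sketch: one isolated environment per open pull request.
# The org, repo, and chart path are placeholders, not from the blog.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: branch-environments
spec:
  generators:
    - pullRequest:
        github:
          owner: example-org
          repo: example-app
        requeueAfterSeconds: 300       # poll for opened/closed PRs every 5 minutes
  template:
    metadata:
      name: 'example-app-pr-{{number}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/example-app.git
        targetRevision: '{{head_sha}}'   # deploy exactly the PR's commit
        path: chart                      # Helm chart kept alongside the code
        helm:
          values: |
            image:
              tag: '{{head_sha}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: 'example-app-pr-{{number}}'   # one namespace per PR
      syncPolicy:
        automated:
          prune: true                  # keep the environment in sync with the branch
        syncOptions:
          - CreateNamespace=true
```

When a pull request closes, the generator stops producing that Application, so the controller tears the environment down. That matches the automatic cleanup of unused resources the authors describe.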

Branch environments offer several key advantages. By isolating changes, they ensure that all updates are properly tracked, simplifying debugging and maintaining consistency across development efforts. This isolation also eliminates conflicts inherent in shared environments, reducing the risk of outdated configurations or data interfering with new testing and development efforts. Tools like Terraform and Argo CD further enhance this process by automating repetitive tasks such as infrastructure provisioning and application deployment, saving developers time and reducing the likelihood of human error.

Additionally, branch environments improve resource efficiency. Since these environments are ephemeral, they are cleaned up automatically when no longer needed, freeing up valuable system resources and lowering costs. The inclusion of tools like Helm simplifies configuration management, even for complex architectures, ensuring a streamlined, manageable workflow.

Overall, this blog provides a thorough and practical framework for tackling one of the most common challenges in software development: creating reliable and consistent environments. The adoption of branch environments combined with IaC and GitOps principles enhances scalability, collaboration, and efficiency. As I continue to develop my own projects, I plan to incorporate these strategies and tools to build environments that are both robust and resource-efficient.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

Essential Software Architecture Design Patterns

This week, I am focusing on the blog "The Software Architect: Demystifying 18 Software Architecture Patterns" by Amit Verma. Verma is an architect who focuses on building Java applications, among other work. The blog aligns with our coursework on software architecture, and we have recently learned about some of the design patterns mentioned here. The post introduced me to new architecture design patterns to study and provided a wealth of knowledge, like when to apply them and how to use them properly in my future projects.

Verma begins by specifying what software architecture is and how it differs from software design. I appreciate how this was explained, because it makes the concepts simple to understand: software architecture focuses on the high-level structure, like the blueprint of the project, while software design translates that blueprint into concrete specifications which developers can implement. A concept in this post that was new to me was the C4 model, an approach to architecture documentation. The model gives documentation an implementable structure by addressing four main levels: context, containers, components, and code. It encourages architects to create diagrams and written documents to ensure comprehensive project documentation.

The beauty of architectural patterns lies in their repeatability; they provide reusable solutions that address common goals and challenges across projects, enabling architects to reach a specific, predetermined outcome efficiently. The post covers 18 different architecture patterns that are commonly used and necessary for software developers to be familiar with. Some of the patterns were entirely new to me, like the Layered pattern, the Pipe-filter pattern, and the Microkernel pattern. The Layered pattern separates different functionalities into layers. One common representation uses three main layers: the presentation layer, responsible for the UI and other user-facing components; the business logic layer, which holds the business rules and the code that manages data and performs calculations; and the data access layer, which handles interactions with databases and external data sources. Next, the Pipe-filter pattern processes data through independent components, each performing a separate processing task: data enters from a source, moves through pipes that connect the components, passes through filters that transform it, and ends at a sink that receives the processed output (a small sketch follows below). Lastly, the Microkernel pattern, a.k.a. pluggable architecture, allows modular, flexible systems to be built: the system's core services live in the microkernel, and all other components are kept separate in a plug-and-play fashion.
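Since the Pipe-filter pattern was new to me, here is a tiny Java sketch I wrote to check my understanding; it is my own illustration, not code from Verma's post.

```java
import java.util.List;
import java.util.function.Function;

// Pipe-filter sketch: each filter is an independent transformation, and the
// "pipe" is simply function composition passing one filter's output onward.
public class PipeFilterDemo {

    // Filter 1: trim whitespace from every record.
    static Function<List<String>, List<String>> trim =
            records -> records.stream().map(String::trim).toList();

    // Filter 2: drop records that are empty after trimming.
    static Function<List<String>, List<String>> dropEmpty =
            records -> records.stream().filter(r -> !r.isEmpty()).toList();

    // Filter 3: normalize every record to upper case.
    static Function<List<String>, List<String>> upperCase =
            records -> records.stream().map(String::toUpperCase).toList();

    public static void main(String[] args) {
        // Data source -> pipe of filters -> data sink (here, standard output).
        List<String> source = List.of("  alpha ", "", " beta", "gamma  ");
        trim.andThen(dropEmpty).andThen(upperCase)
            .apply(source)
            .forEach(System.out::println);   // ALPHA, BETA, GAMMA
    }
}
```

Because each filter is independent, they can be reordered, removed, or swapped without touching the others, which is the whole point of the pattern.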

This post is rich with vital information related to software architectures and provides knowledge far beyond what I have included here. I know I will be returning to this blog repeatedly as a resource to learn from and to use a basis of how these patterns can be implemented.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

A Strategic Approach to Code Review

A blog that recently caught my attention was GitHub staff engineer Sarah Vessels' "How to review code effectively: A GitHub staff engineer's philosophy". Vessels does code review every single day, which has led her to develop her own strategies for successful review to ensure we are building good software. Though code review can be done in different ways, I selected this blog because it centers on review via pull requests on GitHub, which aligns perfectly with our classwork over the past couple of weeks learning to manage Git properly.

One large part of Vessels' job as a code reviewer is having an open discussion with the author of the code, posing questions that may not yet have been considered. The phrase "two sets of eyes are better than one" comes to mind here: we can very frequently catch others' coding mishaps but miss our own. The writer stresses that acting as a reviewer for a teammate benefits both parties, since the reviewer is constantly seeing someone else's logic and new code while the author gains a new perspective. This exchange of knowledge is extremely valuable.

This blog also provides tips and tricks for managing a queue of pull requests properly, including simple Slack queries that surface new requests by team or outstanding requests that need attention (a rough GitHub-side equivalent is sketched below). Another tip from the writer is to keep the reviewer team small; this benefits the development team overall, since there is clear accountability for who is to review the changes. Vessels also notes the benefit of specifying code review requirements and frameworks to ensure a seamless, consistent review process across teams.
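I have not tried to reproduce Vessels' Slack queries here; as a rough stand-in of my own, GitHub's built-in search qualifiers can triage a review queue in a similar spirit (the team name below is a placeholder):

```text
# PRs still waiting on my review
is:open is:pr review-requested:@me archived:false

# PRs waiting on a specific team
is:open is:pr team-review-requested:example-org/reviewers
```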

The writer also provides samples of good and poor code review feedback to highlight the main differences between them. Good feedback includes specific details, references specific issues or lines, offers a possible solution, and explains the reasoning. The post also offers vital information on how to give a good code review. Some of the tips seem like common sense, like offering affirmations and asking questions, but an important tip Vessels shares is to be aware of biases and assumptions. She highlights that even the most senior programmers can make mistakes, so as the reviewer you may be the only opportunity to validate the work and catch issues before deployment.

GitHub engineer Sarah Vessels shares her invaluable code review experience in this post, discussing how to fine-tune the review process, good versus bad reviews, how to give good reviews, and how to get the most out of a review. As a student, it is usually my own code that I turn back to and review, but after this reading I feel encouraged to seek opportunities to study others' code, focusing on the exchange of knowledge and building the experience I will need to review code for a development team in the future.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

Implementing Design Patterns in Java

This week I found a great blog titled "Mastering Design Patterns in Java" that delves deep into design patterns in Java. The piece aligns with our course topics and focuses on a programming language that many of us are most comfortable with. The writer, Dharshi Balasubramaniyam, discusses six notable design patterns in software engineering:

  1. Singleton
  2. Factory
  3. Builder
  4. Adaptor
  5. Decorator
  6. Observer

The focus of the discussion is how to implement these patterns using detailed examples and how they can be used to deal with common coding scenarios like creating objects, managing inter-class relationships, and optimizing object behavior. 

Our coursework has covered some of the design patterns discussed in this blog, but the rich examples provided here are incredibly valuable for gaining a complete understanding of the patterns and learning when to use them. A great example of this is the Singleton pattern. I am already familiar with this one, but the example used makes the concept easy to remember and understand: the simple idea of the clipboard. If more than one instance of the clipboard could be accessed by the user of a device, conflicting data would very likely be saved; to avoid this issue we can apply the Singleton pattern to ensure that there is only ever one clipboard instance at any given time. The writer provides the code behind this example and shows the value of using the pattern (a minimal sketch of the idea follows below).
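Here is the clipboard idea as a minimal Java singleton; this is my own reconstruction of the concept, not the exact code from the blog.

```java
// Minimal singleton sketch of the clipboard example (my own reconstruction).
public final class Clipboard {

    // The single shared instance, created once when the class loads.
    private static final Clipboard INSTANCE = new Clipboard();

    private String contents = "";

    // Private constructor: no other code can create a second Clipboard.
    private Clipboard() { }

    public static Clipboard getInstance() {
        return INSTANCE;
    }

    public void copy(String text) {
        contents = text;
    }

    public String paste() {
        return contents;
    }
}
```

Every caller reaches the same object through Clipboard.getInstance(), so there is never a second clipboard holding conflicting data.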

One new pattern I learned about was the Builder pattern, which simplifies constructing objects that have both required and optional properties. It manages the parameters using an object class plus an objectBuilder class: the builder's constructor takes the required properties, and separate setter methods handle the optional ones. This gives flexibility to the object being created. The given example builds a user with required properties of name and email and optional properties of phone and city. Each optional property has its own setter method that returns the objectBuilder object, and if a setter is never called, that optional parameter keeps the default string "unknown". This technique makes the code easy to understand and ensures we do not get errors from missing parameters, since every field always contains some string (see the sketch after this paragraph).
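A minimal sketch of that builder, assuming the required/optional split described above; the class and field names are my reconstruction, not the article's exact code.

```java
// Builder sketch: name and email are required; phone and city are optional
// and default to "unknown" when their setters are never called.
public class User {
    private final String name;
    private final String email;
    private final String phone;
    private final String city;

    private User(UserBuilder builder) {
        this.name = builder.name;
        this.email = builder.email;
        this.phone = builder.phone;
        this.city = builder.city;
    }

    public static class UserBuilder {
        private final String name;          // required
        private final String email;         // required
        private String phone = "unknown";   // optional, safe default
        private String city = "unknown";    // optional, safe default

        public UserBuilder(String name, String email) {
            this.name = name;
            this.email = email;
        }

        public UserBuilder phone(String phone) {
            this.phone = phone;
            return this;   // returning the builder allows chained calls
        }

        public UserBuilder city(String city) {
            this.city = city;
            return this;
        }

        public User build() {
            return new User(this);
        }
    }
}
```

Creating a user then reads naturally: new User.UserBuilder("Ada", "ada@example.com").city("Worcester").build() leaves phone as "unknown" without any error.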

Using this blog to practice and learn from new examples is extremely helpful and will enhance my skills as I continue to get more comfortable with writing good, clean code the first time. By implementing the examples shown in the article, I can start noticing opportunities to apply these design patterns in my own work, avoiding hours of refactoring later.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

Optimizing Docker in the Cloud

After our recent studies of how Docker manages dependencies and ensures consistent development environments, I wanted to learn more about using it; something like this could have saved me many hours of troubleshooting on a recent research project. This article, written by Medium user Dispanshu, highlights the capabilities of Docker and how to use the service efficiently in a cloud environment.

The article focuses on optimizing Docker images to achieve high-performance, low-cost deployment. The problem some developers run into is very large images, which slow the build process, waste storage, and reduce application speed. I learned from this work that large images result from including unnecessary tools, dependencies, and files, from inefficient layer caching, and from building on full base images (like the full Python image in this case). Dispanshu approaches the solution in five parts:

  1. Multi-stage builds
  2. Layer optimizations
  3. Minimal base images (including Scratch)
  4. Advanced techniques like distroless images
  5. Security best practices

Using these techniques, the image shrinks from 1.2GB to 8MB! The most impactful change is multi-stage builds, to which the writer credits over 90% of the size reduction. I have never used these techniques before, but my interest was definitely piqued when I saw the massive reduction they produced.

The multi-stage builds technique separates the build stage from the production stage. Build-time dependencies are kept out of the actual runtime environment, which avoids including any unnecessary files or tools in the resulting image. Another technique recommends minimal base images: the slim or alpine version (for Python) over the full version in the build stage, and the scratch base image (no OS, no dependencies, no data or apps) in the production stage. Using a scratch image has pros and cons, but when image size and optimization are the priorities, this is an ideal route (a sketch of the multi-stage idea follows below).
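To see how the pieces fit together, I put together a hypothetical multi-stage Dockerfile in the spirit of the article; the file below is my own sketch under assumed names, not Dispanshu's exact code.

```dockerfile
# Hypothetical sketch of a multi-stage build (not the article's exact file).

# --- Build stage: slim base with pip and build tooling ---
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt .
# Install dependencies into a known folder so only that folder ships onward.
RUN pip install --no-cache-dir --target=/app/deps -r requirements.txt
COPY . .

# --- Production stage: minimal runtime image, no pip, no shell, no extras ---
FROM gcr.io/distroless/python3-debian12
WORKDIR /app
COPY --from=build /app /app
ENV PYTHONPATH=/app/deps
CMD ["main.py"]
```

Everything the build stage used (pip, caches, build tooling) is discarded; only the application and its installed dependencies reach the final image, which is where the bulk of the size reduction comes from.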

Another interesting piece of this article covers advanced techniques like distroless images, Docker BuildKit, a .dockerignore file, and eliminating any excess files. The way the writer explains distroless images makes the concept and the use case very clear. The high-level differences between the full distribution image, the scratch image, and the distroless image are described as different ways we can pack for a trip:

  1. Pack your entire wardrobe (Full Distribution Image)
  2. Pack nothing and buy everything at your destination (Scratch Image)
  3. Pack only what you’ll actually need (Distroless Image)

The analogy makes the relationship between these three image options seem obvious, but I imagine applying any of these techniques would require some perseverance. The article describes an approach that juggles simplicity, performance, cost, and security with very impactful results. Those results are proof of the value these techniques can provide, and I will be seeking to apply them in my future work.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

Exploring OpenAPI with YAML

A blog from Amir Lavasani caught my attention this week, as it aligns perfectly with topics we are currently covering in our course, like working with the OpenAPI Specification (OAS) in YAML. Working with REST APIs is relatively new to me, and I am still building a full understanding of how these requests and interactions work. Lavasani structures this post as a tutorial for building a blog posts API, but there is a wealth of background information and knowledge provided for each step. Please consider reading this blog post and giving the tutorial a try!

This tutorial starts, as we all should, with planning the anticipated endpoints and methods, understanding the schema of the JSON objects that will be used, and acknowledging project requirements. This step, though simple, is a helpful reminder that planning is vital to a clean, concise structure once the work is implemented. Moving into the OAS architecture, Lavasani breaks it down into the metadata section, the Info object, the Servers object, and the Paths object (and all the objects within it). Authentication is touched on briefly, but the writer links to outside sources to learn more. For me, this post reiterates all the OpenAPI work we have completed in class with another example application, and it provides additional resources to learn from and work with.

We had already reviewed most of the basics this post touches on in our course, but the writer provides valuable information about the minor details. One hint that was useful to me is the ability to use Markdown syntax in OAS description fields, which helps make an API easy to use and understand. I also learned how the full URL is constructed from the paths object: our paths are based on the endpoints we define and the operations (HTTP methods) they support, and each path is concatenated onto the URL from the servers object to create the full URL. This is somewhat obvious, but seeing it spelled out as Lavasani does fully reinforces what we know about these interactions (see the sketch below). Another new piece of knowledge relates to the parameter object: I was not initially aware of all the ways we can set operation parameters. We know how to pass parameters via the URL path or the query string, but we can also use headers or cookies, which is useful to know for future implementations. Lastly, the writer mentions the OpenAPI Generator, which can generate client-side and server-side code from an OAS. Though I am not familiar with this tool, I can see its practicality, and I will likely complete this tutorial and follow up by learning about the outside tools mentioned.
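To tie these details together, here is a minimal OAS sketch in the spirit of Lavasani's blog posts API; the endpoint and field names are my own assumptions, not copied from the post.

```yaml
# Minimal OAS sketch (endpoint and field names are my own assumptions).
openapi: 3.0.3
info:
  title: Blog Posts API
  description: Example API for blog posts. **Markdown works in descriptions.**
  version: 1.0.0
servers:
  - url: https://api.example.com/v1    # every path below is appended to this
paths:
  /posts/{postId}:
    get:
      summary: Retrieve a single post
      parameters:
        - name: postId        # parameter passed in the URL path
          in: path
          required: true
          schema:
            type: integer
        - name: format        # parameter passed in the query string
          in: query
          required: false
          schema:
            type: string
            enum: [html, markdown]
      responses:
        '200':
          description: The requested post
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: integer
                  title:
                    type: string
                  body:
                    type: string
```

A request like GET https://api.example.com/v1/posts/42?format=html shows the concatenation in action: server URL, then path, then query string. Swapping a parameter's location to `in: header` or `in: cookie` would move it to those spots instead.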

This blog provides a practical example of working with the OpenAPI Specification, reinforcing concepts we’ve studied in class while providing useful insights. 

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.