Author Archives: hndaie

Sprint 1 Retrospective Blog

My task for Sprint 1 was to add a new date field to the backend API that records when a guest was created. In this sprint, I worked closely with my teammate Sean Kirwin, as he was working on the backend as well. I had to create a reference to the schema for the date of the first entry in the OpenAPI specification. Below is the link to the merge request.

What worked well for me in Sprint 1 was being able to work on a task that I had experience with. I was confident that I could get what I needed done in a timely manner. I also appreciated the fact that I was able to work with one of my team members.

As a team, I believe we worked well together. My team did a good job with sprint planning and divided the roles for each task effectively. Each team member was good at communicating what they had done each week of the sprint. I appreciated that my team members were always there to give constructive feedback each week and helped me out when I had questions about how to navigate my task. One way my team could improve is to show the work we did each week instead of just explaining it; seeing it visually would help.

This task took me more time than it should have. As an individual, I could improve my time management when working on my assigned tasks. Although I improved my communication with the team throughout the sprint, I think I could do better. I did not necessarily have issues with making the changes to the files; however, my partner and I had issues with the merge request. The pipeline kept failing because of the lint-commit-messages job, but my partner and I were able to resolve this issue with some research.

The pattern I related to from the Apprenticeship Patterns book was “Learn How You Fail”. This chapter highlights the significance of being aware of your weaknesses and areas of failure. It implies that although learning improves performance, shortcomings and failures endure unless they are actively addressed. The most important lesson is to seek self-awareness about the actions, circumstances, and habits that contribute to failure. This self-awareness makes it possible to make deliberate choices: either to accept and work around your limits or to improve in particular areas. The pattern promotes a balanced approach to growth and discourages self-pity or perfectionism.

I chose this pattern because it is closely related to my experience during the sprint. I ran into issues with time management, working with others on code reviews, and managing merge request issues during the sprint. I became aware that I was frequently making the same errors, such as failing to consider edge scenarios or misinterpreting project specifications. I could have dealt with these failure patterns proactively rather than reactively when issues emerged if I had been more deliberate about identifying them early. The “Learn How You Fail” pattern would have helped me take a step back, identify recurring problems, and strategize ways to improve.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfobackend/-/merge_requests/107

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Equivalence Class Testing

Week 7 – 3/7/2025

This week, in my last class, we did a POGIL activity on Equivalence Class Testing (ECT). For this week’s source, I watched a YouTube video titled “Equivalence Class Testing Explained,” which covers the essentials of this black-box testing method.

The host of the video defines ECT as a technique for partitioning input data into equivalence classes: partitions whose inputs are expected to yield similar results. Testing one representative value per class reduces redundant cases without sacrificing coverage. To demonstrate, the presenter tested a function that takes in integers between 1 and 100. The classes in this example are invalid lower (≤0), valid (1–100), and invalid upper (≥101). The video also emphasized boundary value testing, in which values like 0, 1, 100, and 101 are used to catch common problems at partition boundaries.
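As a quick sketch of the example from the video, equivalence class and boundary testing might look like this in Python (the function name `accept` is my own assumption, not from the video):

```python
def accept(n: int) -> bool:
    """Hypothetical function from the video: accepts integers 1..100."""
    return 1 <= n <= 100

# One representative value per equivalence class:
assert accept(50) is True    # valid class (1-100)
assert accept(-5) is False   # invalid lower class (<= 0)
assert accept(150) is False  # invalid upper class (>= 101)

# Boundary value testing around the partition edges:
assert accept(0) is False
assert accept(1) is True
assert accept(100) is True
assert accept(101) is False
```

Three representatives plus four boundary values cover the whole input space, instead of one test per possible integer.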

I chose this video because our course covered ECT and I wanted more information about the topic. The course textbook was difficult to follow, and while the class activity introduced me to the technique, the video clarified it better. Its visual illustrations and step-by-step discussion made the practical application of ECT clear. The speaker’s observation about balancing thoroughness with efficiency resonated with me, especially after I spent hours writing duplicate test cases for a recent project.

Before watching, I thought that thorough testing had to cover all possible inputs. The video rebutted this by demonstrating how ECT reduces effort without losing effectiveness. I understood that my previous method of testing each edge case individually was not scalable. Another fascinating point was the difference between valid and invalid classes. In a previous project I had dealt primarily with “correct” inputs and neglected how the system handled wrong data. After watching the video’s demonstration, I realize how crucial both kinds of testing are for ensuring robustness. In the future, I will adopt this approach in my projects where it fits.

This video changed my perception of testing from a boring activity to a sensible one. It serves the needs of our course directly by promoting efficient, scalable engineering practices. With the help of ECT I can create fewer, yet stronger, tests, which will surely help me as a software developer. Equivalence class testing is a tool for wiser problem-solving, not just theory, and I want to keep practicing it.


Unit Testing: Decision Tables

Week 5 – 2/23/2025

For this week’s blog, I recently came across an insightful article titled “Decision Table Testing: A Comprehensive Guide” on the Testsigma website. This article provided a detailed overview of decision table testing, a technique for testing system behavior for various input combinations. The article not only defined the concept but also went over its applicability, benefits, and practical applications. 

I chose this resource because we had just done a POGIL activity on decision table testing in class, and I wanted to learn more about how it works in real-world circumstances. The article stood out to me because it was well organized, simple to understand, and contained practical examples to make the subject more approachable. As someone who is still learning about software testing, I found this material to be both instructive and accessible.

The article begins by defining decision table testing as a black-box testing technique for determining how a system responds to different input combinations. It then describes the structure of a decision table, which is made up of conditions (inputs) and actions (outputs), as well as how to generate one. What I found most useful was the step-by-step illustration of how to apply decision table testing to a login system. This example helped me visualize how the strategy works in practice.
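To make the login idea concrete, here is a minimal sketch of a decision table in Python (the specific conditions and actions are my own illustration, not taken from the article):

```python
# Conditions: (username_valid, password_valid) -> Action
# Each column (rule) of the decision table becomes one dictionary entry.
DECISION_TABLE = {
    (True,  True):  "grant access",
    (True,  False): "show 'wrong password' error",
    (False, True):  "show 'unknown user' error",
    (False, False): "show 'unknown user' error",
}

def login_action(username_valid: bool, password_valid: bool) -> str:
    """Look up the action for a given combination of conditions."""
    return DECISION_TABLE[(username_valid, password_valid)]

# Each rule of the table becomes one test case, so no combination is missed:
assert login_action(True, True) == "grant access"
assert login_action(True, False) == "show 'wrong password' error"
assert login_action(False, False) == "show 'unknown user' error"
```

With two boolean conditions there are exactly four rules, which is why the technique guarantees every input combination is examined.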

One of the most important takeaways for me was the emphasis on the value of decision table testing in dealing with complex business logic. The article explained how this technique assures that all conceivable scenarios are examined, lowering the likelihood of missing key edge cases. This spoke to me because, in my limited experience, I’ve seen how easy it is to overlook specific input combinations, particularly in systems with several decision points. The blog also covered decision table testing’s limits, such as its inefficiency for systems with a large number of inputs, which helped me understand when to utilize this technique and when to look into alternatives.

Reading this article has greatly increased my understanding of decision table testing. I’m now more confident in my ability to apply this strategy to future projects. For example, I envision myself utilizing decision tables to evaluate systems with well-defined criteria, such as payment processing or eligibility verification. In addition, the blog emphasized the need for thorough testing and considering all possible circumstances, which I will include in my testing methods.

Overall, this article was a helpful resource for my learning experience. It not only simplified a subject that I was having trouble understanding, but it also provided practical insights that I may use in the future. This article is highly recommended to anyone looking for a clear and practical explanation of decision table testing.

https://testsigma.com/blog/decision-table-testing/


Microservice Architecture

In today’s fast-paced digital world, software systems are required to be scalable, adaptable, and robust. Microservice architecture is one architectural approach that has gained significant popularity in meeting these objectives. Recently, I discovered a helpful article titled “Microservices Architecture” on Microsoft’s Azure Architecture Center website, which offers a thorough description of this approach.

The article describes microservices architecture as a design pattern in which applications are developed as a collection of small, independent, and loosely coupled services. Each service is responsible for a distinct function and can be built, deployed, and scaled separately. This differs from monolithic systems, which have all components tightly integrated into a single codebase. The article highlights the advantages of microservices, such as increased scalability, shorter development cycles, and the option to use different technologies for different services. It also addresses difficulties like the increased complexity of managing inter-service communication, data consistency, and deployment pipelines.

I chose this article because Microsoft Azure is a cloud computing platform that I am familiar with, and I wanted to learn more about how it is used in microservice architecture. The article’s clear explanations and practical insights make it an excellent pick for learning about microservices in a real-world setting.

Reading the article was an eye-opening experience. I was particularly struck by the emphasis on independence and modularity in microservices. The thought of each service being created and deployed individually appealed to me since it enables teams to work on different areas of an application without stepping on each other’s toes. This method not only accelerates development but also makes it easier to discover and resolve problems.

However, this article also made me aware of the issues that come with microservices. For example, maintaining communication across services necessitates careful design, and guaranteeing data consistency between services can be challenging. This helped me realize the value of solutions like API gateways and message brokers, which help streamline these operations.

One of the most important lessons that I learned is that microservices aren’t a one-size-fits-all solution. The article highlights that this architecture is best suited for big, complicated applications that demand a high level of scalability and flexibility. For smaller projects, a monolithic approach may be more suitable. This nuanced viewpoint helped me understand that the right architecture depends on the project’s specific requirements.

In the future, I plan to apply microservices architectural ideas to my own projects. I’m particularly looking forward to exploring containerization technologies like Docker and orchestration platforms like Kubernetes, both of which are commonly used in microservices setups. I’ll also remember how important it is to build clear APIs and implement effective monitoring mechanisms to handle the complexity of distributed systems.

https://learn.microsoft.com/en-us/azure/architecture/guide/architecture-styles/microservices


Apprenticeship Reflection

February 10, 2025

Honestly speaking, learning does not end in the classroom. Real-world experience is needed for growth in the professional world, which is where apprenticeship patterns—an idea from Oshineye and Hoover’s book Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman—come in handy. These patterns serve as a road map for learning and progress, particularly in technical and skill-based fields.

This book emphasizes lifelong learning, adaptability, and community participation. It promotes a development mentality and emphasizes the value of experimentation and information exchange. One of the most useful aspects of the reading was the idea that becoming a software craftsman is not a linear path. Chapter 5, “Learn How You Fail,” somewhat changed the way I think about life in general. My first thought before even reading it was how failing could possibly make you succeed in life, but then I realized how critical it is for academic and professional advancement. By reflecting on failures, such as a tough exam or a coding error, students can adjust their approach and develop their talents. Using apprenticeship patterns can help us as young, upcoming professionals establish an attitude of continual learning, making us more adaptable and better equipped for future jobs. When we start seeing our education as an apprenticeship, we can hopefully bridge the gap between theory and practice.

Growing up, I have always been pushed to be the best at everything I do, especially when it comes to academics. My parents always expected me to be the number one student, which is why I do not agree with “Being the Worst”. There is no issue with being surrounded by intelligent people, or rather, people who are more knowledgeable than you; however, you cannot be completely clueless.

The chapters that are most relevant to me right now are Chapter 2, “Emptying the Cup,” and Chapter 5, “Perpetual Learning.” As someone early in my career, I’m constantly dealing with imposter syndrome and the need to unlearn outdated practices, and these chapters provide practical advice and reassurance that these struggles are part of the journey.

Overall, the reading has changed my understanding of what it is to be a Software Craftsman. It’s more than simply technical skills; it’s about adopting a growth attitude, community, and lifelong learning. I’m looking forward to learning more about these patterns and applying what I have learned to my own profession.


Introductory Blog: CS-443

Jan 28, 2025

Hello everyone, and welcome to my blog! My name is Hiercine Ndaie. I am excited to be taking this class. I hope to use what I learn in this class in my professional career.


Clean Code

Week-15: 12/21/2024

It’s easy to get something functional, but is it good? The book “Clean Code” by Robert C. Martin showed me how flawed my past coding techniques were. This book isn’t just about making code work; it’s about crafting code that is understandable, maintainable, and, dare I say, beautiful.

The book highlights the importance of seeing code as a kind of communication rather than simply computer instructions. It’s about creating code that’s simple to understand, alter, and, ultimately, live with. The book provides several definitions of clean code, including the following: elegant, efficient, simple, straightforward, and carefully crafted. It also covers ideas and practices including utilizing meaningful names, constructing concise functions, correct commenting, formatting, error handling, unit testing, and so on. The book depicts the transition from sloppy to tidy code. It also includes a collection of code smells or heuristics that might help you write better code. 
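As a small illustration of the meaningful-names and concise-functions advice, here is a before-and-after sketch of my own (not an example from the book):

```python
# Before: cryptic names, one function doing several jobs at once
def f(l):
    r = []
    for x in l:
        if x % 2 == 0:
            r.append(x * x)
    return r

# After: intention-revealing names, each function does one thing
def is_even(number: int) -> bool:
    return number % 2 == 0

def square(number: int) -> int:
    return number * number

def squares_of_even_numbers(numbers: list[int]) -> list[int]:
    """Return the square of every even number, preserving order."""
    return [square(n) for n in numbers if is_even(n)]

assert squares_of_even_numbers([1, 2, 3, 4]) == [4, 16]
```

Both versions behave identically, but the second one can be read and modified without deciphering what `f`, `l`, and `r` mean.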

I chose this book because I often find myself spending more time deciphering what code does than actually adding new features. I was looking for guidance on how to avoid creating such a mess and to understand what makes code easy to work with, and the book seemed highly relevant to my goals as a student. I also appreciated the idea that code is read far more often than it is written, so making it easy to read is very important.

This book has opened my eyes, particularly to the importance of code readability. It is not enough to have code that is functional; it must also be understandable to everybody who will work with it. This is particularly crucial in collaborative efforts. The author’s description of “code sense,” a carefully developed feeling of ‘cleanliness,’ struck a chord with me. Knowing that code should be clean isn’t enough; we also need to learn how to make it clean. 

I will be putting the principles of “Clean Code” into practice by always striving to leave the code cleaner than I found it. I also plan to be more diligent in naming variables and functions, keeping functions short and focused, and implementing tests to validate and describe code. This book has taught me that clean code is more than a nice-to-have; it is an important component of becoming a professional software developer. I now recognize that “working” code is only the first step; “good” code needs ongoing attention and effort. I’m excited to implement these concepts into my future projects, not just to improve my grades but also to gain more experience programming software.

Book: Clean Code: A Handbook of Agile Software Craftsmanship by Robert Cecil Martin


REST API Designs

Week-15: 12/20/2024

This blog post really helped me understand how important design patterns are for making REST APIs that are easy to use and can handle a lot of traffic. The post talked about things like naming resources consistently, breaking up big chunks of data into smaller pieces, and making sure that updates don’t break older versions of the API.

One thing that really stuck with me was how important it is to name your resources consistently. The post emphasized that using standard and intuitive naming conventions makes APIs predictable for other developers. It’s like having a well-organized filing system—if everyone knows where things are supposed to go, it’s much easier to find what you need.

The post also talked about things like versioning and error handling, which are super important for making sure that your API can evolve over time without breaking things for people who are already using it. Versioning ensures backward compatibility as APIs evolve, which means that even if you make changes to your API, older applications that use it will still continue to work. Error handling is all about giving developers useful information when something goes wrong. The post mentioned that providing informative error responses can help guide developers who are using your API. I’m definitely going to be focusing more on these aspects in my future projects.
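As a minimal sketch of what an informative error response might look like (the field names, status code, and route in this example are my own assumptions, not prescribed by the post):

```python
import json

def make_error_response(status: int, code: str, message: str, hint: str = "") -> str:
    """Build a JSON error body that tells the client what went wrong and how to fix it."""
    body = {
        "status": status,
        "error": code,
        "message": message,
    }
    if hint:
        body["hint"] = hint
    return json.dumps(body)

# Hypothetical response served from a versioned route such as /api/v1/guests/{id}:
response = make_error_response(
    404,
    "guest_not_found",
    "No guest exists with the given id.",
    hint="Check the id or create the guest first.",
)
assert "guest_not_found" in response
```

A machine-readable error code plus a human-readable message and hint gives API consumers far more to work with than a bare 404.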

The reason that I chose this blog post was because it aligns closely with what I learned in the class. Our class discussions often emphasized the best practices for creating maintainable, user-friendly APIs, and this post serves as a practical extension of those theoretical concepts. I wish I had read this article before some of my homework assignments; it would have made it easier for me to understand the assignments. Additionally, its concise, actionable guidance provides a clear framework for applying these principles in real-world scenarios.

Overall, this blog post was a great way to connect the theoretical stuff we’ve been learning in class with how things actually work in the real world. It’s one thing to understand these concepts in theory, but it’s another thing to see how they’re applied in practice. This blog post provides practical insights into designing APIs that are easy to use and can be scaled up as needed. It’s given me a much better understanding of how to create APIs that are not just functional but also developer-friendly and built to last.

Blog link: https://blog.stoplight.io/api-design-patterns-for-rest-web-services


Git: Merge conflicts

Week-13: 12/2/2024

This week in class we worked on merge conflicts and how to resolve them. I completed an activity on resolving a merge conflict, and the experience was not as frustrating as I thought it was going to be. During the activity, Professor Wurst highlighted how important it is to understand version control systems like Git and to develop effective strategies for resolving conflicts collaboratively.

On this week’s blog hunt, I came across a helpful blog post by Sid Puri titled “Git Merge Conflicts,” which provided a clear explanation of what merge conflicts are, why they occur, and how to resolve them using Git tools. It broke down the process into manageable steps and even offered advice on how to prevent these conflicts from happening in the first place.

One of the key takeaways for me was understanding the root cause of merge conflicts: multiple developers making changes to the same file at the same time. Because Git can’t automatically figure out which changes to keep, it flags these conflicts and requires manual intervention. The post explained how to use Git’s notification system to identify conflicts and then how to manually merge code using conflict markers – those weird symbols like <<<<<<<, =======, and >>>>>>> that used to make my head spin.
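To show what those markers actually look like in a conflicted file, here is a small script of my own (not from the blog post) that finds unresolved conflict markers:

```python
# Lines Git inserts to delimit the two conflicting versions of a hunk.
CONFLICT_MARKERS = ("<<<<<<<", "=======", ">>>>>>>")

def find_conflict_markers(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs for lines that start with a conflict marker."""
    return [
        (lineno, line)
        for lineno, line in enumerate(text.splitlines(), start=1)
        if line.startswith(CONFLICT_MARKERS)
    ]

# A file as Git might leave it after a failed automatic merge:
conflicted = """def greet():
<<<<<<< HEAD
    print("hello from main")
=======
    print("hello from feature")
>>>>>>> feature-branch
"""
markers = find_conflict_markers(conflicted)
assert [lineno for lineno, _ in markers] == [2, 4, 6]
```

Resolving the conflict means choosing (or combining) the code between the markers and deleting the marker lines themselves before committing.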

The post also emphasized the importance of communication in preventing merge conflicts. This really resonated with me because our team conflict stemmed from two of us accidentally modifying the same section of code. If we had just communicated about our tasks beforehand, we could have avoided the whole issue. Moving forward, I’m definitely going to advocate for more frequent team check-ins and a more organized approach to task allocation.

What I really appreciated about the blog post was its practical approach to conflict resolution. It explained how to use built-in Git tools like git status and git diff to navigate conflicts with confidence. Mastering these tools will definitely save me time and frustration in future projects. Plus, learning how to handle and resolve conflicts collaboratively is a transferable skill that will be valuable in any team setting, not just software development.

Overall, this blog post was a great resource that directly complemented our coursework on team-based software development. It reinforced the idea that understanding and resolving merge conflicts isn’t just a technical skill; it’s an essential component of effective teamwork in software engineering. I feel much more prepared to tackle these challenges in the future and to contribute more effectively to my team projects.

Blog link: https://medium.com/version-control-system/git-merge-conflicts-4a18073dcc96


Software Architecture Patterns

Week-13: 12/2/2024

Understanding software architectural patterns is critical in the software development industry for creating strong, scalable, and maintainable products.

A recent Turing blog post, “Software Architecture Patterns and Types,” has been useful in solidifying my understanding of this important concept. This article provided a comprehensive overview of various patterns, including monolithic, microservices, event-driven, layered, and serverless architectures. The article gives clear explanations of each pattern’s design principles, advantages, and limitations.

For instance, while monolithic architectures offer simplicity, they often struggle with scalability. On the other hand, microservices excel in scalability and allow for independent deployment but can introduce complexity in maintenance and debugging. The article also explores emerging trends like serverless architecture, emphasizing their importance in modern cloud-based systems.

The practical examples and concise explanations in the article made it extremely relevant to what I learned in my classes, particularly my software construction, design, & architecture class. The discussion on system scalability and maintainability directly aligns with the topics we’re covering.

One of the most valuable takeaways for me was the emphasis on aligning architectural decisions with business objectives. The article effectively illustrates that a microservices architecture, while attractive for its scalability, might be overkill for a small-scale project. This resonated strongly with my recent experience in a group project where we debated between microservices and a layered design. Reflecting on the deployment and dependency management challenges we faced, the article validated our decision to opt for a simpler layered design as a better fit for our project’s scope.

Furthermore, the article’s discussion of serverless architecture was truly eye-opening. I had previously held a somewhat simplistic view of serverless as a universal scaling solution. However, the article shed light on its potential drawbacks, such as vendor lock-in and latency issues. This more nuanced perspective will undoubtedly inform my approach to future projects, encouraging me to critically evaluate new trends before jumping on the bandwagon.

Moving forward, I intend to apply this knowledge by diligently assessing the specific needs and constraints of each project before selecting an architectural pattern. For instance, when tackling a high-traffic e-commerce site, I would now consider employing an event-driven architecture to effectively handle asynchronous data flow. Alternatively, for smaller projects, I would advocate for a monolithic or layered approach to minimize overhead.

By understanding the trade-offs inherent in different architectural patterns, I feel better prepared to design and build software systems that are not only functional but also scalable, maintainable, and aligned with business goals.

Blog link: https://www.turing.com/blog/software-architecture-patterns-types
