Author Archives: hndaie

Sprint 3 Retrospective Blog

In Sprint 3, I did not have a specific personal task. Instead, the sprint was spent primarily working together as a team to fix issues introduced in Sprint 1 and to complete outstanding tasks from Sprint 2. As a recap, my Sprint 1 task was to add a new date field to the backend API to indicate when a new account was created. My Sprint 2 task was to modify all endpoints to accept access tokens so that the system could verify whether a request was being made by an authorized user.
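
To give a rough picture of what accepting an access token on an endpoint can look like, here is a minimal sketch of an Express-style middleware that checks a bearer token before a request reaches a route handler. This is my own simplified illustration, not the team's actual code; the verifyToken helper and the /guests route are assumptions.

```typescript
// Hedged sketch: an Express middleware that requires a bearer token.
// verifyToken and the /guests route are illustrative, not the real project code.
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Stand-in for whatever the IAM team provides to validate tokens.
function verifyToken(token: string): boolean {
  return token.length > 0; // real logic would check a signature or introspect the token
}

function requireAuth(req: Request, res: Response, next: NextFunction): void {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice("Bearer ".length) : "";
  if (!verifyToken(token)) {
    res.status(401).json({ error: "missing or invalid access token" });
    return;
  }
  next(); // token accepted, continue to the endpoint
}

// Applying the middleware makes every endpoint require a token.
app.use(requireAuth);
app.get("/guests", (_req, res) => res.json([]));

app.listen(3000);
```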

What worked best for me in Sprint 3 was being able to fix existing bugs, which gave me better insight into the general structure and functionality of the system. By gaining a better understanding of backend behavior, especially how to apply access tokens and authorization, I got a lot of experience in backend development. What did not work well for me was that when we opened a merge request, the pipeline failed because of linting and test errors. My teammate Sean and I were confused about why the test failed in the pipeline; the error was in an endpoint that we had not even touched while working on our tasks. This is not how I ideally wanted to end the sprint, but we did not have enough time to fix the issue.

In general, I believe we worked quite well together as a team. My team was effective at sprint planning and at dividing and carrying out each member's tasks. All members of the team were good at reporting what they had accomplished every week of the sprint. I liked that my team members were always there to provide feedback each week and helped me when I was not sure how to approach my assignment. As we have come to the end of the semester, I honestly do not have much to suggest in the way of team improvement. In the future, I hope that each team member continues to improve on their own tasks in future workplaces where they need to work in a group.

Since Sprint 3 involved less outright ownership and more integration and collaboration work, I found my experience closely relating to the “Be the Worst” pattern in Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman by Dave Hoover and Adewale Oshineye. The “Be the Worst” approach tells programmers to specifically put themselves in situations where they are the least experienced or the least able member of the group. The idea is that this positioning optimizes for growth by allowing you to learn from more experienced colleagues, be challenged by high expectations, and learn good habits by watching and collaborating with others.

I chose this pattern because my experience in Sprint 3 echoed its underlying message. Rather than concentrating on completing new tasks of my own, I was deeply involved in teamwork: reviewing and debugging code that I did not write, learning unfamiliar parts of the system, and completing work that demanded a broader view of the backend infrastructure. This made me consider how valuable it is to be in situations where you are pushed to learn by being around more experienced or otherwise more capable coworkers.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfobackend/-/merge_requests/118

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Security Testing

Week 13 – 4/27/2025

The OWASP Web Security Testing Guide (WSTG) is a globally recognized standard for web application security testing. It presents a formalized methodology divided between passive testing (e.g., information gathering and understanding application logic) and active testing (e.g., vulnerability exploitation), with key categories including authentication, authorization, input validation, and API security. The guide takes a black-box approach first, mimicking real-world attack patterns, and includes versioned identifiers (e.g., WSTG-v42-INFO-02) to make revisions traceable. Collaborative and open-source, the WSTG accepts contributions from security professionals so the document stays up to date with new threats.

I chose this resource because we use web applications every day, and it is interesting to see how security testing is implemented for them. The WSTG is ideal for students transitioning into cybersecurity careers because of its systematic nature, which bridges the gap between theoretical concepts (e.g., threat modeling) and actual evaluation procedures. Its emphasis on rigor and reproducibility echoes industry standards and regulations that are widely discussed in our coursework, such as GDPR and PCI DSS compliance.

I was impressed with the WSTG’s emphasis on proactive security integration. I have noticed that fully automated approaches occasionally overlook context-dependent vulnerabilities like business logic problems, so the guide’s suggestion to combine automated tools (like SAST/DAST) with manual penetration testing closes that gap. The way the guide categorizes tests, for example input validation testing to prevent SQL injection, offers a clear path for risk prioritization, which I now see is a skill I must acquire to allocate resources effectively in real-world projects.

An extremely useful lesson was the importance of ongoing testing throughout the development lifecycle. The WSTG’s “shift-left” model complements our study of DevOps practices by adding security at the beginning of the SDLC and minimizing post-deployment risk. One way to find misconfigurations before deployment is to use tools like OWASP ZAP, which the guide explains, during code reviews. However, novices may be overwhelmed by the guide’s scope. I will address this by starting with its risk-based testing methodology, with particular emphasis on high-risk areas such as session management and authentication. This is in line with HackerOne’s best practices for adversarial testing, where vulnerabilities are prioritized by their exploitability.
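
To make the input validation category concrete, the classic defense against SQL injection is to keep user input out of the query text entirely and pass it as a bound parameter. The sketch below is my own illustration using the node-postgres (pg) client; the table and column names are made up.

```typescript
// Hedged sketch of WSTG-style input validation thinking: never concatenate
// user input into SQL. Table and column names are made up for illustration.
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from environment variables

// Vulnerable pattern: a username like "admin' OR '1'='1" changes the query's logic.
async function findUserUnsafe(username: string) {
  return pool.query(`SELECT * FROM users WHERE username = '${username}'`);
}

// Safer pattern: the value is sent as a bound parameter and never parsed as SQL.
async function findUserSafe(username: string) {
  return pool.query("SELECT * FROM users WHERE username = $1", [username]);
}
```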

Going forward, I would like to use the WSTG’s approach and take advantage of the guide’s open-source status to support collaboration, for example by holding seminars for developers on threat modeling, which NordPass’s security best practices also emphasize as an important step. Through the adoption of the WSTG’s formalized methodology, I hope to improve application security and support a proactive risk-management culture. This is important in the current threat landscape, where web application vulnerabilities represent 39% of breaches.

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Software Technical Review

Week 14 – 5/2/2025

This is my last week of class, and it is kind of bittersweet. The topic for this week was software technical review. While I was working on my last project for the class, I read a blog post called “What is Technical Review in Software Testing?” by Ritika Kumari. I did not read this article to find out what a technical review is, but to learn more about the process behind it.

The article gives a suitable introduction to technical reviews in software testing, stating that technical reviews are formal assessments conducted by technical reviewers to examine software products such as documentation, code, and design. Technical reviews are designed to check compliance with standards, enhance code quality, and identify defects in the initial phases of the Software Development Life Cycle (SDLC). The blog discusses how technical reviews reduce the cost of rework, raise the team’s level of expertise, and align software outcomes with business goals.

I picked this article because it is very much in line with the topic we had for this week’s class. The article mixes practical applications, such as Testsigma’s integration for test case management, with abstract concepts, like static testing and peer reviews. Its emphasis on collaborative procedures also aligns with our class’s ideas about agile teamwork.

The blog highlighted the importance of spotting design or code bugs early in development, noting that doing so can cut post-release costs by up to 70%, as illustrated by an example of re-engineering faulty software. This aligns with the “shift-left” testing philosophy that we examined. Technical reviews are as much about information sharing as they are about error detection; for example, I had not realized how much cross-functional knowledge is built up through walkthroughs and peer reviews. I will look to apply this idea further in automation efforts. Testsigma’s review capabilities, such as automated test case submission and element management, demonstrated how tools can speed up reviews.

The blog made me rethink my assumption that reviews are only a “checklist activity.” Rather, they are interactive processes that balance teamwork and technical correctness. For instance, the difference between formal, defect-oriented inspections and informal, knowledge-sharing peer reviews gave me a better understanding of how to tailor reviews to project requirements. I will promote systematic technical reviews in my future work environments. Overall, this was an interesting class, and I hope to use the lessons I have learned throughout my professional career.

https://testsigma.com/blog/technical-review-in-software-testing/

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective Blog

My task in Sprint 2 was to modify all endpoints to accept access tokens so that the system can determine whether a request is coming from an authorized individual. The goal is for these tokens to let the system know if the requester is authorized, but this can only be completed after the IAM team figures out how the tokens will work. In this sprint, I collaborated very closely with my partner Sean Kirwin.

What worked best for me in Sprint 2 was working with Sean and continuing to work on the backend. I was once again working on an issue that I am comfortable with, and I was able to continue a task that was still in progress from the previous sprint.

As a team, we continued to get along well. Communication from my teammates was excellent: everyone was open about the issues they were facing and about why they were late to or missing class, and everyone was helpful whenever a teammate needed help. Every week, my teammates were there to offer helpful criticism, and I appreciated their support when I needed it; they also answered my questions about how to proceed with my assignment. I know there is always room for improvement, but I think my team did very well, and we did not have any major issues during this sprint.

Relying on another team was somewhat challenging for me. We were not able to accomplish as much as we needed to because we were waiting to see how the tokens would work, since that is the IAM team’s decision to make. Although we kept communicating with them on Discord, I think we should have sat down and spoken face-to-face or on Zoom so that we could better understand each other’s views; at times it felt like both teams were talking past each other.

The pattern I associated with this sprint from the Apprenticeship Patterns book was “Create Feedback Loops”. This chapter highlights the importance of constant, actionable feedback in speeding up learning and improvement on an apprentice’s journey toward mastery. Feedback loops help uncover weaknesses, confirm progress, and hone skills. They are essential for moving from apprentice to journeyman and later to master, because they enable self-awareness and incremental improvement.

I chose this pattern because it is so closely related to what happened during the sprint. Although we are told not to carry work over from a previous sprint, I ended up doing so: while waiting for feedback from the other team before I could continue my Sprint 2 task, I had time available to fix a small mistake made in Sprint 1. I was able to learn from my teammates’ feedback and correct the small mistakes from Sprint 1. The “Create Feedback Loops” pattern helped me step back, listen to the feedback from my peers, and strategize on how to improve.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfobackend/-/issues/141

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Sprint 1 Retrospective Blog

My task for Sprint 1 was to add a new date field to the backend API that shows when a guest was created. In this sprint, I worked closely with my teammate Sean Kirwin, as he was working on the backend as well. I had to create a reference to the schema for the date of the first entry in the OpenAPI specification. Below is the link to the merge request.
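
As a rough sketch of what the change amounts to (the field and type names here are my own illustration, not the project's actual OpenAPI schema), the backend stamps a creation date the moment a guest record is created:

```typescript
// Hedged sketch of adding a creation-date field to a guest record.
// Field names are illustrative; the real schema lives in the project's OpenAPI spec.
interface Guest {
  uuid: string;
  name: string;
  dateCreated: string; // ISO 8601 date-time, set once when the guest is created
}

function createGuest(uuid: string, name: string): Guest {
  return {
    uuid,
    name,
    dateCreated: new Date().toISOString(), // stamped by the backend, not the client
  };
}
```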

What worked well for me in Sprint 1 was being able to work on a task that I had experience with. I was confident I could get what I needed to do done in a timely manner. I also appreciated the fact that I was able to work with one of my team members.

As a team, I believe we worked well together. My team did a good job with sprint planning and divided the roles for each task effectively. Each team member was good at communicating what they had done each week of the sprint. I appreciated that my team members were always there to give constructive feedback each week and also helped me out when I had a question about how to navigate my task. One way my team could improve is to show the work we did for the week instead of just explaining it; having something to look at would help.

This task took me more time than it needed to. As an individual, I could improve my time management when working on my assigned tasks. Although I improved my communication with the team throughout the sprint, I think I could still do better. I did not necessarily have issues with making the changes to the files; however, my partner and I had issues with the merge request. The pipeline kept failing because of lint-commit-messages, but my partner and I were able to resolve the issue with some research.

The pattern I related to from the Apprenticeship Patterns book was “Learn How You Fail”. This chapter highlights the significance of being aware of your weaknesses and areas of failure. It argues that although learning improves performance, shortcomings and failures persist unless they are actively addressed. The most important lesson is to seek self-awareness about the actions, circumstances, and habits that contribute to failure. This self-awareness makes it possible to choose deliberately, either to accept and work around your limits or to improve in particular areas. The pattern promotes a balanced approach to growth and discourages both self-pity and perfectionism.

I chose this pattern because it is closely related to my experience during the sprint. I ran into issues with time management, working with others on code reviews, and managing merge request problems. I became aware that I was frequently making the same errors, such as failing to consider edge cases or misinterpreting project specifications. If I had been more deliberate about identifying these failure patterns early, I could have dealt with them proactively rather than reactively when issues emerged. The “Learn How You Fail” pattern would have helped me take a step back, identify recurring problems, and strategize ways to improve.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfobackend/-/merge_requests/107

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Equivalence Class Testing

Week 7 – 3/7/2025

This week, in my most recent class, we had a POGIL activity on Equivalence Class Testing (ECT). For this week’s source, I watched a YouTube video titled “Equivalence Class Testing Explained,” which covers the essentials of this black-box testing method.

The host of the video defines ECT as a technique for partitioning input data into equivalence classes, partitions whose inputs are expected to yield similar results. Because of this, testing one value per class reduces redundant cases without sacrificing coverage. To demonstrate, the presenter tested a function that accepts integers between 1 and 100. The classes in this example are invalid lower (≤0), valid (1–100), and invalid upper (≥101). The video also emphasized boundary value testing, in which values like 0, 1, 100, and 101 are used to catch common problems at partition boundaries.
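
To make the example concrete, here is a small sketch of how those classes and boundary values could be turned into tests. The function and the chosen representative values are my own illustration of the video’s 1–100 example, not code taken from it.

```typescript
// Hedged sketch of the 1-100 example: one representative per equivalence class,
// plus boundary values. The function and values are my own illustration.
import assert from "node:assert";

// System under test: accepts only integers from 1 to 100 inclusive.
function isValidQuantity(n: number): boolean {
  return Number.isInteger(n) && n >= 1 && n <= 100;
}

// One representative value per equivalence class...
const representatives: Array<[number, boolean]> = [
  [-5, false],  // invalid lower class (<= 0)
  [50, true],   // valid class (1-100)
  [150, false], // invalid upper class (>= 101)
];

// ...plus boundary values, where off-by-one mistakes usually hide.
const boundaries: Array<[number, boolean]> = [
  [0, false],
  [1, true],
  [100, true],
  [101, false],
];

for (const [input, expected] of [...representatives, ...boundaries]) {
  assert.strictEqual(isValidQuantity(input), expected, `failed for input ${input}`);
}
console.log("All equivalence class and boundary tests passed.");
```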

I chose this video because our course covers ECT and I wanted more information about the topic; the course textbook was difficult to follow on its own. The class activity got me working with the topic, but the video clarified it better for me. Its visual illustrations and step-by-step discussion clarified the practical application of ECT. The speaker’s observation about maintaining a balance between being thorough and being efficient resonated with me, especially after spending hours writing duplicate test cases for a recent project.

Before watching, I thought that thorough testing had to exercise every possible input. The video rebutted this by demonstrating how ECT reduces effort without losing effectiveness, and I realized that my previous method of testing each edge case individually does not scale. Another fascinating point was the distinction between valid and invalid classes. In a previous project I had dealt primarily with “correct” inputs and neglected how the system handled bad data; after watching the video’s demonstration, I understand how crucial both kinds of testing are for ensuring robustness. Going forward, I will adopt this approach in my future projects where it is needed.

This video changed my perception of testing from a boring activity to a sensible one. It serves the needs of our course directly by promoting efficient, scalable engineering practices. With ECT I can create fewer yet stronger tests, which will surely help me as a software developer. Equivalence class testing is not just theory; it is a tool for smarter problem-solving, and I want to keep practicing it.

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Unit Testing: Decision Tables

Week 5 – 2/23/2025

For this week’s blog, I recently came across an insightful article titled “Decision Table Testing: A Comprehensive Guide” on the Testsigma website. This article provided a detailed overview of decision table testing, a technique for testing system behavior for various input combinations. The article not only defined the concept but also went over its applicability, benefits, and practical applications. 

I chose this resource because we had just done a POGIL activity on decision table testing in class, and I wanted to learn more about how it works in real-world circumstances. The article stood out to me because it was well organized, simple to understand, and contained practical examples to make the subject more approachable. As someone who is still learning about software testing, I found this material to be both instructive and accessible.

The article begins by defining decision table testing as a black-box testing technique for determining how a system responds to different input combinations. It then describes the structure of a decision table, which is made up of conditions (inputs) and actions (outputs), as well as how to generate one. What I found most useful was the step-by-step illustration of how to apply decision table testing to a login system. This example helped me visualize how the strategy works in practice.
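As a rough sketch of that idea (the rules below are my own simplified version, not the exact table from the article), each column of a login decision table can be turned directly into a test case:

```typescript
// Hedged sketch of a decision table for a simple login check.
// The conditions, actions, and rules are my own illustration.
interface Rule {
  validUsername: boolean; // condition 1
  validPassword: boolean; // condition 2
  expected: "grant access" | "show error"; // action
}

// Each column of the decision table becomes one rule to test.
const decisionTable: Rule[] = [
  { validUsername: true,  validPassword: true,  expected: "grant access" },
  { validUsername: true,  validPassword: false, expected: "show error" },
  { validUsername: false, validPassword: true,  expected: "show error" },
  { validUsername: false, validPassword: false, expected: "show error" },
];

// Stand-in for the real login logic under test.
function login(validUsername: boolean, validPassword: boolean): string {
  return validUsername && validPassword ? "grant access" : "show error";
}

for (const rule of decisionTable) {
  const actual = login(rule.validUsername, rule.validPassword);
  console.assert(actual === rule.expected, `rule failed: ${JSON.stringify(rule)}`);
}
```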

One of the most important takeaways for me was the emphasis on the value of decision table testing in dealing with complex business logic. The article explained how this technique assures that all conceivable scenarios are examined, lowering the likelihood of missing key edge cases. This spoke to me because, in my limited experience, I’ve seen how easy it is to overlook specific input combinations, particularly in systems with several decision points. The blog also covered decision table testing’s limits, such as its inefficiency for systems with a large number of inputs, which helped me understand when to utilize this technique and when to look into alternatives.

Reading this article has greatly increased my understanding of decision table testing. I’m now more confident in my ability to apply this strategy to future projects. For example, I envision myself utilizing decision tables to evaluate systems with well-defined criteria, such as payment processing or eligibility verification. In addition, the blog emphasized the need for thorough testing and considering all possible circumstances, which I will include in my testing methods.

Overall, this article was a helpful resource for my learning experience. It not only simplified a subject that I was having trouble understanding, but it also provided practical insights that I may use in the future. This article is highly recommended to anyone looking for a clear and practical explanation of decision table testing.

https://testsigma.com/blog/decision-table-testing/

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Microservice Architecture

In today’s fast-paced digital world, software systems are required to be scalable, adaptable, and robust. Microservice architecture is one architectural approach that has gained significant popularity in meeting these goals. Recently, I discovered a helpful article named “Microservices Architecture” on Microsoft’s Azure Architecture Center website, which offers a full description of this approach.

The article describes microservices architecture as a design pattern in which applications are built as a collection of small, independent, and loosely coupled services. Each service is responsible for a distinct function and can be built, deployed, and scaled separately. This differs from monolithic systems, in which all components are tightly integrated into a single codebase. The article highlights the advantages of microservices, such as increased scalability, shorter development cycles, and the option to use different technologies for different services. It also addresses difficulties like the increased complexity of managing inter-service communication, data consistency, and deployment pipelines.

I chose this article because Microsoft Azure is a cloud computing platform that I am familiar with, and I wanted to learn more about how it fits into microservice architecture. The article’s clear explanations and practical insights make it an excellent pick for learning about microservices in a real-world setting.

Reading the article was an eye-opening experience. I was particularly struck by the emphasis on independence and modularity in microservices. The thought of each service being created and deployed individually appealed to me since it enables teams to work on different areas of an application without stepping on each other’s toes. This method not only accelerates development but also makes it easier to discover and resolve problems.

However, this article also made me aware of the issues that come with microservices. For example, maintaining communication across services requires careful design, and guaranteeing data consistency between services can be challenging. This helped me realize the value of solutions like API gateways and message brokers, which help manage and streamline these interactions.
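
To picture where an API gateway fits, here is a minimal sketch of a single public entry point that forwards requests to independent services. The service names, ports, and routing scheme are assumptions for illustration; a real gateway product would also handle authentication, rate limiting, and monitoring.

```typescript
// Hedged sketch of the API gateway idea: one public entry point that forwards
// requests to independent backend services. Names and ports are made up.
import express from "express";

const app = express();

// Hypothetical internal service addresses.
const services: Record<string, string> = {
  guests: "http://localhost:3001",
  inventory: "http://localhost:3002",
};

// Forward /api/<service>/... to the matching internal service. For brevity this
// only forwards simple requests; bodies and headers are not passed through.
app.all("/api/:service/*", async (req, res) => {
  const target = services[req.params.service];
  if (!target) {
    res.status(404).json({ error: "unknown service" });
    return;
  }
  const path = req.originalUrl.replace(`/api/${req.params.service}`, "");
  try {
    const upstream = await fetch(`${target}${path}`, { method: req.method });
    res.status(upstream.status).send(await upstream.text());
  } catch {
    res.status(502).json({ error: "service unavailable" });
  }
});

app.listen(8080, () => console.log("gateway listening on 8080"));
```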

One of the most important lessons I learned is that microservices are not a one-size-fits-all solution. The article highlights that this architecture is best suited for large, complex applications that demand a high level of scalability and flexibility; for smaller projects, a monolithic approach may be more suitable. This nuanced viewpoint helped me understand that the right architecture depends on the project’s individual requirements.

In the future, I plan to apply microservices architectural ideas to my own projects. I’m particularly looking forward to exploring containerization technologies like Docker and orchestration platforms like Kubernetes, both of which are commonly used in microservices setups. I’ll also remember how important it is to build clear APIs and implement effective monitoring mechanisms to handle the complexity of distributed systems.

https://learn.microsoft.com/en-us/azure/architecture/guide/architecture-styles/microservices

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Apprenticeship Reflection

February 10, 2025

Honestly speaking, learning does not end in the classroom. Real-world experience is needed for growth in the professional world, which is where Apprenticeship Patterns, an idea from Hoover and Oshineye’s book Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman, comes in handy. These patterns serve as a road map for learning and progress, particularly in technical and skill-based fields.

This book emphasizes lifelong learning, adaptability, and community participation. It promotes a growth mentality and emphasizes the value of experimentation and knowledge sharing. One of the most useful aspects of the reading was the idea that becoming a software craftsman is not a linear path. The Chapter 5 pattern “Learn How You Fail” somewhat changed the way I think about life in general. My first thought before even reading it was how failing could possibly make you succeed, but then I realized how critical it is for academic and professional advancement. By reflecting on failures, such as a tough exam or a coding error, students can adjust their approach and develop their skills. Using apprenticeship patterns can help us, as young upcoming professionals, establish an attitude of continual learning, making us more adaptable and better equipped for future jobs. When we start seeing our education as an apprenticeship, we can hopefully bridge the gap between theory and practice.

Growing up, I have always been pushed to be the best at everything that I do, especially when it comes to academics. My parents always expected me to be the number one student, which is why I do not fully agree with “Be the Worst”. There is no issue with being surrounded by intelligent people, or rather people who are more knowledgeable than you; however, you cannot be completely clueless.

The chapters that are most relevant to me right now are Chapter 2 – “Emptying the Cup” and Chapter 5 – “Perpetual Learning”. As someone early in my career, I’m constantly dealing with imposter syndrome and the need to unlearn outdated practices, and these chapters provide practical advice and reassurance that these struggles are part of the journey.

Overall, the reading has changed my understanding of what it means to be a software craftsman. It is about more than technical skills; it is about adopting a growth mindset, engaging with a community, and committing to lifelong learning. I am looking forward to learning more about these patterns and applying what I have learned to my own profession.

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Introductory Blog: CS-443

Jan 28, 2025

Hello everyone, and welcome to my blog! My name is Hiercine Ndaie. I am excited to be taking this class. I hope to use what I learn in this class in my professional career.

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.