Sprint 3 Retrospective

In this post, I'll be reflecting on our third and final sprint toward developing and implementing an Identity and Access Management system for Thea's Pantry. Coming out of Sprint 2, we had an almost-fully-functional proof of concept: a mock frontend that called out to Keycloak to require authentication, obtained an access token, and passed that token to the backend for validation. Our goal for Sprint 3 was to fully implement production microservices for Keycloak, the IAMBackend, and the IAMFrontend. These goals may not have been explicitly defined that way at the beginning of the sprint, but that ended up being our objective. We also wanted to have finalized documentation explaining our implementation and design choices.

Some of my personal work towards that goal was as follows:

GitLab

  • Documenting our low-level issues in GitLab and assigning them accordingly. I put additional effort this sprint into properly linking related issues and blockers, and into tracking key information in comments, as opposed to just using issues as a task list. Epic

  • Document and ticket any outstanding work that is necessary but outside the time constraints of this sprint and the semester. Approaching the end of the semester, there is still some work necessary to fully implement our microservices, but there is not enough time to complete it all. I have gone through and added any issues I can think of.

Backend

  • Fully implement an IAMBackend that mirrors the structure of the GuestInfoBackend. It is not yet merged, as it is not yet a fully functional MVP. That branch is here. This work has included, but was not limited to:

    • Refactoring comments and text to apply to an IAMBackend instead of GuestInfoBackend

    • Removing files and code that does not apply to IAMBackend, such as messageBroker.js

    • Modifying backend endpoint code to finalize it

    • Updating dependencies

    • Ensuring the GitLab build processes function as expected

  • I will be tying up some loose ends and hopefully merging IAMBackend before “finishing” the semester.

Frontend

  • Fully implement an IAMFrontend that mirrors the structure of the GuestInfoFrontend. It is not yet merged, as it is not yet a fully functional MVP. That branch is here. This work has included, but was not limited to:

    • Refactoring comments and text to apply to an IAMFrontend instead of GuestInfoFrontend

    • Updating dependencies

    • Ensuring the GitLab build processes function as expected

    • Adding and reconfiguring bin scripts as necessary

    • Doing frontend Vue work to actually create a redirect page

  • I will be tying up some loose ends and hopefully merging IAMFrontend before “finishing” the semester.

PantryKeycloak

I am just providing a link to the repository because I am the only one who has touched it; all work in there was done by me: PantryKeycloak

  • Fully configure and implement a production Keycloak repository

  • Add all custom settings for the TheasPantry Realm

  • Create an entrypoint script that automatically exports and saves all changes to the realm

Documentation

Documentation Repository

  • Finalize and write a lot of documentation

I feel like this sprint was a rough one across the board. I could definitely tell people were on the home stretch and sliding towards the end of the semester. I think people did good work, and our class meetings worked well for us; we always came to agreements and had purpose. I feel like we were working very slowly though, and at times, I felt like I was taking the bulk of the issues on myself. I think I could work on that as an individual, and understand that it is not necessarily my job to make sure every single thing gets done. I am very mission-oriented, so I naturally do that, despite the cost to myself. I feel like as a team, we could have paid more attention to each other and what people’s workloads looked like. That might have allowed us to work better together and be more mindful of our own deadlines and how we can help each other.

The pattern I have chosen for this sprint is Dig Deeper. This pattern describes learning things at a level where you truly understand why they are the way they are, how they work, and how to use them. I think it is relevant because the more I tried to learn how the different services and Docker interact, the better I was able to understand and implement solutions. If I had read this prior, I may have focused more on understanding before trying to implement, as opposed to while.

From the blog Mr. Lancer 987's Blog by Mr. Lancer 987 and used with permission of the author. All other rights reserved by the author.

Dice Game Code Review

This week we started working on our own POGIL activity, similar to the Sir Tommy code review. The activity the team has chosen is a dice game with a specification sheet; participants will complete the activity based on that sheet. The sheet will contain specific questions asking which lines contain bugs or have formatting issues that are not best practice. This will help students read code in more depth, as well as work together in searching for bugs and formatting issues. The team is going to meet outside of school to determine the questions and how we will go about working on the project. We will also focus on the types of questions we ask, in order to make readers think about what they are reading and think critically about how they will address the issues presented in the source code as well as the test code.

We are thinking about making a few models that explain concepts and ask questions that send users to the code to examine it, but also to think much more deeply than just the code they are looking at. We will incorporate encapsulation, inheritance, and polymorphism. This means that users will have to read and understand underlying methods within other methods in order to progress through the models, but it will still be simple enough that users do not take up too much time and can focus on the questions rather than the code.

In class yesterday we talked about what other teams were doing, and it was very interesting that everyone took a different approach to the homework. I am still glad my group stuck with Sir Tommy; because of underlying issues with the original, we thought it was a good idea to add certain bits and pieces to make the activity more understandable. We designed the model questions with stubs, mocks, fakes, and dummies in mind, dedicating an entire model to these objectives so that students would understand how each works and how to create and remove each for a more optimal solution.

Overall, this activity showed me how much I have learned about different types of testing, how to read and understand aspects of other people's code, and the importance of paying close attention to imports, since you usually assume the imports are correct. I also learned that I have been using dummies, fakes, stubs, and mocks without knowing it, since I had no prior knowledge of these types of testing. I really enjoyed the class, how it was structured, and how we had to figure things out on our own as well as a team.

From the blog CS@Worcester – Cinnamon Codes by CinCodes and used with permission of the author. All other rights reserved by the author.

The Best Java Testing Framework: JUnit

Summary of the Source

The blog post introduces JUnit as a unit testing framework designed for the Java programming language, explaining its evolution, core features, and significance. It outlines the primary components of JUnit, such as test cases, test suites, annotations (like @Test, @Before, and @After), and assertions. The guide also discusses test driven development (TDD) and how JUnit supports this methodology by encouraging developers to write tests before writing the actual implementation.

Additional features covered include mocking with Mockito and how to structure test cases for better readability and maintainability. Overall, the article serves as both an introduction and a deeper dive for those wanting to use JUnit effectively in real world software projects.
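Based on the components the article describes (test cases, @Test, @Before, and assertions), a minimal JUnit 4 test might look like the sketch below; the Calculator class and test names are hypothetical, not taken from the article:

```java
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test -- not from the article.
class Calculator {
    int add(int a, int b) { return a + b; }
}

public class CalculatorTest {
    private Calculator calc;

    @Before                 // runs before every test case (JUnit 5 renames this @BeforeEach)
    public void setUp() {
        calc = new Calculator();
    }

    @Test                   // marks this method as a JUnit test case
    public void addReturnsSum() {
        // One assertion is all it takes: expected value vs. actual value.
        assertEquals(5, calc.add(2, 3));
    }
}
```

A test runner (such as the one built into an IDE or a build tool like Maven) discovers the @Test methods and reports each pass or failure separately.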

Reason For Selecting This Blog

I chose this blog post because it is well written and, in only a roughly 10-minute read, covers everything there is to know about JUnit, at least as a foundation for starting out with it. It goes over what it is, why it's used, and even its history, before diving into how to set up the environment to use it, then shows examples of test cases using JUnit. I think it's an overall great resource for any developer interested in testing in Java, as it covers all the bases.

Personal Reflection

I was introduced to JUnit in university, and learning it there was really helpful for understanding how testing code works in general. I also liked JUnit especially because it is easy to understand and write: only a couple of lines of code can make a test case for your code. Assertions are especially useful, as they express the end result of the test, and with JUnit that's very simple. One line of assertEquals tests the expected value against the actual value; that's all it takes to verify the correct output. I also see how this framework would be preferred when doing test-driven development, as each test is separated into different cases denoted by the @Test annotation. This makes it easy to write specific tests for each feature as development continues. I haven't tried any other testing frameworks, but now that I have used JUnit I think it won't be as confusing to understand a different one. Since I like how JUnit works, though, I'll compare the others to it as if it were the gold standard for testing frameworks.

Conclusion

Knowing about JUnit is imperative if you plan on testing code in Java. It has everything you could want from a testing framework: it makes tests easy to write, handles multiple test cases, and includes assertions. I know that if I ever use Java in the future to test code, I will use JUnit because of how powerful, reliable, and simple it is.

Citation:
HeadSpin. (n.d.). JUnit: A Complete Guide. https://www.headspin.io/blog/junit-a-complete-guide

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

Security Testing

Week 13 – 4/27/2025

The OWASP Web Security Testing Guide (WSTG) is a globally recognized standard for web application security testing. It presents a formalized methodology divided between passive testing (e.g., information gathering, building knowledge of application logic) and active testing (e.g., vulnerability exploitation), with key categories including authentication, authorization, input validation, and API security. The guide takes a black-box approach first, mimicking real-world attack patterns, and includes versioned identifiers (e.g., WSTG-v42-INFO-02) to give more transparency across revisions. Collaborative and open source, the WSTG accepts input from security professionals so the document stays current on new threats.

I chose this resource because we use web applications every day, and it is interesting to see how security testing is implemented in them. The WSTG is ideal for students transitioning into cybersecurity careers due to its systematic nature, which bridges the gap between theoretical concepts (e.g., threat modeling) and actual evaluation procedures. Its emphasis on rigor and reproducibility echoes industry standards that are widely discussed in our training, e.g., GDPR and PCI DSS compliance.

I was impressed with the WSTG's emphasis on proactive security integration. I've noticed that fully automated approaches occasionally overlook context-dependent vulnerabilities like business logic problems, so its suggestion to combine automated tools (like SAST/DAST) with manual penetration testing closes that gap. The manner in which the tests are categorized in the guide, e.g., input validation testing to avert SQL injection, offers a clear path for risk prioritization, which I now see is a skill I must acquire for effective resource allocation in real-world projects. An extremely useful lesson was the importance of ongoing testing along the development trajectory. Our study of DevOps practices is complemented by the WSTG's "shift-left" model, adding security at the beginning of the SDLC and minimizing risk post-deployment. One way of finding misconfigurations before deployment is using tools like OWASP ZAP, which is explained in the guide, during code reviews. However, novices may be overwhelmed by the scope of the guide. I will address this by starting with its risk-based testing methodology, with particular emphasis on high-risk areas such as session management and authentication. This is in line with HackerOne's best practices in adversarial testing, where vulnerabilities are ordered by their exploitability potential.

Going forward, I would like to adopt the WSTG's approach, taking advantage of the guide's open-source status to support collaboration, for example by holding seminars for developers on threat modeling, which is emphasized as an important step in NordPass security best practices. I would like to improve application security and support a proactive risk-management culture through the adoption of the WSTG's formalized approach. This is important in the current threat landscape, where web application vulnerabilities represent 39% of breaches.

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Software Technical Review

Week 14 – 5/2/2025

This is my last week of class, and this is kind of bittersweet. The topic for this week was software technical review. While I was working on my last project for the class, I went ahead and read a blog post called “What is Technical Review in Software Testing?” by Ritika Kumari. I did not read this article to find out what a technical review is but to learn more about the process of it.

The article gives a suitable introduction to technical reviews in software testing, stating that technical reviews are formal assessments conducted by technical reviewers to examine software products like documentation, code, and design. Technical reviews are designed to check compliance with standards, enhance the quality of the code, and identify defects at the initial phase of the Software Development Life Cycle (SDLC). The blog discusses how technical reviews reduce the cost of rework, enhance the level of expertise of the team, and get software outcomes in line with business goals.

I picked this article because it is very much in line with the topic we had for this week’s class. The article mixes practical applications, such as Testsigma’s integration for test case management, with abstract concepts, like static testing and peer reviews. Its emphasis on collaborative procedures also aligns with our class’s ideas about agile teamwork.

The blog highlighted the importance of spotting design or code bugs early in development; doing so can save up to 70% of post-release costs, as illustrated through the example of re-engineering faulty software. This aligns with the "shift-left" testing philosophy that we examined. Technical reviews are as much about information sharing as they are about error detection. For example, I had not realized how much cross-functional knowledge is built up through walkthroughs and peer reviews. I will look to apply this idea further in automation efforts. Testsigma's review capabilities, such as automated test case submission and element management, demonstrated how tools can speed up reviews. The blog made me rethink my understanding that reviews are only a "checklist activity." Rather, they are interactive processes that achieve harmony between teamwork and technical correctness. For instance, the difference between formal defect-oriented inspections and informal knowledge-swap peer reviews led to a better understanding of how to customize reviews according to project requirements. I will promote systematic technical assessments in my future work environments. Overall, this was an interesting class, and I hope to use the lessons I have learned throughout my professional career.

https://testsigma.com/blog/technical-review-in-software-testing/

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective Blog

Sprint 2 really changed the way I see collaboration. As an introvert, I used to be scared of team projects and working with others. But during this sprint, that mindset shifted completely. It turned out to be fun and rewarding to collaborate with a team where everyone was understanding, supportive, and eager to learn together. After receiving helpful feedback at the end of Sprint 1, we had a much clearer understanding of what needed to be done. In Sprint 1, we struggled a lot with confusion and having different views, but this time around, we had a shared direction that made a big difference.

Throughout Sprint 2, we faced more complex problems than before. My partner and I worked on several issues based on the feedback we received, as well as on things we identified as a team. Our main priorities were fixing the camera mirroring issue on the UPC scanner, updating the frontend’s visual identity, and adjusting the layout for mobile and tablet devices.

We successfully fixed the mirroring issue, which felt like a big win. However, aligning our frontend designs with the other teams was much more difficult than we anticipated. Initially, we had decided on a color palette, fonts, logos, and design styles that we thought represented the project well. Midway through the sprint, though, we were given Worcester State University’s Visual Identity Guidelines, and suddenly, everything we had designed had to be reworked. It was frustrating to undo what we had already built, but it taught us the importance of flexibility and communication.

Another major challenge we encountered, and one we had to push into Sprint 3, was getting our web app running on a server rather than just locally. We started working on it, but deployment turned out to be trickier than we expected.

Here's a look at some of the issues we were working on:

Overall, I think our team worked really well together this sprint. Despite the technical obstacles, especially around connecting the frontend and backend, we stayed positive and pushed through. We were eventually able to connect both parts and present a working version of the app to the customer, which was a great feeling. As a team, one area we could improve on is making sure everyone is on the same page and updated about what the different sub-teams are working on. Sometimes there were minor moments of confusion because different people had slightly different ideas of where each group was at. It wasn’t a major issue this sprint, but better communication would definitely help prevent misunderstandings and keep us even more organized. On a personal level, I realized that learning never really stops when you’re working in the tech industry or when you’re a developer. If there’s something I want to improve on, it’s learning new concepts more quickly. Being able to pick up new ideas faster would help me feel more confident during team discussions and allow me to contribute more effectively.

In the Apprenticeship Patterns book by Dave Hoover and Adewale Oshineye, one pattern really stood out to me this sprint: "Rubbing Elbows." It talks about the importance of working closely with more experienced peers and teammates. Instead of trying to learn everything on your own, this pattern encourages you to learn through collaboration, by watching how others work and asking questions. I chose this pattern because it perfectly describes what changed for me: working side-by-side with my teammate this sprint helped me learn so much faster than I could have on my own. If I had read about "Rubbing Elbows" earlier, I think I would have embraced collaboration a lot sooner instead of seeing it as something intimidating. It would have reminded me that it's okay, and even expected, to learn through other people, not just through personal effort.

From the blog CS@Worcester – CodedBear by donna abayon and used with permission of the author. All other rights reserved by the author.

Static Testing vs. Dynamic Testing

Hello everyone,

This week's blog topic is Static Testing vs. Dynamic Testing. This is something we discussed at the beginning of the semester, but it is nonetheless still very important to know. It took me a bit of time to understand the differences between the two and when each one is used, but reading this blog made it a lot clearer. Let's start with a simple definition to separate the two terms: static testing is done manually, without executing the application, while dynamic testing is an automated approach that involves executing the code and testing it in various ways within a closed run-time environment. Reading further into the blog, we learn that static testing is the process of checking an application or website without executing the code. It is a manual process, usually done in the early stages of the development life cycle. A person compares the code against the requirements and specifications it needs to meet, and this review allows them to identify any flaws, defects, or possible changes. Dynamic testing, by contrast, is more focused on the customer and user experience: it is the process of executing controlled tests and experiments on live digital platforms with real user traffic. Unlike static testing, where you manually review the design and the actual code, dynamic testing deploys different variations in order to understand how users behave, so you can analyze user engagement and other performance data.
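To make the distinction concrete, here is a small hypothetical sketch (not from the blog): a defect that a static review could flag just by reading the code against its spec, and that dynamic testing would expose by actually executing it:

```java
// Hypothetical example: the same defect seen by static review vs. a dynamic test.
public class DiscountCalculator {

    // Static review: a reader comparing this code against the spec
    // ("discount must never exceed 100%") can spot the missing upper
    // bound without ever running the program.
    public static double applyDiscount(double price, double percent) {
        if (percent < 0) {
            throw new IllegalArgumentException("negative discount");
        }
        // Defect a reviewer would flag: percent > 100 is not rejected,
        // so the result can go negative.
        return price - (price * percent / 100.0);
    }

    // Dynamic testing: executing the code exposes the same defect as a
    // concrete failing case at run time.
    public static void main(String[] args) {
        double result = applyDiscount(50.0, 150.0); // a 150% discount
        System.out.println(result); // prints -25.0 -- a negative price
    }
}
```

The review catches the flaw by reasoning about the code; the executed test catches it by observing actual behavior. Both find the same bug through different means.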

The blog does a great job of not only explaining the definitions of the two types of testing but also giving the advantages and disadvantages of both. For example, per the author's notes, static testing shines at identifying potential experience issues early in the development process. This helps to positively impact performance metrics before launching to customers and also prevents a poor experience for the developers. But it has some disadvantages: it relies heavily on the expertise of the reviewers evaluating the designs (the better the reviewer, the better the static testing), it is very time-consuming, and it may not catch flaws that only appear in real user interaction. Among the advantages of dynamic testing, it is amazing at allowing continuous optimization of multiple experiences, its controlled experiments let you roll out new features with minimal risk, and its automated nature means you can quickly scale testing. While that all sounds great, it has some disadvantages too: it can be very time-consuming for complex experiments with many versions, in some cases it won't cover every potential user scenario and edge case, and it requires a lot of upfront investment in testing tools.

In conclusion, it is very important to know and use both testing types so you can get the best of both worlds!

Source:

https://monetate.com/resources/blog/static-testing-vs-dynamic-testing/

From the blog Elio's Blog by Elio Ngjelo and used with permission of the author. All other rights reserved by the author.

Code Reviews and Their Importance in Keeping Maintainable Code

Summary of the Blog

The article emphasizes that code reviews are not just about finding mistakes but are primarily about improving code quality, spreading knowledge across the team, and building better, more maintainable software.

  • Code reviews help maintain consistent coding standards across a project.
  • They foster team learning, as developers can see different approaches to solving problems.
  • Good code reviews catch potential bugs and architectural flaws early, preventing costly fixes later.
  • Reviews create a sense of shared ownership over the codebase, leading to more sustainable, long-term development.

He also stresses that code reviews should be approached positively, focusing on collaboration rather than criticism. The goal is to help, not to criticize harshly, and reviewers should offer suggestions rather than simply pointing out what’s wrong.

Why I Chose This Resource

I chose this blog because it reads as if written by someone who has actually been through the experience of not doing code reviews and learned the hard way why they exist and are used regularly. It is a resource that goes through the whole process of code reviews, but the added sense that the writer truly understands why the practice is useful makes it feel much more credible. It also makes it easier and more interesting to read, in my opinion.

Personal Reflection

Messy codebases can lead to immense technical debt over time, and code reviews are the solution. Of course it would be great to simply adhere to the rules and standards set by the group and avoid the sloppiness in the first place, but code reviews are necessary to ensure that any messy code doesn't make it into the production branch; they are a last line of defense. The steps laid out in the blog create a healthy environment for improving the code being reviewed while remaining respectful in how it is done. The checklist of standards to go over during the review makes sense and is the basis of keeping everything "correct," but I was pleasantly surprised when they mentioned the correct way to communicate the changes. Most people wouldn't think about how they phrase the changes they find, but this blog shows the right way, even with examples, which takes into account respect for the developer as well as being descriptive and informative with the comment.

Conclusion

This blog made me understand why code reviews are important. Before reading, I thought, similarly to the author, that code reviews were a waste of time and just some bureaucratic process, but now I see that the time spent making all of the code cohesive and adherent to coding standards actually saves a lot of time in the long run on fixing bugs and reading sloppy code.

Citation

Kravcenko, V. (n.d.). The Importance of Code Reviews. Retrieved from https://vadimkravcenko.com/shorts/code-reviews/

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

Why Test-Driven Development Boosts Code Quality

In the fast-moving world of software development, one approach stands out for promoting cleaner code, fewer bugs, and better design: Test-Driven Development (TDD). Instead of writing code first and testing later, TDD flips the script — you write the test first, watch it fail, then build just enough code to pass that test.

🚀 What is TDD?

Test-Driven Development is a simple but powerful cycle:

  1. Write a failing test (Red)
  2. Write minimal code to pass the test (Green)
  3. Refactor the code without changing its behavior (Refactor)

This process repeats for each small piece of functionality. Over time, it builds up a fully tested, reliable system.

🧩 A Real Example: Word Count Kata

Recently, I practiced TDD while solving a classic coding exercise called the Word Count Kata. The goal was to analyze a piece of text, count the words, ignore case and punctuation, and even filter out unwanted “stop words.”

Here’s how TDD helped guide the process:

  • First, I wrote a test expecting the word "hello" to be counted twice in "Hello hello".
    ➡ The test failed (as expected).
    ➡ I then implemented the countWords method to pass it by converting the text to lowercase and splitting words properly.
  • Next, I tested a sentence with multiple words: "Hello world hello again".
    ➡ I wanted to make sure the system counted "hello" twice, and "world" and "again" once each.
  • Then, I challenged the code to ignore punctuation by testing a sentence like "This, is a test!".
    ➡ The code had to split words correctly, even when commas and exclamation marks appeared.
  • Moving to an intermediate stage, I added “stop words” like "the" and "and", and made sure they were excluded from the count.
  • Finally, for the advanced part, I created a sorted list showing the most frequent words first, such as "again 3" appearing before "test 2".

By adding each test one by one, my code grew naturally and remained stable.
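The kata steps above can be sketched roughly as follows; the class name, helper method, and stop-word set are my own illustration, not the actual assignment code:

```java
import java.util.*;

// A minimal sketch of the Word Count Kata described above.
public class WordCount {

    // Counts words case-insensitively, stripping punctuation and
    // skipping any supplied stop words.
    public static Map<String, Integer> countWords(String text, Set<String> stopWords) {
        Map<String, Integer> counts = new HashMap<>();
        // Lowercase, then split on anything that is not a letter or digit.
        for (String word : text.toLowerCase().split("[^a-z0-9]+")) {
            if (word.isEmpty() || stopWords.contains(word)) continue;
            counts.merge(word, 1, Integer::sum);
        }
        return counts;
    }

    // Formats the counts most-frequent-first, e.g. "hello 2".
    public static List<String> sortedReport(Map<String, Integer> counts) {
        List<String> lines = new ArrayList<>();
        counts.entrySet().stream()
              .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
              .forEach(e -> lines.add(e.getKey() + " " + e.getValue()));
        return lines;
    }

    public static void main(String[] args) {
        Set<String> stop = Set.of("the", "and", "a", "is");
        Map<String, Integer> counts = countWords("This, is a test! Hello hello.", stop);
        System.out.println(counts.get("hello"));      // 2
        System.out.println(counts.get("test"));       // 1
        System.out.println(counts.containsKey("is")); // false (stop word)
    }
}
```

In a TDD workflow, each of these behaviors (lowercasing, punctuation, stop words, sorting) would arrive one failing test at a time, with only enough code added to turn each test green.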

🔥 Why TDD Matters

Through this exercise, I experienced firsthand why TDD is powerful:

  • Confidence: Every time I changed the code, I knew instantly whether I broke something because all tests ran automatically.
  • Clarity: Writing tests forced me to think about the expected behavior before diving into coding.
  • Design: Since I only built what was needed to pass the next test, the code stayed simple and focused.

Rather than rushing ahead and debugging messy errors later, TDD helped me build my project brick by brick, with each piece carefully tested.

🎯 Final Thoughts

Test-Driven Development isn’t just for “perfect coders.” It’s a learning tool, a design assistant, and a safety net. Even on a small assignment like the Word Count Kata, using TDD made my work cleaner, more organized, and far less stressful.

If you want to level up your coding habits, I highly recommend giving TDD a real try — one failing test at a time.

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.

Test-Driven Development vs Behavior-Driven Development

For this week's log entry, I wanted to cover a topic a bit different from the post I made last week. One topic that caught my eye as a perfect option to learn more about was Behavior-Driven, and potentially also Test-Driven, development of code. When researching this topic, I came across a podcast titled "Behavior-Driven vs Test-Driven Development & Using Regex in Python" by The Real Python Podcast on Spotify. One of the largest factors that drew me to this podcast was that it was made incredibly recently (only a few months ago).

TEST-DRIVEN DEVELOPMENT:

A process of developing code that revolves around writing automated tests for code before ever actually writing the code itself. The process starts with a programmer writing a test for some new feature they want added to their code or some problem they want their code to solve. After this, they write the smallest amount of code they possibly can in order for that test to pass, refactoring after the test passes if they feel a need to do so. TDD predates and heavily inspired the agile process and scrum methodology that we see used today. TDD can be a helpful process to follow for several reasons. First, it ensures that you have actually written good tests and helps to mitigate any temptation to cut corners when working on a project. TDD also helps teams work specifically within the realm of what is being asked of them, rather than "gold-plating" their work by adding features that may not have been asked for. Overall, TDD is very structured and provides a clear path for workflow to follow without confusion.

BEHAVIOR-DRIVEN DEVELOPMENT:

BDD, for short, is in a way an extension of TDD, focusing only on the highest levels of the testing pyramid and involving Acceptance Test-Driven Development. Essentially, before you ever write the function you would test in TDD, you first write tests describing how you think the application is actually supposed to behave. It focuses on a feature behaving in a particular way, as opposed to that same feature returning a particular response. First, you identify the things your code needs and create a list that defines what needs to be done and whether a feature is working properly. After identifying the needs, you write high-level tests to check them. As in TDD, you essentially write the code in parts, running tests in between to ensure that the code being written is done with an immediate purpose and a goal to achieve within the project.
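To make the behavior-first idea concrete, here is a minimal sketch in plain Java of the Given/When/Then shape that BDD tests follow; real BDD projects typically use tooling such as Cucumber or JBehave, and the ShoppingCart example here is entirely hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// A rough sketch of a BDD-style acceptance check in plain Java.
public class CartBehaviorSketch {

    // Hypothetical system under test.
    static class ShoppingCart {
        private final List<String> items = new ArrayList<>();
        void add(String item) { items.add(item); }
        int itemCount() { return items.size(); }
    }

    public static void main(String[] args) {
        // Given: an empty cart
        ShoppingCart cart = new ShoppingCart();

        // When: the user adds two items
        cart.add("apple");
        cart.add("bread");

        // Then: the cart should report two items
        if (cart.itemCount() != 2) {
            throw new AssertionError("expected 2 items, got " + cart.itemCount());
        }
        System.out.println("behavior check passed: cart holds 2 items");
    }
}
```

The test describes observable behavior ("the cart should report two items") rather than asserting on any particular internal return value, which is the distinction between BDD and plain TDD that the podcast draws.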

From the blog CS Blogs with Aidan by anoone234 and used with permission of the author. All other rights reserved by the author.