Path Testing

This week in class we learned about path testing, a white box method that examines code to find all possible execution paths. Path testing uses control flow graphs to illustrate the different paths that could be executed in a program. In the graph, the nodes represent the lines of code, and the edges represent the order in which the code is executed. Path testing appealed to me as a testing method because it gives a visual representation of how the source code should execute given different inputs. I took a deeper dive into path testing after this week’s classes and found a blog that gave me a deeper understanding of the technique.

Steps

When you have decided that you want to perform path testing, you must create a control flow graph that matches up with the source code. For example, a node that splits into two directions should represent an if-else statement, and for while loops, a node near the end of the loop body should have an edge pointing back to an earlier node.

Next, pick out a baseline path for the program. This is the path you define to be the original path of your program. After the baseline is created, continue generating paths until they represent all possible outcomes of the execution.
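Once the graph exists, generating the candidate paths can be automated. Here is a minimal sketch (the graph below is a hypothetical if-else program, not one from the blog):

```python
# Hypothetical control flow graph: nodes are line numbers,
# edges are the possible successors of each line.
cfg = {
    1: [2],       # entry
    2: [3, 4],    # if-else decision
    3: [5],       # then-branch
    4: [5],       # else-branch
    5: [],        # exit
}

def all_paths(graph, node, exit_node, path=None):
    """Enumerate every acyclic path from node to exit_node."""
    path = (path or []) + [node]
    if node == exit_node:
        return [path]
    found = []
    for succ in graph[node]:
        if succ not in path:  # skip back-edges so loops terminate
            found.extend(all_paths(graph, succ, exit_node, path))
    return found

print(all_paths(cfg, 1, 5))  # [[1, 2, 3, 5], [1, 2, 4, 5]]
```

Each list returned is one path through the program, and each path becomes one test case.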

How many Test Cases?

For lengthy source code, the possible outcomes can seem endless, and enumerating them manually can be a difficult, time-consuming task. Luckily, there is an equation that determines how many test cases a program will need with path testing.

C = E – N + 2P

Where C stands for cyclomatic complexity. The cyclomatic complexity is equivalent to the number of linearly independent paths, which in turn equals the number of required test cases. E represents the number of edges, N is the number of nodes, and P is the number of connected components. Note that for a single program or source of code, P = 1 always.
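The counting is easy to automate once the control flow graph is written down. A small sketch, assuming the graph is stored as an adjacency list (the example graph is hypothetical):

```python
def cyclomatic_complexity(graph, connected_components=1):
    """C = E - N + 2P for a control flow graph given as node -> successors."""
    nodes = len(graph)
    edges = sum(len(succs) for succs in graph.values())
    return edges - nodes + 2 * connected_components

# Hypothetical graph for a single if-else program (P = 1):
cfg = {1: [2], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(cyclomatic_complexity(cfg))  # 5 - 5 + 2 = 2 independent paths
```

The result, 2, is the number of linearly independent paths and therefore the number of test cases to write.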

Benefits

Path testing reveals outcomes that otherwise may not have been known without examining the code. As stated before, it can be difficult for a tester to know all the possible outcomes in a class. Path testing provides a solution to that by using control flow graphs, where the tester can examine the different paths. Path testing also ensures branch coverage: because the test cases are derived from the control flow graph, every branch point in the code is exercised by at least one path. Unnecessary and overlapping tests are another thing developers don’t have to worry about.

Drawbacks

Path testing can also be time consuming. Quicker testing methods exist and leave more time for further developing projects. In many cases, path testing may also be unnecessary. Path testing is often used in DevOps setups that require a certain amount of unit coverage before deploying to the next environment. Outside of that, it may be inefficient compared to other testing methods.

Blog: https://blog.testlodge.com/basis-path-testing/

From the blog Blog del William by William Cordor and used with permission of the author. All other rights reserved by the author.

Enhancing Software Testing Efficiency with Equivalence Partitioning

I chose this blog because I was interested in learning how to optimize the selection of test cases without sacrificing thorough coverage. During my search, I came across an article on TestGrid.io on equivalence partitioning testing, which was fantastic in removing redundancy from testing. As I develop my programming and software testing skills, I have found this method especially useful in making the testing process simpler.

Equivalence partitioning is a testing technique that divides input data into partitions, or groups, based on the assumption that values in each partition will behave similarly. Instead of testing each possible input, testers choose a sample value from each partition and assume the entire group will produce the same behavior. This reduces the number of test cases while still providing sufficient coverage.

For example, if a program accepts input values ranging from 1 to 100, equivalence partitioning allows the testers to categorize inputs into a valid partition (1-100) and two invalid partitions (less than 1, and more than 100). Rather than testing every number in the valid range, a tester would choose representative and boundary values like 1, 50, and 100. Similarly, they would test the invalid partitions with 0 and 101. This is time-efficient while still catching errors.
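As a sketch of that 1-to-100 example (the function name and acceptance rule are my own assumptions for illustration):

```python
def accepts(value):
    """Hypothetical system under test: accepts integers from 1 to 100."""
    return 1 <= value <= 100

# One or two representatives per partition instead of every possible input:
for v in (1, 50, 100):        # valid partition, including its boundaries
    assert accepts(v), f"{v} should be accepted"
for v in (0, 101):            # the two invalid partitions
    assert not accepts(v), f"{v} should be rejected"
print("all partition representatives behaved as expected")
```

Five test values stand in for more than a hundred possible inputs, which is the whole point of the technique.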

I chose the TestGrid.io article because it explains equivalence partitioning in an understandable and systematic manner. Much other testing material is too complex or ambiguous for newcomers, but this article simplifies the topic and incorporates real-world examples. That made it easy not only to understand the theory, but also to apply the method to real-life situations. The article also discusses the advantages of equivalence partitioning, including reducing redundant test cases, improving efficiency, and offering complete coverage. As someone interested in improving my testing methods, I found this useful because it corresponds with my goal of producing better, more efficient test cases without redundant repetition.

Equivalence partitioning testing is a sound approach to maximizing test case selection. It enables the tester to focus on representative cases rather than testing all possible inputs, which is time- and effort-efficient. The TestGrid.io article provided a clear understanding of how to implement this method and why it is significant. For me, learning effective test methods like equivalence partitioning will make me more efficient in my coding, debugging, and software developing abilities, prepared for internships, projects, and software engineering positions.

Blog: https://testgrid.io/blog/equivalence-partitioning-testing/

From the blog CS@Worcester – Matchaman10 by tam nguyen and used with permission of the author. All other rights reserved by the author.

Sprint 1 Retrospective

Gitlab Descriptions

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/addbarcodefrontend : created addBarcodeFrontend under our team’s project to work on the scanner page.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/addbarcodefrontend/-/commit/64bc3d4f720e6e8a69bafa709276beff820f719b : I added the scanner onto the frontend page, using a vue library to make it look good.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/addbarcodefrontend/-/commit/200cac8ad4de849c3647b1223d9bc6f7075b8da2 : I fixed the layout to how it ended up looking by the end of the sprint, worked on some other issues like fixing camera mirroring, and added logos to the page.

Team Reflection

As this project is the first time I have worked in a team adopting the Scrum methodology, I was a little hesitant at first about how well we would work together; I worried that conflicts would only lead to decreased productivity, but it hasn’t turned out like that at all. An important part of the lack of disagreements comes from our collective trust in each other to produce the outcome we envisioned for each assigned issue. Our project allowed us to split our team into smaller teams, each focusing on separate issues which would eventually connect to fulfill our team’s original goal. This delegation of tasks to each subteam has allowed us to focus on our own tasks while also checking in with each other to see how the rest are doing. It frees us of the burden of working over each other to accomplish one thing, which I believe would lead to inefficiencies and conflicting ideas. This subteam workflow allows us to dedicate our time and effort to one topic while checking in on the other topics to stay updated and give feedback.

Although I just mentioned that the subteam system allows us to check in on each other and give feedback, this is somewhat untrue, or at the very least an idealized version of what we are practicing. I do feel that I am working well in my subteam, and with my partner I am efficiently approaching our goal; however, when it comes time to check in on the other teams, I feel as though I don’t get enough of an idea of the inner workings of what they’re doing to give good, constructive feedback. The same goes for my teammates giving me feedback. I think that is the tradeoff with this system, though, because it entrusts you with knowing what you’re doing, and I don’t think it’s as detrimental as it could be because each subteam at least has a partner that does give detailed feedback, sacrificing quantity for quality.

Apprenticeship Pattern

The perpetual learning pattern assumes there is always more knowledge to be known and your journey as a learner is never truly over. I selected this pattern because I think the pursuit of knowledge is important to sustain a continued expansion of knowledge and innovation, as well as satisfying a person’s internal sense of accomplishment and fulfillment. The point that this book tries to make about being a software apprentice is that you can become a master within your lifetime, but that doesn’t mean you stop learning. This pattern fits into the sprint because there was never a time I really felt like I fully understood a concept to a point where there wasn’t anything else I could learn by further investigating it or digging deeper. Even in established technologies or languages like HTML or CSS, there are a lot of things that many people may not know and working with it so much means I am more comfortable with it, but it doesn’t mean I am anywhere near mastering it, following this pattern of perpetual learning.

If I had thought of this pattern before the sprint started I would have gone into it more open-minded, knowing that anything I encountered would be surface level and that if I really wanted to I could go down a rabbit hole for each new thing I learned. There’s so much information out there that is almost impossible to get to, especially for me in the time of one sprint. I think to improve as an individual in future sprints, I could go out of my way to learn as much as I can so I can understand what I’m working on as best I can.

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

Week 8: Path Testing

This week, our class learned about path testing. It is a white box testing method (meaning we examine the internal structure of the source code rather than treating the program as a black box) that tests to make sure every specification is met and helps to create an outline for writing test cases. Each line of code is represented by a node and each point execution can move to next is represented by a directed edge. Test cases are written for each possible path execution can take.

For example, if we are testing a short loop, a path testing diagram can be created. Each node represents a line of code and each directed edge represents where execution goes next, depending on the conditions. Test cases are written for each condition: running through the while loop and leaving once value is more than 5, or bypassing the loop entirely if value starts as more than 5.
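The original post’s snippet did not survive here, but a minimal loop consistent with that description might look like this (the function and node numbering are illustrative assumptions):

```python
def run(value):
    # node 1: function entry
    while value <= 5:      # node 2: loop condition, two outgoing edges
        value += 1         # node 3: loop body, edge back to node 2
    result = value         # node 4: first line after the loop
    return result          # node 5: exit

# Path 1: enter the loop and leave once value exceeds 5
print(run(3))    # 6
# Path 2: bypass the loop entirely because value starts above 5
print(run(10))   # 10
```

With 5 nodes and 5 edges (1→2, 2→3, 3→2, 2→4, 4→5), this sketch matches a cyclomatic complexity of (5 − 5) + 2(1) = 2, which is why two test paths suffice.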

I wanted to learn more about path testing, so I did some research and found a blog that mentioned cyclomatic complexity. Cyclomatic complexity is a number that classifies how complex the code is based on how many nodes, edges, and conditionals you have. This number will relate to how many tests you need to run, but is not always the same number. The cyclomatic complexity of the example above would be (5-5)+2(1) = 2.

Cyclomatic Complexity = Edges – Nodes + 2(Number of Connected Components)

The blog also explores the advantages and disadvantages of path based testing. Some advantages are performing thorough testing of all paths, testing for errors in loops and branching points, and ensuring any potential bugs in pathing are found. Some disadvantages are failing to test input conditions or runtime compilations, a lot of tests need to be written to test every edge and path, and exponential growth in the number of test cases when code is more complex.

Another exercise we did in class was condensing nodes that do not branch. In the example above, node 2 and 3 can be condensed into one node. This is because there is no alternative path that can be taken between the nodes. If line 2 is run, line 3 is always run right after, no matter what number value is. Condensing nodes would be helpful in slightly more complex programs to make the diagram more readable. Though if you are working with a program with a couple hundred lines, this seems negligible.

When I am writing tests for a program in the future, I would probably choose a more time-conscious method. Cyclomatic complexity is a good and useful metric to have, but basing test cases on the path testing diagram does not seem practical for complex code under tight time constraints.

Blog post referenced: https://www.testbytes.net/blog/path-coverage-testing/

From the blog CS@Worcester – ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Mastering Software Testing: The Magic of Equivalence Class Testing.

If you’re like me, getting into software testing might feel overwhelming at first. There are countless methods and techniques, each with its own purpose, and it’s easy to feel lost. But when I first learned about Equivalence Class Testing, something clicked for me. It’s a simple and efficient way to group similar test cases, and once you get the hang of it, you’ll see how much time and effort it can save.

So, what exactly is Equivalence Class Testing? Essentially, it’s a method that helps us divide the input data for a program into different categories, or equivalence classes, that are expected to behave the same way when tested. Instead of testing every single possible input value, you select one or two values from each class that represent the rest. It’s like saying, “if this value works, the others in this group will probably work too.” This approach helps you avoid redundancy and keeps your testing efficient and focused.

Now, why should you care about Equivalence Class Testing? Well, let me give you an example. Imagine you’re writing a program that processes numbers between 1 and 1000. It would be impossible (and impractical!) to test all 1000 values, right? With Equivalence Class Testing, you can group the numbers into a few categories, like numbers in the lower range (1-200), the middle range (201-800), and the upper range (801-1000). You then pick one or two values from each group to test, confident that those values will tell you how the whole range behaves. It’s a major time-saver.

When I started using this method, I realized that testing every possible input isn’t just unnecessary—it’s counterproductive. Instead, I learned to focus on representative values, which allowed me to be much more efficient. For instance, let’s say you’re testing whether a student is eligible to graduate based on their GPA and number of credits. You could create equivalence classes for the GPA values: below 2.0, which would likely indicate the student isn’t ready to graduate; between 2.0 and 4.0, which might be acceptable; and anything outside the 0.0 to 4.0 range, which is invalid. Testing just one GPA value from each class will give you a pretty good sense of whether your function is working properly without overloading you with unnecessary cases.
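A sketch of that GPA check (the function name and graduation rule are my own illustration of the classes described):

```python
def can_graduate(gpa):
    """Hypothetical eligibility rule used to illustrate the classes."""
    if not 0.0 <= gpa <= 4.0:
        raise ValueError("GPA outside the valid 0.0-4.0 range")
    return gpa >= 2.0

# One representative per equivalence class:
assert can_graduate(3.0) is True    # class: 2.0 <= gpa <= 4.0
assert can_graduate(1.5) is False   # class: 0.0 <= gpa < 2.0
for bad in (-1.0, 5.0):             # invalid classes outside 0.0-4.0
    try:
        can_graduate(bad)
    except ValueError:
        pass                        # expected: invalid input is rejected
    else:
        raise AssertionError(f"{bad} should have been rejected")
```

Four values exercise every class without enumerating every possible GPA.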

Another thing I love about Equivalence Class Testing is that it naturally leads into both Normal and Robust Testing. Normal testing focuses on valid inputs—values that your program should accept and process correctly. Robust testing, on the other hand, checks how your program handles invalid inputs. For example, in our GPA scenario, testing GPAs like 2.5 or 3.8 would be normal testing, but testing values like -1 or 5 would fall under robust testing. Both are essential for making sure your program is strong and can handle anything users throw at it.

Lastly, when I first heard about Weak and Strong Equivalence Class Testing, I was a bit confused. But the difference is straightforward. Weak testing means you’re testing just one value from each equivalence class at a time for a single variable. On the other hand, Strong testing means you’re testing combinations of values from multiple equivalence classes for different variables. The more variables you have, the more comprehensive your tests will be, but it can also get more complex. I usually start with weak testing and move into strong testing when I need to be more thorough.
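The weak-versus-strong difference is easy to see in code. With two variables and two valid classes each (the representative values below are hypothetical), weak testing pairs the representatives up while strong testing takes their cartesian product:

```python
from itertools import product

gpa_reps = [1.5, 3.0]      # one representative per GPA class
credit_reps = [60, 120]    # one representative per (assumed) credit class

# Weak: one value per class, varying one variable at a time
weak_cases = list(zip(gpa_reps, credit_reps))

# Strong: every combination of class representatives
strong_cases = list(product(gpa_reps, credit_reps))

print(len(weak_cases), len(strong_cases))  # 2 4
```

With more variables the strong set grows multiplicatively, which is exactly why starting weak and escalating when needed is a reasonable default.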

Overall, learning Equivalence Class Testing has made my approach to software testing more strategic and manageable. It’s a method that makes sense of the chaos and helps me feel more in control of my testing process. If you’re new to testing, or just looking for ways to make your tests more efficient, I highly recommend giving this method a try. You’ll save time, energy, and still get great results.

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.

Understanding Pairwise and Combinatorial Testing

Hello everyone, and welcome back to my weekly blog post! This week, we’re diving into an essential software testing technique: Pairwise and Combinatorial Testing. These methods help testers create effective test cases without needing to check every single possible combination of inputs, similar to most of the test case selection methods we’ve learned.

To make things more relatable, let’s start with a real-life example from the insurance industry.

A Real-Life Problem: Insurance Policy Testing

Imagine you are working for an insurance company that sells car insurance. Customers can choose different policy options based on:

  1. Car Type: Sedan, SUV, Truck
  2. Driver’s Age: Under 25, 25–50, Over 50
  3. Coverage Type: Basic, Standard, Premium

If we tested every possible combination, we would have:

3 × 3 × 3 = 27 test cases!

This is just for three factors. If we add more, such as driving history, location, or accident records, the number of test cases grows exponentially, making full testing impossible.

So how can we test efficiently while ensuring that all critical scenarios are covered?
That’s where Pairwise and Combinatorial Testing come in!

What is Combinatorial Testing?

Combinatorial Testing is a technique that selects test cases based on different input combinations. Instead of testing all possible inputs, it chooses a smaller set that still covers key interactions between variables.

Example: Combinatorial Testing for Insurance Policies

Instead of testing all 27 cases, we can use combinatorial testing to reduce the number of test cases while still covering important interactions.

A possible set of test cases could be:

Test Case | Car Type | Driver’s Age | Coverage Type
1 | Sedan | Under 25 | Basic
2 | SUV | 25–50 | Standard
3 | Truck | Over 50 | Premium
4 | Sedan | 25–50 | Premium
5 | SUV | Over 50 | Basic
6 | Truck | Under 25 | Standard

This method reduces the number of test cases while ensuring that each factor appears in multiple meaningful combinations.

What is Pairwise Testing?

Pairwise Testing is a type of combinatorial testing where all possible pairs of input values are tested at least once. Research has shown that most defects in software are caused by the interaction of just two variables, so testing all pairs ensures good coverage with fewer test cases.

Example: Pairwise Testing for Insurance Policies

Instead of testing all combinations, we can create a smaller set where every pair of values appears at least once:

Test Case | Car Type | Driver’s Age | Coverage Type
1 | Sedan | Under 25 | Basic
2 | Sedan | 25–50 | Standard
3 | Sedan | Over 50 | Premium
4 | SUV | Under 25 | Standard
5 | SUV | 25–50 | Premium
6 | SUV | Over 50 | Basic
7 | Truck | Under 25 | Premium
8 | Truck | 25–50 | Basic
9 | Truck | Over 50 | Standard

Here, every pair across (Car Type, Driver’s Age), (Car Type, Coverage Type), and (Driver’s Age, Coverage Type) appears at least once. With three 3-value factors there are 27 distinct value pairs to cover, and each test case covers exactly 3 of them, so 9 test cases is the minimum possible. This means we cover all important two-way interactions with just 9 test cases instead of 27!
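A short script can verify that a candidate suite really covers every pair. Note that with three 3-value factors there are 27 pairs and each case covers only 3 of them, so at least 9 cases are needed; the 9-case suite below is one orthogonal-array arrangement that passes the check:

```python
from itertools import combinations, product

cars = ["Sedan", "SUV", "Truck"]
ages = ["Under 25", "25-50", "Over 50"]
covs = ["Basic", "Standard", "Premium"]

def covers_all_pairs(suite, factors):
    """True if every value pair across every two factors appears in suite."""
    for i, j in combinations(range(len(factors)), 2):
        needed = set(product(factors[i], factors[j]))
        seen = {(case[i], case[j]) for case in suite}
        if needed - seen:
            return False
    return True

suite = [
    ("Sedan", "Under 25", "Basic"),   ("Sedan", "25-50", "Standard"),
    ("Sedan", "Over 50", "Premium"),  ("SUV", "Under 25", "Standard"),
    ("SUV", "25-50", "Premium"),      ("SUV", "Over 50", "Basic"),
    ("Truck", "Under 25", "Premium"), ("Truck", "25-50", "Basic"),
    ("Truck", "Over 50", "Standard"),
]
print(covers_all_pairs(suite, [cars, ages, covs]))  # True
```

Dropping any rows from this suite makes the check fail, which is a quick way to see that pairwise suites cannot be shrunk arbitrarily.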

Permutations and Combinations in Testing

To understand combinatorial testing better, we need to understand permutations and combinations. These are ways of arranging or selecting elements from a set.

What is a Combination?

A combination is a selection of elements where order does not matter. The formula for combinations is: C(n, r) = n! / [r! * (n – r)!]

where:

  • n is the total number of items
  • r is the number of selected items
  • ! (factorial) means multiplying all numbers down to 1

Example of Combination in Insurance

If an insurance company wants to offer 3 different discounts from a list of 5 available discounts, the number of ways to choose these discounts is: C(5, 3) = 5! / [3! * (5 − 3)!] = 120 / (6 * 2) = 10

So, there are 10 different ways to choose 3 discounts.


What is a Permutation?

A permutation is an arrangement of elements where order matters. The formula for permutations is: P(n, r) = n! / (n – r)!

where:

  • n is the total number of items
  • r is the number of selected items

Example of Permutation in Insurance

If an insurance company wants to assign 3 distinct priority levels (High, Medium, Low) to 3 of its 5 claims, the number of ordered ways to pick those claims is: P(5, 3) = 5! / (5 − 3)! = 120 / 2 = 60

So, there are 60 different ways to assign the priority levels.
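Both formulas are available in Python’s standard library, which makes a quick sanity check of these two examples easy:

```python
from math import comb, perm, factorial

# C(5, 3): choose 3 discounts out of 5, order ignored
assert comb(5, 3) == factorial(5) // (factorial(3) * factorial(2)) == 10

# P(5, 3): pick and order 3 of the 5 claims for High/Medium/Low priority
assert perm(5, 3) == factorial(5) // factorial(2) == 60

print(comb(5, 3), perm(5, 3))  # 10 60
```

`math.comb` and `math.perm` require Python 3.8 or newer.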


Why Use Pairwise and Combinatorial Testing?

  1. Saves Time and Effort – Testing fewer cases while maintaining coverage.
  2. Covers Critical Scenarios – Ensures every important combination is tested.
  3. Finds Defects Faster – Most bugs are caused by two interacting factors, so pairwise testing helps detect them efficiently.
  4. Reduces Costs – Fewer test cases mean lower testing costs and faster releases.

When Should You Use These Techniques?

  • When a system has many input variables
  • When full exhaustive testing is impractical
  • When you need to find bugs quickly with limited resources
  • When testing insurance, finance, healthcare, and other complex systems

Tools for Pairwise and Combinatorial Testing

To make the process easier, you can use tools like:

  • PICT (Pairwise Independent Combinatorial Testing Tool) – Free from Microsoft
  • Hexawise – A combinatorial test design tool
  • ACTS (Automated Combinatorial Testing for Software) – Developed by NIST

These tools help generate optimized test cases automatically based on pairwise and combinatorial principles.

Conclusion

Pairwise and Combinatorial Testing are powerful techniques that allow testers to find defects efficiently without having to test every possible combination. They save time, reduce costs, and improve software quality.

Next time you’re dealing with multiple input variables, try using Pairwise or Combinatorial Testing to make your testing smarter and more effective!

From the blog Rick’s Software Journal by RickDjouwe1 and used with permission of the author. All other rights reserved by the author.

Sprint 1 Retrospective

During the first Sprint of this semester, I worked on adding a new field to the Guest object that records the date of the first visit to the pantry in the backend of the system. While working on this issue, one of my teammates, Hiercine, and I decided that it would be better for us to work on our issues together, as she was supposed to implement the same field in the API.

We had a few issues at the beginning of our Sprint, but they were later resolved. They included being confused about what some of our issues required and not communicating with our teammates when we needed help. We also had people working on similar issues without working together.

While we had a couple of things that didn’t work well, we managed to improve upon them, and there were things that did work well. At the beginning of the Sprint, we divided the issues based on what each person was most comfortable and confident working on. We realized that it was a lot easier for us to work in pairs, so we started to do that with our issues. This made us complete them a lot faster and gave us another set of eyes when one of us was confused about what to do next. Similarly, we asked each other how we were doing on our issues every class and helped where it was needed.

I can’t think of any changes we can make to improve upon as a team except for maybe communicating more outside of class, but that hasn’t been needed as of yet.

As an individual, I can improve by speaking up more. This includes talking about my ideas, my problems, and checking in on others. I tend to be a quiet person, and it would be helpful for both me and my team if I spoke up more.

A pattern from the Apprenticeship Patterns book that is relevant to my experience during this Spring is the “Rubbing Elbows” pattern. This pattern is about learning by working closely with others. When you spend time coding, discussing, and solving problems with teammates, you pick up new skills and ideas faster. Instead of struggling alone, you get to see how others work, ask questions, and get help when needed. It encourages developers to actively seek out collaboration because working side by side makes learning easier and improves teamwork. I picked this pattern because it relates with how me and Hiercine worked together on related tasks. I had issues at first, but after collaborating with her, I was able to improve. If I had read about this pattern before the first sprint, I would have been more into working with teammates.

Overall, I think this Sprint went very well and our future Sprints will go even better.

Here are links to some of the work we did, as well as a description for each:

This link is to the merge request for all of the work we did during this issue, adding the new field to the API and the rest of the backend. https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfobackend/-/merge_requests/111/commits

This link shows the commit for adding the schema to the openapi file, as well as creating the new field within the backend of the file itself. https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfobackend/-/merge_requests/111/diffs?commit_id=570a34899a7e81af0dc97024016f1ca60aef035f

This link shows the commit where we fixed the tests so that they didn’t include the first visit date of the guest, so that the tests would pass. https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfobackend/-/merge_requests/111/diffs?commit_id=575da5d7eb01d0752a5c825bccbd996f7a9939f0

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

Sprint 1 Retrospective Blog

My task for Sprint 1 was to add a new date field that shows when the guest was created into the backend API. In this sprint, I worked closely with my teammate Sean Kirwin as he was working on the backend as well. I had to create a reference to the schema for the date of the first entry in the OpenAPI field. Below is the link to the merge request.

What worked well for me in Sprint 1 was being able to work on a task that I had experience with. I was confident to get what I needed to get done in a timely manner. I also appreciated the fact that I was able to work with one of my team members. 

As a team, I believe we worked well together. My team did a good job with the sprint planning and divided the roles for each task effectively. Each team member was good at communicating what they had done each week of the sprint. I appreciated that my team members were always there to give constructive feedback each week and also helped me out when I had a question about how to navigate my task. A way my team could improve is to show the work we did for the week instead of just explaining it; it would help to see it visually.

This task took me more time than it should have. As an individual, I could improve my time management when working on my assigned tasks. Although I improved my communication with the team throughout the sprint, I think I could do better. I did not necessarily have issues with making the changes to the files; however, my partner and I had issues with the merge request. The pipeline kept failing because of the lint-commit-messages job, but we were able to resolve this issue with some research.

The pattern I related to from the Apprenticeship Patterns book was “Learn How You Fail”. In this chapter, the authors highlight the significance of being aware of your weaknesses and areas of failure. It implies that although learning improves performance, shortcomings and failures endure unless they are actively addressed. The most important lesson is to seek self-awareness about the actions, circumstances, and habits that contribute to failure. This self-awareness makes it possible to make deliberate choices: either to accept and work around your limits, or to improve in particular areas. The pattern promotes a balanced approach to growth and discourages self-pity or perfectionism.

I chose this pattern because it is closely related to my experience during the sprint. I ran into issues with time management, working with others on code reviews, and managing merge request issues during the sprint. I became aware that I was frequently making the same errors, such as failing to consider edge scenarios or misinterpreting project specifications. I could have dealt with these failure patterns proactively rather than reactively when issues emerged if I had been more deliberate about identifying them early. The “Learn How You Fail” pattern would have helped me take a step back, identify recurring problems, and strategize ways to improve.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfobackend/-/merge_requests/107

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Sprint Retrospect Blog 1

When you have been in college for about 5 years, you start to think that there can’t be any more surprises and that you have been through pretty much everything, boy was I wrong on that one. This semester started in a way that I was at the same time familiar and a stranger to. Being divided into groups was nothing new to me, but sharing a long-term goal with a group of people was something that I was unaccustomed with.

We started this project together without any of us having any idea how to proceed, and I doubt anyone knew what they were doing the first week, but what we missed in knowledge we found in supporting each other. We have been part of groups before, but I can wholeheartedly say that this is the first time I’ve felt like part of a team. Through friendship and communication, we laid out a plan that helped us give form to a project we started from scratch. The best thing we might have done was to divide ourselves into sub-groups, each focusing on specific tasks that they felt comfortable working on. Be it by coincidence or instinct, I have always found myself to be the odd one out, being the only one working independently, but this was very short-lived here: after the first week of actually working on the project, I quickly found myself asking for and receiving a great deal of help and opinions from my teammates. We all ask for and give help without even noticing it, learning new things along the way even after years of being CS students.

This leads me to one of the Apprenticeship Patterns, called “The Long Road”. This pattern reminds us that becoming a true software craftsman isn’t about reaching a final destination. Instead, it’s an ongoing process where every project, challenge, and mistake is an opportunity to learn. Rather than expecting immediate expertise, the long road emphasizes small, consistent improvements over time, with each step building on the previous one. Reflecting on both successes and failures evolves our goals and teaches us that in the long run we should value not only coding proficiency but also soft skills such as communication, teamwork, and the ability to collaborate with mentors and peers. I was too fixated on the end goal of finishing the project, and reading this pattern before starting to work would have saved me some good hours of unnecessary headaches.

This pattern became even more evident when I reviewed the first commits I made in this project. Having worked on the guest-info system the past semester, I thought it would be a breeze to build on it and shape it toward our end goals. That attempt ended in failure. I tried to do too much in a short amount of time: adding new endpoints to the API and writing a script to update our existing database with new objects carrying extra attributes. My work ultimately failed, as my lack of knowledge caused irreparable errors in the code.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/guestinfobackend/-/tree/newAPI?ref_type=heads – My initial branch where 85% of the added code went wrong.

Thankfully, I learned from these mistakes, and having great teammates helps a lot. Michael and Prince worked on shaping our schemas and API to match the attributes of the objects we were fetching from an external dataset. With a few tweaks and a quick script, I managed to fetch the correct objects and lay the groundwork for our next sprint, which is combining all of our work.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/guestinfobackend/-/tree/main/src?ref_type=heads – The current folder containing the script and tweaks to the guest-info API.

I am happy to have communicative and capable teammates who help me and each other throughout this project. Being able to ask for help and share opinions without being judged is working wonders for our team and helping us understand each other's work more efficiently.

What I'm not happy with is how I more often than not bite off more than I can chew; working in small increments is something I want to improve going forward.

I know that the upcoming spring will be challenging, but with a great team and gradual self-improvement, I’m confident that we will be proud of the end result.

From the blog CS@Worcester – Anairdo's WSU Computer Science Blog by anairdoduri and used with permission of the author. All other rights reserved by the author.

Sprint Retrospective – Sprint-1

Evidence of Activity on GitLab

  1. Meeting with Customer to Identify Categories
    • A .js file named categories.js was built to help my peers start testing different category structures.
  2. Data Extraction for TeaSparty Project
    • The USDA FoodData Central downloadable data was extracted to build a comprehensive database.
  3. Database Optimization
    • The original database file was over 3.2GB and contained multiple unnecessary fields.
    • A Java file was developed to remove unnecessary fields such as foodNutrients, nutrients, rank, protein, and their respective IDs.
    • Successfully reduced the database from 3.2GB to 0.7GB, making it more manageable for our project.
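The field-stripping step above can be sketched roughly as follows. This is a minimal illustration in JavaScript rather than the original Java tool; only the dropped field names come from the notes above, while the sample record shape is a simplified, hypothetical stand-in for a USDA FoodData Central entry.

```javascript
// Drop the fields the sprint identified as unnecessary from each record.
// Only the field names being removed come from the sprint notes; the
// record shape below is a simplified, assumed example.
const FIELDS_TO_DROP = ["foodNutrients", "nutrients", "rank", "protein"];

function slimRecord(record) {
  const slim = { ...record }; // shallow copy so the input stays untouched
  for (const field of FIELDS_TO_DROP) {
    delete slim[field];
  }
  return slim;
}

// Stripping the nutrient arrays is what shrinks the file: they dominate
// each record's size in the raw download.
const sample = {
  fdcId: 123456,
  description: "CHEDDAR CHEESE",
  foodNutrients: [{ nutrientId: 1003, amount: 24.9 }],
  rank: 5,
};
const cleaned = slimRecord(sample);
// cleaned keeps only fdcId and description
```

Mapping the same function over every record in the parsed db.json and re-serializing would reproduce the 3.2GB-to-0.7GB reduction described above.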

Reflection on What Worked Well

  1. Customer Collaboration
    • Meeting with the customer to identify category requirements ensured that the project met their needs effectively.
    • The creation of categories.js provided a starting point for peer testing, helping streamline future work.
  2. Database Optimization
    • By removing unnecessary fields, the database was significantly reduced in size, improving efficiency.
    • The pre-cleaning process helped identify redundant fields early on, making subsequent steps more manageable.

Reflection on What Didn’t Work Well

  1. API Call Extraction Issue
    • Initially, I attempted to extract the database via API calls but hit a limitation: the API only returns 100 items per call, and there were over 100 pages of data to extract.
    • Researching solutions, I found that automating the paged requests was the best approach, but due to my lack of expertise in automation, I couldn’t implement it effectively.
  2. Challenges in Categorizing Products
    • A Java file was created to categorize all products, but it crashed on object description fields containing multiple nested quotes.
    • Example issue: "description":"ORIGINAL SWEET & SMOKY BAR \"\"B\"\" \"\"Q\"\" SAUCE, ORIGINAL".
    • As a workaround, I manually cleaned some of the problematic entries but am still looking for a better solution.
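One likely fix for the nested-quote crash, sketched here in JavaScript (the original categorizer was Java): let a real JSON parser handle the escaped quotes instead of splitting strings by hand. The raw text below mirrors the problematic record quoted above.

```javascript
// A proper JSON parser copes with escaped quotes inside string values;
// hand-rolled splitting on '"' characters is what tends to crash here.
// The raw text mirrors the problematic record from the sprint notes:
// the \" sequences are legal JSON escapes, not malformed data.
const raw =
  '{"description":"ORIGINAL SWEET & SMOKY BAR \\"\\"B\\"\\" \\"\\"Q\\"\\" SAUCE, ORIGINAL"}';

const record = JSON.parse(raw);
// record.description keeps the doubled quotes intact with no manual
// cleaning: ORIGINAL SWEET & SMOKY BAR ""B"" ""Q"" SAUCE, ORIGINAL
```

If the raw file genuinely contains unescaped doubled quotes (i.e., invalid JSON), a one-time pre-pass that escapes them would be needed before parsing; whether the data is actually malformed, rather than just hostile to manual string handling, is an assumption the notes above don't settle.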

Reflection on What Changes Could Be Made to Improve as a Team

  • Better Communication
    • Ensuring consistent updates within the team would help synchronize our work and avoid redundant efforts.
    • More frequent check-ins to discuss roadblocks and potential solutions could improve efficiency.

Reflection on What Changes Could Be Made to Improve as an Individual

  • Increase Work Hours on the Project
    • Spending more time on the project will allow my peers to see my progress and provide feedback earlier.
  • Dependency on Others’ Progress
    • I am currently waiting for Alex to finish the dev container. In the meantime, I am focusing on cleaning the db.json file so I can upload it as soon as possible.

Apprenticeship Pattern: “Expose Your Ignorance”

Summary of the Pattern

The “Expose Your Ignorance” pattern from the Apprenticeship Patterns book emphasizes acknowledging one’s knowledge gaps and actively seeking to learn. It encourages transparency about what you don’t know while making efforts to bridge those gaps.

Why This Pattern Was Selected

During this sprint, I faced challenges with API automation and data cleaning that slowed my progress. If I had openly acknowledged my lack of expertise in automation sooner, I could have sought help from peers or external resources earlier in the sprint. This pattern aligns with my experience because it highlights the importance of being upfront about knowledge gaps and taking proactive steps to fill them.

How This Pattern Would Have Changed My Behavior

Had I followed this pattern earlier in the sprint:

  • I would have sought advice from team members or forums on API automation instead of spending excessive time trying to figure it out alone.
  • I would have explored existing libraries or tools for handling JSON parsing issues more efficiently instead of resorting to manual cleaning.

Conclusion

This sprint provided valuable lessons in data handling, API limitations, and the importance of seeking help when needed. Moving forward, I plan to improve communication within the team, allocate more time to the project, and be more proactive in addressing technical challenges. Applying the “Expose Your Ignorance” pattern will help me become a more effective contributor in future sprints.

From the blog Discoveries in CS world by mgl1990 and used with permission of the author. All other rights reserved by the author.