Write Code that’s Readable. Please.

Let’s talk about something that affects programmers at all levels, on all codebases and projects: readability. Specifically, the balance between code that is fast and resource efficient and code that is easily understood and maintained.

First, readability. Any code produced and delivered in a professional capacity is almost certainly not being written by a single developer. Good developers therefore write code that others can understand and from which they can pick up the intended behavior. Comments, appropriate use of whitespace and indentation, and effective developer documentation can make code easy to work with, for anyone.

Here’s an example of two Python code snippets that both return the summation of a given array, but are structured quite differently:

Code 1:

# Returns summation of given array of numbers

def sum_numbers_readable(nums):
    total = 0    # Initialize starting total
    for num in nums:   # Sum all numbers in array nums
        total += num
    return total   # Return sum

print(sum_numbers_readable([1, 2, 3, 4]))

Code 2:

def sum_numbers_fast(nums):
   return sum(nums)
print(sum_numbers_fast([1, 2, 3, 4]))

See the difference? Both pieces of code do the same thing and produce the same output. Code 1, however, contains comments describing the intended behavior of the function and the expected parameter type, as well as what each line in the function does, step by step. Blank lines and proper use of whitespace also divide the code into relevant sections. Code 2, on the other hand, has no comments, poor use of whitespace (or as poor as Python will allow), and uses Python’s built-in sum() function. While the intended behavior of this particular function is pretty clear, that will not always be the case.

Take a more extreme example, just for the fun of it: go write or refactor some piece of Java code to be all on the same line. It’ll compile and run fine. But then go ask someone else to take the time to review your code, and see what their reaction is.

On the other hand, we have code efficiency and speed, generally measured in runtime or operations per second. Comparing our two code snippets with a tool like perfpy.com, we find that the second, less readable snippet executes faster, with a higher operations-per-second count (931 nanoseconds vs. 981 nanoseconds, and 1.07 op/sec vs. 1.02 op/sec; not a huge difference at all, but the gap grows with program complexity).
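If you’d like to reproduce this kind of measurement without an external site, Python’s built-in timeit module gives a rough comparison. Absolute numbers will vary by machine, and the iteration count here is arbitrary:

```python
import timeit

def sum_numbers_readable(nums):
    total = 0    # Initialize starting total
    for num in nums:
        total += num
    return total

def sum_numbers_fast(nums):
    return sum(nums)

data = [1, 2, 3, 4]

# Time each version over many calls to smooth out noise
readable_s = timeit.timeit(lambda: sum_numbers_readable(data), number=100_000)
fast_s = timeit.timeit(lambda: sum_numbers_fast(data), number=100_000)

print(f"explicit loop: {readable_s:.4f}s, built-in sum: {fast_s:.4f}s")
```

On most machines the built-in wins, since the loop runs in C rather than in Python bytecode, but the margin on a four-element list is tiny.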

This gives us some perspective on the balance between performance and readability/maintainability. It also helps to keep in mind that performance and readability are both relative.

Looking back at our two Python snippets, most developers with experience would opt for the second design style. It’s faster, but also easily readable provided that you understand how Python methods work. However, people just learning programming would probably opt for the more cumbersome but readable first design. With regards to performance, the difference in runtime could be a nonissue or it could be a catastrophic slowdown. It depends entirely on the scope of the product and the needs of the development team.

As a final rule of thumb, the best developers balance readability and efficiency as best they can, while above all else remaining consistent in their style. When looking at possible code optimizations, consider how the balance between readability and performance could shift. The phrase “good enough” tends to have a negative connotation, but if your code is readable enough that other team members can work on it, and fast enough that it satisfies the product requirements, “good enough” is perfect.

References:
Lazarev-Zubov, Nikita. “Code Readability vs. Performance: Shipbook Blog.” Shipbook Blog RSS, 16 June 2022, blog.shipbook.io/code-readability-vs-performance.

From the blog Griffin Butler Computer Science Blog by Griffin Butler and used with permission of the author. All other rights reserved by the author.

Understanding Smoke Testing in Software Development

In software development, a build must be stable before more comprehensive testing can proceed if a project is to succeed. One way of guaranteeing this is smoke testing, otherwise known as Build Verification Testing or Build Acceptance Testing. Smoke testing is an early checkpoint to verify that the major features of the software are functioning as intended before more comprehensive testing is done.

What is Smoke Testing?
Smoke testing is a form of software testing that involves executing a quick, shallow pass over the most crucial features of an application to determine whether the build is stable enough for further testing. It is a minimum set of tests created to verify that the core features of the application are functioning. Smoke tests are generally executed once a new build is promoted to a quality assurance environment, and they act as an early warning system, indicating whether the application is ready for further testing or requires immediate correction.

Important Features of Smoke Testing

- Level of Testing: Smoke tests focus on the most important and basic features of the software, without exploring every functionality.
- Automation: Smoke tests are commonly automated, especially under time constraints, to get quick, repeatable runs.
- Frequency: Smoke testing is normally run after every build or significant code change, allowing early identification of major issues.
- Time Management: The testing itself is quick, so it is a valuable time-saver, catching critical issues early.
- Environment: Smoke testing is typically performed in an environment that mimics production, so that test results are as realistic as possible.
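To make this concrete, here is a minimal sketch of what a smoke suite might look like. The check names and stand-in functions are hypothetical; a real suite would exercise the actual application (launch it, authenticate, load the main page):

```python
# A minimal smoke-test sketch (all names and checks here are hypothetical).
# Each check should be fast and cover only a core function of the build.

def app_starts():
    """Stand-in for 'the application launches without crashing'."""
    return True

def login_works(username, password):
    """Stand-in for 'a known test user can authenticate'."""
    return username == "test" and password == "secret"

def run_smoke_tests():
    checks = {
        "app starts": app_starts(),
        "login works": login_works("test", "secret"),
    }
    failed = [name for name, passed in checks.items() if not passed]
    if failed:
        print("Build rejected, failed checks:", failed)
        return False
    print("Build stable: proceed to full testing")
    return True

run_smoke_tests()
```

The point is the shape, not the contents: a handful of fast checks gating the build, run after every promotion to the QA environment.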

Goal of Smoke Testing

The primary objectives of smoke testing are:

- Resource Optimization: Avoid wasting resources and time on further testing if core functionalities are broken.
- Early Detection of Issues: Identify significant issues early so that they can be fixed more quickly.
- Refined Decision-Making: Provide a clear basis for deciding whether the build is ready to move on to thorough, detailed testing.
- Continuous Integration: Ensure every new build meets basic quality standards before it is added to the master codebase.
- Pragmatic Communication: Give rapid feedback to development teams, allowing them to communicate clearly about build stability.

Types of Smoke Testing
There are several types of smoke tests, based on the methodology chosen and the setting where it is put into practice:
- Manual Testing: Test cases are written and executed manually by testers for each build.
- Automated Testing: Automation tools run the tests unattended; best suited to projects with tight deadlines.
- Hybrid Testing: Combines automated and manual tests to capitalize on the strengths of each approach.
- Daily Smoke Testing: Conducted on a daily basis, especially in projects with frequent builds and continuous integration.
- Acceptance Smoke Testing: Specifically focused on verifying whether the build meets the key acceptance criteria defined by stakeholders.
- UI Smoke Testing: Tests only the user interface of an application, to verify that basic interactions work.

Applying Smoke Testing at Various Levels
Smoke testing can be applied at various levels of software testing:
- Acceptance Testing Level: Ensures that the build meets minimum acceptance criteria established by the stakeholders or client.
- System Testing Level: Ensures that the system as a whole behaves as expected when all modules work together.
- Integration Testing Level: Ensures that modules that have been integrated work and communicate as expected when combined.

Advantages of Smoke Testing
Smoke testing offers several advantages, including:
- Quick Execution: It is easy and quick to run, and hence ideal for frequent builds.
- Early Detection: It helps detect defects at the earliest stage, preventing money from being wasted on faulty builds.
- Improved Software Quality: By catching issues early, smoke testing allows for improved software quality.
- Minimized Risk of Failure: Detecting core faults in earlier phases minimizes the risk of failure in subsequent testing phases.
- Time and Effort Conservation: Time and effort are conserved, as it prevents futile testing of unstable builds.

Disadvantages of Smoke Testing
Although smoke testing is useful in many respects, it has some disadvantages too:
- Limited Coverage: It checks only the most critical functions and doesn’t cover other potential issues.
- Manual Testing Drawbacks: Run manually, it can be time-consuming, especially for larger projects.
- Inadequate for Negative Tests: Smoke testing typically doesn’t involve negative testing or invalid-input scenarios.
- Minimal Test Cases: Since it only checks basic functionality, it may fail to identify all possible issues.

Conclusion
In conclusion, smoke testing is an important practice at the early stages of software development. It decides whether a build is stable enough to go for further testing, saving time and resources. By identifying major issues early in the development stage, it facilitates an efficient and productive software testing process. However, it should be remembered that smoke testing is not exhaustive and has to be supported by other forms of testing in order to ensure complete quality assurance. 

Personal Reflection

Looking at the concept of smoke testing, I see the importance of catching issues early in the software development process.
It’s easy to get swept up in the excitement of rolling out new features and fully testing them, but if the foundation is unstable, all the subsequent tests and optimizations can be pointless. Smoke testing, in this sense, serves as a safety net, confirming the critical functions are running before delving into more rigorous tests. I think the idea of early defect detection resonates with my own working style.

As I like to fix small issues as they arise rather than letting them escalate into big problems, smoke testing allows development teams to solve “show-stoppers” early on, preventing wasted time, effort, and resources in the future. Though it does not pick up everything, its simplicity and the fact that it executes fast can save developers from wasted time spent on testing a defective product, thus ending up with a smooth and efficient workflow. The process, especially in a scenario where there are frequent new builds being rolled out, seems imperative to maintain a rock-solid and healthy product.
The benefits of early problem detection not only make software better, but also stimulate a positive feedback loop of constant improvement within the development team.

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

A lot of things worked well this sprint compared to the last. Communication, effort, work, progress, collaboration, planning, and more were all present and at a much higher level than in the previous sprint. We still had our own sections that we mainly worked in, but there was plenty of collaboration when tackling a difficult issue, researching a problem, or just divvying up work when someone had free time. I think our Sprint 2 goal was very concrete, and the issues we made at the start of the sprint, as well as throughout it, led to smooth sailing to the finish line.

I wouldn’t say anything didn’t work well, but rather that some things could be worked on. Communication could always be improved; even though it’s a night-and-day difference from Sprint 1, there were points at which it would’ve been nice to hear from certain team members what was going on or why they weren’t present. This may be more my personal feeling, but there is one member who handles almost all of our GitLab organization and takes notes at our meetings. I’m not sure if it’s for their sake or the team’s, but I feel they could ask the rest of the team to help with these tasks. They are the most adept at using GitLab and, I assume, want to stay on top of things and as organized as possible, but I’m certain other team members could help out, even if only a little.

To improve as a team, it would be great to communicate when a team member can’t make a meeting, and to make good progress and decisions even without certain members present. I think we divvy up our issues well, but other work that isn’t exactly an issue could also be divided among members, especially if it’s rather menial.

I think I could’ve gathered a better idea of our group’s work and flow and of what other team members are working on. For the Sprint 2 Review, I prepared to share what I had worked on, but was wholly unprepared to share what other members had worked on and to present our work.

The apprenticeship pattern I felt was most relevant to my experiences was “Confront Your Ignorance” from Chapter 2. This pattern states that “There are tools and techniques that you need to master, but you do not know how to begin… and there is an expectation that you already have this knowledge.” The solution to this pattern states to “Pick one skill, tool, or technique and actively fill the gaps in your knowledge about it.” Although this pattern doesn’t exactly describe my experience during the sprint, it felt the closest out of all the other patterns. Rather than tools and techniques, I would say that I need to better understand our group’s work and plans as well as individual members’ work. To handle this, I could go through our Gitlab issues seeing what has been accomplished and who is assigned to what, I could explore our repositories to get a better understanding of the individual pieces of the project, and I could simply ask my team members about what they have worked on, what they plan to work on, and their idea of our group’s work. Taking notes would’ve been a great option as well due to how much information might be shared during our meetings.

From the blog CS@Worcester – Kyler's Blog by kylerlai and used with permission of the author. All other rights reserved by the author.

HOW DECISION TABLES CHANGED MY SOFTWARE TESTING MINDSET.

If you’ve ever written test cases based on your gut feeling, you’re not alone. I used to write JUnit tests by simply thinking, “What might go wrong?” While that’s a decent start, I quickly realized that relying on intuition alone isn’t enough especially for complex systems where logical conditions stack up fast.

That’s when I understood the magic of Decision Table-Based Testing.

What Is Decision Table Testing?

A decision table is like a truth table, but for real-world logic in your code. It lays out different conditions and maps them to the actions or outcomes your program should take. By organizing conditions and results in a table format, it becomes much easier to identify which combinations of inputs need to be tested and which don’t. It’s especially helpful when you want to reduce redundant or impossible test cases, when you have multiple input variables (like GPA, credits, or user roles), and when your program behaves differently depending on combinations of those inputs.

Applying Decision Tables in Real Time

For a project I happened to work on, we analyzed a simple method: boolean readyToGraduate(int credits, double gpa). This method is meant to return true when credits ≥ 120 and GPA ≥ 2.0. We had to figure out what inputs would cause a student to graduate, not graduate, or throw an error, such as when the GPA or credit values were outside their valid ranges.

Instead of testing random values like 2.5 GPA or 130 credits, we created a decision table with all the possible combinations of valid, borderline, and invalid values.

We even simplified the process using equivalence classes, like:

  • GPA < 0.0 → invalid
  • 0.0 ≤ GPA < 2.0 → not graduating
  • 2.0 ≤ GPA ≤ 4.0 → eligible to graduate
  • GPA > 4.0 → invalid

By grouping these ranges, we reduced a potential 256 test cases to a manageable 68 and even further after combining rules with similar outcomes.
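As a rough illustration, here is a Python sketch of the method together with one representative test per equivalence class (the original was a Java method; the credit lower bound and the exact error behavior are assumptions made for this example):

```python
def ready_to_graduate(credits, gpa):
    # Invalid inputs raise an error. The GPA bounds follow the
    # equivalence classes above; the credit lower bound is assumed.
    if gpa < 0.0 or gpa > 4.0 or credits < 0:
        raise ValueError("input out of valid range")
    return credits >= 120 and gpa >= 2.0

# One representative value per equivalence class, instead of
# every raw combination of inputs:
assert ready_to_graduate(130, 2.5) is True    # eligible to graduate
assert ready_to_graduate(130, 1.5) is False   # GPA too low
assert ready_to_graduate(100, 3.0) is False   # credits too low
try:
    ready_to_graduate(130, 4.5)               # invalid GPA
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

Each assertion corresponds to one rule in the decision table, which is exactly how the table lets you shrink hundreds of raw combinations down to a handful of meaningful cases.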

Well, you must be wondering why this even matters in real projects. It matters because in real-world applications, time and efficiency are everything. Decision tables help you cover all meaningful test scenarios and cut down on unnecessary or duplicate test cases. They also reduce human error and missed edge cases, and provide a clear audit trail of your testing logic.

If you’re working in QA or development, or just trying to pass that software testing class, mastering decision tables is a must-have skill. Switching from intuition-based testing to structured strategies like decision tables has completely shifted how I write and evaluate test cases. It’s no longer a guessing game; it’s a methodical process with justifiable coverage. And the best part? It saves a ton of time. Next time you’re designing tests, don’t just hope you’ve covered the edge cases. Prove it, with a decision table.

Have you used decision tables in your projects? Drop a comment below and share your experience!

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.

Sprint 2

During this sprint, I switched up the work I did compared to the last: instead of working on the backend, I was given the task of overseeing the frontend. Going into the frontend blind, I didn’t know exactly how far along it already was and had to acclimate myself to the code. On first opening the frontend, I noticed it was missing several files that would hold great importance in getting it started. One of my first fixes was the docker-compose file: the one left by the other group was the same as the backend’s, so I created a new file that would run both the frontend and backend servers. I updated some of the documentation that had become outdated compared to what we are working on currently. I created a new way to start the frontend server, since that was missing, adding up.sh, down.sh, and rebuild.sh scripts for the frontend, and I also updated the outdated Dockerfile. Once I had a basis for what was needed, I reorganized the new files so they read in an organized manner and it was clear which files were missing. I also had to add new assets and components, including new pictures of Libre food pantry and any others that were needed.

During this sprint, I worked well alone, but I should have checked in more often with my teammates because of the issue that would later fall upon me. I was making progress every week, but I hadn’t yet seen the biggest fault: the backend would crash on me. Next time I should check in with my teammates sooner instead of making progress on my own. I just didn’t want to slow my teammates down with my work, but that is a fault in itself; I shouldn’t be scared to ask for help, because the team will fail if I do this.

This time around, the apprenticeship pattern that resonated with me the most was Breakable Toys, because it is the basis of each sprint. You must create a small project where you can make mistakes in order to learn and get better. During this sprint I was learning the ins and outs of the frontend on something someone else created, which I had to update and fix. This pattern could have made me more confident about messing up, because it’s not wrong to mess up. Failing would help with issues that would come up later, showing me what to do in other scenarios. Another pattern I would choose is Learn How You Fail. There were many times during this sprint when I was scratching my head to find an answer or come to an understanding of something, but by picking up on the patterns of how I failed, I could change for the next issue that came up. This opened my mind to being more intentional with my work and noticing my common mistakes, even if sometimes the simplest mistakes are the biggest headaches. Errors can be prevented by understanding why you do something and adapting to catch them next time. During this sprint, I learned a great deal about my project, and I hope in the next sprint I will be able to learn more.

From the blog cs-wsu – DCO by dcastillo360 and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

Gitlab Descriptions

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/addbarcodefrontend/-/commit/6999f8a611a955be33526c2e6f712240f69ab5c4 – Cleaned the project folders and files to remove any unnecessary files and clean up the directory paths, also changed the docker files to run the website locally.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/addbarcodefrontend/-/commit/08d0b14fc04493a29242fd458a1bcf05992dd603 – Modified the layout of the website to be responsive to smaller screen sizes and added the WSU logo.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/inventorybackend/-/commit/9b6d6d0d68e0387f1214410f56702fce3b84f3a8 – Copied the work from the addBarcodeFrontend over to the inventoryBackend project to connect to our backend.

Sprint Reflection

We all worked well this sprint to put together a nearly finished product. I am confident that by the end of sprint 3 our product will be ready to be used, but likely could use improvements still. Each of us has been diligently working on our assigned issues and bringing them together to make it all work. Our team started this project from scratch unlike most other groups and I feel proud of the progress we have managed to reach in this short time, even considering the fact that we decided to add on to the initial task that was given to us.

I don’t think that our communication was as good or as effective as it should have been. There was a large sense of abstraction between each team member and their work, where changes were being made and there was progress toward a better functioning product, but the details of who was doing what were somewhat lost I feel. It seemed as though we each knew what we were supposed to do and just did it, but I felt lost when I was asked about my teammates’ roles. Another thing I think could be improved is the actual codebase. Looking at it now is confusing and although it does work, it doesn’t meet any of the design principles that make code efficient to read and manage. 

Speaking on the things I thought could be improved, the communication heading into the last sprint should be better as we’re rolling out final changes. This is the time to let our teammates know of any problems we’re facing and the changes we’re making to solidify our project before we leave. This also leads into my point of refactoring and documenting. As I said previously, I think that the code is confusing and doesn’t follow design principles so I think we could improve the clarity by refactoring wherever possible and also making comments where necessary to describe how something works.

In relation to the lack of design principles, I learned about them in the software design class and fell into the trap of making code that works now, but isn’t optimized for future use. This is a basic mistake that I knew about but still regrettably made. It’s very easy to make code that works now but won’t be easy to read or modify later. This is a problem I’ve realized I have made and I’m glad I made this mistake now because I can learn from it and hopefully avoid it in the future by being mindful about my development and taking the extra time to be careful with my software design.

Apprenticeship Pattern

The “Construct Your Curriculum” apprenticeship pattern was relevant to me during this sprint. It basically states that your path to mastery won’t be laid out for you, and that you have to be the one in charge of what and how you learn. I feel this pattern was reflected this sprint in the lack of guidance that was given to us, which forced us to lay out our own road to learning what we needed in order to complete the task we were given. Had I known about this pattern during the sprint, I would have sought out more sources and probably learned more, because I would have understood that my learning starts and ends with me; if I am unhappy with my level of knowledge, it’s on me to improve it.

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

Sprint Retrospective – Sprint 2

This sprint pushed us a bit more out of our comfort zones as we moved deeper into authentication, authorization, and infrastructure setup. While we had a better sense of our team dynamics and project direction, some unexpected technical challenges tested our adaptability and communication. With Keycloak setup and JWT integration becoming a major focus in particular, we encountered some unexpected challenges that helped us grow both technically and as a team.

Evidence of activity

  • Configure Realm Export Script/Command(s): Exported the realm configuration from binary to JSON to enable version control and team collaboration.
  • Added user group/role to JWT payload: Configured Keycloak settings to add the user information we needed to the JWT payload.

What worked well

Clearer Role Distribution:
This sprint had more defined roles, which helped prevent overlap and increased efficiency.

Deeper Technical Understanding:
Working with Keycloak gave us valuable hands-on experience with client scopes, tokens, and user roles.

Proactive Communication:
Team members were quicker to reach out for testing, review, and feedback. That made a noticeable difference in momentum.

What didn’t work well:
One major challenge was managing Keycloak realm configuration collaboratively.

At first, we exported Keycloak changes (clients, roles, users) using the admin console, which generated binary .realm files. These couldn’t be diffed, merged, or version-controlled properly. This became a blocker when multiple teammates made changes at the same time.

The workaround:

  • We each exported changes into JSON format.
  • Then we manually merged the configurations later, ensuring nothing was overwritten.
  • We also began documenting our changes to avoid confusion.

It was tedious and not ideal, but we learned the value of choosing version-control-friendly formats early on when working with tools like Keycloak.
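One related trick that makes this kind of debugging easier: a JWT’s payload segment is just base64url-encoded JSON, so you can inspect the claims Keycloak added without any extra library. Note that this only decodes the token for inspection; it does not verify the signature, and the toy token below is fabricated for illustration:

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the middle (payload) segment of a JWT.
    NOTE: this only inspects claims; it does NOT verify the signature."""
    payload_b64 = token.split(".")[1]
    # base64url data must be padded to a multiple of 4 before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy token to demonstrate (a real one would come from Keycloak)
claims = {"preferred_username": "alice", "groups": ["staff"]}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
toy_token = f"header.{body}.signature"

print(decode_jwt_payload(toy_token))  # shows the groups claim we configured
```

A quick decode like this would have shown immediately whether the group/role mapper was actually landing in the payload, before any verification code got involved.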

Individual improvement goals:
– Programming Improvement: Over the last sprint, I struggled with translating my thought process into actual code and logic. This made me realize that while computer science courses often emphasize theory, I need to spend more time improving my practical programming skills to bridge that gap.

– Ask Sooner: I struggled with JWT for hours, when help was just a message away.

Apprenticeship Pattern: “Concrete Skills”

Summary:
The “Concrete Skills” pattern emphasizes the importance of acquiring real, tangible skills that make you immediately useful to a team. These are the kinds of abilities that allow you to contribute from day one—whether it’s setting up a development environment, debugging a system, or deploying a service.

Why I Chose It:
This sprint made me realize how much more confident and helpful I feel when I have concrete, hands-on experience with the tools we’re using, especially Keycloak. Initially, I felt lost trying to work with realm exports and JWT validation because I didn’t have real-world practice with them. After digging in and troubleshooting the JSON config issue, I started gaining practical skills that I could apply directly to our project.

How It Could’ve Helped Earlier:
If I had taken time before the sprint to develop specific, concrete skills, like how to export and import Keycloak realms properly or how JWT verification works at the code level, I could have contributed more efficiently and with less frustration. This pattern reminded me that building a toolbox of real, usable skills is what makes you valuable in a team setting, not just theoretical knowledge.

Final Thought

Sprint 2 challenged me on a deeper level, not just with code, but with collaboration, tooling, and infrastructure. The lessons around version control, documentation, and communication will stick with us as we move forward. We’re not just building an app—we’re learning how to work like a real development team.

From the blog CS@Worcester – The Bits &amp; Bytes Universe by skarkonan and used with permission of the author. All other rights reserved by the author.

Software Development Capstone: Sprint 2 Review

When choosing an apprenticeship pattern to write about for this sprint, Chapter 4’s Rubbing Elbows stuck out almost immediately. The problem raised by the authors is one for those who prefer to work independently when writing and developing software. The agency to decide how to approach a task, coupled with the efficiency of being the only decision maker, is appealing, and developers can absolutely be successful working this way.

The downside is that those who work alone cannot draw on the experience and knowledge of others, and so place a hard limit on their productivity. Additionally, there are many micro-techniques that can sometimes only be learned through partnership: small strategies that on their own are insignificant but accumulate over time into a much stronger knowledge base.

The balance between independent and team-focused development is something I would like to continue to work at. Over the past sprint, I believe I did a better job of communicating progress on my issues and the problems I needed help with. That said, I could always be better. I would like to place more focus on making myself available to my teammates for assistance with issues, partly out of the desire to be productive and deliver a successful product, but also because of the ideas outlined in the Rubbing Elbows pattern. I’ve found over the past sprint that an area I’d like to improve in is the more practical knowledge I would normally pick up working in groups and communicating with my team.

Some tasks I accomplished during the sprint:

  1. Completed JSDoc Environment setup: Our backend code now uses JSDoc compatible tags in comment lines, which is used to auto-generate developer documentation. Link
  2. buildReport outputs correct report totals with headers: the buildReport class in our reporting system backend correctly formats report headers and calculates correct tallies for visit info. Link

For the final sprint, I plan on finalizing the buildReport backend code and designing tests to be run on the function. The final component is the connection between the backend code and the database, which I will also be working on with my team. Because we are now in the final sprint, my main overall goal is to take development as far as time allows, while making sure the environment is prepared for future work.

From the blog Griffin Butler Computer Science Blog by Griffin Butler and used with permission of the author. All other rights reserved by the author.

Second Sprint Retrospective

Last week, my team completed our second Sprint. Like the first, it went well. Each of us picked up where we left off after the first Sprint, which for me meant continuing with the CheckInventory frontend. The issue I had wanted to finish by the end of the first Sprint was connecting the frontend to the backend, so that instead of displaying a hardcoded value, the page would pull the real one from the backend. With that connection in place, the frontend would have its full functionality. That was easier said than done: I spent almost two weeks trying to figure it out, only for a teammate to solve it before me. They had created a shared network that connected their frontend to the backend, and I was able to use the same shared network with no problems. I then wrote a method that issued a ‘GET’ request to the backend to retrieve the value to display.

The next issue was figuring out how to call this method when the page mounts. I spent a week on it, until it was suggested that I try calling it through a button press, since that had worked for my teammate’s frontend. I tried that, but the button simply refreshed the page. A different button ultimately worked, but I wasn’t satisfied with that, so I did some more research and found Vue’s ‘beforeMount’ hook, which solved the problem. I committed and made a merge request covering the shared network, the method, and the mount fix.

With a week left in the Sprint, I spent time experimenting with the design of the frontend, since there had been discussion in the visual identity team. I refactored the frontend so that the header, CheckInventory, and footer components are rendered directly in App.vue, instead of through a router as it was originally. I then fixed the footer so it sits properly and applied the appropriate colors and fonts according to the school’s guidelines. I completely overhauled the header, using the ‘Thea’s Pantry’ logo, adding a navigation bar, and adding a yellow strip along the bottom. Once again, I used the correct colors and fonts, in line with the school’s guidelines. I was able to commit and merge those changes before the end of the Sprint and present them to the client.
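For reference, the pattern described above, fetching a value from the backend right before the component mounts instead of displaying a hardcoded one, can be sketched roughly like this. The function, endpoint, and response shape here are illustrative assumptions, not the actual Thea’s Pantry code:

```javascript
// Core fetch logic kept in a plain function so it can be reused
// (and tested) independently of the component. The "/inventory" URL
// and the { count: ... } response shape are assumptions for this sketch.
async function getInventoryCount(fetchFn, url) {
  const response = await fetchFn(url); // issue the GET request
  const data = await response.json();  // parse the JSON body
  return data.count;                   // assumed response shape
}

// Vue (Options API) component using a beforeMount hook to load the value.
const CheckInventory = {
  data() {
    return { count: null }; // no hardcoded value
  },
  async beforeMount() {
    // Runs once, right before the component is inserted into the DOM,
    // so the real value is loaded without needing a button press.
    this.count = await getInventoryCount(fetch, "/inventory");
  },
  template: `<p>Current inventory: {{ count }}</p>`,
};
```

Unlike a button handler, `beforeMount` fires automatically as part of the component lifecycle, which is why it avoids the refresh problem described above.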

What worked well was, once again, communication. We all communicate really well, and when we need something from someone, or need someone to review something, it gets done fairly quickly. Our chemistry has also become friendlier: we’re more open and joke around with each other, while still being disciplined about our work. Speaking of work, something else that worked well was solidifying our roles. Since we all kept working on what we had worked on during the first Sprint, we knew what we were doing from the beginning, instead of figuring everything out from scratch. Personally, what worked well for me was being proactive, especially with the design side of the frontend. I felt motivated to make it look good, and it felt less like work. What did not work for me, again, was meeting my own expectation of doing enough. Toward the end I felt I was, but for most of the Sprint I felt I wasted a lot of time researching. That is not entirely my fault, but it is partly.

As a team, there’s not much we can improve on: communication is good, chemistry is great, and we have made a lot of progress. Something I suggested in the last blog post was clarifying what each of us is working on, like a stand-up at the start of class. We did this once. Although I don’t think it’s strictly necessary, it could be a way to clear up confusion if there is any. Personally, what I want to improve is how I use my time. I spend too much time researching, and while I can’t simply speed that up, I could do it more efficiently.

The Apprenticeship pattern I chose for this Sprint was “Retreat into Competence,” which, in layman’s terms, means I get overwhelmed by how much I don’t know and fall back on what I do. This refers to the fact that I have to spend time researching in order to fix my issues. There is nothing wrong with doing that; I just spend too much time doing it. Honestly, even if I had read the pattern before the Sprint, it probably wouldn’t have changed anything.

From the blog CS@Worcester – Cao&#039;s Thoughts by antcao and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-category-based/frontend/-/blob/WEB-APP-FRONTEND/src/frontend/src/components/BurgerMenu.vue?ref_type=heads

This is the link to the BurgerMenu component that I recently implemented.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-category-based/frontend/-/blob/WEB-APP-FRONTEND/src/frontend/src/App.vue?ref_type=heads

This is the link to the change I made in App.vue to make sure navigation between pages was working.

What Worked Well

What worked well this sprint was my ability to identify the direction I needed to go in early. By proactively researching what types of menus we needed (such as a scanning menu and an inventory display menu), I was able to start building out the feature rather than waiting for instruction. Additionally, I took more ownership of frontend work this sprint, which helped me feel more confident in my contributions. Our team also communicated better this time, which made planning and aligning tasks easier.

What Didn’t Work Well

Time management was one of my weaker areas this sprint. While I began working on the menu features early, I underestimated how long certain aspects, like tying barcode input to the inventory display, would take. As a result, some parts are still in development. Another issue was not asking for help as early as I should have when I got stuck on some front-end design decisions; asking sooner could have saved me time.
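To give a sense of the integration mentioned above, tying barcode input to an inventory display in Vue can be sketched along these lines. The component name, inventory data, and field names are made up for the example, not the project’s real code:

```javascript
// Core lookup: map a scanned barcode to an inventory record,
// or null if nothing matches. Kept separate so it is easy to test.
function lookupItem(inventory, barcode) {
  return inventory.find((item) => item.barcode === barcode) || null;
}

// Vue (Options API) component: v-model binds the scanned value to
// component state, and a computed property keeps the displayed item
// in sync with whatever was scanned.
const ScanMenu = {
  data() {
    return {
      scanned: "", // bound to the input field via v-model
      inventory: [ // example data only
        { barcode: "0001", name: "Canned beans", quantity: 12 },
        { barcode: "0002", name: "Pasta", quantity: 7 },
      ],
    };
  },
  computed: {
    match() {
      return lookupItem(this.inventory, this.scanned);
    },
  },
  template: `
    <div>
      <input v-model="scanned" placeholder="Scan a barcode" />
      <p v-if="match">{{ match.name }}: {{ match.quantity }} in stock</p>
      <p v-else>No matching item</p>
    </div>`,
};
```

Because the computed property re-evaluates whenever `scanned` changes, the display updates automatically with each new scan, with no extra event wiring.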

Team Improvement Suggestions

Our team has improved at communication and breaking down tasks, but we could still do a better job tracking progress on GitLab. Sometimes issues remain open even after tasks are completed, which makes it harder to see at a glance what’s done. Adding regular, short update posts during the week, even when work hasn’t been pushed yet, would help us stay aligned. I also think we could benefit from a regular 15-minute team sync mid-sprint.

Personal Improvement Suggestions

Individually, I’d like to work on improving my frontend development speed and getting more comfortable integrating UI features with logic and API calls. I also need to be better about reaching out when I hit roadblocks rather than spending too long stuck. Going forward, I will try to break down my assigned features into smaller sub-tasks and check them off one by one to maintain steady momentum.

Apprenticeship Pattern: “Expose Your Ignorance”

Summary:
The “Expose Your Ignorance” pattern, found in Chapter 2 of Apprenticeship Patterns by Dave Hoover and Adewale Oshineye, emphasizes that developers shouldn’t hide what they don’t know. Instead, they should openly acknowledge areas of inexperience and use that as a tool to learn and grow. This transparency not only helps personal development but also builds trust within teams and encourages collaboration.

Why I Chose This Pattern:
I chose this pattern because it perfectly describes one of the issues I struggled with this sprint. There were a few moments where I wasn’t entirely sure how to implement something correctly in Vue.js, especially around the menu layout and the dynamic binding of scanned values. Instead of asking for help or noting my uncertainty in the Discord chat, I tried to figure it out myself and ended up wasting valuable time.

How It Could Have Helped This Sprint:
Had I truly embraced this pattern earlier, I would have flagged those areas where I felt unsure and asked for input from my teammates or professor. This could have saved me hours of trial and error and allowed me to refocus on other key tasks. Going forward, I want to normalize being transparent about what I don’t know so I can get help sooner and grow faster.

From the blog CS@Worcester – Software Dev Capstone by Jaylon Brodie and used with permission of the author. All other rights reserved by the author.