Category Archives: CS@Worcester

Test Driven Development

After learning about test driven development in class a couple of classes ago, I found myself confused as to why this process would prove successful in software development, so I did some research and found a YouTube video from a channel called Fireship titled “Test-Driven Development // Fun TDD Introduction with JavaScript.” While, of course, JavaScript is not currently one of the programming languages that I primarily work with, the video proved informative and educational nonetheless. The video begins by emphasizing the importance of the phrase “Red, Green, Refactor” and describes test driven development as a technique where a programmer describes the behavior of the code before proceeding to implement it. “Red, Green, Refactor” describes the process of first writing a failing test, then writing just enough code to get the test to pass, and finally going back and optimizing the code in a manner that suits the project requirements.
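
To make the cycle concrete, here is a minimal sketch of one pass through it. The video uses JavaScript, but the idea is language-agnostic; this Python version and its slugify function are hypothetical, not from the video:

# RED: write the test first. Running it now fails, because slugify() does not exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# GREEN: write just enough code to make the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# REFACTOR: clean up without changing behavior (e.g., handle repeated spaces),
# then re-run the test to confirm it still passes.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()  # raises AssertionError only if the behavior is wrong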

Although I have no doubt that the majority of my confusion around this topic comes from inexperience and still being in the process of learning, I certainly had questions about how and why it makes sense to lay out an implementation process for code before writing even a single line. Conceptually it makes sense to lay out the general steps of a project and attempt to anticipate some of the issues that may come up beforehand, but something about testing code that doesn’t exist yet is a confusing concept for me.

The video also covered different types of testing at different levels, starting with unit testing and working up to more complex testing methods. Each testing method is built to cater to a different type of program being tested.

Generally, watching this sparked curiosity about what software development would look like without testing, and about how the continual development of the testing process has allowed programmers not only to improve the way they work but to create more robust and successful outcomes in their code.

I think it’s interesting looking back at the first three or so years of computer science classes, learning to code in multiple different languages with different use cases, styles, intents, and outcomes. Not one class, across the two schools I attended and the three years I was in school, worked with testing before the higher-level senior year classes. This made me wonder why something as important and integral to software development as testing code was left out of the learning process until so late.

From the blog CS@Worcester – The Struggle of Being a Female Student in CS by Noam Horn and used with permission of the author. All other rights reserved by the author.

Week 12 - Test Driven Development

During week 12, we learned about test driven development. Test driven development is a method in which you outline what functions need to be tested, write the tests, and then write the code. It is an iterative process that goes through each test one at a time, building on what is already written.

In class, we practiced the method by building a Mars Rover. We read what the rover should do and made a list of what we needed to test and build. We started with the easiest thing on the list and wrote a test for it. Then, we wrote the minimum amount of code to make the test run. We tweaked the test block if it needed it, and then ran the single test. Next, we repeated the process for the next easiest thing on the list. We will repeat the process until all of the tests are written.
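
As a rough sketch of what that first, easiest step might look like (this is not our actual class code; the class name and starting behavior are assumptions for illustration):

# Red: this test is written before Rover exists, so running it fails.
def test_rover_starts_at_origin_facing_north():
    rover = Rover()
    assert rover.position == (0, 0)
    assert rover.direction == "N"

# Green: the minimum code needed to make that one test pass.
class Rover:
    def __init__(self):
        self.position = (0, 0)
        self.direction = "N"

test_rover_starts_at_origin_facing_north()
# Repeat: pick the next easiest behavior (e.g., moving forward),
# write its test, then write just enough code to pass it.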

Test driven development focuses on just the tests before the actual code is written. It was a little confusing at first to think about how to write a test before the code, but once it clicked, it was easy. 

I found a website on test driven development and it described the process in a slightly different way. BrowserStack explains test driven development as a “red-green-refactor” cycle. The red phase is when just the test is written and it doesn’t pass because there’s no code. The green phase is when the bare minimum code is written for the test to run and it passes. The refactor phase is tweaking the tests and code so everything runs and passes. The cycle is run through every time a new test is written and repeats until all of the tests are written and pass. 

Test driven development is a very useful method for debugging. Since the tests are run at every new addition, bugs are found as they appear and do not slip through the cracks as easily. The method is also very efficient and allows the programmer to easily add new functions later on in development. The iterative process is easy for the programmer to maintain, and provides a much larger testing scope than other methods.

I liked working with test driven development. I think the process is very organized and straightforward. I would definitely use this in the future for projects that require thorough tests and with functions that build on each other. 

Source referenced: https://www.browserstack.com/guide/what-is-test-driven-development

From the blog ALIDA NORDQUIST by alidanordquist and used with permission of the author. All other rights reserved by the author.

Write Code that’s Readable. Please.

Let’s talk about something that affects programmers at all levels, on all codebases and projects: readability. Specifically, the balance between code that is fast and resource efficient and code that is easily understood and maintained.

First, readability. It’s a given that any code produced and delivered in any professional capacity is almost certainly not being written by a single developer. Therefore, good developers write code that others will be able to understand, and from which they can pick up the intended behavior. Commenting, appropriate use of whitespace and indentation, and effective developer documentation can make code easy to work with, for anyone.

Here’s an example of two Python code snippets that both return the summation of a given array, but are structured quite differently:

Code 1:

# Returns summation of given array of numbers
def sum_numbers_readable(nums):
    total = 0    # Initialize starting total
    for num in nums:    # Sum all numbers in array nums
        total += num
    return total    # Return sum

print(sum_numbers_readable([1, 2, 3, 4]))

Code 2:

def sum_numbers_fast(nums):
   return sum(nums)
print(sum_numbers_fast([1, 2, 3, 4]))

See the difference? Both pieces of code do the same thing and achieve the same output. Code 1, however, contains comments describing the intended behavior of the function and the expected parameter type, as well as what each line of code in the function does, step by step. Blank lines and proper use of whitespace also divide the code into relevant sections. Code 2, on the other hand, has no comments, poor use of whitespace (or as poor as Python will allow), and uses Python’s built-in sum() function. While the intended behavior of this function is pretty clear, this may not always be the case.

Take a more extreme example, just for the fun of it: go write or refactor some piece of Java code to be all on the same line. It’ll compile and run fine. But then go ask someone else to take the time to review your code, and see what their reaction is.

On the other hand, we have code efficiency and speed, generally measured in runtime or operations per second. Comparing our two code snippets using a tool like perfpy.com, we find that the second, less readable snippet executes faster, with a higher operations-per-second rate (931 nanoseconds vs. 981 nanoseconds, and 1.07 op/sec vs. 1.02 op/sec; not a huge difference at all, but this scales with program complexity).
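
If you want to reproduce a comparison like this locally instead of on perfpy.com, Python’s built-in timeit module gives a rough way to do it (exact numbers will vary by machine):

import timeit

def sum_numbers_readable(nums):
    total = 0
    for num in nums:
        total += num
    return total

def sum_numbers_fast(nums):
    return sum(nums)

nums = [1, 2, 3, 4]
# Run each function a million times to smooth out measurement noise.
print(timeit.timeit(lambda: sum_numbers_readable(nums), number=1_000_000))
print(timeit.timeit(lambda: sum_numbers_fast(nums), number=1_000_000))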

This gives us some perspective on the balance between performance and readability/maintainability. It also helps to keep in mind that performance and readability are both relative.

Looking back at our two Python snippets, most developers with experience would opt for the second design style. It’s faster, but also easily readable provided that you understand how Python methods work. However, people just learning programming would probably opt for the more cumbersome but readable first design. With regards to performance, the difference in runtime could be a nonissue or it could be a catastrophic slowdown. It depends entirely on the scope of the product and the needs of the development team.

As a final rule of thumb, the best developers will balance readability and efficiency as best they can, while above all else remaining consistent with their style. When looking at possible code optimizations, consider how the balance between readability and performance could shift. The phrase “good enough” tends to have a negative connotation, but if your code is readable enough that other team members can work on it, and fast enough that it satisfies the product requirements, “good enough” is perfect.

References:
Lazarev-Zubov, Nikita. “Code Readability vs. Performance: Shipbook Blog.” Shipbook Blog RSS, 16 June 2022, blog.shipbook.io/code-readability-vs-performance.

From the blog Griffin Butler Computer Science Blog by Griffin Butler and used with permission of the author. All other rights reserved by the author.

Understanding Smoke Testing in Software Development

In software development, a build has to be stable before more comprehensive testing can take place, so that the project is successful. One way of guaranteeing this is smoke testing, otherwise known as Build Verification Testing or Build Acceptance Testing. Smoke testing is an early checkpoint to verify that the major features of the software are functioning as desired before other, more comprehensive testing is done.

What is Smoke Testing?
Smoke testing is a form of software testing that involves executing a quick, shallow test of the most crucial features of an application to determine whether the build is stable enough for further testing. It is a minimum set of tests created to verify that the core features of the application are functioning. Smoke tests are generally executed once a new build is promoted to a quality assurance environment, and they act as an early warning system, indicating whether the application is ready for further testing or requires immediate correction.

Important Features of Smoke Testing

-Level of Testing: Smoke tests focus on the most important and basic features of the software, without exploring each and every functionality.
-Automation: Smoke tests are commonly automated, especially under time constraints, to make runs quick and repeatable.
-Frequency: Smoke testing is normally run after every build or significant code change, to allow early identification of major issues.
-Time Management: The testing itself is quick, so it saves valuable time by catching critical issues early.
-Environment: Smoke testing is typically performed in an environment that mimics the production environment, so that test results are as realistic as possible.

Goal of Smoke Testing

The primary objectives of smoke testing are:

-Resource Optimization: Don’t waste resources and time on further testing if core functionalities are broken.
-Early Detection of Issues: Identify any significant issues early so that they can be fixed more quickly.
-Refined Decision-Making: Provide a clear basis for deciding whether or not the build is ready to move on to thorough, detailed testing.
-Continuous Integration: Make every new build meet basic quality standards before it is added to the master codebase.
-Pragmatic Communication: Give rapid feedback to development teams, allowing them to communicate clearly about build stability.

Types of Smoke Testing
There are several types of smoke testing, based on the methodology chosen and the setting where it is put into practice:
-Manual Testing: Testers write and execute test cases manually for each build.
-Automated Testing: Automation tools run the tests on their own, which is best suited to projects with tight deadlines (see the sketch after this list).
-Hybrid Testing: Combines automated and manual tests to capitalize on the pros of each methodology.
-Daily Smoke Testing: Conducted on a daily basis, especially in projects with frequent builds and continuous integration.
-Acceptance Smoke Testing: Specifically focused on verifying whether the build meets the key acceptance criteria defined by stakeholders.
-UI Smoke Testing: Tests only the user interface features of an application to verify whether basic interactions are working.
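
To make the automated variety concrete, here is a minimal sketch of what an automated smoke test could look like in Python. The base URL and endpoints are hypothetical placeholders, not from any particular project:

import urllib.request

BASE_URL = "http://localhost:8080"  # hypothetical QA environment address

# The handful of critical paths that must work before deeper testing begins.
CRITICAL_ENDPOINTS = ["/health", "/login", "/api/items"]

def smoke_test():
    for path in CRITICAL_ENDPOINTS:
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=5) as resp:
                assert resp.status == 200, f"{path} returned {resp.status}"
        except Exception as e:
            # Any failure here means the build is not stable enough
            # to move on to more comprehensive testing.
            print(f"Smoke test FAILED at {path}: {e}")
            return False
    print("Smoke test passed: build is stable enough for further testing.")
    return True

if __name__ == "__main__":
    smoke_test()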

Applying Smoke Testing at Various Levels
Smoke testing can be applied at various levels of software testing:
-Acceptance Testing Level: Ensures that the build meets the minimum acceptance criteria established by the stakeholders or client.
-System Testing Level: Ensures that the system as a whole behaves as expected when all modules work together.
-Integration Testing Level: Ensures that integrated modules work and communicate as expected when combined.

Advantages of Smoke Testing
Smoke testing offers several advantages, including:
-Quick Execution: It is easy and quick to run, and hence ideal for frequent builds.
-Early Detection: It helps detect defects at the earliest stage, preventing money being wasted on faulty builds.
-Improved Software Quality: By detecting issues at an early stage, smoke testing allows for improved software quality.
-Minimized Risk of Failure: Detecting core faults in earlier phases minimizes the risk of failure in subsequent testing phases.
-Time and Effort Conservation: Time and effort are conserved because it prevents futile testing of unstable builds.

Disadvantages of Smoke Testing
Although smoke testing is useful in many respects, it has some disadvantages too:
-Limited Coverage: It checks only the most critical functions and doesn’t cover other potential issues.
-Manual Testing Drawbacks: Performed manually, it can be time-consuming, especially for larger projects.
-Inadequate for Negative Tests: Smoke testing typically doesn’t involve negative testing or invalid input scenarios.
-Minimal Test Cases: Since it only checks basic functionality, it may fail to identify all possible issues.

Conclusion
In conclusion, smoke testing is an important practice at the early stages of software development. It decides whether a build is stable enough to go for further testing, saving time and resources. By identifying major issues early in the development stage, it facilitates an efficient and productive software testing process. However, it should be remembered that smoke testing is not exhaustive and has to be supported by other forms of testing in order to ensure complete quality assurance. 

Personal Reflection

Looking at the concept of smoke testing, I see the importance of catching issues early in the software development process.
It’s easy to get swept up in the excitement of rolling out new features and fully testing them, but if the foundation is unstable, all the subsequent tests and optimizations can be pointless. Smoke testing, in this sense, serves as a safety net, getting the critical functions running before delving further into more rigorous tests. I think the idea of early defect detection resonates with my own working style.

As I like to fix small issues as they arise rather than letting them escalate into big problems, smoke testing allows development teams to solve “show-stoppers” early on, preventing wasted time, effort, and resources in the future. Though it does not pick up everything, its simplicity and speed can save developers from time wasted testing a defective product, resulting in a smooth and efficient workflow. The process seems imperative for maintaining a rock-solid, healthy product, especially in a scenario where frequent new builds are being rolled out.
The benefits of early problem detection not only make software better, but also stimulate a positive feedback loop of constant improvement within the development team.

From the blog CS@Worcester – Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

A lot of things worked pretty well this sprint compared to the last. Communication, effort, work, progress, collaboration, planning, and more were all present and at a great level compared to the last sprint. We still each had our own section that we mainly worked in, but there was plenty of collaboration when tackling a difficult issue, researching an issue, or just divvying up work when someone had free time. I think that our Sprint 2 goal was very concrete, and the issues we made at the start of the sprint, as well as throughout it, led to smooth sailing to the finish line.

I wouldn’t say that anything didn’t work well, but rather that some things could be worked on. Communication could always be improved; even though it’s a night-and-day difference from Sprint 1, there were some points at which it would’ve been nice to hear from certain team members what was going on or why they weren’t present. This may be more of just my personal feeling, but there is one member who handles almost all of our GitLab organization and use and takes notes of our meetings, and I’m not sure if it’s for their sake or for the team’s sake, but I feel that they could ask the rest of the team to help with any of these. They are the most adept at using GitLab and, I assume, want to stay on top of things and as organized as possible, but I’m certain other team members could help out, even if only a little.

To improve as a team, it would be great to communicate when a team member can’t make a meeting, and to make good progress and decisions even without certain team members present. I think we divvy up our issues well, but other work that isn’t exactly an issue could also be divided up between members, especially if it’s rather menial.

I think I could’ve gathered a better idea of our group’s work and flow and what other team members were working on. For the Sprint 2 Review, I was prepared to share what I had worked on but wholly unprepared to share what other members had worked on and to present our work.

The apprenticeship pattern I felt was most relevant to my experiences was “Confront Your Ignorance” from Chapter 2. This pattern states that “There are tools and techniques that you need to master, but you do not know how to begin… and there is an expectation that you already have this knowledge.” The solution to this pattern states to “Pick one skill, tool, or technique and actively fill the gaps in your knowledge about it.” Although this pattern doesn’t exactly describe my experience during the sprint, it felt the closest out of all the other patterns. Rather than tools and techniques, I would say that I need to better understand our group’s work and plans, as well as individual members’ work. To handle this, I could go through our GitLab issues to see what has been accomplished and who is assigned to what, I could explore our repositories to get a better understanding of the individual pieces of the project, and I could simply ask my team members about what they have worked on, what they plan to work on, and their idea of our group’s work. Taking notes would’ve been a great option as well, given how much information might be shared during our meetings.

From the blog CS@Worcester – Kyler's Blog by kylerlai and used with permission of the author. All other rights reserved by the author.

HOW DECISION TABLES CHANGED MY SOFTWARE TESTING MINDSET.

If you’ve ever written test cases based on your gut feeling, you’re not alone. I used to write JUnit tests by simply thinking, “What might go wrong?” While that’s a decent start, I quickly realized that relying on intuition alone isn’t enough, especially for complex systems where logical conditions stack up fast.

That’s when I understood the magic of Decision Table-Based Testing.

What Is Decision Table Testing?

A decision table is like a truth table, but for real-world logic in your code. It lays out different conditions and maps them to the actions or outcomes your program should take. By organizing conditions and results in a table format, it becomes much easier to identify which combinations of inputs need to be tested and which don’t. It’s especially helpful when you want to reduce redundant or impossible test cases, when you have multiple input variables (like GPA, credits, user roles, etc.), and when your program behaves differently depending on combinations of those inputs.

Applying Decision Tables in Real Time

For a project that I happened to work on, we analyzed a simple method: boolean readyToGraduate(int credits, double gpa). This method is meant to return true if credits ≥ 120 and GPA ≥ 2.0. We had to figure out what inputs would cause a student to graduate, not graduate, or throw an error—such as when the GPA or credit values were outside of valid ranges.

Instead of testing random values like 2.5 GPA or 130 credits, we created a decision table with all the possible combinations of valid, borderline, and invalid values.

We even simplified the process using equivalence classes, like:

  • GPA < 0.0 → invalid
  • 0.0 ≤ GPA < 2.0 → not graduating
  • 2.0 ≤ GPA ≤ 4.0 → eligible to graduate
  • GPA > 4.0 → invalid

By grouping these ranges, we reduced a potential 256 test cases to a manageable 68 and even further after combining rules with similar outcomes.
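
To make that concrete, here is a rough Python sketch of how the grouped rules can become table-driven tests. The Python function stands in for the Java readyToGraduate method above, and the exact boundary and error behavior shown is my assumption:

# Stand-in for boolean readyToGraduate(int credits, double gpa).
def ready_to_graduate(credits, gpa):
    if credits < 0 or gpa < 0.0 or gpa > 4.0:
        raise ValueError("invalid input")
    return credits >= 120 and gpa >= 2.0

# Each row is one rule from the decision table: (credits, gpa, expected).
decision_table = [
    (120, 2.0, True),      # both conditions met -> graduate
    (119, 3.5, False),     # credits too low -> not graduating
    (130, 1.9, False),     # GPA below 2.0 -> not graduating
    (130, -0.5, "error"),  # GPA < 0.0 -> invalid
    (130, 4.5, "error"),   # GPA > 4.0 -> invalid
]

for credits, gpa, expected in decision_table:
    if expected == "error":
        try:
            ready_to_graduate(credits, gpa)
            print(f"FAIL: ({credits}, {gpa}) should have raised an error")
        except ValueError:
            pass  # the invalid input was rejected, as the rule requires
    else:
        assert ready_to_graduate(credits, gpa) == expected
print("All decision table rules covered.")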

Well, you must be wondering why this even matters in real projects. It matters because in real-world applications, time and efficiency are everything. Decision tables help you cover all meaningful test scenarios and cut down on unnecessary or duplicate test cases. They also help reduce human error and missed edge cases, and they provide a clear audit trail of your testing logic.

If you’re working in QA, development, or just trying to pass that software testing class, mastering decision tables is a must-have skill. Switching from intuition-based testing to structured strategies like decision tables has completely shifted how I write and evaluate test cases. It’s no longer a guessing game—it’s a methodical process with justifiable coverage. And the best part? It saves a ton of time. Next time you’re designing tests, don’t just hope you’ve covered the edge cases. Prove it—with a decision table.

Have you used decision tables in your projects? Drop a comment below and share your experience!

From the blog CS@Worcester – MY_BLOG_ by Serah Matovu and used with permission of the author. All other rights reserved by the author.

Sprint 2

During this sprint, I switched up the work I did compared to the last sprint: instead of working on the backend, I was given the task of overseeing the front end. Going into the front end blind, I didn’t know exactly how far along it already was and had to acclimate myself to the code. On first opening the frontend, I noticed it was missing several files that would be of great importance in starting it up. One of my first fixes was the docker-compose file: the one left by the other group was the same as the backend’s, so I created a new file that would hold both the frontend and backend servers. I updated some of the documentation that had become outdated compared to what we were currently working on. I created a new way to start the frontend server, because that was missing, and added an up.sh, down.sh, and rebuild.sh for the front end. I also updated the outdated docker file. After I had a basis for what was needed, I went in and organized the new files so they read in an organized manner and I could see what files were missing. I had to add new assets and components, including new pictures of LibreFoodPantry and any others that were needed.

During this sprint, I worked well alone, but I should have checked in more often with my teammates because of the issue that would later fall upon me. I was making progress every week, but I hadn’t yet seen the biggest fault: the backend would crash on me. Next time I should check in with my teammates sooner instead of only making progress on my own. I just didn’t want to slow my teammates down with my work, and that is a fault in itself; I shouldn’t be scared to ask for help, because the team will fail if I am.

This time around, the apprenticeship pattern that resonated with me the most was Breakable Toys, because it is the basis of each sprint. You must create a small project where you can make mistakes in order to learn and get better. During this sprint I was learning the ins and outs of the front end on something someone else created that I had to update and fix. This pattern could have made me more confident about messing up, because it’s not wrong to mess up. Failing would help with issues that would come up and show me what to do in other scenarios. Another pattern I would choose is Learn How You Fail. There were many times during this sprint when I was scratching my head to find an answer or come to an understanding of something, but by picking up on the patterns of how I failed, I could change for the next issue that came up. This opened my mind to being more intentional with my work and picking up on my common mistakes, even if sometimes the simplest mistakes are the biggest headaches. Errors can be prevented by understanding why you do something and adapting to catch them next time. During this sprint I learned a great deal about my project, and I hope in the next sprint I will be able to learn more.

From the blog cs-wsu – DCO by dcastillo360 and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

Gitlab Descriptions

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/addbarcodefrontend/-/commit/6999f8a611a955be33526c2e6f712240f69ab5c4 – Cleaned the project folders and files to remove any unnecessary files and clean up the directory paths, also changed the docker files to run the website locally.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/addbarcodefrontend/-/commit/08d0b14fc04493a29242fd458a1bcf05992dd603 – Modified the layout of the website to be responsive to smaller screen sizes and added the WSU logo.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/inventorybackend/-/commit/9b6d6d0d68e0387f1214410f56702fce3b84f3a8 – Copied the work from the addBarcodeFrontend over to the inventoryBackend project to connect to our backend.

Sprint Reflection

We all worked well this sprint to put together a nearly finished product. I am confident that by the end of sprint 3 our product will be ready to be used, but likely could use improvements still. Each of us has been diligently working on our assigned issues and bringing them together to make it all work. Our team started this project from scratch unlike most other groups and I feel proud of the progress we have managed to reach in this short time, even considering the fact that we decided to add on to the initial task that was given to us.

I don’t think that our communication was as good or as effective as it should have been. There was a large sense of abstraction between each team member and their work: changes were being made and there was progress toward a better-functioning product, but the details of who was doing what were somewhat lost, I feel. It seemed as though we each knew what we were supposed to do and just did it, but I felt lost when I was asked about my teammates’ roles. Another thing I think could be improved is the actual codebase. Looking at it now is confusing, and although it does work, it doesn’t meet any of the design principles that make code efficient to read and manage.

Speaking on the things I thought could be improved, the communication heading into the last sprint should be better as we’re rolling out final changes. This is the time to let our teammates know of any problems we’re facing and the changes we’re making to solidify our project before we leave. This also leads into my point of refactoring and documenting. As I said previously, I think that the code is confusing and doesn’t follow design principles so I think we could improve the clarity by refactoring wherever possible and also making comments where necessary to describe how something works.

In relation to the lack of design principles, I learned about them in the software design class and fell into the trap of making code that works now, but isn’t optimized for future use. This is a basic mistake that I knew about but still regrettably made. It’s very easy to make code that works now but won’t be easy to read or modify later. This is a problem I’ve realized I have made and I’m glad I made this mistake now because I can learn from it and hopefully avoid it in the future by being mindful about my development and taking the extra time to be careful with my software design.

Apprenticeship Pattern

The “construct your curriculum” apprenticeship pattern was relevant to me during this sprint. It basically states that your path to mastery won’t be laid out for you, and you are going to have to be the one in charge of what and how you learn. I feel this pattern was reflected this sprint in the lack of guidance that was given to us. This forced us to lay out our own road to learning what we needed in order to complete the task we were given. Had I known about this pattern during the sprint, I would have sought out more sources and probably learned more, because I would understand that my learning starts and ends with me; if I am unhappy with my level of knowledge, then it’s on me to improve it.

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

Sprint Retrospective – Sprint 2

This sprint pushed us a bit more out of our comfort zones as we moved deeper into authentication, authorization, and infrastructure setup. While we had a better sense of our team dynamics and project direction, some unexpected technical challenges tested our adaptability and communication. In particular, with Keycloak setup and JWT integration becoming a major focus, we encountered some unexpected challenges that helped us grow both technically and as a team.

Evidence of activity

  • Configure Realm Export Script/Command(s): Exported the realm configuration from binary to JSON to enable version control and team collaboration.
  • Added user group/role to JWT payload: Configured Keycloak settings to add the user information we needed to the JWT payload (see the sketch after this list).
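
As a rough illustration of what putting roles in the JWT payload means, here is a small Python sketch that decodes a token’s payload segment for inspection (without verifying the signature). The claim names shown are typical Keycloak defaults; a given realm’s configuration may differ:

import base64
import json

def decode_jwt_payload(token):
    # Decode the middle (payload) segment of a JWT without verifying the
    # signature -- for inspection only, never for authorization decisions.
    payload_b64 = token.split(".")[1]
    # Restore the padding stripped by base64url encoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Example (with a real token in hand): Keycloak typically places realm roles
# under "realm_access" -> "roles" once the client scope is configured.
# payload = decode_jwt_payload(token)
# print(payload.get("realm_access", {}).get("roles", []))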

What worked well

Clearer Role Distribution:
This sprint had more defined roles, which helped prevent overlap and increased efficiency.

Deeper Technical Understanding:
Working with Keycloak gave us valuable hands-on experience with client scopes, tokens, and user roles.

Proactive Communication:
Team members were quicker to reach out for testing, review, and feedback. That made a noticeable difference in momentum.

What didn’t work well:
One major challenge was managing Keycloak realm configuration collaboratively.

At first, we exported Keycloak changes (clients, roles, users) using the admin console, which generated binary .realm files. These couldn’t be diffed, merged, or version-controlled properly. This became a blocker when multiple teammates made changes at the same time.

The workaround:

  • We each exported changes into JSON format.
  • Then we manually merged the configurations later, ensuring nothing was overwritten.
  • We also began documenting our changes to avoid confusion.

It was tedious and not ideal, but we learned the value of choosing version-control-friendly formats early on when working with tools like Keycloak.

Individual improvement goals:
– Programming Improvement: Over the last sprint, I struggled with translating my thought process into actual code and logic. This made me realize that while computer science courses often emphasize theory, I need to spend more time improving my practical programming skills to bridge that gap.

– Ask Sooner: I struggled with JWT for hours, when help was just a message away.

Apprenticeship Pattern: “Concrete Skills”

Summary:
The “Concrete Skills” pattern emphasizes the importance of acquiring real, tangible skills that make you immediately useful to a team. These are the kinds of abilities that allow you to contribute from day one—whether it’s setting up a development environment, debugging a system, or deploying a service.

Why I Chose It:
This sprint made me realize how much more confident and helpful I feel when I have concrete, hands-on experience with the tools we’re using, especially Keycloak. Initially, I felt lost trying to work with realm exports and JWT validation because I didn’t have real-world practice with them. After digging in and troubleshooting the JSON config issue, I started gaining practical skills that I could apply directly to our project.

How It Could’ve Helped Earlier:
If I had taken time before the sprint to develop specific, concrete skills—like how to export/import Keycloak realms properly, or how JWT verification works at the code level—I could have contributed more efficiently and with less frustration. This pattern reminded me that building a toolbox of real, usable skills is what makes you valuable in a team setting—not just theoretical knowledge.

Final Thought

Sprint 2 challenged me on a deeper level, not just with code, but with collaboration, tooling, and infrastructure. The lessons around version control, documentation, and communication will stick with us as we move forward. We’re not just building an app—we’re learning how to work like a real development team.

From the blog CS@Worcester – The Bits &amp; Bytes Universe by skarkonan and used with permission of the author. All other rights reserved by the author.

Software Development Capstone: Sprint 2 Review

When choosing an apprenticeship pattern to write about for this sprint, Chapter 4’s Rubbing Elbows stuck out almost immediately. The problem raised by the authors is one for those who prefer to work independently when writing and developing software. The agency to decide which direction to approach a task from, coupled with the efficiency of being the only decision maker, is appealing, and developers can absolutely be successful working this way.

The downside is, those who work alone cannot draw on the experience and knowledge of others, and place a hard limit on their productivity. Additionally, there are so many micro-techniques that can sometimes only be learned through partnership; small strategies that on their own are insignificant but accumulate over time into a much stronger knowledge-base.

The balance between independent and team-focused development is something that I would like to continue to work at. Over the past sprint, I believe I did a better job of communicating progress on my issues and the problems I needed help with. That said, I could always be better. I would like to place more focus on making myself available to my teammates for assistance with issues, partially due to the desire to be productive and deliver a successful product, but also due to the ideas outlined in the Rubbing Elbows pattern. I’ve found over the past sprint that an area I’d like to improve in is the more practical knowledge that I would normally pick up working in groups and communicating with my team.

Some tasks I accomplished during the sprint:

  1. Completed JSDoc Environment setup: Our backend code now uses JSDoc-compatible tags in comment lines, which are used to auto-generate developer documentation. Link
  2. buildReport outputs correct report totals with headers: the buildReport class in our reporting system backend correctly formats report headers and calculates correct tallies for visit info. Link

For the final sprint, I plan on finalizing the buildReport backend code and designing tests to be run on the function. The final component is the connection between the backend code and the database, which I will also be working on with my team. Because we are now in the final sprint, my main overall goal is to get the project as far along as time allows before making sure the environment is prepared for future work.

From the blog Griffin Butler Computer Science Blog by Griffin Butler and used with permission of the author. All other rights reserved by the author.