Category Archives: CS@Worcester

Favor real dependencies for unit testing

URL: https://stackoverflow.blog/2022/01/03/favor-real-dependencies-for-unit-testing/

Mark Seemann brings us an interesting idea about which dependencies should be used during testing. In his article Favor real dependencies for unit testing, he explains that not every dependency necessarily helps you develop your tests. His main point concerns dependencies that generate a fake implementation of your methods so that you can test against them. One great example of this is Mockito, a widely used Java library where you can ask the tool to mock an entire class implementation. Although his argument sounds completely reasonable at first glance, what issue might Mark be missing? I would say he is missing the reason why developers often rely on mocks and stubs in real-world development scenarios.

The main reason someone may choose to use mocks and stubs is more related to collaborative group work rather than projects handled by a single developer. In group settings, such as when working on a complex system like a hotel booking website, developers are usually assigned to different components or features of the system. For example, imagine a situation where you are working as a developer on such a project and are responsible for the Bookings class, while your teammate is assigned to the Suites class. Both of you have been making progress on your respective parts, and now you want to start writing tests to ensure everything functions as expected.

However, if any of your methods rely on a function that your teammate has not yet implemented, you could run into difficulties. Without the other function available, you might not be able to fully test your own code, even though your part is technically complete. This could lead to a development bottleneck, preventing you from moving forward until the rest of the system is ready.

To solve such a problem, one practical solution is to use libraries like Mockito. These tools allow you to create a mock version of your teammate’s class or method, enabling you to continue writing and running tests without delay. As explained earlier, Mockito generates fake implementations that simulate the behavior of the real components. This makes it possible to isolate and verify your own code independently.
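To make the idea concrete, here is a minimal sketch in JavaScript of what a hand-written stub accomplishes (the same thing Mockito automates for Java). The `Bookings` class and `Suites` collaborator are hypothetical stand-ins for the hotel booking example above, not code from any real project:

```javascript
// Bookings depends on a Suites collaborator that a teammate owns.
// The real Suites class is not finished yet, so we inject a stub.
class Bookings {
  constructor(suites) {
    this.suites = suites;
  }
  // A booking succeeds only if the requested suite is available.
  book(suiteId, guestName) {
    if (!this.suites.isAvailable(suiteId)) {
      return { ok: false, reason: "suite unavailable" };
    }
    return { ok: true, suiteId, guestName };
  }
}

// Hand-written stub standing in for the unfinished Suites class:
// suite 101 is available, everything else is not.
const suitesStub = {
  isAvailable: (suiteId) => suiteId === 101,
};

const bookings = new Bookings(suitesStub);
console.log(bookings.book(101, "Ada").ok); // true
console.log(bookings.book(202, "Ada").ok); // false
```

In Java, Mockito's `mock(Suites.class)` together with `when(...).thenReturn(...)` generates this kind of stub for you, which is exactly what lets you keep testing while your teammate's class is still under construction.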

Mark’s point is valid in scenarios where a developer is solely responsible for both the implementation and testing of all related methods. In such cases, using real dependencies like database fakes or stubs may be more effective. However, in collaborative environments, mocking libraries are essential tools that support parallel development.

This article surprised me with its perspective and application. As I’ve learned in class, the use of mocks allows developers to test features that haven’t been implemented yet, adding a useful layer of abstraction. I believe that such libraries are not meant to stay in the codebase permanently but rather serve as temporary scaffolding: tools meant to be discarded once the full system is in place.

From the blog CS@Worcester – CS Today by Guilherme Salazar Almeida Nazareth and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

During Sprint Two, my main task was troubleshooting an issue with sending data to MongoDB and its connection within Docker. Initially, a teammate had trouble sending the fake data from RabbitMQ to MongoDB. The data was reaching the RabbitMQ queue, but there were problems with the consumer. I suggested that we seed fake data directly into the database so we could work on calculating totals and building the report. While working on that, it soon became clear that the issue was not the consumer but the connection to MongoDB itself. Even with the database running and an empty database visible in the browser, we could not add the fake data from the queue. Further attempts to connect to the database did not work. At the end of the sprint, we decided to split the data transfer portion into its own subproject for sprint three.

This was the main experience for half of the team, while the others worked on JSDoc, calculating report totals, and developing the frontend. We faced setbacks throughout the sprint but collaborated well and tackled the problem from different angles each week. Unfortunately, the troubleshooting process around the MongoDB connection bore no fruit. We misidentified the problem early on, which cost us some time, but this was mostly due to our inexperience with Docker. While comparing the guestinfobackend with our backend and implementing pieces of it, I grew increasingly confused about what was causing the issue and came to believe it had something to do with parts of the Docker container setup we did not understand.

Moving the data transfer portion to a different subproject seemed like the best approach. It will move us around the roadblock we hit and give us an achievable goal for sprint three. Although we could not solve the issue, I still learned a lot about MongoDB, MongoDB Compass, seeding fake data, docker-compose, and how Docker can complicate connections between different services.
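One thing I learned along the way is worth sketching. Inside a Compose network, containers reach each other by service name rather than localhost, which is easy to miss when a connection string works from the browser but not from another container. A minimal, hypothetical example (the service names and database name are made up, not our actual configuration):

```yaml
# docker-compose.yml (illustrative sketch only)
services:
  mongo:
    image: mongo:6
    ports:
      - "27017:27017"   # lets host tools like Compass connect
  backend:
    build: .
    depends_on:
      - mongo
    environment:
      # From inside the Compose network, the hostname is the
      # service name "mongo", not localhost.
      MONGO_URL: mongodb://mongo:27017/reports
```

A consumer container pointed at `mongodb://localhost:27017` would only be talking to itself, which could produce exactly the symptoms we saw.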

Our team did not initially understand the extent of the problem, and in hindsight, we should have had all teammates working on the issue earlier. We assumed the roadblock would be resolved quickly, but we should have brought the rest of the team in after week two. Our communication in and out of class has been fine, but while we were stuck on the database, I believe similar roadblocks on the frontend side were being downplayed. Since we are no longer stuck on the database, I will focus on helping with the frontend during sprint three.

Individually, I need to deepen my understanding of Docker container networking and ports. None of my attempts produced functional or testable changes, and although we actively discussed what we tried each week, it was disheartening not having anything to commit. I was constantly scrapping entire changes and trying different approaches, to no avail.

The apprenticeship pattern reflecting my experience for this sprint would be “Learn How You Fail.” This pattern emphasizes analyzing the reasons for your failures rather than dwelling on them. Understanding what led to the failure, like incorrect conclusions, gaps in knowledge, or missed details, can turn something frustrating into a learning opportunity. I chose this pattern because the sprint was full of failed attempts. Despite researching and testing each week, we could not solve the problem. Initially it was frustrating, but seeing failure as part of the process makes it easier to handle being stuck. Our group talked about our lack of experience with Docker, which helped us know what we needed to work on and showed me I was not the only one confused. If I had read this pattern earlier, I might have worked on the issue with less frustration and more patience.

While I did not make any commits this sprint due to the unresolved database issue, I was consistently involved in troubleshooting the MongoDB connection issue alongside my team, as shown in the weekly reports.

From the blog CS@Worcester – KindlCoding by jkindl and used with permission of the author. All other rights reserved by the author.

Top JavaScript testing frameworks

Jest:

Developed by Facebook, Jest is one of the most beginner-friendly frameworks, especially for those working with React. It comes pre-configured and includes a test runner, mocking, and assertion libraries. Its snapshot testing and excellent documentation make it a favorite among React developers. However, debugging can be tricky in some IDEs, and large snapshot files can be hard to maintain.
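As a taste of the style, here is what a minimal Jest test looks like. Under Jest, `test` and `expect` are provided as globals; the small shims below only stand in for them so the sketch runs as plain JavaScript outside a Jest install (delete them when using real Jest):

```javascript
// Shims standing in for Jest's global test() and expect();
// real Jest provides these automatically.
const test = (name, fn) => { fn(); console.log(`PASS ${name}`); };
const expect = (actual) => ({
  toBe: (expected) => {
    if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
  },
});

// The code under test (a made-up example function).
function sum(a, b) {
  return a + b;
}

// A Jest-style unit test: describe the behavior, assert the result.
test("sum adds two numbers", () => {
  expect(sum(2, 3)).toBe(5);
});
```

The same `test`/`expect` shape carries over to snapshot and mock tests, which is part of why Jest feels so approachable.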

Mocha:

Mocha is a flexible framework ideal for Node.js applications. Its simplicity and long-standing presence in the testing world make it reliable. With support for async testing and various plugins like Chai and Sinon, it offers solid control. That said, it requires more configuration than Jest and lacks some built-in features.

Jasmine:

Jasmine supports asynchronous testing and integrates well with external libraries. It’s loved for its flexibility and extensive community support. The trade-off is more setup complexity, especially if you need additional libraries for mocking or assertions.

Nightwatch:

Nightwatch is great for E2E testing with Selenium WebDriver. It’s particularly useful if your team has a Java background, thanks to its object-oriented syntax. However, its syntax can be less readable, and logging failures can be cumbersome without detailed error messages.

Playwright:

A rising star from Microsoft, Playwright allows you to automate Chromium, Firefox, and WebKit using one API. It’s fast, supports modern web features, and works well with headless browsers. Being newer, it still lacks the depth of resources available for older frameworks.

Puppeteer:

Built by Google, Puppeteer is tailored for Chrome/Chromium automation. It’s fast, developer-friendly, and ideal for tasks like form submission or page scraping. Its main limitation is the lack of cross-browser support.

Selenium:

The veteran of test automation, Selenium remains the go-to for cross-browser testing. While powerful, it often requires additional setup and can struggle with scalability unless paired with tools like Selenium Grid or LambdaTest.

Karma:

Karma offers real-time feedback and runs tests across devices and browsers. It supports popular frameworks like Mocha and Jasmine. It’s highly flexible, though less commonly used in newer projects today.

Cypress:

Cypress is designed for modern JavaScript apps and offers unique features like time-travel debugging and real-time reloads. It runs directly in the browser but is limited to a few supported browsers and doesn’t allow multi-tab or remote execution.

Final Thoughts

Your ideal framework depends on your project’s size, tech stack, and testing goals. Whether you’re working with React, Node.js, or need robust cross-browser support, there’s a JavaScript testing framework tailored to your needs. Test smart, and happy coding!

Reference: https://www.lambdatest.com/blog/top-javascript-testing-frameworks/

From the blog CS@Worcester – The Bits & Bytes Universe by skarkonan and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective Blog

My task in Sprint 2 was to modify all endpoints to accept access tokens that let the system determine whether a request comes from an authorized individual. This can only happen after the IAM team figures out how the tokens will work. In this sprint, I collaborated very closely with my partner Sean Kirwin.

What worked best for me in Sprint 2 was working with Sean and continuing on the backend. I was again working on an issue I am comfortable with, and I was able to continue a task that was in progress from the previous sprint.

As a team, we still managed to get along. Communication among my teammates was really the best part. Everyone could raise the issues they were facing, explain why they were late to or missing class, and was ready to help whenever a teammate needed it. Every week, my teammates were there to offer helpful criticism, and I appreciated their help when I needed it. They also answered my questions about how to proceed with my assignment. I know there is always room for improvement, but I think my team did very well and we did not have any major issues during this sprint.

Relying on another team was somewhat challenging for me. We were not able to accomplish as much as needed because we were waiting to see how the tokens would work, as that is the IAM team’s decision to make. Although we kept communicating with them on Discord, in my opinion we should have sat down and spoken face-to-face or on Zoom so that we could better understand each other’s views. I felt like both teams were getting a bit mixed up with each other’s arguments. The pattern I associated with this sprint from the Apprenticeship Patterns book was “Create Feedback Loops”. This chapter highlights the importance of constant, actionable feedback in speeding up learning and improvement on the journey of an apprentice towards mastery. Feedback loops facilitate the discovery of weaknesses, confirm progress, and hone skills. They are imperative for moving from apprentice to journeyman and later to master because they enable self-awareness and incremental improvement.

I chose this pattern because it is so closely related to what was happening in the sprint. Although we are told not to work on tasks from a previous sprint, I did so because, while waiting for feedback from the other group on my sprint 2 task, I had time available to fix a small mistake made in sprint 1. I was able to learn from the feedback of my teammates and rectify the small mistakes from sprint 1. The “Create Feedback Loops” pattern helped me to step back, hear the feedback from my peers, and strategize on how to improve.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfobackend/-/issues/141

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospect

In sprint 2 my team and I actually got the project functioning as we had intended. We now have a webpage with a scanner that can read UPC codes off any product we can find, and a fetch script that takes that UPC code and uses an FDA API to give us data on the product. This data includes brand, brand owner, product category, and a description. The data is then merged into our back end and stored with a quantity that gets entered manually.
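As a rough sketch of how a fetch script like ours can be split into testable pieces, here is a hypothetical version in JavaScript. The base URL, query parameter name, and response shape are illustrative stand-ins, not the exact API we used:

```javascript
// Build the lookup URL for a given UPC code.
// The host and "upc" parameter are hypothetical placeholders.
function buildLookupUrl(upc) {
  return `https://api.example.gov/products/search?upc=${encodeURIComponent(upc)}`;
}

// Pull just the fields we store from a (hypothetical) API response.
function extractProduct(responseJson) {
  const item = responseJson.items[0];
  return {
    brand: item.brand,
    brandOwner: item.brandOwner,
    category: item.category,
    description: item.description,
    quantity: 0, // quantity is entered manually after scanning
  };
}

// Wiring it together with fetch (not run here):
// const res = await fetch(buildLookupUrl("012345678905"));
// const product = extractProduct(await res.json());
```

Keeping the URL building and response parsing as pure functions means they can be unit tested without hitting the network at all.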

While demoing the webpage, our customer seemed very interested in it and in how user friendly we had made it. They were concerned about having to scan every individual product until they saw that you only need to scan once and then edit the quantities. There is only one issue in that regard: we are not able to find data for absolutely every product in that FDA API. However, we are in the process of creating manual entries that will be saved, so it will only be a problem the first time an item is scanned, keeping with our theme of user friendliness.

It has been cool and very rewarding for the team to actually see our project function this way. We’ve all done projects where things don’t really get to the point of looking and functioning like a completed product. This is a first for a few of us.

Our communication with each other during the weeks has stayed pretty consistent throughout this entire semester. I would say that is the strongest characteristic of this team. We all hold each other accountable by just being visible and seeing the others working hard to get this done.

Relating our team to a chapter from the book Apprenticeship Patterns, a good one would be “Be the Worst.” With our strong communication, and with the different parts of our project coming together into one product, we all noticed the strengths each member has to offer. Although this does not perfectly match the book’s example of surrounding yourself with a full team that is better than you, I think we have all done a good job of noticing each other’s strengths and understanding of each portion. This sprint showed how much we have really started to learn and grow as a team.

After sprint one I wanted to make it clearer for everyone to understand what each part of the team was doing and where they were at. This sprint actually managed to do a lot of that. While putting the pieces together, everyone really started to see the full picture, and there were points when you could see the excitement across everyone’s faces as things started to shape up and work. It also made for a more cohesive team, because the pairs that had been working in a more broken-off system had to explain what they did in order to get certain parts to function properly. Now that everyone is working in the same directories, we all witness the fast pace of progress we have had the whole time, which I believe I was the only one really witnessing before.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/inventorybackend

This is the current state of our project. It has a working webpage, back end, and fetch script that works between them.

Start up:

  • start the backend using bin/rebuild.sh
  • Install the live server extension on vscode
  • Right click scanner.html and open it with Live Server
  • Start scanning items =]

From the blog CS@Worcester – Mike St G – 448 by Michael St. Germain and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospect Blog 2

If I were to put our journey so far in simpler terms, that gamers like myself would have an easier time understanding, I would say that this whole project has felt like an MMORPG. During the first sprint, we all had nearly no idea what we were doing, trying easy tasks just so we could gain a few levels and understand the mechanics behind the game, so we could maybe get a glimpse of the bigger picture. After that was said and done, coming into the second sprint of this project, we all felt more comfortable with the project and its continuity. By now, we all have a clear image of our roles and what we bring to the table. We no longer run around aimlessly trying to defeat a minor boss; each of us has developed specific skills that make “fighting challenges” feel like a walk in the park, and everything goes smoothly like clockwork.

The one apprenticeship pattern that I would choose for this sprint would be “Concrete Skills – Making your abilities tangible and demonstrable”. We are entering a more advanced phase in our journey as CS students, with most of us graduating in a month, so now is the time to let go of abstract knowledge and hone tangible, demonstrable skills. This pattern really pushes you to refine specific, marketable abilities. The division into subgroups pushed us to learn particular skills. Working on the backend and connecting it to the frontend pushed me to learn, or better yet refine, some skills that I thought I had a good understanding of until I had to put them into practice. My teammates had already done a great job setting up the front end and adhering to visual identity guidelines, which meant I had to do as good of a job as they did, making everything functional and smooth.

After creating a functional backend last sprint, I now had to connect it to the frontend and make it actually “do something”.
I started by copying the front-end files my teammates had worked on so far into our main branch, then edited their scanner and database files. A simple JS script to fetch the information from the FDC database and add it to our own backend was all it took to give life to our project. It took me a while to figure out the correct pathing for each API call, but once that was set up, everything was a breeze. My teammates had also created a “Database” page in which we store our scanned products. I populated the page with items stored in our backend from the previous scanner page. After that, adding two buttons connected to simple API calls made the page more interactive. Now we can easily store, edit, and delete products, which is way more than what was asked of us initially.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/inventorybackend/-/commit/b57e1ea44a8113bac4075769da0b25336c975c8f – my latest commit with all the functionalities.

I guess if I had prior knowledge of the above-mentioned pattern, I would have had an easier time working on the project, as I would have “honed” my knowledge of backend-to-frontend connections beforehand, which would make my work even more efficient.

My teammates continue to be a great force in this project, and we are able to communicate clearly between us and solve issues in a short and manageable time without putting a lot of strain on ourselves. In fact, our communication might even backfire in a sense because we solve the issues so efficiently that we don’t even put them up on the issue board, a simple “hey, we have to fix this” is all that it takes to “clear” an issue, so we might want to work on that a little bit for the upcoming sprint.

Going back to my analogy, I guess that by now we have reached “level 40” with the cap being at 60. I hope that as we reach the “end-game,” we continue to have the great teamwork that we have had so far and that has helped us so much. We all have our roles and work cut out by now, so tackling this last sprint should be easy, with the end result being a successful project that we will leave behind but take with us valuable, tangible skills to help us take on future projects and challenges.

From the blog CS@Worcester – Anairdo's WSU Computer Science Blog by anairdoduri and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective Blog

During Sprint 2, my main task was to update our REST API by replacing the use of WSUIDs with UUIDs. Our sprint planning phase was good; we broke down the tasks into what everyone would be doing for the sprint. Our communication was effective, especially during our team meetings in class. To implement the switch from WSUID to UUID, I updated the OpenAPI specification to reflect the new identifier structure. One of the key wins in this sprint was the successful refactor of the API to use UUIDs instead of WSUIDs for identifying guests. This transition improved the design by making guest identifiers more secure and consistent. I gained a deeper understanding of OpenAPI specifications and how to maintain consistency across schema definitions and endpoint parameters. Using UUIDs helped eliminate potential conflicts or collisions that may have occurred with custom WSUIDs. Collaboration also worked well with my team. Once I communicated the change to my teammates, everyone was supportive in reviewing related files and helping test the changes locally. This made integration and deployment less stressful.

At first, I underestimated how widespread the WSUID field was across the codebase. I initially thought I could change a few lines in the schema, but it quickly became clear that the change had to be made in multiple endpoints, error responses, and even test data. I had to backtrack several times to hunt down instances of WSUID that I missed earlier, which slowed down progress and created some confusion. Additionally, I didn’t write enough tests initially. As a result, one of the updates to the /guests/{uuid} path was temporarily broken until I realized the mock data still used WSUID formatting. In the future, I need to write and run tests as I go rather than treating them as a final step. As a team, I think we have worked and collaborated well with each other. Personally, I need to get better at assessing the scope of tasks accurately; I underestimated the time and complexity involved in replacing WSUIDs with UUIDs. I also want to become more consistent about writing tests early in the process.

“Be the Worst” from Apprenticeship Patterns (Chapter 2) encourages developers to seek out teams where they are the least experienced. The idea is that being surrounded by more skilled developers pushes you to grow faster through mentorship, observation, and collaboration. During this sprint, I didn’t have as much experience working with API design and backend architecture. At first, I felt a bit behind, especially when I didn’t account for all the areas affected by the WSUID-to-UUID switch. This pattern reminded me that growth often comes from being challenged by your environment and teammates. Had I internalized this pattern from the start, I would have asked more questions earlier instead of assuming I had to figure everything out solo. I would have scheduled a quick check-in to confirm I was on the right path before updating the spec. By embracing being “the worst,” I could have saved time and avoided some of the errors that slowed me down mid-sprint. This sprint taught me that meaningful growth comes from deep work, reflection, and surrounding myself with teammates who challenge and support me. I look forward to carrying these lessons forward and completing sprint 3 successfully.

References:

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfobackend/-/issues/142

From the blog CS@Worcester – Lynn'sBlogs by lynnnsubuga and used with permission of the author. All other rights reserved by the author.

Path Testing

Hello! Welcome back to my next blog post. This post is about path testing. I used this article to do some research on it: Basis Path Testing in Software Testing | GeeksforGeeks

In class, we learnt about this in depth in one of our POGIL activities. This type of testing works with actual code, and involves creating graphs to further analyze and organize each step of the testing. Program graphs are built from circles (nodes) and arrows (edges) pointing to other circles to show the flow of the code. For example, loops have arrows that point back to previous circles until the loop completes, and branches can be shown by splitting one circle into two other circles, with one arrow pointing to each.

In the article, they call this a “Control Flow Graph.” The article calls a node with two or more arrows exiting it a decision node, and a node with two or more incoming arrows a junction node. The article went even more in depth than we did in class, because it also talked about regions, the areas enclosed by the edges and nodes of the graph.
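These definitions are mechanical enough to check with a few lines of code. Below is a small sketch (my own illustrative graph, not one from the article) that classifies the nodes of a control flow graph, stored as an adjacency list, into decision nodes (two or more outgoing edges) and junction nodes (two or more incoming edges):

```javascript
// A tiny control flow graph for: if (c) { A } else { B }; join
// stored as node -> list of successor nodes.
const cfg = {
  start: ["cond"],
  cond: ["thenA", "elseB"], // two exits -> decision node
  thenA: ["join"],
  elseB: ["join"],          // join has two entries -> junction node
  join: [],
};

function classify(graph) {
  // Count incoming edges for every node.
  const inDegree = {};
  for (const node of Object.keys(graph)) inDegree[node] = 0;
  for (const succs of Object.values(graph)) {
    for (const s of succs) inDegree[s]++;
  }
  return {
    decisionNodes: Object.keys(graph).filter((n) => graph[n].length >= 2),
    junctionNodes: Object.keys(graph).filter((n) => inDegree[n] >= 2),
  };
}

console.log(classify(cfg));
// { decisionNodes: [ 'cond' ], junctionNodes: [ 'join' ] }
```

Drawing the same structure by hand, with an arrow from `cond` to each branch and both branches feeding `join`, gives exactly the split-and-merge picture described above.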

This article was very interesting, and it was easy to understand because there were several pictures showing each type of code and how it would look in one of the graphs. They showed examples for do-while loops, if statements, and many more aspects of code. This is the reason why I picked this article: it expanded on what we had already learnt in class.

Some more information we learnt about in class, beyond just the graphs, was DD-paths, where the graphs I just mentioned are condensed into smaller, easier-to-understand graphs organized by node type. Similar nodes are condensed into one node, but the first and last nodes are kept separate.

Overall, I think this was an interesting topic to both learn about in class, and also learn more about with the article I looked into. This is definitely an important part of testing, since it helps organize everything. Sometimes, reading code is not very easy or organized and this method helps with both of those problems.

From the blog cs@worcester – Akshay's Blog by Akshay Ganesh and used with permission of the author. All other rights reserved by the author.

Week 12: Understanding the Importance of Software Testing: A Comprehensive Guide

In the dynamic world of software development, ensuring the reliability and functionality of applications is paramount. Software testing emerges as a critical process that verifies whether a software product meets its intended requirements and functions seamlessly under various conditions. It’s not merely about identifying bugs; it’s about validating that the software delivers a quality experience to its users.

Types of Software Testing

Software testing encompasses various methodologies, each targeting specific aspects of the application:

  1. Functional Testing: Validates that each function of the software operates in conformance with the requirement specifications.
  2. Non-functional Testing: Assesses non-functional aspects like performance, usability, and reliability.
  3. Regression Testing: Ensures that new code changes do not adversely affect the existing functionalities of the product.
  4. Black Box Testing: Examines the functionality of an application without peering into its internal structures or workings.
  5. White Box Testing: Involves testing internal structures or workings of an application, as opposed to its functionality.
  6. Automated Testing: Utilizes specialized tools to execute tests automatically, increasing efficiency and coverage.
  7. Manual Testing: Involves human testers executing test cases without the use of automation tools.

Levels of Software Testing

Testing is conducted at various stages of the software development lifecycle:

  1. Unit Testing: Focuses on individual components or pieces of code for a system.
  2. Integration Testing: Examines the interactions between integrated units/modules.
  3. System Testing: Tests the complete and integrated software to evaluate the system’s compliance with its specified requirements.
  4. Acceptance Testing: Determines whether the system satisfies the business requirements and is ready for deployment.

The Role of Software Testers

Software testers play a pivotal role in the development process. They are responsible for designing test cases, executing tests, and identifying bugs before the software reaches the end-user. Their work ensures that the final product is reliable, efficient, and user-friendly.


Conclusion

Software testing is an indispensable part of the software development process. It ensures that the final product is of high quality, meets user expectations, and functions as intended. By understanding and implementing effective testing strategies, organizations can deliver robust and reliable software products.

From the blog CS@Worcester – Nguyen Technique by Nguyen Vuong and used with permission of the author. All other rights reserved by the author.

Test-Driven Development

Hello everyone,

This week’s blog topic is Test-Driven Development (TDD), as this is something we recently talked about in class, and it didn’t take long for me to understand its importance. Test-Driven Development is a software development process that involves writing tests for your code before you write the code itself. At first this was very confusing, but after actually trying it, I saw its purpose. This approach has transformed how coding projects are developed by making them revolve around testing. The more traditional way follows the waterfall model, a linear approach where testing occurs near the end of one long timeline; TDD instead makes testing an ongoing, iterative process. It follows a simple cycle: you first write a test for the desired feature, ensure the test fails because the feature has not been written yet, and then write just enough code to make the test pass. This cycle repeats with further improvements and new features until the project is complete. Its core principle is rooted in Agile development, which emphasizes iterative development, collaboration, customer feedback, and the ability to change. The benefits of TDD are that it enhances collaboration through a shared understanding of requirements, it is one of the best ways to detect bugs early in the development cycle, and it improves code design immensely. Because development is driven by testing, the code evolves organically and ends up with a consistent style throughout, without much extra effort. Thanks to these core principles, it also lowers long-term maintenance costs, which can be hugely beneficial for big projects with a long lifespan. Beyond explaining what TDD is, the author also shares advice and best practices for developers trying to implement it in their work.
You start simple by writing a focused test on the fundamental features of the project, then create more complex tests for specific situations. What I liked about this post, and the reason why I chose it, was that it has a good balance of theoretical concepts and practical use. I appreciated how the author walked us through the concepts and then immediately followed up with simple examples. This reinforces what you are trying to learn, and through practice you test the concepts you just learnt.

In conclusion, Test-Driven Development represents a new way of approaching software development, one that prioritizes testing and the quality it brings by making everything revolve around it. By integrating testing into every step of development, TDD creates strong, maintainable software while supporting modern development practices. Development may be slow at first, but the long-term benefits and the high quality of the code make up for it.

Source: https://circleci.com/blog/test-driven-development-tdd/

From the blog Elio's Blog by Elio Ngjelo and used with permission of the author. All other rights reserved by the author.