Category Archives: CS@Worcester

Favor real dependencies for unit testing

URL: https://stackoverflow.blog/2022/01/03/favor-real-dependencies-for-unit-testing/

Mark Seemann brings us an interesting idea about which dependencies should be used during testing. In his article Favor real dependencies for unit testing, he explains that not every dependency necessarily helps you develop your tests. His main point concerns dependencies that generate some kind of fake implementation of your methods so you can test against them. One great example is Mockito, a widely used Java library where you can ask the tool to mock an entire class implementation. Although his argument sounds completely reasonable at first glance, what might Seemann be missing? I would say he is missing the reason why developers often rely on mocks and stubs in real-world development scenarios.

The main reason someone may choose to use mocks and stubs is more related to collaborative group work rather than projects handled by a single developer. In group settings, such as when working on a complex system like a hotel booking website, developers are usually assigned to different components or features of the system. For example, imagine a situation where you are working as a developer on such a project and are responsible for the Bookings class, while your teammate is assigned to the Suites class. Both of you have been making progress on your respective parts, and now you want to start writing tests to ensure everything functions as expected.

However, if any of your methods rely on a function that your teammate has not yet implemented, you could run into difficulties. Without the other function available, you might not be able to fully test your own code, even though your part is technically complete. This could lead to a development bottleneck, preventing you from moving forward until the rest of the system is ready.

To solve such a problem, one practical solution is to use libraries like Mockito. These tools allow you to create a mock version of your teammate’s class or method, enabling you to continue writing and running tests without delay. As explained earlier, Mockito generates fake implementations that simulate the behavior of the real components. This makes it possible to isolate and verify your own code independently.
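
Mockito itself is Java, but the idea is easy to picture. As a minimal sketch of the same technique in JavaScript with Jest, using the hypothetical Bookings and Suites classes from the example above:

```javascript
// bookings.test.js: a minimal sketch. Suites is not implemented yet,
// so we hand Bookings a Jest mock object that stands in for it.
const Bookings = require('./bookings');

test('creates a booking when the suite is available', () => {
  // Fake Suites: only the method Bookings depends on, stubbed out.
  const fakeSuites = { isAvailable: jest.fn().mockReturnValue(true) };

  const bookings = new Bookings(fakeSuites);
  const result = bookings.book('suite-101');

  // We can verify our own logic without the real Suites class.
  expect(result.confirmed).toBe(true);
  expect(fakeSuites.isAvailable).toHaveBeenCalledWith('suite-101');
});
```

Once the real Suites class is finished, the handmade mock can simply be swapped out for it.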

Mark’s point is valid in scenarios where a developer is solely responsible for both the implementation and testing of all related methods. In such cases, using real dependencies like database fakes or stubs may be more effective. However, in collaborative environments, mocking libraries are essential tools that support parallel development.

This article surprised me with its perspective and application. As I’ve learned in class, the use of mocks allows developers to test features that haven’t been implemented yet, adding a useful layer of abstraction. I believe such libraries are not meant to stay in the codebase permanently but rather to serve as temporary scaffolding: tools meant to be discarded once the full system is in place.

From the blog CS@Worcester – CS Today by Guilherme Salazar Almeida Nazareth and used with permission of the author. All other rights reserved by the author.

Top JavaScript testing frameworks

Jest:

Developed by Facebook, Jest is one of the most beginner-friendly frameworks, especially for those working with React. It comes pre-configured and includes a test runner, mocking, and assertion libraries. Its snapshot testing and excellent documentation make it a favorite among React developers. However, debugging can be tricky in some IDEs, and large snapshot files can be hard to maintain.
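
For a sense of how little setup that means in practice, here is a minimal sketch with a hypothetical greet module; test discovery, assertions, and snapshots all work out of the box:

```javascript
// greet.js: a hypothetical module under test
function greet(name) {
  return `Hello, ${name}!`;
}
module.exports = { greet };

// greet.test.js: Jest picks up *.test.js files automatically and
// provides the test/expect globals with no configuration.
const { greet } = require('./greet');

test('greets by name', () => {
  expect(greet('Ada')).toBe('Hello, Ada!');
});

test('matches the stored snapshot', () => {
  // The first run writes the snapshot; later runs fail if the output drifts.
  expect(greet('Ada')).toMatchSnapshot();
});
```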

Mocha:

Mocha is a flexible framework ideal for Node.js applications. Its simplicity and long-standing presence in the testing world make it reliable. With support for async testing and companion libraries like Chai (assertions) and Sinon (mocks, stubs, and spies), it offers solid control. That said, it requires more configuration than Jest and lacks some built-in features.
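
As a minimal sketch of that pairing (run with npx mocha; the fetchUser function is hypothetical):

```javascript
// Mocha supplies the runner (describe/it); Chai supplies the assertions.
const { expect } = require('chai');

// A stand-in async function for something you would really test.
async function fetchUser(id) {
  return { id, name: 'Ada' };
}

describe('fetchUser', function () {
  it('resolves with the requested user', async function () {
    const user = await fetchUser(42);
    expect(user).to.have.property('id', 42);
  });
});
```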

Jasmine:

Jasmine supports asynchronous testing and integrates well with external libraries. It’s loved for its flexibility and extensive community support. The trade-off is more setup complexity, especially if you need additional libraries for mocking or assertions.

Nightwatch:

Nightwatch is great for E2E testing with Selenium WebDriver. It’s particularly useful if your team has a Java background, thanks to its object-oriented syntax. However, its syntax can be less readable, and logging failures can be cumbersome without detailed error messages.

Playwright:

A rising star from Microsoft, Playwright allows you to automate Chromium, Firefox, and WebKit using one API. It’s fast, supports modern web features, and works well with headless browsers. Being newer, it still lacks the depth of resources available for older frameworks.
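
A minimal sketch of the one-API idea, driving all three engines with the same script (assumes the playwright package is installed):

```javascript
const { chromium, firefox, webkit } = require('playwright');

(async () => {
  // The exact same calls work against Chromium, Firefox, and WebKit.
  for (const browserType of [chromium, firefox, webkit]) {
    const browser = await browserType.launch({ headless: true });
    const page = await browser.newPage();
    await page.goto('https://example.com');
    console.log(browserType.name(), await page.title());
    await browser.close();
  }
})();
```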

Puppeteer:

Built by Google, Puppeteer is tailored for Chrome/Chromium automation. It’s fast, developer-friendly, and ideal for tasks like form submission or page scraping. Its main limitation is the lack of cross-browser support.

Selenium:

The veteran of test automation, Selenium remains the go-to for cross-browser testing. While powerful, it often requires additional setup and can struggle with scalability unless paired with tools like Selenium Grid or LambdaTest.

Karma:

Karma offers real-time feedback and runs tests across devices and browsers. It supports popular frameworks like Mocha and Jasmine. It’s highly flexible, though less commonly used in newer projects today.

Cypress:

Cypress is designed for modern JavaScript apps and offers unique features like time-travel debugging and real-time reloads. It runs directly in the browser but is limited to a few supported browsers and doesn’t allow multi-tab or remote execution.
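
For flavor, a minimal sketch of a Cypress spec; the page and selectors here are hypothetical. Each command becomes a step you can click through in the time-travel timeline:

```javascript
// cypress/e2e/search.cy.js
describe('search box', () => {
  it('shows results for a query', () => {
    cy.visit('/');                              // assumes a configured baseUrl
    cy.get('input[name="q"]').type('testing');  // selectors are illustrative
    cy.contains('button', 'Search').click();
    // Cypress retries this assertion until it passes or times out.
    cy.get('.results li').should('have.length.greaterThan', 0);
  });
});
```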

Final Thoughts
Your ideal framework depends on your project’s size, tech stack, and testing goals. Whether you’re working with React, Node.js, or need robust cross-browser support, there’s a JavaScript testing framework tailored to your needs. Test smart, and happy coding!

Reference: https://www.lambdatest.com/blog/top-javascript-testing-frameworks/

From the blog CS@Worcester – The Bits & Bytes Universe by skarkonan and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective Blog

My task in Sprint 2 was to modify all of our endpoints to accept access tokens, so the system can determine whether each request comes from an authorized individual. This can only fully happen after the IAM team figures out how the tokens will work. In this sprint, I collaborated very closely with my partner Sean Kirwin.
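
To make the task concrete, here is a minimal sketch of what token checking on an endpoint generally looks like in an Express-style backend. The verifyAccessToken function is a placeholder: how tokens are actually issued and validated is the IAM team's decision, so this is not our project's real code:

```javascript
// authorize.js: a sketch of access-token middleware, not our real logic.
function authorize(verifyAccessToken) {
  return (req, res, next) => {
    const header = req.get('Authorization') || '';
    const token = header.startsWith('Bearer ') ? header.slice(7) : null;
    if (!token || !verifyAccessToken(token)) {
      // Reject unauthorized requests before they reach the endpoint.
      return res.status(401).json({ error: 'missing or invalid access token' });
    }
    next();
  };
}
module.exports = { authorize };

// Hypothetical usage on an endpoint:
//   app.get('/guests/:id', authorize(verifyAccessToken), getGuestHandler);
```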

What worked best for me in Sprint 2 was working with Sean and continuing on the backend. I was once again working on an issue I was comfortable with, and I was able to continue a task that was in progress from the previous sprint.

As a team, we still managed to get along. Communication among my teammates was really the best part. Everyone could speak openly about the issues they were facing or why they were late to or missing class, and everyone was helpful whenever a teammate needed it. Every week, my teammates were there to offer constructive criticism, and I appreciated their help when I needed it. They also answered my questions about how to proceed with my assignment. I know there is always room for improvement, but I think my team did very well and we had no major issues during this sprint.

Relying on another team was somewhat challenging for me. We were not able to accomplish as much as we needed because we were waiting to see how the tokens would work, since that is the IAM team’s decision to make. Although we kept communicating with them on Discord, in my opinion we should have sat down and spoken face-to-face or on Zoom so that we could better understand each other’s views. I felt like both teams were getting a bit mixed up in each other’s arguments. The pattern I associated with this sprint, from the Apprenticeship Patterns book, was “Create Feedback Loops”. This chapter highlights the importance of constant, actionable feedback in speeding up learning and improvement on an apprentice’s journey toward mastery. Feedback loops facilitate the discovery of weaknesses, confirm progress, and hone skills. They are imperative for moving from apprentice to journeyman and later to master, because they enable self-awareness and incremental improvement.

I chose this pattern because it is so closely related to what happened in the sprint. Although we are told not to carry work over from the previous sprint, I did so because I had time available while waiting for the other team’s decision: I fixed a small mistake made in Sprint 1 so that I could then move on to my Sprint 2 task. I was able to take in the feedback from my teammates and rectify the small mistakes from Sprint 1. The “Create Feedback Loops” pattern helped me step back, hear my peers’ feedback, and strategize on how to improve.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfobackend/-/issues/141

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospect

In Sprint 2 my team and I actually got the project functioning as we had intended. We now have a webpage with a scanner that can read UPC codes off any product we can find, plus a fetch script that takes that UPC code and uses the FDC (FoodData Central) API to give us data on the product. This data includes brand, brand owner, product category, and a description. The data is then merged into our backend and stored with a manually entered quantity.

While demoing the webpage, our customer seemed very interested in it and in how user-friendly we had made it. They were concerned about having to scan every individual product, until they saw that you only need to scan an item one single time and then edit the quantities. There is only one issue in that regard: we are not able to find data for absolutely every product in the FDC API. However, we are in the process of creating manual entries that will be saved, so it will only be a problem the first time an item is scanned, keeping with our theme of user friendliness.

It has been cool and very rewarding for the team to actually see our project function this way. We’ve all done projects where things don’t really get to the point of looking and functioning like a completed product. This is a first for a few of us.

Our communication with each other during the weeks has stayed pretty consistent throughout this entire semester. I would say that is the strongest characteristic of this team. We all hold each other accountable by just being visible and seeing the others working hard to get this done.

Relating our team to a chapter from the book Apprenticeship Patterns, a good fit would be “Be the Worst”. With our strong communication, and now that we are putting the different parts of our project together into one product, we all noticed the strengths each member has to offer. Although this does not perfectly match the book’s example of surrounding yourself with a whole team that is better than you, I think we have all done a good job of noticing each other’s strengths and understanding of each portion. This sprint showed how much we have really started to learn and grow as a team.

After Sprint 1 I wanted to make it clearer for everyone what each part of the team was doing and where they were at. This sprint actually managed to do a lot of that for me. While putting the pieces together, everyone really started to see the full picture, and there were points when you could see the excitement on everyone’s face as things started to shape up and work. It also made for a more cohesive team, since the pairs that had been working in a more broken-off system had to explain what they did in order to get certain parts to function properly. Now that everyone is working in the same directories, we can all witness the fast pace of progress we have had the whole time, which I believe I was the only one really seeing before.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/inventorybackend

This is the current state of our project: it has a working webpage, backend, and fetch script that works between them.

Start up:

  • Start the backend using bin/rebuild.sh
  • Install the Live Server extension in VS Code
  • Right-click scanner.html and select “Open with Live Server”
  • Start scanning items =]

From the blog CS@Worcester – Mike St G – 448 by Michael St. Germain and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospect Blog 2

If I were to put our journey so far in simpler terms that gamers like myself would have an easier time understanding, I would say that this whole project has felt like an MMORPG. During the first sprint, we had nearly no idea what we were doing, taking on easy tasks just to gain a few levels and understand the mechanics behind the game, so we could maybe get a glimpse of the bigger picture. Coming into the second sprint, we all felt more comfortable with the project and its continuity. By now, we all have a clear image of our roles and what we bring to the table. We no longer run around aimlessly trying to defeat a minor boss; each of us has developed specific skills that make “fighting challenges” feel like a walk in the park, and everything runs smoothly like clockwork.

The one apprenticeship pattern I would choose for this sprint is “Concrete Skills – Making your abilities tangible and demonstrable”. We are entering a more advanced phase in our journey as CS students, with most of us graduating in a month, so now is the time to let go of abstract knowledge and hone tangible, demonstrable skills. This pattern pushes you to refine specific, marketable abilities. The division into subgroups pushed us to learn particular skills in depth. Working on the backend and connecting it to the frontend pushed me to learn, or better yet refine, some skills I thought I had a pretty good understanding of until I had to put them into practice. My teammates had already done a great job setting up the frontend and adhering to visual identity guidelines, which meant I had to do as good a job as they did making everything functional and smooth.

After creating a functional backend last sprint, I now had to connect it to the frontend and make it actually “do something”. I started by copying the front-end files my teammates had worked on so far into our main branch and editing their scanner and database files. A simple JS script to fetch the information from the FDC database and add it to our own backend was all it took to give life to our project. It took me a while to figure out the correct path for each API call, but once that was set up, everything was a breeze. My teammates had also created a “Database” page in which we store our scanned products. I populated that page with items stored in our backend from the scanner page, and adding two buttons connected to simple API calls made it more interactive. Now we can easily store, edit, and delete products, which is far more than what was asked of us initially.
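
As a rough sketch of what that script does (the FDC endpoint shape, field names, and our backend route here are illustrative rather than our exact code):

```javascript
// Look a UPC up in FoodData Central, then store the result in our backend.
async function addScannedProduct(upc) {
  const key = process.env.FDC_API_KEY; // assumes an FDC API key is set
  const res = await fetch(
    `https://api.nal.usda.gov/fdc/v1/foods/search?query=${upc}&api_key=${key}`
  );
  const data = await res.json();
  const food = data.foods && data.foods[0];
  if (!food) throw new Error(`No FDC match for UPC ${upc}`);

  // Forward only the fields we display, with a default quantity of 1.
  await fetch('http://localhost:3000/products', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      upc,
      brand: food.brandName,
      brandOwner: food.brandOwner,
      category: food.foodCategory,
      description: food.description,
      quantity: 1,
    }),
  });
}
```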

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/inventorysystem-culling/inventorybackend/-/commit/b57e1ea44a8113bac4075769da0b25336c975c8f – my latest commit with all the functionalities.

I guess if I had prior knowledge of the above-mentioned pattern, I would have had an easier time working on the project, as I would have “honed” my knowledge of backend-to-frontend connections beforehand, which would make my work even more efficient.

My teammates continue to be a great force in this project, and we are able to communicate clearly and solve issues in a short, manageable time without putting a lot of strain on ourselves. In fact, our communication might even backfire in a sense, because we solve issues so efficiently that we don’t even put them up on the issue board; a simple “hey, we have to fix this” is all it takes to “clear” an issue. We might want to work on that a little in the upcoming sprint.

Going back to my analogy, I guess that by now we have reached “level 40” with the cap being at 60. I hope that as we reach the “end-game,” we continue to have the great teamwork that we have had so far and that has helped us so much. We all have our roles and work cut out by now, so tackling this last sprint should be easy, with the end result being a successful project that we will leave behind but take with us valuable, tangible skills to help us take on future projects and challenges.

From the blog CS@Worcester – Anairdo's WSU Computer Science Blog by anairdoduri and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective Blog

During Sprint 2, my main task was to update our REST API by replacing the use of WSUIDs with UUIDs. Our sprint planning phase went well: we broke down the tasks into what everyone would be doing for the sprint, and our communication was effective, especially during our team meetings in class. To implement the switch from WSUID to UUID, I updated the OpenAPI specification to reflect the new identifier structure. One of the key wins this sprint was the successful refactor of the API to use UUIDs instead of WSUIDs for identifying guests. This transition improved the design by making guest identifiers more secure and consistent. I gained a deeper understanding of OpenAPI specifications and how to maintain consistency across schema definitions and endpoint parameters. Using UUIDs helped eliminate potential conflicts or collisions that could have occurred with custom WSUIDs. Collaboration also worked well with my team. Once I communicated the change to my teammates, everyone was supportive in reviewing related files and helping test the changes locally, which made integration and deployment less stressful.

At first, I underestimated how widespread the WSUID field was across the codebase. I initially thought I could change a few lines in the schema, but it quickly became clear that the change had to be made in multiple endpoints, error responses, and even test data. I had to backtrack several times to hunt down instances of WSUID that I missed earlier, which slowed down progress and created some confusion. Additionally, I didn’t write enough tests initially. As a result, one of the updates to the /guests/{uuid} path was temporarily broken until I realized the mock data still used WSUID formatting. In the future, I need to write and run tests as I go rather than treating them as a final step. As a team, I think we worked and collaborated well overall. Personally, I need to get better at assessing the scope of tasks accurately; I underestimated the time and complexity involved in replacing WSUIDs with UUIDs. I also want to become more consistent about writing tests early in the process.
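
For example, a minimal sketch of the kind of check I could have written as I went (the route and response shape are illustrative, not our exact API):

```javascript
// uuid-format.test.js: assert identifiers are UUIDs, not WSUIDs.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

test('guest identifiers use the UUID format', async () => {
  const id = '123e4567-e89b-12d3-a456-426614174000'; // sample test UUID
  const res = await fetch(`http://localhost:3000/guests/${id}`);
  const guest = await res.json();
  expect(guest.uuid).toMatch(UUID_RE);
  expect(guest.wsuid).toBeUndefined(); // the old field should be gone
});
```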

“Be the Worst” from Apprenticeship Patterns (Chapter 2) encourages developers to seek out teams where they are the least experienced. The idea is that being surrounded by more skilled developers pushes you to grow faster through mentorship, observation, and collaboration. During this sprint, I didn’t have much experience working with API design and backend architecture. At first, I felt a bit behind, especially when I didn’t account for all the areas affected by the WSUID-to-UUID switch. This pattern reminded me that growth often comes from being challenged by your environment and teammates. Had I internalized this pattern from the start, I would have asked more questions earlier instead of assuming I had to figure everything out solo, and I would have scheduled a quick check-in to confirm I was on the right path before updating the spec. By embracing being “the worst,” I could have saved time and avoided some of the errors that slowed me down mid-sprint. This sprint taught me that meaningful growth comes from deep work, reflection, and surrounding myself with teammates who challenge and support me. I look forward to carrying these lessons forward and completing Sprint 3 successfully.

References.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfobackend/-/issues/142

From the blog CS@Worcester – Lynn'sBlogs by lynnnsubuga and used with permission of the author. All other rights reserved by the author.

Path Testing

Hello! Welcome back to my next blog post. This post is about path testing. I used this article to do some research on it: Basis Path Testing in Software Testing | GeeksforGeeks

In class, we learnt about this in depth in one of our POGIL activities. This type of testing works from the actual code, creating graphs to analyze and organize each step of the testing. Program graphs are drawn with circles (nodes) and arrows pointing to other circles to show the flow of the code. For example, loops have arrows that point back to previous circles until the loop completes, and branches can be shown by splitting one circle into two, with one arrow pointing to each.

In the article, they called it a “Control Flow Graph.” The article calls a node with two or more arrows exiting it a decision node, and a node with two or more incoming arrows a junction node. The article went even more in depth than we did in class, because it also talks about regions, which are the areas of the graph enclosed by nodes and edges.
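
To tie the terms together, here is a small hypothetical function annotated with the nodes its program graph would contain. Counting its two decision nodes, basis path testing gives a cyclomatic complexity of 2 + 1 = 3, so three basis paths cover the graph (skip the loop entirely; one pass through the if; one pass around it):

```javascript
function countPositives(items) {
  let count = 0;                              // node 1
  for (let i = 0; i < items.length; i++) {    // node 2: decision node (loop test)
    if (items[i] > 0) {                       // node 3: decision node (branch)
      count++;                                // node 4
    }
    // node 5: junction node where the branches rejoin,
    // with an arrow looping back to node 2
  }
  return count;                               // node 6
}
```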

This article was very interesting, and it was easy to understand because there were several pictures showing each type of code and how it would look in one of the graphs. They showed examples for do-while loops, if statements, and many more aspects of code. This is why I picked this article: it expanded on what we already learnt in class.

Some more information we learnt about in class, beyond just the graphs, was DD-Paths, where the graphs I just mentioned are condensed into smaller, easier-to-understand graphs organized by node type. Similar nodes are condensed into one node, but the first and last nodes are kept separate.

Overall, I think this was an interesting topic to learn about in class and to explore further through the article. It is definitely an important part of testing, since it helps organize everything. Sometimes code is not easy to read or well organized, and this method helps with both of those problems.

From the blog cs@worcester – Akshay's Blog by Akshay Ganesh and used with permission of the author. All other rights reserved by the author.

Test-Driven Development

Hello everyone,

This week’s blog topic is Test-Driven Development (TDD), something we recently talked about in class, and it didn’t take long for me to understand its importance. Test-Driven Development is a software development process in which you write tests for your code before you write the code. At first this was very confusing, but after actually trying it I saw its purpose. This approach has transformed how coding projects are developed by revolving them around testing. The more traditional approach, centered on the waterfall model, is linear: testing occurs near the end of one long timeline. TDD instead makes testing an ongoing, iterative process. It follows a simple cycle: first you write a test for the desired feature, ensuring the test fails because the feature has not been written yet, and then you write just enough code to pass the test. This cycle repeats with further improvements and new features until the project is complete.

TDD’s core principle is rooted in Agile development, which emphasizes iterative development and collaboration driven by customer feedback and the ability to change. The benefits of TDD are that it enhances collaboration through a shared understanding of requirements, it is one of the best ways to detect bugs early in the development cycle, and it improves code design immensely. Because the code is driven by tests, it evolves organically, and the codebase ends up with a consistent style almost without trying. These principles also lower long-term maintenance costs, which can be hugely beneficial for big projects with a long lifespan.

The blog does not only explain what TDD is; the author also shares advice and best practices for developers trying to implement it. You start simple by writing focused tests on the fundamental features of the project, then create more complex tests for specific situations. What I liked about this post, and the reason I chose it, is its good balance of theoretical concepts and practical use. I appreciated how the author walked us through the concepts and then immediately followed up with simple examples; this reinforces what you are learning, and through practice you test the concepts you just learnt.
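
As a minimal sketch of one turn of that cycle, using Jest and a hypothetical isPalindrome function:

```javascript
// Step 1 (red): write the test first. It fails, because isPalindrome
// does not exist yet.
// --- isPalindrome.test.js ---
const { isPalindrome } = require('./isPalindrome');

test('reads the same forwards and backwards', () => {
  expect(isPalindrome('racecar')).toBe(true);
  expect(isPalindrome('sprint')).toBe(false);
});

// Step 2 (green): write just enough code to make the test pass.
// --- isPalindrome.js ---
function isPalindrome(s) {
  return s === [...s].reverse().join('');
}
module.exports = { isPalindrome };

// Step 3 (refactor): clean up, rerun the tests, then repeat the cycle
// for the next feature.
```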

In conclusion, Test-Driven Development represents a new way of thinking in software development: it prioritizes testing and the quality it brings, and builds the whole process around it. By integrating testing into every step of development, TDD creates strong, maintainable software while supporting modern development practices. Development may be slower at first, but the long-term benefits and higher code quality make up for it.

Source: https://circleci.com/blog/test-driven-development-tdd/

From the blog Elio's Blog by Elio Ngjelo and used with permission of the author. All other rights reserved by the author.

CS443: Cultivating a Culture of DevOps

A snap of the Microsoft Azure learning portal.

Today’s article, “Recommendations for fostering DevOps culture”, comes from the Azure Well-Architected Framework. You can find it here.

Following up on my previous two articles on the failure of the Spotify model and an exercise in attempting to salvage some part of said model, I decided to read an article from Microsoft on how best to implement DevOps. Their suggestion? Start with the team culture.

What is it about?

This article lays out a whole bunch of best practices for creating a successful, productive, and well-managed team structure, in accordance with Azure’s Well-Architected Framework.

What did I learn?

That more than anything else, culture is what separates a computer hobbyist from a professional engineer. It’s not enough to simply be individually passionate; you have to be part of a team structure that allows your passion to directly contribute to the success of the team, and that lets you be similarly inspired by your own teammates’ contributions.

What was most surprising to me?

That these are “common sense” principles. I think a lot of people, especially in the corporate world, have a particularly hierarchical idea of team structure. I’m sure we’ve all had a boss who was far more interested in holding pulse checks, team meetings, and debriefs than in actually letting their employees do their work.

Even so, the fact that many of the WAF’s principles not only make sense but make good sense is a credit to how well thought out they are. Rather than forcing the issue of collaboration through meetings, for example, it says to allow collaboration to occur in the small spaces between workers, so that the process of ideating comes naturally at its own pace.

What are my take-aways?

  • Encourage interrelatedness as opposed to interdependence.
  • Use effective team charting software, e.g., Jira. (See last week’s article.)
  • Embrace continuous improvement, continuous integration, and continuous development wholeheartedly.

Kevin N.

From the blog CS-443 – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.

Learning the Ropes: A Student’s Dive into QA Testing Best Practices

I recently came across the article “Best Practices for Software Quality Assurance Testing” by KMS Technology. As a computer science student eager to understand the practical aspects of software development, this piece offered a clear roadmap of the QA process, breaking it down into four main stages: planning, test design, execution, and reporting.

1. Planning: The article emphasizes the importance of early-stage planning, including resource allocation, timeline estimation, and tool selection. This stage sets the foundation for the entire QA process.

2. Test Design: Here, the focus is on defining both functional and non-functional requirements. The article suggests determining which tests can be automated and which require manual intervention, ensuring comprehensive coverage of user scenarios.

3. Test Execution: This phase involves running the designed tests and evaluating any defects or issues that arise. It’s a repetitive process, ensuring that each identified problem is addressed and retested.

4. Reporting and Maintenance: The final stage is about documenting findings, analyzing results, and ensuring that the software is ready for release. Continuous feedback loops between testers and developers are crucial here.

Reflecting on these stages, I realize how often, in academic projects, we might overlook structured testing due to time constraints or lack of emphasis. However, this article highlights that integrating testing throughout the development cycle, rather than treating it as a final step, leads to more reliable and efficient software.

One key takeaway for me is the significance of clear communication and documentation. In group projects, miscommunication can lead to redundant work or overlooked bugs. By adopting a structured QA approach, we can mitigate these issues.

In conclusion, this article has provided me with a practical framework for approaching software testing. As I continue my studies and work on more complex projects, I plan to implement these best practices to enhance the quality and reliability of my work.

For those interested in a deeper dive, here’s the full article: Best Practices for Software Quality Assurance Testing.

From the blog Zacharys Computer Science Blog by Zachary Kimball and used with permission of the author. All other rights reserved by the author.