Category Archives: CS@Worcester

Sprint 2 Retrospective

Continuing from my last blog post for my CS-448 class: Sprint 2 has just finished. This sprint I was assigned two issues: creating the code that mimics what the Guest Info system would handle by putting guest info into RabbitMQ (https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/reportingsystem/reportingbackend/-/issues/94), and taking the data from RabbitMQ and putting it into the database (https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/reportingsystem/reportingbackend/-/issues/96).

As for what worked well this sprint: we worked well as a team, and largely as we had planned to. On the backend, setting up the queue to put data into RabbitMQ was easy enough after reading the documentation on how the queues work. That addressed what I wanted to improve last sprint, so I no longer felt unaware of how the systems worked. After that, we started to set up the code that would take the data from RabbitMQ and put it into the database. The RabbitMQ side of this went as smoothly as the queue did, but we ran into issues when inserting the data into the MongoDB database. The code performing the insert ran inside the dev container, so it was unable to connect to the database; this is where I was stuck for most of the sprint. Another problem was that it wasn’t immediately obvious that the database connection was the issue, which led me to look for errors in places where there likely were none.

For the next sprint, I see two ways we could improve as a team. One is to not assume that just because code exists for something, it works in the context we need it to; the other is to be more open to solutions that may require adding entirely new functionality we didn’t specify in planning but that is needed to support another integral system. Individually, I want to improve by thinking harder about where my problems are and how they might affect what comes after. I’ve found that when I ask questions about a particular holdup, solving that part leaves me without a direction, because I didn’t think ahead past the roadblock. This really slows down my productivity and makes for an inefficient workflow.

This sprint, I think the most applicable apprenticeship pattern is the fifth, Perpetual Learning. This sprint I was learning new things about building a system like the reporting system for Thea’s Pantry only when there was an issue I was trying to fix, instead of regularly taking time to build a base of knowledge that would already tell me what is going on when issues come up. I also want to ask for help from people who may know more about a certain system or function than I do, including my group and the other groups.

From the blog CS@Worcester – Computer Science Blog by dzona1 and used with permission of the author. All other rights reserved by the author.

Understanding Mocking in Software Testing

Software testing is crucial to ensuring that a system behaves as expected. However, when dealing with complex systems, it can be challenging to test components in isolation, since they rely on external systems like databases, APIs, or services. This is where mocking comes in: a technique that employs test doubles. Mocking allows developers to simulate the behavior of real objects within a test environment, isolating the components they want to test. This blog post explains what mocking is, how it is applied in unit testing, the categories of test doubles, best practices, and more.

What is Mocking?
Mocking is the act of creating test doubles: objects that stand in for real objects and behave in pre-defined ways. These doubles replace actual objects or services and allow developers to isolate the chunk of code under test. Mocking also lets them simulate edge cases, errors, or specific situations that could be hard to replicate in the real world. For instance, instead of communicating with a real database, a developer can use a mock object that mimics database responses. This offers greater control over the testing environment, increases testing speed, and helps find issues early.

Understanding Test Doubles
To fully comprehend mocking, it is important to understand test doubles. Test doubles are stand-in objects that replace actual components of the system for the purpose of testing. A test double shares the same interface as the actual object but behaves in a controlled fashion. There are several types of test doubles (a short stub-versus-mock sketch follows this list):

Mocks: Mocks are pre-configured objects that carry expectations about how they will be called. They are used to verify that particular interactions, such as method calls with specified arguments, occur while the test runs. If the interactions do not match the expectations, the test fails.

Stubs: Stubs do not care about interactions. They simply provide pre-defined responses to method calls so that the test can proceed without depending on the real component’s behavior.

Fakes: Fakes are more evolved test doubles with simplified working implementations of real components. For example, an in-memory database simulating a live database can be used as a fake to speed up testing without relying on external systems.

Spies: Spies are similar to mocks but are used to record interactions with an object. You can inspect the spy after a test to ensure that the expected methods were invoked with the correct parameters. Unlike mocks, spies will not automatically fail the test when interactions are unexpected.
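
To make the stub-versus-mock distinction concrete, here is a minimal sketch in Java using Mockito and JUnit 5. The UserStore interface and its methods are hypothetical, invented purely for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Hypothetical dependency that would normally talk to a real database.
interface UserStore {
    String findName(int id);
    void save(int id, String name);
}

class TestDoubleExampleTest {

    @Test
    void stubStyleUsage() {
        // Used as a stub: we only define a canned response.
        UserStore store = mock(UserStore.class);
        when(store.findName(42)).thenReturn("Alice");

        assertEquals("Alice", store.findName(42));
    }

    @Test
    void mockStyleUsage() {
        // Used as a mock: after exercising the code, we verify the interaction.
        UserStore store = mock(UserStore.class);

        store.save(42, "Alice");

        // Fails the test if save() was not called with these exact arguments.
        verify(store).save(42, "Alice");
    }
}
```

The same mock() object serves both roles; what differs is whether the test relies on canned responses (stub style) or on verifying the interaction afterward (mock style).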

The Role of Mocking in Unit Testing
Unit testing is testing individual pieces, such as functions or methods, in isolation. But most pieces rely on external services, such as databases or APIs. These dependencies add complexity, unpredictability, and outside factors that can get in the way of testing.

Mocking enables developers to test the unit under test in isolation by substituting external dependencies with controlled, fake objects. This ensures that any problems encountered during the test are a result of the code being tested, not the external systems it depends on.

Mocking also makes it easy to test edge cases and error conditions. For example, you can use a mock object to throw an exception or return a given value, so you can see how your code handles these situations. In addition, mocking makes tests faster because it avoids the overhead of invoking real systems like databases or APIs.
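
As a small, hedged sketch of that idea (again with a hypothetical dependency, here called NameLookup), stubbing an exception in Mockito lets a test exercise an error path that would be hard to trigger against a real system:

```java
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Hypothetical dependency, invented for illustration.
interface NameLookup {
    String findName(int id);
}

class ErrorPathExampleTest {

    @Test
    void lookupFailureSurfacesAsException() {
        NameLookup lookup = mock(NameLookup.class);

        // Simulate a failure that would be hard to reproduce against a live system.
        when(lookup.findName(anyInt())).thenThrow(new IllegalStateException("connection lost"));

        // Observe exactly how calling code would react to the failure.
        assertThrows(IllegalStateException.class, () -> lookup.findName(7));
    }
}
```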

Mocking Frameworks: Mockito and Beyond
Developers use various mocking libraries to create and control mocks for unit testing. One of the most commonly used libraries in the Java community is Mockito. Mockito makes it easy to create mock objects, specify their behavior, and verify interactions in an easy-to-read manner.

Highlights of Mockito include the following (a brief sketch follows the list):

Behavior Verification: One can assert that certain methods were called with the right arguments.
Stubbing: Mockito allows you to define return values for mock methods so that various scenarios can be tested.
Argument Matchers: It provides flexible argument matchers for verifying method calls with a range of values.
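
Here is a short sketch combining those three highlights; the EmailService interface is hypothetical, invented for illustration:

```java
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Hypothetical collaborator, invented for illustration.
interface EmailService {
    boolean send(String address, String body);
}

class MockitoHighlightsTest {

    @Test
    void stubbingMatchersAndVerification() {
        EmailService emails = mock(EmailService.class);

        // Stubbing with argument matchers: any address and body succeed.
        when(emails.send(anyString(), anyString())).thenReturn(true);

        emails.send("user@example.com", "Welcome!");

        // Behavior verification: exact value for one argument, matcher for the other.
        verify(emails).send(eq("user@example.com"), anyString());

        // Confirm nothing else happened to the mock.
        verifyNoMoreInteractions(emails);
    }
}
```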
Beyond Mockito, other Java libraries such as JMock and EasyMock can also be used alongside JUnit 5. For Python developers, the built-in unittest.mock module fills this role. In the .NET ecosystem, libraries like Moq and NSubstitute are commonly used. For JavaScript, Sinon.js is the go-to library for mocking, stubbing, and spying.

Best Practices in Mocking
As useful as mocking is, there is a best-practice way of applying it that keeps tests meaningful and sustainable. Here are a few rules of thumb to bear in mind:

Mock Only What You Own: Mock only entities you own, such as classes or methods that you have created. Mocking third-party APIs or external dependencies directly can lead to brittle tests that break when those dependencies change.

Keep Mocks Simple: Don’t overcomplicate mocks with too many configurations or behaviors. Simple mocks are more maintainable and understandable.

Avoid Over-Mocking: Over-mocking your tests can make them too implementation-focused. Mock only what’s required for the test, and use real objects when possible.

Assert Behavior, Not Implementation: Tests should assert that the system’s behavior is right, not how the system implements that behavior. Focus on verifying that the right methods are called with the right arguments, rather than making assertions about how the system works internally.

Use Mocks to Isolate Tests: Use mocks to isolate tests from slow or flaky external dependencies like databases or networks. This results in faster and more deterministic tests.

Clear Setup and Teardown: Ensure that mocks are created before each test and cleaned up afterward. This keeps tests repeatable and free of side effects between runs.
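
One way to follow that last practice with JUnit 5 and Mockito might look like the sketch below; the PaymentGateway interface is hypothetical:

```java
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

// Hypothetical collaborator, invented for illustration.
interface PaymentGateway {
    boolean charge(int cents);
}

class SetupTeardownExampleTest {

    private PaymentGateway gateway;

    @BeforeEach
    void setUp() {
        // A fresh mock before every test keeps tests independent.
        gateway = mock(PaymentGateway.class);
        when(gateway.charge(anyInt())).thenReturn(true);
    }

    @AfterEach
    void tearDown() {
        // Optional: detect Mockito misuse (e.g., unfinished stubbing) early.
        validateMockitoUsage();
    }

    @Test
    void chargeSucceeds() {
        gateway.charge(500);
        verify(gateway).charge(500);
    }
}
```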

Conclusion
Mocking is an immensely valuable software testing strategy that gives developers a way to isolate and test individual components apart from outside dependencies. Through the use of test doubles like mocks, stubs, fakes, and spies, programmers can simulate real conditions, test edge cases, and make their tests more reliable and faster. Good practices should be followed, like mocking only what you own, keeping mocks as simple as possible, and asserting behavior rather than implementation. Applied the right way, mocking is a great ally in creating robust, stable, high-quality software.

Personal Reflection

I find mocking to be an interesting approach that enables focused and effective testing. In this class, Quality Assurance & Testing, I’ve gained insight into how crucial it is to isolate the units being tested in order to check their functionality under realistic conditions. In particular, I’ve come to understand how beneficial mocking can be in unit testing for isolating specific interactions and edge cases.

I also believe that, as developers, we tend to over-test or rely too heavily on mocks, especially when working with complex systems. Reflecting on my own experience, I will keep in mind that getting the balance right (mocking only when strictly required, and testing behavior rather than implementation) is the key to writing meaningful and sustainable tests. This approach helps us ensure that the code is useful and also adaptable when it encounters future changes, which is, after all, what any well-designed testing process aims for.

Reference:

GeeksforGeeks, “What is Mocking? An Introduction to Test Doubles.” Available at: https://www.geeksforgeeks.org/mocking-an-introduction-to-test-doubles/.

From the blog Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

In this post, I’ll be reflecting on our second sprint towards developing and implementing an Identity and Access Management system for Thea’s Pantry. Coming out of Sprint 1, we had a better idea of Keycloak in general, and we had some basic frameworks for a fake frontend and fake backend. Our sprint goal for Sprint 2 was to fully integrate these components, so that we could provide a proof of concept for the entire workflow, as opposed to just one component. We wanted to be able to force authentication on a frontend page via a Keycloak login page, and then we wanted to be able to store the resultant access token from that interaction so that we can perform authenticated actions without ever talking to Keycloak again.

Some of my personal work towards that goal was as follows:

GitLab

  • Documenting our low-level issues in GitLab and assigning them accordingly. I put additional focus this sprint into properly linking related issues and blockers and tracking key information in comments, as opposed to just using issues as a task list. Epic

Backend

  • Refactor the backend endpoint to verify the signature of a JWT to ensure authenticity. Note – this was a great learning experience in better understanding how async and await work in JS. This issue took me way too long to resolve. Squash Commit

  • Further modify the endpoint to pull specific custom data out of the JWT generated by Keycloak. Commit

Frontend

  • Configure Docker compose files and Git submodules to containerize all three repositories into the fake frontend to test the whole flow. Commit

  • Completely facelift/refactor/rework/reimplement the fake frontend to use Vue as a build framework to test our implementation in the same context as it will be used in production. Configure dependency and instantiation of Keycloak in the JS to handle redirect and access token storage and usage. Commits: 1, 2

Something that worked particularly well this sprint was our focus on increased communication. We refactored our working agreement to address some of our shortcomings in communication and accountability, and I felt like this sprint was better for us across the board. We had a bit more direction this sprint, and we accomplished our goal exactly as we laid it out, barring two lines of code that we still have to add and that are currently blocked.

That said, at the risk of contradicting myself, I feel like something that did not work as well, and that we can continue to improve, is also our communication. Though it was better this sprint, it still definitely felt at times like we were not a team, but rather like we each had our own tasks that we would connect on once or twice a week in class meetings. Maybe this is fine, and to be honest it worked okay for the most part. But in an ideal world I would have us all be very proactive and communicative about our issues, though I don’t know if that is a fair goal for our team, or if I should reevaluate my expectations.

Something I could improve is my focus on defining roles and responsibilities for the general team dynamic, not just for issues. I felt like I focused on accountability for issues on GitLab, for example, but I also feel like I informally assumed the role of Scrum Master / Sprint Lead for this sprint, though we never really defined or said that. It seemed to work fine for us, but it is something I think I could have specified better, instead of just sort of assuming a leadership role.

The pattern I have chosen for this sprint is The Deep End. This is because one of the issues I spent the most time on during this sprint was implementing JWT signature verification. It should not have been a difficult issue, but I had never really worked with functions in JS specifically, and for a while I was caught in a loop of bad syntax and misuse of things like const, async, and await. I had no idea what I was doing, and was lost as to why my code was not working. It took a lot of reading, and being lost for a while, before I finally realized my error was not in the libraries I was using, but in my own lack of understanding of JS.

From the blog Mr. Lancer 987's Blog by Mr. Lancer 987 and used with permission of the author. All other rights reserved by the author.

Testing Documentation

For this blog post I read an article about test documentation. We haven’t talked about this in class yet, so I was curious to learn how documentation for testing differs from normal documentation. The blog post starts by stressing the importance of testing and how documentation helps keep consistency, structure, and record keeping. The author then lays out five key elements of good testing documentation to keep in mind when working on a project.

The first point is that documentation should define the boundaries and scope of the project by detailing what the tests are testing for and how far the functionality of the application goes. This also helps with efficiency, because it keeps people focused on the important objectives rather than getting lost working on things that are not needed. The next point is that documentation should reflect your testing strategy and approach. This includes stating what level of test you are running: unit, integration, or user testing (which is what my last blog post talked about). It should also define the project specifications, the reasoning for the test, and why it is necessary to ensure functionality.

A third element of good testing documentation is detailing the software, hardware, equipment, and configurations for the testing, to reduce the number of variables that can account for unexpected and untested program behavior. Another key point is to include a test schedule and milestones as part of your outline and documentation, to assist the workflow and keep large teams on track. The final element is to describe the approach to be taken for defect management and error reporting. This facilitates improvement by staying consistent with company standards and working toward a complete set of tests. The author summarizes by suggesting that all comments and documentation should be consistent, clear, and regularly updated.

I wanted to look into a blog post about documentation because I know that it is important, and I personally rely on in-depth documentation when looking at a new project for the first time. In this class and others, proper annotation in code isn’t taught because of everything else that needs to be covered, so I thought it would be a good topic to research on my own. With certain testing tools it can seem that documentation is more than what is needed, given the detailed automated reports that come with testing, but when tracing code or looking through tests that have failed for any number of reasons, it can be invaluable to have comments that describe a method’s intended function. Going forward, with the next project I do that involves testing, I will make an effort to write proper documentation that follows the five elements described in the blog.

Test Management 101: Creating Comprehensive Test Documentation

From the blog CS@Worcester – Computer Science Blog by dzona1 and used with permission of the author. All other rights reserved by the author.

What is Integration Testing?

Integration Testing is a key phase in the software testing lifecycle where individual software modules are combined and tested as a group. While unit testing verifies that each module works independently, integration testing checks how these modules interact with each other. In simple terms, Integration Testing is where you check if different modules in your application play nice together. Each module might work fine on its own (thanks to unit testing), but once you connect them, all sorts of bugs can pop up—data not flowing correctly, broken links, miscommunication between components, etc. That’s why this type of testing focuses on the interaction between modules. You’ll hear it called things like “I & T,” “String Testing,” or “Thread Testing.” All fancy names for the same thing—making sure things work together.

Why is Integration Testing Important?

Even if all modules pass unit testing, defects can arise when modules are combined. This can happen due to:

  • Differences in how developers code or interpret requirements
  • Changes in client requirements that weren’t unit tested
  • Faulty interfaces with databases or external systems
  • Inadequate exception handling

Integration testing helps identify these issues early, ensuring seamless data flow and communication between components.

Types of Integration Testing

There are several strategies to conduct integration testing:

1. Big Bang Testing
All modules are integrated and tested simultaneously.

  • Pros: Simple for small systems.
  • Cons: Difficult to isolate defects, delays testing until all modules are ready.

2. Incremental Testing
Modules are integrated and tested step-by-step.

  • Bottom-Up: Test lower-level modules first, then move up.
  • Top-Down: Start with higher-level modules and use stubs to simulate lower ones.
  • Sandwich: A mix of both, testing top and bottom modules simultaneously.

Stubs and Drivers

These are dummy components used in integration testing (a short sketch follows these definitions):

Driver: Simulates a higher-level module that calls the module under test.

Stub: Simulates a lower-level module that is called by the module under test.
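
As a hedged sketch of how these fit together, suppose we are integration-testing a hypothetical OrderProcessor module whose lower-level TaxCalculator module is not finished yet. All names here are invented for illustration:

```java
// Module under test: depends on a lower-level module that may not exist yet.
class OrderProcessor {
    private final TaxCalculator taxes;

    OrderProcessor(TaxCalculator taxes) {
        this.taxes = taxes;
    }

    double totalWithTax(double subtotal) {
        return subtotal + taxes.taxFor(subtotal);
    }
}

// Interface of the lower-level module.
interface TaxCalculator {
    double taxFor(double subtotal);
}

// STUB: stands in for the unfinished lower-level module (top-down testing).
class FlatTaxStub implements TaxCalculator {
    public double taxFor(double subtotal) {
        return subtotal * 0.10; // canned, predictable behavior
    }
}

// DRIVER: stands in for the higher-level caller and exercises the module under test.
public class OrderProcessorDriver {
    public static void main(String[] args) {
        OrderProcessor processor = new OrderProcessor(new FlatTaxStub());
        double total = processor.totalWithTax(100.0);
        System.out.println("Total with tax: " + total); // expected: 110.0
        if (Math.abs(total - 110.0) > 1e-9) {
            throw new AssertionError("Integration check failed");
        }
    }
}
```

In top-down testing the stub fills in for the missing lower-level module; in bottom-up testing the driver plays the role of the missing higher-level caller.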

Final Thoughts

Integration Testing bridges the gap between unit testing and system testing. It ensures that individually functional modules work together as a complete system. Whether you’re using Big Bang or an incremental approach, thorough planning and detailed test cases are key to successful integration testing.

Reference:

https://www.guru99.com/integration-testing.html

From the blog The Bits & Bytes Universe by skarkonan and used with permission of the author. All other rights reserved by the author.

CS443: The Tribal Engineering Model (“the Spotify Model”)—An Autopsy

This codebase ain’t big enough for the two of us, pardner.

Something I think about quite frequently is how engineers seem to live in a world of their own creation.

No, really: open an app—look at a website—play a game of (virtual) cards. The designers of these programs each had their own take on how these experiences ought to be represented; the visual language, the information density, the flow from one experiential strand to another. These aren’t happenstance—they’re conscious decisions about how to approach a design.

But engineering is a bit like collectively building a diorama. Things look well enough in miniature, but the problems—with the product, and with the people working on it—become more clear the more you scale upwards.

This problem isn’t new, or all that far-fetched: engineers are specialists, and they’re good at what they do. Put two engineers together, and you’ve got yourself a crowd. Put three, and you have a quorum. Put a whole team of engineers on the same problem, and now you have a new problem.

(If you’ve ever wondered what happens when an engineer fears being obsolesced out of their job, it’s not hard to imagine. It rhymes with “reductoring.” Alternatively, imagine a small, innocent codebase exploding into a million pieces, and you get the idea.)

So, how do you get engineers to play nice?

Plenty of solutions have been suggested; most of them involve Agile in some form or another.

(I personally don’t believe in it. I like—and have used—many of its byproducts, such as Continuous Delivery and Continuous Integration, but I don’t find that the underlying methodology behind the model itself is as universally applicable as it would have you believe.)

About a decade ago, there was a lot of buzz over Spotify having finally, definitively, solved the problem of widescale Agile implementation. It was highly ambitious—virtually utopian in its claimed economies-of-scale, with reports of unprecedented efficiency/productivity gains. Almost too good to be true.

Just one problem: it didn’t actually work. Also, they never actually used it, according to the testimony of one of the senior product managers involved in implementing it. Despite a grand showing in 2012, Scaling Agile @ Spotify was pretty much DOA before it had even hit the ground.

In hindsight, it should have been fairly obvious that things over at Spotify weren’t as brilliantine-sheened as the glowing whitepaper, which supposedly contained the key to the company’s success, may have suggested. There were years of complaints about incomprehensible UX decisions seemingly passed unilaterally with little user (or developer) input; a platform that abruptly pivoted towards delivering podcast and, strangely enough, audiobook content with almost the same priority as music streaming; and a lukewarm reception to the arrival of social networking capabilities.

So, what went wrong?

Firstly: every single organizational unit under this model, with the exception of Guilds, was ordered as an essentially fixed, permanent structure. If a Squad was assigned to work on UX, that was its sole responsibility, unless a member had other duties through a Guild (more on that later). Accordingly, each Squad had equal representation within a Tribe, regardless of how important or pressing its work was. Further, each Squad had its own manager, who existed in direct opposition to three other managers in the same Tribe, all of whom had diametrically opposing interests because they represented different product segments. If an overriding voice was required, Spotify sought to preempt such issues by including a fifth Übermanager in a Tribe, who did not actually have a day-to-day purpose other than mediating disputes between managers, presumably because such disputes were, or were expected to be, extremely common. (It is unknown whether this fifth manager was included in a similarly structured college of Middle Managers.)

Worse yet, it becomes painfully evident how little work could get done under such a model. Because Tribes were, by design, interdependent on each other through the cross-pollination of key devs via Guilds, a work blockage in one Tribe not only required the intervention of two or more Tribes, but required the key drivers of each entire Tribe to intervene, preventing any of the involved Tribes from doing meaningful work. This is on top of the presupposition that each Squad had to have mastered Agile in small groups for the idea of an Agile economy of scale to even make sense.

Probably most damning, though, is the impulse to simply double down on your own work when confronted by a deluge of meetings, progress reports, post-mortems, and pulse checks. Rather than focusing on the interconnectedness of the Squad-Tribe-Guild structure and how best to contribute as a small part of a greater whole, many Spotify engineers simply did what they were instinctively led to do and delivered product as if working in small groups.

This essentially created a “push” rather than “pull” system, where engineers would deliver the product they believed higher-ups expected them to deliver, rather than follow the actual direction Spotify executives wanted to steer them in. When upper management noticed, this prompted “course corrections”: sweeping, unilateral, and uniquely managerial.

And that was pretty much the end of Agile for Spotify.

Things look well enough in miniature, but the problems become more clear the more you scale upwards.

So, why even talk about this?

I’ve had plenty of good collaborative experiences with engineers in industry. I want to work with people—I have worked with people, and well. But I believe there’s a science to it, and as engineers, we simply do not focus enough on the human element of the equation. The Tribal model reminds me a lot of Mocks, and even the POGIL methods we’ve used in this course so far.

Mocks are a way to implement that at an individual, codebase level. POGIL teaches us to think for ourselves and expand our individual skills. I think what Spotify failed to recognize is that people do not instinctively know how to work together, especially tech-minded people like engineers. Interdependence isn’t something to be cultivated absentmindedly, as if we could ignore the effects of losing the one person who knows what piece of the pie to cut for a day, week, or more; rather, it’s something to be warded against.

The Guinness pie from O’Connor’s in Worcester, by the way, is life-changingly good.

Some food for thought.

Kevin N.

From the blog CS-443 – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.