Category Archives: Week-14

JUnit Blues

Hello!

 

With the semester wrapping up pretty quickly, and our last homework assignment being to design an in-class assignment, I’ve been doing some brushing up on JUnit testing. Specifically, assertions, which are the crux of how JUnit tests work: they tell a given test what qualifies as a pass or a fail. The assignment my groupmates and I designed revolved around the use of several different kinds of assertions, ones that we didn’t cover heavily in class.

As such, this week my blog of choice to read revolves around, what else, JUnit testing. Specifically, the article comes from Medium, which I’ve looked at before and which seems to be quite a useful resource for covering both broad and specific computer science topics. I wanted to take a look at this particular article mostly because I wanted to see some of the other topics involved with JUnit that we didn’t cover in class. I intend to run Linux on my main PC once the semester ends, and seeing how to install JUnit on a specific machine instead of importing it as a library is pretty interesting! I am very used to just pulling a library in at the top of a piece of code; I am not very well versed in actually installing libraries. Granted, this is the kind of thing that Docker and VS Code are made to circumvent, as you can set them to automatically install or include certain dependencies. I also enjoyed reading some of the specific recommendations for writing JUnit tests. Some of them we touched on in class already, but it is always nice to keep myself fresh on these kinds of things: keep tests simple and focused, watch out for possible edge cases, the list goes on. Something we didn’t touch on at all in class is the various debugging modes available with JUnit, like JDWP and Standard Streams, which can be useful in troubleshooting a program. Standard Streams, for instance, takes every print that would normally go to the main console and redirects it to the output stream, which can be useful for seeing exactly what is going on with a program. This angle is interesting to me, as I strongly associate testing with debugging, but we didn’t necessarily cover debugging very thoroughly in class, so perhaps that is something I can look up on my own time.
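
Since assertions keep coming up, here is a minimal sketch of what I mean, written against JUnit 5 (Jupiter); the Calculator class and its test names are just something I made up for illustration, not anything pulled from the article.

```java
// A minimal JUnit 5 sketch; Calculator is a made-up class used only to show
// how assertions decide whether a test passes or fails.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

class CalculatorTest {

    @Test
    void addReturnsTheSum() {
        // assertEquals is the pass/fail check: expected value first, actual value second.
        assertEquals(5, new Calculator().add(2, 3));
    }

    @Test
    void addHandlesNegativeNumbers() {
        // assertTrue passes only when the condition it is given is true.
        assertTrue(new Calculator().add(-2, -3) < 0);
    }
}
```

A test passes when every assertion in it holds and fails as soon as one does not, which is really all an assertion is: the pass/fail rule for that test.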

I’ve thoroughly enjoyed my time in this class. Some things were a little dry, like the boundary testing near the beginning of the semester, but a lot of the things we learned, like JUnit testing or unit testing in general, are things I can see myself using regularly in industry, and I don’t think I am wrong in thinking that.

Thank you for reading my blog!

Camille

 

Blog Post: https://medium.com/@abhaykhs/junit-a-complete-guide-83470e717dce

 

From the blog Camille's Cluttered Closet by Camille and used with permission of the author. All other rights reserved by the author.

Dice Game Code Review

This week we started to work on our own POGIL activity, similar to the Sir Tommy code review. The activity the team has chosen is a dice game with a specification sheet, and participants will complete the activity based on that sheet. The sheet is going to have specific questions asking which lines contain bugs or have formatting issues that are not best practice. This will help the students read code more in depth as well as work together in searching for bugs and formatting issues. The team is going to meet outside of school to determine the questions and how we will go about working on the project. We will also focus on the types of questions we are going to ask in order to make the reader think about what they are reading and think critically about how they would address the issues presented in the source code as well as the test code.
We are thinking about making a few models that explain concepts and ask questions that send the users to the code to examine it, but also get them thinking about much more than just the code they are looking at. We will implement encapsulation, inheritance, and polymorphism. This means that the users will have to read and understand underlying methods within other methods in order to progress through the models, but the code will still be simple enough that the users do not take up too much time and can focus on the questions rather than the code.

In class yesterday we talked about what other teams were doing, and it was very interesting that everyone took a different approach to the homework. I am still glad my group stuck with Sir Tommy; the original had some underlying issues, and we thought it was a good idea to add certain bits and pieces to make the activity more understandable. We designed the model questions with stubs, mocks, fakes, and dummies in mind, and we dedicated an entire model to these objectives so that the student would understand how each one works and how to create and remove each for a more optimal solution.
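
To keep the vocabulary straight for myself: a dummy is just an object passed along to satisfy a parameter and never actually used, a fake is a working but simplified implementation, a stub returns canned answers so the test is predictable, and a mock is a framework-generated object you program per test. Below is a rough sketch of the last two; the DiceRoller interface is a name I made up for this example, and the mock part assumes the Mockito library is available.

```java
// Rough sketch of a stub versus a mock; DiceRoller is a hypothetical interface,
// and the mock example assumes Mockito is on the classpath.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

interface DiceRoller {
    int roll();
}

// Stub: a hand-written object that returns a canned answer.
class FixedRoller implements DiceRoller {
    public int roll() {
        return 6;
    }
}

class DiceGameTest {

    @Test
    void stubExample() {
        DiceRoller roller = new FixedRoller();
        assertEquals(6, roller.roll());
    }

    @Test
    void mockExample() {
        // Mock: generated by Mockito and programmed inside the test itself.
        DiceRoller roller = mock(DiceRoller.class);
        when(roller.roll()).thenReturn(3);
        assertEquals(3, roller.roll());
    }
}
```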

Overall, this activity showed me how much I learned about different types of testing, how to read and understand certain aspects of other people’s code, and how important it is to pay close attention to imports, since you usually assume the imports are always correct. I also learned that I have been using dummies, fakes, stubs, and mocks without knowing it, since I did not have prior knowledge of these types of testing. I really enjoyed the class, how it was structured, and how we had to figure things out both on our own and as a team.

From the blog CS@Worcester – Cinnamon Codes by CinCodes and used with permission of the author. All other rights reserved by the author.

The Best Java Testing Framework: JUnit

Summary of the Source

The blog post introduces JUnit as a unit testing framework designed for the Java programming language, explaining its evolution, core features, and significance. It outlines the primary components of JUnit, such as test cases, test suites, annotations (like @Test, @Before, and @After), and assertions. The guide also discusses test-driven development (TDD) and how JUnit supports this methodology by encouraging developers to write tests before writing the actual implementation.

Additional features covered include mocking with Mockito and how to structure test cases for better readability and maintainability. Overall, the article serves as both an introduction and a deeper dive for those wanting to use JUnit effectively in real-world software projects.
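
To show how those pieces typically fit together, here is a small sketch of a test class layout; the ShoppingCartTest class is hypothetical, and it uses the JUnit 4 annotation names listed above (JUnit 5 renames @Before and @After to @BeforeEach and @AfterEach).

```java
// Sketch of a JUnit 4 test class; ShoppingCartTest is a made-up example showing
// where @Before, @Test, and @After fit in the test lifecycle.
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ShoppingCartTest {

    private List<String> cart;

    @Before // runs before every @Test method
    public void setUp() {
        cart = new ArrayList<>();
    }

    @Test
    public void cartStartsEmpty() {
        assertEquals(0, cart.size());
    }

    @Test
    public void addingAnItemGrowsTheCart() {
        cart.add("keyboard");
        assertEquals(1, cart.size());
    }

    @After // runs after every @Test method
    public void tearDown() {
        cart.clear();
    }
}
```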

Reason For Selecting This Blog

I chose this blog post because it is well written and, in only a roughly 10-minute read, covers everything there is to know about JUnit, at least as a foundation for starting out with it. It goes over what it is, why it’s used, and even its history, before diving into how to set up the environment to use it, then shows examples of test cases using JUnit. I think it’s an overall great resource for any developer who is interested in testing using Java, as it covers all the bases.

Personal Reflection

I was introduced to JUnit in university, and learning it there was really helpful for understanding how testing code works in general. I also liked JUnit especially because it seems very easy to understand and write, where only a couple of lines of code can make up a test case for your code. Assertions are especially useful since that’s the end result of the test, and with JUnit it’s very simple: one line with assertEquals tests the expected value against the actual one, and that’s all it takes to check for correct output. I also see how this framework would be preferred when doing test-driven development, as each test is separated into different cases denoted by the @Test annotation. This makes it easy to write specific tests for each feature as development continues. I haven’t tried any other testing frameworks, but now that I have used JUnit I think it won’t be as confusing to understand a different framework; since I like how JUnit works, though, I’ll compare the others to it like it’s the gold standard for testing frameworks.

Conclusion

Knowing about JUnit is imperative if you plan on testing code in Java. It has everything you could want from a testing framework: tests that are easy to write, support for multiple test cases, and built-in assertions. I know that if I ever use Java in the future to test code, I will use JUnit because of how powerful, reliable, and simple it is.

Citation:
HeadSpin. (n.d.). JUnit: A Complete Guide. https://www.headspin.io/blog/junit-a-complete-guide

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

Software Technical Review

Week 14 – 5/2/2025

This is my last week of class, and it is kind of bittersweet. The topic for this week was software technical review. While I was working on my last project for the class, I went ahead and read a blog post called “What is Technical Review in Software Testing?” by Ritika Kumari. I did not read this article to find out what a technical review is but to learn more about the process itself.

The article gives a suitable introduction to technical reviews in software testing, stating that technical reviews are formal assessments conducted by technical reviewers to examine software products like documentation, code, and design. Technical reviews are designed to check compliance with standards, enhance the quality of the code, and identify defects at the initial phase of the Software Development Life Cycle (SDLC). The blog discusses how technical reviews reduce the cost of rework, enhance the level of expertise of the team, and get software outcomes in line with business goals.

I picked this article because it is very much in line with the topic we had for this week’s class. The article mixes practical applications, such as Testsigma’s integration for test case management, with abstract concepts, like static testing and peer reviews. Its emphasis on collaborative procedures also aligns with our class’s ideas about agile teamwork.

The blog highlighted the importance of spotting design or code bugs early in development, since doing so can save up to 70% of post-release costs, as illustrated through the example of re-engineering faulty software. This aligns with the “shift-left” testing philosophy that we examined. Technical reviews are as much about information sharing as they are about error detection. For example, I had not realized how much cross-functional knowledge is built up through walkthroughs and peer reviews. I will look to apply this idea further in automation efforts. Testsigma’s review capabilities, such as automated test case submission and element management, demonstrated how tools can speed up reviews. The blog made me rethink my assumption that reviews are only a “checklist activity.” Rather, they are interactive processes that balance teamwork and technical correctness. For instance, the difference between formal, defect-oriented inspections and informal, knowledge-sharing peer reviews gave me a better understanding of how to customize reviews according to project requirements. I will promote systematic technical reviews in my future work environments. Overall, this was an interesting class, and I hope to use the lessons I have learned throughout my professional career.

https://testsigma.com/blog/technical-review-in-software-testing/

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Mastering Test-Driven Development

For this blog post, I’d like to discuss an interesting piece by Jeremy D. Miller titled “Effective Test-Driven Development”. It provides practical tips and advice for developers on how to make the most of Test-Driven Development (TDD). I found it very relevant because it connects to what we’ve been discussing in our software development class, particularly testing and keeping our code in good shape.

Miller’s article delves into TDD, which entails writing tests before coding. While TDD is a common method, he explains how to avoid common issues and adhere to best practices. He discusses how too much setup code, slow feedback, and unnecessary tests can all contribute to slow performance. He contrasts these issues with good habits such as writing quick tests, providing timely feedback, and ensuring tests are clear about what they check in the code.

Miller also discusses how TDD can help with design and quick feedback. He mentions that TDD encourages developers to plan out how their code will look, making it easier to maintain and less buggy. The goal is to make testing an integral part of development rather than a final step, so that problems can be identified early.

I chose this blog because we had only touched on TDD in class and I wanted to see how it applied in real-world software development. I thought it was fascinating that TDD is about more than just writing tests; it’s also about improving code structure. Miller’s suggestions are ones I’d heard of but hadn’t looked into thoroughly. His advice helped me understand how TDD can improve code quality and make life easier for developers, which is extremely useful to me as a student learning to write solid code. Reading this blog made me realize how important it is to write tests that do more than just check if things work; they should also ensure that the code is easy to change later. Miller, for example, discusses “happy paths” in which tests check for expected results and “negative tests” in which errors are handled. This is consistent with what we’ve been learning about testing, but it also gives me a better understanding of how TDD can help clean up and improve code over time.
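
To ground the “happy path” versus “negative test” distinction for myself, here is a small sketch of what that pairing might look like in JUnit 5; the UsernameValidator class is something I made up for illustration, not code from Miller’s post.

```java
// Hypothetical happy-path and negative tests for a made-up validator, using JUnit 5.
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class UsernameValidator {
    boolean isValid(String name) {
        if (name == null) {
            throw new IllegalArgumentException("username must not be null");
        }
        return name.length() >= 3;
    }
}

class UsernameValidatorTest {

    @Test
    void happyPathAcceptsAReasonableName() {
        // Happy path: the expected, well-formed input succeeds.
        assertTrue(new UsernameValidator().isValid("alice"));
    }

    @Test
    void negativeTestRejectsNull() {
        // Negative test: the error case is handled the way we expect.
        assertThrows(IllegalArgumentException.class,
                () -> new UsernameValidator().isValid(null));
    }
}
```

Writing the negative test first, watching it fail, and then adding the null check is the kind of small TDD loop the post is getting at.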

Miller’s post emphasizes the importance of not rushing through TDD. It’s better to take your time and write clean, manageable code. I found this extremely useful because, as a beginner, I frequently feel the need to complete tests quickly. But Miller reassured me that taking it slowly can save time in the long run by identifying problems early on.

I intend to use TDD more actively in future projects. By writing tests first, I can keep the end goal in mind from the start, reducing the need for major rewrites later. I also feel more confident about refactoring because TDD will help me keep the code solid while I make changes.

In the future, I want to incorporate Miller’s concept of clear intent expression into my code. Writing tests that clearly demonstrate what the code is supposed to do will simplify things for others and help me stay focused when working on larger projects.

Blog: https://jeremydmiller.com/2022/10/03/effective-test-driven-development/

From the blog CS@Worcester – Matchaman10 by tam nguyen and used with permission of the author. All other rights reserved by the author.

Microservice Architecture

In today’s fast-paced digital world, software systems are required to be scalable, adaptable, and powerful. Microservice architecture is one architectural approach that has gained significant popularity in meeting these objectives. Recently, I discovered a helpful article named “Microservices Architecture” on Microsoft’s Azure Architecture Center website, which offered a full description of this technique.

The article describes microservices architecture as a design pattern in which applications are developed as a collection of small, independent, and loosely coupled services. Each service is responsible for a single function and may be built, deployed, and scaled separately. This differs from monolithic systems, which have all components tightly integrated into a single codebase. The article highlights the advantages of microservices, such as increased scalability, shorter development cycles, and the option to use different technologies for different services. It also addresses difficulties like increased complexity in managing inter-service communication, data consistency, and deployment pipelines.
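
As a way to picture what a “small, independent service” can mean in practice, here is a rough, self-contained sketch that uses only the JDK’s built-in HTTP server; the /inventory endpoint, the port, and the hard-coded JSON are all made up, and a real microservice would layer on concerns like configuration, health checks, and an API gateway in front.

```java
// Minimal stand-in for an independently deployable service, using only the JDK.
// The /inventory endpoint and port 8081 are hypothetical.
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

import com.sun.net.httpserver.HttpServer;

public class InventoryService {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);

        // One small, focused responsibility: report inventory as JSON.
        server.createContext("/inventory", exchange -> {
            byte[] body = "{\"item\":\"widget\",\"inStock\":42}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("Inventory service listening on port 8081");
    }
}
```

Because the service owns one function and runs on its own, it can be rebuilt, redeployed, or scaled without touching any other part of the application, which is the core idea the article emphasizes.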

The reason I chose this article is that Microsoft Azure is a cloud computing platform I am familiar with, and I wanted to learn more about how it is used with microservice architecture. The article’s clear explanations and practical insights make it an excellent pick for learning about microservices in a real-world setting.

Reading the article was an eye-opening experience. I was particularly struck by the emphasis on independence and modularity in microservices. The thought of each service being created and deployed individually appealed to me since it enables teams to work on different areas of an application without stepping on each other’s toes. This method not only accelerates development but also makes it easier to discover and resolve problems.

However, this article also made me aware of the issues that come with microservices. For example, maintaining communication across services necessitates careful design, and guaranteeing data consistency between services can be challenging. This helped me realize the value of solutions like API gateways and message brokers, which help streamline these interactions.

One of the most important lessons that I learned is that microservices aren’t a one-size-fits-all solution. The article highlights that this architecture is best suited for big, complicated applications that demand a high level of scalability and flexibility. For smaller projects, a monolithic approach may be more suitable. This nuanced viewpoint helped me understand that the correct architecture depends on the project’s individual requirements.

In the future, I plan to apply microservices architectural ideas to my own projects. I’m particularly looking forward to exploring containerization technologies like Docker and orchestration platforms like Kubernetes, both of which are commonly used in microservices setups. I’ll also remember how important it is to build clear APIs and implement effective monitoring mechanisms to handle the complexity of distributed systems.

https://learn.microsoft.com/en-us/azure/architecture/guide/architecture-styles/microservices

From the blog CS@Worcester – computingDiaries by hndaie and used with permission of the author. All other rights reserved by the author.

Understanding Technical Debt

In software development, technical debt refers to the extra work required in the future when quick, easy solutions are chosen instead of more thorough, time-consuming approaches. This concept is explored in the article “What is Technical Debt & How Can Companies Manage It?” by Coder Academy.

The article defines technical debt as the accumulation of suboptimal code or design choices made to deliver projects faster. While these shortcuts may lead to quicker releases, they often result in higher maintenance costs and reduced code quality over time.

Key Factors Contributing to Technical Debt

  • Time Constraints: Meeting tight deadlines can lead to hasty decisions that prioritize speed over quality.
  • Evolving Requirements: As project requirements change, older code may no longer align with the current needs.
  • Lack of Documentation: Poor documentation can create misunderstandings and increase errors, contributing to technical debt.

Strategies for Managing Technical Debt

The article suggests several ways to manage technical debt effectively:

  • Regular Code Reviews: Consistently reviewing code helps identify and address suboptimal practices early.
  • Refactoring: Improving existing code without changing its functionality can enhance readability and maintainability (see the short sketch after this list).
  • Comprehensive Documentation: Thorough documentation supports better understanding and future modifications.
  • Prioritization: Address technical debt based on its impact on the overall project’s progress and quality.
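
To make the refactoring point concrete, here is a small, hypothetical before-and-after sketch (the pricing code is invented, not taken from the article); the behavior stays the same, but the intent becomes much easier to read and maintain.

```java
// Hypothetical before/after refactoring: same behavior, clearer intent.

// Before: a quick fix with magic numbers and vague names.
class OrderV1 {
    double calc(double p, int q) {
        return p * q * (q >= 10 ? 0.9 : 1.0);
    }
}

// After: named constants and descriptive names make the discount rule obvious.
class OrderV2 {
    private static final int BULK_THRESHOLD = 10;
    private static final double BULK_DISCOUNT = 0.9;

    double totalPrice(double unitPrice, int quantity) {
        double discount = quantity >= BULK_THRESHOLD ? BULK_DISCOUNT : 1.0;
        return unitPrice * quantity * discount;
    }
}
```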

I chose this article because technical debt is a common issue in software development and is closely related to our course material. For Computer Science students, learning how to avoid technical debt is critical. If technical debt becomes a habit, it can lead to poor time management, less active learning, and weak decision-making skills. Understanding its causes and management strategies is essential for maintaining code quality and ensuring project success.

The article provided valuable insights into technical debt and its consequences. I learned that while quick fixes may save time initially, they often lead to higher maintenance efforts and system issues later. The importance of regular code reviews and refactoring stood out, as these practices can help reduce technical debt and improve code quality.

I also appreciated the visual diagrams and tables in the article, which made it easier to understand what technical debt is and how to manage it. I particularly liked the advice on avoiding technical debt and understanding its long-term impacts as a programmer.

By adopting the strategies outlined in the article, I aim to contribute to developing sustainable, high-quality software solutions. This knowledge will help me avoid accumulating technical debt in my future work. I am motivated to build new habits, such as maintaining good documentation, participating in regular code reviews, and prioritizing refactoring. These practices will help me become a more effective and skilled programmer.

Sources:
What is Technical Debt, and how can you manage it?

Citation:
Academy, Coder. “What Is Technical Debt, and How Can You Manage It?” Medium, Medium, 18 May 2016, medium.com/@coderacademy/what-is-technical-debt-how-can-companies-manage-it-1af08992f6d0. 

From the blog CS@Worcester – CodedBear by donna abayon and used with permission of the author. All other rights reserved by the author.

Git Tricks For the New Dev

I just recently finished learning about Git in a classroom setting, covering every step from forking to cloning to branching and staging, then committing, pushing, and finally pulling. All the parts needed to get the gist of Git, but nothing in the way of advanced use. Enter this article written by GitLab.

As its title suggests, “15 Git tips to improve your workflow” has 15 total tips in regard to Git, so let’s go through some of them together.
1. Git aliases; an amazing feature. To think that rather than checkout, branch, or commit I could use a custom name. This is, in my opinion, great for new devs, since once they grasp what a command does they can alias it to something that makes sense to them.
2. Visualizing repo status with git-prompt.sh; it needs to be downloaded, but it is definitely a useful tool for people like me who benefit from a more visual experience.
3. Command line commit comparisons; a more practical command that is helpful for seeing your workflow. Definitely going to be using this one to help me track what I actually worked on, and I might even download the Meld tool they mentioned.
4. Stashing changes; another practical command that just makes sense for a dev to know. If you have to push a sudden fix in the middle of adding a feature, you can stash the changes made for the feature, commit the fix, and then just pop the stash to get back all the previous work.
5. Pull frequently; nothing to add.
6. Auto-complete commands; tab to automatically finish a word is also applicable in search engine prompts. So useful for a new dev, since if they forget the command but remember the first letter, they can just flick through the suggestions until they find what they were looking for.
7. Set a global .gitignore; create a list of files to exclude from commits and put it on the exclusion list, nice and simple.
8. Enable autosquash by default; I had to look this up, and apparently squashing merges commits into one big commit. Personally I am not too sure of the use case, so I will have to test it out at a later date.
9. Delete local branches that the remote removed when fetching/pulling; as part of fetch, there is a prune setting that provides this functionality, and it just needs to be set to true.

Obviously there are 6 more tips and they are: Use Git blame more efficiently, Add an alias to check out merge requests locally, An alias of HEAD, Resetting files, The git-open plugin, and The git-extras plugin. I will not go over them here but definitely give the article a read if you are interested.

Link:
https://about.gitlab.com/blog/2020/04/07/15-git-tips-improve-workflow/

From the blog CS@Worcester – Coder's First Steps by amoulton2 and used with permission of the author. All other rights reserved by the author.

(Week-14) Scrum Methodology in Software Development

Scrum is a widely-adopted framework in software development that is designed to encourage collaboration, smart time usage, adaptability, and transparency to deliver high-quality results to customers. The methodology centers around three roles: Scrum Master, Product Owner, and the Development Team.  Each role is critical to ensuring the scrum process is effective. Together, they uphold Scrum’s core values: commitment, focus, openness, respect, and courage.

The Roles in Scrum

  1. Scrum Master:
    The Scrum Master serves as the facilitator and coach, ensuring that the team follows the Scrum principles. They steer the team away from distractions, help remove obstacles along the way, and guide the team toward self-organization and improvement. Their role is not so much about managing the team as about empowering it to achieve its goals effectively.
  2. Product Owner:
    The Product Owner is the voice of the customer, responsible for “maximizing” the product’s potential and value. They manage the product backlog, prioritize features based on the current sprint, and provide clear requirements to the team. They act as the bridge between stakeholders and the development team, ensuring that there is complete alignment on goals and expectations.
  3. Development Team:
    The development team consists of software development professionals who collaborate to deliver increments of the product during each sprint. They are self-organizing, meaning they decide how to accomplish their tasks without intervention from the Scrum Master or Product Owner. This is great for fostering ownership and accountability, and for delivering high-quality work.

The Values of Scrum

Scrum is mostly made up of five key values that guide the team’s behavior and decision making:

  • Commitment: Teams dedicate themselves to achieving sprint goals and delivering value.
  • Focus: By working on a limited set of tasks at a time, teams maintain clarity and productivity.
  • Openness: Transparent communication fosters trust and ensures that challenges are addressed collaboratively.
  • Respect: Team members value each other’s contributions and expertise, creating a positive and supportive work environment.
  • Courage: Teams take bold steps to innovate and tackle tough problems.

Why Scrum Matters

Scrum’s structured yet flexible approach enhances collaboration, reduces waste, and drives continuous improvement. By empowering teams to adapt to change and deliver incrementally, organizations can respond more effectively to customer needs and market shifts. Whether applied in software development, marketing, or other fields, Scrum’s roles and values provide an amazing foundation for the success of a company.

Resource on Scrum Management

“What is Scrum in Project Management?” is an informative video about the Scrum methodology by the work management company Wrike. This resource explains all aspects of Scrum, including but not limited to goals, roles, practices, and examples. It also explains the idea of a “sprint,” the time-boxed iteration during which the development team works to complete its current goals before the next sprint begins. Check out the video for more information on Scrum methodology.

Link: https://www.youtube.com/watch?v=M12HSYZkrgQ

From the blog CS@Worcester – Elliot Benoit's Blog by Elliot Benoit and used with permission of the author. All other rights reserved by the author.