Category Archives: Week 5

Bugs: Severity vs. Priority, and Why It Matters

While looking for more articles online involving testing, I came across SauceLabs. This website is another compendium of blogs, similar to StickyMinds, which I mentioned in my last blog. The layout and articles on this website were an excellent selection of resources dedicated to testing and quality assurance. While running into many bugs in my most recent projects and assignments, I went searching for an article dedicated to programming bugs. I found one that fit my needs, dedicated to the differences between the severity and priority of bugs, which is something I had not previously considered. The article is titled "Understanding Bug Severity vs. Priority: Key Differences and Best Practices" by Chris Tozzi.

The article provides (as the title indicates) a breakdown of bug severity and bug priority in software testing. It explains that bug severity is primarily concerned with the impact of a bug on the system's functionality. Issues are categorized into the following levels of severity: critical, major, minor, and trivial. Each of these has a different level of impact on the system. For example, critical bugs may cause system crashes, while trivial bugs have only a small impact on functionality and performance. Bug priority focuses on the order in which bugs should be addressed, considering factors that affect the project as well as the program, such as deadlines or dependencies on higher-priority bugs. Bugs with high priority require immediate attention, while those with low priority can be addressed later as the program develops and is tested further. Being able to identify the severity and priority of bugs throughout a system's development is an extremely important skill, as it facilitates a smoother and more efficient development cycle. The article also provides some examples, such as a critical-severity bug that causes data loss; this bug would ALSO have high priority because it is causing a lot of damage to the system's structure. The article offers very useful recommendations for bug reports, such as establishing clear classification guidelines (priority and severity levels), communicating efficiently within teams, and reassessing bug statuses frequently.
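
To make the distinction concrete, here is a minimal sketch (my own, not taken from the article) of a bug report that carries severity and priority as separate fields; the class and field names are hypothetical, and the enum levels simply mirror the ones the article describes.

```java
// A minimal sketch of a bug report that tracks severity and priority separately.
// The enum levels follow the article; the class and field names are my own invention.
public class BugReport {

    public enum Severity { CRITICAL, MAJOR, MINOR, TRIVIAL }   // impact on the system
    public enum Priority { HIGH, MEDIUM, LOW }                  // order in which to fix

    private final String title;
    private final Severity severity;
    private final Priority priority;

    public BugReport(String title, Severity severity, Priority priority) {
        this.title = title;
        this.severity = severity;
        this.priority = priority;
    }

    @Override
    public String toString() {
        return "[" + severity + "/" + priority + "] " + title;
    }

    public static void main(String[] args) {
        // The article's example: a data-loss bug is both critical and high priority.
        BugReport dataLoss = new BugReport("Saving a file silently drops records",
                Severity.CRITICAL, Priority.HIGH);
        // A cosmetic typo is trivial in severity and can usually wait.
        BugReport typo = new BugReport("Tooltip misspells 'preferences'",
                Severity.TRIVIAL, Priority.LOW);
        System.out.println(dataLoss);
        System.out.println(typo);
    }
}
```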

I chose this blog because of my most recent experience in my assignments and projects. I had found that many bugs relied on each other, and I had never previously taken the initiative to learn more about bugs. When working with a team in industry, I will now have an understanding of how bugs are classified, as well as how to manage bug reports. This is something I had very little experience with prior to this article, and I will make sure to put extra effort into bug reports in my own projects with peers. I will continue to look into articles about bugs, testing, and QA to further improve my knowledge.

Source:
https://saucelabs.com/resources/blog/bug-severity-vs-priority

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

Foundations of Unit Testing in Software Development

In software development, Unit Testing is a crucial phase within the software testing lifecycle, ensuring that each component or "unit" of the software performs as intended and designed. I chose this topic because it is what we have been doing for the past two weeks in class, and the whole course is about Quality Testing and Assurance, so it makes sense. Also, since I am not a software developer (I'm a Data Analytics kind of guy), I have been trying really hard to stay motivated and enjoy the struggle of doing this course's assignments.

The GeeksforGeeks article provides a comprehensive overview of Unit Testing, defining it as a level of software testing where the individual units or components of a piece of software are tested. The primary goal is to validate that each unit of the software code performs as expected. Unit Testing is typically performed by developers themselves or by QA engineers, emphasizing the importance of testing small parts of the project independently for errors. The article outlines the process, benefits, and challenges of Unit Testing, along with examples of tools that facilitate this testing method, such as JUnit for Java and NUnit for .NET applications.
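
As a small illustration of the idea (my own sketch, not taken from the article), a JUnit 5 test exercises one unit in isolation, here a hypothetical Calculator class, and uses an assertion to check that it behaves as expected:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// A tiny unit under test; the Calculator class is hypothetical.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// A JUnit 5 test that checks the unit in isolation.
class CalculatorTest {

    @Test
    void addReturnsSumOfBothOperands() {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
    }
}
```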

I went with this specific post because of its clarity and depth in explaining Unit Testing. It aligns perfectly with our course's focus on software quality and testing, providing a solid foundation for understanding the complexities and nuances of unit testing within the software development lifecycle.

Upon reflecting on the content, the article really highlighted the significance of Unit Testing in early bug detection, which saves not only time but also costs in the later stages of development. The emphasis on Unit Testing as a foundation for more comprehensive testing methods resonated with my understanding of a layered testing strategy. It was really interesting to learn about the various tools and frameworks that support Unit Testing (though I may not know how to use them yet) across different programming languages, showing the universality and critical nature of this testing approach.

The knowledge gained from the article reinforces my commitment to integrating Unit Testing into future development projects. Recognizing its role in maintaining high-quality code and facilitating agile development processes, I am motivated to delve deeper into Unit Testing frameworks.

Unit Testing emerges as an indispensable practice in software development. The article serves as a valuable resource for understanding the principles and practices of Unit Testing, leading to a deeper appreciation for meticulous testing methodologies. As I move forward, the principles outlined in this resource will guide my approach to quality assurance and testing.

From the blog CS@Worcester – Josies Notes by josielrivas and used with permission of the author. All other rights reserved by the author.

Behavioral Testing

For this week's blog post, I chose the article "Behavior Testing | What it is, Why & How to Automate?" from testsigma.com. I selected this article because it fits within the behavioral testing section of the course topics in the syllabus. The article goes into great detail about behavioral testing, from discussing what it is to explaining how AI could be used in its implementation. For this blog post, however, I will discuss the sections on what behavioral testing is and a couple of methods that can be used to catch errors with behavioral testing.

The article describes behavioral testing as a form of functional testing designed to test the external functionality of a system: "Behavior testing or behavioral testing is a type of testing that focuses on testing the external behavior of a software application. It is a type of functional testing. It helps ensure that software systems meet the expectations and requirements of end-users, making it a valuable part of the software development and testing process. Behavior testing is also known as black-box testing." As the article describes, behavioral testing is essential to ensure that the systems or products you are designing work well enough that your customers can use them efficiently. There are many different methods for finding errors with behavioral testing, such as equivalence partitioning.

According to the article, one method that can be used with behavioral testing and is good at finding errors is equivalence partitioning. "The equivalence partitioning testing technique involves dividing the input data into different classes or partitions, such as valid and invalid data, assuming the system will behave the same for both inputs. Example – For a login form, if the password requires at least eight characters, you might test one case with a 6-character password (invalid) and another with a 10-character password (valid)." Because you are dividing inputs into separate groups, equivalence partitioning lets you do two things at once: confirm that the system functions as it should with valid inputs, and catch invalid inputs. A sketch of how this could look in code follows below.
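
Here is a minimal sketch of how the article's password example could look as JUnit 5 tests, with one representative input from each partition; the PasswordValidator class is hypothetical and assumes an eight-character minimum.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical unit under test: a password must be at least eight characters long.
class PasswordValidator {
    boolean isValid(String password) {
        return password != null && password.length() >= 8;
    }
}

// One test per equivalence class: an invalid partition and a valid partition.
class PasswordValidatorTest {

    private final PasswordValidator validator = new PasswordValidator();

    @Test
    void sixCharacterPasswordIsRejected() {      // representative of the invalid class
        assertFalse(validator.isValid("abc123"));
    }

    @Test
    void tenCharacterPasswordIsAccepted() {      // representative of the valid class
        assertTrue(validator.isValid("abcdefghij"));
    }
}
```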

Another method the article covers is boundary value analysis, a form of behavioral testing focusing on the possible range of inputs, specifically numerical inputs. "It focuses on testing the boundaries of input ranges, as errors often occur at the edges of these ranges. Test cases are designed for values at the lower and upper boundaries and just above and below. Example – If an input field accepts values from 1 to 100, the test data can be 0, 1, 2, 99, 100, and 101." This kind of testing is very helpful for making sure you have accounted for the possible range of inputs a user may enter, both valid and invalid. A sketch of how this might look as tests follows below.
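
And here is a similar sketch for the 1-to-100 example, assuming a hypothetical RangeField class and the junit-jupiter-params module for parameterized tests; the test values sit on and just outside both boundaries.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

// Hypothetical unit under test: an input field that accepts values from 1 to 100.
class RangeField {
    boolean accepts(int value) {
        return value >= 1 && value <= 100;
    }
}

// Boundary value analysis: test at and just beyond both edges of the range.
class RangeFieldTest {

    private final RangeField field = new RangeField();

    @ParameterizedTest
    @ValueSource(ints = {1, 2, 99, 100})        // on and just inside the boundaries
    void valuesInsideTheBoundariesAreAccepted(int value) {
        assertTrue(field.accepts(value));
    }

    @ParameterizedTest
    @ValueSource(ints = {0, 101})               // just outside the boundaries
    void valuesOutsideTheBoundariesAreRejected(int value) {
        assertFalse(field.accepts(value));
    }
}
```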

Article: https://testsigma.com/guides/behavior-testing/

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.

Mastering JUnit

Dive into the world of JUnit, the leading Java testing framework. Learn how JUnit streamlines writing, running, and managing tests for robust Java applications.

Introduction to JUnit: Elevating Java Testing to New Heights

Testing is the backbone of any robust software development process, and when it comes to Java, JUnit is the name of the game. As a pivotal Java testing framework, JUnit simplifies the creation and management of tests, ensuring your code stands up to the rigors of use. But what makes JUnit the go-to framework for Java developers? Let’s dive in and uncover the essentials of JUnit, from its core functionalities to setting up your first test suite, ensuring you’re well-equipped to harness the full power of this testing framework.

A Deep Dive into JUnit’s Capabilities

JUnit, inspired by the xUnit architecture, provides a structured way to write and run automated tests. This flexibility extends to various types of tests, including unit, integration, and functional tests, each serving a unique purpose in the development lifecycle. Unit tests scrutinize individual components for correctness, integration tests ensure components work seamlessly together, and functional tests validate the system’s operation against requirements.

At its core, JUnit facilitates test creation through annotations, enabling straightforward test case structuring. Assertions play a critical role here, allowing developers to validate expected outcomes. Additionally, JUnit’s test runners and suites offer a streamlined approach to execute and organize tests, complemented by comprehensive reporting tools that shed light on test outcomes.
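
As a brief, hedged illustration (my own sketch, not taken from any particular tutorial), a JUnit 5 test class typically combines annotations such as @BeforeEach and @Test with assertions; the Account class here is hypothetical.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

// Hypothetical unit under test: a simple bank account.
class Account {
    private int balance;

    void deposit(int amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        balance += amount;
    }

    int getBalance() {
        return balance;
    }
}

class AccountTest {

    private Account account;

    @BeforeEach
    void setUp() {
        // Runs before every test, giving each test a fresh fixture.
        account = new Account();
    }

    @Test
    void depositIncreasesTheBalance() {
        account.deposit(50);
        assertEquals(50, account.getBalance());
    }

    @Test
    void depositRejectsNonPositiveAmounts() {
        assertThrows(IllegalArgumentException.class, () -> account.deposit(0));
    }
}
```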

Setting the Stage for JUnit Testing

Getting started with JUnit is a breeze, especially within popular IDEs like Eclipse. Installation is straightforward, involving the addition of JUnit to your project’s build path. Once set up, creating a standard test file is your first step toward leveraging JUnit’s testing prowess. This involves defining test methods, utilizing JUnit’s annotations, and employing assertions to verify code behavior.

Crafting Your First Test Class

A well-structured test class is your blueprint for effective testing. Adherence to best practices, such as minimizing class size and focusing on relevant tests, is paramount. Utilize assertions to enforce expected outcomes, and maintain regular test runs to catch and rectify issues early. This iterative process not only enhances code quality but also bolsters your confidence in the software you develop.

Conclusion: Unlocking JUnit’s Full Potential

JUnit’s significance in Java development cannot be overstated. By facilitating efficient, reliable testing, JUnit empowers developers to produce higher-quality code. Whether you’re new to JUnit or looking to refine your testing strategy, understanding and applying JUnit’s features will undoubtedly elevate your development process. So, why not take the leap and integrate JUnit into your next Java project? With the right approach, you’re set to unlock the full potential of this powerful testing framework.

From the blog CS@Worcester – Coding by asejdi and used with permission of the author. All other rights reserved by the author.

TestProject Tutorial Conclusion – Advanced API Testing and Scheduling

We return again this week to the online TestProject blog tutorial, looking at the final two chapters, 5 and 6, which cover Advanced API Testing Automation and Scheduling API Automation Flows and CI/CD Execution, respectively. In class, we've been working with JUnit integrated with VS Code, so it's been interesting seeing a very different user interface in TestProject.

Chapter 5 covers "Advanced" API Testing Automation, which primarily involves more complex interactions and tests with JSON objects and schemas. It also goes into some other, more complicated calls and tests, such as formatting URL-encoded requests, reading and using predefined user data sets, and tests involving dynamic parameters. This chapter references a public NASA API and related tools; a key component that stuck out to me was the error report file generation shown in the chapter, which easily identifies and organizes issues. Compared to previous chapters, I found this one less relatable and applicable to the things we're doing in class, but I still learned a lot and was intrigued by the methodologies for test situations with multiple JSON paths to one target.

Chapter 6 focuses on the scheduling aspect of test automation in TestProject and its interactions with CI/CD pipelines and command-line execution. As always, there is an abundance of screenshots and images to walk readers through an example test set-up, beginning with the interface for scheduling tests and TestProject's system of creating and assigning 'jobs'. Tests are aggregated into jobs (typically as a bundle), which can then be executed as a one-time event or assigned a recurring time frame. The interface for doing so is clear and intuitive and reminds me a lot of CS383 – Cloud Computing, where we are working on modules in AWS Academy learning about Amazon Web Services. AWS uses a similar interface and logical structure for assigning roles, jobs, permissions, and many other facets, making it easy for me to follow this TestProject tutorial. This chapter also discusses testing within a Docker container, which we used to use for Dr. Wurst's assignments before recently switching to GitPod.

With this reading, we conclude the TestProject tutorial I originally found at the beginning of the semester. There has been a lot of really valuable material and examples within these tutorials, particularly in chapters 1–4, as they focus on beginner concepts and I've just been getting started with learning about software testing and quality assurance. Probably the most interesting and encouraging part of this set of tutorials was how frequently concepts came up from other courses like Database Design and Cloud Computing. Software that interacts with those areas must be tested too, so it's important to know how to work my way around them and to see visual examples of tests being designed and executed. In conclusion, TestProject seems like a great platform with many features, particularly an intuitive scheduling component; however, JUnit's interface remains my current favorite.

Sources:

Tutorial Intro: https://blog.testproject.io/2020/11/10/automating-end-to-end-api-testing-flows/

Chapter 5: https://blog.testproject.io/2020/11/10/advanced-api-test-automation-and-validation-flows/

Chapter 6: https://blog.testproject.io/2020/11/10/scheduling-api-automation-flows-and-ci-cd-execution/

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

Exploring the Classical Waterfall Model in Software Development

In most respects, the classical waterfall model serves as the foundational software development life cycle (SDLC) model, embodying a structured and sequential approach to project management and software development that can prove effective for a variety of coding projects. While it may not be as commonly employed today, its significance lies in being the basis upon which other SDLC models have evolved, with a process whose steps and details are planned out beforehand. The model finds its relevance in large, complex projects; it is characterized by a rigorous, phase-driven progression, making it suitable for scenarios where project requirements are well defined and stakeholders seek a high level of confidence in the outcome.

Although the waterfall model is now less prevalent in contemporary software development, given its lesser effectiveness compared to more agile methodologies, it remains a foundational framework for understanding software development life cycles. The model's structured, sequential approach entails phases like requirements gathering and analysis, design, implementation, testing, deployment, and maintenance, each building upon the preceding one. It is a document-driven model, placing high importance on quality control and rigorous planning, thus ensuring that the project is well defined and the team operates with clarity and precision.

It becomes pretty clear that the simplicity and linear progression of the waterfall technique offer advantages for specific project scenarios. This approach favors discipline, with a focus on defining requirements before design and on design before coding. For smaller, well-understood projects, it can be effective in maintaining clarity and ensuring milestones are met.

At the same time though, the rigidity and limitations of the waterfall model become apparent in more complex, dynamic projects. Its lack of flexibility to accommodate changing requirements and late defect detection pose significant challenges. The sequential nature of the model restricts stakeholder involvement in later phases, potentially leading to misunderstandings and costly revisions.

In practice, project managers and development teams should carefully assess project requirements, size, complexity, and the degree of uncertainty to select the most appropriate SDLC model, since the waterfall method might not always be effective and can prove unwieldy for projects better suited to adaptability. Moreover, hybrid approaches, combining elements from multiple models, can offer the best of both worlds, allowing for both structure and adaptability.

In conclusion, the classical waterfall model, while valuable for certain projects, is not a one-size-fits-all solution. Its use should be considered in situations where requirements are well defined and change is unlikely, such as large-scale, safety-critical, or government projects, which tend to have big budgets and therefore need to be mapped out carefully given the money spent on them. In today's rapidly evolving software landscape, more adaptive SDLC models have gained prominence, offering flexibility and responsiveness to changing needs.

https://www.geeksforgeeks.org/software-engineering-classical-waterfall-model/

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

YAGNI

YAGNI is an acronym for "You Ain't Gonna Need It." It's a principle from Extreme Programming that says programmers should only add functionality once it is definitely necessary. When coding, even if you are sure that you will need a piece of code or a feature later on, you don't need to implement it now; it may turn out that you never need it, because you end up needing something else instead. This is why you don't want developers wasting their time creating extra elements that might not end up being necessary and that can slow the process down. YAGNI helps save time by avoiding work on features that might never be used, lets the main features of the program be developed better, and reduces the time spent on each release. When you have a problem that you can't yet see clearly, you won't be capable of making the best choices when coming up with a solution; on the other hand, when you know what is causing the problem, you can come up with a better plan to solve it. In software development, it is tempting to build a system that can deal with everything, but you would end up using only a few of its features while the rest demand attention and upgrades.
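
As a small illustration of the principle (my own sketch, not taken from the article), compare a speculative, over-general design with the simpler version YAGNI would have you write today; the class and method names are hypothetical.

```java
// Speculative version: extra formats and options added "just in case".
// Nothing in the current requirements asks for CSV or XML export yet.
class ReportExporterSpeculative {
    String export(String report, String format, boolean compress, boolean encrypt) {
        // ...branches for "pdf", "csv", "xml", compression, encryption...
        throw new UnsupportedOperationException("most of this is never used");
    }
}

// YAGNI version: only the feature that is definitely needed right now.
class ReportExporter {
    String exportAsPdf(String report) {
        return "PDF:" + report;   // placeholder for the one real requirement
    }
}
```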

YAGNI can be implemented by development teams from small to large, so it isn't limited to small projects or large enterprises. The principle can help set up a task list of do's and don'ts. Always try to implement the selling feature first and get the app ready for end users; after the app is functional, you can start adding extra features in the next version. Waiting to add any additional features saves developers a lot of time and effort and helps them meet project deadlines. Once your app is live, you should keep up with its updates and keep making the app better, because delaying updates in order to add more features can give competitors a chance to take your users. The first version of the app doesn't need to be perfect; if it can do the simple things and still fulfill its intended purpose, then that is enough. With time you can add all the extras you need later on, instead of cramming them into one version. The You Ain't Gonna Need It principle is very time-effective and efficient for developers: it helps us get our projects done on time, avoids adding anything that isn't necessary at the time, and keeps developers from feeling stressed about features that don't need to be added yet. This principle is time-, stress-, and cost-efficient for developers, which is why it should be used consistently.

https://www.techtarget.com/whatis/definition/You-arent-gonna-need-it

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.

Unveiling the Blueprint of Software Architectures: The Foundation of Digital Development

In the intricate world of software development, one essential factor underpins the creation of every digital marvel – software architectures. These structural frameworks are the unsung heroes, the master plans guiding the intricate construction of software applications. They serve as the invisible hand that shapes the organization of an application, defining its key components, the relationships between them, and the fundamental principles that govern their interactions.

Software architectures, though often behind the scenes, are pivotal in crafting software that’s not just functional but also efficient and tailored to meet specific requirements. They’re akin to the architects of a grand skyscraper, ensuring that each piece falls into place seamlessly, resulting in a robust and scalable digital structure.

Understanding the diverse architectural styles empowers developers to choose the right path for their projects. It’s akin to a skilled craftsman selecting the finest tools and materials for a unique creation. The choice of architecture significantly influences various aspects of a software system. It impacts the system’s performance, scalability, maintainability, security, and adaptability to change.

Embracing the versatility of architectural styles is akin to choosing different brushes for a painting. The software architects are the artists, and the blueprint they select is their canvas. As software development progresses, these architectures are not just abstract concepts; they become the very foundation upon which the digital world evolves.

From the blog CS-343 – Hieu Tran Blog by Trung Hiếu and used with permission of the author. All other rights reserved by the author.

The Art of Code Refactoring

Since we have been discussing refactoring in class recently, it got me interested in finding out more about what makes refactoring… well “refactoring”. I found this interesting article “Refactoring vs. Defactoring” by Nicolas Carlo, a French-Canadian Software Engineer, which describes the difference between refactoring and debugging while also introducing the idea of “defactoring”.

The article starts with the definition of refactoring, which, according to Martin Fowler, is "a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior." In simpler terms, refactoring is all about tidying up the interior of a program while keeping the exterior the same.

Nicolas is clear, however, that fixing a bug, adding a feature, or changing a feature is not refactoring, but he points out the importance of refactoring code before altering the functionality of a program.

Nicolas states that, in his experience, a few practices help solidify this distinction: making distinct commits that separate refactoring commits from change commits, committing more frequently, prefixing commit messages with R or C to mark the kind of change made (for example, "R: extract validation helper" versus "C: reject empty passwords"), and learning to use automated refactorings to improve the health of one's code.

Nicolas also explains that by following these practices, he feels his work has become much safer and simpler than before. With this newfound awareness, he feels he can put the best quality into his work. He also gives a brief rundown of how thinking of refactoring and changes as two hats you wear while programming can help increase developer awareness.

Since we discussed refactoring in class, I thought it was interesting that the article's title frames defactoring as a process opposed to refactoring, but I came to find out it is not that at all. Defactoring is described by Nicolas as "cognitive refactoring," which is done by making the code less abstract in places where abstraction is no longer required.

He says that when working with legacy code, he notices items such as temporary variables that are just not needed anymore in places where they were necessary in the past. By altering the code to remove such variables, you remove old abstractions that are no longer needed, and Nicolas calls this process "defactoring."
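
As a tiny, hypothetical sketch of the kind of change he means (my example, not one from the article), a leftover temporary variable can simply be inlined once it no longer earns its keep, leaving behavior unchanged:

```java
class OrderCalculator {

    // Before: a temporary variable left over from older code that no longer adds clarity.
    double totalBefore(double price, int quantity) {
        double subtotal = price * quantity;   // abstraction that is no longer needed
        return subtotal;
    }

    // After "defactoring": the unnecessary temporary is removed, behavior unchanged.
    double totalAfter(double price, int quantity) {
        return price * quantity;
    }
}
```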

After reading this article, I feel I have a stronger understanding of the importance of separating refactoring from normal changes, since it can make a dramatic difference in a program's overall transparency. In my own work, I have realized how taking on one aspect at a time improves the cohesion and efficacy of the final product, but I never really thought about the importance of distinguishing changes from refactoring. Trying to be aware of this in the future will help me create the best version of my work possible by ensuring I have a more robust knowledge of a program's behavior, and added transparency of that program, through my code alone.

Article Link: https://understandlegacycode.com/blog/refactoring-and-defactoring/

From the blog CS@Worcester – Eli's Corner of the Internet by Eli and used with permission of the author. All other rights reserved by the author.

Week 5 – A bit late but we’re getting there…

So it’s been a hot second since I set this blog up, and I apologize for the silence. Been busy focusing on homework and figuring out my work situation.

But with that aside, I just wanna talk about my past with GitHub and repositories before this class. I’ve actually used GitHub many times before, because I collaborate with a modding community. We focus on modding a video game known as Luxor, a classic PC game from the 2000s that I’ll share gameplay of below.

As for what a mod of this game entails, here's an example of one of my recent favorites, Hollow, made by my friend Dommo:

A lot of effort has been put into these mods, and I’ve contributed to a lot of them, and even made my own. I have no recordings of it, unfortunately, but I swear it exists, haha.

Though recently, we've been discussing how to properly archive mods. For the longest time, we've been using our Discord server for modding to store them, but that poses an issue: many people might not have access to Discord due to their countries, operating systems, or various other reasons.

This led to some people moving over to GitHub, which was one of my first times learning how it actually works. Before this, I simply downloaded stuff from it, but now I've learned the basics of how to push to and pull from repositories and keep a local clone to work on while collaborating with multiple people.

Currently, one of the biggest projects being developed using GitHub is OpenSMCE (https://github.com/jakubg1/OpenSMCE), a game engine being built on top of Love2D to give us an open-source engine to work with for our mods, as opposed to the limited and clunky engine of the original game we currently use.

The reason I bring this up is that the new information I'm learning in these classes is inspiring me to help work on OpenSMCE and to learn what it's like to be part of a team doing software/engine development with Jakub, the developer of OpenSMCE. This is an application I've been very excited to see reach a full release, and being able to say I contributed to it and helped it get there would be amazing.

Hopefully as the semester goes on, with the lessons I’m learning about how to create an application as well as work in a collaborative environment, I’ll end up contributing to this project, and maybe I can even use this blog as a way to discuss the ongoing developments and issues we’ve been facing with the development of OpenSMCE. It would be interesting, and I will probably reach out to Jakub within the next week about it.

Anyways, that’s all I have for this week, until next time!

-Tempura

From the blog CS@Worcester – You're Telling Me A Shrimp Wrote This Code?! by tempurashrimple and used with permission of the author. All other rights reserved by the author.