Category Archives: Week 5

Behavioral Testing

For this week’s blog post, I chose the article “Behavior Testing | What it is, Why & How to Automate?” from testsigma.com. I selected this article because it fits within the behavioral testing topic in the course syllabus. This article goes into great detail about behavioral testing, from discussing what it is to explaining how AI could be used in its implementation. For this blog post, however, I will discuss the sections on what behavioral testing is and a couple of methods that can be used to catch errors with behavioral testing.

The article describes behavioral testing as a form of functional testing designed to test the external functionality of a system. “Behavior testing or behavioral testing is a type of testing that focuses on testing the external behavior of a software application. It is a type of functional testing. It helps ensure that software systems meet the expectations and requirements of end-users, making it a valuable part of the software development and testing process. Behavior testing is also known as black-box testing.” As the article describes, behavioral testing is essential to ensure that the systems or products you are designing work well enough for your customers to use them efficiently. There are many methods for finding errors with behavioral testing, such as equivalence partitioning.

According to the article, one behavioral testing method that is good at finding errors is equivalence partitioning. “The equivalence partitioning testing technique involves dividing the input data into different classes or partitions, such as valid and invalid data, assuming the system will behave the same for both inputs. Example – For a login form, if the password requires at least eight characters, you might test one case with a 6-character password (invalid) and another with a 10-character password (valid).” Because equivalence partitioning divides inputs into separate groups, it lets you do two things at once: verify that the system functions as it should with valid inputs, and catch invalid inputs.
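To make this concrete, here is a minimal JUnit 5 sketch of the article’s password example. The validator is a hypothetical stand-in, inlined so the test runs on its own, assuming a rule of at least eight characters:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class PasswordPartitionTest {

    // Hypothetical validator: accepts passwords of eight or more characters.
    private boolean isValid(String password) {
        return password != null && password.length() >= 8;
    }

    @Test
    void rejectsRepresentativeOfInvalidPartition() {
        assertFalse(isValid("abc123")); // 6 characters: invalid class
    }

    @Test
    void acceptsRepresentativeOfValidPartition() {
        assertTrue(isValid("abcdef1234")); // 10 characters: valid class
    }
}
```

One representative from each partition suffices, since the technique assumes the system behaves the same for every input in a class. Another way behavioral testing can be implemented is through boundary value analysis.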

According to the article, boundary value analysis is a form of behavioral testing focusing on the possible range of inputs, specifically numerical inputs. “It focuses on testing the boundaries of input ranges, as errors often occur at the edges of these ranges. Test cases are designed for values at the lower and upper boundaries and just above and below. Example – If an input field accepts values from 1 to 100, the test data can be 0, 1, 2, 99, 100, and 101.” This kind of testing can be very helpful in making sure that you have accounted for the possible range of inputs that a user may enter, both valid and invalid.
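As an illustration, the same six values from the article’s example can be driven through a JUnit 5 parameterized test; the range check below is a hypothetical stand-in for a field that accepts 1 to 100:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class QuantityBoundaryTest {

    // Hypothetical range check: the input field accepts values from 1 to 100.
    private boolean inRange(int value) {
        return value >= 1 && value <= 100;
    }

    @ParameterizedTest
    @CsvSource({
        "0, false",  // just below the lower boundary
        "1, true",   // lower boundary
        "2, true",   // just above the lower boundary
        "99, true",  // just below the upper boundary
        "100, true", // upper boundary
        "101, false" // just above the upper boundary
    })
    void checksValuesAtAndAroundBoundaries(int value, boolean expected) {
        assertEquals(expected, inRange(value));
    }
}
```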

Article: https://testsigma.com/guides/behavior-testing/

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.

Mastering JUnit

Dive into the world of JUnit, the leading Java testing framework. Learn how JUnit streamlines writing, running, and managing tests for robust Java applications.

Introduction to JUnit: Elevating Java Testing to New Heights

Testing is the backbone of any robust software development process, and when it comes to Java, JUnit is the name of the game. As a pivotal Java testing framework, JUnit simplifies the creation and management of tests, ensuring your code stands up to the rigors of use. But what makes JUnit the go-to framework for Java developers? Let’s dive in and uncover the essentials of JUnit, from its core functionalities to setting up your first test suite, ensuring you’re well-equipped to harness the full power of this testing framework.

A Deep Dive into JUnit’s Capabilities

JUnit, inspired by the xUnit architecture, provides a structured way to write and run automated tests. This flexibility extends to various types of tests, including unit, integration, and functional tests, each serving a unique purpose in the development lifecycle. Unit tests scrutinize individual components for correctness, integration tests ensure components work seamlessly together, and functional tests validate the system’s operation against requirements.

At its core, JUnit facilitates test creation through annotations, enabling straightforward test case structuring. Assertions play a critical role here, allowing developers to validate expected outcomes. Additionally, JUnit’s test runners and suites offer a streamlined approach to execute and organize tests, complemented by comprehensive reporting tools that shed light on test outcomes.
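To ground those ideas, here is a minimal sketch of a JUnit 5 test class; the Calculator is a hypothetical stand-in for code under test, inlined so the example is self-contained:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class CalculatorTest {

    private Calculator calculator;

    @BeforeEach // runs before each test, giving every test a fresh fixture
    void setUp() {
        calculator = new Calculator();
    }

    @Test // marks a method as a test case
    void addsTwoNumbers() {
        assertEquals(5, calculator.add(2, 3)); // assertion validates the expected outcome
    }

    @Test
    void divisionByZeroThrows() {
        assertThrows(ArithmeticException.class, () -> calculator.divide(1, 0));
    }

    // Hypothetical class under test.
    static class Calculator {
        int add(int a, int b) { return a + b; }
        int divide(int a, int b) { return a / b; }
    }
}
```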

Setting the Stage for JUnit Testing

Getting started with JUnit is a breeze, especially within popular IDEs like Eclipse. Installation is straightforward, involving the addition of JUnit to your project’s build path. Once set up, creating a standard test file is your first step toward leveraging JUnit’s testing prowess. This involves defining test methods, utilizing JUnit’s annotations, and employing assertions to verify code behavior.

Crafting Your First Test Class

A well-structured test class is your blueprint for effective testing. Adherence to best practices, such as minimizing class size and focusing on relevant tests, is paramount. Utilize assertions to enforce expected outcomes, and maintain regular test runs to catch and rectify issues early. This iterative process not only enhances code quality but also bolsters your confidence in the software you develop.

Conclusion: Unlocking JUnit’s Full Potential

JUnit’s significance in Java development cannot be overstated. By facilitating efficient, reliable testing, JUnit empowers developers to produce higher-quality code. Whether you’re new to JUnit or looking to refine your testing strategy, understanding and applying JUnit’s features will undoubtedly elevate your development process. So, why not take the leap and integrate JUnit into your next Java project? With the right approach, you’re set to unlock the full potential of this powerful testing framework.

From the blog CS@Worcester – Coding by asejdi and used with permission of the author. All other rights reserved by the author.

TestProject Tutorial Conclusion – Advanced API Testing and Scheduling

We return again this week to the online TestProject blog tutorial, looking at the final two chapters, 5 and 6, which cover Advanced API Testing Automation and Scheduling API Automation Flows and CI/CD Execution respectively. In class, we’ve been working with JUnit integrated with VSCode, so it’s been interesting seeing a very different user interface in TestProject.

Chapter 5 covers “Advanced” API Testing Automation, which primarily involves more complex interactions and tests with JSON objects and schemas. It also goes into other, more complicated calls and tests, such as formatting URL-encoded requests, reading and using predefined user data sets, and tests involving dynamic parameters. This chapter references the public NASA API and tools; a key component that stuck out to me was the error report file generation it shows, which easily identifies and organizes issues. Compared to previous chapters, I found this one less relatable and applicable to the things we’re doing in class, but I still learned a lot and was intrigued by the methodologies for test situations with multiple JSON paths to one target.
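The tutorial works through TestProject’s own interface, but the same flavor of API check can be sketched in plain JUnit using Java’s built-in HTTP client. This is a minimal sketch, assuming NASA’s public APOD endpoint with its shared DEMO_KEY; the "date" field checked in the body is an assumption based on that API’s documented response shape:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

class NasaApiSmokeTest {

    @Test
    void apodEndpointReturnsOkWithExpectedJsonField() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());         // the call succeeded
        assertTrue(response.body().contains("\"date\"")); // shallow check on the JSON shape
    }
}
```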

Chapter 6 focuses on the scheduling aspect of test automation in TestProject and interactions with CI/CD pipelines. As always, there is an abundance of screenshots and images to walk readers through an example test set-up – beginning with the interface for scheduling tests and TestProject’s system of creating and assigning ‘jobs’. Tests are aggregated into jobs (typically as a bundle) which can then be executed as a one-time event or assigned a recurring time frame. The interface for doing so is clear and intuitive and reminds me a lot of CS383 – Cloud Computing, where we are working on modules in AWS Academy learning about Amazon Web Services. AWS uses a similar interface and logical structure for assigning roles, jobs, permissions, and many other facets, making it easy for me to follow this TestProject tutorial. This chapter also discusses testing within a Docker container, which we previously used for Dr. Wurst’s assignments before recently switching to GitPod.

With this reading, we conclude the TestProject tutorial I originally found at the beginning of this semester. There’s been a lot of really valuable material and examples within these tutorials, particularly in chapters 1-4, as they focus on beginner concepts and I’ve just been getting started with learning about software testing and quality assurance. Probably most interesting and encouraging from this set of tutorials was how frequently concepts came up from other courses like Database Design and Cloud Computing. Software that interacts with those areas must be tested too, so it’s important to know how to work my way around them and see visual examples of tests being designed and executed. In conclusion, TestProject seems like a great platform with many features, particularly an intuitive scheduling component; however, JUnit’s interface remains my current favorite.

Sources:

Tutorial Intro: https://blog.testproject.io/2020/11/10/automating-end-to-end-api-testing-flows/

Chapter 5: https://blog.testproject.io/2020/11/10/advanced-api-test-automation-and-validation-flows/

Chapter 6: https://blog.testproject.io/2020/11/10/scheduling-api-automation-flows-and-ci-cd-execution/

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

Exploring the Classical Waterfall Model in Software Development

In most respects, the classical waterfall model serves as the foundational software development life cycle (SDLC) model, embodying a structured, sequential approach to project management and software development that can prove effective for a variety of coding projects. While it may not be as commonly employed today, its significance lies in being the basis upon which other SDLC models have evolved, with a process whose steps and details are planned beforehand. This model finds its relevance in large, complex projects; characterized by its rigorous, phase-driven progression, it is suitable for scenarios where project requirements are well-defined and project stakeholders seek a high level of confidence in the outcome.

Although the waterfall model is now less prevalent in contemporary software development, given its lesser effectiveness compared to more agile methodologies, it remains a foundational framework for understanding software development life cycles. This model’s structured, sequential approach entails phases like requirements gathering and analysis, design, implementation, testing, deployment, and maintenance, each building upon the preceding one. It is a document-driven model, placing high importance on quality control and rigorous planning, thus ensuring that the project is well-defined and the team operates with clarity and precision.

It becomes pretty clear that the simplicity and linear progression of the waterfall technique offer advantages for specific project scenarios. This approach favors discipline, with a focus on defining requirements before design, and design before coding. For smaller, well-understood projects, it can be effective in maintaining clarity and ensuring milestones are met.

At the same time though, the rigidity and limitations of the waterfall model become apparent in more complex, dynamic projects. Its lack of flexibility to accommodate changing requirements and late defect detection pose significant challenges. The sequential nature of the model restricts stakeholder involvement in later phases, potentially leading to misunderstandings and costly revisions.

In practice, project managers and development teams should carefully assess project requirements, size, complexity, and the degree of uncertainty to select the most appropriate SDLC model, since the waterfall method might not always be effective, sometimes proving unwieldy for projects better suited to adaptability. Moreover, hybrid approaches, combining elements from multiple models, can offer the best of both worlds, allowing for structure and adaptability.

In conclusion, the classical waterfall model, while valuable for certain projects, is not a one-size-fits-all solution. Its use should be considered in situations where requirements are well-defined and change is unlikely, such as large-scale, safety-critical, or government projects, which tend to have big budgets and therefore need to be mapped out carefully given the money at stake. In today’s rapidly evolving software landscape, more adaptive SDLC models have gained prominence, offering flexibility and responsiveness to changing needs.

https://www.geeksforgeeks.org/software-engineering-classical-waterfall-model/

From the blog CS@Worcester – CSTips by Jamaal Gedeon and used with permission of the author. All other rights reserved by the author.

YAGNI

YAGNI is an acronym for You Ain’t Gonna Need It. It’s a principle from Extreme Programming that says programmers should only add functionality once it is definitely necessary. Even if you are sure that you will need a piece of code or a feature later on, you don’t need to implement it now; requirements may change, and you might end up needing something else entirely. This is why you don’t want developers to waste their time creating extra elements that might not end up being necessary and can slow the process. YAGNI helps save time otherwise spent on features that might never be used, the main features of the program are developed better, and less time is spent on each release. When you are solving a problem you don’t yet have, you won’t be capable of making the best choices; on the other hand, when you know what is actually causing the problem, you can come up with a better plan to solve it. In software development, it is tempting to create a system that can deal with everything, but in practice only a few of its features get used while the rest still demand attention and upgrades.
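As a hypothetical Java illustration of the principle, compare a speculative design with the version YAGNI suggests:

```java
// Speculative: an interface and two implementations for export formats
// nobody has asked for yet.
interface ReportExporter {
    String export(String report);
}

class PdfExporter implements ReportExporter {
    public String export(String report) { return "PDF: " + report; }
}

class XmlExporter implements ReportExporter {
    public String export(String report) { return "XML: " + report; }
}

// YAGNI: today's only real requirement is a PDF export, so write just that
// and introduce the abstraction later, if and when a second format arrives.
class Report {
    String exportAsPdf(String report) {
        return "PDF: " + report;
    }
}
```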

YAGNI can be implemented by development teams from small to large, so it isn’t limited to only small projects or large enterprises. This principle can help set up a task list of do’s and don’ts. Always try implementing the selling feature first and getting the app ready for end users. After the app is functional, you can start adding extra features in the next version. Waiting to add additional features will save developers a lot of time and effort and help them meet project deadlines. Once your app is live, you should keep up with its updates and keep making the app better, since delaying updates in order to cram in more features can give competitors a chance to take your users. The first version of the app doesn’t need to be perfect; if it can do the simple things and still fulfill its intended purpose, that is enough. With time you can add the add-ons you need later on instead of cramming them all into one version. The You Ain’t Gonna Need It principle is time effective and efficient: it helps developers get projects done on time, avoid adding anything that isn’t necessary at the moment, and feel less stress along the way. This principle is time, stress, and cost efficient for developers, which is why it should be used consistently.

https://www.techtarget.com/whatis/definition/You-arent-gonna-need-it

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.

Unveiling the Blueprint of Software Architectures: The Foundation of Digital Development

In the intricate world of software development, one essential factor underpins the creation of every digital marvel – software architectures. These structural frameworks are the unsung heroes, the master plans guiding the intricate construction of software applications. They serve as the invisible hand that shapes the organization of an application, defining its key components, the relationships between them, and the fundamental principles that govern their interactions.

Software architectures, though often behind the scenes, are pivotal in crafting software that’s not just functional but also efficient and tailored to meet specific requirements. They’re akin to the architects of a grand skyscraper, ensuring that each piece falls into place seamlessly, resulting in a robust and scalable digital structure.

Understanding the diverse architectural styles empowers developers to choose the right path for their projects. It’s akin to a skilled craftsman selecting the finest tools and materials for a unique creation. The choice of architecture significantly influences various aspects of a software system. It impacts the system’s performance, scalability, maintainability, security, and adaptability to change.

Embracing the versatility of architectural styles is akin to choosing different brushes for a painting. The software architects are the artists, and the blueprint they select is their canvas. As software development progresses, these architectures are not just abstract concepts; they become the very foundation upon which the digital world evolves.

From the blog CS-343 – Hieu Tran Blog by Trung Hiếu and used with permission of the author. All other rights reserved by the author.

The Art of Code Refactoring

Since we have been discussing refactoring in class recently, it got me interested in finding out more about what makes refactoring… well “refactoring”. I found this interesting article “Refactoring vs. Defactoring” by Nicolas Carlo, a French-Canadian Software Engineer, which describes the difference between refactoring and debugging while also introducing the idea of “defactoring”.

The article starts with the definition of refactoring, which according to Martin Fowler is “a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior”. In simpler terms, refactoring is all about tidying up the interior of a program while keeping the exterior the same.
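Here is a tiny, hypothetical Java example of that definition in action; the price calculation behaves identically before and after, but the discount rule now has a name:

```java
class Pricing {

    // Before: the discount rule is buried in an anonymous expression.
    double priceBefore(double base, int quantity) {
        return base * quantity * (quantity > 10 ? 0.9 : 1.0);
    }

    // After: same observable behavior, restructured for readability.
    double priceAfter(double base, int quantity) {
        return base * quantity * bulkDiscount(quantity);
    }

    private double bulkDiscount(int quantity) {
        return quantity > 10 ? 0.9 : 1.0; // 10% off for bulk orders
    }
}
```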

Nicolas is clear, however, that fixing a bug, adding features, or changing features is not refactoring, but he points out the importance of refactoring code before altering the functionality of a program.

Nicolas states that, in his experience, things that help solidify this distinction are to make distinct commits that separate refactoring commits from change commits, to commit more frequently, to prefix commit messages with R or C to specify the kind of change made, and to learn how to use automated refactoring to improve the health of one’s code.

Nicolas also explains by following these practices he feels as if his work has become much safer and simpler than before. With his newfound awareness, he feels as if he can put the best quality into his work. He also gives a brief rundown of how thinking of Refactoring and Changes as two hats you wear when programming can also help increase developer awareness.

While we discussed refactoring, I thought it was interesting how Nicolas framed defactoring as an opposing process to refactoring in the title of the article, but I came to find out it is not that at all. Defactoring is described by Nicolas as “cognitive refactoring”, which is done by making the code less abstract in places where abstraction is no longer required.

He says that when working with legacy code, he notes some items such as temporary variables that are just not needed in places where they were necessary in the past. By altering code to remove such variables, Nicolas signifies this process as “defactoring” since it removes old abstractions that just are not needed anymore.
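A small hypothetical sketch of that kind of cleanup, where a leftover temporary variable no longer earns its keep:

```java
class Totals {

    // Before: the extra variable once held a more complicated calculation,
    // but in today's code it is pure indirection.
    int totalBefore(int[] prices) {
        int runningTotal = 0;
        for (int price : prices) {
            runningTotal += price;
        }
        int result = runningTotal; // leftover abstraction
        return result;
    }

    // After "defactoring": the needless indirection is removed.
    int totalAfter(int[] prices) {
        int total = 0;
        for (int price : prices) {
            total += price;
        }
        return total;
    }
}
```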

After reading this article, I feel as if I have a stronger understanding of the importance of separating refactoring from normal changes, since it can make a dramatic difference in a program’s overall transparency. In my own work, I have realized that taking on one aspect at a time improves the cohesion and efficacy of the final product, but I never really thought about the importance of distinguishing changes from refactoring. Trying to be aware of this in the future will help me create the best version of my work possible by ensuring I have a more robust knowledge of a program’s behavior and added transparency of said program through my code alone.

Article Link: https://understandlegacycode.com/blog/refactoring-and-defactoring/

From the blog CS@Worcester – Eli's Corner of the Internet by Eli and used with permission of the author. All other rights reserved by the author.

Week 5 – A bit late but we’re getting there…

So it’s been a hot second since I set this blog up, and I apologize for the silence. Been busy focusing on homework and figuring out my work situation.

But with that aside, I just wanna talk about my past with GitHub and repositories before this class. I’ve actually used GitHub many times before, because I collaborate with a modding community. We focus on modding a video game known as Luxor, a classic PC game from the 2000s that I’ll share gameplay of below.

As for what a mod of this game entails, here’s an example of one of my favorites from recent, Hollow, made by my friend Dommo:

A lot of effort has been put into these mods, and I’ve contributed to a lot of them, and even made my own. I have no recordings of it, unfortunately, but I swear it exists, haha.

Though as of recent, we’ve been discussing how to properly archive mods. For the longest time, we’ve been using our Discord server for modding to store them, but that poses an issue: Many people might not have access to Discord due to their countries, operating systems, or various other reasons.

This led to some people moving over to GitHub, which was one of my first times learning how it actually properly worked. Before this, I simply downloaded stuff from it, but I learned the basics of how to push and pull repositories and have a local clone to work on and collaborate with multiple people.

Currently one of the biggest projects being developed using GitHub is OpenSMCE (https://github.com/jakubg1/OpenSMCE) which is a game engine being built off of the Love2D engine to allow us to have an opensource engine to work off of for our mods, as opposed to the limited and clunky engine we use currently with the original game.

The reason I discuss this is that the new information I’m learning in these classes is inspiring me to help work on, and learn the process of being on, a team doing software/engine development with Jakub, the developer of OpenSMCE. This has been an application I’ve been very excited to see reach a full release, and being able to say I contributed to it and helped it get there would be amazing.

Hopefully as the semester goes on, with the lessons I’m learning about how to create an application as well as work in a collaborative environment, I’ll end up contributing to this project, and maybe I can even use this blog as a way to discuss the ongoing developments and issues we’ve been facing with the development of OpenSMCE. It would be interesting, and I will probably reach out to Jakub within the next week about it.

Anyways, that’s all I have for this week, until next time!

-Tempura

From the blog CS@Worcester – You're Telling Me A Shrimp Wrote This Code?! by tempurashrimple and used with permission of the author. All other rights reserved by the author.

Data Redundancy – Relevance in Software Systems and Websites

In today’s world, businesses, organizations, and other entities that software and web developers consider “clients” heavily rely on being able to efficiently collect, access, and otherwise manage data for their day-to-day operations. For many, losing access to databases or similar outages hinders their ability to continue operations. In Data Redundancy: Meaning and Importance, author Charlotte White discusses data redundancy and some basic strategies and implementations to address these vulnerabilities.

Data redundancy goes beyond simply having backups of existing data (although they’re an important component); it’s a proactive plan to prevent data loss and maintain smooth operations in the case of a server shutdown, hardware malfunction, or other major disruptive issue. It’s crucial for ensuring the continuity of business operations, as website downtime often leads to financial losses, especially for new websites or those with low traffic. Outages can also impact search engine rankings, as uptime is a factor commonly considered by search algorithms. Furthermore, data loss can cause crashes or issues in other systems and the loss of customer information, business details, and other critical and/or confidential information that is essential to an organization’s success and reputation.

How Redundancy Works: Effective redundancy designs reduce dependency on any single copy of data or any single data center. They commonly implement the 3-2-1 backup rule, which means keeping three copies of data on two different types of storage media, with one copy stored offsite. Redundancy strategies should also consider hardware: many servers store data on hard disk drives (HDDs), which can fail through simple wear and tear. Some hosting companies use RAID (Redundant Array of Independent Disks) and un-RAID solutions to mirror data from HDDs to other storage devices, minimizing the impact of HDD failures.

Recently in CS343, we’ve been looking at software architectures and strategies for organizing systems that could be realistically implemented to address clients’ needs. In particular, we’ve been considering the differences and strengths/weaknesses between a simpler architecture such as the Monolith versus a more complex architecture such as the MicroServices model, with several intercommunicating systems.

Most of the scenarios we discussed involved the ease of pushing out updates, but I was left wondering about the repercussions of a database or system going totally offline and the ways to manage that possibility. For businesses involved in eCommerce, uptime is money, in terms of sales as well as maintaining search engine optimization. Given how damaging a disruption like this could be, data redundancy plans are an important consideration when planning and setting up a website or system. Understanding the value of data redundancy and how such plans are implemented is an asset in planning and designing software systems and projects, and generally beneficial for computer science students and professionals.

Source:
1. Data Redundancy Meaning and Importance: A Complete Guide | ResellerClub India Blog

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.

Week of October 9, 2023

https://www.atlassian.com/microservices/microservices-architecture/microservices-vs-monolith

Since learning about different software architecture styles like the monolithic architecture, the client-server architecture and the microservices architecture, I’ve been curious how large-scale applications transition from one architecture to another as the project grows in scale. I found this blog post on the Atlassian website breaking down the differences between the monolithic architecture and the microservices architecture, as well as telling the story of Netflix’s innovative migration from a monolithic architecture to a microservices architecture.

The article begins with the example of Netflix’s transition between architectures. Netflix was growing rapidly by 2009 and needed to expand its software infrastructure to meet the massive demand. Before “microservices” as a term was in wide usage, Netflix was one of the first major companies to migrate to a microservices architecture, and in 2015 it earned a JAX Special Jury award for its successful deployment. Netflix’s new architecture would model itself on DevOps, defined by Amazon as “the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity” (https://aws.amazon.com/devops/what-is-devops/).

Following the story of Netflix’s change in infrastructure model, the article continues with an explanation of the monolithic architecture style. The monolithic architecture is a traditional design in which the entire application is housed in a single, self-contained server. This architecture is simple to understand and easy to use as a foundation for an application. The major drawback of the monolith, however, is the difficulty of updating the application: making changes to the code base requires bringing the entire service offline. Monolithic architectures also do not scale well with the growth of the application.

The microservices architecture addresses some of the disadvantages that come along with a monolith architecture. The application is divided into independent services each with their own databases and methods. With this architecture model, only the components of the application that require changes need to be taken down, leaving the other components of the application free to continue working.

One inherent barrier to using the microservices architecture is the expense of multiple machines to host the different microservices, as well as storage space for their accompanying databases. It may only be beneficial for an application to transition to a microservices model once it has reached a certain scale. Small to mid-size applications may be perfectly well served by monolith architectures for much less cost than hosting the application across a microservices architecture.

From the blog CS@Worcester – Michael's Programming Blog by mikesprogrammingblog and used with permission of the author. All other rights reserved by the author.