Category Archives: CS-443

Spies and Their Role in Software Testing

As I was doing some at-home research on stubs and mocking for one of my courses, I came across the idea of spies. Unlike stubs and mocks, which allow the program and tests to run by giving canned answers or standing in for unfinished code, spies fill a much-needed but previously unfilled role.

Spies are used to verify that a function was called. There is of course more to them than that, but that is their basic function.

On a deeper level, a spy can tell not only whether a call to a function was made, but also how many calls were made, what arguments were passed, and whether a specific argument was passed to the function.

Abby Campbell has great examples of these in her blog post, “Spies, Stubs, and Mocks: An Introduction to Testing Strategies,” where she displays easy-to-understand code. I would definitely recommend taking a look at them; her post also goes in depth on stubs and mocking.

When writing test cases, the value of adding a spy to ensure a thorough case can’t be overstated. Imagine a simple test case that uses a stub: without a spy, you can’t be sure the correct function was called unless every function returns a different value, which would be inefficient to set up. By using a spy, the function called is checked, the arguments passed are checked, and the output can even be checked as well, leaving little to no room for error in the test case aside from human error.
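The checks described above can be sketched with Python’s standard-library `unittest.mock`, whose `MagicMock(wraps=...)` behaves like a spy. The blog doesn’t tie spies to any particular language, and the `send_email` function here is invented purely for illustration:

```python
from unittest.mock import MagicMock

# Hypothetical function under observation; a real one might talk to a mail server.
def send_email(address, subject):
    return f"sent to {address}"

# Wrapping the real function lets the spy record calls
# while still delegating to the original implementation.
spy = MagicMock(wraps=send_email)

spy("alice@example.com", "Welcome")

# The spy answers the three questions from the post:
assert spy.called                     # was the function called at all?
assert spy.call_count == 1            # how many times was it called?
spy.assert_called_with("alice@example.com", "Welcome")  # with which arguments?
```

For the Java tooling used in many courses, Mockito offers the equivalent through its `spy()` and `verify()` methods.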

With the addition of spies to our arsenal of software testing tools, we gain a reliable way to verify correct function calls and arguments. I plan on carrying this new tool with me throughout the rest of my career, as it allows for much more efficient, effective, and sound testing.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.

Behavioral Testing

Source: https://keploy.io/blog/community/understanding-different-types-of-behavioral-unit-tests

Behavioral unit tests validate how code units operate under certain conditions, allowing developers to ensure that the software or application is working as it should. Behavioral unit tests focus on specific pieces of code; they help developers find bugs early and are based on real scenarios. They lead to improved code quality because this kind of unit testing ensures that the software meets the expectations of the user, and it allows for easier refactoring. The key types of behavioral unit tests include happy path tests, negative tests, boundary tests, error handling tests, state transition tests, performance-driven tests, and integration-friendly tests. The one that caught my attention was the performance-driven test. These tests validate performance under specified constraints, such as handling 10,000 queries, and are run to ensure that performance remains acceptable under various loads. This caught my attention because in my cloud computing class I was loading files with millions of data entries and the performance suffered, which highlights the importance of unit testing under conditions like these.
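As a rough sketch of what the happy-path, boundary, and negative categories from the article look like in practice, here is a hypothetical `discount` function tested three ways with Python’s built-in `unittest` (the function and its values are invented for illustration; the article itself names JUnit/Mockito, pytest, and Jest as typical tools):

```python
import unittest

def discount(price, percent):
    """Apply a percentage discount; reject out-of-range input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountBehaviorTests(unittest.TestCase):
    def test_happy_path(self):
        # Typical valid input behaves as a user would expect.
        self.assertEqual(discount(100.0, 25), 75.0)

    def test_boundary(self):
        # Edge values at the limits of the valid range.
        self.assertEqual(discount(100.0, 0), 100.0)
        self.assertEqual(discount(100.0, 100), 0.0)

    def test_negative(self):
        # Invalid input is rejected rather than silently accepted.
        with self.assertRaises(ValueError):
            discount(100.0, 150)
```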

The difference between functional and behavioral unit tests is that functional tests validate the system’s function overall, whereas behavioral tests focus on specific pieces of code to make sure they behave as expected under various conditions. Behavioral unit tests should be run every time code is built to make sure changes in the code don’t cause problems. Tools that can be used for this kind of testing include JUnit/Mockito for Java, pytest for Python, and Jest for JavaScript. I chose this article because we use JUnit/Mockito in class, and I thought it’d be wise to expand my knowledge of other unit tests. It’s good to get reassurance from all of the sources I’ve been learning from that unit testing is important, because it is very apparent that many different scenarios can cause many different problems in the production of code, software, and applications.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Automation Tools

This week in class, we did activities based on static testing, analyzing code with Gradle. Gradle is not a new tool we’re just hearing about, as we’ve worked with it throughout the semester. Since it is an automation tool with a lot of cool features, I took a further look into automation in software development. I wanted to know what the best features were, as well as the potential drawbacks of using automation. I ended up finding a blog called “Automation in Software Development: Pros, Cons, and Tools.”

What Else can be Automated?

We’ve learned by now that software testing can be automated. But is that it? Absolutely not. Some other important software processes can be automated too. One of them is CI/CD (Continuous Integration/Continuous Deployment). Automating continuous integration allows changes in code by multiple developers to be continuously integrated into a common version control repository throughout the day, after which tests are run automatically, making sure that newly written code is not interfering with existing code. Automating continuous deployment means that code that has been integrated and tested is released to production automatically. Releases are quicker due to the automated deployment, and better because every new line of code is tested before even being integrated.

Automation can also be used to monitor and maintain code. There are automation tools that help analyze data, identify issues, and provide notifications about a deployed software product. With automation, issues can even be resolved automatically, which is really helpful because it drastically reduces the time and resources spent correcting errors.

Pros

Three of the largest benefits that come with automation are a reduction in manual workload, lower development costs, and an increase in software quality. When tasks are automated, developers can use that now-free time to find ways to improve the software. This way, there is a better chance of the software having more advanced features, and of customers being satisfied with the product. Many errors and defects in a deployed product come from human errors made during the development of the software. This is where automated testing comes in. Testing tools such as Gradle, JUnit, and Selenium were created for this purpose. Automated testing tools provide feedback on code almost instantly compared to how long manual testing might take, which, as said before, leads to less time and money being spent to rectify errors. Reduced time and cost are two of the automation benefits that most persuade businesses to adopt it.

Cons

The challenges most often faced when implementing automation tend to be the complexity of the tools, financial constraints, and human resistance. Automation tools can be tough for a corporation to set up, and some require skills that a corporation’s employees might not have, meaning those employees have to be trained, which means more time and money spent. Though the last paragraph mentioned how automation lowers costs, it can be quite expensive when first implemented. From purchasing the required equipment to paying subscription and renewal fees, automation on a large scale seems to be a realistic option only for large companies, and the return on investment might not be immediate either. There is also a concern that automation will soon replace human employees. This can create uncertainty and division in the workplace, because employees might not know whether they are at risk of being let go, so they might object to using automation.

Reference

https://www.orientsoftware.com/blog/automation-in-software-development/

From the blog CS@Worcester – Blog del William by William Cordor and used with permission of the author. All other rights reserved by the author.

Black Box VS White Box Testing

Hello! Today’s blog is about different types of testing methods. This is a topic that we learnt about very early on in my Software Quality Assurance and Testing class this semester. I found an interesting article that expanded on what we already learnt in class and even introduced a new type of testing: gray box testing. The article is linked here: Black box vs white box vs grey box testing

The definition of black box testing that we were given during class was testing based on the specifications of a component/system only. The definition of white box testing that we were given during class was testing based only on the internal structure of a component/system.

Now, this may make sense to some people, but many people who don’t deal with code may have trouble understanding exactly what this means. The article I read helped me understand it a lot more, and that is the main goal of this blog post as well.

Black box testing is exactly how it sounds. Imagine a box, and everything inside it is blacked out. You cannot see what is going on inside of the system, you can only see it from a user’s perspective.

With white box testing, you can actually see all of the internal systems and functions going on. The user can actually go into detail and analyze what is occurring.

Now that I summarized both of the main types of box testing, I want to introduce the idea of gray box testing as well. This testing is kind of a combination of both black box and white box testing. With gray box testing, you can access the code from a user’s perspective, but you also have access to some of what’s inside the code as well.

Out of all three types of testing, black box testing is the most user-friendly. It is designed for users who do not have much knowledge of the actual internal functions of the code but still need to be able to use the software. Think of how websites have lots and lots of code behind them (including the website you are reading this blog post on), but the experience is simplified for the user with labels and buttons. I do not need to know how to read HTML to be able to use this website. Although that isn’t testing itself, it is one example to help you visualize what this type of testing deals with.

White box testing, on the other hand, requires the tester to have in-depth knowledge of the code at hand. Since white box testing works only with the internals, the only way to make use of it is to know how to work with the code based on your own knowledge.
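A small sketch may make the contrast concrete. The `shipping_cost` function below is hypothetical: a black-box tester sees only documented inputs and outputs, while a white-box tester, who has read the code, deliberately exercises each internal branch and its boundary:

```python
def shipping_cost(weight_kg):
    # Internal structure: a flat fee up to 1 kg, then a per-kg charge.
    if weight_kg <= 1:
        return 5.0
    return 5.0 + (weight_kg - 1) * 2.0

# Black-box view: check only the documented behavior,
# without looking at how the cost is computed.
assert shipping_cost(0.5) == 5.0
assert shipping_cost(3) == 9.0

# White-box view: a tester who has read the code makes sure
# every branch is exercised, including the boundary at 1 kg.
assert shipping_cost(1) == 5.0      # boundary of the first branch
assert shipping_cost(1.5) == 6.0    # second branch
```

Gray box testing would sit in between: the tester uses the public interface but lets partial knowledge of the internals (such as the 1 kg cutoff) guide which inputs to try.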

Overall, this was a very intriguing topic to me, which is why I decided to write about it on this blog. Both the classwork and the articles I read were very helpful and informative.

From the blog cs@worcester – Akshay's Blog by Akshay Ganesh and used with permission of the author. All other rights reserved by the author.

Understanding Mocking in Software Testing

Software testing is crucial to ensure that the system acts as expected. However, when dealing with complex systems, it can be challenging to test components in isolation since they rely on external systems like databases, APIs, or services. This is where mocking is used, a technique that employs test doubles. Mocking allows developers to simulate the behavior of real objects within a test environment, isolating the components they want to test. This blog post explains what mocking is, how it is applied in unit testing, the categories of test doubles, best practices, and more.

What is Mocking?
Mocking is the act of creating test doubles, which are stand-ins for real objects that behave in pre-defined ways. These doubles take the place of actual objects or services and allow developers to isolate chunks of code. Mocking also allows them to simulate edge cases, mistakes, or specific situations that could be hard to replicate in the real world. For instance, instead of communicating with a database, a developer can use a mock object that mimics database responses. This offers greater control over the testing environment, increases testing speed, and allows issues to be found early.

Understanding Test Doubles
It is important to understand test doubles to fully comprehend mocking. Test doubles are objects that replace actual components of the system for the purpose of testing. They share the same interface as the actual object but act in a controlled fashion. There are different types of test doubles:

Mocks: Mocks are pre-programmed objects that carry expectations about how they will be called. Mocks are used to require that particular interactions, such as function calls with specified arguments, occur while the test runs. When the interactions do not meet expectations, the test fails.

Stubs: Stubs do not care about interactions. They simply provide pre-defined responses to method calls so that the test can just go ahead and not worry about the actual component behavior.

Fakes: Fakes are more fully featured test doubles with simplified working implementations of real components. For example, an in-memory database simulating a live database can be used as a fake to speed up testing without relying on external systems.

Spies: Spies are similar to mocks but are employed to record interactions with an object. You can inspect the spy after a test to ensure that the expected methods were invoked with the correct parameters. Unlike mocks, spies will not make the test fail just because an unexpected interaction occurred.
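Using Python’s standard-library `unittest.mock` as one possible illustration (the user-store and mailer names here are invented), three of these doubles might look like the following sketch; a stub is only asked for answers, a mock is verified afterwards, and a fake is a small working implementation written by hand:

```python
from unittest.mock import Mock

# Stub: only supplies a canned answer; we never inspect how it was used.
stub_db = Mock()
stub_db.get_user.return_value = {"name": "Ada"}
assert stub_db.get_user(42)["name"] == "Ada"

# Mock: we assert afterwards that the expected interaction happened.
mock_mailer = Mock()
mock_mailer.send("ada@example.com", "hello")
mock_mailer.send.assert_called_once_with("ada@example.com", "hello")

# Fake: a lightweight working implementation, e.g. an in-memory "database".
class FakeUserStore:
    def __init__(self):
        self._users = {}
    def save(self, uid, user):
        self._users[uid] = user
    def get(self, uid):
        return self._users.get(uid)

fake = FakeUserStore()
fake.save(1, "Grace")
assert fake.get(1) == "Grace"
```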

The Role of Mocking in Unit Testing
Unit testing is testing individual pieces, such as functions or methods, in isolation. But most pieces rely on external services, such as databases or APIs. These dependencies add complexity, unpredictability, and outside factors that can get in the way of testing.

Mocking enables developers to test the unit under test in isolation by substituting external dependencies with controlled, fake objects. This ensures that any problems encountered during the test are a result of the code being tested, not the external systems it depends on.

Mocking also makes it easy to test edge cases and error conditions. For example, you can use a mock object to throw an exception or return a given value, so you can see how your code handles these situations. In addition, mocking makes tests faster because it avoids the overhead of invoking real systems like databases or APIs.
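For example, with Python’s `unittest.mock`, a `side_effect` can make a fake dependency raise an exception, simulating a failure that would be hard to produce on demand against a real system. The `fetch_greeting` function and the client interface below are assumptions made for this sketch:

```python
from unittest.mock import Mock

# Hypothetical unit under test that wraps an external API client.
def fetch_greeting(client):
    try:
        return client.get("/greeting")
    except ConnectionError:
        return "offline"

# side_effect makes the mock raise, simulating a network failure
# that would be hard to trigger against the real API.
flaky_client = Mock()
flaky_client.get.side_effect = ConnectionError("network down")

assert fetch_greeting(flaky_client) == "offline"
```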

Mocking Frameworks: Mockito and Beyond
Various mocking libraries are utilized by programmers to craft and manipulate mocks for unit testing. Among the most commonly used in the Java community is Mockito. Mockito makes it easy to write mock objects, specify their behavior, and confirm interactions in an easy-to-read manner.

Highlights of Mockito include:

Behavior Verification: One can assert that certain methods were called with the right arguments.
Stubbing: Mockito allows you to define return values for mock methods so that various scenarios can be tested.
Argument Matchers: It provides flexible argument matchers for verifying method calls with a range of values.
Other than Mockito, libraries like JMock and EasyMock can also be used alongside JUnit 5. For Python developers, the unittest.mock module is available. In the .NET ecosystem, libraries like Moq and NSubstitute are commonly used. For JavaScript, Sinon.js is the go-to library for mocking, stubbing, and spying.
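Mirroring Mockito’s behavior verification, stubbing, and argument matchers, Python’s `unittest.mock` offers `assert_called_with`, `return_value`, and the `ANY` matcher. A minimal sketch, with an invented logger object standing in for a real dependency:

```python
from unittest.mock import Mock, ANY

logger = Mock()

# Stubbing: define a return value for a mock method.
logger.format.return_value = "[INFO] user 42 logged in"
assert logger.format("user 42 logged in") == "[INFO] user 42 logged in"

logger.log("INFO", "user 42 logged in")

# Behavior verification: the right method was called exactly once...
logger.log.assert_called_once()

# ...and an argument matcher (ANY) lets us ignore values we don't care about.
logger.log.assert_called_with("INFO", ANY)
```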

Best Practices in Mocking
As terrific as mocking is, there is a best-practice way of doing it that keeps tests meaningful and sustainable. Here are a few rules of thumb to bear in mind:

Mock Only What You Own: Mock only entities you own, such as classes or methods that you have created. Mocking third-party APIs or external dependencies will lead to brittle tests, which will be broken when outer dependencies change.

Keep Mocks Simple: Don’t overcomplicate mocks with too many configurations or behaviors. Simple mocks are more maintainable and understandable.

Avoid Over-Mocking: Over-mocking your tests can make them too implementation-focused. Mock only what’s required for the test, and use real objects when possible.

Assert Behavior, Not Implementation: Tests must assert the system’s behavior is right, not how the system implements the behavior. Focus on asserting the right methods with the right arguments are called, rather than making assertions about how the system works internally.

Use Mocks to Isolate Tests: Use mocks to isolate tests from slow or flaky external dependencies like databases or networks. This results in faster and more deterministic tests.

Clear Teardown and Setup: Ensure that the mocks are created before each test and destroyed thereafter. This results in tests that are repeatable and don’t produce any side effects.

Conclusion
Mocking is an immensely valuable software testing strategy that gives developers a way to isolate and test individual components apart from outside dependencies. Through the use of test doubles like mocks, stubs, fakes, and spies, programmers are able to simulate real conditions, test at the boundaries, and make their tests more reliable and quicker. Good practices must be followed: mock only what you own, keep mocks as plain as possible, and assert behavior, not implementation. Applied in the right way, mocking is a great friend in creating robust, stable, high-quality software.

Personal Reflection

I find mocking to be an interesting approach that enables specific and effective testing. In this class, Quality Assurance & Testing, I’ve gained insight into how crucial it is to isolate the units being tested in order to check their functionality in real-world settings. Precisely, I’ve understood how beneficial mocking can be in unit testing in order to enable the isolation of certain interactions and edge cases.

I also believe that, as developers, we tend to over-test or rely too heavily on mocks, especially when working with complex systems. Reflecting back on my own experience, I will keep in mind that getting the balance right, mocking when strictly required and testing behavior, not implementation, is the key to writing meaningful and sustainable tests. This approach helps us ensure that the code is useful and also adjustable when it encounters future changes, which is, after all, what any well-designed testing system is hoping for.

Reference:

What is Mocking? An Introduction to Test Doubles by GeeksforGeeks, Available at: https://www.geeksforgeeks.org/mocking-an-introduction-to-test-doubles/.

From the blog Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Testing Documentation

For this blog post I read an article about test documentation. We haven’t talked about this in class yet, so I was curious to learn how documentation for testing differs from normal documentation. The blog post starts by stressing the importance of test documentation and how it helps with consistency, structure, and record keeping. The author then lays out five key elements of good testing standards to keep in mind when working on a project.

The first point is that documentation should define the boundaries and scope of the project by detailing what the tests are testing for and how far the functionality of the application goes. This also helps with efficiency, because it keeps people on the important objectives rather than getting lost working on things that are not needed.

The next point is that documentation should reflect your testing strategy and approach. This includes mentioning what level of test you are running: unit, integration, or user testing (which is what my last blog talked about). It should also define the project specifications, the reasoning for the test, and why it is necessary to ensure functionality.

A third element of good testing documentation is to detail the software, hardware, equipment, and configurations for the testing, to reduce the number of variables that can account for unexpected and untested program behavior.

Another key point is to have a test schedule and milestones as part of your outline and documentation, to assist in workflow and keep large teams on track.

The final part is the approach to be taken for defect management and error reporting. This facilitates improvement by staying consistent with company standards and working towards a complete set of tests.

The author summarizes his suggestions by saying that all comments and documentation should be consistent, clear, and regularly updated.

I wanted to look into a blog post about documentation because I know it is important, and I personally rely on in-depth documentation when looking at a new project for the first time. In this and other classes, proper annotation in code isn’t taught because of everything else that needs to be covered, so I thought it would be a good topic to research on my own. With certain testing tools, it can sometimes seem that documentation is more than what is needed, thanks to the detailed automated reports that come with testing; but when tracing code or looking through tests that have failed for any number of reasons, it can be invaluable to have comments that describe a method’s intended function. Going forward, with the next project I do that involves testing, I will make an effort to write proper documentation that follows the five elements described in the blog.

Test Management 101: Creating Comprehensive Test Documentation

From the blog CS@Worcester – Computer Science Blog by dzona1 and used with permission of the author. All other rights reserved by the author.

What is Integration Testing?

Integration Testing is a key phase in the software testing lifecycle where individual software modules are combined and tested as a group. While unit testing verifies that each module works independently, integration testing checks how these modules interact with each other. In simple terms, Integration Testing is where you check if different modules in your application play nice together. Each module might work fine on its own (thanks to unit testing), but once you connect them, all sorts of bugs can pop up—data not flowing correctly, broken links, miscommunication between components, etc. That’s why this type of testing focuses on the interaction between modules. You’ll hear it called things like “I & T,” “String Testing,” or “Thread Testing.” All fancy names for the same thing—making sure things work together.

Why is Integration Testing Important?

Even if all modules pass unit testing, defects can arise when modules are combined. This can happen due to:

  • Differences in how developers code or interpret requirements
  • Changes in client requirements that weren’t unit tested
  • Faulty interfaces with databases or external systems
  • Inadequate exception handling

Integration testing helps identify these issues early, ensuring seamless data flow and communication between components.

Types of Integration Testing

There are several strategies to conduct integration testing:

1. Big Bang Testing
All modules are integrated and tested simultaneously.

  • Pros: Simple for small systems.
  • Cons: Difficult to isolate defects, delays testing until all modules are ready.

2. Incremental Testing
Modules are integrated and tested step-by-step.

  • Bottom-Up: Test lower-level modules first, then move up.
  • Top-Down: Start with higher-level modules and use stubs to simulate lower ones.
  • Sandwich: A mix of both, testing top and bottom modules simultaneously.

Stubs and Drivers

These are dummy components used in integration testing:

Driver: Simulates a higher-level module that calls the module under test.

Stub: Simulates a lower-level module that is called by the module under test.
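A minimal sketch of the two roles, with invented module names: `order_total` is the unit under test, `price_stub` stands in for a lower-level pricing module that isn’t built yet, and `driver` plays the part of the missing higher-level caller:

```python
# Module under test: mid-level logic that depends on a lower-level module.
def order_total(get_price, items):
    return sum(get_price(item) for item in items)

# Stub: stands in for the lower-level pricing module (not built yet),
# returning canned prices instead of querying a real catalog.
def price_stub(item):
    return {"apple": 1.0, "bread": 2.5}.get(item, 0.0)

# Driver: a simple harness that plays the role of the (missing)
# higher-level module and calls the unit under test.
def driver():
    total = order_total(price_stub, ["apple", "bread"])
    print(f"order total: {total}")
    return total

driver()  # prints "order total: 3.5"
```

In top-down integration the stub would be replaced by the real pricing module once it exists; in bottom-up integration the driver would be replaced by the real higher-level caller.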

Final Thoughts

Integration Testing bridges the gap between unit testing and system testing. It ensures that individually functional modules work together as a complete system. Whether you’re using Big Bang or an incremental approach, thorough planning and detailed test cases are key to successful integration testing.

Reference:

https://www.guru99.com/integration-testing.html

From the blog The Bits & Bytes Universe by skarkonan and used with permission of the author. All other rights reserved by the author.

CS443: The Tribal Engineering Model (“the Spotify Model”)—An Autopsy

This codebase ain’t big enough for the two of us, pardner.

Something I think about quite frequently is how engineers seem to live in a world of their own creation.

No, really: open an app—look at a website—play a game of (virtual) cards. The designers of these programs each had their own take on how these experiences ought to be represented; the visual language, the information density, the flow from one experiential strand to another. These aren’t happenstance—they’re conscious decisions about how to approach a design.

But engineering is a bit like collectively building a diorama. Things look well enough in miniature, but the problems—with the product, and with the people working on it—become more clear the more you scale upwards.

This problem isn’t new, or all that far-fetched: engineers are specialists, and they’re good at what they do. Put two engineers together, and you’ve got yourself a crowd. Put three, and you have a quorum. Put a whole team of engineers on the same problem, and now you have a new problem.

(If you’ve ever wondered what happens when an engineer fears of being obsolesced out of their job, it’s not hard to imagine. It rhymes with “reductoring.” Alternatively, imagine a small, innocent codebase exploding into a million pieces, and you get the idea.)

So, how do you get engineers to play nice?

Plenty of solutions have been suggested; most of them involve Agile in some form or another.

(I personally don’t believe in it. I like—and have used—many of its byproducts, such as Continuous Delivery and Continuous Integration, but I don’t find that the underlying methodology behind the model itself is as universally applicable as it would have you believe.)

About a decade ago, there was a lot of buzz over Spotify having finally, definitively, solved the problem of widescale Agile implementation. It was highly ambitious—virtually utopian in its claimed economies-of-scale, with reports of unprecedented efficiency/productivity gains. Almost too good to be true.

Just one problem: it didn’t actually work. Also, they never actually used it, according to the testimony of one of the senior product managers involved in implementing it. Despite a grand showing in 2012, Scaling Agile @ Spotify was pretty much DOA before it ever got off the ground.

In hindsight, it should have been fairly obvious that things over at Spotify weren’t as brilliantine-sheened as the glowing whitepaper they said contained the key to their company’s success may have suggested. Years of complaints about incomprehensible UX decisions seemingly passed unilaterally with little user (or developer) input; a platform that abruptly pivoted towards delivering podcast and, strangely enough, audiobook content with almost the same priority as music streaming, and a lukewarm reception to the arrival of social networking capabilities.

So, what went wrong?

Firstly: every single organizational unit under this model, with the exception of Guilds, were ordered as essentially fixed, permanent structures. If a Squad was assigned to work on UX, that was their sole responsibility, unless a member had other duties through a Guild (more on that later). Accordingly, each Squad had equal representation within a Tribe, regardless of how important or pressing their work was. Further, each Squad had its own manager, who existed in direct opposition to three other managers in the same Tribe, who all had diametrically opposing interests due to representing different product segments. If an overriding voice was required, Spotify sought to preempt such issues by including a fifth Übermanager in a Tribe, who did not actually have a day-to-day purpose other than mediating disputes between managers, presumably because such instances were or were expected to be extremely common. (It is unknown whether this fifth manager was included in a similarly structured college of Middle Managers).

Worse yet, it becomes painfully evident how little work could get done under such a model. Because Tribes were, by design, interdependent on each other due to the cross-pollination of key devs through Guilds, a work blockage in one Tribe not only required the intervention of two or more Tribes, but required the key drivers of each entire Tribe to resolve it, preventing any of the involved Tribes from doing meaningful work. This is on top of the presupposition that each Squad had to have mastered Agile in small groups for the idea of an Agile economy-of-scale to even make sense.

Probably most damning, though, is the impulse to just double down on your own work when confronted by a deluge of meetings, progress reports, post mortems, and pulse checks. Rather than focusing on the interconnectedness of the Team-Tribe-Guild structure and how to best contribute as but a small part of a greater whole, many Spotify engineers simply did what they are instinctually led to do and submitted product as if working in small-groups.

This essentially created a “push” rather than “pull” system, where engineers would deliver the product they believed higher-ups expected them to deliver, rather than following the actual direction Spotify executives wanted to steer them in. When upper management noticed, this prompted “course corrections.” Sweeping, unilateral, and uniquely managerial.

And that was pretty much the end of Agile for Spotify.

Things look well enough in miniature, but the problems become more clear the more you scale upwards.

So, why even talk about this?

I’ve had plenty of good collaborative experiences with engineers in industry. I want to work with people—I have worked with people, and well. But I believe there’s a science to it, and as engineers, we simply do not focus enough on the human element of the equation. The Tribal model reminds me a lot of Mocks, and even the POGIL methods we’ve used in this course so far.

Mocks are a way to implement that at an individual, codebase level. POGIL teaches us to think for ourselves and expand our individual skills. I think what Spotify failed to recognize is that people do not instinctively know how to work together, especially tech-minded people like engineers. Interdependence isn’t something to be cultivated absentmindedly, as if we could ignore the effects of losing the one person who knows what piece of the pie to cut for a day, week, or more; rather, it’s something to be warded against.

The Guinness pie from O’Connor’s in Worcester, by the way, is life-changingly good.

Some food for thought.

Kevin N.

From the blog CS-443 – Kevin D. Nguyen by Kevin Nguyen and used with permission of the author. All other rights reserved by the author.

Customer and Enterprise: Why is one valued over the other

Photo by Anna Nekrashevich on Pexels.com

Hello, Debug Ducker here. Have you ever noticed how low-quality some software feels, despite being made by a well-known company? That’s how I feel when it comes to video games.

It was a thought that came to me during class when a professor said that if a company releases buggy, untested software, it may ruin the company’s reputation. A student asked: well, what makes the game industry different, then? For those in the know, the game industry has been plagued with the problem of releasing products in a buggy or half-finished state that they expect the consumer to buy.

You would think that after years of doing such things, game development companies would be careful about development. Many gamers have criticized this ongoing problem within the industry, and some gaming companies are seen in a poor light. Though such a reputation never seems to completely ruin them, it does make them less trustworthy. So why are video games different in terms of software testing?

This question kept bothering me, so I brought it up with a friend who may know more. He said it is because the average consumer is not the most important person to avoid disappointing; in the software testing field, the one you don't want to hand a poor or low-quality product is a company or a business, since they aren't the average customer and have a lot more money to spend.

This is where I did a bit more digging and found out a lot of interesting things about making a product for the average consumer versus making one for a company.

There is a lot of money in making products for companies. The graph of Dell's revenue throughout the years showcases how much money can be made from enterprise products.

As you can see, commercial products, the products that businesses themselves purchase, make up most of Dell's revenue compared to sales to the average consumer. In a way, I can see those customers being prioritized when it comes to reputation; you don't want bad relations with the ones bringing in the money.

There is possibly a more logical than financial answer to the question. Consumers are the common people, and there are a lot of them. They may have different reactions to the product, but since there are so many of them, there will always be someone willing to buy a product despite its quality. Then comes the company, which probably needs the product to perform a service and would prefer it thoroughly tested before buying it.

With this, I can understand a little better why it is so important to test products in software testing, especially when it comes to businesses. We need them for their continued support, and they bring in a lot of money.

Admin. “Dell Statistics 2024 by User and Revenue.” KMA Solutions, 22 Apr. 2024, http://www.kma.ie/dell-statistics-2024-by-user-and-revenue/.

From the blog CS@Worcester – Debug Duck by debugducker and used with permission of the author. All other rights reserved by the author.

Different Types of Behavioral Unit Tests

Hello everyone,

The topic of this week’s blog is Behavioral Testing. Testing your code is one of the most important skills every programmer has to master in their professional career. There are many ways to test your code, and each technique works differently for a specific purpose. Some are similar enough to be compared with one another but different enough to be kept separate, and what we will focus on today are the different types of behavioral unit tests. As the name suggests, Behavioral Testing focuses on how your code behaves rather than how it is written. While it may sound plain and simple, this type of testing includes many different approaches programmers can use to test their code.

The first one you always start with is the “Happy Path Test,” which checks that everything works the way it should. The first goal of any project is to make sure that it runs and outputs the wanted results; after that, you try to see how it reacts when things get more complicated.

Next we have “Negative Tests,” which you use to see how the program reacts when bad inputs are entered on purpose. This is used to check that specific features behave correctly, like entering the wrong password: if that happens, the program should give you another chance to enter the password or guide you through resetting it. This makes the program more secure and trustworthy for all users.

The next most common type of Behavioral Testing is “Boundary Tests,” which let you see how the code behaves when inputs at or just outside the edges of the wanted range are entered; they can also be used to check the limits of the code. This helps with scaling the program if things are predicted to grow, from the database to the number of users.

One of my favorite things about this blog is that it covers many key aspects everyone should know about Behavioral Testing. Some tips I learned from it: when writing this type of test, you should test one behavior at a time, since trying to test two at once defeats the purpose. You should also be very clear and describe exactly what you are trying to test. Another good habit is to simulate events, from successful ones to trying to break your code on purpose, to see how well it behaves in all conditions.
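To make the three types concrete, here is a minimal sketch in Python. The `check_password` function and its length limits are hypothetical examples of my own, not from the blog post; the point is only how a happy path test, a negative test, and boundary tests each exercise a different behavior of the same function.

```python
def check_password(password):
    """Accept passwords between 8 and 64 characters; reject anything else."""
    if not isinstance(password, str):
        raise TypeError("password must be a string")
    return 8 <= len(password) <= 64

# Happy path test: a normal, valid input should succeed.
assert check_password("correct-horse-battery") is True

# Negative test: a bad input entered on purpose should be rejected gracefully.
assert check_password("short") is False

# Boundary tests: inputs exactly at and just past the edges of the allowed range.
assert check_password("a" * 8) is True    # exactly at the lower bound
assert check_password("a" * 7) is False   # just below the lower bound
assert check_password("a" * 64) is True   # exactly at the upper bound
assert check_password("a" * 65) is False  # just above the upper bound
```

Note that each assertion tests exactly one behavior, matching the tip above about not combining two behaviors in a single test.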

In conclusion, Behavioral Testing is important because it not only catches errors in the code from early development through release, but also helps you understand how the code behaves in different scenarios. That understanding is valuable in its own right, and it indirectly makes debugging a lot easier.

Source:

https://keploy.io/blog/community/understanding-different-types-of-behavioral-unit-tests

From the blog Elio's Blog by Elio Ngjelo and used with permission of the author. All other rights reserved by the author.