Author Archives: gizmo10203

Design Patterns – Creational, Structural, and Behavioral

Design patterns are extremely useful within software design. Essentially, they are established solutions to common problems that come up in various contexts. We have learned a lot about design patterns this semester, especially the creational, structural, and behavioral categories. Since we have talked about them and worked with them a lot, I thought it would be a good idea to write a post about what each category is. A blog I found while researching the topic is called “Intro to Design Patterns” by Saverio Mazza. I think this blog does a great job of giving a brief overview of each of the three categories and how the patterns within them work. They are extremely helpful for anyone working in software design, and I feel like they will be used in any career I choose. I’m very grateful that I was able to learn more about these design patterns, and I hope my blog post will help you learn more too.

Creational Patterns: As the name suggests, creational patterns have to do with the way that objects are created. According to Mazza, “They aim to abstract the instantiation process, making a system independent of how its objects are created, composed, and represented.” Some examples of creational patterns include the singleton pattern, the factory method pattern, the abstract factory pattern, the builder pattern, and the prototype pattern.
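To make one of these concrete, here is a minimal Java sketch of the singleton pattern. The AppConfig name is just something I made up for illustration; the point is that the private constructor and the static accessor guarantee only one instance ever exists.

```java
// A minimal singleton sketch. AppConfig is a made-up name; the pattern is the
// private constructor plus the single static instance.
public final class AppConfig {
    private static AppConfig instance; // the one shared instance

    private AppConfig() { } // private constructor blocks outside creation

    public static synchronized AppConfig getInstance() {
        if (instance == null) {
            instance = new AppConfig(); // created lazily, exactly once
        }
        return instance;
    }

    public static void main(String[] args) {
        AppConfig a = AppConfig.getInstance();
        AppConfig b = AppConfig.getInstance();
        System.out.println(a == b); // true: both point to the same object
    }
}
```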

Structural Patterns: These types of patterns deal with how classes and objects are composed in order to form larger structures. They make sure that when one part of a system changes, the rest of it doesn’t need to. Similarly, they keep parts of the system decoupled, which can improve the system’s flexibility and reusability. Some examples of structural patterns include the adapter pattern, the composite pattern, the proxy pattern, the flyweight pattern, the bridge pattern, and the decorator pattern.
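Here is a small, made-up Java sketch of the adapter pattern, which is one way to see that decoupling in action: the client only knows the CelsiusSensor interface, and the adapter quietly converts the legacy sensor’s Fahrenheit readings.

```java
// Minimal adapter sketch (hypothetical names): client code expects Celsius,
// a legacy sensor reports Fahrenheit, and the adapter bridges the two
// without changing either side.
interface CelsiusSensor {
    double celsius();
}

class LegacyFahrenheitSensor {
    double readFahrenheit() { return 98.6; }
}

class FahrenheitAdapter implements CelsiusSensor {
    private final LegacyFahrenheitSensor legacy;
    FahrenheitAdapter(LegacyFahrenheitSensor legacy) { this.legacy = legacy; }
    public double celsius() {
        return (legacy.readFahrenheit() - 32) * 5.0 / 9.0; // convert on the fly
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        CelsiusSensor sensor = new FahrenheitAdapter(new LegacyFahrenheitSensor());
        System.out.println(sensor.celsius()); // 37.0
    }
}
```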

Behavioral Patterns: Behavioral patterns deal with algorithms and the assignment of responsibilities between objects. They describe the patterns of communication between objects and classes, as well as how responsibilities are divided between them. They let you concentrate just on the way that the objects are interconnected. Some examples of behavioral patterns are the observer pattern, the strategy pattern, the command pattern, the state pattern, the visitor pattern, the mediator pattern, the memento pattern, and the template method pattern.
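As a quick illustration, here is a hypothetical Java sketch of the strategy pattern: the pricing algorithm is an object the caller picks at runtime, so the checkout code never has to change when a new pricing rule is added.

```java
import java.util.List;

// Minimal strategy sketch (made-up names): the algorithm varies
// independently of the code that uses it.
interface PricingStrategy {
    double price(double base);
}

class RegularPricing implements PricingStrategy {
    public double price(double base) { return base; }
}

class SalePricing implements PricingStrategy {
    public double price(double base) { return base * 0.8; } // 20% off
}

public class StrategyDemo {
    static double checkout(List<Double> items, PricingStrategy strategy) {
        return items.stream().mapToDouble(strategy::price).sum();
    }

    public static void main(String[] args) {
        List<Double> cart = List.of(10.0, 20.0);
        System.out.println(checkout(cart, new RegularPricing())); // 30.0
        System.out.println(checkout(cart, new SalePricing()));    // 24.0
    }
}
```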

If you want to learn more about the specific patterns mentioned, I highly recommend checking out the linked blog for more information!

Link: https://medium.com/@saverio3107/intro-to-design-patterns-66f1aebe5be5

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

Object Oriented Programming – Abstraction, Encapsulation, Polymorphism, and Inheritance

Within object-oriented programming, there are four main pillars: abstraction, encapsulation, polymorphism, and inheritance. These four are essential to understanding object-oriented programming and why it is important. While researching, I found a blog called “Encapsulation, Abstraction, Inheritance, and Polymorphism” by Cole Davis which I believe does a great job of explaining all four of the pillars as well as why they are important. I chose to write about this topic as I use object-oriented programming all the time, and I plan to keep doing so in the future. Because of this, I wanted to share some information that I find very useful in understanding how it works, in case anyone else wants to do the same.

Abstraction: One of the first major pillars you’ll learn about is known as abstraction. Cole Davis does a great job at explaining this pillar, as shown in a quote from his blog: “Abstraction is the process of combining many functions into one. Think of a thermostat. Typically, a thermostat allows the user to change the target temperature, select different modes such as heating, cooling, or fan, and turn the unit on or off. When we use a thermostat, we are unaware of the intricacies that create these functionalities under the hood. By exposing only the necessary abstracted functions to the user, we make it easier for the user to use our programs.” I really enjoyed reading this example as it relates abstraction to real-life terms instead of just using coding terms, making it a lot easier to understand. Essentially, abstraction does the same thing. It makes our code easier to understand, allowing others to get a high-level understanding of our program.
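To tie Davis’s thermostat example back to code, here is a rough Java sketch of the same idea (the names are mine, not from his blog): the interface exposes only what the user needs, and the class hides the details.

```java
// A rough sketch of the thermostat idea (names are made up): the interface is
// the abstraction the user sees; everything else stays hidden in the class.
interface Thermostat {
    void setTargetTemperature(double degrees);
    void setMode(String mode); // e.g. "heat", "cool", "fan"
    void powerOff();
}

class HomeThermostat implements Thermostat {
    private double target;
    private String mode = "off";

    public void setTargetTemperature(double degrees) {
        target = degrees;
        recalibrate(); // hidden detail the user never has to know about
    }

    public void setMode(String mode) { this.mode = mode; }

    public void powerOff() { mode = "off"; }

    private void recalibrate() { /* internal sensor logic omitted */ }
}

public class AbstractionDemo {
    public static void main(String[] args) {
        Thermostat t = new HomeThermostat();
        t.setTargetTemperature(21.0); // the user only touches the simple surface
        t.setMode("heat");
    }
}
```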

Encapsulation: The second main pillar is known as encapsulation. Encapsulation is the idea of hiding and restricting access to the implementation details of our objects. Basically, this protects the data and functions of our code from being improperly accessed by anything other than our objects. It makes our code more robust and predictable, allowing others to see its purpose more clearly. Another major benefit of encapsulation is that it shows us precisely where implementation details can change, allowing us to safely modify our program.
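Here is a small sketch of encapsulation in Java, using a made-up BankAccount class: the balance field is private, so the only way to change it is through a method that enforces the rules.

```java
// Minimal encapsulation sketch (hypothetical BankAccount): outside code
// cannot put the object into an invalid state.
public class BankAccount {
    private double balance; // hidden implementation detail

    public void deposit(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        balance += amount;
    }

    public double getBalance() {
        return balance; // read access only; no direct writes from outside
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount();
        account.deposit(50.0);
        System.out.println(account.getBalance()); // 50.0
        // account.balance = -100; // would not compile: balance is private
    }
}
```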

Inheritance: The third main pillar is known as inheritance. According to the blog, “Inheritance is a technique that involves a child class ‘inheriting’ functionality from a parent or super class.” This increases reusability in our code and keeps it from being redundant.

Polymorphism: The fourth main pillar is known as polymorphism. Polymorphism is a hard one to explain, but it’s very easy to show. Essentially, it is when child classes run the same inherited method but produce different results. They use the same method, yet each can return something different based on what it does. Polymorphism makes inheritance more dynamic, letting us get much more value out of it.
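Since inheritance and polymorphism go hand in hand, here is one toy Java sketch that shows both (the animal names are made up for illustration): Dog and Cat inherit speak() from Animal, and the same call produces different results depending on the actual object.

```java
// Inheritance: Dog and Cat reuse Animal's structure instead of duplicating it.
// Polymorphism: the same speak() call behaves differently per subclass.
class Animal {
    String speak() { return "..."; }
}

class Dog extends Animal {
    @Override String speak() { return "Woof"; }
}

class Cat extends Animal {
    @Override String speak() { return "Meow"; }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        Animal[] animals = { new Dog(), new Cat() };
        for (Animal a : animals) {
            System.out.println(a.speak()); // same method call, different output
        }
    }
}
```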

Link: https://medium.com/@colebuildanddevelop/encapsulation-abstraction-inheritance-and-polymorphism-26aa98042d41

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

GRASP

Similar to SOLID, GRASP is an acronym for a set of design principles used in object-oriented programming to clearly define responsibilities within a software system. GRASP stands for General Responsibility Assignment Software Patterns, and focuses on 9 different patterns or principles. A blog on kamilgrzybek.com called “Grasp – General Responsibility Assignment Software Patterns Explained” does a really good job of explaining all nine of the different patterns, as well as giving helpful examples of each. According to the article, “GRASP is a set of exactly 9 General Responsibility Assignment Software Patterns. As I wrote above assignment of object responsibilities is one of the key skill of OOD. Every programmer and designer should be familiar with these patterns and what is more important – know how to apply them in every day work (by the way – the same assumptions should apply to SOLID principles).” This quote shows that the GRASP principles are just as important to know as the SOLID principles. I think that everyone should learn the GRASP principles alongside the SOLID principles, and I plan on trying to use them in the future.

The nine principles consist of the following:

Controller
Creator
High Cohesion
Indirection
Information Expert
Low Coupling
Polymorphism
Protected Variations
Pure Fabrication

Controller tends to represent the device that the software is running within, such as the overall system, or a use case scenario within which the system operation occurs. The choice of controller depends on the high-level design of the system being used, but in general we need to define the object which orchestrates the business transaction before it is processed.

Creator is a hard principle to explain, so I think the example that the article gives is helpful: “Problem: Who creates object A? Solution: Assign class B the responsibility to create object A if one of these is true (more is better): B contains or compositely aggregates A, B records A, B closely uses A, B has the initializing data for A.”
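Here is a rough Java sketch of the first condition in that list (B contains or compositely aggregates A), using a made-up Order example: because Order contains its OrderLine objects and has the data to initialize them, Creator says Order should be the one to create them.

```java
import java.util.ArrayList;
import java.util.List;

// Creator sketch (hypothetical names): Order compositely aggregates
// OrderLine, so Order gets the responsibility of creating OrderLines.
class OrderLine {
    final String product;
    final int quantity;
    OrderLine(String product, int quantity) {
        this.product = product;
        this.quantity = quantity;
    }
}

public class Order {
    private final List<OrderLine> lines = new ArrayList<>();

    // Order, not outside code, creates its own OrderLine instances.
    public void addLine(String product, int quantity) {
        lines.add(new OrderLine(product, quantity));
    }

    public int lineCount() { return lines.size(); }

    public static void main(String[] args) {
        Order order = new Order();
        order.addLine("widget", 3);
        System.out.println(order.lineCount()); // 1
    }
}
```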

High cohesion is a measure of how well the elements in a class work together. If a class has low cohesion, then it has a lot of unrelated data or behaviors inside of it. Classes with high cohesion, meaning their data and behaviors are all related to each other, are a lot easier to understand and maintain.

Indirection avoids direct coupling between two or more things by introducing an intermediate object that mediates between the other components.

Information expert assigns a responsibility to the class that holds the information needed to fulfill it.
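A quick made-up Java sketch of information expert: the Invoice holds the item prices, so computing the total belongs to Invoice itself rather than to some outside class that would have to pull the data out.

```java
import java.util.List;

// Information expert sketch (made-up names): Invoice has the item prices,
// so Invoice is the class that computes the total.
public class Invoice {
    private final List<Double> itemPrices;

    Invoice(List<Double> itemPrices) { this.itemPrices = itemPrices; }

    // The expert fulfills the responsibility with its own data.
    public double total() {
        return itemPrices.stream().mapToDouble(Double::doubleValue).sum();
    }

    public static void main(String[] args) {
        Invoice invoice = new Invoice(List.of(10.0, 20.0));
        System.out.println(invoice.total()); // 30.0
    }
}
```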

Low coupling is when objects are more independent and isolated, rather than being completely dependent on other parts of the system. This helps reduce the risk of breaking the system.

Polymorphism is an essential principle of OOD. It allows different types of objects to be treated as a single type.

Protected variations identify points of predicted instability and assign responsibilities in order to create a stable interface around them.

Pure fabrication assigns a highly cohesive set of responsibilities to an artificial class that doesn’t represent a problem domain concept, used when assigning those responsibilities to a real domain class would violate high cohesion or low coupling.

Link: https://www.kamilgrzybek.com/blog/posts/grasp-explained

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

SOLID

SOLID is an acronym for a set of design principles for object-oriented programming that is used to help developers create flexible, efficient, and easily maintainable software. I think the article “SOLID: The First 5 Principles of Object Oriented Design” by Samuel Oloruntoba and Anish Singh Walia does a great job explaining the SOLID principles. I plan on using these principles in the future to further develop my code and to make sure it is easy to read and understand.

Here is what each of the letters in SOLID stand for:

S: Single Responsibility Principle (SRP)
O: Open-Closed Principle (OCP)
L: Liskov Substitution Principle (LSP)
I: Interface Segregation Principle (ISP)
D: Dependency Inversion Principle (DIP)

SRP states “A class should have one and only one reason to change, meaning that a class should have only one job.” Essentially this means that any object or class should be made for one specific job, which makes the code easier to understand.
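Here is a small, hypothetical Java sketch of SRP: formatting a report and saving a report are two different reasons to change, so they live in two different classes.

```java
// SRP sketch (made-up names): each class has exactly one job and
// therefore exactly one reason to change.
class Report {
    final String body;
    Report(String body) { this.body = body; }
}

class ReportFormatter {
    String toHtml(Report report) { return "<p>" + report.body + "</p>"; }
}

class ReportSaver {
    void save(Report report) {
        // writing to disk or a database would go here
        System.out.println("saving: " + report.body);
    }
}

public class SrpDemo {
    public static void main(String[] args) {
        Report report = new Report("hello");
        System.out.println(new ReportFormatter().toHtml(report));
        new ReportSaver().save(report);
    }
}
```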

OCP states “Objects or entities should be open for extension but closed for modification.” The article states that “This means that a class should be extendable without modifying the class itself.” I think this article does a great job explaining what all of the different principles mean as well as giving examples for each of them. I strongly recommend using this website if you’re trying to learn about SOLID.
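Here is a rough Java sketch of OCP using a shape example (similar in spirit to the article’s examples): adding a new shape means adding a new class, while the code that sums areas never has to be modified.

```java
// OCP sketch: AreaCalculator is closed for modification but the set of
// shapes is open for extension via new Shape implementations.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Rectangle implements Shape {
    private final double w, h;
    Rectangle(double w, double h) { this.w = w; this.h = h; }
    public double area() { return w * h; }
}

public class AreaCalculator {
    static double totalArea(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area(); // no instanceof checks to edit
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(totalArea(new Shape[] { new Circle(1), new Rectangle(2, 3) }));
    }
}
```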

LSP states “Let q(x) be a property provable about objects of x of type T. Then q(y) should be provable for objects y of type S where S is a subtype of T.” At first, this really confused me. After doing some research and coming across this website, I began to understand this principle more. In simpler terms, it means that every subclass should be a substitute for their parent class. If you need examples to see how this would work, I highly recommend looking at the linked article.
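In code, the principle looks like this hypothetical Java sketch: a function written once against the parent type keeps working for any subtype, because the subtype honors the parent’s contract.

```java
// LSP sketch (made-up example): code written against Bird must keep
// working when handed any subtype of Bird.
class Bird {
    // Contract: returns a positive cruising speed in km/h.
    double cruisingSpeed() { return 40; }
}

class Swallow extends Bird {
    @Override double cruisingSpeed() { return 60; } // different value, same contract
}

public class LspDemo {
    // Written once against the supertype...
    static double hoursToTravel(Bird bird, double km) {
        return km / bird.cruisingSpeed();
    }

    public static void main(String[] args) {
        // ...and every subtype can substitute without surprising the caller.
        System.out.println(hoursToTravel(new Bird(), 120));    // 3.0
        System.out.println(hoursToTravel(new Swallow(), 120)); // 2.0
    }
}
```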

ISP states “A client should never be forced to implement an interface that it doesn’t use, or clients shouldn’t be forced to depend on methods they do not use.” In practice, this means software should be broken down into smaller, more specific interfaces so that clients don’t depend on code they never use.
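Here is a made-up Java sketch of ISP: printing and scanning are separate interfaces, so a basic printer never has to implement a scan method it doesn’t use.

```java
// ISP sketch: small capability-specific interfaces instead of one fat one.
interface Printer {
    void print(String doc);
}

interface Scanner {
    void scan(String doc);
}

class MultiFunctionDevice implements Printer, Scanner {
    public void print(String doc) { System.out.println("printing " + doc); }
    public void scan(String doc) { System.out.println("scanning " + doc); }
}

class BasicPrinter implements Printer { // depends only on what it uses
    public void print(String doc) { System.out.println("printing " + doc); }
}

public class IspDemo {
    public static void main(String[] args) {
        Printer p = new BasicPrinter();
        p.print("report.txt");
    }
}
```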

DIP states “Entities must depend on abstractions, not on concretions. It states that the high-level module must not depend on the low-level module, but they should depend on abstractions.” In simple terms, both high-level and low-level modules should depend on abstractions, and abstractions should not depend on details.
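Here is a hypothetical Java sketch of DIP: the high-level OrderService depends only on the Notifier abstraction, and the concrete notifier is injected from outside.

```java
// DIP sketch (made-up names): both the high-level service and the
// low-level notifier depend on the Notifier abstraction.
interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    public void send(String message) { System.out.println("email: " + message); }
}

class OrderService {
    private final Notifier notifier; // abstraction, not concretion

    OrderService(Notifier notifier) { this.notifier = notifier; }

    void placeOrder(String item) {
        notifier.send("ordered " + item);
    }
}

public class DipDemo {
    public static void main(String[] args) {
        // Swap in any Notifier implementation without touching OrderService.
        new OrderService(new EmailNotifier()).placeOrder("keyboard");
    }
}
```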

Link to article: https://www.digitalocean.com/community/conceptual-articles/s-o-l-i-d-the-first-five-principles-of-object-oriented-design#dependency-inversion-principle

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

CS-343

As I’m starting my final year here at WSU, I only have a few classes left to take until I officially have my bachelor’s degree! One of the few remaining classes is CS-343, otherwise known as Software Construction, Design, and Architecture. This class is going to be similar to my last class, but it is going to go more in depth with design principles, frameworks, modeling, etc. I hope this class is as fun as the other ones have been!

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

Combinatorial Testing

One of the last testing techniques that we learned about this semester is known as Combinatorial Testing. Combinatorial testing is a technique used for software applications with a lot of different input possibilities and high complexity. Even if you create a large number of test cases, you will most likely still miss a test scenario. I’m not the best at explaining what something is, so I did some research to find an article that describes the aspects of combinatorial testing as well as how to use it and what its benefits are. That article is on testsigma.com.

I think that this website does a great job of explaining what combinatorial testing is as well as all of its different benefits. In the article by Shanika Wickramasinghe, it states “Combinatorial testing is a testing method that uses multiple combinations of input parameters to perform testing for a software application. The main goal of combinatorial testing is to make sure that the software product can handle different combinations of test data as input parameters and configuration options.” This means that combinatorial testing takes a bunch of different input parameters, similar to pairwise testing, and uses them to test a bunch of different cases for the program. This can be extremely useful because some errors in a program can only be found with specific inputs. I’ve actually had this happen to me before in one of my classes. I wrote a program and tested a bunch of different inputs and they all worked, but when my teacher tried an input I never used, it failed. I think combinatorial testing is going to be extremely useful for me in the future. I know it seems like combinatorial testing can only be used in certain scenarios, so it might seem better not to learn it, but it actually has a lot of benefits according to Wickramasinghe:

  • Covers a broad range of input combinations using a minimum number of test cases.
  • Increases test coverage compared to normal component testing since it always considers multiple input combinations.
  • Helps to detect bugs, defects, vulnerabilities, and unexpected outputs that might not be detected during the usual component and regression testing phases.
  • Reduces testing effort, cost, and time. (Since combinatorial tests use fewer test cases to cover a wide scope of testing.)
  • Identifies issues at the earliest while allowing the team to address and fix those earlier in the software development life cycle.
  • Optimizes the testing process by removing unwanted test cases while ensuring that the cost and effort are not wasted on repeating the same test scenarios again and again.
  • Helps to test complex software applications with a large number of parameters, settings, and options.
  • Reduces the risk of critical defects going unnoticed, which can occur only when handling specific input combinations.
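To make the scale of the problem concrete, here is a tiny Java sketch (with made-up parameter names, not from the article) showing how quickly exhaustive testing blows up: four settings with three values each is already 3^4 = 81 cases, which is exactly the kind of growth combinatorial techniques are designed to tame with far fewer tests.

```java
// Counting the exhaustive test cases for a toy configuration space.
public class ExplosionDemo {
    public static void main(String[] args) {
        String[] browsers = { "chrome", "firefox", "safari" };
        String[] oses     = { "windows", "mac", "linux" };
        String[] langs    = { "en", "es", "fr" };
        String[] themes   = { "light", "dark", "contrast" };

        int count = 0;
        for (String b : browsers)
            for (String o : oses)
                for (String l : langs)
                    for (String t : themes)
                        count++; // each iteration is one exhaustive test case
        System.out.println("exhaustive cases: " + count); // 81
    }
}
```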

I recommend that everyone tries to use combinatorial testing at least once so they know how it works in case they ever need to use it again in the future to make sure all of their different input possibilities work.

Link: https://testsigma.com/blog/combinatorial-testing/

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

Stochastic and Property Based Testing

During this semester we have learned about many different testing techniques, some being a lot easier than others. One of the testing techniques that we have learned is called Stochastic Testing. Stochastic testing is a type of black box testing method in which random tests are conducted over time. These tests are performed by automated testing tools in order to see if the software can pass a large number of individual tests.

Another type of testing technique that we have talked about is Property-based Testing. Property-based Testing is very similar to Stochastic testing. In this testing technique, the testing is also automated. However, it isn’t just the execution of the tests that is automated; the generation of the tests is automated as well. While researching the topic, I came across an article called “What is Property-based Testing?” by Alex Robert. In this article, Robert states “Property-based testing automates that work for us. Test automation allows us to generate better tests with less work and less code, so that we can focus our effort on the less mechanical work developers excel at. Instead of writing tests with manually created examples, a property-based test defines the types of inputs it needs. In the example above with CommonPrefix, the input would be two strings. The property-based test framework will generate hundreds if not thousands of examples and feed them to your test function.” He talks about how property-based testing can create a ton of different examples for your test function in order to help you test your program.

In this article, Robert also talks about input generators and what they do. A generator is a function that returns an instance of a given type from a source of randomness. These generators are great for randomly creating new inputs for the test function. However, they are only effective when they are built well, since a good generator helps uncover more issues with the code. Robert gives a list of qualities that he thinks make a good generator: a generator should be fast, a generator should be deterministic, a generator should not waste randomness, and a generator should cover the code under test. Most property-based testing frameworks come with built-in generators, which can be extremely helpful for testers, depending on the type of input parameter. I really enjoyed reading this article, as looking at all of the examples given by Robert helped my understanding of the topic. I know that if I ever need to use this type of testing technique, I will definitely use this article as a reference.
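Putting the two ideas together, here is a hand-rolled Java sketch of a property-based test (a real framework like the ones Robert describes would supply the generator and the loop for you): a tiny deterministic generator produces random string pairs, and the test checks a property that must hold for every single one of them.

```java
import java.util.Random;

// Hand-rolled property-based test sketch for a commonPrefix function:
// generate many random inputs, then check a property of the output.
public class PropertyTestDemo {
    static String commonPrefix(String a, String b) {
        int i = 0;
        while (i < a.length() && i < b.length() && a.charAt(i) == b.charAt(i)) i++;
        return a.substring(0, i);
    }

    static String randomString(Random rng) { // a tiny input generator
        int len = rng.nextInt(8);
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < len; i++) sb.append((char) ('a' + rng.nextInt(3)));
        return sb.toString();
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed keeps the run deterministic
        for (int trial = 0; trial < 1000; trial++) {
            String a = randomString(rng), b = randomString(rng);
            String p = commonPrefix(a, b);
            // Property: the result must be a prefix of both inputs.
            if (!a.startsWith(p) || !b.startsWith(p)) {
                throw new AssertionError("property failed for " + a + ", " + b);
            }
        }
        System.out.println("1000 random cases passed");
    }
}
```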

Link: https://www.mayhem.security/blog/what-is-property-based-testing

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

Pairwise Testing

Another important topic that we have discussed in CS-443, or Software Quality Assurance and Testing, is known as Pairwise Testing. Pairwise Testing is yet another form of testing, but this type is a little bit different from the rest. Pairwise Testing, sometimes known as all-pairs testing, tests every pair of input parameter values in order to make sure that the functions in the system run correctly no matter what the input is, without having to run every full combination. Pairwise Testing is known as a Permutations and Combinations (P&C) based software testing technique. A blog that I found really helpful in explaining Pairwise Testing is “Pairwise Testing | What It Is, When & How to Perform” by Kiruthika D. In the blog, she gives an example that helped me understand more. She states “Let’s say you have an application that allows users to enter two numbers, and the application will output the sum of the two numbers. You can use pairwise testing to test all possible combinations of two numbers, such as (1, 2), (2, 3), (3, 4), (4, 5), etc. By testing all the combinations of two numbers, you can be sure that the application is working correctly and will not fail when given different numbers.” This shows that you don’t actually test every single combination; instead, you make sure every pair of input values shows up together in at least one test, so that all of the inputs are exercised without testing a potentially enormous number of combinations.
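Here is a small Java sketch of that core idea (a hand-picked example of mine, not from the blog): for three yes/no parameters, exhaustive testing needs 2^3 = 8 cases, but the four rows below already contain every pair of values for every pair of parameters, and the code verifies that.

```java
// Verifying that 4 test rows cover all value pairs across all parameter
// pairs for three binary parameters (exhaustive would need 8 rows).
public class PairCoverageDemo {
    public static void main(String[] args) {
        int[][] tests = { // hand-picked pairwise covering set
            {0, 0, 0},
            {0, 1, 1},
            {1, 0, 1},
            {1, 1, 0},
        };
        // Check every pair of parameters (i, j) and every value pair (a, b).
        for (int i = 0; i < 3; i++) {
            for (int j = i + 1; j < 3; j++) {
                for (int a = 0; a <= 1; a++) {
                    for (int b = 0; b <= 1; b++) {
                        boolean covered = false;
                        for (int[] t : tests) {
                            if (t[i] == a && t[j] == b) covered = true;
                        }
                        System.out.printf("params (%d,%d) values (%d,%d) covered: %b%n",
                                i, j, a, b, covered);
                    }
                }
            }
        }
    }
}
```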

The actual purpose/use of Pairwise Testing is exactly what I previously stated. It is used to make sure that all combinations of inputs are possible, but you don’t need to test every single combination. It can be extremely helpful as it reduces the amount of time it takes to test the program as well as the amount of effort. While Pairwise testing is a great testing technique, you obviously can’t use it all the time as it involves pairs. As for when to use it, Kiruthika states “Pairwise testing is helpful when testing complex systems that have multiple input parameters and multiple possible values for each parameter. It can significantly reduce the number of test cases that need to be created while ensuring that all possible discrete combinations of parameters are tested. This can help reduce test case creation time and cost and improve the software’s overall quality. Pairwise testing is not appropriate for all types of software testing. As we discussed, it is most effective for systems with multiple parameters and multiple possible values for each parameter. If a system has only a few parameters and a small number of possible values for each parameter, pairwise testing may be unnecessary. Pairwise testing, also, will not be useful if the values of inputs are inappropriate.” Essentially she is saying that Pairwise testing is used for functions that have multiple parameters with multiple values, and the order of parameters doesn’t matter. On top of that, depending on the type of parameter, the technique might not work either. While I personally don’t see myself using this technique in the future, I think that it has the opportunity to be very useful in certain situations, so I’m glad that I was able to understand it more in case I ever need to use it.

Link: https://testsigma.com/blog/pairwise-testing

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

Test Driven Development

The last few weeks of class have all been about Test Driven Development. Test Driven Development is when you use tests to guide you in developing software. According to an article on martinfowler.com, there are three main steps that we follow repeatedly. Those steps are:

  1. “Write a test for the next bit of functionality you want to add.
  2. Write the functional code until the test passes.
  3. Refactor both new and old code to make it well structured.”

Essentially what this means is you create a list of things that you want included in your program, then you create a test for just one of those items. Once you create the test, you implement the code in order to get the test to pass. Once it passes, you move on to creating the next test and getting that test to pass. If it causes a previous test to fail, you refactor until all of the current tests pass, then you repeat until all of the items you need are done. I personally find writing the test first to be a lot easier than writing the code first. That way, while you’re creating the code, you have an example to look back at so you know exactly what you want it to do. On top of that, focusing on one part of the code at a time makes it a little easier to develop the code without making mistakes, as you aren’t focusing on the entire program at once, just one small part at a time. According to the article, this form of development has two main benefits. It says “Most obviously it’s a way to get SelfTestingCode, since we can only write some functional code in response to making a test pass. The second benefit is that thinking about the test first forces us to think about the interface to the code first. This focus on interface and how you use a class helps us separate interface from implementation, a key element of good design that many programmers struggle with.” I chose this article as my source because I thought it did a really good job explaining what to do step by step so you don’t get confused. On top of that, it explains the benefits and consequences of this method and how to avoid/achieve them. This article helped me further understand Test Driven Development and why it is useful.
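Here is what one pass through that loop might look like in Java with JUnit 5 (a made-up StringCalculator example, assuming JUnit 5 is on the classpath): write the test first, watch it fail, then write just enough code to make it pass.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Step 1 (red): write this test before any implementation exists,
// run it, and watch it fail.
class StringCalculatorTest {
    @Test
    void sumsCommaSeparatedNumbers() {
        assertEquals(6, StringCalculator.add("1,2,3"));
    }
}

// Step 2 (green): write just enough code to make the test pass.
class StringCalculator {
    static int add(String input) {
        int sum = 0;
        for (String part : input.split(",")) {
            sum += Integer.parseInt(part);
        }
        return sum;
    }
}
// Step 3 (refactor): clean up with the green test as a safety net,
// then repeat with the next item on the list.
```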

Source: https://martinfowler.com/bliki/TestDrivenDevelopment.html

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

Define-Use Testing

A little while ago we learned how Define-Use Testing works as well as how to use it ourselves. However, at the time, it still confused me a bit and I wasn’t really sure what it was used for. After reviewing the previous work we’ve done and coming back across this topic, I realized that I still didn’t fully understand what was going on. I decided to look further into it and do some research in order to understand it more. As a result, I found a website called testsigma which I believe does a really good job of explaining what Define-Use case testing is, its advantages and its disadvantages, and when to use it.

With the website’s help, I’m going to explain everything that you need to know about Define-Use case testing in order to understand it and to know when to use it. As for what a use case itself is, the website states “The concept of a Use Case can be likened to a real-life scenario in which users and their objectives are present. A Use Case outlines the events unfolding when these users interact with the system to achieve their objectives. The Use Case allows for a thorough understanding of the overall behavior and functionality of the system from the user’s point of view.” Essentially this means that a Use Case shows all of the decisions that were made while the user was trying to achieve their outcome, allowing you to understand all of the behavior and functionality from the user’s perspective. Use Case Testing was designed to make sure that the system being tested meets all of the expectations. It essentially simulates different real-life scenarios in order to make sure the software does everything that it is supposed to. While the website gives a lot of different advantages and disadvantages of Use Case testing, I’m just going to list a few. Some advantages include helping to identify and validate the functional requirements of a system or software, explaining how the software will be used in real-world scenarios, and helping uncover potential defects. Some disadvantages include that it can be time-consuming and require a lot of effort, it may not cover all scenarios, which could lead to gaps in test coverage, and it relies on the accuracy of the use case documentation. As for when to use Use Case testing, the website gives a checklist which I found to be extremely useful in my understanding of when to use it. I’m really glad I found this website as it further helped my understanding of the topic, and it has helped me prepare for my final exam. The checklist states:

“1. Review the software requirements, specifications, design documentation, and use case scenarios.
2. Identify the possible scenarios related to the use case.
3. Determine the functional requirements of the use case.
4. Create test cases for each possible use case scenario.
5. Define the expected results of the use case scenarios.
6. Execute the test cases and compare them with the expected results.
7. Retest the fixes after identifying failed test cases.
8. Check for exceptions and errors for each use case scenario.
9. Verify compliance and security requirements for each use case scenario.
10. Monitor the system’s overall functionality across use scenarios.
11. Validate the overall system performance with user expectations.”

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.