
Semantic Versioning

Semantic versioning is a versioning scheme in three parts: major.minor.patch. Patch is incremented with the addition of a bug fix, minor is incremented with the addition of a new feature, and major is incremented when the changes being made are incompatible with previous versions. Though it is among the most popular versioning schemes, Colin Eberhardt explains in a blog post, “Semantic Versioning is not Enough,” that it is not without issues.
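
To make the scheme concrete, here is a minimal Java sketch of my own (not from Eberhardt’s post) that splits a version string into its three parts and compares two versions by semantic versioning precedence:

    // Minimal sketch: parse "major.minor.patch" and compare versions by precedence.
    // Pre-release and build metadata, which full SemVer also defines, are ignored here.
    public class SemVer implements Comparable<SemVer> {
        final int major, minor, patch;

        SemVer(String version) {
            String[] parts = version.split("\\.");
            major = Integer.parseInt(parts[0]);
            minor = Integer.parseInt(parts[1]);
            patch = Integer.parseInt(parts[2]);
        }

        @Override
        public int compareTo(SemVer other) {
            if (major != other.major) return Integer.compare(major, other.major);
            if (minor != other.minor) return Integer.compare(minor, other.minor);
            return Integer.compare(patch, other.patch);
        }

        public static void main(String[] args) {
            // 1.4.2 -> 1.4.3 is a patch (bug fix); 1.4.3 -> 1.5.0 adds a feature;
            // 1.5.0 -> 2.0.0 signals a breaking change.
            System.out.println(new SemVer("1.5.0").compareTo(new SemVer("2.0.0")) < 0); // prints true
        }
    }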

Eberhardt explains that issues may surface when updating the dependencies of a project. There are two ways to update a dependency within a project: passively or actively. Passively updating allows a computer to update the dependency without human intervention, while actively updating requires a human to update the dependency manually.

When adding a library to your project, you may specify a range of acceptable versions of that library. This hands the decision of whether to update a dependency to a new version over to the package manager, provided that the new version is still within the declared range. Eberhardt warns that declaring dependencies like this could create different development environments on different machines, where one machine decides that an update is appropriate and another does not. Applications may function differently after even small bug fixes in their dependencies. In addition, passively updating to a version of a library with new features is nearly pointless, since you would still need to change your own code to be able to use those features.

It seems more useful to actively accept updates to a dependency. This is fine for patch updates, since these do not require any alterations to your code. However, updating to either a minor or major version would require you to change your code, whether to utilize a new feature or to prevent your project from breaking. Eberhardt argues that, because of this, the distinction between a major and a minor update is not useful.

Eberhardt also points out that a major version bump can often fail to convey the severity of what was changed. A trivial breaking change increments the major number just the same as a complete rewrite of the library. Eberhardt suggests combining romantic and semantic versioning by giving names to the more significant changes.

I selected this article because I thought it would be useful to see another perspective. Eberhardt seems to have a lot of experience, and he draws from that experience to write this critique. I still think that semantic versioning is a useful scheme, but it is interesting to see a critique of something so popular. I think the suggestion of naming significant updates in addition to incrementing the major number is a useful one that I will try adopting in future projects.

From the blog CS@Worcester – Ciampa's Computer Science Blog by graceciampa and used with permission of the author. All other rights reserved by the author.

Design Smells

In class several weeks ago we began a discussion on design smells. Design smells, or code smells, are qualities of code that may indicate a deeper flaw in its design. I had not heard this term before, so I wanted to spend this blog post familiarizing myself with it. Luckily, there are plenty of resources available to help me.

Hiroo Kato examines this topic in great detail in a blog post “Code Smells and 5 Things You Need To Know To Achieve Cleaner Code.” This article is a great resource for anyone looking to learn more about design smells.

Kato discusses several common types of design smells: bloaters, object orientation abusers, change preventers, dispensables, and couplers. Bloaters, as the name implies, are sections of code that have grown too large to be worked with efficiently. Object orientation abusers refer to improper implementation of “object-oriented programming principles.” Change preventers are structured so that a single modification forces further changes elsewhere, complicating the development process. Dispensables refer to elements of the code that are unnecessary, and couplers refer to “excessive coupling between classes.”

Design smells can occur at various levels of a program. These levels include application level smells (e.g. duplicate code, shotgun surgery), class level smells (e.g. large class, little class), and method level smells (e.g. long method, speculative generality).

Fixing these design smells as they are noticed is important to keep a project clean and efficient. Kato suggests implementing a code review process. Ideally, this would involve going through the code, identifying design smells (either by hand or by using an automated detection tool), and refactoring them. The way you wind up refactoring your code will depend on the design smell it is afflicted with.
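
To make this concrete, here is a small, hypothetical example of the kind of refactoring such a review might lead to: the first method has a long parameter list (a classic bloater), and grouping the related values into a single parameter object removes the smell.

    // Hypothetical example: a long parameter list (a bloater) and one way to refactor it.
    public class LabelPrinter {

        // Before: five loose parameters are hard to read and easy to pass in the wrong order.
        void printLabel(String name, String street, String city, String state, String zip) {
            System.out.println(name + "\n" + street + "\n" + city + ", " + state + " " + zip);
        }

        // After: the related values are grouped into a single parameter object.
        record Address(String street, String city, String state, String zip) {}

        void printLabel(String name, Address address) {
            System.out.println(name + "\n" + address.street() + "\n"
                    + address.city() + ", " + address.state() + " " + address.zip());
        }
    }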

I chose this resource because I think it offers valuable information. The article is well researched, citing studies where relevant to validate claims. The article was also published very recently, so the information is definitely relevant. The author, Hiroo Kato, seems reputable, having written for the article’s host website for over five years. I was curious about this topic, and Hiroo Kato’s writing has added quite a bit to my understanding of design smells. I was not aware that there were types beyond the few we learned in class, or that there are automated tools available to help identify these smells. It’s really useful to be able to identify and avoid or fix design smells, and I will be utilizing this information in future projects.

From the blog CS@Worcester – Ciampa's Computer Science Blog by graceciampa and used with permission of the author. All other rights reserved by the author.

CS-343 Introduction

Hello! My name is Grace Ciampa. I am currently a senior in computer science at Worcester State University. I’ll be using this blog to document my progress in my courses and to explore whatever topics I think are interesting. I hope everyone has a great semester!

From the blog CS@Worcester – Ciampa's Computer Science Blog by graceciampa and used with permission of the author. All other rights reserved by the author.

Static vs. Dynamic Testing

Software testing falls largely into two categories: static and dynamic. Both are valuable, but the context of the test being performed determines which method fits best.

Static testing is a testing technique that does not require the code to execute. It can include manual or automated reviews of the code, requirements documents, and design documents, all of which are intended to catch errors early. Static testing is done to prevent errors during the early stages of development and can help locate errors that go undetected by dynamic testing methods. Reviews are an important facet of static testing and are the primary method for carrying out a static test. A review is a meeting or other process aimed at locating flaws and defects in the design of a program. Apart from manual walkthroughs of the code, there are tools that automatically search for errors. These tools, such as CheckStyle and SourceMeter, can help developers adhere to coding and style standards.
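
As a small, made-up example of what a review or a static analysis tool can catch, both of the problems below are visible just by reading the code, without ever running it:

    public class Discounts {
        public static double applyDiscount(double price, String couponCode) {
            double unusedTotal = price * 1.05;    // flagged: local variable is never used
            if (couponCode == "SAVE10") {         // flagged: compares String references, not contents
                return price * 0.90;
            }
            return price;
        }
    }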

Dynamic testing is a testing technique that checks the program’s behavior when the code is executed. As the name suggests, it is intended to test the dynamic behavior of the software. This encompasses the vast majority of the testing methods we’ve discussed both in this class and on this blog. The two main types of dynamic testing are white box and black box testing, both techniques that I have discussed on this blog before. Dynamic testing ensures that the software works properly during and after installation, confirms that it is stable, and encourages consistency in its functionality.
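
By contrast, a dynamic test actually executes the code and checks its behavior. Here is a minimal JUnit 5 sketch that exercises the hypothetical Discounts class from the example above:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class DiscountsTest {
        @Test
        void chargesFullPriceWhenCouponDoesNotMatch() {
            // The code under test runs here, and its actual output is compared to the expected one.
            assertEquals(20.0, Discounts.applyDiscount(20.0, "OTHER"));
        }
    }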

Sources:
https://www.guru99.com/testing-review.html
https://www.guru99.com/dynamic-testing.html

From the blog CS@Worcester – Ciampa's Computer Science Blog by graceciampa and used with permission of the author. All other rights reserved by the author.

Test Doubles

Test doubles are a unit testing technique that lets you test how your code interacts with an external dependency. The focus is on the code being tested rather than on the behavior of the external dependencies. As such, a preset object is used to stand in for those dependencies and their intended behavior. This allows a section of code to be tested even when its external dependencies are not yet finished or not yet working. There are several types of objects used as test doubles: dummies, fakes, stubs, and mocks.

Dummies are objects that are passed but never used. In most instances, they exist only to fill out a parameter list, acting as placeholders for required arguments.

Fakes are objects that simulate the external dependency by implementing the same interface without interacting with any other objects. Fakes are typically hard-coded in order to represent the correct behavior and must be modified whenever the interface is modified. A new fake object must be created for every unique test case.

Stubs are objects that return a canned result for specific inputs, and they do not usually respond to anything outside the scope of the test. Stubs will usually have all the methods required by the external dependency but will provide hard-coded return values that may need to be altered depending on the test case.

Mocks, much like stubs, will return a result based on a specific input. Mocks can also be programmed with expectations, such as how many times their methods should be called and in what order.
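
To make this concrete, here is a small hand-written sketch (the interface and classes are hypothetical, not taken from the sources below) showing a stub and a very simple mock for the same dependency:

    import java.util.ArrayList;
    import java.util.List;

    // The external dependency that the code under test relies on.
    interface ExchangeRateService {
        double rateFor(String currency);
    }

    // Stub: hard-coded return values for the inputs the test cares about.
    class StubExchangeRateService implements ExchangeRateService {
        public double rateFor(String currency) {
            return "EUR".equals(currency) ? 1.10 : 1.0;   // canned answers only
        }
    }

    // Simple mock: also records how it was called so a test can verify the interaction.
    class MockExchangeRateService implements ExchangeRateService {
        final List<String> requestedCurrencies = new ArrayList<>();
        public double rateFor(String currency) {
            requestedCurrencies.add(currency);            // the test can check calls and their order
            return 1.0;
        }
    }

A test would pass the stub to the code under test when it only needs a predictable answer, and the mock when it needs to verify that the dependency was used correctly.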

Sources:
https://devopedia.org/mock-testing
https://www.telerik.com/products/mocking/unit-testing.aspx

From the blog CS@Worcester – Ciampa's Computer Science Blog by graceciampa and used with permission of the author. All other rights reserved by the author.

Boundary Value and Equivalence Class Testing

When testing software, it is important to consider which inputs will be used and how they will affect the outcome and usefulness of the test. There are several design techniques for determining input values for test cases, but the two I will focus on in this post are boundary value testing and equivalence class testing.

Boundary value testing is designed to, as the name suggests, test the boundary values of a given variable. Specifically, this test design technique is intended to test the extreme minimum, center, and extreme maximum values of a variable, and it assumes that only a single variable is faulty at a time. In normal boundary value testing, the following values are selected for a given variable: the minimum possible value, the minimum possible value plus one, a nominal value near the middle of the range, the maximum possible value minus one, and the maximum possible value. Normal testing only tests valid inputs. Robust boundary value testing tests invalid values in addition to the values used in normal testing. These include the minimum value minus one and the maximum value plus one.
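
For example, assuming a hypothetical input that is valid from 1 to 100, the selected values would look like this:

    public class BoundaryValues {
        public static void main(String[] args) {
            int min = 1, max = 100, nominal = (min + max) / 2;

            // Normal boundary value testing: valid values only.
            int[] normal = { min, min + 1, nominal, max - 1, max };                   // 1, 2, 50, 99, 100

            // Robust boundary value testing: adds one invalid value beyond each boundary.
            int[] robust = { min - 1, min, min + 1, nominal, max - 1, max, max + 1 }; // 0 through 101

            System.out.println(java.util.Arrays.toString(normal));
            System.out.println(java.util.Arrays.toString(robust));
        }
    }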

Equivalence class testing divides the possible values of a variable into equivalence classes based on the output each value produces. Equivalence class testing can also be divided into robust and normal, where robust testing includes invalid values and normal does not. In addition, equivalence class testing can be either weak or strong. Weak testing tests only one value from each partition and assumes that a single variable is at fault. Strong testing tests all possible combinations of equivalence classes and assumes that multiple variables may be at fault.
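
As a sketch, assume a hypothetical ticket-pricing rule that treats ages 0–12, 13–64, and 65–120 as three different valid classes, with anything outside that range invalid:

    public class EquivalenceClasses {
        public static void main(String[] args) {
            // Weak normal: one representative value from each valid class.
            int[] weakNormal = { 8, 30, 70 };

            // Weak robust: adds one representative from each invalid class.
            int[] weakRobust = { -1, 8, 30, 70, 130 };

            // Strong testing would combine representatives across the classes of every
            // input variable; with only one variable, it selects the same values as above.
            System.out.println(java.util.Arrays.toString(weakNormal));
            System.out.println(java.util.Arrays.toString(weakRobust));
        }
    }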

Sources:
https://www.guru99.com/equivalence-partitioning-boundary-value-analysis.html
https://www.softwaretestingclass.com/boundary-value-analysis-and-equivalence-class-partitioning-with-simple-example/

From the blog CS@Worcester – Ciampa's Computer Science Blog by graceciampa and used with permission of the author. All other rights reserved by the author.

Gradle

Something that has haunted me since last semester is Gradle. Gradle is a flexible build automation tool that can build nearly any software and can run that software’s tests automatically. I know how to use it, but it has been difficult for me to understand how Gradle actually works, so I wanted to do some research and attempt to rectify this.

Gradle’s own user manual, linked below in the ‘Sources’ section, provides a high-level summary of the tool’s inner workings that has been instrumental in developing my own understanding of it. According to the manual, Gradle “models its builds as Directed Acyclic Graphs (DAGs) of tasks (units of work).” When building a project, Gradle sets up a series of tasks and links them together. This linkage, which takes into account the dependencies of each task, forms the DAG. Most build processes can be modeled this way, which is what allows Gradle to be so flexible. The tasks themselves consist of actions, inputs, and outputs.
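
To tie this to something concrete, here is a minimal custom task written in Java (a sketch of my own, not from the manual). Once registered in a build script, it becomes one node in that graph, and the method marked @TaskAction is the work Gradle runs for it:

    import org.gradle.api.DefaultTask;
    import org.gradle.api.tasks.TaskAction;

    // A minimal custom Gradle task: the annotated method is the task's action,
    // and any dependencies declared on the task link it into the DAG.
    public abstract class HelloReportTask extends DefaultTask {
        @TaskAction
        public void generate() {
            System.out.println("Generating a report...");
        }
    }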

Gradle’s build lifecycle consists of three phases: initialization, configuration, and execution. During the initialization phase, Gradle establishes the environment the build will use and determines which projects are involved in the build. The configuration phase builds the Directed Acyclic Graph discussed previously: it evaluates the projects’ build scripts, configures the tasks that need to be executed, and determines the order in which those tasks must run. This evaluation happens every time the build is run. Those tasks are then executed during the execution phase.

Sources:
https://docs.gradle.org/current/userguide/what_is_gradle.html

From the blog CS@Worcester – Ciampa's Computer Science Blog by graceciampa and used with permission of the author. All other rights reserved by the author.

White Box Testing vs. Black Box Testing

When considering how to test software, there are two major options: white box testing and black box testing. Both are uniquely useful depending on the purpose of your test.

White box testing is a technique that examines the code and internal structure of a piece of software. This technique is often automated and used within CI/CD pipelines. It is intended to focus on the implementation and architecture of an application’s code. As such, this testing technique is useful for identifying security risks, finding flaws in the code’s logic, and testing the implementation of the code’s algorithms. Some examples of white box testing include unit testing, integration testing, and static code analysis.

Black box testing is a technique that tests a system and requires no knowledge of that system’s code. This technique involves a tester providing input to the system and monitoring for an expected output. It is meant to focus on user interaction with the system and can be used to identify issues with usability, reliability, and system responses to unexpected input. Unlike white box testing, black box testing is difficult to fully automate. In the event of a failed test, it can be difficult to determine the source of the failure. Some examples of black box testing include functional testing, non-functional testing, and regression testing.
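
To make the contrast concrete, here is a small JUnit 5 sketch for a hypothetical ShippingCalculator class: the first test is black box, derived only from the specification, while the second is white box, written after reading the code in order to cover a specific branch:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class ShippingCalculatorTest {
        // Black box: based on the documented rule "orders of $50 or more ship free",
        // with no knowledge of how the method is implemented.
        @Test
        void largeOrdersShipFree() {
            assertEquals(0.0, ShippingCalculator.cost(75.0));
        }

        // White box: targets the branch that applies a flat $5.00 rate to smaller orders.
        @Test
        void smallOrdersPayFlatRate() {
            assertEquals(5.0, ShippingCalculator.cost(20.0));
        }
    }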

Grey box testing exists in addition to these. It is a combination of white and black box testing that tests a system from the perspective of the user while also requiring a limited understanding of the system’s inner structure. This allows the application to be tested for both internal and external security threats. Some examples of grey box testing include matrix testing, regression testing, and pattern testing.

Sources:
https://www.imperva.com/learn/application-security/white-box-testing/
https://www.imperva.com/learn/application-security/black-box-testing/
https://www.imperva.com/learn/application-security/gray-box-testing/
https://www.geeksforgeeks.org/differences-between-black-box-testing-vs-white-box-testing/

From the blog CS@Worcester – Ciampa's Computer Science Blog by graceciampa and used with permission of the author. All other rights reserved by the author.

Introduction

Hello! This is my first post on this blog. My name is Grace Ciampa, and I am a Computer Science major at Worcester State University. I am currently in my junior year, and I am hoping to graduate next spring. This blog will be focused mostly on coursework. I am looking forward to the rest of the semester.

From the blog CS@Worcester – Ciampa's Computer Science Blog by graceciampa and used with permission of the author. All other rights reserved by the author.