Author Archives: V

continuous integration

For my final blog post for my Software Testing course, I wanted to go over something that also ties into other projects I’ve been working on this semester. On our Software Development Capstone team, we’ve had a couple of issues that required us to touch the Thea’s Pantry continuous integration system through GitLab, setting up linter checks that run automatically when commits are pushed. If a commit doesn’t match the specification, the pipeline fails, and the person who pushed those changes gets notified to fix any issues that arise. In this post, I want to go into a bit more detail about what continuous integration is and how it’s useful for developers.

According to Stephen Roddewig’s blog post on HubSpot, continuous integration is an approach to development where code changes are regularly merged into a shared repository, a test build is produced automatically, and the results of running that build are reported back to the developer if any bugs or defects are found.

In practice, a continuous integration tool is typically an automated system that compiles the source code and runs tests automatically on every individual push from a developer. This means the developer’s contribution to the project is tested on push, then either moves forward to the maintainers or comes back to the developer as a pipeline failure.
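
To make this concrete, here is a minimal sketch of what a GitLab pipeline along these lines might look like. This is not the actual Thea’s Pantry configuration; the job names, image, and npm scripts are hypothetical stand-ins.

    # .gitlab-ci.yml (hypothetical, minimal pipeline sketch)
    stages:
      - lint
      - test

    lint:
      stage: lint
      image: node:20            # assumed image; a real project would pin its own
      script:
        - npm ci
        - npm run lint          # a linter error here fails the whole pipeline

    test:
      stage: test
      image: node:20
      script:
        - npm ci
        - npm test              # unit tests run on every push

Every push triggers these jobs, so a failing lint or test job blocks the change until the developer fixes it.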

The benefit of this is clear: bugs are caught quicker, the source code is updated on a regular basis with greater confidence because of the automated tests, and the pipeline provides an explicit and clear vision of what the specifications are for the project.

While working with the Thea’s Pantry system for my capstone, I could see these benefits in action. If a developer forgets to run tests locally (in the case of the Thea’s Pantry system, all tests are run in a script, and linters are run in their own script as well), the pipeline will catch any problems seamlessly, and the developer can easily see the pipeline failure, look at the output on GitLab, and determine what they need to fix in their branch.

In addition, it clarifies the specifications for what commits, and even the code itself, should look like, since you can add linters to the pipeline. This is very useful when someone forgets that we use conventional commits to make our changes clearer about what they do, as the pipeline will detect the problem and function as a reminder of what things should look like.
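
For example, a conventional commit message follows a pattern of a type prefix, an optional scope, and a short description. These two messages are generic illustrations, not commits from our project:

    feat(reporting): add monthly summary endpoint
    fix: handle empty guest list in report generator

The type prefix (feat, fix, docs, and so on) states what kind of change the commit makes, which a commit linter in the pipeline can check automatically.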

All around, continuous integration is a benefit for everyone involved in the software development process, creating a smooth system where testing happens automatically rather than requiring developers to remember to run tests so they don’t break their branch.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

software testing life cycle

During the time I’ve spent in the software development concentration of my computer science studies (and even in general), we’ve mostly been concerned with the software development life cycle, where the focus is on efficiently producing a finished product that matches specifications and then building it up better and better over time. There is an interesting counterpart to this in the software testing life cycle, which is technically a part of the software development life cycle but has its own specific steps.

In this post, I will be referencing this blog post from Testim on the STLC.

The point of the STLC is, at its core, similar to the point of the SDLC: producing a functional testing suite based on specifications. The end goal, however, is finding problems and reporting them rather than delivering a functional piece of software, which makes sense considering that testing is a step toward that piece of software.

The software testing life cycle is split up into 6 phases:

  1. Requirement Analysis: Understand what the product should do, prioritize issues and brainstorm potential solutions (and whether they can be automated) with the team.
  2. Test Planning: This is where the scope, tools and objectives are set for the following phases. It’s similar to a sprint planning meeting where tasks are assigned, time is estimated and issues are weighted.
  3. Test Case Designing and Development: This is where the tests are, well, designed and created based on the specifications and priorities set up in the previous phases (see the sketch after this list).
  4. Test Environment Setup: Software is run on different configurations and setups to determine performance levels and minimum requirements. We want to make sure our software works well on every configuration where it would be used, making for a smoother experience for the end user.
  5. Test Execution: The tests are actually run all together, the results are logged in detail, and the tests are rerun as needed when the main project changes. Automated testing tools are preferred, as they make this process significantly more refined.
  6. Test Closure: Evaluate the testing results, taking into account things like test coverage and quality, and review the testing process. This is analogous to a sprint review, where the team comes together to review the results.
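
To make phase 3 a little more concrete, here is a minimal sketch of a test case designed straight from a specification, before any implementation exists. The spec, class, and method names are hypothetical, invented purely for illustration.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Hypothetical spec: "A guest's visit count starts at zero and
    // increases by one for each recorded visit."
    public class VisitCounterTest {

        @Test
        public void visitCountStartsAtZero() {
            VisitCounter counter = new VisitCounter(); // class not written yet
            assertEquals(0, counter.getCount());
        }

        @Test
        public void recordingAVisitIncrementsTheCount() {
            VisitCounter counter = new VisitCounter();
            counter.recordVisit();
            assertEquals(1, counter.getCount());
        }
    }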

In an agile environment, these phases should all be covered in every sprint. All things considered, this is a necessary step in producing working, quality software; without a good testing environment, your software could behave unexpectedly, and bugs will be harder to track down.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

behavior driven testing and what is cucumber

For our final assignment in my software testing course, we are tasked with creating an activity similar to the ones we have been doing in class this semester. In working on this activity, we chose to write an assignment based on behavior driven development, using the Cucumber tool. As such, before we really get into the weeds of the assignment, I thought it would be a good idea to look into the tool myself and see what behavior driven development looks like.

For today’s post, I’m looking at a blog post from Moisés Macero on The Practical Developer blog.

Firstly, behavior driven development is built to foster communication and discussion around systems and how they should properly work. Typically, the process has three stages: discuss, capture, and write tests. The product owner and developers talk through what the requirements really should be, refine those requirements and targets, then move forward to building the software and testing it.

What Cucumber contributes here is a syntax, called Gherkin, that enhances the readability of tests even for people who don’t know how to read code. The syntax is structured as ‘given’, ‘when’, ‘then’ steps. You mark these test cases with keywords such as Feature, Scenario, or Example to accurately describe what the code should be doing and how it is being tested. One of the examples used in the article is the following:

  Scenario: Users solve challenges, they get feedback and their stats.
    Given a new user John
    When he requests a new challenge
    And he sends the correct challenge solution
    Then his stats include 1 correct attempt

Here, you can see that the syntax is (probably) easily readable and understandable even if you don’t have any software development experience. These statements are stored in .feature files and act as the definitions for tests.

For the actual testing, you apply these definitions by writing Cucumber expressions in a step definitions file. These files not only exercise the code in a step-by-step approach, but also feature a readable header for each function. Here’s an example of a couple of functions using Cucumber expressions (taken from this post in the same series):

    // These step definitions sit in an ordinary Java class; the annotations
    // come from io.cucumber.java.en and assertThat from AssertJ.
    @Given("a new user {word} is created")
    public void aNewUser(String user) {
        // {word} captures the username from the Gherkin step
        this.challengeActor = new Challenge(user);
    }

    @When("they request a new challenge")
    public void userRequestsANewChallenge() throws Exception {
        this.challengeActor.askForChallenge();
    }

    @Then("they gets a mid-complexity multiplication to solve")
    public void getsAMidComplexityMultiplicationToSolve() {
        // both factors of the multiplication should be two-digit numbers
        assertThat(this.challengeActor.getCurrentChallenge().getFactorA())
                .isBetween(9, 100);
        assertThat(this.challengeActor.getCurrentChallenge().getFactorB())
                .isBetween(9, 100);
    }

Cucumber supports a variety of languages and also integrates with testing frameworks such as JUnit, which makes it very versatile; it can be used alongside a test-driven development workflow as well.
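
For instance, here is a minimal sketch of how Cucumber can be hooked into JUnit 4 so the feature files run as part of the normal test suite. The feature path and glue package are hypothetical placeholders, and the class assumes the cucumber-junit dependency is on the classpath.

    import org.junit.runner.RunWith;

    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;

    @RunWith(Cucumber.class)
    @CucumberOptions(
            features = "src/test/resources/features", // where the .feature files live
            glue = "example.steps"                     // package containing the step definitions
    )
    public class RunCucumberTest {
        // intentionally empty: the runner discovers and executes the scenarios
    }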

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

sprint 3 retrospective

This sprint, the bulk of my time was spent looking into ESLint and keeping up with the team to ensure that we were meeting our goals. I also made a tentative branch for how the pipeline should look after the linters are added, as those changes were never pushed by the team in charge of actually putting the linters in the ReportingAPI repository. I also participated in a clean-up of previously completed issues and reviewed some merge requests from team members.

ESLint Research and Configuration: I checked to see if ESLint could theoretically fulfill the need for an active linter that shows JavaScript syntax errors in the editor, and it seems like, with proper configuration, it does fit that role. I found that the ESLint version we currently use is out of date, and the old style of configuration file (.eslintrc) was hard to get working properly. With the new format for ESLint configuration, I found that ESLint probably needs to be installed locally in order for this to take effect properly, and that we should update the ESLint version we use in LibreFoodPantry to adopt the new configuration standard.

Pipeline Configuration: I configured the pipeline based on the GuestInfoBackend repository, and it currently fails because there are no linters actually installed in the repository. The idea is that when the linters are merged into the main branch, this pipeline branch should fetch those changes, push them to the branch on GitLab, then check to see if the pipeline does work correctly with that. Theoretically it should.

Cleanup ReportingIntegration: I checked over the work I did in this repository and it looked fine. I also adjusted Hieu’s branch where he cleaned up some of the documentation.

Cleanup ReportingBackend: I checked over the work I did in this repository and it looked fine.

We had some logistical issues again this sprint, but thankfully ended up completing nearly all of the work that we set out to do. The problem was a rush during the last few days leading up to the sprint review, which led to me having to take charge as scrum master and actively check in with each group member to make sure everything was completed. That being said, our communication this sprint was the best it has been all semester. The one issue I consistently had is that we should be sharing communication with the whole team, but team members were direct messaging me personally about issues they were having.

On my part, I do think I could’ve spent more time on the ESLint issue and looked into other solutions, but I was definitely feeling crunch from other courses and ended up putting the work for this course on the backlog while other courses piled work on me. I’m happy I reached a relatively satisfying conclusion with the issue, but I feel like I could’ve done more. As for being scrum master, I performed much better at keeping everyone on task by checking in every week, though I could’ve been a little more strict so there was less crunch at the end of the sprint.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

security testing

For this week’s Software Testing blog post, I wanted to have a glance into the world of security testing.

In a big-picture sense, security testing is a simple concept to understand, even from a layman’s perspective. It entails testing software for any vulnerabilities, risks, or threats that could negatively impact any stakeholders in the software. Stakeholders can include the company that creates and / or deploys the software, the end user, and even the software itself.

There are many different types of and methods for security testing, and in this blog post I would like to go through a couple that caught my eye, with information courtesy of Oliver Maradov’s very thorough security testing blog post on BrightSec.

Before getting into the actual methods, I found some key principles at the start of the article worth mentioning as well. Confidentiality and authorization have to do with limiting access to sensitive data, with authorization only allowing access on the basis of permission. Authentication is the principle of verifying the identities that access data. Integrity and availability are crucial to preserving the consistency of data and making sure it is accessible when needed without failure. Lastly, non-repudiation sets out a principle of logging, so that actions on the data can be traced and not denied later.

Now, for the first subject in the article that I found interesting. What first caught my eye was the use of the different ‘box’ testing methodologies, that is, black box, white box, and grey box testing. The reason I was interested is that the application of black box testing makes a lot of sense in a security context (compared to the standard software development application, in my opinion). Essentially, where black box testing for software developers mostly deals with the specifications prior to writing the code for the system, black box testing in a security context means approaching software that has already been written from the perspective of an attacker. We typically see this through ethical hacking and penetration testing. I just find this a lot more compelling than the software engineering side of black box testing.

The second subject I found compelling was the section on DevSecOps. The idea of DevSecOps is, as the name implies, to merge the software development, security, and operations processes together, ensuring that the whole endeavor of creating a great piece of software is built from the ground up on good principles in a multi-faceted way, and that each team member has a level of understanding of the key principles. The end product should then (hopefully) reflect a strong approach to each process.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

share your knowledge

As my final Apprenticeship Pattern blog post for my capstone course, I found it fitting to write about the “Share What You Learn” pattern. The idea is fairly simple: if you are gaining knowledge on a topic, you should be able to share that knowledge with others effectively to foster mutual growth, which results in everyone building on their ‘craftsmanship,’ which further results in better products from everyone involved.

We don’t work in a vacuum by ourselves, so communication is incredibly important. We will always be working on teams of software developers, and even in our personal projects, we are working with information drawn from the software development ‘community’ as a whole. As such, learning to communicate your ideas and share your knowledge is always great for your team.

Different people have different specializations, and within software development we have our own specializations and interests. It is beneficial not only to contribute your expertise to the project you are working on with your team, but also to share that expertise, getting everyone on the same page with what you are doing and perhaps fostering growth in them as people and in the project as a whole.

The interesting thing is that you can also learn from others’ specializations and expertise while you are sharing your own, and you can build on your knowledge from the ideas and suggestions that others make when hearing yours. It’s a bounce back and forth.

I think everyone has probably experienced this to some extent, even in small circumstances. As the authors mention, simply knowing one small thing more than another person gives you the opportunity to inform that person about your piece of knowledge, and that fosters growth, no matter how small. I know that, for me, multiple people have asked me how to do things when they needed reminders or help with assignments or issues they ran into at work, and in those situations, it is beneficial to know how to communicate solutions, suggestions, and feedback in a way where everyone stands to gain.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

test-driven development

I’ve been working on a homework assignment where I was tasked with following the process of test-driven development while working on a code kata: counting the number of times each word appears in a string, while ignoring special characters, spaces, and new lines.

To quickly summarize what test-driven development entails, I consulted a blog post by Denis Peganov. Basically, when you do test-driven development, you write tests before you write the code. In practice, this means you have to figure out what the inputs and outputs should be for the code you’re working on, then decide the order in which you should fulfill those tests. Denis also describes the cycle of test-driven development with its red, green, and refactor phases. You start with a failing test in the red phase, write the minimum code to make the test pass in the green phase, then, as the name implies, refactor the program to improve its design if necessary. Denis further makes a great point that the iterative nature of test-driven development lends itself to modularity and can create a flow to the evolution of the codebase where this strategy is applied.
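
To ground this in the kata itself, here is a minimal sketch of one red-green step, assuming a hypothetical WordCounter class (the names are mine, not part of the assignment). First, a failing test written in the red phase:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.Map;

    import org.junit.jupiter.api.Test;

    public class WordCounterTest {

        @Test
        public void countsRepeatedWordsIgnoringPunctuationAndNewlines() {
            Map<String, Integer> counts = WordCounter.count("dog, cat\ndog!");
            assertEquals(2, counts.get("dog"));
            assertEquals(1, counts.get("cat"));
        }
    }

Then, in the green phase, just enough code to make it pass:

    import java.util.HashMap;
    import java.util.Map;

    public class WordCounter {

        public static Map<String, Integer> count(String input) {
            Map<String, Integer> counts = new HashMap<>();
            // replace anything that isn't a letter or whitespace, then split on whitespace
            for (String word : input.toLowerCase().replaceAll("[^a-z\\s]", " ").split("\\s+")) {
                if (!word.isEmpty()) {
                    counts.merge(word, 1, Integer::sum); // add one to this word's count
                }
            }
            return counts;
        }
    }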

I have to say, doing this in practice is fairly enjoyable, up until the point where you have to rewrite everything. The flaw (though it depends on the practitioner of the strategy) is that if you ‘mess up’ the order of the tests, you can reach a point at which your design needs to change so drastically that you are effectively working backwards. This doesn’t necessarily mean the strategy is bad; it’s more that it requires practice and experience for development to go smoothly. It also requires the willpower to refactor massive chunks of code when you reach a critical refactoring point, though for me it may just be that I’m rusty when it comes to coding.

Regardless of this critique, test-driven development does seem to be the best of both worlds. Not only can you properly plan software from specifications, but you also get to actively see the results of your code while developing it. It seems incredibly efficient in comparison to other philosophies of software testing, and for me it’s honestly more engaging to write the tests and the code together than to just write tests and read code or specifications that already exist.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

sprint 2 retrospective

During the course of this sprint, I spent most of my time looking at the ReportingBackend repository to figure out what was going wrong with the testing suite. I had also assigned myself to configuring the pipeline with the linters in ReportingAPI specifically, but was unfortunately unable to complete this issue because the team responsible for actually adding the linters to the repository was not able to push their changes before the sprint ended. I continued to act as scrum master for the team, taking an active approach to making sure everyone was doing work, getting credit for their work, and setting up meetings when necessary. I also had to help out a bit with other team members’ issues, providing some guidance.

Verify the Testing Suite works: I investigated the testing suite to see if I could get it to work, or at least figure out the magnitude of the problem. I applied a couple of fixes (one of which was in the build.sh script; I created a separate 0-weight issue for it, since I had already found the bug and its fix before the sprint started), but it seems like the issue transcends the testing suite and is more an issue with the server’s build process.

This sprint was pretty rough, if I’m being honest. As for what we did well, our communication improved overall, and the team was better able to utilize GitLab for their work. Everything on GitLab was better organized, most of the issues had better descriptions than last sprint, and the weights assigned to issues seemed fairly accurate all things considered, aside from the testing issue (which I did spend a lot of time on). We also worked with each other more than we did last sprint.

Most of the issues assigned to our team were rushed together during the last week of the sprint (from my perspective at least; there may have been some confusion with pushing to GitLab). As such, we weren’t able to get to all of the issues, and only got 14 of the 24 weight assigned to our team fully merged in. I was offering support during the whole sprint, but it seemed like team members were not able to coordinate a meeting at the same time for most of it. I don’t really have a great explanation for this; it seems like things weren’t being done until time was so short that we couldn’t finish everything, taking all of our other schoolwork into account.

Ultimately, I think for next sprint our team really just needs to work on spreading work out during the entire sprint rather than getting things done last minute. We should also work on coordination and being more present, as when anyone asked if we wanted to do a meeting, the team would all say they were available and no one would take the initiative to actually begin the meeting. Part of that is on me as scrum master, but ultimately we should be willing to meet without having everyone on board if we are working on specific issues.

I was able to spread out my work fairly well as an individual, but I can’t say the same for other team members, so as scrum master I should be more active in making sure everyone is doing their work, not just so the project moves forward, but so everyone gets the credit they need for the course.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

mocking review (and more)

A while ago, we used the Mockito framework in Java during an in-class activity. If I’m being honest, I wasn’t paying full attention to the material, and so I wanted to review some of the material in addition to looking into mocking more.

The idea behind mocking is that unit testing before the actual classes are written is problematic for the simple reason that we can’t test code that doesn’t exist. This mostly applies to a specification-based testing environment, a subset of black-box testing. Mocking solves this by simulating the construction of objects and allowing set values to pass through for tests, which can then be refactored once the classes are actually created. As such, we can have tests for the behaviors we want in our project without having written the project yet, which is great if we’re deriving tests from specifications.
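
As a quick refresher for myself, here is a minimal Mockito sketch. The NameService interface and Greeter class are hypothetical, invented for illustration; the point is that NameService has no real implementation yet, and the mock stands in for it:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    public class GreeterTest {

        // Hypothetical collaborator that hasn't been implemented yet.
        interface NameService {
            String lookupName(int userId);
        }

        // Class under test; it depends on the unwritten NameService.
        static class Greeter {
            private final NameService names;

            Greeter(NameService names) {
                this.names = names;
            }

            String greet(int userId) {
                return "Hello, " + names.lookupName(userId) + "!";
            }
        }

        @Test
        public void greetsUserByName() {
            NameService names = mock(NameService.class);
            when(names.lookupName(42)).thenReturn("Ada"); // set value the mock passes through

            Greeter greeter = new Greeter(names);
            assertEquals("Hello, Ada!", greeter.greet(42));
            verify(names).lookupName(42); // confirm the collaborator was actually used
        }
    }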

This contrasts with stubs, pieces of code written as minimally as possible just to get the test suite to function. Stubs are useful in that they can quickly provide the correct result for a test without needing an outside resource, but what tends to happen is that, as you write more tests, you end up writing the actual program in the process of making the tests pass. This is good for test-driven development, where we actively write tests while we code the actual program, but if our interest is solely in writing tests based on specifications, it isn’t a great approach.

According to a blog post by Rohit Khankhoje, mocks are great in that they are more cost-effective, accessible, and efficient than writing stubs or creating fakes. By setting up mocks, you have less setup time for tests, because you don’t need the real system to be accessible in order to have a fully functioning test suite. While this is great, there are some obvious drawbacks. For example, tests that utilize mocking still frequently have to adapt when functionality is added to the project. This is all the more pressing given that mocked tests will continue to pass even as the actual project’s code changes, so they need to be developed side by side with the real code; otherwise, the testing suite isn’t portraying accurate information about the accuracy and performance of the system.

As such, while mocking is a very strong tool, it seems most useful in an environment where we don’t have access to the code; when we do have access (or we are the coders ourselves, not just the testers), it seems much more efficient to practice test-driven development.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

sweep the floor

When you first get placed on a team, it’s sometimes hard to get your bearings if you aren’t given an explicit task to work on to begin with. We sort of experienced this at the start of the semester, where we weren’t all too familiar with the Thea’s Pantry project even after working with forks of it for assignments in prior courses. Part of this is just the nature of working with something new, and having the expectation of providing value to it in a practical sense.

In that light, the way we approached the issues we took on for the first sprint makes sense. For the most part, these issues were fairly simple and mostly plumbing, which can also be classified as sweeping the floor. The idea behind sweeping the floor is that you take on simple tasks that, while necessary, aren’t all that interesting, in order to build confidence in yourself within the project and build rapport with the team.

The authors make a good point that this might not feel great to do as someone with a Computer Science degree you worked hard for, but the reality is that the degree is really just a way to get your foot in the door, the same as any other qualification that got you accepted onto the job or project you’re working on. The real work is what you do once you’re placed on a project, where you get to apply the basics you learned in college while also learning more practical skills.

I really think this is a good approach to take when you feel out of your element in a new environment. Maybe you just got hired for your first internship, or even your first full-time job, and while you are excited, you don’t really know how to provide value to the project, because you haven’t had that sort of experience before. Sweeping the floor seems really helpful for getting your bearings in a new project. The authors do mention a couple of drawbacks to this approach, the most notable being the feeling of being stuck doing the small tasks without branching out due to anxiety, but there are ways out of that mindset too.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.