Category Archives: Week 11

Capstone Sprint 2 Retrospective

This second sprint brought challenges in moving to online classes during the ongoing epidemic, but also a stronger grasp of communication and documentation using GitLab and Discord, both out of necessity and through intentional effort. We have learned to work better with the LibreFoodPantry workflow and are ready to go into our next sprint with our REST API, database, and front end all working in isolation.

My contributions

Create file with definition of done.

Change to the default .gitignore file for Spring and remove unnecessary tracked files from before we added the .gitignore.

Research internationalization support and decide that it is better saved for when a more final version of the front end is complete.

Research Angular Testing and create a Spike project that covers most cases we will encounter.

Integrate ID scanner to get Student ID with an Angular component and create tests to have 100% code coverage.

Retrospective

For the first half of the sprint, we were still having weekly meetings to work together. One of our troubles last sprint was that we were discussing things in person and not doing well at documenting the reasons for the decisions we made. We improved on this even while still having in-person meetings. By the second half, although we were all coping with changes brought on by moving to online classes, we did well in keeping each other updated and communicating through GitLab. In hindsight, it’s probably a good experience to be forced to do this, especially if this epidemic inspires more software companies to promote working from home.

The biggest issue we had as a team was working with merge requests. There were a couple of cases where code on a feature branch was not kept up to date with the master branch. As a result, there were a lot of merge conflicts to resolve together as a team. Overall, working through these together was a good experience, because this is bound to happen when working in tandem with version control. However, now we will be reminding ourselves to pull changes from origin/master as we are working on our local branches.
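
As a quick reminder, one of several equivalent ways to do this while sitting on a feature branch is:

git fetch origin
git merge origin/master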

We also improved at creating merge requests for each individual feature, although this took a few weeks for us all to do efficiently. GitLab has a great feature where you can tightly bind an issue to a merge request, but this caused a couple of problems for me. When the merge request is accepted, the issue is automatically closed. This messes with our workflow, because we want issues to sit in the “done” column and only be closed by the product owner. Moving forward, the issues should be linked with their merge requests, but we will have to take care that the description doesn’t include a closing keyword such as “Closes #<issue number>”.

Furthermore, when GitLab automatically creates a branch for an issue, it gives it a very verbose name, which is annoying if your Git isn’t configured to autocomplete branch names when pressing Tab. In the future, I will create a new merge request, manually select my already-created branch, and then manually link the issue.

The team’s willingness to quickly meet over Discord about an issue we were having was the best thing about this sprint. In the few cases where something occurred outside of class time that required all of us, we were able to set up a time the same day or the next day and resolve the problem. This flexibility to schedule work within the sprint is what helped us get as much work done as we did.

The next sprint will involve combining our individual pieces into a working product that is capable of storing actual checkout transactions. There is still a lot to learn and to do, but we are well on our way to finishing a viable product that we are proud of, albeit with much room to grow in the future. We will have to pay close attention in the next sprint to creating well-written documentation as we combine our API, database, and front end so that future developers can easily recreate what we’ve done and get it running.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Dig Deeper

For my final blog post on apprenticeship patterns, I wanted to discuss my favorite pattern. Software is so pervasive now that anyone can make a working product with little more than superficial knowledge of a language and a framework. This is great motivation to continue, but it may lead one to erroneously believe they are an expert. Finishing a product, even a successful one, doesn’t make you an expert programmer.

Digging deeper is going below surface knowledge of a technology and learning the nitty-gritty bit-y details. The caveat is to not become too specialized. The book warns to keep your perspective of the project as a whole, and to only learn as much detail as necessary to help with a given task or problem.

I was originally taught to treat new classes as a black box, and I only found it frustrating once I graduated to more complicated tools. To truly understand how something is meant to work, you have to look inside. Another example: I’ve taken a few introductory classes that used metaphors to explain concepts and/or taught from the top down, adding detail over time. Biology class was boring and difficult because I had to memorize that a blue circle will separate the green, spiked lines so that two red hexagons can copy each of them. It wasn’t until high school, which provided an understanding of underlying chemical reactions, that biology became interesting and easy to remember.

So it is with software. I’ve been exceedingly frustrated with new tools when I tried to play without understanding. Sometimes it works; other times, when things begin to get confusing, diving in becomes a necessity. Another caveat: you don’t know what you don’t know, and if you assume you’re doing it right, you may be wrong. Even if it works.

This is another pattern that requires balance. Learning details provides diminishing returns over time, but you should mostly understand why you need to do something a certain way, and how it is working. If you can explain this in simple words, you’re probably on the right track. This applies not only to software tools, but work processes as well.

You may not always agree with how a technology was designed. No one will tell you that the modern Internet is a perfect design, because it has been manipulated into working in a world it wasn’t designed for. Created in a world of text, it now works in a world of streaming video across billions of devices. This would never have been achieved without engineers and developers who understood the basic building blocks of the technology. Be like them.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Improving the Spoken Digit Speech Recognition Machine Learning Model

After getting a simple machine learning model to recognize spoken digits, I was able to begin the iterative process of improving the model. Using only MFCCs, the model was failing more often than desirable, reaching a maximum of 60% accuracy on validation data (my own voice, which was not used in training the model).
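
The tooling isn’t named in this post, so purely as an illustration of the feature step, MFCC extraction for a single clip might look something like this with librosa (an assumption on my part, not the code actually used here):

import librosa

def extract_mfcc(path, n_mfcc=13):
    # Load the clip and compute its MFCCs; the number of coefficients and any
    # padding/trimming of frames are model-design choices, not fixed values.
    audio, sample_rate = librosa.load(path)
    return librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=n_mfcc)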

Below you will see plots of a sample of results when validating the model. For each digit, there are the extracted MFCC features, the actual spoken digit, the digit predicted by the model, and the certainty. There is also a plot of the certainty for each of the other digits for that recording.

This is just a sample of a larger validation set, and the actual results for this first model were only 45% accurate. But it shows that for all of these digits except 3 and 5, the model was 99% to 100% certain of the result. The differences in the MFCCs are subtle, but stark differences in color appear more likely to be correct, whereas 5 is clearly closer in color to 1, which it was mistaken for. Additionally, every single audio clip of 3 was mistaken for a 0 using this model.

Retraining the model with different parameters may help in this case, but we can also hypothesize about the reason for these mistakes. Perhaps the MFCC is finding patterns in vowels that make “zero” and “three” look identical. If that’s the case, features that can detect consonants might help improve results. This sounds pretty obvious anyway, so it might be a good next step on the next iteration.

But first, let’s retrain the model without any changes.

Okay! This 3 was very accurately predicted. But the total accuracy of validation was only 50% (remember, this only shows a sample size of 10). Inspection of the actual results now shows that 3 is sometimes mistaken for a 2, and vice versa. This model is slightly better, but still flawed. That makes sense, because no changes have been made to the model and we just got lucky that it learned to be a bit better this time.

I’ve been training with 25 epochs, and getting 95-97% accuracy during training, and 93-97% accuracy using test data (from the same dataset as the training data, which was not used to train the model). Those results are pretty good, so maybe we can use fewer epochs and prevent some overfitting.

This certainly looks promising. With 95% accuracy during training, and 93.8% accuracy using test data, the results are still pretty good. However, the validation data with my voice is now 57.5% accurate! Only a single 3 was mistaken for a 0.

So I’m using a dataset of 4 voices to train and test, and my own voice to validate. But more data is probably better, so let’s use my voice to train the model and take a random sample to validate.

The plot is looking good! Each of these was very accurately predicted. Accuracy during training and on test data was 97%. The validation data was 100% accurate. Of course, now that the validation data comes from the same voices the model was trained on, it’s more likely to be correct. Furthermore, the sample is small. So let’s see what happens if we use a new voice to validate. I had my roommate record himself saying each digit and used only his voice for validation data.

In general, the model is much more certain of its guesses. The final validation result was 80% accuracy, so not perfect, but a major improvement. This much of an improvement came just from adding more data and making small modifications to the model.

The importance of collecting data in order to improve a model is apparent. Even at 80% accuracy, the model still has some predictive power. If it proves useful, further data can be collected as the model is used, and this new data can be cleaned and used to train better models.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Angular CDK and Popover

With CS 343’s final project beginning, I have looked into some ways of displaying data. For this week, I will focus on Angular’s CDK. In Netanel Basal’s article “Creating Powerful Components with Angular CDK,” Basal describes the process of making an overlay in Angular. He starts by creating a Popover service built on the CDK’s overlay, which is used for creating popup overlays, such as info that pops up while the mouse is hovering over something. Basal then creates the PopoverRef and its Injector. The PopoverRef is then injected into a ComponentPortal, which is then attached to the origin page. Portals dynamically render UI to the page. For the portal, the author creates a custom component to receive the contents to be displayed and decide how to render them. The article then covers three types of content the PopoverComponent can receive: text, template, and component. With that, the article’s example is complete. Now let us get a little more in-depth.

Let’s start with the Popover service. This service exposes an open() method that creates the overlay, its contents, and its injector. The service is injected where it is needed, and its open() method is used inside a show() method in the app component that is called when a button is clicked in the root component’s HTML. From what I understand, this service handles the creation of all the parts needed to make the popover through its open() method.

Next up is the PopoverRef, a class that receives an overlayRef, content, and data, and provides a close() method to dispose of the overlay. It seems that this class is used to store the parent overlay, the content, and the data, along with the close() method that removes the popover.

Since Basal wanted to use a custom component, he needed to create an Injector for it. The injector is created in the Popover service in a createInjector() method, which converts the custom injector to a PortalInjector.

The author then attaches the soon-to-be-created Popover component to the overlayRef in the Popover service. This is done by overlayRef’s attach() method, where a new ComponentPortal containing the PopoverComponent is attached to the overlayRef.
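
Pieced together from the steps above, a rough sketch of the service and the ref might look like the following. This is my own simplified reconstruction, not Basal’s exact code, and names like PopoverComponent refer to the article’s custom component.

import { Injectable, Injector } from '@angular/core';
import { Overlay, OverlayRef } from '@angular/cdk/overlay';
import { ComponentPortal, PortalInjector } from '@angular/cdk/portal';
import { PopoverComponent } from './popover.component'; // the article's rendering component (assumed)

// Wraps the created overlay so the popover (or its caller) can close it
export class PopoverRef {
  constructor(public overlay: OverlayRef, public content: any, public data?: any) {}

  close() {
    this.overlay.dispose(); // remove the popover from the page
  }
}

@Injectable({ providedIn: 'root' })
export class Popover {
  constructor(private overlay: Overlay, private injector: Injector) {}

  open(origin: HTMLElement, content: any, data?: any): PopoverRef {
    // Create the CDK overlay, positioned relative to the element that triggered it
    const overlayRef = this.overlay.create({
      hasBackdrop: true,
      positionStrategy: this.overlay.position()
        .flexibleConnectedTo(origin)
        .withPositions([
          { originX: 'center', originY: 'bottom', overlayX: 'center', overlayY: 'top' }
        ])
    });

    const popoverRef = new PopoverRef(overlayRef, content, data);

    // A PortalInjector lets PopoverComponent inject the PopoverRef
    const injector = new PortalInjector(this.injector, new WeakMap<any, any>([
      [PopoverRef, popoverRef]
    ]));

    // Attach the PopoverComponent to the overlay through a ComponentPortal
    overlayRef.attach(new ComponentPortal(PopoverComponent, null, injector));

    return popoverRef;
  }
}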

The Popover component’s job is simple: inject the popoverRef and render its contents. The article’s example provides multiple rendering methods depending on content type, the three content types being template, component, and text.

For all three types, a show() method is added to the app component; it injects the Popover service and uses it to create the popover. The only difference between the show() methods for the three content types is what the popover’s content is set to. The method is called in the app component’s HTML, where it is attached to a button.

Reading through this article and examining its code has helped me learn how to create popup overlays. I feel I have a more solid grasp of how Angular components interact with each other. I will undoubtedly use the knowledge I gained from the article, “Creating Powerful Components with Angular CDK” by Netanel Basal, in my final project.

Article Referenced:
https://netbasal.com/creating-powerful-components-with-angular-cdk-2cef53d81cea

From the blog CS@Worcester – D’s Comp Sci Blog by dlivengood and used with permission of the author. All other rights reserved by the author.

Behavior-Driven Development (BDD)

Once again, Martin Fowler has some comments on this week’s topic, but this post will mainly reference Dan Terhorst-North’s post, which better describes the reason and motivation for creating Behavior-Driven Development. BDD arose from TDD and it is better to think of it as an extension of TDD.

BDD tests follow the pattern “Given-When-Then” (notice the similarity to Arrange-Act-Assert). Plain Mockito can express this pattern, but BDDMockito aims to follow the same human-readable principles. The idea is simple: given a precondition, when some other condition occurs, then something else should have happened. Fowler’s example includes checking the state of an object, so clearly BDD is not simply testing the behavior of an object. However, behavior is a very important part.
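
As a hypothetical example of the shape such a specification takes (my own wording, not taken from either article):

Given a customer account already registered with an email address
When a second registration is submitted with the same email address
Then the registration should be rejected as a duplicate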

Terhorst-North mentions that BDD picks up where TDD left off. I have seen some evidence that BDD has influenced TDD, such as describing tests as a sentence like “testFailsForDuplicateCustomers()”, which can be seen in many TDD test examples. Imitation is the highest form of flattery, so clearly this is a good approach. Or maybe, BDD is just more consistent in this naming because they put it in the specification.

Regardless, BDD developed out of Agile processes. It aimed to make writing tests part of the entire process and help future developers work well together in doing so. This is where many of Terhorst-North’s ideas stemmed from, and his main point, and the motivation behind the name BDD, is that “‘Behaviour’ is a more useful word than ‘Test’”. If you describe each test as a behavior, you know how to define the test, and you know when the specification has changed enough to warrant deletion of a test.

“What to call your test is easy – it’s a sentence describing the next behaviour in which you are interested. How much to test becomes moot – you can only describe so much behaviour in a single sentence. When a test fails. . . either you introduced a bug, the behaviour moved, or the test is no longer relevant.”

Daniel Terhorst-North, “Introducing BDD”

Real mastery of a subject is when it becomes simple. BDD is the next step to understanding testing on an intuitive, subconscious level. It wasn’t immediately obvious that tests don’t need to be difficult to write, but Terhorst-North managed to figure out a way to make writing them simpler. It is another part of the iterative process that is technology. Next time I encounter a difficult concept, I think asking three questions, in order, might help: Am I misunderstanding the concept? Do I just need more practice? Or is this method flawed?

Someone saw a flaw in TDD and developed BDD to improve it. This came only through thorough understanding, practice, and identifying problems. That approach is applicable to any career, and it is how to provide real value.

From the blog CS@Worcester – Inquiries and Queries by ausausdauer and used with permission of the author. All other rights reserved by the author.

FPL&S 2: Uploading Files Through an API

I must say, this project has gotten much more complicated than I was expecting, even last week. Not difficult necessarily, but requiring much more knowledge of the framework than I expected. But after a steep hill over the course of the week, the good news is that the features of Angular are much more powerful and exciting than I had thought. While the project specification requires communication with a REST API, which will be used for the database, I also require remote file storage. Since I will be using Google Firebase for both of these services, and file storage has a much simpler API which I’d prefer to use with mobile, I opted not to use the REST API for file storage.

Understanding the Firebase Storage API was the first step, but after reading through much of Google’s documentation for the web API, I still had difficulty translating everything to Angular and TypeScript. But luckily, Google is kind of a big deal these days, so David East from Angular posted a short tutorial that helped me bridge the gap.

The most mindblowing portion of completing this task was the concept of an async pipe, which I will explain shortly. You can’t get through a CS degree without learning and performing asynchronous tasks, but this syntax was completely alien to me. Take this HTML from David East’s post:

<label for="file">File:</label>
<input type="file" (change)="upload($event)" accept=".png,.jpg" />

This demonstrates Angular’s event binding. The (change) syntax binds a change in the input to the upload() method in the Component, while also passing the DOM event, from which you can get the File using TypeScript. My needs were a bit different, but this same syntax can be used to simply store the file when the input is changed. Then, a separate upload button’s (click) event can trigger an upload within the Component:

<button class="btn" (click)="submit()">Upload</button>

From there, my Component uses TypeScript to communicate with the AngularFireStorage service, and even updates a progress bar. This is where the async pipe comes in:

<progress max="100" [value]="(uploadProgress | async)"></progress>

More binding! This time, we’re using square brackets to bind the value property of the HTML progress tag to an Observable<number> object, which we can get within the Component from the AngularFireStorage service. This Observable will update the progress bar as the file uploads if we pipe it through the async pipe, as shown above. The async pipe subscribes to the Observable (or Promise) and automatically unsubscribes when the Component is destroyed.
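
For context, a minimal sketch of the Component side might look something like this, assuming AngularFire’s AngularFireStorage API (upload() and percentageChanges()); the names mirror the templates above, but the exact implementation in my project differs:

import { Component } from '@angular/core';
import { AngularFireStorage } from '@angular/fire/storage';
import { Observable } from 'rxjs';

@Component({
  selector: 'app-uploader',
  templateUrl: './uploader.component.html'
})
export class UploaderComponent {
  file?: File;
  uploadProgress?: Observable<number | undefined>;

  constructor(private storage: AngularFireStorage) {}

  // Bound to (change) on the file input: just remember the chosen file
  upload(event: Event) {
    const input = event.target as HTMLInputElement;
    this.file = input.files?.[0];
  }

  // Bound to (click) on the upload button: start the upload and expose progress
  submit() {
    if (!this.file) { return; }
    const task = this.storage.upload(`uploads/${this.file.name}`, this.file);
    // percentageChanges() emits 0-100 as the upload proceeds;
    // the template pipes this Observable through async
    this.uploadProgress = task.percentageChanges();
  }
}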

I highly recommend reading over Angular’s template syntax documentation to get better acquainted with these concepts.

After completing this portion of the project, I’ve determined I’ve been thinking too much in terms of JavaScript, because I’ve found some of TypeScript’s rules to be a hindrance. Some of the examples noted above have shown me that if I get back to an object-oriented, strongly-typed mindset, I will be able to work quicker on future tasks. Essentially, this is just a matter of getting practice with a new language and framework.

From the blog CS@Worcester – Inquiries and Queries by ausausdauer and used with permission of the author. All other rights reserved by the author.

Displaying JUnit Test Reports on GitLab

In this blog post I wanted to look at something interesting that I found earlier this semester when we were learning about JUnit testing and combining it with GitLab. In class we learned how to use Gradle to build our Java programs and run JUnit tests, and how to get GitLab to run them when we pushed our code, using GitLab’s continuous integration feature. As GitLab’s documentation says, using Gradle and JUnit on GitLab will show whether the tests fail, but I wanted to take this a step further and looked into whether it was possible to display more information about JUnit test statuses on GitLab itself. This led me to an article in GitLab’s documentation about a feature that can display JUnit test reports on merge requests. As the documentation shows, this can easily be enabled on any Java project that already uses GitLab CI with Gradle to run JUnit tests. All you need to do is add the necessary lines, provided in the document, to the GitLab CI/CD config file to display the test reports in a merge request.
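
The relevant part of the configuration boils down to publishing Gradle’s JUnit XML output as a report artifact. A minimal job might look roughly like this; the exact stage name and report path depend on the project, so check the documentation linked below for the canonical version:

test:
  stage: test
  script:
    - ./gradlew test
  artifacts:
    when: always
    reports:
      junit: build/test-results/test/**/TEST-*.xml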

I already tried this a couple of months ago when I first found it, but I wanted a nice demonstration of the feature, so I created a little Java test program that has a basic Student class with a JUnit test class. I then converted this to a Gradle project using some previous programs we have used as examples and instructions provided in this class, and then pushed it to a new GitLab project. On the master branch of this project the tests pass, as indicated by the previous GitLab CI job. After getting the initial code pushed and passing the tests, I then created a testing branch where I changed the code in the Student class so that one of the tests (testSetLastName) would deliberately fail. Creating a merge request on GitLab for this branch and pushing this “broken” code results in the test failing when it runs on GitLab with Gradle, and therefore GitLab displays on the merge request which JUnit test(s) failed:

[Screenshot: the merge request showing the JUnit test report with the failed test.]

I found this little feature to be pretty awesome in combining software testing with software management tools, and I can easily see how it would be very useful for checking whether new or modified code in a project causes tests to fail. In addition to checking if the tests pass, this feature lets us easily see which tests fail, directly on the merge request itself, instead of the alternative the documentation describes: looking through reports possibly containing thousands of lines to find the failed test. I will definitely be implementing this on any projects I’m working on that use JUnit tests and are hosted on GitLab.

Link to article: https://docs.gitlab.com/ee/ci/junit_test_reports.html

Link to demo program: https://gitlab.com/cradkowski/gitlab-junit-test-reports-demo

From the blog CS@Worcester – Chris&#039; Computer Science Blog by cradkowski and used with permission of the author. All other rights reserved by the author.

New Tricentis qTest Case Studies Highlight Testing’s Critical Role in Agile Transformation

Hello again everyone. For my second blog of the semester (technically third, because of the intro post), I am using another article by Lanier Norville. Last week, I wrote about her article on testers becoming agents of change. This week, however, I am going to be writing about some Tricentis qTest case studies. I picked this article because it talks about agile transformation, and I am personally fond of agile frameworks.

Once again Norville leads off with a brief but appropriate introduction to what she is discussing. In this case, she is talking about how companies are transforming with agile and DevOps. She uses her case studies to show the “critical role” (Norville) of testers.

The first case study involves a payment processing technology provider. The VP of Test Engineering, Nick Jones, attended an event on DevOps and decided that his organization needed to transform as well. Norville then discusses how payment options have an effect on whether customers end up buying something or not. Jones developed a DevOps roadmap with his team, and according to Norville they have reduced the time of delivery from “14 hours to 4 minute” (Norville). This is interesting because it is a dramatic drop in time, which matters for a company that is constantly making deliveries.

The next study isn’t as much of a success story as the last, but it still seems helpful. The University of the West of England switched to Tricentis qTest before using an agile framework. The head of testing, Heather Daniels, said that she “needed to implement a test management tool” (Norville). She needed to do this in order to maintain everything that the school used (library systems, eLearning systems, etc.). As of now, according to Norville, two of the organizations have switched to agile. It is a big change according to Daniels, and the university still isn’t used to making small functional deliveries, but it seems like they are getting the hang of it. I like that they switched because now they are “prioritizing the things that are most important to users first.” (Norville)

This article, in my opinion, was a little more difficult for me to follow, but it was still a very well-written article overall. I did like how well she described each of the scenarios and what was done with qTest to transform into an agile framework. It is obvious that the article is promoting the website’s own product, but that is what companies are supposed to do. Overall, this was a good read, but for my next blog, I will probably be visiting another website.


From the blog CS@Worcester – My Life in Comp Sci by Tyler Rego and used with permission of the author. All other rights reserved by the author.

The Workflow in Action

Last week started on Monday with getting back on track with this project. I looked at some of the issue updates on GitHub, including the issue about switching to Discord. I then emailed Dr. Wurst asking how the DCO sign-off works for committing to the project, hoping that I could push my setup documents and diagrams before the next day’s LFP committee meeting. To my surprise, the GitLab Gold issue had been fixed and the WSU account now has access to all of the Gold features again. I was really happy about this since it means I could go back to testing the advanced features offered with this package. Sadly, the GitHub issue still remains, but as of Monday they hadn’t deleted my testing accounts yet. After that, I looked at the Probot question one last time and created my reply to Dr. Jackson about it. Finally, I looked at Dr. Wurst’s earlier question about free server time for open source projects. I couldn’t seem to find much information about whether this is available, at least with Amazon Web Services or Google.

Tuesday started with a research meeting with Dr. Wurst. We covered a lot in this meeting, but the biggest thing was separating out the old issue card and breaking it into multiple new issues, as the old one was starting to get too big and congested. Dr. Wurst then closed the old issue for good. The new issues included:

Doing this makes it easier to choose one task, finish it, and see the whole progress as it moves across the project board on GitHub.

After the meeting, I read the contributing document to see how I am supposed to be making additions to this project before committing and pushing to the repository. I then updated my local repository for the shop setup documents and re-exported all of the graph files. I then looked at the first pull request for this project, approved it, and merged it into the master branch. Dr. Jackson decided that we should start using our own workflow when making additions to the LFP projects now, so I created a new branch for my setup documents, signed the previous commits with the DCO, and then pushed the changes to GitHub. I then created a pull request and requested that it be reviewed by Dr. Jackson.

Wednesday started with reading issue updates. I then looked to see if there was a way to always sign commits without having to add the -s parameter each time. I couldn’t find what I wanted for this. I then started working on revising the documentation according to Dr. Jackson’s review comments and pushed my changes to the branch. Finally, I started looking at what to do for creating workflow documentation. I discovered that Dr. Jackson had already written a lot that covered this and asked on the issue card what more I could add to it.

Thursday, I found out that the links in the setup documentation for the diagrams had broken when merged into the master branch. I fixed these image links for the setup diagrams by making them relative, as Dr. Jackson suggested. I then started figuring out the answer to Dr. Jackson’s question about how labels work on GitLab issues and issue boards, and worked out how the different labels behave in GitLab. This included group labels, which allow a group-level issue board to control issues in the projects underneath the group. In response to the original question about how the two boards had a “doing” column, I finally came to the conclusion that the original hand-off situation Dr. Jackson was asking about must have had two different column names, such as “Team1 doing” and “Team2 doing”, because if both boards have the same column name, moving an issue on one board automatically moves it on the other. I then updated the issue card with my response.

Friday, I updated the platform comparison feature sheet with some of the things I’ve discovered since using the platform and closed Dr. Jackson’s comment about WIP merge requests. I then posted the link to this Google Sheet on the issue card on the community board about GitHub vs. GitLab. I then reviewed and approved some pull requests for Dr. Jackson. I then started looking at the documenting continuous integration issue and looked at how to enable CI for a GitHub project specifically using Travis as it seems to be the most popular CI marketplace app on GitHub. I actually found this really easy to do, only needing to set the language in the config file to Java for an example project I had. Finally, I reviewed another pull request for Dr. Jackson and added some suggested edits before approving, especially fixing broken document links. 
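
For reference, the Travis configuration mentioned above really can be that small; a .travis.yml that does little more than set the language is enough for Travis to run the project’s default Gradle build and tests:

language: java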

Saturday, I approved and merged some more pull requests and continued looking at how to get the DCO bot to work on a GitHub repository in order to double check the instructions Dr. Jackson posted for how to do this in our documentation. 

Sunday, I looked at whether GitLab has its own version of a DCO bot. I found through different pages that, using GitLab’s Push Rules for commits, you can create a rule that requires all commits to include a sign-off matching a regular expression. I then had to figure out the correct regular expression that checks that all commits have a line matching the form:

Signed-off-by: ‘firstName lastName’ <username@domain>

After reading through a couple of tutorial websites and finding this great one that lets you test your expression against a string in real time, I finally figured out the correct expression to be:

Signed-off-by: \w+ \w+ <.+@.+\..+>

I then tested this on the GitLab web UI to make sure it works, and it did. I then tested it locally with Git Bash and also with branches. I found it works a little differently than the DCO bot on GitHub and blocks all commits from being pushed unless they have the included signature. I actually like this, as it prevents the commits from even getting pushed to a project without including the sign-off. The downside is that this sign-off is needed on merge commits too. I then updated the DCO documentation to include instructions for enabling this on GitLab and added some comments on the pull request about it. Finally, I created the high-level continuous integration document, added a definition for it and an overview of how to enable it on GitHub and GitLab. I pushed this and created a pull request for it.

In retrospective after this week, I found that I enjoyed the workflow we are using with selecting an issue to work on, creating a branch, pushing the work, then asking for it to be reviewed by someone else before merging. I especially like the reviewing part as you can always have a second opinion before posting the work you have done to the master branch.

From the blog CS@Worcester – Chris&#039; Computer Science Blog by cradkowski and used with permission of the author. All other rights reserved by the author.

Draw Your Own Map

For my next Apprenticeship Pattern blog, I chose “Draw Your Own Map” as one of the most interesting patterns, and one that fits perfectly with my way of thinking. When you decide to enter the software development world, you may think that it’s a hard and tough game, or sometimes you believe that your career will always … Continue reading Draw Your Own Map

From the blog cs-wsu – Kristi Pina&#039;s Blog by kpina23 and used with permission of the author. All other rights reserved by the author.