Category Archives: Week 11

GRASP (General Responsibility Assignment Software Patterns)

Hello everyone and welcome to week 11 of the coding journey blog. In this week’s post I will be talking about the design pattern acronym GRASP, which is short for general responsibility assignment software patterns. Design patterns are essential for software developers, and we will dive deeper into the benefits of using GRASP.

Originally, the GRASP design patterns were introduced after the Gang of Four book, which details commonly used software design patterns. GRASP answers the question of what role each part of the software should play. There is the controller, whose essential role is to encapsulate a system operation, meaning something the user is trying to accomplish, such as purchasing an item. The system operation is achieved through one or more method calls between the software objects, and the controller is also responsible for providing a layer between the UI and the domain model. Then there is the creator, which helps decide which class should be responsible for creating a new instance of another class. There is a pattern known as high cohesion, which is essentially responsible for keeping objects understandable and manageable; an example of this is breaking a large class down into different classes for different roles, making things easier in the bigger picture. Then there is the indirection principle, which supports low coupling by giving interaction responsibility to an intermediate object. Another part of GRASP is the information expert, which gives guidelines for assigning a responsibility, such as a method, to the class that has the information needed to fulfill it.
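To make two of these concrete, here is a minimal TypeScript sketch of the creator and information expert patterns. The Order and LineItem classes are my own invented example, not something prescribed by GRASP itself.

class LineItem {
  constructor(public name: string, public price: number, public quantity: number) {}

  // Information expert: LineItem holds the price and quantity,
  // so it is the class that computes its own subtotal.
  subtotal(): number {
    return this.price * this.quantity;
  }
}

class Order {
  private items: LineItem[] = [];

  // Creator: Order contains and aggregates LineItems,
  // so Order is the class responsible for creating them.
  addItem(name: string, price: number, quantity: number): void {
    this.items.push(new LineItem(name, price, quantity));
  }

  // Information expert again: Order holds the items,
  // so it is the class that computes the order total.
  total(): number {
    return this.items.reduce((sum, item) => sum + item.subtotal(), 0);
  }
}

Because each class answers questions about the data it already owns, no other class needs to reach inside Order or LineItem, which also keeps coupling low.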

Also in the GRASP design patterns there is the low coupling pattern, which guides how to assign responsibilities so that dependency between classes stays low, changes in one class have less impact on others, and reuse potential is higher. Polymorphism is a concept most of us know about because it is one of the principles of object-oriented programming; in brief, the polymorphism pattern provides guidelines on how to use this object-oriented feature in your design. Then there is protected variations, which protects elements from variations in other elements by wrapping the point of variation with an interface and using polymorphism to create the various implementations. The last part of the GRASP principles is pure fabrication, in which a class that does not represent a concept in the problem domain is made up in order to achieve low coupling and high cohesion.
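Here is a minimal sketch of protected variations in TypeScript, again with invented names: the checkout code depends only on a PaymentMethod interface, and polymorphism takes care of the differences between implementations, so a new payment type can be added without touching the checkout function.

// The point of variation (how a payment is processed) is wrapped in an interface.
interface PaymentMethod {
  pay(amount: number): void;
}

class CreditCardPayment implements PaymentMethod {
  pay(amount: number): void {
    console.log(`Charging $${amount} to a credit card`);
  }
}

class PayPalPayment implements PaymentMethod {
  pay(amount: number): void {
    console.log(`Sending $${amount} through PayPal`);
  }
}

// Low coupling: checkout only knows about the interface.
function checkout(method: PaymentMethod, amount: number): void {
  method.pay(amount);
}

checkout(new CreditCardPayment(), 25);
checkout(new PayPalPayment(), 25);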

I personally think that the design concepts of GRASP are essential components of programming and creating real-world software. Many of these components appear in the everyday software and features we see without us even recognizing it. As I learn more in my coding journey and take in more concepts, I will most certainly take the GRASP principles into account in my future projects to make them easier.

For more resources on this topic check out these links:

https://dzone.com/articles/solid-grasp-and-other-basic-principles-of-object-o
https://medium.com/@ReganKoopmans/understanding-the-grasp-design-patterns-2cab23c7226e

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.

A Different Road

Just because they’re not on your road doesn’t mean they’ve gotten lost.

-H. Jackson Brown Jr., Life’s Little Instruction Book

The “A Different Road” pattern shows us exactly how to follow our map and recall what we know. For some time, we might have walked along a road and then realized that this route is not an acceptable option for us because of the map we have drawn. It can happen that we find a way that is more in line with our current values. Based on this pattern, even if we permanently leave the road, we will still have the values and principles we established along the way. This pattern gives different examples of people who decided to move on to something else and then came back to bring new ideas to a company. It is okay to set people free if they have different ideas, and to let them come back along with the new insights they have discovered. Unfortunately, traditional software companies are not so welcoming. Such detours are often seen as questionable shortcomings in your career that you must explain later. You would hope that your reasons for why you left and why you came back remain important within your belief system. In any case, this shouldn’t be an issue for someone who wants to pursue their dream one way or another.

What got my attention in this pattern is that we shouldn’t be afraid to do something else with our lives, no matter the risks. The skills we have gained during our journey will not leave us, and at some point they will be useful wherever we go. As software developers, the experience we carry will enrich whatever we want for our future. Leaving behind the software development journey to become a professor or an instructor could be an option in our lives. Whether we like it or not, in the end it’s all about trying to find where we and our knowledge best fit. I have always thought that leaving a company and coming back after a while wouldn’t look good, but based on this pattern it wouldn’t be so bad. There are, of course, several reasons why we would and wouldn’t want to go back to a previous employer. But let’s be positive and do what we feel is right. If what you were doing didn’t feel right in your career, you have the chance to return at a higher level and be where you always wanted to be, whether in your previous company or by moving on to another one.

From the blog CS@Worcester – Gloris's Blog by Gloris Pina and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns: Nurture Your Passion

The next Apprenticeship pattern I would like to discuss is titled “Nurture Your Passion.” This pattern is targeted at software developers whose work environments drain them of their passion for creating software. It emphasizes that a passion for software craftsmanship is crucial for improving our skills, and that we should take steps to protect our passion if we find ourselves in such an environment. The pattern suggests several techniques we can use to strengthen our passion for software development. These include investing time into enjoyable projects, joining groups that focus on our interests, and changing our work environments.

As I mentioned in my post on “Breakable Toys,” I have struggled to stay passionate about programming since I started college. When software development became the focus of my education, I stopped working on personal projects because they took time away from more important (yet far less enjoyable) class assignments. This really damaged my ability to enjoy programming, and I think that following this pattern’s tips could help me regain some of my passion. I already expressed my desire to start working on personal projects on my own time again in my “Breakable Toys” post. While this pattern recommends this once again, it also provides several new suggestions that I think could be just as useful.

The pattern first recommends focusing on enjoyable topics while working as a way to make work less draining. This suggestion changes my perspective on how I should go about my work, as I have generally not prioritized my own interests during assignments. For example, I have recently been working on testing for my group’s project in CS-448, which has been exhausting for me since I dislike writing tests. I might try to contribute to more interesting aspects for the remainder of the project so that I can be more invested in it. The pattern also recommends joining groups and reading books that focus on topics of interest. Groups haven’t really worked for me in the past, and I’ve never been a fan of reading, but knowing that these options could help nurture my passion might make them worth trying. Finally, the pattern recommends having a list of positive ideas to talk about whenever work conversations become exhausting. Although I’m not great at conversation, I think this might be an action worth taking. Even if I never have a work environment that completely engages me, talking about my interests with others might be enough to keep my passions alive.

From the blog CS@Worcester – Computer Science with Kyle Q by kylequad and used with permission of the author. All other rights reserved by the author.

Capstone Sprint 2 Retrospective

This second sprint brought with it some challenges in moving to online classes due to the ongoing epidemic, but also a stronger grasp on communication and documentation using GitLab and Discord, both out of necessity and intentional effort. We have learned to work better with the LibreFoodPantry workflow and are ready to go into our next sprint with our REST API, database, and frontend all working in isolation.

My contributions

Create a file with our definition of done.

Change to the default .gitignore file for Spring and remove unnecessary tracked files from before we added the .gitignore.

Research internationalization support and decide that it is better saved until a more final version of the frontend is complete.

Research Angular Testing and create a Spike project that covers most cases we will encounter.

Integrate ID scanner to get Student ID with an Angular component and create tests to have 100% code coverage.

Retrospective

For the first half of the sprint, we were still having weekly meetings to work together. One of our troubles last sprint was that we were discussing things in person and not doing a good job of documenting the reasons for the decisions we made. We improved on this even while still having in-person meetings. By the second half, although we were all coping with the changes brought on by moving to online classes, we did well in keeping each other updated and communicating through GitLab. In hindsight, it was probably a good experience to be forced to do this, especially if this epidemic inspires more software companies to promote working from home.

The biggest issue we had as a team was working with merge requests. There were a couple of cases where code on a feature branch was not kept up to date with the master branch. As a result, there were a lot of merge conflicts to resolve together as a team. Overall, working through these was a good experience, because this is bound to happen when working in tandem with version control. From now on, we will be reminding ourselves to pull changes from origin/master as we are working on our local branches, as sketched below.
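For anyone following along, the habit we are committing to looks roughly like this (a sketch; the branch name is just an example):

# While working on a feature branch, regularly bring in what has
# landed on master so merge conflicts stay small:
git checkout my-feature-branch
git fetch origin
git merge origin/master
# Resolve any conflicts now, while the changes are fresh, instead of
# all at once when the merge request is reviewed.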

We also improved at creating merge requests for each individual feature, although it took a few weeks for us all to do this efficiently. GitLab has a great feature where you can tightly bind an issue to a merge request, but this caused a couple of problems for me. When the merge request is accepted, the issue is automatically closed. This messes with our workflow, because we want issues to go to the “done” column and only be closed by the product owner. Moving forward, the issues should still be linked with their merge requests, but we will have to take care that the description doesn’t include a “Closes issue” tag.

Furthermore, when a branch is automatically made in GitLab, it is given a very verbose branch name, which is simply annoying if your Git isn’t configured to autocomplete branch names when pressing “tab”. In the future, I will create a new merge request and manually select my already-created branch. Then I will manually link the issue.

The team’s willingness to quickly meet over Discord about an issue we were having was the best thing about this sprint. In the few cases where something occurred outside of class time that required all of us, we were able to set up a time the same day or the next day and resolve the problem. This flexibility to schedule work within the sprint is what helped us get as much work done as we did.

The next sprint will involve combining our individual pieces into a working product that is capable of storing actual checkout transactions. There is still a lot to learn and to do, but we are well on our way to finishing a viable product that we are proud of, albeit with much room to grow in the future. We will have to pay close attention in the next sprint to creating well-written documentation as we combine our API, database, and frontend so that future developers can easily recreate what we’ve done and get it running.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Dig Deeper

For my final blog post on apprenticeship patterns, I wanted to discuss my favorite pattern. Software is so pervasive now that anyone can make a working product with little more than superficial knowledge of a language and a framework. This is great motivation to continue, but it may lead one to erroneously believe they are an expert. Finishing a product, even a successful one, doesn’t make you an expert programmer.

Digging deeper is going below surface knowledge of a technology and learning the nitty-gritty, bit-y details. The caveat is to not become too specialized. The book warns to keep your perspective of the project as a whole, and to only learn as much detail as necessary to help with a given task or problem.

I was originally taught to treat new classes as a black box, and I only found it frustrating once I graduated to more complicated tools. To truly understand how something is meant to work, you have to look inside. Another example: I’ve taken a few introductory classes that used metaphors to explain concepts and/or taught from the top down, adding detail over time. Biology class was boring and difficult because I had to memorize that a blue circle will separate the green, spiked lines so that two red hexagons can copy each of them. It wasn’t until high school, which provided an understanding of underlying chemical reactions, that biology became interesting and easy to remember.

So it is with software. I’ve been exceedingly frustrated with new tools when I tried to play without understanding. Sometimes, it works. Other times, when things begin to get confusing, diving in becomes a necessity. Another caveat: you don’t know what you don’t know, and if you assume you’re doing it right, you may be wrong. Even if it works.

This is another pattern that requires balance. Learning details provides diminishing returns over time, but you should mostly understand why you need to do something a certain way, and how it is working. If you can explain this in simple words, you’re probably on the right track. This applies not only to software tools, but work processes as well.

You may not always agree with how a technology was designed. No one will tell you that the modern Internet is a perfect design, because it has been manipulated into working in a world it wasn’t designed for. Created in a world of text, it now works in a world of streaming video across billions of devices. This would never have been achieved without engineers and developers who understood the basic building blocks of the technology. Be like them.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Improving the Spoken Digit Speech Recognition Machine Learning Model

After getting a simple machine learning model to recognize spoken digits, I was able to begin the iterative process of improving the model. Using only MFCCs, the model was failing more often than desirable, reaching a maximum of 60% accuracy on validation data (my own voice, which was not used in training the model).

Below you will see plots of a sample of results from validating the model. For each digit, there are the extracted MFCC features, the actual spoken digit, the digit predicted by the model, and the certainty. There is also a plot of the certainty for each other digit for that recording.

This is just a sample of a larger validation set, and the actual results of this first model were only 45% accurate. But it shows that for all of these digits except 3 and 5, the model was 99% to 100% certain of the result. The differences in the MFCCs are subtle, but stark differences in color appear more likely to be correct, whereas the 5 is clearly closer in color to the 1 it was mistaken for. Additionally, every single audio clip of 3 was mistaken for a 0 using this model.

Retraining the model with different parameters may help in this case, but we can also hypothesize about the reason for these mistakes. Perhaps the MFCC is finding patterns in vowels that make “zero” and “three” look identical. If that’s the case, features that can detect consonants might help improve results. This sounds pretty obvious anyway, so it might be a good next step on the next iteration.

But first, let’s retrain the model without any changes.

Okay! This 3 was very accurately predicted. But the total accuracy of validation was only 50% (remember, this only shows a sample size of 10). Inspection of the actual results now shows that 3 is sometimes mistaken for a 2, and vice versa. This model is slightly better, but still flawed. Which makes sense, because no changes have been made to the model, and we just got lucky that it learned to be a bit better this time.

I’ve been training with 25 epochs, and getting 95-97% accuracy during training, and 93-97% accuracy using test data (from the same dataset as the training data, which was not used to train the model). Those results are pretty good, so maybe we can use fewer epochs and prevent some overfitting.

This certainly looks promising. With 95% accuracy during training, and 93.8% accuracy using test data, the results are still pretty good. However, the validation data with my voice is now 57.5% accurate! Only a single 3 was mistaken for a 0.

So I’m using a dataset of 4 voices to train and test, and my own voice to validate. But more data is probably better, so let’s use my voice to train the model and take a random sample to validate.

The plot is looking good! Each of these was very accurately predicted. Accuracy was 97% during training and on test data. The validation data was 100% accurate. Of course, now that the validation data contains only voices that were used in training, it’s more likely to be correct. Furthermore, the sample is small. So let’s see what happens if we use a new voice to validate. I had my roommate record himself saying each digit and used only his voice for validation data.

In general, the model is much more certain of its guesses. The final validation result was 80% accuracy, so not perfect, but a major improvement. This much of an improvement came just from adding more data and making small modifications to the model.

The importance of collecting data in order to improve a model is apparent. Even at 80% accuracy, there is still some predictive power. If the model is found to be useful, further data can be collected as it is used, and this new data can be cleaned and used to train better models.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Angular CDK and Popover

With CS 343’s final project beginning, I have looked into some ways of displaying data. For this week, I will focus on Angular’s CDK. In Netanel Basal’s article “Creating Powerful Components with Angular CDK,” Basal describes the process of making an overlay in Angular. He starts by creating a Popover service built on the CDK’s Overlay module, which is used for creating popup overlays, such as info that pops up while the mouse is hovering over something. Basal then creates the PopoverRef and its Injector. The PopoverRef is injected into a ComponentPortal, which is then attached to the origin page; portals dynamically render UI to the page. For the portal, the author creates a custom component to receive the contents to be displayed and decide how to render them. The article then covers three types of content the PopoverComponent can receive: text, template, and component. With that, the article’s example is complete. Now let us get a little more in-depth.

Let’s start with the Popover service. This service provides an open() method that creates the overlay, its contents, and its injector. The service is injected into the app component, where the open() method is utilized in a show() method that is called when a button is clicked in the root component HTML. From what I understand, this service handles the creation of all the parts needed to make the popover through a single open() method.

Next up is the PopoverRef, a class that receives an overlayRef, content, and data, and provides a close() method to dispose of the overlay. It seems that this class is used to store the parent overlay, the content, and the data, along with providing the close() method that removes the popover.

Since Basal wanted to use a custom component, he needed to create an Injector for it. The injector is created in the Popover service in a createInjector() method, which converts the custom injector to a PortalInjector.

The author then attaches the soon-to-be-created Popover component to the overlayRef in the popover service. This is done by overlayRef’s attach() method, where a new ComponentPortal containing the PopoverComponent is attached to the overlayRef.

The Popover component’s job is simple: just to inject the popoverRef and render its contents. This article’s example provides multiple rendering methods depending on content type, the three content types being template, component, and text.

For all three types, a show() method is added to the app component that creates a Popover through the Popover service. The only difference between each content type’s show() is what the popover’s content is set to. This method is called in the app component’s HTML, where it is attached to a button.
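To tie these pieces together, here is a rough TypeScript sketch of what such a service can look like, based on the CDK’s Overlay and Portal APIs. This is my own simplified version, not Basal’s actual code: it leaves out the PopoverRef, the three content types, and the custom injector, and it assumes PopoverComponent is the rendering component described above.

import { Injectable, Injector } from '@angular/core';
import { Overlay, OverlayRef } from '@angular/cdk/overlay';
import { ComponentPortal } from '@angular/cdk/portal';
import { PopoverComponent } from './popover.component';

@Injectable({ providedIn: 'root' })
export class PopoverService {
  constructor(private overlay: Overlay, private injector: Injector) {}

  open(origin: HTMLElement): OverlayRef {
    // Create an overlay positioned relative to the element that triggered it
    const overlayRef = this.overlay.create({
      hasBackdrop: true,
      positionStrategy: this.overlay
        .position()
        .flexibleConnectedTo(origin)
        .withPositions([{ originX: 'center', originY: 'bottom', overlayX: 'center', overlayY: 'top' }])
    });

    // A portal dynamically renders the popover component inside the overlay
    overlayRef.attach(new ComponentPortal(PopoverComponent, null, this.injector));

    // Clicking outside the popover disposes of the overlay
    overlayRef.backdropClick().subscribe(() => overlayRef.dispose());

    return overlayRef;
  }
}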

Reading through this article and examining its code has helped me learn how to create popup overlays. I feel I have a more solid grasp of how Angular components interact with each other. I will undoubtedly use the knowledge I gained from the article, “Creating Powerful Components with Angular CDK” by Netanel Basal, in my final project.

Article Referenced:
https://netbasal.com/creating-powerful-components-with-angular-cdk-2cef53d81cea

From the blog CS@Worcester – D’s Comp Sci Blog by dlivengood and used with permission of the author. All other rights reserved by the author.

Behavior-Driven Development (BDD)

Once again, Martin Fowler has some comments on this week’s topic, but this post will mainly reference Dan Terhorst-North’s post, which better describes the reason and motivation for creating Behavior-Driven Development. BDD arose from TDD and it is better to think of it as an extension of TDD.

BDD tests follow the pattern “Given-When-Then” (notice the similarity to Arrange-Act-Assert). Plain Mockito can express this pattern, but BDDMockito aims to follow the same human-readable principles in its API. The idea is simple: given a precondition, when some other condition occurs, then something else should have happened. Fowler’s example includes checking the state of an object, so clearly BDD is not simply testing the behavior of an object. However, behavior is a very important part.
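BDDMockito itself is a Java API, but the Given-When-Then shape carries over to any language. Here is a minimal sketch in TypeScript with Jasmine-style test syntax; the Account class and its behavior are invented just to show the structure:

// A hypothetical class under test
class Account {
  constructor(public balance: number) {}

  withdraw(amount: number): boolean {
    if (amount > this.balance) {
      return false;
    }
    this.balance -= amount;
    return true;
  }
}

describe('Account', () => {
  it('refuses a withdrawal that exceeds the balance', () => {
    // Given an account with a balance of 50
    const account = new Account(50);

    // When the owner tries to withdraw 100
    const result = account.withdraw(100);

    // Then the withdrawal is refused and the balance is unchanged
    expect(result).toBe(false);
    expect(account.balance).toBe(50);
  });
});

Notice that the test name reads as a sentence describing a behavior, which is exactly Terhorst-North’s point about naming.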

Terhorst-North mentions that BDD picks up where TDD left off. I have seen some evidence that BDD has influenced TDD, such as describing tests as a sentence like “testFailsForDuplicateCustomers()”, which can be seen in many TDD test examples. Imitation is the highest form of flattery, so clearly this is a good approach. Or maybe, BDD is just more consistent in this naming because they put it in the specification.

Regardless, BDD developed out of Agile processes. It aimed to make writing tests part of the entire process and help future developers work well together in doing so. This is where many of Terhorst-North’s ideas stemmed from, and his main point, and the motivation behind the name BDD, is that “‘Behaviour’ is a more useful word than ‘Test’”. If you describe each test as a behavior, you know how to define the test, and you know when the specification has changed enough to warrant deletion of a test.

“What to call your test is easy – it’s a sentence describing the next behaviour in which you are interested. How much to test becomes moot – you can only describe so much behaviour in a single sentence. When a test fails. . . either you introduced a bug, the behaviour moved, or the test is no longer relevant.”

Daniel Terhorst-North, “Introducing BDD”

Real mastery of a subject is when it becomes simple. BDD is the next step to understanding testing on an intuitive, subconscious level. It wasn’t immediately obvious that tests don’t need to be difficult to write, but Terhorst-North managed to figure out a way to make it so. It is another part of the iterative process that is technology. Next time I encounter a difficult concept, I think asking three questions, in order, might help: Am I misunderstanding the concept? Do I just need more practice? Or is this method flawed?

Someone saw a flaw in TDD and developed BDD to improve it. This came only through thorough understanding, practicing, and identifying problems. This is applicable to any career, to provide real value.

From the blog CS@Worcester – Inquiries and Queries by ausausdauer and used with permission of the author. All other rights reserved by the author.

FPL&S 2: Uploading Files Through an API

I must say, this project has gotten much more complicated than I was expecting, even since last week. Not difficult necessarily, but requiring much more knowledge of the framework than I expected. But after a steep hill over the course of the week, the good news is that the features of Angular are much more powerful and exciting than I had thought. While the project specification requires communication with a REST API, which will be used for the database, I also require remote file storage. Since I will be using Google Firebase for both of these services, and file storage has a much simpler API which I’d prefer to use with mobile, I opted not to use the REST API for file storage.

Understanding the Firebase Storage API was the first step, but after reading through much of Google’s documentation for the web API, I still had difficulty translating everything to Angular and TypeScript. But luckily, Google is kind of a big deal these days, so David East from Angular posted a short tutorial that helped me bridge the gap.

The most mindblowing portion of completing this task was the concept of an async pipe, which I will explain shortly. You can’t get through a CS degree without learning and performing asynchronous tasks, but this syntax was completely alien to me. Take this HTML from David East’s post:

<label for="file">File:</label>
<input type="file" (change)="upload($event)" accept=".png,.jpg" />

This demonstrates Angular’s event binding. The (change) syntax binds a change in input to the upload() method in the Component, while also passing the DOM event, from which you can get the File using TypeScript. My needs were a bit different, but this same syntax can be used to simply store the file when the input is changed. Then, a separate upload button’s (click) event can trigger the upload within the Component:

<button class="btn" (click)="submit()">Upload</button>

From there, my Component uses TypeScript to communicate with the AngularFireStorage service, and even updates a progress bar. This is where the async pipe comes in:

<progress max="100" [value]="(uploadProgress | async)"></progress>

More binding! This time, we’re using square brackets to bind the value property of the HTML progress tag to an Observable<number> object, which we can get within the Component from the AngularFireStorage service. This Observable will update the progress bar as the file uploads if we run it through the async pipe, as shown above. The async pipe subscribes to the Observable (or Promise) and automatically unsubscribes when the Component is destroyed.
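For completeness, here is a sketch of what the Component side of those templates might look like, following David East’s tutorial and the AngularFire API; the component name and storage path are placeholders of my own:

import { Component } from '@angular/core';
import { AngularFireStorage } from '@angular/fire/storage';
import { Observable } from 'rxjs';

@Component({
  selector: 'app-uploader',
  templateUrl: './uploader.component.html'
})
export class UploaderComponent {
  file: File;
  uploadProgress: Observable<number>;

  constructor(private storage: AngularFireStorage) {}

  // Bound to (change) on the file input: just hold on to the chosen file
  upload(event: any): void {
    this.file = event.target.files[0];
  }

  // Bound to (click) on the upload button: start the upload and expose
  // its progress as an Observable for the async pipe in the template
  submit(): void {
    const task = this.storage.upload(`uploads/${this.file.name}`, this.file);
    this.uploadProgress = task.percentageChanges();
  }
}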

I highly recommend reading over Angular’s template syntax documentation to get better acquainted with these concepts.

After completing this portion of the project, I’ve determined I’ve been thinking too much in terms of JavaScript, because I’ve found some of TypeScript’s rules to be a hindrance. Some of the examples noted above have shown me that if I get back to an object-oriented, strongly-typed mindset, I will be able to work quicker on future tasks. Essentially, this is just a matter of getting practice with a new language and framework.

From the blog CS@Worcester – Inquiries and Queries by ausausdauer and used with permission of the author. All other rights reserved by the author.

Displaying JUnit Test Reports on GitLab

In this blog post I wanted to look at something interesting that I found a while back this semester when we were learning about JUnit testing and combining it with GitLab. In class we learned how to use Gradle to build our Java programs and run JUnit tests, and how to have GitLab run them when we pushed our code using GitLab’s continuous integration feature. As GitLab’s documentation says, using Gradle and JUnit on GitLab will show whether the tests fail, but I wanted to take this a step further and looked into displaying more information about JUnit test statuses on GitLab itself. This led me to an article in GitLab’s documentation about a feature that can display JUnit test reports on merge requests. As the documentation shows, this can easily be enabled on any Java project that already uses GitLab CI with Gradle to run JUnit tests. All you need to do is add the necessary lines, provided in that document, to the GitLab CI/CD config file to display the test reports in a merge request.
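For reference, the relevant lines look roughly like this (a sketch based on that documentation; the path assumes Gradle’s default test-report location):

# In .gitlab-ci.yml: run the tests with Gradle and hand the
# JUnit XML reports to GitLab so they appear on merge requests.
test:
  stage: test
  script:
    - gradle test
  artifacts:
    when: always
    reports:
      junit: build/test-results/test/**/TEST-*.xml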

I already tried this a couple of months ago when I first found it, but I wanted a nice demonstration of the feature, so I created a little Java test program that has a basic Student class with a JUnit test class. I then converted this to a Gradle project, using some previous programs from this class as examples along with the instructions provided, and pushed it to a new GitLab project. On the master branch of this project the tests pass, as indicated by the passing GitLab CI job. After getting the initial code pushed and passing the tests, I created a testing branch where I changed the code in the Student class so that one of the tests (testSetLastName) would deliberately fail. Creating a merge request on GitLab for this branch and pushing this “broken” code results in the test failing when it runs on GitLab with Gradle, and GitLab therefore displays on the merge request which JUnit test(s) failed:

(Screenshot: the merge request displaying the failed JUnit test.)

I found this little feature pretty awesome in combining software testing with software management tools, and I can easily see how it would be very useful for checking whether new or modified code in a project causes tests to fail. In addition to checking whether the tests pass, this feature lets us easily see which tests fail, directly on the merge request itself, instead of the alternative the documentation describes of looking through reports possibly containing thousands of lines to find the failed test. I will definitely be implementing this on any projects I’m working on that use JUnit tests and are hosted on GitLab.

Link to article: https://docs.gitlab.com/ee/ci/junit_test_reports.html

Link to demo program: https://gitlab.com/cradkowski/gitlab-junit-test-reports-demo

From the blog CS@Worcester – Chris' Computer Science Blog by cradkowski and used with permission of the author. All other rights reserved by the author.