Category Archives: Week 11

Expose Your Ignorance

The explicit context of this pattern is that you are a developer on payroll, tasked with delivering a piece of software to your employer, but you have not previously worked with some technology the project requires. The solution prescribed in this Apprenticeship Pattern is a bold one that leans on previously built rapport with your superiors: it insists on radical honesty, where you admit that you do not presently know everything necessary to deliver on the project and appeal instead to your established competency and your ability to learn new technologies.

What I find most elegant about this approach is not only the ability to tailor current expectations using honesty but also the renewability inherent in this approach:

“In this way, your reputation will be built upon your learning ability rather than what you already know.”

The implication there is that if you stick the landing on delivering the necessary code, you will have created an evergreen approach to setting expectations and earned a reputation as somebody who is valued not for a finite pool of existing knowledge but for the ability to tap into an infinite pool of new knowledge. The authors cite Carol Dweck’s work, particularly how the need to appear competent is a mindset (no pun intended) that has proliferated throughout industrialized societies and is hard to break. They insist that, while the embarrassment may be hard to overcome at first, your peers will notice your rapid progress, and by forcing the issue you may even prompt them to new realizations about their own intelligence; after all, people like to solve problems and feel intelligent.

I enjoyed the juxtaposition the authors draw between those who become experts perhaps by not being especially inquisitive or confrontational with their own inadequacies, and so-called craftsmanship-seekers who become experts by virtue of genuine interest. While the first group may not be as ambitious or aggressive in their pursuit of knowledge, there is no need to disparage them, and I appreciate that the authors did not punch down where many snobbier tech experts certainly would have.

From the blog CS@Worcester – Cameron Boyle's Computer Science Blog by cboylecsblog and used with permission of the author. All other rights reserved by the author.

Don’t Talk to Strangers

The Law of Demeter was proposed by Ian Holland in 1987. During the development of a system called Demeter using object-oriented programming, Holland and his colleagues realized that code which followed a certain series of rules was less coupled. Although it is called the Law of Demeter, it is not really a law, but more of a guideline to help reduce coupling between components. When applying LoD to object-oriented design, there is a set of four rules that formalizes the “Tell, Don’t Ask” principle:

You may call methods of objects that are:
1. Passed as arguments
2. Created locally
3. Instance variables
4. Globals

A great example of this is:

    class User {
        Account account;
        ...
        double discountedPlanPrice(String discountCode) {
            Coupon coupon = Coupon.create(discountCode);
            return coupon.discount(account.getPlan().getPrice());
        }
    }

    class Account {
        Plan plan;
        ...
    }

Here account.getPlan().getPrice() violates the LoD, because User reaches through Account to talk to Plan. The most obvious fix is to delegate/tell:

    class User {
        Account account;
        ...
        double discountedPlanPrice(String discountCode) {
            // delegate
            return account.discountedPlanPrice(discountCode);
        }
    }

    class Account {
        Plan plan;
        ...
        double discountedPlanPrice(String discountCode) {
            Coupon coupon = Coupon.create(discountCode);
            return coupon.discount(plan.getPrice());
        }
    }

Each function should have only a limited amount of knowledge rather than knowing about the whole object map; instead of reaching through our neighbors, we tell our neighboring objects what we need and depend on them to propagate that message to the correct location. Following this rule is hard, which is why many call it the “Suggestion of Demeter”: it is so easy to violate. Following it, though, is extremely beneficial, because any function that “tells” instead of “asks” is decoupled from the rest of the code around it.

The blog I retrieved this information from was https://hackernoon.com/object-oriented-tricks-2-law-of-demeter-4ecc9becad85. The information was extremely easy to follow and understand. The coding examples showing the difference between following the LoD and not following it were simple and clear. I also found the explanation of LoD to be simple and to the point. Going forward, although I understand that following LoD can be hard, I plan to utilize this guideline in order to enhance and simplify all of my future code.

By following the LoD in future coding endeavors, my code will be easier to test, my classes will be easier to reuse, the amount of coupling and dependency between classes will be reduced, and my code will be more flexible when I make changes to it and more maintainable overall.

Information gathered for this blog:
https://medium.com/better-programming/demeters-law-don-t-talk-to-strangers-87bb4af11694
https://hackernoon.com/object-oriented-tricks-2-law-of-demeter-4ecc9becad85
https://en.wikipedia.org/wiki/Law_of_Demeter

From the blog cs@worcester – Coding_Kitchen by jsimolaris and used with permission of the author. All other rights reserved by the author.

REST APIs

This week on my CS journey, I want to look closely at the topic of REST API design. I know we have been doing several activities on the topic in class, and the homework assignment is associated with it; however, I wanted to be very knowledgeable on the topic, so I decided to do more research. REST is an acronym for Representational State Transfer. A REST API is a way for two computer systems to communicate over HTTP, similar to the way web browsers and servers do. Let’s start by looking at what an API is: an API is an application programming interface, a set of rules that allow programs to talk to each other. The developer generally creates the API on the server and allows the client to talk to it, and REST determines how that API should look.

Now let’s look at the anatomy of a request. An API request has four main parts: the endpoint, the method, the headers, and the data or body. When an API interacts with another system, the touchpoints of that communication are called endpoints. Each endpoint is the location from which APIs can access the resources they need to carry out their function. APIs work using “requests” and “responses”: each call to a URL is a request, while the data sent back to you is a response.

Generally, there are five main methods: GET, POST, PUT, PATCH, and DELETE. These methods provide meaning for the request you’re making. They are used to perform the four possible actions of Create, Read, Update, and Delete, also known as CRUD. Next, the headers are used to provide information to both the client and the server. They can be used for many purposes, such as authentication and describing the body content. Lastly, the body, or data, contains the information you want sent to the server. This part is only used with POST, PUT, PATCH, or DELETE requests.
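To make those four parts concrete, here is a minimal sketch using Java’s built-in HttpClient (Java 11+). The endpoint URL, header values, and JSON body here are hypothetical placeholders for illustration, not a real API:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RestRequestExample {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Endpoint: the URL identifying the resource (hypothetical)
            // Method:   POST, one of the five main methods
            // Headers:  describe the body content and authenticate the client
            // Body:     the data being sent to the server
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/users"))
                    .header("Content-Type", "application/json")
                    .header("Authorization", "Bearer <token>")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"name\": \"Ada\"}"))
                    .build();

            // The response carries a status code and its own body
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }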

Overall, I learned a lot from this blog. The source I used explained the topic very well, and I highly recommend checking it out, because it has a variety of examples and documents covering what you need to know about REST APIs in order to read API documentation and use APIs effectively. It also goes deep into the methods and what a request with each of them means. I think it is very important to understand these concepts, because companies all over the world are using APIs to transfer vital information, processes, transactions, and more.


Sources:
https://www.smashingmagazine.com/2018/01/understanding-using-rest-api/
https://www.sitepoint.com/developers-rest-api/

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

What is a REST API


API is short for Application Programming Interface, which describes a class library’s characteristics or how to use it. Your personal library may contain “API documentation” of its available functionality.

A REST API in API Gateway is a collection of resources and methods integrated with back-end HTTP endpoints, Lambda functions, or other AWS services. You can use API Gateway features to help you with all aspects of the API lifecycle, from creation through monitoring your production APIs.

API Gateway REST APIs use a request/response model: a client sends a request to a service, and the service responds synchronously. This kind of model is suitable for many different kinds of applications that depend on synchronous communication.

When many people refer to API documentation these days, they often mean an HTTP API that shares application data over the web. For example, Twitter provides an API that allows users to request tweets in a specific format so they can easily import them into their own applications. This is where the HTTP API is potent: it can mix and match data from multiple applications into a combined application, or create an application that enhances the experience of using other people’s applications.

It is an interface that allows us to view, create, edit, and delete pieces of data.

REST is shorthand for Representational State Transfer, which was proposed by Roy Fielding to describe a standard way of creating an HTTP API; he found that the four common behaviors (view, create, edit, and delete) could all be mapped directly to implementations in HTTP.

The different HTTP methods are GET, POST, PUT, DELETE, OPTIONS, HEAD, TRACE, and CONNECT.

Most of the time, when you’re browsing the web in your browser, you’re using the HTTP GET method, as GET is used only to request resources from the Internet. When you submit a form, you often use the POST method to send data back to the site. As for the other methods, some browsers may not fully implement them at all, but that’s fine for our purposes. We have many HTTP methods to choose from to help describe these four behaviors, and we will use client libraries that already know how to use the different HTTP methods.

REST API benefits:

Resource oriented and easy to understand.

To read a resource you use GET (GET is safe: it does not modify the resource); to create one you use POST (POST is not safe); to update one you use PUT (PUT is idempotent); and to remove one you use DELETE (DELETE is idempotent).

Traditional CRUD would require four different interfaces, but a REST API needs only one resource URL, distinguishing the different requests by their HTTP method.
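As a rough sketch of that last point, all four CRUD operations can target the same hypothetical resource URL, varying only the HTTP method (shown here with Java’s built-in HttpClient; the URL and bodies are made up for illustration):

    import java.net.URI;
    import java.net.http.HttpRequest;

    class CrudRequests {
        static final String URL = "https://api.example.com/items/42"; // hypothetical resource

        // Read: GET is safe and does not modify the resource
        static final HttpRequest READ =
                HttpRequest.newBuilder(URI.create(URL)).GET().build();

        // Update: PUT is idempotent; repeating it leaves the same state
        static final HttpRequest UPDATE = HttpRequest.newBuilder(URI.create(URL))
                .PUT(HttpRequest.BodyPublishers.ofString("{\"name\": \"new name\"}"))
                .build();

        // Delete: DELETE is idempotent as well
        static final HttpRequest DELETE =
                HttpRequest.newBuilder(URI.create(URL)).DELETE().build();

        // Create: POST (not idempotent) goes to the collection, not the single item
        static final HttpRequest CREATE =
                HttpRequest.newBuilder(URI.create("https://api.example.com/items"))
                        .POST(HttpRequest.BodyPublishers.ofString("{\"name\": \"new item\"}"))
                        .build();
    }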

Source:

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-from-example.html

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.

GRASP (General Responsibility Assignment Software Patterns)

Hello everyone, and welcome to week 11 of the coding journey blog. In this week’s post I will be talking about the design pattern acronym GRASP, which is short for General Responsibility Assignment Software Patterns. Design patterns are essential for software developers, and we will dive deeper into the benefits of using GRASP.

Originally, the GRASP design patterns were introduced after the Gang of Four book, which details commonly used software design patterns. GRASP answers the question of what role each part of the software should play, that is, which responsibilities it should be assigned. There is the Controller, whose essential role is to encapsulate a system operation, something the user is trying to perform such as purchasing an item; the system operation is carried out by one or more method calls between the software objects, and the Controller also provides a layer between the UI and the domain model. Then there is the Creator, which helps decide which class should be responsible for creating a new instance of another class. There is a pattern known as High Cohesion, which keeps objects understandable and manageable; an example of this is breaking classes down into different subclasses for different roles, making each easier to follow in the bigger picture. Then there is the Indirection principle, which supports low coupling by giving interaction responsibility to an intermediate object. Another part of GRASP is the Information Expert, which gives guidelines for assigning responsibilities, such as methods, to the classes that have the information needed to fulfill them.

Also in the GRASP design patterns there is Low Coupling, which guides how to assign responsibilities so as to lower the dependency between classes, reduce the impact that changes in one class have on another, and raise reuse potential. Polymorphism is a concept most of us know about, because it is one of the principles of object-oriented programming; in brief, the GRASP pattern of the same name provides guidelines on how to use this object-oriented feature in your design. Then there is Protected Variations, which protects elements from variations in other elements by wrapping the unstable point with an interface and using polymorphism for the various implementations. The last of the GRASP principles is Pure Fabrication, a class made up to achieve low coupling and high cohesion even though it does not represent a concept in the problem domain.
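As a small illustration of two of these principles, Creator and Information Expert, consider the following Java sketch; the Order and LineItem classes and their methods are my own invention for the example, not taken from the articles below:

    import java.util.ArrayList;
    import java.util.List;

    // Information Expert: Order holds the line items, so it is the natural
    // home for the logic that computes the order total.
    class Order {
        private final List<LineItem> items = new ArrayList<>();

        // Creator: Order aggregates LineItems, so Order creates them.
        void addItem(String name, double unitPrice, int quantity) {
            items.add(new LineItem(name, unitPrice, quantity));
        }

        double total() {
            double sum = 0;
            for (LineItem item : items) {
                sum += item.subtotal(); // each item is the expert on its own subtotal
            }
            return sum;
        }
    }

    class LineItem {
        private final String name;
        private final double unitPrice;
        private final int quantity;

        LineItem(String name, double unitPrice, int quantity) {
            this.name = name;
            this.unitPrice = unitPrice;
            this.quantity = quantity;
        }

        double subtotal() {
            return unitPrice * quantity;
        }
    }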

I personally think that the GRASP design concepts cover many essential components of programming and of creating real-world software. Many of these components are used in the everyday software and features we see without even recognizing it. As I learn more in my coding journey and take in more concepts, I will most certainly take the GRASP principles into account in my future projects to make them easier.

For more resources on this topic check out these links:

https://dzone.com/articles/solid-grasp-and-other-basic-principles-of-object-o
https://medium.com/@ReganKoopmans/understanding-the-grasp-design-patterns-2cab23c7226e

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.

A Different Road

Just because they’re not on your road doesn’t mean they’ve gotten lost.

-H. Jackson Brown Jr., Life’s Little Instruction Book

The “A Different Road” pattern shows us exactly how to follow our map and recall what we know. We might walk down a road for some time and then realize that this route is not an acceptable option according to the map we have drawn, because we have found a way that is more in line with our current values. Based on this pattern, even if we permanently leave the road, we will keep the values and principles we established along the way. The pattern gives examples of people who decided to move on to something else and later came back, bringing new ideas to a company. It is okay to set people free when they have different ideas, and to welcome them back along with the new insights they have discovered. Unfortunately, traditional software companies are not so welcoming; such detours are often seen as questionable gaps in your career that you must explain later. You would hope that your reasons for leaving and for coming back would be understood and respected. In any case, this shouldn’t be an obstacle for someone who wants to pursue their dream one way or another.

What got my attention in this pattern is that we shouldn’t be afraid to do something else with our lives, no matter the risks. The skills we have gained during our journey will not leave us, and at some point they will be useful wherever we go. As software developers, that experience will enrich whatever we choose for our future. Leaving the software development journey behind to become a professor or an instructor could be one option; we may like it or not, but in the end it’s all about finding where we and our knowledge fit best. I had always thought that leaving a company and coming back after a while wouldn’t look good, but based on this pattern it wouldn’t be so bad. There are several reasons why we would or wouldn’t want to return to a previous employer, but let’s be positive and do what we feel is right. If what you were doing didn’t feel right for your career, you have the chance to return at a higher level and be where you always wanted to be, whether in your previous company or another one.

From the blog CS@Worcester – Gloris's Blog by Gloris Pina and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns: Nurture Your Passion

The next Apprenticeship pattern I would like to discuss is titled “Nurture Your Passion.” This pattern is targeted at software developers whose work environments drain them of their passion for creating software. It emphasizes that a passion for software craftsmanship is crucial for improving our skills, and that we should take steps to protect our passion if we find ourselves in such an environment. The pattern suggests several techniques we can use to strengthen our passion for software development. These include investing time into enjoyable projects, joining groups that focus on our interests, and changing our work environments.

As I mentioned in my post on “Breakable Toys,” I have struggled to stay passionate about programming since I started college. When software development became the focus of my education, I stopped working on personal projects because they took time away from more important (yet far less enjoyable) class assignments. This really damaged my ability to enjoy programming, and I think that following this pattern’s tips could help me regain some of my passion. I already expressed my desire to start working on personal projects on my own time again in my “Breakable Toys” post. While this pattern recommends this once again, it also provides several new suggestions that I think could be just as useful.

The pattern first recommends focusing on enjoyable topics while working as a way to make work less draining. This suggestion changes my perspective on how I should go about my work, as I have generally not prioritized my own interests during assignments. For example, I have recently been working on testing for my group’s project in CS-448, which has been exhausting for me since I dislike writing tests. I might try to contribute to more interesting aspects for the remainder of the project so that I can be more invested in it. The pattern also recommends joining groups and reading books that focus on topics of interest. Groups haven’t really worked for me in the past, and I’ve never been a fan of reading, but knowing that these options could help nurture my passion might make them worth trying. Finally, the pattern recommends having a list of positive ideas to talk about whenever work conversations become exhausting. Although I’m not great at conversation, I think this might be an action worth taking. Even if I never have a work environment that completely engages me, talking about my interests with others might be enough to keep my passions alive.

From the blog CS@Worcester – Computer Science with Kyle Q by kylequad and used with permission of the author. All other rights reserved by the author.

Capstone Sprint 2 Retrospective

This second sprint brought with it some challenges in moving to online classes during the ongoing epidemic, but also a stronger grasp of communication and documentation using GitLab and Discord, both out of necessity and through intentional effort. We have learned to work better within the LibreFoodPantry workflow and are ready to go into our next sprint with our REST API, database, and frontend all working in isolation.

My contributions

Create file with definition of done.

Change to the default .gitignore file for Spring and remove unnecessary tracked files left over from before we added the .gitignore.

Research internationalization support and decide that it is better saved until a more final version of the front-end is complete.

Research Angular Testing and create a Spike project that covers most cases we will encounter.

Integrate ID scanner to get Student ID with an Angular component and create tests to have 100% code coverage.

Retrospective

For the first half of the sprint, we were still having weekly meetings to work together. One of our troubles last sprint was that we were discussing things in person and not doing well at documenting the reasons for the decisions we made. We improved on this even while still having in-person meetings. By the second half, although we were all coping with the changes brought on by moving to online classes, we did well in keeping each other updated and communicating through GitLab. In hindsight, it’s probably a good experience to be forced to do this, especially if this epidemic inspires more software companies to promote working from home.

The biggest issue we had as a team was working with merge requests. There were a couple of cases where code on a feature branch was not kept up to date with the master branch. As a result, there were a lot of merge conflicts to work through together as a team. Overall, resolving these as a team was a good experience, because this is bound to happen when working in tandem with version control. From now on, though, we will be reminding ourselves to pull changes from origin/master as we work on our local branches.
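A minimal way to do that while on a feature branch, assuming our remote is named origin as in a default GitLab clone, is:

    # bring the local copy of master up to date
    git fetch origin

    # fold the latest master into the feature branch
    git merge origin/master    # or: git rebase origin/master

    # resolve any conflicts now, while they are still small, then push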

We also improved at creating merge requests for each individual feature, although it took a few weeks for us all to do this efficiently. GitLab has a great feature where you can tightly bind an issue to a merge request, but this caused a couple of problems for me. When the merge request is accepted, the issue is automatically closed. This interferes with our workflow, because we want issues to sit in the “done” column and be closed only by the product owner. Moving forward, the issues should still be linked with their merge requests, but we will have to take care that the description doesn’t include a closing pattern such as “Closes #123”.

Furthermore, when a branch is automatically made in GitLab, it creates a very verbose branch name, which is simply annoying if your Git isn’t configured to autocomplete branch names when pressing “tab”. In the future, I will create a new merge request and manually select my already-created branch. Then I will manually link the issue.

The team’s willingness to quickly meet over Discord about an issue we were having was the best thing about this sprint. In the few cases where something occurred outside of class time that required all of us, we were able to set up a time the same day or the next day and resolve the problem. This flexibility to schedule work within the sprint is what helped us get as much work done as we did.

The next sprint will involve combining our individual pieces into a working product that is capable of storing actual checkout transactions. There is still a lot to learn and to do, but we are well on our way to finishing a viable product that we are proud of, albeit with much room to grow in the future. We will have to pay close attention in the next sprint to creating well-written documentation as we combine our API, database, and front end, so that future developers can easily recreate what we’ve done and get it running.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Dig Deeper

For my final blog post on apprenticeship patterns, I wanted to discuss my favorite pattern. Software is so pervasive now that anyone can make a working product with little more than superficial knowledge of a language and a framework. This is great motivation to continue, but it may lead one to erroneously believe they are an expert. Finishing a product, even a successful one, doesn’t make you an expert programmer.

Digging deeper is going below surface knowledge of a technology and learning the nitty-gritty, bit-y details. The caveat is to not become too specialized. The book warns to keep your perspective of the project as a whole, and to learn only as much detail as necessary to help with a given task or problem.

I was originally taught to treat new classes as a black box, and I only found it frustrating once I graduated to more complicated tools. To truly understand how something is meant to work, you have to look inside. Another example: I’ve taken a few introductory classes that used metaphors to explain concepts and/or taught from the top down, adding detail over time. Biology class was boring and difficult because I had to memorize that a blue circle will separate the green, spiked lines so that two red hexagons can copy each of them. It wasn’t until high school, which provided an understanding of underlying chemical reactions, that biology became interesting and easy to remember.

So it is with software. I’ve been exceedingly frustrated with new tools when I tried to play without understanding. Sometimes it works; other times, when things begin to get confusing, diving in becomes a necessity. Another caveat: you don’t know what you don’t know, and if you assume you’re doing it right, you may be wrong, even if it works.

This is another pattern that requires balance. Learning details provides diminishing returns over time, but you should mostly understand why you need to do something a certain way, and how it is working. If you can explain this in simple words, you’re probably on the right track. This applies not only to software tools, but work processes as well.

You may not always agree with how a technology was designed. No one will tell you that the modern Internet is a perfect design, because it has been manipulated into working in a world it wasn’t designed for. Created in a world of text, it now works in a world of streaming video across billions of devices. This would never have been achieved without engineers and developers who understood the basic building blocks of the technology. Be like them.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.

Improving the Spoken Digit Speech Recognition Machine Learning Model

After getting a simple machine learning model to recognize spoken digits, I was able to begin the iterative process of improving the model. Using only MFCCs, the model was failing more often than desirable, reaching a maximum of 60% accuracy on validation data (my own voice, which was not used in training the model).

Below you will see plots of a sample of results from validating the model. For each digit, there are the extracted MFCC features, the actual spoken digit, the digit predicted by the model, and the certainty of the prediction. There is also a plot of the certainty for each of the other digits for that recording.

This is just a sample of a larger validation set; the actual results for this first model were only 45% accurate. But it shows that for all of these digits except 3 and 5, the model was 99% to 100% certain of the result. The differences in the MFCCs are subtle, but stark differences in color appear more likely to be classified correctly, whereas 5 is clearly closer in color to 1, which it was mistaken for. Additionally, every single audio clip of 3 was mistaken for a 0 using this model.

Retraining the model with different parameters may help in this case, but we can also hypothesize about the reason for these mistakes. Perhaps the MFCC is finding patterns in vowels that make “zero” and “three” look identical. If that’s the case, features that can detect consonants might help improve results. This sounds pretty obvious anyway, so it might be a good next step on the next iteration.

But first, let’s retrain the model without any changes.

Okay! This 3 was very accurately predicted, but the total validation accuracy was only 50% (remember, this shows a sample size of only 10). Inspection of the actual results now shows that 3 is sometimes mistaken for a 2, and vice versa. This model is slightly better, but still flawed, which makes sense: no changes have been made to the model, and we just got lucky that it learned to be a bit better this time.

I’ve been training with 25 epochs, getting 95-97% accuracy during training and 93-97% accuracy on test data (data from the same dataset as the training data, but not used to train the model). Those results are pretty good, so maybe we can use fewer epochs and prevent some overfitting.

This certainly looks promising. With 95% accuracy during training, and 93.8% accuracy using test data, the results are still pretty good. However, the validation data with my voice is now 57.5% accurate! Only a single 3 was mistaken for a 0.

So far I’ve been using a dataset of 4 voices to train and test, and my own voice to validate. But more data is probably better, so let’s add my voice to the training data and take a random sample of all voices to validate.

The plot is looking good! Each of these was very accurately predicted. Training and test accuracy were 97%, and the validation accuracy was 100%. Of course, now that the validation data contains only voices that were used in training, it’s more likely to be correct. Furthermore, the sample is small. So let’s see what happens if we use a new voice to validate: I had my roommate record himself saying each digit and used only his voice as validation data.

In general, the model is now much more certain of its guesses. The final validation result was 80% accuracy: not perfect, but a major improvement. This much of an improvement was achieved just by adding more data and making small modifications to the model.

The importance of collecting data in order to improve a model is apparent. Even with 80% accuracy, there is still some predictive power. If this can be found to be useful, further data can be collected as it is used and this new data can be cleaned and used to train better models.

From the blog CS@Worcester – Inquiries and Queries by James Young and used with permission of the author. All other rights reserved by the author.