Category Archives: CS-343

Refactoring

Hello and welcome to the last post of my blog. I’ve really enjoyed writing these throughout the semester since it helped me learn more about computer science and programming in general. For this last week, I will talk about refactoring. It is one of the most important concepts to grasp in programming because it can greatly improve your code over time. Refactoring is a way of rewriting and improving code without changing the functionality of the original code. Your code may be hard for other programmers to read or follow, so you may want to refactor it to make it easier to understand.

Your code should be clean so you don’t end up with technical debt later on. Rushing your coding to meet deadlines will pretty much guarantee messy code, so you should start refactoring early on so that the mess does not keep piling up. When messy code keeps piling up, it becomes more of a burden to refactor and the programmer may become less motivated to fix it. When you refactor code, you want to make sure all your variable and method names make sense and provide some context. For example, the variable name “x” does not say anything about what it stores or where it is used. Instead, your variable name should be more specific, like “accountNumber.” More specific names also help other programmers identify exactly what your code does.
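A tiny, hypothetical example of this kind of rename (the function and values are made up) might look like this in TypeScript. The behavior is identical before and after, but the intent becomes clear:

    // Before refactoring: the name "x" says nothing about what it stores.
    // function charge(x: number, amount: number): string {
    //   return `Charged ${amount} to account ${x}`;
    // }

    // After refactoring: same behavior, clearer intent.
    function charge(accountNumber: number, amount: number): string {
      return `Charged ${amount} to account ${accountNumber}`;
    }

    console.log(charge(1042, 25)); // "Charged 25 to account 1042"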

You want to start looking at refactoring at specific times during your coding process. Refactoring Guru talks about the “Rule of Three,” which means that when you find yourself coding the same thing for the third time, you should consider refactoring. Also consider refactoring when you add a new feature, because someone else’s code might be too messy to read and you need to clean it up first; that also makes it easier to implement additional features later down the line. Finally, consider refactoring close to the deadline of your project, since it will be your last chance to make changes before the project becomes open to the public, and you do not want other people to see your messy code.

When you refactor, the code should become cleaner while still maintaining its functionality. Sometimes you may have to completely rewrite parts of your code to achieve this; other times it may be as simple as adding some spacing or renaming a variable. All your tests should still pass after refactoring; if a test fails, you made an error somewhere along the way.

In the past, I have done some refactoring myself and I hope to continue to practice refactoring to keep my code clean, easy to understand, and easy to manage.

https://refactoring.guru/refactoring

From the blog Comfy Blog by and used with permission of the author. All other rights reserved by the author.

Angular (No .js)

Looking at more frontend frameworks, this blog post was originally going to take a look at Angular.js, but with its EOL approaching on December 31, 2021, I thought it would be better to look at a framework that is still receiving support from its developers. Angular (no .js) is another framework that allows you to develop front-end applications. Angular works off of four main concepts: Components, Templates, Directives, and Dependency Injection.

Components in Angular (like in other frameworks and libraries) are the bread and butter of the framework. You need them in order to design your application. Every component in Angular has three pieces that make it whole. The first piece is an HTML template, which is responsible for what can be seen on the page. The second piece involves the use of TypeScript. TypeScript works like JavaScript but adds support for types, as in object-oriented programming; a class written in TypeScript defines the behavior of the component. The third piece is a CSS selector, which determines how the component is used.
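A minimal sketch of what such a component might look like (the component name and contents are made up for illustration):

    import { Component } from '@angular/core';

    @Component({
      selector: 'app-greeting',              // CSS selector: how the component is used, e.g. <app-greeting>
      template: '<p>Hello, {{ name }}!</p>'  // inline HTML template (templateUrl would point to a separate file instead)
    })
    export class GreetingComponent {
      name = 'Angular';                      // the TypeScript class defines the component's data and behavior
    }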

Templates are the working HTML parts of your application. They can be declared in a component in one of two ways: “template,” which lets you define the content of the component inline, or “templateUrl,” which lets you define the content of the component through a reference to another file. It’s important to note that only one of these declarations can be used in a component at a time. Because a template only represents the fragment of HTML it refers to, the developer can write it without the surrounding boilerplate tags of a full page.

Directives are classes that allow you to add extra functionality to the elements in your application. Angular currently uses three types of directives. Components are the first type, defined as a directive with a template. The second type is the attribute directive, whose purpose is to change the appearance and/or behavior of an element within the application. The third type is the structural directive, which lets you change the Document Object Model (DOM) layout by adding or removing DOM elements.
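A minimal attribute-directive sketch (the selector name is made up, but it follows the usual Angular pattern):

    import { Directive, ElementRef } from '@angular/core';

    @Directive({
      selector: '[appHighlight]'   // applied as an attribute: <p appHighlight>...</p>
    })
    export class HighlightDirective {
      constructor(el: ElementRef) {
        // Change the appearance of the host element the directive is attached to.
        el.nativeElement.style.backgroundColor = 'yellow';
      }
    }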

Dependency Injection is a design pattern. Angular uses this pattern whenever a class needs an outside service in order to carry out its functions. Instead of creating a new instance every time a service is needed, which can be wasteful, the needed service(s) can be requested at runtime and referred to from there.
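A short sketch of the idea (the service and component names are made up): the component never constructs the service itself; Angular supplies a shared instance at runtime.

    import { Injectable, Component } from '@angular/core';

    @Injectable({ providedIn: 'root' })   // one shared instance is provided for the whole app
    export class LoggerService {
      log(message: string): void {
        console.log(message);
      }
    }

    @Component({
      selector: 'app-checkout',
      template: '<button (click)="buy()">Buy</button>'
    })
    export class CheckoutComponent {
      // Angular injects the existing LoggerService instance instead of the
      // component creating its own copy.
      constructor(private logger: LoggerService) {}

      buy(): void {
        this.logger.log('Purchase made');
      }
    }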

Angular even has its own Command Line Interface (CLI) to help make using the framework simpler for the developer.

I learned about the Angular framework in the link below. You can use it to find out more about things I didn’t get to mention in this post.

https://angular.io/guide/understanding-angular-overview

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.

Why Vue

There are several frontend frameworks available to pick from, so why do we use Vue? To research Vue and learn about its benefits, I decided to read blogs from Vue Mastery, specifically one written by Lauren Ramirez.

  • Vue does not use up too much memory. Vue allows us to import only the pieces of the library that we need, which means whatever we don’t use will be removed for us via tree shaking.
  • The virtual DOM (Document Object Model) uses compile-based optimizations, resulting in faster rendering times.
  • To work with Vue, we did not have to be experts in HTML, CSS, and JavaScript before starting. It was surprising how easily we were able to learn as we went.
  • Vue has many libraries that can be added as needed. Some of which are:
    • Vue Router (client-side routing)
    • Vuex (state management)
    • Vue Test Utils (unit testing)
    • vue-devtools (debugging browser extension)
    • Vue CLI (for rapid project scaffolding and plugin management)
  • One of Vue’s best features is the Composition API (sketched just after this list):
    • We are able to group features into composition functions and then call them in setup, instead of having large, unreadable, unmaintainable code directly in setup.
    • We are able to extract features out of components into these functions. This means we don’t have to keep re-writing code and can avoid useless repetition.
  • Vue has enhanced support for TypeScript users as well.
  • In Vue, we are able to use multi-root components. In most front-end frameworks, a component template must contain exactly one root element, because sibling root elements aren’t allowed. The traditional way around that problem is functional components: components that receive no reactive data, which means they don’t watch for data changes or update themselves when something in the parent component changes. However, they are instance-less, so you cannot get a reference to them, and everything has to be passed through the context. With the multi-root component support of Vue 3, there are no such restrictions and we can use any number of tags inside the template section.
  • Vue 3 gives us the Teleport component, which allows us to specify template HTML that we can send to another part of the DOM. Sometimes a piece of a component’s template logically belongs to that component, but it would be preferable to render it somewhere else. This is useful for things like modals, which may need to be placed outside of the body tag or outside the Vue app.
  • Most importantly, Vue is open source. Vue has complete freedom to be community-driven, and its bottom line is the satisfaction of its end users. It doesn’t have to answer to company-specific feature demands or corporate bureaucracy.
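Here is the rough Composition API sketch mentioned above (the composable and component are made up for illustration, and it assumes a Vue 3 build that compiles templates in the browser). The logic for one feature lives in a reusable composition function, and the template also shows multiple root elements:

    import { createApp, defineComponent, ref } from 'vue';

    // A reusable composition function ("composable") that groups one feature's logic.
    function useCounter() {
      const count = ref(0);
      const increment = () => { count.value++; };
      return { count, increment };
    }

    const Counter = defineComponent({
      // Vue 3 allows multiple root elements in a template.
      template: `
        <p>Clicked {{ count }} times</p>
        <button @click="increment">Click me</button>
      `,
      setup() {
        // Call the composable instead of writing the logic inline in setup().
        return useCounter();
      }
    });

    createApp(Counter).mount('#app');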

Source: https://www.vuemastery.com/blog/why-vue-is-the-best-framework-for-2021-and-beyond/

From the blog CS-WSU – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Encapsulation

For this week’s blog post, I have found an article about encapsulation. Encapsulation is more than defining accessor and mutator methods. It also follows two main objectives: hiding complexity and hiding the sources of change. To understand encapsulation, we first need to understand the two concepts of modularity and abstraction. The article’s author, Edwin Dalorzo, uses the example of a car to explain abstraction. He says, “A car is complex in its internal working. They have several subsystems, like the transmission system, the brake system, the fuel system, etc. However, we have simplified its abstraction, … we know that all cars have a steering wheel through which we control direction, they have a pedal that when we press it we accelerate the car and control speed, … These features constitute the public interface of the car abstraction” (Edwin Dalorzo). This is a great example because abstraction lets us hide away the parts that users do not need to understand. The concept is similar to modularity. In his book Code Complete, Steve McConnell says of complexity that “the interface should reveal as little as possible about its inner workings.” With those two concepts in place, the idea of encapsulation starts to unravel.

One of the things that we want to always encapsulate in Java is the state of a class. This should only be accessed through its public interface. Edwin says on encapsulation in Java, “In a object-oriented programming language like Java, we achieve encapsulation by hiding details using the accessibility modifiers … With these levels of accessibility we control the level of encapsulation, the less restrictive the level, the more expensive change is when it happens and the more coupled the class is with other dependent classes (i.e. user classes, subclasses, etc.).” (Edwin Dalorzo).  It is crucial that we keep this idea in mind while we design encapsulation for these public interfaces so that we can foster evolution of our APIs.

As discussed in the article, people often wonder why we need accessor and mutator methods in Java, aka getters and setters. With encapsulation in mind, they are there not to hide the data itself but to hide the implementation details of how the data is stored and manipulated. We still need a public interface to gain access to the data, but if we exposed it directly we would risk losing encapsulation, which is why we access it through these methods instead.
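A small sketch of the idea (written here in TypeScript rather than Java, with made-up names): callers go through the public accessors, while how the balance is actually stored stays hidden and can change freely.

    class Account {
      private balanceInCents = 0;   // the stored representation is an implementation detail

      get balance(): number {
        return this.balanceInCents / 100;            // exposed to callers as dollars
      }

      set balance(dollars: number) {
        this.balanceInCents = Math.round(dollars * 100);
      }
    }

    const account = new Account();
    account.balance = 12.5;
    console.log(account.balance); // 12.5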

https://dzone.com/articles/why-encapsulation-matters

From the blog CS@worcester – Michale Friedrich by mikefriedrich1 and used with permission of the author. All other rights reserved by the author.

GRASP

In this final week of blogging for CS-343, I wanted to look over the General Responsibility Assignment Software Principles, or GRASP.

I took to learning about GRASP from Code Specialist at https://code-specialist.com/code-principles/grasp/, which discussed all nine GRASP principles and provided diagrams and code examples.

I’ve learned that the nine principles are: controller, creator, indirection, information expert, low coupling, high cohesion, polymorphism, protected variations, and pure fabrication.

Controller: helps “control” (not implement) events indirectly related to the user interface. It acts as a mediator between signals from the user interface and the backend.

Creator: a class responsible for the creation of certain objects, such as objects of class A. This principle gives a few rules for deciding that class B should create A: B aggregates instances of A, B contains A objects, B records instances of A, B closely uses A objects, or B has the initializing data needed when A is created.
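A minimal Creator sketch (Order and LineItem are made-up classes): because Order aggregates and contains LineItem objects, the principle points to Order as the right place to create them.

    class LineItem {
      constructor(public product: string, public quantity: number) {}
    }

    class Order {
      private items: LineItem[] = [];   // Order aggregates and contains LineItem objects...

      // ...so by the Creator principle, Order creates them.
      addItem(product: string, quantity: number): LineItem {
        const item = new LineItem(product, quantity);
        this.items.push(item);
        return item;
      }
    }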

Indirection: is an idea that works with other concepts, like low coupling. It serves to avoid direct coupling. For example, the controller is an indirection between the UI and backend.

Information expert: assign an operation to the class that contains the information needed to carry it out. The operation should live where most of the data it needs is stored.

Low coupling: the idea of little interdependency between modules. By having few dependencies between modules, it would be less complicated to make changes to the code.

High Cohesion: describes the flow inside modules, not between them. High cohesion mainly helps to reduce complexity. Classes should be made to only fit their purposes and not go beyond that scope. We should not have large classes that do not really relate and are hard to work with.

Polymorphism: there are many variations of the same method and they work differently in different classes.

Protected variations: wrap unstable code in a stable environment. The parts that are likely to change can be hidden behind an interface, and multiple implementations of that interface can be swapped in without affecting the rest of the code.
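A minimal Protected Variations sketch (the interface and classes are made up): callers depend only on the stable interface, so the unstable implementations behind it can change or multiply freely.

    interface PaymentProcessor {                 // the stable interface callers depend on
      charge(amountInCents: number): boolean;
    }

    class FakeProcessor implements PaymentProcessor {
      charge(amountInCents: number): boolean {
        console.log(`Pretending to charge ${amountInCents} cents`);
        return true;
      }
    }

    // Depends only on the interface, never on a concrete implementation.
    function checkout(processor: PaymentProcessor, amountInCents: number): void {
      if (processor.charge(amountInCents)) {
        console.log('Payment accepted');
      }
    }

    checkout(new FakeProcessor(), 2500);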

Pure fabrication: creating a class that does not represent a concept in the real-world problem domain, but exists to support low coupling and high cohesion.

I thought this was a nice source to learn from because the page is neatly organized and provides diagrams and some sample code. Many of these principles interconnect with one another to help reduce the complexity of code. High cohesion reminded me of the single-responsibility principle from the SOLID design principles, in which a class has one responsibility and should not cover more than that. It makes classes easier to understand, and fewer changes need to be made to a class when it covers fewer functions. All of the principles I have learned will help me write neater code in the future as I keep in mind the need to reduce interdependence and complexity.

From the blog CS@Worcester – CS With Sarah by Sarah T and used with permission of the author. All other rights reserved by the author.

CS-343 Post #7

I wanted to focus my last blog post on frontend development because I think I have a better understanding of the backend than the frontend. While I think I get how the frontend works and what is going on in it, I had trouble in the latest activity adding to the frontend we were given and connecting it to the backend. I also had to make some edits to my assignment, so I can talk about what I had trouble with.

First, with the last homework assignment, I had a few issues adding onto my frontend. When I made the new filterByName button, I did not add it to the data() section, which caused an error. I added the button and the status I used for it to the data() section and gave them default values similar to what was already there. My other two issues had to do with my filterNames method: I was stating the endpoint to get names incorrectly, and I had the resulting response assigned to the wrong variable. I was a little stuck at first changing my endpoint name, because I changed it to just /items instead of items/{name}, which I had before and which is not the correct way to reach the endpoint. I was trying to add a name variable to the get method as a way of getting the names, but after looking through the frontend example, specifically versions v2 and v3, I saw that I could just reach the names of the items endpoint with /items + itemName. The last issue was sending the response to the wrong variable: I had it set to a names variable, but I changed it to a filter variable to show that the result is the filtered output, not a change to the actual names in the endpoint.
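As a rough, hypothetical sketch (the endpoint path, field names, and use of fetch here are my own guesses, not the actual assignment code), the pieces fit together something like this:

    import { defineComponent } from 'vue';

    export default defineComponent({
      data() {
        return {
          itemName: '',   // bound to the text input next to the filterByName button
          filter: [],     // holds the filtered result returned by the backend
          status: ''      // status value used by the filterByName button
        };
      },
      methods: {
        async filterNames() {
          // Build the endpoint from the items path plus the item name, e.g. /items/apples
          const response = await fetch('/items/' + this.itemName);
          // Store the result in the filter variable instead of overwriting the names themselves.
          this.filter = await response.json();
          this.status = response.ok ? 'ok' : 'error';
        }
      }
    });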

I also did some more reading to see if there was anything I was missing when it came to understanding the frontend. I read an article called “What is Front-End Web Development?” by Trio Developers, and most of the article was a review of what I knew from the previous article I read for this blog: the frontend makes up the interface of the program that users interact with, and it uses different elements that can be interacted with to move through the site, such as drop-down menus and sliders. The frontend is immersive for the user interacting with the site.

The article goes on to talk about what frontend developers do: their jobs focus on developing interfaces, improving applications, and working on UI/UX designs and their usability and feasibility. It also goes into the differences between the frontend and backend, but I already talked about those in the previous blog post.

I think I correctly fixed my mistakes on my homework assignment, and while the information from the article was mostly review, it was still a good read and I liked how it talked more about developers and what their jobs are.

https://trio.dev/blog/front-end-web-development

From the blog Jeffery Neal's Blog by jneal44 and used with permission of the author. All other rights reserved by the author.

Exploring Pipe-And-Filter Architecture

https://www.dossier-andreas.net/software_architecture/pipe_and_filter.html

The Pipe-And-Filter architecture is conceptually very simple. It essentially consists of breaking down one operation into a sequence of smaller operations, in which the input of each is the output of the previous. These operations are called “filters” and the connectors linking them are called “pipes”. Sometimes the terms “pump” and “sink” are used to refer to the initial input and final output, respectively. I think making up those last two terms is a little excessive, but overall I like the metaphor – it makes me think of an actual physical machine, which is similar to how I prefer to think about computing in general.

The most well-known example of this pattern is seen in Unix and Unix-like operating systems, which are also the most obvious example of this pattern’s utility. There is no need for any program to have word counting functionality, because its output can be piped into wc, a program that only does that. Similar functionality for pattern matching is provided by grep. With this ecosystem of programs in place, a skilled user of a Unix shell has a great deal of functionality available to them through composing these programs in different ways, as opposed to creating new programs from scratch that do slightly different things than what the programs on their machine already do. Another common example is compilers, which also function in a similar way in order to streamline and simplify the process of translating between languages. The OpenGL rendering pipeline has similar motivations, only for processing graphics instead of programs.
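To make the metaphor concrete, here is a rough sketch of the same idea expressed as function composition in TypeScript (the filters and log text are made up; in a real shell this would be a program's output piped through something like grep and then wc):

    // Each "filter" takes the previous filter's output as its input.
    const splitLines = (text: string): string[] => text.split('\n');
    const matchErrors = (lines: string[]): string[] => lines.filter(line => line.includes('error'));
    const countLines = (lines: string[]): number => lines.length;

    // The "pump" is the raw input; the "sink" is the final count.
    const log = 'ok: started\nerror: disk full\nok: retrying\nerror: timeout';
    console.log(countLines(matchErrors(splitLines(log)))); // 2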

One drawback of a pipe-and-filter system is that it has the potential to introduce too much overhead. Being able to pipe something into wc rather than counting words yourself is a more flexible solution, but it does involve running a separate program. At small scales (i.e. most use cases) this isn’t an issue, but if your data is large enough you may need to abandon this approach.

Before looking into this, I wasn’t really aware of the pipe-and-filter architecture as a distinct pattern. I was aware of how the Unix ecosystem worked, but I thought the practice of piping together small programs in sophisticated ways was just a quirk of that operating system. I didn’t connect the dots that it was also the same basic concept being used in graphics pipelines, even though I was aware of them. I also have always regarded compilers as a little magical, and seeing that their workflow can be decomposed like this makes them seem a little more approachable.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Frontend Architecture

My experience with HTML programming outside of classwork pretty much begins and ends with making forum posts look fancy on 2012-era message boards, and I have even less experience designing frontends with HTML. I wanted to use this blog post to learn more about how to build frontends. Matias Lorenzo’s “Frontend Architecture and Best Practices for Consuming APIs” seemed like a very good start.

Lorenzo notes that issues begin to arise when designing a frontend because that frontend is often dependent on a backend API. This means that whenever the backend or the API undergoes a change, the frontend must also be changed. Lorenzo describes some other issues one might encounter when integrating an API into their frontend. APIs may use lengthy key names that necessitate mass replacement. APIs might structure data in a way that is difficult to fully utilize, or they might include too much data.

Lorenzo proposes a frontend architecture, which he titles model-controller-serializer, that further isolates the frontend from the backends it depends on so that it is more stable when the backend changes. He illustrates with a diagram how this architecture fetches information from an API; in his example, it fetches products for a storefront.

Controllers in this architecture are the point of contact between the rest of the app and the API. This portion of the app should be able to create any relevant request. Information comes in through the controller, is deserialized by the serializer, then is passed to the model to create instances of that information.

Serializers in this architecture decipher information coming in from the API. If an API includes information that is not relevant to the app, the serializer can remove it. The serializer can also change the type of a field (e.g., from a string to a boolean). Similarly, if the API uses long or clunky names for fields, the serializer can rename the field before passing it along.

Models in this architecture are representations of information from the API that can be read by the app. They store information as an object in JavaScript (or whatever language is being used in the app) so that it is easy to access and understand.
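A condensed sketch of how these three pieces might fit together (the API shape, field names, and endpoint are invented for illustration, not taken from Lorenzo’s article):

    // Model: an app-friendly representation of a product.
    class Product {
      constructor(public id: number, public name: string, public inStock: boolean) {}
    }

    // Serializer: translates the API's clunky field names and types into the model's shape.
    interface RawProduct {
      product_identifier: number;
      product_display_name: string;
      stock_status: string;
    }

    const serializeProduct = (raw: RawProduct): Product =>
      new Product(raw.product_identifier, raw.product_display_name, raw.stock_status === 'available');

    // Controller: the only point of contact between the app and the API.
    async function fetchProducts(): Promise<Product[]> {
      const response = await fetch('https://api.example.com/products'); // hypothetical endpoint
      const rawProducts: RawProduct[] = await response.json();
      return rawProducts.map(serializeProduct);
    }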

I chose this source because I wanted a more in-depth look at frontends. I know what a frontend does, but I would be at a loss for how to design one. The frontend architecture Lorenzo describes seems really useful for keeping the frontend and backend separate, and I will be keeping this reference in mind for future projects.

From the blog CS@Worcester – Ciampa's Computer Science Blog by robiciampa and used with permission of the author. All other rights reserved by the author.

Differences in REST APIs

In class we have been working on a REST API, frontend and backend, that supports a food drive. So far, we have a database that stores items grouped into orders. Each order has an ID and its items, along with attached preferences, restrictions, and an email address. We developed HTTP calls to manipulate this data in the REST API: ways for the admins to manage the data from the backend, and ways for the users to interact with their orders and items through the frontend.
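As a hypothetical sketch (the exact routes and fields in our class project differ; these are made up for illustration), a couple of those HTTP calls might look like this:

    // Create a new order for the food drive.
    async function createOrder(email: string, items: string[]): Promise<unknown> {
      const response = await fetch('/orders', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ email, items })
      });
      return response.json();
    }

    // Read one order back by its ID.
    async function getOrder(id: string): Promise<unknown> {
      const response = await fetch(`/orders/${id}`);
      return response.json();
    }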

I thought it would be interesting to take a look at REST APIs that are available through the internet, whether individually made, professionally made, or company made. The number of APIs out there is extremely broad and vast, which makes it very exciting. There is essentially an API for everything: news, health, gaming, PayPal, etc. There is literally everything available to you to implement in your own web service app.

This website here is a database full of APIs that are available for you to use.

Link: https://rapidapi.com/hub

This led me to a couple of interesting ones. For example, there is a gaming API for Call of Duty: Modern Warfare. It’s neat: you can implement this API into your web service, and users will be able to get access to multiple statistics from the game. Say you wanted to develop a gaming website where users build teams, go to a match finder, play other teams, and improve their placing on a website leaderboard. If Call of Duty is one of the games these users play against each other, you could implement this API and allow users to show off their in-game stats.

Try taking this site even further. Imagine giving each user an account that has an empty balance. If there were a way for them to add their own money to the balance, then you could create wagers on the match finder: the match would require a price to enter, and if the players have the funds in their balance, it will allow them to play. You would need to implement some way for users to add money to their accounts. That’s when you implement the PayPal API.

Paypal API Link: https://developer.paypal.com/docs/api-basics/

Since PayPal interacts with users’ banking and other extremely confidential information, you need developer access to work with a PayPal API. When you create a sandbox or live REST API app, PayPal generates a set of OAuth 2.0 client ID and secret credentials for the sandbox or live environment, and you must obtain an access token to be authorized. To go live with a PayPal API, your application must be accepted by PayPal before it gets access to any accounts. The way they accomplish this is by giving you a sandbox that acts like a live environment but isn’t connected to any real accounts. Once all your testing and debugging is done, that’s when you apply to use it live.

From the blog CS-WSU – Andrew Sychtysz Software Developer by Andrew Sychtysz and used with permission of the author. All other rights reserved by the author.

Getting Started with JavaScript and the DOM

Christian Shadis

As I transition into the final semester of my undergraduate degree, I plan to learn JavaScript. In my Software Construction, Design, and Architecture class, we briefly explored and edited some frontend code in Node.js and Vue.js. Using my prior knowledge of Java, I was able to understand how most of the code was functioning. As JavaScript continues to rise in popularity, it would be a valuable language to add to my skillset. One of the fundamental aspects of learning JavaScript, or so it seems, is understanding the Document Object Model (DOM). Having used Jon Duckett’s book HTML & CSS: Design and Build Web Sites to supplement my knowledge of web design, I decided to consult the fifth chapter of his JavaScript and jQuery: Interactive Front-End Web Development to learn more about the DOM.

Duckett first specifies the DOM as a set of rules, separate from the HTML and CSS of the website, which “specifies how browsers should create a model of an HTML page and how JavaScript can access and update the contents of a web page while it is in the browser window” (Duckett 184). In other words, the DOM acts analogously to an API in that it facilitates or enables real-time communication between JavaScript code and the HTML page. Duckett proceeds to describe the DOM tree, the benefits of caching DOM queries, and how to traverse the DOM tree. He also described how to access, update, add, and delete HTML content from the page using the DOM. The chapter included a bit of bonus information such as cross-site scripting attack prevention and how to view the DOM in each of the major web browsers.
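As a small illustrative sketch (the element IDs are made up, not taken from Duckett’s examples), accessing and updating the page through the DOM looks something like this:

    // Select an existing element and update its content.
    const heading = document.getElementById('title');
    if (heading) {
      heading.textContent = 'Updated via the DOM';
    }

    // Create a new element and add it to the page.
    const item = document.createElement('li');
    item.textContent = 'A new list item';
    document.getElementById('list')?.appendChild(item);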

Reading this chapter was useful in contextualizing how the different parts of a webpage interact with each other. In addition to learning the fundamentals of the DOM, I also gained experience reading JavaScript code, which is a great way to learn how a language works. What stood out to me most about the material were the parallels between the DOM nodes and the LinkedList data structure. Having coded an implementation of that data structure in a previous course, I found it intuitive to see how traversal of the DOM tree worked.

I plan to continue to learn JavaScript, and plan to read the remainder of the book. Web design is becoming more ubiquitous by the day, and thus a more valuable tool for developers to have. I would highly recommend this book and others by Jon Duckett to developers – the design is pleasing, he provides sample code, and offers excellent simplistic explanations.

Duckett, J. (2014). Document Object Model. In JavaScript & JQuery: Interactive Front-end web development (pp. 183–242). essay, John Wiley & Sons.

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.