Category Archives: CS-343

Encapsulation

For this week’s blog post, I have found an article about encapsulation. Encapsulation is more than defining accessor and mutator methods. It also serves two main objectives: hiding complexity and hiding the sources of change. To understand encapsulation, we first need to understand the two concepts of modularity and abstraction. Edwin Dalorzo, the author of the article, uses the example of a car to explain abstraction. He writes, “A car is complex in its internal working. They have several subsystems, like the transmission system, the break system, the fuel system, etc. However, we have simplified its abstraction, … we know that all cars have a steering wheel through which we control direction, they have a pedal that when we press it we accelerate the car and control speed, … These features constitute the public interface of the car abstraction” (Edwin Dalorzo). This is a great example because abstraction lets us hide the internal details that users do not need to understand. This concept is closely related to modularity. In his book Code Complete, Steve McConnell writes that “the interface should reveal as little as possible about its inner workings”. With these two concepts in place, the idea of encapsulation starts to become clear.

One of the things that we always want to encapsulate in Java is the state of a class, which should only be accessed through its public interface. On encapsulation in Java, Dalorzo writes, “In a object-oriented programming language like Java, we achieve encapsulation by hiding details using the accessibility modifiers … With these levels of accessibility we control the level of encapsulation, the less restrictive the level, the more expensive change is when it happens and the more coupled the class is with other dependent classes (i.e. user classes, subclasses, etc.).” (Edwin Dalorzo). It is crucial to keep this idea in mind while designing the public interfaces of our classes so that our APIs can evolve.

As the article discusses, people often wonder why we need accessor and mutator methods in Java, a.k.a. getters and setters. With encapsulation in mind, their purpose is not to hide the data itself but to hide the implementation details of how the data is manipulated. We still reach the data through a public interface, but exposing the underlying fields directly would risk losing encapsulation, which is why we keep them behind these methods.
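The article’s examples are in Java, but the same idea carries over to JavaScript, which we also use in this course. Here is a minimal sketch of my own with a made-up Temperature class: the field itself stays private, and only the accessor and mutator methods are public, so the way the value is stored and validated can change later without breaking any code that uses the class.

```javascript
// Hypothetical example (not from the article): the #celsius field is private,
// so callers depend only on the public interface, not on how the value is stored.
class Temperature {
  #celsius = 0;

  get celsius() {
    return this.#celsius;
  }

  set celsius(value) {
    // The validation rule lives in one place; it can change later
    // without touching any of the classes that use Temperature.
    if (Number.isNaN(Number(value))) {
      throw new Error("Temperature must be a number");
    }
    this.#celsius = Number(value);
  }

  // The stored representation could switch to Fahrenheit internally and
  // this public interface would not have to change.
  get fahrenheit() {
    return this.#celsius * 9 / 5 + 32;
  }
}

const t = new Temperature();
t.celsius = 21;
console.log(t.fahrenheit); // 69.8
```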

https://dzone.com/articles/why-encapsulation-matters

From the blog CS@worcester – Michale Friedrich by mikefriedrich1 and used with permission of the author. All other rights reserved by the author.

GRASP

In this final week of blogging for CS-343, I wanted to look over the General Responsibility Assignment Software Principles, or GRASP.

I took to learning about GRASP from Code Specialist at https://code-specialist.com/code-principles/grasp/, which discussed all nine GRASP principles and provided diagrams and code examples.

I’ve learned that the nine principles are: controller, creator, indirection, information expert, low coupling, high cohesion, polymorphism, protected variations, and pure fabrication.

Controller: helps “control” (not implement) events indirectly related to the user interface. It acts as a mediator between signals from the user interface and the backend.

Creator: a class responsible for the creation of certain objects, such as objects of class A. This principle gives a few rules for choosing the creator B: B aggregates instances of A, B contains A objects, B records instances of A, B closely uses A objects, or B has the initializing data needed when A is created.

Indirection: an idea that works with other concepts, like low coupling, to avoid direct coupling. For example, the controller is an indirection between the UI and the backend.

Information expert: responsibility for an operation should be assigned to the class that holds the information needed to carry it out. The operation should live where most of the input it needs is stored.

Low coupling: the idea of little interdependency between modules. By having few dependencies between modules, it would be less complicated to make changes to the code.

High Cohesion: describes the flow inside modules, not between them. High cohesion mainly helps to reduce complexity. Classes should be made to fit only their purpose and not go beyond that scope; we should not have large classes full of loosely related functionality that are hard to work with.

Polymorphism: the same method can have many variations, each implemented differently in different classes.

Protected variations: wrap unstable code in a stable environment. The unstable code can sit behind a stable interface that allows multiple implementations, so changes to it do not spread to the code that uses it.

Pure fabrication: creating a class that does not represent a real-world problem, but serves to support low coupling and high cohesion.
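To make a couple of these concrete, here is a small made-up JavaScript sketch of my own (not from the Code Specialist page) that shows polymorphism and protected variations together: each notifier class implements the same notify method in its own way, and the calling code stays protected from those variations because it only relies on that shared interface, which also keeps coupling low.

```javascript
// Hypothetical sketch: each notifier implements the same notify() method
// differently (polymorphism), and sendAll() is protected from variations
// because it only relies on that stable interface.
class EmailNotifier {
  notify(user, message) {
    console.log(`Emailing ${user}: ${message}`);
  }
}

class SmsNotifier {
  notify(user, message) {
    console.log(`Texting ${user}: ${message}`);
  }
}

function sendAll(notifiers, user, message) {
  // Low coupling: this function never needs to know which concrete class it has.
  for (const notifier of notifiers) {
    notifier.notify(user, message);
  }
}

sendAll([new EmailNotifier(), new SmsNotifier()], "Sarah", "Build passed");
```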

I thought this was a nice source to learn from because the page is neatly organized and provides diagrams and some sample code. Many of these principles are interconnected with one another to help reduce the complexity of code. High cohesion reminded me of the single-responsibility principle from SOLID, in which a class has one responsibility and should not cover more than that. Such classes are easier to understand, and fewer changes need to be made to a class that covers fewer functions. All of these principles will help me write neater code in the future as I keep in mind the need to reduce interdependence and complexity.

From the blog CS@Worcester – CS With Sarah by Sarah T and used with permission of the author. All other rights reserved by the author.

CS-343 Post #7

I wanted to focus my last blog post on frontend development because I have a better understanding of the backend than the frontend. While I think I get how the frontend works and what is going on in it, I had trouble in the latest activity adding to the frontend we were given and connecting it to the backend. I also had to make some edits to my homework assignment and can talk about what I had trouble with.

First, with the last homework assignment, I had a few issues adding onto my frontend. When I made the new filterByName button, I did not add it to the data() section, which caused an error. I added the button and the status I used for it to the data() section and gave them default values similar to what was already there. My other two issues had to do with my filterNames method: I was stating the endpoint for getting names incorrectly, and I had the resulting response assigned to the wrong variable. I was a little stuck changing my endpoint because at first I used just /items instead of items/{name}, which is what I had before and is not the correct way to reach the endpoint. I was trying to add a name variable to the get method as a way of getting the names, but after looking through the frontend example, specifically versions v2 and v3, I saw that I could just reach the item I wanted with "/items/" + itemName. The last issue was sending the response to the wrong variable: I had it set to a names variable, but I changed it to a filter variable to show that the result is the filtered list, not a change to the actual names in the endpoint.
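A rough sketch of the shape my fix ended up taking is below; the names here are simplified placeholders rather than the exact code from my assignment.

```javascript
import axios from "axios";

// Simplified sketch, not the exact assignment code.
export default {
  data() {
    return {
      itemName: "", // bound to the new filterByName input
      filter: [],   // holds the filtered result, separate from the full item list
    };
  },
  methods: {
    async filterNames() {
      try {
        // Reach the specific item endpoint instead of just /items
        const response = await axios.get("/items/" + this.itemName);
        // Assign the result to the filter variable, not the original names
        this.filter = response.data;
      } catch (err) {
        console.error(err);
      }
    },
  },
};
```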

I also did some more research to see if there was anything I was missing when it came to understanding the frontend. I read an article called “What is Front-End Web Development?” by Trio Developers, and most of it was a review of what I knew from the previous article I read for the blog: the frontend makes up the interface of the program that users interact with, and it uses different elements that users can interact with to move through the site, such as drop-down menus and sliders. The frontend is what makes the site immersive for the user.

The article goes on to talk about what frontend developers do: their jobs focus on developing interfaces, improving applications, and working on UI/UX designs and their usability and feasibility. It also goes into the differences between the frontend and backend, but I already covered those in the previous blog post.

I think I correctly fixed my mistakes on my homework assignment, and while the information from the article was mostly review, it was still a good read and I liked how it talked more about developers and what their jobs are.

https://trio.dev/blog/front-end-web-development

From the blog Jeffery Neal's Blog by jneal44 and used with permission of the author. All other rights reserved by the author.

Exploring Pipe-And-Filter Architecture

https://www.dossier-andreas.net/software_architecture/pipe_and_filter.html

The Pipe-And-Filter architecture is conceptually very simple. It essentially consists of breaking down one operation into a sequence of smaller operations, in which the input of each is the output of the previous. These operations are called “filters” and the connectors linking them are called “pipes”. Sometimes the terms “pump” and “sink” are used to refer to the initial input and final output, respectively. I think making up those last two terms is a little excessive, but overall I like the metaphor – it makes me think of an actual physical machine, which is similar to how I prefer to think about computing in general.

The most well-known example of this pattern is seen in Unix and Unix-like operating systems, which are also the most obvious example of this pattern’s utility. There is no need for any program to have word counting functionality, because its output can be piped into wc, a program that only does that. Similar functionality for pattern matching is provided by grep. With this ecosystem of programs in place, a skilled user of a Unix shell has a great deal of functionality available to them through composing these programs in different ways, as opposed to creating new programs from scratch that do slightly different things than what the programs on their machine already do. Another common example is compilers, which also function in a similar way in order to streamline and simplify the process of translating between languages. The OpenGL rendering pipeline has similar motivations, only for processing graphics instead of programs.
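The Unix examples are shell programs, but the basic shape of the pattern is easy to sketch in JavaScript. This toy example of my own treats each filter as a small function and the pipe as plain function composition; it is not how wc or grep are actually implemented.

```javascript
// Toy pipe-and-filter sketch: each "filter" is a small function, and pipe()
// acts as the connector that feeds one filter's output into the next.
const pipe = (...filters) => (input) =>
  filters.reduce((data, filter) => filter(data), input);

const splitLines = (text) => text.split("\n");                        // pump -> lines
const grep = (pattern) => (lines) => lines.filter((l) => l.includes(pattern));
const wc = (lines) => lines.length;                                   // sink: count

const countTodoLines = pipe(splitLines, grep("TODO"), wc);

console.log(countTodoLines("TODO: fix bug\ndone\nTODO: write tests")); // 2
```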

One drawback of a pipe-and-filter system is that it can introduce too much overhead. Being able to pipe something into wc rather than counting words yourself is a more flexible solution, but it does involve running a separate program. At small scales (i.e. most use cases) this isn’t an issue, but if your data is large enough you may need to abandon this approach.

Before looking into this, I wasn’t really aware of the pipe-and-filter architecture as a distinct pattern. I was aware of how the Unix ecosystem worked, but I thought the practice of piping together small programs in sophisticated ways was just a quirk of that operating system. I didn’t connect the dots that it was also the same basic concept being used in graphics pipelines, even though I was aware of them. I also have always regarded compilers as a little magical, and seeing that their workflow can be decomposed like this makes them seem a little more approachable.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Frontend Architecture

My experience with HTML outside of classwork pretty much begins and ends with making forum posts look fancy on 2012 message boards, and I have even less experience designing frontends with HTML. I wanted to use this blog post to learn more about how to build frontends. Matias Lorenzo’s “Frontend Architecture and Best Practices for Consuming APIs” seemed like a very good place to start.

Lorenzo notes that issues begin to arise when designing a frontend because that frontend is often dependent on a backend API. This means that whenever the backend or the API undergoes a change, the frontend must also be changed. Lorenzo describes some other issues one might encounter when integrating an API into their frontend. APIs may use lengthy key names that necessitate mass replacement. APIs might structure data in a way that is difficult to fully utilize, or they might include too much data.

Lorenzo proposes a frontend architecture, titled model-controller-serializer, that further isolates the frontend from the backends it depends on so that it stays stable when the backend changes. His article includes a diagram of how this architecture fetches information from an API; in his example, it is fetching products for a storefront.

Controllers in this architecture are the point of contact between the rest of the app and the API. This portion of the app should be able to create any relevant request. Information comes in through the controller, is deserialized by the serializer, then is passed to the model to create instances of that information.

Serializers in this architecture decipher information coming in from the API. If an API includes information that is not relevant to the app, the serializer can remove this information. The serializer can also change the type of a field (e.g., from a string to a boolean). Similarly, if the API uses long or clunky names for fields, the serializer can alter the name of the field before passing it along.

Models in this architecture are representations of information from the API that can be read by the app. These store information as an object in JavaScript (or whatever language is being used in the app) so that it is easy to access and understand.
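To make this concrete for myself, here is a compressed sketch of how the three pieces might fit together in plain JavaScript; the field names and URL are my own placeholders, not Lorenzo’s actual code.

```javascript
import axios from "axios";

// Model: an app-friendly representation of a product.
class Product {
  constructor({ id, name, inStock }) {
    this.id = id;
    this.name = name;
    this.inStock = inStock;
  }
}

// Serializer: trims and renames the raw API payload (hypothetical keys).
const productSerializer = (raw) => ({
  id: raw.product_identifier,       // shorten a clunky API key
  name: raw.product_display_name,
  inStock: raw.inventory_count > 0, // change a number into a boolean
});

// Controller: the only place that talks to the API directly.
async function fetchProducts() {
  const response = await axios.get("https://api.example.com/products");
  return response.data.map((raw) => new Product(productSerializer(raw)));
}
```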

I chose this source because I wanted a more in-depth look at frontends. I know what a frontend does, but I would be at a loss for how to design one. The frontend architecture Lorenzo describes seems really useful for keeping the frontend and backend separate, and I will be keeping this reference in mind for future projects.

From the blog CS@Worcester – Ciampa's Computer Science Blog by robiciampa and used with permission of the author. All other rights reserved by the author.

Differences in REST APIs

In class we have been working on a REST API, frontend and backend, that supports a Food Drive. So far, we have a database that stores items within orders. Each order has an ID as well as its items, and has preferences, restrictions, and an email attached. We developed HTTP calls to manipulate this data through the REST API, ways for the admins to manage this data in the backend, and ways for the users to interact with their orders and items in the frontend.

I thought it would be interesting to take a look at the REST APIs that are available on the internet, whether individually made, professionally made, or company made. The number of APIs out there is extremely broad and vast, which makes it very exciting. There is essentially an API for everything: news, health, gaming, PayPal, etc. There is literally everything available to you to implement in your own web service app.

This website here is a database full of APIs that are available for you to use.

Link: https://rapidapi.com/hub

This led me to a couple of interesting ones. For example, a gaming API for Call of Duty: Modern Warfare. It’s neat: you can implement this API into your web service and users will be able to get access to multiple statistics from the game. Say you wanted to develop a gaming website where users build teams, go to a match finder, play other teams, and improve their placing on a website leaderboard. If Call of Duty is one of the games these users play against each other, you could implement this API and allow users to show off their in-game stats.

Try bringing this site even further. Imagine giving each user an account that has an empty balance. If there is a way for them to add their own money to the balance, then you could create wagers on the match finder: the match would require an entry price, and if the players have the funds in their balance, it will allow them to play. You would need to implement some way for the users to add money to their accounts, and that’s when you implement the PayPal API.

PayPal API Link: https://developer.paypal.com/docs/api-basics/

Since PayPal handles users’ banking and other extremely confidential information, you need developer access to work with the PayPal API. When you create a sandbox or live REST API app, PayPal generates a set of OAuth 2.0 client ID and secret credentials for the sandbox or live environment, and you must obtain an access token to be authorized. To go live with a PayPal API, your application must be accepted by PayPal before it can access any real accounts. The way they accomplish this is by giving you a sandbox that acts like a live environment but isn’t connected to any accounts. Once all your testing and debugging is done, that’s when you apply to use it live.
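Roughly, requesting a sandbox access token with Axios might look like the sketch below. This is based on the token endpoint described in PayPal’s REST API docs, so double-check the current documentation before relying on the exact URL or fields.

```javascript
import axios from "axios";

// Sketch based on PayPal's documented sandbox token flow; verify against the
// current docs, since endpoints and requirements can change.
async function getSandboxAccessToken(clientId, clientSecret) {
  const response = await axios.post(
    "https://api-m.sandbox.paypal.com/v1/oauth2/token",
    "grant_type=client_credentials",
    {
      auth: { username: clientId, password: clientSecret }, // Basic auth with your sandbox credentials
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
    }
  );
  return response.data.access_token; // sent as a Bearer token on later calls
}
```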

From the blog CS-WSU – Andrew Sychtysz Software Developer by Andrew Sychtysz and used with permission of the author. All other rights reserved by the author.

Getting Started with JavaScript and the DOM

Christian Shadis

As I transition into the final semester of my undergraduate degree, I plan to learn JavaScript. In my Software Construction, Design, and Architecture class, we briefly explored and edited some frontend code in Node.js and Vue.js, and using my prior knowledge of Java, I was able to understand how most of the code was functioning. As JavaScript continues to rise in popularity, it would be a valuable language to add to my skillset. One of the fundamental aspects of learning JavaScript, or so it seems, is to understand the Document Object Model (DOM). Having used Jon Duckett’s book HTML & CSS: Design and Build Web Sites to supplement my knowledge of web design, I decided to consult the fifth chapter of his JavaScript and jQuery: Interactive Front-End Web Development to learn more about the DOM.

Duckett first describes the DOM as a set of rules, separate from the HTML and CSS of the website, which “specifies how browsers should create a model of an HTML page and how JavaScript can access and update the contents of a web page while it is in the browser window” (Duckett 184). In other words, the DOM acts analogously to an API in that it enables real-time communication between JavaScript code and the HTML page. Duckett proceeds to describe the DOM tree, the benefits of caching DOM queries, and how to traverse the DOM tree. He also describes how to access, update, add, and delete HTML content from the page using the DOM. The chapter includes a bit of bonus information, such as preventing cross-site scripting attacks and how to view the DOM in each of the major web browsers.
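To give a flavor of the access and update operations Duckett covers, here is a tiny sketch of my own (the element ids are made up, not taken from the book).

```javascript
// Grab an element from the DOM tree, read it, and update it in place.
const heading = document.getElementById("page-title"); // hypothetical id
console.log(heading.textContent);        // access the current content

heading.textContent = "Welcome back!";   // update the page without reloading

// Add new content by creating a node and attaching it to the tree.
const note = document.createElement("p");
note.textContent = "Content added through the DOM.";
document.body.appendChild(note);
```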

Reading this chapter was useful in contextualizing how the different parts of a webpage interact with each other. In addition to learning the fundamentals of the DOM, I also gained experience reading JavaScript code, which is a great way to learn how a language works. What stood out to me most about the material was the parallels between the DOM nodes and the LinkedList data structure. Having coded an implementation of the data structure in a previous course, it was intuitive to see how traversal of the DOM tree worked.
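That parallel is easy to see in a quick sketch of my own (again, the ids are invented): moving between DOM nodes looks a lot like following the references of a linked list.

```javascript
// Traversing the DOM tree feels like walking a linked structure.
const item = document.getElementById("first-item");   // hypothetical id

const parent = item.parentNode;                 // up to the containing node
const next = item.nextElementSibling;           // like node.next in a linked list
const previous = item.previousElementSibling;   // like node.previous

// Walk every child of the parent, much like iterating a LinkedList.
for (const child of parent.children) {
  console.log(child.textContent);
}
```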

I plan to continue learning JavaScript and to read the remainder of the book. Web design is becoming more ubiquitous by the day, and thus a more valuable tool for developers to have. I would highly recommend this book and others by Jon Duckett to developers – the design is pleasing, he provides sample code, and he offers excellent, simple explanations.

Duckett, J. (2014). Document Object Model. In JavaScript & jQuery: Interactive Front-End Web Development (pp. 183–242). John Wiley & Sons.

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Blog Post # 7: Review of software components used in our CS-343 Software Architecture class

While attempting to brush up on the frontend web development knowledge presented in my Software Architecture class at WSU, I found a YouTube video [1] that covers a stack of free software tools and apps that fits closely with what the class used.

Our class has used the following tools to give us a real-world, cutting-edge view of what software engineers are using, primarily geared toward enterprise-level web application development. They are all open source and free for everyone to use.

1. Docker – A containerization platform used to run software components in isolated containers. It is more efficient and easier to use than virtualization environments like VMware and VirtualBox, as well as Microsoft’s Hyper-V, which may be a contender in the future but is “too much tool” for most current business needs.

2. Visual Studio Code – A free, lightweight, surprisingly usable and extensible integrated development environment (IDE). The limitations imposed by its small footprint are more than made up for by the vast number of third-party extensions that exist for it.

3. Operating Systems: Another huge advantage of this stack of products is that most if not all of them are available in some form on Windows 10/11, macOS, and many forms of Linux.

4. Design Tools: PlantUML – Fantastic in unison with Swagger Preview to produce useful design documents.

5. For Backend Development: Used on servers, in association with databases, to store and secure data.

  • Node.js – The most commonly used open-source JavaScript runtime for backend development.
  • MongoDB – A NoSQL database that stores data as JSON-like documents rather than in relational tables.
  • Express – A really handy web framework that runs on Node.js (a minimal example follows this list).
  • Swagger Preview – A tool used to show JSON or YAML API specifications in a mock UI format. It allowed us to test all of the JSON/REST database calls from our backend projects.
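As a quick illustration of how little code the Node.js/Express combination needs, here is a generic minimal server sketch of my own, not the class project’s actual backend.

```javascript
const express = require("express");
const app = express();

app.use(express.json()); // parse JSON request bodies

// A generic example endpoint, not one from the class project.
app.get("/items", (req, res) => {
  res.json([{ id: 1, name: "Canned soup" }]);
});

app.listen(3000, () => console.log("Backend listening on port 3000"));
```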

6. For Frontend Development: Used to write websites (or mobile applications) to interact with users.

  • Vue.js – A JavaScript framework used to build UIs and SPAs (single-page applications). IMHO, it is at least as useful as Express is on the backend.
  • Axios – Integrating this package helps with making asynchronous HTTP requests to REST endpoints.
  • JavaScript – Of course, but with the help of Vue and Axios, not much barebones JS was really needed.

7. Source Code and CI/CD:

  • Git – I have used Git on a number of projects in the past and have had trouble with it at times because I was used to more traditional, non-distributed source control systems. The instruction in this class has shown me how amazing it really is, and it will remain my highest-regarded version control system.
  • GitLab – I have used GitHub in the past and am now more of a fan of GitLab. Three cheers for its original developers, Dmitriy Zaporozhets and Valery Sizov. It is a web-based DevOps lifecycle tool supporting Git, issue tracking, and a continuous integration and deployment pipeline.

A few helpful hints I picked up from the YouTube video are:

  1. GUID Generator: UUIDV4 is a tool that can be installed using NPM (the Node package manager) and will generate globally unique IDs for use in our application (see the short sketch after this list). [2]
  2. Postman – A great tool for testing your APIs. [3]
  3. JSON Formatter – Really useful when looking at complex JSON strings. It is usable in most modern browsers. [4]
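For reference, generating one of these IDs from Node looks roughly like this. I am assuming the widely used uuid NPM package here; the video may use a slightly different one.

```javascript
// Install first with: npm install uuid
const { v4: uuidv4 } = require("uuid");

// Generate a random, globally unique ID (version 4 UUID).
const orderId = uuidv4();
console.log(orderId); // e.g. "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"
```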

As a final note, although there are many other tools out there that could have been used to build the instruction for this class, and software tends to live in an environment of competition between companies, this stack wins in my mind because it is fully open source, completely cross-platform, and very leading-edge. Vue, as an example, would not have been pushed into the tool it is if it were not for competition from React and Angular. Competition brings us all forward, but open source has the additional benefit of being altruistic.

References:

1. Building a REST API with NODE JS and Express

2. https://www.uuidgenerator.net/version4

3. https://www.postman.com/product/rest-client/

4. https://jsonformatter.org/

From the blog cs@worcester – (Twinstar Blogland) by Joe Barry and used with permission of the author. All other rights reserved by the author.

Tips on easing the learning process of JavaScript

After a long two and a half years of learning to code, there have been many challenges that halted my progress. Here I am going to sum up some of the issues I came across during my journey, along with some tips I have found incredibly useful for changing the tides and the direction of how I learn.

First things first, understand your distractions: why you are distracted, and how to make “learning to code” the distraction instead. What I mean is, picture how you use your social media. You may think, “I am just going to check Facebook for a few minutes before I get to coding.” Next thing you know, you have been on Facebook for three hours. Where did the time go? Somehow you just got sucked into what you decided to put in front of yourself. What if you decided instead, “I am going to code for just 10 minutes and then play around on Facebook”? Next thing you know, you got sucked into coding for hours instead of spending all your time on social media. It will always be better to leave yourself no time for social media than to have no time to practice coding.

This is how you should structure all your priorities. Don’t treat yourself first in hopes that this will give you the motivation to start getting your responsibilities out of the way. There is limited time in every single day, and we all know that the longer the day goes on, the more tired and slow you become. Utilize your energy correctly: you should be treating yourself when you are already burnt out. That way, when all your energy is spent, you are relaxing and readying yourself to recharge rather than forcing yourself to continue working while exhausted. This will help with sleep issues too, since using your brain at maximum capacity late in the day ultimately makes it harder to fall asleep. Utilize your time correctly during the day.

Now you’re coding. Great job! The only problem is, you may find yourself being over-confident. Just because you solved something and moved along with almost no hiccups doesn’t mean you have solidified it in your knowledge. Learning something too quickly sets you up to forget it just as quickly. The most memorable thing for a person is when they struggle hard, overcome the adversity, and accomplish their goal; you forget a cakewalk the minute it’s over. Limit the number of things you are learning at one time, and practice them in code so that you won’t forget. The key point is to make your work memorable for the next time it comes up.

I found this site, which is something of a journal, that gives tips on how to learn JavaScript faster. It touches on a lot of the things I just described.

Link: https://www.sitepoint.com/mind-tricks-to-learn-javascript-faster/

From the blog CS-WSU – Andrew Sychtysz Software Developer by Andrew Sychtysz and used with permission of the author. All other rights reserved by the author.

Importance of JavaScript in Web Development

JavaScript has become the most popular programming language for web development in the last two decades. The fact that JavaScript is used by more than 94 percent of all websites, according to recent studies, demonstrates its importance as a web development language. Since part of our course is to work with a frontend and backend, we work with a mix of JavaScript, HTML, and CSS. This blog post will focus only on the importance of JavaScript for developers building a frontend.
What is JavaScript?
JavaScript is a client-side programming language that enables web developers to create dynamic and interactive websites by allowing them to integrate bespoke client-side scripts. Developers can also develop server-side code in JavaScript using cross-platform runtime engines like Node.js. Combining JavaScript, HTML5, and CSS3 allows developers to create web pages that function across multiple browsers, platforms, and devices as already mentioned in the previous paragraph.

Many JavaScript frameworks, such as AngularJS, ReactJS, and NodeJS, are available on the web. Developers utilize JavaScript extensively to construct diverse web applications and add interactive elements to them. JavaScript is supported by the majority of online browsers, allowing dynamic content to be displayed beautifully on websites. WordPress is the most popular content management system (CMS) today, and JavaScript is used extensively in its development. A web developer who is well-versed in JavaScript may create a variety of applications that are both stable and scalable. If you want to be a professional WordPress developer, you’ll need to understand JavaScript so you can make informed decisions right away.

The role of a front-end web developer is to write the code and markup that a web browser renders when you visit a website.
When it comes to front-end development, there are three key components: HTML, CSS, and JavaScript. Each is necessary for a webpage to function properly. HTML is responsible for the site’s structure and content, CSS is responsible for its aesthetics, and JavaScript is responsible for its interactivity. When it comes to developing websites, they all operate together, but the focus of this blog post is on JavaScript and how it’s employed.
JavaScript is a versatile programming language that may be used to do a variety of tasks on a website. For starters, it is responsible for the site’s overall interactivity. Image sliders, pop-ups, site navigation mega menus, form validations, tabs, accordions, and other complex UI components are all feasible with JavaScript.

It can also perform more subtle operations. For example, you might click a checkbox on a form, and a pop-up appears asking you another question, depending on which checkbox you picked. It adds capabilities to the site that would otherwise be impossible to provide with just HTML and CSS. JavaScript enables websites to respond to user action and dynamically update their content without requiring a page reload.
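A small illustrative sketch of that kind of behavior is below; the element ids here are invented for the example.

```javascript
// Show a follow-up question only when a particular checkbox is ticked,
// with no page reload involved.
const checkbox = document.getElementById("needs-delivery");    // hypothetical id
const followUp = document.getElementById("delivery-question"); // hidden by default

checkbox.addEventListener("change", () => {
  followUp.style.display = checkbox.checked ? "block" : "none";
});
```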
For a front-end web developer, JavaScript is an essential tool. Websites would not have evolved into the dynamic web apps that they are today without it.

From the blog CS@Worcester – Site Title by proctech21 and used with permission of the author. All other rights reserved by the author.