Author Archives: Jared Moore

Server Side and Node.js

The internet could not exist without servers to handle the exchange of data between devices. Given the importance of servers, the software and systems that run on them are equally important. Programming these applications is called server-side development, and it is a large field of computer science.

Most server-side applications are designed to interact with client-side applications and handle the transfer of data. A common form of server-side development is supporting web pages and web applications. A popular emerging platform for web application backends is Node.js, a server-side runtime based on JavaScript.

A benefit of Node.js being based on JavaScript is that the same language can be used on the front end and the back end. Because JavaScript can handle JSON data natively, handling data on the server side becomes much easier compared to some other languages. As the name suggests, JavaScript is a scripting language, so code for Node.js does not need to be compiled prior to running. Node.js uses the V8 engine developed by Google to efficiently convert JavaScript into bytecode at runtime. This can speed up development, especially when running small files frequently during testing.
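
As a rough sketch of what this looks like in practice (the object and file name below are made up for illustration, not taken from the referenced article), a Node.js script can parse and produce JSON with no extra libraries and can be run directly with the node command:

    // sketch.js - a minimal sketch of native JSON handling in Node.js
    const raw = '{"name": "todo", "done": false}';

    const task = JSON.parse(raw);      // JSON string -> JavaScript object
    task.done = true;                  // manipulate it like any other object

    console.log(JSON.stringify(task)); // object -> JSON string: {"name":"todo","done":true}

Running node sketch.js executes the file immediately, with no separate compile step.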

Node.js also comes bundled with a command-line utility called Node Package Manager. Abbreviated npm, it manages open-source libraries for Node.js projects and easily installs them into the project directory. The npm package repository is comparable to the Maven repository for Java. However, according to modulecounts.com, npm is over three times larger than Maven, with nearly 1.8 million packages compared to fewer than 500 thousand. Each Node.js project has a package.json file where project settings are defined, such as run scripts, the version number, required npm packages, and author information.
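
For illustration, a minimal package.json might look something like the sketch below; the project name, author, script, and dependency are placeholders rather than anything from the referenced sources:

    {
      "name": "example-app",
      "version": "1.0.0",
      "author": "Jane Doe",
      "scripts": {
        "start": "node index.js"
      },
      "dependencies": {
        "express": "^4.18.0"
      }
    }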

The majority of Node.js applications share common packages that have become standard frameworks throughout the community. An example of this is Express.js, which is a backend web framework that handles API routing and data transfer in Node.js.
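
As a rough sketch of the kind of routing Express.js handles (the route path and port below are arbitrary examples, not code from the referenced article):

    // A minimal Express sketch; install the package with `npm install express`.
    const express = require('express');
    const app = express();

    // Define a simple API route that returns JSON
    app.get('/hello', (req, res) => {
      res.json({ message: 'Hello from Express' });
    });

    app.listen(3000, () => console.log('Listening on port 3000'));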

At the core of Node.js is the event loop, which is responsible for checking for the next operation to be done. This allows code to be executed out of order, without waiting for an unrelated operation to finish before starting the next. The asynchronous, non-blocking nature of Node.js is ideal for web servers, where many users continuously require different tasks to be done with differing speeds of execution. When people visit different API routes, they expect the server to respond as quickly as possible and not be hung up on a previous request.
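
A small sketch of this non-blocking behavior is shown below, assuming a hypothetical file named data.txt; the second log statement runs while the file is still being read:

    const fs = require('fs');

    // Asynchronous read: the callback runs later, once the file is ready.
    fs.readFile('data.txt', 'utf8', (err, data) => {
      if (err) return console.error(err);
      console.log('file finished loading,', data.length, 'characters');
    });

    // This line does not wait for the read above to finish.
    console.log('doing other work while the file loads');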

I have used Node.js before, but now that we have begun to use it in class, I wanted to learn more about its benefits in server-side development. For me, the most important takeaway is the need to take advantage of Node.js’s non-blocking ability when developing a program. Doing so will improve the speed of the application and increase usability.


Source: https://www.educative.io/blog/what-is-nodejs, http://www.modulecounts.com

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

What is Front-End Development?

During the early days of the web, front-end development was less complex, focusing mostly on static documents, linked pages, and basic design. As time went on, more and more functionality was given to the web browser, making front-end development a major field. Front-end development is the client-side programming of a web application. This type of development focuses on the design and functionality of the webpage and its interaction with the backend components.

The structural component of front-end development is HTML, which stands for HyperText Markup Language. HTML defines the layout of the webpage elements along with their content and properties. Each HTML element has its own tag name, which distinguishes it from others based on the functionality it provides. For example, a video element provides different functionality than an image element or a header element.

By default, an HTML page is very bland. CSS (Cascading Style Sheets) is used to style an HTML page, giving each element or group of elements custom properties. If the background color of the website needed to be black and the text white, this would be done using CSS.

Other than some simple animations with CSS, all front-end functionality of a web page comes from JavaScript, a once-simple scripting language designed for basic HTML modifications that has evolved into a major front-end and back-end programming language. One use of JavaScript is to dynamically control HTML elements on the webpage. It also handles making requests to and receiving responses from web servers.
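
A brief sketch of both uses is shown below, assuming a page that contains an element with id="status" and a hypothetical /api/status endpoint on the server:

    // Dynamically update an HTML element on the page
    const statusEl = document.getElementById('status'); // assumes <p id="status"> exists
    statusEl.textContent = 'Loading...';

    // Request data from the web server and display the response
    fetch('/api/status')                                 // hypothetical endpoint
      .then((response) => response.json())
      .then((data) => { statusEl.textContent = data.message; })
      .catch((error) => console.error(error));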

Originally, when data was submitted in a web application, the entire page would need to be refreshed to pull an updated HTML document, an approach common with PHP. Today, a common design architecture is the single-page web application, where data is synchronized to the webpage without needing to reload to update.

Due to the complexity of making a single-page web application, many frameworks exist to make development easier. These include React, Vue, and Angular. These frameworks handle linking data to components and binding the data between the server and the client. This way, only the element with updated data needs to be refreshed rather than the entire page. Handling updates in this way makes working with web applications faster and improves the user experience.

I selected this article to learn a bit more about front-end development and its current features. We will be working on client-side development in this class, and I thought it would be a good topic to look into. I find the history of how the web has developed over the years interesting. I will use this guide and its details to help me structure my projects in the future.


Resource: https://info340.github.io/client-side-development.html

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

Why Use Gradle?

Gradle is an automation tool that allows developers to quickly run tasks to prepare, compile, and test software. There are a handful of similar automation tools, but Gradle is one of the most popular and best-supported build automation tools. Without a build tool, a developer would have a series of commands that must be manually typed into the terminal to execute a set of steps before code can be built and run. Doing this is very tedious and does not scale easily, especially for larger projects.

The Gradle build process is divided into three phases. The first is initialization, which prepares the coding environment for the build process. The main type of initialization is fetching dependencies for the project: a developer is able to declare which libraries their code requires, and Gradle will download them to the project library.

The second phase of the build process is configuration, where Gradle determines what needs to be executed for the software to run. In Gradle, each step is called a task and is run automatically. The order of the tasks is determined during this phase, based on what is defined in the Gradle configuration file. Each task can depend on the last, allowing multi-step builds to execute without user intervention. Gradle can also automate testing to run during the build sequence. This allows a developer to identify testing issues during development, because the build will fail if a test does not pass. The purpose of running tests with Gradle is to identify run-time issues that the Java compiler will not pick up.

The final phase is execution, where each task is run by Gradle. At the end of this phase, the user will see the status of their build. A successful build means that the tasks ran without error and the software is able to be run. An unsuccessful build can occur for a number of reasons, but is usually due to a failed test, an unmet dependency, or a compilation error.

I selected this blog because I wanted to learn more about the Gradle build tool, especially because we will be using it for projects in this class. I am familiar with the build tool Webpack for NodeJS, but I have never used a build tool for Java. There are similarities between them, but the configuration for each varies greatly.

One difficulty I have had with Java is dealing with dependencies. In other languages such as NodeJS or Python, there are command-line tools, such as npm and pip, that make installing libraries easy. I will definitely use Gradle in my next project for managing dependencies, as I had yet to find a solution for this in Java until now.


Reference: https://docs.gradle.org/current/userguide/what_is_gradle.html

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

Docker and Its Components

I chose this article because it explains in detail the components of Docker and how each is applied in different phases of software development. I believe this will help unfamiliar students learn about the components of Docker beyond a general understanding. Given that we are already using Docker in this course, I imagine we will be using it for future projects as well. A deeper understanding of this platform would be highly beneficial.

The Docker service is made up of several components. A container is like a stripped-down virtual machine where programs can be run in isolation. A Docker image defines what will make up a container and contains the directions for how and what will run. For example, a MySQL Docker image would contain the instructions for running an instance of MySQL server and exposing the necessary ports. The Docker Engine runs the image and is responsible for virtualizing the container on the host machine. The last major component is docker-compose, which allows one to configure storage, networking, and interaction between containers. The purpose of Docker is to allow pre-configured containers to run exactly the same wherever Docker is installed. Because all containers are identical to the image they are created from, multiple containers can be run to quickly scale to meet performance demands.

I have used Docker in the past on multiple occasions, the last being for my database design final project, a full-stack web application. To see how Docker is used in a full-stack application, you can review the source code at gitlab.com/mjared94/todo-app inside the server directory. In the app, we used Docker to streamline team development and simplify the backend services in use. The application required a database and a way to manage the database during development. Rather than installing the software individually on each of our computers, we used Docker to quickly launch the services identically on each machine. This can be done using docker-compose, which, as mentioned above, allows you to outline how you want your containers to interact with each other.

Our web server was run directly from the terminal, relying on the Docker services. In production, we could have also containerized our web server to run in conjunction with the other containers. Doing so would have enabled anyone with Docker to run our todo application, along with the required backend services, without any prior configuration. Containerizing our application would also make deployment seamless, because our pre-configured container could be launched on virtually any web host. After reading this article and understanding the Docker components in more detail, I will be better able to use them in the future.


Blog resource link: https://www.infoworld.com/article/3204171/what-is-docker-the-spark-for-the-container-revolution.html

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

REST Introduction

APIs play an integral role in the development of modern-day applications. Whether you are writing your own API or integrating someone else’s, it is crucial to understand how they work. There are many kinds of APIs, but the most common are REST APIs, which operate over HTTP and HTTPS. I chose to write about REST APIs because we will be working with them in this class, and they are essential to understand for most development projects. The blog post “Best practices for REST API design” discusses the basics of a REST API, how it works, and development practices. The post also provides code examples using NodeJS with the ExpressJS library, showing how the concept of a REST API is implemented.

According to the article, a REST API follows a set of standard design principles that allow users to quickly start working with the API. For example, a REST API should expose endpoints that respond to the standard HTTP methods GET, POST, PUT, and DELETE. These endpoints are used to send and receive data between the server and the client, and the difference between the methods is the operation they perform: a GET request sends data to the client, a POST request adds data, and a DELETE request removes data.
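
A rough Express sketch of such endpoints is shown below; the /users resource and the in-memory array are invented for illustration and are not code from the referenced post:

    const express = require('express');
    const app = express();
    app.use(express.json()); // parse JSON request bodies

    // Hypothetical in-memory data store for illustration
    let users = [{ id: 1, name: 'Alice' }];

    // GET: send data to the client
    app.get('/users', (req, res) => res.json(users));

    // POST: add data sent by the client
    app.post('/users', (req, res) => {
      const user = { id: users.length + 1, ...req.body };
      users.push(user);
      res.status(201).json(user);
    });

    // DELETE: remove data
    app.delete('/users/:id', (req, res) => {
      users = users.filter((u) => u.id !== Number(req.params.id));
      res.status(204).end();
    });

    app.listen(3000);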

Although there is a set of rules a REST API must follow, many best practices are left up to the developer. For example, a REST API should send and receive data as JSON. JSON makes working with data simpler because it is in an easily transferable, structured format. The API should also run over HTTPS rather than HTTP to add an extra layer of security to the communication.

A REST API’s paths closely resemble the structure of a URL, where each segment is separated by a forward slash. To pass a parameter, a question mark is used, followed by the variable name set equal to a value. To keep the API readable, the developer must name and structure the paths wisely, based on the function each endpoint is responsible for. For example, a GET request to find a user by ID could look something like /user/findOne?id=120. In this example, the findOne method is prefaced by /user/, and we know the user ID we are finding is 120. If the API replaced the word user with record, then the developer using the API would not know what type of data to expect from the request.

Finally, a REST API should be versioned, meaning the highest-level path of the URL structure should be the API’s version number. Using the find-user example, a versioned API would read /v1/user/findOne?id=120. If we made a change to our API, we could make it in the v2 API without breaking code written against the v1 API.
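
Continuing the hypothetical find-user example, a versioned route might be sketched in Express like this (the router setup and the hard-coded response are illustrative assumptions, not the article’s code):

    const express = require('express');
    const app = express();

    const v1 = express.Router();

    // GET /v1/user/findOne?id=120 — read the id from the query string
    v1.get('/user/findOne', (req, res) => {
      const id = Number(req.query.id);
      // A real implementation would look the user up in a database;
      // here we simply echo the requested id back as JSON.
      res.json({ id, name: 'example user' });
    });

    // Mount the router under the version prefix; a future /v2 router
    // could be added alongside it without breaking v1 clients.
    app.use('/v1', v1);

    app.listen(3000);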

I will definitely use these tips while developing REST APIs in the future. One of the most important guidelines I learned was to always send and receive data as JSON. In the past, I have made a REST endpoint return a single value, such as a number. Without any context, that data would be hard to understand. If the values were named in a JSON object, they would be easier to work with and manipulate.


Reference blog resource: stackoverflow.blog/2020/03/02/best-practices-for-rest-api-design

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

Introduction

Hello,

My name is Jared and I am a third-year computer science student at Worcester State University. You can learn more about my background on LinkedIn and GitHub. I have experience developing web-based applications using TypeScript, NodeJS, PostgreSQL, GraphQL, and React.

My goal with this blog is to share what I learn along the way and help as many people as I can. I believe software development is a thriving, diverse community where people of all backgrounds can come together to learn and share knowledge.

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.