Running Bash Commands in Node.js

Disclaimer:

This is not explicitly a “guide”; there are far better resources available on this topic. This is simply a discussion about the child_process module.

Anecdotal Background

I’ve been in the process of creating programs to automate the running of my Minecraft server for my friends and me. In that process, I’ve created a functional system using a Node server, a C++ server, and a lot of bash files. While it works, I can’t be sure there aren’t bugs without proper testing. Trust me, nothing about this project has had “doing it properly” in mind.

When learning front-end JavaScript, I recall hearing that it can’t modify files for security reasons. That’s why I was using a C++ server to handle file interactions. Little did I realize Node can easily interact with files and the system itself. Once I have the time, I’m going to recreate my setup in one Node server. The final “total” setup will involve a Heroku Node server, a local Node server, and a Raspberry Pi with both a Node server (for wake-on-LAN) and an nginx server acting as a proxy to secure the local Node servers.

Log

As a bit of a prerequisite, I’ve been using a basic module as a simple improvement on top of console.log. I create a log/index.js file (which could simply be log.js, but I prefer my app.js being the only JavaScript file in my parent directory; the downside of this approach is that you end up with many index.js files, which can be hard to edit at the same time).

Now, depending on what I need for my node project I might change up the actual function. Here’s one example:

module.exports = (label, message = "", middle = "") => {
    console.log(label + " -- " + new Date().toLocaleString());
    if(middle) console.log(middle);
    if(message) console.log(message);
    console.log();
}

Honestly, all my log function does is print out a message with the current date and time. I’m sure it could be significantly fancier, but this has proved useful when debugging a program that takes minutes to complete. To use it, I do:

// This line is for both log/index.js and log.js
const log = require("./log"); 

log("Something");

Maybe that’ll be useful to someone. If not, it provides context for what follows…

Exec

I’ve created this as a basic test to see what it’s like to run a Minecraft server from Node. Similar to log, I created an exec/index.js. Firstly, I have:

const { execSync } = require("child_process");
const log = require("../log");

This uses the log I referenced before, as well as execSync from Node’s built-in child_process module. This is a synchronous version of exec which, for my purposes, is ideal. Next, I created two basic functions:

module.exports.exec = (command) => {
    return execSync(command, { shell: "/bin/bash" }).toString().trim();
}

module.exports.execLog = (command) => {
    const output = this.exec(command);
    log("Exec", output, `$ ${command}`);
    return output;
}

I create a shorthand version of execSync which is very useful by itself. Then, I create a variant that also writes a log. From here, I found it tedious to enter multiple commands at a time, and very hard to perform commands like cd, because every time execSync is run, it begins in the original directory. You would have to do something along the lines of cd directory; command or cd directory && command, both of which become incredibly long commands when you have to do a handful of things in a directory. So, I created scripts:

function scriptToCommand(script, pre = "") {
    let command = "";

    script.forEach((line) => {
        if(pre) command += pre;
        command += line + "\n";
    });

    return command.trimEnd();
}

I created them as arrays of strings. This way, I can create scripts that look like this:

[
    "cd minecraft",
    "java -jar server.jar"
]

This seemed like a good compromise to get scripts to look almost syntactically the same as an actual bash file, while still allowing me to handle each line as an individual line (which I wanted to use so that when I log each script, each line of the script begins with $ followed by the command). Then, I just have:

module.exports.execScript = (script) => {
    return this.exec(scriptToCommand(script));
}

module.exports.execScriptLog = (script) => {
    const output = this.execScript(script);
    log("Exec", output, scriptToCommand(script, "$ "));
    return output;
}
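As a quick illustration, here is what scriptToCommand produces for a two-line script (the function is repeated so the snippet runs on its own):

```javascript
// scriptToCommand repeated from above so this snippet is self-contained
function scriptToCommand(script, pre = "") {
    let command = "";

    script.forEach((line) => {
        if (pre) command += pre;
        command += line + "\n";
    });

    return command.trimEnd();
}

const script = ["cd minecraft", "java -jar server.jar"];

// Joined into one command string for execSync:
console.log(scriptToCommand(script));
// cd minecraft
// java -jar server.jar

// Prefixed form used for logging, so each line reads like a shell prompt:
console.log(scriptToCommand(script, "$ "));
// $ cd minecraft
// $ java -jar server.jar
```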

Key Note:

When using the module.exports.foo notation to add a function to a node module, you don’t need to create a separate variable to reference that function inside of the module (to avoid typing module.exports every time). You can use the this keyword, which acts as module.exports at the module’s top level (and inside arrow functions defined there).

Conclusion

Overall, running bash or other shells from Node isn’t that hard of a task. One thing I’m discovering about Node is that every time I want to do something, if I just spend some time making a custom module, I can do it more efficiently. Even my basic log module can be made far more complex and save a lot of keystrokes. And that’s just a key idea in coding in general.

Oh, and for anyone wondering, I can create a minecraft folder and place the server.jar file in it. Then, all I have to do is:

const { execScriptLog } = require("./exec");

execScriptLog([
    "cd minecraft",
    "java -jar server.jar"
]);

And, of course, set up the server files themselves after they generate.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

Blog #4: Rest APIs

For the last month, including our class’s most recent homework assignment, we have been working on REST APIs. In the last assignment, we had to construct different REST API endpoints. For this blog, I wanted to learn a lot more about REST APIs and try to master them better. I found a great blog about it all on stackoverflow.blog, and I will attach a link to that blog at the bottom of this post. I definitely recommend checking it out to learn more on this subject like I did. REST APIs are among the most popular web services available, used to allow clients and browser applications to communicate with a server. These APIs must be designed in a way that keeps in mind security, performance, and ease of use. An important thing for REST APIs is JSON. Almost all networked technologies can use JSON, as it is the standard for transferring data, and it is accepted by REST APIs for request payloads. A lot of what the blog goes over is how to write REST API endpoints and their code. It explains the importance of including error handling, filtering, sorting, and nesting resources. It shows even more specifics, like how collections should be named using plural nouns rather than verbs. This blog is super great for helping with every single specific when writing these endpoints, even down to the exact codes to use for all the different common HTTP errors. It later goes on to explain using cached data to improve performance, as well as specifics for versioning APIs. Overall, this blog taught me a lot more about writing these API endpoints, and the best part is it helped show how it all is done with clear examples of actual code. It even came in helpful during my homework assignment earlier this week. Anyone looking to learn about or brush up on this topic should follow the link to see the blog that I used to research REST APIs, and I guarantee it will be extremely educational!
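As a rough sketch of those two conventions (the route names here are my own, not taken from the blog): collections named as plural nouns, and JSON as the payload format.

```javascript
// Hypothetical route table: collections are plural nouns, and actions come
// from the HTTP method rather than verbs in the path (no /createOrder).
const routes = [
    { method: "GET",  path: "/orders" },    // list the collection
    { method: "POST", path: "/orders" },    // create a new resource
    { method: "GET",  path: "/orders/42" }, // fetch a single resource by id
];

// JSON is the standard request/response payload: serialize before sending,
// parse on receipt.
const payload = JSON.stringify({ item: "coffee", quantity: 2 });
console.log(JSON.parse(payload).quantity); // 2
```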

From the blog CS@Worcester – Tim Drevitch CS Blog by timdrevitch and used with permission of the author. All other rights reserved by the author.

REST API’s: How to Use Them


Now that we, hopefully, have a decent basis of knowledge about REST and REST APIs, we can finally begin discussing the specifics of implementing a REST API. I am afraid that, once again, I will run out of space before being able to cover all of the aspects of REST API creation, so I encourage any readers to check the source I am referencing for this post. My source is a pretty well laid out tutorial that has a step-by-step guide on each part of this process. This is what I have been doing most recently in my coursework, so it is nice to finally reach some directly relevant information. Granted, much of the surrounding code was already done for us, so some of this will be brand new to me, but we have been designing some endpoints. All of that aside, let’s just jump right into it.

Well, actually, one more quick note before we begin: this tutorial uses Docker, which you can read about in one of my earlier posts. Additionally, there are some parts of this web application that are already completed, so you can download them to follow along with the tutorial if you want. This example is just a database for a generic company with employee information, created with JavaScript. I am not going to go too deep into detail about creating all of these files to make the web application, as that is an entirely different topic. The first thing to note when implementing a REST API is an acronym, CRUD. Essentially this means that your API should create, retrieve, update, and delete. The equivalent methods to these are POST for create, GET for retrieve, PUT or PATCH for update, and DELETE for delete, believe it or not. One other important part of implementing a REST API is handling response codes. The codes that you may want to account for can vary for each function; for example, a function that creates data will typically return response code 201 (Created). To do this you can add a section for response handling at the end of your openapi.yml file. Here you can specify messages to be returned with each HTTP response code that give more specific feedback based on your web application. And I am afraid that is all I have space to discuss, once again coming up a bit short.
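The CRUD-to-HTTP mapping described above can be sketched as a small lookup table (my own summary, not code from the tutorial; the success codes listed are the commonly used ones):

```javascript
// CRUD actions mapped to their HTTP methods and typical success codes.
const crud = {
    create:   { httpMethod: "POST",   successCode: 201 }, // 201 Created
    retrieve: { httpMethod: "GET",    successCode: 200 }, // 200 OK
    update:   { httpMethod: "PUT",    successCode: 200 }, // or PATCH for partial updates
    delete:   { httpMethod: "DELETE", successCode: 204 }, // 204 No Content
};

console.log(crud.create.httpMethod);  // "POST"
console.log(crud.delete.successCode); // 204
```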

Well, as I said in my first post, I vastly underestimated the sheer amount of information there is about REST APIs. I really tried my best to give some information about them, but there is still so much more that simply could not fit into these three blog posts. I would definitely try to follow along with a tutorial implementing a REST API, either with Docker or Flask, as I saw both when researching this post. Regardless, I found this information helpful and I hope any readers did too.

Source

From the blog CS@Worcester – My Bizarre Coding Adventures by Michael Mendes and used with permission of the author. All other rights reserved by the author.

Agile Software Development

This week on my CS journey, I want to focus on Agile software development and its methodologies. Agile methodology is a type of project management process, mainly used for software development, that evolves through the collaborative effort of cross-functional teams and their customers. Scrum and Kanban are two of the most widely used Agile methodologies. Today I want to focus mainly on Scrum. Recently I saw that employers are looking for candidates who have experience in Scrum and Agile development, so it is important that we learn more about it.

Scrum is a framework that allows for more effective collaboration among teams working on various complex projects. It is a management system that relies on step-by-step development. Each cycle consists of two- to four-week sprints, where each sprint’s goal is to build the most important features first and come out with a potentially deliverable product. Agile Scrum methodology has several benefits: it encourages products to be built faster, since each set of goals must be completed within each sprint’s time frame.

Now let’s look at the three core roles that scrum consists of: the scrum master, the product owner, and the scrum team. The scrum master is the facilitator of the scrum. In addition to holding daily meetings with the scrum team, the scrum master makes certain that scrum rules are being enforced and applied; other responsibilities include motivating the team and ensuring that the team has the best possible conditions to meet its goals and produce deliverable products. Secondly, the product owner represents stakeholders, who are typically customers. To ensure the scrum team is always delivering value to stakeholders and the business, the product owner determines product expectations, records changes to the product, and administers a scrum backlog, which is a detailed and updated to-do list for the scrum project. The product owner is also responsible for prioritizing goals for each sprint, based on their value to stakeholders. Lastly, the scrum team is a self-organized group of three to nine developers who have the business, design, analytical, and development skills to carry out the actual work, solve problems, and produce deliverable products. Members of the scrum team self-administer tasks and are jointly responsible for meeting each sprint’s goals.

Below I have provided a diagram that shows the structure of the sprint cycles. I think understanding the Agile methodologies is helpful because most major companies use them to help teams and individuals effectively prioritize work and features. I highly recommend visiting the websites below; they provide detailed explanations of how a scrum cycle works.


Sources: https://zenkit.com/en/blog/agile-methodology-an-overview

https://www.businessnewsdaily.com/4987-what-is-agile-scrum-methodology.html#:~:text=Agile%20scrum%20methodology%20is%20a,with%20a%20potentially%20deliverable%20product.&text=Agile%20scrum%20methodology%20has%20several%20benefits

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

Rest API’s: Resources, Their Methods and More…


As was promised last week, I return for the second REST API post. Please try to contain your excitement. All jokes aside, this topic has been pretty interesting to learn about, and perhaps a bit more complicated than I initially assumed. For this week I will be discussing the remainder of the post I discussed last class, and another I found containing more specific information about REST APIs. Specifically, this post will discuss resources, resource methods, and some other important concepts, such as endpoints. Additionally, I will discuss a part of the REST API tutorial website comparing REST and HTTP, as they do appear to be similar in function and appearance to some extent.

First up for discussion are resources and their methods. Luckily, this is significantly more straightforward than the previously discussed topics. As stated on the REST API tutorial website (as always, all links will be posted at the end of the post), resources are any piece of information that can be named. Some examples provided are a document, an image, or a collection of other resources. Essentially, a resource is an identifier for some component that is interacting with another. As for resource methods, these are your GET, PUT, POST, and DELETE methods. One common misconception this site discusses is tying these too tightly to HTTP: one of the authors of the HTTP specification stated that there is no set way to implement them. These methods should also be listed in the API response for the resources they are altering. Finally, these methods should be used to create application state transitions based on the client’s selections. I feel like all of this could use a summary. Essentially, resources (data) and resource methods (methods that alter data in some way) should be kept together to improve readability for your client. Additionally, the manner in which the methods are implemented should be tied to what exactly your client wants. We are almost ready to discuss how to use REST APIs, but there is just a bit more setup first.

I found another blog post by a frontend developer who goes over a few more important aspects of REST. I am unfortunately running out of room in this post, so I am only going to focus on discussing two things: requests and endpoints. A request is a URL that consists of four components: an endpoint, a method, headers, and data. These requests are how you grab and modify data, which is returned back as a response. As for endpoints, they are the URL you requested; the link used to access a resource within the REST API itself. For example, if you had a REST API that saved and stored data about customers of a grocery store, you might have an endpoint like www.grocery.data.com/customers/viewdata. I am short on space, so this closing will be brief, but my next post will be the last in this REST trilogy.
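As a small sketch, Node’s built-in URL class can break the hypothetical grocery-store endpoint above into its component pieces:

```javascript
// Parse the (hypothetical) endpoint from the example above into parts.
// The scheme is assumed here, since the example URL omitted it.
const endpoint = new URL("https://www.grocery.data.com/customers/viewdata");

console.log(endpoint.hostname); // "www.grocery.data.com"
console.log(endpoint.pathname); // "/customers/viewdata"
```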

Sources

https://restfulapi.net/

https://www.smashingmagazine.com/2018/01/understanding-using-rest-api/

From the blog CS@Worcester – My Bizarre Coding Adventures by Michael Mendes and used with permission of the author. All other rights reserved by the author.

Docker

Hello everyone and welcome to week 13 of the coding journey blog. In this week’s post I will be talking about a popular container platform that many software developers use called Docker. Docker is very helpful in today’s world and helps manage as well as deliver software much more efficiently.

So what exactly is Docker? That is a question many people ask when they first hear the term. Well, Docker is basically an open platform that is used for developing, shipping, and running applications. The reason why Docker is so important and useful is that it allows you to separate your applications from your infrastructure, which results in quicker software releases. Timing is everything in today’s world, and so a tool like Docker is needed for a faster approach to software. With Docker, the infrastructure of the software can be managed the same way as applications, and so this shortens the gap between producing code and releasing it for production use. Docker allows an application to be run in a separate environment known as a container. This isolation, as well as heightened security, allows users to run many containers on a given host. Containers are efficient because they run directly through the host machine’s kernel, and so you can run more of them than you could virtual machines. The process first starts with you developing your code and other parts using Docker containers. Docker then becomes helpful in testing your code as well as sharing it through the container. When the code is all ready to go, you can deploy your application in the production environment, whether it is local or on a cloud service. Essentially, there are many reasons for using Docker when developing code and working on an application. Docker allows for faster and more consistent deployment of applications, so it isn’t a hassle if there are any bug fixes that need to ship down the line. Containers are known for enabling continuous integration and continuous delivery in the workflow. Docker is also helpful in scaling applications because it allows for highly portable workloads that can be changed easily to fit any business needs.

I know for a fact that throughout my journey with developing and learning code, I will use Docker to run and create apps. It makes the process a lot smoother and more efficient, which is what most developers are looking for because they don’t want extra hassles. Docker simplifies creating applications, and my future projects will be much easier to manage through Docker and its ability to use containers.

For more resources check out these links:

https://docs.docker.com/get-started/overview/

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.

REST-API

In the past few weeks we have been working with REST APIs and how they work. We have learned a lot about REST API code and how to implement it to create a web order system. Today I am going to talk about what a REST API is and how you implement one using the Docker platform. So, what is a REST API? REST is an acronym for REpresentational State Transfer. There are six guiding principles of REST: client-server, stateless, cacheable, uniform interface, layered system, and code on demand (which is optional).

Client-server means separating the user interface concerns from the data storage concerns. Stateless means each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server; session state is therefore kept entirely on the client. Cacheable means cache constraints require that the data within a response to a request be implicitly or explicitly labeled as cacheable or non-cacheable. Uniform interface means applying the software engineering principle of generality to the component interface, so the overall system architecture is simplified and the visibility of interactions is improved. Layered system means the architecture can be composed of hierarchical layers by constraining component behavior such that each component cannot “see” beyond the immediate layer with which it is interacting. Code on demand, which is optional, means REST allows client functionality to be extended by downloading and executing code in the form of applets or scripts; this simplifies clients by reducing the number of features required to be pre-implemented.

The Key abstraction of information in REST is a resource. Any information that can be named can be a resource: a document or image, a temporal service, a collection of other resources, a non-virtual object (e.g. a person), and so on. REST uses a resource identifier to identify the particular resource involved in an interaction between components.

Another important thing associated with REST is the resource methods used to perform the desired transitions. A large number of people wrongly equate resource methods with the HTTP GET/PUT/POST/DELETE methods. A lot of people also like to compare HTTP with REST, but REST and HTTP are not the same.

Here is an example of a REST API described in a .yml file. In this file you write the specification that tells how your web service should behave. In the file you can see the tag, summary, operationId, the response for each code, and much more…

Resources: https://gitlab.com/LibreFoodPantry/training/microservices-activities https://searchapparchitecture.techtarget.com/definition/RESTful-API

From the blog CS@Worcester – </electrons> by 3electrones and used with permission of the author. All other rights reserved by the author.

Data persistence layer(12/1/2020)

When it comes to managing and figuring out how to run a data tool, think of it as a warehouse or a retail storefront. This, in particular, is how a data persistence layer works. A DPL involves six different layers the data must pass through in order to reach its final destination: semantic, data warehouse, persistent, transform, staging, and source. To be more specific, a persistent layer contains a set of tables that are in charge of gathering and recording information. This is done so we understand the full history of changes to the data of a table or query. In fact, when there are multiple source, table, or query files, they are allowed to be staged in a different table or a different view in order to meet the transform layer requirements. Something crucial to learn about a DPL is that in many cases you can consider it a bi-temporal table. You might be wondering what a bi-temporal table is: it is a table that tracks two or more time dimensions for the data stored in it. This all comes down to how useful a DPL is. With a DPL you are allowed to include and store all of the history you need, all of the time, instead of how many other systems do it, where history breaks up just when it is needed the most. This will also save you a tremendous amount of time, due to the flexibility to drop and rebuild your data set with the same amount of data history at any point in time. A DPL also offers ETL performance benefits, where you are allowed to make changes to the processed data. A DPL is aware of what has been altered and when it has been altered, in order to make the appropriate changes in the data set.
The main reasoning behind switching to a data persistence layer is to allow you to analyze data in a timely manner while still staying accurate. It also allows you to have add-ons, like churn analysis or other third-party software, to boost the persistence layer, unlike other data processors. Another feature is the ability to have evidence: for example, a persistent layer will give you information that points directly to, and explains, why something has changed. It also gives you the power of auditing, where you are able to organize, maintain, and complete an accurate picture of past events to understand what truly happened in the data.

From the blog Nicholas Dudo by and used with permission of the author. All other rights reserved by the author.

Law of Demeter: tell don’t ask

For this week, a topic that I chose to learn more about was the Law of Demeter. The Law of Demeter was chosen because it is one of the topics covered in our course syllabus, and it is also a concept that I had not heard about before, so I was curious to learn more about it. To supplement my understanding of this concept I watched a YouTube video about it and also looked at an article by Hackernoon. I had chosen this video because it was short and concise and was uploaded by a channel catered towards teaching programming concepts. The summary of the video was that in programming, cohesion is good and coupling is bad, and the Law of Demeter is a design principle used to reduce coupling. Also, the Law of Demeter is often implemented with a design pattern such as facade or adapter, which performs some form of delegation to remove the need for chaining. The second source I used was an article on the Law of Demeter by Hackernoon. I used this second source because I wanted to see an example of this concept implemented in code; the code was also implemented in Java, which is a language I was familiar with. The article summarizes the Law of Demeter as a “Don’t talk to Strangers” rule and gives examples where the Law of Demeter is implemented and violated.

In conjunction, these two sources helped me in understanding the Law of Demeter. In addition to the explanation of the Law of Demeter in the video source, I also enjoyed how the youtuber, Okravi, explains that the Law of Demeter is a design principle and that some people do not agree with it, so it does not have to be strictly followed; you should just use the concept where it makes sense in code. That statement related to many of my previous blog posts, where I have listened to experienced developers explain a topic and say that following a principle strictly is not realistic, and that we should instead strive to use a concept when it makes sense in our code. Reflecting on the Law of Demeter also made me think more about coupling and cohesion in my code, and about the many times in our course when chaining methods was considered acceptable; after learning about the Law of Demeter, I can see that that code can be refactored.
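A hypothetical JavaScript sketch (my own, not taken from the video or the article) of a chained call that violates the Law of Demeter, and a “tell, don’t ask” refactoring:

```javascript
class Wallet {
    constructor(amount) { this.amount = amount; }
    debit(sum) { this.amount -= sum; }
}

class Customer {
    constructor(amount) { this.wallet = new Wallet(amount); }
    // "Tell, don't ask": the customer manages its own wallet,
    // so callers never have to reach through it.
    pay(sum) { this.wallet.debit(sum); }
}

const customer = new Customer(100);

// Violation: chaining through customer into its wallet couples the
// caller to Customer's internals.
// customer.wallet.debit(25);

// Better: tell the customer what to do and let it delegate.
customer.pay(25);
console.log(customer.wallet.amount); // 75
```

The caller now depends only on Customer’s interface, so the Wallet implementation can change without breaking callers.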

Links to video and article below

https://hackernoon.com/object-oriented-tricks-2-law-of-demeter-4ecc9becad85

From the blog CS@Worcester – Will K Chan by celticcelery and used with permission of the author. All other rights reserved by the author.

software development for tuning (12/1/2020)

In today’s modern economy, everything we touch, unless it’s a pull or push door, runs on software and hardware. Modern-day vehicles are no different; we have come a long way from the typical car having an engine crank. Instead, now we have a push-to-start feature where within seconds your car will start up, and this is all run off of computers. These computers take crank and camshaft signals from sensors in order to start the car. In layman’s terms, they need computer hardware and software in order to make this happen. What we will be diving into is taking the stock parameters that the vehicle was given from the factory and altering them to either (a) increase fuel economy, (b) increase power, or (c) even increase longevity for the vehicle itself. The possibilities are endless when it comes to vehicles and the ability to make little or big changes to see a maximum gain. Many might wonder how tuning works and how it is even possible. Tuning is not easy by any stretch of the imagination; you must know precisely what you’re doing or you will fry the computer in your vehicle. In a sense, think of it as changing parameters in an OS. Instead of it being like Linux, which is open source and lets you make as many changes as you please without needing keys to access the OS, a vehicle operating system is locked down more, in the sense of a Windows or Mac OS system, where you need certain keys or software to unlock the computer or even be able to read it when servicing the vehicle. This is where a tuner comes in, or in other words a software developer who goes into the parameters of the vehicle and makes changes. For example, consider ignition timing: the amount of time before the piston reaches top dead center. TDC is when the top of the piston reaches the top of the cylinder.
What a tuner can do is go into the computer and advance this, so instead of firing at 0 degrees before TDC we can change it to 12 degrees before TDC. This lets us improve the performance of the engine by optimizing how the fuel gets burnt in the combustion chambers. With tuning, you’re also allowed to go into different modules, like the transmission module, which lets you write code that allows the shift solenoids to open up much later to hold the RPM band higher, give you more efficiency, and allow better cooling opportunities. Another parameter you’re allowed to access is the body module, which controls everything inside the interior of the vehicle. You must be careful with this, and it is one of the main reasons why these computers are closed source: it makes them more secure and much harder to hack.

From the blog Nicholas Dudo by and used with permission of the author. All other rights reserved by the author.