Category Archives: CS-343

Running Bash Commands in Node.js

Disclaimer:

This is not really a “guide”; there are far better resources available on this topic. This is simply a discussion about the child_process module.

Anecdotal Background

I’ve been in the process of creating programs to automate running my Minecraft server for my friends and me. In that process, I’ve created a functional system using a Node server, a C++ server, and a lot of bash files. While it works, I can’t be sure there aren’t bugs without proper testing. Trust me, nothing about this project has had “doing it properly” in mind.

When learning front-end JavaScript, I recall hearing that it can’t modify files for security reasons. That’s why I was using a C++ server to handle file interactions. Little did I realize Node can easily interact with files and the system itself. Once I have the time, I’m going to recreate my setup in one Node server. The final “total” setup will involve a Heroku Node server, a local Node server, and a Raspberry Pi running both a Node server (for Wake-on-LAN) and an nginx server acting as a proxy to secure the local Node servers.

Log

As a bit of a prerequisite, I’ve been using a basic module for a simple improvement on top of console.log. I create a log/index.js file (which could simply be log.js, but I prefer my app.js to be the only JavaScript file in my parent directory; the problem with this approach, however, is that you end up with many index.js files, which can be hard to edit at the same time).

Now, depending on what I need for my node project I might change up the actual function. Here’s one example:

module.exports = (label, message = "", middle = "") => {
    console.log(label + " -- " + new Date().toLocaleString());
    if(middle) console.log(middle);
    if(message) console.log(message);
    console.log();
}

Honestly, all my log function does is print out a message with the current date and time. I’m sure I could make it significantly fancier, but this has proved useful when debugging a program that takes minutes to complete. To use it, I do:

// This line is for both log/index.js and log.js
const log = require("./log"); 

log("Something");

Maybe that’ll be useful to someone. If not, it provides context for what follows…

Exec

I’ve created this as a basic test to see what it’s like to run a Minecraft server from Node. Similar to log, I created an exec/index.js. First, I have:

const { execSync } = require("child_process");
const log = require("../log");

This uses the log I referenced before, as well as execSync from Node’s built-in child_process module. This is a synchronous version of exec, which, for my purposes, is ideal. Next, I created two basic functions:

module.exports.exec = (command) => {
    return execSync(command, { shell: "/bin/bash" }).toString().trim();
}

module.exports.execLog = (command) => {
    const output = this.exec(command);
    log("Exec", output, `$ ${command}`);
    return output;
}

I create a shorthand version of execSync, which is very useful by itself. Then, I create a variant that also writes a log. From here, I found it tedious to enter multiple commands at a time and very hard to perform commands like cd, because every time execSync is run, it begins in the original directory. So, you would have to do something along the lines of cd directory; command or cd directory && command, both of which become incredibly long commands when you have to do a handful of things in a directory. So, I created scripts:

function scriptToCommand(script, pre = "") {
    let command = "";

    script.forEach((line) => {
        if(pre) command += pre;
        command += line + "\n";
    });

    return command.trimEnd();
}

I created them as arrays of strings. This way, I can create scripts that look like this:

[
    "cd minecraft",
    "java -jar server.jar"
]

This seemed like a good compromise: scripts look almost syntactically the same as an actual bash file, while still allowing me to handle each line individually (which I wanted so that when I log each script, each line of the script begins with $ followed by the command). Then, I just have:

module.exports.execScript = (script) => {
    return this.exec(scriptToCommand(script));
}

module.exports.execScriptLog = (script) => {
    const output = this.execScript(script);
    log("Exec", output, scriptToCommand(script, "$ "));
    return output;
}

Key Note:

When using the module.exports.foo notation to add a function to a Node module, you don’t need to create a separate variable to reference that function inside the module (to avoid typing module.exports every time). You can use the this keyword to act as module.exports: in a CommonJS module, the top-level value of this is module.exports, and arrow functions defined at that level inherit it.
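As a quick illustration, here is a minimal sketch (a hypothetical two-function module, not part of the exec code above), assuming CommonJS modules, where the top-level this is module.exports:

// Hypothetical module: the top-level `this` in a CommonJS module is
// module.exports, and arrow functions defined here inherit that `this`.
module.exports.greet = (name) => `Hello, ${name}!`;

module.exports.greetLoudly = (name) => {
    // `this` refers to module.exports, so this.greet is the function above.
    return this.greet(name).toUpperCase();
};

Requiring that module and calling greetLoudly("world") would return “HELLO, WORLD!” — the same trick the exec functions above rely on.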

Conclusion

Overall, running bash, shell, or other terminals from Node isn’t that hard of a task. One thing I’m discovering about Node is that it feels like every time I want to do something, if I just spend some time making a custom module, I can do it more efficiently. Even my basic log module could be made far more complex and save a lot of keystrokes. And that’s a key idea in coding in general.

Oh, and for anyone wondering, I can create a minecraft folder and place the server.jar file in it. Then, all I have to do is:

const { execScriptLog } = require("./exec");

execScriptLog([
    "cd minecraft",
    "java -jar server.jar"
]);

And, of course, set up the server files themselves after they generate.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

Agile Software Development

This week on my CS Journey, I want to focus on Agile software development and its methodologies. Agile methodology is a type of project management process, mainly used for software development, that evolves through the collaborative effort of cross-functional teams and their customers. Scrum and Kanban are two of the most widely used Agile methodologies. Today I want to focus mainly on Scrum. Recently I saw that employers are looking for candidates who have experience in Scrum and Agile development, so it is important that we learn more about it.

Scrum is a framework that allows for more effective collaboration among teams working on complex projects. It is a management system that relies on step-by-step development. Each cycle consists of two- to four-week sprints, where each sprint’s goal is to build the most important features first and come out with a potentially deliverable product. Agile Scrum methodology has several benefits: it encourages products to be built faster, since each set of goals must be completed within each sprint’s time frame.

Now let’s look at the three core roles that Scrum consists of: the Scrum Master, the Product Owner, and the Scrum Team. The Scrum Master is the facilitator of the Scrum. In addition to holding daily meetings with the Scrum Team, the Scrum Master makes certain that Scrum rules are being enforced and applied; other responsibilities include motivating the team and ensuring that the team has the best possible conditions to meet its goals and produce deliverable products. Secondly, the Product Owner represents stakeholders, who are typically customers. To ensure the Scrum Team is always delivering value to stakeholders and the business, the Product Owner determines product expectations, records changes to the product, and administers a Scrum backlog, which is a detailed and continually updated to-do list for the Scrum project. The Product Owner is also responsible for prioritizing goals for each sprint based on their value to stakeholders. Lastly, the Scrum Team is a self-organized group of three to nine developers who have the business, design, analytical, and development skills to carry out the actual work, solve problems, and produce deliverable products. Members of the Scrum Team self-administer tasks and are jointly responsible for meeting each sprint’s goals.

Below I have provided a diagram that shows the structure of the sprint cycles. I think understanding Agile methodologies is helpful because they are used at most major companies to help teams and individuals effectively prioritize work and features. I highly recommend visiting the websites below; they provide detailed explanations of how a Scrum cycle works.


Sources: https://zenkit.com/en/blog/agile-methodology-an-overview

https://www.businessnewsdaily.com/4987-what-is-agile-scrum-methodology.html#:~:text=Agile%20scrum%20methodology%20is%20a,with%20a%20potentially%20deliverable%20product.&text=Agile%20scrum%20methodology%20has%20several%20benefits

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

Docker

Hello everyone and welcome to week 13 of the coding journey blog. In this week’s post I will be talking about a popular container platform that many software developers use called Docker. Docker is very helpful in today’s world and helps manage as well as deliver software much more efficiently.

So what exactly is Docker? That is a question many people have when they first hear the term. Docker is basically an open platform used for developing, shipping, and running applications. The reason Docker is so important and useful is that it allows you to separate your applications from your infrastructure, which results in quicker software releases. Timing is everything in today’s world, and so a tool like Docker is needed for a faster approach to software. With Docker, the infrastructure of the software can be managed the same way as the applications, and this shortens the gap between producing code and releasing it for production use.

The key capability Docker provides is the ability to run an application in a separate environment known as a container. This isolation and heightened security allow users to run many containers on a single host. Containers are efficient because they run directly through the host machine’s kernel, so you can run more of them this way than with virtual machines. The Docker workflow starts with developing your code and other components using Docker containers. Docker then becomes helpful for testing your code as well as sharing it through the container. When the code is ready to go, you can deploy your application to the production environment, whether it is local or on a cloud service. Essentially, there are many reasons for using Docker when developing code and working on an application. Docker allows for faster and more consistent deployment of applications, so it isn’t a hassle if there are any bug fixes that need to happen down the line. Containers are known for enabling continuous integration and continuous delivery in the workflow. Docker is also helpful in scaling applications because it allows for highly portable workloads, as it can be changed easily to fit any business need.

I know for a fact that throughout my journey with developing and learning code, I will use Docker to run and create apps. It makes the process a lot smoother and more efficient, which is what most developers are looking for because they don’t want extra hassles. Docker simplifies creating applications, and my future projects will be much easier to manage through Docker and its ability to use containers.

For more resources check out these links:

https://docs.docker.com/get-started/overview/

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.

REST-API

In the past few weeks we have been working with REST APIs and learning how they work. We have learned a lot about REST API code and how to implement it to create a web order system. Today I am going to talk about what a REST API is and how you implement one using the Docker platform. So, what is REST? The acronym stands for REpresentational State Transfer. There are six guiding principles of REST: client-server, stateless, cacheable, uniform interface, layered system, and code on demand (which is optional).

Client-server means separating the user interface concerns from the data storage concerns. Stateless means each request from client to server must contain all of the information necessary to understand the request and cannot take advantage of any stored context on the server; session state is therefore kept entirely on the client. Cacheable means the data within a response to a request must be implicitly or explicitly labeled as cacheable or non-cacheable. Uniform interface means applying the software engineering principle of generality to the component interface, which simplifies the overall system architecture and improves the visibility of interactions. Layered system means the style is layered: it allows an architecture to be composed of hierarchical layers by constraining component behavior such that each component cannot “see” beyond the immediate layer with which it is interacting. Code on demand, which is optional, means REST allows client functionality to be extended by downloading and executing code in the form of applets or scripts; this simplifies clients by reducing the number of features required to be pre-implemented.

The key abstraction of information in REST is a resource. Any information that can be named can be a resource: a document or image, a temporal service, a collection of other resources, a non-virtual object (e.g. a person), and so on. REST uses a resource identifier to identify the particular resource involved in an interaction between components.
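To make that concrete, here is a minimal sketch (a hypothetical URI and orders resource, not from this post; it assumes Node 18+, where fetch is available globally) of interacting with a resource purely through its identifier:

// The resource "order 42" is named by the identifier /orders/42; the client
// works with that identifier and the representation it returns, never with
// the server's internal structures.
async function getOrder(id) {
    const response = await fetch(`https://example.com/orders/${id}`);
    if (!response.ok) throw new Error(`Request failed: ${response.status}`);
    return response.json();
}

getOrder(42).then((order) => console.log(order));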

Another important thing associated with REST is the resource methods used to perform the desired transition. A large number of people wrongly equate resource methods with the HTTP GET/PUT/POST/DELETE methods, and a lot of people like to compare HTTP with REST, but REST and HTTP are not the same.

Here is an example of a REST API from a .yml file. In this file you write the specification that tells your web service how to behave. In the file you can see the tags, summary, operationId, responses for each status code, and much more.

Resources: https://gitlab.com/LibreFoodPantry/training/microservices-activities https://searchapparchitecture.techtarget.com/definition/RESTful-API

From the blog CS@Worcester – </electrons> by 3electrones and used with permission of the author. All other rights reserved by the author.

Data persistence layer (12/1/2020)

When it comes to managing and figuring out how to run a data tool, think of it as a warehouse or retail storefront. That, in a sense, is how a data persistence layer works. With a DPL, data passes through six different layers before it reaches its final destination: semantic, data warehouse, persistent, transform, staging, and source. To be more specific, a persistent layer contains a set of tables that are in charge of gathering and recording information. This is done so we understand the full history of changes to the data of the table or query. In fact, when there are multiple source tables or query files, they can be staged in a different table or a different view in order to meet the transform layer’s requirements. Something crucial to learn about a DPL is that in many cases you can consider it a bi-temporal table. You might be wondering what a bi-temporal table is: it is a table that records two time dimensions for each row, so you can see both when the data was valid and when the change was recorded. This all comes down to how useful a DPL is. With a DPL you can include and store all of the history you need all of the time, instead of how many other systems do it, where the history breaks up right when it is needed most. This will also save you a tremendous amount of time, thanks to the flexibility to drop and rebuild your data set with the same amount of data history at any point in time. A DPL also offers ETL performance benefits: it is aware of what has been altered and when it was altered, so it can make only the appropriate changes to the data set. The main reasoning behind switching to a data persistence layer is to allow you to analyze data in a timely manner while still staying accurate. It also allows you to have add-ons like churn analysis or other third-party software to boost the persistence layer, unlike other data processors. Another feature is the ability to have evidence: for example, the persistent layer will give you information that points directly to, and explains, why something has changed. It also gives you the power of auditing, where you are able to organize, maintain, and complete an accurate picture of past events to understand what truly happened in the data.
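As a very loose sketch of that idea (hypothetical code, not from this post): a persistence layer appends each change with the time it was recorded instead of overwriting the previous value, so the full history stays available.

// Append-only history: every change is recorded with a timestamp rather than
// overwriting the old value, so past states can always be reconstructed.
const history = [];

function recordChange(key, value) {
    history.push({ key, value, recordedAt: new Date().toISOString() });
}

function currentValue(key) {
    // The most recent entry for the key is the current state.
    const entries = history.filter((entry) => entry.key === key);
    return entries.length ? entries[entries.length - 1].value : undefined;
}

recordChange("inventory:apples", 10);
recordChange("inventory:apples", 7);

console.log(currentValue("inventory:apples")); // 7
console.log(history.length);                   // 2 -- both changes are kept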

From the blog Nicholas Dudo by and used with permission of the author. All other rights reserved by the author.

Law of Demeter: tell don’t ask

For this week, a topic that I chose to learn more about was the Law of Demeter. I chose the Law of Demeter because it is one of the topics covered in our course syllabus, and it is also a concept that I had not heard about before, so I was curious to learn more about it. To supplement my understanding of this concept, I watched a YouTube video about it and also looked at an article by Hackernoon. I chose the video because it was short and concise and was uploaded by a channel catered toward teaching programming concepts. The summary of the video was that, in programming, cohesion is good and coupling is bad, and the Law of Demeter is a design principle used to reduce coupling. The video also explains that the Law of Demeter is often implemented with a design pattern such as Facade or Adapter that performs some form of delegation to remove the need for chaining. The second source I used was an article on the Law of Demeter by Hackernoon. I used this second source because I wanted to see an example of this concept implemented in code, and the code was written in Java, which is a language I am familiar with. The article describes the Law of Demeter as a “don’t talk to strangers” rule and gives examples where the Law of Demeter is followed and where it is violated.

Together, these two sources helped me understand the Law of Demeter. In addition to the video’s explanation of the Law of Demeter, I enjoyed how the YouTuber, Okravi, explains that the Law of Demeter is a design principle that some people do not agree with, and so he states that it does not have to be strictly followed and that you should just use the concept where it makes sense in your code. That statement related to many of my previous blog posts where I have listened to experienced developers explain a topic; they say that following a principle strictly is not realistic, and that we should instead strive to use a concept when it makes sense in our code. Reflecting on the Law of Demeter also made me think more about coupling and cohesion in my code, and about the many times in our course where the use of method chaining was acceptable; after learning about the Law of Demeter, I can see that that code can be refactored.
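To make the idea concrete, here is a small sketch (hypothetical classes, not from either source) contrasting a Law of Demeter violation with a “tell, don’t ask” refactoring:

// A customer owns a wallet; callers should tell the customer what to do
// rather than reach through it to the wallet (a "stranger").
class Wallet {
    constructor(balance) { this.balance = balance; }
    deduct(amount) {
        if (amount > this.balance) throw new Error("Insufficient funds");
        this.balance -= amount;
    }
}

class Customer {
    constructor(wallet) { this.wallet = wallet; }
    // Delegation keeps the chaining out of calling code.
    pay(amount) { this.wallet.deduct(amount); }
}

const customer = new Customer(new Wallet(100));

// Violation of the Law of Demeter: customer.wallet.deduct(25);
// Following it: tell the customer what to do and let it delegate.
customer.pay(25);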

Links to video and article below

https://hackernoon.com/object-oriented-tricks-2-law-of-demeter-4ecc9becad85

From the blog CS@Worcester – Will K Chan by celticcelery and used with permission of the author. All other rights reserved by the author.

Software development for tuning (12/1/2020)

In today’s modern economy, everything we touch, unless it’s a pull or push door, runs on software and hardware. Modern-day vehicles are no different; we have come a long way from the typical car having an engine crank. Instead, we now have a push-to-start feature where, within seconds, your car will start up, and this is all run by computers. These computers take crankshaft and camshaft signals from sensors in order to start the car. In layman’s terms, they need computer hardware and software in order to make this happen. What we will be diving into is taking the stock parameters that the vehicle was given from the factory and altering them to either (a) increase fuel economy, (b) increase power, or (c) even increase longevity for the vehicle itself. The possibilities are endless when it comes to vehicles and the ability to make little or big changes to see maximum gains. Many might wonder how tuning works and how it is even possible. Tuning is not easy by any stretch of the imagination; you must know precisely what you’re doing or you will fry the computer in your vehicle. In a sense, think of it as changing parameters in your OS. But instead of being like Linux, which is open source and lets you make as many changes as you please without needing keys to access the OS, a vehicle’s operating system is locked down more in the sense of a Windows or Mac OS system, where you need certain keys or software to unlock the computer or even be able to read it when servicing the vehicle. This is where a tuner comes in, in other words a software developer who goes into the parameters of the vehicle and makes changes. For example, take ignition timing. This is how far before the piston reaches top dead center the spark fires, measured in degrees. TDC is when the top of the piston reaches the top of the cylinder. What a tuner can do is go into the computer and advance this, so instead of it being 0 degrees before TDC we can change it to 12 degrees, so the spark fires when the camshaft is at 12 degrees of rotation (which actually translates to 88 degrees of rotation). This lets us improve the performance of the engine by optimizing how the fuel gets burnt in the combustion chambers. With tuning, you’re also allowed to go into different modules, like the transmission module, which lets you write code that allows the shift solenoids to open up much later to hold the RPM band higher, giving you more efficiency and better cooling opportunities. Another parameter you’re allowed to access is the body module, which controls everything inside the interior of the vehicle. Now, you must be careful with this, and it is one of the main reasons why these computers are closed source: it makes them more secure and much harder to hack.

From the blog Nicholas Dudo by and used with permission of the author. All other rights reserved by the author.

Software Architectural Patterns and Their Uses


When working with larger projects, a unified system or pattern can be useful in keeping things organized and cohesive over time. Design patterns can be applied to general problems in software development, such as creating many similar objects with slight differences from a factory class which handles object creation within the program, or using decorators to add specific attributes or behaviors on a per-object basis. But while these design patterns are helpful for solving singular problems, they often lack the scope necessary to lay out and plan the entirety of a project or large program or application.

Architectural Patterns

I found an article written on the blogging platform Medium https://medium.com/@nethmihettiarachchi484/common-software-architectural-patterns-in-a-nutshell-7df312d3989c which briefly explained the concept of Architectural patterns, which refer to higher-level guidelines and decisions made regarding a piece of software as a whole, rather than one specific problem within the code. This was confusing at first, but as the article provided brief explanations (and examples) of various architectural patterns, I was able to grasp the concept more conclusively. Now I will go into detail regarding two patterns I found likely to be useful and the kinds of programs they would be most applicable to.

The Layered Pattern uses a system of layers to group together similar functionality for ease of access and organization. Layers can set access rules for other layers (for example, one layer might have access to layers A and B, while another only has access to layer B). For example, imagine a solitaire program: one layer controls drawing and taking input for interface components (windows, buttons, score, etc.) while another layer could contain the logical rules of the game, the mechanics, and the random distribution of the cards (things the user would never need to access directly). For applications where there are a lot of similar behaviors which could easily be grouped into layers, the layered pattern makes a lot of sense.
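A rough sketch of that solitaire example (hypothetical code, not from the article) might separate the layers like this, with the interface layer only ever calling into the game-logic layer:

// Lowest layer: raw card data and shuffling, never touched by the UI directly.
const deckLayer = {
    createShuffledDeck() {
        const deck = Array.from({ length: 52 }, (_, i) => i);
        return deck.sort(() => Math.random() - 0.5); // crude shuffle, fine for a sketch
    }
};

// Middle layer: the rules and mechanics of the game, built on the deck layer.
const gameLayer = {
    startGame() {
        this.deck = deckLayer.createShuffledDeck();
        return { cardsRemaining: this.deck.length };
    }
};

// Top layer: interface components, only allowed to talk to the game layer.
const uiLayer = {
    onNewGameClicked() {
        const state = gameLayer.startGame();
        console.log(`New game started with ${state.cardsRemaining} cards.`);
    }
};

uiLayer.onNewGameClicked();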

Additionally, the Multi-tier pattern also concerns layers; in this case, entire systems (complete features, i.e., user input for input-related features, display for display features) are grouped into tiers. A tier is a grouping of interrelated elements within the program, and tiers seem most often used in web-related capacities or in online applications with various segments to consider (web browser, web, front-end, and back-end could be potential tiers). Since each component or feature can only be part of one tier, individual tier choices need to be carefully considered. Another major drawback is the amount of setup that goes into implementation, since each of the tiers needs to exist as a grouping before the components can be assigned. I think this pattern would be most useful for large projects spanning multiple languages and technologies in the real world (web applications, for example, with front-end, back-end, and browser components).

While I only highlighted two of the patterns discussed in this blog post, I think that layered and multi-tier patterns will be most useful for small-medium (layered) and large projects (multi-tier) in general.

Site referenced: https://medium.com/@nethmihettiarachchi484/common-software-architectural-patterns-in-a-nutshell-7df312d3989c

From the blog CS@Worcester – CodeRoad by toomeymatt1515 and used with permission of the author. All other rights reserved by the author.

REST API Design

This week on my CS Journey I want to focus on REST API design. In my last blog, I talked about how an API request works and how to read API documentation and use it effectively. In this blog, I will briefly cover the key constraints of REST API design. There are six important constraints: Client-Server, Stateless, Cache, Uniform Interface, Layered System, and Code on Demand. Together, these make up the theory of REST.


Starting with the client-server constraint: the concept is that the client and the server should be separate from each other and allowed to evolve individually and independently. In other words, a developer should be able to make changes to an application, whether on the data structure or the database design side, without impacting the client side. Next, REST APIs are stateless, meaning that calls can be made independently and each call contains all the data necessary to complete itself successfully. The next constraint is Cache: since a stateless API can increase request overhead by handling large loads of calls, a REST API should be designed to encourage the storage of cacheable data. That means that when data is cacheable, the response should indicate that the data can be stored up to a certain time.


The next constraint is Uniform Interface: having a uniform interface allows the client to talk to the server in a single language. This interface should provide standardized communication between the client and the server, such as using HTTP with resources to CRUD (Create, Read, Update, Delete). Another constraint is a Layered System. As the name implies, a layered system is comprised of layers, with each layer having a specific functionality and responsibility. In REST API design, the same principle holds, with different layers of the architecture working together to build a hierarchy that helps create the application. A layered system also increases flexibility and longevity, and it allows you to stop attacks within other layers, preventing them from getting to your actual server architecture. Finally, the least known of the six constraints is Code on Demand, which allows code to be transmitted via the API for use within the application. Together, these constraints make up a design that operates similarly to how we access pages in our browsers on the World Wide Web.
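As a small sketch of how a few of these constraints show up in practice (a hypothetical Express handler, not from the source article): the uniform interface maps standard HTTP methods onto a resource, each request carries everything the server needs, and the response declares whether it can be cached.

const express = require("express"); // assumes the express package is installed
const app = express();
app.use(express.json());

const products = { 1: { id: 1, name: "Sample product" } }; // in-memory stand-in

// Uniform interface: standard HTTP methods act on the /products resource.
app.get("/products/:id", (req, res) => {
    // Stateless: the request itself carries everything needed to serve it.
    const product = products[req.params.id];
    if (!product) return res.status(404).end();

    // Cacheable: the response says how long the client may store this data.
    res.set("Cache-Control", "public, max-age=3600");
    res.json(product);
});

app.post("/products", (req, res) => {
    const id = Object.keys(products).length + 1;
    products[id] = { id, ...req.body };
    res.status(201).json(products[id]);
});

app.listen(3000);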


Overall, I learned the most important aspects of REST API design. The blog was certainly helpful for understanding the key constraints, and I have only mentioned the most important parts of it here. I highly recommend everyone take a look at the source below.

Source: https://www.mulesoft.com/resources/api/what-is-rest-api-design

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

Windows vs. Linux (Week 1 makeup)

As many of us know, there are two main operating systems: the more commonly known Windows operating system, and the second most commonly known, Mac OS. With these two giants out of the way, there is still one more OS that is overlooked by many to this day. Most may know this particular OS as Linux. Linux is an open-source operating system derived from the Unix platform, which also underlies the Mac operating system. Many of you might be wondering what open source means, and it is quite simple: open source means anyone can be a developer for the system and make their own appropriate changes to the program in order to help or strengthen the operating system. Windows, as many might know, is a licensed operating system: in order to install it, you must buy the rights to download and have it on your system if it is not already installed. Windows is very straightforward and simple to use, whereas Linux will take a little reverse engineering in order to fully understand and grasp its full potential. Both of these operating systems offer pros and cons. Starting with some of Linux’s pros: first and foremost, Linux is a free, open-source OS. Linux uses a more traditional monolithic kernel, which is modular in design, allowing most of its drivers to dynamically load and unload while running. One of the cons of the Linux system, though, is that it is still regarded as a bare-bones operating system with little to no extra features and few changes over the years. Now, when it comes to actual usage, there is no comparison between the two: Windows beats Linux tenfold, especially since every prebuilt laptop or computer you buy from the store, other than a Mac, will be running Windows at any price point. With Windows you will also see many kinds of users, ranging from businesses to developers to kids. Something else to consider with Windows, which could be a double-edged sword, is the fact that Windows is a closed-source system that often has issues running open-source software. With that being said, it is still possible, by downloading third-party apps or add-ons for the Windows OS, to turn your Windows machine into something closer to a Linux machine and run open-source software, which allows Windows to be versatile in many ways.

From the blog Nicholas Dudo by and used with permission of the author. All other rights reserved by the author.