Category Archives: CS-343

A Strange Way to Use Get Requests

Strap in because this post is going to get very anecdotal.

Backstory

I have a history of running Minecraft servers for my friends and me. A few years ago, I learned how to port-forward a locally hosted server so we could play on it. While it’s technically not the most secure thing in the world, it works really well and it’s produced a lot of fun. For whatever reason, I was thinking about it again recently.

We usually play when I’m out of school, and I’ve slowly been learning more and more. I’m currently at the point where I can make intermediate datapacks for the game and use shell/bash/batch scripts to run the server. Over the summer, for instance, I created a shell script to run the server, back it up, and reload it if it crashes.

There’s still one major problem with my servers, however: I need to manually turn on one of my computers and run the server on it, and that computer then has to stay on until I want the server off. That wastes power if no one is playing (and increases the risk of the world becoming corrupted), and it slowly wears out my hardware.

Recently, I remembered that I had two old laptops. Unfortunately, the newer one didn’t have a power cord, so I broke my way in, took out the RAM, hard drive, and Wi-Fi card, and swapped those components into the other laptop. Lastly, I installed Lubuntu onto it. It works surprisingly well. I then set up NoMachine so I can remote into the machine without having it in front of me. The BIOS supports Wake-on-LAN, so I took a few hours to get that working. Now I can leave that laptop plugged in next to my router and send a signal over the internet to wake it up. While the router blocks the magic packet required on ports 7 and 9, a simple port forward allows Wake-on-LAN to work from anywhere. I would definitely not recommend that for most scenarios, but until I find myself the victim of attacks on my router, I think I’ll take my chances.

My goal now is to create programs that run the whole time that laptop is on and can receive requests to launch Minecraft servers. I’ll work out a script later for handling crashes and such, as well as a datapack for automatically stopping the server when no one has been online for 30 minutes or so. So I started work on a Node server.

Node.js

While I am learning about Node in my CS-343 course, I have already taken the Web Developer Bootcamp by Colt Steele on Udemy, and I highly recommend it. This is the basis of my server so far:

const express = require("express");
const app = express();

// PORT and IP come from environment variables set before the script runs.
const server = app.listen(process.env.PORT, process.env.IP, function() {
    console.log("The server has started.");
});

// Tracks whether the first Minecraft server is supposed to be running.
var minecraft_server1_on = false;

app.get("/minecraft/server1/start", function(req, res) {
    res.send("Turning on the server…\n");
    minecraft_server1_on = true;
});

app.get("/minecraft/server1/ping", function(req, res) {
    // Sends back true or false so another script can check the state.
    res.send(minecraft_server1_on);
});

app.get("/minecraft/server1/declare_off", function(req, res) {
    res.send("The server is off.\n");
    minecraft_server1_on = false;
});

app.get("/exit", function(req, res) {
    res.send("Exiting…\n");
    server.close();
});

// Catch-all route: any other path just confirms the server is running.
app.get("*", function(req, res) {
    res.send("The server is running.\n");
});

Now, I’m certain that I’m making mistakes and doing things in a strange way. It definitely is not in a state I would use for anything other than a private server, and even then I’m still working on it.

Here are the important parts. It’s an Express app with a few GET routes. I’m running this server from a bash script using nohup so that it can simply run in the background while the PC runs. However, I want to be able to stop the server from another bash script without using process IDs, so I don’t risk closing the wrong thing. While I was considering my options, I thought of something really interesting: I can use the curl command to perform a basic GET request from a bash script. So I created a route, /exit, that shuts down the server when requested.
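The same request could also be made from Node instead of bash. Here is a minimal sketch of what such a helper could look like; the file name stop-server.js and port 3000 are placeholders I’m using for illustration, not part of the real setup, and it assumes the Express app above is running locally.

// stop-server.js (hypothetical helper): performs the same GET request the
// curl one-liner in the bash script would, hitting the /exit route.
const http = require("http");

// Port 3000 is a placeholder; the real port comes from process.env.PORT.
http.get("http://localhost:3000/exit", function(res) {
    res.setEncoding("utf8");
    res.on("data", function(body) {
        console.log(body); // should print "Exiting…"
    });
}).on("error", function(err) {
    console.log("Could not reach the server: " + err.message);
});

Either way, the bash script only has to fire off one request, with no process IDs involved.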

It’s incredibly simple and there are definitely ways around it, but I had never even thought of using GET requests this way. There is information in the mere act of requesting a web page, beyond the normal response. Just the fact that a request occurred can be useful. What this means is I can check whether the server is running by pinging most routes. I can also both send and receive boolean data without any real complexity. Once again, I’m sure this violates some design principle. However, for something like this that is meant to be mostly casual and private, why should I need to set up proper POST routes and databases when I can use this?

Here is my method of turning on a Minecraft server now: I store its state in a variable in the Node server. This is fine because when the Node server stops, all the Minecraft servers should have stopped too, so I don’t need a database. Then, by pinging certain routes, I can turn a Minecraft server on, check whether it’s on, and declare that I have turned it off somewhere else. The Node server can maintain the state of multiple Minecraft servers (I can possibly run several on different ports) as well as handle tracking from inside and outside the machine.
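One possible way to generalize this to several servers (a sketch, not the code I’m actually running yet; the server names and port are placeholders) is to keep one boolean per server and use an Express route parameter to pick which one a request refers to:

const express = require("express");
const app = express();

// One boolean per Minecraft server, keyed by name (names are placeholders).
var serverStates = { server1: false, server2: false };

app.get("/minecraft/:name/start", function(req, res) {
    serverStates[req.params.name] = true;
    res.send("Turning on " + req.params.name + "...\n");
});

app.get("/minecraft/:name/ping", function(req, res) {
    // Express serializes the boolean as JSON, so curl sees "true" or "false".
    res.send(serverStates[req.params.name] === true);
});

app.get("/minecraft/:name/declare_off", function(req, res) {
    serverStates[req.params.name] = false;
    res.send(req.params.name + " is off.\n");
});

app.listen(3000); // placeholder port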

Keep in mind, again, that I’m looking for the easiest way I know to make this work right now, not the best way to do it. From here, I had no idea how to make the Node server actually start a Minecraft server, but I do know how to start one from a bash command. So I created a C++ program that will also run all the time and periodically check the status of the Node server. For instance, if I send a request to the Node server to turn on a Minecraft server, the C++ program can detect that change by running system("curl http://localhost:PORT/minecraft/server1/ping");. Once again, I’m using a UNIX command in C++ rather than a wrapper library for curl because it was the easier solution for me. The Node server can then return true or false. In fact, the C++ program won’t directly run the command; it will run a bash script that runs the command and stores the output in a file. The C++ program can then read the file and get the result.

I’m currently still in the process of making this work. After this, I’ll make another Node server hosted on Heroku with a nice front end that allows me and other people to request the laptop to wake on LAN, and then interact directly with the local Node server. I may even make a Discord bot so people can simply message in chat to request the server to turn on.

Conclusion

Once again, I do not recommend anyone actually do this the way I have. However, the whole point of coding is to make a computer do what you want it to. If you hide the GET requests behind authorization (which I will probably do) and fix any other issues, this could be useful. It’s not even specifically about using GET requests in this way. Abstract out and realize that it’s possible to do something in a way you never thought of, and it’s possible to use something in a way you never have. Consider what you know and explore what’s possible. Figure out whether or not it’s a good practice based on what problems you run into. I think that’s one of the best ways to learn, and if you can find a functional example to fixate on, the way I have, you can find yourself learning new things incredibly quickly.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

Models

https://www.lucidchart.com/blog/types-of-UML-diagrams

https://c4model.com/

For this week we’ll be covering modeling, which is an important aspect not only of software design but of design in general. Modeling is a very simple concept: it is when one creates a visual representation of a project, program, or whatever someone is working on. It is a visual tool used to streamline and simplify the process of development. In software design, two common types of models are UML and C4 diagrams. UML, which stands for Unified Modeling Language, is used to give a visual representation of the architecture, design, and implementation of a software system. For example, a software system can be represented by a group of interconnected boxes that symbolize each class in that system. A class is represented in UML as a box separated into three sections holding the name of the class, its attributes, and its operations (a small sketch of this mapping appears below). C4, meanwhile, is used to give a visual representation of the architecture at different levels of detail. For example, a software system is represented by an interconnected group of boxes symbolizing each element of that system, and as one zooms in closer, each element breaks down into the smaller parts that make it up. This process spans four levels, from the system context level, to the container level, to the component level, to the code (or UML) level. The key difference between C4 and UML diagrams is that C4 uses the general framework of UML to visualize a software system at greater scales.
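As a rough, hypothetical illustration, here is how one such three-compartment class box might map onto code; the Account class and its members are made up for the example.

// A UML class box for a hypothetical Account class:
//  -----------------------
// | Account               |   <- class name
// |-----------------------|
// | - owner: String       |   <- attributes
// | - balance: Number     |
// |-----------------------|
// | + deposit(amount)     |   <- operations
// | + withdraw(amount)    |
//  -----------------------
// ...and one way the same class might look in code:
class Account {
    constructor(owner, balance) {
        this.owner = owner;
        this.balance = balance;
    }
    deposit(amount) { this.balance += amount; }
    withdraw(amount) { this.balance -= amount; }
}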

The general importance of these models is that before creating software, you have to form a plan. If one were to skip this step, they would likely run into multiple problems, like forgetting to implement a part of the software, making a mistake that isn’t noticed until much later, or having no clear direction in development. Oftentimes when these mistakes are made, developers have to devote time to correcting them, when creating a model first could have let them develop more of the software in a shorter amount of time. Models can also make peer review easier: if a developer wants a peer’s feedback on the general structure of their work, they can give them an easy-to-understand model rather than the code for the classes that make up an entire software system. Models are a very simple but also very important aspect of software development, which is also the main reason why I chose to write about this topic.

From the blog CS@Worcester – Rainiery's Blog by rainiery and used with permission of the author. All other rights reserved by the author.

What is UML and the differences with OOM

The difference between OOM and UML, by definition:

Object-oriented modeling (OOM) is the construction of a model out of objects, using a collection of objects that contain the stored values of their instance variables. Unlike record-oriented models, the values in an object-oriented model are solely objects.

UML stands for “Unified Modeling Language.” The purpose of UML is to visually represent a system along with its main actors, roles, actions, artifacts, or classes, in order to better understand, alter, maintain, or document information about the system. We can say that UML is the newer approach to documenting and modeling software.

Object-oriented modeling is the process of constructing a model out of objects, which contain the stored values of their instance variables. It creates a union of application and database development, and that union is then transformed into a unified data model. This approach makes object identification and communication easy, and it supports data abstraction, inheritance, and encapsulation. Modeling techniques are implemented using OOP-supporting languages. OOM consists of the following three phases: analysis, design, and implementation.

UML, short for Unified Modeling Language, is a standardized modeling language consisting of a set of diagrams. UML is used to help system developers clarify, demonstrate, build, and document the output of a software system. It represents a set of practices that have proven successful in modeling large and complex systems, and it is a very important part of developing object-oriented software and software development processes. UML primarily uses graphical notations to represent the design of software projects. Using UML helps project teams communicate, explore potential designs, and validate the architectural design of the software. Below, we will walk through what UML is and why it is useful, and the sources at the end describe each UML diagram type, supplemented by examples.

Why do we like UML? What are the benefits of UML?

As the value of software products increases, companies are looking for technologies to improve their software production processes, improve quality, reduce costs, and shorten time to market. These technologies include component technologies, visual programming, and the application of patterns and frameworks. Companies are also looking for technology that can manage the complexity of systems as they grow in scope and scale, and they recognize the need to address recurring architectural issues such as physical distribution, concurrency, replication, security, load balancing, and fault tolerance.

In the end, UML provides users with an off-the-shelf, expressive visual modeling language so that they can develop and exchange meaningful models. It provides extensibility and specialization mechanisms for its core concepts. It is independent of any specific programming language and development process. It provides a formal basis for understanding the modeling language. It encourages the growth of the market for object-oriented tools. It supports higher-level development concepts such as collaborations, frameworks, patterns, and components. And it integrates best practices.

Sources:

https://www.techopedia.com/definition/28584/object-oriented-modeling-oom

Introduction to Object-Orientation and the UML

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.

Why use Docker?

This week on my CS journey, I want to talk about Docker. I know we went over several different activities in class; however, I was still a little confused, so I decided to look into it in more detail from outside sources to understand the concepts and terms well. Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. A container is not so different from a virtual machine, but instead of including a full operating system, a Docker container has just the minimum set of operating-system software needed for the application to run, relying on the host Linux kernel itself.

The first blog talked about the importance of Docker and how to set up a Dockerfile in the root directory. There was a 12-minute video from YouTube that explained the concept very well, and I learned a lot from it. The blog also talked about creating a docker-compose file, which Docker Compose uses to deploy and manage multiple containers at the same time. I learned about the pieces that go inside the file, including which Dockerfile to build a particular image from, which ports you want to expose, how to link containers, which ports you want to bind to the host machine, and other key settings. This blog was very helpful for assignment 4, building the Nginx and MongoDB servers.

The next blog talked about the benefits of Docker. One of the major benefits of containers is portability: containers can run on top of virtual machines or in the cloud. This has made it easy for container use cases to center around software development. People can now write an application, place it in a container, and then move the application across various environments, since it is encapsulated inside the container, which I think is very helpful. It was quite interesting to learn that Docker offers privately hosted repositories of containers, at about $1 per container. Many tech companies, especially cloud hosting companies, are looking to get in on the action and use Docker.

It was interesting to learn about the concepts behind Docker, how to use the commands properly, and how major companies are using Docker in their workplaces. Overall, the blogs and the video helped me understand the tools behind Docker, especially running your image locally and mapping it to a port, among other interesting concepts. I highly recommend everyone check out the video; it is well explained.

Source1: https://medium.com/@reyhanhamidi/why-bother-use-docker-61eadf968d87 

Source2: https://www.networkworld.com/article/2361465/docker-101-what-it-is-and-why-it-s-important.html

Video: https://www.youtube.com/watch?v=YFl2mCHdv24

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

Docker Review

 

Today I want to explain some of what I learned in class and from some outside sources.
What is Docker?
Docker is a packaging of Linux containers, providing a simple, easy-to-use container interface. It is by far the most popular Linux container solution. It is a kind of virtual machine, but it also differs from a virtual machine.
Docker packages the application and its dependencies in a single file. Running this file generates a virtual container. The program runs in this virtual container as if it were running on a real physical machine. With Docker, you don’t have to worry about the environment.
Generally speaking, Docker’s interface is quite simple. Users can easily create and use containers and put their own applications into them. Containers can also be versioned, copied, shared, and modified just like normal code.
The main uses of Docker currently fall into three categories. First, providing a disposable environment: examples include testing other people’s software locally and providing a unit-test and build environment for continuous integration. Second, providing flexible cloud services: because Docker containers can be started and stopped on demand, they are ideal for dynamically scaling up and down. Third, building a microservice architecture: with multiple containers, one machine can run multiple services, so a microservice architecture can be simulated on a single machine.
Docker mainly involves three basic concepts, namely the image, the container, and the repository. Once you understand these three concepts, you can understand the entire life cycle of Docker. The following is a brief summary of these three concepts, which we looked at briefly in class.
A Docker image is a special file system that not only provides the programs, libraries, resources, and configuration files required by the container at runtime, but also contains some configuration parameters for the runtime.
The essence of a container is a process, but unlike processes that execute directly on the host, container processes run in their own separate namespaces. Containers can be created, started, stopped, deleted, paused, and so on. The relationship between an image and a container is analogous to that between classes and instances in object-oriented programming.
Once an image is built, it is easy to run on the current host, but if you need to use the image on other servers, you need a centralized service to store and distribute it, and a Docker registry is one such service. A Docker registry can contain multiple repositories; each repository can contain multiple tags; and each tag corresponds to an image, where the tag can be understood as the version number of the image.
On the whole, Docker is a concept that becomes more and more interesting and is worth understanding in depth.
Sources:
https://medium.com/codingthesmartway-com-blog/docker-beginners-guide-part-1-images-containers-6f3507fffc98
https://docs.docker.com/get-started/part2/

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.

Adding Comments to Code

This post is about Chapter 4, “Comments,” from the book Clean Code by Robert Martin. This chapter, along with the rest of the book, has a strong relationship with the course because it is all about how to write clean, organized code. This chapter is on writing effective comments. Rather than long, drawn-out stretches of plain text, the chapter is split up into many subheadings with even more sub-subheadings. I find this format very useful because it splits up and organizes the information so that it is easier to read and understand. Many of the subtopics have examples to go with them, which makes it even easier to understand what they are trying to say.

The chapter is split into two main sections, “Good Comments” and “Bad Comments.” Each has many subheadings, such as “Legal Comments,” “Explanation of Intent,” and “Informative Comments” under the “Good Comments” section, and “Redundant Comments,” “Misleading Comments,” and “Mandated Comments” under the “Bad Comments” section. I selected this chapter because we have not gone over proper comment writing in my classes (or at least I have not), and I think it is an important thing to learn for future jobs and class projects. I have found the information to be very direct and informative. As said earlier, it is split up into various subheadings with subtopics, so if you are debating whether to add a certain comment to your code, you can go right to the topic covering the exact type of comment you are looking to add.

I have learned about “noise comments,” which are comments that are not needed, such as /** Default Constructor */ over a constructor. I would do this because I typically like to explain everything in the code, but things such as constructors explain themselves. Another is “mandated comments,” where you give a Javadoc for every single function; I would always do this, but it only creates clutter. An example of a “good comment” that I have learned about is the “warning comment,” where you can warn other programmers of certain consequences, for example leaving “// don’t run unless you have time to kill” above a test with a really big file. Another example of a “good comment” is “amplification,” where a comment can be used to amplify the importance of something that may not otherwise seem important. I will be able to apply this knowledge in future projects and work to provide clean, uncluttered code and powerful, efficient comments.
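To make the difference concrete, here is a small made-up illustration (in JavaScript rather than the book’s Java) of a noise comment next to a warning comment; the function names are hypothetical.

// Bad: a noise comment that only restates what the code already says.
/** Default constructor. */
function Logger() {}

// Good: a warning comment that tells the next programmer something they
// genuinely need to know before running this.
// Don't run unless you have time to kill: this scans the entire archive.
function indexEntireArchive(rootPath) {
    // ...long-running work would go here...
}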

From the blog CS@Worcester – Austins CS Site by Austin Engel and used with permission of the author. All other rights reserved by the author.

JavaScript

In past class sessions we have been learning a lot about Docker, and not just Docker but also how it works with Java and JavaScript. Today I am going to introduce JavaScript and how it works with Docker. So let’s start with what’s different between Java and JavaScript. Java applications run in a virtual machine or a web browser, while JavaScript runs in a web browser. Java code is compiled, whereas JavaScript code is plain text in a web page. JavaScript is an OOP scripting language, whereas Java is an OOP programming language.

Do not confuse JavaScript with the Java programming language. Both “Java” and “JavaScript” are trademarks or registered trademarks of Oracle in the U.S. and other countries. However, the two programming languages have very different syntax, semantics, and uses.

JavaScript (JS) is a lightweight, interpreted, or just-in-time compiled programming language with first-class functions. While it is most well known as the scripting language for web pages, many non-browser environments also use it, such as Node.js, Apache, and Adobe. JavaScript is a prototype-based, multi-paradigm, single-threaded, dynamic language, supporting object-oriented, imperative, and declarative (e.g. functional programming) styles. The Docker platform allows developers to package and run applications as containers. A container is an isolated process that runs on a shared operating system, offering a lighter-weight alternative to virtual machines. Though containers are not new, they offer benefits, including process isolation and environment standardization, that are growing in importance as more developers use distributed application architectures.

  • JavaScript is used in web application development. Some examples are Netflix, Facebook, Uber, LinkedIn, etc.
  • JavaScript is used in mobile application development. Some examples are Spotify, Instagram, Facebook, Skype, Uber, etc.
  • JavaScript is used in game development. Some games where JavaScript is used are Angry Birds, Candy Crush, Systems Offline, Re-wire, Offline Paradise, etc.

Running JS code in the browser: a good way to learn JavaScript is to run it in the browser’s JavaScript console. Just open your favorite browser and press the F12 key or Ctrl + Shift + I on your keyboard. A panel will appear on the screen; click on the Console tab. And that’s your playground. Yes, you heard me correctly, you can write JavaScript code right there. Let’s write our legendary program here and see.
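Assuming the legendary program in question is the classic hello world, it is a single line typed into the console:

// Typed straight into the browser's JavaScript console
console.log("Hello, World!");
// The console also evaluates plain expressions, e.g. 2 + 2 prints 4.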

Here is a link to instructions on how to build a Node.js application with Docker:

https://www.digitalocean.com/community/tutorials/how-to-build-a-node-js-application-with-docker

https://www.oreilly.com/library/view/javascript-the-definitive/9781491952016/ch01.html

From the blog CS@Worcester – </electrons> by 3electrones and used with permission of the author. All other rights reserved by the author.

DOCKER-COMPOSE

Following the Docker activities in class, I found docker-compose quite confusing. I decided to use that as an opportunity to look for some other resources that explain the concept briefly but in enough detail that I could understand the term well. I finally settled on this blog, which gives a tutorial on docker-compose. I like the fact that all the concepts have been covered and explained simply. I also see some terms that I found confusing during class activities defined and explained. I like how it starts by giving a walkthrough of Docker in general and then introduces docker-compose, so the connection is clear and, as a reader, you do not get lost.

This blog covers the fundamentals of Docker. It explains docker-compose as a Docker tool used to define and run multi-container applications. It talks about the features of docker-compose and the structure of a docker-compose file, and it explains some of the necessary keywords found in that structure. It gives images of the structure of a docker-compose file and images comparing docker-compose and Docker to differentiate them. The blog also explains some docker-compose commands.

I learned from this blog that you can start using docker-compose with Dockerfiles by defining your app’s environment using a Dockerfile, defining the services for your app so that they can run in an isolated environment, and starting the app by running docker-compose. Docker-compose can be added to a pre-existing project, and to your workspace, if you add a Dockerfile.

Docker-compose works by applying multiple commands that are declared within a single .yml configuration file. The compose file consists of keywords that make up its structure, such as services, ports, volumes, build, etc. In class, these keywords were not clear to me. However, from this blog I learned that version denotes the version of the docker-compose file format, which is usually the latest version. All the containers to be created are defined under services, and docker-compose will create containers based on the names we provide in this section. Build specifies the location of the Dockerfile, and ports maps the container’s ports to the host machine. The image keyword allows a service to run from a pre-built image by specifying the image’s location. Understanding these concepts really helped me in the recent homework assignment.

I also learned that the purpose of docker-compose build is to get images ready for creating containers, but this step is skipped if a service is using a pre-built image. I always thought that docker-compose up builds images and runs containers, but little did I know that it skips the build and starts containers directly if the images are already built.

I hope others find this blog helpful, especially those who are new to docker-compose, as it gives an insight into the concept that will help guide you through working with Docker.

https://www.educative.io/blog/docker-compose-tutorial

From the blog CS@Worcester – GreenApple by afua3254 and used with permission of the author. All other rights reserved by the author.

Factory Design Pattern

In keeping with my recent learning about Java design patterns, I have been studying the Factory Design Pattern. It is defined as a creational pattern by the Gang of Four and is very widely used because of the plethora of applications it has. This seemed to be one of the best design patterns to learn, as I saw a great many recommendations to do so in my research on design patterns. In the Factory Method Pattern article from Java T Point, the design pattern is made by “[defining] an interface or abstract class for creating an object but [letting] the subclasses decide which class to instantiate.” In other words, if the client requires multiple similar behaviors, the Factory Design Pattern is used to choose the required subclass instance to complete the required behavior. The article outlines the key advantages of this design pattern, including the promotion of “loose coupling,” because it keeps application-specific classes out of the code. It then gives a couple of example uses, including a class not knowing what subclasses will be required.

There is then a simple UML diagram showing a small example of how an electricity bill may be calculated using the Factory Design Pattern, and this example is then expanded upon. First, an abstract Plan class is created. It contains an abstract getRate method and a concrete calculateBill method. The subclasses show the real usefulness of the design pattern. There are three subclasses: DomesticPlan (with a rate of 3.50), CommercialPlan (with a rate of 7.50), and InstitutionalPlan (with a rate of 5.50). Each class implements the getRate method, which sets the Plan class’s rate variable to its respective rate. From here, a GetPlanFactory class is created, which uses the Plan class and its subclasses to return a Plan object. The one method here is getPlan, which takes a string variable and has a set of if statements, each of which returns a different Plan subclass object depending on the string. Finally, the GenerateBill class is the one the client interacts with. It asks the client for the name of the plan needed, which the factory uses to return an object of one of the Plan subclasses. Then, units of electricity are taken from the client. The Plan subclass object then calls getRate to get the rate associated with the plan and calculateBill to tell the client the total charge for electricity usage.

Although the explanation seems complicated, it is quite simple and clean in implementation, and extremely useful when one does not know which of a given set of behaviors will be required. I expect I can use this design pattern quite frequently in my personal, school-related, and professional coding.
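The article’s example is written in Java; here is a rough JavaScript sketch of the same structure, keeping the article’s class names and rates but simplified so that getRate returns the rate directly instead of setting a field.

// Plan plays the role of the abstract class: subclasses supply getRate().
class Plan {
    calculateBill(units) {
        return this.getRate() * units;
    }
}

class DomesticPlan extends Plan {
    getRate() { return 3.5; }
}

class CommercialPlan extends Plan {
    getRate() { return 7.5; }
}

class InstitutionalPlan extends Plan {
    getRate() { return 5.5; }
}

// The factory decides which subclass to instantiate based on a string.
class GetPlanFactory {
    getPlan(planType) {
        if (planType === "DOMESTICPLAN") return new DomesticPlan();
        if (planType === "COMMERCIALPLAN") return new CommercialPlan();
        if (planType === "INSTITUTIONALPLAN") return new InstitutionalPlan();
        return null;
    }
}

// Client code (standing in for the article's GenerateBill class) only talks
// to the factory, never to the concrete subclasses directly.
const factory = new GetPlanFactory();
const plan = factory.getPlan("COMMERCIALPLAN");
console.log(plan.calculateBill(100)); // 750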

From the blog CS@Worcester – Marcos Felipe's CS Blog by mfelipe98 and used with permission of the author. All other rights reserved by the author.

Best Practices for Using Docker

For the past few weeks, we have been working a lot with Docker and how to use it. Coincidentally, I came across an article describing some best practices when using Docker. Some of the things in the article were even things we used in class.

The article gives many things to keep in mind. There is size: images should not waste space, especially since extra contents can be a security concern. Using Alpine Linux is a great way to save space, though it may not always be appropriate. It is also important to keep your Dockerfile current. Using the latest tag is one way, but the author also says to check regularly to make sure you really are building from the latest version. The author also mentions a new tool called docker scan for finding known vulnerabilities in your Docker images. There is a lot of emphasis on Docker containers being simple and easy to create and take down; the system should be designed so this can happen without adversely affecting your app, as that is the whole point of Docker. At the end of the article there are additional links to more articles about Docker by the same author.

I selected this article because it is relevant to the current course material; we have been using Docker in all our classes for the last few weeks. As I was reading, I recognized the author’s advice to “Use Alpine Builds” because we do use those in class. The article even explains using “FROM alpine”, something we start our Dockerfiles with in class.

Because the tips in the article were kept very general, I was able to understand most of them. With more technical articles I often get confused or lost, especially since I have no experience working in the software development industry. Reading this article made me realize how far I have come since I started the CS major. Going in, I had no programming experience and little to no aptitude when it came to computers. Seeing the things we learn in class actually be relevant in the real world is very validating. It proves to me that I am actually learning, something that is only more difficult with remote learning.

I hope to be able to use this article going through the course and on the final project. These tips should help me get the most out of Docker.

From the blog CS@Worcester – Half-Cooked Coding by alexmle1999 and used with permission of the author. All other rights reserved by the author.