Category Archives: Week 13

MongoDB in Five Minutes

This week I wanted to look more into MongoDB, so I watched an introduction video titled “MongoDB in 5 Minutes with Eliot Horowitz” from MongoDB’s YouTube channel. I wanted to learn more about MongoDB because we have been using it a lot in class, but I did not yet have a clear idea of what it is and how it works. I have already taken a relational database course, so I have a clear mental image of what an SQL database is and how it is organized. I know MongoDB is not a relational database, but that is the extent of my knowledge.

They start by describing relational databases as Excel sheets on steroids, which is a pretty close comparison. They then use the example of a patient record-keeping system for a doctor’s office. A patient information table might need multiple phone numbers and addresses, and these columns would end up being mostly empty in a relational database. We might also have the patient’s information spread over many tables, adding complexity. When this database is used in an application, it makes the application inefficient and hard to maintain.

MongoDB uses documents instead, and each document can be different. Not all the documents in this case would have to have the same number of phone numbers or addresses; they don’t need the same set of columns the way records in a relational database table do.
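For example (a sketch of my own, not taken from the video), two patient documents in the same collection could look like this:

// Two documents in the same "patients" collection can have
// different fields and different numbers of values
const patient1 = {
    name: "Jane Doe",
    phones: ["508-555-0101"],
    addresses: ["12 Main St"]
};

const patient2 = {
    name: "John Smith",
    phones: ["508-555-0102", "508-555-0103", "508-555-0104"]
    // no addresses at all, and no empty columns wasted on them
};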

This video is very short and was also created by MongoDB, so it is not surprising that they only show one use case where MongoDB is better than relational databases. If relational databases are Excel, then MongoDB is kind of like Microsoft Word. It is important to remember that relational databases enforce things like integrity constraints that MongoDB doesn’t, making them better in certain situations. Using multiple tables also isn’t too much of a problem; that is what joins are for. I can definitely see the advantages of using MongoDB now, though. The idea of documents is so simple, and it is easy to grasp compared to relational databases. This provides a good start for looking more into MongoDB. I may have needed this knowledge for the final project, though we won’t have one anymore. I will definitely be using MongoDB in the future, though, as it is just becoming more and more popular.

From the blog CS@Worcester – Half-Cooked Coding by alexmle1999 and used with permission of the author. All other rights reserved by the author.

Learning JavaScript

As my interest in learning JavaScript grows, I have started to strategize how I should go about learning it. I have heard of several JavaScript frameworks, so I was wondering if I should start with one like Angular, Vue, or Ember, and in my research I found a blog post by Francois-Xavier P. Darveau with a very telling title: Yes, You Should Learn Vanilla JavaScript Before Fancy JS Frameworks. The post details why Darveau believes that learning vanilla JavaScript is so important. He starts with a short story from his time in college, when he was learning Node.js for a project: rather than quickly finding libraries and doing some Stack Overflow-style copy/pasting to get the project working, he dove in and tried to do everything himself. Although it was time-consuming, arduous, and not exactly optimal, he learned far more about what was happening behind the scenes than he would have otherwise. He emphasizes that learning all the shortcuts from frameworks and libraries without knowing the hows or whys is more akin to pretending than to real knowledge. He then explains the meaning of vanilla JavaScript, if it can’t already be inferred: “plain JS without any additional libraries.”

He notes that, of course, frameworks and libraries can be extremely useful and time-saving, but a lack of vanilla JS knowledge can leave one helpless if something goes wrong or if the field collectively decides to hop over to the next new best framework. Some framework pros and cons are detailed as well. Pros: abstracting hard code, shipping code faster and increasing development velocity, and focusing on an app’s value instead of its implementation. Cons: when work scales up and apps become more complex, or multiple teams are working on multiple apps, there will no doubt be times when a deep JS understanding is needed to get everything to go smoothly. If a vanilla JS foundation is strong, the main thing changing when getting into a new framework will be the syntax. New useful frameworks come out faster than anyone can master them, but with a vanilla JS understanding you can already be a step ahead. He then provides plenty of resources for how and where JS can be studied.

I found this very useful, as I hope to learn much more JS soon. I have just done some work with Node.js to implement a new REST API endpoint, and I hope to get JS down as it becomes increasingly prominent in the CS world. I am also doing an independent study related to mathematical modeling and linear algebra using MATLAB in the upcoming semester, so maybe there will be an opportunity there to apply my JS knowledge.
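To make the term concrete, here is a tiny sketch of my own (not from Darveau’s post) of what vanilla JS looks like: just the language and the browser’s DOM API, with no framework in between:

// Vanilla JS: plain DOM manipulation, no libraries required
const button = document.querySelector("#save");
button.addEventListener("click", () => {
    document.querySelector("#status").textContent = "Saved!";
});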

From the blog CS@Worcester – Marcos Felipe's CS Blog by mfelipe98 and used with permission of the author. All other rights reserved by the author.

What is software architecture design?

I. Concept of system architecture

Edward Crawley, Bruce Cameron, and Daniel Selva co-authored SYSTEM ARCHITECTURE: Strategy and Product Development for Complex Systems. In the book, the word “system” is defined in this way: a system is a set of entities and their relationships, whose combined function is greater than the sum of their individual functions.

In other words, the function has to be 1+1>2, which is called emergence. For example, a pile of bricks and wood cannot provide shelter from the wind and rain, but they can form a warm house. The function of the house is greater than the sum of the functions of the pile of materials, so the house is a system.

Now that you know what a system is, let’s look at what a system architecture does:

1) Determine the form and function of the system. To put it bluntly, it’s analyzing requirements.

2) Determine the entities, forms, and functions of the entities in the system. It’s dividing up the system. To accomplish this task, the book proposes some points of attention: identifying potential entities, focusing on important entities, abstracting entities, and defining the boundaries of the system and the environment in which the system resides.

3) Determine the relationship between entities. This includes identifying relationships between internal entities and entities located at boundaries and defining the form and function of those relationships. That is to define internal and external interfaces.

4) Forecast emergence. This means predicting whether the intended functions and performance will actually be realized, and also predicting system failures, which are a form of unexpected emergence.

II. The book also explains the architect’s function from another perspective:

1) Disambiguation. That is, the architecture is designed so that you don’t have a vague understanding of the requirements.

2) Define the system concept. Put forward the overall solution, define the key terms in the system, define the key measurement criteria.

3) Design decomposition. The key to breaking down the system into entities and the relationships between entities is to control the complexity of the system and not overscale it.

It can be seen that the system architecture is a step between the requirements and the implementation, which not only analyzes the requirements but also proposes a feasible implementation scheme.
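As a toy sketch of points 2 and 3 in code (my own example, not from the book; the RecordStore and ReportGenerator entities are made up), two entities and the interface relationship between them might look like this:

// Entity: storage, exposing a small, deliberate interface
class RecordStore {
    #records = [];
    add(record) { this.#records.push(record); }
    count() { return this.#records.length; }
}

// Entity: reporting; its relationship to RecordStore is the
// store parameter, an internal interface between the two
class ReportGenerator {
    constructor(store) { this.store = store; }
    summary() { return `${this.store.count()} records on file`; }
}

const reports = new ReportGenerator(new RecordStore());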

System architecture design is best suited to one person or a few people, because involving many people leads to insufficient integrity of thinking. If multiple people must work together, the best approach is still to divide the work into a hierarchy, with each unit of the hierarchy completed by a single person. This requires a high level of knowledge, synthesis, analysis, and imagination on the part of the architect.

Sources:

Preface.pdf

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.

Running Bash Commands in Node.js

Disclaimer:

This is not explicitly a “guide”. There are far better resources available on this topic; this is simply a discussion about the child_process module.

Anecdotal Background

I’ve been in the process of creating programs to automate the running of my Minecraft server for my friends and me. In that process, I’ve created a functional system using a Node server, a C++ server, and a lot of bash files. While it works, I can’t be sure there aren’t bugs without proper testing. Trust me, nothing about this project has had “doing it properly” in mind.

When learning front-end JavaScript, I recall hearing that it can’t modify files for security reasons. That’s why I was using a C++ server to handle file interactions. Little did I realize Node can easily interact with files and the system itself. Once I have the time, I’m going to recreate my setup in one Node server. The final “total” setup will involve a Heroku node server, a local node server, and a Raspberry Pi with both a node server (for wake-on-LAN) and an nginx server acting as a proxy to secure the local node servers.

Log

As a bit of a prerequisite, I’ve been using a basic module as a simple improvement on top of console.log. I create a log/index.js file (which could simply be log.js, but I prefer having app.js be the only JavaScript file in my parent directory; the problem with this approach, however, is that you end up with many index.js files, which can be hard to edit at the same time).

Now, depending on what I need for my node project I might change up the actual function. Here’s one example:

// Print a timestamped label, then optional context and message lines
module.exports = (label, message = "", middle = "") => {
    console.log(label + " -- " + new Date().toLocaleString());
    if(middle) console.log(middle);
    if(message) console.log(message);
    console.log(); // blank line to separate log entries
}

Honestly, all my log function does is print out a message with the current date and time. I’m sure I could get significantly fancier, but this has proved useful when debugging a program that takes minutes to complete. To use it, I do:

// This line is for both log/index.js and log.js
const log = require("./log"); 

log("Something");

Maybe that’ll be useful to someone. If not, it provides context for what follows…

Exec

I’ve created this as a basic test to see what it’s like to run a Minecraft server from Node. Similar to log, I created an exec/index.js. First, I have:

const { execSync } = require("child_process");
const log = require("../log");

This uses the log module I referenced before, as well as execSync from Node’s built-in child_process module. This is a synchronous version of exec, which is ideal for my purposes. Next, I created two basic functions:

// Run a command synchronously in bash and return its trimmed output
module.exports.exec = (command) => {
    return execSync(command, { shell: "/bin/bash" }).toString().trim();
}

// Same as exec, but also log the command and its output
module.exports.execLog = (command) => {
    const output = this.exec(command);
    log("Exec", output, `$ ${command}`);
    return output;
}

I create a shorthand version of execSync which is very useful by itself. Then, I create a variant that also creates a log. From here, I found it tedious to enter multiple commands at a time and very hard to perform commands like cd, because every time execSync is run, it begins in the original directory. So, you would have to do something along the lines of cd directory; command or cd directory && command, both of which become incredibly long commands when you have to do a handful of things in a directory. So, I created scripts:

// Join an array of command strings into one multi-line command,
// optionally prefixing each line (used for logging with "$ ")
function scriptToCommand(script, pre = "") {
    let command = "";

    script.forEach((line) => {
        if(pre) command += pre;
        command += line + "\n";
    });

    return command.trimEnd();
}

I created them as arrays of strings. This way, I can create scripts that look like this:

[
    "cd minecraft",
    "java -jar server.jar"
]

This seemed like a good compromise: scripts look almost syntactically the same as an actual bash file, while still letting me handle each line individually (which I wanted so that when I log each script, each line of the script begins with $ followed by the command). Then, I just have:

module.exports.execScript = (script) => {
    return this.exec(scriptToCommand(script));
}

module.exports.execScriptLog = (script) => {
    const output = this.execScript(script);
    log("Exec", output, scriptToCommand(script, "$ "));
    return output;
}

Key Note:

When using the module.exports.foo notation to add a function to a Node module, you don’t need to create a separate variable in order to reference that function from inside the module (without typing module.exports every time). At the top level of a CommonJS module, the this keyword refers to module.exports, so you can use this instead.
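A minimal sketch of that behavior (my own example, not from the project):

// math/index.js
module.exports.double = (x) => 2 * x;

// Works because at module scope `this` is module.exports, and
// arrow functions inherit that lexical `this`
module.exports.quadruple = (x) => this.double(this.double(x));

// elsewhere: require("./math").quadruple(2) returns 8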

Conclusion

Overall, running bash or other shells from Node isn’t that hard of a task. One thing I’m discovering about Node is that every time I want to do something, if I just spend some time making a custom module, I can do it more efficiently. Even my basic log module can be made far more complex and save a lot of keystrokes. And that’s just a key idea in coding in general.

Oh, and for anyone wondering, I can create a minecraft folder and place the server.jar file in it. Then, all I have to do is:

const { execScriptLog } = require("./exec");

execScriptLog([
    "cd minecraft",
    "java -jar server.jar"
]);

And, of course, set up the server files themselves after they generate.

From the blog CS@Worcester – The Introspective Thinker by David MacDonald and used with permission of the author. All other rights reserved by the author.

Agile Software Development

This week on my CS journey, I want to focus on Agile software development and its methodologies. Agile methodology is a type of project management process, mainly used for software development, that evolves through the collaborative effort of cross-functional teams and their customers. Scrum and Kanban are two of the most widely used Agile methodologies. Today I want to focus mainly on Scrum. Recently I saw that employers are looking for candidates who have experience with Scrum and Agile development, so it is important that we learn more about it.

Scrum is a framework that allows for more effective collaboration among teams working on complex projects. It is a management system that relies on step-by-step development. Each cycle consists of two- to four-week sprints, where each sprint’s goal is to build the most important features first and come out with a potentially deliverable product. Agile Scrum methodology has several benefits: it encourages products to be built faster, since each set of goals must be completed within each sprint’s time frame.

Now let’s look at the three core roles in Scrum: the Scrum master, the product owner, and the Scrum team. The Scrum master is the facilitator of the Scrum. In addition to holding daily meetings with the Scrum team, the Scrum master makes certain that Scrum rules are being enforced and applied; other responsibilities include motivating the team and ensuring that the team has the best possible conditions to meet its goals and produce deliverable products. Secondly, the product owner represents the stakeholders, who are typically customers. To ensure the Scrum team is always delivering value to stakeholders and the business, the product owner determines product expectations, records changes to the product, and administers the Scrum backlog, a detailed and continuously updated to-do list for the Scrum project. The product owner is also responsible for prioritizing goals for each sprint based on their value to stakeholders. Lastly, the Scrum team is a self-organized group of three to nine developers who have the business, design, analytical, and development skills to carry out the actual work, solve problems, and produce deliverable products. Members of the Scrum team self-administer tasks and are jointly responsible for meeting each sprint’s goals.

Below I have provided a diagram that shows the structure of the sprint cycles. I think understanding Agile methodologies is helpful because most major companies use them to help teams and individuals effectively prioritize work and features. I highly recommend visiting the websites below, as they provide detailed explanations of how a Scrum cycle works.


Sources: https://zenkit.com/en/blog/agile-methodology-an-overview

https://www.businessnewsdaily.com/4987-what-is-agile-scrum-methodology.html#:~:text=Agile%20scrum%20methodology%20is%20a,with%20a%20potentially%20deliverable%20product.&text=Agile%20scrum%20methodology%20has%20several%20benefits

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

Docker

Hello everyone, and welcome to week 13 of the Coding Journey blog. In this week’s post I will be talking about a popular container platform that many software developers use called Docker. Docker is very helpful in today’s world and helps manage as well as deliver software much more efficiently.

So what exactly is Docker? That is a question many people ask when they first hear the term. Well, Docker is basically an open platform used for developing, shipping, and running applications. The reason why Docker is so important and useful is that it allows you to separate your applications from your infrastructure, which results in quicker software releases. Timing is everything in today’s world, and so a tool like Docker is needed for a faster approach to software. With Docker, the infrastructure of the software can be managed the same way as applications, and this shortens the gap between producing code and releasing it for production use.

Docker allows an application to be run in an isolated environment known as a container. This isolation, as well as heightened security, allows users to run many containers on a single host. Containers are efficient because they run directly through the host machine’s kernel, so you can run more of them than you could virtual machines. The process first starts with you developing your code and other components using Docker containers. Docker then becomes helpful in testing your code as well as sharing it through the container. When the code is all ready to go, you can deploy your application in the production environment, whether it is local or on a cloud service.

Essentially, there are many reasons for using Docker when developing code and working on an application. Docker allows for faster and more consistent deployment of applications, so it isn’t a hassle if there are any bug fixes that need to ship down the line. Containers are known for enabling continuous integration and continuous delivery in the workflow. Docker is also helpful in scaling applications because it allows for highly portable workloads and can be changed easily to fit any business needs.
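To picture the develop-then-ship flow, here is a minimal Dockerfile sketch of my own (not from the Docker docs; it assumes a Node.js app whose entry point is app.js):

# Start from the official Node.js base image
FROM node:14

# Copy the application into the image and install its dependencies
WORKDIR /app
COPY . .
RUN npm install

# The command a container built from this image runs on startup
CMD ["node", "app.js"]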

I know for a fact that throughout my journey with developing and learning code, I will use Docker to run and create apps. It makes the process a lot smoother and more efficient, which is what most developers are looking for, because they don’t want extra hassles. Docker simplifies creating applications, and my future projects will be much easier to manage through Docker and its ability to use containers.

For more resources check out these links:

https://docs.docker.com/get-started/overview/

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.

REST-API

In the past few weeks we have been working with REST APIs and how they work. We have learned a lot about REST API code and how to implement it to create WEB-order. Today I am going to talk about what a REST API is and how you implement one using the Docker platform. So, what is a REST API? REST is an acronym for REpresentational State Transfer. There are six guiding principles of REST: client-server, stateless, cacheable, uniform interface, layered system, and code on demand (which is optional).

Client-server means separating the user interface concerns from the data storage concerns. Stateless means each request from client to server must contain all of the information necessary to understand the request and cannot take advantage of any stored context on the server; session state is therefore kept entirely on the client. Cacheable means cache constraints require that the data within a response to a request be implicitly or explicitly labeled as cacheable or non-cacheable. Uniform interface means that by applying the software engineering principle of generality to the component interface, the overall system architecture is simplified and the visibility of interactions is improved. Layered system means the architecture can be composed of hierarchical layers by constraining component behavior such that each component cannot “see” beyond the immediate layer with which it is interacting. Code on demand means REST allows client functionality to be extended by downloading and executing code in the form of applets or scripts; this simplifies clients by reducing the number of features required to be pre-implemented.

The key abstraction of information in REST is a resource. Any information that can be named can be a resource: a document or image, a temporal service, a collection of other resources, a non-virtual object (e.g., a person), and so on. REST uses a resource identifier to identify the particular resource involved in an interaction between components.

Another important thing associated with REST is the resource methods used to perform the desired transition. A large number of people wrongly equate resource methods with the HTTP GET/PUT/POST/DELETE methods, and a lot of people prefer to compare HTTP with REST, but REST and HTTP are not the same.
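To make that concrete, here is a small sketch of my own (not from the sources; it assumes Node.js with the Express package, and the guests resource is made up) where HTTP methods operate on a resource:

const express = require("express");
const app = express();
app.use(express.json());

// In-memory stand-in for a real data store
const guests = [{ id: 1, name: "Alice" }];

// GET reads the guest collection resource
app.get("/guests", (req, res) => res.json(guests));

// POST creates a new guest resource
app.post("/guests", (req, res) => {
    const guest = { id: guests.length + 1, ...req.body };
    guests.push(guest);
    res.status(201).json(guest);
});

app.listen(3000);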

Here is an example of how a REST API is described in a .yml file. In this file you write a specification that tells how your web service should behave, including the tags, a summary, the operationId, the responses for each status code, and much more.
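A rough sketch of what one endpoint in such a .yml file might look like (my own example in the OpenAPI style; the path and names are made up):

paths:
  /guests:
    get:
      tags:
        - guests
      summary: Returns the list of guests
      operationId: getGuests
      responses:
        '200':
          description: A JSON array of guest names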

Resources:

https://gitlab.com/LibreFoodPantry/training/microservices-activities

https://searchapparchitecture.techtarget.com/definition/RESTful-API

From the blog CS@Worcester – </electrons> by 3electrones and used with permission of the author. All other rights reserved by the author.

Law of Demeter: tell don’t ask

For this week, the topic I chose to learn more about was the Law of Demeter. I chose the Law of Demeter because it is one of the topics covered in our course syllabus, and it is also a concept I had not heard about before, so I was curious to learn more about it. To supplement my understanding of this concept, I watched a YouTube video about it and also looked at an article by Hackernoon. I chose the video because it was short and concise and was uploaded by a channel catered toward teaching programming concepts. The summary of the video was that in programming, cohesion is good and coupling is bad, and the Law of Demeter is a design principle used to reduce coupling. Also, the Law of Demeter is often implemented with a design pattern such as facade or adapter that performs some form of delegation to remove the need for chaining. The second source I used was an article on the Law of Demeter by Hackernoon. I used this second source because I wanted to see an example of this concept implemented in code, and the code was implemented in Java, a language I am familiar with. The article sums up the Law of Demeter as a “Don’t talk to strangers” rule and gives examples where the Law of Demeter is followed and where it is violated.
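To see what that means in code, here is a small sketch of my own in JavaScript (the article’s examples are in Java, and these class names are made up):

class Wallet {
    constructor(balance) { this.balance = balance; }
    deduct(amount) { this.balance -= amount; }
}

class Customer {
    constructor(wallet) { this.wallet = wallet; }
    // Delegation: the customer deals with its own wallet
    pay(amount) { this.wallet.deduct(amount); }
}

const customer = new Customer(new Wallet(100));

// Violates the Law of Demeter: reaches through customer to a "stranger"
customer.wallet.deduct(25);

// Respects it: tell, don't ask
customer.pay(25);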

Together, these two sources helped me understand the Law of Demeter. Beyond the explanation of the Law of Demeter in the video source, I also enjoyed how the YouTuber, Okravi, explains that the Law of Demeter is a design principle and that some people do not agree with it; he states that the Law of Demeter does not have to be strictly followed and that you should just use the concept where it makes sense in your code. That statement reminded me of many of my previous blog posts in which I listened to experienced developers explain a topic: they say that following a principle strictly is not realistic, and that we should instead strive to use a concept when it makes sense in our code. Reflecting on the Law of Demeter also made me think more about coupling and cohesion in my own code, and about the many times in our course when chaining methods was considered acceptable; after learning about the Law of Demeter, I can see that that code could be refactored.

Links to video and article below

https://hackernoon.com/object-oriented-tricks-2-law-of-demeter-4ecc9becad85

From the blog CS@Worcester – Will K Chan by celticcelery and used with permission of the author. All other rights reserved by the author.

Blog #2: UML Diagrams

Fairly early in this semester we worked with Unified Modeling Language (UML) and Entity Relationship (ER) diagrams. These tools are both extremely useful for representing databases. I often find that the way these diagrams show databases without coding is similar to how pseudocode shows how a program or method works without explicitly coding it. In this way, like pseudocode, these diagrams are perfect for putting together a project before actually programming it, like brainstorming. For my final project this semester in a separate class, I had to make a database for an ice hockey program using SQL queries. Before I even started writing queries for that project, I started putting it together in an ER diagram without even thinking about it. This made the actual programming for that project ten times easier.

For the purposes of this blog, I decided to dive deeper into studying UML and ER diagrams to try to learn even more about them. I found an awesome article about UML class diagrams at visual-paradigm.com that serves as a tutorial on how they work; I will leave the link to it at the bottom. It backs up my claim that the diagrams are used primarily for visualization of projects after, during, and most importantly before they are programmed. It also went over a ton of specifics and formatting rules for the diagrams that we covered in class this semester, like the layout of the tables or boxes and the different arrows that show relationships in the model database. It gets way more in depth than I plan on being in this blog post, like when to use a plus sign instead of a minus sign for variables in the tables (for public and private variables).

I definitely recommend checking the article out yourself, as it taught me a lot, not just now, but at the beginning of the semester itself. Similar to my experience with Docker, I found it super awesome to see that learning these diagrams in class would prove useful in real life. When I used a diagram for my final project in my other class (which was not required), I knew that these were very important tools in the real world. For that reason, I am especially happy that this was one of the topics we covered this semester.
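As a small illustration of that notation (my own sketch; the Player class is made up), here is how the + and - markers in a UML class box might map to code:

// UML class box:
// ---------------------
// |      Player       |
// |-------------------|
// | - id: number      |
// | + name: string    |
// |-------------------|
// | + getId(): number |
// ---------------------
class Player {
    #id;                          // private attribute, the minus sign in UML
    constructor(id, name) {
        this.#id = id;
        this.name = name;         // public attribute, the plus sign in UML
    }
    getId() { return this.#id; }  // public method, the plus sign in UML
}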

https://www.visual-paradigm.com/guide/uml-unified-modeling-language/uml-class-diagram-tutorial/

From the blog CS@Worcester – Tim Drevitch CS Blog by timdrevitch and used with permission of the author. All other rights reserved by the author.

Blog #1: Docker Platform

A large topic that we covered this semester has been Docker. For that reason, I decided to do a bit of outside research on what it does well and how to use it as best as possible. Most of the helpful information that I found for this topic came directly from Docker’s website in the Docker Overview section; I will put the link to that at the end of this post. The website claims that Docker is mainly used for enabling the user to separate their applications from their infrastructure so software can be delivered quickly. This definitely holds true with what we have learned in class and with what we have used Docker for during this semester. It is a free platform that thrives on helping users develop, ship, and run applications. Its biggest strength is that it shortens the time it takes to go from writing code to running it in production.

Docker containers are indubitably important because they allow many developers to edit and program locally while being able to share their work with others. Containers, to put it simply, are runnable images. Images are read-only templates that contain instructions for building containers. A lot of work with Docker is actually done in command prompt applications like Git Bash or Terminal; for this class, I used Terminal because I have a MacBook.

At first, when using Docker in this class, it was very unclear to me how it all functioned or what the point of using it was. After some time, I began to realize its importance, as it made connecting programs easy and quick for me. It has come to my attention that the Computer Science department at Worcester State is actually trying to create an entire database for the cafeteria workers with the help of undergraduate students. This was very exciting for me to hear for many reasons. I am always appreciative when I know that what I am learning in school has real-life implications and uses, and this motivated me to want to learn it even more. Not to mention, it might be a specific platform that I will need to use in my Capstone during my last semester here before graduating.

This blog post has been an extra excuse to take some time to research Docker even more than before and dive deeper into learning what it is for and why we use it, rather than simply how to use it. This is not to say that I would not have done more research on Docker outside of class in general (especially with the Capstone next semester), but more that this blog is in some way motivating me to get the research done and helping me understand the platform even more by allowing me to discuss my findings. I definitely recommend that anyone wondering about Docker read the information I studied before writing this blog post by using the link at the bottom. Coming from the Docker website itself, I believe it is a great reference for learning about the free platform and extremely helpful when trying to discover how it works and why we use it.
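To tie those pieces together, here is a small sketch of the image-to-container flow from the terminal (my own example; the image name my-app is made up):

# Build a read-only image named my-app from the Dockerfile in this directory
docker build -t my-app .

# Start a container, a runnable instance of that image
docker run -d --name my-app-container my-app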

https://docs.docker.com/get-started/overview/

From the blog CS@Worcester – Tim Drevitch CS Blog by timdrevitch and used with permission of the author. All other rights reserved by the author.