
JavaScript/Node.js

This week on my CS Journey, I want to look more into JavaScript and how it is used with Docker. Although we did a few activities on JavaScript, I was still confused, so I decided to read and research more about it. JavaScript is a text-based programming language used on both the client side and the server side, mainly for the web. Most websites use JavaScript, and it runs in all major browsers, making JavaScript the most-deployed programming language in history. The name JavaScript is quite misleading because of its resemblance to the name Java; however, JavaScript is completely different from the Java programming language. Java and JavaScript are written, compiled, and executed differently, and each has dramatic differences in what it can do. JavaScript is mainly used for adding interactive behavior to web pages (for example, changing the color of a button when the mouse hovers over it), displaying animations, creating web and mobile apps, game development, and building web servers.

Now let’s get into Node.js, which is what is used with Docker. Over the past decade, Node.js has enabled JavaScript programming outside of web browsers, and the dramatic success of Node means that JavaScript is now also the most-used programming language among software developers. Node.js is an open-source, cross-platform runtime environment for developing server-side and networking applications. Node.js applications are written in JavaScript and can be run within the Node.js runtime on many operating systems. Some of the features I learned from the blog are that Node.js is asynchronous and event-driven: all APIs of the Node.js library are asynchronous, which essentially means a Node.js-based server never waits for an API to return data. It can also be very fast; because it is built on Google Chrome's V8 JavaScript engine, the Node.js library is very fast in code execution. Lastly, there is no buffering: Node.js applications never buffer any data; they simply output the data in chunks.

The diagram in the blog also helped me to understand the concept well.

I highly recommend that everyone read the assigned book on JavaScript because it is very helpful in terms of understanding the concepts, and the book covers a wide variety of interesting topics. Also, I suggest that everyone follow this tutorial; it is an example that shows how to get a Node.js application into a Docker container. The guide also helps you understand the basics of how a Node.js application is structured.

Here is the tutorial: https://www.digitalocean.com/community/tutorials/how-to-build-a-node-js-application-with-docker-quickstart
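As a rough sketch of the kind of setup the tutorial walks through (the base image version, port, and file names here are placeholders of my own, not taken from the tutorial itself), a minimal Dockerfile for a Node.js application might look like this:

```dockerfile
# Start from an official Node.js base image (version is an assumption)
FROM node:18-alpine

# Set the working directory inside the container
WORKDIR /usr/src/app

# Copy dependency manifests first so Docker can cache the install layer
COPY package*.json ./
RUN npm install

# Copy the rest of the application source
COPY . .

# Expose the port the app listens on (placeholder)
EXPOSE 8080

# Run the application (entry file name is a placeholder)
CMD ["node", "app.js"]
```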

Sources : https://learning.oreilly.com/library/view/javascript-the-definitive/9781491952016/ch16.html

https://www.tutorialspoint.com/nodejs/nodejs_introduction.htm

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

What Is UML and How It Differs from OOM

The difference between OOM and UML, by definition:

Object-oriented modeling (OOM) is the construction of a system as a collection of objects, each of which stores values in its instance variables. Unlike record-oriented models, object-oriented models deal solely with objects.

UML stands for “Unified Modeling Language.” The purpose of UML is to visually represent a system along with its main actors, roles, actions, artifacts, or classes in order to better understand, alter, maintain, or document information about the system. We can say that UML is the modern approach to documenting and modeling software.

Object-oriented modeling is the process of constructing objects, which contain the stored values of their instance variables. OOM unites application and database development, and this union is then transformed into a unified data model. This approach makes object identification and communication easy, and it supports data abstraction, inheritance, and encapsulation. The modeling techniques are implemented using languages that support OOP. OOM consists of three phases: analysis, design, and implementation.
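As a small, hypothetical Java sketch of the ideas above (abstraction, inheritance, and encapsulation) — the class names are invented for illustration:

```java
// Abstraction: the abstract class says what a shape can do,
// not how each concrete shape does it.
abstract class Shape {
    // Encapsulation: the name is private and reached only via a method.
    private final String name;

    Shape(String name) { this.name = name; }

    String getName() { return name; }

    abstract double area();
}

// Inheritance: Circle is a specialized kind of Shape.
class Circle extends Shape {
    private final double radius;

    Circle(double radius) {
        super("circle");
        this.radius = radius;
    }

    @Override
    double area() { return Math.PI * radius * radius; }
}

class OomDemo {
    public static void main(String[] args) {
        Shape s = new Circle(2.0);
        System.out.println(s.getName() + " area: " + s.area());
    }
}
```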

UML, short for Unified Modeling Language, is a standardized modeling language consisting of a set of diagrams. UML is used to help system developers clarify, demonstrate, build, and document the artifacts of a software system. It represents a set of practices that have proven successful in modeling large and complex systems, and it is a very important part of developing object-oriented software and of the software development process. UML primarily uses graphical notations to express the design of software projects. Using UML helps project teams communicate, explore potential designs, and validate the architectural design of the software. Below, we will look at why UML is worth using.

Why do we like UML? What are the benefits of UML?

As the value of software products increases, companies are looking for technologies to improve their software production processes, improve quality, reduce costs, and shorten time to market. These technologies include component technology, visual programming, and the application of patterns and frameworks. Companies are also looking for technology that can manage the complexity of systems as they grow in scope and scale, and they recognize the need to address recurring architectural issues such as physical distribution, concurrency, replication, security, load balancing, and fault tolerance.

In the end, UML provides users with an off-the-shelf, expressive visual modeling language so that they can develop and exchange meaningful models. It provides extensibility and specialization mechanisms for its core concepts, and it is independent of any specific programming language and development process. It provides a formal basis for understanding the modeling language, encourages the growth of the market for object-oriented tools, supports higher-level development concepts such as collaborations, frameworks, patterns, and components, and integrates best practices.

Sources:

https://www.techopedia.com/definition/28584/object-oriented-modeling-oom

Introduction to Object-Orientation and the UML

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.

Why use Docker?

This week on my CS Journey, I want to talk about Docker. I know we went over several different activities in class; however, I was still a little confused, so I decided to look into outside sources in more detail to understand the concepts and terms well. Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. A container is not so different from a virtual machine, but instead of containing a full operating system, a Docker container has just the minimum set of operating-system software needed for the application to run, relying on the host's Linux kernel itself.

The first blog talked about the importance of Docker and how to set up a Dockerfile in the root directory. There was a 12-minute YouTube video that explained the concept very well, and I learned a lot from it. The blog also talked about creating a docker-compose file, which is a tool that allows you to deploy and manage multiple containers at the same time. I learned the concepts that go inside the file, including which Dockerfile to build a particular image from, which ports you want to expose, how to link containers, which ports you want to bind to the host machine, and other key commands. This blog was very helpful for assignment 4, building the Nginx and MongoDB servers.
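A docker-compose file of the kind described might look roughly like the sketch below; the service names, images, and port numbers are invented for illustration, not taken from the blog:

```yaml
version: "3"
services:
  web:
    # Build an image from the Dockerfile in the current directory
    build: .
    # Bind container port 80 to port 8080 on the host
    ports:
      - "8080:80"
    # Start the database container before this one
    depends_on:
      - db
  db:
    image: mongo
    # Make the default MongoDB port reachable by linked containers
    expose:
      - "27017"
```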

The next blog talked about the benefits of Docker. One of the major benefits of containers is portability: containers can run on top of virtual machines or in the cloud. This has made it easy for the primary use cases of containers to center on software development. People can now write an application, place it in a container, and then move the application across various environments, since it is encapsulated inside the container, which I think is very helpful. It was quite interesting to learn that Docker offers privately hosted repositories of containers for about $1 per container. Many tech companies are looking to get in on the action of using Docker, especially cloud hosting companies.

It was interesting to learn about the concepts of Docker, how to use the commands properly, and how major companies are using Docker in their workplaces. Overall, the blogs and the video helped me understand the tools behind Docker, especially running your image from localhost on a port, along with other interesting concepts. I highly recommend everyone check out the video; it is well explained.

Source1: https://medium.com/@reyhanhamidi/why-bother-use-docker-61eadf968d87 

Source2: https://www.networkworld.com/article/2361465/docker-101-what-it-is-and-why-it-s-important.html

Video: https://www.youtube.com/watch?v=YFl2mCHdv24

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

Docker Review

 

Today I want to explain some of what I learned in class, along with some outside sources.
What is Docker?
Docker is a packaging of Linux containers that provides a simple and easy-to-use container interface, and it is by far the most popular Linux container solution. It is a kind of virtual machine, but it also differs from a virtual machine.
Docker packages the application and its dependencies in a single file. Running this file generates a virtual container. The program runs in this virtual container as if it were running on a real physical machine. With Docker, you don’t have to worry about the environment.
Generally speaking, Docker’s interface is quite simple. Users can easily create and use containers and put their own applications into them. Containers can also be versioned, copied, shared, and modified just like normal code.
The main uses of Docker currently fall into three categories. First, providing a disposable environment: examples include testing other people’s software locally and providing a unit-testing and build environment for continuous integration. Second, providing flexible cloud services: because Docker containers can be opened and closed on demand, they are ideal for dynamically scaling up and down. Third, establishing a microservice architecture: with multiple containers, one machine can run multiple services, so a microservice architecture can be simulated on a single machine.
Docker mainly involves three basic concepts, namely image, container, and repository. Once you understand these three concepts, you can understand the entire life cycle of Docker. The following is a brief summary of the three points we took a short look at in class.
A Docker image is a special file system that provides not only the programs, libraries, resources, and configuration files required by the container at runtime, but also some configuration parameters for the runtime.
The essence of a container is a process, but unlike processes that execute directly on the host, container processes run in their own separate namespaces. Containers can be created, started, stopped, deleted, paused, and so on. The relationship between an image and a container is analogous to classes and instances in object-oriented programming.
Once an image is built, it is easy to run on the current host, but if you need to use the image on other servers, you need a centralized service to store and distribute it, and Docker Registry is one such service. A Docker Registry can contain multiple repositories; each repository can contain multiple tags; and each tag corresponds to an image. Here a tag can be understood as the version number of the image.
On the whole, Docker is a concept that gets more and more interesting and is worth understanding in depth.
Sources:
https://medium.com/codingthesmartway-com-blog/docker-beginners-guide-part-1-images-containers-6f3507fffc98
https://docs.docker.com/get-started/part2/

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.

Adapter Design Pattern

 

    This week on my blog, I want to talk about a design pattern. The one I want to focus on is the Adapter design pattern. The Adapter design pattern converts the interface of a class into another interface that clients expect; the adapter lets classes work together that otherwise couldn't because of incompatible interfaces. It is a structural design pattern that allows two unrelated interfaces to work together. In other words, the adapter pattern makes two incompatible interfaces compatible without changing their existing code. Now let's take a look at an example that shows how the pattern works. The blog I used provided a diagram that I think is helpful for understanding the pattern. In the diagram, socket wrenches provide an example of an adapter: a socket attaches to a ratchet, provided that the size of the drive is the same. Here the ratchet is 1/2" and the socket is 1/4", so without using an adapter you cannot connect the ratchet and socket together.
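The socket-wrench analogy can be sketched in Java roughly as follows; the class and method names are my own invention for illustration, not from the blog:

```java
// Target interface: what the 1/2" ratchet expects to drive.
interface HalfInchDrive {
    String turn();
}

// Adaptee: a 1/4" socket with an incompatible interface.
class QuarterInchSocket {
    String spin() {
        return "1/4\" socket turning";
    }
}

// Adapter: presents the 1/2" interface while delegating
// to the 1/4" socket internally.
class HalfToQuarterAdapter implements HalfInchDrive {
    private final QuarterInchSocket socket;

    HalfToQuarterAdapter(QuarterInchSocket socket) {
        this.socket = socket;
    }

    @Override
    public String turn() {
        // Translate the call the ratchet makes into the call
        // the socket understands.
        return socket.spin();
    }
}

class AdapterDemo {
    public static void main(String[] args) {
        HalfInchDrive drive = new HalfToQuarterAdapter(new QuarterInchSocket());
        System.out.println(drive.turn());
    }
}
```

Note that neither `HalfInchDrive` nor `QuarterInchSocket` is modified; the adapter alone bridges the two.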



    Now let us take a look at the benefits and liabilities of using the Adapter design pattern. The first benefit is that you can let any two unrelated classes run together. It also improves the reuse of classes, increases the transparency of classes, and gives good flexibility. As for liabilities, too much use of adapters will make the system very messy and difficult to grasp as a whole. For example, if you see interface A being called but internally it is adapted to interface B, and there are too many such adaptations in a system, it will be a disaster. So, if it is not necessary, you can simply refactor the system instead of using an adapter. Also, since object-oriented languages such as Java allow a class to inherit from at most one class, a class adapter can adapt at most one adaptee class, and the target must be an interface or an abstract class.

    A real-life example that I found is a card reader, which acts as an adapter between a memory card and a laptop. You insert the memory card into the card reader and the card reader into the laptop, and then the memory card can be read from the laptop.

    This source is helpful for understanding the design pattern even more. I would suggest you take a look at the website because it goes into further detail about the Adapter design pattern, including various examples with diagrams and Java code.

 

Source: https://sourcemaking.com/design_patterns/adapter#:~:text=Intent,class%20with%20a%20new%20interface

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.

Facade Design Pattern

This week on my CS Journey, I want to talk about design patterns, especially the Facade design pattern. In class, we also learned about other patterns, which are helpful; however, I wanted to focus on Facade. To start off, Facade is a structural design pattern that adds an interface to an existing system to hide its complexities. In other words, it provides a simple interface that can be used to manipulate more complex logic behind the scenes, hidden from the user. According to the Gang of Four, the Facade pattern intends to “provide a unified interface to a set of interfaces in a subsystem.” The blog I used provided a diagram that I think is helpful for understanding the pattern: in it, we can see that the Facade acts as an interface to the complex subsystems, which are hidden from the clients.



Now let’s take a look at a real-life example. A good one that I found: when a computer starts up, it involves the work of the CPU, memory, hard drive, and other subsystems. To make this easy for users, we can add a Facade that wraps the complexity of the task and provides one simple interface instead. This fits the Facade design pattern because it hides the complexities of the system and provides an easy interface to users.
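The computer-startup example can be sketched in Java roughly like this; the class names and return values are invented for illustration:

```java
// Subsystem classes the client should not have to know about.
class Cpu {
    String start() { return "cpu"; }
}

class Memory {
    String load() { return "memory"; }
}

class HardDrive {
    String read() { return "disk"; }
}

// Facade: one simple call that coordinates all the subsystems.
class ComputerFacade {
    private final Cpu cpu = new Cpu();
    private final Memory memory = new Memory();
    private final HardDrive hardDrive = new HardDrive();

    String powerOn() {
        // The client never touches the subsystems directly.
        return cpu.start() + "+" + memory.load() + "+" + hardDrive.read();
    }
}

class FacadeDemo {
    public static void main(String[] args) {
        // The user sees one method, not three subsystems.
        System.out.println(new ComputerFacade().powerOn());
    }
}
```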

 

One of the liabilities is that a Facade might provide limited functionality compared to working with the subsystem directly, although it does not prevent applications from using the subsystem classes directly if they need to. It can also increase the number of wrapper classes, and a Facade can end up performing too many tasks. One of the benefits is that it reduces compilation dependencies, which is vital in large software systems. Overall, the Facade design pattern is helpful when dealing with a complex system because it enables us to use the complex system much more easily.

 

There are a lot of examples, diagrams, Java code samples, readings, and a link to a GitHub resource provided in the blog I used. I would suggest you take a look at the website because it goes into further detail about the Facade design pattern; the reference contains a lot of information broken down into easy, user-friendly steps.

 

Source : https://www.baeldung.com/java-facade-pattern

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

S.O.L.I.D Principles

This week on my CS Journey, I want to talk about the SOLID design principles. I am sure that you have heard the term many times; however, let us look at the principles in detail. The reason I picked this topic is that, as computer science students, we write many programs, and having strong principles will enable us to write effective object-oriented code.

To start off, SOLID is an acronym where each letter represents a software design principle.

S – for Single Responsibility Principle

O – for Open/Closed Principle

L – for Liskov Substitution Principle

I – for Interface Segregation Principle

D – for Dependency Inversion Principle

 

Each principle overlaps with the others here and there. The Single Responsibility Principle states that in a well-designed application, each class should have only one responsibility; essentially, a class should have only one job. I think this is a very good concept because when you are working with complex programs and a class has more than one responsibility, it becomes a nightmare. The Open/Closed Principle states that classes should be open for extension but closed for modification. For example, I learned that if you want to add a new feature to your program, you should do it by extending existing code rather than modifying it, which minimizes the risk of failures. Next is the Liskov Substitution Principle, which explains that an object of a superclass should be replaceable with objects of its subclasses without causing problems in the application. This means that a child class should never change the characteristics of its parent class.
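The Open/Closed Principle in particular becomes clearer with a small sketch. In this hypothetical Java example (all names invented for illustration), a new discount rule is added by writing a new class, while the existing `Checkout` class is never modified:

```java
// Abstraction that Checkout is "closed" against: new pricing rules
// are added by implementing this interface, not by editing Checkout.
interface Discount {
    double apply(double price);
}

class NoDiscount implements Discount {
    public double apply(double price) { return price; }
}

// Extension: a brand-new rule, added without touching existing code.
class HalfOff implements Discount {
    public double apply(double price) { return price * 0.5; }
}

// This class never changes when a new Discount is introduced.
class Checkout {
    double total(double price, Discount discount) {
        return discount.apply(price);
    }
}

class OcpDemo {
    public static void main(String[] args) {
        Checkout checkout = new Checkout();
        System.out.println(checkout.total(100.0, new NoDiscount()));
        System.out.println(checkout.total(100.0, new HalfOff()));
    }
}
```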

 

The Interface Segregation Principle is the fourth SOLID design principle; it states that no client should be forced to depend on methods it does not use. In other words, interfaces shouldn't include too many functionalities, since bloated interfaces are difficult to maintain and change over time, so it is best to avoid them. Lastly, the Dependency Inversion Principle states that high-level modules or classes should not depend on low-level modules or classes; both should depend on abstractions, because otherwise a change in one concrete class can break another class, which is very risky.
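The Dependency Inversion Principle can be sketched the same way; in this hypothetical Java example (names invented for illustration), the high-level `Notifier` depends only on an abstraction, so either low-level sender can be swapped in without changing it:

```java
// Abstraction that both the high-level and low-level code depend on.
interface MessageSender {
    String send(String text);
}

// Low-level detail #1.
class EmailSender implements MessageSender {
    public String send(String text) { return "email:" + text; }
}

// Low-level detail #2 can replace #1 without touching Notifier.
class SmsSender implements MessageSender {
    public String send(String text) { return "sms:" + text; }
}

// High-level policy: knows only the MessageSender abstraction.
class Notifier {
    private final MessageSender sender;

    Notifier(MessageSender sender) { this.sender = sender; }

    String notifyUser(String text) { return sender.send(text); }
}

class DipDemo {
    public static void main(String[] args) {
        System.out.println(new Notifier(new EmailSender()).notifyUser("hi"));
        System.out.println(new Notifier(new SmsSender()).notifyUser("hi"));
    }
}
```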

 

Overall, the blog I read talked about each principle in detail using various examples, such as UML diagrams and Java programs, that helped me understand each concept. I learned a lot from the blog, and I think these concepts are all key to writing good, solid code. In the future, I will be using these principles to write effective object-oriented code.

 

 

Here is the blog I Used:

https://raygun.com/blog/solid-design-principles/

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.

Nicholas Dudo's Computer Science Blog

 Hello, 

    My name is Nicholas Dudo, and I am currently enrolled at Worcester State as a senior in the computer science program. I am on track to graduate in the spring of 2021 with a concentration in big data analytics. In my free time, I enjoy wrenching on cars and my own truck, which is my non-computer-related hobby. I also like to participate in as many sports as possible, for example, playing pick-up games of soccer, basketball, and even football on occasion. Other than that, you can catch me with my friends, giving out a helping hand wherever it is needed. I am a fast learner and very hands-on. When it comes to my computer-related hobbies, I like to mess around with creating programs or tweaking them in any way possible. I also love to learn new techniques, whether that involves cloud computing or searching a database for specific material. I am very aware that computer science is not for the faint of heart; it takes practice and an extreme amount of devotion to succeed in. With that being said, I really do enjoy learning the tools of the trade and figuring out what needs to happen to succeed in the course. At one point in my life, I want to look back, share the story of where I have come from, and say that it is possible if you put your mind to it. I am excited to see where this career takes me!

From the blog Nicholas Dudo by and used with permission of the author. All other rights reserved by the author.

CS 343 First Post

Welcome to Haoru’s Blog

Hello Everyone! This is my first post for 343.

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.

CS 343 First Post

 Hello everyone! This is my first post for 343. I’m excited and looking forward to this semester. Let’s stay focused!

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.