Category Archives: CS-343

K8s Crash Course

Blog Discovery

As a newcomer to the software industry, I found Kubernetes completely foreign. This article is my crash course on K8s, so please indulge me.

Applications: like developers who provide a predefined service to stakeholders. Docker is a well-known runtime environment for building and running applications in containers. As a sample application, let’s create a basic Go app that we’ll deploy to our minikube cluster.

Kubernetes: like the engineering manager. Kubernetes is an open-source container management and deployment platform. It orchestrates clusters of virtual machines and schedules containers to run on those machines based on their available compute resources and each container’s resource requirements.

What are Pods?

A Kubernetes pod is a group of one or more containers that are deployed together and share resources.

Nodes and clusters: like the workstations that provide resources to developers, with Kubernetes as the manager that allocates positions to employees. In Kubernetes, a node is a worker machine that can be virtual or physical. A node can host many pods, and Kubernetes automatically manages the scheduling of pods across the nodes in the cluster.

Deployment: like the team’s objectives and structure defined at the start of the year. A deployment provides declarative updates for Pods and ReplicaSets. In a deployment, we set a desired state, and the deployment controller gradually converts the current state into the desired state. A deployment file is like a declarative template for Pods and ReplicaSets; take a look at the example below.
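Here is a minimal sketch of such a deployment file; the image name and container port are assumptions for illustration, not the original post’s exact file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2                  # desired state: two pods of myapp
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0.0   # hypothetical image built from the Go app
          ports:
            - containerPort: 8080   # assumed port the app listens on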

The deployment above, named myapp in metadata.name, creates a ReplicaSet that brings up two pods of myapp.

What are ReplicaSets?

A ReplicaSet maintains a stable set of replica pods running at any given time, guaranteeing the availability of a specified number of identical pods. You rarely create ReplicaSets directly; deployments like the one above create and manage them for you.

Creating a K8s deployment: kubectl apply -f deploy.

Kubernetes Services: like the SPOC (single point of contact) team that routes relevant external communications to developers. Pods are ephemeral resources, so in Kubernetes a service is an abstraction that defines a logical set of pods and a policy for accessing them. Kubernetes services provide stable addresses through which the associated pods can be accessed. Create a Kubernetes service: kubectl apply -f service (a sketch of such a manifest appears at the end of this post).

The minikube tunnel command can be used to expose LoadBalancer services. minikube tunnel runs as a process on the host and creates a network route to the service CIDR of the cluster, using the cluster’s IP address as a gateway. You can then use this IP address to open the service in the browser.

The section on services is incomplete without an analogy from the life of developers. Imagine an external team uncertain or confused about the use of a feature developed by the development team: rather than contacting individual developers (pods), they go through a single point of contact that routes their questions to the right people, which is exactly the role a service plays.

With that, we come to the end of the article on K8s and the Adventures of the Freshest Day One.
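As referenced above, here is a minimal sketch of what such a service manifest might look like; the name, labels, and ports are assumptions carried over from the deployment sketch, not the original post’s file:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer      # exposed externally, e.g. via `minikube tunnel`
  selector:
    app: myapp            # routes traffic to pods carrying this label
  ports:
    - port: 80            # port the service listens on
      targetPort: 8080    # port the container serves on (assumed above)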

https://betterprogramming.pub/relating-with-docker-and-kubernetes-as-developers-an-analogue-5e662b1f817b

From the blog CS@Worcester – The Dive by gonzalezwsu22 and used with permission of the author. All other rights reserved by the author.

Microservices and Their Pros and Cons

Microservice Architecture is a strategy for designing software in which each part of the software exists separately from the others in what is called a microservice. These microservices run on their own dedicated compute resources, exist in their own runtime environments, and are designed to do one specific job. Microservices communicate with each other through Application Programming Interfaces (APIs) to form the whole of the product.

In their blog post “What Is Microservice Architecture? Microservices Explained”, Johnathan Johnson and Laura Schiff discuss in detail the difference between Microservice Architecture and Monolith Architecture, a system in which the entire application exists as a single unit, and how Microservice Architecture is implemented and used. Today I want to discuss the pros and cons of this architecture listed in the blog, and how I plan on using it in the future. I selected this post specifically because it goes into detail about the benefits and drawbacks of Microservice Architecture. If you would like to learn more about the specifics of the implementation, please visit the blog post itself.

There are many pros to using microservices. The main reason the architecture is implemented is that it promotes feature independence, which has many benefits, such as developer independence, isolation, resilience, and scalability. Personally, I think developer independence is the most beneficial, since developers become very competent with their microservice and end up experts at their specific job.

When a microservice wants to interact with another, it makes a call to that service’s API. Since microservices exist in their own runtime environments, they will continue to function even if that API call fails. This means that if one part of an application goes down, the rest of the program continues to run, albeit with a loss of functionality. In my opinion, this is one of the best reasons to use microservices, since it makes applications very resilient.

The final pro I would like to discuss is scalability. Since each service runs on its own dedicated resources, its usage can be scaled to meet demand. I think this is a great byproduct of microservices, but I do not believe it should be the main reason for using them.

The issues related to microservices are real, but manageable. Microservice Architecture adds complexity to a project, but this can be mitigated with training. Microservices also cost more, since they use more processing power and require experienced engineers, but I believe the benefits are worth the cost. Finally, microservices add a security risk, since each microservice can be attacked individually. Given an experienced team of engineers and good oversight, this risk can be lessened.

In conclusion, I believe Microservice Architecture to be a very good design choice and plan on using it in my future work. Reading the blog post was worth the few minutes it took, and I can say that I understand microservices much better now than I did before. The benefits far outweigh the risks, and I imagine many employers are already using this architecture.

From the blog CS@Worcester – Ryan Blog by rtrembley and used with permission of the author. All other rights reserved by the author.

What is Semantic Versioning and why is it important?

Semantic Versioning 2.0.0, SemVer for short, is a format widely used by developers to determine version numbers. When implementing Semantic Versioning, you will need to declare an API (Application Programming Interface), because the way the version number is incremented depends on the changes made against the previous API. A semantic version consists of three numbers separated by periods and is formatted like so: MAJOR.MINOR.PATCH. There are rules to follow when incrementing each of the numbers, and the SemVer documentation provides a very helpful summary of when to do so:

MAJOR version is incremented when API incompatible changes have been made.

MINOR version is incremented when backwards compatible functionality has been added.

PATCH version is incremented when backwards compatible bug fixes have been made.
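To make these rules concrete, here is a small hypothetical release history; the version numbers and changes are invented for illustration:

# Hypothetical release history illustrating the SemVer rules above:
releases:
  - 1.0.0   # first stable release; public API declared
  - 1.0.1   # PATCH: backwards compatible bug fix
  - 1.1.0   # MINOR: new backwards compatible feature (patch resets to 0)
  - 1.1.1   # PATCH: another bug fix
  - 2.0.0   # MAJOR: breaking API change (minor and patch reset to 0)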

Next, let’s take a look at each version number specifically. I came across a blog post on the Web Dev Simplified blog that does a good job of explaining each of the versions further.

MAJOR Version 

The major version number is only to be incremented when an API-breaking change has been introduced. The blog provides examples: a major change could be anything from an entire library rewrite to a rework of a single component, as long as it breaks the API. The version MUST be incremented if any backwards incompatible changes have been made. Minor and patch level changes may have been made as well, but those numbers must be reset to 0 when the major version is incremented. For example, when a major change is made to version 1.17.4, the new version number will be 2.0.0.

MINOR Version

The minor version number is incremented when backwards compatible changes have been made that don’t break the API. According to the SemVer documentation, it MUST be incremented if any public API functionality is marked as deprecated and MAY be incremented if substantial new functionality or improvements are introduced within the private code. When incrementing the minor version number, the patch level must be reset to 0 as well. For example, when a minor change is made to version 1.14.5, the new version number will be 1.15.0.

PATCH Version

The patch version number is incremented simply when backwards compatible bug fixes have been introduced. This number is updated most often, and the only change it should represent is the fixing of incorrect behavior. For example, a bug fix for version 0.4.3 would make the new version 0.4.4.

Hopefully this blog was helpful in understanding Semantic Versioning. Its wide usage is evidence that Semantic Versioning is a useful tool when creating new versions. For the most part, the process of deciding the version number is straightforward, but it is neat to see that there is an actual guideline that many developers follow.

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

Software Construction Log #3 – Understanding Docker Volumes, Mounts, and data management

Though most people may not consider it during the early stages of learning how to use Docker for deployment, data retention and persistence is one of the problems that needs to be considered when utilizing Docker containers. Previously, I wrote about virtualization using either Virtual Machines or Docker, though I mostly focused on how both work on an operating system and what they require in terms of system resources. What I did not mention, however, was how either virtual machines or Docker containers operate when it comes to data persistence. We know that the host systems we use retain the data we create between sessions; if I power the computer I am using now off and then on after submitting this blog post, the majority of the data created in the previous session will still be available in the next session. As we begin to work with virtualization, however, data persistence becomes a much greater concern, particularly when working with Docker. In the case of virtual machines, data persistence is not much of an issue.

Data persistence for Docker containers, however, works differently. It is stated in the Docker documentation that:

The data doesn’t persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.

https://docs.docker.com/storage/

When a container is stopped and then restarted, any files that were created and used in the container are deleted, and the container essentially runs from a “clean state”. However, there is a way to guarantee data persistence for a Docker container: binding mounts or volumes to the container. Though Docker offers other mount types, such as tmpfs mounts and named pipes, I will focus on bind mounts and volumes for the remainder of this post as ways of maintaining data persistence between a host machine and a Docker container.

While researching the differences between bind mounts and volumes, I came across two articles: a tutorial titled Docker Volumes – Tutorial on buildVirtual.net and an article titled Guide to Docker Volumes on Baeldung.com by Ashley Frieze. In the Baeldung article, Frieze showcases how the Docker file system works and, in turn, how data retention is affected in a Docker container, before explaining the differences between using volumes and bind mounts. Likewise, the buildVirtual tutorial outlines the same differences, as well as showing how to create, use, and delete volumes through Docker commands.

Although both bind mounts and volumes can be used for data persistence, it is important to know which method to utilize, depending on where we want the data to be stored on the host system and how other Docker or non-Docker processes may need to interact with that data.
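To illustrate the difference, here is a minimal sketch expressed as a docker-compose file; the service, image, and paths are assumptions for illustration, and the same bindings can also be declared with docker run flags:

services:
  db:
    image: postgres:14
    volumes:
      # Named volume: created and managed by Docker, survives container removal
      - db-data:/var/lib/postgresql/data
      # Bind mount: maps a host directory directly into the container
      - ./init-scripts:/docker-entrypoint-initdb.d

volumes:
  db-data: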

Direct link to the resources referenced in the post: https://www.baeldung.com/ops/docker-volumes and https://buildvirtual.net/amp/docker-volumes-tutorial/

Recommended materials/resources reviewed related to Docker mount and volumes:
1) https://4sysops.com/archives/introduction-to-docker-bind-mounts-and-volumes/
2) https://medium.com/@BeNitinAgarwal/docker-containers-filesystem-demystified-b6ed8112a04a
3) https://www.baeldung.com/ops/docker-container-filesystem
4) https://digitalvarys.com/docker-volume-vs-bind-mounts-vs-tmpfs-mount/
5) https://medium.com/devops-dudes/docker-volumes-and-bind-mounts-2fb4bd9df09d
6) https://docs.microsoft.com/en-us/visualstudio/docker/tutorials/use-bind-mounts
7) https://blog.logrocket.com/docker-volumes-vs-bind-mounts/

From the blog CS@Worcester – CompSci Log by sohoda and used with permission of the author. All other rights reserved by the author.

Docker

What is vs What isn’t

Docker is an infrastructure manager for Linux Containers. Docker is an excellent image distribution strategy for server templates created with configuration management systems such as Chef, Puppet, and SaltStack, but it is not a replacement for those configuration managers. Docker offers a single store of public and private disk images that may be used to run a variety of operating systems (Ubuntu, CentOS, Fedora, even Gentoo). Docker is not yet capable of connecting several servers or virtual machines.

When to use

Docker, like git or Java, is a foundational technology that you should start using in your everyday development and operations processes. You can use Docker as a version control system for the complete operating system of your project, and you can use it to distribute and collaborate on your app’s operating system with a team. When your project has to go through many stages of development, use Docker to move it along; try Drone or Shippable, both of which support Docker for CI/CD. One of the best things about Docker is that it allows you to execute your code on your laptop in the same environment as your server.

Docker comparison

When you run Java, you run the program on any system with a JVM; when you run Docker, you run the code on any machine with a Docker server, once you arrange your servers exactly the way you want them. Git tracks the changes in your code, while Docker tracks the changes in your system: with Docker, you can track changes throughout your entire system. GitHub is mainly thought of for code management, and Docker Hub for container build, management, and distribution. The analogies hold up even though each tool performs a different task.

Why docker

Since we’re already using Docker in class, why not take advantage of the extra time to learn more about it? I understood what Docker was but had never considered what it wasn’t, so this post helped me better comprehend what Docker isn’t. I’ve also discovered how Docker is similar to both Java and git, something I had never considered before. I had no idea there were alternatives to Docker, such as the Amazon AMI Marketplace, which is the closest thing you’ll find to the Docker Index; with Docker, though, you can run your images on any Linux server that runs Docker. Another is the Warden project, an LXC manager designed for Cloud Foundry that lacks Docker’s social capabilities, such as sharing images with others on the Docker Index. The most important thing I learned is when to use Docker.

https://www.ctl.io/developers/blog/post/what-is-docker-and-when-to-use-it/

From the blog cs@worcester – Dream to Reality by tamusandesh99 and used with permission of the author. All other rights reserved by the author.

Week-6

Hello, I’m writing this blog about an hour after finishing my exam. I was looking over some class activities for questions to review when I ran into something that has sometimes confused me: I kept reading the word “docker-compose” in class-work exercises. I got interested and looked it up again, and I found two links that helped me understand what Docker Compose does and how to use it.

Docker Compose: run multiple containers as a single service, and link many different Docker containers together. It is an essential tool for any application that needs multiple microservices, as it allows each service to live in its own easily managed container.

What Does Docker Compose Do?

Docker containers run applications in an isolated environment, and application deployments are packaged with Docker for that benefit. However, a deployment is often more complex than running a single container: usually, many containers come together to act as one service made up of many moving parts.

Running them all at deployment time is disordered, so Docker provides Docker Compose to clean it up: it runs multiple containers from a single definition. All the arrangements go in one YAML file, and all the containers start with one command.

Rather than having all services in one big container, Docker Compose allows you to split them up into individually manageable containers. This is better for building and deployment: each service can be managed in its own codebase, and you don’t need to start each container manually.

Using Docker Compose is a three-step process:

  • Build the component images using their Dockerfiles, or pull them from a registry.
  • Define all of the component services in a docker-compose.yml file.
  • Run all of them together using the docker-compose CLI.

Docker Compose still uses a Dockerfile to build and publish Docker images, but instead of running them directly, you use Docker Compose to manage the configuration of a multi-container deployment.

How Do You Use Docker Compose?

A Docker Compose configuration is written in a docker-compose.yml file. Unlike a Dockerfile, it doesn’t need to be placed at the root of a project; it can go anywhere, since it doesn’t depend on any other code. However, if it builds images locally, it will need to go in the project folder with the code being built.

For example, consider a Compose configuration file that runs a WordPress instance using the WordPress container from Docker Hub, which depends on a MySQL database that Compose also creates. Such a file has three parts, and a sketch of one follows the list below:

  • First, a version number, since the file format can change depending on the version.
  • Next, a list of services.
  • Lastly, the volumes to be stored.
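A minimal sketch of such a file might look like the following; the credentials and port mapping are made up for illustration, not taken from the linked posts:

version: "3.9"

services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: wordpress        # database WordPress will use
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: example-pass     # hypothetical credentials
      MYSQL_ROOT_PASSWORD: example-root
    volumes:
      - db_data:/var/lib/mysql         # persist the database between runs

  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    ports:
      - "8080:80"                      # browse at http://localhost:8080
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: example-pass
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data:

Running docker-compose up -d then starts both containers together with one command.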

From the blog Andrew Lam’s little blog by Andrew Lam and used with permission of the author. All other rights reserved by the author.
