Category Archives: Week 10

The Basic Docker Compose File Structure

In last week’s blog, I covered Docker Compose files and their significance. For today’s blog, I would like to go in depth on docker-compose files to explain how they are structured. To give a quick review: Docker Compose is a tool that allows developers to create and run multi-container Docker applications. Once again, I am referring to The Definitive Guide to Docker Compose, posted by Gabriel Tanner. He does a good job of breaking down the structure of a docker-compose.yml file.

Docker Compose File Structure

By using a compose file, you can list the containers you want and all of their specifications, just as you would in a docker run command. As explained in last week’s blog, almost every docker-compose file should include the following:

  • The version of the compose file
  • The services which will be built
  • All used volumes
  • The networks which connect the different services

Compose files are structured by indentation. Consider the idea of levels of abstraction.

At the top level are the version, services, and volumes tags. The next level down is where the containers are listed, followed by parameter tags such as image, volumes, and ports. Finally, at the lowest level are the respective rules for each parameter.

Here is a simple docker-compose.yml file:
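A minimal sketch along those lines (the service name, image, port, and volume name here are illustrative, chosen to match the examples discussed below):

```yaml
version: "3.8"
services:
  web1:
    image: nginx:mainline
    ports:
      - "10000:80"
    volumes:
      - html_files:/usr/share/nginx/html
volumes:
  html_files:
```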

Let’s take a look at each tag individually.

Version:

The version tag is used to specify the version of our compose file, based on the Docker engine we are using. For example, if we are using Docker engine release 19.03 or later:
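For Docker Engine 19.03 and later, compose file format 3.8 applies, so the tag might read:

```yaml
version: "3.8"
```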

Services:

The services tag acts as a parent tag, as all of the containers are listed underneath it.

Base Image/Build:

The image tag is where you define the base image of each container. You can define a build using preexisting images available on Docker Hub. In this case, we are defining a web1 container using the nginx:mainline image.
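A sketch of that case:

```yaml
services:
  web1:
    image: nginx:mainline
```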

You can even define the base image by pointing to a custom Dockerfile:
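For example (the Dockerfile path here is hypothetical):

```yaml
services:
  web1:
    build: ./nginx   # directory containing a custom Dockerfile
```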

https://gabrieltanner.org/blog/docker-compose

Volumes:

The volumes tag allows you to designate a directory where the persisting data of a container is managed. Here is an example of a normal volume being specified.
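A sketch of a named volume (the names are illustrative):

```yaml
services:
  web1:
    volumes:
      - html_files:/usr/share/nginx/html   # named volume
volumes:
  html_files:
```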


Another way of defining a volume is through path mapping, linking a directory on the host machine to a container destination separated by a : operator.
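For example (the host path here is assumed):

```yaml
services:
  web1:
    volumes:
      - ./html:/usr/share/nginx/html   # host path : container path
```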

Ports:

Specifying ports allows you to map a port on the host machine to a port inside the running container. In this case, we are defining port 10000 as the host port and exposing the container’s port 80 through it.
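In compose syntax:

```yaml
services:
  web1:
    ports:
      - "10000:80"   # host port 10000 -> container port 80
```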

Those are all of the main components that you need to structure a Docker Compose file. In this blog post, I simply covered the basic tags and structure. Compose files allow you to manage multiple containers and specify properties in finer detail. There are tags that I haven’t covered, such as depends_on, command, and environment, that Gabriel explains really well.

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

CS-343 Post #5

After doing more work with activities 13 and 15 and working on homework 4, I wanted to read more about how to use HTTP methods with REST API calls. I was having some trouble understanding how to implement them in the homework at first, and I wanted a better grasp of how they work before setting up my endpoints and paths.

I have a decent understanding of each of the methods and how they correlate with the paths and endpoints we have been working with so far. The main source I used for reading more about the methods was the REST API tutorial on HTTP methods. It goes over each of the methods and gives a summary table at the end. It also gives a brief glossary of some terms used and references to other sources. It gave good explanations of each method and was divided into evenly sized sections.

The five main methods that we are familiar with are GET, PUT, POST, DELETE, and PATCH. GET is used to find and return an item or object based on the parameters we choose, like a name or ID. POST creates a new object and adds it to the data. PUT updates and replaces the data for an object. DELETE removes an item or object. PATCH is used for minor, partial updates. We have not used PATCH in our activities yet, but it did show up as a method in an example given in activity 12 of how HTTP methods work with URLs.

Going more into the methods, GET methods are considered safe because they do not make any changes to the data, and they are also idempotent because multiple identical requests are expected to produce the same result. The other methods all change the data in some way, so they are not considered safe. DELETE and PUT are also idempotent: deleting the same object twice leaves it deleted, and sending the same PUT twice leaves the same replacement in place. PUT and PATCH seem the same because both are used to make changes to existing data, but they differ in how much they update: PATCH is for smaller, partial updates, while PUT is for bigger updates and usually involves replacing existing data entirely.
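The summary table from the reading can be restated as a small lookup (a hypothetical structure of my own, just to recap the properties above):

```javascript
// Safety and idempotency of the five methods, per the reading.
// "safe" = makes no changes to data; "idempotent" = repeating the
// same request produces the same result.
const methods = {
  GET:    { safe: true,  idempotent: true  },
  POST:   { safe: false, idempotent: false },
  PUT:    { safe: false, idempotent: true  },
  DELETE: { safe: false, idempotent: true  },
  PATCH:  { safe: false, idempotent: false },
};

console.log(methods.GET.safe); // true
```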

After this reading and doing more work with REST APIs, I have a better idea of how to implement them in code and can continue with the activities and homework.

https://restfulapi.net/http-methods/

From the blog Jeffery Neal's Blog by jneal44 and used with permission of the author. All other rights reserved by the author.

Concurrency

https://www.vogella.com/tutorials/JavaConcurrency/article.html

This article discusses the use of concurrency in Java. As defined in the article, concurrency is “the ability to run several programs or parts of a program in parallel”. Concurrency is meant to cut down on time and make a program or programs more effective and easier to use together. The article describes how Java uses several threads to process in parallel, or behave asynchronously, which ultimately makes the application run faster and smoother. Concurrency is all about the performance gain in the application; this gain can be estimated using Amdahl’s Law, which gives the maximum speedup as 1 / (F + (1 − F) / N), where F is the fraction of the program that must run serially and N is the number of processors.

According to the article, a Java program runs its processes in one thread by default, and supports multiple threads through the Thread class. With multiple threads, you use the synchronized keyword to define which methods or blocks of code should be executed by only one thread at a time. The synchronized keyword provides specific sections of code with a “lock”: any code protected by this lock will be executed by one thread at a time instead of several at once. The memory of each thread communicates with the main memory of the application through the Java memory model, which also defines when a thread’s memory is refreshed from main memory.
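To illustrate the idea, here is a minimal sketch of the synchronized keyword (SafeCounter is a class I invented for this example, not code from the article):

```java
// Two threads increment a shared counter; synchronized ensures
// only one thread at a time holds the object's lock while
// executing increment(), so no updates are lost.
class SafeCounter {
    private int count = 0;

    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}

class ConcurrencyDemo {
    public static void main(String[] args) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        // Each thread adds 1000; without synchronized, the final
        // total could be corrupted by lost updates.
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) counter.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // prints 2000
    }
}
```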

I chose this article as it goes into depth about the subject of concurrency and helps students like myself understand what exactly concurrency is and what it is used for. Concurrency is a complex subject: it involves the Thread class, which isn’t commonly used by beginners in Java, and it involves new keywords not previously learned. The article goes into detail about threads, what exactly they do, and why they are important, and it also explains the exact purpose of concurrency, which is program efficiency, i.e. speed, correctness, etc.

I actually learned quite a bit from this article. I already knew the general idea of concurrency, but as for the more in-depth information provided, like threads, synchronized, and volatile, I never knew exactly what each part was or what it entailed. The article is organized very well and provides the reader with the information necessary to learn about and work with concurrency in their own code. Overall, the article was very helpful, and I would recommend it to other students as it explains the subject very well.

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.

MongoDB

What is it?

MongoDB is an open-source database with a document-oriented data format and a non-structured query language. MongoDB Atlas is a globally accessible cloud database solution for modern applications. Its best-in-class automation and well-established procedures enable completely managed MongoDB deployments on AWS, Google Cloud, and Azure, and it guarantees availability, scalability, and adherence to the strictest data security and privacy standards. MongoDB Cloud is a unified data platform that comprises a global cloud database, search, data lake, mobile, and application services.

Is MongoDB a SQL?

No, it is not. It is one of the most powerful NoSQL databases and systems available today. Because it is a NoSQL tool, it does not employ the traditional rows and columns associated with relational database administration. Its architecture is collection-based and document-based. The basic unit of data in this database is a document made up of key-value pairs, and documents may have varied fields and formats. The database stores documents in the BSON format, which is a binary version of JSON.
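For illustration, a single document in a collection might look like this (the field names and values here are invented):

```json
{
  "_id": "507f1f77bcf86cd799439011",
  "name": "Alice",
  "courses": ["CS-343", "CS-348"],
  "graduated": false
}
```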

Originated/Creation

Dwight Merriman, Eliot Horowitz, and Kevin Ryan founded MongoDB in 2007. They decided to create a database to solve the difficulties of scalability and agility that they were seeing at DoubleClick. That’s when MongoDB came into existence. In 2009, the company made the move to open-source development, with commercial support and extra services offered. In 2013, the company’s name was changed from 10gen to MongoDB Inc. It went public on NASDAQ as MDB on October 20, 2017, with an initial public offering (IPO) price of $24 per share. On October 30, 2019, it announced a collaboration with Alibaba Cloud to deliver a MongoDB-as-a-Service solution to its clients; Alibaba’s managed services are available from any of the company’s data centers across the world.

Advantages of using MongoDB

MongoDB may be operated across globally scattered data centers and cloud regions, offering unprecedented levels of availability and scalability. It also enables fast, iterative development: your company’s project delivery will no longer be hampered by changing business needs, because developers can quickly design and update applications thanks to a flexible data format with a changeable schema and powerful GUI and command-line tools. MongoDB’s versatile data model stores data in JSON-like documents, making data storage and combining simple, and the document model maps to the objects in your application code, making data manipulation simple as well.

Why did I pick MongoDB?

I chose to study more about MongoDB after seeing a lot of MongoDB terminology in our class activities and homework assignment. I knew it was a NoSQL database that used the JSON format, but I didn’t know much else. I learned about MongoDB’s history, including how it was formed, when it was created, who built it, and why it was created. I also studied how MongoDB differs from other databases like MySQL, Cassandra, and RDBMS. The advantages of using MongoDB impressed me the most.

Link: https://intellipaat.com/blog/what-is-mongodb/#no1

From the blog cs@worcester – Dream to Reality by tamusandesh99 and used with permission of the author. All other rights reserved by the author.

Back-end API

These weeks, I’ve been learning about APIs and back ends. I also had the opportunity to practice with an API in my current homework. However, I was very confused about the relationship between the API and the back end: I didn’t understand how the API is related to the back end, or what each is used for.

To answer those questions, I searched for some information about APIs and back ends. Back-end API development introduction, written by Cesare Ferrari, is a resource that I have found useful. The website has clear definitions of the back end and the API, and their relationship is described in detail, which helped me get a better understanding of the two new terms. From the website, I know that the back end is a service that sends data to the front end, which interacts with the end users. On the other hand, an API, or Application Programming Interface, is a set of definitions and protocols for building and integrating application software. An API can also be considered a back-end component, typically used by front-end applications to communicate with back-end applications. In other words, the API is used to outline all the requirements or functions that will interact with the end user, and the back end relies on the API to create endpoints that fulfill all the requirements designed in the API. There are different types of APIs, but REST, which stands for Representational State Transfer, is one of the most popular. A REST API communicates via HTTP requests to perform four basic functions known as CRUD: creating data (POST), reading data (GET), updating data (PUT), and deleting data (DELETE).
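The idea of a back end implementing the endpoints an API specifies can be sketched like this (all names here are invented; a real back end would use a framework like Express and a real database rather than this in-memory Map):

```javascript
// A tiny in-memory "users" resource supporting the four CRUD
// functions, routed by HTTP method and path.
const users = new Map();

function handle(method, path, body) {
  const match = path.match(/^\/users\/(\w+)$/);
  const id = match ? match[1] : null;
  if (method === "POST" && path === "/users") {   // create
    users.set(body.id, body);
    return { status: 201 };
  }
  if (method === "GET" && id) {                   // read
    return users.has(id)
      ? { status: 200, data: users.get(id) }
      : { status: 404 };
  }
  if (method === "PUT" && id) {                   // update/replace
    users.set(id, body);
    return { status: 200 };
  }
  if (method === "DELETE" && id) {                // delete
    users.delete(id);
    return { status: 204 };
  }
  return { status: 405 };
}

console.log(handle("POST", "/users", { id: "1", name: "Ada" }).status); // 201
console.log(handle("GET", "/users/1").data.name); // Ada
```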

Moreover, the website also provides general information about Node.js and Express to explain how to create and use back-end applications. Node.js is used to execute JavaScript on the back end, outside the browser. Express is a Node.js library used to create applications that receive and respond to HTTP requests.

All in all, I think the website is a good resource because it’s short, concise, and provides the essential information I need. It gives me a general idea of what the back-end and API are, and how the back-end and API relate to each other. By understanding the two new definitions, I was also able to understand what I was supposed to do in each one. Based on what I read from the site and what I did with my API homework, I can envision the API as an interface class and the back-end as my concrete class that will implement all abstract methods from the interface class.

From the blog CS@Worcester – T's CSblog by tyahhhh and used with permission of the author. All other rights reserved by the author.

More on REST APIs

REST APIs have become increasingly popular over the last few years, and for good reason. REST has less of a learning curve than other API models, has a smaller footprint, and parsing JSON is less intensive than traditional XML parsing. There are a few key standards to REST, including the use of certain requests like GET, POST, PUT, and DELETE.

When a call is made using a REST API, there are a few things that go on within the call itself. First is the endpoint, a unique URL that is used to represent an object or group of objects. Every API request has its own endpoint. In addition, there is the method used for the request; these include those I listed above, like GET, POST, and PUT. A header contains the metadata associated with the REST API request: it indicates the format of the request and the response, and provides information about the status of the request. Lastly, the request may also contain data, also referred to as the body, which is usually used with the POST and PUT commands and contains the information that will be updated or created.

Another important part of REST APIs are parameters. When someone is sending a REST API request, they can use parameters to narrow or further specify their search request. These are valuable tools, since it allows you to filter the data being received in a response. Some parameter types include path, header, cookie, and the most common, query. Query parameters are located at the end of a URL path and can either be required or optional. This can be useful if, for example, you are using a base GET command to get all of the objects in the database, and you can have an optional parameter to specify which of all those objects you want, using something like ID or name, depending on your implementation.
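A quick sketch of a query parameter narrowing a GET request (the endpoint URL and parameter name here are made up):

```javascript
// Start from a base endpoint and append ?name=... to filter
// the objects returned by the request.
const url = new URL("https://api.example.com/v1/items");
url.searchParams.set("name", "widget");
console.log(url.toString());
// https://api.example.com/v1/items?name=widget
```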

Source

I chose the above source because it gave good additional information on REST APIs compared to other websites I visited. This helped me further understand how REST APIs differ from other web APIs, and what helps make REST different and usually better. REST APIs definitely have many uses, and their standard, easy-to-understand methods like GET, POST, and PUT make them easy to work with.

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.

Docker

Resource Link: https://www.linode.com/docs/guides/when-and-why-to-use-docker/

For this week I decided to research and write about what exactly Docker is and what it’s used for in commercial applications and projects. With how much we use Docker in class, I was curious as to what its other applications were and how it is used in other settings. I was curious because I had never heard of or used Docker before this class, and it seems like such a useful and important piece of software. Even with just what we’ve covered so far in class, it appears to be very versatile and suitable for a variety of different applications.

This article summarizes what docker is, when and how it originated, the benefits of docker, and when and when not to use docker. I chose this article because it provides all the necessary information in a clean and concise way. While many other resources about docker were bloated and included lots of unnecessary information, this one has just the right amount of information to inform a user about what docker is, how to learn more about it, and how to get started using it. This makes it a great resource for someone new to docker, who maybe just heard about it and doesn’t know exactly what it is.

First, the article provides details about what Docker is and when it was released. Docker was released in 2012 and has since become an important technology in web development. This was surprising to me because I figured I would have heard about such a big and important web development technology in at least one of my other classes. I figured that since I hadn’t heard of it, it must be a newer technology, but it’s already 9 years old. However, it does make sense that it would be fast-growing software because of its versatility and ease of use. It makes the process of setting up and configuring a web server much simpler than it would be otherwise, and it makes monitoring those web servers easy through the Docker application.

Next, the article delves into some of the benefits of Docker: reproducibility, isolation, security, Docker Hub, environment management, and continuous integration. Docker is reproducible in a way similar to a Java application: whereas a Java application runs on a Java virtual machine that is the same across all operating systems, a Docker container runs in Docker and is the same across all operating systems. This is very useful because it means that the container doesn’t have to be changed for different operating systems, so setup time is reduced. Isolation is the property of a Docker container that lets it have installations independent of the operating system it runs on, and independent of other Docker containers. This helps avoid conflicts between containers that may require different dependencies or installations. Next, with the different parts of an application separated into different containers, security is increased, because if a single container is compromised, the other containers won’t be affected. Different containers also mean that different versions of a project can be maintained independently, for example for testing. Docker Hub makes it easy to find and use certain images, which also makes setting up a container easier. Finally, Docker can be used as part of tools like Jenkins, meaning that when an update is made, it can automatically be saved and pushed to Docker Hub as an image and put right into deployment. This cuts down on development time and increases ease of use by reducing the amount of work that must be done manually.

From the blog CS@Worcester – Alex's Blog by anelson42 and used with permission of the author. All other rights reserved by the author.

Draw Your Own Map

The “Draw Your Own Map” pattern is chiefly about assessing your current role in an organization and looking forward to your next professional endeavor, be it within that organization or externally. The Problem section explicitly defines the situation such that your current employer does not have the position you’re looking for. The proposed solution, in summary, is to put careful thought into your future position and then create a plan with micro-steps to get you there; these steps will help keep your sights on the potential position.

The pattern seems to coincide very much with the steps I’ve taken in life. I’ve personally followed a very winding path to get where I am now and will certainly be considered to have taken an unconventional path into any career I manage to land after college. While I don’t always have scheduled periods of reflection in my career, I find that periodically discussing with coworkers what they want to make of their careers inspires them to pursue their goals and also forces me to look inward and re-evaluate whether I’m happy with my career at that point.

Perhaps my favorite takeaway from this pattern was the activity provided in the Action section at the bottom. For the unacquainted, the activity asks the reader to make a map of three jobs that could logically be pursued beyond their current one. The authors then insist that the reader repeat this with a web of three jobs for each of the previous branches and assess whether any of these roles would satisfy them. The exercise asks that the branching be done one more time, and this final iteration should be roughly representative of your total career prospects. I found that engaging with this exercise left me feeling hopeful and optimistic about my potential career paths (which is to be expected as a student), and I would specifically recommend it to others looking to make a change who may be a bit more pessimistic about their prospects. As someone who has hopped careers, I know the hesitancy to reconsider one’s career comes from a fear of needing to take drastic action, but by using this exercise I think those in a similar situation to my own would realize that they’re not as far from their destination as they may seem.

From the blog CS@Worcester – Cameron Boyle's Computer Science Blog by cboylecsblog and used with permission of the author. All other rights reserved by the author.

Stay in the Trenches – Apprenticeship Pattern

This apprenticeship pattern starts off talking about how you have been developing excellent code for years, meeting the standard for the company, when all of a sudden you are offered a management position that will take you away from coding in another direction. The pattern states that this may be tempting and may seem like leveling up, but it is actually something of an illusion. You may feel that a promotion will help you out, but in this case it takes away from the journey you have taken so far and the motivation to be a good programmer. When you take the promotion, your programming skill will slowly deteriorate as time passes. Staying in the trenches means staying true to your passion, and trying to find rewards for your exemplary work in some other way by negotiating with the company.

I think the main message of this pattern, staying true to your passion, is important. One thing that I don’t agree with is the idea that just because a path in life may lie outside of programming, you should not take it. There are a lot of people who will never be in the position of manager, and although a person may have a passion for programming, there are times when someone needs to move to the uncomfortable side to level up. It may be time to start learning about the bigger picture of software development and managing a team of programmers rather than just programming yourself. It will be a different kind of change, and the only way to find out whether you like it or not is to try it out. If you later decide the role isn’t for you, you can always go back to a programming role at a different level. The goal, however, shouldn’t be to give up programming altogether, but to try a different path while also staying up to date with programming. I personally believe this will give you more experience leading a team instead of only focusing on development. The more varied experiences a person has, the more they can apply them in all sorts of different areas in the future.

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns Blog – A Different Road

For this week’s blog post, I read the section “A Different Road” from chapter three of the book Apprenticeship Patterns by Dave Hoover and Adewale Oshineye. The section talked about how taking a different road could be a risky or life-changing event. Despite this risk, I believe that one should not be afraid to do something different with their life. A change, no matter whether it seems good or bad at the time, will teach you something new or give you a new experience that can drive you to push forward. The author started the section by explaining how saying goodbye to the craft can be risky, but “Even if you leave the road permanently, the values and principles you have developed along the way will always be with you.” I think this is very true: no matter the change, the principles you have acquired so far in your career will always be there for you.

Next, I want to talk about the sad reality and the risk one may face when changing crafts. The author talked about a person who switched from an IT job to teaching, and afterward it was hard for that person to get back into the industry because “most HR people in big companies didn’t like it.” Sadly, most software companies nowadays look at the gaps in a person’s career, and you must justify within their value system why you left and why you are coming back now. Also, as technology evolves daily, companies want individuals who are willing to learn quickly and adapt to the environment rapidly. The solution suggested in this section, which I found quite helpful, was to write down some of the other jobs you think you would enjoy doing, find people who are doing those jobs, and ask them questions about them. This interaction will help an individual decide whether they are making the right decision in choosing a different path. Reading this section made me think of what my life would be if I chose a different road. It is risky, but is it worth it? Sometimes you must risk it for a greater reward.

From the blog Derin's CS Journey by and used with permission of the author. All other rights reserved by the author.