What is Docker?

Summary:

This article goes over what Docker is and how it has become so mainstream today. It covers what containers are, the components of Docker, Docker’s advantages, and even its drawbacks. Docker became popular because it makes it easy to move an application’s code and all of its dependencies from the developer’s laptop to a server. After reading this article, we should have a general understanding of what exists within Docker and how it works.

Reason:

The reason I chose this article is that we use Docker very often in class, and most of us had never heard of it coming into computer science. Why was it suddenly something so instrumental in most of our assignments? The more we continued to use Docker, the more apparent its versatility and the power of its capabilities became.

What I learned:

Docker is a software platform for building applications based on containers: small, lightweight execution environments that make shared use of the operating system kernel but otherwise run in isolation from one another. Containers are self-contained units of software that you can deliver from one server to another, from your laptop to EC2 to a bare-metal giant server, and they will run in the same way because they are isolated at the process level and have their own file systems.

A Dockerfile is a text file that provides the set of instructions for building an image. A Docker image is a portable, read-only executable file containing instructions for creating a container. The docker run utility is the command that launches a container. Docker Hub is a repository where images can be stored, shared, and managed. Docker Engine is the server technology that creates and runs the containers. Docker Compose is a command-line tool that uses YAML files to define and run multi-container Docker applications. It allows you to create, start, stop, and rebuild all the services from your configuration and view the status and log output of all running services.

The advantages of Docker containers are that they are minimalistic and enable portability, they enable composability, and they help ease orchestration and scaling. The disadvantages, however, are that containers are not virtual machines, they don’t provide bare-metal speed, and they are stateless and immutable. Today container usage continues to grow as cloud-native development techniques become the mainstream model for building and running software, but Docker is now only a part of that puzzle.
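
To make these pieces concrete, here is a minimal sketch of my own (the file names and image tag are illustrative assumptions, not from the article). A Dockerfile builds an image:

```
# Dockerfile: instructions for building an image
FROM eclipse-temurin:17
# Set the working directory inside the image
WORKDIR /app
# Copy source code from the laptop into the image
COPY HelloApp.java .
# Compile the code when the image is built
RUN javac HelloApp.java
# Default command the container runs at launch
CMD ["java", "HelloApp"]
```

Then docker build creates the image and docker run launches a container from it:

```
docker build -t hello-app .   # build the image from the Dockerfile
docker run --rm hello-app     # launch a container from the image
```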

Source: https://www.infoworld.com/article/3204171/what-is-docker-the-spark-for-the-container-revolution.html

From the blog CS@Worcester – Life as a CS Student by Dylan Nguyen and used with permission of the author. All other rights reserved by the author.

API

API stands for application programming interface, which is a set of definitions and protocols for building and integrating application software.

How do APIs work?

An application programming interface (API) is a software interface that allows two applications to communicate with one another without the need for a human to intervene. In other words, an API is a set of software functions and procedures, a piece of code that can be accessed or executed, which allows two different software programs to communicate and exchange data with one another.

It allows products or services to talk with one another without requiring knowledge of how they are implemented.

Consider the following example from everyday life to better understand how an API works. Assume you’ve gone to a restaurant for lunch or dinner. The server approaches you and hands you a menu card, and you can customize your order, for example by specifying that you want a veggie sandwich without onion.

After some time has passed, the waiter brings your order. However, it is not as straightforward as it appears, since there is a process that occurs in the middle.

The waiter plays a vital role in this situation, because you will not go to the kitchen to pick up your order or tell the cooking crew what you want.

An API does the same thing: it takes your request and, just like the waiter, tells the system what you want and gives a response back to you.
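
As a small sketch of my own in Java (the endpoint URL is a made-up assumption), the program plays the customer, the HTTP request is the order, and the API brings back the response:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiWaiterDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hand our "order" to the API, like telling the waiter what we want.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/menu/sandwiches?onion=false"))
                .header("Accept", "application/json")
                .GET()
                .build();

        // The API relays the request to the server and returns the response.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```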

Why would we need an API?

Here are a few reasons to use API:

  • APIs allow two separate software applications to communicate and exchange data.
  • It makes it easier to incorporate content from any website or application.
  • App components can be accessed using APIs. Services and information are delivered in a more flexible manner.
  • The generated content can be automatically published.
  • It enables a consumer or a business to personalize the material and services they utilize the most.
  • APIs assist in anticipating changes in software that must be made over time.

To sum up, the major reason APIs are so important in today’s marketplaces is that they enable speedier innovation. Barriers to change are removed, and more people can contribute to the success of an organization. They offer a double advantage: the company can generate better products while also distinguishing itself from its competitors.

From the blog CS@Worcester – Site Title by proctech21 and used with permission of the author. All other rights reserved by the author.

JSON

I’ve been seeing so many JSON files while working with Docker and can’t help but wonder: what is JSON? What do these files do, and why do we need them along with JavaScript? In this blog, I want to cover this topic to help myself and others learn more about JSON.

JSON stands for JavaScript Object Notation, and it is a way to store information in an organized, easy-to-access manner. Basically, JSON gives us a human-readable collection of data that can be accessed in a logical manner. There are many ways to structure JSON data, but arrays and nested objects are the most popular ones. However, I will not go into the details of those two methods, focusing instead on the definition of JSON.
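
For example, here is a small, made-up JSON document that uses both an array and a nested object:

```json
{
  "name": "Alex",
  "student": true,
  "courses": ["CS-343", "CS-348"],
  "address": {
    "city": "Worcester",
    "state": "MA"
  }
}
```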

Why does JSON matter?

JSON has become more and more important for sites to be able to load data quickly and seamlessly, or in the background without delaying page rendering. It also helps with switching up the contents of a certain element within our layouts without refreshing webpages. This is convenient not only for users but also for developers in general. Because of its popularity, many big sites rely on content provided by sites such as Twitter, Flickr, and others. These sites provide RSS feeds to minimize the effort of importing and using the data server-side, but when using them with AJAX-powered sites, we run into a problem: we can only load an RSS feed if we’re requesting it from the same domain it’s hosted on. JSON allows us to overcome the cross-domain issue, because using a callback function to return the JSON data sends that data back to our domain. This capability makes JSON so useful, as it solves many problems that were previously difficult to work around.

JSON structure

JSON is a string whose format very much resembles JavaScript object literal format. We can include the same basic data types inside JSON as we can in a standard JavaScript object, such as strings, numbers, arrays, booleans, and other object literals. This allows us to construct a data hierarchy. JSON is also purely a string in a specified data format, which means it contains only properties and no methods.
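
Since JSON is purely a string, a program has to parse it into objects before working with it. As a rough sketch, assuming the Gson library (my choice, not something mentioned in the article), parsing a document like the one above into a Java class could look like this:

```java
import com.google.gson.Gson;

// Fields mirror the properties of the JSON document.
class Student {
    String name;
    boolean student;
    String[] courses;
}

public class ParseJsonDemo {
    public static void main(String[] args) {
        String json = "{\"name\":\"Alex\",\"student\":true,"
                    + "\"courses\":[\"CS-343\",\"CS-348\"]}";

        // Gson turns the JSON string into a populated Java object.
        Student s = new Gson().fromJson(json, Student.class);
        System.out.println(s.name + " takes " + s.courses.length + " courses");
    }
}
```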

REST vs SOAP: The JSON connection

Originally, this kind of data was transferred in XML format using a protocol called SOAP. However, XML was verbose and difficult to manage in JavaScript. Since JavaScript already has objects, which are a way to express data within the language, Douglas Crockford (the creator of JSON, and also of JSLint and JSMin) took a subset of that expression as the specification for a new data interchange format and named it JSON.

REST then began to overtake SOAP for transferring data. The biggest advantage of programming using REST APIs is that multiple data formats can be used, meaning you can include not only XML but also HTML and JSON. Since developers prefer JSON over XML, they have come to favor REST over SOAP.

Today, JSON is the standard for exchanging data between web and mobile clients and back-end services. As I go deeper into the software development cycle, I feel like the need for JSON is essential. Setting all the advantages aside, there are disadvantages too, but the importance of JSON is undebatable.

Source: https://www.infoworld.com/article/3222851/what-is-json-a-better-format-for-data-exchange.html

From the blog CS@Worcester – Nin by hpnguyen27 and used with permission of the author. All other rights reserved by the author.

REST APIs

I chose to write about REST APIs this week because we are covering them in class, and because REST APIs are ubiquitous in software development these days. An application programming interface, or API, is a way of sending data between services over the internet. APIs have become popular because they allow companies to hand off already-solved problems, like email automation, username and password validation, and payment processing, to other established services. This allows developers to focus on the unique business demands of their customers and products.

In web development, the client is the service or person making requests to access the data within the application. Most often, the browser is the client. A resource is a piece of information that the API can provide for the client, and a resource needs to have a unique identifier. A server receives and fulfills client requests. REST stands for Representational State Transfer. When the client requests a resource from the API, it comes in the resource’s current state, in a standardized representation.

At my job we have one workflow problem that could be solved with an API. We have a client-side web application where customers place orders, and an internal desktop application where we keep track of primers needed for those orders. The internal desktop application is called the primer log. The client application is a web application, but the internal application is not, and this causes our company all kinds of problems. My coworkers will often say, “I wish these services could talk to each other.” That is exactly what an API does. Our internal application keeps track of which primers we have, where they are, and which are dry, but the customers cannot see that. Customers will frequently request that we use primers that we do not have onsite.

We will eventually recreate our primer log as a web application, which would allow our client web application and our primer log to communicate. When a customer requests that we use a primer, that would be a client request for a resource from the API. The API would check our internal primer log and send a response back to the client indicating whether or not the requested primer is onsite. This API response could prompt the client to order the required primer if we did not have it onsite. That is ROI, not to mention the time it would save my team and the improvement to turnaround time for our customers. This is just one example of how an API can solve a real-life business problem.
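
As a rough sketch of what that could look like (the path, identifier, and fields are all hypothetical), the client would request a primer resource and the API would answer with its current state:

```
GET /api/primers/P-1042 HTTP/1.1
Host: primerlog.example.com
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json

{
  "id": "P-1042",
  "location": "Freezer B, Box 7",
  "dry": false,
  "onsite": true
}
```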

From the blog CS@Worcester – Jim Spisto by jspisto and used with permission of the author. All other rights reserved by the author.

Software Construction Log #4: Understanding Semantic Versioning

          Software releases are not always one-and-done affairs; more often than not, software that we use is actively being worked on and maintained for an extended period of time well after its initial release. We can expect that software, during its support lifetime, will undergo several types of changes, including the implementation of new features and various vulnerability fixes. However, it is important to note that such changes are as important to document properly as the technical aspects of the software, such as its use and structure as they were conceived during development. This documentation of changes is often referred to as “software versioning”, and it involves applying a version scheme in order to track the changes that have been implemented in a project. While developer teams may develop their own scheme for versioning, some may prefer to use Semantic Versioning (https://semver.org/) as a means of keeping track of changes.

          Semantic Versioning is a versioning scheme that applies a numerical label to a project. The label is separated into three parts (X.Y.Z), and each part is incremented depending on the type of change that has been implemented. These parts are referred to in the documentation as MAJOR.MINOR.PATCH and defined as:

1. MAJOR version when you make incompatible API changes,
2. MINOR version when you add functionality in a backwards compatible manner, and
3. PATCH version when you make backwards compatible bug fixes.

https://semver.org/

The way semantic versioning works is that, when a part is incremented, the parts to its right are reset to zero, meaning that if a major change is implemented, then the minor and patch numbers are reset to zero. Likewise, when a minor change is implemented, the patch number is reset to zero. While this scheme is relatively straightforward in and of itself, the naming convention of the numerical labels (specifically “major” and “minor”) may confuse some due to the ambiguity that such names may present. However, there is another naming convention that applies to semantic versioning, which defines the numerical version label as (breaking change).(feature).(fix).
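
          For a quick worked example (the numbers are just illustrative), starting from version 2.3.1:

```
2.3.1 -> 2.3.2   backwards compatible bug fix: PATCH (fix) increments
2.3.2 -> 2.4.0   backwards compatible feature: MINOR (feature) increments, PATCH resets
2.4.0 -> 3.0.0   incompatible API change: MAJOR (breaking change) increments, MINOR and PATCH reset
```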

          Though both naming conventions are used, I find the latter to be far more straightforward to understand and utilize, as the names give one a better idea of the importance of a newly implemented update. As I was researching more resources regarding Semantic Versioning, along with the official documentation, I came across the following archived article on Neighbourhood.ie titled Introduction to SemVer. In this article, Irina goes into further detail regarding semantic versioning by explaining the naming of each component, as well as noting the difference between the two naming conventions.

          Although they go into further detail on semantic release in another article, this article sufficiently covers the fundamentals of semantic versioning. While this versioning scheme is not the only way to version software, it is still an important tool that can help in documenting a project’s history during its support lifetime and outline important changes more clearly and efficiently.

Direct link to the resource referenced in the post: https://neighbourhood.ie/blog/2019/04/30/introduction-to-semver/

Recommended materials/resources reviewed related to semantic versioning:
1) https://www.geeksforgeeks.org/introduction-semantic-versioning/
2) https://devopedia.org/semantic-versioning
3) https://www.wearediagram.com/blog/semantic-versioning-putting-meaning-behind-version-numbers
4) https://developerexperience.io/practices/semantic-versioning
5) https://gomakethings.com/semantic-versioning/
6) https://neighbourhood.ie/blog/2019/04/30/introduction-to-semantic-release/

From the blog CS@Worcester – CompSci Log by sohoda and used with permission of the author. All other rights reserved by the author.

SOLID Principles of Object-Oriented Programming

Thinking about design patterns, I decided to research the SOLID principles that reinforce the need for design patterns.

A wonderful source explaining the SOLID principles is from https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/.

The SOLID acronym was formed from:

  • Single Responsibility Principle
  • Open-Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

The Single Responsibility Principle states that a class should have only one responsibility and therefore only one reason to change.

The Open-Closed Principle states that extensions to classes can be added for new functionality, but the code of the class should not be changed.

The Liskov Substitution Principle states that objects of a subclass should be able to work with methods that expect the object of the superclass and the method should not give irregular output.

The Interface Segregation Principle relates to separating interfaces. There should be multiple specific interfaces versus creating one general interface in which there are many overridden methods that have no use.

The Dependency Inversion Principle states that classes should depend on interfaces or abstract classes rather than concrete classes and functions. This would help classes be open to extensions.

I researched SOLID from this source because it had in-depth examples of code violating the principles and how it could be fixed. For the Single Responsibility Principle example, it showed how the class Invoice had too many responsibilities, so its methods were separated out and the classes InvoicePrinter and InvoicePersistence were created so that each class has one responsibility in the application.
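
A minimal sketch of that refactoring (the method details are my own guesses, not necessarily the article’s):

```java
// Invoice now only represents invoice data.
class Invoice {
    private final double amount;

    Invoice(double amount) { this.amount = amount; }

    double getAmount() { return amount; }
}

// Printing is one responsibility...
class InvoicePrinter {
    void print(Invoice invoice) {
        System.out.println("Invoice total: " + invoice.getAmount());
    }
}

// ...and persistence is another, so each class has one reason to change.
class InvoicePersistence {
    void saveToFile(Invoice invoice, String filename) {
        // write the invoice to the given file (omitted for brevity)
    }
}
```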

The Liskov Substitution Principle example involves a Rectangle superclass and a Square subclass (because a square is a rectangle). The setters of the Square class are overridden because a square must have the same height and width. However, this violates the principle, which shows when the getArea function from the Rectangle class is tested. The test class sets the height to 10 and expects the area to be width * 10, but because of the Square class overrides, the width is also changed to 10. So if the width was originally 4, the getArea test expects 40, but the Square class override makes it 100. I thought this was a great example because I would have expected the function to pass the test, not remembering that assigning the height in the test would also change the width.
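
Here is a rough sketch of that violation (the class names follow the article’s description; the details are my own):

```java
class Rectangle {
    protected int width;
    protected int height;

    void setWidth(int width)   { this.width = width; }
    void setHeight(int height) { this.height = height; }

    int getArea() { return width * height; }
}

// A Square keeps width and height equal, which breaks substitutability.
class Square extends Rectangle {
    @Override
    void setWidth(int width)   { this.width = width; this.height = width; }

    @Override
    void setHeight(int height) { this.height = height; this.width = height; }
}

public class LspDemo {
    public static void main(String[] args) {
        Rectangle r = new Square();  // a Square stands in for a Rectangle
        r.setWidth(4);
        r.setHeight(10);
        // A Rectangle would give 40, but the Square overrides make it 100.
        System.out.println(r.getArea());
    }
}
```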

The examples provided were very helpful in understanding how each principle can be seen in code, and the diagrams were a bonus for visual representation. Going forward, I will know that I have this information to reference, if need be, when dealing with class designs. It has shown me, with detailed explanations, principles that help make coherent classes.

From the blog CS@Worcester – CS With Sarah by Sarah T and used with permission of the author. All other rights reserved by the author.

Design smells

Resource link: https://flylib.com/books/en/4.444.1.47/1/

This week I decided I wanted to learn more about the different design smells. I understood the definitions from the activities in class, but I wanted to learn more about what exactly each design smell means and how to avoid them. I think it’s important to learn about the design smells so that you know to look out for them when working on a project yourself. This is because accidentally introducing one of the design smells into your code could lead to a lot of difficulty and frustration if you ever need to make changes later. For this reason, it is best to actively try to avoid them, to ensure that whatever code you write will always be easy to modify and maintain.

This resource sums up exactly what each of the design smells is, why it’s bad, and what the consequences of introducing it are. I liked that it uses practical examples and analogies to make the concepts easier to understand. While the concepts may be hard to understand because of how abstract they are, when they are broken down or applied to a familiar situation, it becomes much easier to grasp what is being explained.

The resource breaks down into sections, each describing a different design smell. The first one is rigidity. Rigidity is described as code that is difficult to change, even in small ways. This is bad because code frequently needs to be modified or changed, and if it’s difficult to make even small changes such as bug fixes, then that’s a very large issue that must be addressed.

The next design smell is fragility. Fragility is similar to rigidity in that it makes code difficult to change, but with fragility the difficulty is that the code has a tendency to break when changes are made, whereas with rigidity things don’t necessarily break; the code is simply designed in such a way that changes are very difficult to make.

Next, immobility is the design smell where a piece of code could be used elsewhere, but the effort involved in moving it to where it would be useful is too great to be practical. This is an issue because it means that instead of being able to reuse a piece of code, you have to write completely new code. That means time is wasted that could be spent on more important changes.

Next, viscosity is the design smell where changes to code could be made in a variety of different ways. This is an issue because time might be wasted deciding which implementation of a change should be used. Also, disagreements might arise about how a change should be made, meaning that a team won’t be able to work toward the same goal.

The next design smell is needless complexity. Needless complexity usually arises when code is added in anticipation of a change that might be needed in the future. This can lead to bloated code with many features that aren’t needed or aren’t being used. To avoid this, and to reduce overall complexity, it is best to add code only when it’s needed.

Next, needless repetition is the design smell where sections of code are repeated over and over rather than being abstracted. This makes code hard to change or modify, because it has to be changed in many different locations instead of just one, as it would be if it were abstracted. That is the benefit of abstraction: a modification that changes a lot of how the code functions can be made by altering code in one location.
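
As a small illustration of my own (not from the resource), the same discount rule pasted into several methods has to be edited everywhere if it changes, while an abstracted helper only changes once:

```java
// Before: the same 10% discount formula is repeated in every method,
// so changing the discount means editing three places.
class PriceCalculatorBefore {
    double bookTotal(double price)  { return price - price * 0.10; }
    double gameTotal(double price)  { return price - price * 0.10; }
    double musicTotal(double price) { return price - price * 0.10; }
}

// After: the rule is abstracted into one helper, so a change happens once.
class PriceCalculatorAfter {
    private double applyDiscount(double price) {
        return price - price * 0.10;
    }

    double bookTotal(double price)  { return applyDiscount(price); }
    double gameTotal(double price)  { return applyDiscount(price); }
    double musicTotal(double price) { return applyDiscount(price); }
}
```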

Finally, opacity is the design smell where code is written in a way that is hard to understand. A piece of code could be considered opaque for a number of reasons, but in general it is when code that is understandable to one person might not be understandable to others. To avoid this, code should be written in a clear and concise manner that is easy to trace and understand, no matter who is working on it.

From the blog CS@Worcester – Alex's Blog by anelson42 and used with permission of the author. All other rights reserved by the author.

Monolithic vs. Microservice Architectures

Several of our classes have discussed two different forms of software architecture. Due to my intent to graduate with both the Software Development and Big Data Analytics concentrations, this topic especially interested me. On one hand, I need to know software architecture designs in order to plan an implementation, and on the other hand, I need to know how to deal with massive data sets. These two factors combined pique my interest as an intersection point of both fields.

A server-side application has several components, namely presentation, business logic, database access, and application integration. The Monolithic Architecture packages these components into a single unit, resulting in ease of development, testing, deployment, and scalability. However, its design flaws include limitations in size and complexity, difficulty in understanding, and the need for application-wide redeployment.1

The Microservice Architecture is split into a set of smaller, connected services, each with its own architecture. Each service performs a given task which is then, in one way or another, linked back to the other services. The Microservice Architecture is characterized by faster development, independent deployment, and faster and independent scalability. However, its disadvantages include added complexity, increased difficulty in testing, and increased difficulty in making changes that span services.1
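
As a small sketch of the idea (the service names and images are hypothetical), a Docker Compose file can wire together independently deployable services:

```yaml
# docker-compose.yml: separate services, each running in its own container
services:
  orders:        # handles customers placing orders
    image: example/orders-api:1.0
    ports:
      - "8080:8080"
  approvals:     # handles the approvers' workflow
    image: example/approvals-api:1.0
    ports:
      - "8081:8080"
  orders-db:     # a service can own its own data store
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```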

With these things in mind, questions arise such as “should I implement a Microservice Architecture or a Monolithic Architecture?” and “should I always use one or the other?”. What I’ve learned is that despite the fairly detailed descriptions of both of these architectures, there isn’t a uniform consensus on when to apply them. Martin Fowler says a Monolithic Architecture should always be implemented first, because the Microservice Architecture implies a complexity that may not be needed; when the prospect of maintaining or developing a monolithic application becomes too cumbersome, one should begin transitioning to a Microservice Architecture.2 Stefan Tilkov disagrees, however, stating that building a new system is exactly when one should think about partitioning the application into pieces. He further states that cutting a monolithic architecture into a microservice architecture is unrealistic.3

Would I start with a Monolithic Architecture, or would I start with a Microservices Architecture? While thinking about this problem, I know that different parts of the application are doing different things. One part deals with users placing orders, which requires the users to have their own UI, API, database, and servers; the other requires the approvers to have their own UI, API, database, and servers. This convinces me not to begin development with a monolithic design, since an application’s design should be thought about immediately. Therefore, I agree more with Stefan Tilkov on this issue, based on the knowledge that I have as a student.

Links:

  1. https://articles.microservices.com/monolithic-vs-microservices-architecture-5c4848858f59
  2. https://martinfowler.com/bliki/MonolithFirst.html
  3. https://martinfowler.com/articles/dont-start-monolith.html

From the blog CS@Worcester – Chris's CS Blog by Chris and used with permission of the author. All other rights reserved by the author.

Object-Oriented Programming #1

Object-oriented programming (OOP) is a computer programming model that organizes software design around data, or objects, rather than functions and logic. An object can be defined as a data field that has unique attributes and behavior. OOP focuses on the objects that developers want to manipulate rather than the logic required to manipulate them. Object-oriented programming allows programmers to think of software development as if they are working with real-life entities.

4 Pillars of Object-Oriented Programming

  • Encapsulation. This principle states that all important information is contained inside an object and only select information is exposed. The implementation and state of each object are privately held inside a defined class. Other objects do not have access to this class or the authority to make changes. They are only able to call a list of public functions or methods. This characteristic of data hiding provides greater program security and avoids unintended data corruption.
  • Abstraction. Objects only reveal internal mechanisms that are relevant for the use of other objects, hiding any unnecessary implementation code. The derived class can have its functionality extended. This concept can help developers more easily make additional changes or additions over time.
  • Inheritance. Classes can reuse code from other classes. Relationships and subclasses between objects can be assigned, enabling developers to reuse common logic while still maintaining a unique hierarchy. This property of OOP forces a more thorough data analysis, reduces development time, and ensures a higher level of accuracy.
  • Polymorphism. Objects are designed to share behaviors, and they can take on more than one form. The program will determine which meaning or usage is necessary for each execution of that object from a parent class, reducing the need to duplicate code. A child class is then created, which extends the functionality of the parent class. Polymorphism allows different types of objects to pass through the same interface.

The difference between the concepts of encapsulation and abstraction is that encapsulation is about the packaging of the class (how data should be accessed (setters/getters) and what data should be accessible (access specifiers)), whereas abstraction is more about what the class does for you at a conceptual level. Encapsulation hides unnecessary data in a capsule or unit, while abstraction shows only the essential features of an object.
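
As a small sketch of my own, the private field and guarded methods below are encapsulation, while the interface that exposes only what callers conceptually need is abstraction:

```java
// Abstraction: callers see only what an account can do, not how it does it.
interface Account {
    void deposit(double amount);
    double getBalance();
}

// Encapsulation: the balance is hidden; all access goes through methods.
class BankAccount implements Account {
    private double balance;  // not directly reachable from outside

    @Override
    public void deposit(double amount) {
        if (amount > 0) {    // the class guards its own state
            balance += amount;
        }
    }

    @Override
    public double getBalance() {
        return balance;
    }
}

public class AccountDemo {
    public static void main(String[] args) {
        Account account = new BankAccount();  // polymorphism: interface type
        account.deposit(50.0);
        System.out.println(account.getBalance());  // prints 50.0
    }
}
```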

Important advantages include:

  • Objects comprise data that defines their state and methods that define their behavior. Each object encapsulates these two entities.
  • The internal implementation of object methods is invisible to the user. This way objects abstract state changes under a simplified external API.
  • Objects are instances of classes. Classes are blueprints to build objects. The class of an object is also its type.
  • Classes can inherit both state and behavior from other classes. Based on this notion, objects of the subclass support casting into objects of the parent class.
  • This form of casting gives rise to polymorphism. The program can implicitly cast an object of a class to an object of the class’s ancestors.

Resources:

The Four Pillars of Object Oriented Programming (keylimeinteractive.com)

6 Pros and Cons of Object Oriented Programming – Green Garage (greengarageblog.org)

From the blog CS@Worcester – Delice's blog by Delice Ndaie and used with permission of the author. All other rights reserved by the author.

REST APIs : First Look

Last week we went over Docker and some general information about it. This week we have started to go over REST APIs in our activities, so I decided to dedicate this week’s blog post to them. The source I have chosen is a fairly recent article written by Zell Liew; it is lengthy, gives some general insight into what REST APIs are, and should be a good read for many who are looking to learn.

A REST API is, firstly, an application programming interface (API), which allows programs to communicate with each other, and this API follows REST. REST stands for Representational State Transfer, which is basically another set of rules/constraints that the API has to follow when being created. With REST APIs, we are able to make requests and get data back as a response. Zell goes over the four parts of a request. First is the endpoint, which is the URL we request. The endpoint itself is made of the root-endpoint, which looks something like https://api.website.com, then the path, which helps us narrow down what we are requesting, and the query parameters. Second is the method, the type of request we actually send, which comes in five kinds: GET, PUT, POST, PATCH, and DELETE; the method determines the action to take. Third are the headers, which have a number of uses but in general provide information in one way or another. Lastly, there is the data/body, which contains the information we are sending to the server. Zell also goes over more topics such as authentication, HTTP status codes, etc.
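
To tie the four parts together, here is a minimal sketch using curl (the path, query parameter, and body are made-up assumptions built on Zell’s example root-endpoint):

```
# Endpoint: root-endpoint + path + query parameter
# Method: POST   Header: Content-Type   Body: JSON data
curl -X POST "https://api.website.com/orders?limit=10" \
  -H "Content-Type: application/json" \
  -d '{"item": "veggie sandwich", "onion": false}'
```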

Zell’s explanation of REST APIs is a good start in helping me understand what APIs are and their actual use in computer science. This view of how websites work is also quite interesting. It will also help in class as we potentially work with APIs further in our activities, making the process a bit more fluid. The prevalence of REST APIs is very apparent, with them being used by big company sites like Google and Amazon, and the rise of the cloud over the years has led to increased usage of REST APIs. This knowledge of REST APIs is a must if I am to continue to improve and potentially work in a career in this area, and it looks like REST APIs will continue to be used for a while longer since people are comfortable with them, so I should be too.

Source: https://www.smashingmagazine.com/2018/01/understanding-using-rest-api/

From the blog CS@Worcester – kbcoding by kennybui986 and used with permission of the author. All other rights reserved by the author.