Category Archives: Week 8

Docker Compose

For this week’s blog post, I am reviewing a blog post made by Gabriel Tanner. Mr. Tanner is a software engineer at Dynatrace, and in this post he talks about the characteristics of Docker Compose and why we should use it. Mr. Tanner starts the blog post by explaining why we have and use Docker in the first place.

“With applications getting larger and larger as time goes by it gets harder to manage them in a simple and reliable way.”

The first feature of Docker Compose that Mr. Tanner talks about is its portability and how it can build up and tear down a development environment using the docker-compose up and docker-compose down commands, respectively. The blog post also goes over common uses of Docker Compose and common reasons why people use it. Some examples of common uses are its ability to run several containers on a single host and to run your program in an isolated environment. A common reason people reach for Docker Compose is that they want to run their program in an environment similar to the one used in production. The post also goes on to talk about volumes, the different types of volumes and their syntax, networking so that our containers can communicate with one another, and many other topics.
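As a rough sketch of what such a setup can look like, a minimal Compose file might resemble the following (the images, ports, and volume name are placeholders, not taken from Mr. Tanner's post):

```yaml
# docker-compose.yml - two containers on a single host
version: "3.8"

services:
  web:
    image: nginx:alpine        # example web server image
    ports:
      - "8080:80"              # map host port 8080 to container port 80
    depends_on:
      - cache
  cache:
    image: redis:alpine        # example second service on the same host
    volumes:
      - cache-data:/data       # named volume so data survives restarts

volumes:
  cache-data:
```

Running docker-compose up starts the whole environment, and docker-compose down tears it back down.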

The entire blog post is basically one big tutorial about Docker Compose. It defines the features of Docker Compose, gives examples of its uses, and explains why we should use it. I think this is a blog post worth reviewing for this class because it could be a really good resource to have. The post is a little long, but it is very thorough, and I think it would be a good way to review Docker Compose before a midterm or final. In addition, the blog post covers a lot of information that we have not gone over in class, so it also gives us a way to investigate the topic more deeply. The first half of the post covers material we have already seen in class, but the second half covers features of Docker Compose that we have not yet covered, such as using multiple Compose files by passing an optional override file. This is a feature I can see myself using in the future and one I wish I had learned sooner. A couple of semesters ago, I was doing a project in MATLAB and Java and was running several large programs on one computer. This made the project very time-consuming and difficult because it took a long time to run all of the programs and to generate and collect all of the results. Had I known what I know now about Docker, I would have done a lot of things differently.
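The override-file feature works roughly like this; the contents below are only an illustration, and the file names follow the Compose defaults:

```yaml
# docker-compose.override.yml - settings layered on top of docker-compose.yml
version: "3.8"

services:
  web:
    ports:
      - "3000:80"            # use a different host port for local development
    environment:
      - DEBUG=true           # hypothetical development-only setting
```

docker-compose up automatically merges docker-compose.override.yml if it exists, and additional files can be layered explicitly, for example with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up.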

https://gabrieltanner.org/blog/docker-compose

From the blog CS@Worcester – Just a Guy Passing By by Eric Nguyen and used with permission of the author. All other rights reserved by the author.

Anestiblog #2

This week I read a blog post about the top 10 most popular software architecture patterns, and I thought it really related to this class. The blog starts off explaining what software architecture patterns are and why they should be a focus. The blogger describes the patterns as an outcome of the design principles architects use and the decisions they make, and argues that they deserve attention because they enable a software system to deliver its business, operational, and technical objectives. The blog then gives some tips on how to know if your patterns are good, lists the 10 most common patterns, and ends with how to evaluate which pattern is best for your project.

I selected this blog post because, after last week, I wanted to go deeper into the world of software architecture, and I thought the different patterns were a good way to do that. I think this blog was a great read that I recommend for many reasons. A huge reason is that it gives good background information before getting into the list, so that you understand what a software architecture pattern is. Before I read the blog, I had no idea what one was, and that made me want to learn more; this blog helped with that. Another reason I would recommend reading it is that it covers the top 10 most common patterns. That is important because, for a career in this field, you have to know the most common patterns; those are the ones that will be used the most in future jobs. If anyone is thinking about a career in this field, they should definitely know these 10 patterns. The last reason I will mention is that, at the end, the blogger explains that software architects who can pick the right pattern for an application are the most sought after, which is a big reason to learn more about patterns; you can really use that skill to your advantage.

I learned what software architecture patterns are, and that software architecture models make it easy to reuse them in other projects since you already know the decisions and trade-offs. I also learned that the interpreter pattern and the layered architecture pattern are two of the most widely used patterns; I think I have done something similar to them before. This material affected me hugely because it showed how important patterns are to software development, so I will need them if I ever want to become a developer in the near future, and it is really great to know this stuff. I will take all of this knowledge with me as I continue in my career. Everyone who wants a career in software development should read this blog as well.

link : https://nix-united.com/blog/10-common-software-architectural-patterns-part-1/

From the blog CS@Worcester – Anesti Blog's by Anesti Lara and used with permission of the author. All other rights reserved by the author.

YAML (You Always Make Logs)

While working with YAML files is a relatively new concept for me, I must say that the structure is somewhat similar to another style of coding I am more familiar with: CSS. The two are not exactly the same, however the “key: value” format makes coding with YAML files that much easier. As noted later on, the importance of indentation in YAML reminds me of Python, a familiar programming language.

As part of the title, I have decided to make my own acronym for YAML: (Y)ou (A)lways (M)ake (L)ogs. This is because I view YAML files almost like logs for a certain state of a project; each one consists of all the different elements that make up a certain level of a SemVer state. In fact, YAML files consist of all the different materials that have been seen in this course before; ports, images and more make up a YAML file. In addition, these files are often used with preview software (such as Swagger) to create visualizations of APIs.

In the link below, one can find a video that explains the structure of YAML files. This information is alongside applications of the files, and even a tutorial for setting up a YAML extension in Visual Studio Code. I have chosen to watch this video on YAML files for two reasons: first, I need more practice with YAML, and in my opinion, increasing my exposure to it is the best way to gain more experience. Second, videos are a preferable source of educational media in my opinion; having visual examples helps to get the idea across better than simple discussions.

An example of a very simple YAML file. Notice how it starts with “version: (SemVer Value)”.
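Something along these lines, where the image name and port mapping are just placeholders:

```yaml
version: "3"          # the file starts with a version value, as noted above

services:
  web:
    image: nginx      # a Docker image, one of the familiar "materials"
    ports:
      - "80:80"       # a port mapping
```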

YAML files can be simple, like the one shown above; they can also be thousands upon thousands of lines long. Traditionally, they start with the version number, and they also depend on proper indentation (similar to Python). Features of a YAML file can be as significant as a Docker image, such as nginx, or as focused as making an element “read-only” within an API endpoint.

Among everything else, it is important to emphasize and take away this lesson from YAML files: these files are a core component of software design, connecting several familiar aspects together. By learning about YAML, I can figure out a way to run images, map ports and even more, all within a single file called by the “docker-compose up” command. Personally, I feel as though YAML will be an invaluable resource in future IT endeavors; while software such as Git focuses on the level of version control, YAML focuses on what exactly goes on in that level.

Link: https://www.youtube.com/watch?v=fwLBfZFrLgI

From the blog CS@Worcester – mpekim.code by Mike Morley (mpekim) and used with permission of the author. All other rights reserved by the author.

UML Diagram

UML stands for Unified Modeling Language. It is a way of visualizing a software program by using a collection of diagrams. The main aim of UML is to define a standard way to visualize how a system has been designed; it is not a programming language but rather a visual language. In 1997, UML was adopted as a standard by the Object Management Group (OMG), which has managed it ever since. UML diagrams are used with object-oriented programming and are organized into two distinct groups: structural diagrams, which represent the static aspects of the system, and behavioral diagrams, which represent the dynamic aspects of the system.

The class diagram is a kind of structural diagram; it shows how the code is organized in the system, and one can learn the different aspects of the code just by looking at it. A class in a class diagram has three sections: upper, middle, and bottom. The upper section contains the class name, the middle section contains the attributes of the class, and the bottom section contains the class operations (methods). Members have different access levels depending on their access modifier (visibility), shown with symbols: public (+), private (-), protected (#), package (~), and static (underlined). These symbols let one identify the visibility of different attributes at a glance. Classes are connected with each other in different ways, such as association, aggregation, composition, inheritance, and bi-directional association, with multiplicity showing how many instances are involved. If a class inherits from another class, the connecting arrow indicates it, which helps the viewer understand the UML diagram easily.
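As a rough text sketch of those three sections and the visibility symbols (the class and its members are made up purely for illustration):

```
-----------------------------      <- upper section: class name
|          Student          |
-----------------------------      <- middle section: attributes
| - name: String            |         (- private)
| - id: int                 |
-----------------------------      <- bottom section: operations (methods)
| + getName(): String       |         (+ public)
| # setId(id: int): void    |         (# protected)
-----------------------------
```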

I chose this topic because it helped me a lot in understanding how UML diagrams work and their different properties. Homework 1 was based on the UML diagram, and this article goes over everything you should know to create UML diagrams. It also has links that explain more about the subject matter and make things clear. And because I am interested in software development, UML is one of the main tools for software engineers. Going forward, the class diagram will be important too, since it is object-oriented and helps when working on a big project. The article provided examples that made it easy to understand different concepts of UML diagrams, such as the visibility of attributes and the arrows that show how classes are connected to each other, which makes the whole concept of UML diagrams easy to understand.

The article used: https://drawio-app.com/uml-diagrams/

From the blog CS@Worcester – Mausam Mishra's Blog by mousammishra21 and used with permission of the author. All other rights reserved by the author.

API in Software

This week in class, we worked with APIs and did some activities on them. It was my first time trying them, and I was interested in learning more because I had heard about APIs but never dug deeper into them. Now that we have used them in class, I wanted to do some research and learn why APIs are important and needed.

What is an API?

An API (Application Programming Interface) is a software interface that allows two applications to interact with each other without user intervention. It is a collection of software functions and procedures. In simple terms, an API is code that can be accessed and executed; code that helps two different pieces of software communicate and exchange data with each other.

How does API work?

Now that we know what an API is, let’s see how it works with an example. You’re searching for a hotel room on an online travel booking site. Using the site’s online form, you select the city you want to stay in, check-in and check-out dates, number of guests, and number of rooms. Then you click “search.” As you may know, the travel site aggregates information from many different hotels. When you click “search,” the site interacts with each hotel’s API, which delivers results for available rooms that meet your criteria. This can all happen within seconds because of APIs, which act like messengers that run back and forth between applications, databases, and devices.
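As a hedged sketch of what one of those behind-the-scenes calls might look like in code, here is a made-up request to a hypothetical hotel API (the URL, parameters, and response shape are invented for illustration):

```javascript
// Hypothetical request a travel site might send to one hotel's API
const params = new URLSearchParams({
  city: "Boston",
  checkIn: "2021-12-01",
  checkOut: "2021-12-03",
  guests: "2",
  rooms: "1",
});

fetch(`https://api.example-hotel.com/rooms?${params}`) // made-up endpoint
  .then((response) => response.json())                 // the API answers with data, often JSON
  .then((rooms) => console.log("Available rooms:", rooms))
  .catch((err) => console.error("Request failed:", err));
```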

What does API do?

APIs also facilitate calls to a server, but they execute them more simply. They connect the web, allowing developers, applications, and sites to tap into databases and services (or assets), much like open-source software. APIs do this by acting like a universal converter plug offering a standard set of instructions.

Why are APIs important?

Without APIs in our tool belt, most software couldn’t exist. It’s not just access to the data we need; it’s also the mechanics of many other APIs that we depend on to make software go. For maps, there is the Google Maps API. Amazon has an API that lets you tap into their inventory of products. There is Twilio for sending MMS campaigns and Yelp for finding places to eat. There is an API for just about anything you can think of; ProgrammableWeb reports around 20,000 that we know of.

I chose to talk about this topic because, as a computer science major, knowing the importance of APIs is vital. We can see that APIs are used in our everyday lives and play a big role in software development. I cannot wait to do more activities in class using APIs, as well as personal exercises to get more familiar with them.

Intro to APIs: What Are They & What Do They Do? (upwork.com)

What is an API? Full Form, Meaning, Definition, Types & Example (guru99.com)

From the blog CS@Worcester – Gracia's Blog (Computer Science Major) by gkitenge and used with permission of the author. All other rights reserved by the author.

Server Side and Node.js

The internet could not exist without servers to handle the exchange of data between devices. Given the importance of servers, the software and systems that run on them are equally as important. The programming of these applications is called Server-side development and is a large computer science field.

Most server-side applications are designed to interact with client-side applications and handle the transfer of data. A common form of server-side development is for supporting web pages and web applications. An emerging popular platform for web application backends is Node.js, a server-side language based on JavaScript.

A benefit of Node.js being based on JavaScript is that the same language can be used on the front end and the back end. Because JavaScript can handle JSON data natively, handling data on the server side becomes much easier compared to some other languages. As the name suggests, JavaScript is a scripting language, so code for Node.js also does not need to be compiled prior to running. Node.js uses the V8 engine developed by Google to efficiently convert JavaScript into bytecode at runtime. This can speed up development when running smaller files frequently, especially during testing.

Node.js also comes bundled with a command-line utility called Node Package Manager. Abbreviated npm, it manages open-source libraries for Node.js projects and easily installs them into the project directory. The npm package repository is comparable to the Maven repository for Java. However, according to modulecounts.com, npm is over three times larger than Maven with nearly 1.8 million packages compared to less than 500 thousand. Each Node.js project has a package.json file where settings for the project are defined such as running scripts, version number, required npm packages, and author information.
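A minimal package.json showing the kinds of fields described above might look something like this (the name, author, and dependency are placeholders):

```json
{
  "name": "example-app",
  "version": "1.0.0",
  "author": "Jane Doe",
  "scripts": {
    "start": "node index.js",
    "test": "echo \"no tests yet\""
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}
```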

The majority of Node.js applications share common packages that have become standard frameworks throughout the community. An example of this is Express.js, which is a backend web framework that handles API routing and data transfer in Node.js.
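As a small sketch of the kind of routing Express.js handles (the route path and port are arbitrary examples):

```javascript
// Minimal Express.js server exposing one API route
const express = require("express");
const app = express();

app.get("/api/hello", (req, res) => {
  res.json({ message: "Hello from Node.js" }); // send JSON back to the client
});

app.listen(3000, () => console.log("Listening on port 3000"));
```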

At the core of Node.js is the event loop, which is responsible for checking for the next operation to be done. This allows code to be executed out of order without waiting for an unrelated operation to finish first. The default asynchronous ability of Node.js is ideal for web servers, where many users continuously require different tasks to be done with differing speeds of execution. When people visit different API routes, they expect the server to respond as quickly as possible and not be hung up on a previous request.
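A tiny example of that non-blocking behavior (the file name is a placeholder):

```javascript
const fs = require("fs");

// The callback runs later, once the file has actually been read
fs.readFile("data.txt", "utf8", (err, contents) => {
  if (err) return console.error(err);
  console.log("file contents:", contents);
});

// This line prints first, because readFile does not block the event loop
console.log("requested the file, moving on...");
```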

I have used Node.js before but now that we have begun to use it in class, I wanted to learn more about its benefits in server-side development. For me, the most important takeaway is the need to take advantage of Node.js’s nonblocking ability when developing a program. Doing so will improve the speed of the application and increase usability.


Source: https://www.educative.io/blog/what-is-nodejs, http://www.modulecounts.com

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

CS-343 Post #4

I wanted to read more about REST APIs after working on them with the past couple activities and having some questions about them due to some mistakes I was making. I was getting a better understanding of the concept as the activities continued on, but there are some things I wanted to clear up based on what I was wrong about or had trouble understanding at first.

From the activities, I know that APIs are used by applications and are built with code. A REST API is an API that follows REST standards and constraints, which work closely with HTTP methods. When working on the activity, I kept confusing some of the methods and mistaking the API methods for the source paths. I was also a little confused at first about the request body after activity 12, but after activity 13 I had a better understanding: it is basically the section of the request where you enter the information for the method. For example, you can write the name and ID of an item when creating it, or you can enter a new name for an existing item given its ID.
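For instance, a create-item request of the kind described above could be sent like this (the endpoint and fields are made up to mirror the activity, not taken from it):

```javascript
// Hypothetical POST request whose body carries the data for a new item
fetch("http://localhost:3000/items", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ id: 42, name: "wrench" }), // the request body
})
  .then((res) => res.json())
  .then((created) => console.log("created:", created));
```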

I wanted to look more into what REST is about, and I found a good article by Red Hat called “What is a REST API?” that goes over REST APIs and has a section just about REST. REST is described as a set of constraints, not protocols or standards, and the article discusses how HTTP methods are used with it. It also covers the multiple formats that HTTP can use with REST APIs, such as JSON, HTML, and PHP. The headers are considered very important because they contain identifying data about the requests made, such as the URL and metadata.

Some of the criteria for an API to be considered “RESTful” are a client-server architecture, stateless communication, cacheable data, a uniform interface for components, a layered system for organization, and optional code on demand. The uniform interface is also said to require that requested resources be identifiable and manipulable by the client, that messages be self-descriptive, and that hypermedia be available. While there is a sizable set of criteria for an API to meet to be RESTful, following them pays off because the API can end up faster and easier to manage.

I have a better understanding of what makes an API qualified to be RESTful, and I see where I was making my mistakes in the activities.

https://www.redhat.com/en/topics/api/what-is-a-rest-api

From the blog Jeffery Neal's Blog by jneal44 and used with permission of the author. All other rights reserved by the author.

Code Smells

A code smell is a surface-level indicator that in most cases corresponds to a much bigger and deeper problem in the system. The term was first coined by Kent Beck, and it became popular after it appeared in Martin Fowler’s book on refactoring. Code smells are quite subjective and differ based on language, developers, methodology, and other factors.

What are some frequently seen smells?

Bloaters: Code, classes, and methods that have grown very large over time through the accumulation of functionality and feature creep.

For example: long methods, god classes, and long parameter lists (a small sketch follows these definitions).

Dispensables: This smell refers to code otherwise known as dead code, which can never be called or executed. These are unnecessary blocks of code that offer no benefit but increase technical debt.

Duplicates: Duplicated code, which calls for refactoring of the objects involved and often comes from premature generalization.

Couplers: This smell appears when code that should be independent ends up bound together due to a lack of access control or excessive delegation.

For example: code that reaches across class boundaries, such as using another class’s private and internal members.
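Here is the small sketch promised above, showing one way a bloater (a long parameter list) is commonly refactored; the function and fields are invented for illustration:

```javascript
// Smell: a long parameter list, a classic bloater
function createUser(firstName, lastName, street, city, state, zip, email, phone) {
  // ...
}

// After refactoring: related values are grouped into objects
function createUserRefactored(name, address, contact) {
  // ...
}

createUserRefactored(
  { first: "Ada", last: "Lovelace" },
  { street: "1 Main St", city: "Worcester", state: "MA", zip: "01602" },
  { email: "ada@example.com", phone: "555-0100" }
);
```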

Developers in most cases are trained to spot the logical errors that have been accidentally introduced into their code. Errors of this type range from forgotten edge cases that are never handled to logic errors that can crash entire systems. Code smells, on the other hand, are signals that our code needs to be refactored in order to improve its design, maintainability, and readability.

The presence of code smells is a very serious topic, despite the names being perhaps a bit ridiculous. Anyone with even a little experience in software development knows that code smells can seriously slow down software releases.

By using code smell detection tools, and by putting code through short, controlled refactoring sessions, we can get past the initial impact of code smells. In this way we can discover the deeper problems that lie within the software. Code smells can in many cases be vital for figuring out when to refactor and which refactoring techniques to use.

Finding code smells is an essential part of how software is developed. Another part is digging into their root causes and fixing them through what is called refactoring.

The most common categories of code smells are:

  • Bloaters
  • Object-Orientation Abusers
  • Change Preventers
  • Dispensables
  • Couplers

The individual smells that are found most often are:

  • Long Method
  • Duplicate Code
  • Inheritance Method
  • Data Clumps
  • Middle Man
  • Primitive Types
  • Divergent Change
  • Shotgun Surgery
  • Feature Envy
  • Primitive Obsession
  • Lazy Class
  • Type Embedded in Name
  • Uncommunicative Name
  • Dead Code

References:

https://levelup.gitconnected.com/10-programming-code-smells-that-affect-your-codebase-e66104e0341d

https://www.infoq.com/news/2016/09/refactoring-code-smells/

https://8thlight.com/blog/georgina-mcfadyen/2017/01/19/common-code-smells.html

From the blog CS@worcester – Xhulja's Blogs by xmurati and used with permission of the author. All other rights reserved by the author.

Architecture

https://martinfowler.com/architecture/

This article on software architecture from Martin Fowler asks the questions “What is architecture?” and “Why does architecture matter?”. The answers are developed throughout the article, which begins with why he does not always like using the term “architecture,” though he accepts it because he believes a good architecture supports the evolution of the software. Fowler believes that architecture should be defined as “Architecture is about the important stuff. Whatever that is.” He explains that the exact definition of architecture has long been debated among computer scientists, and that the definition he gives in this article came out of a conversation between himself and another professional, Ralph Johnson.

Although Fowler’s definition is his own and very simple, it makes sense, just like many other computer scientists’ definitions of software architecture. Clearly, architecture can be defined in many different ways depending on who you ask, but Fowler also states that architecture is important because bad architecture affects a system in several ways: it becomes harder to modify, it gains more bugs when updated, and updates and new features are released more slowly, which impacts the customer the system was designed for.

Fowler also discusses two different types of architecture: application architecture and enterprise architecture. Application architecture is based on an “application,” which in Fowler’s words is a body of code perceived as a single unit by developers. Enterprise architecture deals with larger amounts of code that work independently from one another in different systems but are all used by one company, or “enterprise.”

I chose this article because it takes a more thoughtful approach to the concept of software architecture and helps the reader open their understanding of the subject to different “definitions” of the concept, which can help when developing software architecture in the future. I feel as though this article would be helpful to anyone in the programming field, as it gives multiple perceptions of architecture, what it is, and why it is important.

This article taught me the general idea of both application and enterprise architecture. I was able to see what Fowler and Johnson viewed architecture as and why they viewed it that way when other computer scientists may define or view architecture in a different light. I learned that architecture itself is very complicated and cannot be defined under one singular definition as it fits many different definitions for many different people in computer science.

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.

REST API

Hello all who are reading this. In this post I’ll be discussing a blog that I found at the link just below this paragraph. This particular blog post, written by Douglas Minnaar, gives a great overview of REST APIs and the knowledge necessary to build them in a structured manner. The blog begins by defining the REST acronym, who coined the term, and why one might use the REST style.

https://dev.to/drminnaar/rest-api-guide-14n2

The first part, covering the general fundamentals, provided a clear and concise picture of how one should structure a REST web service. It specifically covers the six architectural constraints of REST. One of the more important constraints, I thought, was the concept of a uniform interface and the “principle of generality,” which describes avoiding a complex interface when simplicity would be far more advantageous across multiple clients.

Another constraint covered was the layered system. In the blog it is described as more of a restraint on the client: the client should be decoupled in a way that lets the server work without the client assuming what the server is going to do. This allows the server to pass a request through stages such as security and other tiers without the client checking back or communicating with each one, so as not to disrupt the process or possibly break security.

Part two shows an HTTP API to go further into the constraints described in part one and how to define the contracts for resources. I felt this part went by a little too fast without describing in more detail what a contract is or what your resources should be, but it did make some great points on naming conventions and how best to organize them, techniques I’ll want to apply later should I work further with a REST API. The other sections of part two include status codes, which I had already learned about in class, and content negotiation. Content negotiation was covered very briefly and was essentially described as accepting primarily JSON or XML and returning a 406 status code otherwise.
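A rough sketch of that content-negotiation rule in an Express handler; the route and data are arbitrary, and this code is not from the blog itself:

```javascript
const express = require("express");
const app = express();

app.get("/movies", (req, res) => {
  // Offer only JSON or XML; anything else gets 406 Not Acceptable
  res.format({
    "application/json": () => res.json([{ title: "Example Movie" }]),
    "application/xml": () =>
      res.type("application/xml").send("<movies><movie>Example Movie</movie></movies>"),
    default: () => res.status(406).send("Not Acceptable"),
  });
});

app.listen(3000);
```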

The third part gives an example project based on his guide and the Richardson Maturity Model, which part one describes as a leveled scale of how “mature,” or as I read it, how well designed, a model is, based on how many URIs a service has and whether it implements multiple HTTP methods and status codes. The project uses the Onion architecture, which I found interesting and understood almost immediately just because of how an onion is structured. The “Ranker” project is mostly an application of the REST architecture that happens to also be a movie ranker; it lets you manage users, movies, and ratings, but the core of the project is to demonstrate REST, the Richardson Maturity Model, and its methodology.

I felt like this particular blog post gave me some new concepts to think about when working on a REST API, as well as some general formatting practices.

From the blog CS@Worcester – A Boolean Not An Or by Julion DeVincentis and used with permission of the author. All other rights reserved by the author.