CODE SMELLS

In class, we did some activities about design smells: what they are, how they are defined, and what they do. I was interested, so I decided to dig deeper and learn more about the different design smells. As a computer science major, I think knowing about design smells is very important, especially knowing how to avoid them.

In computer programming, design smells, also known as “code smells,” are structures in a design that indicate a violation of fundamental design principles and negatively impact the design’s quality. Code smells are not bugs or errors. Instead, they are violations of the fundamentals of developing software that decrease the quality of the code.

Code smells indicate a deeper problem, but as the name suggests, they are sniffable, or quick to spot. The best smell is something easy to find that leads to an interesting problem, such as a class with data and no behavior. Code smells can also be easily detected with the help of tools.

How do you get rid of code smells?

Code smells can lead to serious defects in a program, failure of the whole system, and other problems.

Once the types of smells are known, the process of code review begins. Two or more developers may use the primary method, the ad-hoc code review process, to try to identify such smells manually. Many smells cannot be found by manual review, however, so automated code review tools are used to identify those.

Code smells are introduced into source code knowingly or unknowingly, and may even appear while you are fixing other smells. Developers consciously ignore most smells because they seem to have only a marginal effect or are just too hard to explain.

When developers find smelly code, the next step is refactoring. Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the software yet improves its internal structure. It may be the single most important technical factor in achieving agility. The goal is to stay within reasonable operating limits with limited continual damage; by staying within these limits, you keep costs low, because costs relate nonlinearly to the amount of repair necessary. In refactoring, the code is divided into smaller sections according to the identified smells.

A decision is then made to either remove them or replace them with better code that may increase code quality and enhance nonfunctional qualities such as simplicity, flexibility, understandability, and performance. After refactoring, run the tests to ensure everything still works correctly. Sometimes this process has to be repeated until the smell is gone.
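As a small sketch of what this looks like in practice, here is a duplicated-code smell and one possible refactoring of it (the function names and discount rule are my own, invented for illustration):

```python
# Smelly version: the discount logic is duplicated in two places.
def price_for_member(base):
    return base - base * 0.10  # 10% discount

def price_for_coupon(base):
    return base - base * 0.10  # same logic, copy-pasted

# Refactored version: the duplication is extracted into one helper,
# so a change to the discount rule now happens in exactly one place.
def apply_discount(base, rate=0.10):
    return base - base * rate

def price_for_member_v2(base):
    return apply_discount(base)

def price_for_coupon_v2(base):
    return apply_discount(base)
```

After a refactoring like this, the tests that covered the old functions should still pass, since the external behavior is unchanged.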

I chose this topic because, as a computer science major with a concentration in software development, knowing about code smells is very important to me. When writing code, we are supposed to be aware of these smells, so that when they appear we know what to do to get rid of them.

What is a Code Smell and How to Get Rid of It? – QATestLab

What are Code Smells? | How to detect and remove code smells? (codegrip.tech)

From the blog CS@Worcester – Software Intellect by rkitenge91 and used with permission of the author. All other rights reserved by the author.

The Four Principles of Object-Oriented Design

A programming language supports object-oriented design when it contains features that allow programmers to apply four principles: abstraction, encapsulation, polymorphism, and inheritance. When my class was reviewing the principles, I realized we were struggling to define them and provide real-world examples. Because an understanding of the principles is fundamental to applying them, the goal of this post is to review them and provide real-world examples. To do so, I will be reflecting on and adding to ParTech’s article, “Basic Principles of Object-Oriented Programming,” which defines and describes the principles.

Abstraction is the process of packing details of a real-world phenomenon into a simplified representation. ParTech describes the process of making coffee with a machine as an example of abstraction: “You use a button defined interface to make coffee, without needing to worry about the internal working of a machine.” In other words, a coffee machine’s button hides (or abstracts) the science the machine carries out to make coffee. A button on a coffee machine is an effective example because we only need to know what button to press to make coffee rather than the machine’s internal systems, such as its circuitry and fluid transfer apparatus.

Encapsulation is the process of packing data and the operations that manipulate that data into objects. ParTech describes a medicine pill as an example of encapsulation: “All the medicine(objects) are stored inside the pill (class) and you can consume it whenever needed.” In other words, a pill and its contents are analogous to a class and its data members, respectively. While ParTech is not wrong to use a pill as an example of encapsulation, I believe they explained it as a comparison rather than an example. A pill is an effective example because it has qualities (data) and operations humans can perform on it. Its qualities include a name, expiration date, and composition. Its operations include purchasing, disposing, and consuming.
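To turn that pill example into code, here is a minimal sketch (the class and member names are my own, not from the article) that packs the pill's data and operations into one class:

```python
class Pill:
    """Encapsulation: the pill's data and the operations on it live together."""

    def __init__(self, name, expiration_date):
        self.name = name                      # quality (data)
        self.expiration_date = expiration_date
        self.consumed = False

    def consume(self):
        # An operation that manipulates the object's own data.
        self.consumed = True
```

A caller interacts only with the class's methods, not with how the data is stored internally.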

Polymorphism is the state of having many forms. ParTech uses method overloading and overriding as examples of polymorphism. While method overloading and overriding are effective examples of polymorphism, they are not real-world examples. A real-world example of polymorphism is a person because a person can be a student, husband, and chef at once.

Inheritance is the practice of giving the data members and methods of one class to another class. ParTech does not provide an example of inheritance. To understand inheritance, one needs to recognize that many objects are of the same classification. For example, a student, husband, and chef are people. The student, husband, and chef have the characteristics of people plus other distinct characteristics. If we were to implement my example in code, inheritance is the feature that would allow us to give the person characteristics to the student, husband, and chef without rewriting the code for them.
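The person example above can be sketched in code like this (the class and attribute names are illustrative): each subclass gets the Person characteristics without rewriting them, and overriding shows one face of polymorphism as well:

```python
class Person:
    def __init__(self, name):
        self.name = name

    def introduce(self):
        return f"I am {self.name}"

# Subclasses inherit Person's data and behavior for free,
# and only add (or override) what makes them distinct.
class Student(Person):
    def __init__(self, name, school):
        super().__init__(name)
        self.school = school

class Chef(Person):
    def introduce(self):  # overriding: polymorphism in action
        return f"I am {self.name} and I cook"
```

A Student still introduces itself using the inherited Person behavior, while a Chef substitutes its own.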

With a stronger recognition of how the object-oriented principles apply in real life, I can better translate them into software during my course and career.


Article: https://www.partech.nl/en/publications/2020/10/basic-principles-of-object-oriented-programming#

From the blog CS@WORCESTER – Andy Truong's Blog by atruong1 and used with permission of the author. All other rights reserved by the author.

Blog post 3 – REST API

After spending some time in class working with REST APIs, I found myself still having questions about what they are and how they work. I decided to do some further reading on my own, and I thought I would make a blog post explaining my findings. I read several articles, but one in particular, by IBM, describes REST APIs the best in my opinion.

Before we delve into REST itself, we first have to understand what an API is. API stands for application programming interface; it is a set of rules that define how applications or devices can communicate with each other. It’s a mechanism that allows applications to access resources within other applications. The application that is utilized to access the other application is called the client, while the application that contains the resources being accessed is called the server.

A REST API is an API that follows REST principles. Unlike other APIs, which have fairly strict frameworks, REST is quite flexible. The only requirement is that the REST design principles, or architectural constraints, are followed. These are:

  1. Uniform interface – All API requests for the same resources should be the same regardless of where the request came from.
  2. Client-server decoupling – The client and the server need to be completely independent of each other. The client should only know the URI (Uniform Resource Identifier) of a resource, and the server should only pass the requested data to the client via HTTP.
  3. Statelessness – All requests need to include all the information necessary to process them.
  4. Cacheability – Resources should be cacheable on both the client and the server. The server should also know whether or not caching is allowed for a delivered resource.
  5. Layered system architecture – The calls and responses go through different intermediary layers.
  6. Code on demand – Usually, REST APIs send static resources, but in some cases they can also contain executable code. In such cases, the code should only run on demand.

I didn’t know that REST APIs had design principles, so this was new information to me. However, so far I have only discussed what a REST API is; we still need to understand how it works. REST APIs communicate using HTTP (Hypertext Transfer Protocol) requests to perform the basic database functions of creating, reading, updating, and deleting data inside a resource. For example, a GET request retrieves data, a DELETE request removes data, a PUT request updates data, and so on. Any HTTP method can be used in an API call. Another thing to note is that the resource representation can be delivered to the client in virtually any form, including JSON, HTML, Python, and even plain text files. Finally, it’s important to note the request headers, response headers, and parameters in calls. They’re important because they contain vital identifying information such as URIs, cookies, caching details, etc.
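That method-to-operation mapping can be sketched with a toy in-memory "resource" (this is my own illustration of the idea, not code from IBM's article):

```python
# A toy in-memory resource showing how HTTP methods map onto CRUD.
store = {}

def handle(method, resource_id, data=None):
    if method == "PUT":        # create or update the resource
        store[resource_id] = data
        return data
    if method == "GET":        # read the resource
        return store.get(resource_id)
    if method == "DELETE":     # remove the resource
        return store.pop(resource_id, None)
    return None                # other methods omitted in this sketch
```

A real REST server does the same dispatch, only over HTTP and against a real data store.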

https://www.ibm.com/cloud/learn/rest-apis

From the blog CS@Worcester – Fadi Akram by Fadi Akram and used with permission of the author. All other rights reserved by the author.

REST APIs #2

A REST API (also known as RESTful API) is an application programming interface (API or web API) that conforms to the constraints of REST architectural style and allows for interaction with RESTful web services. REST is an acronym for Representational State Transfer and an architectural style for distributed hypermedia systems. It was created by computer scientist Roy Fielding.

An API is a set of definitions and protocols for building and integrating application software. It’s sometimes referred to as a contract between an information provider and an information user—establishing the content required from the consumer (the call) and the content required by the producer (the response).

REST has a set of rules that developers follow when they create their API. One of these rules states that you should be able to get a piece of data (called a resource) when you link to a specific URL. Each call to a URL is a request, while the data sent back to you is called a response.

In order for an API to be considered RESTful, it has to conform to these criteria:

  • A client-server architecture made up of clients, servers, and resources, with requests managed through HTTP.
  • Stateless client-server communication, meaning no client information is stored between get requests and each request is separate and unconnected.
  • Cacheable data that streamlines client-server interactions.
  • A uniform interface between components so that information is transferred in a standard form. This requires that:
    • resources requested are identifiable and separate from the representations sent to the client.
    • resources can be manipulated by the client via the representation they receive because the representation contains enough information to do so.
    • self-descriptive messages returned to the client have enough information to describe how the client should process it.
    • hypertext/hypermedia is available, meaning that after accessing a resource the client should be able to use hyperlinks to find all other currently available actions they can take.
  • A layered system that organizes each type of server (those responsible for security, load-balancing, etc.) involved in the retrieval of the requested information into hierarchies, invisible to the client.
  • Code-on-demand (optional): the ability to send executable code from the server to the client when requested, extending client functionality. 
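The uniform-interface constraints above, especially self-descriptive messages and hypermedia, can be pictured with a small JSON-style representation (the resource, fields, and links here are made up for illustration): the response carries both the resource's state and hyperlinks to the next available actions.

```python
import json

# A hypothetical order resource: its representation includes hyperlinks
# ("hypermedia") telling the client what it can do next.
order = {
    "id": 42,
    "status": "pending",
    "links": [
        {"rel": "self",   "href": "/orders/42"},
        {"rel": "cancel", "href": "/orders/42/cancel"},
    ],
}
body = json.dumps(order)  # the self-descriptive message sent to the client
```

The client never needs to hard-code the cancel URL; it discovers it from the representation itself.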

In conclusion, here are three good reasons to use REST APIs:

  • Scalability. This architectural style stands out due to its scalability. Thanks to the separation between client and server, a development team can scale a product without much difficulty.
  • Flexibility and portability. As long as the data for each request is sent properly, it is possible to migrate from one server to another or make changes to the database at any time. Front end and back end can therefore be hosted on different servers, which is a significant management advantage.
  • Independence. With the separation between client and server, REST makes it easy for development across a project to take place independently. In addition, a REST API can adapt to the working syntax and platform at hand, which offers the opportunity to use multiple environments while developing.

Resources:

https://www.smashingmagazine.com/2018/01/understanding-using-rest-api/

https://restfulapi.net/

From the blog CS@Worcester – Delice's blog by Delice Ndaie and used with permission of the author. All other rights reserved by the author.

BASH Scripts

The command line can be an incredibly useful tool, allowing for quick navigation of directories, launching of apps and executables, and a plethora of other tasks. However, for all its use cases, it can be difficult to keep track of all the different commands, let alone repeat them often. Having to repeat a series of command line commands can be time consuming and tedious. Luckily, there is a tool in our tool belt that allows anyone to automate this process: the BASH script.

BASH stands for Bourne Again SHell, and is in essence a command line interpreter that reads user commands and carries out different actions. We can use this to our advantage by creating a script file (commonly with a .sh extension) and entering the commands that we want to run within that file. It really is that simple. Once we have saved the file with all the commands we want to run, we can go back into the command line and run it. This will execute each command in the script one by one until it runs through them all, at which point the command line will be ready to accept another command.
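For instance, a minimal script might look like this (the file name hello.sh and the greeting text are just examples), and could be run with `bash hello.sh`:

```shell
#!/bin/bash
# Each line below runs exactly as if it were typed at the prompt.
greeting="Hello from a script"
echo "$greeting"
date > /dev/null   # any ordinary command works here too
```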

But what if you need to loop through certain commands? Bash scripting allows loops to be written directly within the file, and supports a myriad of loop types, such as for-loops, while-loops, and until-loops. You can also control the exit conditions of these loops with pre-set ranges, breakpoints, or good old enumeration. If-statements are also supported, and work in a very similar way to those in traditional programming languages. In reality, bash scripting is its own kind of programming language, one that focuses on executing command line commands. Many things that you can do in most high- and low-level languages you can do in a bash script. You can even write individual functions in a bash script and have them execute only if specific conditions are met, just like in a normal programming language like Java or Python.
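Here is a sketch putting a loop, an if-statement, and a function together in one script (the numbers and names are arbitrary examples):

```shell
#!/bin/bash
# A function whose behavior depends on a condition.
describe() {
  if [ "$1" -gt 10 ]; then
    echo "$1 is big"
  else
    echo "$1 is small"
  fi
}

# Loop over a pre-set list of values and build up a total.
total=0
for n in 3 7 12; do
  total=$((total + n))
  describe "$n"
done
echo "total=$total"
```

Running the script prints a line per value and then the total, exactly as if each command had been typed by hand.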

Given all this, one can easily see how versatile a bash script can be. From simple clusters of commands to complex functions with loops and conditional statements, bash scripting gives anyone the tools they need to get the job done. Being able to automate different command line tasks saves time, and being able to do so in a complex manner opens the door to intricate automation scripts that can, in some cases, remove the need for the user to interact with the command line at all.

https://ryanstutorials.net/bash-scripting-tutorial/bash-script.php

https://ryanstutorials.net/bash-scripting-tutorial/bash-loops.php

https://ryanstutorials.net/bash-scripting-tutorial/bash-if-statements.php

https://ryanstutorials.net/bash-scripting-tutorial/bash-functions.php

From the blog CS@Worcester – Sebastian's CS Blog by sserafin1 and used with permission of the author. All other rights reserved by the author.

Docker Compose

For this week’s blog post, I am reviewing a post by Gabriel Tanner. Mr. Tanner is a software engineer at Dynatrace, and in this post he talks about the characteristics of Docker Compose and why we should use it. He starts the post with why we have and use Docker:

“With applications getting larger and larger as time goes by it gets harder to manage them in a simple and reliable way.”

The first feature of Docker Compose that Mr. Tanner talks about is its portability and how it can construct and tear down a development environment using the docker-compose up and docker-compose down commands respectively. The post also goes over common uses of Docker Compose and common reasons why people use it. Some examples of common uses are its ability to run several containers on a single host and to run your program in an isolated environment. One common reason people use Docker Compose is that they might want to run their program in an environment similar to the one used in production. The post also goes on to talk about volumes, the different types of volumes and their syntax, networking so that our containers can communicate with one another, and many other topics.
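For reference, a minimal docker-compose.yml of the kind the post describes might look like this (the service names, images, ports, and volume are illustrative, not taken from Mr. Tanner's post):

```yaml
version: "3.8"
services:
  web:
    image: nginx:alpine              # any image can go here
    ports:
      - "8080:80"                    # host:container port mapping
  db:
    image: postgres:14
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume for persistence
volumes:
  db-data:
```

With this one file, `docker-compose up` starts both containers on a single host, and `docker-compose down` tears the whole environment back down.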

The entire blog post is basically one big tutorial on Docker Compose. It defines the features of Docker Compose, gives examples of its uses, and explains why we should use it. I think this post is worth reviewing for this class because it could be a really good resource to have. The post is a little long, but it is very thorough, and I think it would be a good way to review Docker Compose before a midterm or final. In addition, the post covers a lot of information that we have not gone over in class, so it also provides a way for us to investigate and learn more deeply about the topic. The first half of the post covers material that we have already covered in class, but the second half covers a lot of features of Docker that we have not yet covered, such as using multiple Docker Compose files by passing an optional override file. This is a feature that I can see myself using in the future and one that I wish I had learned sooner. A couple of semesters ago, I was doing a project in MATLAB and Java and was running several large programs on one computer. This made the project very time-consuming and difficult because it took a long time to run all of the programs and to generate and collect all of the results. Had I known what I know now about Docker, I would have done a lot of things differently.

https://gabrieltanner.org/blog/docker-compose

From the blog CS@Worcester – Just a Guy Passing By by Eric Nguyen and used with permission of the author. All other rights reserved by the author.

Anestiblog #2

This week I read a blog post about the top 10 most popular software architecture patterns, and I thought it really related to this class. The blog starts off explaining what software architecture patterns are and why they should be a focus. The blogger describes the patterns as an outcome of the design principles architects use and the decisions they make, and argues they deserve focus because they enable a software system to deliver its business, operational, and technical objectives. The blog then gives some tips on how to know if your patterns are good, lists the 10 most common patterns, and ends with how to evaluate which pattern is best for your project.

I selected this blog post because, after last week, I wanted to go deeper into the world of software architecture, and I thought the different patterns were a good way to go. I think this blog was a great read that I recommend for many reasons. A huge reason is that it gives good background information before getting into the list, so that you understand what a software architecture pattern is. Before I read the blog, I had no idea what one was, and that made me want to learn more; this blog helped with that. Another reason I would recommend this blog is that it covers the top 10 most common patterns. That is important because, for a career in this field, you have to know the most common patterns, since those are the ones that will be used the most in future jobs. Anyone thinking about a career in this field should definitely know these 10 patterns. The last reason I will mention is that, at the end, the blogger explains that software architects who can pick the right pattern for an application are the most sought after, which is a big reason to learn more about patterns: you can really use them to your advantage.
I learned what software architecture patterns are, and that software architecture models make it easy to reuse patterns in other projects, since you now know the decisions and trade-offs behind them. I also learned that the interpreter pattern and the layered architecture pattern are two of the most widely used patterns; I think I have done something similar to them before. This material affected me hugely because it showed how important the patterns are to software development, and that I will need them if I ever want to become a developer in the near future, so it is really great to know this stuff. I will take all of this knowledge with me as I continue in my career. Everyone who wants a career in software development should read this blog as well.

link : https://nix-united.com/blog/10-common-software-architectural-patterns-part-1/

From the blog CS@Worcester – Anesti Blog's by Anesti Lara and used with permission of the author. All other rights reserved by the author.

YAML (You Always Make Logs)

While working with YAML files is a relatively new concept for me, I must say that the structure is somewhat similar to another style of coding I am more familiar with: CSS. The two are not exactly the same, however the “key: value” format makes coding with YAML files that much easier. As noted later on, the importance of indentation in YAML reminds me of Python, a familiar programming language.

As part of the title, I have decided to make my own acronym for YAML: (Y)ou (A)lways (M)ake (L)ogs. This is because I view YAML files almost like logs for a certain state of a project; each one records the different elements that make up the project at a given version. In fact, YAML files bring together many of the different materials that have been seen in this course before: ports, images, and more all appear in a YAML file. In addition, these files are often used with preview software (such as Swagger) to create visualizations of APIs.

In the link below, one can find a video that explains the structure of YAML files. This information is alongside applications of the files, and even a tutorial for setting up a YAML extension in Visual Studio Code. I have chosen to watch this video on YAML files for two reasons: first, I need more practice with YAML, and in my opinion, increasing my exposure to it is the best way to gain more experience. Second, videos are a preferable source of educational media in my opinion; having visual examples helps to get the idea across better than simple discussions.
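To make the structure concrete, here is a minimal sketch of such a file (the service name, image, and port values are illustrative assumptions):

```yaml
version: "3.8"        # the file traditionally opens with a version number
services:
  web:
    image: nginx      # a docker image, run by the docker-compose up command
    ports:
      - "8080:80"     # indentation marks each "key: value" level, as in Python
```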

An example of a very simple YAML file. Notice how it starts with “version: (SemVer Value)”.

YAML files can be simple, like the one shown above, or they can be thousands upon thousands of lines long. Traditionally, they start with a version number, and they depend on proper indentation (similar to Python). A feature in a YAML file can be as significant as a docker image, such as nginx, or as focused as making an element “read-only” within an API endpoint.

Among everything else, it is important to emphasize and take away this lesson from YAML files: these files are a core component of software design, connecting several familiar aspects together. By learning about YAML, I can figure out a way to run images, map ports and even more, all within a single file called by the “docker-compose up” command. Personally, I feel as though YAML will be an invaluable resource in future IT endeavors; while software such as Git focuses on the level of version control, YAML focuses on what exactly goes on in that level.

Link: https://www.youtube.com/watch?v=fwLBfZFrLgI

From the blog CS@Worcester – mpekim.code by Mike Morley (mpekim) and used with permission of the author. All other rights reserved by the author.

UML Diagram

UML stands for Unified Modeling Language. It is a way of visualizing a software program using a collection of diagrams. The main aim of UML is to define a standard way to visualize how a system has been designed; it is not a programming language but rather a visual language. UML was adopted as a standard by the Object Management Group (OMG) in 1997 and is still managed by that group. UML is used with object-oriented programming, and its diagrams are organized into two distinct groups: structural diagrams, which represent the static aspects of the system, and behavioral diagrams, which represent the dynamic aspects of the system.

The class diagram is a kind of structural diagram; it shows how the code is organized in the system, and one can learn the different aspects of the code just by looking at it. A class in a class diagram has three sections: upper, middle, and bottom. The upper section contains the class name, the middle section contains the attributes of the class, and the bottom section contains the class operations (methods). Members of a class have different access levels depending on the access modifier (visibility), shown with symbols: public (+), private (-), protected (#), package (~), and static (underlined). These symbols let one identify the visibility of different attributes at a glance. Classes are connected with each other in different ways, such as association, multiplicity, aggregation, inheritance, composition, bi-directional association, and so on. For example, if one class inherits from another, the arrow connecting the two shows that relationship, which helps the viewer understand the diagram easily.
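Putting those pieces together, a single class box in a diagram might look like this (the class and its members are made up for illustration):

```
 -----------------------------
|           Person            |   <- upper section: class name
 -----------------------------
| - name: String              |   <- middle section: attributes
| # age: int                  |      (- private, # protected)
 -----------------------------
| + getName(): String         |   <- bottom section: operations
| + setAge(age: int): void    |      (+ public)
 -----------------------------
```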

I chose this topic because it helped me a lot in understanding how UML diagrams work and their different properties. Homework 1 was based on the UML diagram, and this article goes over everything you should know to create UML diagrams. It also has links that explain more about the subject matter and make things clear. And because I am interested in software development, UML is one of the main tools for software engineers. The class diagram will be important in the future, too, since it is object-oriented and helps when working on a big project. The article provided examples that made it easy to understand different concepts of UML diagrams, such as the visibility of attributes and the arrows that show how classes are connected to each other, which makes the whole concept of UML diagrams easy to understand.

The article used: https://drawio-app.com/uml-diagrams/

From the blog CS@Worcester – Mausam Mishra's Blog by mousammishra21 and used with permission of the author. All other rights reserved by the author.

API in Software

This week in class, we worked with APIs and did some activities on them. It was my first time trying this, and I was interested in learning more because I had heard about APIs but never dug into them. Now that we have used them in class, I was interested in doing some research and learning why APIs are important and needed.

What is an API?

An API (Application Programming Interface) is a software interface that allows two applications to interact with each other without user intervention. It is a collection of software functions and procedures. In simple terms, an API is code that can be accessed and executed; code that helps two different pieces of software communicate and exchange data with each other.

How does API work?

Now that we know what an API is, let’s see how it works with an example. Say you’re searching for a hotel room on an online travel booking site. Using the site’s online form, you select the city you want to stay in, check-in and check-out dates, number of guests, and number of rooms. Then you click “search.” As you may know, the travel site aggregates information from many different hotels. When you click “search,” the site interacts with each hotel’s API, which delivers results for available rooms that meet your criteria. This can all happen within seconds because of APIs, which act like messengers that run back and forth between applications, databases, and devices.
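The aggregation step can be sketched like this (the hotel data and function names are invented for illustration; a real site would call each hotel's API over HTTP rather than local stubs):

```python
# Stub "APIs" standing in for the HTTP calls a travel site would make.
def hotel_a_api(city):
    return [{"hotel": "A", "city": "Boston", "rooms": 2}]

def hotel_b_api(city):
    return [{"hotel": "B", "city": "Boston", "rooms": 0}]

def search(city, rooms_needed):
    # The booking site queries every hotel API and merges the results
    # that meet the user's criteria.
    results = []
    for api in (hotel_a_api, hotel_b_api):
        for offer in api(city):
            if offer["city"] == city and offer["rooms"] >= rooms_needed:
                results.append(offer)
    return results
```

The site's own code never needs to know how each hotel stores its data; it only speaks to each hotel's API.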

What does API do?

APIs also facilitate calls to a server, but they execute them more simply. They connect the web, allowing developers, applications, and sites to tap into databases and services (or assets), much like open-source software. APIs do this by acting like a universal converter plug, offering a standard set of instructions.

Why are APIs important?

Without APIs in our tool belt, most software couldn’t exist. It’s not just access to the data we need; it’s also the mechanics of many other APIs that we depend on to make software go. For maps, there is the Google Maps API. Amazon has an API that lets you tap into their inventory of products. There is Twilio for sending MMS campaigns, and Yelp for finding places to eat. There is an API for just about anything you can think of: around 20,000 that we know of, as reported by ProgrammableWeb.

I chose to talk about this topic because, as a computer science major, knowing the importance of APIs is vital. We can see that APIs are used in our everyday lives and play a big role in software development. I cannot wait to do more activities in class using APIs, and personal exercises as well, to get more familiar with them.

Intro to APIs: What Are They & What Do They Do? (upwork.com)

What is an API? Full Form, Meaning, Definition, Types & Example (guru99.com)

From the blog CS@Worcester – Gracia's Blog (Computer Science Major) by gkitenge and used with permission of the author. All other rights reserved by the author.